
GEOL 1460 Ramsey

Introduction to Remote Sensing Fall, 2016

Atmospheric interactions; Imaging systems; Image processing & classifications


Week #2: September 7, 2016

I. Quick Review from Last Lecture


questions?
will summarize the main take-away points

II. Electromagnetic (EM) Principles


information interpretation
o what is spectral resolution?
quantized spectrum for each pixel over the number of image bands

multi-spectral vs. hyper-spectral data


energy returned from the surface and detected by the sensor is
quantized over some wavelength region
broken down into some number of discrete instrument bands/channels

bandpass filters
o subdivide the EM spectrum of each pixel into discrete wavelength bands or
channels
each pixel has one value per wavelength band
each band comprises one channel (image) of all the pixels, together forming
a multi-banded image
each band can be displayed in the red, green or blue channel of a
remote sensing software package

o bandwidth: width of the filter (band) at 50% of the peak response


o FWHM: full width at half maximum
measure of the spectral width of each wavelength band
example: sunlight reflected off a green leaf
produces a spectrum which contains info on the amount and type of
chlorophyll pigments
spectrum is continuous (many points)
but a 3-band satellite instrument is only able to detect energy over 3
discrete wavelength regions
produces a 3-point spectrum (multi-spectral instrument)
another instrument may have hundreds of channels in this wavelength
region (hyper-spectral instrument)
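The multi- vs. hyper-spectral sampling described above can be sketched numerically: the same continuous spectrum is averaged over a few wide bandpasses or many narrow ones. The leaf-like spectrum shape and the band centers/widths below are made up purely for illustration.

```python
import numpy as np

# Hypothetical continuous VNIR reflectance spectrum of a green leaf
# (illustrative shape only: a red-edge sigmoid plus a small green peak)
wavelengths = np.linspace(0.4, 1.0, 601)  # micrometers
reflectance = 0.05 + 0.45 / (1 + np.exp(-(wavelengths - 0.72) / 0.02))
reflectance += 0.10 * np.exp(-((wavelengths - 0.55) / 0.03) ** 2)

def band_average(wl, spec, center, width):
    """Average the spectrum over a rectangular bandpass (center +/- width/2)."""
    mask = np.abs(wl - center) <= width / 2
    return spec[mask].mean()

# 3-band multi-spectral sampling (hypothetical band centers/widths, in um)
bands_3 = [(0.56, 0.08), (0.66, 0.06), (0.82, 0.12)]
multi = [band_average(wavelengths, reflectance, c, w) for c, w in bands_3]

# ~100-band hyper-spectral sampling of the same spectrum
centers = np.linspace(0.45, 0.95, 100)
hyper = [band_average(wavelengths, reflectance, c, 0.005) for c in centers]

print(len(multi), len(hyper))  # 3-point vs. 100-point spectrum of the same surface
```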

Figure: visible/near-infrared (VNIR) spectra of common desert vegetation
showing 3 wavelength bands (VNIR 1, VNIR 2, VNIR 3) of a multi-spectral
instrument

III. Intro to Spectroscopy


spectroscopy
o spectroscopy = science and analysis of the EM spectra of materials
o type of spectroscopy is a function of the wavelength region under study
gamma ray spectroscopy, TIR spectroscopy, etc.

o the analysis of the spectrum tells you something about the surface material
talked about this last week
much more detail on this later in the course

example: quartz thermal infrared (TIR) emissivity spectrum. Emissivity
lows (absorption bands) indicate regions of fundamental vibrations of the
bonds between the Si-O atoms

IV. Atmosphere
atmospheric window: regions that are not blocked by the Earth's atmospheric
gases and dust/particulates
o have high atmospheric transmission and low absorption
o H2O, CO2 and O3 are the main gas species that absorb photons in the VIS -
TIR
o even within the atmospheric windows, the energy is interacting with gases
and particulates

image processing: move from DN to radiance to calibrated radiance to physical
properties of the material (reflection, emissivity, temperature, etc.)
o DN to radiance at sensor
generally, a linear function: gain and offsets applied

o radiance at sensor to radiance at surface


removal of atmospheric terms

energy at sensor: path radiance + ground radiance


o atmospheric "correction" algorithms in remote sensing are designed to
remove or lessen the contribution of the path radiance to get at the absolute
ground radiance
path-length: distance traveled through the atmosphere by a photon
o function of the location of the energy source, location of the sensor and the
wavelength
o example: reflected solar energy travels through the atmosphere twice before
detection, but emitted thermal wavelengths only once

transmissivity: measure of the fraction of radiation that passes through the
atmosphere unattenuated (varies between 0 and 1)

path radiance: any energy contributed by interactions with the atmosphere prior
to detection
o energy at sensor = path radiance + ground radiance

scattering of surface radiance from particles in the atmosphere


o 3 types:
o selective scattering (Rayleigh scattering)
caused by particles much smaller than the scattered wavelengths
atmospheric gases (N2, O2, O3)
affects shorter VIS wavelengths more (UV - VIS blue)
that is why the sky is blue on Earth
none of these gases is present in significant quantities on Mars, for example
the Martian atmosphere scatters the longer red wavelengths due mostly to
dust

o selective scattering (Mie scattering)
caused by particles about equal in size to the wavelength
example: dust, smoke, aerosols
longer VIS wavelengths are affected (reddish coloration)
pollution or volcanic eruptions cause very red sunsets

o non-selective scattering
caused by particles much larger than the wavelength
example: water vapor, ice crystals
all wavelengths are affected (white coloration)
clouds, haze, etc.

Figure: example scenes comparing a low amount vs. a much higher amount of
non-selective scattering

V. Imaging Systems: Scanners


dwell time
o dwell time = scan time per line / number of cells per line
o in other words, the amount of time a scanner has to collect photons from a
ground resolution cell

o translates to:

dwell time = (down-track pixel size / orbital velocity) /
             (cross-track line width / cross-track pixel size)


o for the Landsat Thematic Mapper (TM) scanner
dwell time = [ (30 m / 7500 m/s) / (185,000 m / 30 m) ]
dwell time = 6.5 x 10^-7 sec for each pixel

o very short time per pixel -- low signal to noise ratio


o need to find ways to increase the dwell time for better data

cross-track scanner
o rotation or "back and forth" motion of the foreoptics
o scans each ground resolution cell (pixel) one by one

along-track scanner
o multiple cross-track detectors (no scanning motion)
o positives: dwell time increases. Why?
in the dwell time equation, the denominator = 1.0 since the line width is in
effect the cross track width of the pixel
equation reduces to:
dwell time = (down-track pixel size / orbital velocity)
dwell time = 4.0 x 10^-3 sec/pixel (for the above example)

o negatives: large arrays are difficult to fabricate (TM would require 6200
elements), failure of one element produces a loss/miscalibration of an entire
column of data (see below)
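The dwell-time arithmetic above, worked in code for the Landsat TM numbers (30 m pixels, ~7500 m/s orbital velocity, 185 km swath):

```python
def cross_track_dwell(pixel_down_m, velocity_ms, swath_m, pixel_cross_m):
    """dwell = (down-track pixel / velocity) / (swath width / cross-track pixel)"""
    return (pixel_down_m / velocity_ms) / (swath_m / pixel_cross_m)

def along_track_dwell(pixel_down_m, velocity_ms):
    """Push-broom: the whole line is imaged at once, so dwell = pixel / velocity."""
    return pixel_down_m / velocity_ms

ct = cross_track_dwell(30, 7500, 185_000, 30)  # ~6.5e-7 s per pixel
at = along_track_dwell(30, 7500)               # 4.0e-3 s per pixel
```

The roughly four-orders-of-magnitude gain in dwell time is why push-broom designs improve signal-to-noise despite the fabrication difficulties noted above.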

Image A is an example of push-broom line array errors in SWIR band 4 of ASTER;


image B is an example of cross-track scanner array errors in TIR band 10 of ASTER.
Both images are from the 9/19/00 Phoenix, AZ scene

whisk-broom scanner
o combination of a cross-track scanner and a push-broom scanner
o scan with a small line array of detectors
o positives: longer dwell time (several lines per scan motion)
if all detectors are the same wavelength
same dwell time as the cross-track scanner if each detector was tuned to
a different wavelength
o negatives: different response sensitivities in each detector can cause striping
in the image (see above)

multispectral scanners
o thus far, we have looked at scanners with just one spectral band
o how do we add multiple wavelength observations?
o add cross-track scanning with a line array
o different than a whisk-broom
there, the scanning is done with a line array of the same wavelength
here, the scanning is performed with a line array of detectors at different
wavelengths
negatives: short dwell time again, spacecraft movement, planet rotation
causes imprecise alignment

1 |X|
2 |X| scan direction
3 |X|

flight direction

2 solutions:
1. push-broom scanning with a 2-D array

1 |X| |X| |X| |X|


2 |X| |X| |X| |X|
3 |X| |X| |X| |X|

flight direction

2. whisk-broom scanning with a 2-D array (TM scanner)

1 2 3 n
|X| |X| |X| |X|
|X| |X| |X| |X| scan direction
|X| |X| |X| |X|

flight direction
VI. Basics of digital image processing
so far, we have looked at basic image theory
o color, pixels, image formation, etc.

now, want to look at altering the image in some way


o data enhancement: stretching, HSI-transforms, density slice
o data extraction: PC-transforms, band ratios, classifications
o data restoration: errors, noise, geometric distortions, filters

generally, one would follow these in order


o fix up the data, enhance it in some way
o then extract quantitative information

data enhancement (density slice)
o a visualization tool to add color to a gray-scale image
o DN range is divided into groups and each group is assigned a color
o example: cloud-top temperature showing the severity of storms

Hurricane Rita (most severe part of the storm in red)

histogram or contrast stretches


o what is a histogram?
distribution of all the DN values for an image, single band, or subset
thereof
for an image with a large variation of DN values, the corresponding
histogram is generally normally-distributed with a mean (x) at some DN
value
o linear stretch
application of a linear equation
map input DN to an expanded range of output DN
mapping some percentage of the histogram "tails" to 0 and 255
causes a loss of data in those regions, while expanding the majority of
the DN
example: input DN ranging from 40 to 100, linearly stretched so the output
spans the full 0 to 255 range

DN distribution can have a low dynamic range
stretching or separating the data to cover most/all of the available
dynamic range (0-255) is known as a stretch
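A minimal linear-stretch sketch: clip some percentage of the histogram tails and map the remaining input DN range onto the full 0-255 output range (the band data and clip percentages here are illustrative):

```python
import numpy as np

def linear_stretch(dn, low_pct=2, high_pct=98):
    """Percent-clip linear stretch: tails saturate at 0/255, the rest expand."""
    lo, hi = np.percentile(dn, [low_pct, high_pct])
    out = (dn.astype(float) - lo) / (hi - lo) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)

# Synthetic band with low dynamic range (DN only span 40-100)
band = np.random.default_rng(0).integers(40, 101, size=(100, 100))
stretched = linear_stretch(band)  # now fills the full 0-255 display range
```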

o gaussian stretch
fit of the histogram to a gaussian distribution
the "tightness" of the curve is determined by the value of gamma

other stretch types
examples: piecewise linear, square root, histogram normalization
all designed to enhance the dynamic range of the input histogram in a
linear or non-linear way
all stretches are purely for image visualization
because they alter the DN values, they can never be used to extract
quantitative information from the image!
data extraction (Band Ratios)
o very basic methodology to extract information in multispectral images
o division of two or more wavelength bands
highlight subtle spectral and/or temporal variations
typically done after atmospheric correction and conversion to surface units
(reflectance, emissivity, temperature)

o reduce topographic and albedo effects (may be good or bad)
o classic ratios for Landsat TM bands highlight mineral identification
and vegetation health

o Normalized Difference Vegetation Index (NDVI):

NDVI = (TM4 - TM3) / (TM4 + TM3)


for ASTER = (AST3 - AST2) / (AST3 + AST2)
produces values from -1.0 to 1.0 (vegetation typically 0 to 1); higher NDVI
implies healthier vegetation
WHY??
vegetation health (example: drying out in autumn)
lower water and chlorophyll, increased color pigments
results in increase in brightness in the VIS red
decrease in brightness in the NIR
fairly constant in the green
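The NDVI formula above, applied to hypothetical red (TM3) and NIR (TM4) reflectance values; the two pixels mimic the healthy vs. drying behavior just described:

```python
import numpy as np

def ndvi(nir, red, eps=1e-10):
    """NDVI = (NIR - red) / (NIR + red); eps guards against division by zero."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

red = np.array([0.08, 0.25])  # illustrative: healthy vs. drying vegetation
nir = np.array([0.50, 0.30])  # NIR drops, red brightens as the plant dries
values = ndvi(nir, red)       # healthy pixel ~0.72, stressed pixel ~0.09
```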

data extraction (Image Classifications)


o series of algorithms designed to categorize or "clump" data into certain
classes in order to minimize scene variability and extract certain user-defined
parameters
o two primary types of classification:
unsupervised (can be defined automatically)
supervised (defined by the user)

what determines a "good" class or target?
1. covers more than one pixel (mathematically valid)
2. class is well represented in the scene
3. overlap with other classes is minimal
4. able to define enough training pixels per class

approximation: pixels per class = 10 * (n+1)
where n = the number of spectral bands of the instrument or the number
of classes defined
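The rule of thumb above in one line, evaluated for a hypothetical 6-band instrument (e.g., the TM reflective bands):

```python
def min_training_pixels(n_bands):
    """Rule of thumb: at least 10 * (n + 1) training pixels per class."""
    return 10 * (n_bands + 1)

print(min_training_pixels(6))  # 6-band instrument -> 70 pixels per class
```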

unsupervised classifications
limitation - all unsupervised classifications may produce non-intuitive
classes
user must still interpret the results
k-means algorithm
algorithm locates a number of data clusters and their centers
computes statistically significant number of classes

ISODATA algorithm
user seeds the algorithm with some number of data clusters
k-means is then performed

ASTER 3,2,1 in R,G,B unsupervised classification (n=7)

supervised classifications
minimum distance (to means)
simplest method
determines the distance to the mean value (in n dimensional
space) of each class and assigns unknown pixels to the class with
a mean closest to that pixel
limitation - ignores the shape (variance) of the data cloud
can cause errors if an unknown pixel lies near/within one class, but
is closer to the mean of another class
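A minimum-distance-to-means sketch: each unknown pixel is assigned to the class whose mean, in n-dimensional band space, is closest. The class means and pixel values below are made up for illustration:

```python
import numpy as np

def classify_min_distance(pixels, class_means):
    """pixels: (m, n_bands); class_means: (k, n_bands) -> class index per pixel."""
    # Euclidean distance from every pixel to every class mean
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return d.argmin(axis=1)

means = np.array([[30.0, 80.0], [120.0, 40.0]])  # two classes, two bands
pix = np.array([[35.0, 75.0], [110.0, 50.0], [70.0, 60.0]])
labels = classify_min_distance(pix, means)  # -> array([0, 1, 0])
```

Note that only the class means enter the decision; as stated above, the spread (variance) of each class cloud is ignored, which is exactly the limitation that maximum likelihood addresses.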

maximum likelihood
more complex method
creates an n-dimensional parallelepiped or ellipsoid around each
class
statistically determines whether an unknown pixel falls within the
ellipsoid
generally, the most accurate method of classification
limitations - long computer run time, requires a large number of
pixels to accurately define your classes

accuracy assessment
user validation and check of the classification accuracy is critical
without it, the results of the classification could be wildly incorrect
check can be performed via field work, use of higher spatial
resolution imagery, or other (non-raster) datasets
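One common way to do the check above is a confusion matrix comparing classified pixels against reference labels (e.g., from field work); overall accuracy is the fraction on the diagonal. The labels here are illustrative:

```python
import numpy as np

def confusion_matrix(reference, classified, n_classes):
    """Rows = reference (true) class, columns = classified class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for r, c in zip(reference, classified):
        cm[r, c] += 1
    return cm

ref = np.array([0, 0, 1, 1, 2, 2, 2, 1])  # illustrative reference labels
cls = np.array([0, 1, 1, 1, 2, 2, 0, 1])  # classifier output for same pixels
cm = confusion_matrix(ref, cls, 3)
overall_accuracy = np.trace(cm) / cm.sum()  # fraction correctly classified
print(overall_accuracy)                     # 0.75 for these labels
```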

4 of 5 training regions (classes) max likelihood supervised class.
