
Remote sensing note

Remote sensing depends on observed spectral differences in the energy reflected or emitted from features of
interest. Expressed in everyday terms, one might say that we look for differences in the colors of objects,
even though remote sensing is often conducted outside the visible spectrum, where colors, in the usual
meaning of the word, do not exist. This principle is the basis of multispectral remote sensing, the science of
observing features at varied wavelengths in an effort to derive information about these features and their
distributions. The term spectral signature has been used to refer to the spectral response of a feature, as
observed over a range of wavelengths.
Spatial Differentiation
Every sensor is limited with respect to the size of the smallest area that can be separately recorded as an entity on
an image. This minimum area determines the spatial detail (the fineness of the patterns) on the image. These
minimal areal units, known as pixels (picture elements), are the smallest such units identifiable on the image.
Our ability to record spatial detail is influenced primarily by the choice of sensor and the altitude at which it is
used to record images of the Earth. Note that landscapes vary greatly in their spatial complexity; some may be
represented clearly at coarse levels of detail, whereas others are so complex that the finest level of detail is
required to record their essential characteristics.
Energy Source
Sensors can be divided into two broad groups: passive and active. Passive sensors measure ambient levels of
existing sources of energy, while active ones provide their own source of energy. The majority of remote
sensing is done with passive sensors, for which the sun is the major energy source. The earliest example of this
is photography. With airborne cameras we have long been able to measure and record the reflection of light off
earth features. While aerial photography is still a major form of remote sensing, newer solid state technologies
have extended capabilities for viewing in the visible and near-infrared wavelengths to include longer
wavelength solar radiation as well. However, not all passive sensors use energy from the sun. Thermal
infrared and passive microwave sensors both measure natural earth energy emissions. Thus the passive
sensors are simply those that do not themselves supply the energy being detected.
By contrast, active sensors provide their own source of energy. The most familiar form of this is flash
photography. However, in environmental and mapping applications, the best example is RADAR. RADAR
systems emit energy in the microwave region of the electromagnetic spectrum. The reflection of that energy by
earth surface materials is then measured to produce an image of the area sensed.
Wavelength
As indicated, most remote sensing devices make use of electromagnetic energy. However, the electromagnetic
spectrum is very broad and not all wavelengths are equally effective for remote sensing purposes. Furthermore,
not all have significant interactions with earth surface materials of interest to us. Figure 3-1 illustrates the
electromagnetic spectrum. The atmosphere itself causes significant absorption and/or scattering of the very
shortest wavelengths. In addition, the glass lenses of many sensors also cause significant absorption of shorter
wavelengths such as the ultraviolet (UV). As a result, the first significant window (i.e., a region in which energy
can significantly pass through the atmosphere) opens up in the visible wavelengths. Even here, the blue
wavelengths undergo substantial attenuation by atmospheric scattering, and are thus often left out in remotely
sensed images. However, the green, red and near-infrared (IR) wavelengths all provide good opportunities for
gauging earth surface interactions without significant interference by the atmosphere. In addition, these regions
provide important clues to the nature of many earth surface materials. Chlorophyll, for example, is a very strong
absorber of red visible wavelengths, while the near-infrared wavelengths provide important clues to the
structures of plant leaves. As a result, the bulk of remotely sensed images used in GIS-related applications are
taken in these regions.

Extending into the middle and thermal infrared regions, a variety of good windows can be found. The longer of
the middle infrared wavelengths have proven to be useful in a number of geological applications. The thermal
regions have proven to be very useful for monitoring not only the obvious cases of the spatial distribution of
heat from industrial activity, but also a broad set of applications ranging from fire monitoring to animal distribution
studies to soil moisture conditions.
After the thermal IR, the next area of major significance in environmental remote sensing is in the microwave
region. A number of windows exist in this region, and they are of particular importance for the use of active
radar imaging. The texture of earth surface materials causes significant interactions with several of the
microwave wavelength regions. This can thus be used as a supplement to information gained in other
wavelengths, and also offers the significant advantage of being usable at night (because as an active system it is
independent of solar radiation) and in regions of persistent cloud cover (since radar wavelengths are not
significantly affected by clouds).
Spectral Response Patterns
A spectral response pattern is sometimes called a signature. It is a description (often in the form of a graph) of
the degree to which energy is reflected in different regions of the spectrum. In the early days of remote sensing,
it was believed (more correctly, hoped) that each earth surface material would have a distinctive spectral
response pattern that would allow it to be reliably detected by visual or digital means. However, as our common
experience with color would suggest, in reality this is often not the case. For example, two species of trees may
have quite different coloration at one time of the year and quite similar coloration at another.
Finding distinctive spectral response patterns is the key to most procedures for computer-assisted interpretation
of remotely sensed imagery. This task is rarely trivial. Rather, the analyst must find the combination of spectral
bands and the time of year at which distinctive patterns can be found for each of the information classes of
interest.
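To make this concrete, a spectral response pattern can be held digitally as a vector of brightness values, one per
spectral band, and the separability of two classes can be judged from those vectors. The Python sketch below
(the reflectance numbers are invented purely for illustration) shows two cover types that are indistinguishable
in the red band alone but clearly separable when several bands are considered together:

    import numpy as np

    # Hypothetical mean reflectance (0-1) in green, red, and near-IR bands.
    # These numbers are illustrative, not measured signatures.
    deciduous = np.array([0.12, 0.08, 0.45])   # broadleaf canopy
    conifer   = np.array([0.10, 0.08, 0.30])   # darker needleleaf canopy

    # In the red band alone the two classes are indistinguishable ...
    print("red difference:", abs(deciduous[1] - conifer[1]))        # 0.0

    # ... but the full multiband pattern separates them clearly.
    print("multiband distance:", np.linalg.norm(deciduous - conifer))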
Multispectral Remote Sensing
In the visual interpretation of remotely sensed images, a variety of image characteristics are brought into
consideration: color (or tone in the case of panchromatic images), texture, size, shape, pattern, context, and the
like. However, with computer-assisted interpretation, it is most often simply color (i.e., the spectral response
pattern) that is used. It is for this reason that a strong emphasis is placed on the use of multispectral sensors
(sensors that, like the eye, look at more than one place in the spectrum and thus are able to gauge spectral
response patterns), and the number and specific placement of these spectral bands.
Figure 3-5 illustrates the spectral bands of the LANDSAT Thematic Mapper (TM) system. The LANDSAT
satellite is a commercial system providing multispectral imagery in seven spectral bands at a 30 meter
resolution. It can be shown through analytical techniques such as Principal Components Analysis, that in many
environments, the bands that carry the greatest amount of information about the natural environment are the
near-infrared and red wavelength bands. Water strongly absorbs infrared wavelengths and is thus highly
distinctive in that region. In addition, plant species typically show their greatest differentiation here. The red
area is also very important because it is the primary region in which chlorophyll absorbs energy for
photosynthesis. Thus it is this band which can most readily distinguish between vegetated and non-vegetated
surfaces.
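One standard way of exploiting this red/near-infrared contrast, offered here as an illustration rather than a
technique described in the text above, is a band ratio such as the Normalized Difference Vegetation Index
(NDVI). A minimal sketch with invented reflectance values:

    import numpy as np

    # Small invented red and near-IR reflectance rasters (values 0-1).
    red = np.array([[0.08, 0.30],
                    [0.07, 0.28]])
    nir = np.array([[0.50, 0.32],
                    [0.45, 0.30]])

    # NDVI = (NIR - red) / (NIR + red); vegetation pushes values toward +1
    # because chlorophyll absorbs red while leaf structure reflects NIR.
    ndvi = (nir - red) / (nir + red)
    print(ndvi.round(2))   # high values = vegetated, near zero = bare surface

Values near +1 indicate dense, healthy vegetation, while values near zero indicate bare or non-vegetated surfaces.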
Given this importance of the red and near-infrared bands, it is not surprising that sensor systems designed for
earth resource monitoring will invariably include these in any particular multispectral system. Other bands will
depend upon the range of applications envisioned. Many include the green visible band since it can be used,
along with the other two, to produce a traditional false color composite: a full color image derived from the
green, red, and infrared bands (as opposed to the blue, green, and red bands of natural color images). This
format became common with the advent of color infrared photography, and is familiar to many specialists in the
remote sensing field. In addition, the combination of these three bands works well in the interpretation of the
cultural landscape as well as natural and vegetated surfaces. However, it is increasingly common to include
other bands that are more specifically targeted to the differentiation of surface materials. For example,
LANDSAT TM Band 5 is placed between two water absorption bands and has thus proven very useful in
determining soil and leaf moisture differences. Similarly, LANDSAT TM Band 7 targets the detection of
hydrothermal alteration zones in bare rock surfaces.
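The false color composite mentioned above amounts to a simple reassignment of bands to display channels:
near-infrared drives the red gun, red the green, and green the blue. A minimal sketch (the tiny band arrays are
invented for illustration):

    import numpy as np

    # Invented 2x2 reflectance rasters for the green, red, and near-IR bands.
    green = np.array([[0.10, 0.15], [0.12, 0.20]])
    red   = np.array([[0.08, 0.25], [0.07, 0.22]])
    nir   = np.array([[0.50, 0.30], [0.48, 0.28]])

    # Traditional false color assignment: NIR -> R, red -> G, green -> B.
    # Healthy vegetation (high NIR, low red) renders as bright red.
    composite = np.dstack([nir, red, green])

    # Scale reflectance (0-1) to 8-bit display values.
    display = (np.clip(composite, 0, 1) * 255).astype(np.uint8)
    print(display.shape)   # (rows, cols, 3): an RGB image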
Hyperspectral Remote Sensing
In addition to traditional multispectral imagery, some new and experimental systems such as AVIRIS and
MODIS are capable of capturing hyperspectral data. These systems cover a similar wavelength range to
multispectral systems, but in much narrower bands. This dramatically increases the number of bands (and thus
precision) available for image classification (typically tens or even hundreds of very narrow bands). Moreover,
hyperspectral signature libraries have been created in lab conditions and contain hundreds of signatures for
different types of landcovers, including many minerals and other earth materials. Thus, it should be possible to
match signatures to surface materials with great precision. However, environmental conditions and natural
variations in materials (which make them different from standard library materials) make this difficult. In
addition, classification procedures have not been developed for hyperspectral data to the degree they have been
for multispectral imagery. As a consequence, multispectral imagery still represents the major tool of remote
sensing today.
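As an illustration of how such signature matching can be attempted, the sketch below compares a pixel's
spectrum to each library entry using the spectral angle between them, the measure behind the well-known
Spectral Angle Mapper approach (named here as a representative technique, not one the text itself cites); the
angle is relatively insensitive to overall brightness differences. The five-band "library" is invented:

    import numpy as np

    def spectral_angle(pixel, reference):
        """Angle (radians) between two spectra; smaller = better match."""
        cos = np.dot(pixel, reference) / (
            np.linalg.norm(pixel) * np.linalg.norm(reference))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    # Tiny invented library of 5-band signatures (real libraries hold
    # hundreds of signatures over hundreds of narrow bands).
    library = {
        "kaolinite":  np.array([0.55, 0.60, 0.58, 0.40, 0.52]),
        "vegetation": np.array([0.08, 0.10, 0.45, 0.40, 0.30]),
    }
    pixel = np.array([0.50, 0.57, 0.55, 0.37, 0.49])   # observed spectrum

    best = min(library, key=lambda name: spectral_angle(pixel, library[name]))
    print("best match:", best)   # kaolinite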
Satellite-Based Scanning Systems
Photography has proven to be an important input to visual interpretation and the production of analog maps.
However, the development of satellite platforms, the associated need to telemeter imagery in digital form, and
the desire for highly consistent digital imagery have given rise to the development of solid state scanners as a
major format for the capture of remotely sensed data. The specific features of particular systems vary
(including, in some cases, the removal of a true scanning mechanism). However, in the discussion which
follows, an idealized scanning system is presented that is highly representative of current systems in use.
The basic logic of a scanning sensor is the use of a mechanism to sweep a small field of view (known as an
instantaneous field of view, or IFOV) in a west to east direction at the same time the satellite is moving in a north
to south direction.
Together this movement provides the means of composing a complete raster image of the environment. A
simple scanning technique is to use a rotating mirror that can sweep the field of view in a consistent west to east
fashion. The field of view is then intercepted by a prism that spreads the energy contained within the IFOV
into its spectral components. Photoelectric detectors (of the same nature as those found in the exposure meters
of commonly available photographic cameras) are then arranged in the path of this spectrum to provide
electrical measurements of the amount of energy detected in various parts of the electromagnetic spectrum. As
the scan moves from west to east, these detectors are polled to get a set of readings along the east-west scan.
These form the columns along one row of a set of raster images, one for each detector. Movement of the
satellite from north to south then positions the system to detect the next row, ultimately leading to the
production of a set of raster images as a record of reflectance over a range of spectral bands.
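The row-by-row assembly of such a raster set can be sketched as follows; poll_detectors is a hypothetical
stand-in for the hardware polling described above:

    import numpy as np

    N_ROWS, N_COLS, N_BANDS = 4, 6, 3   # tiny illustrative image

    def poll_detectors(row, col):
        """Hypothetical stand-in for reading one IFOV: returns one
        brightness value per spectral detector."""
        rng = np.random.default_rng(row * N_COLS + col)
        return rng.integers(0, 256, size=N_BANDS)

    # One raster per detector (i.e., per spectral band).
    image = np.zeros((N_BANDS, N_ROWS, N_COLS), dtype=np.uint8)

    for row in range(N_ROWS):        # satellite moves north to south
        for col in range(N_COLS):    # mirror sweeps west to east
            image[:, row, col] = poll_detectors(row, col)

    print(image.shape)   # (bands, rows, cols): co-registered rasters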
There are several satellite systems in operation today that collect imagery that is subsequently distributed to
users. Several of the most common systems are described below. Each type of satellite data offers specific
characteristics that make it more or less appropriate for a particular application.
In general, there are two characteristics that may help guide the choice of satellite data: spatial resolution and
spectral resolution. The spatial resolution refers to the size of the area on the ground that is summarized by one
data value in the imagery. This is the Instantaneous Field of View (IFOV) described earlier. Spectral resolution
refers to the number and width of the spectral bands that the satellite sensor detects. In addition, issues of cost
and imagery availability must also be considered.
LANDSAT
The LANDSAT system of remote sensing satellites is currently operated by the EROS Data Center of the
United States Geological Survey. This is a new arrangement following a period of commercial distribution
under the Earth Observation Satellite Company (EOSAT), which was recently acquired by Space Imaging
Corporation. As a result, the cost of imagery has dramatically dropped, to the benefit of all. Full or quarter
scenes are available on a variety of distribution media, as well as photographic products of MSS and TM scenes
in false color and black and white.
There have been seven LANDSAT satellites, the first of which was launched in 1972. The LANDSAT 6
satellite was lost on launch. However, as of this writing, LANDSAT 5 is still operational. LANDSAT 7 was
launched in April, 1999.
LANDSAT carries two multispectral sensors. The first is the Multi-Spectral Scanner (MSS) which acquires
imagery in four spectral bands: green, red, and two near-infrared bands. The second is the Thematic Mapper
(TM) which collects seven bands: blue, green, red, near-infrared, two mid-infrared and one thermal infrared.
The MSS has a spatial resolution of 80 meters, while that of the TM is 30 meters. Both sensors image a 185 km
wide swath, passing over at about 09:45 local time and returning to a given area every 16 days. With
LANDSAT 7, support for
TM imagery is to be continued with the addition of a co-registered 15 m panchromatic band.
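As a rough check on what these figures imply for data volume (assuming a nominal scene length of about
170 km, a figure not stated above):

    # Rough scene geometry from the nominal figures above.
    swath_m = 185_000          # 185 km swath width
    scene_length_m = 170_000   # assumed nominal scene length (~170 km)

    tm_pixel = 30              # TM spatial resolution in meters

    tm_cols = swath_m // tm_pixel
    print(tm_cols)                                  # ~6,166 columns per row
    print(tm_cols * (scene_length_m // tm_pixel))   # ~35 million pixels/band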
Digital Image Processing
Digital Image Processing is largely concerned with four basic operations: image restoration, image
enhancement, image classification, and image transformation. Image restoration is concerned with the correction
and calibration of images in order to achieve as faithful a representation of the earth's surface as possible, a
fundamental consideration for all applications.
Image enhancement is predominantly concerned with the modification of images to optimize their appearance
to the visual system. Visual analysis is a key element, even in digital image processing, and the effects of these
techniques can be dramatic.
Image classification refers to the computer-assisted interpretation of images, an operation that is vital to GIS.
Finally, image transformation refers to the derivation of new imagery as a result of some mathematical
treatment of the raw image bands.
Image Restoration
Remotely sensed images of the environment are typically taken at a great distance from the earth's surface. As a
result, there is a substantial atmospheric path that electromagnetic energy must pass through before it reaches
the sensor. Depending upon the wavelengths involved and atmospheric conditions (such as particulate matter,
moisture content and turbulence), the incoming energy may be substantially modified. The sensor itself may
then modify the character of that data since it may combine a variety of mechanical, optical and electrical
components that serve to modify or mask the measured radiant energy. In addition, during the time the image is
being scanned, the satellite is following a path that is subject to minor variations at the same time that the earth
is moving underneath. The geometry of the image is thus in constant flux. Finally, the signal needs to be
telemetered back to earth, and subsequently received and processed to yield the final data we receive.
Consequently, a variety of systematic and apparently random disturbances can combine to degrade the quality
of the image we finally receive. Image restoration seeks to remove these degradation effects. Broadly, image
restoration can be broken down into the two sub-areas of radiometric restoration and geometric restoration.
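As one concrete illustration of radiometric restoration, the sketch below applies dark object subtraction, a
standard haze correction technique that the text does not specifically name: atmospheric scattering adds an
approximately uniform offset to a band, and subtracting the darkest observed value approximates its removal:

    import numpy as np

    # Invented digital numbers for one band. True dark targets (deep water,
    # shadow) should read near zero, so a floor above zero is treated as an
    # additive haze/path-radiance offset.
    band = np.array([[34, 80, 120],
                     [31, 95, 200],
                     [30, 60, 150]])

    offset = band.min()        # darkest observed value (30)
    restored = band - offset   # simple dark object subtraction
    print(restored)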
Image Enhancement
Image enhancement is concerned with the modification of images to make them more suited to the capabilities
of human vision. Regardless of the extent of digital intervention, visual analysis invariably plays a very strong
role in all aspects of remote sensing. While the range of image enhancement techniques is broad, operations
such as contrast manipulation and digital filtering (an example of which is discussed under Filter, below) form
the backbone of this area.
Image Classification
Image classification may be approached in two broad ways: supervised classification, in which the analyst
trains the procedure with examples of known cover types, and unsupervised classification, in which pixels are
first grouped purely on the basis of their spectral similarity. Both approaches rest on the distinction between
informational classes and spectral classes.
Informational classes are the categories of interest to the ultimate users of the data: for example, the different
kinds of forest, or the different kinds of land use, that convey information to the planners, managers, and
scientists who will use results derived from remotely sensed data. These classes carry the information we wish
to derive from the data; they are the object of our analysis. Unfortunately, remotely sensed images do not
convey informational classes directly; we can derive them only indirectly, using the brightnesses that compose
each image. For example, an image cannot directly show geological units, but only the differences in
topography, vegetation, soil color, shadow, and other factors that lead the analyst to conclude that certain
geological conditions exist in specific areas.
In contrast, spectral classes are groups of pixels that are uniform with respect to brightness in their several
spectral channels. The analyst defines spectral classes within the remotely sensed data and must then define
links between the spectral classes on the image and the informational classes of interest to the client. In this manner,
image classification proceeds by matching spectral categories to informational categories. If the match can be
made with confidence, then the information is likely to be reliable. If spectral and informational categories do
not correspond, then the image is unlikely to be a useful source for that particular application.
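A minimal sketch of this matching step, with invented numbers: pixels are assigned to spectral classes (here,
simply by nearest mean brightness), and an analyst-supplied table provides the link from each spectral class to
an informational class:

    import numpy as np

    # Invented mean signatures of three spectral classes (2 bands each).
    spectral_means = np.array([[20, 90],    # spectral class 0
                               [80, 40],    # spectral class 1
                               [50, 50]])   # spectral class 2

    # The analyst-supplied link from spectral to informational classes.
    links = {0: "forest", 1: "bare soil", 2: "urban"}

    def classify(pixel):
        """Assign a pixel to the nearest spectral class, then translate."""
        dists = np.linalg.norm(spectral_means - pixel, axis=1)
        return links[int(np.argmin(dists))]

    print(classify(np.array([25, 85])))   # forest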
Filter
The Filter tool can be used to either eliminate spurious data or enhance features otherwise not visibly apparent
in the data. Filters create output values by moving an overlapping 3x3 cell neighborhood window across the
input raster. As the filter passes over each input cell, the value of that cell and its 8 immediate
neighbors are used to calculate the output value. There are two types of filters available in the tool: low pass and
high pass. The filter type LOW employs a low pass, or averaging, filter over the input raster and essentially
smooths the data. The HIGH filter type uses a high pass filter to enhance the edges and boundaries between
features represented in the raster.
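The LOW option can be approximated outside the tool itself by taking the mean of each 3x3 neighborhood, as
in the sketch below (SciPy's uniform_filter reproduces the averaging logic, though the ArcGIS tool applies its
own edge and NoData rules):

    import numpy as np
    from scipy import ndimage

    # Invented input raster with one spurious spike at the center.
    raster = np.array([[4., 4., 4., 4.],
                       [4., 4., 50., 4.],
                       [4., 4., 4., 4.]])

    # LOW filter: the mean of each 3x3 neighborhood smooths the spike away.
    # (mode='nearest' simply extends edge values at the raster boundary.)
    smoothed = ndimage.uniform_filter(raster, size=3, mode="nearest")
    print(smoothed.round(1))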
High pass filter
The high pass filter accentuates the comparative difference between a cell's value and those of its neighbors. It has the
effect of highlighting boundaries between features (for example, where a water body meets the forest), thus
sharpening edges between objects. It is generally referred to as an edge-enhancement filter.
With the HIGH option, the nine input z-values are weighted in such a way that low frequency variations are
removed and the boundaries between different regions are highlighted. Note that the values in the kernel sum to
0, since they are normalized. The High Pass filter is essentially equivalent to using the Focal Statistics tool with
the Sum statistic option and a specific weighted kernel.

The output z-values are an indication of the smoothness of the surface, but they have no relation to the original
z-values. Z-values are distributed about zero with positive values on the upper side of an edge and negative
values on the lower side. Areas where the z-values are close to zero are regions with nearly constant slope.
Areas with values near z-min and z-max are regions where the slope is changing rapidly.
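The kernel referred to above can be applied by ordinary convolution. The weights in the sketch below follow
the 3x3 kernel documented for the ArcGIS HIGH option (negative corner and edge weights around a large
positive center, summing to zero as noted); they should be treated as an assumption here rather than as values
quoted from the text:

    import numpy as np
    from scipy import ndimage

    # Zero-sum high pass kernel (weights as documented for the ArcGIS
    # HIGH filter option; included here on that assumption).
    kernel = np.array([[-0.7, -1.0, -0.7],
                       [-1.0,  6.8, -1.0],
                       [-0.7, -1.0, -0.7]])
    assert abs(kernel.sum()) < 1e-9   # normalized: the weights sum to 0

    # A step edge: a flat region of 5s meeting a flat region of 10s.
    raster = np.full((4, 6), 5.0)
    raster[:, 3:] = 10.0

    # Weighted neighborhood sum (Focal Statistics "Sum" with this kernel).
    out = ndimage.correlate(raster, kernel, mode="nearest")
    print(out.round(1))   # ~0 in flat areas, +/- spikes along the edge

In the output, the high side of the edge receives positive values and the low side negative values, matching the behavior described above.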
Example 1
In a simple example of the calculation for one processing cell (a center cell with the value 8 in its 3x3
neighborhood), the output value for the processing cell works out to 29.5. By giving negative weights to its
neighbors, the filter accentuates the local detail by pulling out the differences, or the boundaries, between
objects.