
INTRODUCTION TO IMAGE INTERPRETATION

Aerial photographs, as well as imagery obtained by remote sensing using aircraft or
spacecraft as platforms, have applicability in various fields. By studying the qualitative as
well as quantitative aspects of images recorded by various sensor systems, such as aerial
photographs (black-and-white, black-and-white infrared, colour and colour infrared),
multiband photographs and satellite data (both pictorial and digital), including thermal and
radar imagery, an interpreter well experienced in his field can derive a great deal of information.

Image Interpretation

Image interpretation is defined as the act of examining images to identify objects
and judge their significance. An interpreter studies remotely sensed data and attempts,
through a logical process, to detect, identify, measure and evaluate the significance of
environmental and cultural objects, patterns and spatial relationships. It is an information
extraction process.

Anyone who looks at a photograph or an image in order to recognize its content is
an interpreter. A soil scientist, a geologist or a hydrogeologist, a forester or a planner,
trained in image interpretation, can recognize the vertical view presented by ground
objects on an aerial photograph or a satellite image, which enables him or her to detect
many small or subtle features that an amateur would either overlook or misinterpret. An
interpreter is, therefore, a specialist trained in the study of photography or imagery, in
addition to his or her own discipline. The present discussion mainly pertains to the
techniques of visual interpretation, the application of various instruments and the
extraction of information.

Aerial photographs, as well as imagery obtained by remote sensing employing
electromagnetic energy as the means of detecting and measuring target/object
characteristics, have applicability in various fields for four basic reasons.

First - They represent a large area of the earth from a perspective view and provide a
format that facilitates the study of objects and their relationships.

Second - Certain types of imagery and aerial photographs can provide a 3-D view.

Third - Characteristics of objects not visible to the human eye can be transformed
into images.

Fourth - They provide the observer with a permanent record/representation of objects
at a given moment of time. In addition, the data are real-time, repetitive and,
when in digital form, computer compatible for quick analysis.

BASIC PRINCIPLES OF IMAGE INTERPRETATION

Images and their interpretability

An image taken from the air or space is a pictorial presentation of the pattern of a
landscape.
The pattern is composed of indicators of objects and events that relate to the physical,
biological and cultural components of the landscape.
Similar conditions, in similar circumstances and surroundings, reflect similar patterns,
and unlike conditions reflect unlike patterns.
The type and amount of information that can be extracted is proportional to the
knowledge, skill and experience of the analyst, the methods used for interpretation
and the analyst's awareness of any limitations.

Factors Governing the Quality of an image

In addition to the inherent characteristics of an object itself, the following factors
influence image quality:

Sensor characteristics (film types, digital systems)
Season of the year and time of day
Atmospheric effects
Resolution of the imaging system and scale
Image motion
Stereoscopic parallax

Factors Governing Interpretability

1. Visual and mental acuity of the interpreter
2. Equipment and technique of interpretation
3. Interpretation keys, guides, manuals and other aids.

Visibility of Objects

Objects on aerial photographs or imagery are represented in the form of photo
images, in tones of grey in black-and-white photography and in different colours/hues in
colour or false-colour photography. The visibility of objects in the images varies due to -

a) The inherent characteristics of the objects
b) The quality of the aerial photography or imagery.

Inherent Characteristics of Objects

In any photographic image forming process, the negative is composed of tiny silver
deposits formed by the action of light on photosensitive film during exposure. The amount
of light received by the various sections of the film depends on the reflection of
electromagnetic radiation (EMR) from various objects. This light, after passing through the
optical system, gives rise to different tones and textures.

In visual interpretation, an interpreter is primarily concerned with recognizing
changes in tonal values, thereby differentiating an object of a certain reflective
characteristic from another. However, he must be aware that the same object, under
different moisture or illumination conditions and depending on the wavelength of the
incident energy, may reflect a different amount of light. For this reason, a general key
based on the tone characteristics of objects cannot be prepared. In such cases, other
characteristics of objects, such as their shape, size and pattern, help in their recognition.

Quality of Aerial Photography/Imagery
The quality of image interpretation depends on the quality of the basic material on
which the images are formed. Normally, in visual interpretation, these images are formed
on the photograph and represented in tones of grey or in colours of various hues, chroma
and values. A study of the factors, affecting image quality and characteristics of images, is
essential from an interpreter's point of view.

The Tonal or Colour Contrast Between an Image and Its Background

Photographic tone contrast is the difference in brightness between an image and its
background. Similarly, in colour photography, colour contrast is the result of all hue, value
and chroma differences between the image and its background. Tonal contrast can be
increased appreciably with proper filters.
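The idea of tone contrast between an image and its background can be put in quantitative terms. A simple sketch, using the standard Michelson contrast formula on illustrative grey-level values (the function name and sample tones are our own, not from the text):

```python
def michelson_contrast(object_tone, background_tone):
    """Michelson contrast between an object and its background.

    Tones are brightness values (e.g. 0-255 grey levels); the result
    ranges from 0 (no contrast) to 1 (maximum contrast).
    """
    if object_tone + background_tone == 0:
        return 0.0
    return abs(object_tone - background_tone) / (object_tone + background_tone)

# A dark field (tone 40) against bright sand (tone 200) contrasts strongly:
print(round(michelson_contrast(40, 200), 2))  # 0.67
# Identical tones give zero contrast, and the object disappears:
print(michelson_contrast(100, 100))  # 0.0
```

A filter that darkens the background while leaving the object tone unchanged raises this ratio, which is the sense in which filters "increase" tonal contrast.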

Image Sharpness Characteristics

Sharpness is the abruptness with which tone or colour contrasts appear on the
photograph or imagery. Both tone and sharpness enable an interpreter to distinguish one
object from another. To a large extent, image sharpness is dependent on the focussing
ability of the optical system. Image sharpness is closely related to the resolution of the
optical system.

Stereoscopic Parallax Characteristics

Stereoscopic parallax is the displacement of the apparent position of an image with
respect to a reference point of observation. Sufficient parallax is necessary in order to
distinguish objects from their shadows. Parallax depends on the height of an object, the
flying height and the stereobase or its corollary, the forward overlap. Stereoscopic parallax
can be improved by choosing the right base/height (B/H) ratio.
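The dependence of parallax on object height and flying height is exploited in the standard parallax-difference height formula used in photogrammetry. A minimal sketch (the numerical values are illustrative, not from the text):

```python
def object_height(flying_height, base_parallax, parallax_difference):
    """Height of an object from stereoscopic parallax, using the
    standard parallax-difference formula h = H * dp / (p + dp), where
    H is the flying height above the object's base, p is the absolute
    parallax at the base and dp is the parallax difference measured
    between the top and the base of the object."""
    return flying_height * parallax_difference / (base_parallax + parallax_difference)

# H = 1200 m, base parallax 90 mm, parallax difference 2.5 mm:
print(round(object_height(1200, 90.0, 2.5), 1))  # 32.4 (metres)
```

A larger B/H ratio gives a larger parallax difference for the same object height, which is why it improves the stereoscopic impression.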

The above treatment may appear oversimplified, as a number of other factors that
obviously affect image quality could be mentioned. However, for the purpose of
simplification, we may conclude that other factors influence image quality indirectly
through their effect on tone, sharpness or parallax.

In general, if image motion and exposure times were no problem, we would use
fine-grain, high-definition, slow photographic material with an appropriate filter in order
to get better sharpness and contrast.

ELEMENTS OF IMAGE INTERPRETATION

Image interpretation is essential for the efficient and effective use of the data.
While the above properties of aerial photographs/imagery help an interpreter to detect
objects through their tonal variations, he must also take advantage of other important
characteristics of the objects in order to recognize them. The following elements of image
interpretation, shown in the figure, are regarded as being of general significance,
irrespective of the precise nature of the imagery and the features it portrays.

Shape
Numerous components of the environment can be identified with reasonable
certainty merely by their shape. This is true of both natural features and man-made
objects.

Size
In many cases, the length, breadth, height, area and/or volume of an object can be
significant, whether these are surface features (e.g. different tree species) or atmospheric
phenomena (e.g. cumulus versus cumulonimbus clouds). The approximate size of many
objects can be judged by comparison with familiar features (e.g. roads) in the same scene.

Tone

We have seen how different objects emit or reflect different wavelengths and
intensities of radiant energy. Such differences may be recorded as variations of picture
tone, colour or density, which enable discrimination of many spatial variables, for
example, different crop types on land, or water bodies of contrasting depths or
temperatures at sea. The terms 'light', 'medium' and 'dark' are used to describe variations in tone.
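In digital data these descriptive tone classes correspond to ranges of grey level. A small sketch, with purely illustrative (not standardized) class boundaries on an 8-bit scale:

```python
def tone_class(dn, dark_max=85, medium_max=170):
    """Map an 8-bit grey level (digital number) to the descriptive tone
    classes used in visual interpretation. The class boundaries here are
    illustrative assumptions, not a published standard."""
    if dn <= dark_max:
        return "dark"
    if dn <= medium_max:
        return "medium"
    return "light"

print([tone_class(dn) for dn in (30, 120, 230)])  # ['dark', 'medium', 'light']
```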

Shadow

Hidden profiles may be revealed in silhouette (e.g. the shapes of buildings or the
forms of field boundaries). Shadows are especially useful in geomorphological studies
where micro relief features may be easier to detect under conditions of low-angle solar
illumination than when the sun is high in the sky. Unfortunately, deep shadows in areas of
complex detail may obscure significant features, e.g. the volume and distribution of traffic
on a city street.

Pattern

Repetitive patterns of both natural and cultural features are quite common, which
is fortunate because much image interpretation is aimed at the mapping and analysis of
relatively complex features rather than the more basic units of which they may be
composed. Such features include agricultural complexes (e.g. farms and orchards) and
terrain features (e.g. alluvial river valleys and coastal plains).

Texture

Texture is an important image characteristic closely associated with tone, in the
sense that it is a quality that permits two areas of the same overall tone to be
differentiated on the basis of microtonal patterns. Common image textures include
smooth, rippled, mottled, lineated and irregular. Unfortunately, texture analysis tends to
be rather subjective, since different interpreters may use the same terms in slightly
different ways. Texture is rarely the only criterion of identification or correlation employed
in interpretation. More often it is invoked as the basis for a subdivision of categories
already established using more fundamental criteria. For example, two rock units may
have the same tone but different textures.

Site

At an advanced stage in image interpretation, the location of an object with respect
to terrain features or other objects may be helpful in refining the identification and
classification of certain picture contents. For example, some tree species are found more
commonly in one topographic situation than in others, while in industrial areas the
association of several clustered, identifiable structures may help us determine the precise
nature of the local enterprise. For example, the combination of one or two tall chimneys,
a large central building, conveyors, cooling towers and solid fuel piles points to the correct
identification of a thermal power station.

Resolution

The resolution of a sensor system may be defined as its capability to discriminate
two closely spaced objects from each other. More than most other picture characteristics,
resolution depends on aspects of the remote sensing system itself, including its nature,
design and performance, as well as the ambient conditions during the sensing programme
and the subsequent processing of the acquired data. An interpreter must have knowledge
of the resolution of the various remote sensing data products.
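For a scanning sensor, one common way to express resolution is the ground-projected size of a single detector element, obtained from the instantaneous field of view (IFOV) and the platform altitude. A sketch using the small-angle approximation (the function name and sample numbers are illustrative):

```python
def ground_resolution(altitude_m, ifov_mrad):
    """Ground-projected size (in metres) of one detector element at
    nadir, for a scanner with instantaneous field of view `ifov_mrad`
    (milliradians): D = H * IFOV (small-angle approximation)."""
    return altitude_m * ifov_mrad / 1000.0

# A sensor with a 2.5 mrad IFOV flown at 800 m resolves about 2 m cells:
print(ground_resolution(800, 2.5))  # 2.0
```

The same relation shows why a given sensor yields coarser ground resolution from satellite altitudes than from aircraft altitudes.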

Stereoscopic Appearance

When the same feature is photographed from two different positions with overlap
between successive images, an apparently solid model of the feature can be seen under a
stereoscope. Such a model is termed a stereomodel, and the three-dimensional view it
provides can aid interpretation. This valuable information cannot be obtained from a single
print.

In practice, these nine elements assume varying ranks of importance.
Consequently, the order in which they may be examined varies from one type of study to
another. Sometimes they can lead to the assessment of conditions not directly visible in
the images, in addition to the identification of features or conditions that are explicitly
revealed. The process by which related invisible conditions are established by inference is
termed "convergence of evidence". It is useful, for example, in assessing the social class
and/or income group occupying a particular neighbourhood, or the soil moisture conditions
in agricultural areas.

Image interpretation may be very general in its approach and objective, such as in
the case of terrain evaluation or land classification. On other occasions it is highly specific,
related to clear-cut goals in such fields as geology, forestry, transport studies and soil
erosion mapping. In no instance should the interpreter fail to take into account features
other than those for which he or she is specifically searching. Failure to give adequate
consideration to all aspects of a terrain is, perhaps, the commonest source of
interpretation error.

The interpretation of images is therefore an essentially deductive process, and the
identification of certain key features leads to the recognition of others. Once a suitable
starting point has been selected, the elements listed earlier are considered either
consciously or subconsciously. The completeness and accuracy of the results depend on
an interpreter's ability to integrate such elements in the most appropriate way to achieve
the objectives that have been set for him or her.

TECHNIQUES OF IMAGE INTERPRETATION

The development of interpretation techniques has been mainly empirical. The gap
between the photo image on the one hand and the reference level, i.e. the level of
knowledge in a specific field in the human mind, on the other is bridged by the use of
image interpretation. The techniques adopted for one discipline may differ from those
adopted for another. The sequence of activity and the search method may have to be
modified to suit the specific requirements.

Image interpretation comprises at least three mental acts that may or may not be
performed simultaneously:

i) The measurement of images of objects
ii) Identification of the objects imaged
iii) Appropriate use of this information in the solution of the problem.

In visual interpretation, the methodology of interpretation for each separate
discipline will depend on :

Kind of information to be interpreted
Accuracy of the results to be obtained
The reference level of the person executing the interpretation
Kind and type of imagery or photographs available
Instruments available
Scale and other requirements of the final map
External knowledge available and any other sensory surveys that have been or
will be made in the near future in the same area.

From the scrutiny of the above list, it is evident that no stereotyped approach can
be prescribed for the techniques or the methodology of photo-interpretation. An
interpreter must work out the plan of operations and the techniques depending on the
project's special requirements.

In carrying out this task, an interpreter may use many more types of data than
those recorded on the images he is to interpret. Many sources, such as literature,
laboratory measurements, analysis, field work and ground and aerial photographs (or
imagery) make up this collateral material.

Activities of Image-interpretation

Image interpretation is a complex process comprising physical as well as mental
activities. It demands familiarity with so wide a variety of stimuli that even the most
accomplished interpreter is occasionally dependent on reference materials.

The reference material in the form of identification keys is a useful aid in image
interpretation. Many types of image interpretation keys are available or may be
constructed depending on the abilities of the interpreter and the purpose to be served by
the interpretation.

METHODS OF SEARCH AND SEQUENCE OF INTERPRETATION

In visual interpretation, whenever possible, and especially when examining vertical
or nearly vertical photographs, the scene is viewed stereoscopically. The sequence begins
with the detection and identification of objects, followed by measurements of the image.
The image is then considered in terms of information, usually non-pictorial, and finally
deductions are made. The interpreter should work methodically, proceeding from general
considerations to specific details and from known to unknown features.

There are two basic methods that may be used to study aerial imagery:

"Fishing expedition" - an examination of each and every object so as not to miss anything.

"Logical search" - quick scanning and selective intensive study.

Sequence of Activities

Normally, the activities in an image-interpretation sequence include the following:

Detection

Detection means selectively picking out an object or element of importance for the
particular kind of interpretation in hand. It is often coupled with recognition, in which case
the object is not only seen but also recognized.

Recognition and Identification

Recognition and identification together are sometimes termed photo-reading. They
are fundamentally the same process: the classification of an object, by means of specific
or local knowledge, within a known category upon its detection in a photo image.

Analysis

Analysis is the process of separating or delineating a set of similar objects. In
analysis, boundary lines are drawn separating the groups, and the degree of reliability of
these lines may be indicated.

Deduction

Deduction may be directed to the separation of different groups of objects or
elements and the deduction of their significance on the basis of converging evidence. The
evidence is derived mainly from visible objects, or from invisible elements that give only
partial information through certain correlative indications.

Classification

Classification establishes the identity of a surface or an object delineated by
analysis. It includes codification of the surfaces into a pertinent system for use in field
investigation. Classification is made in order to group surfaces or objects according to
those aspects that, from a certain point of view, bring out their most characteristic
features.

Idealization

Idealization refers to the process of drawing standardized representations of what
is actually seen in the photo image. This process is helpful for the subsequent use of the
photograph/imagery during field investigations and in the preparation of base maps.

These processes are better explained with an example. If investigations of dwellings
are to be carried out, the first step would be to detect photo images having a rectangular
shape. The next step would be to recognize, say, single-storey and double-storey
constructions. Delineation of the two groups of objects would be done under the process
of analysis, in which a boundary line may be drawn separating the two groups. At this
stage, in view of various converging evidence, it may be deduced that one group consists
of single-storey dwellings. In more difficult cases this would be done in the process of
classification, and a code number assigned to the groups to aid field examination.
Cartographic representation would be made under the process of idealization.

Convergence of Evidence

Image interpretation is basically a deductive process. Features that can be
recognized and identified directly lead the image interpreter to the identification and
location of other features. Even though all aspects of an area are inextricably intertwined,
the interpreter must begin somewhere; he cannot consider drainage, landform, vegetation
and man-made features simultaneously. He should begin with one feature or group of
features and then move on to the others, integrating each facet of the terrain as he goes.
For each terrain, the interpreter must find his own point of beginning and then consider
each of the various aspects of the terrain in a logical fashion. Deductive image
interpretation requires conscious or unconscious consideration of the elements of image
interpretation listed earlier. The completeness and accuracy of image interpretation are
proportional to the interpreter's understanding of how and why images show shape, size,
tone, shadow, pattern and texture, while an understanding of site, association and
resolution strengthens the interpreter's ability to integrate the different features making
up a terrain. For beginners, systematic consideration of the elements of image
interpretation should precede integrated terrain interpretation.

The principle of convergence of evidence requires the interpreter first to recognize
basic features or types of features and then to consider their arrangement (pattern) in the
areal context. Several interpretations may suggest themselves. Critical examination of the
evidence usually shows that all interpretations but one are unlikely or impossible. The
greatest difficulty in interpreting images lies in judging degrees of probability.

Sensors in Photographic Image Interpretation

As stated earlier, characteristics not visible to the human eye can also be recorded
and displayed by using proper sensor types. Digital data can also be transferred onto any
type of film, depending on the type of study to be carried out. Normally, four types of
film are used for visual data display, as follows:

a) Black-and-white panchromatic
b) Black-and-white infrared
c) Colour
d) Colour infrared/false colour

All of the above types are available in different grades and sensitivities that can be
preselected for a particular use. An interpreter must know the characteristics of each of
these before starting an interpretation job. The same is true for the digital data display for
multispectral, thermal and radar imagery.

INSTRUMENTS FOR VISUAL INTERPRETATION AND TRANSFER OF DATA

Interpretation Instruments

Monocular instruments: magnifiers
Stereoscopic instruments: mirror and pocket stereoscopes, interpretoscope,
zoom stereoscope, scanning mirror stereoscope

Instruments for Transfer of Data

For flat terrain: sketchmaster, stereosketchmaster, zoom transferscope,
optical pantograph or reflecting projector
For hilly terrain: stereoplotters. An orthophoto, together with its stereo-mate,
can be used for interpretation and delineation; since the
preparation of an orthophoto and its stereo-mate is a complex
process, the method is not very popular.
Conclusion

The scope of image interpretation as a tool for analysis and data collection is
widening with the advance of remote sensing techniques. Space images have already found
their use in interpretation for the earth sciences. Because of the flexibility of its techniques
and substantial gains in accuracy, speed and economy over conventional ground methods,
the future of image interpretation is assured. However, great endeavour is required on the
part of the interpreter to assess his or her own empirical knowledge in order to formulate
the optimum data requirements for different disciplines. This is essential for the better
development of image interpretation and for widening the scope of application of its
techniques.

Spectral Signature of Land Cover Features

When EMR is incident on the earth's surface, the behaviour of the land at a
particular locality is mainly governed by the component of land exposed at the surface at
that locality. The component may be a vegetation canopy, a water body, barren rock, loose
soil, a built-up area or a mixture of these. Since each of these exhibits a typical spectral
signature influenced by many parameters of its own, they have to be considered
separately to understand the nature of the EMR interaction with each component. The
spectral signatures of water, vegetation and soil are discussed in detail in the following
sections.

Spectral Reflectance and Spectral Signature of Soil

The majority of the flux incident on a soil surface is reflected or absorbed, and little
is transmitted. The reflectance properties of the majority of soils are similar, with a
positive relationship between reflectance and wavelength, as can be seen in Fig.1. The
five characteristics of a soil that determine its reflectance properties are, in order of
importance: its moisture content, organic content, texture, structure and iron oxide
content. These factors are all interrelated; for example, the texture (the proportion of
sand, silt and clay particles) is related to both the structure (the arrangement of sand, silt
and clay particles into aggregates) and the ability of the soil to hold moisture.

Effect of soil texture, structure and soil moisture

The relationship between texture, structure and soil moisture can best be described
with reference to two contrasting soil types. A clay soil tends to have a strong structure,
which leads to a rough surface on ploughing; clay soils also tend to have a high moisture
content and as a result have a fairly low, diffuse reflectance. In contrast, a sandy soil tends
to have a weak structure, which leads to a fairly smooth surface on ploughing; sandy soils
also tend to have a low moisture content and as a result have fairly high and often specular
reflectance properties. In visible wavelengths the presence of soil moisture considerably
reduces the surface reflectance of a soil. This occurs until the soil is saturated, at which
point further additions of moisture have no effect on reflectance.

Reflectance in near and middle infrared wavelengths is also negatively related to
soil moisture. An increase in soil moisture will result in a rapid decrease in reflectance in
the water (H2O) and hydroxyl (OH) absorbing wavebands, centred at wavelengths of
approximately 0.9 µm, 1.9 µm, 2.2 µm and 2.7 µm. The effect of water and hydroxyl
absorption is more noticeable in clay soils, for these soils have much bound water and very
strong hydroxyl absorption properties, as can be seen in Fig.2.

The surface roughness (determined by the texture and structure) and the moisture
content of soil also affect the way in which the reflected visible and near infrared radiation
is polarized. This is because when unpolarized sunlight is specularly reflected from a smooth
wet surface it becomes weakly polarized to a degree that is positively related to the
smoothness and the wetness of that surface. This effect has been used to estimate soil
surface moisture from aircraft-borne sensors at altitudes of up to 300 meters.

Organic matter

Soil organic matter is dark and its presence decreases the reflectance from the soil
up to an organic matter content of around 4-5 percent. When the organic matter content
of the soil is greater than 5 percent, the soil is black and any further increases in organic
matter will have little effect on reflectance.
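The darkening effect of organic matter described above can be caricatured as a piecewise model: reflectance falls with organic content up to about 5 percent, then levels off. This is a toy sketch whose coefficients are illustrative assumptions, not measured values:

```python
def soil_reflectance(base_reflectance, organic_pct):
    """Toy model of the darkening effect of organic matter: reflectance
    falls linearly with organic content up to about 5 percent, beyond
    which the soil is effectively black and further additions change
    little. The 0.15 slope is an illustrative assumption only."""
    effective = min(organic_pct, 5.0)
    return base_reflectance * (1.0 - 0.15 * effective)

# A mineral soil of reflectance 0.40 darkens with 2% organic matter:
print(round(soil_reflectance(0.40, 2.0), 2))  # 0.28
# Beyond 5% the curve saturates:
print(soil_reflectance(0.40, 5.0) == soil_reflectance(0.40, 8.0))  # True
```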

Iron Oxide

Iron oxide gives many soils their 'rusty' red coloration by coating or staining
individual soil particles. Iron oxide selectively reflects red light (0.6-0.7 µm). This effect is
so marked that workers have been able to use a ratio of red to green bi-directional
reflectance to locate iron ore deposits from satellite altitudes.
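The red-to-green ratio mentioned above is a straightforward band ratio. A minimal sketch with illustrative reflectance values (the sample numbers are assumptions, not measurements from the text):

```python
def red_green_ratio(red, green):
    """Band ratio used to enhance iron-oxide-rich soils: iron oxide
    reflects red (0.6-0.7 um) strongly, so high ratios flag 'rusty'
    surfaces. Inputs are reflectance fractions for the two bands."""
    return red / green if green else float("inf")

# An iron-stained soil (red 0.35, green 0.14) versus a grey soil (0.20, 0.18):
print(round(red_green_ratio(0.35, 0.14), 2))  # 2.5
print(round(red_green_ratio(0.20, 0.18), 2))  # 1.11
```

Ratioing also suppresses overall brightness differences caused by topography, which is part of why such indices work from satellite altitudes.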

Spectral Reflectance and Spectral Signature of Water

The majority of the radiant flux incident upon water is not reflected but is either
absorbed or transmitted. In the visible wavelengths of EMR, little light is absorbed, a small
amount, usually below 5%, is reflected, and the rest is transmitted. Water absorbs NIR and
MIR strongly (Fig.3), leaving little radiation to be either reflected or transmitted. This
results in a sharp contrast between water and land boundaries.

The factors that govern the variability in reflectance of a water body are the depth
of the water, the suspended material within the water and the surface roughness of the
water.
In shallow water some of the radiation is reflected not by the water itself but from
the bottom of the water body. Therefore, in shallow pools and streams it is often the
underlying material that determines the water body's reflectance properties and colour in
the FCC.

Among suspended materials, the most common are non-organic sediments, tannin
and chlorophyll. Non-organic silts and clays increase the scattering and hence the
reflectance in visible wavelengths.

Water bodies that contain chlorophyll have reflectance properties that resemble, at
least in part, those of vegetation, with increased green and decreased blue and red
reflectance. However, the chlorophyll content must be high enough for these changes to
be detected.

The roughness of the water surface can also affect its reflectance properties. If the
surface is smooth, light is reflected specularly from it, giving very high or very low
reflectance depending upon the location of the sensor. If the surface is very rough, there
will be increased scattering at the surface, which in turn will increase the reflectance.

Spectral Reflectance and Spectral Signature of Vegetation

The spectral reflectance of vegetation over the EMR spectrum depends upon:

1. Pigmentation
2. Physiological structure
3. Leaf moisture content

The hemispherical reflectance of an individual leaf is insufficient to describe the
remotely sensed bi-directional reflectance of a vegetation canopy. This is because a
vegetation canopy is not one large leaf but is composed of a mosaic of leaves, other plant
structures, background and shadow. Hence the spectral reflectance of a vegetation canopy
can vary appreciably owing to the effect of the soil background, the presence of senescent
vegetation, the angular elevation of the sun and sensor, the canopy geometry and certain
episodic and phenological canopy changes. Some of these are considered for discussion
here.

Effect of Pigmentation absorption

The primary pigments are chlorophyll a, chlorophyll b, β-carotene and xanthophyll,
all of which absorb visible light for photosynthesis. Chlorophyll a and chlorophyll b, which
are the more important pigments, absorb portions of blue and red light: chlorophyll a
absorbs at a wavelength of 0.43 µm, and chlorophyll b at wavelengths of 0.45 µm and
0.65 µm. The carotenoid pigments, carotene and xanthophyll, both absorb blue to green
light.

Physiological structure and reflectance in NIR

The discontinuities in the refractive indices within a leaf determine its near infrared
reflectance. These discontinuities occur between membranes and cytoplasm within the
upper half of the leaf and, more importantly, between individual cells and the air spaces
of the spongy mesophyll within the lower half of the leaf.

The combined effects of leaf pigments and physiological structure give all healthy
green leaves their characteristic reflectance properties: low reflectance of red and blue
light, medium reflectance of green light and high reflectance of near infrared radiation
(Fig 4). The major differences in leaf reflectance between species are dependent upon leaf
thickness, which affects both pigment content and physiological structure. For example, a
thick, flat wheat leaf will tend to transmit little and absorb much radiation, whereas a
flimsy lettuce leaf will transmit much and absorb little radiation (Fig 5).
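The low-red/high-NIR signature of healthy leaves described above is the basis of widely used vegetation indices such as the normalized difference vegetation index (NDVI); the index itself is not discussed in the text, so this is offered only as an illustration, with assumed reflectance values:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index: exploits the low red /
    high near infrared reflectance of healthy green leaves. Inputs are
    reflectance fractions; output ranges from -1 to +1."""
    return (nir - red) / (nir + red)

# Healthy canopy (NIR 0.50, red 0.05) versus bare soil (NIR 0.30, red 0.25):
print(round(ndvi(0.50, 0.05), 2))  # 0.82
print(round(ndvi(0.30, 0.25), 2))  # 0.09
```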

Effect of Leaf moisture

Leaf reflectance is reduced as a result of absorption by three major water
absorption bands that occur near wavelengths of 1.4 µm, 1.9 µm and 2.7 µm, and two
minor water absorption bands that occur near wavelengths of 0.96 µm and 1.1 µm (Fig. 6).
The reflectance of the leaf within these water absorption bands is negatively related to
both the amount of water in the leaf and the thickness of the leaf. However, water in the
atmosphere also absorbs radiation in these bands, and therefore the majority of sensors
are limited to three 'atmospheric windows' that are free of water absorption, at
wavelengths of 0.3 to 1.3 µm, 1.5 to 1.8 µm and 2.0 to 2.6 µm. Fortunately, within these
wavebands electromagnetic radiation is still sensitive to leaf moisture.
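A small helper can check whether a given wavelength falls inside one of the three windows listed above, a sketch that directly encodes the text's values:

```python
# The three atmospheric windows free of water absorption, in micrometres,
# as listed in the text.
ATMOSPHERIC_WINDOWS_UM = [(0.3, 1.3), (1.5, 1.8), (2.0, 2.6)]

def in_atmospheric_window(wavelength_um):
    """True if the wavelength falls inside one of the windows above."""
    return any(lo <= wavelength_um <= hi for lo, hi in ATMOSPHERIC_WINDOWS_UM)

# The major water absorption bands at 1.4 um and 1.9 um fall between windows:
print(in_atmospheric_window(1.4))   # False
print(in_atmospheric_window(1.6))   # True
```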

The effect of the soil background

The bi-directional reflectance of the soil has a considerable effect on the
bi-directional reflectance of the vegetation canopy. Soil/waveband combinations that are
unsuitable for the remote sensing of vegetation can be identified. For example, on dark
toned soils with low red bi-directional reflectance there is little change in the red
bi-directional reflectance of the canopy with an increase in canopy LAI, as the leaves have
reflectance properties similar to those of the soil. On a light toned soil with a high
bi-directional reflectance, the relationship between near infrared bi-directional
reflectance and LAI is weaker than on a dark soil, as on a dark soil the contrast between
leaves and soil is high at near infrared wavelengths.
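One widely used way to reduce this soil-background sensitivity, not described in the text itself, is a soil-adjusted vegetation index such as SAVI (Huete, 1988), which adds a soil brightness correction factor L to the NDVI formulation. A minimal sketch with illustrative reflectance values:

```python
def ndvi(nir, red):
    """Normalised difference vegetation index."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil-adjusted vegetation index (Huete, 1988).

    L is a soil brightness correction factor: L = 0 reduces SAVI to NDVI,
    and 0.5 is the usual default for intermediate vegetation cover.
    """
    return (nir - red) * (1.0 + L) / (nir + red + L)

# Illustrative reflectances for a partially vegetated pixel:
print(round(ndvi(nir=0.45, red=0.08), 3))
print(round(savi(nir=0.45, red=0.08), 3))
```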

The effect of vegetation senescence

Vegetation senesces due to aging and the crop begins to ripen, the near infrared
reflectance of the leaf does not significantly decrease. However, the breakdown of the
plant pigments, result in a rise in the reflectance of blue and red wavelengths. As a result
there is a positive relationship between bi-directional reflectance, at each wavelength, and
the LAI of senescent vegetation.

The effect of canopy geometry

The geometry of a vegetation canopy will determine the amount of shadow seen by
the sensor and will therefore influence the sensitivity of bi-directional reflectance
measurements to angular variation in sun and sensor. For example, the reflectance of a
rough tree canopy, unlike that of a smoother grassland canopy, is greatly dependent upon
the solar angle.

The effect of phenology

Seasonal change influences canopy bi-directional reflectance. From
quantitative studies it is known that for a non-deciduous canopy (e.g. grassland) red
bi-directional reflectance is maximised in autumn and minimised in spring, while near
infrared bi-directional reflectance is maximised in summer and minimised in winter. These
relationships can be presented as hysteresis loops of bi-directional reflectance. Each
hysteresis plot contains the expected pattern, with minor variations for the vegetation of
the nature reserve and the corn crop and major variations for the wheat and rice crops. The
wheat crop has a lower than expected red bi-directional reflectance in summer, probably
due to high productivity, and a higher than expected near infrared bi-directional
reflectance in autumn, probably as a result of senescent stubble left in the fields.
Irrigation status as well as Leaf Area Index (LAI) determines the bi-directional
reflectance of the rice crop; for example, in summer the wet soil background reduces the
otherwise high near infrared bi-directional reflectance of the crop.

Figure 1: Spectral Reflectance curve of soil

Figure 2: Effect of Soil Moisture on Soil Spectral Reflectance.

Figure 3: Absorption of electromagnetic radiation by seawater

Figure 4: Spectral reflectance of a leaf (top)

Figure 5: The reflectance, absorbance and transmittance properties of wheat and lettuce leaves

Figure 6: Effect of leaf moisture on spectral reflectance

IMAGE INTERPRETATION FOR MULTISPECTRAL SCANNER IMAGERY

Introduction

The application of MSS image interpretation has been demonstrated in many fields,
such as agriculture, botany, cartography, civil engineering, environmental monitoring,
forestry, geography, geophysics, land resource analysis, land use planning, oceanography,
and water resource analysis.

LANDSAT MSS Image Interpretation

As shown in Table 1, the image scale and area covered per frame are very different
for Landsat images than for conventional aerial photographs. For example, more than 1600
aerial photographs at a scale of 1:20,000 with no overlap are required to cover the area of
a single Landsat MSS image! Because of scale and resolution differences, Landsat images
should be considered a complementary interpretive tool rather than a replacement for
low altitude aerial photographs. For example, the existence and/or significance of certain
geologic features trending over tens or hundreds of kilometers, clearly evident on a
Landsat image, might escape notice on low altitude aerial photographs. On the other
hand, housing quality studies from aerial imagery would certainly be more effective using
low altitude aerial photographs rather than Landsat images, since individual houses cannot
be resolved on Landsat MSS images. In addition, most Landsat MSS images can only be
studied in two dimensions, whereas most aerial photographs are acquired in stereo.
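The "more than 1600 photographs" figure follows directly from the per-frame areas in Table 1 and can be checked with a little arithmetic:

```python
# Areas per frame from Table 1 (square kilometres).
landsat_scene_km2 = 34_000   # one Landsat MSS scene
aerial_photo_km2 = 21        # one 1:20,000 aerial photograph, no overlap

# Number of non-overlapping photographs needed to tile one scene.
photos_needed = landsat_scene_km2 / aerial_photo_km2
print(round(photos_needed))   # roughly 1619, i.e. "more than 1600"
```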

Table 1: Comparison of Image Characteristics

Image Format                                    Image Scale   Area Covered per Frame (km2)
Low altitude USDA-ASCS aerial photographs       1:20,000      21
(230 x 230 mm)
High altitude NASA aerial photographs           1:120,000     760
(RB-57 or ER-2) (230 x 230 mm)
Landsat scene (185 x 185 mm)                    1:1,000,000   34,000

Resolution

The effective resolution (in terms of the smallest adjacent ground features that can
be distinguished from each other) of Landsat MSS images is about 79 m (about 30 m on
Landsat-3 RBV images). However, linear features as narrow as a few meters, having a
reflectance that contrasts sharply with that of their surroundings, can often be seen on
Landsat images (for example, two-lane roads, concrete bridges crossing water bodies, etc.).
On the other hand, objects much larger than 79 m across may not be apparent if they have
a very low reflectance contrast with their surroundings, and features detected in one band
may not be detected in another.

Stereoscopic ability

As a line scanning system, the Landsat MSS produces images having one-
dimensional relief displacement. Because there is displacement only in the scan direction
and not in the flight track direction, Landsat images can be viewed in stereo only in areas of
side lap on adjacent orbit passes. This side lap varies from about 85 percent near the poles
to about 14 percent at the equator. Consequently, only a limited area of the globe may be
viewed in stereo. Also, the vertical exaggeration when viewing MSS images in stereo is
quite small compared to conventional air photos. This stems from the extreme platform
altitude (900 km) of the satellite compared to the base distance between images. Whereas
stereo airphotos may have a 4X vertical exaggeration, stereo Landsat vertical exaggeration
ranges from about 1.3X at the equator to less than 0.4X at latitudes above about 70°.
Subtle as this stereo effect is, geologists in particular have found stereoviewing in Landsat
overlap areas quite valuable in studying topographic expression. However, most
interpretations of Landsat imagery are made monoscopically, either because sidelapping
imagery does not exist or because the relief displacement needed for stereoviewing is so
small. In fact, because of the high altitude and narrow field of view of the MSS, images
from the scanner contain little or no relief displacement in nonmountainous areas. When
such images are properly processed, they can be used as planimetric maps at scales as
large as 1:250,000. More recently, these limitations have been overcome by the
panchromatic sensors of SPOT and IRS-1C, which can acquire stereo imagery.
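The latitude dependence of the stereo effect can be approximated with a toy model: the ground separation of adjacent orbit tracks (the stereo base) shrinks roughly with the cosine of latitude, and vertical exaggeration scales with the base-to-height ratio. Anchoring the model to the 1.3X equatorial value quoted above; the cosine scaling itself is a simplifying assumption, not stated in the text:

```python
import math

EQUATORIAL_EXAGGERATION = 1.3   # value quoted in the text for the equator

def landsat_vertical_exaggeration(latitude_deg):
    """Toy model: vertical exaggeration falls off as cos(latitude),
    because the ground separation of adjacent orbit tracks does.
    """
    return EQUATORIAL_EXAGGERATION * math.cos(math.radians(latitude_deg))

for lat in (0, 45, 75):
    print(lat, round(landsat_vertical_exaggeration(lat), 2))
```

Consistent with the text, the model drops below 0.4X at high latitudes.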

Individual Band Interpretation

The most appropriate band or combination of bands of MSS imagery should be
selected for each interpretive use. Bands 4 (green) and 5 (red) are usually best for detecting
cultural features such as urban areas, roads, new subdivisions, gravel pits, and quarries. In
such areas, band 5 is generally preferable because the better atmospheric penetration of
red wavelengths provides a higher contrast image. In areas of deep, clear water, greater
water penetration is achieved in band 4. Bands 4 and 5 are excellent for showing silty
water flowing into clear water. Bands 6 and 7 (near infrared) are best for delineating water
bodies. Since energy of near-infrared wavelengths penetrates only a short distance into
water, where it is absorbed with very little reflection, surface water features have a very
dark tone in bands 6 and 7. Wetlands with standing water or wet organic soil where little
vegetation has yet emerged also have a dark tone in bands 6 and 7, as do asphalt-surfaced
pavements and wet bare soil areas. Both bands 5 and 7 are valuable in geologic studies,
the largest single use of Landsat MSS data.

In comparing the appearance of the four Landsat MSS bands, the extent of the
urban areas is best seen in bands 4 and 5 (light toned). The major roads are best seen in
band 5 (light toned), clearly visible in band 4, undetectable in band 6, and slightly visible in
band 7 (dark toned). An airport concrete runway and taxiway are clearly visible. The
concrete pavement is clearly visible in bands 4 and 5 (light toned), very faint in band 6
(light toned), and undetectable in band 7. The asphalt pavement is very faint in bands 4
and 5 (light toned), reasonably clear in band 6 (dark toned), and best seen in band 7 (dark
toned). The major lakes and connecting river are best seen in bands 6 and 7 (dark toned).
These lakes have a natural green colour in mid-July resulting from the presence of algae in
the water. In the band 4 image, all lakes have a tone similar to the surrounding agricultural
land, which consists principally of green-leafed crops such as corn. The lakes mostly
surrounded by urban development can therefore have their shorelines reasonably well
detected, whereas the lakes principally surrounded by agricultural land have shorelines
that are often indistinct. The shorelines are more distinct in band 5, but still somewhat
difficult to delineate. The surface water of major lakes and the connecting river is clearly
seen in both bands 6 and 7 (dark toned). Agricultural areas have a rectangular field pattern
with different tones representing different crops; this is best seen in bands 5, 6 and 7. For
purposes of crop identification and mapping from MSS images, the most effective
procedure is to view two or more bands simultaneously in an additive colour viewer or to
interpret colour composite images. Small forested areas appear dark toned in bands 4 and
5. In regions receiving winter snowfall, forested areas can best be mapped using
wintertime images where the ground is snow covered. On such images, the forested and
shrubland areas will appear dark toned against a background of light-toned snow.

Temporal data

Each Landsat satellite passes over the same area on the earth's surface during
daylight hours about 20 times per year. The actual number of times per year a given
ground area is usefully imaged depends on the amount of cloud cover, the sun angle, and
whether or not the satellite is in operation on any specific pass. This provides the
opportunity for many areas to have Landsat images available for several dates per year.
Because the appearance of the ground in many areas with climatic change is dramatically
different in different seasons, the image interpretation process is often improved by
utilizing images from two or more dates.
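The "about 20 times per year" figure is consistent with the 18-day orbit repeat cycle of the early Landsat satellites; the 18-day value is standard for Landsat 1 to 3, though the text does not state it:

```python
DAYS_PER_YEAR = 365
REPEAT_CYCLE_DAYS = 18   # Landsat 1-3 orbit repeat cycle (assumed here)

# Maximum number of daytime passes over a given ground area per year.
passes_per_year = DAYS_PER_YEAR / REPEAT_CYCLE_DAYS
print(round(passes_per_year, 1))   # about 20 passes per year
```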

Consider, for example, band 5 images of the same area acquired in September and
December. In the December image the ground is snow covered (about 200 mm deep) and
all water bodies are frozen, except for a small stretch of the river. The physiography of the
area can be better appreciated by viewing the December image, due in part to the low
solar elevation angle in winter that accentuates subtle relief. The snow-covered upland
areas and valley floors have a very light tone, whereas the steep, tree-covered valley sides
have a darker tone. The identification of urban, agricultural, and water areas can better be
accomplished using the September image. The identification of forested areas can be
more positively done using the December image.

Synoptic view

The synoptic view afforded by space platforms can be particularly useful for
observing short-lived phenomena. However, the use of Landsat images to capture such
ephemeral events as floods, forest fires, and volcanic activity is, to some degree, a hit-or-
miss proposition. If a satellite passes over such an event on a clear day when the imaging
system is in operation, excellent images of such events can be obtained. On the other
hand, such events can easily be missed if there are no images obtained within the duration
of the event or, as is often true during floods, extensive cloud cover obscures the earth's
surface. However, some of these events do leave lingering traces. For example, soil is
typically wet in a flooded area for at least several days after the flood waters have receded,
and this condition may be imaged even if the flood waters are not there. Also, the area
burned by a forest fire will have a dark image tone for a considerable period of time after
the actual fire has ceased.

In the red band image, the vast quantities of silt flowing from the river into the
delta can be clearly seen. However, it is difficult to delineate the boundary between land
and water in the delta area. In the near-infrared band image, the silt-laden water cannot be
distinguished from the clear water because of the lack of water penetration by near-
infrared wavelengths. However, the delineation of the boundary between land and water
is much clearer than in the red band.

The black tone of a burned area contrasts sharply with the lighter tones of the
surrounding unburned forest area.

Tropical deforestation in response to intense population pressures can also be
monitored, for example an extensive area of forest land being cleared for transmigration
site development. The dark toned area shows forested land. Areas being actively cleared
appear as light-toned "fingers" cutting into the forested land. The indistinct lighter toned
plumes emanating from the newly cleared areas are smoke plumes from burning debris.

False Colour Composite (FCC)

Bands 4, 5, and 7 are combined, projected in blue, green, and red respectively, to
produce the colour image. Spectral characteristics and colour signatures of Landsat MSS
colour images are comparable to those of IR colour aerial photographs. Typical signatures
are as follows:

Healthy vegetation          Red
Clear water                 Dark blue to black
Silty water                 Light blue
Red beds                    Yellow
Bare soil, fallow fields    Blue
Windblown sand              White to yellow
Cities                      Blue
Clouds and snow             White
Shadows                     Black
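For quick reference, the signature list above can be inverted into a lookup from image colour to candidate cover types. This directly encodes the table; note that a single colour such as blue maps to more than one cover type, which is one reason context matters in interpretation:

```python
# FCC signatures from the list above: cover type -> typical colour.
FCC_SIGNATURES = {
    "healthy vegetation": "red",
    "clear water": "dark blue to black",
    "silty water": "light blue",
    "red beds": "yellow",
    "bare soil, fallow fields": "blue",
    "windblown sand": "white to yellow",
    "cities": "blue",
    "clouds and snow": "white",
    "shadows": "black",
}

def candidates_for_colour(colour):
    """Return cover types whose typical signature mentions `colour`."""
    return sorted(
        cover for cover, sig in FCC_SIGNATURES.items() if colour in sig
    )

print(candidates_for_colour("blue"))   # several cover types appear blue
```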

Land-Use and Land-Cover Interpretation on FCC

Urban areas have a grid pattern of major traffic arteries. Central commercial areas
have blue signatures caused by pavement, roofs, and an absence of vegetation. The
suburbs are pink to red, depending on the density and condition of lawns, trees, and other
landscape vegetation. Small, bright red areas are parks, golf courses, cemeteries, and
other concentrations of vegetation.

Agricultural land shows a rectangular pattern of bright red (growing crops) and
blue-grey (fallow fields). Red circles are formed by alfalfa fields irrigated by center-pivot
irrigation sprinklers.

Rangeland has a red-brown signature in the fall season image. Forest and brush,
which cover mountainous terrain in the Transverse Ranges (lower elevations covered by
chaparral and higher elevations by pine trees), are also red-brown.

Water is represented by the ocean and scattered reservoirs. The dark blue colour is
typical of the ocean for much of the year, but during the winter rainy season, muddy water
from various rivers forms light-coloured plumes that are carried by currents.

The desert has a light yellow signature that is typical of arid land. In the valley are
several light gray to very dark gray triangles, which are alluvial fans of gravel eroded from
the bedrock of the Transverse Ranges. Dry lakes have white signatures caused by silt and
clay deposits.

Major geologic features are also recognizable in the Landsat image. The fault,
which separates the valley from the Transverse Ranges, is expressed as linear scarps and
canyons.

Return-Beam Vidicon System

Return-beam vidicons (RBV) are framing systems that are essentially television
cameras. Landsat 1 and 2 carried three RBVs that recorded green, red and photographic IR
images of the same area on the ground. These images can be projected in blue, green, and
red to produce infrared color images comparable to MSS images. There were problems
with the color RBV system, and the images were inferior to MSS images; for these reasons,
only a few color RBV images were acquired. Landsat 3 deployed an extensively modified
version of RBV.

Typical RBV Images

In typical RBV images, the array of small crosses, called reseau marks, is used for
geometric control. The 1:1,000,000 scale is the same as that of the MSS image with which
these RBV frames may be compared. This comparison illustrates the advantage of the
higher spatial resolution of the RBV. For example, in the urban area the grid of secondary
streets is recognizable on the RBV image but not on the MSS image.

Landsat 3 collected RBV images of many areas around the world. Where RBV and
MSS images are available, it is useful to obtain both data sets in order to have the
advantages of higher spatial resolution (from RBV) plus IR color spectral information (from
MSS).

LANDSAT TM Image Interpretation

Landsat TM images are useful for image interpretation for a much wider range of
applications than Landsat MSS images. This is because the TM has both an increase in the
number of spectral bands and an improvement in spatial resolution as compared with the
MSS. The MSS images are most useful for large area analyses, such as geologic mapping.
More specific mapping, such as detailed land cover mapping, is difficult on MSS images
because so many pixels of the original data are "mixed pixels", pixels containing more than
one cover type. With the decreased IFOV of the TM data, the area containing mixed pixels
is smaller and interpretation accuracies are increased. The TM's improved spectral and
radiometric resolution also aid image interpretation. In particular, the incorporation of the
mid-IR bands (bands 5 and 7) has greatly increased the vegetation discrimination of TM
data.

There is a dramatic improvement in resolution from the MSS's ground resolution
cell of 79 x 79 m to the TM's ground resolution cell of 30 x 30 m. Many indistinct light-
toned patches on the MSS image can be clearly seen as recent suburban development on
the TM image. Also, features such as agricultural field patterns that are indistinct on the
MSS image can be clearly seen on the TM image.
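The improvement is easy to quantify from the cell sizes given above: each 79 x 79 m MSS ground resolution cell covers the area of roughly seven 30 x 30 m TM cells, so far fewer TM pixels straddle cover-type boundaries:

```python
mss_cell_m = 79   # MSS ground resolution cell side, metres
tm_cell_m = 30    # TM ground resolution cell side, metres

# Number of TM pixel areas per MSS pixel area.
area_ratio = (mss_cell_m ** 2) / (tm_cell_m ** 2)
print(round(area_ratio, 1))   # about 6.9
```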

The TM has more narrowly defined wavelength ranges for the three bands roughly
comparable to MSS bands 1 to 4, and has added bands in four wavelength ranges not
covered by the MSS.
Table 2: Thematic Mapper spectral bands

Band  Wavelength (µm)  Characteristics
1     0.45 to 0.52     Blue-green. No MSS equivalent. Maximum penetration of
                       water, which is useful for bathymetric mapping in shallow
                       water. Useful for distinguishing soil from vegetation and
                       deciduous from coniferous plants.
2     0.52 to 0.60     Green. Coincident with MSS band 4. Matches the green
                       reflectance peak of vegetation, which is useful for
                       assessing plant vigor.
3     0.63 to 0.69     Red. Coincident with MSS band 5. Matches a chlorophyll
                       absorption band that is important for discriminating
                       vegetation type.
4     0.76 to 0.90     Reflected IR. Coincident with portions of MSS bands 6
                       and 7. Useful for determining biomass content and for
                       mapping shorelines.
5     1.55 to 1.75     Reflected IR. Indicates moisture content of soil and
                       vegetation. Penetrates thin clouds. Good contrast
                       between vegetation types.
6     10.40 to 12.50   Thermal IR. Nighttime images are useful for thermal
                       mapping and for estimating soil moisture.
7     2.08 to 2.35     Reflected IR. Coincides with an absorption band caused
                       by hydroxyl ions in minerals. Ratios of bands 5 and 7 are
                       potentially useful for mapping hydrothermally altered
                       rock associated with mineral deposits.
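The band 5 / band 7 ratio mentioned for band 7 can be sketched in a few lines. The digital numbers below are tiny synthetic stand-ins, since no real scene is at hand; real processing would also need to mask zero-valued band 7 pixels before dividing:

```python
# Synthetic 2 x 2 digital numbers for TM bands 5 and 7 (illustrative only).
band5 = [[120, 80], [100, 60]]
band7 = [[40, 80], [50, 60]]

# Hydroxyl-bearing altered rock absorbs in band 7, so a high band5/band7
# ratio is a potential indicator of hydrothermal alteration.
ratio = [
    [b5 / b7 for b5, b7 in zip(row5, row7)]
    for row5, row7 in zip(band5, band7)
]
print(ratio)   # the top-left pixel stands out with a ratio of 3.0
```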

Thermal IR sensing detects energy emitted from objects at ambient earth
temperatures within the 8 to 14 µm wavelength range. When objects are extremely hot,
such as flowing lava, emitted energy can be sensed at wavelengths shorter than thermal
infrared wavelengths (3 to 14 µm). Forest fires are another example of an extremely hot
phenomenon that can be sensed at wavelengths shorter than thermal infrared.

Image Mapping

Thematic Mapper data have been used extensively to prepare image maps over a
range of mapping scales. Such maps have proven to be useful tools for resource
assessment in that they depict the terrain in actual detail, rather than in the line-and-
symbol format of conventional maps. Image maps are often used as map supplements to
augment conventional map coverage and to provide coverage of unmapped areas.

As we can see, there are several digital image processing procedures that may be
applied to the image mapping process. These include large area digital mosaicking, image
enhancement procedures, merging of image data with conventional cartographic
information, and streamlining the map production and printing process using highly
automated cartographic systems. Extensive research continues in the area of image
mapping with Landsat, SPOT, and IRS data; in the latter two, pushbroom scanners have
been deployed. Stereo coverage with a desired B/H ratio is also possible. Resolution has
also improved, to 20 m and 10 m in SPOT and 23.5 m and 5.8 m in IRS-1C.

SPOT HRV & IRS Image Interpretation

The use of SPOT data for various interpretive purposes is facilitated by the system's
combination of multispectral sensing with excellent spatial resolution, geometric fidelity,
and the provision for multidate and stereo imaging.

Merging Data

An increase in the apparent resolution of SPOT and IRS multispectral images can be
achieved through the merger of multispectral and panchromatic data: a 20 m resolution
multispectral image is merged with 10 m panchromatic data in the case of SPOT, and
23.5 m multispectral data with 5.8 m panchromatic data in the case of IRS-1C. The
merged image maintains the colours of the multispectral image but has a resolution
equivalent to that of the panchromatic image. Both the spatial and spectral resolution of
the merged image approach those seen in small scale, high altitude, colour infrared aerial
photographs.
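A minimal sketch of one such merging scheme, a Brovey-style transform, in which each multispectral band is rescaled by the ratio of the panchromatic value to the band sum. This particular scheme is a common choice, not necessarily the one used for the imagery described here, and the arrays are tiny synthetic stand-ins:

```python
def brovey_sharpen(ms_bands, pan):
    """Brovey-style pan-sharpening on per-pixel lists.

    ms_bands: list of bands, each a flat list of pixel values
              (assumed already resampled to the pan grid).
    pan:      flat list of panchromatic pixel values.

    Each band is scaled by pan / sum(bands), so band ratios (colours)
    are preserved while brightness detail comes from the pan image.
    """
    n_pix = len(pan)
    sums = [sum(band[i] for band in ms_bands) for i in range(n_pix)]
    return [
        [band[i] * pan[i] / sums[i] for i in range(n_pix)]
        for band in ms_bands
    ]

# Two-pixel toy example: green, red and NIR bands plus a pan band.
ms = [[10.0, 20.0], [20.0, 20.0], [70.0, 60.0]]
pan = [200.0, 50.0]
sharp = brovey_sharpen(ms, pan)
print(sharp)
```

Note that within each pixel the ratios between bands are unchanged, which is why the merged image keeps the multispectral colours.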

Using the parallax resulting when SPOT & IRS-1C data are acquired from two
different orbit tracks, perspective views of a scene can be calculated and displayed.

Perspective views can also be produced by processing data from a single image with digital
elevation data of the same scene.

Analysis of MSS Images

MSS images are interpreted in much the same manner as small-scale photographs
or images and photographs acquired from manned satellites. However, there are some
differences and potential advantages of MSS images. Linear features caused by
topography may be enhanced or suppressed on MSS images depending on orientation of
the features relative to sun azimuth. Linear features trending normal, or at a high angle, to
the sun azimuth are enhanced by shadows and highlights. Those trending parallel with the
azimuth are suppressed and difficult to recognize, as are linear features parallel with the
MSS scan lines.
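This orientation effect can be captured in a toy scoring function: shadow enhancement is greatest when a linear feature trends normal to the sun azimuth and vanishes when it trends parallel. The sine model is a simplification for illustration, not a formula from the text:

```python
import math

def shadow_enhancement(feature_azimuth_deg, sun_azimuth_deg):
    """Relative shadow enhancement of a linear feature, from 0 to 1.

    Returns 1.0 when the feature trends normal to the sun azimuth
    and 0.0 when it trends parallel (and is therefore suppressed).
    """
    angle = math.radians(feature_azimuth_deg - sun_azimuth_deg)
    return abs(math.sin(angle))

# With a sun azimuth of 135 degrees (illustrative): a NE-SW feature
# (45 degrees) is enhanced, a NW-SE feature (135 degrees) is suppressed.
print(round(shadow_enhancement(45, 135), 2))
print(round(shadow_enhancement(135, 135), 2))
```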

Scratches and other film defects may be mistaken for natural features, but these
defects are identified by determining whether the questionable features appear on more
than a single band of imagery. Shadows of aircraft contrails may be mistaken for tonal
linear features but are recognized by checking for the parallel white image of the contrail.
Many questionable features are explained by examining several images acquired at
different dates. With experience, an interpreter learns to recognize linear features of
cultural origin, such as roads and field boundaries.

The recommended interpretation procedure in geology is to plot lineaments as
dotted lines on the interpretation map. Field checking and reference to existing maps will
identify some lineaments as faults; for these the dots are connected by solid lines on the
interpretation map. The remaining dotted lines may represent (1) previously unrecognized
faults, (2) zones of fracturing with no displacement, or (3) lineaments unrelated to geologic
structure.

The repeated coverage of Landsat enables interpreters to select images from the
optimum season for their purpose. Winter images provide minimum sun elevations and
maximum enhancement of suitably oriented topographic features. Such features are
commonly enhanced on images of snow-covered terrain because the snow eliminates or
suppresses tonal differences and minor terrain features, such as small lakes. Areas with
wet and dry seasonal climates should be interpreted from images acquired in the different
seasons. In general, cloud-free rainy-season images are best for most applications, but this
selection may not apply everywhere.

The significance of colours on Landsat IR colour images was described earlier in the
section on MSS images. For special interpretation objectives, black-and-white images of
individual bands are useful. Tables 2 and 4 give some specific applications of TM and IRS
bands.

Points to remember

1. Cloud-free MSS images are available for most of the world with no political or security
restrictions.
2. The low to intermediate sun angle enhances many subtle geologic features.
3. Long-term repetitive coverage provides images at different seasons and illumination
conditions.
4. The images are low in cost.
5. IR color composites are available for many of the scenes. With suitable equipment,
color composites may be made for any image.
6. Synoptic coverage of each scene under uniform illumination aids recognition of major
features. Mosaics extend this coverage.
7. There is negligible image distortion.
8. Images are available in a digital format suitable for computer processing.
9. Stereo coverage is limited, except with SPOT and IRS-1C.
10. TM provides images with improved spatial resolution, extended spectral range, and
additional spectral bands.

In addition to the applications shown in this chapter, Landsat images are valuable
for resource exploration, environmental monitoring, land-use analysis, and evaluating
natural hazards.

Another major contribution of Landsat is the impetus it has given to digital image
processing. The availability of low-cost multispectral image data in digital form has
encouraged the application and development of computer methods for image processing,
which are increasing the usefulness of the data for interpreters in many disciplines.

Since the first launch in 1972, Landsat has evolved from an experiment into an
operational system. There have been steady improvements in the quality and utility of the
image data. Many users throughout the world now rely on Landsat, SPOT and IRS images
as routinely as they do on weather and communication satellites. It is essential that all
remote sensing programs continue to provide images.

References:
1. Campbell, J.B., 1996. Introduction to Remote Sensing. Taylor & Francis.
2. Curran, P.J., 1985. Principles of Remote Sensing. Longman Group Limited, London.
   282 pp.
3. Sabins, F.F. Remote Sensing: Principles and Image Interpretation.
4. Lillesand, T.M. and Kiefer, R.W., 1993. Remote Sensing and Image Interpretation,
   Third Edition. John Wiley & Sons.
5. Avery, T.E. Interpretation of Aerial Photographs.
6. http://www.ccrs.nrcan.gc.ca/ccrs/learn/tutorials/
