GNR630
Introduction to Geo-Spatial
Technologies
Instructors:
Prof. B. Krishna Mohan
Prof. (Mrs.) P. Venkatachalam
Prof. S. S. Gedam
CSRE, IIT Bombay
bkmohan/pvenk/shirish@csre.iitb.ac.in
Slot 6
Lecture 03-04 Image Display and Corrections
January 13/18, 2012, 11.05 AM to 12.30 PM

IIT Bombay

Slide 1

Jan 13/18, 2012

Lecture03-04

Image Display and Corrections

Overview of the Course

Image Display
Histogram
Distortions in satellite images
Georeferencing satellite images
Image Enhancement

Encoding
Normally the quantized image is binary encoded.
If the number of quantization levels is at most 256 (values 0 to 255), each pixel is represented by 1 byte.
If the number of levels exceeds 256, each pixel is assigned 2 bytes.
At present, the American satellites QuickBird and Ikonos, the Indian satellite Cartosat and a few others use 11-bit and 10-bit ADCs and store data at 2 bytes per pixel on disk.
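The byte count follows directly from the number of quantization levels. A minimal Python sketch (the function name is illustrative):

```python
import math

def bytes_per_pixel(levels):
    """Whole bytes needed to store one pixel quantized to `levels` levels."""
    bits = max(1, math.ceil(math.log2(levels)))
    return math.ceil(bits / 8)

# 8-bit data (256 levels) fits in one byte; 10- and 11-bit sensor
# data (1024 / 2048 levels) must be stored in two bytes per pixel.
print(bytes_per_pixel(256))   # 1
print(bytes_per_pixel(1024))  # 2
print(bytes_per_pixel(2048))  # 2
```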


Motivation for Digital Image Processing
Why Digital Image Processing for Remote Sensing?
Nature of data (inherently digital)
Flexibility offered by computers
Reducing the bias of human analysts
Standardizing routine operations
Rapid handling of large volumes of data


Motivation for Digital Image Processing
Why Digital Image Processing for Remote Sensing?
Certain operations cannot be done manually
(removal of distortions)
Generation of different views
Archival in compact/compressed mode
Easy to share and disseminate


The Origins of Digital Image Processing

Early 1920s: One of the first applications in the newspaper industry, cable transmission between NY and
London
Source: http://www.imageprocessingplace.com

Historical Developments
Mid to late 1920s: Improvements to the Bartlane system resulted in higher-quality images:
New reproduction processes based on photographic techniques
Increased number of tones in reproduced images
Improved digital image quality


Space Race for Moon


Improvements in computing technology and the onset of the space race for the moon led to a surge of work in digital image processing.
Computers were used to improve the quality of images of the moon taken by the Ranger 7 probe. Such techniques were later used in other space missions, including the Apollo landings.


Medicine
Digital image processing began to be used in medical applications.
1979: Sir Godfrey N. Hounsfield and Prof. Allan M. Cormack shared the Nobel Prize in Medicine for the invention of tomography, the technology behind Computerised Axial Tomography (CAT) scans.


1980s and later


1980s to today: The use of digital image processing techniques has exploded, and they are now used for all kinds of tasks in a wide range of areas:
Image enhancement/restoration
Artistic effects
Medical visualization
Industrial inspection
Law enforcement
Human computer interfaces

Wavelengths used for Imaging

(Figure: the electromagnetic spectrum, ordered from short wavelength / high frequency to long wavelength / low frequency)
Gamma rays
X-rays
Visible/Infrared rays
Microwaves
Radio waves
Also used for imaging: ultrasound waves, seismic waves

Components of an Image Processing System

Image Sensors
Image Display
Image Storage
Computer
Image Processing software
Special Purpose graphics hardware
Image printers/plotters


(Block diagram: image acquisition from the real world feeds a digital computer running DIP/DIA software, supported by dedicated graphics processing, image display(s), image storage and hardcopy output.)

PC-Based Image Processing Systems


Today's personal computers, with a digital still / video camera and a printer, can become full-fledged image processing systems.
Most commercial / shareware / freeware image processing software will run on normal personal computer configurations.


Steps in Digital Image Processing

(Flowchart) Image Acquisition → Image Corrections → Image Enhancement → Image Transforms → Feature Selection → Image Classification → Final Interpretation

Steps in Image Processing

Image Acquisition
Image Corrections
Image Enhancement
Image Transforms
Feature Selection
Classification
Accuracy Assessment
Change Detection
Efficient Representation and Coding
Applications

Image Display

(Diagram: image data on disk drives the red, green and blue guns of the image display system.)


Concept of a Color Composite


In order to generate a color display of a
satellite image on the monitor, we need to
choose
Data to represent in red color
Data to represent in green color
Data to represent in blue color

Such a display is known as a color composite



False Color Composite


A False Color Composite (FCC) is formed when the
data assigned to red / green / blue color on the
display is collected outside the visible region
A standard FCC comprises:
Wavelength of data → Display color
Near-infrared wavelength → Red
Red wavelength → Green
Green wavelength → Blue
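The band-to-gun assignment above can be sketched in Python. The small band arrays and the `make_fcc` helper are hypothetical; real software would operate on full raster arrays:

```python
def make_fcc(nir, red, green):
    """Standard false color composite: NIR -> red gun,
    red band -> green gun, green band -> blue gun.
    Each input is a row-major list of rows of gray levels."""
    rows, cols = len(nir), len(nir[0])
    return [[(nir[i][j], red[i][j], green[i][j])
             for j in range(cols)] for i in range(rows)]

nir   = [[200, 210], [190, 180]]
red   = [[100, 110], [ 90,  80]]
green = [[ 50,  60], [ 40,  30]]
fcc = make_fcc(nir, red, green)
print(fcc[0][0])  # (200, 100, 50): NIR-bright vegetation displays as red
```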

Example of FCC


Natural Color Composite


A Natural Color Composite (NCC) is formed
when the data assigned to red/green/blue is
collected in the same wavelengths
For instance:
Wavelength of data → Display color
Red wavelength → Red
Green wavelength → Green
Blue wavelength → Blue

Natural Color Composite


Black & White Image


A black/white image is one that has no color, only white, black and shades of gray.
The smallest value at a pixel is 0 (black).
The largest value is 2^L − 1 (white), where L is the number of bits per pixel.
Intermediate values represent shades of gray, from black increasing towards white.
For L = 8: black = 0, white = 255.

Gray Scale

(Figure: gray scale ramp from black through dark gray and light gray to white)

Gray Scale Images


Examples of gray scale images in remote sensing:
An image of a single band of a multispectral image
An image from a radar sensor (SAR image)
An image from a panchromatic sensor

How does this happen on a display monitor?
When the red, green and blue display guns are fed the same signal, the resulting display on the screen is black and white.

Black & White Image


Common Data Structures to Store Multiband Data
BIL – band interleaved by line
BSQ – band sequential
BIP – band interleaved by pixel


Image Acquisition
(Diagram: sensor optics image the ground in Bands 1, 2 and 3; the detector width on the ground equals the pixel width, and successive lines are acquired along the direction of satellite motion.)

BIL
Band interleaved by line storage format.
For an M×N image with K bands, one row on the ground is stored as one line from each band in turn:
Band 1: B1,1 B1,2 … B1,N
Band 2: B2,1 B2,2 … B2,N
…
Band K: BK,1 BK,2 … BK,N
A single file on disk or CD contains M·K rows, each having N columns; every K consecutive rows in the file correspond to ONE row on the ground.
BIL file structure (image size: M rows, N columns, K bands):
Band 1 Row 1
…
Band K Row 1
Band 1 Row 2
…
Band K Row 2
…
Band 1 Row M
…
Band K Row M

BIL
BIL is a popular format for storing multispectral images, and is supported by most remote sensing software (ERDAS, PCI, etc.).
It is well suited when multiband data analysis is required.
Considerable data I/O is involved when access to a single-band image is needed on sequential-access systems; the overhead is moderate on random-access systems.


BSQ
The band sequential method stores one full single-band image after another:
Band 1: B1,1 B1,2 … B1,N / B2,1 B2,2 … B2,N / … / BM,1 BM,2 … BM,N
The full images for Band 2, Band 3, …, up to Band K follow.

BSQ FILE STRUCTURE
(Image size: M rows, N columns, K bands)
Band 1: Row 1 … Row M
Band 2: Row 1 … Row M
…
Band K: Row 1 … Row M

Slide 33

BSQ
Ideally suited when the multiband image is processed one band at a time, as in image enhancement, neighbourhood filtering, etc.
More overhead is incurred when all band values are required at each pixel.


BIP
Band interleaved by pixel.
Commonly used for storing color images, with red, green and blue values alternating: RGBRGBRGB…
Not used in present times to store satellite images, but it was used in the early stages of Landsat data distribution.
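The three layouts differ only in how the (row, column, band) index is linearized into a byte offset. A minimal sketch of the offset arithmetic, assuming zero-based indices and a raw, headerless file (`pixel_offset` is an illustrative name):

```python
def pixel_offset(layout, row, col, band, M, N, K, bpp=1):
    """Byte offset of sample (row, col, band) in a raw multiband file.
    layout: 'BIL', 'BSQ' or 'BIP'; image is M rows x N cols x K bands,
    bpp bytes per pixel, all indices zero-based."""
    if layout == 'BIL':   # every K consecutive file rows = one ground row
        return ((row * K + band) * N + col) * bpp
    if layout == 'BSQ':   # full single-band images stored one after another
        return ((band * M + row) * N + col) * bpp
    if layout == 'BIP':   # all K band values of one pixel stored together
        return ((row * N + col) * K + band) * bpp
    raise ValueError("unknown layout: %s" % layout)

# 2 rows x 3 cols x 2 bands: the same sample lands at different offsets.
print(pixel_offset('BIL', 1, 0, 0, 2, 3, 2))  # 6
print(pixel_offset('BSQ', 1, 0, 0, 2, 3, 2))  # 3
print(pixel_offset('BIP', 1, 0, 0, 2, 3, 2))  # 6
```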

BIP Structure
Within each image row, the K band values for pixel 1 are stored together, then the K band values for pixel 2, and so on up to pixel N:
Row 1: (Band 1 … Band K) for Pixel 1, (Band 1 … Band K) for Pixel 2, …, (Band 1 … Band K) for Pixel N
Row 2: the same pattern
…
Row M: the same pattern


Formats for Distributing Remotely Sensed Data
Suppliers provide image data in different formats:
LGSOWG (super-structure format)
GeoTIFF format
Proprietary software formats (e.g., ERDAS IMG)


Image Preprocessing


Distortions in Satellite Images


Nature of Distortion
Systematic (predictable)
Random

Types of distortions:
Geometric distortions
Position of a pixel in the image in error
Shape of a pixel in the image in error
Radiometric distortions
Recorded value in error

Background
The signal received at the satellite depends on
several factors
Performance of the onboard electronics
Atmospheric conditions
Terrain elevation
Terrain slope and
Reflectance characteristics of objects
The first four factors can result in distortions in the
signal received

Atmospheric Scattering
Three types of scattering are considered:
Rayleigh scattering, where the particle size is small compared to the wavelength of radiation; shorter wavelengths are affected most.
Mie scattering, where the particle size is comparable to the wavelength of radiation; smoke and dust are typical influencing factors.
Non-selective scattering, where the particle size is much larger than the wavelength of radiation (e.g., water droplets); all wavelengths are scattered roughly equally.
GNR630

Lecture 3-4

IIT Bombay

B. Krishna Mohan

Slide 40

Scattering Phenomena

(Reproduced with permission from the lecture notes of Prof. John Jensen, University of South Carolina)


Absorption Windows

(Reproduced with permission from the lecture notes of Prof. John Jensen, University of South Carolina)


Detector Errors
Shot noise (random bad pixels)
Detector malfunction resulting in row or
column drop-outs
Detector malfunction resulting in delayed row
or column start
Detector malfunction resulting in a striping
effect (sensor not adapting to changes in
terrain conditions)

Striping Errors
IRS-1C Panchromatic Sensor

Shot Noise
Shot noise pixels can be eliminated by
comparing them with their neighboring pixels
If the gray levels at the neighboring pixels are
very different from that of the pixel under
observation, then the pixel is a noise pixel,
whose gray level is replaced by the average of
the neighboring pixels.
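The neighbor-comparison rule described above can be sketched as follows; the threshold value is an assumption, chosen per sensor in practice:

```python
def remove_shot_noise(img, threshold=50):
    """Replace an interior pixel by the mean of its 8 neighbours when it
    differs from that mean by more than `threshold` (a hypothetical cutoff)."""
    M, N = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, M - 1):
        for j in range(1, N - 1):
            nbrs = [img[a][b]
                    for a in (i - 1, i, i + 1)
                    for b in (j - 1, j, j + 1)
                    if (a, b) != (i, j)]
            mean = sum(nbrs) / len(nbrs)
            if abs(img[i][j] - mean) > threshold:
                out[i][j] = round(mean)
    return out

# A single bad pixel (255) in a flat 100-valued patch is corrected.
img = [[100, 100, 100], [100, 255, 100], [100, 100, 100]]
print(remove_shot_noise(img)[1][1])  # 100
```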


Removal of Shot Noise

(Reproduced with permission from the lecture notes of Prof. John Jensen, University of South Carolina)


Line or Column Drop-outs

In the case of a satellite like Landsat, with an electromechanical scanner, sensor malfunction results in line (row) drop-outs, i.e., during the scan from left to right the detector does not function.
In the case of pushbroom sensors such as SPOT, IRS, Ikonos, etc., malfunctioning of some of the detector elements may leave entire columns blank.

Scanning Mechanism

(Reproduced with permission from the lecture notes of Prof. John Jensen, University of South Carolina)


Correction of Line or Column Drop-outs


By comparing the histograms of pixels in
different rows (or columns), the defective
rows (or columns) can easily be highlighted.
How?
(Assuming that successive rows or columns
are not defective), the defective row (or
column) is replaced by the average of the rows
above and below (or columns to the left and
right)
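Replacing a defective row by the average of its neighbours is a one-line operation per pixel. A minimal sketch, assuming the rows above and below are good:

```python
def fix_dropout_row(img, r):
    """Replace defective row r by the average of the rows above and below
    (assumes rows r-1 and r+1 are not themselves defective)."""
    above, below = img[r - 1], img[r + 1]
    img[r] = [round((a + b) / 2) for a, b in zip(above, below)]
    return img

# Row 1 dropped out (all zeros); it is rebuilt from rows 0 and 2.
img = [[10, 10], [0, 0], [30, 30]]
print(fix_dropout_row(img, 1)[1])  # [20, 20]
```

The same idea applies column-wise for pushbroom sensors, averaging the columns to the left and right.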

Geometric Distortions
Nature of geometric distortion
Positional errors
Shape of pixel

Sources of distortion

Earth curvature
Relative motion between satellite and earth
Satellite attitude
Satellite altitude variations
Errors in case of electromechanical scanners


Example: Panoramic Distortion


Resolution at nadir is higher than at off-nadir locations: for a platform at altitude h, the slant range at scan angle θ is h/cos θ, so the effective pixel width grows away from nadir.
(Read Section 2.3 in Richards and Jia's book.)


External Geometric Errors


External geometric errors are induced by
satellite attitude (roll, pitch and yaw) and
variations in altitude
Both result in geometric distortions, that can
be corrected by modeling the imaging process


Altitude Errors
Remote sensing systems flown at a constant altitude above
ground level result in imagery with a uniform scale all along
the flight line.
Increasing the altitude will result in smaller-scale imagery.
That is the size of pixel on the ground increases, lowering the
sensor resolution. Decreasing the altitude of the sensor
system will result in larger-scale imagery, due to reduction in
size of pixel on the ground, increasing the spatial resolution
above the specification value.


Attitude Errors
Prominent errors:
Roll: the spacecraft rotates about the direction of motion
Pitch: the spacecraft rotates in a vertical plane perpendicular to the direction of motion
Yaw: the spacecraft rotates so that it moves at an angle to the direction of motion

Both altitude and attitude errors cause geometric distortions.


Geometric Corrections
Nature of geometric corrections
From modeling ALL errors
Mapping image pixels to a reference coordinate system
with desired pixel size and shape

Modeling Errors
Systematic errors can be estimated in advance
Other errors can be estimated based on telemetry data

A combined approach is commonly followed



Geometric Corrections
Pixel mapping using mathematical transformations
A reference coordinate system is established, with desired
pixel size and shape
Correspondence between pixel in the reference frame and
the image is established
Pixel value in the reference frame is computed from the
known values in the image

The result is a corrected image generated in the desired frame of reference.


Polynomial Image Correction

Let the uncorrected image be v(x, y), and let the image in the corrected reference frame be u(x′, y′).
The task is to find the mapping connecting the two frames of reference. For an affine transformation, let
x′ = a1·x + a2·y + a3
y′ = b1·x + b2·y + b3
We need six equations to solve for the six coefficients.
Using an affine transformation, we can handle translation, scaling, rotation and shearing distortions.


Polynomial Image Correction

If a sensor's IFOV on the ground is, say, 5.8 m × 5.8 m, we can convert it to 6 m × 6 m or 5.5 m × 5.5 m during image correction.
For higher-order transformations:
x′ = a1·x + a2·y + a3·xy + a4·x² + a5·y² + a6
y′ = b1·x + b2·y + b3·xy + b4·x² + b5·y² + b6
There are 12 coefficients to be found, for which we need a minimum of 12 equations.

Polynomial Coefficients
In order to find the coefficients, we need to precisely identify points in the reference frame as well as in the uncorrected image.
For the six variables of the affine transform, three pairs of points (x, y) and (x′, y′) are the minimum required.
More points are useful to detect any errors in the selection of the points.
A minimum of six pairs of points is required for a second-order polynomial.


Control Points
How to select the corresponding points?


Image Corrections
A ground control point is a location on the
surface of Earth that can be accurately
located in the image as well as on a reference
frame such as a map
The mathematical transformation that maps
the pixels in the (distorted) image onto the
reference map is known as the geometrical or
spatial transformation

Image Corrections
The computation of the pixel values (gray
levels) after the geometric transformation is
often referred to as resampling that is
essentially a spatial interpolation
The geometric correction is influenced by the
choice of spatial transformation and the
resampling procedure

Ground Control Points


These are important since they allow us to compute the
transformation coefficients.

x′1 = a1·x1 + b1·y1 + c1
y′1 = a2·x1 + b2·y1 + c2
x′2 = a1·x2 + b1·y2 + c1
y′2 = a2·x2 + b2·y2 + c2
x′3 = a1·x3 + b1·y3 + c1
y′3 = a2·x3 + b2·y3 + c2
More points are needed to check the accuracy of the control
points selected
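With exactly three non-collinear control point pairs, the two 3×3 systems above can be solved directly. A sketch using Cramer's rule (`fit_affine` is an illustrative name; production software would use least squares over many GCPs):

```python
def fit_affine(src, dst):
    """Solve x' = a1*x + b1*y + c1 and y' = a2*x + b2*y + c2 from three
    non-collinear control point pairs src[i] -> dst[i] (Cramer's rule).
    det == 0 would mean the three points are collinear."""
    (x1, y1), (x2, y2), (x3, y3) = src
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)

    def solve(r1, r2, r3):
        a = (r1 * (y2 - y3) - y1 * (r2 - r3) + (r2 * y3 - r3 * y2)) / det
        b = (x1 * (r2 - r3) - r1 * (x2 - x3) + (x2 * r3 - x3 * r2)) / det
        c = (x1 * (y2 * r3 - y3 * r2) - y1 * (x2 * r3 - x3 * r2)
             + r1 * (x2 * y3 - x3 * y2)) / det
        return a, b, c

    xs = solve(*[p[0] for p in dst])   # (a1, b1, c1)
    ys = solve(*[p[1] for p in dst])   # (a2, b2, c2)
    return xs, ys

# Recover a known transform x' = 2x + 1, y' = 3y + 2 from three GCP pairs.
xs, ys = fit_affine([(0, 0), (1, 0), (0, 1)], [(1, 2), (3, 2), (1, 5)])
print(xs, ys)  # (2.0, 0.0, 1.0) (0.0, 3.0, 2.0)
```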

Ground Control Points


GCPs are obtained from:
Survey of India topographic maps (digital or
paper) at 1:25,000 or 1:50,000 scale
Other maps with ground reference
Global Positioning Systems (GPS)

It is important to choose GCPs that are invariant with time, since the map and image are often years apart in time.

(Reproduced with permission from the lecture notes of Prof. John Jensen, University of South Carolina)


Control Point Selection


Computation of Spatial Transformation

The first-order affine transformation is adequate to account for several forms of distortion:
Skew
Rotation
Scale changes in the x and y directions
Translation in the x and y directions


Computation of Spatial Transformation


Given a map reference, we define the pixel size such
that after geometric correction, the image aligns
with the map reference, with a pixel size chosen by
the user.
Note that the pixel size as acquired by the satellite can differ from the pixel size chosen after geometric correction.


Spatial Transformation

(Reproduced with permission from the lecture notes of Prof. John Jensen, University of South Carolina)


Errors in Transformation
If the GCPs selected are in error, the transformation maps the points in the image inaccurately onto the reference. The error can be measured in terms of the Root Mean Squared (RMS) error:

RMS_error = sqrt( (1/N) · Σ_{i=1..N} [ (x′orig,i − x′comp,i)² + (y′orig,i − y′comp,i)² ] )


Errors in Transformation
The error for each point i is given by

Ei = sqrt( (x′orig,i − x′comp,i)² + (y′orig,i − y′comp,i)² )

It is common to initially select more GCPs and keep those that result in the smallest RMS error.
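Both the per-point error and the RMS error can be computed in a few lines; `orig` and `comp` are lists of (x′, y′) pairs, with `comp` the positions predicted by the fitted transformation:

```python
import math

def rms_error(orig, comp):
    """Per-GCP residuals and overall RMS error between true GCP
    positions (orig) and transformed positions (comp)."""
    errs = [math.hypot(xo - xc, yo - yc)
            for (xo, yo), (xc, yc) in zip(orig, comp)]
    rms = math.sqrt(sum(e * e for e in errs) / len(errs))
    return errs, rms

errs, rms = rms_error([(0, 0), (3, 4)], [(0, 0), (0, 0)])
print(errs)  # [0.0, 5.0]
```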


Higher Order Transformations

Sometimes the first-order affine transformation may not accurately transform the image onto the map, in which case one can choose a higher-order polynomial transformation such as

x′ = a1·x² + b1·xy + c1·y² + d1·x + e1·y + f1
y′ = a2·x² + b2·xy + c2·y² + d2·x + e2·y + f2

The number of coefficients varies with the order of the transformation, and accordingly the minimum number of GCPs also varies. Commercial products support 1st to 5th order transformations.

Resampling or Intensity Interpolation


The transformation is of two types:
Forward mapping or input to output mapping, i.e., for
every pixel in the input image find the corresponding
location in the reference map according to the determined
transformation
Reverse mapping or output to input mapping, i.e., for
every pixel in the output frame find the corresponding
location in the input image according to the determined
transformation

Spatial Transformation

(Reproduced with permission from the lecture notes of Prof. John Jensen, University of South Carolina)


Intensity Interpolation
In this phase, gray level values are computed for the transformed pixels, since they now lie at locations different from those where the reflected energy was measured.
This step involves intensity interpolation: the computed values are weighted averages of existing measured values.

Interpolation Strategy
It is more convenient to use reverse mapping
or output to input mapping when
geometrically correcting multispectral images
The reference frame can be assigned a given
pixel size, and each pixel can then be located
in the input image through the spatial
transformation

Intensity Interpolation

(Diagram: a pixel location in the reference frame maps back into the input image, generally falling between measured samples.)

Nearest Neighbor Interpolation

Standard interpolation methods:
Nearest neighbor
Bilinear interpolation
Higher-order interpolation (bicubic)

(Diagram: point P surrounded by the four measured samples A, B, C and D.)


Nearest Neighbor Interpolation

P is the location to which a point from the reference frame gets transformed.
Measured values exist at A, B, C and D.
Let DAP be the distance of P from A, and likewise DBP, DCP and DDP.
In nearest neighbor interpolation, P is assigned the value of the element K ∈ {A, B, C, D} for which DKP = min{DAP, DBP, DCP, DDP}.
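The minimum-distance rule can be sketched directly; `samples` maps sample coordinates to gray levels (the names are illustrative):

```python
import math

def nearest_neighbor(p, samples):
    """Assign P the value of whichever measured sample lies closest.
    samples: dict mapping (x, y) -> gray level."""
    px, py = p
    nearest = min(samples, key=lambda q: math.hypot(q[0] - px, q[1] - py))
    return samples[nearest]

samples = {(0, 0): 10, (1, 0): 20, (0, 1): 30, (1, 1): 40}
print(nearest_neighbor((0.1, 0.2), samples))  # 10, closest corner is (0, 0)
```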

Issues in NN Interpolation
Fastest to compute.
No new values are introduced; only the values recorded by the sensor are retained.
Renders the image blocky if resampling from a large pixel size to a small pixel size is performed, e.g., resampling an IRS-1D LISS-III image to a 1 metre pixel size.

Bilinear Interpolation
As opposed to nearest neighbor interpolation, all four known points are employed in estimating the value at the unknown point.
The weights assigned to the four points depend on the proximity of the unknown point to these known points.


Bilinear Interpolation Principle

(Diagram: the unknown point P lies among the four known samples A, B, C and D; the distance d(C, P) is one of the four distances used to weight the samples.)

Bilinear Interpolation
Denoting the estimated gray level at point P by f(P), and the known values by f(A), f(B), f(C) and f(D):

f(P) = [ wA·f(A) + wB·f(B) + wC·f(C) + wD·f(D) ] / (wA + wB + wC + wD)

The weight wA = 1/d(A, P), where d(A, P) is the distance between point A and point P (and similarly for wB, wC and wD).
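The slide's weighting scheme is inverse-distance weighting over the four known samples; a minimal sketch, guarding the case where P coincides with a sample:

```python
import math

def idw_interpolate(p, samples):
    """Estimate f(P) as the inverse-distance weighted average of the
    known samples, with weights w = 1/d(., P) as in the formula above.
    samples: dict mapping (x, y) -> gray level."""
    num = den = 0.0
    for (x, y), value in samples.items():
        d = math.hypot(x - p[0], y - p[1])
        if d == 0:                       # P coincides with a sample
            return float(value)
        w = 1.0 / d
        num += w * value
        den += w
    return num / den

corners = {(0, 0): 10, (1, 0): 20, (0, 1): 30, (1, 1): 40}
print(idw_interpolate((0.5, 0.5), corners))  # 25.0, equidistant from all four
```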

Image to Image Registration

When a reference image is available to be used instead of a map, we register the input image to the reference image.
Registration is the process of making one image conform to another. If image A is not georeferenced and it is being used with image B, then image B must be registered to image A so that they conform to each other.
In this example, image A is not rectified to a particular map projection, so there is no need to rectify image B to a map projection.

Image Registration
Much of the procedure remains the same
except that if the pixel sizes of the input and
references are different, then one should be
first zoomed in / zoomed out to bring it to the
size of the other.
This step is vital when images from different
sensors are to be fused into one data set.

Image Mosaicing
If the study area is large, it may be covered by
two adjoining scenes.
Remote sensing data providers always keep a
small overlap between adjacent scenes.
Mosaicing is the procedure of joining
overlapping images into a single large image


Image Mosaicing
It is possible that the two adjoining images were acquired on two different dates, so the atmospheric conditions may vary.
The brightness levels of the images may then differ, and the place where the two images are joined, called the seam, will be quite visible.
Example: Google Earth imagery.

Seam of Mosaic


Mosaicing Process
Geo-referencing both images
Identification of the overlap area
Adjustment of the brightness levels of the two
images
Adjustment of brightness across the overlap
area (called feathering)
Filling out the blank areas with black/white
values
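Feathering across the overlap can be sketched for a single image row; the linear weight ramp is one common choice, and the function name is illustrative:

```python
def feather_blend(row_a, row_b, overlap):
    """Join two image rows whose last/first `overlap` pixels cover the
    same ground. Image A's weight ramps 1 -> 0 across the overlap
    (feathering), hiding the seam."""
    a_only = row_a[:-overlap]
    b_only = row_b[overlap:]
    seam = []
    for k in range(overlap):
        w = 1.0 - k / (overlap - 1)          # 1 at A's side, 0 at B's side
        seam.append(round(w * row_a[len(row_a) - overlap + k]
                          + (1 - w) * row_b[k]))
    return a_only + seam + b_only

# A bright scene (10s) blends gradually into a darker one (30s).
print(feather_blend([10] * 5, [30] * 5, 3))  # [10, 10, 10, 20, 30, 30, 30]
```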

Mosaicing Process

(Diagram: two overlapping scenes; the mosaic is the union image that contains both input images, joined across the overlap area.)


Example

(Reproduced with permission from the lecture notes of Prof. John Jensen, University of South Carolina)


Histogram of an Image


Concept of Histogram
Given a digital image F(m, n) of size M×N, we can define
f(j) = #{ (m, n) : F(m, n) = j, 0 ≤ m ≤ M−1, 0 ≤ n ≤ N−1 }
We refer to the sequence f(j), 0 ≤ j ≤ K−1, where K is the number of gray levels in the image, as the histogram of the image.
f(n) is interpreted as the number of times gray level n occurs in the image.
Obviously,
Σn f(n) = M·N
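Computing f(j) is a single pass over the image. A minimal sketch, which also checks that the counts sum to M·N:

```python
def histogram(image, K):
    """Count f(j), the number of pixels with gray level j, 0 <= j < K."""
    f = [0] * K
    for row in image:
        for g in row:
            f[g] += 1
    return f

img = [[0, 1, 1],
       [2, 1, 0]]
f = histogram(img, 4)
print(f)        # [2, 3, 1, 0]
print(sum(f))   # 6 = M * N, as required
```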

Sample Histogram


Histogram
With digital images, a range of values can be found at a given pixel. Depending on the radiometric resolution of the sensor from which the image is acquired, the gray level values may span [0–255], [0–1023], [0–2047], [0–63], [0–127], etc. in each band.


Histogram
The normalized version of f(n) may be defined as
p(n) = f(n) / (M·N)
p(n) is the probability of occurrence of gray level n in the image (in the relative-frequency sense), so
Σn p(n) = 1
MIN = min{ n : f(n) > 0 }
MAX = max{ n : f(n) > 0 }


Application of Histogram
The dynamic range of a display system is the min-to-max range of intensities that can be displayed.
The normal range is 0–255 for gray scale; for color it is 0–255 each for red, green and blue.
If the min–max range of the data is comparable to the dynamic range of the display device, a good quality display is possible.

Histogram of image with good contrast


Image


Histogram of Low Contrast Image


Low Contrast Image


Image Statistics from Histogram

MIN gray level: MIN = min{ n : f(n) > 0 }
MAX gray level: MAX = max{ n : f(n) > 0 }
Mean gray level: μ = Σn n·f(n) / (M·N)
Variance: σ² = Σn f(n)·(n − μ)² / (M·N)
Median: Med = smallest k such that Σ(n=0..k) f(n) ≥ (M·N)/2
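All of these statistics can be computed from the histogram alone, without revisiting the image. A sketch, following the definitions above:

```python
def histogram_stats(f, M, N):
    """Mean, variance and median of an M x N image, computed directly
    from its histogram f(n)."""
    total = M * N
    mean = sum(n * fn for n, fn in enumerate(f)) / total
    var = sum(fn * (n - mean) ** 2 for n, fn in enumerate(f)) / total
    cum = 0
    for n, fn in enumerate(f):
        cum += fn
        if cum >= total / 2:     # smallest k whose cumulative count
            median = n           # reaches half the pixels
            break
    return mean, var, median

# 2x2 image with two pixels at level 0 and two at level 2.
print(histogram_stats([2, 0, 2], 2, 2))  # (1.0, 1.0, 0)
```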

Image Enhancement


Motivation
Image data, when received in its original form, often has a poor visual appearance, lacking adequate contrast to perceive the important features in it.
The visual appearance needs to be improved through image enhancement procedures.

What is Contrast?
Contrast is the difference in the intensity
of the object of interest compared to the
background (rest of the image)
The perceptual contrast does not change
linearly with the difference in the
intensity


Case 1


Case 2


Low Contrast Image


Image Histogram


Relation to Gray Level Range in Image

Minimum and maximum gray levels in the image:
Imin = min{ i : h(i) > 0 }
Imax = max{ i : h(i) > 0 }
A poor-contrast image will have an (Imax − Imin) range much less than the display range of the monitor or printer.

Point Operations
Point operations are applied to pixels solely on the basis of the gray levels found there, without taking the pixel position into account.
Point operations map gray levels from one set of values to another:
g(i, j) = H[f(i, j)], where H is some transformation


Point Operations
In the case of point operations, the gray-level
transformation need NOT be computed at
each pixel in the image
If the radiometric resolution is K bits, then the
transformation has to be computed only for the 2^K
gray values 0, 1, …, 2^K − 1
This provides a look-up table (LUT) for mapping
each gray level to its new level
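The LUT idea can be sketched as follows — a minimal NumPy illustration (function name and signature are assumptions), assuming 8-bit data: the transform is evaluated once per gray level, and fancy indexing applies it per pixel.

```python
import numpy as np

def apply_point_op(image, transform):
    """Apply a gray-level point operation via a look-up table.

    transform is evaluated only for the 2^8 gray values,
    not once per pixel.
    """
    levels = np.arange(256)                         # K = 8 -> 2^8 gray values
    lut = np.clip(transform(levels), 0, 255).astype(np.uint8)
    return lut[image]                               # indexing applies the LUT per pixel
```

For example, `apply_point_op(img, lambda x: 255 - x)` produces the image negative.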

Slide 109

Linear Contrast Stretch


Suppose the display range of the monitor is
Omin to Omax, which means the monitor can
display (Omax − Omin + 1) levels
Example: Omin = 0, Omax = 255
Let the input range be Imin to Imax.

Slide 110

Linear Contrast Stretch


When the input image has poor contrast, the
range of gray levels in the image is much smaller than
the display range of the monitor:
(Omax − Omin) >> (Imax − Imin)
If Imax lies in the left half of the gray scale, the
image appears dark
If Imin lies in the right half of the gray scale, the
image appears light or faded out
Slide 111

Linear Contrast Stretch

Low-contrast images can be enhanced using a
simple linear contrast stretch operation, defined
(taking Omin = 0, as in the example above) by

y = ((Omax − Omin) / (Imax − Imin)) · (x − Imin)
  = m · (x − Imin), where m = (Omax − Omin) / (Imax − Imin)

x is the input level and y is the output level


Slide 112

Linear Contrast Stretch [plot of output level vs. input level: the
transfer line maps (Imin, Imax) onto (Omin, Omax); the m = 1 line
leaves levels unchanged; m > 1 stretches, m < 1 compresses; m is
the slope of the line]

Slide 113

Low Contrast Image

Slide 114

After linear contrast stretch [figure]


Slide 115

Low Contrast Image


Slide 116

After Enhancement

Slide 117

Use of the Histogram

The previous methods make use only of the
extreme gray levels in the input image;
the number of pixels at those levels is not
considered.
If one pixel is present at gray level 0 and one
at gray level 255, then the entire dynamic
range of the display device is considered occupied
Slide 118

Effective Gray Scale Limits [histogram: pixel count h(n) vs. gray
level n, 0 to 255, with effective limits A and B marking where the
counts become significant]

Slide 119

Interactive Choice of Limits

A and B can either be chosen interactively, or by
applying a minimum count on the number of pixels at
the extreme gray levels
Omin = 0; Omax = 255
Imin = A; Imax = B
Apply the standard linear contrast stretch procedure
y = m·(x − Imin)
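One common way to choose A and B from the histogram is by percentiles, which amounts to a minimum-count rule at the tails. A sketch, assuming NumPy and 8-bit data (the function name and the 2/98 defaults are illustrative assumptions, not from the lecture):

```python
import numpy as np

def percentile_stretch(image, lo_pct=2, hi_pct=98):
    """Pick A and B as histogram percentiles, then apply the
    standard linear stretch; values outside [A, B] are clipped."""
    a, b = np.percentile(image, [lo_pct, hi_pct])  # effective limits A and B
    y = (image.astype(float) - a) * 255.0 / (b - a)
    return np.clip(np.round(y), 0, 255).astype(np.uint8)
```

Unlike a min/max stretch, a single outlier pixel at 0 or 255 no longer dictates the mapping.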

Slide 120

Automatic Choice of Limits

Most software packages perform a default contrast
enhancement prior to displaying an image.
In such cases, Imin and Imax are computed
automatically as
Imin = μ − k·σ
Imax = μ + k·σ
k is an integer, often equal to 1 or 2
This is also referred to as a Standard Deviation Stretch
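The standard deviation stretch can be sketched as follows — an illustrative NumPy helper (the function name is an assumption), using the population mean and standard deviation of the image:

```python
import numpy as np

def stddev_stretch(image, k=2):
    """Standard-deviation stretch: Imin = mu - k.sigma, Imax = mu + k.sigma."""
    mu, sigma = image.mean(), image.std()
    i_min, i_max = mu - k * sigma, mu + k * sigma
    y = (image.astype(float) - i_min) * 255.0 / (i_max - i_min)
    return np.clip(np.round(y), 0, 255).astype(np.uint8)
```

With k = 2, roughly 95% of the levels of a near-Gaussian histogram fall inside [Imin, Imax]; larger k gives a gentler stretch.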

Slide 121

Non-linear Stretch
The human visual system is not linear; neither
are films and computer monitors.
When we wish to examine the details in the
dark portion of the image at the expense of
the bright portion, a linear contrast stretch
is not very useful.

Slide 122

Logarithmic Stretch
y = k·log(1 + x) + c
Nature of the log curve: rapid rise initially,
levelling off later
Greater difference in values of the log function for
smaller gray levels, smaller difference for
larger gray levels
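A minimal sketch of the log stretch in NumPy (function name is an assumption; here c = 0 and k is chosen so the maximum input level maps to 255, and the image is assumed to have a nonzero maximum):

```python
import numpy as np

def log_stretch(image):
    """Logarithmic stretch y = k.log(1 + x), with k scaled so the
    maximum input level maps to 255."""
    x = image.astype(float)
    k = 255.0 / np.log(1.0 + x.max())
    return np.round(k * np.log1p(x)).astype(np.uint8)
```

Note how a dark level such as 10 is pushed well up the output scale, expanding shadow detail while compressing the bright end.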


Slide 123

Logarithmic transformation [plot: output level vs. input level,
rising steeply at the dark end and levelling off at the bright end]
Slide 124

Another Example

Slide 125

After Logarithmic Stretch

Slide 126

Exponential Stretch
The exponential stretch is the opposite of the log
stretch, and enhances the details in the
brighter portion of the gray scale
y = k·x^r + c
The curve rises much faster for
higher values of the input level.
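The mapping y = k·x^r (a power-law form of the "exponential" stretch) can be sketched as below — an illustrative NumPy helper (function name and the choice c = 0, with k scaled so 255 maps to 255, are assumptions):

```python
import numpy as np

def power_stretch(image, r=2.0):
    """Power-law stretch y = k.x**r; r > 1 expands the bright end
    of the gray scale and compresses the dark end."""
    x = image.astype(float)
    k = 255.0 / (255.0 ** r)       # scale so level 255 maps to 255
    return np.round(k * x ** r).astype(np.uint8)
```

With r = 2, a mid-gray of 128 is pulled down to about 64, leaving more of the output range for the bright levels.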

Slide 127

Another Example

Slide 128

After Exponential Stretch

Slide 129

Piece-wise Enhancement
In piece-wise contrast enhancement, the input
gray scale is divided into several sub-ranges,
and a different enhancement may be applied
to each sub-range. This requires prior
knowledge of the gray-scale range of the objects
of interest.
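A piece-wise linear version can be sketched with interpolation between breakpoints — a minimal NumPy illustration (function name and parameters are assumptions):

```python
import numpy as np

def piecewise_stretch(image, xs, ys):
    """Piece-wise linear enhancement: input breakpoints xs are mapped
    to output values ys, with linear interpolation in between."""
    y = np.interp(image.astype(float), xs, ys)
    return np.round(y).astype(np.uint8)
```

For example, breakpoints `xs=[0, 100, 255]`, `ys=[0, 200, 255]` stretch the sub-range [0, 100] while compressing [100, 255].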

Slide 130

Piecewise contrast enhancement [plot: output y vs. input x, with
breakpoints x1, x2, x3, x4, x5 dividing the input gray scale into
sub-ranges, each mapped by its own line segment]

Slide 131

Thresholding
A trivial form of enhancement of the input
image is to map all values below a threshold
gray level to one constant value, and all
gray levels from the threshold value upward
to another constant value. This can
be expressed as
Y = y1 for x < T
Y = y2 for all other values of x
Slide 132

Thresholding
Another option is to map gray levels between
two bounds to a single value, while mapping
all others to a second value:
Y = y1 if T1 < x < T2
Y = y2 otherwise
This assumes that the gray-level range of the
desired object is known
Slide 133 [figure]

Slide 134

Threshold at gray level 60 [figure]


Slide 135

Density Slicing
Density slicing is a simple extension of thresholding, where a
separate threshold is used for every sub-range of gray levels in
the image. For instance, if the input image is sliced
using m different thresholds, then the resultant image Y is
given by the equations

Y = y1 if 0 ≤ x < T1
Y = y2 if T1 ≤ x < T2
…
Y = ym if Tm-1 ≤ x ≤ Tm
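Density slicing can be sketched with a bin-assignment step followed by a small look-up table (an illustrative NumPy helper; the function name is an assumption):

```python
import numpy as np

def density_slice(image, thresholds, levels):
    """Assign one output level per gray-level slice.

    thresholds = [T1, ..., Tm-1] split the gray scale into m slices;
    levels = [y1, ..., ym] gives one output value per slice.
    """
    idx = np.digitize(image, thresholds)           # slice index 0..m-1 per pixel
    return np.asarray(levels, dtype=np.uint8)[idx]
```

For example, thresholds [64, 128, 192] with levels [0, 85, 170, 255] produce a four-level sliced image.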



Slide 136

Input image and its histogram [figure]

Slide 137

Density sliced to four levels [figure]

Slide 138

Common Features of Contrast Enhancement Methods

All these methods map one gray level to another
The location of a gray level in the image is not relevant
All the methods can be implemented in real time
using look-up tables
All the methods operate on one color or one band at
a time in the case of color or multiband images
