

IMAGE ENHANCEMENT

What Is Image Enhancement?
Image enhancement is the process of making images more useful.
The reasons for doing this include:
- Highlighting interesting detail in images
- Removing noise from images
- Making images more visually appealing
Image Enhancement
- Accentuation, or sharpening, of image features such as edges, boundaries, or contrast.
- Does not increase the inherent information content of the data.
- In general, image enhancement is used to generate a visually desirable image.
- It can be used as a preprocess or a postprocess.
- Highly application dependent: a technique that works for one application may not work for another.
Image Enhancement (Contd.)
- Image enhancement methods are based on either spatial or frequency domain techniques:
  - Spatial domain approaches: direct manipulation of the pixels in an image
  - Frequency domain approaches: modification of the Fourier transform of an image
Spatial Domain Method
- An image processing operation may be expressed as

  g(x, y) = T[f(x, y)]

  where
  f(x, y): input image
  g(x, y): processed image
  T: operator on f, defined over some neighborhood of (x, y)
- Neighborhood shape: square or rectangular arrays are the most predominant due to the ease of implementation.
- This is known as mask processing / filtering; the masks are also called filters, windows, or templates (e.g. image sharpening uses a 3x3 neighborhood about a point (x, y) in an image).
Spatial Domain Method (contd.)
- Simplest form of T: the neighborhood is a single pixel, so g depends only on the value of f at (x, y).
- T is then a gray-level transformation function:

  s = T(r)

  where r and s are variables denoting the gray levels of f(x, y) and g(x, y) at any point (x, y).
Frequency Domain Method
- Convolution theorem:

  g(x, y) = h(x, y) * f(x, y)
  G(u, v) = H(u, v) F(u, v)

  where G, H, and F are the Fourier transforms of g, h, and f, and H(u, v) is the transfer function (optical transfer function).
- Typically f(x, y) is given and the goal, after computation of F(u, v), is to select H(u, v) so that the desired image

  g(x, y) = F^-1[ H(u, v) F(u, v) ]

  exhibits some highlighted feature of f(x, y).
Image Enhancement Techniques
- Point operations: image negative, contrast stretching, compression of dynamic range, gray-level slicing, image subtraction, image averaging, histogram operations
- Mask operations: smoothing operations, median filtering, sharpening operations, derivative operations, histogram operations
- Transform operations: low pass filtering, high pass filtering, band pass filtering, homomorphic filtering, histogram operations
- Coloring operations: false coloring, full color processing
Types of Image Enhancement
There are three types of image enhancement techniques:
- Point operations: each pixel is modified according to a particular equation, independent of the other pixels.
- Mask operations: each pixel is modified according to the values of the pixel's neighbors.
- Global operations: all the pixel values in the image or subimage are taken into consideration.
Point Operations
Point operations are zero-memory operations where a given gray level u in [0, L] is mapped into a gray level v in [0, L] according to a transformation

  v = f(u)

1. Contrast Stretching
2. Clipping and Thresholding
3. Digital Negative
4. Intensity Level Slicing
5. Bit Plane Slicing
6. Log Transformation
7. Power Law Transformation
Contrast Stretching
- Increases the dynamic range of the gray levels in the image.
- Before the stretching can be performed it is necessary to specify the upper and lower pixel value limits over which the image is to be normalized.
- Often these limits will just be the minimum and maximum pixel values that the image type concerned allows.
Contrast Stretching (contd.)
Call the lower and upper limits a and b respectively. Scan the image to find the lowest and highest pixel values currently present in the image; call these c and d. Then each pixel P is scaled using the following function:

  P_out = (P_in - c) * ((b - a) / (d - c)) + a

Values below 0 are set to 0 and values above 255 are set to 255.
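As a concrete illustration, here is a minimal NumPy sketch of this linear stretch (the function name, the uint8 output type, and the 0-255 clipping are my own choices, not from the slides):

import numpy as np

def contrast_stretch(image, a=0, b=255):
    """Linearly map the observed range [c, d] of the image onto [a, b]."""
    img = image.astype(np.float64)
    c, d = img.min(), img.max()            # lowest/highest values present
    if d == c:                             # flat image: nothing to stretch
        return np.full_like(image, a)
    out = (img - c) * ((b - a) / (d - c)) + a
    return np.clip(out, 0, 255).astype(np.uint8)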
After contrast stretching using a simple linear interpolation between c = 79 and d = 136: although this result is a significant improvement over the original, the enhanced image itself still appears somewhat flat.
(Figure: source image and its histogram.)
- Alternatively, we can achieve better results by contrast stretching the image over a narrower range of gray-level values from the original image.
- For example, by setting the cutoff fraction parameter to 0.03, we obtain the contrast-stretched image shown on the slide.
- Setting the cutoff fraction to a higher value, e.g. 0.125, yields the corresponding contrast-stretched image.
Contrast Stretching (piecewise linear form)

  v = α u,               0 <= u < a
  v = β (u - a) + v_a,   a <= u < b
  v = γ (u - b) + v_b,   b <= u <= L

The gray-scale intervals where pixels occur most frequently are stretched to improve the overall visibility of the scene.
Clipping and Thresholding
- Clipping: useful for noise reduction when the input signal is known to lie in the range [a, b]. It is a special case of the piecewise contrast stretching in which the slopes outside [a, b] are zero (α = γ = 0).
- Thresholding: a special case of clipping in which a = b = T:

  v = L,  u > T
  v = 0,  u <= T
Thresholding (contd.)
- Separates the regions of the image corresponding to objects in which we are interested from the regions of the image that correspond to background.
- This segmentation is performed on the basis of the different intensities or colors in the foreground and background regions of an image.
- (A) shows a classic bi-modal intensity distribution; this image can be successfully segmented using a single threshold T1.
- (B) is slightly more complicated: here we suppose the central peak represents the objects we are interested in, so threshold segmentation requires two thresholds, T1 and T2.
- In (C), the two peaks of a bi-modal distribution have run together, so it is almost certainly not possible to successfully segment this image using a single global threshold.
Thresholding example
(Figure: input and output images, thresholded using a single threshold at a pixel intensity value of 120.)
Adaptive Thresholding
- Whereas the conventional thresholding operator uses a global threshold for all pixels, adaptive thresholding changes the threshold dynamically over the image.
- This more sophisticated version of thresholding can accommodate changing lighting conditions in the image, e.g. those occurring as a result of a strong illumination gradient or shadows.
Adaptive Thresholding (contd.)
- For each pixel in the image, a threshold has to be calculated. If the pixel value is below the threshold it is set to the background value; otherwise it assumes the foreground value.
- Two main approaches to finding the threshold:
  - the Chow and Kaneko approach
  - local thresholding
- Assumption: smaller image regions are more likely to have approximately uniform illumination, thus being more suitable for thresholding.
Chow and Kaneko Approach
- Divide an image into an array of overlapping subimages and then find the optimum threshold for each subimage by investigating its histogram.
- The threshold for each single pixel is found by interpolating the results of the subimages.
- The drawback of this method is that it is computationally expensive and, therefore, not appropriate for real-time applications.
Local Thresholding
- The local threshold is found by statistically examining the intensity values of the local neighborhood of each pixel.
- The statistic which is most appropriate depends largely on the input image. Simple and fast choices include:

  T = mean of the local intensity distribution
  T = median value
  T = (max + min) / 2
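A minimal sketch of mean-based adaptive thresholding, assuming SciPy is available (the window size and the constant C mirror the 7x7, C = 7 example that follows; the function name is my own):

import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_threshold_mean(image, size=7, C=7, high=255, low=0):
    """Compare each pixel with (local mean - C) over a size x size window."""
    local_mean = uniform_filter(image.astype(np.float64), size=size)
    return np.where(image > local_mean - C, high, low).astype(np.uint8)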
- When the image contains a strong illumination gradient, global thresholding produces a very poor result. (Figure: source image.)
- Adaptive thresholding using the mean of a 7x7 neighborhood: however, the plain local mean is not suitable as a threshold, because the range of intensity values within a local neighborhood is very small and their mean is close to the value of the center pixel.
- The result improves if the threshold employed is not the mean but (mean - C), where C is a constant. Using this statistic, all pixels which exist in a uniform neighborhood (e.g. along the margins) are set to background. (Figure: result for a 7x7 neighborhood and C = 7.)
- The larger window yields the poorer result, because it is more adversely affected by the illumination gradient. (Figures: mean of a 7x7 neighborhood with C = 7; mean of a 75x75 neighborhood with C = 10; median of a 7x7 neighborhood with C = 4.)
How to choose a threshold value?
Image Negatives / Digital Negative
- Used for the display of medical images and for photographing a screen with monochrome positive film.
- Reverses the order from black to white, so that the intensity of the output image decreases as the intensity of the input increases: with L gray levels, input gray level r maps to output gray level (L - 1) - r.
- Applications: display of medical images; producing negative prints of images.
(Figures: an image and its negative; original image and negative image.)
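A minimal sketch of the digital negative (function name and uint8 assumption are mine):

import numpy as np

def image_negative(image, L=256):
    """Digital negative: map gray level r to (L - 1) - r."""
    return (L - 1 - image.astype(np.int32)).astype(np.uint8)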
Gray / Intensity-Level Slicing
- Highlighting a specific range of gray levels is often desired.
- There are various ways to accomplish this:
  - Highlight some range and reduce all other levels to a constant
  - Highlight some range but preserve all other levels

Intensity Level Slicing

Without background:
  v = L,  if a <= u <= b
  v = 0,  otherwise

With background:
  v = L,  if a <= u <= b
  v = u,  otherwise
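A small NumPy sketch covering both variants (the function name and the keep_background flag are illustrative):

import numpy as np

def intensity_level_slice(image, a, b, L=255, keep_background=True):
    """Highlight gray levels in [a, b]; keep or suppress everything else."""
    in_range = (image >= a) & (image <= b)
    background = image if keep_background else np.zeros_like(image)
    return np.where(in_range, L, background).astype(np.uint8)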
Intensity-level slicing:
(a) a transformation function that highlights a range [A, B] of intensities while diminishing all others to a constant, low level
(b) a transformation that highlights a range [A, B] of intensities but preserves all other levels
(c) original image
(d) result of using the transformation in (a)
These transformations permit segmentation of certain gray-level regions from the rest of the image.

Examples of display transfer functions
Manipulation of the gray-scale transfer function:
a) an original, moderately low-contrast transmission light microscope image (prepared slide of a head louse)
b) expanded linear transfer function adjusted to the minimum and maximum brightness values
c) positive gamma (log) function
d) negative gamma (log) function
e) negative linear transfer function
f) nonlinear transfer function (high-slope linear contrast over the central portion of the brightness range, with negative slope or solarization for the dark and bright portions)
Bit-Plane Slicing
- Highlights the contribution made to the total image appearance by specific bits.
- Higher-order bits: contain the majority of the visually significant data.
- Lower-order bits: contribute the more subtle details.

Bit Extraction
(Figures: the original image and its eight bit planes 7, 6, 5, 4, 3, 2, 1, 0. Plane 7 contains the most significant bits and plane 0 contains the least significant bits of the pixels in the original image.)
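A one-function sketch of bit-plane extraction (scaling the result to 0/255 for display is my own choice):

import numpy as np

def bit_plane(image, plane):
    """Extract one bit plane (0 = least significant, 7 = most significant)."""
    return (((image.astype(np.uint8) >> plane) & 1) * 255).astype(np.uint8)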
Log Transformation

  v = c * log10(1 + u)

(Figures: Fourier spectrum and its log-transformed image.)
Power Law Transformation
- Power-law transformations have the form

  v = c * u^γ

- They map a narrow range of dark input values into a wider range of output values, or vice versa.
- Varying γ gives a whole family of curves.
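A minimal power-law (gamma) sketch on a normalized image (the 0-255 normalization and the function name are assumptions):

import numpy as np

def power_law(image, c=1.0, gamma=1.0):
    """Apply v = c * u**gamma to an image normalized to [0, 1]."""
    u = image.astype(np.float64) / 255.0
    v = c * np.power(u, gamma)
    return np.clip(v * 255.0, 0, 255).astype(np.uint8)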
Gamma Correction
- A variety of devices used for image capture, printing, and display respond according to a power law.
- By convention, the exponent in the power-law equation is referred to as gamma.
- The process used to correct this power-law response phenomenon is called gamma correction.

Example
- Cathode ray tube (CRT) devices have an intensity-to-voltage response that is a power function, with exponents varying from approximately 1.8 to 2.5.
- With reference to the curve for γ = 2.5, we see that such display systems would tend to produce images that are darker than intended.
(Figure slides: gamma correction curves and examples.)
What is a histogram?
- A graph indicating the number of times each gray level occurs in the image, i.e. the frequency of each brightness value in the image.
- The histogram of an image with L gray levels is represented by a one-dimensional array with L elements.
- Algorithm:
  1. Assign zero values to all elements of the array h_f.
  2. For all pixels (x, y) of the image f, increment h_f[f(x, y)] by 1.
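The same algorithm as a small Python sketch (the explicit loop mirrors the two steps above; np.bincount would do the same job in one call):

import numpy as np

def histogram(image, L=256):
    """Count how many pixels take each gray level 0 .. L-1."""
    h = np.zeros(L, dtype=np.int64)        # step 1: all counts start at zero
    for value in image.ravel():            # step 2: increment h[f(x, y)]
        h[value] += 1
    return h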
Histogram Processing
- The histogram of a digital image with gray levels in the range [0, L-1] is the discrete function

  h(r_k) = n_k

  where
  r_k : the k-th gray level
  n_k : the number of pixels in the image having gray level r_k
  h(r_k) : histogram of a digital image with gray levels r_k

Normalized Histogram
- Obtained by dividing each histogram value at gray level r_k by the total number of pixels in the image, n:

  p(r_k) = n_k / n,   for k = 0, 1, ..., L-1

- p(r_k) gives an estimate of the probability of occurrence of gray level r_k.
- The sum of all components of a normalized histogram is equal to 1.

Histogram
An image histogram is a plot of the gray-level frequencies.
(Figure: image histogram example.)
Properties of Image Histogram
- Histograms with a small spread correspond to low-contrast images (i.e., mostly dark, mostly bright, or mostly gray).
- Histograms with a wide spread correspond to high-contrast images.
- Histograms clustered at the low end correspond to dark images.
- Histograms clustered at the high end correspond to bright images.

Example
- Dark image: the components of the histogram are concentrated on the low side of the gray scale.
- Bright image: the components of the histogram are concentrated on the high side of the gray scale.
- Low-contrast image: the histogram is narrow and centered toward the middle of the gray scale.
- High-contrast image: the histogram covers a broad range of the gray scale, and the distribution of pixels is not too far from uniform, with very few vertical lines being much higher than the others.

Histogram Processing
(Figure: histograms corresponding to the four basic image types: dark image, bright image, low-contrast image, high-contrast image.)
Histogram Equalization
- Goal: to produce an image with equally distributed brightness levels over the whole brightness scale.
- Effect: enhances contrast for brightness values close to histogram maxima and decreases contrast near minima.
- The result is better than simple stretching, and the method is fully automatic.
- As a low-contrast image's histogram is narrow and centered toward the middle of the gray scale, distributing the histogram over a wider range improves the quality of the image.
- We can do this by adjusting the probability density function of the original histogram of the image so that the probability is spread equally.
Histogram Equalization (continuous case)
Let us assume for the moment that the input image to be enhanced has continuous gray values, with r = 0 representing black and r = 1 representing white.
We need to design a gray-value transformation s = T(r), based on the histogram of the input image, which will enhance the image.
Histogram equalization is an approach to enhance a given image: the idea is to design a transformation T(.) such that the gray values in the output are uniformly distributed in [0, 1].

As before, we assume that:
(1) T(r) is a monotonically increasing function for 0 <= r <= 1 (preserves the order from black to white).
(2) T(r) maps [0, 1] into [0, 1] (preserves the range of allowed gray values).

Let us denote the inverse transformation by r = T^-1(s). We assume that the inverse transformation also satisfies the above two conditions.
We consider the gray values in the input and output images as random variables in the interval [0, 1].
Let p_in(r) and p_out(s) denote the probability densities of the gray values in the input and output images.
If p_in(r) and T(r) are known, and r = T^-1(s) satisfies condition (1), we can write (a result from probability theory):

  p_out(s) = [ p_in(r) * dr/ds ] evaluated at r = T^-1(s)

One way to enhance the image is to design a transformation T(.) such that the gray values in the output are uniformly distributed in [0, 1], i.e. p_out(s) = 1 for 0 <= s <= 1.
In terms of histograms, the output image will have all gray values in equal proportion.
This technique is called histogram equalization.

Consider the transformation

  s = T(r) = integral from 0 to r of p_in(w) dw,   0 <= r <= 1

Note that this is the cumulative distribution function (CDF) of p_in(r) and satisfies the previous two conditions.
From the previous equation and using the fundamental theorem of calculus,

  ds/dr = p_in(r)

Next we show that the gray values in the output are uniformly distributed in [0, 1]:

  p_out(s) = [ p_in(r) * dr/ds ] evaluated at r = T^-1(s)
Therefore, the output histogram is given by

  p_out(s) = [ p_in(r) * 1/p_in(r) ] evaluated at r = T^-1(s) = 1,   0 <= s <= 1

The output probability density function is uniform, regardless of the input.
Thus, using a transformation function equal to the CDF of the input gray values r, we can obtain an image with uniform gray values.
This usually results in an enhanced image, with an increase in the dynamic range of pixel values.
How to implement histogram equalization?

Step 1: For images with discrete gray values, compute

  p_in(r_k) = n_k / n,   0 <= r_k <= 1,  0 <= k <= L - 1

  where
  L : total number of gray levels
  n_k : number of pixels with gray value r_k
  n : total number of pixels in the image

Step 2: Based on the CDF, compute the discrete version of the previous transformation:

  s_k = T(r_k) = sum over j = 0 .. k of p_in(r_j),   0 <= k <= L - 1
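The two steps as a compact NumPy sketch (the rounding to L - 1 output levels matches the implementation steps given later; the function name is mine):

import numpy as np

def histogram_equalize(image, L=256):
    """Discrete histogram equalization: map each gray level through the CDF."""
    hist = np.bincount(image.ravel(), minlength=L)
    pdf = hist / image.size                 # p_in(r_k) = n_k / n
    cdf = np.cumsum(pdf)                    # s_k = sum_{j <= k} p_in(r_j)
    mapping = np.round(cdf * (L - 1)).astype(np.uint8)
    return mapping[image]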
Example:
Consider an 8-level 64 x 64 image with gray values (0, 1, ..., 7). The normalized gray values are (0, 1/7, 2/7, ..., 1). The normalized histogram is given on the slide as a table of gray value, number of pixels, normalized gray value, and fraction of pixels.
NB: The gray values in the output are also (0, 1/7, 2/7, ..., 1).

Applying the transformation s_k = T(r_k) = sum over j = 0 .. k of p_in(r_j), we obtain the mapped values shown on the slide.

Notice that there are only five distinct gray levels, (1/7, 3/7, 5/7, 6/7, 1), in the output image. We will relabel them as (s_0, s_1, ..., s_4).
With this transformation, the output image will have the histogram shown on the slide (number of pixels versus gray values).
Note that the histogram of the output image is only approximately, and not exactly, uniform. This should not be surprising, since there is no result that claims uniformity in the discrete case.

Example
(Figures: original image and its histogram; histogram-equalized image and its histogram.)

Comments:
Histogram equalization may not always produce desirable results, particularly if the given histogram is very narrow. It can produce false edges and regions. It can also increase image graininess and patchiness.
Histogram Equalization (implementation steps)
1. Form the cumulative histogram.
2. Normalize the values by dividing by the total number of pixels.
3. Multiply these values by the maximum gray-level value and round off.
4. Map each original value to the result of step 3 by a one-to-one correspondence.

Histogram (Matching) Specification
- Histogram equalization has the disadvantage that it can generate only one type of output image.
- With histogram specification, we can specify the shape of the histogram that we wish the output image to have.
- It doesn't have to be a uniform histogram.
Histogram Specification / Matching
- Motivation: sometimes the ability to specify particular histogram shapes capable of highlighting certain gray-level ranges in an image is desirable.
- The aim is to produce an image with a desired distribution of brightness levels over the whole brightness scale, as opposed to a uniform distribution.
- Notation:
  p_r(r): the original probability density function
  p_z(z): the desired probability density function

Example
Illustration of the histogram specification method:
(a) original image; (b) image after histogram equalization; (c) image enhanced by histogram specification; (d) histograms (original, equalized, specified, resulting).

Histogram Specification (steps)
1. Find the mapping table of the histogram equalization.
2. Specify the desired histogram.
3. Equalize the desired histogram.
4. Perform the mapping process so that the values of step 1 are mapped to the results of step 3.
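A minimal sketch of the mapping process, assuming the desired histogram is supplied as an array of counts (function and variable names are illustrative):

import numpy as np

def histogram_specify(image, desired_hist, L=256):
    """Map 'image' so its histogram approximates 'desired_hist' (length-L counts)."""
    src_cdf = np.cumsum(np.bincount(image.ravel(), minlength=L)) / image.size
    des_cdf = np.cumsum(desired_hist) / np.sum(desired_hist)
    # For each source level, pick the desired level whose CDF first reaches it.
    mapping = np.searchsorted(des_cdf, src_cdf).clip(0, L - 1).astype(np.uint8)
    return mapping[image]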
Local Enhancement
- Motivation:
  - To enhance details over small areas in an image.
  - The computation of a global transformation does not guarantee the desired local enhancement.
- Solution:
  - Define a square / rectangular neighborhood.
  - Move the center of this area from pixel to pixel.
  - Perform histogram equalization in each neighborhood region.
Image Subtraction
- The difference between two images f(x, y) and h(x, y) is expressed as

  g(x, y) = f(x, y) - h(x, y)

- h(x, y): the mask, e.g. an X-ray image of a region of a patient's body.
- f(x, y): an image of the same anatomical region acquired after injection of a dye into the bloodstream.
(Figures: (a) mask image; (b) image, taken after injection of dye into the bloodstream, with the mask subtracted out.)

Example
Showing image differences by subtraction:
a) original image; b) image after moving one coin; c) difference image after pixel-by-pixel subtraction.
Difference images can also be used for quality control: a master image is subtracted from images of each subsequent part. In this example, the missing chip on a printed circuit board is evident in the difference image.

Applications of Image Subtraction and Change Detection
- Medical imaging: displaying blood-flow paths
- Automated inspection of printed circuits
- Security monitoring

(Figures: two frames from a videotape sequence of free-swimming single-celled animals in a drop of pond water, and the difference image; the length of the white region divided by the time interval gives the velocity. Analysis of motion in a more complex situation: where the paths of the swimming microorganisms cross, they are sorted out by assuming that each path continues in a nearly straight direction. (Gualtieri & Coltelli, 1992))
Image Averaging
- Motivation: imaging at very low light levels is routine, and sensor noise frequently renders single images virtually useless for analysis.
- Solution: image averaging.

- Noisy image model:

  g(x, y) = f(x, y) + n(x, y)

  where g(x, y) is the noisy image, f(x, y) the original image, and n(x, y) the noise.
- Assumption: at every pair of coordinates (x, y), the noise is uncorrelated and has zero average value (uncorrelated: covariance E[(x_i - m_i)(x_j - m_j)] = 0).
- If an image g_bar(x, y) is formed by averaging K different noisy images,

  g_bar(x, y) = (1/K) * sum over i = 1 .. K of g_i(x, y)

  then

  E{ g_bar(x, y) } = f(x, y)   and   sigma^2_gbar(x, y) = (1/K) * sigma^2_n(x, y)

- As K increases, the variability (noise) of the pixel values at each location (x, y) decreases.

Example
(Example figure.)
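A small sketch of averaging K noisy frames (it assumes the frames are already registered; the names are mine):

import numpy as np

def average_images(noisy_images):
    """Average K noisy frames of the same scene to suppress zero-mean noise."""
    stack = np.stack([img.astype(np.float64) for img in noisy_images])
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)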
Spatial Filtering - Contents
- What is spatial filtering?
- Smoothing spatial filters
- Sharpening spatial filters
- Combining spatial enhancement methods

Mask Operations
- Linear systems and linear filtering
- Smoothing operations
- Median filtering
- Sharpening operations
- Derivative operations
- Correlation

Neighbourhood Operations
- Neighbourhood operations simply operate on a larger neighbourhood of pixels than point operations.
- Neighbourhoods are mostly a rectangle around a central pixel; any size rectangle and any shape filter are possible.
- For each pixel in the origin image, the outcome is written at the same location in the target image.
(Figure: image f(x, y) with a neighbourhood centred on (x, y) in the origin and target images.)

Simple Neighbourhood Operations
Simple neighbourhood operation examples:
- Min: set the pixel value to the minimum in the neighbourhood
- Max: set the pixel value to the maximum in the neighbourhood
Image Enhancement: Spatial Filtering
Image enhancement in the spatial domain can be represented as

  g(m, n) = T[ f(m, n) ]

where f(m, n) is the given image, g(m, n) the enhanced image, and T the transformation.
The transformation T may be linear or nonlinear. We will mainly study linear operators T but will see one important nonlinear operation.
There are two closely related concepts that must be understood when performing linear spatial filtering: one is correlation, the other is convolution.
How to specify T
If the operator T is linear and shift invariant (LSI), characterized by the point-spread sequence (PSS) h(m, n), then (recall convolution)

  g(m, n) = h(m, n) * f(m, n)
          = sum over (k, l) of h(m - k, n - l) f(k, l)
          = sum over (k, l) of f(m - k, n - l) h(k, l)

In practice, to reduce computation, h(m, n) is of finite extent:

  h(k, l) = 0  for (k, l) outside a small set called the neighborhood, which is also called the support of h.
If h(m, n) is a 3 by 3 mask given by

  h = [ w1 w2 w3
        w4 w5 w6
        w7 w8 w9 ]

then

  g(m, n) = w1 f(m-1, n-1) + w2 f(m-1, n) + w3 f(m-1, n+1)
          + w4 f(m, n-1)   + w5 f(m, n)   + w6 f(m, n+1)
          + w7 f(m+1, n-1) + w8 f(m+1, n) + w9 f(m+1, n+1)

(Figure: the 3 x 3 neighborhood f(-1,-1) ... f(1,1) about the center (m = 0, n = 0).)
The output g(m, n) is computed by sliding the mask over each pixel of the image f(m, n). This filtering procedure is sometimes referred to as a moving-average filter.
Special care is required for the pixels at the border of the image f(m, n). This depends on the so-called boundary condition. Common choices are:

(1) The mask is truncated at the border (free boundary). In one dimension:

  f~(x) = f(x),  0 <= x < N
  f~(x) = 0,     -(L/2) + 1 <= x < 0  or  N <= x < N + (L/2) - 1

  (In MATLAB this corresponds to padding with a constant value P.)

(2) The image is extended by appending extra rows/columns at the boundaries. The extension is done by repeating the first/last row/column or by setting them to some constant (fixed boundary). In one dimension:

  f~(x) = f(0),      -(L/2) + 1 <= x < 0
  f~(x) = f(x),      0 <= x < N
  f~(x) = f(N - 1),  N <= x < N + (L/2) - 1

  (MATLAB option: replicate; the mirror-reflection variant is symmetric.)

(3) The boundaries wrap around (periodic boundary). In one dimension:

  f~(x) = f((x + N) mod N),  -(L/2) + 1 <= x < 0
  f~(x) = f(x),              0 <= x < N
  f~(x) = f(x mod N),        N <= x < N + (L/2) - 1

  (MATLAB option: circular.)
In any case, the final output g(m, n) is restricted to the support of the original image f(m, n).
The mask operation can be implemented in MATLAB using the filter2 command, which is based on the conv2 command.
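An equivalent sketch in Python, assuming SciPy (the mode names map roughly onto the boundary conditions above: 'constant' is zero padding, 'nearest' is replicate, 'wrap' is periodic, 'reflect' is mirror):

import numpy as np
from scipy.ndimage import correlate

def apply_mask(image, mask, mode="nearest"):
    """Slide a mask over the image (correlation); 'mode' selects the boundary rule."""
    return correlate(image.astype(np.float64), mask, mode=mode)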
The Spatial Filtering Process
A simple 3*3 filter w with coefficients

  j k l
  m n o
  p q r

is placed over a 3*3 neighbourhood of original image pixels

  a b c
  d e f
  g h i

and the processed value of the centre pixel e is

  e_processed = n*e + j*a + k*b + l*c + m*d + o*f + p*g + q*h + r*i

The above is repeated for every pixel in the original image to generate the filtered image.
Spatial Filtering: Equation Form
Filtering can be given in equation form as

  g(x, y) = sum over s = -a .. a, t = -b .. b of w(s, t) f(x + s, y + t)

The notation is based on the image and mask shown on the slide.
Smoothing Filters
Image smoothing refers to any image-to-image transformation designed to smooth or flatten the image by reducing the rapid pixel-to-pixel variation in gray values.
Smoothing filters are used for:
(1) Blurring: usually a preprocessing step for removing small (unwanted) details before extracting the relevant (large) objects, or for bridging gaps in lines/curves.
(2) Noise reduction: mitigating the effect of noise by linear or nonlinear operations.
Image smoothing by averaging is also known as lowpass spatial filtering.
Smoothing is accomplished by applying an averaging mask.
An averaging mask is a mask with positive weights which sum to 1. It computes a weighted average of the pixel values in a neighborhood. This operation is sometimes called neighborhood averaging.
Some 3 x 3 averaging masks:

  (1/5) [ 0 1 0     (1/8) [ 0 1 0     (1/9) [ 1 1 1     (1/32) [ 1  3 1
          1 1 1             1 4 1             1 1 1              3 16 3
          0 1 0 ]           0 1 0 ]           1 1 1 ]            1  3 1 ]

This operation is equivalent to lowpass filtering.
Smoothing Spatial Filters
One of the simplest spatial filtering operations we can perform is a smoothing operation:
- Simply average all of the pixels in a neighbourhood around a central value.
- Especially useful for removing noise from images.
- Also useful for highlighting gross detail.

Simple averaging filter:

  1/9 1/9 1/9
  1/9 1/9 1/9
  1/9 1/9 1/9

Smoothing Spatial Filtering
Placing the 3*3 smoothing filter (all coefficients 1/9) over the original image pixels

  104 100 108
   99 106  98
   95  90  85

gives

  e = 1/9*106 + 1/9*104 + 1/9*100 + 1/9*108 + 1/9*99 + 1/9*98 + 1/9*95 + 1/9*90 + 1/9*85
    = 98.3333

The above is repeated for every pixel in the original image to generate the smoothed image.
Image Smoothing Example
- The image at the top left is an original image of size 500*500 pixels.
- The subsequent images show the image after filtering with an averaging filter of increasing sizes: 3, 5, 9, 15 and 35.
- Notice how detail begins to disappear.
(Figure slides: smoothing results for the different filter sizes.)

Weighted Smoothing Filters
- More effective smoothing filters can be generated by allowing different pixels in the neighbourhood different weights in the averaging function.
- Pixels closer to the central pixel are more important.
- This is often referred to as weighted averaging; a sketch of its use follows the mask below.

Weighted averaging filter:

  1/16 2/16 1/16
  2/16 4/16 2/16
  1/16 2/16 1/16
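A minimal weighted-averaging sketch, assuming SciPy (the mask values are the ones above; the function name is mine):

import numpy as np
from scipy.ndimage import correlate

# Weighted averaging mask from the slide; the weights sum to 1.
weighted_mask = np.array([[1, 2, 1],
                          [2, 4, 2],
                          [1, 2, 1]], dtype=np.float64) / 16.0

def weighted_smooth(image):
    """Weighted neighbourhood averaging; pixels nearer the centre get larger weights."""
    return correlate(image.astype(np.float64), weighted_mask, mode="nearest").astype(np.uint8)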
Another Smoothing Example
By smoothing the original image we get rid of lots of the finer detail, which leaves only the gross features for thresholding.
(Figures: original image, smoothed image, thresholded image. Image taken from the Hubble Space Telescope.)
Averaging Filter vs. Median Filter Example
- Filtering is often used to remove noise from images.
- Sometimes a median filter works better than an averaging filter.
(Figures: original image with noise, image after averaging filter, image after median filter.)
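A short sketch comparing the two, assuming SciPy (the window size is an illustrative choice):

import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def denoise(image, size=3):
    """Return (averaging-filtered, median-filtered) versions of the image."""
    mean_version = uniform_filter(image.astype(np.float64), size=size).astype(np.uint8)
    median_version = median_filter(image, size=size)   # nonlinear: picks the middle value
    return mean_version, median_version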
Strange Things Happen At The Edges!
At the edges of an image we are missing pixels to form a complete neighbourhood.

Strange Things Happen At The Edges! (cont...)
There are a few approaches to dealing with missing edge pixels:
- Omit missing pixels
  - Only works with some filters
  - Can add extra code and slow down processing
- Pad the image
  - Typically with either all white or all black pixels
- Replicate border pixels
- Truncate the image
Correlation &
Convolution
The filtering we have been talking
about so far is referred to as
correlation with the filter itself referred
to as the correlation kernel
Convolution is a similar operation, with
just one subtle difference
For symmetric filters it makes no
difference
e
processed
= v*e +
z*a + y*b + x*c +
w*d + u*e +
t*f + s*g + r*h
r s t
u v w
x y z
Filter
a b c
d e e
f g h
Original Image
Pixels
*
05/25/13 132
Sharpening Spatial Filters
- Previously we have looked at smoothing filters which remove fine detail.
- Sharpening spatial filters seek to highlight fine detail:
  - Remove blurring from images
  - Highlight edges
- Sharpening filters are based on spatial differentiation.

Image Sharpening
This involves highlighting fine details or enhancing details that have been blurred.

Basic highpass spatial filtering
- This can be accomplished by a linear shift-invariant operator, implemented by means of a mask with positive and negative coefficients.
- This is called a sharpening mask, since it tends to enhance abrupt gray-level changes in the image.
- The mask should have a positive coefficient at the center and negative coefficients at the periphery, and the coefficients should sum to zero. Example:

  (1/9) [ -1 -1 -1
          -1  8 -1
          -1 -1 -1 ]

This is equivalent to highpass filtering. A highpass-filtered image g can be thought of as the difference between the original image f and a lowpass-filtered version of f:

  g(m, n) = f(m, n) - lowpass(f(m, n))

(Example figure.)
High-boost filtering
This is a filter whose output g is produced by subtracting a lowpass (blurred) version of f from an amplified version of f:

  g(m, n) = A f(m, n) - lowpass(f(m, n))

This is also referred to as unsharp masking. Observe that

  g(m, n) = A f(m, n) - lowpass(f(m, n))
          = (A - 1) f(m, n) + [ f(m, n) - lowpass(f(m, n)) ]
          = (A - 1) f(m, n) + highpass(f(m, n))

For A > 1, part of the original image is added back to the highpass-filtered version of f.
The result is the original image with the edges enhanced relative to the original image.
(Example figure.)
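A minimal high-boost sketch using a box blur as the lowpass step (the amplification A and window size are illustrative; A = 1 reduces to plain highpass filtering):

import numpy as np
from scipy.ndimage import uniform_filter

def high_boost(image, A=1.5, size=3):
    """High-boost filtering: g = A*f - lowpass(f)."""
    f = image.astype(np.float64)
    low = uniform_filter(f, size=size)      # blurred (lowpass) version of f
    g = A * f - low
    return np.clip(g, 0, 255).astype(np.uint8)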
Spatial Differentiation
- Differentiation measures the rate of change of a function.
- Let's consider a simple 1-dimensional example.
(Figure: a 1-D intensity profile with points A and B marked.)

1st Derivative
The formula for the 1st derivative of a function is as follows:

  df/dx = f(x + 1) - f(x)

It is just the difference between subsequent values and measures the rate of change of the function.

Example:
  f(x):  5 5 4 3 2 1 0 0 0 6 0 0 0 0 1 3 1 0 0 0 0 7 7 7 7
  f'(x):   0 -1 -1 -1 -1 -1 0 0 6 -6 0 0 0 1 2 -2 -1 0 0 0 7 0 0 0

2nd Derivative
The formula for the 2nd derivative of a function is as follows:

  d2f/dx2 = f(x + 1) + f(x - 1) - 2 f(x)

It simply takes into account the values both before and after the current value.

Example:
  f(x):   5 5 4 3 2 1 0 0 0 6 0 0 0 0 1 3 1 0 0 0 0 7 7 7 7
  f''(x):   -1 0 0 0 0 1 0 6 -12 6 0 0 1 1 -4 1 1 0 0 7 -7 0 0

1st and 2nd Derivative
(Figure: f(x), f'(x) and f''(x) plotted together.)
Using Second Derivatives For Image Enhancement
- The 2nd derivative is more useful for image enhancement than the 1st derivative:
  - Stronger response to fine detail
  - Simpler implementation
  - We will come back to the 1st-order derivative later on
- The first sharpening filter we will look at is the Laplacian:
  - Isotropic
  - One of the simplest sharpening filters
  - We will look at a digital implementation

The Laplacian
The Laplacian is defined as follows:

  ∇²f = d2f/dx2 + d2f/dy2

where the partial 2nd-order derivative in the x direction is defined as

  d2f/dx2 = f(x + 1, y) + f(x - 1, y) - 2 f(x, y)

and in the y direction as

  d2f/dy2 = f(x, y + 1) + f(x, y - 1) - 2 f(x, y)

The Laplacian (cont...)
So, the Laplacian can be given as follows:

  ∇²f = [ f(x + 1, y) + f(x - 1, y) + f(x, y + 1) + f(x, y - 1) ] - 4 f(x, y)

We can easily build a filter based on this (the Laplacian mask):

   0  1  0
   1 -4  1
   0  1  0

The Laplacian (cont...)
Applying the Laplacian to an image we get a new image that highlights edges and other discontinuities.
(Figures: original image, Laplacian-filtered image, Laplacian-filtered image scaled for display.)
But That Is Not Very Enhanced!
- The result of Laplacian filtering is not an enhanced image.
- We have to do more work in order to get our final image: subtract the Laplacian result from the original image to generate the final sharpened, enhanced image:

  g(x, y) = f(x, y) - ∇²f

- Background features can be recovered while still preserving the sharpening effect of the Laplacian operation simply by adding the original and Laplacian images.

Laplacian Image Enhancement
In the final sharpened image, edges and fine detail are much more obvious.
(Figures: original image, Laplacian-filtered image, and the resulting sharpened image.)
Simplified Image Enhancement
The entire enhancement can be combined into a single filtering operation:

  g(x, y) = f(x, y) - ∇²f
          = f(x, y) - [ f(x + 1, y) + f(x - 1, y) + f(x, y + 1) + f(x, y - 1) - 4 f(x, y) ]
          = 5 f(x, y) - f(x + 1, y) - f(x - 1, y) - f(x, y + 1) - f(x, y - 1)

This gives us a new filter which does the whole job for us in one step:

   0 -1  0
  -1  5 -1
   0 -1  0

(Example figure.)
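A one-step Laplacian sharpening sketch with the composite mask above, assuming SciPy (the function name and the clipping to 0-255 are mine):

import numpy as np
from scipy.ndimage import correlate

# Composite sharpening mask: g = f - laplacian(f) in a single pass.
sharpen_mask = np.array([[ 0, -1,  0],
                         [-1,  5, -1],
                         [ 0, -1,  0]], dtype=np.float64)

def laplacian_sharpen(image):
    """Sharpen by filtering with the composite 3x3 Laplacian mask."""
    out = correlate(image.astype(np.float64), sharpen_mask, mode="nearest")
    return np.clip(out, 0, 255).astype(np.uint8)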
Variants On The Simple Laplacian / Composite Laplacian Mask
There are lots of slightly different versions of the Laplacian that can be used:

  Simple Laplacian    Variant of Laplacian    Composite mask
   0  1  0             1  1  1                -1 -1 -1
   1 -4  1             1 -8  1                -1  9 -1
   0  1  0             1  1  1                -1 -1 -1

Unsharp Masking and Highboost Filtering
- Unsharp masking: sharpening an image by subtracting an unsharp (smoothed) version of the image from the original image (used e.g. in the printing and publishing industry).
- Steps:
  1. Blur the original image.
  2. Subtract the blurred image from the original (the result is the mask).
  3. Add the mask to the original.

Unsharp masking:

  f_s(x, y) = f(x, y) - f_bar(x, y)

where f_s(x, y) is the sharpened image obtained by unsharp masking and f_bar(x, y) is a blurred version of f(x, y).
High-Boost Filtering
A generalization of unsharp masking is called high-boost filtering:

  f_hb(x, y) = A f(x, y) - f_bar(x, y)
             = (A - 1) f(x, y) + [ f(x, y) - f_bar(x, y) ]
             = (A - 1) f(x, y) + f_s(x, y)

If the Laplacian is used to obtain the sharpened image:

  f_hb(x, y) = (A - 1) f(x, y) + f(x, y) - ∇²f(x, y)
             = A f(x, y) - ∇²f(x, y)

(Figure slides: high-boost filtering masks and high-boost filtered images.)
1st Derivative Filtering: The Gradient
Implementing 1st derivative filters is difficult in practice.
For a function f(x, y), the gradient of f at coordinates (x, y) is given as the column vector:

  grad(f) = [ Gx ]  =  [ df/dx ]
            [ Gy ]     [ df/dy ]

1st Derivative Filtering (cont...)
The magnitude of this vector is given by:

  |grad(f)| = mag(grad(f)) = [ Gx^2 + Gy^2 ]^(1/2) = [ (df/dx)^2 + (df/dy)^2 ]^(1/2)

For practical reasons this can be simplified as:

  |grad(f)| ~ |Gx| + |Gy|

1st Derivative Filtering (cont...)
There is some debate as to how best to calculate these gradients, but we will use

  |grad(f)| ~ | (z7 + 2 z8 + z9) - (z1 + 2 z2 + z3) | + | (z3 + 2 z6 + z9) - (z1 + 2 z4 + z7) |

which is based on these coordinates:

  z1 z2 z3
  z4 z5 z6
  z7 z8 z9
Sobel Operators
Based on the previous equations we can derive the Sobel operators:

  -1 -2 -1        -1  0  1
   0  0  0        -2  0  2
   1  2  1        -1  0  1

To filter an image it is filtered using both operators, and the results are added together.
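A Sobel gradient-magnitude sketch using the |Gx| + |Gy| approximation, assuming SciPy (names are mine):

import numpy as np
from scipy.ndimage import correlate

sobel_y = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=np.float64)
sobel_x = sobel_y.T

def sobel_gradient(image):
    """Approximate gradient magnitude |Gx| + |Gy| with the two Sobel masks."""
    f = image.astype(np.float64)
    gx = correlate(f, sobel_x, mode="nearest")
    gy = correlate(f, sobel_y, mode="nearest")
    return np.clip(np.abs(gx) + np.abs(gy), 0, 255).astype(np.uint8)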
Sobel Example
Sobel filters are typically used for edge detection.
(Figure: an image of a contact lens, enhanced in order to make defects (at four and five o'clock in the image) more obvious.)

1st & 2nd Derivatives
Comparing the 1st and 2nd derivatives we can conclude the following:
- 1st-order derivatives generally produce thicker edges.
- 2nd-order derivatives have a stronger response to fine detail, e.g. thin lines.
- 1st-order derivatives have a stronger response to a gray-level step.
- 2nd-order derivatives produce a double response at step changes in gray level.
Combining Spatial Enhancement Methods
- Successful image enhancement is typically not achieved using a single operation.
- Rather, we combine a range of techniques in order to achieve a final result.
- This example focuses on enhancing the bone scan shown on the slide:
  (a) original bone scan
  (b) Laplacian filter of bone scan (a)
  (c) sharpened version of the bone scan, achieved by subtracting the Laplacian (b) from the original (a)
  (d) Sobel filter of bone scan (a)
  (e) image (d) smoothed with a 5*5 averaging filter
  (f) the product of (c) and (e), which will be used as a mask
  (g) sharpened image, which is the sum of (a) and (f)
  (h) result of applying a power-law transformation to (g)
- Compare the original and final images.
Filtering in the Frequency Domain

Notch Filter

  H(u, v) = 0,  if (u, v) = (M/2, N/2)
  H(u, v) = 1,  otherwise

Transfer Function of the Ideal Lowpass Filter

  H(u, v) = 1,  if D(u, v) <= D0
  H(u, v) = 0,  if D(u, v) > D0

where D0 is the cutoff frequency and

  D(u, v) = [ (u - M/2)^2 + (v - N/2)^2 ]^(1/2)
The total image power is

  P_T = sum over u = 0 .. M-1, v = 0 .. N-1 of P(u, v)

where

  P(u, v) = |F(u, v)|^2 = R^2(u, v) + I^2(u, v)

and the percentage of the image power contained in a given frequency region is

  alpha = 100 * [ sum over that region of P(u, v) / P_T ]

Ideal Lowpass Filter
(Figure slides: ILPF and results of filtering.)

Butterworth Lowpass Filter

  H(u, v) = 1 / [ 1 + ( D(u, v) / D0 )^(2n) ]

Results of Filtering with the BLPF
(Figure slides.)
Gaussian Lowpass Filter

  H(u, v) = e^( -D^2(u, v) / (2 D0^2) )
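A frequency-domain filtering sketch with the Gaussian lowpass transfer function (the cutoff D0 and the function name are illustrative assumptions):

import numpy as np

def gaussian_lowpass(image, D0=30.0):
    """Filter in the frequency domain with H(u,v) = exp(-D^2 / (2*D0^2))."""
    M, N = image.shape
    F = np.fft.fftshift(np.fft.fft2(image))          # centered spectrum
    u = np.arange(M) - M / 2
    v = np.arange(N) - N / 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2           # D^2(u, v) from the centre
    H = np.exp(-D2 / (2.0 * D0 ** 2))
    g = np.fft.ifft2(np.fft.ifftshift(F * H)).real
    return np.clip(g, 0, 255).astype(np.uint8)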
High Pass Filters

Transfer functions of highpass filters:

Ideal HPF:
  H(u, v) = 0,  if D(u, v) <= D0
  H(u, v) = 1,  if D(u, v) > D0
  where D0 is the cutoff frequency.

Butterworth HPF:
  H(u, v) = 1 / [ 1 + ( D0 / D(u, v) )^(2n) ]

Gaussian HPF:
  H(u, v) = 1 - e^( -D^2(u, v) / (2 D0^2) )

Results of Highpass Filtering using the Butterworth HPF
(Figure slide.)
Homomorphic Filtering Approach

An image can be modeled as the product of illumination and reflectance:

  f(x, y) = i(x, y) r(x, y)

The Fourier transform of this product is not the product of the individual transforms, so we take logarithms first:

  z(x, y) = ln f(x, y) = ln i(x, y) + ln r(x, y)

  F{ z(x, y) } = F{ ln i(x, y) } + F{ ln r(x, y) }

  Z(u, v) = F_i(u, v) + F_r(u, v)

If we process Z(u, v) by means of a filter function H(u, v),

  S(u, v) = H(u, v) Z(u, v) = H(u, v) F_i(u, v) + H(u, v) F_r(u, v)

In the spatial domain,

  s(x, y) = F^-1{ S(u, v) }
          = F^-1{ H(u, v) F_i(u, v) } + F^-1{ H(u, v) F_r(u, v) }
          = i'(x, y) + r'(x, y)

where i'(x, y) = F^-1{ H(u, v) F_i(u, v) } and r'(x, y) = F^-1{ H(u, v) F_r(u, v) }.

Finally, since z(x, y) was formed by taking the logarithm of the original image, the inverse (exponential) operation yields the enhanced image:

  g(x, y) = e^( s(x, y) ) = e^( i'(x, y) ) * e^( r'(x, y) )
Filter Function H(u, v)
The filter function tends to decrease the contribution made by the low frequencies (illumination) and amplify the contributions made by the high frequencies (reflectance).
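A homomorphic-filtering sketch built on this idea, with an assumed Gaussian-shaped high-emphasis H(u, v) controlled by gamma_L, gamma_H and D0 (all parameter values here are illustrative assumptions, not from the slides):

import numpy as np

def homomorphic_filter(image, D0=30.0, gamma_L=0.5, gamma_H=2.0):
    """log -> FFT -> high-emphasis filter -> inverse FFT -> exp."""
    z = np.log1p(image.astype(np.float64))            # z = ln(f), +1 avoids ln(0)
    Z = np.fft.fftshift(np.fft.fft2(z))
    M, N = image.shape
    u = np.arange(M) - M / 2
    v = np.arange(N) - N / 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    # gamma_L (< 1) attenuates low frequencies, gamma_H (> 1) boosts high frequencies.
    H = (gamma_H - gamma_L) * (1.0 - np.exp(-D2 / (2.0 * D0 ** 2))) + gamma_L
    s = np.fft.ifft2(np.fft.ifftshift(H * Z)).real
    g = np.expm1(s)                                   # undo the logarithm
    g = (g - g.min()) / (g.max() - g.min() + 1e-12) * 255.0
    return g.astype(np.uint8)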
Thank You