
Thresholding (image processing)

The simplest thresholding methods replace each pixel in an image with a black pixel if
the image intensity I(i,j) is less than some fixed constant T (that is, I(i,j) < T), or with a white
pixel if the image intensity is greater than that constant. In the classic example image
(a dark tree against snow), this results in the tree becoming completely black and the
white snow becoming completely white.
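A minimal sketch of this fixed-threshold rule in MATLAB (assuming the Image Processing
Toolbox; coins.png is a sample image that ships with the toolbox, and T = 100 is an
arbitrary hand-picked value):

I  = imread('coins.png');   % 8-bit grayscale image, intensities 0..255
T  = 100;                   % fixed threshold constant
BW = I > T;                 % logical image: white where I(i,j) > T, black elsewhere
imshow(BW);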
Histogram shape-based methods choose the threshold automatically: for example, the
peaks, valleys and curvatures of the smoothed histogram are analysed.

A histogram is a graph showing the number of pixels in an
image at each different intensity value found in that image.
For an 8-bit grayscale image there are 256 different possible
intensities, and so the histogram will graphically display 256
numbers showing the distribution of pixels amongst those
grayscale values. Histograms can also be taken of color
images --- either individual histograms of red, green and
blue channels can be taken, or a 3-D histogram can be
produced, with the three axes representing the red, blue and
green channels, and brightness at each point representing
the pixel count. The exact output from the operation
depends upon the implementation --- it may simply be a
picture of the required histogram in a suitable image format,
or it may be a data file of some sort representing the
histogram statistics.
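As a sketch of the mechanics in MATLAB (imhist is the Image Processing Toolbox routine
for this; pout.tif is a sample image shipped with the toolbox):

I = imread('pout.tif');        % 8-bit grayscale image
[counts, bins] = imhist(I);    % 256 counts, one per gray level 0..255
stem(bins, counts);            % the 256 numbers described above
xlabel('Gray level'); ylabel('Pixel count');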
How do we select a suitable threshold value from the histogram?
When we use thresholding, we typically have to play with the threshold, sometimes
losing too much of the region and sometimes getting too many extraneous
background pixels. (Shadows of objects in the image are also a real pain,
not just where they fall across another object but where they mistakenly
get included as part of a dark object on a light background.)
http://www.labbookpages.co.uk/software/imgProc/otsuThreshold.html#examples
Image segmentation is the process of partitioning a digital image into multiple segments
(sets of pixels, also known as superpixels). The goal of segmentation is to simplify and/or
change the representation of an image into something that is more meaningful and easier
to analyze.
http://homepages.inf.ed.ac.uk/rbf/HIPR2/histgram.htm
How do we determine the structuring element (SE) for morphological image processing?
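One common rule of thumb (an assumption here, not a rule stated in these notes) is to
match the SE's shape and size to the smallest feature you want to preserve. A minimal
MATLAB sketch using strel, with an illustrative disk of radius 3 and the sample binary
image circles.png that ships with the Image Processing Toolbox:

BW  = imread('circles.png');    % sample binary image
se  = strel('disk', 3);         % flat, disk-shaped SE of radius 3 (illustrative choice)
BW2 = imopen(BW, se);           % opening removes details thinner than the SE
imshowpair(BW, BW2, 'montage');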
Pixel
The pixel (a word invented from "picture element") is the basic unit of
programmable color on a computer display or in a computer image. Think
of it as a logical - rather than a physical - unit. The physical size of a pixel
depends on how you've set the resolution for the display screen. If you've
set the display to its maximum resolution, the physical size of a pixel will
equal the physical size of the dot pitch (let's just call it the dot size) of the
display. If, however, you've set the resolution to something less than the
maximum resolution, a pixel will be larger than the physical size of the
screen's dot (that is, a pixel will use more than one dot).

Screen image sharpness is sometimes expressed as dpi (dots per inch). (In
this usage, the term dot means pixel, not dot as in dot pitch.) Dots per
inch is determined by both the physical screen size and the resolution
setting. A given image will have lower resolution - fewer dots per inch - on
a larger screen as the same data is spread out over a larger physical area.
On the same size screen, the image will have lower resolution if the
resolution setting is made lower: resetting from 800 by 600 pixels to
640 by 480 means fewer dots per inch on the screen and an image that is
less sharp. (On the other hand, individual image elements such as text
will be larger in size.)
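As a rough worked example (the 12.5-inch display width here is a hypothetical figure,
not from the text): at 800 by 600, the horizontal density is 800 / 12.5 = 64 dots per
inch; dropping the setting to 640 by 480 on the same screen gives 640 / 12.5 = 51.2
dots per inch, which is why the image looks less sharp.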
How does bwareaopen decide what to remove?
You have 2 connected blobs. The one on the left is 8 pixels big (area of 8 pixels). The blob on the
right is 3 pixels. When you called bwareaopen, it got rid of blobs less than 4 pixels. Since the blob
with an area of 3 is less than 4, it was removed. Does that explain it? It has nothing to do with
connectivity here because all your blobs are 4-connected. Now if you had an extra pixel diagonally
connected to the blob on the left, like this:

conn=4 means

-X-
X0X
-X-

conn=8 means

XXX
X0X
XXX

A =

1 1 1 0 0 1
0 1 1 0 0 1
0 1 1 0 0 1
0 1 0 1 0 0

Now there are 2 blobs if you consider them 8-connected, but 3 blobs if you consider them
4-connected. The pixel at row 4, column 4 is 8-connected to the blob on the left, but not
4-connected. It would be removed with

bwareaopen(A, 4, 4)

but not with

bwareaopen(A, 4, 8)

because in the second case it's connected while in the first case it's not connected.
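A runnable sketch of the same experiment (the matrix is an illustrative reconstruction
consistent with the description above: an 8-pixel blob, a lone pixel at row 4, column 4
touching it only diagonally, and a 3-pixel blob on the right):

A = logical([1 1 1 0 0 1
             0 1 1 0 0 1
             0 1 1 0 0 1
             0 1 0 1 0 0]);
bwareaopen(A, 4, 4)   % pixel (4,4) is its own 4-connected blob (area 1) -> removed
bwareaopen(A, 4, 8)   % pixel (4,4) joins the left blob (area 9) -> kept

In both calls the 3-pixel blob on the right is removed, since its area is below 4 under
either connectivity.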

Image Processing Toolbox

bwareaopen
Binary area open; remove small objects
Syntax

BW2 = bwareaopen(BW,P)
BW2 = bwareaopen(BW,P,CONN)

Description
BW2 = bwareaopen(BW,P) removes from a binary image all connected
components (objects) that have fewer than P pixels, producing another
binary image, BW2. The default connectivity is 8 for two dimensions, 26 for
three dimensions, and conndef(ndims(BW),'maximal') for higher dimensions.

BW2 = bwareaopen(BW,P,CONN) specifies the desired connectivity. CONN may
have any of the following scalar values.

Value   Meaning

Two-dimensional connectivities
4       4-connected neighborhood
8       8-connected neighborhood

Three-dimensional connectivities
6       6-connected neighborhood
18      18-connected neighborhood
26      26-connected neighborhood

Connectivity may be defined in a more general way for any dimension by
using for CONN a 3-by-3-by-...-by-3 matrix of 0's and 1's. The 1-valued
elements define neighborhood locations relative to the center element
of CONN. Note that CONN must be symmetric about its center element.

Class Support
BW can be a logical or numeric array of any dimension, and it must be
nonsparse. The return value, BW2, is of class logical.

Algorithm
The basic steps are:

1. Determine the connected components:

   L = bwlabeln(BW, CONN);

2. Compute the area of each component:

   S = regionprops(L, 'Area');

3. Remove small objects:

   bw2 = ismember(L, find([S.Area] >= P));
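A quick sketch checking these three steps against bwareaopen itself (text.png is a
binary sample image shipped with the toolbox; P and CONN are arbitrary choices):

BW   = imread('text.png');                 % binary sample image
P    = 20;  CONN = 8;
L    = bwlabeln(BW, CONN);                 % 1. label connected components
S    = regionprops(L, 'Area');             % 2. area of each component
bw2  = ismember(L, find([S.Area] >= P));   % 3. keep only the large components
isequal(bw2, bwareaopen(BW, P, CONN))      % returns logical 1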

Otsu's method
http://www.labbookpages.co.uk/software/imgProc/otsuThreshold.html#examples

[Figure: an original image and the same image thresholded using Otsu's algorithm]

In computer vision and image processing, Otsu's method, named after Nobuyuki
Otsu (Ōtsu Nobuyuki), is used to automatically perform clustering-based
image thresholding,[1] that is, the reduction of a graylevel image to a binary image. The
algorithm assumes that the image contains two classes of pixels following a bi-modal
histogram (foreground pixels and background pixels); it then calculates the optimum
threshold separating the two classes so that their combined spread (intra-class variance)
is minimal, or equivalently (because the sum of pairwise squared distances is constant),
so that their inter-class variance is maximal.[2] Consequently, Otsu's method is roughly a
one-dimensional, discrete analog of Fisher's Discriminant Analysis.
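A from-scratch MATLAB sketch of this criterion, searching all 256 gray levels for the
threshold that maximises the inter-class variance (the built-in graythresh implements
the same method; coins.png is just a sample image):

I      = imread('coins.png');
counts = imhist(I);                      % 256-bin histogram
p      = counts / sum(counts);           % gray-level probabilities
sigmaB = zeros(256, 1);                  % inter-class variance per threshold
for t = 1:255
    w0 = sum(p(1:t));   w1 = 1 - w0;     % class weights below/above threshold
    if w0 == 0 || w1 == 0, continue, end
    mu0 = sum((0:t-1)' .* p(1:t))     / w0;   % mean of class 0 (levels 0..t-1)
    mu1 = sum((t:255)' .* p(t+1:256)) / w1;   % mean of class 1 (levels t..255)
    sigmaB(t) = w0 * w1 * (mu0 - mu1)^2;      % inter-class variance
end
[~, t] = max(sigmaB);                    % threshold with maximal inter-class variance
BW = I >= t;                             % binarise at the optimal threshold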

The extension of the original method to multi-level thresholding is referred to as the
multi-Otsu method.[3]

Edge detection
http://www.owlnet.rice.edu/~elec539/Projects97/morphjrks/moredge.html

The Prewitt operator is used for edge detection in an image. It detects two types of edges:

Horizontal edges

Vertical edges

Edges are calculated using the difference between corresponding pixel intensities of an image. All the
masks used for edge detection are also known as derivative masks: an image is also a signal, and
changes in a signal can only be measured by differentiation. That is why these operators are also called
derivative operators or derivative masks.
All derivative masks should have the following properties:

Opposite signs should be present in the mask.

The sum of the mask should be equal to zero.

More weight means stronger edge detection.


The Prewitt operator provides two masks: one for detecting edges in the horizontal direction and
another for detecting edges in the vertical direction.

VERTICAL DIRECTION:

-1  0  1
-1  0  1
-1  0  1

The above mask finds edges in the vertical direction because of the column of zeros in the vertical
direction. When you convolve this mask with an image, it gives you the vertical edges in the image.

HOW IT WORKS:
When we apply this mask to the image, it makes vertical edges prominent. It works like a first-order
derivative, calculating the difference of pixel intensities in an edge region. As the center column is
zero, it does not include the original values of the image but rather calculates the difference of the
right and left pixel values around the edge. This increases the edge intensity, so the edges appear
enhanced compared to the original image.

HORIZONTAL DIRECTION:

-1 -1 -1
 0  0  0
 1  1  1

The above mask finds edges in the horizontal direction because the row of zeros lies in the horizontal
direction. When you convolve this mask with an image, it makes the horizontal edges in the image
prominent.

HOW IT WORKS:
This mask makes the horizontal edges in an image prominent. It works on the same principle as the
mask above, calculating the difference between the pixel intensities around a particular edge. As the
center row of the mask consists of zeros, it does not include the original edge values in the image but
rather calculates the difference of the pixel intensities above and below the particular edge, thus
increasing the sudden change of intensities and making the edge more visible. Both of the above masks
follow the principle of derivative masks: both have opposite signs in them, and both sum to zero. The
third condition does not apply to this operator, because both masks are standardized and we cannot
change the values in them.
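A minimal MATLAB sketch of both Prewitt masks (imfilter is from the Image Processing
Toolbox; cameraman.tif is a sample image used here only as a stand-in):

I  = im2double(imread('cameraman.tif'));
Kv = [-1 0 1; -1 0 1; -1 0 1];       % vertical-edge mask (zero column)
Kh = Kv';                            % horizontal-edge mask (zero row)
Ev = imfilter(I, Kv, 'replicate');   % responds to vertical edges
Eh = imfilter(I, Kh, 'replicate');   % responds to horizontal edges
imshowpair(abs(Ev), abs(Eh), 'montage');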

AFTER APPLYING VERTICAL MASK:


After applying the vertical mask to the sample image, the following image is obtained. This image
contains the vertical edges. You can judge it more accurately by comparing it with the horizontal-edges picture.

AFTER APPLYING HORIZONTAL MASK:


After applying the horizontal mask to the sample image, the following image is obtained.

COMPARISON:
As you can see, in the first picture, to which we applied the vertical mask, all the vertical edges are
more visible than in the original image. Similarly, in the second picture we applied the horizontal mask,
and as a result all the horizontal edges are visible. In this way, we can detect both horizontal and
vertical edges in an image.

Sobel Operator:
The Sobel operator is very similar to the Prewitt operator. It is also a derivative mask and is used for
edge detection. Like the Prewitt operator, the Sobel operator detects two kinds of edges in an image:

Vertical direction

Horizontal direction

Difference from the Prewitt operator:

The major difference is that in the Sobel operator the mask coefficients are not fixed; they can be
adjusted according to our requirements, provided they do not violate any property of derivative masks.

FOLLOWING IS THE VERTICAL MASK OF THE SOBEL OPERATOR:

-1  0  1
-2  0  2
-1  0  1

This mask works exactly like the Prewitt operator's vertical mask. The only difference is that it has
-2 and 2 values in the center of the first and third columns. When applied to an image, this mask
highlights the vertical edges.

HOW IT WORKS:
When we apply this mask to the image, it makes vertical edges prominent. It works like a first-order
derivative, calculating the difference of pixel intensities in an edge region.
As the center column is zero, it does not include the original values of the image but rather calculates
the difference of the right and left pixel values around the edge. The center values of the first and
third columns are -2 and 2 respectively.
This gives more weight to the pixel values around the edge region, which increases the edge intensity,
so the edges appear enhanced compared to the original image.

FOLLOWING IS THE HORIZONTAL MASK OF THE SOBEL OPERATOR:

-1 -2 -1
 0  0  0
 1  2  1

The above mask finds edges in the horizontal direction because the row of zeros lies in the horizontal
direction. When you convolve this mask with an image, it makes the horizontal edges in the image
prominent. The only difference from the Prewitt horizontal mask is that it has -2 and 2 as the center
elements of the first and third rows.

HOW IT WORKS:
This mask makes the horizontal edges in an image prominent. It works on the same principle as the
mask above, calculating the difference between the pixel intensities around a particular edge. As the
center row of the mask consists of zeros, it does not include the original edge values in the image but
rather calculates the difference of the pixel intensities above and below the particular edge, thus
increasing the sudden change of intensities and making the edge more visible.
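A matching sketch for the Sobel masks, combining both responses into a gradient
magnitude (as with the Prewitt sketch earlier, cameraman.tif is only a stand-in; the
built-in edge(I,'sobel') wraps the same masks plus a threshold):

I  = im2double(imread('cameraman.tif'));
Sv = [-1 0 1; -2 0 2; -1 0 1];       % Sobel vertical-edge mask
Sh = Sv';                            % Sobel horizontal-edge mask
G  = hypot(imfilter(I, Sv, 'replicate'), imfilter(I, Sh, 'replicate'));
imshow(G, []);                       % gradient magnitude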
Now it's time to see these masks in action:

SAMPLE IMAGE:
Following is a sample picture to which we will apply the above two masks, one at a time.

AFTER APPLYING VERTICAL MASK:


After applying the vertical mask to the sample image, the following image is obtained.

AFTER APPLYING HORIZONTAL MASK:


After applying the horizontal mask to the sample image, the following image is obtained.

COMPARISON:
As you can see, in the first picture, to which we applied the vertical mask, all the vertical edges are
more visible than in the original image. Similarly, in the second picture we applied the horizontal mask,
and as a result all the horizontal edges are visible.
In this way, we can detect both horizontal and vertical edges in an image. Also, if you compare the
result of the Sobel operator with the Prewitt operator, you will find that the Sobel operator finds more
edges, or makes edges more visible, compared to the Prewitt operator.
This is because the Sobel operator allots more weight to the pixel intensities around the edges.

APPLYING MORE WEIGHT TO THE MASK

Now we can also see that the more weight we apply to the mask, the more edges it finds for us. Also,
as mentioned at the start of the tutorial, there are no fixed coefficients in the Sobel operator, so here
is another weighted operator:

-1  0  1
-5  0  5
-1  0  1

If you compare the result of this mask with that of the Prewitt vertical mask, it is clear that this mask
will give out more edges, simply because we have allotted more weight to the mask.
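A sketch comparing this heavier mask against the Prewitt vertical mask (same
assumptions as the earlier sketches; note the mask still sums to zero and keeps
opposite signs, as the derivative-mask properties require):

I  = im2double(imread('cameraman.tif'));
Kw = [-1 0 1; -5 0 5; -1 0 1];       % center weight raised from 2 to 5
Kp = [-1 0 1; -1 0 1; -1 0 1];       % Prewitt vertical mask for comparison
imshowpair(abs(imfilter(I, Kw, 'replicate')), ...
           abs(imfilter(I, Kp, 'replicate')), 'montage');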
