The simplest thresholding methods replace each pixel in an image with a black
pixel if the image intensity I(x, y) is less than some fixed constant T (that
is, I(x, y) < T), or with a white pixel if the image intensity is greater than
or equal to that constant. In the example image on the right, this results in
the dark tree becoming completely black, and the white snow becoming
completely white.
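As a sketch, this fixed-threshold rule takes only a few lines of NumPy; the 3x3 image and the value T = 128 below are made up purely for illustration:

```python
import numpy as np

# Hypothetical 8-bit grayscale image and fixed threshold T.
image = np.array([[ 30,  80, 200],
                  [120,  40, 220],
                  [250,  10,  90]], dtype=np.uint8)
T = 128

# Pixels below T become black (0); pixels at or above T become white (255).
binary = np.where(image < T, 0, 255).astype(np.uint8)
print(binary)
```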
Histogram shape-based methods, where, for example, the peaks, valleys and
curvatures of the smoothed histogram are analysed
Screen image sharpness is sometimes expressed as dpi (dots per inch). (In
this usage, the term dot means pixel, not dot as in dot pitch.) Dots per
inch is determined by both the physical screen size and the resolution
setting. A given image will have lower resolution - fewer dots per inch - on
a larger screen as the same data is spread out over a larger physical area.
On the same size screen, the image will have lower resolution if the
resolution setting is made lower - resetting from 800 by 600 pixels
(horizontal by vertical) to 640 by 480 means fewer dots per inch on
the screen and an image that is less sharp. (On the other hand, individual
image elements such as text will be larger in size.)
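The relationship is simple division: effective dots per inch is pixels divided by physical inches along each axis. A small sketch, with a hypothetical 15.0 x 11.25 inch display area:

```python
# Effective dots per inch = pixel count / physical size in inches, per axis.
def dpi(pixels_wide, pixels_high, screen_w_in, screen_h_in):
    return pixels_wide / screen_w_in, pixels_high / screen_h_in

# Hypothetical 15.0 x 11.25 inch screen area at two resolution settings:
print(dpi(800, 600, 15.0, 11.25))  # 800x600 setting
print(dpi(640, 480, 15.0, 11.25))  # 640x480 setting: fewer dots per inch
```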
How bwareaopen decides what to remove
You have 2 connected blobs. The one on the left is 8 pixels big (an area of 8
pixels). The blob on the right is 3 pixels. When you called bwareaopen, it got
rid of blobs smaller than 4 pixels. Since the blob with an area of 3 is less
than 4, it was removed. Does that explain it? It has nothing to do with
connectivity here, because all your blobs are 4-connected. Now suppose you had
an extra pixel diagonally connected to the blob on the left, like this:
conn=4 means
-X-
X0X
-X-
conn=8 means
XXX
X0X
XXX
Now there are 2 blobs if you consider them as 8-connected, but 3 blobs if you
consider them as 4-connected. The pixel at row 4, column 4 is 8-connected to
the blob on the left, but not 4-connected. It would be removed with
bwareaopen(A, 4, 4)
but not with
bwareaopen(A, 4, 8)
because in the second case it's connected while in the first case it's not.
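The same effect can be reproduced outside MATLAB. Below is a sketch using scipy.ndimage, with a made-up binary matrix that matches the description (an 8-pixel blob on the left, a 3-pixel blob on the right, and one pixel touching the left blob only diagonally):

```python
import numpy as np
from scipy import ndimage

# Hypothetical binary image: 8-pixel left blob, 3-pixel right blob, and one
# extra pixel at row 4, column 4 (1-based) that is only diagonally connected
# to the left blob.
A = np.array([[1, 1, 0, 0, 0, 1],
              [1, 1, 1, 0, 0, 1],
              [1, 1, 1, 0, 0, 1],
              [0, 0, 0, 1, 0, 0]])

conn4 = ndimage.generate_binary_structure(2, 1)  # 4-connectivity (cross)
conn8 = ndimage.generate_binary_structure(2, 2)  # 8-connectivity (full 3x3)

_, n4 = ndimage.label(A, structure=conn4)
_, n8 = ndimage.label(A, structure=conn8)
print(n4, n8)  # 3 blobs under 4-connectivity, 2 under 8-connectivity
```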
bwareaopen
Binary area open; remove small objects
Syntax
BW2 = bwareaopen(BW,P)
BW2 = bwareaopen(BW,P,CONN)
Description
bwareaopen(BW,P) removes from a binary image all connected
components (objects) that have fewer than P pixels, producing another
binary image BW2. The default connectivity is 8 for two dimensions, 26 for
three dimensions, and conndef(ndims(BW),'maximal') for higher dimensions.
BW2 = bwareaopen(BW,P,CONN) specifies the desired connectivity. CONN may have
any of the following scalar values:

Value   Meaning
Two-dimensional connectivities
4       4-connected neighborhood
8       8-connected neighborhood
Three-dimensional connectivities
6       6-connected neighborhood
18      18-connected neighborhood
26      26-connected neighborhood
Algorithm
The basic steps are:
1. Determine the connected components:
L = bwlabeln(BW, CONN);
2. Compute the area of each component:
S = regionprops(L, 'Area');
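The steps above can be sketched in Python as well. This is not the MATLAB implementation, just a minimal NumPy/SciPy equivalent assuming a 2-D binary image: label the components, measure their areas, and keep only those with at least P pixels:

```python
import numpy as np
from scipy import ndimage

def area_open(bw, p, conn=8):
    """Remove connected components smaller than p pixels (bwareaopen-style)."""
    structure = ndimage.generate_binary_structure(2, 2 if conn == 8 else 1)
    labels, n = ndimage.label(bw, structure=structure)      # 1. label components
    areas = ndimage.sum(bw, labels, index=range(1, n + 1))  # 2. area per label
    keep = np.flatnonzero(areas >= p) + 1                   # labels big enough
    return np.isin(labels, keep)                            # 3. rebuild image

# Toy example: a 4-pixel blob survives, two isolated pixels are removed.
bw = np.array([[1, 1, 0, 1],
               [1, 1, 0, 0],
               [0, 0, 0, 1]], dtype=bool)
print(area_open(bw, 4).astype(int))
```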
Otsu's method
http://www.labbookpages.co.uk/software/imgProc/otsuThreshold.html#examples
From Wikipedia, the free encyclopedia
In computer vision and image processing, Otsu's method, named after Nobuyuki
Otsu (Ōtsu Nobuyuki), is used to automatically perform clustering-based image
thresholding,[1] or, the reduction of a gray-level image to a binary image.
The algorithm assumes that the image contains two classes of pixels following
a bimodal histogram (foreground pixels and background pixels); it then
calculates the optimum threshold separating the two classes so that their
combined spread (intra-class variance) is minimal, or equivalently (because
the sum of pairwise squared distances is constant), so that their inter-class
variance is maximal.[2] Consequently, Otsu's method is roughly a
one-dimensional, discrete analog of Fisher's Discriminant Analysis.
The extension of the original method to multi-level thresholding is referred
to as the Multi Otsu method.[3]
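A from-scratch sketch of the method in NumPy (the synthetic bimodal image below is fabricated for the demonstration): for every candidate threshold, compute the class probabilities and means, and keep the threshold that maximizes the inter-class variance.

```python
import numpy as np

def otsu_threshold(image):
    """Return the threshold maximizing inter-class variance (Otsu's method)."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()        # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0     # mean of class 0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1  # mean of class 1
        var_between = w0 * w1 * (mu0 - mu1) ** 2       # inter-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic bimodal image: dark pixels near 50, bright pixels near 200.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(50, 10, 500), rng.normal(200, 10, 500)])
img = np.clip(img, 0, 255).astype(np.uint8)
t = otsu_threshold(img)
print(t)
```

With two well-separated modes, the chosen threshold falls between them, near the midpoint of the class means.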
Edge detection
http://www.owlnet.rice.edu/~elec539/Projects97/morphjrks/moredge.html
The Prewitt operator is used for edge detection in an image. It detects two
types of edges:
Horizontal edges
Vertical edges
Edges are calculated using the difference between corresponding pixel
intensities of an image. All the masks that are used for edge detection are
also known as derivative masks, because an image is also a signal, and
changes in a signal can only be calculated using differentiation. That is why
these operators are also called derivative operators or derivative masks.
All derivative masks should have the following properties:
Opposite signs should be present in the mask.
The sum of the mask should be equal to zero.
More weight means more edge detection.
VERTICAL DIRECTION:
-1 0 1
-1 0 1
-1 0 1
The above mask will find edges in the vertical direction because of the
column of zeros in the vertical direction. When you convolve this mask with
an image, it will give you the vertical edges in the image.
HOW IT WORKS:
When we apply this mask to the image, it makes the vertical edges prominent.
It works like a first-order derivative and calculates the difference of pixel
intensities in an edge region. Since the center column consists of zeros, it
does not include the original values of the image but rather calculates the
difference of the right and left pixel values around that edge. This
increases the edge intensity, and the edge becomes enhanced compared to the
original image.
HORIZONTAL DIRECTION:
-1 -1 -1
 0  0  0
 1  1  1
The above mask will find edges in the horizontal direction because the row of
zeros lies in the horizontal direction. When you convolve this mask with an
image, it makes the horizontal edges in the image prominent.
HOW IT WORKS:
This mask makes the horizontal edges in an image prominent. It works on the
same principle as the mask above and calculates the difference among the
pixel intensities of a particular edge. Since the center row of the mask
consists of zeros, it does not include the original values of the edge in the
image but rather calculates the difference of the pixel intensities above and
below the particular edge, thus increasing the sudden change of intensities
and making the edge more visible. Both of the above masks follow the
principle of a derivative mask: both contain opposite signs and both sum to
zero. The third condition does not apply to this operator, because both masks
are standardized and we cannot change their values.
COMPARISON:
As you can see, in the first picture, to which we applied the vertical mask,
all the vertical edges are more visible than in the original image.
Similarly, in the second picture we applied the horizontal mask, and as a
result all the horizontal edges are visible. In this way we can detect both
horizontal and vertical edges in an image.
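The two Prewitt masks above can be tried out with a few lines of SciPy; the tiny image below is fabricated to contain only a vertical boundary, so the vertical mask responds strongly while the horizontal mask responds not at all:

```python
import numpy as np
from scipy import ndimage

# The two Prewitt masks from the text.
prewitt_v = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], dtype=float)
prewitt_h = np.array([[-1, -1, -1],
                      [ 0,  0,  0],
                      [ 1,  1,  1]], dtype=float)

# Hypothetical image with a sharp vertical boundary (dark left, bright right).
img = np.zeros((5, 6), dtype=float)
img[:, 3:] = 255.0

edges_v = ndimage.convolve(img, prewitt_v)
edges_h = ndimage.convolve(img, prewitt_h)
# The vertical mask fires at the dark/bright boundary; the horizontal mask
# sees no change from row to row, so its response is zero everywhere.
```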
Sobel Operator:
The Sobel operator is very similar to the Prewitt operator. It is also a
derivative mask and is used for edge detection. Like the Prewitt operator,
the Sobel operator detects two kinds of edges in an image:
Vertical direction
Horizontal direction
-1 0 1
-2 0 2
-1 0 1
This mask works exactly like the Prewitt vertical mask, with one difference:
it has the values -2 and 2 in the center of the first and third columns. When
applied to an image, this mask will highlight the vertical edges.
HOW IT WORKS:
When we apply this mask to the image, it makes the vertical edges prominent.
It works like a first-order derivative and calculates the difference of pixel
intensities in an edge region.
Since the center column consists of zeros, it does not include the original
values of the image but rather calculates the difference of the right and
left pixel values around that edge. The center values of the first and third
columns are -2 and 2 respectively.
This gives more weight to the pixel values around the edge region, which
increases the edge intensity; the edge becomes enhanced compared to the
original image.
-1 -2 -1
 0  0  0
 1  2  1
The above mask will find edges in the horizontal direction because the row of
zeros lies in the horizontal direction. When you convolve this mask with an
image, it makes the horizontal edges in the image prominent. The only
difference from the Prewitt mask is that it has -2 and 2 as the center
elements of the first and third rows.
HOW IT WORKS:
This mask makes the horizontal edges in an image prominent. It works on the
same principle as the mask above and calculates the difference among the
pixel intensities of a particular edge. Since the center row of the mask
consists of zeros, it does not include the original values of the edge in the
image but rather calculates the difference of the pixel intensities above and
below the particular edge, thus increasing the sudden change of intensities
and making the edge more visible.
Now it's time to see these masks in action:
SAMPLE IMAGE:
Following is a sample picture to which we will apply the above two masks, one
at a time.
COMPARISON:
As you can see, in the first picture, to which we applied the vertical mask,
all the vertical edges are more visible than in the original image.
Similarly, in the second picture we applied the horizontal mask, and as a
result all the horizontal edges are visible.
In this way we can detect both horizontal and vertical edges in an image.
Also, if you compare the result of the Sobel operator with that of the
Prewitt operator, you will find that the Sobel operator finds more edges, or
makes edges more visible, compared to the Prewitt operator.
This is because the Sobel operator allots more weight to the pixel
intensities around the edges.
The weight of the mask can be increased to get more pronounced edges; for
example, using -5 and 5 in place of -2 and 2:
-1 0 1
-5 0 5
-1 0 1
If you compare the result of this mask with that of the Prewitt vertical
mask, it is clear that this mask will give out more edges, simply because we
have allotted more weight in the mask.
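The weight claim is easy to verify numerically. Using the same fabricated vertical-boundary image as before, the Sobel mask's peak response exceeds the Prewitt mask's, because its center row carries double weight:

```python
import numpy as np
from scipy import ndimage

# Hypothetical test image: a single vertical edge between columns 2 and 3.
img = np.zeros((5, 6), dtype=float)
img[:, 3:] = 255.0

prewitt_v = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
sobel_v   = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

resp_prewitt = np.abs(ndimage.convolve(img, prewitt_v)).max()
resp_sobel   = np.abs(ndimage.convolve(img, sobel_v)).max()
print(resp_prewitt, resp_sobel)
# The Sobel response is larger because the center row carries more weight.
```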