
Automatic Pest Identification using Edge Detection and Colour Moments


Priyadharshini E., N. M. Santron, and Prethesh Kumar Bhalotia
Students, Department of Computer Applications
SSN College of Engineering, Kanchipuram Dt. 603110
indrapriyadharshini.e@gmail.com, nmsantron@gmail.com, smooth.munna@gmail.com

Abstract— The main aim of this paper is to develop a model for pest identification in agricultural crops. Using this model, the user can identify the pest incidence on a crop and obtain solutions for its control. The model is divided into two modules: edge detection and colour matching. The first module detects the presence of an insect using edge detection, and the second module identifies the type of insect using colour matching.

Keywords— Agriculture, Edge detection, Canny edge detection, Histogram, Colour Moments

I. INTRODUCTION

India is an agriculture-based country, and seventy percent of the population depends on agriculture. When pests affect the crops, there is a considerable decrease in production, which directly affects national growth. In most cases pests are seen on the leaves or stems of the plant. Therefore identifying plants, leaves and stems, and finding out the pest, the percentage of pest incidence and the symptoms of the pest attack, play a key role in the successful cultivation of crops. In order to increase crop productivity, farmers approach experts to seek advice regarding the treatment of pest incidence and suggestions for control. Sometimes they have to travel long distances to contact an expert, and even then the expert may not be available at that time. Sometimes the expert whom a farmer contacts may not be in a position to advise the farmer with the available information and knowledge. In these cases seeking expert advice is very expensive and time consuming. If the pest incidence is not spotted at the beginning stage, the pests will spread to nearby farms and result in massive destruction.

II. PROBLEM SPECIFICATION

The proposed pest-control system consists of two modules: edge detection and colour matching. The first step is to pre-process the source image using a Gaussian filter. Features are then extracted using the Canny edge detection technique. The second module is the colour matching module, in which the pest details in the database are compared with the extracted feature data.

The whole process of pest detection is shown in figure (1). The source images are taken from the Texas A&M University database (Ref: http://insects.tamu.edu/imagegallery/). Sample image details are shown in figure (2); the insects listed in figure (2) are chewing insects.

Figure (1) Pest Detection Model
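The Gaussian pre-processing step described above can be sketched in numpy. This is an illustrative sketch rather than the authors' code; the 5x5 mask size and sigma = 1.4 are common defaults, not values taken from the paper.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.4):
    """Build a normalized 2-D Gaussian mask of the given size."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return kernel / kernel.sum()

def gaussian_smooth(image, size=5, sigma=1.4):
    """Pre-process a greyscale image by filtering it with the Gaussian
    mask (edge-replicated borders, direct convolution for clarity)."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    h, w = image.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = np.sum(padded[y:y + size, x:x + size] * k)
    return out
```

Smoothing a uniform region leaves it unchanged, while an isolated noisy pixel is spread out and attenuated, which is what makes the later edge-detection step more reliable.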
Insect name: BEET ARMYWORM
Description: adults have forewings which are mottled grayish-brown and have an expanse of about 1 1/4 inches; hind wings are silver white with a darker front margin; larvae are bright green with dark lateral stripes, about 1 1/4 inches long
Damage: larvae may defoliate plants

Insect name: DIAMONDBACK MOTH
Description: adults are grayish moths, about 1/3 inch long; males have the wings with a row of three diamond-shaped yellow spots where they meet down the middle of the back; folded wings flare outward and upward toward their tips; hind wings have a fringe of long hairs; larvae, which rarely exceed 1/3 inch, are pale yellowish-green with fine, scattered, erect black hairs over the body; they wiggle actively when disturbed; the pupa is in a gauzy sack so thin and loosely spun that it hardly conceals the pupa, about 3/8 inch long
Damage: larvae feed on the underside of leaves, leaving shothole-type damage; usually, outer leaves are attacked

Figure (2) Chewing Insects

A. Moving Object Identification

The insect is identified by finding the moving object in the frame. By calculating the difference of two images captured from the same area, the moving object can be identified. Once the moving object is identified, that area is extracted in order to detect the edges in it. Finding the moving object before applying edge detection reduces both the noise rate and the amount of computation.

Let N be the number of frames and n be the frame number. Let ∆ be the difference of frames, as follows:

∆ = (n-1)th frame - nth frame

In figure (3) the first image is the (n-1)th frame, the second image is the nth frame, and the third image is ∆. The white part is the difference between the input images, i.e. the moving object. The area around the moving object is then extracted.

Figure (4) Extracted area around the feature

B. Edge detection

Edge detection refers to the process of identifying and locating sharp discontinuities in an image. The discontinuities are abrupt changes in pixel intensity which characterize the boundaries of objects in a scene. The variables involved in the selection of an edge detection operator include the following:

• Edge orientation: The geometry of the operator determines a characteristic direction in which it is most sensitive to edges. Operators can be optimized to look for horizontal, vertical, or diagonal edges.

• Noise environment: Edge detection is difficult in noisy images, since both the noise and the edges contain high-frequency content. Attempts to reduce the noise result in blurred and distorted edges. Operators used on noisy images are typically larger in scope, so they can average enough data to discount localized noisy pixels. This results in less accurate localization of the detected edges.

• Edge structure: Not all edges involve a step change in intensity. Effects such as refraction or poor focus can result in objects with boundaries defined by a gradual change in intensity. The operator needs to be chosen to be responsive to such a gradual change in those cases. Newer wavelet-based techniques actually characterize the nature of the transition for each edge in order to distinguish, for example, edges associated with hair from edges associated with a face.
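Returning to section A, the frame-differencing step (∆ = (n-1)th frame - nth frame) can be sketched as follows. This is a minimal numpy illustration; the threshold value and the bounding-box extraction are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def frame_difference(prev_frame, curr_frame):
    """Absolute difference of two greyscale frames: Delta = (n-1)th - nth.
    Non-zero (white) pixels mark the moving object."""
    return np.abs(prev_frame.astype(int) - curr_frame.astype(int))

def moving_object_bbox(delta, threshold=20):
    """Extract the bounding box around pixels that changed by more than
    `threshold`, i.e. the area around the moving object. Returns
    (top, bottom, left, right), or None when nothing moved."""
    ys, xs = np.nonzero(delta > threshold)
    if ys.size == 0:
        return None
    return ys.min(), ys.max(), xs.min(), xs.max()

# Example: a bright 2x2 "insect" appears in the second frame.
prev = np.zeros((10, 10), dtype=np.uint8)
curr = prev.copy()
curr[4:6, 5:7] = 255
delta = frame_difference(prev, curr)
bbox = moving_object_bbox(delta)  # -> (4, 5, 5, 6)
```

Restricting the later edge-detection step to this bounding box is what reduces the noise rate and the computation, as described in section A.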

C. Canny Edge detection algorithm

The Canny edge detection algorithm is known to many as the optimal edge detector. Canny's intention was to enhance the many edge detectors already available at the time he started his work. He was very successful in achieving his goal, and his ideas and methods can be found in his paper, "A Computational Approach to Edge Detection". In that paper he followed a list of criteria to improve on the existing methods of edge detection. The first, and most obvious, criterion is a low error rate: it is important that edges occurring in images are not missed and that there are no responses to non-edges. The second criterion is that the edge points be well localized.
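These criteria translate into a staged implementation. A minimal sketch of the core stages (Sobel gradients, direction quantization, non-maximum suppression) is given below, assuming numpy and a greyscale image that has already been Gaussian-smoothed. This is an illustrative sketch, not the authors' code; hysteresis thresholding, which usually completes the Canny pipeline, is omitted because this paper's steps end at non-maximum suppression.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)   # gradient in x (columns)
SOBEL_Y = np.array([[ 1,  2,  1],
                    [ 0,  0,  0],
                    [-1, -2, -1]], dtype=float)  # gradient in y (rows)

def filter3x3(image, kernel):
    """Direct 3x3 filtering with edge-replicated padding (for clarity)."""
    padded = np.pad(image.astype(float), 1, mode="edge")
    h, w = image.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = np.sum(padded[y:y + 3, x:x + 3] * kernel)
    return out

def gradient(image):
    """Edge strength |G| = |Gx| + |Gy| and direction Theta in degrees.
    np.arctan2 handles the Gx == 0 case without a division error."""
    gx = filter3x3(image, SOBEL_X)
    gy = filter3x3(image, SOBEL_Y)
    return np.abs(gx) + np.abs(gy), np.degrees(np.arctan2(gy, gx))

def quantize_direction(theta):
    """Resolve each angle to the closest of 0, 45, 90 or 135 degrees."""
    angle = np.mod(theta, 180.0)
    return (np.round(angle / 45.0).astype(int) % 4) * 45

def non_max_suppression(magnitude, direction):
    """Suppress (set to 0) any pixel that is not a local maximum along
    its gradient direction, leaving a thin edge line."""
    offsets = {0: (0, 1), 45: (1, 1), 90: (1, 0), 135: (1, -1)}
    h, w = magnitude.shape
    out = np.zeros_like(magnitude)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            dy, dx = offsets[int(direction[y, x])]
            if (magnitude[y, x] >= magnitude[y + dy, x + dx]
                    and magnitude[y, x] >= magnitude[y - dy, x - dx]):
                out[y, x] = magnitude[y, x]
    return out
```

On a vertical step edge, `gradient` produces a strong horizontal response which `non_max_suppression` thins to the pixels nearest the boundary.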
Figure (3) Finding Moving Object

In order to implement the Canny edge detector, a series of steps was followed, based on our application:

i. The first step is to filter out any noise in the original image before trying to locate and detect any edges. Because the Gaussian filter can be computed using a simple mask, it is used exclusively in the Canny algorithm. The Gaussian mask used in our implementation is shown below.

ii. After smoothing the image and eliminating the noise, the next step is to find the edge strength by taking the gradient of the image. The Sobel operator performs a 2-D spatial gradient measurement on an image, from which the approximate absolute gradient magnitude (edge strength) at each point can be found. The Sobel operator uses a pair of 3x3 convolution masks, one estimating the gradient in the x-direction (columns) and the other estimating the gradient in the y-direction (rows). The magnitude, or edge strength, of the gradient is then approximated using the formula:

|G| = |Gx| + |Gy|

iii. The direction of the edge is computed using the gradients in the x and y directions. However, an error will be generated whenever Gx is equal to zero, so a restriction has to be set in the code for this case: whenever the gradient in the x direction is zero, the edge direction is set to 90 degrees or 0 degrees, depending on the value of the gradient in the y direction. If Gy is zero, the edge direction is 0 degrees; otherwise the edge direction is 90 degrees. The formula for finding the edge direction is simply:

Theta = invtan (Gy / Gx)

iv. Once the edge direction is known, the next step is to relate it to a direction that can be traced in an image. So if the pixels of a 5x5 image are aligned as follows:

x x x x x
x x x x x
x x a x x
x x x x x
x x x x x

then, looking at pixel "a", there are only four possible directions when describing the surrounding pixels: 0 degrees (the horizontal direction), 45 degrees (along the positive diagonal), 90 degrees (the vertical direction), or 135 degrees (along the negative diagonal). The edge orientation therefore has to be resolved into whichever of these four directions it is closest to (e.g. if the orientation angle is found to be 3 degrees, it is made 0 degrees).

v. After the edge directions are known, non-maximum suppression is applied. Non-maximum suppression traces along the edge in the edge direction and suppresses any pixel value (sets it equal to 0) that is not considered to be an edge. This gives a thin line in the output image.

Figure (5) After detecting the edges

III. COLOUR MATCHING

Colour moments are measures that can be used to differentiate images based on their colour features. Once calculated, these moments provide a measure of colour similarity between images. These similarity values can then be compared with the values of images indexed in a database, for tasks such as image retrieval. The basis of colour moments lies in the assumption that the distribution of colour in an image can be interpreted as a probability distribution. Probability distributions are characterized by a number of unique moments (e.g. normal distributions are differentiated by their mean and variance). It therefore follows that if the colour in an image follows a certain probability distribution, the moments of that distribution can be used as features to identify that image based on colour.
MOMENT 1 - Mean:

E_i = (1/N) * sum(j = 1..N) p_ij

MOMENT 2 - Standard Deviation:

sigma_i = sqrt( (1/N) * sum(j = 1..N) (p_ij - E_i)^2 )

MOMENT 3 - Skewness:

s_i = cbrt( (1/N) * sum(j = 1..N) (p_ij - E_i)^3 )

Where:
H, I : the two image distributions being compared
i : the current channel index (e.g. 1 = H, 2 = S, 3 = V)
r : the number of channels (e.g. 3)
N : the number of pixels in the image
p_ij : the value of the j-th pixel of the image in channel i
E_i1, E_i2 : the first moments (mean) of the two image distributions
sigma_i1, sigma_i2 : the second moments (standard deviation) of the two image distributions
s_i1, s_i2 : the third moments (skewness) of the two image distributions
w_i1, w_i2, w_i3 : the weights for each moment
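The three moments and the weighted moment-difference comparison can be sketched in numpy as follows. The equal weights are an illustrative choice, since the paper leaves the weight values unspecified.

```python
import numpy as np

def colour_moments(image):
    """Per-channel mean (E_i), standard deviation (sigma_i) and skewness
    (s_i) of an image given as an (H, W, channels) array."""
    pixels = image.reshape(-1, image.shape[-1]).astype(float)  # N x r
    mean = pixels.mean(axis=0)
    std = np.sqrt(((pixels - mean) ** 2).mean(axis=0))
    skew = np.cbrt(((pixels - mean) ** 3).mean(axis=0))
    return mean, std, skew

def moment_distance(img1, img2, weights=(1.0, 1.0, 1.0)):
    """d(H, I): sum over channels of the weighted absolute differences
    between the first three colour moments of the two images."""
    m1 = colour_moments(img1)
    m2 = colour_moments(img2)
    w = np.asarray(weights)
    return float(sum(w[k] * np.abs(m1[k] - m2[k]).sum() for k in range(3)))
```

A smaller distance means two images are more similar in colour, so the database entry with the minimum distance to the extracted pest region is reported as the match.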

A function of the similarity between two image distributions is defined as the sum of the weighted differences between the moments of the two distributions. Formally this is:

d(H, I) = sum(i = 1..r) ( w_i1 * |E_i1 - E_i2| + w_i2 * |sigma_i1 - sigma_i2| + w_i3 * |s_i1 - s_i2| )

IV. CONCLUSION

A novel method has been introduced for identifying the pest and detecting the pest type. We tested our algorithm on various images. The method has some disadvantages under varying lighting conditions, since we are using colour moments; this can be rectified by using histogram methods. We are presently working on adopting histogram methods.

Image File   Mean        SD          Skew
             (R,G,B)     (R,G,B)     (R,G,B)

             32.2697     71.6262     91.0489
             27.5776     64.8439     86.1210
             25.8594     63.6933     86.8712

             28.8451     57.9662     76.6682
             27.3571     52.2783     69.5969
             18.1350     42.0060     60.9845

             9.3551      31.4335     49.9757
             8.1644      28.1354     45.9737
             6.1485      22.6484     39.1242

             18.3069     57.6766     82.9678
             16.2749     51.1715     73.4773
             5.4213      19.8088     32.1288

             24.2741     62.0421     86.2623
             19.0499     49.6161     70.6894
             9.0604      27.9210     44.8747

             30.1699     59.6233     72.3847
             24.7666     49.9581     61.9873
             20.6847     42.9357     54.6724

             30.0994     58.8759     71.1542
             24.4765     50.2513     64.1398
             17.4749     39.3106     54.5075

REFERENCES

[1] Ehsan Nadernejad, "Edge Detection Techniques: Evaluations and Comparisons", Applied Mathematical Sciences, Vol. 2, 2008, no. 31, 1507-1520.
[2] Paul Bao, Lei Zhang, and Xiaolin Wu, "Canny Edge Detection Enhancement by Scale Multiplication", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, No. 9, September 2005.
[3] Shih Jau-Ling and Chen Ling-Hwei, "Color Image Retrieval Based on Primitives of Color Moments", Lecture Notes in Computer Science, 2002, Volume 2314, 19-27, DOI: 10.1007/3-540-45925-1_8.
[4] Florica Mindru, Theo Moons, and Luc Van Gool, "Color-Based Moment Invariants for Viewpoint and Illumination Independent Recognition of Planar Color Patterns", Thesis.
[5] Images taken from the database of the Texas A&M University website, http://insects.tamu.edu/imagegallery/
