
IAETSD JOURNAL FOR ADVANCED RESEARCH IN APPLIED SCIENCES ISSN (ONLINE): 2394-8442

AN EVALUATIVE MODEL FOR LEAF DISEASE DETECTION IN AGRICULTURE BASED ON CBIR

1 Dr. Jasmine Samraj, 2 Ms. V. Soumiya
1 Associate Professor, 2 M.Phil Research Scholar, PG and Research Department of Computer Science,
Quaid-E-Millath Government College for Women (A), Chennai.
jasminesamraj@gmail.com, soumiyaresearch@gmail.com

ABSTRACT:

In agriculture, plant leaf diseases are a major cause of significant reduction in both the quality and the quantity of crops, leading to economic loss. Farmers face problems arising from various types of plant diseases. Sometimes farmers are unable to diagnose a disease, which results in failure to identify the right type of disease and leads to crop damage if proper care is not taken at the right time. Farmers need information about fertilizers, pesticides, weather conditions, and so on. Today this information is available only in a scattered manner and does not support diagnosis of the various diseases. Image analysis using Content Based Image Retrieval (CBIR) is an important research method used widely in image processing. This paper discusses an approach to detect diseased plant leaf images and to identify the disease of the affected leaf using feature extraction techniques. The feature extraction method used in this research is divided into two major phases. The first phase concerns color space, the gray-level co-occurrence matrix (GLCM) and Canny edge detection. The second phase concerns the color histogram, the histogram of oriented gradients (HOG) and c-means clustering. Neural network classification is used for similarity matching between the featured query image and the database images. The main aim of this system is to provide different feature extraction techniques for effective retrieval of the required image.

Keywords: Content Based Image Retrieval (CBIR), Color Space, Gray-level Co-occurrence Matrix (GLCM), Canny Edge Detection,
Color Histogram, Histogram of Oriented Gradient (HOG), C-Means Clustering.

I. INTRODUCTION
Content Based Image Retrieval (CBIR), also termed Query By Image Content (QBIC) or content based visual information retrieval, is the application of computer vision approaches to the image retrieval problem, which can be stated as the problem of searching for digital images in a large database. The term content based means that the search evaluates the actual contents of the image rather than metadata such as keywords, tags, or descriptions associated with the image. The evaluation of the effectiveness of keyword-based image search is subjective and has not been well defined, and CBIR systems face similar challenges in defining success. CBIR systems have been developed recently in order to organize and utilize valuable image sources effectively and efficiently for huge collections of images.

Farmers always need satisfactory and easily obtained advice from experts. For an expert system to give such advice, it must have enough knowledge about the domain. Gathering enough knowledge and representing it in a machine-understandable format is a time-consuming and difficult job, and representing each and every kind of knowledge is still a research issue. Since a single picture is worth a thousand words, it is a good idea to acquire knowledge in images rather than only in text. An image is an easy way of communication without any language boundary. Therefore there is a need to build an expert system with CBIR that can capture and convey knowledge by searching for the image whose features are most similar to those of the image supplied by the user. The proposed work is developed to diagnose diseases in crops by matching the uploaded image of a diseased plant against a corpus of images.

II. LITERATURE REVIEW


Vinay S. Mandlik et al. [1] (2011), in Agricultural Plant Image Retrieval System Using CBIR, proposed a plant image retrieval system with a segmentation preprocessing step. Extracting plant regions from images with the MFMC segmentation technique allows the system to focus solely on the plant. For shape-based retrieval, SIFT features that capture local characteristics of the plant are used, together with newly proposed global shape descriptors based on the outer contour of the plant. The new global shape descriptors provided improvements over existing methods.

S. Nagasai and S. Jhansi Rani [2] (2015), in Plant Disease Identification using Segmentation Techniques, describe a method for identifying diseased rose plants based on important features extracted from leaf images. The leaf is identified by its color and shape features using a color histogram and an edge histogram. The combination of CBIR, the Canny edge detector and the HSI color model identifies the disease accurately. The Canny edge detector is implemented efficiently and the results are very accurate. The combination of the Canny edge detector and the HSI color histogram yields the most informative features, which, when classified with an SVM classifier, give very good disease detection.


Swathi Rao G. et al. [3] (2015), in Image Database Classification Using Neural Network with Gabor Filter and CBIR Technique, note that content based image retrieval is a technique that uses visual contents to search images from large-scale image databases according to the user's interest, where content refers to the color, shape and texture that can be derived from the image. The paper contemplates an image retrieval system using an artificial neural network (ANN) in MATLAB with the help of Gabor filter features. In the proposed system, the mean and standard deviation of the images are calculated after the images are filtered with a Gabor filter. The Gabor features respond well to image texture and keep the input to the neural network classifier clear and simple; the system is trained and tested, classifies the images of a vast database by relevance, and retrieves the required image.

Nagaraja S. and Prabhakar C.J. [4] (2015), in Low-level features for image retrieval based on extraction of directional binary patterns and its oriented gradients histogram, proposed a novel approach for content based image retrieval based on low-level features such as color, texture and shape. The Directional Binary Code (DBC), Haar wavelet and Histogram of Oriented Gradients (HOG) techniques are employed sequentially in order to extract color, texture and shape features from the image. Experiments are conducted on two benchmark databases, Wang's and Caltech 256. The performance of the approach on the Wang dataset is evaluated using precision, recall, retrieval rate and processing time based on the average results obtained over all experiments. The evaluation results support the claim that, in all experiments, the approach outperforms other image retrieval approaches.

III. PROPOSED METHOD


A content based image retrieval system can be implemented to develop a leaf disease detection tool that motivates and encourages farmers to identify the disease name and enhance their crop production. The proposed method has been tested on plant leaf images, comparing the original image with the images obtained using the different CBIR techniques.

3.1 PREPROCESSING USING MEDIAN FILTER

The median filter is a nonlinear digital filtering technique used to remove noise from an image. Noise reduction is a typical pre-processing step that improves the results of later processing. The median filter considers each pixel in the image in turn and looks at its neighbors to decide whether or not it is representative of its surroundings. Instead of simply replacing the pixel value with the mean of the neighboring pixel values, it replaces it with the median of those values. The median is calculated by first sorting all the pixel values from the surrounding neighborhood into numerical order and then replacing the pixel being considered with the middle value.
The median filtering output is

g(x, y) = med{ f(x − i, y − j) : (i, j) ∈ W }

where f(x, y) is the original image, g(x, y) is the output image, and W is the two-dimensional mask: the mask size is n × n (where n is commonly odd), such as 3 × 3 or 5 × 5, and the mask shape may be linear, square, circular, cross-shaped, etc.
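
To make this step concrete, here is a minimal Python sketch of median filtering (the paper's experiments use MATLAB; this illustrative equivalent assumes SciPy is available, and the function name denoise_leaf_image is ours):

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise_leaf_image(image, window=3):
    """Apply an n x n median filter (n = window) channel-wise to an RGB leaf image."""
    filtered = np.empty_like(image)
    for c in range(image.shape[2]):
        # each pixel is replaced by the median of its n x n neighbourhood W
        filtered[..., c] = median_filter(image[..., c], size=window)
    return filtered
```

A 3 × 3 mask is used by default; larger odd windows (5 × 5, 7 × 7) remove more noise at the cost of fine detail.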

3.2 PHASE-I

Phase-I uses color, texture and shape feature extraction techniques, namely color space, the gray-level co-occurrence matrix (GLCM) and Canny edge detection. Figure 2 shows the phase-I work flow diagram.

Figure 2. Phase-I work flow diagram.


3.2.1 COLOR FEATURE EXTRACTION USING COLOR SPACE

A color space is a specific organization of colors. In combination with physical device profiling, it allows for reproducible color in both analog and digital representations. A color space, also known as a color model, is an abstract mathematical model that describes the range of colors as tuples of numbers, typically 3 or 4 values (e.g. triples in RGB or quadruples in CMYK) or color components. A color space is a useful way for users to understand the color capabilities of a particular digital image. There are a variety of color spaces, such as RGB, CMYK, HSV and HLS.
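
As an illustration of how a color-space representation can be turned into a simple descriptor, the sketch below converts an RGB leaf image to HSV with scikit-image and keeps the per-channel means; the paper does not specify its exact descriptor, so this three-value summary is only an assumption:

```python
import numpy as np
from skimage import io, color

def color_space_descriptor(path):
    """Convert an RGB leaf image to HSV and return the mean of each channel."""
    rgb = io.imread(path)                     # (H, W, 3) RGB image
    hsv = color.rgb2hsv(rgb)                  # hue, saturation, value in [0, 1]
    return hsv.reshape(-1, 3).mean(axis=0)    # [mean_H, mean_S, mean_V]
```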

3.2.2 TEXTURE FEATURE EXTRACTION USING GLCM

The Gray-Level Co-occurrence Matrix (GLCM), also known as the gray-level spatial dependence matrix, is a statistical method of examining texture that considers the spatial relationship of pixels. The GLCM functions characterize the texture of an image by calculating how often pairs of pixels with specific values occur in a specified spatial relationship in the image. The matrix elements are computed by the equations shown below; the GLCM expresses the texture feature according to the correlation of pairs of pixel gray levels at different positions.

Energy
Energy is a measure of homogeneity and is the opposite of entropy. This feature describes texture uniformity: a higher energy value indicates greater homogeneity of the texture. Its range is [0, 1], and Energy is 1 for a constant image.

Energy = Σx Σy p(x, y)²

where x and y are the coordinates of the entries of the normalized co-occurrence matrix p(x, y).

Contrast
Contrast measures the local gray-level variation in the gray-level co-occurrence matrix. The contrast of an image is low when neighboring pixels have similar gray-level values: low contrast values correspond to smooth texture, while high values correspond to coarse texture. The range of Contrast is [0, (size(GLCM, 1) − 1)²], and Contrast is 0 for a constant image.

Contrast = Σx Σy (x − y)² p(x, y)

Entropy
Entropy represents disorder in a system and measures the randomness of the image texture. A completely random distribution has very high entropy because it represents maximal disarray. Entropy is larger for coarse textures and smaller for smooth textures, indicating which type of texture is statistically more disordered.

Entropy = − Σx Σy p(x, y) log p(x, y)

Correlation
Correlation measures the linear dependency between the gray levels of neighboring pixels. (Digital image correlation is an optical method that employs tracking and image registration techniques for accurate 2D and 3D measurements of changes in images.) The correlation is expressed as

Correlation = Σx Σy (x − μx)(y − μy) p(x, y) / (σx σy)

where μx, μy and σx, σy are the means and standard deviations of the row and column marginals of p(x, y).
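
The four statistics above can be computed from a normalized co-occurrence matrix as in the following sketch (a recent scikit-image providing graycomatrix/graycoprops is assumed; entropy is not offered by graycoprops, so it is computed directly from p(x, y)):

```python
import numpy as np
from skimage import io, color
from skimage.feature import graycomatrix, graycoprops

def glcm_texture_features(path, distance=1, angle=0.0):
    """Energy, contrast, correlation and entropy of a leaf image's GLCM."""
    gray = (color.rgb2gray(io.imread(path)) * 255).astype(np.uint8)
    glcm = graycomatrix(gray, distances=[distance], angles=[angle],
                        levels=256, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]                                   # normalized p(x, y)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))        # -sum p log p
    return {
        "energy":      graycoprops(glcm, "energy")[0, 0],
        "contrast":    graycoprops(glcm, "contrast")[0, 0],
        "correlation": graycoprops(glcm, "correlation")[0, 0],
        "entropy":     entropy,
    }
```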

3.2.3 SHAPE FEATURE EXTRACTION USING CANNY EDGE DETECTION

The Canny edge detection algorithm preserves structural properties that can be used for further content based image retrieval. Canny edge detection is a technique to extract useful structural information from different vision objects while dramatically reducing the amount of data to be processed; the general purpose of edge detection is to significantly reduce the amount of data in an image. The algorithm aims at the following criteria:
Detection: the probability of detecting real edge points should be maximized while the probability of falsely detecting non-edge points should be minimized. This corresponds to maximizing the signal-to-noise ratio.
Localization: the detected edges should be as close as possible to the real edges.
Number of responses: one real edge should not result in more than one detected edge.
The Canny edge detection algorithm proceeds in five steps (a minimal sketch follows the list):
Step 1: Apply a Gaussian filter to smooth the image in order to remove noise.
Step 2: Find the intensity gradients of the image.
Step 3: Apply non-maximum suppression to get rid of spurious responses to edge detection.
Step 4: Apply a double threshold to determine potential edges.
Step 5: Track edges by hysteresis: finalize the detection of edges by suppressing all other edges that are weak and not connected to strong edges.
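
A minimal sketch of this step, assuming scikit-image (whose canny() performs the same five-step pipeline internally):

```python
from skimage import io, color, feature

def canny_edge_map(path, sigma=1.0):
    """Return a binary edge map of the leaf: True marks edge pixels."""
    gray = color.rgb2gray(io.imread(path))
    # Gaussian smoothing, gradients, non-maximum suppression,
    # double thresholding and hysteresis are all performed by canny()
    return feature.canny(gray, sigma=sigma)
```

Increasing sigma smooths more aggressively and suppresses weak, noisy edges.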

3.3 PHASE-II
Phase-II uses color, texture and shape feature extraction techniques, namely the color histogram, the histogram of oriented gradients (HOG) and c-means clustering. Figure 3 shows the phase-II work flow diagram.


Figure 3. Phase-II work flow diagram.

3.3.1 COLOR FEATURE EXTRACTION USING COLOR HISTOGRAM

A color histogram is a distribution of the colors in an image. In digital images, a color histogram represents the number of pixels whose colors fall into each of a fixed list of color ranges. The color histogram can be expressed as three color histograms, one for each of the red, green and blue channels, showing the brightness distribution of each individual channel. For multi-spectral images, where each pixel is represented by an arbitrary number of measurements (i.e. beyond the three measurements in RGB), the color histogram is n-dimensional, with n being the number of measurements taken; each measurement has its own wavelength range of the light spectrum, some of which may lie outside the visible spectrum.
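
A minimal sketch of a three-channel color histogram descriptor (16 bins per channel is an arbitrary illustrative choice; the paper does not fix the bin count):

```python
import numpy as np
from skimage import io

def rgb_color_histogram(path, bins=16):
    """Concatenated R, G, B histograms, normalized to be independent of image size."""
    img = io.imread(path)
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]                      # one histogram per channel
    hist = np.concatenate(hists).astype(float)
    return hist / hist.sum()                         # 3 * bins values summing to 1
```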

3.3.2 TEXTURE FEATURE EXTRACTION USING HOG

The Histogram of Oriented Gradients (HOG) is a feature descriptor used to detect objects in computer vision and content based image retrieval. In this system it is used to extract the texture features of an image. It is very effective for representing objects and is widely used across many kinds of images. The HOG feature extraction technique counts occurrences of gradient orientations in localized portions of an image detection window or region of interest.
The following steps are used to compute the local histogram of oriented gradients:
First, compute the gradients of the image;
Next, build the histogram of orientations from the per-pixel votes within each cell;
Finally, normalize the histograms within each block of cells.

Gradient Computation

The gradient of an image is obtained by filtering it with the horizontal and vertical one-dimensional discrete derivative masks

DX = [-1 0 1] and DY = [-1 0 1]ᵀ

where DX and DY are the horizontal and vertical masks respectively. The X and Y derivatives are obtained using the convolution operations

IX = I * DX and IY = I * DY

The magnitude of the gradient is

|G| = √(IX² + IY²)

and the orientation of the gradient is given by

θ = arctan(IY / IX)
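
The gradient stage can be sketched as follows, filtering with the one-dimensional masks given above (SciPy assumed):

```python
import numpy as np
from scipy.ndimage import convolve1d

def image_gradients(gray):
    """Gradient magnitude |G| and orientation theta from the 1-D derivative masks."""
    gray = gray.astype(float)
    ix = convolve1d(gray, [-1, 0, 1], axis=1)   # horizontal derivative I_X = I * D_X
    iy = convolve1d(gray, [-1, 0, 1], axis=0)   # vertical derivative   I_Y = I * D_Y
    magnitude = np.hypot(ix, iy)                # |G| = sqrt(I_X^2 + I_Y^2)
    orientation = np.arctan2(iy, ix)            # theta = arctan(I_Y / I_X)
    return magnitude, orientation
```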


Orientation Binning

The next step in the proposed work is to create the cell histograms. Each pixel casts a weighted vote for an orientation-based histogram channel based on the values found in the gradient computation. The cells are rectangular and the histogram channels are evenly spread over 0° to 180° or 0° to 360°, depending on whether the gradient is unsigned or signed. N. Dalal and B. Triggs found that unsigned gradients used in conjunction with 9 histogram channels performed best in their experiments.

Descriptor Blocks

In order to account for changes in illumination and contrast, the gradient strengths should be locally normalized, which requires grouping the cells together into larger, spatially connected blocks. The HOG descriptor is then the vector of the components of the normalized cell histograms from all of the block regions. These blocks generally overlap, which means that each cell contributes more than once to the final descriptor.

Normalization Factor

The normalization factor is computed over the block, and all histograms within the block are normalized according to this factor. Once the normalization step has been performed, all the histograms are concatenated into a single feature vector. There are different methods for block normalization. Let v be the non-normalized vector containing all histograms in a given block, ‖v‖k be its k-norm for k = 1, 2, and e be some small constant. The normalized vector f can be obtained by one of these methods:

L1-norm: f = v / (‖v‖1 + e)
L2-norm: f = v / √(‖v‖2² + e²)
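
Putting the three HOG stages together, a hedged sketch using scikit-image's hog() with the commonly used Dalal-Triggs style settings mentioned above (9 unsigned bins, 8×8-pixel cells, 2×2-cell blocks; these parameter values are assumptions, since the paper does not list its exact configuration):

```python
from skimage import io, color
from skimage.feature import hog

def hog_descriptor(path):
    """HOG feature vector: gradients, per-cell orientation histograms, block normalization."""
    gray = color.rgb2gray(io.imread(path))
    return hog(gray,
               orientations=9,            # unsigned bins over 0-180 degrees
               pixels_per_cell=(8, 8),
               cells_per_block=(2, 2),
               block_norm="L2")           # "L1" selects the L1 scheme instead
```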

3.3.3 SHAPE FEATURE EXTRACTION USING C-MEANS CLUSTERING

The c-means clustering algorithm is an iterative clustering method that produces an optimal c-partition by minimizing a weighted within-group sum-of-squared-error objective function. C-means clustering allows one piece of data to belong to two or more clusters, and it is frequently used in content based image retrieval. The c-means clustering objective function and its generalizations are the most heavily studied fuzzy model in image processing. Most researchers solve the optimization problem with an iterative, locally optimal technique called the c-means clustering algorithm. The partitions that minimize this function are those that weight small distances with high membership values and large distances with low membership values.

General description

The c-means algorithm is very similar to the k-means algorithm. The c-means clustering approach follows this process (a minimal sketch is given after the centroid formula below):
Choose a number of clusters.
Randomly assign to each point a set of coefficients for belonging to the clusters.
Repeat until the algorithm has converged (that is, until the change in the coefficients between two iterations is no more than ε, the given sensitivity threshold): compute the centroid of each cluster, and for each point recompute its coefficients for the clusters.

Centroid

Any point x has a set of coefficients giving its degree of membership in the k-th cluster, wk(x). With fuzzy c-means, the centroid of a cluster is the mean of all points, weighted by their degree of belonging to the cluster:

ck = Σx wk(x)ᵐ x / Σx wk(x)ᵐ

where m > 1 is the fuzzifier.
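
The procedure above can be written out directly; the following is a minimal NumPy sketch of fuzzy c-means (the fuzzifier m, cluster count c, threshold ε and random initialisation are the usual free choices and are assumptions here):

```python
import numpy as np

def fuzzy_c_means(points, c=3, m=2.0, eps=1e-4, max_iter=100, seed=0):
    """Minimal fuzzy c-means: points is an (N, d) array of feature vectors.

    Returns the (c, d) centroids and the (N, c) membership matrix w_k(x).
    """
    rng = np.random.default_rng(seed)
    u = rng.random((len(points), c))
    u /= u.sum(axis=1, keepdims=True)              # memberships of each point sum to 1
    for _ in range(max_iter):
        um = u ** m
        # centroid c_k = sum_x w_k(x)^m * x / sum_x w_k(x)^m
        centroids = (um.T @ points) / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        dist = np.fmax(dist, 1e-12)
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2 / (m - 1))
        new_u = 1.0 / ((dist[:, :, None] / dist[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        if np.linalg.norm(new_u - u) < eps:        # epsilon convergence test from above
            u = new_u
            break
        u = new_u
    return centroids, u
```

For shape extraction the points would typically be per-pixel feature vectors, and the resulting memberships separate the leaf foreground from the background.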

3.4 SIMILARITY MATCHING USING NEURAL NETWORK

Finding a good similarity match between images based on some feature set is a challenging task. On the one hand, the ultimate goal is to define similarity functions that match human perception, but how humans judge the similarity between images is a topic of ongoing research. Many current retrieval systems take a simple approach, typically using norm-based distances on the extracted feature set as the similarity function. The main premise of a CBIR system is that, given a good set of features extracted from the images in the database, two images are similar if their extracted feature vectors are close to each other.


Neural networks are computing systems inspired by the biological neural networks that constitute animal brains. A neural network is based on a collection of connected units called artificial neurons, and each connection between neurons can transmit a signal to another neuron. A Siamese neural network is a class of neural network architectures that contain two or more identical subnetworks: they have the same configuration with the same parameters and weights, and parameter updates are mirrored across both subnetworks. Siamese neural networks are suitable for finding the similarity between two comparable things. Here the Siamese network is used to learn similarities between images by labeling pairs of images as similar or dissimilar and maximizing the distance between different image groups.

The similarity function is

EW(X1, X2) = ‖GW(X1) − GW(X2)‖2

where GW is the shared embedding function parameterized by the weights W, and X1 and X2 are the paired images.
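
The paper does not describe the exact Siamese architecture, so the sketch below is only illustrative: a tiny shared embedding GW (a hypothetical two-layer network) is applied to both feature vectors with the same weights, and the energy EW is their Euclidean distance. In practice the weights would be trained with a contrastive loss on labeled similar/dissimilar leaf pairs.

```python
import numpy as np

def embed(x, w1, b1, w2, b2):
    """Shared embedding G_W: the SAME weights are applied to both inputs."""
    h = np.maximum(0.0, x @ w1 + b1)        # ReLU hidden layer
    return h @ w2 + b2

def siamese_energy(x1, x2, params):
    """E_W(X1, X2) = || G_W(X1) - G_W(X2) ||_2 for a pair of feature vectors."""
    return np.linalg.norm(embed(x1, *params) - embed(x2, *params))

def retrieve(query_vec, db_vecs, params, k=5):
    """Rank database images by energy and return the indices of the k closest matches."""
    energies = [siamese_energy(query_vec, v, params) for v in db_vecs]
    return np.argsort(energies)[:k]
```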

IV. RESULTS AND DISCUSSION


The proposed research work supports farmers by familiarizing them with leaf disease detection and helping them find solutions to the problems they encounter. The system can be adapted so that farmers with different backgrounds can engage with different leaf diseases without being overwhelmed by too much information. As the proposed system is simple and easy to understand, it allows the farmer to get a quick preview of an image by visualizing the contents displayed by the system. The features of the system are presented efficiently, and the accuracy of the content based image retrieval process has been demonstrated statistically.

4.1 EXPERIMENTAL SETUP

The experiment is conducted in two phases, namely the phase-I and phase-II retrieval systems. The evaluation is performed to find the relevant images for the given input query with a reduced number of iterations. The experiment is done with an image database using MATLAB. In order to implement the proposed system, plant leaf images were collected: the experimental diseased-leaf image dataset is taken from https://leafsnap.com/dataset/, and the leaf disease names, symptoms and solutions are taken from https://plantmethods.biomedcentral.com/articles/10.1186/1746-4811-10-8. The leaf images have dimensions of 225×225, 220×293 and 259×194 pixels and are stored in JPEG format. After preprocessing, the images are resized to 256×256 pixels.
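
A small sketch of this resizing step, assuming Pillow and placeholder directory names:

```python
from pathlib import Path
from PIL import Image

def standardise_dataset(src_dir, dst_dir, size=(256, 256)):
    """Resize every JPEG leaf image to 256 x 256 pixels before feature extraction."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*.jpg"):
        Image.open(path).convert("RGB").resize(size).save(dst / path.name)
```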

SAMPLE SCREEN SHOTS FOR PHASE-I

Figure 4. Preprocessing of the input image.


Figure 5. Color space feature extraction of the input image.

Figure 6. GLCM feature extraction of the input image.


Figure 7. Canny edge detection feature extraction of the input image.

Figure 8. Canny edge detection image.


Figure 9. Image retrieved after similarity matching, with the detected disease name and its solution.

SAMPLE SCREEN SHOTS FOR PHASE-II

Figure 10. A GUI page to upload the query image as input.


Figure 11. Preprocessing of the input image.

Figure 12. Color histogram feature extraction of the input image.


Figure 13. HOG feature extraction of the input image.

Figure 14. C-means clustering feature extraction of the input image.


Figure 15. Image retrieved after similarity matching, with the detected disease name and its solution.

4.2 IMAGE RETRIEVAL USING CBIR IN AGRICULTURE

The performance of the image retrieval using CBIR in agriculture system is measured with two groups of metrics: first precision and recall, then accuracy, sensitivity and specificity. The phase-I and phase-II retrieval features are analyzed to determine which gives the better results.

PRECISION AND RECALL

The comparison chart in figure 16 below covers content based image retrieval on plant leaf disease detection images; it shows that the phase-II techniques perform better than the phase-I techniques in terms of precision and recall.


Figure 16. Comparison chart for phase-I and phase-II precision and recall (phase-I: precision 80.5%, recall 74.3%; phase-II: precision 87.2%, recall 84.2%).

ACCURACY, SENSITIVITY AND SPECIFICITY

Table 1 below compares phase-I with phase-II in terms of accuracy, sensitivity and specificity.

Table 1. Comparison values of phase-I and phase-II

PARAMETER     FORMULA                              PHASE-I VALUES   PHASE-II VALUES
Accuracy      (tp + tn) / (tp + tn + fp + fn)      70 %             83.3 %
Sensitivity   tp / (tp + fn)                       74.3 %           84.2 %
Specificity   tn / (tn + fp)                       81.9 %           51.7 %
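
The metrics in Table 1 (and the precision used in Figure 16) follow directly from the confusion-matrix counts, as in this small helper:

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity (recall), specificity and precision from tp, tn, fp, fn counts."""
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision":   tp / (tp + fp),
    }
```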

The comparison chart in figure 17 below shows that, for content based image retrieval in plant leaf disease detection, the phase-II techniques outperform the phase-I techniques in accuracy and sensitivity, though not in specificity.


Figure 17. Comparison chart for phase-I and phase-II accuracy, sensitivity and specificity (phase-I: accuracy 70%, sensitivity 74.3%, specificity 81.9%; phase-II: accuracy 83.3%, sensitivity 84.2%, specificity 51.7%).

V. CONCLUSION
This proposed system helps a farmer to diagnose plant diseases using content based image retrieval techniques. The research work is based entirely on visual features such as color, texture and shape. The color space and color histogram feature extractions are applied to sample diseased leaf images of the plant. The texture feature extraction techniques GLCM and HOG are employed on the leaf images. Shape feature extraction uses Canny edge detection and c-means clustering, implemented efficiently for edge detection of an image; the c-means clustering acts as a newly proposed shape descriptor based on the foreground and background edge detection of the leaf image. The system compares the query image with the database images using neural network classification and then retrieves the matching image efficiently. The system also provides a solution for the disease and identifies the factors responsible for causing it; the prevention measures include the various types of fertilizers and pesticides that have to be used on the plants. The application provides an efficient solution in less time, so the usability of the system is high. The aim of this proposal was to develop a user-friendly automated system for farmers that helps them detect leaf diseases without bringing an expert to the field, and thereby improve crop production in agriculture.

VI. FUTURE WORK
This proposed research work can be further enhanced to deal with a large database and to detect different types of disease in various plants. The system can be extended to work with additional feature extraction techniques, and comparison with other feature extraction techniques can be used to detect the types of disease that affect the stem, fruit, flower or any other part of different plants. Future work also includes expanding the database of images in order to make the system useful as a practical application. The work can be implemented as a mobile application in the Tamil language for further advancement. The application will then provide information on different fields of farming, including soil types and modern farming techniques, so that farmers can grow better-quality crops.


REFERENCES
[1] A. Mathur and M. Goyal, "Role of information technology in Indian agriculture," International Journal of Applied Engineering Research, vol. 9, no. 10, pp. 1193-1198, 2014.
[2] A. S. Deokar, Akshay Pophale, Swapnil Patil, Prajakta Nazarkar, Sukanya Mungase, "Plant Disease Identification using Content Based Image Retrieval Techniques Based on Android System."
[3] A. W. Smeulders, M. Worring, S. Santini, A. Gupta, R. Jain, "Content-Based Image Retrieval at the End of the Early Years," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 12, pp. 1349-1380, 2000.
[4] Abdalla Mohamed Hambal, Zhijun Pei, Faustini Libent Ishabailu, "Image Noise Reduction and Filtering Techniques," International Journal of Science and Research (IJSR), vol. 6, issue 3, March 2017.
[5] Ananthi Sheshasaayee and C. Jasmine, "A New Content-Based Image Retrieval Framework for Medical Applications," Middle-East Journal of Scientific Research, vol. 24, no. 7, pp. 2404-2417, 2016.
[6] Arunkumar Beyyala, Saipriya Beyyala, "Application for diagnosis of diseases in crops using image processing," International Journal of Life Sciences Biotechnology and Pharma Research, vol. 1, issue 2, April 2012.
[7] Borate S. B., Mulmule P. V., "Study and Analysis of Methodologies for Leaf Disease Detection Using Image Processing," 2015.
[8] Christian Wolf, Jean-Michel Jolion, Walter Kropatsch, Horst Bischof, "Content based Image Retrieval using Interest Points and Texture Features," IEEE, pp. 234-237, 2000.
[9] Sana'a Khudayer Jadwa, "Canny Edge Detection Method for Medical Image," International Journal of Scientific Engineering and Applied Science (IJSEAS), vol. 2, issue 8, August 2016.
[10] Girisha A. B., M. C. Chandrashekhar, M. Z. Kurian, "Texture Feature Extraction of Video Frames using GLCM," International Journal of Engineering Trends and Technology (IJETT), vol. 4, issue 6, June 2013.
[11] H. Al-Hiary, S. Bani-Ahmad, M. Reyalat, M. Braik and Z. ALRahamneh, "Fast and Accurate Detection and Classification of Plant Diseases," International Journal of Computer Applications, vol. 17, issue 1, pp. 31-38, Foundation of Computer Science, March 2011.
[12] International Pest Control, http://international-pest-control.com/perspectives-on-crop-protection-in-india/, Feb 2, 2016.
[13] Jayamala K. Patil, Raj Kumar, "Advances in image processing for detection of plant diseases," Journal of Advanced Bioinformatics Applications and Research, vol. 2, issue 2, pp. 135-141, June 2011.
[14] Jayamala K. Patil, Bharati Vidyapeeth, "Comparative Analysis of Content Based Image Retrieval using Texture Features for Plant Leaf Diseases," International Journal of Applied Engineering Research, ISSN 0973-4562, vol. 11, no. 9, 2016.
[15] Joel Pyykkö and Dorota Glowacka, "Interactive Content-Based Image Retrieval with Deep Neural Networks," Department of Computer Science, HIIT, University of Helsinki, Helsinki, Finland, 2017.
[16] Jun Yue, Zhenbo Li, Lu Liu, Zetian Fu, "Content-Based Image Retrieval Using Color and Texture Fused Features," Mathematical and Computer Modelling, Elsevier, pp. 1121-1127, 2011.
[17] K. Padmavathi, "Investigation and monitoring for leaves disease detection and evaluation using image processing," International Research Journal of Engineering Science, Technology and Innovation (IRJESTI), vol. 1, issue 3, pp. 66-70, June 2012.
[18] Kamaljot Singh Kailey, Gurjinder Singh Sahdra, "Content-Based Image Retrieval (CBIR) for Identifying Image Based Plant Disease," International Journal of Computer Technology & Applications, vol. 3, no. 3, pp. 1099-1104, May-June 2012.
[19] Manpreet Kaur, Sanjay Singla, "Plant Leaf Disease Detection based on Unsupervised Learning," International Journal of Innovations in Engineering and Technology (IJIET), vol. 7, issue 2, August 2016.
[20] Santosh Bharti, Lalit Wadhwa, D. Y. Patil, "Content Based Image Retrieval in Plant Disease Detection," International Journal for Research in Technological Studies, vol. 1, issue 9, August 2014.