
Fungus Diseases Detection Using Image Processing

Chapter 1
Introduction

Agriculture is an ancient occupation that plays an important role in our day-to-day life. Food is a basic need of all human beings, and feeding a large population requires an adequate amount of production. In India a large share of the population lives in rural areas, where livelihoods depend mostly on agriculture; the Indian economy therefore depends heavily on agriculture, and increasing quality production becomes more necessary day by day. Monitoring plants and crops, and managing them from an early stage, is of the utmost importance. This includes various tasks such as preparation of the soil, seeding, adding manure and fertilizer, irrigation, disease detection, spraying pesticides, harvesting and storage [1]. Among all these tasks, spraying the proper amount of pesticide needs particular care.

Pesticides are used to attract, seduce and then destroy pests, and are hence known as crop protection products. They are prepared from harmful chemicals, or sometimes by biological methods, to kill pests, weeds or infections on plants. A large percentage of farmers in India spray pesticides on cash crops, vegetables or fruit plants, and in most cases it has been observed that the overdose of pesticides exceeds 40% [2]. This causes harm to plants and crops as well as to human beings. Farmers check for diseases manually and spray pesticides accordingly. Pesticides sprayed in large amounts lead to a loss of nutrients, which ultimately decreases the quality of food production, so production is affected in both quality and quantity. Also, if produce is not washed properly, the residues can cause harmful chronic diseases in human beings.

One of the most common practices is spraying pesticide with a sprayer; in conventional agriculture, mostly mechanical or hydraulic sprayers are used. Farmers generally spray manually, sometimes in excess and sometimes too little. Further, in most cases farmers do not use protective clothing, so harmful pesticides enter the body by being inhaled or through the skin or eyes. Exposure to pesticides thus causes anything from irritation of the nose to the most fatal diseases. The farmer also has to pay for labour, and the labourers have to work the whole day with great effort. Hence, to avoid all the above problems and to increase yield in both quality and quantity, it is necessary to detect disease to the proper extent and spray pesticides properly; various techniques have been invented to overcome these drawbacks.

It is thus important to detect diseases on plants and crops properly. When they are infected by diseases, there is a change in shape, size and colour. These symptoms can be checked manually, but not to the proper extent. Hence there are various image processing methods that detect diseases on plant leaves and stems: using image processing techniques, the proper extent of disease can be identified from the colour, texture or shape change of the plant. These techniques can also be used in an Agrobot to detect various diseases.
Many authors have worked on the development of methods for the automatic detection and classification of leaf diseases based on high-resolution multispectral, hyperspectral and stereo images. The philosophy behind precision agriculture includes not only a direct economic optimization of agricultural production [6]; it also stands for a reduction of harmful outputs into the environment and onto non-target organisms [7]. In particular, contamination of water, soil and food resources with pesticides has to be as minimal as possible in crop production. Automatic detection of plant diseases is a very important research topic, as it may prove beneficial in monitoring large fields of crops and automatically detecting the symptoms of diseases [8] as soon as they appear on plant leaves. Looking for a fast, automatic, less expensive [4] and accurate method to detect plant disease is therefore of great practical significance [9]. Machine learning [10] based detection and recognition of plant diseases can provide extensive clues to identify [4] and treat diseases in their very early stages. By comparison, visual or naked-eye identification of plant diseases is quite expensive, inefficient, inaccurate and difficult, and requires the expertise of a well-trained botanist [10].

In [11] the authors worked on the development of methods for the automatic classification of leaf diseases based on high-resolution multispectral, hyperspectral and stereo images. Leaves of sugar beet were used for evaluating their approach; sugar beet leaves may be infected by several diseases, such as rusts and powdery mildew. In [11], a fast and accurate method is developed, based on computer vision image processing, for grading plant diseases: the leaf region is first segmented using Otsu's method [3], the disease spot regions are then segmented using the Sobel edge operator [12] to detect the disease spot edges, and finally the plant disease is evaluated by calculating the quotient of the disease spot area and the leaf area.

Previous work shows that machine learning methods can successfully be applied as an efficacious disease detection mechanism. Examples of machine learning methods that have been applied in agricultural research are Artificial Neural Networks (ANNs), Decision Trees, K-means, k-nearest neighbours, Support Vector Machines (SVMs) and BP neural networks. For example, Wang et al. [1] predicted Phytophthora infestans [6] infection on tomatoes using ANNs, and Camargo and Smith [4] used SVMs to identify visual symptoms of cotton mould diseases. There are two important characteristics of machine-learning methods for plant disease detection that must be investigated: speed and accuracy. In this study, an automatic detection and classification of leaf diseases is proposed, based on K-means as the clustering method and an SVM as the classifier.

Chapter 2
Literature Survey
2.1 Disease Detection and Severity Estimation in Cotton Plant from Unconstrained Images
The primary focus of this paper is to detect disease and estimate its stage for a cotton plant using images. Most disease symptoms are reflected on the cotton leaf. Unlike earlier approaches, the novelty of the proposal lies in processing images captured under uncontrolled conditions in the field, using an ordinary or a mobile phone camera, by an untrained person. Such field images have a cluttered background, making leaf segmentation very challenging. The proposed work uses two cascaded classifiers: using local statistical features, the first classifier segments the leaf from the background; then, using hue and luminance from the HSV colour space, a second classifier is trained to detect disease and find its stage. The developed algorithm is generalised, as it can be applied to any disease. As a showcase, however, the authors detect Grey Mildew, a fungal disease widely prevalent in North Gujarat, India.

2.2 Plant Recognition from Leaf Image through Artificial Neural Network
Knowing the details of the plants growing around us is of great medicinal and economic importance. Conventionally, plants are categorized mainly by taxonomists through investigation of various parts of the plant. However, most plants can be classified based on leaf shape and associated features. This article describes how an Artificial Neural Network is used to identify a plant from an input leaf image. Compared to earlier approaches, new input features and an image processing approach that matter for efficient classification by an Artificial Neural Network have been introduced. Image processing techniques are used to extract leaf shape features such as aspect ratio, width ratio, apex angle, apex ratio, base angle, centroid deviation ratio, moment ratio and circularity. These extracted features are used as inputs to a neural network for classifying the plants. In this research, 534 leaves of 20 kinds of plants were collected; 400 leaves were used for training, and the 134 testing samples were recognised with 92% accuracy, even without considering the types of leaf margins and veins or removal of the petiole. Software has also been developed to identify a leaf automatically with only two mouse clicks by the user.
2.3 Method of Feature Extraction from Leaf Architecture

Plants are very important for human beings as well as for other living species on the earth. The food that people eat daily comes directly or indirectly from plants. In the medical field, doctors use X-ray images to identify disease correctly; the same principle is used here. Geometrical features and digital morphological features are extracted from a two-dimensional image of the leaf. The aim of this study is to introduce suitable features of the leaf image which can be useful in further research on plant identification.

2.4 Leaf Vein Extraction Based on Gray-scale Morphology

Leaf features play an important role in plant species identification and plant taxonomy. The type of the leaf vein is an important morphological feature of the leaf in botany, and the vein must be extracted from the leaf image before its type can be discriminated. In this paper a new method of leaf vein extraction is proposed based on gray-scale morphology. First, the colour image of the plant leaf is transformed to a gray image according to the hue and intensity information. Second, gray-scale morphology processing is applied to the image to eliminate the colour overlap between the leaf vein and the background. Third, a linear intensity adjustment is adopted to enlarge the gray-value difference between the leaf vein and its background. Fourth, a threshold is calculated with Otsu's method to segment the leaf vein from its background. Finally, the leaf vein is obtained after some processing of details. Experiments have been conducted with several images, and the results show the effectiveness of the method. The idea of the method is also applicable to the extraction of other linear objects.

2.5 Identification and Classification of Cotton Leaf Spot Diseases using SVM
Classifier
Plant diseases may cause many losses to agricultural crops around the world; methods for identifying disease in any part of the plant therefore play a critical role in disease management. Nowadays many aspects of the crop development process use advanced computing technology that has been developed to help the farmer take better decisions about the crop. Evaluating and diagnosing crop diseases is critical in agriculture for increasing crop productivity. New technological strategies are used to express the captured symptoms of cotton leaf spot images, and algorithms are used to categorize the images. In the proposed work all images are converted to a standard resolution, preprocessed and stored in a database. The classifier is trained to achieve intelligent farming, including early identification of diseases. The mobile-captured image is pre-processed, and an edge detection algorithm is applied to the pre-processed image. A segmentation technique such as k-means clustering is then applied, and colour, shape and texture features are extracted. Finally, a support vector machine classifier is used to identify the disease by comparison with the trained dataset.
Chapter 3
The basic approach towards disease identification that we have adopted is as follows: first, pre-processing is performed on the images; then image segmentation is done; after that, features are extracted and a training set is created, which is given as input to an SVM classifier. As output we obtain a label for each image, and these labels correspond to the leaf diseases we have considered.
A study of visual symptoms [5] of plant disease through the analysis of colored images using image processing methods has been proposed. The RGB image of the diseased plant is converted to the H, I3a and I3b transformations, and a set of maximum threshold cut-offs is used. Correct detection of the part infected by disease, over various ranges of intensities, is obtained using a segmentation process.

A K-means and neural network approach for the detection of plant leaf and stem diseases has also been proposed [6]. Images of leaves from the Al-Ghor region of Jordan were taken to test these techniques. After clustering, a feature extraction process called the Color Co-occurrence Method (CCM) is applied to detect the features of diseases such as early scorch, cottony mold and ashen mold. Some fungal diseases cause brown spots on leaves; sugarcane plants infected by fungus also show brown spots on their leaves. Simple threshold and triangular threshold methods [7] have been used to segment the leaf area and the lesion region of sugarcane, respectively.

3.1 Block Diagram

Image Acquisition → Image Pre-Processing → Image Segmentation → Feature Extraction → Creation of Training Set → Training Using the Set → Detection & Classification of Test Images
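The pipeline above can be sketched in MATLAB as follows. This is a minimal illustration rather than the project's exact code: the file name, the choice of the lesion cluster and the feature set are placeholders, and rgb2lab (R2014b and later) and kmeans assume the Image Processing and Statistics toolboxes.

% Minimal sketch of the pipeline; placeholders are marked in the comments.
I = imread('leaf.jpg'); % image acquisition (placeholder file name)
I = imresize(I, [256 256]); % pre-processing: fixed resolution
lab = rgb2lab(I); % CIELAB colour space
ab = reshape(lab(:,:,2:3), [], 2); % one row per pixel: [a*, b*]
idx = kmeans(ab, 3); % K-means segmentation into 3 clusters
mask = reshape(idx, 256, 256) == 2; % assume cluster 2 is the lesion region
L = lab(:,:,1); % lightness channel
feat = [mean(L(mask)), std(L(mask)), nnz(mask)/numel(mask)]; % placeholder features
% Training and classification with the commands used later in this report:
% svmStruct = svmtrain(trainFeats, trainLabels, 'kernel_function', 'rbf');
% label = svmclassify(svmStruct, feat);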


3.2 Image Acquisition

Image acquisition is basically the creation of a database of images to feed into the system for training, as well as for testing and classification. We acquired our images online from various sources. The images are of bacterial blight, Alternaria macrospora, Grey mildew and healthy leaves. Healthy images are added to the dataset so that both healthy and diseased images can be identified.
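A database of this kind can be read into MATLAB with a simple loop. The sketch below assumes the images are grouped into per-class subfolders; the folder names are hypothetical.

% Build an image database from per-class folders (folder names assumed).
classes = {'bacterial_blight', 'alternaria_macrospora', 'grey_mildew', 'healthy'};
images = {}; labels = [];
for c = 1:numel(classes)
    files = dir(fullfile('dataset', classes{c}, '*.jpg'));
    for k = 1:numel(files)
        images{end+1} = imread(fullfile('dataset', classes{c}, files(k).name));
        labels(end+1) = c; % numeric class label
    end
end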

3.3 Image Pre-processing

For preprocessing, the images are resized to 256x256 pixels and then transformed into the CIELAB colour space. The Lab colour space describes mathematically all perceivable colours in three dimensions: L for lightness, and a and b for the colour opponents green–red and blue–yellow.
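A sketch of this step is given below; rgb2lab requires a recent Image Processing Toolbox, and on older releases makecform('srgb2lab') with applycform does the same job.

I = imread('leaf.jpg'); % placeholder file name
I = imresize(I, [256 256]); % standardize the resolution
lab = rgb2lab(I); % L: lightness, a: green-red, b: blue-yellow
% Older MATLAB alternative:
% lab = applycform(im2double(I), makecform('srgb2lab'));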

3.4 Image Segmentation

Image segmentation is the partition of an image into a set of non-overlapping regions whose
union is the entire image. In the simplest case, one would only have an object region and a
background region.
A region cannot be declared a segment unless it is completely surrounded by edge pixels. It is not an easy task to convey to a computer what characteristics constitute a "meaningful" segmentation. For this reason, a set of rules is required in general segmentation procedures:
• Regions of an image segmentation should be uniform and homogeneous with respect to some characteristic (e.g. grey level or texture).
• Region interiors should be simple and without many holes.
• Adjacent regions of a segmentation should have significantly varying values with respect
to the characteristic on which they are uniform.
• Boundaries of each segment should be simple, not ragged, and must be
spatially accurate.

Categories of image segmentation

Image segmentation has been studied most intensively in medical imaging, in particular for magnetic resonance (MR) images of the brain; the review below is drawn from that literature, and the same broad categories apply directly to leaf images. Current image segmentation algorithms can be divided into three categories:
1. Classification based
2. Region based
3. Contour based

MR imaging provides rich three-dimensional (3D) information about human soft tissue anatomy. It reveals fine details of anatomy, and yet is noninvasive and does not require ionizing radiation such as γ-rays. It is a highly flexible technique, where the contrast between one tissue and another in an image can be varied simply by varying the way the image is made. For example, by altering radio-frequency (RF) and gradient pulses, and by carefully choosing relaxation timings, it is possible to highlight different components in the object being imaged and produce high-contrast images. The rich anatomical information provided by MR imaging has made it an indispensable tool for medical diagnosis in recent years. Applications that use the morphologic contents of MR images frequently require accurate segmentation of the image volume into tissue types. In multiple sclerosis, for example, quantification of white matter lesions is necessary for drug treatment assessment, and volumetric analysis of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) is important to characterize morphological differences between subjects. Such studies typically involve vast amounts of data. Currently, in many clinical studies segmentation is still mainly manual or strongly supervised by a human expert. The level of operator supervision impacts the performance of the segmentation method in terms of time consumption, leading to infeasible procedures for large datasets. Manual segmentation also shows large intra- and inter-observer variability, making the segmentation irreproducible and deteriorating the precision of the analysis. Hence, there is a real need for automated segmentation tools.

The automatic segmentation of such images has been an area of intense study, but the task has proven problematic due to the many artifacts in the imaging process. Below we give an overview of segmentation methods, broadly divided into three categories: classification-based, region-based, and contour-based.

Classification-Based Segmentation

In classification-based segmentation, voxels (or pixels) are classified and labeled as belonging to a particular tissue class according to a certain criterion. The simplest technique is based on thresholding: a thresholding algorithm attempts to determine a threshold value which separates the desired classes. Iterative thresholding has been used to distinguish brain tissue from other tissue in axial MR slices: starting from set values, thresholds for the head and the brain are iteratively adjusted based on the geometry of the resulting masks. Although thresholding is simple and computationally very fast, it is very sensitive to intensity non-uniformity (INU) artifacts and noise. The automatic determination of a suitable threshold can be problematic if there is severe overlap between the intensities of different tissue types due to noise and intensity inhomogeneities.

Instead of the simple thresholding of earlier classification-based segmentation work, statistical classification has been the method of choice in more recent times. Statistical classification has the advantage of being more robust, as well as having a rigorous mathematical foundation in stochastic theory. In statistical classification methods, the probability density function of tissue intensity for the different tissue classes is often modeled parametrically as a mixture of Gaussians, usually with one Gaussian function per tissue class. In order to incorporate local contextual information, Markov random field (MRF) regularization is often employed as well. In one approach, the bias field estimation problem is cast in a Bayesian framework, and the expectation-maximization (EM) algorithm is used to estimate the inhomogeneity and the tissue classes. However, this method needs to be supplied with the tissue class conditional intensity models, which are typically constructed manually from training data, and it does not consider neighborhood dependencies for the tissue segmentation. Later work extended the algorithm by using an MRF to introduce context, or dependency, among neighboring voxels, and proposed a 3-step EM algorithm which interleaves voxel classification, class distribution parameter estimation, and bias field estimation. Instead of manually constructed tissue class conditional intensity models, that method employs a digital brain atlas with a priori probability maps for each tissue class to construct intensity models automatically for each individual scan being processed. The tissue classes are modeled as finite Gaussian mixtures with MRF regularization to account for contextual information, and the bias field is modeled as a fourth-order least-squares polynomial fit. Another approach also uses a Gaussian mixture to model the three brain tissue classes: the biological variations of a particular tissue class are accounted for in the statistical model by assuming that the mean intensities of the tissue classes are slowly varying spatial functions, the magnetic field inhomogeneities modify both the mean tissue intensities and the noise variances in a similar manner, and a 3D MRF is used as a prior to account for the smoothness and piecewise-contiguous nature of the tissue regions. Statistical segmentation has also been applied to multispectral MR images, where the intensity distributions of the brain tissues are again modeled as a mixture of Gaussians. Another major class of voxel classification techniques uses clustering. Clustering is a popular unsupervised classification method that has found many applications in pattern classification and image segmentation; a clustering algorithm attempts to assign a voxel to a tissue class by using the notion of similarity to the class.
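As a simple illustration of the clustering idea on a leaf image, the pixel intensities can be clustered with kmeans. This is a sketch; the file name and the choice of three classes are assumptions.

I = im2double(rgb2gray(imread('leaf.jpg'))); % placeholder file name
idx = kmeans(I(:), 3, 'Replicates', 3); % cluster the intensity values
seg = reshape(idx, size(I)); % class label map
imagesc(seg); axis image; colorbar; % view the clusters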

Region-Based Segmentation

The shape of an object can be described in terms of its boundary or the region it occupies. Image regions belonging to an object generally have homogeneous characteristics, e.g. similar intensity or texture. Region-based segmentation techniques attempt to segment an image by identifying the various homogeneous regions that correspond to different objects in the image. Unlike clustering methods, region-based methods explicitly consider spatial interactions between neighboring voxels. In its simplest form, region growing starts by locating some seeds representing distinct regions in the image; the seeds are then grown until they eventually cover the entire image. The region growing process is therefore governed by one rule that describes the growth mechanism and another that checks the homogeneity of the regions at each growth step. Region growing has been applied to MR segmentation: a semi-automatic, interactive segmentation algorithm has been developed that employs a simple region growing technique for lesion segmentation, and an automatic statistical region growing algorithm has been proposed, based on a robust estimation of the local region mean and variance for every voxel in the image, in which the best region growing parameters are found automatically via the minimization of a cost functional, and relaxation labeling, region splitting, and constrained region merging are further used to improve the quality of the segmentation. The determination of an appropriate region homogeneity criterion is an important factor in region growing segmentation methods; however, such a criterion may be difficult to obtain a priori. An adaptive region growing method has therefore been proposed where the homogeneity criterion is learned automatically from characteristics of the region to be segmented while searching for the region. A simple region-growing sketch for a leaf image is given below.
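For a flood-fill style region grow on a grayscale leaf image, newer Image Processing Toolbox releases (R2016a and later) provide grayconnected; the file name, seed point and tolerance below are assumptions.

I = rgb2gray(imread('leaf.jpg')); % placeholder file name
BW = grayconnected(I, 128, 128, 20); % grow from seed (row 128, col 128), tolerance 20
imshowpair(I, BW, 'montage'); % original beside the grown region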
Other region-based segmentation techniques have also been proposed for MR segmentation:

1. Split-and-merge based segmentation
2. Watershed based segmentation

1. Split-and-merge based segmentation


In the split-and-merge technique, an image is first split into many small regions during the
splitting stage according to a rule, and then the regions are merged if they are similar enough to
produce the final segmentation.

2. Watershed-based segmentation

In watershed-based segmentation, the gradient magnitude image is considered as a topographic relief, where the brightness value of each voxel corresponds to a physical elevation. An immersion-based approach is used to calculate the watersheds: imagine that holes are pierced in each local minimum of the topographic relief, and that the surface is then slowly immersed in water, causing a flooding of all the catchment basins, starting from the basin associated with the global minimum. As soon as two catchment basins begin to merge, a dam is built. The procedure results in a partitioning of the image into many catchment basins, whose borders define the watersheds. To reduce over-segmentation, the image is smoothed by 3D adaptive anisotropic diffusion prior to the watershed operation; semi-automatic merging of the volume primitives returned by the watershed operation is then used to produce the final segmentation.
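A minimal MATLAB sketch of this recipe on a leaf image is given below. Gaussian smoothing stands in for anisotropic diffusion, and imhmin suppresses shallow minima to curb over-segmentation; the file name, smoothing width and minimum depth are assumptions.

I = rgb2gray(imread('leaf.jpg')); % placeholder file name
Is = imfilter(I, fspecial('gaussian', 7, 2)); % smoothing against over-segmentation
g = imgradient(Is); % gradient magnitude as the topographic relief
g = imhmin(g, 10); % suppress minima shallower than 10
L = watershed(g); % immersion-based watershed
imagesc(L); axis image; title('Watershed regions');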

Contour-Based Segmentation

Contour-based segmentation assumes that the different objects in an image can be segmented by detecting their boundaries. Whereas region-based techniques attempt to capitalize on homogeneity properties within regions of an image, boundary-based techniques rely on the gradient features near an object boundary as a guide. Hence, contour-based segmentation methods that rely on detecting edges in the image are inherently more prone to noise and image artifacts, and sophisticated pre- and post-processing is often needed to achieve a satisfactory segmentation result.

Two types of contour-based techniques are considered:

Edge detection segmentation:

MR image segmentation based on edge detection has been proposed, in which a combination of the Marr-Hildreth operator for edge detection and morphological operations for the refinement of the detected edges is used to segment 3D MR images. A boundary tracing method has also been proposed, where the operator clicks a pixel in the region to be outlined and the method then finds the boundary starting from that point. Such methods are, however, restricted to the segmentation of large, well-defined structures and cannot distinguish fine tissue types. Edge-based segmentation methods usually suffer from over- or under-segmentation, induced by improper threshold selection; in addition, the edges found are usually not closed, so that complicated edge-linking techniques are further required.
Active contour based segmentation:

An active contour deforms to fit the object's shape by minimizing (among other terms) a gradient-dependent attraction force, while at the same time maintaining the smoothness of the contour shape. Thus, unlike edge detection, active contour methods are much more robust to noise, as the requirements for contour smoothness and contour continuity act as a type of regularization. Another advantage of this approach is that prior knowledge about the object's shape can be built into the contour parameterization process. However, active contour algorithms usually require the contour to be initialized close to the object boundary for it to converge successfully to the true boundary. More importantly, active contour methods have difficulty handling deeply convoluted boundaries, such as the CSF, GM and WM boundaries in brain images, due to their contour smoothness requirement; hence they are often not appropriate for the segmentation of brain tissues. Nevertheless, active contours have been applied successfully to the segmentation of the intracranial boundary, the brain outer surface, and neuro-anatomic structures. A sketch on a leaf image follows.
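MATLAB ships an active contour implementation (activecontour, R2013a and later); the sketch below uses an assumed rectangular initialization near the object and the Chan-Vese energy.

I = rgb2gray(imread('leaf.jpg')); % placeholder file name (image at least 200x200)
mask = false(size(I));
mask(50:200, 50:200) = true; % initial contour near the object
bw = activecontour(I, mask, 300, 'Chan-Vese'); % evolve for 300 iterations
imshowpair(I, bw, 'montage');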

Feature Extraction

Feature extraction is performed after the preprocessing phase in a pattern recognition system. The primary task of pattern recognition is to take an input pattern and correctly assign it to one of the possible output classes. This process can be divided into two general stages: feature selection and classification. Feature selection is critical to the whole process, since the classifier will not be able to recognize patterns from poorly selected features. The criteria for choosing features given by Lippmann are: "Features should contain information required to distinguish between classes, be insensitive to irrelevant variability in the input, and also be limited in number, to permit efficient computation of discriminant functions and to limit the amount of training data required." Feature extraction is an important step in the construction of any pattern classifier and aims at extracting the relevant information that characterizes each class. In this process, relevant features are extracted from the objects to form feature vectors, which are then used by the classifier to map the input unit to the target output unit. Looking at these features makes it fairly easy for the classifier to distinguish between the different classes. Feature extraction is the process of retrieving the most important information from the raw data: finding the set of parameters that defines the shape of an object precisely and uniquely. In the feature extraction phase, each object is represented by a feature vector, which becomes its identity. The major goal of feature extraction is to extract a set of features which maximizes the recognition rate with the least number of elements, and to generate similar feature sets for a variety of instances of the same class. Widely used feature extraction methods include template matching, deformable templates, unitary image transforms, graph description, projection histograms, contour profiles, zoning, geometric moment invariants, Zernike moments, spline curve approximation, Fourier descriptors, gradient features and Gabor features.
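For the leaf images in this project, colour statistics combined with gray-level co-occurrence matrix (GLCM) texture measures are one common choice; the sketch below uses graycomatrix and graycoprops, and the exact feature set is an assumption rather than the report's verbatim list.

I = imresize(imread('leaf.jpg'), [256 256]); % placeholder file name
g = rgb2gray(I);
glcm = graycomatrix(g, 'Offset', [0 1]); % co-occurrence at a 1-pixel horizontal offset
t = graycoprops(glcm, {'Contrast','Correlation','Energy','Homogeneity'});
lab = rgb2lab(I);
feat = [t.Contrast, t.Correlation, t.Energy, t.Homogeneity, ...
    mean2(lab(:,:,2)), mean2(lab(:,:,3))]; % texture plus mean a*, b*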
Importance of feature extraction

When the pre-processing and the desired level of segmentation (line, word, character or symbol) have been achieved, a feature extraction technique is applied to the segments to obtain features, followed by the application of classification and post-processing techniques. It is essential to focus on the feature extraction phase, as it has an observable impact on the efficiency of the recognition system. The selection of a suitable feature extraction method is the single most important factor in achieving high recognition performance. Feature extraction has been described as "extracting from the raw data the information that is most suitable for classification purposes, while minimizing the within-class pattern variability and enhancing the between-class pattern variability". Thus, the selection of a suitable feature extraction technique for the input at hand needs to be made with utmost care; taking all these factors into consideration, it becomes essential to survey the various available techniques for feature extraction in a given domain, covering a wide range of cases.

MACHINE LEARNING

A total of five different machine learning techniques for learning classifiers have been investigated in this paper. These techniques were selected because these classifiers have performed well in many real applications.

K-Nearest Neighbor (KNN)

The K-Nearest Neighbor classifier is a lazy learner, meaning that there is no separate training phase and the classifier effectively trains and tests at the same time. The KNN classifier is an instance-based classifier that classifies unknown instances by relating the unknown to the known using a distance or similarity function. It takes the K nearest points and assigns the majority class among them to the unknown instance [11].
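A sketch with the Statistics Toolbox (fitcknn, R2014a and later; knnclassify served a similar role in older releases); the feature matrices and labels are assumed to come from the earlier stages.

% trainFeats: N-by-d features, trainLabels: N-by-1 labels, testFeat: 1-by-d
mdl = fitcknn(trainFeats, trainLabels, 'NumNeighbors', 5); % K = 5
label = predict(mdl, testFeat); % majority class of the 5 nearest points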

Naïve Bayes Classifier

The Naïve Bayesian classifier is commonly known as a statistical classifier [14]. Its foundation is Bayes' theorem, and it uses probabilistic analysis for efficient classification. The Naïve Bayesian classifier [14] gives accurate results in little computation time when applied to large data sets consisting of hundreds of images.

Support Vector Machine (SVM)

The Support Vector Machine is a machine learning technique basically used for classification. It is a kernel-based classifier; it was originally developed for linear separation and could classify data into two classes only. SVMs have been used for different real-world problems such as face and gesture recognition [10], cancer diagnosis [8], voice identification and glaucoma diagnosis.

Decision Tree

Decision Tree Classifiers (DTCs) are used successfully in many areas, including medical diagnosis, prognosis, speech recognition and character recognition. Decision tree classifiers have the ability to convert complex decisions into a set of simple, understandable decisions [7].

Recurrent Neural Networks

Recurrent Neural Networks (RNNs) include feedback connections. In contrast to feed-forward and back-propagation networks, their dynamical properties are more significant. In some cases the network [6] evolves towards a constant state in which the activation values of the units no longer change; in other cases, depending on the required scenario, it is the change in the activation values of the output neurons over time that matters [6].

Classification

In supervised machine learning, support vector machines (SVMs, also called support vector networks) are supervised learning models with associated learning algorithms that analyze data and recognize patterns, used for binary classification and regression analysis. Given a set of training samples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New samples are then mapped into the same space and predicted to belong to a category based on which side of the gap they fall on. The models are trained using the svmtrain() command and classified using the svmclassify() command in MATLAB. The kernels used are:
• Linear
• Quadratic
• Polynomial
• MLP
• RBF
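A sketch of training and testing with these (now legacy) toolbox commands; the feature matrices and label vector are assumed to have been built during feature extraction.

% trainFeats: N-by-d features, trainLabels: N-by-1 two-class labels,
% testFeats: M-by-d features (all assumed built earlier).
svmStruct = svmtrain(trainFeats, trainLabels, 'kernel_function', 'rbf');
predicted = svmclassify(svmStruct, testFeats);
% The other kernels from the list are selected the same way, e.g.
% svmtrain(trainFeats, trainLabels, 'kernel_function', 'linear');
% svmtrain(trainFeats, trainLabels, 'kernel_function', 'quadratic');
% svmtrain(trainFeats, trainLabels, 'kernel_function', 'polynomial');
% svmtrain(trainFeats, trainLabels, 'kernel_function', 'mlp');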

Chapter 4

4.1 GENERAL OVERVIEW OF IMAGE PROCESSING


Image processing is a method to convert an image into digital form and perform some operations on it, in order to get an enhanced image or to extract some useful information from it. It is a type of signal processing in which the input is an image, such as a video frame or photograph, and the output may be an image or characteristics associated with that image. Usually an image processing system treats images as two-dimensional signals while applying already-established signal processing methods to them.

It is among the rapidly growing technologies today, with applications in various aspects of business. Image processing also forms a core research area within the engineering and computer science disciplines. In imaging science, image processing is any form of signal
processing for which the input is an image, such as a photograph or video frame; the output of
image processing may be either an image or a set of characteristics or parameters related to the
image. Most image-processing techniques involve treating the image as a two-dimensional signal
and applying standard signal-processing techniques to it.

Image processing usually refers to digital image processing, but optical and analog image processing are also possible. This chapter covers general techniques that apply to all of them. The acquisition of images (producing the input image in the first place) is referred to as imaging.

Closely related to image processing are computer graphics and computer vision. In
computer graphics, images are manually made from physical models of objects, environments,
and lighting, instead of being acquired (via imaging devices such as cameras) from natural
scenes, as in most animated movies. Computer vision, on the other hand, is often considered high-level image processing, in which a machine, computer or piece of software intends to decipher the physical contents of an image or a sequence of images (e.g., videos or 3D full-body magnetic resonance scans).

In modern sciences and technologies, images also gain much broader scopes due to
the ever growing importance of scientific visualization (of often large-scale complex
scientific/experimental data). Examples include microarray data in genetic research, or real-time
multi-asset portfolio trading in finance.

Image processing basically includes the following three steps:

 Importing the image with an optical scanner or by digital photography.
 Analyzing and manipulating the image, which includes data compression, image enhancement and spotting patterns that are not visible to human eyes, as in satellite photographs.
 Output, the last stage, in which the result can be an altered image or a report based on the image analysis.

4.1.1 Purpose of Image processing

The purpose of image processing is divided into 5 groups. They are:

1. Visualization - Observe the objects that are not visible.


2. Image sharpening and restoration - To create a better image.
3. Image retrieval - Seek for the image of interest.
4. Measurement of pattern – Measures various objects in an image.
5. Image Recognition – Distinguish the objects in an image.

4.2 Types

The two types of methods used for image processing are analog and digital image processing. Analog, or visual, techniques of image processing can be used for hard copies like printouts and photographs. Image analysts use various fundamentals of interpretation while using these visual techniques. The image processing here is not just confined to the area that has to be studied; it also draws on the knowledge of the analyst. Association is another important tool in image processing through visual techniques, so analysts apply a combination of personal knowledge and collateral data to image processing.

Digital processing techniques help in the manipulation of digital images by using computers. Raw data from the imaging sensors on a satellite platform contains deficiencies, so to get over such flaws and restore the originality of the information, it has to undergo various phases of processing. The three general phases that all types of data have to undergo while using the digital technique are pre-processing; enhancement and display; and information extraction. Fig. 1 represents the hierarchy of image processing.
Fig.1: Hierarchy of Image Processing

4.2.1 Images in Matlab

The first step in MATLAB image processing is to understand that a digital image is composed of a two- or three-dimensional matrix of pixels. Individual pixels contain a number or numbers representing the grayscale or color value assigned to them. Color pictures generally contain three times as much data as grayscale pictures, depending on what color representation scheme is used, and therefore take three times as much computational power to process. In this tutorial the method for conversion from color to grayscale will be demonstrated, and all processing will be done on grayscale images. However, in order to understand how image processing works, we will begin by analyzing simple two-dimensional 8-bit matrices.

4.2.2 Loading an Image

Many times you will want to process a specific image; other times you may just want to test a filter on an arbitrary matrix. If you choose to do this in MATLAB you will need to load the image so you can begin processing. If the image you have is in color, but color is not important for the current application, then you can change the image to grayscale. This makes processing much simpler, since only a third of the pixel values are present in the new image. Color may not be important in an image when you are trying to locate a specific object that has good contrast with its surroundings. Example 4.1, below, demonstrates how to load different images.

If colour is not an important aspect then rgb2gray can be used to change a color image into a grayscale image. The class of the new image is the same as that of the color image. As you can see from the example M-file in Figure 4.1, MATLAB has the capability of loading many different image formats, two of which are shown. The function imread is used to read an image file with a specified format; consult imread in MATLAB's help to find which formats are supported. The function imshow displays an image, while figure tells MATLAB which figure window the image should appear in. If figure does not have a number associated with it, then figures appear chronologically as they are created in the M-file. Figures 8, 9, 10 and 11, below, are a loaded bitmap file, the image in Figure 8 converted to a grayscale image, a loaded JPEG file, and the image in Figure 10 converted to a grayscale image, respectively. The images used in this example are both MATLAB example images; in order to demonstrate how to load an image file, they were copied and pasted into the folder denoted in the M-file in Figure 4.1. In Example 7.1, later in this tutorial, you will see that MATLAB images can be loaded by simply using the imread function. However, this function will only load an image stored in the MATLAB example image directory (C:\MATLAB6p5\toolbox\images\imdemos).
Figure 8: Bitmap Image Figure 9: Grayscale Image

Figure 10: JPEG Image Figure 11: Grayscale Image
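The M-file of Figure 4.1 is not reproduced in this text; a sketch of what such a loading script looks like is given below. The file names are placeholders for the MATLAB example images, assumed to be RGB.

figure(8); bmp = imread('image1.bmp'); imshow(bmp); % loaded bitmap
figure(9); imshow(rgb2gray(bmp)); % its grayscale version
figure(10); jpg = imread('image2.jpg'); imshow(jpg); % loaded JPEG
figure(11); imshow(rgb2gray(jpg)); % its grayscale version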

4.2.3 Writing an Image

Sometimes an image must be saved so that it can be transferred to a disk or opened with another program. In this case you will want to do the opposite of loading an image: instead of reading it, write it to a file. This can be accomplished in MATLAB using the imwrite function, which allows you to save an image as any type of file supported by MATLAB (the same types supported by imread). Figure 12 shows the M-file for saving an image.
Figure 12: M-file for Saving an Image
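A sketch of imwrite usage; the file names are arbitrary.

I = imread('leaf.jpg'); % placeholder input
imwrite(I, 'leaf_copy.png'); % format inferred from the extension
imwrite(rgb2gray(I), 'leaf_gray.jpg', 'Quality', 90); % JPEG with a quality option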

4.3 Image Properties

4.3.1 Histogram

A histogram is a bar graph that shows the distribution of data. In image processing, histograms are used to show how many pixels of each value are present in an image. Histograms can be very useful in determining which pixel values are important in an image; from this data you can manipulate an image to meet your specifications, and histogram data can aid you in contrast enhancement and thresholding. In order to create a histogram from an image, use the imhist function. Contrast enhancement can be performed with the histeq function, while thresholding can be performed using the graythresh and im2bw functions. See Figures 14, 15, 16 and 17 for a demonstration of imhist, imadjust, graythresh and im2bw. If you want to see the resulting histogram of a contrast-enhanced image, simply perform the imhist operation on the image created with histeq.
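A sketch of these functions on a stock grayscale image (rice.png in recent releases, rice.tif in older ones):

I = imread('rice.png'); % stock example image
figure, imhist(I); % histogram of pixel values
J = histeq(I); % contrast enhancement by histogram equalization
t = graythresh(J); % Otsu threshold in [0, 1]
bw = im2bw(J, t); % binary image (imbinarize in newer releases)
figure, imshowpair(J, bw, 'montage');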

4.3.2 Negative

The negative of an image means the output image is the reversal of the input image. In
the case of an 8-bit image, the pixels with a value of 0 take on a new value of 255, while the
pixels with a value of 255 take on a new value of 0. All the pixel values in between take on
similarly reversed new values. The new image appears as the opposite of the original. The
imadjust function performs this operation. See Figure 13 for an example of how to use
imadjust to create the negative of the image. Another method for creating the negative of an
image is to use imcomplement, which is described in Figure 13.
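Both routes to the negative, sketched on a stock 8-bit image:

I = imread('cameraman.tif'); % stock example image
neg1 = imcomplement(I); % 0 becomes 255, 255 becomes 0
neg2 = imadjust(I, [0 1], [1 0]); % same effect via a reversed output range
isequal(neg1, neg2) % both give 255 - I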

Figure 13: M-file for Creating Histogram, Negative, Contrast Enhanced and
Binary Images from the Image

Figure 14: Histogram Figure 15: Negative


Figure 16: Contrast Enhanced Figure 17: Binary

4.3.3 Median Filters


Median filters can be very useful for removing noise from images. A median filter is like an averaging filter in some ways: the averaging filter examines the pixel in question and its neighbors' pixel values and returns the mean of these values, while the median filter looks at the same neighborhood of pixels but returns the median value. In this way noise can be removed, but edges are not blurred as much, since the median filter is better at ignoring large discrepancies in pixel values. The example below shows how to perform a median filtering operation.

This example uses two types of median filters that both output the same result. The first filter is medfilt2, which takes the median value of the pixel in question and its neighbors; in this case it outputs the median of the nine pixels being examined. The second filter, ordfilt2, does exactly the same thing in this configuration, but can be configured to perform other types of filtering. In this case, it looks at every pixel in the 3x3 neighborhood and outputs the value in the fifth position of rank, which is the median position; in other words, it outputs the value for which half the pixel values in the neighborhood are greater and half are less.
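A sketch of both filters on a stock image with added salt-and-pepper noise:

I = imread('eight.tif'); % stock example image
In = imnoise(I, 'salt & pepper', 0.02); % add salt-and-pepper noise
m1 = medfilt2(In, [3 3]); % median of each 3x3 neighborhood
m2 = ordfilt2(In, 5, ones(3,3)); % 5th of 9 ranked values, i.e. the median
isequal(m1, m2) % the two filters agree
figure, imshowpair(In, m1, 'montage');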
Figure 18: Noisy Image

Figure 19: medfilt2 Figure 20: ordfilt2

Figure 18, above, depicts the noisy image. Figure 19 is the output of the image in Figure 18, filtered with a 3x3 two-dimensional median filter. Figure 20 is the same as Figure 19, but was achieved by filtering the image in Figure 18 with ordfilt2, configured to produce the same result as medfilt2. Notice how both filters produce the same result: each is able to remove the noise without blurring the edges in the image too much.
4.3.4 Edge Detectors

Edge detectors are very useful for locating objects within images. There are many
different kinds of edge detectors, but we will concentrate on two: the Sobel edge detector and the
Canny edge detector. The Sobel edge detector is able to look for strong edges in the horizontal
direction, vertical direction, or both directions. The Canny edge detector detects all strong edges
plus it will find weak edges that are associated with strong edges. Both of these edge detectors
return binary images with the edges shown in white on a black background. The example below demonstrates the use of these edge detectors.

The Canny and Sobel edge detectors are both demonstrated in this example. Figure 21, below, is a sample M-file for performing these operations. The image used is the MATLAB image rice.tif, which can be found in the manner described in Example 4.1. Two methods for performing edge detection with the Sobel method are shown. The first method uses the MATLAB function fspecial, which creates the filter, and imfilter, which applies the filter to the image. The second method uses the MATLAB function edge, in which you must specify the type of edge detection method desired. Sobel was used as the first edge detection method, while Canny was used as the next type. Figure 21 also displays the results of that M-file. The first image is the original image; the image denoted Horizontal Sobel is the result of using fspecial and imfilter; the image labeled Sobel is the result of using the edge filter with Sobel specified, while the image labeled Canny has Canny specified.

The Zoom In tool was used to depict the detail in the images more clearly. As you can
see, the filter used to create the Horizontal Sobel image detects horizontal edges much more
readily than vertical edges. The filter used to create the Sobel image detected both horizontal
and vertical edges. This resulted from MATLAB looking for both horizontal and vertical edges
independently and then summing them. The Canny image demonstrates how well the Canny
method detects all edges. The Canny method does not only look for strong edges, as in the Sobel
method, but also will look for weak edges that are connected to strong edges and show those,
too.
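A sketch of the operations described above (rice.png in recent MATLAB releases, rice.tif in older ones):

I = imread('rice.png'); % stock example image
h = fspecial('sobel'); % horizontal Sobel kernel
eh = imfilter(double(I), h); % emphasizes horizontal edges
es = edge(I, 'sobel'); % binary Sobel edge map, both directions
ec = edge(I, 'canny'); % Canny: strong edges plus connected weak ones
figure;
subplot(1,3,1), imshow(mat2gray(eh)), title('Horizontal Sobel');
subplot(1,3,2), imshow(es), title('Sobel');
subplot(1,3,3), imshow(ec), title('Canny');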

Figure 21: Images Created by Different Edge Detection Methods

CHAPTER 5

SOFTWARE DESCRIPTION

5.1 Introduction
If you are new to MATLAB, you should start by reading Manipulating Matrices. The most important things to learn are how to enter matrices, how to use the : (colon) operator, and how to invoke functions. After you master the basics, you should read the rest of the sections below and run the demos.
At the heart of MATLAB is a new language you must learn before you can fully exploit
its power. You can learn the basics of MATLAB quickly, and mastery comes shortly after. You
will be rewarded with high productivity, high-creativity computing power that will change the
way you work.

The following sections describe the components of the MATLAB system.


Development Environment - introduces the MATLAB development environment, including
information about tools and the MATLAB desktop.

Manipulating Matrices - introduces how to use MATLAB to generate matrices and perform mathematical operations on matrices.

Graphics - introduces MATLAB graphic capabilities, including information about plotting


data, annotating graphs, and working with images.

Programming with MATLAB - describes how to use the MATLAB language to create
scripts and functions, and manipulate data structures, such as cell arrays and multidimensional
arrays.

MATLAB is a high-performance language for technical computing. It integrates


computation, visualization, and programming in an easy-to-use environment where problems and
solutions are expressed in familiar mathematical notation.

Typical uses include

i. Math and computation

ii. Algorithm development

iii. Modeling, simulation, and prototyping

iv. Data analysis, exploration, and visualization

v. Scientific and engineering graphics

vi. Application development, including graphical user interface building


MATLAB is an interactive system whose basic data element is an array that does not
require dimensioning. This allows you to solve many technical computing problems, especially
those with matrix and vector formulations, in a fraction of the time it would take to write a
program in a scalar noninteractive language such as C or FORTRAN.

The name MATLAB stands for matrix laboratory. MATLAB was originally written to
provide easy access to matrix software developed by the LINPACK and EISPACK projects.

MATLAB has evolved over a period of years with input from many users. In university
environments, it is the standard instructional tool for introductory and advanced courses in
mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-
productivity research, development, and analysis.

Toolboxes are comprehensive collections of MATLAB functions (M-files) that extend the
MATLAB environment to solve particular classes of problems. Areas in which
toolboxes are available include signal processing, control systems, neural networks, fuzzy logic,
wavelets, simulation, and many others.

5.2 MATLAB System


The MATLAB system consists of five main parts:

Development Environment
This is the set of tools and facilities that help you use MATLAB functions and files.
Many of these tools are graphical user interfaces. It includes the MATLAB desktop and
Command Window, a command history, and browsers for viewing help, the workspace, files, and
the search path.

The Matlab Mathematical Function Library


This is a vast collection of computational algorithms ranging from elementary functions
like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix
inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.

The Matlab Language


This is a high-level matrix/array language with control flow statements, functions, data
structures, input/output, and object-oriented programming features. It allows both "programming
in the small" to rapidly create quick and dirty throw-away programs, and "programming in the
large" to create complete large and complex application programs.

Handle Graphics
This is the MATLAB graphics system. It includes high-level commands for two-
dimensional and three-dimensional data visualization, image processing, animation, and
presentation graphics. It also includes low-level commands that allow you to fully customize the
appearance of graphics as well as to build complete graphical user interfaces on your MATLAB
applications.

The MATLAB Application Program Interface (API)


This is a library that allows you to write C and FORTRAN programs that interact with MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling MATLAB as a computational engine, and reading and writing MAT-files.

Development environment
This chapter provides a brief introduction to starting and quitting MATLAB, and the
tools and functions that help you to work with MATLAB variables and files. For more
information about the topics covered here, see the corresponding topics under Development
Environment in the MATLAB documentation, which is available online as well as in print.

5.4 Starting and Quitting MATLAB


Starting Matlab
On a Microsoft Windows platform, to start MATLAB, double-click the MATLAB shortcut
icon on your Windows desktop.

On a UNIX platform, to start MATLAB, type matlab at the operating system prompt.

After starting MATLAB, the MATLAB desktop opens (see MATLAB Desktop). You can change the directory in which MATLAB starts, define startup options including running a script upon startup, and reduce startup time in some situations.

Quitting Matlab
To end your MATLAB session, select Exit MATLAB from the File menu in the
desktop, or type quit in the Command Window. To execute specified functions each time
MATLAB quits, such as saving the workspace, you can create and run a finish.m script.

Matlab Desktop
When you start MATLAB, the MATLAB desktop appears, containing tools (graphical
user interfaces) for managing files, variables, and applications associated with MATLAB.

5.5 Summary

In this chapter the software analysis was carried out by giving the flowcharts and their corresponding algorithms.

Image Processing with MATLAB

The purpose of this tutorial is to gain familiarity with MATLAB’s Image Processing Toolbox.
This tutorial does not contain all of the functions available in MATLAB. It is very useful to go to
Help\MATLAB Help in the MATLAB window if you have any questions not answered by this
tutorial. Many of the examples in this tutorial are modified versions of MATLAB’s help
examples. The help tool is especially useful in image processing applications, since there are
numerous filter examples.

1. Opening MATLAB in the microcomputer lab

Access the Start Menu, Proceed to Programs, Select MATLAB 14 from the MATLAB 14 folder
--OR-- Open through C:\MATLAB6p5\bin\win32\matlab.exe
2. MATLAB

When MATLAB opens, the screen should look something like what is pictured in Figure 2.1, below.

Fig.2.1. MATLAB window

The Command Window is the window on the right hand side of the screen. This window is used
to both enter commands for MATLAB to execute, and to view the results of these commands.
The Command History window, in the lower left side of the screen, displays the commands that
have been recently entered into the Command Window. In the upper left hand side of the screen
there is a window that can contain three different windows with tabs to select between them. The
first window is the Current Directory, which tells the user which M-files are currently in use. The
second window is the Workspace window, which displays which variables are currently being
used and how big they are. The third window is the Launch Pad window, which is especially
important since it contains easy access to the available toolboxes, of which, Image Processing is
one. If these three windows do not all appear as tabs below the window space, simply go to View
and select the ones you want to appear. In order to gain some familiarity with the Command
Window, try Example 2.1, below. You must type code after the >> prompt and press return to
receive a new prompt. If you write code that you do not want to reappear in the MATLAB
Command Window, you must place a semi colon after the line of code. If there is no semi colon,
then the code will print in the command window just under where you typed it.

Example 2.1

X = 1; %press enter to go to next line

Y = 1; %press enter to go to next line

Z = X + Y %press enter to receive result

As you probably noticed, MATLAB gave an answer of Z = 2 under the last line of typed code. If there had been a semicolon after the last statement, the answer would not have been printed. Also, notice how the variables you used are listed in the Workspace window and the commands you entered are listed in the Command History window. If you want to retype a command, an easy way to do this is to press the ↑ or ↓ arrow keys until you reach the command you want to reenter.

3. The M-file

M-file – An M-file is a MATLAB document the user creates to store the code they write for their specific application. Creating an M-file is highly recommended, although not entirely necessary. An M-file is useful because it saves the code the user has written for their application; the code can be manipulated and tested until it meets the user's specifications. The advantage of using an M-file is that the user, after modifying their code, must only tell MATLAB to run the M-file, rather than reenter each line of code individually.

Creating an M-file – To create an M-file, select File\New ► M-file.

Saving – The next step is to save the newly created M-file. In the M-file window, select File\Save As… and choose a location that suits your needs, such as a disk, the hard drive or the U drive. It is not recommended that you work from your disk or from the U drive, so before editing and testing your M-file you may want to move your file to the hard drive.

Opening an M-file – To open a previously designed M-file, open MATLAB as described before. Then open the M-file by going to File\Open… and selecting your file. Then, in order for MATLAB to recognize where your M-file is stored, go to File\Set Path… This opens a window that enables you to tell MATLAB where your M-file is stored. Click the Add Folder… button, browse to the folder your M-file is located in, and press OK. Then in the Set Path window select Save, and then Close. If you do not set the path, MATLAB may open a window saying your file is not in the current directory; in that case, select the "Add directory to the top of the MATLAB path" button and hit OK. This is essentially the same as setting the path, as described above.

Writing Code – After creating and saving your M-file, the next step is to begin writing code. A suggested first move is to begin by writing comments at the top of the M-file with a description of what the code is for, who designed it, when it was created, and when it was last modified. Comments are declared by placing a % symbol before them, and they appear in green in the M-file window. See Figure 3.1, below, for Example 3.1.

Resaving – After writing code, you must save your work before you can run it. Save your code by going to File\Save.

Running Code – To run code, simply go to the main MATLAB window and type the name of your M-file after the >> prompt. Other ways to run the M-file are to press F5 while the M-file window is open, select Debug\Run, or press the Run button (see Figure 3.1) in the M-file window toolbar.

Example 3.1
Fig. 3.1. Example of M-file
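
Since the figure itself cannot be shown here, the following is a minimal sketch of what an M-file
like the one in Figure 3.1 might contain; the file name, dates, and values are illustrative only and
are based on Example 2.1.

% example31.m
% Purpose:  demonstrate the layout of a simple M-file
% Author:   your name
% Created:  creation date
% Modified: last modification date

X = 1;     % the semicolon suppresses output
Y = 1;
Z = X + Y  % no semicolon, so the result prints in the Command Window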

4. Images

Images – The first step in MATLAB image processing is to understand that a digital image is
composed of a two or three dimensional matrix of pixels. Each pixel contains a number or
numbers representing the grayscale or color value assigned to it. Color pictures generally contain
three times as much data as grayscale pictures, depending on which color representation scheme
is used, and therefore take roughly three times as much computation to process. In this tutorial
the method for conversion from color to grayscale will be demonstrated, and all processing will
be done on grayscale images. However, in order to understand how image processing works, we
will begin by analyzing simple two dimensional 8-bit matrices.

Loading an Image – Many times you will want to process a specific image; other times you may
just want to test a filter on an arbitrary matrix. If you choose to do this in MATLAB, you will
need to load the image before you can begin processing. If the image that you have is in color,
but color is not important for the current application, you can convert the image to grayscale.
This makes processing much simpler, since the new image contains only a third as many pixel
values. Color may not be important in an image when you are trying to locate a specific object
that has good contrast with its surroundings. Example 4.1, below, demonstrates how to load
different images.

Example 4.1
In some instances, the image in question is a matrix of pixel values. For example, you may need
something to test a filter on, but you do not yet need a real image to test the filter. Therefore, you
can simply create a matrix that has the characteristics wanted, such as areas of high and low
frequency. See Example 6.1 for a demonstration of this. Other times a stored image must be
imported into MATLAB to be processed. If color is not an important aspect then rgb2gray can be
used to change a color image into a grayscale image.

MATLAB’s example images are stored in C:\MATLAB6p5\toolbox\images\imdemos. Therefore,
it is a good idea to know how to load any image from any folder.

Figure 4.1: M-file for Loading Images
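
Since the figure is not reproduced here, a minimal sketch of the kind of M-file Figure 4.1
describes follows. The bitmap name “splash2.bmp” is taken from Example 4.2 below; the JPEG
file name is a placeholder for whatever JPEG you have copied into the same folder.

% load a bitmap and a JPEG image and convert each to grayscale
I = imread('splash2.bmp');   % read a bitmap from the current folder
figure(1), imshow(I)         % display the color bitmap
I2 = rgb2gray(I);            % convert the bitmap to grayscale
figure(2), imshow(I2)
J = imread('photo.jpg');     % read a JPEG (placeholder file name)
figure(3), imshow(J)
J2 = rgb2gray(J);            % convert the JPEG to grayscale
figure(4), imshow(J2)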

The class of the new image is the same as that of the color image. As you can see from the
example M-file in Figure 4.1, MATLAB has the capability of loading many different image
formats, two of which are shown. The function imread is used to read an image file with a
specified format. Consult imread in MATLAB’s help to find which formats are supported. The
function imshow displays an image, while figure tells MATLAB which figure window the image
should appear in. If figure does not have a number associated with it, then figures will appear
chronologically as they appear in the M-file. Figures 4.2, 4.3, 4.4 and 4.5, below, are a loaded
bitmap file, the image in Figure 4.2 converted to a grayscale image, a loaded JPEG file, and the
image in Figure 4.4 converted to a grayscale image, respectively. The images used in this
example are both MATLAB example images. In order to demonstrate how to load an image file,
these images were copied and pasted into the folder denoted in the M-file in Figure 4.1. In
Example 7.1, later in this tutorial, you will see that MATLAB images can be loaded by simply
using the imread function.

Writing an Image – Sometimes an image must be saved so that it can be transferred to a disk or
opened with another program. In this case you will want to do the opposite of loading an image,
reading it, and instead write it to a file. This can be accomplished in MATLAB using the imwrite
function. This function allows you to save an image as any type of file supported by MATLAB,
which are the same formats supported by imread. Example 4.2, below, contains the code
necessary for writing an image.

Example 4.2

In order to save an image you must use the imwrite function in MATLAB. The M-file in Figure
4.6 contains code for saving an image. This M-file loads the same bitmap file as described in the
M-file pictured in Figure 4.1. However, this new M-file saves the grayscale image created as a
JPEG image. Just like in Example 4.1, the “splash2” bitmap picture must be moved into
MATLAB’s work folder in order for the imread function to find it. When you run this M-file,
notice how the JPEG image that was created is saved into the work folder.
Figure 4.6: M-file for Saving an Image
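
As the figure is not reproduced here, a minimal sketch of the kind of code it describes follows;
the output file name is illustrative.

% load the bitmap, convert it to grayscale, and save it as a JPEG
I = imread('splash2.bmp');       % the bitmap must be in the work folder
I2 = rgb2gray(I);                % create the grayscale image
imwrite(I2, 'splash2gray.jpg');  % format is inferred from the .jpg extension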

5. Image Properties

Histogram – A histogram is a bar graph that shows a distribution of data. In image processing,
histograms are used to show how many pixels of each value are present in an image. Histograms
can be very useful in determining which pixel values are important in an image, and from this
data you can manipulate an image to meet your specifications. Data from a histogram can aid
you in contrast enhancement and thresholding. In order to create a histogram from an image, use
the imhist function. Contrast enhancement can be performed by the histeq function, while
thresholding can be performed by using the graythresh function together with the im2bw
function. See Example 5.1 for a demonstration of imhist, imadjust, graythresh, and im2bw. If
you want to see the resulting histogram of a contrast enhanced image, simply perform the imhist
operation on the image created with histeq.

Negative – The negative of an image means the output image is the reversal of the input image.
In the case of an 8-bit image, pixels with a value of 0 take on a new value of 255, while pixels
with a value of 255 take on a new value of 0. All the pixel values in between take on similarly
reversed new values, so the new image appears as the opposite of the original. The imadjust
function can perform this operation. See Example 5.1 for an example of how to use imadjust to
create the negative of an image. Another method for creating the negative of an image is to use
the imcomplement function.
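
A rough sketch of these operations is given below; the input file name is a placeholder, and
Example 5.1 remains the author’s own demonstration.

I = rgb2gray(imread('photo.jpg'));  % placeholder file name
figure(1), imhist(I)                % histogram of the original image
I2 = histeq(I);                     % contrast enhancement
figure(2), imhist(I2)               % histogram after equalization
level = graythresh(I);              % automatic threshold in [0,1]
BW = im2bw(I, level);               % threshold to a binary image
figure(3), imshow(BW)
neg = imadjust(I, [0 1], [1 0]);    % negative: reversed output range
% neg = imcomplement(I);            % equivalent alternative
figure(4), imshow(neg)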

6. Frequency Domain

Fourier Transform – In order to understand how different image processing filters work, it
is a good idea to begin by understanding what frequency has to do with images. An image is in
essence a two dimensional collection of discrete signals. Therefore, the signals have frequencies
associated with them. For instance, if there is relatively little change in grayscale values as you
scan across an image, then there is lower frequency content contained within the image. If there
is wide variation in grayscale values across an image then there will be more frequency content
associated with the image. This may seem somewhat confusing, so let us think about this in
terms that are more familiar to us. From signal processing, we know that any signal can be
represented by a collection of sine waves of differing frequencies, magnitudes and phases. This
transformation of a signal into its constituent sinusoids is known as the Fourier Transform. This
collection of sine waves can potentially be infinite, if the signal is difficult to represent, but is
generally truncated at a point where adding more signals does not significantly improve the
resolution of the recreation of the original signal. In digital systems, we use a Fourier Transform
designed in such a way that we can enter discrete input values, specify our sampling rate, and
have the computer generate discrete outputs. This is known as the Discrete Fourier Transform, or
DFT. MATLAB uses a fast algorithm for performing a DFT, which is called the Fast Fourier
Transform, or FFT, whose MATLAB command is fft. The FFT can be performed in two
dimensions, fft2 in MATLAB. This is very useful in image processing because we can then
determine the frequency content of an image. Still confused? Picture an image as a two
dimensional matrix of signals. If you plotted just one row, so that it showed the grayscale value
stored within each pixel, you might end up with something that looks like a bar graph, with
varying values in each pixel location. Each pixel value in this signal may appear to have no
correlation to the next one. However, the Fourier Transform can determine which frequencies are
present in the signal. In order to see the frequency content, it is useful to view the magnitude
(the absolute value) of the Fourier Transform, since the output of a Fourier Transform is
complex in nature.
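
A minimal sketch of viewing the frequency content of an image follows; the file name is a
placeholder, and the fftshift and log scaling are common display conventions rather than part of
the original example.

I = rgb2gray(imread('photo.jpg'));    % placeholder file name
F = fft2(double(I));                  % two dimensional DFT via the FFT
Fs = fftshift(F);                     % center the zero-frequency component
figure, imshow(log(1 + abs(Fs)), [])  % display the magnitude on a log scale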

7. Filters

Filters – Image processing is based on filtering the content of images. Filtering is used to modify
an image in some way: blurring, deblurring, locating certain features within an image, and so on.
Linear filtering is accomplished using convolution, as discussed above. A filter, or convolution
kernel as it is also known, is basically an algorithm for modifying a pixel value, given the
original value of the pixel and the values of the pixels surrounding it. There are literally hundreds
of types of filters used in image processing; however, we will concentrate on several common
ones.

Low Pass Filters – The first filters we will discuss are low pass filters. These filters blur high
frequency areas of images. This can be useful when attempting to remove unwanted noise from
an image. However, these filters do not discriminate between noise and edges, so they tend to
smooth out content that should not be smoothed out.
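
A sketch of a simple low pass (averaging) filter, assuming the Image Processing Toolbox
functions fspecial and imfilter; the file name and kernel size are arbitrary.

I = rgb2gray(imread('photo.jpg'));  % placeholder file name
h = fspecial('average', [5 5]);     % 5-by-5 averaging (low pass) kernel
I2 = imfilter(I, h);                % apply the kernel to the image
figure, imshow(I2)                  % the result is a blurred image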

Median Filters – Median Filters can be very useful for removing noise from images. A median
filter is like an averaging filter in some ways. The averaging filter examines the pixel in question
and its neighbors’ values and returns their mean. The median filter
looks at this same neighborhood of pixels, but returns the median value. In this way noise can be
removed, but edges are not blurred as much, since the median filter is better at ignoring large
discrepancies in pixel values.
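
A sketch comparing the two filters on an image corrupted with salt and pepper noise; the file
name and noise density are arbitrary.

I = rgb2gray(imread('photo.jpg'));                % placeholder file name
In = imnoise(I, 'salt & pepper', 0.02);           % add salt and pepper noise
Iavg = imfilter(In, fspecial('average', [3 3]));  % averaging smears the noise
Imed = medfilt2(In, [3 3]);                       % median filtering removes it
figure(1), imshow(Iavg)
figure(2), imshow(Imed)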

Erosion and Dilation – Erosion and dilation are similar to median filtering in that all three are
neighborhood operations. The erosion operation examines the value of a pixel
and its neighbors and sets the output value equal to the minimum of the input pixel values.
Dilation, on the other hand, examines the same pixels and outputs the maximum of these pixels.
In MATLAB erosion and dilation can be accomplished by the imerode and imdilate functions,
respectively, accompanied by the strel function.
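
A minimal sketch of both operations on a thresholded binary image; the file name and the disk
radius are arbitrary.

I = rgb2gray(imread('photo.jpg')); % placeholder file name
BW = im2bw(I, graythresh(I));      % binary image to operate on
se = strel('disk', 3);             % structuring element: disk of radius 3
BWer = imerode(BW, se);            % erosion: neighborhood minimum
BWdi = imdilate(BW, se);           % dilation: neighborhood maximum
figure(1), imshow(BWer)
figure(2), imshow(BWdi)
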
Edge Detectors – Edge detectors are very useful for locating objects within images. There are
many different kinds of edge detectors, but we will concentrate on two: the Sobel edge detector
and the Canny edge detector. The Sobel edge detector is able to look for strong edges in the
horizontal direction, vertical direction, or both directions. The Canny edge detector detects all
strong edges plus it will find weak edges that are associated with strong edges. Both of these
edge detectors return binary images with the edges shown in white on a black background.
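
Both detectors are available through the edge function, as sketched below; the file name is a
placeholder.

I = rgb2gray(imread('photo.jpg'));  % placeholder file name
BWs = edge(I, 'sobel');             % Sobel: strong edges in both directions
BWc = edge(I, 'canny');             % Canny: strong edges plus connected weak ones
figure(1), imshow(BWs)              % binary image, edges shown in white
figure(2), imshow(BWc)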

Segmentation – Segmentation is the process of partitioning an image into its component objects.
This can be accomplished in various ways in MATLAB. One way is to use a combination of
morphological operations to segment touching objects within an image. Another method is to
use a combination of dilation and erosion to segment objects. The MATLAB function bwperim,
which finds the perimeter pixels of the objects in a binary image, is useful in this process.
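
A minimal sketch of applying bwperim to a thresholded image follows; the file name is a
placeholder, and whether thresholding alone separates the objects depends on the image.

I = rgb2gray(imread('photo.jpg'));  % placeholder file name
BW = im2bw(I, graythresh(I));       % threshold to a binary image
P = bwperim(BW);                    % perimeter pixels of each object
figure, imshow(P)                   % object outlines shown in white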
