
International Journal of Scientific Research Engineering & Technology (IJSRET), ISSN 2278 0882

Volume 4, Issue 7, July 2015


Research Scholar, Muthurangam Govt. Arts College (A), Vellore- 2, T.N.
PG and Research, Dept. of Computer Science, Muthurangam Govt. Arts College, Vellore 2. T. N.

Glaucoma is a chronic eye disease which leads to vision
loss. As it cannot be cured, detecting the disease in time
is important. Current tests using Intra Ocular Pressure
(IOP) are not sensitive enough for population-based glaucoma screening. Optic nerve head assessment in retinal fundus images is both more promising and superior. This paper proposes optic disc and optic cup segmentation using region growing and the mean shift
algorithm for glaucoma screening. A self-assessment
reliability score is computed to evaluate the quality of
the automated optic disc segmentation. For optic cup
segmentation, in addition to the histograms and centre-surround statistics, the location information is also
included into the feature space to boost the performance.
The proposed segmentation methods have been
evaluated in a database consisting both healthy and
glaucoma images with optic disc and optic cup
boundaries manually marked by trained professionals.
Experimental results are expected to show a better segmentation performance.



Glaucoma is the name for a group of eye conditions in
which the optic nerve is damaged at the point where it
leaves the eye. This nerve carries information from the
retina (the light sensitive layer in the eye) to the brain
where it is perceived as a picture. The damage to the
optic nerve in glaucoma is usually caused by increased
pressure within the eye. This squeezes the optic nerve
and damages some of the nerve fibres which leads to
sight loss. Peripheral vision is the first area to be
affected. But if glaucoma is left untreated, the damage
can progress to eventual loss of central vision.
In some cases of glaucoma, eye pressure may be within
normal limits but damage occurs because there is a
weakness in the optic nerve. This is known as normal or
low tension glaucoma.
High pressure within the eye does not always result in
glaucoma. A common condition is ocular hypertension, where eye pressure is above normal level but there is no detectable damage to the field of vision. This condition
may simply be monitored or may be treated depending upon the consultant's view of the risk of developing glaucoma.
There are two main types of glaucoma: chronic and
acute. The most common is chronic, more formally
known as primary open angle glaucoma. Here the
channels that drain fluid from the eye become blocked
over many years. The pressure in the eye rises very
slowly and there is no pain to indicate that there is a
problem. However, the optic nerve is being damaged and
the field of vision gradually becomes impaired. Usually
the damage does not occur in the same part of the field
of vision in both eyes. One eye compensates for the
other and a great deal of damage will have been done before the person realises there is a problem with their sight.
The second type of glaucoma, acute, is much less
common. More formally known as primary angle closure
glaucoma, this develops when there is a sudden and
more complete blockage of aqueous fluid within the eye
and the pressure rises sharply. This tends to be very
painful because the rise in pressure happens suddenly. It
must be treated and in most cases a person's vision
recovers completely. However, if treatment is delayed,
there will usually be permanent damage to the eye.
Early detection is a particular problem in
African/African Caribbean communities where fewer
people have regular eye tests than in white communities.
This applies particularly to older people who are at an
even higher risk of developing the condition. Only 38
per cent of African/African Caribbeans over the age of
60 have a regular eye test against 68 per cent of the
general population. [4]




In computer vision, segmentation refers to the process of partitioning a digital image into multiple regions (sets of
pixels). The goal of segmentation is to simplify and/or
change the representation of an image into something
that is more meaningful and easier to analyze. Image
segmentation is typically used to locate objects and
boundaries (lines, curves, etc) in images.
The result of image segmentation is a set of regions that
collectively cover the entire image, or a set of contours
extracted from the image. Each of the pixels in a region
is similar with respect to some characteristic or
computed property such as color, intensity, or texture.
Adjacent regions are significantly different with respect
to the same characteristic.
Image segmentation is used in multimedia services for
explicit information about content so that human
observers can interpret images clearly by highlighting
specific regions of interest. For example, if segmentation
of important regions from the background areas can be automated, the subsequent quantizer can be optimized to allocate more resources in areas of interest.
Conventional segmentation methods are valid only for geometry-based images and non-texture images
where the edges of the objects may be found out and
joined together to display the segmented shapes. The
problem of segmentation is to partition an image into a
set of non-overlapping regions.
Image segmentation is one of the most important
problems in color image analysis. In image segmentation
the expectation is to be able to automatically extract the
desired objects of interest in an image for a certain task.
There are a variety of methods of image segmentation,
and each of them has its advantages and disadvantages.
It should be noted that there is no single standard
approach to image segmentation.
There are primarily four types of segmentation
techniques: threshold, boundary based, region-based,
and hybrid techniques. Some of the simplest approaches are the various types of global thresholding.
The spectral feature space is separated into subdivisions
and pixels of the same subdivision are merged when
locally adjacent in the image data. Over-segmentation
and under-segmentation take place easily without good
control of meaningful thresholds. A pixel-based clustering is a generalization of thresholding that can be used for both gray-level and color images. Boundary-based methods assume that pixel properties, such as
intensity, color, and texture should change abruptly
between different regions.
Region-based methods cluster pixels starting from a
limited number of single seed points. They depend on

the set of given seed points and often suffer from a lack
of control in the break-off criterion for the growth of a
region. Hybrid methods tend to combine two different
techniques together to achieve better segmentation. This paper proposes a hybrid segmentation method combining region-growing with mean shift clustering.
Region-growing is one of the region-based segmentation
methods. It starts with small image objects and grows regions by merging smaller image objects into bigger ones in numerous subsequent steps. There is a scale
parameter to control the growth of regions. However, it
is not a simple task to set a good scale parameter.
Unexpected segmentation results are obtained easily
when setting a bigger scale parameter. But if a smaller
scale is set, it results in over-segmentation, i.e., separating the image into units which are too small. The region-growing algorithm is a local optimization procedure, while pixel clustering is a global analysis of color space.
The segmentation technique based on pixel clustering is
an important approach. Segmentation using clustering
involves the search for points that are similar enough to
be grouped together in the color space.
Mean shift (MS) algorithm is a non-parametric
clustering technique that has been successfully applied
to feature space analysis and it has been proved to be an
excellent algorithm in image segmentation and video
object tracking.
Non-parametric methods in feature space analysis avoid
the use of the normality assumption. After clustering, the
segments of the image are composed of connected pixels
that are assigned to the same clusters.
However, some clusters in the space may not correspond
to significant regions in the image. Image segmentation
using clustering does not fully consider the spatial
information of the image. A segmentation procedure
could take into account simultaneously the properties of
pixels as well as their spatial arrangement in the image.
This paper proposes a method combining region-growing with adaptive mean shift, in which the bandwidth of the mean shift is adaptive. The aim of this
paper is to make good use of the advantages of both
region growing algorithm and clustering technique.
Some of the practical applications of image
segmentation are:
Medical Imaging
o Locate tumours and other pathologies
o Measure tissue volumes
o Computer-guided surgery
o Diagnosis




o Treatment planning
o Study of anatomical structure
Locate objects in satellite images (roads,
forests, etc.)
Face recognition and machine vision
Fingerprint recognition
Automatic traffic controlling systems.


Digital Image Processing is a collection of techniques
for the manipulation of digital images by computers. The
raw data received from the imaging sensors on the
satellite platforms contains flaws and deficiencies. To
overcome these flaws and deficiencies in order to get the
originality of the data, it needs to undergo several steps
of processing. This will vary from image to image
depending on the type of image format, initial condition
of the image and the information of interest and the
composition of the image scene. Digital Image
Processing undergoes three general steps:
Pre-processing
Display and enhancement
Information extraction
Texture, unlike many other image properties, has no formal definition.
A typical definition in the literature is one or more basic
local patterns that are repeated in a periodic manner.
However, it is not clear exactly what a pattern might be
or how it might repeat. It is not even clear whether
texture is an inherent property of all objects. Even
though texture is an intuitive concept, its definition has
proven difficult to formalize.
As one author put it, texture has been "extremely refractory to precise definition". Over the
years, many researchers have expressed the same
statement: There is no universally accepted definition
for texture.
Despite this lack of a universally agreed definition, all
researchers agree on two points. Firstly, there is
significant variation in intensity levels between nearby
image elements within a single texture. Secondly,
texture is a homogeneous property at some spatial scale
larger than the resolution of the image. Some researchers
describe texture in terms of the human visual system, i.e. that textures do not have uniform intensity, but are
nonetheless perceived as homogeneous regions by a
human observer. However, a definition based on human
perception poses problems when used as the basis for a
quantitative texture analysis algorithm.
It is very hard to define the goal of a texture
segmentation algorithm, even if the question is restricted
to one image taken from the natural world. Texture segmentation is not equivalent to object segmentation.
Therefore, there is no one segmentation of an image that can be considered to be "right." The "right" segmentation exists only in the mind of the observer, which can change not only between observers, but within the same observer at different times.

3.1 Clustering Methods
The K-means algorithm is an iterative technique that is
used to partition an image into K clusters. The basic
algorithm is:
1. Pick K cluster centers, either randomly or based on some heuristic.
2. Assign each pixel in the image to the cluster that minimizes the variance between the pixel and the cluster center.
3. Re-compute the cluster centers by averaging all of the pixels in the cluster.
4. Repeat steps 2 and 3 until convergence is attained (e.g. no pixels change clusters).
In this case, variance is the squared or absolute
difference between a pixel and a cluster center. The
difference is typically based on pixel color, intensity,
texture, and location, or a weighted combination of these
factors. K can be selected manually, randomly, or by a heuristic.
This algorithm is guaranteed to converge, but it may not
return the optimal solution. The quality of the solution
depends on the initial set of clusters and the value of K.
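As a concrete illustration of the four steps, the following is a minimal sketch (not from the paper) of K-means on one-dimensional pixel intensities; the toy pixel values and the choice of K = 2 are hypothetical:

```python
import numpy as np

def kmeans_segment(pixels, k, iters=50, seed=0):
    """Partition 1-D pixel intensities into k clusters (steps 1-4 above)."""
    rng = np.random.default_rng(seed)
    # Step 1: pick K cluster centers randomly from the data.
    centers = rng.choice(pixels, size=k, replace=False).astype(float)
    for _ in range(iters):
        # Step 2: assign each pixel to the nearest center.
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        # Step 3: re-compute centers by averaging each cluster's pixels.
        new_centers = np.array([pixels[labels == j].mean() if np.any(labels == j)
                                else centers[j] for j in range(k)])
        # Step 4: stop at convergence (no center moves any more).
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Toy "image": dark and bright pixel intensities.
pixels = np.array([10, 12, 11, 200, 205, 198, 13, 202], dtype=float)
labels, centers = kmeans_segment(pixels, k=2)
```

As the text notes, the result depends on the initial centers; rerunning with a different seed may converge to a different (possibly worse) partition.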
3.2 Histogram-Based Methods
Histogram-based methods are very efficient when
compared to other image segmentation methods because
they typically require only one pass through the pixels.
In this technique, a histogram is computed from all of
the pixels in the image, and the peaks and valleys in the
histogram are used to locate the clusters in the image.
Color or intensity can be used as the measure.
A refinement of this technique is to recursively apply the
histogram-seeking method to clusters in the image in
order to divide them into smaller clusters. This is
repeated with smaller and smaller clusters until no more
clusters are formed.
One disadvantage of the histogram-seeking method is
that it may be difficult to identify significant peaks and
valleys in the image. In this technique of image classification, distance metrics and integrated region matching are commonly used.
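A minimal sketch of the peak-and-valley idea follows, under the simplifying assumption of a bimodal gray-level histogram; the bin count and toy image are illustrative choices only:

```python
import numpy as np

def histogram_threshold(image):
    """Bimodal histogram segmentation sketch: find the deepest valley
    between the two highest peaks and threshold the image there."""
    hist, edges = np.histogram(image, bins=16, range=(0, 256))
    # Peaks: bins at least as large as both neighbours.
    peaks = [i for i in range(1, len(hist) - 1)
             if hist[i] >= hist[i - 1] and hist[i] >= hist[i + 1]]
    peaks = sorted(peaks, key=lambda i: hist[i], reverse=True)[:2]
    lo, hi = sorted(peaks)
    # The valley is the lowest bin between the two peaks.
    valley = lo + int(np.argmin(hist[lo:hi + 1]))
    threshold = edges[valley]
    return image >= threshold, threshold

# Toy image: dark background (~30) with a bright object (~220).
img = np.array([[30, 35, 33, 220],
                [28, 222, 225, 218],
                [31, 29, 34, 219]], dtype=float)
mask, t = histogram_threshold(img)
```

Only one pass over the pixels is needed to build the histogram, which is the efficiency advantage mentioned above.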
3.3 Edge Detection Methods
Edge detection is a well-developed field on its own
within image processing. Region boundaries and edges are closely related, since there is often a sharp adjustment in intensity at the region boundaries. Edge detection techniques have therefore been used as the basis of another segmentation technique.
The edges identified by edge detection are often
disconnected. To segment an object from an image
however, one needs closed region boundaries.
Discontinuities are bridged if the distance between the
two edges is within some predetermined threshold.

Edge linking in this way is a bottom-up method.
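The thresholding-plus-bridging step can be sketched on a one-dimensional intensity profile as follows; the gradient and gap thresholds are hypothetical parameters:

```python
import numpy as np

def detect_and_bridge_edges(row, grad_thresh, gap_thresh):
    """Edge detection sketch on a 1-D intensity profile: mark positions of
    sharp intensity change, then bridge discontinuities whose gap is within
    the predetermined threshold (as described above)."""
    grad = np.abs(np.gradient(row.astype(float)))
    edges = grad >= grad_thresh
    idx = np.flatnonzero(edges)
    # Bridge: connect consecutive edge pixels separated by a small gap.
    bridged = edges.copy()
    for a, b in zip(idx[:-1], idx[1:]):
        if 1 < b - a <= gap_thresh:
            bridged[a:b + 1] = True
    return bridged

# Step edge between a dark run and a bright run.
row = np.array([10, 10, 10, 200, 200, 10, 10, 10])
edges = detect_and_bridge_edges(row, grad_thresh=50, gap_thresh=2)
```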

3.4 Region based segmentation

We have seen two techniques so far: one dealing with gray-level values and the other with thresholds. In this section we will concentrate on regions of the image.
3.4.1 Formulation of the regions
An entire image is divided into sub-regions which must satisfy rules such as:
The union of the sub-regions is the entire image.
All sub-regions are connected in some predefined sense.
Sub-regions are disjoint.
A logical predicate must be satisfied by the pixels in a segmented region, e.g. P(Ri) = TRUE if all pixels in Ri have the same gray level.
Two adjacent sub-regions should differ in the sense of the predicate, i.e. P(Ri ∪ Rj) = FALSE.
3.4.2 Segmentation by region splitting and merging
The basic idea of splitting is, as the name implies, to
break the image into many disjoint regions which are
coherent within themselves. Take into consideration the
entire image and then group the pixels in a region if they
satisfy some kind of similarity constraint. This is like a divide and conquer method.
Merging is applied after the split: adjacent regions are merged if necessary. Algorithms of this nature are called split and merge algorithms.
3.4.3 Segmentation by region growing
The region growing approach is the opposite of split and merge:
An initial set of small areas is iteratively merged based on similarity constraints.
Start by choosing an arbitrary pixel and
compared with the neigh boring pixel.
Region is grown from the seed pixel by adding
in neigh boring pixels that are similar, increasing
the size of the region.
When the growth of one region stops we simply
choose another seed pixel which does not yet
belong to any region and start again.
This whole process is continued until all pixels
belong to some region.

Figure 1: Region growing
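The growing steps above can be sketched as a simple seeded routine; the 4-connectivity and the fixed intensity tolerance are illustrative choices, not the paper's exact criterion:

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from a seed pixel, adding 4-connected neighbours whose
    intensity is within `tol` of the seed value (the steps listed above)."""
    h, w = image.shape
    region = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    region[seed] = True
    seed_val = float(image[seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not region[ny, nx] \
                    and abs(float(image[ny, nx]) - seed_val) <= tol:
                region[ny, nx] = True   # similar neighbour joins the region
                queue.append((ny, nx))
    return region

# Toy image with a dark left half and a bright right half.
img = np.array([[10, 11, 90, 91],
                [12, 10, 92, 90],
                [11, 13, 95, 93]])
mask = region_grow(img, seed=(0, 0), tol=5)
```

Growth stops when no similar unassigned neighbour remains; a new seed would then be chosen for the next region, exactly as described above.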

Some of the undesirable effects of region growing are:
The current region dominates the growth process; ambiguities around edges of adjacent regions may not be resolved correctly.
Different choices of seeds may give different
segmentation results.
Problems can occur if the (arbitrarily chosen)
seed point lies on an edge.
However starting with a particular seed pixel and
letting this region grow completely before trying other
seeds biases the segmentation in favour of the regions
which are segmented first.
To counter the above problems, simultaneous region
growing techniques have been developed.
Similarities of neighbouring regions are taken
into account in the growing process.
No single region is allowed to completely
dominate the proceedings.
A number of regions are allowed to grow at the
same time.
o Similar regions will gradually coalesce
into expanding regions.
Control of these methods may be quite complicated, but efficient methods have been developed.
They are easy and efficient to implement on parallel computers.





The goal of any supervised learning algorithm is to find a function that best maps a set of inputs to its correct output. An example would be a simple classification task, where the input is an image of an animal, and the correct output would be the name of the animal. Some input and output patterns can be easily learned by single-layer neural networks (i.e. perceptrons). However, single-layer perceptrons cannot learn some relatively simple patterns, such as those that are not linearly separable.
For example, a human may classify an image of an
animal by recognizing certain features such as the
number of limbs, the texture of the skin (whether it is
furry, feathered, scaled, etc.), the size of the animal, and
the list goes on. A single-layer neural network however,
must learn a function that outputs a label solely using the
intensity of the pixels in the image. There is no way for
it to learn any abstract features of the input since it is
limited to having only one layer. A multi-layered
network overcomes this limitation as it can create
internal representations and learn different features in
each layer.[1] The first layer may be responsible for
learning the orientations of lines using the inputs from
the individual pixels in the image. The second layer may
combine the features learned in the first layer and learn
to identify simple shapes such as circles. Each higher
layer learns more and more abstract features such as
those mentioned above that can be used to classify the
image. Each layer finds patterns in the layer below it and
it is this ability to create internal representations that are
independent of outside input that gives multi-layered
networks their power. The goal and motivation for
developing the back propagation algorithm was to find a
way to train a multi-layered neural network such that it
can learn the appropriate internal representations to
allow it to learn any arbitrary mapping of input to output.
The back propagation learning algorithm can be divided
into two phases: propagation and weight update.
Phase 1: Propagation
Each propagation involves the following steps:
1. Forward propagation of a training pattern's input
through the neural network in order to generate
the propagation's output activations.
2. Backward propagation of the propagation's
output activations through the neural network
using the training pattern target in order to
generate the deltas of all output and hidden neurons.

Phase 2: Weight update

For each weight-synapse follow the following steps:
1. Multiply its output delta and input activation to
get the gradient of the weight.
2. Subtract a ratio (percentage) of the gradient from
the weight.
This ratio (percentage) influences the speed and quality
of learning; it is called the learning rate. The greater the
ratio, the faster the neuron trains; the lower the ratio, the
more accurate the training is. The sign of the gradient of
a weight indicates where the error is increasing; this is
why the weight must be updated in the opposite direction.
Repeat phases 1 and 2 until the performance of the
network is satisfactory.
Algorithm for a 3-layer network (only one hidden layer):
initialize network weights (often small random values)
do
  for each training example ex
    prediction = neural-net-output(network, ex)  // forward pass
    actual = teacher-output(ex)
    compute error (prediction - actual) at the output units
    compute delta_wh for all weights from hidden layer to output layer  // backward pass
    compute delta_wi for all weights from input layer to hidden layer  // backward pass continued
    update network weights  // input layer not modified by error estimate
until all examples classified correctly or another stopping criterion is satisfied
return the network
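The pseudocode above can be realized as a small NumPy implementation; the sigmoid activation, bias handling, learning rate, and XOR example are illustrative choices, not prescribed by the text:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def add_bias(a):
    # Append a constant 1 column so each layer learns a bias weight.
    return np.hstack([a, np.ones((a.shape[0], 1))])

def train_3layer(X, T, n_hidden=4, lr=0.5, epochs=5000, seed=0):
    """Backpropagation for a 3-layer network (one hidden layer), following
    the two phases above: propagation, then weight update."""
    rng = np.random.default_rng(seed)
    # Initialize network weights (small random values).
    W1 = rng.normal(0.0, 0.5, (X.shape[1] + 1, n_hidden))
    W2 = rng.normal(0.0, 0.5, (n_hidden + 1, T.shape[1]))
    for _ in range(epochs):
        # Phase 1: forward propagation of the training inputs.
        h = sigmoid(add_bias(X) @ W1)
        y = sigmoid(add_bias(h) @ W2)
        # Phase 1 (cont.): backward propagation of the deltas
        # (the sigmoid derivative is y * (1 - y)).
        delta_out = (y - T) * y * (1 - y)
        delta_hid = (delta_out @ W2[:-1].T) * h * (1 - h)
        # Phase 2: subtract a ratio (the learning rate) of each gradient.
        W2 -= lr * add_bias(h).T @ delta_out
        W1 -= lr * add_bias(X).T @ delta_hid
    return W1, W2

def predict(X, W1, W2):
    return sigmoid(add_bias(sigmoid(add_bias(X) @ W1)) @ W2)

# XOR is not linearly separable, so a single-layer perceptron cannot learn it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2 = train_3layer(X, T)
pred = predict(X, W1, W2)
```

As the text warns, gradient descent only finds a local minimum, so the final error depends on the random initialization and the learning rate.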
As the algorithm's name implies, the errors propagate
backwards from the output nodes to the input nodes.
Technically speaking, back propagation calculates the gradient of the error of the network with respect to the network's modifiable weights.[2] This gradient is almost
always used in a simple stochastic gradient
descent algorithm to find weights that minimize the
error. Often the term "back propagation" is used in a
more general sense, to refer to the entire procedure
encompassing both the calculation of the gradient and its
use in stochastic gradient descent. Back propagation
usually allows quick convergence on satisfactory local
minima for error in the kind of networks to which it is suited.
Back propagation networks are necessarily multilayer perceptrons (usually with one input, one hidden, and one
output layer). In order for the hidden layer to serve any
useful function, multilayer networks must have nonlinear activation functions for the multiple layers: a multilayer network using only linear activation functions is equivalent to some single-layer, linear network. Nonlinear activation functions that are commonly used include the logistic function, the softmax function, and the Gaussian function.
The back propagation algorithm for calculating a
gradient has been rediscovered a number of times, and is
a special case of a more general technique called automatic differentiation in reverse accumulation mode. It is also closely related to the Gauss-Newton algorithm, and is also part of continuing research in neural backpropagation.


The retinal nerve fiber layer (RNFL) and standard automated perimetry (SAP) data vectors were processed further to generate the feature vectors. For each eye, the
difference between the baseline RNFL and SAP data
vectors (obtained by the first exam date) and each
follow-up RNFL and SAP data vector were calculated.
This way, we obtained a longitudinal time series of
features for each subject's eye. For instance, if the data are collected from a subject at baseline and at 4 follow-up visits, the longitudinal data set for this subject has
four time points and each time point has a corresponding
7-D RNFL and a 54-D SAP (threshold values at 52 test
points, MD, and PSD) feature vector. This is shown in
Fig. 2 in more detail. Fig. 2(a) shows sample RNFL and
SAP measurements and indicates how the data vectors
are formed. Fig. 2(b) shows the longitudinal data vectors
for a single subject. The longitudinal feature vectors, which are the 1-norm difference between the baseline
and follow-up data vectors, are displayed in Fig. 2(c).
Different combinations of the RNFL and SAP features
were fed to the machine learning classifiers to assess
their effectiveness and power in detecting glaucoma
progression patterns and separating stable from
progressed eyes over time.

Machine Learning Classifiers
To analyze the effectiveness of different classifiers and to assess the
optimality of SAP and RNFL input features, we used
classifiers from Bayesian, Instance-based, Meta, and
Tree families of MLCs including Bayesian net, Lazy K
Star, Meta classification using regression, Meta
ensemble selection, alternating decision tree (AD tree),
random forest tree, and simple classification and
regression tree (CART) to detect glaucoma progression
patterns from the longitudinal feature vectors, and to
separate each eye into either the non-progressed (i.e., stable) or progressed glaucoma group. Eyes with at least 50%
of follow-up exams classified as progressed by the
MLC, or with two consecutive follow-up exams
classified as progressed by the MLC, were assigned to
the progressed glaucoma group; the remaining study
eyes were assigned to the stable glaucoma group. Here,
we briefly describe these classifiers.
Bayesian net employs factored representations of
probability distributions that generalize the naive
Bayesian classifier and explicitly represent statements
about independence. In Lazy learning algorithms, the
generalization beyond the training data is delayed until
the arrival of a new observation. Lazy IB is actually a
nearest neighbor classifier that assigns the nearest sample's class to the new instance [29]. Lazy K Star is
another form of instance-based classifier that utilizes the
entropic measures as the metric distance [30].
The machine learning classifiers were implemented in
MATLAB (MathWorks, Natick, MA, USA) or Weka
(The University of Waikato, New Zealand) to assess the
effectiveness of structural and functional ophthalmic features. First, we used RNFL and SAP features separately and then we combined the SAP and RNFL features to assess whether the combined functional and structural data performed significantly better than either alone. This is a critical analysis to reveal the optimality
of the SAP features for classifiers [38]. Several
classification performance metrics outlined below were computed using cross-validation (independent training and testing groups) to assess the machine learning classifier outcomes and, in addition, independent feature ranking was performed to assess the discriminating power of the structural and functional features in detecting stable from progressing glaucoma eyes.

6.1 Region-growing algorithm
Segmentation is the subdivision of an image into
separated regions. For many years, image segmentation
has been a main research focus in the area of image
analysis. Many different approaches have been followed,
and each of them has its advantages and disadvantages.
Region-growing is one of the most popular methods of
image segmentation. The region growing algorithm is a bottom-up region-merging technique starting with one-pixel objects. In numerous subsequent steps, smaller
image objects are merged into bigger ones. Throughout
this pairwise clustering process, the underlying
optimization procedure minimizes the weighted
heterogeneity of resulting image objects. The definition
of heterogeneity consists of two parts: spectral
heterogeneity and shape heterogeneity. The spectral
heterogeneity is the sum of the standard deviations of
spectral values in each layer weighted with the weights
for each layer wc.

h = Σc wc σc


The shape heterogeneity also consists of two parts: smoothness and compactness. Heterogeneity as deviation from a compact shape is described by the ratio of the border length l and the square root of the number of pixels n forming this image object:

hcompact = l / √n

Smoothness describes shape heterogeneity as the ratio of the border length l and the shortest possible border length b given by the bounding box of an image object parallel to the raster:

hsmooth = l / b



The spectral criterion is the change in heterogeneity that occurs when merging two image objects. The shape criterion is a value that describes the improvement of the shape with regard to compactness and smoothness of the resulting image object. The overall fusion value f is computed based on the spectral heterogeneity hcolor and the shape heterogeneity hshape as follows:

f = w · hcolor + (1 - w) · hshape

where w is the user-defined weight for color (against shape) with 0 ≤ w ≤ 1.
In each step of merging, the pair of adjacent image objects that yields the smallest growth of the defined overall heterogeneity is merged. If the smallest growth exceeds the threshold defined by the scale parameter, the process stops. In this way, the region-growing algorithm is a local optimization procedure.
The overall heterogeneity measure includes both spectral
and spatial information of image objects.
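A small numeric sketch of the merge criterion follows; the layer weights, standard deviations, and border lengths are hypothetical values chosen only for illustration, and the equal weighting of compactness and smoothness inside hshape is an assumption:

```python
import math

def spectral_heterogeneity(layer_weights, layer_stds):
    # h_color = sum over layers c of w_c * sigma_c
    return sum(w * s for w, s in zip(layer_weights, layer_stds))

def shape_heterogeneity(border_len, n_pixels, bbox_border_len, w_compact=0.5):
    compactness = border_len / math.sqrt(n_pixels)   # l / sqrt(n)
    smoothness = border_len / bbox_border_len        # l / b
    # Assumed 50/50 mix of the two shape terms, for illustration only.
    return w_compact * compactness + (1 - w_compact) * smoothness

def fusion_value(w_color, h_color, h_shape):
    # f = w * h_color + (1 - w) * h_shape, with 0 <= w <= 1
    return w_color * h_color + (1 - w_color) * h_shape

# Hypothetical candidate merge of two adjacent image objects.
h_color = spectral_heterogeneity([1.0, 1.0, 1.0], [4.0, 5.0, 6.0])
h_shape = shape_heterogeneity(border_len=40, n_pixels=100, bbox_border_len=36)
f = fusion_value(0.8, h_color, h_shape)
# The pair is merged only if f stays below the scale parameter.
```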
However, many clustering based segmentation
methods do not fully consider spatial information. A
popular clustering technique, mean shift, is reviewed in
the next section.
6.2 Mean shift algorithm
Feature space analysis is a widely used tool for solving
low-level image understanding tasks. Given a color
image, the RGB values of pixels are usually extracted as
the feature vectors. The major problem in feature space
analysis is finding the clusters in the feature space.
Estimating a cluster center is known in statistics as the multivariate location problem. Numerous clustering techniques have been proposed. The mean shift algorithm is an
effective tool for the estimate of cluster centers. Mean
shift algorithm has been successfully applied to image
analysis by many researchers recently. Mean shift
procedure is a nonparametric technique for estimation of
the density gradient.
Thus, the mean shift vector, the vector of difference between the local mean and the centre of the window, is proportional to the gradient of the density estimate. This is
helpful to locate small and large changes in the shapes in
the real time feature space of the texture images. Given
the radius of the search window and its initial location, the mean shift vector is computed iteratively. In each mean shift iteration the centre of the search window
is replaced by the vector. The iterative procedure repeats until convergence. The convergent points are the cluster centres found. Thus the mean shift procedure can be seen as a clustering technique in feature space analysis.
Segments are produced simply by merging all adjacent
pixels of the same cluster in feature space. However,
mean shift based clustering, just like many data
clustering techniques, does not fully utilize spatial
information, which is important in image segmentation.
The pixels on an image are highly correlated. Therefore,
the spatial relationship of neighbouring pixels is an important characteristic that can be of great aid in image segmentation.
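The iterative procedure can be sketched for one-dimensional gray values with a flat kernel; the window radius and toy data are illustrative:

```python
import numpy as np

def mean_shift_modes(points, radius, iters=100, tol=1e-4):
    """Mean shift sketch with a flat kernel: move each point's search window
    to the local mean until convergence; the convergent points are the
    cluster centres."""
    modes = points.astype(float).copy()
    for i in range(len(modes)):
        x = modes[i]
        for _ in range(iters):
            # Local mean over the window of the given radius.
            window = points[np.abs(points - x) <= radius]
            new_x = window.mean()
            if abs(new_x - x) < tol:  # converged to a density mode
                break
            x = new_x
        modes[i] = x
    return modes

# Two 1-D clusters of gray values; each point climbs to its cluster centre.
pts = np.array([10.0, 11.0, 12.0, 50.0, 51.0, 52.0])
modes = mean_shift_modes(pts, radius=5.0)
```

Points that converge to the same mode form one cluster; a segment is then obtained by merging adjacent pixels of the same cluster, as described above.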
6.3 The proposed segmentation method
Although region-growing algorithm takes advantage of
spatial information of image object effectively, it is a
local optimization procedure as stated above, while the mean shift technique is based on clustering in a global feature space. Mean shift is typically not able to separate different objects of interest belonging to the same cluster in feature space.
The information on which clustering can act is limited to the spectral domain. Therefore, we can combine region-growing
with mean shift clustering to make good use of their
advantages. The combined method is composed of two steps.
First of all, an initial mask is defined by the user, from which the region-growing algorithm is performed. Since the
procedure involves spatial information of image objects,
it may merge pixels that do not belong to the same
cluster in feature space. This is beneficial to yield
homogeneous image objects, which is difficult for
clustering technique in feature space analysis. In order to
determine the outcome of the region-growing
segmentation algorithm, the user can define the scale
parameter, which defines the break-off criterion. This process is done manually in this paper. The scale
parameter is a measure for the maximum change in
heterogeneity that may occur when merging two image
objects. In the proposed segmentation method the scale parameter is set to as small a value as possible. The task of region growing is not to produce the final segmented image.
After the mean shift process we determine the region-growing contour and obtain an image composed of homogeneous image objects. The generated image-object primitives are handled in the subsequent clustering process. In the second step, the final segmented image is produced by an iterative process.
Calculation of the Euclidean distance is as follows:

EC = sqrt((x1 - x2)^2 + (y1 - y2)^2)
The next step computes the level-set values phi, with which we select the narrow-band pixels whose phi lies between -1.2 and 1.2. Along with these values, the interior mean u and the exterior mean v are calculated over the pixels with phi less than 0 and greater than 0, respectively. This information is used to find the force from the image information as

F = (I(idx) - u)^2 - (I(idx) - v)^2

where u is the interior mean and v is the exterior mean.
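The narrow-band force computation above can be sketched directly. This assumes the common Chan–Vese style sign convention (phi negative inside the contour); the function name is ours.

```python
import numpy as np

def image_force(img, phi, band=1.2):
    """Region-based force on the narrow band of the level set phi:
    F = (I - u)^2 - (I - v)^2, where u is the mean intensity inside
    the contour (phi < 0) and v the mean outside (phi >= 0)."""
    u = img[phi < 0].mean()        # interior mean
    v = img[phi >= 0].mean()       # exterior mean
    idx = np.abs(phi) <= band      # narrow-band pixels only
    F = np.zeros_like(img, dtype=float)
    F[idx] = (img[idx] - u) ** 2 - (img[idx] - v) ** 2
    return F, u, v
```

A negative force pulls a band pixel toward the interior (its intensity is closer to u than to v), and a positive force pushes it out.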

The eight neighbouring pixels of each pixel are then identified: top, bottom, top-left, top-right, bottom-left, bottom-right, left and right. Using these neighbours, the rate of change of curvature is computed and the contour shape is detected. Then the Sussman algorithm is used to detect the exact pixels of the contours to be used for region growing.
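The curvature computation over the 8-neighbourhood can be sketched with central differences on the level-set function. This is a standard discretisation, shown here as an illustration; the function name is ours.

```python
import numpy as np

def curvature(phi):
    """Curvature of the level sets of phi, computed from central
    differences over the 8-neighbourhood of each pixel:
    kappa = (phi_xx*phi_y^2 - 2*phi_x*phi_y*phi_xy + phi_yy*phi_x^2)
            / (phi_x^2 + phi_y^2)^(3/2)."""
    p = np.pad(phi, 1, mode='edge')
    phi_x  = (p[1:-1, 2:] - p[1:-1, :-2]) / 2.0          # left/right
    phi_y  = (p[2:, 1:-1] - p[:-2, 1:-1]) / 2.0          # top/bottom
    phi_xx = p[1:-1, 2:] - 2 * phi + p[1:-1, :-2]
    phi_yy = p[2:, 1:-1] - 2 * phi + p[:-2, 1:-1]
    phi_xy = (p[2:, 2:] - p[2:, :-2] - p[:-2, 2:] + p[:-2, :-2]) / 4.0  # diagonals
    num = phi_xx * phi_y**2 - 2 * phi_x * phi_y * phi_xy + phi_yy * phi_x**2
    den = (phi_x**2 + phi_y**2) ** 1.5 + 1e-10
    return num / den
```

For a distance function whose level sets are circles, the computed curvature at radius r approximates 1/r, which is a convenient sanity check.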
A conventional mean shift clustering technique applies a fixed-size search window to the entire feature space. A single bandwidth, however, is not adaptive to a complex feature space with different distributions. To avoid this drawback, we utilise an adaptive mean shift algorithm improved by involving the neighbourhood information of the samples.
The K-nearest neighbours of a sample point describe its neighbourhood information in feature space. KNN captures the local density, while the task of the mean shift procedure is to find a local density maximum; their combination is therefore an adaptive way to detect the exact cluster centre based on local density. In the clustering procedure, the primitives are the image objects generated in the region-growing step.
The feature space is separated into subdivisions based on the means, and image objects of the same subdivision are merged when they are locally adjacent in the image data by means of the region-growing process. The adaptive mean shift uses suitable bandwidths for different distributions, so clusters of contrasting sizes and densities can be detected correctly.
The proposed segmentation method reduces the effect of the local character of the region-growing algorithm when segmenting an image at a large scale. At the same time, it overcomes the shortcoming of clustering-based segmentation techniques of not fully using spatial information. By integrating region growing with mean shift clustering, the proposed method segments various images with convincing results.




Thus the project shows that an efficient result can be obtained using the optic cup and disc in texture-based retinal image content. In this project we specify the regions of interest manually and group similar pixels together. The contour is then drawn on the grouped pixels to show the external shape of the recognised object.


Errors in the present technique may be reduced.
Regions are presently given manually; this may be made automatic.
In future, based on the contour shape, the name of the object may be displayed with the help of a shapes database.
The method may be extended to defence applications, where an enemy missile can be identified and its dimensions calculated through imagery.
This project may be extended to recognize the


The trained values are then tested against a real-time input image, for which our algorithm responds as Healthy or Glaucoma.


Employing region growing to discriminate stable from progressing glaucoma eyes using structural retinal fundus image measurements is promising. Using the diameters of the cup and disc, the cup-to-disc ratio is a valid method of detecting glaucoma with just two features. To obtain high diagnostic accuracy and higher sensitivity at high specificity, we suggest an RBF-based classifier for positive and negative findings. Our experiments would reveal that this method is highly suitable for diagnosis.

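The two-feature screening idea above can be sketched concretely. This is an illustrative computation of the vertical cup-to-disc ratio from the two segmentation masks, with a commonly cited suspect threshold of about 0.6; the function names and the exact threshold are our assumptions, not results from this paper.

```python
import numpy as np

def cup_to_disc_ratio(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from binary segmentation masks:
    ratio of the vertical extents (diameters) of cup and disc."""
    def vertical_diameter(mask):
        rows = np.flatnonzero(mask.any(axis=1))   # rows the mask spans
        return rows[-1] - rows[0] + 1
    return vertical_diameter(cup_mask) / vertical_diameter(disc_mask)

def screen(cdr, threshold=0.6):
    """A CDR above roughly 0.6 is a widely used glaucoma-suspect cue."""
    return "Glaucoma" if cdr > threshold else "Healthy"
```

In a full system the thresholded CDR would be replaced by the RBF-based classifier suggested above, with the CDR as one of its input features.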
[1] H. A. Quigley and A. T. Broman, "The number of people with glaucoma worldwide in 2010 and 2020," Brit. J. Ophthalmol., vol. 90, pp. 262–267, Mar. 2006.
[2] R. N. Weinreb and P. T. Khaw, "Primary open-angle glaucoma," Lancet, vol. 363, pp. 1711–1720, May 2004.
[3] S. Kingman, "Glaucoma is second leading cause of blindness globally," Bull. World Health Organ., vol. 82, pp. 887–888, Nov. 2004.
[4] J. B. Jonas and A. Dichtl, "Evaluation of the retinal nerve fiber layer," Surv. Ophthalmol., vol. 40, pp. 369–378, Mar./Apr. 1996.
[5] C. Bowd, R. N. Weinreb, and L. M. Zangwill, "Evaluating the optic disc and retinal nerve fiber layer in photographic methods," Semin. Ophthalmol., vol. 15, pp. 194–205, Dec. 2000.




[6] M. C. Lim, D. L. Budenz, S. J. Gedde, D. J. Rhee, and W. Feuer, "Digital stereoscopic photography with chevron drawings versus standard film photography: Estimates of cup to disc ratio measurements," Investigative Ophthalmol. Visual Sci., vol. 42, p. S131, Mar. 2001.
[7] M. Fingeret, F. A. Medeiros, R. Susanna, Jr., and R. N. Weinreb, "Five rules to evaluate the optic disc and retinal nerve fiber layer for glaucoma," Optometry, vol. 76, pp. 661–668, Nov. 2005.
[8] L. M. Alencar and F. A. Medeiros, "The role of standard automated perimetry and newer functional methods for glaucoma diagnosis and follow-up," Indian J. Ophthalmol., vol. 59, pp. S53–S58, Jan. 2011.
[9] B. Bengtsson and A. Heijl, "A visual field index for calculation of glaucoma rate of progression," Amer. J. Ophthalmol., vol. 145, pp. 343–353, Feb. 2008.
[10] N. Nassif, B. Cense, B. H. Park, S. H. Yun, T. C. Chen, B. E. Bouma, G. J. Tearney, and J. F. de Boer, "In vivo human retinal imaging by ultrahigh-speed spectral domain optical coherence tomography," Opt. Lett., vol. 29, pp. 480–482, Mar. 2004.