
International Journal of Computer Trends and Technology (IJCTT) – Volume 4, Issue 7, July 2013
ISSN: 2231-2803    http://www.ijcttjournal.org



A New Approach for Iris Recognition

Hannath C M #1, Shreeja R *2
#1 MTech Student (CSE), MES College of Engineering, Kuttippuram, India
*2 Assistant Professor (CSE), MES College of Engineering, Kuttippuram, India





Abstract: Biometrics use "something you are" to authenticate identity. This might include fingerprints, retina patterns, the iris, hand geometry, vein patterns, voice, password, or signature dynamics. Among these techniques, iris recognition is regarded as the most reliable and accurate biometric recognition system. In this paper, we propose a new approach for iris recognition. The proposed method consists of three steps: iris segmentation, feature extraction, and matching. All the images used in this paper were collected from the CASIA and UBIRIS iris databases. Experimental results show that the proposed method provides a smaller EER, a higher CRR, and fast feature extraction.

Keywords: Contourlet transform, Gabor filter, Hamming distance, Hough transform, Iris recognition.

I. INTRODUCTION
Biometrics is the automated recognition of individuals based on their behavioural and biological characteristics. A good biometric should be universal, unique, and permanent. There are two types of biometrics: biological/physiological biometrics and behavioural biometrics. The former includes inborn characteristics possessed by individuals, such as the face, hand geometry, retina, and iris, while the latter refers to characteristics acquired by an individual throughout his or her lifetime, such as voice and signature.

The iris is an externally visible, yet protected, organ whose unique epigenetic pattern remains stable throughout adult life. These characteristics make it very attractive for use as a biometric for identifying individuals. Fig. 1 shows an eye image along with the iris region. The probability that two irises could produce exactly the same iris pattern is approximately 1 in 10^78 (the population of the earth is around 10^10).

An iris recognition system is composed of several subsystems: segmentation, which localizes the iris region in an eye image; normalization, which creates a rectangular block of iris pattern from the circular iris region to eliminate dimensional inconsistencies; feature encoding, which generates a template containing only the significant features of the iris region; and matching and classification, which measure the similarity between two iris templates. The overall performance of an iris recognition system is highly related to the proper design of its subsystems.


Fig 1: Eye image
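
To make the data flow between these subsystems concrete, the following minimal Python skeleton sketches the stages as placeholder functions; the function names and signatures are our own illustrative assumptions and are not tied to any particular implementation.

```python
import numpy as np

def segment(eye_image: np.ndarray) -> np.ndarray:
    """Localize and return the iris region from an eye image (placeholder)."""
    raise NotImplementedError

def normalize(iris_region: np.ndarray) -> np.ndarray:
    """Unwrap the annular iris into a fixed-size rectangular block (placeholder)."""
    raise NotImplementedError

def encode(normalized_iris: np.ndarray) -> np.ndarray:
    """Generate a compact template of the significant iris features (placeholder)."""
    raise NotImplementedError

def match(template_a: np.ndarray, template_b: np.ndarray) -> float:
    """Return a similarity or distance score between two templates (placeholder)."""
    raise NotImplementedError

# Verification flow: score = match(encode(normalize(segment(img_a))),
#                                  encode(normalize(segment(img_b))))
```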

II. PREVIOUS WORKS
Research in the area of iris recognition has been receiving
considerable attention and a number of techniques and
algorithms have been proposed over the last few years.

Daugman [1] was the first to give an algorithm for iris recognition. His algorithm is based on iris codes. In the preprocessing step, the inner and outer boundaries of the iris are located. Integro-differential operators are then used to detect the centre and diameter of the iris, and the pupil is also detected with these differential operators. The feature extraction algorithm uses modified complex-valued 2-D Gabor filters. For matching, the Hamming distance is calculated using a simple Boolean exclusive-OR operator; a perfect match gives a Hamming distance equal to zero. The algorithm gives good accuracy, but the time required for matching and feature extraction is very high.
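
As a rough illustration of the matching step described by Daugman, the sketch below computes a Hamming distance between two binary iris codes with a Boolean XOR; the code length and the use of an occlusion mask are our own simplifying assumptions.

```python
import numpy as np

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray,
                     mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Fraction of disagreeing bits over the bits valid in both codes."""
    valid = mask_a & mask_b                          # ignore bits occluded by eyelids/lashes
    disagreeing = np.logical_xor(code_a, code_b) & valid
    return disagreeing.sum() / max(valid.sum(), 1)

# A perfect match yields a Hamming distance of zero.
a = np.random.rand(2048) > 0.5
m = np.ones(2048, dtype=bool)
print(hamming_distance(a, a.copy(), m, m))           # 0.0
```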

Wildes [2] made use of an isotropic band-pass decomposition derived from the application of Laplacian of Gaussian filters to the image data. Like Daugman, Wildes also used the first derivative of image intensity to find the location of edges corresponding to the borders of the iris. The Wildes system explicitly models the upper and lower eyelids with parabolic arcs, whereas Daugman excludes the upper and lower portions of the image.
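
The isotropic band-pass decomposition used by Wildes can be approximated with Laplacian-of-Gaussian filtering at several scales; the snippet below is a generic SciPy sketch, with the scale values chosen arbitrarily for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_decomposition(image: np.ndarray, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Isotropic band-pass responses: one Laplacian-of-Gaussian output per scale."""
    image = image.astype(np.float64)
    return [gaussian_laplace(image, sigma=s) for s in sigmas]
```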

In Tan's method [3], a bank of circular symmetric filters is used to capture local iris characteristics and form a fixed-length feature vector. For iris matching, an efficient approach called the nearest feature line (NFL) is used, with constraints imposed on the original NFL method to improve performance.
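
The nearest feature line idea can be summarized as projecting a query feature vector onto the line through every pair of prototype vectors of a class and taking the smallest residual; the following generic NumPy sketch (not Tan's exact constrained formulation) illustrates the distance computation.

```python
import numpy as np
from itertools import combinations

def nfl_distance(query: np.ndarray, prototypes: np.ndarray) -> float:
    """Smallest distance from `query` to any feature line spanned by two prototypes."""
    best = np.inf
    for x1, x2 in combinations(prototypes, 2):
        d = x2 - x1
        mu = np.dot(query - x1, d) / np.dot(d, d)    # position of the foot point on the line
        foot = x1 + mu * d
        best = min(best, np.linalg.norm(query - foot))
    return best
```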

The iris recognition system developed by Li Ma [4] is characterized by local intensity variations: the sharp variation points of iris patterns are recorded as features. Using wavelet analysis, the positions of local sharp variation points in each intensity signal are recorded as features, and an exclusive-OR operation is used for matching. The paper describes an efficient algorithm for iris recognition by characterizing key local variations. The basic idea is that local sharp variation points, denoting the appearance or disappearance of an important image structure, are used to represent the characteristics of the iris. The whole feature extraction procedure includes two steps: 1) a set of one-dimensional intensity signals is constructed to effectively characterize the most important information of the original two-dimensional image; 2) using a particular class of wavelets, the position sequence of local sharp variation points in such signals is recorded as features.
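
A simple way to see the "local sharp variation" idea is to take a 1-D intensity signal, compute its wavelet detail coefficients, and record where their magnitude peaks; the sketch below uses PyWavelets, with the wavelet choice and threshold being arbitrary illustrative assumptions.

```python
import numpy as np
import pywt

def sharp_variation_positions(signal: np.ndarray, wavelet="db4", level=2, rel_thresh=0.5):
    """Indices (in coefficient space) where wavelet detail magnitude exceeds a threshold."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    detail = coeffs[1]                               # coarsest detail band
    thresh = rel_thresh * np.abs(detail).max()
    return np.flatnonzero(np.abs(detail) > thresh)
```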

Li [5] presented an iris recognition algorithm based on modified log-Gabor filters. The algorithm follows the same general procedure as the method proposed by Daugman, but modified log-Gabor filters are adopted to extract the iris phase information instead of the complex Gabor filters used in Daugman's method. The advantage of log-Gabor filters over complex Gabor filters is that the former are strictly band-pass filters while the latter are not. This strictly band-pass property makes log-Gabor filters more suitable for extracting the iris phase features regardless of the background brightness.
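
For reference, the radial frequency response of a log-Gabor filter has no DC component by construction, which is what makes it strictly band-pass. The sketch below builds that 1-D radial response in NumPy; the centre frequency and bandwidth ratio are chosen arbitrarily for illustration.

```python
import numpy as np

def log_gabor_radial(n: int = 256, f0: float = 0.1, sigma_on_f: float = 0.55) -> np.ndarray:
    """Radial response G(f) = exp(-(ln(f/f0))^2 / (2 (ln sigma_on_f)^2)), with G(0) = 0."""
    f = np.linspace(0.0, 0.5, n)                     # normalized frequencies
    g = np.zeros(n)
    nz = f > 0                                       # the log-Gabor is undefined at f = 0
    g[nz] = np.exp(-(np.log(f[nz] / f0) ** 2) / (2 * np.log(sigma_on_f) ** 2))
    return g
```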
III. PROPOSED DESIGN
In any real-time biometric system, accuracy and recognition time are crucial parameters. All the methods mentioned in the literature survey use a normalization step, i.e., the annular iris pattern is transformed into a polar coordinate system or unwrapped into a rectangular block. Feature extraction then attempts to extract the iris information from the normalized iris image to generate a feature vector. However, this is a time-consuming process. In our proposed design we avoid the normalization step by extracting features directly from the segmented iris. Our proposed design is shown in fig. 2.

Fig 2: Proposed design (processing pipeline: Input Image → Denoising → Pupil Detection → Outer Boundary Detection → Segmented Image → Contourlet Transform → Contourlet Coefficients → Possibilistic Fuzzy Matching → Matched Image)

A. Segmentation Based on Gradient Contours

First of all, the colour image is converted to grey scale. Then, denoising is done using the wavelet transform. Compared with other parts of the eye, the pupil is darker, so it can be detected by finding the minimum intensity values in the eye image. The grey-scale image is then converted to binary, and morphological operations such as erosion and cleaning are applied to the binary image. The centroid of the minimum-intensity region is calculated. Travelling radially from the centroid, the partial derivatives of the intensity gradient are computed in both the x and y directions. If the gradient is greater than a specified threshold, that point is marked. This process is repeated to find all such points, and the discrete points are joined to form the boundary curve. Traditional methods such as the Canny operator can also be used for edge detection. After that, we scan radially around the pupil for the location of maximum change in pixel value, then draw a circle according to the equation of a circle. Finding the perimeter of this circle gives the sclera-iris boundary. The steps of segmentation are shown in fig. 3.
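
A rough OpenCV sketch of the pupil-detection portion of this procedure is given below; the threshold value, kernel size, and gradient threshold are arbitrary illustrative choices rather than the parameters used in our experiments.

```python
import cv2
import numpy as np

def detect_pupil(eye_bgr: np.ndarray, dark_thresh: int = 60, grad_thresh: float = 25.0):
    """Return (cx, cy, radius) of the pupil estimated from the dark region of the eye."""
    gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)                    # simple denoising stand-in

    # The pupil is the darkest region: threshold, then clean up with morphology.
    _, binary = cv2.threshold(gray, dark_thresh, 255, cv2.THRESH_BINARY_INV)
    kernel = np.ones((5, 5), np.uint8)
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

    # Centroid of the dark blob.
    m = cv2.moments(binary)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]

    # Travel radially outwards and mark where the intensity gradient exceeds a threshold.
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    magnitude = np.hypot(gx, gy)
    radii = []
    for theta in np.linspace(0, 2 * np.pi, 360, endpoint=False):
        for r in range(5, min(gray.shape) // 2):
            y = int(round(cy + r * np.sin(theta)))
            x = int(round(cx + r * np.cos(theta)))
            if 0 <= y < gray.shape[0] and 0 <= x < gray.shape[1] and magnitude[y, x] > grad_thresh:
                radii.append(r)
                break
    return cx, cy, float(np.median(radii)) if radii else 0.0
```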



Fig 3: Illustration of segmentation: a) original eye image, b) pupil detection, c) outer boundary detection, d) segmented iris.


B. Feature Extraction

Contourlets are constructed via filter banks and can be viewed as an extension of wavelets with directionality. The contourlet transform [7][8] is a directional transform capable of capturing contours and fine details in images. The contourlet expansion is composed of basis functions oriented in various directions at multiple scales, with flexible aspect ratios. To capture smooth contours in images, the representation should contain basis functions with a variety of shapes, in particular with different aspect ratios. Directionality and anisotropy are the important characteristics of the contourlet transform. Directionality means having basis functions in many directions, whereas the wavelet transform offers only three. The anisotropy property means the basis functions appear at various aspect ratios, whereas wavelets are separable functions and thus their aspect ratio is one. Due to these properties, the contourlet transform can efficiently handle 2-D singularities such as edges in an image. The flow graph of the contourlet transform is shown in fig. 4. It consists of two major stages: sub-band decomposition and the directional transform; the block diagram is shown in fig. 5. In the contourlet transform, the Laplacian pyramid decomposes the image into sub-bands and a directional filter bank then analyses each detail image. First, a wavelet-like transform is used for edge detection, and then a local directional transform is applied for contour-segment detection. The directional filter bank is designed to capture high-frequency components, while the Laplacian pyramid's sub-band decomposition avoids leaking low frequencies into the directional sub-bands, so directional information can be captured efficiently.

Fig 4: Flow graph of contourlet transform


Fig 5: Block diagram of contourlet transform
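
The first stage of the contourlet transform, the Laplacian pyramid, can be sketched with OpenCV as below; this covers only the multiscale sub-band decomposition stage, and the directional filter bank stage (for which we relied on a publicly available contourlet toolbox) is omitted here.

```python
import cv2
import numpy as np

def laplacian_pyramid(image: np.ndarray, levels: int = 3):
    """Decompose an image into `levels` band-pass sub-bands plus a coarse residual."""
    current = image.astype(np.float32)
    bands = []
    for _ in range(levels):
        down = cv2.pyrDown(current)                                  # low-pass and subsample
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        bands.append(current - up)                                   # band-pass detail image
        current = down
    bands.append(current)                                            # coarse approximation
    return bands
```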


A toolbox for the contourlet transform is available on the internet. Using this toolbox we extract the features and obtain the contourlet coefficients; the result is shown in fig. 5. Then, the mean and standard deviation of all images are computed and the coefficients are renormalized accordingly. This is done to reduce the error due to lighting conditions and background. Contourlet coefficients are extracted for all segmented iris images.



Fig 5: Contourlet coefficients
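
A minimal sketch of the kind of mean/standard-deviation normalization described above, applied to a contourlet coefficient array, is shown below; treating each coefficient array as a single array to be z-scored is our own simplifying assumption.

```python
import numpy as np

def normalize_coefficients(coeff: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Zero-mean, unit-variance normalization to reduce lighting/background effects."""
    return (coeff - coeff.mean()) / (coeff.std() + eps)
```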

C. Matching
Fuzzy matching determines the similarity between data sets, information, and facts. In possibilistic fuzzy matching (PFM), both the position and the local feature vector of each point are used to estimate the pose transformation and the point correspondence.
The steps are given below:

1) D_1 = | I_1(x, y) - I_2(x, y) |                                (1)

   where I_1(x, y) is the coefficient of the trained image and I_2(x, y) is the coefficient of the test image.

2) D_2 = | I_1(x, y) - I_2((x, y) · rtn) |
   D_1 = D_1 + (D_2 · w_1)                                        (2)

   where rtn is the angle of rotation applied to the test image.

3) D_3 = | I_1(x, y) - I_2((x, y) · scl) |
   D_1 = D_1 + (D_3 · w_2)                                        (3)

   where scl is the scaling applied to the image, and w_1, w_2 are the weightages.

4) Compute the minimum value of D_1 over the trained images and find the matching iris.
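
To make the scoring concrete, a minimal NumPy sketch implementing equations (1)-(3) is given below; the way the rotated and scaled versions of the test coefficients are produced (via scipy.ndimage), the sum-of-absolute-differences aggregation, and the weight values are all our own illustrative assumptions rather than the exact settings of the experiments.

```python
import numpy as np
from scipy.ndimage import rotate, zoom

def match_score(trained: np.ndarray, test: np.ndarray,
                rtn: float = 5.0, scl: float = 1.05,
                w1: float = 0.5, w2: float = 0.5) -> float:
    """Weighted distance D1 between trained and test contourlet coefficients (eqs. 1-3)."""
    d1 = np.abs(trained - test).sum()                                   # eq. (1)

    rotated = rotate(test, angle=rtn, reshape=False, mode="nearest")    # rotation-compensated test
    d2 = np.abs(trained - rotated).sum()
    d1 = d1 + d2 * w1                                                   # eq. (2)

    scaled = zoom(test, scl, mode="nearest")
    scaled = scaled[:trained.shape[0], :trained.shape[1]]               # crop back to common size
    d3 = np.abs(trained - scaled).sum()
    d1 = d1 + d3 * w2                                                   # eq. (3)
    return d1

def identify(test: np.ndarray, gallery: dict) -> str:
    """Step 4: the gallery entry with the minimum D1 is the matching iris."""
    return min(gallery, key=lambda label: match_score(gallery[label], test))
```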


D. Experimental Results

The proposed iris recognition system was evaluated on the CASIA-IrisV1-Interval database [9] and on the UBIRIS.v2 database [10]. The CASIA database is a popular iris database and is widely adopted to evaluate iris recognition systems. The receiver operating characteristic (ROC) curve, equal error rate (EER), and correct recognition rate (CRR) are used to evaluate the performance of the proposed method. The ROC curve is a false acceptance rate (FAR) versus false rejection rate (FRR) curve, which measures the accuracy of the matching process and shows the overall performance of an algorithm. The FAR is the probability of accepting an impostor as an authorized subject, and the FRR is the probability of an authorized subject being incorrectly rejected. The EER is the point where FAR and FRR are equal in value; the smaller the EER, the better the algorithm. The correct recognition rate (CRR) is the ratio of the number of samples correctly classified to the total number of test samples. The ROC curve of the proposed method is shown in fig. 6. A comparison of the proposed method with traditional methods is given in Table I.
TABLE I
COMPARISON OF EER AND CRR


Method                        EER        CRR
Daugman                       0.08 %     100 %
Wildes                        1.76 %     -
Tan                           0.57 %     99.85 %
Key Local Variation Method    0.07 %     100 %
Log-Gabor Filter Method       0.28 %     -
Tsai et al.                   0.1482 %   98.67 %
Proposed System               0.13 %     99.3 %



Fig.6: ROC curve of proposed method
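
For reproducibility, a generic sketch of how FAR, FRR, and the EER point can be computed from genuine and impostor match scores is given below; it assumes that lower scores mean better matches and is not tied to any particular toolkit.

```python
import numpy as np

def far_frr_eer(genuine_scores: np.ndarray, impostor_scores: np.ndarray, n_thresholds: int = 1000):
    """Sweep a decision threshold and return (thresholds, FAR, FRR, EER)."""
    scores = np.concatenate([genuine_scores, impostor_scores])
    thresholds = np.linspace(scores.min(), scores.max(), n_thresholds)
    # Lower distance means a better match: accept when score <= threshold.
    far = np.array([(impostor_scores <= t).mean() for t in thresholds])   # impostors accepted
    frr = np.array([(genuine_scores > t).mean() for t in thresholds])     # genuines rejected
    eer_index = np.argmin(np.abs(far - frr))
    eer = (far[eer_index] + frr[eer_index]) / 2.0
    return thresholds, far, frr, eer
```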

We selected 150 iris images from 50 classes, with three samples in each class, to evaluate the system performance. We also included noisy input images (Gaussian, speckle, and Poisson noise). For comparison, a Gabor-filter-based method and the contourlet-transform-based method were both implemented, and our proposed method was compared with the method proposed by Tsai et al. (which uses Gabor filters). A comparison of EER, CRR, and feature extraction time is shown below.


TABLE III
COMPARISON OF EER, CRR AND FEATURE EXTRACTION TIME

Method                 EER        CRR       Feature Extraction Time
Tsai et al.'s system   0.1482 %   98.67 %   310.73 sec
Proposed Design        0.13 %     99.3 %    177.981 sec

From the table it can be seen that our proposed method using the contourlet transform performs very well in the presence of noise.
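
The noisy test images mentioned above can be generated, for instance, with scikit-image's random_noise utility; the sketch below shows the three noise types we report, using default noise parameters as an illustrative assumption.

```python
import numpy as np
from skimage.util import random_noise

def make_noisy_variants(image: np.ndarray) -> dict:
    """Return Gaussian-, speckle-, and Poisson-corrupted copies of a [0, 1] float image."""
    return {
        "gaussian": random_noise(image, mode="gaussian"),
        "speckle": random_noise(image, mode="speckle"),
        "poisson": random_noise(image, mode="poisson"),
    }
```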
IV. CONCLUSIONS
In this paper, we improved an iris recognition system using the contourlet transform and possibilistic fuzzy matching. The proposed method achieves a higher accuracy because the contourlet transform can capture comparatively richer directional information. Also, fuzzy matching is used to compare two sets of feature points using information comprising the local features and the position of each point. Experimental results reveal that our algorithm achieves high performance even on noisy images.

REFERENCES

[1] J. G. Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Trans. Pattern Anal. Mach. Intell., vol. 15, no. 11, pp. 1148–1161, Nov. 1993.

[2] R. P. Wildes, "Iris recognition: An emerging biometric technology," Proc. IEEE, vol. 85, no. 9, pp. 1348–1363, Sep. 1997.

[3] Li Ma, Yunhong Wang, and Tieniu Tan, "Iris recognition using circular symmetric filters," in Proc. 16th Int. Conf. Pattern Recognition, vol. II, 2002, pp. 414–417.

[4] Li Ma, Tieniu Tan, Yunhong Wang, and Dexin Zhang, "Efficient iris recognition by characterizing key local variations," IEEE Transactions on Image Processing, vol. 13, no. 6, June 2004.

[5] Peng Yao, Jun Li, Xueyi Ye, Zhenquan Zhuang, and Bin Li, "Iris recognition algorithm using modified Log-Gabor filters," IEEE, 2006.

[6] Chung-Chih Tsai, Heng-Yi Lin, Jinshiuh Taur, and Chin-Wang Tao, "Iris recognition using possibilistic fuzzy matching on local features," IEEE Transactions on Systems, Man, and Cybernetics.

[7] Minh N. Do and Martin Vetterli, "The contourlet transform: An efficient directional multiresolution image representation," IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2091–2106, Dec. 2005.

[8] Amir Azizi and Hamid Reza Pourreza, "A new method for iris recognition based on contourlet transform and non linear approximation coefficients," Springer, pp. 307–316, 2009.

[9] CASIA-IrisV3. [Online]. Available: http://www.cbsr.ia.ac.cn/IrisDatabase.htm

[10] H. Proença and L. A. Alexandre, "UBIRIS: A noisy iris image database," in Proc. Int. Conf. Image Anal. Process., 2005, vol. 1, pp. 970–977.
