
IPASJ International Journal of Electronics & Communication (IIJEC)

Web Site: http://www.ipasj.org/IIJEC/IIJEC.htm


Email: editoriijec@ipasj.org
ISSN 2321-5984

A Publisher for Research Motivation........

Volume 4, Issue 4, April 2016

FACE RECOGNITION ACROSS NON-UNIFORM MOTION BLUR


Biju V G1, Anu S Nair2, Aravind Raj P3, Avin Joseph4, Paulson Gilbert5 and Raihana K U6
1 Associate Prof., Department of ECE, College of Engineering Munnar
2 Assistant Prof., Department of ECE, College of Engineering Munnar
3, 4, 5, 6 UG Scholars, Department of ECE, College of Engineering Munnar

ABSTRACT
Camera shake during exposure leads to image blur and ruins many photographs. Traditional methods for face recognition
cannot handle the non-uniform blurring situations that arise from the tilts and rotations of hand-held cameras. In this paper, a
methodology for face recognition in the presence of space-varying motion blur is proposed, based on the Non-Uniform
Motion Blur-robust (NU-MOB) algorithm. A set of focused gallery images is blurred, each blurred image is compared with the
input probe image, and the closest match is found. The algorithm achieves a recognition rate of about
76.27%, compared with techniques such as IRBF (58.32%) and SRC (54.09%) [1].

Keywords: Face recognition, LBP feature, Non-uniform motion blur, TSF model

1. INTRODUCTION
Camera shake, in which an unsteady camera causes blurry photographs, is a chronic problem for photographers [2].
The explosion of consumer digital photography has made camera shake very prominent, particularly with the
popularity of small, high-resolution cameras whose light weight can make them difficult to hold sufficiently steady.
Many photographs capture ephemeral moments that cannot be recaptured under controlled conditions or repeated with
different camera settings. If camera shake occurs in the image for any reason, then that moment is lost.
Degradations in face recognition can be attributed to blur, changes in illumination, pose, and expression,
partial occlusions, etc. Motion blur, in particular, deserves special attention owing to the very high usage of mobile
phones and hand-held imaging devices. Dealing with camera shake is a very relevant problem because, while tripods
hinder mobility, reducing the exposure time affects image quality. Moreover, built-in sensors such as gyroscopes and
accelerometers have their own limitations in sensing the camera motion. In an uncontrolled environment, other
factors such as illumination and pose could also vary, further compounding the problem. The focus of this paper is to
develop a system that can work across non-uniform blur.
Traditionally, blurring due to camera shake has been modeled as a convolution with a single blur kernel, with the blur
assumed to be uniform across the image [3]. However, it is space-variant blur that is encountered frequently in
hand-held cameras [4]. In this paper, we propose a face recognition algorithm that is robust to non-uniform (i.e., space-varying) motion blur arising from relative motion between the camera and the subject [5]. We assume that only a single
gallery image is available. The camera transformations can range from in-plane translations and rotations to out-of-plane translations, out-of-plane rotations, and even general 6D motion. The simple yet restrictive convolution model
fails to explain non-uniform motion blur, and a space-varying formulation becomes necessary.
We do not impose any constraints on the nature of the blur. We assume that the camera motion trajectory is sparse in
the camera motion space [6], [7]. This allows us to construct an optimization function with an l1-norm constraint on the
transformation spread function (TSF) weights. Minimizing this cost function gives an estimate of the
transformations that, when applied to the gallery image, result in the blurred probe image. Each gallery image, blurred
using the corresponding optimal TSF, is compared with the probe in the LBP (Local Binary Pattern) space. This direct


method of recognition allows us to circumvent the challenging and ill-posed problem of single-image blind deblurring.
The idea of re-blurring followed by LBP-based recognition [5] has been shown to work well on blurred faces, as LBP
histograms remain discriminative under blur.

2. BACKGROUND
The subject of face recognition is as old as computer vision, both because of the practical importance of the topic and
theoretical interest from cognitive scientists. Despite the fact that other methods of identification (such as fingerprints
or iris scans) can be more accurate, face recognition has always remained a major focus of research because of its
non-invasive nature and because it is people's primary method of person identification.
As one of the most successful applications of image analysis and understanding, face recognition has recently received
significant attention, especially during the past few years [8]. The accuracy and efficiency of face recognition
systems are rapidly improving, even in unconstrained settings. This is evidenced by the emergence of face recognition
conferences such as the International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA) since 1997 and the
International Conference on Automatic Face and Gesture Recognition (AFGR) since 1995, systematic empirical
evaluations of Face Recognition Techniques (FRT), and many commercially available systems. A wide range of
commercial and law enforcement applications and the availability of feasible technologies after 30 years of research are
the reasons for this trend.

3. METHODOLOGY
Non-Uniform Motion Blur (NU-MOB) Algorithm:
Input: Blurred probe image g and a set of gallery images fm, m = 1, 2, ..., M.
For each gallery image fm, find the optimal TSF hTm by solving

hTm = argmin_hT ||W(g - Am hT)||_2^2 + λ||hT||_1,   (1)

subject to hT ≥ 0.
Blur each gallery image fm with its corresponding hTm and extract LBP features.
Compare the LBP features of the probe image g with those of the transformed gallery images and find the closest
match.
Output: Identity of the probe image.
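The TSF-estimation step of equation (1) can be sketched as a small projected-gradient solver. This is an illustrative reconstruction, not the authors' implementation: the regularization weight `lam`, the iteration count, and the use of plain projected gradient descent are assumptions. Since hT ≥ 0, the l1 penalty reduces to a linear term with constant gradient.

```python
import numpy as np

def estimate_tsf(g, A, w, lam=0.01, n_iter=500):
    """Estimate TSF weights hT minimizing
        ||W(g - A hT)||_2^2 + lam * ||hT||_1   subject to hT >= 0,
    by projected gradient descent. With hT >= 0 the l1 term is just
    lam * sum(hT), so its gradient is the constant lam.
    g : (N,)    vectorized blurred probe image
    A : (N,NT)  columns are warped versions of one gallery image
    w : (N,)    diagonal of the weighting matrix W"""
    WA = w[:, None] * A                     # apply the diagonal weighting
    Wg = w * g
    L = np.linalg.norm(WA, 2) ** 2          # grad of the quadratic is 2L-Lipschitz
    step = 0.5 / (L + 1e-12)                # safe step size 1/(2L)
    hT = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * WA.T @ (WA @ hT - Wg) + lam
        hT = np.maximum(hT - step * grad, 0.0)   # project onto hT >= 0
    return hT
```

The gallery image whose optimally blurred version Am hTm is closest to the probe would then be declared the match.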

Figure 1: Block diagram of NU-MOB


The matrix Am ∈ R^(N×NT) in the above equation has N = 4096 rows, equal to the number of pixels in the image, while the number of
columns NT is determined by the blur setting.
For example, in the case of Setting 2, which has camera motion from bottom right to top left, NT = (number of
translation steps along the X-axis) × (number of translation steps along the Y-axis) × (number of rotation steps about the Z-axis).
Even if small facial-expression changes exist between the gallery and the probe, the weighting matrix W in equation (1)
makes our algorithm reasonably robust to these variations.
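The structure of Am can be illustrated by explicitly stacking warped copies of a gallery image as columns. The nearest-neighbour warping and the particular step grids below are assumptions made for this sketch, not details taken from the paper.

```python
import numpy as np

def warp(f, tx, ty, theta_deg):
    """Warp image f by an in-plane rotation about its centre followed by a
    (tx, ty) translation, using nearest-neighbour resampling (illustrative)."""
    H, W = f.shape
    yy, xx = np.mgrid[0:H, 0:W]
    c, s = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    xs = c * (xx - cx) + s * (yy - cy) + cx - tx     # inverse mapping
    ys = -s * (xx - cx) + c * (yy - cy) + cy - ty
    xi = np.clip(np.round(xs).astype(int), 0, W - 1)
    yi = np.clip(np.round(ys).astype(int), 0, H - 1)
    return f[yi, xi]

def build_A(f, tx_steps, ty_steps, theta_steps):
    """Stack every warped copy of gallery image f as a column of A.
    A has N rows (pixels, e.g. 4096 for a 64x64 face) and
    NT = len(tx_steps) * len(ty_steps) * len(theta_steps) columns."""
    cols = [warp(f, tx, ty, th).ravel()
            for th in theta_steps for tx in tx_steps for ty in ty_steps]
    return np.stack(cols, axis=1)
```

For a 64x64 face with, say, 5 translation steps along each axis and 3 rotation steps about the Z-axis, A would have 4096 rows and NT = 5 × 5 × 3 = 75 columns.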


The set of all images obtained by blurring an image f using the TSF model is a convex set. Moreover, this convex set is
given by the convex hull of the columns of the matrix A, where the columns of A are warped versions of f as determined
by the TSF.
The apparent motion of scene points in the image will vary at different locations when the camera motion is not
restricted to in-plane translations. In such a scenario, the space-varying blur across the image cannot be explained by the
convolution model with a single blur kernel. In this paper, a space-variant motion blur model is presented, and we
illustrate how this model can explain geometric degradations of faces resulting from general camera motion.
We blur each of the gallery images with the corresponding optimal TSFs hTm. For each blurred gallery image and the
probe, we divide the face into non-overlapping rectangular patches, extract LBP histograms independently from each
patch, and concatenate the histograms to build a global descriptor. The intuition behind dividing the image into blocks
is that the face can be seen as a composition of micro-patterns: the textures of the facial regions are locally encoded
by the LBP patterns, while the whole shape of the face is recovered by the construction of the global histogram, i.e., the
spatially enhanced global histogram encodes both the appearance and the spatial relations of facial regions. We then
perform recognition with a nearest-neighbour classifier using the Chi-square distance [9], with the obtained histograms
as feature vectors.
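The patch-wise LBP descriptor and chi-square nearest-neighbour matching described above can be sketched as follows. The 4x4 patch grid, 256-bin histograms, and per-patch normalization are illustrative choices and not necessarily those used in the paper.

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 LBP: threshold the 8 neighbours of each interior pixel
    against the centre and pack the bits into a code in [0, 255]."""
    c = img[1:-1, 1:-1]
    nbrs = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
            img[2:, 2:],   img[2:, 1:-1],  img[2:, :-2], img[1:-1, :-2]]
    code = np.zeros(c.shape, dtype=int)
    for bit, n in enumerate(nbrs):
        code |= (n >= c).astype(int) << bit
    return code

def lbp_descriptor(img, grid=(4, 4)):
    """Spatially enhanced descriptor: concatenated 256-bin LBP histograms
    from non-overlapping rectangular patches (grid size is an assumption)."""
    codes = lbp_codes(img)
    H, W = codes.shape
    ph, pw = H // grid[0], W // grid[1]
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            patch = codes[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
            h, _ = np.histogram(patch, bins=256, range=(0, 256))
            hists.append(h / max(h.sum(), 1))   # normalize each patch histogram
    return np.concatenate(hists)

def chi_square(a, b, eps=1e-10):
    """Chi-square distance between two histogram vectors."""
    return 0.5 * np.sum((a - b) ** 2 / (a + b + eps))

def match(probe_img, gallery_imgs):
    """Nearest-neighbour classification in LBP space: return the index of
    the gallery image whose descriptor is closest to the probe's."""
    p = lbp_descriptor(probe_img)
    dists = [chi_square(p, lbp_descriptor(g)) for g in gallery_imgs]
    return int(np.argmin(dists))
```

In the full pipeline, each gallery image would first be blurred with its optimal TSF before its descriptor is extracted.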

Figure 2: Calculation of local binary pattern

Figure 3: Extraction of LBP features


We assume a planar structure for the face [10], [11], [5] and use the geometric framework proposed in [12], [13] to model the
blurred face as the weighted average of geometrically warped instances (homographies) of the focused gallery image. The
warped instances can be viewed as the intermediate images observed during the exposure time. Each warp is assigned a
weight that denotes the fraction of the exposure duration for that transformation. The weights corresponding to the
warps are referred to as the transformation spread function (TSF) [14]. We develop a non-uniform motion blur-robust
(NU-MOB) face recognition algorithm based on the TSF model. To each focused gallery image, we apply all the
possible transformations that exist in the 6D space. We extend the convexity result proved for the simple convolution
model [5] to the TSF model and show that the set of all images obtained by blurring a particular gallery image is a
convex set given by the convex hull of the columns of the corresponding matrix. To recognize a blurred probe image,
we minimize the distance between the probe and the convex combination of the columns of the transformation matrix
corresponding to each gallery image. The gallery image whose distance to the probe is minimum is identified as the
match.


4. RESULTS & DISCUSSION
To evaluate our algorithm on different types and amounts of blur, we synthetically blur the face images with the TSF
model using the following four blur settings:
Setting 1 (S1): Camera motion from top right to bottom left.
Setting 2 (S2): Camera motion from bottom right to top left.
Setting 3 (S3): Camera motion from top left to bottom right.
Setting 4 (S4): Camera motion from bottom left to top right.
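One of these settings can be synthesized, under simplifying assumptions, as a TSF supported on a short camera-motion path. The sketch below uses a translation-only path (note a pure-translation TSF actually reduces to uniform blur; the paper's settings also involve rotations), and the ±3-pixel excursion and uniform exposure weights are assumptions.

```python
import numpy as np

def tsf_blur(f, path, weights=None):
    """Synthesize blur as a TSF-weighted average of translated copies of f.
    `path` lists (tx, ty) camera poses; weights default to a uniform
    exposure fraction per pose. np.roll wraps at the borders; a real
    implementation would pad instead."""
    if weights is None:
        weights = np.full(len(path), 1.0 / len(path))
    acc = np.zeros_like(f, dtype=float)
    for (tx, ty), wk in zip(path, weights):
        acc += wk * np.roll(np.roll(f, ty, axis=0), tx, axis=1)
    return acc

# Setting 1: camera motion from top right to bottom left (illustrative path)
s1_path = [(-t, t) for t in range(-3, 4)]
```

Because the weights sum to one, the blurred image preserves the total intensity of the input, matching the interpretation of TSF weights as exposure fractions.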

Figure 4: S1: Camera motion from top right to bottom left

Figure 5: S2: Camera motion from bottom right to top left

Figure 6: S3: Camera motion from top left to bottom right

Figure 7: S4: Camera motion from bottom left to top right


Intermediate outputs obtained for this paper are shown below. The LBP features of the blurred gallery images (Figure 8) are
compared with the LBP features of the input probe image (Figure 9). If at least a 70% match occurs between these features,
the face is recognized; otherwise, the input probe image is not recognized. A face fails to be recognized when the
person is not part of the database.



4.1 OUTPUT:

Figure 8: Blurred gallery images (a)-(d)

Figure 9: Input probe image


Figure 8(a) corresponds to setting S1, Figure 8(b) to setting S2, Figure 8(c) to setting S3, and Figure 8(d) to setting S4.
Face recognition is performed under non-uniform motion blur. Blurred gallery images are generated using the TSF model,
based on the motion blur that can normally occur due to motion of the imaging device. The generated blurred gallery
images and the probe image are compared with the help of LBP features, and recognition is performed.
The DRBF and SRC methods proposed in [15] and [16] are restricted to the simplistic convolution blur model, which is
valid only when the motion of the camera is limited to in-plane translations. This assumption of uniform blur does not

Volume 4, Issue 4, April 2016

Page 16

IPASJ International Journal of Electronics & Communication (IIJEC)


Web Site: http://www.ipasj.org/IIJEC/IIJEC.htm
Email: editoriijec@ipasj.org
ISSN 2321-5984

A Publisher for Research Motivation........

Volume 4, Issue 4, April 2016

hold true in real settings because camera tilts and rotations occur frequently in the case of hand-held cameras. The
proposed algorithm, in contrast, can also handle blur due to non-uniform motion.
Table 1: Recognition results (%) using NU-MOB along with comparisons

Method      S1     S2     S3     S4
NU-MOB      94.5   90.5   94     97
DRBF        81     77     78     71
SRC [16]    31.5   33.5   14.5   47.5

5. CONCLUSION
We proposed a methodology to perform face recognition under non-uniform motion blur. We showed that the set of
all images obtained by non-uniformly blurring a given image using the TSF model is a convex set given by the convex
hull of warped versions of the image. Capitalizing on this result, we proposed a non-uniform motion blur-robust face
recognition algorithm, NU-MOB. The limitation of our approach is that significant occlusions and large changes in
facial expressions cannot be handled.

REFERENCES
[1]. A. Punnappurath, A. N. Rajagopalan, S. Taheri, R. Chellappa, and G. Seetharaman, Face recognition across non-uniform motion blur, illumination, and pose, IEEE Transactions on Image Processing, vol. 24, no. 7, pp. 2067–2082, 2015.
[2]. R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. T. Freeman, Removing camera shake from a single photograph, ACM Transactions on Graphics (TOG), vol. 25, no. 3, pp. 787–794, 2006.
[3]. Q. Shan, J. Jia, and A. Agarwala, High-quality motion deblurring from a single image, ACM Transactions on Graphics (TOG), vol. 27, no. 3, p. 73, 2008.
[4]. A. Levin, Y. Weiss, F. Durand, and W. T. Freeman, Understanding blind deconvolution algorithms, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2354–2367, 2011.
[5]. P. Vageeswaran, K. Mitra, and R. Chellappa, Blur and illumination robust face recognition via set-theoretic characterization, IEEE Transactions on Image Processing, vol. 22, no. 4, pp. 1362–1372, 2013.
[6]. O. Whyte, J. Sivic, A. Zisserman, and J. Ponce, Non-uniform deblurring for shaken images, International Journal of Computer Vision, vol. 98, no. 2, pp. 168–186, 2012.
[7]. Z. Hu and M.-H. Yang, Fast non-uniform deblurring using constrained camera pose subspace, in BMVC, vol. 2, no. 3, 2012, p. 4.
[8]. W. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld, Face recognition: A literature survey, ACM Computing Surveys (CSUR), vol. 35, no. 4, pp. 399–458, 2003.
[9]. T. Ahonen, E. Rahtu, V. Ojansivu, and J. Heikkila, Recognition of blurred faces using local phase quantization, in 19th International Conference on Pattern Recognition (ICPR 2008), IEEE, 2008, pp. 1–4.
[10]. M. Nishiyama, A. Hadid, H. Takeshima, J. Shotton, T. Kozakaya, and O. Yamaguchi, Facial deblur inference using subspace analysis for recognition of blurred faces, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 4, pp. 838–845, 2011.
[11]. R. Gopalan, S. Taheri, P. Turaga, and R. Chellappa, A blur-robust descriptor with applications to face recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 6, pp. 1220–1226, 2012.
[12]. Y. Chu, S. Li, and M. Wang, A modified Richardson-Lucy method for space-variant motion deblurring, in Computer Science & Service System (CSSS), 2012 International Conference on, IEEE, 2012, pp. 1884–1887.
[13]. A. Gupta, N. Joshi, C. L. Zitnick, M. Cohen, and B. Curless, Single image deblurring using motion density functions, in Computer Vision – ECCV 2010, Springer, 2010, pp. 171–184.


[14]. P. Chandramouli and A. Rajagopalan, Inferring image transformation and structure from motion-blurred images, in Proc. of the BMVC, 2010.
[15]. P. Vageeswaran, K. Mitra, and R. Chellappa, Blur and illumination robust face recognition via set-theoretic characterization, IEEE Transactions on Image Processing, vol. 22, no. 4, pp. 1362–1372, Apr. 2013.
[16]. J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, Robust face recognition via sparse representation, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210–227, Feb. 2009.

AUTHORS
Aravind Raj P, Avin Joseph, Paulson Gilbert, and Raihana K U are UG Scholars, Department of ECE, College of
Engineering Munnar, under the guidance of Biju V G, Associate Prof., Department of ECE, College of Engineering Munnar,
and Anu S Nair, Assistant Prof., Department of ECE, College of Engineering Munnar.

