Introduction
Due to increasing demands for computer-user security and authentication, biometrics is a rapidly growing research area. Many ongoing studies on recognition systems are based on biometrics that use characteristics of the human body such as the face, fingerprint, gesture, signature, voice, iris, and vein. Among them, face recognition is an important part of image processing research, with a large number of possible application areas such as security and multimedia contents [1].
Generally, face recognition uses a three-step approach. First, the face is detected in the input image, and then a set of facial features such as the eyes, nose, and mouth is extracted. Lastly, face recognition is performed using the proposed measures. However, the human face may change its appearance because of external distortions such as scale, lighting conditions, and tilting, as well as internal variations such as make-up, hairstyle, and glasses. Many studies have therefore been conducted to improve the efficiency of face detection and recognition.
M.S. Szczuka et al. (Eds.): ICHIT 2006, LNAI 4413, pp. 128–138, 2007.
© Springer-Verlag Berlin Heidelberg 2007
2.1 Face Detection
There are several approaches to face detection, such as color-based methods and ellipse-fitting methods. Face detection based on skin color is a very popular method because the human face contains more skin-color components than other regions. The ellipse-fitting method approximates the shape of the face, since the human face is roughly elliptical.
Fig. 1. Facial skin color detection in daylight; (a) original image, (b) resulting image using H component, (c) Cr component, (d) I component, (e) multi-color model
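The multi-color skin model combines evidence from several color spaces, as Fig. 1 suggests (the H, Cr, and I components). The paper's actual threshold values are not given in this excerpt, so the sketch below uses illustrative ranges; the function name and parameter defaults are assumptions, not the authors' values.

```python
import numpy as np

def skin_mask(rgb, h_range=(0, 25), cr_range=(135, 175), i_range=(60, 220)):
    """Sketch of a multi-color skin model: a pixel counts as skin only if it
    passes thresholds in H (from HSV), Cr (from YCbCr), and intensity I.
    The threshold ranges are illustrative placeholders.
    rgb: uint8 array of shape (H, W, 3)."""
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    # Intensity (I) component: plain average of the channels
    i = (r + g + b) / 3.0

    # Cr component of YCbCr (ITU-R BT.601 coefficients)
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b

    # Hue in degrees via the standard RGB -> HSV formula
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    diff = np.where(mx - mn == 0, 1, mx - mn)  # avoid division by zero
    h = np.where(mx == r, (60 * (g - b) / diff) % 360,
        np.where(mx == g, 60 * (b - r) / diff + 120,
                          60 * (r - g) / diff + 240))

    # A pixel is skin only where all three component tests agree
    return ((h_range[0] <= h) & (h <= h_range[1]) &
            (cr_range[0] <= cr) & (cr <= cr_range[1]) &
            (i_range[0] <= i) & (i <= i_range[1]))
```

Requiring agreement across the three components is what makes the multi-color model more robust to illumination than any single color space alone.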
2.2 Face Tracking
The proposed face tracking and recognition algorithm has two stages: motion information is used for tracking, and a face feature vector function is used for recognition. The tracking algorithm first sets an object selected from the previous frame as the target object, then calculates the Euclidean distance between the objects in the current frame and those in the previous frame to find the one with the smallest distance. That is, the algorithm tracks the face with the least Euclidean distance between the area detected by the multi-color model and each face detected in the next frame. If it fails to find the object to track, it sets the area detected from the current frame as the target object again and resumes tracking in real time. The following equation selects the target object to track, where F_TrackingObject denotes the tracking target, and O_j^(t-1) and O_j^(t) denote the j-th object in the previous and current frames, respectively.
F_TrackingObject = min_j || O_j^(t-1) − O_j^(t) ||   (2)
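As a concrete illustration of Eq. (2), the sketch below matches each tracked face to its nearest detection by Euclidean distance between region centers. The function name, the center-point representation, and the max_dist re-acquisition threshold are assumptions for this sketch, not the paper's implementation.

```python
import math

def track_target(prev_objects, curr_objects, max_dist=50.0):
    """Match each previously tracked face to the nearest detection in the
    current frame, by Euclidean distance between region centers.

    prev_objects / curr_objects: lists of (x, y) face-region centers.
    Returns (prev_index, curr_index) pairs. A face whose nearest detection
    is farther than max_dist is treated as lost, so the caller can restart
    tracking from the current detections (the re-acquisition step above).
    """
    matches = []
    for i, (px, py) in enumerate(prev_objects):
        best_j, best_d = None, float("inf")
        for j, (cx, cy) in enumerate(curr_objects):
            d = math.hypot(px - cx, py - cy)  # ||O_j^(t-1) - O_j^(t)||
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None and best_d <= max_dist:
            matches.append((i, best_j))
    return matches
```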
Generally, the human face may change its appearance because of external distortions. In particular, it is difficult to recognize and analyze a person's face in the presence of background noise and tilting. Thus, we employ a face correction method using eye features to solve the tilting distortion, together with the face detection method based on the multi-color model to mitigate the background noise and illumination problems mentioned above. In the facial feature extraction step of the proposed algorithm, we use face geometry information to extract facial features such as the eyes, nose, and mouth. Face geometry describes the geometrical shape of the face from the coordinates of its features. It consists of each feature coordinate, the center coordinates of the eyes, the lengths of the left and right eyes, the distance between the eyes and the mouth, and the relative distances and ratios between features. The distance between two features can be approximated by their horizontal and vertical distances. In this case, the facial coordinates become invalid when the slope of the face changes. Namely, if the face is tilted, the face geometry is no longer valid, and a correction of the orientation of the face is necessary. Thus, we correct the tilted face by correcting the eye slope. For a face that is almost frontal and upright, the slope between the two eyes can be approximated by their horizontal angle.

Fig. 2. Facial feature coordinate system; (a) facial coordinate and (b) rotation angle of eyes feature

In this paper, in order to recognize images of both rotated and frontal faces, we extract the eyes as a facial feature and correct the facial rotation. We apply a horizontal Sobel edge operation within the detected face region and extract features by labeling the object clusters within a certain range. As shown in
Fig. 2, the left eye and its eyebrow are positioned around Q1, the point at 1/4 of the width of the whole face, and the right eye and its eyebrow around the point at 3/4 of the width. Each eye object has the feature information that it is located at H_C, 1/2 of the height of the face, within the range of the width of the eyebrow objects. Although we can use this information to extract the eye features, it is better to expand the range by the height of the eye object itself, E_H, as can be seen in Fig. 2(b). The rotation angle (Angle_Face) for the rotation correction is computed by applying tan⁻¹ to the ratio of the height difference between the two eye objects (H_ED) to the width difference between them (W_ED). Fig. 3 shows the result of eye detection in variously tilted images, and Fig. 4 shows the result after the facial rotation is corrected by the rotation angle.
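The angle computation and the subsequent de-rotation can be sketched as follows, using the conventional atan2 form of the eye-line angle. The eye centroids are assumed to come from the Sobel-plus-labeling step described above; the function names are placeholders for this sketch.

```python
import math

def eye_rotation_angle(left_eye, right_eye):
    """Tilt angle of the face, estimated from the two detected eye centers.
    left_eye / right_eye: (x, y) centroids of the labeled eye objects.
    Returns the angle (degrees) of the eye line from horizontal, i.e.
    tan^-1 of the height difference H_ED over the width difference W_ED."""
    w_ed = right_eye[0] - left_eye[0]  # width between the eye objects
    h_ed = right_eye[1] - left_eye[1]  # height between the eye objects
    return math.degrees(math.atan2(h_ed, w_ed))

def rotate_point(p, center, angle_deg):
    """Rotate point p about center by -angle_deg, undoing the face tilt."""
    a = math.radians(-angle_deg)
    dx, dy = p[0] - center[0], p[1] - center[1]
    return (center[0] + dx * math.cos(a) - dy * math.sin(a),
            center[1] + dx * math.sin(a) + dy * math.cos(a))
```

Applying rotate_point to every facial feature coordinate (or to the image pixels) with the angle from eye_rotation_angle restores the upright face geometry that the feature vector of the next section assumes.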
Fig. 3. Eyes feature detection; (a) eyes detection in a frontal face image, (b) vertical histogram of (a), (c) eyes detection in a rightward tilted image (15°), (d) vertical histogram of (c), (e) eyes detection in a leftward tilted image (+30°), and (f) vertical histogram of (e)
2.3 Face Recognition
Generally, face recognition uses a three-step approach: first the face is detected in the input image, then a set of facial features is extracted, and lastly face recognition is performed by a recognition approach. This section mainly deals with the recognition part: recognizing the face image, which has been detected and corrected using both the multi-color model and the tilted-face correction algorithm, by means of the face feature vector function. The face feature vector system is based on the person's face geometry, the interrelationships between the person's facial features, and the facial angles, which can be used to recognize each person's face using the feature vector found on the face. Conventional face geometry approaches considered only the symmetry and the geometrical relationships of the person's face. In this paper, however, the face feature vector function includes the geometry values of the facial features (eyes, nose, and mouth), the correlation values between the facial features, and the facial angles of the person's face.

Fig. 4. Tilted face correction; (a) tilted face image, (b) facial feature extraction, (c) eyes extraction, and (d) tilted face correction

Fig. 5 shows the face feature vector function for face recognition. Considering the facial characteristics shown in Fig. 5, GD1 is the distance between the two eyes (using eye symmetry), GD2 is the distance between the midpoint of the eyes and the center of the nose (P_NC), and GD3 and GD4 are the distances between the center of the nose (P_NC) and the center of the mouth (P_MC), and between the center of the mouth (P_MC) and the center of the chin (P_CC), respectively.
Facial angles EA1, EA2, and EA3 associated with the facial characteristics can also be extracted using the cosine function. The following equations represent the relative distance ratios of the facial feature vector:

ED1 = Abs[1.0 − {0.7 × GD2 / GD1}]   (3)

ED2 = Abs[1.0 − {1.1 × GD1 / (GD2 + GD3)}]   (4)

ED3 = Abs[1.0 − {1.6 × GD1 / (GD2 + GD3 + GD4)}]   (5)

ED4 = Abs[1.0 − {1.7 × (GD3 + GD4) / (GD2 + GD3 + GD4)}]   (6)

ED5 = Abs[1.0 − {1.5 × GD4 / GD3}]   (7)
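Equations (3)–(7) can be evaluated directly from the four geometry distances. The sketch below follows the fraction groupings as reconstructed above; the function name is an assumption.

```python
def feature_distance_ratios(gd1, gd2, gd3, gd4):
    """Relative distance ratios ED1..ED5 of Eqs. (3)-(7), computed from the
    facial geometry distances: GD1 (eye-to-eye), GD2 (eyes-to-nose),
    GD3 (nose-to-mouth), GD4 (mouth-to-chin). The weighting constants are
    those appearing in the equations."""
    return [
        abs(1.0 - 0.7 * gd2 / gd1),                        # ED1, Eq. (3)
        abs(1.0 - 1.1 * gd1 / (gd2 + gd3)),                # ED2, Eq. (4)
        abs(1.0 - 1.6 * gd1 / (gd2 + gd3 + gd4)),          # ED3, Eq. (5)
        abs(1.0 - 1.7 * (gd3 + gd4) / (gd2 + gd3 + gd4)),  # ED4, Eq. (6)
        abs(1.0 - 1.5 * gd4 / gd3),                        # ED5, Eq. (7)
    ]
```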
If one expresses the above five equations as ED_i, the similarity of the distance ratios between two faces can be calculated with the following equations, which represent the repetition ratio between the original image α and the test image β:

FR1(α, β) = Min | Σ_{i=1}^{5} ψ(α, β) |   (8)

ψ(α, β) = 1 − |ED_i(α) − ED_i(β)|   (9)

ED_i(α) and ED_i(β) are the ratio values of the original image and the inquired test image, and ψ(α, β) is the correlation similarity of the two images. Likewise, the following equations represent the facial-angle repetition rate and the facial-angle correlation similarity, where the closer FR2(α, β) is to 0, the better the feature correspondence:

FR2(α, β) = Min | Σ_{i=1}^{3} ψ(α, β) |   (10)

ψ(α, β) = 1 − |EA_i(α) − EA_i(β)|   (11)
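The matching step of Eqs. (8)–(11) admits more than one reading of the garbled original; the sketch below takes Min|Σ·| to mean selecting the enrolled person whose summed ratio differences are smallest, consistent with "close to 0 corresponds more to the feature". The gallery structure and function names are assumptions for this sketch.

```python
def correlation_similarity(ratios_a, ratios_b):
    """Per-feature correlation similarity of Eqs. (9)/(11):
    psi_i = 1 - |ED_i(alpha) - ED_i(beta)|; identical ratios give 1.0."""
    return [1.0 - abs(a - b) for a, b in zip(ratios_a, ratios_b)]

def recognize(test_ratios, gallery):
    """Pick the enrolled face that best matches the test image, in the
    Min|sum| spirit of Eqs. (8)/(10): the score summed over features is
    smallest (closest to 0) for the best-corresponding person.
    gallery: dict mapping person id -> stored ED ratio list."""
    def score(stored):
        # sum of (1 - psi_i), i.e. the summed absolute ratio differences
        return abs(sum(1.0 - s for s in correlation_similarity(stored, test_ratios)))
    return min(gallery, key=lambda pid: score(gallery[pid]))
```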
center line, the horizontal mouth line, and the horizontal nose line, respectively. From the extracted face-line points, we calculate the slope of the face-line in each area and can estimate the facial shape using the relative facial feature gradients. In this paper, we classify the face shape into 9 types for males and 8 types for females. Fig. 9 shows the classification result of facial shape for automatic avatar drawing.
Fig. 9. Classification result of facial shape; (a) facial shape for male and (b) facial shape for female
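The slope features and the 9 male / 8 female shape classes of Fig. 9 are not specified numerically in the text, so the following is an illustrative sketch only: the thresholds and class labels are hypothetical placeholders, showing how relative face-line gradients could be bucketed into shape classes.

```python
def classify_face_shape(slopes, thresholds=(0.15, 0.45)):
    """Hypothetical bucketing of a face into coarse shape classes by the
    average absolute gradient of its extracted face-line segments.
    slopes: face-line slopes from the feature areas described above.
    The thresholds and the labels are illustrative, not the paper's."""
    avg = sum(abs(s) for s in slopes) / len(slopes)
    low, high = thresholds
    if avg < low:
        return "round"    # nearly flat face-lines
    if avg < high:
        return "oval"     # moderate gradients
    return "angular"      # steep gradients
```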
Experimental Results
Fig. 10. GUI of the proposed Internet-based face analysis and automatic facial avatar drawing system; (a) face analysis and avatar for male, (b) face analysis and avatar for female, and (c) face analysis, avatar drawing, and life-partner news in the physiognomy DB

Fig. 11. The proposed PDA-based face analysis and automatic facial avatar drawing system; (a) initial page, (b) face detection, and (c) facial physiognomy analysis
Conclusions
This paper presented an automatic face analysis system based on face recognition and facial physiognomy. The proposed system can detect the user's face, extract facial features, classify the shapes of the facial features, analyze facial physiognomy, and automatically draw an avatar that resembles the user's face. The proposed algorithm could contribute to the development of a scientific and quantitative facial physiognomy system applicable to on-line facial application services as well as to the biometrics area. It also offers an oriental facial physiognomy database and an automatic avatar drawing scheme based on face recognition.
Acknowledgement
This research was supported by MIC, Korea, under the ITRC support program
supervised by the IITA.
References
1. Baskan, S., Bulut, M.M., Atalay, V.: Projection Based Method for Segmentation of Human Face and its Evaluation. Pattern Recognition Letters 23, 1623–1629 (2002)
2. Kim, Y.G., Lee, O., Lee, C., Oh, H.: Facial Caricaturing System with Correction of Facial Decline. Proceedings of the Korea Information Processing Society 8(1), 887–890 (2001)
3. Brunelli, R., Poggio, T.: Face Recognition: Features versus Templates. IEEE Transactions on Pattern Analysis and Machine Intelligence 15, 1042–1052 (1993)
4. Samal, A., Iyengar, P.A.: Automatic Recognition and Analysis of Human Faces and Facial Expressions: A Survey. Pattern Recognition 25, 65–77 (1992)
5. Jang, K.S.: Facial Feature Detection Using Heuristic Cost Function. Journal of the Korea Information Processing Society 8(2), 183–188 (2001)
6. Lee, E.J.: Favorite Color Correction for Reference Color. IEEE Transactions on Consumer Electronics 44, 10–15 (1998)
7. Liu, Z., Wang, Y.: Face Detection and Tracking in Video Using Dynamic Programming. In: IEEE International Conference on Image Processing, vol. 2, pp. 53–56 (2000)
8. Cardinaux, F., Sanderson, C., Bengio, S.: User Authentication via Adapted Statistical Models of Face Images. IEEE Transactions on Signal Processing 54(1), 361–373 (2005)
9. Yang, M.H., Kriegman, D., Ahuja, N.: Detecting Faces in Images: A Survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 24(1), 34–58 (2002)