S. VEMANA GAUTAM
III YR, ECE
JNTU, ANANTAPUR

B. NARESH KUMAR
III YR, ECE
JNTU, ANANTAPUR
ABSTRACT

BASIC CONCEPTS OF FACE RECOGNITION:
Introduction:
Identifying a person by his face is one of the most fundamental human functions, practised since time immemorial. Imparting this basic human capability to a machine has been a subject of interest over the last few years. Such a machine would find considerable utility in many commercial transactions, in personnel management, and in security and law enforcement applications, especially in criminal identification and authentication in secure systems. Not enough research has been carried out on identification of the human face. Recently, however, a number of automated recognition approaches have emerged; these are mainly two-fold.
FACIAL MARKS:
Looking at the side profile of the human face, certain points can be readily selected on the face profile which, when correctly identified, may help in extracting characteristic features for that particular face. Out of these, the five facial marks are independent of each other, while point no. 3 (the forehead point) is a reflection of point no. 2 (the chin point) through point no. 1 (the nose point). Marking this point helps identify the start of the profile. It is seen that these points do not change with age. Therefore, five points have been selected for the extraction of various feature measurements for identification purposes. These points are named as under:
Point 1: Nose point
Point 2: Chin point
Point 3: Forehead point
Point 4: Bridge point
Point 5: Soft-tissue point
It may be seen that point 5 is a soft-tissue point and is rather difficult to extract accurately. Its position depends on the facial expression of the person at the time the photograph is taken (e.g. smiling, laughing, frowning).
NETWORK FOR FACE RECOGNITION:
A 12-dimensional feature vector is extracted from each facial pattern, and the neural network is trained with a set of four (4) facial photographs. Thus the network is configured with 12 input nodes and 4 output nodes.
Two network topologies are used: the BP net, having only input-to-hidden-layer and hidden-to-output-layer connections, and the IO net, with additional direct input-to-output connections.
Instead of training the network with the 12-dimensional feature vector directly, we have used a differential feature vector. This was done because it was observed that the network was more stable when trained with this differential data rather than with the absolute values. The order of variation in facial features is very small compared to the absolute values, and hence the network cannot differentiate between feature vectors if the absolute values are used. This clearly shows that the network is dependent on the nature of the input data, and thus pre-processing is an essential step for neural classification.
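To make the two topologies concrete, the following is a minimal NumPy sketch, assuming the 12-7-4 layer sizes used in this paper and sigmoid activations; the weight matrices W and V follow the notation used later, while the extra matrix U for the direct input-to-output links is an illustrative assumption:

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    rng = np.random.default_rng(0)
    W = rng.uniform(-0.5, 0.5, (12, 7))   # input -> hidden weights
    V = rng.uniform(-0.5, 0.5, (7, 4))    # hidden -> output weights
    U = rng.uniform(-0.5, 0.5, (12, 4))   # direct input -> output weights (IO net only)

    def bp_net(x):
        # BP net: input-to-hidden and hidden-to-output connections only.
        return sigmoid(sigmoid(x @ W) @ V)

    def io_net(x):
        # IO net: adds the direct input-to-output connection.
        return sigmoid(sigmoid(x @ W) @ V + x @ U)

    x = rng.uniform(0, 1, 12)             # a normalized 12-feature vector
    print(bp_net(x), io_net(x))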
ALGORITHM:
1. From the RAW image, calculate six facial distances and six facial angles, and normalize these twelve inputs.
2. From these twelve inputs, the weights on the links between the input layer and the hidden layer are calculated; these are stored in a 12*7 matrix called the random matrix W.
3. From these twelve inputs and the weights, the hidden layer parameters are calculated.
4. Based on the output, determine which person the present output belongs to; for example, the output [1 0 0 0] belongs to the first face.
The modules are as follows:
Module 1:
In this module the BMP image is converted into a RAW image. The BMP image needs to be converted into a RAW image because the inputs are calculated from this image; these are then given to the input layer of the ANN. The RAW image is produced from the BMP according to intensity values: all dim-coloured pixels in the colour image are changed to black, and all bright-coloured pixels are converted into white pixels.
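A minimal sketch of this conversion, assuming the Pillow library for reading the BMP and an illustrative intensity cutoff of 128 (the paper does not specify the threshold value):

    import numpy as np
    from PIL import Image

    def bmp_to_raw(path, threshold=128):
        # Read the BMP, reduce it to grayscale intensity values, and
        # binarize: dim pixels become black (0), bright pixels white (255).
        gray = np.asarray(Image.open(path).convert("L"))
        return np.where(gray < threshold, 0, 255).astype(np.uint8)

    # bmp_to_raw("profile.bmp").tofile("profile.raw")  # headerless RAW bytes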
Module 2:
In this module the distances and angles are calculated; these constitute the twelve inputs to the input layer of the network, and the 12 values are normalized. The inputs are calculated from the given BMP as described below. To start training, we first have to calculate the 12 inputs from the given BMP image. Of these 12 inputs, 6 are facial distances and the rest are facial angles. All of these are calculated from the side profile of the required human face, as shown below.
[Figure: side profile of the human face, showing the forehead point, bridge point and nose point, with the facial distances D1-D6 and facial angles A1-A3 marked between them.]
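The paper does not give explicit formulas for the individual distances and angles, but the underlying geometry is simple; the sketch below illustrates it with hypothetical landmark coordinates:

    import math

    def distance(p, q):
        # Euclidean distance between two profile landmarks.
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def angle(vertex, p, q):
        # Interior angle (degrees) at `vertex` between rays vertex->p and vertex->q.
        a = math.atan2(p[1] - vertex[1], p[0] - vertex[0])
        b = math.atan2(q[1] - vertex[1], q[0] - vertex[0])
        d = abs(math.degrees(a - b)) % 360.0
        return min(d, 360.0 - d)

    # Hypothetical (x, y) pixel coordinates located on the RAW profile:
    nose, bridge, forehead = (120, 200), (100, 150), (95, 100)
    d1 = distance(nose, bridge)           # one of the six facial distances
    a1 = angle(bridge, nose, forehead)    # one of the facial angles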
Normalization: Normalization is the process of converting a value so that it falls within the range 0 to 1.
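One common way to do this, assuming min-max scaling across the feature values (the paper does not name the exact scheme), is sketched below:

    def normalize(values):
        # Min-max scale a list of feature values into the range [0, 1].
        lo, hi = min(values), max(values)
        return [(v - lo) / (hi - lo) for v in values]

    print(normalize([3.0, 7.5, 12.0]))    # -> [0.0, 0.5, 1.0]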
Module 3 & 4:
In the third module the weights on all links between the input layer and the hidden layer, together with the hidden layer parameters, are calculated. In the fourth module the weights between the hidden and output layers and the four outputs are calculated.
Since a target value is available for each neuron in the output layer, adjusting the associated weights is easily accomplished using a modification of the delta rule. Interior layers are referred to as hidden layers, as their outputs have no target values for comparison; hence, training is more complicated.
Consider the training process for a single weight from neuron p in hidden layer j to neuron q in output layer k. The output of a neuron in layer k is subtracted from its target value to produce an ERROR signal. This is multiplied by the derivative of the squashing function, OUT(1 - OUT), calculated for that neuron, thereby producing the δ value:
δ = OUT(1 - OUT)(TARGET - OUT) ---------------- (1)
This δ is multiplied by OUT from neuron p, the source neuron for the weight in question. The product is in turn multiplied by a training-rate coefficient η (typically 0.01), and the result is added to the weight. An identical process is performed for each weight from a neuron in the hidden layer to a neuron in the output layer.
The following equations illustrate these calculations:
ΔW_pq,k = η δ_q,k OUT_p,j ---------------- (2)
W_pq,k(n+1) = W_pq,k(n) + ΔW_pq,k ---------------- (3)
where
W_pq,k(n) = the value of the weight from neuron p in the hidden layer to neuron q in the output layer at step n (before adjustment); note that the subscript k indicates that the weight is associated with its destination layer,
W_pq,k(n+1) = the value of the weight at step n+1 (after adjustment),
δ_q,k = the value of δ for neuron q in output layer k,
OUT_p,j = the value of OUT for neuron p in hidden layer j.
Note that the subscripts p and q refer to specific neurons, whereas the subscripts j and k refer to layers.
Adjusting the weights of the hidden layer:
Hidden layers have no target vector, so the training process described above cannot be used. This lack of a training target stymied efforts to train multilayer networks until back propagation provided a workable algorithm. Back propagation trains the hidden layers by propagating the output error back through the network layer by layer, adjusting the weights at each layer.
Equations (2) and (3) are used for all layers, both output and hidden; however, for hidden layers δ must be generated without the benefit of a target vector. First, δ is calculated for each neuron in the output layer, as in equation (1). It is used to adjust the weights feeding into the output layer, and then it is propagated back through those same weights to generate a δ value for each neuron in the first hidden layer. These values of δ are used, in turn, to adjust the weights of this hidden layer and, in a similar way, are propagated back to all preceding layers.
Consider a single neuron in the hidden layer just before the output layer. In the forward pass, this neuron propagates its output value to the neurons in the output layer through the interconnecting weights. During training these weights operate in reverse, passing the δ values from the output layer back to the hidden layer. Each of these weights is multiplied by the δ value of the neuron to which it connects in the output layer. Summing all such products and multiplying by the derivative of the squashing function produces the δ value for the hidden-layer neuron:

δ_p,j = OUT_p,j (1 - OUT_p,j) Σ_q (δ_q,k W_pq,k)
With δ in hand, the weights feeding the first hidden layer can be adjusted using equations (2) and (3), modified to indicate the correct layers.
For each neuron in a given hidden layer, δ must be calculated, and all weights associated with that layer must be adjusted. This is repeated, moving back toward the input, layer by layer, until all the weights are adjusted.
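The delta-rule updates of equations (1)-(3), together with the hidden-layer δ above, can be written compactly in NumPy for the 12-7-4 network; the learning rate of 0.01 is the "typical" value quoted earlier, and the rest follows the text:

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def backprop_step(x, target, W, V, eta=0.01):
        # One update of all weights per equations (1)-(3).
        out_j = sigmoid(x @ W)                            # hidden layer OUT values
        out_k = sigmoid(out_j @ V)                        # output layer OUT values
        delta_k = out_k * (1 - out_k) * (target - out_k)  # equation (1)
        delta_j = out_j * (1 - out_j) * (V @ delta_k)     # hidden-layer delta
        V += eta * np.outer(out_j, delta_k)               # equations (2) and (3)
        W += eta * np.outer(x, delta_j)                   # same rule, one layer back
        return W, V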
Module 5:
In this module we have implemented the forward propagation and backward propagation algorithms. Forward propagation is done to calculate the error; backward propagation is done to reduce it.
FORWARD PROPAGATION:
1. The normalized inputs are used along with the random weight matrix W (12*7) to calculate the hidden layer parameters, as shown by the formulas below. The hidden layer parameters are then normalized.
HM = I * W
HMN = 1/(1 + exp(-HM))
2. The hidden layer parameters, along with the random weight matrix V (7*4), are used to calculate the 4 output layer parameters. The output layer parameters are then normalized, as shown below.
OQ = HMN * V
OQN = 1/(1 + exp(-OQ))
3. If we train the network on four faces, then we should get the outputs:
1 0 0 0 for the 1st face
0 1 0 0 for the 2nd face
0 0 1 0 for the 3rd face
0 0 0 1 for the 4th face
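The two forward-propagation steps, written out in NumPy; the names I, W, HM, HMN, V, OQ and OQN are the paper's own, and only the random initialization range is an assumption:

    import numpy as np

    rng = np.random.default_rng(42)
    W = rng.uniform(-0.5, 0.5, (12, 7))    # random matrix W (12*7)
    V = rng.uniform(-0.5, 0.5, (7, 4))     # random matrix V (7*4)

    def forward(I, W, V):
        HM = I @ W                         # hidden layer parameters
        HMN = 1.0 / (1.0 + np.exp(-HM))    # normalized hidden parameters
        OQ = HMN @ V                       # output layer parameters
        OQN = 1.0 / (1.0 + np.exp(-OQ))    # normalized outputs, one per face
        return HMN, OQN

    I = rng.uniform(0, 1, 12)              # twelve normalized inputs
    print(forward(I, W, V)[1].round(2))    # after training, ideally close to 1 0 0 0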
BACKWARD PROPAGATION:
The following assumes a sigmoid logistic non-linearity is used, where the function is f(a) = 1/(1 + exp(-a)).
Step 1: Initialize weights and offsets
Set all weights and node thresholds to small random values.
Step 2: Present input and desired output
Present a continuous-valued input vector x0, x1, ..., x(N-1) and specify the desired outputs d0, d1, ..., d(M-1). If the net is used as a classifier, all desired outputs are typically set to zero except the one corresponding to the class the input is from; that desired output is one. The input should be new on each trial, or samples from a training set could be presented cyclically until the weights stabilize.
Step 3: Calculate actual outputs
Use the sigmoid non-linearity from above to calculate the outputs y0, y1, ..., y(M-1).
Step 4: Adapt weights
Start at the output nodes and work back to the first hidden layer. Adjust the weights by
w_ij(t+1) = w_ij(t) + η δ_j x_i
In this equation w_ij(t) is the weight from hidden node i (or from an input) to node j at time t, x_i is either the output of node i or an input, η is a gain term, and δ_j is an error term for node j. If node j is an output node, then
δ_j = y_j (1 - y_j)(d_j - y_j)
where d_j is the desired output of node j and y_j is the actual output. If node j is an internal hidden node, then
δ_j = x_j (1 - x_j) Σ_k δ_k w_jk
where k ranges over all nodes in the layer above node j. Internal node thresholds are adapted in a similar manner by treating them as connection weights on links from auxiliary constant-valued inputs. Convergence is sometimes faster if a momentum term α (0 < α < 1) is added and the weight changes are smoothed:
w_ij(t+1) = w_ij(t) + η δ_j x_i + α (w_ij(t) - w_ij(t-1))
Step 5: Repeat by going to step 2
This constitutes the training of the neural network.
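Steps 1 to 5 can be assembled into a runnable training loop as sketched below; the momentum coefficient of 0.9 and the epoch count are illustrative assumptions, while the gain term of 0.01 and the four one-hot targets follow the paper:

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def train(X, D, hidden=7, eta=0.01, alpha=0.9, epochs=5000, seed=0):
        rng = np.random.default_rng(seed)
        W = rng.uniform(-0.5, 0.5, (X.shape[1], hidden))  # Step 1: small random weights
        V = rng.uniform(-0.5, 0.5, (hidden, D.shape[1]))
        dW_prev, dV_prev = np.zeros_like(W), np.zeros_like(V)
        for _ in range(epochs):                           # Step 5: repeat
            for x, d in zip(X, D):                        # Step 2: present samples cyclically
                h = sigmoid(x @ W)                        # Step 3: actual outputs
                y = sigmoid(h @ V)
                delta_o = y * (1 - y) * (d - y)           # Step 4: output-node error term
                delta_h = h * (1 - h) * (V @ delta_o)     # hidden-node error term
                dV = eta * np.outer(h, delta_o) + alpha * dV_prev
                dW = eta * np.outer(x, delta_h) + alpha * dW_prev
                V += dV; W += dW                          # smoothed by the momentum term
                dV_prev, dW_prev = dV, dW
        return W, V

    X = np.random.default_rng(1).uniform(0, 1, (4, 12))   # placeholder feature vectors
    D = np.eye(4)                                         # 1000, 0100, 0010, 0001
    W, V = train(X, D)
    print(sigmoid(sigmoid(X @ W) @ V).round(2))           # should approach the identity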
CONCLUSION:
This paper recognizes faces belonging to the same person very simply: it requires only a side-profile photograph. If a person commits a crime and then grows a beard or a moustache, or makes some other change to his face in order to escape, this system still recognizes him, so it can be used in criminal identification; this is the major application of this work. The heart of the approach is the RAW image: from the RAW image the inputs are calculated. With the front profile it is very difficult to recognize a face; doing so requires costly hardware devices such as scanners and sensors, whereas this method requires only a side photograph in BMP format. Front-profile recognition is more difficult than side-profile recognition because the photograph is two-dimensional and the input layer parameters cannot be calculated from the front profile. In the side profile the face appears as a convex shape with the nose projecting outward, so the distances and angles can easily be calculated with the nose as the origin.
The same concept can also be used to recognize other patterns, such as characters or particular shapes. It can be used in many applications, such as commercial transactions, personnel management, and security and law enforcement, especially criminal identification and authentication in secure systems.