[Figure: MP network with i/p units x1, x2 and o/p unit y]

x1   x2   y
1    1    0
1    0    1
0    1    1
0    0    0

The threshold of unit y is 1.
MP Model for XOR
With one layer it is not possible to choose a threshold at which the neuron fires so that it realizes this function, hence another layer is introduced:
x1 XOR x2 = (x1 ANDNOT x2) OR (x2 ANDNOT x1)
x1 XOR x2 = H1 OR H2 ; where H1 = x1 ANDNOT x2
H2 = x2 ANDNOT x1
The activations of H1 and H2 are:
H1 = f(Hin-1) = 1 ; if Hin-1 >= 1
              = 0 ; if Hin-1 < 1
H2 = f(Hin-2) = 1 ; if Hin-2 >= 1
              = 0 ; if Hin-2 < 1
With weights (+1, -1) into H1 and (-1, +1) into H2:

x1   x2   Hin-1   H1   Hin-2   H2
1    1     0      0     0      0
1    0     1      1    -1      0
0    1    -1      0     1      1
0    0     0      0     0      0
The activation for the o/p unit y is:
y = f(yin) = 1 ; if yin >= 1
           = 0 ; if yin < 1
Presenting the i/p patterns, calculating H1 & H2 and then the net i/p and activation of y gives the o/p of XOR; a sketch implementing this network follows the table below.
yin = H1w1 + H2w2
    = H1 + H2        (w1 = w2 = 1)

H1   H2   yin   y = H1 (or) H2
0    0    0     0
1    0    1     1
0    1    1     1
0    0    0     0
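A minimal Python sketch of this two-layer MP network, assuming the weights implied by the tables above: (+1, -1) into H1, (-1, +1) into H2, (+1, +1) into y, with every threshold equal to 1. The helper name mp_neuron is illustrative.

def mp_neuron(inputs, weights, threshold=1):
    # MP unit: fire (o/p 1) when the net i/p reaches the threshold.
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net >= threshold else 0

def xor(x1, x2):
    h1 = mp_neuron((x1, x2), (1, -1))    # H1 = x1 ANDNOT x2
    h2 = mp_neuron((x1, x2), (-1, 1))    # H2 = x2 ANDNOT x1
    return mp_neuron((h1, h2), (1, 1))   # y = H1 OR H2

for x1, x2 in [(1, 1), (1, 0), (0, 1), (0, 0)]:
    print(x1, x2, xor(x1, x2))           # o/p: 0, 1, 1, 0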
The i/p and o/p patterns are associated by the weight matrix W, known as the correlation matrix, computed as
W = Σ (i = 0 to n) xi yiT
where yiT = transpose of the associated o/p vector yi.
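A minimal numpy sketch of the correlation matrix, accumulating the outer products xi yiT; the two bipolar pattern pairs are made-up data for illustration.

import numpy as np

X = np.array([[ 1, -1,  1],
              [-1,  1,  1]])            # i/p vectors xi, one per row
Y = np.array([[ 1, -1],
              [-1,  1]])                # associated o/p vectors yi

W = sum(np.outer(x, y) for x, y in zip(X, Y))   # W = Σ xi yiT
print(W)                                # equivalently: X.T @ Y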
2. Perceptron learning rule
. It is also known as the Discrete Perceptron learning law.
. For the perceptron learning rule, the learning signal is the difference between the desired and the actual neuron response.
. It is supervised learning.
. It is applicable only for bipolar o/p functions f(.).
. The perceptron learning rule states that for a finite no. n of i/p training vectors x(n), each with an associated target value t(n) which is +1 (or) -1, and an activation function
y = f(yin) = 1  ; if yin > θ
           = 0  ; if -θ <= yin <= θ
           = -1 ; if yin < -θ
then the weight update is given by
wnew = wold + t.x ; if y ≠ t
wnew = wold       ; if y = t
Perceptron Training Algorithm :
i. Start with a random value of w.
ii. Test whether w.xi > 0; if the test succeeds for i = 1, 2, ..., n, then return w.
iii. Otherwise modify w as wnew = wprev + xfail, where xfail is a vector that failed the test, and go to step ii.
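A minimal sketch of the discrete perceptron rule above, trained on the bipolar AND function; the AND data, θ = 0.2 and a learning rate of 1 are assumptions for illustration.

import numpy as np

def step(y_in, theta=0.2):
    # Bipolar activation with threshold θ.
    if y_in > theta:
        return 1
    if y_in < -theta:
        return -1
    return 0

X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])   # bipolar i/p
T = np.array([1, -1, -1, -1])                        # targets t (AND)
w, b = np.zeros(2), 0.0

for epoch in range(10):
    changed = False
    for x, t in zip(X, T):
        y = step(w @ x + b)
        if y != t:                    # update only when y ≠ t
            w, b = w + t * x, b + t   # wnew = wold + t.x
            changed = True
    if not changed:                   # all patterns correct: converged
        break
print(w, b)                           # e.g. [1. 1.] -1.0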
Limitations of Perceptron :
1. Non-linear separability is not possible, i.e. a single perceptron can only model linearly separable functions.
2. A single perceptron does not have enough computing power.
SOL : 1. Use a larger network.
2. Tolerate error.
Perceptron Learning Algorithm :
x(n) = i/p vector
w(n) = weight vector
b(n) = bias
y(n) = actual response
d(n) = desired response
η = learning rate parameter
i. Initialization :- Set w(0) = 0.
ii. Activation :- Activate the perceptron by applying i/p vector x(n).
iii. Computation :- Compute the actual response of the perceptron
y(n) = sgn[wT(n).x(n)]
iv. Adaptation :- Adapt the weight vector, i.e. if y(n) & d(n) are different, then
w(n+1) = w(n) + η[d(n)-y(n)].x(n)
where d(n) = +1 ; x(n) ∈ c1     (c1 = class 1)
           = -1 ; x(n) ∈ c2     (c2 = class 2)
v. Continuation :- Increment step n by 1 and go to the activation step.
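A minimal sketch of this algorithm with the sgn o/p and the η[d(n)-y(n)]x(n) update; the two 2-D sample points for classes c1/c2 and η = 0.1 are assumptions for illustration.

import numpy as np

def sgn(v):
    return 1 if v >= 0 else -1

samples = [(np.array([ 2.0,  1.0]),  1),   # x(n) ∈ c1 -> d(n) = +1
           (np.array([-1.0, -2.0]), -1)]   # x(n) ∈ c2 -> d(n) = -1
w, eta = np.zeros(2), 0.1                  # i. initialization: w(0) = 0

for n in range(20):                        # v. continuation over steps n
    x, d = samples[n % len(samples)]       # ii. activation: apply x(n)
    y = sgn(w @ x)                         # iii. y(n) = sgn[wT(n).x(n)]
    w = w + eta * (d - y) * x              # iv. adapt the weight vector
print(w)                                   # separates the two classes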
3. Delta Learning law :-
. It is valid only for a continuous, differentiable activation (o/p) function.
. It is supervised learning.
. It is also known as the continuous perceptron learning rule.
It states that:
The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the i/p signal of the synapse.
The delta rule for a single o/p unit changes the weight of a connection so as to minimize the difference between the net i/p to the o/p unit, yin, and the target value t,
i.e. Δwi = α(t - yin)xi
where x = the vector of activations of the i/p units,
yin = net i/p to the o/p unit, i.e. Σ xiwi,
t = target,
α = learning rate.
The delta rule for several o/p units is
Δwjk = α(tk - yin-k)xj
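A minimal sketch of the delta rule for a single linear o/p unit; the training data (targets t = x1 + x2) and α = 0.1 are assumptions for illustration.

import numpy as np

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])   # i/p vectors
T = np.array([0., 1., 1., 2.])         # targets t (here t = x1 + x2)
w, alpha = np.zeros(2), 0.1

for epoch in range(100):
    for x, t in zip(X, T):
        y_in = w @ x                       # net i/p to the o/p unit
        w = w + alpha * (t - y_in) * x     # Δwi = α(t - yin)xi
print(w)                                   # approaches [1. 1.]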
4. Competitive Learning Rule :-
5. Outstar Learning Rule :-
. It is also known as Grossberg learning.
. It is supervised learning.
. It is used to provide learning of repetitive and characteristic properties of i/p-o/p relationships.
. The weight matrix