
Pattern Association


Associative Memory Neural Nets

Map user-selected vectors x1, x2, ..., xL into user-selected vectors y1, y2, ..., yL
Autoassociative class (y vectors = corresponding x vectors)
Heteroassociative class (y vectors ≠ corresponding x vectors)


Bidirectional Associative Memory (BAM)

Classification of ANN Paradigms

Bidirectional Associative Memory


Bart Kosko
Associate Professor of Electrical Engineering. Bart Kosko received bachelor's degrees in Economics and Philosophy from the University of Southern California, a master's degree in Applied Mathematics from the University of California, San Diego, and a Ph.D. in Electrical Engineering from the University of California, Irvine.
Research interests: adaptive systems, fuzzy theory, neural networks, bio-computing, nonlinear signal processing, intelligent agents, smart materials, stochastic resonance.

http://sipi.usc.edu/~kosko/


Bidirectional Associative Memory


Retrieves information in both directions:

[Figure: Pattern X ↔ BAM ↔ Pattern Y (the "Astro" example)]

Applications
Associative memorization
Pattern recognition
Noise suppression


Kosko BAM


Kosko BAM: Architecture

Laurene Fausett, Fundamentals of Neural Networks, Prentice Hall


Kosko BAM: Architecture


n units in its X-layer, m units in its Y-layer
Weights are found from the sum of the outer products of the bipolar form of the training vector pairs (W = X^T Y)
Activation function is a step function
The connections between the layers are bidirectional:
Weight matrix for signals sent from the X-layer to the Y-layer is W
Weight matrix for signals sent from the Y-layer to the X-layer is W^T



Kosko BAM: Algorithm


Step 1: Compute the connection weights. Let P be the total number of associated pattern pairs, and for each pattern p
xp = (xp1, xp2, ..., xpi, ..., xpn)
yp = (yp1, yp2, ..., ypj, ..., ypm)
Then the connection weight from node i to node j is

wij = Σp (2xpi - 1)(2ypj - 1),  if the patterns are binary, xp ∈ {0, 1}

or

wij = Σp xpi ypj,  i.e., W = X^T Y,  if the patterns are bipolar, xp ∈ {-1, 1}
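As a concrete sketch of Step 1 for bipolar patterns, the whole weight computation collapses to one matrix product when the P training pairs are stacked as rows (NumPy; the name bam_weights is illustrative, not from the slides):

import numpy as np

def bam_weights(X, Y):
    # X: (P, n) bipolar input patterns, Y: (P, m) bipolar output patterns.
    # Returns the (n, m) weight matrix W = sum over p of outer(x_p, y_p),
    # which equals X^T Y when the pairs are stacked as rows.
    X = np.asarray(X)
    Y = np.asarray(Y)
    return X.T @ Y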

Algorithm (Cont)
Step2: For each testing input A) Present input pattern x to the X-layer B) Present input pattern y to the Y-layer Step 3: Iterate (update outputs) until convergence
Update activations of units in Y-layer

yj(t+1) = F[ Σ(i=1..n) wij xi(t) ],  j = 1 to m

or

Y(t+1) = F[X(t)W]

where F(e) = +1 if e ≥ 0, and -1 (0 for binary patterns) if e < 0

Send signal to X-layer



Algorithm (Cont)
Update activations of units in X-layer

xi(t+1) = F[ Σ(j=1..m) wij yj(t) ],  i = 1 to n

or

X(t+1) = F[Y(t)W^T]

where F(e) = +1 if e ≥ 0, and -1 (0 for binary patterns) if e < 0

Send signal to Y-layer
Test for convergence: if the vectors x and y have reached equilibrium, stop; otherwise, continue.

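Steps 2 and 3 amount to ping-ponging between the two layers until nothing changes. A minimal recall-loop sketch for bipolar patterns (the step and bam_recall names are illustrative):

import numpy as np

def step(e):
    # The slides' threshold: F(e) = +1 if e >= 0, -1 if e < 0 (bipolar case).
    return np.where(e >= 0, 1, -1)

def bam_recall(W, x, max_iters=100):
    # Alternate Y- and X-layer updates until both layers reach equilibrium.
    x = np.asarray(x)
    y = step(x @ W)              # Y(t+1) = F[X(t) W]
    for _ in range(max_iters):
        x_new = step(y @ W.T)    # X(t+1) = F[Y(t) W^T]
        y_new = step(x_new @ W)  # next Y-layer update
        if np.array_equal(x_new, x) and np.array_equal(y_new, y):
            break                # x and y unchanged: converged
        x, y = x_new, y_new
    return x, y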


Kosko BAM: an example


Suppose a heteroassociative net is to be trained to store/recall the following mapping from input row vectors x = (x1, x2, x3, x4, x5, x6) to output row vectors y = (y1, y2, y3, y4):

Pattern    (x1, x2, x3, x4, x5, x6)    (y1, y2, y3, y4)
1st        ( 1  -1   1  -1   1  -1)    ( 1   1  -1  -1)
2nd        ( 1   1   1  -1  -1  -1)    ( 1  -1   1  -1)


Kosko BAM: an example


[Figure: BAM network for the example, with X-layer units 1 through 6 fully connected to Y-layer units 1 through 4]

Kosko BAM: an example


Weight calculation (sum of outer products): W = X^T Y, with the two training pairs stacked as the rows of X and Y

    [  2   0   0  -2 ]
    [  0  -2   2   0 ]
W = [  2   0   0  -2 ]
    [ -2   0   0   2 ]
    [  0   2  -2   0 ]
    [ -2   0   0   2 ]
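As a check, the same matrix falls out of the one-line weight computation sketched earlier:

import numpy as np

X = np.array([[ 1, -1,  1, -1,  1, -1],   # 1st x pattern
              [ 1,  1,  1, -1, -1, -1]])  # 2nd x pattern
Y = np.array([[ 1,  1, -1, -1],           # 1st y pattern
              [ 1, -1,  1, -1]])          # 2nd y pattern

W = X.T @ Y     # 6x4 sum of outer products
print(W)        # matches the matrix above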

Kosko BAM: an example


Testing the net for x = (1 -1 -1 -1 1 -1), which is 1 bit away from the 1st stored x pattern
Update activations of units in Y-layer

Y = F[xW] = F[4 4 -4 -4] = [1 1 -1 -1]

Kosko BAM: an example


Update activations of units in X-layer

X = F[yW^T] = F[4 -4 4 -4 4 -4] = [1 -1 1 -1 1 -1]

Kosko BAM: an example


Repeat until convergence. Update activations of units in Y-layer:

Y = F[xW] = F[8 4 -4 -8] = [1 1 -1 -1] = previous Y

Kosko BAM: an example


Update activations of units in X-layer:

X = F[yW^T] = F[4 -4 4 -4 4 -4] = [1 -1 1 -1 1 -1] = previous X

Results from recalling X = [1 -1 -1 -1 1 -1]: X = [1 -1 1 -1 1 -1] and Y = [1 1 -1 -1]
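This successful recall can be reproduced with the bam_recall sketch from the algorithm section, using the W computed above:

x_test = np.array([1, -1, -1, -1, 1, -1])
x_out, y_out = bam_recall(W, x_test)
print(x_out)   # [ 1 -1  1 -1  1 -1], the stored 1st x pattern
print(y_out)   # [ 1  1 -1 -1], the associated y pattern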

Kosko BAM: an example


Testing the net for x = (-1 1 -1 -1 -1 -1), which is 2 bits away from the 2nd stored x pattern
Update activations of units in Y-layer

Y = F[xW] = F[0 -4 4 0] = [1 -1 1 1]

Kosko BAM: an example


Update activations of units in X-layer

X = F[yW^T] = F[0 4 0 0 -4 0] = [1 1 1 1 -1 1]

Kosko BAM: an example


Repeat until convergence. Update activations of units in Y-layer:

Y = F[xW] = F[0 -4 4 0] = [1 -1 1 1] = previous Y

Kosko BAM: an example


Update activations of units in X-layer

Update activations of units in X-layer:

X = F[yW^T] = F[0 4 0 0 -4 0] = [1 1 1 1 -1 1] = previous X

Stop: crosstalk! The net has converged, but to X = [1 1 1 1 -1 1] instead of the expected X = [1 1 1 -1 -1 -1].
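The same spurious equilibrium falls out of the recall sketch, continuing with the W and bam_recall defined above:

x_noisy = np.array([-1, 1, -1, -1, -1, -1])   # 2 bits from the 2nd x pattern
x_out, y_out = bam_recall(W, x_noisy)
print(x_out)   # [ 1  1  1  1 -1  1], a spurious state (crosstalk)
print(y_out)   # [ 1 -1  1  1]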

Kosko BAM Advantages


Fast training and recall
Straightforward operations
Bidirectional data flow

Limitations
Performance: limited storage capacity, and crosstalk between stored patterns (as the example above shows)

Research in BAM

Multiple Training Algorithm (Wang et al., 1991)
Dummy Augmentation (Wang et al., 1990)
Indirect Generalized Inverse BAM (Li and Nutter, 1991)


Multiple Training Algorithm

Training pairs for bidirectional associative memory



Multiple Training Algorithm

Test results using Kosko's encoding method


Multiple Training Algorithm


[Flowchart: Start → form W0, set i = 1 → attempt to recall pair Pi → if the recall fails, train on Pi again and re-form W → i = i + 1 → repeat until i = N → End]

The sequential multiple training algorithm applies multiple training, in sequence, to those pairs that cannot be recalled correctly.
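A minimal sketch of that loop, assuming the simplest form of multiple training, in which a pair that fails recall has its outer product added to W again (the helper names and loop structure are illustrative, not taken from Wang et al.):

import numpy as np

def pair_recalled(W, x, y):
    # A pair is recalled correctly if one X -> Y -> X pass reproduces it.
    y_hat = np.where(x @ W >= 0, 1, -1)
    x_hat = np.where(y_hat @ W.T >= 0, 1, -1)
    return np.array_equal(x_hat, x) and np.array_equal(y_hat, y)

def multiple_training(X, Y, max_passes=50):
    # Start from the Kosko weights, then sequentially re-train failed pairs.
    W = X.T @ Y                          # W0: sum of outer products
    for _ in range(max_passes):
        all_recalled = True
        for x, y in zip(X, Y):           # pairs P1 .. PN in sequence
            if not pair_recalled(W, x, y):
                W = W + np.outer(x, y)   # weight the failed pair more heavily
                all_recalled = False
        if all_recalled:
            break
    return W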

Multiple Training Algorithm

Test results using the multiple training method



Multiple Training Algorithm

Test results using the multiple training method with noise present

Dummy augmentation method

The training pairs for the dummy augmentation method



Dummy augmentation method

A sequential test for dummy augmentation



Dummy augmentation method

A test for dummy augmentation


