Intro-1
[Figure: two approaches to intelligent systems]
  Expert Systems: rule-oriented
  Machine Learning: not rule-oriented
Intro-2
[Figure: learning taxonomy for feedforward networks]
  Supervised Learning: training data available; targets provide feedback to the learning rule
  Unsupervised Learning: no feedback; the learning rule uses the inputs alone
August 9 - 12, 2004
Intro-3
Defining properties:
- Consists of simple building blocks (neurons)
- Connectivity determines functionality
- Must be able to learn
- Must be able to generalize
Intro-4
[Figure: two biological neurons (Neuron 1, Neuron 2); Neuron 1's axon meets Neuron 2's dendrite at a synapse]
Intro-5
[Figure: the biological pair redrawn as artificial neurons; Neuron 2 combines its inputs through weights w2,1, ..., w2,k, ..., w2,n and a bias b, with transfer functions f1 (Neuron 1) and f2 (Neuron 2)]
Intro-6
Intro-7
Intro-8
Medical: cancer cell detection and analysis, EEG and ECG analysis, disease pathway analysis
Intro-9
Intro-10
Math Notation/Conventions
Intro-11
Single-Input Neuron
[Figure: single-input neuron; scalar input p, weight w, bias b (driven by a constant input of 1), net input n, transfer function f, scalar output a]

a = f(n) = f(wp + b)

Net input: n = wp + b
Transfer function: f (design choice)
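As a concrete sketch of the equation above (in Python rather than the MATLAB these slides use), the single-input neuron is one multiply, one add, and one function call. The values of w, b, and p below are made up for illustration.

```python
# Single-input neuron: a = f(n) = f(w*p + b).
# Illustrative Python sketch of the slide's notation; w, b, p are made-up values.

def purelin(n):
    """Linear transfer function (MATLAB purelin): a = n."""
    return n

def single_input_neuron(p, w, b, f=purelin):
    n = w * p + b   # net input
    return f(n)     # output a = f(n)

# Example: w = 3, p = 2, b = -1  ->  n = 3*2 - 1 = 5, and a = 5 with a linear f
a = single_input_neuron(p=2.0, w=3.0, b=-1.0)
```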
Intro-12
a = f(n) = { 0, n < 0
           { 1, n >= 0

MATLAB: a = hardlim(n)
(often used for binary classification problems)
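Since hardlim is often used for binary classification, a small sketch of the idea: the sign of the net input wp + b decides the class, so the decision boundary sits where wp + b = 0. The weight and bias values here are illustrative, not from the slides.

```python
# hardlim and a toy single-input binary classifier a = hardlim(w*p + b).
# Python sketch of the MATLAB hardlim; w and b are made-up values.

def hardlim(n):
    """a = 0 if n < 0, else 1 (MATLAB hardlim)."""
    return 0 if n < 0 else 1

def classify(p, w=2.0, b=-1.0):
    # Decision boundary at w*p + b = 0, i.e. p = 0.5 for these values
    return hardlim(w * p + b)
```

Inputs with p < 0.5 fall in class 0; inputs with p >= 0.5 fall in class 1.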
Intro-13
[Figure: hardlims plot; a jumps from -1 to 1 at n = 0]

a = f(n) = { -1, n < 0
           {  1, n >= 0

MATLAB: a = hardlims(n)
(often used for binary classification problems)
Intro-14
a = f(n) = n
MATLAB: a = purelin(n)
(often used in network training for classification problems)
Intro-15
a = f(n) = 1 / (1 + e^-n)

MATLAB: a = logsig(n)
(often used for training multi-layer networks with backpropagation)
Intro-16
Name                         Equation                             Output Range             MATLAB
Hard Limiter                 a = 0, n < 0;  a = 1, n >= 0         Discrete: 0 or 1         hardlim
Symmetric Hard Limiter       a = -1, n < 0;  a = 1, n >= 0        Discrete: -1, 1          hardlims
Linear                       a = n                                Continuous: range of n   purelin
Log-Sigmoid                  a = 1 / (1 + e^-n)                   Continuous: (0, 1)       logsig
Hyperbolic Tangent Sigmoid   a = (e^n - e^-n) / (e^n + e^-n)      Continuous: (-1, 1)      tansig
Competitive                  a = 1 for neuron w/ max n, 0 else    Discrete: 0, 1           compet
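The six functions in the table can be written directly from their equations. The following Python/NumPy sketch borrows the MATLAB names (hardlim, hardlims, purelin, logsig, tansig, compet) for readability; it is an illustration, not the toolbox implementation.

```python
# The six transfer functions from the summary table, as NumPy sketches
# named after their MATLAB counterparts (illustrative, not the toolbox code).
import numpy as np

def hardlim(n):   return np.where(n < 0, 0.0, 1.0)    # discrete: 0 or 1
def hardlims(n):  return np.where(n < 0, -1.0, 1.0)   # discrete: -1 or 1
def purelin(n):   return np.asarray(n, dtype=float)   # continuous: range of n
def logsig(n):    return 1.0 / (1.0 + np.exp(-n))     # continuous: (0, 1)
def tansig(n):    return np.tanh(n)                    # (e^n - e^-n)/(e^n + e^-n)

def compet(n):
    """1 for the neuron with the largest net input, 0 for all others."""
    n = np.asarray(n, dtype=float)
    a = np.zeros_like(n)
    a[np.argmax(n)] = 1.0
    return a
```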
Intro-17
Multiple-Input Neurons
[Figure: multiple-input neuron; inputs p1 ... pR, one weight per input, bias b (constant input of 1), net input n, scalar output a]

a = f(n) = f(Wp + b)

where W is the weight matrix with one row: W = [w1,1 w1,2 ... w1,R]
and p is the column vector: p = [p1; p2; ...; pR]
Intro-18
[Figure: abbreviated notation; input p (R x 1), weights W (1 x R), bias b (1 x 1, constant input of 1), net input n (1 x 1), transfer function f, output a (1 x 1); R = number of inputs]
a = f(n) = f(Wp + b)
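In this abbreviated form, a = f(Wp + b) is just a row-by-column product plus a bias. A Python/NumPy sketch with made-up weights (W has one row, as above, with R = 3):

```python
# Multiple-input neuron: a = f(Wp + b); W is 1 x R, p is R x 1, b is a scalar.
# Illustrative Python sketch; all numeric values are made up.
import numpy as np

def hardlims(n):
    return np.where(n < 0, -1.0, 1.0)

W = np.array([[1.0, -2.0, 0.5]])      # 1 x R row of weights (R = 3)
p = np.array([[2.0], [1.0], [4.0]])   # R x 1 input column vector
b = 0.5                               # scalar bias

n = W @ p + b                         # net input, a 1 x 1 matrix
a = hardlims(n)                       # output (here 1, since n = 2.5 > 0)
```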
Intro-19
Intro-20
[Figure: worked classification example; inputs p2, p3 shown with weight values 0.42, -1.3, -1.3; target t2 = 1 (for F)]

f = hardlims
a = hardlims(Wp)
Intro-21
Intro-22
[Figure: a layer of two neurons; inputs p1, p2, p3 feed both neurons through weights w1,1 ... w1,3 and w2,1 ... w2,3, with biases b1, b2 (constant inputs of 1), net inputs n1, n2, and outputs a1, a2]

p = [p1; p2; p3]

a = [a1; a2] = [ f(W row(1) p + b1) ; f(W row(2) p + b2) ] = f(Wp + b)
Intro-23
Recall that for a single neuron with multiple inputs, we used a weight matrix W with one row: W = [w1,1 w1,2 ... w1,R]

Index convention: wi,j = w(to, from) — the first index i is the destination neuron, the second index j is the source input.
Intro-24
[Figure: abbreviated notation for a layer of S neurons; input p (R x 1), weight matrix W (S x R), bias b (S x 1), net input n (S x 1), transfer function f, output a (S x 1)]
a = f(Wp + b)
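A layer of S neurons is the same computation with an S x R weight matrix: each row of W feeds one neuron. A Python/NumPy sketch with made-up numbers (S = 2, R = 3):

```python
# A layer of S neurons: a = f(Wp + b); W is S x R, b and a are S x 1.
# Illustrative sketch; the weights, biases, and inputs are made up.
import numpy as np

def logsig(n):
    return 1.0 / (1.0 + np.exp(-n))

W = np.array([[1.0, 0.0, -1.0],
              [0.5, 2.0,  0.0]])      # S x R: row i holds neuron i's weights
p = np.array([[1.0], [2.0], [3.0]])   # R x 1 input
b = np.array([[0.0], [-0.5]])         # S x 1 biases

n = W @ p + b                         # S x 1 net inputs: n1 = -2, n2 = 4
a = logsig(n)                         # S x 1 output, one entry per neuron
```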
Intro-25
Allow each layer to have its own weight matrix (W), bias vector (b), net input vector (n), output vector (a), and number of neurons (S).

The last (right-most) layer of the network is called the output layer; the inputs are not counted as a layer at all (per Hagan); layers between the input and output are called hidden layers.
Intro-26
[Figure: two-layer network in abbreviated notation; input p (R x 1); Layer 1: W1 (S1 x R), b1 (S1 x 1), n1 (S1 x 1), f1, output a1 (S1 x 1); Layer 2 (output): W2 (S2 x S1), b2 (S2 x 1), n2 (S2 x 1), f2, output a2 (S2 x 1)]

a1 = f1(W1p + b1)
a2 = f2(W2a1 + b2)
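Chaining the two layer equations gives the forward pass: compute a1 from p, then a2 from a1. A Python/NumPy sketch with made-up shapes and random parameters (R = 2, S1 = 3, S2 = 1); the slides leave f1 and f2 general, so the logsig/purelin pairing below is just one design choice:

```python
# Two-layer forward pass: a1 = f1(W1 p + b1); a2 = f2(W2 a1 + b2).
# Illustrative sketch: R = 2 inputs, S1 = 3 hidden neurons, S2 = 1 output.
# Parameters are random; f1 = logsig, f2 = purelin is one common pairing.
import numpy as np

rng = np.random.default_rng(0)

def logsig(n):  return 1.0 / (1.0 + np.exp(-n))
def purelin(n): return n

W1 = rng.standard_normal((3, 2))   # S1 x R
b1 = rng.standard_normal((3, 1))   # S1 x 1
W2 = rng.standard_normal((1, 3))   # S2 x S1
b2 = rng.standard_normal((1, 1))   # S2 x 1

def forward(p):
    a1 = logsig(W1 @ p + b1)       # hidden-layer output, S1 x 1
    a2 = purelin(W2 @ a1 + b2)     # network output, S2 x 1
    return a2

a2 = forward(np.array([[1.0], [-1.0]]))
```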
Intro-27
[Figure: example layer of two competitive neurons; inputs p1, p2 feed net inputs n1, n2 through the weights and biases shown (values include 3, 1, 5, -2)]

a = compet(Wp + b)

where compet(n) = 1 for the neuron with max n, 0 else
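The competitive layer can be sketched end-to-end: compute Wp + b, then let only the neuron with the largest net input output 1. The weights, bias, and input below are made up, not the values from the figure.

```python
# Competitive layer: a = compet(Wp + b); the winner outputs 1, the rest 0.
# Illustrative sketch; W, b, and p are made-up values (S = 2, R = 2).
import numpy as np

def compet(n):
    """1 for the neuron with the largest net input, 0 for the rest."""
    a = np.zeros_like(n)
    a[np.argmax(n)] = 1.0
    return a

W = np.array([[1.0, 2.0],
              [3.0, -1.0]])     # S x R weight matrix
b = np.array([[0.0], [1.0]])    # S x 1 bias
p = np.array([[1.0], [0.0]])    # R x 1 input

n = W @ p + b                   # net inputs: n1 = 1, n2 = 4
a = compet(n)                   # neuron 2 wins: a = [[0], [1]]
```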
Intro-28
Project Description
Intro-29
Is that a tank or a tree?
Project Overview
Intro-32
[Figure: project block diagram]
- RC tank / platform / clutter in the scene
- Video camera mounted on tilt/pan servos
- Camera-to-computer interface
- Image capture software (Image Acquisition Toolbox, MATLAB)
- Data processing to obtain NN inputs
- Neural network: outputs a movement direction for the camera
- Computer interface to servo controller -> servo controller -> tilt/pan servos
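The block diagram is a closed loop: capture a frame, extract NN inputs, run the network, and command the pan/tilt servos. A heavily hedged Python sketch of one pass through that loop; every function name here (capture_frame, extract_features, and so on) is a hypothetical stand-in, not part of the course's MATLAB software:

```python
# Hypothetical sketch of one pass through the project's control loop.
# All functions are stand-ins for the real camera / MATLAB / servo components.
import numpy as np

def capture_frame():
    """Stand-in for the camera + Image Acquisition Toolbox capture."""
    return np.zeros((8, 8))                  # dummy grayscale frame

def extract_features(frame):
    """Stand-in for the data-processing step that builds NN inputs."""
    return frame.reshape(-1, 1)              # flatten to an R x 1 column vector

def neural_network(p, W, b):
    """One hardlims layer standing in for the trained network."""
    return np.where(W @ p + b < 0, -1.0, 1.0)

def direction_from_output(a):
    """Map the 2 x 1 network output to a (pan, tilt) step for the servos."""
    return int(a[0, 0]), int(a[1, 0])

W = np.zeros((2, 64))                        # made-up network parameters
b = np.array([[1.0], [-1.0]])

p = extract_features(capture_frame())
pan, tilt = direction_from_output(neural_network(p, W, b))
```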