
FACULTY OF SCIENCE AND TECHNOLOGY

DEPARTMENT OF COMPUTER SCIENCE AND INFORMATION SYSTEMS

Assignment Topic:
Discuss the advantages and disadvantages of supervised Hebbian learning.

Group No:

Student Name            Student Number
Oliver Itai Sauramba    R135583A
Chivhayo K              R0644164
Sibanda M               R133830Q
Kunaka J                R0645449
Module Code: MIM803
Module Name: Neural Networks
Lecturer: Mrs Ruvinga
Year: 2014
Level: 2.1
Program: Master of Science in Information Systems Management



Discuss the advantages and disadvantages of supervised Hebbian learning.


Advantages
Supervised Hebbian learning (SHL) allows neural networks to learn from training samples and to generalize beyond them. Because it can learn from a sample, SHL helps to control the cost of training neural networks. Generalization can be very useful in genetic deoxyribonucleic acid (DNA) mapping, where a descendant's DNA can be mapped from that of its ancestors, and vice versa, after some supervised training. The same approach can be used in trend analysis and predictive studies. A further strength of SHL is that it is capable of accommodating distortions in normal patterns without necessarily producing wrong results. However, certain applications, such as biometric controls in pattern recognition, require training on every individual pattern, since each individual occurrence is unique.
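To make the rule concrete, the following is a minimal Python/NumPy sketch of supervised Hebbian learning, in which the teacher's target, rather than the network's own output, drives the weight update. The bipolar example patterns and all variable names are illustrative assumptions for this sketch, not part of any standard specification:

    import numpy as np

    def train_supervised_hebb(patterns, targets, lr=1.0):
        # Supervised Hebb rule: for each (input p, target t) pair,
        # W += lr * outer(t, p), i.e. the supplied target plays the
        # role of the postsynaptic activity in the Hebbian product.
        n_out, n_in = targets.shape[1], patterns.shape[1]
        W = np.zeros((n_out, n_in))
        for p, t in zip(patterns, targets):
            W += lr * np.outer(t, p)
        return W

    def recall(W, p):
        # Hard-limiting (sign) output nonlinearity.
        return np.sign(W @ p)

    # Two bipolar (+1/-1) prototype inputs and their target outputs.
    patterns = np.array([[1., -1., 1., -1.],
                         [1., 1., -1., -1.]])
    targets = np.array([[1., -1.],
                        [-1., 1.]])

    W = train_supervised_hebb(patterns, targets)
    print(recall(W, patterns[0]))  # -> [ 1. -1.], the taught target

Because the two example inputs are orthogonal, each stored pattern is recalled exactly; this is the textbook condition under which the pure Hebb rule reproduces its training targets.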
SHL can be applied to a wide array of problems such as pattern recognition, pattern matching, optimization, learning, function approximation, associative memory and vector quantization. Specifically, SHL can be used to train neural networks to perform the following tasks:

o Face recognition.
o Time series prediction.
o Process identification.
o Process control.
o Optical character recognition.
o Adaptive filtering.
Supervised Hebbian learning has been part of the mainstream of neural network development for a long time, since it was introduced in 1949. As a result it has been thoroughly tested and is now highly reliable.


Supervised Hebbian learning can be used to perform nonlinear statistical modeling, providing an alternative to logistic regression, the most commonly used method for developing predictive models of dichotomous outcomes in medicine. Neural networks offer a number of advantages here, including requiring less formal statistical training, the ability to implicitly detect complex nonlinear relationships between dependent and independent variables, the ability to detect all possible interactions between predictor variables, and the availability of multiple training algorithms.
Training data consist of many pairs of input/output training patterns, so learning benefits from the assistance of a teacher.
Another benefit of using SHL is that the network can be presented with an input that is similar to, but different from, one of the learned inputs, and it will recognize the similarity and return the desired output, as the short example below shows.
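Continuing the hypothetical associator sketched earlier, this self-contained Python/NumPy fragment illustrates the tolerance to distorted inputs (the choice of which element to distort is arbitrary):

    import numpy as np

    # The same two bipolar patterns, stored via the supervised Hebb rule.
    p0, t0 = np.array([1., -1., 1., -1.]), np.array([1., -1.])
    p1, t1 = np.array([1., 1., -1., -1.]), np.array([-1., 1.])
    W = np.outer(t0, p0) + np.outer(t1, p1)

    noisy = p0.copy()
    noisy[3] = -noisy[3]        # distort one element of the first pattern
    print(np.sign(W @ noisy))   # -> [ 1. -1.], still the target for p0

The distorted input is closer to the first stored pattern than to the second, so the sign nonlinearity still recovers the first pattern's taught output.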
It has been shown that collections of simple neurons connected to one another are able to learn from outside data, to recognize patterns, and to organize themselves spontaneously and form memories even in the absence of meaningful input.
Neural networks trained under SHL do not forget. Once trained, they can also be replicated, and this replication can be used to enhance disaster tolerance.
An SHL neural network learns rather than having to be reprogrammed, and it can be implemented in a wide range of applications without difficulty.


Once it has been trained, a neural network becomes available around the clock. Knowledge learned by one network can be propagated to other neural networks, giving scope for cost reduction.
Using SHL, a neural network can be trained to perform nonlinear analysis and related tasks.
Parallelism can be built into SHL-trained neural networks to enhance disaster tolerance. This can be achieved by installing more than one similarly trained unit in a single system; if one of the redundant units fails, the other unit(s) take over, so the system can continue operating without interruption. A rough sketch of this idea follows.
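As a purely illustrative Python sketch of such replication (the failover logic and the use of None to model a failed unit are assumptions for the example, not a standard mechanism):

    import numpy as np

    # A trained weight matrix is just data, so redundancy amounts to
    # copying it into independent units.
    W_primary = np.outer([1., -1.], [1., -1., 1., -1.])
    W_backup = W_primary.copy()      # replicated, similarly trained unit

    def recall_with_failover(units, p):
        # Try each replicated unit in turn; None models a failed unit.
        for W in units:
            if W is not None:
                return np.sign(W @ p)
        raise RuntimeError("all replicated units have failed")

    units = [None, W_backup]         # the primary unit has failed
    print(recall_with_failover(units, np.array([1., -1., 1., -1.])))
    # -> [ 1. -1.], recalled from the backup unit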
Multiple training algorithms are readily available for use.
SHL can be combined with the back-propagation (BP) learning algorithm to solve various classification and forecasting problems. Even though BP convergence is slow, with a suitably small learning rate it reliably converges to a local minimum of the error.
The major advantage of neural networks is that they are data driven and do not require restrictive assumptions about the form of the underlying model.

Disadvantages
Neural networks that use supervised Hebbian learning need training before they can operate, and this training is an additional cost. However, the possibility of training on samples helps to keep this cost down.
SHL adopts a black-box approach that exposes dependent neural networks to a greater computational burden, proneness to over-fitting, and the empirical nature of model development. To overcome this, several techniques, such as feature selection, have been combined with artificial neural networks (ANNs).


Despite the potential of SHL-based neural networks, we can only get the best out of them when they are integrated with complementary techniques such as fuzzy logic. SHL's black-box learning approach cannot interpret relationships between inputs and outputs and cannot deal with uncertainties.
SHL neural networks cannot easily be retrained: even if new data become available later, it is almost impossible to add new functionality to an existing SHL neural network.
Although SHL-trained neural networks often exhibit behaviour similar to that of humans, this is of more interest to cognitive science than to practical applications.
Conclusion
Despite its significant disadvantages, the advantages of SHL far outweigh them. This will ensure SHL's continued relevance in neural network applications.

References
Barto, A. G., Sutton, R. S., & Anderson, C. W. (1983). Neuron-like adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man and Cybernetics, 13, 834–846.

Cauwenberghs, G. (1993). A fast stochastic error-descent algorithm for supervised learning and optimization. In C. L. Giles, S. J. Hanson, & J. D. Cowan (Eds.), Advances in neural information processing systems, 5 (pp. 244–251). San Mateo, CA: Morgan Kaufmann.

Chance, F. S., Abbott, L. F., & Reyes, A. D. (2002). Gain modulation through background synaptic input. Neuron, 35, 773–782.

Chauvin, Y., & Rumelhart, D. E. (Eds.) (1995). Backpropagation: Theory, architectures, and applications. Hillsdale, NJ: Lawrence Erlbaum Associates.

Compte, A., Brunel, N., Goldman-Rakic, P. S., & Wang, X. J. (2000). Synaptic mechanisms and network dynamics underlying spatial working memory in a cortical network model. Cerebral Cortex, 10, 910–923.

Fyfe, C. (2005). Hebbian learning and negative feedback networks. London: Springer-Verlag.

Hebb, D. O. (1949). The organization of behavior. New York: Wiley & Sons.

Paulsen, O., & Sejnowski, T. J. (2000). Current Opinion in Neurobiology, 10(2), 172.

The Talented Dr. Hebb, Part 1: Novelty Filtering. http://blog.peltarion.com/2006/05/11/the-talented-dr-hebb-part-1-novelty-filtering/

