
Artificial Neural Network (ANN)

F. Y. M. Tech. Civil
Semester - II

Artificial Neural Networks..

Artificial neural networks are a typical example of a modern
interdisciplinary subject that helps solve engineering problems
which cannot be solved by traditional modelling and statistical
methods.
Neural networks are capable of collecting, memorizing, analysing
and processing large amounts of data gained from experiments or
numerical analyses.
They are a sophisticated modelling technique that can be used for
solving many complex problems.
A trained neural network serves as an analytical tool for qualified
predictions of results for any input data that were not included in
the learning process of the network.
Their operation is reasonably simple and easy, yet correct and
precise.

Artificial Neural Networks..


The inspiration for the foundation, development and application of
artificial neural networks came from attempts to understand the
working of the human brain, and from the aspiration to create an
artificial intelligent system for the kind of data calculation and
processing that is typical of the human brain.
Mainly because of that, artificial neural networks are very similar
to biological neural networks.
Both networks have a similar structure, function, technique of data
processing and methodology of calculation.
Artificial neural networks are presented as a simplified
mathematical model, analogous to biological neural networks.
They can easily simulate the basic characteristics of the biological
nervous system. The networks are capable of gathering, memorizing
and processing numerous experimental data.

Artificial Neural Networks..


Some of their basic characteristics are the following:
they can analyse large amounts of data,
they can learn from past data, and they can solve problems that are
complex, ill-defined, or that have more than one solution.
Because of that, artificial neural networks are often a better
calculation and prediction method than classic and traditional
calculation methods.
Research conducted around the world has shown that neural networks
are excellent at predicting data series, which is why they can be
used to create prognostic models that solve different problems and
tasks.

Artificial Neural Network (ANN)

ANN: a computational tool, particularly useful for evaluating
systems with a huge number of nonlinear variables.
It attempts to simulate the architecture and internal features of
the human brain and nervous system.
It is made up of a number of simple, highly interconnected
processing elements/units, referred to as neurons, which constitute
a network.
Each neuron receives several inputs from neighbouring elements but
sends only one output.
The training process of an ANN involves presenting a set of examples
with known inputs and outputs.
The system adjusts the weights of the internal connections to
minimize the error between the network output and the target output.

Human Neural Network (HNN)


The HNN is made of billions of nerve cells (neurons).
The human brain contains about 10 billion neurons, each connected
to some 1,000 other neurons, forming a massively parallel
information-processing system.
Each neuron has- dendrites, a cell body and an axon.
[Figure: a human neuron, with the cell body, dendrites and axon labelled]

Structure of Neuron

Components
Dendrites: branching fibers that extend from the cell body.
Soma or cell body: contains the nucleus and other structures that
support chemical processing and the production of neurotransmitters.
Axon: a single fiber that carries information away from the soma to
the synaptic sites of other neurons (dendrites and somas), muscles
or glands.
Axon hillock: the site of summation for incoming information. At
any moment, the collective influence of all neurons that conduct
impulses to a given neuron determines whether or not an action
potential is initiated at the axon hillock and propagated along the
axon.

Components..
Myelin sheath: consists of fat-containing cells that insulate the
axon from electrical activity. This insulation acts to increase the
rate of transmission of signals. A gap exists between each myelin
sheath cell along the axon. Since fat inhibits the propagation of
electricity, the signals jump from one gap to the next.
Nodes of Ranvier: the gaps (about 1 micron) between myelin sheath
cells along the axon. Since fat serves as a good insulator, the
myelin sheaths speed the rate of transmission of an electrical
impulse along the axon.
Synapse: the point of connection between two neurons, or between a
neuron and a muscle or a gland. Electrochemical communication
between neurons takes place at these junctions.
Terminal buttons: small knobs at the end of an axon that release
chemicals called neurotransmitters.

Structure of A Neuron Cell In A Human


Information flow in a neuron cell


Dendrites receive activation from other neurons.
The soma processes the incoming activations and converts them into
output activations.
Axons act as transmission lines to send activation to other neurons.
Synapses, the junctions between axons and dendrites, allow signal
transmission between neurons. The transmission takes place by
diffusion of chemicals called neurotransmitters.

HNN Vs. ANNs


A neuron receives inputs from other neurons and, if the average
input exceeds a critical level, it discharges an electrical pulse
that travels from the body down the axon to the next neuron or
group of neurons.
These three main parts of a biological neuron are reproduced in an
artificial neuron.
The artificial neuron receives inputs (dendrites), operates in
reaction to them (cell body), and sends an output (axon).
Each neuron receives an input signal from the neurons to which it
is connected. Each of these connections has a numerical weight
associated with it. These weights determine the nature and strength
of the influence between the interconnected neurons.
Signals from each input are then processed through a weighted sum
of the inputs. The processed output signal is then transmitted to
another neuron via a transfer function which modulates (adjusts)
the weighted sum of the inputs.

Neuron
Processes inside biological neural networks are very complex and
still cannot be completely studied and explained.
There are hundreds of different types of biological neurons in the
human brain, so it is almost impossible to create a mathematical
model that is exactly the same as the biological neural network.
However, for the practical application of artificial neural
networks, it is not necessary to use complex neuron models.
Therefore, the developed models of artificial neurons only resemble
the structure of the biological ones and make no pretension of
copying their real condition.

Model of artificial neuron

Model Of One Layered Artificial Neural Network

Neural Network
A neural network is composed of numerous mutually connected neurons
grouped in layers.
The complexity of the network is determined by the number of
layers.
Besides the input (first) and the output (last) layer, a network
can have one or more hidden layers.
The purpose of the input layer is to accept data from the
surroundings. Those data are processed in the hidden layers and
sent to the output layer. The final results of the network are the
outputs of the neurons in the last layer, and these are the
solution to the analysed problem.
The input data can have any form or type. The basic rule is that
each datum must have exactly one input value. Depending on the
problem type, the network can have one or more outputs.

Weight Coefficients
Weight coefficients are the key elements of every neural network.
They express the relative importance of each neuron's input and
determine the input's capability to stimulate the neuron.
Every input to a neuron has its own weight coefficient. By
multiplying those weight coefficients with the input signals and
summing the products, we calculate the input signal to each neuron.
In the figure (neuron), the input data are marked X1, X2 and X3,
and the corresponding weight coefficients are W1, W2 and W3. The
input impulses are W1X1, W2X2 and W3X3. The neuron registers the
summed input impulse, which is equal to the sum of all input
impulses: X = W1X1 + W2X2 + W3X3.
The received impulse is processed through an appropriate
transformation (activation) function, f(X), and the output signal
from the neuron is: Y = f(X) = f(W1X1 + W2X2 + W3X3).
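The calculation above can be sketched in a few lines of Python. The input and weight values below are illustrative, not taken from the figure.

```python
# A single neuron: weighted sum of the inputs followed by an
# activation function f, i.e. Y = f(W1X1 + W2X2 + W3X3).

def neuron_output(inputs, weights, f):
    x = sum(w * xi for w, xi in zip(weights, inputs))  # summed impulse X
    return f(x)                                        # output Y = f(X)

# With the identity activation the neuron simply passes X through.
identity = lambda x: x

y = neuron_output([1.0, 2.0, 3.0], [0.5, -0.25, 0.1], identity)
print(y)  # 0.5 - 0.5 + 0.3, approximately 0.3
```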

Weight Coefficients
Weight coefficients are elements of a matrix W with n rows and m
columns. For example, the weight coefficient Wnm corresponds to the
mth output of the nth neuron (Fig. 2).
The connection between the signal sources and the neurons is
determined by the weight coefficients. A positive weight
coefficient means an excitatory synapse and a negative coefficient
an inhibitory synapse. If Wij = 0, there is no connection between
the two neurons.
One very important characteristic of neural networks is their
ability to adjust the weights according to historical data, which
is the learning process of the network.

Activation Function
The main purpose of the activation (transformation) function is to
determine whether the result of the summary impulse
X = W1X1 + W2X2 + ... + WnXn can generate an output.
This function is associated with the neurons in the hidden layers
and is usually a non-linear function.
Almost any non-linear function can be used as an activation
function, but common practice is to use a sigmoid function
(hyperbolic tangent or logistic) of the following form:
Yt = 1 / (1 + e^(-X)),
where Yt is the normalized value of the result of the summary
function. Normalization means that the output value, after the
transformation, will be within reasonable limits, between 0 and 1.
If there were no activation function and no transformation, the
output value might be too large, especially for complex networks
with several hidden layers.
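The logistic sigmoid above can be written directly; the sample impulse values are illustrative.

```python
import math

def sigmoid(x):
    """Logistic activation: squashes the summed impulse into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0.0))    # 0.5: a zero impulse maps to the midpoint
print(sigmoid(10.0))   # close to 1 for large positive impulses
print(sigmoid(-10.0))  # close to 0 for large negative impulses
```

Whatever the size of the summed impulse X, the output stays between 0 and 1, which is exactly the normalization described above.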

What Are Artificial Neural Networks?


An extremely simplified model of the brain.
Essentially a function approximator.
- Transforms inputs into outputs to the best of its ability.

Human Brain NN & Trained Network


Composed of many neurons that co-operate to perform the
desired function.

Typical Neural Network Model


The architecture of a typical NN consists of three layers of
interconnected neurons.
Input layer: receives information from the outside world, i.e. the
data is presented to the neural network.
Intermediate or hidden layer: links the input layer to the output
layer, i.e. enables the network to represent and compute
complicated associations between patterns.
Output layer: communicates the ANN result to the outside world.

Artificial Neuron Model


An artificial neuron is composed of five main parts:
1) inputs, 2) weights, 3) sum function, 4) activation function &
5) outputs.

[Figure: the weighted sums of the input components]

Typical Neural Network Model


What Are Artificial Neural Networks?...


The output of a neuron is a function of the weighted sum of the inputs plus a bias.

[Figure: inputs presented to a neural network]

Basics of NN

Weight & biasing of NN

Learning of ANN
The most commonly used learning system is the back-propagation
model.
The learning algorithm processes the patterns in two stages.
In the first stage, the input pattern generates a forward flow
of signals from the input layer to the output layer.
The error of each output neuron is then determined from the
difference between the computed values and the observed
(experimental) values.
The second stage involves readjustment of the weights and biases
in the hidden and output layers to reduce the difference between
the computed and desired outputs.
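The two stages can be sketched for a single sigmoid neuron. The training pair, starting weights and learning rate below are illustrative assumptions; a full back-propagation implementation would repeat the second stage for every hidden and output layer.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(weights, bias, inputs, target, lr=0.5):
    # Stage 1: forward flow of signals from input to output.
    y = sigmoid(sum(w * i for w, i in zip(weights, inputs)) + bias)
    error = y - target                    # computed minus observed value
    # Stage 2: readjust weights and bias to reduce the difference.
    grad = error * y * (1.0 - y)          # gradient of the squared error
    weights = [w - lr * grad * i for w, i in zip(weights, inputs)]
    bias -= lr * grad
    return weights, bias, error

w, b = [0.2, -0.4], 0.1
for _ in range(1000):
    w, b, e = train_step(w, b, [1.0, 0.0], target=1.0)
print(abs(e))  # the error shrinks as the two stages are repeated
```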

Building an ANN- Elements Involved


Type of Neuron
The number of neurons in the input layer
The number of hidden layers & number of
neurons in the hidden layer/s
The number of neurons in the output layer
The connections between neurons & the
layers


ANN- applied to HPC mix design


Bias: The offset or skewing of data or information away
from its true or accurate position as the result of
systematic error
Epoch: Each instance that the training data is presented
to the network is known as an epoch.


Neural Network Use


Classification: Pattern recognition, feature extraction,
image matching.
Noise Reduction: Recognize patterns in the inputs and
produce noiseless outputs.
Prediction: Extrapolation based on historical data.
Why use neural networks?
Ability to learn: NNs figure out how to perform their function on
their own, determining their function based only upon sample
inputs.
Ability to generalize: they produce reasonable outputs for inputs
they have not been taught how to deal with.

Basic Operation of Logic Gates (Weights & Outputs)
There are two attributes associated with this neuron: the
threshold and the weight. The weight is 1.5 and the
threshold is 2.5. An incoming signal will be amplified, or
de-amplified, by the weight as it crosses the incoming
synapse. If the weighted input exceeds the threshold, then
the neuron will fire.
Consider a value of one (true) presented as the input to the
neuron. The value of one will be multiplied by the weight
value of 1.5. This results in a value of 1.5. The value of 1.5
is below the threshold of 2.5, so the neuron will not fire.
This neuron will never fire with Boolean input values. Not all
neurons accept only Boolean values; however, the neurons in this
section accept only the Boolean values of one (true) and zero
(false).
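This single neuron (weight 1.5, threshold 2.5, exactly as described above) can be sketched as:

```python
def fires(value, weight=1.5, threshold=2.5):
    """Fire when the weighted input exceeds the threshold."""
    return value * weight > threshold

print(fires(1))  # False: 1 * 1.5 = 1.5, below the 2.5 threshold
print(fires(0))  # False: 0 * 1.5 = 0.0
```

Since 1.5 never exceeds 2.5, the neuron indeed never fires on Boolean inputs.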

The AND Logical Operation


A | B | A AND B
0 | 0 | 0
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1

This network contains two inputs and one output. A neural network
that recognizes the AND logical operation is shown in the figure.
There are two inputs to the network. Each connection has a weight
of one. The threshold is 1.5. Therefore, the output neuron will
only fire if both inputs are true; if either input is false, the
sum of the two inputs will not exceed the threshold of 1.5.
Consider inputs of true and false. The true input will send a value
of one to the output neuron. This is below the threshold of 1.5,
so the neuron will not fire. Now consider inputs of true and true.
Each input neuron will send a value of one. These two inputs are
summed by the output neuron, resulting in two. The value of two is
greater than 1.5, so the neuron will fire.
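The AND network above can be sketched directly: two inputs with weight one, summed at an output neuron with threshold 1.5.

```python
def and_gate(a, b):
    total = 1 * a + 1 * b             # each connection has weight one
    return 1 if total > 1.5 else 0    # fire only above the 1.5 threshold

for a in (0, 1):
    for b in (0, 1):
        print(a, b, and_gate(a, b))   # fires (1) only for inputs 1, 1
```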

The OR Logical Operation


A | B | A OR B
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 1

The OR neural network looks very similar to the AND


neural network. The biggest difference is the threshold
value. Because the threshold is lower, only one of the
inputs needs to have a value of true for the output neuron
to fire.
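Only the threshold changes relative to the AND network. The slides do not state the OR threshold explicitly; 0.5 is assumed here, consistent with the OR-like hidden neuron in the XOR walkthrough that follows.

```python
def or_gate(a, b):
    total = 1 * a + 1 * b             # same weights as the AND network
    return 1 if total > 0.5 else 0    # assumed threshold: 0.5

for a in (0, 1):
    for b in (0, 1):
        print(a, b, or_gate(a, b))    # fires (1) whenever either input is 1
```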

The X-OR Logical Operation


A | B | A XOR B
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0

The XOR logical operation requires a slightly more complex neural
network than the AND and OR operators. The neural networks
presented so far have had only two layers: an input layer and an
output layer.
More complex neural networks also include one or more hidden
layers. The XOR operator requires a hidden layer. As a result, the
XOR neural network often becomes a sort of "Hello World"
application for neural networks. You will see the XOR operator
again as different types of neural network are introduced and
trained.

The X-OR Logical Operation


Consider the case in which the values of true and true are
presented to this neural network. Both neurons in the hidden layer
receive the value of two. This is above the thresholds of both of the
hidden layer neurons, so they will both fire. However, the first
hidden neuron has a weight of -1, so its contribution to the output
neuron is -1. The second neuron has a weight of 1, so its
contribution to the output neuron is 1. The sum of 1 and -1 is zero.
Zero is below the threshold of the output neuron, so the output
neuron does not fire. This is consistent with the XOR operation,
because it will produce false if both inputs are true.
Now consider if the values of false and true are presented to the
neural network. The input to the first hidden layer neuron will be 1,
from the second input neuron. This is lower than the threshold of
1.5, so it will not fire. The input to the second hidden layer neuron
will also be 1, from the second input neuron. This is over the 0.5
threshold, so it will fire. The input to the output neuron will be zero
from the left hidden neuron and 1 from the right hidden neuron.
This is greater than 0.5, so the output neuron will fire. This is
consistent with the XOR operation, because it will produce true if
one of the input neurons is true and the other false.
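The walkthrough above can be sketched as a small network: both inputs feed both hidden neurons with weight one; the AND-like hidden neuron (threshold 1.5) inhibits the output with weight -1, while the OR-like hidden neuron (threshold 0.5) excites it with weight 1.

```python
def fires(total, threshold):
    return 1 if total > threshold else 0

def xor_gate(a, b):
    h1 = fires(a + b, 1.5)                # AND-like neuron: fires only for 1, 1
    h2 = fires(a + b, 0.5)                # OR-like neuron: fires if either input is 1
    return fires(-1 * h1 + 1 * h2, 0.5)   # output neuron, threshold 0.5

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_gate(a, b))
```

For inputs 1, 1 both hidden neurons fire and their contributions of -1 and 1 cancel, so the output stays below its threshold, just as the walkthrough describes.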
