
Outline:
Definition
Neural Model
Activation Function
Algorithm
Stage One Conversion
Stage Two Conversion
Binary Neural Implementation
Hardware Optimization
References

A neural network is a sorted triple (N, V, w) with two sets N, V and a function w, where:

N is the set of neurons.

V is a sorted set {(i, j) | i, j ∈ N} whose elements are called connections between neuron i and neuron j.

w : V → R defines the weights, w((i, j)) being the weight of the connection between neuron i and neuron j.
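As an illustrative sketch (the representation and names here are assumptions, not from the source), the triple (N, V, w) maps directly onto simple data structures:

```python
# Minimal sketch of the (N, V, w) definition: a set of neurons N,
# a set of connections V, and a weight function w mapping each
# connection (i, j) to a real number.

N = {1, 2, 3}                      # set of neurons
V = [(1, 3), (2, 3)]               # connections (i, j) between neurons
w = {(1, 3): 0.5, (2, 3): -0.25}   # w : V -> R, weight of each connection

def net_input(j, x):
    """Weighted input arriving at neuron j; x maps each neuron to its output."""
    return sum(w[(i, k)] * x[i] for (i, k) in V if k == j)
```

For example, with both input neurons outputting 1.0, neuron 3 receives 0.5 - 0.25 = 0.25.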


Properties of neural networks:
Learning capability
Ability to predict outcomes from past trends
Robustness and fault tolerance
Parallel, high-speed, distributed information processing

Identity function:
F(x) = x for all x

Binary step function (with threshold θ):
F(x) = 1 if x >= θ
F(x) = 0 if x < θ

Sigmoid function:
F(x) = 1 / (1 + e^(-x))

Bipolar sigmoid function:
F(x) = (1 - e^(-x)) / (1 + e^(-x))
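The four activation functions above can be sketched directly (θ denotes the threshold of the binary step function):

```python
import math

def identity(x):
    # F(x) = x for all x
    return x

def binary_step(x, theta=0.0):
    # F(x) = 1 if x >= theta, else 0
    return 1 if x >= theta else 0

def sigmoid(x):
    # F(x) = 1 / (1 + e^(-x)), output in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def bipolar_sigmoid(x):
    # F(x) = (1 - e^(-x)) / (1 + e^(-x)), output in (-1, 1)
    return (1.0 - math.exp(-x)) / (1.0 + math.exp(-x))
```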

Steps:
1. Digitization
2. Conversion of the digitized model into a logic gate structure
3. Hardware optimization by elimination of redundant logic gates


The algorithm is applicable to both ASICs and FPGAs. Each neuron is treated as a Boolean function and implemented separately. The algorithm generates a multilayer pyramidal hardware structure consisting of alternating AND and OR gate layers.


Digitization of One Neuron: Mathematical Model

Real values between -1 and +1 can be represented by a group of nb binary values.

The functionality of the neuron should not be affected while transforming the analog neuron into an appropriate digital model.

Conversion is achieved by transforming the analog inputs into digital inputs: each analog neuron input is transformed into its equivalent group of nb binary inputs.

Each input, defined by an initial weight wij, is split into nb sub-inputs whose weights wijp (p = 0, 1, ..., nb-1) are calculated as

The argument corresponding to the neuron after the first conversion stage is calculated from these sub-inputs; the neuron threshold remains constant.
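The exact sub-weight formula is not reproduced above. As one plausible sketch, an analog value in [-1, +1] can be quantized to nb bits with binary-weighted sub-inputs; this particular fixed-point scheme is an assumption for illustration, not necessarily the paper's formula:

```python
def digitize(x, nb):
    """Quantize x in [-1, 1] to nb binary sub-inputs (assumed fixed-point scheme)."""
    # Map [-1, 1] onto an integer level in [0, 2^nb - 1]
    levels = (1 << nb) - 1
    q = round((x + 1.0) / 2.0 * levels)
    # bits[p] is sub-input x_p, p = 0 .. nb-1 (least significant bit first)
    return [(q >> p) & 1 for p in range(nb)]

def reconstruct(bits):
    """Inverse mapping, to check functionality is preserved up to quantization error."""
    nb = len(bits)
    q = sum(b << p for p, b in enumerate(bits))
    return 2.0 * q / ((1 << nb) - 1) - 1.0
```

With nb = 8 the quantization step is 2/255, so any value in [-1, 1] round-trips to within half a step.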

After the first conversion stage, neurons can have both positive and negative weights.

Stage two aims to replace these neurons with equivalent ones having only positive weights.
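The standard identity behind this replacement: for a negative weight w, w·x = |w|·(1 - x) + w, so the corresponding input is complemented (a NOT gate), the weight becomes |w|, and the threshold shifts by the sum of the negative weights. A minimal sketch (function names are assumed):

```python
from itertools import product

def to_positive_weights(weights, threshold):
    """Replace negative weights by positive ones on complemented inputs.

    Uses w*x = |w|*(1 - x) + w for w < 0, so the comparison
    sum(w_i * x_i) >= t becomes an equivalent one with threshold
    t - (sum of negative weights).
    Returns (new_weights, inverted_flags, new_threshold).
    """
    new_w, inverted = [], []
    t2 = threshold
    for w in weights:
        if w < 0:
            new_w.append(-w)
            inverted.append(True)   # feed 1 - x (NOT gate) into this sub-input
            t2 -= w                 # threshold grows by |w|
        else:
            new_w.append(w)
            inverted.append(False)
    return new_w, inverted, t2

def fires(weights, inputs, threshold):
    return sum(wi * xi for wi, xi in zip(weights, inputs)) >= threshold

# Equivalence check over all binary input combinations
w, t = [2, -1, 3], 1.5
w2, inv, t2 = to_positive_weights(w, t)
for bits in product([0, 1], repeat=3):
    bits2 = [1 - b if f else b for b, f in zip(bits, inv)]
    assert fires(w, bits, t) == fires(w2, bits2, t2)
```

The transformed neuron has only positive weights ([2, 1, 3] here) and a raised threshold (2.5), yet fires on exactly the same input patterns.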

The relationship between the inputs of the stage-one and stage-two neurons is as follows: sub-input bits with positive weights are applied unchanged to the stage-two neuron, while sub-input bits with negative weights are inverted. These two alternatives can be combined into a single expression.

The transfer function argument is given by

Substituting the value of xijp(2), we get

The argument function of stage two is

The arguments of the activation function of the two neuron versions, before and after stage two, should be equal.

Therefore

Substituting the value of wijp, we get

Therefore threshold level of the stage two neurons is

The neuron parameters after stage two can thus be calculated as functions of the initial analog neuron parameters.

The iterative implementation procedure uses three input parameters:
The index defining the current terminal group (F)
The current threshold level (T)
The logic gate type (LGT), which is either ANY_GATE or AND_GATE


1. At the first step, F = 1, T = t(2), LGT = ANY_GATE. If LGT = AND_GATE then go to 7, else go to 2.
2. Calculate the number X of input weights and determine the number Y of the cumulative weights which are > T. If X > 1 and Y = 0 then go to 3. If X > 1 and Y > 0 then go to 4. If X = 1 and Y = 0 then go to 5. If X = 0 and Y = 1 then go to 7. If X = 0 and Y = 0 then go to 6.

3. An X-input OR gate is added to the netlist. Go to 8.
4. An (X+Y)-input OR gate is added to the netlist. Go to 8.
5. A single input is capable of triggering the neuron output, while the other inputs do not influence its operation. Go to 8.
6. No input can trigger the neuron; the output can be implemented as a connection to ground. Go to 8.
7. An AND gate is required. If there are no non-critical weights, then a (Z+1)-input AND gate is added to the netlist.
8. Return to the calling procedure. If no calling procedure is left unfinished, stop the process.
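The branching in the steps above can be sketched as a small dispatcher over X (the number of input weights) and Y (the number of cumulative weights exceeding T). This is an illustrative simplification of the control flow, not the full netlist generator:

```python
def classify(X, Y, lgt="ANY_GATE"):
    """Select the next step of the procedure from X, Y and the gate type.

    Step numbers follow the listing above; this sketches only the
    branching logic, not the gate construction itself.
    """
    if lgt == "AND_GATE":
        return 7   # AND gate required
    if X > 1 and Y == 0:
        return 3   # X-input OR gate
    if X > 1 and Y > 0:
        return 4   # (X+Y)-input OR gate
    if X == 1 and Y == 0:
        return 5   # a single input drives the output
    if X == 0 and Y == 1:
        return 7   # AND gate required
    if X == 0 and Y == 0:
        return 6   # output tied to ground
    raise ValueError("combination not covered by the listed steps")
```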

The hardware implementation netlist obtained is repeatedly analyzed, and redundant logic gates that have the same input signals and are of the same type are removed.



This optimization ends when no more gates can be removed.
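A minimal sketch of that deduplication pass (the netlist representation is an assumption): gates of the same type with identical input sets are merged, their consumers rewired, and the pass repeats until a fixed point is reached.

```python
def dedupe_netlist(gates):
    """gates: dict gate_id -> (gate_type, frozenset of input ids).

    Repeatedly merges gates of the same type with identical inputs,
    rewiring consumers of the removed gate, until nothing changes.
    """
    gates = dict(gates)
    changed = True
    while changed:
        changed = False
        seen = {}   # (type, inputs) -> surviving gate id
        remap = {}  # removed id -> surviving id
        for gid, key in list(gates.items()):
            if key in seen:
                remap[gid] = seen[key]
                del gates[gid]
                changed = True
            else:
                seen[key] = gid
        if remap:
            gates = {
                gid: (typ, frozenset(remap.get(i, i) for i in ins))
                for gid, (typ, ins) in gates.items()
            }
    return gates

# Example: two identical AND gates feeding an OR gate collapse to one.
netlist = {
    "g1": ("AND", frozenset({"a", "b"})),
    "g2": ("AND", frozenset({"a", "b"})),
    "g3": ("OR", frozenset({"g1", "g2"})),
}
optimized = dedupe_netlist(netlist)
```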

[1] A. Dinu and M. N. Cirstea, "A digital neural network FPGA direct hardware implementation algorithm," in Proc. ISIE, Vigo, Spain, pp. 2307–2312.
[2] A. Dinu and M. N. Cirstea, "Direct Neural Network Hardware Implementation Algorithm," IEEE Transactions on Industrial Electronics, vol. 57, no. 5, May 2010.
[3] M. T. Hagan, H. B. Demuth, and M. Beale, Neural Network Design, Vikas, Thomson Learning.
[4] S. Haykin, Neural Networks: A Comprehensive Foundation, Pearson Education.
[5] M. A. Leon and J. Keller, "Toward Implementation of Artificial Neural Networks That 'Really Work'," Department of Computer Engineering and Computer Science, University of Missouri–Columbia.

The novel algorithm treats each neuron as a Boolean function with properties that can be exploited to achieve a compact implementation.

It is most efficient for a small number of binary inputs per neuron.

[6] Y. Singh and A. S. Chauhan, "Neural Networks in Data Mining," Journal of Theoretical and Applied Information Technology, vol. 5, no. 6, pp. 37–42, June 2009.
[7] C. M. Bishop, Pattern Recognition and Machine Learning, New York: Springer, 2006, 703 p.
[8] V. Ganapathy and K. L. Liew, "Handwritten Character Recognition Using Multiscale Neural Network Training Technique," World Academy of Science, Engineering and Technology, no. 39, pp. 32–37, 2008.
[9] G. P. Zhang, "Neural Networks for Classification: A Survey," IEEE Trans. on Syst., Man and Cybern., vol. 30, no. 4, pp. 451–462, Nov. 2000.
[10] K. R. Farell, R. J. Mommone, and K. T. Assaleh, "Speaker Recognition Using Neural Networks and Conventional Classifiers," IEEE Transactions on Speech and Audio Processing, vol. 3, no. 1, pp. 194–205, 1994.
