Bob John
For the training of the network, there is a forward pass and a backward pass. We now look at each layer in turn for the forward pass. The forward pass propagates the input vector through the network layer by layer. In the backward pass, the error is sent back through the network in a similar manner to backpropagation.
Layer 1 Every node in this layer is adaptive and outputs the membership grade of its input:

O1,i = Ai(x), for i = 1, 2
O1,i = Bi−2(y), for i = 3, 4
So O1,i is essentially the membership grade of x (or of y, for i = 3, 4). The membership functions could be anything, but for illustration purposes we will use the bell-shaped function given by:
Ai(x) = 1 / (1 + |(x − ci)/ai|^(2bi))

where ai, bi and ci are the parameters of the membership function, referred to as the premise parameters.
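As a quick illustration, the bell-shaped membership function can be sketched in a few lines of Python (the function name and parameter names here are my own, not from the original):

```python
def bell_mf(x, a, b, c):
    """Generalised bell membership function: 1 / (1 + |(x - c)/a|**(2*b)).
    c is the centre, a the width, and b controls the slope of the shoulders."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))
```

At x = c the membership grade is 1, and at x = c ± a it is 0.5, whatever the value of b.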
Layer 2 Every node in this layer is fixed. This is where the t-norm is used to AND the membership grades - for example the product:
O2,i = wi = Ai(x) Bi(y), i = 1, 2
Layer 3 The nodes in this layer are fixed and calculate the ratio of each rule's firing strength to the sum of all firing strengths:
O3,i = w̄i = wi / (w1 + w2), i = 1, 2
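Layers 2 and 3 are simple enough to sketch directly. A minimal Python illustration (the function names are my own) of the product t-norm and the normalisation step:

```python
def firing_strengths(A1, A2, B1, B2, x, y):
    """Layer 2: AND the membership grades with the product t-norm."""
    return A1(x) * B1(y), A2(x) * B2(y)

def normalise(w1, w2):
    """Layer 3: ratio of each firing strength to the total."""
    total = w1 + w2
    return w1 / total, w2 / total
```

Note that the normalised firing strengths always sum to 1.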
Layer 4 The nodes in this layer are adaptive and compute the consequent of the rules:
O4,i = w̄i fi = w̄i (pi x + qi y + ri)
The parameters in this layer ( pi , qi , ri ) are to be determined and are referred to as the consequent parameters.
Layer 5 There is a single node here that computes the overall output:
O5 = Σi w̄i fi = (Σi wi fi) / (Σi wi)
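Putting layers 4 and 5 together, the overall output of the two-rule network is just the weighted sum of the linear consequents. A minimal sketch (the function and parameter names are my own):

```python
def anfis_output(wbar, x, y, consequents):
    """Layers 4 and 5: each rule's normalised firing strength weights its
    linear consequent f_i = p_i*x + q_i*y + r_i; the results are summed.
    wbar: normalised firing strengths; consequents: list of (p, q, r)."""
    out = 0.0
    for wb, (p, q, r) in zip(wbar, consequents):
        out += wb * (p * x + q * y + r)
    return out
```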
This, then, is how the input vector is typically fed through the network layer by layer. We now consider how the ANFIS learns the premise and consequent parameters for the membership functions and the rules. There are a number of possible approaches, but we will discuss the hybrid learning algorithm proposed by Jang, Sun and Mizutani (Neuro-Fuzzy and Soft Computing, Prentice Hall, 1997), which uses a combination of steepest descent and least squares estimation (LSE). The full derivation can get very complicated, so here I will provide a high-level description of how the algorithm operates. It can be shown that, for the network described, if the premise parameters are fixed then the output is linear in the consequent parameters. We split the total parameter set as follows:
S = set of total parameters
S1 = set of premise (nonlinear) parameters
S2 = set of consequent (linear) parameters
So, ANFIS uses a two-pass learning algorithm:

Forward pass. Here S1 is unmodified and S2 is computed using an LSE algorithm.
Backward pass. Here S2 is unmodified and S1 is updated using a gradient descent algorithm such as backpropagation.

So the hybrid learning algorithm uses a combination of steepest descent and least squares to adapt the parameters in the adaptive network. A summary of the process is given below:

Forward pass
Present the input vector
Calculate the node outputs layer by layer
Repeat for all data; the matrices A and y are formed
Identify the parameters in S2 using least squares
Compute the error measure for each training pair

Backward pass
Use the steepest descent algorithm to update the parameters in S1 (backpropagation)

For given fixed values of S1, the parameters in S2 found by this approach are guaranteed to be the global optimum point.
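To make the forward-pass LSE step concrete: with the premise parameters fixed, the output is a linear function of (p1, q1, r1, p2, q2, r2), so each training pair contributes one row of a linear system that can be solved by least squares. A sketch in Python with NumPy, for the two-rule network (the function and variable names are my own assumptions):

```python
import numpy as np

def identify_consequents(pairs, wbars):
    """Forward-pass LSE step for the two-rule network.
    pairs: list of (x, y, target) training pairs
    wbars: normalised firing strengths (wbar1, wbar2) for each pair,
           computed in the forward pass with the premise parameters fixed
    Returns the consequent parameters (p1, q1, r1, p2, q2, r2)."""
    # Each training pair contributes one row of the design matrix A,
    # since O5 = wbar1*(p1*x + q1*y + r1) + wbar2*(p2*x + q2*y + r2).
    A = np.array([[wb1 * x, wb1 * y, wb1, wb2 * x, wb2 * y, wb2]
                  for (x, y, _), (wb1, wb2) in zip(pairs, wbars)])
    targets = np.array([t for (_, _, t) in pairs])
    theta, *_ = np.linalg.lstsq(A, targets, rcond=None)
    return theta
```

Because the problem is linear in S2, this least-squares solution is the global optimum for the given fixed S1.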