Proceedings of the Mediterranean Conference on Control & Automation, July, Athens, Greece
Abstract— This paper introduces a new approach for training the adaptive network based fuzzy inference system (ANFIS). Previous works emphasized gradient-based or least squares (LS) based methods. In this study we apply one of the swarm intelligence branches, particle swarm optimization (PSO), with some modifications inspired by natural evolution, to the training of all parameters of the ANFIS structure. Finally, the method is applied to the identification of a nonlinear dynamical system and is compared with basic PSO, showing quite satisfactory results.

Keywords: Particle Swarm Optimization, TSK System, Fuzzy Systems, ANFIS, Swarm Intelligence, Identification, Neuro-Fuzzy.

I. INTRODUCTION

The complexity and the dynamics of some problems, such as prediction of chaotic systems and adaptation to complex plants, require sophisticated methods and tools for building an intelligent system. Using fuzzy systems as approximators, identifiers, and predictors is a reliable approach for this purpose [1, 2]. The combination of fuzzy logic with the architectural design of neural networks led to the creation of neuro-fuzzy systems, which benefit from the feed-forward calculation of output and the back-propagation learning capability of neural networks, while keeping the interpretability of a fuzzy system [3]. The TSK system [4, 5] is a fuzzy system with crisp functions in the consequent, which is perceived as proper for complex applications [6]. It has been proved that, with a convenient number of rules, a TSK system can approximate every plant [7]. TSK systems are widely used in the form of a neuro-fuzzy system called ANFIS [8]. Because of its crisp consequent functions, ANFIS implicitly uses a simple form of scaling.

This adaptive network has good ability and performance in system identification, prediction, and control, and has been applied to many different systems. ANFIS has the advantage of good applicability, as it can be interpreted as local linearization modeling, so conventional linear techniques for state estimation and control are directly applicable.

ANFIS combines the abilities of a neural network and a fuzzy system. The training and updating of the ANFIS parameters is one of the main problems. Most of the training methods are gradient-based; the calculation of the gradient in each step is very difficult, the chain rule must be used, and the search may fall into a local minimum. Here we propose a method which can update all parameters more easily and faster than the gradient method. In the gradient method, convergence of the parameters is very slow, depends on the initial values of the parameters, and finding the best learning rate is very difficult. In this new method, so called PSO, we do not need a learning rate.

The rest of the paper is organized as follows: In Section II, we review ANFIS. In Section III we discuss the PSO method. An overview of the proposed method and its application to nonlinear identification is presented in Section IV. Finally, Section V presents our conclusions.

II. THE CONCEPT OF ANFIS

A. ANFIS Structure

Here, the type-3 ANFIS topology and the learning method used for this neuro-fuzzy network are presented. Both neural networks and fuzzy logic [9] are model-free estimators and share the common ability to deal with uncertainties and noise. Both of them encode the information in a parallel and distributed architecture in a numerical framework. Hence, it is possible to
convert a fuzzy logic architecture to a neural network and vice-versa. This makes it possible to combine the advantages of neural networks and fuzzy logic. A network obtained this way can use the excellent training algorithms that neural networks have at their disposal to obtain parameters that would not have been obtainable in a fuzzy logic architecture. Moreover, the network obtained this way does not remain a black box, since it has fuzzy logic capabilities that can be interpreted in terms of linguistic variables [10].

ANFIS is composed of two approaches, neural network and fuzzy. Composing these two intelligent approaches achieves good reasoning in both quality and quantity; in other words, we have fuzzy reasoning and network calculation.

The ANFIS network organizes two parts, like fuzzy systems. The first part is the antecedent part and the second part is the conclusion part, which are connected to each other by rules in network form. When ANFIS is shown in network structure, it is demonstrated in five layers; it can be described as a multi-layered neural network as shown in Fig. 1. The first layer executes a fuzzification process, the second layer executes the fuzzy AND of the antecedent part of the fuzzy rules, the third layer normalizes the membership functions (MFs), the fourth layer executes the consequent part of the fuzzy rules, and finally the last layer computes the output of the fuzzy system by summing up the outputs of the fourth layer. Here, for the ANFIS structure (Fig. 1), two inputs and two labels for each input are considered. The feed-forward equations of ANFIS are as follows:

$$w_i = \mu_{A_i}(x)\,\mu_{B_i}(y), \quad i = 1,2 \quad (1)$$

$$\bar{w}_i = \frac{w_i}{w_1 + w_2}, \quad i = 1,2 \quad (2)$$

$$f_1 = p_1 x + q_1 y + r_1, \qquad f_2 = p_2 x + q_2 y + r_2$$
$$f = \frac{w_1 f_1 + w_2 f_2}{w_1 + w_2} = \bar{w}_1 f_1 + \bar{w}_2 f_2 \quad (3)$$

In order to model complex nonlinear systems, the ANFIS model carries out input space partitioning that splits the input space into many local regions, in which simple local models (linear functions or even adjustable coefficients) are employed. ANFIS uses fuzzy MFs for splitting each input dimension; the input space is covered by overlapping MFs, which means several local regions can be activated simultaneously by a single input. As simple local models are adopted in the ANFIS model, the ANFIS approximation ability depends on the resolution of the input space partitioning, which is determined by the number of MFs in ANFIS and the number of layers. Usually the MFs are bell-shaped with maximum equal to 1 and minimum equal to 0, such as:

$$\mu_{A_i}(x) = \frac{1}{1 + \left|\dfrac{x - c_i}{a_i}\right|^{2 b_i}} \quad (4)$$

$$\mu_{B_i}(y) = \frac{1}{1 + \left|\dfrac{y - c_i}{a_i}\right|^{2 b_i}} \quad (5)$$

where {a_i, b_i, c_i} are the parameters of the MFs, which affect the shape of the MFs.

Figure (1): The equivalent ANFIS (type-3 ANFIS)

B. Learning Algorithms

Subsequent to the development of the ANFIS approach, a number of methods have been proposed for learning rules and for obtaining an optimal set of rules. For example, Mascioli et al. [11] have proposed merging the Min-Max and ANFIS models to obtain a neuro-fuzzy network and determine an optimal set of fuzzy rules. Jang and Mizutani [12] have presented an application of the Levenberg-Marquardt method, which is essentially a nonlinear least-squares technique, for learning in the ANFIS network. In another paper, Jang [13] has presented a scheme for input selection, and [10] used Kohonen's map for training.

Jang [8] introduced four methods to update the parameters of the ANFIS structure, listed below according to their computational complexity:
1. Gradient descent only: all parameters are updated by gradient descent.
2. Gradient descent and one pass of LSE: the LSE is applied only once at the very beginning to get the initial values of the consequent
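The forward pass of Eqs. (1)-(5) can be sketched for the two-input, two-rule structure of Fig. 1. This is a minimal illustration, not the paper's implementation; the generalized bell MF is used as in Eqs. (4)-(5), and all parameter values below are arbitrary examples.

```python
import numpy as np

def gbell(x, a, b, c):
    # Generalized bell MF of Eqs. (4)-(5): peaks at 1 when x = c, tends to 0 far away
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

def anfis_forward(x, y, mf_x, mf_y, conseq):
    # mf_x, mf_y: per-rule (a, b, c) MF parameters; conseq: per-rule (p, q, r)
    w = np.array([gbell(x, *mf_x[i]) * gbell(y, *mf_y[i]) for i in range(2)])  # Eq. (1): firing strengths
    w_bar = w / w.sum()                                                        # Eq. (2): normalization
    f = np.array([p * x + q * y + r for (p, q, r) in conseq])                  # Eq. (3): rule consequents
    return float(w_bar @ f)                                                    # Eq. (3): weighted output

# Illustrative parameters: two MFs per input, two rules
mf_x = [(1.0, 2.0, -1.0), (1.0, 2.0, 1.0)]
mf_y = [(1.0, 2.0, -1.0), (1.0, 2.0, 1.0)]
conseq = [(0.5, 0.2, 0.1), (-0.3, 0.4, 0.0)]
print(anfis_forward(0.3, -0.2, mf_x, mf_y, conseq))
```

Because the normalized weights of Eq. (2) sum to one, the output is always a convex combination of the rule consequents, which is what makes ANFIS interpretable as local linearization.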
III. PARTICLE SWARM OPTIMIZATION (PSO) ALGORITHMS

A. General PSO Algorithm

Particle swarm optimization (PSO) algorithms are population-based search algorithms based on the simulation of the social behavior of birds within a flock [14]. They all work in the same way: the population of individuals is updated by applying some kind of operators according to the fitness information obtained from the environment, so that the individuals of the population can be expected to move toward better solution areas. In PSO each individual flies in the search space with a velocity which is dynamically adjusted according to its own flying experience and its companions' flying experience; each individual is a point in the D-dimensional search space [15].

Generally, PSO has three major algorithm variants. The first is the individual best: each individual compares its position to its own best position, pbest, only; no information from other particles is used in this type of algorithm.

The second version is the global best. The social knowledge used to drive the movement of particles includes the position of the best particle from the entire swarm. In addition, each particle uses its history of experience in terms of its own best solution thus far. In this type the algorithm is presented as:

1. Initialize the swarm, $P(t)$, of particles such that the position $\vec{x}_i(t)$ of each particle $P_i \in P(t)$ is random within the hyperspace, with $t = 0$.
2. Evaluate the performance $F$ of each particle, using its current position $\vec{x}_i(t)$.
3. Compare the performance of each individual to its best performance thus far: if $F(\vec{x}_i(t)) < pbest_i$ then $pbest_i = F(\vec{x}_i(t))$ and $\vec{x}_{pbest_i} = \vec{x}_i(t)$.
4. Compare the performance of each particle to the global best particle: if $F(\vec{x}_i(t)) < gbest$ then $gbest = F(\vec{x}_i(t))$ and $\vec{x}_{gbest} = \vec{x}_i(t)$.
5. Change the velocity vector for each particle:
$$\vec{v}_i(t) = \vec{v}_i(t-1) + \varphi_1 (\vec{x}_{pbest_i} - \vec{x}_i(t)) + \varphi_2 (\vec{x}_{gbest} - \vec{x}_i(t)) \quad (10)$$
where $\varphi_1$ and $\varphi_2$ are random variables. The second term above is referred to as the cognitive component, while the last term is the social component.
6. Move each particle to a new position:
$$\vec{x}_i(t) = \vec{x}_i(t-1) + \vec{v}_i(t), \qquad t = t + 1 \quad (11)$$
7. Go to step 2, and repeat until convergence.

The random variables $\varphi_1$ and $\varphi_2$ are defined as $\varphi_1 = r_1 C_1$ and $\varphi_2 = r_2 C_2$, with $r_1, r_2 \sim U(0,1)$, where $C_1$ and $C_2$ are positive acceleration constants. Kennedy has studied the effects of the random variables $\varphi_1$ and $\varphi_2$ on the particles' trajectories and asserted that $C_1 + C_2 \le 4$ guarantees the stability of PSO [16].

B. Modified PSO Algorithm

In this approach we remove the worst particle in the population and replace it with a new particle. The key questions are how to determine the worst particle and how to generate the new particle for the current population. The particle with the worst local best value at the current generation is selected. We then randomly choose two particles from the population and use a crossover operator to generate two new particles. The best of the two newly generated particles and the selected particle then replaces the worst particle. In other words, we combine a GA operator with the PSO algorithm; this modification makes convergence faster than the basic algorithm.
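The gbest loop (steps 1-7 with Eqs. (10)-(11)) and the worst-particle replacement of Section III.B can be sketched together as below. This is an illustration under stated assumptions: the paper does not specify the crossover operator, so an arithmetic (blend) crossover is assumed, and the sphere function is used as a stand-in fitness; in the paper the fitness would be the ANFIS identification error over the parameter vector.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(fitness, dim=5, n_particles=15, iters=100, c1=2.0, c2=2.0):
    x = rng.uniform(-1, 1, (n_particles, dim))              # step 1: random positions
    v = np.zeros((n_particles, dim))
    pbest_x = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    for _ in range(iters):
        g = pbest_x[pbest_f.argmin()]                       # step 4: global best position
        phi1 = rng.random((n_particles, 1)) * c1            # phi1 = r1*C1
        phi2 = rng.random((n_particles, 1)) * c2            # phi2 = r2*C2
        v = v + phi1 * (pbest_x - x) + phi2 * (g - x)       # Eq. (10): velocity update
        x = x + v                                           # Eq. (11): position update
        f = np.array([fitness(p) for p in x])               # step 2: evaluate
        better = f < pbest_f                                # step 3: update personal bests
        pbest_x[better], pbest_f[better] = x[better], f[better]
        # Section III.B: replace the particle with the worst local best
        worst = pbest_f.argmax()
        i, j = rng.choice(n_particles, 2, replace=False)
        alpha = rng.random()
        child1 = alpha * x[i] + (1 - alpha) * x[j]          # arithmetic crossover (assumed form)
        child2 = (1 - alpha) * x[i] + alpha * x[j]
        best_child = min([child1, child2], key=fitness)
        if fitness(best_child) < pbest_f[worst]:            # keep worst particle if children are no better
            x[worst], v[worst] = best_child, 0.0
            pbest_x[worst], pbest_f[worst] = best_child, fitness(best_child)
    return pbest_x[pbest_f.argmin()], pbest_f.min()

sol, val = pso(lambda p: float(np.sum(p ** 2)))
print(val)
```

Note that the personal-best bookkeeping makes the returned fitness monotonically non-increasing over iterations, so the modification can only help or leave the result unchanged, never worsen it.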
where

$$f(x_1, x_2, x_3, x_4, x_5) = \frac{x_1\, x_2\, x_3\, x_5\,(x_3 - 1) + x_4}{1 + x_2^2 + x_3^2} \quad (13)$$

Here, the current output of the plant depends on three previous outputs and two previous inputs. The simulation settings are:

  No. of inputs                       5
  No. of data for training         1000
  No. of MFs for each input           2
  No. of particles per population    15
  Epochs for each population        100

The testing input signal used is the following:
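The plant of Eq. (13) can be simulated to produce the 1000 training samples listed in the settings above. The sketch below assumes the standard benchmark argument assignment y(k+1) = f(y(k), y(k-1), y(k-2), u(k), u(k-1)), consistent with "three previous outputs and two previous inputs"; the sinusoidal excitation is an illustrative choice, since the paper's exact training and testing signals appear only in its figures.

```python
import numpy as np

def f(x1, x2, x3, x4, x5):
    # Eq. (13)
    return (x1 * x2 * x3 * x5 * (x3 - 1) + x4) / (1 + x2 ** 2 + x3 ** 2)

def simulate(u, n):
    # y(k+1) = f(y(k), y(k-1), y(k-2), u(k), u(k-1)); assumed argument order
    y = np.zeros(n + 1)
    for k in range(2, n):
        y[k + 1] = f(y[k], y[k - 1], y[k - 2], u[k], u[k - 1])
    return y

n = 1000                                     # number of training samples (settings above)
u = np.sin(2 * np.pi * np.arange(n) / 250)   # illustrative excitation signal
y = simulate(u, n)

# Training pairs: inputs (y(k), y(k-1), y(k-2), u(k), u(k-1)) -> target y(k+1)
X = np.column_stack([y[2:n], y[1:n - 1], y[0:n - 2], u[2:n], u[1:n - 1]])
T = y[3:n + 1]
print(X.shape, T.shape)
```

Each row of X, paired with its target in T, is one training sample for the five-input ANFIS; the PSO fitness would then be the mean squared error of the ANFIS output over these pairs.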
Figure 2: Using PSO for training parameters in the ANFIS structure for example 1. Simulation results for nonlinear system identification. (a) Model output (dashed line) and actual system output (solid line). (b) MSEs for the modified algorithm (MSE = 7.0020e-004). (c) MSEs for the basic algorithm (MSE = 0.197).

Figure 3: Using PSO for training parameters in the ANFIS structure for example 2. Simulation results for nonlinear system identification. (a) Model output (dashed line) and actual system output (solid line). (b) MSEs for the modified algorithm (MSE = 0.0905). (c) MSEs for the basic algorithm.
REFERENCES