The SIJ Transactions on Computer Science Engineering & its Applications (CSEA), Vol. 1, No. 4, September-October 2013

Age Prediction and Performance Comparison by Adaptive Network based Fuzzy Inference System using Subtractive Clustering

Manisha Pariyani* & Kavita Burse**

*M.Tech Scholar, Department of Computer Science & Engineering, Oriental College of Technology, Bhopal, Madhya Pradesh, INDIA. E-Mail: manishap0707@gmail.com
**Director, Oriental College of Technology, Bhopal, Madhya Pradesh, INDIA. E-Mail: kavitaburse14@gmail.com

Abstract: To integrate the best features of fuzzy systems and neural networks, a data mining approach based on ANFIS is applied to all features of the Abalone and Monks problem datasets. The main aim of this research is to reduce the RMSE with fewer rules in order to achieve high speed and less time consumed in both the learning and application phases. To calculate an effective RMSE, an adaptive fuzzy inference system with subtractive clustering is proposed. The input space is effectively partitioned and loaded into the ANFIS editor. A fuzzy inference system is generated using subtractive clustering, and the training and testing RMSE are calculated by a hybrid approach that combines back propagation and the least squares method. A structure is generated that shows the input and output data along with the number of fuzzy rules. The resulting lower RMSE with fewer rules shows that ANFIS is well suited for age prediction of abalone and for performance comparison of learning algorithms.
Keywords: Abalone; ANFIS; Fuzzy Rule; Monks Problem; Root Mean Square Error; Subtractive Clustering.
Abbreviations: Adaptive Network based Fuzzy Inference System (ANFIS); Root Mean Square Error (RMSE).

I. INTRODUCTION
The architecture and learning procedure underlying ANFIS are presented; ANFIS is a fuzzy inference system implemented in the framework of adaptive networks. The proposed ANFIS can construct an input-output mapping based on both human knowledge (in the form of fuzzy if-then rules) and stipulated input-output data pairs by using a hybrid approach. ANFIS offers a good tradeoff between neural and fuzzy systems, providing both smoothness and adaptability. The objective of this research is to reduce the RMSE with fewer rules in order to achieve high speed and less time consumed in both the learning and application phases. Neural networks and fuzzy systems are both stand-alone systems, and ANFIS is one of the neuro-fuzzy models. As the complexity of the process being modeled increases, so does the difficulty of developing dependable fuzzy rules and membership functions. This has led to the development of another approach, commonly known as the ANFIS approach. A hybrid system named ANFIS was proposed by Jang (1993). It has the benefits of both fuzzy logic [Junhong Nie & Derek Linkens, 1998] and neural networks [James A. Anderson, 2002]. Fuzzy inference in this system is realized with the aid of a training algorithm, which makes it possible to tune the parameters of the fuzzy system. In this paper, the training and testing data of the abalone [archive.ics.uci.edu/ml/datasets.html] and Monks problem [archive.ics.uci.edu/ml/datasets.html] datasets are first partitioned. They are then loaded into the ANFIS editor. After loading the training and testing data, a fuzzy inference system is generated using subtractive clustering. The RMSE is calculated by training the network using a hybrid approach that combines back propagation and the least squares method. The RMSE of training against testing is then noted. Finally, a structure is obtained that shows the number of inputs and outputs along with the number of fuzzy rules. The work is developed in the MATLAB V7.9.0.529 (R2009b) [MATLAB] environment.
II. RELATED WORK
Ravi Jain & Ajith Abraham (2003) examined the
performance of four fuzzy rule generation methods that could
generate fuzzy if-then rules directly from training patterns
with no time consuming tuning procedures.
Shibendu Shekhar Roy (2005) proposed ANFIS for predicting the surface roughness in a turning operation for a given set of cutting parameters. Two different membership functions, triangular and bell shaped, were adopted during the training process of ANFIS in order to compare the prediction accuracy of surface roughness obtained with each.
Sean N. Ghazavi & Thunshun W. Liao (2008) proposed a study of medical data mining that involves the use of eleven feature selection methods and three fuzzy modeling methods; the objective is to determine which combination of feature selection and fuzzy modeling method has the best performance for a given dataset. Essam Al-Daoud (2010) showed that only three rules are needed to obtain a classification rate of 97% when using a modified fuzzy c-means radial basis function network.
Pejman Tahmasebi & Ardeshir Hezarkhani (2010) proposed the adaptive neuro-fuzzy inference system as a newly applied technique for copper grade estimation in the Sarcheshmeh porphyry copper system. Fuzzy logic was introduced by Zadeh (2010) for handling uncertain and imprecise knowledge in real world applications; it has proved to be a powerful tool for decision-making and for handling and manipulating imprecise and noisy data. Mehdi Khashei et al., (2011) proposed a hybrid approach that combines artificial neural networks with fuzzy logic in order to benefit from the unique advantages of fuzzy logic and the classification power of artificial neural networks, constructing an efficient and accurate hybrid classifier for situations with little available data. Jesmin Nahar et al., (2012) presented a rule extraction experiment on heart disease data using different rule mining algorithms (Apriori, Predictive Apriori and Tertius). Further rule-mining-based analysis was undertaken by categorising data based on gender and significant risk factors for heart disease. Castanho et al., (2012) proposed a fuzzy expert system for predicting the pathological stage of prostate cancer, with the fuzzy rules and membership functions tuned by a genetic algorithm. As a result, the approach reached better precision than some related studies.
III. FUZZY NEURO SYSTEMS
This section describes the fuzzy inference system along with the ANFIS architecture.
3.1. Fuzzy Inference System
A fuzzy inference system is composed of five functional blocks: a rule base containing a number of fuzzy if-then rules; a database which defines the membership functions; a decision-making unit which performs the inference; a fuzzification interface which transforms the crisp inputs into degrees of match with linguistic values; and a defuzzification interface which transforms the fuzzy results of the inference into a crisp output.
The steps of inference operations upon fuzzy if-then rules (fuzzy reasoning) performed by fuzzy inference systems are:
1. Compare the input variables with the membership functions in the premise part to obtain the membership values (or compatibility measures) of each linguistic label (this step is known as fuzzification).
2. Combine the membership values in the premise part (through a specific T-norm operator, usually multiplication or min) to get the firing strength (weight) of each rule.
3. Generate the qualified consequent (either fuzzy or crisp) of each rule depending on its firing strength.
4. Aggregate the qualified consequents to produce a crisp output (this step is known as defuzzification).
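As an illustration only (not part of the original toolbox workflow), the following Python sketch runs these four steps for a hypothetical two-rule, first-order Sugeno system; the Gaussian membership parameters and rule coefficients are made-up values chosen simply to make the example runnable.

```python
import numpy as np

def gauss_mf(x, c, sigma):
    """Gaussian membership function used for fuzzification."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def sugeno_inference(x, y):
    # Step 1: fuzzification - membership values of the crisp inputs.
    a1, a2 = gauss_mf(x, c=2.0, sigma=1.0), gauss_mf(x, c=5.0, sigma=1.0)
    b1, b2 = gauss_mf(y, c=1.0, sigma=1.5), gauss_mf(y, c=4.0, sigma=1.5)

    # Step 2: firing strengths via a T-norm (product is used here).
    w1, w2 = a1 * b1, a2 * b2

    # Step 3: qualified (crisp) consequents of the first-order Sugeno rules,
    # z_i = p_i*x + q_i*y + r_i with illustrative coefficients.
    z1 = 0.5 * x + 0.3 * y + 1.0
    z2 = 1.2 * x - 0.4 * y + 0.5

    # Step 4: defuzzification by the weighted average of the consequents.
    return (w1 * z1 + w2 * z2) / (w1 + w2)

print(sugeno_inference(3.0, 2.0))
```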
3.2. ANFIS Architecture
To facilitate learning and adaptation, ANFIS is a Sugeno fuzzy model put in the framework of adaptive systems. The Sugeno fuzzy model was proposed by Takagi & Sugeno in an effort to formalize a systematic approach to generating fuzzy rules from an input-output data set.
There are broadly three types of fuzzy reasoning models: Mamdani fuzzy models, Sugeno fuzzy models (TSK models) and Tsukamoto fuzzy models. The Mamdani and Sugeno fuzzy models are the most widely used. The Sugeno model is preferred in ANFIS because its rules are tunable based on the input parameters.
A typical fuzzy rule in a Sugeno fuzzy model has the format
IF x is A and y is B THEN z = f(x, y),
where A and B are fuzzy sets in the antecedent and z = f(x, y) is a crisp function in the consequent. Usually f(x, y) is a polynomial in the input variables x and y, but it can be any other function that appropriately describes the output of the system within the fuzzy region specified by the antecedent of the rule. If f(x, y) is a first-order polynomial, the model is called a first-order Sugeno fuzzy model. If f is a constant, it is called a zero-order Sugeno fuzzy model, which can be viewed either as a special case of the Mamdani fuzzy inference system, where each rule's consequent is specified by a fuzzy singleton, or as a special case of Tsukamoto's fuzzy model, where each rule's consequent is specified by the membership function of a step function centered at the constant. Moreover, a zero-order Sugeno fuzzy model is functionally equivalent to a radial basis function network under certain minor constraints.

Figure 1: ANFIS Architecture
Layer 1: Each node in this layer generates the membership grade of a linguistic label. For instance, the node function of the i-th node may be a generalized bell membership function:

\mu_{A_i}(x) = \frac{1}{1 + \left| (x - c_i)/a_i \right|^{2 b_i}}

where {a_i, b_i, c_i} is the premise parameter set.
Layer 2: Each node in this layer is a fixed node that calculates the firing strength of a rule via multiplication:

w_i = \mu_{A_i}(x) \cdot \mu_{B_i}(y), \quad i = 1, 2
Layer 3: The nodes in this layer are fixed nodes labeled N, indicating that they play a normalization role for the firing strengths from the previous layer. The outputs of this layer can be represented as:

\bar{w}_i = \frac{w_i}{w_1 + w_2}, \quad i = 1, 2
Layer 4: The output of each node in this layer is simply the product of the normalized firing strength and a first-order polynomial (for a first-order Sugeno model). The nodes here are adaptive nodes. Thus, the outputs of this layer are given as:

O_{4,i} = \bar{w}_i f_i = \bar{w}_i (p_i x + q_i y + r_i)
Layer 5: There is only a single fixed node, labeled with a summation sign. This node performs the summation of all incoming signals:

O_5 = \text{overall output} = \sum_i \bar{w}_i f_i = \frac{\sum_i w_i f_i}{\sum_i w_i}
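To make the layer-by-layer computation concrete, here is a small numpy sketch of one forward pass through a two-input, two-rule ANFIS of the kind described above; the bell parameters (a_i, b_i, c_i) and consequent coefficients (p_i, q_i, r_i) are arbitrary illustrative values, not trained ones.

```python
import numpy as np

def bell_mf(x, a, b, c):
    """Layer 1: generalized bell membership function."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

def anfis_forward(x, y, premise, consequent):
    # Layer 1: membership grades for each linguistic label.
    mu_A = [bell_mf(x, *premise["A"][i]) for i in range(2)]
    mu_B = [bell_mf(y, *premise["B"][i]) for i in range(2)]

    # Layer 2: firing strengths w_i = mu_Ai(x) * mu_Bi(y).
    w = np.array([mu_A[i] * mu_B[i] for i in range(2)])

    # Layer 3: normalized firing strengths w_i / (w_1 + w_2).
    w_bar = w / w.sum()

    # Layer 4: rule outputs w_bar_i * (p_i*x + q_i*y + r_i).
    f = np.array([p * x + q * y + r for (p, q, r) in consequent])
    layer4 = w_bar * f

    # Layer 5: overall output is the sum of all incoming signals.
    return layer4.sum()

premise = {"A": [(1.0, 2.0, 2.0), (1.0, 2.0, 5.0)],   # (a_i, b_i, c_i), illustrative
           "B": [(1.5, 2.0, 1.0), (1.5, 2.0, 4.0)]}
consequent = [(0.5, 0.3, 1.0), (1.2, -0.4, 0.5)]       # (p_i, q_i, r_i), illustrative
print(anfis_forward(3.0, 2.0, premise, consequent))
```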
IV. METHODS
This section describes the ANFIS learning method and subtractive clustering.
4.1. ANFIS Learning Method
In neural networks, the back propagation algorithm is used to learn, that is, to adjust the weights on the connections between neurons from input-output training samples. The ANFIS learning algorithm consists of adjusting the premise and consequent parameters.
In the ANFIS structure, the parameters of the premises and consequents play the role of weights. Specifically, the shape of the membership functions in the IF part of the rules is determined by a finite number of parameters. These parameters are known as premise parameters, whereas the parameters in the THEN part of the rules are referred to as consequent parameters.
For ANFIS, a combination of back propagation and Least Square Estimation (LSE) is used. Back propagation is used to learn the premise parameters, and LSE is used to determine the parameters in the rules' consequents. A step in the learning procedure has two passes: a forward pass and a backward pass. In the forward pass, node outputs go forward; the premise parameters remain fixed while the consequent parameters {pi, qi, ri} are estimated by the least squares method. In the backward pass, the error signals are propagated backwards; the consequent parameters remain fixed while back propagation is used to modify the premise parameters {ai, bi, ci}. This combination of least-squares and back propagation methods is used to train the FIS membership function parameters to model a given set of input/output data.
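The forward-pass least-squares step can be sketched as follows: with the premise parameters (and hence the normalized firing strengths) held fixed, each training pair contributes one row of a linear system in the consequent parameters {p_i, q_i, r_i}, which is solved in closed form. This is a hedged reconstruction with synthetic data and a placeholder premise layer, not the toolbox implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data for a two-input, two-rule first-order Sugeno model.
X = rng.uniform(0, 5, size=(100, 2))
target = 1.5 * X[:, 0] - 0.7 * X[:, 1] + 2.0       # made-up target function

def normalized_firing_strengths(x, y):
    """Placeholder for Layers 1-3 with fixed (untrained) premise parameters."""
    w = np.array([np.exp(-((x - 2) ** 2 + (y - 1) ** 2)),
                  np.exp(-((x - 4) ** 2 + (y - 3) ** 2))]) + 1e-12
    return w / w.sum()

# Build the design matrix: columns [w1*x, w1*y, w1, w2*x, w2*y, w2].
rows = []
for x, y in X:
    wb = normalized_firing_strengths(x, y)
    rows.append([wb[0] * x, wb[0] * y, wb[0], wb[1] * x, wb[1] * y, wb[1]])
A = np.array(rows)

# Closed-form least-squares estimate of [p1, q1, r1, p2, q2, r2].
theta, *_ = np.linalg.lstsq(A, target, rcond=None)
print(theta)
```

In the backward pass the roles would be reversed: the fitted consequents are frozen and the premise parameters are nudged by gradient descent on the squared error.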
The performance of this system is evaluated using the RMSE (root mean square error, the difference between the FIS output and the training/testing data output), which is defined as

\text{RMSE} = \sqrt{\frac{1}{n} \sum_{k=1}^{n} (y_k - o_k)^2}

where y_k is the desired output and o_k is the FIS output for the k-th sample.
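As a quick illustration, the same quantity can be computed in numpy:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between target outputs and FIS outputs."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

print(rmse([9, 10, 11], [8.8, 10.3, 10.9]))  # example values only
```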
4.2. Subtractive Clustering
Subtractive clustering is used when there is no clear idea of how many clusters there should be for a given set of data. It operates by finding the optimal data point to define as a cluster center, based on the density of the surrounding data points. Once a cluster center has been selected, all data points within the radius distance of that center are removed before the next data cluster and its center are determined. This process is repeated until all of the data is within the radius distance of a cluster center. The method is used for rule generation when the number of inputs is large, and it yields an optimized set of rules based on the radii specified.
It is a fast, one-pass algorithm for estimating the number of clusters and the cluster centers in a set of data, and it generates the FIS structure by scatter partitioning. Here the rules are predetermined by fixing the number of centers, and the membership functions are assigned automatically by the software. Therefore, the number of tuning parameters is reduced in this case by reducing the number of rules.
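A minimal sketch of the idea, loosely following Chiu's subtractive clustering with an assumed radius and a simplified stopping rule (a fixed number of centers) instead of the toolbox's exact acceptance criteria:

```python
import numpy as np

def subtractive_clustering(data, radius=0.5, n_clusters=3):
    """Pick cluster centers by density potential; simplified illustration."""
    alpha = 4.0 / radius ** 2          # density falls off with this factor
    beta = 4.0 / (1.5 * radius) ** 2   # revision radius is commonly ~1.5x larger
    # Potential of each point: sum of Gaussian-weighted distances to all points.
    d2 = np.sum((data[:, None, :] - data[None, :, :]) ** 2, axis=-1)
    potential = np.exp(-alpha * d2).sum(axis=1)

    centers = []
    for _ in range(n_clusters):
        k = int(np.argmax(potential))
        centers.append(data[k])
        # Subtract the chosen center's influence from every point's potential.
        potential -= potential[k] * np.exp(-beta * np.sum((data - data[k]) ** 2, axis=1))
    return np.array(centers)

rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(loc, 0.1, size=(50, 2)) for loc in ([0, 0], [1, 1], [0, 1])])
print(subtractive_clustering(pts, radius=0.5, n_clusters=3))
```

Each selected center then seeds one fuzzy rule, which is why larger radii produce fewer clusters and hence fewer rules.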
V. RESULTS
5.1. Abalone Dataset
The Abalone dataset contains 4177 entries, in which each entry records the features of an abalone together with its age as the desired output. It contains 8 features describing an abalone's physical measurements, with no missing data, and 28 classes corresponding to ages from 1 to 29 years. The age of an abalone is determined by cutting the shell through the cone, staining it and counting the number of rings through a microscope, a tedious and time-consuming task. Other measurements, which are easier to obtain, are used to predict the age.
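As a hedged illustration of preparing such data outside the ANFIS editor, the snippet below loads and splits the dataset; the file name abalone.data and the column order follow the standard UCI distribution and are assumptions to verify against the actual download.

```python
import numpy as np
import pandas as pd

# Assumed UCI Abalone layout: Sex, Length, Diameter, Height, WholeWeight,
# ShuckedWeight, VisceraWeight, ShellWeight, Rings (ring count relates to age).
cols = ["Sex", "Length", "Diameter", "Height", "WholeWeight",
        "ShuckedWeight", "VisceraWeight", "ShellWeight", "Rings"]
df = pd.read_csv("abalone.data", header=None, names=cols)

# Encode the categorical Sex attribute and split features / target.
X = pd.get_dummies(df.drop(columns="Rings"), columns=["Sex"]).to_numpy(dtype=float)
y = df["Rings"].to_numpy(dtype=float)

# Simple train/test partition, e.g. 500 training and 500 testing samples.
rng = np.random.default_rng(42)
idx = rng.permutation(len(X))
train_idx, test_idx = idx[:500], idx[500:1000]
X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]
print(X_train.shape, X_test.shape)
```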






Table 1: Results on Abalone Dataset

Train Data | Test Data | Epoch | No. of Rules | No. of Linear Parameters | No. of Non-Linear Parameters | No. of Nodes | RMSE Train | RMSE Test | RMSE Train Against Test
500 | 500 | 50 | 3 | 24 | 42 | 58 | 8.3×10^-5 | 8.3×10^-5 | 8.3×10^-5
500 | 1000 | 50 | 3 | 24 | 42 | 58 | 8.3×10^-5 | 8.3×10^-5 | 1.7531
56 | 500 | 50 | 9 | 72 | 126 | 154 | 1.7×10^-5 | 1.7×10^-5 | 1.4342
532 | 1000 | 50 | 2 | 16 | 28 | 42 | 0.22518 | 0.22518 | 1.2672
155 | 1000 | 50 | 3 | 24 | 42 | 58 | 0.71564 | 0.71564 | 1.021
374 | 500 | 50 | 3 | 24 | 42 | 58 | 9.1×10^-5 | 9.1×10^-5 | 1.5025
169 | 500 | 50 | 3 | 24 | 42 | 58 | 9.7×10^-5 | 9.7×10^-5 | 0.00014656

5.2. Monks Problem Dataset
The Monks problem dataset is a multivariate dataset. It contains 432 instances and 7 attributes, with no missing values. The Monks problem was the basis of a first international comparison of learning algorithms. There are three Monk's problems; the domains for all of them are the same, and one of the Monk's problems has noise added. For each problem, the domain has been partitioned into a train and test set.
Table 2: Results on Monks Problem Dataset

Train Data | Test Data | Epoch | No. of Rules | No. of Linear Parameters | No. of Non-Linear Parameters | No. of Nodes | RMSE Train | RMSE Test | RMSE Train Against Test
25 | 50 | 50 | 11 | 77 | 132 | 163 | 1.8×10^-6 | 1.8×10^-6 | 1.3701
25 | 200 | 50 | 11 | 77 | 132 | 163 | 1.8×10^-6 | 1.8×10^-6 | 1.0532
50 | 150 | 50 | 32 | 224 | 384 | 457 | 0.35799 | 0.35799 | 1.0691
125 | 200 | 50 | 42 | 294 | 504 | 597 | 0.39216 | 0.39216 | 2.2923
25 | 257 | 50 | 11 | 77 | 132 | 163 | 1.8×10^-6 | 1.8×10^-6 | 0.9509
125 | 257 | 50 | 42 | 294 | 504 | 597 | 0.39216 | 0.39216 | 1.8571
25 | 125 | 50 | 11 | 77 | 132 | 163 | 1.8×10^-6 | 1.8×10^-6 | 1.2351

The training and testing data are partitioned, and the number of epochs is kept fixed. The nonlinear (premise) parameters are held fixed during the forward pass and updated by steepest descent during the backward pass; the linear (consequent) parameters are estimated by least squares during the forward pass and held fixed during the backward pass. The lower RMSE together with the smaller number of rules on training and testing of the abalone and Monks problem datasets shows that ANFIS is well suited for age prediction of abalone and for performance comparison of various learning algorithms.
VI. CONCLUSION
An adaptive fuzzy inference system with neural learning using subtractive clustering is proposed for the calculation of RMSE. Effective partitioning of the input space together with subtractive clustering decreases the number of rules and increases the speed of both the learning and application phases. The system also provides smoothness due to fuzzy control interpolation and adaptability due to neural network back propagation. A lower RMSE and a smaller number of rules result in less time consumed and better performance evaluation, along with higher speed in the learning and application phases of ANFIS. This shows that ANFIS is well suited for age prediction and for performance comparison of learning algorithms. In future work, the datasets can be preprocessed with various techniques before being applied to the ANFIS.
REFERENCES
[1] J-S.R. Jang (1993), "ANFIS: Adaptive Network based Fuzzy Inference System", IEEE Transactions on Systems, Man and Cybernetics, Vol. 23, No. 3, Pp. 665-685.
[2] Junhong Nie & Derek Linkens (1998), "Fuzzy-Neural Control", Prentice-Hall of India.
[3] James A. Anderson (2002), "An Introduction to Neural Networks", Prentice Hall of India.
[4] MATLAB V7.9.0.529, R2009b, Neuro-Fuzzy Computing based on Fuzzy Logic Toolbox, MATLAB Works.
[5] Ravi Jain & Ajith Abraham (2003), "A Comparative Study of Fuzzy Classification Methods on Breast Cancer Data", 7th International Work Conference on Artificial and Natural Neural Networks (IWANN'03), Spain.
[6] Shibendu Shekhar Roy (2005), "Design of Adaptive Neuro Fuzzy Inference System for Predicting Surface Roughness in Turning Operation", Journal of Scientific and Industrial Research, Vol. 64, Pp. 653-659.
[7] Sean N. Ghazavi & Thunshun W. Liao (2008), "Medical Data Mining by Fuzzy Modeling with Selected Features", Elsevier, Vol. 43, Pp. 195-206.
[8] Essam Al-Daoud (2010), "Cancer Diagnosis using Modified Fuzzy Network", Universal Journal of Computer Science and Engineering Technology, Vol. 2, Pp. 73-78.
[9] Pejman Tahmasebi & Ardeshir Hezarkhani (2010), "Application of Adaptive Neuro-Fuzzy Inference System for Grade Estimation", Australian Journal of Basic and Applied Sciences, Vol. 4, Pp. 408-420.
[10] L.A. Zadeh (2010), "Fuzzy Logic", IEEE Computer, Vol. 21, Pp. 83-93.
[11] Mehdi Khashei, Ali Zeinal Hamadani & Mehdi Bijari (2011), "A Fuzzy Intelligent Approach to the Classification Problem in Gene Expression Data Analysis", Elsevier, Vol. 27, Pp. 465-474.
[12] Jesmin Nahar, Tasadduq Imama, Kevin S. Tickle & Yi-Ping Phoebe Chen (2012), "Association Rule Mining to Detect Factors which Contribute to Heart Disease in Males and Females", Elsevier, Vol. 40, Pp. 1086-1093.
[13] M.J.P. Castanho, F. Hernandes, A.M. De Re, S. Rautenberg & A. Billis (2012), "Fuzzy Expert System for Predicting Pathological Stage of Prostate Cancer", Elsevier, Vol. 40, Pp. 466-470.
[14] Abalone Dataset - archive.ics.uci.edu/ml/datasets.html
[15] Monks Problem Dataset - archive.ics.uci.edu/ml/datasets.html
Manisha Pariyani was born in Bhopal in 1989. She received the B.E. degree (with distinction) in computer science and engineering from Sagar Institute of Technology Research and Science, RGPV, Bhopal in 2011. She is currently pursuing the M.Tech degree in computer science and engineering at Oriental College of Technology, RGPV, Bhopal. She has attended national conferences held at various institutes and presented papers in different research areas. Her research interests include neural networks, data mining, network intrusion detection and artificial intelligence.
Dr. Kavita Burse was born in Bhopal in 1970. She received the Ph.D. and M.Tech degrees in electronics and communication from MANIT, Bhopal and the B.E. degree from SGSITS, Indore. She has 18 years of teaching and industrial experience. Presently she is Director of Oriental College of Technology, Bhopal. She has 37 national and international publications to her credit. She is a reviewer for IEEE, Elsevier and other prestigious journals. She is a member of CSI, IETE and ISTE. Her areas of interest are E-Learning, Digital Signal Processing, Digital Communication and Neural Networks.
