Neurocomputing
journal homepage: www.elsevier.com/locate/neucom
a Department of Information and Computer Science, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia
b Department of Computer Science and Software Engineering, The University of Melbourne, Victoria 3010, Australia
c Gippsland School of IT, Monash University, Churchill, VIC 3842, Australia
Article info

Article history: Received 24 March 2012; Received in revised form 12 July 2012; Accepted 12 September 2012; Available online 6 December 2012. Communicated by P. Zhang.

Abstract
In this paper, we propose a new type of adaptive fuzzy inference system aimed at achieving improved performance in forecasting nonlinear time series data by dynamically adapting the fuzzy rules as new data arrive. The structure of the fuzzy model used in the proposed system is developed from the log-likelihood value of each data vector generated by a trained Hidden Markov Model. As part of its adaptation process, our system checks and recomputes the parameter values and generates new fuzzy rules as required, in response to new observations, to obtain better performance. In addition, it can identify the most appropriate fuzzy rule in the system that covers the new data, and thus needs to adapt the parameters of that rule only, while keeping the rest of the model unchanged. This intelligent adaptive behavior enables our adaptive fuzzy inference system (FIS) to outperform standard FISs. We evaluate the performance of the proposed approach by forecasting stock price indices. The experimental results demonstrate that our approach can predict a number of stock indices, e.g., the Dow Jones Industrial (DJI) index, the NASDAQ index, the Standard and Poor's 500 (S&P 500) index and several other indices from the UK (FTSE100), Germany (DAX), Australia (AORD) and Japan (NIKKEI) stock markets, accurately compared with other existing computational and statistical methods.
© 2012 Elsevier B.V. All rights reserved.
Keywords:
Fuzzy system
Hidden Markov Model (HMM)
Stock market forecasting
Log-likelihood value
1. Introduction
Adaptive online systems have great appeal in domains where events change dynamically. Typical examples include financial, manufacturing and control engineering applications. A system is termed adaptive if it can evolve according to changes in the characteristics of the problem. For instance, to model a chaotic time series whose values change randomly, the system should continuously update its knowledge and adapt itself. The aim of such a system is to improve performance through enhanced modelling of the changes in behavior. Different application areas of engineering, computer science and financial forecasting and analysis can benefit from such adaptive systems.
An adaptive online learning system should possess the following criteria to be efficient and effective:
1. It should be able to capture the characteristics of new information as it becomes available;
the structure of the fuzzy model once the fuzzy model has been built. The Evolving Fuzzy Neural Network (EFuNN) is another system, introduced in [9,10], which uses the evolving connectionist systems (ECOS) architecture to make the system evolve. In the dynamic version of EFuNN [11], the parameters are self-optimized. In EFuNN, a new rule is generated if the distance between the new data vector and the cluster centre of each existing rule is greater than a predefined cluster radius R. Hence, the performance of the model depends on the optimal choice of R. Furthermore, the distance function between two fuzzy membership vectors works well for discretized data values but is not suitable for real continuous numbers. To adjust the rule parameters, a feedback algorithm is used, which requires storage to keep the desired outputs.
Recently, the Dynamic Evolving Neuro-Fuzzy Inference System (DENFIS) [12] has become popular, due to its adaptive and online learning nature. DENFIS is quite similar to EFuNN, except that in DENFIS, the position of the input vector in the input space is identified online and the output is dynamically computed, based on the set of fuzzy rules created during the earlier learning process. Rules are created and updated by partitioning the input space using an online evolving clustering method (ECM). In ECM, the distance between a data point and a cluster center is compared with a predefined threshold Dthr, which is then used to generate clusters and corresponding fuzzy rules. The threshold Dthr, which is effectively the radius of a cluster, must be statically defined and can affect the performance of the obtained model. DENFIS uses Euclidean distance [13] to measure the difference between two input vectors. However, Euclidean distance is not a suitable method to differentiate time series data patterns containing a linear drift [14]. For example, the two time series data vectors D1: ⟨0 1 2 3 4 5 6 7 8⟩ and D2: ⟨5 6 7 8 9 10 11 12 13⟩ (shown in bold in Fig. 1) have similar trends, although they are dissimilar in terms of Euclidean distance. For a time series application, since these two data vectors exhibit a similar pattern, they should belong to the same rule. Consequently, the performance of DENFIS usually degrades as more new data are adapted when it is applied to forecasting non-linear time series data.
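The D1/D2 example can be checked directly: the two vectors are far apart in Euclidean distance, yet their trends (first differences) are identical.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length sequences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def first_diff(s):
    """First differences of a sequence; equal differences mean equal trend."""
    return [b - a for a, b in zip(s, s[1:])]

D1 = [0, 1, 2, 3, 4, 5, 6, 7, 8]
D2 = [5, 6, 7, 8, 9, 10, 11, 12, 13]

print(euclidean(D1, D2))               # 15.0, as in Fig. 1
print(first_diff(D1) == first_diff(D2))  # True: identical trend
```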
Another approach proposed in the literature for realizing adaptive fuzzy inference systems is to leverage evolutionary approaches, such as Genetic Algorithms (GA). In [15], a GA-based approach for adapting and evolving fuzzy rules was proposed to achieve automated negotiation among relax-criteria negotiation agents in e-markets. An evolutionary approach for automatic generation of a FIS was proposed in [16], where the structure and parameters of the FIS are generated through reinforcement learning
and the fuzzy rules are evolved via GA. In [17], a method for generating a Mamdani FIS was introduced, where the fuzzy model parameters are optimized by applying GA. Although GA is quite popular for developing evolving fuzzy systems, its inherent computational and time complexity makes this approach inapplicable to forecasting ever-changing non-linear chaotic time series data.
A Hidden Markov Model (HMM) can be applied to find similarities in the patterns of time series data [18–20]. In [21,22,19,23], the HMM-Fuzzy model was proposed by exploiting the ability of HMM to capture pattern similarities, as well as the ease with which a fuzzy approach deals with adaptive systems. The HMM-Fuzzy model is an offline data-driven fuzzy rule generation technique in which the HMM's data pattern identification method is used to rank the data vectors before fuzzy rules are generated. The reason for using a HMM is that it models a system such that higher probability is assigned to the data vectors that represent the system than to the data vectors that represent the minority scenario of the system. Though these models have shown promising results, their performance in forecasting time series data is still inadequate and they are designed for offline learning only. To improve performance, a model needs to learn online, where new and recent data trends can be captured, making the model continuously adaptive.
In this paper, we propose a model called the Adaptive Fuzzy Inference System (AFIS), which consists of two phases. First, an initial fuzzy model is generated using a small number of training data vectors. To generate the initial fuzzy model, a HMM is trained and used to compute log-likelihood values for each of the data vectors. These log-likelihood values are then used to rank and group the data vectors to generate appropriate fuzzy rules, as described in Section 3.1.2. Second, the fuzzy model conforms to the arrival of new data, making it a continuously adaptive online system. On observing new data, either the fuzzy rule that satisfies the data is identified using the HMM and then adapted to the new data, or a new fuzzy rule is generated.
The proposed AFIS differs from the models in our previous studies in a number of ways. First, AFIS is an online learning system while the others learn only offline. Second, AFIS is an adaptive model. In previous studies, once a model is built from the available data it remains unchanged, while in AFIS, intelligent online learning is used to adapt the initial model as new data arrive. In the latter case, the currently defined rule is fine-tuned to fit the new data and, if necessary, a new rule is generated. Third, in AFIS, the training dataset does not have to be large, and the model need not be trained with data having the characteristics of the unknown test data; rather, it can be trained incrementally as new data become available. All these features make AFIS very suitable for forecasting time series data, and it outperforms other existing methods in the literature, including our previous models, as demonstrated in Section 5.
The remainder of the paper is organized as follows. In Section 2, we briefly discuss the fundamental concepts of HMMs. We describe the proposed approach in detail in Section 3. Section 4 presents the design of our experimental investigation. We present and discuss the results in Section 5. Lastly, in Section 6, we suggest future improvements and conclude the paper. The notations listed in Table 1 are used in describing the algorithms in the remaining part of the paper.
2. Preliminaries
Fig. 1. Two similar data patterns with different Euclidean distance (ED). Here ED
between D1 and D2 is 15.
where b_{S_j}(c) represents the emission probability of an observation symbol c in state S_j.
5. The initial state distribution vector π = {π_i}, where π_i = Pr(q_0 = S_i), 1 ≤ i ≤ N.

Table 1
List of notations.

Notation | Description
N | Number of hidden states S_1, ..., S_N
M | Number of distinct observation symbols
x⃗ | Input data vector ⟨x_1, x_2, ..., x_k⟩, x_i ∈ O (observation sequence)
A | State transition probability matrix {a_ij}, 1 ≤ i and j ≤ N
S_i | The i-th hidden state
Q | State sequence q_1, q_2, ..., q_k
q_0 | Initial state
a_ij | Transition probability from state S_i to state S_j
B | Emission probability matrix {b_{S_j}(c_k)}, 1 ≤ j ≤ N and 1 ≤ k ≤ M
b_{S_j}(c_k) | Emission probability of symbol c_k in state S_j
π | Initial state distribution {π_i}, π_i = Pr(q_0 = S_i), 1 ≤ i ≤ N
λ | The HMM, λ = (A, B, π)
R_l | The l-th fuzzy rule
M_ij | The j-th membership function of rule i
ω_j | Firing strength of rule j
E(x⃗_j) | Prediction error for data vector x⃗_j
E_mse | Mean squared error over the training dataset
F_ij, σ_ij | Center and width of a Gaussian membership function
x⃗_cont | Continuous-valued data vector
ll_i | Log-likelihood value of data vector i
k | Length of a data vector (window size)
b⃗ | Coefficient vector of a rule consequent
D | Training dataset
The values of Pr(x⃗ | Q, λ) and Pr(Q | λ) are calculated using the following equations [20]:

Pr(x⃗ | Q, λ) = ∏_{i=1}^{k} Pr(x_i | q_i, λ) = b_{q_1}(x_1) b_{q_2}(x_2) ... b_{q_k}(x_k),

Pr(Q | λ) = π_{q_1} a_{q_1,q_2} a_{q_2,q_3} ... a_{q_{k-1},q_k},

where π_{q_1} is the prior probability of the first state and a_{q_i,q_j} is the transition probability from state q_i to state q_j.
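For a toy discrete HMM, these two probabilities can be sketched as follows; the transition and emission values are illustrative only, not taken from the paper.

```python
from math import prod

# Toy 2-state, 2-symbol HMM (illustrative values).
pi = [0.6, 0.4]                # initial state distribution pi_i
A = [[0.7, 0.3], [0.4, 0.6]]   # A[i][j] = Pr(q_{t+1} = S_j | q_t = S_i)
B = [[0.5, 0.5], [0.1, 0.9]]   # B[j][c] = Pr(symbol c | state S_j)

def pr_obs_given_path(obs, path):
    """Pr(x | Q, lambda) = b_{q1}(x1) * b_{q2}(x2) * ... * b_{qk}(xk)."""
    return prod(B[q][o] for q, o in zip(path, obs))

def pr_path(path):
    """Pr(Q | lambda) = pi_{q1} * a_{q1,q2} * ... * a_{q(k-1),qk}."""
    return pi[path[0]] * prod(A[a][b] for a, b in zip(path, path[1:]))
```

Multiplying the two values gives the joint probability Pr(x⃗, Q | λ) of an observation sequence and a particular state path.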
So far we have described a HMM that deals with a sequence of discrete symbols. Most real-world problems, however, are continuous (e.g., speech signal recognition, human movement recognition and stock index prediction), and hence a HMM able to deal with continuous datasets is required. This can be achieved through a slight modification of the discrete HMM. The following section reviews how a HMM can be used for continuous data.
2.2. HMM for continuous data
There are a number of ways to make a HMM deal with continuous data. First, the continuous dataset can be converted into a number of discrete sets by adopting a quantization technique. In fact, a number of studies, especially those dealing with continuous speech data [25], first translate the continuous features into a set of discrete symbols. Another approach is to map the discrete output
Fig. 2. Step-by-step example of the proposed model: (1) Convert the univariate time series data into data vectors (windows); (2) Feed the data vectors into a HMM; (3) Train the HMM using the expectation maximization algorithm; (4) Calculate the log-likelihood value for each of the training data vectors and rank them; (5) Group the data vectors based on the ranking; (6) Generate a set of fuzzy rules (considered as the fuzzy system) using the data vector groups; (7) Adapt the generated fuzzy system whenever a new data vector arrives; (8) Feed the new data vector into the trained HMM; (9) Compute the log-likelihood value l_new for the new data vector; (10) If l_new is not within the range of minimum and maximum log-likelihood values (i.e., ranking scores) of the fuzzy system, create a new fuzzy rule; (11) Otherwise, identify the rule where the new data vector fits in and (12) Modify the selected fuzzy rule.
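Steps (9)–(12) of the adaptation loop above can be sketched as follows; the dictionary-based rule representation and the nearest-range tie-break are hypothetical stand-ins for the structures defined in Section 3.

```python
def adapt(rules, ll_new, x_new, y_new):
    """Sketch of steps (9)-(12) of Fig. 2.
    Each rule is a dict holding the log-likelihood range of the data it
    covers ('ll_min', 'll_max') and the covered data itself ('data')."""
    lo = min(r['ll_min'] for r in rules)
    hi = max(r['ll_max'] for r in rules)
    if not (lo <= ll_new <= hi):
        # Step (10): ll_new falls outside the ranking range -> new rule.
        rules.append({'ll_min': ll_new, 'll_max': ll_new,
                      'data': [(x_new, y_new)]})
        return 'new rule'
    # Step (11): pick the rule whose range covers ll_new
    # (distance 0 if covered; nearest range otherwise, a sketch choice).
    rule = min(rules, key=lambda r: max(r['ll_min'] - ll_new,
                                        ll_new - r['ll_max'], 0.0))
    rule['data'].append((x_new, y_new))  # Step (12): adapt the selected rule
    return 'updated'
```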
[Figure: the univariate time series x_1, ..., x_T is segmented into m windows (data vectors); the desired output of vector i is x_{W_T + i}, the value immediately following its window, up to the last desired output x_T.]
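The windowing of the time series into training vectors, step (1) of Fig. 2, can be sketched as:

```python
def make_vectors(series, w):
    """Segment a univariate series into overlapping windows of length w;
    the desired output of each window is the value that follows it."""
    return [(series[i:i + w], series[i + w]) for i in range(len(series) - w)]

# Example: a 5-point series with window length 3 yields 2 training pairs.
print(make_vectors([1, 2, 3, 4, 5], 3))  # [([1, 2, 3], 4), ([2, 3, 4], 5)]
```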
... and v_2 is M_j2 and ... and v_k is M_jk, with the overall output computed as the weighted average of the rule outputs:

ŷ_pred = ( Σ_{j=1}^{c} ω_j ŷ_j^pred ) / ( Σ_{j=1}^{c} ω_j ),  (9)

where ω_j = ∏_{i=1}^{k} M_ji (for a k-dimensional input data vector) and c is the total number of rules in the model.
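Assuming Gaussian membership functions with center F and width σ (consistent with the parameters updated in Eqs. (16) and (17)), this weighted-average inference can be sketched as:

```python
import math

def gauss(x, center, sigma):
    """Gaussian membership value of a scalar input."""
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def tsk_predict(rules, x):
    """Weighted average of rule outputs.
    Each rule is (centers, sigmas, consequent), where consequent(x) gives
    that rule's prediction; the firing strength w_j is the product of the
    k membership values."""
    num = den = 0.0
    for centers, sigmas, consequent in rules:
        w = math.prod(gauss(xi, c, s) for xi, c, s in zip(x, centers, sigmas))
        num += w * consequent(x)
        den += w
    return num / den
```

For an input halfway between two equally wide rules, the prediction is the midpoint of the two rule outputs, as expected from the weighted average.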
In AFIS, the least-squares estimation (LSE) [36,37] method is used to obtain the optimized parameter values of Eq. (8) in the consequent part of each fuzzy rule.
Let us assume that there are m data vectors for the j-th fuzzy rule. The coefficients b_i ∈ b⃗, 0 ≤ i ≤ k, of Eq. (8) are obtained by applying the LSE formula (Eq. (10)):

b⃗ = C X^T y⃗,  (10)
where

C = (X^T X)^{-1},

X = [ x_11 x_12 ... x_1k
      x_21 x_22 ... x_2k
      ...
      x_m1 x_m2 ... x_mk ],

and y⃗ = (y_1 y_2 ... y_m)^T.
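Eq. (10) is the standard normal-equations solution of a linear least-squares problem; a minimal numerical sketch (the data for the j-th rule are made up here, and the targets are noiseless so the coefficients are recovered exactly):

```python
import numpy as np

# Hypothetical data for one rule: m = 4 data vectors, k = 2 inputs each.
X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [3.0, 4.0],
              [4.0, 3.0]])
y = X @ np.array([2.0, -1.0])   # noiseless targets built from known coefficients

C = np.linalg.inv(X.T @ X)      # C = (X^T X)^(-1)
b = C @ X.T @ y                 # Eq. (10): b = C X^T y  -> approximately [2, -1]
```

In practice `np.linalg.lstsq` is preferred over forming the explicit inverse, for numerical conditioning; the explicit form above simply mirrors Eq. (10).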
E(x⃗_j) = ŷ_j − y_j,  (13)

where ŷ_j is the value predicted using the generated fuzzy rule set and y_j is the actual value for the j-th data vector x⃗_j.
The total mean squared error (MSE) E_mse for the training dataset (m total training data vectors/instances) is obtained by Eq. (14):

E_mse = ( Σ_{j=1}^{m} [E(x⃗_j)]^2 ) / m.  (14)
The prediction error E_mse is used to evaluate the performance of the developed model on the training dataset. If the error for the training dataset does not reduce further, the algorithm is terminated and no further rule is generated. Otherwise, the input training data is split into two groups with the help of the data vectors sorted according to their ranks. The splitting of the data is done by grouping the data vectors based on their ranks.
Initially, the split is done in such a way that the first group contains data vectors of comparatively higher rank than the data belonging to the other group. To achieve this, a parameter θ is introduced: the first θ% of the whole ranked dataset forms one group, and the remaining (100 − θ)% of the whole ranked dataset belongs to the other group. We create a new rule for each created partition; thus each split increases the number of rules by one. The prediction error E_mse for the training dataset is recalculated using the extracted rule set. At each step of increasing the number of rules, the convergence of the error is monitored. Rule generation stops when adding a rule does not yield further improvement in the prediction error E_mse.
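The split-and-test loop described above can be sketched as follows; `build_rules` and `mse` are hypothetical callbacks standing in for the rule-generation and error-evaluation procedures of Section 3, and repeatedly splitting the lowest-ranked remaining group is one possible reading of the scheme.

```python
def grow_rules(ranked_data, theta, build_rules, mse, tol=1e-6):
    """Keep splitting the ranked data (top theta% vs the rest) while the
    training error E_mse keeps improving; stop otherwise."""
    groups = [list(ranked_data)]
    best = mse(build_rules(groups))
    while True:
        head = groups[-1]                       # lowest-ranked remaining group
        cut = max(1, int(len(head) * theta / 100))
        if cut >= len(head):                    # nothing left to split
            break
        candidate = groups[:-1] + [head[:cut], head[cut:]]
        err = mse(build_rules(candidate))
        if err >= best - tol:                   # no further improvement: stop
            break
        groups, best = candidate, err           # accept split: one more rule
    return groups
```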
Here, x⃗_new is the data vector that corresponds to the output y_new. Based on the newly available data x⃗_new, the parameters F and σ of each membership function of the selected rule are recalculated using Eqs. (16) and (17):

F^new_ij = ( x_new,i + n_j F_ij ) / ( n_j + 1 ),  (16)

(σ^new_ij)^2 = ( n_j σ_ij^2 + ( x_new,i − F_ij )^2 ) / ( n_j + 1 ),  (17)
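Eqs. (16) and (17), as reconstructed here, amount to an incremental update of the center and spread of the data covered by a membership function, without revisiting the stored data:

```python
def update_membership(F, sigma2, n, x_new):
    """Incremental update of a Gaussian membership function's center F and
    variance sigma^2 after its (n+1)-th covered sample x_new
    (Eqs. (16)-(17), as reconstructed from the text)."""
    F_new = (x_new + n * F) / (n + 1)                        # Eq. (16)
    sigma2_new = (n * sigma2 + (x_new - F) ** 2) / (n + 1)   # Eq. (17)
    return F_new, sigma2_new, n + 1
```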
Stock index | From date | To date
DJI | 01/10/1928 | 24/08/2009
NASDAQ | 05/02/1971 | 24/08/2009
S&P 500 | 03/01/1950 | 24/08/2009
FTSE100 | 02/04/1984 | 24/08/2009
DAX | 26/11/1990 | 24/08/2009
AORD | 03/08/1984 | 24/08/2009
NIKKEI | 04/01/1984 | 24/08/2009
MAPE = (1/r) Σ_{i=1}^{r} ( |y_i − ŷ_i| / y_i ) × 100%,  (18)
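Both evaluation metrics can be sketched as below. The MAPE follows Eq. (18) directly; the normalization used for NRMSE (division by the range of the actual series) is one common convention and is an assumption here, since the paper's own NRMSE formula falls outside this excerpt.

```python
import math

def mape(actual, predicted):
    """Mean absolute percentage error, Eq. (18); assumes nonzero actuals."""
    r = len(actual)
    return sum(abs(y - p) / y for y, p in zip(actual, predicted)) / r * 100.0

def nrmse(actual, predicted):
    """RMSE normalized by the range of the actual series (assumed convention)."""
    mse = sum((y - p) ** 2 for y, p in zip(actual, predicted)) / len(actual)
    return math.sqrt(mse) / (max(actual) - min(actual))
```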
4.3.3. t-test
The t-test is a statistical hypothesis test in which the averages of two samples, the output predicted using AFIS and the output predicted using another fuzzy approach, are tested against the null hypothesis H_0. Let the two averages be ŷ_AFIS and ŷ_# respectively. The null hypothesis is defined as

H_0 : ŷ_AFIS = ŷ_#,  (20)
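The paired t statistic itself is straightforward to compute from the per-instance differences of the two prediction series (a library routine such as `scipy.stats.ttest_rel` also returns the p-value directly):

```python
import math
import statistics

def paired_t(a, b):
    """Paired t statistic for H0: mean(a) == mean(b).
    a and b are the two prediction series, aligned per test instance."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    # t = mean(d) / (s_d / sqrt(n)), with s_d the sample std. dev. of d
    return statistics.mean(d) / (statistics.stdev(d) / math.sqrt(n))
```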
[Figs. 4 and 5: MAPE (Fig. 4) and NRMSE (Fig. 5) of AFIS, HMM-Fuzzy, DENFIS and Chiu's model plotted against the length of the training data, with one panel per stock index.]
Table 4
Comparison of the p-values of the t-test for all datasets (AFIS vs. HMM-Fuzzy, Chiu's model and DENFIS, for DJI, NASDAQ, S&P500, FTSE100, DAX, AORD and NIKKEI). Every comparison is reported as significantly different, with all p-values below the 0.05 significance level, ranging from 0.049 down to 5.289 × 10⁻⁹⁰.
Fig. 6. Performance comparison among HMM-Fuzzy, AFIS, randomly partitioned fuzzy rule generation (RPFRG) and the adaptive fuzzy system followed by randomly partitioned fuzzy rule generation (ARPFRG) for the DJI stock index. (a) Performance metric: NRMSE; (b) performance metric: MAPE.
length of the training data. For example, for the NIKKEI series, all the fuzzy models produced a minimum consistent MAPE and NRMSE starting from a training data length of 200 (as seen in Figs. 4(g) and 5(g)). For this stock, AFIS produced even better performance starting from a training data length of 60 onwards.
To further analyze the results, we conducted a paired t-test at the 5% significance level (i.e., 95% confidence level) between AFIS and the other considered techniques. As shown in Table 4, the computed p-values between the values predicted using AFIS and those predicted using HMM-Fuzzy, Chiu's subtractive clustering based fuzzy model and DENFIS are much less than 0.05. The fact that the performance of AFIS is far better than that of the other fuzzy systems (Figs. 4 and 5), along with the small p-values (i.e., p-value < 0.05), statistically signifies that AFIS is capable of forecasting time series data significantly better than HMM-Fuzzy, Chiu's fuzzy model and DENFIS for the stock data considered in our experiment.
To analyze what makes AFIS such an efficient forecasting approach, we first generated fuzzy rules using a scheme that randomly partitions the training data. The generated rules are also adapted as soon as new data arrive, following a random process as stated in Section 4. Fig. 6 provides the performance results of Randomly Partitioned Fuzzy Rule Generation (RPFRG) and the Adaptive Fuzzy System followed by RPFRG (ARPFRG), along with HMM-Fuzzy and AFIS, for the DJI stock index. Fig. 6 shows that AFIS is clearly able to model the behavior of the stock series. For example, the MAPE of AFIS is 1.93 for a training data length of 700, whereas for the same training data ARPFRG attains a MAPE value of nearly 430 000 (see Fig. 6b). It is worth mentioning here that, due to the randomness introduced in generating the fuzzy rules and in identifying the rule that needs to be adapted, the MAPE values for both RPFRG and ARPFRG are much higher than those of AFIS. Second, we generated a fuzzy model using the k-means clustering algorithm and its adaptive version. In the adaptive k-means fuzzy model, with the arrival of new data vectors, the initial fuzzy model generated using the k-means algorithm is adapted by coupling the intelligent dynamic adaptive
Table 5
Performance comparison among AFIS, the fuzzy model generated using k-means and its adaptive version (trained for each new data; the first 1000 data instances used for training and the remaining data for testing).

Stock name | AFIS NRMSE | AFIS MAPE | k-means NRMSE | k-means MAPE | Adaptive k-means NRMSE | Adaptive k-means MAPE | # of fuzzy rules
DJI | 0.0087 | 1.5216 | 3.402 | 42.2622 | 0.013 | 2.4216 | 3
NASDAQ | 0.0170 | 2.2276 | 0.3446 | 60.773 | 0.0216 | 3.4422 | 3
S&P500 | 0.0102 | 1.6291 | 0.4287 | 46.623 | 0.011 | 1.704 | 5
FTSE100 | 0.0396 | 1.7005 | 0.04 | 1.7003 | 0.04 | 1.6976 | 4
DAX | 0.0735 | 3.6791 | 0.3402 | 42.0948 | 0.0121 | 2.2278 | 2
AORD | 0.0251 | 1.5668 | 0.3446 | 60.773 | 0.0216 | 3.4422 | 3
NIKKEI | 0.0259 | 2.4377 | 0.0339 | 2.4259 | 0.0339 | 2.426 | 3
Table 6
Performance comparison among AFIS, ARIMA and Artificial Neural Network (trained for each new data; the first 1000 data instances used for training and the remaining data for testing).

Stock name | AFIS NRMSE | AFIS MAPE | ARIMA NRMSE | ARIMA MAPE | ARIMA (p,d,q) | ANN NRMSE | ANN MAPE | ANN #Nodes
DJI | 0.0087 | 1.5216 | 0.3429 | 78.4625 | (3,1,3) | 0.0090 | 1.5697 | 35
NASDAQ | 0.0170 | 2.2276 | 0.2720 | 50.9447 | (1,1,0) | 0.0174 | 2.3953 | 10
S&P500 | 0.0102 | 1.6291 | 0.3407 | 46.9004 | (1,1,2) | 0.0105 | 1.6411 | 10
FTSE100 | 0.0396 | 1.7005 | 0.4475 | 20.7286 | (2,1,2) | 0.0404 | 1.7213 | 10
DAX | 0.0735 | 3.6791 | 0.3432 | 79.1502 | (4,1,4) | 0.0742 | 3.7993 | 15
AORD | 0.0251 | 1.5668 | 0.2736 | 51.5942 | (1,1,0) | 0.0312 | 1.6817 | 15
NIKKEI | 0.0259 | 2.4377 | 0.3955 | 25.8386 | (1,1,3) | 0.0237 | 2.3701 | 20
Table 7
Execution time comparison between AFIS and repetitively trained ARIMA (the first 1000 data instances used for building the initial model and the remaining data for testing; this experiment was executed 10 times for each of the stocks, and the average performance along with the performance variation is reported here).

Stock name | Length of training data | Length of test data | AFIS: initial model (s) | AFIS: total (s) | ARIMA: per fit (s) | ARIMA: total (s) | Speedup per data prediction
DJI | 1000 | 3216 | 3.209 ± 0.3512 | 14.0915 ± 0.4251 | 2.68 ± 0.1312 | 137.8300 ± 3.8289 | 9.79
NASDAQ | 1000 | 1007 | 3.315 ± 0.1324 | 4.8941 ± 0.3393 | 2.63 ± 0.3532 | 25.7742 ± 0.6556 | 5.27
S&P500 | 1000 | 2042 | 2.998 ± 0.5121 | 9.0808 ± 0.4734 | 2.58 ± 0.4512 | 116.9670 ± 1.8735 | 12.87
FTSE100 | 1000 | 321 | 3.126 ± 0.4142 | 1.6265 ± 0.3381 | 2.71 ± 0.4417 | 7.5625 ± 0.1253 | 4.65
DAX | 890 | 84 | 3.391 ± 0.3531 | 0.4156 ± 0.2332 | 2.25 ± 0.3931 | 2.6600 ± 0.1867 | 6.40
AORD | 1000 | 303 | 3.326 ± 0.1367 | 1.5275 ± 0.2993 | 2.55 ± 0.6671 | 6.0240 ± 0.08514 | 3.94
NIKKEI | 1000 | 324 | 3.253 ± 0.4851 | 1.6441 ± 0.8474 | 2.57 ± 0.3985 | 6.2243 ± 0.6285 | 3.79
Table 8
Execution time to generate a prediction.

Stock name | AFIS | ARIMA | Speedup
DJI | 4.38 | 42.86 | 9.79
NASDAQ | 4.86 | 25.60 | 5.27
S&P500 | 4.45 | 57.28 | 12.87
FTSE100 | 5.07 | 23.56 | 4.65
DAX | 4.95 | 31.67 | 6.40
AORD | 5.04 | 19.88 | 3.94
NIKKEI | 5.07 | 19.21 | 3.79
Fig. 7. Forecast values vs. actual values, where forecasts are computed using AFIS, the HMM-Fuzzy model, Chiu's fuzzy model and DENFIS, for the monthly electricity production in Australia. (Training data: Jan 1956–Oct 1964; test data: Nov 1964–Aug 1995.)
Table 9
Performance comparison among AFIS, HMM-Fuzzy, DENFIS and Chiu's model (varying the length of the training data: 100 and 200) for the monthly electricity production in Australia (million kilowatt hours, Jan 1956–Aug 1995).

Training data | Test data | AFIS NRMSE | AFIS MAPE | HMM-Fuzzy NRMSE | HMM-Fuzzy MAPE | Chiu's model NRMSE | Chiu's model MAPE | DENFIS NRMSE | DENFIS MAPE
Jan 1956–Oct 1964 | Nov 1964–Aug 1995 | 0.0686 | 7.5400 | 0.4898 | 38.0055 | 0.2958 | 19.0174 | 0.4686 | 50.1938
Jan 1956–Feb 1973 | Mar 1973–Aug 1995 | 0.0507 | 4.5667 | 0.0610 | 5.1254 | 0.0611 | 5.1157 | 0.4180 | 32.5049
Table 10
Comparison of adaptive online learning systems based on the desired characteristics.
[18] M.R. Hassan, B. Nath, M. Kirley, A fusion model of HMM, ANN and GA for stock market forecasting, Expert Syst. Appl. 31 (1) (2007) 171–180.
[19] M.R. Hassan, Hybrid HMM and Soft Computing Modeling with Applications to Time Series Analysis, Ph.D. Thesis, Department of Computer Science and Software Engineering, The University of Melbourne, 2007.
[20] L.R. Rabiner, A tutorial on Hidden Markov Models and selected applications in speech recognition, Proc. IEEE 77 (1989) 257–286.
[21] M.R. Hassan, A combination of HMM and fuzzy model for stock market forecasting, Neurocomputing 72 (16–18) (2009) 3439–3446.
[22] M.R. Hassan, B. Nath, M. Kirley, A HMM based fuzzy model for time series prediction, in: Proceedings of the FUZZ-IEEE Conference, 2006, pp. 9966–9974.
[23] M.R. Hassan, B. Nath, M. Kirley, J. Kamruzzaman, A hybrid of multiobjective evolutionary algorithm and HMM-Fuzzy model for time series prediction, Neurocomputing 81 (2012) 1–11.
[24] M. Mannle, Identifying rule-based TSK fuzzy models, in: Proceedings of EUFIT, 1999, pp. 286–299.
[25] H. Bahi, M. Sellami, Combination of vector quantization and Hidden Markov Models for Arabic speech recognition, in: ACS/IEEE Proceedings of the International Conference on Computer Systems and Applications, 2001, p. 0096.
[26] X. Huang, Y. Ariki, M. Jack, Hidden Markov Models for Speech Recognition, Edinburgh University Press, 1990.
[27] L.E. Baum, T. Petrie, G. Soules, N. Weiss, A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains, Ann. Math. Stat. 41 (1970) 164–171.
[28] L.E. Baum, An inequality and associated maximization technique in statistical estimation of probabilistic functions of Markov processes, Inequalities 3 (1972) 1–8.
[29] S.-M. Chen, S.-H. Lee, A new method for generating fuzzy rules from numerical data for handling classification problems, Appl. Artif. Intell. (2001) 645–664.
[30] P.P. Angelov, R.A. Buswell, Automatic generation of fuzzy rule-based models from data by genetic algorithms, Inf. Sci. (2003) 17–31.
[31] X.Z. Wang, Y.D. Wang, X.F. Xu, W.D. Ling, D.S. Yeung, A new approach to fuzzy rule generation: fuzzy extension matrix, Fuzzy Sets Syst. (2001) 291–306.
[32] M.R. Hassan, B. Nath, M. Kirley, A data clustering algorithm based on single Hidden Markov Model, in: Proceedings of the International Multiconference on Computer Science and Information Technology, 2006, pp. 57–66.
[33] M. Ragulskis, K. Lukoseviciute, Non-uniform attractor embedding for time series forecasting by fuzzy inference systems, Neurocomputing 72 (2009) 2618–2626.
[34] T. Takagi, M. Sugeno, Fuzzy identification of systems and its application to modeling and control, IEEE Trans. Syst. Man Cybern. (1985) 116–132.
[35] J. Zurada, Optimal Data Driven Rule Extraction using Adaptive Fuzzy-Neural Models, Ph.D. Dissertation, University of Louisville, 2002.
[36] A.E. Gaweda, J.M. Zurada, Data-driven linguistic modeling using relational fuzzy rules, IEEE Trans. Fuzzy Syst. 11 (2003) 121–134.
[37] G.C. Goodwin, K.S. Sin, Adaptive Filtering Prediction and Control, Prentice-Hall, Upper Saddle River, NJ, 1984.
[38] Yahoo Finance, http://finance.yahoo.com/.
[39] S.L. Chiu, An efficient method for extracting fuzzy classification rules from high dimensional data, J. Adv. Comput. Intell. 1 (1997) 1–7.
[40] J.R. Jang, C.T. Sun, E. Mizutani, Neuro-Fuzzy and Soft Computing, Prentice Hall, Englewood Cliffs, NJ, 1997.
[41] J.A. Hartigan, M.A. Wong, Algorithm AS 136: a k-means clustering algorithm, Appl. Stat. 28 (1979) 100–108.
[42] D. Rumelhart, J. McClelland, Parallel Distributed Processing, MIT Press, 1986.
[43] M.R. Hassan, B. Nath, Stock market forecasting using Hidden Markov Model: a new approach, in: Proceedings of the International Conference on Intelligent System Design and Application, 2005, pp. 192–196.
[44] G. Atsalakis, K. Valavanis, Surveying stock market forecasting techniques, part II: soft computing methods, Expert Syst. Appl. 36 (3) (2009) 5932–5941.
[45] J. Kamruzzaman, R. Sarker, Forecasting of currency exchange rates using ANN: a case study, in: International Conference on Neural Networks and Signal Processing, 2003, pp. 793–797.
[46] Monthly electricity data, http://www.robjhyndman.com/TSDL; http://datamarket.com/data/set/22l0/monthly-electricity-production-in-australia-million-kilowatt-hours-jan-1956-aug-1995#!display=line&ds=22l0.
Md. Rafiul Hassan received a B.Sc. (Engg.) in Electronics and Computer Science from Shahjalal University of Science and Technology, Bangladesh, and a Ph.D. in Computer Science and Software Engineering from the University of Melbourne, Australia, in 2000 and 2007, respectively. Currently, he is a faculty member in the Department of Information and Computer Science, King Fahd University of Petroleum and Minerals, Saudi Arabia. His research interests include neural networks, fuzzy logic, evolutionary algorithms, Hidden Markov Models and support vector machines, with a particular focus on developing new data mining and machine learning techniques for the analysis and classification of biomedical data. He is currently involved in several research and development projects for effective prognosis and diagnosis of breast cancer from gene expression microarray data. He is the author of around 30 papers published in