
JOURNAL OF COMPUTING, VOLUME 3, ISSUE 3, MARCH 2011, ISSN 2151-9617

HTTPS://SITES.GOOGLE.COM/SITE/JOURNALOFCOMPUTING/
WWW.JOURNALOFCOMPUTING.ORG 64

Optimal Weight Selection of ANN to Predict the Price of the General Index for Amman Stock Exchange

K. Eghnam, A. Sheta, S. Bani-Ahmad

Abstract— Artificial neural networks (ANNs) have been successfully used to solve a variety of problems in prediction, recognition, pattern classification, and the modeling and simulation of dynamic systems. Unfortunately, it has been reported that the optimization of the ANN weights represents a challenge: traditional ANN learning algorithms, such as the back-propagation algorithm, can get stuck in local minima, so there is no guarantee that the produced weights are the optimal set for the problem under study. Genetic Algorithms (GAs) have been able to provide solutions for a diversity of parameter optimization problems. In this study, we encode the ANN weights as parameters (i.e., a chromosome) for GAs to optimize. This simple idea significantly helps in solving a challenging stock exchange problem. A financial data set covering Banks Participation, Insurance Participation, Service Participation, and Industry Participation for the period 1992-2005 was collected from the Amman Stock Exchange (ASE) and used as the training data set for the proposed ANN-GA model. The results show that the proposed model outperforms the traditional Multiple Linear Regression (MLR) model by 4.95%. The experimental results are promising.

Index Terms— Prediction, Neural Networks, Genetic Algorithms, Stock Market, Amman Stock Exchange.

——————————  ——————————

1 INTRODUCTION

The ability to predict the direction of the stock market is a very important factor when investing in the financial market [1]. There are several motivations to explore the stock market prediction problem. Academic researchers, business practitioners, and many others have developed many types of prediction methods and techniques in an attempt to find a reliable and effective solution for the stock exchange. Detecting trends in stock data helps decision makers take the correct actions to increase profit [3]. No single technique or combination of techniques has been successful enough to beat the market [2].

In recent years, Soft-Computing techniques have provided good tools for modeling time series [3]. Artificial neural networks (ANNs) have proved to be among the most effective Soft-Computing techniques. In particular, their ability to discover nonlinear relationships and irregularities in input data makes them well suited to predicting the stock market [4]. Empirical results have shown that ANNs outperform regression models [5, 6, 7]. However, several major problems are associated with using an ANN to predict a stock market index: determining the ANN size, selecting the best weights of the network, and selecting the initial weights that start the weight-adjustment process. GAs have been found applicable to these types of problems. They are practically good at efficiently searching large and complex spaces to find near-global optima. As the complexity of the search space increases, GAs present an attractive alternative to traditional learning techniques such as back-propagation, and they are a good complement to gradient-based techniques in complex search spaces. In [12], the authors proposed and implemented a fusion model combining the Hidden Markov Model (HMM), ANNs, and GAs to forecast financial market behavior. In [13, 14], ANNs were used to predict stock prices.

The objective of this research is to explore the use of Genetic Algorithms (GAs) to train a feed-forward ANN, i.e., to find near-optimal weights, in order to predict the price of the general index of the Amman Stock Exchange (ASE). We also plan to compare our results against multiple linear regression (MLR) models. The proposed prediction model will be used to test the ability of GAs to find an optimal set of weights for the ANN.

————————————————
• K. Eghnam is with the Information Technology Department, Al-Balqa Applied University, Salt, Jordan.
• A. Sheta is with The World Islamic Sciences and Education University (WISE), Amman, Jordan.
• S. Bani-Ahmad is with the Information Technology Department, Al-Balqa Applied University.
————————————————

1.1 Amman Stock Exchange (ASE)
ASE was established in March 1999 as a non-profit institution with administrative and financial autonomy [16]. It is authorized to function as an exchange for the trading of securities. The exchange is governed by a seven-member board of directors, and a chief executive officer oversees day-to-day responsibilities and reports to the board. The un-weighted index is supplemented by sub-indices for the four sectors (see Table 1): Banks Participation, Insurance Participation, Service Participation, and Industry Participation. The base was changed to 1000 as of January 1st, 2004.

TABLE 1
SNAPSHOT OF THE INPUT DATA: AMMAN STOCK EXCHANGE INDEX PRICE, JANUARY 2004

Day  Bank    Insurance  Security  Industry  Index Price
1    4515.7  2299.1     1308.6    1507.4    2668.2
2    4565.4  2301.8     1300.3    1504.6    2680.5
3    4592.6  2317.8     1319.3    1530.8    2708.3
4    4592.6  2317.8     1319.3    1530.8    2708.3
5    4781.2  2348.1     1396.3    1575.4    2818.3
6    4876.6  2373.4     1440.9    1623.4    2887.3
7    4816.4  2381.5     1411.8    1617.9    2854.8
8    4774.6  2439.8     1409.9    1605.2    2835.5
9    4736    2483.3     1378.4    1608.5    2813.8
10   4659.4  2509.1     1397.1    1630.8    2805
11   4696.8  2522.6     1399      1655.1    2829.1
12   4707.1  2552.5     1405.5    1637.2    2826.9
13   4657.5  2517       1366.3    1616      2787.2
14   4600.8  2502.9     1331.5    1581.8    2740.7
15   4628.9  2475.9     1368.8    1588.3    2765.6
16   4630.4  2508.8     1378.2    1597.7    2774
17   4649.8  2541.8     1371.9    1584.3    2772.9
18   4719.5  2531.5     1390.7    1601      2809.9
19   4732.9  2494.2     1384.6    1601.7    2812.1
20   4756.9  2499.4     1389.4    1602      2822
21   4836.3  2513.5     1404.4    1622.9    2863.2

1.2 Models to Predict the Stock Exchange
In the following sections, we describe three models used to predict the price of the stock exchange: 1) the Multiple Linear Regression model, 2) the Neural Network model, and 3) the ANN tuned by GAs model. We formulate each case to suitably fit the ASE data.

1.3 Multiple Linear Regression Model
Regression analysis is one of the most widely used techniques for analyzing multifactor data, largely because of its ability to assess which factors to include and which to exclude in order to develop alternative models with different factors. In this method, the multiple regression model can be presented as follows:

    y = β0 + β1 x1 + β2 x2 + ... + β4 x4 + ε    (1)

where:
• y is the price of the general index for ASE,
• x1 is the Banks Participation,
• x2 is the Insurance Participation,
• x3 is the Service Participation,
• x4 is the Industry Participation,
• β0 is the intercept of the regression equation,
• βi, i = 1, 2, 3, 4, are the regression coefficients, and
• ε is an independent N(0, σ²) error term.

The corresponding sample regression model may be written as Eq. 2:

    yi = β0 + Σ_{j=1}^{4} βj xij + εi    (2)

2. ARTIFICIAL NEURAL NETWORK MODEL
In the following, we define the main elements of an ANN architecture with three layers. The ANN has n input nodes, a single output node, and p hidden units (nodes). The network input consists of a set of financial input factors x = {x1, x2, ..., xn}, where each sample xi in x is an m-dimensional financial input vector. The hidden layer produces a one-dimensional output vector y = {y1, y2, ..., yp}, where p equals the number of hidden-layer units. w_hi is the n × p matrix of synaptic weights connecting the input and hidden layers, w_oh is the p × 1 matrix of synaptic weights connecting the hidden and output layers, and for simplicity we assume that ŷ is the desired response of the ANN. The ANN is shown in Fig. 1.

Fig. 1. The proposed ANN.

Back-propagation is commonly used to train ANNs [3]. Classical back-propagation adopts a first-order steepest-descent technique as its learning algorithm: weights are modified in the direction that corresponds to the negative gradient of the error surface. The gradient is an extremely local pointer and does not point to the global minimum [9]. Newton-based algorithms often converge faster than gradient methods; unfortunately, they are complex and expensive to compute. The Levenberg-Marquardt learning algorithm is considered the fastest method for training moderate-sized back-propagation neural networks, but it has a drawback: it requires the storage of a matrix that can be quite large for certain problems [3, 7]. If the network is very large, we may run out of memory.

ANN Tuned GAs Model
To use GAs to solve the problem described above, we need to follow the steps below.
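As a purely illustrative sketch of this approach, the weight encoding and fitness evaluation can be written in Python. The flat chromosome of n*p input-to-hidden weights followed by p hidden-to-output weights, the e^(−MSE) fitness, the uniform [-1, 1] weight initialization, 50 generations, and 8 hidden nodes all follow the paper; the specific GA operators shown here (truncation selection, arithmetic crossover, Gaussian mutation), the tanh hidden activation, and the toy data are this sketch's own assumptions, not the authors' implementation.

```python
import math
import random

random.seed(1)

N_IN, N_HID = 4, 8                     # four sector inputs; 8 hidden nodes (as in the paper)
CHROM_LEN = N_IN * N_HID + N_HID       # chromosome: n*p input-hidden + p hidden-output weights

def forward(chrom, x):
    """Decode the flat chromosome into two weight matrices and run the three-layer ANN."""
    w_hi = [chrom[i * N_IN:(i + 1) * N_IN] for i in range(N_HID)]
    w_oh = chrom[N_IN * N_HID:]
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w_hi]
    return sum(w * h for w, h in zip(w_oh, hidden))

def fitness(chrom, data):
    """fitness = exp(-MSE): the higher the error, the lower the fitness."""
    mse = sum((y - forward(chrom, x)) ** 2 for x, y in data) / len(data)
    return math.exp(-mse)

def evolve(data, pop_size=50, generations=50):
    # Population of weight sets, initialized uniformly in [-1, 1].
    pop = [[random.uniform(-1, 1) for _ in range(CHROM_LEN)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=lambda c: fitness(c, data), reverse=True)
        nxt = ranked[:2]                          # elitism: best individuals survive
        while len(nxt) < pop_size:
            a, b = random.sample(ranked[:10], 2)  # truncation selection (assumed operator)
            child = [(wa + wb) / 2 for wa, wb in zip(a, b)]  # arithmetic crossover
            if random.random() < 0.1:             # occasional Gaussian mutation
                child[random.randrange(CHROM_LEN)] += random.gauss(0, 0.1)
            nxt.append(child)
        pop = nxt
    return max(pop, key=lambda c: fitness(c, data))

# Toy usage with synthetic data standing in for the scaled sector inputs and index price.
data = [([random.random() for _ in range(N_IN)], 0.5) for _ in range(20)]
best = evolve(data)
print(len(best) == CHROM_LEN, 0.0 < fitness(best, data) <= 1.0)
```

In real use, the four sector values and the index price would be scaled into a comparable range before training, and VAF would be computed on held-out data to compare against the MLR baseline.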
Representation
The first phase is to decide on the representation of the weights. A floating-point representation of the connection weights is used in our case to represent the ANN weights. The chromosome representation is given in Fig. 2: positions 1, 2, ..., n*p hold the input-to-hidden weights, and positions (n*p)+1, (n*p)+2, ..., (n*p)+p hold the hidden-to-output weights.

Fig. 2. A chromosome representation.

Fitness Function Selection
Secondly, we evaluate the fitness of these connection weights (chromosomes) by computing the mean square error (MSE), as given in Equation 3:

    MSE = (1/n) Σ_{k=1}^{n} (y(k) − ŷ(k))²    (3)

The fitness of an individual is determined by the total MSE: the higher the error, the lower the fitness. Here y is the actual neuron output at the output layer for the training samples, while ŷ is its desired response. For the empirical tests, Equation 4 is the modified fitness function:

    fitness = e^(−MSE)    (4)

In order to evaluate the performance of our model against the MLR model, the Variance-Accounted-For (VAF) was chosen, as given in Equation 5:

    VAF = (1 − var(y − ŷ) / var(y)) × 100%    (5)

Tuning Parameters for GAs
Start the evolutionary process through the selection, crossover, and mutation operations of the GA. The best individuals survive to the next generation.

Termination Criteria
Finally, the evolution terminates when the generation number reaches its maximum value or the optimal values are reached. To summarize the evolution process of GAs in selecting the best set of weights of an ANN for solving the stock market prediction problem, we followed the procedure given in Fig. 3.

Fig. 3. Framework of combining GAs and ANNs: START → Initialize Network → if the termination criteria are satisfied, stop GA training and END; otherwise continue GA training and weight prediction of the neural network.

3. EXPERIMENTS AND DISCUSSION
A fully connected feed-forward ANN was used, with an input layer, a single hidden layer, and an output layer. Each layer consists of nodes, which are either input data, connection points where summation of data occurs, or output data; the single hidden layer comprises N units (nodes). A population of weight sets is initialized randomly (uniformly distributed in the interval [-1, 1]).

We ran the GA for 50 generations to obtain the optimal set of weights. GAs with various tuning parameters (population size, crossover and mutation probabilities) were used. In Table 2, we show the computed VAF for various population sizes.

TABLE 2
COMPUTED VAF WITH VARIOUS POPULATION SIZES

Population size   VAF
50                99.971
100               99.9642
200               99.9342

The experimental results show that it is possible to model the stock price from historical trading data using a three-layer neural network. The ANN-GA model was able to model 400 days of trading for the Amman stock exchange with a high VAF, and it was able to predict the price of the index as of January 2004. The price of the general index for the Amman stock exchange is well predicted by the model, with a high degree of Variance-Accounted-For (VAF = 99%).

In order to get a deeper understanding of the results, we performed some statistical analysis. The ANN-GA model was tested with various numbers of nodes in the hidden layer: 4, 8, 12, and 16 nodes, each over multiple runs (see Table 3). These tests indicate the success of the developed model in providing outstanding results in our case.

The ANN-GA model achieved a Variance-Accounted-For of up to 99% when the hidden layer has 8 nodes (see Fig. 4). In order to improve the performance of the model, we explored increasing the population size of the GA; we found that the performance was static and the model did not record an improvement (see Table 2). The ANN-GA model also provides a better fit to the observations than the MLR model (see Figures 4a and 4b). It performed 4.95% better than the MLR model.

Fig. 4a. Prediction of the general ASE index using the MLR model.

Fig. 4b. Prediction of the general ASE index using the NN-GA model.

Preliminary research results showed that the hybrid model has better overall accuracy than the MLR model. We believe the following three main reasons enhance the hybrid model's results:

(1) ANNs are among the most effective techniques for financial problems; in particular, their ability to discover nonlinear relationships and irregularities in input data makes them ideal for stock prediction problems.
(2) The use of a GA was a major factor in overcoming the problem of limited data availability. To avoid over-training, the number of hidden nodes was limited to 8 and the number of generations was kept at 500.
(3) Using the GA procedures over many generations, increasingly better solutions to the problem will emerge.

TABLE 3
HYBRID MODEL PERFORMANCE USING VARIOUS NUMBERS OF NODES WITH MULTIPLE RUNS

N     4        8        12       16
VAF   99.653   98.765   99.484   96.406
      99.213   99.316   96.415   99.127
      98.437   99.971*  98.200   94.101
      97.841   96.327   97.866   95.674

4. CONCLUSIONS AND FUTURE WORK
In this paper, we developed an ANN-GA based model for prediction on the Amman Stock Exchange. The model has been successfully used to predict the price of the general index of ASE. Preliminary research has shown that the proposed model has better overall accuracy than the MLR model. In this work, we demonstrated how GAs and NNs can be used to predict the financial market. NNs do have the capability to predict financial markets if properly trained, and the individual investor could benefit from the use of this forecasting tool. We plan to extend our efforts towards improving the learning methodology for the ANN by using other approaches such as Fuzzy Logic. We also plan to build more complex models of the Amman index using selected datasets.

REFERENCES
[1] Chen A. S., Leung M. T., and Daouk H., 2003. Application of neural networks to an emerging financial market: forecasting and trading the Taiwan Stock Index. Computers and Operations Research, 30, 901-923.
[2] Lawrence R., 1997. Using neural networks to forecast stock market prices. PhD thesis, University of Manitoba.
[3] Burton G., Malkeil A., 1996. Orthogonal Least Squares Learning Algorithm for Radial Basis Function Networks, pages 302-309.
[4] Makridakis S., Wheelwright S. C., and Hyndman R. J., 1997. Forecasting Methods and Applications, Third Edition, pages 30-70.
[5] Bansal, Kauffman J., and Weitz R., 1993. Comparing the modeling performance of regression and neural networks as data quality varies: a business value approach. Journal of Management Information Systems, Vol. 10, pp. 11-32.
[6] Marquez, Hill T., Worthley R., and Remus W., 1998. Neural Network Models as an Alternative to Regression. IEEE 24th Annual Hawaii Int'l Conference on System Sciences, pp. 129-135, Vol. VI.
[7] White H., 1988. Economic prediction using neural networks: the case of IBM daily stock returns. IEEE Int'l Conference on Neural Networks.
[8] Holland H., 1975. Adaptation in Natural and Artificial Systems. University of Michigan, page 183.
[9] Man-Chung C., Chi-Cheong W., and Chi-Chung L., 1995. Financial time series forecasting by neural network using conjugate gradient learning algorithm and multiple linear regression weight initialization. Hong Kong Polytechnic University, Kowloon.
[10] Robert R., 1996. Artificial Intelligence in Finance & Investing, Ch. 10, IRWIN.
[11] Whitley D., 1995. Genetic Algorithms and Neural Networks. Parallel Computing, 14:347-361.
[12] Sheta A. and DeJong K., 2001. Time-series forecasting using GA-tuned radial basis functions. Computer and System Department, Electronic Research Institute; Computer Science Department.
[13] Huse G. and Gjosaeter H., 1999. A neural network approach for predicting stock abundance of the Barents Sea capelin. University of Bergen, Department of Fisheries and Marine Biology, PO Box 7800, N-5020.
[14] Whitley D. and Hanson T., 1989. The genetic algorithm: using genetic algorithms to optimize neural networks. Technical Report CS-89-107, Colorado State University.
[15] Whitley D., Starkweather T., and Bogart C., 1989. Genetic algorithms and neural networks: optimizing connections and connectivity. Colorado State University, Technical Report CS-89-117.
[16] General Amman Stock Index Data Set, http://www.ammanstockex.com.
[17] Belew R. K., McInerney J., and Schraudolph N., 1992. "Evolving Networks: Using the Genetic Algorithm with Connectionist Learning," Artificial Life II, pp. 511-547.
[18] Koza J. R. and Rice J. P., 1995. "Genetic Generation of Both the Weights and Architecture for a Neural Network," IEEE Joint Conference on Neural Networks, pp. 397-404.
[19] Montana D. J., 1992. "A Weighted Probabilistic Neural Network," Advances in Neural Information Processing Systems 4, pp. 1110-1117.
[20] Belew R. K., McInerney J., and Schraudolph N., 1990. Evolving Networks: Using the Genetic Algorithm with Connectionist Learning. Technical Report CS90-174, UCSD (La Jolla).
[21] Whitley D. Applying Genetic Algorithms to Neural Network Problems: A Preliminary Report. (Unpublished manuscript.)
[22] Curran D. and O'Riordan C., 2002. Applying Evolutionary Computation to Designing Neural Networks: A Study of the State of the Art. Technical Report, Dept. of IT, NUI Galway.

K. Eghnam received her B.Sc. degree in Computer Science from Al-Balqa Applied University in 1999 and her M.Sc. in Computer Science from the same university in 2001. She is presently a full-time instructor at Al-Balqa Applied University. Her research interests are mainly in the areas of software engineering and artificial intelligence.

A. Sheta received his B.E. and M.Sc. degrees in Electronics and Communication Engineering from the Faculty of Engineering, Cairo University, in 1988 and 1994, respectively. He received his Ph.D. degree from the Computer Science Department, School of Information Technology, George Mason University, Fairfax, VA, USA, in 1997. Currently, Prof. Sheta is working with the Computer Science Department at the World Islamic Sciences and Education University (W.I.S.E.), Amman, Jordan. His research interests include Neural Networks, Evolutionary Computation, Modelling and Simulation of Dynamical Systems, Robotics, Automatic Control, Fuzzy Logic, and Swarm Intelligence. Prof. Sheta has served as chair or co-chair for a number of workshops and special sessions within international conferences in the area of Computer Science and Engineering, and has been an invited speaker at a number of national and international conferences.

S. Bani-Ahmad received his B.Sc. degree in Electrical Engineering/Computer Engineering from the Department of Electrical Engineering, Jordan University of Science and Technology, in 1999. He received an M.S. in Computer Science from the School of Information Technology at Al-Albayt University in Jordan in 2001. He received his Ph.D. degree in Computing and Information Systems from the Department of Electrical Engineering and Computer Science at Case Western Reserve University, Cleveland, Ohio, USA, in 2008. He is presently a professor at Al-Balqa Applied University, Salt, Jordan.
