Lukasz Wojtow
lukasz.wojtow@gmail.com
Genotick.com
London, 2016
Table of Contents
Abstract
Example systems
Detailed overview
Parameter sensitivity
Conclusions
Literature
Abstract
It is hard to imagine an endeavor more competitive than stock
market trading. Hence, it is no surprise that traders have been looking into
Artificial Intelligence (AI) since its earliest days. However, most AI research is
concentrated around well-known Fundamental or Technical Analysis indicators.
Also, typical Neural Network learning methods are prone to overlearning,
rendering their results untrustworthy for real-life trading, or even completely
bogus right from the start.
In this paper we propose a different method. At the heart of our algorithm
lies an epiphany: if simple assembler instructions are capable of building a
variety of computer software, it should be possible to build any type of trading
system with an equally simple instruction set. After all, a mechanical trading
system is nothing more than a computer program that reads historical data and
comes up with some prediction. Our algorithm has about 100 different built-in
instructions. We also created Open Source Software that implements our
method and allows for automatic development of profitable systems. We released
the source code under the liberal GPL license.
We showed that these simple instructions, grouped together, can create a
profitable trading system. Our software does not require a list of potentially
profitable trading rules. It also does not have any rules built in. When it is run
for the first time, there are no systems; it has to learn how to invest by itself.
This approach makes the built systems very flexible. Our method can build any type
of system: trend following, mean reverting, based on fundamental indicators or
based on price action.
One of the main issues with Machine Learning is overlearning. This is due
to the fact that in traditional AI algorithms data is fed to a Neural Network
multiple times. If the Neural Network can “see” the data more than once, it can
learn to react to changes that are not market inefficiencies but just noise. This
can be reduced by selecting how much learning is allowed before checking the
network’s prediction on out-of-sample data. However, this isn’t perfect, as the
amount of learning on in-sample data is also data dependent and in the end is
just another parameter that must be adjusted.
We overcame this problem by specially designing the algorithm to be Walk-
Forward only. That is, similarly to actual trading in real life, our software trades
and learns as it goes along. There are no separate learning and trading modes.
Also, to better simulate real life, we forced the algorithm to trade on the market’s
next price, instead of its last. This simulates the real-life delay between
analysis and placing a trade.
At the beginning Genotick creates initial systems by randomly choosing
instructions, changing their arguments and grouping them into lists.
Each system consists of at most 1024 instructions. Interestingly, once a
system is created it never changes. This allows us to trust that a system’s
output will always be the same on the same data. All systems are saved in a
population which is adjusted (or “learns”) over time with a genetic algorithm, one
day at a time.
Our algorithm assumes that there are multiple systems, each a little bit
different. We then take all of their predictions to calculate one cumulative
prediction that a user would use to place a trade. Genotick calculates the
profit yielded by these predictions and reports it to the user at the end. The final
result is what a user would get in real life when executing the software every day
and opening a trade at the next market open.
The proposed algorithm does not check for the rationale behind created systems.
Firstly, it would be very difficult (if not impossible) to implement. Secondly,
rationale doesn't matter as much as people think. After all, the market does not
“know” why a trader opened a position, so the final result (profit or loss) does not
depend on the trader's reasoning. Being unable to understand the systems
brings a problem when creating them automatically: how to remove systems
based on flawed ideas? For example, if a system is always Long, is it because it
is trend following and it discovered a long-term uptrend? Or is it multiplying the
market’s volume by the market’s open and predicting Long if the result is greater than
zero? It is difficult to make a judgment about a system without knowing its deep
logic and fully understanding the underlying market data. We wanted to make
Genotick flexible enough to analyze any data, so instead of checking systems’
logic we require systems to be “symmetrical” on “mirrored”
data and remove those that are not.
We showed that it is possible for Artificial Intelligence to trade the stock
market profitably. We presented an algorithm that by design is Walk-Forward
only. This makes its results trustworthy and repeatable in real-life trading,
where no access to future prices is possible.
Why Artificial Intelligence?
The Artificial Intelligence revolution is upon us. Self-driving cars are a fact; chess
programs that beat an average professional have been known for a long time.
Stock market trading, being unregulated in terms of traders’ methods, will
reward those on the cutting edge of research. The best hedge funds may still be
run by humans, but if their method is really better, AI will figure it out as well.
Besides, most traders do not compete with the best. Just like in the tale where
two men are trying to escape from a bear, they need to outrun one another,
not the bear. Because the stock market contains so much randomness, it will take
traders longer to notice that their opponent is using a superior method.
Another important argument in favor of AI is that most traders specialize
in one trading style, be it trend following, mean reverting, fundamental analysis
and so on. That is because learning one style is difficult enough; mastering all of
them is impossible. When traders start using AI, they can suddenly reach for
methods that were not available to them before. By utilizing AI, traders can
trade free from their beliefs, misguided opinions and personal limitations.
Example systems
Each system consists of a number of instructions executed one after
another. This chapter shows some of the simplest systems. Real systems are much
more complicated. Genotick can print created systems in a human-readable form
if such a need arises.
The systems below assume that the following data were fed to the software:
#Time, Open, High, Low, Close, Volume, PE ratio, Wide Market PE ratio
20060103,100,102,99,101,42,16,18
20060104,102,102,101,102,30,17,19
20060105,104,106,99,101,26,16,18
20060106,106,108,106,108,90,18,17
Let’s assume that today is 4th January 2006, after market close.
A system that bets in the same direction as today’s change (close-to-close) would
look like this (column count starts at 0):
When this program ends, register 0 contains the difference between today’s and
yesterday’s close. Then, the sign of that value is taken as the system’s prediction:
a positive value means the system is betting Long, a negative value means the system
is betting Short. Zero means that the system would like to stay out of the market.
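The instruction listing itself appears as a figure in the original document. As a rough, hypothetical Python sketch of the behavior just described (row ordering and the helper name are illustrative, not part of Genotick):

```python
# Hypothetical sketch of the "bet with today's close-to-close change" system.
# Each row: [time, open, high, low, close, ...]; column count starts at 0.
def close_to_close_system(rows):
    register_0 = rows[-1][4] - rows[-2][4]  # today's close minus yesterday's close
    if register_0 > 0:
        return "Long"
    if register_0 < 0:
        return "Short"
    return "Out"  # zero: stay out of the market

rows = [
    [20060103, 100, 102, 99, 101, 42, 16, 18],
    [20060104, 102, 102, 101, 102, 30, 17, 19],
]
print(close_to_close_system(rows))  # close went from 101 to 102, so the bet is Long
```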
A system that bets Long if the stock’s PE ratio is lower than that of the wide market
(and Short if it is higher) looks like this:
When this program ends, register 0 contains the difference between the wide market’s PE
ratio and the stock’s PE ratio.
In the case of the above data, register 0 would contain a positive value. That means
that the system would open a Long position at the next open (price 104 on
20060105). The position would be valid until the following open (price 106 on
20060106).
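Again hedged as a sketch rather than the actual instruction list (which is a figure in the original), the PE-comparison system amounts to:

```python
# Hypothetical sketch of the PE-comparison system described above.
# Column 6 holds the stock's PE ratio, column 7 the wide market's PE ratio.
def pe_system(row):
    register_0 = row[7] - row[6]  # wide market PE minus stock PE
    if register_0 > 0:
        return "Long"   # the stock looks cheap relative to the wide market
    if register_0 < 0:
        return "Short"
    return "Out"

row_20060104 = [20060104, 102, 102, 101, 102, 30, 17, 19]
print(pe_system(row_20060104))  # 19 - 17 = 2 > 0, so the bet is Long
```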
A large variety of systems can be created. Systems can look at volume,
the day’s high, or in fact any data that can be represented as a number.
Detailed overview
The main part of the algorithm is the process of executing systems. Systems
in the population are executed one by one. Each system receives data from user-
supplied data files. For the purpose of system execution, data is truncated (in
memory) at the day currently being processed, so no future data is given to the
system. The system then executes its instructions. If the maximum number of executed
instructions is not exceeded, the system yields a prediction. This can be done
directly by one of the finishing instructions (details on finishing instructions can
be found in the appendix) or indirectly by leaving register 0 with its last value.
The processor then looks at this value and returns the system’s prediction based on its
sign. If the value is positive, the prediction is Up (Long); if it is negative, the prediction
is Down (Short). A value of 0 means that the system has no prediction for the next
day.
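A minimal sketch of that execution loop (the instruction representation and budget handling are simplified here; the real logic lives in SimpleProcessor.java):

```python
# Simplified sketch: run a system's instructions under a budget, then read register 0.
def execute_system(instructions, budget):
    registers = {0: 0.0}
    executed = 0
    for op in instructions:  # the real processor also supports jumps and variables
        if executed >= budget:
            return "Out"     # instruction limit exceeded: no prediction
        op(registers)
        executed += 1
    value = registers[0]
    if value > 0:
        return "Up"
    if value < 0:
        return "Down"
    return "Out"

program = [
    lambda r: r.__setitem__(0, r[0] + 5.0),  # roughly AddDoubleToRegister
    lambda r: r.__setitem__(0, r[0] - 7.0),  # roughly SubtractDoubleFromRegister
]
print(execute_system(program, budget=256))  # register 0 ends at -2.0: prediction Down
```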
Once all systems have been executed, the algorithm calculates a cumulative
prediction for the market. For this it looks at each system’s prediction and
weight and proceeds as explained in the table:

                                 Prediction Up                    Prediction Down
System with positive weight      Add system’s weight to           Add system’s weight to
                                 Up votes                         Down votes
System with negative weight      Add system’s absolute weight     Add system’s absolute weight
                                 value to Down votes              value to Up votes

Table 1: Calculating cumulative prediction
This implies that systems that have a very high negative weight are actually
useful. In other words, they are so bad that it is worth betting opposite to their
predictions.
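Table 1 and the reversed treatment of negative-weight systems can be sketched as:

```python
# Sketch of Table 1: negative-weight systems vote opposite to their prediction.
def cumulative_prediction(systems):
    """systems: list of (weight, prediction) pairs, prediction is "Up" or "Down"."""
    up_votes = down_votes = 0.0
    for weight, prediction in systems:
        side = prediction
        if weight < 0:  # a consistently wrong system is useful when reversed
            side = "Down" if prediction == "Up" else "Up"
        if side == "Up":
            up_votes += abs(weight)
        else:
            down_votes += abs(weight)
    if up_votes > down_votes:
        return "Up"
    if down_votes > up_votes:
        return "Down"
    return "Out"

# A very bad system (weight -5) predicting Down contributes 5 points to Up:
print(cumulative_prediction([(2.0, "Down"), (-5.0, "Down")]))  # Up
```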
Adding all systems’ weights together to yield one cumulative prediction is an
implementation of the Wisdom-of-the-Crowd phenomenon [9]. For example, if systems
have a 60 % chance of guessing correctly, they will obviously get only 60 %
accuracy when traded in isolation. However, just 10 uncorrelated systems
together will have an accuracy of over 70 %.
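The 70 % figure can be checked with a short binomial computation; the sketch below counts tied votes as half, since 10 voters can split 5–5:

```python
from math import comb

# Probability that a simple majority of n independent systems, each correct with
# probability p, votes for the right direction (ties counted as half).
def majority_accuracy(n, p):
    term = lambda k: comb(n, k) * p ** k * (1 - p) ** (n - k)
    total = sum(term(k) for k in range(n // 2 + 1, n + 1))
    if n % 2 == 0:
        total += 0.5 * term(n // 2)  # split tied votes evenly
    return total

print(round(majority_accuracy(10, 0.6), 3))  # 0.733, i.e. over 70 %
```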
The next stage is where systems’ weights are updated. It is assumed that a
trade was opened at the next available open price (the second column in
traditional “Date,Open,High,Low,Close,Volume” market data) and closed at the
following open. Opening the trade at tomorrow’s open rather than today’s close is
necessary to simulate real-life trading: after all, collecting data and running the
software to check the prediction takes long enough to make today’s close price no
longer available to trade on.
Calculating a system’s weight is very simple:
1. Take the square of the difference between the number of correct and incorrect predictions.
2. Return the positive value from step 1 if the majority of the system’s predictions are correct;
return the negative value otherwise.
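The two steps above amount to a signed square; a minimal sketch:

```python
# Sketch of the weight formula: the signed square of the difference between
# the number of correct and incorrect predictions.
def system_weight(correct, incorrect):
    difference = correct - incorrect
    squared = difference ** 2
    return squared if difference >= 0 else -squared

print(system_weight(30, 20))  # 100: mostly right, positive weight
print(system_weight(20, 30))  # -100: mostly wrong, negative weight
```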
An interesting part of the algorithm is that the weight is adjusted based only on
whether the predicted direction of the change was correct, and not on how close the
prediction was to the real price. This is done for two reasons:
1. Predicting direction is simpler, easier to learn and is enough for profitable
trading (assuming a trader has reasonable risk control).
2. It does not force a trader to adopt a particular option strategy. In other
words, if the algorithm punished systems based on how far they were from the
actual tomorrow’s price, it would mean that a trader is assumed to open a short
straddle with a strike equal to tomorrow’s price. Also, some option strategies reward
being right more than being wrong (at the same distance from today’s price). If a
trader wants to choose an option strategy based on other methods and only use AI
for predicting the future price, he would need multiple trained populations – one for
each option strategy. It is therefore more practical to use AI for predicting the delta
and leave the choice of instrument and option strategy to the trader.
After systems’ weights have been updated, the algorithm moves to a crucial operation:
removing systems that do not predict anything and removing bogus systems
built on flawed ideas. To quickly determine if a system is based on a flawed idea,
Genotick uses a simple, yet powerful, trick.
By default, systems are required to be “symmetrical” on “mirrored” data. For
example, this is the SPX chart for the years 1999 – 2015:
Illustration 1: SPX stock index
And this is “mirrored” SPX:
When a user wants to train Genotick to trade SPX, two data files need to be
provided: the original SPX and its exact reflection. Later, while training a population,
Genotick will check each system: if the number of “Long” predictions does not equal
the number of “Short” predictions for each day, the system will be rejected, making space
for a new system. For user friendliness, Genotick can mirror typical market data with a
separate command.
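Genotick's exact mirroring transform is defined in its source code; a plausible sketch is to reflect prices around zero, which also swaps the High and Low columns. The symmetry requirement then says the vote on mirrored data must be the opposite of the vote on original data:

```python
# Hypothetical sketch of mirroring an OHLC row and of the symmetry requirement.
def mirror_row(row):
    time, open_, high, low, close, *rest = row
    # Negating prices turns the old High into the new Low and vice versa.
    return [time, -open_, -low, -high, -close, *rest]

def is_symmetrical(original_vote, mirrored_vote):
    opposite = {"Long": "Short", "Short": "Long", "Out": "Out"}
    return mirrored_vote == opposite[original_vote]

print(mirror_row([20060103, 100, 102, 99, 101, 42]))
print(is_symmetrical("Long", "Short"))  # True: the system may stay in the population
print(is_symmetrical("Long", "Long"))   # False: a permanently biased system is rejected
```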
Next, the algorithm removes the oldest systems. This is a simple step designed
to remove systems that were good in the past (and still have some weight) but
whose exploited inefficiency may be long gone. It is assumed that good
systems breed, and if a particular inefficiency is still present, their children will
continue to trade it, gaining enough weight to survive.
More important is the next stage, where systems are removed based on
their weight. Really bad systems (with a high negative weight) stay in the population
(since their predictions are reversed). In effect, only systems that are about flat
are removed.
The last stage of the main loop is where systems breed. Currently, breeding
is implemented with a roulette-wheel selection algorithm where the probability of
becoming a parent is proportional to a system’s absolute weight. Two parents are
selected and their instruction lists are copied in blocks with randomly chosen
break points.
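Roulette-wheel selection over absolute weights can be sketched as follows (the population and names are made up for illustration):

```python
import random

# Sketch of roulette-wheel parent selection: the chance of becoming a parent
# is proportional to the absolute value of a system's weight.
def pick_parent(population, rng):
    """population: list of (name, weight) pairs."""
    weights = [abs(weight) for _, weight in population]
    spin = rng.uniform(0, sum(weights))
    running = 0.0
    for (name, _), weight in zip(population, weights):
        running += weight
        if spin <= running:
            return name
    return population[-1][0]  # guard against floating-point rounding

rng = random.Random(42)
population = [("system_a", 1.0), ("system_b", -9.0)]  # a bad system still breeds
picks = [pick_parent(population, rng) for _ in range(1000)]
print(picks.count("system_b") / 1000)  # roughly 0.9
```

Note that the negative-weight system is selected about nine times as often, matching the rule that only absolute weight matters for breeding.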
Results on random data
One of the biggest challenges when creating any type of simulation software is to
make sure that there are no errors that make the results bogus. One such
error would be a “look-ahead” bug, where the software accesses data from the “future”,
making results look suspiciously good. This type of error is usually very obscure,
and therefore it is very difficult to prove the software’s correctness.
It seems that the most reliable way to check for this type of error is to try to
predict a completely random time series, for example one that is generated by a
fair coin toss. Traditional Neural Networks iterate over the same data more than
once, and hence their results look as if it were possible to make a profit trading
white-noise data [10]. This problem is similar to over-optimizing a mechanical
trading system. If a system has enough parameters, it is possible to fit it to any
market. Of course, such a system would never make a profit trading a real market
(except by just getting lucky).
To test if our software has a “look-ahead” error, we created a random
market where “price” changes are generated from a fair coin toss. The initial price is
1000 and it has a 50 % chance to go up by 1 and a 50 % chance to go down by the
same amount. There is no rule to the changes and there is nothing to learn.
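Such a series is trivial to generate; the sketch below also accepts a biased up-probability, which covers the fake markets used in the next chapter (the function name and seed are illustrative):

```python
import random

# Sketch of the test markets: price starts at 1000 and moves +/- 1 each day.
# p_up = 0.5 gives the fair-coin market; p_up = 0.6 gives the biased one.
def coin_toss_market(days, p_up, seed):
    rng = random.Random(seed)
    price = 1000
    prices = []
    for _ in range(days):
        price += 1 if rng.random() < p_up else -1
        prices.append(price)
    return prices

series = coin_toss_market(10000, p_up=0.5, seed=1)
print(series[:3])
```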
We fed this data to Genotick ten times to see if it makes a “profit”:
Illustration 3: Results "trading" random data
As can be seen, Genotick is not profitable on data generated by a fair coin toss, and
it is safe to assume that there is no “look-ahead” error. In fact, most runs yielded
negative results. This can be explained by the inherent problems with fixed
fractional position sizing (Genotick always uses all available capital for trading)
[11].
Results on fake data with known pattern
One way to judge an AI algorithm is to see if it can learn something we
already know. In the case of predicting changes in time series this can be
achieved by feeding the algorithm data that contains some known pattern.
In investing terms it is known as “an inefficiency”. As market inefficiencies are
never 100 % reliable, simulating this as a pattern requires that it appears only
occasionally.
In a famous experiment [2] where rats beat Yale University students, there
was a 60 % chance that a food pellet would drop on the left side of a T-shaped maze.
Rats quickly learned to ignore the right side and in the end achieved nearly 60 %
correct guesses. In the meantime, the students tried to find hidden patterns and
ended up with only 52 % correct guesses. They refused to believe that there was
nothing more to predict and that an error is a natural consequence of dealing with
probabilities. It is interesting to see if Genotick can be as smart as the rats. We
created a time series where the value has a 60% chance to go up by 1.0 and
a 40% chance to go down by 1.0. The initial value was 1000000. We created 10
thousand data points:
Illustration 4: Fake market, going Up 6 out of 10 times
We created a mirrored data file to be able to remove non-symmetrical systems:
Then we ran Genotick with its default settings. The cumulative “profit” on this
data is presented below:
Genotick needed only 86 data points to start betting “Up” every time.
Next came fake trending data. We created a “market” where the price had a 55%
chance that the change between data points (n+2) and (n+3) would be in the same
direction as the change between (n) and (n+1). There was a 45% chance that the change
would be in the opposite direction. We also created a mirrored market. The chart below
shows both time series:
Illustration 7: Fake trending market with its mirror reflection
Genotick struggled for a long time but in the end it learned the inefficiency and
exploited it:
Similarly to the fake trending data, we created a fake mean-reverting “market”.
This time, there was a 45% chance that the change from (n+2) to (n+3) would be in the
same direction as the change from (n) to (n+1), and a 55% chance that the
change would be in the opposite direction. The chart below shows the main and mirrored
time series:
And this is how our software fared:
The results on fake data with known inefficiencies suggest that our algorithm is
capable of learning. The most important part is that it learned as it went along.
Therefore, the process can be repeated in real life, trading and learning one data point
at a time.
Results on real market data
One of the most touted investment strategies is Buy-and-Hold. Its proponents
argue that it is impossible to predict changes in the stock market, and that therefore any
profit from active investing must be attributed to luck alone. With the Buy-and-Hold
strategy, an investor would buy some index-tracking product and hold it forever.
However, investing all assets in the stock market seems rather risky, so the authors
decided to build a Buy-and-Hold portfolio out of three markets:
1. SPX index as a proxy for the wide market and buy-and-hold.
2. Spot Gold as a proxy for inflation protection.
3. US Dollar index as a proxy for cash and bonds.
We used 10 years of historical data for each market, from 2006 to 2015
inclusive. With daily rebalancing, the total profit would be 72 % and the
maximum drawdown 18 %.
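The portfolio arithmetic here is standard; a minimal sketch of daily rebalancing and maximum drawdown, using made-up returns rather than the actual market data:

```python
# Sketch: equity curve of an equal-weight, daily-rebalanced portfolio,
# plus the maximum peak-to-trough drawdown of that curve.
def rebalanced_equity(daily_returns_per_market):
    equity, curve = 1.0, []
    for returns in daily_returns_per_market:  # one tuple of per-market returns per day
        equity *= 1 + sum(returns) / len(returns)
        curve.append(equity)
    return curve

def max_drawdown(curve):
    peak, worst = curve[0], 0.0
    for value in curve:
        peak = max(peak, value)
        worst = max(worst, (peak - value) / peak)
    return worst

curve = rebalanced_equity([(0.01, 0.02, -0.01), (-0.03, 0.00, -0.03)])
print(round(max_drawdown(curve), 4))  # 0.02: the second day loses 2 % from the peak
```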
Illustration 11: Buy-and-Hold results on SPX, Gold and USD Index
Illustration 12: Genotick's profit on SPX, Gold and USD Index
For comparison, we ran Genotick with its default settings on the same
markets. For initial training we used data from 1st January 2000. The algorithm
continued training (and trading) until 31st December 2015, but the period from 1st
January 2006 was “on record”. The table below summarizes performance from 1st
January 2006 to 31st December 2015 for both methods.
                                    Genotick    Buy-and-Hold
CAGR (%)                            8.7         5.5
Maximum drawdown (%)                12.5        18
MAR ratio                           0.69        0.3
Sharpe ratio (1% risk-free rate)    0.88        0.66

Table 2: Genotick vs Buy-and-Hold
It is worth noting that the presented results show only the “directional” edge gained by
the software. There were no stop-losses and no position sizing, which can greatly
improve real-life results.
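The summary statistics themselves are easy to reproduce; a sketch of the CAGR and MAR arithmetic, using the Buy-and-Hold figures quoted above:

```python
# Sketch of the statistics behind Table 2: CAGR from a total return, and the
# MAR ratio (annualized return divided by maximum drawdown).
def cagr_percent(total_return_percent, years):
    growth = 1 + total_return_percent / 100
    return (growth ** (1 / years) - 1) * 100

def mar_ratio(cagr_pct, max_drawdown_pct):
    return cagr_pct / max_drawdown_pct

print(round(cagr_percent(72, 10), 1))  # 5.6: close to Buy-and-Hold's reported 5.5
print(round(mar_ratio(5.5, 18), 2))    # 0.31: close to the reported 0.3
```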
Parameter sensitivity
Similarly to a trading system created by a human, it is required that an AI
algorithm does not break down when run with parameters different from the
ones suggested by the authors. Genotick has a convenient command line argument
‘input=random’ to run its operations with random input parameters.
We ran Genotick 10 times to see how sensitive it is to changing
parameters. Although the total profit made by the algorithm varied heavily, this was
mainly due to the fact that some settings let it learn more quickly than others.
Once the inefficiencies were discovered, profit was fairly similar on each run.
The start date for the simulations was 1st January 1999; the last day was 31st December
2015. The table below shows profits (in percent) for each year from 2006 to 2015.
2006 2007 2008 2009 2010 2011 2012 2013 2014 2015
Run 1 12.0 26.9 14.1 17.4 14.7 -0.9 -1.5 2.1 6.7 -1.1
Run 2 11.9 26.8 14.8 17.4 14.1 -0.6 -1.7 2.1 6.5 -1.2
Run 3 11.4 26.6 15.1 18.2 14.1 -0.7 -1.5 2.1 6.5 -1.0
Run 4 11.6 27.0 16.1 18.2 14.1 -0.8 -1.6 2.1 6.8 -1.0
Run 5 11.9 25.9 15.7 18.2 13.6 -1.1 -1.6 2.2 6.7 -1.1
Run 6 12.2 27.1 15.3 18.9 14.2 -1.1 -1.6 2.1 6.8 -1.1
Run 7 11.3 26.8 14.2 18.9 14.1 -1.1 -1.7 2.2 6.0 -0.8
Run 8 11.5 26.0 15.0 16.2 14.4 -0.9 -0.6 0.3 6.0 -1.8
Run 9 11.8 26.6 15.3 17.4 14.1 -0.8 -1.7 2.3 6.4 -1.1
Run 10 11.6 27.1 16.1 18.5 14.1 -0.6 -1.7 2.1 6.4 -1.0
Table 3: Results with randomized input parameters
As can be seen, the program’s results depend very little on the initial parameters.
In fact, the authors would like to achieve more variation between independent
runs.
License & Source Code
Genotick has been released under the GNU General Public License. GPL is an
“Open-Source” license, which means that everybody can copy, modify and
extend the program to fit their needs. The full text of the license can be found at [12].
Source code is publicly available via GitHub service at
https://github.com/alphatica/genotick
Conclusions
In this paper we showed that it is possible to use Artificial Intelligence to
trade the stock market profitably. We presented a new method that is resistant to
overlearning and over-optimizing. We showed that it is capable of learning
market inefficiencies and can be used in real life. We described the algorithm in
its entirety and presented computer software that implements it. We released the
software under the liberal GPL license, which allows others to modify and extend it.
Appendix I: Software settings
Naturally, such an algorithm has to make a lot of decisions, such as
which systems should be removed, which parents should be chosen for breeding
and so on. The table below summarizes the settings in our software.
Parameter                         Default  Description
PopulationDesiredSize             5000     Desired number of systems in the population.
ProcessorInstructionLimit         256      Prevents systems from running forever. The maximum
                                           instruction count is calculated as value * system’s
                                           length.
MaximumDeathByAge                 0.01     Used to calculate how many systems are considered
                                           for removal based on their age.
MaximumDeathByWeight              0.01     Used to calculate how many systems are considered
                                           for removal based on their weight.
ProbabilityOfDeathByAge           0.5      Probability of removing a system because it is
                                           too old.
ProbabilityOfDeathByWeight        0.5      Probability of removing a system because its weight
                                           is too close to 0.
InheritedChildWeight              0        Initial weight for a child, calculated as average
                                           parents’ weight * value.
DataMaximumOffset                 256      Limits systems’ access to historical data. Systems
                                           cannot see further back than value data points.
ProtectRobotsUntilOutcomes        100      Regulates how long systems are protected for.
                                           Protected systems cannot be removed due to their
                                           weight or age.
NewInstructionProbability         0.01     Probability of a new instruction when making
                                           a child.
InstructionMutationProbability    0.01     Probability of mutating an existing instruction
                                           when making a child.
SkipInstructionProbability        0.01     Probability of skipping an instruction when making
                                           a child.
MinimumOutcomesToAllowBreeding    50       Decides when a system can have a child for the
                                           first time. Used to prevent overbreeding for one
                                           system.
MinimumOutcomesBetweenBreeding    50       Decides how soon a system can have a child after
                                           its previous child.
KillNonPredictingRobots           true     Allows removing systems that vote to be out of the
                                           market. In such a case the system is removed even
                                           if it is protected.
RequireSymmetricalRobots          true     Allows removing systems that do not yield a
                                           mirrored vote on mirrored data. Used to prevent
                                           keeping systems that have a permanent Long or
                                           Short bias.
RandomRobotsAtEachUpdate          0.02     Number of totally new and random systems to be
                                           added at each time point (as a fraction of
                                           PopulationDesiredSize).
ProtectBestRobots                 0.02     Elitism. Number of best systems to protect (as a
                                           fraction of PopulationDesiredSize). Protected
                                           systems are not removed even if their age is high.
IgnoreColumns                     0        Ignores the first value columns while learning.

Table 4: Software parameters
Appendix II: Instruction set
As explained earlier, Genotick uses simple instructions to manipulate
data and compute output. Instructions that manipulate data have simple,
self-explanatory names. Computational instructions are explained in the table at
the end of this chapter. All instructions are implemented in the SimpleProcessor
class, in the file SimpleProcessor.java. If the reader is interested in the exact execution
algorithm, it is best to read the source code.
Instructions that manipulate data
AddDoubleToRegister, AddDoubleToVariable, AddRegisterToRegister,
AddRegisterToVariable, AddVariableToVariable, DecrementRegister,
DecrementVariable, DivideRegisterByDouble, DivideRegisterByRegister,
DivideRegisterByVariable, DivideVariableByDouble, DivideVariableByRegister,
DivideVariableByVariable, IncrementRegister, IncrementVariable,
MoveDataToRegister, MoveDataToVariable, MoveDoubleToRegister,
MoveDoubleToVariable, MoveRegisterToRegister, MoveRegisterToVariable,
MoveRelativeDataToRegister, MoveRelativeDataToVariable,
MoveVariableToRegister, MoveVariableToVariable, MultiplyRegisterByDouble,
MultiplyRegisterByRegister, MultiplyRegisterByVariable,
MultiplyVariableByDouble, MultiplyVariableByVariable,
NaturalLogarithmOfData, NaturalLogarithmOfRegister,
NaturalLogarithmOfVariable, SqRootOfRegister, SqRootOfVariable,
SubtractDoubleFromRegister, SubtractDoubleFromVariable,
SubtractRegisterFromRegister, SubtractRegisterFromVariable,
SubtractVariableFromRegister, SubtractVariableFromVariable, SwapRegisters,
SwapVariables, ZeroOutRegister and ZeroOutVariable.
Jumps to control looping and conditional execution path
JumpIfRegisterEqualDouble, JumpIfRegisterEqualRegister,
JumpIfRegisterEqualZero, JumpIfRegisterGreaterThanDouble,
JumpIfRegisterGreaterThanRegister, JumpIfRegisterGreaterThanZero,
JumpIfRegisterLessThanDouble, JumpIfRegisterLessThanRegister,
JumpIfRegisterLessThanZero, JumpIfRegisterNotEqualDouble,
JumpIfRegisterNotEqualRegister, JumpIfRegisterNotEqualZero,
JumpIfVariableEqualDouble, JumpIfVariableEqualRegister,
JumpIfVariableEqualVariable, JumpIfVariableEqualZero,
JumpIfVariableGreaterThanDouble, JumpIfVariableGreaterThanRegister,
JumpIfVariableGreaterThanVariable, JumpIfVariableGreaterThanZero,
JumpIfVariableLessThanDouble, JumpIfVariableLessThanRegister,
JumpIfVariableLessThanVariable, JumpIfVariableLessThanZero,
JumpIfVariableNotEqualDouble, JumpIfVariableNotEqualRegister,
JumpIfVariableNotEqualVariable, JumpIfVariableNotEqualZero and JumpTo.
Finishing instructions
Any of these instructions will terminate execution of the program:
ReturnRegisterAsResult, ReturnVariableAsResult and TerminateInstructionList.
Computational instructions
These instructions compute a value for a given column in a data file:
Instruction name   Description
AverageOfColumn    Calculates the arithmetic average of the given column. The
                   column’s index and the length of the average are given as
                   arguments.
HighestOfColumn    Calculates the highest value of the given column. The
                   column’s index and length are given as arguments.
LowestOfColumn     Calculates the lowest value of the given column. The
                   column’s index and length are given as arguments.
SumOfColumn        Calculates the sum of all values in the given column. The
                   column’s index and length are given as arguments.

Table 5: Computational instructions
Literature
[1] Gallistel, C. R. (1993). The Organization of Learning (Learning, Development,
and Conceptual Change). A Bradford Book: 662 pages. ISBN-13: 978-0262570985
[2] Language Log. (December 11, 2005). Rats beat Yalies? Doing better by
getting less information?
http://itre.cis.upenn.edu/~myl/languagelog/archives/002700.html
[3] Overall, J. E., & Brown, W. L. (1959). A comparison of the decision
behaviour of rats and of human subjects. The American Journal of Psychology,
72(2), 258–261.
[4] Spragg, S. S. (1934). Anticipatory responses in the maze. Journal of
Comparative Psychology, 18(1), 51–73.
[5] Tetlock, P. (2005). Expert Political Judgment: How Good Is It? How Can We
Know? Princeton University Press. ISBN-13: 978-0691128719
[6] Michalewicz, Z., & Fogel, D. B. (2004). How to Solve It: Modern Heuristics.
ISBN-13: 978-3662078075
[7] Mitchell, M. (1998). An Introduction to Genetic Algorithms (Complex
Adaptive Systems). ISBN-10: 0262631857
[8] Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization and
Machine Learning. ISBN-10: 0201157675
[9] Surowiecki, J. (2005). The Wisdom of Crowds: Why the Many Are Smarter
Than the Few. ISBN-10: 0349116059
[10] Predicting coin toss with 58% probability. Retrieved from
https://github.com/tomekd789/clogann/tree/master/applications/falsecointoss
[11] Vince, R. (1992). The Mathematics of Money Management: Risk Analysis
Techniques for Traders. ISBN-10: 0471547387
[12] Free Software Foundation. GPL License.
[13] van Dyk, S. (2013). Genetic Programming: Evolving Decision Trees with
Application in Investment Management.
[14] Reid, S. (2013). Genetic Programming for Security Analysis.