
Mate-Similarity Evolutionary Algorithm: A New Method in Evolutionary Computation
Abbas Pirnia-ye Dezfuli
Computer Eng. Dept., Azad Univ., Shiraz Branch
pirnia@iaushiraz.ac.ir

Abstract

A general model of a new method in evolutionary computation is presented. The basic idea of this model is to match the parents in the recombination operation based on their similarity. Using the concept of similarity, the tendency of an individual to recombine with another one is measured, and parent matching is carried out based on this measure. This model is implemented and tested in the field of function optimization, and compared with a traditional EA-based function optimizer. The experimental part of the paper presents comparisons on the basis of standardized test functions at different scales. The last section discusses the potential of the presented model and points out its strengths and weaknesses.

I. INTRODUCTION

Being an evolving community of people, ideas and applications, evolutionary computation is itself an evolving field. What we refer to as evolutionary computation is "the use of evolutionary systems as a computational process for solving complex problems". Although there are several books and survey papers describing specific subspecies of the field (e.g. genetic algorithms and evolutionary programming), only in the past few years have people begun to consider such systems as particular instances of a more general class of evolutionary algorithms (De Jong, 2006). To date, researchers in the field have proposed several evolutionary computational models, which we will refer to as evolutionary algorithms. A common conceptual base in all these algorithms is simulating the evolution of individual structures via processes of selection and recombination which depend on the perceived performance (fitness) of the individuals as defined by an environment (Spears et al., 1993).

There are three canonical evolutionary algorithms which historically are of importance (De Jong, 2006): "evolutionary programming" (Fogel et al., 1966), "evolution strategies" (Rechenberg, 1973), and "genetic algorithms" (Holland, 1975). As mentioned above, these algorithms share a common conceptual base, but each of them implements an evolutionary algorithm in a different manner. The differences concern almost all aspects of EAs, including individual representation, selection mechanism, genetic operators, and measures of performance (Spears et al., 1993). The interested reader may refer to De Jong (2006) for both more details about the algorithms and the reasons for their importance.

These approaches in turn have inspired the development of additional evolutionary algorithms such as "adaptive operator systems" (Davis, 1989), "genetic programming" (de Garis, 1990; Koza, 1991), "classifier systems" (Holland, 1986), "messy GAs" (Goldberg, 1991), the LS systems (Smith, 1983), GENITOR (Whitley, 1989), SAMUEL (Grefenstette, 1989), and the CHC approach (Eshelman, 1991). The interested reader is encouraged to peruse the recent literature for more details (e.g., Belew and Booker, 1991; Fogel and Atmar, 1992; Whitley, 1992; Männer and Manderick, 1992).

As Spears and his colleagues have mentioned in their valuable study (Spears et al., 1993), available evolutionary algorithms are simplistic from a biologist's viewpoint, and it is important to revisit the biological and evolutionary literature for new insights and inspirations for enhancements. Booker (1992) has pointed out the connections between GA recombination theory and the recombination distribution theory of population genetics. Mühlenbein (1993) concentrated on EAs that are modelled after breeding practices.

This research is based on more precise inspirations from nature with regard to mate selection for recombination. The basic idea is to model a mechanism for mate selection in an EA similar to natural mate selection in biological systems. In evolutionary algorithms, mate selection during a recombination operation (e.g. crossover in GAs) is traditionally performed coincidentally. For example, in GAs, when a chromosome is selected for reproduction (using fitness-proportional selection), it must first wait for the next chromosome to be selected as a parent; then this couple is mixed (via a particular kind of crossover). One can say there is no particular gravity between individuals for recombination except the nearness of their fitness values. When the fitness-proportional selection mechanism is used, as the search continues, more and more individuals receive fitness values with small relative differences. This lessens the selection pressure, decreasing the above-mentioned gravity. We can use mate selection mechanisms other than the traditional ones to make our EAs more plausible evolutionary systems from a biological point of view.
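The traditional scheme the paper contrasts against — fitness-proportional (roulette-wheel) selection, in which mates share no attraction beyond their individual fitness values — can be sketched as follows. This is an illustrative Python sketch with made-up data, not the paper's code (the authors' implementation is in C#):

```python
import random

def roulette_select(population, fitnesses):
    """Fitness-proportional (roulette-wheel) selection: individual i is
    drawn with probability fitnesses[i] / sum(fitnesses)."""
    pick = random.uniform(0.0, sum(fitnesses))
    acc = 0.0
    for individual, fit in zip(population, fitnesses):
        acc += fit
        if acc >= pick:
            return individual
    return population[-1]  # guard against floating-point round-off

# Two parents are drawn independently and then mated; the pairing itself
# carries no notion of attraction between the two individuals.
pop = ["a", "b", "c", "d"]
early = [1.0, 2.0, 4.0, 8.0]  # spread-out fitnesses: strong selection pressure
late = [7.1, 7.2, 7.3, 7.4]   # converged fitnesses: selection is nearly uniform
parents = (roulette_select(pop, early), roulette_select(pop, early))
```

With the `late` fitness vector the selection probabilities are almost equal, which is exactly the loss of "gravity" described above.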
Since the mate selection mechanism proposed here is based upon the similarity of the individuals, the algorithm is named the Mate Similarity Evolutionary Algorithm (MSEA). This research shows that MSEA may result in either better or worse results depending on the problem that is to be solved.

II. MSEA ALGORITHM

A typical evolutionary algorithm is outlined in figure 1 (Spears et al., 1993).

Procedure EA; {
  t = 0;
  initialize population P(t);
  evaluate P(t);
  until (done) {
    t = t + 1;
    parent_selection P(t);
    recombine P(t);
    mutate P(t);
    evaluate P(t);
    survive P(t);
  }
}

Fig. 1. A typical evolutionary algorithm

According to figure 1, parent selection is an important step of a typical EA, and it is implemented in a different manner by each variety of EA. In evolutionary programming (EP), developed by Fogel et al. (1966), all of the individuals (N) are selected to be parents, and then are mutated to produce N children. Since EP has no recombination step, there is no need to consider mate selection in it. In evolution strategies, developed by Rechenberg (1973) and enhanced by Schwefel (1981), individuals are selected uniformly at random to be parents, and then are recombined and mutated to produce offspring. Therefore, one can say there is no direct factor affecting the tendency of one chromosome towards other ones for mating. In genetic algorithms, developed by Holland (1975), parents are selected based on a probabilistic function representing relative fitness. As a matter of fact, the higher the relative fitness an individual has, the more likely it is that the individual will be selected as a parent. Therefore, in GAs, mate selection is indirectly influenced just by the relative fitness factor. In other words, individuals with higher relative fitness tend to be recombined with other high-fitness individuals. Since the fitness values of the individuals converge, and therefore so do the relative fitness values, the resulting tendency of one chromosome towards the others becomes uniform. In addition, once more from a biological point of view, this is different from what really happens in natural populations.

Herein, the new concept of tendency is proposed to cover the mentioned issue. The tendency of individual x towards individual y specifies the probability of x selecting y as its mate if x is selected for reproduction. Inspired by natural biological systems, the tendency of one individual towards another one is defined based on their similarity, which is defined in turn based on their phenotypic features. MSEA uses a real-valued vector representation. The whole vector is considered as an individual, and each real value stands for one gene. Considering some functions to translate genotypic information into phenotypic form, we can assume two spaces in which each individual is situated somewhere corresponding to its features: the genotypic space and the phenotypic space. The mentioned translator functions are used to map the first space to the second one. For example, let's suppose that our individuals carry the information of some cubes. Each individual may have three genes (real values) to represent height, width, and depth. We can define two functions to calculate the volume and surface of each individual as its phenotypic features. Therefore, we have a three-dimensional genotypic space, and using those two translator functions (i.e. volume and surface) the information is mapped into a two-dimensional phenotypic space.

The tendency of c_i towards c_j is notated as t(c_i, c_j) and is defined as below:

  t(c_i, c_j) = 1 / sqrt( Σ_k (ph_{i,k} − ph_{j,k})² )   if c_i ≠ c_j
  t(c_i, c_j) = 0                                        if c_i ≡ c_j     (1)

where ph_{i,k} stands for the kth phenotypic feature of individual c_i. The phrase c_i ≡ c_j means that c_i and c_j are the same regarding their phenotypic features.

It is clear that t(c_i, c_j) is the inverse of the distance between c_i and c_j in the phenotypic space. Therefore, the more similar two individuals are, the stronger the likelihood that they will be recombined with each other. In order to translate the tendency information into a probabilistic domain, another variable, namely the relative tendency, is notated as rt(c_i, c_j) and defined as below:

  rt(c_i, c_j) = t(c_i, c_j) / Σ_k t(c_i, c_k)     (2)

We can store the relative tendencies of each individual towards each member of the population in a two-dimensional array, namely the relative tendency table. Figure 2 illustrates the above concepts by an example. Three real-valued vectors representing three cubes (height, width, depth) are depicted in figure 2. Using the volume and surface functions, two phenotypic features are calculated for each individual.
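Equations (1) and (2), together with the cube example, can be sketched in code as follows. This is an illustrative Python sketch (the paper's implementation is in C#, and the helper names here are our own); the phenotype values it computes match figure 2, and the tendency values match figure 3:

```python
import math

def tendency(pi, pj):
    """Eq. (1): the inverse Euclidean distance between two phenotype
    vectors; zero when the phenotypes coincide."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(pi, pj)))
    return 0.0 if d == 0.0 else 1.0 / d

def relative_tendency_table(phenotypes):
    """Eq. (2): row i holds rt(ci, cj) = t(ci, cj) / sum_k t(ci, ck)."""
    n = len(phenotypes)
    t = [[tendency(phenotypes[i], phenotypes[j]) for j in range(n)]
         for i in range(n)]
    return [[t[i][j] / sum(t[i]) for j in range(n)] for i in range(n)]

def phenotype(h, w, d):
    """Translator functions of the cube example: genotype (height,
    width, depth) -> phenotype (volume, surface)."""
    return (h * w * d, 2 * (h * w + h * d + w * d))

cubes = [(9.1, 6.8, 4.4), (5.3, 8.5, 1.7), (3.3, 4.8, 4.2)]
ph = [phenotype(*c) for c in cubes]
print(ph[0])                             # ≈ (272.272, 263.68), as in figure 2
print(round(tendency(ph[0], ph[1]), 4))  # 0.0043, cf. t(c1, c2) in figure 3
rt = relative_tendency_table(ph)         # each row sums to one
```

Note that eq. (2) normalises each row of the table over the tendencies of that particular selector, so every row of `rt` is a probability distribution over the possible mates.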
  Individuals:  C1 = (9.1, 6.8, 4.4)   C2 = (5.3, 8.5, 1.7)   C3 = (3.3, 4.8, 4.2)
  Volume:       272.272                76.585                 66.528
  Surface:      263.68                 137.02                 99.72

Figure 2. Three individuals representing the dimensions of three cubes, and their phenotypic features.

Figure 3 illustrates the location of these individuals in the phenotypic space, and table 1 shows the relative tendency table for them.

  t(c1, c2) = 4.2E-3,  t(c3, c1) = 3.8E-3,  t(c2, c3) = 2.6E-2

Figure 3. The phenotypic space (volume, surface) and the positions of the three individuals in the space, along with their mutual tendencies.

        C1      C2      C3
  C1    0       0.1263  0.1118
  C2    0.1263  0       0.7619
  C3    0.1118  0.7619  0

Table 1. The relative tendencies of the individuals towards one another. Since relative tendency is a symmetric relation, the resulting matrix is symmetric as well.

According to the above, the MSEA procedure can be defined as depicted in figure 4, in which two new steps have been added to the typical EA before recombination, instead of parent selection.

Procedure MSEA; {
  t = 0;
  initialize population P(t);
  until (done) {
    t = t + 1;
    make relative_tendency_table P(t);
    mate_selection P(t);
    recombine P(t);
    mutate P(t);
    evaluate P(t);
    survive P(t);
  }
}

Fig. 4. Mate-similarity evolutionary algorithm

The population in MSEA is initialized randomly, as usual. The parent selection step has been omitted, because all individuals have (and take) the opportunity to be recombined as parents. After making the relative tendency table, in the mate selection step, each individual is tagged as either a selector or a selected individual. The algorithm takes as an input the number of offspring couples to be reproduced via each recombination, namely the reproduction coefficient. The offspring population size is equal to the parents' if the reproduction coefficient is equal to one. There will be no individuals from the parent population in the next generation, and therefore the competition for survival is just between the children. As the reproduction coefficient increases, the pressure and seriousness of the competition likewise increase. There is no survival step if the reproduction coefficient is equal to one. The selection mechanism for survival is fitness-proportional.

III. EXPERIMENTAL RESULTS

It is clear that one cannot refer to adaptation without having a particular performance goal in mind. EP and ES are usually used to solve optimization problems. GAs are not function optimizers per se, although they can be used as such. There is very little theory indicating how well GAs will perform optimization tasks. Instead, theory concentrates on what is referred to as accumulated payoff (De Jong, 1992). In this research, MSEA has optimization as its performance goal. In other words, it is most interested in finding the best solution as quickly as possible.

Of course, problem difficulty is a function of the problem and the goal. Although many possible measures of problem difficulty exist (e.g. fitness correlation [Manderick et al. 1991] and evolvability [Spears et al., 1993]), one important measure is deceptiveness (Goldberg 1989). Difficult problems are often constructed by taking advantage of EAs in such a way that selection deliberately steers away from the global optimum. Such problems are called deceptive.

All results of this section are achieved by running an implementation of MSEA on a basis of standardized test functions. The test functions used herein have been used by several authors for analyzing the performance of different optimization algorithms (e.g. [Potter and De Jong 1994] and [Hiroyasu et al. 2000]). Most of the test functions have been designed specially to reveal the weak points of the algorithms (Affenzeller and Wagner 2005). The selected test functions (other than De Jong's) are of a high degree of difficulty. In the case of the multimodal functions (i.e. f3, f4 and f5), the degree of difficulty can be scaled up by increasing the dimension of the search space, due to the exponentially increasing number of local minima. Other functions are difficult in other ways mentioned in GEATbx (for example, the unimodal Rosenbrock function, whose difficulty is given by the extremely flat region around the global optimum).
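The loop of figure 4 can be sketched as follows — a minimal, illustrative Python version (the authors' C# implementation is not available; the 1-point crossover, the uniform per-gene mutation step, and the 1/(1 + f) weighting used to make fitness-proportional survival work for a non-negative minimization objective are simplifying assumptions of this sketch):

```python
import math
import random

def tendency(pi, pj):
    """Eq. (1): inverse Euclidean distance between phenotype vectors."""
    d = math.dist(pi, pj)
    return 0.0 if d == 0.0 else 1.0 / d

def msea(fitness, phenotype, pop, generations=300, repro_coef=2,
         step=0.1, rng=random):
    """Minimal sketch of the MSEA loop in figure 4 (minimization)."""
    n = len(pop)
    for _ in range(generations):
        ph = [phenotype(c) for c in pop]
        children = []
        for _ in range(repro_coef):            # reproduction coefficient
            for i in range(n):                 # every individual is a selector
                # relative tendencies of selector i (eq. 2) as mate weights
                w = [tendency(ph[i], ph[j]) if j != i else 0.0
                     for j in range(n)]
                mate = (pop[rng.randrange(n)] if sum(w) == 0.0
                        else rng.choices(pop, weights=w)[0])
                cut = rng.randrange(1, len(pop[i]))      # 1-point crossover
                child = list(pop[i][:cut] + mate[cut:])
                g = rng.randrange(len(child))            # one-position mutation
                child[g] += rng.uniform(-step, step)
                children.append(tuple(child))
        # fitness-proportional survival; 1 / (1 + f) turns a non-negative
        # minimization objective into positive selection weights
        weights = [1.0 / (1.0 + fitness(c)) for c in children]
        pop = rng.choices(children, weights=weights, k=n)
    return min(pop, key=fitness)

# Demo on the 2-D De Jong sphere; each dimension is its own phenotype,
# so genotype and phenotype coincide (as in the experiments below).
rng = random.Random(1)
sphere = lambda x: sum(v * v for v in x)
start = [tuple(rng.uniform(-5.12, 5.12) for _ in range(2)) for _ in range(20)]
best = msea(sphere, lambda c: c, start, rng=rng)
print(sphere(best))
```

With a reproduction coefficient of 2, each generation produces twice as many children as parents, and the fitness-proportional survival step cuts the pool back down to the original population size.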
• The n-dimensional De Jong function (convex and unimodal):

  f1(x) = Σ_{i=1..n} x_i²

  for −5.12 ≤ x_i ≤ 5.12, with a global minimum of f1(x) = 0 at x_i = 0, 1 ≤ i ≤ n.

• The n-dimensional Rosenbrock function (unimodal):

  f2(x) = Σ_{i=1..n−1} [ 100·(x_{i+1} − x_i²)² + (1 − x_i)² ]

  for −2.048 ≤ x_i ≤ 2.048, with a global minimum of f2(x) = 0 at x_i = 1, 1 ≤ i ≤ n.

• The n-dimensional Rastrigin function (highly multimodal; the minima are regularly distributed):

  f3(x) = 10·n + Σ_{i=1..n} ( x_i² − 10·cos(2·π·x_i) )

  for −5.12 ≤ x_i ≤ 5.12, with a global minimum of f3(x) = 0 at x_i = 0, 1 ≤ i ≤ n.

• The n-dimensional Schwefel function (Schwefel's function is deceptive in that the global minimum is geometrically distant, over the parameter space, from the next best local minima):

  f4(x) = Σ_{i=1..n} −x_i · sin( sqrt(|x_i|) )

  for −500 ≤ x_i ≤ 500, with a global minimum of f4(x) = −n·418.9829 at x_i = 420.9687, 1 ≤ i ≤ n.

• The n-dimensional Ackley function (a widely used multimodal test function):

  f5(x) = −a · exp( −b · sqrt( (1/n)·Σ_{i=1..n} x_i² ) ) − exp( (1/n)·Σ_{i=1..n} cos(c·x_i) ) + a + e

  for a = 20, b = 0.2, c = 2·π, and −32.768 ≤ x_i ≤ 32.768, with a global minimum of f5(x) = 0 at x_i = 0, 1 ≤ i ≤ n.

• The two-dimensional Easom function (a unimodal test function, where the global minimum has a small area relative to the search space):

  f6(x1, x2) = −cos(x1) · cos(x2) · exp( −((x1 − π)² + (x2 − π)²) )

  for −100 ≤ x1, x2 ≤ 100, with a global minimum of f6(x1, x2) = −1 at (x1, x2) = (π, π).

• The two-dimensional Six-hump camel back function (within the bounded region there are six local minima, two of which are global minima):

  f7(x1, x2) = (4 − 2.1·x1² + x1⁴/3) · x1² + x1·x2 + (−4 + 4·x2²) · x2²

  for −3 ≤ x1 ≤ 3 and −2 ≤ x2 ≤ 2, with a global minimum of f7(x1, x2) = −1.0316 at (x1, x2) = (−0.0898, 0.7126), (0.0898, −0.7126).

Since we assumed each dimension to be a phenotype, the genotypic and phenotypic spaces herein are the same. But one can consider a specific individual representation with which these spaces are different. For example, each real value standing for a phenotypic attribute can be represented as two integer values in the genotypic transcription: the first one represents the integer part of the real value, and the second one represents the decimal part.

The programs are written in the C# programming language using the Microsoft .NET Framework 2. All values presented in the following tables are averages of the best responses of thirty independent test runs executed for each test case.

Parameter Values Used in MSEA on the Selected Test Functions in Tables 3 and 4:

  Reproduction Coefficient:  2
  Crossover Operator:        n-point Crossover
  Mutation Operator:         One-position Uniform Mutation
  Stop Criterion:            Number of Generations = 10^5
  Mutation Rate:             0.75
  Elitism Rate:              2

Table 2. Parameter values used in the test runs of MSEA for the different test functions illustrated in tables 3 and 4.

Fitness Landscape: De Jong Function
  Problem Dimension | Performance Duration | Population Size | Mutation Step Size | Best Answer
  2    | 34    | 20   | (-0.0001, 0.0001) | 1.70659765862187E-20
  10   | 195   | 30   | (-0.001, 0.001)   | 2.89426955556278E-12
  100  | 27840 | 100  | (-0.1, 0.1)       | 1.10998473967393E-04

Fitness Landscape: Rosenbrock Function (columns as above)
  2    | 46    | 20   | (-0.0001, 0.0001) | 1.19756642881422E-15
  10   | 310   | 30   | (-0.01, 0.01)     | 1.23108809261101E-03
  100  | 41760 | 100  | (-0.1, 0.1)       | 1.86116667198059E+02

Fitness Landscape: Rastrigin Function
  2    | 37    | 20   | (-0.5, 0.5)       | 4.82358153419682E-09
  10   | 260   | 30   | (-1.0, 1.0)       | 1.24309278616863E-03
  100  | 33180 | 100  | (-1.0, 1.0)       | 3.74889134778938E+02

Fitness Landscape: Schwefel Function
  2    | 72    | 20   | (-100, 100)       | 1.10889459392638E-07
  10   | 395   | 30   | (-100, 100)       | 3.94795007820163E+02
  100  | 74340 | 100  | (-200, 200)       | 1.47173245664132E+04

Fitness Landscape: Ackley Function
  2    | 75    | 20   | (-1.0, 1.0)       | 5.26595764731574E-05
  10   | 360   | 30   | (-1.0, 1.0)       | 2.30098770470022E-03
  100  | 35360 | 100  | (-1.0, 1.0)       | 5.4263123197417

Fitness Landscape: Easom Function
  2    | 52    | 20   | 10                | -0.999999999816228

Fitness Landscape: Six-hump camel back Function
  2    | 2     | 20   | 0.001             | -1.03162845348588

Table 3. Experimental results achieved in the test runs of MSEA for the different test functions in various dimensions.

In addition to the above tables, figures 5, 6, 7, and 8 depict four comparisons between a traditional EA (i.e. one that performs mate selection randomly instead of using the tendency-based method) and MSEA on four test functions: De Jong (unimodal), Rosenbrock (unimodal), Rastrigin (multimodal), and Schwefel (multimodal). Table 4 illustrates the parameter values used in the test runs depicted in the following figures.
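The benchmark functions above translate directly into code. The following Python sketch (the function names are our own) implements f1–f7 and checks them against the global minima stated above:

```python
import math

def de_jong(x):                        # f1: sphere, min 0 at x = 0
    return sum(v * v for v in x)

def rosenbrock(x):                     # f2: min 0 at x = (1, ..., 1)
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):                      # f3: min 0 at x = 0
    return 10.0 * len(x) + sum(v * v - 10.0 * math.cos(2.0 * math.pi * v)
                               for v in x)

def schwefel(x):                       # f4: min ≈ -418.9829·n at x_i ≈ 420.9687
    return sum(-v * math.sin(math.sqrt(abs(v))) for v in x)

def ackley(x, a=20.0, b=0.2, c=2.0 * math.pi):   # f5: min 0 at x = 0
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(c * v) for v in x) / n
    return -a * math.exp(-b * math.sqrt(s1)) - math.exp(s2) + a + math.e

def easom(x1, x2):                     # f6: min -1 at (pi, pi)
    return (-math.cos(x1) * math.cos(x2) *
            math.exp(-((x1 - math.pi) ** 2 + (x2 - math.pi) ** 2)))

def six_hump(x1, x2):                  # f7: min ≈ -1.0316 at (±0.0898, ∓0.7126)
    return ((4.0 - 2.1 * x1 ** 2 + x1 ** 4 / 3.0) * x1 ** 2 +
            x1 * x2 + (-4.0 + 4.0 * x2 ** 2) * x2 ** 2)

print(schwefel([420.9687] * 10))       # ≈ -4189.829 for n = 10
```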
Parameter Values Used in MSEA and the Traditional EA on the Selected Test Functions in Figures 5, 6, 7, and 8:

  Reproduction Coefficient (just in MSEA):  2
  Crossover Operator:                       n-point Crossover
  Mutation Operator:                        One-position Uniform Mutation
  Stop Criterion:                           Number of Generations = 2*10^5
  Mutation Rate:                            0.75
  Elitism Rate:                             2
  Problem Dimension:                        2
  Population Size:                          20

Table 4. Parameter values used in the test runs of MSEA and the traditional EA for the De Jong, Rosenbrock, Rastrigin, and Schwefel functions depicted in figures 5, 6, 7, and 8 respectively.

Figure 5. Comparison of traditional EA and MSEA performance (best-so-far fitness over generations) for the De Jong function using a value of 0.01 for the mutation step size.

Figure 6. Comparison of traditional EA and MSEA performance for the Rosenbrock function using a value of 0.01 for the mutation step size.

Figure 7. Comparison of traditional EA and MSEA performance for the Rastrigin function using a value of 1 for the mutation step size.

Figure 8. Comparison of traditional EA and MSEA performance for the Schwefel function using a value of 100 for the mutation step size.

IV. DISCUSSION AND CONCLUSIONS

Table 3 shows that MSEA is capable of finding solutions very near to the exact solution in low-dimensional search spaces in a reasonable time span. Although the best answer for high-dimensional spaces is relatively far from the exact solution, one can achieve better solutions in unimodal spaces by letting the algorithm search more (increasing the number of generations). For higher dimensions of unimodal spaces, the performance duration and the distance of the best found answer to the exact answer are quite reasonable in proportion to the problem complexity. With C(dim = α) denoting the problem complexity for an α-dimensional space, and P(dim = β) denoting the MSEA performance duration for a β-dimensional space, table 3 shows:

  P(dim = 2) ≈ 10^1 where C(dim = 2) ≈ 10^0,
  P(dim = 10) ≈ 10^2 where C(dim = 10) ≈ 10^3, and
  P(dim = 100) ≈ 10^4 where C(dim = 100) ≈ 10^30.

But in the case of the Schwefel function (which is highly multimodal), when the number of dimensions increases, the likelihood of converging to a local optimum increases. As depicted in figure 9, the population converged to the local optimum of 216.172 using the standard EA and 355.388 using MSEA.

Figure 9. Comparison of standard EA and MSEA performance for the 10-dimensional Schwefel function using a value of 100 for the mutation step size.

We can conclude that MSEA, as an optimizing method, results in a considerable improvement in convergence
velocity to the global optimum generally, and to a local optimum occasionally. In fact, MSEA is capable of escaping from local optima in a variety of spaces, except in very highly multimodal and high-dimensional spaces (e.g. the 100-dimensional Schwefel). The reason is that using the mate-similarity method results in an increase in the selection pressure. This, in turn, results in an increase in the velocity of finding the best solution, but also in converging to a local optimum in some deceptive problems.

REFERENCES

[1] Affenzeller, M., & Wagner, S. (2005) Offspring Selection: A New Self-Adaptive Selection Scheme for Genetic Algorithms. Adaptive and Natural Computing Algorithms, Springer Computer Science, pp. 218-221. Springer.
[2] Belew, R. K., & Booker, L. B. (eds.) (1991) Proceedings of the Fourth International Conference on Genetic Algorithms. La Jolla, CA: Morgan Kaufmann.
[3] Davis, L. (1989) Adapting operator probabilities in genetic algorithms. Proceedings of the Third International Conference on Genetic Algorithms, 60-69. La Jolla, CA: Morgan Kaufmann.
[4] de Garis, H. (1990) Genetic programming: modular evolution for darwin machines. Proceedings of the 1990 International Joint Conference on Neural Networks, 194-197. Washington, DC: Lawrence Erlbaum.
[5] De Jong, K. (2006) Evolutionary Computation: A Unified Approach. A Bradford Book, Massachusetts Institute of Technology, pp. 1-66.
[6] Eshelman, L. J., & Schaffer, J. D. (1991) Preventing premature convergence in genetic algorithms by preventing incest. Proceedings of the Fourth International Conference on Genetic Algorithms, 115-122. La Jolla, CA: Morgan Kaufmann.
[7] Fogel, D. B., & Atmar, J. W. (eds.) (1992) Proceedings of the First Annual Conference on Evolutionary Programming. La Jolla, CA: Evolutionary Programming Society.
[8] Fogel, L. J., Owens, A. J., & Walsh, M. J. (1966) Artificial Intelligence Through Simulated Evolution. New York: Wiley Publishing.
[9] Pohlheim, H. GEATbx: Example Functions (single and multi-objective functions), 2 Parametric Optimization; http://www.geatbx.com/docu/fcnindex-01.html
[10] Grefenstette, J. J. (1989) A system for learning control strategies with genetic algorithms. Proceedings of the Third International Conference on Genetic Algorithms, 183-190. Fairfax, VA: Morgan Kaufmann.
[11] Goldberg, D. E. (1989) Genetic Algorithms in Search, Optimization & Machine Learning. Reading, MA: Addison Wesley.
[12] Goldberg, D. E., Deb, K., & Korb, B. (1991) Don't worry, be messy. Proceedings of the Fourth International Conference on Genetic Algorithms, 24-30. La Jolla, CA: Morgan Kaufmann.
[13] Hiroyasu, T., Miki, M., Hamasaki, M., & Tanimura, Y. (2000) A new model of distributed genetic algorithm for cluster systems: Dual individual DGA. High Performance Computing, Lecture Notes in Computer Science, 1940, pp. 374-383.
[14] Holland, J. H. (1975) Adaptation in Natural and Artificial Systems. Ann Arbor, Michigan: The University of Michigan Press.
[15] Holland, J. (1986) Escaping brittleness: The possibilities of general-purpose learning algorithms applied to parallel rule-based systems. In R. Michalski, J. Carbonell, & T. Mitchell (eds.), Machine Learning: An Artificial Intelligence Approach. Los Altos: Morgan Kaufmann.
[16] Koza, J. R. (1991) Evolving a computer program to generate random numbers using the genetic programming paradigm. Proceedings of the Fourth International Conference on Genetic Algorithms, 37-44. La Jolla, CA: Morgan Kaufmann.
[17] Männer, R., & Manderick, B. (1992) Proceedings of the Second International Conference on Parallel Problem Solving from Nature. Amsterdam: North Holland.
[18] Manderick, B., de Weger, M., & Spiessens, P. (1991) The genetic algorithm and the structure of the fitness landscape. Proceedings of the Fourth International Conference on Genetic Algorithms, 143-149. La Jolla, CA: Morgan Kaufmann.
[19] Potter, M. A., & De Jong, K. (1994) A cooperative coevolutionary approach to function optimization. Parallel Problem Solving from Nature – PPSN III, pp. 249-257.
[20] Rechenberg, I. (1973) Evolutionsstrategie: Optimierung Technischer Systeme nach Prinzipien der Biologischen Evolution. Stuttgart: Frommann-Holzboog.
[21] Schwefel, H.-P. (1981) Numerical Optimization of Computer Models. New York: John Wiley & Sons.
[22] Smith, S. (1983) Flexible learning of problem solving heuristics through adaptive search. Proceedings of the Eighth International Joint Conference on Artificial Intelligence, 422-425. Karlsruhe, Germany: William Kaufmann.
[23] Spears, W. M., De Jong, K. A., Baeck, T., Fogel, D., & de Garis, H. (1993) An Overview of Evolutionary Computation. Proceedings of the European Conference on Machine Learning, v667, 442-459.
[24] Whitley, D. (1989) The GENITOR algorithm and selection pressure: why rank-based allocation of reproductive trials is best. Proceedings of the Third International Conference on Genetic Algorithms, 116-121. Fairfax, VA: Morgan Kaufmann.
[25] Whitley, D. (ed.) (1992) Proceedings of the Foundations of Genetic Algorithms Workshop. Vail, CO: Morgan Kaufmann.
