
A Micro-Genetic Algorithm for Multiobjective Optimization

Carlos A. Coello Coello¹ and Gregorio Toscano Pulido²


¹ CINVESTAV-IPN
Depto. de Ingeniería Eléctrica
Sección de Computación
Av. Instituto Politécnico Nacional No. 2508
Col. San Pedro Zacatenco
México, D.F. 07300
ccoello@cs.cinvestav.mx

² Maestría en Inteligencia Artificial
LANIA-Universidad Veracruzana
Xalapa, Veracruz, México 91090
gtoscano@mia.uv.mx

Abstract. In this paper, we propose a multiobjective optimization approach based on a micro-genetic algorithm (micro-GA), which is a genetic algorithm with a very small population (four individuals were used in our experiments) and a reinitialization process. We use three forms of elitism and a memory to generate the initial population of the micro-GA. Our approach is tested with several standard functions found in the specialized literature. The results obtained are very encouraging, since they show that this simple approach can produce an important portion of the Pareto front at a very low computational cost.

1 Introduction

A considerable number of evolutionary multiobjective optimization techniques have been proposed in the past [3, 20]. However, until recently, little emphasis had been placed on efficiency issues. It is well known that the two main processes that consume most of the running time of an EMOO algorithm are: the ranking of the population (i.e., the comparisons required to determine nondominance) and the mechanism to keep diversity (evolutionary algorithms tend to converge to a single solution because of stochastic noise; it is therefore necessary to avoid this with an additional mechanism).
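As a concrete illustration of the first of these processes, the following sketch (in Python, our own illustration rather than anything prescribed by the algorithms discussed here) shows the pairwise dominance test on which Pareto ranking rests, assuming minimization of every objective:

def dominates(u, v):
    """Return True if objective vector u Pareto-dominates v (minimization)."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def nondominated(vectors):
    """Return the members of `vectors` not dominated by any other member."""
    return [p for p in vectors
            if not any(dominates(q, p) for q in vectors if q is not p)]

Note that this naive filter costs O(n²) dominance comparisons per generation, which is precisely the cost that a small population keeps low.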
Recent research has shown ways of improving the efficiency of an EMOO technique (see for example [14, 6]). The main emphasis has been on using an external file that stores nondominated vectors found during the evolutionary process, which are reinserted later in the population (this can be seen as a form of elitism in the context of multiobjective optimization [10, 17, 22]).
Following the same line of thought as this current research, we decided to develop an approach in which we would use a GA with a very small population size and a reinitialization process (the so-called micro-GA), combined with an external file to store the nondominated vectors previously found. Additionally, we decided to include an efficient mechanism to keep diversity (similar to the adaptive grid method of Knowles & Corne [14]). Our motivation was to show that a carefully designed micro-GA is sufficient to generate the Pareto front of a multiobjective optimization problem. Such an approach not only reduces the number of comparisons required to generate the Pareto front (with respect to traditional EMOO approaches based on Pareto ranking), but also allows us to control the number of points that we wish to obtain from the Pareto front (this number is in fact a parameter of our algorithm).

2 Related Work

The term micro-genetic algorithm (micro-GA) refers to a small-population genetic algorithm with reinitialization. The idea was suggested by some theoretical results obtained by Goldberg [8], according to which a population size of 3 was sufficient to converge, regardless of the chromosomic length. The process suggested by Goldberg was to start with a small randomly generated population, then apply to it the genetic operators until reaching nominal convergence (e.g., when all the individuals have their genotypes either identical or very similar), and then to generate a new population by transferring the best individuals of the converged population to the new one. The remaining individuals would be randomly generated.
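A minimal sketch of this restart scheme, assuming a maximization problem and treating random_individual, fitness, and evolve_one_generation as hypothetical problem-specific callbacks, might look as follows (nominal convergence is approximated here by a fixed inner budget):

def micro_ga(fitness, random_individual, evolve_one_generation,
             pop_size=5, restarts=100, inner_generations=10):
    """Goldberg-style micro-GA restart loop (illustrative sketch only)."""
    population = [random_individual() for _ in range(pop_size)]
    best = max(population, key=fitness)
    for _ in range(restarts):
        for _ in range(inner_generations):   # run until nominal convergence
            population = evolve_one_generation(population)
        best = max(population + [best], key=fitness)   # keep the elite
        # transfer the best individual; regenerate the rest at random
        population = [best] + [random_individual() for _ in range(pop_size - 1)]
    return best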
The first to report an implementation of a micro-GA was Krishnakumar [15], who used a population size of 5, a crossover rate of 1, and a mutation rate of zero. His approach also adopted an elitist strategy that copied the best string found in the current population to the next generation. Selection was performed by holding 4 competitions between strings that were adjacent in the population array, and declaring the individual with the highest fitness the winner. Krishnakumar [15] compared his micro-GA against a simple GA (with a population size of 50, a crossover rate of 0.6, and a mutation rate of 0.001), and reported faster and better results with his micro-GA on two stationary functions and a real-world engineering control problem (a wind-shear controller task). After him, several other researchers have developed applications of micro-GAs [13, 7, 12, 21]. However, to the best of our knowledge, the current paper reports the first attempt to use a micro-GA for multiobjective optimization, although some may argue that the multi-membered versions of PAES can be seen as a form of micro-GA¹ [14]. However, Knowles & Corne [14] concluded that the addition of a population did not, in general, improve the performance of PAES, and increased the computational overhead in an important way. Our technique, on the other hand, uses a population and traditional genetic operators and, as we will show in a later section, it performs quite well.

¹ We recently became aware of the fact that Jaszkiewicz [11] proposed an approach in which a small population is initialized from a large external memory and used for a short period of time. However, to the best of our knowledge, this approach has been used only for multiobjective combinatorial optimization.

3 Description of the Approach

In this paper, we propose a micro-GA with two memories: the population memory, which is used as the source of diversity of the approach, and the external memory, which is used to archive members of the Pareto optimal set. The population memory is in turn divided into two parts: a replaceable and a non-replaceable portion (the percentages of each can be regulated by the user).
[Figure: flow diagram. A random population fills both parts of the population memory (replaceable and non-replaceable); the population memory supplies the initial population of each micro-GA cycle of selection, crossover, mutation, and elitism; upon nominal convergence, the resulting solutions pass through a filter into the external memory.]

Fig. 1. Diagram that illustrates the way in which our micro-GA works.

The way in which our technique works is illustrated in Fig. 1. First, an initial random population is generated. This population feeds the population memory, which is divided into two parts as indicated before. The non-replaceable portion of the population memory will never change during the entire run and is meant to provide the required diversity for the algorithm. The initial population of the micro-GA at the beginning of each of its cycles is taken (with a certain probability) from both portions of the population memory so as to allow a greater diversity.
During each cycle, the micro-GA undergoes conventional genetic operators: tournament selection, two-point crossover, uniform mutation, and elitism (regardless of the number of nondominated vectors in the population, only one is arbitrarily selected at each generation and copied intact to the following one). After the micro-GA finishes one cycle (i.e., when nominal convergence is achieved), we choose two nondominated vectors² from the final population (the first and the last) and compare them with the contents of the external memory (this memory is initially empty). If either of them (or both) remains nondominated after comparing against the vectors in this external memory, then they are included there. All the dominated vectors are eliminated from the external memory. These two vectors are also compared against two elements from the replaceable portion of the population memory (this is done with a loop, so that each vector is only compared against a single position of the population memory). If either of these vectors dominates its match in the population memory, then it replaces it. The idea is that, over time, the replaceable part of the population memory will tend to have more nondominated vectors. Some of them will be used in the initial population of the micro-GA to start new evolutionary cycles (a small sketch of this replacement step is given below).
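The following sketch (our own illustration) shows this replacement step, assuming solutions are represented simply as tuples of objective values to be minimized, and that `pointer` cycles through the replaceable memory so that each incoming vector is compared against a single slot:

def dominates(u, v):
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def update_replaceable(replaceable, incoming, pointer):
    for vec in incoming:                     # the two chosen nondominated vectors
        slot = pointer % len(replaceable)
        if dominates(vec, replaceable[slot]):
            replaceable[slot] = vec          # nondominated vectors accumulate here
        pointer += 1
    return pointer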
Our approach uses three types of elitism. The first is based on the notion that if we store the nondominated vectors produced from each cycle of the micro-GA, we will not lose any valuable information obtained from the evolutionary process. The second is based on the idea that if we replace the population memory with the nominal solutions (i.e., the best solutions found when nominal convergence is reached), we will gradually converge, since crossover and mutation will have a higher probability of reaching the true Pareto front of the problem over time. This notion was hinted at by Goldberg [8]. Nominal convergence, in our case, is defined in terms of a certain (low) number of generations (typically, two to five in our case). However, similarities among the strings (either at the phenotypical or genotypical level) could also be used as a criterion for convergence. The third type of elitism is applied at certain intervals (defined by a parameter called "replacement cycle"). What we do is take a certain number of points from all the regions of the Pareto front generated so far and use them to fill in the replaceable memory. Depending on the size of the replaceable memory, we choose as many points from the Pareto front as necessary to guarantee a uniform distribution. This process intends to use the best solutions generated so far as the starting point for the micro-GA, so that we can improve them (either by getting closer to the true Pareto front or by getting a better distribution). This also prevents the contents of the replaceable memory from becoming homogeneous.

² This is assuming that we have two or more nondominated vectors. If there is only one, then this vector is the only one selected.
To keep diversity in the Pareto front, we use an approach similar to the adaptive grid proposed by Knowles & Corne [14]. The idea is that once the archive that stores nondominated solutions has reached its limit, we divide the search space that this archive covers, assigning a set of coordinates to each solution. Then, each newly generated nondominated solution will be accepted only if the geographical location to which the individual belongs is less populated than the most crowded location. Alternatively, the new nondominated solution can also be accepted if the individual belongs to a location outside the previously specified boundaries. In other words, the less crowded regions are given preference so that the spread of the individuals on the Pareto front can be more uniform.
The pseudo-code of the algorithm is the following:

function Micro-GA
begin
    Generate starting population P of size N
        and store its contents in the population memory M
    /* Both portions of M will be filled with random solutions */
    i = 0
    while i < Max do
    begin
        Get the initial population for the micro-GA (Pi) from M
        repeat
        begin
            Apply binary tournament selection based on nondominance
            Apply two-point crossover and uniform mutation
                to the selected individuals
            Apply elitism (retain only one nondominated vector)
            Produce the next generation
        end
        until nominal convergence is reached
        Copy two nondominated vectors from Pi to the external memory E
        if E is full when trying to insert ind_b
            then adaptive_grid(ind_b)
        Copy two nondominated vectors from Pi to M
        if i mod replacement_cycle = 0
            then apply second form of elitism
        i = i + 1
    end while
end function

The adaptive grid requires two parameters: the expected size of the Pareto front and the number of positions in which we will divide the solution space for each objective. The first parameter is defined by the size of the external memory and is provided by the user. The second parameter (the number of positions in which we will divide the solution space) has to be provided by the user as well, although we have found that our approach is not very sensitive to it (e.g., in most of our experiments a value of 15 or 25 provided very similar results). The process of determining the location of a certain individual has a low computational cost (it is based on the values of its objectives, as indicated before). However, when the individual is out of range, we have to relocate all the positions. Nevertheless, this last situation does not occur too often, and we allocate a certain amount of extra room in the first and last locations of the grid to minimize the occurrence of this situation.
When the external memory is full, we use the adaptive grid to decide which nondominated vectors will be eliminated. The adaptive grid will try to balance the number of individuals representing each of the elements of the Pareto set, so that we can get a uniform spread of points along the Pareto front.
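The sketch below gives one plausible reading of this mechanism. The helper names are ours; it assumes minimized objectives stored as tuples and a candidate already known to be nondominated with respect to the archive, and it omits the extra-room refinement by simply recomputing the grid bounds from the current points on each call:

def grid_location(vec, lows, highs, ndiv):
    coords = []
    for x, lo, hi in zip(vec, lows, highs):
        width = (hi - lo) / ndiv or 1.0          # avoid zero-width cells
        coords.append(min(int((x - lo) / width), ndiv - 1))
    return tuple(coords)

def try_insert(archive, vec, capacity, ndiv):
    pts = archive + [vec]
    lows = [min(p[i] for p in pts) for i in range(len(vec))]
    highs = [max(p[i] for p in pts) for i in range(len(vec))]
    if len(archive) < capacity:                  # room left: accept directly
        archive.append(vec)
        return True
    counts = {}
    for p in archive:
        loc = grid_location(p, lows, highs, ndiv)
        counts[loc] = counts.get(loc, 0) + 1
    crowded = max(counts, key=counts.get)        # most populated cell
    if grid_location(vec, lows, highs, ndiv) != crowded:
        for i, p in enumerate(archive):          # evict from the crowded cell
            if grid_location(p, lows, highs, ndiv) == crowded:
                archive[i] = vec
                return True
    return False          # newcomer falls in the most crowded cell: reject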

4 Comparison of Results

Several test functions were taken from the specialized literature to compare our approach. In all cases, we generated the true Pareto fronts of the problems using exhaustive enumeration (with a certain granularity) so that we could make a graphical comparison of the quality of the solutions produced by our micro-GA.
Since the main aim of this approach has been to increase efficiency, we additionally decided to compare the running times of our micro-GA against those of the NSGA II [6] and PAES [14]. In the following examples, the NSGA II was run using a population size of 100, a crossover rate of 0.8 (using SBX), tournament selection, and a mutation rate of 1/vars, where vars = number of decision variables of the problem. PAES was run using a depth of 5, an archive size of 100, and a mutation rate of 1/bits, where bits refers to the length of the chromosomic string that encodes the decision variables. The number of fitness function evaluations was set such that the NSGA II, PAES and the micro-GA could reasonably cover the true Pareto front of each problem.
4.1 Test Function 1

Our first example is a two-objective optimization problem defined by Deb [5]:

Minimize $f_1(x_1, x_2) = x_1$  (1)

Minimize $f_2(x_1, x_2) = g(x_2)/x_1$  (2)

where:

$g(x_2) = 2.0 - \exp\left\{-\left(\frac{x_2 - 0.2}{0.004}\right)^2\right\} - 0.8\exp\left\{-\left(\frac{x_2 - 0.6}{0.4}\right)^2\right\}$  (3)

and $0.1 \le x_1 \le 1.0$, $0.1 \le x_2 \le 1.0$.
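For reference, equations (1)-(3) transcribe directly into Python (a sanity-check sketch, not the code used in the experiments):

import math

def f1(x1, x2):
    return x1

def g(x2):
    return (2.0 - math.exp(-((x2 - 0.2) / 0.004) ** 2)
                - 0.8 * math.exp(-((x2 - 0.6) / 0.4) ** 2))

def f2(x1, x2):
    return g(x2) / x1    # defined for 0.1 <= x1 <= 1.0, 0.1 <= x2 <= 1.0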

[Figure: true Pareto front (continuous line) and micro-GA results (points), f2 vs. f1.]

Fig. 2. Comparison of results for the first test function.

The parameters used by the micro-GA for this example were: size of the external memory = 100, size of the population memory = 50, number of iterations = 1500, number of iterations of the micro-GA (to achieve nominal convergence) = 2, number of subdivisions of the adaptive grid = 25, crossover rate = 0.7, mutation rate = 0.029, percentage of non-replaceable memory = 0.3, population size (of the micro-GA) = 4, and a replacement cycle at every 50 iterations.
Our first test function has a local Pareto front to which a GA can be easily attracted. Fig. 2 shows the true Pareto front for this problem with a continuous line, and the results found by the NSGA II, PAES and our micro-GA are shown as points. Similar fronts were found by the three approaches. For this example, both the NSGA II and PAES performed 12,000 evaluations of the fitness function. The average running times of each algorithm (over 20 runs) were the following: 2.601 seconds for the NSGA II (with a standard deviation of 0.33555913), 1.106 seconds for PAES (with a standard deviation of 0.25193672), and only 0.204 seconds for the micro-GA (with a standard deviation of 0.07764461).

4.2 Test Function 2

Our second example is a two-objective optimization problem proposed by Schaffer [18] that has been used by several researchers [19, 1]:

Minimize $f_1(x) = \begin{cases} -x & \text{if } x \le 1 \\ -2 + x & \text{if } 1 < x \le 3 \\ 4 - x & \text{if } 3 < x \le 4 \\ -4 + x & \text{if } x > 4 \end{cases}$  (4)

Minimize $f_2(x) = (x - 5)^2$  (5)

and $-5 \le x \le 10$.
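Equations (4) and (5) transcribe directly into Python (again, only a verification sketch):

def f1(x):
    if x <= 1:
        return -x
    if x <= 3:
        return -2 + x
    if x <= 4:
        return 4 - x
    return -4 + x

def f2(x):
    return (x - 5) ** 2    # defined for -5 <= x <= 10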

[Figure: true Pareto front (continuous line) and the points found by the micro-GA, NSGA II, and PAES, f2 vs. f1.]

Fig. 3. Comparison of results for the second test function.

The parameters used for this example were: size of the external memory = 100, size of the population memory = 50, number of iterations = 150, number of iterations of the micro-GA (to achieve nominal convergence) = 2, number of subdivisions of the adaptive grid = 25, crossover rate = 0.7, mutation rate = 0.056 (1/L, where L = 18 bits in this case), percentage of non-replaceable memory = 0.3, population size (of the micro-GA) = 4, and a replacement cycle at every 25 iterations.
This problem has a Pareto front that is disconnected. Fig. 3 shows the true Pareto front for this problem with a continuous line (the vertical line is obviously not part of the true Pareto front, but it appears because we used linear segments to connect every pair of nondominated points). We used points to represent the solutions found by the NSGA II, PAES and our micro-GA.
Again, similar Pareto fronts were found by the three approaches. For this example, both the NSGA II and PAES performed 1,200 evaluations of the fitness function. The average running times of each algorithm (over 20 runs) were the following: 0.282 seconds for the NSGA II (with a standard deviation of 0.00014151), 0.107 seconds for PAES (with a standard deviation of 0.13031718), and only 0.017 seconds for the micro-GA (with a standard deviation of 0.0007672).
4.3 Test Function 3

Our third example is a two-objective optimization problem defined by Deb [5]:

Minimize $f_1(x_1, x_2) = x_1$  (6)

Minimize $f_2(x_1, x_2) = g(x_1, x_2) \cdot h(x_1, x_2)$  (7)

where:

$g(x_1, x_2) = 11 + x_2^2 - 10\cos(2\pi x_2)$  (8)

$h(x_1, x_2) = \begin{cases} 1 - \sqrt{f_1(x_1, x_2)/g(x_1, x_2)} & \text{if } f_1(x_1, x_2) \le g(x_1, x_2) \\ 0 & \text{otherwise} \end{cases}$  (9)

and $0 \le x_1 \le 1$, $-30 \le x_2 \le 30$.
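Equations (6)-(9) transcribe directly into Python (a verification sketch; note that g is always at least 1, so the square root is well defined):

import math

def f1(x1, x2):
    return x1

def g(x1, x2):
    return 11 + x2 ** 2 - 10 * math.cos(2 * math.pi * x2)

def h(x1, x2):
    if f1(x1, x2) <= g(x1, x2):
        return 1 - math.sqrt(f1(x1, x2) / g(x1, x2))
    return 0.0

def f2(x1, x2):
    return g(x1, x2) * h(x1, x2)    # 0 <= x1 <= 1, -30 <= x2 <= 30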
This problem has 60 local Pareto fronts. Fig. 4 shows the true Pareto front for this problem with a continuous line. The results obtained by the NSGA II, PAES and our micro-GA are displayed as points.
The parameters used by the micro-GA for this example were: size of the external memory = 100, size of the population memory = 50, number of iterations = 700, number of iterations of the micro-GA (to achieve nominal convergence) = 4, number of subdivisions of the adaptive grid = 25, crossover rate = 0.7, mutation rate = 0.029, percentage of non-replaceable memory = 0.3, population size (of the micro-GA) = 4, and a replacement cycle at every 50 iterations.
Once again, the fronts produced by the three approaches are very similar. For this example, both the NSGA II and PAES performed 11,200 evaluations of the fitness function. The average running times of each algorithm (over 20 runs) were the following: 2.519 seconds for the NSGA II (with a standard deviation of 0.03648403), 2.497 seconds for PAES (with a standard deviation of 1.03348519), and only 0.107 seconds for the micro-GA (with a standard deviation of 0.00133949).

[Figure: true Pareto front (continuous line) and the points found by the micro-GA, NSGA II, and PAES, f2 vs. f1.]

Fig. 4. Comparison of results for the third test function.

4.4 Test Function 4

Our fourth example is the so-called "unitation versus pairs" problem [9], which involves the maximization of two functions over bit strings. The first function, f1, is the number of pairs of adjacent complementary bits found in the string, and the second function, f2, is the number of ones found in the string. The Pareto front in this case is discrete. We used a string length of 28, and therefore the true Pareto front is composed of 15 points. Both objectives are cheap to compute, as the sketch below shows.
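A minimal transcription of the two objectives (our own illustration, representing a bit string as a Python list of 0/1 values):

def pairs(bits):        # f1: number of adjacent complementary bit pairs
    return sum(1 for a, b in zip(bits, bits[1:]) if a != b)

def unitation(bits):    # f2: number of ones in the string
    return sum(bits)

# The alternating 28-bit string maximizes f1 with 27 pairs but has only 14 ones.
s = [i % 2 for i in range(28)]
assert pairs(s) == 27 and unitation(s) == 14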
The parameters used for this example were: size of the external memory = 100, size of the population memory = 15, number of iterations = 1250, number of iterations of the micro-GA (to achieve nominal convergence) = 1, number of subdivisions of the adaptive grid = 3, crossover rate = 0.5, mutation rate = 0.035, percentage of non-replaceable memory = 0.2, population size (of the micro-GA) = 4, and a replacement cycle at every 25 iterations.
Fig. 5 shows the results obtained by our micro-GA for the fourth test function. A total of 13 (out of 15) elements of the Pareto optimal set were found on average (only occasionally was our approach able to find the 15 target elements). PAES was also able to generate 13 elements of the Pareto optimal set on average, and the NSGA II was only able to generate 8 elements on average.
For this example, both the NSGA II and PAES performed 5,000 evaluations of the fitness function. The average running times of each algorithm (over 20 runs) were the following: 2.207 seconds for the NSGA II, 0.134 seconds for PAES, and only 0.042 seconds for the micro-GA.

[Figure: "Unitation versus pairs"; f2 (number of ones) vs. f1 (number of adjacent complementary pairs), showing the bit strings found along the discrete Pareto front.]

Fig. 5. Results of the micro-GA for the fourth test function.

Borges & Barbosa [2] reported that they were able to find the 15 elements of the Pareto optimal set for this problem, using a population size of 100 and 5,000 evaluations of the fitness function, although no actual running times of their approach were reported.
4.5 Test Function 5

Our fifth example is a two-objective optimization problem defined by Kursawe [16]:

Minimize $f_1(\mathbf{x}) = \sum_{i=1}^{n-1} \left( -10 \exp\left( -0.2\sqrt{x_i^2 + x_{i+1}^2} \right) \right)$  (10)

Minimize $f_2(\mathbf{x}) = \sum_{i=1}^{n} \left( |x_i|^{0.8} + 5\sin(x_i)^3 \right)$  (11)

where:

$-5 \le x_1, x_2, x_3 \le 5$  (12)
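Equations (10)-(12), with n = 3 decision variables, transcribe into Python as follows (a verification sketch):

import math

def f1(x):
    return sum(-10 * math.exp(-0.2 * math.sqrt(x[i] ** 2 + x[i + 1] ** 2))
               for i in range(len(x) - 1))

def f2(x):
    # each decision variable satisfies -5 <= x_i <= 5
    return sum(abs(xi) ** 0.8 + 5 * math.sin(xi) ** 3 for xi in x)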
[Figure: true Pareto front and the points found by the micro-GA, NSGA II, and PAES, f2 vs. f1.]

Fig. 6. Comparison of results for the fifth test function.

Fig. 6 shows the true Pareto front for this problem as points. The results obtained by the NSGA II, PAES and our micro-GA are also shown as points. It is worth mentioning that PAES could not eliminate some of the dominated points in the runs performed.
The parameters used for this example were: size of the external memory = 100, size of the population memory = 50, number of iterations = 3000, number of iterations of the micro-GA (to achieve nominal convergence) = 2, number of subdivisions of the adaptive grid = 25, crossover rate = 0.7, mutation rate = 0.019, percentage of non-replaceable memory = 0.3, population size (of the micro-GA) = 4, and a replacement cycle at every 50 iterations.
For this example, both the NSGA II and PAES performed 2,400 evaluations of the fitness function. The average running times of each algorithm (over 20 runs) were the following: 6.481 seconds for the NSGA II (with a standard deviation of 0.053712), 2.195 seconds for PAES (with a standard deviation of 0.25408319), and only 0.704 seconds for the micro-GA (with a standard deviation of 0.00692099).

5 Conclusions and Future Work

We have introduced an approach that uses a GA with a very small population and a reinitialization process to generate the Pareto front of a multiobjective optimization problem. The technique has exhibited a low computational cost when compared to the NSGA II and PAES on a few test functions. The approach uses three forms of elitism, including an external file of nondominated vectors and a refilling process that allows us to approach the true Pareto front in a successive manner. Also, we use an adaptive grid (similar to the one used by PAES [14]) that maintains diversity in an efficient way.
The approach still needs more validation (particularly with MOPs that have more decision variables and constraints), and needs to be compared with other EMOO approaches (under similar conditions) using some of the metrics that have been proposed in the literature (see for example [22]). We only provided running times produced on a PC, but a more exhaustive comparison is obviously lacking. However, the preliminary results presented in this paper indicate the potential of the approach.
Some other future work will be to refine part of the process, so that we can eliminate some of the additional parameters that the approach needs. Since some of them are not very critical (e.g., the number of grid subdivisions, or the number of iterations to reach nominal convergence), we could probably automatically preset them to a reasonable value so that the user does not need to provide them.
Finally, we are also interested in using this approach as a basis to develop a model of incorporation of preferences from the decision maker [4].

Acknowledgements

We thank the anonymous referees for their comments that helped us improve the contents of this paper. The first author gratefully acknowledges support from CONACyT through project 34201-A. The second author acknowledges support from CONACyT through a scholarship to pursue graduate studies at the Maestría en Inteligencia Artificial of LANIA and the Universidad Veracruzana.

References

1. P. J. Bentley and J. P. Wakefield. Finding Acceptable Solutions in the Pareto-Optimal Range using Multiobjective Genetic Algorithms. In P. K. Chawdhry, R. Roy, and R. K. Pant, editors, Soft Computing in Engineering Design and Manufacturing, Part 5, pages 231-240, London, June 1997. Springer Verlag London Limited. (Presented at the 2nd On-line World Conference on Soft Computing in Design and Manufacturing (WSC2)).
2. Carlos C. H. Borges and Helio J. C. Barbosa. A Non-generational Genetic Algorithm for Multiobjective Optimization. In 2000 Congress on Evolutionary Computation, volume 1, pages 172-179, San Diego, California, July 2000. IEEE Service Center.
3. Carlos A. Coello Coello. A Comprehensive Survey of Evolutionary-Based Multiobjective Optimization Techniques. Knowledge and Information Systems. An International Journal, 1(3):269-308, August 1999.
4. Carlos A. Coello Coello. Handling Preferences in Evolutionary Multiobjective Optimization: A Survey. In 2000 Congress on Evolutionary Computation, volume 1, pages 30-37, Piscataway, New Jersey, July 2000. IEEE Service Center.
5. Kalyanmoy Deb. Multi-Objective Genetic Algorithms: Problem Difficulties and Construction of Test Problems. Evolutionary Computation, 7(3):205-230, Fall 1999.
6. Kalyanmoy Deb, Samir Agrawal, Amrit Pratab, and T. Meyarivan. A Fast Elitist Non-Dominated Sorting Genetic Algorithm for Multi-Objective Optimization: NSGA-II. KanGAL report 200001, Indian Institute of Technology, Kanpur, India, 2000.
7. G. Dozier, J. Bowen, and D. Bahler. Solving small and large scale constraint satisfaction problems using a heuristic-based microgenetic algorithm. In Proceedings of the First IEEE Conference on Evolutionary Computation, pages 306-311, 1994.
8. David E. Goldberg. Sizing Populations for Serial and Parallel Genetic Algorithms. In J. David Schaffer, editor, Proceedings of the Third International Conference on Genetic Algorithms, pages 70-79, San Mateo, California, 1989. Morgan Kaufmann Publishers.
9. Jeffrey Horn, Nicholas Nafpliotis, and David E. Goldberg. A Niched Pareto Genetic Algorithm for Multiobjective Optimization. In Proceedings of the First IEEE Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence, volume 1, pages 82-87, Piscataway, New Jersey, June 1994. IEEE Service Center.
10. Hisao Ishibuchi and Tadahiko Murata. Multi-Objective Genetic Local Search Algorithm. In Toshio Fukuda and Takeshi Furuhashi, editors, Proceedings of the 1996 International Conference on Evolutionary Computation, pages 119-124, Nagoya, Japan, 1996. IEEE.
11. Andrzej Jaszkiewicz. Genetic local search for multiple objective combinatorial optimization. Technical Report RA-014/98, Institute of Computing Science, Poznan University of Technology, 1998.
12. E. G. Johnson and M. A. G. Abushagur. Micro-Genetic Algorithm Optimization Methods Applied to Dielectric Gratings. Journal of the Optical Society of America, 12(5):1152-1160, 1995.
13. Charles L. Karr. Air-Injected Hydrocyclone Optimization via Genetic Algorithm. In Lawrence Davis, editor, Handbook of Genetic Algorithms, pages 222-236. Van Nostrand Reinhold, New York, 1991.
14. Joshua D. Knowles and David W. Corne. Approximating the Nondominated Front Using the Pareto Archived Evolution Strategy. Evolutionary Computation, 8(2):149-172, 2000.
15. K. Krishnakumar. Micro-genetic algorithms for stationary and non-stationary function optimization. In SPIE Proceedings: Intelligent Control and Adaptive Systems, pages 289-296, 1989.
16. Frank Kursawe. A variant of evolution strategies for vector optimization. In H. P. Schwefel and R. Männer, editors, Parallel Problem Solving from Nature. 1st Workshop, PPSN I, volume 496 of Lecture Notes in Computer Science, pages 193-197, Berlin, Germany, October 1991. Springer-Verlag.
17. Geoffrey T. Parks and I. Miller. Selective Breeding in a Multiobjective Genetic Algorithm. In A. E. Eiben, M. Schoenauer, and H.-P. Schwefel, editors, Parallel Problem Solving From Nature - PPSN V, pages 250-259, Amsterdam, Holland, 1998. Springer-Verlag.
18. J. David Schaffer. Multiple Objective Optimization with Vector Evaluated Genetic Algorithms. PhD thesis, Vanderbilt University, 1984.
19. N. Srinivas and Kalyanmoy Deb. Multiobjective Optimization Using Nondominated Sorting in Genetic Algorithms. Evolutionary Computation, 2(3):221-248, Fall 1994.
20. David A. Van Veldhuizen and Gary B. Lamont. Multiobjective Evolutionary Algorithms: Analyzing the State-of-the-Art. Evolutionary Computation, 8(2):125-147, 2000.
21. Fengchao Xiao and Hatsuo Yabe. Microwave Imaging of Perfectly Conducting Cylinders from Real Data by Micro Genetic Algorithm Coupled with Deterministic Method. IEICE Transactions on Electronics, E81-C(12):1784-1792, December 1998.
22. Eckart Zitzler, Kalyanmoy Deb, and Lothar Thiele. Comparison of Multiobjective Evolutionary Algorithms: Empirical Results. Evolutionary Computation, 8(2):173-195, Summer 2000.
