
Global Optimization Algorithms

Theory and Application


3rd Edition
Author: Thomas Weise
tweise@gmx.de
Edition: third
Version: 2011-12-07-15:31
Newest Version: http://www.it-weise.de/projects/book.pdf
Sources/CVS-Repository: goa-taa.cvs.sourceforge.net
Don't print me!
Also think about the trees that would have to die for the paper! This book is much more useful as an electronic resource: you can search terms and click links. You can't do that in a printed book. The book is frequently updated and improved as well. Printed versions will just outdate.
Don't print me!
Preface
This e-book is devoted to Global Optimization algorithms, which are methods for finding
solutions of high quality for an incredibly wide range of problems. We introduce the basic
concepts of optimization and discuss features which make optimization problems difficult
and thus should be considered when trying to solve them. In this book, we focus on
metaheuristic approaches like Evolutionary Computation, Simulated Annealing, Extremal
Optimization, Tabu Search, and Random Optimization. Especially the Evolutionary Com-
putation methods, which subsume Evolutionary Algorithms, Genetic Algorithms, Genetic
Programming, Learning Classifier Systems, Evolution Strategy, Differential Evolution, Par-
ticle Swarm Optimization, and Ant Colony Optimization, are discussed in detail.
In this third edition, we try to make a transition from a pure material collection and
compendium to a more structured book. We try to address two major audience groups:
1. Our book may help students, since we try to describe the algorithms in an under-
standable, consistent way. Therefore, we also provide the fundamentals and much
background knowledge. You can find (short and simplified) summaries on stochastic
theory and theoretical computer science in Part VI on page 638. Additionally, appli-
cation examples are provided which give an idea of how problems can be tackled with
the different techniques and what results can be expected.
2. Fellow researchers and PhD students may find the application examples helpful too.
For them, in-depth discussions of the single approaches are included that are supported
by a large set of useful literature references.
The contents of this book are divided into three parts. In the first part, different op-
timization technologies will be introduced and their features described. Often, small
examples will be given in order to ease understanding. In the second part, starting at page
530, we elaborate on different application examples in detail. Finally, in the last part, fol-
lowing at page 638, the aforementioned background knowledge is provided.
In order to maximize the utility of this electronic book, it contains automatic, clickable
links. They are shaded in dark gray so that the book is still printable in black and white. You can click on
1. entries in the table of contents,
2. citation references like Heitkötter and Beasley [1220],
3. page references like 253,
4. references such as "see Figure 28.1 on page 254" to sections, figures, tables, and listings,
and
5. URLs and links like http://www.lania.mx/~ccoello/EMOO/ [accessed 2007-10-25].¹
¹ URLs are usually annotated with the date we have accessed them, like http://www.lania.mx/~ccoello/EMOO/ [accessed 2007-10-25]. We can neither guarantee that their content remains unchanged, nor that these sites stay available. We also assume no responsibility for anything we linked to.
The following scenario is an example for using the book: A student reads the text and
finds a passage that she wants to investigate in depth. She clicks on a citation which seems
interesting and the corresponding reference is shown. For some of the references which are
available online, links are provided in the reference text. By clicking on such a link, the
Adobe Reader®² will open another window and load the corresponding document (or a browser
window of a site that links to the document). After reading it, the student may use the
backwards button in the Acrobat Reader's navigation utility to go back to the text initially
read in the e-book.
If this book contains something you want to cite or reference in your work, please use
the citation suggestion provided in Chapter A on page 945. Also, I would be very happy
if you provide feedback, report errors or missing things that you have (or have not) found,
criticize something, or have any additional ideas or suggestions. Do not hesitate to contact
me via my email address tweise@gmx.de. As a matter of fact, a large number of people have helped
me to improve this book over time. I have enumerated the most important contributors in
Chapter D. Thank you guys, I really appreciate your help! At many places in this book
we refer to Wikipedia, The Free Encyclopedia [2938], which is a great source of knowledge.
Wikipedia, The Free Encyclopedia contains articles and definitions for many of the aspects
discussed in this book. Like this book, it is updated and improved frequently. Therefore,
including the links adds greatly to the book's utility, in my opinion.
The updates and improvements will result in new versions of the book, which will
regularly appear at http://www.it-weise.de/projects/book.pdf. The complete
LaTeX source code of this book, including all graphics and the bibliography, is hosted
at SourceForge under http://sourceforge.net/projects/goa-taa/ in the CVS
repository goa-taa.cvs.sourceforge.net. You can browse the repository under
http://goa-taa.cvs.sourceforge.net/goa-taa/ and anonymously download the
complete sources by using the CVS command given in Listing 1.³
cvs -z3 -d:pserver:anonymous@goa-taa.cvs.sourceforge.net:/cvsroot/goa-taa
checkout -P Book
Listing 1: The CVS command for anonymously checking out the complete book sources.
Compiling the sources requires multiple runs of LaTeX, BibTeX, and makeindex because of
the nifty way the references are incorporated. In the repository, an Ant script is provided for
doing this. Also, the complete bibliography of this book is stored in a MySQL⁴ database, and
the scripts for creating this database as well as tools for querying it (Java) and a Microsoft
Access⁵ frontend are part of the sources you can download. These resources as well as all
contents of this book (unless explicitly stated otherwise) are licensed under the GNU Free
Documentation License (FDL, see Chapter B on page 947). Some of the algorithms provided
are made available under the LGPL license (see Chapter C on page 955).
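For readers who do not want to use the Ant script, the multi-pass build described above can be sketched as a plain shell script. The master file name book.tex and the exact pass order are assumptions for illustration, not taken from the repository; the Ant script remains the authoritative build. The sketch only prints the commands (a dry run), since the real build requires a full TeX installation.

```shell
# Sketch of the manual compile sequence (assumption: the master LaTeX
# file is called book.tex, checked out as in Listing 1).
set -e
MAIN=book   # hypothetical name of the master LaTeX file

# Dry-run helper: print each command instead of executing it.
run() { echo "$@"; }

run latex     "$MAIN.tex"   # 1st pass: write citation and index data to aux files
run bibtex    "$MAIN"       # resolve the bibliography from the aux file
run makeindex "$MAIN"       # build the index from the idx file
run latex     "$MAIN.tex"   # 2nd pass: pull in bibliography and index
run latex     "$MAIN.tex"   # 3rd pass: settle cross-references and page numbers
```

Replacing the body of run() with "$@" (executing instead of echoing) turns the dry run into an actual build on a system with LaTeX, BibTeX, and makeindex installed.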
Copyright © 2011 Thomas Weise.
Permission is granted to copy, distribute and/or modify this document under the terms of
the GNU Free Documentation License, Version 1.3 or any later version published by the Free
Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover
Texts. A copy of the license is included in Chapter B on page 947, entitled GNU Free
Documentation License (FDL).
² The Adobe Reader® is available for download at http://www.adobe.com/products/reader/ [accessed 2007-08-13].
³ More information can be found at http://sourceforge.net/scm/?type=cvs&group_id=264352
⁴ http://en.wikipedia.org/wiki/MySQL [accessed 2010-07-29]
⁵ http://en.wikipedia.org/wiki/Microsoft_Access [accessed 2010-07-29]
Contents
Title Page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Part I Foundations
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.1 What is Global Optimization? . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.1.1 Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.1.2 Algorithms and Programs . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.1.3 Optimization versus Dedicated Algorithms . . . . . . . . . . . . . . . 24
1.1.4 Structure of this Book . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.2 Types of Optimization Problems . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.2.1 Combinatorial Optimization Problems . . . . . . . . . . . . . . . . . . 24
1.2.2 Numerical Optimization Problems . . . . . . . . . . . . . . . . . . . . 27
1.2.3 Discrete and Mixed Problems . . . . . . . . . . . . . . . . . . . . . . 29
1.2.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.3 Classes of Optimization Algorithms . . . . . . . . . . . . . . . . . . . . . . . 31
1.3.1 Algorithm Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
1.3.2 Monte Carlo Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.3.3 Heuristics and Metaheuristics . . . . . . . . . . . . . . . . . . . . . . 34
1.3.4 Evolutionary Computation . . . . . . . . . . . . . . . . . . . . . . . . 36
1.3.5 Evolutionary Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 37
1.4 Classification According to Optimization Time . . . . . . . . . . . . . . 37
1.5 Number of Optimization Criteria . . . . . . . . . . . . . . . . . . . . . . . . 39
1.6 Introductory Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2 Problem Space and Objective Functions . . . . . . . . . . . . . . . . . . . . 43
2.1 Problem Space: How does it look like? . . . . . . . . . . . . . . . . . . . . . 43
2.2 Objective Functions: Is it good? . . . . . . . . . . . . . . . . . . . . . . . . . 44
3 Optima: What does good mean? . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.1 Single Objective Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.1.1 Extrema: Minima and Maxima of Differentiable Functions . . . . . 52
3.1.2 Global Extrema . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.2 Multiple Optima . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.3 Multiple Objective Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.3.2 Lexicographic Optimization . . . . . . . . . . . . . . . . . . . . . . . 59
3.3.3 Weighted Sums (Linear Aggregation) . . . . . . . . . . . . . . . . . . 62
3.3.4 Weighted Min-Max . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.3.5 Pareto Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.4 Constraint Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.4.1 Genome and Phenome-based Approaches . . . . . . . . . . . . . . . . 71
3.4.2 Death Penalty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.4.3 Penalty Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.4.4 Constraints as Additional Objectives . . . . . . . . . . . . . . . . . . 72
3.4.5 The Method Of Inequalities . . . . . . . . . . . . . . . . . . . . . . . 72
3.4.6 Constrained-Domination . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.4.7 Limitations and Other Methods . . . . . . . . . . . . . . . . . . . . . 75
3.5 Unifying Approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.5.1 External Decision Maker . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.5.2 Prevalence Optimization . . . . . . . . . . . . . . . . . . . . . . . . . 76
4 Search Space and Operators: How can we find it? . . . . . . . . . . . . . 81
4.1 The Search Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4.2 The Search Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
4.3 The Connection between Search and Problem Space . . . . . . . . . . . . . . 85
4.4 Local Optima . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
4.4.1 Local Optima of single-objective Problems . . . . . . . . . . . . . . . 89
4.4.2 Local Optima of Multi-Objective Problems . . . . . . . . . . . . . . . 89
4.5 Further Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
5 Fitness and Problem Landscape: How does the Optimizer see it? . . . . 93
5.1 Fitness Landscapes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.2 Fitness as a Relative Measure of Utility . . . . . . . . . . . . . . . . . . . . . 94
5.3 Problem Landscapes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
6 The Structure of Optimization: Putting it together. . . . . . . . . . . . . 101
6.1 Involved Spaces and Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
6.2 Optimization Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
6.3 Other General Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
6.3.1 Gradient Descent . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
6.3.2 Iterations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
6.3.3 Termination Criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
6.3.4 Minimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
6.3.5 Modeling and Simulating . . . . . . . . . . . . . . . . . . . . . . . . . 106
7 Solving an Optimization Problem . . . . . . . . . . . . . . . . . . . . . . . . 109
8 Baseline Search Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
8.1 Random Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
8.2 Random Walks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
8.2.1 Adaptive Walks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
8.3 Exhaustive Enumeration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
9 Forma Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
10 General Information on Optimization . . . . . . . . . . . . . . . . . . . . . . 123
10.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
10.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
10.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
10.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Part II Difficulties in Optimization
11 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
12 Problem Hardness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
12.1 Algorithmic Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
12.2 Complexity Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
12.2.1 Turing Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
12.2.2 P, NP, Hardness, and Completeness . . . . . . . . . . . . . . . . 146
12.3 The Problem: Many Real-World Tasks are NP-hard . . . . . . . . . . . 147
12.4 Countermeasures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
13 Unsatisfying Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
13.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
13.2 The Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
13.2.1 Premature Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . 152
13.2.2 Non-Uniform Convergence . . . . . . . . . . . . . . . . . . . . . . . . 152
13.2.3 Domino Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
13.3 One Cause: Loss of Diversity . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
13.3.1 Exploration vs. Exploitation . . . . . . . . . . . . . . . . . . . . . . . 155
13.4 Countermeasures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
13.4.1 Search Operator Design . . . . . . . . . . . . . . . . . . . . . . . . . . 157
13.4.2 Restarting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
13.4.3 Low Selection Pressure and Population Size . . . . . . . . . . . . . . 157
13.4.4 Sharing, Niching, and Clearing . . . . . . . . . . . . . . . . . . . . . . 157
13.4.5 Self-Adaptation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
13.4.6 Multi-Objectivization . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
14 Ruggedness and Weak Causality . . . . . . . . . . . . . . . . . . . . . . . . . 161
14.1 The Problem: Ruggedness . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
14.2 One Cause: Weak Causality . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
14.3 Fitness Landscape Measures . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
14.3.1 Autocorrelation and Correlation Length . . . . . . . . . . . . . . . . . 163
14.4 Countermeasures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
15 Deceptiveness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
15.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
15.2 The Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
15.3 Countermeasures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
16 Neutrality and Redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
16.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
16.2 Evolvability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
16.3 Neutrality: Problematic and Beneficial . . . . . . . . . . . . . . . . . . . 172
16.4 Neutral Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
16.5 Redundancy: Problematic and Beneficial . . . . . . . . . . . . . . . . . . 174
16.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
16.7 Needle-In-A-Haystack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
17 Epistasis, Pleiotropy, and Separability . . . . . . . . . . . . . . . . . . . . . 177
17.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
17.1.1 Epistasis and Pleiotropy . . . . . . . . . . . . . . . . . . . . . . . . . 177
17.1.2 Separability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
17.1.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
17.2 The Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
17.3 Countermeasures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
17.3.1 Choice of the Representation . . . . . . . . . . . . . . . . . . . . . . . 182
17.3.2 Parameter Tweaking . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
17.3.3 Linkage Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
17.3.4 Cooperative Coevolution . . . . . . . . . . . . . . . . . . . . . . . . . 184
18 Noise and Robustness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
18.1 Introduction – Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
18.2 The Problem: Need for Robustness . . . . . . . . . . . . . . . . . . . . . . . 188
18.3 Countermeasures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
19 Overfitting and Oversimplification . . . . . . . . . . . . . . . . . . . . . . . 191
19.1 Overfitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
19.1.1 The Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
19.1.2 Countermeasures . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
19.2 Oversimplification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
19.2.1 The Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
19.2.2 Countermeasures . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
20 Dimensionality (Objective Functions) . . . . . . . . . . . . . . . . . . . . . . 197
20.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
20.2 The Problem: Many-Objective Optimization . . . . . . . . . . . . . . . . . . 198
20.3 Countermeasures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
20.3.1 Increasing Population Size . . . . . . . . . . . . . . . . . . . . . . . . 201
20.3.2 Increasing Selection Pressure . . . . . . . . . . . . . . . . . . . . . . . 201
20.3.3 Indicator Function-based Approaches . . . . . . . . . . . . . . . . . . 201
20.3.4 Scalarizing Approaches . . . . . . . . . . . . . . . . . . . . . . . . 202
20.3.5 Limiting the Search Area in the Objective Space . . . . . . . . . . . . 202
20.3.6 Visualization Methods . . . . . . . . . . . . . . . . . . . . . . . . . . 202
21 Scale (Decision Variables) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
21.1 The Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
21.2 Countermeasures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
21.2.1 Parallelization and Distribution . . . . . . . . . . . . . . . . . . . . . 204
21.2.2 Genetic Representation and Development . . . . . . . . . . . . . . . . 204
21.2.3 Exploiting Separability . . . . . . . . . . . . . . . . . . . . . . . . . . 204
21.2.4 Combination of Techniques . . . . . . . . . . . . . . . . . . . . . . . . 205
22 Dynamically Changing Fitness Landscape . . . . . . . . . . . . . . . . . . . 207
23 The No Free Lunch Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
23.1 Initial Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
23.2 The Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
23.3 Implications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
23.4 Infinite and Continuous Domains . . . . . . . . . . . . . . . . . . . . . . 213
23.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
24 Lessons Learned: Designing Good Encodings . . . . . . . . . . . . . . . . . 215
24.1 Compact Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
24.2 Unbiased Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
24.3 Surjective GPM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
24.4 Injective GPM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
24.5 Consistent GPM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
24.6 Formae Inheritance and Preservation . . . . . . . . . . . . . . . . . . . . . . 217
24.7 Formae in Genotypic Space Aligned with Phenotypic Formae . . . . . . . . . 217
24.8 Compatibility of Formae . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
24.9 Representation with Causality . . . . . . . . . . . . . . . . . . . . . . . . . . 218
24.10 Combinations of Formae . . . . . . . . . . . . . . . . . . . . . . . . . . 218
24.11 Reachability of Formae . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
24.12 Influence of Formae . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
24.13 Scalability of Search Operations . . . . . . . . . . . . . . . . . . . . . . 219
24.14 Appropriate Abstraction . . . . . . . . . . . . . . . . . . . . . . . . . . 219
24.15 Indirect Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
24.15.1 Extradimensional Bypass: Example for Good Complexity . . . . 219
Part III Metaheuristic Optimization Algorithms
25 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
25.1 General Information on Metaheuristics . . . . . . . . . . . . . . . . . . . . . 226
25.1.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 226
25.1.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
25.1.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 227
25.1.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
26 Hill Climbing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
26.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
26.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
26.3 Multi-Objective Hill Climbing . . . . . . . . . . . . . . . . . . . . . . . . . . 230
26.4 Problems in Hill Climbing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
26.5 Hill Climbing With Random Restarts . . . . . . . . . . . . . . . . . . . . . . 232
26.6 General Information on Hill Climbing . . . . . . . . . . . . . . . . . . . . . . 234
26.6.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 234
26.6.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
26.6.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 234
26.6.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
26.7 Raindrop Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
26.7.1 General Information on Raindrop Method . . . . . . . . . . . . . . . 236
27 Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
27.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
27.1.1 Metropolis Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
27.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
27.2.1 No Boltzmann's Constant . . . . . . . . . . . . . . . . . . . . . . 246
27.2.2 Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
27.3 Temperature Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
27.3.1 Logarithmic Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . 248
27.3.2 Exponential Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . 249
27.3.3 Polynomial and Linear Scheduling . . . . . . . . . . . . . . . . . . . . 249
27.3.4 Adaptive Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
27.3.5 Larger Step Widths . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
27.4 General Information on Simulated Annealing . . . . . . . . . . . . . . . . . . 250
27.4.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 250
27.4.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
27.4.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 250
27.4.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
28 Evolutionary Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
28.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
28.1.1 The Basic Cycle of EAs . . . . . . . . . . . . . . . . . . . . . . . . . . 254
28.1.2 Biological and Artificial Evolution . . . . . . . . . . . . . . . . . . 255
28.1.3 Historical Classication . . . . . . . . . . . . . . . . . . . . . . . . . . 263
28.1.4 Populations in Evolutionary Algorithms . . . . . . . . . . . . . . . . . 265
28.1.5 Configuration Parameters of Evolutionary Algorithms . . . . . . . 269
28.2 Genotype-Phenotype Mappings . . . . . . . . . . . . . . . . . . . . . . . . . 270
28.2.1 Direct Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
28.2.2 Indirect Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
28.3 Fitness Assignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
28.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
28.3.2 Weighted Sum Fitness Assignment . . . . . . . . . . . . . . . . . . . . 275
28.3.3 Pareto Ranking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
28.3.4 Sharing Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
28.3.5 Variety Preserving Fitness Assignment . . . . . . . . . . . . . . . . . 279
28.3.6 Tournament Fitness Assignment . . . . . . . . . . . . . . . . . . . . . 284
28.4 Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
28.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
28.4.2 Truncation Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
28.4.3 Fitness Proportionate Selection . . . . . . . . . . . . . . . . . . . . . 290
28.4.4 Tournament Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
28.4.5 Linear Ranking Selection . . . . . . . . . . . . . . . . . . . . . . . . . 300
28.4.6 Random Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
28.4.7 Clearing and Simple Convergence Prevention (SCP) . . . . . . . . . . 302
28.5 Reproduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
28.6 Maintaining a Set of Non-Dominated/Non-Prevailed Individuals . . . . . . . 308
28.6.1 Updating the Optimal Set . . . . . . . . . . . . . . . . . . . . . . . . 308
28.6.2 Obtaining Non-Prevailed Elements . . . . . . . . . . . . . . . . . . . . 309
28.6.3 Pruning the Optimal Set . . . . . . . . . . . . . . . . . . . . . . . . . 311
28.7 General Information on Evolutionary Algorithms . . . . . . . . . . . . . . . 314
28.7.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 314
28.7.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
28.7.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 317
28.7.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
29 Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
29.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
29.1.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
29.2 Genomes in Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 326
29.2.1 Classification According to Length . . . . . . . . . . . . . . . . . . 326
29.2.2 Classification According to Element Type . . . . . . . . . . . . . . 327
29.2.3 Classification According to Meaning of Loci . . . . . . . . . . . . 327
29.2.4 Introns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
29.3 Fixed-Length String Chromosomes . . . . . . . . . . . . . . . . . . . . . . . 328
29.3.1 Creation: Nullary Reproduction . . . . . . . . . . . . . . . . . . . . . 328
29.3.2 Mutation: Unary Reproduction . . . . . . . . . . . . . . . . . . . . . 330
29.3.3 Permutation: Unary Reproduction . . . . . . . . . . . . . . . . . . . . 333
29.3.4 Crossover: Binary Reproduction . . . . . . . . . . . . . . . . . . . . . 335
29.4 Variable-Length String Chromosomes . . . . . . . . . . . . . . . . . . . . . . 339
29.4.1 Creation: Nullary Reproduction . . . . . . . . . . . . . . . . . . . . . 340
29.4.2 Insertion and Deletion: Unary Reproduction . . . . . . . . . . . . . . 340
29.4.3 Crossover: Binary Reproduction . . . . . . . . . . . . . . . . . . . . . 340
29.5 Schema Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
29.5.1 Schemata and Masks . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
29.5.2 Wildcards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
29.5.3 Holland's Schema Theorem . . . . . . . . . . . . . . . . . . . . . . 342
29.5.4 Criticism of the Schema Theorem . . . . . . . . . . . . . . . . . . . . 345
29.5.5 The Building Block Hypothesis . . . . . . . . . . . . . . . . . . . . . 346
29.5.6 Genetic Repair and Similarity Extraction . . . . . . . . . . . . . . . . 346
29.6 The Messy Genetic Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 347
29.6.1 Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
29.6.2 Reproduction Operations . . . . . . . . . . . . . . . . . . . . . . . . . 347
29.6.3 Overspecification and Underspecification . . . . . . . . . . . . . . . . 348
29.6.4 The Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
29.7 Random Keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
29.8 General Information on Genetic Algorithms . . . . . . . . . . . . . . . . . . 351
29.8.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 351
29.8.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
29.8.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 353
29.8.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
30 Evolution Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
30.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
30.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
30.3 Recombination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
30.3.1 Dominant Recombination . . . . . . . . . . . . . . . . . . . . . . . . . 362
30.3.2 Intermediate Recombination . . . . . . . . . . . . . . . . . . . . . . . 362
30.4 Mutation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
30.4.1 Using Normal Distributions . . . . . . . . . . . . . . . . . . . . . . . . 364
30.5 Parameter Adaptation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
30.5.1 The 1/5th Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
30.5.2 Endogenous Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . 370
30.6 General Information on Evolution Strategies . . . . . . . . . . . . . . . . . . 373
30.6.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 373
30.6.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
30.6.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 373
30.6.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
31 Genetic Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
31.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
31.1.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
31.2 General Information on Genetic Programming . . . . . . . . . . . . . . . . . 382
31.2.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 382
31.2.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
31.2.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 383
31.2.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
31.3 (Standard) Tree Genomes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
31.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
31.3.2 Creation: Nullary Reproduction . . . . . . . . . . . . . . . . . . . . . 389
31.3.3 Node Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
31.3.4 Unary Reproduction Operations . . . . . . . . . . . . . . . . . . . . . 393
31.3.5 Recombination: Binary Reproduction . . . . . . . . . . . . . . . . . . 398
31.3.6 Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
31.4 Linear Genetic Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
31.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
31.4.2 Advantages and Disadvantages . . . . . . . . . . . . . . . . . . . . . . 404
31.4.3 Realizations and Implementations . . . . . . . . . . . . . . . . . . . . 405
31.4.4 Recombination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
31.4.5 General Information on Linear Genetic Programming . . . . . . . . . 408
31.5 Grammars in Genetic Programming . . . . . . . . . . . . . . . . . . . . . . . 409
31.5.1 General Information on Grammar-Guided Genetic Programming . . . 409
31.6 Graph-based Approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
31.6.1 General Information on Graph-based Genetic Programming . . . . . 409
32 Evolutionary Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
32.1 General Information on Evolutionary Programming . . . . . . . . . . . . . . 414
32.1.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 414
32.1.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
32.1.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 414
32.1.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
33 Differential Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
33.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
33.2 Ternary Recombination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
33.2.1 Advanced Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
33.3 General Information on Differential Evolution . . . . . . . . . . . . . . . . . 421
33.3.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 421
33.3.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
33.3.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 421
33.3.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
34 Estimation Of Distribution Algorithms . . . . . . . . . . . . . . . . . . . . . 427
34.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
34.2 General Information on Estimation Of Distribution Algorithms . . . . . . . 428
34.2.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 428
34.2.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
34.2.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 429
34.2.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
34.3 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
34.4 EDAs Searching Bit Strings . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
34.4.1 UMDA: Univariate Marginal Distribution Algorithm . . . . . . . . . 434
34.4.2 PBIL: Population-Based Incremental Learning . . . . . . . . . . . . . 438
34.4.3 cGA: Compact Genetic Algorithm . . . . . . . . . . . . . . . . . . . . 439
34.5 EDAs Searching Real Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . 442
34.5.1 SHCLVND: Stochastic Hill Climbing with Learning by Vectors of
Normal Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
34.5.2 PBIL_C: Continuous PBIL . . . . . . . . . . . . . . . . . . . . . . . . 444
34.5.3 Real-Coded PBIL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
34.6 EDAs Searching Trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
34.6.1 PIPE: Probabilistic Incremental Program Evolution . . . . . . . . . . 447
34.7 Difficulties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
34.7.1 Diversity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
34.7.2 Epistasis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
35 Learning Classifier Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
35.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
35.2 Families of Learning Classifier Systems . . . . . . . . . . . . . . . . . . . . . 457
35.3 General Information on Learning Classifier Systems . . . . . . . . . . . . . . 459
35.3.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 459
35.3.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
35.3.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 459
35.3.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
36 Memetic and Hybrid Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 463
36.1 Memetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
36.2 Lamarckian Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
36.3 Baldwin Effect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
36.4 General Information on Memetic Algorithms . . . . . . . . . . . . . . . . . . 467
36.4.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 467
36.4.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
36.4.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 467
36.4.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
37 Ant Colony Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
37.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
37.2 General Information on Ant Colony Optimization . . . . . . . . . . . . . . . 473
37.2.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 473
37.2.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
37.2.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 473
37.2.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
38 River Formation Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
38.1 General Information on River Formation Dynamics . . . . . . . . . . . . . . 478
38.1.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 478
38.1.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
38.1.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 478
38.1.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
39 Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
39.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
39.2 The Particle Swarm Optimization Algorithm . . . . . . . . . . . . . . . . . . 481
39.2.1 Communication with Neighbors: Social Interaction . . . . . . . . . . 482
39.2.2 Particle Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
39.2.3 Basic Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
39.3 General Information on Particle Swarm Optimization . . . . . . . . . . . . . 483
39.3.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 483
39.3.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
39.3.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 483
39.3.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
40 Tabu Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
40.1 General Information on Tabu Search . . . . . . . . . . . . . . . . . . . . . . 490
40.1.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 490
40.1.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
40.1.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 490
40.1.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
41 Extremal Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
41.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
41.1.1 Self-Organized Criticality . . . . . . . . . . . . . . . . . . . . . . . . . 493
41.1.2 The Bak-Sneppen model of Evolution . . . . . . . . . . . . . . . . . . 493
41.2 Extremal Optimization and Generalized Extremal Optimization . . . . . . . 494
41.3 General Information on Extremal Optimization . . . . . . . . . . . . . . . . 496
41.3.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 496
41.3.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
41.3.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 496
41.3.4 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
42 GRASPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
42.1 General Information on GRASPs . . . . . . . . . . . . . . . . . . . . . . . . 500
42.1.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 500
42.1.2 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 500
42.1.3 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
43 Downhill Simplex (Nelder and Mead) . . . . . . . . . . . . . . . . . . . . . . 501
43.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
43.2 General Information on Downhill Simplex . . . . . . . . . . . . . . . . . . . 502
43.2.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 502
43.2.2 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 502
43.2.3 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502
44 Random Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
44.1 General Information on Random Optimization . . . . . . . . . . . . . . . . . 506
44.1.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 506
44.1.2 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 506
44.1.3 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
Part IV Non-Metaheuristic Optimization Algorithms
45 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
46 State Space Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
46.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
46.1.1 The Baseline: A Binary Goal Criterion . . . . . . . . . . . . . . . . . 511
46.1.2 State Space and Neighborhood Search . . . . . . . . . . . . . . . . . . 511
46.1.3 The Search Space as Graph . . . . . . . . . . . . . . . . . . . . . . . . 512
46.1.4 Key Efficiency Features . . . . . . . . . . . . . . . . . . . . . . . . . . 514
46.2 Uninformed Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
46.2.1 Breadth-First Search . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
46.2.2 Depth-First Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
46.2.3 Depth-Limited Search . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
46.2.4 Iteratively Deepening Depth-First Search . . . . . . . . . . . . . . . . 516
46.2.5 General Information on Uninformed Search . . . . . . . . . . . . . . . 517
46.3 Informed Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
46.3.1 Greedy Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
46.3.2 A* Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
46.3.3 General Information on Informed Search . . . . . . . . . . . . . . . . 520
47 Branch And Bound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
47.1 General Information on Branch And Bound . . . . . . . . . . . . . . . . . . 524
47.1.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 524
47.1.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
47.1.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 524
48 Cutting-Plane Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
48.1 General Information on Cutting-Plane Method . . . . . . . . . . . . . . . . . 528
48.1.1 Applications and Examples . . . . . . . . . . . . . . . . . . . . . . . . 528
48.1.2 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528
48.1.3 Conferences and Workshops . . . . . . . . . . . . . . . . . . . . . . . 528
Part V Applications
49 Real-World Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
49.1 Symbolic Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
49.1.1 Genetic Programming: Genome for Symbolic Regression . . . . . . . 531
49.1.2 Sample Data, Quality, and Estimation Theory . . . . . . . . . . . . . 532
49.1.3 Limits of Symbolic Regression . . . . . . . . . . . . . . . . . . . . . . 535
49.2 Data Mining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
49.2.1 Classication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
49.3 Freight Transportation Planning . . . . . . . . . . . . . . . . . . . . . . . . . 536
49.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
49.3.2 Vehicle Routing in Theory and Practice . . . . . . . . . . . . . . . . . 538
49.3.3 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
49.3.4 Evolutionary Approach . . . . . . . . . . . . . . . . . . . . . . . . . . 542
49.3.5 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
49.3.6 Holistic Approach to Logistics . . . . . . . . . . . . . . . . . . . . . . 549
49.3.7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
50 Benchmarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
50.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
50.2 Bit-String based Problem Spaces . . . . . . . . . . . . . . . . . . . . . . . . . 553
50.2.1 Kauffman's NK Fitness Landscapes . . . . . . . . . . . . . . . . . . . 553
50.2.2 The p-Spin Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
50.2.3 The ND Family of Fitness Landscapes . . . . . . . . . . . . . . . . . . 557
50.2.4 The Royal Road . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
50.2.5 OneMax and BinInt . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
50.2.6 Long Path Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
50.2.7 Tunable Model for Problematic Phenomena . . . . . . . . . . . . . . 566
50.3 Real Problem Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
50.3.1 Single-Objective Optimization . . . . . . . . . . . . . . . . . . . . . . 580
50.4 Close-to-Real Vehicle Routing Problem Benchmark . . . . . . . . . . . . . . 621
50.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
50.4.2 Involved Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
50.4.3 Problem Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 632
50.4.4 Objectives and Constraints . . . . . . . . . . . . . . . . . . . . . . . . 634
50.4.5 Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635
Part VI Background
51 Set Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 639
51.1 Set Membership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 639
51.2 Special Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
51.3 Relations between Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
51.4 Operations on Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641
51.5 Tuples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
51.6 Permutations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
51.7 Binary Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644
51.8 Order Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
51.9 Equivalence Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
51.10 Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
51.10.1 Monotonicity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
51.11 Lists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
51.11.1 Sorting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
51.11.2 Searching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
51.11.3 Transformations between Sets and Lists . . . . . . . . . . . . . . . . . 650
52 Graph Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
53 Stochastic Theory and Statistics . . . . . . . . . . . . . . . . . . . . . . . . . 653
53.1 Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 653
53.1.1 Probability as defined by Bernoulli (1713) . . . . . . . . . . . . . . . 654
53.1.2 Combinatorics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654
53.1.3 The Limiting Frequency Theory of von Mises . . . . . . . . . . . . . . 655
53.1.4 The Axioms of Kolmogorov . . . . . . . . . . . . . . . . . . . . . . . . 656
53.1.5 Conditional Probability . . . . . . . . . . . . . . . . . . . . . . . . . . 657
53.1.6 Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 657
53.1.7 Cumulative Distribution Function . . . . . . . . . . . . . . . . . . . . 658
53.1.8 Probability Mass Function . . . . . . . . . . . . . . . . . . . . . . . . 659
53.1.9 Probability Density Function . . . . . . . . . . . . . . . . . . . . . . . 659
53.2 Parameters of Distributions and their Estimates . . . . . . . . . . . . . . . . 659
53.2.1 Count, Min, Max and Range . . . . . . . . . . . . . . . . . . . . . . . 660
53.2.2 Expected Value and Arithmetic Mean . . . . . . . . . . . . . . . . . . 661
53.2.3 Variance and Standard Deviation . . . . . . . . . . . . . . . . . . . . 662
53.2.4 Moments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664
53.2.5 Skewness and Kurtosis . . . . . . . . . . . . . . . . . . . . . . . . . . 665
53.2.6 Median, Quantiles, and Mode . . . . . . . . . . . . . . . . . . . . . . 665
53.2.7 Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
53.2.8 The Law of Large Numbers . . . . . . . . . . . . . . . . . . . . . . . . 668
53.3 Some Discrete Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . 668
53.3.1 Discrete Uniform Distribution . . . . . . . . . . . . . . . . . . . . . . 668
53.3.2 Poisson Distribution π_λ . . . . . . . . . . . . . . . . . . . . . . . . . 671
53.3.3 Binomial Distribution B(n, p) . . . . . . . . . . . . . . . . . . . . . . 674
53.4 Some Continuous Distributions . . . . . . . . . . . . . . . . . . . . . . . . . 676
53.4.1 Continuous Uniform Distribution . . . . . . . . . . . . . . . . . . . . 676
53.4.2 Normal Distribution N(μ, σ²) . . . . . . . . . . . . . . . . . . . . . . 678
53.4.3 Exponential Distribution exp(λ) . . . . . . . . . . . . . . . . . . . . . 682
53.4.4 Chi-square Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . 684
53.4.5 Student's t-Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . 687
53.5 Example: Throwing a Dice . . . . . . . . . . . . . . . . . . . . . . . . . . . . 689
53.6 Estimation Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 691
53.6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 691
53.6.2 Likelihood and Maximum Likelihood Estimators . . . . . . . . . . . . 693
53.6.3 Confidence Intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . 696
53.7 Statistical Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 699
53.7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 699
Part VII Implementation
54 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705
55 The Specification Package . . . . . . . . . . . . . . . . . . . . . . . . . . . . 707
55.1 General Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 707
55.1.1 Search Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 708
55.1.2 Genotype-Phenotype Mapping . . . . . . . . . . . . . . . . . . . . . . 711
55.1.3 Objective Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 712
55.1.4 Termination Criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . 712
55.1.5 Optimization Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 713
55.2 Algorithm Specific Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
55.2.1 Evolutionary Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 717
55.2.2 Estimation Of Distribution Algorithms . . . . . . . . . . . . . . . . . 719
56 The Implementation Package . . . . . . . . . . . . . . . . . . . . . . . . . . . 723
56.1 The Optimization Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 724
56.1.1 Algorithm Base Classes . . . . . . . . . . . . . . . . . . . . . . . . . . 724
56.1.2 Baseline Search Patterns . . . . . . . . . . . . . . . . . . . . . . . . . 729
56.1.3 Local Search Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 735
56.1.4 Population-based Metaheuristics . . . . . . . . . . . . . . . . . . . . . 746
56.2 Search Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 784
56.2.1 Operations for String-based Search Spaces . . . . . . . . . . . . . . . 784
56.2.2 Operations for Tree-based Search Spaces . . . . . . . . . . . . . . . . 809
56.2.3 Multiplexing Operators . . . . . . . . . . . . . . . . . . . . . . . . . . 818
56.3 Data Structures for Special Search Spaces . . . . . . . . . . . . . . . . . . . 824
56.3.1 Tree-based Search Spaces as used in Genetic Programming . . . . . . 824
56.4 Genotype-Phenotype Mappings . . . . . . . . . . . . . . . . . . . . . . . . . 833
56.5 Termination Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 835
56.6 Comparator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 837
56.7 Utility Classes, Constants, and Routines . . . . . . . . . . . . . . . . . . . . 839
57 Demos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 859
57.1 Demos for String-based Problem Spaces . . . . . . . . . . . . . . . . . . . . . 859
57.1.1 Real-Vector based Problem Spaces X ⊆ R^n . . . . . . . . . . . . . . 859
57.1.2 Permutation-based Problem Spaces . . . . . . . . . . . . . . . . . . . 887
57.2 Demos for Genetic Programming . . . . . . . . . . . . . . . . . . . . . . . . 905
57.2.1 Demos for Genetic Programming Applied to Mathematical Problems 905
57.3 The VRP Benchmark Problem . . . . . . . . . . . . . . . . . . . . . . . . . . 914
57.3.1 The Involved Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . 914
57.3.2 The Optimization Utilities . . . . . . . . . . . . . . . . . . . . . . . . 925
Appendices
A Citation Suggestion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 945
B GNU Free Documentation License (FDL) . . . . . . . . . . . . . . . . . . . 947
B.1 Applicability and Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . 947
B.2 Verbatim Copying . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 949
B.3 Copying in Quantity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 949
B.4 Modifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 949
B.5 Combining Documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 951
B.6 Collections of Documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 951
B.7 Aggregation with Independent Works . . . . . . . . . . . . . . . . . . . . . . 951
B.8 Translation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 952
B.9 Termination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 952
B.10 Future Revisions of this License . . . . . . . . . . . . . . . . . . . . . . . . . 952
B.11 Relicensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 953
C GNU Lesser General Public License (LGPL) . . . . . . . . . . . . . . . . . 955
C.1 Additional Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 955
C.2 Exception to Section 3 of the GNU GPL . . . . . . . . . . . . . . . . . . . . 955
C.3 Conveying Modied Versions . . . . . . . . . . . . . . . . . . . . . . . . . . . 956
C.4 Object Code Incorporating Material from Library Header Files . . . . . . . . 956
C.5 Combined Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 956
C.6 Combined Libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 957
C.7 Revised Versions of the GNU Lesser General Public License . . . . . . . . . 957
D Credits and Contributors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 959
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 961
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1190
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1209
List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1213
List of Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1217
List of Listings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1221
Part I
Foundations
Chapter 1
Introduction
1.1 What is Global Optimization?
One of the most fundamental principles in our world is the search for optimal states. It begins
in the microcosm in physics, where atomic bonds¹ minimize the energy of electrons [2137].
When molecules combine into solid bodies during the process of freezing, they assume crystal
structures which come close to energy-optimal configurations. Such a natural optimization
process, of course, is not driven by any higher intention but simply results from the laws of
physics.
The same goes for the biological principle of survival of the fittest [2571] which, together
with biological evolution [684], leads to a better adaptation of species to their environ-
ment. Here, a local optimum is a well-adapted species that can maintain a stable population
in its niche.
[Figure 1.1 shows a cloud of phrases that signal an optimization problem: biggest..., smallest..., fastest..., cheapest..., minimal..., maximal..., most precise..., most robust..., most similar to..., ...with least energy, ...with least aerodynamic drag, ...on the smallest possible area, ...that fulfills all timing constraints, best trade-offs between...]
Figure 1.1: Terms that hint: Optimization Problem (inspired by [428])
¹ http://en.wikipedia.org/wiki/Chemical_bond [accessed 2007-07-12]
As long as humankind has existed, we have striven for perfection in many areas. As sketched in
Figure 1.1, there exists an incredible variety of different terms that indicate that a situation
is actually an optimization problem. For example, we want to reach a maximum degree
of happiness with the least amount of effort. In our economy, profit and sales must be
maximized and costs should be as low as possible. Therefore, optimization is one of the oldest
sciences and even extends into daily life [2023]. We can also observe the phenomenon of
trade-offs between different criteria, which is common in many other problem domains:
One person may choose to work hard with the goal of securing stability and pursuing wealth
for her future life, accepting very little spare time in the current stage. Another person can,
deliberately, choose a lax life full of joy in the current phase but rather insecure in the
long term. Similarly, most optimization problems in real-world applications involve multiple,
usually conflicting, objectives.
Furthermore, many optimization problems involve certain constraints. The guy with the relaxed lifestyle, for instance, regardless of how lazy he might become, cannot avoid dragging himself to the supermarket from time to time in order to acquire food. The eager beaver, on the other hand, will not be able to keep up 18 hours of work for seven days per week in the long run without damaging her health considerably and thus, rendering the goal of an enjoyable future unattainable. Likewise, the parameters of the solutions of an optimization problem may be constrained as well.
1.1.1 Optimization
Since optimization covers a wide variety of subject areas, there exist different points of view on what it actually is. We approach this question by first clarifying the goal of optimization: solving optimization problems.
Definition D1.1 (Optimization Problem (Economical View)). An optimization problem is a situation which requires choosing one option from a set of possible alternatives in order to reach a predefined/required benefit at minimal costs.
Definition D1.2 (Optimization Problem (Simplified Mathematical View)). Solving an optimization problem requires finding an input value x⋆ for which a mathematical function f takes on the smallest possible value (while usually obeying some restrictions on the possible values of x⋆).
Every task which has the goal of approaching certain configurations considered optimal in the context of pre-defined criteria can be viewed as an optimization problem. After giving these rough definitions (which we will later refine in Section 6.2 on page 103), we can also specify the topic of this book:

Definition D1.3 (Optimization). Optimization is the process of solving an optimization problem, i. e., finding suitable solutions for it.
Definition D1.4 (Global Optimization). Global optimization is optimization with the goal of finding solutions x⋆ for a given optimization problem which have the property that no other, better solutions exist.
If something is important, general, and abstract enough, there is always a mathematical discipline dealing with it. Global Optimization² [609, 2126, 2181, 2714] is also the branch of applied mathematics and numerical analysis that focuses on, well, optimization. Closely related and overlapping areas are mathematical economics³ [559] and operations research⁴ [580, 1232].
2 http://en.wikipedia.org/wiki/Global_optimization [accessed 2007-07-03]
1.1.2 Algorithms and Programs
The title of this book is "Global Optimization Algorithms: Theory and Application" and so far, we have clarified what optimization is, so the reader has an idea about the general direction in which we are going to drift. Before giving an overview of the contents to follow, let us take a quick look at the second component of the book title: algorithms.
The term algorithm comprises essentially all forms of directives describing what to do to reach a certain goal. A culinary recipe is an algorithm, for example, since it tells how much of which ingredient is to be added to a meal in which sequence and how everything should be heated. The commands inside an algorithm can be very concise or very imprecise, depending on the area of application. How accurately can we, for instance, carry out the instruction "Add a tablespoon of sugar."? Hence, algorithms are such a wide field that there exist numerous different, rather fuzzy definitions for the word algorithm [46, 141, 157, 635, 1616]. We base ours on the one given by Wikipedia:
Definition D1.5 (Algorithm (Wikipedia)). An algorithm⁵ is a finite set of well-defined instructions for accomplishing some task. It starts in some initial state and usually terminates in a final state.
The term algorithm is derived from the name of the mathematician Mohammed ibn-Musa al-Khwarizmi, who was part of the royal court in Baghdad and lived from about 780 to 850. Al-Khwarizmi's work is the likely source for the word algebra as well. Definition D1.6 provides a basic understanding of what an optimization algorithm is, a working hypothesis for the time being. It will later be refined in Definition D6.3 on page 103.

Definition D1.6 (Optimization Algorithm). An optimization algorithm is an algorithm suitable to solve optimization problems.
An algorithm is a set of directions in a representation which is usually understandable for human beings and thus, independent of specific hardware and computers. A program, on the other hand, is basically an algorithm realized for a given computer or execution environment.
Definition D1.7 (Program). A program⁶ is a set of instructions that describe a task or an algorithm to be carried out on a computer.
Therefore, the primitive instructions of the algorithm must be expressed either in a form that the machine can process directly (machine code⁷ [2135]), in a form that can be translated (1:1) into such code (assembly language [2378], Java byte code [1109], etc.), or in a high-level programming language⁸ [1817] which can be translated (n:m) into the latter using special software (a compiler) [1556].
In this book, we will provide several algorithms for solving optimization problems. In our algorithms, we will try to strike a balance between abstraction, ease of reading, and closeness to the syntax of high-level programming languages such as Java. In the algorithms, we will use self-explaining primitives which are still based on clear definitions (such as the list operations discussed in Section 51.11 on page 647).
3 http://en.wikipedia.org/wiki/Mathematical_economics [accessed 2009-06-16]
4 http://en.wikipedia.org/wiki/Operations_research [accessed 2009-06-19]
5 http://en.wikipedia.org/wiki/Algorithm [accessed 2007-07-03]
6 http://en.wikipedia.org/wiki/Computer_program [accessed 2007-07-03]
7 http://en.wikipedia.org/wiki/Machine_code [accessed 2007-07-04]
8 http://en.wikipedia.org/wiki/High-level_programming_language [accessed 2007-07-03]
1.1.3 Optimization versus Dedicated Algorithms
For at least fifty-five years, research in computer science has led to the development of many algorithms which solve given problem classes to optimality. Basically, we can distinguish two kinds of problem-solving algorithms: dedicated algorithms and optimization algorithms.
Dedicated algorithms are specialized to exactly solve a given class of problems in the shortest possible time. They are most often deterministic and always utilize exact theoretical knowledge about the problem at hand. Kruskal's algorithm [1625], for example, is the best way to search for the minimum spanning tree in a graph with weighted edges. The shortest paths from a source node to the other nodes in a network can best be found with Dijkstra's algorithm [796], and the maximum flow in a flow network can be computed quickly with the algorithm given by Dinitz [800]. There are many problems for which such fast and efficient dedicated algorithms exist. For each optimization problem, we should always check first whether it can be solved with such a specialized, exact algorithm (see also Chapter 7 on page 109).
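To illustrate what such a dedicated, exact algorithm looks like, the following is a minimal sketch of Kruskal's algorithm for the minimum spanning tree problem; the example graph and its edge weights are made up for demonstration:

```python
# Kruskal's algorithm: a dedicated, deterministic, exact method for the
# minimum spanning tree problem. The example graph below is made up.

def kruskal(num_nodes, edges):
    """edges: list of (weight, u, v); returns (total weight, chosen edges)."""
    parent = list(range(num_nodes))  # union-find forest, one tree per node

    def find(i):
        while parent[i] != i:              # walk up to the representative
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    total, chosen = 0, []
    for w, u, v in sorted(edges):  # consider edges by ascending weight
        ru, rv = find(u), find(v)
        if ru != rv:               # joining two components cannot close a cycle
            parent[ru] = rv
            total += w
            chosen.append((u, v))
    return total, chosen

# A small weighted graph on 4 nodes, given as (weight, node, node) triples:
w, tree = kruskal(4, [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 2, 3), (5, 1, 3)])
print(w, tree)  # 6 [(0, 1), (2, 3), (1, 2)]
```

The algorithm is deterministic: for the same input graph it always returns the same (optimal) spanning tree weight.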
However, for many optimization tasks no efficient algorithm exists. The reason can be that (1) the problem is too specific (after all, researchers usually develop algorithms only for problems which are of general interest) or (2) that, for some problems, (the current scientific opinion is that) no efficient, dedicated algorithm can be constructed at all (see Section 12.2.2 on page 146).
For such problems, the more general optimization algorithms are employed. Most often, these algorithms only need a definition of the structure of possible solutions and a function f which measures the quality of a candidate solution. Based on this information, they try to find solutions for which f takes on the best values. Since optimization methods utilize much less information than exact algorithms, they are usually slower and cannot provide the same solution quality. In cases 1 and 2, however, we have no choice but to use them. And indeed, there exists a vast number of situations where either Point 1, Point 2, or both conditions hold, as you can see in, for example, Table 25.1 on page 226 or Table 28.4 on page 314.
1.1.4 Structure of this Book
In the first part of this book, we will discuss the things which constitute an optimization problem and which are the basic components of each optimization process. In Part II, we take a look at all the possible aspects which can make an optimization task difficult. Understanding these issues will allow you to anticipate possible problems when solving an optimization task and, hopefully, to prevent their occurrence from the start. We then discuss metaheuristic optimization algorithms (and especially Evolutionary Computation methods) in Part III. Whereas these optimization approaches are clearly the center of this book, we also introduce traditional, non-metaheuristic, numerical, and/or exact methods in Part IV. In Part V, we then list some examples of applications for which optimization algorithms can be used. Finally, Part VI is a collection of definitions and knowledge which can be useful for understanding the terms and formulas used in the rest of the book. It is provided in an effort to make the book stand-alone and to allow students to understand it even if they have little mathematical or theoretical background.
1.2 Types of Optimization Problems
Basically, there are two general types of optimization tasks: combinatorial and numerical
problems.
1.2.1 Combinatorial Optimization Problems
Definition D1.8 (Combinatorial Optimization Problem). Combinatorial optimization problems⁹ [578, 628, 2124, 2430] are problems which are defined over a finite (or countably infinite) discrete problem space X and whose candidate solutions' structure can be expressed as
1. elements from finite sets,
2. finite sequences or permutations of elements x_i chosen from finite sets, i. e., x ∈ X ⇒ x = (x_1, x_2, . . . ),
3. sets of elements, i. e., x ∈ X ⇒ x = {x_1, x_2, . . . },
4. tree or graph structures with node or edge properties stemming from any of the above types, or
5. any form of nesting, combination, partitions, or subsets of the above.
Combinatorial tasks are, for example, the Traveling Salesman Problem (see Example E2.2 on page 45 [1429]), Vehicle Routing Problems (see Section 49.3 on page 536 and [2162, 2912]), graph coloring [1455], graph partitioning [344], scheduling [564], packing [564], and satisfiability problems [2335]. Algorithms suitable for such problems are, amongst others, Genetic Algorithms (Chapter 29), Simulated Annealing (Chapter 27), Tabu Search (Chapter 40), and Extremal Optimization (Chapter 41).
Example E1.1 (Bin Packing Problem).
The bin packing problem¹⁰ [1025] is a combinatorial optimization problem which is defined as follows: Given are a number k ∈ N_1 of bins, each of which having size b ∈ N_1, and a number n ∈ N_1 of objects with the weights a_1, a_2, . . . , a_n. The question to answer is: Can the n objects be distributed over the k bins in a way that none of the bins flows over? A configuration which meets this requirement is sketched in Figure 1.2. If another object of size a_11 = 4 were added, the answer to the question would become No.

Figure 1.2: One example instance of a bin packing problem: a distribution of n = 10 objects of sizes a_1 to a_10 (here a_1 = a_2 = 4, a_3 = a_4 = a_5 = 1, a_6 = a_7 = a_8 = a_9 = 2, a_10 = 3) into k = 5 bins of size b = 5.
The bin packing problem can be formalized to asking whether a mapping x from the numbers 1..n to 1..k exists so that Equation 1.1 holds. In Equation 1.1, the mapping is represented as a list x of k sets where the i-th set contains the indices of the objects to be packed into the i-th bin.

\exists x : \forall i \in 0..k-1 : \left( \sum_{j \in x[i]} a_j \right) \le b    (1.1)
Finding out whether this equation holds or not is known to be NP-complete. The question could be transformed into an optimization problem with the goal of finding the smallest number k of bins of size b which can contain all the objects listed in a. Alternatively, we could try to find the mapping x for which the function f_bp given in Equation 1.2 takes on the lowest value, for example.
9 http://en.wikipedia.org/wiki/Combinatorial_optimization [accessed 2010-08-04]
10 http://en.wikipedia.org/wiki/Bin_packing_problem [accessed 2010-08-27]
f_{bp}(x) = \sum_{i=0}^{k-1} \max\left( 0, \left( \sum_{j \in x[i]} a_j \right) - b \right)    (1.2)
This problem is known to be NP-hard. See Section 12.2 for a discussion of NP-completeness and NP-hardness and Task 67 on page 238 for a homework task concerned with solving the bin packing problem.
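The objective function f_bp of Equation 1.2 can be implemented directly. The following sketch (with a small, made-up problem instance and 0-based object indices) returns 0 exactly for assignments which satisfy Equation 1.1, and otherwise the total overflow of all bins:

```python
# The objective function f_bp from Equation 1.2: the summed overflow of all
# bins. x is a list of k sets; x[i] holds the (0-based) indices of the
# objects put into bin i. The instance data below is made up.

def f_bp(x, a, b):
    return sum(max(0, sum(a[j] for j in bin_i) - b) for bin_i in x)

a = [2, 3, 4, 1]                 # sizes of n = 4 objects
feasible   = [{0, 1}, {2, 3}]    # bin loads 5 and 5: no bin flows over
infeasible = [{0, 1, 3}, {2}]    # bin loads 6 and 4: bin 0 overflows by 1

print(f_bp(feasible, a, 5))      # 0, so Equation 1.1 holds for this x
print(f_bp(infeasible, a, 5))    # 1, the total overflow
```

An optimizer minimizing f_bp would thus be driven towards assignments for which the answer to the original yes/no question is Yes.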
Example E1.2 (Circuit Layout).
Given are a card with a finite set S of slots, a set C (with |C| ≤ |S|) of circuits to be placed on that card, and a connection list R ⊆ C × C where (c_1, c_2) ∈ R means that the two circuits c_1, c_2 ∈ C have to be connected. The goal is to find a layout x which assigns each circuit c ∈ C to a location s ∈ S so that the overall length of wiring required to establish all the connections r ∈ R becomes minimal, under the constraint that no wire must cross another one. An example for such a scenario is given in Figure 1.3.

Figure 1.3: An example for circuit layouts: a card with 20 × 11 slots, circuits C = {A, B, C, D, E, F, G, H, I, J, K, L, M}, and connections R = {(A,B), (B,C), (C,F), (C,G), (C,M), (D,E), (D,J), (E,F), (F,K), (G,J), (G,M), (H,J), (H,I), (H,L), (I,K), (I,L), (K,L), (L,M)}.
Similar circuit layout problems have existed in a variety of forms for quite a while now [1541] and are known to be quite hard. They may be extended with constraints such as to also minimize the heat emission of the circuit, to minimize cross-wire induction, and so on. They may also be extended to not only evolve the circuit locations but also the circuit types and numbers [1479, 2792] and thus, can become arbitrarily complex.
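As a simplified illustration of the objective involved, the sketch below computes the total wiring length of a given layout using Manhattan distances between slot coordinates. The no-crossing constraint and all extensions are ignored here, and the layout and connection list are made-up example data:

```python
# Total wiring length of a circuit layout, using the Manhattan distance per
# connection. Simplification: the no-crossing constraint is ignored, and the
# layout and connection list below are made-up example data.

def wire_length(layout, connections):
    """layout: circuit -> (row, col) slot; connections: pairs of circuits."""
    return sum(abs(layout[c1][0] - layout[c2][0]) +
               abs(layout[c1][1] - layout[c2][1])
               for c1, c2 in connections)

layout = {"A": (0, 0), "B": (0, 1), "C": (1, 1)}
R = [("A", "B"), ("B", "C"), ("A", "C")]
print(wire_length(layout, R))  # 1 + 1 + 2 = 4
```

An optimizer would search over the assignments of circuits to slots for a layout minimizing this value.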
Example E1.3 (Job Shop Scheduling).
Job Shop Scheduling¹¹ (JSS) is considered to be one of the hardest classes of scheduling problems and is also NP-complete [1026]. The goal is to distribute a set of tasks to machines in a way that all deadlines are met.
In a JSS, a set T of jobs t_i ∈ T is given, where each job t_i consists of a number of sub-tasks t_{i,j}. A partial order is defined on the sub-tasks t_{i,j} of each job t_i which describes the precedence of the sub-tasks, i. e., constraints in which order they must be processed. In the case of a beverage factory, for instance, 10 000 bottles of cola (t_1 . . . t_{10 000}) and 5000 bottles of lemonade (t_{10 001} . . . t_{15 000}) may need to be produced.
For the cola bottles, each job would consist of three sub-jobs, t_{i,1} to t_{i,3} for i ∈ 1..10 000, since the bottles first need to be labeled, then they are filled with cola, and finally closed. It is clear that the bottles cannot be closed before they are filled (t_{i,2} ≺ t_{i,3}), whereas the labeling can take place at any time. For the lemonade, all production steps are the same as the ones for the cola bottles, except that step t_{i,2}, i ∈ 10 001..15 000, is replaced with filling the bottles with lemonade.
11 http://en.wikipedia.org/wiki/Job_Shop_Scheduling [accessed 2010-08-27]
It is furthermore obvious that the different sub-jobs have to be performed by different machines m_i ∈ M. Each of the machines m_i can only process a single job at a time. Finally, each of the jobs has a deadline by which it must be finished.
The optimization algorithm has to find a schedule S which defines for each job when it has to be executed and on what machine, while adhering to the order and deadline constraints. Usually, the optimization process would start with random configurations which violate some of the constraints (typically the deadlines) and step by step reduce the violations until results are found that fulfill all requirements.
The JSS can be considered a combinatorial problem since the optimizer creates a sequence and assignment of jobs and the set of possible (sensible) starting times is also finite.
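A constraint-violation count of the kind such an optimization process would reduce step by step can be sketched as follows; the tiny instance with its durations, precedence constraint, and deadline is made up and mirrors the bottle example for a single bottle:

```python
# Counting constraint violations of a candidate schedule for a tiny, made-up
# job shop instance (one bottle: label, fill, close). Each sub-task is given
# a start time; we count violated precedence and deadline constraints.

def violations(schedule, precedences, durations, deadlines):
    """schedule: task -> start time; precedences: (before, after) pairs."""
    count = 0
    for before, after in precedences:
        # 'before' must be finished when 'after' starts
        if schedule[before] + durations[before] > schedule[after]:
            count += 1
    for task, deadline in deadlines.items():
        # every task with a deadline must be finished in time
        if schedule[task] + durations[task] > deadline:
            count += 1
    return count

durations   = {"label": 1, "fill": 2, "close": 1}
precedences = [("fill", "close")]   # filling must precede closing
deadlines   = {"close": 5}

good = {"label": 0, "fill": 0, "close": 2}
bad  = {"label": 0, "fill": 1, "close": 2}  # closing starts before filling ends
print(violations(good, precedences, durations, deadlines))  # 0
print(violations(bad, precedences, durations, deadlines))   # 1
```

A schedule with a violation count of 0 fulfills all requirements; a real JSS solver would additionally enforce machine exclusivity, which is omitted here for brevity.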
Example E1.4 (Routing).
In a network N composed of numNodes(N) nodes, find the shortest path from one node n_1 to another node n_2. This optimization problem consists of composing a sequence of edges which form a path from n_1 to n_2 of the shortest length. This combinatorial problem is rather simple and was solved by Dijkstra [796] a long time ago. Nevertheless, it is an optimization problem.
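For completeness, here is a minimal sketch of Dijkstra's algorithm; the example network with its node names and edge lengths is made up:

```python
import heapq

# Dijkstra's algorithm for the shortest-path problem of Example E1.4.
# The network below, with its node names and edge lengths, is made up.

def dijkstra(graph, source, target):
    """graph: node -> list of (neighbor, edge length); returns path length."""
    dist = {source: 0}
    queue = [(0, source)]  # min-heap of (distance so far, node)
    while queue:
        d, node = heapq.heappop(queue)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for neighbor, length in graph.get(node, []):
            nd = d + length
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(queue, (nd, neighbor))
    return float("inf")  # target not reachable from source

net = {"n1": [("a", 1), ("b", 4)], "a": [("b", 2), ("n2", 6)], "b": [("n2", 1)]}
print(dijkstra(net, "n1", "n2"))  # 4, via n1 -> a -> b -> n2
```

Like Kruskal's algorithm above, this is a dedicated, exact method: no metaheuristic is needed for this problem class.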
1.2.2 Numerical Optimization Problems
Definition D1.9 (Numerical Optimization Problem). Numerical optimization problems¹² [1281, 2040] are problems that are defined over problem spaces which are subspaces of (uncountably infinite) numerical spaces, such as the real or complex vectors (X ⊆ R^n or X ⊆ C^n).
Numerical problems defined over R^n (or any kind of space containing it) are called continuous optimization problems. Function optimization [194], engineering design optimization tasks [2059], and classification and data mining tasks [633] are common continuous optimization problems. They can often be solved efficiently with Evolution Strategies (Chapter 30), Differential Evolution (Chapter 33), Evolutionary Programming (Chapter 32), Estimation of Distribution Algorithms (Chapter 34), or Particle Swarm Optimization (Chapter 39), for example.
Example E1.5 (Function Roots and Minimization).
The task "Find the roots of the function g(x) defined in Equation 1.3 and sketched in Figure 1.4" can, at least by the author's limited mathematical capabilities, hardly be solved manually.

g(x) = 5 + \ln\left| \sin\left( x^2 + e^x \tan x \right) + \cos x \right| \cdot \arctan|x - 3|    (1.3)

Figure 1.4: The graph of g(x) as given in Equation 1.3 with the four roots x⋆_1 to x⋆_4.

However, we can transform it into the optimization problem "Minimize the objective function f_1(x) = |0 − g(x)| = |g(x)|" or "Minimize the objective function f_2(x) = (0 − g(x))^2 = (g(x))^2". Different from the original formulation, the objective functions f_1 and f_2 provide us with a
12 http://en.wikipedia.org/wiki/Optimization_(mathematics) [accessed 2010-08-04]
measure of how close an element x is to being a valid solution. For the global optima x⋆ of the problem, f_1(x⋆) = f_2(x⋆) = 0 holds. Based on this formulation, we can hand any of the two objective functions to a method capable of solving numerical optimization problems. If this method indeed discovers elements x⋆ with f_1(x⋆) = f_2(x⋆) = 0, we have answered the initial question, since these elements x⋆ are the roots of g(x).
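The same transformation works for any g. The sketch below uses a much simpler stand-in function, g(x) = x² − 2 (not the g of Equation 1.3), and minimizes f_2(x) = g(x)² with a plain random search; both the stand-in g and the search parameters are arbitrary choices for illustration:

```python
import random

# Turning root finding into minimization: we minimize f_2(x) = g(x)^2 with a
# plain random search. g below is a simple stand-in, NOT the g of Equation 1.3.

def g(x):
    return x * x - 2.0  # roots at +sqrt(2) and -sqrt(2)

def f2(x):
    return g(x) ** 2    # becomes 0 exactly at the roots of g

def random_search(f, low, high, steps=100000, seed=1):
    """Keep the best of `steps` uniformly random samples from [low, high)."""
    rng = random.Random(seed)
    best = rng.uniform(low, high)
    for _ in range(steps):
        x = rng.uniform(low, high)
        if f(x) < f(best):
            best = x
    return best

x_star = random_search(f2, -5.0, 5.0)
print(x_star)  # a value close to one of the roots, i.e., close to +/- 1.4142
```

The random search only ever consults f_2, yet its result approximates a root of g; more targeted numerical optimizers would converge much faster.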
Example E1.6 (Truss Optimization).
In a truss design optimization problem [18] as sketched in Figure 1.5, the locations of n possible beams (light gray), a few fixed points (left hand side of the truss sketches), and a point where a weight will be attached (right hand side of the truss sketches, force F) are given. The thickness d_i of each of the i = 1..n beams which leads to the most stable truss whilst not exceeding a maximum volume V of material would be subject to optimization. A structure such as the one sketched on the right hand side of the figure could result from a numerical optimization process: Some beams receive different non-zero thicknesses (illustrated with thicker lines). Some other beams have been erased from the design (receive thickness d_i = 0), sketched as thin, light gray lines.

Figure 1.5: A truss optimization problem like the one given in [18]. Left: the truss structure with fixed points (left), possible beam locations (gray), and the applied force (right, F). Right: a possible optimization result with the chosen beam thicknesses (dark gray) and unused beams (thin light gray).

Notice that, depending on the problem formulation, truss optimization can also become a discrete or even combinatorial problem: Not in all real-world design tasks can we choose the thicknesses of the beams in the truss arbitrarily. It may probably be possible when synthesizing the support structures for airplane wings or motor blocks. However, when building a truss from pre-defined building blocks, such as the beams available for house construction, the set of possible beam thicknesses is finite, for example, d_i ∈ {10mm, 20mm, 30mm, 50mm, 100mm, 300mm} ∀i ∈ 1..n. In such a case, a problem such as the one sketched in Figure 1.5 effectively becomes combinatorial or, at least, a discrete integer programming task.
1.2.3 Discrete and Mixed Problems
Of course, numerical problems may also be defined over discrete spaces, such as N_0^n. Problems of this type are called integer programming problems:

Definition D1.10 (Integer Programming). An integer programming problem¹³ is an optimization problem which is defined over a finite or countably infinite numerical problem space (such as N_0^n or Z^n) [1495, 2965].
Basically, combinatorial optimization tasks can be considered a class of such discrete optimization problems. Because of the wide range of possible solution structures in combinatorial tasks, however, techniques well capable of solving integer programming problems are not necessarily suitable for a given combinatorial task. The converse holds as well: the class of integer programming problems is, of course, a member of the general numerical optimization tasks, but methods for real-valued problems are not necessarily good for integer-valued ones (and hence, also not for combinatorial tasks). Furthermore, there exist optimization problems which are combinations of the above classes. For instance, there may be a problem where a graph must be built which has edges with real-valued properties, i. e., a crossover of a combinatorial and a continuous optimization problem.
13 http://en.wikipedia.org/wiki/Integer_programming [accessed 2010-08-04]
Definition D1.11 (Mixed Integer Programming). A mixed integer programming problem (MIP) is a numerical optimization problem where some variables stem from finite (or countably infinite) (numerical) spaces while others stem from uncountably infinite (numerical) spaces.
1.2.4 Summary
Figure 1.6 is not intended to provide a precise hierarchy of problem types. Instead, it is only a rough sketch of the relations between them. Basically, there are smooth transitions between many problem classes, and the type of a given problem is often subject to interpretation.

Figure 1.6: A rough sketch of the problem type relations: numerical optimization problems comprising continuous optimization problems, mixed integer programming problems, and discrete optimization problems, the latter including integer programming problems and combinatorial optimization problems.
The candidate solutions of combinatorial problems stem from finite sets and are discrete. They can thus always be expressed as discrete numerical vectors and could therefore, in theory, be solved with methods for numerical optimization, such as integer programming techniques.
Of course, every discrete numerical problem can also be expressed as a continuous numerical problem, since any vector from R^n can easily be transformed to a vector in Z^n by simply rounding each of its elements. Hence, the techniques for continuous optimization can theoretically be used for discrete numerical problems (integer programming) as well as for MIPs and for combinatorial problems.
However, algorithms for numerical optimization problems often apply numerical methods, such as adding vectors, multiplying them with some number, or rotating them. Combinatorial problems, on the other hand, are usually solved by applying combinatorial modifications to solution structures, such as changing the order of elements in a sequence. These two approaches are fundamentally different and thus, although algorithms for continuous optimization can theoretically be used for combinatorial optimization, this is rarely done in practice and would generally yield bad results.
Furthermore, the precision with which numbers can be represented on a computer is finite, since the memory available in any possible physical device is limited. From this perspective, any continuous optimization problem, if solved on a physical machine, is a discrete problem as well. In theory, it could thus be solved with approaches for combinatorial optimization. Again, it is quite easy to see that adjusting the values and sequences of bits in a floating point representation of a real number will probably lead to inferior results or, at least, to slower optimization than using advanced mathematical operations.
Finally, there may be optimization tasks with both combinatorial and continuous characteristics. Example E1.6 provides a problem which can exist in a continuous and in a discrete flavor, both of which can be tackled with numerical methods, whereas the latter one can also be solved efficiently with approaches for discrete problems.
1.3 Classes of Optimization Algorithms
In this book, we will only be able to discuss a small fraction of the wide variety of Global Optimization techniques [1068, 2126]. Before digging any deeper into the matter, we will attempt to classify some of these algorithms in order to give a rough overview of what will come in Part III and Part IV. Figure 1.7 sketches a rough taxonomy of Global Optimization methods. Please be aware that the idea of creating a precise taxonomy of optimization methods makes no sense. Not only are there many hybrid approaches which combine several different techniques, there also exist various definitions for what things like metaheuristics, artificial intelligence, Evolutionary Computation, Soft Computing, and so on exactly are and which specific algorithms belong to them and which do not. Furthermore, many of these fuzzy collective terms also widely overlap. The area of Global Optimization is a living and breathing research discipline. Many contributions come from scientists from various research areas, ranging from physics to economics and from biology to the fine arts. Just as important are the numerous ideas developed by practitioners in engineering, industry, and business. As stated before, Figure 1.7 should thus be considered as just a rough sketch, as one possible way to classify optimizers according to their algorithm structure. In the remainder of this section, we will try to outline some of the terms used in Figure 1.7.
1.3.1 Algorithm Classes
Generally, optimization algorithms can be divided into two basic classes: deterministic and probabilistic algorithms.
1.3.1.1 Deterministic Algorithms
Definition D1.12 (Deterministic Algorithm). In each execution step of a deterministic algorithm, there exists at most one way to proceed. If no way to proceed exists, the algorithm has terminated.
For the same input data, deterministic algorithms will always produce the same results.
This means that whenever they have to make a decision on how to proceed, they will come
to the same decision each time the available data is the same.
Optimization [609, 2126, 2181, 2714]
- Probabilistic Approaches
  - Metaheuristics (Part III) [1068, 1887]
    - Hill Climbing (Chapter 26) [2356]
    - Simulated Annealing (Chapter 27) [1541, 2043]
    - Tabu Search (Chapter 40) [1069, 1228]
    - Extremal Optimization (Chapter 41) [345, 725]
    - GRASP (Chapter 42) [902, 906]
    - Downhill Simplex (Chapter 43) [1655, 2017]
    - Random Optimization (Chapter 44) [526, 2926]
    - Harmony Search [1033]
    - Evolutionary Computation [715, 847, 863, 1220, 3025, 3043]
      - Evolutionary Algorithms (Chapter 28) [167, 171, 599]
        - Genetic Algorithms (Chapter 29) [1075, 1912]
        - Evolution Strategies (Chapter 30) [300, 2279]
        - Genetic Programming (Chapter 31) [1602, 2199]: SGP (Section 31.3) [1602, 1615], LGP (Section 31.4) [209, 394], GGGP (Section 31.5) [1853, 2358], graph-based GP (Section 31.6) [1901, 2191]
        - Differential Evolution (Chapter 33) [2220, 2621]
        - Estimation of Distribution Algorithms (EDAs, Chapter 34) [1683, 2153]
        - Evolutionary Programming (Chapter 32) [940, 945]
        - Learning Classifier Systems (LCS, Chapter 35) [430, 1256]
      - Swarm Intelligence [353, 881, 1524]: ACO (Chapter 37) [809, 810], RFD (Chapter 38) [229, 230], PSO (Chapter 39) [589, 853]
      - Memetic Algorithms (Chapter 36) [1141, 1961]
- Deterministic Approaches [1103, 1271, 2040, 2126]
  - Search Algorithms (Chapter 46) [635, 2356]
    - Uninformed Search (Section 46.2) [1557, 2140]: BFS (Section 46.2.1), DFS (Section 46.2.2), IDDFS (Section 46.2.4) [2356]
    - Informed Search (Section 46.3) [1557, 2140]: Greedy Search (Section 46.3.1) [635], A* Search (Section 46.3.2) [758]
  - Branch & Bound (Chapter 47) [673, 1663]
  - Cutting-Plane Method (Chapter 48) [152, 643]

Figure 1.7: Overview of optimization algorithms.
Example E1.7 (Decision based on Objective Value).
Let us assume, for example, that the optimization process has to select one of the two candidate solutions x_1 and x_2 for further investigation. The algorithm could base its decision on the value which the objective function f subject to minimization takes on. If f(x_1) < f(x_2), then x_1 is chosen and x_2 otherwise. For any two points x_i, x_j ∈ X, the decision procedure is the same and deterministically chooses the candidate solution with the better objective value.

Deterministic algorithms are thus used when decisions can efficiently be made on the data available to the optimization process. This is the case if a clear relation between the characteristics of the possible solutions and their utility for a given problem exists. Then, the search space can efficiently be explored with, for example, a divide and conquer scheme¹⁴.
1.3.1.2 Probabilistic Algorithms
There may be situations, however, where deterministic decision making may prevent the optimizer from finding the best results. If the relation between a candidate solution and its fitness is not obvious, dynamically changing, or too complicated, or if the scale of the search space is very large, applying many of the deterministic approaches becomes less efficient and often infeasible. Then, probabilistic optimization algorithms come into play. Here lies the focus of this book.

Definition D1.13 (Probabilistic Algorithm). A randomized¹⁵ (or probabilistic or stochastic) algorithm includes at least one instruction that acts on the basis of random numbers. In other words, a probabilistic algorithm violates the constraint of determinism [1281, 1282, 1919, 1966, 1967].
Generally, exact algorithms can be much more efficient than probabilistic ones in many domains. Probabilistic algorithms have the further drawback of not being deterministic, i. e., they may produce different results in each run even for the same input. Yet, in situations like the one described in Example E1.8, randomized decision making can be very advantageous.
Example E1.8 (Probabilistic Decision Making (Example E1.7 Cont.)).
In Example E1.7, we described a possible decision step in a deterministic optimization algorithm, similar to a statement such as "if f(x_1) < f(x_2) then investigate(x_1); else investigate(x_2);". Although this decision may be the right one in most cases, sometimes it may make sense to investigate a less fit candidate solution. Hence, it could be a good idea to rephrase the decision as: "if f(x_1) < f(x_2), then choose x_1 with 90% probability, but in 10% of the situations, continue with x_2". In the case that f(x_1) < f(x_2) actually holds, we could draw a random number uniformly distributed in [0, 1), and if this number is less than 0.9, we would pick x_1 and otherwise investigate x_2. As a matter of fact, exactly this kind of randomized decision making distinguishes the Simulated Annealing algorithm (Chapter 27), which has been proven to be able to always find the global optimum (possibly in infinite time), from simple Hill Climbing (Chapter 26), which does not have this feature.
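The randomized decision rule of this example can be sketched as follows; the objective function and the candidate values are made-up toy data, and over many repetitions the better candidate is chosen in roughly 90% of the cases:

```python
import random

# Randomized decision making: prefer the candidate with the better (smaller)
# objective value, but investigate the worse one with 10% probability.
# The objective f and the candidate values are made-up toy data.

def choose(x1, x2, f, rng, p_better=0.9):
    better, worse = (x1, x2) if f(x1) < f(x2) else (x2, x1)
    return better if rng.random() < p_better else worse

f = lambda x: x * x                 # toy objective, minimized at x = 0
rng = random.Random(42)
picks = [choose(1.0, 3.0, f, rng) for _ in range(10000)]
share_better = picks.count(1.0) / len(picks)
print(share_better)  # close to 0.9
```

Occasionally accepting the worse candidate is exactly what lets an algorithm escape local optima, at the price of non-deterministic run-to-run behavior.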
Early work in this area, which now has become one of the most important research fields in optimization, was started at least sixty years ago. The foundations of many of the efficient methods currently in use were laid by researchers such as Bledsoe and Browning [327], Friedberg [995], Robbins and Monro [2313], and Bremermann [407].
14 http://en.wikipedia.org/wiki/Divide_and_conquer_algorithm [accessed 2007-07-09]
15 http://en.wikipedia.org/wiki/Randomized_algorithm [accessed 2007-07-03]
1.3.2 Monte Carlo Algorithms
Most probabilistic algorithms used for optimization are Monte Carlo-based approaches (see
Section 12.4). They trade the guaranteed global optimality of the solutions in for a shorter
runtime. This does not mean that the results obtained by using them are incorrect; they
may just not be the global optima. Still, a result a little bit inferior to the best possible
solution x⋆ is still better than waiting 10¹⁰⁰ years until x⋆ is found¹⁶...
An umbrella term closely related to Monte Carlo algorithms is Soft Computing¹⁷ (SC),
which summarizes all methods that find solutions for computationally hard tasks (such as
NP-complete problems, see Section 12.2.2) in relatively short time, again by trading in
optimality for higher computation speed [760, 1242, 1430, 2685].
1.3.3 Heuristics and Metaheuristics
Most efficient optimization algorithms which search for good individuals in some targeted way,
i. e., all methods better than uninformed searches, utilize information from objective (see
Section 2.2) or heuristic functions. The term heuristic stems from the Greek heuriskein,
which translates to "to find" or "to discover".
1.3.3.1 Heuristics
Heuristics used in optimization are functions that rate the utility of an element and provide
support for the decision regarding which element out of a set of possible solutions is to be
examined next. Deterministic algorithms usually employ heuristics to define the processing
order of the candidate solutions. Approaches using such functions are called informed search
methods (which will be discussed in Section 46.3 on page 518). Many probabilistic methods,
on the other hand, only consider those elements of the search space in further computations
that have been selected by the heuristic while discarding the rest.
Definition D1.14 (Heuristic). A heuristic¹⁸ [1887, 2140, 2276] function is a problem-dependent approximate measure for the quality of one candidate solution.
A heuristic may, for instance, estimate the number of search steps required from a given
solution to the optimal result. In this case, the lower the heuristic value, the better the
solution would be. It is quite clear that heuristics are problem class dependent. They can be
considered as the equivalent of fitness or objective functions in (deterministic) state space
search methods [1467] (see Section 46.3 and the more specific Definition D46.5 on page 518).
An objective function f(x) usually has a clear meaning denoting an absolute utility of a
candidate solution x. Different from that, heuristic values often represent more indirect
criteria such as the aforementioned search costs required to reach the global optimum [1467]
from a given individual.
Example E1.9 (Heuristic for the TSP from Example E2.2).
A heuristic function in the Traveling Salesman Problem involving five cities in China as
defined in Example E2.2 on page 45 could, for instance, state "The total distance of this
candidate solution is 1024 km longer than the optimal tour." or "You need to swap no more
than three cities in this tour in order to find the optimal solution." Such a statement has
a different quality from an objective function just stating the total tour length and being
subject to minimization.
16 Kindly refer to Chapter 12 for a more in-depth discussion of complexity.
17 http://en.wikipedia.org/wiki/Soft_computing [accessed 2007-09-17]
18 http://en.wikipedia.org/wiki/Heuristic_%28computer_science%29 [accessed 2007-07-03]
We will discuss heuristics in more depth in Section 46.3. It should be mentioned that optimization
and artificial intelligence¹⁹ (AI) merge seamlessly with each other. This becomes
especially clear when considering that one of the most important research areas of AI is the
design of good heuristics for search, especially for the state space search algorithms which,
here, are listed as deterministic optimization methods.
Anyway, knowing the proximity of the meaning of the terms heuristics and objective
functions, it may as well be possible to consider many of the following, randomized
approaches to be informed searches (which brings us back to the fuzziness of terms). There
is, however, another difference between heuristics and objective functions: Heuristics usually
involve more knowledge about the relation between the structure of the candidate solutions and
their utility, as may have become clear from Example E1.9.
1.3.3.2 Metaheuristics
This knowledge is a luxury often not available. In many cases, the optimization algorithms
have a horizon limited to the search operations and objective functions with unknown
characteristics. They can produce new candidate solutions by applying search operators to
the currently available genotypes. Yet, the exact result of the operators is (especially in
probabilistic approaches) not clear in advance and has to be measured with the objective
functions, as sketched in Figure 1.8. Then, the only choices an optimization algorithm has
are:
1. Which candidate solutions should be investigated further?
2. Which search operations should be applied to them? (If more than one operator is
available, that is.)
Figure 1.8: Black box treatment of objective functions in metaheuristics. (The figure shows a population Pop(t) being evaluated through a black-box objective function f(x) over the problem space X and turned into the next population Pop(t+1): compute fitness or heuristic values, select the best solutions, and create new ones via search operations.)
19 http://en.wikipedia.org/wiki/Artificial_intelligence [accessed 2007-09-17]
Definition D1.15 (Metaheuristic). A metaheuristic²⁰ is a method for solving general
classes of problems. It combines utility measures such as objective functions or heuristics
in an abstract and hopefully efficient way, usually without utilizing deeper insight into their
structure, i. e., by treating them as black-box procedures [342, 1068, 1103].
Metaheuristics, which you can find discussed in Part III, obtain knowledge on the structure
of the problem space and the objective functions by utilizing statistics obtained from
stochastically sampling the search space. Many metaheuristics are inspired by a model of
some natural phenomenon or physical process. Simulated Annealing, for example, decides
which candidate solution is used during the next search step according to the Boltzmann
probability factor of atom configurations of solidifying metal melts. The Raindrop Method
emulates the waves on a sheet of water after a raindrop fell into it. There also are techniques
without a direct real-world role model, like Tabu Search and Random Optimization.
Unified models of metaheuristic optimization procedures have been proposed by Osman
[2103], Rayward-Smith [2275], Vaessens et al. [2761, 2762], and Taillard et al. [2650].
Obviously, metaheuristics are not necessarily probabilistic methods. The A⋆ search, for
instance, a deterministic informed search approach, is considered to be a metaheuristic too
(although it incorporates more knowledge about the structure of the heuristic functions
than, for instance, Evolutionary Algorithms do). Still, in this book, we are most interested
in probabilistic metaheuristics.
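The black-box view can be illustrated with a minimal random sampler in Python. This is a sketch for illustration only, not an algorithm from this book: the optimizer learns about f exclusively through the values that f returns.

```python
import random

def random_search(f, n, steps=1000, seed=None):
    """Minimize a black-box objective f over bit strings of
    length n by pure random sampling: no structure of f is
    used, only the values it returns."""
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in range(n)]
    best_y = f(best)                      # evaluate the black box
    for _ in range(steps):
        x = [rng.randint(0, 1) for _ in range(n)]
        y = f(x)
        if y < best_y:                    # keep the better sample
            best, best_y = x, y
    return best, best_y
```

Even such an uninformed search makes progress on simple objectives. A metaheuristic would additionally decide, based on the sampled values, which points to investigate further and which search operations to apply to them.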
1.3.3.2.1 Summary: Difference between Heuristics and Metaheuristics
The terms heuristic, objective function, fitness function (which will be introduced in detail
in 28.1.2.5 and Section 28.3 on page 274), and metaheuristic may easily be mixed up and
it may not be entirely obvious which is which and what the differences are. Therefore, we
here want to give a short outline of the main differences:
1. Heuristic. Problem-specific measure of the potential of a solution as a step towards the
global optimum. May be absolute/direct (describe the utility of a solution) or indirect
(e.g., approximate the distance of the solution to the optimum in terms of search
steps).
2. Objective Function. Problem-specific absolute/direct quality measure which rates one
aspect of a candidate solution.
3. Fitness Function. Secondary utility measure which may combine information from a
set of primary utility measures (such as objective functions or heuristics) into a single
rating for an individual. This process may also take place relative to a set of given
individuals, i. e., rate the utility of a solution in comparison to another set of solutions
(such as a population in Evolutionary Algorithms).
4. Metaheuristic. General, problem-independent (ternary) approach to select solutions
for further investigation in order to solve an optimization problem. It therefore uses
a (primary or secondary) utility measure such as a heuristic, objective function, or
fitness function.
1.3.4 Evolutionary Computation
An important class of probabilistic Monte Carlo algorithms is Evolutionary Computation²¹
(EC). It encompasses all randomized metaheuristic approaches which iteratively refine a set
(population) of multiple candidate solutions.
20 http://en.wikipedia.org/wiki/Metaheuristic [accessed 2007-07-03]
21 http://en.wikipedia.org/wiki/Evolutionary_computation [accessed 2007-09-17]
Some of the most important EC approaches are Evolutionary Algorithms and Swarm
Intelligence, which will be discussed in depth in this book. Swarm Intelligence is inspired
by the fact that natural systems composed of many independent, simple agents (ants in the
case of Ant Colony Optimization (ACO) and birds or fish in the case of Particle Swarm
Optimization (PSO)) are often able to find pieces of food or shortest-distance routes very
efficiently. Evolutionary Algorithms, on the other hand, copy the process of natural evolution
and treat candidate solutions as individuals which compete and reproduce in a virtual
environment. Generation after generation, these individuals adapt better and better to the
environment (defined by the user-specified objective function(s)) and thus become more
and more suitable solutions to the problem at hand.
1.3.5 Evolutionary Algorithms
Evolutionary Algorithms can again be divided into very diverse approaches, spanning from
Genetic Algorithms (GAs), which try to find bit- or integer-strings of high utility (see, for
instance, Example E4.2 on page 83), and Evolution Strategies (ESs), which do the same with
vectors of real numbers, to Genetic Programming (GP), where optimal tree data structures
or programs are sought. The variety of Evolutionary Algorithms is the focus of this book,
also including Memetic Algorithms (MAs), which are EAs hybridized with other search
algorithms.
1.4 Classification According to Optimization Time
The taxonomy just introduced classifies the optimization methods according to their algorithmic
structure and underlying principles, in other words, from the viewpoint of theory. A
software engineer or a user who wants to solve a problem with such an approach is, however,
more interested in its interfacing features such as speed and precision.
Speed and precision are often conflicting objectives, at least in terms of probabilistic
algorithms. A general rule of thumb is that you can gain improvements in accuracy of
optimization only by investing more time. Scientists in the area of global optimization
try to push this Pareto frontier²² further by inventing new approaches and enhancing or
tweaking existing ones. When it comes to time constraints and hence, the required speed of
the optimization algorithm, we can distinguish two types of optimization use cases.
Definition D1.16 (Online Optimization). Online optimization problems are tasks that
need to be solved quickly, in a time span usually ranging between ten milliseconds and a few
minutes.
In order to find a solution in this short time, optimality is often significantly traded in
for speed gains. Examples for online optimization are robot localization, load balancing,
service composition for business processes, or updating a factory's machine job schedule
after new orders come in.
Example E1.10 (Robotic Soccer).
Figure 1.9 shows some of the robots of the robotic soccer team²³ [179] as they play in the
RoboCup German Open 2009. In the games of the RoboCup competition, two teams, each
consisting of a fixed number of robotic players, oppose each other on a soccer field. The
rules are similar to human soccer and the team which scores the most goals wins.
When a soccer playing robot approaches the goal of the opposing team with the ball, it
has only a very limited time frame to decide what to do [2518]. Should it shoot directly at
22 Pareto frontiers have been discussed in Section 3.3.5 on page 65.
23 http://carpenoctem.das-lab.net
Figure 1.9: The robotic soccer team CarpeNoctem (on the right side) on the fourth day of
the RoboCup German Open 2009.
the goal? Should it try to pass the ball to a team mate? Should it continue to drive towards
the goal in order to get into a better shooting position?
To find a sequence of actions which maximizes the chance of scoring a goal in a
continuously changing world is an online optimization problem. If a robot takes too long to make
a decision, opposing and cooperating robots may already have moved to different locations
and the decision may have become outdated.
Not only must the problem be solved in a few milliseconds, this process often involves
finding an agreement between different, independent robots, each having a possibly different
view on what is currently going on in the world. Additionally, the degrees of freedom in the
real world, the points where the robot can move or shoot the ball to, are basically infinite.
It is clear that in such situations, no complex optimization algorithm can be applied.
Instead, only a smaller set of possible alternatives can be tested or, alternatively, predefined
programs may be processed, thus ignoring the optimization character of the decision making.
Online optimization tasks are often carried out repetitively: new orders will, for instance,
continuously arrive in a production facility and need to be scheduled to machines in a way
that minimizes the waiting time of all jobs.
Definition D1.17 (Offline Optimization). In offline optimization problems, time is not
so important and a user is willing to wait maybe up to days or weeks if she can get an
optimal or close-to-optimal result.
Such problems include, for example, design optimization, some data mining tasks, or creating
long-term schedules for transportation crews. These optimization processes will usually be
carried out only once in a long time.
Before doing anything else, one must be sure to which of these two classes the
problem to be solved belongs. There are two measures which characterize whether an
optimization algorithm is suited to the timing constraints of a problem:
1. The time consumption of the algorithm itself.
2. The number of evaluations until the optimum (or a sufficiently good candidate
solution) is discovered, i. e., the first hitting time FHT (or its expected value).
An algorithm with high internal complexity and a low FHT may be suitable for a problem
where evaluating the objective functions takes long. On the other hand, it may be less
feasible than a simple approach if the quality of a candidate solution can be determined
quickly.
1.5 Number of Optimization Criteria
Optimization algorithms can be divided into
1. those which try to find the best values of a single objective function f (see Section 3.1
on page 51),
2. those which optimize a set f of target functions (see Section 3.3 on page 55), and
3. those which optimize one or multiple objective functions while adhering to additional
constraints (see Section 3.4 on page 70).
This distinction between single-objective, multi-objective, and constrained optimization will
be discussed in depth in Section 3.3. There, we will also introduce unifying
approaches which allow us to transform multi-objective problems into single-objective ones
and to gain a joint perspective on constrained and unconstrained optimization.
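One of the simplest such unifying approaches, the weighted-sum method, can be sketched in a few lines of Python. The functions and weights below are arbitrary illustrations; when and why this transformation is appropriate is a topic of Section 3.3.

```python
def weighted_sum(fs, weights):
    """Combine several objective functions fs into a single
    objective by a fixed linear weighting, turning a
    multi-objective problem into a single-objective one."""
    def f(x):
        return sum(w * fi(x) for w, fi in zip(weights, fs))
    return f

# two toy objectives, both subject to minimization
f1 = lambda x: x * x          # prefers x near 0
f2 = lambda x: abs(x - 2)     # prefers x near 2
f = weighted_sum([f1, f2], [0.5, 0.5])
```

The combined f trades the two criteria off against each other; different weight vectors generally lead to different optima of the combined function.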
1.6 Introductory Example
In Algorithm 1.1, we want to give a short introductory example of how metaheuristic
optimization algorithms may look and of which modules they can be composed. Algorithm 1.1
is based on the bin packing Example E1.1 given on page 25. It should be noted that
we do not claim that Algorithm 1.1 is any good at solving the problem; as a matter of fact,
the opposite is the case. However, it possesses typical components which can be found in most
metaheuristics and can relatively easily be understood.
As stated in Example E1.1, the goal of bin packing is to decide whether an assignment x of n
objects (having the sizes a1, a2, .., an) to (at most) k bins, each able to hold a volume of b,
exists where the capacity of no bin is exceeded. This decision problem can be translated
into an optimization problem where the goal is to discover the x⋆ where the bin capacities
are exceeded the least, i. e., to minimize the objective function f_bp given in Equation 1.2
on page 26. Objective functions are discussed in Section 2.2; in Algorithm 1.1, f_bp is
evaluated in Line 23.
Obviously, if f_bp(x⋆) = 0, the answer to the decision problem is "Yes, an assignment
which does not violate the constraints exists." and otherwise, the answer is "No, it is not
possible to pack the n specified objects into the k bins of size b."
For f_bp in Example E1.1, we represented a possible assignment of the objects to bins as
a tuple x consisting of k sets. Each set can hold numbers from 1..n. If, for one specific x,
5 ∈ x[0], for example, then the fifth object should be placed into the first bin, and 3 ∈ x[3]
means that the third object is to be put into the fourth bin. The set X of all such possible
candidate solutions x is called the problem space and is discussed in Section 2.1.
Instead of exploring this space directly, Example E1.1 uses a different search space G
(see Section 4.1). Each of the encoded candidate solutions, the so-called genotypes, is a
permutation of the first n natural numbers. The sequence of these numbers stands for the
assignment order of objects to bins.
This representation is translated between Lines 13 and 22 to the tuple-of-sets representation
which can be evaluated by the objective function f_bp. Basically, this genotype-phenotype
mapping gpm : G → X (see Section 4.3) starts with a tuple of k empty sets and sequentially
processes the genotype g from the beginning to the end. In each processing step, it takes
the next number from the genotype and checks whether the object identified by this number
still fits into the current bin bin. If so, the number is placed into the set in the tuple x at
index bin. Otherwise, if there are empty bins left, bin is increased. This way, the first k − 1
bins are always filled to at most their capacity and only the last bin may overflow. The
translation between the genotypes g ∈ G and the candidate solutions x ∈ X is deterministic.
It is known that bin packing is NP-hard, which means that there currently exists no
algorithm which is always better than enumerating all possible solutions and testing each
initialization: nullary search operation searchOp : ∅ → G (see Section 4.2)
search step: unary search operation searchOp : G → G (see Section 4.2)
genotype-phenotype mapping gpm : G → X (see Section 4.3)
objective function f : X → R (see Section 2.2)
What is good? (see Chapter 3 and Section 5.2)
termination criterion (see Section 6.3.3)
Algorithm 1.1: x⋆ ←− optimizeBinPacking(k, b, a) (see Example E1.1)
Input: k ∈ N1: the number of bins
Input: b ∈ N1: the size of each single bin
Input: a1, a2, .., an ∈ N1: the sizes of the n objects
Data: g, g⋆ ∈ G: the current and best genotype so far (see Definition D4.2)
Data: x, x⋆ ∈ X: the current and best phenotype so far (see Definition D2.2)
Data: y, y⋆ ∈ R+: the current and best objective value (see Definition D2.3)
Output: x⋆ ∈ X: the best phenotype found (see Definition D3.5)
1 begin
2   g⋆ ←− (1, 2, .., n)
3   y⋆ ←− +∞
4   repeat
      // perform a random search step
5     g ←− g⋆
6     i ←− randomUni[1, n)
7     repeat
8       j ←− randomUni[1, n)
9     until i ≠ j
10    t ←− g[i]
11    g[i] ←− g[j]
12    g[j] ←− t
      // compute the bins
13    x ←− (∅, ∅, . . . , ∅)  [k empty sets]
14    bin ←− 0
15    sum ←− 0
16    for i ←− 0 up to n − 1 do
17      j ←− g[i]
18      if ((sum + a_j) > b) ∧ (bin < (k − 1)) then
19        bin ←− bin + 1
20        sum ←− 0
21      x[bin] ←− x[bin] ∪ {j}
22      sum ←− sum + a_j
      // compute the objective value
23    y ←− f_bp(x)
      // the search logic: always choose the better point
24    if y < y⋆ then
25      g⋆ ←− g
26      x⋆ ←− x
27      y⋆ ←− y
28  until y⋆ = 0
29  return x⋆
of them. Our Algorithm 1.1 tries to explore the search space by starting with putting the
objects into the bins in their original order (Line 2). In each optimization step, it randomly
exchanges two objects (Lines 5 to 12). This operation basically is a unary search operation
which maps the elements of the search space G to new elements in G and creates new
genotypes g. The initialization step can be viewed as a nullary search operation. The basic
definitions for search operations are given in Section 4.2 on page 83.
In Line 24, the optimizer compares the objective value of the best assignment found
so far with the objective value of the candidate solution x decoded from the newly created
genotype g with the genotype-phenotype mapping gpm. If the new phenotype x is better
than the old one, the search will from now on use its corresponding genotype (g⋆ ←− g)
for creating new candidate solutions with the search operation and remembers x as well as
its objective value y. A discussion of what the term better actually means in optimization
is given in Chapter 3.
The search process in Algorithm 1.1 continues until a perfect packing, i. e., one with no
overflowing bin, is discovered. This termination criterion (see Section 6.3.3), checked in
Line 28, is not chosen too wisely, since if no solution can be found, the algorithm will run
forever. Effectively, this Hill Climbing approach (see Chapter 26 on page 229) is thus a Las
Vegas algorithm (see Definition D12.2 on page 148).
In the following chapters, we will put a simple and yet comprehensive theory around
the structure of metaheuristics. We will give clear definitions for the typical modules of an
optimization algorithm and outline how they interact. Algorithm 1.1 here served as an example
to show the general direction of where we will be going.
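For readers who prefer runnable code over pseudocode, the following Python sketch transcribes Algorithm 1.1 rather directly. Two caveats: Equation 1.2 is not reproduced in this chapter, so f_bp is approximated here as the total capacity overflow over all bins (which is zero exactly when the packing is valid), and a step limit is added so that the sketch cannot loop forever on infeasible instances, the weakness of the original termination criterion noted above.

```python
import random

def optimize_bin_packing(k, b, a, max_steps=100_000, seed=None):
    """Hill climber following Algorithm 1.1: k bins of capacity b,
    object sizes a (objects are numbered 1..n)."""
    rng = random.Random(seed)
    n = len(a)

    def gpm(g):
        # genotype-phenotype mapping: fill the bins in the order
        # given by the permutation g; only the last bin may overflow
        x = [set() for _ in range(k)]
        bin_i, load = 0, 0
        for j in g:
            if load + a[j - 1] > b and bin_i < k - 1:
                bin_i, load = bin_i + 1, 0
            x[bin_i].add(j)
            load += a[j - 1]
        return x

    def f_bp(x):
        # stand-in objective: total overflow over all bin capacities
        return sum(max(0, sum(a[j - 1] for j in s) - b) for s in x)

    g_best, x_best, y_best = list(range(1, n + 1)), None, float("inf")
    steps = 0
    while y_best != 0 and steps < max_steps:
        steps += 1
        g = list(g_best)
        i, j = rng.sample(range(n), 2)    # random search step: swap
        g[i], g[j] = g[j], g[i]
        x = gpm(g)
        y = f_bp(x)
        if y < y_best:                    # always keep the better point
            g_best, x_best, y_best = g, x, y
    return x_best, y_best
```

Running it on a trivially solvable instance such as optimize_bin_packing(2, 6, [3, 3, 3, 3]) quickly returns a packing with objective value 0; on harder or infeasible instances it illustrates exactly the drawbacks discussed in Task 5 below.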
Tasks T1
1. Name five optimization problems which can be encountered in daily life. Describe
the criteria which are subject to optimization, whether they should be maximized or
minimized, and what the investigated space of possible solutions looks like.
[10 points]
2. Name five optimization problems which can be encountered in engineering, business,
or science. Describe them in the same way as requested in Task 1.
[10 points]
3. Classify the problems from Tasks 1 and 2 into numerical, combinatorial, discrete, and
mixed integer programming, the classes introduced in Section 1.2 and sketched in
Figure 1.6 on page 30. Check if different classifications are possible for the problems and,
if so, outline how the problems should be formulated in each case. The truss optimization
problem (see Example E1.6 on page 29) is, for example, usually considered
to be a numerical problem since the thickness of each beam is a real value. If there
is, however, a finite set of beam thicknesses we can choose from, it becomes a discrete
problem and could even be considered a combinatorial one.
[10 points]
4. Implement Algorithm 1.1 on page 40 in a programming language of your choice. Apply
your implementation to the bin packing instance given in Figure 1.2 on page 25 and
to the problem k = 5, b = 6, a = (6, 5, 4, 3, 3, 3, 2, 2, 1) and (hence) n = 9. Does the
algorithm work? Describe your observations, use a step-by-step debugger if necessary.
[20 points]
5. Find at least three features which make Algorithm 1.1 on page 40 rather unsuitable for
practical applications. Describe these drawbacks, their impact, and point out possible
remedies. You may use an implementation of the algorithm as requested in Task 4 for
gaining experience with what is problematic for the algorithm.
[6 points]
6. What are the advantages and disadvantages of randomized/probabilistic optimization
algorithms?
[4 points]
7. In which way are the requirements for online and offline optimization different? Does
a need for online optimization influence the choice of the optimization algorithm and
the general algorithm structure which can be used, in your opinion?
[5 points]
8. What are the differences between solving general optimization problems and specific
ones? Discuss especially the difference between black box optimization and solving
functions which are completely known and maybe can even be differentiated.
[5 points]
Chapter 2
Problem Space and Objective Functions
2.1 Problem Space: What does it look like?
The first step which has to be taken when tackling any optimization problem is to define the
structure of the possible solutions. For minimizing a simple mathematical function, the
solution will be a real number, and if we wish to determine the optimal shape of an airplane's
wing, we are looking for a blueprint describing it.
Denition D2.1 (Problem Space). The problem space (phenome) X of an optimization
problem is the set containing all elements x which could be its solution.
Usually, more than one problem space can be defined for a given optimization problem. A
few lines before, we said that as the problem space for minimizing a mathematical function, the
real numbers R would be fine. On the other hand, we could as well restrict ourselves to
the natural numbers N0 or widen the search to the whole complex plane C. This choice has
major impact: On one hand, it determines which solutions we can possibly find. On the
other hand, it also has subtle influence on the way in which an optimization algorithm can
perform its search. Between each two different points in R, for instance, there are infinitely
many other numbers, while in N0, there are not. Then again, unlike N0 and R, there is no
total order defined on C.
Definition D2.2 (Candidate Solution). A candidate solution (phenotype) x is an element of the problem space X of a certain optimization problem.
In this book, we will especially focus on evolutionary optimization algorithms. One of the
oldest such methods, the Genetic Algorithm, is inspired by natural evolution and re-uses
many terms from biology. In dependence on Genetic Algorithms, we synonymously refer
to the problem space as phenome and to candidate solutions as phenotypes. The problem
space X is often restricted by practical constraints. It is, for instance, impossible to take
all real numbers into consideration in the minimization process of a real function. On our
off-the-shelf CPUs or with the Java programming language, we can only use 64 bit floating
point numbers, with which numbers can only be expressed up to a certain precision. If the
computer can only handle numbers up to 15 decimals, an element x = 1 + 10^−16 cannot be
discovered.
2.2 Objective Functions: Is it good?
The next basic step when defining an optimization problem is to define a measure for what
is good. Such measures are called objective functions. As parameter, they take a candidate
solution x from the problem space X and return a value which rates its quality.
Definition D2.3 (Objective Function). An objective function f : X → R is a mathematical
function which is subject to optimization.
In Listing 55.8 on page 712, you can find a Java interface resembling Definition D2.3.
Usually, objective functions are subject to minimization. In other words, the smaller f(x),
the better is the candidate solution x ∈ X. Notice that although we state that an objective
function is a mathematical function, this is not always fully true. In Section 51.10 on
page 646 we introduce the concept of functions as functional binary relations (see ?? on
page ??) which assign to each element x from the domain X one and only one element from
the codomain (in the case of objective functions, the codomain is R). However, the evaluation
of the objective functions may incorporate randomized simulations (Example E2.3) or even
human interaction (Example E2.6). In such cases, the functionality constraint may be
violated. For the purpose of simplicity, let us, however, stick with the concept given in
Definition D2.3.
In the following, we will use several examples to build an understanding of what the
three concepts problem space X, candidate solution x, and objective function f mean. These
examples should give an impression of the variety of possible ways in which they can be
specified and structured.
Example E2.1 (A Stone's Throw).
A very simple optimization problem is the question: Which is the best velocity with which I
should throw¹ a stone (at an α = 15° angle) so that it lands exactly 100 m away (assuming
uniform gravity and the absence of drag and wind)? Every one of us has solved countless such
tasks in school and it probably never came to our mind to consider them as optimization
problems.

Figure 2.1: A sketch of the stone's throw in Example E2.1. (The figure shows the launch
angle α, the target distance 100 m, the throw distance d(x), and the objective f(x).)

The problem space X of this problem equals the real numbers R and we can define
an objective function f : R → R which denotes the absolute difference between the actual
throw distance d(x) and the target distance 100 m when a stone is thrown with x m/s, as
sketched in Figure 2.1:

f(x) = |100 m − d(x)|   (2.1)
d(x) = (x²/g) · sin(2α)   (2.2)
     ≈ 0.051 s²/m · x²   [with g ≈ 9.81 m/s², α = 15°]   (2.3)

1 http://en.wikipedia.org/wiki/Trajectory [accessed 2009-06-13]
This problem can easily be solved and we can obtain the (globally) optimal solution x⋆,
since we know that the best value which f can take on is zero:

f(x⋆) = 0 ⇒ 100 m ≈ 0.051 s²/m · (x⋆)²   (2.4)
(x⋆)² ≈ 100 m / (0.051 s²/m)   (2.5)
x⋆ ≈ √(1960.78 m²/s²) ≈ 44.28 m/s   (2.6)
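This closed-form result can be cross-checked numerically. The small Python sketch below uses the exact factor sin(2α)/g instead of the rounded constant 0.051 s²/m, which is why it yields approximately 44.29 m/s rather than the rounded 44.28 m/s above:

```python
import math

g = 9.81                      # gravity in m/s^2
alpha = math.radians(15)      # launch angle

def d(x):
    # throw distance for launch speed x (no drag, uniform gravity)
    return x * x / g * math.sin(2 * alpha)

def f(x):
    # objective: absolute deviation from the 100 m target
    return abs(100 - d(x))

# closed-form optimum: d(x*) = 100  =>  x* = sqrt(100 g / sin(2 alpha))
x_star = math.sqrt(100 * g / math.sin(2 * alpha))
```

Evaluating f(x_star) confirms that the deviation at the computed optimum is zero up to floating point precision.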
Example E2.1 was very easy. We had a very clear objective function f for which even the
precise mathematical definition was available and, if not known beforehand, could have
easily been obtained from the literature. Furthermore, the space of possible solutions, the
problem space, was very simple since it equaled the (positive) real numbers.
Example E2.2 (Traveling Salesman Problem).
The Traveling Salesman Problem [118, 1124, 1694] was first mentioned in 1831 by Voigt
[2816] and a similar (historic) problem was stated by the Irish mathematician Hamilton
[1024]. Its basic idea is that a salesman wants to visit n cities in the shortest possible
time under the assumption that no city is visited twice and that he will again arrive at the
origin by the end of the tour. The pairwise distances of the cities are known. This problem
can be formalized with the aid of graph theory (see Chapter 52) as follows:
Definition D2.4 (Traveling Salesman Problem). The goal of the Traveling Salesman
Problem (TSP) is to find a cyclic path of minimum total weight² which visits all vertices of
a weighted graph.
Figure 2.2: An example for Traveling Salesman Problems, based on data from GoogleMaps.
The figure shows the five cities Beijing, Shanghai, Hefei, Guangzhou, and Chengdu on a map
together with their pairwise distances w(a, b):

            Beijing   Chengdu   Guangzhou   Hefei
Chengdu     1854 km
Guangzhou   2174 km   1954 km
Hefei       1044 km   1615 km   1257 km
Shanghai    1244 km   2095 km   1529 km     472 km
Assume that we have the task to solve the Traveling Salesman Problem given in Figure 2.2,
i. e., to find the shortest cyclic tour through Beijing, Chengdu, Guangzhou, Hefei, and
Shanghai. The (symmetric) distance matrix³ for computing a tour's length is given in the
figure, too.
Here, the set of possible solutions X is not as trivial as in Example E2.1. Instead of
being a mapping from the real numbers to the real numbers, the objective function f takes
2 See Equation 52.3 on page 651.
3 Based on the routing utility GoogleMaps (http://maps.google.com/ [accessed 2009-06-16]).
one possible permutation of the ve cities as parameter. Since the tours which we want
to nd are cyclic, we only need to take the permutations of four cities into consideration.
If we, for instance, assume that the tour starts in Hefei, it will also end in Hefei and each
possible solution is completely characterized by the sequence in which Beijing, Chengdu,
Guangzhou, and Shanghai are visited. Hence, X is the set of all such permutations.
X = Π(Beijing, Chengdu, Guangzhou, Shanghai) (2.7)
We can furthermore halve the search space by recognizing that it plays no role for the
total distance in which direction we take our tour x ∈ X. In other words, a tour
(Hefei → Shanghai → Beijing → Chengdu → Guangzhou → Hefei) has the same length and
is equivalent to (Hefei → Guangzhou → Chengdu → Beijing → Shanghai → Hefei). Hence,
for a TSP with n cities, there are (1/2)(n − 1)! possible solutions. In our case, this amounts
to 12 unique permutations.
f(x) = w(Hefei, x[0]) + Σ_{i=0}^{2} w(x[i], x[i + 1]) + w(x[3], Hefei) (2.8)
The objective function f : X → R, the criterion subject to optimization, is the total
distance of the tours. It equals the sum of distances between the consecutive cities x[i]
in the lists x representing the permutations and to the start and end point Hefei. We again
want to minimize it. Computing it is again rather simple but, different from Example E2.1,
we cannot simply solve it and compute the optimal result x*. As a matter of fact, the
Traveling Salesman Problem is one of the most complicated optimization problems.
Here, we will determine x* with a so-called brute force⁴ approach: We enumerate all
possible solutions and their corresponding objective values and then pick the best one.
        the tour x_i                                               f(x_i)
x_1     Hefei → Beijing → Chengdu → Guangzhou → Shanghai → Hefei   6853 km
x_2     Hefei → Beijing → Chengdu → Shanghai → Guangzhou → Hefei   7779 km
x_3     Hefei → Beijing → Guangzhou → Chengdu → Shanghai → Hefei   7739 km
x_4     Hefei → Beijing → Guangzhou → Shanghai → Chengdu → Hefei   8457 km
x_5     Hefei → Beijing → Shanghai → Chengdu → Guangzhou → Hefei   7594 km
x_6     Hefei → Beijing → Shanghai → Guangzhou → Chengdu → Hefei   7386 km
x_7     Hefei → Chengdu → Beijing → Guangzhou → Shanghai → Hefei   7644 km
x_8     Hefei → Chengdu → Beijing → Shanghai → Guangzhou → Hefei   7499 km
x_9     Hefei → Chengdu → Guangzhou → Beijing → Shanghai → Hefei   7459 km
x_10    Hefei → Chengdu → Shanghai → Beijing → Guangzhou → Hefei   8385 km
x_11    Hefei → Guangzhou → Beijing → Chengdu → Shanghai → Hefei   7852 km
x_12    Hefei → Guangzhou → Chengdu → Beijing → Shanghai → Hefei   6781 km

Table 2.1: The unique tours for Example E2.2.
From Table 2.1 we can easily spot the best solution x* = x_12, the route over
Guangzhou, Chengdu, Beijing, and Shanghai, both starting and ending in Hefei. With a
total distance of f(x_12) = 6781 km, it is more than 1500 km shorter than the worst solution
x_4.
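The brute-force approach just described can be sketched in a few lines of Python (an illustrative sketch; all function names are ours, and the distances are those from Figure 2.2):

```python
from itertools import permutations

# Symmetric distances w(a, b) in km between the five cities (Figure 2.2).
D = {
    ("Beijing", "Chengdu"): 1854, ("Beijing", "Guangzhou"): 2174,
    ("Beijing", "Hefei"): 1044, ("Beijing", "Shanghai"): 1244,
    ("Chengdu", "Guangzhou"): 1954, ("Chengdu", "Hefei"): 1615,
    ("Chengdu", "Shanghai"): 2095, ("Guangzhou", "Hefei"): 1257,
    ("Guangzhou", "Shanghai"): 1529, ("Hefei", "Shanghai"): 472,
}

def w(a, b):
    """Edge weight w(a, b); the matrix is symmetric, so try both orders."""
    return D[(a, b)] if (a, b) in D else D[(b, a)]

def tour_length(x):
    """Objective function f (Equation 2.8): length of the cyclic tour that
    starts and ends in Hefei and visits the cities of x in between."""
    stops = ("Hefei",) + tuple(x) + ("Hefei",)
    return sum(w(stops[i], stops[i + 1]) for i in range(len(stops) - 1))

def brute_force_tsp():
    """Enumerate all unique tours (one of each reversed pair), keep the best."""
    best, best_len = None, float("inf")
    for x in permutations(("Beijing", "Chengdu", "Guangzhou", "Shanghai")):
        if x[0] > x[-1]:
            continue  # skip: the reversed tour has the same length
        if tour_length(x) < best_len:
            best, best_len = x, tour_length(x)
    return best, best_len

best, best_len = brute_force_tsp()
print(best, best_len)  # an optimal tour x* with f(x*) = 6781 km
```

The 12 tours this loop enumerates are exactly those of Table 2.1 (up to travel direction); the skip condition implements the halving of the search space discussed above.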
Enumerating all possible solutions is, of course, the most time-consuming optimization ap-
proach possible. For TSPs with more cities, it quickly becomes infeasible since the number
of solutions grows more than exponentially (∀n > 3 : 2^n < n!). Furthermore, in the simple
Example E2.1, enumerating all candidate solutions would have even been impossible since X = R.
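To see how quickly exhaustive enumeration becomes hopeless, we can tabulate the number (1/2)(n − 1)! of unique tours for growing n (a small illustrative computation):

```python
import math

# Number of unique cyclic tours of an n-city symmetric TSP: (n - 1)!/2.
for n in (5, 10, 15, 20):
    print(n, math.factorial(n - 1) // 2)
```

For n = 5 this yields the 12 tours of Table 2.1; for n = 20 it already exceeds 10^16 tours.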
From this example we can learn three things:

1. The problem space containing the parameters of the objective functions is not re-
stricted to the real numbers but can, indeed, have a very different structure.

2. There are objective functions which cannot be resolved mathematically. As a matter
of fact, most interesting optimization problems have this feature.

3. Enumerating all solutions is not a good optimization approach and only applicable to
small or trivial problem instances.

⁴ http://en.wikipedia.org/wiki/Brute-force_search [accessed 2009-06-16]
Although the objective function of Example E2.2 could not be solved for x, its results can
still be easily determined for any given candidate solution. Yet, there are many optimization
problems where this is not the case.
Example E2.3 (Antenna Design).
Designing antennas is an optimization problem which has been approached by many
researchers such as Chattoraj and Roy [532], Choo et al. [575], John and Ammann
[1450, 1451], Koza et al. [1615], Lohn et al. [1769, 1770], and Villegas et al. [2814], amongst
others. Rahmat-Samii and Michielssen dedicate no less than four chapters to it in their
book on the application of (evolutionary) optimization methods to electromagnetism-related
tasks [2250]. Here, the problem spaces X are the sets of feasible antenna designs with many
degrees of freedom in up to three dimensions, also often involving wire thickness and geom-
etry.
The objective functions usually involve maximizing the antenna gain⁵ and bandwidth
together with constraints concerning, for instance, weight and shape of the proposed con-
structions. However, computing the gain (depending on the signal frequency and radiation
angle) of an arbitrarily structured antenna is not trivial. In most cases, a simulation envi-
ronment and considerable computational power is required. Besides the fact that there are
many possible antenna designs, different from Example E2.2, the evaluation of even small
sets of candidate solutions may take a substantial amount of time.
Example E2.4 (Analog Electronic Circuit Design).
A similar problem is automated analog circuit design, where a circuit diagram involving
a combination of different electronic components (resistors, capacitors, inductors, diodes,
and/or transistors) is sought which fulfills certain requirements. Here, the tasks span from
the synthesis of amplifiers [1607, 1766] and filters [1766], such as Butterworth [1608, 1763, 1764]
and Chebyshev filters [1608], to analog robot controllers [1613].
Computing the features of analog circuits can be done by hand with some effort, but
requires deeper knowledge of electrical engineering. If an analysis of the frequency and timing
behavior is required, the difficulty of the equations quickly goes out of hand. Therefore, in
order to evaluate the fitness of circuits, simulation environments [2249] like SPICE [2815]
are normally used. Typically, a set of server computers is employed to execute these external
programs so the utility of multiple candidate solutions can be determined in parallel.
Example E2.5 (Aircraft Design).
Aircraft design is another domain where many optimization problems can be spotted.
Obayashi [2059] and Oyama [2111], for instance, tried to find wing designs with minimal
drag and weight but maximum tank volume, whereas Chung et al. [579] searched a geometry
with minimum drag and shock pressure/sonic boom for transonic aircraft and Lee and
Hajela [1704, 1705] optimized the design of rotor blades for helicopters. Like in the electrical
engineering example, the computational costs of computing the objective values rating the
aforementioned features of the candidate solutions are high and involve simulating the air
flows around the wings and blades.
⁵ http://en.wikipedia.org/wiki/Antenna_gain [accessed 2009-06-19]
Example E2.6 (Interactive Optimization).
Especially in the area of Evolutionary Computation, some interactive optimization methods
have been developed. The Interactive Genetic Algorithms⁶ (IGAs) [471, 2497, 2529, 2651,
2652] outsource the objective function evaluation to human beings who, for instance, judge
whether an evolved graphic⁷ or movie [2497, 2746] is attractive or music⁸ [1453] sounds good.
By incorporating this information in the search process, esthetic works of art are created. In
this domain, it is not really possible to devise any automated method for assigning objective
values to the points in the problem space. When it comes to judging art, human beings will
not be replaced by machines for a long time.
In Kosorukoff's Human Based Genetic Algorithm⁹ (HBGAs, [1583]) and Unemi's graphic
and movie tool [2746], this approach is driven even further by also letting human agents
select and combine the most interesting candidate solutions step by step.
All interactive optimization algorithms have to deal with the peculiarities of their hu-
man modules, such as indeterminism, inaccuracy, fickleness, changing attention, and loss
of focus which, obviously, also rub off on the objective functions. Nevertheless, similar fea-
tures may also occur if determining the utility of a candidate solution involves randomized
simulations or measuring data from real-world processes.
Let us now summarize the information gained from these examples. The first step when
approaching an optimization problem is to choose the problem space X, as we will later
discuss in detail in Chapter 7 on page 109. Objective functions are not necessarily mere
mathematical expressions, but can be complex algorithms that, for example, involve multiple
simulations. Global Optimization comprises all techniques that can be used to find the
best elements x* in X with respect to such criteria. The next question to answer is: What
constitutes these best elements?
⁶ http://en.wikipedia.org/wiki/Interactive_genetic_algorithm [accessed 2007-07-03]
⁷ http://en.wikipedia.org/wiki/Evolutionary_art [accessed 2009-06-18]
⁸ http://en.wikipedia.org/wiki/Evolutionary_music [accessed 2009-06-18]
⁹ http://en.wikipedia.org/wiki/HBGA [accessed 2007-07-03]
Tasks T2
9. In Example E1.2 on page 26, we stated the layout of circuits as a combinatorial opti-
mization problem. Define a possible problem space X for this task that allows us to
assign each circuit c ∈ C to one location of an m × n-sized card. Define an objective
function f : X → R⁺ which denotes the total length of the wiring required for a
given element x ∈ X. For this purpose, you can ignore wire crossing and simply assume
that a wire going from location (i, j) to location (k, l) has length |i − k| + |j − l|.
[10 points]
10. Name some problems which can occur in circuit layout optimization as outlined in
Task 9 if the mentioned simplified assumption is indeed applied.
[5 points]
11. Define a problem space for general job shop scheduling problems as outlined in
Example E1.3 on page 26. Do not make the problem specific to the cola/lemonade example.
[10 points]
12. Assume a network N = {n_1, n_2, . . . , n_m} and that a m × m distance matrix Δ is given
that contains the distance between node n_i and n_j at position Δ_ij. A value of +∞ at
Δ_ij means that there is no direct connection n_i → n_j between node n_i and n_j. These
nodes may, however, be connected by a path which leads over other, intermediate
nodes, such as n_i → n_A → n_B → n_j. In this case, n_j can be reached from n_i over
a path leading first to n_A and then to n_B before finally arriving at n_j. Hence, there
would be a connection of finite length.

We consider the specific optimization task "Find the shortest connection between node
n_1 and n_2". Develop a problem space X which is able to represent all paths between
the first and second node in an arbitrary network with m ≥ 2. Note that there is not
necessarily a direct connection between n_1 and n_2.

Define a suitable objective function for this problem.
[15 points]
Chapter 3
Optima: What does good mean?
We already said that Global Optimization is about finding the best possible solutions for
given problems. Let us therefore now discuss what it is that makes a solution optimal¹.
3.1 Single Objective Functions
In the case of optimizing a single criterion f, an optimum is either its maximum or minimum,
depending on what we are looking for. In the case of antenna design (Example E2.3), the
antenna gain was maximized and in Example E2.2, the goal was to find a cyclic route of
minimal distance through five Chinese cities. If we own a manufacturing plant and have to
assign incoming orders to machines, we will do this in a way that minimizes the production
time. We will, on the other hand, arrange the purchase of raw material, the employment of
staff, and the placement of commercials in a way that maximizes the profit.

In Global Optimization, it is a convention that optimization problems are most often
defined as minimization tasks. If a criterion f is subject to maximization, we usually simply
minimize its negation (−f) instead. In this section, we will give some basic definitions on
what the optima of a single (objective) function are.
Example E3.1 (Two-Dimensional Objective Function).
Figure 3.1 illustrates such a function f defined over a two-dimensional space X = (X_1, X_2) ⊆ R².
As outlined in this graphic, we distinguish between local and global optima. As illus-
trated, a global optimum is an optimum of the whole domain X while a local optimum is
an optimum of only a subset of X.
¹ http://en.wikipedia.org/wiki/Maxima_and_minima [accessed 2007-07-03]
Figure 3.1: Global and local optima of a two-dimensional function f over X = (X_1, X_2),
showing two local maxima, a local minimum, the global minimum, and the global maximum.
3.1.1 Extrema: Minima and Maxima of Differentiable Functions
A local minimum x̌_l is a point which has the smallest objective value amongst its neighbors
in the problem space. Similarly, a local maximum x̂_l has the highest objective value when
compared to its neighbors. Such points are called (local) extrema². In optimization, a local
optimum is always an extremum, i. e., either a local minimum or maximum.
Back in high school, we learned: If f : X → R (X ⊆ R) has a (local) extremum at x*_l
and is differentiable at that point, then the first derivative f′ is zero at x*_l.

x*_l is local extremum ⇒ f′(x*_l) = 0 (3.1)

Furthermore, if f can be differentiated two times at x*_l, the following holds:

(f′(x*_l) = 0) ∧ (f′′(x*_l) > 0) ⇒ x*_l is local minimum (3.2)
(f′(x*_l) = 0) ∧ (f′′(x*_l) < 0) ⇒ x*_l is local maximum (3.3)

Alternatively, we can detect a local minimum or maximum also when the first derivative
undergoes a sign change. If, for a small h, sign(f′(x*_l − h)) ≠ sign(f′(x*_l + h)), then we can
state that:

(f′(x*_l) = 0) ∧ (lim_{h→0} f′(x*_l − h) < 0) ∧ (lim_{h→0} f′(x*_l + h) > 0) ⇒ x*_l is local minimum (3.4)
(f′(x*_l) = 0) ∧ (lim_{h→0} f′(x*_l − h) > 0) ∧ (lim_{h→0} f′(x*_l + h) < 0) ⇒ x*_l is local maximum (3.5)
² http://en.wikipedia.org/wiki/Maxima_and_minima [accessed 2010-09-07]
Example E3.2 (Extrema and Derivatives).
The function f(x) = x² + 5 has the first derivative f′(x) = 2x and the second derivative
f′′(x) = 2. To solve f′(x*_l) = 0 we set 0 = 2x*_l and get, obviously, x*_l = 0. For x*_l = 0,
f′′(x*_l) = 2 and hence, x*_l is a local minimum according to Equation 3.2.

If we take the function g(x) = x⁴ − 2, we get g′(x) = 4x³ and g′′(x) = 12x². At x*_l = 0,
both g′(x*_l) and g′′(x*_l) become 0. We cannot use Equation 3.2. However, if we take a small
step to the left, i. e., into x < 0, then x < 0 ⇒ (4x³) < 0 ⇒ g′(x) < 0. For positive x, however,
x > 0 ⇒ (4x³) > 0 ⇒ g′(x) > 0. We have a sign change from − to + and according to
Equation 3.4, x*_l = 0 is a minimum of g(x).
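The sign-change test of Equations 3.4 and 3.5 can be mimicked numerically by sampling the first derivative slightly left and right of a critical point (a sketch; the function names and the step width h are our own choices):

```python
# Numeric version of the sign-change test: sample the first derivative df
# a small step left and right of a critical point x_l where df(x_l) ~ 0.
def classify_critical_point(df, x_l, h=1e-6):
    left, right = df(x_l - h), df(x_l + h)
    if left < 0 < right:
        return "minimum"   # sign change from - to + (Equation 3.4)
    if left > 0 > right:
        return "maximum"   # sign change from + to - (Equation 3.5)
    return "inconclusive"  # e.g. a saddle point such as x**3 at 0

g_prime = lambda x: 4 * x ** 3   # derivative of g(x) = x**4 - 2
q_prime = lambda x: -2 * x       # derivative of q(x) = -(x**2)
print(classify_critical_point(g_prime, 0.0))  # minimum
print(classify_critical_point(q_prime, 0.0))  # maximum
```

This reproduces the reasoning of Example E3.2 for g(x) = x⁴ − 2, where the second-derivative test of Equation 3.2 fails.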
From many of our previous examples such as Example E1.1 on page 25, Example E2.2 on
page 45, or Example E2.4 on page 47, we know that objective functions in many optimization
problems cannot be expressed in a mathematically closed form, let alone in one that can be
differentiated. Furthermore, Equation 3.1 to Equation 3.5 can only be directly applied if the
problem space is a subset of the real numbers X ⊆ R. They do not work for discrete spaces
or for vector spaces. For the two-dimensional objective function given in Example E3.1, for
example, we would need to apply Equation 3.6, the generalized form of Equation 3.1:

x*_l is local extremum ⇒ grad(f)(x*_l) = 0 (3.6)
For distinguishing maxima, minima, and saddle points, the Hessian matrix³ would be needed.
Even if the objective function exists in a differentiable form, for problem spaces X ⊆ Rⁿ
of larger scales n > 5, computing the gradient, the Hessian, and their properties becomes
cumbersome. Derivative-based optimization is hence only feasible in a subset of optimization
problems.

In this book, we focus on derivative-free optimization. The precise definition of local
optima without using derivatives requires some sort of measure of what "neighbor" means
in the problem space. Since this requires some further definitions, we will postpone it to
Section 4.4.
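Even when f is not given in closed form, the condition grad(f)(x) = 0 from Equation 3.6 can be checked approximately with finite differences (an illustrative sketch; the step width h and all names are our own choices, not part of the book's toolset):

```python
# Central finite-difference approximation of grad(f) at a point x.
def grad(f, x, h=1e-6):
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h  # step forward in dimension i
        xm[i] -= h  # step backward in dimension i
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

f = lambda x: x[0] ** 2 + x[1] ** 2  # true gradient: (2*x1, 2*x2)
print(grad(f, [1.0, 2.0]))           # approximately [2.0, 4.0]
print(grad(f, [0.0, 0.0]))           # approximately [0.0, 0.0]: an extremum
```

Such approximations only sidestep the closed-form requirement, not the cost argument above: each gradient estimate needs 2n evaluations of f.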
3.1.2 Global Extrema
The definitions of globally optimal points are much more straightforward and can be given
based on the terms we already have discussed:
Definition D3.1 (Global Maximum). A global maximum x̂ ∈ X of one (objective) func-
tion f : X → R is an input element with f(x̂) ≥ f(x) ∀x ∈ X.

Definition D3.2 (Global Minimum). A global minimum x̌ ∈ X of one (objective) func-
tion f : X → R is an input element with f(x̌) ≤ f(x) ∀x ∈ X.

Definition D3.3 (Global Optimum). A global optimum x* ∈ X of one (objective) func-
tion f : X → R subject to minimization is a global minimum. A global optimum of a function
subject to maximization is a global maximum.
Of course, each global optimum, minimum, or maximum is, at the same time, also a local
optimum, minimum, or maximum. As stated before, the definition of local optima is a
bit more complicated and will be discussed in detail in Section 4.4.
³ http://en.wikipedia.org/wiki/Hessian_matrix [accessed 2010-09-07]
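For a finite problem space, Definitions D3.1 and D3.2 translate directly into exhaustive comparisons (an illustrative sketch; the example function is our own choice):

```python
# For a finite problem space X, the global optima can be found by comparing
# all elements of X against each other.
def global_minimum(X, f):
    """An element x with f(x) <= f(y) for all y in X (Definition D3.2)."""
    return min(X, key=f)

def global_maximum(X, f):
    """An element x with f(x) >= f(y) for all y in X (Definition D3.1)."""
    return max(X, key=f)

f = lambda x: (x - 3) ** 2
X = range(-10, 11)
print(global_minimum(X, f), global_maximum(X, f))  # 3 -10
```

This is, of course, just the brute-force idea of Example E2.2 again and only works while X stays small.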
3.2 Multiple Optima
Many optimization problems have more than one globally optimal solution. Even a
one-dimensional function f : R → R can have more than one global maximum, multiple
global minima, or even both in its domain X.
Example E3.3 (One-Dimensional Functions with Multiple Optima).

Fig. 3.2.a: |x³| − 8x²: a function with two global minima x̌_1 and x̌_2.
Fig. 3.2.b: The cosine function: infinitely many optima.
Fig. 3.2.c: A function with uncountably many minima.
Figure 3.2: Examples for functions with multiple optima.

If the function f(x) = |x³| − 8x² (illustrated in Fig. 3.2.a) was subject to minimization, we
would find that it has two global optima x*_1 = x̌_1 = −16/3 and x*_2 = x̌_2 = 16/3. Another
interesting example is the cosine function sketched in Fig. 3.2.b. It has global maxima x̂_i at
x̂_i = 2iπ and global minima x̌_i at x̌_i = (2i + 1)π for all i ∈ Z. The correct solution of such
an optimization problem would then be a set X* of all optimal inputs in X rather than a
single maximum or minimum.

In Fig. 3.2.c, we sketch a function f(x) which evaluates to x² outside the interval [−5, 5]
and has the value 25 in [−5, 5]. Hence, all the points in the real interval [−5, 5] are minima.
If the function is subject to minimization, it has uncountably infinitely many global optima.
As mentioned before, the exact meaning of "optimal" is problem dependent. In single-objective
optimization, it means to find the candidate solutions which lead to either minimum or max-
imum objective values. In multi-objective optimization, there exists a variety of approaches
to define optima which we will discuss in-depth in Section 3.3.
Definition D3.4 (Optimal Set). The optimal set X* ⊆ X of an optimization problem is
the set that contains all its globally optimal solutions.
There are often multiple, sometimes even infinitely many optimal solutions in many optimiza-
tion problems. Since the storage space in computers is limited, it is only possible to find a
finite (sub-)set of solutions. In the further text, we will denote the result of optimization
algorithms which return single candidate solutions as x (in sans-serif font) and the results
of algorithms which are capable of producing sets of possible solutions as X = {x_1, x_2, . . . }.

Definition D3.5 (Optimization Result). The set X ⊆ X contains the output elements x ∈ X
of an optimization process.
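For a finite (or sampled) problem space, Definition D3.4 amounts to collecting all elements that attain the best objective value; here sketched with the plateau function of Fig. 3.2.c restricted to integer inputs (our own illustrative choice):

```python
def optimal_set(X, f):
    """The optimal set X* (Definition D3.4) of a finite problem space under
    minimization: all elements attaining the smallest objective value."""
    best = min(map(f, X))
    return [x for x in X if f(x) == best]

# The plateau function of Fig. 3.2.c: x**2 outside [-5, 5], constant 25 inside.
f = lambda x: max(x * x, 25)
print(optimal_set(range(-10, 11), f))  # all integers from -5 to 5
```

On the real interval the optimal set would be uncountable; the sampled version can only ever return a finite subset, which is exactly the limitation discussed above.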
Obviously, the best outcome of an optimization process would be to find X = X*. From
Example E3.3, we already know that this might actually be impossible if there are too
many global optima. Then, the goal should be to find a X ⊂ X* in a way that the candidate
solutions are evenly distributed amongst the global optima and not all of them cling together
in a small area of X*. We will discuss the issue of diversity amongst the optima in Chapter 13
in more depth.

Besides that we are maybe not able to find all global optima for a given optimization
problem, it may also be possible that we are not able to find any global optimum at all or that
we cannot determine whether the results x ∈ X which we have obtained are optimal or not.
There exists, for instance, no algorithm which can solve the Traveling Salesman Problem
(see Example E2.2) in polynomial time, as described in detail in Section 12.3. Therefore,
if such a task involving a high enough number of locations is subject to optimization, an
efficient approximation algorithm might produce reasonably good x in short time. In order
to know whether these are global optima, we would, however, need to wait for thousands of
years. Hence, in many problems, the goal of optimization is not necessarily to find X = X*
or X ⊆ X* but a good approximation X ≈ X* in a reasonable time.
3.3 Multiple Objective Functions
3.3.1 Introduction
Global Optimization techniques are not just used for finding the maxima or minima of single
functions f. In many real-world design or decision making problems, there are multiple
criteria to be optimized at the same time. Such tasks can be defined as in Definition D3.6
which will later be refined in Definition D6.2 on page 103.
Definition D3.6 (Multi-Objective Optimization Problem (MOP)). In a multi-objective
optimization problem (MOP), a set f : X → Rⁿ consisting of n objective functions
f_i : X → R, each representing one criterion, is to be optimized over a problem space
X [596, 740, 954].

f = {f_i : X → R : i ∈ 1..n} (3.7)
f(x) = (f_1(x), f_2(x), . . . )ᵀ (3.8)

In the following, we will treat these sets as vector functions f : X → Rⁿ which return a
vector of real numbers of dimension n when applied to a candidate solution x ∈ X. To the
Rⁿ we will in this case refer to as the objective space Y. Algorithms designed to optimize sets
of objective functions are usually named with the prefix "multi-objective", like multi-objective
Evolutionary Algorithms (see Definition D28.2 on page 253).
Purshouse [2228, 2230] provides a nice classification of the possible relations between
the objective functions which we illustrate here as Figure 3.3.

Figure 3.3: The possible relations between objective functions as given in [2228, 2230]:
a relation is either dependent (conflict or harmony) or independent.

In the best case from the optimization perspective, two objective functions f_1 and f_2 are
in harmony, written as f_1 ∼ f_2, meaning that improving one criterion will also lead to an
improvement in the other. Then, it would suffice to optimize only one of them and the MOP
can be reduced to a single-objective one.
Definition D3.7 (Harmonizing Objective Functions). In a multi-objective optimiza-
tion problem, two objective functions f_i and f_j are harmonizing (f_i ∼_X f_j) in a subset X ⊆ X
of the problem space X if Equation 3.9 holds.

f_i ∼_X f_j ⇒ [f_i(x_1) < f_i(x_2) ⇒ f_j(x_1) < f_j(x_2) ∀x_1, x_2 ∈ X ⊆ X] (3.9)
The opposite are conflicting objectives where an improvement in one criterion leads to a
degeneration of the utility of the candidate solutions according to another criterion. If two
objectives f_3 and f_4 conflict, we denote this as f_3 ≁ f_4. Obviously, objective functions do not
need to be conflicting or harmonic for their complete domain X, but instead may exhibit
these properties on certain intervals only.
Definition D3.8 (Conflicting Objective Functions). In a multi-objective optimization
problem, two objective functions f_i and f_j are conflicting (f_i ≁_X f_j) in a subset X ⊆ X of
the problem space X if Equation 3.10 holds.

f_i ≁_X f_j ⇒ [f_i(x_1) < f_i(x_2) ⇒ f_j(x_1) > f_j(x_2) ∀x_1, x_2 ∈ X ⊆ X] (3.10)
Besides harmonizing and conflicting optimization criteria, there may be some which are
independent from each other. In a harmonious relationship, the optimization criteria can
be optimized simultaneously, i. e., an improvement of one leads to an improvement of the
other. If the criteria conflict, the exact opposite is the case. Independent criteria, however,
do not influence each other: It is possible to modify candidate solutions in a way that leads
to a change in one objective value while the other one remains constant (see Example E3.8).

Definition D3.9 (Independent Objective Functions). In a multi-objective optimiza-
tion problem, two objective functions f_i and f_j are independent from each other in a subset
X ⊆ X of the problem space X if the MOP can be decomposed into two problems which can
be optimized separately and whose solutions can then be composed to solutions of the original
problem.
If X = X, these relations are called total and the subscript X is omitted from the notation. In
the following, we give some examples for typical situations involving harmonizing, conflicting,
and independent optimization criteria. We also list some typical multi-objective optimization
applications in Table 3.1, before discussing basic techniques suitable for defining what a
good solution is in problems with multiple, conflicting criteria.
Example E3.4 (Factory Example).
Multi-objective optimization often means compromising between conflicting goals. If we go
back to our factory example, we can specify the following objectives that all are subject to
optimization:

1. Minimize the time between an incoming order and the shipment of the corresponding
product.

2. Maximize the profit.

3. Minimize the costs for advertising, personnel, raw materials, etc.

4. Maximize the quality of the products.

5. Minimize the negative impact of waste and pollution caused by the production process
on the environment.

The last two objectives seem to clearly conflict with the goal of cost minimization. Between
the personnel costs, the time needed for production, and the product quality there should also
be some kind of contradictory relation, since producing many quality goods in short time
naturally requires many hands. The exact mutual influences between optimization criteria
can apparently become complicated and are not always obvious.
Example E3.5 (Artificial Ant Example).
Another example for such a situation is the Artificial Ant problem⁴ where the goal is to
find the most efficient controller for a simulated ant. The efficiency of an ant is not only
measured by the amount of food it is able to pile. For every food item, the ant needs to
walk to some point. The more food it aggregates, the longer the distance it needs to walk.
If its behavior is driven by a clever program, it may take the ant a shorter walk to pile the
same amount of food. This better path, however, would maybe not be discovered by an ant
with a clumsy controller. Thus, the distance the ant has to cover to find the food (or the
time it needs to do so) has to be considered in the optimization process, too.

If two control programs produce the same results and one is smaller (i. e., contains
fewer instructions) than the other, the smaller one should be preferred. Like in the factory
example, these optimization goals conflict with each other: A program which contains only
one instruction will likely not guide the ant to a food item and back to the nest.
From these two examples, we can gain another insight: To find the global optimum could
mean to maximize one function f_i ∈ f and to minimize another one f_j ∈ f (i ≠ j). Hence,
it makes no sense to talk about a global maximum or a global minimum in terms of multi-
objective optimization. We will thus retreat to the notation of the set of optimal elements
x* ∈ X* ⊆ X in the following discussions.

Since compromises for conflicting criteria can be defined in many ways, there exist mul-
tiple approaches to define what an optimum is. These different definitions, in turn, lead to
different sets X*. We will discuss some of these approaches in the following by using two
graphical examples for illustration purposes.
Example E3.6 (Graphical Example 1).
In the first graphical example pictured in Figure 3.4, we want to maximize two independent
objective functions f_1 = {f_1, f_2}. Both functions have the real numbers R as problem space
X_1. The maximum (and thus, the optimum) of f_1 is x̂_1 and the largest value of f_2 is at x̂_2.
In Figure 3.4, we can easily see that f_1 and f_2 are partly conflicting: Their maxima are at
different locations and there even exist areas where f_1 rises while f_2 falls and vice versa.

⁴ See ?? on page ?? for more details.
58 3 OPTIMA: WHAT DOES GOOD MEAN?
Figure 3.4: Two functions f_1 and f_2 with different maxima x̂_1 and x̂_2, plotted as
y = f_1(x) and y = f_2(x) over x ∈ X_1.
Example E3.7 (Graphical Example 2).
The objective functions f_1 and f_2 in the first example are mappings of a one-dimensional prob-
lem space X_1 to the real numbers that are to be maximized. In the second example sketched
in Figure 3.5, we instead minimize two functions f_3 and f_4 that map a two-dimensional
problem space X_2 ⊂ R² to the real numbers R. Both functions have two global minima; the
lowest values of f_3 are x̌_1 and x̌_2 whereas f_4 gets minimal at x̌_3 and x̌_4. It should be noted
that x̌_1 ≠ x̌_2 ≠ x̌_3 ≠ x̌_4.

Figure 3.5: Two functions f_3 and f_4 with different minima x̌_1, x̌_2, x̌_3, and x̌_4.
Example E3.8 (Independent Objectives).
Assume that the problem space is the three-dimensional real vectors R³ and the two
objectives f_1 and f_2 are subject to minimization:

f_1(x) = √((x_1 − 5)² + (x_2 + 1)²) (3.11)
f_2(x) = x_3² · sin x_3 (3.12)

This problem can be decomposed into two separate problems, since f_1 can be optimized on
the first two vector elements x_1, x_2 alone and f_2 can be optimized by considering only x_3
while the values of x_1 and x_2 do not matter. After solving the two separate optimization
problems, their solutions can be combined to a full vector x* = (x*_1, x*_2, x*_3)ᵀ by taking the
solutions x*_1 and x*_2 from the optimization of f_1 and x*_3 from the optimization process solving
f_2.
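The decomposition of Example E3.8 can be sketched with two independent grid searches (the grids, ranges, and composition step are illustrative assumptions of ours):

```python
import math

# The two objectives of Example E3.8.
def f1(x1, x2):
    return math.sqrt((x1 - 5) ** 2 + (x2 + 1) ** 2)

def f2(x3):
    return x3 ** 2 * math.sin(x3)

grid = range(-6, 7)  # a coarse integer grid, purely for illustration

# Solve the two sub-problems separately ...
x1s, x2s = min(((a, b) for a in grid for b in grid), key=lambda p: f1(*p))
x3s = min(grid, key=f2)

# ... and compose the solutions into one full vector x*.
solution = (x1s, x2s, x3s)
print(solution)
```

On this grid the first sub-problem is solved exactly (f_1 vanishes at (5, −1)), while the second only finds the best grid point; refining the grid for f_2 would not affect the f_1 part at all, which is precisely what independence means.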
Example E3.9 (Robotic Soccer Cont.).
In Example E1.10 on page 37, we mentioned that in robotic soccer, the robots must try to
decide which action to take in order to maximize the chance of scoring a goal. If decision
making in robotic soccer was only based on this criterion, it is likely that most mates in
a robot team would jointly attack the goal of the opposing team, leaving their own one
unprotected. Hence, at least one second objective is important too: to minimize the chance
that the opposing team scores a goal.
Table 3.1: Applications and Examples of Multi-Objective Optimization.
Area References
Business-To-Business [1559]
Chemistry [306, 668]
Combinatorial Problems [366, 491, 527, 654, 690, 1034, 1429, 1440, 1442, 1553, 1696,
1756–1758, 1859, 2019, 2025, 2158, 2472, 2473, 2862, 2895];
see also Table 20.1
Computer Graphics [1661]; see also Table 20.1
Data Mining [2069, 2398, 2644, 2645]; see also Table 20.1
Databases [1696]
Decision Making [198, 320, 861, 1299, 1559, 1571, 1801]; see also Table 20.1
Digital Technology [789, 869, 1479]
Distributed Algorithms and Systems [320, 605, 662, 663, 669, 861, 1299, 1488, 1559, 1571,
1574, 1801, 1840, 2071, 2644, 2645, 2862, 2897]
Economics and Finances [320, 662, 857, 1559]
Engineering [93, 519, 547, 548, 579, 605, 669, 737, 789, 869, 953, 1000,
1479, 1488, 1508, 1574, 1622, 1717, 1840, 1873, 2059, 2111,
2158, 2644, 2645, 2862, 2897]
Games [560, 997, 2102]
Graph Theory [669, 1488, 1574, 1840, 2025, 2897]
Logistics [527, 579, 861, 1034, 1429, 1442, 1553, 1756, 1859, 2059,
2111]
Mathematics [560, 662, 857, 869, 997, 1661, 2102]
Military and Defense [1661]
Multi-Agent Systems [1661]
Physics [1000]
Security [1508, 1661]
Software [1508, 1559, 2863, 2897]; see also Table 20.1
Testing [2863]
Water Supply and Management [1268]
Wireless Communication [1717, 1796, 2158]
3.3.2 Lexicographic Optimization
The maybe simplest way to optimize a set f of objective functions is to solve them according
to priorities [93, 860, 2291, 2820]. Without loss of generality, we could, for instance, assume
that f_1 is strictly more important than f_2, which, in turn, is more important than f_3, and
so on. We then would first determine the optimal set X*(1) according to f_1. If this set
contains more than one solution, we will then retain only those elements in X*(1) which
have the best objective values according to f_2 compared to the other members of X*(1) and
obtain X*(1,2) ⊆ X*(1). This process is repeated until either |X*(1,2,...)| = 1 or all criteria have
been processed. By doing so, the multi-objective optimization problem has been reduced to
n = |f| single-objective ones to be solved iteratively. Then, the problem space X_i of the i-th
problem equals X only for i = 1 and is X_i = X*(1,2,..,i−1) for i > 1 [860].
Definition D3.10 (Lexicographically Better). A candidate solution x_1 ∈ X is lexicographically better than another element x_2 ∈ X (denoted by x_1 <_lex x_2) if

x_1 <_lex x_2 ⇔ (ω_i f_i(x_1) < ω_i f_i(x_2)) ∧ (i = min{k : f_k(x_1) ≠ f_k(x_2)})   (3.13)

ω_j = 1 if f_j should be minimized, ω_j = −1 if f_j should be maximized   (3.14)

where the ω_j in Equation 3.13 only denote whether f_j should be maximized or minimized, as specified in Equation 3.14.
Since we generally consider minimization problems, the ω_j will be assumed to be 1 if not explicitly stated otherwise. If n objective functions are to be optimized, there are n! = |Π(1..n)| possible permutations of them, where Π(1..n) is the set of all permutations of the natural numbers from 1 to n. For each such permutation π, the described approach may lead to different results. We can thus refine the previous definition of lexicographically better by putting it into the context of a permutation π of the objective functions:
Definition D3.11 (Lexicographically Better (Refined)). A candidate solution x_1 ∈ X is lexicographically better than another element x_2 ∈ X (denoted by x_1 <_lex,π x_2) according to a permutation π ∈ Π(1..n) applied to the objective functions if

x_1 <_lex,π x_2 ⇔ (ω_π[i] f_π[i](x_1) < ω_π[i] f_π[i](x_2)) ∧ (i = min{k : f_π[k](x_1) ≠ f_π[k](x_2)})   (3.15)

where n = len(f) is the number of objective functions, π[i] is the i-th element of the permutation π, and the ω_i retain their meaning from Equation 3.14.
Based on the <_lex relation, we can now define optimality.
Definition D3.12 (Lexicographic Optimality). A candidate solution x*_π is lexicographically optimal in terms of a permutation π ∈ Π(1..n) applied to the objective functions (and therefore, member of the optimal set X*_π) if there is no lexicographically better element in X, i. e.,

∄x ∈ X : x <_lex,π x*_π   (3.16)

As the lexicographically global optimal set X* we consider the union of the optimal sets X*_π for all possible permutations π of the first n natural numbers:

X* = ⋃_{π ∈ Π(1..n)} X*_π   (3.17)
X* can be computed by solving all the |Π(1..n)| = n! different multi-objective problems arising, i. e., the n · n! single-objective problems, separately. This would lead to a more than exponential complexity in n. In case that X is a finite set, this can be achieved in polynomial time in n [860]. But which solutions would be contained in the X*_π or X* if we use the lexicographical definition of optimality?
3.3. MULTIPLE OBJECTIVE FUNCTIONS 61
Example E3.10 (Example E3.4 Cont. Lexicographically).
In the factory example Example E3.4, we wished to minimize the production time (f_1, ω_1 = 1), maximize the profit (f_2, ω_2 = −1), minimize the running costs (f_3, ω_3 = 1), maximize a product quality measure (f_4, ω_4 = −1), and minimize environmental damage (f_5, ω_5 = 1). Determining the lexicographically optimal set X*_π for the identity permutation π = (1, 2, 3, 4, 5) means to first determine all possible configurations of the factory that minimize the production time f_1. Only if there is more than one such configuration (len(X*_(1)) > 1), we consider f_2. Amongst all elements in X*_(1), we then copy those to X*_(1,2) which have the highest f_2-value, and so on. What we get in X*_π are configurations which perform exceptionally well in terms of the production time but sacrifice profit, running costs, production quality, and will largely disregard any environmental issues. No trade-off between the objectives will happen at all. Therefore, the set of all lexicographically optimal solutions X* will contain the elements x* ∈ X which have the best objective values regarding one single objective function while largely neglecting the others.
Example E3.11 (Example E3.5 Cont. Lexicographically).
The same would be the case in the Artificial Ant problem. X*, if determined lexicographically, contains
1. the smallest possible program (length 0),
2. the program which leads to the highest possible food collection, and
3. the program which guides the ant along the shortest possible path (which is again the empty program).
Example E3.12 (Example E3.6 Cont. Lexicographically).
There are exactly two lexicographically optimal points on the two functions illustrated in Figure 3.4. The first one, x*_1, is the single global maximum x̂_1 of f_1, and the other one (x*_2) is the single global maximum x̂_2 of f_2. In other words, X*_(1,2) = {x*_1} and X*_(2,1) = {x*_2}. Since Π(1, 2) = {(1, 2), (2, 1)}, it follows that X* = X*_(1,2) ∪ X*_(2,1) = {x*_1, x*_2}.
Example E3.13 (Example E3.7 Cont. Lexicographically).
The two-dimensional objective functions f_3 and f_4 from Figure 3.5 each have two global minima, x̂_1, x̂_2 and x̂_3, x̂_4, respectively. Although it is a bit hard to see in the figure itself, f_4(x̂_1) = f_4(x̂_2) and f_3(x̂_3) = f_3(x̂_4) because of the symmetry of the two objective functions. This means that X*_(3,4) = {x̂_1, x̂_2}, that X*_(4,3) = {x̂_3, x̂_4}, and that X* = X*_(3,4) ∪ X*_(4,3) = {x̂_1, x̂_2, x̂_3, x̂_4}.
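As a sketch (not one of the book's own listings), the refined relation from Equation 3.15 can be implemented as a comparator over precomputed objective-value vectors; the class and parameter names here are hypothetical:

```java
public class LexicographicComparator {
    /**
     * Compares two objective-value vectors according to Equation 3.15.
     * pi is a permutation of the indices 0..n-1 (highest priority first),
     * omega[i] is +1 for objectives to be minimized and -1 for objectives
     * to be maximized. Returns a negative value if a is lexicographically
     * better than b, a positive value if b is better, and 0 if they are equal.
     */
    public static int compare(double[] a, double[] b, int[] pi, double[] omega) {
        for (int idx : pi) { // visit objectives in priority order
            double fa = omega[idx] * a[idx];
            double fb = omega[idx] * b[idx];
            if (fa != fb) { // the first differing objective decides
                return fa < fb ? -1 : 1;
            }
        }
        return 0; // equal under all objectives
    }
}
```

Solving the n single-objective problems iteratively, as described above, then amounts to keeping the minima of this comparator over the candidate set.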
3.3.2.1 Problems with the Lexicographic Method
From these examples it should become clear that the lexicographic approach to defining what is optimal and what is not has some advantages and some disadvantages. On the positive side, we find that it is very easy to realize and that it allows us to treat a multi-objective optimization problem by iteratively solving single-objective ones. On the downside, it does not perform any trade-off between the different objective functions at all, and we only find the extreme values of the optimization criteria. In the factory example, for instance, we would surely be interested in solutions which lead to a short production time but do not entirely ignore any budget issues. Such differentiations, however, are not possible with the lexicographic method.
3.3.3 Weighted Sums (Linear Aggregation)
Another very simple method to define what an optimal solution is, is to compute a weighted sum ws(x) of all objective functions f_i(x) ∈ f. Since a weighted sum is a linear aggregation of the objective values, this approach is also often referred to as linear aggregation. Each objective f_i is multiplied with a weight w_i representing its importance. Using signed weights allows us to minimize one objective while maximizing another one. We can, for instance, apply a weight w_a = 1 to an objective function f_a and the weight w_b = −1 to the criterion f_b. By minimizing ws(x), we then actually minimize the first and maximize the second objective function. If we instead maximize ws(x), the effect would be inverse and f_b would be minimized while f_a is maximized. Either way, multi-objective problems are reduced to single-objective ones with this method. As illustrated in Equation 3.19, a global optimum then is a candidate solution x* ∈ X for which the weighted sum of all objective values is less than or equal to those of all other candidate solutions (if minimization of the weighted sum is assumed).
ws(x) = Σ_{i=1}^{n} w_i · f_i(x) = Σ_{f_i ∈ f} w_i · f_i(x)   (3.18)

x* ∈ X* ⇔ ws(x*) ≤ ws(x) ∀x ∈ X   (3.19)
With this method, we can also emulate lexicographical optimization in many cases by simply setting the weights to values of different magnitude. If the objective functions f_i ∈ f all had the range f_i : X → [0, 10], for instance, setting the weights to w_i = 10^−i would effectively achieve this.
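Equation 3.18 reduces to a dot product over precomputed objective values; a minimal sketch (hypothetical helper, not from the book's listings):

```java
public class WeightedSum {
    /**
     * Computes ws(x) = sum_i w[i] * f[i](x) from precomputed objective
     * values, as in Equation 3.18. Signed weights let one criterion be
     * maximized (negative weight) while the sum as a whole is minimized.
     */
    public static double ws(double[] objectiveValues, double[] weights) {
        double sum = 0.0;
        for (int i = 0; i < objectiveValues.length; i++) {
            sum += weights[i] * objectiveValues[i];
        }
        return sum;
    }
}
```

The multi-objective problem is then solved by minimizing (or maximizing) this single scalar value per Equation 3.19.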
Example E3.14 (Example E3.6 Cont.: Weighted Sums).
Figure 3.6 demonstrates optimization with the weighted sum approach for the example given in Example E3.6. The weights are both set to 1 = w_1 = w_2. If we maximize ws_1(x), we will thus also maximize the functions f_1 and f_2. This leads to a single optimum x* = x̂.
[Figure: the curves y_1 = f_1(x) and y_2 = f_2(x) over x ∈ X_1, together with their sum y = ws_1(x) = f_1(x) + f_2(x) and the single optimum x̂.]
Figure 3.6: Optimization using the weighted sum approach (first example).
Example E3.15 (Example E3.7 Cont.: Weighted Sums).
The sum of the two-dimensional functions f_3 and f_4 from the second graphical Example E3.7 is sketched in Figure 3.7. Again, we set the weights w_3 and w_4 to 1. The sum ws_2, however, is subject to minimization. The graph of ws_2 has two especially deep valleys. At the bottoms of these valleys, the two global minima x̂_5 and x̂_6 can be found.
[Figure: surface plot of y = ws_2(x) = f_3(x) + f_4(x) over (x_1, x_2), with the two global minima x̂_5 and x̂_6 at the bottoms of the valleys.]
Figure 3.7: Optimization using the weighted sum approach (second example).
3.3.3.1 Problems with Weighted Sums
This approach has a variety of drawbacks. One is that it cannot properly handle functions that rise or fall with different speed^5. Whitley [2929] pointed out that the results of an objective function should not be considered as a precise measure of utility. Adding multiple such values up thus does not necessarily make sense.
Example E3.16 (Problematic Objectives for Weighted Sums).
In Figure 3.8, we have sketched the sum ws(x) of the two objective functions f_1(x) = x² and f_2(x) = e^(x−2). When minimizing or maximizing this sum, we will always disregard one of the two functions, depending on the interval chosen. For small x, f_2 is negligible compared to f_1. For x > 5 it begins to outpace f_1, which, in turn, will now become negligible. Such functions cannot be added up properly using constant weights. Even if we would set w_1 to the really large number 10^10, f_1 will become insignificant for all x > 40, because (40²) · 10^10 / e^(40−2) ≈ 0.0005.
^5 See Definition D12.1 on page 144 or http://en.wikipedia.org/wiki/Asymptotic_notation [accessed 2007-07-03] for related information.
[Figure: plots of y_1 = f_1(x), y_2 = f_2(x), and y = g(x) = f_1(x) + f_2(x) for x ∈ [−5, 5], y ∈ [−25, 45].]
Figure 3.8: A problematic constellation for the weighted sum approach.
Weighted sums are only suitable to optimize functions that at least share the same big-O notation (see Definition D12.1 on page 144). Often, it is not obvious how the objective functions will fall or rise. How can we, for instance, determine whether the objective maximizing the food piled by an Artificial Ant rises in comparison to the objective minimizing the distance walked by the simulated insect?
And even if the shape of the objective functions and their complexity class were clear, the question about how to set the weights w properly still remains open in most cases [689]. In the same paper, Das and Dennis [689] also show that with weighted sum approaches, not necessarily all elements considered optimal in terms of Pareto domination (see next section) will be found. Instead, candidate solutions hidden in concavities of the trade-off curve will not be discovered [609, 1621, 1622] (see Example E3.17 on the next page for different shapes of trade-off curves). Also, the candidate solutions which will finally be contained in the result set X will not necessarily be evenly spaced [1000]. Amongst further authors who discuss the drawbacks of the linear aggregation method are Koski [1582] and Messac and Mattson [1873].
3.3.4 Weighted Min-Max
The weighted min-max approach [1303, 1305, 1554, 1741] for multi-objective optimization compares two candidate solutions x_1 and x_2 on the basis of the minimum (or maximum) of their weighted objective values. A candidate solution x_1 ∈ X is preferred in comparison with x_2 ∈ X if Equation 3.20 holds:

min{w_i · f_i(x_1) : i ∈ 1..n} < min{w_i · f_i(x_2) : i ∈ 1..n}   (3.20)
where the w_i are the weights of the objective values. For objective functions subject to minimization, w_i > 0 is used, and w_i is set to a value below zero for functions to be maximized. This approach, which is also called the Weighted Tschebychev Norm, is able to find solutions on both convex and concave trade-off surfaces (see Definition D3.15 on the facing page). The weight vector describes a beam in direction (1/w_1, 1/w_2, .., 1/w_n)^T from the point of origin in objective space to and along which the optimizer should converge.
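The comparison of Equation 3.20 can be sketched as follows, computing the minimum weighted objective value of a solution (names hypothetical):

```java
public class WeightedMinMax {
    /**
     * Returns the minimum weighted objective value min{ w[i]*f[i](x) : i },
     * as used in Equation 3.20. A candidate solution x1 is then preferred
     * over x2 if score(x1) < score(x2).
     */
    public static double score(double[] objectiveValues, double[] weights) {
        double best = Double.POSITIVE_INFINITY;
        for (int i = 0; i < objectiveValues.length; i++) {
            best = Math.min(best, weights[i] * objectiveValues[i]);
        }
        return best;
    }
}
```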
3.3.5 Pareto Optimization
The mathematical foundations for multi-objective optimization which considers conflicting criteria in a fair way were laid by Edgeworth [857] and Pareto [2127] around one hundred fifty years ago [1642]. Pareto optimality^6 became an important notion in economics, game theory, engineering, and social sciences [560, 997, 2102, 2938]. It defines the frontier of solutions that can be reached by trading off conflicting objectives in an optimal manner. From this front, a decision maker (be it a human or an algorithm) can finally choose the configurations that, in her opinion, suit best [264, 528, 952, 954, 1010, 1166, 2610]. The notion of optimality in the Pareto sense is strongly based on the definition of domination:
Definition D3.13 (Domination). An element x_1 dominates (is preferred to) an element x_2 (x_1 ≻ x_2) if x_1 is better than x_2 in at least one objective function and not worse with respect to all other objectives. Based on the set f of objective functions f, we can write:

x_1 ≻ x_2 ⇔ (∀i ∈ 1..n : ω_i f_i(x_1) ≤ ω_i f_i(x_2)) ∧ (∃j ∈ 1..n : ω_j f_j(x_1) < ω_j f_j(x_2))   (3.21)

ω_i = 1 if f_i should be minimized, ω_i = −1 if f_i should be maximized   (3.22)
We provide a Java version of a comparison operator implementing the domination relation in Listing 56.55 on page 838. Like in Equation 3.13 in the definition of the lexicographical ordering of candidate solutions, the ω_i again denote whether f_i should be maximized or minimized. Thus, Equation 3.22 and Equation 3.14 are the same. Since we generally assume minimization, the ω_i will always be 1 if not explicitly stated otherwise in the rest of this book. Then, if a candidate solution x_1 dominates another one (x_2), f(x_1) is partially less than f(x_2).
The Pareto domination relation defines a strict partial order (an irreflexive, asymmetric, and transitive binary relation, see Definition D51.16 on page 645) on the space of possible objective values Y. In contrast, the weighted sum approach imposes a total order by projecting the objective space Y into the real numbers R. The use of Pareto domination in multi-objective Evolutionary Algorithms was first proposed in [1075] back in 1989 [2228].
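The book's full Java comparator is given in Listing 56.55 on page 838; as a simplified sketch (assuming pure minimization, i.e., all ω_i = 1, and hypothetical names), the domination test of Equation 3.21 can be written as:

```java
public class Dominance {
    /**
     * Checks whether objective vector a dominates objective vector b
     * according to Equation 3.21, assuming pure minimization: a must be
     * no worse in every objective and strictly better in at least one.
     */
    public static boolean dominates(double[] a, double[] b) {
        boolean strictlyBetterSomewhere = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] > b[i]) {
                return false; // worse in objective i: no domination
            }
            if (a[i] < b[i]) {
                strictlyBetterSomewhere = true;
            }
        }
        return strictlyBetterSomewhere;
    }
}
```

Note that equal vectors do not dominate each other, which gives the relation its irreflexive, strict-partial-order character.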
Definition D3.14 (Pareto Optimal). An element x* ∈ X is Pareto optimal (and hence, part of the optimal set X*) if it is not dominated by any other element in the problem space X. X* is called the Pareto-optimal set or Pareto set.

x* ∈ X* ⇔ ∄x ∈ X : x ≻ x*   (3.23)
The Pareto optimal set will also comprise all lexicographically optimal solutions since, per definition, it includes all the extreme points of the objective functions. Additionally, it provides the trade-off solutions which we longed for in Section 3.3.2.1.
Definition D3.15 (Pareto Frontier). For a given optimization problem, the Pareto front(ier) F* ⊆ R^n is defined as the set of results the objective function vector f creates when it is applied to all the elements of the Pareto-optimal set X*:

F* = {f(x*) : x* ∈ X*}   (3.24)
Example E3.17 (Shapes of Optimal Sets / Pareto Frontiers).
The Pareto front in Example E13.2 has a convex geometry, but there are other different shapes as well. In Figure 3.9 [2915], we show some examples, including non-convex (concave), disconnected, linear, and non-uniformly distributed Pareto fronts. Different structures of the optimal sets pose different challenges to achieve good convergence and spread. Optimization processes driven by linear aggregating functions will, for instance, have problems finding all portions of Pareto fronts with non-convex geometry, as shown by Das and Dennis [689].
^6 http://en.wikipedia.org/wiki/Pareto_efficiency [accessed 2007-07-03]
[Figure: four sketches in the f_1-f_2 plane: (a) non-convex (concave), (b) disconnected, (c) linear, and (d) non-uniformly distributed Pareto fronts.]
Figure 3.9: Examples of Pareto fronts [2915].
In this book, we will refer to candidate solutions which are not dominated by any element in a reference set as non-dominated. This reference set may be the whole problem space X itself (in which case the non-dominated elements are global optima) or any subset of it. Many optimization algorithms, for instance, work on populations of candidate solutions and, in such a context, a non-dominated candidate solution would be one not dominated by any other element in the population.
Definition D3.16 (Domination Rank). We define the domination rank #dom(x, X) of a candidate solution x relative to a reference set X as

#dom(x, X) = |{x′ : (x′ ∈ X) ∧ (x′ ≻ x)}|   (3.25)

Hence, an element x with #dom(x, X) = 0 is non-dominated in X, and a candidate solution x* with #dom(x*, X) = 0 is a global optimum. The higher the domination rank, the worse is the associated element.
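Counting dominating elements per Equation 3.25 is a straightforward double loop; a sketch assuming pure minimization (all ω_i = 1) over precomputed objective vectors, with hypothetical names:

```java
public class DominationRank {
    // minimization-order dominance as in Equation 3.21 (all omega_i = 1)
    static boolean dominates(double[] a, double[] b) {
        boolean better = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] > b[i]) return false; // worse somewhere: no domination
            if (a[i] < b[i]) better = true;
        }
        return better;
    }

    /**
     * Counts the elements of the reference set that dominate x
     * (Equation 3.25); a result of 0 means x is non-dominated in the set.
     */
    public static int rank(double[] x, double[][] referenceSet) {
        int count = 0;
        for (double[] other : referenceSet) {
            if (dominates(other, x)) count++;
        }
        return count;
    }
}
```

This is exactly the quantity plotted over the problem space in Figure 3.11 below.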
Example E3.18 (Example E3.6 Cont.: Pareto Optimization).
In Figure 3.10, we illustrate the impact of the definition of Pareto optimality on our first example (outlined in Example E3.6). We again assume that f_1 and f_2 should both be maximized and hence, ω_1 = ω_2 = −1. The areas shaded with dark gray are Pareto optimal and thus represent the optimal set X* = [x_2, x_3] ∪ [x_5, x_6], which contains infinitely many elements since it is composed of intervals on the real axis. In practice, of course, our computers can only handle finitely many elements and we would not be able to find all optimal elements unless, of course, we could apply a method able to mathematically resolve the objective functions and to return the intervals in a form similar to the notation we used here. All candidate solutions not in the optimal set are dominated by at least one element of X*.
[Figure: the curves y = f_1(x) and y = f_2(x) over x ∈ X_1 with marked points x_1 to x_6; the Pareto optimal regions [x_2, x_3] and [x_5, x_6] are shaded dark gray.]
Figure 3.10: Optimization using the Pareto Frontier approach.
The points in the area between x_1 and x_2 (shaded in light gray) are dominated by other points in the same region or in [x_2, x_3], since both functions f_1 and f_2 can be improved by increasing x. In other words, f_1 and f_2 harmonize in [x_1, x_2). If we start at the leftmost point in X (which is position x_1), for instance, we can go one small step δ to the right and will find a point x_1 + δ dominating x_1 because f_1(x_1 + δ) > f_1(x_1) and f_2(x_1 + δ) > f_2(x_1). We can repeat this procedure and will always find a new dominating point until we reach x_2. x_2 demarks the global maximum of f_2, the point with the highest possible f_2 value, which cannot be dominated by any other element of X by definition (see Equation 3.21).
From here on, f_2 will decrease for some time, but f_1 keeps rising. If we now go a small step δ to the right, we will find a point x_2 + δ with f_2(x_2 + δ) < f_2(x_2) but also f_1(x_2 + δ) > f_1(x_2). One objective can only get better if another one degenerates, i. e., f_1 and f_2 conflict in [x_2, x_3]. In order to increase f_1, f_2 would have to be decreased, and vice versa, and hence, the new point is not dominated by x_2. Although some of the f_2(x) values of the other points x ∈ [x_1, x_2) may be larger than f_2(x_2 + δ), f_1(x_2 + δ) > f_1(x) holds for all of them. This means that no point in [x_1, x_2) can dominate any point in [x_2, x_4] because f_1 keeps rising until x_4 is reached.
At x_3, however, f_2 steeply falls to a very low level, a level lower than f_2(x_5). Since the f_1 values of the points in [x_5, x_6] are also higher than those of the points in (x_3, x_4], all points in the set [x_5, x_6] (which also contains the global maximum of f_1) dominate those in (x_3, x_4]. For all the points in the white area between x_4 and x_5 and after x_6, we can derive similar relations. All of them are also dominated by the non-dominated regions that we have just discussed.
Example E3.19 (Example E3.7 Cont.: Pareto Optimization).
Another method to visualize the Pareto relationship is outlined in Figure 3.11 for our second graphical example. For a certain grid-based resolution X′_2 ⊆ X_2 of the problem space X_2, we counted the number #dom(x, X′_2) of elements that dominate each element x ∈ X′_2. Again, the higher this number, the worse is the element x in terms of Pareto optimization. Hence, those candidate solutions residing in the valleys of Figure 3.11 are better than those which are part of the hills. This Pareto ranking approach is also used in many optimization algorithms as part of the fitness assignment scheme (see Section 28.3.3 on page 275, for instance). A non-dominated element is, as the name says, not dominated by any other candidate solution. These elements are Pareto optimal and have a domination rank of zero. In Figure 3.11, there are four such areas X*_1, X*_2, X*_3, and X*_4.
[Figure: surface plot of the domination rank #dom over (x_1, x_2), with the four non-dominated areas X*_1, X*_2, X*_3, and X*_4 marked.]
Figure 3.11: Optimization using the Pareto Frontier approach (second example).
If we compare Figure 3.11 with the plots of the two functions f_3 and f_4 in Figure 3.5, we can see that hills in the domination space occur at positions where both f_3 and f_4 have high values. Conversely, regions of the problem space where both functions have small values are dominated by very few elements.
Besides these examples, another illustration of the domination relation which may help in understanding Pareto optimization can be found in Section 28.3.3 on page 275 (Figure 28.4 and Table 28.1).
3.3.5.1 Problems of Pure Pareto Optimization
Besides the fact that we will usually not be able to list all Pareto optimal elements, the complete Pareto optimal set is often also not the wanted result of the optimization process. Usually, we are rather interested in some special areas of the Pareto front only.
Example E3.20 (Example E3.5 Artificial Ant Cont.).
We can again take the Artificial Ant example to visualize this problem. In Example E3.5 on page 57 we have introduced multiple conflicting criteria in this problem:
1. Maximize the amount of food piled.
2. Minimize the distance covered or the time needed to find the food.
3. Minimize the size of the program driving the ant.
If we applied pure Pareto optimization to these objectives, we might, for example, obtain:
1. A program x_1 consisting of 100 instructions, allowing the ant to gather 50 food items when walking a distance of 500 length units.
2. A program x_2 consisting of 100 instructions, allowing the ant to gather 60 food items when walking a distance of 5000 length units.
3. A program x_3 consisting of 50 instructions, allowing the ant to gather 61 food items when walking a distance of 1 000 000 length units.
4. A program x_4 consisting of 10 instructions, allowing the ant to gather 1 food item when walking a distance of 5 length units.
5. A program x_5 consisting of 0 instructions, allowing the ant to gather 0 food items when walking a distance of 0 length units.
The result of the optimization process obviously contains useless but non-dominated individuals which occupy space in the non-dominated set. A program x_3 which forces the ant to scan each and every location on the map is not feasible even if it leads to a maximum in food gathering. Also, an empty program x_5 is useless because it will lead to no food collection at all. Between x_1 and x_2, the difference in the amount of piled food is only 20%, but the ant driven by x_2 has to cover ten times the distance covered by x_1. Therefore, the question of the actual utility arises in this case, too.
If such candidate solutions are created and archived, processing time is invested in evaluating them (and we have already seen that evaluation may be a costly step of the optimization process). Ikeda et al. [1400] show that various elements far from the true Pareto frontier may survive as hardly-dominated solutions (called dominance resistant, see Section 20.2). Also, such solutions may dominate others which are not optimal but fall into the space behind the interesting part of the Pareto front. Furthermore, memory restrictions usually force us to limit the size of the list of non-dominated solutions X found during the search. When this size limit is reached, some optimization algorithms use a clustering technique to prune the optimal set while maintaining diversity. On the one hand, this is good since it will preserve a broad scan of the Pareto frontier. On the other hand, a short but dumb program is of course very different from a longer, intelligent one. Therefore, it will be kept in the list, and other solutions which differ less from each other but are more interesting for us will be discarded.
Worse yet, non-dominated elements usually have a higher probability of being explored further. This then inevitably leads to the creation of a great proportion of useless offspring.
In the next iteration of the optimization process, these useless offspring will need a good share of the processing time to be evaluated.
Thus, there are several reasons to force the optimization process into a wanted direction. In ?? on page ?? you can find an illustrative discussion on the drawbacks of strict Pareto optimization in a practical example (evolving web service compositions).
3.3.5.2 Goals of Pareto Optimization
Purshouse [2228] identifies three goals of Pareto optimization which also mirror the aforementioned problems:
1. Proximity: The optimizer should obtain result sets X which come as close to the Pareto frontier as possible.
2. Diversity: If the real set of optimal solutions is too large, the set X of solutions actually discovered should have a good distribution in both uniformity and extent.
3. Pertinency: The set X of solutions discovered should match the interests of the decision maker using the optimizer. There is no use in containing candidate solutions with no actual merit for the human operator.
3.4 Constraint Handling
Often, there are feasibility limits for the parameters of the candidate solutions or the objective function values. Such regions of interest are one of the reasons for one further extension of multi-objective optimization problems:
Definition D3.17 (Feasibility). In an optimization problem, p ≥ 0 inequality constraints g and q ≥ 0 equality constraints h may be imposed in addition to the objective functions. Then, a candidate solution x is feasible, if and only if it fulfills all constraints:

isFeasible(x) ⇔ (g_i(x) ≥ 0 ∀i ∈ 1..p) ∧ (h_j(x) = 0 ∀j ∈ 1..q)   (3.26)

A candidate solution x for which isFeasible(x) does not hold is called infeasible. Obviously, only a feasible individual can be a solution, i. e., an optimum, for a given optimization problem. Constraint handling and the notion of feasibility can be combined with all the methods for single-objective and multi-objective optimization previously discussed. Comprehensive reviews on techniques for such problems have been provided by Michalewicz [1885], Michalewicz and Schoenauer [1890], and Coello Coello et al. [594, 599] in the context of Evolutionary Computation. In Table 3.2, we give some examples for constraints in optimization problems.
Table 3.2: Applications and Examples of Constraint Optimization.
Area References
Combinatorial Problems [237, 1161, 1735, 1822, 1871, 1921, 2036, 2072, 2658, 3034]
Computer Graphics [2487, 2488]
Databases [1735, 1822, 2658]
3.4. CONSTRAINT HANDLING 71
Decision Making [1801]
Distributed Algorithms and Systems [1801, 3034]
Economics and Finances [530, 531, 3034]
Engineering [500, 519, 530, 547, 1881, 3070]
Function Optimization [1732]
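For numerically evaluated constraints, Equation 3.26 can be sketched as below; since exact equality h_j(x) = 0 is rarely attainable in floating-point arithmetic, a small tolerance is used here (an assumption of this sketch, not part of the definition):

```java
public class Feasibility {
    /**
     * Equation 3.26 over evaluated constraint values: x is feasible iff all
     * inequality constraints satisfy g_i(x) >= 0 and all equality
     * constraints satisfy h_j(x) = 0 (here checked up to a tolerance eps).
     */
    public static boolean isFeasible(double[] g, double[] h, double eps) {
        for (double gi : g) {
            if (gi < 0.0) return false; // violated inequality constraint
        }
        for (double hj : h) {
            if (Math.abs(hj) > eps) return false; // violated equality constraint
        }
        return true;
    }
}
```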
3.4.1 Genome and Phenome-based Approaches
One approach to ensure that constraints are always obeyed is to make sure that only feasible candidate solutions can occur during the optimization process.
1. Coding: This can be done as part of the genotype-phenotype mapping by using so-called decoders [2228]. An example for a decoder is given in [2472, 2473].
2. Repairing: Infeasible candidate solutions could be repaired, i. e., turned into feasible ones [2228, 3041], as part of the genotype-phenotype mapping as well.
3.4.2 Death Penalty
Probably the easiest way of dealing with constraints in objective and fitness space is to simply reject all infeasible candidate solutions right away and not to consider them any further in the optimization process. This death penalty [1885, 1888] can only work in problems where the feasible regions are very large. If the problem space contains only few or non-continuously distributed feasible elements, such an approach will lead the optimization process to stagnate, because most effort is spent on untargeted sampling in the hope of finding feasible candidate solutions. Once a feasible element has been found, the transition to other feasible elements may be too complicated and the search may only find local optima. Additionally, if infeasible elements are directly discarded, the information which could be gained from them is disposed of with them.
3.4.3 Penalty Functions
Maybe one of the most popular approaches for dealing with constraints, especially in the area of single-objective optimization, goes back to Courant [647], who introduced the idea of penalty functions in 1943. Here, the constraints are combined with the objective function f, resulting in a new function f′ which is then actually optimized. The basic idea is that this combination is done in a way which ensures that an infeasible candidate solution always has a worse f′-value than a feasible one with the same objective values. In [647], this is achieved by defining f′ as f′(x) = f(x) + v · [h(x)]². Various similar approaches exist. Carroll [500, 501], for instance, chose a penalty function of the form f′(x) = f(x) + v · Σ_{i=1}^{p} [g_i(x)]^{−1} in order to ensure that the function g always stays larger than zero.
ensure that the function g always stays larger than zero.
There are practically no limits for the ways in which a penalty for infeasibility can be
integrated into the objective functions. Several researchers suggest dynamic penalties which
incorporate the index of the current iteration of the optimizer [1458, 2072] or adaptive penal-
ties which additionally utilize population statistics [237, 1161, 2487]. Thorough discussions
on penalty functions have been contributed by Fiacco and McCormick [908] and Smith and
Coit [2526].
3.4.4 Constraints as Additional Objectives
Another idea for handling constraints is to consider them as new objective functions. If g(x) ≥ 0 must hold, for instance, we can transform this to a new objective function which is subject to minimization:

f_≤(x) = −min{g(x), 0}   (3.27)

As long as g is larger than 0, the constraint is met. f_≤ will be zero for all candidate solutions which fulfill this requirement, since there is no use in maximizing g further than 0. However, if g gets negative, the minimum term in f_≤ will become negative as well. The minus in front of it will turn it positive. The higher the absolute value of the negative g, the higher the value of f_≤ will become. Since it is subject to minimization, we thus apply a force in the direction of meeting the constraint. No optimization pressure is applied as long as the constraint is met. An equality constraint h can similarly be transformed to a function f_=(x) (again subject to minimization):

f_=(x) = max{|h(x)|, ε}   (3.28)

f_=(x) minimizes the absolute value of h. If h is continuous and real, it may arbitrarily closely approach 0 but never actually reach it. Then, defining a small cut-off value ε (for instance ε = 10^−6) may help the optimizer to satisfy the constraint. An approach similar to the idea of interpreting constraints as objectives is Deb's Goal Programming method [736, 739]. Constraint violations and objective values are ranked together in a part of the MOEA by Ray et al. [2273] [749].
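The two transformations of Equation 3.27 and Equation 3.28 can be sketched as follows (hypothetical helper names):

```java
public class ConstraintObjectives {
    /**
     * Equation 3.27: zero while g(x) >= 0 (no optimization pressure),
     * growing linearly with the violation once g(x) drops below 0.
     */
    public static double inequalityObjective(double g) {
        return -Math.min(g, 0.0);
    }

    /**
     * Equation 3.28: |h(x)| floored at a small cut-off eps, so that
     * near-zero violations are treated as equally satisfied.
     */
    public static double equalityObjective(double h, double eps) {
        return Math.max(Math.abs(h), eps);
    }
}
```

Both derived objectives are minimized alongside the original ones, e.g., within a Pareto-based scheme.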
3.4.5 The Method Of Inequalities
General inequality constraints can also be processed according to the Method of Inequalities (MOI) introduced by Zakian [3050, 3052, 3054] in his seminal work on computer-aided control systems design (CACSD) [2404, 2924, 3070]. In the MOI, an area of interest is specified in form of a goal range [ř_i, r̂_i] for each objective function f_i.
Pohlheim [2190] outlines how this approach can be applied to the set f of objective
functions and combined with Pareto optimization: Based on the inequalities, three categories
of candidate solutions can be dened and each element x X belongs to one of them:
1. It fullls all of the goals, i. e.,
r
i
f
i
(x)

r
i
i 1.. [f [ (3.29)
2. It fullls some (but not all) of the goals, i. e.,
(i 1.. [f [ : r
i
f
i
(x)

r
i
) (j 1.. [f [ : (f
j
(x) < r
j
) (f
j
(x) >

r
j
)) (3.30)
3. It fullls none of the goals, i. e.,
(f
i
(x) < r
i
) (f
i
(x) >

r
i
) i 1.. [f [ (3.31)
Using these groups, a new comparison mechanism is created:
1. The candidate solutions which fulfill all goals are preferred over all other individuals that fulfill only some or none of the goals.
2. The candidate solutions which are not able to fulfill any of the goals succumb to those which fulfill at least some goals.
3. Only the solutions that are in the same group are compared on the basis of the Pareto domination relation.
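A minimal sketch of the class assignment and the group-first comparison might look as follows. All names are ours, both objectives are assumed to be subject to minimization, and plain Pareto domination stands in for the general domination relation of the book:

```java
/** Sketch of the Pareto-based Method of Inequalities (all objectives minimized). */
public class Moi {

  /** Returns the MOI class: 1 = all goals met, 2 = some, 3 = none. */
  static int moiClass(double[] f, double[] lo, double[] hi) {
    int met = 0;
    for (int i = 0; i < f.length; i++) {
      if (f[i] >= lo[i] && f[i] <= hi[i]) met++; // inside goal range [lo_i, hi_i]
    }
    if (met == f.length) return 1;
    return met > 0 ? 2 : 3;
  }

  /** Negative if a is better than b: first by MOI class, then by Pareto domination. */
  static int compare(double[] a, double[] b, double[] lo, double[] hi) {
    int ca = moiClass(a, lo, hi), cb = moiClass(b, lo, hi);
    if (ca != cb) return Integer.compare(ca, cb); // the lower class is preferred
    boolean aNotWorse = true, bNotWorse = true;
    for (int i = 0; i < a.length; i++) {
      if (a[i] > b[i]) aNotWorse = false;
      if (b[i] > a[i]) bNotWorse = false;
    }
    if (aNotWorse && !bNotWorse) return -1; // a dominates b
    if (bNotWorse && !aNotWorse) return 1;  // b dominates a
    return 0;
  }
}
```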
By doing so, the optimization process will be driven into the direction of the interesting part of the Pareto frontier. Less effort will be spent on creating and evaluating individuals in parts of the problem space that most probably do not contain any valid solution.

Example E3.21 (Example E3.6 Cont.: MOI).
In Figure 3.12, we apply the Pareto-based Method of Inequalities to the graphical Example E3.6. We impose the same goal ranges on both objectives, ř_1 = ř_2 and r̂_1 = r̂_2. By doing so, the second non-dominated region from the Pareto example Figure 3.10 suddenly becomes infeasible, since f_1 rises over r̂_1. Also, the greater part of the first optimal area from this example is now infeasible because f_2 drops under ř_2. In the whole domain X of the optimization problem, only the regions [x_1, x_2] and [x_3, x_4] fulfill all the target criteria. To these elements, Pareto comparisons are applied. It turns out that the elements in [x_3, x_4] dominate all the elements in [x_1, x_2] since they provide higher values in f_1 for the same values in f_2. If we scan through [x_3, x_4] from left to right, we see that f_1 rises while f_2 degenerates, which is why the elements in this area cannot dominate each other and hence, are all optimal.
[Figure 3.12 shows the two objective functions y = f_1(x) and y = f_2(x) over x ∈ X, the common goal range [ř_1, r̂_1] = [ř_2, r̂_2], and the feasible regions [x_1, x_2] and [x_3, x_4].]
Figure 3.12: Optimization using the Pareto-based Method of Inequalities approach (first example).
Example E3.22 (Example E3.7 Cont.: MOI).
In Figure 3.13 we apply the Pareto-based Method of Inequalities to our second graphical example (Example E3.7). For illustration purposes, we define two different ranges of interest [ř_3, r̂_3] and [ř_4, r̂_4] on f_3 and f_4, as sketched in Fig. 3.13.a.
Like we did in the second example for Pareto optimization, we want to plot the quality of the elements in the problem space. Therefore, we first assign a number c ∈ {1, 2, 3} to each of its elements in Fig. 3.13.b. This number corresponds to the class to which an element belongs, i. e., 1 means that a candidate solution fulfills all inequalities, for an element of class 2, at least some of the constraints hold, and the elements in class 3 fail all requirements. Based on this class division, we can then perform a modified Pareto counting where each candidate solution dominates all the elements in higher classes (Fig. 3.13.c). The result is that multiple single optima x*_1, x*_2, x*_3, etc., and even a set of adjacent, non-dominated elements X*_9 occur. These elements are, again, situated at the bottom of the illustrated landscape whereas the worst candidate solutions reside on hill tops.
[Fig. 3.13.a: The ranges applied to f_3 and f_4. Fig. 3.13.b: The Pareto-based Method of Inequalities class division. Fig. 3.13.c: The Pareto-based Method of Inequalities ranking.]
Figure 3.13: Optimization using the Pareto-based Method of Inequalities approach (second example).
A good overview on techniques for the Method of Inequalities is given by Whidborne et al.
[2924].
3.4.6 Constrained-Domination
The idea of constrained-domination is very similar to the Method of Inequalities but natively applies to optimization problems as specified in Definition D3.17. Deb et al. [749] define this principle, which can easily be incorporated into the dominance comparison between any two candidate solutions. Here, a candidate solution x_1 ∈ X is preferred in comparison to an element x_2 ∈ X [547, 749, 1297] if
1. x_1 is feasible while x_2 is not,
2. x_1 and x_2 both are infeasible but x_1 has a smaller overall constraint violation, or
3. x_1 and x_2 are both feasible but x_1 dominates x_2.
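The three rules translate directly into a comparator. The sketch below assumes that a candidate solution is summarized by its total constraint violation (0 meaning feasible) and a vector of objective values to be minimized; class and method names are ours, not Deb et al.'s:

```java
/** Sketch of constrained-domination: negative return = first argument preferred. */
public class ConstrainedDomination {

  static int compare(double violA, double[] fA, double violB, double[] fB) {
    boolean feasA = violA <= 0.0, feasB = violB <= 0.0;
    if (feasA != feasB) return feasA ? -1 : 1;       // rule 1: the feasible one wins
    if (!feasA) return Double.compare(violA, violB); // rule 2: smaller violation wins
    boolean aNotWorse = true, bNotWorse = true;      // rule 3: Pareto domination
    for (int i = 0; i < fA.length; i++) {
      if (fA[i] > fB[i]) aNotWorse = false;
      if (fB[i] > fA[i]) bNotWorse = false;
    }
    if (aNotWorse && !bNotWorse) return -1;
    if (bNotWorse && !aNotWorse) return 1;
    return 0;
  }
}
```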
3.4.7 Limitations and Other Methods
Other approaches for incorporating constraints into optimization are Goal Attainment [951, 2951] and Goal Programming⁷ [530, 531]. Especially interesting in our context are methods which have been integrated into Evolutionary Algorithms [736, 739, 2190, 2393, 2662], such as the popular Goal Attainment approach by Fonseca and Fleming [951], which is similar to the Pareto-MOI we have adopted from Pohlheim [2190]. Again, an overview on this subject is given by Coello Coello et al. in [599].
3.5 Unifying Approaches
3.5.1 External Decision Maker
All approaches for defining what optima are and how constraints should be considered until now were rather specific and bound to certain mathematical constructs. The more general concept of an External Decision Maker which (or who) decides which candidate solutions prevail has been introduced by Fonseca and Fleming [952, 954].
One of the ideas behind externalizing the assessment process on what is good and what is bad is that Pareto optimization as well as, for instance, the Method of Inequalities, impose rigid, immutable partial orders⁸ on the candidate solutions. The first drawback of such a partial order is that elements may exist which neither succeed nor precede each other. As we have seen in the examples discussed in Section 3.3.5, there can, for instance, be two candidate solutions x_1, x_2 ∈ X with neither x_1 ≺ x_2 nor x_2 ≺ x_1. Since there can be arbitrarily many such elements, the optimization process may run into a situation where its set X of currently discovered best candidate solutions grows quickly, which leads to high memory consumption and declining search speed.
The second drawback of only using rigid partial orders is that they make no statement about the quality relations within X. There is no best of the best. We cannot decide which of two candidate solutions not dominating each other is the most interesting one for further investigation. Also, the plain Pareto relation does not allow us to dispose of the weakest element from X in order to save memory, since all elements in X are alike.
⁷ http://en.wikipedia.org/wiki/Goal_programming [accessed 2007-07-03]
⁸ A definition of partial order relations is specified in Definition D51.16 on page 645.
One example for introducing a quality measure additional to the Pareto relation is the Pareto ranking which we already used in Example E3.19 and which we will discuss later in Section 28.3.3 on page 275. Here, the utility of a candidate solution is defined by the number of other candidate solutions it is dominated by. Into this number, additional density information can be incorporated which deems areas of X which have only sparsely been sampled as more interesting than those which have already extensively been investigated.
While such ordering methods are good default approaches able to direct the search into the direction of the Pareto frontier and to deliver a broad scan of it, they neglect the fact that the user of the optimization most often is not interested in the whole optimal set but has preferences for certain regions of interest [955]. This region would exclude, for example, the infeasible (but Pareto optimal) programs for the Artificial Ant as discussed in Section 3.3.5.1. What the user wants is a detailed scan of these areas, which often cannot be delivered by pure Pareto optimization.
[Figure 3.14 sketches the interplay: the EA (an optimizer) passes objective values (acquired knowledge) to the DM (decision maker), which combines them with a priori knowledge and returns utility/cost values; the EA delivers the results.]
Figure 3.14: An external decision maker providing an Evolutionary Algorithm with utility values.
Here the External Decision Maker comes into play as an expression of the user's preferences [949], as illustrated in Figure 3.14. The task of this decision maker is to provide a cost function u : ℝ^n → ℝ (or utility function, if the underlying optimizer is maximizing) which maps the space of objective values (ℝ^n) to the real numbers ℝ. Since there is a total order defined on the real numbers, this process is another way of resolving the incomparability situation. The structure of the decision making process u can freely be defined and may incorporate any of the previously mentioned methods. u could, for example, be reduced to computing a weighted sum of the objective values, to performing an implicit Pareto ranking, or to comparing individuals based on pre-specified goal vectors. Furthermore, it may even incorporate forms of artificial intelligence, other forms of multi-criterion Decision Making, and even interaction with the user (similar to Example E2.6 on page 48, for instance). This technique allows focusing the search onto solutions which are not only good in the Pareto sense, but also feasible and interesting from the viewpoint of the user.
Fonseca and Fleming make a clear distinction between fitness and cost values. Cost values have some meaning outside the optimization process and are based on user preferences. Fitness values on the other hand are an internal construct of the search with no meaning outside the optimizer (see Definition D5.1 on page 94 for more details). If External Decision Makers are applied in Evolutionary Algorithms or other search paradigms based on fitness measures, these will be computed using the values of the cost function instead of the objective functions [949, 950, 956].
3.5.2 Prevalence Optimization
We have now discussed various ways to define what optima in terms of multi-objective optimization are and to steer the search process into their direction. Let us subsume all of them in a general approach. From the concept of Pareto optimization to the Method of Inequalities, the need to compare elements of the problem space in terms of their quality as solutions for a given problem winds like a red thread through this matter. Even the weighted sum approach and the External Decision Maker do nothing else than map multi-dimensional vectors to the real numbers in order to make them comparable.
If we compare two candidate solutions x_1 and x_2, either x_1 is better than x_2, vice versa, or the result is undecidable. In the latter case, we assume that both are of equal quality. Hence, there are three possible relations between two elements of the problem space. These results can be expressed with a comparator function cmp_f.
Definition D3.18 (Comparator Function). A comparator function cmp : A² → ℝ maps all pairs of elements (a_1, a_2) ∈ A² to the real numbers ℝ according to a (strict) partial order⁹ R:

R(a_1, a_2) ⇒ (cmp(a_1, a_2) < 0) ∧ (cmp(a_2, a_1) > 0) ∀a_1, a_2 ∈ A   (3.32)
¬R(a_1, a_2) ∧ ¬R(a_2, a_1) ⇒ cmp(a_1, a_2) = cmp(a_2, a_1) = 0 ∀a_1, a_2 ∈ A   (3.33)
R(a_1, a_2) ⇒ ¬R(a_2, a_1) ∀a_1, a_2 ∈ A   (3.34)
cmp(a, a) = 0 ∀a ∈ A   (3.35)
We provide a Java interface for the functionality of a comparator function in Listing 55.15 on page 719. From the defining equations, many features of cmp can be deduced. It is, for instance, transitive, i. e., cmp(a_1, a_2) < 0 ∧ cmp(a_2, a_3) < 0 ⇒ cmp(a_1, a_3) < 0. Provided with the knowledge of the objective functions f ∈ f, such a comparator function cmp_f can be imposed on the problem spaces of our optimization problems:
Definition D3.19 (Prevalence Comparator Function). A prevalence comparator function cmp_f : X² → ℝ maps all pairs (x_1, x_2) ∈ X² of candidate solutions to the real numbers ℝ according to Definition D3.18.

The subscript f in cmp_f illustrates that the comparator has access to all the values of the objective functions in addition to the problem space elements which are its parameters. As a shortcut for this comparator function, we introduce the prevalence notation as follows:

Definition D3.20 (Prevalence). An element x_1 prevails over an element x_2 (x_1 ≺ x_2) if the application-dependent prevalence comparator function cmp_f(x_1, x_2) ∈ ℝ returns a value less than 0.

x_1 ≺ x_2 ⇔ cmp_f(x_1, x_2) < 0 ∀x_1, x_2 ∈ X   (3.36)
(x_1 ≺ x_2) ∧ (x_2 ≺ x_3) ⇒ x_1 ≺ x_3 ∀x_1, x_2, x_3 ∈ X   (3.37)
It is easy to see that we can fit lexicographic orderings, Pareto domination relations, the Method of Inequalities-based comparisons, as well as the weighted sum combination of objective values easily into this notation. Together with the fitness assignment strategies for Evolutionary Algorithms which will be introduced later in this book (see Section 28.3 on page 274), it covers many of the most sophisticated multi-objective techniques that are proposed, for instance, in [952, 1531, 2662]. By replacing the Pareto approach with prevalence comparisons, all the optimization algorithms (especially many of the evolutionary techniques) relying on domination relations can be used in their original form while offering the additional ability of scanning special regions of interest of the optimal frontier.
⁹ Partial orders are introduced in Definition D51.15 on page 645.
Since the comparator function cmp_f and the prevalence relation impose a partial order on the problem space X like the domination relation does, we can construct the optimal set X* containing globally optimal elements x* in a way very similar to Equation 3.23:

X* = {x* ∈ X : ¬∃x ∈ X : (x ≠ x*) ∧ (x ≺ x*)}   (3.38)
For illustration purposes, we will exercise the prevalence approach on the examples of lexicographic optimality (see Section 3.3.2) and define the comparator function cmp_{f,lex}. For the weighted sum method with the weights w_i (see Section 3.3.3) we similarly provide cmp_{f,ws}, and the domination-based Pareto optimization with the objective directions as defined in Section 3.3.5 can be realized with cmp_{f,dom}.

cmp_{f,lex}(x_1, x_2) =
  −1 if ∃i ∈ 1..|f| : (x_1 <_{lex,i} x_2) ∧ ¬∃i ∈ 1..|f| : (x_2 <_{lex,i} x_1)
  +1 if ∃i ∈ 1..|f| : (x_2 <_{lex,i} x_1) ∧ ¬∃i ∈ 1..|f| : (x_1 <_{lex,i} x_2)
   0 otherwise   (3.39)

cmp_{f,ws}(x_1, x_2) = Σ_{i=1}^{|f|} (w_i f_i(x_1) − w_i f_i(x_2)) = ws(x_1) − ws(x_2)   (3.40)

cmp_{f,dom}(x_1, x_2) =
  −1 if x_1 dominates x_2
  +1 if x_2 dominates x_1
   0 otherwise   (3.41)
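As an illustration, cmp_{f,ws} from Equation 3.40 can be written as a small reusable class. This is a sketch with names of our choosing; the book's own comparator interface is the one in Listing 55.15:

```java
import java.util.function.ToDoubleFunction;

/** Sketch of the weighted-sum prevalence comparator cmp_{f,ws} (Eq. 3.40). */
public class WeightedSumComparator<X> {
  private final ToDoubleFunction<X>[] f; // objective functions, all minimized
  private final double[] w;              // their weights

  @SafeVarargs
  public WeightedSumComparator(double[] w, ToDoubleFunction<X>... f) {
    this.w = w;
    this.f = f;
  }

  /** ws(x) = sum_i w_i * f_i(x) */
  double ws(X x) {
    double s = 0.0;
    for (int i = 0; i < f.length; i++) s += w[i] * f[i].applyAsDouble(x);
    return s;
  }

  /** cmp(x1, x2) = ws(x1) - ws(x2): a negative value means x1 prevails. */
  public double compare(X x1, X x2) {
    return ws(x1) - ws(x2);
  }
}
```

A negative return value means that x_1 prevails over x_2, matching Definition D3.20.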
Example E3.23 (Example E3.5 Artificial Ant Cont.).
With the prevalence comparator, we can also easily solve the problem stated in Section 3.3.5.1 by no longer encouraging the evolution of useless programs for Artificial Ants while retaining the benefits of Pareto optimization. The comparator function can simply be defined in a way that infeasible candidate solutions will always be prevailed by useful programs. It therefore may incorporate the knowledge on the importance of the objective functions. Let f_1 be the objective function with an output proportional to the food piled, f_2 denote the distance covered in order to find the food, and f_3 be the program length. Equation 3.42 demonstrates one possible comparator function for the Artificial Ant problem.

cmp_{f,ant}(x_1, x_2) =
  −1 if (f_1(x_1) > 0 ∧ f_1(x_2) = 0) ∨ (f_2(x_1) > 0 ∧ f_2(x_2) = 0) ∨ (f_3(x_1) > 0 ∧ f_1(x_2) = 0)
  +1 if (f_1(x_2) > 0 ∧ f_1(x_1) = 0) ∨ (f_2(x_2) > 0 ∧ f_2(x_1) = 0) ∨ (f_3(x_2) > 0 ∧ f_1(x_1) = 0)
  cmp_{f,dom}(x_1, x_2) otherwise   (3.42)

Later in this book, we will discuss some of the most popular optimization strategies. Although they are usually implemented based on Pareto optimization, we will always introduce them using prevalence.
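Equation 3.42 can be transcribed as follows. This is a sketch: objective values are passed as arrays {f_1, f_2, f_3}, the fallback domination comparator is supplied by the caller, and the mutual usefulOver check guards against both branches of the equation firing at once:

```java
/** Sketch of the Artificial Ant prevalence comparator (Eq. 3.42). */
public class AntComparator {

  /** true if x is "useful" compared to y w.r.t. the first branch of Eq. 3.42. */
  static boolean usefulOver(double[] fx, double[] fy) {
    return (fx[0] > 0 && fy[0] == 0)   // x piles food, y does not
        || (fx[1] > 0 && fy[1] == 0)   // x covers distance, y does not
        || (fx[2] > 0 && fy[0] == 0);  // x has code, y piles no food
  }

  /** Negative: x1 prevails; positive: x2 prevails; otherwise: domination fallback. */
  static int compare(double[] f1, double[] f2, java.util.Comparator<double[]> dom) {
    if (usefulOver(f1, f2) && !usefulOver(f2, f1)) return -1;
    if (usefulOver(f2, f1) && !usefulOver(f1, f2)) return 1;
    return dom.compare(f1, f2);
  }
}
```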
Tasks T3
13. In Example E1.2 on page 26, we stated the layout of circuits as a combinatorial optimization problem. Name at least three criteria which could be subject to optimization or constraints in this scenario.
[6 points]
14. The job shop scheduling problem was outlined in Example E1.3 on page 26. The most basic goal when solving job shop problems is to meet all production deadlines. Would you formulate this goal as a constraint or rather as an objective function? Why?
[4 points]
15. Let us consider the university life of a student as an optimization problem with the goal of obtaining good grades in all the subjects she attends. How could we formulate
(a) a single objective function for that purpose, or
(b) a set of objectives which can be optimized with, for example, the Pareto approach?
Additionally, phrase at least two constraints in daily life which, for example, prevent the student from learning 24 hours per day.
[10 points]
16. Consider a bi-objective optimization problem with X = ℝ as problem space and the two objective functions f_1(x) = e^|x−3| and f_2(x) = |x − 100|, both subject to minimization. What is your opinion about the viability of solving this problem with the weighted sum approach with both weights set to 1?
[5 points]
17. If we perform Pareto optimization of n = |f| objective functions f_i : i ∈ 1..n where each of the objectives can take on m different values, there can be at most m^(n−1) non-dominated candidate solutions at a time. Sketch configurations of m^(n−1) non-dominated candidate solutions in diagrams similar to Figure 28.4 on page 276 for n ∈ {1, 2} and m ∈ {1, 2, 3, 4}. Outline why the sentence also holds for n > 2.
[10 points]
18. In your opinion, is it good if most of the candidate solutions under investigation by a Pareto-based optimization algorithm are non-dominated? For answering this question, consider an optimization process that decides which individual should be used for further investigation solely on the basis of its objective values and, hence, the Pareto dominance relation. Consider the extreme case that all points known to such an algorithm are non-dominated to answer the question.
Based on your answer, make a statement about the expected impact of the fact discussed in Task 17 on the quality of solutions that a Pareto-based optimizer can deliver for rising dimensionality of the objective space, i. e., for increasing values of n = |f|.
[10 points]
19. Use Equation 3.1 to Equation 3.5 from page 52 to determine all extrema (optima) for
(a) f(x) = 3x^5 + 2x^2 − x,
(b) g(x) = 2x^2 + x, and
(c) h(x) = sin x.
[15 points]
Chapter 4
Search Space and Operators: How can we find it?
4.1 The Search Space
In Section 2.2, we gave various examples for optimization problems, ranging from simple real-valued problems (Example E2.1) over antenna design (Example E2.3) to tasks in aircraft development (Example E2.5). Obviously, the problem spaces X of these problems are quite different. A solution of the stone's throw task, for instance, was a real number of meters per second. An antenna can be described by its blueprint, which is structured very differently from the blueprints of aircraft wings or rotors.
Optimization algorithms usually start out with a set of either predefined or automatically generated elements from X which are far from being optimal (otherwise, optimization would not make much sense). From these initial solutions, new elements from X are explored in the hope of improving the solution quality. In order to find these new candidate solutions, search operators are used which modify or combine the elements already known.
Since the problem spaces of the three aforementioned examples are different, this would mean that different sets of search operators are required, too. It would, however, be cumbersome to develop search operations time and again for each new problem space we encounter. Such an approach would not only be error-prone, it would also make it very hard to formulate general laws and to consolidate findings.
Instead, we often reuse well-known search spaces for many different problems. After the problem space is defined, we will always look for ways to map it to such a search space for which search operators are already available. This is not always possible, but for most of the problems we will encounter, it will save a lot of effort.
Example E4.1 (Search Spaces for Example E2.1, E2.3, and E2.5).
The stone's throw as well as many antenna and wing blueprints, for instance, can all be represented as vectors of real numbers. For the stone's throw, these vectors would have length one (Fig. 4.1.a). An antenna consisting of n segments could be encoded as a vector of length 3n, where the element at index i specifies a length and i + 1 and i + 2 are angles denoting into which direction the segment is bent, as sketched in Fig. 4.1.b. The aircraft wing could be represented as a vector of length 3n, too, where the element at index i is the segment width and i + 1 and i + 2 denote its disposition and height (Fig. 4.1.c).
[Figure 4.1 sketches the three real-coded genotypes g = (g_0, g_1, g_2, . . .): Fig. 4.1.a: Stone's throw. Fig. 4.1.b: Antenna design. Fig. 4.1.c: Wing design.]
Figure 4.1: Examples for real-coding candidate solutions.
Definition D4.1 (Search Space). The search space G (genome) of an optimization problem is the set of all elements g which can be processed by the search operations.

In dependence on Genetic Algorithms, we often refer to the search space synonymously as genome¹. Genetic Algorithms (see Chapter 29) are optimization algorithms inspired by biological evolution. The term genome was coined by the German biologist Winkler [2958] as a portmanteau of the words gene and chromosome [1701].² In biology, the genome is the whole hereditary information of organisms and the genotype represents the hereditary information of an individual. Since in the context of optimization we refer to the search space as genome, we call its elements genotypes.

Definition D4.2 (Genotype). The elements g ∈ G of the search space G of a given optimization problem are called genotypes.

As said, a biological genotype encodes the complete hereditary information of an individual. Likewise, a genotype in optimization encodes a complete candidate solution. If G ≠ X, we distinguish genotypes g ∈ G and phenotypes x ∈ X. In Example E4.1, the genomes all are vectors of real numbers, but the problem spaces are either a velocity, an antenna design, or a blueprint of a wing.
The elements of the search space rarely are unstructured aggregations. Instead, they often consist of distinguishable parts, hierarchical units, or well-typed data structures. The same goes for the chromosomes in biology. They consist of genes³, segments of nucleic acid, which contain the information necessary to produce RNA strings in a controlled manner and encode phenotypical or metabolical properties. A fish, for instance, may have a gene for the color of its scales. This gene could have two possible values called alleles⁴, determining whether the scales will be brown or gray. The Genetic Algorithm community has adopted this notation long ago and we use it for specifying the detailed structure of the genotypes.

Definition D4.3 (Gene). The distinguishable units of information in genotypes g ∈ G which encode properties of the candidate solutions x ∈ X are called genes.
¹ http://en.wikipedia.org/wiki/Genome [accessed 2007-07-15]
² The words gene, genotype, and phenotype have, in turn, been introduced [2957] by the Danish biologist Johannsen [1448].
³ http://en.wikipedia.org/wiki/Gene [accessed 2007-07-03]
⁴ http://en.wikipedia.org/wiki/Allele [accessed 2007-07-03]
Definition D4.4 (Allele). An allele is a value of a specific gene.

Definition D4.5 (Locus). The locus⁵ is the position where a specific gene can be found in a genotype [634].
Example E4.2 (Small Binary Genome).
Genetic Algorithms are a widely-researched optimization approach which, in its original form, utilizes bit strings as genomes. Assume that we wished to apply such a plain GA to the minimization of a function f : (0..3 × 0..3) → ℝ such as the trivial f(x = (x[0], x[1])) = x[0] − x[1] with x[0] ∈ X_0, x[1] ∈ X_1.

[Figure 4.2 sketches the relation of the genome G (the set of four-bit strings), a genotype g ∈ G with its two two-bit genes (e.g., allele "01" at locus 0, the 1st gene, and allele "11" at locus 1, the 2nd gene), the genotype-phenotype mapping x = gpm(g), and the phenome X = X_0 × X_1 with the phenotype x ∈ X.]
Figure 4.2: The relation of genome, genes, and the problem space.

Figure 4.2 illustrates how this can be achieved by using two bits for each of x[0] and x[1]. The first gene resides at locus 0 and encodes x[0] whereas the two bits at locus 1 stand for x[1]. Four bits suffice for encoding the two natural numbers in X_0 = X_1 = 0..3 and thus, for the whole candidate solution (phenotype) x. With this trivial encoding, the Genetic Algorithm can be applied to the problem and will quickly find the solution x = (0, 3).
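The encoding of this example can be sketched as a tiny genotype-phenotype mapping. The class and method names are ours; the book's general GPM interface is the one in Listing 55.7:

```java
/** Sketch: decoding the 4-bit genotype of Example E4.2 into x = (x[0], x[1]). */
public class TinyGpm {

  /** Two bits per gene: locus 0 -> x[0], locus 1 -> x[1]. */
  static int[] gpm(boolean[] g) {
    return new int[] { 2 * (g[0] ? 1 : 0) + (g[1] ? 1 : 0),
                       2 * (g[2] ? 1 : 0) + (g[3] ? 1 : 0) };
  }

  /** The trivial objective f(x) = x[0] - x[1], subject to minimization. */
  static int f(int[] x) {
    return x[0] - x[1];
  }
}
```

The genotype 0011 then decodes to the optimal phenotype (0, 3) with f = −3.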
4.2 The Search Operations
Definition D4.6 (Search Operation). A search operation searchOp takes a fixed number n ∈ ℕ_0 of elements g_1..g_n from the search space G as parameters and returns another element from the search space.

searchOp : G^n → G   (4.1)

Search operations are used by an optimization algorithm in order to explore the search space. n is an operator-specific value, its arity⁶. An operation with n = 0, i. e., a nullary operator, receives no parameters and may, for instance, create randomly configured genotypes (see, for instance, Listing 55.2 on page 708). An operator with n = 1 could randomly modify its parameter; the mutation operator in Evolutionary Algorithms is an example of this behavior.
⁵ http://en.wikipedia.org/wiki/Locus_%28genetics%29 [accessed 2007-07-03]
⁶ http://en.wikipedia.org/wiki/Arity [accessed 2008-02-15]
The key idea of all search operations which take at least one parameter (n > 0) (see Listing 55.3) is that
1. it is possible to take a genotype g of a candidate solution x = gpm(g) with good objective values f(x),
2. to modify this genotype g to get a new one g′ = searchOp(g, . . .), which
3. can be translated to a phenotype x′ = gpm(g′) with similar or maybe even better objective values f(x′).
This assumption is one of the most basic concepts of iterative optimization algorithms. It is known as causality or locality and discussed in-depth in Section 14.2 on page 161 and Section 24.9 on page 218. If it does not hold, then applying a small change to a genotype leads to large changes in its utility. Instead of modifying a genotype, we then could as well create a new, random one directly.
Many operations which have n > 1 try to combine the genotypes they receive in the input. This is done in the hope that positive characteristics of them can be joined into an even better offspring genotype. An example for such operations is the crossover operator in Genetic Algorithms. Differential Evolution utilizes an operator with n = 3, i. e., one which takes three arguments. Here, the idea often is that good characteristics can step by step be accumulated and united by an optimization process. This Building Block Hypothesis is discussed in detail in Section 29.5.5 on page 346, where also criticism of this point of view is provided.
Search operations often are randomized, i. e., they are not functions in the mathematical sense. Instead, the return values of two successive invocations of one search operation with exactly the same parameters may be different. Since optimization processes quite often use more than a single search operation at a time, we define the set Op.

Definition D4.7 (Search Operation Set). Op is the set of search operations searchOp ∈ Op which are available to an optimization process. Op(g_1, g_2, .., g_n) denotes the application of any n-ary search operation in Op to the genotypes g_1, g_2, .., g_n ∈ G.

∀g_1, g_2, .., g_n ∈ G :
  Op(g_1, g_2, .., g_n) ∈ G if an n-ary operation is in Op
  Op(g_1, g_2, .., g_n) = ∅ otherwise   (4.2)

It should be noted that, if the n-ary search operation from Op which was applied is a randomized operator, obviously the behavior of Op(. . .) is randomized as well. Furthermore, if more than one n-ary operation is contained in Op, the optimization algorithm using Op usually performs some form of decision on which operation to apply. This decision may either be random or follow a specific schedule, and the decision making process may even change during the course of the optimization process. Writing Op(g) in order to denote the application of a unary search operation from Op to the genotype g ∈ G is thus a simplified expression which encompasses both possible randomness in the search operations and in the decision process on which operation to apply.
With Op^k(g_1, g_2, . . .) we denote k successive applications of (possibly different) search operators (with respect to their arities). The k-th application receives all possible results of the (k−1)-th application of the search operators as parameters, i. e., is defined recursively as

Op^k(g_1, g_2, . . .) = ∪_{G ∈ Π(P(Op^{k−1}(g_1, g_2, . . .)))} Op(G)   (4.3)

where Op^0(g_1, g_2, . . .) = {g_1, g_2, . . .}, P(G) denotes the power set of a set G, and Π(A) the set of all possible permutations of the elements of A.
If the parameter g is left away, i. e., just Op^k is written, this chain has to start with a search operation with zero arity. In the style of Badea and Stanciu [177] and Skubch [2516, 2517], we now can define:
Definition D4.8 (Completeness). A set Op of search operations searchOp is complete if and only if every point g_1 in the search space G can be reached from every other point g_2 ∈ G by applying only operations searchOp ∈ Op.

∀g_1, g_2 ∈ G ∃k ∈ ℕ_1 : g_1 ∈ Op^k(g_2)   (4.4)

Definition D4.9 (Weak Completeness). A set Op of search operations searchOp is weakly complete if every point g in the search space G can be reached by applying only operations searchOp ∈ Op.

∀g ∈ G ∃k ∈ ℕ_1 : g ∈ Op^k   (4.5)
A weakly complete set of search operations usually contains at least one operation without parameters, i. e., of zero arity. If at least one such operation exists and this operation can potentially return all possible genotypes, then the set of search operations is already weakly complete. However, nullary operations are often only used in the beginning of the search process (see, for instance, the Hill Climbing Algorithm 26.1 on page 230). If Op achieved its weak completeness because of a nullary search operation, the initialization of the algorithm would determine which solutions can be found and which not. If the set of search operations is strongly complete, the search process, in theory, can find any solution regardless of the initialization.
From the opposite point of view, this means that if the set of search operations is not complete, there are elements in the search space which cannot be reached from a random starting point. If it is not even weakly complete, parts of the search space can never be discovered. Then, the optimization process is probably not able to explore the problem space adequately and may not find optimal or satisfyingly good solutions.
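For a finite search space and a deterministic view of the operator set (here: the set of all single-bit flips), completeness can even be checked mechanically. The following sketch, our own illustration rather than one of the book's listings, verifies by breadth-first search that single-bit-flip mutation is complete on the space of n-bit strings:

```java
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Set;

/** Sketch: checking completeness of single-bit-flip mutation on n-bit strings. */
public class CompletenessCheck {

  /** All genotypes reachable from start by repeatedly flipping one bit. */
  static Set<Integer> reachable(int start, int nBits) {
    Set<Integer> seen = new HashSet<>();
    ArrayDeque<Integer> queue = new ArrayDeque<>();
    seen.add(start);
    queue.add(start);
    while (!queue.isEmpty()) {
      int g = queue.poll();
      for (int i = 0; i < nBits; i++) {   // apply every possible 1-bit flip
        int g2 = g ^ (1 << i);
        if (seen.add(g2)) queue.add(g2);
      }
    }
    return seen;
  }

  /** Complete: every point reaches every other point (Eq. 4.4). */
  static boolean isComplete(int nBits) {
    int size = 1 << nBits;
    for (int g = 0; g < size; g++) {
      if (reachable(g, nBits).size() != size) return false;
    }
    return true;
  }
}
```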
Definition D4.10 (Adjacency (Search Space)). A point g_2 is ε-adjacent to a point g_1 in the search space G if it can be reached by applying a single search operation searchOp to g_1 with a probability of more than ε ∈ [0, 1). Notice that the adjacency relation is not necessarily symmetric.

adjacent_ε(g_2, g_1) ⇔ P(Op(g_1) = g_2) > ε   (4.6)

The probability parameter ε is defined here in order to allow us to differentiate between probable and improbable transitions in the search space. Normally, we will use the adjacency relation with ε = 0 and abbreviate the notation with adjacent(g_2, g_1) ⇔ adjacent_0(g_2, g_1).
In the case of Genetic Algorithms with, for instance, single-bit flip mutation operators (see Section 29.3.2.1 on page 330), adjacent(g2, g1) would be true for all bit strings g2 which have a Hamming distance of no more than one from g1, i.e., which differ by at most one bit from g1. However, if we assume a real-valued search space G = R and a mutation operator which adds a normally distributed random value to a genotype, then adjacent(g2, g1) would always be true, regardless of their distance. The adjacency relation with ε = 0 thus has no meaning in this context. If we set ε to a very small number such as 1·10^-4, we can get a meaningful impression of how likely it is to discover a point g2 when starting from g1. Furthermore, this definition allows us to reason about limits such as lim_{ε→0} adjacent_ε(g2, g1).
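For operators whose transition probabilities are known in closed form, Equation 4.6 can be evaluated exactly. The sketch below is our own illustration (not book code) for a uniform single-bit flip operator on n-bit strings, where P(Op(g1) = g2) = 1/n if the Hamming distance of g1 and g2 is exactly 1, and 0 otherwise:

```java
// Illustrative sketch: epsilon-adjacency under a uniform single-bit flip
// operator on n-bit strings encoded as integers.
public class Adjacency {

    // Exact transition probability: 1/n for Hamming distance 1, else 0.
    public static double transitionProb(int g1, int g2, int n) {
        return Integer.bitCount(g1 ^ g2) == 1 ? 1.0 / n : 0.0;
    }

    // adjacent_eps(g2, g1)  <=>  P(Op(g1) = g2) > eps   (Equation 4.6)
    public static boolean adjacent(double eps, int g2, int g1, int n) {
        return transitionProb(g1, g2, n) > eps;
    }
}
```

With ε = 0 this recovers the Hamming-distance-1 neighborhood discussed above; with ε ≥ 1/n, no point is adjacent to any other anymore.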
4.3 The Connection between Search and Problem Space
If the search space differs from the problem space, a translation between them is furthermore required. In Example E4.2, we would need to transform the binary strings processed by the Genetic Algorithm into tuples of natural numbers which can be processed by the objective functions.
86 4 SEARCH SPACE AND OPERATORS: HOW CAN WE FIND IT?
Definition D4.11 (Genotype-Phenotype Mapping). The genotype-phenotype mapping (GPM, or ontogenic mapping [2134]) gpm : G ↦ X is a left-total⁷ binary relation which maps the elements of the search space G to elements in the problem space X. In Section 28.2, you can find a thorough discussion of different types of genotype-phenotype mappings used in Evolutionary Algorithms.

∀g ∈ G ⇒ ∃x ∈ X : gpm(g) = x   (4.7)

In Listing 55.7 on page 711, you can find a Java interface which resembles Equation 4.7. The only hard criterion we impose on genotype-phenotype mappings in this book is left-totality, i.e., that they map each element of the search space to at least one candidate solution. The GPM may be a functional relation if it is deterministic. Nevertheless, it is possible to create mappings which involve random numbers and hence cannot be considered to be functions in the mathematical sense of Section 51.10 on page 646. In this case, we would rewrite Equation 4.7 as Equation 4.8:

∀g ∈ G ⇒ ∃x ∈ X : P(gpm(g) = x) > 0   (4.8)
Genotype-phenotype mappings should further be surjective [2247], i.e., relate at least one genotype to each element of the problem space. Otherwise, some candidate solutions can never be found and evaluated by the optimization algorithm, even if the search operations are complete. Then, there is no guarantee whether the solution of a given problem can be discovered or not. If a genotype-phenotype mapping is injective, which means that it assigns distinct phenotypes to distinct elements of the search space, we say that it is free from redundancy. There are different forms of redundancy; some are considered to be harmful for the optimization process, others have positive influence⁸. Most often, GPMs are not bijective (since they are neither necessarily injective nor surjective). Nevertheless, if a genotype-phenotype mapping is bijective, we can construct an inverse mapping gpm^-1 : X ↦ G.

gpm^-1(x) = g ⇔ gpm(g) = x   ∀x ∈ X, g ∈ G   (4.9)
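On small enumerable spaces, surjectivity and redundancy (non-injectivity) of a deterministic GPM can be verified by exhaustively computing its image. The helper below is a sketch of ours; the method names are not those of Chapter 55, and genotypes are assumed to be the integers 0..gSize−1:

```java
import java.util.*;
import java.util.function.IntUnaryOperator;

// Illustrative checks for properties of a deterministic GPM on enumerable spaces.
public class GpmProperties {

    // Surjective: the image of G under gpm covers the whole problem space X.
    public static boolean isSurjective(IntUnaryOperator gpm, int gSize, Set<Integer> X) {
        Set<Integer> image = new HashSet<>();
        for (int g = 0; g < gSize; g++) image.add(gpm.applyAsInt(g));
        return image.equals(X);
    }

    // Injective (free from redundancy): no two genotypes share a phenotype.
    public static boolean isInjective(IntUnaryOperator gpm, int gSize) {
        Set<Integer> image = new HashSet<>();
        for (int g = 0; g < gSize; g++)
            if (!image.add(gpm.applyAsInt(g))) return false; // redundancy found
        return true;
    }
}
```

For instance, gpm(g) = g mod 3 from G = {0, ..., 7} onto X = {0, 1, 2} is surjective but redundant, so no inverse mapping gpm^-1 exists for it.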
Based on the genotype-phenotype mapping and the definition of adjacency in the search space given earlier in this section, we can also define a neighboring relation for the problem space, which, of course, is also not necessarily symmetric.
Definition D4.12 (Neighboring (Problem Space)). Under the condition that the candidate solution x1 ∈ X was obtained from the element g1 ∈ G by applying the genotype-phenotype mapping, i.e., x1 = gpm(g1), a point x2 is ε-neighboring to x1 if it can be reached by applying a single search operation and a subsequent genotype-phenotype mapping to g1 with at least a probability of ε ∈ [0, 1). Equation 4.10 provides a general definition of the neighboring relation. If the genotype-phenotype mapping is deterministic (which usually is the case), this definition boils down to Equation 4.11.

neighboring_ε(x2, x1) ⇔ [ Σ_{g2 : adjacent_0(g2, g1)} P(Op(g1) = g2) · P(x2 = gpm(g2) | Op(g1) = g2) ] > ε   with x1 = gpm(g1)   (4.10)

neighboring_ε(x2, x1) ⇔ [ Σ_{g2 : adjacent_0(g2, g1) ∧ x2 = gpm(g2)} P(Op(g1) = g2) ] > ε   with x1 = gpm(g1)   (4.11)

⁷ See Equation 51.29 on page 644 to Point 5 for an outline of the properties of binary relations.
⁸ See Section 16.5 on page 174 for more information.
The condition x1 = gpm(g1) is relevant if the genotype-phenotype mapping gpm is not injective (see Equation 51.31 on page 644), i.e., if more than one genotype can map to the same phenotype. Then, the set of ε-neighbors of a candidate solution x1 depends on how it was discovered, i.e., on the genotype g1 it was obtained from, as can be seen in Example E4.3. Example E5.1 on page 96 gives an account of how the search operations and, thus, the definition of neighborhood, influence the performance of an optimization algorithm.
Example E4.3 (Neighborhoods under Non-Injective GPMs).

[Figure 4.3: Neighborhoods under non-injective genotype-phenotype mappings. The figure sketches the search space G = B^3 = {0, 1}^3 (the bit strings of length 3), the problem space X = {0, 1, 2} (the natural numbers 0, 1, and 2), and the genotype-phenotype mapping gpm(g) = (g1 + g2) · g3, with the genotypes gA = 010 and gB = 110 both mapping to 0.]

As an example for neighborhoods under non-injective genotype-phenotype mappings, let
us imagine that we have a very simple optimization problem where the problem space
X only consists of the three numbers 0, 1, and 2, as sketched in Figure 4.3. Imagine
further that we would use the bit strings of length three as search space, i.e., G = B^3 = {0, 1}^3. As genotype-phenotype mapping gpm : G ↦ X, we would apply the function x = gpm(g) = (g1 + g2) · g3. The two genotypes gA = 010 and gB = 110 both map to 0, since (0 + 1) · 0 = (1 + 1) · 0 = 0.
If the only search operation available were a single-bit flip mutation, then all genotypes which have a Hamming distance of 1 would be adjacent in the search space. This means that gA = 010 is adjacent to 110, 000, and 011, while gB = 110 is adjacent to 010, 100, and 111.
The candidate solution x = 0 is neighboring to 0 and 1 if it was obtained by applying the genotype-phenotype mapping to gA = 010: with one search step, gpm(110) = gpm(000) = 0 and gpm(011) = 1 can be reached. However, if x = 0 was the result of applying the genotype-phenotype mapping to gB = 110, its neighborhood is {0, 2}, since gpm(010) = gpm(100) = 0 and gpm(111) = 2.
Hence, the neighborhood relation under non-injective genotype-phenotype mappings depends on the genotypes from which the phenotypes under investigation originated. For
injective genotype-phenotype mappings, this is not the case.
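The two neighborhoods of Example E4.3 can be recomputed programmatically. The sketch below is our own code; it mirrors gpm(g) = (g1 + g2) · g3 and the single-bit flip operator, representing a genotype as an int array (g1, g2, g3):

```java
import java.util.*;

// Reproduces the neighborhoods of Example E4.3 under the non-injective GPM.
public class NonInjectiveGpm {

    // gpm(g) = (g1 + g2) * g3 for a bit string g = (g1, g2, g3).
    public static int gpm(int[] g) {
        return (g[0] + g[1]) * g[2];
    }

    // Phenotypes reachable from genotype g by one single-bit flip plus GPM.
    public static Set<Integer> neighborhood(int[] g) {
        Set<Integer> result = new TreeSet<>();
        for (int i = 0; i < g.length; i++) {
            int[] h = g.clone();
            h[i] ^= 1;                 // flip the i-th bit
            result.add(gpm(h));
        }
        return result;
    }
}
```

Here, `neighborhood(new int[]{0, 1, 0})` (for gA = 010) yields {0, 1}, while `neighborhood(new int[]{1, 1, 0})` (for gB = 110) yields {0, 2}, although both genotypes map to the same phenotype 0.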
If the search space G and the problem space X are the same (G = X) and the genotype-phenotype mapping is the identity mapping, it follows that g2 = x2 and the sum in Equation 4.11 breaks down to P(Op(g1) = g2). Then, the neighboring relation (defined on the problem space) equals the adjacency relation (defined on the search space), as can be seen when taking a look at Equation 4.6. Since the identity mapping is injective, the condition x1 = gpm(g1) is always true, too.

(G = X) ∧ (gpm(g) = g ∀g ∈ G) ⇒ neighboring ≡ adjacent   (4.12)
In the following, we will use neighboring_ε to denote an ε-neighborhood relation, implicitly assuming the dependency on the search space or that the GPM is injective. Furthermore, with neighboring(x1, x2), we denote that x1 and x2 are ε = 0-neighbors.
4.4 Local Optima
We now have the means to define the term local optimum more precisely. From Example E4.4, it becomes clear that such a definition is not as trivial as the definitions of global optima given in Chapter 3.
Example E4.4 (Types of Local Optima).

[Figure 4.4: Minima of a single objective function f over the problem space X, sketching the global minimum x⋆, a locally minimal plateau Xl, and a set XN of non-minimal candidate solutions.]

If we assume minimization, the global optimum x⋆ of the given objective function can easily
be found. Obviously, every global optimum is also a local optimum. In the case of single-objective minimization, it is both a local minimum and a global minimum.
In order to define what a local minimum is, as a first step, we could say: "A (local) minimum xl ∈ X of one (objective) function f : X ↦ R is an input element with f(xl) < f(x) for all x neighboring to xl." However, such a definition would disregard locally optimal plateaus, such as the set Xl of candidate solutions sketched in Figure 4.4.
If we simply exchange the < in f(xl) < f(x) with a ≤ and obtain f(xl) ≤ f(x), however, we would classify the elements in XN as local optima as well. This would contradict intuition, since they are clearly worse than the elements on the right side of XN. Hence, a more fine-grained definition is necessary.
Similar to Handl et al. [1178], let us start with creating a definition of areas in the search space based on the specification of ε-neighboring (Definition D4.12).

Definition D4.13 (Connection Set). A set X′ ⊆ X is ε-connected if all of its elements can be reached from each other with consecutive search steps, each of which having at least probability ε ∈ [0, 1), i.e., if Equation 4.13 holds.

x1 ≠ x2 ⇒ neighboring_ε(x1, x2)   ∀x1, x2 ∈ X′   (4.13)
4.4.1 Local Optima of Single-Objective Problems

Definition D4.14 (ε-Local Minimal Set). An ε-local minimal set Xl ⊆ X under objective function f and problem space X is a connected set for which Equation 4.14 holds.

neighboring_ε(x, xl) ⇒ [(x ∈ Xl) ∧ (f(x) = f(xl))] ∨ [(x ∉ Xl) ∧ (f(xl) < f(x))]   ∀xl ∈ Xl, x ∈ X   (4.14)

An ε-local minimum xl is an element of an ε-local minimal set. We will use the terms local minimum set and local minimum to denote scenarios where ε = 0.
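For ε = 0 and a one-dimensional discrete problem space in which the neighbors of index i are i−1 and i+1, the local minimal sets of Definition D4.14 are exactly the plateaus of equal objective value whose outside neighbors are strictly worse. The following sketch is our own illustration under these assumptions, not book code:

```java
import java.util.*;

// Finds the 0-local minimal sets of f over the indices 0..n-1, where the
// neighbors of i are i-1 and i+1. Each set is returned as a range [from, to].
public class LocalMinima {

    public static List<int[]> localMinimalSets(double[] f) {
        List<int[]> sets = new ArrayList<>();
        int i = 0;
        while (i < f.length) {
            int j = i;
            while (j + 1 < f.length && f[j + 1] == f[i]) j++;   // extend the plateau
            boolean leftOk = (i == 0) || (f[i - 1] > f[i]);     // outside neighbors
            boolean rightOk = (j == f.length - 1) || (f[j + 1] > f[i]); // strictly worse?
            if (leftOk && rightOk) sets.add(new int[]{i, j});
            i = j + 1;
        }
        return sets;
    }
}
```

For f = (3, 1, 1, 2, 0, 4), this yields the locally minimal plateau {1, 2} and the global minimum {4}; treating whole plateaus as units is what distinguishes Definition D4.14 from the naive attempts of Example E4.4.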
Definition D4.15 (ε-Local Maximal Set). An ε-local maximal set Xl ⊆ X under objective function f and problem space X is a connected set for which Equation 4.15 holds.

neighboring_ε(x, xl) ⇒ [(x ∈ Xl) ∧ (f(x) = f(xl))] ∨ [(x ∉ Xl) ∧ (f(xl) > f(x))]   ∀xl ∈ Xl, x ∈ X   (4.15)

Again, an ε-local maximum xl is an element of an ε-local maximal set, and we will use the terms local maximum set and local maximum to indicate scenarios where ε = 0.
Definition D4.16 (ε-Local Optimal Set (Single-Objective Optimization)). An ε-local optimal set Xl ⊆ X is either an ε-local minimal set or an ε-local maximal set, depending on the definition of the optimization problem.

Similar to the case of local maxima and minima, we will again use the terms ε-local optimum, local optimum set, and local optimum.
4.4.2 Local Optima of Multi-Objective Problems

As basis for the definition of local optima in multi-objective problems, we use the prevalence relations and the concepts provided by Handl et al. [1178]. In Section 3.5.2, we showed that these encompass the other multi-objective concepts as well. Therefore, in the following definition, "prevails" can as well be read as "dominates", which is probably the most common use.

Definition D4.17 (ε-Local Optimal Set (Multi-Objective Optimization)). An ε-local optimal set Xl ⊆ X under the prevalence operator ≺ and problem space X is a connected set for which Equation 4.16 holds.

neighboring_ε(x, xl) ⇒ [(x ∈ Xl) ∧ ¬(x ≺ xl)] ∨ [(x ∉ Xl) ∧ (xl ≺ x)]   ∀xl ∈ Xl, x ∈ X   (4.16)

Again, we use the term ε-local optimum to denote candidate solutions which are part of ε-local optimal sets. If ε is omitted in the name, ε = 0 is assumed. See, for example, Example E13.4 on page 158 for an example of local optima in multi-objective problems.
4.5 Further Definitions

In order to ease the discussions of different Global Optimization algorithms, we furthermore define the data structure individual, which is the unison of the genotype and phenotype of an individual.

Definition D4.18 (Individual). An individual p is a tuple p = (p.g, p.x) ∈ G × X of an element p.g in the search space G and the corresponding element p.x = gpm(p.g) in the problem space X.

Besides this basic individual structure, many practical realizations of optimization algorithms use extended variants of such a data structure to store additional information like the objective and fitness values. Then, we will consider individuals as tuples in G × X × W, where W is the space of the additional information, W = R^n × R, for instance. Such endogenous information [302] often represents local optimization strategy parameters. It then usually coexists with global (exogenous) configuration settings. In the algorithm definitions later in this book, we will often access the phenotypes p.x without explicitly referring to the genotype-phenotype mapping gpm, since gpm is already implicitly included in the relation of p.x and p.g according to Definition D4.18. Again relating to the notation used in Genetic Algorithms, we will refer to lists of such individual records as populations.

Definition D4.19 (Population). A population pop is a list consisting of ps individuals which are jointly under investigation during an optimization process.

pop ∈ (G × X)^ps ⇒ (∀p = (p.g, p.x) ∈ pop ⇒ p.x = gpm(p.g))   (4.17)
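Definitions D4.18 and D4.19 translate directly into a small record type. The sketch below is our own rendering; the field and method names do not reproduce the exact interfaces of Chapter 55:

```java
import java.util.*;
import java.util.function.Function;

// Sketch of the individual record p = (p.g, p.x) from Definition D4.18.
public class Individual<G, X> {
    public final G g;  // the genotype, an element of the search space G
    public final X x;  // the phenotype, p.x = gpm(p.g)

    public Individual(G g, Function<G, X> gpm) {
        this.g = g;
        this.x = gpm.apply(g);  // the GPM is applied once, on construction
    }

    // A population is simply a list of such records (Definition D4.19).
    public static <G, X> List<Individual<G, X>> population(List<G> genotypes,
                                                           Function<G, X> gpm) {
        List<Individual<G, X>> pop = new ArrayList<>();
        for (G genotype : genotypes) pop.add(new Individual<>(genotype, gpm));
        return pop;
    }
}
```

An extended variant for G × X × W would simply add a field for the endogenous information, e.g., the fitness v(p).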
Tasks T4

20. Assume that we are car salesmen and search for an optimal configuration for a car. We have a problem space X = X1 × X2 × X3 × X4 where
X1 = {with climate control, without climate control},
X2 = {with sun roof, without sun roof},
X3 = {5 gears, 6 gears, automatic 5 gears, automatic 6 gears}, and
X4 = {red, blue, green, silver, white, black, yellow}.
Define a proper search space for this problem along with a unary search operation and a genotype-phenotype mapping.
[10 points]
21. Define a nullary search operation which creates new random points in the n-dimensional search space G = {0, 1, 2}^n. A nullary search operation has no parameters (see Section 4.2). {0, 1, 2}^n denotes a search space which consists of strings of length n where each element of the string is from the set {0, 1, 2}. An example of such a string for n = 5 is (1, 2, 1, 1, 0). A nullary search operation here would thus just create a new, random string of this type.
[5 points]
22. Define a unary search operation which randomly modifies a point in the n-dimensional search space G = {0, 1, 2}^n. See Task 21 for a description of the search space. A unary search operation for G = {0, 1, 2}^n has one parameter: a string of length n where each element is from the set {0, 1, 2}. A unary search operation takes such a string g ∈ G and modifies it. It returns a new string g′ ∈ G which is a bit different from g. This modification has to be done in a random way, by randomly adding to, subtracting from, multiplying, or permuting g, for example.
[5 points]
23. Define a possible unary search operation which modifies a point g ∈ R^2 and which could be used for minimizing a black box objective function f : R^2 ↦ R.
[10 points]
24. When trying to find the minimum of a black box objective function f : R^3 ↦ R, we could use the same search space and problem space G = X = R^3. Instead, we could use G = R^2 and apply a genotype-phenotype mapping gpm : R^2 ↦ R^3. For example, we could set gpm(g = (g[0], g[1])^T) = (g[0], g[1], g[0] + g[1])^T. By this approach, our optimization algorithm would only face a two-dimensional problem instead of a three-dimensional one. What do you think about this idea? Is it good?
[5 points]
25. Discuss some basic assumptions usually underlying the design of unary and binary
search operations.
[5 points]
26. Define a search space and search operations for the Traveling Salesman Problem as discussed in Example E2.2 on page 45.
[10 points]
27. We can define a search space G = (1..n)^n for a Traveling Salesman Problem with n cities. This search space would clearly be numerical since G ⊆ N1^n ⊂ R^n. Discuss whether it makes sense to use a pure numerical optimization algorithm (with operations similar to the one you defined in Task 23, for example) to find solutions to the Traveling Salesman Problem in this search space.
[5 points]
28. Assume that you want to solve an optimization problem defined over an n-dimensional vector of natural numbers where each element of the vector stems from the natural interval⁹ a..b. In other words, the problem space X is a subset of the n-dimensional natural numbers (i.e., X ⊆ N0^n); more specifically: X = (a..b)^n. You have an optimization algorithm available which is suitable for continuous optimization problems defined over m-dimensional real vectors where each vector element stems from the real interval¹⁰ [c, d], i.e., which deals with search spaces G ⊆ R^m; more specifically, G = [c, d]^m. m, c, and d can be chosen freely. n, a, and b are fixed constants that are specific to the optimization problem you have.
How can you solve your optimization problem with the available algorithm? Provide an implementation of the necessary module in a programming language of your choice. The implementation should obey the interface definitions in Chapter 55. If you choose a programming language different from Java, you should give the methods names similar to those used in that section and write comments relating your code to the interfaces given there.
[10 points]
⁹ The natural interval a..b contains only the natural numbers from a to b. For example, 1..5 = {1, 2, 3, 4, 5}.
¹⁰ The real interval [c, d] contains all the real numbers between c and d. For example, 1..4 ⊆ [1, 4] and, in general, c..d ⊆ [c, d].
Chapter 5
Fitness and Problem Landscape: How does the Optimizer see it?

5.1 Fitness Landscapes

A very powerful metaphor in Global Optimization is the fitness landscape¹. Like many other abstractions in optimization, fitness landscapes have been developed and extensively been researched by evolutionary biologists [701, 1030, 1500, 2985]. Basically, they are visualizations of the relationship between the genotypes or phenotypes in a given population and their corresponding reproduction probability. The idea of such visualizations goes back to Wright [2985], who used level contour diagrams in order to outline the effects of selection, mutation, and crossover on the capabilities of populations to escape locally optimal configurations. Similar abstractions arise in many other areas [2598], like in the physics of disordered systems such as spin glasses [311, 1879], for instance.
In Chapter 28, we will discuss Evolutionary Algorithms, which are optimization methods inspired by natural evolution. The Evolutionary Algorithm research community has widely adopted the fitness landscape as a relation between individuals and their objective values [865, 1912]. Langdon and Poli [1673]² explain that fitness landscapes can be imagined as a view on a countryside from far above. Here, the height of each point is analogous to its objective value. An optimizer can then be considered as a short-sighted hiker who tries to find the lowest valley or the highest hilltop. Starting from a random point on the map, she wants to reach this goal by walking the minimum distance.
Evolutionary Algorithms were initially developed as single-objective optimization methods. Then, the objective values were directly used as fitness, and the reproduction probability, i.e., the chance of a candidate solution of being subject to further investigation, was proportional to them. In multi-objective optimization applications with more sophisticated fitness assignment and selection processes, this simple approach does not reflect the biological metaphor correctly anymore.
In the context of this book, we therefore deviate from this view. Since it would possibly be confusing for the reader if we used a different definition for fitness landscapes than the rest of the world, we introduce the new term problem landscape and keep using the term fitness landscape in the traditional manner. In Figure 11.1 on page 140, you can find some examples of fitness landscapes.

¹ http://en.wikipedia.org/wiki/Fitness_landscape [accessed 2007-07-03]
² This part of [1673] is also available online at http://www.cs.ucl.ac.uk/staff/W.Langdon/FOGP/intro_pic/landscape.html [accessed 2008-02-15].
5.2 Fitness as a Relative Measure of Utility

When performing a multi-objective optimization, i.e., n = |f| > 1, the utility of a candidate solution is characterized by a vector in R^n. In Section 3.3 on page 55, we have seen how such vectors can be compared directly in a consistent way. Such comparisons, however, only state whether one candidate solution is better than another one or not, but give no information on how much it is better or worse and how interesting it is for further investigation. Often, such a scalar, relative measure is needed.
In many optimization techniques, especially in Evolutionary Algorithms, the partial orders created by Pareto or prevalence comparators are therefore mapped to the positive real numbers R+. For each candidate solution, this single number represents its fitness as a solution for the given optimization problem. The process of computing such a fitness value is often not solely depending on the absolute objective values of the candidate solutions but also on those of the other individuals known. It could, for instance, be the position of a candidate solution in the list of investigated elements sorted according to the Pareto relation. Hence, fitness values often only have a meaning inside the optimization process [949] and may change in each iteration of the optimizer, even if the objective values stay constant. This concept is similar to the heuristic functions in many deterministic search algorithms which approximate how many modifications are likely necessary to an element in order to reach a feasible solution.

Definition D5.1 (Fitness). The fitness³ value v : G × X ↦ R+ maps an individual p to a positive real value v(p) denoting its utility as a solution or its priority in the subsequent steps of the optimization process.

The fitness mapping v is usually built by incorporating the information of a set of individuals, i.e., a population pop. This allows the search process to compute the fitness as a relative utility rating. In multi-objective optimization, a fitness assignment procedure v = assignFitness(pop, cmpf) as defined in Definition D28.6 on page 274 is typically used to construct v based on the set pop of all individuals which are currently under investigation and a comparison criterion cmpf (see Definition D3.20 on page 77). This procedure is usually repeated in each iteration cycle t of the optimizer, and different fitness assignments v_{t=1}, v_{t=2}, ... result.
We can detect a similarity between a fitness function v and a heuristic function as defined in Definition D1.14 on page 34 and the more precise Definition D46.5 on page 518. Both of them give indicators about the utility of p which, most often, are only valid and only make sense within the optimization process. Both measures reflect some ranking criterion used to pick the point(s) in the search space which should be investigated further in the next iteration. The difference between fitness functions and heuristics is that heuristics usually are based on some insights about the concrete structure of the optimization problem, whereas a fitness assignment process obtains its information primarily from the objective values of the individuals. Fitness functions are thus more abstract and general. Also, as mentioned before, the fitness can change during the optimization process and is often a relative measure, whereas heuristic values for a fixed point in the search space are usually constant and independent of the structure of the other candidate solutions under investigation.
Although it is a bit counter-intuitive that a fitness function is built repeatedly and is not a constant mapping, this is indeed the case in most multi-objective optimization situations. In practical implementations, the individual records are often extended to also facilitate the fitness values. If we refer to v(p) multiple times in an algorithm for the same p and pop, this should be understood as access to such a cached value which is actually computed only once. This is indeed a good implementation decision, since some fitness assignment processes

³ http://en.wikipedia.org/wiki/Fitness_(genetic_algorithm) [accessed 2008-08-10]
have a complexity of O((len(pop))²) or higher, because they may involve comparing each individual with every other one in pop as well as computing density information or clustering.
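A minimal instance of such an O(len(pop)²) fitness assignment is the domination count: v(p) is the number of individuals in pop which Pareto-dominate p (for minimization), so non-dominated individuals receive the best fitness 0. This is a sketch of ours, not the specific procedure of Definition D28.6:

```java
// Illustrative O(n^2) fitness assignment: domination count under Pareto
// dominance for minimization. Each row of pop is one objective vector f(p.x).
public class DominationCountFitness {

    // true iff a dominates b: a is no worse in all objectives, better in one.
    public static boolean dominates(double[] a, double[] b) {
        boolean strictlyBetter = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] > b[i]) return false;
            if (a[i] < b[i]) strictlyBetter = true;
        }
        return strictlyBetter;
    }

    // v(p) = number of individuals dominating p; smaller fitness is better.
    public static int[] assignFitness(double[][] pop) {
        int[] v = new int[pop.length];
        for (int i = 0; i < pop.length; i++)
            for (int j = 0; j < pop.length; j++)
                if (dominates(pop[j], pop[i])) v[i]++;
        return v;
    }
}
```

Note that the resulting fitness is relative: adding or removing individuals from pop can change v(p) even though the objective vectors themselves stay constant.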
The term fitness has been borrowed from biology⁴ [2136, 2542] by the Evolutionary Algorithms community. As already mentioned, when the first applications of Genetic Algorithms were developed, the focus was mainly on single-objective optimization. Back then, this single function was called the fitness function and, thus, the objective value was equated with the fitness value. This point of view is obsolete in principle, yet you will find many contemporary publications that use this notion. This is partly due to the fact that in simple problems with only one objective function, the old approach of using the objective values directly as fitness, i.e., v(p) = f(p.x), can sometimes actually be applied. In multi-objective optimization processes, this is not possible, and fitness assignment processes like those which we are going to elaborate on in Section 28.3 on page 274 are used instead.
In the context of this book, fitness is subject to minimization, i.e., elements with smaller fitness are better than those with higher fitness. Although this definition differs from the biological perception of fitness, it complies with the idea that optimization algorithms are to find the minima of mathematical functions (if nothing else has been stated). It makes fitness and objective values compatible in that it allows us to consider the fitness of a multi-objective problem to be the objective value of a single-objective one.
5.3 Problem Landscapes

The fitness steers the search into areas of the search space considered promising to contain optima. The quality of an optimization process is largely defined by how fast it finds these points. Since most of the algorithms discussed in this book are randomized, meaning that two consecutive runs of the same configuration may lead to totally different results, we can define a quality measure based on probabilities.

Definition D5.2 (Problem Landscape). The problem landscape Φ : X × N1 ↦ [0, 1] maps all the points x in a finite problem space X to the cumulative probability of reaching them until (inclusively) the τth evaluation of a candidate solution (for a certain configuration of an optimization algorithm).

Φ(x, τ) = P(x has been visited until the τth evaluation)   ∀x ∈ X, τ ∈ N1   (5.1)

This definition of problem landscape is very similar to the performance measure definition used by Wolpert and Macready [2963, 2964] in their No Free Lunch Theorem (see Chapter 23 on page 209). In our understanding, problem landscapes are not only closer to the original meaning of fitness landscapes in biology, they also have another advantage. According to this definition, all entities involved in an optimization process over a finite problem space directly influence the problem landscape. The choice of the search operations in the search space G, the way the initial elements are picked, the genotype-phenotype mapping, the objective functions, the fitness assignment process, and the way individuals are selected for further exploration all have impact on the probability to discover certain candidate solutions, i.e., on the problem landscape Φ.
Problem landscapes are essentially some form of cumulative distribution function (see Definition D53.18 on page 658). As such, they can be considered as the counterparts of the Markov chain models for the search state used in convergence proofs such as those in [2043], which resemble probability mass/density functions. On basis of its cumulative character, we can furthermore make the following assumptions about Φ(x, τ):

Φ(x, τ1) ≤ Φ(x, τ2)   ∀τ1 < τ2; x ∈ X, τ1, τ2 ∈ N1   (5.2)
0 ≤ Φ(x, τ) ≤ 1   ∀x ∈ X, τ ∈ N1   (5.3)

⁴ http://en.wikipedia.org/wiki/Fitness_%28biology%29 [accessed 2008-02-22]
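Φ(x, τ) can be estimated empirically exactly as suggested by this definition: run the same optimizer configuration many times and record, for each x and τ, the fraction of runs which visited x within the first τ evaluations. The sketch below is our own toy setup (a Hill Climber with a ±1-step operator on a ten-point problem space containing a local and a global minimum), not the experiment of Example E5.1:

```java
import java.util.*;

// Estimates the problem landscape Phi(x, tau) of a simple Hill Climber by
// repeated runs on a toy objective function over X = {0,...,9}.
public class ProblemLandscape {

    // Two minima: a local one at x=2 (f=1) and the global one at x=7 (f=0).
    static final double[] F = {5, 3, 1, 4, 6, 4, 2, 0, 2, 5};

    // phi[x][tau] = fraction of runs that visited x within the first tau+1 steps.
    public static double[][] estimatePhi(int runs, int maxTau, long seed) {
        Random rnd = new Random(seed);
        double[][] phi = new double[F.length][maxTau];
        for (int r = 0; r < runs; r++) {
            boolean[] visited = new boolean[F.length];
            int cur = rnd.nextInt(F.length);                  // nullary op: uniform init
            for (int tau = 0; tau < maxTau; tau++) {
                visited[cur] = true;
                for (int x = 0; x < F.length; x++)
                    if (visited[x]) phi[x][tau]++;
                int next = Math.max(0, Math.min(F.length - 1,
                        cur + (rnd.nextBoolean() ? 1 : -1))); // unary op: +/-1 step
                if (F[next] <= F[cur]) cur = next;            // keep the better one
            }
        }
        for (double[] row : phi)
            for (int t = 0; t < maxTau; t++) row[t] /= runs;
        return phi;
    }
}
```

By construction, the estimate is monotonically non-decreasing in τ (Equation 5.2) and bounded by [0, 1] (Equation 5.3); since the ±1-step operator cannot escape either basin of attraction, both Φ(2, τ) and Φ(7, τ) saturate below 1.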
Example E5.1 (Influence of Search Operations on Φ).
Let us now give a simple example for problem landscapes and how they are influenced by the optimization algorithm applied to them. Figure 5.1 illustrates one objective function, defined over a finite subset X of the two-dimensional real plane, which we are going to optimize.

[Figure 5.1: An example optimization problem: the objective function f(x) over X, with a local optimum xl and the global optimum x⋆.]

In this example, we use the problem space X also as search space G, i.e., the
genotype-phenotype mapping is the identity mapping. For optimization, we will use a very simple Hill Climbing algorithm⁵, which initially draws one candidate solution randomly with uniform probability from X. In each iteration, it creates a new candidate solution from the current one by using a unary search operation. The old and the new candidate are compared, and the better one is kept. Hence, we do not need to differentiate between fitness and objective values. In the example, better means "has lower fitness".
In Figure 5.1, we can spot one local optimum xl and one global optimum x⋆. Between them, there is a hill, an area of very bad fitness. The rest of the problem space exhibits a small gradient into the direction of its center. The optimization algorithm will likely follow this gradient and sooner or later discover xl or x⋆. The chances to first discover xl are higher, since it is closer to the center of X and thus has a bigger basin of attraction.
With this setting, we have recorded the traces of two experiments with 1.3 million runs of the optimizer (8000 iterations each). From these records, we can approximate the problem landscapes very well.
In the first experiment, depicted in Figure 5.2, the Hill Climbing optimizer used a search operation searchOp1 : X ↦ X which creates a new candidate solution according to a (discretized) normal distribution around the old one. We divided X into a regular lattice, and in the second experiment series, we used an operator searchOp2 : X ↦ X which produces candidate solutions that are direct neighbors of the old ones in this lattice. The problem landscape produced by this operator is shown in Figure 5.3. Both operators are complete, since each point in the search space can be reached from each other point by applying them.

searchOp1(x) ≡ (x1 + randomNorm(0, 1), x2 + randomNorm(0, 1))   (5.4)
searchOp2(x) ≡ (x1 + randomUni[−1, 1), x2 + randomUni[−1, 1))   (5.5)
In both experiments, the probabilities of the elements in the problem space of being discovered are very low, near to zero, and rather uniformly distributed in the first few iterations. To be precise, since our problem space is a 36×36 lattice, this probability is exactly 1/36² in the first iteration. Starting with the tenth or so evaluation, small peaks begin to form around the places where the optima are located. These peaks then begin to grow.
⁵ Hill Climbing algorithms are discussed thoroughly in Chapter 26.
[Figure 5.2: The problem landscape Φ of the example problem derived with searchOp1. Panels 5.2.a to 5.2.l show Φ(x, τ) over X for τ = 1, 2, 5, 10, 50, 100, 500, 1000, 2000, 4000, 6000, and 8000.]
[Figure 5.3: The problem landscape Φ of the example problem derived with searchOp2. Panels 5.3.a to 5.3.l show Φ(x, τ) over X for τ = 1, 2, 5, 10, 50, 100, 500, 1000, 2000, 4000, 6000, and 8000.]
Since the local optimum x⋆l resides at the center of a large basin and the gradients point straight into its direction, it has a higher probability of being found than the global optimum x⋆. The difference between the two search operators tested becomes obvious starting with approximately the 2000th iteration. In the process with the operator utilizing the normal distribution, the value of the global optimum begins to rise farther and farther, finally surpassing the one of the local optimum. Even if the optimizer gets trapped in the local optimum, it will still eventually discover the global optimum, and if we ran this experiment longer, the according probability would converge to 1. The reason for this is that, with the normal distribution, all points in the search space have a non-zero probability of being found from all other points in the search space. In other words, all elements of the search space are adjacent.

The operator based on the uniform distribution is only able to create candidate solutions in the direct neighborhood of the known points. Hence, if an optimizer gets trapped in the local optimum, it can never escape. If it arrives at the global optimum, it will never discover the local one. In Fig. 5.3.l, we can see that Φ(x⋆l, 8000) ≈ 0.7 and Φ(x⋆, 8000) ≈ 0.3. In other words, only one of the two points will be the result of the optimization process whereas the other optimum remains undiscovered.
From Example E5.1 we can draw four conclusions:
1. Optimization algorithms discover good elements with higher probability than elements with bad characteristics, given the problem is defined in a way which allows this to happen.

2. The success of optimization depends very much on the way the search is conducted, i. e., on the definition of the neighborhoods of the candidate solutions (see Definition D4.12 on page 86).

3. It also depends on the time (or the number of iterations) the optimizer is allowed to use.

4. Hill Climbing algorithms are not Global Optimization algorithms since they have no general means of preventing getting stuck at local optima.
Tasks T5
29. What are the similarities and differences between fitness values and traditional heuristic values?
[10 points]
30. For many (single-objective) applications, a fitness assignment process which assigns the rank of an individual p in a population pop according to the objective value f(p.x) of its phenotype p.x as its fitness v(p) is highly useful. Specify an algorithm for such an assignment process which sorts the individuals in a list (population pop) according to their objective value and assigns the index of an individual in this sorted list as its fitness.
Should the individuals be sorted in ascending or descending order if f is subject to maximization and the fitness v is subject to minimization?
[10 points]
Chapter 6
The Structure of Optimization: Putting it together.
After discussing the single entities involved in optimization, it is time to put them together into the general structure common to all optimization processes, into an overall picture. This structure consists of the previously introduced well-defined spaces and sets as well as the mappings between them. One example for this structure of optimization processes is given in Figure 6.1, using a Genetic Algorithm which encodes the coordinates of points in a plane into bit strings as an illustration. The same structure was already used in Example E4.2.
6.1 Involved Spaces and Sets
At the lowest level, there is the search space G inside which the search operations navigate.
Optimization algorithms can usually only process a subset of this space at a time. The
elements of G are mapped to the problem space X by the genotype-phenotype mapping
gpm. In many cases, gpm is the identity mapping (if G = X) or a rather trivial, injective
transformation. Together with the elements under investigation in the search space, the
corresponding candidate solutions form the population pop of the optimization algorithm.
A population may contain the same individual record multiple times if the algorithm
permits this. Some optimization methods only process one candidate solution at a time. In this case, the size of pop is one and a single variable instead of a list data structure is used to hold the element's data. Having determined the corresponding phenotypes of the genotypes produced by the search operations, the objective function(s) can be evaluated.
In the single-objective case, a single value f(p.x) determines the utility of p.x. In case of a multi-objective optimization, on the other hand, a vector f(p.x) of objective values is computed for each candidate solution p.x in the population. Based on these objective values, the optimizer may compute a real fitness v(p) denoting how promising an individual p is for further investigation, i. e., as parameter for the search operations, relative to the other known candidate solutions in the population pop. Either based on such a fitness value, directly on the objective values, or on other criteria, the optimizer will then decide how to proceed, how to generate new points in G.
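These entities can be sketched in code. The following minimal Java sketch (with hypothetical names, not the book's implementation from Section 55.1.5) shows an individual record p holding a genotype p.g, a phenotype p.x, an objective value, and a fitness value, together with a gpm that decodes 4-bit genotypes into points of a 4×4 plane as in Figure 6.1:

```java
// A minimal sketch of the structures described above (hypothetical names):
// genotypes g in G are bit strings, phenotypes x in X are points in the plane,
// and the genotype-phenotype mapping decodes two 2-bit coordinates.
public class StructureSketch {

    // an individual record p holds the genotype p.g and the phenotype p.x
    static final class Individual {
        int[] g;        // genotype: 4 bits
        int[] x;        // phenotype: (x1, x2) with x1, x2 in 0..3
        double f;       // objective value f(p.x)
        double v;       // fitness v(p), assigned relative to the population
    }

    // genotype-phenotype mapping gpm : G -> X
    static int[] gpm(int[] g) {
        return new int[] { 2 * g[0] + g[1], 2 * g[2] + g[3] };
    }

    // an example objective function f : X -> R (distance to the point (3, 3))
    static double f(int[] x) {
        return Math.abs(3 - x[0]) + Math.abs(3 - x[1]);
    }

    public static void main(String[] args) {
        Individual p = new Individual();
        p.g = new int[] { 1, 1, 0, 1 };   // genotype 1101
        p.x = gpm(p.g);                   // phenotype (3, 1)
        p.f = f(p.x);                     // objective value 2.0
        System.out.println(p.x[0] + "," + p.x[1] + " -> " + p.f);
    }
}
```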
Figure 6.1: Spaces, sets, and elements involved in an optimization process: the population of genotypes in the search space G, the population of phenotypes in the problem space X (connected to G via the genotype-phenotype mapping), the objective values in the objective space Y ⊆ Rⁿ (computed by the objective function(s) f : X → Y), and the fitness values in R⁺ (computed by the fitness assignment process v). Fitness and heuristic values (normally) have a meaning only in the context of a population or a set of solution candidates.
6.2 Optimization Problems
An optimization problem is the task we wish to solve with an optimization algorithm. In Definition D1.1 and D1.2 on page 22, we gave two different views on how optimization problems can be defined. Here, we will give a more precise discussion which will be the basis of our considerations later in this book. An optimization problem, in the context of this book, includes the definition of the possible solution structures (X), the means to determine their utilities (f), and a way to compare two candidate solutions (cmpf).
Definition D6.1 (Optimization Problem (Mathematical View)). An optimization problem OptProb is a triplet OptProb = (X, f, cmpf) consisting of a problem space X, a set f = {f1, f2, . . .} of n = |f| objective functions fi : X → R ∀i ∈ 1..n, and a comparator function cmpf : X × X → R which allows us to compare two candidate solutions x1, x2 ∈ X based on their objective values f(x1) and f(x2) (and/or constraint satisfaction properties or other possible features).
Many optimization algorithms are general enough to work on different search spaces and utilize various search operations. Others can only be applied to a distinct class of search spaces or have fixed operators. Each optimization algorithm is defined by its specification that states which operations have to be carried out in which order. Definition D6.2 provides us with a mathematical expression for one specific application of an optimization algorithm to a specific optimization problem instance.
Definition D6.2 (Optimization Setup). The setup OptSetup of an application of an optimization algorithm is the five-tuple OptSetup = (OptProb, G, Op, gpm, ) of the optimization problem OptProb to be solved, the search space G in which the search operations searchOp ∈ Op navigate and which is translated to the problem space X with the genotype-phenotype mapping gpm, and the algorithm-specific parameter configuration (such as size limits for populations, time limits for the optimization process, or starting seeds for random number generators).

This setup, together with the specification of the algorithm, entirely defines the behavior of the optimizer OptAlgo.
Definition D6.3 (Optimization Algorithm). An optimization algorithm is a mapping OptAlgo : OptSetup → Φ of an optimization setup OptSetup to a problem landscape Φ as well as a mapping to the optimization result x ⊆ X, an arbitrarily sized subset of the problem space.
Apart from this very mathematical definition, we provide a Java interface which unites the functionality and features of optimization algorithms in Section 55.1.5 on page 713. Here, the functionality (finding a list of good individuals) and the properties (the objective function(s), the search operations, the genotype-phenotype mapping, the termination criterion) characterize an optimizer. This perspective comes closer to the practitioner's point of view whereas Definition D6.3 gives a good summary of the theoretical idea of optimization.
If an optimization algorithm can effectively be applied to an optimization problem, it will find at least one local optimum x⋆l if granted infinite processing time and if such an optimum exists. Usual prerequisites for effective problem solving are a weakly complete set of search operations Op, a surjective genotype-phenotype mapping gpm, and that the objective functions f do provide more information than needle-in-a-haystack configurations (see Section 16.7 on page 175).

∃ x⋆l ∈ X : lim_{τ→∞} Φ(x⋆l, τ) = 1    (6.1)
From an abstract perspective, an optimization algorithm is characterized by
1. the way it assigns fitness to the individuals,
2. the ways it selects them for further investigation,
3. the way it applies the search operations, and
4. the way it builds and treats its state information.
The best optimization algorithm for a given setup OptSetup is the one with the highest values of Φ(x⋆, τ) for the optimal elements x⋆ in the problem space with the lowest corresponding values of τ. Therefore, finding the best optimization algorithm for a given optimization problem is, itself, a multi-objective optimization problem.

For a perfect optimization algorithm (given an optimization setup with weakly complete search operations, a surjective genotype-phenotype mapping, and reasonable objective function(s)), Equation 6.2 would hold. However, it is more than questionable whether such an algorithm can actually be built (see Chapter 23).

∀ x1, x2 ∈ X : x1 ≻ x2 ⇒ lim_{τ→∞} Φ(x1, τ) > lim_{τ→∞} Φ(x2, τ)    (6.2)
6.3 Other General Features
There are some further common semantics and operations that are shared by most optimization algorithms. Many of them, for instance, start out by randomly creating some initial individuals which are then refined iteratively. Optimization processes which are not allowed to run infinitely have to find out when to terminate. In this section we define and discuss general abstractions for such similarities.
6.3.1 Gradient Descent
Definition D6.4 (Gradient). A gradient¹ of a scalar mathematical function (scalar field) f : Rⁿ → R is a vector function (vector field) which points into the direction of the greatest increase of the scalar field. It is denoted by ∇f or grad(f).

lim_{h→0} ‖f(x + h) − f(x) − grad(f)(x) · h‖ / ‖h‖ = 0    (6.3)
In other words, if we have a differentiable smooth function with low total variation² [2224], the gradient would be the perfect hint for the optimizer, giving the best direction into which the search should be steered. What many optimization algorithms which treat objective functions as black boxes do is to use rough estimations of the gradient for this purpose. The Downhill Simplex (see Chapter 43 on page 501), for example, uses a polytope consisting of n + 1 points in G = Rⁿ for approximating the direction with the steepest descent.

In most cases, we cannot directly differentiate the objective functions with the Nabla operator³ ∇f because its exact mathematical characteristics are not known or too complicated. Also, the search space G is often not a vector space over the real numbers R.
Thus, samples of the search space are used to approximate the gradient. If we compare two elements g1 and g2 via their corresponding phenotypes x1 = gpm(g1) and x2 = gpm(g2) and find x1 ≻ x2, we can assume that there is some sort of gradient facing upwards from x1 to x2 and hence, from g1 to g2. When descending this gradient, we can hope to find an x3 with x3 ≻ x1 and finally the global minimum.
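This sampling-based estimation of the gradient can be illustrated with a small numerical sketch (hypothetical code, not from the book): the gradient of a black-box objective function over Rⁿ is approximated from nearby samples by central differences, and a step is taken against it.

```java
// A small sketch (hypothetical, not the book's code): approximating the
// gradient of f : R^n -> R by central differences and descending along it.
public class GradientSketch {

    // example objective: f(x) = (x1 - 1)^2 + (x2 + 2)^2, minimum at (1, -2)
    static double f(double[] x) {
        return (x[0] - 1) * (x[0] - 1) + (x[1] + 2) * (x[1] + 2);
    }

    // estimate grad(f)(x) from samples, since f is treated as a black box
    static double[] approxGradient(double[] x, double h) {
        double[] g = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            double[] lo = x.clone(), hi = x.clone();
            lo[i] -= h;
            hi[i] += h;
            g[i] = (f(hi) - f(lo)) / (2 * h);   // central difference
        }
        return g;
    }

    // one descent step: move against the estimated gradient
    static double[] step(double[] x, double rate) {
        double[] g = approxGradient(x, 1e-6);
        double[] y = x.clone();
        for (int i = 0; i < x.length; i++)
            y[i] -= rate * g[i];
        return y;
    }

    public static void main(String[] args) {
        double[] x = { 0, 0 };
        for (int t = 0; t < 100; t++)
            x = step(x, 0.1);
        System.out.println(x[0] + " " + x[1]);  // approaches (1, -2)
    }
}
```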
¹ http://en.wikipedia.org/wiki/Gradient [accessed 2007-11-06]
² http://en.wikipedia.org/wiki/Total_variation [accessed 2009-07-01]
³ http://en.wikipedia.org/wiki/Del [accessed 2008-02-15]
6.3.2 Iterations
Global optimization algorithms often proceed iteratively and step by step evaluate candidate solutions in order to approach the optima. We distinguish between evaluations τ and iterations t.

Definition D6.5 (Evaluation). The value τ ∈ N₀ denotes the number of candidate solutions for which the set of objective functions f has been evaluated.
Definition D6.6 (Iteration). An iteration⁴ refers to one round in a loop of an algorithm. It is one repetition of a specific sequence of instructions inside of the algorithm.
Algorithms are referred to as iterative if most of their work is done by cyclic repetition of one main loop. In the context of this book, an iterative optimization algorithm starts with the first step t = 0. The value t ∈ N₀ is the index of the iteration currently performed by the algorithm and t + 1 refers to the following step. One example for an iterative algorithm is Algorithm 6.1. In some optimization algorithms like Genetic Algorithms, for instance, iterations are referred to as generations.

There often exists a well-defined relation between the number of performed evaluations τ of candidate solutions and the index of the current iteration t in an optimization process: Many Global Optimization algorithms create and evaluate exactly a certain number of individuals per generation.
6.3.3 Termination Criterion
The termination criterion terminationCriterion() is a function with access to all the information accumulated by an optimization process, including the number of performed steps t, the objective values of the best individuals, and the time elapsed since the start of the process. With terminationCriterion(), the optimizers determine when they have to halt.

Definition D6.7 (Termination Criterion). When the termination criterion function terminationCriterion() ∈ {true, false} evaluates to true, the optimization process will stop and return its results.
In Listing 55.9 on page 712, you can find a Java interface resembling Definition D6.7. Some possible criteria that can be used to decide whether an optimizer should terminate or not are [2160, 2622, 3088, 3089]:

1. The user may grant the optimization algorithm a maximum computation time. If this time has been exceeded, the optimizer should stop. Here we should note that the time needed for single individuals may vary, and so will the times needed for iterations. Hence, this time threshold can sometimes not be abided by exactly and may lead to different total numbers of iterations during multiple runs of the same algorithm.
2. Instead of specifying a time limit, an upper limit for the number of iterations or individual evaluations may be specified. Such criteria are most interesting for the researcher, since she often wants to know whether a qualitatively interesting solution can be found for a given problem using at most a predefined number of objective function evaluations. In Listing 56.53 on page 836, we give a Java implementation of the terminationCriterion() operation which can be used for that purpose.
⁴ http://en.wikipedia.org/wiki/Iteration [accessed 2007-07-03]
3. An optimization process may be stopped when no significant improvement in the solution quality was detected for a specified number of iterations. Then, the process most probably has converged to a (hopefully good) solution and will most likely not be able to make further progress.
4. If we optimize something like a decision maker or classifier based on a sample data set, we will normally divide this data into a training and a test set. The training set is
used to guide the optimization process whereas the test set is used to verify its results.
We can compare the performance of our solution when fed with the training set to
its properties if fed with the test set. This comparison helps us detect when most
probably no further generalization can be achieved by the optimizer and we should
terminate the process.
5. Obviously, we can terminate an optimization process if it has already yielded a sufficiently good solution.
In practical applications, we can apply any combination of the criteria above in order to determine when to halt. Algorithm 1.1 on page 40 is an example for a badly designed termination criterion: the algorithm will not halt until a globally optimal solution is found. From its structure, however, it is clear that it is very prone to premature convergence (see Chapter 13 on page 151) and thus, may get stuck at a local optimum. Then, it will run forever even if an optimal solution exists. This example also shows another thing: The structure of the termination criterion decides whether a probabilistic optimization technique is a Las Vegas or Monte Carlo algorithm (see Definition D12.2 and D12.3). How the termination criterion may be tested in an iterative algorithm is illustrated in Algorithm 6.1.
Algorithm 6.1: Example Iterative Algorithm
Input: [implicit] terminationCriterion(): the termination criterion
Data: t: the iteration counter
1 begin
2   t ← 0
    // initialize the data of the algorithm
3   while ¬terminationCriterion() do
    // perform one iteration - here happens the magic
4     t ← t + 1
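How a termination criterion in the sense of Definition D6.7 plugs into such an iterative skeleton might be sketched in Java as follows (hypothetical names; the book's actual interface is given in Listing 55.9 and an implementation in Listing 56.53):

```java
// A sketch (with hypothetical class names) of a termination criterion which
// limits the number of iterations, plugged into the loop of Algorithm 6.1.
public class TerminationSketch {

    interface TerminationCriterion {
        boolean shouldTerminate(int t);  // true -> stop and return the results
    }

    // a criterion of type 2: an upper limit for the number of iterations
    static TerminationCriterion maxIterations(int limit) {
        return t -> t >= limit;
    }

    // the iterative skeleton of Algorithm 6.1
    static int run(TerminationCriterion criterion) {
        int t = 0;                             // initialize the algorithm data
        while (!criterion.shouldTerminate(t)) {
            // perform one iteration - here happens the magic
            t = t + 1;
        }
        return t;
    }

    public static void main(String[] args) {
        System.out.println(run(maxIterations(1000)));  // prints 1000
    }
}
```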
6.3.4 Minimization
The original form of many optimization algorithms was developed for single-objective optimization. Such algorithms may be used for both minimization and maximization. As previously mentioned, we will present them as minimization processes since this is the most commonly used notation, without loss of generality. An algorithm that maximizes the function f may be transformed to a minimization process by using −f instead.

Note that using the prevalence comparisons as introduced in Section 3.5.2 on page 76, multi-objective optimization processes can be transformed into single-objective minimization processes. Therefore, x1 ≻ x2 ⇔ cmpf(x1, x2) < 0.
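The maximization-to-minimization transformation is trivial to implement; a minimal sketch (hypothetical code):

```java
// A one-line sketch of the maximization-to-minimization transformation:
// a minimizing optimizer applied to -f maximizes f.
import java.util.function.DoubleUnaryOperator;

public class NegationSketch {
    // wrap a function f subject to maximization for a minimizing optimizer
    static DoubleUnaryOperator toMinimization(DoubleUnaryOperator f) {
        return x -> -f.applyAsDouble(x);
    }

    public static void main(String[] args) {
        DoubleUnaryOperator f = x -> 1 - x * x;       // maximum at x = 0
        DoubleUnaryOperator fMin = toMinimization(f); // minimum at x = 0
        System.out.println(fMin.applyAsDouble(0.0));  // prints -1.0
    }
}
```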
6.3.5 Modeling and Simulating
While there are a lot of problems where the objective functions are mathematical expressions
that can directly be computed, there exist problem classes far away from such simple function
optimization that require complex models and simulations.
Definition D6.8 (Model). A model⁵ is an abstraction or approximation of a system that allows us to reason and to deduce properties of the system.
Models are often simplifications or idealizations of real-world issues. They are defined by leaving away facts that probably have only minor impact on the conclusions drawn from them. In the area of Global Optimization, we often need two types of abstractions:

1. The models of the potential solutions shape the problem space X. Examples are

(a) programs in Genetic Programming, for example for the Artificial Ant problem,
(b) construction plans of a skyscraper,
(c) distributed algorithms represented as programs for Genetic Programming,
(d) construction plans of a turbine,
(e) circuit diagrams for logical circuits, and so on.

2. Models of the environment in which we can test and explore the properties of the potential solutions, like

(a) a map on which the Artificial Ant will move, driven by an optimized program,
(b) an abstraction from the environment in which the skyscraper will be built, with wind blowing from several directions,
(c) a model of the network in which the evolved distributed algorithms can run,
(d) a physical model of air which blows through the turbine,
(e) the model of an energy source and the other pins which will be attached to the circuit, together with the possible voltages on these pins.
Models themselves are rather static structures of descriptions and formulas. Deriving concrete results (objective values) from them is often complicated. It often makes more sense to bring the construction plan of a skyscraper to life in a simulation. Then we can test the influence of various wind forces and directions on the building structure and approximate the properties which define the objective values.
Definition D6.9 (Simulation). A simulation⁶ is the computational realization of a model. Whereas a model describes abstract connections between the properties of a system, a simulation realizes these connections.

Simulations are executable, live representations of models that can be as meaningful as real experiments. They allow us to reason whether a model makes sense or not and how certain objects behave in the context of a model.
⁵ http://en.wikipedia.org/wiki/Model_%28abstract%29 [accessed 2007-07-03]
⁶ http://en.wikipedia.org/wiki/Simulation [accessed 2007-07-03]
Chapter 7
Solving an Optimization Problem
Before considering the following steps, always first check if a dedicated algorithm exists for the problem (see Section 1.1.3).

Figure 7.1: Use dedicated algorithms!
There are many highly efficient algorithms for solving special problems. For finding the shortest paths from one source node to the other nodes in a network, for example, one would never use a metaheuristic but Dijkstra's algorithm [796]. For computing the maximum flow in a flow network, the algorithm by Dinitz [800] beats any probabilistic optimization method. The key to winning with metaheuristics is to know when to apply them and when not. The first step before using them is always an internet or literature search in order to check for alternatives.

If this search did not lead to the discovery of a suitable method to solve the problem, a randomized or traditional optimization technique can be used. Therefore, the following steps should be performed in the order they are listed below and sketched in Figure 7.2.
0. Check for possible alternatives to using optimization algorithms!

1. Define the data structures for the possible solutions, the problem space X (see Section 2.1).

2. Define the objective functions f ∈ f which are subject to optimization (see Section 2.2).

3. Define the equality constraints h and inequality constraints g, if needed (see Section 3.4).

4. Define a comparator function cmpf(x1, x2) able to decide which one of two candidate solutions x1 and x2 is better (based on f, and the constraints hi and gi, see Section 3.5.2). In the unconstrained, single-objective case which is most common, this function will just select the one individual with the lower objective value.

5. Decide upon the optimization algorithm to be used for the optimization problem (see Section 1.3).

6. Choose an appropriate search space G (see Section 4.1). This decision may be directly implied by the choice in Point 5.
Figure 7.2: The steps of solving an optimization problem: check whether dedicated algorithms exist (if so, use them); otherwise determine the data structure for solutions and define the optimization criteria and constraints; select and configure the optimization algorithm (e.g., EA? CMA-ES? DE? population size, crossover rate, . . . ); choose the search space, search operations, and mapping to the problem space; run small-scale test experiments (a few runs per configuration); once the results are satisfying, perform the real experiment consisting of enough runs to get statistically significant results (e.g., 30 runs for each configuration and problem instance), evaluate them statistically (e.g., "Our approach beats approach xyz with 2% probability to err in a two-tailed Mann-Whitney test."), and finally integrate the optimizer into the real application (production stage).
7. Define a genotype-phenotype mapping gpm between the search space G and the problem space X (see Section 4.3).

8. Define the set Op of search operations searchOp ∈ Op navigating in the search space (see Section 4.2). This decision may again be directly implied by the choice in Point 5.

9. Configure the optimization algorithm, i. e., set all its parameters to values which look reasonable.

10. Run a small-scale test with the algorithm and the settings and test whether they approximately meet the expectations. If not, go back to Point 9, 5, or even Point 1, depending on how bad the results are. Otherwise, continue at Point 11.

11. Conduct experiments with some different configurations and perform many runs for each configuration. Obtain enough results to make statistical comparisons between the configurations and with other algorithms (see e.g., Section 53.7). If the results are bad, again consider going back to a previous step.

12. Now either the results of the optimizer can be used and published, or the developed optimization system can become part of an application (see Part V and [2912] as an example), depending on the goal of the work.
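For a toy problem, these steps might be instantiated as in the following hypothetical Java sketch, which minimizes f(x) = x1² + x2² over X = R² with a primitive hill climber standing in for the chosen algorithm:

```java
// A toy walk-through (hypothetical, minimal) of the steps above.
import java.util.Comparator;
import java.util.Random;
import java.util.function.ToDoubleFunction;
import java.util.function.UnaryOperator;

public class SetupSketch {

    static double run(long seed) {
        Random rnd = new Random(seed);

        // 1./2. problem space X = R^2, objective f(x) = x1^2 + x2^2 (minimize)
        ToDoubleFunction<double[]> f = x -> x[0] * x[0] + x[1] * x[1];
        // 4. comparator cmp_f: the lower objective value wins
        Comparator<double[]> cmp = Comparator.comparingDouble(f::applyAsDouble);
        // 6./7. search space G = X, genotype-phenotype mapping gpm = identity
        UnaryOperator<double[]> gpm = UnaryOperator.identity();
        // 8. unary search operation: a small Gaussian step
        UnaryOperator<double[]> searchOp = g -> new double[] {
            g[0] + rnd.nextGaussian() * 0.1,
            g[1] + rnd.nextGaussian() * 0.1 };

        // 5./9./10. a primitive optimizer (a hill climber) as a small-scale test
        double[] g = { 3, 4 };
        for (int t = 0; t < 5000; t++) {
            double[] h = searchOp.apply(g);
            if (cmp.compare(gpm.apply(h), gpm.apply(g)) < 0)
                g = h;  // accept only improvements
        }
        return f.applyAsDouble(gpm.apply(g));
    }

    public static void main(String[] args) {
        System.out.println(run(123));  // should approach 0
    }
}
```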
Tasks T7
31. Why is it important to always look for traditional, dedicated, and deterministic alternatives before trying to solve an optimization problem with a metaheuristic optimization algorithm?
[5 points]
32. Why is a single experiment with a randomized optimization algorithm not enough to
show that this algorithm is good (or bad) for solving a given optimization task? Why
do we need comprehensive experiments instead, as pointed out in Point 11 on the
previous page?
[5 points]
Chapter 8
Baseline Search Patterns
Metaheuristic optimization algorithms trade in the guaranteed optimality of their results for a shorter runtime. This means that their results can be (but not necessarily are) worse than the global optima of a given optimization problem. The results are acceptable as long as the user is satisfied with them. An interesting question, however, is: "When is an optimization algorithm suitable for a given problem?" The answer to this question is strongly related to how well it can utilize the information it gains during the optimization process in order to find promising regions in the search space.
The primary information source for any optimizer is the (set of) objective functions. Hence, it makes sense to declare an optimization algorithm as efficient if it can outperform primitive approaches which do not utilize the information gained from evaluating candidate solutions with objective functions, such as the random sampling and random walk methods which will be introduced in this chapter. The comparison criterion should be the solution quality in terms of objective values.
Definition D8.1 (Effectiveness). An optimization algorithm is effective if it can (statistically significantly) outperform random sampling and random walks in terms of solution quality according to the objective functions.
A third baseline algorithm is exhaustive enumeration, the testing of all possible solutions. It does not utilize the information gained from the objective functions either. It can be used as a yardstick in terms of optimization speed.
Definition D8.2 (Efficiency). An optimization algorithm is efficient if it can meet the criterion of effectiveness (Definition D8.1) with a lower-than-exponential algorithmic complexity (see Section 12.1 on page 143), i. e., within fewer steps than required by exhaustive enumeration.
8.1 Random Sampling
Random sampling is the most basic search approach. It is uninformed, i. e., it does not make any use of the information it can obtain from evaluating candidate solutions with the objective functions. As sketched in Algorithm 8.1 and implemented in Listing 56.4 on page 729, it keeps creating genotypes of random configuration until the termination criterion is met. In every iteration, it therefore utilizes the nullary search operation create() which builds a genotype p.g ∈ G with a random configuration and maps this genotype to a phenotype p.x ∈ X with the genotype-phenotype mapping gpm. It checks whether the new phenotype is better than the best candidate solution discovered so far (x). If so, it is stored in x.
Algorithm 8.1: x ← randomSampling(f)
Input: f: the objective/fitness function
Data: p: the current individual
Output: x: the result, i. e., the best candidate solution discovered
1 begin
2   x ← ∅
3   repeat
4     p.g ← create()
5     p.x ← gpm(p.g)
6     if (x = ∅) ∨ (f(p.x) < f(x)) then
7       x ← p.x
8   until terminationCriterion()
9   return x
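For a one-dimensional real-valued problem, Algorithm 8.1 might be sketched compactly in Java as follows (hypothetical code with G = X = [−5, 5] and the identity mapping as gpm; the book's own implementation is Listing 56.4):

```java
// A compact sketch of Algorithm 8.1 (random sampling) for minimizing a
// one-dimensional objective function over [-5, 5].
import java.util.Random;
import java.util.function.DoubleUnaryOperator;

public class RandomSamplingSketch {

    // x <- randomSampling(f): keep the best of maxEvals random samples
    static double randomSampling(DoubleUnaryOperator f, int maxEvals, long seed) {
        Random rnd = new Random(seed);
        double best = Double.NaN;
        for (int tau = 0; tau < maxEvals; tau++) {  // terminationCriterion()
            double g = -5 + 10 * rnd.nextDouble();  // create(): random genotype
            double x = g;                           // gpm: identity mapping
            if (Double.isNaN(best) || f.applyAsDouble(x) < f.applyAsDouble(best))
                best = x;                           // remember the best phenotype
        }
        return best;
    }

    public static void main(String[] args) {
        DoubleUnaryOperator f = x -> (x - 2) * (x - 2);  // minimum at x = 2
        System.out.println(randomSampling(f, 10000, 42L));
    }
}
```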
Notice that x is the only variable used for information aggregation and that there is no form of feedback. The search makes no use of the knowledge about good structures of candidate solutions, nor does it further investigate a genotype which has produced a good phenotype. Instead, if create() samples the search space G in a uniform, unbiased way, random sampling will be completely unbiased as well.

This unbiasedness and ignorance of available information turns random sampling into one of the baseline algorithms used for checking whether an optimization method is suitable for a given problem. If an optimization algorithm does not perform better than random sampling on a given optimization problem, we can deduce that (a) it is not able to utilize the information gained from the objective function(s) efficiently (see Chapter 14 and 16), (b) its search operations cannot modify existing genotypes in a useful manner (see Section 14.2), or (c) the information given by the objective functions is largely misleading (see Chapter 15). It is well known from the No Free Lunch Theorem (see Chapter 23) that, averaged over the set of all possible optimization problems, no optimization algorithm can beat random sampling. However, for specific families of problems, and for most practically relevant problems in general, outperforming random sampling is not too complicated. Still, a comparison to this primitive algorithm is necessary whenever a new, not yet analyzed optimization task is to be solved.
8.2 Random Walks
Like random sampling, random walks¹ [899, 1302, 2143] (sometimes also called drunkard's walks) can be considered as uninformed, undirected search methods. They form the second of the two baseline algorithms to which optimization methods must be compared for specific problems in order to test their utility. We specify the random walk algorithm in Algorithm 8.2 and implement it in Listing 56.5 on page 732.

While random sampling algorithms make use of no information gathered so far, random walks base their next search move on the most recently investigated genotype p.g. Instead of sampling the search space uniformly as random sampling does, a random walk starts at one (possibly random) point. In each iteration, it uses a unary search operation (in Algorithm 8.2 simply denoted as searchOp(p.g)) to move to the next point.
Because of this structure, random walks are quite similar in their structure to basic
optimization algorithms such as Hill Climbing (see Chapter 26). The dierence is that
optimization algorithms make use of the information they gain by evaluating the candidate
¹ http://en.wikipedia.org/wiki/Random_walk [accessed 2010-08-26]
Algorithm 8.2: x ← randomWalk(f)
  Input: f: the objective/fitness function
  Data: p: the current individual
  Output: x: the result, i. e., the best candidate solution discovered
  1 begin
  2   x ← ∅
  3   p.g ← create()
  4   repeat
  5     p.x ← gpm(p.g)
  6     if (x = ∅) ∨ (f(p.x) < f(x)) then
  7       x ← p.x
  8     p.g ← searchOp(p.g)
  9   until terminationCriterion()
  10  return x
solutions with the objective functions. In any useful configuration for optimization problems
which are not too ill-defined, they should thus be able to outperform random walks and
random sampling.
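The loop of Algorithm 8.2 can be rendered in Python roughly as follows. This is an illustrative sketch rather than the book's Listing 56.5; `create`, `gpm`, `search_op`, and `f` stand for the problem-specific components, and a fixed step budget replaces the general termination criterion.

```python
import random

def random_walk(create, gpm, search_op, f, max_steps):
    """Random walk: each new genotype is derived from the most recent one
    by a unary search operation; the walk itself is not guided by f, we
    merely remember the best candidate solution encountered (minimization)."""
    g = create()
    best_x, best_y = None, float("inf")
    for _ in range(max_steps):
        x = gpm(g)
        y = f(x)
        if y < best_y:        # book-keeping only, does not influence the walk
            best_x, best_y = x, y
        g = search_op(g)      # move based solely on the current genotype
    return best_x, best_y

# Toy usage: walk over 8-bit strings by flipping one random bit per step.
def flip_one(g):
    g = list(g)
    g[random.randrange(len(g))] ^= 1
    return g

random.seed(1)
best, value = random_walk(lambda: [1] * 8, gpm=lambda g: g,
                          search_op=flip_one, f=sum, max_steps=100)
```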
In cases where they cannot, applying one of these two uninformed, primitive strategies is
advised. Indeed, since optimization methods always introduce some bias into the search
process, i. e., always try to steer into the regions of the search space where they expect a
fitness improvement, random walks or random sampling may even outperform them if the
problems are deceptive enough (see Chapter 15). Circumstances where random walks can
be the search algorithms of choice are, for example,
1. if a search algorithm encounters a state explosion because there are too many states
and transitioning between them would consume too much memory, or
2. in certain cases of online search, where it is not possible to apply systematic approaches
like breadth-first search or depth-first search. If the environment, for instance, is only
partially observable and each state transition represents an immediate interaction with
this environment, we may not be able to navigate to past states again. One example
for such a case is discussed in the work of Skubch [2516, 2517] about reasoning agents.
Random walks are often used in optimization theory for determining features of a fitness
landscape. Measures that can be derived mathematically from a walk model include estimates
for the number of steps needed until a certain configuration is found, the chance to
find a certain configuration at all, and the average difference of the objective values of two
consecutive populations. From practically running random walks, some information about
the search space can be extracted. Skubch [2516, 2517], for instance, uses the number of
encounters of certain state transitions during the walk in order to successively build a heuristic
function.
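One such measure, the average absolute difference between the objective values of two consecutive points of the walk, can be estimated directly. The following Python sketch is illustrative; the bit-flip move and all names are assumptions of the example, not notation from the book.

```python
import random

def mean_step_delta(g, search_op, f, steps):
    """Estimate the mean |f(g_t) - f(g_{t-1})| along a random walk,
    a crude indicator of the ruggedness of the fitness landscape."""
    deltas = []
    prev = f(g)
    for _ in range(steps):
        g = search_op(g)            # one random move
        cur = f(g)
        deltas.append(abs(cur - prev))
        prev = cur
    return sum(deltas) / len(deltas)

def flip_one(g):
    g = list(g)
    g[random.randrange(len(g))] ^= 1
    return g

random.seed(7)
# On the OneMax-like landscape f = sum, every one-bit flip changes the
# objective value by exactly 1, so the estimate is 1.0.
d = mean_step_delta([0] * 16, flip_one, f=sum, steps=500)
```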
Figure 8.1 illustrates some examples for random walks. The subfigures 8.1.a to 8.1.c
each illustrate three random walks in which axis-parallel steps of length 1 are taken. The
dimensions range from 1 to 3 from the left to the right. The same goes for Fig. 8.1.d to
Fig. 8.1.f, each of which displays three random walks with Gaussian (standard-normally)
distributed step widths and arbitrary step directions.

[Figure 8.1: Some examples for random walks. Fig. 8.1.a: 1d, step length 1; Fig. 8.1.b: 2d, axis-parallel, step length 1; Fig. 8.1.c: 3d, axis-parallel, step length 1; Fig. 8.1.d: 1d, Gaussian step length; Fig. 8.1.e: 2d, arbitrary angle, Gaussian step length; Fig. 8.1.f: 3d, arbitrary angle, Gaussian step length.]

8.2.1 Adaptive Walks

An adaptive walk is a theoretical optimization method which, like a random walk, usually
works on a population of size 1. It starts at a random location in the search space and
proceeds by changing (or mutating) its single individual. For this modification, three methods
are available:
1. One-mutant change: The optimization process chooses a single new individual from
the set of one-mutant change neighbors. These neighbors differ from the current
candidate solution in only one property, i. e., are in its ε = 0-neighborhood. If the new
individual is better, it replaces its ancestor; otherwise, it is discarded.
2. Greedy dynamics: The optimization process chooses a single new individual from the
set of one-mutant change neighbors. If it is not better than the current candidate
solution, the search continues until a better one has been found or all neighbors have
been enumerated. The major difference to the previous form is the number of steps
that are needed per improvement.
3. Fitter dynamics: The optimization process enumerates all one-mutant neighbors of
the current candidate solution and transcends to the best one.
From these elaborations, it becomes clear that adaptive walks are very similar to Hill Climbing
and Random Optimization. The major difference is that an adaptive walk is a theoretical
construct that, very much like random walks, helps us to determine properties of fitness
landscapes whereas the other two are practical realizations of optimization algorithms.
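For bit-string search spaces, the one-mutant change and fitter dynamics variants can be sketched as follows (an illustrative Python rendering under the assumption of minimization; greedy dynamics would sit between the two, testing neighbors until the first improvement is found):

```python
import random

def one_mutant_neighbors(g):
    """All bit strings that differ from g in exactly one position."""
    for i in range(len(g)):
        n = list(g)
        n[i] ^= 1
        yield n

def adaptive_walk(g, f, steps, variant="one-mutant"):
    """Adaptive walk on bit strings (minimization).
    'one-mutant': sample one random neighbor, accept only if better.
    'fitter':     enumerate all neighbors, move to the best if it improves."""
    for _ in range(steps):
        if variant == "one-mutant":
            cand = random.choice(list(one_mutant_neighbors(g)))
            if f(cand) < f(g):
                g = cand
        else:  # fitter dynamics
            best = min(one_mutant_neighbors(g), key=f)
            if f(best) < f(g):
                g = best
    return g

# Toy usage: fitter dynamics removes one 1-bit per step and then stalls
# in the all-zero optimum of f = sum.
random.seed(0)
result = adaptive_walk([1] * 6, f=sum, steps=20, variant="fitter")
```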
Adaptive walks are a very common construct in evolutionary biology. Biological
populations run for a very long time and so their genetic compositions are assumed
to be relatively converged [79, 1057]. The dynamics of such populations in near-equilibrium
states with low mutation rates can be approximated with one-mutant adaptive
walks [79, 1057, 2528].
8.3 Exhaustive Enumeration

Exhaustive enumeration is an optimization method which is just as primitive as random
walks. It can be applied to any finite search space and basically consists of testing every single
possible solution for its utility, returning the best candidate solution discovered after all have
been tested. If the minimum possible objective value y is known for a given minimization
problem, the algorithm can also terminate sooner.
Algorithm 8.3: x ← exhaustiveSearch(f, y)
  Input: f: the objective/fitness function
  Input: y: the lowest possible objective value, −∞ if unknown
  Data: x̆: the current candidate solution
  Data: t: the iteration counter
  Output: x: the result, i. e., the best candidate solution discovered
  1 begin
  2   x ← gpm(G[0])
  3   t ← 1
  4   while (t < |G|) ∧ (f(x) > y) do
  5     x̆ ← gpm(G[t])
  6     if f(x̆) < f(x) then x ← x̆
  7     t ← t + 1
  8   return x
Algorithm 8.3 contains a definition of the exhaustive search algorithm. It is based on the
assumption that the search space G can be traversed in a linear order, where the t-th element
in the search space can be accessed as G[t]. The second parameter of the algorithm is the
lower limit y of the objective value. If this limit is known, the algorithm may terminate
before traversing the complete search space if it discovers a global optimum with f(x) = y
earlier. If no lower limit for f is known, y = −∞ can be provided.
It is interesting to see that for many hard problems such as NP-complete ones (Section
12.2.2), no algorithms are known which can generally be faster than exhaustive enumeration.
For an input size of n, algorithms for such problems need O(2ⁿ) iterations.
Exhaustive search requires exactly the same complexity. If G = Bⁿ, i. e., the bit strings of
length n, the algorithm will need exactly 2ⁿ steps if no lower limit y for the value of f is
known.
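For a finite search space that can be traversed in a linear order, Algorithm 8.3 translates into Python roughly as follows. The sketch is illustrative; the identity genotype-phenotype mapping and the bit-string search space in the usage example are assumptions, not part of the algorithm.

```python
from itertools import product

def exhaustive_search(G, gpm, f, y_lower=float("-inf")):
    """Test every genotype in the indexable sequence G and return the best
    candidate solution (minimization). Terminates early as soon as the
    known lower limit y_lower of f is reached."""
    best_x = gpm(G[0])
    best_y = f(best_x)
    for t in range(1, len(G)):
        if best_y <= y_lower:       # a global optimum has been found
            break
        x = gpm(G[t])
        y = f(x)
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

# Toy usage: enumerate all 2**n bit strings of length n = 4 and minimize
# the number of ones; the lower limit 0 allows early termination.
G = [list(bits) for bits in product((0, 1), repeat=4)]
best, value = exhaustive_search(G, gpm=lambda g: g, f=sum, y_lower=0)
```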
Tasks T8
33. What are the similarities and differences between random walks, random sampling,
and adaptive walks?
[10 points]
34. Why should we consider an optimization algorithm as ineffective for a given problem
if it does not perform better than random sampling or a random walk on the problem?
[5 points]
35. Is it possible that an optimization algorithm such as, for example, Algorithm 1.1 on
page 40 may give us even worse results than a random walk or random sampling? Give
reasons. If yes, try to find a problem (problem space, objective function) where this
could be the case.
[5 points]
36. Implement random sampling for the bin packing problem as a comparison algorithm
for Algorithm 1.1 on page 40 and test it on the same problem instances used to
verify Task 4 on page 42. Discuss your results.
[10 points]
37. Compare your answers to Task 35 and 36. Is the experiment in Task 36 suitable
to validate your hypothesis? If not, can you find some instances of the bin packing
problem where it can be used to verify your idea? If so, apply both Algorithm 1.1
on page 40 and your random sampling implementation to these instances. Did the
experiments support your hypothesis?
[15 points]
38. Can we implement an exhaustive search pattern to find the minimum of a mathematical
function f defined over the real interval (−1, 1)? Answer this question both
from
(a) a theoretical-mathematical perspective in an idealized execution environment
(i. e., with infinite precision) and
(b) from a practical point of view based on real-world computers and requirements
(i. e., under the assumption that the available precision of floating point numbers
and of any given production process is limited).
[10 points]
39. Implement an exhaustive-search based optimization procedure for the bin packing
problem as a comparison algorithm for Algorithm 1.1 on page 40 and test it on the
same problem instances used to verify Task 4 on page 42 and 36. Discuss your results.
[10 points]
40. Implement an exhaustive search pattern to solve the instance of the Traveling Salesman
Problem given in Example E2.2 on page 45.
[10 points]
Chapter 9
Forma Analysis
The Schema Theorem has been stated for Genetic Algorithms by Holland [1250] in his seminal
work [711, 1250, 1253]. In this section, we are going to discuss it in the more general version
from Weicker [2879] as introduced by Radcliffe and Surry [2241–2243, 2246, 2634].
Definition D9.1 (Property). A property φ is a function which maps individuals, genotypes,
or phenotypes to a set of possible property values.

Each individual p in the population pop of an optimization algorithm is characterized by
its properties φ. Optimizers focus mainly on the phenotypical properties since these are
evaluated by the objective functions. In the forma analysis, the properties of the genotypes
are considered as well, since they influence the features of the phenotypes via the genotype-phenotype
mapping. With Example E9.1, we want to clarify what possible properties could
look like.
Example E9.1 (Examples for Properties).
A rather structural property φ₁ of formulas z : R → R in symbolic regression¹ would be
whether it contains the mathematical expression x + 1 or not: φ₁ : X → {true, false}.
We can also declare a behavioral property φ₂ which is true if |z(0) − 1| ≤ 0.1 holds, i. e., if
the result of z is close to the value 1 for the input 0, and false otherwise: φ₂ = (|z(0) − 1| ≤ 0.1) ∀z ∈ X.
Assume that the optimizer uses a binary search space G = Bⁿ and that the formulas
z are decoded from there to trees that represent mathematical expressions by a genotype-phenotype
mapping. A genotypical property φ₃ then would be whether a certain sequence of bits
occurs in the genotype p.g, and another phenotypical property φ₄ is the number of nodes in
the phenotype p.x, for instance.
If we try to solve a graph-coloring problem, for example, a property φ₅ : X → {black, white, gray}
could denote the color of a specific vertex q as illustrated in Figure 9.1.
In general, we can consider properties φᵢ to be some sort of functions that map the
individuals to property values. φ₁ and φ₂ both map the space of mathematical functions to
the set B = {true, false} whereas φ₅ maps the space of all possible colorings for the given
graph to the set {white, gray, black} and property φ₄ maps trees to the natural numbers.

¹ More information on symbolic regression can be found in Section 49.1 on page 531.
[Figure 9.1: A graph coloring-based example for properties and formae. The vertices q1 to q8 of a coloring drawn from Pop ⊆ (G × X) are grouped into the formae A_{φ₅=white}, A_{φ₅=gray}, and A_{φ₅=black} according to their color.]
On the basis of the properties φᵢ we can define equivalence relations² ∼φᵢ:

p₁ ∼φᵢ p₂ ⇔ φᵢ(p₁) = φᵢ(p₂) ∀p₁, p₂ ∈ G × X   (9.1)

Obviously, for each two candidate solutions x₁ and x₂, either x₁ ∼φᵢ x₂ or x₁ ≁φᵢ x₂ holds.
These relations divide the search space into equivalence classes A_{φᵢ=v}.
Definition D9.2 (Forma). An equivalence class A_{φᵢ=v} that contains all the individuals
sharing the same characteristic v in terms of the property φᵢ is called a forma [2241] or
predicate [2826].

A_{φᵢ=v} = {p ∈ G × X : φᵢ(p) = v}   (9.2)
p₁, p₂ ∈ A_{φᵢ=v} ⇒ p₁ ∼φᵢ p₂   (9.3)
The number of formae induced by a property, i. e., the number of its different characteristics,
is called its precision [2241]. Two formae A_{φᵢ=v} and A_{φⱼ=w} are said to be compatible, written
as A_{φᵢ=v} ⋈ A_{φⱼ=w}, if there can exist at least one individual which is an instance of both.

A_{φᵢ=v} ⋈ A_{φⱼ=w} ⇔ A_{φᵢ=v} ∩ A_{φⱼ=w} ≠ ∅   (9.4)
A_{φᵢ=v} ⋈ A_{φⱼ=w} ⇔ ∃p ∈ G × X : p ∈ A_{φᵢ=v} ∧ p ∈ A_{φⱼ=w}   (9.5)
A_{φᵢ=v} ⋈ A_{φᵢ=w} ⇒ w = v   (9.6)

Of course, two different formae of the same property φᵢ, i. e., two different characteristics of
φᵢ, are always incompatible.
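To make these definitions concrete, the following Python sketch (illustrative, not part of the book's code base) partitions a set of candidate solutions into formae via a property function and tests the compatibility of two formae by set intersection, mirroring Equation 9.4:

```python
def formae(candidates, prop):
    """Partition candidates into equivalence classes A_{prop=v}: all
    candidates that share the property value v form one forma."""
    classes = {}
    for p in candidates:
        classes.setdefault(prop(p), set()).add(p)
    return classes

def compatible(forma_a, forma_b):
    """Two formae are compatible iff at least one individual is an
    instance of both (Equation 9.4)."""
    return len(forma_a & forma_b) > 0

# Toy usage on bit strings: prop1 is the first bit, prop2 the parity of ones.
candidates = {"00", "01", "10", "11"}
prop1 = lambda s: s[0]
prop2 = lambda s: s.count("1") % 2
A1 = formae(candidates, prop1)   # two formae: first bit '0' or '1'
A2 = formae(candidates, prop2)   # two formae: even or odd number of ones
```

Formae of different properties can be compatible (first bit '0' and odd parity share the string '01'), while two different formae of the same property never are.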
Example E9.2 (Examples for Formae (Example E9.1 Cont.)).
The precision of φ₁ and φ₂ in Example E9.1 is 2, for φ₅ it is 3. We can define another
property φ₆ ≡ z(0) denoting the value a mathematical function has for the input 0. This
property would have an uncountably infinitely large precision.

² See the definition of equivalence classes in Section 51.9 on page 646.
[Figure 9.2: Example for formae in symbolic regression. Eight sample functions z₁ to z₈ (among them z₁(x) = x + 1) are partitioned into the formae induced by the properties φ₁, φ₂, and φ₄.]
In our initial symbolic regression example, A_{φ₁=true} and A_{φ₁=false} are incompatible, since it is not
possible that a function z contains a term x + 1 and at the same time does not contain
it. All formae of the properties φ₁ and φ₂ on the other hand are compatible: A_{φ₁=false} ⋈ A_{φ₂=false},
A_{φ₁=false} ⋈ A_{φ₂=true}, A_{φ₁=true} ⋈ A_{φ₂=false}, and A_{φ₁=true} ⋈ A_{φ₂=true} all
hold. If we take φ₆ into consideration, we will find that there exist some formae compatible
with some of φ₂ and some that are not, like A_{φ₂=true} ⋈ A_{φ₆=1} and A_{φ₂=false} ⋈ A_{φ₆=2}, whereas
A_{φ₂=true} and A_{φ₆=0} as well as A_{φ₂=false} and A_{φ₆=0.95} are incompatible.
The discussion of formae and their dependencies stems from the Evolutionary Algorithm
community and there especially from the supporters of the Building Block Hypothesis.
The idea is that an optimization algorithm should first discover formae which have a good
influence on the overall fitness of the candidate solutions. The hope is that there are many
compatible ones among these formae that can gradually be combined in the search process.
In this text we have defined formae and the corresponding terms on the basis of individuals
p, which are records that assign an element of the problem space p.x ∈ X to an element
of the search space p.g ∈ G. Generally, we will relax this notation and also discuss formae
directly in the context of the search space G or problem space X, when appropriate.
Tasks T9
41. Define at least three reasonable properties on the individuals, genotypes, or phenotypes
of the Traveling Salesman Problem as introduced in Example E2.2 on page 45 and
further discussed in Task 26 on page 91.
[6 points]
42. List the relevant formae for each of the properties you have defined, based on your
definitions in Task 41 and in Task 26 on page 91.
[12 points]
43. List all the members of your formae given in Task 42 for the Traveling Salesman Problem
instance provided in Table 2.1 on page 46.
[10 points]
Chapter 10
General Information on Optimization
10.1 Applications and Examples
Table 10.1: Applications and Examples of Optimization.
Area References
Art see Tables 28.4, 29.1, and 31.1
Astronomy see Table 28.4
Business-To-Business see Table 3.1
Chemistry [623]; see also Tables 3.1, 22.1, 28.4, 29.1, 30.1, 31.1, 32.1,
40.1, and 43.1
Cloud Computing [2301, 2790, 3022, 3047]
Combinatorial Problems [5, 17, 115, 118, 130, 131, 178, 213, 241, 250, 251, 265, 315,
431, 432, 463, 475, 497, 533, 568, 571, 576, 578, 626–628,
671, 680, 682, 783, 806, 822–825, 832, 868, 1008, 1026, 1027,
1042, 1089, 1091, 1096, 1124, 1239, 1264, 1280, 1308, 1313,
1472, 1497, 1501, 1502, 1533, 1585, 1625, 1694, 1723, 1724,
1834, 1850, 1851, 1879, 1998, 2000, 2066, 2124, 2131, 2132,
2172, 2219, 2223, 2248, 2256, 2257, 2261, 2262, 2288–2290,
2330, 2422, 2449, 2508, 2511, 2512, 2667, 2673, 2692, 2717,
2812, 2813, 2817, 2885, 2900, 2990, 3016, 3017, 3047]; see
also Tables 3.1, 3.2, 18.1, 20.1, 22.1, 25.1, 26.1, 27.1, 28.4,
29.1, 30.1, 31.1, 33.1, 34.1, 35.1, 36.1, 37.1, 38.1, 39.1, 40.1,
41.1, 42.1, 46.1, 46.3, 47.1, and 48.1
Computer Graphics [45, 327, 450, 844, 1224, 2506]; see also Tables 3.1, 3.2, 20.1,
27.1, 28.4, 29.1, 30.1, 31.1, 33.1, and 36.1
Control [1922]; see also Tables 28.4, 29.1, 31.1, 32.1, and 35.1
Cooperation and Teamwork [2238]; see also Tables 22.1, 28.4, 29.1, 31.1, and 37.1
Data Compression see Table 31.1
Data Mining [37, 40, 92, 115, 126, 189, 203, 217, 277, 281, 305, 310, 406,
450, 538, 656, 766, 805, 849, 889, 930, 961, 968, 975, 983,
991, 1035, 1036, 1100, 1168, 1176, 1194, 1259, 1425, 1503,
1959, 2070, 2088, 2202, 2234, 2272, 2312, 2319, 2357, 2459,
2506, 2602, 2671, 2788, 2790, 2824, 2839, 2849, 2939, 2961,
2999, 3047, 3066, 3077, 3078, 3103]; see also Tables 3.1, 20.1,
21.1, 22.1, 27.1, 28.4, 29.1, 31.1, 32.1, 34.1, 35.1, 36.1, 37.1,
39.1, and 46.3
Databases [115, 130, 131, 213, 533, 568, 571, 783, 991, 1096, 1308, 1585,
2692, 2824, 2971, 3016, 3017]; see also Tables 3.1, 3.2, 21.1,
26.1, 27.1, 28.4, 29.1, 31.1, 35.1, 36.1, 37.1, 39.1, 40.1, 46.3,
and 47.1
Decision Making [1194]; see also Tables 3.1, 3.2, 20.1, and 31.1
Digital Technology see Tables 3.1, 26.1, 28.4, 29.1, 30.1, 31.1, 34.1, and 35.1
Distributed Algorithms and Systems [127, 204, 226, 330–333, 492, 495, 506, 533, 565, 576, 617,
628, 650, 671, 822, 823, 1101, 1168, 1280, 1342, 1425, 1472,
1497, 1561, 1645, 1647, 1650, 1690, 1777, 1850, 1851, 1998,
2000, 2066, 2070, 2172, 2202, 2238, 2256, 2257, 2301, 2319,
2330, 2375, 2511, 2512, 2602, 2638, 2716, 2790, 2839, 2878,
2990, 3017, 3022, 3058]; see also Tables 3.1, 3.2, 18.1, 21.1,
22.1, 25.1, 26.1, 27.1, 28.4, 29.1, 30.1, 31.1, 32.1, 33.1, 34.1,
35.1, 36.1, 37.1, 39.1, 40.1, 41.1, 46.1, 46.3, and 47.1
E-Learning [506, 565, 2319]; see also Tables 28.4, 29.1, 37.1, and 39.1
Economics and Finances [204, 281, 310, 559, 1194, 1998, 2257, 2301, 2330, 2878, 3022];
see also Tables 3.1, 3.2, 27.1, 28.4, 29.1, 31.1, 33.1, 35.1, 36.1,
37.1, 39.1, 40.1, 46.1, and 46.3
Engineering [189, 199, 312, 1101, 1194, 1219, 1227, 1431, 1690, 1777, 2088,
2375, 2459, 2716, 2839, 2878, 3004, 3047, 3077, 3078]; see also
Tables 3.1, 3.2, 18.1, 21.1, 22.1, 26.1, 27.1, 28.4, 29.1, 30.1,
31.1, 32.1, 33.1, 34.1, 35.1, 36.1, 37.1, 39.1, 40.1, 41.1, 46.3,
and 47.1
Face Recognition [471]
Function Optimization [72, 613, 1136, 1137, 1188, 1942, 2333, 2708]; see also Tables
3.2, 21.1, 26.1, 27.1, 28.4, 29.1, 30.1, 32.1, 33.1, 34.1, 35.1,
36.1, 39.1, 40.1, 41.1, and 43.1
Games [1954, 2383–2385]; see also Tables 3.1, 28.4, 29.1, 31.1, and
32.1
Graph Theory [241, 576, 628, 650, 1219, 1263, 1472, 1625, 1690, 2172, 2375,
2716, 2839]; see also Tables 3.1, 18.1, 21.1, 22.1, 25.1, 26.1,
27.1, 28.4, 29.1, 30.1, 31.1, 33.1, 34.1, 35.1, 36.1, 37.1, 38.1,
39.1, 40.1, 41.1, 46.1, 46.3, and 47.1
Healthcare [45, 3103]; see also Tables 28.4, 29.1, 31.1, 35.1, 36.1, and
37.1
Image Synthesis [1637]; see also Tables 18.1 and 28.4
Logistics [118, 178, 241, 250, 251, 265, 495, 497, 578, 627, 680, 806,
824, 825, 832, 868, 1008, 1027, 1089, 1091, 1124, 1313, 1501,
1625, 1694, 1723, 1724, 1777, 2131, 2132, 2248, 2261, 2262,
2288–2290, 2508, 2673, 2717, 2812, 2813, 2817, 3047]; see
also Tables 3.1, 18.1, 22.1, 25.1, 26.1, 27.1, 28.4, 29.1, 30.1,
33.1, 34.1, 36.1, 37.1, 38.1, 39.1, 40.1, 41.1, 46.3, 47.1, and
48.1
Mathematics [405, 766, 930, 1020, 2200, 2349, 2357, 2418, 2572, 2885, 2939,
3078]; see also Tables 3.1, 18.1, 26.1, 27.1, 28.4, 29.1, 30.1,
31.1, 32.1, 33.1, 34.1, and 35.1
Military and Defense see Tables 3.1, 26.1, 27.1, 28.4, and 34.1
Motor Control see Tables 18.1, 22.1, 28.4, 33.1, 34.1, 36.1, and 39.1
Multi-Agent Systems [1818, 2081, 2667, 2919]; see also Tables 3.1, 18.1, 22.1, 28.4,
29.1, 31.1, 35.1, and 37.1
Multiplayer Games [127]; see also Table 31.1
News Industry [1425]
Nutrition [2971]
Operating Systems [2900]
Physics [1879]; see also Tables 3.1, 27.1, 28.4, 29.1, 31.1, 32.1, and
34.1
Police and Justice [471]
Prediction [405]; see also Tables 28.4, 29.1, 30.1, 31.1, and 35.1
Security [2459, 2878, 3022]; see also Tables 3.1, 22.1, 26.1, 27.1, 28.4,
29.1, 31.1, 35.1, 36.1, 37.1, 39.1, 40.1, and 46.3
Shape Synthesis see Table 26.1
Software [822, 823, 827, 1008, 1027, 1168, 1651, 1850, 1851, 2000, 2066,
2131, 2132, 2330, 2961, 2971, 3058]; see also Tables 3.1, 18.1,
20.1, 21.1, 22.1, 25.1, 26.1, 27.1, 28.4, 29.1, 30.1, 31.1, 33.1,
34.1, 36.1, 37.1, 40.1, 46.1, and 46.3
Sorting see Tables 28.4 and 31.1
Telecommunication see Tables 28.4, 37.1, and 40.1
Testing [1168, 1651]; see also Tables 3.1, 28.4, and 31.3
Theorem Proofing and Automatic Verification
see Tables 29.1, 40.1, and 46.3
Water Supply and Management see Tables 3.1 and 28.4
Wireless Communication [1219]; see also Tables 3.1, 18.1, 22.1, 26.1, 27.1, 28.4, 29.1,
31.1, 33.1, 34.1, 35.1, 36.1, 37.1, 39.1, 40.1, and 46.3
10.2 Books
1. Handbook of Evolutionary Computation [171]
2. New Ideas in Optimization [638]
3. Scalable Optimization via Probabilistic Modeling – From Algorithms to Applications [2158]
4. Handbook of Metaheuristics [1068]
5. New Optimization Techniques in Engineering [2083]
6. Metaheuristics for Multiobjective Optimisation [1014]
7. Selected Papers from the Workshop on Pattern Directed Inference Systems [2869]
8. Applications of Multi-Objective Evolutionary Algorithms [597]
9. Handbook of Applied Optimization [2125]
10. The Traveling Salesman Problem: A Computational Study [118]
11. Intelligent Systems for Automated Learning and Adaptation: Emerging Trends and Applica-
tions [561]
12. Optimization and Operations Research [773]
13. Readings in Artificial Intelligence: A Collection of Articles [2872]
14. Machine Learning: Principles and Techniques [974]
15. Experimental Methods for the Analysis of Optimization Algorithms [228]
16. Evolutionary Multiobjective Optimization – Theoretical Advances and Applications [8]
17. Evolutionary Algorithms for Solving Multi-Objective Problems [599]
18. Pareto Optimality, Game Theory and Equilibria [560]
19. Pattern Classication [844]
20. The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization [1694]
21. Parallel Evolutionary Computations [2015]
22. Introduction to Global Optimization [2126]
23. Global Optimization Algorithms – Theory and Application [2892]
24. Computational Intelligence in Control [1922]
25. Introduction to Operations Research [580]
26. Management Models and Industrial Applications of Linear Programming [530]
27. Efficient and Accurate Parallel Genetic Algorithms [477]
28. Swarm Intelligence – Focus on Ant and Particle Swarm Optimization [525]
29. Nonlinear Programming: Sequential Unconstrained Minimization Techniques [908]
30. Nonlinear Programming: Sequential Unconstrained Minimization Techniques [909]
31. Global Optimization: Deterministic Approaches [1271]
32. Soft Computing: Methodologies and Applications [1242]
33. Handbook of Global Optimization [1272]
34. New Learning Paradigms in Soft Computing [1430]
35. How to Solve It: Modern Heuristics [1887]
36. Numerical Optimization [2040]
37. Artificial Intelligence: A Modern Approach [2356]
38. Electromagnetic Optimization by Genetic Algorithms [2250]
39. Soft Computing: Integrating Evolutionary, Neural, and Fuzzy Systems [2685]
40. Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations
[2961]
41. Multi-Objective Swarm Intelligent Systems: Theory & Experiences [2016]
42. Evolutionary Optimization [2394]
43. Rule-Based Evolutionary Online Learning Systems: A Principled Approach to LCS Analysis
and Design [460]
44. Multi-Objective Optimization in Computational Intelligence: Theory and Practice [438]
45. An Operations Research Approach to the Economic Optimization of a Kraft Pulping Process
[500]
46. Fundamental Methods of Mathematical Economics [559]
47. Einführung in die Künstliche Intelligenz [797]
48. Stochastic and Global Optimization [850]
49. Machine Learning Applications in Expert Systems and Information Retrieval [975]
50. Nonlinear and Dynamics Programming [1162]
51. Introduction to Operations Research [1232]
52. Introduction to Stochastic Search and Optimization [2561]
53. Nonlinear Multiobjective Optimization [1893]
54. Heuristics: Intelligent Search Strategies for Computer Problem Solving [2140]
55. Global Optimization with Non-Convex Constraints [2623]
56. The Nature of Statistical Learning Theory [2788]
57. Advances in Greedy Algorithms [247]
58. Multiobjective Decision Making – Theory and Methodology [528]
59. Multiobjective Optimization: Principles and Case Studies [609]
60. Multi-Objective Optimization Using Evolutionary Algorithms [740]
61. Multicriteria Optimization [860]
62. Multiple Criteria Optimization: Theory, Computation and Application [2610]
63. Global Optimization [2714]
64. Soft Computing [760]
10.3 Conferences and Workshops
Table 10.2: Conferences and Workshops on Optimization.
IJCAI: International Joint Conference on Artificial Intelligence
History: 2009/07: Pasadena, CA, USA, see [373]
2007/01: Hyderabad, India, see [2797]
2005/07: Edinburgh, Scotland, UK, see [1474]
2003/08: Acapulco, Mexico, see [1118, 1485]
2001/08: Seattle, WA, USA, see [2012]
1999/07: Stockholm, Sweden, see [730, 731, 2296]
1997/08: Nagoya, Japan, see [1946, 1947, 2260]
1995/08: Montreal, QC, Canada, see [1836, 1861, 2919]
1993/08: Chambery, France, see [182, 183, 927, 2259]
1991/08: Sydney, NSW, Australia, see [831, 1987, 1988, 2122]
1989/08: Detroit, MI, USA, see [2593, 2594]
1987/08: Milano, Italy, see [1846, 1847]
1985/08: Los Angeles, CA, USA, see [1469, 1470]
1983/08: Karlsruhe, Germany, see [448, 449]
1981/08: Vancouver, BC, Canada, see [1212, 1213]
1979/08: Tokyo, Japan, see [2709, 2710]
1977/08: Cambridge, MA, USA, see [2281, 2282]
1975/09: Tbilisi, Georgia, USSR, see [2678]
1973/08: Stanford, CA, USA, see [2039]
1971/09: London, UK, see [630]
1969/05: Washington, DC, USA, see [2841]
AAAI: Annual National Conference on Artificial Intelligence
History: 2011/08: San Francisco, CA, USA, see [3]
2010/07: Atlanta, GA, USA, see [2]
2008/06: Chicago, IL, USA, see [979]
2007/07: Vancouver, BC, Canada, see [1262]
2006/07: Boston, MA, USA, see [1055]
2005/07: Pittsburgh, PA, USA, see [1484]
2004/07: San Jose, CA, USA, see [1849]
2002/07: Edmonton, Alberta, Canada, see [759]
2000/07: Austin, TX, USA, see [1504]
1999/07: Orlando, FL, USA, see [1222]
1998/07: Madison, WI, USA, see [1965]
1997/07: Providence, RI, USA, see [1634, 1954]
1996/08: Portland, OR, USA, see [586, 2207]
1994/07: Seattle, WA, USA, see [1214]
1993/07: Washington, DC, USA, see [913]
1992/07: San Jose, CA, USA, see [2639]
1991/07: Anaheim, CA, USA, see [732]
1990/07: Boston, MA, USA, see [793]
1988/08: St. Paul, MN, USA, see [1916]
1987/07: Seattle, WA, USA, see [960]
1986/08: Philadelphia, PA, USA, see [1512, 1513]
1984/08: Austin, TX, USA, see [381]
1983/08: Washington, DC, USA, see [1041]
1982/08: Pittsburgh, PA, USA, see [2845]
1980/08: Stanford, CA, USA, see [197]
HIS: International Conference on Hybrid Intelligent Systems
History: 2011/12: Melaka, Malaysia, see [14]
2010/08: Atlanta, GA, USA, see [1377]
2009/08: Shenyang, Liáoníng, China, see [1375]
2008/10: Barcelona, Catalonia, Spain, see [2993]
2007/09: Kaiserslautern, Germany, see [1572]
2006/12: Auckland, New Zealand, see [1340]
2005/11: Rio de Janeiro, RJ, Brazil, see [2014]
2004/12: Kitakyushu, Japan, see [1421]
2003/12: Melbourne, VIC, Australia, see [10]
2002/12: Santiago, Chile, see [9]
2001/12: Adelaide, SA, Australia, see [7]
IAAI: Conference on Innovative Applications of Artificial Intelligence
History: 2011/08: San Francisco, CA, USA, see [4]
2010/07: Atlanta, GA, USA, see [2361]
2009/07: Pasadena, CA, USA, see [1164]
2006/07: Boston, MA, USA, see [1055]
2005/07: Pittsburgh, PA, USA, see [1484]
2004/07: San Jose, CA, USA, see [1849]
2003/08: Acapulco, Mexico, see [2304]
2001/04: Seattle, WA, USA, see [1238]
2000/07: Austin, TX, USA, see [1504]
1999/07: Orlando, FL, USA, see [1222]
1998/07: Madison, WI, USA, see [1965]
1997/07: Providence, RI, USA, see [1634]
1996/08: Portland, OR, USA, see [586]
1995/08: Montreal, QC, Canada, see [51]
1994/08: Seattle, WA, USA, see [462]
1993/07: Washington, DC, USA, see [1]
1992/07: San Jose, CA, USA, see [2439]
1991/06: Anaheim, CA, USA, see [2533]
1990/05: Washington, DC, USA, see [2269]
1989/03: Stanford, CA, USA, see [2428]
ICCI: IEEE International Conference on Cognitive Informatics
History: 2011/08: Banff, AB, Canada, see [200]
2010/07: Beijing, China, see [2630]
2009/06: Kowloon, Hong Kong / Xiānggǎng, China, see [164]
2008/08: Stanford, CA, USA, see [2857]
2007/08: Lake Tahoe, CA, USA, see [3065]
2006/07: Beijing, China, see [3029]
2005/08: Irvine, CA, USA, see [1339]
2004/08: Victoria, Canada, see [524]
2003/08: London, UK, see [2133]
2002/08: Calgary, AB, Canada, see [1337]
ICML: International Conference on Machine Learning
History: 2011/06: Seattle, WA, USA, see [2442]
2010/06: Haifa, Israel, see [1163]
2009/06: Montreal, QC, Canada, see [681]
2008/07: Helsinki, Finland, see [603]
2007/06: Corvallis, OR, USA, see [1047]
2006/06: Pittsburgh, PA, USA, see [602]
2005/08: Bonn, North Rhine-Westphalia, Germany, see [724]
2004/07: Banff, AB, Canada, see [426]
2003/08: Washington, DC, USA, see [895]
2002/07: Sydney, NSW, Australia, see [2382]
2001/06: Williamstown, MA, USA, see [427]
2000/06: Stanford, CA, USA, see [1676]
1999/06: Bled, Slovenia, see [401]
1998/07: Madison, WI, USA, see [2471]
1997/07: Nashville, TN, USA, see [926]
1996/07: Bari, Italy, see [2367]
1995/07: Tahoe City, CA, USA, see [2221]
1994/07: New Brunswick, NJ, USA, see [601]
1993/06: Amherst, MA, USA, see [1945]
1992/07: Aberdeen, Scotland, UK, see [2520]
1991/06: Evanston, IL, USA, see [313]
1990/06: Austin, TX, USA, see [2206]
1989/06: Ithaca, NY, USA, see [2447]
1988/06: Ann Arbor, MI, USA, see [1659, 1660]
1987: Irvine, CA, USA, see [1415]
1985: Skytop, PA, USA, see [2519]
1983: Monticello, IL, USA, see [1936]
1980: Pittsburgh, PA, USA, see [2183]
MCDM: Multiple Criteria Decision Making
History: 2011/06: Jyväskylä, Finland, see [1895]
2009/06: Chéngdū, Sìchuān, China, see [2750]
2008/06: Auckland, New Zealand, see [861]
2006/06: Chania, Crete, Greece, see [3102]
2004/08: Whistler, BC, Canada, see [2875]
2002/02: Semmering, Austria, see [1798]
2000/06: Ankara, Turkey, see [1562]
1998/06: Charlottesville, VA, USA, see [1166]
1997/01: Cape Town, South Africa, see [2612]
1995/06: Hagen, North Rhine-Westphalia, Germany, see [891]
1994/08: Coimbra, Portugal, see [592]
1992/07: Taipei, Taiwan, see [2744]
1990/08: Fairfax, VA, USA, see [2551]
1988/08: Manchester, UK, see [1759]
1986/08: Kyoto, Japan, see [2410]
1984/06: Cleveland, OH, USA, see [1165]
1982/08: Mons, Brussels, Belgium, see [1190]
1980/08: Newark, DE, USA, see [1960]
1979/08: Hagen/Königswinter, West Germany, see [890]
1977/08: Buffalo, NY, USA, see [3094]
1975/05: Jouy-en-Josas, France, see [2701]
SEAL: International Conference on Simulated Evolution and Learning
History: 2012/12: Hanoi, Vietnam, see [1179]
2010/12: Kanpur, Uttar Pradesh, India, see [2585]
2008/12: Melbourne, VIC, Australia, see [1729]
2006/10: Héféi, Ānhuī, China, see [2856]
2004/10: Busan, South Korea, see [2123]
2002/11: Singapore, see [2661]
2000/10: Nagoya, Japan, see [1991]
1998/11: Canberra, Australia, see [1852]
1996/11: Taejon, South Korea, see [2577]
AI: Australian Joint Conference on Artificial Intelligence
History: 2010/12: Adelaide, SA, Australia, see [33]
2006/12: Hobart, Australia, see [2406]
2005/12: Sydney, NSW, Australia, see [3072]
2004/12: Cairns, Australia, see [2871]
AISB: Artificial Intelligence and Simulation of Behaviour
History: 2008/04: Aberdeen, Scotland, UK, see [1147]
2007/04: Newcastle upon Tyne, UK, see [2549]
2005/04: Bristol, UK and Hatfield, England, UK, see [2547, 2548]
2003/03: Aberystwyth, Wales, UK and Leeds, UK, see [2545, 2546]
2002/04: London, UK, see [2544]
2001/03: Heslington, York, UK, see [2759]
2000/04: Birmingham, UK, see [2748]
1997/04: Manchester, UK, see [637]
1996/04: Brighton, UK, see [938]
1995/04: Sheffield, UK, see [937]
1994/04: Leeds, UK, see [936]
BIONETICS: International Conference on Bio-Inspired Models of Network, Information, and Computing Systems
History: 2010/12: Boston, MA, USA, see [367]
2009/12: Avignon, France, see [1275]
2008/11: Awaji City, Hyogo, Japan, see [153]
2007/12: Budapest, Hungary, see [1342]
2006/12: Cavalese, Italy, see [509]
ICARIS: International Conference on Artificial Immune Systems
History: 2011/07: Cambridge, UK, see [1203]
2010/07: Edinburgh, Scotland, UK, see [858]
2009/08: York, UK, see [2363]
2008/08: Phuket, Thailand, see [274]
10.3. CONFERENCES AND WORKSHOPS 131
2007/08: Santos, Brazil, see [707]
2006/09: Oeiras, Portugal, see [282]
2005/08: Banff, AB, Canada, see [1424]
2004/09: Catania, Sicily, Italy, see [2035]
2003/09: Edinburgh, Scotland, UK, see [2706]
: Canterbury, Kent, UK, see [2705]
MICAI: Mexican International Conference on Artificial Intelligence
History: 2010/11: Pachuca, Mexico, see [2583]
2009/11: Guanajuato, Mexico, see [38]
2007/11: Aguascalientes, Mexico, see [1037]
2005/11: Monterrey, Mexico, see [1039]
2004/04: Mexico City, Mexico, see [1929]
2002/04: Mérida, Yucatán, Mexico, see [598]
2000/04: Acapulco, Mexico, see [470]
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
History: 2011/10: Cluj-Napoca, Romania, see [2576]
2010/05: Granada, Spain, see [1102]
2008/11: Puerto de la Cruz, Tenerife, Spain, see [1620]
2007/11: Catania, Sicily, Italy, see [1619]
2006/06: Granada, Spain, see [2159]
SMC: International Conference on Systems, Man, and Cybernetics
History: 2014/10: San Diego, CA, USA, see [2386]
2013/10: Edinburgh, Scotland, UK, see [859]
2012/10: Seoul, South Korea, see [2454]
2011/10: Anchorage, AK, USA, see [91]
2009/10: San Antonio, TX, USA, see [1352]
2001/10: Tucson, AZ, USA, see [1366]
1999/10: Tokyo, Japan, see [1331]
1998/10: La Jolla, CA, USA, see [1364]
WSC: Online Workshop/World Conference on Soft Computing (in Industrial Applications)
History: 2010/11: see [1029]
2009/11: see [1015]
2008/11: see [1857]
2007/10: see [151]
2006/09: see [2980]
2005: see [2979]
2004: see [2978]
2003: see [2977]
2002: see [2976]
2001: see [2975]
2000: see [2974]
1999/09: Nagoya, Japan, see [1993]
1998/08: Nagoya, Japan, see [1992]
1997/06: see [535]
1996/08: Nagoya, Japan, see [1001]
AIMS: Artificial Intelligence in Mobile Systems
History: 2003/10: Seattle, WA, USA, see [1624]
CSTST: International Conference on Soft Computing as Transdisciplinary Science and Technology
History: 2008/10: Cergy-Pontoise, France, see [3049]
2005/05: Muroran, Japan, see [11]
ECML: European Conference on Machine Learning
History: 2007/09: Warsaw, Poland, see [2866]
2006/09: Berlin, Germany, see [278]
2005/10: Porto, Portugal, see [2209]
2004/09: Pisa, Italy, see [2182]
2003/09: Cavtat, Dubrovnik, Croatia, see [512]
2002/08: Helsinki, Finland, see [1221]
2001/09: Freiburg, Germany, see [992]
2000/05: Barcelona, Catalonia, Spain, see [215]
1998/04: Chemnitz, Sachsen, Germany, see [537]
1997/04: Prague, Czech Republic, see [2781]
1995/04: Heraclion, Crete, Greece, see [1223]
1994/04: Catania, Sicily, Italy, see [507]
1993/04: Vienna, Austria, see [2809]
EPIA: Portuguese Conference on Artificial Intelligence
History: 2011/10: Lisbon, Portugal, see [1744]
2009/10: Aveiro, Portugal, see [1774]
2007/12: Guimarães, Portugal, see [2026]
2005/12: Covilhã, Portugal, see [275]
EUFIT: European Congress on Intelligent Techniques and Soft Computing
History: 1999/09: Aachen, North Rhine-Westphalia, Germany, see [2807]
1998/09: Aachen, North Rhine-Westphalia, Germany, see [2806]
1997/09: Aachen, North Rhine-Westphalia, Germany, see [3091]
1996/09: Aachen, North Rhine-Westphalia, Germany, see [877]
1995/08: Aachen, North Rhine-Westphalia, Germany, see [2805]
1994/09: Aachen, North Rhine-Westphalia, Germany, see [2803]
1993/09: Aachen, North Rhine-Westphalia, Germany, see [2804]
EngOpt: International Conference on Engineering Optimization
History: 2010/09: Lisbon, Portugal, see [1405]
2008/06: Rio de Janeiro, RJ, Brazil, see [1227]
GEWS: Grammatical Evolution Workshop
See Table 31.2 on page 385.
HAIS: International Workshop on Hybrid Articial Intelligence Systems
History: 2011/05: Warsaw, Poland, see [2867]
2009/06: Salamanca, Spain, see [2749]
2008/09: Burgos, Spain, see [632]
2007/11: Salamanca, Spain, see [631]
2006/10: Ribeirao Preto, SP, Brazil, see [184]
ICAART: International Conference on Agents and Artificial Intelligence
History: 2011/01: Rome, Italy, see [1404]
2010/01: València, Spain, see [917]
2009/01: Porto, Portugal, see [916]
ICTAI: International Conference on Tools with Artificial Intelligence
History: 2003/11: Sacramento, CA, USA, see [1367]
IEA/AIE: International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems
History: 2004/05: Ottawa, ON, Canada, see [2088]
ISA: International Workshop on Intelligent Systems and Applications
History: 2011/05: Wuhan, Hubei, China, see [1380]
2010/05: Wuhan, Hubei, China, see [2846]
2009/05: Wuhan, Hubei, China, see [2847]
ISICA: International Symposium on Advances in Computation and Intelligence
History: 2007/09: Wuhan, Hubei, China, see [1490]
LION: Learning and Intelligent OptimizatioN
History: 2011/01: Rome, Italy, see [2318]
2010/01: Venice, Italy, see [2581]
2009/01: Trento, Italy, see [2625]
2007/02: Andalo, Trento, Italy, see [1273, 1826]
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
History: 2010/09: St Andrews, Scotland, UK, see [780]
2009/09: Lisbon, Portugal, see [779]
2008/09: Sydney, NSW, Australia, see [2011]
2007/09: Providence, RI, USA, see [2225]
2006/09: Nantes, France, see [583]
2005/10: Barcelona, Catalonia, Spain, see [1860]
2004/09: Toronto, ON, Canada, see [2715]
SoCPaR: International Conference on SOft Computing and PAttern Recognition
History: 2011/10: Dalian, Liaoning, China, see [1381]
2010/12: Cergy-Pontoise, France, see [515]
2009/12: Malacca, Malaysia, see [1224]
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
History: 1992/09: Tutzing, Germany, see [2742]
1989: Germany, see [245]
1986/07: Neubiberg, Bavaria, Germany, see [244]
1983/06: Neubiberg, Bavaria, Germany, see [243]
AINS: Annual Symposium on Autonomous Intelligent Networks and Systems
History: 2003/06: Menlo Park, CA, USA, see [513]
ALIO/EURO: Workshop on Applied Combinatorial Optimization
History: 2011/05: Porto, Portugal, see [138]
2008/12: Buenos Aires, Argentina, see [137]
2005/10: Paris, France, see [136]
2002/11: Pucon, Chile, see [135]
1999/11: Erice, Italy, see [134]
1996/11: Valparaíso, Chile, see [133]
1989/08: Rio de Janeiro, RJ, Brazil, see [132]
ANZIIS: Australia and New Zealand Conference on Intelligent Information Systems
History: 1994/11: Brisbane, QLD, Australia, see [1359]
ASC: IASTED International Conference on Artificial Intelligence and Soft Computing
History: 2002/07: Banff, AB, Canada, see [1714]
EDM: International Conference on Educational Data Mining
History: 2009/07: Córdoba, Spain, see [217]
EWSL: European Working Session on Learning
Continued as ECML in 1993.
History: 1991/03: Porto, Portugal, see [1560]
1988/10: Glasgow, Scotland, UK, see [1061]
1987/05: Bled, Yugoslavia (now Slovenia), see [322]
1986/02: Orsay, France, see [2097]
FLAIRS: Florida Artificial Intelligence Research Symposium
History: 1994/05: Pensacola Beach, FL, USA, see [1360]
FWGA: Finnish Workshop on Genetic Algorithms and Their Applications
See Table 29.2 on page 355.
GICOLAG: Global Optimization Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
History: 2006/12: Vienna, Austria, see [2024]
ICMLC: International Conference on Machine Learning and Cybernetics
History: 2005/08: Guangzhou, Guangdong, China, see [2263]
IICAI: Indian International Conference on Artificial Intelligence
History: 2011/12: Tumkur, India, see [2736]
2009/12: Tumkur, India, see [2217]
2007/12: Pune, India, see [2216]
2005/12: Pune, India, see [2215]
2003/12: Hyderabad, India, see [2214]
ISIS: International Symposium on Advanced Intelligent Systems
History: 2007/09: Sokcho, Korea, see [1579]
MCDA: International Summer School on Multicriteria Decision Aid
History: 1998/07: Monte Estoril, Lisbon, Portugal, see [198]
PAKDD: Pacific-Asia Conference on Knowledge Discovery and Data Mining
History: 2011/05: Shenzhen, Guangdong, China, see [1294]
STeP: Finnish Artificial Intelligence Conference
History: 2008/08: Espoo, Finland, see [2255]
TAAI: Conference on Technologies and Applications of Artificial Intelligence
History: 2010/11: Hsinchu, Taiwan, see [1283]
2009/10: Wufeng Township, Taichung County, Taiwan, see [529]
2008/11: Chiao-hsi Shiang, I-lan County, Taiwan, see [2659]
2007/11: Douliou, Yunlin, Taiwan, see [2006]
2006/12: Kaohsiung, Taiwan, see [1493]
2005/12: Kaohsiung, Taiwan, see [2004]
2004/11: Wenshan District, Taipei City, Taiwan, see [2003]
TAI: IEEE Conference on Tools for Artificial Intelligence
History: 1990/11: Herndon, VA, USA, see [1358]
Polish National Conference on Evolutionary Algorithms and Global Optimization (Krajowa Konferencja Algorytmy Ewolucyjne i Optymalizacja Globalna)
History: 1997/09: Rytro, Poland, see [2362]
Machine Intelligence Workshop
History: 1977: Santa Cruz, CA, USA, see [874]
TAINN: Turkish Symposium on Artificial Intelligence and Neural Networks (Türk Yapay Zeka ve Yapay Sinir Ağları Sempozyumu)
History: 1993/06: Istanbul, Turkey, see [1643]
AGI: Conference on Artificial General Intelligence
History: 2011/08: Mountain View, CA, USA, see [2587]
2010/03: Lugano, Switzerland, see [1786]
2009/03: Arlington, VA, USA, see [661]
2008/03: Memphis, TN, USA, see [1414]
AICS: Irish Conference on Artificial Intelligence and Cognitive Science
History: 2007/08: Dublin, Ireland, see [764]
AIIA: Congress of the Italian Association for Artificial Intelligence (AI*IA) on Trends in Artificial Intelligence
History: 1991/10: Palermo, Italy, see [124]
ECML PKDD: European Conference on Machine Learning and European Conference on Principles
and Practice of Knowledge Discovery in Databases
History: 2009/09: Bled, Slovenia, see [321]
2008/09: Antwerp, Belgium, see [114]
SEA: International Symposium on Experimental Algorithms
History: 2011/05: Chania, Crete, Greece, see [2098]
2010/05: Ischia Island, Naples, Italy, see [2584]
10.4 Journals
1. IEEE Transactions on Systems, Man, and Cybernetics Part B: Cybernetics published by
IEEE Systems, Man, and Cybernetics Society
2. Information Sciences – Informatics and Computer Science, Intelligent Systems Applications: An International Journal published by Elsevier Science Publishers B.V.
3. European Journal of Operational Research (EJOR) published by Elsevier Science Publishers
B.V. and North-Holland Scientic Publishers Ltd.
4. Machine Learning published by Kluwer Academic Publishers and Springer Netherlands
5. Applied Soft Computing published by Elsevier Science Publishers B.V.
6. Soft Computing – A Fusion of Foundations, Methodologies and Applications published by
Springer-Verlag
7. Computers & Operations Research published by Elsevier Science Publishers B.V. and Perg-
amon Press
8. Artificial Intelligence published by Elsevier Science Publishers B.V.
9. Operations Research published by HighWire Press (Stanford University) and Institute for
Operations Research and the Management Sciences (INFORMS)
10. Annals of Operations Research published by J. C. Baltzer AG, Science Publishers and Springer
Netherlands
11. International Journal of Computational Intelligence and Applications (IJCIA) published by
Imperial College Press Co. and World Scientic Publishing Co.
12. ORSA Journal on Computing published by Operations Research Society of America (ORSA)
13. Management Science published by HighWire Press (Stanford University) and Institute for
Operations Research and the Management Sciences (INFORMS)
14. Journal of Artificial Intelligence Research (JAIR) published by AAAI Press and AI Access
Foundation, Inc.
15. Computational Optimization and Applications published by Kluwer Academic Publishers and
Springer Netherlands
16. The Journal of the Operational Research Society (JORS) published by Operations Research
Society and Palgrave Macmillan Ltd.
17. Data Mining and Knowledge Discovery published by Springer Netherlands
18. SIAM Journal on Optimization (SIOPT) published by Society for Industrial and Applied
Mathematics (SIAM)
19. Mathematics of Operations Research (MOR) published by Institute for Operations Research
and the Management Sciences (INFORMS)
20. Journal of Global Optimization published by Springer Netherlands
21. AIAA Journal published by American Institute of Aeronautics and Astronautics
22. Journal of Articial Evolution and Applications published by Hindawi Publishing Corporation
23. Applied Intelligence – The International Journal of Artificial Intelligence, Neural Networks,
and Complex Problem-Solving Technologies published by Springer Netherlands
24. Artificial Intelligence Review – An International Science and Engineering Journal published
by Kluwer Academic Publishers and Springer Netherlands
25. Annals of Mathematics and Artificial Intelligence published by Springer Netherlands
26. IEEE Transactions on Knowledge and Data Engineering published by IEEE Computer Soci-
ety Press
27. IEEE Intelligent Systems Magazine published by IEEE Computer Society Press
28. ACM SIGART Bulletin published by ACM Press
29. Journal of Machine Learning Research (JMLR) published by MIT Press
30. Discrete Optimization published by Elsevier Science Publishers B.V.
31. Journal of Optimization Theory and Applications published by Plenum Press and Springer
Netherlands
32. Artificial Intelligence in Engineering published by Elsevier Science Publishers B.V.
33. Engineering Optimization published by Informa plc and Taylor and Francis LLC
34. Optimization and Engineering published by Kluwer Academic Publishers and Springer
Netherlands
35. International Journal of Organizational and Collective Intelligence (IJOCI) published by Idea
Group Publishing (Idea Group Inc., IGI Global)
36. Journal of Intelligent Information Systems published by Springer-Verlag GmbH
37. International Journal of Knowledge-Based and Intelligent Engineering Systems published by
IOS Press
38. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) published by
IEEE Computer Society
39. Optimization Methods and Software published by Taylor and Francis LLC
40. Journal of Mathematical Modelling and Algorithms published by Springer Netherlands
41. Structural and Multidisciplinary Optimization published by Springer-Verlag GmbH
42. OR/MS Today published by Institute for Operations Research and the Management Sciences
(INFORMS) and Lionheart Publishing, Inc.
43. AI Magazine published by AAAI Press
44. Computers in Industry – An International, Application Oriented Research Journal published
by Elsevier Science Publishers B.V.
45. International Journal of Approximate Reasoning published by Elsevier Science Publishers B.V.
46. International Journal of Intelligent Systems published by Wiley Interscience
47. International Journal of Production Research published by Taylor and Francis LLC
48. International Transactions in Operational Research published by Blackwell Publishing Ltd
49. INFORMS Journal on Computing (JOC) published by HighWire Press (Stanford University)
and Institute for Operations Research and the Management Sciences (INFORMS)
50. OR Spectrum – Quantitative Approaches in Management published by Springer-Verlag
51. SIAG/OPT Views and News: A Forum for the SIAM Activity Group on Optimization published by the SIAM Activity Group on Optimization (SIAG/OPT)
52. Künstliche Intelligenz (KI) published by Böttcher IT Verlag
Part II
Difficulties in Optimization
Chapter 11
Introduction
The classification of optimization algorithms in Section 1.3 and the table of contents of this book enumerate a wide variety of optimization algorithms. Yet, the approaches introduced here represent only a fraction of the actual number of available methods. It is a justified question to ask why there are so many different approaches; why is this variety needed? One possible answer is simply that there are so many different kinds of optimization tasks. Each of them puts different obstacles in the way of the optimizers and comes with its own characteristic difficulties.
In this part we want to discuss the most important of these complications, the major problems that may be encountered during optimization [2915]. You may wish to read this part after reading through some of the later parts on specific optimization algorithms or after solving your first optimization task. You will then realize that many of the problems you encountered are discussed here in depth.
Some of the subjects in the following text concern Global Optimization in general (multi-modality and overfitting, for instance); others apply especially to nature-inspired approaches like Genetic Algorithms (epistasis and neutrality, for example). Sometimes, neglecting a single one of these aspects during the development of the optimization process can render the whole invested effort useless, even if highly efficient optimization techniques are applied. By giving clear definitions and comprehensive introductions to these topics, we want to raise the awareness of scientists and practitioners in industry and hope to help them use optimization algorithms more efficiently.
In Figure 11.1, we have sketched a set of different types of fitness landscapes (see Section 5.1) which we are going to discuss. The objective values in the figure are subject to minimization and the small bubbles represent candidate solutions under investigation. An arrow from one bubble to another means that the second individual is found by applying one search operation to the first one.
Before we go into more detail about what makes these landscapes difficult, we should establish the term in the context of optimization. The degree of difficulty of solving a certain problem with a dedicated algorithm is closely related to its computational complexity. We will outline this in the next section and show that for many problems, there is no algorithm which can exactly solve them in reasonable time.
As already stated, optimization algorithms are guided by objective functions. In this context, a function is difficult from a mathematical perspective if it is not continuous, not differentiable, or if it has multiple maxima and minima. This understanding of difficulty comes very close to the intuitive sketches in Figure 11.1.
In many real-world applications of metaheuristic optimization, the characteristics of the objective functions are not known in advance. The problems are usually very complex and it is only rarely possible to derive boundaries for the performance or the runtime of optimizers in advance, let alone exact estimates with mathematical precision.
In such cases, experience and general rules of thumb are often the best guides available
[Figure 11.1 sketches eight fitness landscapes, each plotting objective values f(x) over x:
Fig. 11.1.a: Best Case
Fig. 11.1.b: Low Variation
Fig. 11.1.c: Multi-Modal (multiple (local) optima)
Fig. 11.1.d: Rugged (no useful gradient information)
Fig. 11.1.e: Deceptive (region with misleading gradient information)
Fig. 11.1.f: Neutral (neutral area)
Fig. 11.1.g: Needle-In-A-Haystack (a needle, i.e., an isolated optimum, in a neutral area or an area without much information)
Fig. 11.1.h: Nightmare]
Figure 11.1: Different possible properties of fitness landscapes (minimization).
(see also Chapter 24). Empirical results based on models obtained from related research areas such as biology are sometimes helpful as well. In this part, we discuss many such models and rules, providing a better understanding of when the application of a metaheuristic is feasible and when not, as well as of how to avoid defining problems in a way that makes them difficult.
Chapter 12
Problem Hardness
Most of the algorithms we will discuss in this book, in particular the metaheuristics, usually give approximations as results but cannot guarantee to always find the globally optimal solution. However, we are actually able to find the global optima for all given problems (of course, limited to the precision of the available computers). Indeed, the easiest way to do so would be exhaustive enumeration (see Section 8.3): simply test all possible values from the problem space and return the best one.
This method, obviously, takes extremely long, so our goal is to find an algorithm which is better. More precisely, we want to find the best algorithm for solving a given class of problems. This means that we need some sort of metric with which we can compare the efficiency of algorithms [2877, 2940].
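The exhaustive enumeration mentioned above can be sketched in a few lines. The objective function and the finite problem space below are illustrative stand-ins, not taken from the book:

```python
# Exhaustive enumeration: test every candidate in a finite problem
# space and keep the best one (here: minimization).
def exhaustive_minimize(f, problem_space):
    best_x, best_y = None, float("inf")
    for x in problem_space:
        y = f(x)
        if y < best_y:          # strictly better candidate found
            best_x, best_y = x, y
    return best_x, best_y

# Illustrative objective with two global optima, at x = -2 and x = 3.
f = lambda x: (x - 3) ** 2 * (x + 2) ** 2
print(exhaustive_minimize(f, range(-10, 11)))  # → (-2, 0)
```

The result is guaranteed to be globally optimal, but only because the whole space is scanned; for realistically sized spaces this loop is exactly what becomes infeasible.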
12.1 Algorithmic Complexity
The most important measures obtained by analyzing an algorithm¹ are the time that it takes to produce the wanted outcome and the storage space needed for internal data [635]. We call these the time complexity and the space complexity dimensions. The time complexity denotes how many steps an algorithm needs until it returns its result. The space complexity determines how much memory an algorithm consumes at most in one run. Of course, these measures depend on the input values passed to the algorithm.
If we have an algorithm that should decide whether a given number is prime or not, the number of steps needed to find that out will differ if the sizes of the inputs are different. For example, it is easy to see that telling whether 7 is prime or not can be done in much shorter time than verifying the same property for $2^{32\,582\,657} - 1$. The nature of the input may influence the runtime and space requirements of an algorithm as well. We immediately know that 1 000 000 000 000 000 is not prime, though it is a big number, whereas the answer in case of the smaller number 100 000 007 is less clear.² Therefore, often best, average, and worst-case complexities exist for given input sizes.
In order to compare the efficiency of algorithms, some approximate notations have been introduced [1555, 1556, 2510]. As we have just seen, the basic criterion influencing the time and space requirements of an algorithm applied to a problem is the size of the inputs, i. e., the number of bits needed to represent the number from which we want to find out whether it is prime or not, the number of elements in a list to be sorted, the number of nodes or edges in a graph to be 3-colored, and so on. We can describe this dependency as a function of this size.
In real systems, however, knowledge of the exact dependency is not needed. If we, for example, know that sorting n data elements with the Quicksort algorithm³ [1240, 1557] takes on average about $n \log n$ steps, this is sufficient, even if the correct number is $2n \ln n \approx 1.39\, n \log n$.

¹ http://en.wikipedia.org/wiki/Analysis_of_algorithms [accessed 2007-07-03]
² 100 000 007 is indeed a prime, in case you're wondering; $2^{32\,582\,657} - 1$ is, too.
³ http://en.wikipedia.org/wiki/Quicksort [accessed 2007-07-03]
The Big-O-family notations introduced by Bachmann [163] and made popular by Landau [1664] allow us to group functions together that rise at approximately the same speed.

Definition D12.1 (Big-O notation). The big-O notation⁴ is a mathematical notation used to describe the asymptotic upper bound of functions.

$$f(x) \in O(g(x)) \Leftrightarrow \exists\, x_0, m \in \mathbb{R} : m > 0 \wedge |f(x)| \le m\,|g(x)| \;\; \forall x > x_0 \quad (12.1)$$

In other words, a function f(x) is in O of another function g(x) if and only if there exist a real number $x_0$ and a constant, positive factor m so that the absolute value of f(x) is smaller than (or equal to) m times the absolute value of g(x) for all x greater than $x_0$. f(x) thus cannot grow (considerably) faster than g(x). Therefore, $x^3 + x^2 + x + 1 = f(x) \in O\!\left(x^3\right)$, since for $m = 5$ and $x_0 = 2$ it holds that $5x^3 > x^3 + x^2 + x + 1 \;\; \forall x \ge 2$.
In terms of algorithmic complexity, we specify the number of steps or memory units an algorithm needs, depending on the size of its inputs, in the big-O notation. More generally, this specification is based on key features of the input data. For a graph-processing algorithm, these may be the number of edges $|E|$ and the number of vertexes $|V|$ of the graph to be processed, and the algorithmic complexity may be $O\!\left(|V|^2 + |E|\,|V|\right)$. Some examples for single-parametric functions can be found in Example E12.1.
Example E12.1 (Some examples of the big-O notation).

O(1). Examples: $f_1(x) = 2^{222}$, $f_2(x) = \sin x$. Algorithms that have constant runtime for all inputs are in O(1).

O(log n). Examples: $f_3(x) = \log x$; $f_4(x) = f_4(x/2) + 1$ with $f_4(x < 1) = 0$. Logarithmic complexity is often a feature of algorithms that run on binary trees or of search algorithms in ordered sets. Notice that O(log n) implies that only part of the input of the algorithm is read/regarded, since the input has length n and we only perform $m \log n$ steps.

O(n). Examples: $f_5(x) = 23n + 4$, $f_6(x) = n/2$. Algorithms in O(n) access and process their input a constant number of times. This is, for example, the case when searching in a linked list.

O(n log n). Example: $f_7(x) = 23x + x \log 7x$. Many sorting algorithms like Quicksort and Mergesort are in O(n log n).

O(n²). Examples: $f_8(x) = 34x^2$, $f_9(x) = \sum_{i=0}^{x+3} (x - 2)$. Some sorting algorithms like selection sort have this complexity. For many problems, O(n²) solutions are acceptably good.

O(nⁱ), i > 1, i ∈ ℝ. Example: $f_{10}(x) = x^5 - x^2$. The general polynomial complexity. In this group we find many algorithms that work on graphs.

O(2ⁿ). Example: $f_{11}(x) = 23 \cdot 2^x$. Algorithms with exponential complexity perform slowly and become infeasible with increasing input size. For many hard problems, only algorithms of this class exist. Their solutions can otherwise only be approximated by means of randomized global optimization techniques.
12.2. COMPLEXITY CLASSES 145
[Figure 12.1 plots the growth of $f(x) = x$, $x^2$, $x^4$, $x^8$, $x^{10}$, $1.1^x$, $2^x$, $e^x$, $x!$, and $x^x$ on a logarithmic axis, with reference lines marking the number of milliseconds per day and the number of picoseconds since the big bang.]
Figure 12.1: Illustration of the rising speed of some functions, gleaned from [2365].
In Example E12.1, we give some examples for the big-O notation and name kinds of algorithms which have such complexities. The rising behavior of some functions is illustrated in Figure 12.1. From this figure it becomes clear that problems for which only algorithms with high complexities such as $O(2^n)$ exist quickly become intractable even for small input sizes n.
12.2 Complexity Classes
While the big-O notation is used to state how long a specific algorithm needs (at least / on average / at most) for input with certain features, the computational complexity⁵ of a problem is bounded by the best algorithm known for it. In other words, it makes a statement about how many resources are necessary for solving a given problem, or, from the opposite point of view, tells us whether we can solve a problem with the given available resources.
12.2.1 Turing Machines
The amount of resources required at least to solve a problem depends on the problem itself and on the machine used for solving it. One of the most basic computer models are deterministic Turing machines⁶ (DTMs) [2737], as sketched in Fig. 12.2.a. They consist of a tape of infinite length which is divided into distinct cells, each capable of holding exactly one symbol from a finite alphabet. A Turing machine has a head which is always located at one of these cells and can read and write the symbols. The machine has a finite set of possible states and is driven by rules. These rules decide which symbol should be written and/or move the head one unit to the left or right, based on the machine's current state and the symbol on the tape under its head. With this simple machine, any of our computers and, hence, also the programs executed by them, can be simulated.⁷
⁴ http://en.wikipedia.org/wiki/Big_O_notation [accessed 2007-07-03]
⁵ http://en.wikipedia.org/wiki/Computational_complexity_theory [accessed 2010-06-24]
⁶ http://en.wikipedia.org/wiki/Turing_machine [accessed 2010-06-24]
⁷ Another computational model, the RAM (Random Access Machine), is quite close to today's computer architectures and can be simulated with Turing machines efficiently [1804], and vice versa.
[Fig. 12.2.a sketches a deterministic Turing machine: a binary tape together with a rule table whose columns are Tape Symbol, State, Print, Motion, and Next State, with exactly one rule per state/symbol pair.
Fig. 12.2.b sketches a non-deterministic Turing machine: the same setup, but with several applicable rules for some state/symbol pairs, so the computation forks into parallel branches.]
Figure 12.2: Turing machines.
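The rule-driven stepping of a DTM is easy to simulate. The machine below, a unary incrementer, is a made-up example for illustration, not the machine from Figure 12.2:

```python
# Minimal deterministic Turing machine simulator. The transition table
# maps (state, symbol) -> (symbol to print, head motion, next state).
def run_dtm(rules, tape, state, halt_states, max_steps=10_000):
    tape = dict(enumerate(tape))      # sparse tape; blank symbol is "0"
    head = 0
    for _ in range(max_steps):
        if state in halt_states:
            break
        symbol = tape.get(head, "0")
        print_sym, motion, state = rules[(state, symbol)]
        tape[head] = print_sym
        head += 1 if motion == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example machine: move right over a block of 1s and append one more 1.
rules = {
    ("A", "1"): ("1", "R", "A"),   # skip existing 1s
    ("A", "0"): ("1", "R", "H"),   # write a new 1, then halt
}
print(run_dtm(rules, "111", "A", halt_states={"H"}))  # → 1111
```

A non-deterministic machine would need only one change: the table would map some (state, symbol) pairs to a *set* of actions, and the simulator would have to explore all branches, e.g., by breadth-first search.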
Like DTMs, non-deterministic Turing machines⁸ (NTMs, see Fig. 12.2.b) are thought experiments rather than real computers. They differ from DTMs in that they may have more than one rule for a state/input pair. Whenever there is an ambiguous situation where n > 1 rules could be applied, the NTM can be thought of as forking into n independent NTMs running in parallel. The computation of the overall system stops when any of these reaches a final, accepting state. NTMs can be simulated with DTMs by performing a breadth-first search on the possible computation steps of the NTM until an accepting state is found. The number of steps required for this simulation, however, grows exponentially with the length of the shortest accepting path of the NTM [1483].
12.2.2 P, NP, Hardness, and Completeness
The set of all problems which can be solved by a deterministic Turing machine within polynomial time is called P [1025]. This means that for the problems within this class, there exists at least one algorithm (for a DTM) with an algorithmic complexity in $O(n^p)$ (where n is the input size and $p \in \mathbb{N}_1$ is some natural number).
Algorithms designed for Turing machines can be translated to algorithms for normal PCs. Since it is quite easy to simulate a DTM (with finite tape) with an off-the-shelf computer and vice versa, these are also the problems that can be solved within polynomial time by existing, real computers. These are the problems which are said to be exactly solvable in a feasible way [359, 1092].
NP, on the other hand, includes all problems which can be solved by a non-deterministic Turing machine in a runtime rising polynomially with the input size. It is the set of all problems for which solutions can be checked in polynomial time [1549].
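The "solutions can be checked in polynomial time" property is easy to illustrate with Subset Sum, a classic NP-complete problem: verifying a proposed certificate is linear in the input, while the naive search below inspects all 2ⁿ subsets. The concrete numbers are arbitrary illustration data:

```python
# Subset Sum: given numbers and a target, does some subset sum to the
# target? Verifying a certificate (a concrete subset) is fast, while
# the naive search enumerates exponentially many subsets.
from itertools import combinations

def check_certificate(numbers, target, subset):
    # polynomial-time verification of a proposed solution
    return all(x in numbers for x in subset) and sum(subset) == target

def brute_force(numbers, target):
    # exponential-time search over all 2^n subsets
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = brute_force(nums, 9)
print(cert, check_certificate(nums, 9, cert))  # → [4, 5] True
```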
Clearly, these include all the problems from P, since DTMs can be considered a subset of NTMs, and if a deterministic Turing machine can solve something in polynomial time, a non-deterministic one can as well. The other way around, i. e., the question whether NP ⊆ P and, hence, P = NP, is much more complicated and so far, there exists neither proof against nor for it [1549]. However, it is strongly assumed that problems in NP need super-polynomial, i. e., exponential, time to be solved by deterministic Turing machines and therefore also by computers as we know and use them.
⁸ http://en.wikipedia.org/wiki/Non-deterministic_Turing_machine [accessed 2010-06-24]
If problem A can be transformed in such a way that we can use an algorithm for problem
B to solve A, we say A reduces to B. This means that A is no more difficult than B. A
problem is hard for a complexity class if every other problem in this class can be reduced
to it. For classes larger than P, reductions which take no more than polynomial time in the
input size are usually used. The problems which are hard for NP are called NP-hard. A
problem which is in a complexity class C and hard for C is called complete for C, hence the
name NP-complete.
12.3 The Problem: Many Real-World Tasks are NP-hard
From a practical perspective, problems in P are usually friendly. Everything with an algorithmic
complexity in O(n⁵) and less can usually be solved more or less efficiently if n does
not get too large. Searching and sorting algorithms, with complexities between O(log n) and
O(n log n), are good examples of such problems which, in most cases, are not bothersome.
However, many real-world problems are NP-hard or, without formal proof, can be assumed
to be at least as bad as that. So far, no polynomial-time algorithm for DTMs has
been discovered for this class of problems, and the runtime of the algorithms we have for
them increases exponentially with the input size.
In 1962, Bremermann [407] pointed out that no data processing system, whether artificial
or living, can process more than 2·10⁴⁷ bits per second per gram of its mass. This is an
absolute limit to computation power which we are very unlikely to ever be able to unleash.
Yet, even with this power, if the number of possible states of a system grows exponentially
with its size, any solution attempt becomes infeasible quickly. Minsky [1907], for instance,
estimates the number of all possible move sequences in chess to be around 10¹²⁰.
The Traveling Salesman Problem (TSP), which we initially introduced in Example E2.2 on
page 45, is a classical example of NP-complete combinatorial optimization problems.
Since many Vehicle Routing Problems (VRPs) [2912, 2914] are basically iterated variants
of the TSP, they fall at least into the same complexity class. This means that until now,
there exists no algorithm with a less-than-exponential complexity which can be used to find
optimal transportation plans for a logistics company. In other words, as soon as a logistics
company has to satisfy transportation orders of more than a handful of customers, it will
likely not be able to serve them in an optimal way.
The Knapsack problem [463], i. e., answering the question of how to fit as many items
(of different weight and value) as possible into a weight-limited container so that the total
value of the items in the container is maximized, is NP-complete as well. Like the TSP, it
occurs in a variety of practical applications.
The first known NP-complete problem is the satisfiability problem (SAT) [626]. Here,
the problem is to decide whether or not there is an assignment of values to a set of Boolean
variables x₁, x₂, . . . , xₙ which makes a given Boolean expression based on these variables,
parentheses, ∧, ∨, and ¬ evaluate to true.
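A brute-force decision procedure for SAT illustrates the exponential cost directly: the loop below enumerates all 2ⁿ assignments, while checking any single assignment is cheap. This is our own sketch, and the formula used is an arbitrary example.

```python
from itertools import product

def sat_brute_force(formula, n):
    """Exhaustive search for a satisfying assignment of a Boolean
    formula over n variables. Enumerating all 2**n assignments is
    the exponential blow-up that makes SAT hard; *checking* one
    assignment, in contrast, takes only polynomial time."""
    for bits in product([False, True], repeat=n):
        if formula(*bits):
            return bits  # satisfying assignment found
    return None          # formula is unsatisfiable

# Example formula: (x1 or not x2) and (x2 or x3)
f = lambda x1, x2, x3: (x1 or not x2) and (x2 or x3)
```

Doubling n doubles nothing about the verification cost, but squares the number of assignments the loop may have to visit.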
These few examples and the wide area of practical application problems which can be
reduced to them clearly show one thing: Unless a proof of P = NP can be found at some
point in time, there will be no feasible way to solve a variety of problems exactly.
12.4 Countermeasures
Solving a problem exactly to optimality is not always possible, as we have learned in the
previous section. If we have to deal with NP-complete problems, we will therefore have
to give up on some of the features which we expect an algorithm or a solution to have. A
randomized algorithm includes at least one instruction that acts on the basis of a random
decision, as stated in Definition D1.13 on page 33. In other words, as opposed to deterministic
algorithms⁹, which always produce the same results when given the same inputs, a randomized
algorithm violates the constraint of determinism. Randomized algorithms are also often
called probabilistic algorithms [1281, 1282, 1919, 1966, 1967]. There are two general classes
of randomized algorithms: Las Vegas and Monte Carlo algorithms.
Definition D12.2 (Las Vegas Algorithm). A Las Vegas algorithm¹⁰ is a randomized
algorithm that never returns a wrong result [141, 1281, 1282, 1966]. It either returns the
correct result, reports a failure, or does not return at all.
If a Las Vegas algorithm returns, its outcome is deterministic (but not the algorithm itself).
The termination, however, cannot be guaranteed. There usually exists an expected runtime
limit for such algorithms; their actual execution, however, may take arbitrarily long. In
summary, a Las Vegas algorithm terminates with a positive probability and is (partially)
correct.
Definition D12.3 (Monte Carlo Algorithm). A Monte Carlo algorithm¹¹ always
terminates. Its result, however, can be either correct or incorrect [1281, 1282, 1966].
In contrast to Las Vegas algorithms, Monte Carlo algorithms always terminate but are
(partially) correct only with a positive probability. As pointed out in Section 1.3, they are
usually the way to go when tackling complex problems.
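The difference between the two classes can be sketched with a toy search task of our own (finding the position of a target value by random probing); it is an illustrative example, not from the book.

```python
import random

def las_vegas_find(items, target):
    """Las Vegas: keep probing random positions until the target is
    hit. The answer, once returned, is always correct; only the
    runtime is random (and unbounded in the worst case)."""
    while True:
        i = random.randrange(len(items))
        if items[i] == target:
            return i

def monte_carlo_find(items, target, trials=20):
    """Monte Carlo: same probing, but stop after a fixed number of
    trials. Runtime is bounded, yet the 'not found' answer may be
    wrong with some positive probability."""
    for _ in range(trials):
        i = random.randrange(len(items))
        if items[i] == target:
            return i
    return None

data = [3, 1, 4, 1, 5]
```

The Las Vegas variant trades a runtime guarantee for correctness; the Monte Carlo variant trades correctness for a runtime guarantee, with the error probability shrinking as `trials` grows.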
⁹ http://en.wikipedia.org/wiki/Deterministic_computation [accessed 2007-07-03]
¹⁰ http://en.wikipedia.org/wiki/Las_Vegas_algorithm [accessed 2007-07-03]
¹¹ http://en.wikipedia.org/wiki/Monte_carlo_algorithm [accessed 2007-07-03]
Tasks T12
44. Find all big-O relationships between the following functions fᵢ : N₁ → R:
    f₁(x) = 9√x        f₂(x) = log x          f₃(x) = 7
    f₄(x) = (1.5)ˣ     f₅(x) = 99/x           f₆(x) = eˣ
    f₇(x) = 10x²       f₈(x) = (2 + (−1)ˣ)x   f₉(x) = 2 ln x
    f₁₀(x) = (1 + (−1)ˣ)x² + x
[10 points]
45. Show why the statement "The runtime of the algorithm is at least in O(n²)" makes
no sense.
[5 points]
46. What is the difference between a deterministic Turing machine (DTM) and a non-deterministic
one (NTM)?
[10 points]
47. Write down the states and rules of a deterministic Turing machine which inverts a
sequence of zeros and ones on the tape while moving its head to the right and that
stops when encountering a blank symbol.
[5 points]
48. Use literature or the internet to find at least three more NP-complete problems. Define
them as optimization problems by specifying proper problem spaces X and objective
functions f which are subject to minimization.
[15 points]
Chapter 13
Unsatisfying Convergence
13.1 Introduction
Definition D13.1 (Convergence). An optimization algorithm has converged (a) if it cannot
reach new candidate solutions anymore or (b) if it keeps on producing candidate solutions
from a small¹ subset of the problem space.
Optimization processes will usually converge at some point in time. In nature, a similar
phenomenon can be observed according to Koza [1602]: the niche preemption principle states
that a niche in a natural environment tends to become dominated by a single species [1812].
Linear convergence to the global optimum x⋆ with respect to an objective function f is
defined in Equation 13.1, where t is the iteration index of the optimization algorithm and
x(t) is the best candidate solution discovered until iteration t [1976]. This is basically the
best behavior of an optimization process we could wish for.

|f(x(t + 1)) − f(x⋆)| ≤ c · |f(x(t)) − f(x⋆)| ,  c ∈ [0, 1)    (13.1)
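Equation 13.1 can be checked empirically on a run's history of best objective values. The sketch below is our own and assumes the optimal value f(x⋆) is known (as it is for benchmark problems): for a linearly converging run, all successive error ratios stay below some c < 1.

```python
def convergence_ratios(best_f, f_star):
    """Empirical check of Equation 13.1: return the ratios
    |f(x(t+1)) - f(x*)| / |f(x(t)) - f(x*)| along the run.
    Linear convergence means these stay below a constant c < 1."""
    errs = [abs(f - f_star) for f in best_f]
    return [e2 / e1 for e1, e2 in zip(errs, errs[1:]) if e1 > 0]

# A run whose error halves in every iteration (c = 0.5), optimum value 1.0:
history = [1.0 + 0.5 ** t for t in range(10)]
```

For this geometric error decay every ratio equals 0.5, the constant c of Equation 13.1.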
One of the problems in Global Optimization (and basically, also in nature) is that it is often
not possible to determine whether the best solution currently known is situated on a local or
a global optimum and thus, whether convergence is acceptable. In other words, it is usually not
clear whether the optimization process can be stopped, whether it should concentrate on
refining the current optimum, or whether it should examine other parts of the search space
instead. This can, of course, only become cumbersome if there are multiple (local) optima,
i. e., the problem is multi-modal [1266, 2267] as depicted in Fig. 11.1.c on page 140.
Definition D13.2 (Multi-Modality). A mathematical function is multi-modal if it has
multiple (local or global) maxima or minima [711, 2474, 3090]. A set of objective functions
(or a vector function) f is multi-modal if it has multiple (local or global) optima, depending
on the definition of optimum in the context of the corresponding optimization problem.
The existence of multiple global optima itself is not problematic and the discovery of only
a subset of them can still be considered successful in many cases. The occurrence of
numerous local optima, however, is more complicated.
¹ according to a suitable metric like the number of modifications or mutations which need to be applied to
a given solution in order to leave this subset
13.2 The Problems
There are three basic problems which are related to the convergence of an optimization algo-
rithm: premature, non-uniform, and domino convergence. The rst one is most important
in optimization, but the latter ones may also cause large inconvenience.
13.2.1 Premature Convergence
Denition D13.3 (Premature Convergence). An optimization process has prema-
turely converged to a local optimum if it is no longer able to explore other parts of the
search space than the area currently being examined and there exists another region that
contains a superior solution [2417, 2760].
The goal of optimization is, of course, to find the global optima. Premature convergence,
basically, is the convergence of an optimization process to a local optimum. A prematurely
converged optimization process hence provides solutions which are below the possible or
anticipated quality.
Example E13.1 (Premature Convergence).
Figure 13.1 illustrates two examples of premature convergence. Although a higher peak
exists in the maximization problem pictured in Fig. 13.1.a, the optimization algorithm solely
focuses on the local optimum to the right. The same situation can be observed in the
minimization task shown in Fig. 13.1.b, where the optimization process gets trapped in the
local minimum on the left hand side.
Fig. 13.1.a: Example 1: Maximization (global optimum vs. local optimum). Fig. 13.1.b: Example 2: Minimization (objective values f(x) plotted over x).
Figure 13.1: Premature convergence in the objective space.
13.2.2 Non-Uniform Convergence
Definition D13.4 (Non-Uniform Convergence). In optimization problems with infinitely
many possible (globally) optimal solutions, an optimization process has non-uniformly
converged if the solutions that it provides only represent a subset of the optimal
characteristics.

This definition is rather fuzzy, but it becomes much clearer when taking a look at
Example E13.2. The goal of optimization in problems with infinitely many optima is often
not to discover only one or two optima. Instead, usually we want to find a set X of solutions
which has a good spread over all possible optimal features.
Example E13.2 (Convergence in a Bi-Objective Problem).
Assume that we wish to solve a bi-objective problem with f = (f₁, f₂). Then, the goal is to
find a set X of candidate solutions which approximates the set X⋆ of globally (Pareto) optimal
solutions as well as possible. Two issues arise: First, the optimization process should
converge to the true Pareto front and return solutions as close to it as possible. Second,
these solutions should be uniformly spread along this front.
Fig. 13.2.a: Bad Convergence and Good Spread. Fig. 13.2.b: Good Convergence and Bad Spread. Fig. 13.2.c: Good Convergence and Spread. (Each panel plots f₂ over f₁, showing the true Pareto front and the front returned by the optimizer.)
Figure 13.2: Pareto front approximation sets [2915].
Let us examine the three fronts included in Figure 13.2 [2915]. The first picture
(Fig. 13.2.a) shows a result X having a very good spread (or diversity) of solutions, but
the points are far away from the true Pareto front. Such results are not attractive because
they do not provide Pareto-optimal solutions and we would consider the convergence to be
premature in this case. The second example (Fig. 13.2.b) contains a set of solutions which
are very close to the true Pareto front but cover it only partially, so the decision maker
could lose important trade-off options. Finally, the front depicted in Fig. 13.2.c has the two
desirable properties of good convergence and spread.
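Convergence and spread are usually judged on the non-dominated subset of the points an optimizer returns. A minimal dominance filter for minimization might look like this (our own sketch, not tied to a particular MOEA):

```python
def dominates(a, b):
    """Pareto dominance for minimization: a is no worse than b in
    every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Filter a list of objective vectors down to its non-dominated front."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Applied to a returned population, the filter yields the approximation front whose closeness to the true Pareto front and whose spread are the two quality criteria discussed above.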
13.2.3 Domino Convergence
The phenomenon of domino convergence has been brought to attention by Rudnick [2347],
who studied it in the context of his BinInt problem [2347, 2698] which is discussed in
Section 50.2.5.2. In principle, domino convergence occurs when the candidate solutions
have features which contribute to significantly different degrees to the total fitness. If these
features are encoded in separate genes (or building blocks) in the genotypes, they are likely to
be treated with different priorities, at least in randomized or heuristic optimization methods.
Building blocks with a very strong positive influence on the objective values, for instance,
will quickly be adopted by the optimization process (i. e., converge). During this time, the
alleles of genes with a smaller contribution are ignored. They do not come into play until
the optimal alleles of the more important blocks have been accumulated. Rudnick [2347]
called this sequential convergence phenomenon domino convergence due to its resemblance
to a row of falling domino stones [2698].
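A BinInt-style objective makes the salience differences explicit. The sketch below is our own simplified version, not the exact definition from Section 50.2.5.2: bit i contributes 2^(n-1-i), so each leading bit outweighs the combined contribution of everything to its right, which is exactly the setting in which genes tend to be fixed one after another.

```python
def binint(bits):
    """BinInt-style objective with exponentially scaled gene salience:
    the contribution of bit i is 2**(n-1-i), so the leftmost bit is
    worth more than all bits to its right combined."""
    n = len(bits)
    return sum(b << (n - 1 - i) for i, b in enumerate(bits))
```

With four bits, a single set leading bit (value 8) already beats the three trailing bits together (value 7), which is why an optimizer fixates on the salient genes first.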
Let us consider the application of a Genetic Algorithm, an optimization method working
on bit strings, in such a scenario. Mutation operators from time to time destroy building
blocks with strong positive influence, which are then reconstructed by the search. If this
happens with a high enough frequency, the optimization process will never get to optimize
the less salient blocks because repairing and rediscovering those with higher importance
takes precedence. Thus, the mutation rate of the EA limits the probability of finding the
global optima in such a situation.
In the worst case, the contributions of the less salient genes may almost look like noise
and they are not optimized at all. Such a situation is also an instance of premature convergence,
since the global optimum, which would involve optimal configurations of all building
blocks, will not be discovered. In this situation, restarting the optimization process will not
help because it will turn out the same way with very high probability. Example problems
which are often likely to exhibit domino convergence are the Royal Road and the aforementioned
BinInt problem, which you can find discussed in Section 50.2.4 and Section 50.2.5.2,
respectively.
There are some relations between domino convergence and the Siren Pitfall in feature
selection for classification in data mining [961]. Here, if one feature is highly useful in a hard
sub-problem of the classification task, its utility may still be overshadowed by mediocre
features contributing to easy sub-tasks.
13.3 One Cause: Loss of Diversity
In biology, diversity is the variety and abundance of organisms at a given place and
time [1813, 2112]. Much of the beauty and efficiency of natural ecosystems is based on
a dazzling array of species interacting in manifold ways. Diversification is also a good strategy
utilized by investors in the economy in order to increase their wealth.

In population-based Global Optimization algorithms, maintaining a set of diverse candidate
solutions is very important as well. Losing diversity means approaching a state where all
the candidate solutions under investigation are similar to each other. Another term for this
state is convergence. Discussions about how diversity can be measured have been provided
by Cousins [648], Magurran [1813], Morrison [1956], Morrison and De Jong [1958], Paenke
et al. [2112], Routledge [2344], and Burke et al. [454, 458].
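One simple diversity measure used in such discussions is the mean pairwise distance of the population. The sketch below is our own; the Hamming metric is just one possible distance choice.

```python
def avg_pairwise_distance(population, dist):
    """Mean pairwise distance of a population under a given metric.
    Values approaching zero signal that the population has converged
    to a small region of the search space."""
    n = len(population)
    total = sum(dist(population[i], population[j])
                for i in range(n) for j in range(i + 1, n))
    return 2 * total / (n * (n - 1))

# Hamming distance for equal-length bit strings.
hamming = lambda a, b: sum(x != y for x, y in zip(a, b))
```

Tracking this value over the iterations of a population-based optimizer gives a crude but useful convergence indicator.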
Example E13.3 (Loss of Diversity → Premature Convergence).
Figure 13.3 illustrates two possible ways in which a population-based optimization method
may progress. Both processes start with a set of random candidate solutions, denoted as
bubbles in the fitness landscape already used in Fig. 13.1.a, in their search for the global
maximum. The upper algorithm preserves a reasonable diversity in the set of elements
under investigation, although initially the best candidate solutions are located in the proximity
of the local optimum. By doing so, it also manages to explore the area near the global
optimum and eventually discovers both peaks. The process illustrated in the lower part of
the picture instead solely concentrates on the area where the best candidate solutions were
sampled in the beginning and quickly climbs up the local optimum. Reaching a state of
premature convergence, it cannot escape from there anymore.
Figure 13.3: Loss of Diversity leads to Premature Convergence (from the initial situation, the upper row shows preservation of diversity and convergence to the global optimum; the lower row shows loss of diversity and premature convergence).
Preserving diversity is directly linked with maintaining a good balance between exploitation
and exploration [2112] and has been studied by researchers from many domains, such as
1. Genetic Algorithms [238, 2063, 2321, 2322],
2. Evolutionary Algorithms [365, 366, 1692, 1964, 2503, 2573],
3. Genetic Programming [392, 456, 458, 709, 1156, 1157],
4. Tabu Search [1067, 1069], and
5. Particle Swarm Optimization [2943].
13.3.1 Exploration vs. Exploitation
The operations which create new solutions from existing ones have a very large impact on
the speed of convergence and the diversity of the populations [884, 2535]. The step size in
Evolution Strategy is a good example of this issue: setting it properly is very important and
leads over to the exploration versus exploitation problem [1250]. The dilemma of deciding
which of the two principles to apply to which degree at a certain stage of optimization is
sketched in Figure 13.4 and can be observed in many areas of Global Optimization.²

Definition D13.5 (Exploration). In the context of optimization, exploration means finding
new points in areas of the search space which have not been investigated before.

² More or less synonymously to exploitation and exploration, the terms intensification and diversification
have been introduced by Glover [1067, 1069] in the context of Tabu Search.
Figure 13.4: Exploration versus Exploitation (initial situation: a good candidate solution is found; exploitation: search the close proximity of the good candidate; exploration: search also the distant areas).
Since computers have only limited memory, already evaluated candidate solutions usually
have to be discarded in order to accommodate the new ones. Exploration is a metaphor
for the procedure which allows search operations to find novel and maybe better solution
structures. Such operators (like mutation in Evolutionary Algorithms) have a high chance
of creating inferior solutions by destroying good building blocks but also a small chance of
finding totally new, superior traits (which, however, is not guaranteed at all).
Definition D13.6 (Exploitation). Exploitation is the process of improving and combining
the traits of the currently known solutions.

The crossover operator in Evolutionary Algorithms, for instance, is considered to be an
operator for exploitation. Exploitation operations often incorporate small changes into
already tested individuals, leading to new, very similar candidate solutions, or try to merge
building blocks of different, promising individuals. They usually have the disadvantage that
other, possibly better, solutions located in distant areas of the problem space will not be
discovered.
Almost all components of optimization strategies can either be used for increasing exploitation
or in favor of exploration. Unary search operations that improve an existing
solution in small steps can often be built, hence being exploitation operators. They can also
be implemented in a way that introduces much randomness into the individuals, effectively
making them exploration operators. Selection operations³ in Evolutionary Computation
choose a set of the most promising candidate solutions which will be investigated in the
next iteration of the optimizers. They can either return a small group of best individuals
(exploitation) or a wide range of existing candidate solutions (exploration).

Optimization algorithms that favor exploitation over exploration have higher convergence
speed but run the risk of not finding the optimal solution and may get stuck at a local
optimum. Then again, algorithms which perform excessive exploration may never improve
their candidate solutions well enough to find the global optimum, or it may take them very
long to discover it by accident. A good example of this dilemma is the Simulated Annealing
algorithm discussed in Chapter 27 on page 243. It is often modified to a form called
simulated quenching, which focuses on exploitation but loses the guaranteed convergence to
the optimum. Generally, optimization algorithms should employ at least one search operation
of explorative character and at least one which is able to exploit good solutions further.
There exists a vast body of research on the trade-off between exploration and exploitation
that optimization algorithms have to face [87, 741, 864, 885, 1253, 1986].
³ Selection will be discussed in Section 28.4 on page 285.
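How a single operator parameter shifts this balance can be sketched with a Gaussian mutation (our own illustration; the step-size interpretation mirrors the Evolution Strategy discussion above):

```python
import random

def mutate(x, step):
    """Gaussian mutation of a real vector. A small `step` exploits the
    close neighborhood of x; a large `step` explores distant regions
    of the search space. Tuning this one parameter already moves an
    optimizer along the exploration-exploitation axis."""
    return [xi + random.gauss(0.0, step) for xi in x]
```

The same operator thus serves both roles: `mutate(x, 0.001)` refines a known solution, while `mutate(x, 10.0)` samples far away from it.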
13.4 Countermeasures
There is no general approach which can prevent premature convergence. The probability
that an optimization process gets caught in a local optimum depends on the characteristics
of the problem to be solved and on the parameter settings and features of the optimization
algorithms applied [2350, 2722].
13.4.1 Search Operator Design
The most basic measure to decrease the probability of premature convergence is to make
sure that the search operations are complete (see Definition D4.8 on page 85), i. e., to make
sure that they can at least theoretically reach every point in the search space from every
other point. Then, it is (at least theoretically) possible to escape arbitrary local optima.
A prime example of bad operator design is Algorithm 1.1 on page 40, which cannot escape
any local optima. The two different operators given in Equation 5.4 and Equation 5.5 on page 96
are another example of the influence of the search operation design on the vulnerability of
the optimization process to local convergence.
13.4.2 Restarting
A very crude and yet sometimes effective measure is restarting the optimization process at
randomly chosen points in time. One example of this method are GRASPs, Greedy Randomized
Adaptive Search Procedures [902, 906] (see Chapter 42 on page 499), which continuously
restart the process of creating an initial solution and refining it with local search. Still, such
approaches are likely to fail in domino convergence situations. Increasing the proportion of
exploration operations may also reduce the chance of premature convergence.
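A random-restart scheme in the spirit of this section can be sketched as follows. This is our own minimal version (a restarted hill climber for minimization), not the GRASP algorithm itself; `random_solution` and `neighbor` are caller-supplied hypothetical operators.

```python
import random

def restarting_hill_climber(f, random_solution, neighbor, restarts=10, steps=300):
    """Hill climbing with random restarts: each run greedily improves
    a random starting point; restarting gives the process repeated
    chances to land in the basin of the global optimum instead of
    staying trapped in a local one. Minimization is assumed."""
    best = None
    for _ in range(restarts):
        x = random_solution()
        for _ in range(steps):
            y = neighbor(x)
            if f(y) < f(x):
                x = y  # accept only improvements
        if best is None or f(x) < f(best):
            best = x
    return best

# A bimodal test function with two global minima of value 0 at x = -1 and x = +1.
f = lambda x: (x * x - 1.0) ** 2
```

Each restart is an independent trial, so the failure probability of the whole scheme shrinks geometrically with the number of restarts, as long as the basin of a global optimum has positive sampling probability.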
13.4.3 Low Selection Pressure and Population Size
Generally, the higher the chance that candidate solutions with bad fitness are investigated
instead of being discarded in favor of the seemingly better ones, the lower the chance of
getting stuck at a local optimum. This is the exact idea which distinguishes Simulated
Annealing from Hill Climbing (see Chapter 27). It is known that Simulated Annealing can
find the global optimum, whereas simple Hill Climbers are likely to prematurely converge.
In an EA, too, using low selection pressure decreases the chance of premature convergence.
However, such an approach also decreases the speed with which good solutions are exploited.
Simulated Annealing, for instance, takes extremely long if a guaranteed convergence to the
global optimum is required.

Increasing the population size of a population-based optimizer may be useful as well,
since larger populations can maintain more individuals (and hence, also be more diverse).
However, the idea that larger populations will always lead to better optimization results
does not always hold, as shown by van Nimwegen and Crutchfield [2778].
13.4.4 Sharing, Niching, and Clearing
Instead of increasing the population size, it thus makes sense to gain more from smaller
populations. In order to extend the duration of the evolution in Evolutionary Algorithms,
many methods have been devised for steering the search away from areas which have already
been frequently sampled. In steady-state EAs (see Paragraph 28.1.4.2.1), it is common to
remove duplicate genotypes from the population [2324].

More generally, the exploration capabilities of an optimizer can be improved by integrating
density metrics into the fitness assignment process. The most popular of such approaches
are sharing and niching (see Section 28.3.4, [735, 742, 1080, 1085, 1250, 1899, 2391]). The
Strength Pareto Algorithms, which are widely accepted to be highly efficient, use another
idea: they adapt the number of individuals that one candidate solution dominates as a density
measure [3097, 3100].

Petrowski [2167, 2168] introduced the related clearing approach, which you can find
discussed in Section 28.4.7, where individuals are grouped according to their distance in the
phenotypic or genotypic space and all but a certain number of individuals from each group
just receive the worst possible fitness. A similar method aiming for convergence prevention
is introduced in Section 28.4.7, where individuals close to each other in objective space are
deleted.
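The sharing idea can be sketched in a few lines. This is our own simplified version of fitness sharing with a triangular sharing function; `sigma` (the niche radius) and the distance function are caller-supplied assumptions, and fitness is assumed to be maximized.

```python
def shared_fitness(population, raw_fitness, dist, sigma=1.0):
    """Fitness sharing (a niching technique): each individual's raw
    fitness is divided by a niche count that grows with the number
    of similar individuals, so crowded regions of the search space
    look less attractive and the population spreads out."""
    out = []
    for x, fx in zip(population, raw_fitness):
        # Triangular sharing function: full weight for identical points,
        # linearly decaying to zero at distance sigma.
        niche = sum(max(0.0, 1.0 - dist(x, y) / sigma) for y in population)
        out.append(fx / niche)
    return out
```

An isolated individual keeps (almost) its raw fitness, while individuals sitting in a cluster see theirs deflated, which is exactly the density pressure described above.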
13.4.5 Self-Adaptation
Another approach against premature convergence is to introduce the capability of self-adaptation,
allowing the optimization algorithm to change its strategies or to modify its
parameters depending on its current state. Such behaviors, however, are often implemented
not in order to prevent premature convergence but to speed up the optimization process
(which may lead to premature convergence to local optima) [2351–2353].
13.4.6 Multi-Objectivization
Recently, the idea of using helper objectives has emerged. Here, usually a single-objective
problem is transformed into a multi-objective one by adding new objective functions [1429,
1440, 1442, 1553, 1757, 2025]. Brockhoff et al. [425] proved that in some cases, such changes
can speed up the optimization process. The new objectives are often derived from the main
objective by decomposition [1178] or from certain characteristics of the problem [1429]. They
are then optimized together with the original objective function with some multi-objective
technique, say an MOEA (see Definition D28.2). Problems where the main objective is
some sort of sum of different components contributing to the fitness are especially suitable
for that purpose, since such a component naturally lends itself as a helper objective. It has
been shown that the original objective function has at least as many local optima as the
total number of all local optima present in the decomposed objectives for sum-of-parts
multi-objectivizations [1178].
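The sum-of-parts construction can be sketched directly. The part functions f1 and f2 below are hypothetical, chosen only for illustration; the point is that two candidates with equal sum f may become Pareto-incomparable under the vector view.

```python
def multi_objectivize(parts):
    """Sum-of-parts multi-objectivization: turn a single objective
    f(x) = f1(x) + ... + fk(x) into the vector (f1(x), ..., fk(x))
    and compare candidates by Pareto dominance instead of by the sum."""
    def g(x):
        return tuple(p(x) for p in parts)
    return g

# Hypothetical helper objectives: two quadratic bowls with distinct minima.
f1 = lambda x: (x - 0.25) ** 2
f2 = lambda x: (x - 0.75) ** 2
g = multi_objectivize([f1, f2])
```

For example, x = 0.25 and x = 0.75 have the same sum f(x) = f1(x) + f2(x) = 0.25, yet neither dominates the other under g, so the search is not forced to prefer one over the other.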
Example E13.4 (Sum-of-Parts Multi-Objectivization).
In Figure 13.5, we provide an example of multi-objectivization of a sum-of-parts objective
function f defined on the real interval X = G = [0, 1]. The original (single) objective f
(Fig. 13.5.a) can be divided into the two objective functions f₁ and f₂ with f(x) = f₁(x) + f₂(x),
illustrated in Fig. 13.5.b.

If we assume minimization, f has five locally optimal sets according to Definition D4.14
on page 89: The sets X⋆₁ and X⋆₅ are globally optimal since they contain the elements with
the lowest function value. x⋆ₗ,₂, x⋆ₗ,₃, and x⋆ₗ,₄ are three sets each containing exactly one locally
minimal point.
We now split up f into f₁ and f₂ and optimize either f = (f₁, f₂) or f = (f, f₁, f₂) together.
If we apply Pareto-based optimization (see Section 3.3.5 on page 65) and use the Definition
D4.17 on page 89 of local optima (which is also used in [1178]), we find that there are
only three locally optimal regions remaining: A⋆₁, A⋆₂, and A⋆₃.

A⋆₁, for example, is a continuous region which borders on elements it dominates. If we
go a small step to the left from the left border of A⋆₁, we will find an element with the same
value of f₁ but with worse values in both f₂ and f. The same is true for its right border. A⋆₃
shows exactly the same behavior.

Any step to the left or right within A⋆₂ leads to an improvement of either f₁ or f₂ and a
degeneration of the other function. Therefore, no element within A⋆₂ dominates any other
element in A⋆₂. However, at the borders of A⋆₂, f₁ becomes constant while f₂ keeps growing,
i. e., getting worse. The neighboring elements outside of A⋆₂ are thus dominated by the
elements on its borders.

Fig. 13.5.a: The original objective f(x) = f₁(x) + f₂(x) and its local optima X⋆₁, x⋆ₗ,₂, x⋆ₗ,₃, x⋆ₗ,₄, X⋆₅. Fig. 13.5.b: The helper objectives f₁, f₂ and the locally optimal regions A⋆₁, A⋆₂, A⋆₃ in the objective space.
Figure 13.5: Multi-Objectivization à la Sum-of-Parts.
Multi-objectivization may thus help here to overcome the local optima, since there are
fewer locally optimal regions. However, a look at A⋆₂ shows us that it contains x⋆ₗ,₂, x⋆ₗ,₃, and
x⋆ₗ,₄ and, since it is continuous, also (uncountably) infinitely many other points. In this
case, it would thus not be clear a priori whether multi-objectivization would be beneficial
or not.
Tasks T13
49. Sketch one possible objective function for a single-objective optimization problem
which has infinitely many solutions. Copy this graph three times and paint six points
into each copy which represent:
(a) the possible results of a prematurely converged optimization process,
(b) the possible results of a correctly converged optimization process with
non-uniform spread (convergence),
(c) the possible results of an optimization process which has converged nicely.
[10 points]
50. Discuss the dilemma of exploration versus exploitation. What are the advantages and
disadvantages of these two general search principles?
[5 points]
51. Discuss why the Hill Climbing optimization algorithm (Chapter 26) is particularly
prone to premature convergence and how Simulated Annealing (Chapter 27) mitigates
the cause of this vulnerability.
[5 points]
Chapter 14
Ruggedness and Weak Causality
Figure 14.1: The landscape difficulty increases with increasing ruggedness (from left to right: unimodal, multimodal, somewhat rugged, very rugged).
14.1 The Problem: Ruggedness
Optimization algorithms generally depend on some form of gradient in the objective or fitness
space. The objective functions should be continuous and exhibit low total variation¹, so the
optimizer can descend the gradient easily. If the objective functions become more unsteady
or fluctuating, i. e., go up and down often, it becomes more complicated for the optimization
process to find the right directions to proceed in, as sketched in Figure 14.1. The more
rugged a function gets, the harder it becomes to optimize. In short, one could say ruggedness
is multi-modality plus steep ascents and descents in the fitness landscape. Examples of
rugged landscapes are Kauffman's NK fitness landscape (see Section 50.2.1), the p-Spin
model discussed in Section 50.2.2, Bergman and Feldman's jagged fitness landscape [276],
and the sketch in Fig. 11.1.d on page 140.
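A common empirical proxy for ruggedness is the autocorrelation of objective values along a random walk through the landscape: values near 1 indicate a smooth landscape, values near 0 a very rugged one. The sketch below is our own; the two toy landscapes and the step size are arbitrary choices for illustration.

```python
import math
import random

def walk_autocorrelation(f, start, neighbor, steps=2000, lag=1):
    """Estimate landscape ruggedness as the lag-k autocorrelation of
    objective values sampled along a random walk. Smooth landscapes
    yield values near 1; rugged ones values near 0."""
    x, values = start, []
    for _ in range(steps):
        values.append(f(x))
        x = neighbor(x)
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values)
    cov = sum((values[i] - mean) * (values[i + lag] - mean)
              for i in range(len(values) - lag))
    return cov / var if var > 0 else 1.0

smooth = lambda x: x * x                      # gentle, unimodal landscape
rugged = lambda x: math.sin(1000.0 * x)       # wildly oscillating landscape
step = lambda x: x + random.gauss(0.0, 0.01)  # small random-walk move
```

With the tiny step size used here, consecutive samples on the smooth landscape are almost identical, while on the oscillating one they are essentially uncorrelated.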
14.2 One Cause: Weak Causality
During an optimization process, new points in the search space are created by the search
operations. Generally, we can assume that the genotypes which are the input of the search
operations correspond to phenotypes which have previously been selected. Usually, the
better or the more promising an individual is, the higher are its chances of being selected for
further investigation. Reversing this statement suggests that individuals which are passed
to the search operations are likely to have a good fitness. Since the fitness of a candidate
solution depends on its properties, it can be assumed that the features of these individuals
are not so bad either. It should thus be possible for the optimizer to introduce slight changes
to their properties in order to find out whether they can be improved any further². Normally,
such exploitative modifications should also lead to small changes in the objective values and
hence, in the fitness of the candidate solution.

¹ http://en.wikipedia.org/wiki/Total_variation [accessed 2008-04-23]
Denition D14.1 (Strong Causality). Strong causality (locality) means that small
changes in the properties of an object also lead to small changes in its behavior [2278,
2279, 2329].
This principle (proposed by Rechenberg [2278, 2279]) should not only hold for the search spaces and operations designed for optimization (see Equation 24.5 on page 218 in Section 24.9), but applies to natural genomes as well. The offspring resulting from sexual reproduction of two fish, for instance, has a different genotype than its parents. Yet, it is far more probable that these variations manifest in a unique color pattern of the scales, for example, instead of leading to a totally different creature.

Apart from this straightforward, informal explanation, causality has been investigated thoroughly in different fields of optimization, such as Evolution Strategy [835, 2278], structure evolution [1760, 1761], Genetic Programming [835, 1396, 2327, 2329], genotype-phenotype mappings [2451], search operators [835], and Evolutionary Algorithms in general [835, 2338, 2599].
Sendhoff et al. [2451, 2452] introduce a very nice definition for causality which combines the search space G, the (unary) search operations searchOp_1 \in Op, the genotype-phenotype mapping gpm, and the problem space X with the objective functions f. First, a distance measure dist_caus based on the stochastic nature of the unary search operation searchOp_1 is defined.³

  dist_caus(g_1, g_2) = \log\left(\frac{1}{P_{id} \cdot P(g_2 = searchOp_1(g_1))}\right)    (14.1)

  P_{id} = P(g = searchOp_1(g))    (14.2)

In Equation 14.1, g_1 and g_2 are two elements of the search space G, P(g_2 = searchOp_1(g_1)) is the probability that g_2 is the result of one application of the search operation searchOp_1 to g_1, and P_id is the probability that searchOp_1 maps an element to itself. dist_caus(g_1, g_2) falls with a rising probability that g_1 is mapped to g_2. Sendhoff et al. [2451, 2452] then define strong causality as the state where Equation 14.3 holds:
  \forall g_1, g_2, g_3 \in G :
    \|f(gpm(g_1)) - f(gpm(g_2))\|_2 < \|f(gpm(g_1)) - f(gpm(g_3))\|_2
    \Rightarrow dist_caus(g_1, g_2) < dist_caus(g_1, g_3)
    \Leftrightarrow P(g_2 = searchOp_1(g_1)) > P(g_3 = searchOp_1(g_1))    (14.3)
In other words, strong causality means that the search operations have a higher chance to produce points g_2 which (map to candidate solutions which) are closer in objective space to their inputs g_1 than to create elements g_3 which (map to candidate solutions which) are farther away.

In fitness landscapes with weak (low) causality, small changes in the candidate solutions often lead to large changes in the objective values, i. e., ruggedness. It then becomes harder to decide which region of the problem space to explore and the optimizer cannot find reliable gradient information to follow. A small modification of a very bad candidate solution may then lead to a new local optimum and the best candidate solution currently known may be surrounded by points that are inferior to all other tested individuals.

The lower the causality of an optimization problem, the more rugged its fitness landscape is, which leads to a degeneration of the performance of the optimizer [1563]. This does not necessarily mean that it is impossible to find good solutions, but it may take very long to do so.
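To make Equation 14.1 concrete, the following sketch (our own didactic example, not code from this book; all function names are ours) computes dist_caus for bit strings under a mutation operator that flips each bit independently with probability p. For this operator, the required probabilities have closed forms: P(g_2 = searchOp_1(g_1)) = p^d (1-p)^(n-d) with d the Hamming distance, and P_id = (1-p)^n.

```python
import math

def p_mutate(g1: str, g2: str, p: float) -> float:
    """Probability that per-bit flip mutation (rate p) turns g1 into g2."""
    d = sum(a != b for a, b in zip(g1, g2))  # Hamming distance
    n = len(g1)
    return (p ** d) * ((1 - p) ** (n - d))

def dist_caus(g1: str, g2: str, p: float) -> float:
    """Equation 14.1: log(1 / (P_id * P(g2 = searchOp1(g1))))."""
    p_id = p_mutate(g1, g1, p)               # Equation 14.2: self-mapping
    return math.log(1.0 / (p_id * p_mutate(g1, g2, p)))

# The distance falls as the probability of reaching g2 from g1 rises:
# a one-bit neighbour is "closer" than a two-bit neighbour.
assert dist_caus("0000", "0001", p=0.1) < dist_caus("0000", "0011", p=0.1)
```

As the footnote above notes for the general case, the measure is undefined when either probability is zero; for this operator that only happens at p = 0 or p = 1.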
² We have already mentioned this under the subject of exploitation.
³ This measure has the slight drawback that it is undefined for P(g_2 = searchOp_1(g_1)) = 0 or P_id = 0.
14.3 Fitness Landscape Measures
As measures for the ruggedness of a fitness landscape (or its general difficulty), many different metrics have been proposed. Wedge and Kell [2874] and Altenberg [80] provide nice lists of them in their work⁴, which we summarize here:
1. Weinberger [2881] introduced the autocorrelation function and the correlation length of random walks.
2. The correlation of the search operators was used by Manderick et al. [1821] in conjunction with the autocorrelation [2267].
3. Jones and Forrest [1467, 1468] proposed the fitness distance correlation (FDC), the correlation of the fitness of an individual and its distance to the global optimum. This measure has been extended by researchers such as Clergue et al. [590, 2784].
4. The probability that search operations create offspring fitter than their parents, as defined by Rechenberg [2278] and Beyer [295] (and called evolvability by Altenberg [77]), will be discussed in depth in Section 16.2 on page 172.
5. Simulation dynamics have been researched by Altenberg [77] and Grefenstette [1130].
6. Another interesting metric is the fitness variance of formae (Radcliffe and Surry [2246]) and schemas (Reeves and Wright [2284]).
7. The error threshold method from theoretical biology [867, 2057] has been adopted by Ochoa et al. [2062] for Evolutionary Algorithms. It is the critical mutation rate beyond which structures obtained by the evolutionary process are destroyed by mutation more frequently than selection can reproduce them [2062].
8. The negative slope coefficient (NSC) by Vanneschi et al. [2785, 2786] may be considered as an extension of Altenberg's evolvability measure.
9. Davidor [691] uses the epistatic variance as a measure of utility of a certain representation in Genetic Algorithms. We discuss the issue of epistasis in Chapter 17.
10. The genotype-fitness correlation (GFC) of Wedge and Kell [2874] is a new measure for ruggedness in fitness landscapes and has been shown to be a good guide for determining optimal population sizes in Genetic Programming.
11. Rana [2267] defined the Static and the Basin metrics based on the notion of schemas in binary-coded search spaces.
14.3.1 Autocorrelation and Correlation Length
As an example, let us take a look at the autocorrelation function as well as the correlation length of random walks [2881]. Here we borrow its definition from Verel et al. [2802]:

Definition D14.2 (Autocorrelation Function). Given a random walk (x_i, x_{i+1}, . . . ), the autocorrelation function \rho of an objective function f is the autocorrelation function of the time series (f(x_i), f(x_{i+1}), . . . ).

  \rho(k, f) = \frac{E[f(x_i) \cdot f(x_{i+k})] - E[f(x_i)] \cdot E[f(x_{i+k})]}{D^2[f(x_i)]}    (14.4)

where E[f(x_i)] and D^2[f(x_i)] are the expected value and the variance of f(x_i).
⁴ Especially the one of Wedge and Kell [2874] is beautiful and far more detailed than this summary here.
The correlation length \tau = -1/\log \rho(1, f) measures how the autocorrelation function decreases and summarizes the ruggedness of the fitness landscape: the larger the correlation length, the lower the total variation of the landscape. From the works of Kinnear, Jr [1540] and Lipsitch [1743], however, we also know that correlation measures do not always fully represent the hardness of a problem landscape.
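Both quantities are easy to estimate numerically. The sketch below (our own illustration; the bounded walk, the step size, and the two test functions are arbitrary choices) samples a random walk kept inside [-1, 1] and estimates \rho(1, f) and \tau for a smooth and a rugged variant of the same parabola; the smoother landscape yields the larger correlation length.

```python
import math
import random

def rho(f, k=1, steps=20_000, step=0.1, seed=7):
    """Estimate the autocorrelation rho(k, f) of Equation 14.4 along a
    random walk that is kept inside [-1, 1]."""
    rng = random.Random(seed)
    x, vals = 0.0, []
    for _ in range(steps):
        vals.append(f(x))
        x = max(-1.0, min(1.0, x + rng.uniform(-step, step)))
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    cov = sum((vals[i] - mean) * (vals[i + k] - mean)
              for i in range(len(vals) - k)) / (len(vals) - k)
    return cov / var

def correlation_length(f):
    return -1.0 / math.log(rho(f))          # tau = -1 / log(rho(1, f))

smooth = lambda x: x * x                            # low total variation
rugged = lambda x: x * x + 0.3 * math.sin(50 * x)   # same trend, fast oscillation

# The smoother landscape keeps a much larger correlation length.
assert correlation_length(smooth) > correlation_length(rugged)
```

Note that the estimate is stochastic; the fixed seed only makes the sketch reproducible, and a real study would average over many walks.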
14.4 Countermeasures
To the knowledge of the author, no viable method which can directly mitigate the effects of rugged fitness landscapes exists. In population-based approaches, using large population sizes and applying methods to increase the diversity can reduce the influence of ruggedness, but only up to a certain degree. Utilizing Lamarckian evolution [722, 2933] or the Baldwin effect [191, 1235, 1236, 2933], i. e., incorporating a local search into the optimization process, may further help to smoothen out the fitness landscape [1144] (see Section 36.2 and Section 36.3, respectively).

Weak causality is often a home-made problem because it results to some extent from the choice of the solution representation and search operations. We pointed out that exploration operations are important for lowering the risk of premature convergence. Exploitation operators are just as important for refining solutions to a certain degree. In order to apply optimization algorithms in an efficient manner, it is necessary to find representations which allow for iterative modifications with bounded influence on the objective values, i. e., exploitation. In Chapter 24, we present some further rules-of-thumb for search space and operation design.
Tasks T14
52. Investigate the Weierstraß Function⁵. Is it unimodal, smooth, multimodal, or even rugged?
[5 points]
53. Why is causality important in metaheuristic optimization? Why are problems with
low causality normally harder than problems with strong causality?
[5 points]
54. Define three objective functions by yourself: a uni-modal one (f_1 : R → R⁺), a simple multi-modal one (f_2 : R → R⁺), and a rugged one (f_3 : R → R⁺). All three functions should have the same global minimum (optimum) 0 with objective value 0. Try to solve all three functions with a trivial implementation of Hill Climbing (see Algorithm 26.1 on page 230) by applying it 30 times to each function. Note your observations regarding the problem hardness.
[15 points]
⁵ http://en.wikipedia.org/wiki/Weierstrass_function [accessed 2010-09-14]
Chapter 15
Deceptiveness
15.1 Introduction
Especially annoying fitness landscapes show deceptiveness (or deceptivity). The gradient of deceptive objective functions leads the optimizer away from the optima, as illustrated in Figure 15.1 and Fig. 11.1.e on page 140.
Figure 15.1: Increasing landscape difficulty caused by deceptivity (panels, from left to right: unimodal; unimodal with neutrality; multimodal (center and limits); deceptive; very deceptive).
The term deceptiveness [288] is mainly used in the Genetic Algorithm¹ community in the context of the Schema Theorem [1076, 1077]. Schemas describe certain areas (hyperplanes) in the search space. If an optimization algorithm has discovered an area with a better average fitness compared to other regions, it will focus on exploring this region based on the assumption that highly fit areas are likely to contain the true optimum. Objective functions where this is not the case are called deceptive [288, 1075, 1739]. Examples for deceptiveness are the ND fitness landscapes outlined in Section 50.2.3, trap functions (see Section 50.2.3.1), and the fully deceptive problems given by Goldberg et al. [743, 1076, 1083] and [288].
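Trap functions make the deceptive gradient tangible. The following sketch (our own minimal example, not the definition from Section 50.2.3.1) rewards bit strings with few ones almost everywhere, yet places the global optimum at the all-ones string, so the average-fitness signal points away from it.

```python
def trap(bits):
    """Deceptive 'trap' objective (to be minimized): the gradient pulls
    towards the all-zeros string (value 1), but the global optimum is the
    all-ones string (value 0)."""
    u = sum(bits)                        # number of ones ("unitation")
    return 0 if u == len(bits) else u + 1

n = 5
assert trap([1] * n) == 0                # global optimum: all ones
assert trap([0] * n) == 1                # deceptive local optimum: all zeros
# Every single-bit "improvement" removes a one, i.e. moves away from
# the global optimum:
assert trap([0, 1, 0, 1, 0]) > trap([0, 0, 0, 1, 0])
```

A hill climber flipping single bits will therefore almost always end in the all-zeros trap, exactly the behavior Definition D15.1 below describes.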
15.2 The Problem
Definition D15.1 (Deceptiveness). An objective function f : X → R is deceptive if a Hill Climbing algorithm (see Chapter 26 on page 229) would be steered in a direction leading away from all global optima in large parts of the search space.
¹ We are going to discuss Genetic Algorithms in Chapter 29 on page 325 and the Schema Theorem in Section 29.5 on page 341.
For Definition D15.1, it is usually also assumed that (a) the search space is also the problem space (G = X), (b) the genotype-phenotype mapping is the identity mapping, and (c) the search operations do not violate the causality (see Definition D14.1 on page 162). Still, Definition D15.1 remains a bit blurry since the term "large parts of the search space" is intuitive but not precise. However, it reveals the basic problem caused by deceptiveness. If the information accumulated by an optimizer actually guides it away from the optimum, search algorithms will perform worse than a random walk or an exhaustive enumeration method. This issue has been known for a long time [1914, 1915, 2696, 2868] and has been subsumed under the No Free Lunch Theorem, which we will discuss in Chapter 23.
15.3 Countermeasures
Solving deceptive optimization tasks perfectly involves sampling many individuals with very bad features and low fitness. This contradicts the basic ideas of metaheuristics and thus, there are no efficient countermeasures against deceptivity. Using large population sizes, maintaining a very high diversity, and utilizing linkage learning (see Section 17.3.3) are, maybe, the only approaches which can provide at least a small chance of finding good solutions.

A completely deceptive fitness landscape would be even worse than the needle-in-a-haystack problems outlined in Section 16.7 [? ]. Exhaustive enumeration of the search space or searching by using random walks may be the only possible way to go in extremely deceptive cases. However, the question arises what kind of practical problem would have such a feature. The solution of a completely deceptive problem would be the one that, when looking at all possible solutions, seems to be the most improbable one. However, when actually testing it, this worst-looking point in the problem space turns out to be the best-possible solution. Such kinds of problems are, as far as the author of this book is concerned, rather unlikely and rare.
Tasks T15
55. Why is deceptiveness fatal for metaheuristic optimization? Why are non-deceptive
problems usually easier than deceptive ones?
[5 points]
56. Construct two different deceptive objective functions f_1, f_2 : R → R⁺ by yourself. Like the three functions from Task 54 on page 165, they should have the global minimum (optimum) 0 with objective value 0. Try to solve these functions with a trivial implementation of Hill Climbing (see Algorithm 26.1 on page 230) by applying it 30 times to both of them. Note your observations regarding the problem hardness and compare them to your results from Task 54.
[7 points]
Chapter 16
Neutrality and Redundancy
16.1 Introduction
Figure 16.1: Landscape difficulty caused by neutrality (panels, from left to right: unimodal; not much gradient, distant from optimum; with some neutrality; very neutral).
Denition D16.1 (Neutrality). We consider the outcome of the application of a search
operation to an element of the search space as neutral if it yields no change in the objective
values [219, 2287].
It is challenging for optimization algorithms if the best candidate solution currently known is situated on a plane of the fitness landscape, i. e., all adjacent candidate solutions have the same objective values. As illustrated in Figure 16.1 and Fig. 11.1.f on page 140, an optimizer then cannot find any gradient information and thus, no direction in which to proceed in a systematic manner. From its point of view, each search operation will yield identical individuals. Furthermore, optimization algorithms usually maintain a list of the best individuals found, which will then overflow eventually or require pruning.
The degree of neutrality \nu is defined as the fraction of neutral results among all possible products of the search operations applied to a specific genotype [219]. We can generalize this measure to areas \bar{G} \subseteq G in the search space G by averaging over all their elements. Regions where \nu is close to one are considered as neutral.

  \forall g_1 \in G : \nu(g_1) = \frac{\left|\left\{g_2 : P(g_2 = Op(g_1)) > 0 \wedge f(gpm(g_2)) = f(gpm(g_1))\right\}\right|}{\left|\left\{g_2 : P(g_2 = Op(g_1)) > 0\right\}\right|}    (16.1)

  \forall \bar{G} \subseteq G : \nu\left(\bar{G}\right) = \frac{1}{\left|\bar{G}\right|} \sum_{g \in \bar{G}} \nu(g)    (16.2)
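Equation 16.1 can be evaluated exactly for small discrete settings. In the sketch below (our own illustration, with names of our choosing), the search operation Op is a single bit flip, so each of the n one-bit neighbors is a possible result, the genotype-phenotype mapping is the identity, and the objective only counts ones in the first half of the string, which makes every flip in the second half neutral.

```python
def neighbours(g):
    """All possible results of the one-bit-flip search operation Op."""
    return [g[:i] + ('1' if g[i] == '0' else '0') + g[i + 1:]
            for i in range(len(g))]

def nu(g, f):
    """Degree of neutrality of Equation 16.1 (gpm is the identity here)."""
    ns = neighbours(g)
    return sum(f(h) == f(g) for h in ns) / len(ns)

# Objective that ignores the second half of the genotype entirely:
f = lambda g: g[:len(g) // 2].count('1')

assert nu('0000', f) == 0.5      # flips in the ignored half are neutral
assert nu('000000', f) == 0.5
# Averaging nu over a region gives Equation 16.2:
region = ['0000', '0101', '1111']
assert sum(nu(g, f) for g in region) / len(region) == 0.5
```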
16.2 Evolvability
Another metaphor in Global Optimization which has been borrowed from biological systems [1289] is evolvability¹ [699]. Wagner [2833, 2834] points out that this word has two uses in biology: According to Kirschner and Gerhart [1543], a biological system is evolvable if it is able to generate heritable, selectable phenotypic variations. Such properties can then be spread by natural selection and changed during the course of evolution. In its second sense, a system is evolvable if it can acquire new characteristics via genetic change that help the organism(s) to survive and to reproduce. Theories about how the ability of generating adaptive variants has evolved have been proposed by Riedl [2305], Wagner and Altenberg [78, 2835], Bonner [357], and Conrad [624], amongst others. The idea of evolvability can be adopted for Global Optimization as follows:

Definition D16.2 (Evolvability). The evolvability of an optimization process in its current state defines how likely the search operations will lead to candidate solutions with new (and eventually, better) objective values.

The direct probability of success [295, 2278], i. e., the chance that search operators produce offspring fitter than their parents, is also sometimes referred to as evolvability in the context of Evolutionary Algorithms [77, 80].
16.3 Neutrality: Problematic and Beneficial
The link between evolvability and neutrality has been discussed by many researchers [2834, 3039]. The evolvability of neutral parts of a fitness landscape depends on the optimization algorithm used. It is especially low for Hill Climbing and similar approaches, since the search operations cannot directly provide improvements or even changes. The optimization process then degenerates to a random walk, as illustrated in Fig. 11.1.f on page 140. The work of Beaudoin et al. [242] on the ND fitness landscapes² shows that neutrality may destroy useful information such as correlation.

Researchers in molecular evolution, on the other hand, found indications that the majority of mutations in biology have no selective influence [971, 1315] and that the transformation from genotypes to phenotypes is a many-to-one mapping. Wagner [2834] states that neutrality in natural genomes is beneficial if it concerns only a subset of the properties peculiar to the offspring of a candidate solution while allowing meaningful modifications of the others. Toussaint and Igel [2719] even go as far as declaring it a necessity for self-adaptation.

The theory of punctuated equilibria³, introduced in biology by Eldredge and Gould [875, 876], states that species experience long periods of evolutionary inactivity which are interrupted by sudden, localized, and rapid phenotypic evolutions [185].⁴ It is assumed that the populations explore neutral layers⁵ during the time of stasis until, suddenly, a relevant change in a genotype leads to a better adapted phenotype [2780] which then reproduces quickly. Similar phenomena can be observed/are utilized in EAs [604, 1838].
"Uh?", you may think, "How does this fit together?" The key to differentiating between "good" and "bad" neutrality is its degree in relation to the number of possible solutions maintained by the optimization algorithms. Smith et al. [2538] have used illustrative examples similar to Figure 16.2 showing that a certain amount of neutral reproductions can foster the progress of optimization. In Fig. 16.2.a, basically the same scenario of premature

¹ http://en.wikipedia.org/wiki/Evolvability [accessed 2007-07-03]
² See Section 50.2.3 on page 557 for a detailed elaboration on the ND fitness landscape.
³ http://en.wikipedia.org/wiki/Punctuated_equilibrium [accessed 2008-07-01]
⁴ A very similar idea is utilized in the Extremal Optimization method discussed in Chapter 41.
⁵ Or neutral networks, as discussed in Section 16.4.
convergence as in Fig. 13.1.a on page 152 is depicted. The optimizer is drawn to a local optimum from which it cannot escape anymore. Fig. 16.2.b shows that a little shot of neutrality could form a bridge to the global optimum. The optimizer now has a chance to escape the smaller peak if it is able to find and follow that bridge, i. e., the evolvability of the system has increased. If this bridge gets wider, as sketched in Fig. 16.2.c, the chance of finding the global optimum increases as well. Of course, if the bridge gets too wide, the optimization process may end up in a scenario like in Fig. 11.1.f on page 140 where it cannot find any direction. Furthermore, in this scenario we expect the neutral bridge to lead to somewhere useful, which is not necessarily the case in reality.
Figure 16.2: Possible positive influence of neutrality (Fig. 16.2.a: Premature Convergence; Fig. 16.2.b: Small Neutral Bridge; Fig. 16.2.c: Wide Neutral Bridge; each panel marks the global optimum and a local optimum).
Recently, the idea of utilizing the processes of molecular⁶ and evolutionary⁷ biology as a complement to Darwinian evolution for optimization has gained interest [212]. Scientists like Hu and Banzhaf [1286, 1287, 1289] began to study the application of metrics such as the evolution rate of gene sequences [2973, 3021] to Evolutionary Algorithms. Here, the degree of neutrality (synonymous vs. non-synonymous changes) seems to play an important role.

Examples for neutrality in fitness landscapes are the ND family (see Section 50.2.3), the NKp and NKq models (discussed in Section 50.2.1.5), and the Royal Road (see Section 50.2.4). Another common instance of neutrality is bloat in Genetic Programming, which is outlined in ?? on page ??.
16.4 Neutral Networks
From the idea of neutral bridges between different parts of the search space as sketched by Smith et al. [2538], we can derive the concept of neutral networks.

Definition D16.3 (Neutral Network). Neutral networks are equivalence classes K of elements of the search space G which map to elements of the problem space X with the same objective values and are connected by chains of applications of the search operators Op [219].

  \forall g_1, g_2 \in G : g_1 \in K(g_2) \subseteq G \Leftrightarrow \exists k \in \mathbb{N}_0 : P\left(g_2 = Op^k(g_1)\right) > 0 \wedge f(gpm(g_1)) = f(gpm(g_2))    (16.3)
⁶ http://en.wikipedia.org/wiki/Molecular_biology [accessed 2008-07-20]
⁷ http://en.wikipedia.org/wiki/Evolutionary_biology [accessed 2008-07-20]
Barnett [219] states that a neutral network has the constant innovation property if
1. the rate of discovery of innovations keeps constant for a reasonably large number of applications of the search operations [1314], and
2. this rate is comparable with that of an unconstrained random walk.

Networks with this property may prove very helpful if they connect the optima in the fitness landscape. Stewart [2611] utilizes neutral networks and the idea of punctuated equilibria in his extrema selection, a Genetic Algorithm variant that focuses on exploring individuals which are far away from the centroid of the set of currently investigated candidate solutions (but still have good objective values). Then again, Barnett [218] showed that populations in Genetic Algorithms tend to dwell in neutral networks of high dimensions of neutrality regardless of their objective values, which (obviously) cannot be considered advantageous.

The convergence on neutral networks has furthermore been studied by Bornberg-Bauer and Chan [361], van Nimwegen et al. [2778, 2779], and Wilke [2942]. Their results show that the topology of neutral networks strongly determines the distribution of genotypes on them. Generally, the genotypes are drawn to the solutions with the highest degree of neutrality on the neutral network (Beaudoin et al. [242]).
16.5 Redundancy: Problematic and Beneficial
Definition D16.4 (Redundancy). Redundancy in the context of Global Optimization is a feature of the genotype-phenotype mapping and means that multiple genotypes map to the same phenotype, i. e., the genotype-phenotype mapping is not injective.

  \exists g_1, g_2 \in G : g_1 \neq g_2 \wedge gpm(g_1) = gpm(g_2)    (16.4)
The role of redundancy in the genome is as controversial as that of neutrality [2880]. There exist many accounts of its positive influence on the optimization process. Shipman et al. [2458, 2480], for instance, tried to mimic desirable evolutionary properties of RNA folding [1315]. They developed redundant genotype-phenotype mappings using voting (both via uniform redundancy and via a non-trivial approach), Turing machine-like binary instructions, Cellular automata, and random Boolean networks [1500]. Except for the trivial voting mechanism based on uniform redundancy, the mappings induced neutral networks which proved beneficial for exploring the problem space. Especially the last approach provided particularly good results [2458, 2480]. Possibly converse effects like epistasis (see Chapter 17) arising from the new genotype-phenotype mappings have not been considered in this study.

Redundancy can have a strong impact on the explorability of the problem space. When utilizing a one-to-one mapping, the translation of a slightly modified genotype will always result in a different phenotype. If there exists a many-to-one mapping between genotypes and phenotypes, the search operations can create offspring genotypes different from the parent which still translate to the same phenotype. The optimizer may now walk along a path through this neutral network. If many genotypes along this path can be modified to different offspring, many new candidate solutions can be reached [2480]. One example for beneficial redundancy is the extradimensional bypass idea discussed in Section 24.15.1.

The experiments of Shipman et al. [2479, 2481] additionally indicate that neutrality in the genotype-phenotype mapping can have positive effects. In the Cartesian Genetic Programming method, neutrality is explicitly introduced in order to increase the evolvability (see ?? on page ??) [2792, 3042].
Yet, Rothlauf [2338] and Shackleton et al. [2458] show that simple uniform redundancy is not necessarily beneficial for the optimization process and may even slow it down. Ronald et al. [2324] point out that the optimization may be slowed down if the population of the individuals under investigation contains many isomorphic genotypes, i. e., genotypes which encode the same phenotype. If this isomorphism can be identified and removed, a significant speedup may be gained [2324]. We can thus conclude that there is no use in introducing encodings which, for instance, represent each phenotypic bit with two bits in the genotype where 00 and 01 map to 0 and 10 and 11 map to 1. Another example for this issue is given in Fig. 24.1.b on page 220.
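The two-bits-per-bit encoding just criticized can be written down directly. The sketch below (our own illustration) implements this uniformly redundant gpm and shows that it doubles the genotype length, and hence the neighborhood the optimizer must search, without making any new phenotype reachable.

```python
def gpm(genotype):
    """Uniformly redundant mapping: 00 and 01 -> 0, 10 and 11 -> 1,
    i.e. only the first bit of each pair matters."""
    return genotype[0::2]

# Four genotypes per pair encode only two phenotypes:
assert gpm('00') == gpm('01') == '0'
assert gpm('10') == gpm('11') == '1'

# A mutation of the second bit of any pair is always neutral ...
assert gpm('0010') == gpm('0011') == '01'
# ... so half of all one-bit mutations cannot change the objective value,
# while the search space has grown from 2**n to 4**n points.
```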
16.6 Summary
Different from ruggedness, which is always bad for optimization algorithms, neutrality has aspects that may further as well as hinder the process of finding good solutions. Generally, we can state that degrees of neutrality \nu very close to 1 degenerate optimization processes to random walks. Some forms of neutral networks accompanied by low (nonzero) values of \nu can improve the evolvability and hence, increase the chance of finding good solutions.

Adverse forms of neutrality are often caused by bad design of the search space or genotype-phenotype mapping. Uniform redundancy in the genome should be avoided where possible and the amount of neutrality in the search space should generally be limited.
16.7 Needle-In-A-Haystack
Besides fully deceptive problems, one of the worst cases of fitness landscapes is the needle-in-a-haystack (NIAH) problem [? ] sketched in Fig. 11.1.g on page 140, where the optimum occurs as an isolated [1078] spike in a plane. In other words, small instances of extreme ruggedness combine with a general lack of information in the fitness landscape. Such problems are extremely hard to solve and the optimization processes often converge prematurely or take very long to find the global optimum. An example for such fitness landscapes is the all-or-nothing property often inherent to Genetic Programming of algorithms [2729], as discussed in ?? on page ??.
Tasks T16
57. Describe the possible positive and negative aspects of neutrality in the fitness landscape.
[10 points]
58. Construct two different objective functions f_1, f_2 : R → R⁺ which have neutral parts in the fitness landscape. Like the three functions from Task 54 on page 165 and the two from Task 56 on page 169, they should have the global minimum (optimum) 0 with objective value 0. Try to solve these functions with a trivial implementation of Hill Climbing (see Algorithm 26.1 on page 230) by applying it 30 times to both of them. Note your observations regarding the problem hardness and compare them to your results from Task 54.
[7 points]
Chapter 17
Epistasis, Pleiotropy, and Separability
17.1 Introduction
17.1.1 Epistasis and Pleiotropy
In biology, epistasis¹ is defined as a form of interaction between different genes [2170]. The term was coined by Bateson [231] and originally meant that one gene suppresses the phenotypical expression of another gene. In the context of statistical genetics, epistasis was initially called "epistacy" by Fisher [928]. According to Lush [1799], the interaction between genes is epistatic if the effect on the fitness of altering one gene depends on the allelic state of other genes.

Definition D17.1 (Epistasis). In optimization, epistasis is the dependency of the contribution of one gene to the value of the objective functions on the allelic state of other genes [79, 692, 2007].

We speak of minimal epistasis when every gene is independent of every other gene. Then, the optimization process equals finding the best value for each gene and can most efficiently be carried out by a simple greedy search (see Section 46.3.1) [692]. A problem is maximally epistatic when no proper subset of genes is independent of any other gene [2007, 2562]. The effects of epistasis, as we understand them, come close to another biological phenomenon:
Definition D17.2 (Pleiotropy). Pleiotropy² denotes that a single gene is responsible for multiple phenotypical traits [1288, 2944].

Like epistasis, pleiotropy can sometimes lead to unexpected improvements but often is harmful for the evolutionary system [77]. Both phenomena may easily intertwine. If one gene epistatically influences, for instance, two others which are responsible for distinct phenotypical traits, it has both epistatic and pleiotropic effects. We will therefore consider pleiotropy and epistasis together and, when discussing the effects of the latter, also implicitly refer to the former.
Example E17.1 (Pleiotropy, Epistasis, and a Dinosaur).
In Figure 17.1 we illustrate a fictional dinosaur along with a snippet of its fictional genome consisting of four genes. Gene 1 influences the color of the creature and is neither pleiotropic nor has any epistatic relations. Gene 2, however, exhibits pleiotropy since it determines the length of the hind legs and forelegs. At the same time, it is epistatically connected with gene 3 which, too, influences the length of the forelegs, maybe preventing them from looking exactly like the hind legs. The fourth gene is again pleiotropic by determining the shape of the bone armor on the top of the dinosaur's skull.

Figure 17.1: Pleiotropy and epistasis in a dinosaur's genome (gene 1: color; gene 2: leg length; gene 3: foreleg length; gene 4: armor shape).

¹ http://en.wikipedia.org/wiki/Epistasis [accessed 2008-05-31]
² http://en.wikipedia.org/wiki/Pleiotropy [accessed 2008-03-02]
From the point of view of Design of Experiments (DOE), epistasis can be considered as the higher-order or non-main effects in a model predicting fitness values based on interactions of the genes of the genotypes [2285].
17.1.2 Separability
In the area of optimization over continuous problem spaces (such as X \subseteq R^n), epistasis and pleiotropy are closely related to the term separability. Separability is a feature of the objective functions of an optimization problem [2668]. A function of n variables is separable if it can be rewritten as a sum of n functions of just one variable [1162, 2099]. Hence, the decision variables involved in the problem can be optimized independently of each other, i. e., are minimally epistatic, and the problem is said to be separable according to the formal definition given by Auger et al. [147, 1181] as follows:

Definition D17.3 (Separability). A function f : X^n → R (where usually X \subseteq R) is separable if and only if

  \arg\min_{(x_1, .., x_n)} f(x_1, .., x_n) = \left(\arg\min_{x_1} f(x_1, ..), \;..,\; \arg\min_{x_n} f(.., x_n)\right)    (17.1)

Otherwise, f(x) is called a nonseparable function.
If a function f(x) is separable, the parameters x_1, .., x_n forming the candidate solution x ∈ X are called independent. A separable problem is decomposable, as already discussed under the topic of epistasis.
Definition D17.4 (Nonseparability). A function f : X^n → R (where usually X ⊆ R) is m-nonseparable if at most m of its parameters x_i are not independent. A nonseparable function f(x) is called fully-nonseparable if any two of its parameters x_i are not independent.
The higher the degree of nonseparability, the more difficult a function usually becomes for optimization [147, 1181, 2635]. Often, the term nonseparable is used in the sense of fully-nonseparable. In other words, between separable and fully nonseparable problems, a variety of partially separable problems exists. Definition D17.5 is, hence, an alternative to Definition D17.4.
Definition D17.5 (Partial Separability). If an objective function f : X^n → R^+ can be expressed as

    f(x) = Σ_{i=1}^{M} f_i( x_{I_{i,1}}, .., x_{I_{i,|I_i|}} )    (17.2)

where each of the M component functions f_i : X^{|I_i|} → R depends on a subset I_i ⊆ {1..n} of the n decision variables, then f is called partially separable [612, 613, 1136, 1137].
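The criterion of Definition D17.3 can be checked experimentally with a small grid search. The sketch below (the two functions and all names are illustrative, not taken from this book) minimizes each decision variable on its own while the other one is held at an arbitrary fixed value; for a separable function this reproduces the joint optimum, for a nonseparable one it generally does not.

```python
import itertools

# Candidate values for each decision variable (a coarse grid over [-3, 3]).
grid = [i / 10 for i in range(-30, 31)]

def joint_argmin(f):
    # Exhaustive minimization over the full two-dimensional grid.
    return min(itertools.product(grid, grid), key=lambda p: f(*p))

def coordinatewise_argmin(f, fixed=0.0):
    # Minimize each variable separately, the other one held at an arbitrary
    # value; this is legitimate only if f is separable.
    x1 = min(grid, key=lambda v: f(v, fixed))
    x2 = min(grid, key=lambda v: f(fixed, v))
    return (x1, x2)

separable = lambda x1, x2: (x1 - 1.0) ** 2 + (x2 + 2.0) ** 2
nonseparable = lambda x1, x2: (x1 - x2) ** 2 + (x1 - 1.0) ** 2

print(joint_argmin(separable) == coordinatewise_argmin(separable))        # True
print(joint_argmin(nonseparable) == coordinatewise_argmin(nonseparable))  # False
```

For the nonseparable function, the per-coordinate search lands at (0.5, 0.0) instead of the joint optimum (1.0, 1.0): the best value for x_1 depends on where x_2 happens to be, which is exactly the epistatic coupling discussed above.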
17.1.3 Examples
Examples of problems with a high degree of epistasis are Kauffman's NK fitness landscape [1499, 1501] (Section 50.2.1), the p-Spin model [86] (Section 50.2.2), and the tunable model by Weise et al. [2910] (Section 50.2.7). Nonseparable objective functions often used for benchmarking are, for instance, the Griewank function [1106, 2934, 3026] (see Section 50.3.1.11) and Rosenbrock's function [3026] (see Section 50.3.1.5). Partially separable test problems are given in, for example, [2670, 2708].
17.2 The Problem
As sketched in Figure 17.2, epistasis has a strong influence on many of the previously discussed problematic features. If one gene can turn off or affect the expression of other genes, a modification of this gene will lead to a large change in the features of the phenotype. Hence, the causality will be weakened and ruggedness ensues in the fitness landscape. It also becomes harder to define search operations with exploitive character. Moreover, subsequent changes to the deactivated genes may have no influence on the phenotype at all, which would then increase the degree of neutrality in the search space. Bowers [375] found that representations and genotypes with low pleiotropy lead to better and more robust solutions. Epistasis and pleiotropy are mainly aspects of the way in which the objective functions f, the genome G, and the genotype-phenotype mapping are defined and interact. They should be avoided where possible.
Figure 17.2: The influence of epistasis on the fitness landscape: high epistasis causes ruggedness, multi-modality, weak causality, neutrality, and needle-in-a-haystack situations.
Example E17.2 (Epistasis in Binary Genomes).
Consider the following three objective functions f_1 to f_3, all subject to maximization. All three are based on a problem space consisting of all bit strings of length five (X_B = B^5 = {0, 1}^5).
    f_1(x) = Σ_{i=0}^{4} x[i]    (17.3)

    f_2(x) = (x[0] ∧ x[1]) + (x[2] ∧ x[3]) + x[4]    (17.4)

    f_3(x) = Π_{i=0}^{4} x[i]    (17.5)
Furthermore, let the search space be identical to the problem space, i.e., G = X. In the optimization problem defined by X_B and f_1, the elements of the candidate solutions are totally independent and can be optimized separately. Thus, f_1 is minimally epistatic. In f_2, only the first two (x[0], x[1]) and second two (x[2], x[3]) genes of each genotype influence each other. Both groups are independent from each other and from x[4]. Hence, the problem has some epistasis. f_3 is a needle-in-a-haystack problem where all five genes directly influence each other. A single zero in a candidate solution deactivates all other genes and the optimum, the string containing all ones, is the only point in the search space with a non-zero objective value.
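The three functions can be compared by exhaustive enumeration. Note that the operators in Equations 17.3 to 17.5 did not survive typesetting here, so the sketch below assumes the usual reconstruction: a plain bit sum for f_1, logical AND within the two pairs for f_2, and the conjunction of all five bits for f_3.

```python
from itertools import product

f1 = lambda x: sum(x)                                  # no epistasis: bits independent
f2 = lambda x: (x[0] & x[1]) + (x[2] & x[3]) + x[4]    # epistasis within two pairs
f3 = lambda x: x[0] & x[1] & x[2] & x[3] & x[4]        # needle in a haystack

space = list(product([0, 1], repeat=5))   # all 32 bit strings of length five

# f3 rewards exactly one point in the search space: the all-ones string.
print(sum(1 for x in space if f3(x) > 0))          # 1
# A single zero "deactivates" all other genes under f3:
print(f1((1, 1, 1, 1, 0)), f3((1, 1, 1, 1, 0)))    # 4 0
```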
Example E17.3 (Epistasis in Genetic Programming).
Epistasis can be observed especially often in Genetic Programming. In this area of Evolutionary Computation, the problem spaces are sets of all programs in (usually) domain-specific languages and we try to find candidate solutions which solve a given problem algorithmically.
1 int gcd(int a, int b) { //skeleton code
2 int t; //skeleton code
3 while(b!=0) { // ------------------
4 t = b; //
5 b = a % b; // Evolved Code
6 a = t; //
7 } // ------------------
8 return a; //skeleton code
9 } //skeleton code
Listing 17.1: A program computing the greatest common divisor.
Let us assume that we wish to evolve a program computing the greatest common divisor of two natural numbers a, b ∈ N_1 in a C-style representation (X = C). Listing 17.1 would be a program which fulfills this task. As search space G we could, for instance, use all trees where the inner nodes are chosen from while, =, !=, ==, %, +, -, each accepting two child nodes, and the leaf nodes are either a, b, t, 1, or 0. The evolved trees could be pasted by the genotype-phenotype mapping into some skeleton code as indicated in Listing 17.1.
The genes of this representation are the nodes of the trees. A unary search operator (mutation) could exchange a node inside a genotype with a randomly created one, and a binary search operator (crossover) could swap subtrees between two genotypes. Such a mutation operator could, for instance, replace the t = b in Line 4 of Listing 17.1 with a t = a. Although only a single gene of the program has been modified, its behavior has now changed completely. Without stating any specific objective functions, it should be clear that the new program will have very different (functional) objective values. Changing the instruction has epistatic impact on the behavior of the rest of the program: the loop declared in Line 3, for instance, may now become an endless loop for many inputs a and b.
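The effect of this single-node mutation can be reproduced directly. The sketch below transliterates Listing 17.1 to Python and applies the described mutation; an iteration cap is added as a safety net, since a mutated variant is in general no longer guaranteed to terminate.

```python
def gcd(a, b):
    # The original program of Listing 17.1.
    while b != 0:
        t = b
        b = a % b
        a = t
    return a

def gcd_mutated(a, b, max_iter=1000):
    # Mutation from the text: the single gene "t = b" became "t = a".
    i = 0
    while b != 0 and i < max_iter:
        t = a
        b = a % b
        a = t
        i += 1
    return a

print(gcd(6, 4), gcd_mutated(6, 4))    # 2 6
```

Because a is copied onto itself in every iteration, the mutant ends up returning its first argument unchanged for every input: one modified node out of roughly a dozen corrupts the behavior of the whole program.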
Example E17.4 (Epistasis in Combinatorial Optimization: Bin Packing).
In Example E1.1 on page 25 we introduced bin packing as a typical combinatorial optimization problem. Later, in Task 67 on page 238, we propose a solution approach to a variant of this problem. As discussed in this task and sketched in Figure 17.3, we suggest representing a solution to bin packing as a permutation of the n objects to be packed into the bins and to set G = X = Π(n). In other words, a candidate solution describes the order in which the objects have to be placed into the bins and is processed from left to right. Whenever the current object does not fit into the current bin anymore, a new bin is opened. An implementation of this idea is given in Section 57.1.2.1 and an example output of a test program applying Hill Climbing, Simulated Annealing, a random walk, and the random sampling algorithm is listed in Listing 57.16 on page 903.
As can be seen in that listing of results, neither Hill Climbing nor Simulated Annealing can come very close to the known optimal solutions. One reason for this performance is that the permutation representation for bin packing exhibits some epistasis. In Figure 17.3, we sketch how the exchange of two genes near the beginning of a candidate solution x_1 can lead to a new solution x_2 with a very different behavior: Due to the sequential packing of objects into bins according to the permutation described by the candidate solutions, object groupings way after the actually swapped objects may also be broken down. In other words, the genes at the beginning of the representation have a strong influence on the behavior at its end. Interestingly (and obviously), this influence is not mutual: the genes at the end of the permutation have no influence on the genes preceding them.
Suggestions for a better encoding, an extension of the permutation encoding with so-called separators marking the beginning and ending of a bin, have been made by Jones and Beltramo [1460] and Stawowy [2604]. Khuri et al. [1534] suggest a method which works completely without permutations: Each gene of a string of length n stands for one of the n objects and its (numerical) allele denotes the bin into which the object belongs. Though this solution allows for infeasible solutions, it does not suffer from the epistasis phenomenon discussed in this example.
Figure 17.3: Epistasis in the permutation representation for bin packing. 12 objects (with identifiers 0 to 11) and sizes a = (2, 3, 3, 3, 4, 5, 5, 7, 8, 9, 11, 12) should be packed into bins of size b = 14. The first candidate solution x_1 = (10, 11, 6, 0, 2, 9, 4, 8, 5, 7, 1, 3) requires 6 bins. Swapping the first and fourth element of the permutation yields x_2 = (0, 11, 6, 10, 2, 9, 4, 8, 5, 7, 1, 3), a slightly modified version of x_1 which requires 7 bins and has a very different "behavior".
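The sequential decoding and its epistatic cascade can be sketched as follows. The instance below is a small hypothetical one (not the data of Figure 17.3), chosen so that a swap of two early genes visibly regroups objects that occur much later in the permutation.

```python
def decode(perm, size, capacity):
    # Sequential decoding as described in the text: place objects in the
    # order given by the permutation; open a new bin whenever the current
    # object no longer fits into the current bin.
    bins, current, load = [], [], 0
    for obj in perm:
        if load + size[obj] > capacity:
            bins.append(current)
            current, load = [], 0
        current.append(obj)
        load += size[obj]
    bins.append(current)
    return bins

# Hypothetical instance: 7 objects, bins of capacity 10.
size = [5, 5, 4, 6, 3, 7, 2]
x1 = (0, 1, 2, 3, 4, 5, 6)
x2 = (3, 1, 2, 0, 4, 5, 6)        # swap of the first and fourth gene

print(decode(x1, size, 10))   # [[0, 1], [2, 3], [4, 5], [6]]
print(decode(x2, size, 10))   # [[3], [1, 2], [0, 4], [5, 6]]
```

After swapping only the first and fourth gene, the untouched objects at the end of the permutation are regrouped as well: object 5 moves from the third to the fourth bin, and the companions of objects 4 and 6 change.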
Generally, epistasis and conflicting objectives in multi-objective optimization should be distinguished from each other. Epistasis as well as pleiotropy is a property of the influence of the editable elements (the genes) of the genotypes on the phenotypes. Objective functions can conflict without the involvement of any of these phenomena. We can, for example, define two objective functions f_1(x) = x and f_2(x) = −x which are clearly contradicting, regardless of whether they both are subject to maximization or minimization. Nevertheless, if the candidate solutions x and the genotypes are simple real numbers and the genotype-phenotype mapping is an identity mapping, neither epistatic nor pleiotropic effects can occur.
Naudts and Verschoren [2008] have shown for the special case of length-two binary string genomes that deceptiveness does not occur in situations with low epistasis and also that objective functions with high epistasis are not necessarily deceptive. [239] state that deceptiveness is a special case of epistatic interaction (while we would rather say that it is a phenomenon caused by epistasis). Another discussion about different shapes of fitness landscapes under the influence of epistasis is given by Beerenwinkel et al. [248].
17.3 Countermeasures
We have shown that epistasis is a root cause for multiple problematic features of optimization tasks. General countermeasures against epistasis can be divided into two groups. The symptoms of epistasis can be mitigated with the same methods which increase the chance of finding good solutions in the presence of ruggedness or neutrality.
17.3.1 Choice of the Representation
Epistasis itself is a feature which results from the choice of the search space structure, the
search operations, the genotype-phenotype mapping, and the structure of the problem space. Avoiding epistatic effects should be a major concern during their design.
Example E17.5 (Bad Representation Design: Algorithm 1.1).
Algorithm 1.1 on page 40 is a good example for bad search space design. There, the assignment of n objects to k bins in order to solve the bin packing problem is represented as a permutation of the first n natural numbers. The genotype-phenotype mapping starts to take numbers from the start of the genotype sequentially and places the corresponding objects into the first bin. If the object identified by the current gene does not fit into the bin anymore, the GPM continues with the next bin.
Representing an assignment of objects to bins like this has one drawback: if the genes at lower loci, i.e., close to the beginning of the genotype, are modified, this may change many subsequent assignments. If a large object is swapped from the back to an early bin, the other objects may be shifted backwards and many of them will land in different bins than before. The meaning of the gene at locus i in the genotype depends on the allelic states of all genes at loci smaller than i. Hence, this representation is quite epistatic.
Interestingly, using the representation suggested in Example E1.1 on page 25 and in Equation 1.2 directly as search space would not only save us the work of implementing a genotype-phenotype mapping, it would also exhibit significantly lower epistasis.
Choosing the problem space and the genotype-phenotype mapping efficiently can lead to a great improvement in the quality of the solutions produced by the optimization process [2888, 2905]. Introducing specialized search operations can achieve the same [2478].
We outline some general rules for representation design in Chapter 24. Considering these suggestions may further improve the performance of an optimizer.
17.3.2 Parameter Tweaking
Using larger populations and favoring explorative search operations seems to be helpful in epistatic problems since this should increase the diversity. Shinkai et al. [2478] showed that applying (a) higher selection pressure, i.e., increasing the chance to pick the best candidate solutions for further investigation instead of the weaker ones, and (b) extinctive selection, i.e., only working with the newest produced set of candidate solutions while discarding the ones from which they were derived, can also increase the reliability of an optimizer to find good solutions. These two concepts are slightly contradicting, so careful adjustment of the algorithm settings appears to be vital in epistatic environments. Shinkai et al. [2478] observed that higher selection pressure also leads to earlier convergence, a fact that we pointed out in Chapter 13.
17.3.3 Linkage Learning
According to Winter et al. [2960], linkage is the tendency for alleles of different genes to be passed together from one generation to the next in genetics. This usually indicates that these genes are closely located in the same chromosome. In the context of Evolutionary Algorithms, this notation is not useful since identifying spatially close elements inside the genotypes g ∈ G is trivial. Instead, we are interested in alleles of different genes which have a joint effect on the fitness [1980, 1981].
Identifying these linked genes, i.e., learning their epistatic interaction, is very helpful for the optimization process. Such knowledge can be used to protect building blocks (see Section 29.5.5 for information on the Building Block Hypothesis) from being destroyed by the search operations (such as crossover in Genetic Algorithms), for instance. Finding approaches for linkage learning has become an especially popular discipline
in the area of Evolutionary Algorithms with binary [552, 1196, 1981] and real [754] genomes. Two important methods from this area are the messy GA (mGA, see Section 29.6) by Goldberg et al. [1083] and the Bayesian Optimization Algorithm (BOA) [483, 2151]. Module acquisition [108] may be considered as such an effort.
17.3.4 Cooperative Coevolution
Variable Interaction Learning (VIL) techniques can be used to discover the relations between decision variables, as introduced by Chen et al. [551] and discussed in the context of cooperative coevolution in Section 21.2.3 on page 204.
Tasks T17
59. What is the difference between epistasis and pleiotropy?
[5 points]
60. Why does epistasis cause ruggedness in the fitness landscape?
[5 points]
61. Why can epistasis cause neutrality in the fitness landscape?
[5 points]
62. Why does pleiotropy cause ruggedness in the fitness landscape?
[5 points]
63. How are epistasis and separability related with each other?
[5 points]
Chapter 18
Noise and Robustness
18.1 Introduction: Noise
In the context of optimization, three types of noise can be distinguished. The first form is (i) noise in the training data used as basis for learning or the objective functions [3019]. In many applications of machine learning or optimization where a model for a given system is to be learned, data samples including the input of the system and its measured response are used for training.
Example E18.1 (Noise in the Training Data).
Some typical examples of situations where training data is the basis for the objective function evaluation are
1. the usage of Global Optimization for building classifiers (for example for predicting buying behavior using data gathered in a customer survey for training),
2. the usage of simulations for determining the objective values in Genetic Programming (here, the simulated scenarios correspond to training cases), and
3. the fitting of mathematical functions to (x, y)-data samples (with artificial neural networks or symbolic regression, for instance).
Since no measurement device is 100% accurate and there are always random errors, noise is present in such optimization problems.
Besides inexactnesses and fluctuations in the input data of the optimization process, perturbations are also likely to occur during the application of its results. This category subsumes the other two types of noise: perturbations that may arise from (ii) inaccuracies in the process of realizing the solutions and (iii) environmentally induced perturbations during the applications of the products.
Example E18.2 (Creating the perfect Tire).
This issue can be illustrated by using the process of developing the perfect tire for a car as an example. As input for the optimizer, all sorts of material coefficients and geometric constants measured from all known types of wheels and rubber could be available. Since these constants have been measured or calculated from measurements, they include a certain degree of noise and imprecision (i).
The result of the optimization process will be the best tire construction plan discovered during its course and it will likely incorporate different materials and structures. We would hope that the tires created according to the plan will not fall apart if, accidentally, an extra 0.0001% of a specific rubber component is used (ii). During the optimization process, the behavior of many construction plans will be simulated in order to find out about their utility. When actually manufactured, the tires should not behave unexpectedly when used in scenarios different from those simulated (iii) and should instead be applicable in all driving situations likely to occur.
The effects of noise in optimization have been studied by various researchers; Miller and Goldberg [1897, 1898], Lee and Wong [1703], and Gurin and Rastrigin [1155] are some of them. Many Global Optimization algorithms and theoretical results have been proposed which can deal with noise. Some of them are, for instance, specialized
1. Genetic Algorithms [932, 1544, 2389, 2390, 2733, 2734],
2. Evolution Strategies [168, 294, 1172], and
3. Particle Swarm Optimization [1175, 2119] approaches.
18.2 The Problem: Need for Robustness
The goal of Global Optimization is to find the global optima of the objective functions. While this is fully true from a theoretical point of view, it may not suffice in practice. Optimization problems are normally used to find good parameters or designs for components or plans to be put into action by human beings or machines. As we have already pointed out, there will always be noise and perturbations in practical realizations of the results of optimization. There is no process in the world that is 100% accurate and the optimized parameters, designs, and plans have to tolerate a certain degree of imprecision.
Definition D18.1 (Robustness). A system in engineering or biology is robust if it is able to function properly in the face of genetic or environmental perturbations [1288, 2833].
Therefore, a local optimum (or even a non-optimal element) for which slight disturbances only lead to gentle performance degenerations is usually favored over a global optimum located in a highly rugged area of the fitness landscape [395]. In other words, local optima in regions of the fitness landscape with strong causality are sometimes better than global optima with weak causality. Of course, the level of this acceptability is application-dependent. Figure 18.1 illustrates the issue of local optima which are robust vs. global optima which are not. Table 18.1 gives some examples of problems which require robust optimization.
Example E18.3 (Critical Robustness).
When optimizing the control parameters of an airplane or a nuclear power plant, the global optimum is certainly not used if a slight perturbation can have hazardous effects on the system [2734].
Example E18.4 (Robustness in Manufacturing).
Wiesmann et al. [2936, 2937] bring up the topic of manufacturing tolerances in multilayer optical coatings. It is no use to find optimal configurations if they only perform optimally when manufactured to a precision which is either impossible or too hard to achieve on a constant basis.
Example E18.5 (Dynamic Robustness in Road-Salting).
The optimization of the decision process on which roads should be salted as a precaution in areas with marginal winter climate is an example of the need for dynamic robustness. The global optimum of this problem is likely to depend on the daily (or even current) weather forecast and may therefore be constantly changing. Handa et al. [1177] point out that it is practically infeasible to let road workers follow a constantly changing plan and circumvent this problem by incorporating multiple road temperature settings in the objective function evaluation.
Example E18.6 (Robustness in Nature).
Tsutsui et al. [2733, 2734] found a nice analogy in nature: The phenotypic characteristics of an individual are described by its genetic code. During the interpretation of this code, perturbations like abnormal temperature, nutritional imbalances, injuries, illnesses and so on may occur. If the phenotypic features emerging under these influences have low fitness, the organism cannot survive and procreate. Thus, even a species with good genetic material will die out if its phenotypic features become too sensitive to perturbations. Species robust against them, on the other hand, will survive and evolve.
Figure 18.1: A robust local optimum vs. an unstable global optimum.
Table 18.1: Applications and Examples of robust methods.

Area                                  References
Combinatorial Problems                [644, 1177]
Distributed Algorithms and Systems    [52, 1401, 2898]
Engineering                           [52, 644, 1134, 1135, 2936, 2937]
Graph Theory                          [52, 63, 67, 644, 1401]
Image Synthesis                       [375]
Logistics                             [1177]
Mathematics                           [2898]
Motor Control                         [489]
Multi-Agent Systems                   [2925]
Software                              [1401]
Wireless Communication                [63, 67, 644]
18.3 Countermeasures
For the special case where the phenome is a real vector space (X ⊆ R^n), several approaches for dealing with the need for robustness have been developed. Inspired by Taguchi methods^1 [2647], possible disturbances are represented by a vector δ = (δ_1, δ_2, .., δ_n)^T, δ_i ∈ R in the method suggested by Greiner [1134, 1135]. If the distributions and influences of the δ_i are known, the objective function f(x) : x ∈ X can be rewritten as f̃(x, δ) [2937]. In the special case where δ is normally distributed, this can be simplified to f̃((x_1 + δ_1, x_2 + δ_2, .., x_n + δ_n)^T). It would then make sense to sample the probability distribution of δ a number of t times and to use the mean values of f̃(x, δ) for each objective function evaluation during the optimization process. In cases where the optimal value y^* of the objective function f is known, Equation 18.1 can be minimized. This approach is also used in the work of Wiesmann et al. [2936, 2937] and basically turns the optimization algorithm into something like a maximum likelihood estimator (see Section 53.6.2 and Equation 53.257 on page 695).

    f′(x) = (1/t) Σ_{i=1}^{t} ( y^* − f̃(x, δ_i) )^2    (18.1)

This method corresponds to using multiple, different training scenarios during the objective function evaluation in situations where X ⊄ R^n. By adding random noise and artificial perturbations to the training cases, the chance of obtaining robust solutions which are stable when applied or realized under noisy conditions can be increased.

1 http://en.wikipedia.org/wiki/Taguchi_methods [accessed 2008-07-19]
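A minimal sketch of this sampling scheme follows; the sphere test function, the perturbation scale, and the sample count are illustrative assumptions, not values from the text.

```python
import random

def robust_objective(f, x, y_star, t=50, sigma=0.01, seed=42):
    # Equation 18.1: average squared deviation of the perturbed objective
    # from the known optimal value y_star over t perturbation samples.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(t):
        xp = [xi + rng.gauss(0.0, sigma) for xi in x]
        total += (y_star - f(xp)) ** 2
    return total / t

sphere = lambda v: sum(c * c for c in v)   # optimum: f(0, 0) = 0

# A candidate sitting at the optimum scores far better than one far away,
# and the score now also reflects sensitivity to the disturbances delta_i.
print(robust_objective(sphere, [0.0, 0.0], 0.0) <
      robust_objective(sphere, [1.0, 1.0], 0.0))   # True
```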
Chapter 19
Overfitting and Oversimplification
In all scenarios where optimizers evaluate some of the objective values of the candidate solutions by using training data, two additional phenomena with negative influence can be observed: overfitting and (what we call) oversimplification. This is especially true if the solutions represent functional dependencies inside the training data S_t.
Example E19.1 (Function Approximation).
Let S_t ⊆ S be the training data sample from a space S of possible data. Let us assume in the following that this space is the cross product S = A × B of a space A of input values and a space B containing the values of the variables whose dependencies on the input variables from A should be learned. The structure of each single element s[i] of the training data sample S_t = (s[0], s[1], .., s[n−1]) is s[i] = (a[i], b[i]) : a[i] ∈ A, b[i] ∈ B. The candidate solutions would actually be functions x : A → B and the goal is to find the optimal function x^* which exactly represents the (previously unknown) dependencies between A and B as they occur in S_t, i.e., a function with x^*(a[i]) ≈ b[i] ∀i ∈ 0..n−1.
Artificial neural networks, (normal) regression, and symbolic regression are the most common techniques to solve such problems, but other techniques such as Particle Swarm Optimization, Evolution Strategies, and Differential Evolution can be used too. Then, the candidate solutions may, for instance, be polynomials x(a) = x_0 + x_1 a^1 + x_2 a^2 + . . . or otherwise structured functions and only the coefficients x_0, x_1, x_2, . . . would be optimized. When trying to find good candidate solutions x : A → B, possible objective functions (subject to minimization) are

    f_E(x) = Σ_{i=0}^{n−1} |x(a[i]) − b[i]|    or    f_SE(x) = Σ_{i=0}^{n−1} ( x(a[i]) − b[i] )^2
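Both error measures are easy to state in code; the toy samples and the two hypothetical candidate solutions below are illustrative.

```python
# Error objectives f_E (absolute) and f_SE (squared) from Example E19.1,
# evaluated on a small set of (a[i], b[i]) training pairs.
samples = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]

def f_E(x):
    return sum(abs(x(a) - b) for a, b in samples)

def f_SE(x):
    return sum((x(a) - b) ** 2 for a, b in samples)

perfect = lambda a: 2.0 * a + 1.0     # matches the samples exactly
biased = lambda a: 2.0 * a            # off by one everywhere

print(f_E(perfect), f_SE(perfect))    # 0.0 0.0
print(f_E(biased), f_SE(biased))      # 3.0 3.0
```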
19.1 Overfitting
19.1.1 The Problem
Overfitting^1 is the emergence of an overly complicated candidate solution in an optimization process resulting from the effort to provide the best results for as much of the available training data as possible [792, 1040, 2397, 2531].
Definition D19.1 (Overfitting). A candidate solution x̂ ∈ X optimized based on a set of training data S_t is considered to be overfitted if a less complicated, alternative solution x^* ∈ X exists which has a smaller (or equal) error for the set of all possible (maybe even infinitely many), available, or (theoretically) producible data samples from S.
1
http://en.wikipedia.org/wiki/Overfitting [accessed 2007-07-03]
Hence, overfitting also stands for the creation of solutions which are not robust. Notice that x^* may, however, have a larger error in the training data S_t than x̂. The phenomenon of overfitting is best known and can often be encountered in the field of artificial neural networks or in curve fitting^2 [1697, 1742, 2334, 2396, 2684] as described in Example E19.1.
Example E19.2 (Example E19.1 Cont.).
There exists exactly one polynomial^3 of degree n−1 that fits such training data and goes through all of its points.^4 Hence, when only polynomial regression is performed, there is exactly one perfectly fitting function of minimal degree. Nevertheless, there will also be an infinite number of polynomials with a higher degree than n−1 that also match the sample data perfectly. Such results would be considered as overfitted.
In Figure 19.1, we have sketched this problem. The function x^*(a) = b for A = B = R shown in Fig. 19.1.b has been sampled three times, as sketched in Fig. 19.1.a. There exists no other polynomial of a degree of two or less that fits these samples than x^*. Optimizers, however, could also find overfitted polynomials of a higher degree such as x̂ in Fig. 19.1.c which also match the data. Here, x̂ plays the role of the overly complicated solution which will perform as well as the simpler model x^* when tested with the training sets only, but will fail to deliver good results for all other input data.
Fig. 19.1.a: Three sample points of a linear function. Fig. 19.1.b: The optimal solution x^*. Fig. 19.1.c: The overly complex variant x̂.
Figure 19.1: Overfitting due to complexity.
A very common cause for overfitting is noise in the sample data. As we have already pointed out, there exists no measurement device for physical processes which delivers perfect results without error. Surveys that represent the opinions of people on a certain topic or randomized simulations will exhibit variations from the true interdependencies of the observed entities, too. Hence, data samples based on measurements will always contain some noise.
Example E19.3 (Overfitting caused by Noise).
In Figure 19.2 we have sketched how such noise may lead to overfitted results. Fig. 19.2.a illustrates a simple physical process obeying some quadratic equation. This process has been measured using some technical equipment and the n = 100 = |S_t| noisy samples depicted in Fig. 19.2.b have been obtained. Fig. 19.2.c shows a function x̂ resulting from optimization that fits the data perfectly. It could, for instance, be a polynomial of degree 99 which goes right through all the points and thus, has an error of zero. Hence, both f_E(x̂) and f_SE(x̂) from Example E19.1 would be zero. Although being a perfect match to the measurements, this complicated solution does not accurately represent the physical law that produced the sample data and will not deliver precise results for new, different inputs.
2 We will discuss overfitting in conjunction with Genetic Programming-based symbolic regression in Section 49.1 on page 531.
3 http://en.wikipedia.org/wiki/Polynomial [accessed 2007-07-03]
4 http://en.wikipedia.org/wiki/Polynomial_interpolation [accessed 2008-03-01]
Fig. 19.2.a: The original physical process. Fig. 19.2.b: The measured training data. Fig. 19.2.c: The overfitted result x̂.
Figure 19.2: Fitting noise.
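The situation of Example E19.3 can be reproduced at a smaller scale. The sketch below (degree 9 instead of 99; a hidden linear law instead of a quadratic one; all concrete numbers are illustrative) builds the unique interpolation polynomial through the noisy samples in Lagrange form. It matches every training point exactly, yet between the samples it follows the noise rather than the underlying law.

```python
import random

def interpolate(xs, ys, x):
    # Lagrange form of the unique degree-(n-1) polynomial through the samples.
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

rng = random.Random(7)
true = lambda a: 2.0 * a + 1.0                       # hidden law
xs = [i / 9 for i in range(10)]                      # 10 sample inputs
ys = [true(a) + rng.gauss(0.0, 0.2) for a in xs]     # noisy measurements

# The degree-9 interpolant reproduces every training point exactly ...
train_err = max(abs(interpolate(xs, ys, a) - b) for a, b in zip(xs, ys))
print(train_err < 1e-6)                              # True
# ... but its deviation from the true law at an unseen input is driven by
# the noise it memorized (no expected value printed; try it yourself):
print(abs(interpolate(xs, ys, 0.95) - true(0.95)))
```

A zero error on the training set thus says nothing about the error on new inputs, which is the loss of generality captured by Definition D19.2.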
From the examples we can see that the major problem that results from overfitted solutions is the loss of generality.
Definition D19.2 (Generality). A solution of an optimization process is general if it is not only valid for the sample inputs s[0], s[1], .., s[n−1] ∈ S_t which were used for training during the optimization process, but also for different inputs s ∉ S_t if such inputs exist.
19.1.2 Countermeasures
There exist multiple techniques which can be utilized in order to prevent overfitting to a certain degree. It is most efficient to apply multiple such techniques together in order to achieve the best results.
19.1.2.1 Restricting the Problem Space
A very simple approach is to restrict the problem space X in a way that only solutions up to a given maximum complexity can be found. In terms of polynomial function fitting, this could mean limiting the maximum degree of the polynomials to be tested, for example.
19.1.2.2 Complement Optimization Criteria
Furthermore, the functional objective functions (such as those defined in Example E19.1) which solely concentrate on the error of the candidate solutions should be augmented by penalty terms and non-functional criteria which favor small and simple models [792, 1510].
19.1.2.3 Using much Training Data
Large sets of sample data, although slowing down the optimization process, may improve the
generalization capabilities of the derived solutions. If arbitrarily many training datasets or
training scenarios can be generated, there are two approaches which work against overtting:
1. The rst method is to use a new set of (randomized) scenarios for each evaluation
of each candidate solution. The resulting objective values then may dier largely
even if the same individual is evaluated twice in a row, introducing incoherence and
ruggedness into the tness landscape.
194 19 OVERFITTING AND OVERSIMPLIFICATION
2. At the beginning of each iteration of the optimizer, a new set of (randomized) scenarios
is generated which is used for all individual evaluations during that iteration. This
method leads to objective values which can be compared without bias. They can be
made even more comparable if the objective functions are always normalized into some
fixed interval, say [0, 1].
In both cases it is helpful to use more than one training sample or scenario per evaluation and to set the resulting objective value to the average (or, better, the median) of the outcomes. Otherwise, the fluctuations of the objective values between the iterations will be very large, making it hard for the optimizers to follow a stable gradient for multiple steps.
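The stabilizing effect of evaluating each candidate against several scenarios can be sketched as follows; the Gaussian "scenario" and the error measure are placeholder assumptions standing in for whatever the real problem provides:

```python
import random
import statistics

random.seed(42)

def make_scenario():
    """Draw one randomized training scenario (here just a noisy target
    value around 5.0; a placeholder for a real scenario generator)."""
    return random.gauss(5.0, 1.0)

def raw_objective(candidate, scenario):
    """Error of a candidate solution on a single scenario."""
    return abs(candidate - scenario)

def evaluate(candidate, num_scenarios=30):
    """Objective for one evaluation: the median error over several
    freshly generated scenarios, far more stable than a single sample."""
    outcomes = [raw_objective(candidate, make_scenario())
                for _ in range(num_scenarios)]
    return statistics.median(outcomes)

# A candidate near the true optimum (5.0) reliably scores better than a
# poor one; the median damps the scenario-to-scenario fluctuations.
print(evaluate(5.0) < evaluate(9.0))  # → True
```

With a single scenario per evaluation, the same comparison could occasionally come out wrong, which is exactly the kind of fluctuation the text warns about.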
19.1.2.4 Limiting Runtime
Another simple method to prevent overfitting is to limit the runtime of the optimizers [2397]. It is commonly assumed that learning processes normally first find relatively general solutions which subsequently begin to overfit because the noise is learned, too.
19.1.2.5 Decay of Learning Rate
For the same reason, some algorithms allow the rate at which the candidate solutions are modified to be decreased over time. Such a decay of the learning rate makes overfitting less likely.
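A common form of such a decay (the hyperbolic schedule and its constants are illustrative, not prescribed by the text) shrinks the modification strength with the iteration index:

```python
def decayed_rate(initial_rate, iteration, decay=0.01):
    """Hyperbolic learning-rate decay: large modifications early in the
    search, ever smaller refinements later, making it harder for the
    process to chase the noise in the training data."""
    return initial_rate / (1.0 + decay * iteration)

rates = [decayed_rate(1.0, t) for t in (0, 100, 1000)]
print(rates)  # → [1.0, 0.5, 0.09090909090909091]
```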
19.1.2.6 Dividing Data into Training and Test Sets
If only one finite set of data samples is available for training/optimization, it is common practice to separate it into a set of training data S_t and a set of test cases S_c. During the optimization process, only the training data is used. The resulting solutions x are tested with the test cases afterwards. If their behavior is significantly worse when applied to S_c than when applied to S_t, they are probably overfitted.
The same approach can be used to detect when the optimization process should be stopped. The best known candidate solutions can be checked with the test cases in each iteration without influencing their objective values, which solely depend on the training data. If their performance on the test cases begins to decrease, there are no benefits in letting the optimization process continue any further.
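A minimal sketch of this practice for a one-parameter model; the 80/20 split ratio, the simple hill climber, and the step-size schedule are illustrative assumptions:

```python
import random

random.seed(7)

# One finite set of labeled samples, separated into training data S_t
# and test cases S_c (an 80/20 split, chosen here arbitrarily).
data = [(x, 3.0 * x + random.gauss(0, 0.5)) for x in range(100)]
random.shuffle(data)
split = int(0.8 * len(data))
train, test = data[:split], data[split:]

def error(slope, samples):
    """Mean squared error of the one-parameter model y = slope * x."""
    return sum((slope * x - y) ** 2 for x, y in samples) / len(samples)

# Hill-climb the slope on the training data ONLY, while monitoring the
# test error each iteration to judge when optimizing stops paying off.
slope, step = 0.0, 1.0
best_test = float("inf")
for iteration in range(200):
    candidate = slope + random.choice((-step, step))
    if error(candidate, train) < error(slope, train):
        slope = candidate          # accepted: improves on S_t
    step *= 0.98                   # shrink the modification strength
    best_test = min(best_test, error(slope, test))

print(slope)  # should land near the true slope of 3.0
```

If the test error stagnated or rose while the training error still fell, that would be the overfitting signal described above and the run could be stopped early.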
19.2 Oversimplification
19.2.1 The Problem
With the term oversimplification (or overgeneralization) we refer to the opposite of overfitting. Whereas overfitting denotes the emergence of overly complicated candidate solutions, oversimplified solutions are not complicated enough. Although they represent the training samples used during the optimization process seemingly well, they are rough overgeneralizations which fail to provide good results for cases not part of the training.
A common cause for oversimplification is sketched in Figure 19.3: The training sets only represent a fraction of the possible inputs. As this is normally the case, one should always be aware that such an incomplete coverage may fail to represent some of the dependencies and characteristics of the data, which then may lead to oversimplified solutions. Another possible reason for oversimplification is that ruggedness, deceptiveness, too much neutrality,
or high epistasis in the fitness landscape may lead to premature convergence and prevent
the optimizer from surpassing a certain quality of the candidate solutions. It then cannot
adapt them completely even if the training data perfectly represents the sampled process. A
third possible cause is that a problem space could have been chosen which does not include
the correct solution.
Example E19.4 (Oversimplified Polynomial).
Fig. 19.3.a shows a cubic function. Since it is a polynomial of degree three, four sample points are needed for its unique identification. Maybe not knowing this, only three samples have been provided in Fig. 19.3.b. By doing so, some vital characteristics of the function are lost. Fig. 19.3.c depicts a square function, the polynomial of the lowest degree that fits exactly to these samples. Although it is a perfect match, this function does not touch any other point on the original cubic curve and behaves totally differently at the lower parameter area.
However, even if we had included point P in our training data, it is still possible that the optimization process could yield Fig. 19.3.c as a result. Having training data that correctly represents the sampled system does not mean that the optimizer is able to find a correct solution with perfect fitness; the other, previously discussed problematic phenomena can prevent it from doing so. Furthermore, if it was not known that the system which was to be modeled by the optimization process can best be represented by a polynomial of the third degree, one could have limited the problem space X to polynomials of degree two and less. Then, the result would likely again be something like Fig. 19.3.c, regardless of how many training samples are used.
[Figure 19.3: Oversimplification. Fig. 19.3.a: The real system and the points describing it. Fig. 19.3.b: The sampled training data. Fig. 19.3.c: The oversimplified result x.]
19.2.2 Countermeasures
In order to counter oversimplification, its causes have to be mitigated.
19.2.2.1 Using many Training Cases
It is not always possible to have training scenarios which cover the complete input space A. If a program is synthesized with Genetic Programming which has a 32-bit integer number as input, we would need 2^32 = 4 294 967 296 samples. By using multiple scenarios for each individual evaluation, the chance of missing important aspects is decreased. These scenarios can be replaced with new, randomly created ones in each generation, in order to decrease this chance even more.
19.2.2.2 Using a larger Problem Space
The problem space, i. e., the representation of the candidate solutions, should further be chosen in a way which allows constructing a correct solution to the problem defined. Then again, releasing too many constraints on the solution structure increases the risk of overfitting and thus, careful proceeding is recommended.
Chapter 20
Dimensionality (Objective Functions)
20.1 Introduction
The most common way to define optima in multi-objective problems (MOPs, see Definition D3.6 on page 55) is to use the Pareto domination relation discussed in Section 3.3.5 on page 65. A candidate solution x1 ∈ X is said to dominate another candidate solution x2 ∈ X if and only if its corresponding vector of objective values f(x1) is (partially) less than the one of x2, i. e., ∀i ∈ 1..n : f_i(x1) ≤ f_i(x2) and ∃i ∈ 1..n : f_i(x1) < f_i(x2) (in minimization problems, see Definition D3.13). The domination rank #dom(x, X) of a candidate solution x ∈ X is the number of elements in a reference set X ⊆ X it is dominated by (Definition D3.16), a non-dominated element (relative to X) has a domination rank of #dom(x, X) = 0, and the solutions considered to be global optima are non-dominated for the whole problem space X (#dom(x, X) = 0, Definition D3.14). These are the elements we would like to find with optimization or, in the case of probabilistic metaheuristics, at least wish to approximate as closely as possible.
Definition D20.1 (Dimension of MOP). We refer to the number n = |f| of objective functions f1, f2, .., fn ∈ f of an optimization problem as its dimension (or dimensionality).
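The domination relation and the domination rank can be written down directly; this sketch assumes minimization and represents objective vectors as plain tuples:

```python
def dominates(f1, f2):
    """True iff objective vector f1 Pareto-dominates f2 (minimization):
    f1 is no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

def domination_rank(f, reference):
    """#dom: the number of elements of the reference set that dominate f.
    Non-dominated elements have rank 0."""
    return sum(1 for g in reference if dominates(g, f))

points = [(1, 4), (2, 2), (4, 1), (3, 3), (4, 4)]
ranks = [domination_rank(p, points) for p in points]
print(ranks)  # → [0, 0, 0, 1, 4]
```

The first three points form the non-dominated front: none of them is better than another in both objectives at once, while (3, 3) is dominated once (by (2, 2)) and (4, 4) by all four others.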
Many studies in the literature consider mainly bi-objective MOPs [1237, 1530]. As a consequence, many algorithms are designed to deal with that kind of problems. However, MOPs having a higher number of objective functions are common in practice; sometimes the number of objectives reaches double figures [599], leading to the so-called many-objective optimization [2228, 2915]. This term has been coined by the Operations Research community to denote problems with more than two or three objective functions [894, 3099]. It is currently a hot research topic, with the first conference on evolutionary multi-objective optimization held in 2001 [3099]. Table 20.1 gives some examples for many-objective situations.
Table 20.1: Applications and Examples of Many-Objective Optimization.
Area                      References
Combinatorial Problems    [1417, 1437]
Computer Graphics         [1002, 1577, 1893, 2060]
Data Mining               [2213]
Decision Making           [1893]
Software                  [2213]
20.2 The Problem: Many-Objective Optimization
When the dimension of the MOPs increases, the majority of candidate solutions are non-dominated. As a consequence, traditional nature-inspired algorithms have to be redesigned: According to Hughes [1303] and on basis of the results provided by Purshouse and Fleming [2231], it can be assumed that purely Pareto-based optimization approaches (maybe with added diversity preservation methods) will not perform well in problems with four or more objectives. Results from the application of such algorithms (NSGA-II [749], SPEA-2 [3100], PESA [639]) to two or three objectives cannot simply be extrapolated to higher numbers of optimization criteria [1530]. Kang et al. [1491] consider Pareto optimality as unfair and imperfect in many-objective problems. Hughes [1303] indicates that the following two hypotheses may hold:
1. An optimizer that produces an entire Pareto set in one run is better than generating the Pareto set through many single-objective optimizations using an aggregation approach such as weighted min-max (see Section 3.3.4) if the number of objective function evaluations is fixed.
2. Optimizers that use Pareto ranking based methods to sort the population will be very effective for small numbers of objectives, but not perform as effectively for many-objective optimization in comparison with methods based on other approaches.
The results of Jaszkiewicz [1437] and Ishibuchi et al. [1418] further demonstrate the degeneration of the performance of the traditional multi-objective metaheuristics in many-objective problems in comparison with single-objective approaches. Ikeda et al. [1400] show that various elements, which are apart from the true Pareto frontier, may survive as hardly-dominated solutions, which leads to a decrease in the probability of producing new candidate solutions dominating existing ones. This phenomenon is called dominance resistance. Deb et al. [750] recognize this problem of redundant solutions and incorporate an example function provoking it into their test function suite for real-valued, multi-objective optimization.
In addition to these algorithm-side limitations, Bouyssou [374] suggests that, based on, for instance, the limited capabilities of the human mind [1900], an operator will not be able to make efficient decisions if more than a dozen objectives are involved. Visualizing the solutions in a human-understandable way becomes more complex with the rising number of dimensions, too [1420].
Example E20.1 (Non-Dominated Elements in Random Samples).
Deb [740] showed that the number of non-dominated elements in random samples increases quickly with the dimension. The hyper-surface of the Pareto frontier may increase exponentially with the number of objective functions [1420]. Like Ishibuchi et al. [1420], we want to illustrate this issue with an experiment.
Imagine that a population-based optimization approach was used to solve a many-objective problem with n = |f| objective functions. At startup, the algorithm will fill the initial population pop with ps randomly created individuals p. If we assume the problem space X to be continuous and the creation process sufficiently random, no two individuals in pop will be alike. If the objective functions are locally invertible and sufficiently independent, we can assume that there will be no two candidate solutions in pop for which any of the objectives takes on the same value.
[Figure 20.1: Approximated distribution of the domination rank in random samples for several population sizes (Fig. 20.1.a: ps = 5, Fig. 20.1.b: ps = 10, Fig. 20.1.c: ps = 50, Fig. 20.1.d: ps = 100, Fig. 20.1.e: ps = 500, Fig. 20.1.f: ps = 1000, Fig. 20.1.g: ps = 3000); axes: rank k, number of objectives n, P(#dom = k | ps, n).]
The distribution P(#dom = k | ps, n) of the probability that the domination rank #dom of a randomly selected individual from the initial, randomly created population is exactly k ∈ 0..ps − 1 depends on the population size ps and the number of objective functions n. We have approximated this probability distribution with experiments where we determined the domination rank of ps n-dimensional vectors where each element is drawn from a uniform distribution, for several values of n and ps, spanning from n = 2 to n = 50 and ps = 3 to ps = 3600. (For n = 1, no experiments were performed since there, each domination rank from 0 to ps − 1 occurs exactly once in the population.)
Figure 20.1 and Figure 20.2 illustrate the results of these experiments, the arithmetic means of 100 000 runs for each configuration. In Figure 20.1, the approximated distribution of P(#dom = k | ps, n) is illustrated for various n. The value of P(#dom = 0 | ps, n) quickly approaches one for rising n. We limited the z-axis to 0.3 so the behavior of the other ranks k > 0, which quickly becomes negligible, can still be seen. For fixed n and ps, P(#dom = k | ps, n) drops rapidly, from the looks roughly similar to an exponential distribution (see Section 53.4.3). For a fixed k and ps, P(#dom = k | ps, n) takes on roughly the shape of a Poisson distribution (see Section 53.3.2) along the n-axis, i. e., it rises for some time and then falls again. With rising ps, the maximum of this distribution shifts to the right, meaning that an algorithm with a higher population size may be able to deal with more objective functions. Still, even for ps = 3000, around 64% of the population are already non-dominated in this experiment.
[Figure 20.2: The proportion of non-dominated candidate solutions for several population sizes ps and dimensionalities n; axes: population size ps, number of objectives n, P(#dom = 0 | ps, n).]
The fraction of non-dominated elements in the random populations is illustrated in Figure 20.2. We find that it again seems to rise exponentially with n. The population size ps in Figure 20.2 seems to have only an approximately logarithmically positive influence: If we list the population sizes required to keep the fraction of non-dominated candidate solutions at the same level of P(#dom = 0 | ps = 5, n = 2) ≈ 0.457, we find P(#dom = 0 | ps = 12, n = 3) ≈ 0.466, P(#dom = 0 | ps = 35, n = 4) ≈ 0.447, P(#dom = 0 | ps = 90, n = 5) ≈ 0.455, P(#dom = 0 | ps = 250, n = 6) ≈ 0.452, P(#dom = 0 | ps = 650, n = 7) ≈ 0.460, and P(#dom = 0 | ps = 1800, n = 8) ≈ 0.459. An extremely coarse rule of thumb for this experiment would hence be that around 0.6 · e^n individuals are required in the population to hold the proportion of non-dominated at around 46%.
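A small-scale version of this experiment can be reproduced directly; the run count below is far lower than the 100 000 runs used for the figures, so the estimates are rougher, but the qualitative trend is the same:

```python
import random

random.seed(0)

def dominates(f1, f2):
    """Pareto dominance for minimization over equal-length vectors."""
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

def nondominated_fraction(ps, n, runs=50):
    """Estimate P(#dom = 0 | ps, n): the expected fraction of
    non-dominated individuals among ps uniform random n-dim vectors."""
    total = 0
    for _ in range(runs):
        pop = [[random.random() for _ in range(n)] for _ in range(ps)]
        total += sum(1 for p in pop
                     if not any(dominates(q, p) for q in pop if q is not p))
    return total / (runs * ps)

# The fraction of non-dominated individuals rises quickly with n:
for n in (2, 5, 10):
    print(n, round(nondominated_fraction(ps=50, n=n), 2))
```

Already for n = 10 objectives, almost the whole random population of 50 individuals is non-dominated, matching the qualitative picture of Figure 20.2.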
To sum things up, the increasing dimensionality of the objective space leads to three main problems [1420]:
1. The performance of traditional approaches based solely on Pareto comparisons deteriorates.
2. The utility of the solutions cannot be understood by the human operator anymore.
3. The number of possible Pareto-optimal solutions may increase exponentially.
Finally, it should be noted that there exists also interesting research where objectives are added to a single-objective problem to make it easier [1553, 2025], which you can find summarized in Section 13.4.6. Indeed, Brockhoff et al. [425] showed that in some cases, adding objective functions may be beneficial (while it is not in others).
20.3 Countermeasures
Various countermeasures have been proposed against the problem of dimensionality. Nice surveys on contemporary approaches to many-objective optimization with evolutionary approaches, on the difficulties arising in many-objective optimization, and on benchmark problems have been provided by Hiroyasu et al. [1237], Ishibuchi et al. [1420], and López Jaimes and Coello Coello [1775]. In the following, we list some of the approaches for many-objective optimization, mostly utilizing the information provided in [1237, 1420].
20.3.1 Increasing Population Size
The most trivial measure, however, has not been introduced in these studies: increasing the population size. Well, from Example E20.1 we know that this will work only for a few objective functions and we have to throw in exponentially more individuals in order to fight the influence of many objectives. For five or six objective functions, we may still be able to obtain good results if we do this, for the price of many additional objective function evaluations. If we follow the rough approximation equation from the example, it would already take a population size of around ps = 100 000 in order to keep the fraction of non-dominated candidate solutions below 0.5 (in the scenario used there, that is). Hence, increasing the population size can only get us so far.
20.3.2 Increasing Selection Pressure
A very simple idea for improving the way multi-objective approaches scale with increasing dimensionality is to increase the exploration pressure into the direction of the Pareto frontier. Ishibuchi et al. [1420] distinguish approaches which modify the definition of domination in order to reduce the number of non-dominated candidate solutions in the population [2403] and methods which assign different ranks to non-dominated solutions [636, 829, 830, 1576, 1635, 2629]. Farina and Amato [894] propose to rely on fuzzy Pareto methods instead of pure Pareto comparisons.
20.3.3 Indicator Function-based Approaches
Ishibuchi et al. [1420] further point out that fitness assignment methods could be used which are not based on Pareto dominance. According to them, one approach is using indicator functions such as those involving hypervolume metrics [1419, 2838].
20.3.4 Scalarizing Approaches
Another fitness assignment method is the use of scalarizing functions [1420], which treat many-objective problems with single-objective-style methods. Advanced single-objective optimization methods like those developed by Hughes [1303] could be used instead of Pareto comparisons. Wagner et al. [2837, 2838] support the results and assumptions of Hughes [1303] in terms of single-objective methods being better in many-objective problems than traditional multi-objective Evolutionary Algorithms such as NSGA-II [749] or SPEA 2 [3100], but also refute the general idea that all Pareto-based methods cannot cope with their performance. They also show that more recent approaches [880, 2010, 3096] which favor more than just distribution aspects can perform very well. Other scalarizing methods can be found in [1304, 1416, 1417, 1437].
20.3.5 Limiting the Search Area in the Objective Space
Hiroyasu et al. [1237] point out that the search area in the objective space can be limited. This leads to a decrease of the number of non-dominated points [1420] and can be achieved by either incorporating preference information [746, 933, 2695] or by reducing the dimensionality [422–424, 745, 2411].
20.3.6 Visualization Methods
Approaches for visualizing solutions of many-objective problems in order to make them
more comprehensible have been provided by Furuhashi and Yoshikawa [1002], Köppen and Yoshida [1577], Obayashi and Sasaki [2060], and in [1893].
Chapter 21
Scale (Decision Variables)
In Chapter 20, we discussed how an increasing number of objective functions can threaten the performance of optimization algorithms. We referred to this problem as dimensionality, i. e., the dimension of the objective space. However, there is another space in optimization whose dimension can make an optimization problem difficult: the search space G. The curse of dimensionality¹ of the search space stands for the exponential increase of its volume with the number of decision variables [260, 261]. In order to better distinguish between the dimensionality of the search and the objective space, we will refer to the former one as scale.
In Example E21.1, we give an example for the growth of the search space with the scale
of a problem and in Table 21.1, some large-scale optimization problems are listed.
Example E21.1 (Small-scale versus large-scale).
The problem of small-scale versus large-scale problems can best be illustrated using discrete or continuous vector-based search spaces as example. If we search, for instance, in the natural interval G = (1..10), there are ten points which could be the optimal solution. When G = (1..10)², there exist one hundred possible results and for G = (1..10)³, it is already one thousand. In other words, if the number of points in G be n, then the number of points in G^m is n^m.
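The growth described in the example is plain exponentiation; a few lines make the explosion tangible:

```python
def search_space_size(n, m):
    """Number of candidate points in G^m when the base set G has n points."""
    return n ** m

# |G| = 10: every added decision variable multiplies the volume by ten.
print([search_space_size(10, m) for m in (1, 2, 3, 6)])
# → [10, 100, 1000, 1000000]
```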
Table 21.1: Applications and Examples of Large-Scale Optimization.
Area                                 References
Data Mining                          [162, 994, 1748]
Databases                            [162]
Distributed Algorithms and Systems   [1748, 3031]
Engineering                          [2860]
Function Optimization                [551, 1927, 2130, 2486, 3020]
Graph Theory                         [3031]
Software                             [3031]
21.1 The Problem
Especially in combinatorial optimization, the issue of scale has been considered for a long time. Here, the computational complexity (see Section 12.1) denotes how many algorithm steps are needed to solve a problem which consists of n parameters. As sketched in Figure 12.1 on page 145, the time requirements to solve problems which require step numbers growing exponentially with n quickly exceed any feasible bound. Here, especially NP-hard problems should be mentioned, since they always exhibit this feature.
Besides combinatorial problems, numerical problems suffer from the curse of dimensionality as well.
¹ http://en.wikipedia.org/wiki/Curse_of_dimensionality [accessed 2009-10-20]
21.2 Countermeasures
21.2.1 Parallelization and Distribution
When facing problems of large scale, the problematic symptom is the high runtime requirement. Thus, any measure to use as much computational power as available can be a remedy. Obviously, there (currently) exists no way to solve large-scale NP-hard problems exactly within feasible time. With more computers, cores, or hardware, only a linear improvement of runtime can be achieved at most. However, for problems residing in the gray area between feasible and infeasible, distributed computing [1747] may be the method of choice.
21.2.2 Genetic Representation and Development
The best way to solve a large-scale problem is to solve it as a small-scale problem. For some optimization tasks, it is possible to choose a search space which has a smaller size than the problem space (|G| < |X|). Indirect genotype-phenotype mappings can link the spaces together, as we will discuss later in Section 28.2.2.
Here, one option are generative mappings (see Section 28.2.2.1) which step-by-step construct a complex phenotype by translating a genotype according to some rules. Grammatical Evolution, for instance, unfolds a symbol according to a BNF grammar with rules identified in a genotype. This recursive process can basically lead to arbitrarily complex phenotypes. Another approach is to apply a developmental, ontogenic mapping (see Section 28.2.2.2) that uses feedback from simulations or objective functions in this process.
Example E21.2 (Dimension Reduction via Developmental Mapping).
Instead of optimizing (maybe up to a million) points on the surface of an airplane's wing (see Example E2.5 and E4.1) in order to find an optimal shape, the weights of an artificial neural network used to move surface points to a beneficial configuration could be optimized. The second task would involve significantly fewer variables, since an artificial neural network with usually less than 1000 or even 100 weights could be applied. Of course, the number of possible solutions which can be discovered by this approach decreases (it is at most |G|). However, the problem can become tangible and good results may be obtained in reasonable time.
21.2.3 Exploiting Separability
Sometimes, parts of candidate solutions are independent from each other and can be optimized more or less separately. In such a case of low epistasis (see Chapter 17), a large-scale problem can be divided into several problems of smaller scale which can be optimized separately. If solving a problem of scale n takes 2^n algorithm steps, solving two problems of scale 0.5n will clearly lead to a great runtime reduction. Such a reduction may even be worth
sacrificing some solution quality. If the optimization problem at hand exhibits low epistasis or is separable, such a sacrifice may even be unnecessary.
Coevolution has shown to be an efficient approach in combinatorial optimization [1311]. If extended with a cooperative component, i. e., to Cooperative Coevolution [2210, 2211], it can efficiently exploit separability in numerical problems and lead to better results [551, 3020].
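The speedup available through separability can be sketched on a fully separable toy objective (a shifted sphere function): instead of searching the combined space, each coordinate is solved as an independent one-dimensional subproblem. The function choice and the simple grid search are illustrative assumptions:

```python
def sphere(x):
    """Fully separable objective: f(x) = sum of (x_i - 3)^2, minimized
    at x_i = 3 for every coordinate independently."""
    return sum((xi - 3.0) ** 2 for xi in x)

def solve_coordinate(candidates):
    """Solve one 1-D subproblem: pick the value minimizing its own term.
    Separability guarantees the per-coordinate terms do not interact."""
    return min(candidates, key=lambda v: (v - 3.0) ** 2)

# Instead of searching a 10-dimensional grid with 21^10 points, solve
# ten independent 1-D problems with 21 candidate values each.
grid = [v * 0.5 for v in range(-10, 11)]  # -5.0 .. 5.0 in steps of 0.5
solution = [solve_coordinate(grid) for _ in range(10)]
print(solution[0], sphere(solution))  # → 3.0 0.0
```

Ten times 21 evaluations replace 21^10 ≈ 1.7 · 10^13; with high epistasis between the coordinates, this decomposition would no longer find the joint optimum.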
21.2.4 Combination of Techniques
Generally, it can be a good idea to concurrently use sets of algorithms [1689] or portfolios [2161] working on the same or different populations. This way, the different strengths of the optimization methods can be combined. In the beginning, for instance, an algorithm with good convergence speed may be granted more runtime. Later, the focus can shift towards methods which can retain diversity and are not prone to premature convergence. Alternatively, a sequential approach can be performed, which starts with one algorithm and switches to another one when no further improvements are found [? ]. This way, first an interesting area in the search space can be found which then can be investigated in more detail.
Chapter 22
Dynamically Changing Fitness Landscape
It should also be mentioned that there exist problems with dynamically changing fitness landscapes [396, 397, 400, 1957, 2302]. The task of an optimization algorithm is then to provide candidate solutions with momentarily optimal objective values for each point in time. Here we have the problem that an optimum in iteration t will possibly not be an optimum in iteration t + 1 anymore. Problems with dynamic characteristics can, for example, be tackled with special forms [3019] of
1. Evolutionary Algorithms [123, 398, 1955, 1956, 2723, 2941],
2. Genetic Algorithms [1070, 1544, 1948–1950],
3. Particle Swarm Optimization [316, 498, 499, 1728, 2118],
4. Differential Evolution [1866, 2988], and
5. Ant Colony Optimization [1148, 1149, 1730].
The moving peaks benchmarks by Branke [396, 397] and Morrison and De Jong [1957] are good examples for dynamically changing fitness landscapes. You can find them discussed in ?? on page ??. In Table 22.1, we give some examples for applications of dynamic optimization.
Table 22.1: Applications and Examples of Dynamic Optimization.
Area                                 References
Chemistry                            [1909, 2731, 3007]
Combinatorial Problems               [400, 644, 1148, 1446, 1486, 1730, 2120, 2144, 2499]
Cooperation and Teamwork             [1996, 1997, 2424, 2502]
Data Mining                          [3018]
Distributed Algorithms and Systems   [353, 786, 962, 964–966, 1401, 1733, 1840, 1909, 1935, 1982, 1995, 2387, 2425, 2694, 2731, 2791, 3007, 3032, 3106]
Engineering                          [556, 644, 786, 1486, 1733, 1840, 1909, 1982, 2144, 2425, 2499, 2694, 2731, 3007]
Graph Theory                         [353, 556, 644, 786, 962, 964–966, 1401, 1486, 1733, 1840, 1909, 1935, 1982, 1995, 2144, 2387, 2425, 2502, 2694, 2731, 2791, 3032, 3106]
Logistics                            [1148, 1446, 1730, 2120]
Motor Control                        [489]
Multi-Agent Systems                  [2144, 2925]
Security                             [1486]
Software                             [1401, 1486, 1995, 3032]
Wireless Communication               [556, 644, 1486, 2144, 2502]
Chapter 23
The No Free Lunch Theorem
By now, we know the most important difficulties that can be encountered when applying an optimization algorithm to a given problem. Furthermore, we have seen that it is arguable what actually an optimum is if multiple criteria are optimized at once. The fact that there is most likely no optimization method that can outperform all others on all problems can, thus, easily be accepted. Instead, there exists a variety of optimization methods specialized in solving different types of problems. There are also algorithms which deliver good results for many different problem classes, but may be outperformed by highly specialized methods in each of them. These facts have been formalized by Wolpert and Macready [2963, 2964] in their No Free Lunch Theorems¹ (NFL) for search and optimization algorithms.
23.1 Initial Definitions
Wolpert and Macready [2964] consider single-objective optimization over a finite domain and define an optimization problem φ(g) ≡ f(gpm(g)) as a mapping of a search space G to the objective space R.² Since this definition subsumes the problem space and the genotype-phenotype mapping, only skipping the possible search operations, it is very similar to our Definition D6.1 on page 103. They further call a time-ordered set d_m of m distinct visited points in G × R a sample of size m and write d_m ≡ ((d_m^g(1), d_m^y(1)), (d_m^g(2), d_m^y(2)), . . . , (d_m^g(m), d_m^y(m))). d_m^g(i) is the genotype and d_m^y(i) the corresponding objective value visited at time step i. Then, the set D_m = (G × R)^m is the space of all possible samples of length m and D = ∪_{m≥0} D_m is the set of all samples of arbitrary size.
An optimization algorithm a can now be considered to be a mapping of the previously visited points in the search space (i. e., a sample) to the next point to be visited. Formally, this means a : D → G. Without loss of generality, Wolpert and Macready [2964] only regard unique visits and thus define a : d ∈ D → g : g ∉ d.
Performance measures can be defined independently from the optimization algorithms, only based on the values of the objective function visited in the samples d_m. If the objective function is subject to minimization, min{d_m^y(i) : i = 1..m} would be the appropriate measure.
Often, only parts of the optimization problem are known. If the minima of the objective function f were already identified beforehand, for instance, its optimization would be useless. Since the behavior in wide areas of φ is not obvious, it makes sense to define a probability P(φ) that we are actually dealing with φ and no other problem. Wolpert and Macready [2964] use the handy example of the Traveling Salesman Problem in order to illustrate this issue. Each distinct TSP produces a different structure of φ. Yet, we would use the same optimization algorithm a for all problems of this class without knowing the exact shape of φ. This corresponds to the assumption that there is a set of very similar optimization problems which we may encounter here although their exact structure is not known. We act as if there was a probability distribution over all possible problems which is non-zero for the TSP-alike ones and zero for all others.
¹ http://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization [accessed 2008-03-28]
² Notice that we have partly utilized our own notations here in order to be consistent throughout the book.
23.2 The Theorem
The performance of an algorithm a iterated m times on an optimization problem can
then be dened as P (d
y
m
[, m, a), i. e., the conditional probability of nding a particular
sample d
y
m
. Notice that this measure is very similar to the value of the problem landscape
(x, ) introduced in Denition D5.2 on page 95 which is the cumulative probability that
the optimizer has visited the element x X until (inclusively) the th evaluation of the
objective function(s).
Wolpert and Macready [2964] prove that the sum of such probabilities over all possible optimization problems f on finite domains is always identical for all optimization algorithms. For two optimizers a_1 and a_2, this means that

Σ_f P(d^y_m | f, m, a_1) = Σ_f P(d^y_m | f, m, a_2) (23.1)

Hence, the average over all f of P(d^y_m | f, m, a) is independent of a.
23.3 Implications
From this theorem it follows immediately that, in order to outperform a_1 in one optimization problem, a_2 will necessarily perform worse in another. Figure 23.1 visualizes this issue. It shows that general optimization approaches like Evolutionary Algorithms can solve a variety of problem classes with reasonable performance. In this figure, we have chosen a performance measure subject to maximization, i.e., the higher its values, the faster will the problem be solved. Hill Climbing approaches, for instance, will be much faster than Evolutionary Algorithms if the objective functions are steady and monotonous, that is, in a smaller set of optimization tasks. Greedy search methods will perform fast on all problems with matroid 3 structure. Evolutionary Algorithms will most often still be able to solve these problems, it just takes them longer to do so. The performance of Hill Climbing and greedy approaches degenerates in other classes of optimization tasks as a trade-off for their high utility in their area of expertise.
One interpretation of the No Free Lunch Theorem is that it is impossible for any optimization algorithm to outperform random walks or exhaustive enumerations on all possible problems. For every problem where a given method leads to good results, we can construct a problem where the same method has exactly the opposite effect (see Chapter 15). As a matter of fact, doing so is even a common practice to find weaknesses of optimization algorithms and to compare them with each other, see Section 50.2.6, for example.
3 http://en.wikipedia.org/wiki/Matroid [accessed 2008-03-28]
[Figure: a very crude sketch of performance (vertical axis) over all possible optimization problems (horizontal axis) for a random walk or exhaustive enumeration, a general optimization algorithm (an EA, for instance), specialized optimization algorithm 1 (a hill climber, for instance), and specialized optimization algorithm 2 (a depth-first search, for instance).]
Figure 23.1: A visualization of the No Free Lunch Theorem.
Example E23.1 (Fitness Landscapes for Minimization and the NFL).
Let us assume that the Hill Climbing algorithm is applied to the objective functions given
[Fig. 23.2.a: A landscape good for Hill Climbing (objective values f(x) over x, divided into sections I, II, and III).]
[Fig. 23.2.b: A landscape bad for Hill Climbing (objective values f(x) over x, divided into sections IV and V).]
Figure 23.2: Fitness Landscapes and the NFL.
in Figure 23.2 which are both subject to minimization. We divided both fitness landscapes into sections labeled with Roman numerals. A Hill Climbing algorithm will generate points in the proximity of the currently best solution and transcend to these points if they are better. Hence, in both problems it will always follow the downward gradient in all three sections. In Fig. 23.2.a, this is the right decision in sections II and III because the global optimum is located at the border between them, but wrong in the deceptive section I. In Fig. 23.2.b, exactly the opposite is the case: Here, a gradient descent is only a good idea in section IV (which corresponds to I in Fig. 23.2.a) whereas it leads away from the global optimum in section V, which has the same extent as sections II and III in Fig. 23.2.a.
Hence, our Hill Climbing would exhibit very different performance when applied to
Fig. 23.2.a and Fig. 23.2.b. Depending on the search operations available, it may even be outpaced by a random walk on Fig. 23.2.b on average. Since we may not know the nature of the objective functions before the optimization starts, this may become problematic. However, Fig. 23.2.b is deceptive in the whole section V, i.e., most of its problem space, which is not the case to such an extent in many practical problems.
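The behavior described in this example is easy to reproduce in code. The two landscapes below are hypothetical stand-ins for Fig. 23.2.a and Fig. 23.2.b (not the exact curves from the figure), minimized by a simple Hill Climber that accepts only improving neighbors.

```python
import random

def hill_climb(f, x0, lo=0.0, hi=5.0, step=0.1, iterations=3000, seed=1):
    """Minimize f on [lo, hi]: accept a random neighbor only if it is better."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for _ in range(iterations):
        xn = min(hi, max(lo, x + rng.uniform(-step, step)))
        fn = f(xn)
        if fn < fx:          # transcend only to better points
            x, fx = xn, fn
    return x, fx

# A landscape good for hill climbing: one funnel, global minimum at x = 2.
good = lambda x: (x - 2.0) ** 2

# A deceptive landscape: the broad gradient pulls towards x = 5,
# but the global minimum f(0) = -2 hides in the narrow section [0, 0.2).
bad = lambda x: 10.0 * x - 2.0 if x < 0.2 else 1.0 - 0.2 * x

print(hill_climb(good, 4.5))  # converges to the global optimum near x = 2
print(hill_climb(bad, 1.0))   # trapped: drifts towards x = 5, missing f(0) = -2
```

On the second landscape, every step towards the global optimum at x = 0 increases f on the broad right branch, so the climber can never cross into the narrow optimal section.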
Another interpretation is that every useful optimization algorithm utilizes some form of problem-specific knowledge. Radcliffe [2244] basically states that without such knowledge, search algorithms cannot exceed the performance of simple enumerations. Incorporating knowledge starts with relying on simple assumptions like "if x is a good candidate solution, then we can expect other good candidate solutions in its vicinity", i.e., strong causality. The more (correct) problem-specific knowledge is integrated (correctly) into the algorithm structure, the better will the algorithm perform. On the other hand, knowledge correct for one class of problems is, quite possibly, misleading for another class. In reality, we use optimizers to solve a given set of problems and are not interested in their performance when (wrongly) applied to other classes.
Example E23.2 (Arbitrary Landscapes and Exhaustive Enumeration).
An even simpler thought experiment than Example E23.1 on optimization thus goes as follows: If we consider an optimization problem of truly arbitrary structure, then any element in X could be the global optimum. In an arbitrary (or random) fitness landscape where the objective value of the global optimum is unknown, it is impossible to be sure that the best candidate solution found so far is the global optimum until all candidate solutions have been evaluated. In this case, a complete enumeration or uninformed search is the optimal approach. In most randomized search procedures, we cannot fully prevent that a certain candidate solution is discovered and evaluated more than once, at least without keeping all previously discovered elements in memory, which is infeasible in most cases. Hence, especially when approaching full coverage of the problem space, these methods will waste objective function evaluations and thus become inferior.
The number of arbitrary fitness landscapes should be at least as high as the number of optimization-favorable ones: Assuming a favorable landscape with a single global optimum, there should be at least one position to which the optimum could be moved in order to create a fitness landscape where any informed search is outperformed by exhaustive enumeration or a random walk.
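The waste caused by re-evaluating already visited candidate solutions can be quantified with a toy experiment (an illustrative assumption, not taken from the text): memoryless uniform sampling needs roughly n·ln(n) evaluations to cover a space of size n (the coupon collector effect), whereas a complete enumeration needs exactly n.

```python
import random

def evaluations_until_full_coverage(n, seed=42):
    """Memoryless uniform sampling: count objective function evaluations
    until every candidate solution of a space of size n has been seen once."""
    rng = random.Random(seed)
    seen, evals = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))  # may revisit already evaluated candidates
        evals += 1
    return evals

n = 1000
e = evaluations_until_full_coverage(n)
print(e, n)  # typically several times n, vs. exactly n for an enumeration
```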
The rough meaning of the NFL is that all black-box optimization methods perform equally well over the complete set of all optimization problems [2074]. In practice, we do not want to apply an optimizer to all possible problems but only to some restricted classes. In terms of these classes, we can make statements about which optimizer performs better.
Today, there exists a wide range of work on No Free Lunch Theorems for many different aspects of machine learning. The website http://www.no-free-lunch.org/ 4 gives a good overview about them. Further summaries, extensions, and criticisms have been provided by Köppen et al. [1578], Droste et al. [837, 838, 839, 840], Oltean [2074], and Igel and Toussaint [1397, 1398]. Radcliffe and Surry [2247] discuss the NFL in the context of Evolutionary Algorithms and the representations used as search spaces. The No Free Lunch Theorem is furthermore closely related to the Ugly Duckling Theorem 5 proposed by Watanabe [2868] for classification and pattern recognition.
4 accessed: 2008-03-28
5 http://en.wikipedia.org/wiki/Ugly_duckling_theorem [accessed 2008-08-22]
23.4 Infinite and Continuous Domains
Recently, Auger and Teytaud [145, 146] showed that the No Free Lunch Theorem holds only in a weaker form for countably infinite domains. For continuous domains (i.e., uncountably infinite ones), it does not hold. This means that, for a given performance measure and a limit of the maximally allowed objective function evaluations, it can be possible to construct an optimal algorithm for a certain family of objective functions [145, 146].
A factor that should be considered when thinking about these issues is that real existing computers can only work with finite search spaces, since they have finite memory. In the end, even floating point numbers are finite in value and precision. Hence, there is always a difference between the behavior of the algorithmic, mathematical definition of an optimization method and its practical implementation for a given computing environment. Unfortunately, the author of this book finds himself unable to state with certainty whether this means that the NFL does hold for realistic continuous optimization or whether the proofs by Auger and Teytaud [145, 146] carry over to actual programs.
23.5 Conclusions
So far, the subject of this chapter was the question about what makes optimization problems hard, especially for metaheuristic approaches. We have discussed numerous different phenomena which can affect the optimization process and lead to disappointing results.
If an optimization process has converged prematurely, it has been trapped in a non-optimal region of the search space from which it cannot escape anymore (Chapter 13). Ruggedness (Chapter 14) and deceptiveness (Chapter 15) in the fitness landscape, often caused by epistatic effects (Chapter 17), can misguide the search into such a region. Neutrality and redundancy (Chapter 16) can either slow down optimization, because the application of the search operations does not lead to a gain in information, or may also contribute positively by creating neutral networks from which the search space can be explored and local optima can be escaped from. Noise is present in virtually all practical optimization problems. The solutions that are derived for them should be robust (Chapter 18). Also, they should neither be too general (oversimplification, Section 19.2) nor too specifically aligned only to the training data (overfitting, Section 19.1). Furthermore, many practical problems are multi-objective, i.e., involve the optimization of more than one criterion at once (partially discussed in Section 3.3), or concern objectives which may change over time (Chapter 22).
The No Free Lunch Theorem argues that it is not possible to develop the one optimization algorithm, the problem-solving machine which can provide us with near-optimal solutions in short time for every possible optimization task (at least in finite domains). Even though this proof does not directly hold for purely continuous or infinite search spaces, this must sound very depressing for everybody new to this subject.
Actually, quite the opposite is the case, at least from the point of view of a researcher. The No Free Lunch Theorem means that there will always be new ideas, new approaches which will lead to better optimization algorithms to solve a given problem. Instead of being doomed to obsolescence, it is far more likely that most of the currently known optimization methods have at least one niche, one area where they are excellent. It also means that it is very likely that the puzzle of optimization algorithms will never be completed. There will always be a chance that an inspiring moment, an observation in nature, for instance, may lead to the invention of a new optimization algorithm which performs better in some problem areas than all currently known ones.
[Figure: puzzle pieces labeled Evolutionary Algorithms (GA, GP, ES, DE, EP, ...), Memetic Algorithms, ACO, PSO, EDA, LCS, RFD, Extremal Optimization, Simulated Annealing, Tabu Search, Random Optimization, Hill Climbing, Downhill Simplex, Branch & Bound, Dynamic Programming, A* Search, and IDDFS.]
Figure 23.3: The puzzle of optimization algorithms.
Chapter 24
Lessons Learned: Designing Good Encodings
Most Global Optimization algorithms share the premise that solutions to problems are either (a) elements of a somewhat continuous space that can be approximated stepwise or (b) that they can be composed of smaller modules which already have good attributes even when occurring separately. Especially in the second case, the design of the search space (or genome) G and the genotype-phenotype mapping gpm is vital for the success of the optimization process. It determines to what degree these expected features can be exploited by defining how the properties and the behavior of the candidate solutions are encoded and how the search operations influence them. In this chapter, we will first discuss a general theory on how properties of individuals can be defined, classified, and how they are related. We will then outline some general rules for the design of the search space G, the search operations searchOp ∈ Op, and the genotype-phenotype mapping gpm. These rules represent the lessons learned from our discussion of the possible problematic aspects of optimization tasks. In software engineering, there exists a variety of design patterns 1 that describe good practice and experience values. Utilizing these patterns will help the software engineer to create well-organized, extensible, and maintainable applications.
Whenever we want to solve a problem with Global Optimization algorithms, we need to define the structure of a genome. The individual representation along with the genotype-phenotype mapping is a vital part of Genetic Algorithms and has major impact on the chance of finding good solutions. Like in software engineering, there exist basic patterns and rules of thumb on how to design them properly.
Based on our previous discussions, we can now provide some general best practices for the design of encodings from different perspectives. These principles can lead to finding better solutions or higher optimization speed if considered in the design phase [1075, 2032, 2338]. We will outline suggestions given by different researchers transposed into the more general vocabulary of formae (introduced in Chapter 9 on page 119).
24.1 Compact Representation
The authors of [1075] and Ronald [2323] state that the representations of the formae in the search space should be as short as possible. The number of different alleles and the lengths of the different genes should be as small as possible. In other words, the redundancy in the genomes should be minimal. In Section 16.5 on page 174, we discussed that uniform redundancy slows down the optimization process.
24.2 Unbiased Representation
The search space G should be unbiased in the sense that all phenotypes are
1 http://en.wikipedia.org/wiki/Design_pattern_%28computer_science%29 [accessed 2007-08-12]
represented by the same number of genotypes [2032, 2113, 2115]. This property allows us to efficiently select an unbiased starting population, which gives the optimizer the chance of reaching all parts of the problem space.
∀ x_1, x_2 ∈ X : |{g ∈ G : x_1 = gpm(g)}| = |{g ∈ G : x_2 = gpm(g)}| (24.1)
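Property (24.1) can be checked mechanically for small genomes. The sketch below uses a hypothetical genotype-phenotype mapping (interpreting a 4-bit string as an unsigned integer modulo 3) and counts the genotypes per phenotype; this particular mapping violates (24.1), which the counts reveal.

```python
from itertools import product
from collections import Counter

def gpm(g):
    """Hypothetical genotype-phenotype mapping: bit tuple -> integer mod 3."""
    return int("".join(map(str, g)), 2) % 3

# Count how many of the 2^4 genotypes map to each phenotype.
counts = Counter(gpm(g) for g in product((0, 1), repeat=4))
print(counts)    # phenotype 0 is represented by 6 genotypes, 1 and 2 by 5 each

unbiased = len(set(counts.values())) == 1
print(unbiased)  # False: this GPM violates Equation 24.1
```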
Beyer and Schwefel [300, 302] furthermore also call for unbiased variation operators. The unary mutation operations in Evolution Strategy, for example, should not give preference to any specific change of the genotypes. The reason for this is that the selection operations, i.e., the methods which choose among the candidate solutions, already exploit fitness information and guide the search into one direction. The variation operators thus should not do this job as well but instead try to increase the diversity. They should perform exploration as opposed to the exploitation performed by selection.
Beyer and Schwefel [302] therefore again suggest to use normal distributions to mutate real-valued vectors if G ⊆ R^n, and Rudolph [2348] proposes to use geometrically distributed random numbers to offset genotype vectors in G ⊆ Z^n.
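These two suggestions can be sketched as concrete mutation operators. The implementations below are minimal illustrations (parameter choices are our own assumptions): a zero-mean normal offset for real vectors and, following Rudolph's idea, the difference of two geometrically distributed variables as a symmetric integer offset, so no direction of change is preferred.

```python
import random

rng = random.Random(7)

def mutate_real(g, sigma=0.1):
    """Unbiased mutation for g in R^n: add N(0, sigma^2) noise to each gene."""
    return [gi + rng.gauss(0.0, sigma) for gi in g]

def geometric(p=0.5):
    """Number of failures before the first success, support {0, 1, 2, ...}."""
    k = 0
    while rng.random() >= p:
        k += 1
    return k

def mutate_int(g, p=0.5):
    """Unbiased mutation for g in Z^n: offset each gene by the difference of
    two i.i.d. geometric variables, which is symmetric around zero."""
    return [gi + geometric(p) - geometric(p) for gi in g]

print(mutate_real([1.0, 2.0, 3.0]))
print(mutate_int([1, 2, 3]))
```

Both offsets have expectation zero, so repeated mutation alone does not drift the population in any particular direction.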
24.3 Surjective GPM
Palmer [2113], Palmer and Kershenbaum [2115] and Della Croce et al. [765] state that a good search space G and genotype-phenotype mapping gpm should be able to represent all possible phenotypes and solution structures, i.e., be surjective (see Section 51.7 on page 644) [2032].

∀ x ∈ X ∃ g ∈ G : x = gpm(g) (24.2)
24.4 Injective GPM
In the spirit of Section 24.1 and with the goal to lower redundancy, the number of genotypes which map to the same phenotype should be low [2322–2324]. In Section 16.5 we provided a thorough discussion of redundancy in the genotypes and its demerits (but also its possible advantages). Combining the idea of achieving low uniform redundancy with the principle stated in Section 24.3, Palmer and Kershenbaum [2113, 2115] conclude that the GPM should ideally be both simple and bijective.
24.5 Consistent GPM
The genotype-phenotype mapping should always yield valid phenotypes [2032, 2113, 2115]. The meaning of valid in this context is that, if the problem space X is the set of all possible trees, only trees should be encoded in the genome. If we use R^3 as problem space, no vectors with fewer or more elements than three should be produced by the genotype-phenotype mapping. Anything else would not make sense anyway, so this is a pretty basic rule. This form of validity does not imply that the individuals are also correct solutions in terms of the objective functions.
Constraints on what a good solution is can, on one hand, be realized by defining them explicitly and making them available to the optimization process (see Section 3.4 on page 70). On the other hand, they can also be implemented by defining the encoding, search operations, and genotype-phenotype mapping in a way which ensures that all phenotypes created are feasible by default [1888, 2323, 2912, 2914].
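A classic way to obtain such consistency by construction is the random-keys encoding for permutation problems: every real vector decodes to a valid permutation, so the genotype-phenotype mapping can never yield an invalid phenotype. A minimal sketch of this idea:

```python
def random_keys_gpm(g):
    """Decode a real vector into a permutation by sorting the indices
    according to their key values. Every genotype in R^n maps to a
    valid permutation of 0..n-1, so no repair step is ever needed."""
    return sorted(range(len(g)), key=lambda i: g[i])

perm = random_keys_gpm([0.42, 0.17, 0.93, 0.05])
print(perm)                          # [3, 1, 0, 2]
assert sorted(perm) == [0, 1, 2, 3]  # always a valid permutation
```

Because arbitrary real-vector mutation and crossover still decode to valid permutations, the standard operators for G ⊆ R^n can be reused unchanged.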
24.6 Formae Inheritance and Preservation
The search operations should be able to preserve the (good) formae and (good) configurations of multiple formae [978, 2242, 2323, 2879]. Good formae should not easily get lost during the exploration of the search space.
From a binary search operation like recombination (e.g., crossover in GAs), we would expect that, if its two parameters g_1 and g_2 are members of a forma A, the resulting element will also be an instance of A.

g_1, g_2 ∈ A ⊆ G ⇒ searchOp(g_1, g_2) ∈ A (24.3)
If we furthermore can assume that all instances of all formae A with minimal precision (A ∈ mini) of an individual are inherited by at least one parent, the binary reproduction operation is considered as pure [2242, 2879].

∀ g_3 = searchOp(g_1, g_2) ∈ G, ∀ A ∈ mini : g_3 ∈ A ⇒ g_1 ∈ A ∨ g_2 ∈ A (24.4)

If this is the case, all properties of a genotype g_3 which is a combination of two others (g_1, g_2) can be traced back to at least one of its parents. Otherwise, searchOp also performs an implicit unary search step, a mutation in Genetic Algorithms, for instance. Such properties, although discussed here for binary search operations only, can be extended to arbitrary n-ary operators.
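For bit-string genomes with schema-like formae, uniform crossover satisfies Equation 24.3: any allele on which both parents agree is necessarily inherited, so every forma containing both parents also contains the child. A short check under these assumptions:

```python
import random

def uniform_crossover(g1, g2, rng):
    """Pick each gene independently from one of the two parents."""
    return [rng.choice((a, b)) for a, b in zip(g1, g2)]

rng = random.Random(3)
g1 = [1, 0, 1, 1, 0, 1]
g2 = [1, 1, 1, 0, 0, 1]
child = uniform_crossover(g1, g2, rng)

# Wherever the parents agree (any schema forma both belong to),
# the child must agree as well: the forma is preserved.
assert all(c == a for c, a, b in zip(child, g1, g2) if a == b)
print(child)
```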
24.7 Formae in Genotypic Space Aligned with Phenotypic Formae
The optimization algorithms find new elements in the search space G by applying the search operations searchOp ∈ Op. These operations can only create, modify, or combine genotypical formae since they usually have no information about the problem space. Most mathematical models dealing with the propagation of formae, like the Building Block Hypothesis and the Schema Theorem 2, thus focus on the search space and show that highly fit genotypical formae will more probably be investigated further than those of low utility. Our goal, however, is to find highly fit formae in the problem space X. Such properties can only be created, modified, and combined by the search operations if they correspond to genotypical formae. According to Radcliffe [2242] and Weicker [2879], a good genotype-phenotype mapping should provide this feature.
It furthermore becomes clear that useful separate properties in phenotypic space can only be combined by the search operations properly if they are represented by separate formae in genotypic space too.
24.8 Compatibility of Formae
Formae of different properties should be compatible [2879]. Compatible formae in phenotypic space should also be compatible in genotypic space. This leads to a low level of epistasis [240, 2323] (see Chapter 17 on page 177) and hence will increase the chance of success of the reproduction operations. The representations of different, compatible phenotypic formae should not influence each other [1075]. This will lead to a representation with minimal epistasis.
2 See Section 29.5 for more information on the Schema Theorem.
24.9 Representation with Causality
Besides low epistasis, the search space, GPM, and search operations together should possess strong causality (see Section 14.2). Generally, the search operations should produce small changes in the objective values. This would, simplified, mean that:

∀ x_1, x_2 ∈ X, g ∈ G : x_1 = gpm(g) ∧ x_2 = gpm(searchOp(g)) ⇒ f(x_2) ≈ f(x_1) (24.5)
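Whether a given combination of encoding and search operation exhibits strong causality can be probed empirically: apply the operator to many random genotypes and record how far the objective value moves. In the sketch below (objective function and operators are illustrative assumptions), a small-step operator is compared with one that redraws the genotype completely.

```python
import random

rng = random.Random(5)
f = lambda x: sum(xi * xi for xi in x)   # sphere objective, minimization

def small_step(x):
    """Local operator: small Gaussian perturbation of each gene."""
    return [xi + rng.gauss(0.0, 0.01) for xi in x]

def random_jump(x):
    """Non-causal operator: redraw the whole genotype uniformly."""
    return [rng.uniform(-5.0, 5.0) for _ in x]

def mean_jump(op, trials=1000):
    """Average |f(searchOp(g)) - f(g)| over random genotypes g."""
    total = 0.0
    for _ in range(trials):
        x = [rng.uniform(-5.0, 5.0) for _ in range(3)]
        total += abs(f(op(x)) - f(x))
    return total / trials

print(mean_jump(small_step))   # small average change: strong causality
print(mean_jump(random_jump))  # large average change: weak causality
```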
24.10 Combinations of Formae
If genotypes g_1, g_2, . . . which are instances of different but compatible formae A_1, A_2, . . . are combined by a binary (or n-ary) search operation, the resulting genotype g should be an instance of both properties, i.e., the combination of compatible formae should be a forma itself [2879].

g_1 ∈ A_1, g_2 ∈ A_2, . . . ⇒ searchOp(g_1, g_2, . . .) ∈ A_1 ∩ A_2 ∩ . . . (≠ ∅) (24.6)

If this principle holds for many individuals and formae, useful properties can be combined by the optimization process step by step, narrowing down the precision of the arising, most interesting formae more and more. This should lead the search to the most promising regions of the search space.
24.11 Reachability of Formae
Beyer and Schwefel [300, 302] suggest that a good representation has the feature of reachability: any other (finite) individual state can be reached from any parent state within a finite number of (mutation) steps. In other words, the set of search operations should be complete (see Definition D4.8 on page 85). Beyer and Schwefel state that this property is necessary for proving global convergence.
The set of available search operations Op should thus include at least one (unary) search operation which is theoretically able to reach all possible formae within finite steps. If the binary search operations in Op all are pure, this unary operator is the only one (apart from creation operations) able to introduce new formae which are not yet present in the population. Hence, it should be able to find any given forma [2879].
Generally, we just need to be able to generate all formae, not necessarily all genotypes, with such an operator, if formae can be arbitrarily combined by the available n-ary search or recombination operations.
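For bit strings, single-bit-flip mutation is complete in this sense: any genotype can be reached from any other within at most n steps, one flip per differing position. A tiny check of this property:

```python
def flip(g, i):
    """Unary mutation: flip bit i of genotype g."""
    return g[:i] + [1 - g[i]] + g[i + 1:]

def path(src, dst):
    """Reach dst from src by flipping every differing bit once."""
    g, steps = src, 0
    for i, (a, b) in enumerate(zip(src, dst)):
        if a != b:
            g = flip(g, i)
            steps += 1
    return g, steps

g, steps = path([0, 0, 0, 0], [1, 0, 1, 1])
print(g, steps)   # [1, 0, 1, 1] reached in 3 mutation steps
assert g == [1, 0, 1, 1] and steps <= 4
```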
24.12 Inuence of Formae
One rule which, in my opinion, was missing in the lists given by Radcliffe [2242] and Weicker [2879] is that the absolute contributions of the single formae to the overall objective values of a candidate solution should not be too different. Let us divide the phenotypic formae into those with positive and those with negative or neutral contribution and let us, for simplification purposes, assume that those with positive contribution can be arbitrarily combined. If one of the positive formae has a contribution with an absolute value much lower than those of the other positive formae, we will run into the problem of domino convergence discussed in Section 13.2.3 on page 153.
Then, the search will first discover the building blocks of higher value. This, itself, is not a problem. However, as we have already pointed out in Section 13.2.3, if the search is
stochastic and performs exploration steps, chances are that alleles of higher importance get destroyed during this process and have to be rediscovered. The values of the less salient formae would then play no role. Thus, the chance of finding them strongly depends on how frequently the destruction of important formae takes place.
Ideally, we would therefore design the genome and phenome in a way that the different characteristics of the candidate solution all influence the objective values to a similar degree. Then, the chance of finding good formae increases.
24.13 Scalability of Search Operations
The scalability requirement outlined by Beyer and Schwefel [300, 302] states that the strength of the variation operator (usually a unary search operation such as mutation) should be tunable in order to adapt to the optimization process. In the initial phase of the search, usually larger search steps are performed in order to obtain a coarse idea in which general direction the optimization process should proceed. The closer the focus gets to a certain local or global optimum, the smaller should the search steps get in order to balance the genotypic features in a fine-grained manner. Ideally, the optimization algorithm should achieve such behavior in a self-adaptive way in reaction to the structure of the problem it is solving.
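A standard realization of such tunability is the 1/5th success rule from Evolution Strategies: the mutation strength σ is enlarged when mutations succeed often and shrunk otherwise, so the step size decreases as the search closes in on an optimum. A compact (1+1)-ES sketch (parameter values are illustrative assumptions):

```python
import random

def es_one_fifth(f, x0, sigma=1.0, iterations=200, seed=9):
    """(1+1)-ES minimizing f, with 1/5th success rule step-size control."""
    rng = random.Random(seed)
    x, fx = x0[:], f(x0)
    for _ in range(iterations):
        y = [xi + rng.gauss(0.0, sigma) for xi in x]
        fy = f(y)
        if fy < fx:                  # success: accept and enlarge steps
            x, fx = y, fy
            sigma *= 1.5
        else:                        # failure: shrink steps
            sigma *= 1.5 ** -0.25    # equilibrium near a 1/5 success ratio
    return x, fx, sigma

sphere = lambda v: sum(vi * vi for vi in v)
x, fx, sigma = es_one_fifth(sphere, [3.0, -2.0])
print(fx, sigma)   # objective value and step size both shrink over the run
```

The asymmetric update factors (×1.5 on success, ×1.5^(−1/4) on failure) leave σ unchanged exactly when one in five mutations succeeds, which is the classical target ratio.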
24.14 Appropriate Abstraction
The problem should be represented at an appropriate level of abstraction [240, 2323]. The complexity of the optimization problem should be evenly distributed between the search space G, the search operations searchOp ∈ Op, the genotype-phenotype mapping gpm, and the problem space X. Sometimes, using a non-trivial encoding which shifts complexity to the genotype-phenotype mapping and the search operations can improve the performance of an optimizer and increase the causality [240, 2912, 2914].
24.15 Indirect Mapping
If a direct mapping between genotypes and phenotypes is not possible, a suitable developmental approach should be applied [2323]. Such indirect genotype-phenotype mappings, as we will discuss in Section 28.2.2 on page 272, are especially useful if large-scale and robust structure designs are to be constructed by the optimization process.
24.15.1 Extradimensional Bypass: Example for Good Complexity
Minimal-sized genomes are not always the best approach. An interesting aspect of genome design supporting this claim is inspired by the works of the theoretical biologist Conrad [621, 622, 623, 625]. According to his extradimensional bypass principle, it is possible to transform a rugged fitness landscape with isolated peaks into one with connected saddle points by increasing the dimensionality of the search space [496, 544]. In [625] he states that the chance of isolated peaks in randomly created fitness landscapes decreases when their dimensionality grows.
This partly contradicts Section 24.1 which states that genomes should be as compact as possible. Conrad [625] does not suggest that nature includes useless sequences in the genome but rather genes which allow for
1. new phenotypical characteristics or
2. redundancy providing new degrees of freedom for the evolution of a species.
In some cases, such an increase in freedom makes more than up for the additional costs arising from the enlargement of the search space. The extradimensional bypass can be considered as an example of positive neutrality (see Chapter 16).
[Fig. 24.1.a: Useful increase of dimensionality (f(x) over G with a local optimum and a global optimum).]
[Fig. 24.1.b: Useless increase of dimensionality (f(x) over G with a local optimum and a global optimum).]
Figure 24.1: Examples for an increase of the dimensionality of a search space G (1d) to G′ (2d).
In Fig. 24.1.a, an example for the extradimensional bypass (similar to Fig. 6 in [355]) is sketched. The original problem had a one-dimensional search space G corresponding to the horizontal axis up front. As can be seen in the plane in the foreground, the objective function had two peaks: a local optimum on the left and a global optimum on the right, separated by a larger valley. When the optimization process began climbing up the local optimum, it was very unlikely that it ever could escape this hill and reach the global one. Increasing the search space to two dimensions (G′), however, opened up a pathway between them. The two isolated peaks became saddle points on a longer ridge. The global optimum is now reachable from all points on the local optimum.
Generally, increasing the dimension of the search space makes only sense if the added dimension has a non-trivial influence on the objective functions. Simply adding a useless new dimension (as done in Fig. 24.1.b) would be an example for some sort of uniform redundancy, of which we already know (see Section 16.5) that it is not beneficial. Then again, adding useful new dimensions may be hard or impossible to achieve in most practical applications.
A good example for this issue is given by Bongard and Paul [355] who used an EA to evolve a neural network for the motion control of a bipedal robot. They performed runs where the evolution had control over some morphological aspects and runs where it had not. The ability to change the leg width of the robots, for instance, comes at the expense of an increase of the dimensions of the search space. Hence, one would expect that the optimization would perform worse. Instead, in one series of experiments, the results were much better with the extended search space. The runs did not converge to one particular leg shape but to a wide range of different structures. This led to the assumption that the morphology itself was not so much the target of the optimization, but that the ability of changing it transformed the fitness landscape into a structure more navigable by the evolution. In some other experimental runs performed by Bongard and Paul [355], this phenomenon could not be observed, most likely because
1. the robot configuration led to a problem of too high complexity, i.e., ruggedness in the fitness landscape and/or
2. the increase in dimensionality this time was too large to be compensated by the gain of evolvability.
Further examples for possible benefits of gradually complexifying the search space are given by Malkin in his doctoral thesis [1818].
Part III
Metaheuristic Optimization Algorithms
Chapter 25
Introduction
In Definition D1.15 on page 36 we stated that a metaheuristic is a black-box optimization technique which combines information from the objective functions, gained by sampling the search space, in a way which (hopefully) leads to the discovery of candidate solutions with high utility. In the taxonomy given in Section 1.3, these algorithms form the majority, since they are the focus of this book.
25.1 General Information on Metaheuristics
25.1.1 Applications and Examples
Table 25.1: Applications and Examples of Metaheuristics.
Area References
Art see Tables 28.4, 29.1, and 31.1
Astronomy see Table 28.4
Chemistry see Tables 28.4, 29.1, 30.1, 31.1, 32.1, 40.1, and 43.1
Combinatorial Problems [342, 651, 1033, 1034, 2283, 2992]; see also Tables 26.1, 27.1,
28.4, 29.1, 30.1, 31.1, 33.1, 34.1, 35.1, 36.1, 37.1, 38.1, 39.1,
40.1, 41.1, and 42.1
Computer Graphics see Tables 27.1, 28.4, 29.1, 30.1, 31.1, 33.1, and 36.1
Control see Tables 28.4, 29.1, 31.1, 32.1, and 35.1
Cooperation and Teamwork see Tables 28.4, 29.1, 31.1, and 37.1
Data Compression see Table 31.1
Data Mining see Tables 27.1, 28.4, 29.1, 31.1, 32.1, 34.1, 35.1, 36.1, 37.1,
and 39.1
Databases see Tables 26.1, 27.1, 28.4, 29.1, 31.1, 35.1, 36.1, 37.1, 39.1,
and 40.1
Decision Making see Table 31.1
Digital Technology see Tables 26.1, 28.4, 29.1, 30.1, 31.1, 34.1, and 35.1
Distributed Algorithms and Systems
[3001, 3031]; see also Tables 26.1, 27.1, 28.4, 29.1, 30.1, 31.1,
32.1, 33.1, 34.1, 35.1, 36.1, 37.1, 39.1, 40.1, and 41.1
E-Learning see Tables 28.4, 29.1, 37.1, and 39.1
Economics and Finances see Tables 27.1, 28.4, 29.1, 31.1, 33.1, 35.1, 36.1, 37.1, 39.1,
and 40.1
Engineering see Tables 26.1, 27.1, 28.4, 29.1, 30.1, 31.1, 32.1, 33.1, 34.1,
35.1, 36.1, 37.1, 39.1, 40.1, and 41.1
Function Optimization see Tables 26.1, 27.1, 28.4, 29.1, 30.1, 32.1, 33.1, 34.1, 35.1,
36.1, 39.1, 40.1, 41.1, and 43.1
Games see Tables 28.4, 29.1, 31.1, and 32.1
Graph Theory [3001, 3031]; see also Tables 26.1, 27.1, 28.4, 29.1, 30.1, 31.1,
33.1, 34.1, 35.1, 36.1, 37.1, 38.1, 39.1, 40.1, and 41.1
Healthcare see Tables 28.4, 29.1, 31.1, 35.1, 36.1, and 37.1
Image Synthesis see Table 28.4
Logistics [651, 1033, 1034]; see also Tables 26.1, 27.1, 28.4, 29.1, 30.1,
33.1, 34.1, 36.1, 37.1, 38.1, 39.1, 40.1, and 41.1
Mathematics see Tables 26.1, 27.1, 28.4, 29.1, 30.1, 31.1, 32.1, 33.1, 34.1,
and 35.1
Military and Defense see Tables 26.1, 27.1, 28.4, and 34.1
Motor Control see Tables 28.4, 33.1, 34.1, 36.1, and 39.1
Multi-Agent Systems see Tables 28.4, 29.1, 31.1, 35.1, and 37.1
Multiplayer Games see Table 31.1
Physics see Tables 27.1, 28.4, 29.1, 31.1, 32.1, and 34.1
Prediction see Tables 28.4, 29.1, 30.1, 31.1, and 35.1
Security see Tables 26.1, 27.1, 28.4, 29.1, 31.1, 35.1, 36.1, 37.1, 39.1,
and 40.1
Shape Synthesis see Table 26.1
Software [467, 3031]; see also Tables 26.1, 27.1, 28.4, 29.1, 30.1, 31.1,
33.1, 34.1, 36.1, 37.1, and 40.1
Sorting see Tables 28.4 and 31.1
Telecommunication see Tables 28.4, 37.1, and 40.1
Testing see Tables 28.4 and 31.3
Theorem Proving and Automatic Verification see Tables 29.1 and 40.1
Water Supply and Management see Table 28.4
Wireless Communication see Tables 26.1, 27.1, 28.4, 29.1, 31.1, 33.1, 34.1, 35.1, 36.1,
37.1, 39.1, and 40.1
25.1.2 Books
1. Metaheuristics for Scheduling in Industrial and Manufacturing Applications [2992]
2. Handbook of Metaheuristics [1068]
3. Fleet Management and Logistics [651]
4. Metaheuristics for Multiobjective Optimisation [1014]
5. Large-Scale Network Parameter Configuration using On-line Simulation Framework [3031]
6. Modern Heuristic Techniques for Combinatorial Problems [2283]
7. How to Solve It: Modern Heuristics [1887]
8. Search Methodologies: Introductory Tutorials in Optimization and Decision Support Techniques [453]
9. Handbook of Approximation Algorithms and Metaheuristics [1103]
25.1.3 Conferences and Workshops
Table 25.2: Conferences and Workshops on Metaheuristics.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
ICARIS: International Conference on Articial Immune Systems
See Table 10.2 on page 130.
MIC: Metaheuristics International Conference
History: 2009/07: Hamburg, Germany, see [1968]
2008/10: Atizapan de Zaragoza, Mexico, see [1038]
2007/06: Montreal, QC, Canada, see [1937]
2005/08: Vienna, Austria, see [2810]
2003/08: Kyoto, Japan, see [1321]
2001/07: Porto, Portugal, see [2294]
1999/07: Angra dos Reis, Rio de Janeiro, Brazil, see [2298]
1997/07: Sophia-Antipolis, France, see [2828]
1995/07: Breckenridge, CO, USA, see [2104]
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
GEWS: Grammatical Evolution Workshop
See Table 31.2 on page 385.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
GICOLAG: Global Optimization Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowa Konferencja Algorytmy Ewolucyjne i Optymalizacja Globalna (Polish National Conference on Evolutionary Algorithms and Global Optimization)
See Table 10.2 on page 134.
META: International Conference on Metaheuristics and Nature Inspired Computing
History: 2012/10: Port El Kantaoui, Sousse, Tunisia, see [2205]
2010/10: Djerba Island, Tunisia, see [873]
2008/10: Hammamet, Tunisia, see [1171]
2006/11: Hammamet, Tunisia, see [1170]
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
25.1.4 Journals
1. European Journal of Operational Research (EJOR) published by Elsevier Science Publishers
B.V. and North-Holland Scientific Publishers Ltd.
2. Computational Optimization and Applications published by Kluwer Academic Publishers and
Springer Netherlands
3. The Journal of the Operational Research Society (JORS) published by Operations Research
Society and Palgrave Macmillan Ltd.
4. Journal of Heuristics published by Springer Netherlands
5. SIAM Journal on Optimization (SIOPT) published by Society for Industrial and Applied
Mathematics (SIAM)
6. Journal of Global Optimization published by Springer Netherlands
7. Discrete Optimization published by Elsevier Science Publishers B.V.
8. Journal of Optimization Theory and Applications published by Plenum Press and Springer
Netherlands
9. Engineering Optimization published by Informa plc and Taylor and Francis LLC
10. Optimization and Engineering published by Kluwer Academic Publishers and Springer
Netherlands
11. Optimization Methods and Software published by Taylor and Francis LLC
12. Journal of Mathematical Modelling and Algorithms published by Springer Netherlands
13. Structural and Multidisciplinary Optimization published by Springer-Verlag GmbH
14. SIAG/OPT Views and News: A Forum for the SIAM Activity Group on Optimization pub-
lished by SIAM Activity Group on Optimization SIAG/OPT
Chapter 26
Hill Climbing
26.1 Introduction

Definition D26.1 (Local Search). A local search¹ algorithm solves an optimization problem by iteratively moving from one candidate solution to another in the problem space until a termination criterion is met [2356].

In other words, a local search algorithm in each iteration either keeps the current candidate solution or moves to one element from its neighborhood as defined in Definition D4.12 on page 86. Local search algorithms have two major advantages: they use very little memory (normally only a constant amount) and are often able to find solutions in large or infinite search spaces. These advantages come, of course, with large trade-offs in processing time and the risk of premature convergence.

Hill Climbing² (HC) [2356] is one of the oldest and simplest local search and optimization algorithms for single objective functions f. In Algorithm 1.1 on page 40, we gave an example of a simple and unstructured Hill Climbing algorithm. Here, we will generalize from Algorithm 1.1 to a more basic Monte Carlo Hill Climbing algorithm structure.
26.2 Algorithm

The Hill Climbing procedure usually starts either from a random point in the search space or from a point chosen according to some problem-specific criterion (visualized with the operator create() in Algorithm 26.1). Then, in a loop, the currently known best individual p⋆ is modified (with the mutate() operator) in order to produce one offspring p_new. If this new individual is better than its parent, it replaces it. Otherwise, it is discarded. This procedure is repeated until the termination criterion terminationCriterion() is met. In Listing 56.6 on page 735, we provide a Java implementation which resembles Algorithm 26.1.

In this sense, Hill Climbing is similar to an Evolutionary Algorithm (see Chapter 28) with a population size of 1 and truncation selection. As a matter of fact, it resembles a (1 + 1) Evolution Strategy. In Algorithm 30.7 on page 369, we provide such an optimization algorithm for G ⊆ R^n which additionally adapts the search step size.
¹ http://en.wikipedia.org/wiki/Local_search_(optimization) [accessed 2010-07-24]
² http://en.wikipedia.org/wiki/Hill_climbing [accessed 2007-07-03]
Algorithm 26.1: x ← hillClimbing(f)

Input: f: the objective function subject to minimization
Input: [implicit] terminationCriterion: the termination criterion
Data: p_new: the new element created
Data: p⋆: the (currently) best individual
Output: x: the best element found

1 begin
2   p⋆.g ← create()
3   p⋆.x ← gpm(p⋆.g)
4   while ¬terminationCriterion() do
5     p_new.g ← mutate(p⋆.g)
6     p_new.x ← gpm(p_new.g)
7     if f(p_new.x) ≤ f(p⋆.x) then p⋆ ← p_new
8   return p⋆.x
Although the search space G and the problem space X are most often the same in Hill Climbing, we distinguish them in Algorithm 26.1 for the sake of generality. It should further be noted that Hill Climbing can be implemented in a deterministic manner if, for each genotype g1 ∈ G, the set of adjacent points is finite (∀g1 ∈ G : |{g2 ∈ G : adjacent(g2, g1)}| ∈ N0).
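For readers who want to see the loop of Algorithm 26.1 in concrete form, the following is a minimal, self-contained Java sketch. It is our own illustration, not the book's Listing 56.6, and it assumes G = X = R^n, a uniform random create(), and a Gaussian single-coordinate mutate(); all class and method names are our own choices.

```java
import java.util.Random;
import java.util.function.Function;

/** A minimal sketch of Algorithm 26.1 with G = X = R^n (not the book's Listing 56.6). */
public class HillClimber {
  static final Random RND = new Random(42); // fixed seed for reproducibility

  /** Stand-in for create(): a uniform random point in [-10, 10]^n. */
  static double[] create(int n) {
    double[] g = new double[n];
    for (int i = 0; i < n; i++) g[i] = -10d + 20d * RND.nextDouble();
    return g;
  }

  /** Stand-in for mutate(): perturb one randomly chosen coordinate with Gaussian noise. */
  static double[] mutate(double[] g) {
    double[] h = g.clone();
    h[RND.nextInt(h.length)] += 0.1d * RND.nextGaussian();
    return h;
  }

  /** The loop of Algorithm 26.1: keep the offspring iff it is not worse (minimization). */
  static double[] hillClimbing(Function<double[], Double> f, int n, int maxSteps) {
    double[] best = create(n);
    double bestF = f.apply(best);
    for (int step = 0; step < maxSteps; step++) { // plays the role of terminationCriterion()
      double[] cand = mutate(best);
      double candF = f.apply(cand);
      if (candF <= bestF) { best = cand; bestF = candF; }
    }
    return best;
  }

  public static void main(String[] args) {
    // Sphere function as objective: minimum 0 at the origin.
    Function<double[], Double> sphere = x -> {
      double s = 0; for (double v : x) s += v * v; return s;
    };
    double[] x = hillClimbing(sphere, 2, 100000);
    System.out.println(sphere.apply(x));
  }
}
```

Replacing the acceptance condition `<=` with `<` in line 7 changes how the climber drifts across plateaus of equal objective value, which matters for some of the problems discussed later.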
26.3 Multi-Objective Hill Climbing

As illustrated in Algorithm 26.2, we can easily extend Hill Climbing algorithms with support for multi-objective optimization by using some of the methods from Section 3.5.2 on page 76 and some which we will later discuss in the context of Evolutionary Algorithms. This extended approach will then return a set X of the best candidate solutions found instead of a single point x from the problem space as done in Algorithm 26.1.

The set archive of currently known best individuals may contain more than one element. In each round, we pick one individual from the archive at random. We then use this individual to create the next point in the problem space with the mutate operator. The method updateOptimalSet(archive, p_new, cmp_f) checks whether the new candidate solution is prevailed by any element in archive. If not, it will enter the archive, and all individuals from archive which are now dominated are removed. Since an optimization problem may potentially have infinitely many global optima, we subsequently have to ensure that the archive does not grow too big: pruneOptimalSet(archive, as) makes sure that it contains at most as individuals and, if necessary, deletes some individuals. In case of Algorithm 26.2, the archive can grow to at most as + 1 individuals before the pruning, so at most one individual will be deleted. When the loop ends because the termination criterion terminationCriterion() was met, we extract all the phenotypes from the archived individuals (extractPhenotypes(archive)) and return them as the set X. An example implementation of this method in Java can be found in Listing 56.7 on page 737.
Algorithm 26.2: X ← hillClimbingMO(OptProb, as)

Input: OptProb: a tuple (X, f, cmp_f) of the problem space X, the objective functions f, and a comparator function cmp_f
Input: as: the maximum archive size
Input: [implicit] terminationCriterion: the termination criterion
Data: p_old: the individual used as parent for the next point
Data: p_new: the new individual generated
Data: archive: the set of best individuals known
Output: X: the set of the best elements found

1 begin
2   archive ← ()
3   p_new.g ← create()
4   p_new.x ← gpm(p_new.g)
5   while ¬terminationCriterion() do
      // check whether the new individual belongs into the archive
6     archive ← updateOptimalSet(archive, p_new, cmp_f)
      // if the archive is too big, delete one individual
7     archive ← pruneOptimalSet(archive, as)
      // pick a random element from the archive
8     p_old ← archive[randomUni[0, len(archive))]
9     p_new.g ← mutate(p_old.g)
10    p_new.x ← gpm(p_new.g)
11  return extractPhenotypes(archive)
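The archive bookkeeping of Algorithm 26.2, updateOptimalSet and pruneOptimalSet, can be sketched as follows. This is our own minimal version (not the book's Listing 56.7): individuals are reduced to plain objective vectors, all objectives are minimized, and the pruning strategy (dropping the oldest entry) is a deliberately crude stand-in for the density-based pruning discussed later in the book.

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of the archive maintenance of Algorithm 26.2 (our own names, not Listing 56.7).
 *  Individuals are objective vectors; smaller values are better in every objective. */
public class ParetoArchive {

  /** True iff a is no worse than b in all objectives and strictly better in at least one. */
  static boolean dominates(double[] a, double[] b) {
    boolean strictlyBetter = false;
    for (int i = 0; i < a.length; i++) {
      if (a[i] > b[i]) return false;
      if (a[i] < b[i]) strictlyBetter = true;
    }
    return strictlyBetter;
  }

  /** updateOptimalSet: insert p unless some archived element dominates it,
   *  and drop every archived element that p dominates. */
  static void updateOptimalSet(List<double[]> archive, double[] p) {
    for (double[] q : archive) if (dominates(q, p)) return; // p is prevailed, discard it
    archive.removeIf(q -> dominates(p, q));
    archive.add(p);
  }

  /** pruneOptimalSet: crude size cap, here simply dropping the oldest entries. */
  static void pruneOptimalSet(List<double[]> archive, int as) {
    while (archive.size() > as) archive.remove(0);
  }

  public static void main(String[] args) {
    List<double[]> archive = new ArrayList<>();
    updateOptimalSet(archive, new double[] {1.0, 3.0});
    updateOptimalSet(archive, new double[] {3.0, 1.0}); // incomparable, kept
    updateOptimalSet(archive, new double[] {0.5, 2.0}); // dominates (1, 3), which is removed
    pruneOptimalSet(archive, 4);
    System.out.println(archive.size()); // two non-dominated points remain
  }
}
```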
26.4 Problems in Hill Climbing

The major problem of Hill Climbing is premature convergence. We introduced this phenomenon in Chapter 13 on page 151: The optimization gets stuck at a local optimum and cannot discover the global optimum anymore.

Both previously specified versions of the algorithm are very likely to fall prey to this problem, since they always move towards improvements in the objective values and thus cannot reach any candidate solution which is situated behind an inferior point. They will only follow a path of candidate solutions if it is monotonously³ improving the objective function(s). The function used in Example E5.1 on page 96 is, for example, already problematic for Hill Climbing.

As previously mentioned, Hill Climbing in this form is a local search rather than a Global Optimization algorithm. By making a few slight modifications to the algorithm, however, it can become a valuable technique capable of performing Global Optimization:

1. A tabu-list which stores the recently evaluated elements can be added. By preventing the algorithm from visiting them again, a better exploration of the problem space X can be enforced. This technique is used in Tabu Search, which is discussed in Chapter 40 on page 489.

2. A way of preventing premature convergence is to not always transcend to the better candidate solution in each step. Simulated Annealing introduces a heuristic based on the physical model of cooling down molten metal to decide whether an inferior offspring should replace its parent or not. This approach is described in Chapter 27 on page 243.

³ http://en.wikipedia.org/wiki/Monotonicity [accessed 2007-07-03]
3. The Dynamic Hill Climbing approach by Yuret and de la Maza [3048] uses the last two visited points to compute unit vectors. With this technique, the directions are adjusted according to the structure of the problem space and a new coordinate frame is created which more likely points into the right direction.

4. Randomly restarting the search after so-and-so many steps is a crude but efficient method to explore wide ranges of the problem space with Hill Climbing. You can find it outlined in Section 26.5.

5. Using a reproduction scheme that does not necessarily generate candidate solutions directly neighboring p⋆.x, as done in Random Optimization, an optimization approach defined in Chapter 44 on page 505, may prove even more efficient.
26.5 Hill Climbing With Random Restarts

Hill Climbing with random restarts is also called Stochastic Hill Climbing (SH) or Stochastic gradient descent⁴ [844, 2561]. In the following, we will outline how random restarts can be implemented in Hill Climbing and serve as a measure against premature convergence. We will further combine this approach directly with the multi-objective Hill Climbing approach defined in Algorithm 26.2. The new algorithm incorporates two archives for optimal solutions: archive1, the overall optimal set, and archive2, the set of the best individuals in the current run. We additionally define the restart criterion shouldRestart(), which is evaluated in every iteration and determines whether or not the algorithm should be restarted. shouldRestart() could, for example, count the iterations performed or check whether any improvement was produced in the last ten iterations. After each single run, archive2 is incorporated into archive1, from which we extract and return the problem space elements at the end of the Hill Climbing process, as defined in Algorithm 26.3.

⁴ http://en.wikipedia.org/wiki/Stochastic_gradient_descent [accessed 2007-07-03]
Algorithm 26.3: X ← hillClimbingMORR(OptProb, as)

Input: OptProb: a tuple (X, f, cmp_f) of the problem space X, the objective functions f, and a comparator function cmp_f
Input: as: the maximum archive size
Input: [implicit] terminationCriterion: the termination criterion
Input: [implicit] shouldRestart: the restart criterion
Data: p_old: the individual used as parent for the next point
Data: p_new: the new individual generated
Data: archive1, archive2: the sets of best individuals known
Output: X: the set of the best elements found

1 begin
2   archive1 ← ()
3   while ¬terminationCriterion() do
4     archive2 ← ()
5     p_new.g ← create()
6     p_new.x ← gpm(p_new.g)
7     while ¬(shouldRestart() ∨ terminationCriterion()) do
        // check whether the new individual belongs into the archive
8       archive2 ← updateOptimalSet(archive2, p_new, cmp_f)
        // if the archive is too big, delete one individual
9       archive2 ← pruneOptimalSet(archive2, as)
        // pick a random element from the archive
10      p_old ← archive2[randomUni[0, len(archive2))]
11      p_new.g ← mutate(p_old.g)
12      p_new.x ← gpm(p_new.g)
      // check whether good individuals were found in this run
13    archive1 ← updateOptimalSetN(archive1, archive2, cmp_f)
      // if the archive is too big, delete some individuals
14    archive1 ← pruneOptimalSet(archive1, as)
15  return extractPhenotypes(archive1)
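A shouldRestart() criterion of the kind suggested above, namely restarting when no improvement occurred within a fixed number of iterations, can be sketched as follows. For brevity this is a single-objective version without the two Pareto archives; the `patience` parameter and all other names are our own choices for this illustration.

```java
import java.util.Random;
import java.util.function.Function;

/** Sketch: single-objective Hill Climbing with restarts, where a restart fires after
 *  `patience` consecutive iterations without improvement (a stand-in for shouldRestart()). */
public class RestartHillClimber {
  static final Random RND = new Random(7); // fixed seed for reproducibility

  static double[] randomPoint(int n) {
    double[] g = new double[n];
    for (int i = 0; i < n; i++) g[i] = -10d + 20d * RND.nextDouble();
    return g;
  }

  static double[] climb(Function<double[], Double> f, int n, int totalSteps, int patience) {
    double[] best = null;                 // overall best, playing the role of archive1
    double bestF = Double.POSITIVE_INFINITY;
    double[] cur = randomPoint(n);        // best of the current run, like archive2
    double curF = f.apply(cur);
    int sinceImprovement = 0;
    for (int step = 0; step < totalSteps; step++) {
      if (sinceImprovement >= patience) { // shouldRestart() fired: begin a fresh run
        cur = randomPoint(n); curF = f.apply(cur); sinceImprovement = 0;
      }
      double[] cand = cur.clone();        // Gaussian single-coordinate mutation
      cand[RND.nextInt(n)] += 0.1d * RND.nextGaussian();
      double candF = f.apply(cand);
      if (candF < curF) { cur = cand; curF = candF; sinceImprovement = 0; }
      else sinceImprovement++;
      if (curF < bestF) { best = cur; bestF = curF; } // merge run result into overall best
    }
    return best;
  }

  public static void main(String[] args) {
    Function<double[], Double> sphere = x -> {
      double s = 0; for (double v : x) s += v * v; return s;
    };
    System.out.println(sphere.apply(climb(sphere, 2, 50000, 200)));
  }
}
```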
26.6 General Information on Hill Climbing
26.6.1 Applications and Examples
Table 26.1: Applications and Examples of Hill Climbing.
Area References
Combinatorial Problems [16, 502, 775, 1261, 1486, 1532, 1553, 1558, 1781, 1971, 2656,
2663, 2745, 3027]; see also Table 26.3
Databases [1781, 2745]
Digital Technology [1600, 1665]
Distributed Algorithms and Systems [15, 410, 775, 1532, 1558, 2058, 2165, 2286, 2315, 2554, 2656, 2663, 2903, 2994, 3027, 3085, 3106]
Engineering [410, 1486, 1600, 1665, 2058, 2286, 2656, 2663]
Function Optimization [1927, 3048]
Graph Theory [15, 410, 775, 1486, 1532, 1558, 2058, 2165, 2237, 2286, 2315,
2554, 2596, 2656, 2663, 3027, 3085, 3106]
Logistics [1553, 1971]
Mathematics [1674]
Military and Defense [775]
Security [1486, 2165, 2554]
Shape Synthesis [887]
Software [1486, 1665, 2554, 2994]
Wireless Communication [1486, 2237, 2596]
26.6.2 Books
1. Artificial Intelligence: A Modern Approach [2356]
2. Introduction to Stochastic Search and Optimization [2561]
3. Evolution and Optimum Seeking [2437]
26.6.3 Conferences and Workshops
Table 26.2: Conferences and Workshops on Hill Climbing.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
GICOLAG: Global Optimization Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowa Konferencja Algorytmy Ewolucyjne i Optymalizacja Globalna (Polish National Conference on Evolutionary Algorithms and Global Optimization)
See Table 10.2 on page 134.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
26.6.4 Journals
1. Journal of Heuristics published by Springer Netherlands
26.7 Raindrop Method
Only ve years ago, Bettinger and Zhu [290, 3083] contributed a new search heuristic for
constrained optimization: the Raindrop Method, which they used for forest planning prob-
lems. In the original description of the algorithm, the search and problem space are identical
(G = X). The algorithm is based on precise knowledge of the components of the candidate
solutions and on how their interaction inuences the validity of the constraints. It works as
follows:
1. The Raindrop Method starts out with a single, valid candidate solution x

(i. e., one


that violates none of the constraints). This candidate may be found with a random
search process or provided by a human operator.
2. Create a copy x of x

. Set the iteration counter to a user-dened maximum value


max of iterations for modifying and correcting x that are allowed without improve-
ments before reverting to x

.
3. Perturb x by randomly modifying one of its components. Let us refer to this randomly
selected component as s. This modication may lead to constraint violations.
4. If no constraint was violated, continue at Point 11, otherwise proceed as follows.
5. Set a distance value d to 0.
6. Create a list L of the components of x that lead to constraint violations. Here we
make use the knowledge of the interaction of components and constraints.
7. FromL, we pick the component c which is physically closest to s, that is, the component
with the minimum distance dist(c, s). In the original application of the Raindrop
Method, physically close was properly dened due to the fact that the candidate
solutions were basically two-dimensional maps. For applications dierent from forest
planning, appropriate denitions for the distance measure have to be supplied.
8. Set d = dist(c, s).
9. Find the next best value for c which does not introduce any new constraint violations in components f which are at least as close to s, i. e., with dist(f, s) ≤ dist(c, s). This change may, however, cause constraint violations in components farther away from s. If no such change is possible, go to Point 13. Otherwise, modify the component c in x.

10. Go back to Point 4.

11. If x is better than x⋆, set x⋆ = x. Otherwise, decrease the iteration counter τ.

12. If the termination criterion has not yet been met, go back to Point 3 if τ > 0 and to Point 2 if τ = 0.

13. Return x⋆ to the user.

The iteration counter τ here is used to allow the search to explore solutions more distant from the current optimum x⋆. The higher the initial value τmax specified by the user, the more iterations without improvement are allowed before reverting x to x⋆. The name "Raindrop Method" comes from the fact that the constraint violations caused by the perturbation of the valid solution radiate away from the modified component s like waves on a water surface radiate away from the point where a raindrop hits.
26.7.1 General Information on Raindrop Method
26.7.1.1 Applications and Examples
Table 26.3: Applications and Examples of Raindrop Method.
Area References
Combinatorial Problems [290, 3083]
26.7.1.2 Books
1. Nature-Inspired Algorithms for Optimisation [562]
26.7.1.3 Conferences and Workshops
Table 26.4: Conferences and Workshops on Raindrop Method.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
GICOLAG: Global Optimization Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowa Konferencja Algorytmy Ewolucyjne i Optymalizacja Globalna (Polish National Conference on Evolutionary Algorithms and Global Optimization)
See Table 10.2 on page 134.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
26.7.1.4 Journals
1. Journal of Heuristics published by Springer Netherlands
Tasks T26
64. Use the implementation of Hill Climbing given in Listing 56.6 on page 735 (or an equivalent own one as in Task 65) in order to find approximations of the global minima of

f1(x) = 0.5 - (1/(2n)) * Σ_{i=1}^{n} cos(ln(1 + |x_i|))   (26.1)

f2(x) = 0.5 - 0.5 * cos(ln(1 + Σ_{i=1}^{n} |x_i|))   (26.2)

f3(x) = 0.01 * max_{i=1..n} x_i²   (26.3)

in the real interval [-10, 10]^n.
(a) Run ten experiments for n = 2, n = 5, and n = 10 each. Note all parameter settings of the algorithm you used (the number of algorithm steps, operators, etc.), the results, the vectors discovered, and their objective values. [20 points]
(b) Compute the median (see Section 53.2.6 on page 665), the arithmetic mean (see Section 53.2.2, especially Definition D53.32 on page 661), and the standard deviation (see Section 53.2.3, especially Section 53.2.3.2 on page 663) of the achieved objective values in the ten runs. [10 points]
(c) The three objective functions are designed so that their value is always in [0, 1] in the interval [-10, 10]^n. Try to order the optimization problems according to the median of the achieved objective values. Which problems seem to be the hardest? What is the influence of the dimension n on the achieved objective values? [10 points]
(d) If you compare f1 and f2, you find striking similarities. Does one of the functions seem to be harder? If so, which one, and what, in your opinion, makes it harder? [5 points]
[45 points]
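The three benchmark functions above can be transcribed directly into code. The following sketch is our own (not one of the book's listings) and may serve as a starting point for Task 64; at the origin all three functions evaluate to 0, and on [-10, 10]^n they stay within [0, 1] as stated in subtask (c).

```java
/** Sketch: direct transcriptions of f1, f2, f3 from Task 64 (our own code, not the
 *  book's listings). All three map [-10, 10]^n into [0, 1] with minimum 0 at the origin. */
public class Task64Functions {

  /** f1(x) = 0.5 - (1/(2n)) * sum_i cos(ln(1 + |x_i|)) */
  static double f1(double[] x) {
    double sum = 0d;
    for (double xi : x) sum += Math.cos(Math.log(1d + Math.abs(xi)));
    return 0.5d - sum / (2d * x.length);
  }

  /** f2(x) = 0.5 - 0.5 * cos(ln(1 + sum_i |x_i|)) */
  static double f2(double[] x) {
    double sum = 0d;
    for (double xi : x) sum += Math.abs(xi);
    return 0.5d - 0.5d * Math.cos(Math.log(1d + sum));
  }

  /** f3(x) = 0.01 * max_i x_i^2 */
  static double f3(double[] x) {
    double max = 0d;
    for (double xi : x) max = Math.max(max, xi * xi);
    return 0.01d * max;
  }

  public static void main(String[] args) {
    double[] origin = new double[5]; // all zeros
    // at the origin: ln(1) = 0 and cos(0) = 1, hence f1 = f2 = f3 = 0
    System.out.println(f1(origin) + " " + f2(origin) + " " + f3(origin));
  }
}
```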
65. Implement the Hill Climbing procedure Algorithm 26.1 on page 230 in a programming language different from Java. You can specialize your implementation for solving only continuous numerical problems, i. e., you do not need to follow the general, interface-based approach we adhere to in Listing 56.6 on page 735. Apply your implementation to the Sphere function (see Section 50.3.1.1 on page 581 and Listing 57.1 on page 859) for n = 10 dimensions in order to test its functionality.
[20 points]
66. Extend the Hill Climbing algorithm implementation given in Listing 56.6 on page 735 to Hill Climbing with restarts, i. e., to a Hill Climbing which restarts the search procedure every n steps (while preserving the results of the optimization process in an additional variable). You may use the discussion in Section 26.5 on page 232 as reference. However, you do not need to provide the multi-objective optimization capabilities of Algorithm 26.3 on page 233. Instead, only extend the single-objective Hill Climbing method in Listing 56.6 on page 735 or provide a completely new implementation of Hill Climbing with restarts in a programming language of your choice. Test your implementation on one of the previously discussed benchmark functions for real-valued, continuous optimization (see, e.g., Task 64).
[25 points]
67. Let us now solve a modified version of the bin packing problem from Example E1.1 on page 25. In Table 26.5, we give four datasets from Data Set 1 in [2422]. Each dataset stands for one bin packing problem and provides the bin size b, the number n of objects to be put into the bins, and the sizes a_i of the objects (with i ∈ 1..n). Let the problem space be the set of all possible assignments of objects to bins, where the number k of bins is not provided. For each of the four bin packing problems, find a candidate solution x which contains an assignment of objects to as few as possible bins of the given size b. In other words: For each problem, we do know the size b of the bin, the number n of objects, and the object sizes a. We want to find out how many bins we need at least to store all the objects (k ∈ N1, the objective, subject to minimization) and how we can store the objects in these bins (x ∈ X).
N1C1W1 A: b = 100, n = 50, k⋆ = 25
a_i = [50, 100, 99, 99, 96, 96, 92, 92, 91, 88, 87, 86, 85, 76, 74, 72, 69, 67, 67, 62, 61, 56, 52, 51, 49, 46, 44, 42, 40, 40, 33, 33, 30, 30, 29, 28, 28, 27, 25, 24, 23, 22, 21, 20, 17, 14, 13, 11, 10, 7, 7, 3]

N1C2W2 D: b = 120, n = 50, k⋆ = 24
a_i = [50, 120, 99, 98, 98, 97, 96, 90, 88, 86, 82, 82, 80, 79, 76, 76, 76, 74, 69, 67, 66, 64, 62, 59, 55, 52, 51, 51, 50, 49, 44, 43, 41, 41, 41, 41, 41, 37, 35, 33, 32, 32, 31, 31, 31, 30, 29, 23, 23, 22, 20, 20]

N2C1W1 A: b = 100, n = 100, k⋆ = 48
a_i = [100, 100, 99, 97, 95, 95, 94, 92, 91, 89, 86, 86, 85, 84, 80, 80, 80, 80, 80, 79, 76, 76, 75, 74, 73, 71, 71, 69, 65, 64, 64, 64, 63, 63, 62, 60, 59, 58, 57, 54, 53, 52, 51, 50, 48, 48, 48, 46, 44, 43, 43, 43, 43, 42, 41, 40, 40, 39, 38, 38, 38, 38, 37, 37, 37, 37, 36, 35, 34, 33, 32, 30, 29, 28, 26, 26, 26, 24, 23, 22, 21, 21, 19, 18, 17, 16, 16, 15, 14, 13, 12, 12, 11, 9, 9, 8, 8, 7, 6, 6, 5, 1]

N2C3W2 C: b = 150, n = 100, k⋆ = 41
a_i = [100, 150, 100, 99, 99, 98, 97, 97, 97, 96, 96, 95, 95, 95, 94, 93, 93, 93, 92, 91, 89, 88, 87, 86, 84, 84, 83, 83, 82, 81, 81, 81, 78, 78, 75, 74, 73, 72, 72, 71, 70, 68, 67, 66, 65, 64, 63, 63, 62, 60, 60, 59, 59, 58, 57, 56, 56, 55, 54, 51, 49, 49, 48, 47, 47, 46, 45, 45, 45, 45, 44, 44, 44, 44, 43, 41, 41, 40, 39, 39, 39, 37, 37, 37, 35, 35, 34, 32, 31, 31, 30, 28, 26, 25, 24, 24, 23, 23, 22, 21, 20, 20]

Table 26.5: Bin Packing test data from [2422], Data Set 1.
Solve this optimization problem with Hill Climbing. It is not the goal to solve it to optimality, but to get solutions with Hill Climbing which are as good as possible. For the sake of completeness and in order to allow comparisons, we provide the lowest known number k⋆ of bins for the problems given by Scholl and Klein [2422] in the last column of Table 26.5.

In order to solve this optimization problem, you may proceed as discussed in Chapter 7 on page 109, except that you have to use Hill Climbing. Below, we give some suggestions, but you do not necessarily need to follow them:

(a) A possible problem space X for n objects would be the set Π(1..n) of all permutations of the first n natural numbers. A candidate solution x ∈ X, i. e., such a permutation, could describe the order in which the objects have to be put into bins. If a bin is filled to its capacity (or the current object does not fit into the bin anymore), a new bin is needed. This process is repeated until all objects have been packed into bins.
(b) For the objective function, there exist many different choices.
i. As objective function f1, the number k(x) of bins required for storing the objects according to a permutation x ∈ X could be used. This value can easily be computed as discussed in our outline of the problem space X above. You can find an example implementation of this function in Listing 57.10 on page 892.
ii. Alternatively, we could also sum up the squares of the free space left over in each bin as f2. It is easy to see that if we put each object into a single bin, the free capacity left over is maximal. If there was a way to perfectly fill each bin with objects, the free capacity would become 0. Such a configuration would also require the least number of bins, since using fewer bins would mean to not store all objects. By summing up the squares of the free spaces, we punish bins with lots of free space more severely. In other words, we reward solutions where the bins are filled equally. Naturally, one would feel that such solutions are good. However, this approach may prevent us from finding the optimum if it is a solution where the objects are distributed unevenly. We implement this objective function as Listing 57.11 on page 893.
iii. Also, a combination of the above would be possible, such as f3(x) = f1(x) - 1/(1 + f2(x)), for example. An example implementation of this function is given as Listing 57.12 on page 894.
iv. Then again, we could consider the total number of bins required but additionally try to maximize the space lastWaste(x) left over in the last bin, i. e., set f4(x) = f1(x) + 1/(1 + lastWaste(x)). The idea here would be that the dominating factor of the objective should still be the number of bins required. But, if two solutions require the same amount of bins, we prefer the one where more space is free in the last bin. If this waste in the last bin is increased more and more, this means that the volume of objects in that bin is reduced. It may eventually become 0, which would equal leaving the bin empty. The empty bin would not be used and hence, we would be saving one bin. An implementation of such a function for the suggested problem space is given at Listing 57.13 on page 896.
v. Finally, instead of only considering the remaining empty space in the last bin, we could consider the maximum empty space over all bins. The idea is pretty similar: If we have a bin which is almost empty, maybe we can make it disappear with just a few moves (object swaps). Hence, we could prefer solutions where there is one bin with much free space. In order to find such solutions, we go over all the bins required by the solution and check how much space is free in them. We then take the one bin with the most free space mf and add this space as a factor to the bin count in the objective function, similar to what we did in the previous function: f5(x) = f1(x) + 1/(1 + mf(x)). An implementation of this function for the suggested problem space is given at Listing 57.14 on page 897.
(c) If the suggested problem space is used, it could also be used as search space, i. e., G = X. Then, a simple unary search operation (mutation) would swap the elements of the permutation, such as those provided in Listing 56.38 on page 806 and Listing 56.39. A creation operation could draw a permutation according to the uniform distribution, as shown in Listing 56.37.
Use the Hill Climbing Algorithm 26.1 given in Listing 56.6 on page 735 for solving the
task and compare its results with the results produced by equivalent random walks
and random sampling (see Chapter 8 and Section 56.1.2 on page 729). For your
experiments, you may use the program given in Listing 57.15 on page 899.
(a) For each algorithm or objective function you test, perform at least 30 independent runs to get reliable results. (Remember: our Hill Climbing algorithm is randomized and can produce different results in each run.)
(b) For each of the four problems, write down the best assignment of objects to bins that you have found (along with the number of bins required). You may add your own ideas as well.
(c) Discuss which of the objective functions and algorithm or algorithm conguration
performed best and give reasons.
(d) Also provide all your source code.
[40 points]
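The layered objective functions suggested above (a primary bin count f1 plus a tie-breaking term in (0, 1]) can be sketched in Java. All names below (BinObjective, binsUsed, maxFreeSpace) are illustrative assumptions, not the book's API; the book's own implementations are the ones referenced as Listings 57.12 to 57.14.

```java
// Hedged sketch of f5(x) = f1(x) + 1/(1 + mf(x)): the bin count plus a
// secondary term in (0, 1] rewarding a large maximum free space, so the
// bin count always dominates and the free space only breaks ties.
class BinObjective {

  /** f1: the number of bins used by the packing. */
  static int binsUsed(int[][] bins) {
    return bins.length;
  }

  /** mf: the largest free space over all bins, given their capacity. */
  static int maxFreeSpace(int[][] bins, int capacity) {
    int mf = 0;
    for (int[] bin : bins) {
      int used = 0;
      for (int size : bin) used += size;
      mf = Math.max(mf, capacity - used);
    }
    return mf;
  }

  /** f5: bins used, plus a tie-breaker preferring one almost-empty bin. */
  static double f5(int[][] bins, int capacity) {
    return binsUsed(bins) + 1.0 / (1 + maxFreeSpace(bins, capacity));
  }
}
```

A packing of objects {5, 5} and {3} into two bins of capacity 10 yields f5 = 2 + 1/(1 + 7) = 2.125, so it is preferred over any two-bin packing with less free space concentrated in one bin.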
Chapter 27
Simulated Annealing
27.1 Introduction
In 1953, Metropolis et al. [1874] developed a Monte Carlo method for calculating the properties of any substance which may be considered as composed of interacting individual molecules. With this new "Metropolis procedure" stemming from statistical mechanics, the manner in which metal crystals reconfigure and reach equilibriums in the process of annealing can be simulated, for example. This inspired Kirkpatrick et al. [1541] to develop the Simulated Annealing¹ (SA) algorithm for Global Optimization in the early 1980s and to apply it to various combinatorial optimization problems. Independently around the same time, Černý [516] employed a similar approach to the Traveling Salesman Problem. Even earlier, Jacobs et al. [1426, 1427] used a similar method to optimize program code. Simulated Annealing is a general-purpose optimization algorithm that can be applied to arbitrary search and problem spaces. Like simple Hill Climbing, Simulated Annealing only needs a single initial individual as starting point and a unary search operation and thus belongs to the family of local search algorithms.
27.1.1 Metropolis Simulation
In metallurgy and material science, annealing² [1307] is a heat treatment of material with the goal of altering properties such as its hardness. Metal crystals have small defects, dislocations of atoms which weaken the overall structure as sketched in Figure 27.1. It should be noted that Figure 27.1 is a very rough sketch and the following discussion is very rough and simplified, too.³ In order to decrease the defects, the metal is heated to, for instance, 0.4 times its melting temperature, i. e., well below its melting point but usually already a few hundred degrees Celsius. Adding heat means adding energy to the atoms in the metal. Due to the added energy, the electrons are freed from the atoms in the metal (which now become ions). Also, the higher energy of the ions makes them move and increases their diffusion rate. The ions may now leave their current position and assume better positions. During this process, they may also temporarily reside in configurations of higher energy, for example if two ions get close to each other. The movement decreases as the temperature of the metal sinks. The structure of the crystal is reformed as the material cools down and
¹ http://en.wikipedia.org/wiki/Simulated_annealing [accessed 2007-07-03]
² http://en.wikipedia.org/wiki/Annealing_(metallurgy) [accessed 2008-09-19]
³ I assume that a physicist or chemist would beat me for these simplifications. If you are a physicist or chemist and have suggestions for improving/correcting the discussion, please drop me an email.
[Figure: left to right — metal structure with defects; increased movement of atoms; ions in low-energy structure with fewer defects. Temperature axis: T_s = getTemperature(1) falling to 0 K = getTemperature(∞). The physical role model of Simulated Annealing.]
Figure 27.1: A sketch of annealing in metallurgy as role model for Simulated Annealing.
approaches its equilibrium state. During this process, the dislocations in the metal crystal can disappear. When annealing metal, the initial temperature must not be too low and the cooling must be done sufficiently slowly so as to avoid stopping the ions too early and to prevent the system from getting stuck in a meta-stable non-crystalline state representing a local minimum of energy.
In the Metropolis simulation of the annealing process, each set of positions pos of all atoms of a system is weighted by its Boltzmann probability factor e^(−E(pos)/(k_B·T)), where E(pos) is the energy of the configuration pos, T is the temperature measured in Kelvin, and k_B is Boltzmann's constant⁴:

k_B ≈ 1.380 650 524 · 10^−23 J/K   (27.1)
The Metropolis procedure is a model of this physical process which could be used to simulate a collection of atoms in thermodynamic equilibrium at a given temperature. A new nearby geometry pos_(i+1) was generated as a random displacement from the current geometry pos_i of the atoms in each iteration. The energy of the resulting new geometry is computed and ΔE, the energetic difference between the current and the new geometry, was determined. The probability that this new geometry is accepted, P(ΔE), is defined in Equation 27.3.

ΔE = E(pos_(i+1)) − E(pos_i)   (27.2)

P(ΔE) = { e^(−ΔE/(k_B·T))  if ΔE > 0
        { 1                 otherwise   (27.3)
Systems in chemistry and physics always strive to attain low-energy states, i. e., the lower the energy, the more stable is the system. Thus, if the new nearby geometry has a lower energy level, the change of atom positions is accepted in the simulation. Otherwise, a uniformly distributed random number r = randomUni[0, 1) is drawn and the step will only be realized if r is less than or equal to the Boltzmann probability factor, i. e., r ≤ P(ΔE). At high temperatures T, this factor is very close to 1, leading to the acceptance of many uphill steps.
⁴ http://en.wikipedia.org/wiki/Boltzmann%27s_constant [accessed 2007-07-03]
As the temperature falls, the proportion of accepted steps which would increase the energy level decreases. Now the system will not escape local regions anymore and (hopefully) comes to rest in the global energy minimum at temperature T = 0 K.
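The acceptance rule just described can be sketched compactly in Java. This is a hedged illustration only: it already uses the simplified optimization form without k_B that Section 27.2.1 motivates (in the physical simulation, the exponent would be −ΔE/(k_B·T)), and the class and method names are assumptions, not the book's Listing 56.8.

```java
import java.util.Random;

// Sketch of the Metropolis acceptance criterion: always accept
// improvements; accept a worsening of dE > 0 with probability e^(-dE/T).
class Metropolis {
  static final Random RAND = new Random();

  static boolean accept(double dE, double T) {
    if (dE <= 0) return true;           // better or equal: always accept
    if (T <= 0) return false;           // frozen system: improvements only
    return RAND.nextDouble() < Math.exp(-dE / T); // uphill: Boltzmann factor
  }
}
```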
27.2 Algorithm
The abstraction of the Metropolis simulation method in order to allow arbitrary problem spaces is straightforward: the energy computation E(pos_i) is replaced by an objective function f or maybe even by the result v of a fitness assignment process. Algorithm 27.1 illustrates the basic course of Simulated Annealing, which you can find implemented in Listing 56.8 on page 740. Without loss of generality, we reuse the definitions from Evolutionary Algorithms for the search operations and set Op = {create, mutate}.
Algorithm 27.1: x ← simulatedAnnealing(f)

Input: f: the objective function to be minimized
Data: p_new: the newly generated individual
Data: p_cur: the point currently investigated in problem space
Data: T: the temperature of the system which is decreased over time
Data: t: the current time index
Data: ΔE: the energy (objective value) difference of p_cur.x and p_new.x
Output: x: the best element found

 1 begin
 2   p_cur.g ← create()
 3   p_cur.x ← gpm(p_cur.g)
 4   x ← p_cur.x
 5   t ← 1
 6   while ¬terminationCriterion() do
 7     T ← getTemperature(t)
 8     p_new.g ← mutate(p_cur.g)
 9     p_new.x ← gpm(p_new.g)
10     ΔE ← f(p_new.x) − f(p_cur.x)
11     if ΔE ≤ 0 then
12       p_cur ← p_new
13       if f(p_cur.x) < f(x) then x ← p_cur.x
14     else
15       if randomUni[0, 1) < e^(−ΔE/T) then p_cur ← p_new
16     t ← t + 1
17   return x
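A compact Java rendering of Algorithm 27.1 might look as follows. The concrete choices made here — a real-valued search space with G = X (so the genotype-phenotype mapping is the identity), a Gaussian mutation, an exponential schedule, and the constants T_s = 100 and ε = 0.05 — are illustrative assumptions only; the book's own implementation is Listing 56.8.

```java
import java.util.Random;
import java.util.function.DoubleUnaryOperator;

// Sketch of Algorithm 27.1 for the simple case G = X = R (doubles).
class SimulatedAnnealing {
  static final Random RAND = new Random();

  /** Assumed exponential schedule: T(t) = 100 * 0.95^t. */
  static double getTemperature(int t) {
    return 100.0 * Math.pow(0.95, t);
  }

  /** Assumed unary search operation: add Gaussian noise. */
  static double mutate(double g) {
    return g + RAND.nextGaussian();
  }

  static double solve(DoubleUnaryOperator f, double start, int maxT) {
    double cur = start, best = cur;
    for (int t = 1; t <= maxT; t++) {
      double T = getTemperature(t);
      double next = mutate(cur);
      double dE = f.applyAsDouble(next) - f.applyAsDouble(cur);
      if (dE <= 0) {                         // better or equal: accept
        cur = next;
        if (f.applyAsDouble(cur) < f.applyAsDouble(best)) best = cur;
      } else if (T > 0 && RAND.nextDouble() < Math.exp(-dE / T)) {
        cur = next;                          // worse: Boltzmann acceptance
      }
    }
    return best;                             // best element found
  }
}
```

Minimizing, say, f(x) = (x − 3)² from a remote starting point then mirrors the behavior discussed below: an almost random walk while T is high, and hill climbing once the system has cooled.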
Let us discuss in a very simplified way how this algorithm works. During the course of the optimization process, the temperature T decreases (see Section 27.3). If the temperature is high in the beginning, this means that, different from a Hill Climbing process, Simulated Annealing also accepts many worse individuals. Better candidate solutions are always accepted. In summary, the chance that the search will transcend to a new candidate solution, regardless of whether it is better or worse, is high. In its behavior, Simulated Annealing is then a bit similar to a random walk. As the temperature decreases, the chance that worse individuals are accepted decreases. If T approaches 0, this probability becomes 0 as well and Simulated Annealing becomes a pure Hill Climbing algorithm. The rationale of having an algorithm that initially behaves like a random walk and then becomes a Hill Climber is to make use of the benefits of the two extreme cases:
A random walk is a maximally-explorative and minimally-exploitive algorithm. It can never get stuck at a local optimum and will explore all areas in the search space with the same probability. Yet, it cannot exploit a good solution, i. e., it is unable to make small modifications to an individual in a targeted way in order to check if there are better solutions nearby.
A Hill Climber, on the other hand, is minimally-explorative and maximally-exploitive. It always tries to modify and improve on the current solution and never investigates far-away areas of the search space.
Simulated Annealing combines both properties: it performs a lot of exploration in order to discover interesting areas of the search space. The hope is that, during this time, it will discover the basin of attraction of the global optimum. The more the optimization process becomes like a Hill Climber, the less likely it will move away from a good solution to a worse one and will instead concentrate on exploiting the solution.
27.2.1 No Boltzmann's Constant
Notice the difference between Line 15 in Algorithm 27.1 and Equation 27.3 on page 244: Boltzmann's constant has been removed. The reason is that with Simulated Annealing, we do not want to simulate a physical process but instead aim to solve an optimization problem. The nominal value of Boltzmann's constant, given in Equation 27.1 (something around 1.4 · 10^−23), has quite a heavy impact on the Boltzmann probability. In a combinatorial problem, where the differences in the objective values between two candidate solutions are often discrete, this becomes obvious: In order to achieve an acceptance probability of 0.001 for a solution which is 1 worse than the best known one, one would need a temperature T of:
P = 0.001 = e^(−1/(k_B·T))   (27.4)
ln 0.001 = −1/(k_B·T)   (27.5)
−1/ln 0.001 = k_B·T   (27.6)
−1/(k_B · ln 0.001) = T   (27.7)
T ≈ 1.05 · 10^22 K   (27.8)
Making such a setting, i. e., using a nominal value of the scale of 1 · 10^22 or more, for such a problem is not obvious and hard to justify. Leaving the constant k_B away leads to
P = 0.001 = e^(−1/T)   (27.9)
ln 0.001 = −1/T   (27.10)
−1/ln 0.001 = T   (27.11)
T ≈ 0.145   (27.12)
In other words, a temperature of around 0.15 would be sufficient to achieve an acceptance probability of around 0.001. Such a setting is much easier to grasp and allows the algorithm user to relate objective values, objective value differences, and probabilities in a more straightforward way. Since physical correctness is not a goal in optimization (while finding good solutions in an easy way is), k_B is thus eliminated from the computations in the Simulated Annealing algorithm.
If you take a close look at Equation 27.8 and Equation 27.12, you will spot another difference: Equation 27.8 contains the unit K whereas Equation 27.12 does not. The reason is that the absence of k_B in Equation 27.12 removes the physical context and hence, using the unit Kelvin is no longer appropriate. Technically, we should also no longer speak about temperature, but for the sake of simplicity and compliance with literature, we will keep that name during our discussion of Simulated Annealing.
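The derivation above can also be turned around: given a desired acceptance probability P (with 0 < P < 1) for a worsening of ΔE > 0, solving e^(−ΔE/T) = P yields T = −ΔE/ln P. A minimal sketch (the class and method names are hypothetical):

```java
// Sketch: the temperature at which a worsening of dE is accepted with
// probability p, using the simplified rule without k_B (cf. Eq. 27.12).
class TemperatureForProbability {
  static double temperature(double dE, double p) {
    return -dE / Math.log(p);   // valid for dE > 0 and 0 < p < 1
  }
}
```

For ΔE = 1 and P = 0.001, this reproduces the T ≈ 0.145 of Equation 27.12.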
27.2.2 Convergence
It has been shown that Simulated Annealing algorithms with appropriate cooling strategies will asymptotically converge to the global optimum [1403, 2043]. The probability that the individual p_cur as used in Algorithm 27.1 represents a global optimum approaches 1 for t → ∞. This feature is closely related to the property of ergodicity⁵ of an optimization algorithm:

Definition D27.1 (Ergodicity). An optimization algorithm is ergodic if its result x ∈ X (or its set of results X ⊆ X) does not depend on the point(s) in the search space at which the search process is started.
Nolte and Schrader [2043] and van Laarhoven and Aarts [2776] provide lists of the most important works on the issue of guaranteed convergence to the global optimum. Nolte and Schrader [2043] further list the research providing deterministic, non-infinite boundaries for the asymptotic convergence by Anily and Federgruen [111], Gidas [1054], Nolte and Schrader [2042], and Mitra et al. [1917]. In the same paper, they introduce a significantly lower bound. They conclude that, in general, p_cur.x in the Simulated Annealing process is a globally optimal candidate solution with a high probability after a number of iterations which exceeds the cardinality of the problem space. This is a boundary which is, well, slow from the viewpoint of a practitioner [1403]: In other words, it would be faster to enumerate all possible candidate solutions in order to find the global optimum with certainty than applying Simulated Annealing. Indeed, this makes sense: If this was not the case, Simulated Annealing could be used to solve NP-complete problems in sub-exponential time. . .
This does not mean that Simulated Annealing is generally slow. It only needs that much time if we insist on the convergence to the global optimum. Speeding up the cooling process will result in a faster search, but voids the guaranteed convergence on the other hand. Such sped-up algorithms are called Simulated Quenching (SQ) [1058, 1402, 2405].
Quenching⁶, too, is a process in metallurgy. Here, a work piece is first heated and then cooled rapidly, often by holding it into a water bath. This way, steel objects can be hardened, for example.
27.3 Temperature Scheduling

Definition D27.2 (Temperature Schedule). The temperature schedule defines how the temperature parameter T in the Simulated Annealing process (as defined in Algorithm 27.1) is set. The operator getTemperature : N_1 → R^+ maps the current iteration index t to a (positive) real temperature value T.

getTemperature(t) ∈ [0, +∞) ∀t ∈ N_1   (27.13)

⁵ http://en.wikipedia.org/wiki/Ergodicity [accessed 2010-09-26]
⁶ http://en.wikipedia.org/wiki/Quenching [accessed 2010-10-11]
Equation 27.13 is provided as a Java interface in Listing 55.12 on page 717. As already mentioned, the temperature schedule has major influence on the probability that the Simulated Annealing algorithm will find the global optimum. For getTemperature, a few general rules hold. All schedules start with a temperature T_s which is greater than zero.

getTemperature(1) = T_s > 0   (27.14)

If the number of iterations t approaches infinity, the temperature should approach 0. If the temperature sinks too fast, the ergodicity of the search may be lost and the global optimum may not be discovered. In order to avoid that the resulting Simulated Quenching algorithm gets stuck at a local optimum, random restarting can be applied, which has already been discussed in the context of Hill Climbing in Section 26.5 on page 232.

lim_(t→+∞) getTemperature(t) = 0   (27.15)
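In the spirit of the Java interface mentioned above, such a schedule operator might be declared as follows. This is a minimal sketch with an assumed name; the actual interface in Listing 55.12 may differ.

```java
// Minimal sketch of a temperature-schedule operator: a functional
// interface mapping the iteration index t >= 1 to a temperature.
interface TemperatureSchedule {
  /** Map the iteration index t (t >= 1) to a non-negative temperature. */
  double getTemperature(int t);
}
```

Being a functional interface, concrete schedules can then be given as lambdas, e.g. `TemperatureSchedule s = t -> ts / t;` for a simple hyperbolic decay.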
There exists a wide range of methods to determine this temperature schedule. Miki et al. [1896], for example, used Genetic Algorithms for this purpose. The most established methods are listed below and implemented in Paragraph 56.1.3.2.1, and some examples of them are sketched in Figure 27.2.

[Figure: temperature curves over t ∈ [0, 100], falling from T_s toward 0 — linear scaling (i. e., polynomial with α = 1), polynomial with α = 2, exponential with ε = 0.05, exponential with ε = 0.025, and logarithmic.]
Figure 27.2: Different temperature schedules for Simulated Annealing.
27.3.1 Logarithmic Scheduling
Scale the temperature logarithmically, where T_s is set to a value larger than the greatest difference between the objective value of a local minimum and its best neighboring candidate solution. This value may, of course, be estimated since it usually is not known [1167, 2043]. An implementation of this method is given in Listing 56.9 on page 743.

getTemperature_ln(t) = { T_s         if t < e
                       { T_s / ln t  otherwise   (27.16)
27.3.2 Exponential Scheduling
Reduce T to (1 − ε)T in each step (or every m steps, see Equation 27.19), where the exact value of 0 < ε < 1 is determined by experiment [2808]. Such an exponential cooling strategy will turn Simulated Annealing into Simulated Quenching [1403]. This strategy is implemented in Listing 56.10 on page 744.

getTemperature_exp(t) = (1 − ε)^t · T_s   (27.17)
27.3.3 Polynomial and Linear Scheduling
T is polynomially scaled between T_s and 0 for t̂ iterations according to Equation 27.18 [2808]. α is a constant, maybe 1, 2, or 4, that depends on the positions and objective values of the local minima. Large values of α will spend more iterations at lower temperatures. For α = 1, the schedule proceeds in linear steps.

getTemperature_poly(t) = (1 − t/t̂)^α · T_s   (27.18)
27.3.4 Adaptive Scheduling
Instead of basing the temperature solely on the iteration index t, other information may be used for self-adaptation as well. After every m moves, the temperature T can be set to β times ΔE_c = f(p_cur.x) − f(x), where β is an experimentally determined constant, f(p_cur.x) is the objective value of the currently examined candidate solution p_cur.x, and f(x) is the objective value of the best phenotype x found so far. Since ΔE_c may be 0, we limit the temperature change to a maximum of γ·T with 0 < γ < 1 [2808].
27.3.5 Larger Step Widths
If the temperature reduction should not take place continuously but only every m ∈ N_1 iterations, t′ computed according to Equation 27.19 can be used in the previous formulas, i. e., getTemperature(t′) instead of getTemperature(t).

t′ = m · ⌊t/m⌋   (27.19)
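The schedules of Equations 27.16 to 27.19 can be summarized in a few lines of Java. This is a hedged sketch: the parameter names (ts for T_s, eps for ε, alpha for α, tMax for t̂) mirror the text, the class name is an assumption, and this is not the book's Paragraph 56.1.3.2.1 implementation.

```java
// Sketches of the temperature schedules from Sections 27.3.1-27.3.3
// and the stepped index of Section 27.3.5.
class Schedules {
  /** Eq. 27.16: logarithmic schedule. */
  static double logarithmic(int t, double ts) {
    return (t < Math.E) ? ts : ts / Math.log(t);
  }

  /** Eq. 27.17: exponential schedule with rate 0 < eps < 1. */
  static double exponential(int t, double ts, double eps) {
    return Math.pow(1.0 - eps, t) * ts;
  }

  /** Eq. 27.18: polynomial schedule over tMax iterations, exponent alpha. */
  static double polynomial(int t, double ts, double tMax, double alpha) {
    return Math.pow(1.0 - t / tMax, alpha) * ts;
  }

  /** Eq. 27.19: reduce the temperature only every m iterations. */
  static int steppedIndex(int t, int m) {
    return m * (t / m);   // integer division = m * floor(t/m)
  }
}
```

Combining them, `getTemperature(steppedIndex(t, m))` holds the temperature constant for m iterations at a time, as described above.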
27.4 General Information on Simulated Annealing
27.4.1 Applications and Examples
Table 27.1: Applications and Examples of Simulated Annealing.
Area References
Combinatorial Problems [190, 308, 344, 382, 516, 667, 771, 1403, 1454, 1455, 1486,
1541, 1550, 1896, 2043, 2105, 2171, 2640, 2770, 2901, 2966]
Computer Graphics [433, 1043, 1403, 2707]
Data Mining [1403]
Databases [771, 2171]
Distributed Algorithms and Systems [616, 1926, 2058, 2068, 2631, 2632, 2901, 2994, 3002]
Economics and Finances [1060, 1403]
Engineering [437, 1058, 1059, 1403, 1486, 1541, 1550, 1926, 2058, 2250,
2405, 3002]
Function Optimization [1018, 1071, 2180, 2486]
Graph Theory [63, 344, 616, 1454, 1455, 1486, 1926, 2058, 2068, 2237, 2631,
2632, 2793, 3002]
Logistics [190, 382, 516, 667, 1403, 1550, 1896, 2043, 2770, 2966]
Mathematics [111, 227, 1043, 1674, 2180]
Military and Defense [1403]
Physics [1403]
Security [1486]
Software [1486, 1550, 2901, 2994]
Wireless Communication [63, 154, 1486, 2237, 2250, 2793]
27.4.2 Books
1. Genetic Algorithms and Simulated Annealing [694]
2. Simulated Annealing: Theory and Applications [2776]
3. Electromagnetic Optimization by Genetic Algorithms [2250]
4. Facts, Conjectures, and Improvements for Simulated Annealing [2369]
5. Simulated Annealing for VLSI Design [2968]
6. Introduction to Stochastic Search and Optimization [2561]
7. Simulated Annealing [2966]
27.4.3 Conferences and Workshops
Table 27.2: Conferences and Workshops on Simulated Annealing.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
27.4. GENERAL INFORMATION ON SIMULATED ANNEALING 251
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
GICOLAG: Global Optimization Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
27.4.4 Journals
1. Journal of Heuristics published by Springer Netherlands
Tasks T27

68. Perform the same experiments as in Task 64 on page 238 with Simulated Annealing. Therefore, use the Java Simulated Annealing implementation given in Listing 56.8 on page 740 or develop an equivalent implementation of your own.
[45 points]

69. Compare your results from Task 64 on page 238 and Task 68. What are the differences? What are the similarities? Did you use different experimental settings? If so, what happens if you use the same experimental settings for both tasks? Try to give reasons for your results.
[10 points]

70. Perform the same experiments as in Task 67 on page 238 with Simulated Annealing and Simulated Quenching. Therefore, use the Java Simulated Annealing implementation given in Listing 56.8 on page 740 or develop an equivalent implementation of your own.
[25 points]

71. Compare your results from Task 67 on page 238 and Task 70. What are the differences? What are the similarities? Did you use different experimental settings? If so, what happens if you use the same experimental settings for both tasks? Try to give reasons for your results. Put your explanations into the context of the discussions provided in Example E17.4 on page 181.
[10 points]

72. Try solving the bin packing problem again, this time by using the representation given by Khuri, Schütz, and Heitkötter in [1534]. Use both Simulated Annealing and Hill Climbing to evaluate the performance of the new representation, similar to what we do in Section 57.1.2.1 on page 887. Compare the results with the results listed in that section and obtained by you in Task 70. Which method is better? Provide reasons for your observation.
[50 points]
73. Compute the probabilities that a new solution with ΔE1 = 0, ΔE2 = 0.001, ΔE3 = 0.01, ΔE4 = 0.1, ΔE5 = 1.0, ΔE6 = 10.0, and ΔE7 = 100 is accepted by the Simulated Annealing algorithm at temperatures T1 = 0, T2 = 1, T3 = 10, and T4 = 100. Use the same formulas used in Algorithm 27.1 (consider the discussion in Section 27.2.1).
[15 points]
74. Compute the temperatures needed so that solutions with ΔE1 = 0, ΔE2 = 0.001, ΔE3 = 0.01, ΔE4 = 0.1, ΔE5 = 1.0, ΔE6 = 10.0, and ΔE7 = 100 are accepted by the Simulated Annealing algorithm with probabilities P1 = 0, P2 = 0.1, P3 = 0.25, and P4 = 0.5. Base your calculations on the formulas used in Algorithm 27.1 and consider the discussion in Section 27.2.1.
[15 points]
Chapter 28
Evolutionary Algorithms

28.1 Introduction

Definition D28.1 (Evolutionary Algorithm). Evolutionary Algorithms¹ (EAs) are population-based metaheuristic optimization algorithms which use mechanisms inspired by biological evolution, such as mutation, crossover, natural selection, and survival of the fittest, in order to refine a set of candidate solutions iteratively in a cycle [167, 171, 172].

Evolutionary Algorithms have not been invented in a canonical form. Instead, they are a large family of different optimization methods which have, to a certain degree, been developed independently by different researchers. Information on their historic development is thus largely distributed amongst the chapters of this book which deal with the specific members of this family of algorithms.
Like other metaheuristics, all EAs share the "black box" optimization character and make only few assumptions about the underlying objective functions. Their consistent and high performance in many different problem domains (see, e.g., Table 28.4 on page 314) and the appealing idea of copying natural evolution for optimization made them one of the most popular metaheuristics.
As in most metaheuristic optimization algorithms, we can distinguish between single-objective and multi-objective variants. The latter means that multiple, possibly conflicting criteria are optimized as discussed in Section 3.3. The general area of Evolutionary Computation that deals with multi-objective optimization is called EMOO, evolutionary multi-objective optimization, and the multi-objective EAs are called, well, multi-objective Evolutionary Algorithms.

Definition D28.2 (Multi-Objective Evolutionary Algorithm). A multi-objective Evolutionary Algorithm (MOEA) is an Evolutionary Algorithm suitable to optimize multiple criteria at once [595, 596, 737, 740, 954, 1964, 2783].

In this section, we will first outline the basic cycle in which Evolutionary Algorithms proceed in Section 28.1.1. EAs are inspired by natural evolution, so we will compare the natural paragons of the single steps of this cycle in detail in Section 28.1.2. Later in this chapter, we will outline possible algorithms which can be used for the processes in EAs and finally give some information on conferences, journals, and books on EAs as well as their applications.
¹ http://en.wikipedia.org/wiki/Artificial_evolution [accessed 2007-07-03]
28.1.1 The Basic Cycle of EAs

All Evolutionary Algorithms proceed in principle according to the scheme illustrated in Figure 28.1, which is a specialization of the basic cycle of metaheuristics given in Figure 1.8 on page 35. We define the basic EA for single-objective optimization in Algorithm 28.1 and the multi-objective version in Algorithm 28.2.

[Figure: the EA cycle — Initial Population (create an initial population of random individuals) → GPM (apply the genotype-phenotype mapping and obtain the phenotypes) → Evaluation (compute the objective values of the solution candidates) → Fitness Assignment (use the objective values to determine fitness values) → Selection (select the fittest individuals for reproduction) → Reproduction (create new individuals from the mating pool by crossover and mutation) → back to GPM.]
Figure 28.1: The basic cycle of Evolutionary Algorithms.
1. In the first generation (t = 1), a population pop of ps individuals p is created. The function createPop(ps) produces this population. It depends on its implementation whether the individuals p have random genomes p.g (the usual case) or whether the population is seeded (see Paragraph 28.1.2.2.2).

2. The genotypes p.g are translated to phenotypes p.x (see Section 28.1.2.3). This genotype-phenotype mapping may either be performed directly on all individuals (performGPM) in the population or can be a part of the process of evaluating the objective functions.

3. The values of the objective functions f ∈ f are evaluated for each candidate solution p.x in pop (and usually stored in the individual records) with the operation computeObjectives. This evaluation may incorporate complicated simulations and calculations. In single-objective Evolutionary Algorithms, f = {f} and, hence, |f| = 1 holds.

4. With the objective functions, the utilities of the different features of the candidate solutions p.x have been determined and a scalar fitness value v(p) can now be assigned to each of them (see Section 28.1.2.5). In single-objective Evolutionary Algorithms, v ≡ f holds.

5. A subsequent selection process selection(pop, v, mps) (or selection(pop, f, mps), if v ≡ f, as is the case in single-objective Evolutionary Algorithms) filters out the candidate solutions with bad fitness and allows those with good fitness to enter the mating pool with a higher probability (see Section 28.1.2.6).

6. In the reproduction phase, offspring are derived from the genotypes p.g of the selected individuals p ∈ mate by applying the search operations searchOp ∈ Op (which are called reproduction operations in the context of EAs, see Section 28.1.2.7). There usually are two different reproduction operations: mutation, which modifies one genotype, and crossover, which combines two genotypes to a new one. Whether the whole population is replaced by the offspring or whether they are integrated into the population, as well as which individuals are recombined with each other, depends on the implementation of the operation reproducePop. We highlight the transition between the generations in Algorithm 28.1 and 28.2 by incrementing a generation counter t and annotating the populations with it, i. e., pop(t).

7. If the terminationCriterion() is met, the evolution stops here. Otherwise, the algorithm continues at Point 2.
Algorithm 28.1: X ← simpleEA(f, ps, mps)

Input: f: the objective/fitness function
Input: ps: the population size
Input: mps: the mating pool size
Data: t: the generation counter
Data: pop: the population
Data: mate: the mating pool
Data: continue: a Boolean variable used for termination detection
Output: X: the set of the best elements found

 1 begin
 2   t ← 1
 3   pop(t = 1) ← createPop(ps)
 4   continue ← true
 5   while continue do
 6     pop(t) ← performGPM(pop(t), gpm)
 7     pop(t) ← computeObjectives(pop(t), f)
 8     if terminationCriterion() then
 9       continue ← false
10     else
11       mate ← selection(pop(t), f, mps)
12       t ← t + 1
13       pop(t) ← reproducePop(mate, ps)
14   return extractPhenotypes(extractBest(pop(t), f))
Algorithm 28.1 is strongly simplified. The termination criterion terminationCriterion(), for example, is usually invoked after each and every evaluation of the objective functions. Often, we have a limit in terms of objective function evaluations (see, for instance, Section 6.3.3). Then, it is not sufficient to check terminationCriterion() after each generation, since a generation involves computing the objective values of multiple individuals. You can find a more precise version of the basic Evolutionary Algorithm scheme (with steady-state population treatment) specified in Listing 56.11 on page 746.
Also, the return value(s) of an Evolutionary Algorithm often is not the best individual(s) from the last generation, but the best candidate solution discovered so far. This, however, demands maintaining an archive, something which is especially important in the case of multi-objective or multi-modal optimization.
An example implementation of Algorithm 28.2 is given in Listing 56.12 on page 750.
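To make the cycle concrete, a toy Java version of the single-objective loop of Algorithm 28.1 for the trivial case G = X = R could look as follows. Truncation selection, mutation-only reproduction (no crossover), and all constants are simplifying assumptions made for brevity; the steady-state implementation the book actually provides is Listing 56.11.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Random;
import java.util.function.DoubleUnaryOperator;

// Sketch of the EA cycle: create, evaluate, select, reproduce, repeat.
class SimpleEA {
  static final Random RAND = new Random();

  static double solve(DoubleUnaryOperator f, int ps, int mps, int maxGen) {
    List<Double> pop = new ArrayList<>();
    for (int i = 0; i < ps; i++)                 // createPop(ps)
      pop.add(RAND.nextGaussian() * 10);
    double best = pop.get(0);
    for (int gen = 1; gen <= maxGen; gen++) {
      pop.sort(Comparator.comparingDouble(f::applyAsDouble)); // evaluate+rank
      if (f.applyAsDouble(pop.get(0)) < f.applyAsDouble(best))
        best = pop.get(0);                       // archive the best-so-far
      List<Double> mate = pop.subList(0, mps);   // truncation selection
      List<Double> next = new ArrayList<>();
      for (int i = 0; i < ps; i++)               // reproduction: mutation only
        next.add(mate.get(i % mps) + RAND.nextGaussian());
      pop = next;                                // next generation
    }
    return best;
  }
}
```

Even this stripped-down loop exhibits the evaluate/select/reproduce rhythm of Figure 28.1, with the genotype-phenotype mapping degenerating to the identity.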
28.1.2 Biological and Artificial Evolution
The concepts of Evolutionary Algorithms are strongly related to the way in which the
Algorithm 28.2: X ← simpleMOEA(OptProb, ps, mps)

Input: OptProb: a tuple (X, f, cmp_f) of the problem space X, the objective functions f, and a comparator function cmp_f
Input: ps: the population size
Input: mps: the mating pool size
Data: t: the generation counter
Data: pop: the population
Data: mate: the mating pool
Data: v: the fitness function resulting from the fitness assigning process
Data: continue: a Boolean variable used for termination detection
Output: X: the set of the best elements found

 1 begin
 2   t ← 1
 3   pop(t = 1) ← createPop(ps)
 4   continue ← true
 5   while continue do
 6     pop(t) ← performGPM(pop(t), gpm)
 7     pop(t) ← computeObjectives(pop(t), f)
 8     if terminationCriterion() then
 9       continue ← false
10     else
11       v ← assignFitness(pop(t), cmp_f)
12       mate ← selection(pop, v, mps)
13       t ← t + 1
14       pop(t) ← reproducePop(mate, ps)
15   return extractPhenotypes(extractBest(pop, cmp_f))
development of species in nature could proceed. We will outline the historic ideas behind
evolution and put the steps of a multi-objective Evolutionary Algorithm into this context.
28.1.2.1 The Basic Principles from Nature
In 1859, Darwin [684] published his book On the Origin of Species
2
in which he identied
the principles of natural selection and survival of the ttest as driving forces behind the
biological evolution. His theory can be condensed into ten observations and deductions
[684, 1845, 2938]:
1. The individuals of a species possess great fertility and produce more offspring than
can grow into adulthood.
2. Under the absence of external influences (like natural disasters, human beings, etc.),
the population size of a species roughly remains constant.
3. Again, if no external influences occur, the food resources are limited but stable over
time.
4. Since the individuals compete for these limited resources, a struggle for survival ensues.
5. Especially in sexually reproducing species, no two individuals are equal.
6. Some of the variations between the individuals will affect their fitness and hence, their
ability to survive.
7. Some of these variations are inheritable.
8. Individuals which are less fit are also less likely to reproduce. The fittest individuals
will survive and produce offspring more probably.
9. Individuals that survive and reproduce will likely pass on their traits to their offspring.
10. A species will slowly change and adapt more and more to a given environment during
this process which may finally even result in new species.
These historic ideas themselves remain valid to a large degree even today. However, some
of Darwin's specific conclusions can be challenged based on data and methods which simply
were not available at his time. Point 10, the assumption that change happens slowly over
time, for instance, became controversial as fossils indicate that evolution seems to rest
for longer periods which alternate with short phases of sudden, fast developments [473, 2778]
(see Section 41.1.2 on page 493).
28.1.2.2 Initialization
28.1.2.2.1 Biology
Until now, it is not clear how life started on earth. It is assumed that before the first organisms
emerged, an abiogenesis³, a chemical evolution, took place in which the chemical components
of life formed. This process was likely divided into three steps: (1) In the first step,
simple organic molecules such as alcohols, acids, purines⁴ (such as adenine and guanine),
and pyrimidines⁵ (such as thymine, cytosine, and uracil) were synthesized from inorganic
substances. (2) Then, simple organic molecules were combined into the basic components of
complex organic molecules such as monosaccharides, amino acids⁶, pyrroles, fatty acids, and
nucleotides⁷. (3) Finally, complex organic molecules emerged. Many different theories exist
on how this could have taken place [176, 1300, 1698, 1713, 1837, 2084, 2091, 2432]. An important
milestone in the research in this area is likely the Miller-Urey experiment⁸ [1904, 1905]
where it was shown that amino acids can form from inorganic substances under conditions
such as those expected to have been present in the early time of the earth. The first single-celled life
most probably occurred in the Eoarchean⁹, the earth age 3.8 billion years ago, demarking
the onset of evolution.

² http://en.wikipedia.org/wiki/The_Origin_of_Species [accessed 2007-07-03]
³ http://en.wikipedia.org/wiki/Abiogenesis [accessed 2009-08-05]
⁴ http://en.wikipedia.org/wiki/Purine [accessed 2009-08-05]
⁵ http://en.wikipedia.org/wiki/Pyrimidine [accessed 2009-08-05]
28.1.2.2.2 Evolutionary Algorithms
Usually, when an Evolutionary Algorithm starts up, there exists no information about what
is good or what is bad. Basically, only random genes are coupled together as individuals
in the initial population pop(t = 0). This form of initialization can be considered to be
similar to the early steps of chemical and biological evolution, where it was unclear which
substances or primitive life forms would be successful and nature experimented with many
different reactions and concepts.
Besides random initialization, Evolutionary Algorithms can also be seeded [1126, 1139,
1471]. If seeding is applied, a set of individuals which already are adapted to the optimization
criteria is inserted into the initial population. Sometimes, the entire first population consists
of such individuals with good characteristics only. The candidate solutions used for this may
either have been obtained manually or by a previous optimization step [1737, 2019].
The goal of seeding is to achieve faster convergence and better results in the evolutionary
process. However, it also raises the danger of premature convergence (see Chapter 13
on page 151), since the already-adapted candidate solutions will likely be much more competitive
than the random ones and thus, may wipe them out of the population. Care must
thus be taken in order to achieve the wanted performance improvement.
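The seeding strategy described above can be sketched as follows; the function name `seeded_init` and the cap on the seed fraction are illustrative assumptions intended to mitigate the premature-convergence risk just mentioned.

```python
import random

def seeded_init(ps, create_random, seeds, seed_fraction=0.25):
    """Build an initial population of size ps from at most
    seed_fraction * ps pre-adapted individuals (the seeds), filling the
    remainder with random genotypes. Capping the number of seeds is one
    simple way to limit the risk of premature convergence."""
    n_seeds = min(len(seeds), int(seed_fraction * ps))
    pop = list(random.sample(seeds, n_seeds))          # known good candidates
    pop += [create_random() for _ in range(ps - n_seeds)]
    random.shuffle(pop)
    return pop

# usage: seed with two hand-crafted bit strings, fill the rest randomly
seeds = [[1, 1, 1, 1], [1, 1, 0, 0]]
pop = seeded_init(8, lambda: [random.randint(0, 1) for _ in range(4)], seeds)
```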
28.1.2.3 Genotype-Phenotype Mapping
28.1.2.3.1 Biology
TODO
Example E28.1 (Evolution of Fish).
In the following, let us discuss evolution using the example of fish. We can consider a fish as
an instance of its heredity information, as one possible realization of the blueprint encoded
in its DNA.
28.1.2.3.2 Evolutionary Algorithms
At the beginning of every generation in an EA, each genotype p.g is instantiated as a
new phenotype p.x = gpm(p.g). Genotype-phenotype mappings are discussed in detail in
Section 28.2.
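As a concrete (hypothetical) example of such a mapping gpm : G → X, the following sketch decodes a bit-string genotype into a real-vector phenotype; the group size, value range, and scaling scheme are assumptions of this example, not a prescribed encoding.

```python
def gpm(genotype, n_vars=2, lo=-5.0, hi=5.0):
    """A hypothetical genotype-phenotype mapping: decode a bit-string
    genotype p.g into a real-vector phenotype p.x by splitting it into
    n_vars equal groups and scaling each group's integer value linearly
    into the interval [lo, hi]."""
    bits = len(genotype) // n_vars
    phenotype = []
    for i in range(n_vars):
        group = genotype[i * bits:(i + 1) * bits]
        value = int("".join(map(str, group)), 2)   # binary digits to integer
        phenotype.append(lo + (hi - lo) * value / (2 ** bits - 1))
    return phenotype

x = gpm([1, 1, 1, 1, 0, 0, 0, 0])   # → [5.0, -5.0]
```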
⁶ http://en.wikipedia.org/wiki/Amino_acid [accessed 2009-08-05]
⁷ http://en.wikipedia.org/wiki/Nucleotide [accessed 2009-08-05]
⁸ http://en.wikipedia.org/wiki/Miller–Urey_experiment [accessed 2009-08-05]
⁹ http://en.wikipedia.org/wiki/Eoarchean [accessed 2007-07-03]
28.1.2.4 Objective Values
28.1.2.4.1 Biology
There are different criteria which determine whether an organism can survive in its environment.
Often, individuals are good in some characteristics but rarely reach perfection in all
possible aspects.
Example E28.2 (Evolution of Fish (Example E28.1) Cont.).
The survival of the genes of the fish depends on how well the fish performs in the ocean.
Its fitness, however, is not only determined by one single feature of the phenotype like its
size. Although a bigger fish will have better chances to survive, size alone does not help if
it is too slow to catch any prey. Also its energy consumption in terms of calories per time
unit which are necessary to sustain its life functions should be low so it does not need to eat
all the time. Other factors influencing the fitness positively are formae like sharp teeth and
colors that blend into the environment so it cannot be seen too easily by its predators such
as sharks. If its camouflage is too good, on the other hand, how will it find potential mating
partners? And if it is big, it will usually have higher energy consumption. So there may be
conflicts between the desired properties.
28.1.2.4.2 Evolutionary Algorithms
To sum it up, we could consider the life of the fish as the evaluation process of its genotype
in an environment where good qualities in one aspect can turn out as drawbacks from other
perspectives. In multi-objective Evolutionary Algorithms, this is exactly the same. For each
problem that we want to solve, we can specify one or multiple so-called objective functions
f ∈ f. An objective function f represents one feature that we are interested in.
Example E28.3 (Evolution of a Car).
Let us assume that we want to evolve a car (a pretty weird assumption, but let's stick with
it). The genotype p.g ∈ G would be the construction plan and the phenotype p.x ∈ X the
real car, or at least a simulation of it. One objective function f_a would definitely be safety.
For the sake of our children and their children, the car should also be environment-friendly,
so that's our second objective function f_b. Furthermore, a cheap price f_c, fast speed f_d, and
a cool design f_e would be good. That makes five objective functions from which, for example,
the second and the fourth are contradictory (f_b versus f_d).
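For illustration, the five objectives of the car example could be modeled as follows; all property names, the surrogate measures, and the convention of minimizing every objective are assumptions made up for this sketch.

```python
# hypothetical surrogate measures for the car example; each objective maps a
# phenotype (here a dict of design properties) to a value to be minimized
f_a = lambda car: -car["crash_rating"]   # safety: higher rating is better
f_b = lambda car: car["co2_g_per_km"]    # environmental friendliness
f_c = lambda car: car["price"]           # cheap price
f_d = lambda car: -car["top_speed"]      # fast speed
f_e = lambda car: -car["design_score"]   # cool design (a subjective score)
f = [f_a, f_b, f_c, f_d, f_e]

def compute_objectives(x, objectives=f):
    """Evaluate every objective function on one phenotype x."""
    return [fi(x) for fi in objectives]

car = {"crash_rating": 5, "co2_g_per_km": 120, "price": 20000,
       "top_speed": 180, "design_score": 7}
values = compute_objectives(car)   # → [-5, 120, 20000, -180, -7]
```

Negating "the higher, the better" quantities is one common trick to bring all objectives into a uniform minimization form.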
28.1.2.5 Fitness
28.1.2.5.1 Biology
As stated before, different characteristics determine the fitness of an individual. The biological
fitness is a scalar value that describes the number of offspring of an individual [2542].
Example E28.4 (Evolution of Fish (Example E28.1) Cont.).
In Example E28.1, the fish genome is instantiated and proved its physical properties in
several different categories in Example E28.2. Whether one specific fish is fit or not, however,
is still not easy to say: fitness is always relative; it depends on the environment. The fitness
of the fish is not only defined by its own characteristics but also results from the features of
the other fish in the population (as well as from the abilities of its prey and predators). If
one fish can beat another one in all categories, i. e., is bigger, stronger, smarter, and so on,
it is better adapted to its environment and thus, will have a higher chance to survive. This
relation is transitive but only forms a partial order since a fish which is strong but not very
clever and a fish that is clever but not strong may have the same probability to reproduce
and hence, are not directly comparable.
It is not clear whether a weak fish with a clever behavioral pattern is worse or better
than a really strong but less cunning one. Both traits are useful in the evolutionary process
and maybe, one fish of the first kind will sometimes mate with one of the latter and produce
an offspring which is both, intelligent and powerful.
Example E28.5 (Computer Science and Sports).
Fitness depends on the environment. In my department, I may be considered averagely fit.
If I took a stroll to the department of sports science, however, my relative physical suitability
would decrease. Yet, my relative computer science skills would become much better.
28.1.2.5.2 Evolutionary Algorithms
The fitness assignment process in multi-objective Evolutionary Algorithms computes one
scalar fitness value for each individual p in a population pop by building a fitness function
v (see Definition D5.1 on page 94). Instead of being an a posteriori measure of the number
of offspring of an individual, it rather describes the relative quality of a candidate solution,
i. e., its ability to produce offspring [634, 2987].
Optimization criteria may contradict, so the basic situation in optimization very much
corresponds to the one in biology. One of the most popular methods for computing the fitness
is called Pareto ranking¹⁰. It performs comparisons similar to the ones we just discussed.
Some popular fitness assignment processes are given in Section 28.3.
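One simple variant of such a ranking can be sketched as follows; counting the number of dominators is only one of several Pareto-ranking flavors, and minimization of all objectives is assumed.

```python
def pareto_rank(objective_vectors):
    """One simple Pareto-ranking variant: the fitness of an individual is
    the number of individuals that dominate it (minimization); all
    non-dominated individuals receive the best possible fitness, 0."""
    def dominates(a, b):   # a dominates b: no worse everywhere, better once
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [sum(dominates(b, a) for b in objective_vectors)
            for a in objective_vectors]

ranks = pareto_rank([(1, 4), (2, 2), (3, 3), (4, 1)])   # → [0, 0, 1, 0]
```

Only the point (3, 3) is dominated (by (2, 2)); the other three are mutually incomparable, mirroring the strong-versus-clever fish comparison above.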
28.1.2.6 Selection
28.1.2.6.1 Biology
Selection in biology has many different facets [2945]. Let us again assume fitness to be a
measure for the expected number of offspring, as done in Paragraph 28.1.2.5.1. How fit an
individual is then does not necessarily determine directly if it can produce offspring and, if
so, how many. There are two more aspects which have influence on this: the environment
(including the rest of the population) and the potential mating partners. These factors are
the driving forces behind the three selection steps in natural selection¹¹: (1) environmental
selection¹² (or survival selection and ecological selection) which determines whether an individual
survives to reach fertility [684], (2) sexual selection, i. e., the choice of the mating
partner(s) [657, 2300], and (3) parental selection, i. e., possible choices of the parents against
pregnancy or newborn [2300].

¹⁰ Pareto comparisons are discussed in Section 3.3.5 on page 65 and Pareto ranking is introduced in
Section 28.3.3.
¹¹ http://en.wikipedia.org/wiki/Natural_selection [accessed 2010-08-03]
¹² http://en.wikipedia.org/wiki/Ecological_selection [accessed 2010-08-03]
Example E28.6 (Evolution of Fish (Example E28.1) Cont.).
An intelligent fish may be eaten by a shark, a strong one can die from a disease, and even
the most perfect fish could die from a lightning strike before ever reproducing. The process
of selection is always stochastic, without guarantees: even a fish that is small, slow, and
lacks any sophisticated behavior might survive and could produce even more offspring than
a highly fit one. The degree to which a fish is adapted to its environment therefore can
be considered as some sort of basic probability which determines its expected chances to
survive environmental selection. This is the form of selection propagated by Darwin [684] in
his seminal book On the Origin of Species.
The sexual or mating selection describes the fact that survival alone does not guarantee
reproduction: At least for sexually reproducing species, two individuals are involved in this
process. Hence, even a fish which is perfectly adapted to its environment will have low
chances to create offspring if it is unable to attract female interest. Sexual selection is
considered to be responsible for much of nature's beauty, such as the colorful wings of a
butterfly or the tail of a peacock [657, 2300].
Example E28.7 (Computer Science and Sports (Example E28.5) cnt.).
For today's noble computer scientist, survival selection is not the threatening factor. Furthermore,
our relative fitness is on par with the fitness of the guys from the sports department,
at least in terms of the trade-off between theoretical and physical abilities. Yet, surprisingly,
sexual selection is less friendly to some of us. Indeed, parallels with human behavior and
peacocks, which prefer mating partners with colorful feathers over others regardless of their
further qualities, can be drawn.
The third selection factor, parental selection, may lead to examples for reinforcement of certain physical
characteristics which are much more extreme than Example E28.7. Rich Harris [2300]
argues that after the fertilization, i. e., before or after birth, parents may make a choice
for or against the life of their offspring. If the offspring does or is likely to exhibit certain
unwanted traits, this decision may be negative. A negative decision may also be caused by
environmental or economical considerations. A negative decision, however, may also again be
overruled after birth if the offspring turns out to possess especially favorable characteristics.
Rich Harris [2300], for instance, argues that parents in prehistoric European and Asian
cultures may this way have actively selected hairlessness and pale skin.
28.1.2.6.2 Evolutionary Algorithms
In Evolutionary Algorithms, survival selection is always applied in order to steer the
optimization process towards good areas in the search space. From time to time, also a
subsequent sexual selection is used. For survival selection, the so-called selection algorithms
mate = selection(pop, v, mps) determine the mps individuals from the population pop which
should enter the mating pool mate based on their scalar fitness v. selection(pop, v, mps)
usually picks the fittest individuals and places them into mate with the highest probability.
The oldest selection scheme is called Roulette wheel and will be introduced in Section 28.4.3
on page 290 in detail. In the original version of this algorithm (intended for fitness maximization),
the chance of an individual p to reproduce is proportional to its fitness v(p).
More information on selection algorithms is given in Section 28.4.
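A minimal sketch of such a roulette wheel scheme follows (the detailed treatment is in Section 28.4.3); the linear scan over accumulated fitness used here is the simplest possible realization and assumes non-negative fitness values.

```python
import random

def roulette_wheel(pop, v, mps):
    """Fitness-proportionate (roulette wheel) selection for maximization:
    each of the mps picks chooses individual p with probability
    v(p) / sum(v(q) for q in pop). Assumes non-negative fitness."""
    total = sum(v(p) for p in pop)
    mate = []
    for _ in range(mps):
        r = random.uniform(0, total)   # spin the wheel once
        acc = 0.0
        for p in pop:
            acc += v(p)
            if acc >= r:
                mate.append(p)
                break
    return mate

pop = [1, 2, 3, 4]                     # toy individuals, fitness = value
mate = roulette_wheel(pop, v=float, mps=100)
```

On average, the individual with fitness 4 is picked four times as often as the one with fitness 1.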
28.1.2.7 Reproduction
28.1.2.7.1 Biology
Last but not least, there is the reproduction. For this purpose, two approaches exist in
nature: the sexual and the asexual method.
Example E28.8 (Evolution of Fish (Example E28.1) Cont.).
Fish reproduce sexually. Whenever a female fish and a male fish mate, their genes will
be recombined by crossover. Furthermore, mutations may take place which slightly alter
the DNA. Most often, the mutations affect the characteristics of the resulting larvae only
slightly [2303]. Since fit fish produce offspring with higher probability, there is a good
chance that the next generation will contain at least some individuals that have combined
good traits from their parents and perform even better than them.
Asexually reproducing species basically create clones of themselves, a process during which,
again, mutations may take place. Regardless of how they were produced, the new generation
of offspring now enters the struggle for survival and the cycle of evolution starts in another
round.
28.1.2.7.2 Evolutionary Algorithms
In Evolutionary Algorithms, individuals usually do not have such a gender. Yet, the
concept of recombination, i. e., of binary search operations, is one of the central ideas of this
metaheuristic. Each individual from the mating pool can potentially be recombined with
every other one. In the car example, this means that we could copy the design of the engine
of one car and place it into the construction plan of the body of another car.
The concept of mutation is present in EAs as well. Parts of the genotype (and hence,
properties of the phenotype) can randomly be changed with a unary search operation. This
way, new construction plans for new cars are generated. The common definitions for reproduction
operations in Evolutionary Algorithms are given in Section 28.5.
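The two kinds of reproduction operations can be sketched for bit-string genotypes as follows; single-point crossover and independent per-bit flips are just one common choice, not the only definitions used in Section 28.5.

```python
import random

def mutate(genotype, rate=0.1):
    """Unary search operation: flip each bit with probability rate."""
    return [1 - g if random.random() < rate else g for g in genotype]

def recombine(parent_a, parent_b):
    """Binary search operation: single-point crossover of two
    equal-length bit-string genotypes."""
    point = random.randint(1, len(parent_a) - 1)   # cut inside the string
    return parent_a[:point] + parent_b[point:]

child = mutate(recombine([0, 0, 0, 0], [1, 1, 1, 1]))
```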
28.1.2.8 Does the Natural Paragon Fit?
At this point it should be mentioned that the direct reference to Darwinian evolution in
Evolutionary Algorithms is somewhat controversial. The strongest argument against this
reference is that Evolutionary Algorithms introduce a change in semantics by being goal-driven
[2772] while the natural evolution is not.
Paterson [2134] further points out that neither GAs [Genetic Algorithms] nor GP [Genetic
Programming] are concerned with the evolution of new species, nor do they use natural
selection. On the other hand, nobody would claim that the idea of selection has not been
borrowed from nature, although many additions and modifications have been introduced in
favor of better algorithmic performance.
An argument against close correspondence between EAs and natural evolution concerns
the development of different species. It depends on the definition of species: According to
Wikipedia, The Free Encyclopedia [2938], a species is a class of organisms which are very
similar in many aspects such as appearance, physiology, and genetics. In principle, there
is some elbowroom for us and we may indeed consider even different solutions to a single
problem in Evolutionary Algorithms as members of a different species, especially if the
binary search operation crossover/recombination applied to their genomes cannot produce
another valid candidate solution.
In Genetic Algorithms, the crossover and mutation operations may be considered to be
strongly simplified models of their natural counterparts. In many advanced EAs there may
be, however, operations which work entirely differently. Differential Evolution, for instance,
features a ternary search operation (see Chapter 33). Another example is [2912, 2914], where
the search operations directly modify the phenotypes (G = X) according to heuristics and
intelligent decisions.
Furthermore, although the concept of fitness¹³ in nature is controversial [2542], it is often
considered as an a posteriori measurement. It then defines the ratio between the numbers
of occurrences of a genotype in a population after and before selection or the number of
offspring an individual has in relation to the number of offspring of another individual. In
Evolutionary Algorithms, fitness is an a priori quantity denoting a value that determines
the expected number of instances of a genotype that should survive the selection process.
However, one could conclude that biological fitness is just an approximation of the a priori
quantity, arisen due to the hardness (if not impossibility) of directly measuring it.
My personal opinion (which may as well be wrong) is that the citation of Darwin here is
well motivated since there are close parallels between Darwinian evolution and Evolutionary
Algorithms. Nevertheless, natural and artificial evolution are still two different things and
phenomena observed in either of the two do not necessarily carry over to the other.
28.1.2.9 Formae and Implicit Parallelism
Let us review our introductory fish example in terms of forma analysis. Fish can, for instance,
be characterized by the properties clever and strong. For the sake of simplicity, let us
assume that both properties may be true or false for a single individual. They hence
define two formae each. A third property can be the color, for which many different possible
variations exist. Some of them may be good in terms of camouflage, others may be good in
terms of finding mating partners. Now a fish can be clever and strong at the same time, as
well as weak and green. Here, a living fish allows nature to evaluate the utility of at least
three different formae.
This issue has first been stated by Holland [1250] for Genetic Algorithms and is termed
implicit parallelism (or intrinsic parallelism). Since then, it has been studied by many different
researchers [284, 1128, 1133, 2827]. If the search space and the genotype-phenotype
mapping are properly designed, implicit parallelism in conjunction with the crossover/recombination
operations is one of the reasons why Evolutionary Algorithms are such a successful
class of optimization algorithms. The Schema Theorem discussed in Section 29.5.3
on page 342 gives a further explanation for this form of parallel evaluation.
28.1.3 Historical Classification
The family of Evolutionary Algorithms historically encompasses five members, as illustrated
in Figure 28.2. We will only enumerate them here in short. In-depth discussions will follow
in the corresponding chapters.
1. Genetic Algorithms (GAs) are introduced in Chapter 29 on page 325. GAs subsume
all Evolutionary Algorithms which have string-based search spaces G in which they
mainly perform combinatorial search.
2. The set of Evolutionary Algorithms which explore the space of real vectors X ⊆ R^n,
often using more advanced mathematical operations, is called Evolution Strategies (ES,
see Chapter 30 on page 359). Closely related is Differential Evolution, a numerical
optimizer using a ternary search operation.
3. For Genetic Programming (GP), which will be elaborated on in Chapter 31 on page 379,
we can provide two definitions: On one hand, GP includes all Evolutionary Algorithms
that grow programs, algorithms, and the like. On the other hand, also all EAs that
evolve tree-shaped individuals are instances of Genetic Programming.
4. Learning Classifier Systems (LCS), discussed in Chapter 35 on page 457, are learning
approaches that assign output values to given input values. They often internally use
a Genetic Algorithm to find new rules for this mapping.
5. Evolutionary Programming (EP, see Chapter 32 on page 413) approaches usually fall
into the same category as Evolution Strategies; they optimize vectors of real numbers.
Historically, such approaches have also been used to derive algorithm-like structures
such as finite state machines.

¹³ http://en.wikipedia.org/wiki/Fitness_(biology) [accessed 2008-08-10]
[Figure 28.2: The family of Evolutionary Algorithms: Genetic Algorithms, Evolution Strategy, Differential Evolution, Genetic Programming (with GGGP, LGP, and SGP), Evolutionary Programming, and Learning Classifier Systems.]
The early research [718] in Genetic Algorithms (see Section 29.1 on page 325), Genetic
Programming (see Section 31.1.1 on page 379), and Evolutionary Programming (see ??
on page ??) dates back to the 1950s and 60s. Besides the pioneering work listed in these
sections, at least one other important early contribution should not go unmentioned here: the
Evolutionary Operation (EVOP) approach introduced by Box and Draper [376, 377] in the
late 1950s. The idea of EVOP was to apply a continuous and systematic scheme of small
changes in the control variables of a process. The effects of these modifications are evaluated
and the process is slowly shifted into the direction of improvement. This idea was never
realized as a computer algorithm, but Spendley et al. [2572] used it as basis for their simplex
method which then served as progenitor of the Downhill Simplex algorithm¹⁴ by Nelder and
Mead [2017] [718, 1719]. Satterthwaite's REVOP [2407, 2408], a randomized Evolutionary
Operation approach, however, was rejected at this time [718].
Like in Section 1.3 on page 31, we here classified different algorithms according to their
semantics, in other words, corresponding to their specific search and problem spaces and
the search operations [634]. All five major approaches can be realized with the basic scheme
defined in Algorithm 28.1. To this simple structure, there exist many general improvements
and extensions. Often, these do not directly concern the search or problem spaces.

¹⁴ We discuss Nelder and Mead's Downhill Simplex [2017] optimization method in Chapter 43 on page 501.
Then, they also can be applied to all members of the EA family alike. Later in this chapter,
we will discuss the major components of many of today's most efficient Evolutionary
Algorithms [593]. Some of the distinctive features of these EAs are:
1. The population size and the number of populations used.
2. The method of selecting the individuals for reproduction.
3. The way the offspring is included into the population(s).
28.1.4 Populations in Evolutionary Algorithms
Evolutionary Algorithms try to simulate processes known from biological evolution in order
to solve optimization problems. Species in biology consist of many individuals which live
together in a population. The individuals in a population compete with but also mate with
each other, generation for generation. In EAs, it is quite similar: the population pop ∈
(G × X)^ps is a set of ps individuals p ∈ G × X where each individual p possesses heredity
information (its genotype p.g) and a physical representation (the phenotype p.x).
There exist various ways in which an Evolutionary Algorithm can process its population.
Especially interesting is how the population pop(t + 1) of the next generation is formed
from (1) the individuals selected from pop(t), which are collected in the mating pool mate,
and (2) the new offspring derived from them.
28.1.4.1 Extinctive/Generational EAs
If the new population only contains the offspring, we speak of extinctive selection [2015,
2478]. Extinctive selection can be compared with ecosystems of small single-cell organisms
(protozoa¹⁵) which reproduce in an asexual, fissiparous¹⁶ manner, i. e., by cell division. In this
case, of course, the elders will not be present in the next generation. Other comparisons can
partly be drawn to sexually reproducing octopi, where the female dies after protecting
the eggs until the larvae hatch, or to the black widow spider, where the female devours
the male after the insemination. Especially in the area of Genetic Algorithms, extinctive
strategies are also known as generational algorithms.
Definition D28.3 (Generational). In Evolutionary Algorithms that are generational
[2226], the next generation will only contain the offspring of the current one and
no parent individuals will be preserved.
Extinctive Evolutionary Algorithms can further be divided into left and right selection
[2987]. In left extinctive selection, the best individuals are not allowed to reproduce in
order to prevent premature convergence of the optimization process. Conversely, the worst
individuals are not permitted to breed in right extinctive selection schemes in order to reduce
the selective pressure since they would otherwise scatter the fitness too much [2987].
28.1.4.2 Preservative EAs
In algorithms that apply a preservative selection scheme, the population is a combination of
the previous population and its offspring [169, 1461, 2335, 2772]. The biological metaphor for
such algorithms is that the lifespan of many organisms exceeds a single generation. Hence,
parent and child individuals compete with each other for survival.
Even in preservative strategies, it is not granted that the best individuals will always
survive, unless truncation selection (as introduced in Section 28.4.2 on page 289) is applied.
In general, most selection methods are randomized. Even if they pick the best candidate
solutions with the highest probabilities, they may also select worse individuals.
¹⁵ http://en.wikipedia.org/wiki/Protozoa [accessed 2008-03-12]
¹⁶ http://en.wikipedia.org/wiki/Binary_fission [accessed 2008-03-12]
28.1.4.2.1 Steady-State Evolutionary Algorithms
Steady-state Evolutionary Algorithms [519, 2041, 2270, 2316, 2641], abbreviated by SSEA,
are preservative Evolutionary Algorithms which usually do not allow inferior individuals to
replace better ones in the population. Therefore, the binary search operator (crossover/recombination)
is often applied exactly once per generation. The new offspring then replaces
the worst member of the population if and only if it has better fitness.
Steady-state Evolutionary Algorithms often produce better results than generational
EAs. Chafekar et al. [519], for example, introduce steady-state Evolutionary Algorithms
that are able to outperform generational NSGA-II (which you can find summarized in ??
on page ??) for some difficult problems. In experiments of Jones and Soule [1463] (primarily
focused on other issues), steady-state algorithms showed better convergence behavior in a
multi-modal landscape. Similar results have been reported by Chevreux [558] in the context
of molecule design optimization. The steady-state GENITOR has shown good behavior in
an early comparative study by Goldberg and Deb [1079]. On the other hand, with a badly
configured steady-state approach, we also run the risk of premature convergence. In our own
experiments on a real-world Vehicle Routing Problem discussed in Section 49.3 on page 536,
for instance, steady-state algorithms outperformed generational ones, but only if sharing
was applied [2912, 2914].
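The replacement rule just described can be sketched like this; picking the parents uniformly at random and the OneMax toy problem in the usage example are illustrative assumptions.

```python
import random

def steady_state_step(pop, fitness, recombine, mutate):
    """One step of a steady-state EA: apply the binary operator once,
    mutate the result, and let the offspring replace the worst member of
    the population only if it is strictly fitter (maximization)."""
    a, b = random.sample(pop, 2)                    # choose two parents
    child = mutate(recombine(a, b))
    worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
    if fitness(child) > fitness(pop[worst]):
        pop[worst] = child                          # inferior offspring are discarded
    return pop

# usage on OneMax: maximize the number of ones in a bit string
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(6)]
for _ in range(200):
    steady_state_step(
        pop, sum,
        recombine=lambda a, b: a[:4] + b[4:],
        mutate=lambda g: [1 - x if random.random() < 0.1 else x for x in g])
```

Because the worst member is only ever replaced by a strictly better child, the best fitness in the population can never decrease.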
28.1.4.3 The μ-λ Notation
The μ-λ notation has been developed to describe the treatment of parent and offspring
individuals in Evolution Strategies (see Chapter 30) [169, 1243, 2437]. Now it is often used
in other Evolutionary Algorithms as well. Usually, this notation implies truncation selection,
i. e., the deterministic selection of the best individuals from a population. Then,
1. λ denotes the number of offspring (to be) created and
2. μ is the number of parent individuals, i. e., the size of the mating pool (mps = μ).
The population adaptation strategies listed below have partly been borrowed from the German
Wikipedia, The Free Encyclopedia [2938], site for Evolution Strategy¹⁷:
28.1.4.3.1 (μ + λ)-EAs
Using the reproduction operations, λ offspring are created from the μ parent individuals.
From the joint set of offspring and parents, i. e., a temporary population of size ps =
μ + λ, only the μ fittest ones are kept and the worst ones are discarded [302, 1244]. (μ + λ)-strategies
are thus preservative. The naming (μ + λ) implies that only unary reproduction
operators are used. However, in practice, this part of the convention is often ignored [302].
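The (μ + λ) survivor-selection step via truncation can be sketched as follows; maximization and the toy integer individuals are assumptions of the example.

```python
def plus_selection(parents, offspring, mu, fitness):
    """(mu + lambda) survivor selection via truncation: keep the mu
    fittest individuals from the union of parents and offspring
    (maximization). Parents can survive, so the scheme is preservative."""
    return sorted(parents + offspring, key=fitness, reverse=True)[:mu]

survivors = plus_selection([3, 7], [5, 1, 9], mu=2, fitness=lambda x: x)
```

Here the best parent (7) survives alongside the best offspring (9), illustrating the preservative character of the scheme.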
28.1.4.3.2 (μ, λ)-EAs
For extinctive selection patterns, the (μ, λ)-notation is used, as introduced by Schwefel
[2436]. EAs named according to this pattern create λ child individuals from the μ
available genotypes. From these, they only keep the μ best individuals and discard the
parents as well as the worst children [295, 2436]. Here, μ corresponds to the population
size ps.
Again, this notation stands for algorithms which employ no crossover operator but mutation
only. Yet, many works on (μ, λ) Evolution Strategies still apply recombination [302].
28.1.4.3.3 (1 + 1)-EAs
The EA only keeps track of a single individual which is reproduced. The elder and the offspring
together form the population pop, i. e., ps = 2. From these two, only the better individual
will survive and enter the mating pool mate (mps = 1). This scheme basically is equivalent
to Hill Climbing introduced in Chapter 26 on page 229.
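The (1 + 1) scheme, and its relation to Hill Climbing, can be made explicit in a short sketch; the OneMax problem and the 1/n per-bit mutation rate in the usage example are illustrative choices.

```python
import random

def one_plus_one_ea(create, mutate, fitness, steps=1000):
    """(1 + 1)-EA: a single parent produces one offspring per step and
    the better of the two survives (ties go to the offspring); this is
    essentially a stochastic Hill Climbing procedure."""
    parent = create()
    for _ in range(steps):
        child = mutate(parent)
        if fitness(child) >= fitness(parent):
            parent = child
    return parent

# usage on OneMax with a common 1/n bit-flip mutation rate
n = 16
best = one_plus_one_ea(
    create=lambda: [random.randint(0, 1) for _ in range(n)],
    mutate=lambda g: [1 - x if random.random() < 1.0 / n else x for x in g],
    fitness=sum)
```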
¹⁷ http://en.wikipedia.org/wiki/Evolutionsstrategie [accessed 2007-07-03]
28.1.4.3.4 (1, 1)-EAs
The EA creates a single offspring from a single parent. The offspring then replaces its parent.
This is equivalent to a random walk and hence, does not optimize very well, see Section 8.2
on page 114.
28.1.4.3.5 (μ + 1)-EAs
Here, the population contains μ individuals from which one is drawn randomly. This individual is reproduced. Alternatively, two parents can be chosen and recombined [302, 2278]. From the joint set of its offspring and the current population, the least fit individual is removed.
28.1.4.3.6 (μ/ρ, λ)-EAs
The notation (μ/ρ, λ) is a generalization of (μ, λ) used to signify the application of ρ-ary reproduction operators. In other words, the parameter ρ is added to denote the number of parent individuals of one offspring. Normally, only unary mutation operators (ρ = 1) are used in Evolution Strategies, the family of EAs the μ-λ notation stems from. If (binary) recombination is also applied (as normal in general Evolutionary Algorithms), ρ = 2 holds. A special case of (μ/ρ, λ) algorithms is the (μ/μ, λ) Evolution Strategy [1843] which, basically, is similar to an Estimation of Distribution Algorithm.
28.1.4.3.7 (μ/ρ + λ)-EAs
Analogously to (μ/ρ, λ)-Evolutionary Algorithms, the (μ/ρ + λ)-Evolution Strategies are generalized (μ + λ) approaches where ρ denotes the number of parents of an offspring individual.
28.1.4.3.8 (μ′(μ, λ)γ)-EAs
Geyer et al. [1044, 1045, 1046] have developed nested Evolution Strategies where λ′ offspring are created and isolated for γ generations from a population of the size μ′. In each of the γ generations, λ children are created from which the μ fittest are passed on to the next generation. After the γ generations, the best individuals from each of the λ′ isolated candidate solutions are propagated back to the top-level population, i. e., selected. Then, the cycle starts again with λ′ new child individuals. This nested Evolution Strategy can be more efficient than the other approaches when applied to complex multimodal fitness environments [1046, 2279].
28.1.4.4 Elitism
Definition D28.4 (Elitism). An elitist Evolutionary Algorithm [595, 711, 1691] ensures that at least one copy of the best individual(s) of the current generation is propagated on to the next generation.
The main advantage of elitism is that the algorithm is guaranteed to converge: once the basin of attraction of the global optimum has been discovered, the Evolutionary Algorithm will converge to that optimum. On the other hand, the risk of converging to a local optimum is also higher. Elitism is an additional feature of Global Optimization algorithms, a special type of preservative strategy, which is often realized by using a secondary population (called the archive) only containing the non-prevailed individuals. This population is updated at the end of each iteration. It should be noted that
Algorithm 28.3: X ←− elitistMOEA(OptProb, ps, as, mps)
Input: OptProb: a tuple (X, f, cmpf) of the problem space X, the objective functions f, and a comparator function cmpf
Input: ps: the population size
Input: as: the archive size
Input: mps: the mating pool size
Data: t: the generation counter
Data: pop: the population
Data: mate: the mating pool
Data: v: the fitness function resulting from the fitness assigning process
Data: continue: a Boolean variable used for termination detection
Output: X: the set of the best elements found
 1 begin
 2   t ←− 1
 3   archive ←− ()
 4   pop(t = 1) ←− createPop(ps)
 5   continue ←− true
 6   while continue do
 7     pop(t) ←− performGPM(pop(t), gpm)
 8     pop(t) ←− computeObjectives(pop(t), f)
 9     archive ←− updateOptimalSetN(archive, pop, cmpf)
10     archive ←− pruneOptimalSet(archive, as)
11     if terminationCriterion() then
12       continue ←− false
13     else
14       v ←− assignFitness(pop(t), cmpf)
15       mate ←− selection(pop, v, mps)
16       t ←− t + 1
17       pop(t) ←− reproducePop(mate, ps)
18 return extractPhenotypes(extractBest(archive, cmpf))
elitism is usually independent of fitness and concentrates on the true objective values of the individuals. Such an archive-based elitism can also be combined with generational approaches. Algorithm 28.3 extends Algorithm 28.2 on page 256 and the basic evolutionary cycle given in Section 28.1.1 as follows:
1. The archive archive is the set of best individuals found by the algorithm. Initially, it is the empty set ∅. Subsequently, it is updated with the function updateOptimalSetN which inserts only the new, unprevailed elements from the population into it and also removes archived individuals which become prevailed by those new optima. Algorithms that realize such updating are defined in Section 28.6.1 on page 308.
2. If the archive becomes too large (it might theoretically contain uncountably many individuals), pruneOptimalSet reduces it to a proper size, employing techniques like clustering in order to preserve the element diversity. More about pruning can be found in Section 28.6.3 on page 311.
3. You should also notice that both the fitness assignment and selection processes of elitist Evolutionary Algorithms may take the archive as additional parameter. In principle, such archive-based algorithms can also be used in non-elitist Evolutionary Algorithms by simply replacing the parameter archive with ∅.
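A minimal sketch of the non-dominated archive update described in point 1 above may help; this is an illustrative stand-in for updateOptimalSetN (the real algorithms are defined in Section 28.6.1), working directly on minimized objective vectors rather than on individual records:

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch of a non-dominated archive update for minimized objective vectors. */
public class ArchiveSketch {

  /** a dominates b: no worse in every objective and strictly better in at least one. */
  static boolean dominates(double[] a, double[] b) {
    boolean strictlyBetter = false;
    for (int i = 0; i < a.length; i++) {
      if (a[i] > b[i]) return false;
      if (a[i] < b[i]) strictlyBetter = true;
    }
    return strictlyBetter;
  }

  /** Insert new points, dropping dominated newcomers and archived points they dominate. */
  public static List<double[]> update(List<double[]> archive, List<double[]> pop) {
    List<double[]> result = new ArrayList<>(archive);
    for (double[] cand : pop) {
      boolean dominated = false;
      for (double[] kept : result) {
        if (dominates(kept, cand)) { dominated = true; break; }
      }
      if (!dominated) {
        result.removeIf(kept -> dominates(cand, kept)); // purge newly prevailed elements
        result.add(cand);
      }
    }
    return result;
  }

  public static void main(String[] args) {
    List<double[]> pop = new ArrayList<>();
    pop.add(new double[] {1, 5});
    pop.add(new double[] {3, 3});
    pop.add(new double[] {2, 6}); // dominated by (1, 5), never enters the archive
    System.out.println(update(new ArrayList<>(), pop).size()); // 2
  }
}
```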
28.1.5 Conguration Parameters of Evolutionary Algorithms
Evolutionary Algorithms are very robust and efficient optimizers which can often also provide good solutions for hard problems. However, their efficiency is strongly influenced by design choices and configuration parameters. In Chapter 24, we gave some general rules-of-thumb for the design of the representation used in an optimizer. Here we want to list the set of configuration and setup choices available to the user of an EA.
[Figure 28.3 lists the following configuration parameters: basic parameters (population size, mutation and crossover rates); archive/pruning; selection algorithm; fitness assignment process; search space and search operations; genotype-phenotype mapping.]
Figure 28.3: The conguration parameters of Evolutionary Algorithms.
Figure 28.3 illustrates these parameters. The performance and success of an evolutionary optimization approach applied to a problem given by a set of objective functions f and a problem space X is defined by
1. its basic parameter settings such as the population size ps or the crossover rate cr and mutation rate mr,
2. whether it uses an archive archive of the best individuals found and, if so, the archive size as and which pruning technology is used to prevent it from overflowing,
3. the fitness assignment process assignFitness and the selection algorithm selection,
4. the choice of the search space G and the search operations Op, and, finally
5. the genotype-phenotype mapping gpm connecting the search space and the problem space.
In ??, we go more into detail on how to state the conguration of an optimization algorithm
in order to fully describe experiments and to make them reproducible.
28.2 Genotype-Phenotype Mappings
In Evolutionary Algorithms as well as in any other optimization method, the search space G is not necessarily the same as the problem space X. In other words, the elements g ∈ G processed by the search operations searchOp ∈ Op may differ from the candidate solutions x ∈ X which are evaluated by the objective functions f : X → R. In such cases, a mapping between G and X is necessary: the genotype-phenotype mapping (GPM). In Section 4.3 on page 85 we gave a precise definition of the term GPM. Generally, we can distinguish four types of genotype-phenotype mappings:
1. If G = X, the genotype-phenotype mapping is the identity mapping. The search
operations then directly work on the candidate solutions.
2. In many cases, direct mappings, i. e., simple functional transformations which directly
relate parts of the genotypes and phenotypes are used (see Section 28.2.1).
3. In generative mappings, the phenotypes are built step by step in a process directed by
the genotypes. Here, the genotypes serve as programs which direct the phenotypical
developments (see Section 28.2.2.1).
4. Finally, the fourth type of genotype-phenotype mappings additionally incorporates feedback from the environment or the objective functions into the developmental process (see Section 28.2.2.2).
Which kind of genotype-phenotype mapping should be used for a given optimization task strongly depends on the type of the task at hand. For most small-scale and many large-scale problems, direct mappings are completely sufficient. However, especially if large-scale, possibly repetitive or recursive structures are to be created, generative or ontogenic mappings can be the best choice [777, 2735].
28.2.1 Direct Mappings
Direct mappings are explicit functional relationships between the genotypes and the phe-
notypes [887]. Usually, each element of the phenotype is related to one element in the
genotype [2735] or to a group of genes. Also, the algorithmic complexity of these mappings
usually is usually not above O(nlog n) where n is the length of a genotype. Based on the
work of Devert [777], we dene explicit or direct mappings as follows:
Definition D28.5 (Explicit Representation). An explicit (or direct) representation is a representation for which a computable function exists that takes as argument the size of the genotype and gives an upper bound for the number of algorithm steps of the genotype-phenotype mapping until the phenotype-building process has converged to a stable phenotype.
The algorithm steps mentioned in Definition D28.5 can be, for example, the write operations to the phenotype. In the direct mappings we discuss in the remainder of this section, usually each element of a candidate solution is written to exactly once. If bit strings are transformed to vectors of natural numbers, for example, each element of the numerical vector is set to one value. The same goes when scaling real vectors: the elements of the resulting vectors are written to once. In the following, we provide some examples for direct genotype-phenotype mappings.
Example E28.9 (Binary Search Spaces: Gray Coding).
Bit string genomes are sometimes complemented with the application of gray coding18 during the genotype-phenotype mapping. If the value of a natural number changes by one, an arbitrary number of bits may change in its binary complement representation. 8₁₀ has the binary representation 1000₂ but 7₁₀ has 0111₂. In gray code, only one bit changes: (one possible) gray code representation of 8₁₀ is 1100g and 7₁₀ then corresponds to 0100g. Using gray code may thus be good to preserve locality (see Section 14.2) and to reduce hidden biases [504] when dealing with bit-string encoded numbers in optimization. Collins and Eaton [610] studied further encodings for Genetic Algorithms and found that their E-code outperforms both gray and direct binary coding in function optimization.
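One possible gray coding, the binary-reflected Gray code, matches the 8 ↦ 1100 / 7 ↦ 0100 example above and can be sketched as follows (the class and method names are illustrative, not from the book):

```java
/** Illustrative sketch of the binary-reflected Gray code, one possible gray coding. */
public class GrayCode {

  /** Encode: adjacent integers map to codewords differing in exactly one bit. */
  public static int toGray(int n) {
    return n ^ (n >>> 1);
  }

  /** Decode: undo the reflection by folding the higher bits back down. */
  public static int fromGray(int g) {
    int n = g;
    for (int shift = 1; shift < Integer.SIZE; shift <<= 1) n ^= n >>> shift;
    return n;
  }

  public static void main(String[] args) {
    System.out.println(Integer.toBinaryString(toGray(8))); // 1100
    System.out.println(Integer.toBinaryString(toGray(7))); // 100
  }
}
```

In a genotype-phenotype mapping, fromGray would be applied when translating the bit-string genotype into the natural number it stands for.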
Example E28.10 (Real Search Spaces: Normalization).
If bounded real-valued, continuous problem spaces X ⊆ Rⁿ are investigated, it is possible that the bounds of the dimensions differ significantly. Depending on the optimization algorithm and search operations applied, this may deceive the optimization process to give more emphasis to dimensions with large bounds while neglecting dimensions with smaller bounds.
Example E28.11 (Neglecting One Dimension of X).
Assume that a two dimensional problem space X was searched for the best solution of an optimization task. The bounds of the first dimension are x̌₁ = −1 and x̂₁ = 1 and the bounds of the second dimension are x̌₂ = −10⁹ and x̂₂ = 10⁹. The only search operation available would add random numbers normally distributed with expected value μ = 0 and standard deviation σ = 0.01 to each dimension of the candidate vector. Then, a more or less efficient search in the first dimension would be possible.
A genotype-phenotype mapping which allows the search to operate on G = [0, 1]ⁿ instead of X = [x̌₁, x̂₁] × [x̌₂, x̂₂] × . . . × [x̌ₙ, x̂ₙ], where x̌ᵢ ∈ R and x̂ᵢ ∈ R are the lower and upper bounds of the i-th dimension of the problem space X, respectively, would prevent such pitfalls from the beginning:

gpmscale(g) = (x̌ᵢ + g[i] · (x̂ᵢ − x̌ᵢ) ∀i ∈ 1 . . . n)   (28.1)
18
http://en.wikipedia.org/wiki/Gray_coding [accessed 2007-07-03]
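The scaling mapping of Equation 28.1 translates directly into code; the following is an illustrative sketch with made-up names, using the bounds from Example E28.11:

```java
/** Illustrative sketch of the scaling genotype-phenotype mapping of Equation 28.1. */
public class ScaleGpm {

  /** Map a genotype g in [0,1]^n into the box [lo[i], hi[i]] per dimension. */
  public static double[] gpmScale(double[] g, double[] lo, double[] hi) {
    double[] x = new double[g.length];
    for (int i = 0; i < g.length; i++) {
      x[i] = lo[i] + g[i] * (hi[i] - lo[i]); // x_i = lower bound + fraction of the range
    }
    return x;
  }

  public static void main(String[] args) {
    // the bounds from Example E28.11: [-1, 1] and [-1e9, 1e9]
    double[] x = gpmScale(new double[] {0.5, 0.75},
                          new double[] {-1.0, -1e9},
                          new double[] {1.0, 1e9});
    System.out.println(x[0] + " " + x[1]); // 0.0 5.0E8
  }
}
```

The search operations now see every dimension on the same [0, 1] scale, so a mutation step of the same magnitude has the same relative effect in each dimension.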
28.2.2 Indirect Mappings
Different from direct mappings, the phenotype-building process in indirect mappings is more complex [777]. The basic idea of such developmental approaches is that the phenotypes are constructed from the genotypes in an iterative way, by stepwise applications of rules and refinements. These refinements take into consideration some state information such as the structure of the phenotype. The genotypes usually represent the rules and transformations which have to be applied to the candidate solutions during the construction phase.
In nature, life begins with a single cell which divides19 time and again until a mature individual is formed20 after the genetic information has been reproduced. The emergence of a multi-cellular phenotype from its genotypic representation is called embryogenesis21 in biology. Many researchers have drawn their inspiration for indirect mappings from this embryogenesis, from the developmental process which embryos undergo when developing to a fully-functional being inside an egg or the womb of their mother.
Therefore, developmental mappings received colorful names such as artificial embryogeny [2601], artificial ontogeny (AO) [356], computational embryogeny [273, 1637], cellular encoding [1143], morphogenesis [291, 1431] and artificial development [2735].
Natural embryogeny is a highly complex translation from hereditary information to physical structures. Among other things, the DNA, for instance, encodes the structural design information of the human brain. As pointed out by Manos et al. [1829], there are only about 30 thousand active genes in the human genome (2800 million amino acids) for over 100 trillion neural connections in our cerebrum. A huge manifold of information is hence decoded from data which is of a much lower magnitude. This is possible because the same genes can be reused in order to repeatedly create the same pattern. The layout of the light receptors in the eye, for example, is always the same; just their wiring changes.
If we draw inspiration from such a process, we find that features such as gene reuse and (phenotypic) modularity seem to be beneficial. Indeed, allowing for the repetition of the same modules (cells, receptors, . . . ) encoded in the same way many times is one of the reasons why natural embryogeny leads to the development of organisms whose number of modules surpasses the number of genes in the DNA by far.
The inspiration we get from nature can thus lead to good ideas which, in turn, can lead to better results in optimization. Yet, we would like to point out that being inspired by nature is by far not the same as being like nature. The natural embryogenesis processes are much more complicated than the indirect mappings applied in Evolutionary Algorithms. The same goes for approaches which are inspired by chemical diffusion and the like. Furthermore, many genotype-phenotype mappings based on gene reuse, modularity, and feedback between the phenotypes and the environment are not related to and not inspired by nature.
In the following, we divide indirect mappings into two categories. We distinguish GPMs which solely use state information (such as the partly-constructed phenotype) and those that additionally incorporate feedback from external information sources, the environment, into the phenotype building process.
28.2.2.1 Generative Mappings: Phenotypic Interaction
In the following, we provide some examples for generative genotype-phenotype mappings.
28.2.2.1.1 Genetic Programming
19
http://en.wikipedia.org/wiki/Cell_division [accessed 2007-07-03]
20
Matter of fact, cell division will continue until the individual dies. However, this is not important here.
21
http://en.wikipedia.org/wiki/Embryogenesis [accessed 2007-07-03]
Example E28.12 (Grammar-Guided Genetic Programming).
Genetic Programming, as we will discuss in Chapter 31, is the family of Evolutionary Algorithms which deals with the evolutionary synthesis of programs. In grammar-guided Genetic Programming, usually the grammar of the programming language in which these programs are to be specified is given. Each gene in the genotypes of these approaches usually encodes for the application of a single rule of the grammar, as is, for instance, the case in Gads (see ?? on page ??) which uses an integer string genome where each integer corresponds to the ID of a rule. This way, step by step, complex sentences can be built by unfolding the genotype. In Grammatical Evolution (??), integer strings with genes identifying grammatical productions are used as well. Depending on the un-expanded variables in the phenotype, these genes may have different meanings. Applying the rules identified by them may introduce new unexpanded variables into the phenotype. Hence, the runtime of such a mapping is not necessarily known in advance.
Example E28.13 (Graph-creating Trees).
In Standard Genetic Programming approaches, the genotypes are trees. Trees are very useful to express programs and programs can be used to create many different structures. In a genotype-phenotype mapping, the programs may, for example, be executed to build graphs (as in Luke and Spector's Edge Encoding [1788] discussed here in ?? on page ??) or electrical circuits [1608]. In the latter case, the structure of the phenotype strongly depends on how and how often given instructions or automatically defined functions in the genotypes are called.
28.2.2.2 Ontogenic Mappings: Environmental Interaction
Full ontogenic (or developmental [887]) mappings even go a step farther than just utilizing
state information during the transformation of the genotypes to phenotypes [777]. This
additional information can stem from the objective functions. If simulations are involved
in the computation of the objective functions, the mapping may access the full simulation
information as well.
Example E28.14 (Developmental Wing Construction).
The phenotype could, for instance, be a set of points describing the surface of an airplane's wing. The genotype could be a real vector holding the weights of an artificial neural network. The artificial neural network may then represent rules for moving the surface points as feedback to the air pressure on them given by a simulation of the wing. These rules can describe a distinct wing shape which develops according to the feedback from the environment.
Example E28.15 (Developmental Termite Nest Construction).
Estevez and Lipson [887] evolved rules for the construction of a termite nest. The phenotype is the nest which consists of multiple cells. The fitness of the nest depends on how close the temperature in each cell comes to a given target temperature. The rules which were evolved in Estevez and Lipson's work are directly applied to the phenotypes in the simulations. They decide whether and where cells were added to or removed from the nest based on the temperature and density of the cells in the simulation.
28.3 Fitness Assignment
28.3.1 Introduction
In multi-objective optimization, each candidate solution p.x is characterized by a vector of objective values f(p.x). The early Evolutionary Algorithms, however, were intended to perform single-objective optimization, i. e., were capable of finding solutions for a single objective function alone. Historically, many algorithms for survival and mating selection base their decision on such a scalar value.
A fitness assignment process, as defined in Definition D5.1 on page 94 in Section 5.2, is thus used to transform the information given by the vector of objective values f(p.x) for the phenotype p.x ∈ X of an individual p ∈ G × X to a scalar fitness value v(p). This assignment process takes place in each generation of the Evolutionary Algorithm and, as stated in Section 5.2, may thus change in each iteration.
Including a fitness assignment step may be beneficial in a MOEA, but can also prove useful in a single-objective scenario: The fitness assigned to an individual may not just reflect the absolute utility of an individual, but its suitability relative to the other individuals in the population pop. Then, the fitness function has the character of a heuristic function (see Definition D1.14 on page 34).
Besides such ranking data, fitness can also incorporate density/niching information. This way, not only the quality of a candidate solution is considered, but also the overall diversity of the population. Therefore, either the phenotypes p.x ∈ X or the genotypes p.g ∈ G may be analyzed and compared. This can improve the chance of finding the global optima as well as the performance of the optimization algorithm significantly. If many individuals in the population occupy the same rank or do not dominate each other, for instance, such information will be very helpful.
The fitness v(p.x) thus may not only depend on the candidate solution p.x itself, but on the whole population pop of the Evolutionary Algorithm (and on the archive archive of optimal elements, if available). In practical realizations, the fitness values are often stored in a special member variable in the individual records. Therefore, v(p) can be considered as a mapping that returns the value of such a variable which has previously been stored there by a fitness assignment process assignFitness(pop, cmpf).
Definition D28.6 (Fitness Assignment Process). A fitness assignment process assignFitness uses a comparison criterion cmpf (see Definition D3.20 on page 77) to create a function v : G × X → R⁺ (see Definition D5.1 on page 94) which relates a scalar fitness value to each individual p in the population pop as stated in Equation 28.2 (and archive archive, if an archive is available, Equation 28.3).

v = assignFitness(pop, cmpf) ⇒ v(p) ∈ R⁺ ∀p ∈ pop   (28.2)
v = assignFitnessArch(pop, archive, cmpf) ⇒ v(p) ∈ R⁺ ∀p ∈ pop ∪ archive   (28.3)
Listing 55.14 on page 718 provides a Java interface which enables us to implement different fitness assignment processes according to Definition D28.6. In the context of this book, we generally minimize fitness values, i. e., the lower the fitness of a candidate solution, the better. If no density or sharing information is included and the fitness only reflects the strict partial order introduced by the Pareto (see Section 3.3.5) or a prevalence relation (see Section 3.5.2), Equation 28.4 will hold. If further information such as diversity metrics are included in the fitness, this equation may no longer be applicable.

p₁.x ≺ p₂.x ⇒ v(p₁) < v(p₂) ∀p₁, p₂ ∈ pop ∪ archive   (28.4)
28.3.2 Weighted Sum Fitness Assignment
The most primitive fitness assignment strategy would be to just use a weighted sum of the objective values. This approach is very static and brings about the same problems as the weighted sum-based approach for defining what an optimum is, discussed in Section 3.3.3 on page 62. It makes no use of Pareto or prevalence relations. For computing the weighted sum of the different objective values of a candidate solution, we reuse Equation 3.18 on page 62 from the weighted sum optimum definition. The weights have to be chosen in a way that ensures that v(p) ∈ R⁺ holds for all individuals p.x ∈ X.

v = assignFitnessWeightedSum(pop) ⇒ (v(p) = ws(p.x) ∀p ∈ pop)   (28.5)
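The weighted sum itself is a one-liner; the following illustrative sketch (names and weights are made up, not the book's ws from Equation 3.18) shows the computation for one individual's objective vector:

```java
/** Illustrative sketch of a weighted-sum fitness computation (Equation 28.5). */
public class WeightedSumFitness {

  /** v(p) = sum over all objectives of weight[i] * objective[i]. */
  public static double ws(double[] objectives, double[] weights) {
    double v = 0.0;
    for (int i = 0; i < objectives.length; i++) {
      v += weights[i] * objectives[i];
    }
    return v;
  }

  public static void main(String[] args) {
    // two minimized objectives, equal weights
    System.out.println(ws(new double[] {2.0, 10.0}, new double[] {0.5, 0.5})); // 6.0
  }
}
```

The static character of the approach is visible here: the weights fix one trade-off between the objectives before the optimization even starts.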
28.3.3 Pareto Ranking
Another simple method for computing fitness values is to let them directly reflect the Pareto domination (or prevalence) relation. There are two ways for doing this:
1. We can assign to each individual p₁ a value inversely proportional to the number of other individuals p₂ it prevails:

v₁(p₁) = 1 / (|{p₂ ∈ pop : p₁.x ≺ p₂.x}| + 1)   (28.6)

2. or we can assign the number of other individuals p₃ it is dominated by to it:

v₂(p₁) = |{p₃ ∈ pop : p₃.x ≺ p₁.x}|   (28.7)
In case 1, individuals that dominate many others will receive a lower fitness value than those which are prevailed by many. Since fitness is subject to minimization, they will strongly be preferred. Although this idea looks appealing at first glance, it has a decisive drawback which can also be observed in Example E28.16: It promotes individuals that reside in crowded regions of the problem space and underrates those in sparsely explored areas. If an individual has many neighbors, it can potentially prevail many other individuals. However, a non-prevailed candidate solution in a sparsely explored region of the search space may not prevail any other individual and hence, receive fitness 1, the worst possible fitness under v₁. Thus, the fitness assignment process achieves exactly the opposite of what we want. Instead of exploring the problem space and delivering a wide scan of the frontier of best possible candidate solutions, it will focus all effort on a small set of individuals. We will only obtain a subset of the best solutions and it is even possible that this fitness assignment method leads to premature convergence to a local optimum.
A much better second approach for fitness assignment has first been proposed by Goldberg [1075]. Here, the idea is to assign to each candidate solution the number of individuals it is prevailed by [1125, 1783]. This way, the previously mentioned negative effects will not occur and the exploration pressure is applied to a much wider area of the Pareto frontier. This so-called Pareto ranking can be performed by first removing all non-prevailed (non-dominated) individuals from the population and assigning the rank 0 to them (since they are prevailed by no individual and, according to Equation 28.7, should receive v₂ = 0). Then, the same is performed with the rest of the population. The individuals only dominated by those on rank 0 (now non-dominated) will be removed and get the rank 1. This is repeated until all candidate solutions have a proper fitness assigned to them. Since we follow the idea of the freer prevalence comparators instead of Pareto dominance relations, we will synonymously refer to this approach as Prevalence ranking and define Algorithm 28.4 which is implemented in Java in Listing 56.16 on page 759. It should be noted that this simple algorithm has a complexity of O(ps² · |f|). A more efficient approach with complexity O(ps · log^(|f|−1) ps) has been introduced by Jensen [1441] based on a divide-and-conquer strategy.
Algorithm 28.4: v₂ ←− assignFitnessParetoRank(pop, cmpf)
Input: pop: the population to assign fitness values to
Input: cmpf: the prevalence comparator defining the prevalence relation
Data: i, j, cnt: the counter variables
Output: v₂: a fitness function reflecting the Prevalence ranking
1 begin
2   for i ←− len(pop) − 1 down to 0 do
3     cnt ←− 0
4     p ←− pop[i]
5     for j ←− len(pop) − 1 down to 0 do
        // Check whether cmpf(pop[j].x, p.x) < 0
6       if (j ≠ i) ∧ (pop[j].x ≺ p.x) then cnt ←− cnt + 1
7     v₂(p) ←− cnt
8 return v₂
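The double loop of Algorithm 28.4 can be sketched as follows; this illustrative version works on raw objective vectors and uses plain Pareto dominance (minimization) as a stand-in for the more general prevalence comparator cmpf:

```java
import java.util.Arrays;

/** Illustrative sketch of Algorithm 28.4 with Pareto dominance standing in for cmpf. */
public class ParetoRanking {

  /** a dominates b: no worse in every objective and strictly better in at least one. */
  static boolean dominates(double[] a, double[] b) {
    boolean strictlyBetter = false;
    for (int i = 0; i < a.length; i++) {
      if (a[i] > b[i]) return false;
      if (a[i] < b[i]) strictlyBetter = true;
    }
    return strictlyBetter;
  }

  /** v2[i] = number of individuals dominating individual i; O(ps^2 * |f|). */
  public static int[] assignFitnessParetoRank(double[][] pop) {
    int[] v2 = new int[pop.length];
    for (int i = 0; i < pop.length; i++) {
      for (int j = 0; j < pop.length; j++) {
        if (j != i && dominates(pop[j], pop[i])) v2[i]++;
      }
    }
    return v2;
  }

  public static void main(String[] args) {
    double[][] pop = { {1, 9}, {4, 4}, {9, 1}, {5, 5}, {9, 9} };
    // the first three points are non-dominated and get fitness 0
    System.out.println(Arrays.toString(assignFitnessParetoRank(pop))); // [0, 0, 0, 1, 4]
  }
}
```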
Example E28.16 (Pareto Ranking Fitness Assignment).
Figure 28.4 and Table 28.1 illustrate the Pareto relations in a population of 15 individuals and their corresponding objective values f₁ and f₂, both subject to minimization.
[Plot of the 15 individuals over the objectives f₁ and f₂; the Pareto frontier is marked.]
Figure 28.4: An example scenario for Pareto ranking.
p.x   dominates                        is dominated by                                 v₁(p)   v₂(p)
 1    5, 6, 8, 9, 14, 15               (none)                                          1/7     0
 2    6, 7, 8, 9, 10, 11, 13, 14, 15   (none)                                          1/10    0
 3    12, 13, 14, 15                   (none)                                          1/5     0
 4    (none)                           (none)                                          1       0
 5    8, 15                            1                                               1/3     1
 6    8, 9, 14, 15                     1, 2                                            1/5     2
 7    9, 10, 11, 14, 15                2                                               1/6     1
 8    15                               1, 2, 5, 6                                      1/2     4
 9    14, 15                           1, 2, 6, 7                                      1/3     4
10    14, 15                           2, 7                                            1/3     2
11    14, 15                           2, 7                                            1/3     2
12    13, 14, 15                       3                                               1/4     1
13    15                               2, 3, 12                                        1/2     3
14    15                               1, 2, 3, 6, 7, 9, 10, 11, 12                    1/2     9
15    (none)                           1, 2, 3, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14      1       13
Table 28.1: The Pareto domination relation of the individuals illustrated in Figure 28.4.
We computed the fitness values for both approaches for the individuals in the population given in Figure 28.4 and noted them in Table 28.1. In column v₁(p), we provide the fitness values which are inversely proportional to the number of individuals dominated by p. A good example for the problem that approach 1 prefers crowded regions in the search space while ignoring interesting candidate solutions in less populated regions are the four non-prevailed individuals 1, 2, 3, 4 on the Pareto frontier. The best fitness is assigned to the element 2, followed by individual 1. Although individual 7 is dominated (by 2), its fitness is better than the fitness of the non-dominated element 3.
The candidate solution 4 gets the worst possible fitness 1, since it prevails no other element. Its chances for reproduction are similarly low as those of individual 15 which is dominated by all other elements except 4. Hence, both candidate solutions will most probably not be selected and vanish in the next generation. The loss of individual 4 will greatly decrease the diversity and further increase the focus on the crowded area near 1 and 2.
Column v₂(p) in Table 28.1 shows that all four non-prevailed individuals now have the best possible fitness 0 if the method given as Algorithm 28.4 is applied. Therefore, the focus is not centered on the overcrowded regions.
However, the region around the individuals 1 and 2 has probably already been extensively explored, whereas the surrounding of candidate solution 4 is rather unknown. More individuals are present in the top-left area of Figure 28.4 and at least two individuals with the same fitness as 3 and 4 reside there. Thus, chances are still that more individuals from this area will be selected than from the bottom-right area. A better approach to fitness assignment should incorporate such information and put a bit more pressure into the direction of individual 4, in order to make the Evolutionary Algorithm investigate this area more thoroughly. Algorithm 28.4 cannot fulfill this hope.
28.3.4 Sharing Functions
Previously, we have mentioned that the drawback of Pareto ranking is that it does not incorporate any information about whether the candidate solutions in the population reside closely to each other or in regions of the problem space which are only sparsely covered by individuals. Sharing, as a method for including such diversity information into the fitness assignment process, was introduced by Holland [1250] and later refined by Deb [735], Goldberg and Richardson [1080], and Deb and Goldberg [742] and further investigated by [1899, 2063, 2391].
Definition D28.7 (Sharing Function). A sharing function Sh : R⁺ → R⁺ is a function that maps the distance d = dist(p₁, p₂) between two individuals p₁ and p₂ to the real interval [0, 1]. The value of Shσ is 1 if d = 0 and 0 if d exceeds a specific constant value σ.

Shσ(d = dist(p₁, p₂)) =
  1          if d ≤ 0
  ∈ [0, 1]   if 0 < d < σ
  0          otherwise                                   (28.8)
Sharing functions can be employed in many different ways and are used by a variety of fitness assignment processes [735, 1080]. Typically, the simple triangular function Sh tri [1268] or one of its either convex (Sh cvexξ) or concave (Sh ccavξ) pendants with the power ξ ∈ R⁺, ξ > 0 are applied. Besides using different powers of the distance-σ-ratio, another approach is the exponential sharing method Sh exp.

Sh triσ(d)     = 1 − d/σ           if 0 ≤ d < σ;  0 otherwise   (28.9)
Sh cvexσ,ξ(d)  = (1 − d/σ)^ξ       if 0 ≤ d < σ;  0 otherwise   (28.10)
Sh ccavσ,ξ(d)  = (1 − d/σ)^(1/ξ)   if 0 ≤ d < σ;  0 otherwise   (28.11)
Sh expσ,ξ(d)   =
  1   if d ≤ 0
  0   if d ≥ σ
  (e^(−ξ·d/σ) − e^(−ξ)) / (1 − e^(−ξ))   otherwise              (28.12)
For sharing, the distance of the individuals in the search space G as well as their distance in the problem space X or the objective space Y may be used. If the candidate solutions are real vectors in R^n, we could use the Euclidean distance of the phenotypes of the individuals directly, i. e., compute dist eucl(p1.x, p2.x). However, this only makes sense in lower-dimensional spaces (small values of n), since the Euclidean distance in R^n does not convey much information for high n.

In Genetic Algorithms, where the search space is the set of all bit strings G = B^n of length n, another suitable approach would be to use the Hamming distance²² dist ham(p1.g, p2.g) of the genotypes. The work of Deb [735] indicates that phenotypical sharing will often be superior to genotypical sharing.
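Both distance options can be sketched in a few lines of Python (helper names are ours, mirroring the dist eucl / dist ham notation above):

```python
import math

def dist_eucl(x1, x2):
    """Euclidean distance between two real-vector phenotypes."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x1, x2)))

def dist_ham(g1, g2):
    """Hamming distance between two equally long bit-string genotypes:
    the number of positions in which they differ."""
    assert len(g1) == len(g2)
    return sum(1 for a, b in zip(g1, g2) if a != b)
```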
Definition D28.8 (Niche Count). The niche count m(p, P) [738, 1899] of an individual p ∈ P is the sum of its sharing values with all individuals in a list P.

∀ p ∈ P:   m(p, P) = Σ_{p′ ∈ P} Sh_σ(dist(p, p′))        (28.13)

The niche count m is always greater than or equal to 1, since p ∈ P and, hence, Sh_σ(dist(p, p)) = 1 is computed and added up at least once. The original sharing approach was developed for fitness assignment in single-objective optimization where only one objective function f was subject to maximization. In this case, the objective value was simply divided by the niche count, punishing solutions in crowded regions [1899]. The goal of
²² See ?? on page ?? for more information on the Hamming distance.
sharing was to distribute the population over a number of different peaks in the fitness landscape, with each peak receiving a fraction of the population proportional to its height [1268]. The result of dividing the fitness by the niche counts strongly depends on the height differences of the peaks and thus, on the complexity class²³ of f. On an f1 ∈ O(x), for instance, the influence of m is much bigger than on an f2 ∈ O(e^x).
By multiplying the niche count m with predetermined fitness values v′, we can use this approach for fitness minimization in conjunction with a variety of other fitness assignment processes, but we also inherit its shortcomings:

v(p) = v′(p) · m(p, pop),   v′ = assignFitness(pop, cmp_f)        (28.14)
Sharing was traditionally combined with fitness proportionate, i. e., roulette wheel, selection²⁴. Oei et al. [2063] have shown that if the sharing function is computed using the parental individuals of the old population and then naively combined with the more sophisticated tournament selection²⁵, the resulting behavior of the Evolutionary Algorithm may be chaotic. They suggested using the partially filled new population to circumvent this problem. The layout of Evolutionary Algorithms, as defined in this book, bases the fitness computation on the whole set of new individuals and assumes that their objective values have already been completely determined. In other words, such issues simply do not exist in multi-objective Evolutionary Algorithms as introduced here, and the chaotic behavior does not occur.
For computing the niche count m, O(n²) comparisons are needed. According to Goldberg et al. [1085], sampling the population can be sufficient to approximate m in order to avoid this quadratic complexity.
28.3.5 Variety Preserving Fitness Assignment
Using sharing and the niche counts naively leads to more or less unpredictable effects. Of course, it promotes solutions located in sparsely populated niches, but how much their fitness will be improved is rather unclear. Using distance measures which are not normalized can lead to strange effects, too. Imagine two objective functions f1 and f2. If the values of f1 span from 0 to 1 for the individuals in the population whereas those of f2 range from 0 to 10 000, the components of f1 will most often be negligible in the Euclidean distance of two individuals in the objective space Y. Another problem is that the effect of simple sharing on the pressure into the direction of the Pareto frontier is not obvious either, or depends on the sharing approach applied. Some methods simply add a niche count to the Pareto rank, which may cause non-dominated individuals to have worse fitness than any others in the population. Other approaches scale the niche count into the interval [0, 1) before adding it, which not only ensures that non-dominated individuals have the best fitness but also leaves the relation between individuals at different ranks intact; this, however, does not further variety very much.

In some of our experiments [2888, 2912, 2914], we applied a Variety Preserving Fitness Assignment, an approach based on Pareto ranking using prevalence comparators and well-dosed sharing. We developed it in order to mitigate the previously mentioned side effects and to balance the evolutionary pressure between optimizing the objective functions and maximizing the variety inside the population. In the following, we will describe it in detail and give its specification in Algorithm 28.5.
Before this fitness assignment process can begin, all individuals with infinite objective values must be removed from the population pop. If such a candidate solution is optimal, i. e., if it has negatively infinite objective values in a minimization process, for instance, it should receive fitness zero, since fitness is subject to minimization. If the

²³ See Chapter 12 on page 143 for a detailed introduction into complexity and the O-notation.
²⁴ Roulette wheel selection is discussed in Section 28.4.3 on page 290.
²⁵ You can find an outline of tournament selection in Section 28.4.4 on page 296.
Algorithm 28.5: v ← assignFitnessVariety(pop, cmp_f)

Input: pop: the population
Input: cmp_f: the comparator function
Input: [implicit] f: the set of objective functions
Data: . . . : sorry, no space here, we'll discuss this in the text
Output: v: the fitness function

1   begin
      /* If needed: remove all elements with infinite objective values from pop
         and assign fitness 0 or len(pop) + √(len(pop)) + 1 to them. Then
         compute the prevalence ranks. */
2     ranks ← createList(len(pop), 0)
3     maxRank ← 0
4     for i ← len(pop) − 1 down to 0 do
5       for j ← i − 1 down to 0 do
6         k ← cmp_f(pop[i].x, pop[j].x)
7         if k < 0 then ranks[j] ← ranks[j] + 1
8         else if k > 0 then ranks[i] ← ranks[i] + 1
9       if ranks[i] > maxRank then maxRank ← ranks[i]
      // determine the ranges of the objectives
10    mins ← createList(|f|, +∞)
11    maxs ← createList(|f|, −∞)
12    foreach p ∈ pop do
13      for i ← |f| down to 1 do
14        if f_i(p.x) < mins[i − 1] then mins[i − 1] ← f_i(p.x)
15        if f_i(p.x) > maxs[i − 1] then maxs[i − 1] ← f_i(p.x)
16    rangeScales ← createList(|f|, 1)
17    for i ← |f| − 1 down to 0 do
18      if maxs[i] > mins[i] then rangeScales[i] ← 1/(maxs[i] − mins[i])
      // base a sharing value on the scaled Euclidean distance of all elements
19    shares ← createList(len(pop), 0)
20    minShare ← +∞
21    maxShare ← −∞
22    for i ← len(pop) − 1 down to 0 do
23      curShare ← shares[i]
24      for j ← i − 1 down to 0 do
25        dist ← 0
26        for k ← |f| down to 1 do
27          dist ← dist + [(f_k(pop[i].x) − f_k(pop[j].x)) · rangeScales[k − 1]]²
28        s ← Sh exp_{√|f|, 16}(√dist)
29        curShare ← curShare + s
30        shares[j] ← shares[j] + s
31      shares[i] ← curShare
32      if curShare < minShare then minShare ← curShare
33      if curShare > maxShare then maxShare ← curShare
      // finally, compute the fitness values
34    scale ← 1/(maxShare − minShare) if maxShare > minShare, 1 otherwise
35    for i ← len(pop) − 1 down to 0 do
36      if ranks[i] > 0 then
37        v(pop[i]) ← ranks[i] + √maxRank · scale · (shares[i] − minShare)
38      else v(pop[i]) ← scale · (shares[i] − minShare)
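The following Python sketch condenses Algorithm 28.5 under some simplifying assumptions: individuals are plain phenotypes, individuals with infinite objective values have already been removed, and cmp_f(a, b) returns a negative number if a prevails b and a positive one if b prevails a. It is meant as a reading aid, not as the book's reference implementation, and all names are ours:

```python
import math

def assign_fitness_variety(pop, cmp_f, fs):
    """Sketch of the Variety Preserving Fitness Assignment (Algorithm 28.5).
    pop: list of candidate solutions, fs: list of objective functions.
    Returns fitness values subject to minimization."""
    n, m = len(pop), len(fs)

    # prevalence ranks: how many other individuals prevail each one
    ranks = [0] * n
    for i in range(n):
        for j in range(i):
            k = cmp_f(pop[i], pop[j])
            if k < 0:
                ranks[j] += 1
            elif k > 0:
                ranks[i] += 1
    max_rank = max(ranks)

    # scale every objective dimension into [0, 1]
    vals = [[f(p) for p in pop] for f in fs]
    scales = [1.0 / (max(col) - min(col)) if max(col) > min(col) else 1.0
              for col in vals]

    # exponential sharing (Equation 28.12) with sigma = sqrt(m) and xi = 16
    sigma, xi = math.sqrt(m), 16.0

    def sh_exp(d):
        if d <= 0.0:
            return 1.0
        if d >= sigma:
            return 0.0
        return ((math.exp(-xi * d / sigma) - math.exp(-xi))
                / (1.0 - math.exp(-xi)))

    # accumulate the share of every individual with every other one
    shares = [0.0] * n
    for i in range(n):
        for j in range(i):
            d = math.sqrt(sum(((vals[k][i] - vals[k][j]) * scales[k]) ** 2
                              for k in range(m)))
            s = sh_exp(d)
            shares[i] += s
            shares[j] += s
    lo, hi = min(shares), max(shares)
    scale = 1.0 / (hi - lo) if hi > lo else 1.0

    # non-prevailed individuals stay in [0, 1]; all others are penalized by
    # at most sqrt(max_rank) on top of their rank
    return [r + math.sqrt(max_rank) * scale * (s - lo) if r > 0
            else scale * (s - lo)
            for r, s in zip(ranks, shares)]
```

On a small bi-objective population, the non-prevailed individuals end up with the lowest fitness, and among them the one in the sparsest region is preferred.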
individual is infeasible, on the other hand, its fitness should be set to len(pop) + √(len(pop)) + 1, which is 1 larger than every other fitness value that may be assigned by Algorithm 28.5.

In lines 2 to 9, a list ranks is created which is used to efficiently compute the Pareto rank of every candidate solution in the population. Actually, the term prevalence rank would be more precise in this case, since we use prevalence comparisons as introduced in Section 3.5.2. Therefore, Variety Preserving is not limited to Pareto optimization but may also incorporate External Decision Makers (Section 3.5.1) or the method of inequalities (Section 3.4.5).

The highest rank encountered in the population is stored in the variable maxRank. This value may be zero if the population contains only non-prevailed elements. The lowest rank will always be zero since the prevalence comparators cmp_f define order relations which are non-circular by definition.²⁶ maxRank is used to determine the maximum penalty for solutions in an overly crowded region of the search space later on.
From line 10 to 18, the maximum and the minimum values that each objective function takes on when applied to the individuals in the population are obtained. These values are used to store the inverse of their ranges in the array rangeScales, which will be used to scale all distances in each dimension (objective) of the individuals into the interval [0, 1]. There are |f| objective functions in f and, hence, the maximum Euclidean distance between two candidate solutions in the objective space (scaled to normalization) becomes √|f|. It occurs if all the distances in the single dimensions are 1.
Between line 19 and 33, the scaled distance from every individual to every other candidate solution in the objective space is determined. This distance is used to aggregate share values (in the array shares). Therefore, again two nested loops are needed (lines 22 and 24). The distance components of two individuals pop[i] and pop[j] are scaled and summed up in the variable dist in line 27. The Euclidean distance between them is √dist, which is used to determine a sharing value in line 28. We decided for exponential sharing, as introduced in Equation 28.12 on page 278, with power ξ = 16 and σ = √|f|. For every individual, we sum up all the shares (see line 30). While doing so, the minimum and maximum total shares are stored in the variables minShare and maxShare in lines 32 and 33.

These variables are used to scale all sharing values again into the interval [0, 1] (line 34), so the individual in the most crowded region always has a total share of 1 and the most remote individual always has a share of 0. Basically, two things are now known about the individuals in pop:

1. their Pareto/prevalence ranks, stored in the array ranks, giving information about their relative quality according to the objective values, and

2. their sharing values, held in shares, denoting how densely crowded the area around them is.
With this information, the final fitness value of an individual p is determined as follows: if p is non-prevailed, i. e., its rank is zero, its fitness is its scaled total share (line 38). Otherwise, the square root of the maximum rank, √maxRank, is multiplied with the scaled share and added to the rank (line 37). By doing so, the supremacy of non-prevailed individuals in the population is preserved. Yet, these individuals can compete with each other based on the crowdedness of their location in the objective space. All other candidate solutions may degenerate in rank, but at most by the square root of the worst rank.
Example E28.17 (Variety Preserving Fitness Assignment).
Let us now apply Variety Preserving to the examples for Pareto ranking from Section 28.3.3. In Table 28.2, we again list all the candidate solutions from Figure 28.4 on page 276, this time with their objective values obtained with f1 and f2 corresponding to their coordinates in the diagram. In the third column, you can find the Pareto ranks of the individuals as they have been listed in Table 28.1 on page 277. The columns share/u and share/s correspond to the total sharing sums of the individuals, unscaled and scaled into [0, 1].
²⁶ In all order relations imposed on finite sets there is always at least one smallest element. See Section 51.8 on page 645 for more information.
[Figure: a surface plot of the share potential over the objective space (f1, f2) for the 15 example individuals, with the Pareto frontier marked.]

Figure 28.5: The sharing potential in the Variety Preserving Example E28.17.
 x   f1   f2   rank   share/u   share/s    v(x)
 1    1    7      0     0.71      0.779    0.779
 2    2    4      0     0.239     0.246    0.246
 3    6    2      0     0.201     0.202    0.202
 4   10    1      0     0.022     0        0
 5    1    8      1     0.622     0.679    3.446
 6    2    7      2     0.906     1        5.606
 7    3    5      1     0.531     0.576    3.077
 8    2    9      4     0.314     0.33     5.191
 9    3    7      4     0.719     0.789    6.845
10    4    6      2     0.592     0.645    4.325
11    5    5      2     0.363     0.386    3.39
12    7    3      1     0.346     0.366    2.321
13    8    4      3     0.217     0.221    3.797
14    7    7      9     0.094     0.081    9.292
15    9    9     13     0.025     0.004   13.01

Table 28.2: An example for Variety Preserving based on Figure 28.4.
But first things first: as already mentioned, we know the Pareto ranks of the candidate solutions from Table 28.1, so the next step is to determine the ranges of values the objective functions take on for the example population. These can again easily be read from Figure 28.4. f1 spans from 1 to 10, which leads to rangeScale[0] = 1/9, and rangeScale[1] = 1/8 since the maximum of f2 is 9 and its minimum is 1. With this, we can now compute the (dimensionally scaled) distances amongst the candidate solutions in the objective space, i. e., the values of √dist in Algorithm 28.5, as well as the corresponding values of the sharing function Sh exp_{√|f|, 16}(√dist). We noted these in Table 28.3, using the upper triangle of the table for the distances and the lower triangle for the shares.
Upper triangle: distances. Lower triangle: corresponding share values.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
1 0.391 0.836 1.25 0.125 0.111 0.334 0.274 0.222 0.356 0.51 0.833 0.863 0.667 0.923
2 0.012 0.51 0.965 0.512 0.375 0.167 0.625 0.391 0.334 0.356 0.569 0.667 0.67 0.998
3 7.7E-5 0.003 0.462 0.933 0.767 0.502 0.981 0.708 0.547 0.391 0.167 0.334 0.635 0.936
4 6.1E-7 1.8E-5 0.005 1.329 1.163 0.925 1.338 1.08 0.914 0.747 0.417 0.436 0.821 1.006
5 0.243 0.003 2.6E-5 1.8E-7 0.167 0.436 0.167 0.255 0.417 0.582 0.914 0.925 0.678 0.898
6 0.284 0.014 1.7E-4 1.8E-6 0.151 0.274 0.25 0.111 0.255 0.417 0.747 0.765 0.556 0.817
7 0.023 0.151 0.003 2.9E-5 0.007 0.045 0.512 0.25 0.167 0.222 0.51 0.569 0.51 0.833
8 0.045 0.001 1.5E-5 1.5E-7 0.151 0.059 0.003 0.274 0.436 0.601 0.933 0.914 0.609 0.778
9 0.081 0.012 3.3E-4 4.8E-6 0.056 0.284 0.059 0.045 0.167 0.334 0.669 0.67 0.444 0.712
10 0.018 0.023 0.002 3.2E-5 0.009 0.056 0.151 0.007 0.151 0.167 0.502 0.51 0.356 0.67
11 0.003 0.018 0.012 2.1E-4 0.001 0.009 0.081 0.001 0.023 0.151 0.334 0.356 0.334 0.669
12 8E-5 0.002 0.151 0.009 3.2E-5 2.1E-4 0.003 2.6E-5 0.001 0.003 0.023 0.167 0.5 0.782
13 5.7E-5 0.001 0.023 0.007 2.9E-5 1.7E-4 0.002 3.2E-5 0.001 0.003 0.018 0.151 0.391 0.635
14 0.001 0.001 0.001 9.3E-5 4.6E-4 0.002 0.003 0.001 0.007 0.018 0.023 0.003 0.012 0.334
15 2.9E-5 1.2E-5 2.5E-5 1.1E-5 3.9E-5 9.7E-5 8E-5 1.5E-4 3.1E-4 0.001 0.001 1.4E-4 0.001 0.023
Table 28.3: The distance and sharing matrix of the example from Table 28.2.
The value of the sharing function can be imagined as a scalar field, as illustrated in Figure 28.5. Each individual in the population can be considered as an electron that builds up an electrical field around it, resulting in a potential. If two electrons come close, repulsing forces occur, which is pretty much what we want to achieve with Variety Preserving. Unlike an electrical field, the sharing potential falls off exponentially, resulting in the relatively steep spikes in Figure 28.5 which give proximity and density a heavier influence. Electrons in atoms or on planets are limited in their movement by other influences like nuclear forces or gravity, which are often stronger than the electromagnetic force. In Variety Preserving, the prevalence rank plays this role: as you can see in Table 28.2, its influence on the fitness is often dominant.
By summing up the single sharing potentials for each individual in the example, we obtain the fifth column of Table 28.2, the unscaled share values. Their minimum is around 0.022 and their maximum is around 0.906. Therefore, we subtract 0.022 from each of these values and multiply the result with 1.131. By doing so, we obtain the column share/s. Finally, we can compute the fitness values v(x) according to lines 38 and 37 of Algorithm 28.5.
The last column of Table 28.2 lists these results. All non-prevailed individuals have retained a fitness value less than one, lower than that of any other candidate solution in the population. However, amongst these best individuals, candidate solution 4 is strongly preferred, since it is located in a very remote region of the objective space. Individual 1 is the least interesting non-dominated one, because it has the densest neighborhood in Figure 28.4. In this neighborhood, the individuals 5 and 6 with the Pareto ranks 1 and 2 are located. They are strongly penalized by the sharing process and receive the fitness values v(5) = 3.446 and v(6) = 5.606. In other words, individual 5 becomes less interesting than candidate solution 7, which has a worse Pareto rank. Individual 6 is now even worse than individual 8, compared to which it would have a fitness better by two if strict Pareto ranking were applied.

Based on these fitness values, algorithms like tournament selection (see Section 28.4.4) or fitness proportionate approaches (discussed in Section 28.4.3) will pick elements in a way that preserves the pressure into the direction of the Pareto frontier but also leads to a balanced and sustainable variety in the population. The benefits of this approach have been shown, for instance, in [2188, 2888, 2912, 2914].
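The v(x) column of Table 28.2 can be checked from the rank and share/s columns alone: for prevailed individuals, v = rank + √maxRank · share/s with maxRank = 13. A quick sanity check in Python for four rows of the table; the values agree with the listed v(x) up to the rounding of the shares:

```python
import math

# (rank, scaled share) pairs for individuals 5, 6, 14, and 15 of Table 28.2
rows = {5: (1, 0.679), 6: (2, 1.0), 14: (9, 0.081), 15: (13, 0.004)}
max_rank = 13  # the worst prevalence rank in the population (individual 15)

# line 37 of Algorithm 28.5: v = rank + sqrt(maxRank) * scaled share
v_check = {x: rank + math.sqrt(max_rank) * s
           for x, (rank, s) in rows.items()}
```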
28.3.6 Tournament Fitness Assignment
In tournament fitness assignment, which is a generalization of the q-level (binary) tournament selection introduced by Weicker [2879], the fitness of each individual is computed by letting it compete q times against r other individuals (with r = 1 as default) and counting the number of competitions it loses. For a better understanding of the tournament metaphor see Section 28.4.4 on page 296, where the tournament selection scheme is discussed. The number of losses will approximate the Pareto rank of the individual, but in a somewhat more randomized fashion. If the number of tournaments won was counted instead of the losses, the same problems as in the first idea of Pareto ranking would be encountered.
Algorithm 28.6: v ← assignFitnessTournament(q, r, pop, cmp_f)

Input: q: the number of tournaments per individual
Input: r: the number of other contestants per tournament, normally 1
Input: pop: the population to assign fitness values to
Input: cmp_f: the comparator function providing the prevalence relation
Data: i, j, k, z: counter variables
Data: b: a Boolean variable being true as long as a tournament isn't lost
Data: p: the individual currently examined
Output: v: the fitness function

1   begin
2     for i ← len(pop) − 1 down to 0 do
3       z ← q
4       p ← pop[i]
5       for j ← q down to 1 do
6         b ← true
7         k ← r
8         while (k > 0) ∧ b do
9           b ← ¬(pop[randomUni[0, len(pop))].x ≺ p.x)
10          k ← k − 1
11        if b then z ← z − 1
12      v(p) ← z
13    return v
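A compact Python sketch of Algorithm 28.6; instead of the flag b and the counter z, it directly counts the tournaments in which some randomly drawn opponent prevails over p, which yields the same number of losses. All names are illustrative:

```python
import random

def assign_fitness_tournament(pop, cmp_f, q, r=1, rng=random):
    """Tournament fitness assignment: the fitness of each individual is
    the number of its q tournaments (against r random opponents each)
    that it loses; fitness is subject to minimization."""
    v = []
    for p in pop:
        losses = 0
        for _ in range(q):
            # a tournament is lost as soon as any opponent prevails over p
            if any(cmp_f(rng.choice(pop), p) < 0 for _ in range(r)):
                losses += 1
        v.append(losses)
    return v
```

With a single-objective comparator on the population {0, 1, 2, 3} (minimization), the best individual can never lose and therefore always receives fitness 0, while the worst one accumulates losses.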
28.4 Selection

28.4.1 Introduction

Definition D28.9 (Selection). In Evolutionary Algorithms, the selection²⁷ operation mate = selection(pop, v, mps) chooses mps individuals according to their fitness values v (subject to minimization) from the population pop and places them into the mating pool mate [167, 340, 1673, 1912].

mate = selection(pop, v, mps) ⇒  p ∈ pop ∀ p ∈ mate
                                 p ∈ G × X ∀ p ∈ pop
                                 v(p) ∈ R⁺ ∀ p ∈ pop                                  (28.15)
                                 (len(mate) = min{len(pop), mps}) ∨ (len(mate) = mps)

Definition D28.10 (Mating Pool). The mating pool mate in an Evolutionary Algorithm contains the individuals which are selected for further investigation and on which the reproduction operations (see Section 28.5 on page 306) will subsequently be applied.

²⁷ http://en.wikipedia.org/wiki/Selection_%28genetic_algorithm%29 [accessed 2007-07-03]
Selection may behave in a deterministic or in a randomized manner, depending on the algorithm chosen and its application-dependent implementation. Furthermore, elitist Evolutionary Algorithms may incorporate an archive archive in the selection process, as outlined in ??.

Generally, there are two classes of selection algorithms: such with replacement (annotated with a subscript r in this book) and such without replacement (here annotated with a subscript w) [2399]. In Figure 28.6, we present the transition from generation t to t + 1 for both types of selection methods. In a selection algorithm without replacement, each individual
[Figure: the transition from population Pop(t) (ps = 9) to Pop(t + 1). Under selection with replacement, individuals can enter the mating pool Mate(t) (mps = 5) more than once; under selection without replacement, they enter the mating pool at most once. Duplication, mutation, and crossover are then applied to the mating pool. In some EAs (steady-state, (μ + λ) ES, . . . ), the parents compete with the offspring.]

Figure 28.6: Selection with and without replacement.
from the population pop can enter the mating pool mate at most once in each generation. If a candidate solution has been selected, it cannot be selected a second time in the same generation. This process is illustrated in the bottom row of Figure 28.6.

The mating pool returned by algorithms with replacement can contain the same individual multiple times, as sketched in the top row of Figure 28.6. The dashed gray line in this figure stands for the possibility to include the parent generation in the next selection process. In other words, the parents selected into mate(t) may compete with their children in pop(t + 1) and together take part in the next environmental selection step. This is the case in (μ + λ)-Evolution Strategies and in steady-state Evolutionary Algorithms, for example. In extinctive/generational EAs, such as, for example, (μ, λ)-Evolution Strategies, the parent generation simply is discarded, as discussed in Section 28.1.4.1.

mate = selection_w(pop, v, mps) ⇒ count(p, mate) = 1 ∀ p ∈ mate        (28.16)
mate = selection_r(pop, v, mps) ⇒ count(p, mate) ≥ 1 ∀ p ∈ mate        (28.17)

The selection algorithms have major impact on the performance of Evolutionary Algorithms. Their behavior has thus been subject to several detailed studies, conducted by, for instance, Blickle and Thiele [340], Chakraborty et al. [520], Goldberg and Deb [1079], Sastry and Goldberg [2400], and Zhong et al. [3079], just to name a few.
Usually, fitness assignment processes are carried out before selection and the selection algorithms base their decisions solely on the fitness v of the individuals. It is possible to rely on the prevalence or dominance relation, i. e., to write selection(pop, cmp_f, mps) instead of selection(pop, v, mps), or, in case of single-objective optimization, to use the objective values directly (selection(pop, f, mps)) and thus, to save the costs of the fitness assignment process. However, this can lead to the same problems that occurred in the first approach of Pareto-ranking fitness assignment (see Point 1 in Section 28.3.3 on page 275).

As outlined in Paragraph 28.1.2.6.2, Evolutionary Algorithms generally apply survival selection, sometimes followed by a sexual selection procedure. In this section, we will focus on the former.
28.4.1.1 Visualization

In the following sections, we will discuss multiple selection algorithms. In order to ease understanding them, we will visualize the expected number of offspring ES(p) of an individual, i. e., the number of times it will take part in reproduction. If we assume uniform sexual selection (i. e., no sexual selection), this value will be proportional to its number of occurrences in the mating pool.
Example E28.18 (Individual Distribution after Selection).
Therefore, we will use the special case where we have a population pop of the constant size of ps(t) = ps(t + 1) = 1000 individuals. The individuals are numbered from 0 to 999, i. e., p_0..p_999. For the sake of simplicity, we assume that sexual selection from the mating pool mate is uniform and that only a unary reproduction operation such as mutation is used. The fitness of an individual strongly influences its chance for reproduction. We consider four cases with fitness subject to minimization:

1. As sketched in Fig. 28.7.a, the individual p_i has fitness i, i. e., v_1(p_0) = 0, v_1(p_1) = 1, . . . , v_1(p_999) = 999.

2. Individual p_i has fitness (i + 1)³, i. e., v_2(p_0) = 1, v_2(p_1) = 2³ = 8, . . . , v_2(p_999) = 1 000 000 000, as illustrated in Fig. 28.7.b.
[Figure: four plots of the example fitness values over the individual index i:
Fig. 28.7.a: Case 1: v_1(p_i) = i
Fig. 28.7.b: Case 2: v_2(p_i) = (i + 1)³
Fig. 28.7.c: Case 3: v_3(p_i) = 0.0001i if i < 999, 1 otherwise
Fig. 28.7.d: Case 4: v_4(p_i) = 1 + v_3(p_i)]

Figure 28.7: The four example fitness cases.
28.4.2 Truncation Selection

Truncation selection²⁸, also called deterministic selection, threshold selection, or breeding selection, returns the k ≤ mps best elements from the list pop. The name breeding selection stems from the fact that farmers who breed animals or crops also choose only the best individuals for reproduction [302]. The selected elements are copied as often as needed until the mating pool size mps is reached. Most often, k equals the mating pool size mps, which usually is significantly smaller than the population size ps.

Algorithm 28.7 realizes truncation selection with replacement (i. e., where individuals may enter the mating pool mate more than once) and Algorithm 28.8 represents truncation selection without replacement. Both algorithms are implemented in one single class whose sources are provided in Listing 56.13 on page 754. They first sort the population in ascending order according to the fitness v. For truncation selection without replacement, k = mps and k < ps always holds, since otherwise this approach would select all individuals and make no sense. Algorithm 28.8 simply returns the best mps individuals from the population. Algorithm 28.7, on the other hand, iterates from 0 to mps − 1 after the sorting and inserts only the elements with indices from 0 to k − 1 into the mating pool.
Algorithm 28.7: mate ← truncationSelection_r(k, pop, v, mps)

Input: pop: the list of individuals to select from
Input: [implicit] ps: the population size, ps = len(pop)
Input: v: the fitness values
Input: mps: the number of individuals to be placed into the mating pool mate
Input: k: cut-off rank, usually k = mps
Data: i: counter variable
Output: mate: the survivors of the truncation which now form the mating pool

1   begin
2     mate ← ()
3     k ← min{k, ps}
4     pop ← sortAsc(pop, v)
5     for i ← 1 up to mps do
6       mate ← addItem(mate, pop[i mod k])
7     return mate
Algorithm 28.8: mate ← truncationSelection_w(mps, pop, v)

Input: pop: the list of individuals to select from
Input: v: the fitness values
Input: mps: the number of individuals to be placed into the mating pool mate
Output: mate: the survivors of the truncation which now form the mating pool

1   begin
2     pop ← sortAsc(pop, v)
3     return deleteRange(pop, mps, ps − mps)
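Both truncation selection variants are easy to state in Python. This sketch assumes that v is a callable fitness function subject to minimization; the names are ours:

```python
def truncation_selection_w(pop, v, mps):
    """Truncation selection without replacement (cf. Algorithm 28.8):
    simply keep the mps best individuals."""
    return sorted(pop, key=v)[:mps]

def truncation_selection_r(pop, v, mps, k=None):
    """Truncation selection with replacement (cf. Algorithm 28.7): cycle
    through the k best individuals until mps entries have been drawn."""
    k = min(k if k is not None else mps, len(pop))
    ranked = sorted(pop, key=v)
    return [ranked[i % k] for i in range(mps)]
```

For example, with k = 2 and mps = 4, the two best individuals each appear twice in the mating pool.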
Truncation selection is typically applied in (μ + λ) and (μ, λ) Evolution Strategies as well as in steady-state EAs. In general Evolutionary Algorithms, it should be combined with a fitness assignment process that incorporates diversity information in order to prevent premature convergence. Recently, Lässig and Hoffmann [1686], Lässig et al. [1687] have proved that threshold selection, a truncation selection method where a cut-off fitness is defined after which no further individuals are selected, is the optimal selection strategy for crossover, provided that the right cut-off value is used. In practical applications, this value is normally not known.

²⁸ http://en.wikipedia.org/wiki/Truncation_selection [accessed 2007-07-03]
Example E28.19 (Truncation Selection (Example E28.18 cntd)).
In Figure 28.8, we sketch the number of offspring for the individuals p ∈ pop from Example E28.18 if truncation selection were applied. As stated, k is less than or equal to the mating pool size mps. Under the assumption that no sexual selection takes place, ES(p) only depends on k.

[Figure: ES(p) over the individual index i, with the corresponding fitness values v_1(p_i)..v_4(p_i) on additional axes, for k = 0.100·ps, 0.125·ps, 0.250·ps, 0.500·ps, and 1.000·ps.]

Figure 28.8: The number of expected offspring in truncation selection.

Under truncation selection, the diagram looks exactly the same regardless of which of the four example fitness configurations we use: only the order of the individuals is important here, not the numerical relation of their fitness values. If we set k = ps, each individual will have one offspring on average. If k = mps/2, the top-50% individuals will have two offspring and the others none. For k = mps/10, only the best 100 of the 1000 candidate solutions will reach the mating pool and reproduce 10 times on average.
28.4.3 Fitness Proportionate Selection

Fitness proportionate selection²⁹ was the selection method applied in the original Genetic Algorithms as introduced by Holland [1250] and therefore is one of the oldest selection schemes. In this Genetic Algorithm, fitness was subject to maximization. Under fitness proportionate selection, as its name already implies, the probability P(select(p)) of an individual p ∈ pop to enter the mating pool is proportional to its fitness v(p). This relation in its original form is defined in Equation 28.18 below: the probability P(select(p)) that individual p is picked in each selection decision corresponds to the fraction of its fitness in the sum of the fitness of all individuals.

P(select(p)) = v(p) / Σ_{p′ ∈ pop} v(p′)        (28.18)

There exists a variety of approaches which realize such probability distributions [1079], like stochastic remainder selection [358, 409] and stochastic universal selection [188, 1133]. The most commonly known method is the Monte Carlo roulette wheel selection by De Jong [711], where we imagine the individuals of a population to be placed on a roulette³⁰ wheel as sketched in Fig. 28.9.a. The size of the area of the wheel standing for a candidate solution is proportional to its fitness. The wheel is spun, and the individual where it stops is placed into the mating pool mate. This procedure is repeated until mps individuals have been selected.

In the context of this book, fitness is subject to minimization. Here, higher fitness values v(p) indicate unfit candidate solutions p.x whereas lower fitness denotes high utility. Furthermore, the fitness values are normalized into a range of [0, 1], because otherwise, fitness proportionate selection would handle the set of fitness values {0, 1, 2} in a different way than {10, 11, 12}. Equation 28.22 defines the framework for such a (normalized) fitness proportionate selection. It is realized in Algorithm 28.9 as a variant with and in Algorithm 28.10 as a variant without replacement.

minV = min{v(p) ∀ p ∈ pop}        (28.19)
maxV = max{v(p) ∀ p ∈ pop}        (28.20)
normV(p) = (maxV − v(p.x)) / (maxV − minV)        (28.21)
P(select(p)) ∝ normV(p) / Σ_{p′ ∈ pop} normV(p′)        (28.22)

²⁹ http://en.wikipedia.org/wiki/Fitness_proportionate_selection [accessed 2008-03-19]
³⁰ http://en.wikipedia.org/wiki/Roulette [accessed 2008-03-20]
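The normalized scheme of Equations 28.19 to 28.22 can be sketched in Python with a cumulative array and binary search, analogous to the variant with replacement in Algorithm 28.9 (function and variable names are illustrative):

```python
import bisect
import random

def roulette_wheel_selection_r(pop, v, mps, rng=random):
    """Normalized fitness proportionate selection with replacement for
    fitness subject to minimization (cf. Equations 28.19 to 28.22)."""
    fits = [v(p) for p in pop]
    lo, hi = min(fits), max(fits)
    if hi == lo:            # all fitnesses equal: fall back to uniform choice
        lo, hi = lo - 1.0, hi + 1.0
    # cumulative sums of normV(p) = (maxV - v(p)) / (maxV - minV)
    A, total = [], 0.0
    for f in fits:
        total += (hi - f) / (hi - lo)
        A.append(total)
    # a uniform random number in [0, total) lands in slot i with a
    # probability proportional to normV(pop[i])
    mate = []
    for _ in range(mps):
        idx = bisect.bisect_right(A, rng.uniform(0.0, total))
        mate.append(pop[min(idx, len(pop) - 1)])
    return mate
```

Note that under this normalization the worst individual receives normV = 0 and hence a zero-width slot, so it is effectively never selected.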
Algorithm 28.9: mate ← rouletteWheelSelection_r(pop, v, mps)
Input: pop: the list of individuals to select from
Input: [implicit] ps: the population size, ps = len(pop)
Input: v: the fitness values
Input: mps: the number of individuals to be placed into the mating pool mate
Data: i: a counter variable
Data: a: a temporary store for a numerical value
Data: A: the array of fitness values
Data: min, max, sum: the minimum, maximum, and sum of the fitness values
Output: mate: the mating pool
1 begin
2   A ← createList(ps, 0)
3   min ← +∞
4   max ← −∞
    // Initialize fitness array and find extreme fitnesses
5   for i ← 0 up to ps − 1 do
6     a ← v(pop[i])
7     A[i] ← a
8     if a < min then min ← a
9     if a > max then max ← a
    // break situations where all individuals have the same fitness
10  if max = min then
11    max ← max + 1
12    min ← min − 1
    /* aggregate fitnesses: A = (normV(pop[0]), normV(pop[0]) +
       normV(pop[1]), normV(pop[0]) + normV(pop[1]) + normV(pop[2]), ...) */
13  sum ← 0
14  for i ← 0 up to ps − 1 do
15    sum ← sum + (max − A[i]) / (max − min)
16    A[i] ← sum
    /* spin the roulette wheel: the chance that a uniformly distributed
       random number lands in A[i] is (inversely) proportional to v(pop[i]) */
17  for i ← 0 up to mps − 1 do
18    a ← searchItemAS(randomUni[0, sum), A)
19    if a < 0 then a ← −a − 1
20    mate ← addItem(mate, pop[a])
21  return mate
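To make the normalization and cumulative-sum idea of Algorithm 28.9 concrete, here is a minimal Python sketch (the function name, the `rng` parameter, and the use of a standard binary search are our own choices, not the book's reference implementation). Fitness is minimized, so the worst individual receives a zero-width slice of the wheel:

```python
import bisect
import random

def roulette_wheel_select(pop, v, mps, rng=random):
    """Fitness-proportionate selection with replacement for minimized
    fitness v: each individual gets a wheel slice of width
    normV(p) = (maxV - v(p)) / (maxV - minV), cf. Equation 28.21."""
    lo = min(v(p) for p in pop)
    hi = max(v(p) for p in pop)
    if lo == hi:                     # all fitnesses equal: widen the range
        lo, hi = lo - 1, hi + 1      # so every individual gets the same slice
    cum, total = [], 0.0             # cumulative normalized fitnesses (array A)
    for p in pop:
        total += (hi - v(p)) / (hi - lo)
        cum.append(total)
    mate = []
    for _ in range(mps):             # spin the wheel mps times
        r = rng.random() * total     # uniform point in [0, total)
        mate.append(pop[bisect.bisect_right(cum, r)])
    return mate
```

As in the text, an individual with the worst fitness has zero area and can never be selected, which is exactly the drawback that normalization does not remove.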
Algorithm 28.10: mate ← rouletteWheelSelection_w(pop, v, mps)
Input: pop: the list of individuals to select from
Input: [implicit] ps: the population size, ps = len(pop)
Input: v: the fitness values
Input: mps: the number of individuals to be placed into the mating pool mate
Data: i: a counter variable
Data: a, b: temporary stores for numerical values
Data: A: the array of fitness values
Data: min, max, sum: the minimum, maximum, and sum of the fitness values
Output: mate: the mating pool
1 begin
2   A ← createList(ps, 0)
3   min ← +∞
4   max ← −∞
    // Initialize fitness array and find extreme fitnesses
5   for i ← 0 up to ps − 1 do
6     a ← v(pop[i])
7     A[i] ← a
8     if a < min then min ← a
9     if a > max then max ← a
    // break situations where all individuals have the same fitness
10  if max = min then
11    max ← max + 1
12    min ← min − 1
    /* aggregate fitnesses: A = (normV(pop[0]), normV(pop[0]) +
       normV(pop[1]), normV(pop[0]) + normV(pop[1]) + normV(pop[2]), ...) */
13  sum ← 0
14  for i ← 0 up to ps − 1 do
15    sum ← sum + (max − A[i]) / (max − min)
16    A[i] ← sum
    /* spin the roulette wheel: the chance that a uniformly distributed
       random number lands in A[i] is (inversely) proportional to v(pop[i]) */
17  for i ← 0 up to min{mps, ps} − 1 do
18    a ← searchItemAS(randomUni[0, sum), A)
19    if a < 0 then a ← −a − 1
20    if a = 0 then b ← 0
21    else b ← A[a − 1]
22    b ← A[a] − b
      // remove the element at index a from A
23    for j ← a + 1 up to len(A) − 1 do
24      A[j] ← A[j] − b
25    sum ← sum − b
26    mate ← addItem(mate, pop[a])
27    pop ← deleteItem(pop, a)
28    A ← deleteItem(A, a)
29  return mate
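The without-replacement variant can also be sketched in Python. Note that this simplified version (names and structure our own) just rebuilds the slice weights after every removal, instead of performing the in-place update of the cumulative array that Algorithm 28.10 uses:

```python
import random

def roulette_select_no_replacement(pop, v, mps, rng=random):
    """Fitness-proportionate selection *without* replacement: a chosen
    individual leaves the wheel. Algorithm 28.10 updates its cumulative
    array in place; this simpler sketch rebuilds the weights each round."""
    pool = list(pop)
    mate = []
    for _ in range(min(mps, len(pool))):
        lo, hi = min(map(v, pool)), max(map(v, pool))
        if lo == hi:                       # equal fitnesses: uniform slices
            lo, hi = lo - 1, hi + 1
        weights = [(hi - v(p)) / (hi - lo) for p in pool]
        r = rng.random() * sum(weights)
        i = len(pool) - 1                  # fallback for rounding errors
        for j, w in enumerate(weights):
            if r < w:                      # r fell into slice j
                i = j
                break
            r -= w
        mate.append(pool.pop(i))           # winner leaves the wheel
    return mate
```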
Example E28.20 (Roulette Wheel Selection).
In Figure 28.9, we illustrate the fitness distribution on the roulette wheel for fitness max-
imization (Fig. 28.9.a) and normalized fitness minimization (Fig. 28.9.b). In this example we
compute the expected proportion in which each of the four elements p_1..p_4 will occur in the
mating pool mate if the fitness values are v(p_1) = 10, v(p_2) = 20, v(p_3) = 30, and v(p_4) = 40.

[Figure 28.9: Examples for the idea of roulette wheel selection. Fig. 28.9.a (fitness
maximization): the wheel areas are A(p_1) = 1/10 A, A(p_2) = 1/5 A, A(p_3) = 3/10 A,
A(p_4) = 2/5 A. Fig. 28.9.b (normalized fitness minimization): A(p_1) = 30/60 A = 1/2 A,
A(p_2) = 20/60 A = 1/3 A, A(p_3) = 10/60 A = 1/6 A, A(p_4) = 0/60 A = 0.]
Amongst others, Whitley [2929] points out that even fitness normalization as performed here
cannot overcome the drawbacks of fitness proportional selection methods. In order to clarify
these drawbacks, let us visualize the expected results of roulette wheel selection applied to
the special cases stated in Section 28.4.1.1.
Example E28.21 (Roulette Wheel Selection (Example E28.18 cntd)).
Figure 28.10 illustrates the expected number of occurrences ES(p_i) of an individual p_i if
roulette wheel selection is applied with replacement.
In such cases, usually parents for each offspring are selected into the mating pool, which
then is as large as the population (mps = ps = 1000). Thus we draw one thousand times a
single individual from the population pop. Each single choice is based on the proportion of
the individual fitness in the total fitness of all individuals, as defined in Equation 28.18 and
Equation 28.22.
In scenario 1 with the fitness sum 999·998/2 = 498501, the relation ES(p_i) = mps · i/498501
holds for fitness maximization and ES(p_i) = mps · (999 − i)/498501 for minimization. As result
(sketched in Fig. 28.10.a), the fittest individuals produce (on average) two offspring, whereas
the worst candidate solutions will always vanish in this example.
For scenario 2 with v_2(p_i.x) = (i + 1)^3, the total fitness sum is approximately 2.51 · 10^11
and ES(p_i) = mps · (i + 1)^3 / (2.51 · 10^11) holds for maximization. The resulting expected values depicted
in Fig. 28.10.b are significantly different from those in Fig. 28.10.a.
[Figure 28.10: The number of expected offspring ES(p_i) in roulette wheel selection, for
fitness minimization and fitness maximization. Fig. 28.10.a: v_1(p_i.x) = i.
Fig. 28.10.b: v_2(p_i.x) = (i + 1)^3. Fig. 28.10.c: v_3(p_i) = 0.0001i if i < 999, 1 otherwise.
Fig. 28.10.d: v_4(p_i) = 1 + v_3(p_i).]
method, we would expect that it produces similar mating pools for two functions which are
that similar. However, the result is entirely different. For v_3, the individual at index 999
is totally outstanding with its fitness of 1 (which is more than ten times the fitness of the
next-best individual). Hence, it receives many more copies in the mating pool. In the case
of v_4, however, it is only around twice as good as the next best individual and thus, only
receives around twice as many copies. From this situation, we can draw two conclusions:
1. The number of copies that an individual receives changes significantly when the vertical
shift of the fitness function changes, even if its other characteristics remain constant.
2. In cases like v_3, where one individual has outstanding fitness, this may degenerate the
Genetic Algorithm to little more than a Hill Climber, since this individual will unite most of
the slots in the mating pool on itself. If the Genetic Algorithm behaves like a Hill
Climber, its benefits, such as the ability to trace more than one possible solution at a
time, disappear.
Roulette wheel selection has a bad performance compared to other schemes like tournament
selection or ranking selection [1079]. It is mainly included here for the sake of completeness
and because it is easy to understand and suitable for educational purposes.
28.4.4 Tournament Selection
Tournament selection^31, proposed by Wetzel [2923] and studied by Brindle [409], is one
of the most popular and effective selection schemes. Its features are well-known and have
been analyzed by a variety of researchers such as Blickle and Thiele [339, 340], Lee et al.
[1707], Miller and Goldberg [1898], Sastry and Goldberg [2400], and Oei et al. [2063]. In
tournament selection, k elements are picked from the population pop and compared with
each other in a tournament. The winner of this competition will then enter the mating pool
mate. Although being a simple selection strategy, it is very powerful and therefore used in
many practical applications [81, 96, 461, 1880, 2912, 2914].
Example E28.22 (Binary Tournament Selection).
As example, consider a tournament selection (with replacement) with a tournament size of
two [2931]. For each single tournament, the contestants are chosen randomly according to a
uniform distribution and the winners will be allowed to enter the mating pool. If we assume
that the mating pool will contain about as many individuals as the population, each
individual will, on average, participate in two tournaments. The best candidate solution of
the population will win all the contests it takes part in and thus, again on average, contributes
approximately two copies to the mating pool. The median individual of the population is
better than 50% of its challengers but will also lose against 50%. Therefore, it will enter the
mating pool roughly one time on average. The worst individual in the population will lose
all its challenges to other candidate solutions and can only score even if competing against
itself, which will happen with probability (1/mps)^2. It will not be able to reproduce in the
average case because mps · (1/mps)^2 = 1/mps < 1 ∀mps > 1.
Example E28.23 (Tournament Selection (Example E28.18 cntd)).
For visualization purposes, let us go back to Example E28.18 with a population of 1000
individuals p_0..p_999 and mps = ps = 1000. Again, we assume that each individual has a
unique fitness value of v_1(p_i) = i, v_2(p_i) = (i + 1)^3, v_3(p_i) = 0.0001i if i < 999, 1 otherwise,
and v_4(p_i) = 1 + v_3(p_i), respectively. If we apply tournament selection with replacement in
this special scenario, the expected number of occurrences ES(p_i) of an individual p_i in the
mating pool can be computed according to Blickle and Thiele [340] as

ES(p_i) = mps · [((1000 − i)/1000)^k − ((1000 − i − 1)/1000)^k]   (28.23)

31 http://en.wikipedia.org/wiki/Tournament_selection [accessed 2007-07-03]
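Equation 28.23 is easy to check numerically. In the following Python sketch (the function name is ours), the first bracketed term is the probability that all k contestants have rank ≥ i and the second that all have rank > i; summed over all ranks the terms telescope, so the expectations add up to exactly mps:

```python
def expected_copies(i, k, ps=1000, mps=1000):
    """Expected copies ES(p_i) of the individual at fitness rank i
    (0 = best, fitness minimized) under tournament selection with
    replacement and tournament size k, per Equation 28.23."""
    all_at_least_i = ((ps - i) / ps) ** k          # every contestant has rank >= i
    all_above_i = ((ps - i - 1) / ps) ** k         # every contestant has rank > i
    return mps * (all_at_least_i - all_above_i)

# binary tournaments give the best individual about two copies (cf. E28.22),
# and the per-rank expectations telescope to the mating pool size mps
best = expected_copies(0, k=2)
total = sum(expected_copies(i, k=5) for i in range(1000))
```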
The absolute values of the fitness play no role in tournament selection. The only thing that
matters is whether or not the fitness of one individual is higher than the fitness of another
one, not the fitness difference itself. The expected numbers of offspring for the four example
cases from Example E28.18 are the same. Tournament selection thus gets rid of the
problematic dependency on the relative proportions of the fitness values in roulette-wheel
style selection schemes.
Figure 28.11 depicts the expected offspring numbers for different tournament sizes k ∈
{1, 2, 3, 4, 5, 10}. If k = 1, tournament selection degenerates to randomly picking individuals
and each candidate solution will occur one time in the mating pool on average. With rising
k, the selection pressure increases: individuals with good fitness values create more and
more offspring whereas the chance of worse candidate solutions to reproduce decreases.

[Figure 28.11: The number of expected offspring in tournament selection, for the fitness
functions v_1..v_4 and tournament sizes k ∈ {1, 2, 3, 4, 5, 10}.]
Tournament selection with replacement (TSR) is presented in Algorithm 28.11, which has
been implemented in Listing 56.14 on page 756. Tournament selection without replacement
(TSoR) [43, 1707] can be defined in two forms. In the first variant, specified as Algo-
rithm 28.12, a candidate solution cannot compete against itself.
In Algorithm 28.13, on the other hand, an individual may enter the mating pool at most
once.
Algorithm 28.11: mate ← tournamentSelection_r(k, pop, v, mps)
Input: pop: the list of individuals to select from
Input: [implicit] ps: the population size, ps = len(pop)
Input: v: the fitness values
Input: mps: the number of individuals to be placed into the mating pool mate
Input: [implicit] k: the tournament size
Data: i, j, a, b: counter variables
Output: mate: the winners of the tournaments which now form the mating pool
1 begin
2   mate ← ()
3   for i ← 1 up to mps do
4     a ← randomUni[0, ps)
5     for j ← 1 up to k − 1 do
6       b ← randomUni[0, ps)
7       if v(pop[b]) < v(pop[a]) then a ← b
8     mate ← addItem(mate, pop[a])
9   return mate
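The with-replacement scheme of Algorithm 28.11 can be rendered compactly in Python (a sketch with our own naming; fitness is minimized, so the lowest v wins each tournament):

```python
import random

def tournament_select(pop, v, mps, k, rng=random):
    """Tournament selection with replacement, cf. Algorithm 28.11: for
    each of the mps mating-pool slots, draw k contestants uniformly and
    keep the one with the lowest (i.e. best) fitness."""
    mate = []
    for _ in range(mps):
        best = rng.randrange(len(pop))
        for _ in range(k - 1):
            challenger = rng.randrange(len(pop))
            if v(pop[challenger]) < v(pop[best]):
                best = challenger             # challenger wins the contest
        mate.append(pop[best])
    return mate
```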
Algorithm 28.12: mate ← tournamentSelection_w(k, pop, v, mps)
Input: pop: the list of individuals to select from
Input: v: the fitness values
Input: mps: the number of individuals to be placed into the mating pool mate
Input: [implicit] k: the tournament size
Data: i, j, a, b: counter variables
Output: mate: the winners of the tournaments which now form the mating pool
1 begin
2   mate ← ()
3   pop ← sortAsc(pop, v)
4   for i ← 1 up to min{len(pop), mps} do
5     a ← randomUni[0, len(pop))
6     for j ← 1 up to min{len(pop), k} − 1 do
7       b ← randomUni[0, len(pop))
8       if v(pop[b]) < v(pop[a]) then a ← b
9     mate ← addItem(mate, pop[a])
10    pop ← deleteItem(pop, a)
11  return mate
Algorithm 28.13: mate ← tournamentSelection_w(k, pop, v, mps)
Input: pop: the list of individuals to select from
Input: v: the fitness values
Input: mps: the number of individuals to be placed into the mating pool mate
Input: [implicit] k: the tournament size
Data: A: the list of contestants per tournament
Data: a: the tournament winner
Data: i, j: counter variables
Output: mate: the winners of the tournaments which now form the mating pool
1 begin
2   mate ← ()
3   pop ← sortAsc(pop, v)
4   for i ← 1 up to mps do
5     A ← ()
6     for j ← 1 up to min{k, len(pop)} do
7       repeat
8         a ← randomUni[0, len(pop))
9       until searchItemUS(a, A) < 0
10      A ← addItem(A, a)
11    a ← min A
12    mate ← addItem(mate, pop[a])
13  return mate
The algorithms specified here should more precisely be entitled deterministic tour-
nament selection algorithms, since the winner of the k contestants that take part in each
tournament enters the mating pool. In the non-deterministic variants, this is not necessarily
the case. There, a probability β is defined. The best individual in the tournament is selected
with probability β, the second best with probability β(1 − β), the third best with probability
β(1 − β)^2 and so on. The i-th best individual in a tournament enters the mating pool with
probability β(1 − β)^(i−1). Algorithm 28.14 realizes this behavior for a tournament selection with
replacement. Notice that it becomes equivalent to Algorithm 28.11 on the preceding page
if β is set to 1. Besides the algorithms discussed here, a set of additional tournament-based
selection methods has been introduced by Lee et al. [1707].
Algorithm 28.14: mate ← tournamentSelectionND_r(β, k, pop, v, mps)
Input: pop: the list of individuals to select from
Input: [implicit] ps: the population size, ps = len(pop)
Input: v: the fitness values
Input: mps: the number of individuals to be placed into the mating pool mate
Input: [implicit] β: the selection probability, β ∈ [0, 1]
Input: [implicit] k: the tournament size
Data: A: the set of tournament contestants
Data: i, j: counter variables
Output: mate: the winners of the tournaments which now form the mating pool
1 begin
2   mate ← ()
3   pop ← sortAsc(pop, v)
4   for i ← 1 up to mps do
5     A ← ()
6     for j ← 1 up to k do
7       A ← addItem(A, randomUni[0, ps))
8     A ← sortAsc(A, cmp(a_1, a_2) ≡ (a_1 − a_2))
9     for j ← 0 up to len(A) − 1 do
10      if (randomUni[0, 1) < β) ∨ (j ≥ len(A) − 1) then
11        mate ← addItem(mate, pop[A[j]])
12        j ← ∞   // leave the inner loop
13  return mate
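The geometric acceptance scheme of Algorithm 28.14 can be sketched in Python as follows (a simplified model with our own names; as in the algorithm, the worst contestant serves as a fallback so that every tournament produces a winner):

```python
import random

def tournament_select_nd(beta, k, pop, v, mps, rng=random):
    """Non-deterministic tournament selection with replacement, cf.
    Algorithm 28.14: the best of the k contestants wins with probability
    beta, the second best with beta*(1 - beta), and so on; if none is
    accepted, the worst contestant is taken."""
    mate = []
    for _ in range(mps):
        contestants = sorted((rng.randrange(len(pop)) for _ in range(k)),
                             key=lambda idx: v(pop[idx]))
        winner = contestants[-1]              # fallback: worst contestant
        for idx in contestants[:-1]:
            if rng.random() < beta:           # accept this rank and stop
                winner = idx
                break
        mate.append(pop[winner])
    return mate
```

With beta = 1 the first (best) contestant is always accepted, reproducing the deterministic behavior of Algorithm 28.11.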
28.4.5 Linear Ranking Selection
Linear ranking selection [340], introduced by Baker [187] and more thoroughly discussed by
Blickle and Thiele [340], Whitley [2929], and Goldberg and Deb [1079], is another approach
for circumventing the problems of fitness proportionate selection methods. In ranking se-
lection [187, 1133, 2929], the probability of an individual to be selected is proportional to
its position (rank) in the sorted list of all individuals in the population. Using the rank
smoothes out larger differences of the fitness values and emphasizes small ones. Generally,
we can consider the conventional ranking selection method as the application of a fitness
assignment process setting the rank as fitness (which can be achieved with Pareto ranking,
for instance) and a subsequent fitness proportional selection.
In its original definition discussed in [340], linear ranking selection was used to maximize
fitness. Here, we will discuss it again for fitness minimization. In linear ranking selection with
replacement, the probability P(select(p_i)) that the individual at rank i ∈ 0..(ps − 1) is picked in
each selection decision is given in Equation 28.24 below. Notice that each individual always
receives a different rank and even if two individuals have the same fitness, they must be
assigned to different ranks [340].

P(select(p_i)) = (1/ps) · (η⁻ + (η⁺ − η⁻) · (ps − i − 1)/(ps − 1))   (28.24)

In Equation 28.24, the best individual (at rank 0) will have the probability η⁺/ps of being
picked while the worst individual (rank ps − 1) is chosen with η⁻/ps chance:
P(select(p_0)) = (1/ps) · (η⁻ + (η⁺ − η⁻) · (ps − 0 − 1)/(ps − 1)) = (1/ps) · (η⁻ + (η⁺ − η⁻)) = η⁺/ps   (28.25)
P(select(p_{ps−1})) = (1/ps) · (η⁻ + (η⁺ − η⁻) · (ps − (ps − 1) − 1)/(ps − 1))   (28.26)
                    = (1/ps) · (η⁻ + (η⁺ − η⁻) · 0/(ps − 1)) = η⁻/ps   (28.27)
It should be noted that all probabilities must add up to one, i.e.:

Σ_{i=0}^{ps−1} P(select(p_i)) = 1   (28.28)

1 = Σ_{i=0}^{ps−1} (1/ps) · (η⁻ + (η⁺ − η⁻) · (ps − i − 1)/(ps − 1))   (28.29)
  = (1/ps) · Σ_{i=0}^{ps−1} (η⁻ + (η⁺ − η⁻) · (ps − i − 1)/(ps − 1))   (28.30)
  = (1/ps) · (ps·η⁻ + Σ_{i=0}^{ps−1} (η⁺ − η⁻) · (ps − i − 1)/(ps − 1))   (28.31)
  = (1/ps) · (ps·η⁻ + (η⁺ − η⁻) · Σ_{i=0}^{ps−1} (ps − i − 1)/(ps − 1))   (28.32)
  = (1/ps) · (ps·η⁻ + (η⁺ − η⁻)/(ps − 1) · Σ_{i=0}^{ps−1} (ps − i − 1))   (28.33)
  = (1/ps) · (ps·η⁻ + (η⁺ − η⁻)/(ps − 1) · (ps·(ps − 1) − Σ_{i=0}^{ps−1} i))   (28.34)
  = (1/ps) · (ps·η⁻ + (η⁺ − η⁻)/(ps − 1) · (ps·(ps − 1) − ½·ps·(ps − 1)))   (28.35)
  = (1/ps) · (ps·η⁻ + (η⁺ − η⁻)/(ps − 1) · ½·ps·(ps − 1))   (28.36)
  = (1/ps) · (ps·η⁻ + ½·ps·(η⁺ − η⁻))   (28.37)
  = (1/ps) · (½·ps·η⁻ + ½·ps·η⁺)   (28.38)
1 = ½·η⁻ + ½·η⁺   (28.39)
1 − ½·η⁻ = ½·η⁺   (28.40)
2 − η⁻ = η⁺   (28.41)
The parameters η⁻ and η⁺ must be chosen in a way that 2 − η⁻ = η⁺ and, obviously,
0 < η⁻ and 0 < η⁺ must hold as well. If the selection algorithm should furthermore make
sense, i.e., prefer individuals with a lower fitness rank over those with a higher fitness rank
(again, fitness is subject to minimization), 0 ≤ η⁻ < η⁺ ≤ 2 is true.
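The normalization constraint derived in Equation 28.41 can be verified numerically. In this small Python sketch (function name ours), η⁺ is fixed by the constraint and the rank-0 (best) and rank-(ps−1) (worst) probabilities come out as η⁺/ps and η⁻/ps, exactly as in Equations 28.25 to 28.27:

```python
def linear_ranking_probabilities(ps, eta_minus):
    """Selection probabilities of Equation 28.24 for fitness minimization;
    rank 0 is the best individual. eta_plus is fixed by the normalization
    constraint 2 - eta_minus = eta_plus (Equation 28.41)."""
    eta_plus = 2.0 - eta_minus
    return [(eta_minus + (eta_plus - eta_minus) * (ps - i - 1) / (ps - 1)) / ps
            for i in range(ps)]
```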
Koza [1602] determines the slope of the linear ranking selection probability function by
a factor r_m in his implementation of this selection scheme. According to [340], this factor
can be transformed to η⁻ and η⁺ by the following equations.

η⁻ = 2 / (r_m + 1)   (28.42)
η⁺ = 2r_m / (r_m + 1)   (28.43)
2 − η⁻ = η⁺  ⟺  2 − 2/(r_m + 1) = 2r_m/(r_m + 1)   (28.44)
2·(r_m + 1) − 2 = 2r_m   (28.45)
2r_m = 2r_m   q.e.d.   (28.46)
As mentioned, we can simply emulate this scheme by (1) first building a secondary fit-
ness function v′ based on the ranks of the individuals p_i according to the original fitness v
and Equation 28.24 and then (2) applying fitness proportionate selection as given in Algo-
rithm 28.9 on page 292. We therefore only need to set v′(p_i) = 1 − P(select(p_i)) (since Al-
gorithm 28.9 minimizes fitness).
28.4.6 Random Selection
Random selection is a selection method which simply fills the mating pool with randomly
picked individuals from the population. Obviously, using this selection scheme for sur-
vival/environmental selection, i.e., to determine which individuals are allowed to produce
offspring (see Section 28.1.2.6 on page 260), makes little sense. This would turn the Evolu-
tionary Algorithm into some sort of parallel random walk, since it means to disregard all
fitness information.
However, after an actual mating pool has been filled, it may make sense to use random
selection to choose the parents for the different individuals, especially in cases where there
are many parents such as in Evolution Strategies, which are introduced in Chapter 30 on
page 359.
Algorithm 28.15: mate ← randomSelection_r(pop, mps)
Input: pop: the list of individuals to select from
Input: [implicit] ps: the population size, ps = len(pop)
Input: mps: the number of individuals to be placed into the mating pool mate
Data: i: counter variable
Output: mate: the randomly picked parents
1 begin
2   mate ← ()
3   for i ← 1 up to mps do
4     mate ← addItem(mate, pop[randomUni[0, ps)])
5   return mate
Algorithm 28.15 provides a specification of random selection with replacement. An
example implementation of this algorithm is given in Listing 56.15 on page 758.
28.4.7 Clearing and Simple Convergence Prevention (SCP)
In our experiments, especially in Genetic Programming [2888] and problems with discrete
objective functions [2912, 2914], we often use a very simple mechanism to prevent premature
convergence. Premature convergence, as outlined in Chapter 13, means that the optimization
process gets stuck at a local optimum and loses the ability to investigate other areas in the
search space, in particular the area where the global optimum resides. Our SCP method
is neither a fitness nor a selection algorithm, but instead more of a filter which is placed
after computing the objectives and before applying a fitness assignment process. Since our
method actively deletes individuals from the population, we placed it into this section.
The idea is simple: the more similar individuals exist in the population, the more likely
the optimization process has converged. It is not clear whether it converged to a global
optimum or to a local one. If it got stuck at a local optimum, the fraction of the population
which resides at that spot should be limited. Then, some individuals necessarily will be
located somewhere else and, if they survive, may find a path away from the local optimum
towards a different area in the search space. In case that the optimizer indeed converged to
the global optimum, such a measure does not cause problems because, in the end, better
points than the global optimum cannot be discovered anyway.
28.4.7.1 Clearing
The first one to apply an explicit limitation of the number of individuals in an area of
the search space was Petrowski [2167, 2168], whose clearing approach is applied in each
generation and works as specified in Algorithm 28.16, where fitness is subject to minimization.
Basically, clearing divides the population of an EA into several sub-populations according
to a distance measure dist applied in the genotypic (G) or phenotypic space (X) in each
generation. The individuals of each sub-population have at most the distance σ to the
fittest individual in this niche. Then, the fitness of all but the k best individuals in such a
sub-population is set to the worst possible value. This effectively prevents that a niche can
get too crowded. Sareni and Krähenbühl [2391] showed that this method is very promising.
Singh and Deb [2503] suggest a modified clearing approach which shifts individuals that
would be cleared farther away and reevaluates their fitness.

Algorithm 28.16: v′ ← clearingFilter(pop, σ, k, v)
Input: pop: the list of individuals to apply clearing to
Input: σ: the clearing radius
Input: k: the niche capacity
Input: [implicit] v: the fitness values
Input: [implicit] dist: a distance measure in the genome or phenome
Data: n: the current number of winners
Data: i, j: counter variables
Data: P: the population sorted by fitness
Output: v′: the new fitness assignment
1 begin
2   v′ ← v
3   P ← sortAsc(pop, v)
4   for i ← 0 up to len(P) − 1 do
5     if v′(P[i]) < +∞ then
6       n ← 1
7       for j ← i + 1 up to len(P) − 1 do
8         if (v′(P[j]) < +∞) ∧ (dist(P[i], P[j]) < σ) then
9           if n < k then n ← n + 1
10          else v′(P[j]) ← +∞
11  return v′
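To make the niching behavior concrete, here is a hypothetical Python sketch of the clearing filter (the data layout is our own: individuals are plain values, and the cleared fitness is modeled as +inf, the worst possible value under minimization):

```python
import math

def clearing_filter(pop, sigma, k, v, dist):
    """Petrowski-style clearing, cf. Algorithm 28.16: within each niche
    of radius sigma around its best (lowest-fitness) individual, at most
    k individuals keep their fitness; all others are set to +inf."""
    order = sorted(range(len(pop)), key=lambda i: v(pop[i]))  # best first
    new_v = {i: v(pop[i]) for i in range(len(pop))}
    for pos, i in enumerate(order):
        if math.isinf(new_v[i]):
            continue                          # already cleared
        n = 1                                 # the niche winner itself
        for j in order[pos + 1:]:
            if not math.isinf(new_v[j]) and dist(pop[i], pop[j]) < sigma:
                if n < k:
                    n += 1
                else:
                    new_v[j] = math.inf       # clear the surplus individual
    return new_v
```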
304 28 EVOLUTIONARY ALGORITHMS
28.4.7.2 SCP
We modify this approach in two respects: We measure similarity not in form of a distance
in the search space G or the problem space X, but in the objective space Y ⊆ R^|f|. All
individuals are compared with each other. If two have exactly the same objective values^32,
one of them is thrown away with probability^33 c ∈ [0, 1] and does not take part in any further
comparisons. This way, we weed out similar individuals without making any assumptions
about G or X and make room in the population and mating pool for a wider diversity of
candidate solutions. For c = 0, this prevention mechanism is turned off; for c = 1, all
remaining individuals will have different objective values.
Although this approach is very simple, the results of our experiments were often
significantly better with this convergence prevention method turned on than without
it [2188, 2888, 2912, 2914]. Additionally, in none of our experiments were the outcomes
influenced negatively by this filter, which makes it even more robust than other methods for
convergence prevention like sharing or variety preserving. Algorithm 28.17 specifies how our
simple mechanism works. It has to be applied after the evaluation of the objective values of
the individuals in the population and before any fitness assignment or selection takes place.
Algorithm 28.17: P ← scpFilter(pop, c, f)
Input: pop: the list of individuals to apply convergence prevention to
Input: c: the convergence prevention probability, c ∈ [0, 1]
Input: [implicit] f: the set of objective function(s)
Data: i, j: counter variables
Data: p: the individual checked in this generation
Output: P: the pruned population
1 begin
2   P ← ()
3   for i ← 0 up to len(pop) − 1 do
4     p ← pop[i]
5     for j ← len(P) − 1 down to 0 do
6       if f(p.x) = f(P[j].x) then
7         if randomUni[0, 1) < c then
8           P ← deleteItem(P, j)
9     P ← addItem(P, p)
10  return P
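A compact Python model of this filter (a sketch with our own names; individuals are reduced to their objective values here):

```python
import random

def scp_filter(pop, c, f, rng=random):
    """Simple Convergence Prevention, cf. Algorithm 28.17: whenever the
    next individual has exactly the same objective values as one already
    kept, the older copy is dropped with probability c."""
    kept = []
    for p in pop:
        for j in range(len(kept) - 1, -1, -1):  # scan backwards: safe deletes
            if f(kept[j]) == f(p) and rng.random() < c:
                del kept[j]
        kept.append(p)
    return kept
```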
If an individual p occurs n times in the population or if there are n individuals with
exactly the same objective values, Algorithm 28.17 cuts down the expected number of their
occurrences ES(p) to

ES(p) = Σ_{i=1}^{n} (1 − c)^(i−1) = Σ_{i=0}^{n−1} (1 − c)^i = ((1 − c)^n − 1)/(−c) = (1 − (1 − c)^n)/c   (28.47)

In Figure 28.12, we sketch the expected number of remaining instances of the individual p
after this pruning process if it occurred n times in the population before Algorithm 28.17
was applied.
From Equation 28.47 follows that even a population of infinite size which has fully con-
verged to one single value will probably not contain more than 1/c copies of this individual

32 The exactly-the-same-criterion makes sense in combinatorial optimization and many Genetic Program-
ming problems but may easily be replaced with a limit imposed on the Euclidian distance in real-valued
optimization problems, for instance.
33 instead of defining a fixed threshold k
[Figure 28.12: The expected numbers of occurrences ES(p) for different values of n and c,
with curves for c = 0.1, 0.2, 0.3, 0.5, and 0.7.]
after the simple convergence prevention has been applied. This threshold is also visible in
Figure 28.12.

lim_{n→∞} ES(p) = lim_{n→∞} (1 − (1 − c)^n)/c = (1 − 0)/c = 1/c   (28.48)
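Equation 28.47 and the 1/c bound of Equation 28.48 are easy to check numerically (a sketch, names ours):

```python
def expected_survivors(n, c):
    """Closed form of Equation 28.47: the expected number of copies that
    survive SCP pruning if an individual occurred n times beforehand."""
    return (1.0 - (1.0 - c) ** n) / c

# the geometric series summed directly must match the closed form, and
# for large n the expectation approaches the 1/c limit of Equation 28.48
direct = sum((1.0 - 0.3) ** i for i in range(20))
closed = expected_survivors(20, 0.3)
limit = expected_survivors(10**6, 0.3)
```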
28.4.7.3 Discussion
In Petrowski's clearing approach [2167], the maximum number of individuals which can
survive in a niche was a fixed constant k and, if less than k individuals resided in a niche,
none of them would be affected. Different from that, in SCP, an expected value of the
number of individuals allowed in a niche is specified with the probability c and may be
both exceeded and undercut.
Another difference of the approaches arises from the space in which the distance is
computed: Whereas clearing prevents the EA from concentrating too much on a certain area
in the search or problem space, SCP stops it from keeping too many individuals with equal
utility. The former approach works against premature convergence to a certain solution
structure while the latter forces the EA to keep track of a trail to candidate solutions
with worse fitness which may later evolve to good individuals with traits different from
the currently exploited ones. Furthermore, since our method does not need to access any
information apart from the objective values, it is independent from the search and problem
space and can, once implemented, be combined with any kind of population-based optimizer.
Which of the two approaches is better has not yet been tested with comparative ex-
periments and is part of our future work. At the present moment, we assume that in
real-valued search or problem spaces, clearing should be more suitable, whereas we know
from experiments using our approach only that SCP performs very well in combinatorial
problems [2188, 2914] and Genetic Programming [2888].
28.5 Reproduction
An optimization algorithm uses the information gathered up to step t for creating the can-
didate solutions to be evaluated in step t + 1. There exist different methods to do so. In
Evolutionary Algorithms, the aggregated information corresponds to the population pop
and the archive archive of best individuals, if such an archive is maintained. The search
operations searchOp ∈ Op used in the Evolutionary Algorithm family are called reproduc-
tion operations, inspired by the biological procreation mechanisms^34 of mother nature [2303].
There are four basic operations:
1. Creation simply creates a new genotype without any ancestors or heritage.
2. Duplication directly copies an existing genotype without any modification.
3. Mutation in Evolutionary Algorithms corresponds to small, random variations in the
genotype of an individual, exactly like natural mutation^35.
4. Like in sexual reproduction, recombination^36 combines two parental genotypes to a
new genotype including traits from both elders.
In the following, we will discuss these operations in detail and provide general definitions
for them. When an Evolutionary Algorithm starts, no information about the search space
has been gathered yet. Hence, unless seeding is applied (see Paragraph 28.1.2.2.2), we cannot
use existing candidate solutions to derive new ones and search operations with an arity higher
than zero cannot be applied. Creation is thus used to fill the initial population pop(t = 1).

Definition D28.11 (Creation). The creation operation create() is used to produce a new
genotype g ∈ G with a random configuration.

g = create() ⟹ g ∈ G   (28.49)

For creating the initial population of the size ps, we define the function createPop(ps) in
Algorithm 28.18.
Algorithm 28.18: pop ← createPop(ps)
Input: ps: the number of individuals in the new population
Input: [implicit] create: the creation operator
Data: i: a counter variable
Output: pop: the new population of randomly created individuals (len(pop) = ps)
1 begin
2   pop ← ()
3   for i ← 0 up to ps − 1 do
4     p ← ∅   // a new, empty individual record
5     p.g ← create()
6     pop ← addItem(pop, p)
7   return pop
Duplication is just a placeholder for copying an element of the search space, i.e., it is
what occurs when neither mutation nor recombination are applied. It is useful to increase
the share of a given type of individual in a population.

34 http://en.wikipedia.org/wiki/Reproduction [accessed 2007-07-03]
35 http://en.wikipedia.org/wiki/Mutation [accessed 2007-07-03]
36 http://en.wikipedia.org/wiki/Sexual_reproduction [accessed 2008-03-17]
Definition D28.12 (Duplication). The duplication operation duplicate : G ↦ G is used
to create an exact copy of an existing genotype g ∈ G.

g = duplicate(g) ∀g ∈ G   (28.50)

In most EAs, a unary search operation, mutation, is provided. This operation takes one
genotype as argument and modifies it. The way this modification is performed is application-
dependent. It may happen in a randomized or in a deterministic fashion. Together with
duplication, mutation corresponds to the asexual reproduction in nature.

Definition D28.13 (Mutation). The mutation operation mutate : G ↦ G is used to
create a new genotype g_n ∈ G by modifying an existing one.

g_n = mutate(g) : g ∈ G ⟹ g_n ∈ G   (28.51)

Like mutation, the recombination is a search operation inspired by natural reproduction. Its
role model is sexual reproduction. The goal of implementing a recombination operation is to
provide the search process with some means to join beneficial features of different candidate
solutions. This may happen in a deterministic or randomized fashion.
Notice that the term recombination is more general than crossover, since it stands for
arbitrary search operations that combine the traits of two individuals. Crossover, however,
is only used if the elements of the search space G are linear representations. Then, it stands for
exchanging parts of these so-called strings.

Definition D28.14 (Recombination). The recombination^37 operation recombine : G ×
G ↦ G is used to create a new genotype g_n ∈ G by combining the features of two existing
ones.

g_n = recombine(g_a, g_b) : g_a, g_b ∈ G ⟹ g_n ∈ G   (28.52)
It is not unusual to chain the application of search operators, i. e., to perform something
like mutate(recombine(g
1
, g
2
)). The four operators are altogether used to reproduce whole
populations of individuals.
Definition D28.15 (reproducePop). The population reproduction operation pop = reproducePop(mate, ps) is used to create a new population pop of ps individuals by applying the reproduction operations to the mating pool mate.

In Algorithm 28.19, we give a specification of how Evolutionary Algorithms usually apply the reproduction operations defined so far to create a new population of ps individuals. This procedure applies mutation and crossover at the specific rates mr ∈ [0, 1] and cr ∈ [0, 1], respectively.
37 http://en.wikipedia.org/wiki/Recombination [accessed 2007-07-03]
308 28 EVOLUTIONARY ALGORITHMS
Algorithm 28.19: pop ← reproducePopEA(mate, ps, mr, cr)

Input: mate: the mating pool
Input: [implicit] mps: the mating pool size
Input: ps: the number of individuals in the new population
Input: mr: the mutation rate
Input: cr: the crossover rate
Input: [implicit] mutate, recombine: the mutation and crossover operators
Data: i, j: counter variables
Output: pop: the new population (len(pop) = ps)

1 begin
2   pop ← ()
3   for i ← 0 up to ps − 1 do
4     p ← duplicate(mate[i mod mps])
5     if (randomUni[0, 1) < cr) ∧ (mps > 1) then
6       repeat
7         j ← randomUni[0, mps)
8       until j ≠ (i mod mps)
9       p.g ← recombine(p.g, mate[j].g)
10    if randomUni[0, 1) < mr then p.g ← mutate(p.g)
11    pop ← addItem(pop, p)
12  return pop
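A runnable sketch of Algorithm 28.19 in Python might look as follows. The operators are passed as explicit arguments here (the algorithm treats them as implicit inputs), and genotypes are plain lists; both are assumptions made for the sake of a self-contained example:

```python
import random

def reproduce_pop_ea(mate, ps, mr, cr, mutate, recombine):
    """Sketch of Algorithm 28.19: derive ps new genotypes from the mating
    pool `mate`, applying crossover at rate cr and mutation at rate mr."""
    mps = len(mate)
    pop = []
    for i in range(ps):
        g = list(mate[i % mps])                  # duplicate the parent
        if random.random() < cr and mps > 1:
            j = i % mps
            while j == i % mps:                  # choose a different partner
                j = random.randrange(mps)
            g = recombine(g, mate[j])
        if random.random() < mr:
            g = mutate(g)
        pop.append(g)
    return pop
```

Note the rejection loop for j: as in line 8 of the pseudocode, a recombination partner distinct from the current parent is drawn, which is why crossover is only attempted when mps > 1.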
28.6 Maintaining a Set of Non-Dominated/Non-Prevailed Individuals
Most multi-objective optimization algorithms return a set X of the best solutions x discovered instead of a single individual. Many optimization techniques also internally keep track of the set of best candidate solutions encountered during the search process. In Simulated Annealing, for instance, it is quite possible to discover an optimal element x* and subsequently depart from it to a local optimum x*_l. Therefore, optimizers normally carry a list of the non-prevailed candidate solutions ever visited with them.
In scenarios where the search space G differs from the problem space X, it often makes more sense to store the list of individual records P instead of just keeping a set X of the best phenotypes x. Since the elements of the search space are no longer required at the end of the optimization process, we define the simple operation extractPhenotypes which extracts them from a set of individuals P.

x ∈ extractPhenotypes(P) ⇔ ∃p ∈ P : x = p.x (28.53)
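In code, an individual record p can be modeled as a pair of genotype p.g and phenotype p.x; the sketch below uses plain dicts for this (an assumption for illustration) and adds a Pareto-dominance test, a common concrete instance of the prevalence comparison used throughout this section:

```python
def dominates(y1, y2):
    """True if objective vector y1 Pareto-dominates y2 (minimization):
    no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(y1, y2))
            and any(a < b for a, b in zip(y1, y2)))

def extract_phenotypes(P):
    """Equation 28.53: keep only the phenotypes p.x of the individuals,
    dropping the genotypes, which are no longer needed."""
    return [p["x"] for p in P]
```

Here the phenotypes double as objective vectors for brevity; in general, the dominance test would be applied to f(p.x) rather than p.x itself.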
28.6.1 Updating the Optimal Set
Whenever a new individual p is created, the set P holding the best individuals known so far may change. It is possible that the new candidate solution must be included in the optimal set, or that it even prevails some of the phenotypes already contained therein, which must then be removed.
Definition D28.16 (updateOptimalSet). The function updateOptimalSet updates a set of elements P_old with the new individual p_new. It uses knowledge of the prevalence relation ≺ and the corresponding comparator function cmp_f.

P_new = updateOptimalSet(P_old, p_new), with P_old, P_new ⊆ G × X and p_new ∈ G × X :
∀p_1 ∈ P_old ∄p_2 ∈ P_old : p_2.x ≺ p_1.x ⇒ P_new ⊆ P_old ∪ {p_new} ∧ ∀p_1 ∈ P_new ∄p_2 ∈ P_new : p_2.x ≺ p_1.x
(28.54)
We define two equivalent approaches in Algorithm 28.20 and Algorithm 28.21 which perform the necessary operations. Algorithm 28.20 creates a new, empty optimal set and successively inserts non-prevailed elements, whereas Algorithm 28.21 removes all elements which are prevailed by the new individual p_new from the old set P_old.
Algorithm 28.20: P_new ← updateOptimalSet(P_old, p_new, cmp_f)

Input: P_old: a set of non-prevailed individuals as known before the creation of p_new
Input: p_new: a new individual to be checked
Input: cmp_f: the comparator function representing the dominance/prevalence relation
Output: P_new: the set updated with the knowledge of p_new

1 begin
2   P_new ← ∅
3   foreach p_old ∈ P_old do
4     if ¬(p_new.x ≺ p_old.x) then
5       P_new ← P_new ∪ {p_old}
6       if p_old.x ≺ p_new.x then return P_old
7   return P_new ∪ {p_new}
Algorithm 28.21: P_new ← updateOptimalSet(P_old, p_new, cmp_f) (2nd version)

Input: P_old: a set of non-prevailed individuals as known before the creation of p_new
Input: p_new: a new individual to be checked
Input: cmp_f: the comparator function representing the dominance/prevalence relation
Output: P_new: the set updated with the knowledge of p_new

1 begin
2   P_new ← P_old
3   foreach p_old ∈ P_old do
4     if p_new.x ≺ p_old.x then
5       P_new ← P_new \ {p_old}
6     else if p_old.x ≺ p_new.x then
7       return P_old
8   return P_new ∪ {p_new}
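A compact Python sketch equivalent to these two archive-update algorithms might look as follows; `prevails(a, b)` stands in for the comparison induced by cmp_f (for example, Pareto dominance), and individuals are modeled as dicts with an "x" entry, which is an assumption made for illustration:

```python
def update_optimal_set(P_old, p_new, prevails):
    """Equivalent of Algorithms 28.20/28.21: fold a new individual into an
    archive of mutually non-prevailed individuals."""
    P_new = []
    for p_old in P_old:
        if prevails(p_old["x"], p_new["x"]):
            return P_old               # p_new is prevailed: keep the old set
        if not prevails(p_new["x"], p_old["x"]):
            P_new.append(p_old)        # drop the elements that p_new prevails
    return P_new + [p_new]
```

Like Algorithm 28.20, it returns the old archive unchanged as soon as any existing member prevails the newcomer; like Algorithm 28.21, it silently drops every archive member the newcomer prevails.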
Especially in the case of Evolutionary Algorithms, not a single new element is created in each generation but a set P. Let us define the operation updateOptimalSetN for this purpose. This operation can easily be realized by iteratively applying updateOptimalSet, as shown in Algorithm 28.22.
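This iterative application is just a fold over the set of new individuals, which can be sketched in one line of Python; the `update` argument stands for any single-individual archive update such as updateOptimalSet (the names here are illustrative):

```python
from functools import reduce

def update_optimal_set_n(P_old, P, update):
    """Sketch of Algorithm 28.22: merge a whole set P of new individuals
    into the archive by folding a single-individual update over it."""
    return reduce(update, P, P_old)
```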
Algorithm 28.22: P_new ← updateOptimalSetN(P_old, P, cmp_f)

Input: P_old: the old set of non-prevailed individuals
Input: P: the set of new individuals to be checked for non-prevailedness
Input: cmp_f: the comparator function representing the dominance/prevalence relation
Data: p: an individual from P
Output: P_new: the updated set

1 begin
2   P_new ← P_old
3   foreach p ∈ P do P_new ← updateOptimalSet(P_new, p, cmp_f)
4   return P_new

28.6.2 Obtaining Non-Prevailed Elements

The function updateOptimalSet helps an optimizer to build and maintain a list of optimal individuals. When the optimization process finishes, extractPhenotypes can then be used to obtain the non-prevailed elements of the problem space and return them to the user. However, not all optimization methods maintain a non-prevailed set all the time. When they terminate, they have to extract all optimal elements from the set of individuals pop currently known.
Definition D28.17 (extractBest). The function extractBest extracts a set P of non-prevailed (non-dominated) individuals from any given set of individuals pop according to the comparator cmp_f.

P ⊆ pop ⊆ G × X, P = extractBest(pop, cmp_f) ⇒ ∀p_1 ∈ P ∄p_2 ∈ pop : p_2.x ≺ p_1.x (28.55)
Algorithm 28.23 defines one possible realization of extractBest. By the way, this approach could also be used for updating an optimal set:

updateOptimalSet(P_old, p_new, cmp_f) ≡ extractBest(P_old ∪ {p_new}, cmp_f) (28.56)
Algorithm 28.23: P ← extractBest(pop, cmp_f)

Input: pop: the list to extract the non-prevailed individuals from
Data: p_any, p_chk: candidate solutions tested for supremacy
Data: i, j: counter variables
Output: P: the non-prevailed subset extracted from pop

1 begin
2   P ← pop
3   for i ← len(P) − 1 down to 1 do
4     for j ← i − 1 down to 0 do
5       if P[i].x ≺ P[j].x then
6         P ← deleteItem(P, j)
7         i ← i − 1
8       else if P[j].x ≺ P[i].x then
9         P ← deleteItem(P, i)
10  return listToSet(P)
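A declarative sketch of extractBest, equivalent in effect to Algorithm 28.23 but quadratic in a more obvious way, can be written as a single filter (again modeling individuals as dicts with an "x" entry, an illustrative assumption):

```python
def extract_best(pop, prevails):
    """Sketch of Algorithm 28.23 / Equation 28.55: keep exactly those
    individuals that no member of pop prevails."""
    return [p for p in pop
            if not any(prevails(q["x"], p["x"]) for q in pop)]
```

Consistent with Equation 28.56, updating an archive with a new individual could equivalently be expressed as `extract_best(P_old + [p_new], prevails)`, though the dedicated update routines are cheaper.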
28.6.3 Pruning the Optimal Set

In some optimization problems, there may be very many, if not infinitely many, optimal individuals. The set X of the best discovered candidate solutions computed by the optimization algorithms, however, cannot grow infinitely because we only have limited memory. The same holds for the archives archive of non-dominated individuals maintained by several MOEAs. Therefore, we need to perform an action called pruning which reduces the size of the set of non-prevailed individuals to a given limit k [605, 1959, 2645].

There exists a variety of possible pruning operations. Morse [1959] and Taboada and Coit [2644], for instance, suggest to use clustering algorithms^38 for this purpose. In principle, also any combination of the fitness assignment and selection schemes discussed in Chapter 28 would do. It is very important that the loss of generality during a pruning operation is minimized. Fieldsend et al. [912], for instance, point out that if the extreme elements of the optimal frontier are lost, the resulting set may only represent a small fraction of the optima that could have been found without pruning. Instead of working on the set X of best discovered candidate solutions x, we again base our approach on a set of individuals P, and we define:
Definition D28.18 (pruneOptimalSet). The pruning operation pruneOptimalSet reduces the size of a set P_old of individuals to fit a given upper boundary k.

P_new ⊆ P_old ⊆ G × X, k ∈ N_1 : P_new = pruneOptimalSet(P_old, k) ⇒ |P_new| ≤ k (28.57)
28.6.3.1 Pruning via Clustering

Algorithm 28.24 uses clustering to provide the functionality specified in this definition and thereby realizes the idea of Morse [1959] and Taboada and Coit [2644]. Basically, any given clustering algorithm could be used as a replacement for cluster; see ?? on page ?? for more information on clustering.
Algorithm 28.24: P_new ← pruneOptimalSet_c(P_old, k)

Input: P_old: the set to be pruned
Input: k: the maximum size allowed for the set (k > 0)
Input: [implicit] cluster: the clustering algorithm to be used
Input: [implicit] nucleus: the function used to determine the nuclei of the clusters
Data: B: the set of clusters obtained by the clustering algorithm
Data: b: a single cluster b ∈ B
Output: P_new: the pruned set

1 begin
    // obtain k clusters
2   B ← cluster(P_old)
3   P_new ← ∅
4   foreach b ∈ B do P_new ← P_new ∪ {nucleus(b)}
5   return P_new
38 You can find a discussion of clustering algorithms in ??.
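The clustering-based pruning of Algorithm 28.24 might be sketched as follows. The `chunk_cluster` function is a deliberately naive stand-in for a real clustering algorithm (it just sorts by the first objective and cuts the archive into k contiguous chunks), and `medoid` plays the role of nucleus; both are illustrative assumptions:

```python
def prune_by_clustering(P_old, k, cluster, nucleus):
    """Sketch of Algorithm 28.24: split the archive into (at most) k
    clusters and keep one representative, the nucleus, per cluster."""
    return [nucleus(b) for b in cluster(P_old, k)]

def chunk_cluster(P, k):
    """Naive stand-in for a clustering algorithm: sort by the first
    objective and cut the archive into k contiguous chunks."""
    P = sorted(P, key=lambda p: p["x"])
    size = -(-len(P) // k)                  # ceiling division
    return [P[i:i + size] for i in range(0, len(P), size)]

def medoid(cluster):
    """Use the middle element of a chunk as its nucleus."""
    return cluster[len(cluster) // 2]
```

Any genuine clustering method, e.g. k-means over the objective vectors, could be substituted for `chunk_cluster`; the pruning wrapper itself only requires that the clusters partition the archive.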
28.6.3.2 Adaptive Grid Archiving

Let us discuss the adaptive grid archiving algorithm (AGA) as an example for a more sophisticated approach to prune the set of non-dominated/non-prevailed individuals. AGA has been introduced for the Evolutionary Algorithm PAES^39 by Knowles and Corne [1552] and uses the objective values (computed by the set of objective functions f) directly. Hence, it can treat the individuals as |f|-dimensional vectors where each dimension corresponds to one objective function f ∈ f. This |f|-dimensional objective space Y is divided into a grid with d divisions in each dimension. Its span in each dimension is defined by the corresponding minimum and maximum objective values. The individuals with the minimum/maximum values are always preserved. This circumvents the phenomenon of narrowing down the optimal set described by Fieldsend et al. [912] and distinguishes the AGA approach from clustering-based methods. Hence, it is not possible to define maximum set sizes k which are smaller than 2|f|. If individuals need to be removed from the set because it became too large, the AGA approach removes those that reside in the regions which are the most crowded.

The original sources outline the algorithm basically with descriptions and definitions. Here, we introduce a more or less trivial specification in Algorithm 28.25 on the next page and Algorithm 28.26 on page 314.

The function divideAGA is internally used to perform the grid division. It transforms the objective values of each individual to grid coordinates stored in the array lst. Furthermore, divideAGA also counts the number of individuals that reside in the same coordinates for each individual and makes it available in cnt. It ensures the preservation of border individuals by assigning a negative cnt value to them. This basically disables their later disposal by the pruning algorithm pruneOptimalSetAGA, since pruneOptimalSetAGA deletes the individuals from the set P_old that have the largest cnt values first.

39 PAES is discussed in ?? on page ??.
Algorithm 28.25: (P_l, lst, cnt) ← divideAGA(P_old, d)

Input: P_old: the set to be pruned
Input: d: the number of divisions to be performed per dimension
Input: [implicit] f: the set of objective functions
Data: i, j: counter variables
Data: mini, maxi, mul: temporary stores
Output: (P_l, lst, cnt): a tuple containing the list representation P_l of P_old, a list lst assigning grid coordinates to the elements of P_l, and a list cnt containing the number of elements in those grid locations

1 begin
2   mini ← createList(|f|, ∞)
3   maxi ← createList(|f|, −∞)
4   for i ← |f| down to 1 do
5     mini[i − 1] ← min{f_i(p.x) ∀p ∈ P_old}
6     maxi[i − 1] ← max{f_i(p.x) ∀p ∈ P_old}
7   mul ← createList(|f|, 0)
8   for i ← |f| − 1 down to 0 do
9     if maxi[i] ≠ mini[i] then
10      mul[i] ← d / (maxi[i] − mini[i])
11    else
12      maxi[i] ← maxi[i] + 1
13      mini[i] ← mini[i] − 1
14  P_l ← setToList(P_old)
15  lst ← createList(len(P_l), ∅)
16  cnt ← createList(len(P_l), 0)
17  for i ← len(P_l) − 1 down to 0 do
18    lst[i] ← createList(|f|, 0)
19    for j ← 1 up to |f| do
20      if (f_j(P_l[i].x) ≤ mini[j − 1]) ∨ (f_j(P_l[i].x) ≥ maxi[j − 1]) then
21        cnt[i] ← cnt[i] − 2
22      lst[i][j − 1] ← ⌊(f_j(P_l[i].x) − mini[j − 1]) ∗ mul[j − 1]⌋
23    if cnt[i] ≥ 0 then
24      for j ← i + 1 up to len(P_l) − 1 do
25        if lst[i] = lst[j] then
26          cnt[i] ← cnt[i] + 1
27          if cnt[j] ≥ 0 then cnt[j] ← cnt[j] + 1
28  return (P_l, lst, cnt)
Algorithm 28.26: P_new ← pruneOptimalSetAGA(P_old, d, k)

Input: P_old: the set to be pruned
Input: d: the number of divisions to be performed per dimension
Input: k: the maximum size allowed for the set (k ≥ 2|f|)
Input: [implicit] f: the set of objective functions
Data: i: a counter variable
Data: P_l: the list representation of P_old
Data: lst: a list assigning grid coordinates to the elements of P_l
Data: cnt: the number of elements in the grid locations defined in lst
Output: P_new: the pruned set

1 begin
2   if len(P_old) ≤ k then return P_old
3   (P_l, lst, cnt) ← divideAGA(P_old, d)
4   while len(P_l) > k do
5     idx ← 0
6     for i ← len(P_l) − 1 down to 1 do
7       if cnt[i] > cnt[idx] then idx ← i
8     for i ← len(P_l) − 1 down to 0 do
9       if (lst[i] = lst[idx]) ∧ (cnt[i] > 0) then cnt[i] ← cnt[i] − 1
10    P_l ← deleteItem(P_l, idx)
11    cnt ← deleteItem(cnt, idx)
12    lst ← deleteItem(lst, idx)
13  return listToSet(P_l)
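The grid-division and pruning steps of AGA can be condensed into the following Python sketch. It is a simplified reimplementation, not the original specification: the grid is recomputed in every deletion step (the pseudocode above updates cnt incrementally instead), and border individuals are simply marked with a count of −1 so they are never the most crowded:

```python
def grid_coords(P, d, objectives):
    """Sketch of divideAGA: map each individual to integer grid coordinates
    in objective space (d divisions per dimension) and count cell sharing.
    Individuals extremal in any objective get a negative count."""
    ys = [[f(p["x"]) for f in objectives] for p in P]
    lo = [min(col) for col in zip(*ys)]
    hi = [max(col) for col in zip(*ys)]
    coords = []
    for y in ys:
        c = tuple(min(int((v - l) * d / (h - l)) if h > l else 0, d - 1)
                  for v, l, h in zip(y, lo, hi))
        coords.append(c)
    cnt = [coords.count(c) for c in coords]
    for i, y in enumerate(ys):            # protect the border individuals
        if any(v == l or v == h for v, l, h in zip(y, lo, hi)):
            cnt[i] = -1
    return coords, cnt

def prune_aga(P, d, k, objectives):
    """Sketch of Algorithm 28.26: while the archive is too large, delete an
    individual from the most crowded grid cell."""
    P = list(P)
    while len(P) > k:
        coords, cnt = grid_coords(P, d, objectives)
        idx = max(range(len(P)), key=lambda i: cnt[i])
        del P[idx]
    return P
```

Recomputing the grid per deletion makes this O(n^2) per removed individual; it trades the book-keeping of Algorithm 28.26 for brevity, which is acceptable for an illustration but not for large archives.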
28.7 General Information on Evolutionary Algorithms
28.7.1 Applications and Examples
Table 28.4: Applications and Examples of Evolutionary Algorithms.
Area References
Art [1174, 2320, 2445]; see also Tables 29.1 and 31.1
Astronomy [480]
Chemistry [587, 678, 2676]; see also Tables 29.1, 30.1, 31.1, and 32.1
Combinatorial Problems [308, 354, 360, 371, 400, 444, 445, 451, 457, 491, 564, 567,
619, 644, 649, 690, 802, 807, 808, 810, 812, 820, 1011–1013,
1117, 1145, 1148, 1149, 1177, 1290, 1311, 1417, 1429, 1437,
1486, 1532, 1618, 1653, 1696, 1730, 1803, 1822–1825, 1830,
1835, 1859, 1871, 1884, 1961, 2025, 2055, 2105, 2144, 2162,
2188, 2189, 2245, 2335, 2448, 2499, 2535, 2633, 2663, 2669,
2772, 2811, 2862, 3027, 3064, 3071, 3101]; see also Tables
29.1, 30.1, 31.1, 33.1, 34.1, and 35.1
Computer Graphics [1452, 1577, 1661, 2096, 2487, 2488, 2651]; see also Tables
29.1, 30.1, 31.1, and 33.1
Control [1710]; see also Tables 29.1, 31.1, 32.1, and 35.1
Cooperation and Teamwork [1996, 2424]; see also Tables 29.1 and 31.1
Data Compression see Table 31.1
Data Mining [144, 360, 480, 1048, 1746, 2213, 2503, 2644, 2645, 2676,
3018]; see also Tables 29.1, 31.1, 32.1, 34.1, and 35.1
Databases [371, 567, 1145, 1696, 1822, 1824, 1825, 2633, 3064, 3071];
see also Tables 29.1, 31.1, and 35.1
Decision Making see Table 31.1
Digital Technology [1479]; see also Tables 29.1, 30.1, 31.1, 34.1, and 35.1
Distributed Algorithms and Systems [89, 112, 125, 156, 202, 328, 352–354, 404, 439, 545, 605,
663, 669, 702, 705, 786, 811, 843, 871, 882, 963, 1006, 1007,
1094, 1225, 1292, 1298, 1532, 1574, 1633, 1718, 1733, 1749,
1800, 1835, 1838, 1863, 1883, 1995, 2020, 2058, 2071, 2100,
2387, 2425, 2440, 2448, 2470, 2493, 2628, 2644, 2645, 2663,
2694, 2791, 2862, 2864, 2893, 2903, 3027, 3061, 3085]; see
also Tables 29.1, 30.1, 31.1, 32.1, 33.1, 34.1, and 35.1
E-Learning [1158, 1298, 2628, 2763]; see also Table 29.1
Economics and Finances [549, 585, 1718, 1825, 2622]; see also Tables 29.1, 31.1, 33.1,
and 35.1
Engineering [112, 156, 267, 517, 535, 567, 579, 605, 644, 669, 677, 777,
786, 787, 853, 871, 882, 953, 1094, 1225, 1310, 1323, 1479,
1486, 1508, 1574, 1710, 1717, 1733, 1746, 1749, 1830, 1835,
1838, 1883, 2058, 2059, 2083, 2111, 2144, 2189, 2320, 2364,
2425, 2448, 2468, 2499, 2535, 2644, 2645, 2663, 2677, 2693,
2694, 2743, 2799, 2858, 2861, 2862, 2864]; see also Tables
29.1, 30.1, 31.1, 32.1, 33.1, 34.1, and 35.1
Function Optimization [742, 795, 853, 855, 1018, 1523, 1727, 1927, 1986, 2099, 2210,
2799]; see also Tables 29.1, 30.1, 32.1, 33.1, 34.1, and 35.1
Games [2350]; see also Tables 29.1, 31.1, and 32.1
Graph Theory [39, 63, 67, 89, 112, 202, 328, 329, 353, 354, 439, 644, 669,
786, 804, 843, 882, 963, 1007, 1094, 1225, 1290, 1486, 1532,
1574, 1733, 1749, 1800, 1835, 1863, 1883, 1934, 1995, 2020,
2025, 2058, 2100, 2144, 2189, 2237, 2387, 2425, 2440, 2448,
2470, 2493, 2663, 2677, 2694, 2772, 2791, 2864, 3027, 3085];
see also Tables 29.1, 30.1, 31.1, 33.1, 34.1, and 35.1
Healthcare [567, 2021, 2537]; see also Tables 29.1, 31.1, and 35.1
Image Synthesis [291, 777, 2445, 2651]
Logistics [354, 444, 445, 451, 517, 579, 619, 802, 808, 810, 812, 820,
1011, 1013, 1148, 1149, 1177, 1429, 1653, 1730, 1803, 1823,
1830, 1859, 1869, 1884, 1961, 2059, 2111, 2162, 2188, 2189,
2245, 2669, 2811, 2912]; see also Tables 29.1, 30.1, 33.1, and
34.1
Mathematics [376, 377, 550, 1217, 1661, 2350, 2407, 2677]; see also Tables
29.1, 30.1, 31.1, 32.1, 33.1, 34.1, and 35.1
Military and Defense [439, 1661, 1872]; see also Table 34.1
Motor Control [489]; see also Tables 33.1 and 34.1
Multi-Agent Systems [967, 1661, 2144, 2268, 2693, 2766]; see also Tables 29.1, 31.1,
and 35.1
Multiplayer Games see Table 31.1
Physics [407, 678]; see also Tables 29.1, 31.1, 32.1, and 34.1
Prediction [144, 1045]; see also Tables 29.1, 30.1, 31.1, and 35.1
Security [404, 540, 871, 1486, 1507, 1508, 1661, 1749]; see also Tables
29.1, 31.1, and 35.1
Software [703, 842, 1486, 1508, 1791, 1934, 1995, 2213, 2677, 2863,
2887]; see also Tables 29.1, 30.1, 31.1, 33.1, and 34.1
Sorting [1233]; see also Table 31.1
Telecommunication [820]
Testing [2863]; see also Table 31.3
Theorem Proving and Automatic Verification
see Table 29.1
Water Supply and Management [1268, 1986]
Wireless Communication [39, 63, 67, 68, 155, 644, 787, 1290, 1486, 1717, 1934, 2144,
2189, 2237, 2364, 2677]; see also Tables 29.1, 31.1, 33.1, 34.1,
and 35.1
28.7.2 Books
1. Handbook of Evolutionary Computation [171]
2. Variants of Evolutionary Algorithms for Real-World Applications [567]
3. New Ideas in Optimization [638]
4. Multiobjective Problem Solving from Nature – From Concepts to Applications [1554]
5. Natural Intelligence for Scheduling, Planning and Packing Problems [564]
6. Advances in Evolutionary Computing – Theory and Applications [1049]
7. Bio-inspired Algorithms for the Vehicle Routing Problem [2162]
8. Evolutionary Computation 1: Basic Algorithms and Operators [174]
9. Nature-Inspired Algorithms for Optimisation [562]
10. Telecommunications Optimization: Heuristic and Adaptive Techniques [640]
11. Hybrid Evolutionary Algorithms [1141]
12. Success in Evolutionary Computation [3014]
13. Computational Intelligence in Telecommunications Networks [2144]
14. Evolutionary Design by Computers [272]
15. Evolutionary Computation in Economics and Finance [549]
16. Parameter Setting in Evolutionary Algorithms [1753]
17. Evolutionary Computation in Dynamic and Uncertain Environments [3019]
18. Applications of Multi-Objective Evolutionary Algorithms [597]
19. Introduction to Evolutionary Computing [865]
20. Nature-Inspired Informatics for Intelligent Applications and Knowledge Discovery: Implications in Business, Science and Engineering [563]
21. Evolutionary Computation [847]
22. Evolutionary Computation [863]
23. Evolutionary Multiobjective Optimization – Theoretical Advances and Applications [8]
24. Evolutionary Computation 2: Advanced Algorithms and Operators [175]
25. Evolutionary Algorithms for Solving Multi-Objective Problems [599]
26. Genetic and Evolutionary Computation for Image Processing and Analysis [466]
27. Evolutionary Computation: The Fossil Record [939]
28. Swarm Intelligence: Collective, Adaptive [1524]
29. Parallel Evolutionary Computations [2015]
30. Evolutionary Computation in Practice [3043]
31. Design by Evolution: Advances in Evolutionary Design [1234]
32. New Learning Paradigms in Soft Computing [1430]
33. Designing Evolutionary Algorithms for Dynamic Environments [1956]
34. How to Solve It: Modern Heuristics [1887]
35. Electromagnetic Optimization by Genetic Algorithms [2250]
36. Advances in Metaheuristics for Hard Optimization [2485]
37. Soft Computing: Integrating Evolutionary, Neural, and Fuzzy Systems [2685]
38. The Art of Artificial Evolution: A Handbook on Evolutionary Art and Music [2320]
39. Evolutionary Optimization [2394]
40. Rule-Based Evolutionary Online Learning Systems: A Principled Approach to LCS Analysis
and Design [460]
41. Multi-Objective Optimization in Computational Intelligence: Theory and Practice [438]
42. Computer Simulation in Genetics [659]
43. Evolutionary Computation: A Unified Approach [715]
44. Genetic Algorithms in Search, Optimization, and Machine Learning [1075]
45. Evolutionary Computation in Data Mining [1048]
46. Introduction to Stochastic Search and Optimization [2561]
47. Advances in Evolutionary Algorithms [1581]
48. Self-Adaptive Heuristics for Evolutionary Computation [1617]
49. Representations for Genetic and Evolutionary Algorithms [2338]
50. Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms [167]
51. Search Methodologies – Introductory Tutorials in Optimization and Decision Support Techniques [453]
52. Data Mining and Knowledge Discovery with Evolutionary Algorithms [994]
53. Evolutionäre Algorithmen [2879]
54. Advances in Evolutionary Algorithms – Theory, Design and Practice [41]
55. Evolutionary Algorithms in Molecular Design [587]
56. Multi-Objective Optimization Using Evolutionary Algorithms [740]
57. Evolutionary Dynamics in Time-Dependent Environments [2941]
58. Evolutionary Computation: Theory and Applications [3025]
59. Evolutionary Computation [2987]
28.7.3 Conferences and Workshops
Table 28.5: Conferences and Workshops on Evolutionary Algorithms.
GECCO: Genetic and Evolutionary Computation Conference
History: 2011/07: Dublin, Ireland, see [841]
2010/07: Portland, OR, USA, see [30, 31]
2009/07: Montreal, QC, Canada, see [2342, 2343]
2008/07: Atlanta, GA, USA, see [585, 1519, 1872, 2268, 2537]
2007/07: London, UK, see [905, 2579, 2699, 2700]
2006/07: Seattle, WA, USA, see [1516, 2441]
2005/06: Washington, DC, USA, see [27, 304, 2337, 2339]
2004/06: Seattle, WA, USA, see [751, 752, 1515, 2079, 2280]
2003/07: Chicago, IL, USA, see [484, 485, 2578]
2002/07: New York, NY, USA, see [224, 478, 1675, 1790, 2077]
2001/07: San Francisco, CA, USA, see [1104, 2570]
2000/07: Las Vegas, NV, USA, see [1452, 2155, 2928, 2935, 2986]
1999/07: Orlando, FL, USA, see [211, 482, 2093, 2502]
CEC: IEEE Conference on Evolutionary Computation
History: 2011/06: New Orleans, LA, USA, see [2027]
2010/07: Barcelona, Catalonia, Spain, see [1354]
2009/05: Trondheim, Norway, see [1350]
2008/06: Hong Kong (Xiānggǎng), China, see [1889]
2007/09: Singapore, see [1343]
2006/07: Vancouver, BC, Canada, see [3033]
2005/09: Edinburgh, Scotland, UK, see [641]
2004/06: Portland, OR, USA, see [1369]
2003/12: Canberra, Australia, see [2395]
2002/05: Honolulu, HI, USA, see [944]
2001/05: Gangnam-gu, Seoul, Korea, see [1334]
2000/07: La Jolla, CA, USA, see [1333]
1999/07: Washington, DC, USA, see [110]
1998/05: Anchorage, AK, USA, see [2496]
1997/04: Indianapolis, IN, USA, see [173]
1996/05: Nagoya, Japan, see [1445]
1995/11: Perth, WA, Australia, see [1361]
1994/06: Orlando, FL, USA, see [1891]
EvoWorkshops: Real-World Applications of Evolutionary Computing
History: 2011/04: Torino, Italy, see [788]
2009/04: Tübingen, Germany, see [1052]
2008/05: Naples, Italy, see [1051]
2007/04: València, Spain, see [1050]
2006/04: Budapest, Hungary, see [2341]
2005/03: Lausanne, Switzerland, see [2340]
2004/04: Coimbra, Portugal, see [2254]
2003/04: Colchester, Essex, UK, see [2253]
2002/04: Kinsale, Ireland, see [465]
2001/04: Lake Como, Milan, Italy, see [343]
2000/04: Edinburgh, Scotland, UK, see [464]
1999/05: Göteborg, Sweden, see [2197]
1998/04: Paris, France, see [1310]
PPSN: Conference on Parallel Problem Solving from Nature
History: 2012/09: Taormina, Italy, see [2675]
2010/09: Kraków, Poland, see [2412, 2413]
2008/09: Birmingham, UK and Dortmund, North Rhine-Westphalia, Germany, see
[2354, 3028]
2006/09: Reykjavik, Iceland, see [2355]
2002/09: Granada, Spain, see [1870]
2000/09: Paris, France, see [2421]
1998/09: Amsterdam, The Netherlands, see [866]
1996/09: Berlin, Germany, see [2818]
1994/10: Jerusalem, Israel, see [693]
1992/09: Brussels, Belgium, see [1827]
1990/10: Dortmund, North Rhine-Westphalia, Germany, see [2438]
WCCI: IEEE World Congress on Computational Intelligence
History: 2012/06: Brisbane, QLD, Australia, see [1411]
2010/07: Barcelona, Catalonia, Spain, see [1356]
2008/06: Hong Kong (Xiānggǎng), China, see [3104]
2006/07: Vancouver, BC, Canada, see [1341]
2002/05: Honolulu, HI, USA, see [1338]
1998/05: Anchorage, AK, USA, see [1330]
1994/06: Orlando, FL, USA, see [1324]
EMO: International Conference on Evolutionary Multi-Criterion Optimization
History: 2011/04: Ouro Preto, MG, Brazil, see [2654]
2009/04: Nantes, France, see [862]
2007/03: Matsushima, Sendai, Japan, see [2061]
2005/03: Guanajuato, Mexico, see [600]
2003/04: Faro, Portugal, see [957]
2001/03: Zürich, Switzerland, see [3099]
AE/EA: Artificial Evolution
History: 2011/10: Angers, France, see [2586]
2009/10: Strasbourg, France, see [608]
2007/10: Tours, France, see [1928]
2005/10: Lille, France, see [2657]
2003/10: Marseilles, France, see [1736]
2001/10: Le Creusot, France, see [606]
1999/11: Dunkerque, France, see [948]
1997/10: Nîmes, France, see [1191]
1995/09: Brest, France, see [75]
1994/09: Toulouse, France, see [74]
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
ICANNGA: International Conference on Adaptive and Natural Computing Algorithms
History: 2011/04: Ljubljana, Slovenia, see [2589]
2009/04: Kuopio, Finland, see [1564]
2007/04: Warsaw, Poland, see [257, 258]
2005/03: Coimbra, Portugal, see [2297]
2003/04: Roanne, France, see [2142]
2001/04: Prague, Czech Republic, see [1641]
1999/04: Portorož, Slovenia, see [801]
1997/04: Vienna, Austria, see [2527]
1995/04: Alès, France, see [2141]
1993/04: Innsbruck, Austria, see [69]
ICCI: IEEE International Conference on Cognitive Informatics
See Table 10.2 on page 128.
ICNC: International Conference on Advances in Natural Computation
History: 2012/05: Chóngqìng, China, see [574]
2011/07: Shànghǎi, China, see [1379]
2010/08: Yantai, Shandong, China, see [1355]
2009/08: Tiānjīn, China, see [2848]
2008/10: Jǐnán, Shandong, China, see [1150]
2007/08: Hǎikǒu, Hainan, China, see [1711]
2006/09: Xī'ān, Shǎnxī, China, see [1443, 1444]
2005/08: Chángshā, Húnán, China, see [2850–2852]
SEAL: International Conference on Simulated Evolution and Learning
See Table 10.2 on page 130.
BIOMA: International Conference on Bioinspired Optimization Methods and their Applications
History: 2012/05: Bohinj, Slovenia, see [349]
2010/05: Ljubljana, Slovenia, see [921]
2008/10: Ljubljana, Slovenia, see [920]
2006/10: Ljubljana, Slovenia, see [919]
2004/10: Ljubljana, Slovenia, see [918]
BIONETICS: International Conference on Bio-Inspired Models of Network, Information, and Com-
puting Systems
See Table 10.2 on page 130.
FEA: International Workshop on Frontiers in Evolutionary Algorithms
History: 2005/07: Salt Lake City, UT, USA, see [2379]
2003/09: Cary, NC, USA, see [546]
2002/03: Research Triangle Park, NC, USA, see [508]
2000/02: Atlantic City, NJ, USA, see [2853]
1998/10: Research Triangle Park, NC, USA, see [2293]
1997/03: Research Triangle Park, NC, USA, see [2475]
ICARIS: International Conference on Artificial Immune Systems
See Table 10.2 on page 130.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
MICAI: Mexican International Conference on Artificial Intelligence
See Table 10.2 on page 131.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
WSC: Online Workshop/World Conference on Soft Computing (in Industrial Applications)
See Table 10.2 on page 131.
CIMCA: International Conference on Computational Intelligence for Modelling, Control and Au-
tomation
History: 2006/11: Sydney, NSW, Australia, see [1923]
2005/11: Vienna, Austria, see [1370]
CIS: International Conference on Computational Intelligence and Security
History: 2009/12: Běijīng, China, see [1393]
2008/12: Sūzhōu, Jiāngsū, China, see [1392]
2007/12: Harbin, Heilongjiang, China, see [1390]
2006/11: Guǎngzhōu, Guǎngdōng, China, see [557]
2005/12: Xī'ān, Shǎnxī, China, see [1192, 1193]
CSTST: International Conference on Soft Computing as Transdisciplinary Science and Technology
See Table 10.2 on page 132.
EUFIT: European Congress on Intelligent Techniques and Soft Computing
See Table 10.2 on page 132.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
EvoCOP: European Conference on Evolutionary Computation in Combinatorial Optimization
History: 2009/04: Tübingen, Germany, see [645]
2008/03: Naples, Italy, see [2775]
2007/04: València, Spain, see [646]
2006/04: Budapest, Hungary, see [1116]
2005/03: Lausanne, Switzerland, see [2252]
2004/04: Coimbra, Portugal, see [1115]
GEC: ACM/SIGEVO Summit on Genetic and Evolutionary Computation
History: 2009/06: Shànghǎi, China, see [3000]
GEM: International Conference on Genetic and Evolutionary Methods
History: 2011/07: Las Vegas, NV, USA, see [2552]
2010/07: Las Vegas, NV, USA, see [119]
2009/07: Las Vegas, NV, USA, see [121]
2008/06: Las Vegas, NV, USA, see [120]
2007/06: Las Vegas, NV, USA, see [122]
GEWS: Grammatical Evolution Workshop
See Table 31.2 on page 385.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
SoCPaR: International Conference on SOft Computing and PAttern Recognition
See Table 10.2 on page 133.
Progress in Evolutionary Computation: Workshops on Evolutionary Computation
History: 1993–1994: Australia, see [3023]
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
AJWS: Australia-Japan Joint Workshop on Intelligent & Evolutionary Systems
History: 2001/11: Dunedin, New Zealand, see [1498]
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
ASC: IASTED International Conference on Artificial Intelligence and Soft Computing
See Table 10.2 on page 134.
EvoNUM: European Event on Bio-Inspired Algorithms for Continuous Parameter Optimisation
History: 2011/04: Torino, Italy, see [2588]
FOCI: IEEE Symposium on Foundations of Computational Intelligence
History: 2007/04: Honolulu, HI, USA, see [1865]
GICOLAG: Global Optimization Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
IJCCI: International Conference on Computational Intelligence
History: 2010/10: València, Spain, see [914]
2009/10: Madeira, Portugal, see [1809]
IWNICA: International Workshop on Nature Inspired Computation and Applications
History: 2011/04: Héféi, Ānhuī, China, see [2754]
2010/10: Héféi, Ānhuī, China, see [2758]
2008/05: Héféi, Ānhuī, China, see [2757]
2006/10: Héféi, Ānhuī, China, see [1122]
2004/10: Héféi, Ānhuī, China, see [2755, 2756]
Krajowa Konferencja Algorytmy Ewolucyjne i Optymalizacja Globalna (Polish National Conference on Evolutionary Algorithms and Global Optimization)
See Table 10.2 on page 134.
NaBIC: World Congress on Nature and Biologically Inspired Computing
History: 2011/11: Salamanca, Spain, see [2368]
2010/12: Kitakyushu, Japan, see [1545]
2009/12: Coimbatore, India, see [12]
WPBA: Workshop on Parallel Architectures and Bioinspired Algorithms
History: 2010/09: Vienna, Austria, see [2311]
2007/07: London, UK, see [905]
2005/06: Oslo, Norway, see [479]
EvoRobots: International Symposium on Evolutionary Robotics
History: 2001/10: Tokyo, Japan, see [1099]
ICEC: International Conference on Evolutionary Computation
History: 2010/10: València, Spain, see [1276]
2009/10: Madeira, Portugal, see [2325]
IWACI: International Workshop on Advanced Computational Intelligence
History: 2010/08: Suzhou, Jiangsu, China, see [2995]
2009/10: Mexico City, Mexico, see [1875]
2008/06: Macau, China, see [2752]
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
SEMCCO: International Conference on Swarm, Evolutionary and Memetic Computing
History: 2010/12: Chennai, India, see [2595]
SSCI: IEEE Symposium Series on Computational Intelligence
Conference series which contains many other symposia and workshops.
History: 2011/04: Paris, France, see [1357]
2009/03: Nashville, TN, USA, see [1353]
2007/04: Honolulu, HI, USA, see [1345]
28.7.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Evolutionary Computation published by MIT Press
3. IEEE Transactions on Systems, Man, and Cybernetics Part B: Cybernetics published by
IEEE Systems, Man, and Cybernetics Society
4. International Journal of Computational Intelligence and Applications (IJCIA) published by
Imperial College Press Co. and World Scientific Publishing Co.
5. Genetic Programming and Evolvable Machines published by Kluwer Academic Publishers and
Springer Netherlands
6. Natural Computing: An International Journal published by Kluwer Academic Publishers and
Springer Netherlands
7. Journal of Heuristics published by Springer Netherlands
8. Journal of Artificial Evolution and Applications published by Hindawi Publishing Corporation
9. International Journal of Computational Intelligence Research (IJCIR) published by Research
India Publications
10. The International Journal of Cognitive Informatics and Natural Intelligence (IJCINI) pub-
lished by Idea Group Publishing (Idea Group Inc., IGI Global)
11. IEEE Computational Intelligence Magazine published by IEEE Computational Intelligence
Society
12. Journal of Optimization Theory and Applications published by Plenum Press and Springer
Netherlands
13. Computational Intelligence published by Wiley Periodicals, Inc.
14. International Journal of Swarm Intelligence and Evolutionary Computation (IJSIEC) pub-
lished by Ashdin Publishing (AP)
Tasks T28
75. Implement a general steady-state Evolutionary Algorithm. You may use Listing 56.11 as a blueprint and create a new class derived from EABase (given in ?? on page ??) which provides the steady-state functionality described in Paragraph 28.1.4.2.1 on page 266.
[20 points]
76. Implement a general Evolutionary Algorithm with archive-based elitism. You may use Listing 56.11 as a blueprint and create a new class derived from EABase (given in ?? on page ??) which provides the archive-based elitism described in Section 28.1.4.4 on page 267.
[20 points]
77. Implement the linear ranking selection scheme given in Section 28.4.5 on page 300 for your version of Evolutionary Algorithms. You may use Listing 56.14 as a blueprint and implement the interface ISelectionAlgorithm given in Listing 55.13 in a new class.
[20 points]
Chapter 29
Genetic Algorithms
29.1 Introduction
Genetic Algorithms¹ (GAs) are a subclass of Evolutionary Algorithms where (a) the genotypes g of the search space G are strings of primitive elements (usually all of the same type) such as bits, integers, or real numbers, and (b) search operations such as mutation and crossover directly modify these genotypes. Because of the simple structure of the search spaces of Genetic Algorithms, often a non-identity mapping is used as genotype-phenotype mapping to translate the elements g ∈ G to candidate solutions x ∈ X [1075, 1220, 1250, 2931].
Genetic Algorithms are the original prototype of Evolutionary Algorithms and adhere to the description given in Section 28.1.1. They provide search operators which try to closely copy sexual and asexual reproduction schemes from nature. In such sexual search operations, the genotypes of the two parents are recombined. In asexual reproduction, mutations are the only changes that occur.
29.1.1 History
The roots of Genetic Algorithms go back to the mid-1950s, when biologists like Barricelli [220, 221, 222, 223] and the computer scientist Fraser [941, 988, 989] began to apply computer-aided simulations in order to gain more insight into genetic processes and natural evolution and selection. Bremermann [407] and Bledsoe [323, 324, 325, 326] used evolutionary approaches based on binary string genomes for solving inequalities, for function optimization, and for determining the weights in neural networks in the early 1960s. At the end of that decade, important research on such search spaces was contributed by Bagley [180] (who introduced the term Genetic Algorithm), Rosenberg [2331], Cavicchio, Jr. [510, 511], and Frantz [985], all under the advice of Holland at the University of Michigan. As a result of Holland's work [1247–1250], Genetic Algorithms could be formalized as a new approach for problem solving and finally became widely recognized and popular. Just as important was the work of De Jong [711] on benchmarking GAs for function optimization from 1975, preceded by Hollstien's PhD thesis [1258]. Today, there are many applications in science, economy, and research and development [564, 2232] that can be tackled with Genetic Algorithms.
It should further be mentioned that, because of the close relation to biology and since Genetic Algorithms were originally applied to single-objective optimization, the objective functions f here are often referred to as fitness functions. This is a historically grown misnomer which should not be mixed up with the fitness assignment processes discussed in Section 28.3 on page 274 and the fitness values v used in the context of this book.
¹ http://en.wikipedia.org/wiki/Genetic_algorithm [accessed 2007-07-03]
29.2 Genomes in Genetic Algorithms
Most of the terminology which we have defined in Part I and used throughout this book stems from the GA sector. The search spaces G of Genetic Algorithms, for instance, are referred to as genomes and their elements are called genotypes because of their historical development.
Figure 29.1: A Sketch of a DNA Sequence (a rough simplification: the DNA with its Thymine, Adenine, Guanine, and Cytosine base pairs corresponds to specific bit strings, i.e., genotypes g ∈ G, in the search space G of all bit strings of a given length).
Genotypes in nature encompass the whole hereditary information of an organism encoded in the DNA, sketched in Figure 29.1. The DNA is a string of base pairs that encodes the phenotypical characteristics of the creature it belongs to. Like their natural prototypes, the genomes in Genetic Algorithms are strings, linear sequences of certain data types [1075, 1253, 1912]. Because of the linear structure, these genotypes are also often called chromosomes. Originally, Genetic Algorithms worked on bit strings of fixed length, as sketched in Figure 29.1. Today, a string chromosome can either be a fixed-length tuple (Equation 29.1) or a variable-length list (Equation 29.3) of elements.
29.2.1 Classification According to Length
In the first case, the loci i of the genes g[i] are constant and hence, each gene can have a fixed meaning. Then, the tuples may contain elements of different types G_i.

G = {(g[0], g[1], .., g[n − 1]) : g[i] ∈ G_i ∀i ∈ 0..n − 1} (29.1)
G = G_0 × G_1 × .. × G_{n−1} (29.2)

This is not given in variable-length string genomes. Here, the positions of the genes may shift when the reproduction operations are applied. Thus, all elements of such genotypes must have the same type G_T.

G = {lists g : g[i] ∈ G_T ∀ 0 ≤ i < len(g)} (29.3)
G = G_T* (29.4)
29.2.2 Classification According to Element Type
The most common string chromosomes are (1) bit strings, (2) vectors of integer numbers, and (3) vectors of real numbers. Genetic Algorithms with numeric vector genomes, i.e., where G ⊆ R^n, are called real-coded [1509]. GAs with a bit string genome are referred to as binary-coded if no re-coding genotype-phenotype mapping is applied. If, for example, gray coding is used as discussed in Example E28.9, we would refer to the GA as gray-coded.
29.2.3 Classification According to Meaning of Loci
Let us shortly recapitulate the structure of the elements g of the search space G given in Section 4.1. A gene (see Definition D4.3 on page 82) is the basic informational unit in a genotype g. In biology, a gene is a segment of nucleic acid that contains the information necessary to produce a functional RNA product in a controlled manner. An allele (see Definition D4.4) is a value of a specific gene, in nature and in EAs alike. The locus (see Definition D4.5) is the position where a specific gene can be found in a chromosome.
As stated in Section 29.2.1, in fixed-length chromosomes there may be a specific meaning assigned to each locus in the genotype (if direct genotype-phenotype mappings are applied as outlined in Section 28.2.1). In Example E17.1 on page 177, for instance, gene 1 stands for the color of the dinosaur's neck whereas gene 4 codes the shape of its head. Although this was an example from nature, in a fixed-length string genome, each locus may have a direct connection to an element of the phenotype.
In many applications, this connection of locus to meaning is loose. When, for example, solving the Traveling Salesman Problem given in Example E2.2 on page 45, each gene may be an integer number identifying a city within China and the genotype, a permutation of these identifiers, would represent the order in which they are visited. Here, the genes do not code for distinct and different features of the phenotypes. Instead, the phenotypic features have more uniform meanings. Yet, the sequence of the genes is still important.
Messy genomes (see Section 29.6) were introduced to improve locality by linkage learning: Here, the genes do not only code for a specific phenotypic element, but also for the position in the phenotype where this element should be placed.
Finally, there exist indirect genotype-phenotype mappings (Section 28.2.2) and encodings where there is no relation between the location of a gene and its meaning or interpretation. A good example for the latter are Algorithmic Chemistries which we will discuss in ?? on page ??. In their use in linear Genetic Programming developed by Lasarczyk and Banzhaf [207, 1684, 1685], each gene stands for an instruction. Unlike normal computer programs, the Algorithmic Chemistry programs in execution are not interpreted sequentially. Instead, at each step, the processor randomly chooses one instruction from the program. Hence, the meaning of a gene (an instruction) here only depends on its allelic state (its value) and not on its position (its locus) inside the chromosome.
29.2.4 Introns
Besides the functional genes and their alleles, there are also parts of natural genomes which have no (obvious) function [1072, 2870]. The American biochemist Gilbert [1056] coined the term intron² for such parts. In some representations in Evolutionary Algorithms, similar structures can be observed, especially in variable-length genomes.
Definition D29.1 (Intron). An intron is a part of a genotype g ∈ G that does not contribute to the phenotype x = gpm(g), i.e., plays no role during the genotype-phenotype mapping.
² http://en.wikipedia.org/wiki/Intron [accessed 2007-07-05]
Biological introns have often been thought of as junk DNA or old code, i.e., parts of the genome that were translated to proteins in the evolutionary past but are not used anymore. Currently though, many researchers assume that introns are maybe not as useless as initially assumed [660]. Instead, they seem to provide support for efficient splicing, for instance.
Similarly, in some cases, the role of introns in Genetic Algorithms is mysterious as well. They represent a form of redundancy which may have positive as well as negative effects, as outlined in Section 16.5 on page 174 and ??.
29.3 Fixed-Length String Chromosomes
Especially widespread in Genetic Algorithms are search spaces based on fixed-length chromosomes. The properties of their reproduction operations such as crossover and mutation are well known and an extensive body of research on them is available [1075, 1253].
29.3.1 Creation: Nullary Reproduction
29.3.1.1 Fully-Random Genotypes
The creation of random fixed-length string individuals means to create a new tuple of the structure defined by the genome and initialize it with random values. In reference to Equation 29.1 on page 326, we could roughly describe this process with Equation 29.5 for fixed-length genomes where the elementary types G_i from G_0 × G_1 × .. × G_{n−1} = G are finite sets and their j-th elements can be accessed with an index G_i[j]. Although creation is a nullary reproduction operation, i.e., one which does not receive any genotypes g ∈ G as parameter, in Equation 29.5 we provide n, the dimension of the search space, as well as the search space G to the operation in order to keep the definition general. In Algorithm 29.1, we show how this general equation can be implemented for fixed-length bit-string genomes, i.e., we define an algorithm createGAFLBin(n) ≡ createGAFL(n, B^n) which creates bit strings g ∈ G of length n (G = B^n).

createGAFL(n, G) ≡ (g[1], g[2], .., g[n]) : g[i] = G_i[randomUni[0, len(G_i))] ∀i ∈ 1..n (29.5)
Algorithm 29.1: g ← createGAFLBin(n)
Input: n: the length of the individuals
Data: i ∈ 1..n: the running index
Data: v ∈ B: the bit value to append
Output: g: a new bit string of length n with random contents
1 begin
2   i ← n
3   g ← ()
4   while i > 0 do
5     if randomUni[0, 1) < 0.5 then v ← 0
6     else v ← 1
7     i ← i − 1
8     g ← addItem(g, v)
9   return g
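Algorithm 29.1 condenses to a few lines in a high-level language. The following Python sketch (the snake_case name and the list-of-integers representation are our choices) creates such a random fixed-length bit string:

```python
import random

def create_ga_fl_bin(n):
    """Create a random bit string of length n, as in Algorithm 29.1."""
    return [1 if random.random() < 0.5 else 0 for _ in range(n)]
```

A call like create_ga_fl_bin(8) might, for example, yield [0, 1, 1, 0, 0, 1, 0, 1].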
29.3.1.2 Permutations
Besides arbitrary vectors over some vector space (be it B^n, N_0^n, or R^n), Genetic Algorithms can also work on permutations, as defined in Section 51.6 on page 643. If G is the space of all possible permutations of the n natural numbers from 0 to n − 1, i.e., G = Π(0..n − 1), we can create a new genotype g ∈ G using Algorithm 29.2, which is implemented in Listing 56.37 on page 804.
Algorithm 29.2: g ← createGAPerm(n)
Input: n: the number of elements to permutate
Data: tmp: a temporary list
Data: i, j: indices into tmp and g
Output: g: the new random permutation of the numbers 0..n − 1
1 begin
2   tmp ← (0, 1, . . . , n − 2, n − 1)
3   i ← n
4   g ← ()
5   while i > 0 do
6     j ← randomUni[0, i)
7     i ← i − 1
8     g ← addItem(g, tmp[j])
9     tmp[j] ← tmp[i]
10  return g
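Algorithm 29.2 is essentially a Fisher–Yates-style shuffle: it repeatedly draws one of the still unused numbers and closes the resulting gap with the last unused element. A Python sketch, with naming ours:

```python
import random

def create_ga_perm(n):
    """Create a random permutation of 0..n-1, following Algorithm 29.2."""
    tmp = list(range(n))         # the numbers 0..n-1, not yet used
    g = []
    i = n
    while i > 0:
        j = random.randrange(i)  # j <- randomUni[0, i)
        i -= 1
        g.append(tmp[j])         # append the chosen element to the offspring
        tmp[j] = tmp[i]          # overwrite the gap with the last unused element
    return g
```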
From a more general perspective, we can consider the problem space X to be the space X = Π(S) of all permutations of the elements s_i : i ∈ 0..n − 1 of a given set S. Usually, of course, S = 0..n − 1 and s_i = i ∀i ∈ 0..n − 1. The genotypes g in the search space G are vectors of natural numbers from 0 to n − 1. The mapping of these vectors to the permutations can be done implicitly or explicitly according to the following Equation 29.6.

gpm_Π(g, S) = x : x[i] = s_g[i] ∀i ∈ 0..n − 1 (29.6)

In other words, the gene at locus i in the genotype g ∈ G defines which element s from the set S should appear at position i in the permutation x ∈ X = Π(S). More precisely, the allelic value g[i] of the gene is the index of the element s_g[i] ∈ S which will appear at that position in the phenotype x.
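Equation 29.6 translates directly into code. In the following Python sketch (the function name is ours), each allelic value g[i] is used as an index into the element set S:

```python
def gpm_perm(g, S):
    """Equation 29.6: the element S[g[i]] appears at position i of the phenotype x."""
    return [S[allele] for allele in g]
```

For S = ["Hefei", "Beijing", "Shanghai"] and g = (2, 0, 1), the phenotype is the visiting order ["Shanghai", "Hefei", "Beijing"].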
29.3.2 Mutation: Unary Reproduction
Mutation is an important method for preserving the diversity of the candidate solutions by
introducing small, random changes into them.
Figure 29.2: Value-altering mutation of string chromosomes. (Fig. 29.2.a: single-gene mutation; Fig. 29.2.b: consecutive multi-gene mutation (i); Fig. 29.2.c: uniform multi-gene mutation (ii); Fig. 29.2.d: complete mutation.)
29.3.2.1 Single-Point Mutation
In fixed-length string chromosomes, this can be achieved by randomly modifying the value (allele) of a gene, as illustrated in Fig. 29.2.a. If the genome consists of bit strings, this single-point mutation is called single-bit flip mutation. In this case, this operation creates a neighborhood where all points which have a Hamming distance of exactly one are adjacent (see Definition D4.10 on page 85).

adjacent(g2, g1) ⇔ dist_ham(g2, g1) = 1 (29.7)
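For bit strings, single-bit flip mutation can be sketched as follows (a Python sketch, naming ours); parent and offspring always have a Hamming distance of exactly one, as demanded by Equation 29.7:

```python
import random

def mutate_single_bit_flip(gp):
    """Toggle the bit at one randomly chosen locus of a copy of the parent."""
    g = list(gp)                  # copy the parent genotype
    i = random.randrange(len(g))  # pick a random locus
    g[i] ^= 1                     # flip the allele at that locus
    return g
```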
29.3.2.2 Multi-Point Mutation
It is also possible to perform a multi-point mutation where 0 < n ≤ len(gp) locations in the genotype gp are changed at once (Fig. 29.2.b and Fig. 29.2.c). If parts of the genotype interact epistatically (see Chapter 17), such a multi-point mutation may be the only way to escape local optima.
In this case, the method sketched in Fig. 29.2.c is far more efficient than the one in Fig. 29.2.b. This becomes clear when assuming that the search space was the bit strings of length n, i.e., G = B^n. The former method, illustrated in Fig. 29.2.c, creates a neighborhood as defined in Equation 29.8: depending on the probability P(l) of modifying exactly 0 < l ≤ n elements, all individuals in Hamming distance of exactly l become adjacent (with a probability of exactly P(l)). The method sketched in Fig. 29.2.b, on the other hand, can only put strings into adjacency which have a Hamming distance of l and where the modified bits additionally form a consecutive sequence. Of course, similar considerations would hold for any kind of string-based search space.

adjacent_{P(l)}(g2, g1) ⇔ dist_ham(g2, g1) = l (29.8)

If a multi-point mutation operator is applied, the number of bits to be modified at once can be drawn from a (bounded) exponential distribution (see Section 53.4.3 on page 682).
This would increase the probability P(l) that only few genes are changed (l is low), which makes sense since one of the basic assumptions in Genetic Algorithms is that there exists at least some locality (Definition D14.1 on page 162) in the search space, i.e., that some small changes in the genotype also lead to small changes in the phenotype. However, bigger modifications (l close to n in search spaces of dimension n) would still be possible and happen from time to time with such a choice.
Algorithm 29.3: g ← mutateGAFLBinRand(gp, n)
Input: gp: the parent individual
Input: n ∈ 1..len(gp): the highest possible number of bits to change
Data: i, j: temporary variables
Data: q: the first len(gp) natural numbers
Output: g: the mutated copy of the parent genotype
1 begin
2   g ← gp
3   q ← (0, 1, 2, .., len(g) − 1)
4   repeat
5     i ← randomUni[0, len(q))
6     j ← q[i]
7     q ← deleteItem(q, j)
8     g[j] ← ¬g[j]
9   until (len(q) = 0) ∨ (randomUni[0, 1) < 0.5)
10  return g
In Algorithm 29.3, we provide an implementation of the mutation operation sketched in Fig. 29.2.c which modifies a roughly exponentially distributed number of genes (bits) in a genotype from a bit-string based search space (G ⊆ B^n). It does so by randomly selecting loci and toggling the alleles (values) of the genes (bits) at these positions. It stops if it has toggled all bits (i.e., all loci have been used) or a random number uniformly distributed in [0, 1) becomes lower than 0.5. The latter condition has the effect that only one gene is modified with probability 0.5. With probability (1 − 0.5) · 0.5 = 0.25, exactly two genes are changed. With probability (1 − 0.5)² · 0.5 = 0.125, the operator modifies three genes, and so on.
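This stopping rule (one gene with probability 0.5, two with 0.25, and so on) can be sketched in Python as follows; the function name follows the pseudocode, the rest is our choice:

```python
import random

def mutate_ga_fl_bin_rand(gp):
    """Toggle a roughly exponentially distributed number of distinct bits,
    following the idea of Algorithm 29.3."""
    g = list(gp)
    loci = list(range(len(g)))    # loci that have not been toggled yet
    while True:
        j = loci.pop(random.randrange(len(loci)))
        g[j] ^= 1                 # toggle the bit at locus j
        # stop when all loci are used, or with probability 0.5 per round
        if not loci or random.random() < 0.5:
            return g
```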
29.3.2.3 Vector Mutation
In real-coded genomes, it is common to instead modify all genes as sketched in Fig. 29.2.d. This would make little sense in a binary-coded search space, since the inversion of all bits means to completely discard all information of the parent individual.
29.3.2.4 Mutation: Binary and Real Genomes
Changing a single gene in binary-coded chromosomes, i.e., encodings where each gene is a single bit, means that the value at a given locus is simply inverted as sketched in Fig. 29.3.a. This operation is referred to as bit-flip mutation.
For real-encoded genomes, modifying an element g[i] can be done by replacing it with a number drawn from a normal distribution (see Section 53.4.2) with expected value g[i] and some standard deviation σ, i.e., g[i]_new ∼ N(g[i], σ²), which we illustrate in Fig. 29.3.b and outline in detail in Section 30.4.1 on page 364.
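A minimal Python sketch of this mutation for a single randomly chosen element (the function name and the default value of σ are our assumptions):

```python
import random

def mutate_real(gp, sigma=0.1):
    """Replace one randomly chosen gene g[i] by a sample from N(g[i], sigma^2)."""
    g = list(gp)
    i = random.randrange(len(g))
    g[i] = random.gauss(g[i], sigma)  # new allele centered at the old one
    return g
```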
Figure 29.3: Mutation in binary- and real-coded GAs. (Fig. 29.3.a: mutation in binary genomes; Fig. 29.3.b: mutation in real genomes, sampling in the vicinity g[i] ± σ of the old allele g[i].)
29.3.3 Permutation: Unary Reproduction
The permutation operation is an alternative mutation method where the alleles of two genes are exchanged as sketched in Figure 29.4. This, of course, only makes sense if all genes have similar data types and the location of a gene has influence on its meaning. Permutation is, for instance, useful when solving problems that involve finding an optimal sequence of items, like the Traveling Salesman Problem mentioned in Example E2.2. Here, a genotype g could encode the sequence in which the cities are visited. Exchanging two alleles then equals switching two cities in the route.
Figure 29.4: Permutation applied to a string chromosome.
29.3.3.1 Single Permutation
In Algorithm 29.4, which is implemented in Listing 56.38 on page 806, we provide a simple unary search operation which exchanges the alleles of two genes, exactly as sketched in Figure 29.4. If the genotype was a proper permutation obeying the conditions given in Definition D51.13 on page 643, these conditions will hold for the new offspring as well.
Algorithm 29.4: g ← mutateSinglePerm(gp)
Input: gp: the parental genotype
Data: i, j: indices into g
Data: t: a temporary variable
Output: g: the new, permutated genotype
1 begin
2   g ← gp
3   i ← randomUni[0, len(g))
4   repeat
5     j ← randomUni[0, len(g))
6   until i ≠ j
7   t ← g[i]
8   g[i] ← g[j]
9   g[j] ← t
10  return g
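A Python sketch of Algorithm 29.4 (naming ours); since only the values at two loci are exchanged, a valid permutation remains a valid permutation:

```python
import random

def mutate_single_perm(gp):
    """Exchange the alleles of two distinct loci, as in Algorithm 29.4."""
    g = list(gp)
    i, j = random.sample(range(len(g)), 2)  # two distinct loci
    g[i], g[j] = g[j], g[i]                 # swap the alleles
    return g
```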
29.3.3.2 Multiple Permutations
Different from Algorithm 29.4, Algorithm 29.5 (implemented in Listing 56.39 on page 808) repetitively exchanges elements. The number of elements which are swapped is roughly exponentially distributed (see Section 53.4.3). This operation also preserves the validity of the conditions for permutations given in Definition D51.13 on page 643.
Algorithm 29.5: g ← mutateMultiPerm(gp)
Input: gp: the parental genotype
Data: i, j: indices into g
Data: t: a temporary variable
Output: g: the new, permutated genotype
1 begin
2   g ← gp
3   repeat
4     i ← randomUni[0, len(g))
5     repeat
6       j ← randomUni[0, len(g))
7     until i ≠ j
8     t ← g[i]
9     g[i] ← g[j]
10    g[j] ← t
11  until randomUni[0, 1) ≥ 0.5
12  return g
29.3.4 Crossover: Binary Reproduction
There is an incredibly wide variety of crossover operators for Genetic Algorithms. In his 2006 book [1159], Gwiazda mentions 11 standard, 66 binary-coded, and 89 real-coded crossover operators alone. Here, we can only discuss a small fraction of those.
Amongst all Evolutionary Algorithms, some of the recombination operations which probably come closest to the natural paragon can be found in Genetic Algorithms. Figure 29.5 outlines the most basic methods to recombine two string chromosomes, the so-called crossover, which is performed by swapping parts of two genotypes. Although these still cannot nearly resemble the idea of natural crossover, they at least mirror the school book view on it.
Figure 29.5: Crossover (recombination) operators for fixed-length string genomes. (Fig. 29.5.a: Single-point Crossover (SPX, 1-PX); Fig. 29.5.b: Two-point Crossover (TPX, 2-PX); Fig. 29.5.c: Multi-point Crossover (MPX, k-PX); Fig. 29.5.d: Uniform Crossover (UX), where each allele is chosen with probability p = 0.5 from either parent.)
29.3.4.1 Single-Point Crossover (SPX, 1-PX)
When performing single-point crossover (SPX³, 1-PX, [1159]), both parental chromosomes are split at a randomly determined crossover point. Subsequently, a new child genotype is created by appending the second part of the second parent to the first part of the first parent as illustrated in Fig. 29.5.a. With crossover, it is possible that two offspring are created at once from the two parents. The second offspring is shown in the long parentheses. Doing so is, however, not mandatory.
29.3.4.2 Two-Point Crossover (TPX)
In two-point crossover (TPX, 2-PX, sketched in Fig. 29.5.b), both parental genotypes are split at two points and a new offspring is created by using parts number one and three from the first, and the middle part from the second parent chromosome. The second possible offspring of this operation is again displayed in parentheses.
29.3.4.3 Multi-Point Crossover (MPX, k-PX)
In Algorithm 29.6, the generalized form of this technique is specified: the k-point crossover operation (k-PX, [1159]), also called multi-point crossover (MPX). The operator takes the two parent genotypes gp1 and gp2 from an n-dimensional search space as well as the number 0 < k < n of crossover points as parameters.
By setting the parameter k to 1 or 2, single-point and two-point crossover can be emulated, and with k = n − 1, MPX becomes uniform crossover (see Section 29.3.4.4). It furthermore is possible to choose k randomly from 1..n − 1 for each application of the operator, maybe by picking it with a bounded exponential distribution. This way, many different recombination operations can be performed with the same algorithm.
³ This abbreviation is also used for simplex crossover, see ??.
Algorithm 29.6: g ← recombineMPX(gp1, gp2, k)
Input: gp1, gp2 ∈ G: the parental genotypes
Input: [implicit] n: the dimension of the search space G
Input: k: the number of crossover points
Data: i: a counter variable
Data: cp: a randomly chosen crossover point candidate
Data: CP: the list of crossover points
Data: s, e: the segment start and end indices
Data: b: a Boolean variable for choosing the segment
Data: gu ∈ G: the currently used parent
Output: g ∈ G: a new genotype
1 begin
   // find k crossover points from 0 to n − 1
2   CP ← (n)
3   for i ← 1 up to k do
4     repeat
5       cp ← randomUni[0, n − 1) + 1
6     until cp ∉ CP
7     CP ← addItem(CP, cp)
8   CP ← sortAsc(CP, <)
   // perform the crossover by copying sub-strings
9   g ← ()
10  s ← 0
11  b ← true
12  gu ← gp1
13  for i ← 0 up to k do
14    e ← CP[i]
15    if b then gu ← gp1
16    else gu ← gp2
      // add the substring of length e − s from gu starting at index s to g
17    g ← appendList(g, subList(gu, s, e − s))
18    b ← ¬b
19    s ← e
20  return g
In Algorithm 29.6, first k different crossover points cp are picked according to the uniform distribution and the offspring is initialized as an empty list. Then, the algorithm iterates through the sorted list CP of crossover points. It copies string sub-sequences between each two crossover points. Depending on whether it is in an odd (b = true) or even (b = false) iteration, either a portion from the first parent gp1 or the second parent gp2 is copied. Notice that n, the dimension of the search space, is also part of the list CP since we need to copy the sub-sequence between the last crossover point and the end of one of the parent genotypes as well.
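The segment-copying logic of Algorithm 29.6 can be sketched compactly in Python (naming ours; the book's list CP corresponds to cps here, with n appended to close the final segment):

```python
import random

def recombine_mpx(gp1, gp2, k):
    """k-point crossover following Algorithm 29.6: choose k distinct crossover
    points, then copy alternating segments from the two parents."""
    n = len(gp1)
    cps = sorted(random.sample(range(1, n), k))  # k distinct points in 1..n-1
    cps.append(n)                # n closes the final segment
    g, s, use_first = [], 0, True
    for e in cps:
        src = gp1 if use_first else gp2
        g.extend(src[s:e])       # copy the segment [s, e) from the current parent
        use_first = not use_first
        s = e
    return g
```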
An example for the application of this algorithm is illustrated in Fig. 29.5.c. For fixed-length strings, the crossover points for both parents are always identical whereas for variable-length strings (discussed in Section 29.4 and sketched in Fig. 29.8.c on page 341), the algorithm could be redesigned to use different crossover points for the two parents.
29.3.4.4 Uniform Crossover (UX)
Uniform crossover (UX [1159, 2563], sketched in Fig. 29.5.d, specified in Algorithm 29.7, and exemplarily implemented in Listing 56.36 on page 803) follows another scheme: Here, for each locus in the offspring chromosome, the allelic value is chosen with the same probability (50%) either from the same locus of the first parent or from the same locus of the second parent. Dominant (discrete) recombination, which is introduced in Section 30.3.1 on page 362, is a generalization of UX to a ρ-ary operator, i.e., to an operator which creates one child from ρ parents.
Algorithm 29.7: g ← recombineUX(gp1, gp2)
Input: gp1, gp2 ∈ G: the parental genotypes
Data: i: a counter variable
Output: g ∈ G: a new genotype
1 begin
2   g ← gp1
3   for i ← 0 up to len(g) − 1 do
4     if randomUni[0, 1) < 0.5 then g[i] ← gp2[i]
5   return g
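Algorithm 29.7 fits in a single line of Python (naming ours):

```python
import random

def recombine_ux(gp1, gp2):
    """Uniform crossover: each allele is taken from either parent
    with probability 0.5."""
    return [a if random.random() < 0.5 else b for a, b in zip(gp1, gp2)]
```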
29.3.4.5 (Weighted) Average Crossover
The previously discussed crossover methods have primarily been developed for bit string genomes G = B^n. If they combine two strings g1 and g2, the offspring will necessarily be located on one of the corners of the hypercube described by both parents [2099] and sketched in Figure 29.6. In binary-coded Genetic Algorithms, this indeed makes sense.
For real-valued genomes, we would like to explore the volume inside the hypercube as well. Therefore, we can use a (weighted) average of the two parent vectors as offspring. One way to do so is Equation 29.9 for search spaces G ⊆ R^n. In this equation, each element of the two parents g1 and g2 is combined by a weighted average to g_new. For pure average crossover, the weight γ will always be set to γ = 0.5. If the average is randomly weighted, as weight γ we can use
1. a new random number ru uniformly distributed in (0, 1),
2. 0.5 ± (0.5 · ru^n) where n is a natural number in order to prefer points which are somewhat centered between the original coordinates and where for ±, either + or − is chosen randomly (as done in Listing 56.29 on page 792, for example), or
Figure 29.6: The points sampled by different crossover operators (the possible offspring of 1-PX, 2-PX, MPX, and UX are corners of the hypercube spanned by the parents (g1[0], g1[1], g1[2]) and (g2[0], g2[1], g2[2]) in G ⊆ R³, whereas Weighted Average Crossover etc. can sample its interior).
3. an approximately normally distributed random number with mean µ = 0.5, again within the bounds (0, 1) for each locus i.

g_new[i] = γ · g1[i] + (1 − γ) · g2[i] ∀i ∈ 1..n (29.9)

The intermediate recombination operator discussed in Section 30.3.2 on page 362 is an extension of average crossover to ρ parents.
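Equation 29.9 can be sketched as follows (the function name, the per-locus drawing of γ, and the weighted flag are our choices). With weighted=False, the operator reduces to pure average crossover with γ = 0.5:

```python
import random

def recombine_average(gp1, gp2, weighted=True):
    """(Weighted) average crossover following Equation 29.9:
    g_new[i] = gamma * gp1[i] + (1 - gamma) * gp2[i]."""
    g = []
    for a, b in zip(gp1, gp2):
        gamma = random.random() if weighted else 0.5
        g.append(gamma * a + (1.0 - gamma) * b)
    return g
```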
29.3.4.6 Homologous Crossover (HRO, HX)
According to Banzhaf et al. [209], natural crossover is very restricted and usually exchanges only genes that express the same functionality and are located at the same positions (loci) on the chromosomes.
Definition D29.2 (Homology). In genetics, homology⁴ of protein-coding DNA sequences means that they code for the same protein, which may indicate common functionality. Homologous chromosomes⁵ are either chromosomes in a biological cell that pair during meiosis or non-identical chromosomes which code for the same functional feature by containing similar genes in different allelic states.
In other words, homologous genetic material is very similar and in nature, only such material is exchanged in sexual reproduction. Park et al. [2128] propose that a similar approach should be useful in Genetic Algorithms. As basis for their analysis they take multi-point crossover (MPX, k-PX) as introduced here in Algorithm 29.6. In MPX, k > 0 crossover points are chosen randomly and subsequences between two points (as well as between the start of the string and the first crossover point and between the last crossover point and the end of the parent strings) are exchanged.
Park et al. [2128] point out that such an exchange is likely to destroy schemas which extend over crossover points while leaving only those completely embedded in a sequence subject to exchange intact. They thus make the exchange of genetic information conditional:
⁴ http://en.wikipedia.org/wiki/Homology_(biology) [accessed 2008-06-17]
⁵ http://en.wikipedia.org/wiki/Homologous_chromosome [accessed 2008-06-17]
their HRO (Homologous Recombination Operator, HX) always chooses the subsequence from the first parent gp1 if
1. it would be the turn of gp1 anyway (b = true in Algorithm 29.6 on Line 15), if
2. the exchanged sequence would be too short, i.e., its length would be below a given threshold w, or if
3. the sequences are not different enough. As measure for difference, the Hamming distance dist_ham divided by the sequence length (e − s) is used. If this value is above a threshold u, again the sequence from gp1 is used.
Homologous crossover can be implemented by changing the condition in the alternative in Line 15 in Algorithm 29.6 to Equation 29.10.

b ∨ ((e − s) < w) ∨ (dist_ham(subList(g_p1, s, e − s), subList(g_p2, s, e − s)) / (e − s) ≥ u)   (29.10)
Besides the schema preservation which was the reason for the invention of HRO, another argument for the measured better performance of homologous crossover [2128] can be found in the discussion on similarity extraction given in Section 29.5.6 on page 346.
The reader should be informed that in Point 3 of the enumeration and in Equation 29.10, we deviate from the suggestions given in [1159, 2128]. There, combining the two subsequences with xor and then counting the number of 1-bits is suggested as similarity measure. Actually, like the Hamming distance, it is a dissimilarity measure. Therefore, in Equation 29.10, we permit an information exchange between the parental chromosomes only if it is below a threshold u, while using the genes from the first parent if it is above or equal to the threshold. Either the quoted works just contain a small typo regarding that issue or the author of this book misunderstood the concept of homology...
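To make the decision rule concrete, the condition of Equation 29.10 can be sketched as follows. This is our own illustrative sketch, not code from [1159, 2128]; the method and parameter names (useFirstParent, hammingDistance) are ours.

```java
public class HomologousCrossover {

  // Hamming distance between two equal-length boolean gene sequences.
  static int hammingDistance(boolean[] a, boolean[] b) {
    int d = 0;
    for (int i = 0; i < a.length; i++) {
      if (a[i] != b[i]) d++;
    }
    return d;
  }

  // Decide whether the subsequence [s, s+len) should be taken from the
  // first parent g1 (Equation 29.10): take it from g1 if it would be g1's
  // turn anyway (b), if the sequence is shorter than the threshold w, or
  // if the relative Hamming distance of the two subsequences is >= u.
  static boolean useFirstParent(boolean b, boolean[] g1, boolean[] g2,
                                int s, int len, int w, double u) {
    if (b || len < w) return true;
    int d = 0;
    for (int i = s; i < s + len; i++) {
      if (g1[i] != g2[i]) d++;
    }
    return ((double) d / len) >= u;
  }

  public static void main(String[] args) {
    boolean[] g1 = { true, true, false, false };
    boolean[] g2 = { true, false, false, true };
    // the sequences differ in 2 of 4 loci: relative distance 0.5 >= u = 0.3
    System.out.println(useFirstParent(false, g1, g2, 0, 4, 2, 0.3)); // true
    // similar enough for an exchange when u = 0.6
    System.out.println(useFirstParent(false, g1, g2, 0, 4, 2, 0.6)); // false
  }
}
```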
29.3.4.7 Random Crossover

In his 1995 paper [1466], Jones introduces an experiment to test whether a crossover operator is efficient or not. His random crossover or headless chicken crossover [101] (g_3, g_4) = recombine_R(g_1, g_2) uses an internal crossover operation recombine_i and a genotype creation routine create. It receives two parent individuals g_1 and g_2 and produces two offspring g_3 and g_4. However, it does not perform real crossover. Instead, it internally creates two new (random) genotypes g_5, g_6 and uses the internal operator recombine_i to recombine each of them with one of the parents.
(g_3, g_4) = recombine_R(g_1, g_2) ≡ (recombine_i(g_1, create()), recombine_i(g_2, create()))   (29.11)
If the performance of a Genetic Algorithm which uses recombine_i as crossover operator is not better than with recombine_R using recombine_i, then the efficiency of recombine_i is questionable. In the experiments of Jones [1466] with recombine_i = TPX, two-point crossover outperformed random crossover only in problems where there are well-defined building blocks, while not leading to better results in problems without this feature.
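Equation 29.11 can be sketched as follows for bit-string genomes; single-point crossover stands in here for recombine_i, and all names are our own.

```java
import java.util.Arrays;
import java.util.Random;

public class HeadlessChicken {

  static final Random RND = new Random(42);

  // Internal operator: single-point crossover, returning one child that
  // takes the head of a and the tail of b.
  static boolean[] recombineI(boolean[] a, boolean[] b) {
    int cut = 1 + RND.nextInt(a.length - 1);
    boolean[] child = Arrays.copyOf(a, a.length);
    System.arraycopy(b, cut, child, cut, a.length - cut);
    return child;
  }

  // Nullary creation: a new random genotype of length l.
  static boolean[] create(int l) {
    boolean[] g = new boolean[l];
    for (int i = 0; i < l; i++) g[i] = RND.nextBoolean();
    return g;
  }

  // Equation 29.11: recombine each parent with a freshly created genotype
  // instead of with the other parent.
  static boolean[][] recombineR(boolean[] g1, boolean[] g2) {
    return new boolean[][] {
      recombineI(g1, create(g1.length)),
      recombineI(g2, create(g2.length))
    };
  }

  public static void main(String[] args) {
    boolean[][] children = recombineR(create(8), create(8));
    System.out.println(Arrays.toString(children[0]));
    System.out.println(Arrays.toString(children[1]));
  }
}
```

If a Genetic Algorithm using recombineI directly does not beat one using recombineR, the utility of recombineI as a recombination operator is doubtful.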
29.4 Variable-Length String Chromosomes
Variable-length genomes for Genetic Algorithms were first proposed by Smith in his PhD thesis [2536]. There, he introduced a new variant of classifier systems (see Chapter 35) with the goal of evolving programs for playing poker [2240, 2536].
29.4.1 Creation: Nullary Reproduction
Variable-length strings can be created by first randomly drawing a length l > 0 and then creating a list of that length filled with random elements. l could be drawn from a bounded exponential distribution.

createGAVL(G_T) = createGAFL(l, G_T^l), l ∼ exp   (29.12)
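A possible sketch of such a creation operator for bit strings, with the length drawn from an exponential distribution truncated to [1, maxLen] by redrawing; the truncation strategy and all names are our own assumptions.

```java
import java.util.Random;

public class CreateVL {

  // Draw a length from an exponential distribution with the given mean,
  // bounded to [1, maxLen] by redrawing out-of-range values.
  static int drawLength(Random rnd, double mean, int maxLen) {
    while (true) {
      int l = (int) Math.ceil(-mean * Math.log(1.0 - rnd.nextDouble()));
      if (l >= 1 && l <= maxLen) return l;
    }
  }

  // Create a variable-length bit string: first draw the length l, then
  // fill a new string of that length with random alleles.
  static boolean[] createGAVL(Random rnd, double meanLen, int maxLen) {
    boolean[] g = new boolean[drawLength(rnd, meanLen, maxLen)];
    for (int i = 0; i < g.length; i++) g[i] = rnd.nextBoolean();
    return g;
  }

  public static void main(String[] args) {
    Random rnd = new Random(1);
    for (int i = 0; i < 5; i++) {
      System.out.println(createGAVL(rnd, 8.0, 32).length);
    }
  }
}
```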
29.4.2 Insertion and Deletion: Unary Reproduction
If the string chromosomes are of variable length, the set of mutation operations introduced in Section 29.3 can be extended by two additional methods. First, we could insert a couple of genes with randomly chosen alleles at any given position into a chromosome, as sketched in Fig. 29.7.a. Second, this operation can be reversed by deleting elements from the string, as shown in Fig. 29.7.b. It should be noted that both insertion and deletion are also implicitly performed by crossover. Recombining two identical strings with each other can, for example, lead to deletion of genes. The crossover of different strings may turn out as an insertion of new genes into an individual.

Figure 29.7: Search operators for variable-length strings, additional to those from Section 29.3.2 and Section 29.3.3 (Fig. 29.7.a: insertion of random genes; Fig. 29.7.b: deletion of genes).
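The two operators can be sketched for bit-string lists as follows. This is our own illustration; the run-length parameter and the rule that deletion never empties the string are our assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class InsertDelete {

  // Insert a run of `count` random genes at a random position (Fig. 29.7.a).
  static List<Boolean> insert(List<Boolean> g, int count, Random rnd) {
    List<Boolean> child = new ArrayList<>(g);
    int pos = rnd.nextInt(child.size() + 1);
    for (int i = 0; i < count; i++) {
      child.add(pos, rnd.nextBoolean());
    }
    return child;
  }

  // Delete a run of up to `count` genes at a random position (Fig. 29.7.b);
  // at least one gene always survives.
  static List<Boolean> delete(List<Boolean> g, int count, Random rnd) {
    List<Boolean> child = new ArrayList<>(g);
    count = Math.min(count, child.size() - 1);
    int pos = rnd.nextInt(child.size() - count + 1);
    for (int i = 0; i < count; i++) {
      child.remove(pos);
    }
    return child;
  }

  public static void main(String[] args) {
    Random rnd = new Random(0);
    List<Boolean> g = new ArrayList<>(List.of(true, false, true, false));
    System.out.println(insert(g, 2, rnd).size()); // 6
    System.out.println(delete(g, 2, rnd).size()); // 2
  }
}
```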
29.4.3 Crossover: Binary Reproduction
For variable-length string chromosomes, the same crossover operations are available as for fixed-length strings, except that the strings are no longer necessarily split at the same loci. The lengths of the new strings resulting from such a cut and splice operation may differ from the lengths of the parents, as sketched in Figure 29.8. A special case of this type of recombination is the homologous crossover, where only genes at the same loci and with similar content are exchanged, as discussed in Section 29.3.4.6 on page 338 and in Section 31.4.4.1 on page 407.
Figure 29.8: Crossover of variable-length string chromosomes (Fig. 29.8.a: single-point crossover; Fig. 29.8.b: two-point crossover; Fig. 29.8.c: multi-point crossover).
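Such a cut and splice recombination can be sketched as follows. The sketch is ours: each parent is cut at an independently chosen locus and the tails are swapped, so the children's lengths may differ from the parents'.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class CutAndSplice {

  // Single-point crossover for variable-length strings: each parent is cut
  // at its own, independently chosen locus, and the tails are exchanged.
  static List<List<Boolean>> recombine(List<Boolean> a, List<Boolean> b,
                                       Random rnd) {
    int cutA = rnd.nextInt(a.size() + 1);
    int cutB = rnd.nextInt(b.size() + 1);
    List<Boolean> c1 = new ArrayList<>(a.subList(0, cutA));
    c1.addAll(b.subList(cutB, b.size()));
    List<Boolean> c2 = new ArrayList<>(b.subList(0, cutB));
    c2.addAll(a.subList(cutA, a.size()));
    return List.of(c1, c2);
  }

  public static void main(String[] args) {
    Random rnd = new Random(5);
    List<Boolean> a = List.of(true, true, true);
    List<Boolean> b = List.of(false, false, false, false, false);
    for (List<Boolean> child : recombine(a, b, rnd)) {
      System.out.println(child);
    }
  }
}
```

Note that the total number of genes is conserved, but how they are distributed over the two children depends on the two cut points.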
29.5 Schema Theorem
The Schema Theorem is a special instance of forma analysis (discussed in Chapter 9 on page 119) for Genetic Algorithms. As a matter of fact, it is older than its generalization and was first stated by Holland back in 1975 [711, 1250, 1253]. Here we will first introduce the basic concepts of schemata, masks, and wildcards before going into detail about the Schema Theorem itself, its criticism, and the related Building Block Hypothesis.
29.5.1 Schemata and Masks
Assume that the genotypes g in the search space G of Genetic Algorithms are strings of a fixed length l over an alphabet Σ⁶, i. e., G = Σ^l. Normally, Σ is the binary alphabet Σ = {true, false} = {0, 1}. From forma analysis, we know that properties can be defined on the genotypic or the phenotypic space. For fixed-length string genomes, we can consider the values at certain loci as properties of a genotype. There are two basic principles for defining such properties: masks and don't-care symbols.
Definition D29.3 (Mask). For a fixed-length string genome G = Σ^l, the set of all genotypic masks M(l) is the power set⁷ of the valid loci, M(l) = P(0..l − 1) [2879]. Every mask m_i ∈ M(l) defines a property φ_i and an equivalence relation:

g ∼_φi h ⟺ g[j] = h[j] ∀j ∈ m_i   (29.13)
Definition D29.4 (Order). The order order(m_i) of the mask m_i is the number of loci defined by it:

order(m_i) = |m_i|   (29.14)
Definition D29.5 (Defined Length). The defined length δ(m_i) of a mask m_i is the maximum distance between two indices in the mask:

δ(m_i) = max {|j − k| : j, k ∈ m_i}   (29.15)
A mask contains the indices of all elements in a string that are interesting in terms of the property it defines.

Definition D29.6 (Schema). A forma defined on a string genome concerning the values of the characters at specified loci is called schema [558, 1250].

⁶ Alphabets and such are defined in ?? on page ??.
⁷ The power set is described in Definition D51.9 on page 642.
342 29 GENETIC ALGORITHMS
29.5.2 Wildcards

The second method of specifying such schemata is to use don't-care symbols (wildcards) to create blueprints H of their member individuals. Therefore, we place the don't-care symbol * at all irrelevant positions and the characterizing values of the property at the others.

∀j ∈ 0..l − 1 : H(j) = { g[j] if j ∈ m_i; * otherwise }   (29.16)

H(j) ∈ Σ ∪ {*} ∀j ∈ 0..l − 1   (29.17)

Schemas correspond to masks and thus, definitions like the defined length and order can easily be transported into their context.
Example E29.1 (Masks and Schemas on Bit Strings).
Assume we have bit strings of the length l = 3 as genotypes (G = B³). The set of valid masks M(3) is then M(3) = {{0}, {1}, {2}, {0, 1}, {0, 2}, {1, 2}, {0, 1, 2}}. The mask m_1 = {1, 2}, for example, specifies that the values at the loci 1 and 2 of a genotype denote the value of a property φ_1 and the value of the bit at position 3 is irrelevant. Therefore, it defines four formae A_φ1=(0,0) = {(0, 0, 0), (0, 0, 1)}, A_φ1=(0,1) = {(0, 1, 0), (0, 1, 1)}, A_φ1=(1,0) = {(1, 0, 0), (1, 0, 1)}, and A_φ1=(1,1) = {(1, 1, 0), (1, 1, 1)}.
Figure 29.9: An example for schemata in a three bit genome (the eight genotypes g ∈ B³ and the schemata H_1 = (0, 0, *), H_2 = (0, 1, *), H_3 = (1, 0, *), H_4 = (1, 1, *), and H_5 = (1, *, *)).
These formae can be defined with schemas as well: A_φ1=(0,0) ≡ H_1 = (0, 0, *), A_φ1=(0,1) ≡ H_2 = (0, 1, *), A_φ1=(1,0) ≡ H_3 = (1, 0, *), and A_φ1=(1,1) ≡ H_4 = (1, 1, *). These schemata mark hyperplanes in the search space G, as illustrated in Figure 29.9 for the three bit genome.
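The blueprint notation can be operationalized in a few lines. The following sketch (our own illustration) checks whether a genotype matches a schema over {0, 1, *} and computes the order and defined length used later in the Schema Theorem.

```java
public class Schema {

  // Does genotype g match blueprint h, where '*' is the don't-care symbol?
  static boolean matches(String h, String g) {
    if (h.length() != g.length()) return false;
    for (int i = 0; i < h.length(); i++) {
      char c = h.charAt(i);
      if (c != '*' && c != g.charAt(i)) return false;
    }
    return true;
  }

  // Order: the number of non-don't-care loci.
  static int order(String h) {
    int n = 0;
    for (char c : h.toCharArray()) if (c != '*') n++;
    return n;
  }

  // Defined length: the maximum distance between two defined loci.
  static int definedLength(String h) {
    int first = -1, last = -1;
    for (int i = 0; i < h.length(); i++) {
      if (h.charAt(i) != '*') {
        if (first < 0) first = i;
        last = i;
      }
    }
    return (first < 0) ? 0 : last - first;
  }

  public static void main(String[] args) {
    System.out.println(matches("01*", "010")); // true
    System.out.println(matches("01*", "110")); // false
    System.out.println(order("01*"));          // 2
    System.out.println(definedLength("1**1")); // 3
  }
}
```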
29.5.3 Holland's Schema Theorem
The Schema Theorem⁸ was defined by Holland [1250] for Genetic Algorithms which use fitness-proportionate selection (see Section 28.4.3 on page 290) where fitness is subject to maximization [711, 1253]. Let us first only consider selection, fitness proportionate selection for fitness maximization and with replacement in particular, and ignore reproduction operations such as mutation and crossover.

⁸ http://en.wikipedia.org/wiki/Holland%27s_Schema_Theorem [accessed 2007-07-29]
For fitness proportionate selection in this case, Equation 28.18 on page 291 defines the probability to pick an individual in a single selection choice according to its fitness. We here print this equation in a slightly adapted form. If the genotype of p_{H,i} is an instance of the schema H, its probability of being selected with this method equals its fitness divided by the sum of the fitness of all individuals (p_∀) in the population pop(t) at generation t.

P(select(p_{H,i})) = v(p_{H,i}) / Σ_{p_∀ ∈ pop(t)} v(p_∀)   (29.18)
The chance to select any instance of schema H, here denoted as P(select(p_H)), is thus simply the sum of the probabilities of selecting any of the count(H, pop(t)) instances of this schema in the current population:

P(select(p_H)) = Σ_{i=1}^{count(H,pop(t))} P(select(p_{H,i})) = Σ_{i=1}^{count(H,pop(t))} v(p_{H,i}) / Σ_{p_∀ ∈ pop(t)} v(p_∀)   (29.19)
We can rewrite this equation to represent the arithmetic mean v̄_t(H) of the fitness of all instances of the schema in the population:

v̄_t(H) = (1 / count(H, pop(t))) · Σ_{i=1}^{count(H,pop(t))} v(p_{H,i})   (29.20)

Σ_{i=1}^{count(H,pop(t))} v(p_{H,i}) = count(H, pop(t)) · v̄_t(H)   (29.21)

P(select(p_H)) = count(H, pop(t)) · v̄_t(H) / Σ_{j=0}^{ps−1} v(p_j)   (29.22)
We can do a similar thing to compute the mean fitness v̄_t of all the ps individuals p_∀ in the population of generation t:

v̄_t = (1 / ps) · Σ_{p_∀ ∈ pop(t)} v(p_∀)   (29.23)

Σ_{p_∀ ∈ pop(t)} v(p_∀) = ps · v̄_t   (29.24)

P(select(p_H)) = count(H, pop(t)) · v̄_t(H) / (ps · v̄_t)   (29.25)
In order to fill the new population pop(t + 1) of generation t + 1, we need to select ps individuals. In other words, we have to repeat the decision given in Equation 29.25 exactly ps times. The expected value E[count(H, pop(t + 1))] of the number count(H, pop(t + 1)) of instances of the schema H selected into pop(t + 1) of generation t + 1 then evaluates to:

E[count(H, pop(t + 1))] = ps · P(select(p_H))   (29.26)
                        = ps · count(H, pop(t)) · v̄_t(H) / (ps · v̄_t)   (29.27)
E[count(H, pop(t + 1))] = count(H, pop(t)) · v̄_t(H) / v̄_t   (29.28)
From this equation we can see that schemas with better (higher) fitness are likely to be present in a higher proportion in the next generation. Also, we can expect that schemas with lower average fitness are likely to have fewer instances after selection. The next thing to consider in the Schema Theorem are the reproduction operations. Each individual application of a search operation may either:

1. Leave the schema untouched when reproducing an individual. If the parent was an instance of the schema, then the child is as well. If the parent was not an instance of the schema, the child will be neither.
2. Destroy an occurrence of the schema: The parent individual was an instance of the schema, but the child is not.
3. Create a new instance of the schema, i. e., the parent was not a schema instance, but the child is.
Holland [1250] was only interested in finding a lower bound for the expected value and thus considered only the first two cases. If ε is the probability that an instance of the schema is destroyed, Equation 29.28 changes to Equation 29.29.

E[count(H, pop(t + 1))] = (count(H, pop(t)) · v̄_t(H) / v̄_t) · (1 − ε)   (29.29)
If we apply single-point crossover to an individual which is already a member of the schema H, this can lead to the loss of the schema only if the crossover point separates the bits which are not declared as * in H. In a search space G = B^l, we have l − 1 possible crossover points: between loci 0 and 1, between 1 and 2, . . . , and between l − 2 and l − 1. Amongst them, only δ(H) points are dangerous. The defined length δ(H) of H is the number of possible crossover points lying within the schema (see, for example, Figure 29.10 on page 347). Furthermore, in order to actually destroy the schema instance, we would have to pick a second parent which has at least one wrong value in the genes it provides for the non-don't-care points it covers after crossover. We assume that this probability would be P_diff and, since we want to find a lower bound, simply assume P_diff = 1, i. e., the worst possible luck when choosing the parent. Under these circumstances, ε_c, the probability to destroy an instance of H with crossover, becomes Equation 29.30.

ε_c = (δ(H) / (l − 1)) · P_diff ≤ δ(H) / (l − 1)   (29.30)
For single-point mutation, on the other hand, solely the number of non-don't-care points in the schema H is interesting, the order order(H). Here, amongst the l loci we could hit (with uniform probability), exactly order(H) will lead to the destruction of the schema instance. In other words, the probability ε_m to do this with mutation is given in Equation 29.31.

ε_m = order(H) / l   (29.31)
What we see is that both reproduction operations favor short schemas over long ones. The lower the defined length of a schema and the lower its order, the higher is its chance to survive mutation and crossover. Finally, we can now put Equation 29.30 and Equation 29.31 together. If we assume that either mutation is applied (with probability mr), crossover is applied (with probability cr), or neither operation is used and the individual survives without modification, we get Equation 29.32 and, by substituting it into Equation 29.29, Equation 29.33.

ε = mr · ε_m + cr · ε_c = mr · (order(H) / l) + cr · (δ(H) / (l − 1))   (29.32)

E[count(H, pop(t + 1))] ≥ (count(H, pop(t)) · v̄_t(H) / v̄_t) · (1 − mr · (order(H) / l) − cr · (δ(H) / (l − 1)))   (29.33)
The fact that Equation 29.29 and Equation 29.33 hold for any possible schema at any given point in time accounts for what is called implicit parallelism, which was introduced in Section 28.1.2.9.
In Equation 29.29, we can already see that the number of instances of a schema is likely to increase as long as

1 < (v̄_t(H) / v̄_t) · (1 − ε)   (29.34)
1 < v̄_t(H) · (1 − ε) / v̄_t   (29.35)
1 > (1 / v̄_t(H)) · v̄_t / (1 − ε)   (29.36)
v̄_t(H) > v̄_t / (1 − ε)   (29.37)

If ε was 0, i. e., if instances of the schema H would never get lost, this just means that the schema fitness v̄_t(H) must be higher than the average population fitness v̄_t. For non-zero ε, the schema fitness must be higher in order to compensate for the destroyed instances appropriately.
If the relation of v̄_t(H) to v̄_t would remain constant for multiple generations (and of such a quality that leads to an increase of instances), the number of schema instances would grow exponentially (since it would multiply with a constant factor in each generation).
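The lower bound of Equation 29.33 is easy to evaluate numerically. The following sketch (our own illustration, with hypothetical parameter values) computes the expected instance count of a schema in the next generation from the quantities defined above.

```java
public class SchemaBound {

  // Lower bound of Equation 29.33 on the expected number of instances of
  // a schema H in generation t+1:
  //   count * (v_t(H)/v_t) * (1 - mr*order(H)/l - cr*delta(H)/(l-1))
  static double expectedCount(int count, double schemaFitness,
                              double meanFitness, double mr, double cr,
                              int order, int definedLength, int l) {
    double selection = count * schemaFitness / meanFitness;
    double survival = 1.0 - mr * ((double) order / l)
                          - cr * ((double) definedLength / (l - 1));
    return selection * survival;
  }

  public static void main(String[] args) {
    // A short, low-order schema with above-average fitness which currently
    // has 10 instances in a population over bit strings of length 20
    // (fitness values, rates, order, and defined length are made up):
    double e = expectedCount(10, 1.2, 1.0, 0.01, 0.7, 2, 3, 20);
    System.out.println(e);
  }
}
```

Without reproduction (mr = cr = 0), the bound reduces to the pure selection term of Equation 29.28.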
29.5.4 Criticism of the Schema Theorem
From Equation 29.29 or Equation 29.33, one could assume that short, highly t schemas
would spread exponentially in the population since their number would multiply with a cer-
tain factor in every generation. This deduction is however a very optimistic assumption and
not generally true, maybe even misleading. If a highly t schema has many ospring with
good tness, this will also improve the overall tness of the population. Hence, the proba-
bilities in Equation 29.29 will shift over time. Generally, the Schema Theorem represents a
lower bound that will only hold for one generation [2931]. Trying to derive predictions for
more than one or two generations using the Schema Theorem as is will lead to deceptive or
wrong results [1129, 1133].
Furthermore, the population of a Genetic Algorithm only represents a sample of limited size of the search space G. This limits the reproduction of the schemata and also makes statements about probabilities in general more complicated. Since we only have samples of the schemata H, we cannot be sure if v̄_t(H) really represents the average fitness of all the members of the schema. That is why we annotate it with the index t instead of writing v̄(H). Thus, even reproduction operators which preserve the instances of the schema may lead to a decrease of v̄_{t+...}(H) over time. It is also possible that parts of the population already have converged and other members of a schema will not be explored anymore, so we do not get further information about its real utility.
Additionally, we cannot know if it is really good if one specific schema spreads fast, even if it is very fit. Remember that we have already discussed the exploration versus exploitation topic and the importance of diversity in Section 13.3.1 on page 155.
Another issue is that we implicitly assume that most schemata are compatible and can be combined, i. e., that there is low interaction between different genes. This is also not generally valid: Epistatic effects, for instance, can lead to schema incompatibilities. Even the expressiveness of masks and blueprints is limited, and it can be argued that there are properties which we cannot specify with them. Take, for example, the set D_3 of numbers divisible by three: D_3 = {3, 6, 9, 12, . . . }. Representing them as binary strings will lead to D_3 = {0011, 0110, 1001, 1100, . . . } if we have a bit-string genome of the length 4. Obviously, we cannot seize these genotypes in a schema using the discussed approach. They
may, however, be gathered in a forma. Yet the Schema Theorem cannot hold for such a forma, since the probability ε of destruction may be different from instance to instance.
Finally, we mentioned that the Schema Theorem gives rise to implicit parallelism since it holds for any possible schema at the same time. However, it should be noted that (a) only a small subset of schemas is usually actually instantiated within the population and (b) only a few samples are usually available per schema (if any). Generally, this assumption (like many others) about the Schema Theorem only makes sense if the population size is assumed to be infinite and the sampling of the population and the schema would be really uniform.
29.5.5 The Building Block Hypothesis
According to Harik [1196], the substructure of a genotype which allows it to match to a schema is called a building block. The Building Block Hypothesis (BBH) proposed by Goldberg [1075] and Holland [1250] is based on two assumptions:

1. When a Genetic Algorithm solves a problem, there exist some low-order, low-defining-length schemata with above-average fitness (the so-called building blocks).
2. These schemata are combined step by step by the Genetic Algorithm in order to form larger and better strings. By using the building blocks instead of testing any possible binary configuration, Genetic Algorithms efficiently decrease the complexity of the problem. [1075]
Although it seems as if the Building Block Hypothesis is supported by the Schema Theorem, this cannot be verified easily. Experiments that were originally intended to prove this theory often did not work out as planned [1913]. Also consider the criticisms of the Schema Theorem mentioned in the previous section, which turn higher abstractions like the Building Block Hypothesis into a touchy subject. In general, there exists much criticism of the Building Block Hypothesis and, although it is a very nice model, it cannot yet be considered as sufficiently proven.
29.5.6 Genetic Repair and Similarity Extraction
The GR Hypothesis by Beyer [298], based on the Genetic Repair (GR) Effect [296], takes a counter position to the Building Block Hypothesis [302]. It states that not the different schemas are combined from the parent individuals to form the child, but instead the similarities are inherited.
Let us take a look at uniform crossover (UX) for bit-string based search spaces, as introduced in Section 29.3.4.4 on page 337. Here, the value for each locus of the child individual g_c is chosen randomly from the values at the same locus in one of the two parents g_{p,a} and g_{p,b}. This means that if the parents have the same value in locus i, then the child will inherit this value too: g_{p,a}[i] = g_{p,b}[i] ⟹ g_c[i] = g_{p,a}[i] = g_{p,b}[i]. If the two parents disagree in one locus j, however, the value at that locus in g_c is either 1 or 0, both with a probability of 50%. In other words, uniform crossover preserves the positions where the parents are equal and randomly fills up the remaining genes.
If a sequence of genes carries the same alleles in both parents, there are three possible reasons: (a) either it emerged randomly, which has a probability of 2^{−l} for length-l sequences and is thus very unlikely, (b) both parents inherited the sequence, or (c) a combination of inherited sequences of length m less than l and a random coincidence of l − m equal bits (resulting, for example, from mutation). The latter two cases, the inheritance of the sequence or subsequence, imply that the sequence (or parts thereof) was previously selected, i. e., seemingly has good fitness. Uniform crossover preserves these components. The remaining bits are uninteresting, maybe even harmful, so they can be replaced. In UX, this is done with maximum entropy, in a maximally random fashion [302].
Hence, the idea of Genetic Repair is that useful gene sequences, which are built step by step by mutation (or the variation resulting from crossover), are aggregated. The useless sequences, however, are dampened or destroyed by the same operator. This theory has been explained for bit-string based as well as real-valued vector based search spaces alike [298, 302].
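The preservation of common positions under uniform crossover can be sketched as follows (our own illustration).

```java
import java.util.Arrays;
import java.util.Random;

public class UniformCrossover {

  // Uniform crossover: loci where the parents agree are inherited
  // unchanged; where they disagree, the child's bit is chosen with
  // probability 0.5 from either parent, i.e., with maximum entropy.
  static boolean[] recombine(boolean[] a, boolean[] b, Random rnd) {
    boolean[] child = new boolean[a.length];
    for (int i = 0; i < a.length; i++) {
      child[i] = (a[i] == b[i]) ? a[i] : rnd.nextBoolean();
    }
    return child;
  }

  public static void main(String[] args) {
    Random rnd = new Random(9);
    boolean[] a = { true, true, false, false, true };
    boolean[] b = { true, false, false, true, true };
    // loci 0, 2, and 4 agree and are guaranteed to survive in the child
    System.out.println(Arrays.toString(recombine(a, b, rnd)));
  }
}
```

Whatever the random choices at the disagreeing loci, the shared subsequences of the parents always reappear in the child, which is exactly the Genetic Repair argument.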
29.6 The Messy Genetic Algorithm
According to the Schema Theorem specified in Equation 29.29 and Equation 29.33, a schema is likely to spread in the population if it has above-average fitness, is short (i. e., has a low defined length), and is of low order [180]. Thus, according to Equation 29.33, from two schemas of the same average fitness and order, the one with the lesser defined length will be propagated to more offspring, since it is less likely to be destroyed by crossover. Therefore, placing dependent genes close to each other would be a good search space design approach, since it would allow good building blocks to proliferate faster. These building blocks, however, are not known at design time, otherwise the problem would already be solved. Hence, it is not generally possible to devise such a design.
The messy Genetic Algorithms (mGAs) developed by Goldberg et al. [1083] use a coding scheme which is intended to allow the Genetic Algorithm to re-arrange genes at runtime. Because of this feature, it can place the genes of a building block spatially close together. This method of linkage learning may thus increase the probability that building blocks, i. e., sets of epistatically linked genes, are preserved during crossover operations, as sketched in Figure 29.10. It thus mitigates the effects of epistasis as discussed in Section 17.3.3.
Figure 29.10: Two linked genes and their destruction probability under single-point crossover (before rearranging, the pair is destroyed in 6 out of 9 cases by crossover; after rearranging, in 1 out of 9 cases).
29.6.1 Representation
The idea behind the genomes used in messy GAs goes back to the work of Bagley [180] from 1967, who first introduced a representation where the ordering of the genes was not fixed. Instead, for each gene a tuple of its position (locus) and value (allele) was used. For instance, the bit string 000111 can be represented as g_1 = ((0, 0), (1, 0), (2, 0), (3, 1), (4, 1), (5, 1)) but as well as g_2 = ((5, 1), (1, 0), (3, 1), (2, 0), (0, 0), (4, 1)), where both genotypes map to the same phenotype, i. e., gpm(g_1) = gpm(g_2).
29.6.2 Reproduction Operations
29.6.2.1 Inversion: Unary Reproduction
The inversion operator reverses the order of genes between two randomly chosen loci [180, 1196]. With this operation, any particular ordering can be produced in a relatively small number of steps. Figure 29.11 illustrates, for example, how the possible building block components (1, 0), (3, 0), (4, 0), and (6, 0) can be brought together in two steps. Nevertheless, the effects of the inversion operation were rather disappointing [180, 985].
Figure 29.11: An example for two subsequent applications of the inversion operation [1196] (a genotype over the loci 0..7 is shown before the first inversion, after the first inversion, and after the second inversion).
29.6.2.2 Cut: Unary Reproduction
The cut operator splits a genotype g into two with the probability p_c = (len(g) − 1) · p_K, where p_K is a bitwise probability and len(g) the length of the genotype [1551]. With p_K = 0.1, the genotype g_1 = ((0, 0), (1, 0), (2, 0), (3, 1), (4, 1), (5, 1)) has a cut probability of p_c = (6 − 1) · 0.1 = 0.5. A cut at position 4 would lead to g_3 = ((0, 0), (1, 0), (2, 0), (3, 1)) and g_4 = ((4, 1), (5, 1)).
29.6.2.3 Splice: Binary Reproduction
The splice operator joins two genotypes with a predefined probability p_s by simply attaching one to the other [1551]. Splicing g_2 = ((5, 1), (1, 0), (3, 1), (2, 0), (0, 0), (4, 1)) and g_4 = ((4, 1), (5, 1)), for instance, leads to g_5 = ((5, 1), (1, 0), (3, 1), (2, 0), (0, 0), (4, 1), (4, 1), (5, 1)). In summary, the application of two cuts and a subsequent splice operation to two genotypes has roughly the same effect as a single-point crossover operator on variable-length string chromosomes (Section 29.4.3).
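Cut and splice can be sketched as follows, representing each messy gene as a {locus, allele} pair; the sketch and its names are ours.

```java
import java.util.ArrayList;
import java.util.List;

public class CutSplice {

  // Cut: split a messy genotype (a list of {locus, allele} pairs) into
  // two parts at the given position.
  static List<List<int[]>> cut(List<int[]> g, int pos) {
    return List.of(new ArrayList<>(g.subList(0, pos)),
                   new ArrayList<>(g.subList(pos, g.size())));
  }

  // Splice: attach the second genotype to the first.
  static List<int[]> splice(List<int[]> a, List<int[]> b) {
    List<int[]> joined = new ArrayList<>(a);
    joined.addAll(b);
    return joined;
  }

  public static void main(String[] args) {
    // g1 from the text, cut at position 4 into g3 (4 genes) and g4 (2 genes)
    List<int[]> g1 = List.of(new int[]{0,0}, new int[]{1,0}, new int[]{2,0},
                             new int[]{3,1}, new int[]{4,1}, new int[]{5,1});
    List<List<int[]>> parts = cut(g1, 4);
    System.out.println(parts.get(0).size()); // 4
    System.out.println(parts.get(1).size()); // 2
    System.out.println(splice(parts.get(1), parts.get(0)).size()); // 6
  }
}
```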
29.6.3 Overspecication and Underspecication
The genotypes in messy GAs have a variable length, and the cut and splice operators can lead to genotypes being over- or underspecified. If we assume a three bit genome, the genotype g_6 = ((2, 0), (0, 0), (2, 1), (1, 0)) is overspecified since it contains two (in this example, different) alleles for the third gene (at locus 2). g_7 = ((2, 0), (0, 0)), in turn, is underspecified since it does not contain any value for the gene in the middle (at locus 1). Dealing with overspecification is rather simple [847, 1551]: The genes are processed from left to right during the genotype-phenotype mapping, and the first allele found for a specific locus wins. In other words, g_6 from above codes for 000 and the second value for locus 2 is discarded. The loci left open during the interpretation of underspecified genes are filled with values from a template string [1551]. If this string was 000, g_7 would code for 000, too.
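The first-allele-wins rule and the template fill can be sketched as follows (our own illustration of the genotype-phenotype mapping described above).

```java
public class MessyGPM {

  // Decode a messy genotype (an array of {locus, allele} pairs) into a
  // bit string of length l: scanning left to right, the first allele
  // found for a locus wins (overspecification); loci that remain open are
  // filled from the template string (underspecification).
  static int[] decode(int[][] genes, int l, int[] template) {
    int[] x = new int[l];
    boolean[] set = new boolean[l];
    for (int[] gene : genes) {
      int locus = gene[0], allele = gene[1];
      if (!set[locus]) {        // first allele for this locus wins
        x[locus] = allele;
        set[locus] = true;
      }
    }
    for (int i = 0; i < l; i++) {
      if (!set[i]) x[i] = template[i]; // fill open loci from the template
    }
    return x;
  }

  public static void main(String[] args) {
    int[] template = { 0, 0, 0 };
    // g6 = ((2,0),(0,0),(2,1),(1,0)) is overspecified at locus 2
    int[] g6 = decode(new int[][] { {2,0}, {0,0}, {2,1}, {1,0} }, 3, template);
    // g7 = ((2,0),(0,0)) is underspecified at locus 1
    int[] g7 = decode(new int[][] { {2,0}, {0,0} }, 3, template);
    System.out.println(java.util.Arrays.toString(g6)); // [0, 0, 0]
    System.out.println(java.util.Arrays.toString(g7)); // [0, 0, 0]
  }
}
```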
29.6.4 The Process
In a simple Genetic Algorithm, building blocks are identified and recombined simultaneously, which leads to a race between recombination and selection [1196]. In the messy GA [1083, 1084], this race is avoided by separating the evolutionary process into two stages:

1. In the primordial phase, building blocks are identified. In the original conception of the messy GA, all possible building blocks of a particular order k are generated. Via selection, the best ones are identified and spread in the population.
2. These building blocks are recombined with the cut and splice operators in the subsequent juxtapositional phase.

The complexity of the original mGA stems from a bootstrap phase needed to identify the order-k building blocks, which in turn required identifying the order-(k − 1) blocks first. This bootstrapping was done by applying the primordial and juxtapositional phases for all orders from 1 to k − 1. This process was later improved by using a probabilistically complete initialization algorithm [1087] instead.
29.7 Random Keys
In Section 29.3.1.2 and 29.3.3, we introduce operators for processing strings which directly represent permutations of the elements from a given set S = {s_0, s_1, .., s_{n−1}}. This direct representation is achieved by using a search space G which contains the n-dimensional vectors over the natural numbers from 0 to n − 1, i. e., G = [0..n − 1]^n. The gene g[i] at locus i of the genotype g ∈ G hence has a value j = g[i] and specifies that element s_j should be at position i in the permutation, as defined in the genotype-phenotype mapping introduced in Equation 29.6 on page 329.

It is quite clear that every valid number must occur exactly once in each genotype. Performing a simple single-point crossover on two different genotypes is hence not feasible. The two genotypes g_1 = (1, 2, 3, 5, 4) and g_2 = (4, 2, 3, 5, 1) could lead to the offspring (1, 2, 3, 5, 1) or (4, 2, 3, 5, 4), both of which contain one number twice while missing another one. We can therefore apply neither Algorithm 29.6 nor Algorithm 29.7 to such genotypes.
The Random Keys encoding defined by Bean [236] is a simple idea how to circumvent this problem. It introduces a genotype-phenotype mapping which translates n-dimensional vectors of real numbers (G ⊆ R^n) to permutations of length n. Each gene g[i] of a genotype g with i ∈ 0..n − 1 is thus a real number or, most commonly, an element of the subset [0, 1] of the real numbers. The gene at locus i stands for the element s_i. The permutation of these elements, i. e., the position at which the element s_i appears in the phenotype, is defined by the position of the gene g[i] in the sorted list of all genes. In Algorithm 29.8, we present an algorithm which performs the Random Keys genotype-phenotype mapping. An example implementation of the algorithm in Java is given in Listing 56.52 on page 834.
Algorithm 29.8: x ← gpm_RK(g, S)

Input: g ∈ G ⊆ R^n: the genotype
Input: S = {s_0, s_1, .., s_{n−1}}: the set of elements in the permutation. Often S ⊆ N_0 and s_i = i
Data: i: a counter variable
Data: j: the position index of the allele of the i-th gene
Data: g′: a temporary genotype
Output: x ∈ X = Π(S): the corresponding phenotype

1 begin
2   g′ ← ()
3   x ← ()
4   for i ← 0 up to len(g) − 1 do
5     j ← searchItemAS(g[i], g′)
6     if j < 0 then j ← (−j) − 1
7     g′ ← insertItem(g′, j, g[i])
8     x ← insertItem(x, j, s_i)
9   return x
In other words, the genotype g_3 = (0.2, 0.45, 0.17, 0.84, 0.59) would be translated to the phenotype x_3 = (s_2, s_0, s_1, s_4, s_3), since 0.17 (i. e., g_3[2]) is the smallest number and hence, s_2 comes first. The next smallest number is 0.2, the value of the gene at locus 0. Hence, s_0 will occur at the second place of the phenotype, and so on. Finally, the gene at locus 3 has the largest allele (g_3[3] = 0.84) and thus, s_3 is the last element in the permutation.
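Algorithm 29.8 effectively arg-sorts the genotype. The following sketch is ours and differs from Listing 56.52, but computes the same permutation: it sorts the loci by their allele values and reproduces the example above.

```java
import java.util.Arrays;
import java.util.Comparator;

public class RandomKeys {

  // Translate a real-valued genotype into a permutation of the element
  // indices 0..n-1: element i is placed at the rank of g[i] among all genes.
  static int[] gpmRK(double[] g) {
    Integer[] idx = new Integer[g.length];
    for (int i = 0; i < g.length; i++) idx[i] = i;
    // sort the loci by their allele values; the sorted loci form the permutation
    Arrays.sort(idx, Comparator.comparingDouble(i -> g[i]));
    int[] perm = new int[g.length];
    for (int i = 0; i < g.length; i++) perm[i] = idx[i];
    return perm;
  }

  public static void main(String[] args) {
    // the example genotype g3 from the text
    double[] g3 = { 0.2, 0.45, 0.17, 0.84, 0.59 };
    System.out.println(Arrays.toString(gpmRK(g3))); // [2, 0, 1, 4, 3]
  }
}
```

The output indices correspond to the phenotype (s_2, s_0, s_1, s_4, s_3) derived above.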
With this genotype-phenotype mapping, we can translate a combinatorial problem, where a given set of elements must be ordered, into a numerical optimization problem defined over a continuous domain. Bean [236] used the Random Keys method to solve Vehicle Routing Problems, Traveling Salesman Problems, quadratic assignment problems, scheduling and resource allocation problems. Snyder and Daskin [2540] combine a Genetic Algorithm, the Random Keys encoding, and a local search method to likewise tackle generalized Traveling Salesman Problems.
29.8 General Information on Genetic Algorithms
29.8.1 Applications and Examples
Table 29.1: Applications and Examples of Genetic Algorithms.
Area References
Art [1953, 2497, 2529]
Chemistry [214, 558, 668, 2729]
Combinatorial Problems [16, 41, 65, 81, 190, 236, 237, 240, 382, 402, 429, 436, 527, 573, 629, 640, 654, 765, 978, 1126, 1161, 1270, 1440, 1442, 1446, 1447, 1460, 1471, 1534, 1538, 1558, 1581, 1686, 1706, 1709, 1737, 1756, 1757, 1779–1781, 1896, 1969, 1971, 2019, 2075, 2267, 2324, 2338, 2373, 2374, 2472, 2473, 2540, 2656, 2658, 2686, 2713, 2732, 2795, 2796, 2865, 2901, 2923, 3024, 3044, 3062, 3084]
Computer Graphics [56, 466, 511, 541, 757, 2732]
Control [541, 1074]
Cooperation and Teamwork [2922]
Data Mining [633, 683, 717, 911, 973, 994, 1081, 1082, 1105, 1241, 1487,
1581, 1906, 1910, 2147, 2148, 2168, 2643, 2819, 2894, 2902,
3076]
Databases [640, 1269, 1270, 1706, 1779–1781, 2658, 2795, 2796, 2865,
3015, 3044, 3062, 3063]
Digital Technology [240, 575, 789]
Distributed Algorithms and Systems
[15, 71, 335, 410, 541, 629, 721, 769, 872, 896, 1291, 1488,
1537, 1538, 1558, 1636, 1657, 1750, 1840, 1935, 1982, 2068,
2113, 2114, 2177, 2338, 2370, 2498, 2656, 2720, 2729, 2796,
2901, 2909, 2922, 2950, 2989, 3002, 3076]
E-Learning [71, 721, 1284, 3076]
Economics and Finances [1060, 1906]
Engineering [56, 240, 303, 355, 410, 519, 532, 556, 575, 640, 757, 769, 789,
1211, 1269, 1450, 1488, 1494, 1636, 1640, 1657, 1704, 1705,
1725, 1763–1765, 1767–1770, 1829, 1840, 1894, 1982, 2018,
2163, 2178, 2232, 2250, 2270, 2498, 2656, 2729, 2732, 2814,
2959, 3002, 3055, 3056]
Function Optimization [103, 169, 409, 610, 711, 713, 714, 735, 1080, 1258, 1709,
1841, 2128, 2324, 2933, 2981, 3079]
Games [998]
Graph Theory [15, 410, 556, 575, 629, 640, 769, 872, 896, 1284, 1450, 1471,
1488, 1558, 1636, 1657, 1750, 1767–1770, 1840, 1935, 1982,
2068, 2113, 2114, 2177, 2338, 2370, 2372, 2374, 2498, 2655,
2656, 2729, 2814, 2950, 3002]
Healthcare [2022]
Logistics [41, 65, 81, 190, 236, 382, 402, 436, 527, 978, 1126, 1442,
1446, 1447, 1581, 1704, 1705, 1709, 1756, 1896, 1971, 2075,
2270, 2324, 2540, 2686, 2713, 2720, 3024, 3084]
Mathematics [240, 520, 652, 998, 2284, 2285, 2416]
Multi-Agent Systems [1494]
Physics [532, 575]
Prediction [1285]
Security [1725]
Software [620, 652, 790, 998, 2901]
Theorem Proving and Automatic Verification [640]
Wireless Communication [154, 532, 556, 575, 640, 679, 826, 1284, 1450, 1494, 1535,
1767–1770, 1796, 2250, 2371, 2372, 2374, 2655, 2814]
29.8.2 Books
1. Handbook of Evolutionary Computation [171]
2. Genetic Algorithms and Evolution Strategy in Engineering and Computer Science: Recent Advances and Industrial Applications [2163]
3. Evolutionary Computation 1: Basic Algorithms and Operators [174]
4. Genetic Algorithms and Genetic Programming at Stanford [1601]
5. Telecommunications Optimization: Heuristic and Adaptive Techniques [640]
6. Genetic Algorithms and Simulated Annealing [694]
7. Handbook of Genetic Algorithms [695]
8. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence [1250]
9. Practical Handbook of Genetic Algorithms: New Frontiers [522]
10. Genetic and Evolutionary Computation for Image Processing and Analysis [466]
11. Evolutionary Computation: The Fossil Record [939]
12. Genetic Algorithms for Applied CAD Problems [1640]
13. Extending the Scalability of Linkage Learning Genetic Algorithms – Theory & Practice [552]
14. Efficient and Accurate Parallel Genetic Algorithms [477]
15. Electromagnetic Optimization by Genetic Algorithms [2250]
16. Genetic Fuzzy Systems: Evolutionary Tuning and Learning of Fuzzy Knowledge Bases [634]
17. Genetic Algorithms Reference Volume 1 – Crossover for Single-Objective Numerical Optimization Problems [1159]
18. Foundations of Learning Classifier Systems [443]
19. Practical Handbook of Genetic Algorithms: Complex Coding Systems [523]
20. Evolutionary Computation: A Unified Approach [715]
21. Genetic Algorithms in Search, Optimization, and Machine Learning [1075]
22. Introduction to Stochastic Search and Optimization [2561]
23. Advances in Evolutionary Algorithms [1581]
24. Representations for Genetic and Evolutionary Algorithms [2338]
25. Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms [167]
26. Evaluation of the Effectiveness of Genetic Algorithms in Combinatorial Optimization [2923]
27. Data Mining and Knowledge Discovery with Evolutionary Algorithms [994]
28. Evolutionäre Algorithmen [2879]
29. Genetic Algorithm and its Application (Yíchuán Suànfǎ Jí Qí Yìngyòng) [541]
30. Advances in Evolutionary Algorithms – Theory, Design and Practice [41]
31. Cellular Genetic Algorithms [66]
32. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence [1252]
33. An Introduction to Genetic Algorithms [1912]
34. Genetic Algorithms + Data Structures = Evolution Programs [1886]
35. Practical Handbook of Genetic Algorithms: Applications [521]
29.8.3 Conferences and Workshops
Table 29.2: Conferences and Workshops on Genetic Algorithms.
GECCO: Genetic and Evolutionary Computation Conference
See Table 28.5 on page 317.
CEC: IEEE Conference on Evolutionary Computation
See Table 28.5 on page 317.
EvoWorkshops: Real-World Applications of Evolutionary Computing
See Table 28.5 on page 318.
ICGA: International Conference on Genetic Algorithms and their Applications
History: 1997/07: East Lansing, MI, USA, see [166]
1995/07: Pittsburgh, PA, USA, see [883]
1993/07: Urbana-Champaign, IL, USA, see [969]
1991/07: San Diego, CA, USA, see [254]
1989/06: Fairfax, VA, USA, see [2414]
1987/07: Cambridge, MA, USA, see [1132]
1985/06: Pittsburgh, PA, USA, see [1131]
PPSN: Conference on Parallel Problem Solving from Nature
See Table 28.5 on page 318.
WCCI: IEEE World Congress on Computational Intelligence
See Table 28.5 on page 318.
EMO: International Conference on Evolutionary Multi-Criterion Optimization
See Table 28.5 on page 319.
FOGA: Workshop on Foundations of Genetic Algorithms
History: 2011/01: Schwarzenberg, Austria, see [301]
2009/01: Orlando, FL, USA, see [29]
2005/01: Aizu-Wakamatsu City, Japan, see [2983]
2002/09: Torremolinos, Spain, see [719]
2001/07: Charlottesville, VA, USA, see [2565]
1998/07: Madison, WI, USA, see [208]
1996/08: San Diego, CA, USA, see [256]
1994/07: Estes Park, CO, USA, see [2932]
1992/07: Vail, CO, USA, see [2927]
1990/07: Bloomington, IN, USA, see [2562]
AE/EA: Artificial Evolution
See Table 28.5 on page 319.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
ICANNGA: International Conference on Adaptive and Natural Computing Algorithms
See Table 28.5 on page 319.
ICCI: IEEE International Conference on Cognitive Informatics
See Table 10.2 on page 128.
ICNC: International Conference on Advances in Natural Computation
See Table 28.5 on page 319.
SEAL: International Conference on Simulated Evolution and Learning
See Table 10.2 on page 130.
BIOMA: International Conference on Bioinspired Optimization Methods and their Applications
See Table 28.5 on page 320.
BIONETICS: International Conference on Bio-Inspired Models of Network, Information, and Computing Systems
See Table 10.2 on page 130.
EUROGEN: Evolutionary Methods for Design Optimization and Control
History: 2007/06: Jyväskylä, Finland, see [2751]
2005/09: Munich, Bavaria, Germany, see [2419]
2003/09: Barcelona, Catalonia, Spain, see [216]
2001/09: Athens, Greece, see [1053]
1999/05: Jyväskylä, Finland, see [1894]
1997/11: Trieste, Italy, see [2232]
1995/12: Las Palmas de Gran Canaria, Spain, see [2959]
FEA: International Workshop on Frontiers in Evolutionary Algorithms
See Table 28.5 on page 320.
MENDEL: International Mendel Conference on Genetic Algorithms
History: 2009/06: Brno, Czech Republic, see [411, 412]
2007/09: Prague, Czech Republic, see [2107]
2006/05: Brno, Czech Republic, see [413]
2005/06: Brno, Czech Republic, see [421]
2004/06: Brno, Czech Republic, see [420]
2003/06: Brno, Czech Republic, see [1842]
2002/06: Brno, Czech Republic, see [419]
2001/06: Brno, Czech Republic, see [2522]
2000/06: Brno, Czech Republic, see [2106]
1998/06: Brno, Czech Republic, see [417]
1997/06: Brno, Czech Republic, see [416, 418]
1996/06: Brno, Czech Republic, see [415]
1995/09: Brno, Czech Republic, see [414]
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
MICAI: Mexican International Conference on Artificial Intelligence
See Table 10.2 on page 131.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
WSC: Online Workshop/World Conference on Soft Computing (in Industrial Applications)
See Table 10.2 on page 131.
CIMCA: International Conference on Computational Intelligence for Modelling, Control and Automation
See Table 28.5 on page 320.
CIS: International Conference on Computational Intelligence and Security
See Table 28.5 on page 320.
CSTST: International Conference on Soft Computing as Transdisciplinary Science and Technology
See Table 10.2 on page 132.
EUFIT: European Congress on Intelligent Techniques and Soft Computing
See Table 10.2 on page 132.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
EvoCOP: European Conference on Evolutionary Computation in Combinatorial Optimization
See Table 28.5 on page 321.
GALESIA: International Conference on Genetic Algorithms in Engineering Systems
History: 1997/09: Glasgow, Scotland, UK, see [3056]
1995/09: Sheffield, UK, see [3055]
GEC: ACM/SIGEVO Summit on Genetic and Evolutionary Computation
See Table 28.5 on page 321.
GEM: International Conference on Genetic and Evolutionary Methods
See Table 28.5 on page 321.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
SoCPaR: International Conference on SOft Computing and PAttern Recognition
See Table 10.2 on page 133.
Progress in Evolutionary Computation: Workshops on Evolutionary Computation
See Table 28.5 on page 321.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
AJWS: Australia-Japan Joint Workshop on Intelligent & Evolutionary Systems
See Table 28.5 on page 321.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
ASC: IASTED International Conference on Artificial Intelligence and Soft Computing
See Table 10.2 on page 134.
COGANN: International Workshop on Combinations of Genetic Algorithms and Neural Networks
History: 1992/06: Baltimore, MD, USA, see [2415]
EvoNUM: European Event on Bio-Inspired Algorithms for Continuous Parameter Optimisation
See Table 28.5 on page 321.
FOCI: IEEE Symposium on Foundations of Computational Intelligence
See Table 28.5 on page 321.
FWGA: Finnish Workshop on Genetic Algorithms and Their Applications
Later continued as NWGA.
History: 1995/06: Vaasa, Finland, see [59]
1994/03: Vaasa, Finland, see [58]
1992/11: Espoo, Finland, see [57]
GICOLAG: Global Optimization Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
IJCCI: International Conference on Computational Intelligence
See Table 28.5 on page 321.
IWNICA: International Workshop on Nature Inspired Computation and Applications
See Table 28.5 on page 321.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna (Polish National Conference on Evolutionary Algorithms and Global Optimization)
See Table 10.2 on page 134.
NWGA: Nordic Workshop on Genetic Algorithms
History: 1998/07: Trondheim, Norway, see [62]
1997/08: Helsinki, Finland, see [61]
1996/08: Vaasa, Finland, see [60]
1995/06: Vaasa, Finland, see [59]
NaBIC: World Congress on Nature and Biologically Inspired Computing
See Table 28.5 on page 322.
WPBA: Workshop on Parallel Architectures and Bioinspired Algorithms
See Table 28.5 on page 322.
EvoRobots: International Symposium on Evolutionary Robotics
See Table 28.5 on page 322.
ICEC: International Conference on Evolutionary Computation
See Table 28.5 on page 322.
IWACI: International Workshop on Advanced Computational Intelligence
See Table 28.5 on page 322.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
SEMCCO: International Conference on Swarm, Evolutionary and Memetic Computing
See Table 28.5 on page 322.
SSCI: IEEE Symposium Series on Computational Intelligence
See Table 28.5 on page 322.
29.8.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Genetic Programming and Evolvable Machines published by Kluwer Academic Publishers and
Springer Netherlands
Tasks T29
78. Perform the same experiments as in Task 64 on page 238 (and 68) with a Genetic
Algorithm. Use a bit-string based search space and utilize an appropriate genotype-
phenotype mapping.
[40 points]
79. Perform the same experiments as in Task 64 on page 238 (and 68 and 78) with a
Genetic Algorithm. Use a real-vector based search space. Implement an appropriate
crossover operator.
[40 points]
80. Compare your results from Task 64 on page 238, 68, and 78. What are the differences? What are the similarities? Did you use different experimental settings? If so, what happens if you use the same experimental settings for all three tasks? Try to give reasons for your results.
[10 points]
81. Perform the same experiments as in Task 67 on page 238 (and 70) with a Genetic Algorithm. If you use a permutation-based search space, you may re-use the nullary and unary search operations we already have developed for this task. Yet, you need to define a recombination operation which does not violate the permutation character of the genotypes.
[40 points]
82. Compare your results from Task 67 on page 238, 70 and 81. What are the differences? What are the similarities? Did you use different experimental settings? If so, what happens if you use the same experimental settings for both tasks? Try to give reasons for your results. Put your explanations into the context of the discussions provided in Example E17.4 on page 181.
[10 points]
83. Try solving the bin packing problem again, this time by reproducing the experiments given by Khuri, Schütz, and Heitkötter in [1534]. Compare the results with the results listed in that section and obtained by you in Task 70 and 70. Which method is better? Provide reasons for your observation.
[50 points]
84. In Paragraph 56.2.1.2.1 on page 801, we provide a Java implementation of some of the standard operators of Genetic Algorithms based on arrays of boolean. Obviously, using boolean[] is a bit inefficient, since it wastes memory. Each boolean will take up at least 1 byte, probably even 4, 8, or maybe even 16 bytes. Since it only represents a single bit, i.e., one eighth of a byte, it would be more economical to represent bit strings as arrays of int or long and access specific bits via bit arithmetic (via, e.g., &, |, ^, <<, or >>>). Please re-implement the operators from Paragraph 56.2.1.2.1 on page 801 in a more efficient way, by either using byte[], int[], or long[], or a specific class such as java.util.BitSet for storing the bit strings instead of boolean[].
[30 points]
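As a starting point for Task 84, here is a minimal, hypothetical sketch of the bit-arithmetic idea: a bit string packed into a long[], with single-bit reading and flipping. The class and method names are ours, not part of the book's listings.

```java
/** Sketch for Task 84: storing a bit string in a long[] (64 bits per word)
    and accessing/flipping single bits via bit arithmetic. */
public class PackedBits {
  private final long[] words;
  private final int length; // number of bits in the string

  public PackedBits(int length) {
    this.length = length;
    this.words = new long[(length + 63) >>> 6]; // ceil(length / 64) words
  }

  /** Read bit i: word index is i/64, bit index within the word is i mod 64. */
  public boolean get(int i) {
    return ((this.words[i >>> 6] >>> (i & 63)) & 1L) != 0L;
  }

  /** Flip bit i: the core of a single-bit mutation operator. */
  public void flip(int i) {
    this.words[i >>> 6] ^= (1L << (i & 63));
  }

  public int length() {
    return this.length;
  }
}
```

A mutation operator for Genetic Algorithms could then flip a randomly chosen bit with a single XOR instead of touching a boolean[] element.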
Chapter 30
Evolution Strategies
30.1 Introduction

Evolution Strategies^1 (ES), introduced by Rechenberg [2277, 2278, 2279] and Schwefel [2433, 2434, 2435], are a heuristic optimization technique based on the ideas of adaptation and evolution, a special form of Evolutionary Algorithms [170, 300, 302, 1220, 2277–2279, 2437]. The first Evolution Strategies were quite similar to Hill Climbing methods. They did not yet focus on evolving real-valued, numerical vectors and only featured two rules for finding good solutions: (a) slightly modifying one candidate solution and (b) keeping it only if it is at least as good as its parent. Since there was only one parent and one offspring in this form of Evolution Strategy, this early method more or less resembles a (1 + 1) population as discussed in Paragraph 28.1.4.3.3 on page 266. From there on, (μ + λ) and (μ, λ) Evolution Strategies were developed, a step into a research direction which was not very welcome back in the 1960s [302].
The search space of today's Evolution Strategies normally consists of vectors from the R^n, but bit strings or integer strings are common as well [302]. Here, we will focus mainly on the former, the real-vector based search spaces. In this case, the problem space is often identical with the search space, i.e., X = G ⊆ R^n. ESs normally follow one of the population handling methods classified in the μ-λ notation given in Section 28.1.4.3.
Mutation and selection are the primary reproduction operators, and recombination is used less often in Evolution Strategies. Most often, normally distributed random numbers are used for mutation. The parameter of the mutation is the standard deviation σ of these random numbers. The Evolution Strategy may either
1. maintain a single standard deviation parameter σ and use identical normal distributions for generating the random numbers added to each element of the solution vectors,
2. use a separate standard deviation (from a standard deviation vector σ) for each element of the genotypes, i.e., create random numbers from different normal distributions for mutations in order to facilitate different strengths and resolutions of the decision variables (see, for instance, Example E28.11), or
3. use a complete covariance matrix C for creating random vectors distributed in a hyperellipse and thus also taking into account binary interactions between the elements of the solution vectors.
The standard deviations are governed by self-adaptation [1185, 1617, 1878] and may result from a stochastic analysis of the elements in the population [1182–1184, 1436]. They are often
____________
^1 http://en.wikipedia.org/wiki/Evolution_strategy [accessed 2007-07-03]
treated as endogenous strategy parameters which can directly be stored in the individual records p and evolve along with them [302]. The idea of self-adaptation can be justified with the following thoughts. Assume we have a Hill Climbing algorithm which uses a fixed mutation step size σ to modify the (single) genotype it currently has under investigation.
1. If a large value is used for σ, the Hill Climber will initially quickly find interesting areas in the search space. However, due to its large step size, it will often jump in and out of smaller, highly fit areas without being able to investigate them thoroughly.
2. If a small value is used for σ, the Hill Climber will progress very slowly. However, once it finds the basin of attraction of an optimum, it will be able to trace that optimum and deliver solutions.
It thus makes sense to adapt the step width σ of the mutation operation. Starting with a larger step size, we may reduce it while approaching the global optimum, as we will lay down in Section 30.5. The parameters μ and λ, on the other hand, are called exogenous parameters and are usually kept constant during the evolutionary optimization process [302].
We can describe the idea of endogenous parameters for governing the search step width as follows. Evolution Strategies are Evolutionary Algorithms which usually add the step length vector σ as endogenous information to the individuals. This vector undergoes selection, recombination, and mutation together with the individual. The idea is that the EA selects individuals which are good. By doing so, it also selects the step widths that led to their creation. Such a step width can be considered to be good since it was responsible for creating a good individual. Bad step widths, on the other hand, will lead to the creation of inferior candidate solutions and thus perish together with them during selection. In other words, the adaptation of the step length of the search operations is governed by the same selection and reproduction mechanisms which are applied in the Evolutionary Algorithm to find good solutions. Since we expect these mechanisms to work for this purpose, it makes sense to also assume that they can do well to govern the adaptation of the search steps. We can further expect that initially, larger steps are favored by the evolution, which help the algorithm to explore wide areas in the search space. Later, when the basin(s) of attraction of the optimum/optima are reached, only small search steps can lead to improvements in the objective values of the candidate solutions. Hence, individuals created by search operations which perform small modifications will be selected. These individuals will hold endogenous information which leads to small step sizes. Mutation and recombination will lead to the construction of endogenous information which favors smaller and smaller search steps as the optimization process converges.
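One common concrete realization of this idea (an assumption here, not a listing from this book; the class and method names are ours) is the log-normal self-adaptation rule: the individual's own step size σ is mutated first, by multiplication with e^(τ·N(0,1)), and the new σ is then used to perturb the genotype, mirroring the order motivated above.

```java
import java.util.Random;

/** Sketch of log-normal self-adaptation: an individual carries its own
    step size sigma as endogenous strategy parameter. */
public class SelfAdaptiveIndividual {
  final double[] g;   // the genotype
  final double sigma; // the endogenous step size

  public SelfAdaptiveIndividual(double[] g, double sigma) {
    this.g = g;
    this.sigma = sigma;
  }

  /** Create a mutated copy: first mutate sigma, then use the NEW sigma
      to mutate the genotype, so that selection later judges the step
      size by the genotype it actually produced. */
  public SelfAdaptiveIndividual mutate(Random rnd) {
    double tau = 1.0 / Math.sqrt(2.0 * g.length); // a commonly used learning rate
    double childSigma = sigma * Math.exp(tau * rnd.nextGaussian());
    double[] childG = new double[g.length];
    for (int i = 0; i < g.length; i++) {
      childG[i] = g[i] + childSigma * rnd.nextGaussian();
    }
    return new SelfAdaptiveIndividual(childG, childSigma);
  }
}
```

Since σ is multiplied by a positive factor, it can shrink toward zero as the population converges but can never become negative.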
30.2 Algorithm
In Algorithm 30.1 (and in the corresponding Java implementation in Listing 56.17 on page 760) we present the basic single-objective Evolution Strategy algorithm as specified by Beyer and Schwefel [302]. The algorithm can facilitate the two basic strategies (μ/ρ, λ) (Paragraph 28.1.4.3.6) and (μ/ρ + λ) (Paragraph 28.1.4.3.7). These general strategies also encompass the (μ, λ) = (μ/1, λ) and (μ + λ) = (μ/1 + λ) population treatments.
The algorithm starts with the creation of a population of λ random individuals in Line 3, maybe by using a procedure similar to the Java code given in Listing 56.24. In each generation, first a genotype-phenotype mapping may be performed (Line 6) and then the objective function is evaluated in Line 7.
In every generation (except the first one with t = 1), there are λ individuals in the population pop. We therefore have to select μ from them into the mating pool mate. The way in which this selection is performed depends on the type of the Evolution Strategy. In the (μ/ρ, λ) strategy, only the λ individuals of generation t in pop(t) are considered (Line 13). If the Evolution Strategy is a (μ/ρ + λ) strategy, then also the previously selected individuals
Algorithm 30.1: X ← generalES(f, μ, λ, ρ)

Input: f: the objective/fitness function
Input: μ: the mating pool size
Input: λ: the number of offspring
Input: ρ: the number of parents
Input: [implicit] strategy: the population strategy, either (μ/ρ, λ) or (μ/ρ + λ)
Data: t: the generation counter
Data: pop: the population
Data: mate: the mating pool
Data: P: a set of parents, the pool for a single offspring
Data: p_n: a single offspring
Data: g', w': temporary variables
Data: continue: a Boolean variable used for termination detection
Output: X: the set of the best elements found

 1 begin
 2   t ← 1
 3   pop(t = 1) ← createPop(λ)
 4   continue ← true
 5   while continue do
 6     pop(t) ← performGPM(pop(t), gpm)
 7     pop(t) ← computeObjectives(pop(t), f)
 8     if terminationCriterion() then
 9       continue ← false
10     else
11       if t > 1 then
12         if strategy = (μ/ρ, λ) then
13           mate(t) ← truncationSelection_w(pop(t), f, μ)
14         else
15           mate(t) ← truncationSelection_w(pop(t) ∪ mate(t − 1), f, μ)
16       else
17         mate(t) ← pop(t)
18       pop(t + 1) ← ()
19       for i ← 1 up to λ do
20         P ← selection_w(mate(t), f, ρ)
21         p_n ← ∅
22         g' ← recombine_G(p.g ∀p ∈ P)
23         w' ← recombine_W(p.w ∀p ∈ P)
24         p_n.w ← mutate_W(w')
25         p_n.g ← mutate_G(g', p_n.w)
26         pop(t + 1) ← addItem(pop(t + 1), p_n)
27       t ← t + 1
28   return extractPhenotypes(extractBest(pop(t) ∪ pop(t − 1) ∪ . . . , f))
from mate(t − 1) can be selected. Either way, the selection method chosen for this usually is truncation selection without replacement, i.e., each candidate solution can be selected at most once.
After the mating pool mate has been filled, the new individuals can be built. In a (μ/ρ +, λ) strategy, there are ρ parents for each offspring. For each of the λ offspring p_n to be constructed, we therefore first select ρ parents into a parent pool P in Line 20. This selection is usually a uniform random process which does not consider the objective values. Furthermore, again a selection method without replacement is generally used.
The genotype p_n.g of the offspring p_n is then created by recombining the genotypes of all of the ρ parents with an operator recombine_G (Line 22). This genotype, a vector, may, for instance, be the arithmetic mean of all parent genotype vectors. The endogenous information p_n.w ∈ W to be stored in p_n is created in a similar fashion with the operator recombine_W in Line 23. p_n.w could be the covariance matrix of all parent genotype vectors.
In the next step, Line 24, this information is modified, i.e., mutated. The mutated information record is then used to mutate the newly created genotype in Line 25 with the operator mutate_G. Here, a random number normally distributed according to the covariance stored in p_n.w could be added to p_n.g, for example. The order of first mutating the information record and then the genotype is important for the selection round in the following generation, since we want information records which create good individuals to be selected. However, if the information was mutated after the genotype was created, it would not be clear whether its features are still good after mutation; it might have become bad.
Finally, the new individual can be added to the next population pop(t + 1) in Line 26 and a new generation can begin. The algorithm should, during all the iterations, keep track of the best candidate solutions found so far. In single-objective optimization, most often a single result will be returned, but even then, a whole set X of equally good candidate solutions may be discovered. The optimizer can maintain such a set during the generations by using an archive and pruning techniques. For the sake of simplicity, we sketched that the best solutions found in all populations over all generations would be returned in Line 28, which symbolizes such bookkeeping. Of course, no one would implement it like this in reality, but in order to keep the algorithm definition a bit smaller and to not have too many variables, we chose this notation.
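To make the control flow concrete, here is a deliberately simplified, self-contained sketch of such an algorithm in Java. It is not the book's Listing 56.17: it hard-codes the sphere function as objective, uses dominant recombination with ρ = 2, draws parents with replacement, keeps the mutation step size σ fixed (no endogenous parameters), and lets offspring compete with the mating pool in a (μ + λ)-style survival scheme. All names are ours.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.Random;

/** A minimal (mu + lambda)-style Evolution Strategy sketch on the sphere function. */
public class SimpleES {
  static final Random RND = new Random(42);

  /** Objective: the sphere function f(x) = sum of x_i^2, to be minimized. */
  static double f(double[] x) {
    double s = 0;
    for (double v : x) s += v * v;
    return s;
  }

  static double[] minimize(int n, int mu, int lambda, double sigma, int generations) {
    // initial population of lambda random individuals in [-1, 1]^n
    double[][] pop = new double[lambda][n];
    for (double[] g : pop)
      for (int i = 0; i < n; i++) g[i] = RND.nextDouble() * 2 - 1;

    for (int t = 0; t < generations; t++) {
      // truncation selection: keep the mu best individuals as mating pool
      Arrays.sort(pop, Comparator.comparingDouble(SimpleES::f));
      double[][] mate = Arrays.copyOf(pop, mu);

      // next population: the mating pool plus lambda new offspring
      double[][] next = new double[mu + lambda][];
      System.arraycopy(mate, 0, next, 0, mu);
      for (int k = 0; k < lambda; k++) {
        // dominant recombination of rho = 2 uniformly chosen parents
        double[] p1 = mate[RND.nextInt(mu)], p2 = mate[RND.nextInt(mu)];
        double[] child = new double[n];
        for (int i = 0; i < n; i++) {
          child[i] = (RND.nextBoolean() ? p1 : p2)[i];
          child[i] += sigma * RND.nextGaussian(); // mutation with fixed sigma
        }
        next[mu + k] = child;
      }
      pop = next;
    }
    Arrays.sort(pop, Comparator.comparingDouble(SimpleES::f));
    return pop[0]; // best candidate solution found
  }

  public static void main(String[] args) {
    double[] best = minimize(3, 5, 20, 0.1, 200);
    System.out.println("best objective value: " + f(best));
  }
}
```

Because the mating pool survives into the next population, the best objective value can never get worse from one generation to the next.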
30.3 Recombination
The standard recombination operators in Evolution Strategy basically resemble generaliza-
tions of uniform crossover and average crossover to 1 parents for (/
+
,
) strategies.
If > 2, we speak of multi-recombination.
30.3.1 Dominant Recombination
Dominant (discrete) recombination, as specied in Algorithm 30.2 and implemented in List-
ing 56.32 on page 798, is a generalization of uniform crossover (see Section 29.3.4.4 on
page 337) to parents. Like in uniform crossover, for the i
th
locus in the child chro-
mosome g

, we randomly pick an allele from the same locus from any of the parents
g
p,1
, g
p,2
, . . . , g
p,
.
30.3.2 Intermediate Recombination
The intermediate recombination operator can be considered to be an extension of average
crossover (discussed in Section 29.3.4.5 on page 337) to parents. Here, the value of the i
th
gene of the ospring g

is the arithmetic mean of the alleles of genes at the same locus in


the parents. We specify this procedure as Algorithm 30.3 and implement it in Listing 56.33
on page 799.
Algorithm 30.2: g' ← recombineDiscrete_ρ(P)

Input: P: the list of parent individuals, len(P) = ρ
Data: i: a counter variable
Data: p: a parent individual
Output: g': the offspring of the parents

1 begin
2   for i ← 0 up to n−1 do
3     p ← P[⌊randomUni[0, ρ)⌋]
4     g'[i] ← p.g[i]
5   return g'
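A minimal Java sketch of dominant recombination for real-vector genotypes (the class and method names are ours, not the book's Listing 56.32):

```java
import java.util.Random;

/** Sketch of dominant (discrete) recombination, cf. Algorithm 30.2:
    each gene of the offspring is copied from a uniformly chosen parent. */
public class DominantRecombination {
  public static double[] recombine(double[][] parents, Random rnd) {
    int n = parents[0].length;
    double[] child = new double[n];
    for (int i = 0; i < n; i++) {
      // pick one of the rho parents uniformly at random for locus i
      child[i] = parents[rnd.nextInt(parents.length)][i];
    }
    return child;
  }
}
```

Every allele of the child is therefore an allele that already exists in one of the parents; no new values are created.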
Algorithm 30.3: g' ← recombineIntermediate_ρ(P)

Input: P: the list of parent individuals, len(P) = ρ
Data: i, j: counter variables
Data: s: the sum of the values
Data: p: a parent individual
Output: g': the offspring of the parents

1 begin
2   for i ← 0 up to n−1 do
3     s ← 0
4     for j ← 0 up to ρ−1 do
5       p ← P[j]
6       s ← s + p.g[i]
7     g'[i] ← s/ρ
8   return g'
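A minimal Java sketch of intermediate recombination (the class and method names are ours, not the book's Listing 56.33):

```java
/** Sketch of intermediate recombination, cf. Algorithm 30.3: each gene of
    the offspring is the arithmetic mean over all rho parents. */
public class IntermediateRecombination {
  public static double[] recombine(double[][] parents) {
    int n = parents[0].length;
    double[] child = new double[n];
    for (int i = 0; i < n; i++) {
      double s = 0;
      for (double[] p : parents) s += p[i]; // sum of the alleles at locus i
      child[i] = s / parents.length;        // arithmetic mean over the rho parents
    }
    return child;
  }
}
```

In contrast to dominant recombination, this operator does create new allele values: the child lies in the centroid of its parents.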
30.4 Mutation

30.4.1 Using Normal Distributions

[Figure 30.1: Sampling Results from Normal Distributions. The figure contrasts (A) a set of selected parents, from which a mean vector and the (co)variances along the axes G0 and G1 are derived, with the point clouds produced by (B) a single univariate normal distribution, (C) n univariate normal distributions with one standard deviation per axis, and (D) a multivariate normal distribution whose covariance matrix C also captures the correlation between the axes.]

If Evolution Strategies are applied to real-valued (vector-based) search spaces G = R^n, (recombined intermediate) genotypes g' are often mutated by using normally distributed random numbers N(μ, σ²). Normal distributions have the advantage that (a) small changes are preferred, since values close to the expected value μ are the most likely ones, but (b) basically, all numbers have a non-zero probability of being drawn, i.e., all points in the search space are adjacent with ε = 0 (see Definition D4.10 on page 85) and, finally, (c) the dispersion of the distribution can easily be increased/decreased by increasing/decreasing the standard deviation σ. A univariate normal distribution thus has two parameters, the mean μ and the (scalar) standard deviation σ.
If a real-valued, intermediate genotype g' ∈ R is mutated, we want the possible results to be distributed around this genotype. Therefore, we have two equivalent options: either we draw the numbers around the mean μ = g', i.e., use randomNorm(g', σ²), or we add a normal distribution with mean 0 to the genotype, i.e., do g' + randomNorm(0, σ²). For a multi-dimensional case, where the genotypes are vectors g' ∈ R^n, the procedure is exactly the same.
For the parameter σ of the normal distribution, however, things are a little bit more complicated. Usually, this value is stored as an endogenous strategy parameter and adapted over time.
Basically, there exist three simple methods for defining the second parameter of the normal distribution(s) for mutation. They are illustrated exemplarily in Figure 30.1. In this figure, we assume that the new genotype g' is the mean vector of a set of selected parent individuals (A) and we assume that the standard deviations are derived from these parents as well. We illustrate the two-dimensional case where the parents which have been selected based on their fitness reside in a (rotated) ellipse. Such a situation, where the selected individuals form a shape in the search space which is not aligned with the main axes, is quite common.
30.4.1.1 Single Normal Distribution
The simplest way to mutate a vector g' ∈ R^n with a normal distribution is given in Algorithm 30.4 and implemented in Listing 56.25 on page 786. Here, the endogenous information is a scalar number w = σ. For each dimension i ∈ 0..n−1, a random value normally distributed with N(0, σ²) is drawn and added to g'[i].

Algorithm 30.4: g ← mutate_{G,σ}(g', w = σ)

Input: g' ∈ R^n: the input vector
Input: σ ∈ R: the standard deviation of the mutation
Data: i: a counter variable
Output: g ∈ R^n: the mutated version of g'

1 begin
2   for i ← 0 up to n−1 do
3     g[i] ← g'[i] + σ · randomNorm(0, 1)   // ≡ g[i] ← randomNorm(g'[i], σ²)
4   return g
As illustrated in part (B) of Figure 30.1, this operation creates an unbiased distribution of possible genotypes g. The mutants are symmetrically distributed around g'. The distribution is isotropic, i.e., the surfaces of constant density form concentric spheres around g'.
The advantage of this mutation operator is that it only needs a single, scalar endogenous parameter, w = σ. The drawback is that the resulting genotype distribution is always isotropic and cannot reflect different scales along the axes. In Example E28.11 on page 271 we pointed out that the influence of the genes of a genotype on the objective values, i.e., the influence of the dimensions of the search space, may differ largely. Furthermore, it may even change depending on which area of the search space is currently investigated by the optimization process.
Figure 30.1 is the result of an experiment where we illustrate the distribution of a sampled point cloud. For the normal distribution in part (B), we chose a standard deviation σ which equals the mean of the standard deviations along both axes G0 and G1.
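A minimal Java sketch of this isotropic mutation (the class and method names are ours, not the book's Listing 56.25); it uses the equivalence g'[i] + σ·N(0, 1) ≡ N(g'[i], σ²) noted above:

```java
import java.util.Random;

/** Sketch of mutation with normally distributed offsets, cf. Algorithm 30.4:
    the same scalar sigma is used for every dimension (isotropic mutation). */
public class GaussianMutation {
  public static double[] mutate(double[] g, double sigma, Random rnd) {
    double[] res = new double[g.length];
    for (int i = 0; i < g.length; i++) {
      // add N(0, sigma^2) to each element of the genotype
      res[i] = g[i] + sigma * rnd.nextGaussian();
    }
    return res;
  }
}
```

The vector-σ variant of Algorithm 30.5 differs only in that sigma becomes an array and the loop uses sigma[i] instead of the single scalar.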
30.4.1.2 n Univariate Normal Distributions
By using n dierent standard deviations, i. e., a vector , we can scale the cloud of points
along the axes of the coordinate system as can be seen in part (C) of Figure 30.1. Algo-
rithm 30.5 illustrates how this can be done and Listing 56.25 on page 786 again provides a
366 30 EVOLUTION STRATEGIES
Algorithm 30.5: g ← mutate_{G,σ}(g⋆, w = σ)
Input: g⋆ ∈ Rⁿ: the input vector
Input: σ ∈ Rⁿ: the standard deviations of the mutation
Data: i: a counter variable
Output: g ∈ Rⁿ: the mutated version of g⋆

1 begin
2   for i ← 0 up to n−1 do
3     g[i] ← g⋆[i] + σ[i] · randomNorm(0, 1)   // Line 3 ≡ g[i] ← randomNorm(g⋆[i], σ[i]²)
4   return g
suitable implementation of the idea. Instead of sampling all dimensions from the same normal distribution N(0, σ²), we now use n distinct univariate normal distributions N(0, σ[i]²) with i ∈ 0..n−1. For Figure 30.1 part (C), we sampled points with the standard deviations of the parent points along the axes.
The endogenous parameter w = σ now has n dimensions as well. Even though the problem of axis scale has been resolved, we still cannot sample an arbitrarily rotated cloud of points: the main axes of the iso-density ellipsoids of the distribution are still always parallel to the main axes of the coordinate system.
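The axis-parallel variant differs from the isotropic one only in that each dimension has its own strength. A hypothetical Python sketch (names are ours, not from the book's listings):

```python
import random

def mutate_axis_parallel(g_star, sigmas):
    """Mutation with n univariate normal distributions (Algorithm 30.5):
    dimension i receives noise drawn from N(0, sigmas[i]^2), so the cloud
    of mutants can be stretched differently along each coordinate axis."""
    return [gi + si * random.gauss(0.0, 1.0)
            for gi, si in zip(g_star, sigmas)]
```

Setting one entry of the strength vector to zero freezes the corresponding dimension, which already shows why per-axis strengths are more expressive than a single scalar σ.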
30.4.1.3 n-dimensional Multivariate Normal Distribution
Algorithm 30.6 shows how to mutate vectors g⋆ by adding numbers which
Algorithm 30.6: g ← mutate_{G,M}(g⋆, w = M)
Input: g⋆ ∈ Rⁿ: the input vector
Input: M ∈ Rⁿˣⁿ: an (orthogonal) rotation matrix
Data: i, j: counter variables
Data: t: a temporary vector
Output: g ∈ Rⁿ: the mutated version of g⋆

1 begin
2   for i ← 0 up to n−1 do
3     t[i] ← randomNorm(0, 1)
4   g ← g⋆                       // overall: g ← g⋆ + M·t
5   for i ← 0 up to n−1 do
6     for j ← 0 up to n−1 do
7       g[i] ← g[i] + M[i, j] · t[j]
8   return g
follow normal distributions with rotated iso-density ellipsoids. With a matrix M we scale and rotate standard-normally distributed random vectors t = (randomNorm(0, 1), randomNorm(0, 1), . . . , randomNorm(0, 1))ᵀ. The result of the multiplication M·t is then shifted by g⋆.
We are thus able to, for instance, sample points with distributions similar to those of the selected parents, as sketched in Figure 30.1, part (D). The advantage of this sampling method is evident: we can create points which are distributed similarly to a selected cloud of parents and thus better represent the information gathered so far.
There are also three disadvantages: (a) First, the size of the endogenous parameter w increases to n(n+1)/2 if G ⊆ Rⁿ: the main diagonal of M has n elements, and the matrix elements above and below the diagonal are the same and thus need to be stored only once. (b) The second problem is that sampling now has a complexity of O(n²), as can easily be seen in Algorithm 30.6. (c) Finally, obtaining M is another interesting question. As it turns out, computing M has a higher complexity than using it for sampling. The matrix M can be constructed from the covariance matrix C of the individuals P selected for becoming parents. We therefore utilize Definition D53.41 on page 664, i. e., apply Equation 30.1:
C[i, j] = ( Σ_{p∈P} (p.g[i] − ḡ[i]) · (p.g[j] − ḡ[j]) ) / |P|    (30.1)

where ḡ[i] denotes the arithmetic mean of the i-th gene over all individuals p ∈ P.
We then can set, for example, the rotation and scaling matrix M to be the matrix of all the normalized and scaled (n-dimensional) Eigenvectors eᵢ(C) : i ∈ 1..n of the covariance matrix C. These vectors are normalized by dividing them by their length, i. e., by ‖eᵢ(C)‖₂. Then, they are scaled with the square roots of the corresponding Eigenvalues λᵢ(C). By multiplying a vector of n independent standard normal distributions with the matrix of normalized Eigenvectors, it is rotated to be aligned with the covariance matrix C. By scaling the Eigenvectors with the square roots √λᵢ(C) of the corresponding Eigenvalues, we stretch the standard normal distributions from spheres to ellipsoids with the same shape as described by the covariance matrix.
M = ( (√λᵢ(C) / ‖eᵢ(C)‖₂) · eᵢ(C) )   for i from 1 to n    (30.2)

i. e., the i-th column of M is the i-th Eigenvector, normalized and scaled.
If C is a diagonal matrix, M[i, j] = √(C[i, j]) ∀ i, j ∈ 0..n−1 can be used. Otherwise, computing the Eigenvectors and Eigenvalues is only easy for dimensions n ∈ {1, 2, 3} but becomes computationally intense for higher dimensions [1097].
If this last mutation operator is indeed applied with a matrix M directly derived from the covariance matrix C estimated from the parent individuals, the Evolution Strategy basically becomes a real-valued Estimation of Distribution Algorithm as introduced in Section 34.5 on page 442.
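For n = 2, where the Eigen decomposition is available in closed form, the construction of M according to Equation 30.2 and the subsequent sampling of Algorithm 30.6 can be sketched in Python. All function names are hypothetical; the closed-form 2×2 decomposition stands in for a general Eigen solver.

```python
import math
import random

def rotation_matrix_from_cov(c00, c01, c11):
    """Build M from a symmetric 2x2 covariance matrix C per Equation 30.2:
    the columns of M are the unit Eigenvectors of C, each scaled by the
    square root of its Eigenvalue, so that M M^T = C."""
    if abs(c01) < 1e-12:  # diagonal C: M[i][j] = sqrt(C[i][j])
        return [[math.sqrt(c00), 0.0], [0.0, math.sqrt(c11)]]
    mean = 0.5 * (c00 + c11)
    d = math.sqrt(0.25 * (c00 - c11) ** 2 + c01 ** 2)
    cols = []
    for lam in (mean + d, mean - d):      # the two Eigenvalues
        v = (lam - c11, c01)              # an Eigenvector for lam
        scale = math.sqrt(max(lam, 0.0)) / math.hypot(v[0], v[1])
        cols.append((scale * v[0], scale * v[1]))
    return [[cols[0][0], cols[1][0]],
            [cols[0][1], cols[1][1]]]

def mutate_correlated(g_star, M):
    """Algorithm 30.6: g = g_star + M t with t standard-normally distributed."""
    t = [random.gauss(0.0, 1.0) for _ in g_star]
    return [g_star[i] + sum(M[i][j] * t[j] for j in range(len(t)))
            for i in range(len(g_star))]
```

Since M Mᵀ = C, the sampled offspring cloud reproduces both the scales and the correlation of the parent cloud from which C was estimated.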
30.5 Parameter Adaptation
In Section 30.4, we discussed different methods to mutate genotypes g⋆. The mutation operators randomly picked points in the search space which are distributed around g⋆. The distribution chosen for this purpose depends on endogenous strategy parameters such as the scalar mutation strength σ, the strength vector σ, and the rotation matrix M.
One of the general, basic principles of metaheuristics is that the optimization process should converge to areas of the search space which contain highly-fit individuals. At the beginning, the best a metaheuristic algorithm can do is to uniformly sample the search space. The more information it acquires by evaluating the candidate solutions, the smaller the interesting area should become.
From this perspective, it makes sense to initially sample a wide area around the intermediate genotypes g⋆ with mutation. Step by step, however, as the information gathered increases, the dispersion of the mutation results should decrease. This change in dispersion can only be achieved by a change in the parameters controlling it, the endogenous strategy parameters of the ES. Thus, there must be a change in these parameters as well. Such a change of the parameter values is called adaptation [302].
30.5.1 The 1/5th Rule
One of the most well-known adaptation rules for Evolution Strategies is the 1/5th Rule which adapts the single parameter σ of the isotropic mutation operator introduced in Section 30.4.1.1. Since σ defines the dispersion of the mutation results, we can refer to it as mutation strength.
Beyer [300] and Rechenberg [2278] considered two key performance metrics in their works on (1+1)-ESs: the success probability P_S of the mutation operation and the progress rate φ describing the expected distance gain towards the optimum. Like Beyer [300] and Rechenberg [2278], let us discuss the behavior of these parameters on the Sphere function f_RSO1 discussed in Section 50.3.1.1 on page 581 for n → ∞.

f_RSO1(x = g) = Σᵢ₌₁ⁿ xᵢ²   with G = X ⊆ Rⁿ    (30.3)
The isotropic mutation operator samples points in a sphere around the intermediate genotype g⋆. Since we consider the Sphere function as benchmark, g⋆ will always lie on an iso-fitness sphere. All points which have a lower Euclidean distance to the coordinate center 0 than g⋆ have a better objective value and all points with a larger distance have a worse objective value.
Both P_S and φ depend on the value of σ. If σ is very small, the chance P_S of sampling a point g with a smaller norm ‖g‖₂ than g⋆ approaches 0.5. However, the expected distance gain for moving towards the optimum also becomes zero.

lim_{σ→0} P_S = 0.5    (30.4)
lim_{σ→0} φ = 0    (30.5)
If σ → +∞, on the other hand, the success probability P_S becomes zero since mutation will likely jump over the iso-fitness sphere of g⋆ even if it samples in the right direction. Since P_S is zero, the expected distance gain towards the optimum also disappears.

lim_{σ→+∞} P_S = 0    (30.6)
lim_{σ→+∞} φ = 0    (30.7)
In between these two extreme cases, i. e., for 0 < σ < +∞, lies an area where both φ > 0 and 0 < P_S < 0.5. The goal of adaptation is to have a high expected progress towards the global optimum, i. e., to maximize φ. If φ is large, the Evolution Strategy will converge quickly to the global optimum. σ is the parameter we can tune in order to achieve this goal. P_S can be estimated as feedback from the optimizer: the estimation of P_S is the number of times a mutated individual replaced its parent in the population divided by the total number of performed mutations. Beyer and Schwefel [302] and Rechenberg [2278] found theoretically that φ becomes maximal for values of P_S in [0.18, 0.28] for different mathematical benchmark functions. For larger as well as for lower values of P_S, φ begins to decline. As a compromise, they conceived the 1/5th Rule:

Definition D30.1 (1/5th Rule). In order to obtain nearly optimal (local) performance of the (1+1)-ES with isotropic mutation in real-valued search spaces, tune the mutation strength σ in such a way that the (estimated) success rate P_S is about 1/5 [302].
P_S is monotonously decreasing with rising σ, from lim_{σ→0} P_S = 0.5 to lim_{σ→+∞} P_S = 0. This means that if the success probability P_S is larger than 0.2, we need to increase the mutation strength σ in order to maintain a near-optimal progress towards the optimum. If, on the
Algorithm 30.7: x ← (1+1)-ES-1/5th(f, L, a, σ₀)
Input: f: the objective/fitness function
Input: L > 1: the generation limit for adaptation
Input: a ∈ (0, 1): the adaptation value
Input: σ₀ > 0: the initial mutation strength
Data: t: the generation counter
Data: s: a counter for successful mutations
Data: p⋆: the current individual
Data: pₙ: a single offspring
Data: P_S ∈ [0, 1]: the estimated success probability of mutation
Data: continue: a Boolean variable used for termination detection
Output: x: the best element found

 1 begin
 2   t ← 1
 3   s ← 0
 4   σ ← σ₀
 5   p⋆.g ← create()
 6   p⋆.x ← gpm(p⋆.g)
 7   pₙ ← p⋆
 8   continue ← true
 9   while continue do
10     pₙ.x ← gpm(pₙ.g)
11     if terminationCriterion() then
12       continue ← false
13     else
14       if f(pₙ.x) < f(p⋆.x) then
15         p⋆ ← pₙ
16         s ← s + 1
17       if (t mod L) = 0 then
18         P_S ← s/L
19         if P_S < 0.2 then σ ← σ · a
20         else if P_S > 0.2 then σ ← σ/a
21         s ← 0
22       pₙ.g ← mutate_{G,σ}(p⋆.g, σ)
23       t ← t + 1
24   return p⋆.x
other hand, the fraction of accepted mutations falls below 0.2, the step size is too large and must be reduced.
In Algorithm 30.7 we present a (1+1)-Evolution Strategy which implements the 1/5th Rule. The algorithm estimates the success probability P_S every L generations. If it falls below 0.2, i. e., if the mutation steps are too long, the mutation strength σ is multiplied with a factor a ∈ (0, 1), i. e., reduced. If the mutations are too weak and P_S > 0.2, the reverse operation is performed and the mutation strength is divided by a.
Beyer and Schwefel [302] propose that if the dimension n of the search space G ⊆ Rⁿ is sufficiently large (n ≥ 30), setting L = n would be a good choice. Schwefel [2435] suggests to keep a in 0.85 < a < 1 for good results.
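The whole scheme fits into a short Python sketch. Function names, the initialization range, and the evaluation budget below are our own illustrative assumptions, not part of the original algorithm.

```python
import random

def one_plus_one_es(f, n, sigma0=1.0, L=None, a=0.85, budget=5000):
    """Sketch of Algorithm 30.7: a (1+1)-ES that adapts the mutation
    strength sigma with the 1/5th Rule every L generations."""
    L = L or n                   # L = n, as suggested for larger n
    sigma, s = sigma0, 0         # mutation strength, success counter
    parent = [random.uniform(-5.0, 5.0) for _ in range(n)]
    fp = f(parent)
    for t in range(1, budget + 1):
        child = [x + sigma * random.gauss(0.0, 1.0) for x in parent]
        fc = f(child)
        if fc < fp:              # offspring replaces its parent
            parent, fp = child, fc
            s += 1
        if t % L == 0:           # estimate P_S and adapt sigma
            if s / L < 0.2:
                sigma *= a       # too many failures: shorter steps
            elif s / L > 0.2:
                sigma /= a       # too many successes: longer steps
            s = 0
    return parent, fp

sphere = lambda x: sum(xi * xi for xi in x)   # f_RSO1, Equation 30.3
```

On the Sphere function, this adaptation keeps the success rate near 1/5 and yields the geometric convergence predicted by the theory.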
30.5.2 Endogenous Parameters
In the introductory Section 30.1 as well as in the beginning of this section, we stated that some of the control parameters of Evolution Strategies, such as the mutation strength, are endogenous. So far, we gave little evidence why this actually is the case, what implications this design has, and what it is actually good for. If we look back at the 1/5th Rule, we find three drawbacks.
1. First, it only covers one special case. It is suitable for (1+1)-Evolution Strategies and requires a certain level of causality in the search space. If the fitness landscape is too rugged (see Chapter 14 on page 161), this adaptation mechanism may be misled by the information gained from sampling.
2. Second, it adapts only one single strategy parameter σ and thus cannot support more sophisticated mutation operators such as those discussed in Sections 30.4.1.2 and 30.4.1.3.
3. Finally, the adapted parameter holds for the complete population. It does not allow the optimization algorithm to trace down different local or global optima with potentially different topology. (Which is not possible with a (1+1)-ES anyway, but could be done with an extension of the rule to (μ+λ)-Evolution Strategies.)
In order to find a more general way to perform self-adaptation, the control parameters are considered to be endogenous information of each individual in the population. This step leads to three decisive changes in the adaptation process:
1. Instead of a global parameter set, the parameters are now local,
2. undergo selection together with the individuals owning them, and
3. are processed by the reproduction operations.
As can be seen in the basic Evolution Strategy algorithm given in Algorithm 30.1 on page 361, the endogenous strategy parameters p.w are responsible for the final individual creation step (mutation). They are part of the individual p whose genotype p.g they helped building. If the genotype p.g has a corresponding phenotype p.x = gpm(p.g) with a good objective value f(p.x), it has a high chance of surviving the selection step. In this case, it will take the endogenous parameters p.w which led to its creation with it. On the other hand, if p.x has bad characteristics, it will perish and doom its genotype p.g and the control parameters p.w as well. Mutation strength settings which were involved in the successful production of good offspring will thus be selected and may lead to the creation of more good candidate solutions.
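This mechanism can be made concrete with a hypothetical Python sketch of one generation of a self-adaptive (μ, λ)-ES: each individual is a pair of genotype and its own strength σ, the strength is mutated before it is used, and it survives selection only together with the genotype it produced. The log-normal strength update used here is the one discussed later in this section; all names are our own.

```python
import math
import random

def self_adaptive_step(population, f, lam, tau):
    """One generation of a simple (mu, lambda)-ES in which each individual
    carries its own endogenous mutation strength sigma.  sigma is varied
    first, then applied to the genotype, and both are selected together."""
    mu = len(population)
    offspring = []
    for _ in range(lam):
        g, sigma = random.choice(population)             # pick a parent
        sigma = sigma * math.exp(tau * random.gauss(0.0, 1.0))
        g = [x + sigma * random.gauss(0.0, 1.0) for x in g]
        offspring.append((g, sigma))
    offspring.sort(key=lambda ind: f(ind[0]))            # comma selection
    return offspring[:mu]
```

Strength values that produced poor offspring vanish with those offspring; no explicit success statistics have to be kept, unlike in the 1/5th Rule.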
30.5.2.1 Recombining the Mutation Strength (Vectors)
Beyer and Schwefel [302] state that mutating the strategy parameters can lead to uctuations
which slow down the optimization process. The similarity-preserving eect of intermediate
recombination (see Section 30.3.2 on page 362 and Section 29.5.6 on page 346) can work
against this phenomenon, especially if many parents are used. For recombining the mutation
strengths, Beyer and Schwefel [302] hence propose to use Algorithm 30.3 on page 363 as well.
30.5.2.2 Mutating the Mutation Strength
The mutation of the single strategy parameter σ, the mutation strength, can be achieved by multiplying it with a positive number ξ. If 0 < ξ < 1, the strength decreases and if 1 < ξ, the mutation will become more volatile. The interesting question is how to create this number. Rechenberg's Two-Point Rule [2279] sets ξ = α with 50% probability and to 1/α otherwise, where α > 1 is a fixed exogenous strategy parameter. We can then define the operation mutate_{W,σ} (for the recombined mutation strength σ⋆) given in Line 24 of Algorithm 30.1 on page 361 according to Equation 30.8.
mutate_{W,σ}(σ⋆) = { α·σ⋆   if randomUni[0, 1) < 0.5
                   { σ⋆/α   otherwise                    (30.8)

α = 1 + τ    (30.9)
According to Beyer [297], nearly optimal behavior of the Evolution Strategy can be achieved if α is set according to Equation 30.9. The meaning of τ will be discussed below and values for it will be given in Equation 30.11. The interesting thing about Equation 30.8 is that it creates an extremely limited neighborhood of σ-values if we extend the concept of adjacency from Definition D4.10 on page 85 to the space of endogenous information W. As a matter of fact, only the values σ₀·αⁱ : i ∈ Z can be reached at all if the mutation strength is initialized with σ₀. Only the direct predecessor and successor of an element in this series can be found in one step by mutation.
The ε = 0 adjacency for the strategy parameter σ can be extended to the whole R⁺ by setting the multiplication parameter to e raised to the power of a normally distributed number. Since all real numbers are ε = 0-adjacent under a normal distribution, all values in R⁺ are adjacent under e^N(0,1).
mutate_{W,σ}(σ⋆) = σ⋆ · e^(τ · randomNorm(0,1))    (30.10)

τ ∝ 1/√n    (30.11)
The exogenous parameter τ is used to scale the standard deviation of the random numbers. Beyer and Schwefel [302] state that it should be set to τ = 1/√n for problems with G ⊆ Rⁿ. If the fitness landscape is multimodal, τ = 1/√(2n) can be used.
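The two strength-mutation rules side by side, as a hedged Python sketch (function names are ours): the Two-Point Rule of Equation 30.8 only reaches the discrete ladder of values σ₀·αⁱ, while the log-normal rule of Equation 30.10 can reach any positive strength.

```python
import math
import random

def mutate_sigma_two_point(sigma, alpha):
    """Rechenberg's Two-Point Rule (Equation 30.8): multiply or divide the
    mutation strength by a fixed alpha > 1 with equal probability."""
    return sigma * alpha if random.random() < 0.5 else sigma / alpha

def mutate_sigma_lognormal(sigma, tau):
    """Equation 30.10: multiply sigma by e^(tau * N(0,1)); the result is
    log-normally distributed and every positive strength is reachable."""
    return sigma * math.exp(tau * random.gauss(0.0, 1.0))
```

Both rules are symmetric in the logarithmic domain, i.e., increases and decreases of the strength are equally likely.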
30.5.2.3 Mutating the Strength Vector
Algorithm 30.8 presents a way to mutate vectors σ⋆ which represent axis-parallel mutation
Algorithm 30.8: σ ← mutate_{W,σ}(σ⋆)
Input: σ⋆ ∈ Rⁿ: the old mutation strength vector
Data: i: a counter variable
Data: u: a general log-normal scaling factor
Output: σ ∈ Rⁿ: the new mutation strength vector

1 begin
2   u ← e^(τ₀ · randomNorm(0,1))
3   σ ← 0
4   for i ← 0 up to n−1 do
5     σ[i] ← e^(τ · randomNorm(0,1)) · σ⋆[i] · u
6   return σ
strengths. σ⋆ may be the result from recombination of several parents. In Algorithm 30.8, we first compute a general scaling factor u which is log-normally distributed. Each field i of the new mutation strength vector then is the old value σ⋆[i] multiplied with another log-normally distributed random number and u. This idea has exemplarily been implemented in Listing 56.26 on page 787.
τ₀ = c / √(2n)    (30.12)

τ = c / √(2√n)    (30.13)
For σ ∈ Rⁿ, the settings in Equation 30.12 and Equation 30.13 are recommended by Beyer and Schwefel [302]. For a (10, 100)-Evolution Strategy, Schwefel [2437] further suggests to use c = 1.
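Algorithm 30.8 with the settings of Equations 30.12 and 30.13 can be sketched in a few lines of Python (the function name and defaults are our own):

```python
import math
import random

def mutate_sigma_vector(sigmas, c=1.0):
    """Sketch of Algorithm 30.8: a global log-normal factor is shared by
    all dimensions, and each entry additionally receives its own
    log-normal factor (tau0 and tau per Equations 30.12 and 30.13)."""
    n = len(sigmas)
    tau0 = c / math.sqrt(2.0 * n)             # Equation 30.12
    tau = c / math.sqrt(2.0 * math.sqrt(n))   # Equation 30.13
    u = math.exp(tau0 * random.gauss(0.0, 1.0))  # general scaling factor
    return [u * s * math.exp(tau * random.gauss(0.0, 1.0)) for s in sigmas]
```

The shared factor u lets the overall step size grow or shrink coherently, while the per-dimension factors re-balance the strengths relative to each other.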
30.6 General Information on Evolution Strategies
30.6.1 Applications and Examples
Table 30.1: Applications and Examples of Evolution Strategies.
Area References
Chemistry [1573]
Combinatorial Problems [292, 299, 1617, 2604]
Computer Graphics [3087]
Digital Technology [3042]
Distributed Algorithms and Systems [2683]
Engineering [303, 778, 1135, 1586, 1881, 1894, 2163, 2232, 2266, 2279,
2433, 2434, 2683, 2936, 2937, 2959, 3042, 3087]
Function Optimization [169, 3087]
Graph Theory [2683]
Logistics [292, 299, 1586, 1617, 2279]
Mathematics [227, 302, 520, 778, 1182, 1183, 1186, 1187, 1436, 2279]
Prediction [1044]
Software [2279]
30.6.2 Books
1. Handbook of Evolutionary Computation [171]
2. Genetic Algorithms and Evolution Strategy in Engineering and Computer Science: Recent
Advances and Industrial Applications [2163]
3. Self-Adaptive Heuristics for Evolutionary Computation [1617]
4. Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evo-
lution [2278]
5. Evolutionsstrategie '94 [2279]
6. Evolution and Optimum Seeking [2437]
7. Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Pro-
gramming, Genetic Algorithms [167]
8. The Theory of Evolution Strategies [300]
9. Cybernetic Solution Path of an Experimental Problem [2277]
10. Numerical Optimization of Computer Models [2436]
30.6.3 Conferences and Workshops
Table 30.2: Conferences and Workshops on Evolution Strategies.
GECCO: Genetic and Evolutionary Computation Conference
See Table 28.5 on page 317.
CEC: IEEE Conference on Evolutionary Computation
See Table 28.5 on page 317.
EvoWorkshops: Real-World Applications of Evolutionary Computing
See Table 28.5 on page 318.
PPSN: Conference on Parallel Problem Solving from Nature
See Table 28.5 on page 318.
WCCI: IEEE World Congress on Computational Intelligence
See Table 28.5 on page 318.
EMO: International Conference on Evolutionary Multi-Criterion Optimization
See Table 28.5 on page 319.
AE/EA: Artificial Evolution
See Table 28.5 on page 319.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
ICANNGA: International Conference on Adaptive and Natural Computing Algorithms
See Table 28.5 on page 319.
ICCI: IEEE International Conference on Cognitive Informatics
See Table 10.2 on page 128.
ICNC: International Conference on Advances in Natural Computation
See Table 28.5 on page 319.
SEAL: International Conference on Simulated Evolution and Learning
See Table 10.2 on page 130.
BIOMA: International Conference on Bioinspired Optimization Methods and their Applications
See Table 28.5 on page 320.
BIONETICS: International Conference on Bio-Inspired Models of Network, Information, and Com-
puting Systems
See Table 10.2 on page 130.
EUROGEN: Evolutionary Methods for Design Optimization and Control
See Table 29.2 on page 354.
FEA: International Workshop on Frontiers in Evolutionary Algorithms
See Table 28.5 on page 320.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
MICAI: Mexican International Conference on Artificial Intelligence
See Table 10.2 on page 131.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
WSC: Online Workshop/World Conference on Soft Computing (in Industrial Applications)
See Table 10.2 on page 131.
CIMCA: International Conference on Computational Intelligence for Modelling, Control and Au-
tomation
See Table 28.5 on page 320.
CIS: International Conference on Computational Intelligence and Security
See Table 28.5 on page 320.
CSTST: International Conference on Soft Computing as Transdisciplinary Science and Technology
See Table 10.2 on page 132.
EUFIT: European Congress on Intelligent Techniques and Soft Computing
See Table 10.2 on page 132.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
EvoCOP: European Conference on Evolutionary Computation in Combinatorial Optimization
See Table 28.5 on page 321.
GEC: ACM/SIGEVO Summit on Genetic and Evolutionary Computation
See Table 28.5 on page 321.
GEM: International Conference on Genetic and Evolutionary Methods
See Table 28.5 on page 321.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
SoCPaR: International Conference on SOft Computing and PAttern Recognition
See Table 10.2 on page 133.
Progress in Evolutionary Computation: Workshops on Evolutionary Computation
See Table 28.5 on page 321.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
AJWS: Australia-Japan Joint Workshop on Intelligent & Evolutionary Systems
See Table 28.5 on page 321.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
ASC: IASTED International Conference on Artificial Intelligence and Soft Computing
See Table 10.2 on page 134.
EvoNUM: European Event on Bio-Inspired Algorithms for Continuous Parameter Optimisation
See Table 28.5 on page 321.
FOCI: IEEE Symposium on Foundations of Computational Intelligence
See Table 28.5 on page 321.
GICOLAG: Global Optimization Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
IJCCI: International Conference on Computational Intelligence
See Table 28.5 on page 321.
IWNICA: International Workshop on Nature Inspired Computation and Applications
See Table 28.5 on page 321.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
NaBIC: World Congress on Nature and Biologically Inspired Computing
See Table 28.5 on page 322.
WPBA: Workshop on Parallel Architectures and Bioinspired Algorithms
See Table 28.5 on page 322.
EvoRobots: International Symposium on Evolutionary Robotics
See Table 28.5 on page 322.
ICEC: International Conference on Evolutionary Computation
See Table 28.5 on page 322.
IWACI: International Workshop on Advanced Computational Intelligence
See Table 28.5 on page 322.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
SEMCCO: International Conference on Swarm, Evolutionary and Memetic Computing
See Table 28.5 on page 322.
SSCI: IEEE Symposium Series on Computational Intelligence
See Table 28.5 on page 322.
30.6.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Genetic Programming and Evolvable Machines published by Kluwer Academic Publishers and
Springer Netherlands
Tasks T30
85. Implement the simple Evolution Strategy Algorithm 30.7 on page 369 in a program-
ming language of your choice.
[40 points]
86. How does the parameter adaptation method in Algorithm 30.7 on page 369 differ from the method applied in Algorithm 30.1 on page 361? What is the general difference between endogenous and exogenous parameters and the underlying ideas of adaptation?
[10 points]
87. How do the parameters μ, λ, and ρ of an Evolution Strategy relate to the population size ps and the mating pool size mps used in general Evolutionary Algorithms? You may use Section 28.1.4 on page 265 as a reference.
[10 points]
88. How does the n-ary dominant recombination algorithm in Evolution Strategies (see Section 30.3.1 on page 362) relate to the uniform crossover operator (see Section 29.3.4.4 on page 337) in Genetic Algorithms? What are the similarities? What are the differences?
[10 points]
89. Perform the same experiments as in Task 67 on page 238 (and Task 70 and Task 81 on page 357) with an Evolution Strategy using the Random Keys encoding given in Section 29.7 on page 349. In Section 57.1.2.1 on page 887, you can find the necessary code for that. How does the Evolution Strategy perform in comparison with the other algorithms and previous experimental results? What conclusions can we draw about the utility of the Random Keys genotype-phenotype mapping?
[10 points]
90. Apply an Evolution Strategy using the Random Keys encoding given in Section 29.7 on page 349 to the Traveling Salesman Problem instance att48 taken from [2288] and listed in Paragraph 57.1.2.2.1.
As already discussed in Example E2.2 on page 45, a solution to an n-city Traveling Salesman Problem can be represented by a permutation of length n−1. Such permutations can be created and optimized in a way very similar to what we already did with the bin packing problem a few times. Only the objective function needs to be re-defined. Additionally to the Evolution Strategy, also apply a simple Hill Climber to the same problem, using a method similar to what we did in Task 67 on page 238 and a representation according to Section 29.3.1.2 on page 329.
How does the Evolution Strategy perform in comparison with the Hill Climber on this problem? What conclusions can we draw about the utility of the Random Keys genotype-phenotype mapping? How close do the algorithms come to the optimal, shortest path for this problem which has the length 10 628?
[50 points]
Chapter 31
Genetic Programming
31.1 Introduction
The term Genetic Programming¹ (GP) [1220, 1602, 2199] has two possible meanings. (a) First, it is often used to subsume all Evolutionary Algorithms that have tree data structures as genotypes. (b) Second, it can also be defined as the set of all Evolutionary Algorithms that breed programs, algorithms, and similar constructs. In this chapter, we focus on the latter definition which still includes discussing tree-shaped genomes.
Figure 31.1: Genetic Programming in the context of the IPO model (a process, i. e., a running program, transforms input into output; samples of both are known, and the process itself is to be found with Genetic Programming).
The well-known input-processing-output model from computer science states that a running instance of a program uses its input information to compute and return output data. In Genetic Programming, usually some inputs and corresponding output data samples are known, can be produced, or simulated. The goal then is to find a program that connects them or that exhibits some kind of desired behavior according to the specified situations, as sketched in Figure 31.1.
31.1.1 History
The history of Genetic Programming [102] goes back to the early days of computer science
as sketched in Figure 31.2. In 1957, Friedberg [995] left the rst footprints in this area by
using a learning algorithm to stepwise improve a program. The program was represented as
a sequence of instructions for a theoretical computer called Herman [995, 996]. Friedberg
did not use an evolutionary, population-based approach for searching the programs the
idea of Evolutionary Computation had not been developed yet at that time, as you can see
1
http://en.wikipedia.org/wiki/Genetic_programming [accessed 2007-07-03]
Figure 31.2: Some important works on Genetic Programming illustrated on a time line: First Steps (Friedberg, 1958; Samuel, 1959); Evolutionary Programming (Fogel et al., 1966); tree-based (Forsyth, 1981); string-to-tree mappings (Cramer, 1985); Standard GP (Koza, 1988); grammars (Antonisse, 1990); strongly-typed GP (Montana, 1993); linear GP (Nordin, 1994; Brameier/Banzhaf, 2001); Grammatical Evolution (Ryan et al., 1995); GADS 1/2 (Paterson/Livesey, 1995); LOGENPRO (Wong/Leung, 1995); PDGP (Poli, 1996); Cartesian GP (Miller/Thomson, 1998); Gene Expression Programming (Ferreira, 2001); TAG3P (Nguyen, 2004); RBGP (Weise, 2007).
in Section 29.1.1 on page 325. Also, the computational capacity of the computers of that era was very limited and would not have allowed for experiments in program synthesis driven by a (population-based) Evolutionary Algorithm.
Around the same time, Samuel applied machine learning to the game of checkers and, by doing so, created the world's first self-learning program. In the future development section of his 1959 paper [2383], he suggested that effort could be spent into allowing the (checkers) program to learn scoring polynomials, an activity which would be similar to symbolic regression. Yet, in his 1967 follow-up work [2385], he could not report any progress on this issue.
The Evolutionary Programming approach for evolving Finite State Machines by Fogel et al. [945] (see also Chapter 32 on page 413) dates back to 1966. In order to build predictors, different forms of mutation (but no crossover) were used for creating offspring from successful individuals.
Fourteen years later, the next generation of scientists began to look for ways to evolve programs. New results were reported by Smith [2536] in his PhD thesis in 1980. Forsyth [972] evolved trees denoting fully bracketed Boolean expressions for classification problems in 1981 [972, 973, 975].
The mid-1980s were a very productive period for the development of Genetic Programming. Cramer [652] applied a Genetic Algorithm in order to evolve a program written in a subset of the programming language PL in 1985. This GA used a string of integers as genome and employed a genotype-phenotype mapping that recursively transformed them into program trees. We discuss this approach in ?? on page ??.
At the same time, the undergraduate student Schmidhuber [2420] also used a Genetic Algorithm to evolve programs at the Siemens AG. He re-implemented his approach in Prolog at the TU Munich in 1987 [790, 2420]. Hicklin [1230] and Fujiki [999] implemented reproduction operations for manipulating the if-then clauses of LISP programs consisting of single COND-statements. With this approach, Fujiki and Dickinson [998] evolved strategies for playing the iterated prisoner's dilemma game. Bickel and Bickel [309] evolved sets of rules which were represented as trees using tree-based mutation and crossover operators.
Genetic Programming became fully accepted at the end of this productive decade mainly because of the work of Koza [1590, 1591]. Koza studied many benchmark applications of Genetic Programming, such as learning of Boolean functions [1592, 1596], the Artificial Ant problem (as discussed in ?? on page ??) [1594, 1595, 1602], and symbolic regression [1596, 1602], a method for obtaining mathematical expressions that match given data samples (see Section 49.1 on page 531). Koza formalized (and patented [1590, 1598]) the idea of employing genomes purely based on tree data structures rather than string chromosomes as used in Genetic Algorithms. We will discuss this form of Genetic Programming in-depth in Section 31.3.
In the first half of the 1990s, researchers such as Banzhaf [205], Perkis [2164], Openshaw and Turton [2085, 2086], and Crepeau [653], on the other hand, began to evolve programs in a linear shape. Similar to programs written in machine code, the string genomes used in linear Genetic Programming (LGP) consist of single, simple commands. Each gene identifies an instruction and its parameters. Linear Genetic Programming is discussed in Section 31.4.
Also in the first half of the 1990s, Genetic Programming systems which are driven by a given grammar were developed. Here, the candidate solutions are sentences in a language defined by, for example, an EBNF. Antonisse [113], Stefanski [2605], Roston [2336], and Mizoguchi et al. [1920] were the first ones to delve into grammar-guided Genetic Programming (GGGP, G3P) which is outlined in Section 31.5.
31.2 General Information on Genetic Programming
31.2.1 Applications and Examples
Table 31.1: Applications and Examples of Genetic Programming.
Area References
Art [1234, 1449, 1453, 2746]
Chemistry [487, 798, 799, 958, 1684, 1685, 1909, 2729–2731, 2889, 3007, 3009]; see also Table 31.3
Combinatorial Problems [2338]
Computer Graphics [97, 581, 1195, 1591, 1596, 1726, 2646]; see also Table 31.3
Control [2541, 2566]; see also Tables 31.3 and 31.4
Cooperation and Teamwork [98, 1215, 1317, 1320, 1789, 1805, 2239, 2502]
Data Compression [336]; see also Table 31.3
Data Mining [76, 99, 350, 351, 481, 972, 973, 993, 994, 1021, 1022, 1422,
1593, 1597, 1721, 1726, 1867, 1985, 2109, 2513, 2728, 2819,
2854, 2855, 2996, 3080]; see also Tables 31.3 and 31.4
Databases [993, 2514, 2904]
Decision Making [2728]
Digital Technology [336, 337, 455, 581, 704, 869, 870, 1478, 1591, 1592, 1600, 1602, 1665, 1787, 2560, 2954]; see also Tables 31.3, 31.4, and 31.5
Distributed Algorithms and Systems [52, 76, 150, 543, 614, 615, 658, 798, 799, 1120, 1215, 1317, 1320, 1401, 1456, 1542, 1785, 1805, 1909, 2239, 2338, 2377, 2500, 2501, 2729–2731, 2904, 2905, 3007, 3009]; see also Table 31.3
Economics and Finances [455, 846, 1021, 1022, 1721, 1805, 2728, 2854]; see also Table 31.4
Engineering [52, 98, 271, 303, 337, 455, 581, 704, 869, 1120, 1195, 1234, 1317, 1320, 1456, 1478, 1542, 1591, 1592, 1600, 1602, 1607, 1608, 1613–1615, 1665, 1769, 1785, 1787, 1894, 1909, 1932, 2090, 2500, 2501, 2513, 2514, 2560, 2566, 2567, 2729, 2731, 2954, 3007]; see also Tables 31.3, 31.4, and 31.5
Games [998, 2328]
Graph Theory [52, 76, 1120, 1216, 1401, 1456, 1542, 1615, 1769, 1785, 1909, 2338, 2500–2502, 2729, 2731]; see also Table 31.3
Healthcare [350, 351]; see also Table 31.3
Mathematics [149, 455, 542, 581, 652, 846, 869, 924, 998, 1157, 1456, 1514,
1596, 1602, 1658, 1674, 1772, 1787, 1855, 1856, 1932, 2251,
2377, 2532, 2997]; see also Tables 31.3, 31.4, and 31.5
Multi-Agent Systems [98, 269, 270, 1317, 1320, 1789, 1805, 2239, 2240]
Multiplayer Games [98, 581]
Physics [487, 2568]
Prediction [351, 1021, 1721, 2728]; see also Table 31.3
Security [76, 658, 1785, 2090, 2513, 2514]; see also Table 31.3
Software [652, 704, 772, 790, 972, 998, 999, 1120, 1230, 1401, 1423, 1591, 1623, 1665, 2681, 2682, 2897, 2970]; see also Tables 31.3 and 31.4
Sorting [1610]; see also Table 31.5
Testing see Table 31.3
Wireless Communication [1615, 1769, 2502]; see also Table 31.3
31.2.2 Books
1. Advances in Genetic Programming II [106]
2. Handbook of Evolutionary Computation [171]
3. Advances in Genetic Programming [2569]
4. Genetic Algorithms and Genetic Programming at Stanford [1601]
5. Genetic Programming IV: Routine Human-Competitive Machine Intelligence [1615]
6. Dynamic, Genetic, and Chaotic Programming: The Sixth-Generation [2559]
7. Genetic Programming: An Introduction On the Automatic Evolution of Computer Programs and Its Applications [209]
8. Design by Evolution: Advances in Evolutionary Design [1234]
9. Linear Genetic Programming [394]
10. A Field Guide to Genetic Programming [2199]
11. Representations for Genetic and Evolutionary Algorithms [2338]
12. Automatic Quantum Computer Programming: A Genetic Programming Approach [2568]
13. Data Mining and Knowledge Discovery with Evolutionary Algorithms [994]
14. Evolutionäre Algorithmen [2879]
15. Foundations of Genetic Programming [1673]
16. Genetic Programming II: Automatic Discovery of Reusable Programs [1599]
31.2.3 Conferences and Workshops
Table 31.2: Conferences and Workshops on Genetic Programming.
GECCO: Genetic and Evolutionary Computation Conference
See Table 28.5 on page 317.
CEC: IEEE Conference on Evolutionary Computation
See Table 28.5 on page 317.
EvoWorkshops: Real-World Applications of Evolutionary Computing
See Table 28.5 on page 318.
EuroGP: European Workshop on Genetic Programming
History: 2011/04: Torino, Italy, see [2492]
2010/04: Istanbul, Turkey, see [886]
2009/04: Tübingen, Germany, see [2787]
2008/03: Naples, Italy, see [2080]
2007/04: València, Spain, see [856]
2006/04: Budapest, Hungary, see [607]
2005/03: Lausanne, Switzerland, see [1518]
2004/04: Coimbra, Portugal, see [1517]
2003/04: Colchester, Essex, UK, see [2360]
2002/04: Kinsale, Ireland, see [977]
2001/04: Lake Como, Milan, Italy, see [1903]
2000/04: Edinburgh, Scotland, UK, see [2198]
1999/05: Göteborg, Sweden, see [2196]
1998/04: Paris, France, see [210, 2195]
PPSN: Conference on Parallel Problem Solving from Nature
See Table 28.5 on page 318.
WCCI: IEEE World Congress on Computational Intelligence
See Table 28.5 on page 318.
EMO: International Conference on Evolutionary Multi-Criterion Optimization
See Table 28.5 on page 319.
GP: Annual Conference of Genetic Programming
History: 1998/06: Madison, WI, USA, see [1605, 1612]
1997/07: Stanford, CA, USA, see [1604, 1611]
1996/07: Stanford, CA, USA, see [1603, 1609]
AE/EA: Artificial Evolution
See Table 28.5 on page 319.
GPTP: Genetic Programming Theory and Practice
History: 2011/05: Ann Arbor, MI, USA, see [2753]
2010/05: Ann Arbor, MI, USA, see [2310]
2009/05: Ann Arbor, MI, USA, see [2309]
2008/05: Ann Arbor, MI, USA, see [2308]
2007/05: Ann Arbor, MI, USA, see [2575]
2006/05: Ann Arbor, MI, USA, see [2307]
2004/05: Ann Arbor, MI, USA, see [2089]
2003/05: Ann Arbor, MI, USA, see [2306]
1995/07: Tahoe City, CA, USA, see [2326]
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
ICANNGA: International Conference on Adaptive and Natural Computing Algorithms
See Table 28.5 on page 319.
ICCI: IEEE International Conference on Cognitive Informatics
See Table 10.2 on page 128.
ICNC: International Conference on Advances in Natural Computation
See Table 28.5 on page 319.
SEAL: International Conference on Simulated Evolution and Learning
See Table 10.2 on page 130.
BIOMA: International Conference on Bioinspired Optimization Methods and their Applications
See Table 28.5 on page 320.
BIONETICS: International Conference on Bio-Inspired Models of Network, Information, and Computing Systems
See Table 10.2 on page 130.
FEA: International Workshop on Frontiers in Evolutionary Algorithms
See Table 28.5 on page 320.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
MICAI: Mexican International Conference on Artificial Intelligence
See Table 10.2 on page 131.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
WSC: Online Workshop/World Conference on Soft Computing (in Industrial Applications)
See Table 10.2 on page 131.
CIMCA: International Conference on Computational Intelligence for Modelling, Control and Automation
See Table 28.5 on page 320.
CIS: International Conference on Computational Intelligence and Security
See Table 28.5 on page 320.
CSTST: International Conference on Soft Computing as Transdisciplinary Science and Technology
See Table 10.2 on page 132.
EUFIT: European Congress on Intelligent Techniques and Soft Computing
See Table 10.2 on page 132.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
EvoCOP: European Conference on Evolutionary Computation in Combinatorial Optimization
See Table 28.5 on page 321.
GEC: ACM/SIGEVO Summit on Genetic and Evolutionary Computation
See Table 28.5 on page 321.
GEM: International Conference on Genetic and Evolutionary Methods
See Table 28.5 on page 321.
GEWS: Grammatical Evolution Workshop
History: 2004/06: Seattle, WA, USA, see [2079]
2003/07: Chicago, IL, USA, see [2078]
2002/07: New York, NY, USA, see [2077]
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
SoCPaR: International Conference on SOft Computing and PAttern Recognition
See Table 10.2 on page 133.
Progress in Evolutionary Computation: Workshops on Evolutionary Computation
See Table 28.5 on page 321.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
AJWS: Australia-Japan Joint Workshop on Intelligent & Evolutionary Systems
See Table 28.5 on page 321.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
ASC: IASTED International Conference on Artificial Intelligence and Soft Computing
See Table 10.2 on page 134.
EvoNUM: European Event on Bio-Inspired Algorithms for Continuous Parameter Optimisation
See Table 28.5 on page 321.
FOCI: IEEE Symposium on Foundations of Computational Intelligence
See Table 28.5 on page 321.
GICOLAG: Global Optimization Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
IJCCI: International Conference on Computational Intelligence
See Table 28.5 on page 321.
IWNICA: International Workshop on Nature Inspired Computation and Applications
See Table 28.5 on page 321.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
NaBIC: World Congress on Nature and Biologically Inspired Computing
See Table 28.5 on page 322.
WPBA: Workshop on Parallel Architectures and Bioinspired Algorithms
See Table 28.5 on page 322.
EvoRobots: International Symposium on Evolutionary Robotics
See Table 28.5 on page 322.
ICEC: International Conference on Evolutionary Computation
See Table 28.5 on page 322.
IWACI: International Workshop on Advanced Computational Intelligence
See Table 28.5 on page 322.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
SEMCCO: International Conference on Swarm, Evolutionary and Memetic Computing
See Table 28.5 on page 322.
SSCI: IEEE Symposium Series on Computational Intelligence
See Table 28.5 on page 322.
31.2.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Genetic Programming and Evolvable Machines published by Kluwer Academic Publishers and
Springer Netherlands
31.3 (Standard) Tree Genomes
31.3.1 Introduction
Tree-based Genetic Programming (TGP), usually referred to as Standard Genetic Programming (SGP), is the most widespread Genetic Programming variant, both for historical reasons and because of its efficiency in many problem domains. Here, the genotypes are tree data structures. Generally, a tree can represent a rule set [1867], a mathematical expression [1602], a decision tree [1597], or even the blueprint of an electrical circuit [1608].
31.3.1.1 Function and Terminal Nodes
In tree-based Genetic Programming, there usually is a semantic distinction between inner nodes and leaf nodes of the genotypes. In Standard Genetic Programming, every node is characterized by a type i which defines how many child nodes it can have. We distinguish between terminal and non-terminal nodes (or node types).
Definition D31.1 (Terminal Node Types). The set Σ contains the node types i ∈ Σ which do not allow their instances t to have child nodes. The nodes t with typeOf(t) = i ∈ Σ are thus always leaf nodes.

typeOf(t) ∈ Σ ⟺ numChildren(t) = 0    (31.1)
Definition D31.2 (Non-Terminal Node Types). Each node type i from the set of non-terminal (function) node types N in a Genetic Programming system has a definite number k ∈ N₁ of children (k > 0). The nodes t which belong to a non-terminal node type typeOf(t) = i ∈ N thus are always inner nodes of a tree.

typeOf(t) ∈ N ⟹ (numChildren(t) = k) ∧ (k > 0)    (31.2)
In Section 56.3.1 on page 824, we provide simple Java implementations for basic tree node classes (Listing 56.48). In this implementation, each node belongs to a specific type (Listing 56.49). Such a node type does not only describe the number of children that its instance nodes can have. In our example implementation, we go one step further and also define the possible types of children a node can have by specifying a type set (Listing 56.50) for each possible child node location. We thus provide a fully-fledged Strongly-Typed Genetic Programming (STGP) implementation in the style of Montana [1930, 1931, 1932].
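The distinction between terminal and function node types can be made concrete in a few lines of Java. The following is a hypothetical simplification of the idea behind Listings 56.48 to 56.50 (all class and field names here are our own, not those of the book's listings): a node type fixes the arity of its instances, and terminal types are exactly those with arity zero.

```java
// Hypothetical sketch of typed tree nodes (not the book's actual listing):
// a NodeType fixes how many children its instance nodes must have, and
// terminal types are exactly those with arity zero (cf. Definitions D31.1/D31.2).
class NodeType {
    final String name;   // e.g. "+", "sin", "x"
    final int arity;     // 0 for terminal types, > 0 for function types
    NodeType(String name, int arity) { this.name = name; this.arity = arity; }
    boolean isTerminal() { return arity == 0; }
}

class TreeNode {
    final NodeType type;
    final TreeNode[] children;
    TreeNode(NodeType type, TreeNode... children) {
        if (children.length != type.arity)  // enforce numChildren(t) = k for type k
            throw new IllegalArgumentException("arity mismatch for " + type.name);
        this.type = type;
        this.children = children;
    }
    int numChildren() { return children.length; }
}
```

The arity check in the constructor enforces Equations 31.1 and 31.2 by construction: a node can only ever be instantiated with exactly the number of children its type prescribes.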
Example E31.1 (GP of Mathematical Expressions: Symbolic Regression).
Trees also provide a very intuitive representation for mathematical functions. This method
Figure 31.3: The formula e^(sin x) + 3 · |x|^0.5 expressed as a tree.
is called symbolic regression and is discussed in detail in Section 49.1 on page 531, but
it can serve here as a good example for the abilities of Genetic Programming. SGP has
initially been used by Koza to evolve them. Here, an inner node stands for a mathematical
operation and its child nodes are the parameters of the operation. Leaf nodes then are
terminal symbols like numbers or variables. With

1. a terminal set Σ = {x, e, R} where x is a variable, e is Euler's number, and R stands for a random-valued constant, and

2. a function set N = {+, −, ·, /, sin, cos, a^b} where a^b means a to the power of b,
we can already express and evolve a large number of mathematical formulas such as the one given in Figure 31.3. In this figure, the root node is a + operation. Since trees naturally comply with infix notation², this + would be the last operation which is evaluated when computing the value of the formula. Before this final step, first the value of its left child (e^(sin x)) and then the value of the right child (3 · |x|^0.5) will be determined. Evaluating a tree-encoded function requires a depth-first traversal of the tree where the value of a node can be computed after the values of all of its children have been calculated.
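The depth-first evaluation described above can be sketched as a small recursive Java method. This is an illustrative toy, not the book's implementation; the string-based operator encoding is our own simplification.

```java
// Hypothetical depth-first evaluation of an expression tree: the children are
// evaluated first, then the operator stored in the node is applied to their values.
class ExprNode {
    final String op;         // "+", "*", "sin", "pow", "const" or "x"
    final double constant;   // only used when op is "const"
    final ExprNode[] children;
    ExprNode(String op, double constant, ExprNode... children) {
        this.op = op; this.constant = constant; this.children = children;
    }
    double eval(double x) {
        switch (op) {
            case "const": return constant;
            case "x":     return x;
            case "+":     return children[0].eval(x) + children[1].eval(x);
            case "*":     return children[0].eval(x) * children[1].eval(x);
            case "sin":   return Math.sin(children[0].eval(x));
            case "pow":   return Math.pow(children[0].eval(x), children[1].eval(x));
            default: throw new IllegalStateException("unknown operator: " + op);
        }
    }
}
```

For the formula of Figure 31.3 one would additionally provide operators for e^y and |x|; the traversal scheme stays exactly the same.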
31.3.1.2 Similarity to High-Level Programming Languages
Trees are very (and deceptively) close to the natural structure of algorithms and programs. The syntax of most of the high-level programming languages, for example, leads to a certain hierarchy of modules and alternatives. Not only does this form normally constitute a tree, compilers even use tree representations internally.
Figure 31.4: The AST representation of algorithms/programs (the figure contrasts a schematic Java program createPop(s), its pseudocode form, and the corresponding abstract syntax tree).
When reading the source code of a program, compilers first split it into tokens³. Then

² http://en.wikipedia.org/wiki/Infix_notation [accessed 2010-08-21]
³ http://en.wikipedia.org/wiki/Lexical_analysis [accessed 2007-07-03]
they parse⁴ these tokens. Finally, an abstract syntax tree⁵ (AST) is created [1278, 1462].
The internal nodes of ASTs are labeled by operators and the leaf nodes contain the operands
of these operators. In principle, we can illustrate almost every⁶ program or algorithm as such an AST (see Figure 31.4). Tree-based Genetic Programming directly evolves individuals in this form.
It should, however, be pointed out that Genetic Programming is not suitable to evolve
programs such as the one illustrated in Figure 31.4. As we will discuss in ??, programs in
representations which we know from our everyday life as students or programmers exhibit
too much epistasis. Synthesizing them is thus not possible with GP.
31.3.1.3 Structure and Human-Readability
Another interesting aspect of the tree genome is that it has no natural role model. While Genetic Algorithms match their direct biological metaphor, the DNA, to some degree, Genetic Programming introduces completely new characteristics and traits.
Genetic Programming is one of the few techniques that are able to learn structures
of potentially unbounded complexity. It can be considered as more general than Genetic
Algorithms because it makes fewer assumptions about the shape of the possible solutions.
Furthermore, it often offers white-box solutions that are human-interpretable. Many other learning algorithms like artificial neural networks, for example, usually generate black-box outputs, which are highly complicated if not impossible to fully grasp [1853].
31.3.2 Creation: Nullary Reproduction
Before the evolutionary process in Standard Genetic Programming can begin, an initial
population composed of randomized individuals is needed. In Genetic Algorithms, a set
of random bit strings is created for this purpose. In Genetic Programming, random trees
instead of such one-dimensional sequences are constructed.
Normally, there is a maximum depth d̂ specified which the tree individuals are not allowed to surpass. Then, the creation operation will return only trees where the path between the root and the most distant leaf node is not longer than d̂. There are three basic ways of realizing the create() and createPop operations for trees which can be distinguished according to the depth of the produced individuals.
31.3.2.1 Full
The full method illustrated in Figure 31.5 creates trees where each (non-backtracking)
Figure 31.5: Tree creation by the full method.
⁴ http://en.wikipedia.org/wiki/Parse_tree [accessed 2007-07-03]
⁵ http://en.wikipedia.org/wiki/Abstract_syntax_tree [accessed 2007-07-03]
⁶ Excluding such algorithms and programs that contain jumps (the infamous goto) that would produce crossing lines in the flowchart (http://en.wikipedia.org/wiki/Flowchart [accessed 2007-07-03]).
Algorithm 31.1: g ← createGPFull(d̂, N, Σ)
Input: N: the set of function types
Input: Σ: the set of terminal types
Input: d̂: the maximum depth
Data: j: a counter variable
Data: i ∈ N ∪ Σ: the selected tree node type
Output: g ∈ G: a new genotype
1  begin
2    if d̂ > 1 then
3      i ← N[⌊randomUni[0, len(N))⌋]
4      g ← instantiate(i)
5      for j ← 1 up to numChildren(i) do
6        g ← addChild(g, createGPFull(d̂ − 1, N, Σ))
7    else
8      i ← Σ[⌊randomUni[0, len(Σ))⌋]
9      g ← instantiate(i)
10   return g
path from the root to the leaf nodes has exactly the length d̂. Algorithm 31.1 illustrates this process: For each node of a depth below d̂, the type is chosen from the non-terminal symbols (the functions N). A chain of function nodes is recursively constructed until the maximum depth minus one. If we reach d̂, of course, only leaf nodes (with types from Σ) can be attached.
31.3.2.2 Grow
The grow method, depicted in Figure 31.6 and specified in Algorithm 31.2, also creates trees

Figure 31.6: Tree creation by the grow method.
where each (non-backtracking) path from the root to the leaf nodes is not longer than d̂ but may be shorter. This is achieved by deciding randomly for each node whether it should be a leaf or not when it is attached to the tree. Of course, to nodes at depth d̂ − 1, only leaf nodes can be attached.
31.3.2.3 Ramped Half-and-Half
Koza [1602] additionally introduced a mixture method called ramped half-and-half, defined here in Algorithm 31.3. For each tree to be created, this algorithm draws a number r uniformly distributed between 2 and d̂: r = ⌊randomUni[2, d̂ + 1)⌋. Now either full or
Algorithm 31.2: g ← createGPGrow(d̂, N, Σ)
Input: N: the set of function types
Input: Σ: the set of terminal types
Input: d̂: the maximum depth
Data: j: a counter variable
Data: V = N ∪ Σ: the joint list of function and terminal node types
Data: i ∈ V: the selected tree node type
Output: g: a new genotype
1  begin
2    if d̂ > 1 then
3      V ← N ∪ Σ
4      i ← V[⌊randomUni[0, len(V))⌋]
5      g ← instantiate(i)
6      for j ← 1 up to numChildren(i) do
7        g ← addChild(g, createGPGrow(d̂ − 1, N, Σ))
8    else
9      i ← Σ[⌊randomUni[0, len(Σ))⌋]
10     g ← instantiate(i)
11   return g
Algorithm 31.3: g ← createGPRamped(d̂, N, Σ)
Input: N: the set of function types
Input: Σ: the set of terminal types
Input: d̂: the maximum depth
Data: r: the maximum depth chosen for the genotype
Output: g ∈ G: a new genotype
1  begin
2    r ← ⌊randomUni[2, d̂ + 1)⌋
3    if randomUni[0, 1) < 0.5 then return createGPGrow(r, N, Σ)
4    else return createGPFull(r, N, Σ)
grow is chosen to finally create a tree with the maximum depth r (in place of d̂). This method is often preferred since it produces an especially wide range of different tree depths and shapes and thus provides a great initial diversity. The Ramped Half-and-Half algorithm for Strongly-Typed Genetic Programming is implemented in Listing 56.42 on page 814.
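The three creation strategies can be contrasted in a compact, hypothetical Java sketch. To keep it short, the code computes only the depth of the tree that would be created (with arities drawn from an assumed function set) instead of building actual node objects; the recursion mirrors Algorithms 31.1 to 31.3.

```java
import java.util.Random;

// Hypothetical sketch of full, grow, and ramped half-and-half creation.
// Instead of building node objects, each method returns the depth of the tree
// it would create, which makes the difference between the strategies visible.
class TreeInit {
    static final int[] FUNCTION_ARITIES = {1, 2, 2}; // assumed function set, e.g. sin, +, *
    static final Random RND = new Random(42);

    // full: every (non-backtracking) path has exactly length maxDepth
    static int fullDepth(int maxDepth) {
        if (maxDepth <= 1) return 1;                 // must attach a terminal here
        int arity = FUNCTION_ARITIES[RND.nextInt(FUNCTION_ARITIES.length)];
        int deepest = 0;
        for (int j = 0; j < arity; j++)
            deepest = Math.max(deepest, fullDepth(maxDepth - 1));
        return 1 + deepest;
    }

    // grow: each node may randomly become a terminal early, so paths can be shorter
    static int growDepth(int maxDepth) {
        if (maxDepth <= 1 || RND.nextBoolean()) return 1;
        int arity = FUNCTION_ARITIES[RND.nextInt(FUNCTION_ARITIES.length)];
        int deepest = 0;
        for (int j = 0; j < arity; j++)
            deepest = Math.max(deepest, growDepth(maxDepth - 1));
        return 1 + deepest;
    }

    // ramped half-and-half: draw r uniformly from {2, ..., maxDepth},
    // then apply either grow or full with probability 0.5 each
    static int rampedDepth(int maxDepth) {
        int r = 2 + RND.nextInt(Math.max(1, maxDepth - 1));
        return RND.nextBoolean() ? growDepth(r) : fullDepth(r);
    }
}
```

Running fullDepth always yields exactly the maximum depth, while growDepth and rampedDepth produce the varying depths and shapes that make ramped half-and-half attractive for initial diversity.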
31.3.3 Node Selection
In most of the reproduction operations for tree genomes, in mutation as well as in recombination, certain nodes in the trees need to be selected. In order to apply mutation, for instance, we first need to find the node which is to be altered. For recombination, we need one node in each parent tree. These nodes are then exchanged. We introduce an operator selectNode for choosing these nodes.
Definition D31.3 (Node Selection Operator). The operator t_c = selectNode(t_r) chooses one node t_c from the tree whose root is given by t_r.
31.3.3.1 Uniform Selection
A good method for doing so could select all nodes t_c in the tree t_r with exactly the
Algorithm 31.4: t_c ← selectNode_uni(t_r)
Input: t_r: the (root of the) tree to select a node from
Data: b: a Boolean variable
Data: w: a value uniformly distributed in [0, nodeWeight(t_c))
Data: i: an index
Output: t_c: the selected node
1  begin
2    b ← true
3    t_c ← t_r
4    while b do
5      w ← ⌊randomUni[0, nodeWeight(t_c))⌋
6      if w ≥ nodeWeight(t_c) − 1 then
7        b ← false
8      else
9        i ← numChildren(t_c) − 1
10       while i ≥ 0 do
11         w ← w − nodeWeight(t_c.children[i])
12         if w < 0 then
13           t_c ← t_c.children[i]
14           i ← −1
15         else
16           i ← i − 1
17   return t_c
same probability, as done by the method selectNode_uni given in Algorithm 31.4. There, P(selectNode_uni(t_r) = t_c) = P(selectNode_uni(t_r) = t_n) holds for all nodes t_c and t_n in the tree with root t_r.
In order to achieve such behavior, we first define the weight nodeWeight(t_n) of a tree node t_n to be the total number of nodes in the subtree with t_n as root, i.e., the tree which contains t_n itself, its children, grandchildren, grand-grandchildren, and so on.

nodeWeight(t) = 1 + Σ_{i=0}^{numChildren(t)−1} nodeWeight(t.children[i])    (31.3)

Thus, the weight of the root of a tree is the number of all nodes in the tree and the weight of each of the leaves is exactly 1. In selectNode_uni, the probability of a node being selected in a tree with root t_r is thus 1/nodeWeight(t_r). We provide an example implementation of such a node selection method in Listing 56.41 on page 811.
A tree descent with probabilities different from those defined here may lead to unbalanced node selection probability distributions. Then, the reproduction operators will prefer accessing some parts of the trees while only very rarely altering the other regions. We could, for example, descend the tree by starting at the root t_r and return the current node with probability 0.5 or recursively go to one of its children (also with 50% probability). Then, the root t_r would have a 50:50 chance of being the starting point of the reproduction operation. Its direct children have at most probability 0.5²/numChildren(t_r) each, their children would have a base multiplier of 0.5³ for their probabilities, and so on. Hence, the leaves would almost never take actively part in reproduction.

We could also choose other probabilities which strongly prefer going down to the children of the tree, but then, the nodes near to the root will most likely be left untouched during reproduction. Often, this approach is favored by selection methods, although leaves in different branches of the tree are not chosen with the same probabilities if the branches differ in depth. When applying Algorithm 31.4, on the other hand, there exist no regions in the trees that have lower selection probabilities than others.
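The uniform descent of Algorithm 31.4 can be sketched in Java as follows (a hypothetical, simplified counterpart to Listing 56.41; all names are our own):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Hypothetical uniform node selection following Algorithm 31.4: descend from
// the root, stopping at the current node with probability 1/nodeWeight and
// otherwise entering the child subtree that contains the drawn index.
class NodeSelection {
    static class Node {
        final List<Node> children = new ArrayList<>();
        Node(Node... cs) { for (Node c : cs) children.add(c); }
        int weight() {                // Eq. 31.3: 1 + sum of the child weights
            int w = 1;
            for (Node c : children) w += c.weight();
            return w;
        }
    }
    static Node selectUniform(Node root, Random rnd) {
        Node t = root;
        while (true) {
            int w = rnd.nextInt(t.weight());    // uniform in [0, nodeWeight(t))
            if (w >= t.weight() - 1) return t;  // current node selected
            for (Node c : t.children) {         // find the child holding index w
                w -= c.weight();
                if (w < 0) { t = c; break; }
            }
        }
    }
}
```

Each node of the tree is returned with probability 1/nodeWeight(root): the current node is taken with probability 1/n, and a child subtree of m nodes is entered with probability m/n, inside which the same argument repeats.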
31.3.4 Unary Reproduction Operations
31.3.4.1 Mutation: Unary Reproduction
Tree genotypes may undergo small variations during the reproduction process in the Evolutionary Algorithm. Such a mutation is usually defined as the random selection of a node in the tree, removing this node and all of its children, and finally replacing it with another node [1602]. Algorithm 31.5 shows one possible way to realize such a mutation. First, a copy g ∈ G of the original genotype g_p ∈ G is created. Then, a random parent node t is selected from g. One child of this node is replaced with a newly created subtree. For creating this tree, we pick the ramped half-and-half method with a maximum depth similar to the one of the child which it replaces. The possible outcome of this algorithm is illustrated in Fig. 31.7.a and an example implementation in Java for Strongly-Typed Genetic Programming is given in Listing 56.43 on page 815.

Furthermore, in the special case that the tree nodes may have a variable number of children, two additional operations for mutation become available: (a) the insertion of new nodes or small trees (Fig. 31.7.b), and (b) the deletion of nodes, as illustrated in Fig. 31.7.c. The effects of insertion and deletion can, of course, also be achieved with replacement.
Algorithm 31.5: g ← mutateGP(g_p)
Input: g_p ∈ G: the parent genotype
Input: [implicit] N: the set of function types
Input: [implicit] Σ: the set of terminal types
Data: t: the selected tree node
Data: j: the selected child index
Output: g ∈ G: a new genotype
1  begin
2    g ← g_p
3    repeat
4      t ← selectNode(g)
5    until numChildren(t) > 0
6    j ← ⌊randomUni[0, numChildren(t))⌋
7    t.children[j] ← createGPRamped(max{2, treeDepth(t.children[j])}, N, Σ)
8    return g
Fig. 31.7.a: Subtree replacement.
Fig. 31.7.b: Subtree insertions.
Fig. 31.7.c: Subtree deletion.
Figure 31.7: Possible tree mutation operations.
Example E31.2 (Mutation Example E31.1 Cont.).
Based on Example E31.1 on page 387, Figure 31.8 shows one possible application of Algorithm 31.5 to a mathematical expression encoded as a tree. The initial genotype represents the mathematical formula (11 + x) · (3 + |x|). From this tree, the portion (11 + x) is replaced with a randomly created subtree which resembles the expression x/12. The resulting genotype then stands for the formula (x/12) · (3 + |x|).

Figure 31.8: Mutation of a mathematical expression represented as a tree.
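A subtree-replacement mutation in the spirit of Algorithm 31.5 can be sketched as follows. This is hypothetical code, not Listing 56.43: the node layout, the stand-in for createGPRamped, and all names are our own.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Hypothetical subtree-replacement mutation: copy the parent genotype, select
// a random inner node, and replace one of its children by a new random subtree.
class Mutation {
    static final Random RND = new Random(7);

    static class N {
        final String label;
        final List<N> kids = new ArrayList<>();
        N(String l, N... cs) { label = l; for (N c : cs) kids.add(c); }
        N copy() { N n = new N(label); for (N c : kids) n.kids.add(c.copy()); return n; }
        int size() { int s = 1; for (N c : kids) s += c.size(); return s; }
        int depth() { int d = 0; for (N c : kids) d = Math.max(d, c.depth()); return d + 1; }
        void collectInner(List<N> out) {     // all nodes that have children
            if (!kids.isEmpty()) out.add(this);
            for (N c : kids) c.collectInner(out);
        }
    }

    static N randomSubtree(int maxDepth) {   // simple stand-in for createGPRamped
        if (maxDepth <= 1 || RND.nextBoolean()) return new N("x");
        return new N("+", randomSubtree(maxDepth - 1), randomSubtree(maxDepth - 1));
    }

    static N mutate(N parent) {
        N g = parent.copy();                 // never modify the parent in place
        List<N> inner = new ArrayList<>();
        g.collectInner(inner);
        if (inner.isEmpty()) return g;       // a lone terminal: nothing to replace
        N t = inner.get(RND.nextInt(inner.size()));
        int j = RND.nextInt(t.kids.size());
        int d = Math.max(2, t.kids.get(j).depth());
        t.kids.set(j, randomSubtree(d));     // replace the chosen child
        return g;
    }
}
```

Copying the parent first mirrors the first step of Algorithm 31.5 (g ← g_p) and keeps the parental genotype intact for further reproduction.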
31.3.4.2 Permutation: Unary Reproduction
The tree permutation operation illustrated in Figure 31.9 resembles the permutation operation of string genomes or the inversion used in messy GA (Section 29.6.2.1, [1602]). Like mutation, it is used to reproduce one single tree. It first selects an internal node of the parental tree. The child nodes attached to that node are then shuffled randomly, i.e., permutated. If the tree represents a mathematical formula and the operation represented by the node is commutative, this has no direct effect. The main goal of this operation is to re-arrange the nodes in highly fit subtrees in order to make them less fragile for other operations such as recombination. The effects of this operation are questionable and most often, it is not applied [1602].
Figure 31.9: Tree permutation (asexually) shuing subtrees.
31.3.4.3 Editing: Unary Reproduction
Editing is to trees in Genetic Programming what simplifying is to formulas in mathematics. Editing a tree means to create a new offspring tree which is more efficient (e.g., in size) but equivalent to its parent in terms of functional aspects. It is thus a very domain-specific operation.
Example E31.3 (Editing Example E31.1 Cont.).
Let us take, for instance, the mathematical formula x = b + (7 − 4) + (1 · a) sketched in Figure 31.10.

Figure 31.10: Tree editing (asexual) optimization.

This expression clearly can be written in a shorter way by replacing (7 − 4) with 3 and (1 · a) with a. By doing so, we improve its readability and also decrease the computational time needed to evaluate the expression for concrete values of a and b. Similar measures can often be applied to algorithms and program code.
The positive aspect of editing here is that it reduces the number of nodes in the tree by removing useless expressions. This makes it easier for recombination operations to pick important building blocks. At the same time, the expression (7 − 4) is now less likely to be destroyed by the reproduction processes since it is replaced by the single terminal node 3.

A negative aspect would be if (in our example) a fitter expression was (7 − (4 − a)) and a is a variable close to 1. Then, transforming (7 − 4) into 3 prevents an evolutionary transition to the fitter expression.
Besides the aspects mentioned in Example E31.3, editing also reduces the diversity in the genome, which could degrade the performance by decreasing the variety of structures available. In Koza's experiments, Genetic Programming with and without editing showed equal performance, so the overall benefits of this operation are not fully clear [1602].
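For arithmetic trees, the editing step of Example E31.3 amounts to constant folding. The following hypothetical sketch covers only the operators +, − and ·:

```java
// Hypothetical editing sketch: constant subexpressions such as (7 - 4) are
// folded into a single terminal node, yielding a smaller but equivalent tree.
class Editing {
    static class E {
        final char op;       // '+', '-', '*' for functions; 'c' constant; 'x' variable
        final double val;    // only meaningful when op == 'c'
        final E left, right; // null for leaves
        E(double v) { op = 'c'; val = v; left = null; right = null; }
        E(char o, E l, E r) { op = o; val = 0; left = l; right = r; }
        static E variable() { return new E('x', null, null); }
    }
    static E fold(E e) {
        if (e.op == 'c' || e.op == 'x') return e;   // leaves stay as they are
        E l = fold(e.left), r = fold(e.right);
        if (l.op == 'c' && r.op == 'c') {           // both children constant: evaluate
            switch (e.op) {
                case '+': return new E(l.val + r.val);
                case '-': return new E(l.val - r.val);
                case '*': return new E(l.val * r.val);
            }
        }
        return new E(e.op, l, r);                   // otherwise keep the structure
    }
}
```

Applied to (7 − 4) + a, fold yields 3 + a in a single pass; every purely constant subexpression shrinks to one terminal node.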
31.3.4.4 Encapsulation: Unary Reproduction
The idea behind the encapsulation operation is to identify potentially useful subtrees and to turn them into atomic building blocks as sketched in Figure 31.11. To put it plainly, we create new terminal symbols that (internally hidden) are trees with multiple nodes. This way, they will no longer be subject to potential damage by other reproduction operations. The new terminal may spread throughout the population in the further course of the evolution. According to Koza, this operation has no substantial effect but may be useful in special applications like the evolution of artificial neural networks [1602].
31.3.4.5 Wrapping: Unary Reproduction
Applying the wrapping operation means to first select an arbitrary node n in the tree. A new non-terminal node m is created outside of the tree. In m, at least one child node position is
Figure 31.11: An example for tree encapsulation.
Figure 31.12: An example for tree wrapping.
left unoccupied. Then, n (and all its potential child nodes) is cut from the original tree and appended to m by plugging it into the free spot. Finally, m is hung into the tree position that formerly was occupied by n.
The purpose of this reproduction method illustrated in Figure 31.12 is to allow modifications of non-terminal nodes that have a high probability of being useful. Simple mutation would, for example, cut n from the tree or replace it with another expression. This will always change the meaning of the whole subtree below n dramatically.
Example E31.4 (Wrapping Example E31.1 Cont.).
If we evolve mathematical expressions, a candidate solution could be the formula (b + a) · 5. A simple mutation operator could replace the node a with a randomly created subtree such as (√a − 7), arriving at (b + (√a − 7)) · 5, which is quite different from (b + a) · 5. Wrapping may, however, place a into an expression such as (a + 2). The result, (b + (a + 2)) · 5, is closer to the original formula.
The idea of wrapping is to introduce a search operation with high locality which allows smaller changes to genotypes. The wrapping operation is used by the author in his experiments and so far, I have not seen another source where it is used. Then again, the idea of this operation is especially appealing to the evolution of executable structures such as mathematical formulas and quite simple, so it is likely used in similar ways by a variety of researchers.
31.3.4.6 Lifting: Unary Reproduction
While wrapping allows nodes to be inserted in non-terminal positions with only a small change to the tree's semantics, lifting is able to remove them in the same way. It is the inverse operation to wrapping, which becomes obvious when comparing Figure 31.12 and Figure 31.13.
Lifting begins with selecting an arbitrary inner node n of the tree. This node then replaces its parent node. The parent node, including all of its child nodes (except n), is removed from the tree. With lifting, a tree that represents the mathematical formula (b + a) · 3 can be transformed to b · 3 in a single step.
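The lifting step can be sketched in a few lines. The following Python illustration is not the author's implementation; it assumes a minimal hypothetical Node class holding an operator symbol and a list of children, and it also accepts a leaf as the lifted node:

```python
class Node:
    """A minimal expression-tree node: a symbol plus child subtrees."""
    def __init__(self, symbol, children=()):
        self.symbol = symbol
        self.children = list(children)

    def __str__(self):
        if not self.children:
            return self.symbol
        return "(" + self.symbol.join(str(c) for c in self.children) + ")"

def lift(root, inner):
    """Lifting: `inner` replaces its parent node; the parent and all of its
    other children are removed. Returns the new root of the tree."""
    if inner in root.children:            # `root` is the parent of `inner`
        return inner
    root.children = [lift(c, inner) for c in root.children]
    return root
```

Applied to the tree for (b + a) · 3 with the node b selected, the sketch yields b · 3 in a single step, just as described above.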
398 31 GENETIC PROGRAMMING
Figure 31.13: An example for tree lifting.
Lifting is used by the author in his experiments with Genetic Programming (see for example ?? on page ??). Again, so far I have not seen a similar operation defined explicitly in other works, but because of its simplicity, it would be strange if it had not been used by various researchers in the past.
31.3.5 Recombination: Binary Reproduction
The mating process in nature, i.e., the recombination of the genotypes of two individuals, is also copied in tree-based Genetic Programming. Exchanging parts of genotypes is not much different in tree-based representations than in string-based ones.
31.3.5.1 Subtree Crossover
Applying the default subtree exchange recombination operator to two parental trees means to swap subtrees between them, as defined in Algorithm 31.6 and illustrated in Figure 31.14. Therefore, single subtrees are selected randomly from each of the parents and subsequently are cut out and reinserted in the partner genotype.
Algorithm 31.6: g ← recombineGP(g_p1, g_p2)
  Input: g_p1, g_p2 ∈ G: the parental genotypes
  Data: t: the selected tree node
  Data: j: the selected child index
  Output: g ∈ G: a new genotype
1 begin
2   g ← g_p1
3   repeat
4     t ← selectNode(g)
5   until numChildren(t) > 0
6   j ← randomUni[0, numChildren(t))
7   t.children[j] ← selectNode(g_p2)
8   return g
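The procedure of Algorithm 31.6 can be sketched compactly in Python (the book's reference implementation, in Java, is referenced below); the Node class, the uniform node selection, and the omission of a depth restriction are simplifying assumptions:

```python
import random

class Node:
    """A minimal expression-tree node: a symbol plus child subtrees."""
    def __init__(self, symbol, children=()):
        self.symbol = symbol
        self.children = list(children)

def all_nodes(t):
    """Enumerate every node of the tree in depth-first order."""
    yield t
    for c in t.children:
        yield from all_nodes(c)

def copy_tree(t):
    return Node(t.symbol, [copy_tree(c) for c in t.children])

def recombine_gp(gp1, gp2):
    """Subtree crossover following Algorithm 31.6: copy the first parent,
    pick a node t with at least one child, and replace a uniformly chosen
    child of t with a (copied) subtree selected from the second parent."""
    g = copy_tree(gp1)                        # g <- gp1
    while True:                               # repeat ... until numChildren(t) > 0
        t = random.choice(list(all_nodes(g)))
        if t.children:
            break
    j = random.randrange(len(t.children))     # j <- randomUni[0, numChildren(t))
    t.children[j] = copy_tree(random.choice(list(all_nodes(gp2))))
    return g
```

Note that the selection loop does not terminate if the first parent is a single leaf; a real implementation has to handle this case and also enforce any depth limit imposed on the genome.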
If a depth restriction is imposed on the genome, both the mutation and the recombination operation have to respect it. The new trees they create must not exceed it. An example implementation of this operation in Java for Strongly-Typed Genetic Programming is given in Listing 56.44 on page 817.
The intent of using the recombination operation in Genetic Programming is the same as in Genetic Algorithms. Over many generations, successful building blocks (for example, a highly fit expression in a mathematical formula) should spread throughout the population and be combined with good genes of different candidate solutions.
Figure 31.14: Tree recombination by exchanging subtrees.
Example E31.5 (Recombination Example E31.1 Cont.).
Based on Example E31.1 on page 387, Figure 31.15 shows one possible application of Algorithm 31.6. Two mathematical expressions encoded as trees are recombined by the exchange of subtrees, i.e., mathematical expressions.

[Figure: a node to replace is selected in parent 1 and a replacement subtree is selected in parent 2; cutting the former and inserting the latter yields the offspring.]
Figure 31.15: Recombination of two mathematical expressions represented as trees.
The first parent tree g_p1 represents the formula (sin(3/e))^x and the second one (g_p2) denotes (e · x) − |3|. The node which stands for the variable x in the first parent is selected, as well as the subtree e · x in the second parent. x is cut from g_p1 and replaced with e · x, yielding an offspring which encodes the formula (sin(3/e))^(e·x).
31.3.5.2 Headless Chicken Crossover
Recombination in Standard Genetic Programming can also have a very destructive effect on the individual fitness [2032]. Angeline [101] argues that it performs no better than mutation and causes bloat [104], and used Jones's headless chicken crossover [1466], discussed in Section 29.3.4.7 on page 339.
Angeline [101] attacked Koza's [1602] claims about the efficiency of subtree crossover by constructing two new operators:
1. In Strong Headless Chicken Crossover (SHCC), each parent tree is recombined with a new randomly created tree and the offspring is returned. The new child will normally contain a relatively small amount of random nodes.
2. In Weak Headless Chicken Crossover (WHCC), the parent tree is recombined with a new randomly created tree and vice versa. One of the offspring is randomly selected and returned. In half of the returned offspring, there will be a relatively small amount of non-random code.
SHCC is very close to subtree mutation, since a sub-tree in the parent is replaced with a randomly created one. It resembles a macromutation. Angeline [101] showed that SHCC performed at least as well as, if not better than, subtree crossover on a variety of problems and that the performance of WHCC is good, too.
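Expressed in terms of a generic crossover routine, the two operators are almost one-liners. In the Python sketch below, `crossover` and `random_tree` are hypothetical placeholders for the recombination operator and the random tree generator of the surrounding GP system:

```python
import random

def shcc(parent, crossover, random_tree):
    """Strong Headless Chicken Crossover: recombine the parent with a
    freshly generated random tree -- effectively a subtree macromutation."""
    return crossover(parent, random_tree())

def whcc(parent, crossover, random_tree):
    """Weak Headless Chicken Crossover: recombine in both directions and
    return one of the two offspring uniformly at random."""
    r = random_tree()
    offspring = [crossover(parent, r), crossover(r, parent)]
    return random.choice(offspring)
```

Any concrete subtree crossover (such as the sketch of Algorithm 31.6) and any tree generator can be plugged in; the structure alone shows why SHCC behaves like a macromutation.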
His conclusion is that subtree crossover actually works similarly to a macromutation operator whose contents are limited by the subtrees available in the population. This contradicts the common (prior) belief that subtree crossover would be a building-block-combining facility. In problems where a constant incoming stream of new genetic material and variation is necessary, the pure macromutation provided by SHCC thus outperforms crossover.
Interestingly, Lang [1665] had claimed a few years earlier that a Hill Climbing algorithm with a similar macromutation could outperform Genetic Programming, but these claims were drawn from inconclusive and ill-designed experiments. As a consequence, they could not withstand a thorough investigation [1600]. The similarity of crossover and mutation in Genetic Programming (which also becomes obvious when comparing Algorithm 31.5 and 31.6) was thus first profoundly pointed out by Angeline [101].
31.3.5.3 Advanced Techniques
Several techniques have been proposed in order to mitigate the destructive effects of recombination in tree-based representations. In 1994, D'Haeseleer [782] obtained modest improvements with his strong context preserving crossover, which permitted only the exchange of subtrees that occupied the same positions in the parents. Poli and Langdon [2193, 2194] define the similar single-point crossover for tree genomes with the same purpose: increasing the probability of exchanging genetic material which is structurally and functionally akin and thus decreasing the disruptiveness. A related approach defined by Francone et al. [982] for linear Genetic Programming is discussed in Section 31.4.4.1 on page 407.
31.3.6 Modules
31.3.6.1 Automatically Defined Functions
The concept of automatically defined functions (ADFs) as introduced by Koza [1602] provides some sort of pre-specified modularity for Genetic Programming. Finding a way to evolve modules and reusable building blocks is one of the key issues in using GP to derive higher-level abstractions and solutions to more complex problems [107, 108, 1599]. If ADFs are used, a certain structure is defined for the genome.
The root of the tree usually loses its functional responsibility and now serves only as glue that holds the individual together. It has a fixed number n of children, of which n − 1 are automatically defined functions and one is the result-generating branch. When evaluating the fitness of an individual, often only this first branch is taken into consideration whereas the root and the ADFs are ignored. The result-generating branch, however, may use any of the automatically defined functions to produce its output.
Example E31.6 (ADFs Example E31.1 Cont.).
When ADFs are employed, typically not only their number must be specified beforehand but also the number of arguments of each of them. How this works can maybe best be illustrated by using the example given in ??. It stems from function approximation⁷, since this is the area where many early examples of the idea of ADFs come from.
Assume that the goal of GP is to approximate a function g with the one parameter x and that a genome is used where two functions (f_0 and f_1) are automatically defined. f_0 has a single formal parameter a and f_1 has two formal parameters a and b. The genotype ?? encodes the following mathematical functions:

g(x) = f_1(4, f_0(x)) − (f_0(x) + 3)   (31.4)
f_0(a) = a + 7                          (31.5)
f_1(a, b) = (−a) · b                    (31.6)

Hence, g(x) ≡ ((−4) · (x + 7)) − ((x + 7) + 3). The number of children of the function calls in the result-generating branch must be equal to the number of the parameters of the corresponding ADF.
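The example genotype can be checked directly by hard-coding the two ADFs and the result-generating branch. This Python sketch merely illustrates Equations 31.4 to 31.6; it is not how an ADF genome is actually represented:

```python
def f0(a):
    """First automatically defined function: f0(a) = a + 7."""
    return a + 7

def f1(a, b):
    """Second automatically defined function: f1(a, b) = (-a) * b."""
    return (-a) * b

def g(x):
    """Result-generating branch: calls the ADFs as in Equation 31.4."""
    return f1(4, f0(x)) - (f0(x) + 3)

# expanded by hand: g(x) = ((-4) * (x + 7)) - ((x + 7) + 3)
```

For instance, g(0) evaluates to (−4) · 7 − 10 = −38, matching the hand-expanded form.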
Although ADFs were first introduced for symbolic regression by Koza [1602], they can also be applied to a variety of other problems, like the evolution of agent behaviors [96, 2240], electrical circuit design [1608], or the evolution of robotic behavior [98].
31.3.6.2 Automatically Defined Macros
Spector's idea of automatically defined macros (ADMs) complements the ADFs of Koza [2566, 2567]. Both concepts are very similar and only differ in the way that their parameters are handled. The parameters in automatically defined functions are always values whereas automatically defined macros work on code expressions. This difference shows up only when side-effects come into play.
Example E31.7 (ADFs vs. ADMs).
In Figure 31.16, we have illustrated the pseudo-code of two programs: one with a function (called ADF) and one with a macro (called ADM). Each program has a variable x which is initially zero. The function y() has the side-effect that it increments x and returns its new value. Both the function and the macro return a sum containing their parameter a two times. The parameter of ADF is evaluated before ADF is invoked. Hence, x is incremented one time and 1 is passed to ADF, which then returns 2 = 1 + 1. The parameter of the macro, however, is the invocation of y(), not its result. Therefore, the ADM expands to two calls to y(), resulting in x being incremented two times and in 3 = 1 + 2 being returned.
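The same behavior can be reproduced in any language with first-class functions. In this Python sketch, the macro is imitated by passing the unevaluated call (a callable) instead of its value:

```python
x = 0

def y():
    """Side-effecting subroutine: increments the global x, returns its value."""
    global x
    x += 1
    return x

def adf(a):
    """ADF-style: the parameter is a value; y() was evaluated exactly once."""
    return a + a

def adm(a):
    """ADM-style: the parameter is an expression; it is evaluated at each use."""
    return a() + a()

x = 0
result_adf = adf(y())   # y() runs once -> 1; returns 1 + 1 = 2
x = 0
result_adm = adm(y)     # y runs twice -> 1 and 2; returns 1 + 2 = 3
```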
⁷ A very common example for function approximation, Genetic Programming-based symbolic regression, is discussed in Section 49.1 on page 531.
[Figure: two pseudo-code panels. In the "Program with ADF" panel, a variable x = 0 and a subroutine y() (which increments x and returns it) are defined; a function ADF(param a) ≡ (a + a) is invoked as print(ADF(y)), the program roughly resembles temp1 = y(); temp2 = temp1 + temp1; print(temp2), and it outputs 2. In the "Program with ADM" panel, the same x and y() are defined; a macro ADM(param a) ≡ (a + a) is invoked as print(ADM(y)), the program roughly resembles temp = y() + y(); print(temp), and it outputs 3.]
Figure 31.16: Comparison of functions and macros.
The ideas of automatically defined macros and automatically defined functions are very close to each other. Automatically defined macros are likely to be useful in scenarios where context-sensitive or side-effect-producing operators play important roles [2566, 2567]. In other scenarios, there is not much difference between the application of ADFs and ADMs.
Finally, it should be mentioned that the concepts of automatically defined functions and macros are not restricted to the standard tree genomes but are also applicable in other forms of Genetic Programming, such as linear Genetic Programming (see Section 31.4) or PADO (see ?? on page ??).
31.4 Linear Genetic Programming
31.4.1 Introduction
In the beginning of this chapter, it was stated that one of the two meanings of Genetic Programming is the automated, evolutionary synthesis of programs that solve a given set of problems. It was discussed that tree genomes are suitable to encode programs or formulas and how the genetic operators can be applied to them.
Nevertheless, trees are not the only way of representing programs. As a matter of fact, a computer processes programs as sequences of simple machine instructions instead. These sequences may contain branches in the form of jumps to other places in the code. Every possible flowchart describing the behavior of a program can be translated into such a sequence. It is therefore only natural that the first approach to automated program generation, developed by Friedberg [995] at the end of the 1950s, used a fixed-length instruction sequence genome [995, 996]. The area of Genetic Programming focused on such instruction string genomes is called linear Genetic Programming (LGP).
Example E31.8 (LGP versus SGP: Syntax).
The general idea of LGP, as opposed to SGP, is that tree-based GP is close to high-level representations such as LISP S-expressions, whereas linear Genetic Programming breeds instruction sequences resembling programs in assembler language or machine code. This distinction in philosophy goes as far down as the naming conventions of the instructions and variables: In symbolic regression in Standard Genetic Programming, for instance, function names such as +, −, ·, and sin are common and the variables are usually called x (or x_1, x_2, . . . ). In linear Genetic Programming for the same purpose, the instructions would usually be named like ADD, SUB, MUL, and SIN and, instead of variables, the data is stored in registers such as R[0], R[1], . . . An ADD instruction in LGP could have, for example, three parameters: the two input registers and the register to store the result of the computation. In Standard Genetic Programming, on the other hand, + would have two child nodes and return the result of the addition to the calling node without explicitly storing it somewhere.
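A toy interpreter for such three-address code makes the difference tangible; the concrete instruction set and tuple layout below are illustrative assumptions, not a standard LGP format:

```python
import math

def run(program, registers):
    """Execute a linear GP program: each instruction names an operation,
    two input registers, and the register receiving the result. The only
    interpreter state is the index of the current instruction."""
    ops = {
        "ADD": lambda a, b: a + b,
        "SUB": lambda a, b: a - b,
        "MUL": lambda a, b: a * b,
        "SIN": lambda a, b: math.sin(a),   # second operand is ignored
    }
    for op, in1, in2, out in program:
        registers[out] = ops[op](registers[in1], registers[in2])
    return registers

# R[2] = R[0] * R[1]; then R[2] = R[2] + R[0]
prog = [("MUL", 0, 1, 2), ("ADD", 2, 0, 2)]
```

Running `prog` on the registers [3.0, 4.0, 0.0] leaves 15.0 in R[2].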
In linear Genetic Programming, instruction strings (or bit/integer strings encoding them) are the center of the evolution and contain the program code directly. This distinguishes it from many grammar-guided Genetic Programming approaches like Grammatical Evolution (see ?? on page ??), where strings are just genotypic, intermediate representations that encode the program trees. Some of the most important early contributions to LGP come from [2199]:
1. Banzhaf [205], who used a genotype-phenotype mapping with repair mechanisms to translate a bit string to nested simple arithmetic instructions in 1993,
2. Perkis [2164] (1994), whose stack-based GP evaluated arithmetic expressions in Reverse Polish Notation (RPN),
3. Openshaw and Turton [2086] (1994), who also used Perkis's approach but had already represented mathematical equations as fixed-length bit strings back in the 1980s [2085], and
4. Crepeau [653], who developed a machine code GP system around an emulator for the Z80 processor.
Besides the methods discussed in this section (and especially in Section 31.4.3), other interesting approaches to linear Genetic Programming are the LGP variants developed by Eklund [870] and Leung et al. [536, 1715] on specialized hardware, the commercial system by Foster [976], and the MicroGP (µGP) system for test program induction by Corno et al. [642, 2592].
31.4.2 Advantages and Disadvantages
The advantage of linear Genetic Programming lies in the straightforward evaluation of the evolved algorithms. Like a normal computer program, a (simulated) processor starts at the beginning of the instruction string. It reads the first instruction and its operands from the string and carries it out. Then, it moves to the second instruction and so on. The size of the status information needed by an interpreter for tree-based programs is in O(d) (where d is the depth of the tree), because a recursive depth-first traversal of the trees is required. For processing a linear instruction sequence, the required status information is in O(1), since only the index of the currently executed instruction needs to be stored.
Because of this structure, it is also easy to limit the runtime in the program evaluation and even to simulate parallelism. Like Standard Genetic Programming, LGP is most often applied with a simple function set comprised of primitive instructions and some mathematical operators. In SGP, such an instruction set can be extended with alternatives (if-then-else) and loops. Similarly, in linear Genetic Programming jump instructions may be added to allow for the evolution of a more complex control flow.
Then, however, the drawback arises that simply reusing the genetic operators for variable-length string genomes (discussed in Section 29.4 on page 339), which randomly insert, delete, or toggle bits, is no longer feasible: jump instructions are highly epistatic (see Chapter 17 and ??).
Example E31.9 (Linear Genetic Programming with Jumps).
We can visualize, for example, that the alternatives and loops which we know from high-level programming languages are mapped to conditional and unconditional jump instructions in machine code. These jumps target either absolute or relative addresses inside the program. Let us consider the insertion of a single, new command into the instruction string, maybe as the result of a mutation or recombination operation. If we do not perform any further corrections after this insertion, it is well possible that the resulting shift of the absolute addresses of the subsequent instructions in the program invalidates the control flow and renders the whole program useless. This issue is illustrated in Fig. 31.17.a.
Nordin et al. [2048] point out that standard crossover is highly disruptive. Even though the subtree crossover in tree genomes is shown to be not very efficient either (see [101] and Section 31.3.5.2 on page 399), in comparison, tree-based genomes are less vulnerable in this aspect. As can be seen in Fig. 31.17.b, the insertion of an instruction into a loop represented as a tree in Standard Genetic Programming is less critical.
[Fig. 31.17.a: Inserting into an instruction string. Before the insertion, a jump targets the absolute address 5001, where a loop begins; after an instruction is inserted, the loop beginning is shifted, so the jump to 5001 no longer hits it.]
[Fig. 31.17.b: Inserting in a tree representation. A new statement is inserted into the body of a while-loop subtree; the surrounding control flow remains intact.]
Figure 31.17: The impact of insertion operations in Genetic Programming
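The address-shift problem of Fig. 31.17.a can be made concrete with a toy program whose jumps use absolute targets; the two-field instruction format below is hypothetical:

```python
def insert(program, index, instruction, fix_jumps=False):
    """Insert an instruction into a linear program with absolute jump targets.
    Without correction, every jump that targets an address at or behind the
    insertion point goes stale; with fix_jumps=True such targets are shifted.
    (The freshly inserted instruction is shifted too, a simplification.)"""
    prog = program[:index] + [instruction] + program[index:]
    if fix_jumps:
        prog = [("JMP", t + 1) if op == "JMP" and t >= index else (op, t)
                for (op, t) in prog]
    return prog

# the JMP targets absolute address 3, where the loop starts
prog = [("NOP", 0), ("NOP", 0), ("JMP", 3), ("LOOP", 0)]
```

Inserting a NOP at index 3 without correction makes the jump land on the inserted instruction instead of the loop start; with correction, the jump follows its target to address 4.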
One approach to increase the locality in linear Genetic Programming with jumps is to use intelligent mutation and crossover operators which preserve the control flow of the program when inserting or deleting instructions. Such operations could, for instance, analyze the program structure and automatically correct jump targets. Operations which are restricted to have only a minimal effect on the control flow from the start can also easily be introduced. In Section 31.4.3.4, we briefly outline some of the work of Brameier and Banzhaf [391], who define some interesting approaches to this issue. Section 31.4.4.1 discusses the homologous crossover operation, which represents another method for decreasing the destructive effects of reproduction in LGP.
31.4.3 Realizations and Implementations
31.4.3.1 The Compiling Genetic Programming System
One of the early linear Genetic Programming approaches was developed by Nordin [2044], who was dissatisfied with the performance of GP systems written in an interpreted language which, in turn, interpret the programs evolved using a tree-shaped genome. In 1994, he published his work on a new Compiling Genetic Programming System (CGPS) written in the C programming language⁸ [1525], directly manipulating individuals represented as machine code.
⁸ http://en.wikipedia.org/wiki/C_(programming_language) [accessed 2008-09-16]
CGPS evolves linear programs which accept some input data and produce output data as the result. Each candidate solution in CGPS consists of (a) a prologue for shoveling the input data from the stack into registers, (b) a set of instructions for information processing, and (c) an epilogue for terminating the function [2045]. The prologue and epilogue were never modified by the genetic operations. As instructions for the middle part, the
Genetic Programming system had arithmetical operations and bit-shift operators at its disposal in [2044], but no control flow manipulation primitives like jumps or procedure calls. These were added in [2046] along with ADFs, making this LGP approach Turing-complete.
Nordin [2044] used the classification of Swedish words as the task in the first experiments with this new system. He found that it had approximately the same capability for growing classifiers as artificial neural networks but performed much faster. Another interesting application of his system was the compression of images and audio data [2047].
31.4.3.2 Automatic Induction of Machine Code by Genetic Programming
CGPS originally evolved code for the Sun SPARC processors, which belong to the RISC⁹ processor class. This had the advantage that all instructions have the same size. In the Automatic Induction of Machine Code with GP system (AIM-GP, AIMGP), the successor of CGPS, support for multiple other architectures was added by Nordin, Banzhaf, and Francone [2050, 2053], including Java bytecode¹⁰ and CISC¹¹ CPUs with variable instruction widths such as Intel 80x86 processors. A new interesting application for linear Genetic Programming tackled with AIMGP is the evolution of robot behavior such as obstacle avoiding and wall following [2052].
31.4.3.3 Java Bytecode Evolution
Besides AIMGP, there exist numerous other approaches to the evolution of linear Java bytecode functions. The Java Bytecode Genetic Programming system (JBGP, also Java Method Evolver, JME) by Lukschandl et al. [1792-1795] is written in Java. A genotype in JBGP contains the maximum allowed stack depth together with a linear list of instruction descriptors. Each instruction descriptor holds information such as the corresponding bytecode and a branch offset. The genotypes are transformed with a genotype-phenotype mapping into methods of a Java class which then can be loaded into the JVM, executed, and evaluated [1206, 1207].
In the JAPHET system of Klahold et al. [1547], the user provides an initial Java class at startup. Classes are divided into a static and a dynamic part. The static parts contain things like version information and are not affected by the reproduction operations. The dynamic parts, containing the methods, are modified by the genetic operations, which add new byte code [1206, 1207].
Harvey et al. [1206, 1207] introduce byte code GP (bcGP), where the whole population of each generation is represented by one class file. Like in AIMGP, each individual is a linear sequence of Java bytecode and is surrounded by a prologue and epilogue. Furthermore, by adding buffer space, each individual has the same size and, thus, the whole population can be kept inside a byte array of a fixed size, too.
31.4.3.4 Brameier and Banzhaf: LGP with Implicit Intron Removal
In the Genetic Programming system developed by Brameier and Banzhaf [391], based on former experience with AIMGP, an individual is represented as a linear sequence of simple C instructions, as outlined in the example Listing 31.1 (a slightly modified version of the example given in [391]). Due to reproduction operations such as mutation and crossover, linearly encoded programs may contain introns, i.e., instructions not influencing the result (see Definition D29.1 and ??). Given that the program defined in Listing 31.1 will store its outputs in the registers v[0] and v[1], all the lines marked with (I) do not contribute to the overall functional fitness.
⁹ http://en.wikipedia.org/wiki/Reduced_Instruction_Set_Computing [accessed 2008-09-16]
¹⁰ http://en.wikipedia.org/wiki/Bytecode [accessed 2008-09-16]
¹¹ http://en.wikipedia.org/wiki/Complex_Instruction_Set_Computing [accessed 2008-09-16]
void ind(double v[8]) {
  ...
  v[0] = v[5] + 73;
  v[7] = v[0] - 59;      /* (I) */
  if (v[1] > 0)
    if (v[5] > 23)
      v[4] = v[2] * v[1];
  v[2] = v[5] + v[4];    /* (I) */
  v[6] = v[0] * 25;      /* (I) */
  v[6] = v[4] - 4;
  v[1] = sin(v[6]);
  if (v[0] > v[1])       /* (I) */
    v[3] = v[5] * v[5];  /* (I) */
  v[7] = v[6] * 2;
  v[5] = v[7] + 115;     /* (I) */
  if (v[1] <= v[6])
    v[1] = sin(v[7]);
}
Listing 31.1: A genotype of an individual in Brameier and Banzhaf's LGP system.
Brameier and Banzhaf [391] introduce an algorithm which removes these introns during the genotype-phenotype mapping, before the fitness evaluation. Basically, this approach can be considered as an editing operation (see Section 31.3.4.3 on page 396) for linear Genetic Programming. The GP system based on this approach was successfully tested with several classification tasks [390-392], function approximation, and Boolean function synthesis [393].
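At its core, such intron removal is a backward liveness analysis over the registers: an instruction is effective only if its destination register is still needed later or is an output. The following Python sketch ignores conditionals and branches and thus simplifies the published algorithm considerably:

```python
def effective(program, outputs):
    """Mark the instructions that can influence the output registers.
    Each instruction is a pair (dest, sources); control flow is ignored."""
    needed = set(outputs)
    keep = [False] * len(program)
    for i in range(len(program) - 1, -1, -1):   # scan backwards
        dest, sources = program[i]
        if dest in needed:
            keep[i] = True
            needed.discard(dest)                # this write defines the register
            needed.update(sources)              # its inputs become needed
    return keep

# v[0] = v[5] + 73 ; v[7] = v[0] - 59 (intron) ; v[1] = sin(v[0])
prog = [(0, [5]), (7, [0]), (1, [0])]
```

For the three-instruction program above, with v[0] and v[1] as outputs, the middle instruction writes the never-read register v[7] and is classified as an intron.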
In his doctoral dissertation, Brameier [388] elaborates that the control flow of linear Genetic Programming resembles a graph more than a tree because of jump and call instructions. In the earlier work of Brameier and Banzhaf [391] mentioned just a few lines ago, introns were only excluded by the genotype-phenotype mapping but preserved in the genotypes, because they were expected to make the programs robust against variations. In [388], Brameier concludes that such implicit introns representing unreachable or ineffective code have no real protective effect but reduce the efficiency of the reproduction operations and, thus, should be avoided or at least minimized by them. Instead, the concept of explicitly defined introns (EDIs) proposed by Nordin et al. [2051] is utilized in the form of something like nop instructions in order to decrease the destructive effect of crossover. Brameier finds that introducing EDIs decreases the proportion of introns arising from unreachable or ineffective code and leads to better results. In comparison with standard tree-based GP, his linear Genetic Programming approach performed better during experiments with classification, regression, and Boolean function evolution benchmarks.
31.4.4 Recombination
31.4.4.1 Sticky (Homologous) Crossover
In Section 29.3.4.6 on page 338, we discussed homologous crossover for Genetic Algorithms. Normal crossover operations like MPX may have a disruptive effect on building blocks. The idea of homologous crossover is that, similar to nature (Banzhaf et al. [209]), only genes that express the same functionality and are located at the same loci are exchanged between parental genotypes. It is assumed that such a recombination operator would have a lower disruptiveness than plain MPX.
The programs which evolve in linear Genetic Programming are encoded in string chromosomes, usually of variable length. Hence, a homologous crossover mechanism could be beneficial here as well.
Francone et al. [982, 2050] introduce a sticky crossover operator which resembles homology by allowing the exchange of instructions between two genotypes (programs) only if they reside at the same loci. In sticky crossover, first a sequence of code in the first parent genotype is chosen. This sequence is then swapped with the sequence at exactly the same position in the second parent.
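Sticky crossover therefore amounts to swapping one and the same index range in both parents. A minimal Python sketch, which clips the range to the shorter parent, could look like this:

```python
import random

def sticky_crossover(p1, p2):
    """Exchange instructions between two linear programs only at identical
    loci: one index range is chosen and swapped at the same position in both."""
    limit = min(len(p1), len(p2))         # the loci must exist in both parents
    start = random.randrange(limit)
    end = random.randrange(start, limit) + 1
    c1 = p1[:start] + p2[start:end] + p1[end:]
    c2 = p2[:start] + p1[start:end] + p2[end:]
    return c1, c2
```

Every offspring keeps the length of its parent, and each locus still holds an instruction that came from that same locus in one of the parents.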
31.4.4.2 Page-based LGP
An approach similar to Francone et al.s sticky crossover is the Page-based linear Genetic
Programming by Heywood and Zincir-Heywood [1229]. Here, programs are described as
sequences of pages. Each page includes the same number z of instructions.
Crossover exchanges exactly a single page between the parents but, dierent from the
homologous operators, is not locally bound. In other words, the third page of the rst
parent may be swapped with the fth page of the second parent. Since exactly one page is
exchanged, the length of the programs remains the same and the number of their instructions
does not change. This way of crossover is useful to identify and spread building blocks and
thus, is less destructive.
Heywood and Zincir-Heywood [1229] also devise a dynamic page size adaptation method for their crossover operator. Instead of keeping the page size z constant throughout the evolution, an upper limit z̄ for this size is defined. The Genetic Programming process starts with a working page size of z = 1. If a fitness plateau is reached, i.e., no further improvement of the best fitness can be found for some time, z is increased to the next divisor of z̄. In other words, if z̄ = 12, z would go through 1, 2, 3, 4, 6, and 12. If z = z̄ and the GP process stagnates again, this iteration starts all over again at z = 1. The idea of such an approach is that initially, the building blocks in the system are likely to be simple and small. Over time, longer useful code sequences will be discovered, calling for a larger page size in order to be preserved.
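The resulting page-size schedule simply cycles through the divisors of the upper limit z̄ in increasing order; a small Python sketch:

```python
def page_size_schedule(z_max):
    """Yield the working page sizes in the order they are tried: all divisors
    of z_max in increasing order, restarting at 1 after z_max is reached."""
    divisors = [d for d in range(1, z_max + 1) if z_max % d == 0]
    while True:
        for z in divisors:
            yield z

# with z_max = 12 the sizes cycle through 1, 2, 3, 4, 6, 12, 1, 2, ...
```

In a real run, the generator would only be advanced when the best fitness stagnates, not in every generation.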
31.4.5 General Information on Linear Genetic Programming
31.4.5.1 Applications and Examples
Table 31.3: Applications and Examples of Linear Genetic Programming.
Area                                 References
Chemistry                            [207, 1671, 2899]
Computer Graphics                    [1671]
Control                              [2052]
Data Compression                     [2047]
Data Mining                          [388-392, 982, 1207, 1229, 1671, 2044]
Digital Technology                   [392, 393, 1229, 2050, 2164]
Distributed Algorithms and Systems   [947, 1180, 1793, 1977, 2555, 2888, 2896, 2898, 2899, 2916]
Engineering                          [392, 393, 947, 1180, 1229, 1671, 1682, 1763-1767, 1793, 1977, 2050, 2052, 2164, 2555, 2896]
Graph Theory                         [947, 1180, 1682, 1767, 1793, 1977, 2555, 2896]
Healthcare                           [388, 391]
Mathematics                          [390, 392, 1206, 1207, 1229, 1301, 1715, 2164, 2888, 2891, 2898]
Prediction                           [205, 390, 1671, 2086]
Security                             [947, 1180, 1682, 1977, 2555]
Software                             [976, 1206, 1207, 1547, 1669, 1671, 1792, 1793, 2046, 2050, 2053, 2916]
Testing                              [642, 2592]
Wireless Communication               [1682, 1767]
31.4.5.2 Books
1. Advances in Genetic Programming [2569]
2. Genetic Programming: An Introduction – On the Automatic Evolution of Computer Programs and Its Applications [209]
3. Linear Genetic Programming [394]
4. Evolutionary Program Induction of Binary Machine Code and its Applications [2045]
31.5 Grammars in Genetic Programming
31.5.1 General Information on Grammar-Guided Genetic Programming
31.5.1.1 Applications and Examples
Table 31.4: Applications and Examples of Grammar-Guided Genetic Programming.
Area References
Control [2336]
Data Mining [2969]
Digital Technology [2032]
Economics and Finances [379, 380]
Engineering [1920, 2032, 2336]
Mathematics [2032-2034, 2359]
Software [1920]
31.5.1.2 Books
1. Data Mining Using Grammar Based Genetic Programming and Applications [2969]
31.6 Graph-based Approaches
31.6.1 General Information on Graph-based Genetic Programming
31.6.1.1 Applications and Examples
Table 31.5: Applications and Examples of Graph-based Genetic Programming.
Area References
Digital Technology [2792, 3042]
Engineering [2792, 3042]
Mathematics [2482]
Sorting [2482]
Tasks T31
91. Use Genetic Programming to perform symbolic regression (see Section 49.1 on page 531 and Example E31.1 on page 387) on the data given in Listing 31.2. Which mathematical function b = f(a) relates the values of a to the values of b? You can use the files from Section 57.2.1.1 on page 905 and the other sources provided with this book to construct your optimization process.
a                    b
=============================================
0.008830073673861905 0.008752332965198615
0.018694563401923325 0.018347254459257438
0.05645469964408556  0.05332752398796371
0.0766136391753689   0.0708938021131889
0.14129105849778467  0.12226631044115759
0.34977791509449296  0.241542619578675
0.35201052061616     0.24247840720311337
0.36438318566633743  0.2475456173823204
0.3893919925542615   0.2571847111870792
0.45985453110101393  0.2802156496978409
0.5294420401690347   0.29744194372200355
0.545693694874572    0.30073566724237905
0.5794622522099054   0.30675072726587754
0.5854806945538868   0.3077087939606112
0.587999637880455    0.30809978088344814
0.6134151841062928   0.31172078461868413
0.6305855626214137   0.3138417822317378
0.6451020098521425   0.315436888525947
0.6595800013242682   0.3168517955439009
0.6622167712763556   0.3170909127233424
0.7470926929713728   0.32191168989382
0.7513516563209482   0.3220146762426389
0.7571482262199194   0.32213477030839227
0.760834074066516    0.32219920416927167
0.799102390437177    0.32233694519750067
0.8823767703419055   0.31955612939384237
0.8839162433815917   0.31946827608018985
0.9278045504520468   0.3164575071414033
0.9482259113015682   0.3147394303666216
0.9483969161261779   0.31472423502162195
Listing 31.2: The data for Task 91
[50 points]
92. Use any of the optimization algorithms for real-valued problem spaces to match the
polynomial g_5 a^5 + g_4 a^4 + g_3 a^3 + g_2 a^2 + g_1 a + g_0 to the data given in Listing 31.2
from Task 91. Optimize the vector g = (g_0, g_1, ..., g_5)^T by considering it as genotype
in the six-dimensional search space G ⊆ R^6. Use an objective function which could
be roughly similar to Listing 57.20 on page 908. Do not use any tree-related data
structures.
[50 points]
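The sum-of-squared-errors objective for Task 92 can be sketched as follows (Python; the data constant contains only the first three pairs from Listing 31.2 for brevity, and the function name is our own choice, not the one used in Listing 57.20):

```python
# The first three (a, b) pairs from Listing 31.2; extend with the full data set.
DATA = [(0.008830073673861905, 0.008752332965198615),
        (0.018694563401923325, 0.018347254459257438),
        (0.05645469964408556, 0.05332752398796371)]

def objective(g, data=DATA):
    """Sum of squared errors of the polynomial
    g[5]*a^5 + ... + g[1]*a + g[0] over the data set; smaller is better."""
    return sum((sum(g[i] * a**i for i in range(6)) - b) ** 2
               for a, b in data)
```

Any real-valued optimizer can then minimize `objective` over six-dimensional vectors g.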
93. Do the same as in Task 92, but instead of the polynomial, try to fit the function
g_2 a^2 + g_4 a + sin(g_3 a + g_2) + g_1 e^(a g_0).
[20 points]
94. Combine what you have learned from Task 91, 92, and 93: Develop a system which in
a first optimization run synthesizes a formula F via symbolic regression and Genetic
Programming. In a second step, this system should discover all nodes which represent
ephemeral random constants (see Definition D49.3) and optimize their values with a
numerical optimization procedure, while keeping the structure of the formula F intact.
In other words, first discover a good formula structure, and then adapt it in the best
possible way to the available data. Both steps should be performed by an automatic
procedure and provide you with the final result without needing manual interaction.
Repeat the experiments multiple times. Try to find functions that fit different data
created by yourself. What conclusions can you draw about the result?
[50 points]
95. Replace the Evolutionary Algorithm as the underlying optimization method for Genetic
Programming-based symbolic regression with Simulated Annealing or a Hill Climber and apply the resulting
system to Task 91 or 94. How does the result change? What are your conclusions about
GP, especially in terms of the binary reproduction operation crossover (which is not
used in Simulated Annealing or Hill Climbing)?
[20 points]
96. Instead of a tree-based representation for programs and formulas, as used in Tasks 91
to 95, implement a linear Genetic Programming (LGP) system which provides mathematical
operations in a style similar to assembler code. Repeat the experiment defined
in Task 91 with this LGP system and compare the performance with the tree-based,
Standard Genetic Programming approach.
[70 points]
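As a starting point for Task 96, a minimal register-machine interpreter for such linear programs might look like this (Python sketch; the instruction format, opcode set, and register conventions are assumptions of ours, not a prescribed LGP design):

```python
import math

def run_lgp(program, a, n_regs=4):
    """Interpret a linear program: each instruction is a tuple
    (op, dest, src1, src2) over a small register file. Register r[0]
    is initialized with the input a and holds the output at the end."""
    ops = {"add": lambda x, y: x + y,
           "sub": lambda x, y: x - y,
           "mul": lambda x, y: x * y,
           "sin": lambda x, y: math.sin(x)}  # unary: second source ignored
    r = [0.0] * n_regs
    r[0] = a
    for op, dest, s1, s2 in program:
        r[dest] = ops[op](r[s1], r[s2])
    return r[0]
```

For example, the two-instruction program `[("sin", 1, 0, 0), ("add", 0, 0, 1)]` computes a + sin(a); an LGP system would evolve such instruction lists instead of trees.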
Chapter 32
Evolutionary Programming
32.1 General Information on Evolutionary Programming
32.1.1 Applications and Examples
Table 32.1: Applications and Examples of Evolutionary Programming.
Area References
Chemistry [848]
Control [1536]
Data Mining [2392]
Distributed Algorithms and Systems [2711]
Engineering [303, 1536, 1894, 2018]
Function Optimization [169, 2711]
Games [940]
Mathematics [1702]
Physics [848]
32.1.2 Books
1. Evolutionary Computation: The Fossil Record [939]
2. Artificial Intelligence through Simulated Evolution [945]
3. Blondie24: Playing at the Edge of AI [940]
4. Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms [167]
5. Genetic Algorithms + Data Structures = Evolution Programs [1886]
32.1.3 Conferences and Workshops
Table 32.2: Conferences and Workshops on Evolutionary Programming.
GECCO: Genetic and Evolutionary Computation Conference
See Table 28.5 on page 317.
CEC: IEEE Conference on Evolutionary Computation
See Table 28.5 on page 317.
EvoWorkshops: Real-World Applications of Evolutionary Computing
See Table 28.5 on page 318.
PPSN: Conference on Parallel Problem Solving from Nature
See Table 28.5 on page 318.
WCCI: IEEE World Congress on Computational Intelligence
See Table 28.5 on page 318.
EMO: International Conference on Evolutionary Multi-Criterion Optimization
See Table 28.5 on page 319.
AE/EA: Artificial Evolution
See Table 28.5 on page 319.
EP: Annual Conference on Evolutionary Programming
History: 1998/05: San Diego, CA, USA, see [2208]
1997/04: Indianapolis, IN, USA, see [109]
1996/02: San Diego, CA, USA, see [946]
1995/03: San Diego, CA, USA, see [1848]
1994/02: San Diego, CA, USA, see [2444]
1993/02: La Jolla, CA, USA, see [943]
1992/02: La Jolla, CA, USA, see [942]
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
ICANNGA: International Conference on Adaptive and Natural Computing Algorithms
See Table 28.5 on page 319.
ICCI: IEEE International Conference on Cognitive Informatics
See Table 10.2 on page 128.
ICNC: International Conference on Advances in Natural Computation
See Table 28.5 on page 319.
SEAL: International Conference on Simulated Evolution and Learning
See Table 10.2 on page 130.
BIOMA: International Conference on Bioinspired Optimization Methods and their Applications
See Table 28.5 on page 320.
BIONETICS: International Conference on Bio-Inspired Models of Network, Information, and Computing Systems
See Table 10.2 on page 130.
FEA: International Workshop on Frontiers in Evolutionary Algorithms
See Table 28.5 on page 320.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
MICAI: Mexican International Conference on Artificial Intelligence
See Table 10.2 on page 131.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
WSC: Online Workshop/World Conference on Soft Computing (in Industrial Applications)
See Table 10.2 on page 131.
CIMCA: International Conference on Computational Intelligence for Modelling, Control and Automation
See Table 28.5 on page 320.
CIS: International Conference on Computational Intelligence and Security
See Table 28.5 on page 320.
CSTST: International Conference on Soft Computing as Transdisciplinary Science and Technology
See Table 10.2 on page 132.
EUFIT: European Congress on Intelligent Techniques and Soft Computing
See Table 10.2 on page 132.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
EvoCOP: European Conference on Evolutionary Computation in Combinatorial Optimization
See Table 28.5 on page 321.
GEC: ACM/SIGEVO Summit on Genetic and Evolutionary Computation
See Table 28.5 on page 321.
GEM: International Conference on Genetic and Evolutionary Methods
See Table 28.5 on page 321.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
SoCPaR: International Conference on SOft Computing and PAttern Recognition
See Table 10.2 on page 133.
Progress in Evolutionary Computation: Workshops on Evolutionary Computation
See Table 28.5 on page 321.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
AJWS: Australia-Japan Joint Workshop on Intelligent & Evolutionary Systems
See Table 28.5 on page 321.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
ASC: IASTED International Conference on Artificial Intelligence and Soft Computing
See Table 10.2 on page 134.
EvoNUM: European Event on Bio-Inspired Algorithms for Continuous Parameter Optimisation
See Table 28.5 on page 321.
FOCI: IEEE Symposium on Foundations of Computational Intelligence
See Table 28.5 on page 321.
GICOLAG: Global Optimization Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
IJCCI: International Conference on Computational Intelligence
See Table 28.5 on page 321.
IWNICA: International Workshop on Nature Inspired Computation and Applications
See Table 28.5 on page 321.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna (Polish National Conference on Evolutionary Algorithms and Global Optimization)
See Table 10.2 on page 134.
NaBIC: World Congress on Nature and Biologically Inspired Computing
See Table 28.5 on page 322.
WPBA: Workshop on Parallel Architectures and Bioinspired Algorithms
See Table 28.5 on page 322.
EvoRobots: International Symposium on Evolutionary Robotics
See Table 28.5 on page 322.
ICEC: International Conference on Evolutionary Computation
See Table 28.5 on page 322.
IWACI: International Workshop on Advanced Computational Intelligence
See Table 28.5 on page 322.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
SEMCCO: International Conference on Swarm, Evolutionary and Memetic Computing
See Table 28.5 on page 322.
SSCI: IEEE Symposium Series on Computational Intelligence
See Table 28.5 on page 322.
32.1.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Genetic Programming and Evolvable Machines published by Kluwer Academic Publishers and
Springer Netherlands
Chapter 33
Differential Evolution
33.1 Introduction

Differential Evolution¹ (DE; also DES, for Differential Evolution Strategy) is a method for mathematical optimization
of multidimensional functions [286, 408, 903, 1662, 1866, 1882, 2220, 2620]. Developed by
Storn and Price [2621], the DE technique was invented in order to solve the Chebyshev
polynomial fitting problem. It has proven to be a very reliable optimization strategy for
many different tasks whose parameters can be encoded as real vectors.
In numerical optimization, very little knowledge exists at the beginning about what
is good or bad. Points are usually sampled uniformly distributed in the search space. The
search operations need to perform steps with larger sizes in order to move towards the
regions with the best fitness. As the basins of attraction of the optima are reached, smaller step
sizes are required to refine the solutions. The closer the optimization algorithm gets to an
optimum, the smaller the steps should become in order to approach it closer and closer.
Differential Evolution is an Evolutionary Algorithm which uses the assumption that
selection will make the population converge to the basin of attraction of an optimum as
the basis for self-adaptation. As the points in the population approach an optimum (or better:
when only those points close to the optimum survive selection), their distances to each
other get smaller. This distance information is used to derive a proper step width for the
search operation. In other words, initially large inter-point distances lead to larger search
steps. As the search progresses, the search steps get smaller, hence allowing the optimization
process to converge to the optimum. Vice versa, the convergence to an optimum leads to
smaller search steps. With this method, Differential Evolution does not need to use explicit
endogenous information in the sense of Evolution Strategies (see Section 30.5.2); instead, it
uses the information stored in the whole population.
33.2 Ternary Recombination

The essential idea behind Differential Evolution is the way in which the (ternary) recombination
operator recombineDE is defined for creating new genotypes. It takes three parental
individuals g_1, g_2, and g_3 as parameters. The difference between g_1 and g_2 in the search
space G is computed, weighted with a weight F ∈ R, and added to the third parent g_3.

recombineDE(g_1, g_2, g_3) = g_3 + F * (g_1 - g_2)    (33.1)
¹ http://en.wikipedia.org/wiki/Differential_evolution [accessed 2007-07-03]
Except for determining F, no additional probability distribution has to be used in the
basic Differential Evolution scheme. There exists a common nomenclature for Differential
Evolution algorithms: DE/A/p/B, where
1. DE stands for Differential Evolution,
2. A is the way the individuals are selected to compute the differential information,
3. p is the number of individual pairs used for computing the difference information used
in the recombination operation, and
4. B identifies the recombination operator to be used.
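Equation 33.1 and the widely used DE/rand/1/bin scheme can be sketched in Python as follows (the parameter defaults F = 0.8 and CR = 0.9 are common illustrative choices, not prescribed values, and the function names are ours):

```python
import random

def recombine_de(g1, g2, g3, F=0.8):
    """Ternary DE recombination: g3 + F * (g1 - g2), per Equation 33.1."""
    return [x3 + F * (x1 - x2) for x1, x2, x3 in zip(g1, g2, g3)]

def de_rand_1_bin(parent, pop, F=0.8, CR=0.9):
    """One DE/rand/1/bin trial vector: mutate via three distinct random
    population members, then apply binomial crossover with the parent."""
    g1, g2, g3 = random.sample([p for p in pop if p is not parent], 3)
    mutant = recombine_de(g1, g2, g3, F)
    j_rand = random.randrange(len(parent))  # at least one gene from the mutant
    return [m if (random.random() < CR or j == j_rand) else p
            for j, (p, m) in enumerate(zip(parent, mutant))]
```

The trial vector then replaces the parent only if it is at least as good, which yields the greedy selection typical of DE.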
33.2.1 Advanced Operators

The algorithm is completely self-organizing. This classical reproduction strategy has been
complemented with new ideas like triangle mutation and alternations with weighted directed
strategies. Gao and Wang [1017] emphasize the close similarities between the reproduction
operators of Differential Evolution and the search step of the Downhill Simplex. Thus, it
is only logical to combine or to compare the two methods as we show in ?? on page ??.
Further improvements to the basic Differential Evolution scheme have been contributed, for
instance, by Kaelo and Ali. Their DERL and DELB algorithms outperformed [1475–1477]
standard DE on the test benchmark compiled by Ali et al. [72].
33.3 General Information on Differential Evolution
33.3.1 Applications and Examples
Table 33.1: Applications and Examples of Differential Evolution.
Area References
Combinatorial Problems [286, 527]
Computer Graphics [2707]
Distributed Algorithms and Systems [2315]
Economics and Finances [1060]
Engineering [2860]
Function Optimization [408, 551, 1017, 1476, 2130, 2619, 2621, 3020]
Graph Theory [2315, 2793]
Logistics [527]
Mathematics [2620]
Motor Control [490]
Software [2620]
Wireless Communication [2793]
33.3.2 Books
1. Differential Evolution: A Practical Approach to Global Optimization [2220]
2. Differential Evolution: In Search of Solutions [903]
33.3.3 Conferences and Workshops
Table 33.2: Conferences and Workshops on Differential Evolution.
GECCO: Genetic and Evolutionary Computation Conference
See Table 28.5 on page 317.
CEC: IEEE Conference on Evolutionary Computation
See Table 28.5 on page 317.
EvoWorkshops: Real-World Applications of Evolutionary Computing
See Table 28.5 on page 318.
PPSN: Conference on Parallel Problem Solving from Nature
See Table 28.5 on page 318.
WCCI: IEEE World Congress on Computational Intelligence
See Table 28.5 on page 318.
EMO: International Conference on Evolutionary Multi-Criterion Optimization
See Table 28.5 on page 319.
AE/EA: Artificial Evolution
See Table 28.5 on page 319.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
ICANNGA: International Conference on Adaptive and Natural Computing Algorithms
See Table 28.5 on page 319.
ICCI: IEEE International Conference on Cognitive Informatics
See Table 10.2 on page 128.
ICNC: International Conference on Advances in Natural Computation
See Table 28.5 on page 319.
SEAL: International Conference on Simulated Evolution and Learning
See Table 10.2 on page 130.
BIOMA: International Conference on Bioinspired Optimization Methods and their Applications
See Table 28.5 on page 320.
BIONETICS: International Conference on Bio-Inspired Models of Network, Information, and Computing Systems
See Table 10.2 on page 130.
FEA: International Workshop on Frontiers in Evolutionary Algorithms
See Table 28.5 on page 320.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
MICAI: Mexican International Conference on Artificial Intelligence
See Table 10.2 on page 131.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
WSC: Online Workshop/World Conference on Soft Computing (in Industrial Applications)
See Table 10.2 on page 131.
CIMCA: International Conference on Computational Intelligence for Modelling, Control and Automation
See Table 28.5 on page 320.
CIS: International Conference on Computational Intelligence and Security
See Table 28.5 on page 320.
CSTST: International Conference on Soft Computing as Transdisciplinary Science and Technology
See Table 10.2 on page 132.
EUFIT: European Congress on Intelligent Techniques and Soft Computing
See Table 10.2 on page 132.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
EvoCOP: European Conference on Evolutionary Computation in Combinatorial Optimization
See Table 28.5 on page 321.
GEC: ACM/SIGEVO Summit on Genetic and Evolutionary Computation
See Table 28.5 on page 321.
GEM: International Conference on Genetic and Evolutionary Methods
See Table 28.5 on page 321.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
SoCPaR: International Conference on SOft Computing and PAttern Recognition
See Table 10.2 on page 133.
Progress in Evolutionary Computation: Workshops on Evolutionary Computation
See Table 28.5 on page 321.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
AJWS: Australia-Japan Joint Workshop on Intelligent & Evolutionary Systems
See Table 28.5 on page 321.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
ASC: IASTED International Conference on Artificial Intelligence and Soft Computing
See Table 10.2 on page 134.
EvoNUM: European Event on Bio-Inspired Algorithms for Continuous Parameter Optimisation
See Table 28.5 on page 321.
FOCI: IEEE Symposium on Foundations of Computational Intelligence
See Table 28.5 on page 321.
GICOLAG: Global Optimization Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
IJCCI: International Conference on Computational Intelligence
See Table 28.5 on page 321.
IWNICA: International Workshop on Nature Inspired Computation and Applications
See Table 28.5 on page 321.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
NaBIC: World Congress on Nature and Biologically Inspired Computing
See Table 28.5 on page 322.
WPBA: Workshop on Parallel Architectures and Bioinspired Algorithms
See Table 28.5 on page 322.
EvoRobots: International Symposium on Evolutionary Robotics
See Table 28.5 on page 322.
ICEC: International Conference on Evolutionary Computation
See Table 28.5 on page 322.
IWACI: International Workshop on Advanced Computational Intelligence
See Table 28.5 on page 322.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
SEMCCO: International Conference on Swarm, Evolutionary and Memetic Computing
See Table 28.5 on page 322.
SSCI: IEEE Symposium Series on Computational Intelligence
See Table 28.5 on page 322.
33.3.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Genetic Programming and Evolvable Machines published by Kluwer Academic Publishers and
Springer Netherlands
Tasks T33
97. In Paragraph 56.2.1.1.4 on page 794, we give the implementation of the ternary search
operations of the simple Differential Evolution algorithm, which, in turn, you can find
implemented in Listing 56.19 on page 771. However, as discussed by Mezura-Montes
et al. [1882], Differential Evolution does not necessarily take only two points in the
search space to compute the difference information. Instead, it may take p point pairs.
In this case, the search operators are no longer ternary but (1+2p)-ary. Re-implement
the operators and the Differential Evolution algorithm for this general case. You may use
a programming language of your choice.
[50 points]
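The (1+2p)-ary generalization asked for in Task 97 could take the following shape (Python sketch; the function name and calling convention are ours, and with p = 1 it reduces to the ternary operator of Equation 33.1):

```python
def recombine_de_p(base, pairs, F=0.8):
    """(1+2p)-ary DE operator: base + F * sum over the p difference
    pairs (x_k - y_k), where pairs = [(x_1, y_1), ..., (x_p, y_p)]."""
    result = list(base)
    for x, y in pairs:
        result = [r + F * (xi - yi) for r, xi, yi in zip(result, x, y)]
    return result
```

A full implementation would draw the 2p difference points plus the base point as distinct random individuals from the population.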
98. In Task 90 on page 377, an Evolution Strategy was applied to an instance of the Traveling
Salesman Problem and its results were compared to what we obtain with a simple
Hill Climber that uses the straightforward integer-string permutation representation
discussed in Section 29.3.1.2 on page 329. Repeat this experiment, but now test some
Differential Evolution variants instead of the Evolution Strategy. Compare your results
with those obtained with the other two algorithms in Task 90.
Chapter 34
Estimation Of Distribution Algorithms
34.1 Introduction

Traditional Evolutionary Algorithms are based directly on the natural paragon of Darwinian
evolution. The candidate solutions are considered to be equivalent to living organisms in
nature, which compete for survival (selection) and reproduce if successful. Estimation of
Distribution Algorithms¹ (EDAs, [126, 1683, 1974, 2153, 2158]) are different. Instead of
improving possible solutions step by step, they try to learn what a perfect solution should
look like.
This is done from a stochastic point of view: Initially, nothing is known about where
the optimal solution is located in the search space, so the probability that any point which
could be investigated is the optimum is basically the same. With every candidate solution
which has actually been evaluated with the objective functions, more information is gained.
In Genetic Algorithms or Genetic Programming, the population begins to move to certain,
promising regions. From the stochastic point of view, this corresponds to the assumption
that the probability that the optimum is contained in these regions is estimated to be
high. The spatially denser the population becomes, the higher this probability is assumed to be.
EDAs abandon the biological example of repeated sexual and asexual reproduction [2153]
and instead try to estimate the probability distribution M directly.
In case of an Estimation of Distribution Algorithm solving a continuous optimization
problem, initially, the probability model M(t = 0) at time step t = 0 may be estimated with
a uniform distribution. Step by step, the variance D^2[M] of the distribution corresponding to
the parameters of M should decrease and, in case that there exists a single, global optimum
x*, the goal is that D^2[M(t)] = 0 and E[M(t)] = x* are approached for t → ∞ (ideally,
earlier) [1197].
In Figure 34.1 we sketch the information held in a population of, for instance, an Evolution
Strategy in Fig. 34.1.a to Fig. 34.1.c and the information held in a model of an Estimation of
Distribution Algorithm in Fig. 34.1.d to 34.1.f. In the first row of the figure, from the left
to the right, it can be seen how the population of the ES converges. The model used in a
real-valued EDA may converge similarly. In this case, its parameters may shrink down and
become smaller and smaller, thus decreasing the number of likely sampled genotypes.
¹ http://en.wikipedia.org/wiki/Estimation_of_distribution_algorithm [accessed 2009-08-20]
[Figure with six panels: Fig. 34.1.a: Population 1; Fig. 34.1.b: Population 3; Fig. 34.1.c: Population 4; Fig. 34.1.d: Model 1; Fig. 34.1.e: Model 3; Fig. 34.1.f: Model 4. The model panels plot the axes s_1 and s_2 together with the model parameters m_1 and m_2.]
Figure 34.1: Comparison between information in a population and in a model.
34.2 General Information on Estimation Of Distribution Algorithms
34.2.1 Applications and Examples
Table 34.1: Applications and Examples of Estimation Of Distribution Algorithms.
Area References
Combinatorial Problems [366, 534, 1683, 2158]
Data Mining [42, 1683, 1784, 1910, 2069, 2398, 2917]
Digital Technology [3093]
Distributed Algorithms and Systems [1435, 2455, 2456, 2631, 2632]
Engineering [117, 1435, 2158, 2860, 3093]
Function Optimization [2038]
Graph Theory [534, 1435, 2455, 2456, 2631, 2632, 2793, 2794]
Logistics [1121, 1683]
Mathematics [2038]
Military and Defense [3046]
Motor Control [664]
Physics [2187]
Software [117]
Wireless Communication [534, 2158, 2793, 2794]
34.2.2 Books
1. Scalable Optimization via Probabilistic Modeling: From Algorithms to Applications [2158]
2. Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation [1683]
3. Towards a New Evolutionary Computation: Advances on Estimation of Distribution Algorithms [1782]
4. Hierarchical Bayesian Optimization Algorithm: Toward a New Generation of Evolutionary Algorithms [2145]
34.2.3 Conferences and Workshops
Table 34.2: Conferences and Workshops on Estimation Of Distribution Algorithms.
GECCO: Genetic and Evolutionary Computation Conference
See Table 28.5 on page 317.
CEC: IEEE Conference on Evolutionary Computation
See Table 28.5 on page 317.
EvoWorkshops: Real-World Applications of Evolutionary Computing
See Table 28.5 on page 318.
PPSN: Conference on Parallel Problem Solving from Nature
See Table 28.5 on page 318.
WCCI: IEEE World Congress on Computational Intelligence
See Table 28.5 on page 318.
EMO: International Conference on Evolutionary Multi-Criterion Optimization
See Table 28.5 on page 319.
AE/EA: Artificial Evolution
See Table 28.5 on page 319.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
ICANNGA: International Conference on Adaptive and Natural Computing Algorithms
See Table 28.5 on page 319.
ICCI: IEEE International Conference on Cognitive Informatics
See Table 10.2 on page 128.
ICNC: International Conference on Advances in Natural Computation
See Table 28.5 on page 319.
SEAL: International Conference on Simulated Evolution and Learning
See Table 10.2 on page 130.
BIOMA: International Conference on Bioinspired Optimization Methods and their Applications
See Table 28.5 on page 320.
BIONETICS: International Conference on Bio-Inspired Models of Network, Information, and Computing Systems
See Table 10.2 on page 130.
FEA: International Workshop on Frontiers in Evolutionary Algorithms
See Table 28.5 on page 320.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
MICAI: Mexican International Conference on Artificial Intelligence
See Table 10.2 on page 131.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
WSC: Online Workshop/World Conference on Soft Computing (in Industrial Applications)
See Table 10.2 on page 131.
CIMCA: International Conference on Computational Intelligence for Modelling, Control and Automation
See Table 28.5 on page 320.
CIS: International Conference on Computational Intelligence and Security
See Table 28.5 on page 320.
CSTST: International Conference on Soft Computing as Transdisciplinary Science and Technology
See Table 10.2 on page 132.
EUFIT: European Congress on Intelligent Techniques and Soft Computing
See Table 10.2 on page 132.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
EvoCOP: European Conference on Evolutionary Computation in Combinatorial Optimization
See Table 28.5 on page 321.
GEC: ACM/SIGEVO Summit on Genetic and Evolutionary Computation
See Table 28.5 on page 321.
GEM: International Conference on Genetic and Evolutionary Methods
See Table 28.5 on page 321.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
OBUPM: Optimization by Building and Using Probabilistic Models Workshop
History: 2001/07: San Francisco, CA, USA, see [2149]
2000/07: Las Vegas, NV, USA, see [2155]
SoCPaR: International Conference on SOft Computing and PAttern Recognition
See Table 10.2 on page 133.
Progress in Evolutionary Computation: Workshops on Evolutionary Computation
See Table 28.5 on page 321.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
AJWS: Australia-Japan Joint Workshop on Intelligent & Evolutionary Systems
See Table 28.5 on page 321.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
ASC: IASTED International Conference on Artificial Intelligence and Soft Computing
See Table 10.2 on page 134.
EvoNUM: European Event on Bio-Inspired Algorithms for Continuous Parameter Optimisation
See Table 28.5 on page 321.
FOCI: IEEE Symposium on Foundations of Computational Intelligence
See Table 28.5 on page 321.
GICOLAG: Global Optimization Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
IJCCI: International Conference on Computational Intelligence
See Table 28.5 on page 321.
IWNICA: International Workshop on Nature Inspired Computation and Applications
See Table 28.5 on page 321.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
NaBIC: World Congress on Nature and Biologically Inspired Computing
See Table 28.5 on page 322.
WPBA: Workshop on Parallel Architectures and Bioinspired Algorithms
See Table 28.5 on page 322.
EvoRobots: International Symposium on Evolutionary Robotics
See Table 28.5 on page 322.
ICEC: International Conference on Evolutionary Computation
See Table 28.5 on page 322.
IWACI: International Workshop on Advanced Computational Intelligence
See Table 28.5 on page 322.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
SEMCCO: International Conference on Swarm, Evolutionary and Memetic Computing
See Table 28.5 on page 322.
SSCI: IEEE Symposium Series on Computational Intelligence
See Table 28.5 on page 322.
34.2.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Genetic Programming and Evolvable Machines published by Kluwer Academic Publishers and
Springer Netherlands
34.3 Algorithm

In Algorithm 34.1, we define the basic structure PlainEDA of Estimation of Distribution
Algorithms, and in Algorithm 34.2, the more general structure EDA MO is given. The
problem spaces X of the EDAs are usually the real vectors R^n or bit strings B^n, and they
also search in these spaces (G = X), as we assume in Algorithm 34.2. Both algorithms begin
by creating an initial population pop (via createPop). createPop might just produce
a set of individuals representing random points in the search space or may as well create the
points systematically, following some grid.
After this is done, the actual loop of the EDA begins. First, we wish to compute the
objective values of the candidate solutions. In the general case, where the search space G
is not necessarily the same as the problem space X, we first need to derive the points p.x
which belong to the initially created elements p.g in the search space for all p ∈ pop. This
Algorithm 34.1: x ← PlainEDA(f, ps, mps)
Input: f: the objective function
Input: ps: the population size
Input: mps: the mating pool size
Input: [implicit] terminationCriterion(): the termination criterion
Data: pop: the population
Data: mate: the mating pool
Data: M: the stochastic model
Data: continue: a variable holding the termination criterion
Output: x: the best individual
1 begin
2   pop ← createPop(ps)
3   M ← initial model, uniform distribution
4   x ← pop[0]
5   continue ← true
6   while continue do
7     pop ← computeObjectives(pop, f)
8     x ← extractBest(pop ∪ {x}, f)[0]
9     if terminationCriterion() then
10      continue ← false
11    else
12      mate ← selection(pop, f, mps)
13      M ← buildModel(mate, M)
14      pop ← sampleModel(M, ps)
15  return x
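A compact, runnable sketch of this loop for a continuous minimization problem could look as follows (Python; the factorized Gaussian model, the truncation selection, and all parameter defaults are illustrative assumptions, one of many possible instantiations of Algorithm 34.1):

```python
import random
import statistics

def plain_eda(f, dim=2, ps=50, mps=10, generations=60):
    """Minimize f over R^dim: the model M is one (mean, stddev) pair per
    dimension, built from the mating pool and sampled independently."""
    model = [(0.0, 2.0)] * dim                      # broad initial model
    pop = [[random.gauss(m, s) for m, s in model] for _ in range(ps)]
    best = min(pop, key=f)
    for _ in range(generations):
        pop.sort(key=f)                             # truncation selection
        best = min(best, pop[0], key=f)             # keep best-so-far x
        mate = pop[:mps]
        model = [(statistics.mean(col), statistics.stdev(col) + 1e-12)
                 for col in zip(*mate)]             # buildModel
        pop = [[random.gauss(m, s) for m, s in model]
               for _ in range(ps)]                  # sampleModel
    return best

sphere = lambda x: sum(v * v for v in x)            # a simple test objective
```

Run on the sphere function, the per-dimension standard deviations shrink generation by generation, which is exactly the decreasing variance D^2[M(t)] discussed in Section 34.1.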
Algorithm 34.2: X ← EDA MO(OptProb, ps, mps)
Input: OptProb: a tuple (X, f, cmp_f) of the problem space X, the objective functions f, and a comparator function cmp_f
Input: ps: the population size
Input: mps: the mating pool size
Input: [implicit] gpm: the genotype-phenotype mapping
Input: [implicit] terminationCriterion(): the termination criterion
Data: pop: the population
Data: mate: the mating pool
Data: v: the fitness function
Data: M: the stochastic model
Data: continue: a variable holding the termination criterion
Data: archive: the archive with the best individuals
Output: X: the set of best candidate solutions discovered
1 begin
2   pop ← createPop(ps)
3   M ← initial model, uniform distribution
4   archive ← ∅
5   continue ← true
6   while continue do
7     pop ← performGPM(pop, gpm)
8     pop ← computeObjectives(pop, f)
9     archive ← extractBest(pop ∪ archive, cmp_f)
10    if terminationCriterion() then
11      continue ← false
12    else
13      v ← assignFitness(pop, cmp_f)
14      mate ← selection(pop, v, mps)
15      M ← buildModel(mate, M)
16      pop ← appendList(pop, sampleModel(M, ps))
17  return extractPhenotypes(archive)
would be done with a genotype-phenotype mapping gpm : G ↦ X whose application we
here denote as performGPM in the generalized algorithm variant Algorithm 34.2. This
procedure allows us, for instance, to use an EDA for bit strings Bⁿ while actually optimizing
antenna designs [3046].
Based on the candidate solutions p.x, we can determine the objective values (here
sketched as applying computeObjectives to pop). Normally, we deal with a single objective
function f (Algorithm 34.1), but multi-objective optimization may generally be performed
as well. In the latter case, outlined in Algorithm 34.2, the set f contains more than one
function f ∈ f. Then, an additional fitness assignment process assignFitness may be used
which breaks the objective values down to a single scalar fitness v(p) for each individual p,
denoting its performance in relation to the other individuals in the population pop. In the
usual, single-objective case, this is not required.

After the objective values are known, we can check whether we have discovered a new candidate
solution with better performance than the best one known so far (x, Algorithm 34.1).
In the general case where |f| > 1, there may be more than one candidate solution which
can be considered optimal from the perspective of the current knowledge, i.e., which is, for
instance, not dominated. We use the operation extractBest in both algorithm variants
to sketch this process. Notice that in the latter case, archive pruning strategies may be
necessary in order to prevent the archive of best individuals from overflowing.

Based on the objective values f(p.x) (Algorithm 34.1) or the fitness (Algorithm 34.2), a
selection step selection chooses mps of the best individuals from pop and puts them into
the mating pool mate. This may be done with any given selection algorithm, but truncation
or fitness proportionate selection are often preferred in EDAs. Notice that until now, the
EDA proceeds exactly like any normal Evolutionary Algorithm.
In the next step, however, a probabilistic model M representing the traits of the elements in
mate is computed with an EDA-specific procedure buildModel. This operation may also
involve previously accumulated knowledge such as models created in past generations, which
is illustrated by providing it with the previous model as a parameter.

Instead of creating a new population from the elements in mate by, for instance, applying
mutation and crossover operators as would be done in Genetic Algorithms or Genetic
Programming, the model M is used for this purpose. M represents a specific probability distribution
which can be sampled by using a random number generator to create elements following the
parameters defined in M. We therefore define the model-type-specific sampleModel
operation whose results can then either replace the old population completely or (as in the case
of Algorithm 34.2) be merged into it.

With the population of new individuals, the cycle starts all over again unless the
termination criterion terminationCriterion() was met. In the end, the single best
candidate solution discovered is returned in the simple case (Algorithm 34.1) or the archive
of best solutions in the general case (Algorithm 34.2).
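To make the abstract cycle concrete, here is a minimal Python sketch of Algorithm 34.1 for the single-objective case with G = X (so the gpm step is skipped). The function names, the OneMax demo problem, and the use of truncation selection in place of the generic selection step are our illustrative choices, not prescriptions from the text.

```python
import random

def plain_eda(f, sample_model, build_model, init_model,
              ps=50, mps=20, generations=60):
    """Sketch of the plain EDA cycle (Algorithm 34.1), minimizing f.

    sample_model(M, ps)      -> list of ps genotypes drawn from model M
    build_model(mate, M)     -> new model estimated from the mating pool
    init_model               -> the initial model (uniform distribution)
    """
    M = init_model
    best = None
    for _ in range(generations):            # stands in for terminationCriterion()
        pop = sample_model(M, ps)           # sampleModel
        pop.sort(key=f)                     # computeObjectives + ordering
        if best is None or f(pop[0]) < f(best):
            best = pop[0]                   # extractBest(pop ∪ {x}, f)[0]
        mate = pop[:mps]                    # truncation selection (our choice)
        M = build_model(mate, M)            # EDA-specific model update
    return best

# Demo: a univariate bit-string model on OneMax, phrased as minimization.
n = 16
def f(g): return g.count(0)                 # number of zero bits
def sample(M, ps):
    return [[1 if random.random() < M[j] else 0 for j in range(n)]
            for _ in range(ps)]
def build(mate, M):
    # relative frequency of 1-bits per locus in the mating pool
    return [sum(g[j] for g in mate) / len(mate) for j in range(n)]

random.seed(1)
best = plain_eda(f, sample, build, [0.5] * n)
```

With the frequency model above, this sketch is essentially the UMDA discussed in Section 34.4.1; swapping `build_model` and `sample_model` yields the other EDAs of this chapter.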
34.4 EDAs Searching Bit Strings
Most of the research on EDAs is performed in areas where the search spaces G are bit strings
g ∈ G = Bⁿ of a fixed length n.
34.4.1 UMDA: Univariate Marginal Distribution Algorithm
Maybe the simplest EDA for binary search spaces is the Univariate Marginal Distribution
Algorithm (UMDA) [1973, 1974]. We define it here as UMDA in Algorithm 34.3 and
provide a simple example implementation of its modules in Java on top of our basic EDA
(see Listing 56.20) in Paragraph 56.1.4.4.1 on page 781.

The UMDA constructs a model M storing one real number M[i] corresponding to the
estimated probability that the bit at index i in the genotype p⋆.g belonging to the optimal
[Figure residue removed. The figure walked through one UMDA cycle on 5-bit genotypes: (1) individuals are selected into the mating pool, (2) the model, e.g., M = (0.2, 0.6, 0.3, 0.4, 0.3), is built from the bit frequencies, (3) the sampling rules P(g[j] = 1) = M[j] are derived, (4) uniform random numbers randUni[0, 1) are drawn to sample new genotypes; the sampling steps are repeated ps (population size) times before the cycle returns to step 1.]
Figure 34.2: A rough sketch of the UMDA process.
Algorithm 34.3: x ← UMDA(f, ps, mps)
  Input: f: the objective function
  Input: ps: the population size
  Input: mps: the mating pool size
  Input: [implicit] terminationCriterion(): the termination criterion
  Input: [implicit] n: the number of bits per genotype
  Data: pop: the population
  Data: mate: the mating pool
  Data: M: the stochastic model
  Data: continue: a variable holding the termination criterion
  Output: x: the best individual
 1 begin
 2   M ← (0.5, 0.5, . . . , 0.5)  // see Listing 56.21 on page 781
 3   x ← null
 4   continue ← true
 5   while continue do
 6     pop ← sampleModelUMDA(M, ps)
 7     pop ← performGPM(pop, gpm)
 8     pop ← computeObjectives(pop, f)
 9     mate ← rouletteWheelSelection_r(pop, f, mps)
10     if x = null then x ← extractBest(mate, f)[0]
11     else x ← extractBest(mate ∪ {x}, f)[0]
12     if terminationCriterion() then continue ← false
13     else M ← buildModelUMDA(mate)
14   return x
candidate solution p⋆.x is 1. Initially, each element of the vector M is 0.5. The model
is sampled in order to fill the population with individuals. For this purpose, real-valued random
numbers randomUni[0, 1), uniformly distributed between 0 (inclusive) and 1 (exclusive),
are used. For the j-th bit in a new genotype p.g, such a number is drawn and the bit
becomes 1 if the number is smaller than M[j] and 0 otherwise. A rough sketch of the UMDA process is given
in Figure 34.2. We illustrate this process in Algorithm 34.4 as sampleModelUMDA and
implement it in Listing 56.23 on page 783.
Algorithm 34.4: pop ← sampleModelUMDA(M, ps)
  Input: M: the model
  Input: ps: the target population size
  Input: [implicit] n: the number of bits per genotype
  Data: i, j: counter variables
  Data: p: an individual record and its corresponding genotype p.g
  Output: pop: the new population
 1 begin
 2   pop ← ()
 3   for i ← 1 up to ps do
 4     p.g ← ()
 5     for j ← 0 up to n − 1 do
 6       if randomUni[0, 1) ≥ M[j] then p.g ← addItem(p.g, 0)
 7       else p.g ← addItem(p.g, 1)
 8     pop ← addItem(pop, p)
A genotype-phenotype mapping may be performed which transforms the genotypes p.g
of the individuals in the population pop to their corresponding phenotypes p.x if needed,
as is, for instance, the case in circuit design [3093]. Then, the objective values of the
candidate solutions p.x can be determined. UMDA prescribes no specific selection scheme.
The original papers [1973] discuss fitness proportionate and tournament selection, but setups
with truncation selection are used as well [3093]. With any of these operators (we chose the
fitness proportionate variant in Algorithm 34.3), mps individuals are copied into the mating
pool.
Algorithm 34.5: M ← buildModelUMDA(mate)
  Input: mate: the selected individuals
  Input: [implicit] n: the number of bits per genotype
  Data: i, j, c: counter variables
  Data: g: an element from the search space, g ∈ G
  Output: M: the new model
 1 begin
 2   M ← (0, 0, . . . , 0)
 3   for j ← 0 up to n − 1 do
 4     c ← 0
 5     for i ← 0 up to len(mate) − 1 do
 6       g ← mate[i].g
 7       if g[j] = 1 then c ← c + 1
 8     M[j] ← c / len(mate)
The model building process (Algorithm 34.5) of the UMDA is very simple; a Java example
implementation can be found in Listing 56.22 on page 782. For the bit at index j,
the number c of times it was 1 is counted over all genotypes g stored in the individual records
in the mating pool mate. c divided by the number of individuals in the mating pool hence is
the relative frequency h(p.x[j] = 1, len(mate)) and serves as the estimate of the probability
that the locus j should be 1 in the optimal solution. With the new probability vector, the
cycle starts again until the termination criterion terminationCriterion() is met.
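The two UMDA modules translate almost literally into Python. The snake_case names and the three-genotype worked example below are ours, not from the book:

```python
import random

def sample_model_umda(M, ps):
    """Algorithm 34.4 sketch: bit j of each new genotype becomes 1
    with probability M[j] (0 when the uniform draw is >= M[j])."""
    pop = []
    for _ in range(ps):
        g = [0 if random.random() >= M[j] else 1 for j in range(len(M))]
        pop.append(g)
    return pop

def build_model_umda(mate):
    """Algorithm 34.5 sketch: M[j] is the relative frequency of
    1-bits at locus j over the mating pool."""
    n = len(mate[0])
    return [sum(g[j] for g in mate) / len(mate) for j in range(n)]

# Worked example: three selected genotypes.
mate = [[1, 0, 1],
        [1, 1, 0],
        [1, 0, 0]]
M = build_model_umda(mate)   # -> [1.0, 1/3, 1/3]
```

Once a locus has frequency 1.0 (or 0.0) in the mating pool, the plain UMDA can never sample the other value there again; PBIL's learning rate (next section) softens exactly this behavior.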
34.4.2 PBIL: Population-Based Incremental Learning
One of the first steps into EDA research was the Population-Based Incremental Learning
algorithm² (PBIL) published in 1994 by Baluja [193, 194, 195]. Here we define it as
Algorithm 34.6 in the version given in [195].
Algorithm 34.6: x ← PBIL(f, ps, mps)
  Input: f: the objective function
  Input: ps: the population size
  Input: mps: the mating pool size
  Input: [implicit] n: the number of bits per genotype
  Input: [implicit] terminationCriterion(): the termination criterion
  Data: pop: the population
  Data: mate: the mating pool
  Data: M: the stochastic model
  Data: continue: a variable holding the termination criterion
  Output: x: the best individual
 1 begin
 2   M ← (0.5, 0.5, . . . , 0.5)
 3   x ← null
 4   continue ← true
 5   while continue do
 6     pop ← sampleModelUMDA(M, ps)
 7     pop ← performGPM(pop, gpm)
 8     pop ← computeObjectives(pop, f)
 9     mate ← truncationSelection(pop, f, mps)
10     if x = null then x ← extractBest(mate, f)[0]
11     else x ← extractBest(mate ∪ {x}, f)[0]
12     if terminationCriterion() then continue ← false
13     else M ← buildModelPBIL(mate, M)
14   return x
The structure of the models in PBIL is exactly the same as in the UMDA: for each
bit in the genotypes, a real number denotes the estimated probability that this bit should
be 1 in the optimal genotype. The model is therefore sampled like in the UMDA, and the
algorithm itself also proceeds in pretty much the same way.

The main difference is the model building step presented as Algorithm 34.7. Here, a new
model M is created by merging the current model M′ and the information provided by the
mating pool mate. The learning rate λ defines how much each bit in the genotypes g of the
selected individuals p ∈ mate contributes to the new model. In PBIL, truncation selection
is used, which allows only the best mps individuals to enter the mating pool.
Algorithm 34.7: M ← buildModelPBIL(mate, M′)
  Input: mate: the selected individuals
  Input: M′: the old model
  Input: [implicit] λ: the learning rate
  Input: [implicit] n: the number of bits per genotype
  Data: i: a counter variable
  Data: g: an element from the search space, g ∈ G
  Data: M_T: a temporary model
  Output: M: the new model
 1 begin
 2   M_T ← buildModelUMDA(mate)
 3   for i ← 0 up to n − 1 do
 4     M[i] ← (1 − λ) · M′[i] + λ · M_T[i]

It should be noted that in the original version of the algorithm given in [193], a simple
mutation was applied to the models, too: the single probabilities were randomly
changed by a very small number. Also, the mating pool size was one, i.e., only the best
individual had influence on M. Similar algorithms have been introduced by Kvasnicka et al.
[1652] (Hill Climbing with Learning, HCwL) and Mühlenbein [1973] (Incremental Univariate
Marginal Distribution Algorithm, IUMDA).

² http://en.wikipedia.org/wiki/Population-based_incremental_learning [accessed 2009-08-20]
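The PBIL update is a convex blend of the old model and the UMDA frequency model of the mating pool. A sketch, where we denote the learning rate as `alpha` (the symbol is our choice) and inline the UMDA frequency estimate:

```python
def build_model_pbil(mate, M_old, alpha=0.05):
    """Sketch of Algorithm 34.7: blend the old model M_old with the
    bit frequencies of the mating pool, weighted by the learning rate."""
    n = len(M_old)
    # buildModelUMDA: relative frequency of 1-bits per locus
    MT = [sum(g[j] for g in mate) / len(mate) for j in range(n)]
    return [(1 - alpha) * M_old[j] + alpha * MT[j] for j in range(n)]

# With a single selected genotype (mating pool size one, as in [193]),
# each probability moves alpha of the way towards that genotype's bits.
M = build_model_pbil([[1, 1, 0]], [0.5, 0.5, 0.5], alpha=0.1)
```

Because 0 < alpha < 1, a probability can approach but never reach 0 or 1 in finitely many steps, which keeps some residual exploration alive, unlike the plain UMDA.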
34.4.3 cGA: Compact Genetic Algorithm
The compact Genetic Algorithm (cGA) by Harik et al. [1198, 1199] again uses a vector of
(real-valued) probabilities to represent the estimation of the solutions with the best features
according to the objective function f. In Algorithm 34.8, we provide an outline of this
algorithm, which modifies the vector M so that there is a direct relation between it and the
population it represents [2153].

Algorithm 34.8: x ← cGA(f, ps)
  Input: f: the objective function
  Input: ps: the population size
  Input: [implicit] n: the number of bits per genotype
  Data: mate: the mating pool
  Data: M: the stochastic model
  Output: x: the best individual
 1 begin
 2   M ← (0.5, 0.5, . . . , 0.5)
 3   while ¬terminationCriterionCGA(M) do
 4     mate ← sampleModelUMDA(M, 2)
 5     mate ← performGPM(mate, gpm)
 6     mate ← computeObjectives(mate, f)
 7     M ← buildModelCGA(mate, M, ps)
 8   return transformModelCompGA(M)

The idea of the cGA is the following: Assume that we have
a normal Genetic Algorithm and two individuals p₁ and p₂ compete in a binary tournament.
In the single-objective case, usually one of them will win and reproduce whereas the other
vanishes. Assume further that p₁ prevails in the competition, i.e., p₁.x ≻ p₂.x, because
f(p₁.x) < f(p₂.x) in case of minimization, for instance. Then, the absolute frequency of its
corresponding genotype p₁.g in the population will increase by one. For a population of size
ps, this means that its relative frequency increases by 1/ps. This obviously also holds for every
single gene in p₁.g, since the individual is copied as a whole.
The cGA tries to emulate this situation by creating exactly two new genotypes
in each iteration, using the same sampling method already utilized in the UMDA
(sampleModelUMDA(M, 2), see Algorithm 34.4). These two individuals then compete with
each other, and the genes of the winner are then used to update the model M as sketched in
Algorithm 34.9.
Algorithm 34.9: M ← buildModelCGA(mate, M′, ps)
  Input: mate: the two individuals
  Input: M′: the old model
  Input: ps: the simulated population size
  Input: [implicit] n: the number of bits per genotype
  Data: p₁, p₂: the individual variables
  Data: i: a counter variable
  Output: M: the new model
 1 begin
 2   M ← M′
 3   p₁ ← mate[0]
 4   p₂ ← mate[1]
 5   if p₂.x ≻ p₁.x then
 6     p₁ ← p₂
 7     p₂ ← mate[0]
 8   for i ← n − 1 down to 0 do
 9     if p₁.g[i] ≠ p₂.g[i] then
10       if p₁.g[i] = 0 then M[i] ← M′[i] − 1/ps
11       else M[i] ← M′[i] + 1/ps
The optimization process has converged when all elements of the model M have become either 1
or 0. Then, only genotypes which exactly match the model can be sampled, and the vector
will not change anymore. Hence, as termination criterion terminationCriterionCGA, we
simply need to check whether this convergence has already taken place or not.
Algorithm 34.10: (true, false) ← terminationCriterionCGA(M)
  Input: M: the model
  Input: [implicit] n: the number of bits per genotype
  Data: i: a counter
  Output: (true, false): whether the cGA should terminate
 1 begin
 2   for i ← n − 1 down to 0 do
 3     if (M[i] > 0) ∧ (M[i] < 1) then return false
 4   return true
The result of the optimization process is the phenotype gpm(g) corresponding to the
genotype g defined by the model M in the converged state. The trivial way to produce
this element is specified here as Algorithm 34.11.
Algorithm 34.11: x ← transformModelCompGA(M)
  Input: M: the model
  Input: [implicit] n: the number of bits per genotype
  Data: i: a counter
  Data: g: the genotype, g ∈ G
  Output: x: the optimization result, x ∈ X
 1 begin
 2   g ← ()
 3   for i ← 0 up to n − 1 do
 4     if M[i] < 0.5 then g ← addItem(g, 0)
 5     else g ← addItem(g, 1)
 6   return gpm(g)

Further theoretical results concerning the efficiency of the cGA can be found in [833,
834, 2271]. Because of its compact structure (only one real vector and two bit strings need
to be held in memory), the cGA is especially suitable for implementation in hardware [117].
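Folding Algorithms 34.8 to 34.11 together yields a very small program. This Python sketch (all names ours) combines the sampling, the binary tournament, the 1/ps model shift, and the convergence test into one loop, with OneMax phrased as minimization standing in for the objective:

```python
import random

def cga(f, n, ps=31, max_steps=20000):
    """Sketch of the compact GA minimizing f over n-bit strings.

    M[i] is the probability of a 1-bit at locus i; each tournament
    shifts the probabilities of the differing loci by 1/ps towards
    the winner's bits, emulating one replacement in a size-ps population."""
    M = [0.5] * n
    steps = 0
    while any(0.0 < m < 1.0 for m in M) and steps < max_steps:
        g1 = [1 if random.random() < m else 0 for m in M]   # sample two
        g2 = [1 if random.random() < m else 0 for m in M]   # genotypes
        if f(g2) < f(g1):
            g1, g2 = g2, g1                                  # g1 is the winner
        for i in range(n):
            if g1[i] != g2[i]:
                M[i] += 1.0 / ps if g1[i] == 1 else -1.0 / ps
                M[i] = min(1.0, max(0.0, M[i]))              # keep in [0, 1]
        steps += 1
    # transformModelCompGA: round the (near-)converged model to a genotype
    return [1 if m >= 0.5 else 0 for m in M]

random.seed(2)
best = cga(lambda g: g.count(0), n=12)   # OneMax as minimization
```

The `max_steps` guard is our addition: the pure convergence test of Algorithm 34.10 suffices in theory, but a cap is prudent in practice.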
34.5 EDAs Searching Real Vectors
A variety of Estimation of Distribution Algorithms has been developed for finding optimal
real vectors from ℝⁿ, where n is the number of elements of the genotypes. Mühlenbein
et al. [1976], for instance, introduced one of the first real-valued EDAs, and in the following,
we discuss multiple such approaches.
34.5.1 SHCLVND: Stochastic Hill Climbing with Learning by Vectors of Normal Distribution

The Stochastic Hill Climbing with Learning by Vectors of Normal Distribution (SHCLVND)
by Rudlof and Köppen [2346] is a kind of real-valued version of the PBIL. For search spaces
Algorithm 34.12: x ← SHCLVND(f, ps, mps)
  Input: f: the objective function
  Input: ps: the population size
  Input: mps: the mating pool size
  Input: [implicit] terminationCriterion(): the termination criterion
  Input: [implicit] n: the number of elements of the genotypes
  Input: [implicit] Ǧ, Ĝ: the minimum and maximum vectors delimiting the search space G = [Ǧ[0], Ĝ[0]] × [Ǧ[1], Ĝ[1]] × . . .
  Input: [implicit] rangeToStdDev: a factor for converting the range to the initial standard deviations σ, rangeToStdDev = 0.5 in [2346]
  Data: pop: the population
  Data: mate: the mating pool
  Data: M = (μ, σ): the stochastic model
  Data: continue: a variable holding the termination criterion
  Output: x: the best individual
 1 begin
 2   M.μ ← ½ · (Ǧ + Ĝ)
 3   M.σ ← rangeToStdDev · (Ĝ − Ǧ)
 4   x ← null
 5   continue ← true
 6   while continue do
 7     pop ← sampleModelSHCLVND(M, ps)
 8     pop ← performGPM(pop, gpm)
 9     pop ← computeObjectives(pop, f)
10     mate ← truncationSelection(pop, f, mps)
11     if x = null then x ← extractBest(mate, f)[0]
12     else x ← extractBest(mate ∪ {x}, f)[0]
13     if terminationCriterion() then continue ← false
14     else M ← buildModelSHCLVND(mate, M)
15   return x
G ⊆ ℝⁿ, it builds models M which consist of a vector of expected values M.μ and a vector
of standard deviations M.σ. We introduce it here as Algorithm 34.12.

Initially, the expected value vector M.μ is set to the point right in the middle of the search
space, which is delimited by the vector of minimum values Ǧ and the vector of maximum
values Ĝ. Notice that these limits are soft and genotypes may be created which lie outside
them (due to the unconstrained normal distribution used for sampling). The vector of
standard deviations M.σ is initialized by multiplying a constant rangeToStdDev with the
range Ĝ − Ǧ. The constant rangeToStdDev is set to rangeToStdDev = 0.5 in
the original paper by Rudlof and Köppen [2346].
Algorithm 34.13: pop ← sampleModelSHCLVND(M, ps)
  Input: M = (μ, σ): the model
  Input: ps: the target population size
  Input: [implicit] n: the number of elements of the genotypes
  Data: i, j: counter variables
  Data: p: an individual record and its corresponding genotype p.g
  Data: μ: the vector of estimated expected values
  Data: σ: the vector of standard deviations
  Output: pop: the new population
 1 begin
 2   pop ← ()
 3   μ ← M.μ
 4   σ ← M.σ
 5   for i ← 1 up to ps do
 6     p.g ← ()
 7     for j ← 0 up to n − 1 do
 8       p.g ← addItem(p.g, randomNorm(μ[j], σ[j]))
 9     pop ← addItem(pop, p)
Algorithm 34.13 illustrates how the models M = (μ, σ) are sampled in order to create
new points in the search space. For this purpose, one univariate normally distributed random
number randomNorm(μ[j], σ[j]) is created for each locus j of the genotypes, according to
the value μ[j] denoting the expected location of the optimum in the j-th dimension and the
prescribed standard deviation σ[j].
Algorithm 34.14: M ← buildModelSHCLVND(mate, M′)
  Input: mate: the selected individuals
  Input: M′: the old model
  Input: [implicit] n: the number of elements of the genotypes
  Input: [implicit] λ: the learning rate
  Input: [implicit] σ_red ∈ [0, 1]: the standard deviation reduction rate
  Data: ḡ: the arithmetical mean of the selected vectors
  Output: M: the new model
 1 begin
 2   ḡ ← (1 / len(mate)) · Σ_{i=0}^{len(mate)−1} mate[i].g
 3   M.μ ← (1 − λ) · M′.μ + λ · ḡ
 4   M.σ ← σ_red · M′.σ
 5   return M
After sampling a population pop of ps individuals, performing a genotype-phenotype
mapping gpm if necessary, and computing the objective values for the individual records
in the computeObjectives step, mps elements are selected using truncation selection,
exactly like in the PBIL. These mps best elements are then used to update the model, a
process here defined as buildModelSHCLVND in Algorithm 34.14. Like PBIL's model
updating step (Algorithm 34.7), buildModelSHCLVND uses a learning rate λ for updating
the expected value vector component M.μ, which defines how much this vector is
shifted in the direction of the mean vector ḡ of the genotypes p.g of the individuals p in
the mating pool mate.
The standard deviation vector component M.σ is not learned. Instead, it is simply
multiplied with a reduction factor σ_red ∈ [0, 1]. For the case where all elements of M.σ
are the same, Rudlof and Köppen [2346] assume a goal value σ_G that the standard deviations
are to take on after t̂ generations of the algorithm and set

  σ_red = (σ_G)^(1/t̂)   (34.1)

which availed with t̂ = 2500 generations and σ_G = 1/1000 to σ_red ≈ 0.997241 in their
experiment [2346].
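The whole SHCLVND loop fits into a short Python sketch. The parameter values below and all names are illustrative choices of ours, not the settings from [2346]; the sphere function serves as a stand-in objective:

```python
import random

def shclvnd(f, lo, hi, ps=60, mps=15, gens=300,
            alpha=0.1, sigma_red=0.98, range_to_sd=0.5):
    """Sketch of Algorithm 34.12 on a box in R^n, minimizing f.

    Model: a mean vector mu and a standard deviation vector sigma.
    alpha denotes the learning rate (the symbol is our choice)."""
    n = len(lo)
    mu = [(lo[j] + hi[j]) / 2 for j in range(n)]            # middle of the box
    sigma = [range_to_sd * (hi[j] - lo[j]) for j in range(n)]
    for _ in range(gens):
        pop = [[random.gauss(mu[j], sigma[j]) for j in range(n)]
               for _ in range(ps)]                          # sampleModelSHCLVND
        pop.sort(key=f)
        mate = pop[:mps]                                    # truncation selection
        mean = [sum(g[j] for g in mate) / mps for j in range(n)]
        mu = [(1 - alpha) * mu[j] + alpha * mean[j] for j in range(n)]
        sigma = [sigma_red * s for s in sigma]              # buildModelSHCLVND
    return mu

random.seed(3)
mu = shclvnd(lambda g: sum(x * x for x in g), lo=[-5.0] * 3, hi=[5.0] * 3)
```

Note the interplay stated in the text: the mean vector is learned towards the selected individuals, while the standard deviations decay geometrically, so the sampling region both moves and narrows over time.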
34.5.2 PBIL_C: Continuous PBIL

Sebag and Ducoulombier [2443] suggest a similar extension of the PBIL to continuous search
spaces: PBIL_C. For updating the center vector M.μ of the estimated distribution, they
consider the genotypes g_b,1 and g_b,2 of the two best and the genotype g_w of the worst
individual discovered in the current generation. These vectors are combined in a way similar
to the reproduction operator used in Differential Evolution [2220, 2621], here listed as
Equation 34.2:

  M.μ = (1 − λ) · M′.μ + λ · (g_b,1 + g_b,2 − g_w)   (34.2)
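Equation 34.2 in isolation, as a one-line Python helper with a small worked example (the function name and the numbers are ours):

```python
def pbilc_center_update(mu_old, g_b1, g_b2, g_w, alpha=0.5):
    """Sketch of Equation 34.2: shift the center towards the two best
    genotypes and away from the worst one (alpha: learning rate)."""
    return [(1 - alpha) * m + alpha * (b1 + b2 - w)
            for m, b1, b2, w in zip(mu_old, g_b1, g_b2, g_w)]

# Per dimension: target = best1 + best2 - worst, blended with the old center.
mu = pbilc_center_update([0.0, 0.0],
                         [1.0, 2.0],   # best genotype
                         [1.0, 0.0],   # second-best genotype
                         [-1.0, 1.0])  # worst genotype
```

The difference term g_b,1 + g_b,2 − g_w pushes the center past the good points, away from the bad one, which is the Differential-Evolution-like element the text refers to.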
For updating M.σ, they suggest four possibilities:

1. setting it to a constant value, which would, however, either prevent the search from becoming very specific or slow it down significantly, depending on the value,
2. allowing evolution to adjust M.σ itself, in a way similar to Evolution Strategies,
3. setting each of its elements to the variance found in the corresponding loci of the K best genotypes of the current generation,
4. or adjusting each of its elements towards the variance found in the corresponding loci of the K best genotypes of the current generation, using the same learning mechanism as for the center vector.
34.5.3 Real-Coded PBIL

The real-coded PBIL by Servet et al. [2455, 2456] follows a different approach. Here, the
model M consists of three vectors: the lower and upper boundary vectors M.Ǧ and M.Ĝ
limiting the current region of interest, and a vector M.P denoting, for each locus j, the
probability that a good genotype is located in the upper half of the interval [M.Ǧ[j], M.Ĝ[j]]. We
define this real-coded PBIL as Algorithm 34.15, which basically proceeds almost exactly like
the PBIL for binary search spaces (see Algorithm 34.6). In the startup phase, the boundary
vectors M.Ǧ and M.Ĝ of the model M are initialized to the problem-specific boundaries of
the search space, and M.P is set to (0.5, 0.5, . . . ).
The model can be sampled using, for instance, independent uniformly distributed random
numbers in the interval [M.Ǧ[j], M.Ĝ[j]) for the j-th gene of each of the n
loci, as done in Algorithm 34.16.
After a possible genotype-phenotype mapping, determining the objective values, and a
truncation selection step, the model M is updated with Algorithm 34.17. First, the
vector M.P is updated by merging it with the information of whether good traits can be found in
Algorithm 34.15: x ← RCPBIL(f, ps, mps)
  Input: f: the objective function
  Input: ps: the population size
  Input: mps: the mating pool size
  Input: [implicit] terminationCriterion(): the termination criterion
  Input: [implicit] n: the number of elements of the genotypes
  Input: [implicit] Ǧ, Ĝ: the minimum and maximum vectors delimiting the search space G = [Ǧ[0], Ĝ[0]] × [Ǧ[1], Ĝ[1]] × . . .
  Data: pop: the population
  Data: mate: the mating pool
  Data: M = (P, Ǧ, Ĝ): the stochastic model
  Data: continue: a variable holding the termination criterion
  Output: x: the best individual
 1 begin
 2   M.Ǧ ← Ǧ
 3   M.Ĝ ← Ĝ
 4   M.P ← (0.5, 0.5, . . . , 0.5)
 5   x ← null
 6   continue ← true
 7   while continue do
 8     pop ← sampleModelRCPBIL(M, ps)
 9     pop ← performGPM(pop, gpm)
10     pop ← computeObjectives(pop, f)
11     mate ← truncationSelection(pop, f, mps)
12     if x = null then x ← extractBest(mate, f)[0]
13     else x ← extractBest(mate ∪ {x}, f)[0]
14     if terminationCriterion() then continue ← false
15     else M ← buildModelRCPBIL(mate, M)
16   return x
Algorithm 34.16: pop ← sampleModelRCPBIL(M, ps)
  Input: M: the model
  Input: ps: the target population size
  Input: [implicit] n: the number of elements of the genotypes
  Data: i, j: counter variables
  Data: p: an individual record and its corresponding genotype p.g
  Output: pop: the new population
 1 begin
 2   pop ← ()
 3   for i ← 1 up to ps do
 4     p.g ← ()
 5     for j ← 0 up to n − 1 do
 6       p.g ← addItem(p.g, randomUni[M.Ǧ[j], M.Ĝ[j]))
 7     pop ← addItem(pop, p)
Algorithm 34.17: M ← buildModelRCPBIL(mate, M′)
  Input: mate: the selected individuals
  Input: M′: the old model
  Input: [implicit] n: the number of elements of the genotypes
  Input: [implicit] λ: the learning rate
  Data: i, j, c: internal variables
  Output: M: the new model
 1 begin
 2   M ← M′
 3   for i ← 0 up to len(mate) − 1 do
 4     g ← mate[i].g
 5     for j ← 0 up to n − 1 do
 6       if g[j] > ½ · (M.Ǧ[j] + M.Ĝ[j]) then c ← 1
 7       else c ← 0
 8       M.P[j] ← (1 − λ) · M.P[j] + λ · c
 9   for j ← 0 up to n − 1 do
10     if M.P[j] ≥ 0.9 then
11       M.Ǧ[j] ← ½ · (M.Ǧ[j] + M.Ĝ[j])
12       M.P[j] ← 0.5
13     else
14       if M.P[j] ≤ 0.1 then
15         M.Ĝ[j] ← ½ · (M.Ǧ[j] + M.Ĝ[j])
16         M.P[j] ← 0.5
17   return M
the upper or lower half of the region of interest for each locus. In the case that the upper
half of one dimension j of the region of interest is very promising, i.e., M.P[j] ≥ 0.9,
the lower boundary M.Ǧ[j] of the region in this dimension is updated towards the middle
½ · (M.Ǧ[j] + M.Ĝ[j]), and M.P[j] is reset to 0.5. On the other hand, if the lower half of
dimension j of the region is very interesting and M.P[j] ≤ 0.1, the same adjustment is
done with the upper boundary. In this respect, the real-coded PBIL works similar to a
(randomized) multi-dimensional binary search.
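The interval-halving logic for a single locus can be isolated in a few lines. The thresholds 0.9 and 0.1 follow the text; the function name and return convention are ours:

```python
def rcpbil_update_interval(P_j, lo_j, hi_j):
    """Sketch of the interval step of Algorithm 34.17 for one locus j:
    if the upper half is very promising (P_j >= 0.9), raise the lower
    bound to the middle; if the lower half is (P_j <= 0.1), lower the
    upper bound; in both cases reset the probability to 0.5."""
    mid = 0.5 * (lo_j + hi_j)
    if P_j >= 0.9:
        return 0.5, mid, hi_j     # keep the upper half
    if P_j <= 0.1:
        return 0.5, lo_j, mid     # keep the lower half
    return P_j, lo_j, hi_j        # interval unchanged

P, lo, hi = rcpbil_update_interval(0.95, 0.0, 8.0)
```

Each triggered update halves the region of interest in one dimension, which is exactly the binary-search-like behavior mentioned above.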
34.6 EDAs Searching Trees
Estimation of Distribution Algorithms cannot only be used for evolving bit strings or real
vectors, but are also applicable to tree-based search spaces, i.e., for Genetic Programming as
discussed in Chapter 31 and [1602, 2199].
34.6.1 PIPE: Probabilistic Incremental Program Evolution
Salustowicz and Schmidhuber [2380, 2381] define such an EDA for learning (program) trees:
PIPE, Probabilistic Incremental Program Evolution. Here, we provide this GP approach
as Algorithm 34.18. Let us assume a Genetic Programming system would construct trees
consisting of inner nodes, each belonging to one of the types in the function set N, and leaf
nodes, each belonging to one of the types in the terminal set Σ.

The function set could, for instance, be N = {+, −, ∗, /, sin} and the terminal set could
be Σ = {X, R}, where X stands for an input variable of the expression to be evolved and
R is a real constant. Programs are n-ary trees where n is the number of arguments of the
function with the most parameters in N. In the example configuration, n would be two
since, for instance, arityOf(+) = 2. Let V = N ∪ Σ be the set of all possible node types.
Salustowicz and Schmidhuber [2380, 2381] represent the probability distribution of
possible trees with a so-called Probabilistic Prototype Tree (PPT). Each node N of such a tree
contains a vector function N.P : V ↦ [0, 1] which assigns a probability N.P(i) to each
node type i such that Σ_{i∈V} N.P(i) = 1 holds for all tree nodes. Furthermore, it
also contains one value N.R ∈ [0, 1] which stands for the constant value that the terminal
R takes on if it is selected. Each inner node N of a prototype tree has n children
N.children[0] to N.children[n − 1] which, again, are prototype tree nodes. The models M
are the complete prototype trees or, more precisely, their roots (which, due to the children
structure N.children, hold the complete tree data).
The initial prototype tree consists only of a single node as created by the function
newModelNodePIPE in Algorithm 34.19. Salustowicz and Schmidhuber [2381] prescribe
a start probability P_Σ for creating terminal symbols, which they set to P_Σ = 0.8 in their
experiments. All |Σ| terminal symbols are given the same probability of creation, and the
remaining share of the probability (1 − P_Σ) is evenly distributed amongst the |N| function
node types. The random constant N.R is set to a value uniformly distributed in [0, 1).
A new population pop is created by sampling the prototype tree as prescribed in
Algorithm 34.20. The method uses a procedure buildTreePIPE defined in Algorithm 34.21
which recursively builds and attaches child nodes to a new genotype. As previously pointed
out, the model tree M initially consists of only a single node. In PIPE, model nodes are
dynamically attached when required. This is done by buildTreePIPE, which therefore
returns a tuple (N, t) consisting of the updated node N of the prototype tree and the new
node t which was inserted into the genotype. In the top-level invocation of the function,
N corresponds to the updated model M and t is the genotype p.g of a new individual.

buildTreePIPE works as follows: First, a node type i is chosen from both the function
set N and the terminal set Σ according to the probability distribution defined in the input
prototype tree node N′ which, in the top-level instance of the method, corresponds to the
Algorithm 34.18: x ← PIPE(f, ps)
  Input: f: the objective function
  Input: ps: the population size
  Input: [implicit] terminationCriterion(): the termination criterion
  Input: [implicit] N: the set of function types
  Input: [implicit] Σ: the set of terminal types
  Input: [implicit] P_Σ: the probability for choosing a terminal symbol
  Input: [implicit] P_E: the probability of elitist learning
  Data: pop: the population
  Data: mate: the mating pool (of size len(mate) = 1)
  Data: M: the stochastic model
  Data: p⋆: the best individual
  Data: continue: a variable holding the termination criterion
  Output: x: the best candidate solution
 1 begin
 2   M ← newModelNodePIPE(N, Σ, P_Σ)
 3   p⋆ ← null
 4   continue ← true
 5   while continue do
 6     if (p⋆ ≠ null) ∧ (randomUni[0, 1) < P_E) then
 7       M ← buildModelPIPE(p⋆, M, p⋆)
 8     else
 9       pop ← sampleModelPIPE(M, ps)
10       pop ← performGPM(pop, gpm)
11       pop ← computeObjectives(pop, f)
12       mate ← truncationSelection(pop, f, 1)
13       if p⋆ = null then p⋆ ← extractBest(mate, f)[0]
14       else p⋆ ← extractBest(mate ∪ {p⋆}, f)[0]
15       if terminationCriterion() then continue ← false
16       else M ← buildModelPIPE(mate, M, p⋆)
17   return p⋆.x
Algorithm 34.19: N ← newModelNodePIPE(N, Σ, P_Σ)
  Input: N: the set of function types
  Input: Σ: the set of terminal types
  Input: P_Σ: the probability for choosing a terminal symbol
  Data: i: a node type variable
  Output: N: a new model node with its initial configuration
 1 begin
 2   N.R ← randomUni[0, 1)
 3   foreach i ∈ Σ do
 4     N.P(i) ← P_Σ / |Σ|
 5   foreach i ∈ N do
 6     N.P(i) ← (1 − P_Σ) / |N|
 7   N.children ← ∅
Algorithm 34.20: pop ← sampleModelPIPE(M, ps)
  Input: M: the model
  Input: ps: the target population size
  Data: j: a counter variable
  Data: p: an individual record and its corresponding genotype p.g
  Output: pop: the new population
 1 begin
 2   pop ← ()
 3   for j ← 1 up to ps do
 4     (M, p.g) ← buildTreePIPE(M)
 5     pop ← addItem(pop, p)
current model. The selected type i is then instantiated as a new node t to be inserted into
the genotype.

If i happens to be the type for real constants (which is a kind of problem-specific node
type and may be left out in implementations for other domains), the constant value t.val
is either set to a random value uniformly distributed in [0, 1) or to the constant value N.R
held in the prototype node, depending on whether the probability of choosing the type R
surpasses a given threshold P_R. Salustowicz and Schmidhuber [2381] set P_R to 0.3 in their
experiments.

If t is a function node, all necessary child nodes are created by recursively calling the
same procedure. In the case that no corresponding node exists in the prototype tree for a
required child node, this node is again created via Algorithm 34.19.
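A toy version of the PPT and the recursive sampling just described can be written in a few lines of Python. The node layout (a dict with `P`, `R`, and `children`) and all names are our simplification; the special handling of the real constant with P_R is omitted for brevity:

```python
import random

# Tiny function and terminal sets for illustration (not from the paper).
FUNCTIONS = {"+": 2, "*": 2}      # function set N with arities
TERMINALS = ("X", "R")            # terminal set
P_T = 0.8                          # start probability for terminals

def new_node():
    """newModelNodePIPE sketch: terminals share P_T, functions share 1 - P_T."""
    p = {t: P_T / len(TERMINALS) for t in TERMINALS}
    p.update({f: (1 - P_T) / len(FUNCTIONS) for f in FUNCTIONS})
    return {"P": p, "R": random.random(), "children": {}}

def build_tree(node):
    """buildTreePIPE sketch: pick a node type according to node['P'],
    then recurse for the required number of children, growing the
    prototype tree on demand."""
    types = list(node["P"])
    t = random.choices(types, weights=[node["P"][k] for k in types])[0]
    arity = FUNCTIONS.get(t, 0)    # terminals have arity 0
    children = []
    for i in range(arity):
        if i not in node["children"]:
            node["children"][i] = new_node()   # dynamic attachment
        children.append(build_tree(node["children"][i]))
    return (t, children)

random.seed(4)
root = new_node()
tree = build_tree(root)            # a genotype, e.g. ("+", [("X", []), ("R", [])])
```

Because P_T > 0.5 here, the branching process is subcritical and sampling terminates with probability one, which mirrors why a high start probability for terminals is a sensible default.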
Updating the model in PIPE means to adapt it towards the direction of the genotype of
the best individual p

found so far. This is done by rst computing the recursively dened


probability P ( p

.g[ M) that this individual would occur given the current model M, as noted
in Equation 34.3.
P (t [N ) = N.P (typeOf (t))
_

numChildren(t)
i=0
if numChildren(t) > 0
1 otherwise
(34.3)
P
goal
(p.g [M, x) = P (g [M) + (1 P (p.g [M))
f(x)
f(gpm(g))
(34.4)
Based on a learning rate λ, the target probability P_goal(p⋆.g | M, x) that this tree should receive after the model update is derived. Here, the objective value of the best candidate solution x discovered so far is also taken into consideration: the relation between the fitness of the best individual p⋆ of the current generation and this elitist fitness is weighted by a factor ε. In their experiments, Salustowicz and Schmidhuber [2381] set λ = 0.01 and ε = 1.
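The tree probability and the target probability can be illustrated with the following sketch; the parameter names learning_rate and weight stand for the learning rate and the weighting factor (0.01 and 1 in [2381]), positive objective values (to be minimized) are assumed, and the dictionary-based node layout is ours:

```python
def tree_probability(tree, node):
    """P(t | N) in the spirit of Equation 34.3: the probability that
    the prototype node generates the tree, multiplied recursively over
    all existing children."""
    p = node["P"][tree["type"]]
    for child_t, child_n in zip(tree["children"], node["children"]):
        p *= tree_probability(child_t, child_n)
    return p

def target_probability(p_tree, f_elitist, f_best,
                       learning_rate=0.01, weight=1.0):
    """Target probability in the spirit of Equation 34.4: raise the
    current tree probability towards 1 by a fraction of the remaining
    headroom, scaled by the elitist/generation-best fitness relation."""
    return p_tree + learning_rate * weight * (1.0 - p_tree) * (f_elitist / f_best)
```

Since f_elitist ≤ f_best for minimization, the fitness ratio is at most 1 and the target probability always stays below 1.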
In Algorithm 34.22, we show how a probability-increasing function increasePropPIPE (Algorithm 34.23) for the best tree of the current generation is repeatedly invoked in order to shift its probability towards the target probability P_goal. This function again works recursively. It also sets the constant values stored in the prototype nodes to the constant values used in the best tree and ensures that all probabilities remain normalized. A constant c (set to 0.1 in the original work) trades off between a fast increase of the tree probabilities and the precision in reaching P_goal.
After learning, the prototype trees are pruned with Algorithm 34.24, which removes paths that are not reached when expanding a prototype node with a probability of at least P_P, set to 0.999999 in [2381].
Algorithm 34.18 performs either elitist learning towards the best individual p⋆ discovered during all generations, with probability P_E (0.2 in the original paper), or normal learning.
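The core of the probability increase with subsequent re-normalization might look as follows; this is a sketch of the idea behind the probability-increasing step, not its exact pseudocode, and it assumes model nodes represented as dictionaries with a probability table "P":

```python
def increase_probability(node, tree, c=0.1):
    """Shift the prototype node's distribution towards the type used in
    the best tree, re-normalize, and recurse into matching children."""
    used = tree["type"]
    node["P"][used] += c * (1.0 - node["P"][used])   # move towards 1
    s = sum(node["P"].values())                      # re-normalize all
    for t in node["P"]:
        node["P"][t] /= s
    if used == "R":                                  # adopt the constant
        node["R"] = tree.get("val", node.get("R"))
    for child_n, child_t in zip(node["children"], tree["children"]):
        increase_probability(child_n, child_t, c)
    return node
```

Calling this repeatedly increases the probability of the best tree monotonically, which is why the loop terminates once the target probability is exceeded.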
Also, the original work [2381] features a mutation operator which changes the probabilities
Algorithm 34.21: (N, t) ← buildTreePIPE(N′)
Input: N′: the prototype tree node
Input: [implicit] V: the available symbols
Input: [implicit] P_R: the threshold probability value for re-using the random constant
Input: [implicit] N: the set of function types
Input: [implicit] Σ: the set of terminal types
Input: [implicit] P_Σ: the probability of choosing a terminal symbol
Data: p: a probability value
Data: i: the node type chosen
Data: j: a counter variable
Data: N″: the current sub-node
Data: t′: the new sub-tree
Output: N: the updated model node
Output: t: the newly created tree node
1 begin
2   N ← N′
3   p ← randomUni[0, 1)
4   V ← setToList(N ∪ Σ)
5   j ← 0
6   while p > N.P(V[j]) do
7     p ← p − N.P(V[j])
8     j ← j + 1
9   i ← V[j]
10  t ← instantiate(i)
11  if i = R then
12    if N.P(V[j]) > P_R then t.val ← N.R
13    else t.val ← randomUni[0, 1)
14  for j ← 0 up to arityOf(i) − 1 do
15    if numChildren(N) ≤ j then
16      N″ ← newModelNodePIPE(N, Σ, P_Σ)
17      N ← addChild(N, N″)
18    else
19      N″ ← N.children[j]
20    (N″, t′) ← buildTreePIPE(N″)
21    N.children[j] ← N″
22    t ← addChild(t, t′)
23  return (N, t)
Algorithm 34.22: M ← buildModelPIPE(mate, M′, x)
Input: mate: the individuals selected for mating
Input: M′: the old model
Input: x: the best individual found so far
Data: P_goal: a variable holding the target probability
Output: M: the new model
1 begin
2   p ← mate[0]
3   P_goal ← P_goal(p.g | M′, x)
4   M ← M′
5   while P(p.g | M) < P_goal do
6     M ← increasePropPIPE(M, p.g)
7   return pruneTreePIPE(M)
Algorithm 34.23: N ← increasePropPIPE(N′, t)
Input: N′: the prototype tree node
Input: t: the corresponding tree node
Data: i_c, i: tree node types
Data: j: a counter variable
Data: s: the probability sum before normalization
Output: N: the updated model node
1 begin
2   N ← N′
3   i_c ← typeOf(t)
4   if i_c = R then
5     N.R ← t.val
6   N.P(i_c) ← N.P(i_c) + c·(1 − N.P(i_c))
7   s ← Σ_{i∈N∪Σ} N.P(i)
8   foreach i ∈ N ∪ Σ do
9     N.P(i) ← N.P(i)/s
10  for j ← 0 up to numChildren(t) − 1 do
11    N.children[j] ← increasePropPIPE(N.children[j], t.children[j])
12  return N
Algorithm 34.24: N ← pruneTreePIPE(N′)
Input: N′: the original node
Output: N: the pruned node
1 begin
2   N ← N′
3   P̂ ← max{N.P(i) : i ∈ N ∪ Σ}
4   if P̂ ≥ P_P then
5     a ← max{arityOf(i) : i ∈ N ∪ Σ ∧ N.P(i) = P̂}
6     for j ← a up to numChildren(N) − 1 do
7       N ← deleteChild(N, a)
8     for j ← 0 up to a − 1 do
9       N.children[j] ← pruneTreePIPE(N.children[j])
10  return N
in the prototype trees with a given probability (and normalizes them again).
34.7 Difficulties
34.7.1 Diversity
Estimation of Distribution Algorithms evolve a model of what a perfect solution should probably look like. The central idea is that such a model is defined by several parameters which will converge, so that the span of possible values which can be sampled from it becomes smaller and smaller over time. As already stated, if everything goes well, at some point the model should have become so specific that only the single, global optimum can be sampled from it. If everything goes well, that is. However, not all problems are uni-modal, i.e., have a single optimum. In fact, as we already pointed out in Chapter 13 and Section 3.2, many problems are multi-modal, with many local or global optima.
Then, the danger arises that the EDA loses diversity too quickly [2843] and converges towards a local optimum. Evolutionary Algorithms can preserve diversity in their population by allowing candidate solutions with worse objective values to survive if they are located in different areas of the search space. In EDAs, this is not possible. If we use such individuals during the model updating process, the only thing we achieve is that the area where new points are sampled stays very large, the search process does not converge, and many objective function evaluations are wasted on candidate solutions with little chance of being good.
34.7.1.1 Model Mutation
A very simple way of increasing the diversity in EDAs is to mutate the model itself from time to time. By randomly modifying its parameters, new individuals will be sampled a bit more distant from the configurations which are currently assumed to perform best. If the newly explored parts of the search space are inferior, chances are that the EDA finds its way back to the previous parameters. If they are better, on the other hand, the EDA has managed to escape a local optimum. This approach is relatively old and has already been applied in PBIL [193] and PIPE [2380, 2381].
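For a univariate bit-string model such as PBIL's probability vector, a model mutation might be sketched as follows; the parameters p_mut and shift are our assumptions, not the notation of [193]:

```python
import random

def mutate_model(probs, p_mut=0.02, shift=0.05):
    """Randomly perturb each model probability towards 0 or 1 with a
    small mutation rate, keeping every entry a valid probability."""
    out = []
    for p in probs:
        if random.random() < p_mut:
            direction = 1.0 if random.random() < 0.5 else 0.0
            p = p * (1.0 - shift) + direction * shift  # nudge towards 0 or 1
        out.append(min(max(p, 0.0), 1.0))
    return out
```

Applied once per generation, such a perturbation keeps the sampled population from collapsing onto a single configuration while leaving the overall shape of the model intact.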
34.7.1.2 Sampling Mutation
But diverse individuals can also be created without permanently mutating the model. Instead, a model mutation may be applied which only affects the sampling of one single genotype and is reverted thereafter, i.e., which has a very temporary character. This way, the danger that a good model may get lost is circumvented.
Such an operator is the Sampling Mutation by Wallin and Ryan [2843, 2844], who temporarily invert the probability distribution describing one variable of the model. In a univariate model for bit strings, for instance, we could temporarily set the i-th element of the model vector M to 1 − M[i] during the sampling of one individual.
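For a univariate model vector M over bit strings, the temporary inversion could be sketched like this (hypothetical names; note that the model itself is left untouched):

```python
import random

def sample_with_inversion(model, invert_at=None):
    """Sample one bit-string genotype from a univariate model; if
    invert_at is given, that position is sampled from 1 - M[i]
    instead, without modifying the model itself."""
    genotype = []
    for i, p in enumerate(model):
        q = 1.0 - p if i == invert_at else p
        genotype.append(1 if random.random() < q else 0)
    return genotype
```

Because the inversion only exists inside the sampling call, a good model cannot be destroyed by it; only a single genotype is pushed away from the model's current preference.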
34.7.1.3 Clustering in the Objective Space
In their Evolutionary Bayesian Classifier-based Optimization Algorithms (EBCOAs), Miquelez et al. [1910] chose yet another approach. Before the model building phase, the population pop is divided into a fixed number |K| of classes. This is achieved by dividing the population into equal-sized groups of individuals from the fittest to the least fit one [1910]. Each individual p ∈ pop is assigned a label k(p). Some of the classes K_i ∈ K : K_i = {p ∈ pop : k(p) = i} are discarded to facilitate learning and only |C| ≤ |K| classes are selected. Using only a (strict) subset of the classes for learning can be justified because it emphasizes the differences between the classes and reduces noise [1910, 2843].
The class variables (which indirectly represent the fitness) tagged to the individuals are treated as part of the individuals and incorporated into the model building process as well. In other words, the models M also try to estimate the distribution of the classes (in addition to the parameters of the genotypes). In [1910] and [2843], Bayesian networks are learned as models.
For the sampling process, Miquelez et al. [1910] suggest sampling a number m(i) of individuals for a class i in the next generation proportional to the sum of the objective values that the individuals achieved in the last generation:

m(i) ∝ Σ_{p∈pop : k(p)=i} f(p.x)    (34.5)
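This proportional allocation can be sketched as follows; fitness is a caller-supplied objective function, and the rounding scheme is our assumption (in general the rounded counts need not sum exactly to the requested total):

```python
def samples_per_class(classes, fitness, total):
    """Distribute `total` new samples over the classes proportionally
    to the sum of objective values of each class's individuals."""
    sums = {k: sum(fitness(p) for p in pop) for k, pop in classes.items()}
    grand = sum(sums.values())
    return {k: int(round(total * s / grand)) for k, s in sums.items()}
```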
Wallin and Ryan [2843] further combine such fitness clustering with sampling mutation.
Pelikan et al. [2157] and Sastry and Goldberg [2398] extend the method to multiple objective functions. They define a multi-objective version of their hBOA algorithm as a combination of hBOA, NSGA-II, and clustering in the objective space.
Lu and Yao [1784] introduce a basic multi-model EDA scheme for real-valued optimization. This scheme utilizes clustering in a way similar to our work presented here. However, they focus mainly on numerical optimization, whereas we introduce a general framework together with one possible instantiation for numerical optimization. Platel et al. [2187] propose a quantum-inspired evolutionary algorithm which is an EDA for bit-string based search spaces. This algorithm utilizes a structured population, similar to the use of demes in EAs, which makes it a multi-model EDA. Gallagher et al. [1009] extend the PBIL algorithm to real-valued optimization by using an adaptive Gaussian mixture model density estimator. This approach can deal with multi-modal problems, but it is also more complex than multi-model algorithms which utilize clustering.
Niemczyk and Weise [2038] finally introduce a general multi-model EDA which unites research on Estimation of Distribution Algorithms and Evolutionary Algorithms: they suggest dividing the population into clusters and deriving one model for each of them. These models may then be recombined and sampled. In the case that one model is created for each selected individual and additional models result from recombination, the algorithm behaves exactly like a standard EA. If, on the other hand, only one single cluster is created and no model recombination is performed, the algorithm equals a traditional EDA. Niemczyk and
Weise [2038] point out that this scheme applies to arbitrary search and problem spaces and clustering methods. They instantiate it for a continuous search space, but it could just as well be used for bit-string based or tree-based genomes.
34.7.1.4 Clustering in the Search Space
Symmetric optimization problems can be especially challenging for EDAs, since optima with similar objective values but located in different areas of the search space may hinder them from converging. Pelikan and Goldberg [2147, 2148] first applied clustering in the search space of general EAs and EDAs in order to alleviate this problem. In a Genetic Algorithm, they would cluster the mating pool (i.e., the selected parents) according to the genotypes. Each cluster will then be processed separately and yield its own offspring. The recombination operator only uses parents which stem from the same cluster. The number of new genotypes per cluster is proportional to its average fitness or its size. This method hence facilitates niching. In their work, Pelikan and Goldberg [2147, 2148] use k-means clustering [1503, 1808] both in a simple Genetic Algorithm and in UMDA, and found that it helps to achieve better results in the presence of symmetry.
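A minimal sketch of this niching idea for a univariate bit-string model (not the exact algorithm of [2147, 2148]): given a cluster assignment of the selected genotypes, estimate one frequency-based model per cluster, so that each symmetric optimum gets its own model.

```python
from collections import defaultdict

def cluster_models(genotypes, assignment):
    """Build one univariate bit-frequency model per cluster from the
    selected genotypes; assignment[i] is the cluster of genotypes[i]."""
    groups = defaultdict(list)
    for g, c in zip(genotypes, assignment):
        groups[c].append(g)
    models = {}
    for c, members in groups.items():
        n = len(members)
        length = len(members[0])
        # frequency of 1-bits at each locus within this cluster only
        models[c] = [sum(g[j] for g in members) / n for j in range(length)]
    return models
```

Sampling each model separately then keeps genotypes from different basins apart, instead of averaging them into one blurred distribution.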
Bosman and Thierens [363, 364] also apply k-means clustering, but additionally the randomized leader algorithm and expectation maximization after selection. For each of the clusters, again a separate model is created. These are combined into an overall distribution via a weighted-sum approach, where the weight of a cluster is proportional to its size.
Lu and Yao [1784] apply the heuristic competitive learning algorithm RPCL to cluster the populations in their Clustering and Estimation of Gaussian Network Algorithms (CEGNA) for continuous optimization problems. RPCL has the advantage that it can select the number of clusters automatically [2999]. In CEGNA, after selection, the mating pool is clustered into k clusters, where both the clusters and k are determined by RPCL. For each of the clusters, an independent model M_i : i ∈ 1..k is derived. In the sampling step, N_i new genotypes are sampled for each cluster i, where N_i is proportional to the average fitness of the individuals in the cluster. The experimental results [1784] show that the new algorithm performs well on multi-modal problems.
Other EDA approaches utilizing clustering have been provided in [42, 486, 2069]. Clustering has also been incorporated into EAs, for example as a niching method, by various research groups [1081, 1241].
In [2357], Ruzgas et al. show that a multi-modal problem can be reduced to a set of uni-modal ones and that the quality of estimation procedures increases significantly if estimators are first applied to each cluster and the results are combined later. Although their work was not related to EDAs, it is another account of the successful integration of clustering into distribution estimation.
34.7.2 Epistasis
Tasks T34
99. Implement the Royal Road function given in Section 50.2.4 on page 558 for genotypes of lengths 64, 512, and 2048. Apply both the UMDA (see Section 34.4.1) and a simple Genetic Algorithm (see Section 29.3) to the three functions. Repeat the experiments 15 times. Describe your findings.
[25 points]
100. Implement the OneMax function given in Section 50.2.5.1 on page 562 for genotypes of lengths 64, 512, and 2048. Apply both the UMDA (see Section 34.4.1) and a simple Genetic Algorithm (see Section 29.3) to the three functions. Repeat the experiments 15 times. Describe your findings.
[25 points]
101. Implement the BinInt function given in Section 50.2.5.2 on page 562 for genotypes of lengths 16, 32, and 48. Apply both the UMDA (see Section 34.4.1) and a simple Genetic Algorithm (see Section 29.3) to the three functions. Repeat the experiments 15 times. Describe your findings.
[25 points]
102. Implement the NK Fitness Landscape given in Section 50.2.1 on page 553 for N = 12 and N = 16, with K = 4 and K = 6 and random neighbors. Apply both the UMDA (see Section 34.4.1) and a simple Genetic Algorithm (see Section 29.3) to the four resulting functions. Repeat the experiments 15 times. Describe your findings.
[50 points]
103. Implement any of the real-valued Estimation of Distribution Algorithms defined in Section 34.5 on page 442 and test them on suitable benchmark problems (such as those given in Task 64 on page 238).
[50 points]
Chapter 35
Learning Classifier Systems
35.1 Introduction
In the late 1970s, Holland, the father of Genetic Algorithms, also invented the concept of classifier systems (CS) [1251, 1254, 1256]. These systems are a special case of production systems [696, 697] and consist of four major parts:
1. a set of interacting production rules, called classifiers,
2. a performance algorithm which directs the actions of the system in the environment,
3. a learning algorithm which keeps track of the success of each classifier and distributes rewards, and
4. a Genetic Algorithm which modifies the set of classifiers so that variants of good classifiers persist and new, potentially better ones are created in an efficient manner [1255].
Over time, classifier systems have undergone some name changes. In 1986, reinforcement learning was added to the approach and the name changed to Learning Classifier Systems¹ (LCS) [1220, 2534]. Learning Classifier Systems are sometimes subsumed under a machine learning paradigm called evolutionary reinforcement learning (ERL) [1220] or Evolutionary Algorithms for Reinforcement Learning (EARLs) [1951].
35.2 Families of Learning Classifier Systems
The exact definition of Learning Classifier Systems [1257, 1588, 1677, 2534] still seems contentious and there exist many different implementations. There are, for example, versions without a message list, where the action part of the rules does not encode messages but direct output signals. The importance of the role of Genetic Algorithms in conjunction with the reinforcement learning component is also not quite clear. There are scientists who emphasize more the role of the learning components [2956] and others who tend to grant the Genetic Algorithms a higher weight [655, 1256].
The families of Learning Classifier Systems have been listed and discussed elaborately by Brownlee [430]. Here we will just summarize their differences in short. De Jong [712, 713] and Grefenstette [1127] divide Learning Classifier Systems into two main types, depending on how the Genetic Algorithm acts: The Pitt approach originated at the University of Pittsburgh with the LS-1 system developed by Smith [2536]. It was then developed further and applied by Bacardit i Peñarroya [158, 159], De Jong and Spears [716], De Jong et al.
¹ http://en.wikipedia.org/wiki/Learning_classifier_system [accessed 2007-07-03]
[717], Spears and De Jong [2564], and Bacardit i Peñarroya and Krasnogor [161]. Pittsburgh-style Learning Classifier Systems work on a population of separate classifier systems, which are combined and reproduced by the Genetic Algorithm.
The original idea of Holland and Reitman [1256] were Michigan-style LCSs, where the whole population itself is considered as one classifier system. They focus on selecting the best rules in this rule set [706, 1074, 1751]. Wilson [2952, 2953] developed two subtypes of Michigan-style LCS:
1. In ZCS systems, there is no message list; they use fitness sharing [440, 442, 591, 2952] for a Q-learning-like reinforcement learning approach called QBB.
2. ZCSs have later been somewhat superseded by XCS systems, in which the Bucket Brigade Algorithm has fully been replaced by Q-learning. Furthermore, the credit assignment is based on the accuracy (usefulness) of the classifiers. The Genetic Algorithm is applied to sub-populations containing only classifiers which apply to the same situations [1587, 1681, 2953–2955].
35.3 General Information on Learning Classifier Systems
35.3.1 Applications and Examples
Table 35.1: Applications and Examples of Learning Classifier Systems.
Area References
Combinatorial Problems [2158]
Control [1074]
Data Mining [53, 158, 160, 162, 279, 633, 675, 676, 706, 717, 904, 994,
1527, 1587, 2095, 2461, 2674, 2894, 2902]
Databases [162]
Digital Technology [706, 1587, 2954]
Distributed Algorithms and Systems [1218, 2461, 2767]
Economics and Finances [440]
Engineering [53, 706, 1218, 1587, 2158, 2767, 2954]
Function Optimization [713]
Graph Theory [1218, 2461, 2767]
Healthcare [1527, 2674]
Mathematics [591]
Multi-Agent Systems [440]
Prediction [1587]
Security [53, 1218, 2461]
Wireless Communication [2158]
35.3.2 Books
1. Rule-Based Evolutionary Online Learning Systems: A Principled Approach to LCS Analysis and Design [460]
2. Foundations of Learning Classifier Systems [443]
3. Data Mining and Knowledge Discovery with Evolutionary Algorithms [994]
4. Anticipatory Learning Classifier Systems [459]
5. Applications of Learning Classifier Systems [441]
35.3.3 Conferences and Workshops
Table 35.2: Conferences and Workshops on Learning Classifier Systems.
GECCO: Genetic and Evolutionary Computation Conference
See Table 28.5 on page 317.
CEC: IEEE Conference on Evolutionary Computation
See Table 28.5 on page 317.
EvoWorkshops: Real-World Applications of Evolutionary Computing
See Table 28.5 on page 318.
PPSN: Conference on Parallel Problem Solving from Nature
See Table 28.5 on page 318.
WCCI: IEEE World Congress on Computational Intelligence
See Table 28.5 on page 318.
EMO: International Conference on Evolutionary Multi-Criterion Optimization
See Table 28.5 on page 319.
AE/EA: Artificial Evolution
See Table 28.5 on page 319.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
ICANNGA: International Conference on Adaptive and Natural Computing Algorithms
See Table 28.5 on page 319.
ICCI: IEEE International Conference on Cognitive Informatics
See Table 10.2 on page 128.
ICML: International Conference on Machine Learning
See Table 10.2 on page 129.
ICNC: International Conference on Advances in Natural Computation
See Table 28.5 on page 319.
IWLCS: International Workshop on Learning Classifier Systems
History: 2007: London, UK, see [1589, 2579]
2006/07: Seattle, WA, USA, see [2441]
2005/06: Washington, DC, USA, see [27]
2004/06: Seattle, WA, USA, see [2280]
2003/07: Chicago, IL, USA, see [2578]
2002/09: Granada, Spain, see [1680]
2001/07: San Francisco, CA, USA, see [1679]
2000/09: Paris, France, see [1678]
1999/07: Orlando, FL, USA, see [2094]
1992: Houston, TX, USA, see [2002, 2534]
SEAL: International Conference on Simulated Evolution and Learning
See Table 10.2 on page 130.
BIOMA: International Conference on Bioinspired Optimization Methods and their Applications
See Table 28.5 on page 320.
BIONETICS: International Conference on Bio-Inspired Models of Network, Information, and Computing Systems
See Table 10.2 on page 130.
FEA: International Workshop on Frontiers in Evolutionary Algorithms
See Table 28.5 on page 320.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
MICAI: Mexican International Conference on Artificial Intelligence
See Table 10.2 on page 131.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
WSC: Online Workshop/World Conference on Soft Computing (in Industrial Applications)
See Table 10.2 on page 131.
CIMCA: International Conference on Computational Intelligence for Modelling, Control and Automation
See Table 28.5 on page 320.
CIS: International Conference on Computational Intelligence and Security
See Table 28.5 on page 320.
CSTST: International Conference on Soft Computing as Transdisciplinary Science and Technology
See Table 10.2 on page 132.
ECML: European Conference on Machine Learning
See Table 10.2 on page 132.
EUFIT: European Congress on Intelligent Techniques and Soft Computing
See Table 10.2 on page 132.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
EvoCOP: European Conference on Evolutionary Computation in Combinatorial Optimization
See Table 28.5 on page 321.
GEC: ACM/SIGEVO Summit on Genetic and Evolutionary Computation
See Table 28.5 on page 321.
GEM: International Conference on Genetic and Evolutionary Methods
See Table 28.5 on page 321.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
SoCPaR: International Conference on SOft Computing and PAttern Recognition
See Table 10.2 on page 133.
Progress in Evolutionary Computation: Workshops on Evolutionary Computation
See Table 28.5 on page 321.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
AJWS: Australia-Japan Joint Workshop on Intelligent & Evolutionary Systems
See Table 28.5 on page 321.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
ASC: IASTED International Conference on Artificial Intelligence and Soft Computing
See Table 10.2 on page 134.
EWSL: European Working Session on Learning
See Table 10.2 on page 134.
EvoNUM: European Event on Bio-Inspired Algorithms for Continuous Parameter Optimisation
See Table 28.5 on page 321.
FOCI: IEEE Symposium on Foundations of Computational Intelligence
See Table 28.5 on page 321.
GICOLAG: Global Optimization Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
ICMLC: International Conference on Machine Learning and Cybernetics
See Table 10.2 on page 134.
IJCCI: International Conference on Computational Intelligence
See Table 28.5 on page 321.
IWNICA: International Workshop on Nature Inspired Computation and Applications
See Table 28.5 on page 321.
Krajowa Konferencja Algorytmy Ewolucyjne i Optymalizacja Globalna (National Conference on Evolutionary Algorithms and Global Optimization)
See Table 10.2 on page 134.
NaBIC: World Congress on Nature and Biologically Inspired Computing
See Table 28.5 on page 322.
TAAI: Conference on Technologies and Applications of Artificial Intelligence
See Table 10.2 on page 134.
WPBA: Workshop on Parallel Architectures and Bioinspired Algorithms
See Table 28.5 on page 322.
ECML PKDD: European Conference on Machine Learning and European Conference on Principles
and Practice of Knowledge Discovery in Databases
See Table 10.2 on page 135.
EvoRobots: International Symposium on Evolutionary Robotics
See Table 28.5 on page 322.
ICEC: International Conference on Evolutionary Computation
See Table 28.5 on page 322.
IWACI: International Workshop on Advanced Computational Intelligence
See Table 28.5 on page 322.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
SEMCCO: International Conference on Swarm, Evolutionary and Memetic Computing
See Table 28.5 on page 322.
SSCI: IEEE Symposium Series on Computational Intelligence
See Table 28.5 on page 322.
35.3.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Genetic Programming and Evolvable Machines published by Kluwer Academic Publishers and
Springer Netherlands
Chapter 36
Memetic and Hybrid Algorithms
Starting with the research contributed by Bosworth et al. [368] in 1972, Bethke [288] in 1980, and Brady [382] in 1985, there is a long tradition of hybridizing Evolutionary Algorithms with other optimization methods such as Hill Climbing, Simulated Annealing, or Tabu Search [2507]. A comprehensive review on this topic has been provided by Grosan and Abraham [1139, 1141].
Hybridization approaches are not limited to GAs as the basis algorithm. In ??, for example, we have already listed a wide variety of approaches to combine the Downhill Simplex with population-based optimization methods, spanning from Genetic Algorithms to Differential Evolution and Particle Swarm Optimization. Many of these approaches can be subsumed under the umbrella term Memetic Algorithms¹ (MAs).
36.1 Memetic Algorithms
The principle of Genetic Algorithms is to simulate natural evolution in order to solve optimization problems. In nature, the features of phenotypes are encoded in the genes of their genotypes. The term Memetic Algorithm was coined by Moscato [1961, 1962] as an allegory for simulating a social evolution where behavioral patterns are passed on in memes². A meme has been defined by Dawkins [700] as a unit of imitation in cultural transmission. With the simulation of social cooperation and competition, optimization problems can be solved efficiently.
Moscato [1961] uses the example of the Chinese martial art Kung-Fu, which has developed over many generations of masters teaching their students certain sequences of movements, the so-called forms. Each form is composed of a set of elementary aggressive and defensive patterns. These undecomposable sub-movements can be interpreted as memes. New memes are rarely introduced and only few amongst the masters of the art have the ability to do so. Being far from random, such modifications involve a lot of problem-specific knowledge and almost always result in improvements. Furthermore, only the best of the population of Kung-Fu practitioners can become masters and teach disciples. Kung-Fu fighters can determine their fitness by evaluating their performance or by competing with each other in tournaments.
Based on this analogy, Moscato [1961] creates an example algorithm for solving the Traveling Salesman Problem [118, 1694] by using the three principles of
1. intelligent improvement based on local search with problem-specific operators,
2. competition in form of a selection procedure, and
3. cooperation in form of a problem-specific crossover operator.
¹ http://en.wikipedia.org/wiki/Memetic_algorithm [accessed 2007-07-03]
² http://en.wikipedia.org/wiki/Meme [accessed 2008-09-10]
Interesting research work directly focusing on Memetic Algorithms has been contributed, amongst many others, by Moscato et al. in [329, 451, 1260, 1961, 1963, 2055], Radcliffe and Surry [2245], Digalakis and Margaritis [794, 795], and Krasnogor and Smith [1618]. Further early works on Genetic Algorithm hybridization are Ackley [19] (1987), Goldberg [1075] (1989), Gorges-Schleuter [1107] (1989), Mühlenbein [1969, 1970, 1972] (1989), Brown et al. [429] (1989), and Davis [695] (1991).
The definition of Memetic Algorithms given by Moscato [1961] is relatively general and encompasses many different approaches. Even though Memetic Algorithms are a metaphor based on social evolution, there also exist two theories in natural evolution which fit the same idea of hybridizing Evolutionary Algorithms with other search methods [2112]. Lamarckism and the Baldwin effect are both concerned with phenotypic changes in living creatures and their influence on the fitness and adaptation of species.
36.2 Lamarckian Evolution
Lamarckian evolution³ is a model of evolution accepted by science before the discovery of genetics. Superseding the earlier ideas of Erasmus Darwin [686] (the grandfather of Charles Darwin), Lamarck [722] laid the foundations of the theory later known as Lamarckism with his book Philosophie Zoologique, published in 1809. Lamarckism has two basic principles:
1. Individuals can attain new, beneficial characteristics during their lifetime and lose unused abilities.
2. They pass their traits (also those acquired during their life) on to their offspring.
While the first concept is obviously correct, the second one contradicts the state of knowledge in modern biology. This does not decrease the merits of de Monet, Chevalier de Lamarck, who provided an early idea of how evolution could proceed. In his era, things like genes and the DNA simply had not been discovered yet. Weismann [2918] was the first to argue that the heredity information of higher organisms is separated from the somatic cells and, thus, could not be influenced by them [2739]. In nature, no phenotype-to-genotype mapping can take place.
Lamarckian evolution can be included in Evolutionary Algorithms by performing a local search starting with each new individual resulting from applications of the reproduction operations. This search can be thought of as training or learning, and its results are coded back into the genotypes g ∈ G [2933]. Therefore, this local optimization approach usually works directly in the search space G. Here, algorithms such as greedy search, Hill Climbing, Simulated Annealing, or Tabu Search can be utilized, but simply modifying the genotypes randomly and remembering the best results is also possible.
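A minimal Lamarckian refinement step might be sketched like this (hypothetical names): a short bit-flip Hill Climbing run whose result is written back into the genotype.

```python
import random

def lamarckian_refine(genotype, f, steps=50):
    """Improve a bit-string genotype by random single-bit-flip Hill
    Climbing (minimizing f) and return the improved genotype itself,
    i.e., the learned traits are coded back into the genome."""
    best = list(genotype)
    best_f = f(best)
    for _ in range(steps):
        i = random.randrange(len(best))
        best[i] ^= 1                 # flip one bit
        cand_f = f(best)
        if cand_f <= best_f:
            best_f = cand_f          # keep the improvement
        else:
            best[i] ^= 1             # revert the flip
    return best, best_f
```

In a Memetic Algorithm, this step would be invoked on every offspring produced by the reproduction operators, before selection.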
36.3 Baldwin Effect
The Baldwin effect⁴ [2495, 2740, 2830, 2831], first proposed by Baldwin [191, 192], Morgan [1943, 1944], and Osborn [2101] in 1896, is an evolution theory which remains controversial until today [710, 2179]. Suzuki and Arita [2637] describe it as a possible scenario of interactions between evolution and learning caused by balances between the benefit and cost of learning [2873]. Learning is a rather local phenomenon, normally involving only single individuals, whereas evolution usually takes place at the global scale of a population. The Baldwin effect combines both in two steps [2739]:
³ http://en.wikipedia.org/wiki/Lamarckism [accessed 2008-09-10]
⁴ http://en.wikipedia.org/wiki/Baldwin_effect [accessed 2008-09-10]
1. First, lifelong learning gives the individuals the chance to adapt to their environment or even to change their phenotype. This phenotypic plasticity⁵ may help the creatures to increase their fitness and, hence, their probability to produce more offspring. Different from Lamarckian evolution, the abilities attained this way do not influence the genotypes, nor are they inherited.
2. In the second phase, evolution step by step generates individuals which can learn these abilities faster and easier and, finally, will have encoded them in their genome. Genotypical traits then replace the learning (or phenotypic adaptation) process and serve as an energy-saving shortcut to the beneficial traits. This process is called genetic assimilation⁶ [2829–2832].
[Figure 36.1.a plots the fitness f(gpm(g)) over g ∈ G as two curves, one with and one without learning.]
Fig. 36.1.a: The influence of learning capabilities of individuals on the fitness
landscape; extension of the figure in [1235, 1236].

[Figure 36.1.b marks four situations on such a plot of f(gpm(g)) over g ∈ G, where Δg denotes the range covered by learning: Case 1: neutralized landscape; Case 2: increased gradient; Case 3: decreased gradient; Case 4: smoothed-out landscape.]
Fig. 36.1.b: The positive and negative influence of learning capabilities of
individuals on the fitness landscape; extension of the figure in [2637].

Figure 36.1: The Baldwin effect.
Hinton and Nowlan [1235, 1236] performed early experiments on the Baldwin effect with
Genetic Algorithms [253, 1208]. They found that the evolutionary interaction with learning
smoothens the fitness landscape [1144] and illustrated this effect on the graphical example of
a needle-in-a-haystack problem. Mayley [1844] used experiments on Kauffman's NK fitness
landscapes [1501] (see Section 50.2.1) to show that the Baldwin effect can also have negative
influence. We use Figure 36.1 to sketch both effects that can occur when the fitness of an
individual is computed by performing a local search around it, returning the objective values
of the best candidate solution found (without preserving this optimized phenotype).

5 http://en.wikipedia.org/wiki/Phenotypic_plasticity [accessed 2008-09-10]
6 http://en.wikipedia.org/wiki/Genetic_assimilation [accessed 2008-09-10]
Whereas learning adds gradient information in regions of the search space which are
distant from local or global optima (case 2 in Fig. 36.1.b), it decreases the information in
their near proximity (called the hiding effect [1457, 1844], case 3 in Fig. 36.1.b). Similarly, while
using the Baldwin effect in the described way may smoothen out rugged fitness landscapes (case
4), it could also reduce smaller gradient information (case 1). Suzuki and Arita [2637] found
that the Baldwin effect decreases the evolution speed in their rugged experimental fitness
landscape, but also led to significantly better results in the long term.

One interpretation of these issues is that learning capabilities help individuals to survive
in adverse conditions, since they may find good abilities by learning and phenotypic adap-
tation. On the other hand, it makes little difference whether an individual learns
certain abilities or whether it was already born with them when it can exercise them at the
same level of perfection. Thus, the selection pressure furthering the inclusion of good traits
in the heredity information decreases if a life form can learn or adapt its phenotypes.
Like Lamarckian evolution, the Baldwin effect can also be added to Evolutionary Al-
gorithms by performing a local search starting at each new offspring individual. Different
from Lamarckism, the abilities and characteristics attained by this process only influence
the objective values of an individual and are not coded back to the genotypes. Hence, it
plays no role whether the search takes place in the search space G or in the problem space
X. The best objective values f(p′.x) found in a search around individual p become its own
objective values, but the modified variant p′ of p actually scoring them is discarded [2933].

Nevertheless, the implementer will store these individuals somewhere else if they were
the best candidate solutions ever found. She must furthermore ensure that the user will be
provided with the correct objective values of the final set of candidate solutions resulting
from the optimization process (f(p.x), not f(p′.x)).
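This scoring rule can be made concrete with a small sketch (purely illustrative; the objective function f and all parameters are hypothetical): the local search only determines the objective value that is reported, while the genotype itself stays untouched.

```python
import random

def f(g):
    # Illustrative objective function (to be minimized): sphere model.
    return sum(x * x for x in g)

def baldwinian_fitness(genotype, steps=100, step_size=0.1, rng=random):
    """Return the best objective value found in a local search near `genotype`.

    The improved variant p' that actually scores this value is discarded;
    only the objective value f(p'.x) is kept and the genotype is unchanged.
    """
    best = list(genotype)
    best_f = f(best)
    for _ in range(steps):
        cand = [x + rng.gauss(0.0, step_size) for x in best]
        cand_f = f(cand)
        if cand_f < best_f:
            best, best_f = cand, cand_f
    return best_f            # note: `best` itself is thrown away

rng = random.Random(1)
p = [rng.uniform(-2, 2) for _ in range(3)]
before = list(p)
score = baldwinian_fitness(p, rng=rng)
assert score <= f(p)         # learning can only improve the reported fitness...
assert p == before           # ...but the genotype itself is not modified
```

As the text notes, a real implementation should additionally archive the discarded variants if they happen to be the best candidate solutions found so far.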
36.4 General Information on Memetic Algorithms
36.4.1 Applications and Examples
Table 36.1: Applications and Examples of Memetic Algorithms.
Area References
Combinatorial Problems [451, 457, 619, 649, 1486, 1581, 1618, 1859, 1961, 2055, 2245,
2374, 2669, 3071]
Computer Graphics [2707]
Data Mining [1581]
Databases [3071]
Distributed Algorithms and Systems [769, 872, 1007, 1435, 2020, 2897]
Economics and Finances [1060]
Engineering [769, 1435, 1486, 2897]
Function Optimization [795, 1927]
Graph Theory [63, 329, 769, 872, 1007, 1435, 1486, 2020, 2237, 2374, 2897]
Healthcare [2021, 2022]
Logistics [451, 619, 1581, 1859, 1961, 2245, 2669]
Motor Control [489, 490]
Security [1486]
Software [1486, 2897]
Wireless Communication [63, 1486, 2237, 2374]
36.4.2 Books
1. Recent Advances in Memetic Algorithms [1205]
2. Hybrid Evolutionary Algorithms [1141]
3. Advances in Evolutionary Algorithms [1581]
36.4.3 Conferences and Workshops
Table 36.2: Conferences and Workshops on Memetic Algorithms.
GECCO: Genetic and Evolutionary Computation Conference
See Table 28.5 on page 317.
CEC: IEEE Conference on Evolutionary Computation
See Table 28.5 on page 317.
EvoWorkshops: Real-World Applications of Evolutionary Computing
See Table 28.5 on page 318.
PPSN: Conference on Parallel Problem Solving from Nature
See Table 28.5 on page 318.
WCCI: IEEE World Congress on Computational Intelligence
See Table 28.5 on page 318.
EMO: International Conference on Evolutionary Multi-Criterion Optimization
See Table 28.5 on page 319.
AE/EA: Artificial Evolution
See Table 28.5 on page 319.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
ICANNGA: International Conference on Adaptive and Natural Computing Algorithms
See Table 28.5 on page 319.
ICCI: IEEE International Conference on Cognitive Informatics
See Table 10.2 on page 128.
ICNC: International Conference on Advances in Natural Computation
See Table 28.5 on page 319.
SEAL: International Conference on Simulated Evolution and Learning
See Table 10.2 on page 130.
BIOMA: International Conference on Bioinspired Optimization Methods and their Applications
See Table 28.5 on page 320.
BIONETICS: International Conference on Bio-Inspired Models of Network, Information, and Computing Systems
See Table 10.2 on page 130.
FEA: International Workshop on Frontiers in Evolutionary Algorithms
See Table 28.5 on page 320.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
MICAI: Mexican International Conference on Artificial Intelligence
See Table 10.2 on page 131.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
WSC: Online Workshop/World Conference on Soft Computing (in Industrial Applications)
See Table 10.2 on page 131.
CIMCA: International Conference on Computational Intelligence for Modelling, Control and Automation
See Table 28.5 on page 320.
CIS: International Conference on Computational Intelligence and Security
See Table 28.5 on page 320.
CSTST: International Conference on Soft Computing as Transdisciplinary Science and Technology
See Table 10.2 on page 132.
EUFIT: European Congress on Intelligent Techniques and Soft Computing
See Table 10.2 on page 132.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
EvoCOP: European Conference on Evolutionary Computation in Combinatorial Optimization
See Table 28.5 on page 321.
GEC: ACM/SIGEVO Summit on Genetic and Evolutionary Computation
See Table 28.5 on page 321.
GEM: International Conference on Genetic and Evolutionary Methods
See Table 28.5 on page 321.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
SoCPaR: International Conference on SOft Computing and PAttern Recognition
See Table 10.2 on page 133.
Progress in Evolutionary Computation: Workshops on Evolutionary Computation
See Table 28.5 on page 321.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
AJWS: Australia-Japan Joint Workshop on Intelligent & Evolutionary Systems
See Table 28.5 on page 321.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
ASC: IASTED International Conference on Artificial Intelligence and Soft Computing
See Table 10.2 on page 134.
EvoNUM: European Event on Bio-Inspired Algorithms for Continuous Parameter Optimisation
See Table 28.5 on page 321.
FOCI: IEEE Symposium on Foundations of Computational Intelligence
See Table 28.5 on page 321.
GICOLAG: Global Optimization Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
IJCCI: International Conference on Computational Intelligence
See Table 28.5 on page 321.
IWNICA: International Workshop on Nature Inspired Computation and Applications
See Table 28.5 on page 321.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
NaBIC: World Congress on Nature and Biologically Inspired Computing
See Table 28.5 on page 322.
WPBA: Workshop on Parallel Architectures and Bioinspired Algorithms
See Table 28.5 on page 322.
EvoRobots: International Symposium on Evolutionary Robotics
See Table 28.5 on page 322.
ICEC: International Conference on Evolutionary Computation
See Table 28.5 on page 322.
IWACI: International Workshop on Advanced Computational Intelligence
See Table 28.5 on page 322.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
SEMCCO: International Conference on Swarm, Evolutionary and Memetic Computing
See Table 28.5 on page 322.
SSCI: IEEE Symposium Series on Computational Intelligence
See Table 28.5 on page 322.
WOMA: International Workshop on Memetic Algorithms
History: 2009/03: Nashville, TN, USA, see [1394]
36.4.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Genetic Programming and Evolvable Machines published by Kluwer Academic Publishers and
Springer Netherlands
3. Memetic Computing published by Springer-Verlag GmbH
Chapter 37
Ant Colony Optimization
37.1 Introduction

Inspired by the research done by Deneubourg et al. [767, 768, 1111] on real ants and probably
by the simulation experiments by Stickland et al. [2613], Dorigo et al. [810] developed the
Ant Colony Optimization¹ (ACO) algorithm in 1996 for problems that can be reduced to finding
optimal paths in graphs [807, 811, 819, 1820, 1823]. Ant Colony Optimization is
based on the metaphor of ants seeking food. In order to do so, an ant will leave the anthill
and begin to wander in a random direction. While the little insect paces around, it lays
a trail of pheromone. Thus, after the ant has found some food, it can track its way back.
By doing so, it distributes another layer of pheromone on the path. An ant that senses
the pheromone will follow its trail with a certain probability. Each ant that finds the food
will excrete some pheromone on the path. Over time, the pheromone density of the path
will increase and more and more ants will follow it to the food and back. The higher the
pheromone density, the more likely an ant will stay on a trail. However, the pheromones
evaporate after some time. If all the food is collected, they will no longer be renewed and the
path will disappear after a while. Now, the ants will head to new, random locations.
This process of distributing and tracking pheromones is one form of stigmergy² and was
first described by Grassé [1123]. Today, we subsume many different ways of communication
by modifying the environment under this term, which can be divided into two groups:
sematectonic and sign-based [2425]. According to Wilson [2948], we call modifications in
the environment due to a task-related action which lead other entities involved in this task
to change their behavior sematectonic stigmergy. If an ant drops a ball of mud somewhere,
this may cause other ants to place mud balls at the same location. Step by step, these
effects can cumulatively lead to the growth of complex structures. Sematectonic stigmergy
has been simulated on computer systems by, for instance, Theraulaz and Bonabeau [2693]
and with robotic systems by Werfel and Nagpal [2431, 2920, 2921].

The second form, sign-based stigmergy, is not directly task-related. It has been attained
evolutionarily by social insects which use a wide range of pheromones and hormones for
communication. Computer simulations of sign-based stigmergy were first performed by
Stickland et al. [2613] in 1992.
Sign-based stigmergy is copied by Ant Colony Optimization [810], where optimiza-
tion problems are visualized as (directed) graphs. First, a set of ants performs randomized
walks through the graphs. Proportional to the goodness of the solutions denoted by the
paths, pheromones are laid out, i. e., the probability to walk in the direction of these paths
is shifted. The ants run again through the graph, following the previously distributed
pheromone. However, they will not exactly follow these paths. Instead, they may deviate
from these routes by taking other turns at junctions, since their walk is still randomized.
The pheromones modify the probability distributions.

The intention of Ant Colony Optimization is solving combinatorial problems, not
simulating ants realistically [810]. The ants in ACO thus differ from real ones in three major
aspects:

1. they have some memory,
2. they are not completely blind, and
3. they are simulated in an environment where time is discrete.

It is interesting to note that even real vector optimizations can be mapped to a graph
problem, as introduced by Korošec and Šilc [1580]. Thanks to such ideas, the applicability
of Ant Colony Optimization is greatly increased.

1 http://en.wikipedia.org/wiki/Ant_colony_optimization [accessed 2007-07-03]
2 http://en.wikipedia.org/wiki/Stigmergy [accessed 2007-07-03]
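The pheromone metaphor can be condensed into a short sketch. The following toy example is illustrative only: the graph, the attractiveness rule, and the parameters (exponents, evaporation rate, deposit amount) are arbitrary choices in the spirit of the original Ant System, not the canonical ACO formulas.

```python
import random

# A toy directed graph: node -> {successor: edge length}.
# The shortest path from A to D is A-B-D with length 2.
graph = {"A": {"B": 1.0, "C": 2.0}, "B": {"D": 1.0}, "C": {"D": 2.0}, "D": {}}
pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}

def walk(rng, alpha=1.0, beta=2.0):
    """One ant's randomized walk from A to D, biased by pheromone and length."""
    path, node = ["A"], "A"
    while node != "D":
        nxt = list(graph[node])
        # Attractiveness of each edge: pheromone^alpha * (1/length)^beta.
        w = [pheromone[(node, v)] ** alpha * (1.0 / graph[node][v]) ** beta
             for v in nxt]
        node = rng.choices(nxt, weights=w)[0]
        path.append(node)
    return path

def length(path):
    return sum(graph[u][v] for u, v in zip(path, path[1:]))

rng = random.Random(0)
best = None
for _ in range(30):                  # 30 ants, one after another
    for e in pheromone:              # evaporation (-)
        pheromone[e] *= 0.9
    p = walk(rng)
    for e in zip(p, p[1:]):          # deposit (+), more for shorter paths
        pheromone[e] += 1.0 / length(p)
    if best is None or length(p) < length(best):
        best = p

print(best, length(best))            # the short A-B-D route quickly dominates
```

The self-reinforcing loop is visible in miniature: each successful walk raises the pheromone on its edges, which raises the probability that later ants take the same turns.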
37.2 General Information on Ant Colony Optimization
37.2.1 Applications and Examples
Table 37.1: Applications and Examples of Ant Colony Optimization.
Area References
Combinatorial Problems [308, 354, 371, 444, 445, 802, 807, 808, 810, 812, 820, 1011-1013,
1145, 1148, 1149, 1486, 1730, 1822-1825, 1871, 2212, 2811, 3101]
Cooperation and Teamwork [2424]
Data Mining [2674]
Databases [371, 1145, 1822, 1824, 1825]
Distributed Algorithms and Systems [328, 353, 354, 786, 811, 843, 871, 963, 1006, 1292,
1633, 1733, 1800, 1863, 2387, 2425, 2493, 2694]
E-Learning [1158, 2763]
Economics and Finances [1060, 1825]
Engineering [786, 787, 871, 1486, 1717, 1733, 2083, 2425, 2694]
Graph Theory [63, 328, 353, 354, 786, 843, 963, 1486, 1733, 1800, 1863,
2387, 2425, 2493, 2694]
Healthcare [2674]
Logistics [354, 444, 445, 802, 808, 810, 812, 820, 1011, 1013, 1148,
1149, 1730, 1823, 2212, 2811]
Multi-Agent Systems [967]
Security [540, 871, 1486]
Software [1486]
Telecommunication [820]
Wireless Communication [63, 787, 1486, 1717]
37.2.2 Books
1. Nature-Inspired Algorithms for Optimisation [562]
2. New Optimization Techniques in Engineering [2083]
3. Swarm Intelligent Systems [2013]
4. Swarm Intelligence: From Natural to Artificial Systems [353]
5. Swarm Intelligence: Collective, Adaptive [1524]
6. Swarm Intelligence: Introduction and Applications [341]
7. Swarm Intelligence: Focus on Ant and Particle Swarm Optimization [525]
8. Advances in Metaheuristics for Hard Optimization [2485]
9. Ant Colony Optimization [809]
10. Fundamentals of Computational Swarm Intelligence [881]
37.2.3 Conferences and Workshops
Table 37.2: Conferences and Workshops on Ant Colony Optimization.
GECCO: Genetic and Evolutionary Computation Conference
See Table 28.5 on page 317.
CEC: IEEE Conference on Evolutionary Computation
See Table 28.5 on page 317.
EvoWorkshops: Real-World Applications of Evolutionary Computing
See Table 28.5 on page 318.
PPSN: Conference on Parallel Problem Solving from Nature
See Table 28.5 on page 318.
WCCI: IEEE World Congress on Computational Intelligence
See Table 28.5 on page 318.
EMO: International Conference on Evolutionary Multi-Criterion Optimization
See Table 28.5 on page 319.
AE/EA: Artificial Evolution
See Table 28.5 on page 319.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
IAAI: Conference on Innovative Applications of Artificial Intelligence
See Table 10.2 on page 128.
ICANNGA: International Conference on Adaptive and Natural Computing Algorithms
See Table 28.5 on page 319.
ICCI: IEEE International Conference on Cognitive Informatics
See Table 10.2 on page 128.
ICNC: International Conference on Advances in Natural Computation
See Table 28.5 on page 319.
SEAL: International Conference on Simulated Evolution and Learning
See Table 10.2 on page 130.
ANTS: International Workshop on Ant Colony Optimization and Swarm Intelligence
History: 2010/09: Brussels, Belgium, see [434]
2008/09: Brussels, Belgium, see [2580]
2006/09: Brussels, Belgium, see [821]
2004/09: Brussels, Belgium, see [818]
2002/09: Brussels, Belgium, see [816]
2000/09: Brussels, Belgium, see [814, 815]
BIOMA: International Conference on Bioinspired Optimization Methods and their Applications
See Table 28.5 on page 320.
BIONETICS: International Conference on Bio-Inspired Models of Network, Information, and Computing Systems
See Table 10.2 on page 130.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
MICAI: Mexican International Conference on Artificial Intelligence
See Table 10.2 on page 131.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
WSC: Online Workshop/World Conference on Soft Computing (in Industrial Applications)
See Table 10.2 on page 131.
CIMCA: International Conference on Computational Intelligence for Modelling, Control and Automation
See Table 28.5 on page 320.
CIS: International Conference on Computational Intelligence and Security
See Table 28.5 on page 320.
CSTST: International Conference on Soft Computing as Transdisciplinary Science and Technology
See Table 10.2 on page 132.
EUFIT: European Congress on Intelligent Techniques and Soft Computing
See Table 10.2 on page 132.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
EvoCOP: European Conference on Evolutionary Computation in Combinatorial Optimization
See Table 28.5 on page 321.
GEC: ACM/SIGEVO Summit on Genetic and Evolutionary Computation
See Table 28.5 on page 321.
IEA/AIE: International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems
See Table 10.2 on page 133.
ISA: International Workshop on Intelligent Systems and Applications
See Table 10.2 on page 133.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
SoCPaR: International Conference on SOft Computing and PAttern Recognition
See Table 10.2 on page 133.
Progress in Evolutionary Computation: Workshops on Evolutionary Computation
See Table 28.5 on page 321.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
AJWS: Australia-Japan Joint Workshop on Intelligent & Evolutionary Systems
See Table 28.5 on page 321.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
ASC: IASTED International Conference on Artificial Intelligence and Soft Computing
See Table 10.2 on page 134.
FOCI: IEEE Symposium on Foundations of Computational Intelligence
See Table 28.5 on page 321.
FWGA: Finnish Workshop on Genetic Algorithms and Their Applications
See Table 29.2 on page 355.
GICOLAG: Global Optimization Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
ICSI: International Conference on Swarm Intelligence
See Table 39.2 on page 485.
IJCCI: International Conference on Computational Intelligence
See Table 28.5 on page 321.
IWNICA: International Workshop on Nature Inspired Computation and Applications
See Table 28.5 on page 321.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
NaBIC: World Congress on Nature and Biologically Inspired Computing
See Table 28.5 on page 322.
TAAI: Conference on Technologies and Applications of Artificial Intelligence
See Table 10.2 on page 134.
WPBA: Workshop on Parallel Architectures and Bioinspired Algorithms
See Table 28.5 on page 322.
EvoRobots: International Symposium on Evolutionary Robotics
See Table 28.5 on page 322.
ICEC: International Conference on Evolutionary Computation
See Table 28.5 on page 322.
IWACI: International Workshop on Advanced Computational Intelligence
See Table 28.5 on page 322.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
SEMCCO: International Conference on Swarm, Evolutionary and Memetic Computing
See Table 28.5 on page 322.
SIS: IEEE Swarm Intelligence Symposium
See Table 39.2 on page 486.
SSCI: IEEE Symposium Series on Computational Intelligence
See Table 28.5 on page 322.
37.2.4 Journals
1. International Journal of Swarm Intelligence and Evolutionary Computation (IJSIEC) published by Ashdin Publishing (AP)
Chapter 38
River Formation Dynamics
River Formation Dynamics (RFD) is a heuristic optimization method recently developed by
Basalo et al. [229, 230]. It is inspired by the way water forms rivers by eroding the ground
and depositing sediments. In its structure, it is very close to Ant Colony Optimization. In
Ant Colony Optimization, paths through a graph are searched by attaching attributes (the
pheromones) to its edges. The pheromones are laid out by ants (+) and evaporate as time
goes by (-). In River Formation Dynamics, the heights above sea level are the attributes
of the vertices of the graph. On this landscape, rain begins to fall. Forced by gravity, the
drops flow downhill and try to reach the sea. The altitudes of the points in the graph are
decreased by erosion (-) when water flows over them and increased by sedimentation (+)
if drops end up in a dead end, evaporate, and leave the material which they have eroded
somewhere else behind. Sedimentation punishes inefficient paths: a drop reaching a node
surrounded only by nodes of higher altitude will increase this node's height more and more until it
reaches the level of its neighbors and is not a dead end anymore. While flowing over the
map, the probability that a drop takes a certain edge depends on the gradient of the down slope.
This gradient, in turn, depends on the difference in altitude of the nodes it connects and
their distance (i. e., the cost function). Initially, all nodes have the same altitude except for
the destination node, which is a hole. New drops are inserted in the origin node and flow
over the landscape, reinforce promising paths, and either reach the destination or evaporate
in dead ends.

Different from ACO, cycles cannot occur in RFD because the water always flows downhill.
Of course, rivers in nature may fork and reunite, too. But, unlike ACO, River Formation
Dynamics implicitly creates direction information in its resulting graphs. If this information
is considered to be part of the solution, then cycles are impossible. If it is stripped away,
cycles may occur.
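The edge-selection rule described above can be sketched as follows. Function and variable names are our own, and the erosion and sedimentation updates themselves are omitted for brevity: only the downhill-gradient probability is shown.

```python
def downhill_probabilities(node, altitude, edges):
    """Probability of a drop at `node` taking each outgoing edge.

    The chance is proportional to the positive gradient (altitude drop per
    unit distance); uphill or flat edges get probability zero here.
    """
    grads = {}
    for nbr, dist in edges[node].items():
        g = (altitude[node] - altitude[nbr]) / dist
        grads[nbr] = max(g, 0.0)          # only downhill edges attract drops
    total = sum(grads.values())
    if total == 0.0:                      # dead end: sedimentation would now raise `node`
        return {nbr: 0.0 for nbr in grads}
    return {nbr: g / total for nbr, g in grads.items()}

# Initially flat landscape except for the destination D, which is a "hole".
altitude = {"A": 1.0, "B": 1.0, "C": 1.0, "D": 0.0}
edges = {"A": {"B": 1.0, "C": 2.0}, "B": {"D": 1.0}, "C": {"D": 2.0}}

p_B = downhill_probabilities("B", altitude, edges)   # all mass flows to the hole D
p_A = downhill_probabilities("A", altitude, edges)   # flat start: no gradient yet
print(p_B, p_A)
```

The second call illustrates why erosion is essential: on the initially flat landscape a drop at A sees no gradient at all, and only repeated erosion along successful paths carves the slopes that later drops follow.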
38.1 General Information on River Formation Dynamics
38.1.1 Applications and Examples
Table 38.1: Applications and Examples of River Formation Dynamics.
Area References
Combinatorial Problems [229, 230]
Graph Theory [230]
Logistics [229]
38.1.2 Books
1. Ant Colony Optimization [809]
2. Fundamentals of Computational Swarm Intelligence [881]
3. Swarm Intelligence: Focus on Ant and Particle Swarm Optimization [525]
4. Swarm Intelligence: Introduction and Applications [341]
5. Swarm Intelligence: Collective, Adaptive [1524]
6. Swarm Intelligence: From Natural to Artificial Systems [353]
7. Swarm Intelligent Systems [2013]
8. Nature-Inspired Algorithms for Optimisation [562]
9. New Optimization Techniques in Engineering [2083]
38.1.3 Conferences and Workshops
Table 38.2: Conferences and Workshops on River Formation Dynamics.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
GICOLAG: Global Optimization Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
38.1.4 Journals
1. Journal of Heuristics published by Springer Netherlands
Chapter 39
Particle Swarm Optimization
39.1 Introduction

Particle Swarm Optimization¹ (PSO), developed by Eberhart and Kennedy [853, 855, 1523]
in 1995, is a form of swarm intelligence in which the behavior of a biological social system
like a flock of birds or a school of fish [2129] is simulated. When a swarm looks for food, its
individuals will spread in the environment and move around independently. Each individual
has a degree of freedom or randomness in its movements which enables it to find food
accumulations. So, sooner or later, one of them will find something digestible and, being
social, announces this to its neighbors. These can then approach the source of food, too.
Particle Swarm Optimization has been discussed, improved, and refined by many researchers
such as Cai et al. [468], Gao and Duan [1018], Venter and Sobieszczanski-Sobieski [2799], and
Gao and Ren [1019]. Comparisons with other evolutionary approaches have been provided
by Eberhart and Shi [854] and Angeline [103].

With Particle Swarm Optimization, a swarm of particles (individuals) in an n-dimensional
search space G is simulated, where each particle p has a position p.g ∈ G ⊆ R^n and a
velocity p.v ∈ R^n. The position p.g corresponds to the genotype, and, in most cases,
also to the candidate solution, i. e., p.x = p.g, since most often the problem space X
is also the R^n and X = G. However, this is not necessarily the case and, generally, we
can introduce any form of genotype-phenotype mapping in Particle Swarm Optimization.
The velocity vector p.v of an individual p determines in which direction the search will
continue and whether it has an explorative (high velocity) or an exploitive (low velocity) character.
It represents endogenous parameters which we discussed under the subject of Evolution
Strategy in Chapter 30. Whereas the endogenous information in Evolution Strategy is used
for an undirected mutation, the velocity in Particle Swarm Optimization is used to perform
a directed modification of the genotypes (particle positions).
39.2 The Particle Swarm Optimization Algorithm

In the initialization phase of Particle Swarm Optimization, the positions and velocities of all
individuals are randomly initialized. In each step, first the velocity of a particle is updated
and then its position. Therefore, each particle p has another endogenous parameter: a
memory holding its best position best(p) ∈ G.

1 http://en.wikipedia.org/wiki/Particle_swarm_optimization [accessed 2007-07-03]
39.2.1 Communication with Neighbors – Social Interaction

In order to realize the social component of natural swarms, a particle furthermore has
a set of topological neighbors N(p). This set could be pre-defined for each particle at
startup. Alternatively, it could contain adjacent particles within a specific perimeter, i. e.,
all individuals which are no further away from p.g than a given distance δ according to a
certain distance measure² dist. Using the Euclidean distance measure dist_eucl specified
in ?? on page ?? we get:

∀ p, q ∈ pop : q ∈ N(p) ⇔ dist_eucl(p.g, q.g) ≤ δ    (39.1)

Each particle can communicate with its neighbors, so the best position found so far by any
element in N(p) is known to all of them as best(N(p)). The best position ever visited by
any individual in the population (which the optimization algorithm always keeps track of)
is referred to as best(pop).
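Equation 39.1 translates directly into code. In this sketch, particles are plain dictionaries and δ is passed as delta; these representations are illustrative only.

```python
import math

def neighbors(p, pop, delta):
    """Topological neighbors of particle p: all particles whose position lies
    within Euclidean distance delta of p.g (Equation 39.1)."""
    def dist_eucl(g1, g2):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(g1, g2)))
    return [q for q in pop if dist_eucl(p["g"], q["g"]) <= delta]

pop = [{"g": (0.0, 0.0)}, {"g": (0.5, 0.0)}, {"g": (3.0, 4.0)}]
N = neighbors(pop[0], pop, delta=1.0)
print(len(N))   # 2: the particle itself (distance 0) and its close neighbor
```

Note that, by Equation 39.1, a particle is always a member of its own neighborhood, since its distance to itself is zero.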
39.2.2 Particle Update

One of the basic methods to conduct Particle Swarm Optimization is to use either best(N(p))
or best(pop) for adjusting the velocity of the particle p. If the globally best position is
utilized, the algorithm will converge quickly but maybe prematurely, i. e., it may not find the
global optimum. If, on the other hand, neighborhood communication and best(N(p)) are
used, the convergence speed drops but the global optimum is found more likely.

The search operation q = psoUpdate(p, pop) applied in Particle Swarm Optimization
creates a new particle q to replace an existing one (p) by incorporating its genotype p.g and
its velocity p.v. We distinguish local updating (Equation 39.3) and global updating (Equa-
tion 39.2), which additionally uses the data from the whole population pop. psoUpdate
thus fulfills one of these two equations and Equation 39.4, which shows how the i-th compo-
nents of the corresponding vectors are computed.

q.v_i = p.v_i + (randomUni[0, c_i) * (best(p).g_i - p.g_i)) + (randomUni[0, d_i) * (best(pop).g_i - p.g_i))    (39.2)

q.v_i = p.v_i + (randomUni[0, c_i) * (best(p).g_i - p.g_i)) + (randomUni[0, d_i) * (best(N(p)).g_i - p.g_i))    (39.3)

q.g_i = p.g_i + q.v_i    (39.4)

The learning rate vectors c and d have a strong influence on the convergence speed of Par-
ticle Swarm Optimization. The search space G (and thus, also the values of p.g) is normally
confined by minimum and maximum boundaries. For the absolute values of the velocity,
maximum thresholds normally exist as well. Thus, real implementations of psoUpdate have
to check and refine their results before the utility of the candidate solutions is evaluated.
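A possible implementation of the global update (Equations 39.2 and 39.4), including the usual velocity clamping, could look as follows. The data layout and parameter names are our own choices, not a reference implementation.

```python
import random

def pso_update(p, p_best, pop_best, c, d, rng, v_max=1.0):
    """Global psoUpdate: new velocity per Equation 39.2, new position per 39.4.

    p is a particle with position p["g"] and velocity p["v"]; p_best and
    pop_best are the positions best(p).g and best(pop).g; c and d are the
    learning rate vectors. Velocities are clamped to [-v_max, v_max], as real
    implementations usually do.
    """
    n = len(p["g"])
    v = []
    for i in range(n):
        vi = (p["v"][i]
              + rng.uniform(0.0, c[i]) * (p_best[i] - p["g"][i])
              + rng.uniform(0.0, d[i]) * (pop_best[i] - p["g"][i]))
        v.append(max(-v_max, min(v_max, vi)))       # velocity threshold
    g = [p["g"][i] + v[i] for i in range(n)]        # Equation 39.4
    return {"g": g, "v": v}

rng = random.Random(0)
p = {"g": [1.0, -1.0], "v": [0.25, 0.5]}
# If both attractors coincide with the current position, the velocity is kept
# unchanged and the particle simply drifts along it:
q = pso_update(p, p["g"], p["g"], c=[2.0, 2.0], d=[2.0, 2.0], rng=rng)
print(q["v"], q["g"])
```

The local update of Equation 39.3 is obtained by simply passing best(N(p)).g instead of best(pop).g as the third argument.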
39.2.3 Basic Procedure

Algorithm 39.1 illustrates the native form of Particle Swarm Optimization using the
basic update procedure. Like Hill Climbing, this algorithm can easily be generalized for
multi-objective optimization and for returning sets of optimal solutions (compare with Sec-
tion 26.3 on page 230).

2 See ?? on page ?? for more information on distance measures.
Algorithm 39.1: x* ← PSO(f, ps)

Input: f: the function to optimize
Input: ps: the population size
Data: pop: the particle population
Data: i: a counter variable
Output: x*: the best value found

1 begin
2   pop ← createPop(ps)
3   while ¬terminationCriterion() do
4     for i ← 0 up to len(pop) − 1 do
5       pop[i] ← psoUpdate(pop[i], pop)
6   return best(pop).x
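Putting everything together, Algorithm 39.1 with the global update might be realized as in the following sketch. The population size, iteration budget, learning rates, and velocity bound are arbitrary demonstration values, and the particles are updated in place rather than replaced.

```python
import random

def f(g):                        # objective to minimize (illustrative sphere model)
    return sum(x * x for x in g)

def pso(f, ps=30, n=2, iters=200, c=2.0, d=2.0, v_max=1.0, rng=None):
    """Native PSO after Algorithm 39.1: create a population, repeatedly apply
    the global particle update, and return the best candidate solution found."""
    rng = rng or random.Random()
    pop = [{"g": [rng.uniform(-5, 5) for _ in range(n)],
            "v": [rng.uniform(-1, 1) for _ in range(n)]} for _ in range(ps)]
    for p in pop:
        p["best"] = list(p["g"])                  # personal memory best(p)
    g_best = list(min((p["g"] for p in pop), key=f))   # best(pop)
    for _ in range(iters):
        for p in pop:
            for i in range(n):
                vi = (p["v"][i]
                      + rng.uniform(0, c) * (p["best"][i] - p["g"][i])
                      + rng.uniform(0, d) * (g_best[i] - p["g"][i]))
                p["v"][i] = max(-v_max, min(v_max, vi))   # clamp velocity
                p["g"][i] += p["v"][i]
            if f(p["g"]) < f(p["best"]):
                p["best"] = list(p["g"])
            if f(p["g"]) < f(g_best):
                g_best = list(p["g"])
    return g_best

rng = random.Random(42)
x_best = pso(f, rng=rng)
print(f(x_best))   # far better than a typical random point in [-5, 5]^2
```

Switching to neighborhood communication only requires replacing g_best by the best memory within each particle's neighbor set N(p).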
39.3 General Information on Particle Swarm Optimization
39.3.1 Applications and Examples
Table 39.1: Applications and Examples of Particle Swarm Optimization.
Area References
Combinatorial Problems [527, 1581, 1780, 1781, 1835, 2212, 2633]
Data Mining [1581, 3018]
Databases [1780, 1781, 2633]
Distributed Algorithms and Systems [156, 202, 404, 721, 1298, 1835, 2440, 2628, 3061]
E-Learning [721, 1298, 2628]
Economics and Finances [1060]
Engineering [156, 853, 1835, 2799, 2861]
Function Optimization [103, 853, 855, 1018, 1523, 1727, 2799]
Graph Theory [202, 1835, 2440]
Logistics [527, 1581, 2212]
Motor Control [490]
Security [404]
Wireless Communication [68, 155]
39.3.2 Books
1. Nature-Inspired Algorithms for Optimisation [562]
2. Swarm Intelligent Systems [2013]
3. Swarm Intelligence: Collective, Adaptive [1524]
4. Parallel Evolutionary Computations [2015]
5. Swarm Intelligence: Introduction and Applications [341]
6. Swarm Intelligence: Focus on Ant and Particle Swarm Optimization [525]
7. Particle Swarm Optimization [589]
8. Fundamentals of Computational Swarm Intelligence [881]
9. Advances in Evolutionary Algorithms [1581]
39.3.3 Conferences and Workshops
Table 39.2: Conferences and Workshops on Particle Swarm Optimization.
GECCO: Genetic and Evolutionary Computation Conference
See Table 28.5 on page 317.
CEC: IEEE Conference on Evolutionary Computation
See Table 28.5 on page 317.
EvoWorkshops: Real-World Applications of Evolutionary Computing
See Table 28.5 on page 318.
PPSN: Conference on Parallel Problem Solving from Nature
See Table 28.5 on page 318.
WCCI: IEEE World Congress on Computational Intelligence
See Table 28.5 on page 318.
EMO: International Conference on Evolutionary Multi-Criterion Optimization
See Table 28.5 on page 319.
AE/EA: Artificial Evolution
See Table 28.5 on page 319.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
IAAI: Conference on Innovative Applications of Artificial Intelligence
See Table 10.2 on page 128.
ICANNGA: International Conference on Adaptive and Natural Computing Algorithms
See Table 28.5 on page 319.
ICCI: IEEE International Conference on Cognitive Informatics
See Table 10.2 on page 128.
ICNC: International Conference on Advances in Natural Computation
See Table 28.5 on page 319.
SEAL: International Conference on Simulated Evolution and Learning
See Table 10.2 on page 130.
ANTS: International Workshop on Ant Colony Optimization and Swarm Intelligence
See Table 37.2 on page 474.
BIOMA: International Conference on Bioinspired Optimization Methods and their Applications
See Table 28.5 on page 320.
BIONETICS: International Conference on Bio-Inspired Models of Network, Information, and Computing Systems
See Table 10.2 on page 130.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
MICAI: Mexican International Conference on Artificial Intelligence
See Table 10.2 on page 131.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
WSC: Online Workshop/World Conference on Soft Computing (in Industrial Applications)
See Table 10.2 on page 131.
CIMCA: International Conference on Computational Intelligence for Modelling, Control and Automation
See Table 28.5 on page 320.
CIS: International Conference on Computational Intelligence and Security
See Table 28.5 on page 320.
CSTST: International Conference on Soft Computing as Transdisciplinary Science and Technology
See Table 10.2 on page 132.
EUFIT: European Congress on Intelligent Techniques and Soft Computing
See Table 10.2 on page 132.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
EvoCOP: European Conference on Evolutionary Computation in Combinatorial Optimization
See Table 28.5 on page 321.
GEC: ACM/SIGEVO Summit on Genetic and Evolutionary Computation
See Table 28.5 on page 321.
IEA/AIE: International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems
See Table 10.2 on page 133.
ISA: International Workshop on Intelligent Systems and Applications
See Table 10.2 on page 133.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
SoCPaR: International Conference on SOft Computing and PAttern Recognition
See Table 10.2 on page 133.
Progress in Evolutionary Computation: Workshops on Evolutionary Computation
See Table 28.5 on page 321.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
AJWS: Australia-Japan Joint Workshop on Intelligent & Evolutionary Systems
See Table 28.5 on page 321.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
ASC: IASTED International Conference on Artificial Intelligence and Soft Computing
See Table 10.2 on page 134.
FOCI: IEEE Symposium on Foundations of Computational Intelligence
See Table 28.5 on page 321.
FWGA: Finnish Workshop on Genetic Algorithms and Their Applications
See Table 29.2 on page 355.
GICOLAG: Global Optimization Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
ICSI: International Conference on Swarm Intelligence
History: 2010/06: Beijing, China, see [2665, 2666]
IJCCI: International Conference on Computational Intelligence
See Table 28.5 on page 321.
IWNICA: International Workshop on Nature Inspired Computation and Applications
See Table 28.5 on page 321.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
NaBIC: World Congress on Nature and Biologically Inspired Computing
See Table 28.5 on page 322.
TAAI: Conference on Technologies and Applications of Artificial Intelligence
See Table 10.2 on page 134.
WPBA: Workshop on Parallel Architectures and Bioinspired Algorithms
See Table 28.5 on page 322.
EvoRobots: International Symposium on Evolutionary Robotics
See Table 28.5 on page 322.
ICEC: International Conference on Evolutionary Computation
See Table 28.5 on page 322.
IWACI: International Workshop on Advanced Computational Intelligence
See Table 28.5 on page 322.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
SEMCCO: International Conference on Swarm, Evolutionary and Memetic Computing
See Table 28.5 on page 322.
SIS: IEEE Swarm Intelligence Symposium
History: 2003/04: Indianapolis, IN, USA, see [1387]
SSCI: IEEE Symposium Series on Computational Intelligence
See Table 28.5 on page 322.
39.3.4 Journals
1. IEEE Transactions on Evolutionary Computation (IEEE-EC) published by IEEE Computer
Society
2. Evolutionary Computation published by MIT Press
3. Proceedings of the National Academy of Science of the United States of America (PNAS)
published by National Academy of Sciences
4. Journal of Theoretical Biology published by Academic Press Professional, Inc. and Elsevier
Science Publishers B.V.
5. Natural Computing: An International Journal published by Kluwer Academic Publishers and
Springer Netherlands
6. Biosystems published by Elsevier Science Ireland Ltd. and North-Holland Scientific Publishers Ltd.
7. Australian Journal of Biological Science (AJBS)
8. BioData Mining published by BioMed Central Ltd.
9. International Journal of Computational Intelligence Research (IJCIR) published by Research
India Publications
10. The International Journal of Cognitive Informatics and Natural Intelligence (IJCINI) pub-
lished by Idea Group Publishing (Idea Group Inc., IGI Global)
11. IEEE Computational Intelligence Magazine published by IEEE Computational Intelligence
Society
12. Trends in Ecology and Evolution (TREE) published by Cell Press and Elsevier Science Pub-
lishers B.V.
13. Acta Biotheoretica published by Springer Netherlands
14. Bulletin of the American Iris Society published by American Iris Society (AIS)
15. Behavioral Ecology and Sociobiology published by Springer-Verlag
16. FEBS Letters published by Elsevier Science Publishers B.V. and Federation of European
Biochemical Societies (FEBS)
17. Naturwissenschaften – The Science of Nature published by Springer-Verlag
18. International Journal of Biometric and Bioinformatics (IJBB) published by Computer Sci-
ence Journals Sdn BhD
19. Journal of Experimental Biology (JEB) published by The Company of Biologists Ltd. (COB)
20. Bulletin of Mathematical Biology published by Springer New York
21. Computational Intelligence published by Wiley Periodicals, Inc.
22. Journal of Heredity published by American Genetic Association
23. Nature Genetics published by Nature Publishing Group (NPG)
24. The American Naturalist published by University of Chicago Press for The American Society
of Naturalists
25. Philosophical Transactions of the Royal Society B: Biological Sciences published by Royal
Society of Edinburgh
26. Astrobiology published by Mary Ann Liebert, Inc.
27. Current Biology published by Elsevier Science Publishers B.V.
28. Quarterly Review of Biology (QRB) published by Stony Brook University
29. Proceedings of the Royal Society B: Biological Sciences published by Royal Society Publishing
30. International Journal of Swarm Intelligence and Evolutionary Computation (IJSIEC) pub-
lished by Ashdin Publishing (AP)
Chapter 40
Tabu Search
Tabu Search^1 (TS) has been developed by Glover [1065, 1069] in the mid-1980s based on foundations from the 1970s [1064]. Some of the basic ideas were introduced by Hansen [1189], and further contributions in terms of formalizing this method have been made by Glover [1066, 1067] and de Werra and Hertz [729] (as summarized by Hertz et al. [1228] in their tutorial on Tabu Search), as well as by Battiti and Tecchiolli [232] and Cvijovic and Klinowski [665].
According to Hertz et al. [1228], Tabu Search can be considered as a metaheuristic variant of neighborhood search as introduced in Definition D46.3 on page 512. The word tabu^2 stems from Polynesia and describes a sacred place or object. Things that are tabu must be left alone and may not be visited or touched.
^1 http://en.wikipedia.org/wiki/Tabu_search [accessed 2007-07-03]
^2 http://en.wikipedia.org/wiki/Tapu_%28Polynesian_culture%29 [accessed 2008-03-27]
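As a minimal sketch of this idea, the following Python fragment performs a neighborhood search over bit strings in which recently visited solutions are declared tabu. The representation, the FIFO tabu list of complete solutions, and all parameter names are illustrative assumptions and not Glover's original formulation (which typically stores attributes of moves rather than full solutions):

```python
import random

def tabu_search(f, n_bits=16, steps=200, tenure=8):
    """Minimize f over bit strings; recently visited solutions are tabu."""
    current = tuple(random.randint(0, 1) for _ in range(n_bits))
    best = current
    tabu = [current]                          # FIFO list of tabu solutions

    for _ in range(steps):
        # neighborhood: all solutions reachable by flipping a single bit
        neighbors = [current[:i] + (1 - current[i],) + current[i + 1:]
                     for i in range(n_bits)]
        # a move is admissible if it is not tabu or if it improves on the
        # best solution found so far (the usual "aspiration criterion")
        admissible = [n for n in neighbors if n not in tabu or f(n) < f(best)]
        if not admissible:
            break
        current = min(admissible, key=f)      # best admissible neighbor
        if f(current) < f(best):
            best = current
        tabu.append(current)                  # declare the new solution tabu
        if len(tabu) > tenure:                # forget the oldest tabu entry
            tabu.pop(0)
    return best

# example: minimize the number of ones in the bit string
solution = tabu_search(lambda x: sum(x))
```

The tabu list lets the search climb out of local optima: after a local optimum is reached, the descent moves cannot immediately be undone, so the search is forced into new regions.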
40.1 General Information on Tabu Search
40.1.1 Applications and Examples
Table 40.1: Applications and Examples of Tabu Search.
Area References
Chemistry [2680]
Combinatorial Problems [83, 190, 232, 308, 403, 457, 640, 1066, 1069, 1189, 1228,
1486, 1737, 1858, 2075, 2105, 2423, 2491, 2648, 2649]
Databases [640]
Distributed Algorithms and Systems [2173, 3002]
Economics and Finances [1060]
Engineering [640, 1486, 2173, 3002, 3075]
Function Optimization [232, 665, 2484]
Graph Theory [63, 640, 1228, 1486, 2173, 2175, 2235, 2236, 2597, 3002]
Logistics [83, 190, 403, 1069, 1858, 2075, 2491, 2648, 2649]
Security [1486]
Software [1486]
Telecommunication [1067]
Theorem Proving and Automatic Verification [640]
Wireless Communication [63, 154, 640, 1486, 2175, 2235, 2236, 2597]
40.1.2 Books
1. Telecommunications Optimization: Heuristic and Adaptive Techniques [640]
2. How to Solve It: Modern Heuristics [1887]
40.1.3 Conferences and Workshops
Table 40.2: Conferences and Workshops on Tabu Search.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
GICOLAG: Global Optimization Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
40.1.4 Journals
1. Journal of Heuristics published by Springer Netherlands
Chapter 41
Extremal Optimization
41.1 Introduction
Different from Simulated Annealing, which is an optimization method based on the metaphor of thermal equilibria from physics, the Extremal Optimization^1 (EO) algorithm developed by Boettcher and Percus [344–348] is inspired by ideas of non-equilibrium physics. Especially important in this context is the property of self-organized criticality^2 (SOC) [186, 1439].
41.1.1 Self-Organized Criticality
The theory of SOC states that large interactive systems evolve to a state where a change in one single of their elements may lead to avalanches or domino effects that can reach any other element in the system. The probability distribution of the number of elements n involved in these avalanches is proportional to n^−τ (with τ > 0). Hence, changes involving only a few elements are most likely, but even avalanches involving the whole system are possible with a non-zero probability. [728]
41.1.2 The Bak-Sneppen model of Evolution
The Bak-Sneppen model^3 of evolution [185] exhibits self-organized criticality and was the inspiration for Extremal Optimization. Rather than focusing on single species, this model considers a whole ecosystem and the co-evolution of many different species.
In the model, each species is represented only by a real fitness value between 0 and 1. In each iteration, the species with the lowest fitness is mutated. The model does not include any representation for genomes; instead, mutation changes the fitness of the species directly by replacing it with a random value uniformly distributed in [0, 1]. In nature, this corresponds to the process where one species has developed further or was replaced by another one.
So far, mutation (i.e., development) would become less likely the more the fitness increases. Fitness can also be viewed as a barrier: new characteristics must be at least as fit as the current ones to proliferate. In an ecosystem, however, no species lives alone but depends on others, for instance on its successors and predecessors in the food chain. Bak and Sneppen [185] consider this by arranging the species in a one-dimensional line. If one species is mutated, the fitness values of its successor and predecessor in that line are also
^1 http://en.wikipedia.org/wiki/Extremal_optimization [accessed 2008-08-24]
^2 http://en.wikipedia.org/wiki/Self-organized_criticality [accessed 2008-08-23]
^3 http://en.wikipedia.org/wiki/Bak-Sneppen_model [accessed 2010-12-06]
set to random values. In nature, the development of one species can foster the development of others and, this way, even highly fit species may become able to (re-)adapt.
After a certain amount of iterations, the species in simulations based on this model reach a highly-correlated state of self-organized criticality where all of them have a fitness above a certain threshold. This state is very similar to the idea of punctuated equilibria from evolutionary biology, and groups of species enter a state of passivity lasting multiple cycles. Sooner or later, this state is interrupted because mutations occurring nearby undermine their fitness. The resulting fluctuations may propagate like avalanches through the whole ecosystem. Thus, such non-equilibrium systems exhibit a state of high adaptability without limiting the scale of change towards better states [344].
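The dynamics described above can be simulated in a few lines of Python; the ring topology, the population size, and the iteration count are illustrative choices:

```python
import random

def bak_sneppen(n_species=100, iterations=20000):
    """Simulate the Bak-Sneppen model on a one-dimensional ring of species."""
    fitness = [random.random() for _ in range(n_species)]
    for _ in range(iterations):
        # pick the species with the lowest fitness ...
        i = min(range(n_species), key=fitness.__getitem__)
        # ... and replace its fitness and that of both neighbors
        # with fresh uniform random values
        for j in (i - 1, i, (i + 1) % n_species):
            fitness[j] = random.random()
    return fitness

# after many iterations, almost all fitness values lie above a critical
# threshold (roughly 2/3 for this nearest-neighbor topology)
final_fitness = bak_sneppen()
```

Plotting a histogram of final_fitness makes the self-organized critical state visible: the distribution is nearly empty below the threshold and roughly uniform above it.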
41.2 Extremal Optimization and Generalized Extremal
Optimization
Boettcher and Percus [345] want to utilize this phenomenology to obtain near-optimal solutions for optimization problems. In Extremal Optimization, the search spaces G are always spaces of structured tuples g = (g[1], g[2], ..., g[n]). Extremal Optimization works on a single individual and requires some means to determine the contributions of its genes to the overall fitness.
Extremal Optimization was originally applied to a graph bi-partitioning problem [345], where the n points of a graph had to be divided into two groups, each of size n/2. The objective was to minimize the number of edges connecting the two groups. Search and problem space can be considered as identical, and a solution candidate x = gpm(g) = g consisted of n genes, each of which stands for one point of the graph and denotes the Boolean decision to which set it belongs. Analogously to the Bak-Sneppen model, each such gene g[i] had its own fitness contribution λ(g[i]): the ratio of its outgoing edges connected to nodes from the same set in relation to its total edge number. The higher this value, the better, but notice that f(x = g) ≠ Σ_{i=1..n} λ(g[i]), since f(g) corresponds to the number of edges crossing the cut.
In general, the Extremal Optimization algorithm proceeds as follows:
1. Create an initial individual p with a random genotype p.g and set the currently best known solution candidate x* to its phenotype: x* = p.x.
2. Sort all genes p.g[i] of p.g in a list in ascending order according to their fitness contribution λ(p.g[i]).
3. Then, the gene p.g[i] with the lowest fitness contribution is selected from this list and modified randomly, leading to a new individual p and a new solution candidate x = p.x = gpm(p.g).
4. If p.x is better than x*, set x* = p.x.
5. If the termination criterion has not yet been met, continue at Point 2.
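These steps can be sketched in Python on a toy problem (OneMax, where the fitness contribution of each bit can simply be taken as the bit value itself). The sketch already uses the rank-based power-law selection of the τ-EO variant discussed below; all names and parameter values are illustrative assumptions:

```python
import random

def tau_eo(n_bits=32, steps=400, tau=1.5):
    """tau-EO on OneMax, where gene i's fitness contribution is the bit itself."""
    g = [random.randint(0, 1) for _ in range(n_bits)]
    best = g[:]
    for _ in range(steps):
        # step 2: rank the genes by ascending fitness contribution
        order = sorted(range(n_bits), key=lambda i: g[i])
        # step 3 (tau-EO): draw rank j with probability proportional to (j+1)^-tau,
        # so low-fitness genes are mutated preferentially but not exclusively
        weights = [(j + 1) ** -tau for j in range(n_bits)]
        j = random.choices(range(n_bits), weights=weights)[0]
        g[order[j]] = random.randint(0, 1)    # mutate the selected gene randomly
        if sum(g) > sum(best):                # step 4: keep the best-so-far
            best = g[:]
    return best

best = tau_eo()
```

Setting tau to a very large value recovers the plain algorithm above (always mutate the weakest gene), which is more prone to stagnation.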
Instead of always picking the weakest part of g, Boettcher and Percus [346] selected the gene(s) to be modified randomly in order to prevent the method from getting stuck in local optima. In their work, the probability of drawing the gene at list index j is proportional to j^−τ. This variation was called τ-EO and showed superior performance compared to the simple Extremal Optimization. In the graph partitioning problem on which Boettcher and Percus [346] have worked, two genes from different sets needed to be drawn this way in each step, since always two nodes had to be swapped in order to keep the size of the sub-graphs constant. Values of τ in 1.3 . . . 1.6 have been reported to produce good results [346].
The major problem a user is confronted with in Extremal Optimization is how to determine the fitness contributions λ(p.g[i]) of the elements p.g[i] of the genotypes p.g of the solution candidates p.x. Boettcher and Percus [347] point out themselves that the drawback
to EO is that a general definition of fitness for individual variables may prove ambiguous or even impossible [728]. de Sousa and Ramos [725, 727, 728] therefore propose an extension to EO, called Generalized Extremal Optimization (GEO), for fixed-length binary genomes G = B^n. For each gene (bit) p.g[i] in the element p.g of the search space currently examined, the following procedure is performed:
1. Create a copy g′ of p.g.
2. Toggle bit i in g′.
3. Set λ(p.g[i]) to −f(gpm(g′)) for maximization and to f(gpm(g′)) in case of minimization.^4
By doing so, λ(p.g[i]) becomes a measure for how well adapted the gene is. If f is subject to maximization, high positive values of f(gpm(g′)) (corresponding to low λ(p.g[i])) indicate that gene i has a low fitness and should be mutated. For minimization, low values of f(gpm(g′)) indicate that mutating gene i would yield high improvements in the objective value.
^4 In the original work of de Sousa et al. [728], f(x*) is subtracted from this value. Since we rank the genes, this has basically no influence and is omitted here.
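Under these definitions, one GEO step for minimization over bit strings might be sketched as follows. The rank-based selection with probability proportional to j^−τ mirrors the τ-EO rule; all function and parameter names are illustrative assumptions:

```python
import random

def geo_step(f, g, tau=1.5):
    """One Generalized Extremal Optimization step (minimization)."""
    n = len(g)
    lam = []
    for i in range(n):
        flipped = g[:]
        flipped[i] = 1 - flipped[i]        # steps 1-2: copy g and toggle bit i
        lam.append(f(flipped))             # step 3: lambda(g[i]) for minimization
    # rank the bits by ascending lambda: a low value means that flipping
    # the bit yields a large improvement, so it should be mutated
    order = sorted(range(n), key=lambda i: lam[i])
    weights = [(j + 1) ** -tau for j in range(n)]
    j = random.choices(range(n), weights=weights)[0]
    mutated = g[:]
    mutated[order[j]] = 1 - mutated[order[j]]
    return mutated

def geo(f, n_bits=16, steps=300, tau=1.5):
    """Iterate geo_step and remember the best solution candidate found."""
    g = [random.randint(0, 1) for _ in range(n_bits)]
    best = g[:]
    for _ in range(steps):
        g = geo_step(f, g, tau)
        if f(g) < f(best):
            best = g[:]
    return best

# example: minimize the number of ones in the bit string
best = geo(lambda g: sum(g))
```

Each step costs n objective evaluations, which is the price GEO pays for not requiring an explicit, problem-specific definition of the per-gene fitness contributions.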
41.3 General Information on Extremal Optimization
41.3.1 Applications and Examples
Table 41.1: Applications and Examples of Extremal Optimization.
Area References
Combinatorial Problems [344–348, 1835, 2036, 2895]
Distributed Algorithms and Systems [708, 1835]
Engineering [547, 548, 726–728, 1835]
Function Optimization [725, 727, 1773]
Graph Theory [344–348, 1835]
Logistics [345, 727]
41.3.2 Books
1. Nature-Inspired Algorithms for Optimisation [562]
41.3.3 Conferences and Workshops
Table 41.2: Conferences and Workshops on Extremal Optimization.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
GICOLAG: Global Optimization Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
41.3.4 Journals
1. Journal of Heuristics published by Springer Netherlands
Chapter 42
GRASPs
42.1 General Information on GRASPs
42.1.1 Applications and Examples
Table 42.1: Applications and Examples of GRASPs.
Area References
Combinatorial Problems [2105]
42.1.2 Conferences and Workshops
Table 42.2: Conferences and Workshops on GRASPs.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
GICOLAG: Global Optimization Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
42.1.3 Journals
1. Journal of Heuristics published by Springer Netherlands
Chapter 43
Downhill Simplex (Nelder and Mead)
[1864]
43.1 Introduction
The Downhill Simplex^1 (or Nelder-Mead method or amoeba algorithm^2) published by Nelder and Mead [2017] in 1965 is a single-objective optimization approach for searching the space of n-dimensional real vectors (G ⊆ R^n) [1655, 2073]. Historically, it is closely related to the simplex extension by Spendley et al. [2572] to the Evolutionary Operation method mentioned in Section 28.1.3 on page 264 [1719]. Since it only uses the values of the objective functions without any derivative information (explicit or implicit), it falls into the general class of direct search methods [2724, 2984], as do most of the optimization approaches discussed in this book.
Downhill Simplex optimization uses n + 1 points in the R^n. These points form a polytope^3, a generalization of a polygon, in the n-dimensional space: a line segment in R^1, a triangle in R^2, a tetrahedron in R^3, and so on. Nondegenerated simplexes, i.e., those where the set of edges adjacent to any vertex forms a basis in the R^n, have one important feature: the result of replacing a vertex with its reflection through the opposite face is, again, a nondegenerated simplex (see ??). The goal of Downhill Simplex optimization is to replace the best vertex of the simplex with an even better one or to ascertain that it is a candidate for the global optimum [1719]. Therefore, its other points are constantly flipped around in an intelligent manner, as we will outline in ??.
Like Hill Climbing approaches, the Downhill Simplex may not converge to the global
minimum and can get stuck at local optima [1655, 1854, 2712]. Random restarts (as in Hill
Climbing With Random Restarts discussed in Section 26.5 on page 232) can be helpful here.
^1 http://en.wikipedia.org/wiki/Nelder-Mead_method [accessed 2008-06-14]
^2 In the book Numerical Recipes in C++ by Vettering et al. [2808], this optimization method is called amoeba algorithm.
^3 http://en.wikipedia.org/wiki/Polytope [accessed 2008-06-14]
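A compact, self-contained sketch of the procedure (reflection, expansion, contraction, and shrink, with the conventional coefficient values alpha = 1, gamma = 2, rho = 0.5, sigma = 0.5) might look as follows in Python; the function and parameter names are illustrative assumptions:

```python
def nelder_mead(f, x0, step=0.5, iters=200,
                alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    """Minimize f: R^n -> R with the Downhill Simplex (Nelder-Mead) method."""
    n = len(x0)
    # initial simplex: x0 plus one point perturbed along each axis
    simplex = [list(x0)]
    for i in range(n):
        p = list(x0)
        p[i] += step
        simplex.append(p)

    for _ in range(iters):
        simplex.sort(key=f)                      # best vertex first, worst last
        worst = simplex[-1]
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        # reflect the worst vertex through the opposite face
        refl = [centroid[i] + alpha * (centroid[i] - worst[i]) for i in range(n)]
        if f(simplex[0]) <= f(refl) < f(simplex[-2]):
            simplex[-1] = refl                   # accept the reflection
        elif f(refl) < f(simplex[0]):            # try to expand even further
            expd = [centroid[i] + gamma * (refl[i] - centroid[i]) for i in range(n)]
            simplex[-1] = expd if f(expd) < f(refl) else refl
        else:                                    # contract towards the centroid
            cont = [centroid[i] + rho * (worst[i] - centroid[i]) for i in range(n)]
            if f(cont) < f(worst):
                simplex[-1] = cont
            else:                                # shrink towards the best vertex
                simplex = [simplex[0]] + [
                    [simplex[0][i] + sigma * (p[i] - simplex[0][i]) for i in range(n)]
                    for p in simplex[1:]]
    return min(simplex, key=f)

# example: minimize a shifted sphere function starting from the origin
x_min = nelder_mead(lambda v: sum((vi - 1.0) ** 2 for vi in v), [0.0, 0.0])
```

The sketch uses only inside contraction for brevity; production implementations usually distinguish inside and outside contraction and add a convergence test on the simplex diameter.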
43.2 General Information on Downhill Simplex
43.2.1 Applications and Examples
Table 43.1: Applications and Examples of Downhill Simplex.
Area References
Chemistry [214, 1573]
Function Optimization [1017, 1655, 1854, 2017, 2073, 2712]
43.2.2 Conferences and Workshops
Table 43.2: Conferences and Workshops on Downhill Simplex.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
EvoNUM: European Event on Bio-Inspired Algorithms for Continuous Parameter Optimisation
See Table 28.5 on page 321.
GICOLAG: Global Optimization Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
43.2.3 Journals
1. Computational Optimization and Applications published by Kluwer Academic Publishers and
Springer Netherlands
2. Discrete Optimization published by Elsevier Science Publishers B.V.
3. Engineering Optimization published by Informa plc and Taylor and Francis LLC
4. European Journal of Operational Research (EJOR) published by Elsevier Science Publishers
B.V. and North-Holland Scientific Publishers Ltd.
5. Journal of Global Optimization published by Springer Netherlands
6. Journal of Mathematical Modelling and Algorithms published by Springer Netherlands
7. Journal of Optimization Theory and Applications published by Plenum Press and Springer
Netherlands
8. Journal of the Operational Research Society (JORS) published by Palgrave Macmillan Ltd.
9. Optimization Methods and Software published by Taylor and Francis LLC
10. Optimization and Engineering published by Kluwer Academic Publishers and Springer
Netherlands
11. SIAM Journal on Optimization (SIOPT) published by Society for Industrial and Applied
Mathematics (SIAM)
Chapter 44
Random Optimization
44.1 General Information on Random Optimization
44.1.1 Applications and Examples
44.1.2 Conferences and Workshops
Table 44.1: Conferences and Workshops on Random Optimization.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
MIC: Metaheuristics International Conference
See Table 25.2 on page 227.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
GICOLAG: Global Optimization Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
META: International Conference on Metaheuristics and Nature Inspired Computing
See Table 25.2 on page 228.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
44.1.3 Journals
1. Journal of Heuristics published by Springer Netherlands
Part IV
Non-Metaheuristic Optimization Algorithms
Chapter 45
Introduction
One of the most fundamental principles in our world is the search for optimal states. It begins in the microcosm in physics, where atom bonds^1 minimize the energy of electrons [2137]. When molecules combine to solid bodies during the process of freezing, they assume crystal structures which come close to energy-optimal configurations. Such a natural optimization process, of course, is not driven by any higher intention but results from the laws of physics.
The same goes for the biological principle of survival of the fittest [2571] which, together with the biological evolution [684], leads to better adaptation of the species to their environment. Here, a local optimum is a well-adapted species that dominates its niche. We, the Homo sapiens, have reached this level and so have ants, bears, cockroaches, lions, and many other creatures.
As long as humankind exists, we strive for perfection in many areas. We want to reach a maximum degree of happiness with the least amount of effort. In our economy, profit and sales must be maximized and costs should be as low as possible. Therefore, optimization is one of the oldest sciences which even extends into daily life [2023]. Here, we can also observe the phenomenon of trade-offs between different criteria which are also common in many other problem domains: One person may choose to work hard with the goal of securing stability and pursuing wealth for her future life, accepting very little spare time in the current stage. Another person can, deliberately, choose a lax life full of joy in the current phase but rather insecure in the long term. Similarly, most optimization problems in real-world applications involve multiple, usually conflicting, objectives.
Furthermore, many optimization problems involve certain constraints. The guy with the relaxed lifestyle, for instance, regardless how lazy he might become, cannot avoid dragging himself to the supermarket from time to time in order to acquire food. The eager beaver, on the other hand, will not be able to keep up with 18 hours of work for seven days per week in the long run without damaging her health considerably and thus, rendering the goal of an enjoyable future unattainable. Likewise, the parameters of the solutions of an optimization problem may be constrained as well.
Every task which has the goal of approaching certain configurations considered optimal in the context of pre-defined criteria can be viewed as an optimization problem. If something is important, general, and abstract enough, there is always a mathematical discipline dealing with it. Global Optimization[2] [609, 2126, 2181, 2714] is the branch of applied mathematics and numerical analysis that focuses on, well, optimization. Closely related and overlapping areas are mathematical economics[3] [559] and operations research[4] [580, 1232].
In this book, we will first discuss the things which constitute an optimization problem and which are the basic components of each optimization process (in the remainder of this chapter). In Part II, we take a look at all the possible aspects which can make an optimization task difficult. Understanding them will allow you to anticipate possible problems and to prevent their occurrence from the start. In the remainder of the first part of this book, we discuss a variety of different optimization methods. Part V then gives many examples of applications for which optimization algorithms can be used. Part VI is a collection of definitions and knowledge which can be useful to understand the terms and formulas used in the rest of the book. It is provided in an effort to make the book stand-alone and to allow students to understand it even if they have little mathematical or theoretical background.

[1] http://en.wikipedia.org/wiki/Chemical_bond [accessed 2007-07-12]
[2] http://en.wikipedia.org/wiki/Global_optimization [accessed 2007-07-03]
[3] http://en.wikipedia.org/wiki/Mathematical_economics [accessed 2009-06-16]
[4] http://en.wikipedia.org/wiki/Operations_research [accessed 2009-06-19]
Chapter 46
State Space Search
46.1 Introduction
State space search (SSS) strategies[1] are not directly counted among the optimization algorithms. In Global Optimization, objective functions f ∈ f are optimized. Normally, all elements of the problem space X are valid and only differ in their utility as solutions. Optimization aims to find the elements with the best utility. State space search, instead, is based on a criterion isGoal : X → B which determines whether an element of the problem space is a valid solution or not. The purpose of the search process is to find elements x for which isGoal(x) is true. [635, 797, 2356]
46.1.1 The Baseline: A Binary Goal Criterion
Definition D46.1 (isGoal). The function isGoal : X → B is the target predicate of state space search algorithms which states whether a given candidate solution x ∈ X is a valid solution (by returning true) or not (by returning false).

To build such an isGoal criterion for state space search from the objective functions f of a Global Optimization problem, we could, for example, specify a threshold y_i for each objective function f_i ∈ f. If we assume minimization, isGoal(x) of a candidate solution x can be defined as true if and only if the values of all objective functions for x are below or equal to the corresponding thresholds, as stated in Equation 46.1.

isGoal(x) = true if f_i(x) ≤ y_i ∀ i ∈ 1..|f|, and false otherwise    (46.1)
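Equation 46.1 can be turned directly into executable form. The following Python sketch builds an isGoal predicate from a list of objective functions and thresholds; the concrete objectives and threshold values below are made-up for illustration, not taken from the book:

```python
def make_is_goal(objectives, thresholds):
    """Build an isGoal predicate from objective functions f_i (minimized)
    and per-objective thresholds y_i, following Equation 46.1."""
    def is_goal(x):
        # x is a goal state iff every objective value stays at or below
        # its corresponding threshold
        return all(f(x) <= y for f, y in zip(objectives, thresholds))
    return is_goal

# Hypothetical two-objective example: minimize both |x - 3| and x mod 5
is_goal = make_is_goal([lambda x: abs(x - 3), lambda x: x % 5],
                       [1, 3])
print(is_goal(3))  # True: |3-3| = 0 <= 1 and 3 % 5 = 3 <= 3
print(is_goal(9))  # False: |9-3| = 6 > 1
```

Any state space search algorithm of this chapter can then be run against such a predicate without knowing the underlying objective values.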
46.1.2 State Space and Neighborhood Search
The search space G is referred to as state space in state space search, and the genotypes g ∈ G are called states. Furthermore, G is often identical to the problem space (G = X). Most state space search methods can only be applied if the search space is enumerable. One feature of the search algorithms introduced here is that they usually are deterministic. This means that they will yield the same results in each run (when applied to the same problem, that is). Jones [1467] provides a further comparison between SSS and Global Optimization.
[1] http://en.wikipedia.org/wiki/State_space_search [accessed 2007-08-06]
46.1.2.1 The Neighborhood Expansion Operation
Instead of the basic search operations introduced in Section 4.2, state space search algorithms often use an operator which enumerates the neighborhood of a state. Here, we call this operation expand and specify it in Definition D46.2. Hertz et al. [1228] refer to optimization methods that base their search on such an operation, i. e., to state space search approaches, as neighborhood search.

Definition D46.2 (expand). The operator expand : G → P(G) (where P(G) denotes the power set of G) receives one element g from the search space G as input and computes the set G of elements which can be reached from it. Based on Equation 4.3 on page 84, we can define expand in the context of optimization according to Equation 46.2.

expand(g) = Op¹(g)    (46.2)
Definition D46.3 (Neighborhood Search). Neighborhood search methods are iterative procedures in which a neighborhood expand(g) is defined for each feasible solution x = gpm(g), and the next solution x′ = gpm(g′) is searched among the elements g′ ∈ expand(g). [1228]
46.1.2.2 Features of the expand Operator
expand is the exploration operation of state space search algorithms. Different from the mutation operator of Evolutionary Algorithms, it usually is deterministic and returns a set of genotypes instead of a single one. Applying it to the same element g will thus usually yield the same set G.

The realization of expand has a severe impact on the performance of search algorithms. An efficient implementation, for example, should not include states in the returned set that have already been visited before by the search. If the same elements are returned, the same candidate solutions and all of their children will possibly be evaluated multiple times, which would be useless and time consuming.

Another problem may occur if there exist two elements g1, g2 ∈ G with g1 ∈ expand(g2) and g2 ∈ expand(g1). Especially the uninformed state space search algorithms which we discuss in the next section may get trapped in an endless loop. Thus, visiting a genotype twice should always be avoided in state space search. Often, it is possible to design the search operations in a way that prevents this from the start. Otherwise, tabu lists should be used, as done in the previously discussed Tabu Search algorithm (see Chapter 40 on page 489).
46.1.2.3 Expansion to Individual Lists
Since we want to keep our algorithm definitions as general as possible, we will keep the notation of individuals p that encompass a genotype p.g ∈ G and a phenotype p.x ∈ X. Therefore, we need an expansion operator that returns a set of individuals P rather than a set G of elements of the search space. We therefore define the operation expandP in Algorithm 46.1.
46.1.3 The Search Space as Graph
In Figure 46.1 we sketch the search space G as a graph spanned by the search operations searchOp (combined to the set Op used by the expand operator). The genotypes gi are the vertexes in such a graph. A directed edge from a genotype gi to another genotype gj denotes that gj can be reached from gi with one search step, i. e., gj ∈ expand(gi). In order to apply (uninformed) search operations efficiently, this graph should be acyclic, meaning that if there exists a path from gi to gj along the directed edges, there should not be a path back from gj to gi, as we discussed before.
Algorithm 46.1: P ← expandP(g)
Input: g ∈ G: the element of the search space G to be expanded
Data: i: a counter variable
Data: p: an individual record
Output: P: the list of individual records resulting from the expansion
1 begin
2   G ← expand(g)
3   P ← ()
4   for i ← 0 up to len(G) − 1 do
5     p ← new individual record
6     p.g ← G[i]
7     p.x ← gpm(p.g)
8     P ← addItem(P, p)
9   return P
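A possible Python rendering of expand and expandP as used by Algorithm 46.1; the integer search space, the two doubling search operators, and the identity genotype-phenotype mapping are hypothetical stand-ins for a real encoding:

```python
def expand(g):
    """Enumerate the neighborhood of state g: here, two hypothetical
    search operators 2g and 2g+1 span an infinite binary tree."""
    return [2 * g, 2 * g + 1]

def gpm(g):
    """Genotype-phenotype mapping; the identity in this toy example."""
    return g

def expand_p(g):
    """Algorithm 46.1: wrap each expanded genotype into an individual
    record holding both p.g and its phenotype p.x = gpm(p.g)."""
    return [{"g": h, "x": gpm(h)} for h in expand(g)]

print(expand_p(3))  # [{'g': 6, 'x': 6}, {'g': 7, 'x': 7}]
```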
[Figure 46.1 not reproduced here: it sketches a search space of genotypes g1, ..., g18 as a directed acyclic graph in which an edge gi → gj means gj = searchOp(gi) or, more generally, gj ∈ Op¹(gi) ⟺ gj ∈ expand(gi); nodes ga with isGoal(gpm(ga)) = false are ordinary states, and nodes gb with isGoal(gpm(gb)) = true are goal states.]

Figure 46.1: An example for a possible search space sketched as acyclic graph.
46.1.4 Key Efficiency Features

Many state space search methods fall into the category of local search algorithms as given in Definition D26.1 on page 229. For all state space search strategies, we can define four criteria that tell whether they are suitable for a given problem or not [797].

1. Completeness. Does the search algorithm guarantee to find a solution (given that one exists)? (Do not mix this up with the completeness of search operations specified in Definition D4.8 on page 85.)
2. Time Consumption. How much time will the strategy need to find a solution?
3. Memory Consumption. How much memory will the algorithm need to store intermediate steps? Together with time consumption, this property is closely related to complexity theory.
4. Optimality. Will the algorithm find an optimal solution if there exist multiple correct solutions?
46.2 Uninformed Search
The optimization algorithms that we have considered up to now always use some sort of utility measure to decide in which direction to search for optima: The objective functions allow us to make fine distinctions between different individuals. The only exceptions, the only uninformed optimization methods we discussed so far, are the trivial random walk and sampling as well as exhaustive search algorithms (see Chapter 8 on page 113). Under some circumstances, however, only the criterion isGoal is given, as a form of Boolean objective function.

Definition D46.4 (Uninformed Search). An uninformed search algorithm is an optimization algorithm which does not incorporate any utility information about the candidate solutions into its search decisions. It only applies a Boolean criterion isGoal which distinguishes phenotypes that are valid solutions from those which are not.

If the objective functions or utility criteria are of Boolean nature, the methods previously discussed will not be able to estimate and descend some form of gradient. In the best case, they will degenerate to random walks which still have some chance of finding an optimum. In the worst case, they will converge to some area in the search space and soon cease to explore new solutions.
Here, uninformed search strategies[2] are a viable alternative, since they do not require or take into consideration any knowledge about the special nature of the problem (apart from the knowledge represented by the expand operation, of course). Such algorithms are very general and can be applied to a wide variety of problems with small search spaces. Their common drawback is that they are not suitable if a deep search in a large search space is required. Without the incorporation of information in the form of heuristic functions, for example, the search may take very long and quickly becomes infeasible [635, 797, 2356].

[2] http://en.wikipedia.org/wiki/Uninformed_search [accessed 2007-08-07]
46.2.1 Breadth-First Search
The uninformed state space search algorithms usually start at one initial location in the search space. Breadth-first search[3] (BFS) begins with expanding the neighborhood of the corresponding root candidate solution. All of the states derived from this expansion are visited, and then all their children, and so on. In general, all states in depth d are expanded first before considering any state in depth d + 1.

BFS is complete, since it will always find a solution if one exists. If so, it will also find the solution that can be reached from the root state with the smallest number of expansion steps. Hence, breadth-first search is also optimal if the number of expansion steps needed from the origin to a state is a monotonous function of the objective value of a solution, i. e., if depth(p1.g) > depth(p2.g) ⇒ f(p1.x) > f(p2.x), where depth(p.g) is the number of expand steps from the root state r ∈ G to p.g.
Algorithm 46.2: X ← BFS(r, isGoal)
Input: r ∈ G: the root state to start the expansion at
Input: isGoal : X → B: an operator that checks whether a state is a goal state or not
Data: p: the individual currently processed
Data: P: the queue of states to explore
Output: X: the set with the (single) solution found, or ∅
1 begin
2   P ← ( p = (p.g = r, p.x = gpm(r)) )
3   while len(P) > 0 do
4     p ← P[0]
5     P ← deleteItem(P, 0)
6     if isGoal(p.x) then return {p.x}
7     P ← appendList(P, expandP(p.g))
8   return ∅
Algorithm 46.2 illustrates how breadth-first search works. The algorithm is initialized with a root state r ∈ G which marks the starting point of the search. BFS uses a state list P which initially contains only this root individual. In a loop, the first element p is removed from this list. If the goal predicate isGoal(p.x) evaluates to true, p.x is a goal state and we can return a set X = {p.x} containing it as the solution. Otherwise, we expand p.g and append the newly found individuals to the end of the queue P. If no solution can be found, this process will continue until the whole accessible search space has been enumerated and P becomes empty. Then, an empty set ∅ is returned in place of X, because there is no element x in the (accessible part of the) problem space X for which isGoal(x) becomes true.
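A minimal executable sketch of Algorithm 46.2 in Python, using a deque as the queue P; the integer state space with expand(g) = {2g, 2g + 1}, the merging of individuals into bare states (gpm is the identity), and the max_states cap are all illustrative assumptions rather than part of the original algorithm:

```python
from collections import deque

def expand(g):
    # hypothetical neighborhood: children in an infinite binary tree
    return [2 * g, 2 * g + 1]

def bfs(root, is_goal, max_states=10_000):
    """Breadth-first search: explore all states in depth d before any
    state in depth d+1. The max_states cap guards against the infinite
    toy state space."""
    queue = deque([root])
    seen = 0
    while queue and seen < max_states:
        g = queue.popleft()          # take the FIRST element (queue)
        seen += 1
        if is_goal(g):
            return {g}
        queue.extend(expand(g))      # append children to the end
    return set()                     # no goal state found

print(bfs(1, lambda x: x == 11))  # {11}
```

Because the frontier is a FIFO queue, the state 11 is found at its shallowest depth (three expand-steps from the root 1).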
In order to examine the space and time complexity of BFS, we assume a hypothetical state space GH where the expansion of each state g ∈ GH always returns a set of len(expand(g)) = b new states. In depth 0, we only have one state, the root state r. In depth 1, there are b states, and in depth 2 we can expand each of them to, again, b new states, which makes b², and so on. The total number of states up to depth d is given in Equation 46.3:

1 + b + b² + ... + b^d = (b^(d+1) − 1) / (b − 1) ∈ O(b^d)    (46.3)

Thus, the space and time complexity are both in O(b^d) and rise exponentially with d. In the worst case, all nodes in depth d need to be stored; in the best case, only those of depth d − 1.
[3] http://en.wikipedia.org/wiki/Breadth-first_search [accessed 2007-08-06]
46.2.2 Depth-First Search
Depth-first search[4] (DFS) is very similar to BFS. From the algorithmic point of view, the only difference is that it uses a stack instead of a queue as internal storage for states (compare line 4 in Algorithm 46.3 with line 4 in Algorithm 46.2). Here, always the last element of the set of expanded states is considered next. Thus, instead of searching level by level in the breadth as BFS does, DFS searches in depth, which, believe it or not, is the reason for its name. Depth-first search advances in depth until the current state cannot be expanded any further, i. e., expand(p.g) = ∅. Then, the search steps up one level again. If the whole search space has been browsed and no solution is found, ∅ is returned.
Algorithm 46.3: X ← DFS(r, isGoal)
Input: r ∈ G: the root individual to start the expansion at
Input: isGoal : X → B: an operator that checks whether a state is a goal state or not
Data: p: the individual currently processed
Data: P: the stack of states to explore
Output: X: the set with the (single) solution found, or ∅
1 begin
2   P ← ( p = (p.g = r, p.x = gpm(r)) )
3   while len(P) > 0 do
4     p ← P[len(P) − 1]
5     P ← deleteItem(P, len(P) − 1)
6     if isGoal(p.x) then return {p.x}
7     P ← appendList(P, expandP(p.g))
8   return ∅
The memory consumption of DFS grows linearly with the search depth, because in depth d, at most d · b states are held in memory (if we again assume the hypothetical search space GH from the previous section). If we assume a maximum depth m, the time complexity is b^m in the worst case, where the solution is the last child state in the path which is explored last. If m is very large or infinite, a depth-first search may take very long to discover a solution or will not find it at all, since it may get stuck in a wrong branch of the state space which can possibly be followed (expanded) for many steps. Also, it may discover a solution at a deep expansion level while another one may exist at a lower level in another, not yet explored branch of the search graph. Hence, depth-first search is neither complete nor optimal.
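The stack-versus-queue difference can be seen in this Python sketch of DFS. To keep the search from descending forever, the hypothetical state space is a binary tree cut off below 16, i. e., it is finite; this cutoff is an assumption of the example, not part of the algorithm:

```python
def expand(g):
    # hypothetical FINITE state space: binary tree cut off below 16
    return [2 * g, 2 * g + 1] if g < 16 else []

def dfs(root, is_goal):
    """Depth-first search: identical to BFS except that the open list
    is used as a stack, so the LAST expanded state is examined next."""
    stack = [root]
    while stack:
        g = stack.pop()              # take the LAST element (stack)
        if is_goal(g):
            return {g}
        stack.extend(expand(g))
    return set()                     # whole space browsed, no goal

print(dfs(1, lambda x: x == 11))  # {11}
```

Swapping `stack.pop()` for `stack.pop(0)` would turn this back into (an inefficient) BFS, mirroring the one-line difference between Algorithms 46.2 and 46.3.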
46.2.3 Depth-Limited Search
The depth-limited search[5] (DLS) [2356] is a depth-first search that only proceeds up to a given maximum depth d. In other words, it does not examine candidate solutions that are more than d expand-operations away from the root state r, as outlined in Algorithm 46.4 in a recursive form. The time complexity now becomes b^d and the memory complexity is in O(b · d). Of course, the depth-limited search can be neither complete nor optimal. However, if the maximum depth of the possible solutions in the search tree is known, DLS can be sufficient.
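A recursive Python sketch of depth-limited search in the spirit of Algorithm 46.4; the integer state space with doubling operators is again a hypothetical example:

```python
def expand(g):
    # hypothetical neighborhood: children in an infinite binary tree
    return [2 * g, 2 * g + 1]

def dls(g, is_goal, d):
    """Depth-limited search (Algorithm 46.4): a recursive depth-first
    search that gives up once it is d expand-steps away from the root."""
    if is_goal(g):
        return {g}
    if d > 0:
        for child in expand(g):
            found = dls(child, is_goal, d - 1)
            if found:
                return found
    return set()

print(dls(1, lambda x: x == 11, 3))  # {11}: 11 is 3 steps from root 1
print(dls(1, lambda x: x == 11, 2))  # set(): depth limit too small
```

Note how the same query succeeds or fails depending only on the depth budget d, which is exactly why DLS is incomplete when the solution depth is unknown.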
46.2.4 Iteratively Deepening Depth-First Search
The iteratively deepening depth-first search[6] (IDDFS, [2356]), defined in Algorithm 46.5,

[4] http://en.wikipedia.org/wiki/Depth-first_search [accessed 2007-08-06]
[5] http://en.wikipedia.org/wiki/Depth-limited_search [accessed 2007-08-07]
[6] http://en.wikipedia.org/wiki/IDDFS [accessed 2007-08-08]
Algorithm 46.4: X ← DLS(r, isGoal, d)
Input: r ∈ G: the root individual to start the expansion at
Input: isGoal : X → B: an operator that checks whether a state is a goal state or not
Input: d: the (remaining) allowed depth steps
Data: x ∈ X: a phenotype
Data: g ∈ G: a genotype
Output: X: the solutions found, or ∅
1 begin
2   x ← gpm(r)
3   if isGoal(x) then return {x}
4   if d > 0 then
5     foreach g ∈ expand(r) do
6       X ← DLS(g, isGoal, d − 1)
7       if len(X) > 0 then return X
8   return ∅
iteratively runs a depth-limited search with stepwise increasing maximum depths d. In each iteration, it visits the states according to the depth-first search. Since the maximum depth is always incremented by one, one new level (in terms of distance in expand-operations from the root) is explored in each iteration. This effectively leads to some form of breadth-first search.
Algorithm 46.5: X ← IDDFS(r, isGoal)
Input: r ∈ G: the root individual to start the expansion at
Input: isGoal : X → B: an operator that checks whether a state is a goal state or not
Data: d: the current depth limit
Output: X: the set with the (single) solution found, or ∅
1 begin
2   d ← 0
3   repeat
4     X ← DLS(r, isGoal, d)
5     d ← d + 1
6   until len(X) > 0
7   return X
IDDFS thus unites the advantages of BFS and DFS: It is complete and optimal, but it only has a linearly rising memory consumption in O(d · b). The time consumption, of course, is still in O(b^d). IDDFS is the best uninformed search strategy and can be applied to large search spaces with unknown depth of the solution.

Algorithm 46.5 is intended for (countably) infinitely large search spaces. In real systems, there is a maximum depth d̂ after which the whole space would have been explored, and the algorithm should then return ∅ if no solution was found.
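The combination of Algorithms 46.4 and 46.5 can be sketched in Python as follows; the toy state space and the d_max cap (playing the role of the maximum depth d̂ mentioned above) are illustrative assumptions:

```python
def expand(g):
    # hypothetical neighborhood: children in an infinite binary tree
    return [2 * g, 2 * g + 1]

def dls(g, is_goal, d):
    # depth-limited DFS used as the inner loop of IDDFS (Algorithm 46.4)
    if is_goal(g):
        return {g}
    if d > 0:
        for child in expand(g):
            found = dls(child, is_goal, d - 1)
            if found:
                return found
    return set()

def iddfs(root, is_goal, d_max=20):
    """Iteratively deepening depth-first search (Algorithm 46.5): rerun
    DLS with d = 0, 1, 2, ... so that, like BFS, a shallowest goal is
    found first, while memory stays linear in the depth."""
    for d in range(d_max + 1):
        found = dls(root, is_goal, d)
        if found:
            return found
    return set()  # whole accessible space up to d_max explored

print(iddfs(1, lambda x: x == 11))  # {11}
```

The shallow levels are re-expanded in every iteration, but since level d contains roughly b times as many states as all previous levels together, this overhead does not change the O(b^d) time bound.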
46.2.5 General Information on Uninformed Search
46.2.5.1 Applications and Examples
Table 46.1: Applications and Examples of Uninformed Search.
Area                                References
Combinatorial Problems              [1999, 2065, 3013]
Distributed Algorithms and Systems  [49, 50, 225, 334, 335, 1146, 1295, 1569, 1570, 1816, 1999, 2065, 2265, 2909, 2998, 3011, 3013, 3067, 3073, 3074]
Economics and Finances              [49, 50]
Graph Theory                        [49, 225, 3013]
Software                            [1569, 1570, 2265]
46.2.5.2 Conferences and Workshops
Table 46.2: Conferences and Workshops on Uninformed Search.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
GICOLAG: Global Optimization Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
46.3 Informed Search
In an informed search[7], a heuristic function helps to decide which states are to be expanded next. If the heuristic is good, informed search algorithms may dramatically outperform uninformed strategies [1887, 2140, 2276]. Based on Definition D1.14 on page 34, we now provide the more specific Definition D46.5.
Definition D46.5 (Heuristic Function). Heuristic functions h : G × X → R+ are problem domain dependent functions which map the individuals (comprised of genotypes p.g ∈ G and phenotypes p.x ∈ X) to the positive real numbers R+. All heuristics are zero for those elements which are (optimal) solutions, i. e.,

∀ p ∈ G × X : isGoal(p.x) ⇒ h(p) = 0, for all heuristics h : G × X → R+    (46.4)

[7] http://en.wikipedia.org/wiki/Search_algorithms#Informed_search [accessed 2007-08-08]
There are two possible meanings of the values returned by a heuristic function h:

1. In the above sense, the value of a heuristic function h(p.x) for an individual p is the higher, the more expand-steps p.g is probably (or approximately) away from a valid solution. Hence, the heuristic function represents the distance of an individual to a solution of the optimization problem.

2. The heuristic function can also represent an objective function in some way. Suppose that we know the minimal value y of an objective function f, or at least a value from which on all solutions are feasible. If this is the case, we could set h(p.x) = max{0, f(p.x) − y}, assuming that f is subject to minimization. Now the value of the heuristic function will be the smaller, the closer an individual is to a possible correct solution, and Equation 46.4 still holds. In other words, a heuristic function may also represent the distance to a solution in objective space.

Of course, both meanings are often closely related, since states that are close to each other in problem space are probably also close to each other in objective space (the opposite does not necessarily hold).
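The second construction above, turning a minimized objective f with a known optimal (or feasibility) value y into a heuristic, can be sketched as follows; the concrete objective and its optimum are made-up for illustration:

```python
def heuristic_from_objective(f, y_opt):
    """Turn a minimized objective function f with known optimal or
    feasibility value y_opt into a heuristic h(x) = max(0, f(x) - y_opt):
    zero exactly on solutions, positive elsewhere, preserving Eq. 46.4."""
    return lambda x: max(0.0, f(x) - y_opt)

# Hypothetical objective: squared distance to 4, with optimal value 0
h = heuristic_from_objective(lambda x: (x - 4) ** 2, 0.0)
print(h(4))  # 0.0 -> a solution
print(h(6))  # 4.0 -> four "units" away in objective space
```

Such a wrapper lets any of the informed search algorithms below be driven directly by an existing objective function.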
Definition D46.6 (Best-First Search). A best-first search[8] [2140] is a search algorithm that incorporates a heuristic function h in a way which ensures that promising individuals p with low estimation values h(p.x) are evaluated before other states q that receive higher values h(q.x) > h(p.x).
46.3.1 Greedy Search
A greedy search[9] is a best-first search where the currently known candidate solution with the lowest heuristic value is investigated next. The greedy algorithm internally sorts the list of currently known states in descending order according to a heuristic function h. Thus, the elements with the best (lowest) heuristic value will be at the end of the list, which then can be used as a stack. The greedy search as specified in Algorithm 46.6 now works like a depth-first search on this stack and thus also shares most of the properties of DFS. It is neither complete nor optimal, and its worst-case time consumption is b^m. On the other hand, like breadth-first search, its worst-case memory consumption is also b^m.

Greedy search algorithms always find the best possible solutions very efficiently if the problem forms a matroid[10] [2110]. In other situations, however, greedy search can produce very bad results and therefore should be used with extreme care [201].
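The following Python sketch implements the same best-first behavior as Algorithm 46.6; instead of re-sorting the whole list in each iteration, it keeps the frontier in a priority queue, which is an implementation choice of this example, not of the book's pseudocode. The integer state space, the goal value 11, and its distance heuristic are hypothetical:

```python
import heapq
from itertools import count

def expand(g):
    # hypothetical neighborhood: children in an infinite binary tree
    return [2 * g, 2 * g + 1]

def greedy_search(root, is_goal, h, max_expansions=10_000):
    """Greedy best-first search: always pop the frontier state with the
    lowest heuristic value. The expansion cap guards the toy space."""
    tie = count()  # tie-breaker so the heap never compares raw states
    frontier = [(h(root), next(tie), root)]
    for _ in range(max_expansions):
        if not frontier:
            break
        _, _, g = heapq.heappop(frontier)   # best (lowest) h value first
        if is_goal(g):
            return {g}
        for child in expand(g):
            heapq.heappush(frontier, (h(child), next(tie), child))
    return set()

# Hypothetical heuristic: distance to the goal value 11 in problem space
print(greedy_search(1, lambda x: x == 11, h=lambda x: abs(x - 11)))  # {11}
```

A heap achieves the "lowest value next" ordering in O(log n) per operation, whereas the sortDsc call in Algorithm 46.6 costs O(n log n) per iteration.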
46.3.2 A★ Search

A★ search[11] (pronounced "A-star search") is a best-first search that uses an estimation function ĥ : G × X → R+ which is the sum of a heuristic function h(p) that estimates the costs needed to get from p to a valid solution and a function g : G × X → R+ that computes the costs of p.

ĥ(p) = g(p) + h(p)    (46.5)

[8] http://en.wikipedia.org/wiki/Best-first_search [accessed 2007-09-25]
[9] http://en.wikipedia.org/wiki/Greedy_search [accessed 2007-08-08]
[10] http://en.wikipedia.org/wiki/Matroid [accessed 2010-09-19]
[11] http://en.wikipedia.org/wiki/A%2A_search [accessed 2007-08-09]
Algorithm 46.6: X ← greedySearch(r, isGoal, h)
Input: r ∈ G: the root individual to start the expansion at
Input: isGoal : X → B: an operator that checks whether a state is a goal state or not
Input: h : G × X → R+: the heuristic function
Data: p: the individual currently processed
Data: P: the sorted list of states to explore
Output: X: the solution states found, or ∅
1 begin
2   P ← ( p = (p.g = r, p.x = gpm(r)) )
3   while len(P) > 0 do
4     P ← sortDsc(P, h)
5     p ← P[len(P) − 1]
6     P ← deleteItem(P, len(P) − 1)
7     if isGoal(p.x) then return {p.x}
8     P ← appendList(P, expandP(p.g))
9   return ∅
A★ search proceeds exactly like the greedy search outlined in Algorithm 46.6 if ĥ is used instead of the plain h. For a given start state r ∈ G, a goal predicate isGoal : X → B, and an estimation function ĥ according to Equation 46.5, Equation 46.6 holds.

A★Search(r, isGoal, ĥ) = greedySearch(r, isGoal, ĥ)    (46.6)

An A★ search will definitely find a solution if one exists, i. e., it is complete.

Definition D46.7 (Admissible Heuristic Function). A heuristic function h : G × X → R+ is admissible if it never overestimates the minimal costs for reaching a goal state.

Definition D46.8 (Monotonic Heuristic Function). A heuristic function h : G × X → R+ is monotonic[12] if it never overestimates the costs for getting from one state to its successor.

h(p) ≤ g(q) − g(p) + h(q)  ∀ q.g ∈ expand(p.g)    (46.7)

An A★ search is optimal if the heuristic function used is admissible. Optimal in this case means that there exists no search algorithm that can find the same solution as the A★ search with fewer expansion steps when using the same heuristic. If expand is implemented in a way which prevents a state from being visited more than once, h also needs to be monotone in order for the search to be optimal. [1204]
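The mechanics of ordering the frontier by ĥ = g + h can be sketched as follows. As before, the integer state space, the unit step cost, and the distance heuristic are illustrative assumptions; the toy heuristic is not claimed to be admissible for this cost model, so no optimality guarantee is implied:

```python
import heapq
from itertools import count

def expand(g):
    # hypothetical neighborhood: children in an infinite binary tree
    return [2 * g, 2 * g + 1]

def a_star(root, is_goal, h, step_cost=1, max_expansions=10_000):
    """A* search: pop the frontier state with the lowest estimator
    h_hat = cost-so-far + h, where the cost counts one (hypothetical)
    unit per expand step."""
    tie = count()
    frontier = [(h(root), next(tie), 0, root)]  # (h_hat, tie, cost, state)
    for _ in range(max_expansions):
        if not frontier:
            break
        _, _, cost, g = heapq.heappop(frontier)
        if is_goal(g):
            return {g}
        for child in expand(g):
            child_cost = cost + step_cost
            heapq.heappush(frontier,
                           (child_cost + h(child), next(tie),
                            child_cost, child))
    return set()

print(a_star(1, lambda x: x == 11, h=lambda x: abs(x - 11)))  # {11}
```

Compared with the greedy sketch, the only change is that accumulated path cost is carried along and added to the heuristic, exactly the ĥ(p) = g(p) + h(p) decomposition of Equation 46.5.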
46.3.3 General Information on Informed Search
46.3.3.1 Applications and Examples
Table 46.3: Applications and Examples of Informed Search.
[12] see Definition D51.21 on page 647
Area                                References
Combinatorial Problems              [36, 54, 116, 201, 233, 266, 570, 640, 644, 687, 733, 762, 901, 1119, 1153, 1154, 1201, 1486, 1639, 1735, 1803, 1990, 2067, 2117, 2283, 2462, 2483, 2653, 2765, 2769, 2795, 2865, 3027]
Data Mining                         [116]
Databases                           [36, 116, 233, 570, 640, 762, 1119, 1153, 1154, 1201, 1639, 1735, 1814, 1990, 2117, 2462, 2483, 2765, 2795, 2865, 3012]
Distributed Algorithms and Systems  [48-50, 54, 233, 334, 335, 687, 761, 1473, 1574, 1797, 2064, 2067, 2117, 2264, 2903, 2909, 3027, 3067, 3074]
Economics and Finances              [49, 50, 1060, 2067, 2264]
Engineering                         [640, 644, 901, 1486, 1574]
Graph Theory                        [49, 201, 640, 644, 687, 1486, 1574, 2596, 2653, 3027]
Logistics                           [201, 733, 901, 1803, 2769]
Security                            [1486]
Software                            [1486]
Theorem Proving and Automatic Verification  [640]
Wireless Communication              [640, 644, 1486, 2596]
46.3.3.2 Conferences and Workshops
Table 46.4: Conferences and Workshops on Informed Search.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
GICOLAG: Global Optimization Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
Chapter 47
Branch And Bound
47.1 General Information on Branch And Bound
47.1.1 Applications and Examples
Table 47.1: Applications and Examples of Branch And Bound.
Area                                References
Combinatorial Problems              [148, 1095, 1546, 2423]
Databases                           [1095]
Distributed Algorithms and Systems  [202]
Engineering                         [18]
Graph Theory                        [202]
Logistics                           [148, 1546]
47.1.2 Books
1. Introduction to Global Optimization [2126]
2. Global Optimization: Deterministic Approaches [1271]
47.1.3 Conferences and Workshops
Table 47.2: Conferences and Workshops on Branch And Bound.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
GICOLAG: Global Optimization Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
Chapter 48
Cutting-Plane Method
48.1 General Information on Cutting-Plane Method
48.1.1 Applications and Examples
Table 48.1: Applications and Examples of Cutting-Plane Method.
Area References
Combinatorial Problems [252]
Logistics [252]
48.1.2 Books
1. Nonlinear Programming: Analysis and Methods [152]
48.1.3 Conferences and Workshops
Table 48.2: Conferences and Workshops on Cutting-Plane Method.
HIS: International Conference on Hybrid Intelligent Systems
See Table 10.2 on page 128.
NICSO: International Workshop Nature Inspired Cooperative Strategies for Optimization
See Table 10.2 on page 131.
SMC: International Conference on Systems, Man, and Cybernetics
See Table 10.2 on page 131.
EngOpt: International Conference on Engineering Optimization
See Table 10.2 on page 132.
LION: Learning and Intelligent OptimizatioN
See Table 10.2 on page 133.
LSCS: International Workshop on Local Search Techniques in Constraint Satisfaction
See Table 10.2 on page 133.
WOPPLOT: Workshop on Parallel Processing: Logic, Organization and Technology
See Table 10.2 on page 133.
ALIO/EURO: Workshop on Applied Combinatorial Optimization
See Table 10.2 on page 133.
GICOLAG: Global Optimization Integrating Convexity, Optimization, Logic Programming, and
Computational Algebraic Geometry
See Table 10.2 on page 134.
Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna
See Table 10.2 on page 134.
SEA: International Symposium on Experimental Algorithms
See Table 10.2 on page 135.
Part V
Applications
Chapter 49
Real-World Problems
49.1 Symbolic Regression
In this section, we want to discuss symbolic regression in a bit more detail, after already shedding some light on it in Example E31.1 on page 387. In statistics, regression analysis examines the unknown relation φ : Rᵐ → R of a dependent variable y ∈ R to specified independent variables x ∈ Rᵐ. Since φ is not known, the goal is to find a reasonably good approximation φ★.
Definition D49.1 (Regression). Regression¹ [828, 980, 1548] is a statistical technique used
to predict the value of a variable which depends on one or more independent variables.

The result of the regression process is a function ψ⋆: Rᵐ → R that relates the m independent
variables (subsumed in the vector x) to one dependent variable y ≈ ψ⋆(x). The function ψ⋆
is the best estimator chosen from a set of candidate functions ψ: Rᵐ → R. Regression is
strongly related to the estimation theory outlined in Section 53.6 on page 691. In most cases,
like linear² or nonlinear³ regression, the mathematical model of the candidate functions is
not completely free. Instead, we pick a specific one from an array of parametric functions
by finding the best values for the parameters.
Definition D49.2 (Symbolic Regression). Symbolic regression [149, 846, 1514, 1602,
2251, 2377, 2997] is a general approach to regression where regression functions can be
constructed by combining elements of a set of mathematical expressions, variables, and
constants.

Symbolic regression is thus not limited to determining the optimal values for the set of
parameters of a certain array of functions. It is much more general than polynomial fitting
methods.
49.1.1 Genetic Programming: Genome for Symbolic Regression
One of the most widespread methods to perform symbolic regression is to apply Genetic
Programming. Here, the candidate functions are constructed and refined by an evolutionary
process. In the following, we will discuss the genotypes (which are also the phenotypes) of the
evolution as well as the objective functions that drive it. As illustrated in Figure 49.1, the
candidate solutions, i.e., the candidate functions, are represented by a tree of mathematical
expressions where the leaf nodes are either constants or the fields of the independent variable
vector x.

¹ http://en.wikipedia.org/wiki/Regression_analysis [accessed 2007-07-03]
² http://en.wikipedia.org/wiki/Linear_regression [accessed 2007-07-03]
³ http://en.wikipedia.org/wiki/Nonlinear_regression [accessed 2007-07-03]
Figure 49.1: An example genotype for symbolic regression with x = x₁ ∈ R: a mathematical
expression and its Genetic Programming tree representation.
The set of functions that can possibly be evolved is limited by the set of expressions
available to the evolutionary process:

{+, −, *, /, exp, ln, sin, cos, max, min, . . .}   (49.1)
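To make the tree representation and the expression set concrete, the following sketch shows one possible way to encode and evaluate such expression trees in Python. The tuple encoding, the `OPS` table, and all names are our own illustrative assumptions, not the book's implementation; only a few operators from the expression set above are included.

```python
import math

# Illustrative encoding (an assumption, not the book's): a node is
# ("x", index) for an input variable, ("const", value) for a constant,
# or (operator, child, ...) with the operator drawn from the expression set.
OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b,
    "cos": math.cos,
    "sin": math.sin,
}

def evaluate(node, x):
    """Recursively evaluate an expression tree for the input vector x."""
    kind = node[0]
    if kind == "x":
        return x[node[1]]
    if kind == "const":
        return node[1]
    return OPS[kind](*(evaluate(child, x) for child in node[1:]))

# The tree for (x1 + 1)^2, written as (x1 + 1) * (x1 + 1):
tree = ("*", ("+", ("x", 0), ("const", 1.0)),
             ("+", ("x", 0), ("const", 1.0)))
```

For example, `evaluate(tree, [3.0])` computes (3 + 1)² = 16. A Genetic Programming system would create, mutate, and recombine such trees rather than evaluate a fixed one.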
Another aspect that influences the possible results of the symbolic regression is the concept
of constants. In general, constants are not really needed since they can be constructed
indirectly via expressions. The constant 2.5, for example, equals the expression
x/(x + x) + ln(x · x)/ln(x).
The evolution of such artificial constants, however, takes rather long. Koza [1602] has
therefore introduced the concept of ephemeral random constants.

Definition D49.3 (Ephemeral Random Constants). In symbolic regression, an ephemeral
random constant is a leaf node which represents a constant real value in a formula. This
value is chosen randomly at its creation and then remains unchanged, i.e., constant.
If a new individual is created and a leaf in its expression tree is chosen to be an ephemeral
random constant, a random number is drawn uniformly distributed from a reasonable in-
terval. For each new constant leaf, a new constant is created independently. The values
of the constant leaves remain unchanged and can be moved around and copied by crossover
operations.
According to Koza's idea, ephemeral random constants remain unchanged during the
evolutionary process. In our work, it has proven to be practicable to extend his approach by
providing a mutation operation that changes the value c of a constant leaf of an individual.
A good policy for doing so is to replace the old constant value c_old by a new one c_new
which is a normally distributed random number with the expected value c_old (see ?? on
page ??):

c_new = randomNorm(c_old, σ²)   (49.2)
σ² = e^(−randomUni[0,10)) · |c_old|   (49.3)
Notice that the other reproduction operators for tree genomes have been discussed in detail
in Section 31.3 on page 386.
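This constant-mutation policy can be sketched in a few lines of Python. The function name is our own, and the minus sign in the exponent follows the reconstruction of Equation 49.3 above; note that a constant of exactly zero keeps a zero variance and thus stays zero under this operator.

```python
import math
import random

def mutate_constant(c_old: float) -> float:
    """Perturb an ephemeral random constant (sketch of Eq. 49.2 and 49.3).

    The variance sigma^2 = exp(-u) * |c_old|, with u drawn uniformly from
    [0, 10), lets the perturbation scale vary over several orders of
    magnitude relative to the constant's own size.
    """
    sigma2 = math.exp(-random.uniform(0.0, 10.0)) * abs(c_old)
    return random.gauss(c_old, math.sqrt(sigma2))
```

Because the standard deviation never exceeds sqrt(|c_old|), small constants receive proportionally small perturbations, which keeps the mutation step well-behaved.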
49.1.2 Sample Data, Quality, and Estimation Theory
In the following elaborations, we will reuse some terms that we have applied in our discus-
sion on likelihood in Section 53.6.2 on page 693 in order to find out which measures make
good objective functions for symbolic regression problems. Again, we are given a finite
set of sample data A containing n = |A| pairs (xᵢ, yᵢ) where the vectors xᵢ ∈ Rᵐ are
known inputs to an unknown function φ: Rᵐ → R and the scalars yᵢ are its observed
outputs (possibly contaminated with noise and measurement errors subsumed in the term
εᵢ, see Equation 53.242 on page 693). Furthermore, we can access a (possibly infinitely large)
set of functions ψ: Rᵐ → R which are possible estimators of φ. For the inputs xᵢ,
the results of these functions deviate by the estimation error (see Definition D53.56 on
page 692) from the yᵢ:

yᵢ = φ(xᵢ) + εᵢ  ∀i ∈ [0..n−1]   (49.4)
yᵢ = ψ(xᵢ) + εᵢ(ψ)  ∀i ∈ [0..n−1]   (49.5)
In order to guide the evolution of estimators ψ (in other words, for driving the regression
process), we need an objective function that furthers candidate solutions which represent the
sample data A and thus resemble the function φ closely. Let us call this driving force
quality function.

Definition D49.4 (Quality Function). The quality function f(ψ, A) defines the quality
of the approximation of a function φ by a function ψ. The smaller the value of the quality
function is, the more precisely ψ approximates φ in the context of the sample data A.
Under the conditions that the measurement errors εᵢ are uncorrelated and are all normally
distributed with an expected value of zero and the same variance (see Equation 53.243, Equa-
tion 53.244, and Equation 53.245 on page 693), we have shown in Section 53.6.2 that the best
estimators minimize the mean square error MSE (see Equation 53.258 on page 695, Def-
inition D53.62 on page 696, and Definition D53.59 on page 692). Thus, if the source of the
values yᵢ complies at least in a simplified, theoretical manner with these conditions or even
is a real measurement process, the square error is the quality function to choose:

f_{σ≠0}(ψ, A) = Σ_{i=0}^{len(A)−1} (yᵢ − ψ(xᵢ))²   (49.6)
While this is normally true, there is one exception to the rule: the case where the values
yᵢ are no measurements but direct results from φ and σ = 0. A common example for this
situation is if we apply symbolic regression in order to discover functional identities [1602,
2033, 2359] (see also Example E49.1 on the next page). Different from normal regression
analysis or estimation, we then know φ exactly and want to find another function ψ⋆ that
is another, equivalent form of φ. Therefore, we will use φ to create the sample data set A
beforehand, carefully selecting characteristic points xᵢ. Thus, the noise and the measurement
errors εᵢ all become zero. If we would still regard them as normally distributed, their variance
σ² would be zero, too.

The proof for the statement that minimizing the square errors maximizes the likelihood
is based on the transition from Equation 53.253 to Equation 53.254 on page 695 where we
cut divisions by σ². This is not possible if σ becomes zero. Hence, we may or may not select
metrics different from the square error as quality function. Its feature of punishing larger
deviations stronger than small ones, however, is attractive even if the measurement errors
become zero. Another metric which can be used as quality function in these circumstances
is the sum of the absolute values of the estimation errors:

f_{σ=0}(ψ, A) = Σ_{i=0}^{len(A)−1} |yᵢ − ψ(xᵢ)|   (49.7)
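Both quality functions are straightforward to compute over the sample data. The following Python sketch (the function names are ours, not the book's) implements Equation 49.6 and Equation 49.7 for estimators taking a single scalar input, i.e., m = 1:

```python
def f_sq(psi, A):
    """Sum of squared estimation errors (Equation 49.6)."""
    return sum((y - psi(x)) ** 2 for x, y in A)

def f_abs(psi, A):
    """Sum of absolute estimation errors (Equation 49.7)."""
    return sum(abs(y - psi(x)) for x, y in A)

# With phi(x) = x^2 + 2x + 1 and the exact candidate psi(x) = (x + 1)^2,
# both quality values vanish (up to rounding) on any sample set from phi.
phi = lambda x: x ** 2 + 2 * x + 1
A = [(x, phi(x)) for x in (-5.0, -4.9, 0.1, 2.9, 3.0, 3.1, 4.9, 5.0, 5.1)]
psi = lambda x: (x + 1) ** 2
```

A regression driver would call one of these functions on every candidate and select the candidates with the smallest values.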
Example E49.1 (An Example and the Phenomenon of Overfitting).
If multi-objective optimization can be applied, the quality function should be complemented
by an objective function that puts pressure in the direction of smaller estimations ψ. In
symbolic regression by Genetic Programming, the problem of code bloat (discussed in ??
on page ??) is eminent. Here, functions do not only grow large because they include useless
expressions (like (x · x + x)/x − x − 1). A large function may consist of functional expressions
only, but instead of really representing or approximating φ, it is degenerated to just some
sort of misfit decision table. This phenomenon is called overfitting and has initially been
discussed in Section 19.1 on page 191.
Let us, for example, assume that we want to find a function similar to Equation 49.8. Of
course, we would hope to find something like Equation 49.9.

y = φ(x) = x² + 2x + 1   (49.8)
y = ψ₁(x) = (x + 1)² = (x + 1)(x + 1)   (49.9)
i   xᵢ     yᵢ = φ(xᵢ)   ψ₂(xᵢ)
0   −5     16           15.59
1   −4.9   15.21        15.40
2   0.1    1.21         1.11
3   2.9    15.21        15.61
4   3      16           16
5   3.1    16.81        16.48
6   4.9    34.81        34.54
7   5      36           36.02
8   5.1    37.21        37.56

Table 49.1: Sample data A = {(xᵢ, yᵢ) : i ∈ [0..8]} for Equation 49.8.
For testing purposes, we randomly chose the nine sample data points listed in Table 49.1.
As a result of Genetic Programming based symbolic regression, we may obtain something like
Equation 49.10, outlined in Figure 49.2, which represents the data points quite precisely but
has nothing to do with the original form of our equation.

ψ₂(x) = (((((0.934911896352446 * 0.258746335682841) - (x * ((x / ((x -
0.763517999368926) + (0.0452368900127981 - 0.947318140392111))) / ((x - (x + x)) +
(0.331546588012695 * (x + x)))))) + 0.763517999368926) + ((x - (((0.934911896352446 *
((0.934911896352446 / x) / (x + 0.947390132934724))) + (((x * 0.235903629190878) * (x -
0.331546588012695)) + ((x * x) + x))) / x)) * ((((x - (x * (0.258746335682841 /
0.455160839551232))) / (0.0452368900127981 - 0.763517999368926)) * x) *
(0.763517999368926 * 0.947318140392111)))) - (((((x - (x * (0.258746335682841 /
0.455160839551232))) / (0.0452368900127981 - 0.763517999368926)) * 0.763517999368926)
* x) + (x - (x * (0.258746335682841 * 0.934911896352446)))))   (49.10)
We obtained both functions ψ₁ (in its second form) and ψ₂ using the symbolic regression
applet of Hannes Planatscher which can be found at http://www.potschi.de/sr/ [ac-
cessed 2007-07-03]⁴. It needs to be said that the first (wanted) result occurred way more often than
absurd variations like ψ₂. But indeed, there are some factors which further the evolution of
such eyesores:

1. If only few sample data points are provided, the set of prospective functions that have
a low estimation error becomes larger. Therefore, chances are that symbolic regression
provides results that only match those points but differ significantly from φ in all other
points.

⁴ Another good applet for symbolic regression can be found at http://alphard.ethz.ch/gerber/
approx/default.html [accessed 2007-07-03]
Figure 49.2: φ(x), the evolved ψ₁(x) ≡ φ(x), and ψ₂(x).
2. If the sample data points are not chosen wisely, their expressiveness is low. We, for
instance, chose 4.9, 5, and 5.1 as well as 2.9, 3, and 3.1, which form two groups with
members very close to each other. Therefore, a curve that approximately hits these
two clouds is automatically rated with a high quality value.

3. A small population size decreases the diversity and furthers incest between similar
candidate solutions. Due to a lower rate of exploration, often only a local minimum of
the quality value will be reached.

4. Allowing functions of large depth and putting low pressure against bloat (see ?? on
page ??) leads to uncontrolled function growth. The real laws that we want to ap-
proximate with symbolic regression usually do not consist of more than 40 expressions.
This is valid for most physical, mathematical, or financial equations. Therefore, the
evolution of large functions is counterproductive in those cases.

Although we made some of these mistakes intentionally, there are many situations where it
is hard to determine good parameter sets and restrictions for the evolution, and they occur
accidentally.
49.1.3 Limits of Symbolic Regression
Often, we cannot obtain an optimal approximation of φ, especially if φ cannot be repre-
sented by the basic expressions available to the regression process. One of these situations
has already been discussed before: the case where φ has no closed arithmetical expression.
Another possibility is that the regression method tries to generate a polynomial that ap-
proximates φ, although φ contains different expressions such as sin or eˣ or polynomials of
an order higher than available. Yet another problem is that the values yᵢ are often not
results computed by φ directly but could, for example, be measurements taken from some
physical entity, and we want to use regression to determine the interrelations between this
entity and some known parameters. Then, the measurements will be biased by noise and
systematic measurement errors. In this situation, f(ψ⋆, A) will be greater than zero even
after a successful regression.
49.2 Data Mining
Definition D49.5 (Data Mining). Data mining⁵ can be defined as the nontrivial extrac-
tion of implicit, previously unknown, and potentially useful information from data [991] and
the science of extracting useful information from large data sets or databases [1176].
Today, gigantic amounts of data are collected in the web, in medical databases, by enterprise
resource planning (ERP) and customer relationship management (CRM) systems in corpo-
rations, in web shops, by administrative and governmental bodies, and in science projects.
These data sets are way too large to be incorporated directly into a decision making process
or to be understood as-is by a human being. Instead, automated approaches have to be
applied to extract the relevant information, to find underlying rules and patterns, or to
detect time-dependent changes. Data mining subsumes the methods and techniques capable
of performing this task. It is very closely related to estimation theory in stochastics
(discussed in Section 53.6 on page 691): the simplest summaries of a data set are still the
arithmetic mean or the median of its elements. Data mining is also strongly related to
artificial intelligence [797, 2356], which includes (machine) learning algorithms that can
generalize the given information. Some of the most widespread and most common data
mining techniques are:
1. artificial neural networks (ANN) [310, 314],
2. support vector machines (SVM) [450, 2773, 2788, 2849],
3. logistic regression [37],
4. decision trees [281, 406, 2234, 2961],
5. Genetic Programming for synthesizing classifiers [481, 993, 1985, 2855],
6. Learning Classifier Systems as introduced in Chapter 35 on page 457, and
7. naïve Bayes classifiers [805, 2312].
49.2.1 Classification

Broadly speaking, the task of classification is to assign a class k from a set of possible
classes K to vectors (data samples) a = (a[0], a[1], . . . , a[n−1]) consisting of n attribute
values a[i] ∈ Aᵢ. In supervised approaches, the starting point is a training set Aₜ including
training samples aₜ ∈ Aₜ for which the corresponding classes class(aₜ) ∈ K are already
known.

A data mining algorithm is supposed to learn a relation (called classifier) C: A₀ × A₁ × . . .
× A_{n−1} → K which can map such attribute vectors a to a corresponding class k ∈ K. The
training set Aₜ usually contains only a small subset of the possible attribute vectors and may
even include contradicting samples (∃ aₜ,₁ = aₜ,₂ : aₜ,₁, aₜ,₂ ∈ Aₜ ∧ class(aₜ,₁) ≠ class(aₜ,₂)).
The better the classifier C is, the more often it can correctly classify attribute vectors.
An overfitted classifier has no generalization capability: it learns only the exact relations
provided in the training samples but is unable to classify samples not included in Aₜ with
reasonable precision. In order to test whether a classifier C is overfitted, not the complete
available data A is used for learning. Instead, A is divided into the training set Aₜ and the
test set A_c. If C's precision on A_c is much worse than on Aₜ, it is overfitted and should not
be used in a practical application.
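The overfitting test described above can be sketched in a few lines of Python. This is a simplified illustration; the function names, the split fraction, and the tolerance threshold are our own assumptions:

```python
import random

def split_data(A, train_fraction=0.7, seed=0):
    """Partition the available data A into a training set and a test set."""
    samples = list(A)
    random.Random(seed).shuffle(samples)
    cut = int(len(samples) * train_fraction)
    return samples[:cut], samples[cut:]

def accuracy(classifier, samples):
    """Fraction of (attribute vector, class) pairs classified correctly."""
    return sum(1 for a, k in samples if classifier(a) == k) / len(samples)

def seems_overfitted(classifier, A_t, A_c, tolerance=0.1):
    """Flag the classifier if its precision on the test set A_c is much
    worse than on the training set A_t it was learned from."""
    return accuracy(classifier, A_t) - accuracy(classifier, A_c) > tolerance
```

In practice one would train the classifier only on the returned training set and then compare the two accuracy values, exactly as the text prescribes.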
⁵ http://en.wikipedia.org/wiki/Data_mining [accessed 2007-07-03]

49.3 Freight Transportation Planning
49.3.1 Introduction
In this section, we outline a real-world application of Evolutionary Computation to freight
transportation planning which has been developed together with a set of partners in the
research project in.west and published in a set of scientific papers [2188, 2189, 2912–2914],
most prominently in the chapter Solving Real-World Vehicle Routing Problems with Evo-
lutionary Algorithms [2912] of Natural Intelligence for Scheduling, Planning and Packing
Problems [564] (doi:10.1007/978-3-642-04039-9_2).
Figure 49.3: The freight traffic on German roads from 2005 to 2050 in 10⁹ tons·kilometers.
According to the German Federal Ministry of Economics and Technology [447], the freight
traffic volume on German roads will have doubled by 2050, as illustrated in Figure 49.3.
Reasons for this development are the effects of globalization as well as the central location
of the country in Europe. With the steadily increasing freight traffic resulting from trade
inside the European Union and global import and export [2248], transportation and logistics
become more important [2768, 2825]. Thus, a need for intelligent solutions for the strategic
planning of logistics becomes apparent [447]. Such a planning process can be considered as
a multi-objective optimization problem which has the goals [2607, 2914] of increasing the
profit of the logistics companies by

1. ensuring on-time collection and delivery of all parcels,
2. utilizing all available means of transportation (rail, trucks) efficiently, i.e., decreasing
the total transportation distances by using the capacity of the vehicles to the fullest,
while
3. reducing the CO₂ production in order to become more environment-friendly.

Fortunately, the last point is a side-effect of the others. By reducing the total distance
covered and by transporting a larger fraction of the freight via (inexpensive) trains, not only
the drivers' work hours and the costs are decreased, but also the CO₂ production declines.
Efficient freight planning is not a static procedure. Although it involves building an
overall plan on how to deliver orders, it should also be able to dynamically react to unforeseen
problems such as traffic jams or accidents. This reaction should lead to a local adaptation of
the plan and re-routing of all involved freight vehicles, whereas parts of the plan concerning
geographically distant and uninvolved objects are supposed to stay unchanged.
In the literature, the creation of freight plans is known as the Vehicle Routing Prob-
lem [497, 1091, 2162]. In this section, we present an approach to Vehicle Routing for
real-world scenarios: the freight transportation planning component of the in.west system.
in.west, or Intelligente Wechselbrücksteuerung in full, is a joint research project of DHL,
Deutsche Post AG, Micromata, BIBA, and OHB Teledata funded by the German Federal
Ministry of Economics and Technology.⁶
In the following section, we discuss different flavors of the Vehicle Routing Problem and
the general requirements of logistics departments which specify the framework for our freight
planning component. These specific conditions rendered the related approaches outlined in
Section 49.3.3 infeasible for our situation. In Section 49.3.4, we present an Evolutionary
Algorithm for multi-objective, real-world freight planning problems [2189]. The problem-
specific representation of the candidate solutions and the intelligent search operators working
on them are introduced, as well as the objective functions derived from the requirements.
Our approach has been tested in many different scenarios and the experimental results are
summarized in Section 49.3.5. The freight transportation planning component described
in this chapter is only one part of the holistic in.west approach to logistics which will be
outlined in Section 49.3.6. Finally, we conclude with a discussion of the results and future
work in Section 49.3.7.

⁶ See http://www.inwest.org/ [accessed 2008-10-29].
49.3.2 Vehicle Routing in Theory and Practice
49.3.2.1 Vehicle Routing Problems
The Vehicle Routing Problem (VRP) is one of the most famous combinatorial optimization
problems. In simple terms, the goal is to determine a set of routes that can satisfy sev-
eral geographically scattered customers' demands while minimizing the overall costs [2162].
Usually, a fleet of vehicles located in one depot is supposed to fulfill these requests. In this
context, the original version of the VRP was proposed by Dantzig and Ramser [680] in 1959,
who addressed the calculation of a set of optimal routes for a fleet of gasoline delivery trucks.

As described next, a large number of variants of the VRP exist, adding different con-
straints to the original definition. Within the scope of in.west, we first identified all the
restrictions of real-world Vehicle Routing Problems that occur in companies like DHL and
then analyzed available approaches from the literature.
The Capacitated Vehicle Routing Problem (CVRP), for example, is similar to the classical
Vehicle Routing Problem, with the additional constraint that every vehicle must have the
same capacity. A fixed fleet of delivery vehicles must service known customers' demands
of a single commodity from a common depot at minimum transit costs [806, 2212, 2262].
The Distance Vehicle Routing Problem (DVRP) is a VRP extended with an additional
constraint on the maximum total distance traveled by each vehicle. In addition, Multiple
Depot Vehicle Routing Problems (MDVRP) have several depots from which customers can
be supplied. Therefore, the MDVRP requires the assignment of customers to depots. A
fleet of vehicles is based at each depot. Each vehicle then starts at its corresponding depot,
services the customers assigned to that depot, and returns.

Typically, the planning period for a classical VRP is a single day. Different from this
approach are Periodic Vehicle Routing Problems (PVRP), where the planning period is ex-
tended to a specific number of days and customers have to be served several times with
commodities. In practice, Vehicle Routing Problems with Backhauls (VRPB), where cus-
tomers can return some commodities [2212], are very common. Here, all deliveries for
each route must be completed before any pickups are made. It then also becomes necessary
to take into account that the goods which customers return to the deliverer must fit into
the vehicle.

The Vehicle Routing Problem with Pick-up and Delivering (VRPPD) is a capacitated
Vehicle Routing Problem where each customer can be supplied with commodities as well
as return commodities to the deliverer. Finally, the Vehicle Routing Problem with Time
Windows (VRPTW) is similar to the classical Vehicle Routing Problem, with the additional
restriction that time windows (intervals) are defined in which the customers have to be
supplied [2212]. Figure 49.4 shows the hierarchy of Vehicle Routing Problem variants and
also the problems which are relevant in the in.west case.
Figure 49.4: Different flavors of the VRP (VRP, CVRP, DVRP, DCVRP, MDVRP, PVRP,
VRPB, VRPPD, VRPSPD, and VRPTW, connected by the constraints that distinguish
them, such as capacity constraints, distance or time constraints, time windows, backhauls,
loading and unloading, multiple depots, and periodicity) and their relation to the real-world
problem in the DHL/in.west system.
49.3.2.2 Model of a Real-World Situation
As it becomes obvious from Figure 49.4, the situation in logistics companies is relatively
complicated and involves many different aspects of Vehicle Routing. The basic unit of freight
considered in this work is a swap body b, a standardized container (C 745, EN 284 [2679])
with a dimension of roughly 7.5 m × 2.6 m × 2.7 m and special appliances for easy exchange
between transportation vehicles or railway carriages. Logistics companies like DHL usually
own up to one thousand such containers. We refer to the union of all swap bodies as the set
B.
We furthermore define the union of all possible means of transportation as the set F. All
trucks tr ∈ F can carry at most a certain maximum number v(tr) of swap bodies at once.
Commonly, and also in the case of DHL, this limit is v(tr) = 2. The maximum load of trains
z ∈ F, on the other hand, is often more variable and usually ranges somewhere between
30 and 60 (v(z) ∈ [30..60]). Trains have fixed routes, departure, and arrival times, whereas
freight trucks can move freely on the map. In many companies, trucks must perform cyclic
tours, i.e., return to their point of departure by the end of the day, in order to allow the
drivers to return home.
The clients and the depots of the logistics companies together can form more than one
thousand locations from which freight may be collected or to which it may be delivered.
We will refer to the set of all these locations as L. Each transportation order has a fixed
time window [t̆_s, t̂_s] in which it must be collected from its source l_s ∈ L. From there, it has
to be carried to its destination location l_d ∈ L where it must arrive within a time window
[t̆_d, t̂_d]. An order furthermore has a volume v which we assume to be an integer multiple
of the capacity of a swap body. Hence, a transportation order o can fully be described by
the tuple o = (l_s, l_d, [t̆_s, t̂_s], [t̆_d, t̂_d], v). In our approach, orders which require more than one
(v > 1) swap body will be split up into multiple orders requiring one swap body (v = 1)
each.
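The order tuple and the splitting of multi-swap-body orders can be sketched as follows. The field names are our own shorthand for the tuple components, and plain numbers stand in for the time windows:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Order:
    """A transportation order o = (l_s, l_d, pickup window, delivery window, v)."""
    l_s: str        # source location
    l_d: str        # destination location
    pickup: tuple   # (earliest, latest) collection time at l_s
    delivery: tuple # (earliest, latest) arrival time at l_d
    v: int          # volume as a multiple of one swap-body capacity

def split_order(o: Order) -> list:
    """Split an order with v > 1 into v unit-volume orders (v = 1 each)."""
    return [replace(o, v=1) for _ in range(o.v)]
```

After this normalization step, the planner only ever has to route unit-volume orders, which simplifies assigning orders to individual swap bodies.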
Logistics companies usually have to service up to a few thousand such orders per day.
The express unit of the project partner DHL, for instance, delivered between 100 and 3000
per day in 2007, depending on the day of the week as well as national holidays etc.
The result of the planning process is a set x of tours. Each single tour is described by
a tuple τ = (l_s, l_d, f, t̆, t̂, b, o). l_s and l_d are the start and destination locations and
t̆ and t̂ are the departure and arrival time of the vehicle f ∈ F. On this tour, f carries the set
b = {b₁, b₂, . . .} of swap bodies which, in turn, contain the orders o = {o₁, o₂, . . .}. It is
assumed that, for each truck, there is at least one corresponding truck driver and that the
same holds for all trains.

Tours are the smallest unit of freight transportation. Usually, multiple tours are com-
bined for a delivery: First, a truck tr may need to drive from the depot in Dortmund to
Bochum to pick up an unused swap body sb (τ₁ = (Dortmund, Bochum, tr, 9am, 10am, ∅, ∅)).
In a subsequent tour τ₂ = (Bochum, Essen, tr, 10.05am, 11am, {sb}, ∅), it car-
ries the empty swap body sb to a customer in Essen. There, the order
o is loaded into sb and then transported to its destination o.l_d = Hannover
(τ₃ = (Essen, Hannover, tr, 11.30am, 4pm, {sb}, {o})).
Obviously, the set x must be physically sound. It must, for instance, not contain
any two overlapping tours τ₁, τ₂ (i.e., (τ₁.t̆ < τ₂.t̂) ∧ (τ₂.t̆ < τ₁.t̂)) involving the same vehicle
(τ₁.f = τ₂.f), swap bodies (τ₁.b ∩ τ₂.b ≠ ∅), or orders (τ₁.o ∩ τ₂.o ≠ ∅). Also, it must be en-
sured that all objects involved in a tour τ reside at τ.l_s at time τ.t̆. Furthermore, the capacity
limits of all involved means of transportation must be respected, i.e., 0 ≤ |τ.b| ≤ v(τ.f). If
some of the freight is carried by trains, the fixed halting locations of the trains as well as
their assigned departure and arrival times must be considered. The same goes for laws re-
stricting the maximum amount of time a truck driver is allowed to drive without breaks and
constraints imposed by the company's policies such as the aforementioned cyclic character
of truck tours. Only plans for which all these conditions hold can be considered as correct.

From the perspective of the planning system's user, runtime constraints are of the same
importance: Ideally, the optimization process should not exceed one day. Even the best
results become useless if their computation takes longer than the time span from receiving
the orders to the day where they actually have to be delivered.
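The pairwise soundness conditions on tours can be expressed as a simple predicate. The following is an illustrative sketch using dictionaries with field names of our own choosing; real tour records carry more fields, and a complete validity check would also cover locations, train schedules, and driving-time laws:

```python
def tours_conflict(t1: dict, t2: dict) -> bool:
    """True if two tours overlap in time while sharing the same vehicle,
    a swap body, or an order -- which a physically sound plan forbids."""
    overlap = t1["dep"] < t2["arr"] and t2["dep"] < t1["arr"]
    shared = (t1["vehicle"] == t2["vehicle"]
              or t1["bodies"] & t2["bodies"]
              or t1["orders"] & t2["orders"])
    return overlap and bool(shared)

def within_capacity(tour: dict, capacity: int) -> bool:
    """Check the capacity condition 0 <= |tour.b| <= v(tour.f)."""
    return 0 <= len(tour["bodies"]) <= capacity
```

A plan is rejected as incorrect as soon as any pair of its tours conflicts or any tour exceeds its vehicle's capacity.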
Experience has shown that hiring external carriers for a small fraction of the freight
can often significantly reduce the number of required tours to be carried out by the company's
own vehicles and the corresponding total distance to be covered, if the organization's existing
capacities are already utilized to their limits. Therefore, a good transportation planning
system should also be able to make suggestions on opportunities for such on-demand
outsourcing. An example of this issue is illustrated in Fig. 49.7.b.

The framework introduced in this section holds for practical scenarios in logistics compa-
nies like DHL and Deutsche Post. It poses a hard challenge for research, since it involves
multiple intertwined optimization problems and combines several aspects even surpassing
the complexity of the most difficult Vehicle Routing Problems known from the literature.
49.3.3 Related Work
The approaches discussed in the literature on freight transportation planning can roughly be
divided into two basic families: exact and stochastic or metaheuristic methods (see Part III
and Part IV). The exact approaches are usually only able to solve small instances of Vehicle
Routing Problems, i.e., those with very limited numbers of orders, customers, or locations,
and therefore cannot be applied in most real-world situations. Using them in scenarios
with many constraints further complicates the problem [2162].

Heuristic methods are reliable and efficient approaches to address Vehicle Routing Prob-
lems of larger scale. Despite the growing problem dimension, they are still able to provide
high-quality approximate solutions in a reasonable time. This makes them more attractive
than exact methods for practical applications. In over 40 years of research, a large number
of heuristics have been proposed for VRPs.
Especially the metaheuristic optimization methods have received more and more at-
tention. Well-known members of this family of algorithms which have been applied to
Vehicle Routing and freight transportation planning are Tabu Search [83, 178, 403, 1065],
Simulated Annealing [667, 2770], Ant Systems [445, 802], and particularly Evolutionary
Algorithms [64, 402, 1446, 3084].
Ombuki-Berman and Hanshar [2075], for example, proposed a Genetic Algorithm (GA)
for Multiple Depot Vehicle Routing Problems. They adopted an indirect and
adaptive inter-depot mutation exchange strategy, coupled with capacity and route-length
restrictions.
Machado et al. [1803] used a basic Vehicle Routing Problem to compare a standard
evolutionary approach with a coevolutionary method. They showed that the inclusion of a
heuristic method into evolutionary techniques significantly improves the results. Instead of
using additional heuristics, knowledge of the problem domain is incorporated into the search
operations in our work.
A cellular and thus decentralized GA for solving the Capacitated Vehicle Routing Prob-
lem was presented by Alba Torres and Dorronsoro Díaz [64, 65]. This method achieves high
performance in terms of the quality of the solutions found and the number of function evalu-
ations needed. Decentralization is a good basis for distributing EAs, a method for speeding
up the evolution which we will consider in our future work.
These methods perform a single-objective optimization enriched with problem-specific
constraints. The size of the problems tackled is roughly around a few hundred customers
and below 1000 orders. This is the case in most of the test sets available. Examples of such
benchmarks are the datasets by Augerat et al. [148], Van Breedam [2770], Golden et al.
[1090], Christofides et al. [577], and Taillard [2648], which are publicly available at [825,
2120, 2261]. Using these (partly artificial) benchmarks in our work was not possible since
the framework conditions in in.west are very different. Therefore, we could not perform a
direct comparison of our system with the other approaches mentioned.
To our knowledge, the problem most similar to the practical situation specified in Section 49.3.2.2 is the Multiple Depot Vehicle Routing Problem with Pickup, Delivery and Intermediary Depots (MDVRPPDID) defined by Sigurjónsson [2491]. This problem, however, does not consider orders and freight containers as different objects. Instead, each container has a source and a target destination and corresponds to one order. Also, all vehicles have the same capacity of one container, which is not the case in our system, where trucks can usually transport two containers and trains have much higher capacities. The Tabu Search approach developed by Sigurjónsson [2491] is similar to our method in that it incorporates domain knowledge in the solution structure and the search operations. However, it also allows infeasible intermediate solutions, which we rule out in Section 49.3.4. It was tested on datasets with up to 16 depots, 40 vehicles, and 100 containers, which is more than an order of magnitude smaller than the problem dimensions the in.west system has to deal with.
Confessore et al. [619] define a Genetic Algorithm for the Capacitated Vehicle Routing Problem with Time Windows (CVRPTW, see Figure 49.4) for real-world scenarios with a heterogeneous vehicle fleet with different capacities, multi-dimensional capacity constraints, and order/vehicle, item/vehicle, and item/item compatibility constraints. In in.west, the heterogeneity of the vehicles is taken a step further in the sense that trains have totally different characteristics in terms of the degrees of freedom regarding the tour times and end points. Furthermore, in in.west, orders are not assigned to vehicles but to containers which, in turn, are assigned to trucks and trains.
The general idea of using Evolutionary Algorithms and their hybrids for Vehicle Routing Problems has proven to be very efficient [2212]. The quality of solutions produced by evolutionary or genetic methods is often higher than that obtained by classic heuristics. Potvin [2212] pointed out that Evolutionary Algorithms can also outperform widely used metaheuristics like Tabu Search on classic problems. He also states that other approaches like artificial neural networks have by now more or less been abandoned in the area of VRPs due to their poor performance on off-the-shelf computer platforms.
In many of the publications listed in this section, it is indicated that metaheuristics work
542 49 REAL-WORLD PROBLEMS
best when a good share of domain knowledge is incorporated. This holds not only for Vehicle Routing, but also in virtually every other application of Global Optimization [2244, 2892, 2915]. Nevertheless, such knowledge is generally used as an extension, as a method to tweak generic operators and methods. In this work, we have placed problem-specific knowledge at the center of the approach.
49.3.4 Evolutionary Approach
Evolutionary Algorithms in general are discussed in depth in Chapter 28. Here we focus only on the methods that we introduced to solve the transportation planning problem at hand. We base our approach on an EA with new, intelligent operators and a complex solution representation.
49.3.4.1 Search Space
When analyzing the problem structure outlined in Section 49.3.2.2, it becomes obvious that standard encodings such as binary [1075] or integer strings, matrices, or real vectors cannot be used in the context of this very general logistics planning task. Although it might be possible to create a genotype-phenotype mapping capable of translating an integer string into a tuple representing a valid tour, trying to encode a set x of a variable number of such tours in an integer string is not feasible. First, a tour involves many substructures of variable length, such as the sets of orders o and swap bodies b. Second, it would be practically impossible to ensure the required physical soundness of the tours given that the reproduction operations would randomly modify the integer strings.
In our work, we adhered to the premise that all candidate solutions must represent correct solutions according to the specification given in Section 49.3.2.2 and that none of the search operations are allowed to violate this correctness. A candidate solution x ∈ X does not necessarily contain a complete plan which manages to deliver all orders. Instead, partial solutions (again as demanded in Section 49.3.2.2) are admitted, too.
[UML class diagram: a Phenotype x aggregates Tours; each Tour stores startLocationID (ls), endLocationID (ld), startTime, endTime, a vehicleID referencing a Vehicle f ∈ F, and the arrays orderIDs[] (o) and swapBodyIDs[] (b), the latter holding at most v(f) SwapBodies; each Order carries startLocationID, endLocationID, minStartTime, maxStartTime, minEndTime, and maxEndTime; all location IDs reference Locations l ∈ L.]
Figure 49.5: The structure of the phenotypes x.
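The plan objects of Figure 49.5 can be modeled as plain data records; the following Python dataclasses are a sketch that mirrors the field names from the diagram (the concrete types are assumptions, since the diagram does not state them):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Order:
    startLocationID: int
    endLocationID: int
    minStartTime: int
    maxStartTime: int
    minEndTime: int
    maxEndTime: int

@dataclass
class Tour:
    startLocationID: int
    endLocationID: int
    startTime: int
    endTime: int
    vehicleID: int
    orderIDs: List[int] = field(default_factory=list)
    swapBodyIDs: List[int] = field(default_factory=list)  # at most v(f) entries

@dataclass
class Phenotype:
    tours: List[Tour] = field(default_factory=list)  # the set x of tours
```

Keeping the phenotype in this native form is what allows the search operators to inspect a whole plan before modifying it.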
In order to achieve such behavior, it is clear that all reproduction operations in the Evolutionary Algorithm must have access to the complete set x of tuples. Only then can they check whether the modifications to be applied may impair the correctness of the plans. Therefore, the phenotypes are not encoded at all; instead, they are the plan objects in their native representations as illustrated in Figure 49.5.
This figure holds the UML specification of the phenotypes in our planning system. The exact same data structures are also used by the in.west middleware and graphical user interface. The location IDs (startLocationID, endLocationID) of the Orders and Tours are
indices into a database. They are also used to obtain distances and travel times between locations from a sparse distance matrix which can be updated asynchronously from different information sources. The orderIDs, swapBodyIDs, and vehicleIDs are indices into a database as well.
49.3.4.2 Search Operations
By using this explicit representation, the search operations have full access to all the information in the freight plans. Standard crossover and mutation operators are, however, no longer applicable. Instead, intelligent operators have to be introduced which respect the correctness of the candidate solutions.
For the in.west planning system, three crossover and sixteen mutation operations have been defined, each dealing with a specific constellation in the phenotypes and performing one distinct type of modification. During the evolution, individuals to be mutated are processed by a randomly picked operator. If the operator is not applicable because the individual does not belong to the corresponding constellation, another operator is tried. This is repeated until either the individual is modified or all operators have been tested. Two individuals to be combined with crossover are processed by a randomly selected operator as well.
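The retry loop described above can be sketched as follows, assuming (as an illustration, not the in.west interface) that each operator returns a modified copy when it is applicable to the individual's constellation and None otherwise:

```python
import random

def mutate(individual, operators, rng=random.Random()):
    """Try randomly chosen mutation operators until one applies.

    Each operator is assumed to return a modified copy if it is
    applicable, or None if the individual does not belong to the
    operator's constellation.
    """
    untried = list(operators)
    rng.shuffle(untried)
    for op in untried:
        result = op(individual)
        if result is not None:
            return result      # the individual was modified
    return individual          # no operator was applicable

# toy operators on an integer "plan"
inc = lambda x: x + 1                               # always applicable
only_even = lambda x: x * 2 if x % 2 == 0 else None  # applicable to even values
print(mutate(3, [only_even, inc]))  # -> 4 (only inc applies to an odd value)
```

Whatever order the shuffle produces, an applicable operator is eventually found or the individual is returned unchanged.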
[Figure 49.6 sketches four mutation operators on plans x and y, each shown as tours between the locations A, B, C, and D:
Fig. 49.6.a: Add an order.
Fig. 49.6.b: Append an order.
Fig. 49.6.c: Incorporate an order.
Fig. 49.6.d: Create a freight exchange.]
Figure 49.6: Some mutation operators from the freight planning EA.
Obviously, we cannot give detailed specifications of all twenty genetic operations [2188] (including the initial individual creation) in this chapter. Instead, we will outline the mutation operators sketched in Figure 49.6 as examples.
The first operator (Fig. 49.6.a) is applicable if there is at least one order which would not be delivered if the plan in the input phenotype x was carried out. This operator chooses randomly from all available means of transportation. Available in this context means not involved in another tour for the time between the start and end times of the order. The freight transporters closer to the source of the order are picked with higher probability. Then, a swap body is allocated in the same manner. This process leads to between one and three new tours being added to the phenotype. If the transportation vehicle is a truck, a fourth tour is added which allows it to travel back to its starting point. This step is optional and is applied only if the system is configured to send all trucks back home after the end of their routes, as is the case at DHL.
Fig. 49.6.b illustrates one operator which tries to include an additional order o into an already existing set of related tours. If the truck driving these tours has space for another swap body, at least one free swap body b is available, and picking up o and b as well as delivering o is possible without violating the time constraints of the other transportation
orders already involved in the set of tours, the order is included and the corresponding new
tours are added to the plan.
The mutator sketched in Fig. 49.6.c does the same if an additional order can be included in already existing tours because of available capacities in swap bodies. Such spare capacities occur from time to time since the containers are not left at the customers' locations after unloading the goods but are transported back to the depots. For all operators which add new orders, swap bodies, or tours to the candidate solutions, inverse operations which remove these elements are provided, too.
One exceptional operator is the truck-meets-truck mechanism. Often, two trucks are carrying out deliveries in opposite directions (B → D and D → B in Fig. 49.6.d). The operator tries to find a location C which is close to both B and D. If the time windows of the orders allow it, the two involved trucks can meet at this halting point C and exchange their freight. This way, the total distance that they have to drive can almost be halved, from 4·BD to 2·BC + 2·CD, where BC + CD ≈ BD.
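The potential gain of such an exchange follows directly from the distances involved; a small helper (with illustrative names and values) makes the arithmetic explicit:

```python
def exchange_saving(BD, BC, CD):
    """Kilometers saved by a truck-meets-truck exchange at C: without it,
    both trucks traverse the full corridor (4*BD in total); with it, each
    truck only shuttles between its endpoint and C (2*BC + 2*CD)."""
    return 4 * BD - (2 * BC + 2 * CD)

# a meeting point C roughly halfway between B and D (example distances)
print(exchange_saving(BD=100, BC=52, CD=53))  # -> 190 km saved
```

When BC + CD is close to BD, the saving approaches 2·BD, i.e., half of the original 4·BD.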
The first recombination operator used in the in.west system copies all tours from the first parent and then adds all tours from the second parent in a way that does not lead to a violation of the solution's correctness. In this process, tours which belong together, such as those created by the first mutator mentioned, are kept together. A second crossover method tries to find sets of tours in the first parent which intersect with similar sets in the second parent and joins them into an offspring plan in the same way the truck-meets-truck mutator combines tours.
49.3.4.3 Objective Functions
The freight transportation planning process run by the Evolutionary Algorithm is driven by a set f of three objective functions (f = (f1, f2, f3)). These functions, all subject to minimization, are based on the requirements stated in Section 49.3.1 and are combined via Pareto comparisons (see Section 3.3.5 on page 65 and [593, 599]) in the fitness assignment processes.
49.3.4.3.1 f1: Order Delivery
One of the most important aspects of freight planning is to deliver as many orders as possible.
Therefore, the first objective function f1(x) returns the number of orders which will not be delivered in a timely manner if the plan x was carried out. The optimum of f1 is zero.
Human operators need to hire external carriers for orders which cannot be delivered (due
to insufficient resources, for instance).
49.3.4.3.2 f2: Kilometers Driven
By using a sparse distance matrix stored in memory, the second objective function determines the total distance covered by all vehicles involved. Minimizing this distance will lead to less fuel consumption and thus lower costs and less CO2 production. The global optimum of this function is not known a priori and may not be discovered by the optimization process either.
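A sparse distance matrix lookup for such a total-distance objective might look as follows; the dict-based storage, the symmetric lookup, and the Tour fields are assumptions of this sketch, not the in.west data structures:

```python
from collections import namedtuple

# sparse distance matrix keyed by location-ID pairs (illustrative values, in km)
dist = {(1, 2): 350.0, (2, 3): 120.5, (1, 3): 410.0}

def distance(a, b):
    """Symmetric lookup in the sparse matrix."""
    if a == b:
        return 0.0
    return dist.get((a, b)) or dist.get((b, a))

Tour = namedtuple("Tour", "start end")

def f2(plan):
    """Total kilometers driven over all tours of the plan."""
    return sum(distance(t.start, t.end) for t in plan)

print(f2([Tour(1, 2), Tour(2, 3)]))  # -> 470.5
```

Because only known location pairs are stored, such a matrix can be updated asynchronously without touching the objective function itself.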
49.3.4.3.3 f3: Full Utilization of the Capacities
The third objective function minimizes the spare capacities of the vehicles involved in tours. In other words, it considers the total volume left empty in the swap bodies on the road and the unused swap body slots of the trucks and trains. f2 does not consider whether trucks are driving tours empty or loaded with empty containers. These aspects are handled by f3, which again has the optimum zero.
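The Pareto comparison used to combine the three minimization objectives can be sketched as a plain dominance check over objective vectors (the plan values below are illustrative only):

```python
def dominates(a, b):
    """True iff objective vector a Pareto-dominates b under minimization:
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# (f1 undelivered orders, f2 kilometers, f3 spare capacity) - example values
plan_a = (0, 15883.0, 12.5)
plan_b = (0, 16100.0, 12.5)
print(dominates(plan_a, plan_b))  # True: same f1 and f3, shorter distance
print(dominates(plan_b, plan_a))  # False
```

Fitness assignment schemes such as Pareto ranking only need this pairwise relation; they never have to weigh kilometers against undelivered orders explicitly.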
49.3.5 Experiments
Because of the special requirements of the in.west project and the many constraints imposed on the corresponding optimization problem, the experimental results cannot be directly compared with other works. As we have shown in our discussion of related work in Section 49.3.3, none of the approaches in the Vehicle Routing literature are sufficiently similar to this scenario.
Hence, it was especially important to evaluate our freight planning system rigorously. We have therefore carried out a series of tests according to the full factorial design of experiments paradigm [378, 3030]. These experiments (which we will discuss in Section 49.3.5.1) are based on a single, real-world set of orders. The results of additional experiments performed with different datasets are outlined in Section 49.3.5.2. All data used have been reconstructed from the actual order database of the project partner DHL, one of the largest logistics companies worldwide. This database is also the yardstick with which we have measured the utility of our system.
The experiments were conducted using a simplified distance matrix for both the EA and the original plans. Since the original plans did not involve trains, we also deactivated the mutation operators which incorporate train tours into candidate solutions; otherwise the results would have been incomparable. Legal aspects like statutory idle periods of the truck drivers have not been considered in the reproduction operators either. However, only plans not violating these constraints were considered in the experimental evaluation.
49.3.5.1 Full Factorial Tests
Evolutionary Algorithms have a wide variety of parameters, ranging from the choice of sub-algorithms (like those computing a fitness value from the vectors of objective values for each individual) to the mutation rate determining the fraction of the selected candidate solutions which are to undergo mutation. The performance of an EA strongly depends on the configuration of these parameters. In different optimization problems, usually different configurations are beneficial, and a setting finding optimal solutions in one application may lead to premature convergence to a local optimum in other scenarios. Because of the novelty of the presented approach for transportation planning, performing a large number of experiments with different settings of the EA was necessary in order to find the optimal configuration to be utilized in the in.west system in practice.
We therefore decided to conduct a full factorial experimental series, i.e., one where all possible combinations of settings of a set of configuration parameters are tested. As basis for this series, we used a test case consisting of 183 orders reconstructed from one day in December 2007. The original freight plan xo for these orders contained 159 tours which covered a total distance of d = f2(xo) = 19 109 km. The capacity of the vehicles involved was filled to 65.5%. The parameters examined in these experiments are listed in Table 49.2.
Param. Setting and Meaning
ss
In every generation of the EA, new individuals are created by the reproduction operations. The parent individuals in the population are then either discarded (generational, ss = 0) or compete with their offspring (steady-state, ss = 1).
el
Elitist Evolutionary Algorithms keep an additional archive preserving the best candidate solutions found (el = 1). Using elitism ensures that these candidate solutions cannot be lost due to the randomness of the selection process. Turning off this feature (el = 0) may allow the EA to escape local optima more easily.
ps Allowing the EA to work with populations consisting of many individuals increases its chance of finding good solutions but also increases its runtime. Three different population sizes were tested: ps ∈ {200, 500, 1000}.
fa
Either simple Pareto-Ranking [599] (fa = 0) or an extended assignment process (fa = 1, called variety preserving in [2892]) with sharing was applied. Sharing [742, 1250] decreases the fitness of individuals which are very similar to others in the population in order to force the EA to explore many different areas in the search space.
c
The simple convergence prevention (SCP) method proposed in Section 28.4.7.2 on page 304 was either used (c = 0.3) or not (c = 0). SCP is a clearing approach [2167, 2391] applied in the objective space which discards candidate solutions with equal objective values with probability c.
mr/cr Different settings for the mutation rate mr ∈ {0.6, 0.8} and the crossover rate cr ∈ {0.2, 0.4} were tested. These rates do not necessarily sum up to 1, since individuals resulting from recombination may undergo mutation as well.
Table 49.2: The configurations used in the full-factorial experiments.
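A full factorial series simply enumerates the cross-product of all parameter settings; with the values from Table 49.2 this yields exactly 192 configurations, as a short sketch shows:

```python
import itertools

settings = {
    "ss": [0, 1],
    "el": [0, 1],
    "ps": [200, 500, 1000],
    "fa": [0, 1],
    "c":  [0, 0.3],
    "mr": [0.6, 0.8],
    "cr": [0.2, 0.4],
}
configs = [dict(zip(settings, combo))
           for combo in itertools.product(*settings.values())]
print(len(configs))  # -> 192 configurations (2*2*3*2*2*2*2)
```

Each of these configurations is then run repeatedly (ten times in the experiments described here) to average out the randomness of the EA.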
These settings were varied in the experiments and each of the 192 possible configurations was tested ten times. All runs utilized a tournament selection scheme with five contestants and were granted 10 000 generations. The measurements collected are listed in Table 49.3.
Meas. Meaning
ar The number of runs which found plans that completely covered all orders.
at The median number of generations needed by these runs until such plans were
found.
gr The number of runs which managed to find such plans which additionally were at least as good as the original freight plans.
gt The median number of generations needed by these runs in order to find such plans.
et The median number of generations after which f2 did not improve by more than 1%, i.e., the point where the experiments could have been stopped without a significant loss in the quality of the results.
e The median number of individual evaluations until this point.
d The median value of f2, i.e., the median distance covered.
Table 49.3: The measurements taken during the experiments.
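The tournament selection scheme with five contestants used in all runs can be sketched as follows (a minimal, minimizing variant; the fitness interface and the stand-in individuals are assumptions of this example):

```python
import random

def tournament_select(population, fitness, k=5, rng=random.Random(42)):
    """Pick k random contestants and return the one with the best
    (lowest) fitness value - a minimizing tournament."""
    contestants = rng.sample(population, k)
    return min(contestants, key=fitness)

pop = list(range(100))                         # stand-in individuals
winner = tournament_select(pop, fitness=lambda x: x)
```

Selection pressure is controlled by k: with five contestants, mediocre individuals still occasionally win, which helps preserve diversity.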
# mr cr cp el ps ss fa ar at gr gt et e d
1. 0.8 0.4 0.3 1 1000 1 1 10 341 10 609 3078 3 078 500 15 883 km
2. 0.6 0.2 0.3 0 1000 1 1 10 502 10 770 5746 5 746 500 15 908 km
3. 0.8 0.2 0.3 1 1000 1 1 10 360 10 626 4831 4 831 000 15 929 km
4. 0.6 0.4 0.3 0 1000 1 1 10 468 10 736 5934 5 934 000 15 970 km
5. 0.6 0.2 0.3 1 1000 1 1 10 429 10 713 6236 6 236 500 15 971 km
6. 0.8 0.2 0.3 0 1000 1 1 10 375 10 674 5466 5 466 000 16 003 km
7. 0.8 0.4 0.3 1 1000 1 0 10 370 10 610 5691 5 691 500 16 008 km
8. 0.8 0.2 0.3 0 1000 0 1 10 222 10 450 6186 6 186 500 16 018 km
9. 0.8 0.4 0 0 1000 0 1 10 220 10 463 4880 4 880 000 16 060 km
10. 0.8 0.2 0 1 1000 0 0 10 277 10 506 2862 2 862 500 16 071 km
11. 0.8 0.4 0.3 0 1000 1 0 10 412 10 734 5604 5 604 000 16 085 km
12. 0.8 0.2 0.3 1 1000 0 1 10 214 10 442 4770 4 770 500 16 093 km
13. 0.8 0.2 0.3 1 1000 1 0 10 468 10 673 4970 4 970 500 16 100 km
...
15. 0.8 0.2 0 0 200 1 0 10 1286 2 6756 6773 1 354 700 20 236 km
16. 0.6 0.2 0 0 500 1 0 10 1546 1 9279 9279 4 639 500 19 529 km
17. 0.8 0.4 0.3 0 200 0 0 10 993 0 19 891 km
18. 0.8 0.4 0 0 200 0 0 10 721 0 20 352 km
19. 0.6 0.2 0 0 200 1 0 10 6094 0 23 709 km
20. 0.6 0.4 0 0 1000 1 0 0 0
21. 0.8 0.4 0 0 1000 1 0 3 6191 0
22. 0.8 0.4 0 0 500 1 0 4 5598 0
23. 0.6 0.4 0 0 200 0 0 3 2847 0
24. 0.6 0.4 0 0 200 1 0 0 0
25. 0.8 0.4 0 0 200 1 0 0 0
26. 0.6 0.4 0 0 500 1 0 0 0
Table 49.4: The best and the worst evaluation results in the full-factorial tests.
Table 49.4 contains the thirteen best and the twelve worst configurations, sorted according to gr, d, and e. The best configuration managed to reduce the distance to be covered by over 3000 km (17%) consistently. Even the configuration ranked 170 (not in Table 49.4) saved almost 1100 km in median. In total, 172 out of the 192 test series managed to surpass the original plans for the orders in the dataset in all of ten runs, and only ten configurations were unable to achieve this goal at all.
The experiments indicate that a combination of the highest tested population size (ps = 1000), steady-state and elitist population treatment, SCP with rejection probability c = 0.3, a sharing-based fitness assignment process, a mutation rate of 80%, and a crossover rate of 40% is able to produce the best results. We additionally applied significance tests (the sign test [2490] and Wilcoxon's signed rank test [2490, 2939]) in order to check whether there are also settings of single parameters which generally have a positive influence. At a significance level of α = 0.02, we considered a tendency only if both (two-tailed) tests agreed. Applying the convergence prevention mechanism (SCP) [2892], larger population sizes, variety preserving fitness assignment [2892], elitism, and higher mutation and lower crossover rates have significantly positive influence in general.
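As a concrete illustration of this testing protocol, the two-tailed sign test over paired run outcomes can be computed directly from binomial tail probabilities (a minimal sketch; the 9-of-10 split below is an invented example, and the book's procedure additionally requires agreement with Wilcoxon's signed rank test):

```python
from math import comb

def sign_test_p(wins, losses):
    """Two-tailed sign test p-value: the probability, under a fair coin,
    of a win/loss split at least as extreme as the one observed
    (ties are assumed to be dropped beforehand)."""
    n = wins + losses
    k = min(wins, losses)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# e.g. one parameter setting wins 9 of 10 paired comparisons:
print(round(sign_test_p(9, 1), 4))  # -> 0.0215, not significant at alpha = 0.02
```

With only ten paired runs even a 9:1 split narrowly misses the strict α = 0.02 threshold, which illustrates why such tests are run over many configurations.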
Interestingly, the steady-state configurations lost in the significance tests against the generational ones, although the seven best-performing settings were steady-state. Here the utility of full factorial tests becomes obvious: steady-state population handling performed very well if (and only if) sharing and the SCP mechanism were applied, too. In the other cases, it led to premature convergence.
This behavior shows the following: transportation planning is a multimodal optimization problem with a probably rugged fitness landscape [2915] or with local optima which are many
search steps (applications of reproduction operators) apart. Hence, applying steady-state EAs to Vehicle Routing Problems similar to the one described here can be beneficial, but only if diversity-preserving fitness assignment or selection algorithms are used in conjunction. Only then is the probability of premature convergence kept low enough, and different local optima and distant areas of the search space are explored sufficiently.
49.3.5.2 Tests with Multiple Datasets
We ran experiments with many other order datasets for which the actual freight plans used by
[Figure 49.7 plots, against the generation count, the best f2 value (the total kilometers) of the individuals with the most satisfied orders, together with the order-satisfaction level, the point where 100% (and, in the second case, 99%) order satisfaction is reached, and the performance of the original plan.
Fig. 49.7.a: For 642 orders (14% better). Fig. 49.7.b: For 1016 orders (3/10% better).]
Figure 49.7: Two examples for the freight plan evolution.
the project partners were available. In all scenarios, our approach yielded an improvement which was never below 1%, usually above 5%, and for some days even exceeding 15%. Figure 49.7 illustrates the best f2 values (the total kilometers) of the individuals with the most orders satisfied in the population for two typical example evolutions.
In both diagrams, the total distance first increases as the number of orders delivered by the candidate solutions rises due to the pressure from f1. At some point, plans which are
able to deliver all orders have evolved and f1 is satisfied (minimized). Now its corresponding dimension of the objective space begins to collapse, the influence of f2 intensifies, and the total distances of the plans decrease. Soon afterwards, the efficiency of the original plans is surpassed. Finally, the populations of the EAs converge to a Pareto frontier and no further improvements occur. In Fig. 49.7.a, this limit was 54 993 km, an improvement of more than 8800 km or 13.8% compared to the original distance of 63 812 km.
Each point in the graph of f2 in the diagrams represents one point in the Pareto frontier of the corresponding generation. Fig. 49.7.b illustrates one additional graph for f2: the best plans which can be created when at most 1% of the orders are outsourced. Compared to the transportation plan including assignments for all orders, which had a length of 79 464 km, these plans could reduce the distance to 74 436 km, i.e., another 7% of the overall distance could be saved. Thus, in this case, an overall reduction of around 7575 km is achieved in comparison to the original plan, which had a length of 82 013 km.
49.3.5.3 Time Consumption
One run of the algorithm (prototypically implemented in Java) for the dataset used in the full factorial tests (Section 49.3.5.1) took around three hours. For sets with around 1000 orders, it still easily fulfills the requirement of delivering results within 24 hours. Runs with close to 2000 orders, however, take longer than one week (in our experiments, we used larger population sizes for them). Here, it should be pointed out that these measurements were taken on a single dual-core 2.6 GHz machine, which is only a fraction of the capacity available in the dedicated data centers of the project partners. It is well known that EAs can be efficiently parallelized and distributed over clusters [1791, 2897]. The final implementation of the in.west system will incorporate distribution mechanisms and thus be able to deliver results in time for all situations.
49.3.6 Holistic Approach to Logistics
Logistics involves many aspects, and the planning of efficient routes is only one of them. Customers and legislation [2768], for instance, require traceability of the production data and thus also of the goods on the road. Experts demand closer linkage between the traffic carriers and an efficient distribution of traffic to cargo rail and water transportation.
[Figure 49.8 sketches the in.west system: the software with its graphical user interface and the transportation planner communicate via the middleware with satellite-based location devices, i.e., swap bodies equipped with sensor nodes.]
Figure 49.8: An overview of the in.west system.
Technologies like telematics are regarded as the enablers of smart logistics which optimize the physical value chain. Only a holistic approach combining intelligent planning and such technologies can solve the challenges [2248, 2607] faced by logistics companies. Therefore, the focus of the in.west project was to combine software, telematics, and business optimization approaches into one flexible and adaptive system, sketched in Figure 49.8.
Data from telematics become input to the optimization system, which suggests transportation plans to the human operator, who selects or modifies these suggestions. The final plans, in turn, are then used to configure the telematic units. A combination of both allows the customers to track their deliveries and the operator to react to unforeseen situations. In such situations, traffic jams or accidents, for instance, the optimization component can again be used to make ad-hoc suggestions for resolutions.
49.3.6.1 The Project in.west
The prototype introduced in this chapter was developed in the context of the BMWi-promoted project in.west and will be evaluated in a field test in the third quarter of 2009. The analyses take place in the area of freight transportation in the business segment of courier, express, and parcel services of DHL, the market leader in this business. Today, the transport volume of this company constitutes a substantial portion of the traffic volume of the traffic carriers road, ship, and rail. Hence, a significant decrease in the freight traffic caused by DHL might lead to a noticeable reduction in the freight traffic volume on German roads.
The goal of this project is to achieve this decrease by utilizing information and communication technologies on swap bodies, new approaches to planning, and novel control processes. The main objective is to design a decision support tool to assist the operator with suggestions for traffic reduction and a smart swap body telematic unit.
The requirements for the in.west software are manifold, since the needs of both the operators and the customers are to be satisfied. From the operators' perspective, for example, a constant documentation of the transportation processes is necessary; they require that this documentation start with the selection, movement, and employment of the swap bodies. From the view of the customer, only tracking and tracing of the containers on the road must be supported. With the web-based user interfaces we provide, a spontaneous check of the load condition, the status of the container, and information about the carrier is furthermore possible.
49.3.6.2 Smart Information and Communication Technology
All the information required by customers and operators on the status of the freight has to be obtained by the software system first. Therefore, in.west also features a hardware development project with the goal of designing a telematic unit. A device called YellowBox (illustrated in Figure 49.9) was developed which enables swap bodies to transmit real-time geo-positioning data of the containers to a software system. The basic functions of the device are location, communication, and identification. The data received from the YellowBox is processed by the software for planning and controlling the swap bodies in the logistic network.
Figure 49.9: The YellowBox, a mobile sensor node.
The YellowBox consists of a main board, a location unit (GPS), a communication unit (GSM/GPRS), and a processor unit. Digital input and output interfaces ensure the scalability of the device, e.g., for load volume monitoring. Swap bodies do not offer reliable power sources for technical equipment like telematic systems. Thus, the YellowBox has been designed as a sensor node which uses battery current [2189, 2912].
One of the most crucial criteria for the application of these sensor nodes in practice was long battery duration and thus low power consumption. The YellowBox is therefore turned on and off in certain time intervals. The software system automatically configures the device with the route along which it will be transported (and which has been evolved by the planner). In pre-defined time intervals, it can thus check whether or not it is on the right track. Only if location deviations above a certain threshold are detected will it notify the middleware. Also, if more than one swap body is to be transported by the same vehicle, only one of them needs to perform this notification. With these approaches, communication, one of the functions with the highest energy consumption, is effectively minimized.
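The wake-up logic can be summarized as a simple threshold test; the function name, interface, and the one-dimensional toy distance below are purely illustrative, not the YellowBox firmware:

```python
def off_route(position, planned_route, threshold_km, dist):
    """Notify the middleware only when the current position deviates
    from every waypoint of the configured route by more than the
    allowed threshold."""
    return all(dist(position, wp) > threshold_km for wp in planned_route)

dist = lambda a, b: abs(a - b)   # toy one-dimensional "geo" distance
print(off_route(17, planned_route=[0, 10, 20], threshold_km=5, dist=dist))  # False
print(off_route(40, planned_route=[0, 10, 20], threshold_km=5, dist=dist))  # True
```

Because the test is evaluated only at the pre-defined wake-up intervals, the radio stays off most of the time, which is where the energy saving comes from.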
49.3.7 Conclusions
In this section, we presented the in.west freight planning component, which utilizes an Evolutionary Algorithm with intelligent reproduction operations for general transportation planning problems. The approach was tested rigorously on real-world data from the in.west partners and achieved excellent results. It has been integrated as a constituting part of the holistic in.west logistics software system.
We presented this work at the EvoTRANSLOG'09 workshop [1052] in Tübingen [2914]. One point of the very fruitful discussion there was the question why we did not utilize heuristics to create some initial solutions for the EA. We intentionally left this for our future work for two reasons: First, we fear that creating such initial solutions may lead to a decrease of diversity in the population. In Section 49.3.5.1 we showed that diversity is a key to finding good solutions to this class of problem. Second, as can be seen in the diagrams provided in Figure 49.7, finding initial solutions where all orders are assigned to routes is not the time-consuming part of the EA; optimizing them to plans with a low total distance is. Hence, incorporating measures for distribution and efficient parallelization may be a more promising addition to our approach. If a cluster of, for instance, twenty computers is available, we can assume that distribution according to client-server or island model schemes [1108, 1838, 2897] will allow us to decrease the runtime to roughly one tenth of the current value. Nevertheless, testing the utility of heuristics for creating the initial population is both interesting and promising.
The runtime issues of our system for very large datasets could be resolved by paral-
lelization and distribution. Another interesting feature for future work would be to allow
for online updates of the distance matrix, which is used both to compute f_2 and to
determine the time a truck needs to travel from one location to another. The system would
then be capable of (a) performing planning for the whole orders of one day in advance,
and (b) updating smaller portions of the plans online if traffic jams occur. It should be noted
that even with such a system, the human operator cannot be replaced. There are always
constraints and pieces of information which cannot be employed in even the most advanced
automated optimization process. Hence, the solutions generated by our transportation plan-
ner are suggestions rather than doctrines. They are displayed to the operator and she may
modify them according to her needs.
Chapter 50
Benchmarks
50.1 Introduction
In this chapter, we discuss some benchmark and toy problems which are used to demonstrate
the utility of Global Optimization algorithms. Such problems usually have no direct real-
world application but are well understood, widely researched, and can be used
1. to measure the speed and the ability of optimization algorithms to find good solutions,
as done for example by Borgulya [360], Liu et al. [1747], Luke and Panait [1787], and
Yang et al. [3020],
2. as benchmarks for comparing different optimization approaches, as done by Purshouse
and Fleming [2229], Zitzler et al. [3098], and Yang et al. [3020], for instance,
3. to derive theoretical results, as done for example by Jansen and Wegener [1433],
4. as basis to verify theories, as used for instance by Burke et al. [455] and Langdon and
Poli [1672],
5. as playground to test new ideas, research, and developments,
6. as easy-to-understand examples to discuss features and problems of optimization (as
done here in Example E3.5 on page 57, for instance), and
7. for demonstration purposes, since they normally are interesting, funny, and can be
visualized in a nice manner.
50.2 Bit-String based Problem Spaces
50.2.1 Kauffman's NK Fitness Landscapes
The ideas of fitness landscapes (introduced in Section 5.1 on page 93) and epistasis
(discussed in Chapter 17 on page 177) came originally from evolutionary biology
and later were adopted by Evolutionary Computation theorists. It is thus not surprising
that biologists also contributed much to the research of both. In the late 1980s, Kauffman
[1499] defined the NK fitness landscape [1499, 1501, 1502], a family of objective functions
with tunable epistasis, in an effort to investigate the links between epistasis and ruggedness.
The problem space and also the search space of this problem are bit strings of the length
N, i. e., G = X = B^N. Only one single objective function is used and referred to as fitness
function F_{N,K} : B^N → R^+. Each gene x_i contributes one value f_i : B^{K+1} → [0, 1] ⊂ R^+
to the fitness function, which is defined as the average of all of these N contributions. The
fitness f_i of a gene x_i is determined by its allele and the alleles at K other loci
x_{i_1}, x_{i_2}, .., x_{i_K} with i_{1..K} ∈ [0, N−1] \ {i} ⊂ N_0, called its neighbors.
F_{N,K}(x) = (1/N) · Σ_{i=0}^{N−1} f_i(x_i, x_{i_1}, x_{i_2}, .., x_{i_K})    (50.1)
Whenever the value of a gene changes, all the fitness values of the genes to whose neighbor set
it belongs will change, too, to values uncorrelated to their previous state. While N describes
the basic problem complexity, the intensity of this epistatic effect can be controlled with the
parameter K ∈ [0..N−1]: If K = 0, there is no epistasis at all, but for K = N−1 the epistasis
is maximized and the fitness contribution of each gene depends on all other genes. Two
different models are defined for choosing the K neighbors: adjacent neighbors, where the K
nearest other genes influence the fitness of a gene, or random neighbors, where K other genes
are chosen randomly.
The single functions f_i can be implemented by a table of length 2^{K+1} which is indexed
by the (binary encoded) number represented by the gene x_i and its neighbors. These tables
contain a fitness value for each possible value of a gene and its neighbors. They can be filled
by sampling a uniform distribution in [0, 1) (or any other random distribution).
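For concreteness, the table-based construction can be sketched as follows. This is a minimal illustration under our own assumptions (adjacent neighbors chosen cyclically, and all function and parameter names invented here), not a reference implementation:

```python
import random

def make_nk(n, k, seed=0):
    """Create an NK landscape (adjacent-neighbor model) as N lookup
    tables of size 2^(K+1), each filled from a uniform distribution
    over [0, 1)."""
    rng = random.Random(seed)
    tables = [[rng.random() for _ in range(2 ** (k + 1))] for _ in range(n)]

    def fitness(x):  # x is a tuple of n bits
        total = 0.0
        for i in range(n):
            # binary-encode gene i and its k adjacent neighbors (wrapping)
            idx = 0
            for j in range(k + 1):
                idx = (idx << 1) | x[(i + j) % n]
            total += tables[i][idx]
        return total / n  # average of the N contributions (Equation 50.1)

    return fitness

f = make_nk(n=8, k=2)
x = (1, 0, 1, 1, 0, 0, 1, 0)
assert 0.0 <= f(x) <= 1.0
```

Since each f_i value lies in [0, 1), the averaged fitness does too; flipping one bit re-indexes K+1 of the tables, which is exactly the epistatic effect described above.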
We may also consider the f_i to be single objective functions that are combined to a
fitness value F_{N,K} by a weighted sum approach, as discussed in Section 3.3.3. Then, the
nature of NK problems will probably lead to another well known aspect of multi-objective
optimization: conflicting criteria. An improvement in one objective may very well lead to
degeneration in another one.
The properties of the NK landscapes have intensely been studied in the past and the
most significant results from Kauffman [1500], Weinberger [2882], and Fontana et al. [959]
will be discussed here. We therefore borrow from the summaries provided by Altenberg [79]
and Platel et al. [2186]. Further information can be found in [572, 573, 1016, 2515, 2982]. An
analysis of the behavior of Estimation of Distribution Algorithms and Genetic Algorithms
in NK landscapes has been provided by Pelikan [2146].
50.2.1.1 K = 0
For K = 0, the fitness function is not epistatic. Hence, all genes can be optimized separately
and we have the classical additive multi-locus model.
1. There is a single optimum x* which is globally attractive, i. e., which can and will be
found by any (reasonable) optimization process regardless of the initial configuration.
2. For each individual x ≠ x*, there exists a fitter neighbor.
3. An adaptive walk (see Section 8.2.1 on page 115 for a discussion of adaptive walks)
from any point in the search space will proceed by reducing the
Hamming distance to the global optimum by 1 in each step (if each mutation only
affects one single gene). The number of better neighbors equals the Hamming distance
to the global optimum. Hence, the estimated number of steps of such a walk is N/2.
4. The fitness of direct neighbors is highly correlated since it shares N−1 components.
50.2.1.2 K = N − 1
For K = N−1, the fitness function equals a random assignment of fitness to each point of
the search space.
1. The probability that a genotype is a local optimum is 1/(N+1).
2. The expected total number of local optima is thus 2^N/(N+1).
3. The average distance between local optima is approximately 2 ln(N−1).
4. The expected length of adaptive walks is approximately ln(N−1).
5. The expected number of mutants to be tested in an adaptive walk before reaching a
local optimum is Σ_{i=0}^{log_2(N−1)−1} 2^i.
6. With increasing N, the expected fitness of local optima reached by an adaptive walk
from a random initial configuration decreases towards the mean fitness F_{N,K} = 1/2 of the
search space. This is called the complexity catastrophe [1500].
For K = N−1, the work of Flyvbjerg and Lautrup [935] is of further interest.
50.2.1.3 Intermediate K
1. For small K, the best local optima share many common alleles. As K increases, this
correlation diminishes. This degeneration proceeds faster for the random neighbors
method than for the nearest neighbors approach.
2. For larger K, the fitness of the local optima approaches a normal distribution with mean
m and variance s approximately

m = μ + σ · sqrt(2 ln(K+1) / (K+1))    (50.2)

s = (K+1)σ² / (N(K + 1 + 2(K+2) ln(K+1)))    (50.3)

where μ is the expected value of the f_i and σ² is their variance.
3. The mean distance between local optima, roughly twice the length of an adaptive walk,
is approximately (N log_2(K+1)) / (2(K+1)).
4. The autocorrelation function ρ(k, F_{N,K}) (see Definition D14.2 on page 163 for more
information on autocorrelation) and the correlation length τ are:

ρ(k, F_{N,K}) = (1 − (K+1)/N)^k    (50.4)

τ = −1 / ln(1 − (K+1)/N)    (50.5)
50.2.1.4 Computational Complexity
Altenberg [79] nicely summarizes the most important theorems about the computational
complexity of optimization of NK fitness landscapes. These theorems have been proven using
different algorithms introduced by Weinberger [2883] and Thompson and Wright [2703].
1. The NK optimization problem with adjacent neighbors is solvable in O(2^K · N) steps
and thus in P [2883].
2. The NK optimization problem with random neighbors is NP-complete for K ≥ 2
[2703, 2883].
3. The NK optimization problem with random neighbors and K = 1 is solvable in poly-
nomial time [2703].
50.2.1.5 Adding Neutrality: NKp, NKq, and Technological Landscapes
As we have discussed in Section 16.1, natural genomes exhibit a certain degree of neutral-
ity. Therefore, researchers have proposed extensions of the NK landscape which introduce
neutrality, too [1031, 1032]. Two of them, the NKp and NKq landscapes, achieve this by
altering the contributions f_i of the single genes to the total fitness. In the following, assume
that there are N tables, each with 2^K entries representing these contributions.
The NKp landscape devised by Barnett [219] achieves neutrality by setting a certain
number of entries in each table to zero. Hence, the corresponding allele combinations do
not contribute to the overall fitness of an individual. If a mutation leads to a transition
from one such zero configuration to another one, it is effectively neutral. The parameter p
denotes the probability that a certain allele combination does not contribute to the fitness.
As proven by Reidys and Stadler [2287], the ruggedness of the NKp landscape does not vary
for different values of p. Barnett [218] proved that the degree of neutrality in this landscape
depends on p.
Newman and Engelhardt [2028] follow a similar approach with their NKq model. Here,
the fitness contributions f_i are integers drawn from the range [0, q) and the total fitness of a
candidate solution is normalized by multiplying it with 1/(q−1). A mutation is neutral when
the new allelic combination resulting from it leads to the same contribution as the old
one. In NKq landscapes, the neutrality decreases with rising values of q. In [1032], you can
find a thorough discussion of the NK, the NKp, and the NKq fitness landscapes.
With their technological landscapes, Lobo et al. [1754] follow the same approach from
the other side: they discretize the continuous total fitness function F_{N,K}. The parameter M
of their technological landscapes corresponds to a number of bins [0, 1/M), [1/M, 2/M), . . . ,
into which the fitness values are sorted.
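The zeroing scheme of the NKp landscape can be sketched in a few lines. This is an illustrative sketch only: the names and the adjacent-neighbor choice are our own, and we reuse the 2^{K+1}-entry table layout from the NK definition above rather than the 2^K-entry tables mentioned in this section:

```python
import random

def make_nkp(n, k, p, seed=0):
    """Sketch of an NKp landscape (after Barnett): an NK landscape in
    which each table entry is zeroed with probability p, so the
    corresponding allele combinations contribute nothing (neutrality)."""
    rng = random.Random(seed)
    tables = [[0.0 if rng.random() < p else rng.random()
               for _ in range(2 ** (k + 1))] for _ in range(n)]

    def fitness(x):
        total = 0.0
        for i in range(n):
            idx = 0
            for j in range(k + 1):  # gene i and its k adjacent neighbors
                idx = (idx << 1) | x[(i + j) % n]
            total += tables[i][idx]
        return total / n
    return fitness

# p = 1 zeroes every entry: the landscape is completely flat (all-neutral)
flat = make_nkp(n=4, k=1, p=1.0)
assert flat((1, 0, 1, 1)) == 0.0
```

A mutation that moves between two zeroed table configurations changes no contribution at all, which is exactly the neutral transition described above.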
50.2.2 The p-Spin Model
Motivated by the wish of researching the models for the origin of biological information
by Anderson [94, 2317] and Tsallis and Ferreira [2725], Amitrano et al. [86] developed
the p-spin model. This model is an alternative to the NK fitness landscape for tunable
ruggedness [2884]. Other than the previous models, it includes a complete definition for all
genetic operations, which will be discussed in this section.
The p-spin model works with a fixed population size ps of individuals of an also fixed
length N. There is no distinction between genotypes and phenotypes, in other words, G = X.
Each gene of an individual x is a binary variable which can take on the values −1 and 1.

G = {−1, 1}^N,  x_i[j] ∈ {−1, 1} ∀i ∈ [1..ps], j ∈ [0..N−1]    (50.6)
On the 2^N possible genotypic configurations, a space with the topology of an N-dimensional
hypercube is defined where neighboring individuals differ in exactly one element. On this
genome, the Hamming distance dist_ham can be defined as

dist_ham(x_1, x_2) = (1/2) · Σ_{i=0}^{N−1} (1 − x_1[i] · x_2[i])    (50.7)

Two configurations are said to be d mutations away from each other if they have the Ham-
ming distance d. Mutation in this model is applied to a fraction of the N·ps genes in the
population. These genes are chosen randomly and their state is changed, i. e., x[i] → −x[i].
The objective function f_K (which is called fitness function in this context) is subject to
maximization, i. e., individuals with a higher value of f_K are less likely to be removed from
the population. For every subset z of exactly K genes of a genotype, one contribution A(z)
is added to the fitness.

f_K(x) = Σ_{z ∈ P([0..N−1]) ∧ |z|=K} A(x[z[0]], x[z[1]], . . . , x[z[K−1]])    (50.8)

A(z) = a_z · z[0] · z[1] ⋯ z[K−1] is the product of a uniformly distributed random number a_z
and the elements of z. For K = 2, f_2 can be written as
f_2(x) = Σ_{i=1}^{N−1} Σ_{j=0}^{i−1} a_ij · x[i] · x[j], which
corresponds to the spin-glass [311, 1879] function first mentioned by Anderson [94] in this
context. With rising values of K, this fitness landscape becomes more rugged. Its correlation
length is approximately N/(2K), as discussed thoroughly by Weinberger and Stadler [2884].
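The genome distance of Equation 50.7 and the K = 2 (spin-glass) fitness can be illustrated as follows; the coefficient range and all names here are our own assumptions:

```python
import random

def dist_ham(x1, x2):
    """Hamming distance on {-1,+1}^N genomes (Equation 50.7): each
    differing pair contributes 1 - (-1) = 2, halved by the factor 1/2."""
    return sum(1 - a * b for a, b in zip(x1, x2)) // 2

def make_f2(n, seed=0):
    """Sketch of the K = 2 (spin-glass) fitness: one random coefficient
    a_ij for every pair of loci i > j."""
    rng = random.Random(seed)
    a = {(i, j): rng.uniform(-1, 1) for i in range(n) for j in range(i)}

    def f2(x):
        return sum(a[i, j] * x[i] * x[j] for (i, j) in a)
    return f2

x1 = (1, -1, 1, 1)
x2 = (1, 1, 1, -1)
assert dist_ham(x1, x2) == 2
```

Note that f2 is invariant under flipping every spin at once, since each term contains a product of exactly two spins; this global symmetry is characteristic of the spin-glass function.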
For selection, Amitrano et al. [86] suggest to use the measure P_D(x) defined by Rokhsar
et al. [2317] as follows:

P_D(x) = 1 / (1 + e^{β(f_K(x) − H_0)})    (50.9)

where the coefficient β is a sharpness parameter and H_0 is a threshold value. For β → ∞,
all individuals x with f_K(x) < H_0 will die and for β = 0, the death probability is always 1/2.
The individuals which have died are then replaced with copies of the survivors.
50.2.3 The ND Family of Fitness Landscapes
The ND family of fitness landscapes has been developed by Beaudoin et al. [242] in order
to provide a model problem with tunable neutrality.
The degree of neutrality is defined as the number (or, better, the fraction) of neutral
neighbors (i. e., those with the same fitness) of a candidate solution, as specified in Equation 16.1
on page 171. The populations of optimization processes residing on a neutral network
(see Section 16.4 on page 173) tend to converge into the direction of the individual which
has the highest degree of neutrality on it. Therefore, Beaudoin et al. [242] create a landscape
with a predefined neutral degree distribution.
The search space is again the set of all binary strings of the length N, G = X = B^N.
Thus, a genotype has minimally 0 and at most N neighbors with Hamming distance 1 that
have the same fitness. The array D has the length N+1 and the element D[i] represents
the fraction of genotypes in the population that have i neutral neighbors.
Beaudoin et al. [242] provide an algorithm that divides the search space into neutral
networks according to the values in D. Since this approach cannot exactly realize the
distribution defined by D, the degrees of neutrality of the single individuals are subsequently
refined with a Simulated Annealing algorithm. The objective (fitness) function is created in
form of a complete table mapping X → R. All members of a neutral network then receive
the same, random fitness.
If it is ensured that all members in a neutral network always have the same fitness,
its actual value can be modified without changing the topology of the network. Tunable
deceptiveness is achieved by setting the fitness values according to the Trap Functions [19,
1467].
50.2.3.1 Trap Functions
Trap functions f_{b,r,x*} : B^N → R are subject to maximization based on the Hamming distance
to a pre-defined global optimum x*. They build a second, local optimum in form of a hill
with a gradient pointing away from the global optimum. This trap is parameterized with
two values, b and r, where b corresponds to the width of the attractive basins and r to their
relative importance.

f_{b,r,x*}(x) = 1 − dist_ham(x, x*)/(N·b)              if (1/N)·dist_ham(x, x*) < b
               r·((1/N)·dist_ham(x, x*) − b)/(1 − b)   otherwise        (50.10)

Equation 50.11 shows a similar Trap function defined by Ackley [19] where u(x) is the
number of ones in the bit string x of length n and z = 3n/4 [1467]. The objective function
f(x), subject to maximization, is sketched in Figure 50.1.

f(x) = (8n/z) · (z − u(x))         if u(x) ≤ z
       (10n/(n − z)) · (u(x) − z)   otherwise        (50.11)
Figure 50.1: Ackley's Trap function [19, 1467], plotted over u(x): a local optimum with a large basin of attraction and the global optimum with a small basin of attraction.
50.2.4 The Royal Road
The Royal Road functions developed by Mitchell et al. [1913] and presented first at the
Fifth International Conference on Genetic Algorithms in July 1993 are a set of special fitness
landscapes for Genetic Algorithms [970, 1464, 1913, 2233, 2780]. Their problem space X and
search space G are fixed-length bit strings. The Royal Road functions are closely related
to the Schema Theorem (see Section 29.5 on page 341 for more details) and the Building
Block Hypothesis (elaborated on in Section 29.5.5 on page 346) and were used to study the
way in which highly fit schemas are discovered. They therefore define a set of schemas
S = {s_1, s_2, . . . , s_n} and an objective function (here referred to as fitness function), subject
to maximization, as

f(x) = Σ_{s∈S} c(s) · σ(s, x)    (50.12)

where x ∈ X is a bit string, c(s) is a value assigned to the schema s and σ(s, x) is defined as

σ(s, x) = 1 if x is an instance of s
          0 otherwise        (50.13)

In the original version, c(s) is the order of the schema s, i. e., c(s) = order(s), and S is
specified as follows (where * stands for the don't care symbol as usual).
s_1  = 11111111********************************************************; c(s_1) = 8
s_2  = ********11111111************************************************; c(s_2) = 8
s_3  = ****************11111111****************************************; c(s_3) = 8
s_4  = ************************11111111********************************; c(s_4) = 8
s_5  = ********************************11111111************************; c(s_5) = 8
s_6  = ****************************************11111111****************; c(s_6) = 8
s_7  = ************************************************11111111********; c(s_7) = 8
s_8  = ********************************************************11111111; c(s_8) = 8
s_9  = 1111111111111111************************************************; c(s_9) = 16
s_10 = ****************1111111111111111********************************; c(s_10) = 16
s_11 = ********************************1111111111111111****************; c(s_11) = 16
s_12 = ************************************************1111111111111111; c(s_12) = 16
s_13 = 11111111111111111111111111111111********************************; c(s_13) = 32
s_14 = ********************************11111111111111111111111111111111; c(s_14) = 32
s_15 = 1111111111111111111111111111111111111111111111111111111111111111; c(s_15) = 64
Listing 50.1: An example Royal Road function.
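A sketch of Equation 50.12 with c(s) = order(s), demonstrated on a shortened 16-bit variant of the schema set (the strings below are illustrative, not the original 64-bit schemas of Listing 50.1):

```python
def matches(schema, x):
    """sigma(s, x): True if the bit string x is an instance of schema s,
    i.e., x agrees with s at every non-'*' position."""
    return all(s == '*' or s == c for s, c in zip(schema, x))

def royal_road(x, schemas):
    """Equation 50.12 with c(s) = order(s), the number of defined
    (non-'*') positions of the schema."""
    return sum(sum(c != '*' for c in s) for s in schemas if matches(s, x))

# Two order-8 blocks plus the full order-16 string, on 16 bits:
schemas = ['11111111********', '********11111111', '1111111111111111']
assert royal_road('1111111111111111', schemas) == 8 + 8 + 16
assert royal_road('1111111100000000', schemas) == 8
```

As in the original functions, fitness only rises when a complete block of ones appears, so intermediate single-bit improvements inside a block go unrewarded.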
By the way, from this example we can easily see that a fraction of all mutation and crossover
operations applied to most of the candidate solutions will fall into the don't care areas. Such
modifications will not yield any fitness change and therefore are neutral.
The Royal Road functions provide certain, predefined stepping stones (i. e., building
blocks) which (theoretically) can be combined by the Genetic Algorithm to successively
create schemas of higher fitness and order. Mitchell et al. [1913] performed several tests
with their Royal Road functions. These tests revealed or confirmed that
1. Crossover is a useful reproduction operation in this scenario. Genetic Algorithms
which apply this operation clearly outperform Hill Climbing approaches solely based
on mutation.
2. In the spirit of the Building Block Hypothesis, one would expect that the intermediate
steps (for instance order 32 and 16) of the Royal Road functions would help the Genetic
Algorithm to reach the optimum. The experiments of Mitchell et al. [1913] showed the
exact opposite: leaving them away speeds up the evolution significantly. The reason
is that the fitness difference between the intermediate steps and the low-order schemas is
high enough that the first instance of them will lead the GA to converge to it and wipe
out the low-order schemas. The other parts of this intermediate solution play no role
and may allow many zeros to hitchhike along.
Especially this last point gives us another insight on how we should construct genomes: the
fitness of combinations of good low-order schemas should not be too high so that other good
low-order schemas do not go extinct when they emerge. Otherwise, the phenomenon of domino
convergence researched by Rudnick [2347] and outlined in Section 13.2.3 and Section 50.2.5.2
may occur.
50.2.4.1 Variable-Length Representation
The original Royal Road problems can be defined for binary string genomes of any given
length n, as long as n is fixed. A Royal Road benchmark for variable-length genomes has
been defined by Platel et al. [2185].
The problem space X_Σ of the VLR (variable-length representation) Royal Road problem
is based on an alphabet Σ with N = |Σ| letters. The fitness of an individual x ∈ X_Σ is
determined by whether or not consecutive building blocks of the length b of the letters l ∈ Σ
are present. This presence can be defined as

B_b(x, l) = 1 if ∃i : (0 ≤ i ≤ len(x) − b) ∧ (x[i+j] = l ∀j : 0 ≤ j ≤ b−1)
            0 otherwise        (50.14)
where
1. b ≥ 1 is the length of the building blocks,
2. Σ is the alphabet with N = |Σ| letters,
3. l is a letter in Σ,
4. x ∈ X_Σ is a candidate solution, and
5. x[k] is the k-th locus of x.
B_b(x, l) is 1 if a building block, an uninterrupted sequence of the letter l, of at least length
b, is present in x. Of course, if len(x) < b this cannot be the case and B_b(x, l) will be zero.
We can now define the functional objective function f_b : X_Σ → [0, 1], which is subject
to maximization, as

f_b(x) = (1/N) · Σ_{i=1}^{N} B_b(x, Σ[i])    (50.15)
An optimal individual x* solving the VLR Royal Road problem is thus a string that includes
building blocks of length b for all letters l ∈ Σ. Notice that the position of these blocks plays
no role. The set X*_b of all such optima with f_b(x*) = 1 is then

X*_b = {x* : B_b(x*, l) = 1 ∀l ∈ Σ}    (50.16)

Such an optimum x* for b = 3 and Σ = {A, T, G, C} is

x* = AAAGTGGGTAATTTTCCCTCCC    (50.17)
The relevant building blocks of x* are written in bold face. As can easily be seen, their
location plays no role, only their presence is important. Furthermore, multiple occurrences of
building blocks (like the second CCC) do not contribute to the fitness. The fitness landscape
has been designed in a way ensuring that fitness degeneration by crossover can only occur
if the crossover points are located inside building blocks, and not by block translocation or
concatenation. In other words, there is no inter-block epistasis.
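Since a building block is simply a run of b equal letters, B_b and f_b from Equations 50.14 and 50.15 reduce to a substring test; a minimal sketch (names are ours):

```python
def block_present(x, letter, b):
    """B_b(x, l): 1 if x contains a run of at least b consecutive
    copies of `letter` (Equation 50.14)."""
    return 1 if letter * b in x else 0

def f_b(x, alphabet, b):
    """Equation 50.15: the fraction of alphabet letters whose length-b
    building block is present in x."""
    return sum(block_present(x, l, b) for l in alphabet) / len(alphabet)

x_star = "AAAGTGGGTAATTTTCCCTCCC"
assert f_b(x_star, "ATGC", 3) == 1.0      # the optimum of Equation 50.17
assert f_b("AAAT", "ATGC", 3) == 0.25     # only the AAA block is present
```

The substring test `letter * b in x` matches runs of length b or more, which mirrors the "at least length b" condition; block positions and repetitions are, as stated above, irrelevant.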
50.2.4.2 Epistatic Road
Platel et al. [2186] combined their previous work on the VLR Royal Road with Kauffman's
NK landscapes and introduced the Epistatic Road. The original NK landscape works on a
binary representation of the fixed length N. To each locus i in the representation, one
fitness function f_i is assigned denoting its contribution to the overall fitness. f_i however is
not exclusively computed using the allele at the i-th locus but also depends on the alleles of
K other loci, its neighbors.
The VLR Royal Road uses a genome based on the alphabet Σ with N = len(Σ) letters.
It defines the function B_b(x, l) which returns 1 if a building block of length b containing only
the character l is present in x and 0 otherwise. Because of the fixed size of the alphabet
Σ, there exist exactly N such functions. Hence, the variable-length representation can be
translated to a fixed-length, binary one by simply concatenating them:

B_b(x, Σ[0]) B_b(x, Σ[1]) . . . B_b(x, Σ[N−1])    (50.18)

Now we can define a NK landscape for the Epistatic Road by substituting the B_b(x, l)
into Equation 50.1 on page 554:

F_{N,K,b}(x) = (1/N) · Σ_{i=0}^{N−1} f_i(B_b(x, Σ[i]), B_b(x, Σ[i_1]), . . . , B_b(x, Σ[i_K]))    (50.19)

The only thing left is to ensure that the end of the road, i. e., the presence of all N building
blocks, also is the optimum of F_{N,K,b}. This is done by exhaustively searching the space B^N
and defining the f_i in a way that B_b(x, l) = 1 ∀l ∈ Σ ⇔ F_{N,K,b}(x) = 1.
50.2.4.3 Royal Trees
An analogue of the Royal Road for Genetic Programming has been specified by Punch et al.
[2227]. This Royal Tree problem specifies a series of functions A, B, C, . . . with increasing
arity, i. e., A has one argument, B has two arguments, C has three, and so on. Additionally,
a set of terminal nodes {x, y, z} is defined.
Figure 50.2: The perfect Royal Trees. Fig. 50.2.a shows the perfect A-level tree, Fig. 50.2.b the perfect B-level tree, and Fig. 50.2.c the perfect C-level tree.
For the first three levels, the perfect trees are shown in Figure 50.2. An optimal A-level
tree consists of an A node with an x leaf attached to it. The perfect level-B tree has a B as
root with two perfect level-A trees as children. A node labeled with C having three children
which all are optimal B-level trees is the optimum at C-level, and so on.
The objective function, subject to maximization, is computed recursively. The raw fitness
of a node is the weighted sum of the fitness of its children. If the child is a perfect tree at the
appropriate level, a perfect C tree beneath a D-node, for instance, its fitness is multiplied
with the constant FullBonus, which normally has the value 2. If the child is not a perfect
tree, but has the correct root, the weight is PartialBonus (usually 1). If it is otherwise
incorrect, its fitness is multiplied with Penalty, which is 1/3 per default. If the whole tree is
a perfect tree, its raw fitness is finally multiplied with CompleteBonus, which normally is
also 2. The value of an x leaf is 1.
From Punch et al. [2227], we can furthermore borrow three examples for this fitness
assignment and outline them in Figure 50.3. A tree which represents a perfect A level has
the score of CompleteBonus · FullBonus · 1 = 2·2·1 = 4. A complete and perfect tree at
level B receives CompleteBonus·(FullBonus·4 + FullBonus·4) = 2·(2·4 + 2·4) = 32. At
level C, this makes CompleteBonus·(FullBonus·32 + FullBonus·32 + FullBonus·32) =
2·(2·32 + 2·32 + 2·32) = 384.
Figure 50.3: Example fitness evaluation of Royal Trees. Fig. 50.3.a shows a perfect C-level tree with fitness 2·(2·32 + 2·32 + 2·32) = 384; in Fig. 50.3.b, one B-subtree has two x leaves instead of perfect A-trees, giving 2·32 + 2·32 + (2/3)·1 = 128 2/3; in Fig. 50.3.c, a C node with one perfect B-subtree and two x leaves scores 2·32 + (1/3)·1 + (1/3)·1 = 64 2/3.
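The recursive scoring can be sketched as follows, representing trees as 'x' leaves or (label, children) tuples; this encoding and the helper names are our own assumptions, while the bonus constants are those given above:

```python
# Constants as given in the text (FullBonus, PartialBonus, Penalty,
# CompleteBonus); the tree encoding below is our own illustration.
FULL_BONUS, PARTIAL_BONUS, PENALTY, COMPLETE_BONUS = 2.0, 1.0, 1.0 / 3.0, 2.0

def level(t):
    """'x' has level 0, an A node level 1, B level 2, and so on."""
    return 0 if t == 'x' else ord(t[0]) - ord('A') + 1

def perfect(t):
    """A level-d node is perfect if it has d children which are all
    perfect trees of level d-1; an x leaf is perfect."""
    if t == 'x':
        return True
    _, children = t
    return len(children) == level(t) and all(
        level(c) == level(t) - 1 and perfect(c) for c in children)

def fitness(t):
    """Weighted sum over children; CompleteBonus applies to every
    perfect (sub)tree, matching the worked examples above."""
    if t == 'x':
        return 1.0
    _, children = t
    total = 0.0
    for c in children:
        if level(c) == level(t) - 1:
            w = FULL_BONUS if perfect(c) else PARTIAL_BONUS
        else:
            w = PENALTY  # child of entirely wrong level
        total += w * fitness(c)
    return COMPLETE_BONUS * total if perfect(t) else total

A = ('A', ['x'])
B = ('B', [A, A])
C = ('C', [B, B, B])
assert fitness(A) == 4.0
assert fitness(B) == 32.0
assert fitness(C) == 384.0
```

Note the reading adopted here: CompleteBonus is applied recursively to every perfect subtree, which is what reproduces the published example scores 4, 32, and 384.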
50.2.4.4 Other Derived Problems
Storch and Wegener [2616, 2617, 2618] used their Real Royal Road for showing that there
exist problems where crossover helps improving the performance of Evolutionary Algorithms.
Naudts et al. [2009] have contributed generalized Royal Road functions in order
to study epistasis.
50.2.5 OneMax and BinInt
The OneMax and BinInt problems are two very simple model problems for measuring the
convergence of Genetic Algorithms.
50.2.5.1 The OneMax Problem
The task in the OneMax (or BitCount) problem is to find a binary string of length n
consisting of all ones. The search and problem space are both the fixed-length bit strings
G = X = B^n. Each gene (bit) has two alleles 0 and 1 which also contribute exactly this
value to the total fitness, i. e.,

f(x) = Σ_{i=0}^{n−1} x[i],  x ∈ X    (50.20)

For the OneMax problem, an extensive body of research has been provided by Ackley
[19], Bäck [165], Blickle and Thiele [338], Miller and Goldberg [1898], Mühlenbein and
Schlierkamp-Voosen [1975], Thierens and Goldberg [2697], and Wilson and Kaur [2947].
50.2.5.2 The BinInt Problem
The BinInt problem devised by Rudnick [2347] also uses the bit strings of the length n as
search and problem space (G = X = B^n). It is something like a perverted version of the
OneMax problem, with the objective function defined as

f(x) = Σ_{i=0}^{n−1} 2^{n−i−1} · x[i],  x[i] ∈ {0, 1} ∀i ∈ [0..n−1]    (50.21)

Since the bit at index i has a higher contribution to the fitness than all other bits at higher
indices together, the comparison between two candidate solutions x_1 and x_2 is won by the
lexicographically bigger one. Thierens et al. [2698] give the example x_1 = (1, 1, 1, 1, 0, 0, 0, 0)
and x_2 = (1, 1, 0, 0, 1, 1, 0, 0), where the first deviating bit (at index 2) fully
determines the outcome of the comparison of the two.
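Both objective functions are one-liners; the sketch below (names are ours) also reproduces the comparison example of Thierens et al.:

```python
def onemax(x):
    """Equation 50.20: the number of ones in the bit tuple x."""
    return sum(x)

def binint(x):
    """Equation 50.21: bit i contributes 2^(n-i-1), so every bit
    outweighs all bits at higher indices together, making the
    comparison lexicographic."""
    n = len(x)
    return sum(2 ** (n - i - 1) * x[i] for i in range(n))

x1 = (1, 1, 1, 1, 0, 0, 0, 0)
x2 = (1, 1, 0, 0, 1, 1, 0, 0)
assert onemax(x1) == onemax(x2) == 4       # equal under OneMax
assert binint(x1) > binint(x2)             # decided at the first deviating bit
```

The same pair of strings thus ties under OneMax but is strictly ordered under BinInt, which is precisely the salience skew that drives domino convergence.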
We can expect that the bits with high contribution (high salience) will converge quickly
whereas the other genes with lower salience are only pressured by selection when all others
have already fully converged. Rudnick [2347] called this sequential convergence
phenomenon domino convergence due to its resemblance with a row of falling domino
stones [2698] (see Section 13.2.3). Generally, he showed that first, the highly salient genes
converge (i. e., take on the correct values in the majority of the population). Then, step
by step, the building blocks of lower significance can converge, too. Another result of Rud-
nick's work is that mutation may stall the convergence because it can disturb the highly
significant genes, which then counters the effect of the selection pressure on the less salient
ones. Then, it becomes much less likely that the majority of the population will have the
best alleles in these genes. This somehow dovetails with the idea of error thresholds from
theoretical biology [867, 2057] which we have mentioned in Section 14.3. It also explains
some of the experimental results obtained with the Royal Road problem from Section 50.2.4.
The BinInt problem was used in the studies of Sastry and Goldberg [2401, 2402].
One of the maybe most important conclusions from the behavior of GAs applied to the
BinInt problem is that applying a Genetic Algorithm to solve a numerical problem (X ⊆ R^n)
whilst encoding the candidate solutions binary (G ⊆ B^n) in a straightforward manner will
likely produce suboptimal solutions. Schraudolph and Belew [2429], for instance, recognized
this problem and suggested a Dynamic Parameter Encoding (DPE) where the values of the
genes of the bit string are readjusted over time: Initially, optimization takes place on a rather
coarse-grained scale and after the optimum on this scale is approximated, the focus is shifted to a
finer interval and the genotypes are re-encoded to fit into this interval. In their experiments,
this method works better than the direct encoding.
50.2.6 Long Path Problems
The long path problems have been designed by Horn et al. [1267] in order to construct a
unimodal, non-deceptive problem without noise which Hill Climbing algorithms still can
only solve in exponential time. The idea is to wind a path with increasing fitness through
the search space so that any two adjacent points on the path are no further away than one
search step and any two points not adjacent on the path are away at least two search steps.
All points which are not on the path should guide the search to its origin.
The problem space and search space in their concrete realization is the space of the
binary strings G = X = B^l of the fixed, odd length l. The objective function f_lp(x) is
subject to maximization. It is furthermore assumed that the search operations in Hill
Climbing algorithms alter at most one bit per search step, from which we can follow that
two adjacent points on the path have a Hamming distance of one and two non-adjacent
points differ in at least two bits.
The simplest instance of the long path problems that Horn et al. [1267] define is the
Root2path P_l. Paths of this type are constructed by iteratively increasing the search space
dimension. Starting with P_1 = (0, 1), the path P_{l+2} is constructed from two copies
P^a_l = P^b_l of the path P_l as follows. First, we prepend 00 to all elements of the path P^a_l and 11 to
all elements of the path P^b_l. For l = 1 this makes P^a_1 = (000, 001) and P^b_1 = (110, 111).
Obviously, two elements on P^a_l or on P^b_l still have a Hamming distance of one whereas
each element from P^a_l differs in at least two bits from each element on P^b_l. Then, a bridge
element B_l is created that equals the last element of P^a_l, but has 01 as the first two bits,
i. e., B_1 = 011. Now the sequence of the elements in P^b_l is reversed and P^a_l, B_l, and the
reversed P^b_l are concatenated. Hence, P_3 = (000, 001, 011, 111, 110). Due to this recursive
structure of the path construction, the path length increases exponentially with l (for odd
l):

len(P_{l+2}) = 2 · len(P_l) + 1    (50.22)

len(P_l) = 3 · 2^{(l−1)/2} − 1    (50.23)
The basic fitness of a candidate solution is 0 if it is not on the path and its (zero-based)
position on the path plus one if it is part of the path. The total number of points in the
space B^l is 2^l and thus, the fraction occupied by the path is approximately 3 * 2^(-(l+1)/2), i.e., it
decreases exponentially. In order to avoid that the long path problem becomes a needle-in-a-haystack
problem^7, Horn et al. [1267] assign to all off-path points x_o a fitness that leads the search
algorithm to the path's origin. Since the first point of the path is always the string 00...0
containing only zeros, subtracting the number of ones from l, i.e., f_lp(x_o) = l - count1(x_o),
is the method of choice. To all points on the path, l is added to the basic fitness, making
them superior to all other candidate solutions.

^7 See Section 16.7 for more information on needle-in-a-haystack problems.
564 50 BENCHMARKS
Figure 50.4: The Root2path for l = 3: the 3-bit hypercube with the objective values
f_lp(000) = 3, f_lp(001) = 4, f_lp(011) = 5, f_lp(111) = 6, f_lp(110) = 7 on the path and
f_lp(100) = 2, f_lp(010) = 2, f_lp(101) = 1 off the path.
P_1 = (0, 1)
P_3 = (000, 001, 011, 111, 110)
P_5 = (00000, 00001, 00011, 00111, 00110, 01110, 11110, 11111, 11011, 11001, 11000)
P_7 = (0000000, 0000001, 0000011, 0000111, 0000110, 0001110, 0011110, 0011111,
      0011011, 0011001, 0011000, 0111000, 1111000, 1111001, 1111011, 1111111,
      1111110, 1101110, 1100110, 1100111, 1100011, 1100001, 1100000)
P_9 = (000000000, 000000001, 000000011, 000000111, 000000110, 000001110, 000011110,
      000011111, 000011011, 000011001, 000011000, 000111000, 001111000, 001111001,
      001111011, 001111111, 001111110, 001101110, 001100110, 001100111, 001100011,
      001100001, 001100000, 011100000, 111100000, 111100001, 111100011, 111100111,
      111100110, 111101110, 111111110, 111111111, 111111011, 111111001, 111111000,
      110111000, 110011000, 110011001, 110011011, 110011111, 110011110, 110001110,
      110000110, 110000111, 110000011, 110000001, 110000000)
P_11 = (00000000000, 00000000001, 00000000011, 00000000111, 00000000110,
      00000001110, 00000011110, 00000011111, 00000011011, 00000011001,
      00000011000, 00000111000, 00001111000, 00001111001, 00001111011,
      00001111111, 00001111110, 00001101110, 00001100110, 00001100111,
      00001100011, 00001100001, 00001100000, 00011100000, 00111100000,
      00111100001, 00111100011, 00111100111, 00111100110, 00111101110,
      00111111110, 00111111111, 00111111011, 00111111001, 00111111000,
      00110111000, 00110011000, 00110011001, 00110011011, 00110011111,
      00110011110, 00110001110, 00110000110, 00110000111, 00110000011,
      00110000001, 00110000000, 01110000000, 11110000000, 11110000001,
      11110000011, 11110000111, 11110000110, 11110001110, 11110011110,
      11110011111, 11110011011, 11110011001, 11110011000, 11110111000,
      11111111000, 11111111001, 11111111011, 11111111111, 11111111110,
      11111101110, 11111100110, 11111100111, 11111100011, 11111100001,
      11111100000, 11011100000, 11001100000, 11001100001, 11001100011,
      11001100111, 11001100110, 11001101110, 11001111110, 11001111111,
      11001111011, 11001111001, 11001111000, 11000111000, 11000011000,
      11000011001, 11000011011, 11000011111, 11000011110, 11000001110,
      11000000110, 11000000111, 11000000011, 11000000001, 11000000000)
50.2. BIT-STRING BASED PROBLEM SPACES 565
Table 50.1: Some long Root2paths for l from 1 to 11 with underlined bridge elements.
Algorithm 50.1: r <- f_lp(x)

Input: x: the candidate solution with an odd length
Data: s: the current position in x
Data: sign: the sign of the next position
Data: isOnPath: true if and only if x is on the path
Output: r: the objective value

1  begin
2    sign <- 1
3    s <- len(x) - 1
4    r <- 0
5    isOnPath <- true
6    while (s >= 0) AND isOnPath do
7      if s = 0 then
8        if x[0] = 1 then r <- r + sign
9      else
10       sub <- subList(x, s - 1, 2)
11       if sub = 11 then
12         r <- r + sign * (3 * 2^(s/2) - 2)
13         sign <- -sign
14       else if sub != 00 then
15         if (x[s] = 0) AND (x[s-1] = 1) AND (x[s-2] = 1) then
16           if (s = 2) OR [(x[s-3] = 1) AND (count1(subList(x, 0, s - 3)) = 0)] then
17             r <- r + sign * (3 * 2^(s/2 - 1) - 1)
18           else isOnPath <- false
19         else isOnPath <- false
20     s <- s - 2
21   if isOnPath then r <- r + len(x)
22   else r <- len(x) - count1(x)
Some examples for the construction of Root2paths can be found in Table 50.1 and the
path for l = 3 is illustrated in Figure 50.4. In Algorithm 50.1 we try to outline how the
objective value of a candidate solution x can be computed online. Here, please notice two
things: First, this algorithm deviates from the one introduced by Horn et al. [1267]: we
tried to resolve the tail recursion and also added some minor changes. Another algorithm
for determining f_lp is given by Rudolph [2349]. The second thing to realize is that for small
l, we would not use the algorithm during the individual evaluation but rather a lookup
table. Each candidate solution could directly be used as index for this table which contains
the objective values. For l = 20, for example, a table with entries of the size of 4 B would
consume 4 MiB, which is acceptable on today's computers.
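Such a table can be built directly from the path definition. The sketch below (names ours) assigns on-path strings their zero-based path position plus l, following the values shown in Figure 50.4, and off-path strings l minus their number of ones:

```python
def flp_table(l):
    """Lookup table of objective values f_lp for all 2**l strings of odd length l."""
    p = ["0", "1"]                          # rebuild the Root2path P_l
    while len(p[0]) < l:
        p = (["00" + s for s in p] + ["01" + p[-1]]
             + ["11" + s for s in reversed(p)])
    pos = {s: k for k, s in enumerate(p)}   # zero-based path positions
    table = {}
    for x in range(2 ** l):
        bits = format(x, "0{}b".format(l))
        if bits in pos:
            table[x] = pos[bits] + l        # on-path: position plus l
        else:
            table[x] = l - bits.count("1")  # off-path: guide towards 00...0
    return table
```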
The experiments of Horn et al. [1267] showed that Hill Climbing methods which only
concentrate on sampling the neighborhood of the currently known best solution perform very
poorly on long path problems, whereas Genetic Algorithms which combine different candidate
solutions via crossover easily find the correct solution. Rudolph [2349] shows that a Genetic
Algorithm does so in polynomial expected time. He also extends this idea to long k-paths in
[2350]. Droste et al. [836] and Garnier and Kallel [1028] analyze this path and find that also
(1+1)-EAs can have exponential expected runtime on such unimodal functions. Höhn and
Reeves [1246] further elaborate on why long path problems are hard for Genetic Algorithms.
It should be mentioned that the Root2paths constructed according to the method described
in this section do not have the maximum length possible for long paths. Horn
et al. [1267] also introduce Fibonacci paths which are longer than the Root2paths. The
problem of finding maximum length paths in an l-dimensional hypercube is known as the
snake-in-the-box^8 problem [682, 1505], which was first described by Kautz [1505] in the late
1950s. It is a very hard problem suffering from combinatorial explosion and currently, maximum
snake lengths are only known for small values of l.
50.2.7 Tunable Model for Problematic Phenomena

What is a good model problem? Which model fits best to our purposes? These questions
should be asked whenever we apply a benchmark, whenever we want to use something for
testing the ability of a Global Optimization approach. The mathematical functions introduced
in ??, for instance, are good for testing special mathematical reproduction operations
like those used in Evolution Strategies and for testing the capability of an Evolutionary Algorithm
for estimating the Pareto frontier in multi-objective optimization. Kauffman's NK fitness
landscape (discussed in Section 50.2.1) was intended to be a tool for exploring the relation
of ruggedness and epistasis in fitness landscapes but can prove very useful for finding out
how capable a Global Optimization algorithm is to deal with problems exhibiting these
phenomena. In Section 50.2.4, we outlined the Royal Road functions, which were used to
investigate the ability of Genetic Algorithms to combine different useful formae and to test
the Building Block Hypothesis. The Artificial Ant (??) and the GCD problem from ?? are
tests for the ability of Genetic Programming to learn algorithms.
All these benchmarks and toy problems focus on specific aspects of Global Optimization
and will exhibit different degrees of the problematic properties of optimization problems to
which we had devoted Part II:
1. premature convergence and multimodality (Chapter 13),
2. ruggedness (Chapter 14),
3. deceptiveness (Chapter 15),
4. neutrality and redundancy (Chapter 16),
5. overfitting and oversimplification (Section 19.1), and
6. dynamically changing objective functions (Chapter 22).
With the exception of the NK fitness landscape, it remains unclear to which degree these
phenomena occur in the test problems. How much intrinsic epistasis does the Artificial Ant
or the GCD problem emit? What is the quantity of neutrality inherent in Royal Road for
variable-length representations? Are the mathematical test functions rugged and, if so, to
which degree? All these problems are useful test instances for Global Optimization. They have
not been designed to give us answers to questions like: Which fitness assignment process
can be useful when an optimization problem exhibits weak causality and thus has a rugged
fitness landscape? How does a certain selection algorithm influence the ability of a Genetic
Algorithm to deal with neutrality? Only Kauffman's NK landscape provides such answers,
but only for epistasis. By fine-tuning its N and K parameters, we can generate problems
with different degrees of epistasis. Applying a Genetic Algorithm to these problems then
allows us to draw conclusions on its expected performance when being fed with highly or lowly
epistatic real-world problems.
In this section, a new model problem is defined that exhibits ruggedness, epistasis, neutrality,
multi-objectivity, overfitting, and oversimplification features in a controllable manner
(Niemczyk [2037], Weise et al. [2910]). Each of them is introduced as a distinct filter component
which can separately be activated, deactivated, and fine-tuned. This model provides
a perfect test bed for optimization algorithms and their configuration settings. Based on a
rough estimation of the structure of the fitness landscape of a given problem, tests can be
run very fast using the model as a benchmark for the settings of an optimization algorithm.
Thus, we could, for instance, determine a priori whether increasing the population size of
an Evolutionary Algorithm over an approximated limit is likely to provide a gain.
With it, we also can evaluate the behavior of an optimization method in the presence of
various problematic aspects, like epistasis or neutrality. This way, strengths and weaknesses
of different evolutionary approaches can be explored in a systematic manner. Additionally,
it is also well suited for theoretical analysis because of its simplicity. The layers of the model,
sketched using an example in Figure 50.5, are specified in the following.

^8 http://en.wikipedia.org/wiki/Snake-in-the-box [accessed 2008-08-13]
Figure 50.5: An example for the fitness landscape model (box 1: the genotype g in G;
box 2: introduction of neutrality via u_2(g), mu = 2; box 3: introduction of epistasis via
e_4 and, for the insufficient bits at the end, e_2, eta = 4; box 4: multi-objectivity and
phenotype, m = 2, n = 6, with padding from the complement of x*; box 5: introduction of
overfitting with tc = 5 training cases, epsilon = 1, o = 1; box 6: introduction of ruggedness
with gamma = 57, i.e., gamma' = 34).
50.2.7.1 Model Definition

The basic optimization task in this model is to find a bit string x* of a predefined length
n = len(x*) consisting of alternating zeros and ones in the space of all possible bit strings
X = B*, as sketched in box 1 in Figure 50.5. The tuning parameter for the problem size is
n in N_1:

x* = 0101010101010...01 (50.24)

Although the model is based on finding optimal bit strings, it is not intended to be a
benchmark for Genetic Algorithms. Instead, the purpose of the model is to provide problems
with specific fitness landscape features to the optimization algorithm. An optimization algorithm
initially creates a set of random candidate solutions with a nullary creation operator.
It evaluates these solutions and selects promising solutions. To these individuals, unary and
binary reproduction operations are applied. New candidate solutions result, which subsequently
again undergo selection and reproduction. Notice that the nature of the nullary,
unary, and binary operations, and hence the data structure used to represent the candidate
solutions, is not important from the perspective of the optimization algorithm. The
algorithm makes its selection decisions purely based on the obtained objective values and
may also use this information to decide how often mutation and recombination should be
applied. In other words, the behavior of a general optimization algorithm, such as an EA,
is purely based on the fitness landscape features, and not on the precise data structures
representing genotypes and phenotypes.
From this perspective, a Genetic Algorithm is just an instantiation of an Evolutionary
Algorithm with search operations for bit strings. Genetic Programming is just an EA for
trees. If we can find a good strategy with which a Genetic Algorithm can solve a highly
epistatic problem and this strategy does not involve changes inside the reproduction operators,
then this strategy is likely to hold for problems in Genetic Programming as well.
Hence, our benchmark model, though working on bit-string based genomes, is well suitable
for providing insights which are helpful for other search spaces as well. It was used,
for example, in order to find good setups for large-scale experimental studies on GP, where
each experiment was very costly and direct parameter tuning was not possible [2888].
50.2.7.1.1 Overfitting and Oversimplification (box 5)

Searching this optimal string could be done by comparing each genotype g with x*. For this,
we would use the Hamming distance^9 [1173] dist_Ham(a, b), which defines the difference
between two bit strings a and b of equal length as the number of bits in which they differ
(see ??).
Instead of doing this directly, we test the candidate solution against tc training samples
T_1, T_2, .., T_tc. These samples are modified versions of the perfect string x*.
As outlined in Section 19.1 on page 191, we can distinguish between overfitting and
oversimplification. The latter is often caused by incompleteness of the training cases and
the former can originate from noise in the training data. Both forms can be expressed in
terms of our model by the objective function f_{epsilon,o,tc} (based on a slightly extended version of
the Hamming distance dist*_Ham) which is subject to minimization:

dist*_Ham(a, b) = |{i : (a[i] != b[i]) AND (b[i] != *) AND (0 <= i < |a|)}| (50.25)

f_{epsilon,o,tc}(x) = sum_{i=1}^{tc} dist*_Ham(x, T_i),  f_{epsilon,o,tc}(x) in [0, f^] for all x in X (50.26)

In the case of oversimplification, the perfect solution x* will always reach a perfect score in
all training cases. There may be incorrect solutions reaching this value in some cases too,
because some of the facets of the problem are hidden. We take this into consideration by
placing o don't care symbols (*) uniformly distributed into the training cases. The values of
the candidate solutions at their loci have no influence on the fitness.
When overfitting is enabled, the perfect solution will not reach the optimal score in any
training case because of the noise present. Incorrect solutions may score better in some
cases and even outperform the real solution if the noise level is high. Noise is introduced
in the training cases by toggling epsilon of the remaining n - o bits, again following a uniform
distribution. An optimization algorithm can find a correct solution only if there are more
training samples with correctly defined values for each locus than with wrong or don't care
values.
The optimal objective value is zero and the maximum f^ of f_{epsilon,o,tc} is limited by the upper
boundary f^ <= (n - o) tc. Its exact value depends on the training cases. For each bit index i,
we have to take into account whether a zero or a one in the phenotype would create the larger
error:

count(i, val) = |{j in 1..tc : T_j[i] = val}| (50.27)

e(i) = count(i, 1) if count(i, 1) >= count(i, 0), count(i, 0) otherwise (50.28)

f^ = sum_{i=1}^{n} e(i) (50.29)

This process is illustrated as box 5 of Figure 50.5.

^9 ?? on page ?? includes the specification of the Hamming distance.
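Equations 50.25 and 50.26 can be sketched minimally in Python (function names ours; training cases are strings over the alphabet {0, 1, *}):

```python
def dist_ham_star(a, b):
    """Extended Hamming distance: positions where b holds '*' are ignored."""
    return sum(1 for x, y in zip(a, b) if y != "*" and x != y)

def f_eps_o_tc(x, training_cases):
    """Objective of Eq. 50.26: sum of extended Hamming distances over all cases."""
    return sum(dist_ham_star(x, t) for t in training_cases)
```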
50.2.7.1.2 Neutrality (box 2)

We can create a well-defined amount of neutrality during the genotype-phenotype mapping
by applying a transformation u_mu that shortens the candidate solutions by a factor mu, as
sketched in box 2 of Figure 50.5. The i-th bit in u_mu(g) is defined as 0 if and only if the
majority of the mu bits starting at locus i*mu in g is also 0, and as 1 otherwise. The default
value 1 set in draw situations has (on average) no effect on the fitness since the target solution
x* is defined as a sequence of alternating zeros and ones. If the length of a genotype g is
not a multiple of mu, the remaining len(g) mod mu bits are ignored. The tunable parameter
for the neutrality in our model is mu. If mu is set to 1, no additional neutrality is modeled.
It should be noted that, since our objective functions are based on the Hamming distance,
the number of possible objective values is always lower than the number of possible genotypes.
Therefore, there always exists some inherent neutrality in the model in addition to the one
modeled with the redundancy reduction.
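The reduction u_mu can be sketched as follows (Python, name ours); ties default to 1 and trailing bits are dropped:

```python
def u(mu, g):
    """Majority-vote redundancy reduction: one output bit per mu genotype bits."""
    n = len(g) - len(g) % mu             # ignore the trailing len(g) mod mu bits
    out = []
    for i in range(0, n, mu):
        block = g[i:i + mu]
        # '0' only on a strict majority of zeros; draws default to '1'
        out.append("0" if block.count("0") > mu / 2 else "1")
    return "".join(out)
```

Applied with mu = 2 to the example genotype of Figure 50.5, this yields the ten-bit string 1011001110.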
50.2.7.1.3 Epistasis (box 3)

Epistasis in general means that a slight change in one gene of a genotype influences some
other genes. We can introduce epistasis in our model as part of the genotype-phenotype mapping and
apply it after the neutrality transformation. We therefore define a bijective function e_eta that
translates a bit string z of length eta to a bit string e_eta(z) of the same length. Assume we have
two bit strings z_1 and z_2 which differ only in one single locus, i.e., their Hamming distance
is one. e_eta introduces epistasis by exhibiting the following property:

dist_Ham(z_1, z_2) = 1 ==> dist_Ham(e_eta(z_1), e_eta(z_2)) >= eta - 1  for all z_1, z_2 in B^eta (50.30)

The meaning of Equation 50.30 is that a change of one bit in a genotype g leads to the change
of at least eta - 1 bits in the corresponding mapping e_eta(z). This, as well as the demand for
bijectivity, is provided if we define e_eta as follows:

e_eta(z)[i] = XOR_{j : 0 <= j < eta, j != (i-1) mod eta} z[j]   if 0 <= z < 2^(eta-1), i.e., z[eta-1] = 0
e_eta(z)[i] = NOT e_eta(z - 2^(eta-1))[i]                       otherwise (50.31)

In other words, for all strings z in B^eta which have the most significant bit (MSB) not set,
the e_eta transformation is performed bitwise. The i-th bit in e_eta(z) equals the exclusive-or
combination of all but one bit in z. Hence, each bit in z influences the value of eta - 1 bits in
e_eta(z). For all z with 1 in the MSB, e_eta(z) is simply set to the negated e_eta transformation of z
with the MSB cleared (the value of the MSB is 2^(eta-1)). This division in e_eta is needed in order
to ensure its bijectiveness. This and the compliance with Equation 50.30 can be shown with
a rather lengthy proof omitted here.
In order to introduce this model of epistasis in genotypes of arbitrary length, we divide
them into blocks of the length eta and transform each of them separately with e_eta. If the length
of a given genotype g is no multiple of eta, the remaining len(g) mod eta bits at the end will be
transformed with the function e_{len(g) mod eta} instead of e_eta, as outlined in box 3 of Figure 50.5.
It may also be an interesting fact that the e_eta transformations are a special case of the NK
landscape discussed in Section 50.2.1 with N = eta and K = eta - 2.
  z     e_4(z)       z     e_4(z)       z     e_4(z)
 0000   0000        0011   0110        0111   0001
 0001   1101        0101   1010        1011   1001
 0010   1011        0110   1100        1101   0101
 0100   0111        1001   0010        1110   0011
 1000   1111        1010   0100        1111   1110
                    1100   1000

Figure 50.6: An example for the epistasis mapping z -> e_4(z).
The tunable parameter for the epistasis is eta, which ranges from 2 to n*m, the product of the
basic problem length n and the number of objectives m (see next section). If it is set to a
value smaller than 3, no additional epistasis is introduced. Figure 50.6 outlines the mapping
for eta = 4.
Again it should be noted that, besides the explicit epistasis introduced via this transformation,
implicit epistasis can occur through the neutrality and ruggedness (see ??) mappings.
It is not possible to study these three effects completely separately, but with our
model, well-dosed degrees of epistasis, neutrality, and ruggedness can be generated separately
or jointly.
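Equation 50.31 can be sketched in Python on the integer representation, with bit i of z read as (z >> i) & 1 (function name ours):

```python
def e(eta, z):
    """Epistasis mapping of Eq. 50.31 on integers 0 <= z < 2**eta."""
    msb = 1 << (eta - 1)
    if z & msb:                          # MSB set: negate the mapping of z - 2^(eta-1)
        return e(eta, z ^ msb) ^ ((1 << eta) - 1)
    out = 0
    for i in range(eta):
        skip = (i - 1) % eta             # bit i is the XOR of all bits but (i-1) mod eta
        bit = 0
        for j in range(eta):
            if j != skip:
                bit ^= (z >> j) & 1
        out |= bit << i
    return out
```

For eta = 4 this reproduces Figure 50.6, e.g. e_4(0001) = 1101, and one can check that the mapping is bijective and that flipping one bit of z flips at least eta - 1 bits of e_eta(z).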
50.2.7.1.4 Multi-Objectivity (box 4)

A multi-objective problem with m criteria can easily be created by interleaving m instances
of the benchmark problem with each other and introducing separate objective functions
for each of them. Instead of just dividing the genotype g into m blocks, each standing for
one objective, we scatter the objectives as illustrated in box 4 of Figure 50.5. The bits for
the first objective function comprise x_1 = (g[0], g[m], g[2m], ...), those used by the second
objective function are x_2 = (g[1], g[m+1], g[2m+1], ...). Notice that no bit in g is used by
more than one objective function. Superfluous bits (beyond index nm - 1) are ignored. If g
is too short, the missing bits in the phenotypes are replaced with the complement of the
corresponding bit of x*, i.e., if one objective misses the last bit (index n - 1), it is padded
with the complement of x*[n - 1], which will worsen the objective by 1 on average.
Because of the interleaving, the objectives will begin to conflict if epistasis (eta > 2) is
applied, similar to NK landscapes. Changing one bit in the genotype will change the outcome
of at most min{eta, m} objectives. Some of them may improve while others may worsen.
A non-functional objective function minimizing the length of the genotypes is added if
variable-length genomes are used during the evolution. If fixed-length genomes are used,
they can be designed in a way that the blocks for the single objectives always have the right
length.
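The de-interleaving together with the complement padding can be sketched as follows (Python, names ours):

```python
def split_objectives(g, m, n, x_star):
    """Extract the m interleaved phenotypes of length n from genotype g;
    missing bits are padded with the complement of the perfect string x_star."""
    xs = []
    for k in range(m):
        bits = list(g[k::m][:n])          # bits k, k+m, k+2m, ...; extra bits ignored
        while len(bits) < n:              # pad short phenotypes bit by bit
            bits.append("1" if x_star[len(bits)] == "0" else "0")
        xs.append("".join(bits))
    return xs
```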
50.2. BIT-STRING BASED PROBLEM SPACES 571
50.2.7.1.5 Ruggedness (box 6)

There are two (possibly interacting) sources of ruggedness in a fitness landscape. The first
one, epistasis, has already been modeled and discussed. The other source concerns the
objective functions themselves, the nature of the problem. We will introduce this type
of ruggedness a posteriori by artificially lowering the causality of the problem space. We
therefore shuffle the objective values with a permutation r : [0, f^] -> [0, f^], where f^ is the
abbreviation for the maximum possible objective value, as defined in Equation 50.29.
Before we do that, let us shortly outline what makes a function rugged. Ruggedness
is obviously the opposite of smoothness and causality. In a smooth objective function, the
objective values of candidate solutions neighboring in problem space are neighboring as well.
In our original problem with o = 0, epsilon = 0, and tc = 1, for instance, two individuals differing
in one bit will also differ by one in their objective values. We can write down the list of
objective values the candidate solutions will take on when they are stepwise improved from
the worst to the best possible configuration as (f^, f^ - 1, .., 2, 1, 0). If we exchange two of the
values in this list, we will create some artificial ruggedness. A measure for the ruggedness
of such a permutation r is Delta(r):

Delta(r) = sum_{i=0}^{f^ - 1} |r[i] - r[i+1]| (50.32)
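The measure Delta(r) of Equation 50.32 is simply the total variation of the value sequence (Python, name ours):

```python
def delta(r):
    """Ruggedness measure of a permutation r of the objective values 0..f_hat."""
    return sum(abs(r[i] - r[i + 1]) for i in range(len(r) - 1))
```

The identity (0, 1, .., f^) attains the minimum Delta = f^, while a maximally folded permutation such as (0, 5, 1, 4, 2, 3) for f^ = 5 attains the maximum f^(f^ + 1)/2 = 15.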
The original sequence of objective values has the minimum value Delta_min = f^ and the maximum
possible value is Delta_max = f^(f^ + 1)/2. There exists at least one permutation for each value in
Delta_min..Delta_max. We can hence define the permutation r_gamma which is applied after the objective values
are computed and which has the following features:
1. It is bijective (since it is a permutation).
2. It must preserve the optimal value, i.e., r_gamma[0] = 0.
3. Delta(r_gamma) = Delta_min + gamma.
With gamma in [0, Delta_max - Delta_min], we can fine-tune the ruggedness. For gamma = 0, no ruggedness
is introduced. For a given f^, we can compute the permutations r_gamma with the procedure
buildRPermutation(gamma, f^) defined in Algorithm 50.2.
gamma = 0:  r_0, identity,  Delta(r_0) = 5      gamma = 6:  r_6, deceptive, Delta(r_6) = 11
gamma = 1:  r_1, rugged,    Delta(r_1) = 6      gamma = 7:  r_7, deceptive, Delta(r_7) = 12
gamma = 2:  r_2, rugged,    Delta(r_2) = 7      gamma = 8:  r_8, rugged,    Delta(r_8) = 13
gamma = 3:  r_3, rugged,    Delta(r_3) = 8      gamma = 9:  r_9, rugged,    Delta(r_9) = 14
gamma = 4:  r_4, rugged,    Delta(r_4) = 9      gamma = 10: r_10, rugged,   Delta(r_10) = 15
gamma = 5:  r_5, deceptive, Delta(r_5) = 10

Figure 50.7: An example for r_gamma with gamma = 0..10 and f^ = 5.
Algorithm 50.2: r_gamma <- buildRPermutation(gamma, f^)

Input: gamma: the gamma value
Input: f^: the maximum objective value
Data: i, j, d, tmp: temporary variables
Data: k, start, r: parameters of the subalgorithm
Output: r_gamma: the permutation r_gamma

1  begin
2    Subalgorithm r <- permutate(k, r, start)
3    begin
4      if k > 0 then
5        if k <= (f^ - start) then
6          r <- permutate(k - 1, r, start)
7          tmp <- r[f^]
8          r[f^] <- r[f^ - k]
9          r[f^ - k] <- tmp
10       else
11         i <- floor((start + 1)/2)
12         if (start mod 2) = 0 then
13           i <- f^ + 1 - i
14           d <- -1
15         else
16           d <- 1
17         for j <- start up to f^ do
18           r[j] <- i
19           i <- i + d
20         r <- permutate(k - f^ + start, r, start + 1)
21   r <- (0, 1, 2, .., f^ - 1, f^)
22   return permutate(gamma, r, 1)
Figure 50.7 outlines all ruggedness permutations r_gamma for an objective function which can
range from 0 to f^ = 5. As can be seen, the permutations scramble the objective function more
and more with rising gamma and reduce its gradient information. The ruggedness transformation
is sketched in box 6 of Figure 50.5.
50.2.7.2 Experimental Validation

In this section, we will use a selection of the experimental results obtained with our model in
order to validate the correctness of the approach.^10 In our experiments, we applied variable-length
bit string genomes (see Section 29.4) with lengths between 1 and 8000 bits. We apply
a plain Genetic Algorithm to the default benchmark function set f = {f_{epsilon,o,tc}, f_nf}, where
f_nf is the non-functional length criterion f_nf(x) = len(x). Pareto ranking (Section 28.3.3) is
used for fitness assignment and Tournament selection (see Section 28.4.4) with a tournament
size k = 5 for selection. In a population of ps = 1000 individuals, we perform single-point
crossover at a crossover rate of cr = 80% and apply single-bit mutation to mr = 20% of the
genotypes. The genotypes are mapped to phenotypes as discussed in Section 50.2.7.1. Each
run performs exactly maxt = 1001 generations. For each of the experiment-specific settings
discussed later, at least 50 runs have been performed.
50.2.7.2.1 Basic Complexity

In the experiments, we distinguish between success and perfection. Success means finding
individuals x of optimal functional fitness, i.e., f_{epsilon,o,tc}(x) = 0. Multiple such successful
strings may exist, since superfluous bits at the end of genotypes do not influence their
functional objective. We will refer to the number of generations needed to find a successful
individual as success generations. The perfect string x* has no useless bits; it is the shortest
possible solution with f_{epsilon,o,tc} = 0 and, hence, also optimal in the non-functional length
criterion. In our experiments, we measure:
1. the fraction s/r of experimental runs that turned out successful,
2. the minimum number st_min of generations needed by the fastest (successful) experimental run to
find a successful individual,
3. the average number st_avg of generations needed by the (successful) experimental runs to
find a successful individual,
4. the maximum number st_max of generations needed by the slowest (successful) experimental run to
find a successful individual, and
5. the average number pt_avg of generations needed by the experimental runs to find a perfect,
neither overfitted nor oversimplified, individual.
In Figure 50.8, we have computed the minimum, average, and maximum number of the
success generations (st_min, st_avg, and st_max) for values of n ranging from 8 to 800. As illustrated, the
problem hardness increases steadily with rising string length n. Trimming down the solution
strings to the perfect length becomes more and more complicated with growing n. This is
likely because the fraction at the end of the strings where the trimming is to be performed
shrinks in comparison with the overall length.
50.2.7.2.2 Ruggedness

In Figure 50.9, we plotted the average success generations st_avg with n = 80 and different
ruggedness settings gamma. Interestingly, the gray original curve behaves very strangely. It is
divided into alternating solvable and unsolvable^11 problems. The unsolvable ranges of

^10 More experimental results and more elaborate discussions can be found in the bachelor's thesis of
Niemczyk [2037].
^11 We call a problem unsolvable if it has not been solved within 1000 generations.
Figure 50.8: The basic problem hardness: the minimum (st_min), average (st_avg), and
maximum (st_max) success generations and the average perfection generations (pt_avg)
over the problem size n.
Figure 50.9: Experimental results for the ruggedness: the average success generations st_avg
at n = 80 over the original gamma (gray, with deceptive gaps) and the rearranged gamma'
(black, deceptiveness/ruggedness fold).
correspond to gaps in the curve. With rising gamma, the solvable problems require more and
more generations until they are solved. After a certain (earlier) threshold value, the
unsolvable sections become solvable. From there on, they become simpler with rising gamma. At
some point, the two parts of the curve meet.
Algorithm 50.3: gamma <- translate(gamma~, f^)

Input: gamma~: the raw gamma value
Input: f^: the maximum value of f_{epsilon,o,tc}
Data: i, j, k, l: some temporary variables
Output: gamma: the translated gamma value

1  begin
2    l <- f^ (f^ - 1) / 2
3    i <- floor(f^/2) * floor((f^ + 1)/2)
4    if gamma~ <= i then
5      j <- ceil((f^ + 2)/2 - sqrt(f^^2/4 + 1 - gamma~))
6      k <- gamma~ - j (f^ + 2) + j^2 + f^
7      return k + 2 (j (f^ + 2) - j^2 - f^) - j
8    else
9      j <- ceil(((f^ mod 2) + 1)/2 + sqrt((1 - (f^ mod 2))/4 + 1 - i + gamma~))
10     k <- gamma~ - (j - (f^ mod 2)) (j - 1) - 1 - i
11     return l - k - 2 j^2 + j (f^ mod 2) (2j + 1)
The reason for this behavior is rooted in the way that we construct the ruggedness
mapping r and illustrates the close relation between ruggedness and deceptiveness. Algorithm
50.2 is a greedy algorithm which alternates between creating groups of mappings that
are mainly rugged and groups that are mainly deceptive. In Figure 50.7, for instance, from
gamma = 5 to gamma = 7, the permutations exhibit a high degree of deceptiveness whilst just being
rugged before and after that range. Thus, it seems to be a good idea to rearrange these
sections of the ruggedness mapping. The identity mapping should still come first, followed
by the purely rugged mappings ordered by their Delta-values. Then, the permutations should
gradually change from rugged to deceptive and the last mapping should be the most deceptive
one (gamma = 10 in Figure 50.7). The black curve in Figure 50.9 depicts the results of
rearranging the gamma-values with Algorithm 50.3. This algorithm maps deceptive gaps to higher
gamma-values and, by doing so, makes the resulting curve continuous.^12
Fig. 50.10.a sketches the average success generations for the rearranged ruggedness problem
for multiple values of n and gamma~. Depending on the basic problem size n, the problem
hardness increases steeply with rising values of gamma~.
In Algorithm 50.2 and Algorithm 50.3, we use the maximum value of the functional
objective function (abbreviated with f^) in order to build and to rearrange the ruggedness

^12 This is a deviation from our original idea, but this idea did not consider deceptiveness.
Fig. 50.10.a: Unscaled ruggedness st_avg plot. Fig. 50.10.b: Scaled ruggedness st_avg plot.
Fig. 50.10.c: Scaled deceptiveness: average success generations st_avg. Fig. 50.10.d: Scaled
deceptiveness: failed runs (1 - s/r).

Figure 50.10: Experiments with ruggedness and deceptiveness.
permutations r. Since this value depends on the basic problem length n, the number of
different permutations and, thus, the range of the γ-values will too. The length of the lines
in direction of the γ-axis in Fig. 50.10.a thus increases with n. We introduce two additional
scaling functions for ruggedness and deceptiveness with a parameter g spanning from zero
to ten, regardless of n. Only one of these functions can be used at a time, depending on
whether experiments should be run for rugged (Equation 50.33) or for deceptive
(Equation 50.34) problems. For scaling, we use the highest γ-value which still maps to
rugged mappings, γ_r = ⌊0.5·f̂⌋·⌈0.5·f̂⌉, and the minimum and maximum ruggedness values
according to Equation 50.32.
rugged:    γ = round(0.1 · g · γ_r)    (50.33)
deceptive:    γ = 0 if g ≤ 0, and γ = γ_r + round(0.1 · g · (γ_max − γ_r)) otherwise    (50.34)
When using this scaling mechanism, the curves resulting from experiments with different
n-values can be compared more easily: Fig. 50.10.b, based on the scale from Equation 50.33,
for instance, shows much more clearly how the problem difficulty rises with increasing
ruggedness than Fig. 50.10.a does. We can also spot some irregularities which always occur
at about the same degree of ruggedness, near g ≈ 9.5, and which we will investigate in the
future.
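The two scaling functions can be sketched in Java as follows. This is a minimal illustration under one plausible reading of Equation 50.33 and Equation 50.34; the parameter names gammaR (the largest γ still producing a rugged permutation) and gammaMax (the largest possible γ) are our own illustrative names, and the concrete values used below are hypothetical.

```java
public class RuggednessScaling {

  // One reading of Equation 50.33: map g in [0, 10] to a rugged gamma in [0, gammaR].
  public static int scaleRugged(final double g, final int gammaR) {
    return (int) Math.round(0.1d * g * gammaR);
  }

  // One reading of Equation 50.34: map g in (0, 10] to a deceptive gamma in (gammaR, gammaMax].
  public static int scaleDeceptive(final double g, final int gammaR, final int gammaMax) {
    if (g <= 0d) {
      return 0;
    }
    return gammaR + (int) Math.round(0.1d * g * (gammaMax - gammaR));
  }

  public static void main(final String[] args) {
    final int gammaR = 25, gammaMax = 55; // hypothetical values for illustration only
    System.out.println(scaleRugged(10d, gammaR));              // largest rugged setting
    System.out.println(scaleDeceptive(10d, gammaR, gammaMax)); // most deceptive setting
  }
}
```

With this scaling, g = 10 always yields the largest rugged γ (here γ_r itself) respectively the largest deceptive γ (here γ_max), independently of n.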
The experiments with the deceptiveness scale Equation 50.34 show the tremendous effect
of deceptiveness in the fitness landscape. Not only does the problem hardness rise steeply
with g (Fig. 50.10.c); after a certain threshold, the Evolutionary Algorithm becomes unable
to solve the model problem at all (in 1000 generations), and the fraction of failed experiments
in Fig. 50.10.d jumps to 100% (since the fraction s/r of solved ones goes to zero).
50.2.7.2.3 Epistasis
Fig. 50.11.a illustrates the relation between the problem size n, the epistasis factor η, and
the average success generations.
[Figure 50.11 (plots omitted): Fig. 50.11.a: Epistasis and problem length n (st over n and η). Fig. 50.11.b: Epistasis and problem length n: failed runs (1-s/r over n and η).]
Figure 50.11: Experiments with epistasis.
Although rising epistasis makes the problems harder, the
complexity does not rise as smoothly as in the previous experiments. The cause for this is
likely the presence of crossover; if solely mutation was allowed, the impact of epistasis would
most likely be more intense. Another interesting fact is that experimental settings with odd
values of η tend to be much more complex than those with even ones. This relation becomes
even more obvious in Fig. 50.11.b, where the proportion of failed runs, i. e., those which
were not able to solve the problem in less than 1000 generations, is plotted. A high plateau
for greater values of η is cut by deep valleys at positions where η = 2 + 2i ∀ i ∈ N₁. This
phenomenon has been thoroughly discussed by Niemczyk [2037] and can be excluded from
the experiments by applying the scaling mechanism with parameter y ∈ [0, 10] as defined in
Equation 50.35:
epistasis:    η = 2 if y ≤ 0, η = 41 if y ≥ 10, and η = 2·⌊2y⌋ + 1 otherwise    (50.35)
50.2.7.2.4 Neutrality
Figure 50.12 illustrates the average number of generations st needed to grow an individual
with optimal functional fitness for different values of the neutrality parameter μ.
[Figure 50.12 (plot omitted): st over n and μ.]
Figure 50.12: The results from experiments with neutrality.
Until μ ≈ 10, the problem hardness increases rapidly. For larger degrees of redundancy, only
minor increments in st can be observed. The reason for this strange behavior seems to be the
crossover operation: Niemczyk [2037] shows that a lower crossover rate makes experiments
involving the neutrality filter of the model problem very hard. We recommend using only
μ-values in the range from zero to eleven for testing the capability of optimization methods
of dealing with neutrality.
50.2.7.2.5 Epistasis and Neutrality
Our model problem consists of independent filters for the properties that may influence the
hardness of an optimization task. It is especially interesting to find out whether these filters
can be combined arbitrarily, i. e., if they are indeed free of interaction. In the ideal case, the
st of an experiment with n = 80, η = 8, and μ = 0 added to the st of an experiment with
n = 80, η = 0, and μ = 4 should roughly equal the st of an experiment with n = 80, η = 8,
and μ = 4. In Figure 50.13, we have sketched these expected values (Fig. 50.13.a) and the
results of the corresponding real experiments (Fig. 50.13.b). In fact, these two diagrams are
very similar. The small valleys caused by the easier values of η (see Paragraph 50.2.7.2.3)
occur in both charts. The only difference is a stronger influence of the degree of neutrality.
50.2.7.2.6 Ruggedness and Epistasis
It is a well-known fact that epistasis leads to ruggedness, since it violates the causality as
discussed in Chapter 17. Combining the ruggedness and the epistasis filter therefore leads
to stronger interactions. In Fig. 50.14.b, the influence of ruggedness seems to be amplified
by the presence of epistasis when compared with the estimated results shown in Fig. 50.14.a.
Apart from this increase in problem hardness, the model problem behaves as expected: the
characteristic valleys stemming from the epistasis filter, for instance, are clearly visible.
[Figure 50.13 (plots omitted): Fig. 50.13.a: The expected results (st over μ and η). Fig. 50.13.b: The experimentally obtained results.]
Figure 50.13: Expectation and reality: Experiments involving both epistasis and neutrality.
[Figure 50.14 (plots omitted): Fig. 50.14.a: The expected results (st over η and ruggedness γ (scaled)). Fig. 50.14.b: The experimentally obtained results.]
Figure 50.14: Expectation and reality: Experiments involving both ruggedness and epistasis.
50.2.7.3 Summary
In summary, this model problem has proven to be a viable approach for simulating prob-
lematic phenomena in optimization. It is
1. functional, i. e., it allows us to simulate many problematic features,
2. tunable: each filter can be tuned independently,
3. easy to understand,
4. allows for very fast fitness computation, and
5. easily extensible: each filter can be replaced with other approaches for simulating the
same feature.
Niemczyk [2037] has written a stand-alone Java class implementing the model
problem which is provided at http://www.it-weise.de/documents/files/
TunableModel.java [accessed 2008-05-17]. This class allows setting the parameters discussed
in this section and provides methods for determining the objective values of individuals in
the form of byte arrays. In the future, some strange behaviors of the model (like the
irregularities in the ruggedness filter and the gaps in epistasis) need to be revisited,
explained, and, if possible, removed.
50.3 Real Problem Spaces
Mathematical benchmark functions are especially interesting for testing and comparing
techniques based on real vectors (X ⊆ ℝ^n) like plain Evolution Strategy (see Chapter 30
on page 359), Differential Evolution (see Chapter 33 on page 419), and Particle Swarm
Optimization (see Chapter 39 on page 481). However, they only require such vectors as
candidate solutions, i. e., elements of the problem space X. Hence, techniques with different
search spaces G, like Genetic Algorithms, can also be applied to them, given that a
genotype-phenotype mapping gpm : G → X ⊆ ℝ^n is provided accordingly.
The optima or the Pareto frontier of benchmark functions often have already been de-
termined theoretically. When applying an optimization algorithm to the functions, we are
interested in the number of candidate solutions which it needs to process in order to find
the optima and in how close we can get to them. The functions also give us a great
opportunity to find out about the influence of parameters like the population size, the choice
of the selection algorithm, or the efficiency of the reproduction operations.
50.3.1 Single-Objective Optimization
In this section, we list some of the most important benchmark functions for scenarios in-
volving only a single optimization criterion. This, however, does not mean that the search
space has only a single dimension: even a single-objective optimization can take place in
the n-dimensional space ℝ^n.
50.3.1.1 Sphere
The sphere functions listed by Suganthan et al. [2626] (also given as F_1 by De Jong [711]
and in [2934], and as f_1 in [3020, 3026]) and defined here in Table 50.2 as f_RSO1 are a
very simple measure of the efficiency of optimization methods. They have, for instance, been
used by Rechenberg [2278] for testing his Evolution Strategy approach. In Figure 50.15, we
illustrate f_RSO1 for one-dimensional and two-dimensional problem spaces X, the code
snippet provided in Listing 50.2 shows how the sphere function can be computed, and in
Table 50.3, we list some results for the application of optimizers to the sphere function from
the literature.
function:      f_RSO1(x) = Σ_{i=1}^{n} x_i^2    (50.36)
domain:        X = [−v, v]^n, v ∈ {5.12, 10, 100}    (50.37)
optimization:  minimization
optimum:       x* = (0, 0, .., 0)^T, f_RSO1(x*) = 0    (50.38)
separable:     yes
multimodal:    no
Table 50.2: Properties of the sphere function.
[Figure 50.15 (plots omitted): Fig. 50.15.a: 1d; Fig. 50.15.b: 2d.]
Figure 50.15: Examples for the sphere function (v = 100).
public static final double computeSphere(final double[] ind) {
  int i;
  double res;

  res = 0;
  for (i = (ind.length - 1); i >= 0; i--) {
    res += (ind[i] * ind[i]);
  }

  return res;
}
Listing 50.2: A Java code snippet computing the sphere function.
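To illustrate how such a benchmark is typically used, the following self-contained sketch runs a plain random search on the sphere function and counts the consumed function evaluations (FEs). The random-search setup, budget, and seed are our own illustration and not one of the algorithms cited in Table 50.3.

```java
import java.util.Random;

public class SphereBenchmarkDemo {

  // the sphere function from Listing 50.2, repeated here for self-containedness
  public static double computeSphere(final double[] ind) {
    double res = 0d;
    for (int i = ind.length - 1; i >= 0; i--) {
      res += ind[i] * ind[i];
    }
    return res;
  }

  public static void main(final String[] args) {
    final Random rand = new Random(1234L); // fixed seed for reproducibility
    final int n = 5;
    final double v = 5.12d;
    final double[] x = new double[n];
    double best = Double.POSITIVE_INFINITY;
    int fes = 0;
    while (fes < 10000) { // budget of 10^4 FEs
      for (int i = 0; i < n; i++) {
        x[i] = (rand.nextDouble() * 2d * v) - v; // uniform sample in [-v, v]
      }
      best = Math.min(best, computeSphere(x));
      fes++; // one objective function evaluation consumed
    }
    System.out.println("best f after " + fes + " FEs: " + best);
  }
}
```

The FE counter, not wall-clock time, is the budget used in the result tables of this section, which makes comparisons independent of hardware and implementation speed.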
Table 50.3: Results obtained for the sphere function with function evaluations.
n, v        FEs        f_RSO1(x)    Algorithm   Class   Reference
500, 100    2.5·10^6   2.41E-11     SaNSDE      DE      [3020]
500, 100    2.5·10^6   4.90E-8      FEPCC       EA      [3020]
500, 100    2.5·10^6   2.28E-21     DECC-O      DE      [3020]
500, 100    2.5·10^6   6.33E-27     DECC-G      DE      [3020]
1000, 100   5·10^6     6.97         SaNSDE      DE      [3020]
1000, 100   5·10^6     5.40E-8      FEPCC       EA      [3020]
1000, 100   5·10^6     1.77E-20     DECC-O      DE      [3020]
1000, 100   5·10^6     2.17E-25     DECC-G      DE      [3020]
50, 10      2.5·10^5   1.38E-18     LSRS        HC      [1140]
50, 10      1·10^7     1.534E1                  GA      [1140]
50, 10      1·10^7     1.0016E2                 PSO     [1140]
100, 10     2.5·10^5   6.94E-16     LSRS        HC      [1140]
100, 10     1·10^7     6.483E1                  GA      [1140]
100, 10     1·10^7     2.492E2                  PSO     [1140]
500, 10     2.5·10^6   9E-16        LSRS        HC      [1140]
500, 10     1·10^7     1.2E3                    GA      [1140]
500, 10     1·10^7     2.18035E3                PSO     [1140]
1000, 10    2.5·10^6   1.25E-18     LSRS        HC      [1140]
1000, 10    1·10^7     3.45E3                   GA      [1140]
1000, 10    1·10^7     5.5E3                    PSO     [1140]
2000, 10    2.5·10^6   2.35E-33     LSRS        HC      [1140]
50.3.1.2 Schwefel's Problem 2.22 (Schwefel Function)
This function is defined as f_2 in [3026]. Here, we name it f_RSO2 and list its features in
Table 50.4. In Figure 50.16, we sketch its one- and two-dimensional versions, Listing 50.3
contains a Java snippet showing how it can be computed, and Table 50.5 provides some
results for this function from the literature.
function:      f_RSO2(x) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|    (50.39)
domain:        X = [−v, v]^n, v ∈ {10}    (50.40)
optimization:  minimization
optimum:       x* = (0, 0, .., 0)^T, f_RSO2(x*) = 0    (50.41)
separable:     yes
multimodal:    no
Table 50.4: Properties of Schwefel's problem 2.22.
[Figure 50.16 (plots omitted): Fig. 50.16.a: 1d; Fig. 50.16.b: 2d.]
Figure 50.16: Examples for Schwefel's problem 2.22.
public static final double computeSchwefel222(final double[] ind) {
  int i;
  double r1, r2, x;

  r1 = 1d;
  r2 = 0d;
  for (i = (ind.length - 1); i >= 0; i--) {
    x = Math.abs(ind[i]);
    r1 *= x;
    r2 += x;
  }

  return (r1 + r2);
}
Listing 50.3: A Java code snippet computing Schwefel's problem 2.22.
Table 50.5: Results obtained for Schwefel's problem 2.22 with function evaluations.
n, v       FEs        f_RSO2(x)    Algorithm   Class   Reference
500, 10    2.5·10^6   5.27E-2      SaNSDE      DE      [3020]
500, 10    2.5·10^6   1.3E-3       FEPCC       EA      [3020]
500, 10    2.5·10^6   3.77E-10     DECC-O      DE      [3020]
500, 10    2.5·10^6   5.95E-15     DECC-G      DE      [3020]
500, 10    2.5·10^6   4.08E-19     LSRS        HC      [1140]
500, 10    1·10^7     6.0051E2                 GA      [1140]
500, 10    1·10^7     8.9119E2                 PSO     [1140]
1000, 10   5·10^6     1.24         SaNSDE      DE      [3020]
1000, 10   5·10^6     2.6E-3       FEPCC       EA      [3020]
1000, 10   5·10^6     +            DECC-O      DE      [3020]
1000, 10   5·10^6     5.37E-14     DECC-G      DE      [3020]
1000, 10   2.5·10^6   1.12E-17     LSRS        HC      [1140]
1000, 10   1·10^7     1.492E3                  GA      [1140]
1000, 10   1·10^7     4.1E47                   PSO     [1140]
50, 10     2.5·10^5   1.91E-11     LSRS        HC      [1140]
50, 10     1·10^7     1.346E1                  GA      [1140]
50, 10     1·10^7     4.68E1                   PSO     [1140]
100, 10    2.5·10^5   3.98E-10     LSRS        HC      [1140]
100, 10    1·10^7     4.007E1                  GA      [1140]
100, 10    1·10^7     1.0486E2                 PSO     [1140]
2000, 10   2.5·10^6   3.08E-17     LSRS        HC      [1140]
50.3.1.3 Schwefel's Problem 1.2 (Quadric Function)
This function is defined as f_3 in [3026] and is here introduced as f_RSO3 in Table 50.6. In
Figure 50.17, we sketch this function for one and two dimensions, in Listing 50.4, we give a
Java code snippet showing how it can be computed, and Table 50.7 contains some results
from the literature.
function:      f_RSO3(x) = Σ_{i=1}^{n} ( Σ_{j=1}^{i} x_j )^2    (50.42)
domain:        X = [−v, v]^n, v ∈ {10, 100}    (50.43)
optimization:  minimization
optimum:       x* = (0, 0, .., 0)^T, f_RSO3(x*) = 0    (50.44)
separable:     no
multimodal:    no
Table 50.6: Properties of Schwefel's problem 1.2.
[Figure 50.17 (plots omitted): Fig. 50.17.a: 1d; Fig. 50.17.b: 2d.]
Figure 50.17: Examples for Schwefel's problem 1.2.
public static final double computeSchwefel12(final double[] ind) {
  int i, j;
  double res, d;

  res = 0d;
  for (i = (ind.length - 1); i >= 0; i--) {
    d = 0d;
    for (j = 0; j <= i; j++) {
      d += ind[j];
    }
    res += (d * d);
  }

  return res;
}
Listing 50.4: A Java code snippet computing Schwefel's problem 1.2.
Table 50.7: Results obtained for Schwefel's problem 1.2 with function evaluations.
n, v        FEs        f_RSO3(x)    Algorithm   Class   Reference
500, 100    2.5·10^6   2.03E-8      SaNSDE      DE      [3020]
500, 100    2.5·10^6   –            FEPCC       EA      [3020]
500, 100    2.5·10^6   2.93E-19     DECC-O      DE      [3020]
500, 100    2.5·10^6   6.17E-25     DECC-G      DE      [3020]
1000, 100   5·10^6     6.43E1       SaNSDE      DE      [3020]
1000, 100   5·10^6     –            FEPCC       EA      [3020]
1000, 100   5·10^6     8.69E-18     DECC-O      DE      [3020]
1000, 100   5·10^6     3.71E-23     DECC-G      DE      [3020]
50, 10      2.5·10^5   6.27E-19     LSRS        HC      [1140]
50, 10      1·10^7     6.113E3                  GA      [1140]
50, 10      1·10^7     7.81E4                   PSO     [1140]
100, 10     2.5·10^5   1.15E-15     LSRS        HC      [1140]
100, 10     1·10^7     2.22E5                   GA      [1140]
100, 10     1·10^7     7.81E5                   PSO     [1140]
500, 10     2.5·10^6   4.31E-11     LSRS        HC      [1140]
500, 10     1·10^7     1.25E8                   GA      [1140]
500, 10     1·10^7     1.47E8                   PSO     [1140]
1000, 10    2.5·10^6   1.38E-29     LSRS        HC      [1140]
1000, 10    1·10^7     6.28E8                   GA      [1140]
1000, 10    1·10^7     1.61E6                   PSO     [1140]
2000, 10    2.5·10^6   1.69E-7      LSRS        HC      [1140]
50.3.1.4 Schwefel's Problem 2.21
This function is defined as f_4 in [3026] and is here introduced as f_RSO4 in Table 50.8. In
Figure 50.18, we sketch this function for one and two dimensions, in Listing 50.5, we give a
Java code snippet showing how it can be computed, and Table 50.9 contains some results
from the literature.
function:      f_RSO4(x) = max_i { |x_i| : i ∈ 1..n }    (50.45)
domain:        X = [−v, v]^n, v ∈ {100}    (50.46)
optimization:  minimization
optimum:       x* = (0, 0, .., 0)^T, f_RSO4(x*) = 0    (50.47)
separable:     yes
multimodal:    no
Table 50.8: Properties of Schwefel's problem 2.21.
[Figure 50.18 (plots omitted): Fig. 50.18.a: 1d; Fig. 50.18.b: 2d.]
Figure 50.18: Examples for Schwefel's problem 2.21.
public static final double computeSchwefel221(final double[] ind) {
  int i;
  double res, d;

  res = 0d;
  for (i = (ind.length - 1); i >= 0; i--) {
    d = ind[i];
    if (d < 0d) {
      d = -d;
    }
    if (d > res) {
      res = d;
    }
  }

  return res;
}
Listing 50.5: A Java code snippet computing Schwefel's problem 2.21.
Table 50.9: Results obtained for Schwefel's problem 2.21 with function evaluations.
n, v        FEs        f_RSO4(x)    Algorithm   Class   Reference
500, 100    2.5·10^6   4.07E1       SaNSDE      DE      [3020]
500, 100    2.5·10^6   9E-5         FEPCC       EA      [3020]
500, 100    2.5·10^6   6.01E1       DECC-O      DE      [3020]
500, 100    2.5·10^6   4.58E-5      DECC-G      DE      [3020]
1000, 100   5·10^6     4.99E1       SaNSDE      DE      [3020]
1000, 100   5·10^6     8.5E-5       FEPCC       EA      [3020]
1000, 100   5·10^6     7.92E1       DECC-O      DE      [3020]
1000, 100   5·10^6     1.01E1       DECC-G      DE      [3020]
50.3.1.5 Generalized Rosenbrock's Function
Originally proposed by Rosenbrock [2333] and also known under the name banana valley
function [1841], this function is defined as f_5 in [3026]; in [2934], the two-dimensional
variant is given as F_2. The interesting feature of Rosenbrock's function is that it is unimodal
for two dimensions but becomes multimodal for higher ones [2464]. Here, we introduce it
as f_RSO5 in Table 50.10. In Figure 50.19, we sketch this function for two dimensions, in
Listing 50.6, we give a Java code snippet showing how it can be computed, and Table 50.11
contains some results from the literature.
function:      f_RSO5(x) = Σ_{i=1}^{n−1} [ 100·(x_{i+1} − x_i^2)^2 + (x_i − 1)^2 ]    (50.48)
domain:        X = [v_1, v_2]^n, v_1 ∈ {−5, −30}, v_2 ∈ {10, 30}    (50.49)
optimization:  minimization
optimum:       x* = (1, 1, .., 1)^T, f_RSO5(x*) = 0    (50.50)
separable:     no
multimodal:    no for n ≤ 2, yes otherwise
Table 50.10: Properties of the generalized Rosenbrock's function.
[Figure 50.19 (plot omitted)]
Figure 50.19: The 2d-example for the generalized Rosenbrock's function.
public static final double computeGenRosenbrock(final double[] ind) {
  int i;
  double res, ox, x, z;

  res = 0d;
  i = (ind.length - 1);
  x = ind[i];
  for (--i; i >= 0; i--) {
    ox = x; // ox holds x[i+1]
    x = ind[i];

    z = (ox - (x * x));
    res += (100d * (z * z));
    z = (x - 1);
    res += (z * z);
  }

  return res;
}
Listing 50.6: A Java code snippet computing the generalized Rosenbrock's function.
Table 50.11: Results obtained for the generalized Rosenbrock's function with function
evaluations.
n, X             FEs        f_RSO5(x)    Algorithm   Class   Reference
500, [−30, 30]   2.5·10^6   1.33E3       SaNSDE      DE      [3020]
500, [−30, 30]   2.5·10^6   –            FEPCC       EA      [3020]
500, [−30, 30]   2.5·10^6   6.64E2       DECC-O      DE      [3020]
500, [−30, 30]   2.5·10^6   4.92E2       DECC-G      DE      [3020]
1000, [−30, 30]  5·10^6     3.31E3       SaNSDE      DE      [3020]
1000, [−30, 30]  5·10^6     –            FEPCC       EA      [3020]
1000, [−30, 30]  5·10^6     1.48E3       DECC-O      DE      [3020]
1000, [−30, 30]  5·10^6     9.87E2       DECC-G      DE      [3020]
50, [−5, 10]     2.5·10^5   1.38E-18     LSRS        HC      [1140]
50, [−5, 10]     1·10^7     5.534E1                  GA      [1140]
50, [−5, 10]     1·10^7     5.9E4                    PSO     [1140]
100, [−5, 10]    2.5·10^5   6.94E-16     LSRS        HC      [1140]
100, [−5, 10]    1·10^7     2.74E4                   GA      [1140]
100, [−5, 10]    1·10^7     2.19E5                   PSO     [1140]
500, [−5, 10]    2.5·10^6   2.61E-11     LSRS        HC      [1140]
500, [−5, 10]    1·10^7     5.6662E2                 GA      [1140]
500, [−5, 10]    1·10^7     2.56E6                   PSO     [1140]
1000, [−5, 10]   2.5·10^6   7.41E-27     LSRS        HC      [1140]
1000, [−5, 10]   1·10^7     1.12E3                   GA      [1140]
1000, [−5, 10]   1·10^7     6.58E6                   PSO     [1140]
2000, [−5, 10]   2.5·10^6   1.48E-26     LSRS        HC      [1140]
50.3.1.6 Step Function
This function is defined as f_6 in [3026] and a similar, five-dimensional function is provided
as F_3 in [2934]. Here, the function is introduced as f_RSO6 in Table 50.12. In Figure 50.20,
we sketch this function for one dimension, in Listing 50.7, we give a Java code snippet
showing how it can be computed, and Table 50.13 contains some results from the literature.
function:      f_RSO6(x) = Σ_{i=1}^{n} ⌊x_i + 0.5⌋^2    (50.51)
domain:        X = [−v, v]^n, v ∈ {10, 100}    (50.52)
optimization:  minimization
optimum:       X* = [−0.5, 0.5)^n, f_RSO6(x*) = 0 ∀ x* ∈ X*    (50.53)
separable:     yes
multimodal:    no
Table 50.12: Properties of the step function.
[Figure 50.20 (plot omitted)]
Figure 50.20: The 1d-example for the step function (v = 10).
public static final double computeStepFunction(final double[] ind) {
  int i;
  double res, d;

  res = 0d;
  for (i = (ind.length - 1); i >= 0; i--) {
    d = Math.floor(ind[i] + 0.5d);
    res += (d * d);
  }

  return res;
}
Listing 50.7: A Java code snippet computing the step function.
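The plateau structure of the step function can be checked directly: any two points whose coordinates round to the same integers receive the same objective value, so an optimizer gets no gradient information inside a plateau. The following short check is our own illustration, reusing the code of Listing 50.7.

```java
public class StepFunctionDemo {

  // the step function from Listing 50.7, repeated here for self-containedness
  public static double computeStepFunction(final double[] ind) {
    double res = 0d;
    for (int i = ind.length - 1; i >= 0; i--) {
      final double d = Math.floor(ind[i] + 0.5d);
      res += d * d;
    }
    return res;
  }

  public static void main(final String[] args) {
    // both points lie on the same plateau: each coordinate rounds to the same integer
    final double[] a = {3.1d, -2.2d};
    final double[] b = {3.4d, -1.8d};
    System.out.println(computeStepFunction(a) == computeStepFunction(b));
  }
}
```

Both points evaluate to 3^2 + (−2)^2 = 13, which is why purely local, gradient-like search steps stall on this function while population-based methods can still make progress.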
Table 50.13: Results obtained for the step function with function evaluations.
n, v        FEs        f_RSO6(x)    Algorithm   Class   Reference
500, 100    2.5·10^6   3.12E2       SaNSDE      DE      [3020]
500, 100    2.5·10^6   0            FEPCC       EA      [3020]
500, 100    2.5·10^6   0            DECC-O      DE      [3020]
500, 100    2.5·10^6   0            DECC-G      DE      [3020]
1000, 100   5·10^6     3.93E3       SaNSDE      DE      [3020]
1000, 100   5·10^6     0            FEPCC       EA      [3020]
1000, 100   5·10^6     0            DECC-O      DE      [3020]
1000, 100   5·10^6     0            DECC-G      DE      [3020]
50.3.1.7 Quartic Function with Noise
This function is defined as f_7 in [3026]. Whitley et al. [2934] define a similar function as F_4
but use normally distributed noise instead of uniformly distributed noise. Here, the quartic
function with noise is introduced as f_RSO7 in Table 50.14. In Figure 50.21, we sketch this
function for one and two dimensions, in Listing 50.8, we give a Java code snippet showing
how it can be computed, and Table 50.15 contains some results from the literature.
function:      f_RSO7(x) = Σ_{i=1}^{n} i·x_i^4 + randomUni[0, 1)    (50.54)
domain:        X = [−v, v]^n, v ∈ {500}    (50.55)
optimization:  minimization
optimum:       x* = (0, 0, .., 0)^T, f_RSO7(x*) = 0    (50.56)
separable:     yes
multimodal:    no
Table 50.14: Properties of the quartic function with noise.
[Figure 50.21 (plots omitted): Fig. 50.21.a: 1d; Fig. 50.21.b: 2d.]
Figure 50.21: Examples for the quartic function with noise.
private static final Random RAND = new Random();

public static final double computeQuarticN(final double[] ind) {
  int i;
  double res, d;

  res = 0d;
  for (i = (ind.length - 1); i >= 0; i--) {
    d = ind[i];
    d *= d;
    d *= d; // d = ind[i]^4
    res += ((i + 1) * d); // the weight i in Equation 50.54 runs from 1 to n
  }

  return res + (RAND.nextDouble());
}
Listing 50.8: A Java code snippet computing the quartic function with noise.
Table 50.15: Results obtained for the quartic function with noise with function evaluations.
n, v        FEs        f_RSO7(x)    Algorithm   Class   Reference
500, 500    2.5·10^6   1.28         SaNSDE      DE      [3020]
500, 500    2.5·10^6   –            FEPCC       EA      [3020]
500, 500    2.5·10^6   1.04E1       DECC-O      DE      [3020]
500, 500    2.5·10^6   1.5E-3       DECC-G      DE      [3020]
1000, 500   5·10^6     1.18E1       SaNSDE      DE      [3020]
1000, 500   5·10^6     –            FEPCC       EA      [3020]
1000, 500   5·10^6     2.21E1       DECC-O      DE      [3020]
1000, 500   5·10^6     8.4E-3       DECC-G      DE      [3020]
50.3.1.8 Generalized Schwefel's Problem 2.26
This function is defined as f_8 in [3026] and as F_7 in [2934]. Here, we name it f_RSO8 and list
its features in Table 50.16. In Figure 50.22, we sketch its one- and two-dimensional versions,
Listing 50.9 contains a Java snippet showing how it can be computed, and Table 50.17
provides some results for this function from the literature.
function:      f_RSO8(x) = Σ_{i=1}^{n} ( −x_i · sin(√|x_i|) )    (50.57)
domain:        X = [−v, v]^n, v ∈ {500}    (50.58)
optimization:  minimization
optimum:       v = 500: x* = (420.9687, .., 420.9687)^T, f_RSO8(x*) ≈ −418.983·n    (50.59)
separable:     yes
multimodal:    yes
Table 50.16: Properties of Schwefel's problem 2.26.
[Figure 50.22 (plots omitted): Fig. 50.22.a: 1d; Fig. 50.22.b: 2d.]
Figure 50.22: Examples for Schwefel's problem 2.26.
public static final double computeSchwefel226(final double[] ind) {
  int i;
  double res, d;

  res = 0d;
  for (i = (ind.length - 1); i >= 0; i--) {
    d = ind[i];
    res += ((-d) * Math.sin(Math.sqrt(Math.abs(d))));
  }

  return res;
}
Listing 50.9: A Java code snippet computing Schwefel's problem 2.26.
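Per coordinate, the minimum lies near x_i ≈ 420.9687 with a contribution of roughly −418.983, which is where the −418.983·n estimate for the n-dimensional optimum in Table 50.16 comes from. The following quick numerical check is our own sketch, reusing the code of Listing 50.9.

```java
public class Schwefel226Demo {

  // Schwefel's problem 2.26 from Listing 50.9, repeated here for self-containedness
  public static double computeSchwefel226(final double[] ind) {
    double res = 0d;
    for (int i = ind.length - 1; i >= 0; i--) {
      final double d = ind[i];
      res += (-d) * Math.sin(Math.sqrt(Math.abs(d)));
    }
    return res;
  }

  public static void main(final String[] args) {
    final int n = 10;
    final double[] x = new double[n];
    java.util.Arrays.fill(x, 420.9687d); // the known optimum location per coordinate
    System.out.println(computeSchwefel226(x)); // should be close to -418.983 * n
  }
}
```

Note that the global minimum lies far from the origin; algorithms biased towards the center of the search space therefore tend to get trapped in one of the many other local minima of this function.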
Table 50.17: Results obtained for Schwefel's problem 2.26 with function evaluations.
n, v        FEs        f_RSO8(x)    Algorithm   Class   Reference
500, 500    2.5·10^6   -201796.5    SaNSDE      DE      [3020]
500, 500    2.5·10^6   -209316.4    FEPCC       EA      [3020]
500, 500    2.5·10^6   -209491.0    DECC-O      DE      [3020]
500, 500    2.5·10^6   -209491.0    DECC-G      DE      [3020]
1000, 500   5·10^6     -372991.0    SaNSDE      DE      [3020]
1000, 500   5·10^6     -418622.6    FEPCC       EA      [3020]
1000, 500   5·10^6     -418983      DECC-O      DE      [3020]
1000, 500   5·10^6     -418983      DECC-G      DE      [3020]
50.3.1.9 Generalized Rastrigin's Function
This function is defined as f_9 in [3026] and as F_6 in [2934]. In the context of this work, we
refer to it as f_RSO9 in Table 50.18. In Figure 50.23, we sketch this function for one and two
dimensions, in Listing 50.10, we give a Java code snippet showing how it can be computed,
and Table 50.19 contains some results from the literature.
function:      f_RSO9(x) = Σ_{i=1}^{n} ( x_i^2 − 10·cos(2πx_i) + 10 )    (50.60)
domain:        X = [−v, v]^n, v ∈ {5.12}    (50.61)
optimization:  minimization
optimum:       x* = (0, 0, .., 0)^T, f_RSO9(x*) = 0    (50.62)
separable:     yes
multimodal:    yes
Table 50.18: Properties of the generalized Rastrigin's function.
[Figure 50.23 (plots omitted): Fig. 50.23.a: 1d; Fig. 50.23.b: 2d.]
Figure 50.23: Examples for the generalized Rastrigin's function.
private static final double PI2 = Math.PI + Math.PI;

public static final double computeGenRastrigin(final double[] ind) {
  int i;
  double res, d;

  res = 0d; // the sum in Equation 50.60 starts at zero
  for (i = (ind.length - 1); i >= 0; i--) {
    d = ind[i];
    res += ((d * d) - (10d * Math.cos(PI2 * d)) + 10d);
  }

  return res;
}
Listing 50.10: A Java code snippet computing the generalized Rastrigin's function.
Table 50.19: Results obtained for the generalized Rastrigin's function with function eval-
uations.
n, v         FEs        f_RSO9(x)    Algorithm   Class   Reference
500, 5.12    2.5·10^6   2.84E2       SaNSDE      DE      [3020]
500, 5.12    2.5·10^6   1.43E-1      FEPCC       EA      [3020]
500, 5.12    2.5·10^6   1.76E1       DECC-O      DE      [3020]
500, 5.12    2.5·10^6   0            DECC-G      DE      [3020]
1000, 5.12   5·10^6     8.69E2       SaNSDE      DE      [3020]
1000, 5.12   5·10^6     3.13E-1      FEPCC       EA      [3020]
1000, 5.12   5·10^6     3.12E1       DECC-O      DE      [3020]
1000, 5.12   5·10^6     3.55E-16     DECC-G      DE      [3020]
50.3.1.10 Ackley's Function
Ackley's function is defined as f_10 in [3026] and is here introduced as f_RSO10 in Table 50.20.
In Figure 50.24, we sketch this function for one and two dimensions, in Listing 50.11, we
give a Java code snippet showing how it can be computed, and Table 50.21 contains some
results from the literature.
function:      f_RSO10(x) = −20·exp(−0.2·√((1/n)·Σ_{i=1}^{n} x_i^2)) − exp((1/n)·Σ_{i=1}^{n} cos(2πx_i)) + 20 + e    (50.63)
domain:        X = [−v, v]^n, v ∈ {10, 32}    (50.64)
optimization:  minimization
optimum:       x* = (0, 0, .., 0)^T, f_RSO10(x*) = 0    (50.65)
separable:     yes
multimodal:    yes
Table 50.20: Properties of Ackley's function.
[Figure 50.24 (plots omitted): Fig. 50.24.a: 1d; Fig. 50.24.b: 2d.]
Figure 50.24: Examples for Ackley's function.
private static final double PI2 = (Math.PI + Math.PI);

public static final double computeAckley(final double[] ind) {
  int i;
  final int dim;
  double r1, r2, d;

  r1 = 0d;
  r2 = 0d;
  dim = ind.length;
  for (i = (dim - 1); i >= 0; i--) {
    d = ind[i];
    r1 += (d * d);
    r2 += Math.cos(PI2 * d);
  }

  r1 /= dim;
  r2 /= dim;

  return ((-20d * Math.exp(-0.2 * Math.sqrt(r1))) - Math.exp(r2) + 20d + Math.E);
}
Listing 50.11: A Java code snippet computing Ackley's function.
Table 50.21: Results obtained for Ackley's function with function evaluations.
n, v       FEs        f_RSO10(x)   Algorithm   Class   Reference
500, 32    2.5·10^6   7.88         SaNSDE      DE      [3020]
500, 32    2.5·10^6   5.7E-4       FEPCC       EA      [3020]
500, 32    2.5·10^6   1.86E-11     DECC-O      DE      [3020]
500, 32    2.5·10^6   9.13E-14     DECC-G      DE      [3020]
1000, 32   5·10^6     1.12E1       SaNSDE      DE      [3020]
1000, 32   5·10^6     9.5E-4       FEPCC       EA      [3020]
1000, 32   5·10^6     4.39E-11     DECC-O      DE      [3020]
1000, 32   5·10^6     2.22E-13     DECC-G      DE      [3020]
50, 10     2.5·10^5   0            LSRS        HC      [1140]
50, 10     1·10^7     3.38                     GA      [1140]
50, 10     1·10^7     6.055                    PSO     [1140]
100, 10    2.5·10^5   0            LSRS        HC      [1140]
100, 10    1·10^7     4.43                     GA      [1140]
100, 10    1·10^7     6.78                     PSO     [1140]
500, 10    2.5·10^6   0            LSRS        HC      [1140]
500, 10    1·10^7     7.21                     GA      [1140]
500, 10    1·10^7     8.14                     PSO     [1140]
1000, 10   2.5·10^6   1.3E-18      LSRS        HC      [1140]
1000, 10   1·10^7     7.87                     GA      [1140]
1000, 10   1·10^7     9.02                     PSO     [1140]
2000, 10   2.5·10^6   0            LSRS        HC      [1140]
50.3.1.11 Generalized Griewank Function
The generalized Griewank function (also called Griewangk function [1106, 2934]) is defined
as f_11 in [3026] and as F_8 in [2934]. Locatelli [1755] showed that this function becomes
easier to optimize with increasing dimension n, and Cho et al. [569] found a way to derive
the number of (local) minima of this function (which grows exponentially with n). Here, we
name the function f_RSO11 and list its features in Table 50.22. In Figure 50.25, we sketch
its one- and two-dimensional versions, Listing 50.12 contains a Java snippet showing how it
can be computed, and Table 50.23 provides some results for this function from the literature.
function:      f_RSO11(x) = (1/4000)·Σ_{i=1}^{n} x_i^2 − Π_{i=1}^{n} cos(x_i / √i) + 1    (50.66)
domain:        X = [−v, v]^n, v ∈ {600}    (50.67)
optimization:  minimization
optimum:       x* = (0, 0, .., 0)^T, f_RSO11(x*) = 0    (50.68)
separable:     no
multimodal:    yes
Table 50.22: Properties of the generalized Griewank function.
[Figure 50.25 (plots omitted): Fig. 50.25.a: 1d; Fig. 50.25.b: 2d.]
Figure 50.25: Examples for the generalized Griewank function.
public static final double computeGenGriewank(final double[] ind) {
  int i;
  double r1, r2, d;

  r1 = 0d;
  r2 = 1d;
  for (i = (ind.length - 1); i >= 0; i--) {
    d = ind[i];
    r1 += (d * d);
    r2 *= (Math.cos(d / Math.sqrt(i + 1)));
  }

  return (1d + (r1 / 4000d) - r2);
}
Listing 50.12: A Java code snippet computing the generalized Griewank function.
Table 50.23: Results obtained for the generalized Griewank function with function evalu-
ations.
n, v        FEs        f_RSO11(x)   Algorithm   Class   Reference
500, 600    2.5·10^6   1.82E-1      SaNSDE      DE      [3020]
500, 600    2.5·10^6   2.9E-2       FEPCC       EA      [3020]
500, 600    2.5·10^6   5.02E-16     DECC-O      DE      [3020]
500, 600    2.5·10^6   4.4E-16      DECC-G      DE      [3020]
1000, 600   5·10^6     4.8E-1       SaNSDE      DE      [3020]
1000, 600   5·10^6     2.5E-2       FEPCC       EA      [3020]
1000, 600   5·10^6     2.04E-15     DECC-O      DE      [3020]
1000, 600   5·10^6     1.01E-15     DECC-G      DE      [3020]
50.3.1.12 Generalized Penalized Function 1
This function is defined as f_12 in [3026] and is here introduced as f_RSO12 in Table 50.24. In
Figure 50.26, we sketch this function for one and two dimensions, in Listing 50.13, we give a
Java code snippet showing how it can be computed, and Table 50.25 contains some results
from the literature.
function:      f_RSO12(x) = (π/n)·{ 10·sin^2(πy_1) + Σ_{i=1}^{n−1} (y_i − 1)^2·[1 + 10·sin^2(πy_{i+1})] + (y_n − 1)^2 } + Σ_{i=1}^{n} u(x_i, 10, 100, 4)    (50.69)
u(x_i, a, k, m) = k·(x_i − a)^m if x_i > a, 0 if −a ≤ x_i ≤ a, and k·(−x_i − a)^m otherwise    (50.70)
y_i = 1 + (x_i + 1)/4    (50.71)
domain:        X = [−v, v]^n, v ∈ {50}    (50.72)
optimization:  minimization
optimum:       x* = (−1, −1, .., −1)^T, f_RSO12(x*) = 0    (50.73)
separable:     yes
multimodal:    yes
Table 50.24: Properties of the generalized penalized function 1.
[Figure 50.26 (plots omitted): Fig. 50.26.a: 1d; Fig. 50.26.b: 2d.]
Figure 50.26: Examples for the general penalized function 1.
private static final double y(final double d) {
  return (1d + 0.25 * (d + 1d));
}

private static final double u(final double x, final int a, final int k,
    final int m) {
  if (x > a) {
    return (k * Math.pow(x - a, m));
  }
  if (x < (-a)) {
    return (k * Math.pow(-x - a, m));
  }
  return 0d;
}

public static final double computePenalizedA(final double[] ind) {
  int i;
  double r1, y, ly, s;

  r1 = Math.sin(Math.PI * y(ind[0]));
  r1 *= r1;
  r1 *= 10d;

  i = (ind.length - 1);
  y = y(ind[i]);
  r1 += ((y - 1) * (y - 1));

  for (--i; i >= 0; i--) {
    ly = y;
    y = y(ind[i]);
    s = Math.sin(Math.PI * ly);

    r1 += ((y - 1) * (y - 1)) * (1d + (10d * s * s));
  }

  r1 *= Math.PI;
  i = ind.length;
  r1 /= i;

  for (--i; i >= 0; i--) {
    r1 += u(ind[i], 10, 100, 4);
  }

  return r1;
}
Listing 50.13: A Java code snippet computing the generalized penalized function 1.
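Since y_i = 1 + (x_i + 1)/4 equals 1 exactly when x_i = −1, all sine and square terms of f_RSO12 vanish at x* = (−1, .., −1)^T, which places the global optimum there. The following compact re-implementation of Listing 50.13 (our own sketch, written for self-containedness) confirms that f(x*) is zero up to floating-point noise.

```java
public class Penalized1Demo {

  private static double y(final double d) {
    return 1d + (0.25d * (d + 1d));
  }

  private static double u(final double x, final int a, final int k, final int m) {
    if (x > a) {
      return k * Math.pow(x - a, m);
    }
    if (x < -a) {
      return k * Math.pow(-x - a, m);
    }
    return 0d; // no penalty inside [-a, a]
  }

  public static double computePenalizedA(final double[] ind) {
    final int n = ind.length;
    final double s1 = Math.sin(Math.PI * y(ind[0]));
    double r = 10d * s1 * s1; // 10 sin^2(pi y_1)
    for (int i = 0; i < (n - 1); i++) {
      final double yi = y(ind[i]);
      final double sn = Math.sin(Math.PI * y(ind[i + 1]));
      r += (yi - 1d) * (yi - 1d) * (1d + (10d * sn * sn));
    }
    final double yn = y(ind[n - 1]);
    r += (yn - 1d) * (yn - 1d);
    r *= (Math.PI / n);
    for (int i = 0; i < n; i++) {
      r += u(ind[i], 10, 100, 4); // boundary penalty terms
    }
    return r;
  }

  public static void main(final String[] args) {
    final double[] best = new double[30];
    java.util.Arrays.fill(best, -1d); // x* = (-1, .., -1)
    System.out.println(computePenalizedA(best)); // approximately 0
  }
}
```

Only Math.sin(Math.PI), which is not exactly zero in double arithmetic, keeps the result from being exactly 0; it stays far below any practically relevant tolerance.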
Table 50.25: Results obtained for the generalized penalized function 1 with the given numbers of function evaluations (FEs).

n, v      FEs           f_RSO12(x)   Algorithm   Class   Reference
500, 50   2.5·10^6      2.96         SaNSDE      DE      [3020]
500, 50   2.5·10^6                   FEPCC       EA      [3020]
500, 50   2.5·10^6      2.17E-25     DECC-O      DE      [3020]
500, 50   4.29·10^-21   0            DECC-G      DE      [3020]
1000, 50  5·10^6        2.96         SaNSDE      DE      [3020]
1000, 50  5·10^6                     FEPCC       EA      [3020]
1000, 50  5·10^6        1.08E-24     DECC-O      DE      [3020]
1000, 50  5·10^6        6.89E-25     DECC-G      DE      [3020]
50.3.1.13 Generalized Penalized Function 2
This function is defined as f_13 in [3026] and is here introduced as f_RSO13 in Table 50.26. In Figure 50.27, we sketch this function for one and two dimensions, in Listing 50.14, we give a Java code snippet showing how it can be computed, and Table 50.27 contains some results from the literature.
function:   f_RSO13(x) = 0.1 \left[ \sin^2(3\pi x_1) + \sum_{i=1}^{n-1} (x_i - 1)^2 \left(1 + \sin^2(3\pi x_{i+1})\right) + (x_n - 1)^2 \left(1 + \sin^2(2\pi x_n)\right) \right] + \sum_{i=1}^{n} u(x_i, 5, 100, 4)   (50.74)

see Equation 50.70 for u(x_i, a, k, m)

domain:   X = [-v, v]^n, v = 50   (50.75)
optimization:   minimization
optimum:   x^* = (1, 1, .., 1)^T, f_RSO13(x^*) = 0   (50.76)
separable:   yes
multimodal:   yes

Table 50.26: Properties of the generalized penalized function 2.
Figure 50.27: Examples for the general penalized function 2 (Fig. 50.27.a: 1d; Fig. 50.27.b: 2d; plots of f over x in X).
private static final double PI2 = (Math.PI + Math.PI); // 2 * pi
private static final double PI3 = (PI2 + Math.PI);     // 3 * pi

private static final double u(final double x, final int a, final int k,
    final int m) {
  if (x > a) {
    return (k * Math.pow(x - a, m));
  }
  if (x < (-a)) {
    return (k * Math.pow(-x - a, m));
  }
  return 0d;
}

public static final double computePenalizedB(final double[] ind) {
  double res, x, ox, xm1, s;
  int i;

  i = (ind.length - 1);
  x = ind[i];

  xm1 = (x - 1d); // (x_n - 1)^2 * (1 + sin^2(2 pi x_n))
  s = Math.sin(PI2 * x);
  res = ((xm1 * xm1) * (1d + (s * s)));

  for (--i; i >= 0; i--) { // (x_i - 1)^2 * (1 + sin^2(3 pi x_{i+1}))
    ox = x;
    x = ind[i];

    s = Math.sin(PI3 * ox);
    xm1 = (x - 1d);
    res += ((xm1 * xm1) * (1d + (s * s)));
  }

  s = Math.sin(PI3 * x); // sin^2(3 pi x_1)
  res += (s * s);
  res *= 0.1d;

  for (i = (ind.length - 1); i >= 0; i--) { // penalty terms u(x_i, 5, 100, 4)
    res += u(ind[i], 5, 100, 4);
  }

  return res;
}

Listing 50.14: A Java code snippet computing the generalized penalized function 2.
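The minimum of f_RSO13 lies at x^* = (1, 1, .., 1)^T, where all (x_i - 1)^2 terms and all penalty terms vanish and sin^2(3π x_1) is zero. A compact, self-contained re-statement (hypothetical names, not part of the benchmark sources) can be used to verify this:

```java
// Hypothetical compact re-statement of the generalized penalized function 2.
class PenalizedB {
  static double u(double x, double a, double k, double m) {
    if (x > a) { return k * Math.pow(x - a, m); }
    if (x < -a) { return k * Math.pow(-x - a, m); }
    return 0d;
  }

  static double f(double[] x) {
    int n = x.length;
    double s = Math.pow(Math.sin(3d * Math.PI * x[0]), 2);
    for (int i = 0; i < n - 1; i++) { // (x_i - 1)^2 * (1 + sin^2(3 pi x_{i+1}))
      s += (x[i] - 1d) * (x[i] - 1d)
          * (1d + Math.pow(Math.sin(3d * Math.PI * x[i + 1]), 2));
    }
    double xn = x[n - 1];
    s += (xn - 1d) * (xn - 1d)
        * (1d + Math.pow(Math.sin(2d * Math.PI * xn), 2));
    double r = 0.1d * s;
    for (int i = 0; i < n; i++) { // penalty terms u(x_i, 5, 100, 4)
      r += u(x[i], 5d, 100d, 4d);
    }
    return r;
  }
}
```

Outside [-5, 5] in any coordinate, the penalty terms quickly dominate the value of the function.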
Table 50.27: Results obtained for the generalized penalized function 2 with the given numbers of function evaluations (FEs).

n, v      FEs           f_RSO13(x)   Algorithm   Class   Reference
500, 50   2.5·10^6      1.89E2       SaNSDE      DE      [3020]
500, 50   2.5·10^6                   FEPCC       EA      [3020]
500, 50   2.5·10^6      5.04E-23     DECC-O      DE      [3020]
500, 50   4.29·10^-21   5.34E-18     DECC-G      DE      [3020]
1000, 50  5·10^6        7.41E2       SaNSDE      DE      [3020]
1000, 50  5·10^6                     FEPCC       EA      [3020]
1000, 50  5·10^6        4.82E-22     DECC-O      DE      [3020]
1000, 50  5·10^6        2.55E-21     DECC-G      DE      [3020]
50.3.1.14 Levy's Function
This function is defined in [1140].
function:   f_RSO14(x) = \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left(1 + 10\sin^2(\pi y_i + 1)\right) + (y_n - 1)^2 \left(1 + \sin^2(2\pi x_n)\right)   (50.77)

y_i = 1 + \frac{x_i - 1}{4}   (50.78)

domain:   X = [-v, v]^n, v = 10   (50.79)
optimization:   minimization
optimum:   x^* = (1, 1, .., 1)^T, f_RSO14(x^*) = 0   (50.80)
separable:   yes
multimodal:   yes

Table 50.28: Properties of Levy's function.
Figure 50.28: Examples for Levy's function (Fig. 50.28.a: 1d; Fig. 50.28.b: 2d; plots of f over x in X).
Table 50.29: Results obtained for Levy's function with the given numbers of function evaluations (FEs).

n, v      FEs        f_RSO14(x)   Algorithm   Class   Reference
50, 10    2.5·10^5   2.9E-39      LSRS        HC      [1140]
50, 10    1·10^7     3.27E-1                  GA      [1140]
50, 10    1·10^7     1.888E1                  PSO     [1140]
100, 10   2.5·10^5   2.9E-39      LSRS        HC      [1140]
100, 10   1·10^7     6.509E-1                 GA      [1140]
100, 10   1·10^7     5.572E1                  PSO     [1140]
500, 10   2.5·10^6   2.9E-39      LSRS        HC      [1140]
500, 10   1·10^7     3.33                     GA      [1140]
500, 10   1·10^7     5.4602E2                 PSO     [1140]
1000, 10  2.5·10^6   2.9E-39      LSRS        HC      [1140]
1000, 10  1·10^7     7.17                     GA      [1140]
1000, 10  1·10^7     1.62E3                   PSO     [1140]
2000, 10  2.5·10^6   2.9E-39      LSRS        HC      [1140]
50.3.1.15 Shifted Sphere
The sphere function given in Section 50.3.1.1 is shifted in X by a vector o and in the objective space by a bias f_bias. This function is defined as F_1 in [2626, 2627] and [2668]. Here, we specify it as f_RSO15 in Table 50.30 and provide a Java snippet showing how it can be computed in Listing 50.15 before listing some results from the literature in Table 50.31.
function:   f_RSO15(x) = \sum_{i=1}^{n} z_i^2 + f_bias   (50.81)

z_i = x_i - o_i   (50.82)

domain:   X = [-100, 100]^n   (50.83)
optimization:   minimization
optimum:   x^* = (o_1, o_2, .., o_n)^T, f_RSO15(x^*) = f_bias   (50.84)
separable:   yes
multimodal:   no
shifted:   yes
rotated:   no
scalable:   no

Table 50.30: Properties of the shifted sphere function.
private static final double[] DATA = new double[] {
  // shift data o_i, see Tang et al. [2668]
};

public static final double computeShiftedSphere(final double[] ind) {
  int i;
  double res, x;

  res = 0;
  for (i = (ind.length - 1); i >= 0; i--) {
    x = (ind[i] - DATA[i]); // z_i = x_i - o_i
    res += (x * x);
  }

  return res + -450.0d; // bias f_bias = -450, see [2668]
}

Listing 50.15: A Java code snippet computing the shifted sphere function.
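The listing above hard-wires the shift data. For experiments with other shift vectors, o and the bias can also be passed explicitly; the following hypothetical variant (not part of the benchmark sources) shows this and makes the property f(o) = f_bias easy to check:

```java
// Hypothetical parameterized variant of the shifted sphere function.
class ShiftedSphere {
  // x: candidate solution, o: shift vector, bias: objective-space offset
  static double f(double[] x, double[] o, double bias) {
    double r = 0d;
    for (int i = 0; i < x.length; i++) {
      double z = x[i] - o[i]; // z_i = x_i - o_i
      r += z * z;
    }
    return r + bias;
  }
}
```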
Table 50.31: Results obtained for the shifted sphere function with the given numbers of function evaluations (FEs).

n, f_bias   FEs        f_RSO15(x)   Algorithm   Class   Reference
500, 0      2.5·10^6   2.61E-11     SaNSDE      DE      [3020]
500, 0      2.5·10^6   1.04E-12     DECC-O      DE      [3020]
500, 0      2.5·10^6   3.71E-13     DECC-G      DE      [3020]
1000, 0     5·10^6     1.17         SaNSDE      DE      [3020]
1000, 0     5·10^6     3.66E-8      DECC-O      DE      [3020]
1000, 0     5·10^6     6.84E-13     DECC-G      DE      [3020]
50.3.1.16 Shifted Rotated High Conditioned Elliptic Function
The high conditioned elliptic function is shifted in X by a vector o, rotated by the orthogonal matrix M, and shifted in the objective space by a bias f_bias. This function is defined as F_3 in [2626, 2627].
function:   f_RSO16(x) = \sum_{i=1}^{n} \left(10^6\right)^{\frac{i-1}{n-1}} z_i^2 + f_bias   (50.85)

z = (x - o) \cdot M   (50.86)

domain:   X = [-100, 100]^n   (50.87)
optimization:   minimization
optimum:   x^* = (o_1, o_2, .., o_n)^T, f_RSO16(x^*) = f_bias   (50.88)
separable:   no
multimodal:   no
shifted:   yes
rotated:   yes
scalable:   no

Table 50.32: Properties of the shifted and rotated elliptic function.
Table 50.33: Results obtained for the shifted and rotated elliptic function with the given numbers of function evaluations (FEs).

n, f_bias   FEs        f_RSO16(x)   Algorithm   Class   Reference
500, ?      2.5·10^6   6.88E8       SaNSDE      DE      [3020]
500, ?      2.5·10^6   4.78E8       DECC-O      DE      [3020]
500, ?      2.5·10^6   3.06E8       DECC-G      DE      [3020]
1000, ?     5·10^6     2.34E9       SaNSDE      DE      [3020]
1000, ?     5·10^6     1.08E9       DECC-O      DE      [3020]
1000, ?     5·10^6     8.11E8       DECC-G      DE      [3020]
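No code listing is given for this function, so the following hypothetical sketch (names and parameter conventions are ours) shows how Equations 50.85 and 50.86 could be computed, with the shift vector o and a row-major rotation matrix M passed in explicitly:

```java
// Hypothetical sketch of the shifted and rotated elliptic function.
class ShiftedRotatedElliptic {
  // x: candidate, o: shift vector, m: orthogonal n-by-n matrix, bias: offset
  static double f(double[] x, double[] o, double[][] m, double bias) {
    int n = x.length;
    double[] z = new double[n];
    for (int j = 0; j < n; j++) { // z = (x - o) * M (row vector times matrix)
      double s = 0d;
      for (int i = 0; i < n; i++) {
        s += (x[i] - o[i]) * m[i][j];
      }
      z[j] = s;
    }
    double r = 0d;
    for (int i = 0; i < n; i++) { // (10^6)^((i-1)/(n-1)) z_i^2, 0-based here
      r += Math.pow(1e6, i / (double) (n - 1)) * z[i] * z[i];
    }
    return r + bias;
  }
}
```

With M set to the identity matrix, the rotation has no effect and f(o) = f_bias, which makes the sketch easy to check.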
50.3.1.17 Shifted Schwefel's Problem 2.21
Schwefel's problem 2.21 given in Section 50.3.1.4 is shifted in X by a vector o and in the objective space by a bias f_bias = -450. This function is named F_2 in [2668]. In Table 50.34, we define it as f_RSO17 and in Listing 50.16, we provide a Java snippet showing how it can be computed.
function:   f_RSO17(x) = \max_i |z_i| + f_bias   (50.89)

z_i = x_i - o_i   (50.90)

domain:   X = [-100, 100]^n   (50.91)
optimization:   minimization
optimum:   x^* = (o_1, o_2, .., o_n)^T, f_RSO17(x^*) = f_bias   (50.92)
separable:   no
multimodal:   no
shifted:   yes
rotated:   no
scalable:   yes

Table 50.34: Properties of shifted Schwefel's problem 2.21.
private static final double[] DATA = new double[] {
  // shift data o_i, see material accompanying Tang et al. [2668]
};

public static final double computeShiftedSchwefel221(final double[] ind) {
  int i;
  double res, d;

  res = 0d;
  for (i = (ind.length - 1); i >= 0; i--) {
    d = ind[i] - DATA[i]; // z_i = x_i - o_i
    if (d < 0d) { // d = |z_i|
      d = -d;
    }
    if (d > res) { // res = max_i |z_i|
      res = d;
    }
  }

  return res + -450.0d; // bias f_bias = -450, see [2668]
}

Listing 50.16: A Java code snippet computing shifted Schwefel's problem 2.21.
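As with the shifted sphere, a parameterized variant (hypothetical names, not part of the benchmark sources) makes the defining property f(o) = f_bias directly testable:

```java
// Hypothetical parameterized variant of shifted Schwefel's problem 2.21.
class ShiftedSchwefel221 {
  // x: candidate solution, o: shift vector, bias: objective-space offset
  static double f(double[] x, double[] o, double bias) {
    double r = 0d;
    for (int i = 0; i < x.length; i++) {
      double d = Math.abs(x[i] - o[i]); // |z_i|
      if (d > r) {
        r = d; // running maximum
      }
    }
    return r + bias;
  }
}
```

Because only the single worst coordinate counts, improving all other coordinates does not change the objective value, which is what makes this problem hard for many optimizers.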
50.3.1.18 Schwefel's Problem 2.21 with Optimum on Bounds
This function is defined as F_5 in [2626, 2627]. In Equation 50.93, we define it as f_RSO18. In the table, A is an n × n matrix and its i-th row is A_i. det(A) ≠ 0 holds and A is initialized with random numbers from [-500, 500]. B_i is a vector of dimension n and computed by multiplying the optimum vector o to A_i, i.e., B_i = A_i · o. o is initialized with random numbers from [-100, 100].
function:   f_RSO18(x) = \max_i |A_i x - B_i| + f_bias   (50.93)

domain:   X = [-100, 100]^n   (50.94)
optimization:   minimization
optimum:   x^* = (o_1, o_2, .., o_n)^T, f_RSO18(x^*) = f_bias   (50.95)
separable:   yes
multimodal:   no
shifted:   yes
rotated:   no
scalable:   no

Table 50.35: Properties of Schwefel's modified problem 2.21 with optimum on the bounds.
Table 50.36: Results obtained for Schwefel's modified problem 2.21 with optimum on the bounds with the given numbers of function evaluations (FEs).

n, f_bias   FEs        f_RSO18(x)   Algorithm   Class   Reference
500, 0      2.5·10^6   4.96E5       SaNSDE      DE      [3020]
500, 0      2.5·10^6   2.4E5        DECC-O      DE      [3020]
500, 0      2.5·10^6   1.15E5       DECC-G      DE      [3020]
1000, 0     5·10^6     5.03E5       SaNSDE      DE      [3020]
1000, 0     5·10^6     3.73E5       DECC-O      DE      [3020]
1000, 0     5·10^6     2.2E5        DECC-G      DE      [3020]
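No code listing is given for this function. The following hypothetical sketch (our names; A and the precomputed B = A·o are passed in rather than generated randomly) shows how Equation 50.93 could be evaluated:

```java
// Hypothetical sketch of Schwefel's modified problem 2.21 (F_5 in [2626, 2627]).
class SchwefelF5 {
  // x: candidate, a: n-by-n matrix A, b: precomputed vector with b[i] = A_i . o,
  // bias: objective-space offset
  static double f(double[] x, double[][] a, double[] b, double bias) {
    double r = 0d;
    for (int i = 0; i < x.length; i++) {
      double s = 0d;
      for (int j = 0; j < x.length; j++) { // A_i x
        s += a[i][j] * x[j];
      }
      double d = Math.abs(s - b[i]); // |A_i x - B_i|
      if (d > r) {
        r = d;
      }
    }
    return r + bias;
  }
}
```

Since B_i = A_i · o by construction, the maximum residual is 0 exactly at x = o, so f(o) = f_bias.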
50.3.1.19 Shifted Rosenbrock's Function
The generalized Rosenbrock function given in Section 50.3.1.5 is shifted in X by a vector o and in the objective space by a bias f_bias. This function is defined as F_6 in [2626, 2627] and as F_3 in [2668]. Here, we specify it as f_RSO19 in Table 50.37 and provide a Java snippet showing how it can be computed in Listing 50.17 before listing some results from the literature in Table 50.38.
function:   f_RSO19(x) = \sum_{i=1}^{n-1} \left[ 100 \left(z_i^2 - z_{i+1}\right)^2 + (z_i - 1)^2 \right] + f_bias   (50.96)

z_i = x_i - o_i + 1   (50.97)

domain:   X = [-100, 100]^n   (50.98)
optimization:   minimization
optimum:   x^* = (o_1, o_2, .., o_n)^T, f_RSO19(x^*) = f_bias   (50.99)
separable:   no
multimodal:   yes
shifted:   yes
rotated:   no
scalable:   yes

Table 50.37: Properties of the shifted Rosenbrock's function.
private static final double[] DATA = new double[] {
  // shift data o_i, see material accompanying Tang et al. [2668]
};

public static final double computeShiftedRosenbrock(final double[] ind) {
  int i;
  double res, ox, x, z;

  res = 0d;
  i = (ind.length - 1);
  x = ind[i] - DATA[i] + 1; // z_n
  for (--i; i >= 0; i--) {
    ox = x; // z_{i+1}
    x = ind[i] - DATA[i] + 1; // z_i

    z = ((x * x) - ox); // z_i^2 - z_{i+1}
    res += (100d * (z * z));
    z = (x - 1);
    res += (z * z); // (z_i - 1)^2
  }

  return res + 390.0d; // bias f_bias = 390, see [2668]
}

Listing 50.17: A Java code snippet computing the shifted Rosenbrock's function.
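At x = o every z_i equals 1, so both the 100(z_i^2 - z_{i+1})^2 and the (z_i - 1)^2 terms vanish and f(o) = f_bias. A parameterized variant (hypothetical names, not part of the benchmark sources) demonstrates this:

```java
// Hypothetical parameterized variant of the shifted Rosenbrock's function.
class ShiftedRosenbrock {
  // x: candidate solution, o: shift vector, bias: objective-space offset
  static double f(double[] x, double[] o, double bias) {
    int n = x.length;
    double r = 0d;
    for (int i = 0; i < n - 1; i++) {
      double zi = x[i] - o[i] + 1d;       // z_i
      double zi1 = x[i + 1] - o[i + 1] + 1d; // z_{i+1}
      double t = zi * zi - zi1;
      r += 100d * t * t + (zi - 1d) * (zi - 1d);
    }
    return r + bias;
  }
}
```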
Table 50.38: Results obtained for the shifted Rosenbrock's function with the given numbers of function evaluations (FEs).

n, f_bias   FEs        f_RSO19(x)   Algorithm   Class   Reference
500, ?      2.5·10^6   2.71E3       SaNSDE      DE      [3020]
500, ?      2.5·10^6   1.71E3       DECC-O      DE      [3020]
500, ?      2.5·10^6   1.56E3       DECC-G      DE      [3020]
1000, ?     5·10^6     1.35E4       SaNSDE      DE      [3020]
1000, ?     5·10^6     3.13E3       DECC-O      DE      [3020]
1000, ?     5·10^6     2.22E3       DECC-G      DE      [3020]
50.3.1.20 Shifted Ackley's Function
Ackley's function provided in Section 50.3.1.10 is shifted in X by a vector o and in the objective space by a bias f_bias = -140. This function is defined as F_6 in [2668]. Here, we specify it as f_RSO20 in Table 50.39 and provide a Java snippet showing how it can be computed in Listing 50.18.
function:   f_RSO20(x) = -20 \exp\left(-0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} z_i^2}\right) - \exp\left(\frac{1}{n} \sum_{i=1}^{n} \cos(2\pi z_i)\right) + 20 + e + f_bias   (50.100)

z_i = x_i - o_i   (50.101)

domain:   X = [-v, v]^n, v ∈ {10, 32}   (50.102)
optimization:   minimization
optimum:   x^* = (o_1, o_2, .., o_n)^T, f_RSO20(x^*) = f_bias   (50.103)
separable:   yes
multimodal:   yes
shifted:   yes
rotated:   no
scalable:   yes

Table 50.39: Properties of shifted Ackley's function.
private static final double PI2 = (Math.PI + Math.PI); // 2 * pi

private static final double[] DATA = new double[] {
  // shift data o_i, see material accompanying Tang et al. [2668]
};

public static final double computeAckley(final double[] ind) {
  int i;
  final int dim;
  double r1, r2, d;

  r1 = 0d;
  r2 = 0d;
  dim = ind.length;
  for (i = (dim - 1); i >= 0; i--) {
    d = ind[i] - DATA[i]; // z_i = x_i - o_i
    r1 += (d * d);
    r2 += Math.cos(PI2 * d);
  }

  r1 /= dim;
  r2 /= dim;

  return ((-20d * Math.exp(-0.2 * Math.sqrt(r1))) -
      Math.exp(r2) + 20d + Math.E) +
      -140.0d; // bias f_bias = -140, see [2668]
}

Listing 50.18: A Java code snippet computing shifted Ackley's function.
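At x = o, both exponential terms reduce to -20 and -e respectively, which cancel against +20 + e, so f(o) = f_bias. A parameterized variant (hypothetical names, not part of the benchmark sources) shows this:

```java
// Hypothetical parameterized variant of shifted Ackley's function.
class ShiftedAckley {
  // x: candidate solution, o: shift vector, bias: objective-space offset
  static double f(double[] x, double[] o, double bias) {
    int n = x.length;
    double r1 = 0d, r2 = 0d;
    for (int i = 0; i < n; i++) {
      double z = x[i] - o[i]; // z_i = x_i - o_i
      r1 += z * z;
      r2 += Math.cos(2d * Math.PI * z);
    }
    return -20d * Math.exp(-0.2d * Math.sqrt(r1 / n))
        - Math.exp(r2 / n) + 20d + Math.E + bias;
  }
}
```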
50.3.1.21 Shifted Rotated Ackley's Function
Ackley's function given in Section 50.3.1.10 is shifted in X by a vector o, rotated with a matrix M, and shifted in the objective space by a bias f_bias. This function is defined as F_8 in [2626, 2627].
function:   f_RSO21(x) = -20 \exp\left(-0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} z_i^2}\right) - \exp\left(\frac{1}{n} \sum_{i=1}^{n} \cos(2\pi z_i)\right) + 20 + e + f_bias   (50.104)

z = (x - o) \cdot M   (50.105)

domain:   X = [-32, 32]^n   (50.106)
optimization:   minimization
optimum:   x^* = (o_1, o_2, .., o_n)^T, f_RSO21(x^*) = f_bias   (50.107)
separable:   no
multimodal:   yes
shifted:   yes
rotated:   yes
scalable:   yes

Table 50.40: Properties of the shifted and rotated Ackley's function.
Table 50.41: Results obtained for the shifted and rotated Ackley's function with the given numbers of function evaluations (FEs).

n, f_bias   FEs        f_RSO21(x)   Algorithm   Class   Reference
500, ?      2.5·10^6   2.15E1       SaNSDE      DE      [3020]
500, ?      2.5·10^6   2.14E1       DECC-O      DE      [3020]
500, ?      2.5·10^6   2.16E1       DECC-G      DE      [3020]
1000, ?     5·10^6     2.16E1       SaNSDE      DE      [3020]
1000, ?     5·10^6     2.14E1       DECC-O      DE      [3020]
1000, ?     5·10^6     2.16E1       DECC-G      DE      [3020]
50.3.1.22 Shifted Rastrigin's Function
The generalized Rastrigin's function given in Section 50.3.1.9 is shifted in X by a vector o and in the objective space by a bias f_bias = -330. This function is defined as F_9 in [2626, 2627] and as F_4 in [2668]. Here, we specify it as f_RSO22 in Table 50.42 and provide a Java snippet showing how it can be computed in Listing 50.19 before listing some results from the literature in Table 50.43.
function:   f_RSO22(x) = \sum_{i=1}^{n} \left[ z_i^2 - 10\cos(2\pi z_i) + 10 \right] + f_bias   (50.108)

z_i = x_i - o_i   (50.109)

domain:   X = [-5, 5]^n   (50.110)
optimization:   minimization
optimum:   x^* = (o_1, o_2, .., o_n)^T, f_RSO22(x^*) = f_bias   (50.111)
separable:   yes
multimodal:   yes
shifted:   yes
rotated:   no
scalable:   yes

Table 50.42: Properties of the shifted Rastrigin's function.
private static final double PI2 = (Math.PI + Math.PI); // 2 * pi

private static final double[] DATA = new double[] {
  // shift data o_i, see material accompanying Tang et al. [2668]
};

public static final double computeShiftedRastrigin(final double[] ind) {
  int i;
  double res, d;

  res = 0d;
  for (i = (ind.length - 1); i >= 0; i--) {
    d = ind[i] - DATA[i]; // z_i = x_i - o_i
    res += ((d * d) - (10d * Math.cos(PI2 * d)) + 10d);
  }

  return res + -330.0d; // bias f_bias = -330, see [2668]
}

Listing 50.19: A Java code snippet computing the shifted Rastrigin's function.
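At x = o every summand is 0 - 10 cos(0) + 10 = 0, so f(o) = f_bias. A parameterized variant (hypothetical names, not part of the benchmark sources) makes this easy to verify:

```java
// Hypothetical parameterized variant of the shifted Rastrigin's function.
class ShiftedRastrigin {
  // x: candidate solution, o: shift vector, bias: objective-space offset
  static double f(double[] x, double[] o, double bias) {
    double r = 0d;
    for (int i = 0; i < x.length; i++) {
      double z = x[i] - o[i]; // z_i = x_i - o_i
      r += z * z - 10d * Math.cos(2d * Math.PI * z) + 10d;
    }
    return r + bias;
  }
}
```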
Table 50.43: Results obtained for the shifted Rastrigin's function with the given numbers of function evaluations (FEs).

n, f_bias   FEs        f_RSO22(x)   Algorithm   Class   Reference
500, ?      2.5·10^6   6.6E2        SaNSDE      DE      [3020]
500, ?      2.5·10^6   8.66         DECC-O      DE      [3020]
500, ?      2.5·10^6   4.5E2        DECC-G      DE      [3020]
1000, ?     5·10^6     3.2E3        SaNSDE      DE      [3020]
1000, ?     5·10^6     8.96E1       DECC-O      DE      [3020]
1000, ?     5·10^6     6.32E2       DECC-G      DE      [3020]
50.3.1.23 Shifted and Rotated Rastrigin's Function
The generalized Rastrigin's function given in Section 50.3.1.9 is shifted in X by a vector o, rotated by a matrix M, and shifted in the objective space by a bias f_bias. This function is defined as F_10 in [2626, 2627].
function:   f_RSO23(x) = \sum_{i=1}^{n} \left[ z_i^2 - 10\cos(2\pi z_i) + 10 \right] + f_bias   (50.112)

z = (x - o) \cdot M   (50.113)

domain:   X = [-5, 5]^n   (50.114)
optimization:   minimization
optimum:   x^* = (o_1, o_2, .., o_n)^T, f_RSO23(x^*) = f_bias   (50.115)
separable:   no
multimodal:   yes
shifted:   yes
rotated:   yes
scalable:   yes

Table 50.44: Properties of the shifted and rotated Rastrigin's function.
Table 50.45: Results obtained for the shifted and rotated Rastrigin's function with the given numbers of function evaluations (FEs).

n, f_bias   FEs        f_RSO23(x)   Algorithm   Class   Reference
500, ?      2.5·10^6   6.97E3       SaNSDE      DE      [3020]
500, ?      2.5·10^6   1.5E4        DECC-O      DE      [3020]
500, ?      2.5·10^6   5.33E3       DECC-G      DE      [3020]
1000, ?     5·10^6     1.27E4       SaNSDE      DE      [3020]
1000, ?     5·10^6     3.18E4       DECC-O      DE      [3020]
1000, ?     5·10^6     9.73E3       DECC-G      DE      [3020]
50.3.1.24 Schwefel's Problem 2.13
This function is defined as F_13 in [2626, 2627]. A and B are n × n matrices, a_ij and b_ij are integer random numbers from [-100, 100], and the α_j in α = (α_1, α_2, .., α_n)^T are random numbers in [-π, π].
function:   f_RSO24(x) = \sum_{i=1}^{n} (A_i - B_i(x))^2 + f_bias   (50.116)

A_i = \sum_{j=1}^{n} \left(a_{ij} \sin(\alpha_j) + b_{ij} \cos(\alpha_j)\right)   (50.117)

B_i(x) = \sum_{j=1}^{n} \left(a_{ij} \sin(x_j) + b_{ij} \cos(x_j)\right)   (50.118)

domain:   X = [-π, π]^n   (50.119)
optimization:   minimization
optimum:   x^* = (α_1, α_2, .., α_n)^T, f_RSO24(x^*) = f_bias   (50.120)
separable:   no
multimodal:   yes
shifted:   yes
rotated:   no
scalable:   yes

Table 50.46: Properties of Schwefel's problem 2.13.
Table 50.47: Results obtained for Schwefel's problem 2.13 with the given numbers of function evaluations (FEs).

n, v        FEs        f_RSO24(x)   Algorithm   Class   Reference
500, 100    2.5·10^6   2.53E2       SaNSDE      DE      [3020]
500, 100    2.5·10^6   2.81E1       DECC-O      DE      [3020]
500, 100    2.5·10^6   2.09E2       DECC-G      DE      [3020]
1000, 100   5·10^6     6.61E2       SaNSDE      DE      [3020]
1000, 100   5·10^6     7.52E1       DECC-O      DE      [3020]
1000, 100   5·10^6     3.56E2       DECC-G      DE      [3020]
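No code listing is given for this function. The following hypothetical sketch (our names; the random data a, b, and α are passed in rather than generated) evaluates Equations 50.116 to 50.118 directly:

```java
// Hypothetical sketch of Schwefel's problem 2.13 (F_13 in [2626, 2627]).
class Schwefel213 {
  // x: candidate, a and b: n-by-n matrices, alpha: optimum vector, bias: offset
  static double f(double[] x, double[][] a, double[][] b, double[] alpha,
      double bias) {
    int n = x.length;
    double r = 0d;
    for (int i = 0; i < n; i++) {
      double ai = 0d, bi = 0d;
      for (int j = 0; j < n; j++) {
        ai += a[i][j] * Math.sin(alpha[j]) + b[i][j] * Math.cos(alpha[j]); // A_i
        bi += a[i][j] * Math.sin(x[j]) + b[i][j] * Math.cos(x[j]);         // B_i(x)
      }
      r += (ai - bi) * (ai - bi);
    }
    return r + bias;
  }
}
```

At x = α the inner sums A_i and B_i(x) are computed from identical inputs, so every squared difference is zero and f(α) = f_bias.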
50.3.1.25 Shifted Griewank Function
The Griewank function provided in Section 50.3.1.11 is shifted in X by a vector o and in the objective space by a bias f_bias = -180. This function is defined as F_5 in [2668]. Here, we specify it as f_RSO25 in Table 50.48 and provide a Java snippet showing how it can be computed in Listing 50.20.
function:   f_RSO25(x) = \frac{1}{4000} \sum_{i=1}^{n} z_i^2 - \prod_{i=1}^{n} \cos\left(\frac{z_i}{\sqrt{i}}\right) + 1 + f_bias   (50.121)

z_i = x_i - o_i   (50.122)

domain:   X = [-600, 600]^n   (50.123)
optimization:   minimization
optimum:   x^* = (o_1, o_2, .., o_n)^T, f_RSO25(x^*) = f_bias   (50.124)
separable:   no
multimodal:   yes
shifted:   yes
rotated:   no
scalable:   yes

Table 50.48: Properties of the shifted Griewank function.
private static final double[] DATA = new double[] {
  // shift data o_i, see material accompanying Tang et al. [2668]
};

public static final double computeShiftedGriewank(final double[] ind) {
  int i;
  double r1, r2, d;

  r1 = 0d;
  r2 = 1d;
  for (i = (ind.length - 1); i >= 0; i--) {
    d = ind[i] - DATA[i]; // z_i = x_i - o_i
    r1 += (d * d);
    r2 *= (Math.cos(d / Math.sqrt(i + 1)));
  }

  return (1d + (r1 / 4000d) - r2) + -180d; // bias f_bias = -180, see [2668]
}

Listing 50.20: A Java code snippet computing the shifted Griewank function.
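At x = o the quadratic sum is 0 and the cosine product is 1, so 1 + 0 - 1 = 0 and f(o) = f_bias. A parameterized variant (hypothetical names, not part of the benchmark sources) demonstrates this:

```java
// Hypothetical parameterized variant of the shifted Griewank function.
class ShiftedGriewank {
  // x: candidate solution, o: shift vector, bias: objective-space offset
  static double f(double[] x, double[] o, double bias) {
    double s = 0d, p = 1d;
    for (int i = 0; i < x.length; i++) {
      double z = x[i] - o[i]; // z_i = x_i - o_i
      s += z * z;
      p *= Math.cos(z / Math.sqrt(i + 1d)); // product term, 1-based index
    }
    return 1d + s / 4000d - p + bias;
  }
}
```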
50.4 Close-to-Real Vehicle Routing Problem Benchmark
50.4.1 Introduction
Logistics are one of the most important services in our economy. The transportation of goods, parcels, and mail from one place to another is what connects different businesses and people. Without such transportation, basically all economic processes in an industrial society would break down.
Logistics is omnipresent. But is it efficient? We already discussed that the Traveling Salesman Problem (see Example E2.2 on page 45) is NP-complete in Section 12.3 on page 147. Solving the TSP means finding an efficient travel plan for a single salesman. Planning for a whole fleet of vehicles and a set of orders, under various constraints and conditions, seems to be at least as complex and, from an algorithmic and implementation perspective, is probably much harder. Hence, most logistics companies today still rely on manual planning. Knowing that, we can be pretty sure that they can benefit greatly from solving such Vehicle Routing Problems [497, 1091, 2162] in an optimal manner which reduces costs, travel distances, and CO2 emissions.
In Section 49.3 on page 536, we discuss a real-world application of Evolutionary Algorithms in the area of logistics: the search for optimal goods transportation plans, a problem which belongs to the class of Vehicle Routing Problems (VRP). In Section 49.3, you can find a detailed description and model gleaned from the situation in one of the divisions of a large logistics company (Section 49.3.2) as well as an in-depth discussion of related research work on Vehicle Routing Problems (Section 49.3.3). Here, we want to introduce a benchmark generator for a general class of VRPs which are close to the situation in today's parcel companies (if not even closer than the work introduced in Section 49.3).
Figure 50.29: A close-to-real-world Vehicle Routing Problem. (Truck t1, driver d1, and container c1 start in depot A; another container c2 is picked up in depot B; the orders o1 and o2 are picked up at the customer's pickup location within a given time window; the orders are delivered to the delivery location within a time window; finally, the truck can return home to the depot.)
50.4.1.1 Overview on the Involved Objects
A real-world logistics company knows five kinds of objects: orders, containers, trucks, drivers, and locations. With these objects, the situation faced by the planner can be completely described. In Figure 50.29 we sketch how these objects interact and describe this interaction below in detail.
1. Orders are transportation tasks given by customers. Each order is characterized by a pickup and a delivery location as well as a pickup and a delivery time window. The order must be picked up by a truck within the pickup time window at the pickup location and delivered within the delivery time window to the delivery location. See Section 50.4.2.2 on page 626 for more information.
2. Orders are placed into containers which are provided by the logistics company. A container can either be empty or contain exactly one order. We assume that each order fits into exactly one container. If an order was larger than one container, we can imagine that it will be split up into n single orders without loss of generality. A customer has to pay for at least one container. If the orders are smaller, the company still uses (and charges for) one full container. Containers reside at depots and, after all deliveries are performed, must be transported back to a depot (not necessarily their starting depot). For more information, see Section 50.4.2.4.
3. Trucks are the vehicles that transport the containers. The company can own different types of trucks with different capacities. The capacity limit per truck is usually between one and three containers. A truck can either be empty or transport at most as many containers as its capacity allows. A truck is always driven by one driver, but some trucks allow another driver to travel along (i.e., have a higher number of seats). Trucks also reside at depots and, after they have finished their tours, must arrive at a depot again (not necessarily their starting depot). A truck cannot drive without a driver. See Section 50.4.2.3 on page 627 for more information.
4. Drivers work for the logistics company and drive the trucks. Each driver has a home depot to which she needs to return after the orders have been delivered. For more information, see Section 50.4.2.5 on page 628.
5. Locations are places on the map. The customers reside at locations and deliveries take place between locations. Depots are special locations: here, the logistics company stores its containers, employs its drivers, and parks its trucks. Trucks pick up orders at given locations and deliver them to other locations. Also, trucks may meet at locations and exchange their freight. See Section 50.4.2.1 on page 625 for more information.
50.4.1.2 Overview on the Optimization Goals and Constraints
Basically, a logistics company has two goals:
1. Pick up and deliver all orders within their time windows and, hence, fulfill its contracts with the customer. Every missed time window or undelivered order is considered a delivery error. The number of delivery errors must be minimized.
2. Orders should be delivered at the lowest possible costs. Hence, the costs should be minimized.
While trying to achieve these goals, it obviously has to obey the laws of physics and logic. The following constraints always hold:
1. No truck, container, order, or driver can be at more than one place at one point in time. A driver cannot drive a truck from A to B while simultaneously driving another truck from C to D. A container cannot be used to transport order O from C to D while, at the same time, being carried by another truck from C to E. And so on.
2. Trucks, containers, orders, and drivers can only leave from places where they actually reside: A truck which is parked at depot A cannot leave depot B (without driving there first). A truck driver who is located at depot C cannot get into a truck which is parked in depot D (without driving there first). A container located at depot E cannot be loaded into a truck residing in depot F (without being driven there first). And so on.
3. Driving from one location to another takes time. A truck cannot get from location A to a distant location B within 0 s. The time needed for getting from A to B depends on (1) the distance between A and B and (2) the maximum speed of the truck. A truck cannot go from one location to another faster than what the (Newtonian) laws of physics [2029] allow.
4. If an object (truck, driver, order, container) is moved from location A to location B, it must reside at location A before the move and will reside at location B after arrival (i.e., after the time has passed which the truck needs to get there). An object can be involved in at most one transportation move at any given time.
5. Objects can only be moved by trucks and trucks are driven by drivers: (a) A truck cannot move without a driver. (b) A driver cannot move without a truck. (c) A container cannot move without a truck. (d) An order cannot move without a container.
6. The capacity limits of the involved objects (trucks, containers) must be obeyed. A truck can only carry a finite weight and a container can only contain a finite volume (here: one order).
7. Orders can only be picked up during the pickup time interval and only be delivered during the delivery time interval.
8. No new objects appear during the planning process. No new trucks, drivers, or containers can somehow be created and no new orders come in.
9. The features of the involved objects do not change during the planning process. A truck cannot become faster or slower and the capacity limits of the involved objects cannot be increased or decreased.
For additional information, please see Section 50.4.4 on page 634.
50.4.1.3 Overview on the Input Data
The input data of an optimization algorithm for this benchmark problem comprises the sets of objects and their features describing a complete starting situation for the planning process.
1. The orders with their IDs, pickup and destination locations, and time windows.
2. The IDs of the containers and the corresponding starting locations (depots).
3. The trucks with their IDs, starting locations (depots), speed and capacity limits, as well as their costs (per kmh).
4. The drivers with their IDs, home depots, and salary (per day).
5. The distance matrix13 containing the distances between the different locations. Notice that, in the real world, these matrices are slightly asymmetric, meaning that the distance from location A to B may be (slightly) different from the distance between B and A. The distances are given in m.
50.4.1.4 Overview on the Optimization Task
The optimizer then must find a plan which involves the input objects shortly listed in Section 50.4.1.3. Such a plan, which can be constructed by any of the optimization algorithms discussed in Part III and Part IV, has the following features:
1. It assigns trucks, containers, and drivers to moves. A move is one atomic logistic action which is denoted by (a) a truck, (b) a (possibly empty) set of orders to be transported by that truck, (c) a non-empty set of truck drivers (contains at least one driver), (d) a (possibly empty) set of containers which will contain the orders, (e) a starting location, (f) a starting time, and (g) a destination location. Notice that the arrival time at the destination is determined by the distance between the start and destination location, the truck's average speed, and the starting time. See Section 50.4.3.1 on page 632 for more information.
2. A plan may consist of an arbitrary number of such moves. See Section 50.4.3.2 on page 634 for more information.
3. This plan must fulfill all the constraints given in Section 50.4.1.2 while minimizing the two objectives, cost and delivery errors.
50.4.2 Involved Objects
50.4.2.1 Locations
A location is a place where a truck can drive to. Each location l = (id, isDepot) has the following features:
1. A unique identifier id which is a string that starts with an uppercase L followed by six decimal digits. Example: L000021.
2. A Boolean value isDepot which is true if the location denotes a depot or false if it is a normal place (such as a customer's location). Trucks, containers, and drivers always reside at depots at the beginning of a planning period and must return to depots at the end of a plan. Example: true.
In our benchmark Java implementation given in Section 57.3, locations are represented by the class Location given in Listing 57.24 on page 914. This class allows creating and reading textual representations of the location data. Hence, using our benchmark is not bound to using the Java language. A list of locations can be given in the form of Listing 50.21.
13 http://en.wikipedia.org/wiki/Distance_matrix [accessed 2010-11-09]
Listing 50.21: The location data of the first example problem instance.

%id is_depot
L000000 true
L000001 true
L000002 true
L000003 true
L000004 true
L000005 false
L000006 false
L000007 false
L000008 false
L000009 true
L000010 false
L000011 false
L000012 false
L000013 false
L000014 false
L000015 true
L000016 false
L000017 false
L000018 false
L000019 true
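Because the format is a plain whitespace-separated text file, it can be parsed in a few lines in any language. The following hypothetical sketch (our class, not the benchmark's Location class from Listing 57.24) reads one non-comment line of the format shown above:

```java
// Hypothetical parser for one line of the location data format "Lxxxxxx true|false".
class LocationRecord {
  final String id;       // location identifier, e.g. "L000009"
  final boolean isDepot; // true if the location is a depot

  LocationRecord(String id, boolean isDepot) {
    this.id = id;
    this.isDepot = isDepot;
  }

  // parse one data line; lines starting with '%' are headers and must be skipped
  static LocationRecord parse(String line) {
    String[] parts = line.trim().split("\\s+");
    return new LocationRecord(parts[0], Boolean.parseBoolean(parts[1]));
  }
}
```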
50.4.2.2 Orders
An order o = (id, p_s, p_e, p_l, d_s, d_e, d_l) is a delivery task which is booked by a customer from the logistics company. A customer basically says: "Take order o from location p_l to location d_l. Pick up the order between the times p_s and p_e and deliver it within the time window [d_s, d_e]." An order is hence described by the following data elements:
1. The order ID id. Order IDs start with an uppercase O followed by a six-digit decimal number. The order IDs form a strict sequence of numbers. The order ID gives no information about the pickup or delivery time of an order. Example: O000004.
2. The pickup window start time p_s ∈ N_0. A positive natural number which denotes the start of the pickup window in milliseconds, measured from the beginning of the planning period. These numbers have long precision, i.e., are 64-bit numbers. Example: 2 700 000.
3. The pickup window end time p_e ∈ N_0. A positive natural number which denotes the end of the pickup window in milliseconds, measured from the beginning of the planning period. These numbers have long precision, i.e., are 64-bit numbers. It always holds that p_e > p_s. Example: 5 400 000.
4. The pickup location p_l is where the order has to be picked up. An order can only be picked up during its pickup time window, i.e., at times t with p_s ≤ t ≤ p_e. The pickup location is denoted with a location identifier. Example: L000016. All pick-ups outside the pickup window are delivery errors.
5. The delivery window start time d_s ∈ N_0. A positive natural number which denotes the start of the delivery window in milliseconds, measured from the beginning of the planning period. These numbers have long precision, i.e., are 64-bit numbers. Example: 2 700 000.
6. The delivery window end time d_e ∈ N_0. A positive natural number which denotes the end of the delivery window in milliseconds, measured from the beginning of the planning period. These numbers have long precision, i.e., are 64-bit numbers. It always holds that d_e > d_s. Notice that the pickup and delivery window may overlap, but it will always hold that d_s ≥ p_s. Example: 7 200 000.
50.4. CLOSE-TO-REAL VEHICLE ROUTING PROBLEM BENCHMARK 627
Listing 50.22: The order data of the first example problem instance.
1 %id pickup_window_start pickup_window_end pickup_location delivery_window_start delivery_window_end delivery_location
2 O000000 2700000 5400000 L000016 2700000 7200000 L000000
3 O000001 1800000 5400000 L000016 2700000 7200000 L000000
4 O000002 4500000 6300000 L000010 5400000 8100000 L000006
5 O000003 4500000 7200000 L000006 4500000 8100000 L000000
6 O000004 1800000 2700000 L000008 2700000 3600000 L000011
7 O000005 0 3600000 L000011 0 5400000 L000018
8 O000006 1800000 3600000 L000018 2700000 5400000 L000005
9 O000007 2700000 5400000 L000005 2700000 6300000 L000013
10 O000008 2700000 7200000 L000010 2700000 9900000 L000008
11 O000009 3600000 5400000 L000002 4500000 7200000 L000004
7. The delivery location d
l
is where the order has to be delivered to. An order can only
be delivered during its delivery time window, i. e., at times t with d
s
t d
e
. The
delivery location is denoted with a location identier. Example: L000000. If a truck
arrives at a delivery location before d
s
, it has to wait until d
s
before it can deliver the
order. All deliveries outside the delivery window are delivery errors.
In Listing 57.25 on page 915, we provide a Java class to hold the order information. This
class can produce and read a textual representation of the data mentioned above. An
example for this representation is given in Listing 50.22.
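The record format of Listing 50.22 is simple whitespace-separated text. As a quick illustration (not the benchmark's own parser, which is the class given in Listing 57.25), a minimal hypothetical reader for one such order line could look as follows:

```java
import java.util.Objects;

/** Hypothetical, simplified holder for one order record of Listing 50.22. */
public record OrderRecord(String id, long pickupStart, long pickupEnd, String pickupLoc,
                          long deliveryStart, long deliveryEnd, String deliveryLoc) {

  /** Parses one whitespace-separated line such as
      "O000004 1800000 2700000 L000008 2700000 3600000 L000011". */
  public static OrderRecord parse(String line) {
    String[] f = Objects.requireNonNull(line).trim().split("\\s+");
    return new OrderRecord(f[0],
        Long.parseLong(f[1]), Long.parseLong(f[2]), f[3],
        Long.parseLong(f[4]), Long.parseLong(f[5]), f[6]);
  }
}
```

The class name and accessor names here are illustrative only; the benchmark framework uses its own class.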
50.4.2.3 Trucks
Trucks transport orders from one location to another. Each truck t =
(id, s_l, maxC, maxD, costHKm, avgSpeed) is characterized by
1. A unique identifier id which is a string that starts with an uppercase T followed by six
decimal digits. Example: T000003.
2. A starting location s_l, a depot, where the truck is parked. This location is denoted by
a location identifier. Example: L000015.
3. The maximum number maxC ∈ N_1 of containers that this truck can transport. This
number, usually 0 < maxC ≤ 3, can be different for each truck. A truck cannot carry
more than maxC containers. Example: 2.
4. The maximum number maxD ∈ N_1 of drivers which can be in the truck. A truck must
be driven by at least one driver. It may, however, be possible that a truck has more
than one seat, so that two truck drivers can drive to their home depot together on
their return trip, for example. Example: 2.
5. The costs per hour-kilometer costHKm in USD-cent (or RMB-fen). If the truck t drives
for m kilometers needing n hours, the total costs of this move are n · m · costHKm.
These costs include, for example, fuel consumption and material/maintenance costs
for the truck. These costs, of course, may be different for different trucks. Example:
170.
6. The average speed avgSpeed in km/h with which the truck can move. Of course, this
speed can again be different from truck to truck; some larger trucks may be slower,
smaller trucks may be faster. The same goes for older and newer trucks. Example:
45.
628 50 BENCHMARKS
Listing 50.23: The truck data of the first example problem instance.
1 %id start_location max_containers max_drivers costs_per_h_km avg_speed_km_h
2 T000000 L000000 3 2 230 50
3 T000001 L000019 2 1 185 50
4 T000002 L000015 3 1 215 30
5 T000003 L000002 3 1 160 50
6 T000004 L000004 1 1 185 50
7 T000005 L000015 1 2 170 50
8 T000006 L000001 2 1 145 50
9 T000007 L000019 2 1 230 40
10 T000008 L000019 2 1 145 45
11 T000009 L000003 2 2 200 55
12 T000010 L000002 3 1 270 45
13 T000011 L000003 2 1 230 40
14 T000012 L000009 3 1 160 50
A truck will always be located at a depot at the beginning of a planning period and must
also be located at a depot after the plan was executed. The end depot may, however, be a
different one from the start depot.
In Listing 57.26 on page 918, we provide a Java class to hold the truck information.
This class can produce and read a textual representation of the data mentioned above. An
example for this representation is given in Listing 50.23.
50.4.2.4 Containers
Containers c = (id, s_l) are simple objects which are used to package the orders for transportation. They are characterized by
1. The container ID id. Container IDs start with an uppercase C followed by a six-digit
decimal number. The container IDs form a strict sequence of numbers. Example:
C000003.
2. The start location s_l, the depot where the container resides. s_l is a location ID.
Example: L000000.
A container will always be located at a depot at the beginning of a planning period and
must also be located at a depot after the plan was executed. The end depot may, however,
be a different one from the start depot.
In Listing 57.27 on page 920, we provide a Java class to hold the container information.
This class can produce and read a textual representation of the data mentioned above. An
example for this representation is given in Listing 50.24.
50.4.2.5 Drivers
Drivers d = (id, s_l, costs) are needed to drive the trucks. They are characterized by
1. The driver ID id. Driver IDs start with an uppercase D followed by a six-digit decimal
number. The driver IDs form a strict sequence of numbers. Example: D000007.
2. The home location s_l, the depot where the driver starts her work. s_l is a location
ID. Example: L000002.
3. The costs per day costs denote the amount in USD-cent (or RMB-fen) that has to be
paid for each day on which the driver is working. Even if the driver only works for
one millisecond of the day, the full amount must be paid.
Listing 50.24: The container data of the first example problem instance.
1 %id start
2 C000000 L000000
3 C000001 L000000
4 C000002 L000019
5 C000003 L000015
6 C000004 L000019
7 C000005 L000019
8 C000006 L000019
9 C000007 L000001
10 C000008 L000001
11 C000009 L000004
12 C000010 L000000
13 C000011 L000019
14 C000012 L000009
15 C000013 L000000
Listing 50.25: The driver data of the first example problem instance.
1 %id home costs_per_day
2 D000000 L000000 7000
3 D000001 L000000 11000
4 D000002 L000019 9000
5 D000003 L000015 10000
6 D000004 L000003 11000
7 D000005 L000009 7000
8 D000006 L000001 10000
9 D000007 L000019 7000
10 D000008 L000019 8000
11 D000009 L000009 11000
12 D000010 L000019 9000
13 D000011 L000001 11000
14 D000012 L000009 10000
15 D000013 L000009 11000
A driver will always be located at a depot at the beginning of a planning period and must
also be located at exactly this depot after the plan was executed.
In Listing 57.28 on page 921, we provide a Java class to hold the driver information.
This class can produce and read a textual representation of the data mentioned above. An
example for this representation is given in Listing 50.25.
50.4.2.6 Distance Matrix
The asymmetric distance matrix contains the distances between the different locations in
meters. If there are n locations, the distance matrix is an n × n matrix.
The element δ_{i,j} at position i, j ∈ N_0 denotes the distance between the locations i and
j. The distance between L000009 and L000017 would be found at δ_{9,17}. Notice that this
may be different from the distance between L000017 and L000009, which can be found at
δ_{17,9}. Since the resolution of the distance matrix is meters, it is easy to understand that
these values may slightly differ. If highways are used, for example, the on-ramps and exits of the
highway may be at slightly different positions.
The time T(t, i, j) in milliseconds that a truck t needs to drive from location i to location
j can be computed as specied in Equation 50.125.
T(t, i, j) = ⌈ (δ_{i,j} / (1000 · t.avgSpeed)) · 3600 · 1000 ⌉ = ⌈ (δ_{i,j} · 3600) / t.avgSpeed ⌉   (50.125)
The driving costs C(t, i, j) in USD-cent (or RMB-fen) consumed by a truck t when driving
from location i to location j can be computed as specied in Equation 50.126.
C(t, i, j) = ⌈ (T(t, i, j) / 3 600 000) · (δ_{i,j} / 1000) · t.costHKm ⌉   (50.126)
These costs do not include the costs that have to be spent for the drivers. In Listing 57.29
on page 923, we provide a Java class to hold the distance information. This class can
produce and read a textual representation of the data mentioned above. An example for
this representation is given in Listing 50.26.
Listing 50.26: The distance data of the first example problem instance.
1 0 15700 10500 16450 5600 6750 5000 4900 5200 14100 14000 11850 13150 14200 13700 12800 14450 18050 12350 4550
2 15550 0 17050 22950 12150 13300 11550 11450 11750 20600 20550 18350 19650 20650 20200 19300 20900 24600 18800 11050
3 10450 17050 0 17950 7100 8150 6450 6350 6700 15500 15500 13300 14500 15550 15150 14150 15800 19500 13700 5950
4 16400 23100 18050 0 12150 13300 11550 12400 11750 6500 2450 4650 3500 4050 4500 5350 7050 10750 5100 12000
5 5550 12250 7100 12150 0 1150 650 1450 400 9750 9700 7500 8850 9800 9400 8400 10100 13750 7950 1150
6 6650 13300 8200 13300 1150 0 1700 2600 1550 10850 10850 8600 9900 10950 10500 9550 11200 14900 9100 2200
7 4950 11650 6500 11650 650 1750 0 850 200 9200 9100 6900 8250 9300 8900 7850 9500 13200 7400 450
8 4800 11500 6400 12300 1450 2600 900 0 1050 9950 9900 7700 8950 10050 9600 8600 10250 13900 8100 350
9 5200 11850 6700 11800 400 1550 200 1050 0 9400 9300 7150 8550 9550 9150 8100 9700 13450 7650 750
10 13950 20600 15500 6500 9750 10850 9150 9850 9350 0 4050 2300 3600 2400 2300 1300 1500 5250 1850 9500
11 13950 20600 15500 2450 9750 10850 9150 9850 9300 4050 0 2250 1050 1700 1950 2950 4650 8300 2700 9500
12 11800 18400 13350 4700 7550 8700 6900 7700 7150 2250 2250 0 1300 2350 1950 900 2600 6300 450 7300
13 13050 19700 14650 3500 8800 10000 8200 8950 8400 3600 1050 1300 0 2800 3000 2200 3900 7600 1750 8600
14 14000 20700 15600 4050 9800 10950 9250 10000 9450 2400 1700 2350 2750 0 450 1450 3100 6750 1900 9550
15 13650 20250 15150 4400 9400 10500 8850 9550 9050 2350 1950 1950 3000 450 0 1000 2650 6350 1500 9150
16 12700 19300 14250 5400 8450 9550 7850 8600 8100 1300 2950 950 2250 1450 1000 0 1700 5350 550 8200
17 14350 21050 15900 7000 10100 11250 9500 10300 9750 1500 4600 2600 3900 3100 2650 1700 0 3650 2200 9950
18 18050 24650 19550 10750 13750 14850 13200 13900 13400 5200 8300 6250 7550 6750 6300 5400 3650 0 5850 13550
19 12150 18900 13700 5100 7950 9050 7350 8100 7550 1850 2700 450 1750 1900 1500 550 2200 5850 0 7750
20 4450 11100 6000 11950 1100 2200 450 350 650 9550 9500 7300 8550 9700 9200 8200 9850 13550 7750 0
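Equation 50.125 and Equation 50.126 can be checked directly against the example data: in Listing 50.27, truck T000000 (avgSpeed 50 km/h) leaves L000000 at 2 610 000 ms and, with δ_{0,16} = 14 450 m, the next move starts at 3 650 400 ms. A small sketch of both formulas in integer arithmetic (assuming the brackets in the equations denote rounding up; the benchmark's own implementation is the class of Listing 57.29):

```java
/** Hypothetical helper implementing Equation 50.125 and Equation 50.126
    with ceiling integer division (a rounding assumption). */
public final class TravelMath {

  /** Equation 50.125: time in ms = distance[m] * 3600 / avgSpeed[km/h], rounded up. */
  public static long travelTimeMs(long distanceMeters, long avgSpeedKmH) {
    long num = distanceMeters * 3600L;
    return (num + avgSpeedKmH - 1L) / avgSpeedKmH; // ceiling division
  }

  /** Equation 50.126: cost = hours * kilometers * costHKm, rounded up. */
  public static long drivingCost(long distanceMeters, long avgSpeedKmH, long costHKm) {
    long t = travelTimeMs(distanceMeters, avgSpeedKmH); // milliseconds
    long num = t * distanceMeters * costHKm;            // ms * m * cost
    long den = 3_600_000L * 1000L;                      // (ms per h) * (m per km)
    return (num + den - 1L) / den;
  }

  private TravelMath() {} // no instances
}
```

For the move above, travelTimeMs(14450, 50) yields 1 040 400 ms, so 2 610 000 + 1 040 400 = 3 650 400 matches the listing.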
50.4.3 Problem Space
50.4.3.1 Move: The Atomic Action
A candidate solution for this optimization problem is a list of atomic transportation actions
which we here call moves. A move m = (t, o, d, c, s_l, s_t, d_l) is a tuple consisting of the
following elements.
1. A truck t denoted by a truck identifier. Example: T000000.
2. A (possibly empty) set o of orders to be transported by the truck t. Notice that
0 ≤ |o| ≤ |c|. Example: (O000002).
3. A non-empty set d of drivers, one of which will drive the truck (whereas the others, if
any, just use the truck for transportation). Notice that 0 < |d| ≤ t.maxD. Example:
(D000000, D000001).
4. A (possibly empty) set c of containers to be transported by the truck t. Notice that
0 ≤ |c| ≤ t.maxC. Example: (C000001, C000000).
5. A starting location s_l from where the truck will start to drive. Example: L000010.
6. A starting time s_t at which the truck will leave the starting location (measured in
milliseconds from the beginning of the planning period). Example: 5 691 600.
7. A destination location d_l ≠ s_l where the truck will drive to. Example: L000006.
It is assumed that truck t will arrive at m.d_l at time m.s_t + T(t, m.s_l, m.d_l) (see Equation 50.125), hence it is not necessary to store an arrival time in the tuple m. One part of
the costs of the move can be computed according to Equation 50.126. The other part must
be determined by adding the driver costs (see Section 50.4.2.5).
In Listing 57.30 on page 925, we provide a Java class to hold the information of a single
move. This class can produce and read a textual representation of the data mentioned above.
An example for this representation is given in Listing 50.27.
Listing 50.27: The move data of the solution to the first example problem instance.
1 %truck orders drivers containers from start time to
2 T000000 () (D000000,D000001) (C000001,C000000) L000000 2610000 L000016
3 T000000 (O000001,O000000) (D000000,D000001) (C000001,C000000) L000016 3650400 L000000
4 T000000 () (D000000,D000001) (C000001,C000000) L000000 4683600 L000010
5 T000000 (O000002) (D000000,D000001) (C000001,C000000) L000010 5691600 L000006
6 T000000 (O000003) (D000000,D000001) (C000001,C000000) L000006 6350400 L000000
7 T000001 () (D000002) (C000002) L000019 2520000 L000008
8 T000001 (O000004) (D000002) (C000002) L000008 2566800 L000011
9 T000001 (O000005) (D000002) (C000002) L000011 3081600 L000018
10 T000001 (O000006) (D000002) (C000002) L000018 3114000 L000005
11 T000001 (O000007) (D000002) (C000002) L000005 3765600 L000013
12 T000001 () (D000002) (C000002) L000013 4554000 L000010
13 T000001 (O000008) (D000002) (C000002) L000010 4676400 L000008
14 T000001 () (D000002) (C000002) L000008 5346000 L000019
15 T000002 () (D000003) (C000003) L000015 3060000 L000002
16 T000002 (O000009) (D000003) (C000003) L000002 4770000 L000004
17 T000002 () (D000003) (C000003) L000004 5619600 L000015
1 Total Errors : 4
2 Total Costs : 63863
3 Total Time Error : 0
4 Container Errors : 0
5 Driver Errors : 0
6 Order Errors : 4
7 Truck Errors : 0
8 Pickup Time Error : 0
9 Delivery Time Error : 0
10 Truck Costs : 5863
11 Driver Costs : 58000
Listing 50.28: An example output of the features of an example plan (with errors).
50.4.3.2 Plans: The Candidate Solutions
In Listing 50.27, we do not display a single move. Instead, a set of multiple moves is
presented. Here, we refer to such sets of moves as plans x = (m_1, m_2, . . . ). Trucks first drive
to the pickup location of the orders. They pick up the orders, deliver them, and then drive
back. Sometimes, a truck may not drive back directly but could pick up another order on
the way.
The problem space X thus consists of all possible plans x, i. e., of lists of all possible
moves m. It is the goal of the optimization process to construct good plans.
The solutions to an instance of the presented type of Vehicle Routing Problem follow
exactly the format given in Listing 50.27. They are textual representations of the candidate
solutions x and, hence, can be created using an arbitrary synthesis/optimization method
implemented in an arbitrary programming language.
50.4.4 Objectives and Constraints
The objectives of solving the (benchmarking) real-world Vehicle Routing Problem are quite
easy, as already stated in Section 50.4.1.2 on page 623: (a) We want to deliver all orders
in time (b) at the least possible costs. Additional to these objectives, we have to consider
constraints: the synthesized plans must be physically sound.
Evaluating a candidate solution of this kind of problem is relatively complex. Basically,
we can do this as follows:
1. Initialize an internal data structure that keeps track of the positions and arrival times
of all objects by placing all objects at their starting locations.
2. Sort the list x ∈ X of moves m ∈ x according to the starting times of the moves.
3. For each move, starting with the earliest one, do:
(a) Compute the costs of the move and add them to an internal cost counter.
(b) Update the positions and arrival times of all involved objects in each step.
(c) After each step, check if any physical or time window constraint has been violated.
If so, count the number of errors.
The feature values obtained by such a computation can be used by the objective functions.
There is no clear rule on how to design the objective functions: we already showed in Task 67
on page 238 that, often, the same optimization criterion can be expressed in different ways.
Hence, we just provide a class Compute which determines some key features of a candidate
solution (plan) x in Listing 57.31 on page 931. This class can evaluate a plan and generate
outputs such as the one in Listing 50.28. The class Compute allows us to process the moves
of a plan x step-by-step and to check, in each step, if new errors occurred or how the total
costs changed. This would also allow for a successive generation of solutions.
Notice that, if you wish to use the evaluation capabilities of this class without implementing your solution in Java, you can still produce plans as presented in Listing 50.27 on
page 633, parse them in with Java, and then apply Compute to the parsed list.
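The evaluation procedure of Section 50.4.4 can be reduced to a small skeleton: sort the moves by starting time, then process them in order while accumulating costs (a full implementation would also update object positions and count constraint violations). The following hypothetical fragment shows only that control flow; Move here is a stand-in record with a precomputed cost, not the full tuple from Section 50.4.3.1:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/** Skeleton of the plan-evaluation loop: sort by start time, accumulate costs. */
public final class PlanEval {

  /** Simplified stand-in for a move: start time in ms and precomputed cost. */
  public record Move(long startTime, long cost) {}

  public static long totalCost(List<Move> plan) {
    List<Move> sorted = new ArrayList<>(plan);
    sorted.sort(Comparator.comparingLong(Move::startTime)); // step 2: sort the moves
    long costs = 0L;
    for (Move m : sorted) { // step 3: process moves from earliest to latest
      costs += m.cost();    // step 3(a); position updates and error checks omitted
    }
    return costs;
  }

  private PlanEval() {} // no instances
}
```

The benchmark's own evaluator is the class Compute of Listing 57.31; this sketch only mirrors its outer loop.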
50.4.5 Framework
As already outlined in the previous sections, we provide a complete framework for this
benchmark problem and list the most important source files in Section 57.3 on page 914.
Additional to the classes listed representing the essential objects involved in the optimization
process (see Section 50.4.2 and Section 50.4.3), we also provide an instance generator which
can create datasets such as those given as examples in the two mentioned sections.
With this instance generator, we create six benchmark instances which differ in the
number of involved objects. Naturally, we would expect that smaller problem instances
(such as case01 which was used as example in our discussion on the benchmark objects)
are easier to solve. Solutions to the larger problem instances should be much harder to
discover. For each of the benchmark sets, we provide a baseline solution. This solution
is not necessarily the best possible solution to the benchmark dataset, but it fulfills all
constraints. As a matter of fact, the best solutions to the benchmark datasets are unknown and
yet to be discovered.
These benchmark datasets can be loaded via a function loadInstance of the class
InstanceLoader. Notice that each of the classes Container, Driver, Location, Order, Truck,
and DistanceMatrix contains a static variable which allows accessing the list of instances of
that class as defined in the benchmark dataset. The class Driver, for example, has a static
variable called DRIVERS which lists all of its instances. Container has a similar variable called
CONTAINERS.
Part VI
Background
Chapter 51
Set Theory
Set theory¹ [763, 1169, 2615] is an important part of the mathematical theory. Numerous
other disciplines like algebra, analysis, and topology are based on it. Since set theory is not
the topic of this book but a mere prerequisite, this chapter (and the ones to follow) will
just briefly introduce it. We make heavy use of wild definitions and, in some cases, even
rough simplifications. More information on the topics discussed can be retrieved from the
literature references and, again, from [2938].
Set theory can be divided into naïve set theory² and axiomatic set theory³. The first
approach, the naïve set theory, is inconsistent and therefore not regarded in this book.
Definition D51.1 (Set). A set is a collection of objects considered as a whole⁴. The
objects of a set are called elements or members. They can be anything, from numbers and
vectors, to complex data structures, algorithms, or even other sets. Sets are conventionally
denoted with capital letters, A, B, C, etc. while their elements are usually referred to with
small letters a, b, c.
51.1 Set Membership
The expression a ∈ A means that the element a is a member of the set A while b ∉ A means
that b is not a member of A. There are three common forms to define sets:
1. With their elements in braces: A = {1, 2, 3} defines a set A containing the three
elements 1, 2, and 3. {1, 1, 2, 3} = {1, 2, 3} since the curly braces only denote set
membership.
2. The same set can be specified using logical operators to describe its elements: ∀a ∈
N_1 : (a ≥ 1) ∧ (a < 4) ⇔ a ∈ A.
3. A shortcut for the previous form is to denote the logical expression in braces, like
A = {a : (a ≥ 1) ∧ (a < 4), a ∈ N_1}.
The cardinality of a set A is written as |A| and stands for the count of elements in the
set.
1 http://en.wikipedia.org/wiki/Set_theory [accessed 2007-07-03]
2 http://en.wikipedia.org/wiki/Naive_set_theory [accessed 2007-07-03]
3 http://en.wikipedia.org/wiki/Axiomatic_set_theory [accessed 2007-07-03]
4 http://en.wikipedia.org/wiki/Set_%28mathematics%29 [accessed 2007-07-03]
51.2 Special Sets
Special sets used in the context of this book are
1. The empty set ∅ = {} contains no elements (|∅| = 0).
2. The natural numbers⁵ N_1 without zero include all whole numbers bigger than 0.
(N_1 = {1, 2, 3, . . . })
3. The natural numbers including zero (N_0) comprise all whole numbers bigger than or
equal to 0. (N_0 = {0, 1, 2, 3, . . . })
4. Z is the set of all integers, positive and negative. (Z = {. . . , -2, -1, 0, 1, 2, . . . })
5. The rational numbers⁶ Q are defined as Q = {a/b : a, b ∈ Z, b ≠ 0}.
6. The real numbers⁷ R include all rational and irrational numbers (such as √2).
7. R⁺ denotes the positive real numbers including 0: (R⁺ = [0, +∞)).
8. C is the set of complex numbers⁸. When needed in the context of this book, we
abbreviate the imaginary unit with i. The real and imaginary parts of a complex
number z are denoted as Re z and Im z, respectively.
9. We define B = {0, 1} as the set of values a Boolean variable may take on, where
0 ≡ false and 1 ≡ true.

N_1 ⊂ N_0 ⊂ Z ⊂ Q ⊂ R ⊂ C (51.1)
N_1 ⊂ N_0 ⊂ R⁺ ⊂ R ⊂ C (51.2)
For these numerical sets, special subsets, so-called intervals, can be specified. [1, 5) is a set
which contains all the numbers starting from (including) 1 up to (exclusively) 5. (1, 5], on
the other hand, contains all numbers bigger than 1 and up to inclusively 5. In order to avoid
ambiguities, such sets will always be used in a context where it is clear if the numbers in the
set are natural or real.
51.3 Relations between Sets
Between two sets A and B, the following subset relations may exist: If all elements of A are
also elements of B, A ⊆ B holds.

(a ∈ A ⇒ a ∈ B) ⇔ A ⊆ B (51.3)

Hence, {1, 2, 3} ⊆ {1, 2, 3, 4} and {x, y, z} ⊆ {x, y, z}, but, for instance, {u, v} ⊆ {x, y, z} does not hold. If
additionally there is at least one element b in B which is not in A, then A ⊂ B.

(A ⊆ B) ∧ (∃b ∈ B : ¬(b ∈ A)) ⇔ A ⊂ B (51.4)

Thus, {1, 2, 3} ⊂ {1, 2, 3, 4} but not {x, y, z} ⊂ {x, y, z}. The set relations can be negated by
crossing them out, i. e., A ⊄ B means ¬(A ⊂ B) and A ⊈ B denotes ¬(A ⊆ B), respectively.
Figure 51.1: Set operations performed on sets A and B inside a set 𝔸 (union A ∪ B, intersection A ∩ B, difference A \ B, and complement).
51.4 Operations on Sets
Let us now define the possible unary and binary operations on sets, some of which are
illustrated in Figure 51.1.
Definition D51.2 (Set Union). The union⁹ C of two sets A and B is written as A ∪ B
and contains all the objects that are element of at least one of the sets.

C = A ∪ B ⇔ ((c ∈ A) ∨ (c ∈ B) ⇔ (c ∈ C)) (51.5)
A ∪ B = B ∪ A (51.6)
A ∪ ∅ = A (51.7)
A ∪ A = A (51.8)
A ⊆ A ∪ B (51.9)
Definition D51.3 (Set Intersection). The intersection¹⁰ D of two sets A and B, denoted
by A ∩ B, contains all the objects that are elements of both of the sets. If A ∩ B = ∅, meaning
that A and B have no elements in common, they are called disjoint.

D = A ∩ B ⇔ ((d ∈ A) ∧ (d ∈ B) ⇔ (d ∈ D)) (51.10)
A ∩ B = B ∩ A (51.11)
A ∩ ∅ = ∅ (51.12)
A ∩ A = A (51.13)
A ∩ B ⊆ A (51.14)
5 http://en.wikipedia.org/wiki/Natural_numbers [accessed 2008-01-28]
6 http://en.wikipedia.org/wiki/Rational_number [accessed 2008-01-28]
7 http://en.wikipedia.org/wiki/Real_numbers [accessed 2008-01-28]
8 http://en.wikipedia.org/wiki/Complex_number [accessed 2008-01-29]
9 http://en.wikipedia.org/wiki/Union_%28set_theory%29 [accessed 2007-07-03]
10 http://en.wikipedia.org/wiki/Intersection_%28set_theory%29 [accessed 2007-07-03]
Definition D51.4 (Set Difference). The difference E of two sets A and B, A \ B, contains
the objects that are element of A but not of B.

E = A \ B ⇔ ((e ∈ A) ∧ (e ∉ B) ⇔ (e ∈ E)) (51.15)
A \ ∅ = A (51.16)
∅ \ A = ∅ (51.17)
A \ A = ∅ (51.18)
A \ B ⊆ A (51.19)
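In Java, these set operations map directly onto java.util.Set; a small sketch:

```java
import java.util.HashSet;
import java.util.Set;

/** The union, intersection, and difference operations expressed via java.util.Set. */
public final class SetOps {

  /** Union A ∪ B (Equation 51.5). */
  public static <T> Set<T> union(Set<T> a, Set<T> b) {
    Set<T> c = new HashSet<>(a);
    c.addAll(b);
    return c;
  }

  /** Intersection A ∩ B (Equation 51.10). */
  public static <T> Set<T> intersection(Set<T> a, Set<T> b) {
    Set<T> d = new HashSet<>(a);
    d.retainAll(b);
    return d;
  }

  /** Difference A \ B (Equation 51.15). */
  public static <T> Set<T> difference(Set<T> a, Set<T> b) {
    Set<T> e = new HashSet<>(a);
    e.removeAll(b);
    return e;
  }

  private SetOps() {} // no instances
}
```

Each method copies its first argument so that the inputs stay unmodified, mirroring the fact that the set operations above produce new sets.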
Definition D51.5 (Set Complement). The complementary set Ā of the set A in a set 𝔸
includes all the elements which are in 𝔸 but not element of A:

A ⊆ 𝔸 ⇒ Ā = 𝔸 \ A (51.20)
Definition D51.6 (Cartesian Product). The Cartesian product¹¹ P of two sets A and
B, denoted P = A × B, is the set of all ordered pairs (a, b) whose first component is an
element from A and the second is an element of B.

P = A × B ⇔ P = {(a, b) : a ∈ A, b ∈ B} (51.21)
A^n = A × A × . . . × A (n times) (51.22)
Definition D51.7 (Countable Set). A set S is called countable¹² if there exists an
injective function¹³ which maps it to the natural numbers (which are countable), i. e.,
f : S → N_0.

Definition D51.8 (Uncountable Set). A set is uncountable if it is not countable, i. e.,
no such function exists for the set.

N_0, Z, Q, and B are countable; R, R⁺, and C are not.

Definition D51.9 (Power Set). The power set¹⁴ P(A) of the set A is the set of all subsets
of A.

p ∈ P(A) ⇔ p ⊆ A (51.23)
11 http://en.wikipedia.org/wiki/Cartesian_product [accessed 2007-07-03]
12 http://en.wikipedia.org/wiki/Countable_set [accessed 2007-07-03]
13 see definition of function on page 646
14 http://en.wikipedia.org/wiki/Axiom_of_power_set [accessed 2007-07-03]
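For a finite set with n elements, the power set has 2^n members and can be enumerated by interpreting each number in 0..2^n - 1 as a bitmask selecting a subset; a brief sketch:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/** Enumerates the power set of a (small) finite set given as a list. */
public final class PowerSet {

  public static <T> List<Set<T>> of(List<T> elements) {
    int n = elements.size(); // must stay below 31 for this int bitmask sketch
    List<Set<T>> result = new ArrayList<>();
    for (int mask = 0; mask < (1 << n); mask++) {
      Set<T> subset = new HashSet<>();
      for (int i = 0; i < n; i++) {
        if ((mask & (1 << i)) != 0) { // bit i selects element i
          subset.add(elements.get(i));
        }
      }
      result.add(subset);
    }
    return result;
  }

  private PowerSet() {} // no instances
}
```

For the set {1, 2}, the enumeration yields the four subsets ∅, {1}, {2}, and {1, 2}.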
51.5 Tuples
Definition D51.10 (Type). A type is a set of values that a variable, constant, function,
or similar entity may take on.
We can, for instance, specify the type T = {1, 2, 3}. A variable x which is an instance of
this type then can take on the values 1, 2, or 3.

Definition D51.11 (Tuple). A tuple¹⁵ is an ordered, finite sequence of elements, where
each element is an instance of a certain type.

To each position i in a tuple t, a type T_i is assigned. The element t[i] at a position i must
then be an element of T_i. Unlike sets, tuples may contain the same element more than
once.
In the context of this book, we define tuples with parentheses like (a, b, c) whereas sets
are specified using braces {a, b, c}. Since every item of a tuple may be of a different type,
(Monday, 23, {a, b, c}) is a valid tuple.
Definition D51.12 (Tuple Type). To formalize this relation, we define the tuple type T.
T specifies the basic sets for the elements of the tuples. If a tuple t meets the constraints
imposed on its values by T, we can write t ∈ T, which means that the tuple t is an instance
of T.

T = (T_0, T_1, .., T_{n-1}), n ∈ N_1 (51.24)
t = (t[0], t[1], .., t[n-1]) ∈ T ⇔ (t[i] ∈ T_i ∀i : 0 ≤ i < n) ∧ len(t) = len(T) (51.25)
51.6 Permutations
A permutation¹⁶ is a special tuple. A permutation is an arrangement of n elements where
each element is unique.

Definition D51.13 (Permutation). Without loss of generality, we assume that a permutation π of length n is a vector of natural numbers of that length for which the following
conditions hold:

π[i] ∈ 0..n-1 ∀i ∈ 0..n-1 (51.26)
∀i, j ∈ 0..n-1 : (π[i] = π[j] ⇔ i = j) (51.27)
∀p ∈ 0..n-1 ∃i ∈ 0..n-1 : π[i] = p (51.28)

Let the permutation space Π(n) be the space of all permutations π of length n for which the
conditions from Definition D51.13 hold.

15 http://en.wikipedia.org/wiki/Tuple [accessed 2007-07-03]
16 http://en.wikipedia.org/wiki/Permutations [accessed 2008-01-31]
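The three conditions of Definition D51.13 can be checked in linear time: every entry must lie in 0..n-1 and no value may repeat; by the pigeonhole principle, every value then occurs. A short sketch:

```java
/** Validity check for permutations in the sense of Definition D51.13. */
public final class Permutations {

  public static boolean isPermutation(int[] pi) {
    boolean[] seen = new boolean[pi.length];
    for (int v : pi) {
      if (v < 0 || v >= pi.length) return false; // violates Equation 51.26
      if (seen[v]) return false;                 // violates Equation 51.27
      seen[v] = true;
    }
    return true; // n distinct values in 0..n-1, so Equation 51.28 holds as well
  }

  private Permutations() {} // no instances
}
```

For example, (2, 0, 1) is a valid permutation of length 3, whereas (0, 0, 2) repeats a value and (0, 3, 1) contains an out-of-range entry.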
51.7 Binary Relations
Definition D51.14 (Binary Relation). A binary¹⁷ relation¹⁸ R is defined as an ordered
triplet (A, B, P) where A and B are arbitrary sets, and P is a subset of the Cartesian product
A × B (see Equation 51.21). The sets A and B are called the domain and codomain of the
relation and P is called its graph.
The statement (a, b) ∈ P (a ∈ A ∧ b ∈ B) is read "a is R-related to b" and is written as
R(a, b). The order of the elements in each pair of P is important: If a ≠ b, then R(a, b) and
R(b, a) both can be true or false independently of each other.
Some types and possible properties of binary relations are listed below and illustrated in
Figure 51.2. A binary relation can be [923]:
Figure 51.2: Properties of a binary relation R with domain A and codomain B (seven example relations combining the left-total, surjective, injective, functional, and bijective properties).
1. Left-total if
∀a ∈ A ∃b ∈ B : R(a, b) (51.29)
2. Surjective¹⁹ or right-total if
∀b ∈ B ∃a ∈ A : R(a, b) (51.30)
3. Injective²⁰ if
∀a_1, a_2 ∈ A, b ∈ B : R(a_1, b) ∧ R(a_2, b) ⇒ a_1 = a_2 (51.31)
4. Functional if
∀a ∈ A, b_1, b_2 ∈ B : R(a, b_1) ∧ R(a, b_2) ⇒ b_1 = b_2 (51.32)
5. Bijective²¹ if it is left-total, right-total, injective, and functional.
6. Transitive²² if
∀a ∈ A, b ∈ B, c ∈ A ∩ B : R(a, c) ∧ R(c, b) ⇒ R(a, b) (51.33)

17 http://en.wikipedia.org/wiki/Binary_relation [accessed 2007-07-03]
18 http://en.wikipedia.org/wiki/Relation_%28mathematics%29 [accessed 2007-07-03]
19 http://en.wikipedia.org/wiki/Surjective [accessed 2007-07-03]
20 http://en.wikipedia.org/wiki/Injective [accessed 2007-07-03]
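For finite relations stored as sets of pairs, the functional property (Equation 51.32) and the injective property (Equation 51.31) can be tested directly from their definitions; a minimal sketch:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

/** Property checks for a finite binary relation given as a set of pairs. */
public final class Relations {

  public record Pair(int a, int b) {}

  /** Functional (Equation 51.32): no a is related to two different b. */
  public static boolean isFunctional(Set<Pair> graph) {
    Map<Integer, Integer> image = new HashMap<>();
    for (Pair p : graph) {
      Integer prev = image.putIfAbsent(p.a(), p.b());
      if (prev != null && prev != p.b()) return false;
    }
    return true;
  }

  /** Injective (Equation 51.31): no b is related to two different a. */
  public static boolean isInjective(Set<Pair> graph) {
    Map<Integer, Integer> preimage = new HashMap<>();
    for (Pair p : graph) {
      Integer prev = preimage.putIfAbsent(p.b(), p.a());
      if (prev != null && prev != p.a()) return false;
    }
    return true;
  }

  private Relations() {} // no instances
}
```

The two checks are mirror images of each other: one maps each domain element to its unique image, the other maps each codomain element to its unique preimage.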
51.8 Order Relations
All of us have learned the meaning and the importance of order since the earliest years
in school. The alphabet is ordered, the natural numbers are ordered, the marks we can score
in school are ordered, and so on. As a matter of fact, we come into contact with orders even way
before entering school by learning to distinguish things according to their size, for instance.
Order relations²³ are binary relations which are used to express the order amongst the
elements of a set A. Since order relations are imposed on single sets, both their domain and
their codomain are the same (A, in this case). For such relations, we can define an additional
number of properties which can be used to characterize and distinguish the different types
of order relations:
1. Antisymmetry:
R(a_1, a_2) ∧ R(a_2, a_1) ⇒ a_1 = a_2 ∀a_1, a_2 ∈ A (51.34)
2. Asymmetry:
R(a_1, a_2) ⇒ ¬R(a_2, a_1) ∀a_1, a_2 ∈ A (51.35)
3. Reflexiveness:
R(a, a) ∀a ∈ A (51.36)
4. Irreflexiveness:
∄a ∈ A : R(a, a) (51.37)
All order relations are transitive²⁴, and either antisymmetric or asymmetric, and either
reflexive or irreflexive:
Definition D51.15 (Partial Order). A binary relation R defines a (non-strict, reflexive)
partial order if and only if it is reflexive, antisymmetric, and transitive.

The ≤ and ≥ operators, for instance, represent non-strict partial orders on the set of the
complex numbers C. Partial orders that correspond to the > and < comparators are called
strict. The Pareto dominance relation introduced in Definition D3.13 on page 65 is another
example for such a strict partial order.

Definition D51.16 (Strict Partial Order). A binary relation R defines a strict (or irreflexive) partial order if it is irreflexive, asymmetric, and transitive.
21 http://en.wikipedia.org/wiki/Bijective [accessed 2007-07-03]
22 http://en.wikipedia.org/wiki/Transitive_relation [accessed 2007-07-03]
23 http://en.wikipedia.org/wiki/Order_relation [accessed 2007-07-03]
24 See Equation 51.33 for the definition of transitivity.
Definition D51.17 (Total Order). A total order²⁵ (or linear order, simple order) R imposed on the set A is a partial order which is complete/total.

R(a_1, a_2) ∨ R(a_2, a_1) ∀a_1, a_2 ∈ A (51.38)

The real numbers R for example are totally ordered whereas on the set of complex
numbers C, only (strict or reflexive) partial (non-total) orders can be defined since it is
continuous in two dimensions.
51.9 Equivalence Relations

Another important class of relations are equivalence relations26 [2774, 2842], which are often abbreviated with ≡ or ∼, i.e., a1 ≡ a2 and a1 ∼ a2 mean R(a1, a2) for the equivalence relation R imposed on the set A and a1, a2 ∈ A. Unlike order relations, equivalence relations are symmetric, i.e.,

R(a1, a2) ⇒ R(a2, a1) ∀a1, a2 ∈ A (51.39)

Definition D51.18 (Equivalence Relation). The binary relation R defines an equivalence relation on the set A if and only if it is reflexive, symmetric, and transitive.

Definition D51.19 (Equivalence Class). If an equivalence relation R is defined on a set A, the subset A′ ⊆ A is an equivalence class27 if and only if ∀a1, a2 ∈ A′ : R(a1, a2) (i.e., a1 ∼ a2).
51.10 Functions

Definition D51.20 (Function). A function f : X → Y is a binary relation with the property that for each element x of the domain28 X there is no more than one element y in the codomain Y such that x is related to y (by f). This uniquely determined element y is denoted by f(x). In other words, a function is a functional binary relation (see Point 4 on the previous page) and we can write:

∀x ∈ X, y1, y2 ∈ Y : f(x, y1) ∧ f(x, y2) ⇒ y1 = y2 (51.40)

A function maps each element of X to one element in Y. The domain X is the set of possible input values of f and the codomain Y is the set of its possible outputs. The set of all actual outputs {f(x) : x ∈ X} is called the range. This distinction between range and codomain can be made obvious with a small example. The sine function can be defined as a mapping from the real numbers to the real numbers sin : R → R, making R its codomain. Its actual range, however, is just the real interval [−1, 1].

25 http://en.wikipedia.org/wiki/Total_order [accessed 2007-07-03]
26 http://en.wikipedia.org/wiki/Equivalence_relation [accessed 2007-07-28]
27 http://en.wikipedia.org/wiki/Equivalence_class [accessed 2007-07-28]
28 http://en.wikipedia.org/wiki/Domain_%28mathematics%29 [accessed 2007-07-03]
51.10.1 Monotonicity

Functions are monotone, i.e., have the property of monotonicity29, if they preserve orders. In the following definitions, we assume that two order relations R_X and R_Y are defined on the domain X and the codomain Y of a function f, respectively. These orders are expressed with the <, ≤, and ≥ operators. In the common case that X = Y or X ⊆ R and Y ⊆ R, R_X = R_Y usually holds and both correspond to the intuitive order of the real numbers.

Definition D51.21 (Monotonically Increasing). A function f : X → Y is called monotonic, monotonically increasing, increasing, or non-decreasing, if and only if Equation 51.41 holds (according to the order relations defined on X and Y).

∀x1, x2 ∈ X : x1 < x2 ⇒ f(x1) ≤ f(x2) (51.41)

Definition D51.22 (Monotonically Decreasing). A function f : X → Y is called monotonically decreasing, decreasing, or non-increasing, if and only if Equation 51.42 holds (according to the order relations defined on X and Y).

∀x1, x2 ∈ X : x1 < x2 ⇒ f(x1) ≥ f(x2) (51.42)
51.11 Lists

Definition D51.23 (List). Lists30 are abstract data types which can be regarded as special tuples. They are sequences where every item is of the same type.

Other than our discussions on set theory, the following text about the data structure list is strictly local to this book and not to be understood as a general mathematical theory. All the functions and operations defined here are only given in order to allow for a clear and well-defined notation in the other parts of the book. When specifying population-oriented optimization algorithms, for instance, we will use lists rather than sets for representing the populations.

The definitions provided here are not founded upon any related work in particular. We introduce functions that add elements to or remove elements from lists, and that sort lists or search within them. All list functions are without side effects31, i.e., they do not change their parameters. Instead, the result of a function processing a list l is a new list l′ which is the modified version of l, whereas l is left untouched.

We do not assume any specific form of list implementation such as linear arrays or linked lists32. Instead, we leave the actual implementation of the methods discussed in the following completely open and try to specify their features axiomatically.

Like tuples, lists can be defined using parentheses in this book. The single elements of a list are accessed by their index written in brackets ((a, b, c)[1] = b). The first element of a list is located at index 0 and the last element has the index n − 1, where n is the number of elements in the list: n = len((a, b, c)) = 3. The empty list is abbreviated with ().

29 http://en.wikipedia.org/wiki/Monotonic_function [accessed 2007-08-08]
30 http://en.wikipedia.org/wiki/List_%28computing%29 [accessed 2007-07-03]
31 http://en.wikipedia.org/wiki/Side_effect_(computer_science) [accessed 2009-06-13]
32 http://en.wikipedia.org/wiki/Linked_list [accessed 2009-06-13]
Definition D51.24 (createList). The l = createList(n, q) method creates a new list l of the length n filled with the item q.

l = createList(n, q) ⇒ len(l) = n ∧ (∀i : 0 ≤ i < n ⇒ l[i] = q) (51.43)

Definition D51.25 (insertItem). The function m = insertItem(l, i, q) creates a new list m by inserting one element q into a list l at the index i : 0 ≤ i ≤ len(l). By doing so, it shifts all elements located at index i and above to the right by one position.

m = insertItem(l, i, q) ⇒ len(m) = len(l) + 1 ∧ m[i] = q ∧ (∀j : 0 ≤ j < i ⇒ m[j] = l[j]) ∧ (∀j : i ≤ j < len(l) ⇒ m[j + 1] = l[j]) (51.44)

Definition D51.26 (addItem). The addItem function is a shortcut for inserting one item at the end of a list:

addItem(l, q) ≡ insertItem(l, len(l), q) (51.45)

Definition D51.27 (deleteItem). The function m = deleteItem(l, i) creates a new list m by removing the element at index i : 0 ≤ i < len(l) from the list l, where len(l) ≥ i + 1 must hold.

m = deleteItem(l, i) ⇒ len(m) = len(l) − 1 ∧ (∀j : 0 ≤ j < i ⇒ m[j] = l[j]) ∧ (∀j : i < j < len(l) ⇒ m[j − 1] = l[j]) (51.46)

Definition D51.28 (deleteRange). The method m = deleteRange(l, i, c) creates a new list m by removing c elements beginning at index i : 0 ≤ i < len(l) from the list l. len(l) ≥ i + c and c ≥ 0 must both hold.

m = deleteRange(l, i, c) ⇒ len(m) = len(l) − c ∧ (∀j : 0 ≤ j < i ⇒ m[j] = l[j]) ∧ (∀j : i + c ≤ j < len(l) ⇒ m[j − c] = l[j]) (51.47)

Definition D51.29 (appendList). The function appendList(l1, l2) is a shortcut for adding all the elements of a list l2 to a list l1. We define it recursively as:

appendList(l1, l2) ≡ { l1 if len(l2) = 0; appendList(addItem(l1, l2[0]), deleteItem(l2, 0)) otherwise } (51.48)

Definition D51.30 (count). The function count(x, l) returns the number of occurrences of the element x in the list l.

count(x, l) = |{i ∈ 0..len(l) − 1 : l[i] = x}| (51.49)
Definition D51.31 (subList). The method subList(l, i, c) extracts c elements from the list l beginning at index i and returns them as a new list. Both 0 ≤ i < len(l) and len(l) − i ≥ c must hold.

subList(l, i, c) ≡ deleteRange(deleteRange(l, 0, i), c, len(l) − i − c) (51.50)

Definition D51.32 (reverseList). The method m = reverseList(l) creates a list m which contains the same elements as the list l but in reverse order, i.e.,

m = reverseList(l) ⇒ len(m) = len(l) ∧ (∀i ∈ 0..len(l) − 1 ⇒ m[i] = l[len(l) − i − 1]) (51.51)
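To make these axioms concrete, here is a small Python sketch (the snake_case names mirror the book's functions; the implementations are our own) in which every operation returns a fresh list and leaves its arguments untouched, exactly as the side-effect-freeness requirement demands:

```python
def create_list(n, q):
    return [q] * n                    # Equation 51.43

def insert_item(l, i, q):
    return l[:i] + [q] + l[i:]        # Equation 51.44

def add_item(l, q):
    return insert_item(l, len(l), q)  # Equation 51.45

def delete_item(l, i):
    return l[:i] + l[i + 1:]          # Equation 51.46

def delete_range(l, i, c):
    return l[:i] + l[i + c:]          # Equation 51.47

def sub_list(l, i, c):
    return l[i:i + c]                 # Equation 51.50

def reverse_list(l):
    return l[::-1]                    # Equation 51.51

l = ['a', 'b', 'c']
m = insert_item(l, 1, 'x')
print(m)  # ['a', 'x', 'b', 'c']
print(l)  # ['a', 'b', 'c'] -- the original list is unchanged
```

Python's slicing always produces new list objects, which is what makes these one-liners side-effect free.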
51.11.1 Sorting

Definition D51.33 (Sorted List). A list l is considered as being sorted if l[i] ≤ l[j] ∀i ≤ j holds for all i, j ∈ 0..len(l) − 1 according to a natural (or explicitly specified) order relation defined on the type of the elements of the list. Thus, we can define the predicate isSorted as:

isSorted(l) ⇔ (∀i ∈ 1..len(l) − 1 ⇒ l[i − 1] ≤ l[i]) (51.52)

The concept of comparator functions has been introduced in Definition D3.18 on page 77. cmp(u1, u2) returns a negative value if u1 is smaller than u2, a positive number if u1 is greater than u2, and 0 if both are equal. Comparator functions are very versatile; they form the foundation of the sorting mechanisms of the Java framework [1109, 1110], for instance. In Global Optimization, they are perfectly suited to represent the Pareto dominance or prevalence relations discussed in Section 3.3.5 on page 65 and Section 3.5.2. Here, we use them to express the orders according to which a list can be sorted. Hence, we can redefine Equation 51.52 to:

isSorted(l, cmp) ⇔ (∀i ∈ 1..len(l) − 1 ⇒ cmp(l[i − 1], l[i]) ≤ 0) (51.53)

Sorting algorithms33 transform unsorted lists into sorted ones. Here, we will not explicitly discuss these algorithms and, instead, only specify prototype interfaces to them. Generally, a list U can be sorted with O(len(U) log len(U)) time complexity. For concrete examples of sorting algorithms, see [635, 1557, 2446]. We define the functions sortAsc(U, cmp) and sortDsc(U, cmp) to sort a list U using a comparator function cmp(u1, u2) in ascending or descending order, respectively. In other words, their parameter U is an arbitrary (possibly unsorted) list and the return value is a list containing the same elements sorted according to cmp.

S = sortAsc(U, cmp) ⇒ (∀u : count(u, U) = count(u, S)) ∧ isSorted(S, cmp) (51.54)

The result of S = sortDsc(U, cmp) is sorted reversely:

S = sortDsc(U, cmp) ⇒ (∀u : count(u, U) = count(u, S)) ∧ isSorted(reverseList(S), cmp) (51.55)

Sorting according to a specific function f of only one parameter can easily be performed by building the comparator cmp(u1, u2) ≡ (f(u1) − f(u2)). Thus, we will furthermore synonymously use the sorting predicate also with unary functions f.

sort(U, f) ≡ sort(U, cmp(u1, u2) ≡ (f(u1) − f(u2))) (51.56)

33 http://en.wikipedia.org/wiki/Sorting_algorithm [accessed 2007-07-03]
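In Python, a comparator in exactly this sense can be handed to the built-in sorting machinery via functools.cmp_to_key. The following sketch (our own; sort_asc/sort_dsc mirror the book's sortAsc/sortDsc) also shows the key-function shortcut of Equation 51.56:

```python
from functools import cmp_to_key

def sort_asc(u, cmp):
    """New list with the elements of u sorted ascending according to
    the comparator cmp (negative / zero / positive return value)."""
    return sorted(u, key=cmp_to_key(cmp))

def sort_dsc(u, cmp):
    """Like sort_asc, but descending."""
    return sorted(u, key=cmp_to_key(cmp), reverse=True)

# comparator built from a unary function f, as in Equation 51.56
f = lambda x: x * x
cmp_f = lambda u1, u2: f(u1) - f(u2)

print(sort_asc([3, -1, -4, 2], cmp_f))  # [-1, 2, 3, -4], ordered by x^2
print(sort_dsc([3, -1, -4, 2], cmp_f))  # [-4, 3, 2, -1]
```

Python's sorted is stable and, like the book's definitions, returns a new list and leaves its argument untouched.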
51.11.2 Searching

Searching for an element u in an unsorted list U means walking through it until either the element is found or the end of the whole list has been reached, which corresponds to complexity O(len(U)). We therefore define the operation searchItemUS(u, U) which returns either the index of the searched element u in the list or −1 if u ∉ U.

searchItemUS(u, U) = { i : U[i] = u if u ∈ U; −1 otherwise } (51.57)

Searching for an element s in a sorted list S usually means to perform a fast binary search34 returning the index of the element if it is contained in S. For the case that s is not contained in S, we again specify an extension similar to the one used in Java: If s ∉ S, a search operation for sorted lists returns a negative number indicating the position at which the element could be inserted into the list without violating its order. The function searchItemAS(s, S) searches in a sorted list with elements in ascending order; searchItemDS(s, S) searches in a descendingly sorted list. Searching in a sorted list is done in O(log len(S)) time. For concrete algorithm examples, again see [635, 1557, 2446].

searchItemAS(s, S) = { i : S[i] = s if s ∈ S; −(i + 1) : (∀j : 0 ≤ j < i ⇒ S[j] ≤ s) ∧ (∀j : i ≤ j < len(S) ⇒ S[j] > s) otherwise } (51.58)

searchItemDS(s, S) = { i : S[i] = s if s ∈ S; −(i + 1) : (∀j : 0 ≤ j < i ⇒ S[j] ≥ s) ∧ (∀j : i ≤ j < len(S) ⇒ S[j] < s) otherwise } (51.59)

Definition D51.34 (removeItem). The function removeItem(l, q) finds one occurrence of an element q in a list l by using the appropriate search algorithm and returns a new list m where it is deleted. For finding the element, we denote any of the previously defined search operators by searchItem(q, l).

m = removeItem(l, q) ≡ { l if searchItem(q, l) < 0; deleteItem(l, searchItem(q, l)) otherwise } (51.60)
51.11.3 Transformations between Sets and Lists

We can further define transformations between sets and lists which will implicitly be used when needed in this book. It should be noted that setToList is not the inverse function of listToSet.

B = setToList(set A) ⇒ (∀a ∈ A ∃i : B[i] = a) ∧ (∀i ∈ 0..len(B) − 1 ⇒ B[i] ∈ A) ∧ len(setToList(A)) = |A| (51.61)

A = listToSet(list B) ⇒ (∀i ∈ 0..len(B) − 1 ⇒ B[i] ∈ A) ∧ (∀a ∈ A ∃i ∈ 0..len(B) − 1 : B[i] = a) ∧ |listToSet(B)| ≤ len(B) (51.62)

34 http://en.wikipedia.org/wiki/Binary_search [accessed 2007-07-03]
Chapter 52
Graph Theory

Graph theory1 [791, 2741] is an area of mathematics which investigates the properties of graphs, relations between graphs, and algorithms applied to graphs.

Definition D52.1 (Graph). A graph2 G is a tuple G = (V, E) consisting of a set of vertices (or nodes) V and a set of edges (or connections) E which connect the vertices.

We distinguish between finite graphs, where the set of vertices is finite, and infinite graphs, where this is not the case. The set of edges E defines a binary relation (see Section 51.7) on the vertices v ∈ V. If the edges of a graph have no orientation, the graph is called undirected. Then, E can either be considered as a symmetric relation, i.e., (v1, v2) ∈ E ⇔ (v2, v1) ∈ E, or the single edges are defined as 2-multisets, i.e., {v1, v2} instead of (v1, v2) and (v2, v1). The edges in directed graphs have an orientation and, in sketches, are often represented by arrows, whereas undirected edges are usually denoted by simple lines between the nodes v.

Definition D52.2 (Path). A path3 p = (p0, p1, . . . ) in a graph G = (V, E) is a sequence of vertices such that from each vertex there is an edge to the next vertex in the path:

p is a path in G ⇒ ∀i : 0 ≤ i < len(p) ⇒ p_i ∈ V (52.1)
∀i ∈ 0..len(p) − 2 ⇒ (p_i, p_{i+1}) ∈ E (52.2)

A cycle is a path where the start and end vertex are the same, i.e., p_0 = p_{len(p)−1}.

A graph is a weighted graph if values (weights) w are assigned to its edges e ∈ E. The weight might stand for a distance or a cost. The total weight of a path p in such a graph then evaluates to

w(p) = Σ_{i=0}^{len(p)−2} w(p_i, p_{i+1}) (52.3)

1 http://en.wikipedia.org/wiki/Graph_theory [accessed 2009-06-15]
2 http://en.wikipedia.org/wiki/Graph_(mathematics) [accessed 2009-06-15]
3 http://en.wikipedia.org/wiki/Path_(graph_theory) [accessed 2009-06-15]
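A minimal Python sketch (our own illustration) of Definition D52.2: checking whether a vertex sequence is a path in a directed graph and summing the edge weights along it, as in Equation 52.3:

```python
def is_path(p, V, E):
    """Check Equations 52.1 and 52.2: every vertex of p lies in V and
    consecutive vertices are connected by an edge in E."""
    return (all(v in V for v in p)
            and all((p[i], p[i + 1]) in E for i in range(len(p) - 1)))

def path_weight(p, w):
    """Equation 52.3: total weight of the path p under the edge-weight
    mapping w."""
    return sum(w[(p[i], p[i + 1])] for i in range(len(p) - 1))

V = {'a', 'b', 'c', 'd'}
w = {('a', 'b'): 2, ('b', 'c'): 5, ('c', 'a'): 1, ('c', 'd'): 7}
E = set(w)  # the weighted, directed edges of the graph

print(is_path(('a', 'b', 'c', 'd'), V, E))   # True
print(path_weight(('a', 'b', 'c', 'd'), w))  # 14 = 2 + 5 + 7
print(is_path(('a', 'b', 'c', 'a'), V, E))   # True: a cycle, start = end
```

Representing the edge set as the keys of the weight dictionary keeps the graph and its weights consistent by construction.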
Chapter 53
Stochastic Theory and Statistics

In this chapter we give a rough introduction into stochastic theory1 [1438, 1693, 2292], which subsumes

1. probability2 theory3, the mathematical study of phenomena characterized by randomness or uncertainty, and
2. statistics4, the art of collecting, analyzing, interpreting, and presenting data.

53.1 Probability

Probability theory is used to determine the likelihood of the occurrence of an event under ideal mathematical conditions [1481, 1482]. Let us start this short summary with some very basic definitions.

Definition D53.1 (Random Experiment). Random experiments can be repeated arbitrarily often; their results cannot be predicted.

Definition D53.2 (Elementary Event). The possible outcomes ω of random experiments are called elementary events or samples.

Definition D53.3 (Sample Space). The set of all possible outcomes (elementary events, samples) of a random experiment is the sample space Ω: ω ∈ Ω.

Example E53.1 (Dice Throw Sample Space).
When throwing a die5, for example, the sample space will be Ω = {ω1, ω2, ω3, ω4, ω5, ω6}, where ωi means that the number i was thrown.

1 http://en.wikipedia.org/wiki/Stochastic [accessed 2007-07-03]
2 http://en.wikipedia.org/wiki/Probability [accessed 2007-07-03]
3 http://en.wikipedia.org/wiki/Probability_theory [accessed 2007-07-03]
4 http://en.wikipedia.org/wiki/Statistics [accessed 2007-07-03]
5 Throwing a die is discussed extensively as an example for stochastic theory in Section 53.5 on page 689.
Definition D53.4 (Random Event). A random event A is a subset of the sample space Ω (A ⊆ Ω). If any ω ∈ A occurs, then A occurs, too.

Definition D53.5 (Certain Event). The certain event is the random event that will always occur in each repetition of a random experiment. Therefore, it is equal to the whole sample space Ω.

Definition D53.6 (Impossible Event). The impossible event will never occur in any repetition of a random experiment; it is defined as ∅.

Definition D53.7 (Conflicting Events). Two conflicting events A1 and A2 can never occur together in a random experiment. Therefore, A1 ∩ A2 = ∅.

53.1.1 Probability as defined by Bernoulli (1713)

In some idealized situations, like throwing ideal coins or ideal dice, all elementary events ω of the sample space Ω have the same probability [720].

P(ω) = 1/|Ω| ∀ω ∈ Ω (53.1)

Equation 53.1 is also called the Laplace assumption. If it holds, the probability of an event A can be defined as:

P(A) = (number of possible events in favor of A)/(number of possible events) = |A|/|Ω| (53.2)

53.1.2 Combinatorics

For many random experiments of this type, we can use combinatorial6 approaches in order to determine the number of possible outcomes. Therefore, we want to shortly outline the mathematical concepts of factorial numbers, combinations, and permutations.7

Definition D53.8 (Factorial). The factorial8 n! of a number n ∈ N0 is the product of n and all natural numbers (except 0) smaller than it. It is a specialization of the Gamma function for positive integer numbers, see ?? on page ??.

n! = ∏_{i=1}^{n} i (53.3)
0! = 1 (53.4)

In combinatorial mathematics9, we often want to know in how many ways we can arrange n ∈ N1 elements from a set Ω with M = |Ω| ≥ n elements. We can distinguish between combinations, where the order of the elements in the arrangement plays no role, and permutations, where it is important. (a, b, c) and (c, b, a), for instance, denote the same combination but different permutations of the elements a, b, c. We furthermore distinguish between arrangements where each element of Ω can occur at most once (without repetition) and arrangements where the same element may occur multiple times (with repetition).

6 http://en.wikipedia.org/wiki/Combinatorics [accessed 2007-07-03]
7 http://en.wikipedia.org/wiki/Combinations_and_permutations [accessed 2007-07-03]
8 http://en.wikipedia.org/wiki/Factorial [accessed 2007-07-03]
9 http://en.wikipedia.org/wiki/Combinatorics [accessed 2008-01-31]
53.1.2.1 Combinations

The number of possible combinations10 C(M, n) of n ∈ N1 elements out of a set with M = |Ω| ≥ n elements without repetition is the binomial coefficient

C(M, n) = M!/(n! (M − n)!) (53.5)

C(M, n) = (M/n) · ((M − 1)/(n − 1)) · ((M − 2)/(n − 2)) · ... · ((M − n + 1)/1) (53.6)

C(M + 1, n) = C(M, n) + C(M, n − 1) (53.7)

If the elements of Ω may occur repeatedly in the arrangements, the number of possible combinations becomes

(M + n − 1)!/(n! (M − 1)!) = C(M + n − 1, n) = C(M + n − 1, M − 1) (53.8)

53.1.2.2 Permutations

The number of possible permutations (see Section 51.6) Perm(M, n) of n ∈ N1 elements out of a set with M = |Ω| ≥ n elements without repetition is

Perm(M, n) = (M)_n = M!/(M − n)! (53.9)

If an element from Ω can occur more than once in the arrangements, the number of possible permutations is

M^n (53.10)
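Equations 53.5 to 53.10 can be cross-checked with Python's standard library (math.comb and math.perm exist from Python 3.8 on); this is our own sketch, not part of the book:

```python
from math import comb, perm, factorial

M, n = 6, 3

# combinations without repetition, Equation 53.5
assert comb(M, n) == factorial(M) // (factorial(n) * factorial(M - n)) == 20

# Pascal's rule, Equation 53.7
assert comb(M + 1, n) == comb(M, n) + comb(M, n - 1)

# combinations with repetition, Equation 53.8
print(comb(M + n - 1, n))  # 56 multisets of size 3 over 6 symbols

# permutations without repetition, Equation 53.9
print(perm(M, n))          # 120 = 6!/3!

# permutations with repetition, Equation 53.10
print(M ** n)              # 216
```

For the dice example: there are 56 distinct unordered outcomes of three dice, but 216 ordered ones.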
53.1.3 The Limiting Frequency Theory of von Mises

von Mises [2822] proposed that if we repeat a random experiment multiple times, the number of occurrences of a certain event should somehow reflect its probability. The more often we perform the experiment, the more reliable the estimations of the event probability become. We can express this relation using the notion of frequency.

Definition D53.9 (Absolute Frequency). The number H(A, n) denoting how often an event A occurred during n repetitions of a random experiment is its absolute frequency11.

Definition D53.10 (Relative Frequency). The relative frequency h(A, n) of an event A is its absolute frequency normalized to the total number n of experiments. The relative frequency has the following properties:

h(A, n) = H(A, n)/n (53.11)
0 ≤ h(A, n) ≤ 1 (53.12)
h(Ω, n) = 1 ∀n ∈ N1 (53.13)
A ∩ B = ∅ ⇒ h(A ∪ B, n) = (H(A, n) + H(B, n))/n = h(A, n) + h(B, n) (53.14)

10 http://en.wikipedia.org/wiki/Combination [accessed 2008-01-31]
11 http://en.wikipedia.org/wiki/Frequency_%28statistics%29 [accessed 2007-07-03]
According to von Mises [2822], the (statistical) probability P(A) of an event A is the limit of its relative frequency h(A, n) as n approaches infinity. This is the limit of the quotient of the number n_A of elementary events favoring A and the number n of all possible elementary events for infinitely many repetitions [2822, 2823].

P(A) = lim_{n→∞} h(A, n) = lim_{n→∞} n_A/n (53.15)
53.1.4 The Axioms of Kolmogorov

Definition D53.11 (σ-algebra). A subset S of the power set P(Ω) of the sample space Ω is called a σ-algebra12 if the following axioms hold:

Ω ∈ S (53.16)
∅ ∈ S (53.17)
A ∈ S ⇒ (Ω \ A) ∈ S (53.18)
A ∈ S ∧ B ∈ S ⇒ (A ∪ B) ∈ S (53.19)

From these axioms others can be deduced, for example:

A ∈ S ∧ B ∈ S ⇒ (Ω \ A) ∈ S ∧ (Ω \ B) ∈ S (53.20)
(Ω \ A) ∈ S ∧ (Ω \ B) ∈ S ⇒ ((Ω \ A) ∪ (Ω \ B)) ∈ S (53.21)
A ∈ S ∧ B ∈ S ⇒ (A ∩ B) ∈ S (53.22)
Definition D53.12 (Probability Space). A probability space (or random experiment) is defined by the triplet (Ω, S, P) where

1. Ω is the sample space, a set of elementary events,
2. S is a σ-algebra defined on Ω, and
3. P defines a probability measure13 which determines the probability of occurrence for each event A ∈ S (the axioms14 of Kolmogorov [1565, 1566]).

Definition D53.13 (Probability). A mapping P which maps a real number to each event is called a probability measure if and only if the following holds for the σ-algebra S on Ω:

∀A ∈ S ⇒ 0 ≤ P(A) ≤ 1 (53.23)
P(Ω) = 1 (53.24)
For pairwise disjoint A_i ∈ S: P(⋃_i A_i) = Σ_i P(A_i) (53.25)

12 http://en.wikipedia.org/wiki/Sigma-algebra [accessed 2007-07-03]
13 http://en.wikipedia.org/wiki/Probability_measure [accessed 2007-07-03]
14 http://en.wikipedia.org/wiki/Kolmogorov_axioms [accessed 2007-07-03]
From these axioms, it can be deduced that:

P(∅) = 0 (53.26)
P(A) = 1 − P(Ω \ A) (53.27)
P(A \ B) = P(A) − P(A ∩ B) (53.28)
P(A ∪ B) = P(A) + P(B) − P(A ∩ B) (53.29)

53.1.5 Conditional Probability

Definition D53.14 (Conditional Probability). The conditional probability15 P(A|B) is the probability of an event A, given that another event B already occurred. P(A|B) is read "the probability of A, given B".

P(A|B) = P(A ∩ B)/P(B) (53.30)
P(A ∩ B) = P(A|B) · P(B) (53.31)

Definition D53.15 (Statistical Independence). Two events A and B are (statistically) independent if and only if P(A ∩ B) = P(A) · P(B) holds. If two events A and B are statistically independent, we can deduce:

P(A ∩ B) = P(A) · P(B) (53.32)
P(A|B) = P(A) (53.33)
P(B|A) = P(B) (53.34)
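Under the Laplace assumption, Equations 53.30 to 53.34 can be verified by plain counting. The sketch below (our own example) uses the die throw with A = "even number" and B = "at least 4":

```python
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}                  # sample space of a fair die
P = lambda A: Fraction(len(A), len(omega))  # Laplace probability, Equation 53.2

A = {2, 4, 6}   # "even number"
B = {4, 5, 6}   # "at least 4"

P_A_given_B = P(A & B) / P(B)               # Equation 53.30
print(P_A_given_B)                          # 2/3

# A and B are not independent: P(A & B) = 1/3, but P(A)P(B) = 1/4
print(P(A & B) == P(A) * P(B))              # False

# C = "at most 2" IS independent of A: 1/6 == 1/2 * 1/3
C = {1, 2}
print(P(A & C) == P(A) * P(C))              # True
```

Using Fraction keeps all probabilities exact, so independence can be tested with == rather than a floating-point tolerance.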
53.1.6 Random Variables

Definition D53.16 (Random Variable). The function X which relates the sample space Ω to the real numbers R is called a random variable16 in the probability space (Ω, S, P).

X : Ω → R (53.35)

Using such a random variable, we can replace the sample space Ω with the new sample space Ω_X. Furthermore, the σ-algebra S can be replaced with a σ-algebra S_X which consists of subsets of Ω_X instead of Ω. Last but not least, we replace the probability measure P, which relates the ω ∈ Ω to the interval [0, 1], by a new probability measure P_X which relates the real numbers R to this interval.

Definition D53.17 (Probability Space of a Random Variable). If X : Ω → R is a random variable, then the probability space of X is defined as the triplet

(Ω_X, S_X, P_X) (53.36)

One example for such a new probability measure would be the probability that a random variable X takes on a real value which is smaller than or equal to a value x:

P_X(X ≤ x) = P({ω ∈ Ω : X(ω) ≤ x}) (53.37)

15 http://en.wikipedia.org/wiki/Conditional_probability [accessed 2007-07-03]
16 http://en.wikipedia.org/wiki/Random_variable [accessed 2007-07-03]
53.1.7 Cumulative Distribution Function

Definition D53.18 (Cumulative Distribution Function). If X is a random variable of a probability space (Ω_X = R, S_X, P_X), we call the function F : R → [0, 1] the (cumulative) distribution function17 (CDF) of the random variable X if it fulfills Equation 53.38.

F(x) ≡ P_X(X ≤ x) ≡ P({ω ∈ Ω : X(ω) ≤ x}) (53.38)

where the first identity is the definition of the random variable and the second one that of the probability space.

A cumulative distribution function F has the following properties:

1. F_X(x) is normalized:

lim_{x→−∞} F_X(x) = 0 (impossible event), lim_{x→+∞} F_X(x) = 1 (certain event) (53.39)

2. F_X(x) is monotonously18 growing:

F_X(x1) ≤ F_X(x2) ∀x1 ≤ x2 (53.40)

3. F_X(x) is (right-sided) continuous19:

lim_{h↓0} F_X(x + h) = F_X(x) (53.41)

4. The probability that the random variable X takes on values in the interval x0 < X ≤ x1 can be computed using the CDF:

P(x0 < X ≤ x1) = F_X(x1) − F_X(x0) (53.42)

5. The probability that the random variable X takes on a single value x is:

P(X = x) = F_X(x) − lim_{h↓0} F_X(x − h) (53.43)

We can further distinguish between sample spaces which contain at most countably infinitely many elements and those which are continuums. Hence, there are discrete20 and continuous21 random variables:

Definition D53.19 (Discrete Random Variable). A random variable X (and its probability measure P_X(X), respectively) is called discrete if it takes on at most countably infinitely many values. Its cumulative distribution function F_X(X) therefore has the shape of a stairway.

Definition D53.20 (Continuous Random Variable). A random variable X (and its probability measure P_X, respectively) is called continuous if it can take on uncountably infinitely many values and its cumulative distribution function F_X(X) is also continuous.

17 http://en.wikipedia.org/wiki/Cumulative_distribution_function [accessed 2007-07-03]
18 http://en.wikipedia.org/wiki/Monotonicity [accessed 2007-07-03]
19 http://en.wikipedia.org/wiki/Continuous_function [accessed 2007-07-03]
20 http://en.wikipedia.org/wiki/Discrete_random_variable [accessed 2007-07-03]
21 http://en.wikipedia.org/wiki/Continuous_probability_distribution [accessed 2007-07-03]
53.1.8 Probability Mass Function

Definition D53.21 (Probability Mass Function). The probability mass function22 (PMF) f is defined for discrete distributions only and assigns a probability to each value a discrete random variable X can take on.

f : Z → [0, 1] : f_X(x) := P_X(X = x) (53.44)

We specify the relation between the PMF and its corresponding (discrete) CDF in Equation 53.45 and Equation 53.46. We further define the probability of an event A in Equation 53.47 using the PMF.

P_X(X ≤ x) = F_X(x) = Σ_{i=−∞}^{x} f_X(i) (53.45)
P_X(X = x) = f_X(x) = F_X(x) − F_X(x − 1) (53.46)
P_X(A) = Σ_{x∈A} f_X(x) (53.47)
53.1.9 Probability Density Function

The probability density function23 (PDF) is the counterpart of the PMF for continuous distributions. The PDF does not represent the probabilities of the single values of a random variable. Since a continuous random variable can take on uncountably many values, each distinct value itself has the probability 0.

Example E53.2 (PDF: No Probability!).
If we, for instance, picture the current temperature outside as a (continuous) random variable, the probability that it takes on the value 18 for 18 °C is zero. It will never be exactly 18 °C outside; we can at most declare with a certain probability that we have a temperature between, say, 17.999 °C and 18.001 °C.

Definition D53.22 (Probability Density Function). If a random variable X is continuous, its probability density function f is defined as

f : R → [0, ∞) : F_X(x) = ∫_{−∞}^{x} f_X(z) dz ∀x ∈ R (53.48)
53.2 Parameters of Distributions and their Estimates

Each random variable X which conforms to a probability distribution F may have certain properties, such as a maximum and a mean value, a variance, and a value which will be taken on by X most often. If the cumulative distribution function F of X is known, these values can usually be computed directly from its parameters. On the other hand, it is possible that we only know the values A[i] which X took on during some random experiments. From this set of sample data A, we can estimate the properties of the underlying (possibly unknown) distribution of X using statistical methods (with a certain error, of course).

In the following, we will elaborate on the properties of a random variable X ∈ R both from the viewpoint of knowing the PMF/PDF f_X(x) and the CDF F_X(x) as well as from the statistical perspective, where only a sample A of past values of X is known. In the latter case, we define the sample as a list A with the length n = len(A) and the elements A[i] : i ∈ [0, n − 1].

22 http://en.wikipedia.org/wiki/Probability_mass_function [accessed 2007-07-03]
23 http://en.wikipedia.org/wiki/Probability_density_function [accessed 2007-07-03]
53.2.1 Count, Min, Max and Range

The most primitive features of a random distribution are the minimum, maximum, and the range of its values. From the statistical perspective, the number of values A[i] in the data sample A is another primitive parameter.

Definition D53.23 (Count). n = len(A) is the number of elements in the data sample A.

The item count is only defined for data samples, not for random variables, since random variables represent experiments which can be repeated infinitely often and thus stand for infinitely many values. The number of items should not be mixed up with the possible number of different values the random variable may take on. A data sample A may contain the same value a multiple times. When throwing a die seven times, one may throw A = (1, 4, 3, 3, 2, 6, 1), for example24.

Definition D53.24 (Minimum: Statistics). There exists no smaller element in the sample data A than the minimum sample ǎ = min(A).

ǎ = min(A) ⇒ ǎ ∈ A ∧ (∀a ∈ A : a ≥ ǎ) (53.49)

Definition D53.25 (Minimum: Stochastic). The minimum is the lower boundary x̌ of the random variable X (or negative infinity, if no such boundary exists).

min(X) = x̌ ⇒ F_X(x̌) > 0 ∧ (∄x ∈ R : F_X(x) > 0 ∧ x < x̌) (53.50)

Both definitions are fully compliant with Definition D3.2 on page 53, and the maximum can be specified similarly.

Definition D53.26 (Maximum: Statistics). There exists no larger element in the sample data A than the maximum sample â = max(A).

â = max(A) ⇒ â ∈ A ∧ (∀a ∈ A : a ≤ â) (53.51)

Definition D53.27 (Maximum: Stochastic). The value x̂ is the upper boundary of the values a random variable X may take on (or positive infinity, if X is unbounded).

x̂ = max(X) ⇒ F_X(x̂) ≥ F_X(x) ∀x ∈ R (53.52)

24 Throwing a die is discussed extensively as an example for stochastic theory in Section 53.5 on page 689.
Definition D53.28 (Range: Statistics). The range range(A) of the sample data A is the difference of the maximum max(A) and the minimum min(A) elements in A and therefore represents the span covered by the data.

range(A) = â − ǎ = max(A) − min(A) (53.53)

Definition D53.29 (Range: Stochastic). If a random variable X is limited in both directions, it has a finite range range(X); otherwise this range is infinite, too.

range(X) = x̂ − x̌ = max(X) − min(X) (53.54)

53.2.2 Expected Value and Arithmetic Mean

The expected value EX and the arithmetic mean ā are basic measures for random variables and data samples that help us to estimate the regions where their values will likely be distributed around.

Definition D53.30 (Expected Value). The expected value25 of a random variable X is the sum of the probability of each possible outcome of the random experiment multiplied by the outcome value.

The expected value is abbreviated by EX or μ. For discrete distributions, it can be computed using Equation 53.55, and for continuous ones, Equation 53.56 holds.

EX = Σ_{x=−∞}^{∞} x f_X(x) (53.55)

EX = ∫_{−∞}^{∞} x f_X(x) dx (53.56)

If the expected value EX of a random variable X is known, the following statements can be derived for the expected values of related random variables:

Y = a + X ⇒ EY = a + EX (53.57)
Z = bX ⇒ EZ = b EX (53.58)

Definition D53.31 (Sum). sum(A) represents the sum of all elements in a set of data samples A.

sum(A) = Σ_{i=0}^{n−1} A[i] (53.59)

Definition D53.32 (Arithmetic Mean). The arithmetic mean26 ā is the sum of all elements in the sample data A divided by the total number of values.

In the spirit of the limiting frequency method of von Mises [2822], it is an estimate ā ≈ EX of the expected value of the random variable X that produced the sample data A.

ā = sum(A)/n = (1/n) Σ_{i=0}^{n−1} A[i] = Σ_{x} x h(x, n) (53.60)

where the last sum runs over the distinct values x occurring in A.

25 http://en.wikipedia.org/wiki/Expected_value [accessed 2007-07-03]
26 http://en.wikipedia.org/wiki/Arithmetic_mean [accessed 2007-07-03]
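Equation 53.60 in Python (our own sketch), using the dice sample A = (1, 4, 3, 3, 2, 6, 1) from Definition D53.23; for a fair die the arithmetic mean is an estimate of EX = 3.5:

```python
from fractions import Fraction

def arithmetic_mean(A):
    """Equation 53.60: the sum of all samples divided by their number."""
    return Fraction(sum(A), len(A))

A = [1, 4, 3, 3, 2, 6, 1]
print(arithmetic_mean(A))  # 20/7, roughly 2.86, an estimate of EX = 3.5

# linearity, Equations 53.57 and 53.58: shifting or scaling the sample
# shifts or scales the mean accordingly
print(arithmetic_mean([x + 10 for x in A]) == arithmetic_mean(A) + 10)  # True
print(arithmetic_mean([2 * x for x in A]) == 2 * arithmetic_mean(A))    # True
```

Exact rational arithmetic with Fraction makes the linearity checks hold with == instead of a floating-point tolerance.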
53.2.3 Variance and Standard Deviation
53.2.3.1 Variance
The variance
27
[928] is a measure of statistical dispersion. As stochastic entity, it illustrates
how close the outcomes of a random variable or are to their expected value EX. In statistics,
it denotes the proximity of the elements a in a data sample A to the arithmetical mean a of
the sample.
Denition D53.33 (Variance: Random Variable). The variance D
2
X var(X)
2
of a random variable X is dened as
var(X) = D
2
X = E
_
(X EX)
2
_
= E
_
X
2

(EX)
2
(53.61)
The variance of a discrete random variable X can be computed using Equation 53.62 and
for continuous distributions, Equation 53.63 will hold.
D
2
X =

x=
f
X
(x) (x EX)
2
=

x=
x
2
f
X
(x)
_

x=
xf
X
(x)
_
2
=
_

x=
x
2
f
X
(x)
_
(EX)
2
(53.62)
D
2
X =
_

(x EX)
2
dx =
_

x
2
f
X
(x) dx
__

xf
X
(x) dx
_
2
=
__

x
2
f
X
(x) dx
_
(EX)
2
(53.63)
If the variance D²X of a random variable X is known, we can derive the variances of some related random variables as follows:

    Y = a + X  ⇒  D²Y = D²X                                             (53.64)
    Z = b·X    ⇒  D²Z = b²·D²X                                          (53.65)
Definition D53.34 (Sum of Squares). The function sumSqrs(A) represents the sum of the squares of all elements in the data sample A.

    sumSqrs(A) = Σ_{i=0}^{n−1} (A[i])²                                  (53.66)
Definition D53.35 (Variance: Estimation in Statistics). The (unbiased) estimator²⁸ s² of the variance of the random variable which produced the sample values A is computed according to Equation 53.67. The estimator is undefined for samples with (n = len(A)) ≤ 1.

    s² = (1/(n−1)) Σ_{i=0}^{n−1} (A[i] − ā)²
       = (1/(n−1)) (sumSqrs(A) − (sum(A))²/n)                            (53.67)
27 http://en.wikipedia.org/wiki/Variance [accessed 2007-07-03]
28 see Definition D53.58 on page 692
53.2. PARAMETERS OF DISTRIBUTIONS AND THEIR ESTIMATES 663
53.2.3.2 Standard Deviation etc.

Definition D53.36 (Standard Deviation). The standard deviation²⁹ is the square root of the variance. The standard deviation of a random variable X is abbreviated with DX and σ; its statistical estimate is s.

    DX = √(D²X)                                                         (53.68)
    s = √(s²)                                                           (53.69)
The standard deviation estimate is undefined for samples with n ≤ 1.
Definition D53.37 (Coefficient of Variation). The coefficient of variation³⁰ c_V of a random variable X is the ratio of the standard deviation to the expected value of X. For data samples, its estimate ĉ_V is defined as the ratio of the estimated standard deviation to the arithmetic mean.

    c_V = DX / EX = σ / μ                                               (53.70)

    ĉ_V = (n / sum(A)) · √((sumSqrs(A) − (sum(A))²/n) / (n − 1))        (53.71)
53.2.3.3 Covariance

Definition D53.38 (Covariance: Random Variables). The covariance³¹ cov(X, Y) of two random variables X and Y is a measure of how much they are related. It exists if the expected values EX² and EY² exist and is defined as

    cov(X, Y) = E[(X − EX)·(Y − EY)]                                    (53.72)
              = E[X·Y] − EX·EY                                          (53.73)

If X and Y are statistically independent, then their covariance is zero, since

    E[X·Y] = EX·EY                                                      (53.75)

Furthermore, the following formulas hold for the covariance:

    D²X = cov(X, X)                                                     (53.76)
    D²[X + Y] = cov(X + Y, X + Y) = D²X + D²Y + 2·cov(X, Y)             (53.77)
    D²[X − Y] = cov(X − Y, X − Y) = D²X + D²Y − 2·cov(X, Y)             (53.78)
    cov(X, Y) = cov(Y, X)                                               (53.79)
    cov(a·X, Y) = a·cov(X, Y)                                           (53.80)
    cov(X + Y, Z) = cov(X, Z) + cov(Y, Z)                               (53.81)
    cov(a·X + b, c·Y + d) = a·c·cov(X, Y)                               (53.82)
    cov(X, Y) = cov(Y, X)                                               (53.83)
29 http://en.wikipedia.org/wiki/Standard_deviation [accessed 2007-07-03]
30 http://en.wikipedia.org/wiki/Coefficient_of_variation [accessed 2007-07-03]
31 http://en.wikipedia.org/wiki/Covariance [accessed 2008-02-05]
Definition D53.39 (Covariance: Statistical Estimate). The covariance cov(A, B) of two paired random data samples A and B with length n and elements a_i and b_i is defined as

    cov(A, B) = (1/n) Σ_{i=0}^{n−1} (a_i − ā)·(b_i − b̄)                 (53.84)
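A small Python sketch of Equation 53.84 (the function name is ours; note that the normalization of the sample covariance varies between texts — n here, n − 1 in others):

```python
# Sample covariance of two paired samples per Equation 53.84,
# normalized by n (some texts use n - 1 for an unbiased estimate).

def covariance(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n

A = [1.0, 2.0, 3.0, 4.0]
B = [2.0, 4.0, 6.0, 8.0]   # B = 2A, so cov(A, B) = 2 * cov(A, A)
```

The symmetry cov(A, B) = cov(B, A) and the scaling rule from Equation 53.80 can be verified on this example.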
Definition D53.40 (Covariance Matrix: Random Variables). The covariance matrix Σ of n random variables X_i : i ∈ 1..n is an n×n matrix whose elements are defined as Σ[i, j] = cov(X_i, X_j) for all i, j ∈ 1..n.

Definition D53.41 (Covariance Matrix: Statistical Estimate). The covariance matrix C of n data samples A[i] : i ∈ 1..n is an n×n matrix whose elements are defined as C[i, j] = cov(A[i], A[j]) for all i, j ∈ 1..n.

For covariance matrices, Σ[i, j] = Σ[j, i] holds for all i, j ∈ 1..n, as well as Σ[i, i] = D²X_i for all i ∈ 1..n (and C[i, i] = s²_{X_i} for the estimated covariance matrix).
53.2.4 Moments

Definition D53.42 (Moment). The k-th moment³² μ_k(c) about a value c is defined for a random distribution X as

    μ_k(c) = E[(X − c)^k]                                               (53.85)

It can be specified for discrete (Equation 53.86) and continuous (Equation 53.87) probability distributions using Equation 53.55 and Equation 53.56 as follows.

    μ_k(c) = Σ_{x=−∞}^{∞} f_X(x)·(x − c)^k                              (53.86)

    μ_k(c) = ∫_{−∞}^{∞} f_X(x)·(x − c)^k dx                             (53.87)

Definition D53.43 (Statistical Moment). The k-th statistical moment μ′_k of a random distribution is its k-th moment about zero, i.e., the expected value of its values raised to the k-th power.

    μ′_k = μ_k(0) = E[X^k]                                              (53.88)

Definition D53.44 (Central Moment). The k-th moment about the mean (or central moment)³³ is the expected value of the difference between the elements and their expected value, raised to the k-th power.

    μ_k(EX) = E[(X − EX)^k]                                             (53.89)

Hence, the variance D²X equals the second central moment μ_2(EX).

Definition D53.45 (Standardized Moment). The k-th standardized moment is the quotient of the k-th central moment and the standard deviation raised to the k-th power.

    μ̃_k = μ_k(EX) / σ^k                                                (53.90)
32 http://en.wikipedia.org/wiki/Moment_%28mathematics%29 [accessed 2008-02-01]
33 http://en.wikipedia.org/wiki/Moment_about_the_mean [accessed 2007-07-03]
53.2.5 Skewness and Kurtosis

The two other most important moments of random distributions are the skewness γ_1 and the kurtosis γ_2, and their estimates G_1 and G_2.

Definition D53.46 (Skewness). The skewness³⁴ γ_1 is the third standardized moment.

    γ_1 = μ_3(EX) / σ³                                                  (53.91)

The skewness is a measure of the asymmetry of a probability distribution. If γ_1 > 0, the right part of the distribution function is longer or fatter (positive skew, right-skewed). If γ_1 < 0, the distribution's left part is longer or fatter. For sample data A, the skewness of the underlying random variable is approximated with the estimator G_1, where s is the estimated standard deviation. The sample skewness is only defined for sets A with at least three elements.

    G_1 = (n / ((n−1)(n−2))) Σ_{i=0}^{n−1} ((A[i] − ā)/s)³              (53.92)

Definition D53.47 (Kurtosis). The excess kurtosis³⁵ γ_2 is a measure for the sharpness of a distribution's peak.

    γ_2 = μ_4(EX) / σ⁴ − 3                                              (53.93)

A distribution with a high kurtosis has a sharper peak and fatter tails, while a distribution with a low kurtosis has a more rounded peak with wider shoulders. The normal distribution (see Section 53.4.2) has zero excess kurtosis.

For sample data A which represents only a subset of a greater amount of data, the sample kurtosis can be approximated with the estimator G_2, where s is the estimate of the sample's standard deviation. The kurtosis is only defined for sets with at least four elements.

    G_2 = (n(n+1) / ((n−1)(n−2)(n−3))) Σ_{i=0}^{n−1} ((A[i] − ā)/s)⁴
          − 3(n−1)² / ((n−2)(n−3))                                      (53.94)
53.2.6 Median, Quantiles, and Mode

Definition D53.48 (Median). The median med(X) is the value right in the middle of a sample or distribution, dividing it into two equal halves.

    P(X ≤ med(X)) ≥ 1/2  ∧  P(X ≥ med(X)) ≥ 1/2                         (53.95)

The probability of drawing an element less than med(X) is equal to the probability of drawing an element larger than med(X). We can determine the median med(X) of continuous
34 http://en.wikipedia.org/wiki/Skewness [accessed 2007-07-03]
35 http://en.wikipedia.org/wiki/Kurtosis [accessed 2008-02-01]
and discrete distributions by solving Equation 53.96 and Equation 53.97, respectively.

    1/2 = ∫_{−∞}^{med(X)} f_X(x) dx                                     (53.96)

    Σ_{i=−∞}^{med(X)−1} f_X(i)  ≤  1/2  ≤  Σ_{i=med(X)}^{∞} f_X(i)      (53.97)

If a sample A has an odd element count, the median med(A) is the element in the middle; otherwise (in a set with an even element count, there exists no single middle element), it is the arithmetic mean of the two middle elements.
Example E53.3 (Median vs. Mean).
The median represents the dataset in an unbiased manner. If you have, for example, the dataset A = (1, 1, 1, 1, 1, 2, 2, 2, 500 000), the arithmetic mean, biased by the large element 500 000, would be very high (≈ 55 556.7). The median, however, would be 1 and thus represents the sample better. The median of a sample can be computed as:

    A_s = sortAsc(A, >)                                                 (53.99)

    med(A) = A_s[(n−1)/2]                      if n = len(A) is odd
             (1/2)·(A_s[n/2] + A_s[n/2 − 1])   otherwise                (53.100)
Quantiles³⁶ are points taken at regular intervals from a sorted dataset (or a cumulative distribution function).

Definition D53.49 (Quantile). The q-quantiles divide a distribution function F or data sample A into q parts T_i of equal probability.

    ∀x ∈ R, ∀i ∈ [0, q−1] : P(x ∈ T_i) = 1/q                            (53.101)

They can be regarded as a generalization of the median; vice versa, the median is the 2-quantile. A sorted data sample is divided into q subsets of equal length by the q-quantiles. The cumulative distribution function of a random variable X is divided by the q-quantiles into q subsets of equal area. The quantiles are the boundaries between the subsets/areas. Therefore, the k-th q-quantile is the value such that the probability that the random variable (or an element of the data set) takes on a value less than it is at most k/q, and the probability that it takes on a value greater than or equal to it is at most (q−k)/q. There exist q−1 q-quantiles (k spans from 1 to q−1). The k-th q-quantile quantile_{k/q}(A) of a dataset A can be computed as:

    A_s = sortAsc(A, >)                                                 (53.102)

    t = k·n / q                                                         (53.103)

    quantile_{k/q}(A) = (1/2)·(A_s[t] + A_s[t − 1])   if t is an integer
                        A_s[⌊t⌋]                      otherwise         (53.104)
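The quantile computation of Equations 53.102 to 53.104 can be sketched in Python (the function name is ours); applied to the dataset of Example E53.3, the 2-quantile reproduces the median of 1:

```python
# The k-th q-quantile of a data sample, Equations 53.102 to 53.104:
# sort ascending, compute t = k*n/q, then average the two neighboring
# elements if t is an integer, otherwise take the element at floor(t).

def quantile(a, k, q):
    a_s = sorted(a)
    n = len(a_s)
    t = k * n / q
    if t == int(t):                      # t is an integer
        t = int(t)
        return 0.5 * (a_s[t] + a_s[t - 1])
    return a_s[int(t)]                   # floor of t

data = [1, 1, 1, 1, 1, 2, 2, 2, 500000]  # dataset of Example E53.3
```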
For some special values of q, the quantiles have been given special names, which are listed in Table 53.1.

36 http://en.wikipedia.org/wiki/Quantiles [accessed 2007-07-03]

    q    name
    100  percentiles
    10   deciles
    9    noniles
    5    quintiles
    4    quartiles
    2    median

Table 53.1: Special Quantiles
Definition D53.50 (Interquartile Range). The interquartile range³⁷ is the range between the first and the third quartile, defined as quantile_{3/4}(X) − quantile_{1/4}(X).

Definition D53.51 (Mode). The mode³⁸ is the value that occurs most often in a data sample or is most frequently assumed by a random variable.

There exist unimodal distributions/samples that have one mode value and multimodal distributions/samples with multiple modes. In [369, 2821], you can find further information on the relation between the mode, the mean, and the skewness.
53.2.7 Entropy

The information entropy³⁹ H(X) defined by Shannon [2465] is a measure of uncertainty for discrete probability mass functions f of random variables X in probability theory (see Equation 53.105). For data samples A, it is estimated based on the relative frequency h(a, n) of the value a amongst the n samples in A (see Equation 53.106).

Definition D53.52 (Information Entropy). The information entropy is a measure for the information content of a system and defined as follows:

    H(X) = Σ_{x=−∞}^{∞} f_X(x)·log₂(1 / f_X(x)) = −Σ_x f_X(x)·log₂ f_X(x)   (53.105)

    H(A) = −Σ_{a∈A} h(a, n)·log₂ h(a, n)                                (53.106)

Definition D53.53 (Differential Entropy). The differential (also called continuous) entropy h(X) is a generalization of the information entropy to continuous probability density functions f of random variables X [1699].

    h(X) = −∫_{−∞}^{∞} f_X(x)·ln f_X(x) dx                              (53.107)
37 http://en.wikipedia.org/wiki/Inter-quartile_range [accessed 2007-07-03]
38 http://en.wikipedia.org/wiki/Mode_%28statistics%29 [accessed 2007-07-03]
39 http://en.wikipedia.org/wiki/Information_entropy [accessed 2007-07-03]
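The estimate of Equation 53.106 is straightforward to implement; the following Python sketch (the function name and test samples are ours) counts relative frequencies and sums their weighted logarithms:

```python
from math import log2

# Entropy estimate H(A) per Equation 53.106, computed from the
# relative frequencies h(a, n) of the distinct values in the sample.

def entropy(a):
    n = len(a)
    freq = {}
    for x in a:
        freq[x] = freq.get(x, 0) + 1
    return -sum((c / n) * log2(c / n) for c in freq.values())

fair_coin = ['h', 't'] * 50   # h(x) = 1/2 for both values -> 1 bit
certain   = ['h'] * 100       # a single certain outcome    -> 0 bits
```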
53.2.8 The Law of Large Numbers

The law of large numbers (LLN) combines statistics and probability by showing that if an event e with the probability P(e) = p is observed in n independent repetitions of a random experiment, its relative frequency h(e, n) (see Definition D53.10) converges to its probability p as n becomes larger.

In the following, assume that A is an infinite sequence of samples from identically distributed and pairwise independent random variables X_i with the (same) expected value EX. The weak law of large numbers states that the mean ā of the sequence A converges to a value in (EX − ε, EX + ε) for each positive real number ε > 0, ε ∈ R⁺.

    lim_{n→∞} P(|ā − EX| < ε) = 1                                       (53.108)

In other words, the weak law of large numbers says that the sample average will converge to the expected value of the random experiment if the experiment is repeated many times.

According to the strong law of large numbers, the mean ā of the sequence A even converges to the expected value EX of the underlying distribution for infinitely large n.

    P(lim_{n→∞} ā = EX) = 1                                             (53.109)

The law of large numbers implies that the accumulated results of each random experiment will approximate the underlying distribution function if repeated infinitely (under the condition that there exists an invariable underlying distribution function).
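The weak law can be observed in a simple simulation; this Python sketch (seed and sample sizes are arbitrary choices of ours) estimates the mean of fair-die throws, which should settle near EX = 3.5 for large n:

```python
import random

# Weak law of large numbers, empirically: the mean of n fair-die
# throws approaches the expected value EX = 3.5 as n grows.
random.seed(42)

def die_mean(n):
    return sum(random.randint(1, 6) for _ in range(n)) / n

deviation_large = abs(die_mean(100000) - 3.5)  # should be small
```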
Many random processes follow well-known probability distributions in their characteristics, some of which we will discuss here. Parts of the information provided in this and the following section have been obtained from Wikipedia – The Free Encyclopedia [2938].
53.3 Some Discrete Distributions

In this section, we discuss some of the common discrete distributions. Discrete probability distributions assign probabilities to the elements of a finite (or, at most, countably infinite) set of discrete events/outcomes of a random experiment.

53.3.1 Discrete Uniform Distribution
    Parameter    Definition
    parameters   a, b ∈ Z, b ≥ a                                        (53.110)
    |Ω|          |Ω| = r = range = b − a + 1                            (53.111)
    PMF          P(X = x) = f_X(x) = 1/r  if a ≤ x ≤ b, x ∈ Z
                                    0    otherwise                      (53.112)
    CDF          P(X ≤ x) = F_X(x) = 0                 if x < a
                                     (⌊x⌋ − a + 1)/r   if a ≤ x ≤ b
                                     1                 otherwise        (53.113)
    mean         EX = (a + b)/2                                         (53.114)
    median       med = (a + b)/2                                        (53.115)
    mode         mode = ∅ (no unique mode)                              (53.116)
    variance     D²X = (r² − 1)/12                                      (53.117)
    skewness     γ_1 = 0                                                (53.118)
    kurtosis     γ_2 = −6(r² + 1) / (5(r² − 1))                         (53.119)
    entropy      H(X) = ln r                                            (53.120)
    mgf          M_X(t) = (e^{at} − e^{(b+1)t}) / (r(1 − e^t))          (53.121)
    char. func.  φ_X(t) = (e^{iat} − e^{i(b+1)t}) / (r(1 − e^{it}))     (53.122)

Table 53.2: Parameters of the discrete uniform distribution.
The uniform distribution exists in a discrete⁴⁰ as well as in a continuous form. In this section, we discuss the discrete form, whereas the continuous form is elaborated on in Section 53.4.1 on page 676.

All possible outcomes of a random experiment which obeys the uniform distribution have exactly the same probability. In the discrete uniform distribution, the set Ω of possible outcomes is usually finite.

Example E53.4 (Uniform Distribution: Ideal Dice and Coins).
The best example for this distribution is throwing an ideal die. This experiment has six possible outcomes ω_i, each of which has the same probability P(ω_i) = 1/6. Throwing ideal coins and drawing one element out of a set of n possible elements are other examples where a discrete uniform distribution can be assumed.

Table 53.2 contains the characteristics of the discrete uniform distribution. In Fig. 53.1.a you can find some example uniform probability mass functions, and in Fig. 53.1.b we have sketched their according cumulative distribution functions.
40 http://en.wikipedia.org/wiki/Uniform_distribution_%28discrete%29 [accessed 2007-07-03]
Fig. 53.1.a: The PMFs of some discrete uniform distributions.
Fig. 53.1.b: The CDFs of some discrete uniform distributions.
Figure 53.1: Examples for the discrete uniform distribution
53.3.2 Poisson Distribution

The Poisson distribution⁴¹ [47] complies with the reference model of the telephone switchboard. It describes a process where the number of events that occur (independently of each other) in a certain time interval depends only on the duration of the interval and not on its position (prehistory). Events do not have any aftermath and thus, there is no mutual influence of non-overlapping time intervals (homogeneity). Furthermore, only the time when an event occurs is considered, not the duration of the event.

Example E53.5 (Telephone Switchboard: Poisson).
The most common example for the Poisson distribution is the telephone switchboard. As already mentioned, we are only interested in the time at which an event occurs, i.e., a call comes in, not in the length of the call. In this model, no events occur in infinitely short time intervals.

The features of the Poisson distribution are listed in Table 53.3⁴² and examples for its PMF and CDF are illustrated in Fig. 53.2.a and Fig. 53.2.b.
    Parameter    Definition
    parameters   λ = θ·t > 0 (intensity θ, time t; see Section 53.3.2.1)   (53.123)
    PMF          P(X = x) = f_X(x) = (θt)^x/x! · e^{−θt} = λ^x/x! · e^{−λ}  (53.124)
    CDF          P(X ≤ x) = F_X(x) = Γ(⌊x⌋ + 1, λ)/⌊x⌋! = Σ_{i=0}^{⌊x⌋} e^{−λ}·λ^i/i!   (53.125)
    mean         EX = θ·t = λ                                           (53.126)
    median       med ≈ λ + 1/3 − 0.02/λ                                 (53.127)
    mode         mode = ⌊λ⌋                                             (53.128)
    variance     D²X = θ·t = λ                                          (53.129)
    skewness     γ_1 = λ^{−1/2}                                         (53.130)
    kurtosis     γ_2 = 1/λ                                              (53.131)
    entropy      H(X) = λ(1 − ln λ) + e^{−λ} Σ_{k=0}^{∞} λ^k·ln(k!)/k!  (53.132)
    mgf          M_X(t) = e^{λ(e^t − 1)}                                (53.133)
    char. func.  φ_X(t) = e^{λ(e^{it} − 1)}                             (53.134)

Table 53.3: Parameters of the Poisson distribution.
53.3.2.1 Poisson Process

The Poisson process⁴³ [2539] is a process that obeys the Poisson distribution, just like the example of the telephone switchboard mentioned before. Here, λ is expressed as the product of the intensity θ and the time t. θ normally describes a frequency, for example θ = 1/min. Both the expected value and the variance of the Poisson process are λ = θ·t. Equation 53.135 defines the probability that k events occur in a Poisson process in a time interval of length t.

    P(X_t = k) = (θt)^k/k! · e^{−θt} = λ^k/k! · e^{−λ}                  (53.135)

The probability that in a time interval [t, t + Δt]

41 http://en.wikipedia.org/wiki/Poisson_distribution [accessed 2007-07-03]
42 The Γ in Equation 53.125 denotes the (upper) incomplete gamma function. More information on the gamma function can be found in ?? on page ??.
43 http://en.wikipedia.org/wiki/Poisson_process [accessed 2007-07-03]
Fig. 53.2.a: The PMFs of some Poisson distributions.
Fig. 53.2.b: The CDFs of some Poisson distributions.
Figure 53.2: Examples for the Poisson distribution.
1. no events occur is 1 − θ·Δt + o(Δt),
2. exactly one event occurs is θ·Δt + o(Δt),
3. multiple events occur is o(Δt).

Here we use an infinitesimal version of the small-o notation.⁴⁴ The statement f ∈ o(φ), i.e., that |f(x)| becomes insignificant compared to |φ(x)|, is normally only valid for x → ∞. In the infinitesimal variant, it holds for x → 0. Thus, we can state that o(Δt) is much smaller than Δt. In principle, the above equations imply that in an infinitely small time span either no event or one event occurs, i.e., events do not arrive simultaneously:

    lim_{Δt→0} P(X_{Δt} > 1) = 0                                        (53.136)
53.3.2.2 The Relation between the Poisson Process and the Exponential Distribution

It is important to know that the (time) distance between two events of the Poisson process is exponentially distributed (see Section 53.4.3 on page 682). If the expected value of the number of events to arrive per time unit in a Poisson process is EX_pois, then the expected value of the time between two events is 1/EX_pois. Since this is the expected value EX_exp = 1/EX_pois of the exponential distribution, its λ_exp-value is λ_exp = 1/EX_exp = 1/(1/EX_pois) = EX_pois. Therefore, the λ_exp-value of the exponential distribution equals the λ_pois-value of the Poisson distribution: λ_exp = λ_pois = EX_pois. In other words, the time interval between (neighboring) events of the Poisson process is exponentially distributed with the same λ value as the Poisson process, as illustrated in Equation 53.137.

    (t(X_{i+1}) − t(X_i)) ∼ exp(λ)  ∀i ∈ N_1                            (53.137)
44 See Definition D12.1 on page 144 and ?? on page ?? for a detailed elaboration on the small-o notation.
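The relation of Equation 53.137 can be probed by simulation; in the following Python sketch (parameter values and names are our own choices), arrivals are generated with exponentially distributed gaps, and the counts per unit window should then have both mean and variance close to λ, as the Poisson distribution demands:

```python
import random

# Exponentially distributed inter-arrival times (rate lam) should
# produce Poisson-distributed counts per window: the mean and the
# variance of the counts must both be close to lam * t.
random.seed(1)
lam, t, runs = 4.0, 1.0, 20000

def arrivals_in_window():
    clock, count = 0.0, 0
    while True:
        clock += random.expovariate(lam)   # exponential gap
        if clock > t:
            return count
        count += 1

counts = [arrivals_in_window() for _ in range(runs)]
mean_count = sum(counts) / runs
var_count = sum((c - mean_count) ** 2 for c in counts) / runs
```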
53.3.3 Binomial Distribution B(n, p)

The binomial distribution⁴⁵ B(n, p) is the probability distribution that describes the probability of the possible numbers of successes in n independent experiments, each with success probability p. Such experiments are called Bernoulli experiments or Bernoulli trials. For n = 1, the binomial distribution is a Bernoulli distribution⁴⁶.

Table 53.4⁴⁷ points out some of the properties of the binomial distribution. A few examples for PMFs and CDFs of different binomial distributions are given in Fig. 53.3.a and Fig. 53.3.b.
    Parameter    Definition
    parameters   n ∈ N₀, 0 ≤ p ≤ 1, p ∈ R                               (53.138)
    PMF          P(X = x) = f_X(x) = (n choose x)·p^x·(1 − p)^{n−x}     (53.139)
    CDF          P(X ≤ x) = F_X(x) = Σ_{i=0}^{⌊x⌋} f_X(i) = I_{1−p}(n − ⌊x⌋, 1 + ⌊x⌋)   (53.140)
    mean         EX = n·p                                               (53.141)
    median       med is one of {⌊np⌋ − 1, ⌊np⌋, ⌊np⌋ + 1}               (53.142)
    mode         mode = ⌊(n + 1)·p⌋                                     (53.143)
    variance     D²X = n·p·(1 − p)                                      (53.144)
    skewness     γ_1 = (1 − 2p) / √(np(1 − p))                          (53.145)
    kurtosis     γ_2 = (1 − 6p(1 − p)) / (np(1 − p))                    (53.146)
    entropy      H(X) = (1/2)·ln(2πne·p·(1 − p)) + O(1/n)               (53.147)
    mgf          M_X(t) = (1 − p + p·e^t)^n                             (53.148)
    char. func.  φ_X(t) = (1 − p + p·e^{it})^n                          (53.149)

Table 53.4: Parameters of the Binomial distribution.
For n → ∞, the binomial distribution approaches a normal distribution. For large n, B(n, p) can therefore often be approximated with the normal distribution (see Section 53.4.2) N(np, np(1 − p)). Whether this approximation is good or not can be found out by rules of thumb, some of them being:

    np > 5  ∧  n(1 − p) > 5                                             (53.150)

    [np − 3√(np(1 − p)), np + 3√(np(1 − p))] ⊆ [0, n]                   (53.151)

In case these rules hold, we still need to transform a continuous distribution to a discrete one. In order to do so, we add 0.5 to the x values, i.e., F_{X,bin}(x) ≈ F_{X,normal}(x + 0.5).
45 http://en.wikipedia.org/wiki/Binomial_distribution [accessed 2007-10-01]
46 http://en.wikipedia.org/wiki/Bernoulli_distribution [accessed 2007-10-01]
47 I_{1−p} in Equation 53.140 denotes the regularized incomplete beta function.
Fig. 53.3.a: The PMFs of some binomial distributions.
Fig. 53.3.b: The CDFs of some binomial distributions.
Figure 53.3: Examples for the binomial distribution.
53.4 Some Continuous Distributions

In this section, we introduce some common continuous distributions. Unlike the discrete distributions, continuous distributions have an uncountably infinite set of possible outcomes of random experiments. Thus, the PDF does not assign probabilities to single events; only the CDF makes statements about the probability of a subset of the possible outcomes of a random experiment.

53.4.1 Continuous Uniform Distribution

After discussing the discrete uniform distribution in Section 53.3.1, we now elaborate on its continuous form⁴⁸. In a uniform distribution, all possible outcomes in a range [a, b], b > a, have exactly the same probability density. The characteristics of this distribution can be found in Table 53.5. Examples of its probability density function are illustrated in Fig. 53.4.a, whereas the according cumulative distribution functions are outlined in Fig. 53.4.b.
    Parameter    Definition
    parameters   a, b ∈ R, a < b                                        (53.152)
    PDF          f_X(x) = 1/(b − a)  if x ∈ [a, b]
                          0          otherwise                          (53.153)
    CDF          P(X ≤ x) = F_X(x) = 0                 if x < a
                                     (x − a)/(b − a)   if x ∈ [a, b]
                                     1                 otherwise        (53.154)
    mean         EX = (a + b)/2                                         (53.155)
    median       med = (a + b)/2                                        (53.156)
    mode         mode = ∅ (no unique mode)                              (53.157)
    variance     D²X = (b − a)²/12                                      (53.158)
    skewness     γ_1 = 0                                                (53.159)
    kurtosis     γ_2 = −6/5                                             (53.160)
    entropy      h(X) = ln(b − a)                                       (53.161)
    mgf          M_X(t) = (e^{tb} − e^{ta}) / (t(b − a))                (53.162)
    char. func.  φ_X(t) = (e^{itb} − e^{ita}) / (it(b − a))             (53.163)

Table 53.5: Parameters of the continuous uniform distribution.
48 http://en.wikipedia.org/wiki/Uniform_distribution_%28continuous%29 [accessed 2007-07-03]
Fig. 53.4.a: The PDFs of some continuous uniform distributions.
Fig. 53.4.b: The CDFs of some continuous uniform distributions.
Figure 53.4: Some examples for the continuous uniform distribution.
53.4.2 Normal Distribution N(μ, σ²)

Many phenomena in nature, like the size of chicken eggs, noise, or errors in measurement, can be considered as outcomes of random experiments with properties that can be approximated by the normal distribution⁴⁹ N(μ, σ²) [3060]. Its probability density function, shown for some example values in Fig. 53.5.a, is symmetric about the expected value μ and becomes flatter with rising standard deviation σ.

The cumulative distribution function is outlined for the same example values in Fig. 53.5.b. Other characteristics of the normal distribution can be found in Table 53.6.
    Parameter    Definition
    parameters   μ ∈ R, σ ∈ R⁺                                          (53.164)
    PDF          f_X(x) = 1/(σ√(2π)) · e^{−(x−μ)²/(2σ²)}                (53.165)
    CDF          P(X ≤ x) = F_X(x) = 1/(σ√(2π)) ∫_{−∞}^{x} e^{−(z−μ)²/(2σ²)} dz   (53.166)
    mean         EX = μ                                                 (53.167)
    median       med = μ                                                (53.168)
    mode         mode = μ                                               (53.169)
    variance     D²X = σ²                                               (53.170)
    skewness     γ_1 = 0                                                (53.171)
    kurtosis     γ_2 = 0                                                (53.172)
    entropy      h(X) = ln(σ√(2πe))                                     (53.173)
    mgf          M_X(t) = e^{μt + σ²t²/2}                               (53.174)
    char. func.  φ_X(t) = e^{iμt − σ²t²/2}                              (53.175)

Table 53.6: Parameters of the normal distribution.
53.4.2.1 Standard Normal Distribution

For the sake of simplicity, the standard normal distribution N(0, 1) with the CDF Φ(x) is defined with μ = 0 and σ = 1. Values of this function are listed in tables. You can compute the CDF of any normal distribution from the one of the standard normal distribution by applying Equation 53.177.

    Φ(x) = 1/√(2π) ∫_{−∞}^{x} e^{−z²/2} dz                              (53.176)

    P(X ≤ x) = Φ((x − μ)/σ)                                             (53.177)

Some values of Φ(x) are listed in Table 53.7. For the sake of saving space by using two dimensions, we compose the values of x as a sum of a row and a column value. If you want to look up Φ(2.13), for example, you go to the row which starts with 2.1 and the column of 0.03, where you find Φ(2.13) ≈ 0.9834.

49 http://en.wikipedia.org/wiki/Normal_distribution [accessed 2007-07-03]
Fig. 53.5.a: The PDFs of some normal distributions.
Fig. 53.5.b: The CDFs of some normal distributions.
Figure 53.5: Examples for the normal distribution.
x 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
0.0 0.5000 0.5040 0.5080 0.5120 0.5160 0.5199 0.5239 0.5279 0.5319 0.5359
0.1 0.5398 0.5438 0.5478 0.5517 0.5557 0.5596 0.5636 0.5675 0.5714 0.5753
0.2 0.5793 0.5832 0.5871 0.5910 0.5948 0.5987 0.6026 0.6064 0.6103 0.6141
0.3 0.6179 0.6217 0.6255 0.6293 0.6331 0.6368 0.6406 0.6443 0.6480 0.6517
0.4 0.6554 0.6591 0.6628 0.6664 0.6700 0.6736 0.6772 0.6808 0.6844 0.6879
0.5 0.6915 0.6950 0.6985 0.7019 0.7054 0.7088 0.7123 0.7157 0.7190 0.7224
0.6 0.7257 0.7291 0.7324 0.7357 0.7389 0.7422 0.7454 0.7486 0.7517 0.7549
0.7 0.7580 0.7611 0.7642 0.7673 0.7704 0.7734 0.7764 0.7794 0.7823 0.7852
0.8 0.7881 0.7910 0.7939 0.7967 0.7995 0.8023 0.8051 0.8078 0.8106 0.8133
0.9 0.8159 0.8186 0.8212 0.8238 0.8264 0.8289 0.8315 0.8340 0.8365 0.8389
1.0 0.8413 0.8438 0.8461 0.8485 0.8508 0.8531 0.8554 0.8577 0.8599 0.8621
1.1 0.8643 0.8665 0.8686 0.8708 0.8729 0.8749 0.8770 0.8790 0.8810 0.8830
1.2 0.8849 0.8869 0.8888 0.8907 0.8925 0.8944 0.8962 0.8980 0.8997 0.9015
1.3 0.9032 0.9049 0.9066 0.9082 0.9099 0.9115 0.9131 0.9147 0.9162 0.9177
1.4 0.9192 0.9207 0.9222 0.9236 0.9251 0.9265 0.9279 0.9292 0.9306 0.9319
1.5 0.9332 0.9345 0.9357 0.9370 0.9382 0.9394 0.9406 0.9418 0.9429 0.9441
1.6 0.9452 0.9463 0.9474 0.9484 0.9495 0.9505 0.9515 0.9525 0.9535 0.9545
1.7 0.9554 0.9564 0.9573 0.9582 0.9591 0.9599 0.9608 0.9616 0.9625 0.9633
1.8 0.9641 0.9649 0.9656 0.9664 0.9671 0.9678 0.9686 0.9693 0.9699 0.9706
1.9 0.9713 0.9719 0.9726 0.9732 0.9738 0.9744 0.9750 0.9756 0.9761 0.9767
2.0 0.9772 0.9778 0.9783 0.9788 0.9793 0.9798 0.9803 0.9808 0.9812 0.9817
2.1 0.9821 0.9826 0.9830 0.9834 0.9838 0.9842 0.9846 0.9850 0.9854 0.9857
2.2 0.9861 0.9864 0.9868 0.9871 0.9875 0.9878 0.9881 0.9884 0.9887 0.9890
2.3 0.9893 0.9896 0.9898 0.9901 0.9904 0.9906 0.9909 0.9911 0.9913 0.9916
2.4 0.9918 0.9920 0.9922 0.9925 0.9927 0.9929 0.9931 0.9932 0.9934 0.9936
2.5 0.9938 0.9940 0.9941 0.9943 0.9945 0.9946 0.9948 0.9949 0.9951 0.9952
2.6 0.9953 0.9955 0.9956 0.9957 0.9959 0.9960 0.9961 0.9962 0.9963 0.9964
2.7 0.9965 0.9966 0.9967 0.9968 0.9969 0.9970 0.9971 0.9972 0.9973 0.9974
2.8 0.9974 0.9975 0.9976 0.9977 0.9977 0.9978 0.9979 0.9979 0.9980 0.9981
2.9 0.9981 0.9982 0.9982 0.9983 0.9984 0.9984 0.9985 0.9985 0.9986 0.9986
3.0 0.9987 0.9987 0.9987 0.9988 0.9988 0.9989 0.9989 0.9989 0.9990 0.9990
Table 53.7: Some values of the standardized normal distribution.
53.4.2.2 The Probit Function

The inverse of the cumulative distribution function of the standard normal distribution is called the probit function. It is also often denoted as the z-quantile of the standard normal distribution.

    z(y) ≡ probit(y) ≡ Φ^{−1}(y)                                        (53.178)

    y = Φ(x) ⇒ Φ^{−1}(y) = z(y) = x                                     (53.179)

The values of the quantiles of the standard normal distribution can also be looked up in Table 53.7; the previously discussed lookup process is simply reversed. If we want to find the value z(0.922), we locate the closest match in the table. In Table 53.7, we find 0.9222, which leads us to x = 1.4 + 0.02. Hence, z(0.922) ≈ 1.42.
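Instead of the table lookup, Φ and the probit function can also be computed; the following Python sketch expresses Φ via the error function and inverts it by bisection (our own choice of inversion method, exploiting that Φ is strictly increasing):

```python
from math import erf, sqrt

# Standard normal CDF via the error function, and the probit
# function z(y) obtained by bisecting Phi on a wide interval.

def Phi(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def probit(y, lo=-10.0, hi=10.0):
    for _ in range(80):                 # bisection: Phi is monotonic
        mid = 0.5 * (lo + hi)
        if Phi(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Both table examples from the text are reproduced this way: Φ(2.13) ≈ 0.9834 and z(0.922) ≈ 1.42.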
53.4.2.3 Multivariate Normal Distribution

The probability density function of the multivariate normal distribution⁵⁰ [2345, 2523, 2664] is given in Equation 53.180 and Equation 53.181 for the general case (where Σ is the covariance matrix) and in Equation 53.182 in the uncorrelated form. If the distributions,

50 http://en.wikipedia.org/wiki/Multivariate_normal_distribution [accessed 2007-07-03]
additionally to being uncorrelated, also have the same parameters μ and σ, the probability density function of the multivariate normal distribution can be expressed as in Equation 53.183.

    f_X(x) = 1/√((2π)^n·|Σ|) · e^{−(1/2)(x−μ)ᵀ Σ^{−1} (x−μ)}            (53.180)

           = 1/((2π)^{n/2}·|Σ|^{1/2}) · e^{−(1/2)(x−μ)ᵀ Σ^{−1} (x−μ)}   (53.181)

    f_X(x) = Π_{i=1}^{n} 1/(σ_i√(2π)) · e^{−(x_i−μ_i)²/(2σ_i²)}         (53.182)

    f_X(x) = Π_{i=1}^{n} 1/(σ√(2π)) · e^{−(x_i−μ)²/(2σ²)}
           = (1/(2πσ²))^{n/2} · e^{−Σ_{i=1}^{n}(x_i−μ)²/(2σ²)}          (53.183)
53.4.2.4 Central Limit Theorem

The central limit theorem⁵¹ (CLT) states that the sum S_n = Σ_{i=1}^{n} X_i of n independent, identically distributed random variables X_i with finite expected values E[X_i] and non-zero variances D²[X_i] > 0 approaches a normal distribution for n → +∞ [925, 1481, 2704].

51 http://en.wikipedia.org/wiki/Central_limit_theorem [accessed 2008-08-19]
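A minimal empirical illustration of the CLT (sample sizes and seed are arbitrary choices of ours): sums of twelve uniform(0, 1) variables have mean 6 and variance 12 · 1/12 = 1, and roughly 68.3% of them should fall within one standard deviation of the mean, as for a normal distribution:

```python
import random

# Sums of 12 uniform(0, 1) variables: mean 6, variance 1, and an
# approximately normal shape per the central limit theorem.
random.seed(7)
N = 20000
sums = [sum(random.random() for _ in range(12)) for _ in range(N)]
m = sum(sums) / N
v = sum((s - m) ** 2 for s in sums) / N
within_one_sigma = sum(1 for s in sums if abs(s - 6.0) <= 1.0) / N
```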
53.4.3 Exponential Distribution exp(λ)

The exponential distribution⁵² exp(λ) [781] is often used if the lifetimes of apparatuses, half-life periods of radioactive elements, or the time between two events in the Poisson process (see Section 53.3.2.2 on page 673) have to be approximated. Its PDF is sketched in Fig. 53.6.a for some example values of λ; the according cases of the CDF are illustrated in Fig. 53.6.b. The most important characteristics of the exponential distribution can be obtained from Table 53.8.
    Parameter    Definition
    parameters   λ ∈ R⁺                                                 (53.184)
    PDF          f_X(x) = 0          if x ≤ 0
                          λ·e^{−λx}  otherwise                          (53.185)
    CDF          P(X ≤ x) = F_X(x) = 0            if x ≤ 0
                                     1 − e^{−λx}  otherwise             (53.186)
    mean         EX = 1/λ                                               (53.187)
    median       med = ln 2 / λ                                         (53.188)
    mode         mode = 0                                               (53.189)
    variance     D²X = 1/λ²                                             (53.190)
    skewness     γ_1 = 2                                                (53.191)
    kurtosis     γ_2 = 6                                                (53.192)
    entropy      h(X) = 1 − ln λ                                        (53.193)
    mgf          M_X(t) = (1 − t/λ)^{−1}                                (53.194)
    char. func.  φ_X(t) = (1 − it/λ)^{−1}                               (53.195)

Table 53.8: Parameters of the exponential distribution.
52 http://en.wikipedia.org/wiki/Exponential_distribution [accessed 2007-07-03]
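Exponentially distributed values are easy to generate by inverse-transform sampling: solving F_X(x) = 1 − e^{−λx} = U for x gives x = −ln(1 − U)/λ. A Python sketch (seed and parameters are our own choices):

```python
import random
from math import log

# Inverse-transform sampling for exp(lam): X = -ln(1 - U) / lam
# with U uniform on [0, 1); the sample mean should approach 1/lam
# and half of the values should lie below the median ln(2)/lam.
random.seed(3)
lam, N = 2.0, 50000
xs = [-log(1.0 - random.random()) / lam for _ in range(N)]
mean_x = sum(xs) / N
below_median = sum(1 for x in xs if x <= log(2.0) / lam) / N
```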
Fig. 53.6.a: The PDFs of some exponential distributions.
Fig. 53.6.b: The CDFs of some exponential distributions.
Figure 53.6: Some examples for the exponential distribution.
53.4.4 Chi-square Distribution

The chi-square (or χ²) distribution⁵³ is a continuous probability distribution on the set of positive real numbers. It is a so-called sample distribution which is used for the estimation of parameters like the variance of other distributions. It also describes the sum of the squares of independent standardized normally distributed random variables. Its sole parameter, n, denotes the degrees of freedom.

In Table 53.9⁵⁴, the characteristic parameters of the χ² distribution are outlined. A few examples for the PDF and CDF of the χ² distribution are illustrated in Fig. 53.7.a and Fig. 53.7.b.

Table 53.10 provides some selected values of the χ² distribution. The headline of the table contains results of the cumulative distribution function F_X(x) of a χ² distribution with n degrees of freedom (values in the first column). The cells denote the x values that belong to these (n, F_X(x)) combinations.
Parameter     Definition
parameters    n ∈ ℝ⁺, n > 0                                                           (53.196)
PDF           f_X(x) = 0 if x ≤ 0;  x^{n/2−1} e^{−x/2} / (2^{n/2} Γ(n/2)) otherwise    (53.197)
CDF           P(X ≤ x) = F_X(x) = γ(n/2, x/2) / Γ(n/2) = P_Γ(n/2, x/2)                 (53.198)
mean          EX = n                                                                   (53.199)
median        med ≈ n − 2/3                                                            (53.200)
mode          mode = n − 2 if n ≥ 2                                                    (53.201)
variance      D²X = 2n                                                                 (53.202)
skewness      γ₁ = √(8/n)                                                              (53.203)
kurtosis      γ₂ = 12/n                                                                (53.204)
entropy       h(X) = n/2 + ln(2Γ(n/2)) + (1 − n/2) ψ(n/2)                              (53.205)
mgf           M_X(t) = (1 − 2t)^{−n/2} for 2t < 1                                       (53.206)
char. func.   φ_X(t) = (1 − 2it)^{−n/2}                                                 (53.207)

Table 53.9: Parameters of the χ² distribution.

53 http://en.wikipedia.org/wiki/Chi-square_distribution [accessed 2007-09-30]
54 γ(n, z) in Equation 53.198 is the lower incomplete Gamma function and P_Γ(n, z) is the regularized Gamma function.
Fig. 53.7.a: The PDFs of some χ² distributions (n = 1, 2, 4, 7, 10).
Fig. 53.7.b: The CDFs of some χ² distributions (n = 1, 2, 4, 7, 10).
Figure 53.7: Examples for the χ² distribution.
n 0.995 .99 .975 .95 .9 .1 .05 .025 .01 .005
1 0.000 0.000 0.001 0.004 0.016 2.706 3.841 5.024 6.635 7.879
2 0.010 0.020 0.051 0.103 0.211 4.605 5.991 7.378 9.210 10.597
3 0.072 0.115 0.216 0.352 0.584 6.251 7.815 9.348 11.345 12.838
4 0.207 0.297 0.484 0.711 1.064 7.779 9.488 11.143 13.277 14.860
5 0.412 0.554 0.831 1.145 1.610 9.236 11.070 12.833 15.086 16.750
6 0.676 0.872 1.237 1.635 2.204 10.645 12.592 14.449 16.812 18.548
7 0.989 1.239 1.690 2.167 2.833 12.017 14.067 16.013 18.475 20.278
8 1.344 1.646 2.180 2.733 3.490 13.362 15.507 17.535 20.090 21.955
9 1.735 2.088 2.700 3.325 4.168 14.684 16.919 19.023 21.666 23.589
10 2.156 2.558 3.247 3.940 4.865 15.987 18.307 20.483 23.209 25.188
11 2.603 3.053 3.816 4.575 5.578 17.275 19.675 21.920 24.725 26.757
12 3.074 3.571 4.404 5.226 6.304 18.549 21.026 23.337 26.217 28.300
13 3.565 4.107 5.009 5.892 7.042 19.812 22.362 24.736 27.688 29.819
14 4.075 4.660 5.629 6.571 7.790 21.064 23.685 26.119 29.141 31.319
15 4.601 5.229 6.262 7.261 8.547 22.307 24.996 27.488 30.578 32.801
16 5.142 5.812 6.908 7.962 9.312 23.542 26.296 28.845 32.000 34.267
17 5.697 6.408 7.564 8.672 10.085 24.769 27.587 30.191 33.409 35.718
18 6.265 7.015 8.231 9.390 10.865 25.989 28.869 31.526 34.805 37.156
19 6.844 7.633 8.907 10.117 11.651 27.204 30.144 32.852 36.191 38.582
20 7.434 8.260 9.591 10.851 12.443 28.412 31.410 34.170 37.566 39.997
21 8.034 8.897 10.283 11.591 13.240 29.615 32.671 35.479 38.932 41.401
22 8.643 9.542 10.982 12.338 14.041 30.813 33.924 36.781 40.289 42.796
23 9.260 10.196 11.689 13.091 14.848 32.007 35.172 38.076 41.638 44.181
24 9.886 10.856 12.401 13.848 15.659 33.196 36.415 39.364 42.980 45.559
25 10.520 11.524 13.120 14.611 16.473 34.382 37.652 40.646 44.314 46.928
26 11.160 12.198 13.844 15.379 17.292 35.563 38.885 41.923 45.642 48.290
27 11.808 12.879 14.573 16.151 18.114 36.741 40.113 43.195 46.963 49.645
28 12.461 13.565 15.308 16.928 18.939 37.916 41.337 44.461 48.278 50.993
29 13.121 14.256 16.047 17.708 19.768 39.087 42.557 45.722 49.588 52.336
30 13.787 14.953 16.791 18.493 20.599 40.256 43.773 46.979 50.892 53.672
40 20.707 22.164 24.433 26.509 29.051 51.805 55.758 59.342 63.691 66.766
50 27.991 29.707 32.357 34.764 37.689 63.167 67.505 71.420 76.154 79.490
60 35.534 37.485 40.482 43.188 46.459 74.397 79.082 83.298 88.379 91.952
70 43.275 45.442 48.758 51.739 55.329 85.527 90.531 95.023 100.425 104.215
80 51.172 53.540 57.153 60.391 64.278 96.578 101.879 106.629 112.329 116.321
90 59.196 61.754 65.647 69.126 73.291 107.565 113.145 118.136 124.116 128.299
100 67.328 70.065 74.222 77.929 82.358 118.498 124.342 129.561 135.807 140.169
Table 53.10: Some values of the χ² distribution.
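The construction of the χ² distribution as a sum of squared standard normal variates lends itself to a quick numerical check. This sketch (our own, not from the book) verifies the mean n and variance 2n from Table 53.9 and one tabulated quantile from Table 53.10.

```python
import random
import statistics

def chi_square_sample(n, rng):
    """One chi-square variate with n degrees of freedom: the sum of n
    squared independent standard normal variates."""
    return sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(n))

n, rng = 5, random.Random(3)
xs = [chi_square_sample(n, rng) for _ in range(50_000)]

print(round(statistics.fmean(xs), 1))      # Table 53.9: mean EX = n
print(round(statistics.variance(xs), 1))   # Table 53.9: variance D^2 X = 2n
# Table 53.10, row n = 5, right-tail probability .05: x = 11.070, so
# about 95% of the variates should fall below 11.070.
print(round(sum(1 for x in xs if x <= 11.070) / len(xs), 2))
```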
53.4.5 Student's t-Distribution

The Student's t-distribution[55] is based on the insight that the mean of a normally distributed feature of a sample is no longer normally distributed if the variance is unknown and needs to be estimated from the data samples [929, 1112, 1113]. It was designed by Gosset [1112], who published it under the pseudonym Student.

The parameter n of the distribution denotes its degrees of freedom. If n approaches infinity, the t-distribution approaches the standard normal distribution.

The characteristic properties of Student's t-distribution are outlined in Table 53.11[56] and examples for its PDF and CDF are illustrated in Fig. 53.8.a and Fig. 53.8.b.

Table 53.12 provides some selected values for the quantiles t_{1−a,n} of the t-distribution (one-sided confidence intervals, see Section 53.6.3 on page 696). The headline of the table contains values of the cumulative distribution function F_X(x) of a Student's t-distribution with n degrees of freedom (n is given in the first column). The cells denote the x values that belong to these (n, F_X(x)) combinations.
Parameter     Definition
parameters    n ∈ ℝ⁺, n > 0                                                                         (53.208)
PDF           f_X(x) = Γ((n+1)/2) / (√(nπ) Γ(n/2)) · (1 + x²/n)^{−(n+1)/2}                           (53.209)
CDF           P(X ≤ x) = F_X(x) = 1/2 + x Γ((n+1)/2) ₂F₁(1/2, (n+1)/2; 3/2; −x²/n) / (√(πn) Γ(n/2))  (53.210)
mean          EX = 0                                                                                 (53.211)
median        med = 0                                                                                (53.212)
mode          mode = 0                                                                               (53.213)
variance      D²X = n/(n − 2) for n > 2, otherwise undefined                                          (53.214)
skewness      γ₁ = 0 for n > 3                                                                       (53.215)
kurtosis      γ₂ = 6/(n − 4) for n > 4                                                               (53.216)
entropy       h(X) = ((n+1)/2) [ψ((n+1)/2) − ψ(n/2)] + ln(√n B(n/2, 1/2))                             (53.217)
mgf           undefined                                                                              (53.218)

Table 53.11: Parameters of the Student's t-distribution.

55 http://en.wikipedia.org/wiki/Student%27s_t-distribution [accessed 2007-09-30]
56 More information on the gamma function Γ used in Equation 53.209 and Equation 53.210 can be found in ?? on page ??. ₂F₁ in Equation 53.210 stands for the hypergeometric function; ψ and B in Equation 53.217 are the digamma and the beta function.
Fig. 53.8.a: The PDFs of some Student's t-distributions (n = 1, 2, 5) together with the normal distribution. The Student's t-distribution approaches the normal distribution N(0,1) for n → ∞.
Fig. 53.8.b: The CDFs of some Student's t-distributions (n = 1, 2, 5) together with the normal distribution.
Figure 53.8: Examples for Student's t-distribution.
n 0.75 .8 .85 .875 .9 .95 .975 .99 .995 .9975 .999 .9995
1 1.000 1.376 1.963 2.414 3.078 6.314 12.71 31.82 63.66 127.3 318.3 636.6
2 0.816 1.061 1.386 1.605 1.886 2.920 4.303 6.965 9.925 14.09 22.33 31.60
3 0.765 0.978 1.250 1.423 1.638 2.353 3.182 4.541 5.841 7.453 10.21 12.92
4 0.741 0.941 1.190 1.344 1.533 2.132 2.776 3.747 4.604 5.598 7.173 8.610
5 0.727 0.920 1.156 1.301 1.476 2.015 2.571 3.365 4.032 4.773 5.893 6.869
6 0.718 0.906 1.134 1.273 1.440 1.943 2.447 3.143 3.707 4.317 5.208 5.959
7 0.711 0.896 1.119 1.254 1.415 1.895 2.365 2.998 3.499 4.029 4.785 5.408
8 0.706 0.889 1.108 1.240 1.397 1.860 2.306 2.896 3.355 3.833 4.501 5.041
9 0.703 0.883 1.100 1.230 1.383 1.833 2.262 2.821 3.250 3.690 4.297 4.781
10 0.700 0.879 1.093 1.221 1.372 1.812 2.228 2.764 3.169 3.581 4.144 4.587
11 0.697 0.876 1.088 1.214 1.363 1.796 2.201 2.718 3.106 3.497 4.025 4.437
12 0.695 0.873 1.083 1.209 1.356 1.782 2.179 2.681 3.055 3.428 3.930 4.318
13 0.694 0.870 1.079 1.204 1.350 1.771 2.160 2.650 3.012 3.372 3.852 4.221
14 0.692 0.868 1.076 1.200 1.345 1.761 2.145 2.624 2.977 3.326 3.787 4.140
15 0.691 0.866 1.074 1.197 1.341 1.753 2.131 2.602 2.947 3.286 3.733 4.073
16 0.690 0.865 1.071 1.194 1.337 1.746 2.120 2.583 2.921 3.252 3.686 4.015
17 0.689 0.863 1.069 1.191 1.333 1.740 2.110 2.567 2.898 3.222 3.646 3.965
18 0.688 0.862 1.067 1.189 1.330 1.734 2.101 2.552 2.878 3.197 3.610 3.922
19 0.688 0.861 1.066 1.187 1.328 1.729 2.093 2.539 2.861 3.174 3.579 3.883
20 0.687 0.860 1.064 1.185 1.325 1.725 2.086 2.528 2.845 3.153 3.552 3.850
21 0.686 0.859 1.063 1.183 1.323 1.721 2.080 2.518 2.831 3.135 3.527 3.819
22 0.686 0.858 1.061 1.182 1.321 1.717 2.074 2.508 2.819 3.119 3.505 3.792
23 0.685 0.858 1.060 1.180 1.319 1.714 2.069 2.500 2.807 3.104 3.485 3.767
24 0.685 0.857 1.059 1.179 1.318 1.711 2.064 2.492 2.797 3.091 3.467 3.745
25 0.684 0.856 1.058 1.178 1.316 1.708 2.060 2.485 2.787 3.078 3.450 3.725
26 0.684 0.856 1.058 1.177 1.315 1.706 2.056 2.479 2.779 3.067 3.435 3.707
27 0.684 0.855 1.057 1.176 1.314 1.703 2.052 2.473 2.771 3.057 3.421 3.690
28 0.683 0.855 1.056 1.175 1.313 1.701 2.048 2.467 2.763 3.047 3.408 3.674
29 0.683 0.854 1.055 1.174 1.311 1.699 2.045 2.462 2.756 3.038 3.396 3.659
30 0.683 0.854 1.055 1.173 1.310 1.697 2.042 2.457 2.750 3.030 3.385 3.646
40 0.681 0.851 1.050 1.167 1.303 1.684 2.021 2.423 2.704 2.971 3.307 3.551
50 0.679 0.849 1.047 1.164 1.299 1.676 2.009 2.403 2.678 2.937 3.261 3.496
60 0.679 0.848 1.045 1.162 1.296 1.671 2.000 2.390 2.660 2.915 3.232 3.460
80 0.678 0.846 1.043 1.159 1.292 1.664 1.990 2.374 2.639 2.887 3.195 3.416
100 0.677 0.845 1.042 1.158 1.290 1.660 1.984 2.364 2.626 2.871 3.174 3.390
120 0.677 0.845 1.041 1.157 1.289 1.658 1.980 2.358 2.617 2.860 3.160 3.373
∞ 0.674 0.842 1.036 1.150 1.282 1.645 1.960 2.326 2.576 2.807 3.090 3.291
Table 53.12: Table of Student's t-distribution with right-tail probabilities.
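One common way to generate a t variate is as Z/√(V/n), with Z standard normal and V an independent χ² variate with n degrees of freedom. The sketch below (our own illustration, not from the book) uses this to cross-check one quantile from Table 53.12 and the variance n/(n − 2) from Table 53.11.

```python
import math
import random
import statistics

def t_sample(n, rng):
    """Student's t variate with n degrees of freedom, generated as
    Z / sqrt(V / n) with Z standard normal and V chi-square with n dof."""
    z = rng.gauss(0.0, 1.0)
    v = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(n))
    return z / math.sqrt(v / n)

n, rng = 5, random.Random(11)
xs = [t_sample(n, rng) for _ in range(50_000)]

# Table 53.12, row n = 5, column .95: t = 2.015, so about 95% of the
# variates should fall below 2.015.
below = sum(1 for x in xs if x <= 2.015) / len(xs)
print(round(below, 2))
# Table 53.11: variance n/(n-2) = 5/3 for n > 2.
print(round(statistics.variance(xs), 1))
```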
53.5 Example: Throwing a Dice

Let us now discuss the different parameters of a random variable using the example of throwing a dice. On a dice, the numbers one to six are written and the result of throwing it is the number written on the side facing upwards. If a dice is ideal, the numbers one to six will show up with exactly the same probability, 1/6. The set of all possible outcomes of throwing a dice is thus

Ω = {1, 2, 3, 4, 5, 6}                                                     (53.219)

We define a random variable X : Ω ↦ ℝ that assigns real numbers to the possible outcomes of throwing the dice in a way that the value of X matches the number on the dice:

X(ω) = ω ∀ω ∈ {1, 2, 3, 4, 5, 6}                                           (53.220)
It is obviously a uniformly distributed discrete random variable (see Section 53.3.1 on page 668) that can take on six states. We can now define the probability mass function PMF and the according cumulative distribution function CDF as follows (see also Figure 53.9):

F_X(x) = P(X ≤ x) = { 0 if x < 1;  ⌊x⌋/6 if 1 ≤ x < 6;  1 otherwise }      (53.221)

f_X(x) = P(X = x) = { 1/6 if x ∈ {1, 2, 3, 4, 5, 6};  0 otherwise }        (53.222)
Figure 53.9: The PMF and CDF of the dice throwing experiment. (The PMF consists of single, discrete points; the CDF is a step function with discontinuous points.)

We now can discuss the statistical parameters of this experiment. This is a good opportunity to compare the real parameters and their estimates. We therefore assume that the dice was thrown ten times (n = 10) in an experiment. The following numbers have been thrown, as illustrated in Figure 53.10:
A = (4, 5, 3, 2, 4, 6, 4, 2, 5, 3)                                         (53.223)
Table 53.13 outlines how the parameters of the random variable are computed. The real values of the parameters are defined using the PMF or CDF, while the estimations are based on the sample data obtained from an actual experiment.
Figure 53.10: The numbers thrown in the dice example.
parameter   true value                                          estimate
count       nonexistent                                         n = len(A) = 10                                              (53.224)
minimum     a = min{x : f_X(x) > 0} = 1                         â = min A = 2 ≉ a                                             (53.225)
maximum     b = max{x : f_X(x) > 0} = 6                         b̂ = max A = 6 ≈ b                                             (53.226)
range       range = r = b − a + 1 = 6                           r̂ = b̂ − â + 1 = 5 ≉ range                                     (53.227)
mean        EX = (a + b)/2 = 7/2 = 3.5                          ā = (1/n) Σ_{i=0}^{n−1} A[i] = 19/5 = 3.8 ≈ EX                 (53.228)
median      med = (a + b)/2 = 7/2 = 3.5                         A_s = sortAsc(A); m̂ed = (A_s[n/2] + A_s[n/2 − 1])/2 = 4 ≈ med  (53.229)
mode        mode = ∅ (no unique mode)                           m̂ode = 4 ≉ mode                                               (53.230)
variance    D²X = σ² = (r² − 1)/12 = 35/12 ≈ 2.917              s² = (1/(n−1)) Σ_{i=0}^{n−1} (A[i] − ā)² = 26/15 ≈ 1.73 ≉ σ²   (53.231)
skewness    γ₁ = 0                                              G₁ ≈ 0.0876 ≈ γ₁                                              (53.232)
kurtosis    γ₂ = −6(r² + 1) / (5(r² − 1)) = −222/175 ≈ −1.269   G₂ ≈ −0.7512 ≉ γ₂                                             (53.233)

Table 53.13: Parameters of the dice throw experiment.
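The estimates in the right column of Table 53.13 can be reproduced with a few lines of Python (our own sketch, not part of the book):

```python
import statistics
from collections import Counter

A = [4, 5, 3, 2, 4, 6, 4, 2, 5, 3]   # the ten throws from Equation 53.223
n = len(A)

mean = sum(A) / n                                   # Eq. 53.228: 19/5 = 3.8
med = statistics.median(A)                          # Eq. 53.229: 4
mode = Counter(A).most_common(1)[0][0]              # Eq. 53.230: 4
s2 = sum((x - mean) ** 2 for x in A) / (n - 1)      # Eq. 53.231: 26/15

print(n, mean, med, mode, round(s2, 4))  # -> 10 3.8 4.0 4 1.7333
```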
As you can see, the estimations of the parameters sometimes differ significantly from their true values. More information about estimation can be found in the following section.
53.6 Estimation Theory

53.6.1 Introduction

Estimation theory is the science of approximating the values of parameters based on measurements or otherwise obtained sample data [1506, 2203, 2299, 2494, 2782]. The center of this branch of statistics is to find good estimators in order to approximate the real values of the parameters as closely as possible.
Definition D53.54 (Estimator). An estimator[57] θ̂ is a rule (most often a mathematical function) that takes a set of sample data A as input and returns an estimation of one parameter θ of the random distribution of the process sampled with this data set.

We have already discussed some estimators in Section 53.2: the arithmetic mean of a sample data set (see Definition D53.32 on page 661), for example, is an estimator for the expected value (see Definition D53.30 on page 661), and in Equation 53.67 on page 662 we introduced an estimator for the sample variance. Obviously, an estimator θ̂ is the better the closer its results (the estimates) come to the real values of the parameter θ.

Definition D53.55 (Point Estimator). We define a point estimator θ̂ to be an estimator which is a mathematical function θ̂ : ℝⁿ ↦ ℝ. This function takes the data sample A (here considered as a real vector A ∈ ℝⁿ) as input and returns the estimate in the form of a (real) scalar value.
Definition D53.56 ((Estimation) Error). The absolute (estimation) error[58] ε is the difference between the value returned by a point estimator θ̂ of a parameter θ for a certain input A and the parameter's real value. Notice that the error can be zero, positive, or negative.

ε_A(θ̂) = θ̂(A) − θ                                                          (53.234)
In the following, we will most often not explicitly refer to the data sample A as basis of the estimation θ̂ anymore. We assume that it is implicitly clear that estimations are usually based on such samples and that subscripts like the A in ε_A in Equation 53.234 are not needed.
Definition D53.57 (Bias). The bias Bias(θ̂) of an estimator θ̂ is the expected value of the difference between the estimate and the real value.

Bias(θ̂) = E[θ̂ − θ] = E[θ̂] − θ                                              (53.235)

Definition D53.58 (Unbiased Estimator). An unbiased estimator has a zero bias.

Bias(θ̂) = E[θ̂ − θ] = E[θ̂] − θ = 0 ⇔ E[θ̂] = θ                               (53.236)
Definition D53.59 (Mean Square Error). The mean square error[59] MSE(θ̂) of an estimator θ̂ is the expected value of the square of the estimation error ε. It is also the sum of the variance of the estimator and the square of its bias.

MSE(θ̂) = E[ε(θ̂)²] = E[(θ̂ − θ)²]                                            (53.237)

MSE(θ̂) = D²θ̂ + (Bias(θ̂))²                                                  (53.238)

57 http://en.wikipedia.org/wiki/Estimator [accessed 2007-07-03], http://mathworld.wolfram.com/Estimator.html [accessed 2007-07-03]
58 http://en.wikipedia.org/wiki/Errors_and_residuals_in_statistics [accessed 2007-07-03]
59 http://en.wikipedia.org/wiki/Mean_squared_error [accessed 2007-07-03]
The MSE is a measure for how much an estimator differs from the quantity to be estimated. Notice that the MSE of unbiased estimators coincides with the variance D²θ̂ of θ̂. For estimating the mean square error of an estimator θ̂, we use the sample mean of the squared errors:

M̂SE(θ̂) = (1/n) Σ_{i=1}^{n} ε_i(θ̂)²                                         (53.239)
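The notion of bias from Definition D53.57 can be made concrete with the two common variance estimators: dividing the sum of squared deviations by n yields a biased estimator, dividing by n − 1 an unbiased one. The sketch below (ours, not the book's) samples fair-die throws, whose true variance 35/12 is known from Equation 53.231, and averages both estimators over many trials.

```python
import random

rng = random.Random(5)
true_var = 35 / 12          # variance of the fair-die distribution, Eq. 53.231
n, trials = 10, 40_000

biased, unbiased = 0.0, 0.0
for _ in range(trials):
    a = [rng.randint(1, 6) for _ in range(n)]
    m = sum(a) / n
    ss = sum((x - m) ** 2 for x in a)
    biased += ss / n            # divisor n: biased estimator
    unbiased += ss / (n - 1)    # divisor n - 1: unbiased estimator
biased /= trials
unbiased /= trials

# Bias(theta-hat) = E[theta-hat] - theta (Equation 53.235): the divisor-n
# estimator underestimates the true variance by roughly the factor (n-1)/n.
print(round(biased, 2), round(unbiased, 2), round(true_var, 2))
```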
53.6.2 Likelihood and Maximum Likelihood Estimators

Likelihood[60] is a mathematical expression complementary to probability. Whereas probability allows us to predict the outcome of a random experiment based on known parameters, likelihood allows us to predict unknown parameters based on the outcome.

Definition D53.60 (Likelihood Function). The likelihood function L returns a value that is proportional to the probability of a postulated underlying law or probability distribution φ according to an observed outcome (denoted as the vector y).

L[φ | y] ∝ P(y | φ)                                                         (53.240)

Notice that L does not necessarily represent a probability density/mass function and its integral also does not necessarily equal 1. In many sources, L is defined in dependency on a parameter θ instead of the function φ. We preferred the latter notation since it is a more general superset of the first one.
53.6.2.1 Observation of an Unknown Process

Assume that we are given a finite set A of n sample data points.

A = ((x_0, y_0), (x_1, y_1), .., (x_{n−1}, y_{n−1})),  x_i, y_i ∈ ℝ ∀i ∈ 0..n−1   (53.241)

The x_i are known inputs or parameters of an unknown process defined by the function φ : ℝ ↦ ℝ. By observing the corresponding outputs of the process, we have obtained the y_i values. During our observations, we make the measurement errors[61] ε_i.

y_i = φ(x_i) + ε_i ∀i ∈ 0..n−1                                              (53.242)
About this measurement error we make the following assumptions:

Eε = 0                                                                      (53.243)
ε ∼ N(0, σ²), 0 < σ < +∞                                                    (53.244)
cov(ε_i, ε_j) = 0 ∀i ≠ j ∈ 0..n−1                                            (53.245)
1. The expected values of ε in Equation 53.243 are all zero. Our measurement device thus gives us, on average, unbiased results. If the expected value of ε was not zero, we could simply recalibrate our (imaginary) measurement equipment in order to subtract Eε from all measurements and would obtain unbiased observations.

2. Furthermore, Equation 53.244 states that the ε_i are normally distributed around the zero point with an unknown, nonzero variance σ². Supposing measurement errors to be normally distributed is quite common and a reasonably good approximation in most
cases. The white noise[62] in the transmission of signals, for example, is often modeled with Gaussian distributed[63] amplitudes. This second assumption includes, of course, the first one: being normally distributed with N(μ = 0, σ²) implies a zero expected value of the error.

3. With Equation 53.245, we assume that the errors ε_i of the single measurements are stochastically independent. If there existed a connection between them, it would be part of the underlying physical law and could be incorporated in our measurement device and again be subtracted.
53.6.2.2 Objective: Estimation

Assume that we can choose from a, possibly infinitely large, set of functions (estimators) f ∈ F.

f ∈ F ⇒ f : ℝ ↦ ℝ                                                           (53.246)

From this set, we wish to select the function f* ∈ F which resembles φ the best (i. e., no worse than all other f ∈ F). φ is not necessarily an element of F, so we cannot always presume to find an f* ≡ φ.
Each estimator f deviates by the estimation error ε_i(f) (see Definition D53.56 on page 692) from the y_i values. The estimation error depends on f and may vary for different estimators.

y_i = f(x_i) + ε_i(f) ∀i ∈ 0..n−1                                            (53.247)

We consider all f ∈ F to be valid estimators for φ and simply look for the one that fits best. We now can combine Equation 53.247 with Equation 53.242:

f(x_i) + ε_i(f) = y_i = φ(x_i) + ε_i ∀i ∈ 0..n−1                              (53.248)
We do not know φ and thus cannot determine the ε_i. According to the likelihood method, we pick the function f ∈ F that would have most probably produced the outcomes y_i. In other words, we have to maximize the likelihood of the occurrence of the ε_i(f). The likelihood here is defined under the assumption that the true measurement errors ε_i are normally distributed (see Equation 53.244). So what we can do is to determine the ε_i(f) in a way that their occurrence is most probable according to the distribution of the random variable that created the ε_i, that is, N(0, σ²). In the best case, ε_i(f*) = ε_i and thus, f*(x_i) is equivalent to φ(x_i), at least for the sample information A available to us.
53.6.2.3 Maximizing the Likelihood

Therefore, we can regard the ε_i(f) as outcomes of independent random experiments, as uncorrelated random variables, and combine them to a multivariate normal distribution. For the ease of notation, we define ε(f) to be the vector containing all ε_i(f) values.

ε(f) = (ε_0(f), ε_1(f), .., ε_{n−1}(f))^T                                     (53.249)
The probability density function of a multivariate normal distribution with independent variables ε_i that all have the same variance σ² looks like this (as defined in Equation 53.183 on page 681):

f_X(ε(f)) = (1/(2πσ²))^{n/2} e^{−(Σ_{i=0}^{n−1} ε_i(f)²) / (2σ²)}             (53.250)
62 http://en.wikipedia.org/wiki/White_noise [accessed 2007-07-03]
63 http://en.wikipedia.org/wiki/Gaussian_noise [accessed 2007-07-03]
Amongst all possible vectors ε(f) : f ∈ F, we need to find the most probable one ε* = ε(f*) according to Equation 53.250. The function f* which produces this vector will then be the one which most probably matches φ.

In order to express how likely the observation of some outcomes is under a certain set of parameters, we have defined the likelihood function L in Definition D53.60. Here we can use the probability density function f_X of the normal distribution, since the maximal values of f_X are those that are most probable to occur.

L[ε(f) | f] = f_X(ε(f)) = (1/(2πσ²))^{n/2} e^{−(Σ_{i=0}^{n−1} ε_i(f)²)/(2σ²)}  (53.251)

f* ∈ F : L[ε(f*) | f*] = max_{f∈F} L[ε(f) | f]                                (53.252)
                       = max_{f∈F} (1/(2πσ²))^{n/2} e^{−(Σ_{i=0}^{n−1} ε_i(f)²)/(2σ²)}   (53.253)
Finding an f* which maximizes this function, however, is equal to finding an f* which minimizes the sum of the squares of the ε values.

f* ∈ F : Σ_{i=0}^{n−1} ε_i(f*)² = min_{f∈F} Σ_{i=0}^{n−1} ε_i(f)²             (53.254)

According to Equation 53.247, we can now substitute the ε_i values with the difference between the observed outcomes y_i and the estimates f(x_i).

Σ_{i=0}^{n−1} ε_i(f)² = Σ_{i=0}^{n−1} (y_i − f(x_i))²                         (53.255)
Definition D53.61 (Maximum Likelihood Estimator). A maximum likelihood estimator[64] (MLE) [70] f* is an estimator which fits with maximum likelihood to a given set of sample data A.

Under the particular assumption of uncorrelated error terms normally distributed around zero, an MLE minimizes Equation 53.256.

f* ∈ F : Σ_{i=0}^{n−1} (y_i − f*(x_i))² = min_{f∈F} Σ_{i=0}^{n−1} (y_i − f(x_i))²   (53.256)
Minimizing the sum of the squared differences between the observed y_i and the estimates f(x_i) also minimizes their mean, so with this we have also shown that the estimator that minimizes the mean square error MSE (see Definition D53.59) is the best estimator according to the likelihood of the produced outcomes.

f* ∈ F : (1/n) Σ_{i=0}^{n−1} (y_i − f*(x_i))² = min_{f∈F} (1/n) Σ_{i=0}^{n−1} (y_i − f(x_i))²   (53.257)

f* ∈ F : MSE(f*) = min_{f∈F} MSE(f)                                          (53.258)

The term (y_i − f(x_i))² is often justified by the statement that large deviations of f from the y values are punished harder than smaller ones. The correct reason why we minimize the square error, however, is that we maximize the likelihood of the resulting estimator.
64 http://en.wikipedia.org/wiki/Maximum_likelihood [accessed 2007-07-03]
At this point, one should also notice that the x_i could be replaced with vectors x_i ∈ ℝ^m without any further implications or modifications of the equations.

In most practical cases, the set F of possible functions is closely defined. It usually contains only one type of parameterized function, so we only have to determine the unknown parameters in order to find f*. Let us consider a set of linear functions as an example. If we want to find estimators of the form F = {f(x) = ax + b : a, b ∈ ℝ}, we will minimize Equation 53.259 by determining the best possible values for a and b.

MSE(f(x) | a, b) = (1/n) Σ_{i=0}^{n−1} (a x_i + b − y_i)²                     (53.259)
If we now could find a perfect estimator f*_p and our data were free of any measurement error, all parts of the sum would become zero. For n > 2, this perfect estimator would be the solution of the over-determined system of linear equations illustrated in Equation 53.260.

0 = a x_1 + b − y_1
0 = a x_2 + b − y_2
. . .
0 = a x_n + b − y_n                                                          (53.260)

Since it is normally not possible to obtain a perfect estimator because there are measurement errors or other uncertainties like unknown dependencies, the system in Equation 53.260 often cannot be solved but only minimized.
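For the linear case, the minimization of Equation 53.259 has a well-known closed-form solution (the normal equations). The sketch below (our own, not from the book) implements it and checks it on noise-free data, where every term of Equation 53.260 can indeed be driven to zero.

```python
def least_squares_line(xs, ys):
    """Minimize Equation 53.259: choose a, b so that the sum of the
    squared residuals (a*x_i + b - y_i)^2 becomes minimal, using the
    usual closed-form normal-equation solution."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Noise-free data generated by y = 2x + 1: here a perfect estimator
# exists within F, so the fit recovers the coefficients exactly.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x + 1.0 for x in xs]
a, b = least_squares_line(xs, ys)
print(a, b)  # -> 2.0 1.0
```

With measurement noise added to the ys, the same code returns the coefficients that merely minimize, rather than zero out, the residual sum of squares.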
53.6.2.4 Best Linear Unbiased Estimators

The Gauss-Markov Theorem[65] defines BLUEs (best linear unbiased estimators) according to the facts just discussed:

Definition D53.62 (BLUE). In a linear model in which the measurement errors ε_i are uncorrelated and are all normally distributed with an expected value of zero and the same variance, the best linear unbiased estimators (BLUE) of the (unknown) coefficients are the least-square estimators [2184].

Hence, for the best linear unbiased estimator, the same three assumptions (Equation 53.243, Equation 53.244, and Equation 53.245) hold as for the maximum likelihood estimator.
53.6.3 Confidence Intervals

There is a very simple principle in statistics that always holds: all estimates may as well be wrong. There is no guarantee whatsoever that we have estimated a parameter of an underlying distribution correctly, regardless of how many samples we have analyzed. However, if we can assume or know the underlying distribution of the process which has been sampled, we can compute certain intervals which include the real value of the estimated parameter with a certain probability. Unlike point estimators, which approximate a parameter of a data sample with a single value, confidence intervals[66] (CIs) are estimations that give certain upper and lower boundaries in which the true value of the parameter will be located with a certain, predefined probability. [505, 674, 931]

65 http://en.wikipedia.org/wiki/Gauss-Markov_theorem [accessed 2007-07-03], http://www.answers.com/topic/gauss-markov-theorem [accessed 2007-07-03]
66 http://en.wikipedia.org/wiki/Confidence_interval [accessed 2007-10-01]
Definition D53.63 (Confidence Interval). A confidence interval is an estimation that provides a range in which the true value of the estimated parameter will be located with a certain probability.

The advantage of confidence intervals is that we can directly derive the significance of the data samples from them: the larger the intervals are, the less reliable is the sample. The narrower confidence intervals get for high predefined probabilities, the more profound, i. e., significant, will the conclusions drawn from them be.
Example E53.6 (Chickens and Confidence).
Imagine we run a farm and own 25 chickens. Each chicken lays one egg a day. We collect all the eggs in the morning and weigh them in order to find the average weight of the eggs produced by our farm. Assume our sample contains the values (in g):

A = (120, 121, 119, 116, 115, 122, 121, 123, 122, 120,
     119, 122, 121, 120, 119, 121, 123, 117, 118, 121)                       (53.261)

n = len(A) = 20                                                              (53.262)

From these measurements, we can determine the arithmetic mean ā and the sample variance s² according to Equation 53.60 on page 661 and Equation 53.67 on page 662:

ā = (1/n) Σ_{i=0}^{n−1} A[i] = 2400/20 = 120                                  (53.263)

s² = (1/(n−1)) Σ_{i=0}^{n−1} (A[i] − ā)² = 92/19                              (53.264)

The question that arises now is if the mean of 120 is significant, i. e., whether it likely approximates the expected value of the egg weight, or if the data sample was too small to be representative. Furthermore, we would like to know in which interval the expected value of the egg weights will likely be located. Here, confidence intervals come into play.
First, we need to find out what the underlying distribution of the random variable producing A as sample output is. In the case of chicken eggs, we can safely assume[67] that it is the normal distribution discussed in Section 53.4.2 on page 678. With that, we can calculate an interval which includes the unknown parameter μ (i. e., the real expected value) with a confidence probability of γ. γ = 1 − a is the so-called confidence coefficient and a is the probability that the real value of the estimated parameter does not lie inside the confidence interval.

Let us compute the interval including the expected value μ of the chicken egg weights with a probability of γ = 1 − a = 95%. Thus, a = 0.05. Therefore, we have to pick the right formula from Section 53.6.3.1 on the following page (here it is Equation 53.277 on the next page) and substitute in the proper values:

μ ∈ [ā ± t_{1−a/2, n−1} · s/√n]                                               (53.265)

μ ∈_{95%} [120 ± t_{0.975,19} · √(92/19)/√19]                                 (53.266)
μ ∈_{95%} [120 ± 2.093 · 0.5048]                                              (53.267)
μ ∈_{95%} [118.94, 121.06]                                                    (53.268)

67 Notice that such an assumption is also a possible source of error!
The value of t_{0.975,19} can easily be obtained from Table 53.12 on page 689, which contains the respective quantiles of Student's t-distribution discussed in Section 53.4.5 on page 687. Let us repeat the procedure in order to find the intervals that will contain μ with probabilities 1 − a = 99% (a = 0.01) and 1 − a = 90% (a = 0.1):

μ ∈_{99%} [120 ± t_{0.995,19} · 0.5048]                                       (53.269)
μ ∈_{99%} [120 ± 2.861 · 0.5048]                                              (53.270)
μ ∈_{99%} [118.56, 121.44]                                                    (53.271)

μ ∈_{90%} [120 ± t_{0.95,19} · 0.5048]                                        (53.272)
μ ∈_{90%} [120 ± 1.729 · 0.5048]                                              (53.273)
μ ∈_{90%} [119.13, 120.87]                                                    (53.274)

As you can see, the higher the confidence probabilities we specify, the larger become the intervals in which the parameter is contained. We can be 99% sure that the expected value of the egg weight is somewhere between 118.56 and 121.44. If we narrow the interval down to [119.13, 120.87], we can only be 90% confident that the real expected value falls into it, based on the data samples which we have gathered.
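The whole computation fits in a few lines of Python (our own sketch, not from the book). Note that it applies Equation 53.277 literally with s/√n, which gives an interval of about [118.97, 121.03]; the worked numbers printed above use the slightly larger scaling factor 0.5048, hence the small difference in the second decimal.

```python
import math

# Egg weights from Equation 53.261.
A = [120, 121, 119, 116, 115, 122, 121, 123, 122, 120,
     119, 122, 121, 120, 119, 121, 123, 117, 118, 121]
n = len(A)
mean = sum(A) / n
s2 = sum((x - mean) ** 2 for x in A) / (n - 1)   # sample variance, Eq. 53.264
s = math.sqrt(s2)

# t quantile t_{0.975,19} = 2.093 taken from Table 53.12; by
# Equation 53.277 the half-width of the 95% interval is t * s / sqrt(n).
half = 2.093 * s / math.sqrt(n)
print(mean, round(s2, 4), round(mean - half, 2), round(mean + half, 2))
```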
53.6.3.1 Some Confidence Intervals

The following confidence intervals are two-sided, i. e., we determine a range [θ̂ − x, θ̂ + x] that contains the parameter θ with probability γ, based on the estimate θ̂. If you need a one-sided confidence interval like (−∞, θ̂ + x] or [θ̂ − x, +∞), you just need to replace 1 − a/2 with 1 − a in the equations.

53.6.3.1.1 Expected Value of a Normal Distribution N

53.6.3.1.1.1 With known variance σ². If the exact variance σ² of the distribution underlying our data samples is known, and we have an estimate of the expected value given by the arithmetic mean ā according to Equation 53.60 on page 661, the two-sided confidence interval (of probability γ) for the expected value μ of the normal distribution is:

μ ∈ [ā ± z(1 − a/2) · σ/√n]                                                   (53.276)

Here z(y) ≡ probit(y) = Φ⁻¹(y) is the y-quantile of the standard normal distribution (see Section 53.4.2.2 on page 680), which can for example be looked up in Table 53.7.
53.6.3.1.1.2 With estimated sample variance s². Often, the true variance σ² of the underlying distribution is not known and instead estimated with the sample variance s² according to Equation 53.67 on page 662. The two-sided confidence interval (of probability γ) for the expected value μ can then be computed using the arithmetic mean ā, the estimate of the standard deviation s = √(s²) of the sample, and the t_{1−a/2, n−1} quantile of Student's t-distribution, which can be looked up in Table 53.12 on page 689.

μ ∈ [ā ± t_{1−a/2, n−1} · s/√n]                                               (53.277)
53.6.3.1.2 Variance of a Normal Distribution

The two-sided confidence interval (of probability γ) for the variance σ² of a normal distribution can be computed using the sample variance s² and the χ²(p, k) quantiles of the χ² distribution, which can be looked up in Table 53.10 on page 686.

σ² ∈ [ (n − 1)s² / χ²(1 − a/2, n − 1),  (n − 1)s² / χ²(a/2, n − 1) ]          (53.278)
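Continuing Example E53.6, Equation 53.278 can be evaluated directly with the tabulated χ² quantiles (this numerical illustration is ours, not the book's): for n = 20 the relevant degrees of freedom are n − 1 = 19, with quantiles 8.907 and 32.852 read from Table 53.10.

```python
# Two-sided 95% confidence interval for the variance of a normal
# distribution (Equation 53.278), with n - 1 = 19 degrees of freedom.
n, s2 = 20, 92 / 19                # sample size and sample variance (egg example)
chi2_lo, chi2_hi = 8.907, 32.852   # chi^2 quantiles at a/2 = 0.025 and 1 - a/2 = 0.975

lower = (n - 1) * s2 / chi2_hi     # dividing by the larger quantile gives the lower bound
upper = (n - 1) * s2 / chi2_lo
print(round(lower, 2), round(upper, 2))  # -> 2.8 10.33
```

The interval is markedly asymmetric around s² ≈ 4.84, which reflects the skewness of the χ² distribution.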
53.6.3.1.3 Success Probability p of a B(1, p) Binomial Distribution

The two-sided confidence interval (of probability γ) of the success probability p of a B(1, p) binomial distribution can be computed as follows:

p ∈ [ n/(n + z²_{1−a/2}) · ( ā + z²_{1−a/2}/(2n) ± z_{1−a/2} √( ā(1 − ā)/n + (z_{1−a/2}/(2n))² ) ) ]   (53.279)
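Equation 53.279 is a reconstruction of what is commonly called the Wilson score interval. A small sketch (our own; the function name and the example data are made up for illustration) evaluates it for 40 successes in 100 trials at the 95% level.

```python
import math

def binomial_ci(successes, n, z):
    """Two-sided confidence interval for the success probability p of a
    B(1, p) binomial distribution per Equation 53.279 (the Wilson score
    interval); z is the 1 - a/2 quantile of the standard normal."""
    a_hat = successes / n                    # observed success frequency
    center = a_hat + z * z / (2 * n)
    spread = z * math.sqrt(a_hat * (1 - a_hat) / n + (z / (2 * n)) ** 2)
    scale = n / (n + z * z)
    return scale * (center - spread), scale * (center + spread)

# 40 successes in 100 trials, 95% confidence (z_{0.975} = 1.96, Table 53.7).
lo, hi = binomial_ci(40, 100, 1.96)
print(round(lo, 3), round(hi, 3))
```

Unlike the naive normal approximation, this interval never leaves [0, 1] and is not centered on ā, which matters for small n or extreme frequencies.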
53.6.3.1.4 Expected Value of an Unknown Distribution with Sample Variance

The two-sided confidence interval (of probability $1-\alpha$) of the expected value $EX$ of an unknown distribution with an unknown true variance $D^2X$ can be determined using the arithmetic mean $\overline{a}$ and the sample variance $s^2$ if the sample data set contains more than $n = 50$ elements.

$EX \in \left[\overline{a} - z\left(1-\tfrac{\alpha}{2}\right)\frac{s}{\sqrt{n}},\ \overline{a} + z\left(1-\tfrac{\alpha}{2}\right)\frac{s}{\sqrt{n}}\right]$   (53.280)
53.6.3.1.5 Confidence Intervals from Tests

Many statistical tests (such as Wilcoxon's signed rank test introduced in ??) can be inverted in order to obtain confidence intervals [307]. The topic of statistical tests is discussed in Section 53.7.
53.7 Statistical Tests

53.7.1 Introduction

With statistical tests [674, 1160, 1202, 1716, 2476, 2490], it is possible to find out whether an alternative hypothesis H_1 about the distribution(s) underlying a set of measured data A is likely to be true. This is done by showing that the sampled data would very unlikely have occurred if the opposite hypothesis, the null hypothesis H_0, holds.
Example E53.7 (EA Settings Test).
If we want to show, for instance, that two different settings for an Evolutionary Algorithm will probably lead to different solution qualities (H_1), we assume that the distributions of the objective values of the candidate solutions returned by them are equal (H_0). Then, we run the two Evolutionary Algorithms multiple times and measure the outcome, i.e., obtain A. Based on A, we can estimate the probability with which the two different sets of measurements (the samples) would have occurred if H_0 was true. In the case that this probability is very low, let's say below 5%, H_0 can be rejected (with 5% probability of making a type 1 error) and H_1 is likely to hold. Otherwise, we would expect H_0 to hold and reject H_1.
Neyman and Pearson [2030, 2031] distinguish two classes of errors^68 that can be made when performing hypothesis tests:

Definition D53.64 (Type 1 Error). A type 1 error (alpha error, false positive) is the rejection of a correct null hypothesis H_0, i.e., the acceptance of a wrong alternative hypothesis H_1. Type 1 errors are made with probability alpha.

Definition D53.65 (Type 2 Error). A type 2 error (beta error, false negative) is the acceptance of a wrong null hypothesis H_0, i.e., the rejection of a correct alternative hypothesis H_1. Type 2 errors are made with probability beta.

Definition D53.66 (Power). The (statistical) power^69 of a statistical test is the probability of rejecting a false null hypothesis H_0. Therefore, the power equals 1 - beta.
A few basic principles for testing should be mentioned before going more into detail:

1. The more samples we have, the better the quality and significance of the conclusions that we can draw by testing. An arithmetic mean runtime of 7s is certainly more significant when derived from 1000 runs of a certain algorithm than from the sample set A = (9s, 5s).

2. The more assumptions we can make about the sampled probability distribution, the more powerful the available tests will be.

3. Wrong assumptions, falsely carried out measurements, or other misconduct will nullify all results and efforts put into testing.

In the following, we will discuss multiple methods for hypothesis testing. We can, for instance, distinguish between tests based on paired samples and those for independent populations. In Table 53.14, we illustrate an example of the former, where pairs of elements (a, b) are drawn from two different populations. Table 53.15 contains two independent samples a and b with a different number of elements (n_a = 6 != n_b = 8).
68 http://en.wikipedia.org/wiki/Type_I_and_type_II_errors [accessed 2008-08-15]
69 http://en.wikipedia.org/wiki/Statistical_power [accessed 2008-08-15]
Example E53.8 (Example for Paired Data Samples).
Row    a    b   d = b - a   Sign   Rank |r|   Rank r
 1.    2   10      +8        +        13        +13
 2.    3    4      +1        +         2         +2
 3.    6   10      +4        +        10        +10
 4.    4    6      +2        +         6         +6
 5.    6   11      +5        +        11        +11
 6.    5    6      +1        +         2         +2
 7.    4   11      +7        +        12        +12
 8.    9    6      -3        -         9         -9
 9.   10   12      +2        +         6         +6
10.    8    8       0        =
11.    6    8      +2        +         6         +6
12.    7    6      -1        -         2         -2
13.    4    4       0        =
14.    4    6      +2        +         6         +6
15.    9    7      -2        -         6         -6

Sum a_i = 87    Sum b_i = 115    D = Sum d_i = 28    R = Sum r_i = 57
med(a) = 6; mean(a) = 5.8        med(b) = 7; mean(b) = 115/15, i.e., about 7.67

Table 53.14: Example for paired samples (a, b).

This example data set consists of n = n_a = n_b = 15 tuples (a, b). The leftmost column is the tuple index, the columns a and b contain the tuple values, and d is the inner-tuple difference b - a. The three columns to the right contain intermediate values which will later be used in the example calculations accompanying the descriptions of the tests.

The arithmetic mean of the sample a is mean(a) = 5.8 and for b it is mean(b) = 115/15, i.e., about 7.67. The medians are med(a) = 6 and med(b) = 7. Although it looks as if a and b were different and as if the values of a were lower than those of the sample b, we cannot confirm this without a statistical test.
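The summary row of Table 53.14 can be re-derived mechanically. The following small sketch (a hypothetical helper, not from the book's source archive) recomputes D = Sum d_i and the arithmetic means from the raw a and b columns:

```java
/**
 * Sketch: recomputing the summary statistics of a paired sample table
 * such as Table 53.14, i.e., the sum of differences D and the means.
 */
public class PairedSummary {

  /** arithmetic mean of an integer sample */
  public static double mean(final int[] x) {
    double s = 0.0;
    for (final int v : x) {
      s += v;
    }
    return s / x.length;
  }

  /** D = sum over all pairs of (b_i - a_i) */
  public static int diffSum(final int[] a, final int[] b) {
    int d = 0;
    for (int i = 0; i < a.length; i++) {
      d += b[i] - a[i];
    }
    return d;
  }
}
```

Feeding in the fifteen (a, b) tuples of Table 53.14 reproduces D = 28, mean(a) = 5.8, and mean(b) = 115/15.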
Example E53.9 (Example for Unpaired Data Samples).
Row    a    b   Rank r_a   Rank r_b
 1.    2         1.5
 2.         2               1.5
 3.    3         4.0
 4.    3         4.0
 5.    3         4.0
 6.    4         6.0
 7.    5         8.5
 8.         5               8.5
 9.         5               8.5
10.         5               8.5
11.         6              11.5
12.         6              11.5
13.         7              13.5
14.         7              13.5

med(a) = 3    med(b) = 5.5    R_a = 28    R_b = 77
n_a = 6       n_b = 8

Table 53.15: Example for unpaired samples.

Unlike the data given in Example E53.8, Table 53.15 contains two unpaired samples a and b consisting of n_a = 6 and n_b = 8 elements, respectively. We again provide the item index in the leftmost column, followed by the elements of a and b in the next two columns. In the two columns to the right, again the ranks are given.

The arithmetic mean of the sample a is mean(a) = 20/6, i.e., about 3.33, and for b it is mean(b) = 5.375. The medians are med(a) = 3 and med(b) = 5.5. Again, it looks like the sample a contains the smaller values. Whether this is actually the case, and with which error probability this assumption holds, we will have to check with tests.
Part VII
Implementation
Chapter 54
Introduction
In this book part, we try to provide straightforward implementations of some of the algorithms discussed in this book. The latest versions of the sources of the following programs, including documentation, are available at http://goa-taa.cvs.sourceforge.net/viewvc/goa-taa/Book/implementation/java/sources.zip [accessed 2010-10-02]. This URL directly retrieves the sources, binaries, and documentation from the version management system in which the complete book is located (see page 4 for details).

In this part of the book, we will not provide much discussion. The theoretical aspects of optimization and the algorithm structures have already been outlined at length in the previous parts. Here, we will just give the code with some documentation (with clickable links) which glues the implementation to these aforementioned theoretical discussions.

The implementations and program sources given here have been developed on the fly during the course Practical Optimization Algorithm Design given by me at the School of Software Engineering in the Suzhou Institute of Advanced Studies of the University of Science and Technology of China (USTC) in the winter semester 2010. The goal of this course was to solve optimization problems interactively together with my students. Some of the code presented here is the result of the suggestions of my students and of a discussion which I tried to coarsely steer but not to control. I would thus like to thank my students for their participation and good suggestions.
Programming Language: Java(TM)
    java version "1.6.0_21"
    Java(TM) SE Runtime Environment (build 1.6.0_21-b07)
    Java HotSpot(TM) Client VM (build 17.0-b17, mixed mode, sharing)
    See http://java.sun.com/ [accessed 2010-09-30]

Developer Environment: Eclipse
    Build id: 20090920-1017
    See http://www.eclipse.org/ [accessed 2010-09-30]

Operating System: Microsoft(R) Windows Vista(R) Business
    Version: 6.0 (Build 6002: Service Pack 2), 32 Bit Operating System
    See http://windows.microsoft.com/ [accessed 2010-09-30]

Computer: Toshiba Satego X200-21U
    Intel(R) Core(TM) 2 Duo CPU T7500 @ 2.20GHz, 4 GiB RAM
    See http://www.toshiba.com/ [accessed 2010-09-30]

Table 54.1: The precise developer environment used for implementing and testing.
We use the Java programming language for the implementation, mainly because it is more or less simple and because it is the language I am most fluent in. In Table 54.1, we provide the exact description of the developer environment and testing machine used. This may serve as a reference for verifying our implementation.

The code listed in this book is not complete. We merely provide those classes which include the essential algorithms and skip most of the basic classes or helper classes. These are available in the source code archive published together with this book. Here, they would just waste space without contributing much to the understanding of the algorithms.

It is not clear how the Java language will develop, how it will be supported, or whether the behavior of Java virtual machines will remain the same as in the past. Hence, we cannot guarantee the functionality of our examples in newer or older versions of Java or in different execution environments. All the code in the following sections is put under the GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999), which you can find in Chapter C on page 955.
Chapter 55
The Specication Package
In this package, we give interfaces providing the abstract specifications of the basic elements of the optimization processes given in Part I. We divide the interface specifications of the modules of optimization algorithms, such as the objective functions, from their actual implementations, which will be provided in Chapter 56.

Also, we provide interfaces to modules used in some specific algorithm patterns, such as the temperature schedule in Simulated Annealing (see Listing 55.12 and Chapter 27) and selection in Evolutionary Algorithms (see Listing 55.13 and Chapter 28).
55.1 General Definitions

In this section, we discuss general interface definitions dealing with the basic abstractions and structural elements of optimization. They are based on the definitions given in Part I.
Listing 55.1: The basic interface for everything involved in optimization.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.spec;

import java.io.Serializable;

/**
 * A simple basic interface common to all modules of optimization
 * algorithms that we provide in this package. It ensures that all objects
 * are serializable, i.e., can be written to an ObjectOutputStream and read
 * from an ObjectInputStream, for instance. It furthermore provides some
 * simple methods for converting an object to representative strings.
 *
 * @author Thomas Weise
 */
public interface IOptimizationModule extends Serializable {

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  public abstract String getName(final boolean longVersion);

  /**
   * Get the full configuration which holds all the data necessary to
   * describe this object.
   *
   * @param longVersion
   *          true if the long version should be returned, false if the
   *          short version should be returned
   * @return the full configuration
   */
  public abstract String getConfiguration(final boolean longVersion);

  /**
   * Get the string representation of this object, i.e., the name and
   * configuration.
   *
   * @param longVersion
   *          true if the long version should be returned, false if the
   *          short version should be returned
   * @return the string version of this object
   */
  public abstract String toString(final boolean longVersion);
}
55.1.1 Search Operations

A nullary search operation takes no arguments and just creates one new, random genotype g in G.
Listing 55.2: The interface for all nullary search operations.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.spec;

import java.util.Random;

/**
 * This interface provides the method for a nullary search operation.
 * Search operations are discussed in Section 4.2 on page 83.
 *
 * @param <G>
 *          the search space (genome, Section 4.1 on page 81)
 * @author Thomas Weise
 */
public interface INullarySearchOperation<G> extends IOptimizationModule {

  /**
   * This is the nullary search operation. It takes no argument from the
   * search space and produces one new element in the search. This new
   * element is usually created randomly. For this reason, we pass a random
   * number generator in as parameter so we can use the same random number
   * generator in all parts of an optimization algorithm.
   *
   * @param r
   *          the random number generator
   * @return a new genotype (see Definition D4.2 on page 82)
   */
  public abstract G create(final Random r);

}
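As a hedged illustration of how such an interface is typically realized, a nullary operation for real-valued vectors G = [min, max]^n could look like the standalone sketch below. The class name and fields are made up here for illustration and are not taken from the book's source archive (which provides its own implementations).

```java
import java.util.Random;

/**
 * Sketch of a nullary creation operation in the spirit of
 * INullarySearchOperation: it produces a new random vector whose genes
 * are drawn uniformly from [min, max).
 */
public class DoubleVectorCreation {

  private final int n;
  private final double min, max;

  public DoubleVectorCreation(final int n, final double min,
      final double max) {
    this.n = n;
    this.min = min;
    this.max = max;
  }

  /** create one new random genotype, analogous to create(r) */
  public double[] create(final Random r) {
    final double[] g = new double[this.n];
    for (int i = 0; i < this.n; i++) {
      g[i] = this.min + r.nextDouble() * (this.max - this.min);
    }
    return g;
  }
}
```

Passing the random number generator in from the outside, as the interface demands, keeps runs reproducible when a fixed seed is used.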
Unary search operations take one existing genotype g in G and use it as basis to create a new, similar one g' in G.
Listing 55.3: The interface for all unary search operations.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.spec;

import java.util.Random;

/**
 * This interface provides the method for a unary search operation. Search
 * operations are discussed in Section 4.2 on page 83.
 *
 * @param <G>
 *          the search space (genome, Section 4.1 on page 81)
 * @author Thomas Weise
 */
public interface IUnarySearchOperation<G> extends IOptimizationModule {

  /**
   * This is the unary search operation. It takes one existing genotype g
   * (see Definition D4.2 on page 82) from the genome and produces one new
   * element in the search space. This new element is usually a copy of the
   * existing element g which is slightly modified in a random manner. For
   * this purpose, we pass a random number generator in as parameter so we
   * can use the same random number generator in all parts of an
   * optimization algorithm.
   *
   * @param g
   *          the existing genotype in the search space from which a
   *          slightly modified copy should be created
   * @param r
   *          the random number generator
   * @return a new genotype
   */
  public abstract G mutate(final G g, final Random r);

}
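A typical realization of the mutate contract is Gaussian mutation of a real vector. The standalone sketch below uses hypothetical names (it is not from the book's archive); the important point is that the parent genotype stays untouched and only a slightly modified copy is returned.

```java
import java.util.Random;

/**
 * Sketch of a unary operation in the spirit of IUnarySearchOperation:
 * clone the parent vector and add Gaussian noise to one randomly
 * chosen gene.
 */
public class GaussianMutation {

  /** the standard deviation of the mutation step */
  private final double sigma;

  public GaussianMutation(final double sigma) {
    this.sigma = sigma;
  }

  /** mutate: return a perturbed copy, never modify the parent g */
  public double[] mutate(final double[] g, final Random r) {
    final double[] copy = g.clone();          // keep the parent intact
    final int i = r.nextInt(copy.length);     // pick one gene at random
    copy[i] += this.sigma * r.nextGaussian(); // perturb it normally
    return copy;
  }
}
```

Copy-then-modify matters: optimizers may hold the parent in their population and re-use it, so mutating it in place would silently corrupt the search state.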
Binary search operations take two existing genotypes g_p1, g_p2 in G and use them as basis to create a new genotype g' in G which, hopefully, unites some of the positive features of its "parents".
Listing 55.4: The interface for all binary search operations.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.spec;

import java.util.Random;

/**
 * A binary search operation (see Definition D4.6 on page 83) such as the
 * recombination operation in Evolutionary Algorithms (see Definition
 * D28.14 on page 307) and its specific string-based version used in
 * Genetic Algorithms (see Section 29.3.4 on page 335).
 *
 * @param <G>
 *          the search space (genome, Section 4.1 on page 81)
 * @author Thomas Weise
 */
public interface IBinarySearchOperation<G> extends IOptimizationModule {

  /**
   * This is the binary search operation. It takes two existing genotypes
   * p1 and p2 (see Definition D4.2 on page 82) from the genome and
   * produces one new element in the search space. There are two basic
   * assumptions about this operator: 1) Its input elements are good
   * because they have previously been selected. 2) It is somehow possible
   * to combine these good traits and hence, to obtain a single individual
   * which unites them and thus, has even better overall qualities than its
   * parents. The original underlying idea of this operation is the
   * "Building Block Hypothesis" (see Section 29.5.5 on page 346) for
   * which, so far, not much evidence has been found. The hypothesis
   * "Genetic Repair and Extraction" (see Section 29.5.6 on page 346) has
   * been developed as an alternative to explain the positive aspects of
   * binary search operations such as recombination.
   *
   * @param p1
   *          the first "parent" genotype
   * @param p2
   *          the second "parent" genotype
   * @param r
   *          the random number generator
   * @return a new genotype
   */
  public abstract G recombine(final G p1, final G p2, final Random r);

}
Ternary search operations take three existing genotypes g_p1, g_p2, g_p3 in G and use them as basis to create a new genotype g' in G.
Listing 55.5: The interface for all ternary search operations.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.spec;

import java.util.Random;

/**
 * A ternary search operation (see Definition D4.6 on page 83) such as the
 * recombination operation in Differential Evolution (see Section 33.2 on
 * page 419).
 *
 * @param <G>
 *          the search space (genome, Section 4.1 on page 81)
 * @author Thomas Weise
 */
public interface ITernarySearchOperation<G> extends IOptimizationModule {

  /**
   * This is the ternary search operation. It takes three existing
   * genotypes p1, p2, and p3 (see Definition D4.2 on page 82) from the
   * genome and produces one new element in the search space.
   *
   * @param p1
   *          the first "parent" genotype
   * @param p2
   *          the second "parent" genotype
   * @param p3
   *          the third "parent" genotype
   * @param r
   *          the random number generator
   * @return a new genotype
   */
  public abstract G ternaryRecombine(final G p1, final G p2, final G p3,
      final Random r);

}
N-ary search operations take n existing genotypes g_p1, g_p2, ..., g_pn in G and use them as basis to create a new genotype g' in G which again, hopefully, unites some of the positive features of its "parents".
Listing 55.6: The interface for all n-ary search operations.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.spec;

import java.util.Random;

/**
 * This interface provides the method for an n-ary search operation.
 * Search operations are discussed in Section 4.2 on page 83.
 *
 * @param <G>
 *          the search space (genome, Section 4.1 on page 81)
 * @author Thomas Weise
 */
public interface INarySearchOperation<G> extends IOptimizationModule {

  /**
   * This is an n-ary search operation. It takes n existing genotypes gi
   * (see Definition D4.2 on page 82) from the genome and produces one new
   * element in the search space.
   *
   * @param gs
   *          the existing genotypes in the search space which will be
   *          combined to a new one
   * @param r
   *          the random number generator
   * @return a new genotype
   */
  public abstract G combine(final G[] gs, final Random r);

}
55.1.2 Genotype-Phenotype Mapping
The genotype-phenotype mapping gpm : G -> X translates one genotype g in G to a candidate solution x in X.
Listing 55.7: The interface common to all genotype-phenotype mappings.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.spec;

import java.util.Random;

/**
 * This interface provides the method for genotype-phenotype mappings as
 * discussed in Section 4.3 on page 85.
 *
 * @param <G>
 *          the search space (genome, Section 4.1 on page 81)
 * @param <X>
 *          the problem space (phenome, Section 2.1 on page 43)
 * @author Thomas Weise
 */
public interface IGPM<G, X> extends IOptimizationModule {

  /**
   * This function carries out the genotype-phenotype mapping as defined in
   * Definition D4.11 on page 86. In other words, it translates one
   * genotype (an element in the search space) to one element in the
   * problem space, i.e., a phenotype.
   *
   * @param g
   *          the genotype (Definition D4.2 on page 82)
   * @param r
   *          a randomizer
   * @return the phenotype (see Definition D2.2 on page 43) corresponding
   *         to the genotype g
   */
  public abstract X gpm(final G g, final Random r);

}
55.1.3 Objective Function
An objective function f : X -> R rates a candidate solution x in X with a utility value.
Listing 55.8: The interface for objective functions.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.spec;

import java.util.Random;

/**
 * In Section 2.2 on page 44, we introduce the concept of objective
 * functions. An objective function is an optimization criterion which
 * rates one candidate solution (see Definition D2.2 on page 43) from the
 * problem space according to its utility. According to Section 6.3.4 on
 * page 106, smaller objective values indicate better fitness, i.e.,
 * higher utility.
 *
 * @param <X>
 *          the problem space (phenome, Section 2.1 on page 43)
 * @author Thomas Weise
 */
public interface IObjectiveFunction<X> extends IOptimizationModule {

  /**
   * Compute the objective value, i.e., determine the utility of the
   * solution candidate x as specified in Definition D2.3 on page 44.
   *
   * @param x
   *          the phenotype to be rated
   * @param r
   *          a randomizer
   * @return the objective value of x, the lower the better (see
   *         Section 6.3.4 on page 106)
   */
  public abstract double compute(final X x, final Random r);

}
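A minimization benchmark such as the sphere function f(x) = Sum x_i^2 fits this contract naturally: smaller values are better and the unique optimum is 0 at the origin. The standalone sketch below uses illustrative names and omits the randomizer parameter, since the function itself is deterministic.

```java
/**
 * Sketch of an objective function in the spirit of
 * IObjectiveFunction<double[]>: the sphere function f(x) = sum of x_i^2.
 * The global minimum is 0 at the origin; lower values are better.
 */
public class SphereFunction {

  /** compute f(x) = x_1^2 + x_2^2 + ... + x_n^2 */
  public static double compute(final double[] x) {
    double sum = 0.0;
    for (final double xi : x) {
      sum += xi * xi;
    }
    return sum;
  }
}
```

Because the interface only demands a double rating per phenotype, any such function can be plugged into every optimizer in this part of the book without further changes.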
55.1.4 Termination Criterion
Termination criteria are used to determine when the optimization process should finish.
Listing 55.9: The interface for termination criteria.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.spec;

/**
 * The termination criterion is used to tell the optimization process when
 * to stop. It is defined in Section 6.3.3 on page 105.
 *
 * @author Thomas Weise
 */
public interface ITerminationCriterion extends IOptimizationModule {

  /**
   * The function terminationCriterion, as stated in Definition D6.7 on
   * page 105, returns a Boolean value which is true if the optimization
   * process should terminate and false otherwise.
   *
   * @return true if the optimization process should terminate, false if
   *         it can continue
   */
  public abstract boolean terminationCriterion();

  /**
   * Reset the termination criterion to make it usable more than once
   */
  public abstract void reset();
}
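The simplest conceivable realization of this contract is a step counter. The following standalone sketch (the class name is hypothetical, not from the book's archive) terminates after a fixed number of invocations and can be reset for reuse, mirroring the reset method of the interface.

```java
/**
 * Sketch of a step-limit termination criterion in the spirit of
 * ITerminationCriterion: it signals termination as soon as more than
 * maxSteps invocations of terminationCriterion() have occurred.
 */
public class MaxStepsCriterion {

  /** the maximum number of allowed steps */
  private final int maxSteps;

  /** the number of steps performed so far */
  private int steps;

  public MaxStepsCriterion(final int maxSteps) {
    this.maxSteps = maxSteps;
  }

  /** true once the step budget is exhausted */
  public boolean terminationCriterion() {
    return (++this.steps) > this.maxSteps;
  }

  /** make the criterion usable again for a fresh run */
  public void reset() {
    this.steps = 0;
  }
}
```

Counting invocations works because, in the algorithms of this book, the criterion is queried once per iteration of the optimizer's main loop.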
55.1.5 Optimization Algorithm
Here we provide a simple interface which unites the functionality of all single-objective
optimization algorithms discussed in this book.
Listing 55.10: An interface to all single-objective optimization algorithms.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.spec;

import java.util.List;
import java.util.Random;
import java.util.concurrent.Callable;

import org.goataa.impl.utils.Individual;

/**
 * A simple and general interface which enables us to provide the
 * functionality of a single-objective optimization algorithm (see
 * Section 1.3 on page 31 and Definition D6.3) in a unified way.
 *
 * @param <G>
 *          the search space (genome, Section 4.1 on page 81)
 * @param <X>
 *          the problem space (phenome, Section 2.1 on page 43)
 * @param <IT>
 *          the individual type
 * @author Thomas Weise
 */
public interface ISOOptimizationAlgorithm<G, X, IT extends Individual<G, X>>
    extends IOptimizationModule, Callable<List<IT>>, Runnable {

  /**
   * Invoke the optimization process. This method calls the optimizer and
   * returns the list of best individuals (see Definition D4.18 on page 90)
   * found. Usually, only a single individual will be returned.
   *
   * @return the computed result
   */
  public abstract List<IT> call();

  /**
   * Invoke the optimization process. This method calls the optimizer and
   * returns the list of best individuals (see Definition D4.18 on page 90)
   * found. Usually, only a single individual will be returned. Different
   * from the parameterless call method, here a randomizer and a
   * termination criterion are directly passed in. Also, a list to fill in
   * the optimization results is provided. This allows recursively using
   * the optimization algorithms.
   *
   * @param r
   *          the randomizer (will be used directly without setting the
   *          seed)
   * @param term
   *          the termination criterion (will be used directly without
   *          resetting)
   * @param result
   *          a list to which the results are to be appended
   */
  public abstract void call(final Random r,
      final ITerminationCriterion term, final List<IT> result);

  /**
   * Set the maximum number of solutions which can be returned by the
   * algorithm.
   *
   * @param max
   *          the maximum number of solutions
   */
  public void setMaxSolutions(final int max);

  /**
   * Get the maximum number of solutions which can be returned by the
   * algorithm.
   *
   * @return the maximum number of solutions
   */
  public int getMaxSolutions();

  /**
   * Set the genotype-phenotype mapping (GPM, see Section 4.3 on page 85
   * and Listing 55.7 on page 711).
   *
   * @param gpm
   *          the GPM to use during the optimization process
   */
  public abstract void setGPM(final IGPM<G, X> gpm);

  /**
   * Get the genotype-phenotype mapping (GPM, see Section 4.3 on page 85
   * and Listing 55.7 on page 711).
   *
   * @return the GPM which is used during the optimization process
   */
  public abstract IGPM<G, X> getGPM();

  /**
   * Set the termination criterion (see Section 6.3.3 on page 105 and
   * Listing 55.9 on page 712).
   *
   * @param term
   *          the termination criterion
   */
  public abstract void setTerminationCriterion(
      final ITerminationCriterion term);

  /**
   * Get the termination criterion (see Section 6.3.3 on page 105 and
   * Listing 55.9 on page 712).
   *
   * @return the termination criterion
   */
  public abstract ITerminationCriterion getTerminationCriterion();

  /**
   * Set the objective function (see Section 2.2 on page 44 and
   * Listing 55.8 on page 712) to be used to guide the optimization
   * process.
   *
   * @param f
   *          the objective function
   */
  public abstract void setObjectiveFunction(final IObjectiveFunction<X> f);

  /**
   * Get the objective function (see Section 2.2 on page 44 and
   * Listing 55.8 on page 712) which is used to guide the optimization
   * process.
   *
   * @return the objective function
   */
  public abstract IObjectiveFunction<X> getObjectiveFunction();

  /**
   * Set the seed for the internal random number generator.
   *
   * @param seed
   *          the seed for the random number generator
   */
  public abstract void setRandSeed(final long seed);

  /**
   * Get the current seed for the internal random number generator.
   *
   * @return the current seed for the random number generator
   */
  public abstract long getRandSeed();

  /**
   * Set that a randomly selected seed should be used for the random number
   * generator every time. If you set a seed with setRandSeed, the effect
   * of this method will be nullified. Vice versa, this method nullifies
   * the effect of a previous seed setting via setRandSeed.
   */
  public abstract void useRandomRandSeed();

  /**
   * Are we currently using a random seed for each run of the optimizer?
   *
   * @return true if we are currently using a random seed for each run of
   *         the optimizer, false otherwise
   */
  public abstract boolean usesRandomRandSeed();

}
Listing 55.11: An interface to all multi-objective optimization algorithms.
// Copyright (c) 2011 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.spec;

import org.goataa.impl.utils.MOIndividual;

/**
 * A simple and general interface which enables us to provide the
 * functionality of a multi-objective optimization algorithm (see
 * Section 1.3 on page 31 and Definition D6.3) in a unified way.
 *
 * @param <G>
 *          the search space (genome, Section 4.1 on page 81)
 * @param <X>
 *          the problem space (phenome, Section 2.1 on page 43)
 * @author Thomas Weise
 */
public interface IMOOptimizationAlgorithm<G, X> extends
    ISOOptimizationAlgorithm<G, X, MOIndividual<G, X>> {

  /**
   * Get the individual comparator
   *
   * @return the individual comparator
   */
  public abstract IIndividualComparator getComparator();

  /**
   * Set the individual comparator
   *
   * @param pcmp
   *          the individual comparator
   */
  public abstract void setComparator(final IIndividualComparator pcmp);

  /**
   * Get the objective functions
   *
   * @return the objective functions
   */
  public abstract IObjectiveFunction<X>[] getObjectiveFunctions();

  /**
   * Set the objective functions
   *
   * @param o
   *          the objective functions
   */
  public abstract void setObjectiveFunctions(
      final IObjectiveFunction<X>[] o);

}
55.2 Algorithm Specific Interfaces
The temperature schedule is used in Simulated Annealing to determine the current temper-
ature and hence, the acceptance probability for inferior candidate solutions.
Listing 55.12: The interface for temperature schedules.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.spec;

/**
 * The interface ITemperatureSchedule allows us to implement the different
 * temperature scheduling methods for Simulated Annealing (see Chapter 27)
 * as listed in Section 27.3.
 *
 * @author Thomas Weise
 */
public interface ITemperatureSchedule extends IOptimizationModule {

  /**
   * Get the temperature to be used in the iteration t. This method
   * implements the getTemperature function specified in Equation 27.13 on
   * page 247.
   *
   * @param t
   *          the iteration
   * @return the temperature to be used for determining whether a worse
   *         solution should be accepted or not
   */
  public abstract double getTemperature(final int t);

}
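A common choice for such a schedule is exponential cooling, T(t) = T0 * epsilon^t with 0 < epsilon < 1. Whether this exact variant is among the schedules of Section 27.3 cannot be verified from this excerpt, so treat the standalone sketch below (class and field names illustrative) as one plausible instance of the getTemperature contract.

```java
/**
 * Sketch of a temperature schedule in the spirit of
 * ITemperatureSchedule: exponential cooling T(t) = t0 * epsilon^t,
 * which decreases monotonically towards zero for 0 < epsilon < 1.
 */
public class ExponentialSchedule {

  /** the start temperature T0 and the cooling factor epsilon */
  private final double t0, epsilon;

  public ExponentialSchedule(final double t0, final double epsilon) {
    this.t0 = t0;
    this.epsilon = epsilon;
  }

  /** temperature for iteration t, analogous to getTemperature(t) */
  public double getTemperature(final int t) {
    return this.t0 * Math.pow(this.epsilon, t);
  }
}
```

In Simulated Annealing, a high early temperature lets many inferior solutions be accepted; as T(t) shrinks, the acceptance probability for worse candidates drops towards zero and the search converges.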
55.2.1 Evolutionary Algorithms
In Evolutionary Algorithms, the selection step determines which individuals should become
the parents of the individuals in the next generation.
Listing 55.13: The interface for selection algorithms.
/*
 * Copyright (c) 2010 Thomas Weise
 * http://www.it-weise.de/
 * tweise@gmx.de
 *
 * GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
 */

package org.goataa.spec;

import java.util.Random;

import org.goataa.impl.utils.Individual;

/**
 * A selection algorithm (see Section 28.4 on page 285) receives a
 * population of individuals (see Definition D4.18 on page 90) as input and
 * chooses some individuals amongst them to fill a mating pool. Selection
 * is an essential step in Evolutionary Algorithms (EAs) which you can find
 * introduced in Chapter 28 on page 253. In each generation of an EA, a set
 * of new individuals is created and evaluated. All the individuals which
 * are currently processed form the population (see Definition D4.19 on
 * page 90). Some of these individuals will further be investigated, i.e.,
 * processed by the search operations. The selection step determines which
 * of the individuals should be chosen for this purpose. Usually, this
 * decision is randomized: better individuals are selected with higher
 * probability whereas worse individuals are chosen with lower probability.
 *
 * @author Thomas Weise
 */
public interface ISelectionAlgorithm extends IOptimizationModule {

  /**
   * Perform the selection step (see Section 28.4 on page 285), i.e., fill
   * the mating pool mate with individuals chosen from the population pop.
   *
   * @param pop
   *          the population (see Definition D4.19 on page 90)
   * @param popStart
   *          the index of the first individual in the population
   * @param popCount
   *          the number of individuals to select from
   * @param mate
   *          the mating pool (see Definition D28.10 on page 285)
   * @param mateStart
   *          the first index to which the selected individuals should be
   *          copied
   * @param mateCount
   *          the number of individuals to be selected
   * @param r
   *          the random number generator
   */
  public abstract void select(final Individual<?, ?>[] pop,
      final int popStart, final int popCount,
      final Individual<?, ?>[] mate, final int mateStart,
      final int mateCount, final Random r);

}
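As a concrete illustration of such a select step, here is a self-contained tournament-selection sketch (a common selection scheme; hypothetical names, using a plain fitness array instead of the book's Individual records). For each mating-pool slot it draws k contestants uniformly at random and keeps the index of the one with the lowest, i.e., best, fitness:

```java
import java.util.Random;

/** A hypothetical k-ary tournament selection over a plain fitness array. */
public class TournamentSelection {

  /**
   * Fill mate with indices into fitness, chosen by tournaments of size k.
   * Smaller fitness values are considered better (minimization).
   */
  public static void select(final double[] fitness, final int[] mate,
      final int k, final Random r) {
    for (int i = 0; i < mate.length; i++) {
      int best = r.nextInt(fitness.length);
      for (int j = 1; j < k; j++) {
        final int contestant = r.nextInt(fitness.length);
        if (fitness[contestant] < fitness[best]) {
          best = contestant; // the better contestant wins the tournament
        }
      }
      mate[i] = best;
    }
  }

  public static void main(final String[] args) {
    final double[] fitness = { 5d, 1d, 3d, 9d };
    final int[] mate = new int[8];
    select(fitness, mate, 2, new Random(42L));
    // the best index (1) tends to appear more often than the worst (3)
    for (final int idx : mate) {
      System.out.print(idx + " ");
    }
  }
}
```

Note how the scheme realizes the randomized preference described above: the worst individual is only ever selected when all k contestants happen to be that individual.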
Listing 55.14: The interface for fitness assignment processes.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.spec;

import java.util.Random;

import org.goataa.impl.utils.MOIndividual;

/**
 * A fitness assignment process translates the (vectors of) objective
 * values used in multi-objective optimization (Section 3.3 on page 55) of
 * a set of individuals to single fitness values as specified in
 * Definition D28.6 on page 274.
 *
 * @author Thomas Weise
 */
public interface IFitnessAssignmentProcess extends IOptimizationModule {

  /**
   * Assign fitness to the individuals of a population based on their
   * objective values as specified in Definition D28.6 on page 274.
   *
   * @param pop
   *          the population of individuals
   * @param start
   *          the index of the first individual in the population
   * @param count
   *          the number of individuals in the population
   * @param cmp
   *          the individual comparator which tells which individual is
   *          better than another one
   * @param r
   *          the randomizer
   */
  public abstract void assignFitness(final MOIndividual<?, ?>[] pop,
      final int start, final int count, final IIndividualComparator cmp,
      final Random r);

}
Listing 55.15: The interface for comparator functions.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.spec;

import java.util.Comparator;

import org.goataa.impl.utils.MOIndividual;

/**
 * A comparator interface for multi-objective individuals. During the
 * fitness assignment processes in multi-objective optimization, we can use
 * such a comparator to check which individuals are better and which are
 * worse. Based on the comparison results, the fitness values can be
 * computed which are then used during a subsequent selection process. This
 * interface provides the functionality specified in
 * Definition D3.18 on page 77 in Section 3.5.2.
 *
 * @author Thomas Weise
 */
public interface IIndividualComparator extends
    Comparator<MOIndividual<?, ?>>, IOptimizationModule {

  /**
   * Compare two individuals with each other, usually by using their
   * objective values as defined in Definition D3.18 on page 77.
   * Warning: The fitness values (a.v, b.v) must not be used here since
   * they are usually computed by using the comparators during the fitness
   * assignment process.
   *
   * @param a
   *          the first individual
   * @param b
   *          the second individual
   * @return -1 if a is better than b, 1 if b is better than a, 0 if
   *         neither of them is better
   */
  public abstract int compare(final MOIndividual<?, ?> a,
      final MOIndividual<?, ?> b);
}
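The canonical instance of such a comparator is Pareto dominance. A standalone sketch over plain objective vectors (hypothetical names, minimization assumed, not the book's Pareto class) could look like this:

```java
/** A hypothetical Pareto-dominance comparison over objective vectors. */
public class ParetoComparison {

  /**
   * Compare two objective vectors under minimization.
   *
   * @return -1 if a dominates b, 1 if b dominates a, 0 otherwise
   */
  public static int compare(final double[] a, final double[] b) {
    boolean aBetterSomewhere = false, bBetterSomewhere = false;
    for (int i = 0; i < a.length; i++) {
      if (a[i] < b[i]) {
        aBetterSomewhere = true;
      } else if (b[i] < a[i]) {
        bBetterSomewhere = true;
      }
    }
    if (aBetterSomewhere && !bBetterSomewhere) {
      return -1; // a is no worse anywhere and strictly better somewhere
    }
    if (bBetterSomewhere && !aBetterSomewhere) {
      return 1;
    }
    return 0; // equal or mutually non-dominated
  }

  public static void main(final String[] args) {
    System.out.println(compare(new double[] { 1d, 2d },
        new double[] { 2d, 3d })); // -1: first vector dominates
    System.out.println(compare(new double[] { 1d, 3d },
        new double[] { 2d, 2d })); // 0: mutually non-dominated
  }
}
```

A fitness assignment process would then, for instance, count for each individual how many others dominate it and use that count as its scalar fitness.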
55.2.2 Estimation Of Distribution Algorithms
Listing 55.16: The interface for model building and updating.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.spec;

import java.util.Random;

import org.goataa.impl.utils.Individual;

/**
 * Here we give a basic interface to the model building algorithms used in
 * EDAs as introduced in Section 34.3 on page 431.
 *
 * @param <M>
 *          the model type
 * @param <G>
 *          the search space (genome, Section 4.1 on page 81)
 * @author Thomas Weise
 */
public interface IModelBuilder<M, G> extends IOptimizationModule {

  /**
   * Build a model from a set of selected points in the search space and an
   * old model as introduced in Section 34.3 on page 431.
   *
   * @param mate
   *          the mating pool of selected individuals
   * @param start
   *          the index of the first individual in the mating pool
   * @param count
   *          the number of individuals in the mating pool
   * @param oldModel
   *          the old model
   * @param r
   *          a random number generator
   * @return the new model
   */
  public abstract M buildModel(final Individual<G, ?>[] mate,
      final int start, final int count, final M oldModel, final Random r);

}
Listing 55.17: The interface for sampling points from a model.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.spec;

import java.util.Random;

/**
 * Here we give a basic interface for sampling models, which is a basic
 * functionality of EDAs (Section 34.3 on page 431).
 *
 * @param <M>
 *          the model type
 * @param <G>
 *          the search space (genome, Section 4.1 on page 81)
 * @author Thomas Weise
 */
public interface IModelSampler<M, G> extends IOptimizationModule {

  /**
   * Sample one point from the model as required in Estimation of
   * Distribution Algorithms introduced in Chapter 34 on page 427.
   *
   * @param model
   *          the model
   * @param r
   *          a random number generator
   * @return the point newly sampled from the model
   */
  public abstract G sampleModel(final M model, final Random r);

}
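To see how model building and model sampling interact, consider a minimal UMDA-style sketch (hypothetical names, not part of the book's code base): the model is simply a vector of per-position probabilities estimated from the selected bit-string genotypes, and sampling draws each bit independently from these marginals:

```java
import java.util.Random;

/** A hypothetical univariate (UMDA-style) model over fixed-length bit strings. */
public class UnivariateBitModel {

  /** Build the model: the per-position frequency of 1-bits in the mating pool. */
  public static double[] buildModel(final boolean[][] mate) {
    final double[] model = new double[mate[0].length];
    for (final boolean[] genotype : mate) {
      for (int i = 0; i < genotype.length; i++) {
        if (genotype[i]) {
          model[i]++;
        }
      }
    }
    for (int i = 0; i < model.length; i++) {
      model[i] /= mate.length; // turn counts into marginal probabilities
    }
    return model;
  }

  /** Sample one new genotype: each bit is 1 with its marginal probability. */
  public static boolean[] sampleModel(final double[] model, final Random r) {
    final boolean[] g = new boolean[model.length];
    for (int i = 0; i < model.length; i++) {
      g[i] = (r.nextDouble() < model[i]);
    }
    return g;
  }

  public static void main(final String[] args) {
    final boolean[][] mate = { { true, false }, { true, true } };
    final double[] model = buildModel(mate);
    System.out.println(model[0] + " " + model[1]); // 1.0 0.5
    final boolean[] g = sampleModel(model, new Random(1L));
    System.out.println(g[0]); // true: a probability-1.0 bit is always sampled as 1
  }
}
```

This sketch ignores the oldModel parameter of buildModel; an incremental builder such as PBIL would blend the new frequencies with the old model instead of discarding it.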
Chapter 56
The Implementation Package
In this package, we implement the interfaces given in the package org.goataa.spec and discussed in Chapter 55 on page 707.
Listing 56.1: The base class for everything involved in optimization.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl;

import org.goataa.spec.IOptimizationModule;

/**
 * The base class for all objects involved in this implementation of
 * optimization. This class implements the interface given in
 * Listing 55.1 on page 707.
 *
 * @author Thomas Weise
 */
public class OptimizationModule implements IOptimizationModule {
  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** Instantiate */
  protected OptimizationModule() {
    super();
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  public String getName(final boolean longVersion) {
    return this.getClass().getSimpleName();
  }

  /**
   * Get the full configuration which holds all the data necessary to
   * describe this object.
   *
   * @param longVersion
   *          true if the long version should be returned, false if the
   *          short version should be returned
   * @return the full configuration
   */
  public String getConfiguration(final boolean longVersion) {
    return ""; //$NON-NLS-1$
  }

  /**
   * Get the string representation of this object, i.e., the name and
   * configuration.
   *
   * @param longVersion
   *          true if the long version should be returned, false if the
   *          short version should be returned
   * @return the string version of this object
   */
  public String toString(final boolean longVersion) {
    String s, t;
    int i, j;
    char[] ch;

    s = this.getName(longVersion);
    t = this.getConfiguration(longVersion);

    if ((t == null) || ((i = t.length()) <= 0)) {
      return s;
    }

    j = s.length();
    ch = new char[i + j + 2];
    s.getChars(0, j, ch, 0);
    ch[j] = '(';
    t.getChars(0, i, ch, ++j);
    ch[i + j] = ')';
    return String.valueOf(ch);
  }

  /**
   * Obtain the string representation of this object.
   *
   * @return the string representation of this object.
   */
  @Override
  public final String toString() {
    return this.toString(false);
  }
}
56.1 The Optimization Algorithms
In this package, we provide the implementations of the optimization algorithms discussed in Part III and Part IV according to the specifications given in Chapter 55. Before we do that, we first provide the baseline search patterns from Chapter 8, both for the sake of completeness and for comparison.
56.1.1 Algorithm Base Classes
With this class, we provide a default implementation of the interface for single-objective
optimization methods given in Listing 55.10 on page 713.
Listing 56.2: A base class for all single-objective optimization methods.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.algorithms;

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

import org.goataa.impl.OptimizationModule;
import org.goataa.impl.gpms.IdentityMapping;
import org.goataa.impl.termination.TerminationCriterion;
import org.goataa.impl.utils.Individual;
import org.goataa.spec.IGPM;
import org.goataa.spec.IObjectiveFunction;
import org.goataa.spec.IOptimizationModule;
import org.goataa.spec.ISOOptimizationAlgorithm;
import org.goataa.spec.ITerminationCriterion;

/**
 * The default base class for implementations of the optimization algorithm
 * interface given in Listing 55.10 on page 713.
 *
 * @param <G>
 *          the search space (genome, Section 4.1 on page 81)
 * @param <X>
 *          the problem space (phenome, Section 2.1 on page 43)
 * @param <IT>
 *          the individual type
 * @author Thomas Weise
 */
public class SOOptimizationAlgorithm<G, X, IT extends Individual<G, X>>
    extends OptimizationModule implements
    ISOOptimizationAlgorithm<G, X, IT> {
  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the seed generator */
  private static final Random SEED = new Random(0L);

  /** the genotype-phenotype mapping */
  private IGPM<G, X> gpm;

  /** the termination criterion */
  private ITerminationCriterion tc;

  /** the objective function */
  private IObjectiveFunction<X> f;

  /** the internal randomizer */
  private final Random random;

  /** the seed */
  private long seed;

  /** should we use a random seed? */
  private boolean randomSeed;

  /** the maximum number of solutions */
  private int maxs;

  /** Create a new optimization algorithm */
  @SuppressWarnings("unchecked")
  protected SOOptimizationAlgorithm() {
    super();
    this.random = new Random();
    this.gpm = ((IGPM) (IdentityMapping.IDENTITY_MAPPING));
    this.tc = TerminationCriterion.NEVER_TERMINATE;
    this.randomSeed = true;
    this.seed = SEED.nextLong();
    this.maxs = 1;
  }

  /** {@inheritDoc} */
  @Override
  public List<IT> call() {
    final List<IT> l;
    final ITerminationCriterion c;
    final Random r;

    l = new ArrayList<IT>();

    c = this.getTerminationCriterion();
    c.reset();

    r = this.random;
    if (this.randomSeed) {
      this.seed = SEED.nextLong();
    }
    r.setSeed(this.seed);

    this.call(r, c, l);

    return l;
  }

  /** {@inheritDoc} */
  @Override
  public void call(final Random r, final ITerminationCriterion term,
      final List<IT> result) {
    result.clear();
  }

  /** {@inheritDoc} */
  @Override
  public final void setGPM(final IGPM<G, X> agpm) {
    this.gpm = agpm;
  }

  /** {@inheritDoc} */
  @Override
  public final IGPM<G, X> getGPM() {
    return this.gpm;
  }

  /** {@inheritDoc} */
  @Override
  public void setTerminationCriterion(final ITerminationCriterion aterm) {
    this.tc = ((aterm != null) ? aterm
        : TerminationCriterion.NEVER_TERMINATE);
  }

  /** {@inheritDoc} */
  @Override
  public ITerminationCriterion getTerminationCriterion() {
    return this.tc;
  }

  /** {@inheritDoc} */
  @Override
  public void setObjectiveFunction(final IObjectiveFunction<X> af) {
    this.f = af;
  }

  /** {@inheritDoc} */
  @Override
  public final IObjectiveFunction<X> getObjectiveFunction() {
    return this.f;
  }

  /** {@inheritDoc} */
  @Override
  public final void setRandSeed(final long aseed) {
    this.randomSeed = false;
    this.seed = aseed;
  }

  /** {@inheritDoc} */
  @Override
  public final long getRandSeed() {
    return this.seed;
  }

  /** {@inheritDoc} */
  @Override
  public final void useRandomRandSeed() {
    this.randomSeed = true;
  }

  /** {@inheritDoc} */
  @Override
  public final boolean usesRandomRandSeed() {
    return this.randomSeed;
  }

  /** {@inheritDoc} */
  @Override
  public void setMaxSolutions(final int max) {
    this.maxs = ((max > 1) ? max : 1);
  }

  /** {@inheritDoc} */
  @Override
  public int getMaxSolutions() {
    return this.maxs;
  }

  /** {@inheritDoc} */
  @Override
  @SuppressWarnings("unchecked")
  public String getConfiguration(final boolean longVersion) {
    StringBuilder sb;
    IOptimizationModule o;

    sb = new StringBuilder();

    o = this.getGPM();
    if (o != null) {
      if (longVersion || (!(o instanceof IdentityMapping))) {
        sb.append("gpm="); //$NON-NLS-1$
        sb.append(o.toString(longVersion));
      }
    }

    o = this.getObjectiveFunction();
    if (o != null) {
      if (sb.length() > 0) {
        sb.append(',');
      }
      sb.append("f=");//$NON-NLS-1$
      sb.append(o.toString(longVersion));
    }

    o = this.getTerminationCriterion();
    if (o != null) {
      if (sb.length() > 0) {
        sb.append(',');
      }
      sb.append(longVersion ? "terminationCriterion=" : //$NON-NLS-1$
          "tc=");//$NON-NLS-1$
      sb.append(o.toString(longVersion));
    }

    if (sb.length() > 0) {
      sb.append(',');
    }
    if (this.usesRandomRandSeed()) {
      sb.append(longVersion ? "randomSeed" : "r");//$NON-NLS-1$//$NON-NLS-2$
    } else {
      sb.append(longVersion ? "fixedRandomSeed=" : //$NON-NLS-1$
          "frs=");//$NON-NLS-1$
      sb.append(this.getRandSeed());
    }

    return sb.toString();
  }

  /** A simple wrapper to make this algorithm Runnable */
  public void run() {
    this.call();
  }
}
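The seed handling in call() above — draw a fresh seed per run unless a fixed one has been set — can be isolated in a small standalone sketch (hypothetical class name, not from the book's code base) to show why a fixed seed makes runs reproducible:

```java
import java.util.Random;

/** A hypothetical sketch of the fixed-vs-random seed pattern used in call(). */
public class SeedPattern {
  private static final Random SEED = new Random(0L);

  private final Random random = new Random();
  private long seed = SEED.nextLong();
  private boolean randomSeed = true;

  /** Fix the seed: every subsequent run becomes deterministic. */
  public void setRandSeed(final long aseed) {
    this.randomSeed = false;
    this.seed = aseed;
  }

  /** One "run": re-seed the randomizer, then produce a pseudo-random result. */
  public long run() {
    if (this.randomSeed) {
      this.seed = SEED.nextLong(); // a fresh seed for every run
    }
    this.random.setSeed(this.seed);
    return this.random.nextLong();
  }

  public static void main(final String[] args) {
    final SeedPattern p = new SeedPattern();
    p.setRandSeed(42L);
    System.out.println(p.run() == p.run()); // true: fixed seed, reproducible runs
  }
}
```

Reproducibility of this kind is what makes experiments with stochastic optimizers repeatable and debuggable.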
Listing 56.3: A base class for all multi-objective optimization methods.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.algorithms;

import org.goataa.impl.comparators.Pareto;
import org.goataa.impl.utils.MOIndividual;
import org.goataa.spec.IIndividualComparator;
import org.goataa.spec.IObjectiveFunction;

/**
 * An algorithm for multi-objective optimization
 *
 * @param <G>
 *          the search space (genome, Section 4.1 on page 81)
 * @param <X>
 *          the problem space (phenome, Section 2.1 on page 43)
 * @author Thomas Weise
 */
public class MOOptimizationAlgorithm<G, X> extends
    SOOptimizationAlgorithm<G, X, MOIndividual<G, X>> {
  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the individual comparator */
  private IIndividualComparator cmp;

  /** the objective functions */
  private IObjectiveFunction<X>[] ofs;

  /** Instantiate a new multi-objective optimization algorithm */
  protected MOOptimizationAlgorithm() {
    super();
    this.cmp = Pareto.PARETO_COMPARATOR;
    this.setMaxSolutions(20);
  }

  /**
   * Get the individual comparator
   *
   * @return the individual comparator
   */
  public IIndividualComparator getComparator() {
    return this.cmp;
  }

  /**
   * Set the individual comparator
   *
   * @param pcmp
   *          the individual comparator
   */
  public void setComparator(final IIndividualComparator pcmp) {
    this.cmp = pcmp;
  }

  /**
   * Get the objective functions
   *
   * @return the objective functions
   */
  public IObjectiveFunction<X>[] getObjectiveFunctions() {
    return this.ofs;
  }

  /**
   * Set the objective functions
   *
   * @param o
   *          the objective functions
   */
  public void setObjectiveFunctions(final IObjectiveFunction<X>[] o) {
    this.ofs = o;
    if (o.length > 0) {
      super.setObjectiveFunction(o[0]);
    }
  }

  /**
   * Set the objective function (see Section 2.2 on page 44 and
   * Listing 55.8 on page 712) to be used to guide the optimization process
   *
   * @param af
   *          the objective function
   */
  @Override
  @SuppressWarnings("unchecked")
  public void setObjectiveFunction(final IObjectiveFunction<X> af) {
    this.setObjectiveFunctions(new IObjectiveFunction[] { af });
  }
}
56.1.2 Baseline Search Patterns
Random sampling and random walks, both discussed in Chapter 8, are two baseline search patterns which do not utilize any information from the objective functions during their course (apart from remembering the best candidate solution found).
Listing 56.4: The random sampling algorithm randomSampling given in Algorithm 8.1.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.algorithms;

import java.util.List;
import java.util.Random;

import org.goataa.impl.searchOperations.NullarySearchOperation;
import org.goataa.impl.utils.Individual;
import org.goataa.spec.IGPM;
import org.goataa.spec.INullarySearchOperation;
import org.goataa.spec.IObjectiveFunction;
import org.goataa.spec.IOptimizationModule;
import org.goataa.spec.ITerminationCriterion;

/**
 * A simple implementation of the Random Sampling algorithm introduced as
 * Algorithm 8.1 on page 114.
 *
 * @param <G>
 *          the search space (genome, Section 4.1 on page 81)
 * @param <X>
 *          the problem space (phenome, Section 2.1 on page 43)
 * @author Thomas Weise
 */
public final class RandomSampling<G, X> extends
    SOOptimizationAlgorithm<G, X, Individual<G, X>> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the nullary search operation */
  private INullarySearchOperation<G> o0;

  /** instantiate the random sampling class */
  @SuppressWarnings("unchecked")
  public RandomSampling() {
    super();
    this.o0 = (INullarySearchOperation) (NullarySearchOperation.NULL_CREATION);
  }

  /**
   * Set the nullary search operation to be used by this optimization
   * algorithm (see Section 4.2 on page 83 and Listing 55.2 on page 708).
   *
   * @param op
   *          the nullary search operation to use
   */
  @SuppressWarnings("unchecked")
  public void setNullarySearchOperation(final INullarySearchOperation<G> op) {
    this.o0 = ((op != null) ? op
        : ((INullarySearchOperation) (NullarySearchOperation.NULL_CREATION)));
  }

  /**
   * Get the nullary search operation which is used by this optimization
   * algorithm (see Section 4.2 on page 83 and Listing 55.2 on page 708).
   *
   * @return the nullary search operation to use
   */
  public final INullarySearchOperation<G> getNullarySearchOperation() {
    return this.o0;
  }

  /**
   * Invoke the random sampling process.
   *
   * @param r
   *          the randomizer (will be used directly without setting the
   *          seed)
   * @param term
   *          the termination criterion (will be used directly without
   *          resetting)
   * @param result
   *          a list to which the results are to be appended
   */
  @Override
  public void call(final Random r, final ITerminationCriterion term,
      final List<Individual<G, X>> result) {

    result.add(RandomSampling.randomSampling(this.getObjectiveFunction(),//
        this.getNullarySearchOperation(), //
        this.getGPM(), term, r));

  }

  /**
   * We place the complete Random Sampling method as defined in
   * Algorithm 8.1 on page 114 into this single procedure.
   *
   * @param f
   *          the objective function (Definition D2.3 on page 44)
   * @param create
   *          the nullary search operator (Section 4.2 on page 83) for
   *          creating the initial genotype
   * @param gpm
   *          the genotype-phenotype mapping (Section 4.3 on page 85)
   * @param term
   *          the termination criterion (Section 6.3.3 on page 105)
   * @param r
   *          the random number generator
   * @return the individual holding the best candidate solution
   *         (Definition D2.2 on page 43) found
   * @param <G>
   *          the search space (Section 4.1 on page 81)
   * @param <X>
   *          the problem space (Section 2.1 on page 43)
   */
  public static final <G, X> Individual<G, X> randomSampling(
      final IObjectiveFunction<X> f,
      final INullarySearchOperation<G> create,//
      final IGPM<G, X> gpm,//
      final ITerminationCriterion term, final Random r) {

    Individual<G, X> p, pbest;
    int t;

    p = new Individual<G, X>();
    pbest = new Individual<G, X>();

    // create the first genotype, map it to a phenotype, and evaluate it
    p.g = create.create(r);
    t = 1;

    // check the termination criterion
    while (!(term.terminationCriterion())) {
      p.x = gpm.gpm(p.g, r);
      p.v = f.compute(p.x, r);

      // remember the best candidate solution
      if ((t == 1) || (p.v < pbest.v)) {
        pbest.assign(p);
      }

      t++;

      // Instead of modifying existing genotypes as done in
      // Listing 56.5, we create new ones in each round
      p.g = create.create(r);
    }

    return pbest;
  }

  /**
   * Get the full configuration which holds all the data necessary to
   * describe this object.
   *
   * @param longVersion
   *          true if the long version should be returned, false if the
   *          short version should be returned
   * @return the full configuration
   */
  @Override
  public String getConfiguration(final boolean longVersion) {
    StringBuilder sb;
    IOptimizationModule o;

    sb = new StringBuilder();

    sb.append(super.getConfiguration(longVersion));

    o = this.getNullarySearchOperation();
    if (o != null) {
      if (sb.length() > 0) {
        sb.append(',');
      }
      sb.append("o0=");//$NON-NLS-1$
      sb.append(o.toString(longVersion));
    }

    return sb.toString();
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return this.getClass().getSimpleName();
    }
    return "RS"; //$NON-NLS-1$
  }
}
Listing 56.5: The random walk algorithm randomWalk given in Algorithm 8.2.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.algorithms;

import java.util.List;
import java.util.Random;

import org.goataa.impl.utils.Individual;
import org.goataa.spec.IGPM;
import org.goataa.spec.INullarySearchOperation;
import org.goataa.spec.IObjectiveFunction;
import org.goataa.spec.ITerminationCriterion;
import org.goataa.spec.IUnarySearchOperation;

/**
 * A simple implementation of the Random Walk algorithm introduced as
 * Algorithm 8.2 on page 115.
 *
 * @param <G>
 *          the search space (genome, Section 4.1 on page 81)
 * @param <X>
 *          the problem space (phenome, Section 2.1 on page 43)
 * @author Thomas Weise
 */
public final class RandomWalk<G, X> extends
    LocalSearchAlgorithm<G, X, Individual<G, X>> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** instantiate the random walk class */
  public RandomWalk() {
    super();
  }

  /**
   * Invoke the random walk.
   *
   * @param r
   *          the randomizer (will be used directly without setting the
   *          seed)
   * @param term
   *          the termination criterion (will be used directly without
   *          resetting)
   * @param result
   *          a list to which the results are to be appended
   */
  @Override
  public void call(final Random r, final ITerminationCriterion term,
      final List<Individual<G, X>> result) {

    result.add(RandomWalk.randomWalk(this.getObjectiveFunction(),//
        this.getNullarySearchOperation(), //
        this.getUnarySearchOperation(),//
        this.getGPM(), term, r));
  }

  /**
   * We place the complete Random Walk method as defined in
   * Algorithm 8.2 on page 115 into this single procedure.
   *
   * @param f
   *          the objective function (Definition D2.3 on page 44)
   * @param create
   *          the nullary search operator (Section 4.2 on page 83) for
   *          creating the initial genotype
   * @param mutate
   *          the unary search operator (Section 4.2 on page 83) for
   *          modifying existing genotypes
   * @param gpm
   *          the genotype-phenotype mapping (Section 4.3 on page 85)
   * @param term
   *          the termination criterion (Section 6.3.3 on page 105)
   * @param r
   *          the random number generator
   * @return the individual holding the best candidate solution
   *         (Definition D2.2 on page 43) found
   * @param <G>
   *          the search space (Section 4.1 on page 81)
   * @param <X>
   *          the problem space (Section 2.1 on page 43)
   */
  public static final <G, X> Individual<G, X> randomWalk(
      final IObjectiveFunction<X> f,
      final INullarySearchOperation<G> create,//
      final IUnarySearchOperation<G> mutate,//
      final IGPM<G, X> gpm,//
      final ITerminationCriterion term, final Random r) {

    Individual<G, X> p, pbest;
    int t;

    p = new Individual<G, X>();
    pbest = new Individual<G, X>();

    // create the first genotype, map it to a phenotype, and evaluate it
    p.g = create.create(r);
    t = 1;

    // check the termination criterion
    while (!(term.terminationCriterion())) {
      p.x = gpm.gpm(p.g, r);
      p.v = f.compute(p.x, r);

      // remember the best candidate solution
      if ((t == 1) || (p.v < pbest.v)) {
        pbest.assign(p);
      }

      t++;

      // modify the last point checked, map the new point to a phenotype,
      // and evaluate it - the main difference to ?? on page ?? is that we
      // do not use the best genotype for this
      p.g = mutate.mutate(p.g, r);
    }

    return pbest;
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return this.getClass().getSimpleName();
    }
    return "RW"; //$NON-NLS-1$
  }
}
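The two baseline patterns can also be contrasted in a compact standalone sketch (a hypothetical toy problem, not using the book's classes): random sampling draws every point independently, while a random walk perturbs its previous point, remembering the best point either way:

```java
import java.util.Random;

/** A toy contrast of random sampling vs. random walk on f(x) = |x - 42|. */
public class BaselineDemo {

  static double f(final int x) {
    return Math.abs(x - 42);
  }

  /** Random sampling: each step draws an independent point in [0, 100). */
  public static int randomSampling(final int steps, final Random r) {
    int best = r.nextInt(100);
    for (int t = 1; t < steps; t++) {
      final int x = r.nextInt(100);
      if (f(x) < f(best)) {
        best = x;
      }
    }
    return best;
  }

  /** Random walk: each step perturbs the PREVIOUS point, not the best one. */
  public static int randomWalk(final int steps, final Random r) {
    int x = r.nextInt(100);
    int best = x;
    for (int t = 1; t < steps; t++) {
      x += (r.nextBoolean() ? 1 : -1); // unary "mutation" of the last point
      if (f(x) < f(best)) {
        best = x;
      }
    }
    return best;
  }

  public static void main(final String[] args) {
    final Random r = new Random(123L);
    System.out.println(f(randomSampling(10000, r)));
    System.out.println(f(randomWalk(10000, new Random(5L))));
  }
}
```

Neither method exploits the objective function to steer its next move; this is exactly what distinguishes them from the local search algorithms in the next section.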
56.1.3 Local Search Algorithms
56.1.3.1 Hill Climbers
Here we provide Hill Climbing algorithms as discussed in Chapter 26. Hill Climbing is one of the most basic optimization methods: a local search that keeps a single individual and repeatedly modifies its genotype. It transcends to the new, resulting candidate solution if and only if it is better than the current one.
Listing 56.6: The Hill Climbing algorithm hillClimbing given in Algorithm 26.1.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.algorithms.hc;

import java.util.List;
import java.util.Random;

import org.goataa.impl.algorithms.LocalSearchAlgorithm;
import org.goataa.impl.utils.Individual;
import org.goataa.spec.IGPM;
import org.goataa.spec.INullarySearchOperation;
import org.goataa.spec.IObjectiveFunction;
import org.goataa.spec.ITerminationCriterion;
import org.goataa.spec.IUnarySearchOperation;

/**
 * A simple implementation of the Hill Climbing algorithm introduced as
 * Algorithm 26.1 on page 230.
 *
 * @param <G>
 *          the search space (genome, Section 4.1 on page 81)
 * @param <X>
 *          the problem space (phenome, Section 2.1 on page 43)
 * @author Thomas Weise
 */
public final class HillClimbing<G, X> extends
    LocalSearchAlgorithm<G, X, Individual<G, X>> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** instantiate the hill climbing class */
  public HillClimbing() {
    super();
  }

  /**
   * Invoke the optimization process. This method calls the optimizer and
   * returns the list of best individuals (see Definition D4.18 on page 90)
   * found. Usually, only a single individual will be returned. Different
   * from the parameterless call method, here a randomizer and a
   * termination criterion are directly passed in. Also, a list to fill in
   * the optimization results is provided. This allows recursively using
   * the optimization algorithms.
   *
   * @param r
   *          the randomizer (will be used directly without setting the
   *          seed)
   * @param term
   *          the termination criterion (will be used directly without
   *          resetting)
   * @param result
   *          a list to which the results are to be appended
   */
  @Override
  public void call(final Random r, final ITerminationCriterion term,
      final List<Individual<G, X>> result) {

    result.add(HillClimbing.hillClimbing(this.getObjectiveFunction(),//
        this.getNullarySearchOperation(), //
        this.getUnarySearchOperation(),//
        this.getGPM(), term, r));

  }

  /**
   * We place the complete Hill Climbing method as defined in
   * Algorithm 26.1 on page 230 into this single procedure.
   *
   * @param f
   *          the objective function (Definition D2.3 on page 44)
   * @param create
   *          the nullary search operator (Section 4.2 on page 83) for
   *          creating the initial genotype
   * @param mutate
   *          the unary search operator (Section 4.2 on page 83) for
   *          modifying existing genotypes
   * @param gpm
   *          the genotype-phenotype mapping (Section 4.3 on page 85)
   * @param term
   *          the termination criterion (Section 6.3.3 on page 105)
   * @param r
   *          the random number generator
   * @return the individual holding the best candidate solution
   *         (Definition D2.2 on page 43) found
   * @param <G>
   *          the search space (Section 4.1 on page 81)
   * @param <X>
   *          the problem space (Section 2.1 on page 43)
   */
  public static final <G, X> Individual<G, X> hillClimbing(
      final IObjectiveFunction<X> f,
      final INullarySearchOperation<G> create,//
      final IUnarySearchOperation<G> mutate,//
      final IGPM<G, X> gpm,//
      final ITerminationCriterion term, final Random r) {

    Individual<G, X> p, pnew;

    p = new Individual<G, X>();
    pnew = new Individual<G, X>();

    // create the first genotype, map it to a phenotype, and evaluate it
    p.g = create.create(r);
    p.x = gpm.gpm(p.g, r);
    p.v = f.compute(p.x, r);

    // check the termination criterion
    while (!(term.terminationCriterion())) {

      // modify the best point known, map the new point to a phenotype and
      // evaluate it
      pnew.g = mutate.mutate(p.g, r);
      pnew.x = gpm.gpm(pnew.g, r);
      pnew.v = f.compute(pnew.x, r);

      // In Algorithm 26.1 on page 230, the objective functions are
      // evaluated here. By storing the objective values in the individual
      // records, we avoid evaluating p.x more than once.
      if (pnew.v < p.v) {
        p.assign(pnew);
      }
    }

    return p;
  }

  /**
   * Get the name of the optimization module
135
*
136
*
@param longVersion
137
*
true if the long name should be returned, false if the short
138
*
name should be returned
139
*
@return the name of the optimization module
140
*
/
141 @Override
142 public String getName(final boolean longVersion) {
143 if (longVersion) {
144 return this.getClass().getSimpleName();
145 }
146 return "HC"; //NON NLS 1
147 }
148 }
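To see the hill climbing loop in isolation, the following self-contained sketch strips away the goataa interfaces and minimizes f(x) = x² over the integers with a random ±1 step as the unary operation and a fixed step budget as the termination criterion. All names here are illustrative and not part of the goataa API.

```java
import java.util.Random;

/** A minimal, library-free sketch of the hill climbing loop above. */
public final class TinyHillClimber {

  /** climb down f(x) = x * x, starting at start, for a fixed step budget */
  public static int climb(final int start, final int steps, final Random r) {
    int best = start;
    int bestV = best * best; // objective value of the best point so far

    for (int t = 0; t < steps; t++) { // termination: fixed step budget
      // unary search operation: move one unit to the left or to the right
      final int cand = best + (r.nextBoolean() ? 1 : -1);
      final int candV = cand * cand;

      // accept only strict improvements, exactly as in hillClimbing(...)
      if (candV < bestV) {
        best = cand;
        bestV = candV;
      }
    }
    return best;
  }

  public static void main(final String[] args) {
    System.out.println(TinyHillClimber.climb(50, 10000, new Random(1)));
  }
}
```

Because f is unimodal over this neighborhood, a modest step budget suffices to reach the optimum at x = 0; on a multimodal objective the same loop would stall in the first local optimum it reaches, which is the limitation the later algorithms address.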
Listing 56.7: The Multi-Objective Hill Climbing algorithm hillClimbingMO given in Al-
gorithm 26.2.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.algorithms.hc;

import java.util.List;
import java.util.Random;

import org.goataa.impl.algorithms.MOLocalSearchAlgorithm;
import org.goataa.impl.utils.Archive;
import org.goataa.impl.utils.MOIndividual;
import org.goataa.impl.utils.Utils;
import org.goataa.spec.IGPM;
import org.goataa.spec.IIndividualComparator;
import org.goataa.spec.INullarySearchOperation;
import org.goataa.spec.IObjectiveFunction;
import org.goataa.spec.ITerminationCriterion;
import org.goataa.spec.IUnarySearchOperation;

/**
 * An implementation of the Multi-Objective Hill Climbing algorithm
 * introduced as Algorithm 26.2 on page 231.
 *
 * @param <G>
 *          the search space (genome, see Section 4.1 on page 81)
 * @param <X>
 *          the problem space (phenome, see Section 2.1 on page 43)
 * @author Thomas Weise
 */
public final class MOHillClimbing<G, X> extends
    MOLocalSearchAlgorithm<G, X> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** instantiate the hill climbing class */
  public MOHillClimbing() {
    super();
  }

  /**
   * Invoke the multi-objective hill climber.
   *
   * @param r
   *          the randomizer (will be used directly without setting the
   *          seed)
   * @param term
   *          the termination criterion (will be used directly without
   *          resetting)
   * @param result
   *          a list to which the results are to be appended
   */
  @Override
  public void call(final Random r, final ITerminationCriterion term,
      final List<MOIndividual<G, X>> result) {

    MOHillClimbing.hillClimbingMO(//
        this.getObjectiveFunctions(),//
        this.getComparator(),//
        this.getNullarySearchOperation(), //
        this.getUnarySearchOperation(),//
        this.getGPM(), term,//
        this.getMaxSolutions(),//
        r, //
        result);
  }

  /**
   * We place the complete Multi-Objective Hill Climbing method as defined
   * in Algorithm 26.2 on page 231 into this single procedure.
   *
   * @param f
   *          the objective functions (see Definition D2.3 on page 44)
   * @param cmp
   *          the comparator
   * @param create
   *          the nullary search operator (see Section 4.2 on page 83) for
   *          creating the initial genotype
   * @param mutate
   *          the unary search operator (see Section 4.2 on page 83) for
   *          modifying existing genotypes
   * @param gpm
   *          the genotype-phenotype mapping (see Section 4.3 on page 85)
   * @param term
   *          the termination criterion (see Section 6.3.3 on page 105)
   * @param maxSolutions
   *          the maximum number of allowed solutions
   * @param r
   *          the random number generator
   * @param result
   *          the list to which the best individuals (candidate solutions,
   *          see Definition D2.2 on page 43) should be appended
   * @param <G>
   *          the search space (see Section 4.1 on page 81)
   * @param <X>
   *          the problem space (see Section 2.1 on page 43)
   */
  public static final <G, X> void hillClimbingMO(//
      final IObjectiveFunction<X>[] f,//
      final IIndividualComparator cmp,//
      final INullarySearchOperation<G> create,//
      final IUnarySearchOperation<G> mutate,//
      final IGPM<G, X> gpm,//
      final ITerminationCriterion term,//
      final int maxSolutions,//
      final Random r,//
      final List<MOIndividual<G, X>> result) {

    MOIndividual<G, X> p, pnew;
    Archive<G, X> arc;
    int max;

    p = new MOIndividual<G, X>(f.length);
    pnew = new MOIndividual<G, X>(f.length);
    arc = new Archive<G, X>(maxSolutions, cmp);

    // create the first genotype, map it to a phenotype, and evaluate it
    p.g = create.create(r);
    p.x = gpm.gpm(p.g, r);
    p.evaluate(f, r);
    arc.add(p);

    // check the termination criterion
    while (!(term.terminationCriterion())) {
      pnew = new MOIndividual<G, X>(f.length);

      mutateCreate: {
        for (max = 1000; (--max) >= 0;) {
          // pick a random individual from the set of best individuals
          p = arc.get(r.nextInt(arc.size()));

          pnew.g = mutate.mutate(p.g, r);
          if (!(Utils.equals(pnew.g, p.g))) {
            break mutateCreate;
          }
        }
        pnew.g = create.create(r);
      }
      pnew.x = gpm.gpm(pnew.g, r);
      pnew.evaluate(f, r);

      // check whether the individual should enter the archive
      arc.add(pnew);
    }

    result.addAll(arc);
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return this.getClass().getSimpleName();
    }
    return "MOHC"; //$NON-NLS-1$
  }
}
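The archive in the multi-objective hill climber retains only non-dominated individuals, as decided by the pluggable comparator. The following self-contained sketch shows the Pareto dominance test (for minimization) on which such a comparator could be based; it is illustrative only and not the goataa IIndividualComparator implementation.

```java
/** A library-free sketch of Pareto dominance for minimization problems. */
public final class ParetoDominance {

  /**
   * Check whether objective vector a dominates objective vector b, i.e.,
   * whether a is no worse than b in every dimension and strictly better
   * in at least one (all objectives are to be minimized).
   *
   * @return true if and only if a dominates b
   */
  public static boolean dominates(final double[] a, final double[] b) {
    boolean strictlyBetter = false;
    for (int i = 0; i < a.length; i++) {
      if (a[i] > b[i]) {
        return false; // a is worse in one dimension: no dominance
      }
      if (a[i] < b[i]) {
        strictlyBetter = true; // a is strictly better in this dimension
      }
    }
    return strictlyBetter;
  }
}
```

An archive built on this test keeps an individual only if no archived individual dominates it, and evicts every archived individual that the newcomer dominates.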
56.1.3.2 Simulated Annealing
In this package, we provide the implementations of the Simulated Annealing algorithm and
its modules as discussed in Chapter 27.
Listing 56.8: The Simulated Annealing algorithm simulatedAnnealing given in Algo-
rithm 27.1.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.algorithms.sa;

import java.util.List;
import java.util.Random;

import org.goataa.impl.algorithms.LocalSearchAlgorithm;
import org.goataa.impl.algorithms.sa.temperatureSchedules.Exponential;
import org.goataa.impl.utils.Individual;
import org.goataa.spec.IGPM;
import org.goataa.spec.INullarySearchOperation;
import org.goataa.spec.IObjectiveFunction;
import org.goataa.spec.ITemperatureSchedule;
import org.goataa.spec.ITerminationCriterion;
import org.goataa.spec.IUnarySearchOperation;

/**
 * A simple implementation of the Simulated Annealing algorithm introduced
 * as Algorithm 27.1 on page 245.
 *
 * @param <G>
 *          the search space (genome, see Section 4.1 on page 81)
 * @param <X>
 *          the problem space (phenome, see Section 2.1 on page 43)
 * @author Thomas Weise
 */
public final class SimulatedAnnealing<G, X> extends
    LocalSearchAlgorithm<G, X, Individual<G, X>> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the temperature schedule */
  private ITemperatureSchedule schedule;

  /** instantiate the simulated annealing class */
  public SimulatedAnnealing() {
    super();
  }

  /**
   * Set the temperature schedule (see Definition D27.2 on page 247) which
   * will be responsible for setting the acceptance probability of inferior
   * candidate solutions during all optimization steps.
   *
   * @param aschedule
   *          the temperature schedule
   */
  public final void setTemperatureSchedule(
      final ITemperatureSchedule aschedule) {
    this.schedule = aschedule;
  }

  /**
   * Get the temperature schedule (see Definition D27.2 on page 247) which
   * is responsible for setting the acceptance probability of inferior
   * candidate solutions during all optimization steps.
   *
   * @return the temperature schedule
   */
  public final ITemperatureSchedule getTemperatureSchedule() {
    return this.schedule;
  }

  /**
   * Invoke the optimization process. This method calls the optimizer and
   * returns the list of best individuals (see Definition D4.18 on page 90)
   * found. Usually, only a single individual will be returned. Different
   * from the parameterless call method, here a randomizer and a
   * termination criterion are directly passed in. Also, a list to fill in
   * the optimization results is provided. This allows recursively using
   * the optimization algorithms.
   *
   * @param r
   *          the randomizer (will be used directly without setting the
   *          seed)
   * @param term
   *          the termination criterion (will be used directly without
   *          resetting)
   * @param result
   *          a list to which the results are to be appended
   */
  @Override
  public void call(final Random r, final ITerminationCriterion term,
      final List<Individual<G, X>> result) {

    result.add(SimulatedAnnealing.simulatedAnnealing(//
        this.getObjectiveFunction(),//
        this.getNullarySearchOperation(), //
        this.getUnarySearchOperation(),//
        this.getGPM(), this.getTemperatureSchedule(),//
        term, r));
  }

  /**
   * We place the complete Simulated Annealing method as defined in
   * Algorithm 27.1 on page 245 into this single procedure.
   *
   * @param f
   *          the objective function (see Definition D2.3 on page 44)
   * @param create
   *          the nullary search operator (see Section 4.2 on page 83) for
   *          creating the initial genotype
   * @param mutate
   *          the unary search operator (see Section 4.2 on page 83) for
   *          modifying existing genotypes
   * @param gpm
   *          the genotype-phenotype mapping (see Section 4.3 on page 85)
   * @param temperature
   *          the temperature schedule (see Definition D27.2 on page 247)
   * @param term
   *          the termination criterion (see Section 6.3.3 on page 105)
   * @param r
   *          the random number generator
   * @return the best candidate solution (see Definition D2.2 on page 43)
   *         found
   * @param <G>
   *          the search space (see Section 4.1 on page 81)
   * @param <X>
   *          the problem space (see Section 2.1 on page 43)
   */
  public static final <G, X> Individual<G, X> simulatedAnnealing(//
      final IObjectiveFunction<X> f,
      final INullarySearchOperation<G> create,//
      final IUnarySearchOperation<G> mutate,//
      final IGPM<G, X> gpm,//
      final ITemperatureSchedule temperature,
      final ITerminationCriterion term, final Random r) {

    Individual<G, X> pbest, pcur, pnew;
    int t;
    double T, DE;

    pbest = new Individual<G, X>();
    pnew = new Individual<G, X>();
    pcur = new Individual<G, X>();

    // create the first genotype, map it to a phenotype, and evaluate it
    pcur.g = create.create(r);
    pcur.x = gpm.gpm(pcur.g, r);
    pcur.v = f.compute(pcur.x, r);
    pbest.assign(pcur);
    t = 1;

    // check the termination criterion
    while (!(term.terminationCriterion())) {

      // compute the current temperature
      T = temperature.getTemperature(t);

      // modify the current point, map the new point to a phenotype, and
      // evaluate it
      pnew.g = mutate.mutate(pcur.g, r);
      pnew.x = gpm.gpm(pnew.g, r);
      pnew.v = f.compute(pnew.x, r);

      // compute the energy difference according to
      // Equation 27.2 on page 244
      DE = pnew.v - pcur.v;

      // implement Equation 27.3 on page 244
      if (DE <= 0.0d) {
        pcur.assign(pnew);

        // remember the best candidate solution
        if (pnew.v < pbest.v) {
          pbest.assign(pnew);
        }
      } else {
        // otherwise, accept the worse solution with probability e^(-DE/T)
        if (r.nextDouble() < Math.exp(-DE / T)) {
          pcur.assign(pnew);
        }
      }

      t++;
    }

    return pbest;
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    ITemperatureSchedule ts;

    ts = this.getTemperatureSchedule();

    if (ts instanceof Exponential) {
      return (longVersion ? "SimulatedQuenching" : "SQ"); //$NON-NLS-1$//$NON-NLS-2$
    }

    if (longVersion) {
      return this.getClass().getSimpleName();
    }
    return "SA"; //$NON-NLS-1$
  }

  /**
   * Get the full configuration which holds all the data necessary to
   * describe this object.
   *
   * @param longVersion
   *          true if the long version should be returned, false if the
   *          short version should be returned
   * @return the full configuration
   */
  @Override
  public String getConfiguration(final boolean longVersion) {
    ITemperatureSchedule ts;
    String s;

    s = super.getConfiguration(longVersion);
    ts = this.getTemperatureSchedule();
    if (ts != null) {
      s += ',' + ts.toString(longVersion);
    }

    return s;
  }
}
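The heart of the loop above is the Metropolis acceptance rule: improvements are always accepted, while a worsening step of size DE is accepted with probability e^(-DE/T). The following self-contained sketch isolates that rule; the names are illustrative and not part of the goataa API.

```java
import java.util.Random;

/** A library-free sketch of the Metropolis acceptance rule used above. */
public final class MetropolisStep {

  /**
   * Decide whether a new candidate with energy difference deltaE (new
   * objective value minus current one) should replace the current point
   * at the given temperature.
   *
   * @return true if the new candidate should be accepted
   */
  public static boolean accept(final double deltaE,
      final double temperature, final Random r) {
    if (deltaE <= 0.0d) {
      return true; // never reject an improvement
    }
    // Boltzmann criterion: a high temperature accepts almost any
    // worsening, a temperature near zero accepts almost none
    return r.nextDouble() < Math.exp(-deltaE / temperature);
  }
}
```

With temperature near zero the rule degenerates to the strict hill climbing acceptance, which is why the temperature schedule governs the transition from exploration to exploitation.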
56.1.3.2.1 Temperature Schedules
In this package, we provide the implementations of the temperature schedules for the Sim-
ulated Annealing algorithm and its modules as discussed in Section 27.3.
Listing 56.9: The logarithmic temperature schedule as specified in Equation 27.16.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.algorithms.sa.temperatureSchedules;

/**
 * A logarithmic temperature schedule as defined in
 * Section 27.3.1 on page 248 which implements the interface
 * ITemperatureSchedule given in Listing 55.12 on page 717.
 *
 * @author Thomas Weise
 */
public class Logarithmic extends TemperatureSchedule {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /**
   * Create a new logarithmic temperature schedule
   *
   * @param pTs
   *          the starting temperature
   */
  public Logarithmic(final double pTs) {
    super(pTs);
  }

  /**
   * Get the temperature to be used in iteration t according to the
   * logarithmic schedule of Section 27.3.1 on page 248, following
   * Equation 27.13 on page 247.
   *
   * @param t
   *          the iteration
   * @return the temperature to be used for determining whether a worse
   *         solution should be accepted or not
   */
  @Override
  public final double getTemperature(final int t) {
    if (t < Math.E) {
      return this.Ts;
    }
    return (this.Ts / Math.log(t));
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return this.getClass().getSimpleName();
    }
    return "log"; //$NON-NLS-1$
  }
}
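A logarithmic schedule cools extremely slowly: since T(t) = Ts/ln(t), reaching a tenth of the starting temperature requires ln(t) ≥ 10, i.e., about e¹⁰ ≈ 22026 iterations. The following self-contained sketch demonstrates this; the helper names are illustrative only.

```java
/** A library-free demonstration of how slowly a logarithmic schedule cools. */
public final class LogCoolingDemo {

  /** the logarithmic schedule above: T(t) = Ts / ln(t) for t >= e */
  static double temperature(final double ts, final int t) {
    return (t < Math.E) ? ts : (ts / Math.log(t));
  }

  /** @return the first iteration whose temperature is at most target */
  public static int firstBelow(final double ts, final double target) {
    int t = 1;
    while (temperature(ts, t) > target) {
      t++;
    }
    return t;
  }

  public static void main(final String[] args) {
    // cooling from Ts = 10 to T = 1 takes about e^10 ~ 22026 iterations
    System.out.println(LogCoolingDemo.firstBelow(10.0, 1.0));
  }
}
```

This slowness is the price of the theoretical convergence guarantees usually associated with logarithmic cooling; in practice, faster schedules are often preferred.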
Listing 56.10: The exponential temperature schedule as specified in Equation 27.17.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.algorithms.sa.temperatureSchedules;

import org.goataa.impl.utils.TextUtils;

/**
 * An exponential temperature schedule as defined in
 * Section 27.3.2 on page 249 which implements the interface
 * ITemperatureSchedule given in Listing 55.12 on page 717.
 *
 * @author Thomas Weise
 */
public class Exponential extends TemperatureSchedule {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the epsilon value */
  private final double e;

  /**
   * Create a new exponential temperature schedule
   *
   * @param pTs
   *          the starting temperature
   * @param pE
   *          the epsilon value
   */
  public Exponential(final double pTs, final double pE) {
    super(pTs);
    this.e = pE;
  }

  /**
   * Get the temperature to be used in iteration t according to the
   * exponential schedule of Section 27.3.2 on page 249, following
   * Equation 27.13 on page 247.
   *
   * @param t
   *          the iteration
   * @return the temperature to be used for determining whether a worse
   *         solution should be accepted or not
   */
  @Override
  public final double getTemperature(final int t) {
    return (Math.pow((1 - this.e), t) * this.Ts);
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return this.getClass().getSimpleName();
    }
    return "exp"; //$NON-NLS-1$
  }

  /**
   * Get the full configuration which holds all the data necessary to
   * describe this object.
   *
   * @param longVersion
   *          true if the long version should be returned, false if the
   *          short version should be returned
   * @return the full configuration
   */
  @Override
  public String getConfiguration(final boolean longVersion) {
    return super.getConfiguration(longVersion) + ','
        + (longVersion ? ("e=" + TextUtils.formatNumber(this.e)) : //$NON-NLS-1$
            TextUtils.formatNumber(this.e));
  }
}
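The exponential schedule T(t) = Ts·(1−ε)^t cools geometrically, which is why it turns simulated annealing into the faster but less conservative "simulated quenching" detected in SimulatedAnnealing.getName(). The following self-contained sketch (helper names are illustrative only) shows how quickly a tenth of the starting temperature is reached for ε = 0.01: after t > ln(0.1)/ln(0.99) ≈ 229.1 iterations, orders of magnitude sooner than under a logarithmic schedule.

```java
/** A library-free demonstration of the geometric decay of the schedule above. */
public final class ExpCoolingDemo {

  /** the exponential schedule above: T(t) = Ts * (1 - eps)^t */
  static double temperature(final double ts, final double eps, final int t) {
    return Math.pow(1.0d - eps, t) * ts;
  }

  /** @return the first iteration whose temperature is at most target */
  public static int firstBelow(final double ts, final double eps,
      final double target) {
    int t = 0;
    while (temperature(ts, eps, t) > target) {
      t++;
    }
    return t;
  }

  public static void main(final String[] args) {
    // cooling from Ts = 10 to T = 1 with eps = 0.01 takes ~230 iterations
    System.out.println(ExpCoolingDemo.firstBelow(10.0, 0.01, 1.0));
  }
}
```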
56.1.4 Population-based Metaheuristics
56.1.4.1 Evolutionary Algorithms
In this package, we provide the implementations of the Evolutionary Algorithms discussed
in Chapter 28.
Listing 56.11: An implementation of Algorithm 28.1 with generational population treatment (see Section 28.1.4.1).
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.algorithms.ea;

import java.util.List;
import java.util.Random;

import org.goataa.impl.utils.Constants;
import org.goataa.impl.utils.Individual;
import org.goataa.spec.IBinarySearchOperation;
import org.goataa.spec.IGPM;
import org.goataa.spec.INullarySearchOperation;
import org.goataa.spec.IObjectiveFunction;
import org.goataa.spec.IOptimizationModule;
import org.goataa.spec.ISelectionAlgorithm;
import org.goataa.spec.ITerminationCriterion;
import org.goataa.spec.IUnarySearchOperation;

/**
 * A straightforward implementation of the Evolutionary Algorithm given in
 * Section 28.1.1 and specified in
 * Algorithm 28.1 on page 255 with generational population treatment
 * (see Section 28.1.4.1 on page 265).
 *
 * @param <G>
 *          the search space (genome, see Section 4.1 on page 81)
 * @param <X>
 *          the problem space (phenome, see Section 2.1 on page 43)
 * @author Thomas Weise
 */
public final class SimpleGenerationalEA<G, X> extends EABase<G, X> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** instantiate the simple generational EA */
  public SimpleGenerationalEA() {
    super();
  }

  /**
   * Perform a simple EA as defined in Algorithm 28.1 on page 255 while
   * using a generational population handling (see
   * Section 28.1.4.1 on page 265).
   *
   * @param f
   *          the objective function (see Definition D2.3 on page 44)
   * @param create
   *          the nullary search operator (see Section 4.2 on page 83) for
   *          creating the initial genotype
   * @param mutate
   *          the unary search operator (mutation, see Section 4.2 on
   *          page 83) for modifying existing genotypes
   * @param recombine
   *          the recombination operator (see Definition D28.14 on
   *          page 307)
   * @param gpm
   *          the genotype-phenotype mapping (see Section 4.3 on page 85)
   * @param sel
   *          the selection algorithm (see Section 28.4 on page 285)
   * @param ps
   *          the population size
   * @param mps
   *          the mating pool size
   * @param mr
   *          the mutation rate
   * @param cr
   *          the crossover rate
   * @param term
   *          the termination criterion (see Section 6.3.3 on page 105)
   * @param r
   *          the random number generator
   * @return the individual containing the best candidate solution
   *         (see Definition D2.2 on page 43) found
   * @param <G>
   *          the search space (see Section 4.1 on page 81)
   * @param <X>
   *          the problem space (see Section 2.1 on page 43)
   */
  @SuppressWarnings("unchecked")
  public static final <G, X> Individual<G, X> evolutionaryAlgorithm(//
      final IObjectiveFunction<X> f,
      final INullarySearchOperation<G> create,//
      final IUnarySearchOperation<G> mutate,//
      final IBinarySearchOperation<G> recombine,
      final IGPM<G, X> gpm,//
      final ISelectionAlgorithm sel, final int ps, final int mps,
      final double mr, final double cr, final ITerminationCriterion term,
      final Random r) {

    Individual<G, X>[] pop, mate;
    int mateIndex, popIndex, i, j, k;
    Individual<G, X> p, best;

    best = new Individual<G, X>();
    best.v = Constants.WORST_FITNESS;
    pop = new Individual[ps];
    mate = new Individual[mps];

    // build the initial population of random individuals
    for (i = pop.length; (--i) >= 0;) {
      p = new Individual<G, X>();
      pop[i] = p;
      p.g = create.create(r);
    }

    // the basic loop of the Evolutionary Algorithm 28.1 on page 255
    for (;;) {
      // for each generation do...

      // for each individual in the population
      for (i = pop.length; (--i) >= 0;) {
        p = pop[i];

        // perform the genotype-phenotype mapping
        p.x = gpm.gpm(p.g, r);
        // compute the objective value
        p.v = f.compute(p.x, r);

        // is the current individual the best one so far?
        if (p.v < best.v) {
          best.assign(p);
        }

        // after each objective function evaluation, check if we should
        // stop
        if (term.terminationCriterion()) {
          // if we should stop, return the best individual found
          return best;
        }
      }

      // perform the selection step
      sel.select(pop, 0, pop.length, mate, 0, mate.length, r);

      mateIndex = 0;
      popIndex = ps;

      // create cr * ps individuals with crossover
      k = (int) (0.5 + (cr * ps));
      for (j = 0; (j < k) && (popIndex > 0); j++) {
        p = new Individual<G, X>();

        // recombine an individual with another one
        p.g = recombine.recombine(mate[mateIndex].g,
            mate[r.nextInt(mps)].g, r);
        pop[--popIndex] = p;
        mateIndex = ((mateIndex + 1) % mps);
      }

      // create mr * ps individuals with mutation
      k = (int) (0.5 + (mr * ps));
      for (j = 0; (j < k) && (popIndex > 0); j++) {
        p = new Individual<G, X>();

        // mutate an individual from the mating pool
        p.g = mutate.mutate(mate[mateIndex].g, r);
        pop[--popIndex] = p;
        mateIndex = ((mateIndex + 1) % mps);
      }

      // copy the remaining individuals
      for (; popIndex > 0;) {
        pop[--popIndex] = mate[mateIndex];
        mateIndex = ((mateIndex + 1) % mps);
      }
    }
  }

  /**
   * Invoke the simple EA.
   *
   * @param r
   *          the randomizer (will be used directly without setting the
   *          seed)
   * @param term
   *          the termination criterion (will be used directly without
   *          resetting)
   * @param result
   *          a list to which the results are to be appended
   */
  @Override
  public void call(final Random r, final ITerminationCriterion term,
      final List<Individual<G, X>> result) {

    result.add(SimpleGenerationalEA.evolutionaryAlgorithm(//
        this.getObjectiveFunction(),//
        this.getNullarySearchOperation(), //
        this.getUnarySearchOperation(),//
        this.getBinarySearchOperation(),//
        this.getGPM(), //
        this.getSelectionAlgorithm(),//
        this.getPopulationSize(),//
        this.getMatingPoolSize(),//
        this.getMutationRate(),//
        this.getCrossoverRate(),//
        term, //
        r));
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    IOptimizationModule om;
    boolean ga;

    ga = false;

    om = this.getNullarySearchOperation();
    if (om != null) {
      if (om.getClass().getCanonicalName().contains("strings.bits.")) { //$NON-NLS-1$
        ga = true;
      }
    } else {
      om = this.getUnarySearchOperation();
      if (om != null) {
        if (om.getClass().getCanonicalName().contains("strings.bits.")) { //$NON-NLS-1$
          ga = true;
        }
      } else {
        om = this.getBinarySearchOperation();
        if (om != null) {
          if (om.getClass().getCanonicalName().contains("strings.bits.")) { //$NON-NLS-1$
            ga = true;
          }
        }
      }
    }

    if (ga) {
      return (longVersion ? "sGeneticAlgorithm" : "sGA"); //$NON-NLS-1$//$NON-NLS-2$
    }

    if (longVersion) {
      return this.getClass().getSimpleName();
    }
    return "sEA"; //$NON-NLS-1$
  }

}
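The generational cycle above — evaluate, select, then refill the population with crossover and mutation offspring — can be seen end to end in the following self-contained toy run. It minimizes the number of 0-bits in a bit string (the classical OneMax problem stated as minimization), with truncation selection standing in for the pluggable ISelectionAlgorithm; all names are illustrative and nothing here belongs to the goataa API.

```java
import java.util.Arrays;
import java.util.Random;

/** A compact, library-free toy version of the generational EA loop above. */
public final class TinyGenerationalEA {

  /** the objective to be minimized: count the 0-bits */
  static int f(final boolean[] x) {
    int v = 0;
    for (final boolean b : x) {
      if (!b) {
        v++;
      }
    }
    return v;
  }

  /** @return the best objective value found within the generation budget */
  public static int run(final int n, final int ps, final int generations,
      final Random r) {
    boolean[][] pop = new boolean[ps][n];
    final int[] v = new int[ps];

    // nullary operator: random initial genotypes
    for (final boolean[] g : pop) {
      for (int i = 0; i < n; i++) {
        g[i] = r.nextBoolean();
      }
    }

    boolean[] best = null;
    int bestV = Integer.MAX_VALUE;

    for (int gen = 0; gen < generations; gen++) {
      // evaluate the population and remember the best individual
      for (int i = 0; i < ps; i++) {
        v[i] = f(pop[i]);
        if (v[i] < bestV) {
          bestV = v[i];
          best = pop[i].clone();
        }
      }
      if (bestV == 0) {
        return 0; // the optimum was reached
      }

      // truncation selection: keep the better half as the mating pool
      final Integer[] idx = new Integer[ps];
      for (int i = 0; i < ps; i++) {
        idx[i] = Integer.valueOf(i);
      }
      Arrays.sort(idx, (a, b) -> v[a.intValue()] - v[b.intValue()]);
      final boolean[][] mate = new boolean[ps / 2][];
      for (int i = 0; i < mate.length; i++) {
        mate[i] = pop[idx[i].intValue()];
      }

      // next generation: half by one-point crossover, half by mutation
      final boolean[][] next = new boolean[ps][];
      for (int i = 0; i < ps; i++) {
        final boolean[] a = mate[r.nextInt(mate.length)];
        if (i < (ps / 2)) { // binary operator: one-point crossover
          final boolean[] b = mate[r.nextInt(mate.length)];
          final int cut = r.nextInt(n);
          final boolean[] c = new boolean[n];
          System.arraycopy(a, 0, c, 0, cut);
          System.arraycopy(b, cut, c, cut, n - cut);
          next[i] = c;
        } else { // unary operator: flip one random bit
          final boolean[] c = a.clone();
          final int bit = r.nextInt(n);
          c[bit] = !c[bit];
          next[i] = c;
        }
      }
      next[0] = best.clone(); // elitism: never lose the best genotype
      pop = next;
    }
    return bestV;
  }

  public static void main(final String[] args) {
    System.out.println(TinyGenerationalEA.run(8, 16, 500, new Random(7)));
  }
}
```

Unlike the listing above, this sketch adds simple elitism (re-injecting the best genotype) so that progress is monotone; the goataa implementation instead returns the separately tracked best individual.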
Listing 56.12: An implementation of Algorithm 28.2 in a generational way.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package org.goataa.impl.algorithms.ea;
5
6 import java.util.List;
7 import java.util.Random;
8
9 import org.goataa.impl.utils.Archive;
10 import org.goataa.impl.utils.MOIndividual;
11 import org.goataa.spec.IBinarySearchOperation;
12 import org.goataa.spec.IFitnessAssignmentProcess;
13 import org.goataa.spec.IGPM;
14 import org.goataa.spec.IIndividualComparator;
15 import org.goataa.spec.INullarySearchOperation;
16 import org.goataa.spec.IObjectiveFunction;
17 import org.goataa.spec.IOptimizationModule;
18 import org.goataa.spec.ISelectionAlgorithm;
19 import org.goataa.spec.ITerminationCriterion;
20 import org.goataa.spec.IUnarySearchOperation;
21
22 /
**
23
*
A straightforward implementation of the multi-objective Evolutionary
24
*
Algorithm given in Section 28.1.1 and specified in
25
*
Algorithm 28.2 on page 256 with generational population treatment
26
*
(see Section 28.1.4.1 on page 265).
27
*
28
*
@param <G>
29
*
the search space (genome, Section 4.1 on page 81)
30
*
@param <X>
31
*
the problem space (phenome, Section 2.1 on page 43)
32
*
@author Thomas Weise
33
*
/
34 public class SimpleGenerationalMOEA<G, X> extends MOEABase<G, X> {
35
36 /
**
a constant required by Java serialization
*
/
37 private static final long serialVersionUID = 1;
38
39 /
**
instantiate the simple generational EA
*
/
40 public SimpleGenerationalMOEA() {
41 super();
42 }
43
44 /
**
45
*
Perform a simple EA as defined in Algorithm 28.1 on page 255 while
46
*
using a generational population handling (see
47
*
Section 28.1.4.1 on page 265).
48
*
49
*
@param f
50
*
the objective functions ( Definition D2.3 on page 44)
51
*
@param cmp
52
*
the comparator
53
*
@param create
56.1. THE OPTIMIZATION ALGORITHMS 751
54
*
the nullary search operator
55
*
( Section 4.2 on page 83) for creating the initial
56
*
genotype
57
*
@param mutate
58
*
the unary search operator (mutation,
59
*
Section 4.2 on page 83) for modifying existing
60
*
genotypes
61
*
@param recombine
62
*
the recombination operator ( Definition D28.14 on page 307)
63
*
@param gpm
64
*
the genotype-phenotype mapping
65
*
( Section 4.3 on page 85)
66
*
@param sel
67
*
the selection algorithm ( Section 28.4 on page 285
68
*
@param fa
69
*
the fitness assignment process
70
*
@param term
71
*
the termination criterion
72
*
( Section 6.3.3 on page 105)
73
*
@param ps
74
*
the population size
75
*
@param mps
76
*
the mating pool size
77
*
@param cr
78
*
the crossover rate
79
*
@param mr
80
*
the mutation rate
81
*
@param maxSolutions
82
*
the maximum number of solutions
83
*
@param r
84
*
the random number generator
85
*
@param result
86
*
the list to add the best candidate solutions
87
*
( Definition D2.2 on page 43) to
88
*
@param <G>
89
*
the search space ( Section 4.1 on page 81)
90
*
@param <X>
91
*
the problem space ( Section 2.1 on page 43)
92
*
/
@SuppressWarnings("unchecked")
public static final <G, X> void evolutionaryAlgorithmMO(
    //
    final IObjectiveFunction<X>[] f,//
    final IIndividualComparator cmp,//
    final INullarySearchOperation<G> create,//
    final IUnarySearchOperation<G> mutate,//
    final IBinarySearchOperation<G> recombine, final IGPM<G, X> gpm,//
    final ISelectionAlgorithm sel,//
    final IFitnessAssignmentProcess fa,//
    final int ps, final int mps,//
    final double mr,//
    final double cr,//
    final ITerminationCriterion term, final int maxSolutions,//
    final Random r,//
    final List<MOIndividual<G, X>> result) {

  MOIndividual<G, X>[] pop, mate;
  int mateIndex, popIndex, i, j, k;
  MOIndividual<G, X> p;
  Archive<G, X> best;

  best = new Archive<G, X>(maxSolutions, cmp);
  pop = new MOIndividual[ps];
  mate = new MOIndividual[mps];

  // build the initial population of random individuals
  for (i = pop.length; (--i) >= 0;) {
    p = new MOIndividual<G, X>(f.length);
    pop[i] = p;
    p.g = create.create(r);
  }

  // the basic loop of the Evolutionary Algorithm 28.1 on page 255
  for (;;) {
    // for each generation do...

    // for each individual in the population
    for (i = pop.length; (--i) >= 0;) {
      p = pop[i];

      // perform the genotype-phenotype mapping
      p.x = gpm.gpm(p.g, r);

      // compute the objective values
      p.evaluate(f, r);

      // update the archive
      best.add(p);

      // after each objective function evaluation, check if we should
      // stop
      if (term.terminationCriterion()) {
        // if we should stop, return the best individuals found
        result.addAll(best);
        return;
      }
    }

    // assign the fitness
    fa.assignFitness(pop, 0, pop.length, cmp, r);

    // perform the selection step
    sel.select(pop, 0, pop.length, mate, 0, mate.length, r);

    mateIndex = 0;
    popIndex = ps;

    // create cr * ps individuals with crossover
    k = (int) (0.5 + (cr * ps));
    for (j = 0; (j < k) && (popIndex > 0); j++) {
      p = new MOIndividual<G, X>(f.length);

      // recombine an individual with another
      p.g = recombine.recombine(mate[mateIndex].g,
          mate[r.nextInt(mps)].g, r);
      pop[--popIndex] = p;
      mateIndex = ((mateIndex + 1) % mps);
    }

    // create mr * ps individuals with mutation
    k = (int) (0.5 + (mr * ps));
    for (j = 0; (j < k) && (popIndex > 0); j++) {
      p = new MOIndividual<G, X>(f.length);

      // mutate an individual
      p.g = mutate.mutate(mate[mateIndex].g, r);
      pop[--popIndex] = p;
      mateIndex = ((mateIndex + 1) % mps);
    }

    // copy the remaining individuals
    for (; popIndex > 0;) {
      pop[--popIndex] = mate[mateIndex];
      mateIndex = ((mateIndex + 1) % mps);
    }
  }
}

/**
 * Invoke the multi-objective EA.
 *
 * @param r
 *          the randomizer (will be used directly without setting the
 *          seed)
 * @param term
 *          the termination criterion (will be used directly without
 *          resetting)
 * @param result
 *          a list to which the results are to be appended
 */
@Override
public void call(final Random r, final ITerminationCriterion term,
    final List<MOIndividual<G, X>> result) {

  evolutionaryAlgorithmMO(//
      this.getObjectiveFunctions(),//
      this.getComparator(),//
      this.getNullarySearchOperation(), //
      this.getUnarySearchOperation(),//
      this.getBinarySearchOperation(),//
      this.getGPM(), //
      this.getSelectionAlgorithm(),//
      this.getFitnesAssignmentProcess(),//
      this.getPopulationSize(),//
      this.getMatingPoolSize(),//
      this.getMutationRate(),//
      this.getCrossoverRate(),//
      term, //
      this.getMaxSolutions(),//
      r,//
      result);
}

/**
 * Get the name of the optimization module
 *
 * @param longVersion
 *          true if the long name should be returned, false if the short
 *          name should be returned
 * @return the name of the optimization module
 */
@Override
public String getName(final boolean longVersion) {
  IOptimizationModule om;
  boolean ga;

  ga = false;

  om = this.getNullarySearchOperation();
  if (om != null) {
    if (om.getClass().getCanonicalName().contains("strings.bits.")) { //$NON-NLS-1$
      ga = true;
    }
  } else {
    om = this.getUnarySearchOperation();
    if (om != null) {
      if (om.getClass().getCanonicalName().contains("strings.bits.")) { //$NON-NLS-1$
        ga = true;
      }
    } else {
      om = this.getBinarySearchOperation();
      if (om != null) {
        if (om.getClass().getCanonicalName().contains("strings.bits.")) { //$NON-NLS-1$
          ga = true;
        }
      }
    }
  }

  if (ga) {
    return (longVersion ? "sMOGeneticAlgorithm" : "sMOGA"); //$NON-NLS-1$//$NON-NLS-2$
  }

  if (longVersion) {
    return this.getClass().getSimpleName();
  }
  return "sMOEA"; //$NON-NLS-1$
}

}
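The offspring counts in the listing above follow the rounding rule k = (int)(0.5 + rate * ps): a fraction cr of the population is created by crossover, a fraction mr by mutation, and any remaining slots are filled with plain copies from the mating pool. The following minimal, self-contained sketch (class and method names are ours, not part of the goataa package) illustrates this partitioning:

```java
import java.util.Arrays;

/**
 * A hypothetical sketch of how a population of size ps is partitioned
 * into offspring created by crossover (rate cr), by mutation (rate mr),
 * and plain copies, using the rounding rule k = (int)(0.5 + rate * ps).
 */
public final class RatePartitionDemo {

  /** Returns {byCrossover, byMutation, copied}; the counts sum to ps. */
  public static int[] partition(final int ps, final double cr,
      final double mr) {
    // crossover gets the first slots, capped at the population size
    int byCrossover = Math.min((int) (0.5 + (cr * ps)), ps);
    // mutation fills next, capped at the slots that remain
    int byMutation = Math.min((int) (0.5 + (mr * ps)), ps - byCrossover);
    // whatever is left over is copied directly from the mating pool
    int copied = ps - byCrossover - byMutation;
    return new int[] { byCrossover, byMutation, copied };
  }

  public static void main(String[] args) {
    // ps = 10, cr = 0.6, mr = 0.3 -> 6 by crossover, 3 by mutation, 1 copy
    System.out.println(Arrays.toString(partition(10, 0.6, 0.3)));
  }
}
```

Note that, as in the listing, the rates need not sum to one: if cr + mr < 1, the surplus individuals survive unchanged; if cr + mr > 1, mutation is truncated to the remaining slots.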
56.1.4.1.1 Selection Algorithms
In this package, we provide the implementations of the selection algorithms discussed in
Section 28.4.
Listing 56.13: The truncation selection method given in Algorithm 28.8 on page 289.
/*
 * Copyright (c) 2010 Thomas Weise
 * http://www.it-weise.de/
 * tweise@gmx.de
 *
 * GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
 */

package org.goataa.impl.algorithms.ea.selection;

import java.util.Arrays;
import java.util.Random;

import org.goataa.impl.OptimizationModule;
import org.goataa.impl.utils.Constants;
import org.goataa.impl.utils.Individual;
import org.goataa.spec.ISelectionAlgorithm;

/**
 * An implementation of the truncation selection algorithm which unites
 * both Algorithm 28.7 on page 289 and Algorithm 28.8 on page 289 from
 * Section 28.4.2 in one class.
 *
 * @author Thomas Weise
 */
public class TruncationSelection extends OptimizationModule implements
    ISelectionAlgorithm {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the number of best individuals to select */
  private final int k;

  /** Create a new instance of the truncation selection algorithm */
  public TruncationSelection() {
    this(-1);
  }

  /**
   * Create a new instance of the truncation selection algorithm
   *
   * @param ak
   *          the number of survivors
   */
  public TruncationSelection(final int ak) {
    super();
    this.k = ((ak > 0) ? ak : Integer.MAX_VALUE);
  }

  /**
   * Perform the truncation selection as defined in Section 28.4.2.
   *
   * @param pop
   *          the population (see Definition D4.19 on page 90)
   * @param popStart
   *          the index of the first individual in the population
   * @param popCount
   *          the number of individuals to select from
   * @param mate
   *          the mating pool (see Definition D28.10 on page 285)
   * @param mateStart
   *          the first index to which the selected individuals should be
   *          copied
   * @param mateCount
   *          the number of individuals to be selected
   * @param r
   *          the random number generator
   */
  @Override
  public void select(final Individual<?, ?>[] pop, final int popStart,
      final int popCount, final Individual<?, ?>[] mate,
      final int mateStart, final int mateCount, final Random r) {
    int i, x, e;

    Arrays.sort(pop, popStart, popStart + popCount,
        Constants.FITNESS_SORT_ASC_COMPARATOR);

    i = mateStart;
    e = (i + mateCount);
    while (i < e) {
      // copy at most k of the best individuals into the remaining (e - i)
      // slots of the mating pool
      x = Math.min(this.k, Math.min(e - i, popCount));
      System.arraycopy(pop, popStart, mate, i, x);
      i += x;
    }
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return this.getClass().getSimpleName();
    }
    return "Trunc"; //$NON-NLS-1$
  }

  /**
   * Get the full configuration which holds all the data necessary to
   * describe this object.
   *
   * @param longVersion
   *          true if the long version should be returned, false if the
   *          short version should be returned
   * @return the full configuration
   */
  @Override
  public String getConfiguration(final boolean longVersion) {
    if ((this.k > 0) && (this.k < Integer.MAX_VALUE)) {
      return "k=" + String.valueOf(this.k); //$NON-NLS-1$
    }
    return super.getConfiguration(longVersion);
  }
}
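Stripped of the framework types, truncation selection is simple: sort the population by ascending fitness (smaller is better throughout these listings) and fill the mating pool with the k best individuals, cycling through them if the pool is larger than k. The following minimal, self-contained sketch (names are ours, not from the goataa package) works on a plain fitness array and returns selected indices instead of copying individual objects:

```java
import java.util.Arrays;

/**
 * A minimal sketch of truncation selection, assuming smaller fitness
 * values are better (as in the listings of this chapter).
 */
public final class TruncationDemo {

  /** Returns the indices (into fitness[]) selected into the mating pool. */
  public static int[] truncationSelect(final double[] fitness, final int k,
      final int mateCount) {
    // sort the population indices by ascending fitness
    Integer[] order = new Integer[fitness.length];
    for (int i = 0; i < order.length; i++) {
      order[i] = i;
    }
    Arrays.sort(order, (a, b) -> Double.compare(fitness[a], fitness[b]));

    int[] mate = new int[mateCount];
    int best = Math.min(k, fitness.length);
    for (int i = 0; i < mateCount; i++) {
      mate[i] = order[i % best]; // cycle through the k best individuals
    }
    return mate;
  }

  public static void main(String[] args) {
    double[] f = { 5.0, 1.0, 3.0, 2.0 }; // individual 1 is best
    // the two best individuals (1 and 3) fill the pool cyclically
    System.out.println(Arrays.toString(truncationSelect(f, 2, 5)));
    // prints [1, 3, 1, 3, 1]
  }
}
```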
Listing 56.14: The tournament selection method given in Algorithm 28.11 on page 298.
/*
 * Copyright (c) 2010 Thomas Weise
 * http://www.it-weise.de/
 * tweise@gmx.de
 *
 * GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
 */

package org.goataa.impl.algorithms.ea.selection;

import java.util.Random;

import org.goataa.impl.OptimizationModule;
import org.goataa.impl.utils.Individual;
import org.goataa.spec.ISelectionAlgorithm;

/**
 * The tournament selection algorithm with replacement as specified in
 * Algorithm 28.11 on page 298 in Section 28.4.4.
 *
 * @author Thomas Weise
 */
public class TournamentSelection extends OptimizationModule implements
    ISelectionAlgorithm {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the tournament size */
  private final int k;

  /**
   * Create a new instance of the tournament selection algorithm with
   * replacement
   *
   * @param ik
   *          the tournament size
   */
  public TournamentSelection(final int ik) {
    super();
    this.k = ik;
  }

  /**
   * Perform the tournament selection as defined in
   * Algorithm 28.11 on page 298.
   *
   * @param pop
   *          the population (see Definition D4.19 on page 90)
   * @param popStart
   *          the index of the first individual in the population
   * @param popCount
   *          the number of individuals to select from
   * @param mate
   *          the mating pool (see Definition D28.10 on page 285)
   * @param mateStart
   *          the first index to which the selected individuals should be
   *          copied
   * @param mateCount
   *          the number of individuals to be selected
   * @param r
   *          the random number generator
   */
  @Override
  public void select(final Individual<?, ?>[] pop, final int popStart,
      final int popCount, final Individual<?, ?>[] mate,
      final int mateStart, final int mateCount, final Random r) {
    Individual<?, ?> x, y;
    int i, j;

    for (i = (mateStart + mateCount); (--i) >= mateStart;) {
      x = pop[popStart + r.nextInt(popCount)];
      for (j = 2; j <= this.k; j++) {
        y = pop[popStart + r.nextInt(popCount)];
        if (y.v < x.v) {
          x = y;
        }
      }
      mate[i] = x;
    }
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return this.getClass().getSimpleName();
    }
    return "Tour"; //$NON-NLS-1$
  }

  /**
   * Get the full configuration which holds all the data necessary to
   * describe this object.
   *
   * @param longVersion
   *          true if the long version should be returned, false if the
   *          short version should be returned
   * @return the full configuration
   */
  @Override
  public String getConfiguration(final boolean longVersion) {
    return "k=" + String.valueOf(this.k); //$NON-NLS-1$
  }
}
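The core of the listing above is the inner tournament: draw k contestants uniformly at random (with replacement) and keep the one with the best, i.e., smallest, fitness. A minimal, self-contained sketch of exactly this step (names are ours, not from the goataa package) makes the selection pressure easy to observe, since fitter individuals win tournaments more often:

```java
import java.util.Arrays;
import java.util.Random;

/**
 * A minimal sketch of tournament selection with replacement, assuming
 * smaller fitness values are better (as in the listings of this chapter).
 */
public final class TournamentDemo {

  /**
   * Hold one tournament of size k over fitness[] and return the index of
   * the winner, i.e., the contestant with the smallest fitness value.
   */
  public static int tournamentSelect(final double[] fitness, final int k,
      final Random r) {
    int winner = r.nextInt(fitness.length); // the first contestant
    for (int j = 2; j <= k; j++) {          // k - 1 further contestants
      int challenger = r.nextInt(fitness.length);
      if (fitness[challenger] < fitness[winner]) {
        winner = challenger;                // better fitness wins
      }
    }
    return winner;
  }

  public static void main(String[] args) {
    double[] f = { 4.0, 2.0, 9.0, 1.0 }; // individual 3 is best
    Random r = new Random(123);
    int[] counts = new int[f.length];
    for (int i = 0; i < 10_000; i++) {
      counts[tournamentSelect(f, 3, r)]++;
    }
    // the best individual (index 3) is selected most often
    System.out.println(Arrays.toString(counts));
  }
}
```

With tournament size k = 3 over four individuals, the probability that the best individual wins a tournament is 1 - (3/4)^3 ≈ 0.58, so larger tournaments mean stronger selection pressure.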
Listing 56.15: The random selection method given in Algorithm 28.15 on page 302.
/*
 * Copyright (c) 2010 Thomas Weise
 * http://www.it-weise.de/
 * tweise@gmx.de
 *
 * GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
 */

package org.goataa.impl.algorithms.ea.selection;

import java.util.Random;

import org.goataa.impl.utils.Individual;
import org.goataa.spec.ISelectionAlgorithm;

/**
 * A selection operator which randomly selects individuals as specified in
 * Algorithm 28.15 on page 302. This operator should mainly be used for
 * parental selection but not for survival selection.
 *
 * @author Thomas Weise
 */
public class RandomSelection extends SelectionAlgorithm {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the globally shared instance of random selection */
  public static final ISelectionAlgorithm RANDOM_SELECTION = new RandomSelection();

  /** Create a new instance of the random selection algorithm */
  protected RandomSelection() {
    super();
  }

  /**
   * Perform the random selection as defined in
   * Algorithm 28.15 on page 302.
   *
   * @param pop
   *          the population (see Definition D4.19 on page 90)
   * @param popStart
   *          the index of the first individual in the population
   * @param popCount
   *          the number of individuals to select from
   * @param mate
   *          the mating pool (see Definition D28.10 on page 285)
   * @param mateStart
   *          the first index to which the selected individuals should be
   *          copied
   * @param mateCount
   *          the number of individuals to be selected
   * @param r
   *          the random number generator
   */
  @Override
  public final void select(final Individual<?, ?>[] pop,
      final int popStart, final int popCount,
      final Individual<?, ?>[] mate, final int mateStart,
      final int mateCount, final Random r) {
    int i;

    for (i = (mateStart + mateCount); (--i) >= mateStart;) {
      mate[i] = pop[popStart + r.nextInt(popCount)];
    }
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return this.getClass().getSimpleName();
    }
    return "RND"; //$NON-NLS-1$
  }
}
56.1.4.1.2 Fitness Assignment Processes
In this package, we provide different fitness assignment processes as discussed in Section 28.3 on page 274.
Listing 56.16: Pareto ranking tness assignment.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.algorithms.ea.fitnessAssignment;

import java.util.Random;

import org.goataa.impl.utils.MOIndividual;
import org.goataa.spec.IFitnessAssignmentProcess;
import org.goataa.spec.IIndividualComparator;

/**
 * The Pareto ranking algorithm as discussed in
 * Section 28.3.3 on page 275.
 *
 * @author Thomas Weise
 */
public class ParetoRanking extends FitnessAssignmentProcess {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the globally shared Pareto ranking procedure */
  public static final IFitnessAssignmentProcess PARETO_RANKING = new ParetoRanking();

  /** instantiate the Pareto ranking algorithm */
  protected ParetoRanking() {
    super();
  }

  /**
   * Rank individuals according to the Pareto dominance relationship, see
   * Section 28.3.3 on page 275.
   *
   * @param pop
   *          the population of individuals
   * @param start
   *          the index of the first individual in the population
   * @param count
   *          the number of individuals in the population
   * @param cmp
   *          the individual comparator which tells which individual is
   *          better than another one
   * @param r
   *          the randomizer
   */
  @Override
  public void assignFitness(final MOIndividual<?, ?>[] pop,
      final int start, final int count, final IIndividualComparator cmp,
      final Random r) {
    int i, j, res;
    MOIndividual<?, ?> a, b;

    for (i = (start + count); (--i) >= start;) {
      pop[i].v = 1d;
    }

    // compare each individual
    for (i = (start + count); (--i) > start;) {
      a = pop[i];
      // with each other individual
      for (j = i; (--j) >= start;) {
        b = pop[j];
        res = cmp.compare(a, b);
        if (res < 0) {
          // a dominates b -> make fitness of b bigger = worse
          b.v++;
        } else {
          if (res > 0) {
            // b dominates a -> make fitness of a bigger = worse
            a.v++;
          }
        }
      }
    }
  }

}
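The listing above delegates the dominance test to an IIndividualComparator, but the underlying rule is fixed: each individual starts with fitness 1, and every individual that dominates it adds 1, so non-dominated points receive fitness 1 and heavily dominated points receive large (worse) values. The following minimal, self-contained sketch (names are ours, not from the goataa package) spells out both the dominance test and the ranking for objective vectors that are all to be minimized:

```java
import java.util.Arrays;

/**
 * A minimal sketch of Pareto-ranking fitness assignment, assuming all
 * objectives are to be minimized. Fitness = 1 + number of dominators,
 * so non-dominated individuals get fitness 1 (smaller is better).
 */
public final class ParetoRankingDemo {

  /** a dominates b iff a is no worse in every objective, better in one */
  static boolean dominates(final double[] a, final double[] b) {
    boolean strictlyBetter = false;
    for (int i = 0; i < a.length; i++) {
      if (a[i] > b[i]) {
        return false; // a is worse in objective i: no domination
      }
      if (a[i] < b[i]) {
        strictlyBetter = true;
      }
    }
    return strictlyBetter;
  }

  /** Assign each individual the fitness 1 + (number of its dominators). */
  public static double[] assignFitness(final double[][] objectives) {
    double[] fitness = new double[objectives.length];
    Arrays.fill(fitness, 1.0);
    for (int i = 0; i < objectives.length; i++) {
      for (int j = i + 1; j < objectives.length; j++) {
        if (dominates(objectives[i], objectives[j])) {
          fitness[j]++; // j is dominated by i -> worse fitness
        } else if (dominates(objectives[j], objectives[i])) {
          fitness[i]++; // i is dominated by j -> worse fitness
        }
      }
    }
    return fitness;
  }

  public static void main(String[] args) {
    // two non-dominated points and one point dominated by both
    double[][] pop = { { 1, 4 }, { 4, 1 }, { 5, 5 } };
    System.out.println(Arrays.toString(assignFitness(pop)));
    // prints [1.0, 1.0, 3.0]
  }
}
```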
56.1.4.2 Evolution Strategies
In this package, we provide the implementations of the Evolution Strategies discussed in Chapter 30.
Listing 56.17: An example implementation of the Evolution Strategy Algorithm 30.1 on page 361.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.algorithms.es;

import java.util.List;
import java.util.Random;

import org.goataa.impl.algorithms.LocalSearchAlgorithm;
import org.goataa.impl.algorithms.ea.selection.RandomSelection;
import org.goataa.impl.algorithms.ea.selection.TruncationSelection;
import org.goataa.impl.searchOperations.strings.real.nary.DoubleArrayDominantRecombination;
import org.goataa.impl.searchOperations.strings.real.nary.DoubleArrayIntermediateRecombination;
import org.goataa.impl.searchOperations.strings.real.nullary.DoubleArrayUniformCreation;
import org.goataa.impl.searchOperations.strings.real.unary.DoubleArrayStdDevNormalMutation;
import org.goataa.impl.searchOperations.strings.real.unary.DoubleArrayStrategyLogNormalMutation;
import org.goataa.impl.utils.Constants;
import org.goataa.impl.utils.Individual;
import org.goataa.impl.utils.TextUtils;
import org.goataa.spec.IGPM;
import org.goataa.spec.INarySearchOperation;
import org.goataa.spec.INullarySearchOperation;
import org.goataa.spec.IObjectiveFunction;
import org.goataa.spec.IOptimizationModule;
import org.goataa.spec.ISelectionAlgorithm;
import org.goataa.spec.ITerminationCriterion;
import org.goataa.spec.IUnarySearchOperation;

/**
 * An Evolution Strategy implementation which follows the definitions in
 * Chapter 30 and which comes close to Algorithm 30.1. This implementation
 * uses vectors of standard deviations as endogenous strategy parameters,
 * as discussed in Section 30.4.1.2 on page 365.
 *
 * @param <X>
 *          the problem space (phenome, Section 2.1 on page 43)
 * @author Thomas Weise
 */
public class EvolutionStrategy<X> extends
    LocalSearchAlgorithm<double[], X, Individual<double[], X>> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the survival selection algorithm */
  private ISelectionAlgorithm survivalSelection;

  /** the parental selection algorithm */
  private ISelectionAlgorithm parentalSelection;

  /**
   * mu - the number of parents in the mating pool, see
   * Section 28.1.4.3 on page 266
   */
  private int mu;

  /**
   * lambda - the number of offspring, see
   * Section 28.1.4.3 on page 266
   */
  private int lambda;

  /**
   * rho - the number of parents per offspring, see
   * Section 28.1.4.3 on page 266
   */
  private int rho;

  /** true for (lambda+mu) strategies, false for (lambda,mu) strategies */
  private boolean plus;

  /** the dimension */
  private int dimension;

  /** the minimum values per dimension */
  private double minX;

  /** the maximum values per dimension */
  private double maxX;

  /** do we need an internal setup? */
  private boolean internalSetupNeeded;

  /** the unary reproduction operation was set by the user */
  private boolean unaryOperationSet;

  /** the nullary reproduction operation was set by the user */
  private boolean nullaryOperationSet;

  /** the internal search operation for creating the strategy parameters */
  private INullarySearchOperation<double[]> createStrategyParameters;

  /** the strategy parameter mutator */
  private IUnarySearchOperation<double[]> mutateStrategyParameters;

  /** the genotype crossover method */
  private INarySearchOperation<double[]> crossoverGenotype;

  /** the strategy crossover method */
  private INarySearchOperation<double[]> crossoverStrategy;

  /** instantiate the evolution strategy */
  public EvolutionStrategy() {
    super();

    this.survivalSelection = new TruncationSelection();
    this.parentalSelection = RandomSelection.RANDOM_SELECTION;
    this.crossoverGenotype = DoubleArrayDominantRecombination.DOUBLE_ARRAY_DOMINANT_RECOMBINATION;
    this.crossoverStrategy = DoubleArrayIntermediateRecombination.DOUBLE_ARRAY_INTERMEDIATE_RECOMBINATION;
    this.lambda = 100;
    this.mu = 10;
    this.rho = 2;
    this.plus = true;
    this.dimension = 10;
    this.minX = -1d;
    this.maxX = 1d;
    this.internalSetupNeeded = true;
  }

  /**
   * Apply an Evolution Strategy implementation which follows the
   * definitions in Chapter 30 and which comes close to Algorithm 30.1.
   * This implementation uses vectors of standard deviations as endogenous
   * strategy parameters, as discussed in Section 30.4.1.2 on page 365.
   *
   * @param f
   *          the objective function (Definition D2.3 on page 44)
   * @param createGenotype
   *          the nullary search operator (Section 4.2 on page 83) for
   *          creating the initial genotypes
   * @param createStrategy
   *          the nullary search operator (Section 4.2 on page 83) for
   *          creating the initial strategy parameters
   * @param mutateGenotype
   *          the unary search operator (mutation, Section 4.2 on page 83)
   *          for modifying existing genotypes
   * @param mutateStrategy
   *          the unary search operator (mutation, Section 4.2 on page 83)
   *          for modifying existing strategy parameters
   * @param recombineGenotype
   *          the n-ary recombination operator for genotypes
   * @param recombineStrategy
   *          the n-ary recombination operator for strategy parameters
   * @param gpm
   *          the genotype-phenotype mapping (Section 4.3 on page 85)
   * @param survivalSel
   *          the survival selection algorithm (Section 28.4 on page 285)
   * @param parentalSel
   *          the parental/sexual selection algorithm
   *          (Section 28.4 on page 285)
   * @param m
   *          the number of parents in the mating pool
   * @param la
   *          the number of offspring
   * @param rh
   *          the number of parents per offspring
   * @param pl
   *          true for a (lambda+mu), false for a (lambda,mu) strategy
   * @param term
   *          the termination criterion (Section 6.3.3 on page 105)
   * @param r
   *          the random number generator
   * @return the individual containing the best candidate solution
   *         (Definition D2.2 on page 43) found
   * @param <X>
   *          the problem space (Section 2.1 on page 43)
   */
  @SuppressWarnings("unchecked")
  public static final <X> Individual<double[], X> evolutionaryStrategy(
      //
      final IObjectiveFunction<X> f,
      final INullarySearchOperation<double[]> createGenotype,//
      final INullarySearchOperation<double[]> createStrategy,//
      final IUnarySearchOperation<double[]> mutateGenotype,//
      final IUnarySearchOperation<double[]> mutateStrategy,//
      final INarySearchOperation<double[]> recombineGenotype,//
      final INarySearchOperation<double[]> recombineStrategy,//
      final IGPM<double[], X> gpm,//
      final ISelectionAlgorithm survivalSel,//
      final ISelectionAlgorithm parentalSel,//
      final int m,//
      final int la,//
      final int rh,//
      final boolean pl,//
      final ITerminationCriterion term,//
      final Random r) {

    ESIndividual<X>[] selected, pop, parents;
    double[][] parents2, parents3;
    int i, j;
    ESIndividual<X> p;
    Individual<double[], X> best;
    DoubleArrayStdDevNormalMutation mut;

    best = new ESIndividual<X>();
    best.v = Constants.WORST_FITNESS;
    selected = new ESIndividual[m];
    pop = selected;
    parents = new ESIndividual[rh];
    parents2 = new double[rh][];
    parents3 = new double[rh][];

    if (mutateGenotype instanceof DoubleArrayStdDevNormalMutation) {
      mut = ((DoubleArrayStdDevNormalMutation) mutateGenotype);
    } else {
      mut = null;
    }

    // build the initial population of random individuals
    for (i = selected.length; (--i) >= 0;) {
      p = new ESIndividual<X>();
      selected[i] = p;
      p.g = createGenotype.create(r);
      p.w = createStrategy.create(r);
    }

    // the basic loop of the Evolution Strategy Algorithm 30.1 on page 361
    for (;;) {
      // for each generation do...

      // for each individual in the population
      for (i = pop.length; (--i) >= 0;) {
        p = pop[i];

        // perform the genotype-phenotype mapping
        p.x = gpm.gpm(p.g, r);
        // compute the objective value
        p.v = f.compute(p.x, r);

        // is the current individual the best one so far?
        if (p.v < best.v) {
          best.assign(p);
        }

        // after each objective function evaluation, check if we should
        // stop
        if (term.terminationCriterion()) {
          // if we should stop, return the best individual found
          return best;
        }
      }

      // perform the selection step, usually truncation selection (see
      // Algorithm 28.8 on page 289)
      if (pop != selected) {
        // for (lambda+mu) strategies use both, parents and offspring
        if (pl) {
          System.arraycopy(selected, 0, pop, la, m);
        }
        survivalSel.select(pop, 0, pop.length, selected, 0,
            selected.length, r);
      }

      if (pop == selected) {
        pop = new ESIndividual[pl ? (la + m) : la];
      }

      // fill the new population with new offspring
      for (i = pop.length; (--i) >= 0;) {
        // select the parents for the new individual, which usually is done
        // randomly with Algorithm 28.15 on page 302
        parentalSel.select(selected, 0, selected.length, parents, 0,
            parents.length, r);

        for (j = parents.length; (--j) >= 0;) {
          p = parents[j];
          parents2[j] = p.g;
          parents3[j] = p.w;
        }

        p = new ESIndividual<X>();
        pop[i] = p;

        // usually done via dominant recombination, see
        // Algorithm 30.2 on page 363
        p.g = recombineGenotype.combine(parents2, r);

        // We only use the strategy parameters if the mutator actually uses
        // them: mut is null if it is not the ES-typical mutator.
        if (mut != null) {
          // usually done via intermediate recombination, see
          // Algorithm 30.3 on page 363
          p.w = recombineStrategy.combine(parents3, r);

          // usually done via log-normal mutation, see
          // Algorithm 30.8 on page 371
          p.w = mutateStrategy.mutate(p.w, r);
          mut.setStdDevs(p.w);
        }

        // usually done via normally distributed mutation as specified in
        // Algorithm 30.5 on page 366.
        p.g = mutateGenotype.mutate(p.g, r);
      }
    }
  }

  /**
   * Invoke the evolution strategy
   *
   * @param r
   *          the randomizer (will be used directly without setting the
   *          seed)
   * @param term
   *          the termination criterion (will be used directly without
   *          resetting)
   * @param result
   *          a list to which the results are to be appended
   */
  @Override
  public void call(final Random r, final ITerminationCriterion term,
      final List<Individual<double[], X>> result) {

    this.internalSetup();

    result.add(EvolutionStrategy.evolutionaryStrategy(//
        this.getObjectiveFunction(),//
        this.getNullarySearchOperation(),//
        this.createStrategyParameters,//
        this.getUnarySearchOperation(),//
        this.mutateStrategyParameters, //
        this.crossoverGenotype, this.crossoverStrategy,//
        this.getGPM(),//
        this.getSelectionAlgorithm(), //
        this.parentalSelection,//
        this.getMu(),//
        this.getLambda(),//
        this.getRho(), //
        this.isPlus(), //
        term,//
        r));
  }

  /**
   * Set the dimension of the search space
   *
   * @param dim
   *          the dimension of the search space
   */
  public final void setDimension(final int dim) {
    this.dimension = dim;
    this.internalSetupNeeded = true;
  }

  /**
   * Get the dimension of the search space
   *
   * @return the dimension of the search space
   */
  public final int getDimension() {
    return this.dimension;
  }

  /**
   * Set the minimum value of the search space
   *
   * @param x
   *          the minimum value of the search space
   */
  public final void setMinimum(final double x) {
    this.minX = x;
    this.internalSetupNeeded = true;
  }

  /**
   * Get the maximum value of the search space
   *
   * @return the maximum value of the search space
   */
  public final double getMaximum() {
    return this.maxX;
  }

  /**
   * Set the maximum value of the search space
   *
   * @param x
   *          the maximum value of the search space
   */
  public final void setMaximum(final double x) {
    this.maxX = x;
    this.internalSetupNeeded = true;
  }

  /**
   * Get the minimum value of the search space
   *
   * @return the minimum value of the search space
   */
  public final double getMinimum() {
    return this.minX;
  }

  /** perform some internal setup */
  private final void internalSetup() {
    final double maxS;

    if (this.internalSetupNeeded) {
      this.internalSetupNeeded = false;

      if (!(this.nullaryOperationSet)) {
        super.setNullarySearchOperation(new DoubleArrayUniformCreation(
            this.dimension, this.minX, this.maxX));
      }

      maxS = Math.sqrt(this.maxX - this.minX);

      this.createStrategyParameters = new DoubleArrayUniformCreation(
          this.dimension, Double.MIN_VALUE, maxS);
      this.mutateStrategyParameters = new DoubleArrayStrategyLogNormalMutation(
          maxS);

      if (!(this.unaryOperationSet)) {
        super.setUnarySearchOperation(new DoubleArrayStdDevNormalMutation(
            this.dimension, this.minX, this.maxX));
      }
    }
  }

  /**
   * Set mu, the number of parents in the mating pool (see
   * Section 28.1.4.3 on page 266)
   *
   * @param m
   *          the parameter mu
   */
  public final void setMu(final int m) {
    this.mu = Math.max(1, m);
  }

  /**
   * Get mu, the number of parents in the mating pool (see
   * Section 28.1.4.3 on page 266)
   *
   * @return the parameter mu
   */
  public final int getMu() {
    return this.mu;
  }

  /**
   * Set lambda, the number of offspring (see
   * Section 28.1.4.3 on page 266)
   *
   * @param l
   *          the parameter lambda
   */
  public final void setLambda(final int l) {
    this.lambda = Math.max(1, l);
  }

  /**
   * Get lambda, the number of offspring (see
   * Section 28.1.4.3 on page 266)
   *
   * @return the parameter lambda
   */
  public final int getLambda() {
    return this.lambda;
  }

  /**
   * Set rho, the number of parents per offspring (see
   * Section 28.1.4.3 on page 266)
   *
   * @param r
   *          the parameter rho
   */
  public final void setRho(final int r) {
    this.rho = Math.max(1, r);
  }

  /**
   * Get rho, the number of parents per offspring (see
   * Section 28.1.4.3 on page 266)
   *
   * @return the parameter rho
   */
  public final int getRho() {
    return this.rho;
  }

  /**
   * Set whether we have a (lambda+mu) strategy or a (lambda,mu) strategy
   *
   * @param p
   *          true for (lambda+mu) strategies, false for (lambda,mu)
   *          strategies
   */
  public final void setPlus(final boolean p) {
    this.plus = p;
  }

  /**
   * Get whether we have a (lambda+mu) strategy or a (lambda,mu) strategy
   *
   * @return true for (lambda+mu) strategies, false for (lambda,mu)
   *         strategies
   */
  public final boolean isPlus() {
    return this.plus;
  }

  /**
   * Set the selection algorithm (see Section 28.4 on page 285) to be
   * used by this evolutionary algorithm.
   *
   * @param sel
   *          the selection algorithm
   */
  public final void setSelectionAlgorithm(final ISelectionAlgorithm sel) {
    this.survivalSelection = sel;
  }

  /**
   * Get the selection algorithm (see Section 28.4 on page 285) which is
   * used by this evolutionary algorithm.
   *
   * @return the selection algorithm
   */
  public final ISelectionAlgorithm getSelectionAlgorithm() {
    return this.survivalSelection;
  }

  /**
   * Set the unary search operation to be used by this optimization
   * algorithm (see Section 4.2 on page 83 and Listing 55.3 on page 708).
   *
   * @param op
   *          the unary search operation to use
   */
  @Override
  public void setUnarySearchOperation(
      final IUnarySearchOperation<double[]> op) {
    this.unaryOperationSet = (op != null);
    super.setUnarySearchOperation(op);
  }

  /**
   * Set the nullary search operation to be used by this optimization
   * algorithm (see Section 4.2 on page 83 and Listing 55.2 on page 708).
   *
   * @param op
   *          the nullary search operation to use
   */
  @Override
  public void setNullarySearchOperation(
      final INullarySearchOperation<double[]> op) {
    this.nullaryOperationSet = (op != null);
    super.setNullarySearchOperation(op);
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return super.getName(longVersion);
    }
    return "ES"; //$NON-NLS-1$
  }

  /**
   * Get the full configuration which holds all the data necessary to
   * describe this object.
   *
   * @param longVersion
   *          true if the long version should be returned, false if the
   *          short version should be returned
   * @return the full configuration
   */
  @Override
  public String getConfiguration(final boolean longVersion) {
    StringBuilder sb;
    IOptimizationModule m;

    sb = new StringBuilder();

    sb.append('(');
    sb.append(this.mu);
    if (this.rho != 1) {
      sb.append('/');
      sb.append(this.rho);
    }

    sb.append(this.plus ? '+' : ',');
    sb.append(this.lambda);
    sb.append(')');

    sb.append(",G=["); //$NON-NLS-1$
    sb.append(TextUtils.formatNumber(this.minX));

    sb.append(","); //$NON-NLS-1$
    sb.append(TextUtils.formatNumber(this.maxX));

    sb.append("]");//$NON-NLS-1$
    sb.append(this.dimension);

    sb.append(',');
    sb.append(super.getConfiguration(longVersion));

    m = this.createStrategyParameters;
    if (m != null) {
      sb.append(",o0s="); //$NON-NLS-1$
      sb.append(m.toString(longVersion));
    }

    m = this.mutateStrategyParameters;
    if (m != null) {
      sb.append(",o1s="); //$NON-NLS-1$
      sb.append(m.toString(longVersion));
    }

    m = this.crossoverStrategy;
    if (m != null) {
632 sb.append(",oxs="); //NON NLS 1
633 sb.append(m.toString(longVersion));
634 }
635
636 m = this.crossoverGenotype;
637 if (m != null) {
638 sb.append(",oxg="); //NON NLS 1
639 sb.append(m.toString(longVersion));
640 }
641
642 if (this.survivalSelection != null) {
643 sb.append(",selSur="); //NON NLS 1
644 sb.append(this.survivalSelection.toString(longVersion));
645 }
646
647 if (this.parentalSelection != null) {
648 sb.append(",selPar="); //NON NLS 1
649 sb.append(this.parentalSelection.toString(longVersion));
650 }
651
652 return sb.toString();
653 }
654 }
Listing 56.18: An individual record also holding endogenous information.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.algorithms.es;

import org.goataa.impl.utils.Individual;

/**
 * An individual record for evolution strategies which also contains a
 * field for the strategy parameter. The search space is always the vectors
 * of real numbers.
 *
 * @param <X>
 *          the problem space (phenome, Section 2.1 on page 43)
 * @author Thomas Weise
 */
public class ESIndividual<X> extends Individual<double[], X> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the strategy parameter */
  public double[] w;

  /** The constructor creates a new Evolution Strategy individual record */
  public ESIndividual() {
    super();
  }

  /**
   * Copy the values of another individual record to this record. This
   * method saves us from excessively creating new individual records.
   *
   * @param to
   *          the individual to copy
   */
  @Override
  @SuppressWarnings("unchecked")
  public void assign(final Individual<double[], X> to) {
    super.assign(to);
    if (to instanceof ESIndividual) {
      this.w = ((ESIndividual) to).w;
    }
  }
}
56.1.4.3 Differential Evolution
In this package, we provide implementations of Differential Evolution according to Chapter 33.
Listing 56.19: An example implementation of the Differential Evolution algorithm which
uses ternary crossover.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.algorithms.de;

import java.util.List;
import java.util.Random;

import org.goataa.impl.algorithms.SOOptimizationAlgorithm;
import org.goataa.impl.searchOperations.NullarySearchOperation;
import org.goataa.impl.utils.Constants;
import org.goataa.impl.utils.Individual;
import org.goataa.impl.utils.TextUtils;
import org.goataa.spec.IGPM;
import org.goataa.spec.INullarySearchOperation;
import org.goataa.spec.IObjectiveFunction;
import org.goataa.spec.IOptimizationModule;
import org.goataa.spec.ITerminationCriterion;
import org.goataa.spec.ITernarySearchOperation;

/**
 * The base class for Differential Evolution-based algorithms as discussed
 * in Chapter 33 on page 419.
 *
 * @param <X>
 *          the problem space (phenome, Section 2.1 on page 43)
 * @author Thomas Weise
 */
public class DifferentialEvolution1<X> extends
    SOOptimizationAlgorithm<double[], X, Individual<double[], X>> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the nullary search operation */
  private INullarySearchOperation<double[]> o0;

  /** the population size */
  private int ps;

  /** the ternary operator for crossover */
  private ITernarySearchOperation<double[]> tc;

  /** instantiate the Differential Evolution algorithm */
  @SuppressWarnings("unchecked")
  public DifferentialEvolution1() {
    super();
    this.o0 = (INullarySearchOperation) (NullarySearchOperation.NULL_CREATION);
    this.ps = 100;
  }

  /**
   * Set the nullary search operation to be used by this optimization
   * algorithm (see Section 4.2 on page 83 and
   * Listing 55.2 on page 708).
   *
   * @param op
   *          the nullary search operation to use
   */
  @SuppressWarnings("unchecked")
  public void setNullarySearchOperation(
      final INullarySearchOperation<double[]> op) {
    this.o0 = ((op != null) ? op
        : ((INullarySearchOperation) (NullarySearchOperation.NULL_CREATION)));
  }

  /**
   * Get the nullary search operation which is used by this optimization
   * algorithm (see Section 4.2 on page 83 and
   * Listing 55.2 on page 708).
   *
   * @return the nullary search operation to use
   */
  public final INullarySearchOperation<double[]> getNullarySearchOperation() {
    return this.o0;
  }

  /**
   * Perform a simple differential evolution as defined in
   * Chapter 33 on page 419
   *
   * @param f
   *          the objective function (Definition D2.3 on page 44)
   * @param create
   *          the nullary search operator
   *          (Section 4.2 on page 83) for creating the initial
   *          genotype
   * @param crossover
   *          the ternary crossover operator
   * @param gpm
   *          the genotype-phenotype mapping
   *          (Section 4.3 on page 85)
   * @param term
   *          the termination criterion
   *          (Section 6.3.3 on page 105)
   * @param ps
   *          the population size
   * @param r
   *          the random number generator
   * @return the individual containing the best candidate solution
   *         (Definition D2.2 on page 43) found
   * @param <X>
   *          the problem space (Section 2.1 on page 43)
   */
  @SuppressWarnings("unchecked")
  public static final <X> Individual<double[], X> differentialEvolution(
      //
      final IObjectiveFunction<X> f,
      final INullarySearchOperation<double[]> create,//
      final ITernarySearchOperation<double[]> crossover,//
      final IGPM<double[], X> gpm,//
      final int ps, final ITerminationCriterion term, final Random r) {

    Individual<double[], X>[] pop, npop;
    Individual<double[], X> p, best;
    int i, a, b, c;

    best = new Individual<double[], X>();
    best.v = Constants.WORST_FITNESS;
    pop = new Individual[ps];
    npop = new Individual[ps];

    // build the initial population of random individuals
    for (i = npop.length; (--i) >= 0;) {
      p = new Individual<double[], X>();
      npop[i] = p;
      p.g = create.create(r);
    }

    // the basic loop of the Differential Evolution
    for (;;) {
      // for each generation do...

      // for each individual in the population
      for (i = npop.length; (--i) >= 0;) {
        p = npop[i];

        // perform the genotype-phenotype mapping
        p.x = gpm.gpm(p.g, r);
        // compute the objective value
        p.v = f.compute(p.x, r);

        // fight against direct parent
        if ((pop[i] == null) || (p.v < pop[i].v)) {
          pop[i] = p;

          // is the current individual the best one so far?
          if (p.v < best.v) {
            best.assign(p);
          }
        }

        // after each objective function evaluation, check if we should
        // stop
        if (term.terminationCriterion()) {
          // if we should stop, return the best individual found
          return best;
        }
      }

      for (i = npop.length; (--i) >= 0;) {
        // a is the index of the direct parent; b and c are distinct others
        a = i;
        do {
          b = r.nextInt(pop.length);
        } while (a == b);

        do {
          c = r.nextInt(pop.length);
        } while ((a == c) || (b == c));

        npop[i] = p = new Individual<double[], X>();
        p.g = crossover.ternaryRecombine(pop[a].g, pop[b].g, pop[c].g, r);
      }
    }
  }

  /**
   * Invoke the differential evolution
   *
   * @param r
   *          the randomizer (will be used directly without setting the
   *          seed)
   * @param term
   *          the termination criterion (will be used directly without
   *          resetting)
   * @param result
   *          a list to which the results are to be appended
   */
  @Override
  public void call(final Random r, final ITerminationCriterion term,
      final List<Individual<double[], X>> result) {

    result.add(DifferentialEvolution1.differentialEvolution(//
        this.getObjectiveFunction(),//
        this.getNullarySearchOperation(),//
        this.getTernarySearchOperation(),//
        this.getGPM(), //
        this.getPopulationSize(), //
        term, //
        r));
  }

  /**
   * Obtain the ternary search operation
   *
   * @return the ternary search operation
   */
  public final ITernarySearchOperation<double[]> getTernarySearchOperation() {
    return this.tc;
  }

  /**
   * Set the ternary search operation
   *
   * @param atc
   *          the ternary search operation
   */
  public final void setTernarySearchOperation(
      final ITernarySearchOperation<double[]> atc) {
    this.tc = atc;
  }

  /**
   * Set the population size, i.e., the number of individuals which are
   * kept in the population.
   *
   * @param aps
   *          the population size
   */
  public final void setPopulationSize(final int aps) {
    this.ps = Math.max(1, aps);
  }

  /**
   * Get the population size, i.e., the number of individuals which are
   * kept in the population.
   *
   * @return the population size
   */
  public final int getPopulationSize() {
    return this.ps;
  }

  /**
   * Get the full configuration which holds all the data necessary to
   * describe this object.
   *
   * @param longVersion
   *          true if the long version should be returned, false if the
   *          short version should be returned
   * @return the full configuration
   */
  @Override
  public String getConfiguration(final boolean longVersion) {
    StringBuilder sb;
    IOptimizationModule o;

    sb = new StringBuilder(super.getConfiguration(longVersion));

    o = this.getNullarySearchOperation();
    if (o != null) {
      if (sb.length() > 0) {
        sb.append(',');
      }
      sb.append("o0="); //$NON-NLS-1$
      sb.append(o.toString(longVersion));
    }

    sb.append(",ps="); //$NON-NLS-1$
    sb.append(TextUtils.formatNumber(this.getPopulationSize()));

    if (this.tc != null) {
      sb.append(",op3="); //$NON-NLS-1$
      sb.append(this.tc.toString(longVersion));
    }

    return sb.toString();
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    return (longVersion ? super.getName(true) : "DE-1"); //$NON-NLS-1$
  }
}
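The ternary recombination operator plugged in via setTernarySearchOperation computes, in the canonical DE/rand/1 scheme of Chapter 33, the vector expression x_a + F * (x_b - x_c). The following standalone sketch shows this step in isolation; the class name and the fixed differential weight F = 0.5 are assumptions of this sketch, not part of the goataa package.

```java
// Standalone sketch of the DE/rand/1 recombination step: child = a + F * (b - c).
// The class name and the fixed differential weight F are illustrative
// assumptions for this sketch, not part of the goataa package.
public final class DERand1Sketch {

  /** the differential weight, commonly chosen from [0, 2] */
  static final double F = 0.5d;

  /** combine three parent vectors of equal length into one child vector */
  static double[] recombine(final double[] a, final double[] b,
      final double[] c) {
    final double[] child = new double[a.length];
    for (int i = 0; i < a.length; i++) {
      child[i] = a[i] + (F * (b[i] - c[i]));
    }
    return child;
  }

  public static void main(final String[] args) {
    final double[] child = recombine(new double[] { 1d, 1d },
        new double[] { 3d, 5d }, new double[] { 2d, 2d });
    System.out.println(child[0] + " " + child[1]); // prints "1.5 2.5"
  }
}
```

In the listing above, the base parent a is the direct parent at index i, so each child later competes against exactly the individual whose genotype anchored its recombination.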
56.1.4.4 Estimation Of Distribution Algorithms
In this package, we provide the implementations of Estimation of Distribution Algorithms
as discussed in Chapter 34.
Listing 56.20: The Estimation of Distribution Algorithm scheme PlainEDA given in
Algorithm 34.1.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.algorithms.eda;

import java.util.List;
import java.util.Random;

import org.goataa.impl.algorithms.SOOptimizationAlgorithm;
import org.goataa.impl.utils.Constants;
import org.goataa.impl.utils.Individual;
import org.goataa.impl.utils.TextUtils;
import org.goataa.spec.IGPM;
import org.goataa.spec.IModelBuilder;
import org.goataa.spec.IModelSampler;
import org.goataa.spec.INullarySearchOperation;
import org.goataa.spec.IObjectiveFunction;
import org.goataa.spec.IOptimizationModule;
import org.goataa.spec.ISelectionAlgorithm;
import org.goataa.spec.ITerminationCriterion;

/**
 * The basic Estimation of Distribution Algorithm (EDA) scheme according to
 * Algorithm 34.1 on page 432.
 *
 * @param <G>
 *          the search space (genome, Section 4.1 on page 81)
 * @param <X>
 *          the problem space (phenome, Section 2.1 on page 43)
 * @param <M>
 *          the model type
 * @author Thomas Weise
 */
public class EDA<G, X, M> extends
    SOOptimizationAlgorithm<G, X, Individual<G, X>> {
  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the model creation operation */
  private INullarySearchOperation<M> modelCreate;

  /** the model builder */
  private IModelBuilder<M, G> modelBuilder;

  /** the model sampler */
  private IModelSampler<M, G> modelSampler;

  /** the selection algorithm */
  private ISelectionAlgorithm selectionAlgorithm;

  /** the population size */
  private int ps;

  /** the mating pool size */
  private int mps;

  /** instantiate an EDA */
  public EDA() {
    super();
    this.ps = 100;
    this.mps = 100;
  }

  /**
   * Set the population size, i.e., the number of individuals which are
   * kept in the population.
   *
   * @param aps
   *          the population size
   */
  public final void setPopulationSize(final int aps) {
    this.ps = Math.max(1, aps);
  }

  /**
   * Get the population size, i.e., the number of individuals which are
   * kept in the population.
   *
   * @return the population size
   */
  public final int getPopulationSize() {
    return this.ps;
  }

  /**
   * Set the mating pool size, i.e., the number of individuals which will
   * be selected into the mating pool.
   *
   * @param amps
   *          the mating pool size
   */
  public final void setMatingPoolSize(final int amps) {
    this.mps = Math.max(1, amps);
  }

  /**
   * Get the mating pool size, i.e., the number of individuals which will
   * be selected into the mating pool.
   *
   * @return the mating pool size
   */
  public final int getMatingPoolSize() {
    return this.mps;
  }

  /**
   * Set the algorithm used to create a model
   *
   * @param create
   *          the model creation algorithm
   */
  public final void setModelCreator(final INullarySearchOperation<M> create) {
    this.modelCreate = create;
  }

  /**
   * Get the algorithm used to create a model
   *
   * @return the model creation algorithm
   */
  public final INullarySearchOperation<M> getModelCreator() {
    return this.modelCreate;
  }

  /**
   * Set the algorithm used to build/update a model
   *
   * @param builder
   *          the model building/updating algorithm
   */
  public final void setModelBuilder(final IModelBuilder<M, G> builder) {
    this.modelBuilder = builder;
  }

  /**
   * Get the algorithm used to build/update a model
   *
   * @return the model building/updating algorithm
   */
  public final IModelBuilder<M, G> getModelBuilder() {
    return this.modelBuilder;
  }

  /**
   * Set the algorithm used to sample a model
   *
   * @param sampler
   *          the model sampling algorithm
   */
  public final void setModelSampler(final IModelSampler<M, G> sampler) {
    this.modelSampler = sampler;
  }

  /**
   * Get the algorithm used to sample a model
   *
   * @return the model sampling algorithm
   */
  public final IModelSampler<M, G> getModelSampler() {
    return this.modelSampler;
  }

  /**
   * Set the selection algorithm (see Section 28.4 on page 285) to be
   * used by this evolutionary algorithm.
   *
   * @param sel
   *          the selection algorithm
   */
  public final void setSelectionAlgorithm(final ISelectionAlgorithm sel) {
    this.selectionAlgorithm = sel;
  }

  /**
   * Get the selection algorithm (see Section 28.4 on page 285) which is
   * used by this evolutionary algorithm.
   *
   * @return the selection algorithm
   */
  public final ISelectionAlgorithm getSelectionAlgorithm() {
    return this.selectionAlgorithm;
  }

  /**
   * Get the full configuration which holds all the data necessary to
   * describe this object.
   *
   * @param longVersion
   *          true if the long version should be returned, false if the
   *          short version should be returned
   * @return the full configuration
   */
  @Override
  public String getConfiguration(final boolean longVersion) {
    StringBuilder sb;
    IOptimizationModule m;

    sb = new StringBuilder(super.getConfiguration(longVersion));

    sb.append(",ps="); //$NON-NLS-1$
    sb.append(TextUtils.formatNumber(this.getPopulationSize()));

    sb.append(",mps="); //$NON-NLS-1$
    sb.append(TextUtils.formatNumber(this.getMatingPoolSize()));

    m = this.getModelCreator();
    if (m != null) {
      sb.append(",new="); //$NON-NLS-1$
      sb.append(m.toString(longVersion));
    }

    m = this.getModelBuilder();
    if (m != null) {
      sb.append(",build="); //$NON-NLS-1$
      sb.append(m.toString(longVersion));
    }

    m = this.getModelSampler();
    if (m != null) {
      sb.append(",sample="); //$NON-NLS-1$
      sb.append(m.toString(longVersion));
    }

    m = this.getSelectionAlgorithm();
    if (m != null) {
      sb.append(",sel="); //$NON-NLS-1$
      sb.append(m.toString(longVersion));
    }

    return sb.toString();
  }

  /**
   * Perform a basic Estimation of Distribution algorithm as defined in
   * Algorithm 34.1 on page 432.
   *
   * @param f
   *          the objective function (Definition D2.3 on page 44)
   * @param newModel
   *          the model creation operation
   * @param buildModel
   *          the model building/updating operator
   * @param sampleModel
   *          the model sampling algorithm
   * @param gpm
   *          the genotype-phenotype mapping
   *          (Section 4.3 on page 85)
   * @param sel
   *          the selection algorithm (Section 28.4 on page 285)
   * @param term
   *          the termination criterion
   *          (Section 6.3.3 on page 105)
   * @param ps
   *          the population size
   * @param mps
   *          the mating pool size
   * @param r
   *          the random number generator
   * @return the individual containing the best candidate solution
   *         (Definition D2.2 on page 43) found
   * @param <G>
   *          the search space (Section 4.1 on page 81)
   * @param <X>
   *          the problem space (Section 2.1 on page 43)
   * @param <M>
   *          the model type
   */
  @SuppressWarnings("unchecked")
  public static final <G, X, M> Individual<G, X> eda(
      //
      final IObjectiveFunction<X> f,
      final INullarySearchOperation<M> newModel,//
      final IModelBuilder<M, G> buildModel,//
      final IModelSampler<M, G> sampleModel,
      final IGPM<G, X> gpm,//
      final ISelectionAlgorithm sel, final int ps, final int mps,
      final ITerminationCriterion term, final Random r) {

    Individual<G, X>[] pop, mate;
    Individual<G, X> p, best;
    M model;
    int i;

    best = new Individual<G, X>();
    best.v = Constants.WORST_FITNESS;
    pop = new Individual[ps];
    mate = new Individual[mps];

    model = newModel.create(r);

    // the basic loop of the Evolutionary Algorithm 28.1 on page 255
    for (;;) {
      // for each generation do...

      // for each individual in the population
      for (i = pop.length; (--i) >= 0;) {
        pop[i] = p = new Individual<G, X>();

        // sample the model
        p.g = sampleModel.sampleModel(model, r);

        // perform the genotype-phenotype mapping
        p.x = gpm.gpm(p.g, r);
        // compute the objective value
        p.v = f.compute(p.x, r);

        // is the current individual the best one so far?
        if (p.v < best.v) {
          best.assign(p);
        }

        // after each objective function evaluation, check if we should
        // stop
        if (term.terminationCriterion()) {
          // if we should stop, return the best individual found
          return best;
        }
      }

      // perform the selection step
      sel.select(pop, 0, pop.length, mate, 0, mate.length, r);

      // build the model for the next generation
      model = buildModel.buildModel(mate, 0, mate.length, model, r);
    }
  }

  /**
   * Invoke the estimation of distribution algorithm
   *
   * @param r
   *          the randomizer (will be used directly without setting the
   *          seed)
   * @param term
   *          the termination criterion (will be used directly without
   *          resetting)
   * @param result
   *          a list to which the results are to be appended
   */
  @Override
  public void call(final Random r, final ITerminationCriterion term,
      final List<Individual<G, X>> result) {

    result.add(EDA.eda(this.getObjectiveFunction(),//
        this.getModelCreator(),//
        this.getModelBuilder(),//
        this.getModelSampler(),//
        this.getGPM(),//
        this.getSelectionAlgorithm(),//
        this.getPopulationSize(),//
        this.getMatingPoolSize(),//
        term,//
        r));
  }
}
56.1.4.4.1 UMDA
This package contains an implementation of the UMDA, an Estimation of Distribution
Algorithm introduced in Section 34.4.1.
Listing 56.21: The creator for the initial model to be used in UMDA.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.algorithms.eda.umda;

import java.util.Arrays;
import java.util.Random;

import org.goataa.impl.OptimizationModule;
import org.goataa.spec.INullarySearchOperation;

/**
 * The UMDA-style creation operation for models, as defined in
 * Algorithm 34.3 on page 436, fills the initial probability vector with
 * 0.5.
 *
 * @author Thomas Weise
 */
public class UMDAModelCreator extends OptimizationModule implements
    INullarySearchOperation<double[]> {
  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the dimension */
  private final int dimension;

  /**
   * Instantiate the UMDA model creation operation
   *
   * @param dim
   *          the dimension of the search space
   */
  public UMDAModelCreator(final int dim) {
    super();
    this.dimension = dim;
  }

  /**
   * Create a vector of all 0.5 values, as stated in Algorithm 34.3 on
   * page 436
   *
   * @param r
   *          the random number generator
   * @return the vector of 0.5 values
   */
  public final double[] create(final Random r) {
    double[] d;

    d = new double[this.dimension];
    Arrays.fill(d, 0.5d);

    return d;
  }

  /**
   * Get the full configuration which holds all the data necessary to
   * describe this object.
   *
   * @param longVersion
   *          true if the long version should be returned, false if the
   *          short version should be returned
   * @return the full configuration
   */
  @Override
  public String getConfiguration(final boolean longVersion) {
    return ("dim=" + this.dimension); //$NON-NLS-1$
  }
}
Listing 56.22: The model building method used in UMDA.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.algorithms.eda.umda;

import java.util.Arrays;
import java.util.Random;

import org.goataa.impl.algorithms.eda.ModelBuilder;
import org.goataa.impl.utils.Individual;

/**
 * Build a model UMDA-style, as defined in
 * Algorithm 34.5 on page 437
 *
 * @author Thomas Weise
 */
public class UMDAModelBuilder extends ModelBuilder<double[], boolean[]> {
  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the temporary counter */
  private int[] tmp;

  /** Instantiate the UMDA model building operation */
  public UMDAModelBuilder() {
    super();
  }

  /**
   * Build a model from a set of selected points in the search space and an
   * old model as introduced in Algorithm 34.5 on page 437: Count the
   * number of bits which are false at each locus and use their frequency
   * as probability value in the new model
   *
   * @param mate
   *          the mating pool of selected individuals
   * @param start
   *          the index of the first individual in the mating pool
   * @param count
   *          the number of individuals in the mating pool
   * @param oldModel
   *          the old model
   * @param r
   *          a random number generator
   * @return the new model
   */
  @Override
  public double[] buildModel(final Individual<boolean[], ?>[] mate,
      final int start, final int count, final double[] oldModel,
      final Random r) {
    double[] m;
    int[] cc;
    boolean[] b;
    int i, j;

    m = new double[oldModel.length];

    cc = this.tmp;
    if ((cc == null) || (m.length > cc.length)) {
      this.tmp = cc = new int[m.length];
    } else {
      Arrays.fill(cc, 0, m.length, 0);
    }

    for (i = (count + start); (--i) >= start;) {
      b = mate[i].g;
      for (j = m.length; (--j) >= 0;) {
        if (!(b[j])) {
          cc[j]++;
        }
      }
    }

    for (i = m.length; (--i) >= 0;) {
      m[i] = (((double) (cc[i])) / count);
    }

    return m;
  }
}
Listing 56.23: Sample points from the UMDA model.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.algorithms.eda.umda;

import java.util.Random;

import org.goataa.impl.algorithms.eda.ModelSampler;

/**
 * Sample a model (a probability vector) UMDA-style according to
 * Algorithm 34.4 on page 437
 *
 * @author Thomas Weise
 */
public class UMDAModelSampler extends ModelSampler<double[], boolean[]> {
  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** Instantiate the UMDA model sampling operation */
  public UMDAModelSampler() {
    super();
  }

  /**
   * According to Algorithm 34.4 on page 437, every value of the
   * model is between 0 and 1 and denotes the probability that the bit at
   * the same position in the genotype will be false. The model is sampled
   * using uniform distributions.
   *
   * @param model
   *          the model
   * @param r
   *          a random number generator
   * @return the new genotype sampled from the model
   */
  @Override
  public final boolean[] sampleModel(final double[] model, final Random r) {
    boolean[] b;
    int i;

    i = model.length;
    b = new boolean[i];

    for (; (--i) >= 0;) {
      b[i] = (r.nextDouble() > model[i]);
    }

    return b;
  }
}
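Taken together, Listings 56.21 to 56.23 realize a create/sample/build cycle over bit strings. The following standalone sketch condenses these three steps without the goataa class hierarchy; its class and method names are illustrative assumptions, not goataa API.

```java
import java.util.Arrays;
import java.util.Random;

// Standalone condensation of the UMDA create/sample/build cycle from the
// three listings above. Class and method names are illustrative
// assumptions, not goataa API.
public final class UMDACycleSketch {

  /** the initial model: each bit is false with probability 0.5 */
  static double[] createModel(final int dim) {
    final double[] model = new double[dim];
    Arrays.fill(model, 0.5d);
    return model;
  }

  /** sample one bit string; model[i] is the probability that bit i is false */
  static boolean[] sample(final double[] model, final Random r) {
    final boolean[] g = new boolean[model.length];
    for (int i = 0; i < g.length; i++) {
      g[i] = (r.nextDouble() > model[i]);
    }
    return g;
  }

  /** rebuild the model from the frequency of false bits in the mating pool */
  static double[] build(final boolean[][] mate) {
    final double[] model = new double[mate[0].length];
    for (final boolean[] g : mate) {
      for (int i = 0; i < g.length; i++) {
        if (!g[i]) {
          model[i]++;
        }
      }
    }
    for (int i = 0; i < model.length; i++) {
      model[i] /= mate.length;
    }
    return model;
  }
}
```

The loop of an EDA then simply alternates sample (to fill the population) and build (on the selected mating pool), exactly as in the eda() method of Listing 56.20.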
56.2 Search Operation
In this package, we provide implementations for the search operations.
56.2.1 Operations for String-based Search Spaces
Here we discuss string-based search operators. Strings are very common search spaces, be
they strings of bits or vectors of real numbers. They can also be used to represent permutations of objects.
56.2.1.1 Real Vectors: G ⊆ R^n
One of the most common search spaces is the space of real vectors, and here, we introduce
search operators for it.
56.2.1.1.1 Nullary Search Operations: create : ∅ ↦ G
Here we provide the implementation of Listing 55.2 on page 708 for real vectors, that is, a
nullary search operation which creates random vectors with elements distributed (uniformly)
in a given range.
Listing 56.24: Create uniformly distributed real vectors.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.searchOperations.strings.real.nullary;

import java.util.Random;

import org.goataa.impl.searchOperations.strings.real.RealVectorCreation;

/**
 * A nullary search operation (see Section 4.2 on page 83) for
 * real vectors in a bounded subspace of the real vectors of dimension n.
 * This operation creates vectors which are uniformly distributed in the
 * interval [min,max]^n.
 *
 * @author Thomas Weise
 */
public class DoubleArrayUniformCreation extends RealVectorCreation {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /**
   * Instantiate the real-vector creation operation
   *
   * @param dim
   *          the dimension of the search space
   * @param mi
   *          the minimum value of the allele (Definition D4.4 on page 83)
   *          of a gene
   * @param ma
   *          the maximum value of the allele of a gene
   */
  public DoubleArrayUniformCreation(final int dim, final double mi,
      final double ma) {
    super(dim, mi, ma);
  }

  /**
   * This operation just produces uniformly distributed random vectors in
   * the interval [this.min, this.max]^this.n
   *
   * @param r
   *          the random number generator
   * @return a new random real vector of dimension n
   */
  @Override
  public final double[] create(final Random r) {
    double[] g;
    int i;

    // create a new real vector (genotype, Definition D4.2 on page 82) of
    // dimension n
    g = new double[this.n];

    // initialize each gene (Definition D4.3 on page 82) of the vector...
    for (i = this.n; (--i) >= 0;) {
      // ...by setting it to a random number uniformly distributed in
      // [min,max] (i is the locus, Definition D4.5 on page 83)
      g[i] = (this.min + (r.nextDouble() * (this.max - this.min)));
    }

    return g;
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return super.getName(true);
    }
    return "DA-U"; //$NON-NLS-1$
  }
}
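The creation logic above rests on the affine transform that maps a uniform sample u from [0, 1) to min + u * (max - min) in [min, max). A standalone sketch of just this step follows; the class name is an assumption of this sketch, not goataa API.

```java
import java.util.Random;

// Standalone sketch of the bounded uniform creation used above: each gene
// is min + u * (max - min) with u drawn uniformly from [0, 1).
// The class name is an illustrative assumption, not goataa API.
public final class UniformCreateSketch {

  /** create a random real vector with each gene uniform in [min, max) */
  static double[] create(final int n, final double min, final double max,
      final Random r) {
    final double[] g = new double[n];
    for (int i = 0; i < n; i++) {
      g[i] = min + (r.nextDouble() * (max - min));
    }
    return g;
  }
}
```

Because Random.nextDouble() returns values in [0, 1), every created gene is guaranteed to lie in the half-open interval [min, max).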
56.2.1.1.2 Unary Search Operations: mutate : G ↦ G
It is quite easy to define multiple different search operators for real vectors which adhere to
the interface given in Listing 55.3 on page 708.
56.2.1.1.2.1 Evolution Strategy Operators
Listing 56.25: Modify a real vector by adding normally distributed random numbers with a
given standard deviation.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.searchOperations.strings.real.unary;

import java.util.Arrays;
import java.util.Random;

import org.goataa.impl.searchOperations.strings.real.RealVectorMutation;

/**
 * This is a mutation operation that performs mutation using normally
 * distributed random numbers according to a given set of standard
 * deviation parameters, as suggested in Section 30.4.1.2 on page 365.
 * This operator is similar to, but more sophisticated than, the simple
 * mutation method defined in Listing 56.28 on page 791.
 *
 * @author Thomas Weise
 */
public final class DoubleArrayStdDevNormalMutation extends
    RealVectorMutation {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the standard deviations to use */
  private final double[] stddev;

  /**
   * Create a new real-vector mutation operation
   *
   * @param dim
   *          the dimension of the search space
   * @param mi
   *          the minimum value of the allele (Definition D4.4 on page 83)
   *          of a gene
   * @param ma
   *          the maximum value of the allele of a gene
   */
  public DoubleArrayStdDevNormalMutation(final int dim, final double mi,
      final double ma) {
    super(mi, ma);
    this.stddev = new double[dim];
    Arrays.fill(this.stddev, 1d);
  }

  /**
   * Perform a mutation operation using normally distributed random
   * numbers according to a given set of standard deviation parameters,
   * as suggested in Section 30.4.1.2 on page 365.
   *
   * @param g
   *          the existing genotype in the search space from which a
   *          slightly modified copy should be created
   * @param r
   *          the random number generator
   * @return a new genotype
   */
  @Override
  public final double[] mutate(final double[] g, final Random r) {
    final double[] gnew, stddevs;
    double d;
    int i;

    i = g.length;

    // create a new real vector of dimension n
    gnew = new double[i];
    stddevs = this.stddev;

    // set each gene (Definition D4.3 on page 82) of gnew to ...
    for (; (--i) >= 0;) {
      do {
        // Use a normally distributed random number with a standard
        // deviation as specified in the array stddev.
        d = (g[i] + (r.nextGaussian() * stddevs[i]));
      } while ((d < this.min) || (d > this.max));
      gnew[i] = d;
    }

    return gnew;
  }

  /**
   * Set the standard deviations
   *
   * @param s
   *          the standard deviations
   */
  public final void setStdDevs(final double[] s) {
    System.arraycopy(s, 0, this.stddev, 0, this.stddev.length);
  }

  /**
   * Set the standard deviations to a single, fixed value
   *
   * @param s
   *          the standard deviation
   */
  public final void setStdDev(final double s) {
    Arrays.fill(this.stddev, s);
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return super.getName(true);
    }
    return "DA-SDNM"; //$NON-NLS-1$
  }
}
Listing 56.26: A strategy mutation operator for Evolution Strategy.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.searchOperations.strings.real.unary;

import java.util.Random;

import org.goataa.impl.searchOperations.strings.real.RealVectorMutation;

/**
 * This operator realizes the mutation strength strategy vector mutation
 * for Evolution Strategies (Chapter 30 on page 359) as specified in
 * Algorithm 30.8 on page 371, Equation 30.12, and Equation 30.13.
 *
 * @author Thomas Weise
 */
public final class DoubleArrayStrategyLogNormalMutation extends
    RealVectorMutation {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /**
   * Create a new real-vector mutation operation
   *
   * @param ma
   *          the maximum value of the allele of a gene
   */
  public DoubleArrayStrategyLogNormalMutation(final double ma) {
    super(Double.MIN_VALUE, ma);
  }

  /**
   * Perform the mutation strength strategy vector mutation for Evolution
   * Strategies (Chapter 30 on page 359) as specified in
   * Algorithm 30.8 on page 371.
   *
   * @param g
   *          the existing genotype in the search space from which a
   *          slightly modified copy should be created
   * @param r
   *          the random number generator
   * @return a new genotype
   */
  @Override
  public final double[] mutate(final double[] g, final Random r) {
    final double[] gnew;
    final double t0, t, c, nu;
    int i;
    double x;

    i = g.length;

    c = 1d;

    // set tau0 according to Equation 30.12 on page 372
    t0 = (c / Math.sqrt(2 * i));

    // set tau according to Equation 30.13 on page 372
    t = (c / Math.sqrt(2 * Math.sqrt(i)));

    nu = Math.exp(t0 * r.nextGaussian());
    gnew = g.clone();

    // set each gene (Definition D4.3 on page 82) of gnew to ...
    for (; (--i) >= 0;) {
      do {
        x = gnew[i] * nu * Math.exp(t * r.nextGaussian());
      } while ((x < this.min) || (x > this.max));
      gnew[i] = x;
    }

    return gnew;
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return super.getName(true);
    }
    return "DA-SLN"; //$NON-NLS-1$
  }
}
56.2.1.1.2.2 Operators Suggested by Students

The next two operators introduced here stem from the discussion in my lecture: Listing 56.27
is an operator which performs small uniformly distributed changes to the vector elements,
whereas Listing 56.28 performs a similar modification using normally distributed random
numbers.
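The essential difference between the two schemes is easy to see in isolation. A standalone sketch (the class and method names here are illustrative and not part of the goataa framework): the uniform offset is strictly bounded by half the mutation strength, while the Gaussian offset is unbounded but concentrates near zero.

```java
import java.util.Random;

/**
 * Compares the two perturbation schemes on a single gene: a uniform
 * offset in [-0.5*s, +0.5*s] versus a normal offset drawn from N(0, s^2).
 */
public class PerturbationDemo {

  /** uniform offset: can never leave [-0.5*strength, +0.5*strength] */
  static double uniformOffset(final double strength, final Random r) {
    return (r.nextDouble() - 0.5) * strength;
  }

  /** normal offset: unbounded, but mostly within about +/-3*strength */
  static double normalOffset(final double strength, final Random r) {
    return r.nextGaussian() * strength;
  }

  public static void main(String[] args) {
    final Random r = new Random(42);
    final double s = 0.1;
    double maxU = 0d, maxN = 0d;
    for (int k = 0; k < 100000; k++) {
      maxU = Math.max(maxU, Math.abs(uniformOffset(s, r)));
      maxN = Math.max(maxN, Math.abs(normalOffset(s, r)));
    }
    // the largest uniform offset stays below 0.5*s; the normal one exceeds it
    System.out.println("max |uniform| = " + maxU + ", max |normal| = " + maxN);
  }
}
```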
Listing 56.27: Modify a real vector by adding uniformly distributed random numbers.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.searchOperations.strings.real.unary;

import java.util.Random;

import org.goataa.impl.searchOperations.strings.real.RealVectorMutation;

/**
 * A unary search operation (see Section 4.2 on page 83) for
 * real vectors in a bounded subspace of the real vectors of dimension n.
 * This operation takes an existing genotype (see
 * Definition D4.2 on page 82) and adds a small uniformly distributed (see
 * Section 53.4.1 on page 676, Section 53.3.1 on page 668) random number
 * to each of its genes (see Definition D4.3 on page 82).
 *
 * @author Thomas Weise
 */
public final class DoubleArrayAllUniformMutation extends
    RealVectorMutation {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /**
   * Create a new real-vector mutation operation
   *
   * @param mi
   *          the minimum value of the allele (Definition D4.4 on page 83)
   *          of a gene
   * @param ma
   *          the maximum value of the allele of a gene
   */
  public DoubleArrayAllUniformMutation(final double mi, final double ma) {
    super(mi, ma);
  }

  /**
   * This is an unary search operation for vectors of real numbers. It
   * takes one existing genotype g (see Definition D4.2 on page 82) from
   * the search space and produces one new genotype. This new element is a
   * slightly modified version of g which is obtained by adding uniformly
   * distributed random numbers to its elements.
   *
   * @param g
   *          the existing genotype in the search space from which a
   *          slightly modified copy should be created
   * @param r
   *          the random number generator
   * @return a new genotype
   */
  @Override
  public final double[] mutate(final double[] g, final Random r) {
    final double[] gnew;
    final double strength;
    int i;

    i = g.length;

    // create a new real vector of dimension n
    gnew = new double[i];

    // the mutation strength: here we use a constant which is small
    // compared to the range min...max
    strength = 0.001d * (this.max - this.min);

    // set each gene (Definition D4.3 on page 82) of gnew to ...
    for (; (--i) >= 0;) {
      do {
        // the original allele (Definition D4.4 on page 83) of the gene
        // plus a random number uniformly distributed
        // (Section 53.4.1 on page 676) in
        // [-0.5 * strength, +0.5 * strength]
        gnew[i] = g[i] + ((r.nextDouble() - 0.5) * strength);
        // and repeat this until the new allele falls into the specified
        // boundaries
      } while ((gnew[i] < this.min) || (gnew[i] > this.max));
    }

    return gnew;
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return super.getName(true);
    }
    return "DA-AUM"; //$NON-NLS-1$
  }
}
Listing 56.28: Modify a real vector by adding normally distributed random numbers.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.searchOperations.strings.real.unary;

import java.util.Random;

import org.goataa.impl.searchOperations.strings.real.RealVectorMutation;

/**
 * A unary search operation (see Section 4.2 on page 83) for
 * real vectors in a bounded subspace of the real vectors of dimension n.
 * This operation takes an existing genotype (see
 * Definition D4.2 on page 82) and adds a small normally distributed random
 * number to each of its genes (see Definition D4.3 on page 82). There are
 * two main differences between normally distributed random numbers and
 * uniformly distributed ones: 1) The normal distribution (see
 * Section 53.4.2 on page 678) is unbounded whereas the uniform
 * distribution has limits to each side (see Section 53.4.1 on page 676
 * and Section 53.3.1 on page 668). 2) The normal distribution gives the
 * elements around its expected value (Section 53.2.2 on page 661) a
 * higher probability and elements distant from it a lower probability,
 * whereas all possible samples have the same probability in a uniform
 * distribution.
 *
 * @author Thomas Weise
 */
public final class DoubleArrayAllNormalMutation extends RealVectorMutation {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /**
   * Create a new real-vector mutation operation
   *
   * @param mi
   *          the minimum value of the allele (Definition D4.4 on page 83)
   *          of a gene
   * @param ma
   *          the maximum value of the allele of a gene
   */
  public DoubleArrayAllNormalMutation(final double mi, final double ma) {
    super(mi, ma);
  }

  /**
   * This is an unary search operation for vectors of real numbers. It
   * takes one existing genotype g (see Definition D4.2 on page 82) from
   * the search space and produces one new genotype. This new element is a
   * slightly modified version of g which is obtained by adding normally
   * distributed random numbers to its elements.
   *
   * @param g
   *          the existing genotype in the search space from which a
   *          slightly modified copy should be created
   * @param r
   *          the random number generator
   * @return a new genotype
   */
  @Override
  public final double[] mutate(final double[] g, final Random r) {
    final double[] gnew;
    final double strength;
    int i;

    i = g.length;

    // create a new real vector of dimension n
    gnew = new double[i];

    // the mutation strength: here we use a constant which is small
    // compared to the range min...max
    strength = 0.001d * (this.max - this.min);

    // set each gene (Definition D4.3 on page 82) of gnew to ...
    for (; (--i) >= 0;) {
      do {
        // the original allele (Definition D4.4 on page 83) of the gene
        // plus a random number normally distributed
        // (Section 53.4.2 on page 678) with N(0, strength^2)
        gnew[i] = g[i] + (r.nextGaussian() * strength);
        // and repeat this until the new allele falls into the specified
        // boundaries
      } while ((gnew[i] < this.min) || (gnew[i] > this.max));
    }

    return gnew;
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return super.getName(true);
    }
    return "DA-NAM"; //$NON-NLS-1$
  }
}
56.2.1.1.3 Binary Search Operations: recombine : G × G ↦ G

Recombination operations for real-valued strings, i.e., implementations of Listing 55.4 on
page 709 for arrays of real numbers.
Listing 56.29: A weighted average crossover operator.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.searchOperations.strings.real.binary;

import java.util.Random;

import org.goataa.impl.searchOperations.BinarySearchOperation;
import org.goataa.spec.IBinarySearchOperation;

/**
 * A weighted average crossover operator as defined in
 * Section 29.3.4.5 on page 337.
 *
 * @author Thomas Weise
 */
public final class DoubleArrayWeightedMeanCrossover extends
    BinarySearchOperation<double[]> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /**
   * the globally shared instance of the double array weighted mean
   * crossover
   */
  public static final IBinarySearchOperation<double[]> DOUBLE_ARRAY_WEIGHTED_MEAN_CROSSOVER = new DoubleArrayWeightedMeanCrossover();

  /** Create a new real-vector crossover operation */
  protected DoubleArrayWeightedMeanCrossover() {
    super();
  }

  /**
   * Perform a weighted average crossover as discussed in
   * Section 29.3.4.5 on page 337.
   *
   * @param p1
   *          the first "parent" genotype
   * @param p2
   *          the second "parent" genotype
   * @param r
   *          the random number generator
   * @return a new genotype
   */
  @Override
  public final double[] recombine(final double[] p1, final double[] p2,
      final Random r) {
    final double[] gnew;
    double gamma;
    int i;

    i = p1.length;
    gnew = new double[i];

    for (; (--i) >= 0;) {
      gamma = r.nextDouble();
      gamma = (gamma * gamma * gamma);
      if (r.nextBoolean()) {
        gamma = 0.5d + (0.5d * gamma);
      } else {
        gamma = 0.5d - (0.5d * gamma);
      }
      gnew[i] = ((p1[i] * gamma) + (p2[i] * (1d - gamma)));
    }

    return gnew;
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return super.getName(true);
    }
    return "DA-WMC"; //$NON-NLS-1$
  }
}
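The key property of the weighted-mean scheme in Listing 56.29 is that each offspring gene is a convex combination gamma*p1[i] + (1-gamma)*p2[i] with gamma in [0,1], so it always lies between the corresponding parental alleles; cubing the random number biases gamma towards 0.5, i.e., towards the plain mean of the parents. A standalone sketch of that core loop (illustrative class name, not part of the goataa framework):

```java
import java.util.Random;

/** Standalone sketch of the weighted-mean recombination idea. */
public class WeightedMeanDemo {

  static double[] recombine(final double[] p1, final double[] p2,
      final Random r) {
    final double[] gnew = new double[p1.length];
    for (int i = 0; i < gnew.length; i++) {
      // cube the random number to concentrate gamma near 0.5
      double gamma = r.nextDouble();
      gamma = gamma * gamma * gamma;
      gamma = r.nextBoolean() ? (0.5d + (0.5d * gamma))
                              : (0.5d - (0.5d * gamma));
      // convex combination: gnew[i] lies between p1[i] and p2[i]
      gnew[i] = (p1[i] * gamma) + (p2[i] * (1d - gamma));
    }
    return gnew;
  }

  public static void main(String[] args) {
    final double[] c = recombine(new double[] { 0d, 10d },
        new double[] { 1d, 20d }, new Random(5));
    System.out.println(java.util.Arrays.toString(c));
  }
}
```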
56.2.1.1.4 Ternary Search Operations: ternaryRecombine : G × G × G ↦ G

Recombination operations for real-valued strings, i.e., implementations of Listing 55.5 on
page 710 for arrays of real numbers.
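Both Differential Evolution operators below share the same core step: a trial allele is built from three parents as p3[i] + F * (p1[i] - p2[i]) and then clamped to the search space bounds. A minimal standalone sketch of just this step (illustrative names, not part of the goataa framework):

```java
/** The core Differential Evolution trial-allele computation. */
public class DETrialDemo {

  /** p3 plus the weighted difference of p1 and p2, clamped to [min, max] */
  static double trialAllele(final double x1, final double x2,
      final double x3, final double f, final double min, final double max) {
    return Math.min(max, Math.max(min, x3 + (f * (x1 - x2))));
  }

  public static void main(String[] args) {
    // with F = 0.5: 4.0 + 0.5 * (3.0 - 1.0) = 5.0
    System.out.println(trialAllele(3.0, 1.0, 4.0, 0.5, 0.0, 10.0));
  }
}
```

The binomial and exponential variants below differ only in how they decide, per gene, whether this trial allele or the unchanged value from p3 is used.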
Listing 56.30: The binomial crossover operator from Differential Evolution.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.searchOperations.strings.real.ternary;

import java.util.Random;

/**
 * The ternary binomial crossover operator for Differential Evolution as
 * defined in Section 33.2 on page 419.
 *
 * @author Thomas Weise
 */
public final class DoubleArrayDEbin extends DEbase {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /**
   * Create a new real-vector ternary recombination operation
   *
   * @param mi
   *          the minimum value of the allele (Definition D4.4 on page 83)
   *          of a gene
   * @param ma
   *          the maximum value of the allele of a gene
   * @param acr
   *          the gene crossover rate cr
   * @param af
   *          the influence factor F
   */
  public DoubleArrayDEbin(final double mi, final double ma,
      final double acr, final double af) {
    super(mi, ma, acr, af);
  }

  /**
   * Create a new real-vector ternary recombination operation
   *
   * @param mi
   *          the minimum value of the allele (Definition D4.4 on page 83)
   *          of a gene
   * @param ma
   *          the maximum value of the allele of a gene
   */
  public DoubleArrayDEbin(final double mi, final double ma) {
    super(mi, ma);
  }

  /**
   * Perform the ternary Differential Evolution recombination method as
   * discussed in Section 33.2 on page 419 using a normally distributed
   * weight.
   *
   * @param p1
   *          the first "parent" genotype
   * @param p2
   *          the second "parent" genotype
   * @param p3
   *          the third parent genotype
   * @param r
   *          the random number generator
   * @return a new genotype
   */
  @Override
  public double[] ternaryRecombine(final double[] p1, final double[] p2,
      final double[] p3, final Random r) {
    final double[] gnew;
    double weight, xn;
    boolean change;
    int i, j, maxTrials;
    final double f, cr;

    gnew = new double[p1.length];

    f = this.getF();
    cr = this.getCR();

    for (maxTrials = 10; maxTrials > 0; maxTrials--) {
      // choose a weight very close to F with a tiny variation
      weight = f * (1d + (0.01d * r.nextGaussian()));
      change = false;
      j = r.nextInt(gnew.length);
      // reset the gene index at the start of each trial
      i = p1.length;

      // compute the new vector
      for (; (--i) >= 0;) {

        // if we should perform the crossover in this dimension
        if ((i == j) || (r.nextDouble() < cr)) {
          // compute the new allele and bound it to the search space
          xn = Math.min(this.max, Math.max(this.min, p3[i]
              + (weight * (p1[i] - p2[i]))));

          if (xn != p3[i]) {
            change = true;
          }
        } else {
          // if not, just copy the old value
          xn = p3[i];
        }

        gnew[i] = xn;
      }

      if (change) {
        return gnew;
      }
    }

    return p3;
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    return "DA-1/rand/bin"; //$NON-NLS-1$
  }
}
Listing 56.31: The exponential crossover operator from Differential Evolution.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.searchOperations.strings.real.ternary;

import java.util.Random;

/**
 * The ternary exponential crossover operator for Differential Evolution
 * as defined in Section 33.2 on page 419.
 *
 * @author Thomas Weise
 */
public final class DoubleArrayDEexp extends DEbase {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /**
   * Create a new real-vector ternary recombination operation
   *
   * @param mi
   *          the minimum value of the allele (Definition D4.4 on page 83)
   *          of a gene
   * @param ma
   *          the maximum value of the allele of a gene
   * @param acr
   *          the gene crossover rate cr
   * @param af
   *          the influence factor F
   */
  public DoubleArrayDEexp(final double mi, final double ma,
      final double acr, final double af) {
    super(mi, ma, acr, af);
  }

  /**
   * Create a new real-vector ternary recombination operation
   *
   * @param mi
   *          the minimum value of the allele (Definition D4.4 on page 83)
   *          of a gene
   * @param ma
   *          the maximum value of the allele of a gene
   */
  public DoubleArrayDEexp(final double mi, final double ma) {
    super(mi, ma);
  }

  /**
   * Perform the ternary Differential Evolution recombination method as
   * discussed in Section 33.2 on page 419 using a normally distributed
   * weight.
   *
   * @param p1
   *          the first "parent" genotype
   * @param p2
   *          the second "parent" genotype
   * @param p3
   *          the third parent genotype
   * @param r
   *          the random number generator
   * @return a new genotype
   */
  @Override
  public double[] ternaryRecombine(final double[] p1, final double[] p2,
      final double[] p3, final Random r) {
    final double[] gnew;
    double weight, xn;
    boolean change, copy;
    int idx, i, maxTrials;
    final double f, cr;

    gnew = new double[p1.length];

    f = this.getF();
    cr = this.getCR();

    outer: for (maxTrials = 10; maxTrials > 0; maxTrials--) {
      // choose a weight very close to F with a tiny variation
      weight = f * (1d + (0.01d * r.nextGaussian()));
      change = false;
      idx = r.nextInt(gnew.length);
      copy = false;
      // reset the gene index at the start of each trial
      i = p1.length;

      // compute the new vector
      for (; (--i) >= 0;) {

        if (copy) {
          xn = p3[idx];
        } else {
          // compute the new allele and bound it to the search space
          xn = Math.min(this.max, Math.max(this.min, p3[i]
              + (weight * (p1[i] - p2[i]))));
          if (xn != p3[i]) {
            change = true;
          }
        }

        gnew[i] = xn;

        // should we toggle from mutation to copy mode?
        if (!copy) {
          if (r.nextDouble() < cr) {
            // restart if this will result in the same result
            if (!change) {
              continue outer;
            }
            copy = true;
          }
        }

        // the next index
        idx = ((idx + 1) % gnew.length);
      }

      if (change) {
        return gnew;
      }
    }

    return p3;
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    return "DA-1/rand/exp"; //$NON-NLS-1$
  }
}
56.2.1.1.5 N-ary Search Operations: searchOp : Gⁿ ↦ G

N-ary search operations for real-valued strings, i.e., implementations of Listing 55.6 on
page 711 for arrays of real numbers.
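The two n-ary schemes implemented below make an instructive contrast: dominant recombination picks each gene from one randomly chosen parent, while intermediate recombination averages each gene over all parents. A standalone sketch of both ideas (illustrative names, not part of the goataa framework):

```java
import java.util.Random;

/** Standalone sketch contrasting dominant and intermediate recombination. */
public class NaryRecombinationDemo {

  /** dominant: each gene is copied from a randomly chosen parent */
  static double[] dominant(final double[][] gs, final Random r) {
    final double[] gnew = new double[gs[0].length];
    for (int i = 0; i < gnew.length; i++) {
      gnew[i] = gs[r.nextInt(gs.length)][i];
    }
    return gnew;
  }

  /** intermediate: each gene is the mean over all parents */
  static double[] intermediate(final double[][] gs) {
    final double[] gnew = new double[gs[0].length];
    for (final double[] g : gs) {
      for (int i = 0; i < gnew.length; i++) {
        gnew[i] += g[i] / gs.length; // accumulate the gene-wise mean
      }
    }
    return gnew;
  }

  public static void main(String[] args) {
    final double[][] parents = { { 0d, 0d }, { 2d, 4d } };
    System.out.println(
        java.util.Arrays.toString(intermediate(parents))); // [1.0, 2.0]
    System.out.println(
        java.util.Arrays.toString(dominant(parents, new Random(1))));
  }
}
```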
Listing 56.32: The dominant recombination operator defined in Section 30.3.1.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.searchOperations.strings.real.nary;

import java.util.Random;

import org.goataa.impl.searchOperations.NarySearchOperation;
import org.goataa.spec.INarySearchOperation;

/**
 * The dominant recombination operator as defined in
 * Algorithm 30.2 on page 363.
 *
 * @author Thomas Weise
 */
public final class DoubleArrayDominantRecombination extends
    NarySearchOperation<double[]> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /**
   * the globally shared instance of the double array dominant
   * recombination operation
   */
  public static final INarySearchOperation<double[]> DOUBLE_ARRAY_DOMINANT_RECOMBINATION = new DoubleArrayDominantRecombination();

  /** Create a new real-vector n-ary search operation */
  protected DoubleArrayDominantRecombination() {
    super();
  }

  /**
   * Perform the dominant recombination operator as defined in
   * Algorithm 30.2 on page 363.
   *
   * @param gs
   *          the existing genotypes in the search space which will be
   *          combined to a new one
   * @param r
   *          the random number generator
   * @return a new genotype
   */
  @Override
  public final double[] combine(final double[][] gs, final Random r) {
    final double[] gnew;
    final int dim;
    int i;

    if ((gs == null) || (gs.length <= 0)) {
      return null;
    }

    dim = gs[0].length;
    gnew = new double[dim];

    for (i = dim; (--i) >= 0;) {
      gnew[i] = gs[r.nextInt(gs.length)][i];
    }

    return gnew;
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return super.getName(true);
    }
    return "DA-domX"; //$NON-NLS-1$
  }
}
Listing 56.33: The intermediate recombination operator defined in Section 30.3.2.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.searchOperations.strings.real.nary;

import java.util.Random;

import org.goataa.impl.OptimizationModule;
import org.goataa.spec.INarySearchOperation;

/**
 * An averaging n-ary search operator which computes the average of n
 * elements, i.e., performs intermediate recombination as defined in
 * Algorithm 30.3 on page 363.
 *
 * @author Thomas Weise
 */
public final class DoubleArrayIntermediateRecombination extends
    OptimizationModule implements INarySearchOperation<double[]> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /**
   * the globally shared instance of the double array intermediate
   * recombination operation
   */
  public static final INarySearchOperation<double[]> DOUBLE_ARRAY_INTERMEDIATE_RECOMBINATION = new DoubleArrayIntermediateRecombination();

  /** Create a new real-vector n-ary search operation */
  protected DoubleArrayIntermediateRecombination() {
    super();
  }

  /**
   * Compute the average of n genotypes, i.e., perform intermediate
   * recombination as defined in Algorithm 30.3 on page 363.
   *
   * @param gs
   *          the existing genotypes in the search space which will be
   *          combined to a new one
   * @param r
   *          the random number generator
   * @return a new genotype
   */
  @Override
  public final double[] combine(final double[][] gs, final Random r) {
    final double[] gnew;
    double[] g;
    final int dim;
    final double gamma;
    int i, j;

    if ((gs == null) || (gs.length <= 0)) {
      return null;
    }

    dim = gs[0].length;
    gnew = new double[dim];
    gamma = (1d / gs.length);

    for (j = gs.length; (--j) >= 0;) {
      g = gs[j];
      for (i = dim; (--i) >= 0;) {
        gnew[i] += (g[i] * gamma);
      }
    }

    return gnew;
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return super.getName(true);
    }
    return "DA-IX"; //$NON-NLS-1$
  }
}
56.2.1.2 Bit Strings

In this package we provide search operations for bit strings as used by the Genetic Algorithms
discussed in Chapter 29.

56.2.1.2.1 Boolean Strings

This is a trivial implementation of the search operations for bit strings which is intended for
teaching purposes only. Bit strings can be represented much more compactly as arrays of
byte, where eight bits are packed into each byte. An array of boolean wastes a lot of
memory, but for teaching purposes it is OK.
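The memory argument in a nutshell: a boolean[] spends at least one byte per bit of information, while a byte[] can pack eight bits into each byte. A minimal sketch of the packed representation (illustrative names, not part of the goataa framework):

```java
/** A minimal packed bit-string representation: eight bits per byte. */
public class BitPackingDemo {

  /** read bit i from a packed bit string */
  static boolean getBit(final byte[] bits, final int i) {
    // i >>> 3 selects the byte, i & 7 the bit inside that byte
    return ((bits[i >>> 3] >>> (i & 7)) & 1) != 0;
  }

  /** set bit i in a packed bit string to the given value */
  static void setBit(final byte[] bits, final int i, final boolean v) {
    if (v) {
      bits[i >>> 3] |= (1 << (i & 7));
    } else {
      bits[i >>> 3] &= ~(1 << (i & 7));
    }
  }

  public static void main(String[] args) {
    // 100 bits fit into ceil(100 / 8) = 13 bytes instead of 100 booleans
    final byte[] bits = new byte[(100 + 7) >>> 3];
    setBit(bits, 42, true);
    System.out.println(getBit(bits, 42)); // true
    System.out.println(getBit(bits, 41)); // false
  }
}
```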
56.2.1.2.1.1 Nullary Search Operations: create : ∅ ↦ G

The bit string creation operations (nullary search operators).
Listing 56.34: A uniform random string generator.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.searchOperations.strings.bits.booleans.nullary;

import java.util.Random;

import org.goataa.impl.searchOperations.strings.FixedLengthStringCreation;

/**
 * A uniform bit string creator which builds random bit strings as defined
 * in Algorithm 29.1 on page 328.
 *
 * @author Thomas Weise
 */
public final class BooleanArrayUniformCreation extends
    FixedLengthStringCreation<boolean[]> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /**
   * The uniform boolean string creator
   *
   * @param dim
   *          the dimension of the search space
   */
  public BooleanArrayUniformCreation(final int dim) {
    super(dim);
  }

  /**
   * This is a nullary search operation which creates strings of boolean
   * values
   *
   * @param r
   *          the random number generator
   * @return a new genotype (see Definition D4.2 on page 82)
   */
  @Override
  public boolean[] create(final Random r) {
    boolean[] bs;
    int i;

    i = this.n;
    bs = new boolean[i];

    for (; (--i) >= 0;) {
      bs[i] = r.nextBoolean();
    }

    return bs;
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return super.getName(true);
    }
    return "BA-U"; //$NON-NLS-1$
  }
}
56.2.1.2.1.2 Unary Search Operations: mutate : G ↦ G

Mutation operators (unary search operators) for bit strings.
Listing 56.35: A single bit-flip mutator.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.searchOperations.strings.bits.booleans.unary;

import java.util.Random;

import org.goataa.impl.searchOperations.UnarySearchOperation;
import org.goataa.spec.IUnarySearchOperation;

/**
 * A unary search operation which flips exactly one bit of a bit string,
 * as sketched in Fig. 29.2.a on page 330 and discussed
 * in Section 29.3.2.1 on page 330.
 *
 * @author Thomas Weise
 */
public final class BooleanArraySingleBitFlipMutation extends
    UnarySearchOperation<boolean[]> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the globally shared instance of the single bit flip mutation */
  public static final IUnarySearchOperation<boolean[]> BOOLEAN_ARRAY_SINGLE_BIT_FLIP_MUTATION = new BooleanArraySingleBitFlipMutation();

  /** Create a new bit-string single bit mutation operation */
  protected BooleanArraySingleBitFlipMutation() {
    super();
  }

  /**
   * The single-bit flip unary search operation (mutator) sketched in
   * Fig. 29.2.a on page 330 and discussed in
   * Section 29.3.2.1 on page 330.
   *
   * @param g
   *          the existing genotype in the search space from which a
   *          slightly modified copy should be created
   * @param r
   *          the random number generator
   * @return a new genotype
   */
  @Override
  public final boolean[] mutate(final boolean[] g, final Random r) {
    final boolean[] gnew;
    int i;

    gnew = g.clone();
    i = r.nextInt(gnew.length);
    gnew[i] = (!(gnew[i]));

    return gnew;
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return super.getName(true);
    }
    return "BA-SBF"; //$NON-NLS-1$
  }
}
56.2.1.2.1.3 Binary Search Operations: recombine : G × G → G
Recombination operators (binary search operators) for bit strings.
Listing 56.36: The uniform crossover operator.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.searchOperations.strings.bits.booleans.binary;

import java.util.Random;

import org.goataa.impl.searchOperations.BinarySearchOperation;
import org.goataa.spec.IBinarySearchOperation;

/**
 * A binary search operation which mixes the bits of two parent genotypes,
 * sketched in Fig. 29.5.d on page 335 and discussed
 * in Section 29.3.4.4 on page 337.
 *
 * @author Thomas Weise
 */
public final class BooleanArrayUniformCrossover extends
    BinarySearchOperation<boolean[]> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the globally shared instance of the boolean array uniform crossover */
  public static final IBinarySearchOperation<boolean[]> BOOLEAN_ARRAY_UNIFORM_CROSSOVER =
      new BooleanArrayUniformCrossover();

  /** Create a new bit-string uniform crossover operation */
  protected BooleanArrayUniformCrossover() {
    super();
  }

  /**
   * The binary search operation which mixes the bits of two parent
   * genotypes sketched in Fig. 29.5.d on page 335
   * and discussed in Section 29.3.4.4 on page 337.
   *
   * @param p1
   *          the first "parent" genotype
   * @param p2
   *          the second "parent" genotype
   * @param r
   *          the random number generator
   * @return a new genotype
   */
  @Override
  public final boolean[] recombine(final boolean[] p1, final boolean[] p2,
      final Random r) {
    final boolean[] gnew;
    int i;

    gnew = p1.clone();
    for (i = Math.min(gnew.length, p2.length); (--i) >= 0;) {
      if (r.nextBoolean()) {
        gnew[i] = p2[i];
      }
    }

    return gnew;
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return super.getName(true);
    }
    return "BA-UX"; //$NON-NLS-1$
  }
}
56.2.1.3 Permutations
In this package, we provide permutation-based search operators. A permutation of n objects
can be expressed as an array of the first n natural numbers, i.e., the numbers from 0 to n − 1.
It can hence be stored in an integer array if we ensure that each number from 0 to n − 1
always occurs exactly once in the array. Search operations which have this feature are
presented in this package.
56.2.1.3.1 Nullary Search Operations: create : ∅ → G
Here we present creation operations for permutations. Such nullary operators create integer
arrays consisting of n elements, each of which is a unique number from 0 to n − 1. They
furthermore should have the feature that every possible permutation has the same chance
of being created. Listing 56.37 creates a new permutation according to Algorithm 29.2.
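For readers who want to experiment with this creation scheme outside of the goataa framework, the idea can be sketched as a standalone method; the class and method names below are our own and not part of the package:

```java
import java.util.Random;

public final class PermutationSketch {

  /**
   * Create a uniformly distributed random permutation of 0..n-1 by
   * repeatedly drawing one of the remaining numbers at random, in the
   * spirit of Algorithm 29.2.
   */
  public static int[] createPermutation(final int n, final Random r) {
    final int[] g = new int[n];
    final int[] tmp = new int[n];

    for (int k = 0; k < n; k++) {
      tmp[k] = k; // the numbers still available for selection
    }

    for (int i = n; i > 0;) {
      final int j = r.nextInt(i); // choose one of the i remaining numbers
      i--;
      g[i] = tmp[j]; // place it at the next free position of the genotype
      tmp[j] = tmp[i]; // overwrite it with the last remaining number
    }
    return g;
  }

  public static void main(final String[] args) {
    final int[] p = createPermutation(8, new Random(42));
    final boolean[] seen = new boolean[p.length];
    for (final int v : p) {
      if (seen[v]) {
        throw new AssertionError("duplicate allele " + v);
      }
      seen[v] = true; // each number 0..n-1 must occur exactly once
    }
    System.out.println("valid permutation of length " + p.length);
  }
}
```

Since every remaining number is chosen with equal probability in each step, all n! permutations are equally likely.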
Listing 56.37: Create uniformly distributed permutations.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.searchOperations.strings.permutation.nullary;

import java.util.Random;

import org.goataa.impl.searchOperations.strings.FixedLengthStringCreation;

/**
 * A nullary search operation (see Section 4.2 on page 83) for
 * permutations of n elements which works according to
 * Algorithm 29.2 on page 329.
 *
 * @author Thomas Weise
 */
public final class IntPermutationUniformCreation extends
    FixedLengthStringCreation<int[]> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** a temporary variable */
  private transient int[] temp;

  /**
   * Instantiate the permutation creation operation
   *
   * @param dim
   *          the number of objects to permutate
   */
  public IntPermutationUniformCreation(final int dim) {
    super(dim);
  }

  /**
   * This operation creates a random permutation according to
   * Algorithm 29.2 on page 329.
   *
   * @param r
   *          the random number generator
   * @return a new random permutation of the numbers 0..n-1
   */
  @Override
  public final int[] create(final Random r) {
    int[] g, tmp;
    int i, j;

    i = this.n;
    g = new int[i];

    // get the temporary variable
    tmp = this.temp;
    if (tmp == null) {
      this.temp = tmp = new int[i];
    }

    // initialize the temporary variable by setting the value of the ith
    // element to i
    for (; (--i) >= 0;) {
      tmp[i] = i;
    }

    i = this.n;
    // repeat for all genes in the genotype
    while (i > 0) {
      // select the next number for the permutation from the i available
      // ones randomly by choosing its index uniformly distributed in 0..i-1
      j = r.nextInt(i);

      // in each step, there is one number less
      i--;

      // put the selected number from index j to index i in the genotype
      g[i] = tmp[j];

      // delete the jth number from the temporary variable by replacing it
      // with the last number in the array. Since we decreased i already,
      // that last number could be accessed in the next iteration anyway
      tmp[j] = tmp[i];
    }

    return g;
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return super.getName(true);
    }
    return "IP-U"; //$NON-NLS-1$
  }
}
56.2.1.3.2 Unary Search Operations: mutate : G → G
Here we provide mutation operations for permutations. Such unary operators take one
integer array as parameter and return a slightly modified version. The integer arrays have
length n and represent permutations of the first n natural numbers, i.e., from 0 to n − 1. The
mutation operations hence must ensure that no duplicate alleles occur and that no number
is lost. They may achieve this by simply swapping the elements of the arrays. Listing 56.38
swaps two elements according to Algorithm 29.4 on page 333 whereas Listing 56.39 is an
implementation of Algorithm 29.5 on page 334.
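That swapping preserves the permutation property can be verified directly: a swap only exchanges two alleles and never duplicates or drops one. The following standalone sketch (class and method names are ours, not part of the goataa package; it assumes arrays of length at least 2) applies a single random swap and checks the invariant:

```java
import java.util.Random;

public final class SwapCheck {

  /** Swap the alleles at two distinct, randomly chosen loci (length >= 2). */
  public static int[] singleSwap(final int[] g, final Random r) {
    final int[] gnew = g.clone();
    final int i = r.nextInt(gnew.length);
    int j;
    do {
      j = r.nextInt(gnew.length); // draw a second, different locus
    } while (i == j);
    final int t = gnew[i];
    gnew[i] = gnew[j];
    gnew[j] = t;
    return gnew;
  }

  /** Check that g contains each of the numbers 0..n-1 exactly once. */
  public static boolean isPermutation(final int[] g) {
    final boolean[] seen = new boolean[g.length];
    for (final int v : g) {
      if ((v < 0) || (v >= g.length) || seen[v]) {
        return false;
      }
      seen[v] = true;
    }
    return true;
  }

  public static void main(final String[] args) {
    final int[] g = { 0, 1, 2, 3, 4, 5 };
    final int[] mutated = singleSwap(g, new Random());
    if (!isPermutation(mutated)) {
      throw new AssertionError("swap destroyed the permutation property");
    }
    System.out.println("still a valid permutation");
  }
}
```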
Listing 56.38: Modify a permutation by swapping two genes.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.searchOperations.strings.permutation.unary;

import java.util.Random;

import org.goataa.impl.searchOperations.UnarySearchOperation;
import org.goataa.spec.IUnarySearchOperation;

/**
 * A unary search operation (see Section 4.2 on page 83) for
 * permutations of n elements expressed as integer arrays of length n which
 * works according to Algorithm 29.4 on page 333. This
 * operation takes an existing genotype (see Definition D4.2 on page 82),
 * picks two different loci (Definition D4.5 on page 83) uniformly
 * distributed in 0..n-1, and swaps the alleles (Definition D4.4 on page 83)
 * of the genes (Definition D4.3 on page 82) at these positions.
 *
 * @author Thomas Weise
 */
public final class IntPermutationSingleSwapMutation extends
    UnarySearchOperation<int[]> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /**
   * the globally shared instance of the int permutation single swap
   * mutation
   */
  public static final IUnarySearchOperation<int[]> INT_PERMUTATION_SINGLE_SWAP_MUTATION =
      new IntPermutationSingleSwapMutation();

  /** Create a new permutation mutation operation */
  protected IntPermutationSingleSwapMutation() {
    super();
  }

  /**
   * This is an unary search operation for permutations as defined in
   * Algorithm 29.4 on page 333. It takes one existing
   * genotype g (see Definition D4.2 on page 82) from the search space and
   * produces one new genotype. This new element is a slightly modified
   * version of g which is obtained by swapping two elements in the
   * permutation.
   *
   * @param g
   *          the existing genotype in the search space from which a
   *          slightly modified copy should be created
   * @param r
   *          the random number generator
   * @return a new genotype
   */
  @Override
  public final int[] mutate(final int[] g, final Random r) {
    final int[] gnew;
    int i, j, t;

    // copy g
    gnew = g.clone();

    // draw the first index i
    i = r.nextInt(g.length);

    // draw a second index j which is different from i
    do {
      j = r.nextInt(g.length);
    } while (i == j);

    t = gnew[i];
    gnew[i] = gnew[j];
    gnew[j] = t;

    return gnew;
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return super.getName(true);
    }
    return "IP-SS"; //$NON-NLS-1$
  }
}
Listing 56.39: Modify a permutation by repeatedly swapping two genes.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.searchOperations.strings.permutation.unary;

import java.util.Random;

import org.goataa.impl.searchOperations.UnarySearchOperation;
import org.goataa.spec.IUnarySearchOperation;

/**
 * A unary search operation (see Section 4.2 on page 83) for
 * permutations of n elements expressed as integer arrays of length n which
 * works according to Algorithm 29.5 on page 334. This
 * operation takes an existing genotype (see Definition D4.2 on page 82)
 * and iteratively picks two different loci (Definition D4.5 on page 83)
 * uniformly distributed in 0..n-1, and swaps the alleles
 * (Definition D4.4 on page 83) of the genes (Definition D4.3 on page 82)
 * at these positions. This is repeated a number of times which is roughly
 * exponentially distributed.
 *
 * @author Thomas Weise
 */
public final class IntPermutationMultiSwapMutation extends
    UnarySearchOperation<int[]> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /**
   * the globally shared instance of the int permutation multi swap
   * mutation
   */
  public static final IUnarySearchOperation<int[]> INT_PERMUTATION_MULTI_SWAP_MUTATION =
      new IntPermutationMultiSwapMutation();

  /** Create a new permutation mutation operation */
  protected IntPermutationMultiSwapMutation() {
    super();
  }

  /**
   * This is an unary search operation for permutations according to
   * Algorithm 29.5 on page 334. It takes one existing
   * genotype g (see Definition D4.2 on page 82) from the search space and
   * produces one new genotype. This new element is a slightly modified
   * version of g which is obtained by repeatedly swapping two elements in
   * the permutation.
   *
   * @param g
   *          the existing genotype in the search space from which a
   *          slightly modified copy should be created
   * @param r
   *          the random number generator
   * @return a new genotype
   */
  @Override
  public final int[] mutate(final int[] g, final Random r) {
    final int[] gnew;
    int i, j, t;

    // copy g
    gnew = g.clone();

    do {

      // draw the first index i
      i = r.nextInt(g.length);

      // draw a second index j which is different from i
      do {
        j = r.nextInt(g.length);
      } while (i == j);

      t = gnew[i];
      gnew[i] = gnew[j];
      gnew[j] = t;

      // repeat with 50% probability -> roughly exponentially distributed
    } while (r.nextBoolean());

    return gnew;
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return super.getName(true);
    }
    return "IP-MS"; //$NON-NLS-1$
  }
}
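The repeat-with-50%-probability loop in Listing 56.39 performs k swaps with probability 2^−k, i.e., the swap count follows a geometric distribution with mean 2. A small standalone simulation (our own sketch, not part of the package) makes this concrete:

```java
import java.util.Random;

public final class MultiSwapRepeatCount {

  /** Count the swaps one run of the multi-swap loop would perform. */
  public static int swapsPerRun(final Random r) {
    int count = 0;
    do {
      count++; // one swap per loop iteration
    } while (r.nextBoolean()); // repeat with probability 1/2
    return count;
  }

  public static void main(final String[] args) {
    final Random r = new Random(1);
    final int runs = 100000;
    long total = 0;
    for (int k = 0; k < runs; k++) {
      total += swapsPerRun(r);
    }
    // geometric distribution starting at 1 with p = 1/2: the mean is 2
    System.out.println("average swaps per mutation: "
        + (((double) total) / runs));
  }
}
```

The printed average should lie very close to 2, confirming that most mutations stay small while occasional larger jumps remain possible.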
56.2.2 Operations for Tree-based Search Spaces
Reproduction operations for (strongly-typed) tree genomes, as introduced in Section 31.3
on page 386.
56.2.2.1 Utility and Base Classes
Listing 56.40: The base class for search operations for trees.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.searchOperations.trees;

import java.util.Random;

import org.goataa.impl.OptimizationModule;
import org.goataa.impl.searchSpaces.trees.Node;
import org.goataa.impl.searchSpaces.trees.NodeType;
import org.goataa.impl.searchSpaces.trees.NodeTypeSet;

/**
 * A base class for tree-based search operations
 *
 * @author Thomas Weise
 */
public class TreeOperation extends OptimizationModule {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the maximum depth */
  public final int maxDepth;

  /**
   * Create a new tree operation
   *
   * @param md
   *          the maximum tree depth
   */
  protected TreeOperation(final int md) {
    super();
    this.maxDepth = Math.max(2, md);
  }

  /**
   * Get the full configuration which holds all the data necessary to
   * describe this object.
   *
   * @param longVersion
   *          true if the long version should be returned, false if the
   *          short version should be returned
   * @return the full configuration
   */
  @Override
  public String getConfiguration(final boolean longVersion) {
    return (((longVersion) ? "maxDepth" : //$NON-NLS-1$
        "md") + this.maxDepth); //$NON-NLS-1$
  }

  /**
   * Create a sub-tree of the specified maximum depth. This function
   * facilitates both an implementation of the full method
   * (Algorithm 31.1 on page 390) and one of the grow method
   * (Algorithm 31.2 on page 391), which can be chosen by setting the
   * parameter full appropriately. It can be used to create trees during
   * the random population initialization phase or during mutation steps.
   *
   * @param types
   *          the node types available for creating the tree
   * @param maxDepth
   *          the maximum depth of the tree
   * @param full
   *          Should we construct a sub-tree according to the full method?
   *          If full is false, grow is used.
   * @param r
   *          the random number generator
   * @return the new tree
   * @param <NT>
   *          the node type
   */
  @SuppressWarnings("unchecked")
  public static final <NT extends Node<NT>> NT createTree(
      final NodeTypeSet<NT> types, final int maxDepth, final boolean full,
      final Random r) {
    NodeType<NT, NT> t;
    NT[] x;
    int i;

    t = null;
    if (maxDepth <= 1) {
      t = types.randomTerminalType(r);
    } else {
      if (full) {
        t = types.randomNonTerminalType(r);
      }
      if (t == null) {
        t = types.randomType(r);
      }
    }
    if (t.isTerminal()) {
      return types.randomTerminalType(r).instantiate(null, r);
    }

    i = t.getChildCount();
    x = ((NT[]) (new Node[i]));
    for (; (--i) >= 0;) {
      x[i] = createTree(t.getChildTypes(i), maxDepth - 1, full, r);
    }
    return t.instantiate(x, r);
  }
}
Listing 56.41: A utility class to construct random paths in a tree, i.e., to select random
nodes.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.searchOperations.trees;

import java.util.Random;

import org.goataa.impl.OptimizationModule;
import org.goataa.impl.searchSpaces.trees.Node;

/**
 * This class allows us to select a path in a tree. When applying a
 * reproduction operation, it is not only necessary to find a tree node
 * within the tree, but also to find the complete path to this node. This
 * class allows us to perform such a selection. It also allows us to
 * replace the end node of such a path, hence creating a completely new
 * tree: Since genotypes must not directly be changed (because of possible
 * side-effects if an individual is selected more than once), all
 * modifications (such as the replacement of a node) will lead to the
 * creation of new trees.
 *
 * @param <NT>
 *          the node type
 * @author Thomas Weise
 */
public class TreePath<NT extends Node<NT>> extends OptimizationModule {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the path */
  private NT[] path;

  /** the path indexes */
  private int[] pathidx;

  /** the length */
  private int len;

  /** Create a tree path */
  @SuppressWarnings("unchecked")
  public TreePath() {
    super();
    this.path = ((NT[]) (new Node[16]));
    this.pathidx = new int[16];
  }

  /**
   * Get the path length
   *
   * @return the path length
   */
  public final int size() {
    return this.len;
  }

  /**
   * Get the element at the specified index
   *
   * @param index
   *          the index
   * @return the element
   */
  public final NT get(final int index) {
    return this.path[index];
  }

  /**
   * Get the index of the next child in the path
   *
   * @param index
   *          the index of the current parent
   * @return the index of the next child in the path
   */
  public final int getChildIndex(final int index) {
    return this.pathidx[index];
  }

  /**
   * Create a random path through the tree. Each node in the tree is
   * selected with exactly the same probability.
   *
   * @param node
   *          the node
   * @param r
   *          the randomizer
   */
  public final void randomPath(final NT node, final Random r) {
    int i, w, w2;
    NT cur, next;

    this.len = 0;
    cur = node;
    w = r.nextInt(node.getWeight());

    for (;;) {

      if (w <= 0) {
        this.addPath(cur, -1);
        return;
      }

      // iterate over the children
      innerLoop: for (i = cur.size(); (--i) >= 0;) {
        next = cur.get(i);

        w2 = next.getWeight();
        if (w2 >= w) {
          this.addPath(cur, i);
          cur = next;
          break innerLoop;
        }
        w -= w2;
      }

      // account for the currently selected node
      w--;
    }
  }

  /**
   * An internal method used to add an element to the path.
   *
   * @param node
   *          the node
   * @param pi
   *          the parent index
   */
  @SuppressWarnings("unchecked")
  private final void addPath(final NT node, final int pi) {
    NT[] ppp;
    int l;
    int[] idx;

    ppp = this.path;
    idx = this.pathidx;
    l = this.len;

    if (l >= ppp.length) {
      ppp = (NT[]) (new Node[l << 1]);
      System.arraycopy(this.path, 0, ppp, 0, l);
      this.path = ppp;

      idx = new int[l << 1];
      System.arraycopy(this.pathidx, 0, idx, 0, l);
      this.pathidx = idx;
    }

    idx[l] = pi;
    ppp[l++] = node;

    this.len = l;
  }

  /**
   * Replace the end of this path with the new node. Since tree nodes are
   * immutable, this will result in the creation of a completely new tree
   * and a new root node. The path is updated while this operation is
   * performed, i.e., it is still valid afterwards.
   *
   * @param newNode
   *          the new node
   * @return the resulting new root node of the path
   */
  public final NT replaceEnd(final NT newNode) {
    NT[] ppp;
    int l;
    int[] idx;
    NT x;

    ppp = this.path;
    idx = this.pathidx;
    x = newNode;

    l = this.len;
    ppp[--l] = x;
    for (; (--l) >= 0;) {
      ppp[l] = x = ppp[l].setChild(x, idx[l]);
    }

    return x;
  }
}
56.2.2.2 Nullary Search Operations for Trees
Tree creation operations as discussed in Section 31.3.2 on page 389 (but for strongly-typed
trees).
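The difference between the full and the grow method that these creation operations build on can be illustrated with a simplified, untyped sketch for a toy expression grammar (the class and its tiny `Expr` node are our own illustration, not part of the goataa type system): full always chooses non-terminals until the depth limit is reached, while grow may stop earlier.

```java
import java.util.Random;

public final class TreeCreationSketch {

  /** A minimal expression node: an operator with children, or a constant. */
  static final class Expr {
    final String op; // "+", "*", or a constant literal
    final Expr[] children;

    Expr(final String op, final Expr... children) {
      this.op = op;
      this.children = children;
    }

    /** the depth of this (sub-)tree: 1 for a leaf */
    int depth() {
      int d = 0;
      for (final Expr c : this.children) {
        d = Math.max(d, c.depth());
      }
      return d + 1;
    }
  }

  /**
   * Create a random expression tree. With full == true, non-terminals are
   * chosen until the depth limit forces a terminal; with full == false
   * (grow), terminals may also be chosen earlier.
   */
  static Expr createTree(final int maxDepth, final boolean full,
      final Random r) {
    final boolean terminal = (maxDepth <= 1) || ((!full) && r.nextBoolean());
    if (terminal) {
      return new Expr(Integer.toString(r.nextInt(10)));
    }
    final String op = r.nextBoolean() ? "+" : "*";
    return new Expr(op, createTree(maxDepth - 1, full, r),
        createTree(maxDepth - 1, full, r));
  }

  public static void main(final String[] args) {
    final Random r = new Random(3);
    // full trees always reach the depth limit exactly
    System.out.println("full depth: " + createTree(5, true, r).depth());
    // grow trees have some depth between 1 and the limit
    System.out.println("grow depth: " + createTree(5, false, r).depth());
  }
}
```

Ramped-half-and-half then simply mixes the two: it draws the depth limit uniformly from an allowed range and flips a fair coin between full and grow for each tree.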
Listing 56.42: The ramped-half-and-half creation procedure.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.searchOperations.trees.nullary;

import java.util.Random;

import org.goataa.impl.searchOperations.trees.TreeOperation;
import org.goataa.impl.searchSpaces.trees.Node;
import org.goataa.impl.searchSpaces.trees.NodeTypeSet;
import org.goataa.spec.INullarySearchOperation;

/**
 * A tree creator using the ramped-half-and-half method as described in
 * Section 31.3.2.3 on page 390.
 *
 * @param <NT>
 *          the node type
 * @author Thomas Weise
 */
public class TreeRampedHalfAndHalf<NT extends Node<NT>> extends TreeOperation
    implements INullarySearchOperation<NT> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the types to choose from */
  public final NodeTypeSet<NT> types;

  /**
   * Create a new ramped-half-and-half
   *
   * @param md
   *          the maximum tree depth
   * @param ptypes
   *          the types
   */
  public TreeRampedHalfAndHalf(final NodeTypeSet<NT> ptypes, final int md) {
    super(md);
    this.types = ptypes;
  }

  /**
   * Create a new tree according to the ramped-half-and-half method given
   * in Algorithm 31.3 on page 391.
   *
   * @param r
   *          the random number generator
   * @return a new genotype (see Definition D4.2 on page 82)
   */
  public NT create(final Random r) {
    return TreeOperation.createTree(this.types,//
        (2 + r.nextInt(this.maxDepth)), r.nextBoolean(), r);
  }

  /**
   * Get the full configuration which holds all the data necessary to
   * describe this object.
   *
   * @param longVersion
   *          true if the long version should be returned, false if the
   *          short version should be returned
   * @return the full configuration
   */
  @Override
  public String getConfiguration(final boolean longVersion) {
    return super.getConfiguration(longVersion) + ','
        + this.types.toString(longVersion);
  }
}
56.2.2.3 Unary Search Operations for Trees
Unary tree reproduction operations as discussed in Section 31.3.4 on page 393.
Listing 56.43: A simple sub-tree replacement mutator.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.searchOperations.trees.unary;

import java.util.Random;

import org.goataa.impl.searchOperations.trees.TreeOperation;
import org.goataa.impl.searchOperations.trees.TreePath;
import org.goataa.impl.searchSpaces.trees.Node;
import org.goataa.spec.IUnarySearchOperation;

/**
 * A simple mutation operation for a given tree which implants a randomly
 * created subtree into the parent, thereby replacing a randomly picked node
 * in the parent. This operation basically proceeds according to the ideas
 * discussed in Section 31.3.4.1 on page 393 with the extension
 * that it also respects the type system of the strongly-typed GP system.
 *
 * @author Thomas Weise
 */
public class TreeMutator extends TreeOperation implements
    IUnarySearchOperation<Node<?>> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the internal path */
  private final TreePath<?> path;

  /**
   * Create a new tree mutation operation
   *
   * @param md
   *          the maximum tree depth
   */
  @SuppressWarnings("unchecked")
  public TreeMutator(final int md) {
    super(md);
    this.path = new TreePath();
  }

  /**
   * This is the unary search operation. It takes one existing genotype g
   * (see Definition D4.2 on page 82) from the genome and produces one new
   * element in the search space. This new element is usually a copy of the
   * existing element g which is slightly modified in a random manner. For
   * this purpose, we pass a random number generator in as parameter so we
   * can use the same random number generator in all parts of an
   * optimization algorithm.
   *
   * @param g
   *          the existing genotype in the search space from which a
   *          slightly modified copy should be created
   * @param r
   *          the random number generator
   * @return a new genotype
   */
  @SuppressWarnings("unchecked")
  public Node<?> mutate(final Node<?> g, final Random r) {
    int trials, len, idx;
    Node sel, nu, par;
    TreePath p;

    p = this.path;
    for (trials = 100; trials > 0; trials--) {

      p.randomPath(g, r);
      len = p.size() - 1;

      sel = p.get(len);
      if (sel.isTerminal()) {
        nu = sel.getType().mutate(sel, r);
      } else {
        nu = sel;
      }
      if ((nu == sel) && (len > 0)) {
        len--;
        idx = p.getChildIndex(len);
        par = p.get(len);
        nu = TreeOperation.createTree(par.getType().getChildTypes(idx),
            this.maxDepth - len + 1, false, r);
        if (nu == null) {
          nu = sel;
        }
      }

      if (nu != sel) {
        return p.replaceEnd(nu);
      }
    }

    return g;
  }
}
56.2.2.4 Binary Search Operations for Trees
Tree crossover operations as discussed in Section 31.3.5 on page 398 (but for strongly-typed
trees).
Listing 56.44: A binary search operation for trees.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.searchOperations.trees.binary;

import java.util.Random;

import org.goataa.impl.searchOperations.trees.TreeOperation;
import org.goataa.impl.searchOperations.trees.TreePath;
import org.goataa.impl.searchSpaces.trees.Node;
import org.goataa.spec.IBinarySearchOperation;

/**
 * A simple recombination operation for a given tree which implants a
 * randomly chosen subtree from one parent into the other parent, thereby
 * replacing a randomly picked node in the second parent. This operation
 * basically proceeds according to the ideas discussed in
 * Section 31.3.5 on page 398 with the extension that
 * it also respects the type system of the strongly-typed GP system.
 *
 * @author Thomas Weise
 */
public class TreeRecombination extends TreeOperation implements
    IBinarySearchOperation<Node<?>> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the internal path 1 */
  private final TreePath<?> pa1;

  /** the internal path 2 */
  private final TreePath<?> pa2;

  /**
   * Create a new tree recombination operation
   *
   * @param md
   *          the maximum tree depth
   */
  @SuppressWarnings("unchecked")
  public TreeRecombination(final int md) {
    super(md);
    this.pa1 = new TreePath();
    this.pa2 = new TreePath();
  }

  /**
   * This is the binary search operation. It takes two existing genotypes
   * p1 and p2 (see Definition D4.2 on page 82) from the genome and produces
   * one new element in the search space. There are two basic assumptions
   * about this operator: 1) Its input elements are good because they have
   * previously been selected. 2) It is somehow possible to combine these
   * good traits and hence, to obtain a single individual which unites them
   * and thus, has even better overall qualities than its parents. The
   * original underlying idea of this operation is the
   * "Building Block Hypothesis" (see
   * Section 29.5.5 on page 346) for which, so far, not much
   * evidence has been found. The hypothesis
   * "Genetic Repair and Extraction" (see Section 29.5.6 on page 346)
   * has been developed as an alternative to explain the positive aspects
   * of binary search operations such as recombination.
   *
   * @param p1
   *          the first "parent" genotype
   * @param p2
   *          the second "parent" genotype
   * @param r
   *          the random number generator
   * @return a new genotype
   */
  @SuppressWarnings("unchecked")
  public Node<?> recombine(final Node<?> p1, final Node<?> p2,
      final Random r) {
    TreePath pt1, pt2;
    int trials, i;
    Node<?> e1, e2;

    pt1 = this.pa1;
    pt2 = this.pa2;

    outer: for (trials = 100; trials > 0; trials--) {
      pt1.randomPath(p1, r);
      pt2.randomPath(p2, r);

      i = pt1.size();
      if (i <= 1) {
        continue outer;
      }

      e2 = pt2.get(pt2.size() - 1);
      i -= 2;
      e1 = pt1.get(i);
      if (((i + e2.getHeight() + 1) < this.maxDepth)
          && e1.getType().getChildTypes(pt1.getChildIndex(i))
              .containsType(e2.getType())) {
        return pt1.replaceEnd(e2);
      }
    }

    return (r.nextBoolean() ? p1 : p2);
  }
}
56.2.3 Multiplexing Operators
Listing 56.45: A multiplexing nullary search operation.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.searchOperations;

import java.util.Random;

import org.goataa.spec.INullarySearchOperation;

/**
 * A simple class that allows to randomly choose between multiple different
 * creators (i.e., nullary search operations as defined in
 * Definition D4.6 on page 83).
 *
 * @param <G>
 *          the search space
 * @author Thomas Weise
 */
public class MultiCreator<G> extends NullarySearchOperation<G> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the internally available operations */
  private final INullarySearchOperation<G>[] o;

  /**
   * Create a new multi-creator
   *
   * @param ops
   *          the search operations
   */
  public MultiCreator(final INullarySearchOperation<G>... ops) {
    super();

    this.o = ops;
  }

  /**
   * This operation tries to create a genotype by randomly applying one of
   * its sub-creators.
   *
   * @param r
   *          the random number generator
   * @return a new genotype (see Definition D4.2 on page 82)
   */
  @Override
  public final G create(final Random r) {
    return this.o[r.nextInt(this.o.length)].create(r);
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return super.getName(true);
    }
    return "MC"; //$NON-NLS-1$
  }

  /**
   * Get the full configuration which holds all the data necessary to
   * describe this object.
   *
   * @param longVersion
   *          true if the long version should be returned, false if the
   *          short version should be returned
   * @return the full configuration
   */
  @Override
  public String getConfiguration(final boolean longVersion) {
    StringBuilder sb;
    int i;

    sb = new StringBuilder();
    for (i = 0; i < this.o.length; i++) {
      if (i > 0) {
        sb.append(',');
      }
      sb.append(this.o[i].toString(longVersion));
    }

    return sb.toString();
  }
}
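The create method above reduces to a uniform random choice among the sub-creators. The standalone helper below is an illustrative sketch of that pattern; the Supplier-based signature and the class name are assumptions for the example, not part of the goataa API:

```java
import java.util.Random;
import java.util.function.Supplier;

// Illustrative sketch: uniformly pick one creator and delegate to it,
// mirroring what MultiCreator.create does. Not part of the goataa API.
public final class MultiCreateSketch {
  @SafeVarargs
  public static <G> G create(final Random r, final Supplier<G>... creators) {
    // each sub-creator is chosen with probability 1/creators.length
    return creators[r.nextInt(creators.length)].get();
  }
}
```

Because the choice is uniform, every sub-creator contributes the same fraction of new genotypes in expectation, regardless of how the operators differ internally.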
Listing 56.46: A multiplexing unary search operation.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.searchOperations;

import java.util.Random;

import org.goataa.spec.IUnarySearchOperation;

/**
 * A simple class that allows to randomly choose between multiple different
 * mutators, i.e., unary search operators (see
 * Definition D4.6 on page 83).
 *
 * @param <G>
 *          the search space
 * @author Thomas Weise
 */
public class MultiMutator<G> extends UnarySearchOperation<G> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the internally available operations */
  private final IUnarySearchOperation<G>[] o;

  /** the temporary array */
  private final int[] tmp;

  /** the permutation */
  private final int[] perm;

  /**
   * Create a new multi-mutator
   *
   * @param ops
   *          the search operations
   */
  public MultiMutator(final IUnarySearchOperation<G>... ops) {
    super();

    int[] x;
    int l;

    this.o = ops;
    l = ops.length;
    this.tmp = new int[l];
    this.perm = x = new int[l];
    for (; (--l) >= 0;) {
      x[l] = l;
    }
  }

  /**
   * This operation tries to mutate a genotype by randomly applying one of
   * its sub-mutators. If a sub-mutator is not able to provide a new
   * genotype, i.e., returns null or its input, then we try the next
   * sub-mutator, and so on.
   *
   * @param g
   *          the existing genotype in the search space from which a
   *          slightly modified copy should be created
   * @param r
   *          the random number generator
   * @return a new genotype
   */
  @Override
  public final G mutate(final G g, final Random r) {
    final int[] a;
    final IUnarySearchOperation<G>[] ops;
    int i, c;
    G res;

    ops = this.o;
    a = this.tmp;
    c = a.length;
    System.arraycopy(this.perm, 0, a, 0, c);

    while (c > 0) {
      i = r.nextInt(c);
      res = ops[a[i]].mutate(g, r);
      if ((res != null) && (res != g)) {
        return res;
      }
      a[i] = a[--c];
    }

    return super.mutate(g, r);
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return super.getName(true);
    }
    return "MM"; //$NON-NLS-1$
  }

  /**
   * Get the full configuration which holds all the data necessary to
   * describe this object.
   *
   * @param longVersion
   *          true if the long version should be returned, false if the
   *          short version should be returned
   * @return the full configuration
   */
  @Override
  public String getConfiguration(final boolean longVersion) {
    StringBuilder sb;
    int i;

    sb = new StringBuilder();
    for (i = 0; i < this.o.length; i++) {
      if (i > 0) {
        sb.append(',');
      }
      sb.append(this.o[i].toString(longVersion));
    }

    return sb.toString();
  }
}
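The mutate method above visits the sub-mutators in random order without repetition: it copies an identity permutation into a scratch array and swap-removes each tried index. A self-contained sketch of just that sampling scheme (class and method names are illustrative, not part of the goataa API):

```java
import java.util.Random;

// Illustrative sketch of the swap-remove scheme used in MultiMutator.mutate:
// emit the indices 0..n-1 in random order, each exactly once.
public final class RandomOrderSketch {
  public static int[] randomOrder(final int n, final Random r) {
    final int[] a = new int[n]; // scratch copy of the identity permutation
    for (int k = 0; k < n; k++) {
      a[k] = k;
    }
    final int[] out = new int[n];
    int c = n;   // number of indices not yet emitted
    int pos = 0;
    while (c > 0) {
      final int i = r.nextInt(c); // pick one of the c remaining slots
      out[pos++] = a[i];
      a[i] = a[--c]; // swap-remove: overwrite the used slot with the last one
    }
    return out;
  }
}
```

The swap-remove trick keeps each draw O(1) and needs no shuffling of the whole array up front, which is why the operator classes pre-allocate `perm` once and only copy it per call.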
Listing 56.47: A multiplexing binary search operation.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.searchOperations;

import java.util.Random;

import org.goataa.spec.IBinarySearchOperation;

/**
 * A simple class that allows to randomly choose between multiple different
 * recombination (binary) search operators (see
 * Definition D4.6 on page 83) as given in
 * Definition D28.14 on page 307.
 *
 * @param <G>
 *          the search space
 * @author Thomas Weise
 */
public class MultiRecombinator<G> extends BinarySearchOperation<G> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the internally available operations */
  private final IBinarySearchOperation<G>[] o;

  /** the temporary array */
  private final int[] tmp;

  /** the permutation */
  private final int[] perm;

  /**
   * Create a new multi-recombinator
   *
   * @param ops
   *          the search operations
   */
  public MultiRecombinator(final IBinarySearchOperation<G>... ops) {
    super();

    int[] x;
    int l;

    this.o = ops;
    l = ops.length;
    this.tmp = new int[l];
    this.perm = x = new int[l];
    for (; (--l) >= 0;) {
      x[l] = l;
    }
  }

  /**
   * Choose a recombination operation randomly from the available
   * operators.
   *
   * @param p1
   *          the first "parent" genotype
   * @param p2
   *          the second "parent" genotype
   * @param r
   *          the random number generator
   * @return a new genotype
   */
  @Override
  public final G recombine(final G p1, final G p2, final Random r) {
    final int[] a;
    final IBinarySearchOperation<G>[] ops;
    int i, c;
    G res;
    IBinarySearchOperation<G> ox;

    ops = this.o;
    a = this.tmp;
    c = a.length;
    System.arraycopy(this.perm, 0, a, 0, c);

    while (c > 0) {
      i = r.nextInt(c);
      ox = ops[a[i]];
      res = ox.recombine(p1, p2, r);
      if ((res == p1) || (res == p2)) {
        // the operator returned a parent unchanged: retry with swapped roles
        res = ox.recombine(p2, p1, r);
      }
      if ((res != null) && (res != p1) && (res != p2)) {
        return res;
      }
      a[i] = a[--c];
    }

    return super.recombine(p1, p2, r);
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return super.getName(true);
    }
    return "MX"; //$NON-NLS-1$
  }

  /**
   * Get the full configuration which holds all the data necessary to
   * describe this object.
   *
   * @param longVersion
   *          true if the long version should be returned, false if the
   *          short version should be returned
   * @return the full configuration
   */
  @Override
  public String getConfiguration(final boolean longVersion) {
    StringBuilder sb;
    int i;

    sb = new StringBuilder();
    for (i = 0; i < this.o.length; i++) {
      if (i > 0) {
        sb.append(',');
      }
      sb.append(this.o[i].toString(longVersion));
    }

    return sb.toString();
  }
}
56.3 Data Structures for Special Search Spaces
56.3.1 Tree-based Search Spaces as used in Genetic Programming
Data structures to support the synthesis of strongly-typed trees, as discussed in Section 31.3.
Listing 56.48: A base class for tree nodes.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.searchSpaces.trees;

import java.util.Arrays;

import org.goataa.impl.OptimizationModule;

/**
 * A class which represents tree nodes. Trees are the search spaces of
 * Standard Genetic Programming (SGP, see Section 31.3). The node class
 * here could, however, also be used as problem space (in SGP, search and
 * problem space are the same) for a different search space. Our node class
 * here supports strongly-typed GP. In other words, we can define which
 * type of node is allowed as child for which other node.
 *
 * @param <CT>
 *          the child node type
 * @author Thomas Weise
 */
public class Node<CT extends Node<CT>> extends OptimizationModule
    implements Cloneable {
  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** no children */
  public static final Node<?>[] EMPTY_CHILDREN = new Node[0];

  /** the children */
  Node<?>[] children;

  /** the node type record */
  private final NodeType<? extends CT, CT> type;

  /** the tree weight */
  private int weight;

  /** the maximum depth of all children */
  private int height;

  /**
   * Create a node with the given children
   *
   * @param pchildren
   *          the child nodes
   * @param in
   *          the node type record
   */
  @SuppressWarnings("unchecked")
  public Node(final Node<?>[] pchildren,
      final NodeType<? extends CT, CT> in) {
    super();

    this.children = (((pchildren != null) && (pchildren.length > 0)) ? pchildren
        : ((CT[]) EMPTY_CHILDREN));

    this.type = in;
    this.computeParams();
  }

  /**
   * Compute the fixed parameters of this node. This method is called
   * whenever a reproduced copy of the node is created or when a new node
   * is created with the constructor.
   */
  protected void computeParams() {
    int i, w, h;
    Node<?>[] x;
    Node<?> v;

    x = this.children;

    w = 1;
    h = 0;
    for (i = x.length; (--i) >= 0;) {
      v = x[i];
      w += v.weight;
      h = Math.max(h, v.height);
    }
    this.weight = w;
    this.height = (h + 1);
  }

  /**
   * Get the full configuration which holds all the data necessary to
   * describe this object.
   *
   * @param longVersion
   *          true if the long version should be returned, false if the
   *          short version should be returned
   * @return the full configuration
   */
  @Override
  public final String getConfiguration(final boolean longVersion) {
    StringBuilder sb;

    sb = new StringBuilder();
    this.fillInText(sb);
    return sb.toString();
  }

  /**
   * Get the number of children
   *
   * @return the number of children
   */
  public final int size() {
    return this.children.length;
  }

  /**
   * Get a specific child
   *
   * @param idx
   *          the child index
   * @return the child at that index
   */
  @SuppressWarnings("unchecked")
  public final CT get(final int idx) {
    return ((CT) (this.children[idx]));
  }

  /**
   * Set the child at the given index and create a new tree with the child
   * at that index. As usual, we follow the paradigm that genotypes in the
   * search space must not be modified. We can only create a modified copy
   * of a genotype. Hence, setting the child of a specific node object does
   * not change the node object itself, but results in a copy of the node
   * object where the child is set at the specific position.
   *
   * @param child
   *          the child to be stored at the given index
   * @param idx
   *          the index where to insert the child
   * @return the new tree node with that child
   */
  public final CT setChild(final CT child, final int idx) {
    CT x;

    x = this.clone();
    x.children = x.children.clone();
    x.children[idx] = child;
    x.computeParams();
    return x;
  }

  /**
   * Clone this node
   *
   * @return a copy of this node
   */
  @SuppressWarnings("unchecked")
  @Override
  protected CT clone() {
    try {
      return ((CT) (super.clone()));
    } catch (Throwable t) {
      return null;
    }
  }

  /**
   * Fill in the text associated with this node
   *
   * @param sb
   *          the string builder
   */
  public void fillInText(final StringBuilder sb) {
    sb.append(super.getConfiguration(false));
  }

  /**
   * Get the node type record associated with this node
   *
   * @return the node type record associated with this node
   */
  public final NodeType<? extends CT, CT> getType() {
    return this.type;
  }

  /**
   * Get the string representation of this object, i.e., the name and
   * configuration.
   *
   * @param longVersion
   *          true if the long version should be returned, false if the
   *          short version should be returned
   * @return the string version of this object
   */
  @Override
  public String toString(final boolean longVersion) {
    return this.getConfiguration(longVersion);
  }

  /**
   * Obtain the total number of nodes in this tree, i.e., one plus the
   * number of nodes in all of its sub-trees.
   *
   * @return the total number of nodes in this tree (including this node)
   */
  public final int getWeight() {
    return this.weight;
  }

  /**
   * Get the maximum depth of all child nodes
   *
   * @return the maximum depth of all child nodes
   */
  public final int getHeight() {
    return this.height;
  }

  /**
   * Is this a terminal node?
   *
   * @return true if this node is terminal, i.e., a leaf, false otherwise
   */
  public final boolean isTerminal() {
    return (this.weight <= 1);
  }

  /**
   * Compare with another object
   *
   * @param o
   *          the other object
   * @return true if the objects are equal
   */
  @Override
  @SuppressWarnings("unchecked")
  public boolean equals(final Object o) {
    if (o == null) {
      return false;
    }
    if (o == this) {
      return true;
    }
    if (o.getClass() == this.getClass()) {
      return Arrays.equals(this.children, ((Node) o).children);
    }
    return false;
  }
}
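The bookkeeping in computeParams is bottom-up: weight is one plus the sum of the child weights, and height is one plus the maximum child height, so a leaf has weight 1 and height 1. A minimal standalone sketch of the same invariants (the TinyTree class is illustrative only, not part of the goataa API):

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of Node.computeParams' bottom-up bookkeeping:
// weight = 1 + sum of child weights, height = 1 + max child height.
public final class TinyTree {
  final List<TinyTree> children;
  final int weight;
  final int height;

  public TinyTree(final TinyTree... kids) {
    this.children = Arrays.asList(kids);
    int w = 1, h = 0;
    for (final TinyTree c : kids) {
      w += c.weight;
      h = Math.max(h, c.height);
    }
    this.weight = w;
    this.height = h + 1; // a leaf thus gets height 1
  }
}
```

Because nodes are immutable and both figures are cached at construction time, operators such as the depth test in the crossover above can query getHeight and getWeight in O(1) instead of re-traversing the tree.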
Listing 56.49: A type description for tree nodes.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.searchSpaces.trees;

import java.util.Random;

import org.goataa.impl.OptimizationModule;

/**
 * The node type of a node has two main functions: 1) It describes which
 * types of children are allowed to occur at which position (if any), and
 * 2) it is used to instantiate nodes of the type. Notice that 1) is the
 * basic requirement for strongly-typed GP: We can specify which type of
 * node can occur as child of which other node and thus, build a complex
 * type system and pre-define node structures precisely. Point 2) somehow
 * reproduces the capabilities of Java's reflection system with the
 * extension of allowing us to perform some additional, randomized actions.
 * As a matter of fact, with the class ReflectionNodeType, we defer the
 * node creation to Java's reflection mechanisms in cases where no
 * randomized instantiation actions are required.
 *
 * @param <NT>
 *          the specific node type
 * @param <CT>
 *          the child node type (i.e., the general type of which NT is an
 *          instance)
 * @author Thomas Weise
 */
public class NodeType<NT extends Node<CT>, CT extends Node<CT>> extends
    OptimizationModule {
  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the key counter */
  static int keyCounter = 0;

  /** no children */
  public static final NodeTypeSet<?>[] EMPTY_CHILDREN = new NodeTypeSet[0];

  /** a list of possible node types for each child */
  private final NodeTypeSet<CT>[] chTypes;

  /** a search key */
  final int key;

  /**
   * Create a new node type record
   *
   * @param ch
   *          the type sets of the possible children
   */
  @SuppressWarnings("unchecked")
  public NodeType(final NodeTypeSet<CT>[] ch) {
    super();

    this.chTypes = (((ch != null) && (ch.length > 0)) ? ((NodeTypeSet[]) ch)
        : (EMPTY_CHILDREN));
    this.key = (keyCounter++);
  }

  /**
   * Get the number of children of this node type
   *
   * @return the number of children of this node type
   */
  public final int getChildCount() {
    return this.chTypes.length;
  }

  /**
   * Get the possible types for the child at the specific index.
   *
   * @param index
   *          the child index
   * @return the possible types of that child
   */
  public final NodeTypeSet<CT> getChildTypes(final int index) {
    return this.chTypes[index];
  }

  /**
   * Instantiate a node
   *
   * @param children
   *          a given set of children
   * @param r
   *          the randomizer
   * @return the new node
   */
  public NT instantiate(final Node<?>[] children, final Random r) {
    throw new UnsupportedOperationException();
  }

  /**
   * Create a new, slightly modified copy of the given node. This may come
   * in handy if a node represents a real constant or something. We would
   * then be able to randomly modify it, which is better than replacing it
   * with a completely random node. If no useful way to modify the node in
   * a suitable way exists, just return the original.
   *
   * @param node
   *          the input node
   * @param r
   *          the randomizer
   * @return the modified node or the original value, if no modification is
   *         possible
   */
  public NT mutate(final NT node, final Random r) {
    return node;
  }

  /**
   * Does this node type describe a terminal node?
   *
   * @return true if this node type describes a terminal node, false
   *         otherwise
   */
  public final boolean isTerminal() {
    return (this.chTypes.length <= 0);
  }
}
Listing 56.50: A set of tree node types.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.searchSpaces.trees;

import java.util.Random;

import org.goataa.impl.OptimizationModule;

/**
 * A set of node type records. For each child position of a genotype, we
 * need to specify such a set of possible child types. This set provides an
 * easy interface to access the possible types, the possible non-leaf
 * types, and the possible leaf types stored in it. Also, we can find very
 * efficiently if a node type is in a node type set (in O(1)), which is a
 * necessary operation of all tree mutation and crossover operations of
 * strongly-typed Genetic Programming.
 *
 * @param <CT>
 *          the node type
 * @author Thomas Weise
 */
public class NodeTypeSet<CT extends Node<CT>> extends OptimizationModule {
  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** no children */
  public static final NodeType<?, ?>[] EMPTY_CHILDREN = new NodeType[0];

  /** the list */
  private NodeType<CT, CT>[] lst;

  /** the number of node type records */
  private int cnt;

  /** the terminal nodes */
  private NodeType<CT, CT>[] terminals;

  /** the number of terminal node type records */
  private int terminalCnt;

  /** the non-terminal nodes */
  private NodeType<CT, CT>[] nonTerminals;

  /** the number of non-terminal node type records */
  private int nonTerminalCnt;

  /** the set of available node types */
  private boolean[] available;

  /** Create a new node type set */
  @SuppressWarnings("unchecked")
  public NodeTypeSet() {
    super();
    this.lst = new NodeType[16];
    this.terminals = new NodeType[16];
    this.nonTerminals = new NodeType[16];
    this.available = new boolean[Math.max(16, NodeType.keyCounter)];
  }

  /**
   * Get the number of entries
   *
   * @return the number of entries
   */
  public final int size() {
    return this.cnt;
  }

  /**
   * Get the node type at the specified index
   *
   * @param index
   *          the index into the information set
   * @return the node type at the specified index
   */
  public final NodeType<CT, CT> get(final int index) {
    return this.lst[index];
  }

  /**
   * Get the number of terminal node types
   *
   * @return the number of terminal node types
   */
  public final int terminalCount() {
    return this.terminalCnt;
  }

  /**
   * Get the terminal node type at the specified index
   *
   * @param index
   *          the index into the information set
   * @return the node type at the specified index
   */
  public final NodeType<CT, CT> getTerminal(final int index) {
    return this.terminals[index];
  }

  /**
   * Get the number of non-terminal node types
   *
   * @return the number of non-terminal node types
   */
  public final int nonTerminalCount() {
    return this.nonTerminalCnt;
  }

  /**
   * Get the non-terminal node type at the specified index
   *
   * @param index
   *          the index into the information set
   * @return the node type at the specified index
   */
  public final NodeType<CT, CT> getNonTerminal(final int index) {
    return this.nonTerminals[index];
  }

  /**
   * Add a new entry to the node type set
   *
   * @param type
   *          the node type to be added
   */
  @SuppressWarnings("unchecked")
  public final void add(final NodeType<? extends CT, CT> type) {
    NodeType<CT, CT>[] l;
    int i;
    boolean[] av;

    if (type != null) {

      l = this.lst;
      i = this.cnt;
      if (i >= l.length) {
        l = new NodeType[i << 1];
        System.arraycopy(this.lst, 0, l, 0, i);
        this.lst = l;
      }
      l[i++] = ((NodeType) type);
      this.cnt = i;

      if (type.isTerminal()) {

        l = this.terminals;
        i = this.terminalCnt;
        if (i >= l.length) {
          l = new NodeType[i << 1];
          System.arraycopy(this.terminals, 0, l, 0, i);
          this.terminals = l;
        }
        l[i++] = ((NodeType) type);
        this.terminalCnt = i;

      } else {

        l = this.nonTerminals;
        i = this.nonTerminalCnt;
        if (i >= l.length) {
          l = new NodeType[i << 1];
          System.arraycopy(this.nonTerminals, 0, l, 0, i);
          this.nonTerminals = l;
        }
        l[i++] = ((NodeType) type);
        this.nonTerminalCnt = i;

      }

      av = this.available;
      i = type.key;
      if (i >= av.length) {
        av = new boolean[i << 1];
        System.arraycopy(this.available, 0, av, 0, this.available.length);
        this.available = av;
      }
      av[i] = true;
    }
  }

  /**
   * Obtain a random node type
   *
   * @param r
   *          the random number generator
   * @return the node type
   */
  public final NodeType<CT, CT> randomType(final Random r) {
    final int i;

    i = this.cnt;
    if (i <= 0) {
      return null;
    }
    return this.lst[r.nextInt(i)];
  }

  /**
   * Obtain a random terminal node type
   *
   * @param r
   *          the random number generator
   * @return the terminal node type
   */
  public final NodeType<CT, CT> randomTerminalType(final Random r) {
    final int i;

    i = this.terminalCnt;
    if (i <= 0) {
      return null;
    }
    return this.terminals[r.nextInt(i)];
  }

  /**
   * Obtain a random non-terminal node type
   *
   * @param r
   *          the random number generator
   * @return the non-terminal node type
   */
  public final NodeType<CT, CT> randomNonTerminalType(final Random r) {
    final int i;

    i = this.nonTerminalCnt;
    if (i <= 0) {
      return null;
    }
    return this.nonTerminals[r.nextInt(i)];
  }

  /**
   * Check whether the given node type is contained in this type set or not
   *
   * @param t
   *          the node type
   * @return true if the type is contained, false otherwise
   */
  public final boolean containsType(final NodeType<?, ?> t) {
    final int i;
    final boolean[] av;

    i = t.key;
    av = this.available;

    return ((i < av.length) && av[i]);
  }
}
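containsType answers membership in O(1) by indexing a boolean array with the globally unique integer key assigned to each NodeType, growing the array on demand in add. A standalone sketch of just that mechanism (KeySet is an illustrative name, not part of the goataa API):

```java
import java.util.Arrays;

// Illustrative sketch of NodeTypeSet's O(1) membership test: a boolean
// array indexed by a globally unique integer key, grown on demand.
public final class KeySet {
  private boolean[] available = new boolean[16];

  public void add(final int key) {
    if (key >= this.available.length) {
      // grow geometrically so repeated adds stay amortized O(1)
      this.available = Arrays.copyOf(this.available, key << 1);
    }
    this.available[key] = true;
  }

  public boolean contains(final int key) {
    // keys beyond the current array length are simply not contained
    return (key < this.available.length) && this.available[key];
  }
}
```

Compared with hashing, this costs memory proportional to the largest key seen, but since type keys are small sequential counters, the table stays compact and lookups involve no hashing or collision handling at all.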
56.4 Genotype-Phenotype Mappings
In this package, we provide a couple of basic genotype-phenotype mappings according to the interface defined in Listing 55.7 on page 711.
Listing 56.51: The identity mapping for cases where G = X.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.gpms;

import java.util.Random;

import org.goataa.spec.IGPM;

/**
 * This class provides an identity genotype-phenotype mapping, the simplest
 * possible variant of genotype-phenotype mappings (see
 * Section 4.3 on page 85). This mapping can only
 * be applied if the search space (see Section 4.1 on page 81) and
 * the problem space (see Section 2.1 on page 43) are identical.
 *
 * @param <S>
 *          the type which is both the search and the problem space
 * @author Thomas Weise
 */
public final class IdentityMapping<S> extends GPM<S, S> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the globally shared default instance of the identity mapping */
  public static final IGPM<?, ?> IDENTITY_MAPPING = new IdentityMapping<Object>();

  /** Create a new instance of the identity mapping class */
  protected IdentityMapping() {
    super();
  }

  /**
   * Perform the identity mapping: the genotype (Definition D4.2 on page 82)
   * equals the phenotype (Definition D2.2 on page 43).
   *
   * @param g
   *          the genotype
   * @param r
   *          the randomizer
   * @return the phenotype which equals genotype g
   */
  @Override
  public final S gpm(final S g, final Random r) {
    return g;
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return super.getName(true);
    }
    return "I"; //$NON-NLS-1$
  }
}
Listing 56.52: The Random Keys genotype-phenotype mapping introduced in Algorithm 29.8.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.gpms;

import java.util.Arrays;
import java.util.Random;

/**
 * With this class, we implement the Random Keys encoding
 * genotype-phenotype mapping as introduced in
 * Section 29.7 on page 349. This mapping, which is specified in
 * Algorithm 29.8 on page 349, translates an n-dimensional vector of
 * real numbers (i.e., doubles) to a permutation of the first n natural
 * numbers (i.e., an array of int containing the values 0, 1, 2, ..., n-1).
 *
 * @author Thomas Weise
 */
public final class RandomKeys extends GPM<double[], int[]> {
  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** a temporary double array */
  private transient double[] temp;

  /** Instantiate */
  public RandomKeys() {
    super();
  }

  /**
   * Perform Algorithm 29.8 on page 349, i.e., translate a vector of n
   * real numbers into a permutation of the first n natural numbers.
   *
   * @param g
   *          the genotype (Definition D4.2 on page 82)
   * @param r
   *          the randomizer
   * @return the phenotype (see Definition D2.2 on page 43)
   *         corresponding to the genotype g
   */
  @Override
  public final int[] gpm(final double[] g, final Random r) {
    int[] x;
    double[] s;
    final int l;
    int i, j;

    l = g.length;
    x = new int[l];

    // We cannot sort g directly, since this would destroy it. Hence, we
    // use an internal temporary array.
    s = this.temp;
    if ((s == null) || (s.length < l)) {
      this.temp = s = new double[l];
    }

    // in each step, put one element from g into the right position in s
    // and update the phenotype accordingly
    for (i = 0; i < l; i++) {
      j = Arrays.binarySearch(s, 0, i, g[i]);
      if (j < 0) {
        j = ((-j) - 1);
      }
      System.arraycopy(s, j, s, j + 1, i - j);
      s[j] = g[i];
      System.arraycopy(x, j, x, j + 1, i - j);
      x[j] = i;
    }

    return x;
  }
}
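For distinct keys, the mapping simply returns the indices of the keys in ascending order of their values; e.g., the genotype (0.7, 0.1, 0.4) decodes to the permutation (1, 2, 0). An equivalent, if less optimized, standalone sketch (RandomKeysSketch is illustrative only and skips the temporary-array reuse of the listing above):

```java
import java.util.stream.IntStream;

// Illustrative sketch of the random-keys decoding: the i-th element of the
// permutation is the index of the i-th smallest key. For distinct keys this
// matches RandomKeys.gpm, without its temporary-array optimization.
public final class RandomKeysSketch {
  public static int[] decode(final double[] g) {
    return IntStream.range(0, g.length).boxed()
        .sorted((a, b) -> Double.compare(g[a], g[b]))
        .mapToInt(Integer::intValue).toArray();
  }
}
```

The point of the encoding is that any real vector decodes to a valid permutation, so standard real-valued mutation and crossover operators can be applied to permutation problems without repair steps.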
56.5 Termination Criteria
Termination criteria, which obey the definition provided in Listing 55.9 on page 712, determine when an optimization algorithm should terminate. In this package, we provide some simple termination criteria.
Listing 56.53: The step-limit termination criterion.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.termination;

/**
 * The step limit termination criterion (see Point 2 in
 * Section 6.3.3 on page 105) becomes true after a maximum number
 * of steps has been exhausted. The steps are counted in terms of
 * invocations of the {@link #terminationCriterion()} method.
 *
 * @author Thomas Weise
 */
public final class StepLimit extends TerminationCriterion {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the step limit */
  private final int maxSteps;

  /** the number of remaining steps */
  private int remaining;

  /**
   * Create a new step limit termination criterion
   *
   * @param steps
   *          the number of steps
   */
  public StepLimit(final int steps) {
    this.maxSteps = steps;
    this.reset();
  }

  /**
   * This method should be invoked after each search step, iteration, or
   * objective function evaluation. It returns false for the first
   * steps - 1 invocations and true from the steps-th invocation on, where
   * steps is the limit passed to the constructor. For example, if 1 was
   * specified, the first call of this method will return true. For 2, the
   * second call will return true while the first returns false.
   *
   * @return true after the specified steps are exhausted, false otherwise
   */
  @Override
  public final boolean terminationCriterion() {
    return (--this.remaining) <= 0;
  }

  /** Reset the termination criterion to make it usable more than once */
  @Override
  public void reset() {
    this.remaining = this.maxSteps;
  }

  /**
   * Get the full configuration which holds all the data necessary to
   * describe this object.
   *
   * @param longVersion
   *          true if the long version should be returned, false if the
   *          short version should be returned
   * @return the full configuration
   */
  @Override
  public String getConfiguration(final boolean longVersion) {
    if (longVersion) {
      return "limit=" + String.valueOf(this.maxSteps); //$NON-NLS-1$
    }
    return String.valueOf(this.maxSteps);
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return super.getName(true);
    }
    return "sl"; //$NON-NLS-1$
  }
}
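With the countdown expression (--remaining) <= 0, a limit of n yields false for the first n - 1 invocations and true from the n-th invocation on. A standalone sketch of the same countdown semantics (CountdownSketch is illustrative only, not part of the goataa API):

```java
// Illustrative sketch of StepLimit's countdown: for a limit of n, calls
// 1..n-1 of shouldTerminate() return false and the n-th call returns true.
public final class CountdownSketch {
  private final int maxSteps;
  private int remaining;

  public CountdownSketch(final int steps) {
    this.maxSteps = steps;
    this.reset(); // initialize the counter so the sketch works standalone
  }

  public void reset() {
    this.remaining = this.maxSteps;
  }

  public boolean shouldTerminate() {
    return (--this.remaining) <= 0;
  }
}
```

Counting down instead of up keeps the hot-path check to a single decrement and comparison, and reset restores the full budget so one criterion object can drive several runs.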
56.6 Comparator
Implementations of the individual comparator interface for multi-objective optimization as
introduced in Section 3.5 on page 75.
Listing 56.54: A comparator realizing the lexicographic relationship.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.comparators;

import org.goataa.impl.OptimizationModule;
import org.goataa.impl.utils.MOIndividual;
import org.goataa.spec.IIndividualComparator;

/**
 * Perform an individual comparison according to the lexicographic
 * relationship (Section 3.3.2 on page 59).
 *
 * @author Thomas Weise
 */
public class Lexicographic extends OptimizationModule implements
    IIndividualComparator {
  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the lexicographic comparator */
  public static final IIndividualComparator LEXICOGRAPHIC_COMPARATOR = new Lexicographic();

  /** Instantiate a lexicographic comparator */
  protected Lexicographic() {
    super();
  }

  /**
   * Compare two individuals with each other according to the lexicographic
   * relationship defined in Section 3.3.2 on page 59,
   * based on their objective values.
   *
   * @param a
   *          the first individual
   * @param b
   *          the second individual
   * @return -1 if a dominates b, 1 if b dominates a, 0 if neither of
   *         them is better
   */
  public int compare(final MOIndividual<?, ?> a, final MOIndividual<?, ?> b) {
    double[] x, y;
    int i, cr;

    x = a.f;
    y = b.f;
    for (i = 0; i < x.length; i++) {
      cr = Double.compare(x[i], y[i]);
      if (cr != 0) {
        return cr;
      }
    }

    return 0;
  }
}
Listing 56.55: A comparator realizing the Pareto dominance relationship.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.comparators;

import org.goataa.impl.OptimizationModule;
import org.goataa.impl.utils.MOIndividual;
import org.goataa.spec.IIndividualComparator;

/**
 * Perform an individual comparison according to the Pareto dominance
 * relationship Section 3.3.5 on page 65.
 *
 * @author Thomas Weise
 */
public class Pareto extends OptimizationModule implements
    IIndividualComparator {
  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the Pareto comparator */
  public static final IIndividualComparator PARETO_COMPARATOR = new Pareto();

  /** instantiate a Pareto comparator */
  protected Pareto() {
    super();
  }

  /**
   * Compare two individuals with each other according to the dominance
   * relationship defined in Section 3.3.5 on page 65 based on
   * their objective values.
   *
   * @param a
   *          the first individual
   * @param b
   *          the second individual
   * @return -1 if a dominates b, 1 if b dominates a, 0 if neither of
   *         them is better
   */
  public int compare(final MOIndividual<?, ?> a, final MOIndividual<?, ?> b) {
    double[] x, y;
    int i, cr, res;

    x = a.f;
    y = b.f;
    res = 0;

    // check every objective value
    for (i = x.length; (--i) >= 0;) {

      // cr<0: x[i]<y[i]; cr>0: x[i]>y[i]; cr=0: x[i]=y[i]
      cr = Double.compare(x[i], y[i]);

      if (cr > 0) {
        // x[i]>y[i]
        if (res < 0) {
          // we encountered at least one j:x[j]<y[j], hence the result is
          // undecided
          return 0;
        }
        // assume that y dominates x (i.e., b dominates a)
        res = 1;
      } else {
        if (cr < 0) {
          // x[i]<y[i]
          if (res > 0) {
            // we encountered at least one j:x[j]>y[j], hence the result is
            // undecided
            return 0;
          }
          // assume that x dominates y (i.e., a dominates b)
          res = -1;
        }
      }
    }

    return res;
  }
}
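The dominance test of this comparator can be mirrored in a self-contained sketch (a hypothetical `dominance` helper on raw `double[]` vectors, minimization assumed, not part of the library): a non-zero verdict is only reached when one vector is at least as good in every objective, and the result collapses to 0 as soon as two objectives disagree.

```java
/** A minimal sketch of the Pareto dominance test on objective vectors. */
public final class ParetoSketch {

  /**
   * -1 if a dominates b, 1 if b dominates a, 0 if equal or incomparable
   * (all objectives are minimized).
   */
  static int dominance(double[] a, double[] b) {
    int res = 0; // running verdict
    for (int i = 0; i < a.length; i++) {
      int cr = Double.compare(a[i], b[i]);
      if (cr < 0) { // a is better in objective i
        if (res > 0) {
          return 0; // b was better elsewhere: incomparable
        }
        res = -1;
      } else if (cr > 0) { // b is better in objective i
        if (res < 0) {
          return 0;
        }
        res = 1;
      }
    }
    return res;
  }

  public static void main(String[] args) {
    System.out.println(dominance(new double[] { 1, 1 }, new double[] { 2, 3 })); // prints -1
    System.out.println(dominance(new double[] { 1, 3 }, new double[] { 2, 1 })); // prints 0
  }
}
```

The second call illustrates why Pareto dominance is only a partial order: each vector wins one objective, so neither dominates.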
56.7 Utility Classes, Constants, and Routines
In this package, we provide some utility classes and constants.
Listing 56.56: The individual record.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.utils;

import org.goataa.impl.OptimizationModule;

/**
 * An individual record, as defined in Definition D4.18 on page 90. Such a
 * record always holds one genotype from the search space and the
 * corresponding phenotype from the problem space. In this implementation,
 * we also store a fitness value v which corresponds to the objective value
 * in a single-objective problem. This way, we do not need to evaluate the
 * objective function more than once per individual. This makes sense since
 * it would (in most cases) yield the same result anyway and only waste CPU
 * time.
 *
 * @param <G>
 *          the search space (genome, Section 4.1 on page 81)
 * @param <X>
 *          the problem space (phenome, Section 2.1 on page 43)
 * @author Thomas Weise
 */
public class Individual<G, X> extends OptimizationModule {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the genotype as defined in Definition D4.2 on page 82 */
  public G g;

  /**
   * the corresponding phenotype as defined in
   * Definition D2.2 on page 43
   */
  public X x;

  /**
   * the fitness value, as defined in Definition D5.1 on page 94, which
   * corresponds to the objective value in single-objective optimization
   * problems
   */
  public double v;

  /** The constructor creates a new individual record */
  public Individual() {
    super();
    // initialize the record with the worst fitness
    this.v = Constants.WORST_FITNESS;
  }

  /**
   * Copy the values of another individual record to this record. This
   * method saves us from excessively creating new individual records.
   *
   * @param to
   *          the individual to copy
   */
  public void assign(final Individual<G, X> to) {
    this.g = to.g;
    this.x = to.x;
    this.v = to.v;
  }

  /**
   * Get the full configuration which holds all the data necessary to
   * describe this individual record, i.e., the genotype, phenotype, and
   * fitness.
   *
   * @param longVersion
   *          true if the long version should be returned, false if the
   *          short version should be returned
   * @return the full configuration
   */
  @Override
  public String getConfiguration(final boolean longVersion) {
    StringBuilder sb;

    sb = new StringBuilder();
    if (longVersion) {
      sb.append("fitness"); //$NON-NLS-1$
    } else {
      sb.append('v');
    }
    sb.append('=');
    sb.append(this.v);

    if (longVersion) {
      sb.append(", genotype="); //$NON-NLS-1$
    } else {
      sb.append(", g="); //$NON-NLS-1$
    }
    TextUtils.toStringBuilder(this.g, sb);

    if (longVersion) {
      sb.append(", phenotype="); //$NON-NLS-1$
    } else {
      sb.append(", x="); //$NON-NLS-1$
    }
    TextUtils.toStringBuilder(this.x, sb);

    return sb.toString();
  }

  /**
   * Get the name of the individual record
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return super.getName(true);
    }
    return "ind"; //$NON-NLS-1$
  }
}
Listing 56.57: The individual record for multi-objective optimization.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.utils;

import java.util.Arrays;
import java.util.Random;

import org.goataa.spec.IObjectiveFunction;

/**
 * An individual record for multi-objective optimization as discussed in
 * Section 3.3 on page 55. We just extend the basic
 * individual record with the capability to store multiple objective
 * values.
 *
 * @param <G>
 *          the search space (genome, Section 4.1 on page 81)
 * @param <X>
 *          the problem space (phenome, Section 2.1 on page 43)
 * @author Thomas Weise
 */
public class MOIndividual<G, X> extends Individual<G, X> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the objective values */
  public final double[] f;

  /**
   * Create a multi-objective individual
   *
   * @param fc
   *          the number of objective functions
   */
  public MOIndividual(final int fc) {
    super();
    this.f = new double[fc];
    // initialize the record with the worst objective values
    Arrays.fill(this.f, Constants.WORST_OBJECTIVE);
  }

  /**
   * Copy the values of another individual record to this record. This
   * method saves us from excessively creating new individual records.
   *
   * @param to
   *          the individual to copy
   */
  @SuppressWarnings("unchecked")
  @Override
  public void assign(final Individual<G, X> to) {
    final double[] d;
    super.assign(to);
    if (to instanceof MOIndividual) {
      d = (((MOIndividual) to).f);
      System.arraycopy(d, 0, this.f, 0, Math.min(d.length, this.f.length));
    }
  }

  /**
   * Get the full configuration which holds all the data necessary to
   * describe this individual record, i.e., the genotype, phenotype, and
   * fitness as well as the objective values.
   *
   * @param longVersion
   *          true if the long version should be returned, false if the
   *          short version should be returned
   * @return the full configuration
   */
  @Override
  public String getConfiguration(final boolean longVersion) {
    StringBuilder sb;

    sb = new StringBuilder();
    if (longVersion) {
      sb.append("objectives"); //$NON-NLS-1$
    } else {
      sb.append('f');
    }
    sb.append('=');
    TextUtils.toStringBuilder(this.f, sb);
    sb.append(',');
    sb.append(' ');
    sb.append(super.getConfiguration(longVersion));

    return sb.toString();
  }

  /**
   * Get the name of the multi-objective individual record
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return super.getName(true);
    }
    return "mo-ind"; //$NON-NLS-1$
  }

  /**
   * Compute the objective values by using the given objective functions
   *
   * @param fs
   *          the objective functions
   * @param r
   *          the randomizer
   */
  public final void evaluate(final IObjectiveFunction<X>[] fs,
      final Random r) {
    int i;
    for (i = fs.length; (--i) >= 0;) {
      this.f[i] = fs[i].compute(this.x, r);
    }
  }
}
Listing 56.58: A base class for archiving non-dominated individuals.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.utils;

import java.util.AbstractList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Comparator;
import java.util.Random;

import org.goataa.impl.comparators.Lexicographic;
import org.goataa.impl.comparators.Pareto;
import org.goataa.spec.IIndividualComparator;
import org.goataa.spec.IOptimizationModule;
import org.goataa.spec.ISelectionAlgorithm;

/**
 * An archive of non-dominated individuals as discussed in
 * Section 28.6 on page 308 for multi-objective
 * optimization Section 3.3 on page 55.
 *
 * @param <G>
 *          the search space (genome, Section 4.1 on page 81)
 * @param <X>
 *          the problem space (phenome, Section 2.1 on page 43)
 * @author Thomas Weise
 */
public class Archive<G, X> extends AbstractList<MOIndividual<G, X>>
    implements IOptimizationModule {
  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the maximum size */
  private final int maxSize;

  /** the list */
  @SuppressWarnings("unchecked")
  private MOIndividual[] lst;

  /** the individual comparator */
  private final IIndividualComparator ic;

  /** the replacement index */
  private int replaceIdx;

  /** the epsilons */
  private transient double[] epsilon;

  /** the number of individuals in the list */
  private int cnt;

  /** the copy index */
  private int copyIndex;

  /** Create an archive with the default maximum size 100 */
  public Archive() {
    this(100, null);
  }

  /**
   * Create an archive with the given maximum size
   *
   * @param ms
   *          the maximum size, -1 for unrestrained
   * @param i
   *          the individual comparator to use
   */
  public Archive(final int ms, final IIndividualComparator i) {
    super();

    boolean b;
    int is;

    b = ((ms > 0) && (ms < Integer.MAX_VALUE));
    if (b) {
      is = Math.min(1024, ms);
    } else {
      is = 1024;
    }
    this.maxSize = (b ? ms : Integer.MAX_VALUE);
    this.lst = new MOIndividual[is];

    this.ic = ((i != null) ? i : Pareto.PARETO_COMPARATOR);
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  public String getName(final boolean longVersion) {
    return (longVersion ? "archive" : "arch"); //$NON-NLS-1$ //$NON-NLS-2$
  }

  /**
   * Get the full configuration which holds all the data necessary to
   * describe this object.
   *
   * @param longVersion
   *          true if the long version should be returned, false if the
   *          short version should be returned
   * @return the full configuration
   */
  public String getConfiguration(final boolean longVersion) {
    StringBuilder sb;

    sb = new StringBuilder();
    if ((this.maxSize > 0) && (this.maxSize < Integer.MAX_VALUE)) {
      sb.append("maxLen="); //$NON-NLS-1$
      sb.append(this.maxSize);
    }

    if (this.ic != null) {
      if (sb.length() > 0) {
        sb.append(", "); //$NON-NLS-1$
      }
      sb.append("cmp="); //$NON-NLS-1$
      sb.append(this.ic.toString(longVersion));
    }

    return sb.toString();
  }

  /**
   * Get the string representation of this object, i.e., the name and
   * configuration.
   *
   * @param longVersion
   *          true if the long version should be returned, false if the
   *          short version should be returned
   * @return the string version of this object
   */
  public String toString(final boolean longVersion) {
    return this.getName(longVersion) + '('
        + this.getConfiguration(longVersion) + ')';
  }

  /**
   * Get the size of the list
   *
   * @return the number of individuals in the list
   */
  @Override
  public final int size() {
    return this.cnt;
  }

  /**
   * Get a specific individual
   *
   * @param index
   *          the index
   * @return the individual
   */
  @Override
  @SuppressWarnings("unchecked")
  public final MOIndividual<G, X> get(final int index) {
    return this.lst[index];
  }

  /**
   * The set operation actually performs a deletion followed by an addition
   *
   * @param index
   *          the index of the individual to delete
   * @param ind
   *          the individual
   * @return the removed individual
   */
  @Override
  public final MOIndividual<G, X> set(final int index,
      final MOIndividual<G, X> ind) {
    MOIndividual<G, X> r;
    r = this.remove(index);
    this.add(ind);
    return r;
  }

  /**
   * Add an individual to the end of the list
   *
   * @param index
   *          ignored
   * @param ind
   *          the individual
   */
  @Override
  public final void add(final int index, final MOIndividual<G, X> ind) {
    this.add(ind);
  }

  /**
   * Delete the individual at the given index
   *
   * @param index
   *          the index
   * @return the deleted individual
   */
  @SuppressWarnings("unchecked")
  @Override
  public MOIndividual<G, X> remove(final int index) {
    MOIndividual<G, X> x;
    MOIndividual[] v;
    int q;

    v = this.lst;
    x = v[index];
    q = (--this.cnt);
    // swap the last individual into the freed slot: order is not preserved
    v[index] = v[q];
    v[q] = null;
    return x;
  }

  /**
   * Remove a range of elements.
   *
   * @param fromIndex
   *          index of first element to be removed
   * @param toIndex
   *          index after last element to be removed
   */
  @Override
  protected final void removeRange(final int fromIndex, final int toIndex) {
    int x;
    // move the tail of the list down over the removed range, then null
    // out the vacated slots at the end
    x = this.cnt - toIndex;
    System.arraycopy(this.lst, toIndex, this.lst, fromIndex, x);
    Arrays.fill(this.lst, fromIndex + x, this.cnt, null);
    this.cnt -= (toIndex - fromIndex);
  }

  /**
   * Check in an individual into the archive while maintaining the proper
   * maximum size of the archive by using techniques such as those
   * discussed in Section 28.6.3 on page 311.
   *
   * @param ind
   *          the individual to be checked in
   * @return true if the list changed, false otherwise
   */
  @Override
  @SuppressWarnings("unchecked")
  public boolean add(final MOIndividual<G, X> ind) {
    int i, x;
    int sze;
    final int ms;
    MOIndividual<G, X> m;
    MOIndividual[] l;

    sze = this.cnt;
    l = this.lst;
    // check if we already have that individual or if we can remove some
    // others
    for (i = sze; (--i) >= 0;) {

      m = l[i];
      if (Utils.equals(m.g, ind.g) && Utils.equals(m.x, ind.x)) {
        return false;
      }

      x = this.ic.compare(ind, m);
      if (x > 0) {
        // an archived individual dominates ind: reject it
        return false;
      }

      if (x < 0) {
        // ind dominates the archived individual: delete it
        this.remove(i);
        sze--;
      }
    }

    ms = this.maxSize;
    if (sze < ms) {
      // ok, we do not yet have that individual, but we have enough space
      if (sze >= l.length) {
        l = new MOIndividual[Math.min(ms, Math.max(sze + 1, sze << 1))];
        System.arraycopy(this.lst, 0, l, 0, sze);
        this.lst = l;
      }

      l[sze++] = ind;
      this.cnt = sze;
      return true;
    }

    // the individual is new but the space is not sufficient -> delete one
    return this.replaceOne(ind);
  }

  /**
   * Appends all of the elements in the specified collection to the end of
   * this list, in the order that they are returned by the specified
   * collection's Iterator. The behavior of this operation is undefined if
   * the specified collection is modified while the operation is in
   * progress. (This implies that the behavior of this call is undefined if
   * the specified collection is this list, and this list is nonempty.)
   *
   * @param c
   *          collection containing elements to be added to this list
   * @return <tt>true</tt> if this list changed as a result of the call
   * @throws NullPointerException
   *           if the specified collection is null
   */
  @Override
  public boolean addAll(Collection<? extends MOIndividual<G, X>> c) {
    boolean b;

    b = false;
    for (MOIndividual<G, X> e : c) {
      if (this.add(e)) {
        b = true;
      }
    }
    return b;
  }

  /**
   * Replace one individual in the archive. Currently, this method just
   * replaces a random individual. However, more sophisticated strategies
   * could be employed as well.
   *
   * @param ind
   *          the individual
   * @return true if the new individual was added, false otherwise
   */
  @SuppressWarnings("unchecked")
  protected boolean replaceOne(final MOIndividual<G, X> ind) {
    double w, deps;
    int i, j, bi, ci, z;
    double[] a, b, eps;
    final int s, r;
    final MOIndividual[] l;
    boolean cr;

    a = ind.f;
    r = a.length;
    s = this.cnt;
    l = this.lst;

    // do the replacement
    for (i = s; (--i) >= 0;) {
      // if the new individual equals an existing one, skip
      if (Utils.equals(l[i].x, ind.x) || Utils.equals(l[i].g, ind.g)) {
        return false;
      }
    }

    bi = (this.replaceIdx % s);

    eps = this.epsilon;
    if (eps == null) {
      this.epsilon = eps = new double[r];
    } else {
      Arrays.fill(eps, 0, r, 0d);
    }

    // try to find the most similar individual and replace it
    ci = bi;
    for (z = (r * 3); (--z) >= 0;) {
      deps = Double.POSITIVE_INFINITY;
      ci = ((ci + 1) % r);

      for (i = s; (--i) >= 0;) {
        bi = ((bi + 1) % s);
        b = l[bi].f;
        cr = true;
        for (j = r; (--j) >= 0;) {
          w = Math.abs(a[j] - b[j]);
          if (w > eps[j]) {
            cr = false;
            if ((j == ci) && (w < deps)) {
              deps = w;
            }
          }
        }

        if (cr) {
          l[bi] = ind;
          this.replaceIdx = bi;
          return true;
        }
      }
      eps[ci] = deps;
    }

    // individuals are not sufficiently similar: delete more or less
    // randomly
    bi = ((bi + 1) % s);
    this.replaceIdx = bi;
    l[bi] = ind;
    return true;
  }

  /**
   * Sort by a given comparator. Notice that sorting with the archive's
   * default comparator makes no sense at all.
   *
   * @param cmp
   *          the comparator
   */
  @SuppressWarnings("unchecked")
  public void sort(final IIndividualComparator cmp) {
    Arrays.sort(this.lst, 0, this.cnt, ((Comparator) cmp));
  }

  /** Sort lexicographically */
  public void sortLexicographically() {
    this.sort(Lexicographic.LEXICOGRAPHIC_COMPARATOR);
  }

  /**
   * Select a couple of individuals
   *
   * @param sel
   *          the selection algorithm
   * @param mate
   *          the mating pool to select to
   * @param mateStart
   *          the start index
   * @param mateCount
   *          the number of individuals to select
   * @param r
   *          the randomizer
   */
  public final void select(final ISelectionAlgorithm sel,
      final Individual<?, ?>[] mate, final int mateStart,
      final int mateCount, final Random r) {
    final int c;

    c = this.cnt;
    if (c <= 0) {
      Arrays.fill(mate, mateStart, mateStart + mateCount, null);
    } else {
      sel.select(this.lst, 0, c, mate, mateStart, mateCount, r);
    }
  }

  /**
   * Copy some of the individuals
   *
   * @param mate
   *          the mating pool to select to
   * @param mateStart
   *          the start index
   * @param mateCount
   *          the maximum number of individuals to select
   * @return the number of selected individuals (may be less)
   */
  @SuppressWarnings("unchecked")
  public final int copy(final Individual<?, ?>[] mate,
      final int mateStart, final int mateCount) {
    final int c;
    final MOIndividual[] l;
    int i, cpy;

    c = this.cnt;
    l = this.lst;
    if (mateCount >= c) {
      System.arraycopy(l, 0, mate, mateStart, c);
      Arrays.fill(mate, mateStart + c, mateStart + mateCount, null);
      return c;
    }

    cpy = this.copyIndex;
    for (i = (mateStart + mateCount); (--i) >= mateStart;) {
      cpy = ((cpy + 1) % c);
      mate[i] = l[cpy];
    }

    this.copyIndex = cpy;

    return mateCount;
  }
}
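The core invariant of the archive's `add` method is that it stays free of dominated entries: a candidate is rejected if any archived individual dominates it, and any archived individuals dominated by the candidate are deleted. A rough standalone sketch of that invariant (a hypothetical `addNonDominated` helper on raw `double[]` objective vectors, minimization assumed; the size-limited replacement strategy of the real class is omitted):

```java
import java.util.ArrayList;
import java.util.List;

/** A minimal sketch of non-dominated archive maintenance. */
public final class ArchiveSketch {

  /** -1 if a dominates b, 1 if b dominates a, 0 otherwise (minimization). */
  static int dominance(double[] a, double[] b) {
    int res = 0;
    for (int i = 0; i < a.length; i++) {
      int cr = Double.compare(a[i], b[i]);
      if (cr < 0) { // a better in objective i
        if (res > 0) {
          return 0;
        }
        res = -1;
      } else if (cr > 0) { // b better in objective i
        if (res < 0) {
          return 0;
        }
        res = 1;
      }
    }
    return res;
  }

  /** Add cand unless dominated; drop all archive members cand dominates. */
  static boolean addNonDominated(List<double[]> archive, double[] cand) {
    for (int i = archive.size() - 1; i >= 0; i--) {
      int x = dominance(cand, archive.get(i));
      if (x > 0) {
        return false; // an archived point dominates the candidate
      }
      if (x < 0) {
        archive.remove(i); // the candidate dominates this entry
      }
    }
    return archive.add(cand);
  }

  public static void main(String[] args) {
    List<double[]> arch = new ArrayList<>();
    addNonDominated(arch, new double[] { 2, 2 });
    addNonDominated(arch, new double[] { 1, 1 }); // dominates {2,2}, which is dropped
    addNonDominated(arch, new double[] { 3, 0 }); // incomparable with {1,1}, kept
    System.out.println(arch.size()); // prints 2
  }
}
```

Unlike the real `Archive`, this sketch does not check for duplicate genotypes/phenotypes and never caps the archive size, so it only illustrates the dominance bookkeeping.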
Listing 56.59: Some constants useful for optimization.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.utils;

import java.io.Serializable;
import java.util.Comparator;

/**
 * This class holds some common constants
 *
 * @author Thomas Weise
 */
public final class Constants {

  /*
   * based on Section 6.3.4 on page 106, we define the following
   * constants
   */

  /** the worst possible objective value */
  public static final double WORST_OBJECTIVE = Double.POSITIVE_INFINITY;

  /** the best possible objective value */
  public static final double BEST_OBJECTIVE = Double.NEGATIVE_INFINITY;

  /** the worst possible fitness value */
  public static final double WORST_FITNESS = Double.POSITIVE_INFINITY;

  /** the best possible fitness value */
  public static final double BEST_FITNESS = 0d;

  /**
   * a comparator that can be used for sorting individuals according to
   * their fitness in ascending order, from the best to the worst.
   */
  public static final Comparator<Individual<?, ?>> FITNESS_SORT_ASC_COMPARATOR = new FitnessSortASCComparator();

  /**
   * An internal utility class for sorting individuals according to their
   * fitness in ascending order.
   *
   * @author Thomas Weise
   */
  private static final class FitnessSortASCComparator implements
      Serializable, Comparator<Individual<?, ?>> {
    /** a constant required by Java serialization */
    private static final long serialVersionUID = 1;

    /** instantiate */
    FitnessSortASCComparator() {
      super();
    }

    /**
     * Compare two individual records
     *
     * @param a
     *          the first individual record
     * @param b
     *          the second individual record
     * @return -1 if a.v<b.v, 0 if a.v=b.v, 1 if a.v>b.v
     */
    public final int compare(final Individual<?, ?> a,
        final Individual<?, ?> b) {
      return Double.compare(a.v, b.v);
    }
  }
}
Listing 56.60: A small and handy class for computing some useful statistics.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package org.goataa.impl.utils;

import java.util.Arrays;

import org.goataa.impl.OptimizationModule;

/**
 * This class helps us to collect statistics
 *
 * @author Thomas Weise
 */
public class BufferedStatistics extends OptimizationModule {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the data */
  private double[] data;

  /** the length */
  private int len;

  /** did we calculate the statistics? */
  private boolean calculated;

  /** the mean */
  private double mean;

  /** the median */
  private double med;

  /** the minimum */
  private double min;

  /** the maximum */
  private double max;

  /** Create a new statistics object */
  public BufferedStatistics() {
    super();
    this.data = new double[128];
    this.clear();
  }

  /**
   * Add a double to the buffer
   *
   * @param d
   *          the double
   */
  public final void add(final double d) {
    double[] x;
    int l;

    this.calculated = false;
    x = this.data;
    l = this.len;
    if (l >= x.length) {
      // grow the buffer by doubling its size
      x = new double[l << 1];
      System.arraycopy(this.data, 0, x, 0, l);
      this.data = x;
    }

    x[l] = d;
    this.len = (l + 1);
  }

  /** Calculate the statistics */
  private final void calculate() {
    double[] x;
    int l;
    double s;

    if (this.calculated) {
      return;
    }

    x = this.data;
    l = this.len;
    if (l <= 0) {
      this.max = Double.NaN;
      this.min = Double.NaN;
      this.mean = Double.NaN;
      this.med = Double.NaN;
    } else {
      Arrays.sort(x, 0, l);
      this.min = x[0];
      this.max = x[l - 1];

      if ((l & 1) == 0) {
        // even number of samples: average the two middle elements
        this.med = (0.5d * (x[l >>> 1] + x[(l >>> 1) - 1]));
      } else {
        this.med = x[l >>> 1];
      }

      s = 0d;
      for (; (--l) >= 0;) {
        s += x[l];
      }
      this.mean = (s / this.len);
    }

    this.calculated = true;
  }

  /**
   * Get the minimum of the collected values
   *
   * @return the minimum value
   */
  public final double min() {
    this.calculate();
    return this.min;
  }

  /**
   * Get the maximum of the collected values
   *
   * @return the maximum value
   */
  public final double max() {
    this.calculate();
    return this.max;
  }

  /**
   * Get the median of the collected values
   *
   * @return the median value
   */
  public final double median() {
    this.calculate();
    return this.med;
  }

  /** Clear this statistics record */
  public final void clear() {
    this.len = 0;
    this.calculated = false;
  }

  /**
   * Get the mean of the collected values
   *
   * @return the mean value
   */
  public final double mean() {
    this.calculate();
    return this.mean;
  }

  /**
   * Get the full configuration which holds all the data necessary to
   * describe this object.
   *
   * @param longVersion
   *          true if the long version should be returned, false if the
   *          short version should be returned
   * @return the full configuration
   */
  @Override
  public String getConfiguration(final boolean longVersion) {
    StringBuilder sb;

    sb = new StringBuilder();
    sb.append("min="); //$NON-NLS-1$
    sb.append(TextUtils.formatNumberSameWidth(this.min()));
    sb.append(longVersion ? ", median=" : " med=");//$NON-NLS-1$//$NON-NLS-2$
    sb.append(TextUtils.formatNumberSameWidth(this.median()));
    sb.append(longVersion ? ", mean=" : " avg=");//$NON-NLS-1$//$NON-NLS-2$
    sb.append(TextUtils.formatNumberSameWidth(this.mean()));
    sb.append(longVersion ? ", max=" : " max=");//$NON-NLS-1$//$NON-NLS-2$
    sb.append(TextUtils.formatNumberSameWidth(this.max()));

    return sb.toString();
  }

  /**
   * Get the name of the optimization module
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return super.getName(true);
    }
    return "stat"; //$NON-NLS-1$
  }
}
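The median/mean logic of `calculate()` can be illustrated in isolation: sort a copy of the data, take the middle element (or the average of the two middle elements for an even sample count), and divide the sum by the count for the mean. A minimal standalone sketch (class and method names are illustrative only):

```java
import java.util.Arrays;

/** A minimal sketch of the median/mean computation used above. */
public final class StatsSketch {

  static double median(double[] values) {
    double[] x = values.clone();
    Arrays.sort(x); // the median is defined on the sorted data
    int l = x.length;
    return ((l & 1) == 0)
        ? 0.5d * (x[l >>> 1] + x[(l >>> 1) - 1]) // even: average the middle pair
        : x[l >>> 1]; // odd: the middle element
  }

  static double mean(double[] values) {
    double s = 0d;
    for (double v : values) {
      s += v;
    }
    return s / values.length;
  }

  public static void main(String[] args) {
    double[] d = { 3d, 1d, 2d, 10d };
    System.out.println(median(d)); // prints 2.5
    System.out.println(mean(d)); // prints 4.0
  }
}
```

Working on a sorted copy (rather than sorting in place, as `BufferedStatistics` does for efficiency) leaves the caller's insertion order intact.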
Listing 56.61: Some simple utilities for converting objects to strings.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
56.7. UTILITY CLASSES, CONSTANTS, AND ROUTINES

package org.goataa.impl.utils;

import java.lang.reflect.Array;
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.text.NumberFormat;
import java.util.Locale;

/**
 * A utility class for text.
 *
 * @author Thomas Weise
 */
public class TextUtils {

  /** the newline string */
  public static final String NEWLINE = System
      .getProperty("line.separator"); //$NON-NLS-1$

  /**
   * Print an object to a string builder while handling arrays correctly.
   *
   * @param o
   *          the object
   * @param sb
   *          the string builder
   */
  public static final void toStringBuilder(final Object o,
      final StringBuilder sb) {
    int i, j;
    String s;

    if (o == null) {
      sb.append((Object) null);
      return;
    }

    if (o.getClass().isArray()) {

      sb.append('[');

      i = Array.getLength(o);
      for (j = 0; j < i; j++) {
        if (j > 0) {
          sb.append(',');
          sb.append(' ');
        }
        toStringBuilder(Array.get(o, j), sb);
      }

      sb.append(']');
      return;
    }

    if (o instanceof String) {
      sb.append('"');
      s = ((String) o);
      i = s.length();
      for (j = 0; j < i; j++) {
        escapeCharToStringBuilder(s.charAt(j), sb);
      }
      sb.append('"');
      return;
    }

    if (o instanceof Character) {
      sb.append('\'');
      escapeCharToStringBuilder(((Character) o).charValue(), sb);
      sb.append('\'');
      return;
    }

    sb.append(o.toString());
  }

  /**
   * Print a character to a string builder while escaping special characters.
   *
   * @param ch
   *          the character
   * @param sb
   *          the string builder
   */
  public static final void escapeCharToStringBuilder(final char ch,
      final StringBuilder sb) {
    if ((ch < 32) || (ch > 127)) {
      sb.append('\\');
      sb.append('u');
      sb.append(Integer.toHexString(ch));
    } else {
      sb.append(ch);
    }
  }

  /**
   * Convert an object to a string while handling arrays correctly.
   *
   * @param o
   *          the object
   * @return the string
   */
  public static final String toString(final Object o) {
    StringBuilder sb;
    sb = new StringBuilder();
    toStringBuilder(o, sb);
    return sb.toString();
  }

  /** the internal formatter */
  private static final NumberFormat FORMATTER = new DecimalFormat(
      "0.00E00", new DecimalFormatSymbols(Locale.US)); //$NON-NLS-1$

  /**
   * Format a given number for output with a fixed width.
   *
   * @param v
   *          the number
   * @return the corresponding string
   */
  public static final String formatNumberSameWidth(final double v) {
    return FORMATTER.format(v);
  }

  /**
   * Format a given number for output.
   *
   * @param v
   *          the number
   * @return the corresponding string
   */
  public static final String formatNumber(final double v) {
    String s1, s2;
    int i;

    s1 = String.valueOf(v);

    for (i = s1.length(); (--i) >= 0;) {
      if (s1.charAt(i) != '0') {
        break;
      }
    }
    if (i >= 0) {
      if (s1.charAt(i) == '.') {
        s1 = s1.substring(0, i);
      } else {
        if (s1.indexOf('.') < i) {
          s1 = s1.substring(0, i + 1);
        }
      }
    }

    s2 = TextUtils.formatNumberSameWidth(v);
    if (s2.endsWith("E00")) { //$NON-NLS-1$
      s2 = s2.substring(0, s2.length() - 3);
    }

    return (s1.length() <= s2.length()) ? s1 : s2;
  }
}
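The array-handling part of TextUtils.toStringBuilder mirrors behavior that the JDK itself provides via java.util.Arrays.deepToString (TextUtils additionally quotes and escapes strings and characters). A minimal, self-contained sketch of the same recursive array-printing idea, using only the standard library:

```java
import java.util.Arrays;

public class ArrayTextDemo {

  // Renders a nested array as "[[1, 2], [3, 4]]", the same bracket-and-comma
  // layout that TextUtils.toStringBuilder produces for arrays.
  static String render(int[][] m) {
    return Arrays.deepToString(m);
  }

  public static void main(String[] args) {
    System.out.println(render(new int[][] { { 1, 2 }, { 3, 4 } }));
    // prints: [[1, 2], [3, 4]]
  }
}
```

For plain object arrays without string/character escaping, Arrays.deepToString is usually sufficient; the hand-written recursion in TextUtils is needed because it must also handle arbitrary component types reflectively and apply the custom escaping.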
Chapter 57
Demos
57.1 Demos for String-based Problem Spaces
In this package, we provide some example optimization tasks with string-based problem
spaces.
57.1.1 Real-Vector-based Problem Spaces X ⊆ R^n
Here we provide some benchmark problems for real-valued continuous vector spaces.
57.1.1.1 Some Benchmark Functions
Some real-valued benchmark functions for numerical optimization.
Listing 57.1: The Sphere Function.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package demos.org.goataa.strings.real.benchmarkFunctions;

import java.util.Random;

import org.goataa.impl.OptimizationModule;
import org.goataa.spec.IObjectiveFunction;

/**
 * The sphere function introduced in Section 50.3.1.1 on page 581 is a very
 * simple benchmark function for real-vector based problem spaces. It just
 * adds up the squares of the elements of the candidate solutions
 * (Definition D2.2 on page 43).
 *
 * @author Thomas Weise
 */
public final class Sphere extends OptimizationModule implements
    IObjectiveFunction<double[]> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** Instantiate the sphere objective function */
  public Sphere() {
    super();
  }

  /**
   * Compute the value of the sphere function (Section 50.3.1.1 on page 581).
   *
   * @param x
   *          the candidate solution (Definition D2.2 on page 43)
   * @param r
   *          the randomizer
   * @return the objective value
   */
  public final double compute(final double[] x, final Random r) {
    double res;
    int i;

    res = 0d;
    for (i = x.length; (--i) >= 0;) {
      res += (x[i] * x[i]);
    }

    return res;
  }
}
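A quick sanity check of the sphere function's behavior can be done with a standalone re-implementation of the same sum-of-squares loop (this stand-in omits the framework types; the unused Random parameter of the listing is dropped here):

```java
public class SphereCheck {

  // Sum of squares, as computed by the compute(...) loop in Listing 57.1.
  static double sphere(double[] x) {
    double res = 0d;
    for (double v : x) {
      res += v * v;
    }
    return res;
  }

  public static void main(String[] args) {
    System.out.println(sphere(new double[] { 1d, 2d })); // 1 + 4 = 5.0
    System.out.println(sphere(new double[3]));           // global optimum: 0.0
  }
}
```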
Listing 57.2: f1 as defined in Equation 26.1 on page 238.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package demos.org.goataa.strings.real.benchmarkFunctions;

import java.util.Random;

import org.goataa.impl.OptimizationModule;
import org.goataa.spec.IObjectiveFunction;

/**
 * A function f1 which computes the sum of the cosines of the logarithms of
 * all elements in a vector. See Equation 26.1 on page 238 in Task 64 on
 * page 238.
 *
 * @author Thomas Weise
 */
public final class SumCosLn extends OptimizationModule implements
    IObjectiveFunction<double[]> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** Instantiate the sum-cos-ln objective */
  public SumCosLn() {
    super();
  }

  /**
   * Compute the function value according to Equation 26.1 on page 238.
   *
   * @param x
   *          the candidate solution (Definition D2.2 on page 43)
   * @param r
   *          the randomizer
   * @return the objective value
   */
  public final double compute(final double[] x, final Random r) {
    double res;
    int i;

    res = 0d;
    for (i = x.length; (--i) >= 0;) {
      res += Math.cos(Math.log(1d + Math.abs(x[i])));
    }

    return (0.5d - ((1d / (x.length << 1)) * res));
  }

  /**
   * Get the name of the optimization module.
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return super.getName(true);
    }
    return "f1"; //$NON-NLS-1$
  }
}
Listing 57.3: f2 as defined in Equation 26.2 on page 238.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package demos.org.goataa.strings.real.benchmarkFunctions;

import java.util.Random;

import org.goataa.impl.OptimizationModule;
import org.goataa.spec.IObjectiveFunction;

/**
 * A function f2 which computes the cosine of the logarithm of the sum of
 * all elements in a vector. See Equation 26.2 on page 238 in Task 64 on
 * page 238.
 *
 * @author Thomas Weise
 */
public final class CosLnSum extends OptimizationModule implements
    IObjectiveFunction<double[]> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** Instantiate the cos-ln-sum objective */
  public CosLnSum() {
    super();
  }

  /**
   * Compute the function value according to Equation 26.2 on page 238.
   *
   * @param x
   *          the candidate solution (Definition D2.2 on page 43)
   * @param r
   *          the randomizer
   * @return the objective value
   */
  public final double compute(final double[] x, final Random r) {
    double res;
    int i;

    res = 0d;
    for (i = x.length; (--i) >= 0;) {
      res += Math.abs(x[i]);
    }

    return (0.5d - (0.5d * Math.cos(Math.log(1d + res))));
  }

  /**
   * Get the name of the optimization module.
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return super.getName(true);
    }
    return "f2"; //$NON-NLS-1$
  }
}
Listing 57.4: f3 as defined in Equation 26.3 on page 238.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package demos.org.goataa.strings.real.benchmarkFunctions;

import java.util.Random;

import org.goataa.impl.OptimizationModule;
import org.goataa.spec.IObjectiveFunction;

/**
 * A function f3 which computes the maximum of all square values in a
 * vector. See Equation 26.3 on page 238 in Task 64 on page 238.
 *
 * @author Thomas Weise
 */
public final class MaxSqr extends OptimizationModule implements
    IObjectiveFunction<double[]> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** Instantiate the max-squares objective function */
  public MaxSqr() {
    super();
  }

  /**
   * Compute the function value according to Equation 26.3 on page 238.
   *
   * @param x
   *          the candidate solution (Definition D2.2 on page 43)
   * @param r
   *          the randomizer
   * @return the objective value
   */
  public final double compute(final double[] x, final Random r) {
    double res;
    int i;

    res = 0d;
    for (i = x.length; (--i) >= 0;) {
      res = Math.max(res, (x[i] * x[i]));
    }

    return (0.01d * res);
  }

  /**
   * Get the name of the optimization module.
   *
   * @param longVersion
   *          true if the long name should be returned, false if the short
   *          name should be returned
   * @return the name of the optimization module
   */
  @Override
  public String getName(final boolean longVersion) {
    if (longVersion) {
      return super.getName(true);
    }
    return "f3"; //$NON-NLS-1$
  }
}
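The objective functions in Listings 57.2 to 57.4 can be read directly off the code. The following framework-free restatement (my paraphrase of the listings, not additional library code) makes the formulas explicit and checks that all three functions attain their minimum value 0 at the origin:

```java
public class BenchmarkFormulas {

  // f1(x) = 0.5 - (1/(2n)) * sum_i cos(ln(1 + |x_i|))   (Listing 57.2)
  static double f1(double[] x) {
    double res = 0d;
    for (double v : x) {
      res += Math.cos(Math.log(1d + Math.abs(v)));
    }
    return 0.5d - ((1d / (x.length << 1)) * res);
  }

  // f2(x) = 0.5 - 0.5 * cos(ln(1 + sum_i |x_i|))        (Listing 57.3)
  static double f2(double[] x) {
    double s = 0d;
    for (double v : x) {
      s += Math.abs(v);
    }
    return 0.5d - (0.5d * Math.cos(Math.log(1d + s)));
  }

  // f3(x) = 0.01 * max_i x_i^2                          (Listing 57.4)
  static double f3(double[] x) {
    double m = 0d;
    for (double v : x) {
      m = Math.max(m, v * v);
    }
    return 0.01d * m;
  }

  public static void main(String[] args) {
    double[] zero = new double[5];
    // At x = 0 every cosine term is cos(ln 1) = 1, so all three values are 0.
    System.out.println(f1(zero) + " " + f2(zero) + " " + f3(zero));
    // prints: 0.0 0.0 0.0
  }
}
```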
57.1.1.2 Experiments
57.1.1.2.1 The Function Test
Listing 57.5: Testing the functions from Task 64.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package demos.org.goataa.strings.real;

import java.util.List;

import org.goataa.impl.algorithms.RandomSampling;
import org.goataa.impl.algorithms.RandomWalk;
import org.goataa.impl.algorithms.de.DifferentialEvolution1;
import org.goataa.impl.algorithms.ea.SimpleGenerationalEA;
import org.goataa.impl.algorithms.ea.selection.TournamentSelection;
import org.goataa.impl.algorithms.ea.selection.TruncationSelection;
import org.goataa.impl.algorithms.es.EvolutionStrategy;
import org.goataa.impl.algorithms.hc.HillClimbing;
import org.goataa.impl.algorithms.sa.SimulatedAnnealing;
import org.goataa.impl.algorithms.sa.temperatureSchedules.Exponential;
import org.goataa.impl.algorithms.sa.temperatureSchedules.Logarithmic;
import org.goataa.impl.searchOperations.strings.real.binary.DoubleArrayWeightedMeanCrossover;
import org.goataa.impl.searchOperations.strings.real.nullary.DoubleArrayUniformCreation;
import org.goataa.impl.searchOperations.strings.real.ternary.DoubleArrayDEbin;
import org.goataa.impl.searchOperations.strings.real.ternary.DoubleArrayDEexp;
import org.goataa.impl.searchOperations.strings.real.unary.DoubleArrayAllNormalMutation;
import org.goataa.impl.searchOperations.strings.real.unary.DoubleArrayAllUniformMutation;
import org.goataa.impl.termination.StepLimit;
import org.goataa.impl.utils.BufferedStatistics;
import org.goataa.impl.utils.Individual;
import org.goataa.spec.INullarySearchOperation;
import org.goataa.spec.IObjectiveFunction;
import org.goataa.spec.ISOOptimizationAlgorithm;
import org.goataa.spec.ISelectionAlgorithm;
import org.goataa.spec.ITemperatureSchedule;
import org.goataa.spec.ITernarySearchOperation;
import org.goataa.spec.IUnarySearchOperation;

import demos.org.goataa.strings.real.benchmarkFunctions.CosLnSum;
import demos.org.goataa.strings.real.benchmarkFunctions.MaxSqr;
import demos.org.goataa.strings.real.benchmarkFunctions.Sphere;
import demos.org.goataa.strings.real.benchmarkFunctions.SumCosLn;

/**
 * The application of several optimization methods to the benchmark
 * functions defined in Task 64 on page 238. This program is a more
 * extensive version of the one given in Paragraph 57.1.1.2.2 on page 885.
 *
 * @author Thomas Weise
 */
public class FunctionTest {

  /**
   * The main routine.
   *
   * @param args
   *          the command line arguments which are ignored here
   */
  @SuppressWarnings("unchecked")
  public static final void main(final String[] args) {
    final IObjectiveFunction<double[]>[] f;
    INullarySearchOperation<double[]> create;
    IUnarySearchOperation<double[]>[] mutate;
    final int maxSteps, runs;
    ITemperatureSchedule[] schedules;
    int i, k, l, n, ri, li, mi;
    HillClimbing<double[], double[]> HC;
    RandomWalk<double[], double[]> RW;
    RandomSampling<double[], double[]> RS;
    SimulatedAnnealing<double[], double[]> SA;
    SimpleGenerationalEA<double[], double[]> GA;
    EvolutionStrategy<double[]> ES;
    DifferentialEvolution1<double[]> DE;
    ISelectionAlgorithm[] sel;
    int[] ns, rhos, lambdas, mus;
    ITernarySearchOperation[] tern;

    maxSteps = 100000;
    runs = 100;

    System.out.println("Maximum Number of Steps: " + maxSteps); //$NON-NLS-1$
    System.out.println("Number of Runs         : " + runs); //$NON-NLS-1$
    System.out.println();

    f = new IObjectiveFunction[] { new Sphere(), new SumCosLn(),
        new CosLnSum(), new MaxSqr() };

    HC = new HillClimbing<double[], double[]>();
    SA = new SimulatedAnnealing<double[], double[]>();
    RW = new RandomWalk<double[], double[]>();
    RS = new RandomSampling<double[], double[]>();
    GA = new SimpleGenerationalEA<double[], double[]>();
    ES = new EvolutionStrategy<double[]>();
    DE = new DifferentialEvolution1<double[]>();

    GA.setBinarySearchOperation(DoubleArrayWeightedMeanCrossover.DOUBLE_ARRAY_WEIGHTED_MEAN_CROSSOVER);
    sel = new ISelectionAlgorithm[] { new TournamentSelection(2),
        new TournamentSelection(3), new TournamentSelection(5),
        new TruncationSelection(10), new TruncationSelection(50) };

    schedules = new ITemperatureSchedule[] {//
        new Logarithmic(1d),//
        new Logarithmic(1d),//
        new Logarithmic(0.1d),//
        new Logarithmic(1e-3d),//
        new Logarithmic(1e-4d),//
        new Exponential(1d, 1e-5),//
        new Exponential(1d, 1e-4),//
        new Exponential(0.1d, 1e-5), //
        new Exponential(0.1d, 1e-4), //
        new Exponential(1e-3d, 1e-5),//
        new Exponential(1e-3d, 1e-4),//
        new Exponential(1e-4d, 1e-5), //
        new Exponential(1e-4d, 1e-4) };

    ns = new int[] { 2, 5, 10 };

    rhos = new int[] { 2, 10 };
    lambdas = new int[] { 50, 100, 200 };
    mus = new int[] { 10, 20 };

    for (n = 0; n < ns.length; n++) {

      System.out.println();
      System.out.println();
      System.out.println("Dimensions: " + ns[n]); //$NON-NLS-1$
      System.out.println();

      tern = new ITernarySearchOperation[] {
          new DoubleArrayDEbin(-10d, 10d, 0.3d, 0.7d),
          new DoubleArrayDEbin(-10d, 10d, 0.7d, 0.3d),
          new DoubleArrayDEbin(-10d, 10d, 0.5d, 0.5d),
          new DoubleArrayDEexp(-10d, 10d, 0.3d, 0.7d),
          new DoubleArrayDEexp(-10d, 10d, 0.7d, 0.3d),
          new DoubleArrayDEexp(-10d, 10d, 0.5d, 0.5d), };

      mutate = new IUnarySearchOperation[] {//
          new DoubleArrayAllNormalMutation(-10.0d, 10.0d),//
          new DoubleArrayAllUniformMutation(-10.0d, 10.0d) };
      create = new DoubleArrayUniformCreation(ns[n], -10.0d, 10.0d);

      for (i = 0; i < f.length; i++) {
        System.out.println();
        System.out.println();
        System.out.println(f[i].getClass().getSimpleName());
        System.out.println();

        // Hill Climbing (Algorithm 26.1 on page 230)
        HC.setObjectiveFunction(f[i]);
        HC.setNullarySearchOperation(create);
        for (k = 0; k < mutate.length; k++) {
          HC.setUnarySearchOperation(mutate[k]);
          testRuns(HC, runs, maxSteps);
        }

        // Simulated Annealing (Algorithm 27.1 on page 245)
        // and Simulated Quenching
        // (Section 27.3.2 on page 249)
        SA.setObjectiveFunction(f[i]);
        SA.setNullarySearchOperation(create);
        for (l = 0; l < schedules.length; l++) {
          SA.setTemperatureSchedule(schedules[l]);
          for (k = 0; k < mutate.length; k++) {
            SA.setUnarySearchOperation(mutate[k]);
            testRuns(SA, runs, maxSteps);
          }
        }

        // Generational Genetic/Evolutionary Algorithm
        // (Chapter 28 on page 253,
        // Section 28.1.4.1 on page 265,
        // Section 29.3 on page 328)
        GA.setObjectiveFunction(f[i]);
        GA.setNullarySearchOperation(create);
        for (l = 0; l < sel.length; l++) {
          GA.setSelectionAlgorithm(sel[l]);
          for (k = 0; k < mutate.length; k++) {
            GA.setUnarySearchOperation(mutate[k]);
            testRuns(GA, runs, maxSteps);
          }
        }

        // Evolution Strategies: Algorithm 30.1 on page 361
        ES.setObjectiveFunction(f[i]);
        ES.setNullarySearchOperation(create);
        ES.setDimension(ns[n]);
        ES.setMinimum(-10d);
        ES.setMaximum(10d);

        for (mi = 0; mi < mus.length; mi++) {
          ES.setMu(mus[mi]);
          for (li = 0; li < lambdas.length; li++) {
            ES.setLambda(lambdas[li]);
            for (ri = 0; ri < rhos.length; ri++) {
              ES.setRho(rhos[ri]);

              ES.setPlus(true);
              testRuns(ES, runs, maxSteps);

              ES.setPlus(false);
              testRuns(ES, runs, maxSteps);
            }
          }
        }

        // Differential Evolution: Chapter 33 on page 419
        DE.setObjectiveFunction(f[i]);
        DE.setNullarySearchOperation(create);
        for (li = 0; li < lambdas.length; li++) {
          DE.setPopulationSize(lambdas[li]);
          for (mi = 0; mi < tern.length; mi++) {
            DE.setTernarySearchOperation(tern[mi]);
            testRuns(DE, runs, maxSteps);
          }
        }

        // Random Walks (Section 8.2 on page 114)
        RW.setObjectiveFunction(f[i]);
        RW.setNullarySearchOperation(create);
        for (k = 0; k < mutate.length; k++) {
          RW.setUnarySearchOperation(mutate[k]);
          testRuns(RW, runs, maxSteps);
        }

        // Random Sampling (Section 8.1 on page 113)
        RS.setObjectiveFunction(f[i]);
        RS.setNullarySearchOperation(create);
        testRuns(RS, runs, maxSteps);
      }
    }
  }

  /**
   * Perform the test runs.
   *
   * @param algorithm
   *          the algorithm configuration to test
   * @param runs
   *          the number of runs to perform
   * @param steps
   *          the number of steps to execute per run
   */
  @SuppressWarnings("unchecked")
  private static final void testRuns(
      final ISOOptimizationAlgorithm<?, double[], ?> algorithm,
      final int runs, final int steps) {
    int i;
    BufferedStatistics stat;
    List<Individual<?, double[]>> solutions;
    Individual<?, double[]> individual;

    stat = new BufferedStatistics();
    algorithm.setTerminationCriterion(new StepLimit(steps));

    for (i = 0; i < runs; i++) {
      algorithm.setRandSeed(i);
      solutions = ((List<Individual<?, double[]>>) (algorithm.call()));
      individual = solutions.get(0);
      stat.add(individual.v);
    }

    System.out.println(stat.getConfiguration(false) + ' '
        + algorithm.toString(false));
  }
}
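The experiment structure of Listing 57.5 (nullary creation, unary mutation, a step limit, and many independent runs with different seeds) can be sketched without the goataa framework. The following minimal hill climber on the sphere function is an illustrative stand-in for the HillClimbing configuration above, not the book's implementation; the mutation step size 0.1 and the clamping to [-10, 10] are my own choices:

```java
import java.util.Random;

public class MiniHillClimb {

  // Sphere objective: sum of squares.
  static double sphere(double[] x) {
    double s = 0d;
    for (double v : x) {
      s += v * v;
    }
    return s;
  }

  // One run: start uniformly in [-10,10]^n (nullary operation), apply
  // Gaussian mutation (unary operation), keep the offspring if it is no
  // worse, and stop after maxSteps objective evaluations (StepLimit).
  static double run(int n, int maxSteps, long seed) {
    Random r = new Random(seed);
    double[] x = new double[n];
    for (int i = 0; i < n; i++) {
      x[i] = -10d + 20d * r.nextDouble();
    }
    double fx = sphere(x);
    double[] y = new double[n];
    for (int step = 0; step < maxSteps; step++) {
      for (int i = 0; i < n; i++) {
        y[i] = Math.max(-10d, Math.min(10d, x[i] + 0.1d * r.nextGaussian()));
      }
      double fy = sphere(y);
      if (fy <= fx) { // accept only non-worsening moves
        System.arraycopy(y, 0, x, 0, n);
        fx = fy;
      }
    }
    return fx;
  }

  public static void main(String[] args) {
    // Ten independent runs with seeds 0..9, as testRuns does with 100 seeds.
    for (long seed = 0; seed < 10; seed++) {
      System.out.println(run(2, 10000, seed));
    }
  }
}
```

Statistics such as the min/med/avg/max lines printed by testRuns would then be computed over the returned best objective values of all runs.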
Maximum Number of Steps: 100000
Number of Runs         : 100

Dimensions: 2

Sphere

(Excerpt of the program output: for every algorithm configuration, one line reports the minimum, median, average, and maximum of the best objective values over the 100 runs, covering HC with both mutation operators, SA and SQ with each temperature schedule, the sEA with each selection scheme, the ES with all (mu/rho + lambda) and (mu/rho, lambda) settings, DE1 with all population sizes and ternary operators, RW, and RS. Corresponding tables follow for SumCosLn (f1), CosLnSum (f2), and MaxSqr (f3) and for 5 and 10 dimensions.)
153 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+100) , o1=SDNM([ 10 , 10] 2) , f=f1 , r )
154 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES( ( 20/2 , 100) , o1=SDNM([ 10 , 10] 2) , f=f1 , r )
155 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+100) , o1=SDNM([ 10 , 10] 2) , f=f1 , r )
156 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES( ( 20/10 , 100) , o1=SDNM([ 10 , 10] 2) , f=f1 , r )
157 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+200) , o1=SDNM([ 10 , 10] 2) , f=f1 , r )
158 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES( ( 20/2 , 200) , o1=SDNM([ 10 , 10] 2) , f=f1 , r )
159 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+200) , o1=SDNM([ 10 , 10] 2) , f=f1 , r )
160 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES( ( 20/10 , 200) , o1=SDNM([ 10 , 10] 2) , f=f1 , r )
161 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =50, op3=1/rand/ bi n ([ 10 , 10] 2 , cr =0. 3 ,F=0. 7) )
162 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =50, op3=1/rand/ bi n ([ 10 , 10] 2 , cr =0. 7 ,F=0. 3) )
163 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =50, op3=1/rand/ bi n ([ 10 , 10] 2 , cr =0. 5 ,F=0. 5) )
164 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =50, op3=1/rand/exp ([ 10 , 10] 2 , cr =0. 3 ,F=0. 7) )
165 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =50, op3=1/rand/exp ([ 10 , 10] 2 , cr =0. 7 ,F=0. 3) )
166 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =50, op3=1/rand/exp ([ 10 , 10] 2 , cr =0. 5 ,F=0. 5) )
167 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =100, op3=1/rand/ bi n ([ 10 , 10] 2 , cr =0. 3 ,F=0. 7) )
168 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =100, op3=1/rand/ bi n ([ 10 , 10] 2 , cr =0. 7 ,F=0. 3) )
169 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =100, op3=1/rand/ bi n ([ 10 , 10] 2 , cr =0. 5 ,F=0. 5) )
170 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =100, op3=1/rand/exp ([ 10 , 10] 2 , cr =0. 3 ,F=0. 7) )
171 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =100, op3=1/rand/exp ([ 10 , 10] 2 , cr =0. 7 ,F=0. 3) )
172 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =100, op3=1/rand/exp ([ 10 , 10] 2 , cr =0. 5 ,F=0. 5) )
173 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =200, op3=1/rand/ bi n ([ 10 , 10] 2 , cr =0. 3 ,F=0. 7) )
174 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =200, op3=1/rand/ bi n ([ 10 , 10] 2 , cr =0. 7 ,F=0. 3) )
175 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =200, op3=1/rand/ bi n ([ 10 , 10] 2 , cr =0. 5 ,F=0. 5) )
176 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =200, op3=1/rand/exp ([ 10 , 10] 2 , cr =0. 3 ,F=0. 7) )
177 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =200, op3=1/rand/exp ([ 10 , 10] 2 , cr =0. 7 ,F=0. 3) )
178 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =200, op3=1/rand/exp ([ 10 , 10] 2 , cr =0. 5 ,F=0. 5) )
179 min=1.36E06 med=2.57E01 avg=2.57E01 max=6.49E01 RW( o1=NM([ 10 , 10] 2) , f=f1 , r )
180 min=6.96E02 med=4.77E01 avg=4.58E01 max=6.94E01 RW( o1=UM([ 10 , 10] 2) , f=f1 , r )
181 min=1.17E06 med=9.95E05 avg=1.40E04 max=1.11E03 RS( f=f1 , r )
182
183
184 CosLnSum
185
186 min=1.03E-11 med=1.87E-09 avg=2.57E-09 max=1.95E-08 HC(o1=NM([-10,10]^2), f=f2, r)
187 min=1.22E-11 med=3.57E-10 avg=5.45E-10 max=2.91E-09 HC(o1=UM([-10,10]^2), f=f2, r)
188 min=2.62E-08 med=4.57E-06 avg=1.05E-01 max=8.73E-01 SA(o1=NM([-10,10]^2), f=f2, r, log(1))
189 min=3.32E-03 med=7.61E-01 avg=6.83E-01 max=9.38E-01 SA(o1=UM([-10,10]^2), f=f2, r, log(1))
190 min=2.62E-08 med=4.57E-06 avg=1.05E-01 max=8.73E-01 SA(o1=NM([-10,10]^2), f=f2, r, log(1))
191 min=3.32E-03 med=7.61E-01 avg=6.83E-01 max=9.38E-01 SA(o1=UM([-10,10]^2), f=f2, r, log(1))
192 min=1.90E-09 med=7.03E-08 avg=1.12E-07 max=4.19E-07 SA(o1=NM([-10,10]^2), f=f2, r, log(0.1))
193 min=6.36E-10 med=2.51E-07 avg=5.77E-02 max=8.51E-01 SA(o1=UM([-10,10]^2), f=f2, r, log(0.1))
194 min=3.54E-12 med=2.50E-09 avg=3.58E-09 max=1.79E-08 SA(o1=NM([-10,10]^2), f=f2, r, log(0.001))
195 min=9.03E-12 med=8.60E-10 avg=1.33E-09 max=7.40E-09 SA(o1=UM([-10,10]^2), f=f2, r, log(0.001))
196 min=2.18E-12 med=2.13E-09 avg=2.68E-09 max=1.16E-08 SA(o1=NM([-10,10]^2), f=f2, r, log(1.0E-4))
197 min=5.52E-12 med=3.08E-10 avg=4.76E-10 max=3.75E-09 SA(o1=UM([-10,10]^2), f=f2, r, log(1.0E-4))
198 min=5.49E-06 med=4.79E-01 avg=4.20E-01 max=8.92E-01 SQ(o1=NM([-10,10]^2), f=f2, r, exp(Ts=1, e=1.0E-5))
199 min=1.93E-01 med=7.90E-01 avg=7.37E-01 max=9.44E-01 SQ(o1=UM([-10,10]^2), f=f2, r, exp(Ts=1, e=1.0E-5))
57.1. DEMOS FOR STRING-BASED PROBLEM SPACES    871
200 min=1.84E-10 med=8.98E-09 avg=1.60E-08 max=7.80E-08 SQ(o1=NM([-10,10]^2), f=f2, r, exp(Ts=1, e=1.0E-4))
201 min=6.02E-12 med=4.27E-09 avg=6.51E-09 max=3.26E-08 SQ(o1=UM([-10,10]^2), f=f2, r, exp(Ts=1, e=1.0E-4))
202 min=9.49E-09 med=1.06E-06 avg=3.74E-02 max=8.14E-01 SQ(o1=NM([-10,10]^2), f=f2, r, exp(Ts=0.1, e=1.0E-5))
203 min=5.39E-06 med=7.39E-01 avg=6.27E-01 max=9.34E-01 SQ(o1=UM([-10,10]^2), f=f2, r, exp(Ts=0.1, e=1.0E-5))
204 min=9.42E-11 med=4.92E-09 avg=6.85E-09 max=3.47E-08 SQ(o1=NM([-10,10]^2), f=f2, r, exp(Ts=0.1, e=1.0E-4))
205 min=5.32E-12 med=1.26E-09 avg=1.69E-09 max=8.53E-09 SQ(o1=UM([-10,10]^2), f=f2, r, exp(Ts=0.1, e=1.0E-4))
206 min=9.65E-12 med=7.97E-09 avg=9.72E-09 max=4.83E-08 SQ(o1=NM([-10,10]^2), f=f2, r, exp(Ts=0.001, e=1.0E-5))
207 min=6.07E-12 med=3.65E-09 avg=5.84E-09 max=3.74E-08 SQ(o1=UM([-10,10]^2), f=f2, r, exp(Ts=0.001, e=1.0E-5))
208 min=2.02E-11 med=2.47E-09 avg=4.28E-09 max=1.74E-08 SQ(o1=NM([-10,10]^2), f=f2, r, exp(Ts=0.001, e=1.0E-4))
209 min=7.77E-12 med=4.60E-10 avg=7.18E-10 max=2.72E-09 SQ(o1=UM([-10,10]^2), f=f2, r, exp(Ts=0.001, e=1.0E-4))
210 min=4.39E-11 med=2.47E-09 avg=3.69E-09 max=1.51E-08 SQ(o1=NM([-10,10]^2), f=f2, r, exp(Ts=1.0E-4, e=1.0E-5))
211 min=2.25E-12 med=7.74E-10 avg=9.91E-10 max=5.23E-09 SQ(o1=UM([-10,10]^2), f=f2, r, exp(Ts=1.0E-4, e=1.0E-5))
212 min=5.10E-12 med=2.50E-09 avg=3.70E-09 max=1.87E-08 SQ(o1=NM([-10,10]^2), f=f2, r, exp(Ts=1.0E-4, e=1.0E-4))
213 min=1.93E-11 med=4.51E-10 avg=6.45E-10 max=2.73E-09 SQ(o1=UM([-10,10]^2), f=f2, r, exp(Ts=1.0E-4, e=1.0E-4))
214 min=1.53E-11 med=1.10E-09 avg=1.53E-09 max=8.57E-09 sEA(o1=NM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
215 min=5.49E-13 med=1.00E-10 avg=1.72E-10 max=9.17E-10 sEA(o1=UM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
216 min=1.76E-11 med=4.65E-10 avg=5.69E-10 max=2.56E-09 sEA(o1=NM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
217 min=3.90E-13 med=3.68E-11 avg=5.57E-11 max=2.60E-10 sEA(o1=UM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
218 min=1.36E-13 med=1.29E-11 avg=1.88E-11 max=1.06E-10 sEA(o1=NM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
219 min=1.29E-14 med=1.54E-12 avg=1.90E-12 max=7.51E-12 sEA(o1=UM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
220 min=0.00E00 med=3.82E-12 avg=6.62E-10 max=1.01E-08 sEA(o1=NM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
221 min=0.00E00 med=1.00E-12 avg=8.91E-11 max=2.07E-09 sEA(o1=UM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
222 min=1.69E-12 med=1.37E-10 avg=2.18E-10 max=8.50E-10 sEA(o1=NM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
223 min=8.48E-14 med=1.96E-11 avg=2.79E-11 max=1.82E-10 sEA(o1=UM([-10,10]^2), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
224 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+50), o1=SDNM([-10,10]^2), f=f2, r)
225 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,50), o1=SDNM([-10,10]^2), f=f2, r)
226 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+50), o1=SDNM([-10,10]^2), f=f2, r)
227 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,50), o1=SDNM([-10,10]^2), f=f2, r)
228 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+100), o1=SDNM([-10,10]^2), f=f2, r)
229 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,100), o1=SDNM([-10,10]^2), f=f2, r)
230 min=0.00E00 med=0.00E00 avg=3.53E-11 max=3.53E-09 ES((10/10+100), o1=SDNM([-10,10]^2), f=f2, r)
231 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,100), o1=SDNM([-10,10]^2), f=f2, r)
232 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+200), o1=SDNM([-10,10]^2), f=f2, r)
233 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,200), o1=SDNM([-10,10]^2), f=f2, r)
234 min=0.00E00 med=0.00E00 avg=3.13E-12 max=3.13E-10 ES((10/10+200), o1=SDNM([-10,10]^2), f=f2, r)
235 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,200), o1=SDNM([-10,10]^2), f=f2, r)
236 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+50), o1=SDNM([-10,10]^2), f=f2, r)
237 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,50), o1=SDNM([-10,10]^2), f=f2, r)
238 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+50), o1=SDNM([-10,10]^2), f=f2, r)
239 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,50), o1=SDNM([-10,10]^2), f=f2, r)
240 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+100), o1=SDNM([-10,10]^2), f=f2, r)
241 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,100), o1=SDNM([-10,10]^2), f=f2, r)
242 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+100), o1=SDNM([-10,10]^2), f=f2, r)
243 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,100), o1=SDNM([-10,10]^2), f=f2, r)
244 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+200), o1=SDNM([-10,10]^2), f=f2, r)
245 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,200), o1=SDNM([-10,10]^2), f=f2, r)
246 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+200), o1=SDNM([-10,10]^2), f=f2, r)
247 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,200), o1=SDNM([-10,10]^2), f=f2, r)
248 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f2, r, ps=50, op3=1/rand/bin([-10,10]^2, cr=0.3, F=0.7))
249 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f2, r, ps=50, op3=1/rand/bin([-10,10]^2, cr=0.7, F=0.3))
250 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f2, r, ps=50, op3=1/rand/bin([-10,10]^2, cr=0.5, F=0.5))
251 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f2, r, ps=50, op3=1/rand/exp([-10,10]^2, cr=0.3, F=0.7))
252 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f2, r, ps=50, op3=1/rand/exp([-10,10]^2, cr=0.7, F=0.3))
253 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f2, r, ps=50, op3=1/rand/exp([-10,10]^2, cr=0.5, F=0.5))
254 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f2, r, ps=100, op3=1/rand/bin([-10,10]^2, cr=0.3, F=0.7))
255 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f2, r, ps=100, op3=1/rand/bin([-10,10]^2, cr=0.7, F=0.3))
256 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f2, r, ps=100, op3=1/rand/bin([-10,10]^2, cr=0.5, F=0.5))
257 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f2, r, ps=100, op3=1/rand/exp([-10,10]^2, cr=0.3, F=0.7))
258 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f2, r, ps=100, op3=1/rand/exp([-10,10]^2, cr=0.7, F=0.3))
259 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f2, r, ps=100, op3=1/rand/exp([-10,10]^2, cr=0.5, F=0.5))
260 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f2, r, ps=200, op3=1/rand/bin([-10,10]^2, cr=0.3, F=0.7))
261 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f2, r, ps=200, op3=1/rand/bin([-10,10]^2, cr=0.7, F=0.3))
262 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f2, r, ps=200, op3=1/rand/bin([-10,10]^2, cr=0.5, F=0.5))
263 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f2, r, ps=200, op3=1/rand/exp([-10,10]^2, cr=0.3, F=0.7))
264 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f2, r, ps=200, op3=1/rand/exp([-10,10]^2, cr=0.7, F=0.3))
265 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f2, r, ps=200, op3=1/rand/exp([-10,10]^2, cr=0.5, F=0.5))
266 min=4.41E-06 med=5.13E-01 avg=4.73E-01 max=9.08E-01 RW(o1=NM([-10,10]^2), f=f2, r)
267 min=1.69E-01 med=7.72E-01 avg=7.32E-01 max=9.41E-01 RW(o1=UM([-10,10]^2), f=f2, r)
268 min=3.74E-06 med=2.52E-04 avg=4.52E-04 max=3.88E-03 RS(f=f2, r)
269
270
271 MaxSqr
272
273 min=2.55E-13 med=4.21E-11 avg=5.83E-11 max=2.85E-10 HC(o1=NM([-10,10]^2), f=f3, r)
274 min=7.12E-14 med=8.55E-12 avg=1.15E-11 max=5.25E-11 HC(o1=UM([-10,10]^2), f=f3, r)
275 min=7.45E-08 med=1.70E-04 avg=5.57E-03 max=7.33E-02 SA(o1=NM([-10,10]^2), f=f3, r, log(1))
276 min=2.62E-03 med=1.61E-01 avg=1.84E-01 max=6.41E-01 SA(o1=UM([-10,10]^2), f=f3, r, log(1))
277 min=7.45E-08 med=1.70E-04 avg=5.57E-03 max=7.33E-02 SA(o1=NM([-10,10]^2), f=f3, r, log(1))
278 min=2.62E-03 med=1.61E-01 avg=1.84E-01 max=6.41E-01 SA(o1=UM([-10,10]^2), f=f3, r, log(1))
279 min=1.47E-10 med=7.32E-08 avg=1.27E-07 max=7.98E-07 SA(o1=NM([-10,10]^2), f=f3, r, log(0.1))
280 min=5.13E-09 med=4.80E-03 avg=7.63E-03 max=3.21E-02 SA(o1=UM([-10,10]^2), f=f3, r, log(0.1))
281 min=1.26E-13 med=6.51E-10 avg=9.54E-10 max=6.62E-09 SA(o1=NM([-10,10]^2), f=f3, r, log(0.001))
282 min=4.57E-12 med=6.73E-10 avg=8.88E-10 max=4.15E-09 SA(o1=UM([-10,10]^2), f=f3, r, log(0.001))
283 min=4.10E-13 med=1.12E-10 avg=1.74E-10 max=7.75E-10 SA(o1=NM([-10,10]^2), f=f3, r, log(1.0E-4))
284 min=5.69E-14 med=7.47E-11 avg=1.12E-10 max=7.10E-10 SA(o1=UM([-10,10]^2), f=f3, r, log(1.0E-4))
285 min=7.03E-08 med=6.65E-02 avg=8.79E-02 max=4.36E-01 SQ(o1=NM([-10,10]^2), f=f3, r, exp(Ts=1, e=1.0E-5))
286 min=1.36E-02 med=1.82E-01 avg=2.44E-01 max=7.49E-01 SQ(o1=UM([-10,10]^2), f=f3, r, exp(Ts=1, e=1.0E-5))
287 min=3.21E-11 med=3.90E-09 avg=5.54E-09 max=2.48E-08 SQ(o1=NM([-10,10]^2), f=f3, r, exp(Ts=1, e=1.0E-4))
288 min=2.13E-11 med=3.26E-09 avg=5.33E-09 max=3.80E-08 SQ(o1=UM([-10,10]^2), f=f3, r, exp(Ts=1, e=1.0E-4))
289 min=1.55E-08 med=3.43E-06 avg=4.59E-04 max=8.41E-03 SQ(o1=NM([-10,10]^2), f=f3, r, exp(Ts=0.1, e=1.0E-5))
290 min=9.65E-04 med=1.45E-01 avg=1.50E-01 max=4.74E-01 SQ(o1=UM([-10,10]^2), f=f3, r, exp(Ts=0.1, e=1.0E-5))
291 min=4.80E-13 med=6.25E-10 avg=7.75E-10 max=3.89E-09 SQ(o1=NM([-10,10]^2), f=f3, r, exp(Ts=0.1, e=1.0E-4))
292 min=1.33E-11 med=4.11E-10 avg=5.69E-10 max=2.83E-09 SQ(o1=UM([-10,10]^2), f=f3, r, exp(Ts=0.1, e=1.0E-4))
293 min=8.79E-12 med=3.89E-09 avg=5.83E-09 max=3.44E-08 SQ(o1=NM([-10,10]^2), f=f3, r, exp(Ts=0.001, e=1.0E-5))
294 min=1.05E-10 med=4.46E-09 avg=9.31E-09 max=1.00E-07 SQ(o1=UM([-10,10]^2), f=f3, r, exp(Ts=0.001, e=1.0E-5))
295 min=1.70E-13 med=9.98E-11 avg=1.46E-10 max=6.55E-10 SQ(o1=NM([-10,10]^2), f=f3, r, exp(Ts=0.001, e=1.0E-4))
296 min=2.45E-13 med=1.50E-11 avg=2.25E-11 max=1.42E-10 SQ(o1=UM([-10,10]^2), f=f3, r, exp(Ts=0.001, e=1.0E-4))
297 min=1.28E-12 med=5.37E-10 avg=6.63E-10 max=3.63E-09 SQ(o1=NM([-10,10]^2), f=f3, r, exp(Ts=1.0E-4, e=1.0E-5))
298 min=4.20E-12 med=4.60E-10 avg=6.85E-10 max=3.89E-09 SQ(o1=UM([-10,10]^2), f=f3, r, exp(Ts=1.0E-4, e=1.0E-5))
299 min=8.34E-13 med=4.95E-11 avg=9.03E-11 max=7.97E-10 SQ(o1=NM([-10,10]^2), f=f3, r, exp(Ts=1.0E-4, e=1.0E-4))
300 min=2.92E-13 med=1.38E-11 avg=1.68E-11 max=9.63E-11 SQ(o1=UM([-10,10]^2), f=f3, r, exp(Ts=1.0E-4, e=1.0E-4))
301 min=1.94E-13 med=2.34E-11 avg=3.61E-11 max=2.07E-10 sEA(o1=NM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
302 min=3.80E-14 med=2.39E-12 avg=3.25E-12 max=1.44E-11 sEA(o1=UM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
303 min=7.11E-14 med=6.81E-12 avg=9.05E-12 max=5.08E-11 sEA(o1=NM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
304 min=1.63E-14 med=8.74E-13 avg=1.27E-12 max=8.12E-12 sEA(o1=UM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
305 min=5.34E-16 med=2.03E-13 avg=3.57E-13 max=2.47E-12 sEA(o1=NM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
306 min=2.16E-17 med=4.03E-14 avg=5.68E-14 max=3.28E-13 sEA(o1=UM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
307 min=6.17E-55 med=3.59E-13 avg=1.57E-11 max=1.75E-10 sEA(o1=NM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
308 min=1.40E-37 med=8.14E-15 avg=1.81E-12 max=4.51E-11 sEA(o1=UM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
309 min=5.43E-14 med=3.68E-12 avg=4.99E-12 max=1.98E-11 sEA(o1=NM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
310 min=2.57E-15 med=4.66E-13 avg=6.44E-13 max=2.69E-12 sEA(o1=UM([-10,10]^2), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
311 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+50), o1=SDNM([-10,10]^2), f=f3, r)
312 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,50), o1=SDNM([-10,10]^2), f=f3, r)
313 min=3.42E-176 med=2.70E-90 avg=5.23E-14 max=5.22E-12 ES((10/10+50), o1=SDNM([-10,10]^2), f=f3, r)
314 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,50), o1=SDNM([-10,10]^2), f=f3, r)
315 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+100), o1=SDNM([-10,10]^2), f=f3, r)
316 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,100), o1=SDNM([-10,10]^2), f=f3, r)
317 min=7.98E-129 med=7.66E-76 avg=1.98E-15 max=1.45E-13 ES((10/10+100), o1=SDNM([-10,10]^2), f=f3, r)
318 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,100), o1=SDNM([-10,10]^2), f=f3, r)
319 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+200), o1=SDNM([-10,10]^2), f=f3, r)
320 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,200), o1=SDNM([-10,10]^2), f=f3, r)
321 min=2.10E-110 med=8.99E-60 avg=2.80E-15 max=2.52E-13 ES((10/10+200), o1=SDNM([-10,10]^2), f=f3, r)
322 min=1.32E-271 med=1.78E-258 avg=4.92E-245 max=4.92E-243 ES((10/10,200), o1=SDNM([-10,10]^2), f=f3, r)
323 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+50), o1=SDNM([-10,10]^2), f=f3, r)
324 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,50), o1=SDNM([-10,10]^2), f=f3, r)
325 min=5.52E-153 med=6.29E-90 avg=3.92E-33 max=3.92E-31 ES((20/10+50), o1=SDNM([-10,10]^2), f=f3, r)
326 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,50), o1=SDNM([-10,10]^2), f=f3, r)
327 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+100), o1=SDNM([-10,10]^2), f=f3, r)
328 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,100), o1=SDNM([-10,10]^2), f=f3, r)
329 min=3.62E-112 med=1.99E-81 avg=1.08E-18 max=1.08E-16 ES((20/10+100), o1=SDNM([-10,10]^2), f=f3, r)
330 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,100), o1=SDNM([-10,10]^2), f=f3, r)
331 min=3.85E-322 med=2.38E-308 avg=7.45E-300 max=6.98E-298 ES((20/2+200), o1=SDNM([-10,10]^2), f=f3, r)
332 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,200), o1=SDNM([-10,10]^2), f=f3, r)
333 min=1.46E-93 med=3.52E-64 avg=2.84E-28 max=2.84E-26 ES((20/10+200), o1=SDNM([-10,10]^2), f=f3, r)
334 min=2.14E-237 med=2.05E-231 avg=4.20E-226 max=3.63E-224 ES((20/10,200), o1=SDNM([-10,10]^2), f=f3, r)
335 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f3, r, ps=50, op3=1/rand/bin([-10,10]^2, cr=0.3, F=0.7))
336 min=0.00E00 med=0.00E00 avg=3.68E-145 max=3.68E-143 DE1(f=f3, r, ps=50, op3=1/rand/bin([-10,10]^2, cr=0.7, F=0.3))
337 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f3, r, ps=50, op3=1/rand/bin([-10,10]^2, cr=0.5, F=0.5))
338 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f3, r, ps=50, op3=1/rand/exp([-10,10]^2, cr=0.3, F=0.7))
339 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f3, r, ps=50, op3=1/rand/exp([-10,10]^2, cr=0.7, F=0.3))
340 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f3, r, ps=50, op3=1/rand/exp([-10,10]^2, cr=0.5, F=0.5))
341 min=2.74E-215 med=4.06E-211 avg=8.01E-209 max=3.03E-207 DE1(f=f3, r, ps=100, op3=1/rand/bin([-10,10]^2, cr=0.3, F=0.7))
342 min=2.75E-276 med=2.91E-269 avg=2.73E-265 max=1.76E-263 DE1(f=f3, r, ps=100, op3=1/rand/bin([-10,10]^2, cr=0.7, F=0.3))
343 min=6.22E-241 med=9.41E-238 avg=1.86E-235 max=5.05E-234 DE1(f=f3, r, ps=100, op3=1/rand/bin([-10,10]^2, cr=0.5, F=0.5))
344 min=1.40E-201 med=4.26E-197 avg=9.18E-196 max=2.04E-194 DE1(f=f3, r, ps=100, op3=1/rand/exp([-10,10]^2, cr=0.3, F=0.7))
345 min=0.00E00 med=0.00E00 avg=1.92E-321 max=1.04E-319 DE1(f=f3, r, ps=100, op3=1/rand/exp([-10,10]^2, cr=0.7, F=0.3))
346 min=1.92E-259 med=1.68E-255 avg=9.92E-253 max=8.59E-251 DE1(f=f3, r, ps=100, op3=1/rand/exp([-10,10]^2, cr=0.5, F=0.5))
347 min=1.44E-109 med=2.04E-107 avg=1.03E-106 max=2.10E-105 DE1(f=f3, r, ps=200, op3=1/rand/bin([-10,10]^2, cr=0.3, F=0.7))
348 min=8.25E-140 med=4.68E-137 avg=2.32E-135 max=7.72E-134 DE1(f=f3, r, ps=200, op3=1/rand/bin([-10,10]^2, cr=0.7, F=0.3))
349 min=1.72E-123 med=7.18E-121 avg=5.00E-120 max=7.40E-119 DE1(f=f3, r, ps=200, op3=1/rand/bin([-10,10]^2, cr=0.5, F=0.5))
350 min=2.83E-103 med=1.67E-100 avg=7.00E-100 max=8.58E-99 DE1(f=f3, r, ps=200, op3=1/rand/exp([-10,10]^2, cr=0.3, F=0.7))
351 min=1.06E-167 med=1.51E-164 avg=6.79E-163 max=3.11E-161 DE1(f=f3, r, ps=200, op3=1/rand/exp([-10,10]^2, cr=0.7, F=0.3))
352 min=2.43E-132 med=1.21E-129 avg=1.11E-128 max=5.49E-127 DE1(f=f3, r, ps=200, op3=1/rand/exp([-10,10]^2, cr=0.5, F=0.5))
353 min=9.72E-08 med=7.52E-02 avg=1.18E-01 max=5.20E-01 RW(o1=NM([-10,10]^2), f=f3, r)
354 min=9.30E-03 med=1.94E-01 avg=2.60E-01 max=7.11E-01 RW(o1=UM([-10,10]^2), f=f3, r)
355 min=8.41E-08 med=6.32E-06 avg=9.18E-06 max=7.30E-05 RS(f=f3, r)
356
357
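Every result line in this listing follows the same fixed pattern: a listing line number, the four statistics min/med/avg/max over the runs, and the configuration of the optimizer that produced them. A minimal sketch of how such lines could be loaded for further analysis (the helper name `parse_result_line` and the regular expression are our own, assuming the plain-text format shown here):

```python
import re

# Matches result lines such as:
#   "181 min=1.17E-06 med=9.95E-05 avg=1.40E-04 max=1.11E-03 RS(f=f1, r)"
LINE_RE = re.compile(
    r"^(?P<no>\d+)\s+"
    r"min=(?P<min>\S+)\s+med=(?P<med>\S+)\s+"
    r"avg=(?P<avg>\S+)\s+max=(?P<max>\S+)\s+"
    r"(?P<setup>.+)$"
)

def parse_result_line(line):
    """Parse one listing line into (listing number, statistics dict, setup string).

    Returns None for lines that are not result lines, e.g. blank listing
    lines or section headings such as "184 CosLnSum".
    """
    m = LINE_RE.match(line.strip())
    if m is None:
        return None
    stats = {key: float(m.group(key)) for key in ("min", "med", "avg", "max")}
    return int(m.group("no")), stats, m.group("setup").strip()
```

Section headings and blank listing lines simply yield `None`, so a whole listing can be filtered with one pass.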
358 Dimensions: 5
359
360
361
362 Sphere
363
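The Sphere benchmark referenced by this block is, in its common textbook form, f(x) = sum of x_i^2 over the n dimensions, with the global optimum f* = 0 at the origin; the min/med/avg/max values are thus distances from this known optimum. A minimal sketch (the helper name `sphere` is ours):

```python
def sphere(x):
    """Sphere benchmark: f(x) = sum of x_i^2; global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

# For the 5-dimensional runs, candidate solutions come from [-10, 10]^5:
print(sphere([0.0] * 5))                   # -> 0.0 (the optimum)
print(sphere([1.0, 2.0, 3.0, 4.0, 5.0]))   # -> 55.0
```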
364 min=1.20E-06 med=1.24E-05 avg=1.26E-05 max=2.38E-05 HC(o1=NM([-10,10]^5), r)
365 min=3.85E-07 med=1.79E-06 avg=1.94E-06 max=3.64E-06 HC(o1=UM([-10,10]^5), r)
366 min=2.20E-04 med=1.59E-03 avg=1.72E-03 max=4.94E-03 SA(o1=NM([-10,10]^5), r, log(1))
367 min=3.28E-04 med=5.23E-03 avg=6.27E-03 max=2.57E-02 SA(o1=UM([-10,10]^5), r, log(1))
368 min=2.20E-04 med=1.59E-03 avg=1.72E-03 max=4.94E-03 SA(o1=NM([-10,10]^5), r, log(1))
369 min=3.28E-04 med=5.23E-03 avg=6.27E-03 max=2.57E-02 SA(o1=UM([-10,10]^5), r, log(1))
370 min=4.21E-05 med=1.31E-04 avg=1.53E-04 max=3.26E-04 SA(o1=NM([-10,10]^5), r, log(0.1))
371 min=3.06E-05 med=1.73E-04 avg=1.82E-04 max=5.07E-04 SA(o1=UM([-10,10]^5), r, log(0.1))
372 min=4.14E-07 med=1.25E-05 avg=1.33E-05 max=2.82E-05 SA(o1=NM([-10,10]^5), r, log(0.001))
373 min=5.58E-07 med=2.53E-06 avg=2.52E-06 max=5.87E-06 SA(o1=UM([-10,10]^5), r, log(0.001))
374 min=1.47E-06 med=1.10E-05 avg=1.14E-05 max=2.30E-05 SA(o1=NM([-10,10]^5), r, log(1.0E-4))
375 min=3.26E-07 med=1.79E-06 avg=1.84E-06 max=4.10E-06 SA(o1=UM([-10,10]^5), r, log(1.0E-4))
376 min=1.05E-03 med=1.81E-02 avg=2.26E-02 max=5.89E-02 SQ(o1=NM([-10,10]^5), r, exp(Ts=1, e=1.0E-5))
377 min=1.47E-03 med=1.65E-01 avg=2.11E-01 max=7.56E-01 SQ(o1=UM([-10,10]^5), r, exp(Ts=1, e=1.0E-5))
378 min=1.93E-06 med=2.40E-05 avg=2.44E-05 max=4.55E-05 SQ(o1=NM([-10,10]^5), r, exp(Ts=1, e=1.0E-4))
379 min=1.24E-06 med=5.29E-06 avg=5.24E-06 max=1.16E-05 SQ(o1=UM([-10,10]^5), r, exp(Ts=1, e=1.0E-4))
380 min=1.79E-04 med=7.35E-04 avg=8.43E-04 max=2.25E-03 SQ(o1=NM([-10,10]^5), r, exp(Ts=0.1, e=1.0E-5))
381 min=2.93E-04 med=2.04E-03 avg=2.39E-03 max=7.23E-03 SQ(o1=UM([-10,10]^5), r, exp(Ts=0.1, e=1.0E-5))
382 min=3.58E-06 med=1.60E-05 avg=1.67E-05 max=3.15E-05 SQ(o1=NM([-10,10]^5), r, exp(Ts=0.1, e=1.0E-4))
383 min=4.27E-07 med=3.02E-06 avg=2.97E-06 max=5.89E-06 SQ(o1=UM([-10,10]^5), r, exp(Ts=0.1, e=1.0E-4))
384 min=3.21E-06 med=2.14E-05 avg=2.23E-05 max=4.19E-05 SQ(o1=NM([-10,10]^5), r, exp(Ts=0.001, e=1.0E-5))
385 min=1.20E-06 med=9.34E-06 avg=9.27E-06 max=1.79E-05 SQ(o1=UM([-10,10]^5), r, exp(Ts=0.001, e=1.0E-5))
386 min=1.63E-06 med=1.34E-05 avg=1.31E-05 max=2.75E-05 SQ(o1=NM([-10,10]^5), r, exp(Ts=0.001, e=1.0E-4))
387 min=2.59E-07 med=2.05E-06 avg=2.08E-06 max=3.98E-06 SQ(o1=UM([-10,10]^5), r, exp(Ts=0.001, e=1.0E-4))
388 min=2.14E-06 med=1.28E-05 avg=1.25E-05 max=2.48E-05 SQ(o1=NM([-10,10]^5), r, exp(Ts=1.0E-4, e=1.0E-5))
389 min=3.57E-07 med=2.13E-06 avg=2.20E-06 max=4.00E-06 SQ(o1=UM([-10,10]^5), r, exp(Ts=1.0E-4, e=1.0E-5))
390 min=2.57E-06 med=1.14E-05 avg=1.20E-05 max=2.62E-05 SQ(o1=NM([-10,10]^5), r, exp(Ts=1.0E-4, e=1.0E-4))
391 min=1.52E-07 med=1.69E-06 avg=1.81E-06 max=3.38E-06 SQ(o1=UM([-10,10]^5), r, exp(Ts=1.0E-4, e=1.0E-4))
392 min=9.94E-07 med=5.42E-06 avg=5.77E-06 max=1.06E-05 sEA(o1=NM([-10,10]^5), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
393 min=8.81E-08 med=5.34E-07 avg=5.68E-07 max=1.25E-06 sEA(o1=UM([-10,10]^5), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
394 min=3.12E-07 med=1.70E-06 avg=1.81E-06 max=3.92E-06 sEA(o1=NM([-10,10]^5), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
395 min=4.04E-08 med=1.89E-07 avg=1.95E-07 max=4.06E-07 sEA(o1=UM([-10,10]^5), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
396 min=1.75E-08 med=1.36E-07 avg=1.44E-07 max=3.36E-07 sEA(o1=NM([-10,10]^5), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
397 min=8.33E-10 med=1.64E-08 avg=1.63E-08 max=3.51E-08 sEA(o1=UM([-10,10]^5), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
398 min=8.50E08 med=6.45E06 avg=7.57E06 max=2.77E05 sEA( o1=NM([ 10 , 10] 5) , o2=WAC, r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Trunc (mps=10) )
399 min=2.56E08 med=7.81E07 avg=9.70E07 max=3.98E06 sEA( o1=UM([ 10 , 10] 5) , o2=WAC, r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Trunc (mps=10) )
400 min=2.87E07 med=1.10E06 avg=1.14E06 max=2.38E06 sEA( o1=NM([ 10 , 10] 5) , o2=WAC, r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Trunc (mps=50) )
401 min=7.20E09 med=1.05E07 avg=1.15E07 max=2.72E07 sEA( o1=UM([ 10 , 10] 5) , o2=WAC, r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Trunc (mps=50) )
402 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+50) , r )
403 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES( ( 10/2 , 50) , r )
404 min=3.92E262 med=1.39E243 avg=1.32E215 max=1.32E213 ES((10/10+50) , r )
405 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES( ( 10/10 , 50) , r )
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+100), r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,100), r)
min=1.37E-203 med=1.80E-187 avg=3.94E-166 max=3.79E-164 ES((10/10+100), r)
min=2.64E-298 med=1.55E-290 avg=2.24E-285 max=2.11E-283 ES((10/10,100), r)
min=1.19E-223 med=8.42E-217 avg=2.61E-210 max=1.71E-208 ES((10/2+200), r)
min=1.81E-249 med=7.40E-240 avg=2.38E-233 max=2.05E-231 ES((10/2,200), r)
min=1.74E-137 med=2.02E-124 avg=2.68E-105 max=2.61E-103 ES((10/10+200), r)
min=4.56E-179 med=7.87E-173 avg=5.69E-170 max=2.04E-168 ES((10/10,200), r)
min=1.97E-310 med=1.00E-301 avg=5.60E-290 max=5.60E-288 ES((20/2+50), r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,50), r)
min=1.13E-169 med=1.73E-158 avg=3.83E-146 max=3.80E-144 ES((20/10+50), r)
min=6.23E-257 med=3.30E-251 avg=3.14E-245 max=1.22E-243 ES((20/10,50), r)
min=2.50E-258 med=1.58E-249 avg=5.53E-242 max=4.50E-240 ES((20/2+100), r)
min=0.00E00 med=1.50E-316 avg=2.86E-311 max=2.37E-309 ES((20/2,100), r)
min=4.53E-146 med=4.30E-138 avg=1.19E-130 max=5.91E-129 ES((20/10+100), r)
min=8.39E-227 med=2.67E-223 avg=5.05E-220 max=2.33E-218 ES((20/10,100), r)
min=8.75E-183 med=2.88E-177 avg=2.26E-173 max=1.85E-171 ES((20/2+200), r)
min=3.85E-211 med=1.08E-205 avg=6.32E-203 max=2.90E-201 ES((20/2,200), r)
min=3.56E-110 med=2.28E-102 avg=2.25E-97 max=2.08E-95 ES((20/10+200), r)
min=6.80E-151 med=2.91E-147 avg=5.99E-146 max=1.25E-144 ES((20/10,200), r)
min=1.08E-192 med=1.41E-186 avg=1.69E-36 max=1.69E-34 DE1(r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
min=5.14E-09 med=2.73E-02 avg=1.56E-01 max=2.73E00 DE1(r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
min=9.27E-209 med=7.43E-203 avg=1.49E-09 max=1.49E-07 DE1(r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
min=9.94E-195 med=1.05E-189 avg=2.28E-187 max=6.99E-186 DE1(r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
min=6.90E-282 med=1.14E-276 avg=2.51E-273 max=1.10E-271 DE1(r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
min=6.46E-97 med=4.08E-93 avg=4.86E-92 max=1.27E-90 DE1(r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
min=2.31E-35 med=1.96E-08 avg=1.48E-03 max=7.69E-02 DE1(r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
min=6.68E-105 med=6.09E-102 avg=5.40E-101 max=1.01E-99 DE1(r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
min=2.80E-97 med=2.43E-95 avg=1.78E-94 max=1.99E-93 DE1(r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
min=3.73E-193 med=9.35E-187 avg=1.71E-184 max=4.09E-183 DE1(r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
min=2.25E-143 med=9.42E-140 avg=1.69E-138 max=3.38E-137 DE1(r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
min=7.59E-48 med=4.53E-46 avg=2.13E-45 max=4.92E-44 DE1(r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
min=1.36E-58 med=2.03E-56 avg=7.80E-22 max=7.80E-20 DE1(r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
min=4.12E-52 med=1.26E-50 avg=5.02E-50 max=1.12E-48 DE1(r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
min=9.27E-49 med=1.23E-47 avg=3.03E-47 max=3.31E-46 DE1(r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
min=1.51E-96 med=6.85E-94 avg=8.07E-93 max=2.31E-91 DE1(r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
min=5.87E-72 med=8.49E-70 avg=2.34E-69 max=3.81E-68 DE1(r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
min=2.15E00 med=6.91E01 avg=7.64E01 max=1.81E02 RW(o1=NM([-10,10]^5), r)
min=2.43E01 med=1.22E02 avg=1.23E02 max=2.44E02 RW(o1=UM([-10,10]^5), r)
min=4.67E-01 med=1.84E00 avg=1.94E00 max=3.74E00 RS(r)
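The DE1 rows above use the classical Differential Evolution notation: op3=1/rand/bin or 1/rand/exp with crossover rate cr and differential weight F over the search space [-10,10]^5. As a generic sketch of the /rand/1/bin trial-vector construction (function name and signature are my own illustration, not the book's implementation):

```python
import random

def de_rand_1_bin(pop, i, F, cr, rng):
    """Build one DE/rand/1/bin trial vector for individual i.

    pop: list of real-valued vectors, F: differential weight,
    cr: crossover rate, rng: a random.Random instance.
    """
    n = len(pop[i])
    # pick three distinct parents, all different from the target i
    a, b, c = rng.sample([j for j in range(len(pop)) if j != i], 3)
    j_rand = rng.randrange(n)  # guarantees at least one mutated component
    trial = list(pop[i])
    for j in range(n):
        # binomial crossover: take the mutated component with probability cr
        if j == j_rand or rng.random() < cr:
            trial[j] = pop[a][j] + F * (pop[b][j] - pop[c][j])
    return trial
```

The trial vector then replaces pop[i] only if it is no worse; the /exp variant differs in copying a contiguous run of components instead of performing an independent Bernoulli test per component.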

SumCosLn

min=7.09E-08 med=5.64E-07 avg=5.91E-07 max=1.37E-06 HC(o1=NM([-10,10]^5), f=f1, r)
min=1.19E-08 med=9.04E-08 avg=9.24E-08 max=1.90E-07 HC(o1=UM([-10,10]^5), f=f1, r)
min=5.34E-03 med=2.59E-01 avg=2.66E-01 max=5.52E-01 SA(o1=NM([-10,10]^5), f=f1, r, log(1))
min=2.14E-01 med=4.90E-01 avg=4.80E-01 max=7.07E-01 SA(o1=UM([-10,10]^5), f=f1, r, log(1))
min=5.34E-03 med=2.59E-01 avg=2.66E-01 max=5.52E-01 SA(o1=NM([-10,10]^5), f=f1, r, log(1))
min=2.14E-01 med=4.90E-01 avg=4.80E-01 max=7.07E-01 SA(o1=UM([-10,10]^5), f=f1, r, log(1))
min=5.23E-05 med=4.73E-04 avg=6.15E-04 max=2.92E-03 SA(o1=NM([-10,10]^5), f=f1, r, log(0.1))
min=1.10E-02 med=3.06E-01 avg=2.79E-01 max=6.82E-01 SA(o1=UM([-10,10]^5), f=f1, r, log(0.1))
min=2.36E-07 med=1.89E-06 avg=2.01E-06 max=4.66E-06 SA(o1=NM([-10,10]^5), f=f1, r, log(0.001))
min=2.61E-07 med=1.63E-06 avg=1.62E-06 max=3.23E-06 SA(o1=UM([-10,10]^5), f=f1, r, log(0.001))
min=1.12E-07 med=6.97E-07 avg=7.29E-07 max=1.88E-06 SA(o1=NM([-10,10]^5), f=f1, r, log(1.0E-4))
min=2.08E-08 med=1.82E-07 avg=1.84E-07 max=3.60E-07 SA(o1=UM([-10,10]^5), f=f1, r, log(1.0E-4))
min=1.16E-01 med=3.67E-01 avg=3.59E-01 max=5.90E-01 SQ(o1=NM([-10,10]^5), f=f1, r, exp(Ts=1, e=1.0E-5))
57.1. DEMOS FOR STRING-BASED PROBLEM SPACES 875
min=2.58E-01 med=5.08E-01 avg=4.96E-01 max=7.29E-01 SQ(o1=UM([-10,10]^5), f=f1, r, exp(Ts=1, e=1.0E-5))
min=5.37E-07 med=3.93E-06 avg=4.00E-06 max=8.69E-06 SQ(o1=NM([-10,10]^5), f=f1, r, exp(Ts=1, e=1.0E-4))
min=4.86E-07 med=2.35E-06 avg=2.66E-06 max=7.77E-06 SQ(o1=UM([-10,10]^5), f=f1, r, exp(Ts=1, e=1.0E-4))
min=1.03E-02 med=1.81E-01 avg=2.00E-01 max=5.44E-01 SQ(o1=NM([-10,10]^5), f=f1, r, exp(Ts=0.1, e=1.0E-5))
min=1.83E-01 med=4.91E-01 avg=4.71E-01 max=7.25E-01 SQ(o1=UM([-10,10]^5), f=f1, r, exp(Ts=0.1, e=1.0E-5))
min=1.36E-07 med=1.31E-06 avg=1.38E-06 max=2.89E-06 SQ(o1=NM([-10,10]^5), f=f1, r, exp(Ts=0.1, e=1.0E-4))
min=6.36E-08 med=3.83E-07 avg=3.83E-07 max=7.91E-07 SQ(o1=UM([-10,10]^5), f=f1, r, exp(Ts=0.1, e=1.0E-4))
min=1.26E-06 med=9.50E-06 avg=9.76E-06 max=2.11E-05 SQ(o1=NM([-10,10]^5), f=f1, r, exp(Ts=0.001, e=1.0E-5))
min=2.52E-06 med=1.16E-05 avg=1.48E-05 max=5.73E-05 SQ(o1=UM([-10,10]^5), f=f1, r, exp(Ts=0.001, e=1.0E-5))
min=1.25E-07 med=7.24E-07 avg=7.09E-07 max=1.58E-06 SQ(o1=NM([-10,10]^5), f=f1, r, exp(Ts=0.001, e=1.0E-4))
min=2.02E-08 med=1.22E-07 avg=1.19E-07 max=2.84E-07 SQ(o1=UM([-10,10]^5), f=f1, r, exp(Ts=0.001, e=1.0E-4))
min=4.07E-07 med=1.30E-06 avg=1.36E-06 max=3.60E-06 SQ(o1=NM([-10,10]^5), f=f1, r, exp(Ts=1.0E-4, e=1.0E-5))
min=6.60E-08 med=7.91E-07 avg=8.50E-07 max=1.58E-06 SQ(o1=UM([-10,10]^5), f=f1, r, exp(Ts=1.0E-4, e=1.0E-5))
min=4.55E-08 med=6.77E-07 avg=6.63E-07 max=1.31E-06 SQ(o1=NM([-10,10]^5), f=f1, r, exp(Ts=1.0E-4, e=1.0E-4))
min=1.06E-08 med=1.05E-07 avg=1.07E-07 max=2.16E-07 SQ(o1=UM([-10,10]^5), f=f1, r, exp(Ts=1.0E-4, e=1.0E-4))
min=7.35E-08 med=2.98E-07 avg=3.00E-07 max=5.58E-07 sEA(o1=NM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=7.08E-09 med=3.02E-08 avg=3.08E-08 max=5.69E-08 sEA(o1=UM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=1.36E-08 med=8.74E-08 avg=8.95E-08 max=1.77E-07 sEA(o1=NM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
min=6.52E-10 med=9.08E-09 avg=9.62E-09 max=2.46E-08 sEA(o1=UM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
min=1.01E-09 med=6.66E-09 avg=7.17E-09 max=1.97E-08 sEA(o1=NM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
min=9.14E-11 med=7.68E-10 avg=7.96E-10 max=1.81E-09 sEA(o1=UM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
min=4.17E-09 med=2.43E-07 avg=2.72E-07 max=8.29E-07 sEA(o1=NM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
min=1.23E-09 med=4.27E-08 avg=5.06E-08 max=1.59E-07 sEA(o1=UM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
min=7.80E-09 med=5.38E-08 avg=5.72E-08 max=1.12E-07 sEA(o1=NM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
min=8.89E-10 med=6.25E-09 avg=6.44E-09 max=1.22E-08 sEA(o1=UM([-10,10]^5), o2=WAC, f=f1, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+50), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,50), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+50), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,50), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+100), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,100), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+100), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,100), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+200), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,200), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+200), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,200), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+50), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,50), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+50), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,50), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+100), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,100), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+100), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,100), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+200), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,200), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+200), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,200), f=f1, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f1, r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
min=2.17E-13 med=3.33E-04 avg=1.61E-03 max=2.28E-02 DE1(f=f1, r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=2.24E-12 max=2.24E-10 DE1(f=f1, r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f1, r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f1, r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f1, r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f1, r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
min=0.00E00 med=3.14E-10 avg=6.53E-06 max=4.03E-04 DE1(f=f1, r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f1, r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f1, r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f1, r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f1, r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f1, r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=5.00E-18 max=5.00E-16 DE1(f=f1, r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f1, r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f1, r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f1, r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f1, r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
min=5.06E-02 med=3.65E-01 avg=3.66E-01 max=6.38E-01 RW(o1=NM([-10,10]^5), f=f1, r)
min=2.33E-01 med=4.99E-01 avg=4.87E-01 max=7.16E-01 RW(o1=UM([-10,10]^5), f=f1, r)
min=1.65E-02 med=4.65E-02 avg=4.70E-02 max=8.39E-02 RS(f=f1, r)
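The SA rows in these listings differ only in the temperature-schedule argument; independent of the schedule, simulated annealing decides whether to move to a candidate by the standard Metropolis criterion. A minimal sketch (a generic helper of my own, not the book's code):

```python
import math
import random

def sa_accept(delta, temperature, rng):
    """Metropolis acceptance rule for minimization.

    delta: f(candidate) - f(current); temperature: current temperature T;
    rng: a random.Random instance. Improvements are always accepted;
    a worsening of delta > 0 is accepted with probability exp(-delta / T).
    """
    if delta <= 0.0:
        return True           # never reject an improvement
    if temperature <= 0.0:
        return False          # at T = 0, SA degenerates to hill climbing
    return rng.random() < math.exp(-delta / temperature)
```

As the schedule drives the temperature toward zero, the rule accepts fewer and fewer worsening moves, which is why the slowly cooling configurations behave like random walks early on and like hill climbers at the end.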

CosLnSum

min=4.38E-06 med=8.52E-01 avg=5.62E-01 max=8.52E-01 HC(o1=NM([-10,10]^5), f=f2, r)
min=2.30E-07 med=8.52E-01 avg=5.62E-01 max=8.52E-01 HC(o1=UM([-10,10]^5), f=f2, r)
min=5.28E-03 med=9.22E-01 avg=8.72E-01 max=9.89E-01 SA(o1=NM([-10,10]^5), f=f2, r, log(1))
min=8.16E-01 med=9.73E-01 avg=9.66E-01 max=9.97E-01 SA(o1=UM([-10,10]^5), f=f2, r, log(1))
min=5.28E-03 med=9.22E-01 avg=8.72E-01 max=9.89E-01 SA(o1=NM([-10,10]^5), f=f2, r, log(1))
min=8.16E-01 med=9.73E-01 avg=9.66E-01 max=9.97E-01 SA(o1=UM([-10,10]^5), f=f2, r, log(1))
min=6.30E-05 med=8.60E-01 avg=5.76E-01 max=9.45E-01 SA(o1=NM([-10,10]^5), f=f2, r, log(0.1))
min=1.52E-04 med=9.57E-01 avg=8.33E-01 max=9.97E-01 SA(o1=UM([-10,10]^5), f=f2, r, log(0.1))
min=4.00E-06 med=8.52E-01 avg=5.79E-01 max=8.52E-01 SA(o1=NM([-10,10]^5), f=f2, r, log(0.001))
min=5.53E-07 med=8.52E-01 avg=5.54E-01 max=8.52E-01 SA(o1=UM([-10,10]^5), f=f2, r, log(0.001))
min=2.42E-06 med=8.52E-01 avg=5.62E-01 max=8.52E-01 SA(o1=NM([-10,10]^5), f=f2, r, log(1.0E-4))
min=1.54E-07 med=8.52E-01 avg=5.62E-01 max=8.52E-01 SA(o1=UM([-10,10]^5), f=f2, r, log(1.0E-4))
min=4.07E-01 med=9.30E-01 avg=9.13E-01 max=9.78E-01 SQ(o1=NM([-10,10]^5), f=f2, r, exp(Ts=1, e=1.0E-5))
min=8.58E-01 med=9.73E-01 avg=9.67E-01 max=9.97E-01 SQ(o1=UM([-10,10]^5), f=f2, r, exp(Ts=1, e=1.0E-5))
min=4.77E-06 med=8.52E-01 avg=6.30E-01 max=8.52E-01 SQ(o1=NM([-10,10]^5), f=f2, r, exp(Ts=1, e=1.0E-4))
min=2.64E-06 med=8.52E-01 avg=6.05E-01 max=8.52E-01 SQ(o1=UM([-10,10]^5), f=f2, r, exp(Ts=1, e=1.0E-4))
min=7.24E-04 med=9.27E-01 avg=8.53E-01 max=9.87E-01 SQ(o1=NM([-10,10]^5), f=f2, r, exp(Ts=0.1, e=1.0E-5))
min=7.77E-01 med=9.72E-01 avg=9.64E-01 max=9.97E-01 SQ(o1=UM([-10,10]^5), f=f2, r, exp(Ts=0.1, e=1.0E-5))
min=3.82E-06 med=8.52E-01 avg=5.71E-01 max=8.52E-01 SQ(o1=NM([-10,10]^5), f=f2, r, exp(Ts=0.1, e=1.0E-4))
min=8.19E-07 med=8.52E-01 avg=5.62E-01 max=8.52E-01 SQ(o1=UM([-10,10]^5), f=f2, r, exp(Ts=0.1, e=1.0E-4))
min=5.46E-06 med=8.52E-01 avg=5.71E-01 max=8.52E-01 SQ(o1=NM([-10,10]^5), f=f2, r, exp(Ts=0.001, e=1.0E-5))
min=2.61E-06 med=8.52E-01 avg=5.79E-01 max=8.52E-01 SQ(o1=UM([-10,10]^5), f=f2, r, exp(Ts=0.001, e=1.0E-5))
min=1.37E-06 med=8.52E-01 avg=5.71E-01 max=8.52E-01 SQ(o1=NM([-10,10]^5), f=f2, r, exp(Ts=0.001, e=1.0E-4))
min=6.34E-07 med=8.52E-01 avg=5.79E-01 max=8.52E-01 SQ(o1=UM([-10,10]^5), f=f2, r, exp(Ts=0.001, e=1.0E-4))
min=9.08E-07 med=8.52E-01 avg=5.71E-01 max=8.52E-01 SQ(o1=NM([-10,10]^5), f=f2, r, exp(Ts=1.0E-4, e=1.0E-5))
min=3.52E-07 med=8.52E-01 avg=5.54E-01 max=8.52E-01 SQ(o1=UM([-10,10]^5), f=f2, r, exp(Ts=1.0E-4, e=1.0E-5))
min=2.67E-06 med=8.52E-01 avg=5.71E-01 max=8.52E-01 SQ(o1=NM([-10,10]^5), f=f2, r, exp(Ts=1.0E-4, e=1.0E-4))
min=3.26E-07 med=8.52E-01 avg=5.54E-01 max=8.52E-01 SQ(o1=UM([-10,10]^5), f=f2, r, exp(Ts=1.0E-4, e=1.0E-4))
min=3.56E-07 med=5.02E-06 avg=5.05E-06 max=1.09E-05 sEA(o1=NM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=2.78E-08 med=4.73E-07 avg=4.87E-07 max=1.04E-06 sEA(o1=UM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=3.65E-07 med=1.45E-06 avg=1.56E-06 max=3.77E-06 sEA(o1=NM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
min=1.78E-08 med=1.61E-07 avg=1.71E-07 max=4.17E-07 sEA(o1=UM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
min=1.88E-08 med=9.93E-08 avg=1.14E-07 max=3.53E-07 sEA(o1=NM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
min=3.02E-09 med=1.32E-08 avg=1.70E-02 max=8.52E-01 sEA(o1=UM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
min=2.53E-07 med=5.00E-06 avg=1.70E-02 max=8.52E-01 sEA(o1=NM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
min=8.89E-09 med=7.65E-07 avg=1.91E-02 max=8.52E-01 sEA(o1=UM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
min=1.95E-07 med=8.59E-07 avg=8.95E-07 max=1.81E-06 sEA(o1=NM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
min=2.06E-08 med=9.71E-08 avg=1.04E-07 max=2.33E-07 sEA(o1=UM([-10,10]^5), o2=WAC, f=f2, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+50), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,50), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+50), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,50), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+100), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,100), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+100), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,100), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+200), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,200), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+200), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,200), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+50), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,50), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+50), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,50), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+100), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,100), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+100), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,100), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+200), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,200), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+200), f=f2, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10,200), f=f2, r)
min=0.00E00 med=8.61E-03 avg=2.99E-01 max=8.52E-01 DE1(f=f2, r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
min=3.86E-10 med=6.31E-04 avg=6.61E-02 max=8.52E-01 DE1(f=f2, r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=7.68E-02 max=8.52E-01 DE1(f=f2, r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=1.67E-01 max=8.52E-01 DE1(f=f2, r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=1.70E-02 max=8.52E-01 DE1(f=f2, r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=8.52E-03 max=8.52E-01 DE1(f=f2, r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=7.64E-02 max=8.52E-01 DE1(f=f2, r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=8.53E-03 max=8.52E-01 DE1(f=f2, r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=4.15E-04 max=4.15E-02 DE1(f=f2, r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=5.09E-02 max=8.52E-01 DE1(f=f2, r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f2, r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f2, r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=1.17E-02 max=8.52E-01 DE1(f=f2, r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f2, r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f2, r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f2, r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f2, r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1(f=f2, r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
min=3.33E-01 med=9.34E-01 avg=9.06E-01 max=9.75E-01 RW(o1=NM([-10,10]^5), f=f2, r)
min=8.45E-01 med=9.75E-01 avg=9.69E-01 max=9.98E-01 RW(o1=UM([-10,10]^5), f=f2, r)
min=1.41E-01 med=3.34E-01 avg=3.30E-01 max=5.11E-01 RS(f=f2, r)

MaxSqr

min=6.93E-09 med=6.00E-08 avg=5.75E-08 max=1.15E-07 HC(o1=NM([-10,10]^5), f=f3, r)
min=2.54E-09 med=9.30E-09 avg=9.43E-09 max=1.75E-08 HC(o1=UM([-10,10]^5), f=f3, r)
min=5.03E-03 med=6.25E-02 avg=8.35E-02 max=3.55E-01 SA(o1=NM([-10,10]^5), f=f3, r, log(1))
min=7.92E-02 med=3.85E-01 avg=3.95E-01 max=8.43E-01 SA(o1=UM([-10,10]^5), f=f3, r, log(1))
min=5.03E-03 med=6.25E-02 avg=8.35E-02 max=3.55E-01 SA(o1=NM([-10,10]^5), f=f3, r, log(1))
min=7.92E-02 med=3.85E-01 avg=3.95E-01 max=8.43E-01 SA(o1=UM([-10,10]^5), f=f3, r, log(1))
min=6.55E-05 med=7.02E-04 avg=8.35E-04 max=3.83E-03 SA(o1=NM([-10,10]^5), f=f3, r, log(0.1))
min=8.30E-03 med=6.63E-02 avg=7.16E-02 max=2.18E-01 SA(o1=UM([-10,10]^5), f=f3, r, log(0.1))
min=3.12E-07 med=1.44E-06 avg=1.46E-06 max=3.00E-06 SA(o1=NM([-10,10]^5), f=f3, r, log(0.001))
min=2.19E-07 med=2.06E-06 avg=2.40E-06 max=6.79E-06 SA(o1=UM([-10,10]^5), f=f3, r, log(0.001))
min=4.41E-08 med=1.92E-07 avg=2.07E-07 max=4.13E-07 SA(o1=NM([-10,10]^5), f=f3, r, log(1.0E-4))
min=1.58E-08 med=1.40E-07 avg=1.48E-07 max=3.85E-07 SA(o1=UM([-10,10]^5), f=f3, r, log(1.0E-4))
min=1.93E-02 med=2.63E-01 avg=2.72E-01 max=6.09E-01 SQ(o1=NM([-10,10]^5), f=f3, r, exp(Ts=1, e=1.0E-5))
min=7.70E-02 med=5.27E-01 avg=4.87E-01 max=8.23E-01 SQ(o1=UM([-10,10]^5), f=f3, r, exp(Ts=1, e=1.0E-5))
min=4.08E-07 med=2.66E-06 avg=2.75E-06 max=5.81E-06 SQ(o1=NM([-10,10]^5), f=f3, r, exp(Ts=1, e=1.0E-4))
min=3.68E-07 med=5.00E-06 avg=5.54E-06 max=1.97E-05 SQ(o1=UM([-10,10]^5), f=f3, r, exp(Ts=1, e=1.0E-4))
min=1.61E-03 med=2.65E-02 avg=3.80E-02 max=1.75E-01 SQ(o1=NM([-10,10]^5), f=f3, r, exp(Ts=0.1, e=1.0E-5))
min=3.66E-02 med=3.09E-01 avg=3.36E-01 max=7.00E-01 SQ(o1=UM([-10,10]^5), f=f3, r, exp(Ts=0.1, e=1.0E-5))
min=5.18E-08 med=4.03E-07 avg=3.96E-07 max=9.86E-07 SQ(o1=NM([-10,10]^5), f=f3, r, exp(Ts=0.1, e=1.0E-4))
min=4.77E-08 med=2.40E-07 avg=2.68E-07 max=7.34E-07 SQ(o1=UM([-10,10]^5), f=f3, r, exp(Ts=0.1, e=1.0E-4))
min=1.43E-06 med=9.02E-06 avg=9.89E-06 max=2.40E-05 SQ(o1=NM([-10,10]^5), f=f3, r, exp(Ts=0.001, e=1.0E-5))
min=2.43E-06 med=3.15E-05 avg=4.10E-05 max=1.36E-04 SQ(o1=UM([-10,10]^5), f=f3, r, exp(Ts=0.001, e=1.0E-5))
min=1.19E-08 med=9.70E-08 avg=9.41E-08 max=2.03E-07 SQ(o1=NM([-10,10]^5), f=f3, r, exp(Ts=0.001, e=1.0E-4))
min=2.69E-09 med=1.41E-08 avg=1.54E-08 max=3.48E-08 SQ(o1=UM([-10,10]^5), f=f3, r, exp(Ts=0.001, e=1.0E-4))
min=1.20E-07 med=8.51E-07 avg=8.30E-07 max=2.02E-06 SQ(o1=NM([-10,10]^5), f=f3, r, exp(Ts=1.0E-4, e=1.0E-5))
min=1.38E-07 med=9.26E-07 avg=1.05E-06 max=2.17E-06 SQ(o1=UM([-10,10]^5), f=f3, r, exp(Ts=1.0E-4, e=1.0E-5))
min=1.05E-08 med=6.50E-08 avg=6.87E-08 max=1.63E-07 SQ(o1=NM([-10,10]^5), f=f3, r, exp(Ts=1.0E-4, e=1.0E-4))
min=3.28E-09 med=1.07E-08 avg=1.17E-08 max=2.63E-08 SQ(o1=UM([-10,10]^5), f=f3, r, exp(Ts=1.0E-4, e=1.0E-4))
min=1.53E-09 med=2.87E-08 avg=2.92E-08 max=6.16E-08 sEA(o1=NM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=2.71E-10 med=3.12E-09 avg=3.14E-09 max=7.35E-09 sEA(o1=UM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=1.95E-09 med=1.12E-08 avg=1.11E-08 max=2.30E-08 sEA(o1=NM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
min=1.63E-10 med=1.04E-09 avg=1.08E-09 max=2.35E-09 sEA(o1=UM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
min=1.19E-10 med=7.26E-10 avg=7.82E-10 max=1.92E-09 sEA(o1=NM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
min=6.16E-12 med=9.65E-11 avg=1.05E-10 max=2.57E-10 sEA(o1=UM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
min=4.36E-10 med=2.54E-08 avg=3.17E-08 max=9.12E-08 sEA(o1=NM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
min=5.62E-11 med=5.32E-09 avg=5.96E-09 max=1.79E-08 sEA(o1=UM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
min=8.67E-10 med=6.19E-09 avg=6.08E-09 max=1.37E-08 sEA(o1=NM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
min=1.02E-10 med=6.16E-10 avg=6.54E-10 max=1.30E-09 sEA(o1=UM([-10,10]^5), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+50), f=f3, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,50), f=f3, r)
min=8.15E-238 med=6.63E-218 avg=3.07E-190 max=2.37E-188 ES((10/10+50), f=f3, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10,50), f=f3, r)
min=0.00E00 med=2.65E-318 avg=1.82E-307 max=1.22E-305 ES((10/2+100), f=f3, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,100), f=f3, r)
min=1.19E-185 med=1.05E-170 avg=7.28E-154 max=7.27E-152 ES((10/10+100), f=f3, r)
min=4.64E-282 med=2.48E-275 avg=3.08E-270 max=9.35E-269 ES((10/10,100), f=f3, r)
min=3.74E-218 med=5.54E-208 avg=3.69E-200 max=3.66E-198 ES((10/2+200), f=f3, r)
min=5.37E-238 med=5.98E-232 avg=8.61E-228 max=2.80E-226 ES((10/2,200), f=f3, r)
min=4.55E-130 med=1.55E-116 avg=5.67E-105 max=5.64E-103 ES((10/10+200), f=f3, r)
min=9.46E-173 med=2.47E-167 avg=3.57E-163 max=3.09E-161 ES((10/10,200), f=f3, r)
min=1.16E-291 med=1.05E-279 avg=1.29E-268 max=1.29E-266 ES((20/2+50), f=f3, r)
min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2,50), f=f3, r)
min=1.16E-151 med=9.90E-140 avg=6.72E-128 max=6.71E-126 ES((20/10+50), f=f3, r)
min=4.35E-225 med=2.00E-219 avg=9.17E-212 max=9.17E-210 ES((20/10,50), f=f3, r)
min=5.53E-244 med=2.43E-234 avg=1.43E-229 max=5.26E-228 ES((20/2+100), f=f3, r)
min=6.62E-308 med=3.25E-301 avg=1.13E-297 max=6.70E-296 ES((20/2,100), f=f3, r)
min=1.01E-136 med=1.34E-125 avg=1.28E-116 max=1.06E-114 ES((20/10+100), f=f3, r)
min=9.53E-211 med=5.43E-207 avg=5.92E-203 max=4.01E-201 ES((20/10,100), f=f3, r)
min=2.02E-174 med=2.65E-169 avg=1.93E-166 max=6.47E-165 ES((20/2+200), f=f3, r)
min=2.40E-202 med=6.07E-199 avg=4.08E-196 max=1.89E-194 ES((20/2,200), f=f3, r)
min=9.85E-102 med=1.18E-95 avg=7.79E-89 max=7.63E-87 ES((20/10+200), f=f3, r)
min=3.79E-144 med=6.30E-141 avg=4.73E-139 max=1.40E-137 ES((20/10,200), f=f3, r)
min=1.58E-169 med=2.96E-154 avg=4.26E-12 max=4.26E-10 DE1(f=f3, r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
min=4.59E-08 med=1.30E-04 avg=7.84E-04 max=6.30E-03 DE1(f=f3, r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
min=6.24E-195 med=2.00E-32 avg=3.33E-06 max=1.24E-04 DE1(f=f3, r, ps=50, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
min=2.22E-174 med=3.14E-168 avg=3.11E-165 max=1.41E-163 DE1(f=f3, r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
min=0.00E00 med=4.36E-318 avg=8.26E-23 max=8.26E-21 DE1(f=f3, r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
min=7.39E-253 med=1.49E-245 avg=1.91E-240 max=1.41E-238 DE1(f=f3, r, ps=50, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
min=1.30E-88 med=3.47E-84 avg=3.35E-82 max=2.65E-80 DE1(f=f3, r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
min=6.32E-51 med=9.16E-11 avg=2.07E-05 max=1.10E-03 DE1(f=f3, r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
min=5.17E-101 med=5.73E-97 avg=7.36E-35 max=7.36E-33 DE1(f=f3, r, ps=100, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
min=9.72E-89 med=2.52E-86 avg=4.12E-85 max=1.33E-83 DE1(f=f3, r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
min=2.31E-173 med=1.88E-165 avg=5.64E-161 max=5.62E-159 DE1(f=f3, r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
min=8.34E-130 med=8.26E-127 avg=1.18E-124 max=5.33E-123 DE1(f=f3, r, ps=100, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
min=1.18E-44 med=3.23E-43 avg=8.41E-43 max=7.50E-42 DE1(f=f3, r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.3, F=0.7))
min=4.14E-60 med=4.31E-57 avg=3.07E-30 max=1.58E-28 DE1(f=f3, r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.7, F=0.3))
min=5.50E-51 med=4.38E-49 avg=1.26E-48 max=1.46E-47 DE1(f=f3, r, ps=200, op3=1/rand/bin([-10,10]^5, cr=0.5, F=0.5))
min=9.58E-46 med=2.22E-44 avg=5.00E-44 max=8.09E-43 DE1(f=f3, r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.3, F=0.7))
min=3.42E-88 med=1.58E-85 avg=1.67E-84 max=5.29E-83 DE1(f=f3, r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.7, F=0.3))
min=7.45E-67 med=7.80E-65 avg=2.54E-64 max=2.98E-63 DE1(f=f3, r, ps=200, op3=1/rand/exp([-10,10]^5, cr=0.5, F=0.5))
min=1.08E-02 med=2.82E-01 avg=2.83E-01 max=5.85E-01 RW(o1=NM([-10,10]^5), f=f3, r)
min=9.70E-02 med=5.36E-01 avg=4.92E-01 max=8.04E-01 RW(o1=UM([-10,10]^5), f=f3, r)
min=1.45E-03 med=9.36E-03 avg=9.67E-03 max=2.09E-02 RS(f=f3, r)
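The listings report, for each configuration, the best objective value of repeated runs on benchmark functions such as Sphere over [-10,10]^n. Assuming the standard Sphere definition f(x) = Σ x_i² with its minimum 0 at the origin (the definitions used in these experiments are given in the main text), a minimal sketch of the function together with an RS-style random-sampling baseline (the helper name `random_sampling` is my own illustration):

```python
import random

def sphere(x):
    # standard Sphere benchmark: f(x) = sum of squared components,
    # global minimum f(0, ..., 0) = 0
    return sum(xi * xi for xi in x)

def random_sampling(f, dim, steps, rng):
    # RS baseline: keep the best of `steps` points drawn uniformly
    # at random from [-10, 10]^dim
    return min(f([rng.uniform(-10.0, 10.0) for _ in range(dim)])
               for _ in range(steps))
```

Because RS never exploits structure, its best-of-run values stay orders of magnitude worse than those of the hill climbers and evolutionary methods in the tables, and the gap widens with the dimension.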

Dimensions: 10

Sphere

min=9.51E-05 med=2.25E-04 avg=2.21E-04 max=3.09E-04 HC(o1=NM([-10,10]^10), r)
min=1.36E-05 med=3.15E-05 avg=3.02E-05 max=4.32E-05 HC(o1=UM([-10,10]^10), r)
min=1.24E-02 med=3.19E-02 avg=3.23E-02 max=5.84E-02 SA(o1=NM([-10,10]^10), r, log(1))
min=1.80E-02 med=6.46E-02 avg=6.68E-02 max=1.32E-01 SA(o1=UM([-10,10]^10), r, log(1))
min=1.24E-02 med=3.19E-02 avg=3.23E-02 max=5.84E-02 SA(o1=NM([-10,10]^10), r, log(1))
min=1.80E-02 med=6.46E-02 avg=6.68E-02 max=1.32E-01 SA(o1=UM([-10,10]^10), r, log(1))
min=7.67E-04 med=2.77E-03 avg=2.65E-03 max=4.03E-03 SA(o1=NM([-10,10]^10), r, log(0.1))
min=1.16E-03 med=3.38E-03 avg=3.39E-03 max=6.63E-03 SA(o1=UM([-10,10]^10), r, log(0.1))
min=1.11E-04 med=2.32E-04 avg=2.30E-04 max=3.59E-04 SA(o1=NM([-10,10]^10), r, log(0.001))
min=1.59E-05 med=4.53E-05 avg=4.50E-05 max=6.74E-05 SA(o1=UM([-10,10]^10), r, log(0.001))
min=5.94E-05 med=2.24E-04 avg=2.25E-04 max=3.77E-04 SA(o1=NM([-10,10]^10), r, log(1.0E-4))
min=9.82E-06 med=3.05E-05 avg=2.92E-05 max=4.86E-05 SA(o1=UM([-10,10]^10), r, log(1.0E-4))
min=5.62E-02 med=2.65E-01 avg=2.70E-01 max=5.35E-01 SQ(o1=NM([-10,10]^10), r, exp(Ts=1, e=1.0E-5))
min=1.38E-01 med=7.82E-01 avg=8.64E-01 max=2.08E00 SQ(o1=UM([-10,10]^10), r, exp(Ts=1, e=1.0E-5))
min=1.35E-04 med=3.15E-04 avg=3.22E-04 max=5.56E-04 SQ(o1=NM([-10,10]^10), r, exp(Ts=1, e=1.0E-4))
min=1.85E-05 med=6.17E-05 avg=5.94E-05 max=9.37E-05 SQ(o1=UM([-10,10]^10), r, exp(Ts=1, e=1.0E-4))
min=5.04E-03 med=1.51E-02 avg=1.53E-02 max=2.54E-02 SQ(o1=NM([-10,10]^10), r, exp(Ts=0.1, e=1.0E-5))
min=7.21E-03 med=2.79E-02 avg=2.90E-02 max=6.19E-02 SQ(o1=UM([-10,10]^10), r, exp(Ts=0.1, e=1.0E-5))
min=1.03E-04 med=2.81E-04 avg=2.79E-04 max=4.21E-04 SQ(o1=NM([-10,10]^10), r, exp(Ts=0.1, e=1.0E-4))
min=9.88E-06 med=4.01E-05 avg=3.94E-05 max=5.48E-05 SQ(o1=UM([-10,10]^10), r, exp(Ts=0.1, e=1.0E-4))
min=6.37E-05 med=3.45E-04 avg=3.42E-04 max=5.32E-04 SQ(o1=NM([-10,10]^10), r, exp(Ts=0.001, e=1.0E-5))
min=6.37E-05 med=1.38E-04 avg=1.43E-04 max=2.17E-04 SQ(o1=UM([-10,10]^10), r, exp(Ts=0.001, e=1.0E-5))
min=9.63E-05 med=2.26E-04 avg=2.31E-04 max=3.62E-04 SQ(o1=NM([-10,10]^10), r, exp(Ts=0.001, e=1.0E-4))
min=1.05E-05 med=3.28E-05 avg=3.22E-05 max=5.30E-05 SQ(o1=UM([-10,10]^10), r, exp(Ts=0.001, e=1.0E-4))
min=9.69E-05 med=2.41E-04 avg=2.35E-04 max=3.65E-04 SQ(o1=NM([-10,10]^10), r, exp(Ts=1.0E-4, e=1.0E-5))
min=1.74E-05 med=3.65E-05 avg=3.59E-05 max=5.12E-05 SQ(o1=UM([-10,10]^10), r, exp(Ts=1.0E-4, e=1.0E-5))
min=1.04E-04 med=2.21E-04 avg=2.23E-04 max=3.28E-04 SQ(o1=NM([-10,10]^10), r, exp(Ts=1.0E-4, e=1.0E-4))
min=1.52E-05 med=3.22E-05 avg=3.19E-05 max=4.56E-05 SQ(o1=UM([-10,10]^10), r, exp(Ts=1.0E-4, e=1.0E-4))
min=3.92E-05 med=1.03E-04 avg=1.02E-04 max=1.57E-04 sEA(o1=NM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=1.93E-06 med=9.65E-06 avg=9.42E-06 max=1.59E-05 sEA(o1=UM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=2))
min=1.30E-05 med=3.24E-05 avg=3.27E-05 max=5.11E-05 sEA(o1=NM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
min=1.40E-06 med=3.45E-06 avg=3.48E-06 max=5.80E-06 sEA(o1=UM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=3))
min=1.39E-06 med=4.51E-06 avg=4.52E-06 max=8.36E-06 sEA(o1=NM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
min=1.76E-07 med=4.42E-07 avg=4.57E-07 max=8.17E-07 sEA(o1=UM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Tour(k=5))
min=1.92E-05 med=1.30E-04 avg=1.41E-04 max=4.01E-04 sEA(o1=NM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
min=4.24E-06 med=2.22E-05 avg=2.18E-05 max=4.46E-05 sEA(o1=UM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=10))
min=9.48E-06 med=2.24E-05 avg=2.23E-05 max=3.59E-05 sEA(o1=NM([-10,10]^10), o2=WAC, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
753 min=8.02E07 med=2.15E06 avg=2.13E06 max=3.78E06 sEA( o1=UM([ 10 , 10] 10) , o2=WAC, r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Trunc (mps=50) )
754 min=2.73E310 med=3.58E300 avg=1.95E287 max=1.95E285 ES((10/2+50) , r )
755 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES( ( 10/2 , 50) , r )
756 min=6.49E199 med=1.01E187 avg=2.96E177 max=2.96E175 ES((10/10+50) , r )
757 min=2.17E297 med=2.76E288 avg=1.13E282 max=1.12E280 ES( ( 10/10 , 50) , r )
758 min=4.69E228 med=5.97E218 avg=4.13E211 max=3.94E209 ES((10/2+100) , r )
759 min=5.05E265 med=1.27E259 avg=3.39E255 max=2.21E253 ES( ( 10/2 , 100) , r )
760 min=1.20E151 med=1.91E144 avg=1.74E139 max=1.12E137 ES((10/10+100) , r )
761 min=2.23E201 med=1.44E196 avg=3.69E193 max=3.28E191 ES( ( 10/10 , 100) , r )
762 min=5.89E146 med=2.62E141 avg=1.32E138 max=1.13E136 ES((10/2+200) , r )
763 min=1.52E159 med=2.29E156 avg=1.59E152 max=1.53E150 ES( ( 10/2 , 200) , r )
764 min=1.30E102 med=3.15E97 avg=2.04E93 max=1.92E91 ES((10/10+200) , r )
765 min=6.40E123 med=1.25E119 avg=1.65E117 max=9.15E116 ES( ( 10/10 , 200) , r )
766 min=2.03E199 med=5.24E193 avg=2.41E189 max=1.16E187 ES((20/2+50) , r )
767 min=1.50E246 med=8.13E241 avg=1.01E236 max=5.88E235 ES( ( 20/2 , 50) , r )
768 min=4.06E124 med=1.58E118 avg=3.59E115 max=1.89E113 ES((20/10+50) , r )
769 min=3.64E164 med=3.64E160 avg=4.60E156 max=2.82E154 ES( ( 20/10 , 50) , r )
770 min=1.94E165 med=2.82E161 avg=1.00E157 max=4.59E156 ES((20/2+100) , r )
771 min=2.35E207 med=1.98E202 avg=3.19E199 max=2.98E197 ES( ( 20/2 , 100) , r )
772 min=2.19E108 med=5.48E104 avg=1.65E101 max=4.48E100 ES((20/10+100) , r )
773 min=2.97E151 med=2.15E148 avg=2.14E146 max=6.39E145 ES( ( 20/10 , 100) , r )
774 min=6.35E119 med=8.72E116 avg=1.01E113 max=4.61E112 ES((20/2+200) , r )
775 min=1.50E136 med=2.32E133 avg=3.89E132 max=1.92E130 ES( ( 20/2 , 200) , r )
776 min=2.01E80 med=1.37E77 avg=2.66E76 max=7.45E75 ES((20/10+200) , r )
777 min=6.73E103 med=2.71E100 avg=2.16E99 max=4.52E98 ES( ( 20/10 , 200) , r )
778 min=7.64E98 med=2.54E88 avg=7.44E04 max=7.08E02 DE1(r , ps =50, op3=1/rand/ bi n ([ 10 , 10] 10 , cr =0. 3 ,F=0. 7) )
779 min=2.14E01 med=3.15E00 avg=3.98E00 max=1.69E01 DE1(r , ps =50, op3=1/rand/ bi n ([ 10 , 10] 10 , cr =0. 7 ,F=0. 3) )
780 min=2.42E24 med=3.78E07 avg=6.14E03 max=3.41E01 DE1(r , ps =50, op3=1/rand/ bi n ([ 10 , 10] 10 , cr =0. 5 ,F=0. 5) )
781 min=2.93E117 med=1.31E113 avg=1.11E111 max=5.49E110 DE1(r , ps =50, op3=1/rand/exp ([ 10 , 10] 10 , cr =0. 3 ,F=0. 7) )
782 min=8.02E223 med=2.59E216 avg=4.83E38 max=4.83E36 DE1(r , ps =50, op3=1/rand/exp ([ 10 , 10] 10 , cr =0. 7 ,F=0. 3) )
783 min=1.59E170 med=2.72E166 avg=2.51E163 max=1.98E161 DE1(r , ps =50, op3=1/rand/exp ([ 10 , 10] 10 , cr =0. 5 ,F=0. 5) )
784 min=3.27E49 med=4.75E47 avg=1.23E46 max=1.73E45 DE1(r , ps =100, op3=1/rand/ bi n ([ 10 , 10] 10 , cr =0. 3 ,F=0. 7) )
785 min=2.66E03 med=1.50E01 avg=3.39E01 max=2.41E00 DE1(r , ps =100, op3=1/rand/ bi n ([ 10 , 10] 10 , cr =0. 7 ,F=0. 3) )
786 min=7.14E55 med=7.35E53 avg=5.53E52 max=9.77E51 DE1(r , ps =100, op3=1/rand/ bi n ([ 10 , 10] 10 , cr =0. 5 ,F=0. 5) )
787 min=5.25E59 med=5.50E57 avg=1.92E56 max=3.56E55 DE1(r , ps =100, op3=1/rand/exp ([ 10 , 10] 10 , cr =0. 3 ,F=0. 7) )
788 min=1.32E112 med=1.03E109 avg=2.17E107 max=7.80E106 DE1(r , ps =100, op3=1/rand/exp ([ 10 , 10] 10 , cr =0. 7 ,F=0. 3) )
789 min=3.77E86 med=3.76E84 avg=7.17E83 max=3.10E81 DE1(r , ps =100, op3=1/rand/exp ([ 10 , 10] 10 , cr =0. 5 ,F=0. 5) )
790 min=3.08E24 med=3.95E22 avg=8.05E22 max=7.20E21 DE1(r , ps =200, op3=1/rand/ bi n ([ 10 , 10] 10 , cr =0. 3 ,F=0. 7) )
791 min=9.00E10 med=1.17E04 avg=7.73E04 max=1.12E02 DE1(r , ps =200, op3=1/rand/ bi n ([ 10 , 10] 10 , cr =0. 7 ,F=0. 3) )
792 min=2.98E26 med=5.14E25 avg=8.44E25 max=5.57E24 DE1(r , ps =200, op3=1/rand/ bi n ([ 10 , 10] 10 , cr =0. 5 ,F=0. 5) )
793 min=5.50E29 med=3.15E28 avg=6.09E28 max=6.15E27 DE1(r , ps =200, op3=1/rand/exp ([ 10 , 10] 10 , cr =0. 3 ,F=0. 7) )
794 min=4.57E58 med=2.72E55 avg=1.91E54 max=5.04E53 DE1(r , ps =200, op3=1/rand/exp ([ 10 , 10] 10 , cr =0. 7 ,F=0. 3) )
795 min=1.04E43 med=4.87E42 avg=1.70E41 max=4.04E40 DE1(r , ps =200, op3=1/rand/exp ([ 10 , 10] 10 , cr =0. 5 ,F=0. 5) )
796 min=6.52E01 med=2.04E02 avg=1.99E02 max=3.39E02 RW( o1=NM([ 10 , 10] 10) , r )
797 min=9.78E01 med=2.80E02 avg=2.80E02 max=4.80E02 RW( o1=UM([ 10 , 10] 10) , r )
798 min=1.48E01 med=3.21E01 avg=3.14E01 max=4.40E01 RS( r )
799
800
801 SumCosLn
802
803 min=1.81E06 med=5.45E06 avg=5.35E06 max=8.32E06 HC( o1=NM([ 10 , 10] 10) , f=f1 , r )
804 min=2.87E07 med=7.85E07 avg=7.68E07 max=1.07E06 HC( o1=UM([ 10 , 10] 10) , f=f1 , r )
805 min=1.74E01 med=3.73E01 avg=3.72E01 max=5.47E01 SA( o1=NM([ 10 , 10] 10) , f=f1 , r , l og ( 1) )
806 min=2.66E01 med=5.08E01 avg=5.01E01 max=6.91E01 SA( o1=UM([ 10 , 10] 10) , f=f1 , r , l og ( 1) )
807 min=1.74E01 med=3.73E01 avg=3.72E01 max=5.47E01 SA( o1=NM([ 10 , 10] 10) , f=f1 , r , l og ( 1) )
808 min=2.66E01 med=5.08E01 avg=5.01E01 max=6.91E01 SA( o1=UM([ 10 , 10] 10) , f=f1 , r , l og ( 1) )
809 min=4.49E03 med=1.90E02 avg=3.10E02 max=1.58E01 SA( o1=NM([ 10 , 10] 10) , f=f1 , r , l og ( 0 . 1 ) )
810 min=1.59E01 med=4.00E01 avg=4.05E01 max=6.28E01 SA( o1=UM([ 10 , 10] 10) , f=f1 , r , l og ( 0 . 1 ) )
811 min=1.30E05 med=3.01E05 avg=3.02E05 max=4.60E05 SA( o1=NM([ 10 , 10] 10) , f=f1 , r , l og ( 0. 001) )
812 min=9.12E06 med=3.06E05 avg=3.09E05 max=4.65E05 SA( o1=UM([ 10 , 10] 10) , f=f1 , r , l og ( 0. 001) )
813 min=1.91E06 med=7.65E06 avg=7.37E06 max=1.08E05 SA( o1=NM([ 10 , 10] 10) , f=f1 , r , l og ( 1. 0E4) )
814 min=9.69E07 med=2.81E06 avg=2.73E06 max=4.64E06 SA( o1=UM([ 10 , 10] 10) , f=f1 , r , l og ( 1. 0E4) )
815 min=1.85E01 med=4.12E01 avg=4.11E01 max=6.28E01 SQ( o1=NM([ 10 , 10] 10) , f=f1 , r , exp ( Ts=1, e =1.0E5) )
816 min=2.77E01 med=5.05E01 avg=5.08E01 max=6.98E01 SQ( o1=UM([ 10 , 10] 10) , f=f1 , r , exp ( Ts=1, e =1.0E5) )
817 min=1.51E05 med=3.79E05 avg=3.69E05 max=5.62E05 SQ( o1=NM([ 10 , 10] 10) , f=f1 , r , exp ( Ts=1, e =1.0E4) )
818 min=1.42E05 med=3.80E05 avg=3.86E05 max=7.46E05 SQ( o1=UM([ 10 , 10] 10) , f=f1 , r , exp ( Ts=1, e =1.0E4) )
819 min=1.19E01 med=3.28E01 avg=3.30E01 max=5.16E01 SQ( o1=NM([ 10 , 10] 10) , f=f1 , r , exp ( Ts =0. 1 , e =1.0E5) )
820 min=2.57E01 med=4.96E01 avg=4.98E01 max=6.93E01 SQ( o1=UM([ 10 , 10] 10) , f=f1 , r , exp ( Ts =0. 1 , e =1.0E5) )
821 min=5.31E06 med=1.09E05 avg=1.08E05 max=1.61E05 SQ( o1=NM([ 10 , 10] 10) , f=f1 , r , exp ( Ts =0. 1 , e =1.0E4) )
822 min=5.44E07 med=3.33E06 avg=3.30E06 max=5.07E06 SQ( o1=UM([ 10 , 10] 10) , f=f1 , r , exp ( Ts =0. 1 , e =1.0E4) )
823 min=5.18E05 med=1.70E04 avg=1.67E04 max=2.70E04 SQ( o1=NM([ 10 , 10] 10) , f=f1 , r , exp ( Ts =0. 001 , e =1.0E5) )
824 min=6.21E05 med=2.74E04 avg=2.80E04 max=5.32E04 SQ( o1=UM([ 10 , 10] 10) , f=f1 , r , exp ( Ts =0. 001 , e =1.0E5) )
825 min=2.92E06 med=6.12E06 avg=6.17E06 max=9.99E06 SQ( o1=NM([ 10 , 10] 10) , f=f1 , r , exp ( Ts =0. 001 , e =1.0E4) )
826 min=4.20E07 med=9.52E07 avg=9.52E07 max=1.33E06 SQ( o1=UM([ 10 , 10] 10) , f=f1 , r , exp ( Ts =0. 001 , e =1.0E4) )
827 min=6.32E06 med=2.00E05 avg=1.93E05 max=3.06E05 SQ( o1=NM([ 10 , 10] 10) , f=f1 , r , exp ( Ts=1.0E4, e =1.0E5) )
828 min=7.75E06 med=1.49E05 avg=1.50E05 max=2.83E05 SQ( o1=UM([ 10 , 10] 10) , f=f1 , r , exp ( Ts=1.0E4, e =1.0E5) )
829 min=1.41E06 med=5.96E06 avg=5.82E06 max=8.84E06 SQ( o1=NM([ 10 , 10] 10) , f=f1 , r , exp ( Ts=1.0E4, e =1.0E4) )
830 min=4.20E07 med=8.13E07 avg=8.34E07 max=1.22E06 SQ( o1=UM([ 10 , 10] 10) , f=f1 , r , exp ( Ts=1.0E4, e =1.0E4) )
831 min=9.08E07 med=2.40E06 avg=2.39E06 max=4.11E06 sEA( o1=NM([ 10 , 10] 10) , o2=WAC, f=f1 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Tour ( k=2) )
832 min=9.25E08 med=2.33E07 avg=2.34E07 max=4.25E07 sEA( o1=UM([ 10 , 10] 10) , o2=WAC, f=f1 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Tour ( k=2) )
833 min=3.54E07 med=8.83E07 avg=8.81E07 max=1.39E06 sEA( o1=NM([ 10 , 10] 10) , o2=WAC, f=f1 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Tour ( k=3) )
834 min=3.99E08 med=8.99E08 avg=8.96E08 max=1.43E07 sEA( o1=UM([ 10 , 10] 10) , o2=WAC, f=f1 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Tour ( k=3) )
835 min=2.91E08 med=1.16E07 avg=1.12E07 max=1.94E07 sEA( o1=NM([ 10 , 10] 10) , o2=WAC, f=f1 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Tour ( k=5) )
836 min=4.16E09 med=1.13E08 avg=1.14E08 max=2.03E08 sEA( o1=UM([ 10 , 10] 10) , o2=WAC, f=f1 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Tour ( k=5) )
837 min=5.61E07 med=3.11E06 avg=3.32E06 max=8.17E06 sEA( o1=NM([ 10 , 10] 10) , o2=WAC, f=f1 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Trunc (mps=10) )
838 min=1.22E07 med=4.85E07 avg=5.19E07 max=1.18E06 sEA( o1=UM([ 10 , 10] 10) , o2=WAC, f=f1 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Trunc (mps=10) )
839 min=1.61E07 med=5.61E07 avg=5.64E07 max=9.32E07 sEA( o1=NM([ 10 , 10] 10) , o2=WAC, f=f1 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Trunc (mps=50) )
840 min=1.91E08 med=5.48E08 avg=5.47E08 max=1.14E07 sEA( o1=UM([ 10 , 10] 10) , o2=WAC, f=f1 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Trunc (mps=50) )
841 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+50) , f=f1 , r )
842 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES( ( 10/2 , 50) , f=f1 , r )
843 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+50) , f=f1 , r )
844 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES( ( 10/10 , 50) , f=f1 , r )
845 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+100) , f=f1 , r )
846 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES( ( 10/2 , 100) , f=f1 , r )
847 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+100) , f=f1 , r )
848 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES( ( 10/10 , 100) , f=f1 , r )
849 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+200) , f=f1 , r )
850 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES( ( 10/2 , 200) , f=f1 , r )
851 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/10+200) , f=f1 , r )
852 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES( ( 10/10 , 200) , f=f1 , r )
853 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+50) , f=f1 , r )
854 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES( ( 20/2 , 50) , f=f1 , r )
855 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+50) , f=f1 , r )
856 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES( ( 20/10 , 50) , f=f1 , r )
857 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+100) , f=f1 , r )
858 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES( ( 20/2 , 100) , f=f1 , r )
859 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+100) , f=f1 , r )
860 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES( ( 20/10 , 100) , f=f1 , r )
861 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+200) , f=f1 , r )
862 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES( ( 20/2 , 200) , f=f1 , r )
863 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/10+200) , f=f1 , r )
864 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES( ( 20/10 , 200) , f=f1 , r )
865 min=0.00E00 med=0.00E00 avg=1.50E06 max=1.40E04 DE1( f=f1 , r , ps =50, op3=1/rand/ bi n ([ 10 , 10] 10 , cr =0. 3 ,F=0. 7) )
866 min=1.96E03 med=1.85E02 avg=2.46E02 max=8.89E02 DE1( f=f1 , r , ps =50, op3=1/rand/ bi n ([ 10 , 10] 10 , cr =0. 7 ,F=0. 3) )
867 min=0.00E00 med=1.39E10 avg=6.28E06 max=2.57E04 DE1( f=f1 , r , ps =50, op3=1/rand/ bi n ([ 10 , 10] 10 , cr =0. 5 ,F=0. 5) )
868 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =50, op3=1/rand/exp ([ 10 , 10] 10 , cr =0. 3 ,F=0. 7) )
869 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =50, op3=1/rand/exp ([ 10 , 10] 10 , cr =0. 7 ,F=0. 3) )
870 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =50, op3=1/rand/exp ([ 10 , 10] 10 , cr =0. 5 ,F=0. 5) )
871 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =100, op3=1/rand/ bi n ([ 10 , 10] 10 , cr =0. 3 ,F=0. 7) )
872 min=6.42E06 med=1.26E03 avg=1.79E03 max=1.46E02 DE1( f=f1 , r , ps =100, op3=1/rand/ bi n ([ 10 , 10] 10 , cr =0. 7 ,F=0. 3) )
873 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =100, op3=1/rand/ bi n ([ 10 , 10] 10 , cr =0. 5 ,F=0. 5) )
874 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =100, op3=1/rand/exp ([ 10 , 10] 10 , cr =0. 3 ,F=0. 7) )
875 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =100, op3=1/rand/exp ([ 10 , 10] 10 , cr =0. 7 ,F=0. 3) )
876 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =100, op3=1/rand/exp ([ 10 , 10] 10 , cr =0. 5 ,F=0. 5) )
877 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =200, op3=1/rand/ bi n ([ 10 , 10] 10 , cr =0. 3 ,F=0. 7) )
878 min=3.37E12 med=1.12E06 avg=1.11E05 max=2.30E04 DE1( f=f1 , r , ps =200, op3=1/rand/ bi n ([ 10 , 10] 10 , cr =0. 7 ,F=0. 3) )
879 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =200, op3=1/rand/ bi n ([ 10 , 10] 10 , cr =0. 5 ,F=0. 5) )
880 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =200, op3=1/rand/exp ([ 10 , 10] 10 , cr =0. 3 ,F=0. 7) )
881 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =200, op3=1/rand/exp ([ 10 , 10] 10 , cr =0. 7 ,F=0. 3) )
882 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 DE1( f=f1 , r , ps =200, op3=1/rand/exp ([ 10 , 10] 10 , cr =0. 5 ,F=0. 5) )
883 min=2.48E01 med=4.20E01 avg=4.15E01 max=5.84E01 RW( o1=NM([ 10 , 10] 10) , f=f1 , r )
884 min=2.73E01 med=5.09E01 avg=5.05E01 max=7.04E01 RW( o1=UM([ 10 , 10] 10) , f=f1 , r )
885 min=1.14E01 med=1.68E01 avg=1.66E01 max=1.95E01 RS( f=f1 , r )
886
887
888 CosLnSum
889
890 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 HC( o1=NM([ 10 , 10] 10) , f=f2 , r )
891 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 HC( o1=UM([ 10 , 10] 10) , f=f2 , r )
892 min=6.38E01 med=7.44E01 avg=7.42E01 max=8.92E01 SA( o1=NM([ 10 , 10] 10) , f=f2 , r , l og ( 1) )
893 min=6.80E01 med=8.23E01 avg=8.17E01 max=9.54E01 SA( o1=UM([ 10 , 10] 10) , f=f2 , r , l og ( 1) )
894 min=6.38E01 med=7.44E01 avg=7.42E01 max=8.92E01 SA( o1=NM([ 10 , 10] 10) , f=f2 , r , l og ( 1) )
895 min=6.80E01 med=8.23E01 avg=8.17E01 max=9.54E01 SA( o1=UM([ 10 , 10] 10) , f=f2 , r , l og ( 1) )
896 min=5.68E01 med=5.99E01 avg=6.02E01 max=6.91E01 SA( o1=NM([ 10 , 10] 10) , f=f2 , r , l og ( 0 . 1 ) )
897 min=6.65E01 med=7.78E01 avg=7.77E01 max=9.21E01 SA( o1=UM([ 10 , 10] 10) , f=f2 , r , l og ( 0 . 1 ) )
898 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 SA( o1=NM([ 10 , 10] 10) , f=f2 , r , l og ( 0. 001) )
899 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 SA( o1=UM([ 10 , 10] 10) , f=f2 , r , l og ( 0. 001) )
900 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 SA( o1=NM([ 10 , 10] 10) , f=f2 , r , l og ( 1. 0E4) )
901 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 SA( o1=UM([ 10 , 10] 10) , f=f2 , r , l og ( 1. 0E4) )
902 min=6.64E01 med=7.61E01 avg=7.61E01 max=8.87E01 SQ( o1=NM([ 10 , 10] 10) , f=f2 , r , exp ( Ts=1, e =1.0E5) )
903 min=6.81E01 med=8.27E01 avg=8.19E01 max=9.57E01 SQ( o1=UM([ 10 , 10] 10) , f=f2 , r , exp ( Ts=1, e =1.0E5) )
904 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 SQ( o1=NM([ 10 , 10] 10) , f=f2 , r , exp ( Ts=1, e =1.0E4) )
905 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 SQ( o1=UM([ 10 , 10] 10) , f=f2 , r , exp ( Ts=1, e =1.0E4) )
906 min=6.31E01 med=7.27E01 avg=7.33E01 max=8.51E01 SQ( o1=NM([ 10 , 10] 10) , f=f2 , r , exp ( Ts =0. 1 , e =1.0E5) )
907 min=6.85E01 med=8.17E01 avg=8.14E01 max=9.54E01 SQ( o1=UM([ 10 , 10] 10) , f=f2 , r , exp ( Ts =0. 1 , e =1.0E5) )
908 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 SQ( o1=NM([ 10 , 10] 10) , f=f2 , r , exp ( Ts =0. 1 , e =1.0E4) )
909 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 SQ( o1=UM([ 10 , 10] 10) , f=f2 , r , exp ( Ts =0. 1 , e =1.0E4) )
910 min=5.49E01 med=5.50E01 avg=5.50E01 max=5.50E01 SQ( o1=NM([ 10 , 10] 10) , f=f2 , r , exp ( Ts =0. 001 , e =1.0E5) )
911 min=5.49E01 med=5.50E01 avg=5.50E01 max=5.51E01 SQ( o1=UM([ 10 , 10] 10) , f=f2 , r , exp ( Ts =0. 001 , e =1.0E5) )
912 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 SQ( o1=NM([ 10 , 10] 10) , f=f2 , r , exp ( Ts =0. 001 , e =1.0E4) )
913 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 SQ( o1=UM([ 10 , 10] 10) , f=f2 , r , exp ( Ts =0. 001 , e =1.0E4) )
914 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 SQ( o1=NM([ 10 , 10] 10) , f=f2 , r , exp ( Ts=1.0E4, e =1.0E5) )
915 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 SQ( o1=UM([ 10 , 10] 10) , f=f2 , r , exp ( Ts=1.0E4, e =1.0E5) )
916 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 SQ( o1=NM([ 10 , 10] 10) , f=f2 , r , exp ( Ts=1.0E4, e =1.0E4) )
917 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 SQ( o1=UM([ 10 , 10] 10) , f=f2 , r , exp ( Ts=1.0E4, e =1.0E4) )
918 min=5.49E01 med=5.55E01 avg=5.57E01 max=5.83E01 sEA( o1=NM([ 10 , 10] 10) , o2=WAC, f=f2 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Tour ( k=2) )
919 min=5.76E01 med=6.39E01 avg=6.38E01 max=6.72E01 sEA( o1=UM([ 10 , 10] 10) , o2=WAC, f=f2 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Tour ( k=2) )
920 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.55E01 sEA( o1=NM([ 10 , 10] 10) , o2=WAC, f=f2 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Tour ( k=3) )
921 min=5.65E01 med=6.13E01 avg=6.12E01 max=6.45E01 sEA( o1=UM([ 10 , 10] 10) , o2=WAC, f=f2 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Tour ( k=3) )
922 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 sEA( o1=NM([ 10 , 10] 10) , o2=WAC, f=f2 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Tour ( k=5) )
923 min=5.57E01 med=5.95E01 avg=5.94E01 max=6.24E01 sEA( o1=UM([ 10 , 10] 10) , o2=WAC, f=f2 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Tour ( k=5) )
924 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 sEA( o1=NM([ 10 , 10] 10) , o2=WAC, f=f2 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Trunc (mps=10) )
925 min=5.52E01 med=5.76E01 avg=5.77E01 max=6.07E01 sEA( o1=UM([ 10 , 10] 10) , o2=WAC, f=f2 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Trunc (mps=10) )
926 min=5.49E01 med=5.56E01 avg=5.58E01 max=5.87E01 sEA( o1=NM([ 10 , 10] 10) , o2=WAC, f=f2 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Trunc (mps=50) )
927 min=5.78E01 med=6.32E01 avg=6.31E01 max=6.75E01 sEA( o1=UM([ 10 , 10] 10) , o2=WAC, f=f2 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Trunc (mps=50) )
928 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+50) , f=f2 , r )
929 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES( ( 10/2 , 50) , f=f2 , r )
930 min=0.00E00 med=0.00E00 avg=1.09E01 max=7.25E01 ES((10/10+50) , f=f2 , r )
931 min=0.00E00 med=6.88E01 avg=6.58E01 max=7.40E01 ES( ( 10/10 , 50) , f=f2 , r )
932 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+100) , f=f2 , r )
933 min=0.00E00 med=0.00E00 avg=1.93E02 max=6.68E01 ES( ( 10/2 , 100) , f=f2 , r )
934 min=0.00E00 med=0.00E00 avg=6.34E02 max=7.32E01 ES((10/10+100) , f=f2 , r )
935 min=0.00E00 med=6.99E01 avg=6.52E01 max=7.38E01 ES( ( 10/10 , 100) , f=f2 , r )
936 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2+200) , f=f2 , r )
937 min=0.00E00 med=0.00E00 avg=2.08E02 max=7.13E01 ES( ( 10/2 , 200) , f=f2 , r )
938 min=0.00E00 med=0.00E00 avg=7.61E02 max=7.24E01 ES((10/10+200) , f=f2 , r )
939 min=0.00E00 med=6.97E01 avg=6.07E01 max=7.39E01 ES( ( 10/10 , 200) , f=f2 , r )
940 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+50) , f=f2 , r )
941 min=0.00E00 med=0.00E00 avg=3.22E02 max=5.91E01 ES( ( 20/2 , 50) , f=f2 , r )
942 min=0.00E00 med=0.00E00 avg=1.43E01 max=7.32E01 ES((20/10+50) , f=f2 , r )
943 min=4.35E01 med=6.73E01 avg=6.61E01 max=7.34E01 ES( ( 20/10 , 50) , f=f2 , r )
944 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+100) , f=f2 , r )
945 min=0.00E00 med=0.00E00 avg=2.51E01 max=7.17E01 ES( ( 20/2 , 100) , f=f2 , r )
946 min=0.00E00 med=0.00E00 avg=1.04E01 max=7.38E01 ES((20/10+100) , f=f2 , r )
947 min=5.08E01 med=6.85E01 avg=6.78E01 max=7.42E01 ES( ( 20/10 , 100) , f=f2 , r )
948 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((20/2+200) , f=f2 , r )
949 min=0.00E00 med=0.00E00 avg=7.11E02 max=7.21E01 ES( ( 20/2 , 200) , f=f2 , r )
950 min=0.00E00 med=0.00E00 avg=6.18E02 max=7.20E01 ES((20/10+200) , f=f2 , r )
951 min=0.00E00 med=6.98E01 avg=6.74E01 max=7.36E01 ES( ( 20/10 , 200) , f=f2 , r )
952 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 DE1( f=f2 , r , ps =50, op3=1/rand/ bi n ([ 10 , 10] 10 , cr =0. 3 ,F=0. 7) )
953 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.60E01 DE1( f=f2 , r , ps =50, op3=1/rand/ bi n ([ 10 , 10] 10 , cr =0. 7 ,F=0. 3) )
954 min=0.00E00 med=5.49E01 avg=5.27E01 max=5.50E01 DE1( f=f2 , r , ps =50, op3=1/rand/ bi n ([ 10 , 10] 10 , cr =0. 5 ,F=0. 5) )
955 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 DE1( f=f2 , r , ps =50, op3=1/rand/exp ([ 10 , 10] 10 , cr =0. 3 ,F=0. 7) )
956 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 DE1( f=f2 , r , ps =50, op3=1/rand/exp ([ 10 , 10] 10 , cr =0. 7 ,F=0. 3) )
957 min=1.60E05 med=5.49E01 avg=4.67E01 max=5.49E01 DE1( f=f2 , r , ps =50, op3=1/rand/exp ([ 10 , 10] 10 , cr =0. 5 ,F=0. 5) )
958 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 DE1( f=f2 , r , ps =100, op3=1/rand/ bi n ([ 10 , 10] 10 , cr =0. 3 ,F=0. 7) )
959 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 DE1( f=f2 , r , ps =100, op3=1/rand/ bi n ([ 10 , 10] 10 , cr =0. 7 ,F=0. 3) )
960 min=9.22E05 med=5.49E01 avg=4.92E01 max=5.49E01 DE1( f=f2 , r , ps =100, op3=1/rand/ bi n ([ 10 , 10] 10 , cr =0. 5 ,F=0. 5) )
961 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 DE1( f=f2 , r , ps =100, op3=1/rand/exp ([ 10 , 10] 10 , cr =0. 3 ,F=0. 7) )
962 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 DE1( f=f2 , r , ps =100, op3=1/rand/exp ([ 10 , 10] 10 , cr =0. 7 ,F=0. 3) )
963 min=0.00E00 med=2.20E01 avg=2.98E01 max=5.49E01 DE1( f=f2 , r , ps =100, op3=1/rand/exp ([ 10 , 10] 10 , cr =0. 5 ,F=0. 5) )
964 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 DE1( f=f2 , r , ps =200, op3=1/rand/ bi n ([ 10 , 10] 10 , cr =0. 3 ,F=0. 7) )
965 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 DE1( f=f2 , r , ps =200, op3=1/rand/ bi n ([ 10 , 10] 10 , cr =0. 7 ,F=0. 3) )
966 min=0.00E00 med=5.49E01 avg=4.83E01 max=5.49E01 DE1( f=f2 , r , ps =200, op3=1/rand/ bi n ([ 10 , 10] 10 , cr =0. 5 ,F=0. 5) )
967 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 DE1( f=f2 , r , ps =200, op3=1/rand/exp ([ 10 , 10] 10 , cr =0. 3 ,F=0. 7) )
968 min=5.49E01 med=5.49E01 avg=5.49E01 max=5.49E01 DE1( f=f2 , r , ps =200, op3=1/rand/exp ([ 10 , 10] 10 , cr =0. 7 ,F=0. 3) )
969 min=0.00E00 med=1.16E04 avg=1.18E01 max=5.49E01 DE1( f=f2 , r , ps =200, op3=1/rand/exp ([ 10 , 10] 10 , cr =0. 5 ,F=0. 5) )
970 min=6.79E01 med=7.60E01 avg=7.65E01 max=9.00E01 RW( o1=NM([ 10 , 10] 10) , f=f2 , r )
971 min=7.01E01 med=8.19E01 avg=8.20E01 max=9.50E01 RW( o1=UM([ 10 , 10] 10) , f=f2 , r )
972 min=5.94E01 med=6.20E01 avg=6.20E01 max=6.41E01 RS( f=f2 , r )
973
974
975 MaxSqr
976
977 min=2.48E07 med=7.09E07 avg=6.72E07 max=1.08E06 HC( o1=NM([ 10 , 10] 10) , f=f3 , r )
978 min=3.89E08 med=9.93E08 avg=9.92E08 max=1.55E07 HC( o1=UM([ 10 , 10] 10) , f=f3 , r )
979 min=5.83E02 med=2.45E01 avg=2.61E01 max=5.84E01 SA( o1=NM([ 10 , 10] 10) , f=f3 , r , l og ( 1) )
980 min=1.60E01 med=5.66E01 avg=5.59E01 max=8.42E01 SA( o1=UM([ 10 , 10] 10) , f=f3 , r , l og ( 1) )
981 min=5.83E02 med=2.45E01 avg=2.61E01 max=5.84E01 SA( o1=NM([ 10 , 10] 10) , f=f3 , r , l og ( 1) )
982 min=1.60E01 med=5.66E01 avg=5.59E01 max=8.42E01 SA( o1=UM([ 10 , 10] 10) , f=f3 , r , l og ( 1) )
983 min=2.30E03 med=7.57E03 avg=8.24E03 max=2.10E02 SA( o1=NM([ 10 , 10] 10) , f=f3 , r , l og ( 0 . 1 ) )
984 min=6.02E02 med=1.59E01 avg=1.75E01 max=3.77E01 SA( o1=UM([ 10 , 10] 10) , f=f3 , r , l og ( 0 . 1 ) )
985 min=6.61E06 med=2.66E05 avg=2.66E05 max=4.27E05 SA( o1=NM([ 10 , 10] 10) , f=f3 , r , l og ( 0. 001) )
986 min=1.40E05 med=3.90E05 avg=4.09E05 max=8.48E05 SA( o1=UM([ 10 , 10] 10) , f=f3 , r , l og ( 0. 001) )
987 min=8.38E07 med=3.12E06 avg=3.09E06 max=4.69E06 SA( o1=NM([ 10 , 10] 10) , f=f3 , r , l og ( 1. 0E4) )
988 min=7.49E07 med=2.77E06 avg=2.80E06 max=4.92E06 SA( o1=UM([ 10 , 10] 10) , f=f3 , r , l og ( 1. 0E4) )
989 min=1.58E01 med=4.69E01 avg=4.61E01 max=6.79E01 SQ( o1=NM([ 10 , 10] 10) , f=f3 , r , exp ( Ts=1, e =1.0E5) )
990 min=1.65E01 med=6.48E01 avg=6.39E01 max=8.85E01 SQ( o1=UM([ 10 , 10] 10) , f=f3 , r , exp ( Ts=1, e =1.0E5) )
991 min=1.26E05 med=3.16E05 avg=3.23E05 max=6.22E05 SQ( o1=NM([ 10 , 10] 10) , f=f3 , r , exp ( Ts=1, e =1.0E4) )
992 min=1.55E05 med=5.95E05 avg=6.39E05 max=1.71E04 SQ( o1=UM([ 10 , 10] 10) , f=f3 , r , exp ( Ts=1, e =1.0E4) )
993 min=2.17E02 med=1.42E01 avg=1.46E01 max=3.40E01 SQ( o1=NM([ 10 , 10] 10) , f=f3 , r , exp ( Ts =0. 1 , e =1.0E5) )
994 min=1.73E01 med=5.09E01 avg=4.98E01 max=7.88E01 SQ( o1=UM([ 10 , 10] 10) , f=f3 , r , exp ( Ts =0. 1 , e =1.0E5) )
995 min=1.69E06 med=4.27E06 avg=4.07E06 max=6.38E06 SQ( o1=NM([ 10 , 10] 10) , f=f3 , r , exp ( Ts =0. 1 , e =1.0E4) )
996 min=8.95E07 med=3.30E06 avg=3.23E06 max=5.54E06 SQ( o1=UM([ 10 , 10] 10) , f=f3 , r , exp ( Ts =0. 1 , e =1.0E4) )
997 min=5.81E05 med=1.91E04 avg=1.87E04 max=3.08E04 SQ( o1=NM([ 10 , 10] 10) , f=f3 , r , exp ( Ts =0. 001 , e =1.0E5) )
998 min=1.39E04 med=4.25E04 avg=4.56E04 max=1.31E03 SQ( o1=UM([ 10 , 10] 10) , f=f3 , r , exp ( Ts =0. 001 , e =1.0E5) )
999 min=3.82E07 med=8.67E07 avg=8.59E07 max=1.44E06 SQ( o1=NM([ 10 , 10] 10) , f=f3 , r , exp ( Ts =0. 001 , e =1.0E4) )
1000 min=4.17E08 med=1.45E07 avg=1.39E07 max=2.20E07 SQ( o1=UM([ 10 , 10] 10) , f=f3 , r , exp ( Ts =0. 001 , e =1.0E4) )
1001 min=5.15E06 med=1.49E05 avg=1.45E05 max=2.36E05 SQ( o1=NM([ 10 , 10] 10) , f=f3 , r , exp ( Ts=1.0E4, e =1.0E5) )
1002 min=5.36E06 med=1.74E05 avg=1.87E05 max=3.65E05 SQ( o1=UM([ 10 , 10] 10) , f=f3 , r , exp ( Ts=1.0E4, e =1.0E5) )
1003 min=4.33E07 med=7.68E07 avg=7.77E07 max=1.37E06 SQ( o1=NM([ 10 , 10] 10) , f=f3 , r , exp ( Ts=1.0E4, e =1.0E4) )
1004 min=4.87E08 med=1.15E07 avg=1.15E07 max=1.83E07 SQ( o1=UM([ 10 , 10] 10) , f=f3 , r , exp ( Ts=1.0E4, e =1.0E4) )
1005 min=1.62E07 med=3.64E07 avg=3.62E07 max=5.36E07 sEA( o1=NM([ 10 , 10] 10) , o2=WAC, f=f3 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Tour ( k=2) )
1006 min=1.55E08 med=6.68E08 avg=1.74E04 max=3.38E03 sEA( o1=UM([ 10 , 10] 10) , o2=WAC, f=f3 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Tour ( k=2) )
1007 min=4.80E08 med=1.23E07 avg=1.27E07 max=2.09E07 sEA( o1=NM([ 10 , 10] 10) , o2=WAC, f=f3 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Tour ( k=3) )
1008 min=1.63E09 med=1.54E08 avg=6.88E05 max=6.27E03 sEA( o1=UM([ 10 , 10] 10) , o2=WAC, f=f3 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Tour ( k=3) )
1009 min=8.05E09 med=1.85E08 avg=1.83E08 max=3.93E08 sEA( o1=NM([ 10 , 10] 10) , o2=WAC, f=f3 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Tour ( k=5) )
1010 min=5.56E10 med=2.09E09 avg=1.85E06 max=1.09E04 sEA( o1=UM([ 10 , 10] 10) , o2=WAC, f=f3 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Tour ( k=5) )
1011 min=1.12E07 med=5.34E07 avg=5.18E07 max=1.01E06 sEA( o1=NM([ 10 , 10] 10) , o2=WAC, f=f3 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Trunc (mps=10) )
1012 min=3.10E08 med=8.64E08 avg=1.51E04 max=6.32E03 sEA( o1=UM([ 10 , 10] 10) , o2=WAC, f=f3 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Trunc (mps=10) )
1013 min=4.05E08 med=8.51E08 avg=8.52E08 max=1.39E07 sEA( o1=NM([ 10 , 10] 10) , o2=WAC, f=f3 , r , cr =0. 7 , mr=0. 3 , ps =100,mps=100, s e l=Trunc (mps=50) )
1014 min=4.04E-09 med=1.03E-08 avg=3.12E-05 max=7.60E-04 sEA(o1=UM([-10,10]^10), o2=WAC, f=f3, r, cr=0.7, mr=0.3, ps=100, mps=100, sel=Trunc(mps=50))
1015 min=2.16E-263 med=3.27E-252 avg=4.47E-241 max=4.13E-239 ES((10/2+50), f=f3, r)
1016 min=0.00E00 med=0.00E00 avg=0.00E00 max=0.00E00 ES((10/2,50), f=f3, r)
1017 min=2.75E-157 med=5.60E-143 avg=5.19E-132 max=4.18E-130 ES((10/10+50), f=f3, r)
1018 min=2.48E-247 med=1.34E-240 avg=4.35E-234 max=3.88E-232 ES((10/10,50), f=f3, r)
1019 min=2.04E-198 med=5.86E-191 avg=6.18E-184 max=6.13E-182 ES((10/2+100), f=f3, r)
1020 min=1.84E-243 med=1.06E-235 avg=4.38E-231 max=1.52E-229 ES((10/2,100), f=f3, r)
1021 min=4.43E-126 med=2.74E-116 avg=6.56E-112 max=4.19E-110 ES((10/10+100), f=f3, r)
1022 min=1.67E-178 med=1.86E-172 avg=3.71E-169 max=2.51E-167 ES((10/10,100), f=f3, r)
1023 min=1.70E-132 med=3.68E-128 avg=3.75E-124 max=3.66E-122 ES((10/2+200), f=f3, r)
1024 min=1.48E-149 med=4.41E-145 avg=1.11E-140 max=1.08E-138 ES((10/2,200), f=f3, r)
1025 min=5.62E-89 med=2.70E-83 avg=4.31E-79 max=3.22E-77 ES((10/10+200), f=f3, r)
1026 min=1.51E-112 med=5.22E-109 avg=4.88E-107 max=2.13E-105 ES((10/10,200), f=f3, r)
1027 min=4.74E-170 med=1.89E-161 avg=5.44E-155 max=5.43E-153 ES((20/2+50), f=f3, r)
1028 min=1.67E-211 med=4.91E-204 avg=5.56E-200 max=3.92E-198 ES((20/2,50), f=f3, r)
1029 min=1.26E-97 med=8.56E-91 avg=2.17E-87 max=4.95E-86 ES((20/10+50), f=f3, r)
1030 min=9.37E-124 med=1.61E-119 avg=1.46E-115 max=1.01E-113 ES((20/10,50), f=f3, r)
1031 min=1.30E-147 med=1.48E-140 avg=1.61E-136 max=1.26E-134 ES((20/2+100), f=f3, r)
1032 min=4.62E-186 med=3.39E-181 avg=7.32E-179 max=5.29E-177 ES((20/2,100), f=f3, r)
1033 min=5.45E-87 med=3.16E-83 avg=2.15E-79 max=1.07E-77 ES((20/10+100), f=f3, r)
1034 min=5.71E-129 med=1.04E-125 avg=3.03E-123 max=2.03E-121 ES((20/10,100), f=f3, r)
1035 min=1.37E-107 med=3.07E-104 avg=9.60E-103 max=5.28E-101 ES((20/2+200), f=f3, r)
1036 min=5.72E-127 med=3.11E-123 avg=6.42E-122 max=9.39E-121 ES((20/2,200), f=f3, r)
1037 min=2.47E-70 med=1.89E-65 avg=5.60E-64 max=2.38E-62 ES((20/10+200), f=f3, r)
1038 min=6.72E-92 med=1.25E-89 avg=1.31E-88 max=5.09E-87 ES((20/10,200), f=f3, r)
1039 min=4.79E-08 med=4.50E-04 avg=4.44E-03 max=1.05E-01 DE1(f=f3, r, ps=50, op3=1/rand/bin([-10,10]^10, cr=0.3, F=0.7))
1040 min=6.86E-04 med=1.88E-02 avg=2.34E-02 max=7.67E-02 DE1(f=f3, r, ps=50, op3=1/rand/bin([-10,10]^10, cr=0.7, F=0.3))
1041 min=2.67E-07 med=1.67E-03 avg=3.43E-03 max=2.38E-02 DE1(f=f3, r, ps=50, op3=1/rand/bin([-10,10]^10, cr=0.5, F=0.5))
1042 min=3.31E-93 med=6.71E-89 avg=6.34E-87 max=2.35E-85 DE1(f=f3, r, ps=50, op3=1/rand/exp([-10,10]^10, cr=0.3, F=0.7))
1043 min=4.67E-152 med=4.73E-23 avg=3.02E-04 max=2.17E-02 DE1(f=f3, r, ps=50, op3=1/rand/exp([-10,10]^10, cr=0.7, F=0.3))
1044 min=5.11E-134 med=3.46E-128 avg=1.74E-123 max=1.54E-121 DE1(f=f3, r, ps=50, op3=1/rand/exp([-10,10]^10, cr=0.5, F=0.5))
1045 min=6.35E-40 med=4.25E-37 avg=3.81E-07 max=3.43E-05 DE1(f=f3, r, ps=100, op3=1/rand/bin([-10,10]^10, cr=0.3, F=0.7))
1046 min=4.63E-05 med=3.12E-03 avg=4.20E-03 max=1.92E-02 DE1(f=f3, r, ps=100, op3=1/rand/bin([-10,10]^10, cr=0.7, F=0.3))
1047 min=2.01E-46 med=2.30E-13 avg=1.92E-05 max=8.15E-04 DE1(f=f3, r, ps=100, op3=1/rand/bin([-10,10]^10, cr=0.5, F=0.5))
1048 min=1.56E-49 med=5.08E-47 avg=5.86E-46 max=1.68E-44 DE1(f=f3, r, ps=100, op3=1/rand/exp([-10,10]^10, cr=0.3, F=0.7))
1049 min=2.96E-89 med=3.16E-83 avg=2.31E-18 max=1.99E-16 DE1(f=f3, r, ps=100, op3=1/rand/exp([-10,10]^10, cr=0.7, F=0.3))
1050 min=8.32E-71 med=2.69E-68 avg=5.74E-67 max=2.74E-65 DE1(f=f3, r, ps=100, op3=1/rand/exp([-10,10]^10, cr=0.5, F=0.5))
1051 min=3.62E-21 med=2.32E-19 avg=3.31E-19 max=2.69E-18 DE1(f=f3, r, ps=200, op3=1/rand/bin([-10,10]^10, cr=0.3, F=0.7))
1052 min=3.65E-09 med=3.57E-05 avg=1.71E-04 max=2.21E-03 DE1(f=f3, r, ps=200, op3=1/rand/bin([-10,10]^10, cr=0.7, F=0.3))
1053 min=9.65E-25 med=2.18E-23 avg=4.58E-23 max=3.99E-22 DE1(f=f3, r, ps=200, op3=1/rand/bin([-10,10]^10, cr=0.5, F=0.5))
1054 min=3.45E-26 med=1.08E-24 avg=2.13E-24 max=1.28E-23 DE1(f=f3, r, ps=200, op3=1/rand/exp([-10,10]^10, cr=0.3, F=0.7))
1055 min=3.70E-47 med=9.81E-45 avg=8.41E-44 max=2.48E-42 DE1(f=f3, r, ps=200, op3=1/rand/exp([-10,10]^10, cr=0.7, F=0.3))
1056 min=3.67E-38 med=6.08E-36 avg=1.69E-35 max=1.49E-34 DE1(f=f3, r, ps=200, op3=1/rand/exp([-10,10]^10, cr=0.5, F=0.5))
1057 min=1.84E-01 med=4.78E-01 avg=4.69E-01 max=7.12E-01 RW(o1=NM([-10,10]^10), f=f3, r)
1058 min=2.26E-01 med=6.78E-01 avg=6.56E-01 max=8.45E-01 RW(o1=UM([-10,10]^10), f=f3, r)
1059 min=4.70E-02 med=9.81E-02 avg=9.45E-02 max=1.34E-01 RS(f=f3, r)
Listing 57.6: An example output of the Function test program.
57.1. DEMOS FOR STRING-BASED PROBLEM SPACES 885
57.1.1.2.2 Sphere Problem
Listing 57.7: Solving the Sphere functions.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.strings.real;
5
6 import java.util.Random;
7
8 import org.goataa.impl.algorithms.RandomSampling;
9 import org.goataa.impl.algorithms.RandomWalk;
10 import org.goataa.impl.algorithms.hc.HillClimbing;
11 import org.goataa.impl.gpms.IdentityMapping;
12 import org.goataa.impl.searchOperations.strings.real.nullary.DoubleArrayUniformCreation;
13 import org.goataa.impl.searchOperations.strings.real.unary.DoubleArrayAllNormalMutation;
14 import org.goataa.impl.searchOperations.strings.real.unary.DoubleArrayAllUniformMutation;
15 import org.goataa.impl.termination.StepLimit;
16 import org.goataa.impl.utils.BufferedStatistics;
17 import org.goataa.spec.IGPM;
18 import org.goataa.spec.INullarySearchOperation;
19 import org.goataa.spec.IObjectiveFunction;
20 import org.goataa.spec.ITerminationCriterion;
21 import org.goataa.spec.IUnarySearchOperation;
22
23 import demos.org.goataa.strings.real.benchmarkFunctions.Sphere;
24
25 /**
26  * A test of some optimization methods applied to the sphere function (see
27  * Section 50.3.1.1 on page 581). A more extensive test is given in
28  * Paragraph 57.1.1.2.1 on page 863.
29  *
30  * @author Thomas Weise
31  */
32 public class SphereTest {
33
34 /**
35  * The main routine
36  *
37  * @param args
38  *          the command line arguments which are ignored here
39  */
40 @SuppressWarnings("unchecked")
41 public static final void main(final String[] args) {
42 final IObjectiveFunction<double[]> f;
43 final INullarySearchOperation<double[]> create;
44 final IUnarySearchOperation<double[]> uniform, normal;
45 final IGPM<double[], double[]> gpm;
46 final ITerminationCriterion term;
47 double[] x;
48 int maxRuns, maxSteps;
49 final BufferedStatistics stat;
50 int i;
51
52 maxRuns = 10;
53 maxSteps = 10000;
54
55 System.out.println("Maximum Number of Steps: " + maxSteps); //$NON-NLS-1$
56 System.out.println("Number of Runs : " + maxRuns); //$NON-NLS-1$
57 System.out.println();
58
59 f = new Sphere();
60 term = new StepLimit(maxSteps);
61
62 create = new DoubleArrayUniformCreation(10, -3.0, 3.0);
63 normal = new DoubleArrayAllNormalMutation(-3.0, 3.0);
64 uniform = new DoubleArrayAllUniformMutation(-3.0, 3.0);
65 gpm = ((IGPM) (IdentityMapping.IDENTITY_MAPPING));
66 stat = new BufferedStatistics();
67
68 // Hill Climbing ( Algorithm 26.1 on page 230)
69 stat.clear();
70 for (i = 0; i < maxRuns; i++) {
71 term.reset();
72 x = HillClimbing.hillClimbing(f, create, uniform, gpm, term,
73 new Random()).x;
74 stat.add(f.compute(x, null));
75 }
76 System.out.println("HC + uniform: best =" + stat.min() + //$NON-NLS-1$
77 "\n med =" + stat.median() + //$NON-NLS-1$
78 "\n mean =" + stat.mean() + //$NON-NLS-1$
79 "\n worst=" + stat.max());//$NON-NLS-1$
80
81 stat.clear();
82 for (i = 0; i < maxRuns; i++) {
83 term.reset();
84 x = HillClimbing.hillClimbing(f, create, normal, gpm, term,
85 new Random()).x;
86 stat.add(f.compute(x, null));
87 }
88 System.out.println("\nHC + normal : best =" + stat.min() + //$NON-NLS-1$
89 "\n med =" + stat.median() + //$NON-NLS-1$
90 "\n mean =" + stat.mean() + //$NON-NLS-1$
91 "\n worst=" + stat.max());//$NON-NLS-1$
92
93 // Random Walks ( Section 8.2 on page 114)
94 stat.clear();
95 for (i = 0; i < maxRuns; i++) {
96 term.reset();
97 x = RandomWalk.randomWalk(f, create, uniform, gpm, term,
98 new Random()).x;
99 stat.add(f.compute(x, null));
100 }
101 System.out.println("\nRW + uniform: best =" + stat.min() + //$NON-NLS-1$
102 "\n med =" + stat.median() + //$NON-NLS-1$
103 "\n mean =" + stat.mean() + //$NON-NLS-1$
104 "\n worst=" + stat.max());//$NON-NLS-1$
105
106 stat.clear();
107 for (i = 0; i < maxRuns; i++) {
108 term.reset();
109 x = RandomWalk
110 .randomWalk(f, create, normal, gpm, term, new Random()).x;
111 stat.add(f.compute(x, null));
112 }
113 System.out.println("\nRW + normal : best =" + stat.min() + //$NON-NLS-1$
114 "\n med =" + stat.median() + //$NON-NLS-1$
115 "\n mean =" + stat.mean() + //$NON-NLS-1$
116 "\n worst=" + stat.max());//$NON-NLS-1$
117
118 // Random Sampling ( Section 8.1 on page 113)
119 stat.clear();
120 for (i = 0; i < maxRuns; i++) {
121 term.reset();
122 x = RandomSampling
123 .randomSampling(f, create, gpm, term, new Random()).x;
124 stat.add(f.compute(x, null));
125 }
126 System.out.println("\nRS : best =" + stat.min() + //$NON-NLS-1$
127 "\n med =" + stat.median() + //$NON-NLS-1$
128 "\n mean =" + stat.mean() + //$NON-NLS-1$
129 "\n worst=" + stat.max());//$NON-NLS-1$
130 }
131 }
1 Maximum Number of Steps: 1000000
2 Number of Runs : 100
3
4 HC + uniform: best =2.1184487316487466E-7
5 med =1.7652549059811728E-6
6 mean =1.7392477542969818E-6
7 worst=2.5010946915949356E-6
8
9 HC + normal : best =3.806889268385154E-6
10 med =1.223340156083753E-5
11 mean =1.2081647908816525E-5
12 worst=1.674893713000729E-5
13
14 RW + uniform: best =6.426100906677331
15 med =18.57390162796623
16 mean =18.917613859217102
17 worst=30.58207139961655
18
19 RW + normal : best =2.198685027112634
20 med =9.71028302069235
21 mean =9.905444249311946
22 worst=20.553126721126365
23
24 RS : best =0.6277154825382958
25 med =1.7375737390707902
26 mean =1.6778694214227383
27 worst=2.4311392069803324
Listing 57.8: An example output of the Sphere test program.
57.1.2 Permutation-based Problem Spaces
Many combinatorial optimization problems basically require us to find a good permutation
of elements. In this package, we provide some examples for that.
57.1.2.1 The Bin Packing Problem
The Bin Packing problem has been introduced in Example E1.1 on page 25 in the book.
In Task 67 on page 238, we provide a version of the problem where the aim is to pack n
objects with sizes a1 to an into as few bins of size b as possible. The number of bins required
is k.
Here, we focus on that problem. The objective functions introduced in this package
consider a candidate solution x to be a permutation of the natural numbers 0 to n-1 which
denotes the order in which the objects are to be put into bins. Starting with the first object
and an empty bin, the objects are packed in a loop. The ith element x[i] is taken. If its size
a[x[i]] is no larger than the remaining space in the bin, it is placed into the bin and the free
space of the bin is reduced by a[x[i]]. If it does not fit, a new bin is required. The object
is put into the new bin and b - a[x[i]] remains as free space. This process is repeated until all
objects have been packed into the bins according to the permutation x. The number k of
bins is counted.
This number is subject to minimization. There exist a couple of different ways to express
this optimization criterion and in this package, we provide some of them. Besides counting
the bins directly, we could, for instance, also count the (squares of the) space remaining free
(wasted) in each bin and minimize that number instead.
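The decoding loop described above can be illustrated in isolation. The following is a minimal, self-contained sketch of the first-fit decoding of a permutation (a hypothetical helper written just for this explanation, not one of the framework classes used in the listings below):

```java
/** A toy illustration of the permutation-to-bins decoding described above. */
public class BinCountDemo {

  /**
   * Count the bins of size b needed when objects with the given sizes are
   * packed in the order defined by the permutation x.
   */
  static int countBins(final int[] x, final int[] sizes, final int b) {
    int free = 0; // free space remaining in the current bin
    int k = 0; // number of bins opened so far
    for (final int index : x) {
      final int size = sizes[index];
      if (size <= free) {
        free -= size; // the object fits into the current bin
      } else {
        free = b - size; // open a new bin and put the object into it
        k++;
      }
    }
    return k;
  }

  public static void main(final String[] args) {
    final int[] sizes = { 5, 4, 5, 4 };
    // good order: (5,4)(5,4) -> 2 bins of size 9
    System.out.println(countBins(new int[] { 0, 1, 2, 3 }, sizes, 9)); // 2
    // bad order: (5)(5,4)(4) -> 3 bins
    System.out.println(countBins(new int[] { 0, 2, 1, 3 }, sizes, 9)); // 3
  }
}
```

The same decoding loop appears in all objective functions below; they differ only in how they turn the bin count and the wasted space into an objective value.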
57.1.2.1.1 Base Classes
Listing 57.9: An Instance of the Bin Packing Problem.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.strings.permutation.binPacking;
5
6 import java.util.Arrays;
7
8 import org.goataa.impl.OptimizationModule;
9 import org.goataa.impl.utils.TextUtils;
10
11 /**
12  * We discussed bin packing both in
13  * Task 67 on page 238 and
14  * Example E1.1 on page 25. This object holds the
15  * information of an instance of the bin packing problem. We took the
16  * example instances from [SchKle2003]: Armin Scholl and Robert Klein. Bin
17  * Packing. Friedrich Schiller University of Jena,
18  * Wirtschaftswissenschaftliche Fakultaet: Jena, Thuringia, Germany,
19  * September 2, 2003. See http://www.wiwi.uni-jena.de/Entscheidung/binpp/
20  *
21  * @author Thomas Weise
22  */
23 public class BinPackingInstance extends OptimizationModule {
24
25 /** a constant required by Java serialization */
26 private static final long serialVersionUID = 1;
27
28 /** a bin packing instance from [SchKle2003] */
29 public static final BinPackingInstance N1C1W1_A = new BinPackingInstance(
30 100, new int[] { 50, 100, 99, 99, 96, 96, 92, 92, 91, 88, 87, 86,
31 85, 76, 74, 72, 69, 67, 67, 62, 61, 56, 52, 51, 49, 46, 44, 42,
32 40, 40, 33, 33, 30, 30, 29, 28, 28, 27, 25, 24, 23, 22, 21, 20,
33 17, 14, 13, 11, 10, 7, 7, 3 }, 25, "N1C1W1_A"); //$NON-NLS-1$
34
35 /** a bin packing instance from [SchKle2003] */
36 public static final BinPackingInstance N1C2W2_D = new BinPackingInstance(
37 120, new int[] { 50, 120, 99, 98, 98, 97, 96, 90, 88, 86, 82, 82,
38 80, 79, 76, 76, 76, 74, 69, 67, 66, 64, 62, 59, 55, 52, 51, 51,
39 50, 49, 44, 43, 41, 41, 41, 41, 41, 37, 35, 33, 32, 32, 31, 31,
40 31, 30, 29, 23, 23, 22, 20, 20 }, 24, "N1C2W2_D");//$NON-NLS-1$
41
42 /** a bin packing instance from [SchKle2003] */
43 public static final BinPackingInstance N2C1W1_A = new BinPackingInstance(
44 100, new int[] { 100, 100, 99, 97, 95, 95, 94, 92, 91, 89, 86, 86,
45 85, 84, 80, 80, 80, 80, 80, 79, 76, 76, 75, 74, 73, 71, 71, 69,
46 65, 64, 64, 64, 63, 63, 62, 60, 59, 58, 57, 54, 53, 52, 51, 50,
47 48, 48, 48, 46, 44, 43, 43, 43, 43, 42, 41, 40, 40, 39, 38, 38,
48 38, 38, 37, 37, 37, 37, 36, 35, 34, 33, 32, 30, 29, 28, 26, 26,
49 26, 24, 23, 22, 21, 21, 19, 18, 17, 16, 16, 15, 14, 13, 12, 12,
50 11, 9, 9, 8, 8, 7, 6, 6, 5, 1 }, 48, "N2C1W1_A");//$NON-NLS-1$
51
52 /** a bin packing instance from [SchKle2003] */
53 public static final BinPackingInstance N2C3W2_C = new BinPackingInstance(
54 150, new int[] { 100, 150, 100, 99, 99, 98, 97, 97, 97, 96, 96, 95,
55 95, 95, 94, 93, 93, 93, 92, 91, 89, 88, 87, 86, 84, 84, 83, 83,
56 82, 81, 81, 81, 78, 78, 75, 74, 73, 72, 72, 71, 70, 68, 67, 66,
57 65, 64, 63, 63, 62, 60, 60, 59, 59, 58, 57, 56, 56, 55, 54, 51,
58 49, 49, 48, 47, 47, 46, 45, 45, 45, 45, 44, 44, 44, 44, 43, 41,
59 41, 40, 39, 39, 39, 37, 37, 37, 35, 35, 34, 32, 31, 31, 30, 28,
60 26, 25, 24, 24, 23, 23, 22, 21, 20, 20 }, 41, "N2C3W2_C");//$NON-NLS-1$
61
62 /** a bin packing instance from [SchKle2003] */
63 public static final BinPackingInstance N3C3W4_M = new BinPackingInstance(
64 150, new int[] { 200, 150, 100, 100, 100, 99, 99, 98, 98, 98, 98,
65 97, 96, 95, 94, 94, 94, 94, 93, 93, 93, 93, 93, 92, 92, 92, 91,
66 90, 90, 90, 90, 90, 90, 89, 89, 88, 88, 87, 87, 86, 86, 86, 86,
67 86, 85, 85, 85, 85, 84, 84, 83, 83, 83, 82, 82, 82, 82, 82, 81,
68 81, 80, 80, 79, 79, 79, 79, 79, 79, 78, 78, 78, 77, 77, 76, 76,
69 76, 76, 75, 75, 75, 74, 74, 74, 74, 74, 73, 73, 73, 73, 72, 72,
70 71, 69, 69, 69, 69, 68, 68, 68, 67, 67, 66, 65, 65, 65, 63, 63,
71 63, 62, 61, 61, 61, 61, 60, 60, 59, 59, 59, 59, 58, 58, 58, 58,
72 58, 56, 56, 56, 55, 55, 54, 54, 54, 53, 53, 53, 53, 53, 52, 52,
73 52, 52, 51, 51, 51, 51, 51, 50, 50, 49, 49, 49, 48, 48, 47, 46,
74 46, 46, 46, 45, 45, 45, 44, 44, 44, 42, 42, 42, 41, 41, 39, 39,
75 38, 38, 38, 38, 38, 37, 37, 37, 37, 37, 37, 37, 36, 36, 36, 36,
76 35, 35, 35, 34, 34, 34, 33, 32, 31, 30, 30, 30, 30, 30, 30 },
77 88, "N3C3W4_M");//$NON-NLS-1$
78
79 /** a bin packing instance from [SchKle2003] */
80 public static final BinPackingInstance N4C2W1_I = new BinPackingInstance(
81 120, new int[] { 500, 120, 100, 100, 100, 100, 100, 99, 99, 99, 99,
82 99, 99, 99, 99, 99, 99, 99, 98, 98, 98, 98, 98, 98, 98, 97, 97,
83 97, 96, 96, 96, 96, 96, 96, 96, 96, 95, 95, 95, 95, 94, 94, 94,
84 94, 94, 93, 92, 92, 92, 92, 91, 91, 91, 91, 91, 91, 90, 90, 90,
85 90, 90, 89, 89, 89, 89, 89, 88, 88, 88, 88, 88, 87, 87, 87, 86,
86 86, 86, 86, 85, 85, 85, 85, 84, 84, 84, 84, 84, 84, 83, 83, 83,
87 83, 83, 83, 82, 82, 82, 82, 82, 82, 82, 82, 81, 81, 81, 81, 81,
88 80, 80, 80, 80, 79, 79, 79, 79, 78, 78, 78, 77, 77, 77, 76, 76,
89 75, 75, 74, 74, 74, 74, 74, 73, 73, 73, 72, 72, 72, 72, 72, 72,
90 72, 72, 71, 71, 71, 71, 71, 70, 70, 70, 70, 70, 70, 70, 70, 69,
91 69, 69, 69, 68, 68, 67, 67, 67, 67, 67, 67, 67, 66, 66, 66, 65,
92 65, 65, 65, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 63, 63, 63,
93 63, 63, 63, 63, 62, 62, 62, 62, 62, 61, 61, 61, 61, 61, 60, 60,
94 60, 59, 59, 58, 58, 58, 58, 58, 58, 57, 57, 57, 57, 56, 56, 56,
95 56, 55, 55, 55, 55, 55, 55, 54, 54, 54, 54, 53, 53, 53, 52, 52,
96 52, 52, 52, 51, 51, 51, 51, 51, 50, 50, 50, 50, 50, 50, 50, 50,
97 50, 49, 49, 49, 48, 48, 48, 47, 47, 47, 47, 47, 47, 46, 46, 46,
98 46, 46, 45, 45, 45, 45, 44, 44, 44, 43, 43, 43, 43, 43, 43, 43,
99 42, 42, 42, 42, 42, 42, 42, 42, 41, 41, 41, 41, 41, 41, 40, 40,
100 40, 40, 40, 40, 40, 39, 39, 39, 39, 39, 38, 38, 38, 38, 37, 37,
101 37, 37, 37, 36, 36, 36, 36, 36, 36, 36, 36, 35, 35, 34, 34, 34,
102 34, 34, 34, 34, 34, 33, 33, 33, 33, 33, 32, 32, 31, 31, 31, 31,
103 31, 31, 30, 29, 29, 29, 28, 28, 28, 28, 28, 28, 28, 27, 27, 27,
104 27, 26, 26, 26, 26, 26, 26, 26, 26, 25, 25, 25, 25, 25, 25, 24,
105 24, 24, 24, 24, 24, 24, 24, 24, 23, 23, 23, 22, 22, 22, 21, 21,
106 21, 21, 21, 21, 20, 20, 20, 20, 20, 20, 19, 19, 19, 19, 18, 18,
107 18, 18, 18, 18, 18, 18, 18, 18, 17, 17, 17, 17, 17, 16, 16, 16,
108 16, 16, 15, 15, 15, 15, 15, 14, 14, 14, 14, 14, 13, 13, 13, 13,
109 13, 12, 12, 12, 12, 11, 11, 11, 11, 11, 11, 10, 10, 10, 10, 10,
110 9, 9, 9, 8, 7, 7, 7, 7, 7, 7, 7, 7, 7, 6, 6, 6, 5, 5, 5, 5, 5,
111 5, 5, 5, 5, 4, 4, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1,
112 1, }, 209, "N4C2W1_I");//$NON-NLS-1$
113
114 /** a bin packing instance from [SchKle2003] */
115 public static final BinPackingInstance N4C3W1_L = new BinPackingInstance(
116 150, new int[] { 500, 150, 100, 100, 100, 100, 100, 99, 99, 99, 98,
117 98, 98, 98, 98, 98, 97, 97, 97, 97, 97, 97, 97, 97, 97, 96, 96,
118 95, 95, 94, 94, 94, 94, 93, 93, 93, 93, 93, 93, 92, 92, 92, 92,
119 92, 92, 91, 91, 91, 91, 91, 90, 89, 89, 88, 88, 88, 88, 88, 87,
120 87, 87, 87, 86, 85, 85, 85, 85, 84, 84, 84, 83, 83, 83, 83, 82,
121 81, 81, 81, 81, 81, 81, 81, 80, 80, 79, 79, 79, 79, 79, 79, 79,
122 79, 78, 78, 78, 78, 78, 78, 77, 77, 77, 77, 77, 77, 77, 76, 76,
123 76, 76, 76, 75, 75, 75, 74, 74, 74, 74, 74, 74, 74, 74, 73, 73,
124 73, 73, 72, 72, 72, 72, 72, 72, 71, 71, 71, 71, 70, 70, 70, 70,
125 70, 69, 69, 69, 69, 68, 68, 68, 68, 67, 67, 67, 67, 67, 66, 66,
126 66, 66, 66, 66, 65, 65, 65, 65, 65, 64, 64, 64, 64, 64, 64, 64,
127 63, 63, 63, 63, 63, 63, 63, 62, 62, 61, 61, 61, 60, 60, 60, 60,
128 59, 59, 59, 59, 59, 58, 58, 58, 58, 57, 57, 57, 57, 57, 57, 57,
129 57, 56, 56, 56, 56, 56, 55, 55, 55, 54, 54, 54, 53, 53, 53, 52,
130 52, 52, 52, 52, 52, 52, 51, 51, 51, 50, 50, 50, 50, 50, 50, 50,
131 49, 49, 49, 49, 49, 48, 48, 48, 48, 48, 48, 48, 48, 48, 47, 47,
132 47, 47, 47, 47, 46, 46, 46, 46, 46, 46, 45, 45, 45, 45, 45, 45,
133 45, 44, 44, 44, 44, 44, 44, 44, 43, 43, 43, 43, 43, 43, 43, 43,
134 43, 43, 42, 42, 42, 42, 42, 42, 42, 41, 41, 41, 41, 40, 40, 40,
135 40, 39, 39, 39, 39, 39, 38, 38, 38, 38, 38, 38, 37, 37, 37, 36,
136 36, 36, 36, 36, 35, 35, 35, 35, 35, 35, 35, 35, 34, 34, 34, 33,
137 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 31, 31, 31, 31,
138 31, 31, 30, 30, 30, 30, 30, 29, 29, 29, 29, 28, 28, 28, 28, 28,
139 28, 28, 28, 27, 27, 27, 27, 26, 26, 26, 26, 26, 26, 26, 26, 26,
140 26, 25, 25, 25, 25, 25, 25, 25, 24, 24, 24, 23, 23, 23, 23, 23,
141 23, 23, 23, 23, 22, 22, 22, 22, 22, 21, 21, 21, 21, 21, 21, 21,
142 21, 20, 20, 20, 19, 19, 18, 18, 18, 17, 17, 17, 17, 16, 16, 16,
143 15, 15, 14, 14, 14, 14, 14, 14, 13, 13, 13, 13, 13, 13, 13, 12,
144 12, 11, 11, 11, 11, 11, 11, 11, 11, 11, 10, 10, 10, 10, 10, 10,
145 9, 9, 9, 8, 8, 8, 8, 8, 8, 7, 7, 7, 7, 7, 7, 7, 6, 6, 6, 5, 5,
146 5, 5, 4, 4, 4, 4, 4, 4, 4, 4, 4, 3, 3, 3, 3, 2, 2, 2, 2, 1, 1,
147 1, }, 163, "N4C3W1_L");//$NON-NLS-1$
148
149 /** a list with the bin packing instances */
150 public static final BinPackingInstance[] ALL_INSTANCES = new BinPackingInstance[] {
151 N1C1W1_A, N1C2W2_D, N2C1W1_A, N2C3W2_C, N3C3W4_M, N4C2W1_I, N4C3W1_L };
152
153 /** the size of each bin */
154 public final int b;
155
156 /** the sizes of the objects, sorted from the smallest to largest */
157 public final int[] a;
158
159 /**
160  * the minimum number of bins of size b known to be able to facilitate
161  * the objects of sizes a; -1 if unknown
162  */
163 public final int bestK;
164
165 /** the instance name, null if unknown */
166 public final String name;
167
168 /**
169  * Instantiate the bin packing instance
170  *
171  * @param pb
172  *          the bin size
173  * @param pa
174  *          the object sizes
175  * @param k
176  *          the best solution; -1 if unknown
177  * @param pname
178  *          the instance name, null if unknown
179  */
180 public BinPackingInstance(final int pb, final int[] pa, final int k,
181 final String pname) {
182 super();
183 this.b = pb;
184 this.a = pa.clone();
185 Arrays.sort(this.a);
186 this.bestK = k;
187 this.name = pname;
188 }
189
190 /**
191  * Instantiate the bin packing instance
192  *
193  * @param copy
194  *          the bin packing instance to copy
195  */
196 public BinPackingInstance(final BinPackingInstance copy) {
197 this(copy.b, copy.a, copy.bestK, copy.name);
198 }
199
200 /**
201  * Instantiate the bin packing instance
202  *
203  * @param copy
204  *          the bin packing instance to copy
205  * @param on
206  *          the override name
207  */
208 BinPackingInstance(final BinPackingInstance copy, final String on) {
209 this(copy.b, copy.a, copy.bestK, (copy.name == null) ? on : (on + " ("
210 + copy.name + ")"));
211 }
212
213 /**
214  * Get the name of the optimization module
215  *
216  * @param longVersion
217  *          true if the long name should be returned, false if the short
218  *          name should be returned
219  * @return the name of the optimization module
220  */
221 @Override
222 public String getName(final boolean longVersion) {
223 if (this.name != null) {
224 return this.name;
225 }
226
227 return super.getName(longVersion);
228 }
229
230 /**
231  * Get the full configuration which holds all the data necessary to
232  * describe this object.
233  *
234  * @param longVersion
235  *          true if the long version should be returned, false if the
236  *          short version should be returned
237  * @return the full configuration
238  */
239 @Override
240 public String getConfiguration(final boolean longVersion) {
241 boolean bx;
242 StringBuilder sb;
243
244 bx = (this.getClass() != BinPackingInstance.class);
245
246 if (bx && (this.name != null)) {
247 return "";//$NON-NLS-1$
248 }
249
250 sb = new StringBuilder();
251
252 if (!bx) {
253 if (longVersion) {
254 sb.append("sizes=");//$NON-NLS-1$
255 }
256 TextUtils.toStringBuilder(this.a, sb);
257 sb.append(',');
258 }
259
260 if (longVersion) {
261 sb.append("bin=");//$NON-NLS-1$
262 }
263 sb.append(this.b);
264
265 sb.append(',');
266 if ((this.bestK > 0) && (this.bestK < Integer.MAX_VALUE)) {
267 if (longVersion) {
268 sb.append("best=");//$NON-NLS-1$
269 }
270 sb.append(this.bestK);
271 }
272
273 return sb.toString();
274 }
275
276 }
Listing 57.10: The first objective function f1 from Task 67.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.strings.permutation.binPacking;
5
6 import java.util.Random;
7
8 import org.goataa.spec.IObjectiveFunction;
9
10 /**
11  * The first objective function (f1) mentioned in the bin packing
12  * Task 67 on page 238. An objective function which
13  * computes the number of bins of size b required to store n objects with
14  * the sizes a0 to an-1 according to a given sequence x. It takes a
15  * permutation x of the first n natural numbers (or better, from 0 to n-1)
16  * as argument and returns the number of bins required to facilitate the
17  * objects.
18  *
19  * @author Thomas Weise
20  */
21 public class BinPackingNumberOfBins extends BinPackingInstance implements
22 IObjectiveFunction<int[]> {
23
24 /** a constant required by Java serialization */
25 private static final long serialVersionUID = 1;
26
27 /**
28  * Instantiate the bin packing objective function with the raw
29  * parameters.
30  *
31  * @param pb
32  *          the bin size
33  * @param pa
34  *          the object sizes
35  */
36 public BinPackingNumberOfBins(final int pb, final int[] pa) {
37 super(pb, pa, -1, "f1"); //$NON-NLS-1$
38 }
39
40 /**
41  * Instantiate the bin packing objective function based on an instance
42  * object
43  *
44  * @param bpi
45  *          the bin packing instance
46  */
47 public BinPackingNumberOfBins(final BinPackingInstance bpi) {
48 super(bpi, "f1");//$NON-NLS-1$
49 }
50
51 /**
52  * Compute the number k of bins required by the object permutation x.
53  *
54  * @param x
55  *          the phenotype to be rated
56  * @param r
57  *          the randomizer
58  * @return the number k of bins
59  */
60 public final double compute(final int[] x, final Random r) {
61 int k, free, size, i;
62
63 free = 0;
64 k = 0;
65 // for each object in the sequence
66 for (i = 0; i < x.length; i++) {
67 // get the size of the current object
68 size = this.a[x[i]];
69
70 // does the object fit into the current bin?
71 if (size <= free) {
72 // if so, subtract its size from the remaining free size
73 free -= size;
74 } else {
75 // otherwise, start a new bin and put the object into it
76 free = (this.b - size);
77 k++;
78 }
79 }
80
81 return k;
82 }
83 }
Listing 57.11: The second objective function f2 from Task 67.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.strings.permutation.binPacking;
5
6 import java.util.Random;
7
8 import org.goataa.spec.IObjectiveFunction;
9
10 /**
11  * The second objective function (f2) mentioned in the bin packing
12  * Task 67 on page 238. An objective function which
13  * computes the squares of the free space left over in each bin of size b
14  * required to store n objects with the sizes a0 to an-1 according to a
15  * given sequence x. It takes a permutation x of the first n natural
16  * numbers (or better, from 0 to n-1) as argument and returns the free
17  * space left over in the bins.
18  *
19  * @author Thomas Weise
20  */
21 public class BinPackingFreeSpace extends BinPackingInstance implements
22 IObjectiveFunction<int[]> {
23
24 /** a constant required by Java serialization */
25 private static final long serialVersionUID = 1;
26
27 /**
28  * Instantiate the bin packing objective function with the raw parameters
29  *
30  * @param pb
31  *          the bin size
32  * @param pa
33  *          the object sizes
34  */
35 public BinPackingFreeSpace(final int pb, final int[] pa) {
36 super(pb, pa, -1, "f2"); //$NON-NLS-1$
37 }
38
39 /**
40  * Instantiate the bin packing objective function based on an instance
41  * object
42  *
43  * @param bpi
44  *          the bin packing instance
45  */
46 public BinPackingFreeSpace(final BinPackingInstance bpi) {
47 super(bpi, "f2");//$NON-NLS-1$
48 }
49
50 /**
51  * Compute the squares of the wasted space in the bins required by the
52  * object permutation x.
53  *
54  * @param x
55  *          the phenotype to be rated
56  * @param r
57  *          the randomizer
58  * @return the squared wasted space
59  */
60 public final double compute(final int[] x, final Random r) {
61 int leftOver, free, size, i;
62
63 free = 0;
64 leftOver = 0;
65 // for each object in the sequence
66 for (i = 0; i < x.length; i++) {
67 // get the size of the current object
68 size = this.a[x[i]];
69
70 // does the object fit into the current bin?
71 if (size <= free) {
72 // if so, subtract its size from the remaining free size
73 free -= size;
74 } else {
75 // add up the left-over, wasted space
76 leftOver += free;
77 // otherwise, start a new bin and put the object into it
78 free = (this.b - size);
79 }
80 }
81
82 return (((double) leftOver) * leftOver);
83 }
84 }
Listing 57.12: The third objective function f3 from Task 67.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.strings.permutation.binPacking;
5
6 import java.util.Random;
7
8 import org.goataa.spec.IObjectiveFunction;
9
10 /**
11  * The third objective function (f3) mentioned in the bin packing
12  * Task 67 on page 238. It combines the objectives
13  * BinPackingNumberOfBins (f1) and BinPackingFreeSpace (f2) to f3(x) =
14  * f1(x) - 1/(1+f2(x))
15  *
16  * @author Thomas Weise
17  */
18 public class BinPackingNumberOfBinsAndFreeSpace extends BinPackingInstance
19 implements IObjectiveFunction<int[]> {
20
21 /** a constant required by Java serialization */
22 private static final long serialVersionUID = 1;
23
24 /**
25  * Instantiate the bin packing objective function with the raw parameters
26  *
27  * @param pb
28  *          the bin size
29  * @param pa
30  *          the object sizes
31  */
32 public BinPackingNumberOfBinsAndFreeSpace(final int pb, final int[] pa) {
33 super(pb, pa, -1, "f3"); //$NON-NLS-1$
34 }
35
36 /**
37  * Instantiate the bin packing objective function based on an instance
38  * object
39  *
40  * @param bpi
41  *          the bin packing instance
42  */
43 public BinPackingNumberOfBinsAndFreeSpace(final BinPackingInstance bpi) {
44 super(bpi, "f3");//$NON-NLS-1$
45 }
46
47 /**
48  * Compute f3(x) = f1(x) - 1/(1+f2(x)).
49  *
50  * @param x
51  *          the phenotype to be rated
52  * @param r
53  *          the randomizer
54  * @return the objective value f3(x)
55  */
56 public final double compute(final int[] x, final Random r) {
57 int k, leftOver, free, size, i;
58
59 free = 0;
60 leftOver = 0;
61 k = 0;
62 // for each object in the sequence
63 for (i = 0; i < x.length; i++) {
64 // get the size of the current object
65 size = this.a[x[i]];
66
67 // does the object fit into the current bin?
68 if (size <= free) {
69 // if so, subtract its size from the remaining free size
70 free -= size;
71 } else {
72 // add up the left over (waste) space
73 leftOver += free;
74 // otherwise, start a new bin and put the object into it
75 free = (this.b - size);
76 k++;
77 }
78 }
79
80 // f3(x)= f1(x) - 1/(1+f2(x))
81 return (k - ((1d / ((((double) leftOver) * leftOver) + 1d))));
82 }
83 }
Listing 57.13: The fourth objective function f4 from Task 67.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.strings.permutation.binPacking;
5
6 import java.util.Random;
7
8 import org.goataa.spec.IObjectiveFunction;
9
10 /**
11  * The fourth objective function (f5) mentioned in the bin packing
12  * Task 67 on page 238. An objective function which
13  * computes the number of bins of size b required to store n objects
14  * with the sizes a0 to an-1 according to a given sequence x. It takes a
15  * permutation x of the first n natural numbers (or better, from 0 to
16  * n-1) as argument and returns the number of bins required to
17  * facilitate the objects. Different from f1, it also considers the
18  * space left over in the last bin. The idea is that if the space wasted
19  * in the last bin gradually increases, sooner or later, the bin may
20  * disappear altogether.
21  *
22  * @author Thomas Weise
23  */
24 public class BinPackingNumberOfBinsAndLastFree extends BinPackingInstance
25 implements IObjectiveFunction<int[]> {
26
27 /** a constant required by Java serialization */
28 private static final long serialVersionUID = 1;
29
30 /**
31  * Instantiate the bin packing objective function with the raw
32  * parameters.
33  *
34  * @param pb
35  *          the bin size
36  * @param pa
37  *          the object sizes
38  */
39 public BinPackingNumberOfBinsAndLastFree(final int pb, final int[] pa) {
40 super(pb, pa, -1, "f5"); //$NON-NLS-1$
41 }
42
43 /**
44  * Instantiate the bin packing objective function based on an instance
45  * object
46  *
47  * @param bpi
48  *          the bin packing instance
49  */
50 public BinPackingNumberOfBinsAndLastFree(final BinPackingInstance bpi) {
51 super(bpi, "f5");//$NON-NLS-1$
52 }
53
54 /**
55  * Compute the number k of bins required by the object permutation x and
56  * include the space wasted in the last bin in the objective value.
57  *
58  * @param x
59  *          the phenotype to be rated
60  * @param r
61  *          the randomizer
62  * @return k plus a tie-breaking term based on the free space in the
63  *         last bin
64  */
64 public final double compute(final int[] x, final Random r) {
65 int k, free, size, i;
66
67 free = 0;
68 k = 0;
69 // for each object in the sequence
70 for (i = 0; i < x.length; i++) {
71 // get the size of the current object
72 size = this.a[x[i]];
73
74 // does the object fit into the current bin?
75 if (size <= free) {
76 // if so, subtract its size from the remaining free size
77 free -= size;
78 } else {
79 // otherwise, start a new bin and put the object into it
80 free = (this.b - size);
81 k++;
82 }
83 }
84
85 // We want to minimize the number of bins and maximize the free space
86 // in the last bin (now stored in "free"). k is a natural number and
87 // should be the dominating factor. We can add a value in [0,1) to
88 // represent the free space. This factor should be the smaller, the
89 // larger the free space is. By choosing 1/(1+free), we can realize
90 // this. Amongst two solutions which have the same number of bins, now
91 // the one with more free space will be preferred.
92 return k + (1d / (1d + free));
93 }
94 }
Listing 57.14: The fth objective function f
5
from Task 67.
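The k + 1/(1 + free) weighting used above can be sanity-checked in isolation: the bin count k always dominates, and the fractional term only breaks ties in favor of more free space. The following sketch (class and method names are illustrative, not part of the goataa code base) verifies both properties.

```java
// Sketch of the combined bin-count/free-space objective: k dominates,
// free space only breaks ties. Not part of the goataa code base.
public final class CombinedObjectiveSketch {

  /** combine bin count k and free space into a single objective value */
  static double combined(final int k, final int free) {
    // k is the dominating integer part; the tie-breaker lies in (0, 1]
    return k + (1d / (1d + free));
  }

  public static void main(final String[] args) {
    // a solution with fewer bins always wins, regardless of free space
    assert combined(3, 0) < combined(4, 100);
    // with equally many bins, more free space yields the smaller value
    assert combined(3, 5) < combined(3, 0);
    System.out.println("ok"); // → ok
  }
}
```

Because the tie-breaker never reaches 0, a solution with k bins is always rated strictly better than any solution with k+1 bins, no matter how much space the latter leaves free.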
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package demos.org.goataa.strings.permutation.binPacking;

import java.util.Random;

import org.goataa.spec.IObjectiveFunction;

/**
 * The fourth objective function (f4) mentioned in the bin packing
 * Task 67 on page 238. An objective function which
 * computes the number of bins of size b required to store n objects with
 * the sizes a0 to an-1 according to a given sequence x. It takes a
 * permutation x of the first n natural numbers (or better, from 0 to n-1)
 * as argument and returns the number of bins required to facilitate the
 * objects. Different from f1, it also considers the maximum space left
 * over in any bin. The idea is that if the space wasted in one bin
 * gradually increases, sooner or later, the next object will fit into the
 * bin or the bin may disappear altogether.
 *
 * @author Thomas Weise
 */
public class BinPackingNumberOfBinsAndMaxFree extends BinPackingInstance
    implements IObjectiveFunction<int[]> {

  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /**
   * Instantiate the bin packing objective function with the raw
   * parameters.
   *
   * @param pb
   *          the bin size
   * @param pa
   *          the object sizes
   */
  public BinPackingNumberOfBinsAndMaxFree(final int pb, final int[] pa) {
    super(pb, pa, -1, "f4"); //$NON-NLS-1$
  }

  /**
   * Instantiate the bin packing objective function based on an instance
   * object.
   *
   * @param bpi
   *          the bin packing instance
   */
  public BinPackingNumberOfBinsAndMaxFree(final BinPackingInstance bpi) {
    super(bpi, "f4"); //$NON-NLS-1$
  }

  /**
   * Compute the number k of bins required by the object permutation x and
   * include the maximum space wasted in any bin in the objective value.
   *
   * @param x
   *          the phenotype to be rated
   * @param r
   *          the randomizer
   * @return the number k of bins
   */
  public final double compute(final int[] x, final Random r) {
    int k, maxFree, maxFreeT, free, size, i;

    free = 0;
    k = 0;
    maxFree = 0;
    maxFreeT = 0;
    // for each object in the sequence
    for (i = 0; i < x.length; i++) {
      // get the size of the current object
      size = this.a[x[i]];

      // does the object fit into the current bin?
      if (size <= free) {
        // if so, subtract its size from the remaining free size
        free -= size;
      } else {
        if (free > maxFree) {
          maxFree = free;
          maxFreeT = 1;
        } else {
          if (free == maxFree) {
            maxFreeT++;
          }
        }
        // otherwise, start a new bin and put the object into it
        free = (this.b - size);
        k++;
      }
    }

    // We want to minimize the number of bins and maximize the maximum free
    // space (now stored in "maxFree"). k is a natural number and should be
    // the dominating factor. We can add a value in [0,1) to represent the
    // free space. This factor should be the smaller, the larger the free
    // space is. By choosing 1/(1+maxFreeT*maxFree), we can realize this.
    // Amongst two solutions which have the same number of bins, now the
    // one with more free space will be preferred.
    return k + (1d / (1d + (maxFreeT * maxFree)));
  }
}
57.1.2.1.2 Experiment
Listing 57.15: Solving the Bin Packing problem.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package demos.org.goataa.strings.permutation.binPacking;

import java.util.List;

import org.goataa.impl.algorithms.RandomSampling;
import org.goataa.impl.algorithms.RandomWalk;
import org.goataa.impl.algorithms.de.DifferentialEvolution1;
import org.goataa.impl.algorithms.es.EvolutionStrategy;
import org.goataa.impl.algorithms.hc.HillClimbing;
import org.goataa.impl.algorithms.sa.SimulatedAnnealing;
import org.goataa.impl.algorithms.sa.temperatureSchedules.Exponential;
import org.goataa.impl.algorithms.sa.temperatureSchedules.Logarithmic;
import org.goataa.impl.gpms.RandomKeys;
import org.goataa.impl.searchOperations.strings.permutation.nullary.IntPermutationUniformCreation;
import org.goataa.impl.searchOperations.strings.permutation.unary.IntPermutationMultiSwapMutation;
import org.goataa.impl.searchOperations.strings.permutation.unary.IntPermutationSingleSwapMutation;
import org.goataa.impl.searchOperations.strings.real.ternary.DoubleArrayDEbin;
import org.goataa.impl.searchOperations.strings.real.ternary.DoubleArrayDEexp;
import org.goataa.impl.termination.StepLimit;
import org.goataa.impl.utils.BufferedStatistics;
import org.goataa.impl.utils.Individual;
import org.goataa.spec.INullarySearchOperation;
import org.goataa.spec.IObjectiveFunction;
import org.goataa.spec.ISOOptimizationAlgorithm;
import org.goataa.spec.ITemperatureSchedule;
import org.goataa.spec.ITernarySearchOperation;
import org.goataa.spec.IUnarySearchOperation;

/**
 * An attempt to solve the bin packing problem introduced in
 * Example E1.1 on page 25 as stated in
 * Task 67 on page 238 and
 * Task 70 on page 252 with some different optimization
 * algorithms, objective functions, and search operations.
 *
 * @author Thomas Weise
 */
public class BinPackingTest {

  /** the objective functions */
  static IObjectiveFunction<int[]>[] FS;

  /**
   * The main routine which executes the experiment described in
   * Task 67 on page 238.
   *
   * @param args
   *          the command line arguments which are ignored here
   */
  @SuppressWarnings("unchecked")
  public static final void main(final String[] args) {
    BinPackingInstance inst;
    INullarySearchOperation<int[]> create;
    final IUnarySearchOperation<int[]>[] mutate;
    final int maxSteps, runs;
    ITemperatureSchedule[] schedules;
    int i, j, k, l, mi, li, ri;
    HillClimbing<int[], int[]> HC;
    RandomWalk<int[], int[]> RW;
    RandomSampling<int[], int[]> RS;
    SimulatedAnnealing<int[], int[]> SA;
    EvolutionStrategy<int[]> ES;
    DifferentialEvolution1<int[]> DE;
    ITernarySearchOperation[] tern;
    int[] rhos, lambdas, mus;

    maxSteps = 100000;
    runs = 100;

    System.out.println("Maximum Number of Steps: " + maxSteps); //$NON-NLS-1$
    System.out.println("Number of Runs : " + runs); //$NON-NLS-1$
    System.out.println();

    mutate = new IUnarySearchOperation[] {//
        IntPermutationSingleSwapMutation.INT_PERMUTATION_SINGLE_SWAP_MUTATION,//
        IntPermutationMultiSwapMutation.INT_PERMUTATION_MULTI_SWAP_MUTATION };

    HC = new HillClimbing<int[], int[]>();
    SA = new SimulatedAnnealing<int[], int[]>();
    RW = new RandomWalk<int[], int[]>();
    RS = new RandomSampling<int[], int[]>();
    ES = new EvolutionStrategy<int[]>();
    ES.setGPM(new RandomKeys());
    DE = new DifferentialEvolution1<int[]>();
    DE.setGPM(new RandomKeys());

    rhos = new int[] { 2, 10 };
    lambdas = new int[] { 50, 100, 200 };
    mus = new int[] { 10, 20 };

    for (i = 0; i < BinPackingInstance.ALL_INSTANCES.length; i++) {
      inst = BinPackingInstance.ALL_INSTANCES[i];
      create = new IntPermutationUniformCreation(inst.a.length);

      FS = new IObjectiveFunction[] {//
          new BinPackingNumberOfBins(inst),//
          new BinPackingFreeSpace(inst),//
          new BinPackingNumberOfBinsAndFreeSpace(inst),//
          new BinPackingNumberOfBinsAndLastFree(inst),//
          new BinPackingNumberOfBinsAndMaxFree(inst) };//

      System.out.println();
      System.out.println();
      System.out.println("Instance Name : " + inst.name); //$NON-NLS-1$
      System.out.println("Best Possible Solution : " + inst.bestK); //$NON-NLS-1$
      System.out.println();

      // Hill Climbing (Algorithm 26.1 on page 230)
      HC.setNullarySearchOperation(create);
      for (k = 0; k < mutate.length; k++) {
        HC.setUnarySearchOperation(mutate[k]);
        for (j = 0; j < FS.length; j++) {
          HC.setObjectiveFunction(FS[j]);
          testRuns(HC, runs, maxSteps);
        }
      }

      // Simulated Annealing (Algorithm 27.1 on page 245)
      // and Simulated Quenching (Section 27.3.2 on page 249)
      SA.setNullarySearchOperation(create);

      schedules = new ITemperatureSchedule[] {//
          new Logarithmic(inst.a.length - inst.bestK + 1),//
          new Logarithmic(2 * (inst.a[inst.a.length - 1] - inst.a[0])),//
          new Exponential(inst.a.length - inst.bestK + 1, 1e-5),//
          new Exponential(2 * (inst.a[inst.a.length - 1] - inst.a[0]),
              1e-5) //
      };
      for (l = 0; l < schedules.length; l++) {
        SA.setTemperatureSchedule(schedules[l]);
        for (k = 0; k < mutate.length; k++) {
          SA.setUnarySearchOperation(mutate[k]);
          for (j = 0; j < FS.length; j++) {
            SA.setObjectiveFunction(FS[j]);
            testRuns(SA, runs, maxSteps);
          }
        }
      }

      // Evolution Strategies: Algorithm 30.1 on page 361
      ES.setNullarySearchOperation(//
          new org.goataa.impl.searchOperations.strings.real.nullary.DoubleArrayUniformCreation(
              inst.a.length, 0d, 1d));
      ES.setDimension(inst.a.length);
      ES.setMinimum(0d);
      ES.setMaximum(1d);

      for (j = 0; j < FS.length; j++) {
        ES.setObjectiveFunction(FS[j]);

        for (mi = 0; mi < mus.length; mi++) {
          ES.setMu(mus[mi]);
          for (li = 0; li < lambdas.length; li++) {
            ES.setLambda(lambdas[li]);
            for (ri = 0; ri < rhos.length; ri++) {
              ES.setRho(rhos[ri]);

              ES.setPlus(true);
              testRuns(ES, runs, maxSteps);

              ES.setPlus(false);
              testRuns(ES, runs, maxSteps);
            }
          }
        }
      }

      // Differential Evolution: Chapter 33 on page 419
      tern = new ITernarySearchOperation[] {
          new DoubleArrayDEbin(0d, 1d, 0.3d, 0.7d),
          new DoubleArrayDEbin(0d, 1d, 0.7d, 0.3d),
          new DoubleArrayDEbin(0d, 1d, 0.5d, 0.5d),
          new DoubleArrayDEexp(0d, 1d, 0.3d, 0.7d),
          new DoubleArrayDEexp(0d, 1d, 0.7d, 0.3d),
          new DoubleArrayDEexp(0d, 1d, 0.5d, 0.5d), };

      for (j = 0; j < FS.length; j++) {
        DE.setObjectiveFunction(FS[j]);
        DE.setNullarySearchOperation(//
            new org.goataa.impl.searchOperations.strings.real.nullary.DoubleArrayUniformCreation(
                inst.a.length, 0d, 1d));
        for (li = 0; li < lambdas.length; li++) {
          DE.setPopulationSize(lambdas[li]);
          for (mi = 0; mi < tern.length; mi++) {
            DE.setTernarySearchOperation(tern[mi]);
            testRuns(DE, runs, maxSteps);
          }
        }
      }

      // Random Walks (Section 8.2 on page 114)
      RW.setNullarySearchOperation(create);
      for (k = 0; k < mutate.length; k++) {
        RW.setUnarySearchOperation(mutate[k]);
        RW.setObjectiveFunction(FS[0]);
        testRuns(RW, runs, maxSteps);
      }

      // Random Sampling (Section 8.1 on page 113)
      RS.setNullarySearchOperation(create);
      RS.setObjectiveFunction(FS[0]);
      testRuns(RS, runs, maxSteps);

    }
  }

  /**
   * Perform the test runs.
   *
   * @param algorithm
   *          the algorithm configuration to test
   * @param runs
   *          the number of runs to perform
   * @param steps
   *          the number of steps to execute per run
   */
  @SuppressWarnings("unchecked")
  private static final void testRuns(
      final ISOOptimizationAlgorithm<?, int[], ?> algorithm,
      final int runs, final int steps) {
    int i;
    BufferedStatistics stat;
    List<Individual<?, int[]>> solutions;
    Individual<?, int[]> individual;

    stat = new BufferedStatistics();
    algorithm.setTerminationCriterion(new StepLimit(steps));

    for (i = 0; i < runs; i++) {
      algorithm.setRandSeed(i);
      solutions = (List) (algorithm.call());
      individual = solutions.get(0);
      stat.add(FS[0].compute(individual.x, null));
    }

    System.out.println(stat.getConfiguration(false) + ' '
        + algorithm.toString(false));
  }
}
Maximum Number of Steps : 100000
Number of Runs          : 100


Instance Name          : N1C1W1_A
Best Possible Solution : 25

min=2.80E01 med=2.90E01 avg=2.92E01 max=3.10E01 HC(o0=U(52), o1=SS, f=f1(N1C1W1_A), tc=sl(100000), r)
min=2.70E01 med=2.70E01 avg=2.75E01 max=2.80E01 HC(o0=U(52), o1=SS, f=f2(N1C1W1_A), tc=sl(100000), r)
min=2.70E01 med=2.70E01 avg=2.75E01 max=2.80E01 HC(o0=U(52), o1=SS, f=f3(N1C1W1_A), tc=sl(100000), r)
min=2.70E01 med=2.70E01 avg=2.75E01 max=2.80E01 HC(o0=U(52), o1=SS, f=f5(N1C1W1_A), tc=sl(100000), r)
min=2.70E01 med=2.80E01 avg=2.81E01 max=3.00E01 HC(o0=U(52), o1=SS, f=f4(N1C1W1_A), tc=sl(100000), r)
min=2.70E01 med=2.80E01 avg=2.84E01 max=2.90E01 HC(o0=U(52), o1=MS, f=f1(N1C1W1_A), tc=sl(100000), r)
min=2.70E01 med=2.70E01 avg=2.72E01 max=2.80E01 HC(o0=U(52), o1=MS, f=f2(N1C1W1_A), tc=sl(100000), r)
min=2.70E01 med=2.70E01 avg=2.72E01 max=2.80E01 HC(o0=U(52), o1=MS, f=f3(N1C1W1_A), tc=sl(100000), r)
min=2.70E01 med=2.70E01 avg=2.72E01 max=2.80E01 HC(o0=U(52), o1=MS, f=f5(N1C1W1_A), tc=sl(100000), r)
min=2.70E01 med=2.80E01 avg=2.76E01 max=2.90E01 HC(o0=U(52), o1=MS, f=f4(N1C1W1_A), tc=sl(100000), r)
min=2.80E01 med=2.90E01 avg=2.90E01 max=3.00E01 SA(o0=U(52), o1=SS, f=f1(N1C1W1_A), tc=sl(100000), r, log(28))
min=2.70E01 med=2.70E01 avg=2.70E01 max=2.70E01 SA(o0=U(52), o1=SS, f=f2(N1C1W1_A), tc=sl(100000), r, log(28))
min=2.90E01 med=2.90E01 avg=2.90E01 max=3.00E01 SA(o0=U(52), o1=SS, f=f3(N1C1W1_A), tc=sl(100000), r, log(28))
min=2.90E01 med=2.90E01 avg=2.90E01 max=3.00E01 SA(o0=U(52), o1=SS, f=f5(N1C1W1_A), tc=sl(100000), r, log(28))
min=2.90E01 med=2.90E01 avg=2.90E01 max=3.00E01 SA(o0=U(52), o1=SS, f=f4(N1C1W1_A), tc=sl(100000), r, log(28))
min=2.80E01 med=2.90E01 avg=2.90E01 max=3.00E01 SA(o0=U(52), o1=MS, f=f1(N1C1W1_A), tc=sl(100000), r, log(28))
min=2.70E01 med=2.70E01 avg=2.70E01 max=2.70E01 SA(o0=U(52), o1=MS, f=f2(N1C1W1_A), tc=sl(100000), r, log(28))
min=2.80E01 med=2.90E01 avg=2.90E01 max=2.90E01 SA(o0=U(52), o1=MS, f=f3(N1C1W1_A), tc=sl(100000), r, log(28))
min=2.80E01 med=2.90E01 avg=2.90E01 max=3.00E01 SA(o0=U(52), o1=MS, f=f5(N1C1W1_A), tc=sl(100000), r, log(28))
min=2.80E01 med=2.90E01 avg=2.90E01 max=3.00E01 SA(o0=U(52), o1=MS, f=f4(N1C1W1_A), tc=sl(100000), r, log(28))
min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SA(o0=U(52), o1=SS, f=f1(N1C1W1_A), tc=sl(100000), r, log(194))
min=2.70E01 med=2.70E01 avg=2.70E01 max=2.70E01 SA(o0=U(52), o1=SS, f=f2(N1C1W1_A), tc=sl(100000), r, log(194))
min=2.90E01 med=2.90E01 avg=2.93E01 max=3.00E01 SA(o0=U(52), o1=SS, f=f3(N1C1W1_A), tc=sl(100000), r, log(194))
min=2.80E01 med=2.90E01 avg=2.93E01 max=3.00E01 SA(o0=U(52), o1=SS, f=f5(N1C1W1_A), tc=sl(100000), r, log(194))
min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SA(o0=U(52), o1=SS, f=f4(N1C1W1_A), tc=sl(100000), r, log(194))
min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SA(o0=U(52), o1=MS, f=f1(N1C1W1_A), tc=sl(100000), r, log(194))
min=2.70E01 med=2.70E01 avg=2.70E01 max=2.70E01 SA(o0=U(52), o1=MS, f=f2(N1C1W1_A), tc=sl(100000), r, log(194))
min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SA(o0=U(52), o1=MS, f=f3(N1C1W1_A), tc=sl(100000), r, log(194))
min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SA(o0=U(52), o1=MS, f=f5(N1C1W1_A), tc=sl(100000), r, log(194))
min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SA(o0=U(52), o1=MS, f=f4(N1C1W1_A), tc=sl(100000), r, log(194))
min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SQ(o0=U(52), o1=SS, f=f1(N1C1W1_A), tc=sl(100000), r, exp(Ts=28, e=1.0E-5))
min=2.70E01 med=2.70E01 avg=2.70E01 max=2.70E01 SQ(o0=U(52), o1=SS, f=f2(N1C1W1_A), tc=sl(100000), r, exp(Ts=28, e=1.0E-5))
min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SQ(o0=U(52), o1=SS, f=f3(N1C1W1_A), tc=sl(100000), r, exp(Ts=28, e=1.0E-5))
min=2.90E01 med=2.90E01 avg=2.93E01 max=3.00E01 SQ(o0=U(52), o1=SS, f=f5(N1C1W1_A), tc=sl(100000), r, exp(Ts=28, e=1.0E-5))
min=2.80E01 med=2.90E01 avg=2.93E01 max=3.00E01 SQ(o0=U(52), o1=SS, f=f4(N1C1W1_A), tc=sl(100000), r, exp(Ts=28, e=1.0E-5))
min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SQ(o0=U(52), o1=MS, f=f1(N1C1W1_A), tc=sl(100000), r, exp(Ts=28, e=1.0E-5))
min=2.70E01 med=2.70E01 avg=2.70E01 max=2.70E01 SQ(o0=U(52), o1=MS, f=f2(N1C1W1_A), tc=sl(100000), r, exp(Ts=28, e=1.0E-5))
min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SQ(o0=U(52), o1=MS, f=f3(N1C1W1_A), tc=sl(100000), r, exp(Ts=28, e=1.0E-5))
min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SQ(o0=U(52), o1=MS, f=f5(N1C1W1_A), tc=sl(100000), r, exp(Ts=28, e=1.0E-5))
min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SQ(o0=U(52), o1=MS, f=f4(N1C1W1_A), tc=sl(100000), r, exp(Ts=28, e=1.0E-5))
min=2.90E01 med=2.90E01 avg=2.93E01 max=3.00E01 SQ(o0=U(52), o1=SS, f=f1(N1C1W1_A), tc=sl(100000), r, exp(Ts=194, e=1.0E-5))
min=2.70E01 med=2.70E01 avg=2.70E01 max=2.70E01 SQ(o0=U(52), o1=SS, f=f2(N1C1W1_A), tc=sl(100000), r, exp(Ts=194, e=1.0E-5))
min=2.90E01 med=2.90E01 avg=2.93E01 max=3.00E01 SQ(o0=U(52), o1=SS, f=f3(N1C1W1_A), tc=sl(100000), r, exp(Ts=194, e=1.0E-5))
min=2.90E01 med=2.90E01 avg=2.93E01 max=3.00E01 SQ(o0=U(52), o1=SS, f=f5(N1C1W1_A), tc=sl(100000), r, exp(Ts=194, e=1.0E-5))
min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SQ(o0=U(52), o1=SS, f=f4(N1C1W1_A), tc=sl(100000), r, exp(Ts=194, e=1.0E-5))
min=2.90E01 med=2.90E01 avg=2.92E01 max=3.00E01 SQ(o0=U(52), o1=MS, f=f1(N1C1W1_A), tc=sl(100000), r, exp(Ts=194, e=1.0E-5))
min=2.70E01 med=2.70E01 avg=2.70E01 max=2.70E01 SQ(o0=U(52), o1=MS, f=f2(N1C1W1_A), tc=sl(100000), r, exp(Ts=194, e=1.0E-5))
min=2.90E01 med=2.90E01 avg=2.93E01 max=3.00E01 SQ(o0=U(52), o1=MS, f=f3(N1C1W1_A), tc=sl(100000), r, exp(Ts=194, e=1.0E-5))
min=2.90E01 med=2.90E01 avg=2.93E01 max=3.00E01 SQ(o0=U(52), o1=MS, f=f5(N1C1W1_A), tc=sl(100000), r, exp(Ts=194, e=1.0E-5))
min=2.80E01 med=2.90E01 avg=2.92E01 max=3.00E01 SQ(o0=U(52), o1=MS, f=f4(N1C1W1_A), tc=sl(100000), r, exp(Ts=194, e=1.0E-5))
Listing 57.16: An example output of the Bin Packing test program.
57.1.2.2 Example instances of the Traveling Salesman Problem.
Example instances for the Traveling Salesman Problem.
57.1.2.2.1 The Problem att48.
A Traveling Salesman Problem which corresponds to finding the optimal tour through
the 48 capitals of the US. This problem has been taken from
http://www.iwr.uni-heidelberg.de/groups/comopt/software/TSPLIB95/tsp/. In this task, each
of the 48 cities is represented by a coordinate pair, and the distance between two cities is
a rounded pseudo-Euclidean distance derived from their coordinates (TSPLIB edge weight
type ATT). The length of the optimal tour for this instance is 10628.
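The TSPLIB documentation defines the ATT edge weight as a pseudo-Euclidean distance: scale the squared Euclidean distance by 1/10, take the square root, and round up whenever rounding to the nearest integer would round down. A sketch of this rule (the class and method names are illustrative, not part of the goataa code base):

```java
// Sketch of the TSPLIB ATT (pseudo-Euclidean) distance rule; names are
// illustrative and not taken from the goataa code base.
public final class AttDistanceSketch {

  /** the TSPLIB ATT distance between nodes (x1, y1) and (x2, y2) */
  static int att(final int x1, final int y1, final int x2, final int y2) {
    final double xd = x1 - x2;
    final double yd = y1 - y2;
    final double rij = Math.sqrt((xd * xd + yd * yd) / 10.0);
    final int tij = (int) (rij + 0.5); // nearest integer (nint)
    // if nint rounded down, round up instead
    return (tij < rij) ? (tij + 1) : tij;
  }

  public static void main(final String[] args) {
    // distance between the first two att48 cities, (6734,1453) and (2233,10)
    System.out.println(att(6734, 1453, 2233, 10)); // → 1495
  }
}
```

Summing this distance over the 48 edges of the optimal att48 tour yields the stated optimum of 10628; using the plain Euclidean distance instead would produce a different tour length.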
Listing 57.17: The data of the att48 problem instance.
NAME : att48
COMMENT : 48 capitals of the US (Padberg/Rinaldi)
TYPE : TSP
DIMENSION : 48
EDGE_WEIGHT_TYPE : ATT
NODE_COORD_SECTION
1 6734 1453
2 2233 10
3 5530 1424
4 401 841
5 3082 1644
6 7608 4458
7 7573 3716
8 7265 1268
9 6898 1885
10 1112 2049
11 5468 2606
12 5989 2873
13 4706 2674
14 4612 2035
15 6347 2683
16 6107 669
17 7611 5184
18 7462 3590
19 7732 4723
20 5900 3561
21 4483 3369
22 6101 1110
23 5199 2182
24 1633 2809
25 4307 2322
26 675 1006
27 7555 4819
28 7541 3981
29 3177 756
30 7352 4506
31 7545 2801
32 3245 3305
33 6426 3173
34 4608 1198
35 23 2216
36 7248 3779
37 7762 4595
38 7392 2244
39 3484 2829
40 6271 2135
41 4985 140
42 1916 1569
43 7280 4899
44 7509 3239
45 10 2676
46 6807 2993
47 5185 3258
48 3023 1942
EOF
57.2 Demos for Genetic Programming
Examples for (strongly-typed, tree-based) Genetic Programming as defined in Section 31.3
on page 386.
57.2.1 Demos for Genetic Programming Applied to Mathematical Problems
57.2.1.1 Mathematical Problems defined over the Real Numbers
Examples for synthesizing mathematical formulas.
57.2.1.1.1 Symbolic Regressions
An example for Symbolic Regression.
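At its core, the symbolic regression fitness used in the listings below is a sum of squared errors of a candidate function over a set of training cases. A self-contained sketch of this computation (the names are illustrative, not the goataa API):

```java
import java.util.function.DoubleUnaryOperator;

// Sketch of the sum-of-squared-errors fitness behind symbolic
// regression; names are illustrative, not taken from the goataa API.
public final class SseSketch {

  /** sum of squared errors of candidate f over the (x, y) training cases */
  static double sse(final DoubleUnaryOperator f,
      final double[] xs, final double[] ys) {
    double res = 0d;
    for (int i = 0; i < xs.length; i++) {
      // accumulate the squared residual of this training case
      final double v = ys[i] - f.applyAsDouble(xs[i]);
      res += v * v;
    }
    return res;
  }

  public static void main(final String[] args) {
    final double[] xs = { 0d, 1d, 2d, 3d };
    final double[] ys = { 0d, 1d, 4d, 9d }; // target function: y = x^2
    System.out.println(sse(x -> x * x, xs, ys)); // exact fit → 0.0
    System.out.println(sse(x -> x, xs, ys)); // imperfect candidate → 40.0
  }
}
```

A candidate that reproduces every training case exactly receives objective value 0; the Genetic Programming runs below minimize this value over a space of function trees.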
Listing 57.18: A single-objective test program for symbolic regression.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package demos.org.goataa.trees.math.real.sr;

import org.goataa.impl.algorithms.ea.SimpleGenerationalEA;
import org.goataa.impl.algorithms.ea.selection.TournamentSelection;
import org.goataa.impl.gpms.IdentityMapping;
import org.goataa.impl.searchOperations.trees.binary.TreeRecombination;
import org.goataa.impl.searchOperations.trees.nullary.TreeRampedHalfAndHalf;
import org.goataa.impl.searchOperations.trees.unary.TreeMutator;
import org.goataa.impl.searchSpaces.trees.NodeTypeSet;
import org.goataa.impl.searchSpaces.trees.ReflectionNodeType;
import org.goataa.impl.searchSpaces.trees.math.real.RealFunction;
import org.goataa.impl.searchSpaces.trees.math.real.arith.Abs;
import org.goataa.impl.searchSpaces.trees.math.real.arith.Add;
import org.goataa.impl.searchSpaces.trees.math.real.arith.Div;
import org.goataa.impl.searchSpaces.trees.math.real.arith.Exp;
import org.goataa.impl.searchSpaces.trees.math.real.arith.Mul;
import org.goataa.impl.searchSpaces.trees.math.real.arith.Pow;
import org.goataa.impl.searchSpaces.trees.math.real.arith.Sqrt;
import org.goataa.impl.searchSpaces.trees.math.real.arith.Sub;
import org.goataa.impl.searchSpaces.trees.math.real.basic.ConstantType;
import org.goataa.impl.searchSpaces.trees.math.real.basic.VariableType;
import org.goataa.impl.searchSpaces.trees.math.real.trig.Sin;
import org.goataa.impl.termination.StepLimit;
import org.goataa.impl.utils.Individual;
import org.goataa.spec.IBinarySearchOperation;
import org.goataa.spec.IGPM;
import org.goataa.spec.INullarySearchOperation;
import org.goataa.spec.IObjectiveFunction;
import org.goataa.spec.IUnarySearchOperation;

/**
 * An example test program for Symbolic Regression as discussed in
 * Section 49.1 on page 531.
 *
 * @author Thomas Weise
 */
public class SRTest extends Examples {

  /** the training cases */
  static final TrainingCase[] DATA = createTrainingCases(F1, 250);

  /**
   * The main routine.
   *
   * @param args
   *          the command line arguments which are ignored here
   */
  @SuppressWarnings("unchecked")
  public static final void main(final String[] args) {
    final IObjectiveFunction<RealFunction> f;
    final INullarySearchOperation<RealFunction> create;
    final IUnarySearchOperation<RealFunction> mutate;
    final IBinarySearchOperation<RealFunction> crossover;
    final NodeTypeSet<RealFunction> nts;
    final NodeTypeSet<RealFunction>[] binary, unary;
    final SimpleGenerationalEA<RealFunction, RealFunction> EA;
    Individual<RealFunction, RealFunction> ind;

    f = new SRObjectiveFunction2(DATA);

    nts = new NodeTypeSet<RealFunction>();
    binary = new NodeTypeSet[] { nts, nts };
    unary = new NodeTypeSet[] { nts };
    nts.add(new VariableType(DATA[0].data.length));
    nts.add(ConstantType.DEFAULT_CONSTANT_TYPE);
    nts.add(new ReflectionNodeType<Add, RealFunction>(Add.class, binary));
    nts.add(new ReflectionNodeType<Sub, RealFunction>(Sub.class, binary));
    nts.add(new ReflectionNodeType<Mul, RealFunction>(Mul.class, binary));
    nts.add(new ReflectionNodeType<Div, RealFunction>(Div.class, binary));
    nts.add(new ReflectionNodeType<Pow, RealFunction>(Pow.class, binary));
    nts.add(new ReflectionNodeType<Exp, RealFunction>(Exp.class, unary));
    nts.add(new ReflectionNodeType<Abs, RealFunction>(Abs.class, unary));
    nts.add(new ReflectionNodeType<Sin, RealFunction>(Sin.class, unary));
    nts.add(new ReflectionNodeType<Sqrt, RealFunction>(Sqrt.class, unary));

    create = new TreeRampedHalfAndHalf<RealFunction>(nts, 15);
    mutate = ((IUnarySearchOperation) (new TreeMutator(15)));
    crossover = ((IBinarySearchOperation) (new TreeRecombination(15)));

    EA = new SimpleGenerationalEA<RealFunction, RealFunction>();
    EA.setCrossoverRate(0.5);
    EA.setMutationRate(0.5);
    EA.setNullarySearchOperation(create);
    EA.setUnarySearchOperation(mutate);
    EA.setBinarySearchOperation(crossover);
    EA.setGPM((IGPM) (IdentityMapping.IDENTITY_MAPPING));
    EA.setSelectionAlgorithm(new TournamentSelection(2));
    EA.setTerminationCriterion(new StepLimit(500000));
    EA.setPopulationSize(4096);
    EA.setMatingPoolSize(1024);
    EA.setObjectiveFunction(f);

    ind = EA.call().get(0);

    System.out.println(ind.x.toString());
    System.out.println(ind.x.getWeight() + " " + ind.x.getHeight()); //$NON-NLS-1$
    System.out.println();
  }
}
Listing 57.19: A multi-objective test program for symbolic regression.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package demos.org.goataa.trees.math.real.sr;

import java.util.List;

import org.goataa.impl.algorithms.ea.SimpleGenerationalMOEA;
import org.goataa.impl.algorithms.ea.fitnessAssignment.ParetoRanking;
import org.goataa.impl.algorithms.ea.selection.TournamentSelection;
import org.goataa.impl.comparators.Pareto;
import org.goataa.impl.gpms.IdentityMapping;
import org.goataa.impl.objectiveFunctions.TreeSizeObjective;
import org.goataa.impl.searchOperations.trees.binary.TreeRecombination;
import org.goataa.impl.searchOperations.trees.nullary.TreeRampedHalfAndHalf;
import org.goataa.impl.searchOperations.trees.unary.TreeMutator;
import org.goataa.impl.searchSpaces.trees.NodeTypeSet;
import org.goataa.impl.searchSpaces.trees.ReflectionNodeType;
import org.goataa.impl.searchSpaces.trees.math.real.RealFunction;
import org.goataa.impl.searchSpaces.trees.math.real.arith.Abs;
import org.goataa.impl.searchSpaces.trees.math.real.arith.Add;
import org.goataa.impl.searchSpaces.trees.math.real.arith.Div;
import org.goataa.impl.searchSpaces.trees.math.real.arith.Exp;
import org.goataa.impl.searchSpaces.trees.math.real.arith.Mul;
import org.goataa.impl.searchSpaces.trees.math.real.arith.Pow;
import org.goataa.impl.searchSpaces.trees.math.real.arith.Sqrt;
import org.goataa.impl.searchSpaces.trees.math.real.arith.Sub;
import org.goataa.impl.searchSpaces.trees.math.real.basic.ConstantType;
import org.goataa.impl.searchSpaces.trees.math.real.basic.VariableType;
import org.goataa.impl.searchSpaces.trees.math.real.trig.Sin;
import org.goataa.impl.termination.StepLimit;
import org.goataa.impl.utils.MOIndividual;
import org.goataa.spec.IBinarySearchOperation;
import org.goataa.spec.IGPM;
import org.goataa.spec.INullarySearchOperation;
import org.goataa.spec.IObjectiveFunction;
import org.goataa.spec.IUnarySearchOperation;

/**
 * An example test program for Symbolic Regression as discussed in
 * Section 49.1 on page 531.
 *
 * @author Thomas Weise
 */
public class SRTestMO extends Examples {

  /** the training cases */
  static final TrainingCase[] DATA = createTrainingCases(F1, 250);

  /**
   * The main routine.
   *
   * @param args
   *          the command line arguments which are ignored here
   */
  @SuppressWarnings("unchecked")
  public static final void main(final String[] args) {
    final IObjectiveFunction<RealFunction>[] f;
    final INullarySearchOperation<RealFunction> create;
    final IUnarySearchOperation<RealFunction> mutate;
    final IBinarySearchOperation<RealFunction> crossover;
    final NodeTypeSet<RealFunction> nts;
    final NodeTypeSet<RealFunction>[] binary, unary;
    final SimpleGenerationalMOEA<RealFunction, RealFunction> EA;
    MOIndividual<RealFunction, RealFunction> ind;
    List<MOIndividual<RealFunction, RealFunction>> l;
    int i;

    f = new IObjectiveFunction[] { new SRObjectiveFunction1(DATA),
        TreeSizeObjective.TREE_SIZE_OBJECTIVE };

    nts = new NodeTypeSet<RealFunction>();
    binary = new NodeTypeSet[] { nts, nts };
    unary = new NodeTypeSet[] { nts };
    nts.add(new VariableType(DATA[0].data.length));
    nts.add(ConstantType.DEFAULT_CONSTANT_TYPE);
    nts.add(new ReflectionNodeType<Add, RealFunction>(Add.class, binary));
    nts.add(new ReflectionNodeType<Sub, RealFunction>(Sub.class, binary));
    nts.add(new ReflectionNodeType<Mul, RealFunction>(Mul.class, binary));
    nts.add(new ReflectionNodeType<Div, RealFunction>(Div.class, binary));
    nts.add(new ReflectionNodeType<Pow, RealFunction>(Pow.class, binary));
    nts.add(new ReflectionNodeType<Exp, RealFunction>(Exp.class, unary));
    nts.add(new ReflectionNodeType<Abs, RealFunction>(Abs.class, unary));
    nts.add(new ReflectionNodeType<Sin, RealFunction>(Sin.class, unary));
    nts.add(new ReflectionNodeType<Sqrt, RealFunction>(Sqrt.class, unary));

    create = new TreeRampedHalfAndHalf<RealFunction>(nts, 15);
    mutate = ((IUnarySearchOperation) (new TreeMutator(15)));
    crossover = ((IBinarySearchOperation) (new TreeRecombination(15)));

    EA = new SimpleGenerationalMOEA<RealFunction, RealFunction>();
    EA.setCrossoverRate(0.5);
    EA.setMutationRate(0.5);
    EA.setComparator(Pareto.PARETO_COMPARATOR);
    EA.setFitnesAssignmentProcess(ParetoRanking.PARETO_RANKING);
    EA.setNullarySearchOperation(create);
    EA.setUnarySearchOperation(mutate);
    EA.setBinarySearchOperation(crossover);
    EA.setGPM((IGPM) (IdentityMapping.IDENTITY_MAPPING));
    EA.setSelectionAlgorithm(new TournamentSelection(2));
    EA.setTerminationCriterion(new StepLimit(1000000));
    EA.setPopulationSize(4096);
    EA.setMatingPoolSize(2048);
    EA.setObjectiveFunctions(f);

    l = EA.call();

    for (i = l.size(); (--i) >= 0;) {
      ind = l.get(i);
      System.out.println("f(p)=(" + ind.f[0] + //$NON-NLS-1$
          ", " + ind.f[1] + ")"); //$NON-NLS-1$//$NON-NLS-2$
      System.out.println("p(x)=" + ind.x.toString()); //$NON-NLS-1$
      System.out.println();
    }

  }
}
Listing 57.20: An objective function for Symbolic Regression.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package demos.org.goataa.trees.math.real.sr;

import java.util.Random;

import org.goataa.impl.OptimizationModule;
import org.goataa.impl.searchSpaces.trees.math.real.RealContext;
import org.goataa.impl.searchSpaces.trees.math.real.RealFunction;
import org.goataa.impl.utils.Constants;
import org.goataa.spec.IObjectiveFunction;

/**
 * An objective RealFunction for Symbolic Regression which evaluates a
 * program on the basis of test cases. It rates the program solely by its
 * approximation accuracy, computed as the sum of squared errors over the
 * training cases.
 *
 * @author Thomas Weise
 */
public class SRObjectiveFunction1 extends OptimizationModule implements
    IObjectiveFunction<RealFunction> {
  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the training data */
  private final TrainingCase[] tc;

  /** the real context */
  private final RealContext rc;

  /**
   * Create a new instance of the symbolic regression objective
   * RealFunction.
   *
   * @param t
   *          the training cases
   */
  public SRObjectiveFunction1(final TrainingCase[] t) {
    super();
    this.tc = t;
    this.rc = new RealContext(10000, t[0].data.length);
  }

  /**
   * Compute the objective value, i.e., determine the utility of the
   * solution candidate x as specified in
   * Definition D2.3 on page 44.
   *
   * @param x
   *          the phenotype to be rated
   * @param r
   *          the randomizer
   * @return the objective value of x, the lower the better (see
   *         Section 6.3.4 on page 106)
   */
  public double compute(final RealFunction x, final Random r) {
    TrainingCase[] y;
    int i;
    TrainingCase q;
    double res, v;
    RealContext c;

    res = 0d;
    y = this.tc;
    c = this.rc;
    for (i = y.length; (--i) >= 0;) {
      q = y[i];
      c.beginProgram();
      c.copy(q.data);
      v = (q.result - x.compute(c));
      res += (v * v);
      c.endProgram();
    }

    if (Double.isInfinite(res) || Double.isNaN(res)) {
      return Constants.WORST_OBJECTIVE;
    }

    return res;
  }

}
Listing 57.21: An objective function based on Listing 57.20, but also including tree-size
information.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package demos.org.goataa.trees.math.real.sr;

import java.util.Random;

import org.goataa.impl.searchSpaces.trees.math.real.RealFunction;

/**
 * An objective RealFunction for Symbolic Regression which evaluates a program
 * on basis of test cases. It incorporates the approximation accuracy of
 * the program into the fitness as well as its size.
 *
 * @author Thomas Weise
 */
public class SRObjectiveFunction2 extends SRObjectiveFunction1 {
  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /**
   * Create a new instance of the symbolic regression objective RealFunction.
   *
   * @param t
   *          the training cases
   */
  public SRObjectiveFunction2(final TrainingCase[] t) {
    super(t);
  }

  /**
   * Compute the objective value, i.e., determine the utility of the
   * solution candidate x as specified in Definition D2.3 on page 44.
   *
   * @param x
   *          the phenotype to be rated
   * @param r
   *          the randomizer
   * @return the objective value of x, the lower the better (see
   *         Section 6.3.4 on page 106)
   */
  @Override
  public final double compute(final RealFunction x, final Random r) {
    int i;
    double res;

    res = super.compute(x, r);

    i = Math.max(3, x.getWeight());

    return (res * Math.log(i));
  }

}
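The size penalty in SRObjectiveFunction2 multiplies the raw squared error by the logarithm of the tree size, clamped from below at 3 so that trivially small programs gain no extra advantage. A minimal sketch of just this weighting rule, independent of the book's library (the class and method names here are ours, chosen for illustration):

```java
// Sketch of the tree-size penalty used in SRObjectiveFunction2:
// the raw error is scaled by log(max(3, size)).
public class SizePenaltyDemo {

  // Combine a raw squared-error value with a program size.
  public static double sizePenalized(final double rawError, final int size) {
    return rawError * Math.log(Math.max(3, size));
  }

  public static void main(final String[] args) {
    // Programs of size 1, 2, and 3 all receive the same factor log(3).
    System.out.println(sizePenalized(2.0d, 1));
    // A larger program with the same raw error is judged worse.
    System.out.println(sizePenalized(2.0d, 100));
  }
}
```

Note that because the clamp starts at 3, the penalty factor is always at least log(3) > 1, so the penalized objective never falls below the raw error.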
Listing 57.22: A training case for Symbolic Regression.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package demos.org.goataa.trees.math.real.sr;

import org.goataa.impl.OptimizationModule;
import org.goataa.impl.utils.TextUtils;

/**
 * A training case
 *
 * @author Thomas Weise
 */
public class TrainingCase extends OptimizationModule implements
    Comparable<TrainingCase> {
  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the data */
  public final double[] data;

  /** the result */
  public final double result;

  /**
   * Create a new training case
   *
   * @param t
   *          the data
   * @param r
   *          the result
   */
  public TrainingCase(final double[] t, double r) {
    super();
    this.data = t;
    this.result = r;
  }

  /**
   * Get the string representation of this object, i.e., the name and
   * configuration.
   *
   * @param longVersion
   *          true if the long version should be returned, false if the
   *          short version should be returned
   * @return the string version of this object
   */
  @Override
  public String toString(final boolean longVersion) {
    String s;

    s = TextUtils.toString(this.data);
    s = s.substring(1, s.length() - 2);

    return "g(" + s + //$NON-NLS-1$
        ")=" + this.result;//$NON-NLS-1$
  }

  /**
   * Compare to another training case
   *
   * @param tc
   *          the other training case
   * @return the result of that comparison
   */
  public int compareTo(final TrainingCase tc) {
    double[] d1, d2;
    int i, r;

    d1 = this.data;
    d2 = tc.data;
    for (i = 0; i < d1.length; i++) {
      r = Double.compare(d1[i], d2[i]);
      if (r != 0) {
        return r;
      }
    }
    return Double.compare(this.result, tc.result);
  }
}
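The compareTo method above orders training cases lexicographically: input coordinates are compared left to right, and the first differing coordinate decides; the result value only breaks complete ties. A minimal sketch of that ordering on plain arrays (the class and method names are ours):

```java
// Sketch of the lexicographic ordering used by TrainingCase.compareTo:
// coordinates are compared left to right; the first difference decides.
public class LexicographicCompareDemo {

  // Compare two equal-length vectors lexicographically.
  public static int compare(final double[] d1, final double[] d2) {
    for (int i = 0; i < d1.length; i++) {
      final int r = Double.compare(d1[i], d2[i]);
      if (r != 0) {
        return r;
      }
    }
    return 0;
  }

  public static void main(final String[] args) {
    // negative: 1 < 2 at index 0, the later 5 > 0 is irrelevant
    System.out.println(compare(new double[] { 1d, 5d }, new double[] { 2d, 0d }));
    // zero: all coordinates equal
    System.out.println(compare(new double[] { 1d, 5d }, new double[] { 1d, 5d }));
  }
}
```

This ordering is what makes Arrays.sort in Listing 57.23 arrange the training cases by ascending input, which is convenient when printing or plotting them.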
Listing 57.23: Some example functions to match with regression.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package demos.org.goataa.trees.math.real.sr;

import java.util.Arrays;
import java.util.Random;

import org.goataa.impl.searchSpaces.trees.math.real.RealContext;
import org.goataa.impl.searchSpaces.trees.math.real.RealFunction;

/**
 * Some examples to be used for testing the Symbolic Regression (see
 * Section 49.1 on page 531) capabilities of GP.
 *
 * @author Thomas Weise
 */
public class Examples {

  /** the first example RealFunction */
  public static final ExampleFunct F1 = new ExampleFunct(1, -100, 100) {

    /**
     * compute the result
     *
     * @param d
     *          the vector
     * @return the result
     */
    @Override
    final double compute(final double[] d) {
      return Math.exp(Math.sin(d[0])) + 3d * Math.sqrt(Math.abs(d[0]));
    }
  };

  /** the second example RealFunction */
  public static final ExampleFunct F2 = new ExampleFunct(1, 0, 1) {

    /**
     * compute the result
     *
     * @param d
     *          the vector
     * @return the result
     */
    @Override
    final double compute(final double[] d) {
      return Math.pow(d[0], 0.2) - Math.sin(Math.abs(d[0]));
    }
  };

  /**
   * Create new random training cases for a RealFunction
   *
   * @param f
   *          the RealFunction
   * @param cnt
   *          the number of training cases to create
   * @return the new random training cases
   */
  public static final TrainingCase[] createTrainingCases(
      final ExampleFunct f, final int cnt) {
    double[] d;
    TrainingCase[] tc;
    int i, j;
    Random r;

    r = new Random();
    tc = new TrainingCase[cnt];
    for (i = cnt; (--i) >= 0;) {
      j = f.dim;
      d = new double[j];
      for (; (--j) >= 0;) {
        d[j] = f.min + ((f.max - f.min) * r.nextDouble());
      }
      tc[i] = new TrainingCase(d, f.compute(d));
    }
    Arrays.sort(tc);
    return tc;
  }

  /**
   * Print the training case results
   *
   * @param tc
   *          the training cases
   * @param f
   *          the RealFunction
   */
  static final void print(final TrainingCase[] tc, final RealFunction f) {
    int i, j;
    RealContext d;

    d = new RealContext(10000, tc[0].data.length);

    for (i = 0; i < tc.length; i++) {
      d.copy(tc[i].data);
      for (j = 0; j < d.getMemorySize(); j++) {
        System.out.print(String.valueOf(d.read(j)));
        System.out.print('\t');
      }
      System.out.print(String.valueOf(tc[i].result));
      System.out.print('\t');
      System.out.println(f.compute(d));
    }
  }

  /**
   * The internal class for example functions
   *
   * @author Thomas Weise
   */
  private static abstract class ExampleFunct {
    /** the dimension */
    final int dim;

    /** the minimum */
    final double min;

    /** the maximum */
    final double max;

    /**
     * Create a new example RealFunction
     *
     * @param d
     *          the dimension
     * @param mi
     *          the minimum
     * @param ma
     *          the maximum
     */
    ExampleFunct(final int d, final double mi, final double ma) {
      super();
      this.dim = d;
      this.min = mi;
      this.max = ma;
    }

    /**
     * compute the result
     *
     * @param d
     *          the vector
     * @return the result
     */
    abstract double compute(final double[] d);
  }
}
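The inner loop of createTrainingCases draws each input coordinate uniformly from the function's domain via min + (max - min) * r.nextDouble(). A small self-contained sketch of that sampling step (the class and method names are ours, not part of the book's library):

```java
import java.util.Random;

// Sketch of the uniform sampling used by Examples.createTrainingCases:
// every coordinate is drawn uniformly from the half-open interval [min, max).
public class UniformSamplingDemo {

  // Fill a vector of the given dimension with uniform values from [min, max).
  public static double[] sample(final int dim, final double min,
      final double max, final Random r) {
    final double[] d = new double[dim];
    for (int j = dim; (--j) >= 0;) {
      d[j] = min + ((max - min) * r.nextDouble());
    }
    return d;
  }

  public static void main(final String[] args) {
    // For F1's domain [-100, 100), every coordinate stays inside the interval.
    final double[] v = sample(3, -100d, 100d, new Random(42));
    for (final double x : v) {
      System.out.println(x);
    }
  }
}
```

Because nextDouble() returns values in [0, 1), the maximum itself is never produced; for training-case generation this distinction is irrelevant in practice.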
57.3 The VRP Benchmark Problem
The implementation of the benchmark problem discussed in Section 50.4.
57.3.1 The Involved Objects
All the objects involved in the benchmark problem discussed in Section 50.4.2.
Listing 57.24: An instance of this class holds the information of one location.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package demos.org.goataa.vrpBenchmark.objects;

import java.util.ArrayList;
import java.util.List;

/**
 * This class represents a location. Each customer, depot, and halting
 * point is always associated with a location.
 *
 * @author Thomas Weise
 */
public class Location extends InputObject {
  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the locations */
  public static final InputObjectList<Location> LOCATIONS = new InputObjectList<Location>(
      Location.class);

  /** the headers */
  private static final List<String> HEADERS;

  static {
    HEADERS = new ArrayList<String>();
    HEADERS.addAll(IHEADERS);
    HEADERS.add("is_depot");//$NON-NLS-1$
  }

  /** is the location a depot? */
  public final boolean isDepot;

  /**
   * Create a new location of the given id
   *
   * @param pid
   *          the id
   * @param depot
   *          true if and only if the location is a depot
   */
  public Location(final int pid, final boolean depot) {
    super(pid, LOCATIONS);
    this.isDepot = depot;
  }

  /**
   * Create a new location
   *
   * @param data
   *          the data
   */
  public Location(final String[] data) {
    super(data, LOCATIONS);
    this.isDepot = Boolean.parseBoolean(data[1]);
  }

  /** {@inheritDoc} */
  @Override
  final char getIDPrefixChar() {
    return 'L';
  }

  /**
   * Get the column headers for a csv data representation. Child classes
   * may add additional fields.
   *
   * @return the column headers for representing the data of this object in
   *         csv format
   */
  @Override
  public List<String> getCSVColumnHeader() {
    return HEADERS;
  }

  /**
   * Fill in the data of this object into a string array which can be used
   * to serialize the object in a nice way
   *
   * @param output
   *          the output
   */
  @Override
  public void fillInCSVData(final String[] output) {
    super.fillInCSVData(output);
    output[1] = String.valueOf(this.isDepot);
  }
}
Listing 57.25: An instance of this class holds the information of one order.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package demos.org.goataa.vrpBenchmark.objects;

import java.util.ArrayList;
import java.util.List;

/**
 * An order is a set of goods that must be taken from one place to another
 * one. It requires a single container and has time windows for pickup
 * and delivery as well as a start and an end location.
 *
 * @author Thomas Weise
 */
public class Order extends InputObject {
  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the list of orders */
  public static final InputObjectList<Order> ORDERS = new InputObjectList<Order>(
      Order.class);

  /** no orders */
  public static final Order[] NO_ORDERS = new Order[0];

  /** the headers */
  private static final List<String> HEADERS;

  static {
    HEADERS = new ArrayList<String>();
    HEADERS.addAll(IHEADERS);
    HEADERS.add("pickup_window_start");//$NON-NLS-1$
    HEADERS.add("pickup_window_end");//$NON-NLS-1$
    HEADERS.add("pickup_location");//$NON-NLS-1$
    HEADERS.add("delivery_window_start");//$NON-NLS-1$
    HEADERS.add("delivery_window_end");//$NON-NLS-1$
    HEADERS.add("delivery_location");//$NON-NLS-1$
  }

  /** the beginning of the pickup time */
  public final long pickupWindowStart;

  /** the ending of the pickup time */
  public final long pickupWindowEnd;

  /** the pickup location */
  public final Location pickupLocation;

  /** the beginning of the delivery time */
  public final long deliveryWindowStart;

  /** the ending of the delivery time */
  public final long deliveryWindowEnd;

  /** the delivery location */
  public final Location deliveryLocation;

  /**
   * Create a new order
   *
   * @param pid
   *          the id
   * @param ppickupWindowStart
   *          the beginning of the pickup time
   * @param ppickupWindowEnd
   *          the ending of the pickup time
   * @param ppickupLocation
   *          the pickup location
   * @param pdeliveryWindowStart
   *          the beginning of the delivery time
   * @param pdeliveryWindowEnd
   *          the ending of the delivery time
   * @param pdeliveryLocation
   *          the delivery location
   */
  public Order(final int pid, final long ppickupWindowStart,
      final long ppickupWindowEnd, final Location ppickupLocation,
      final long pdeliveryWindowStart, final long pdeliveryWindowEnd,
      final Location pdeliveryLocation) {
    super(pid, ORDERS);
    this.pickupWindowStart = ppickupWindowStart;
    this.pickupWindowEnd = ppickupWindowEnd;
    this.pickupLocation = ppickupLocation;
    this.deliveryWindowStart = pdeliveryWindowStart;
    this.deliveryWindowEnd = pdeliveryWindowEnd;
    this.deliveryLocation = pdeliveryLocation;
  }

  /**
   * Create a new order
   *
   * @param data
   *          the order data
   */
  public Order(final String[] data) {
    super(data, ORDERS);

    this.pickupWindowStart = Long.parseLong(data[1]);
    this.pickupWindowEnd = Long.parseLong(data[2]);
    this.pickupLocation = Location.LOCATIONS.find(extractId(data[3]));
    this.deliveryWindowStart = Long.parseLong(data[4]);
    this.deliveryWindowEnd = Long.parseLong(data[5]);
    this.deliveryLocation = Location.LOCATIONS.find(extractId(data[6]));
  }

  /** {@inheritDoc} */
  @Override
  final char getIDPrefixChar() {
    return 'O';
  }

  /**
   * Get the column headers for a csv data representation. Child classes
   * may add additional fields.
   *
   * @return the column headers for representing the data of this object in
   *         csv format
   */
  @Override
  public List<String> getCSVColumnHeader() {
    return HEADERS;
  }

  /**
   * Fill in the data of this object into a string array which can be used
   * to serialize the object in a nice way
   *
   * @param output
   *          the output
   */
  @Override
  public void fillInCSVData(final String[] output) {
    super.fillInCSVData(output);
    output[1] = String.valueOf(this.pickupWindowStart);
    output[2] = String.valueOf(this.pickupWindowEnd);
    output[3] = String.valueOf(this.pickupLocation.getIDString());
    output[4] = String.valueOf(this.deliveryWindowStart);
    output[5] = String.valueOf(this.deliveryWindowEnd);
    output[6] = String.valueOf(this.deliveryLocation.getIDString());
  }
}
Listing 57.26: An instance of this class holds the information of one truck.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package demos.org.goataa.vrpBenchmark.objects;

import java.util.ArrayList;
import java.util.List;

/**
 * A truck is a car that can transport containers.
 *
 * @author Thomas Weise
 */
public class Truck extends InputObject {
  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the trucks */
  public static final InputObjectList<Truck> TRUCKS = new InputObjectList<Truck>(
      Truck.class);

  /** the headers */
  private static final List<String> HEADERS;

  static {
    HEADERS = new ArrayList<String>();
    HEADERS.addAll(IHEADERS);
    HEADERS.add("start_location");//$NON-NLS-1$
    HEADERS.add("max_containers");//$NON-NLS-1$
    HEADERS.add("max_drivers");//$NON-NLS-1$
    HEADERS.add("costs_per_h_km");//$NON-NLS-1$
    HEADERS.add("avg_speed_km_h");//$NON-NLS-1$
  }

  /** the starting location */
  public final Location start;

  /** the maximum number of containers this truck can transport */
  public final int maxContainers;

  /** the maximum number of drivers */
  public final int maxDrivers;

  /** the costs per hour and kilometer in cent */
  public final int costsPerHKm;

  /** the average speed in km/h */
  public final int avgSpeedKmH;

  /**
   * Create a new truck
   *
   * @param pid
   *          the id
   * @param lstart
   *          the starting location
   * @param maxCont
   *          the maximum number of containers this truck can transport
   * @param maxDriv
   *          the maximum number of drivers that can sit in this truck
   * @param costsHKM
   *          the costs per hour and kilometer in cent
   * @param avgSpeed
   *          the average speed in km/h
   */
  public Truck(final int pid, final Location lstart, final int maxCont,
      final int maxDriv, final int costsHKM, final int avgSpeed) {
    super(pid, TRUCKS);
    this.start = lstart;
    this.maxContainers = maxCont;
    this.maxDrivers = maxDriv;
    this.costsPerHKm = costsHKM;
    this.avgSpeedKmH = avgSpeed;
  }

  /**
   * Create a new truck
   *
   * @param data
   *          the data
   */
  public Truck(final String[] data) {
    super(data, TRUCKS);
    this.start = Location.LOCATIONS.find(InputObject.extractId(data[1]));
    this.maxContainers = Integer.parseInt(data[2]);
    this.maxDrivers = Integer.parseInt(data[3]);
    this.costsPerHKm = Integer.parseInt(data[4]);
    this.avgSpeedKmH = Integer.parseInt(data[5]);
  }

  /** {@inheritDoc} */
  @Override
  final char getIDPrefixChar() {
    return 'T';
  }

  /**
   * Get the column headers for a csv data representation. Child classes
   * may add additional fields.
   *
   * @return the column headers for representing the data of this object in
   *         csv format
   */
  @Override
  public List<String> getCSVColumnHeader() {
    return HEADERS;
  }

  /**
   * Fill in the data of this object into a string array which can be used
   * to serialize the object in a nice way
   *
   * @param output
   *          the output
   */
  @Override
  public void fillInCSVData(final String[] output) {
    super.fillInCSVData(output);
    output[1] = this.start.getIDString();
    output[2] = String.valueOf(this.maxContainers);
    output[3] = String.valueOf(this.maxDrivers);
    output[4] = String.valueOf(this.costsPerHKm);
    output[5] = String.valueOf(this.avgSpeedKmH);
  }
}
Listing 57.27: An instance of this class holds the information of one container.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package demos.org.goataa.vrpBenchmark.objects;

import java.util.ArrayList;
import java.util.List;

/**
 * A container is an object which can be used to package one order.
 *
 * @author Thomas Weise
 */
public class Container extends InputObject {
  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the containers */
  public static final InputObjectList<Container> CONTAINERS = new InputObjectList<Container>(
      Container.class);

  /** no containers */
  public static final Container[] NO_CONTAINERS = new Container[0];

  /** the headers */
  private static final List<String> HEADERS;

  static {
    HEADERS = new ArrayList<String>();
    HEADERS.addAll(IHEADERS);
    HEADERS.add("start");//$NON-NLS-1$
  }

  /** the start location */
  public final Location start;

  /**
   * Create a new container
   *
   * @param pid
   *          the id
   * @param lstart
   *          the start location
   */
  public Container(final int pid, final Location lstart) {
    super(pid, CONTAINERS);
    this.start = lstart;
  }

  /**
   * Create a new container
   *
   * @param data
   *          the data
   */
  public Container(final String[] data) {
    super(data, CONTAINERS);
    this.start = Location.LOCATIONS.find(InputObject.extractId(data[1]));
  }

  /** {@inheritDoc} */
  @Override
  final char getIDPrefixChar() {
    return 'C';
  }

  /**
   * Get the column headers for a csv data representation. Child classes
   * may add additional fields.
   *
   * @return the column headers for representing the data of this object in
   *         csv format
   */
  @Override
  public List<String> getCSVColumnHeader() {
    return HEADERS;
  }

  /**
   * Fill in the data of this object into a string array which can be used
   * to serialize the object in a nice way
   *
   * @param output
   *          the output
   */
  @Override
  public void fillInCSVData(final String[] output) {
    super.fillInCSVData(output);
    output[1] = this.start.getIDString();
  }
}
Listing 57.28: An instance of this class holds the information of one driver.
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package demos.org.goataa.vrpBenchmark.objects;

import java.util.ArrayList;
import java.util.List;

/**
 * A driver is a person that can drive a truck.
 *
 * @author Thomas Weise
 */
public class Driver extends InputObject {
  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the drivers */
  public static final InputObjectList<Driver> DRIVERS = new InputObjectList<Driver>(
      Driver.class);

  /** no drivers */
  public static final Driver[] NO_DRIVERS = new Driver[0];

  /** the headers */
  private static final List<String> HEADERS;

  static {
    HEADERS = new ArrayList<String>();
    HEADERS.addAll(IHEADERS);
    HEADERS.add("home");//$NON-NLS-1$
    HEADERS.add("costs_per_day");//$NON-NLS-1$
  }

  /** the maximum driving time */
  public static int MAX_DRIVING_TIME = 8 * 60 * 60 * 1000;

  /** the minimum break time */
  public static int MIN_BREAK_TIME = 6 * 60 * 60 * 1000;

  /** the home location */
  public final Location home;

  /** the costs for any day during which the driver works */
  public final int costsPerDay;

  /**
   * Create a new driver
   *
   * @param pid
   *          the id
   * @param lhome
   *          the home location
   * @param costs
   *          the costs per day
   */
  public Driver(final int pid, final Location lhome, final int costs) {
    super(pid, DRIVERS);
    this.home = lhome;
    this.costsPerDay = costs;
  }

  /**
   * Create a new driver
   *
   * @param data
   *          the data
   */
  public Driver(final String[] data) {
    super(data, DRIVERS);
    this.home = Location.LOCATIONS.find(InputObject.extractId(data[1]));
    this.costsPerDay = Integer.parseInt(data[2]);
  }

  /** {@inheritDoc} */
  @Override
  final char getIDPrefixChar() {
    return 'D';
  }

  /**
   * Get the column headers for a csv data representation. Child classes
   * may add additional fields.
   *
   * @return the column headers for representing the data of this object in
   *         csv format
   */
  @Override
  public List<String> getCSVColumnHeader() {
    return HEADERS;
  }

  /**
   * Fill in the data of this object into a string array which can be used
   * to serialize the object in a nice way
   *
   * @param output
   *          the output
   */
  @Override
  public void fillInCSVData(final String[] output) {
    super.fillInCSVData(output);
    output[1] = this.home.getIDString();
    output[2] = String.valueOf(this.costsPerDay);
  }
}
Listing 57.29: This class holds the complete information about the distances between locations.
/*
 * Copyright (c) 2010 Thomas Weise
 * http://www.it-weise.de/
 * tweise@gmx.de
 *
 * GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
 */

package demos.org.goataa.vrpBenchmark.objects;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.Writer;

import org.goataa.impl.utils.TextUtils;

/**
 * A distance matrix calculates the distance between two locations
 *
 * @author Thomas Weise
 */
public class DistanceMatrix extends TextSerializable {
  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the globally shared distance matrix instance */
  public static final DistanceMatrix MATRIX = new DistanceMatrix();

  /** the matrix */
  private int[][] matrix;

  /**
   * Create a new distance matrix
   */
  public DistanceMatrix() {
    super();
  }

  /**
   * Initialize to a given matrix
   *
   * @param m
   *          the matrix
   */
  public void init(final int[][] m) {
    int[] drow;
    int i, j;

    this.matrix = m;

    if (m != null) {
      i = m.length;

      for (; (--i) >= 0;) {
        drow = m[i];

        for (j = drow.length; (--j) >= 0;) {
          if (i != j) {
            drow[j] = Math.max(50, (50 * ((drow[j] + 49) / 50)));
          } else {
            drow[j] = 0;
          }
        }
      }
    }
  }

  /**
   * Obtain the distance between two locations
   *
   * @param l1
   *          the first location
   * @param l2
   *          the second location
   * @return the distance in meters
   */
  public final int getDistance(final Location l1, final Location l2) {
    if (l1 == l2) {
      return 0;
    }
    return this.matrix[l1.id][l2.id];
  }

  /**
   * Get the time needed by a given truck for getting from l1 to l2. We
   * assume that a truck will drive in average with 66% of its top speed.
   *
   * @param l1
   *          the first location
   * @param l2
   *          the second location
   * @param t
   *          the truck
   * @return the time needed, in milliseconds
   */
  public final int getTimeNeeded(final Location l1, final Location l2,
      final Truck t) {

    if (l1 == l2) {
      return 0;
    }

    return (int) (0.5d + (3600 * (this.matrix[l1.id][l2.id] / t.avgSpeedKmH)));
  }

  /**
   * Serialize to a writer
   *
   * @param w
   *          the writer
   * @throws IOException
   *           if anything goes wrong
   */
  @Override
  public void serialize(final Writer w) throws IOException {
    int[][] ma;
    int[] l;
    int i, j;

    ma = this.matrix;
    for (i = 0; i < ma.length; i++) {
      if (i > 0) {
        w.write(TextUtils.NEWLINE);
      }
      l = ma[i];
      for (j = 0; j < l.length; j++) {
        if (j > 0) {
          w.write('\t');
        }
        w.write(String.valueOf(l[j]));
      }
    }
  }

  /**
   * Deserialize from a buffered reader
   *
   * @param r
   *          the reader
   * @throws IOException
   *           if something goes wrong
   */
  @Override
  public void deserialize(final BufferedReader r) throws IOException {
    String[] s;
    String l;
    int[][] ma;
    int[] q;
    int v, i, j;

    l = r.readLine();
    s = TransportationObjectList.SPLIT.split(l);

    v = s.length;
    this.matrix = ma = new int[v][v];
    for (i = 0; i < v; i++) {
      q = ma[i];
      for (j = 0; j < v; j++) {
        q[j] = Integer.parseInt(s[j]);
      }
      if (i < (v - 1)) {
        s = TransportationObjectList.SPLIT.split(r.readLine());
      }
    }

  }
}
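The init method above normalizes the raw distance data: every off-diagonal entry is rounded up to the next multiple of 50 meters and never falls below 50 meters, while diagonal entries become 0. A minimal sketch of just this rounding rule (the class and method names are ours, chosen for illustration):

```java
// Sketch of the distance normalization in DistanceMatrix.init:
// off-diagonal distances are rounded UP to the next multiple of 50 meters,
// with 50 m as a lower bound.
public class DistanceRoundingDemo {

  // Apply the rounding rule to a single off-diagonal distance in meters.
  public static int roundDistance(final int meters) {
    // (meters + 49) / 50 is integer division, i.e., ceil(meters / 50)
    return Math.max(50, (50 * ((meters + 49) / 50)));
  }

  public static void main(final String[] args) {
    System.out.println(roundDistance(1));   // -> 50
    System.out.println(roundDistance(123)); // -> 150
    System.out.println(roundDistance(200)); // -> 200
  }
}
```

Coarsening the distances this way reduces the number of distinct values the optimizer has to discriminate between without changing the relative length of the routes in any significant way.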
57.3.2 The Optimization Utilities
The objective value computation utility classes from Section 50.4.4 and the candidate solu-
tion element according to Section 50.4.3.
Listing 57.30: An object which holds a transportation move.
1 // Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
2 // GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)
3
4 package demos.org.goataa.vrpBenchmark.optimization;
5
6 import java.util.ArrayList;
7 import java.util.List;
8 import java.util.regex.Pattern;
9
10 import demos.org.goataa.vrpBenchmark.objects.Container;
11 import demos.org.goataa.vrpBenchmark.objects.Driver;
12 import demos.org.goataa.vrpBenchmark.objects.InputObject;
13 import demos.org.goataa.vrpBenchmark.objects.Location;
926 57 DEMOS
14 import demos.org.goataa.vrpBenchmark.objects.Order;
15 import demos.org.goataa.vrpBenchmark.objects.TransportationObject;
16 import demos.org.goataa.vrpBenchmark.objects.Truck;
17
18 /
**
19
*
A move
20
*
21
*
@author Thomas Weise
22
*
/
23 public class Move extends TransportationObject {
24
25 /
**
a constant required by Java serialization
*
/
26 private static final long serialVersionUID = 1;
27
28 /
**
no moves
*
/
29 public static final Move[] NO_MOVES = new Move[0];
30
31 /
**
split
*
/
32 static final Pattern SPLIT = Pattern.compile("[,]+"); //NON NLS 1
33
34 /
**
the headers
*
/
35 private static final List<String> HEADERS;
36
37 static {
38 HEADERS = new ArrayList<String>();
39 HEADERS.add("truck");//NON NLS 1
40 HEADERS.add("orders"); //NON NLS 1
41 HEADERS.add("drivers");//NON NLS 1
42 HEADERS.add("containers");//NON NLS 1
43 HEADERS.add("from");//NON NLS 1
44 HEADERS.add("start_time");//NON NLS 1
45 HEADERS.add("to");//NON NLS 1
46 }
47
48 /
**
the truck
*
/
49 public final Truck truck;
50
51 /
**
the involved orders
*
/
52 public final Order[] orders;
53
54 /
**
the involved drivers
*
/
55 public final Driver[] drivers;
56
57 /
**
the involved containers
*
/
58 public final Container[] containers;
59
60 /
**
the starting location
*
/
61 public final Location from;
62
63 /
**
the start time
*
/
64 public final long startTime;
65
66 /
**
the destination location
*
/
67 public final Location to;
68
69 /
**
70
*
Create a new freight object
71
*
72
*
@param t
73
*
the truck
74
*
@param o
75
*
the orders
76
*
@param d
77
*
the drivers
78
*
@param c
79
*
the container
80
*
@param lfrom
57.3. THE VRP BENCHMARK PROBLEM 927
81
*
the starting location
82
*
@param lstartTime
83
*
the start time
84
*
@param lto
85
*
the destination location
86
*
/
87 public Move(final Truck t, final Order[] o, final Driver[] d,
88 final Container[] c, final Location lfrom, long lstartTime,
89 final Location lto) {
90 super();
91
92 this.truck = t;
93 this.orders = (((o != null) && (o.length > 0)) ? o : Order.NO_ORDERS);
94 this.drivers = (((d != null) && (d.length > 0)) ? d
95 : Driver.NO_DRIVERS);
96 this.containers = (((c != null) && (c.length > 0)) ? c
97 : Container.NO_CONTAINERS);
98 this.from = lfrom;
99 this.startTime = lstartTime;
100 this.to = lto;
101 }
102
103 /
**
104
*
Create a move from a textual representation
105
*
106
*
@param s
107
*
the strings
108
*
/
109 public Move(final String[] s) {
110 this(Truck.TRUCKS.find(InputObject.extractId(s[0])), getOrders(s[1]),
111 getDrivers(s[2]), getContainers(s[3]), Location.LOCATIONS
112 .find(InputObject.extractId(s[4])), Long.parseLong(s[5]),
113 Location.LOCATIONS.find(InputObject.extractId(s[4])));
114 }
  /**
   * Get the drivers
   *
   * @param s
   *          the strings
   * @return the drivers
   */
  private static final Driver[] getDrivers(final String s) {
    int i, l;
    boolean b;
    String q;
    String[] qs;
    Driver[] o;

    if ((s == null) || ((l = (s.length())) <= 0)) {
      return Driver.NO_DRIVERS;
    }

    i = 0;
    b = false;
    if (s.charAt(0) == '(') {
      i++;
      b = true;
    }
    if (s.charAt(l - 1) == ')') {
      l--;
      b = true;
    }
    if (b) {
      q = s.substring(i, l);
    } else {
      q = s;
    }

    qs = SPLIT.split(q);
    if ((qs == null) || ((l = qs.length) <= 0)) {
      return Driver.NO_DRIVERS;
    }
    o = new Driver[l];
    for (i = 0; i < l; i++) {
      o[i] = Driver.DRIVERS.find(InputObject.extractId(qs[i]));
    }

    return o;
  }
  /**
   * Get the orders
   *
   * @param s
   *          the strings
   * @return the orders
   */
  private static final Order[] getOrders(final String s) {
    int i, l;
    boolean b;
    String q;
    String[] qs;
    Order[] o;

    if ((s == null) || ((l = (s.length())) <= 0)) {
      return Order.NO_ORDERS;
    }

    i = 0;
    b = false;
    if (s.charAt(0) == '(') {
      i++;
      b = true;
    }
    if (s.charAt(l - 1) == ')') {
      l--;
      b = true;
    }
    if (b) {
      q = s.substring(i, l);
    } else {
      q = s;
    }

    qs = SPLIT.split(q);
    if ((qs == null) || ((l = qs.length) <= 0)) {
      return Order.NO_ORDERS;
    }
    o = new Order[l];
    for (i = 0; i < l; i++) {
      o[i] = Order.ORDERS.find(InputObject.extractId(qs[i]));
    }

    return o;
  }
  /**
   * Get the containers
   *
   * @param s
   *          the strings
   * @return the containers
   */
  private static final Container[] getContainers(final String s) {
    int i, l;
    boolean b;
    String q;
    String[] qs;
    Container[] o;

    if ((s == null) || ((l = (s.length())) <= 0)) {
      return Container.NO_CONTAINERS;
    }

    i = 0;
    b = false;
    if (s.charAt(0) == '(') {
      i++;
      b = true;
    }
    if (s.charAt(l - 1) == ')') {
      l--;
      b = true;
    }
    if (b) {
      q = s.substring(i, l);
    } else {
      q = s;
    }

    qs = SPLIT.split(q);
    if ((qs == null) || ((l = qs.length) <= 0)) {
      return Container.NO_CONTAINERS;
    }
    o = new Container[l];
    for (i = 0; i < l; i++) {
      o[i] = Container.CONTAINERS.find(InputObject.extractId(qs[i]));
    }

    return o;
  }
  /**
   * Get the column headers for a csv data representation. Child classes
   * may add additional fields.
   *
   * @return the column headers for representing the data of this object in
   *         csv format
   */
  @Override
  public List<String> getCSVColumnHeader() {
    return HEADERS;
  }
  /**
   * Convert an object array to a string
   *
   * @param src
   *          the source
   * @return the string
   */
  private static final String toString(final InputObject[] src) {
    StringBuffer sb;
    int i;
    boolean b;
    InputObject x;

    sb = new StringBuffer();
    sb.append('(');
    b = false;

    for (i = 0; i < src.length; i++) {

      x = src[i];
      if (x != null) {
        if (b) {
          sb.append(',');
        }
        sb.append(x.getIDString());
        b = true;
      }

    }

    sb.append(')');
    return sb.toString();
  }
  /**
   * Fill in the data of this object into a string array which can be used
   * to serialize the object in a nice way
   *
   * @param output
   *          the output
   */
  @Override
  public void fillInCSVData(final String[] output) {
    output[0] = this.truck.getIDString();
    output[1] = toString(this.orders);
    output[2] = toString(this.drivers);
    output[3] = toString(this.containers);
    output[4] = this.from.getIDString();
    output[5] = String.valueOf(this.startTime);
    output[6] = this.to.getIDString();
  }
  /**
   * Does this move use the given container?
   *
   * @param c
   *          the container
   * @return true if the move uses the container, false otherwise
   */
  public final boolean usesContainer(final Container c) {
    int i;

    for (i = this.containers.length; (--i) >= 0;) {
      if (this.containers[i] == c) {
        return true;
      }
    }

    return false;
  }

  /**
   * Does this move use the given driver?
   *
   * @param c
   *          the driver
   * @return true if the move uses the driver, false otherwise
   */
  public final boolean usesDriver(final Driver c) {
    int i;

    for (i = this.drivers.length; (--i) >= 0;) {
      if (this.drivers[i] == c) {
        return true;
      }
    }

    return false;
  }
}
Listing 57.31: An object which can be used to compute the features of a candidate solution
of the benchmark problem.
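The three private parsers in the listing above (getDrivers, getOrders, and getContainers) share one pattern: strip an optional pair of enclosing parentheses, split the remainder on commas, and resolve each token to an object. The following is a minimal standalone sketch of just that parsing step, outside the book's class hierarchy; the names ListParser and parse are illustrative, and the comma pattern stands in for the listing's SPLIT constant. The real helpers additionally resolve each token via InputObject.extractId and the global registries.

```java
import java.util.Arrays;
import java.util.regex.Pattern;

/** Standalone sketch of the parenthesized-ID-list parsing used above. */
public final class ListParser {

  /** stands in for the listing's SPLIT constant: split on commas */
  private static final Pattern SPLIT = Pattern.compile(",");

  /** Strip an optional enclosing "(...)", then split the rest on commas. */
  static String[] parse(final String s) {
    if ((s == null) || (s.length() <= 0)) {
      return new String[0]; // mirrors returning NO_DRIVERS etc.
    }
    int i = 0;
    int l = s.length();
    if (s.charAt(0) == '(') {
      i++;
    }
    if (s.charAt(l - 1) == ')') {
      l--;
    }
    final String q = s.substring(i, l);
    if (q.isEmpty()) {
      return new String[0];
    }
    return SPLIT.split(q);
  }

  public static void main(final String[] args) {
    System.out.println(Arrays.toString(parse("(d1,d2,d3)"))); // prints [d1, d2, d3]
    System.out.println(Arrays.toString(parse("o7")));         // prints [o7]
  }
}
```

Because both the parenthesized and the bare form are accepted, the parser can read back exactly what toString(InputObject[]) wrote, as well as single unwrapped IDs.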
// Copyright (c) 2010 Thomas Weise (http://www.it-weise.de/, tweise@gmx.de)
// GNU LESSER GENERAL PUBLIC LICENSE (Version 2.1, February 1999)

package demos.org.goataa.vrpBenchmark.optimization;

import java.io.Serializable;
import java.util.Arrays;
import java.util.Comparator;

import demos.org.goataa.vrpBenchmark.objects.Container;
import demos.org.goataa.vrpBenchmark.objects.DistanceMatrix;
import demos.org.goataa.vrpBenchmark.objects.Driver;
import demos.org.goataa.vrpBenchmark.objects.Location;
import demos.org.goataa.vrpBenchmark.objects.Order;
import demos.org.goataa.vrpBenchmark.objects.TransportationObjectList;
import demos.org.goataa.vrpBenchmark.objects.Truck;

/**
 * This class allows us to compute the state of the world and, hence, the
 * objective values. The initial state of the world is t=0 and all objects
 * are at their starting locations. Then, we can make moves. Every move
 * consists of a truck driving from one location to another. By doing so,
 * it may carry objects (containers, orders) and is driven by at least one
 * driver (but may carry more). After the move, the involved objects are at
 * their new locations.
 *
 * @author Thomas Weise
 */
public class Compute extends PerformanceValues {
  /** a constant required by Java serialization */
  private static final long serialVersionUID = 1;

  /** the internal comparator */
  private static final Comparator<Move> MOVE_COMPARATOR = new CMP();

  /** the drivers */
  private final DriverTrack[] drivers;

  /** the trucks */
  private final TruckTrack[] trucks;

  /** the container */
  private final Track[] containers;

  /** the order */
  private final Track[] orders;

  /** the temporary move list */
  private transient Move[] temp;

  /** Create a new compute instance */
  public Compute() {
    super();

    Track[] t;
    DriverTrack[] x;
    TruckTrack[] y;
    int i;

    i = Truck.TRUCKS.size();
    this.trucks = y = new TruckTrack[i];
    for (; (--i) >= 0;) {
      y[i] = new TruckTrack();
    }

    i = Driver.DRIVERS.size();
    this.drivers = x = new DriverTrack[i];
    for (; (--i) >= 0;) {
      x[i] = new DriverTrack();
    }

    i = Container.CONTAINERS.size();
    this.containers = t = new Track[i];
    for (; (--i) >= 0;) {
      t[i] = new Track();
    }

    i = Order.ORDERS.size();
    this.orders = t = new Track[i];
    for (; (--i) >= 0;) {
      t[i] = new Track();
    }

    this.temp = null;
  }
  /**
   * Initialize the computer: reset all variables and move all objects to
   * their default locations
   */
  @Override
  public final void init() {
    Track[] ts;
    DriverTrack[] dts;
    TruckTrack[] tts;
    TruckTrack q;
    Track t;
    DriverTrack dt;
    Truck a;
    Driver b;
    Container c;
    Order o;
    int i;

    super.init();

    tts = this.trucks;
    for (i = tts.length; (--i) >= 0;) {
      q = tts[i];
      a = Truck.TRUCKS.get(i);
      q.location = a.start;
      q.time = 0L;
      q.orders = null;
      q.lastTime = -1L;
    }

    dts = this.drivers;
    for (i = dts.length; (--i) >= 0;) {
      dt = dts[i];
      b = Driver.DRIVERS.get(i);
      dt.location = b.home;
      dt.time = 0L;
      dt.lastDay = -1;
    }

    ts = this.containers;
    for (i = ts.length; (--i) >= 0;) {
      t = ts[i];
      c = Container.CONTAINERS.get(i);
      t.location = c.start;
      t.time = 0L;
    }

    ts = this.orders;
    for (i = ts.length; (--i) >= 0;) {
      t = ts[i];
      o = Order.ORDERS.get(i);
      t.location = o.pickupLocation;
      t.time = o.pickupWindowStart;
    }

  }
  /**
   * Make a move: Truck t drives from the location "from" to the location
   * "to" at the given start time. It carries the drivers "d", the orders
   * "o", and the containers "c".
   *
   * @param t
   *          the truck
   * @param o
   *          the orders
   * @param d
   *          the drivers
   * @param c
   *          the container
   * @param from
   *          the starting location
   * @param startTime
   *          the start time
   * @param to
   *          the destination location
   */
  public final void move(final Truck t, final Order[] o, final Driver[] d,
      final Container[] c, final Location from, long startTime,
      final Location to) {
    final Track[] lcontainers, lorders;
    final DriverTrack[] ldrivers;
    Track track;
    final TruckTrack truck;
    Container cc;
    Location startLoc;
    long start, end, m, timeNeeded, lastTime;
    int totalContainers, totalOrders, totalDrivers;
    int i, ed, sd, td;
    Order oo;
    Driver dd;
    DriverTrack dt;
    Order[] o2;

    this.clearNew();

    ldrivers = this.drivers;
    lcontainers = this.containers;
    lorders = this.orders;

    // move the truck
    startLoc = from;
    start = startTime;

    if (t != null) {
      truck = this.trucks[t.id];
      lastTime = truck.lastTime;

      // is the truck at the right location?
      startLoc = truck.location;
      if (startLoc != from) {
        this.addTruckError(1);
      }

      // is it there at the right time?
      if (truck.time > start) {
        this.addTruckError(1);
        start = truck.time;
      }

      // move the truck, if the start location is different from the end
      if (startLoc != to) {
        // compute the time to travel
        m = DistanceMatrix.MATRIX.getDistance(startLoc, to);
        timeNeeded = DistanceMatrix.MATRIX.getTimeNeeded(startLoc, to, t);

        this.addTruckCosts((long) (0.5d
            + ((t.costsPerHKm * timeNeeded * m) / 3600000000.0d)));

        end = (start + timeNeeded);
        truck.location = to;
      } else {
        end = (start + 1L);
        this.addTruckCosts(1000L);
      }

      truck.time = end;
    } else {
      // no truck? -> error
      this.addTruckError(1);
      end = start + 10000000L;
      this.addTruckCosts(1000L);
      truck = null;
      lastTime = -1L;
    }

    // move all containers
    totalContainers = 0;
    if (c != null) {
      for (i = c.length; (--i) >= 0;) {
        cc = c[i];

        // aha, a container
        if (cc != null) {
          totalContainers++;
          track = lcontainers[cc.id];

          // check the current location of the container
          if (track.location != startLoc) {
            this.addContainerError(1);
          }

          // check the time at which the container is there
          if (track.time > start) {
            this.addContainerError(1);
          }

          // move the container
          track.location = to;
          track.time = end;
        }
      }

      // are there too many containers?
      i = (totalContainers - ((t == null) ? 0 : t.maxContainers));
      if (i > 0) {
        this.addContainerError(i);
      }
    }

    // move all orders
    totalOrders = 0;
    if (o != null) {
      for (i = o.length; (--i) >= 0;) {
        oo = o[i];

        // aha, an order
        if (oo != null) {
          totalOrders++;
          track = lorders[oo.id];

          // check the current location of the order
          if (track.location != startLoc) {
            this.addOrderError(1);
          }

          // is the order at the starting location?
          if ((track.location == oo.pickupLocation)
              && (track.time <= oo.pickupWindowStart)) {

            if (lastTime >= 0L) {
              m = windowIntersection(oo.pickupWindowStart,
                  oo.pickupWindowEnd, lastTime, start);
            } else {
              m = start;
            }

            m = Compute.windowViolation(oo.pickupWindowStart,
                oo.pickupWindowEnd, m);

            if (m > 0L) {
              this.addPickupErrorTime(m);
              this.addOrderError(1);
            }
          } else {
            // check the time at which the order is there
            if (track.time > start) {
              this.addOrderError(1);
            }
          }

          // move the order
          track.location = to;
          track.time = end;
        }
      }
    }

    // now let us check if delivery took place in the time windows by
    // possibly correcting drop-off times. It could be that a truck arrives
    // before the delivery time window but leaves after it
    if (truck != null) {

      if (lastTime >= 0L) {
        o2 = truck.orders;
        if (o2 != null) {
          // for each order
          for (i = o2.length; (--i) >= 0;) {
            oo = o2[i];
            if (oo != null) {
              // get the tracking object
              track = lorders[oo.id];
              // is the order still at the place where this truck dropped
              // it off and is the order at the delivery location?
              if ((track.time == lastTime) && (track.location == startLoc)
                  && (track.location == oo.deliveryLocation)) {
                track.time = windowIntersection(oo.deliveryWindowStart,
                    oo.deliveryWindowEnd, lastTime, start);
              }
            }
          }
        }
      }

      truck.orders = o;
      truck.lastTime = end;
    }

    // are there more orders than containers?
    if (totalOrders > totalContainers) {
      this.addContainerError((totalOrders - totalContainers));
    }

    // move all drivers
    totalDrivers = 0;
    if (d != null) {
      sd = ((int) (start / 86400000));
      ed = ((int) (end / 86400000));
      td = (ed - sd + 1);

      for (i = d.length; (--i) >= 0;) {
        dd = d[i];

        // aha, a driver
        if (dd != null) {
          totalDrivers++;
          dt = ldrivers[dd.id];

          // check the current location of the driver
          if (dt.location != startLoc) {
            this.addDriverError(1);
          }

          // check the time at which the driver is there
          if (dt.time > start) {
            this.addDriverError(1);
          }

          // compute costs of that driver: each day, he receives a salary
          if (ed > dt.lastDay) {
            dt.lastDay = ed;
            this.addDriverCosts((td * dd.costsPerDay));
          }

          // move the driver
          dt.location = to;
          dt.time = end;
        }
      }
    }

    // no driver? that's an error
    if (totalDrivers <= 0) {
      this.addDriverError(1);
    }

    this.commitNew();
  }
  /**
   * Compute the time window violation.
   *
   * @param ws
   *          the window start
   * @param we
   *          the window end
   * @param t
   *          the time
   * @return the window violation
   */
  private static final long windowViolation(final long ws, final long we,
      final long t) {
    if (t < ws) {
      return (ws - t);
    }
    if (t > we) {
      return (t - we);
    }
    return 0L;
  }

  /**
   * Get the best window intersection time
   *
   * @param w1s
   *          the start of the first window
   * @param w1e
   *          the end of the first window
   * @param w2s
   *          the start of the second window
   * @param w2e
   *          the end of the second window
   * @return the earliest/best time within the second window which creates
   *         the smallest violation of the first window
   */
  private static final long windowIntersection(final long w1s,
      final long w1e, final long w2s, final long w2e) {

    if ((w2e >= w1s) && (w2s <= w1e)) {
      return Math.max(w1s, w2s);
    }

    if (w2e < w1s) {
      return w2e;
    }

    if (w2s > w1e) {
      return w2s;
    }

    throw new RuntimeException();
  }
  /**
   * Finalize the computation, i.e., check if everything is placed at
   * positions to which it belongs.
   */
  public final void finish() {
    final Track[] ltrucks, lcontainers, lorders;
    final DriverTrack[] ldrivers;
    Track track;
    int i;
    Order oo;
    long y;

    this.clearNew();

    // check the trucks
    ltrucks = this.trucks;
    for (i = ltrucks.length; (--i) >= 0;) {
      track = ltrucks[i];
      // all trucks need to be at a depot
      if (!(track.location.isDepot)) {
        this.addTruckError(1);
      }
    }

    // check the drivers
    ldrivers = this.drivers;
    for (i = ldrivers.length; (--i) >= 0;) {
      track = ldrivers[i];
      // all drivers need to be home at the end of the jobs
      if (track.location != Driver.DRIVERS.get(i).home) {
        this.addDriverError(1);
      }
    }

    // check the containers
    lcontainers = this.containers;
    for (i = lcontainers.length; (--i) >= 0;) {
      track = lcontainers[i];
      // all containers need to be at a depot
      if (!(track.location.isDepot)) {
        this.addContainerError(1);
      }
    }

    // check the orders
    lorders = this.orders;
    for (i = lorders.length; (--i) >= 0;) {
      track = lorders[i];
      oo = Order.ORDERS.get(i);
      // all orders need to be at their destination
      if (track.location != oo.deliveryLocation) {
        this.addOrderError(1);
      }
      y = windowViolation(oo.deliveryWindowStart, oo.deliveryWindowEnd,
          track.time);
      if (y > 0) {
        this.addOrderError(1);
        this.addDeliveryErrorTime(y);
      }
    }

    this.commitNew();
  }
  /**
   * Make a move
   *
   * @param m
   *          the move
   */
  public final void move(final Move m) {
    if (m != null) {
      this.move(m.truck, m.orders, m.drivers, m.containers, m.from,
          m.startTime, m.to);
    }
  }

  /**
   * Do some moves internally
   *
   * @param moves
   *          the moves
   * @param len
   *          the length
   */
  private final void move(final Move[] moves, final int len) {
    int i;

    Arrays.sort(moves, 0, len, MOVE_COMPARATOR);
    for (i = len; (--i) >= 0;) {
      this.move(moves[i]);
    }
  }

  /**
   * Get the internal move array
   *
   * @param len
   *          the length
   * @return the move array
   */
  private final Move[] getMoves(final int len) {
    Move[] m;

    m = this.temp;
    if ((m == null) || (m.length < len)) {
      return (this.temp = new Move[len]);
    }
    return m;
  }

  /**
   * Make the given moves
   *
   * @param ms
   *          the move list
   */
  public final void move(final TransportationObjectList<Move> ms) {
    int i, s;
    Move[] m;

    if (ms != null) {
      s = ms.size();
      if (s > 0) {
        m = this.getMoves(s);
        for (i = s; (--i) >= 0;) {
          m[i] = ms.get(i);
        }
        this.move(m, s);
      }
    }
  }

  /**
   * Make the given moves
   *
   * @param ms
   *          the moves
   */
  public final void move(final Move[] ms) {
    Move[] m;
    int i;

    if (ms != null) {
      i = ms.length;
      if (i > 0) {
        m = this.getMoves(i);
        System.arraycopy(ms, 0, m, 0, i);
        this.move(m, i);
      }
    }
  }
  /**
   * Get the location where the given container was placed last
   *
   * @param c
   *          the container
   * @return the location where the given container was placed last
   */
  public final Location getLocation(final Container c) {
    return this.containers[c.id].location;
  }

  /**
   * Get the location where the given truck was placed last
   *
   * @param c
   *          the truck
   * @return the location where the given truck was placed last
   */
  public final Location getLocation(final Truck c) {
    return this.trucks[c.id].location;
  }

  /**
   * Get the location where the given driver was placed last
   *
   * @param c
   *          the driver
   * @return the location where the given driver was placed last
   */
  public final Location getLocation(final Driver c) {
    return this.drivers[c.id].location;
  }

  /**
   * Get the location where the given order was placed last
   *
   * @param c
   *          the order
   * @return the location where the given order was placed last
   */
  public final Location getLocation(final Order c) {
    return this.orders[c.id].location;
  }
  /**
   * A class used for tracking
   *
   * @author Thomas Weise
   */
  private static class Track implements Serializable {
    /** a constant required by Java serialization */
    private static final long serialVersionUID = 1;

    /** the location */
    Location location;

    /** the time */
    long time;

    /** the track */
    Track() {
      super();
    }

  }

  /**
   * A class used for driver tracking
   *
   * @author Thomas Weise
   */
  private static class DriverTrack extends Track {
    /** a constant required by Java serialization */
    private static final long serialVersionUID = 1;

    /** the last day during which the driver was driving */
    int lastDay;

    /** the track */
    DriverTrack() {
      super();
    }

  }

  /**
   * A class used for truck tracking
   *
   * @author Thomas Weise
   */
  private static class TruckTrack extends Track {
    /** a constant required by Java serialization */
    private static final long serialVersionUID = 1;

    /** the last time the truck arrived somewhere */
    long lastTime;

    /** orders which were dropped off */
    Order[] orders;

    /** the track */
    TruckTrack() {
      super();
    }

  }

  /**
   * The move comparator for sorting
   *
   * @author Thomas Weise
   */
  private static final class CMP implements Comparator<Move> {

    /** instantiate the comparator */
    CMP() {
      super();
    }

    /**
     * Compare two moves according to their start time. The order is
     * descending, because move(Move[], int) iterates backwards over the
     * sorted array and thus applies the moves in ascending order of start
     * time.
     *
     * @param a
     *          the first move
     * @param b
     *          the second move
     * @return the comparison result
     */
    public final int compare(final Move a, final Move b) {
      if (a.startTime < b.startTime) {
        return 1;
      }
      if (b.startTime < a.startTime) {
        return -1;
      }
      return 0;
    }
  }
}
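The two time-window helpers in the listing above, windowViolation and windowIntersection, are pure functions and carry most of the scoring logic for pickups and deliveries. The following standalone sketch re-implements them outside the book's class hierarchy (the class name TimeWindows is illustrative) so their behavior can be tried in isolation; the bodies mirror the listing, with the unreachable final case of windowIntersection folded into a comment.

```java
/** Standalone re-implementation of the two time-window helpers (sketch). */
public final class TimeWindows {

  /** How far does time t lie outside the window [ws, we]? 0 if inside. */
  static long windowViolation(final long ws, final long we, final long t) {
    if (t < ws) {
      return ws - t; // too early: the waiting time
    }
    if (t > we) {
      return t - we; // too late: the delay
    }
    return 0L;
  }

  /**
   * Earliest time within the second window [w2s, w2e] that causes the
   * smallest violation of the first window [w1s, w1e].
   */
  static long windowIntersection(final long w1s, final long w1e,
      final long w2s, final long w2e) {
    if ((w2e >= w1s) && (w2s <= w1e)) {
      return Math.max(w1s, w2s); // windows overlap: earliest common time
    }
    if (w2e < w1s) {
      return w2e; // second window ends too early: get as close as possible
    }
    return w2s; // only remaining case: second window starts too late
  }

  public static void main(final String[] args) {
    // a truck arriving at t=50 for a pickup window [100, 200] is 50 early
    System.out.println(windowViolation(100L, 200L, 50L)); // prints 50
    // the truck is present during [150, 300]; the best pickup time is 150
    System.out.println(windowIntersection(100L, 200L, 150L, 300L)); // prints 150
  }
}
```

In the move method, windowIntersection first picks the best feasible pickup or drop-off moment during a truck's stay, and windowViolation then converts any remaining mismatch into error time.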
Appendices
Appendix A
Citation Suggestion
@book{W2011GOEB,
  author       = {Thomas Weise},
  title        = {Global Optimization Algorithms -- Theory and Application},
  year         = {2011},
  month        = dec # {7,},
  publisher    = {www.it-weise.de},
  howpublished = {Online as E-Book},
  edition      = {third},
  copyright    = {Copyright © 2006-2011 Thomas Weise, licensed under GNU FDL},
  keywords     = {Global Optimization, Evolutionary Computation, Evolutionary
                  Algorithms, Genetic Algorithms, Genetic Programming, Evolution
                  Strategy, Learning Classifier Systems, Extremal Optimization,
                  Raindrop Method, Ant Colony Optimization, Particle Swarm
                  Optimization, Downhill Simplex, Simulated Annealing, Hill
                  Climbing, Combinatorial Optimization, Numerical Optimization,
                  Java},
  language     = {en},
  url          = {http://www.it-weise.de/projects/book.pdf},
}
%0 Book
%A Weise, Thomas
%T Global Optimization Algorithms -- Theory and Application
%F Thomas Weise: Global Optimization Algorithms -- Theory and Application
%I www.it-weise.de
%D 2011
%8 2011-12-07
%7 third
%9 Online available as E-Book at http://www.it-weise.de/projects/book.pdf
Appendix B
GNU Free Documentation License (FDL)
Version 1.3, 3 November 2008
Copyright © 2000, 2001, 2002, 2007, 2008 Free Software Foundation, Inc.
http://fsf.org/ [accessed 2009-06-07]
Everyone is permitted to copy and distribute verbatim copies of this license document, but
changing it is not allowed.
Preamble
The purpose of this License is to make a manual, textbook, or other functional and useful
document "free" in the sense of freedom: to assure everyone the effective freedom to copy
and redistribute it, with or without modifying it, either commercially or noncommercially.
Secondarily, this License preserves for the author and publisher a way to get credit for their
work, while not being considered responsible for modifications made by others.
This License is a kind of "copyleft", which means that derivative works of the document
must themselves be free in the same sense. It complements the GNU General Public License,
which is a copyleft license designed for free software.
We have designed this License in order to use it for manuals for free software, because free
software needs free documentation: a free program should come with manuals providing the
same freedoms that the software does. But this License is not limited to software manuals; it
can be used for any textual work, regardless of subject matter or whether it is published as a
printed book. We recommend this License principally for works whose purpose is instruction
or reference.
B.1 Applicability and Definitions
This License applies to any manual or other work, in any medium, that contains a notice
placed by the copyright holder saying it can be distributed under the terms of this License.
Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that
work under the conditions stated herein. The "Document", below, refers to any such
manual or work. Any member of the public is a licensee, and is addressed as "you". You
accept the license if you copy, modify or distribute the work in a way requiring permission
under copyright law.
A "Modified Version" of the Document means any work containing the Document or
a portion of it, either copied verbatim, or with modifications and/or translated into another
language.
A "Secondary Section" is a named appendix or a front-matter section of the Document
that deals exclusively with the relationship of the publishers or authors of the Document
to the Document's overall subject (or to related matters) and contains nothing that could
fall directly within that overall subject. (Thus, if the Document is in part a textbook of
mathematics, a Secondary Section may not explain any mathematics.) The relationship
could be a matter of historical connection with the subject or with related matters, or of
legal, commercial, philosophical, ethical or political position regarding them.
The "Invariant Sections" are certain Secondary Sections whose titles are designated,
as being those of Invariant Sections, in the notice that says that the Document is released
under this License. If a section does not fit the above definition of Secondary then it is not
allowed to be designated as Invariant. The Document may contain zero Invariant Sections.
If the Document does not identify any Invariant Sections then there are none.
The "Cover Texts" are certain short passages of text that are listed, as Front-Cover
Texts or Back-Cover Texts, in the notice that says that the Document is released under this
License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at
most 25 words.
A "Transparent" copy of the Document means a machine readable copy, represented
in a format whose specification is available to the general public, that is suitable for revising
the document straightforwardly with generic text editors or (for images composed of pixels)
generic paint programs or (for drawings) some widely available drawing editor, and that is
suitable for input to text formatters or for automatic translation to a variety of formats
suitable for input to text formatters. A copy made in an otherwise Transparent file format
whose markup, or absence of markup, has been arranged to thwart or discourage subsequent
modification by readers is not Transparent. An image format is not Transparent if used for
any substantial amount of text. A copy that is not Transparent is called "Opaque".
Examples of suitable formats for Transparent copies include plain ASCII without
markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly
available DTD, and standard-conforming simple HTML, PostScript or PDF designed for
human modification. Examples of transparent image formats include PNG, XCF and JPG.
Opaque formats include proprietary formats that can be read and edited only by proprietary
word processors, SGML or XML for which the DTD and/or processing tools are not generally
available, and the machine-generated HTML, PostScript or PDF produced by some word
processors for output purposes only.
The "Title Page" means, for a printed book, the title page itself, plus such following
pages as are needed to hold, legibly, the material this License requires to appear in the title
page. For works in formats which do not have any title page as such, "Title Page" means
the text near the most prominent appearance of the work's title, preceding the beginning of
the body of the text.
The "publisher" means any person or entity that distributes copies of the Document
to the public.
A section "Entitled XYZ" means a named subunit of the Document whose title either
is precisely XYZ or contains XYZ in parentheses following text that translates XYZ
in another language. (Here XYZ stands for a specific section name mentioned below,
such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To
"Preserve the Title" of such a section when you modify the Document means that it
remains a section "Entitled XYZ" according to this definition.
The Document may include Warranty Disclaimers next to the notice which states that
this License applies to the Document. These Warranty Disclaimers are considered to be
included by reference in this License, but only as regards disclaiming warranties: any other
implication that these Warranty Disclaimers may have is void and has no effect on the
meaning of this License.
B.2 Verbatim Copying
You may copy and distribute the Document in any medium, either commercially or
noncommercially, provided that this License, the copyright notices, and the license notice
saying this License applies to the Document are reproduced in all copies, and that you add
no other conditions whatsoever to those of this License. You may not use technical measures
to obstruct or control the reading or further copying of the copies you make or distribute.
However, you may accept compensation in exchange for copies. If you distribute a large
enough number of copies you must also follow the conditions in Section B.3.
You may also lend copies, under the same conditions stated above, and you may publicly
display copies.
B.3 Copying in Quantity
If you publish printed copies (or copies in media that commonly have printed covers) of
the Document, numbering more than 100, and the Document's license notice requires Cover
Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover
Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both
covers must also clearly and legibly identify you as the publisher of these copies. The front
cover must present the full title with all words of the title equally prominent and visible.
You may add other material on the covers in addition. Copying with changes limited to the
covers, as long as they preserve the title of the Document and satisfy these conditions, can
be treated as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put the
first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto
adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100,
you must either include a machine-readable Transparent copy along with each Opaque copy,
or state in or with each Opaque copy a computer-network location from which the general
network-using public has access to download using public-standard network protocols a
complete Transparent copy of the Document, free of added material. If you use the latter
option, you must take reasonably prudent steps, when you begin distribution of Opaque
copies in quantity, to ensure that this Transparent copy will remain thus accessible at the
stated location until at least one year after the last time you distribute an Opaque copy
(directly or through your agents or retailers) of that edition to the public.
It is requested, but not required, that you contact the authors of the Document well
before redistributing any large number of copies, to give them a chance to provide you with
an updated version of the Document.
B.4 Modications
You may copy and distribute a Modified Version of the Document under the conditions
of sections 2 and 3 above, provided that you release the Modified Version under precisely
this License, with the Modified Version filling the role of the Document, thus licensing
distribution and modification of the Modified Version to whoever possesses a copy of it. In
addition, you must do these things in the Modified Version:
A. Use in the Title Page (and on the covers, if any) a title distinct from that of the
Document, and from those of previous versions (which should, if there were any, be
listed in the History section of the Document). You may use the same title as a
previous version if the original publisher of that version gives permission.
B. List on the Title Page, as authors, one or more persons or entities responsible for
authorship of the modifications in the Modified Version, together with at least five of
the principal authors of the Document (all of its principal authors, if it has fewer than
five), unless they release you from this requirement.
C. State on the Title page the name of the publisher of the Modified Version, as the
publisher.
D. Preserve all the copyright notices of the Document.
E. Add an appropriate copyright notice for your modifications adjacent to the other
copyright notices.
F. Include, immediately after the copyright notices, a license notice giving the public
permission to use the Modified Version under the terms of this License, in the form
shown in the Addendum below.
G. Preserve in that license notice the full lists of Invariant Sections and required Cover
Texts given in the Document's license notice.
H. Include an unaltered copy of this License.
I. Preserve the section Entitled "History", Preserve its Title, and add to it an item
stating at least the title, year, new authors, and publisher of the Modified Version as
given on the Title Page. If there is no section Entitled "History" in the Document,
create one stating the title, year, authors, and publisher of the Document as given
on its Title Page, then add an item describing the Modified Version as stated in the
previous sentence.
J. Preserve the network location, if any, given in the Document for public access to a
Transparent copy of the Document, and likewise the network locations given in the
Document for previous versions it was based on. These may be placed in the "History"
section. You may omit a network location for a work that was published at least four
years before the Document itself, or if the original publisher of the version it refers to
gives permission.
K. For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title
of the section, and preserve in the section all the substance and tone of each of the
contributor acknowledgements and/or dedications given therein.
L. Preserve all the Invariant Sections of the Document, unaltered in their text and in
their titles. Section numbers or the equivalent are not considered part of the section
titles.
M. Delete any section Entitled "Endorsements". Such a section may not be included in
the Modified Version.
N. Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title
with any Invariant Section.
O. Preserve any Warranty Disclaimers.
If the Modified Version includes new front-matter sections or appendices that qualify as
Secondary Sections and contain no material copied from the Document, you may at your
option designate some or all of these sections as invariant. To do this, add their titles to
the list of Invariant Sections in the Modified Version's license notice. These titles must be
distinct from any other section titles.
You may add a section Entitled "Endorsements", provided it contains nothing but endorsements
of your Modified Version by various parties, for example, statements of peer
review or that the text has been approved by an organization as the authoritative definition
of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up
to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modied
Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added
by (or through arrangements made by) any one entity. If the Document already includes
a cover text for the same cover, previously added by you or by arrangement made by the
same entity you are acting on behalf of, you may not add another; but you may replace the
old one, on explicit permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to
use their names for publicity for or to assert or imply endorsement of any Modied Version.
B.5 Combining Documents
You may combine the Document with other documents released under this License, under
the terms defined in Section B.4 above for modified versions, provided that you include in
the combination all of the Invariant Sections of all of the original documents, unmodied,
and list them all as Invariant Sections of your combined work in its license notice, and that
you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical
Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections
with the same name but different contents, make the title of each such section unique by
adding at the end of it, in parentheses, the name of the original author or publisher of that
section if known, or else a unique number. Make the same adjustment to the section titles
in the list of Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections Entitled "History" in the various
original documents, forming one section Entitled "History"; likewise combine any sections
Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete
all sections Entitled "Endorsements".
B.6 Collections of Documents
You may make a collection consisting of the Document and other documents released
under this License, and replace the individual copies of this License in the various documents
with a single copy that is included in the collection, provided that you follow the rules of
this License for verbatim copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute it individually
under this License, provided you insert a copy of this License into the extracted document,
and follow this License in all other respects regarding verbatim copying of that document.
B.7 Aggregation with Independent Works
A compilation of the Document or its derivatives with other separate and independent
documents or works, in or on a volume of a storage or distribution medium, is called an
"aggregate" if the copyright resulting from the compilation is not used to limit the legal rights
of the compilation's users beyond what the individual works permit. When the Document
is included in an aggregate, this License does not apply to the other works in the aggregate
which are not themselves derivative works of the Document.
If the Cover Text requirement of Section B.3 is applicable to these copies of the Document,
then if the Document is less than one half of the entire aggregate, the Document's
Cover Texts may be placed on covers that bracket the Document within the aggregate, or
the electronic equivalent of covers if the Document is in electronic form. Otherwise they
must appear on printed covers that bracket the whole aggregate.
B.8 Translation
Translation is considered a kind of modification, so you may distribute translations of the
Document under the terms of Section B.4. Replacing Invariant Sections with translations
requires special permission from their copyright holders, but you may include translations of
some or all Invariant Sections in addition to the original versions of these Invariant Sections.
You may include a translation of this License, and all the license notices in the Document,
and any Warranty Disclaimers, provided that you also include the original English version
of this License and the original versions of those notices and disclaimers. In case of a
disagreement between the translation and the original version of this License or a notice or
disclaimer, the original version will prevail.
If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History",
the requirement (Section B.4) to Preserve its Title (Section B.1) will typically require
changing the actual title.
B.9 Termination
You may not copy, modify, sublicense, or distribute the Document except as expressly
provided under this License. Any attempt otherwise to copy, modify, sublicense, or distribute
it is void, and will automatically terminate your rights under this License.
However, if you cease all violation of this License, then your license from a particular
copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly
and finally terminates your license, and (b) permanently, if the copyright holder fails to
notify you of the violation by some reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently if
the copyright holder notifies you of the violation by some reasonable means, this is the first
time you have received notice of violation of this License (for any work) from that copyright
holder, and you cure the violation prior to 30 days after your receipt of the notice.
Termination of your rights under this section does not terminate the licenses of parties
who have received copies or rights from you under this License. If your rights have been
terminated and not permanently reinstated, receipt of a copy of some or all of the same
material does not give you any rights to use it.
B.10 Future Revisions of this License
The Free Software Foundation may publish new, revised versions of the GNU Free
Documentation License from time to time. Such new versions will be similar in spirit to the
present version, but may differ in detail to address new problems or concerns. See
http://www.gnu.org/copyleft/ [accessed 2009-06-07].
Each version of the License is given a distinguishing version number. If the Document
specifies that a particular numbered version of this License "or any later version" applies
to it, you have the option of following the terms and conditions either of that specified
version or of any later version that has been published (not as a draft) by the Free Software
Foundation. If the Document does not specify a version number of this License, you may
choose any version ever published (not as a draft) by the Free Software Foundation. If the
Document specifies that a proxy can decide which future versions of this License can be
used, that proxy's public statement of acceptance of a version permanently authorizes you
to choose that version for the Document.
B.11 Relicensing
"Massive Multiauthor Collaboration Site" (or "MMC Site") means any World Wide Web
server that publishes copyrightable works and also provides prominent facilities for anybody
to edit those works. A public wiki that anybody can edit is an example of such a server. A
"Massive Multiauthor Collaboration" (or "MMC") contained in the site means any set of
copyrightable works thus published on the MMC site.
"CC-BY-SA" means the Creative Commons Attribution-Share Alike 3.0 license published
by Creative Commons Corporation, a not-for-profit corporation with a principal place of
business in San Francisco, California, as well as future copyleft versions of that license
published by that same organization.
"Incorporate" means to publish or republish a Document, in whole or in part, as part of
another Document.
An MMC is "eligible for relicensing" if it is licensed under this License, and if all works
that were first published under this License somewhere other than this MMC, and subsequently
incorporated in whole or in part into the MMC, (1) had no cover texts or invariant
sections, and (2) were thus incorporated prior to November 1, 2008.
The operator of an MMC Site may republish an MMC contained in the site under CC-
BY-SA on the same site at any time before August 1, 2009, provided the MMC is eligible
for relicensing.
Addendum: How to use this License for your documents
To use this License in a document you have written, include a copy of the License in the
document and put the following copyright and license notices just after the title page:
Copyright © YEAR YOUR NAME.
Permission is granted to copy, distribute and/or modify this document under the
terms of the GNU Free Documentation License, Version 1.3 or any later version
published by the Free Software Foundation; with no Invariant Sections, no Front-
Cover Texts, and no Back-Cover Texts. A copy of the license is included in the
section entitled "GNU Free Documentation License".
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the
"with . . . Texts." line with this:
with the Invariant Sections being LIST THEIR TITLES, with the Front-Cover
Texts being LIST, and with the Back-Cover Texts being LIST.
If you have Invariant Sections without Cover Texts, or some other combination of the
three, merge those two alternatives to suit the situation.
If your document contains nontrivial examples of program code, we recommend releasing
these examples in parallel under your choice of free software license, such as the GNU General
Public License, to permit their use in free software.
Appendix C
GNU Lesser General Public License (LGPL)
Version 3, 29 June 2007
Copyright © 2007 Free Software Foundation, Inc.
http://fsf.org/ [accessed 2009-06-07]
Everyone is permitted to copy and distribute verbatim copies of this license document, but
changing it is not allowed.
This version of the GNU Lesser General Public License incorporates the terms and
conditions of version 3 of the GNU General Public License, supplemented by the additional
permissions listed below.
C.1 Additional Definitions
As used herein, "this License" refers to version 3 of the GNU Lesser General Public
License, and the "GNU GPL" refers to version 3 of the GNU General Public License.
"The Library" refers to a covered work governed by this License, other than an Application
or a Combined Work as defined below.
An "Application" is any work that makes use of an interface provided by the Library,
but which is not otherwise based on the Library. Defining a subclass of a class defined by
the Library is deemed a mode of using an interface provided by the Library.
A "Combined Work" is a work produced by combining or linking an Application with
the Library. The particular version of the Library with which the Combined Work was made
is also called the "Linked Version".
The "Minimal Corresponding Source" for a Combined Work means the Corresponding
Source for the Combined Work, excluding any source code for portions of the Combined
Work that, considered in isolation, are based on the Application, and not on the Linked
Version.
The "Corresponding Application Code" for a Combined Work means the object code
and/or source code for the Application, including any data and utility programs needed for
reproducing the Combined Work from the Application, but excluding the System Libraries
of the Combined Work.
C.2 Exception to Section 3 of the GNU GPL
You may convey a covered work under Section C.4 and Section C.5 of this License without
being bound by section 3 of the GNU GPL.
C.3 Conveying Modified Versions
If you modify a copy of the Library, and, in your modifications, a facility refers to a
function or data to be supplied by an Application that uses the facility (other than as an
argument passed when the facility is invoked), then you may convey a copy of the modified
version:
1. under this License, provided that you make a good faith effort to ensure that, in the
event an Application does not supply the function or data, the facility still operates,
and performs whatever part of its purpose remains meaningful, or
2. under the GNU GPL, with none of the additional permissions of this License applicable
to that copy.
C.4 Object Code Incorporating Material from Library Header
Files
The object code form of an Application may incorporate material from a header file
that is part of the Library. You may convey such object code under terms of your choice,
provided that, if the incorporated material is not limited to numerical parameters, data
structure layouts and accessors, or small macros, inline functions and templates (ten or
fewer lines in length), you do both of the following:
1. Give prominent notice with each copy of the object code that the Library is used in it
and that the Library and its use are covered by this License.
2. Accompany the object code with a copy of the GNU GPL and this license document.
C.5 Combined Works
You may convey a Combined Work under terms of your choice that, taken together,
effectively do not restrict modification of the portions of the Library contained in the Combined
Work and reverse engineering for debugging such modifications, if you also do each of
the following:
1. Give prominent notice with each copy of the Combined Work that the Library is used
in it and that the Library and its use are covered by this License.
2. Accompany the Combined Work with a copy of the GNU GPL and this license docu-
ment.
3. For a Combined Work that displays copyright notices during execution, include the
copyright notice for the Library among these notices, as well as a reference directing
the user to the copies of the GNU GPL and this license document.
4. Do one of the following:
(a) Convey the Minimal Corresponding Source under the terms of this License, and
the Corresponding Application Code in a form suitable for, and under terms that
permit, the user to recombine or relink the Application with a modified version
of the Linked Version to produce a modified Combined Work, in the manner
specified by section 6 of the GNU GPL for conveying Corresponding Source.
(b) Use a suitable shared library mechanism for linking with the Library. A suitable
mechanism is one that (a) uses at run time a copy of the Library already present
on the user's computer system, and (b) will operate properly with a modified
version of the Library that is interface-compatible with the Linked Version.
5. Provide Installation Information, but only if you would otherwise be required to provide
such information under section 6 of the GNU GPL, and only to the extent that
such information is necessary to install and execute a modified version of the Combined
Work produced by recombining or relinking the Application with a modified version
of the Linked Version. (If you use option Section C.5-a, the Installation Information
must accompany the Minimal Corresponding Source and Corresponding Application
Code. If you use option Section C.5-b, you must provide the Installation Information
in the manner specified by section 6 of the GNU GPL for conveying Corresponding
Source.)
C.6 Combined Libraries
You may place library facilities that are a work based on the Library side by side in
a single library together with other library facilities that are not Applications and are not
covered by this License, and convey such a combined library under terms of your choice, if
you do both of the following:
1. Accompany the combined library with a copy of the same work based on the Library,
uncombined with any other library facilities, conveyed under the terms of this License.
2. Give prominent notice with the combined library that part of it is a work based on
the Library, and explaining where to find the accompanying uncombined form of the
same work.
C.7 Revised Versions of the GNU Lesser General Public License
The Free Software Foundation may publish revised and/or new versions of the GNU
Lesser General Public License from time to time. Such new versions will be similar in spirit
to the present version, but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Library as you received it
specifies that a certain numbered version of the GNU Lesser General Public License "or any
later version" applies to it, you have the option of following the terms and conditions either
of that published version or of any later version published by the Free Software Foundation.
If the Library as you received it does not specify a version number of the GNU Lesser General
Public License, you may choose any version of the GNU Lesser General Public License ever
published by the Free Software Foundation.
If the Library as you received it specifies that a proxy can decide whether future versions
of the GNU Lesser General Public License shall apply, that proxy's public statement of
acceptance of any version is permanent authorization for you to choose that version for the
Library.
Appendix D
Credits and Contributors
In this section I want to give credit to those who deserve it (by directly or indirectly
contributing to this book). So it's props to:
Stefan Achler
For working together at the 2007 DATA-MINING-CUP Contest.
See ?? on page ?? and [2902].
Steffen Bleul
For working together at the 2006, 2007, and 2008 Web Service Challenge.
See ?? on page ?? and [334, 335, 2903, 2908]
Raymond Chiong
For co-authoring a book chapter corresponding to Part II and by doing so, helping to correct
many mistakes.
See Part II on page 138
Distributed Systems Group, University of Kassel
To the whole Distributed Systems Group at the University of Kassel for being supportive
and coming over again and again with new ideas and helpful advice.
Each and every research project described in this book.
Qiu Li
For careful reading and pointing out inconsistencies and spelling mistakes.
See Chapter 51
Gan Min
For careful reading and pointing out inconsistencies and spelling mistakes.
See especially the Pareto optimization area Section 3.3.5 on page 65.
Kurt Geihs
For being supportive and contributing to many of my research projects.
See for instance ?? on page ?? and [334, 335, 2896, 2897, 2904]
Martin Göb
For working together at the 2007 DATA-MINING-CUP Contest.
See ?? on page ?? and [2902].
Ibrahim Ibrahim
For careful proofreading.
See Chapter 1 on page 21.
Antonio Nebro
For co-authoring a book chapter corresponding to Part II and by doing so, helping to correct
many mistakes.
See Chapter 3 on page 51
Stefan Niemczyk
For careful proofreading and scientic support.
See for instance Chapter 1 on page 21, Section 50.2.7 on page 566, and [2037, 2910].
Roland Reichle
For fruitful discussions and scientic support.
See for instance Section 53.6.2 on page 693, Section 50.2.7 on page 566, and [2910].
Laurent Rodriguez
For contributing corrections in some of the chapters.
See for instance Section 53.1 on page 653 and Section 51.11 on page 647.
Richard Seabrook
For pointing out mistakes in some of the set theory denitions.
See Chapter 51 on page 639
Hendrik Skubch
For fruitful discussions and scientic support.
See for instance ?? on page ??, Section 50.2.7 on page 566, and [2910].
Christian Voigtmann
For working together at the 2007 and 2008 DATA-MINING-CUP Contest.
See ?? on page ?? and [2902].
Wibul Wongpoowarak
For valuable suggestions in different areas, like including random optimization.
See for instance ?? on page ?? [2971], Chapter 44 on page 505.
Michael Zapf
For helping me to make many sections more comprehensible.
See for instance ?? on page ??, ?? on page ??, Section 31.5 on page 409, and [2906? ].
References
A
1. Proceedings of the Fifth Conference on Innovative Applications of Artificial Intelligence
(IAAI'93), July 11–15, 1993, Washington, DC, USA. AAAI Press: Menlo Park, CA, USA.
isbn: 0-929280-46-6. Partly available at http://www.aaai.org/Conferences/IAAI/iaai93.php [accessed 2007-09-06].
2. Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI'10),
July 11–15, 2010, Westin Peachtree Plaza: Atlanta, GA, USA. AAAI Press: Menlo Park,
CA, USA.
3. Proceedings of the Twenty-Fifth Conference on Artificial Intelligence (AAAI'11), August 7–11,
2011, Hyatt Regency San Francisco: San Francisco, CA, USA. AAAI Press: Menlo Park,
CA, USA.
4. Proceedings of the 23rd Conference on Innovative Applications of Artificial Intelligence
(IAAI'11), August 9–11, 2011, Hyatt Regency San Francisco: San Francisco, CA, USA.
AAAI Press: Menlo Park, CA, USA.
5. Emile H. L. Aarts and Jan Karel Lenstra, editors. Local Search in Combinatorial Optimization,
Estimation, Simulation, and Control – Wiley-Interscience Series in Discrete Mathematics
and Optimization. Princeton University Press: Princeton, NJ, USA, 1997–2003.
isbn: 0585277540 and 0691115222. Google Books ID: NWghN9G7q9MC. OCLC: 45733970.
6. Florin Abelès, editor. Optical Interference Coatings, June 6–10, 1994, Grenoble, France,
volume 2253 in The Proceedings of SPIE. Society of Photographic Instrumentation Engineers
(SPIE): Bellingham, WA, USA. isbn: 0819415626. Google Books ID: 9OxTAAAAMAAJ and
cutTAAAAMAAJ. OCLC: 31676720.
7. Ajith Abraham and Mario Köppen, editors. Hybrid Information Systems, Proceedings of the
First International Workshop on Hybrid Intelligent Systems (HIS'01), December 11–12, 2001,
Adelaide University, Main Campus: Adelaide, SA, Australia, Advances in Soft Computing.
Physica-Verlag GmbH & Co.: Heidelberg, Germany. isbn: 3-7908-1480-6. Partly available
at http://his01.hybridsystem.com/ [accessed 2007-09-01].
8. Ajith Abraham, Lakhmi C. Jain, and Robert Goldberg, editors. Evolutionary Multiobjective
Optimization – Theoretical Advances and Applications, Advanced Information and Knowledge
Processing. Springer-Verlag GmbH: Berlin, Germany. isbn: 1852337877. Google Books
ID: Ei7q1YSjiSAC.
9. Ajith Abraham, Javier Ruiz-del-Solar, and Mario Köppen, editors. Soft Computing Systems –
Design, Management and Applications (HIS'02), December 1–4, 2002, Santiago, Chile,
volume 87 in Frontiers in Artificial Intelligence and Applications. IOS Press: Amsterdam,
The Netherlands. isbn: 1-58603-297-6.
10. Ajith Abraham, Mario Köppen, and Katrin Franke, editors. Design and Application of Hybrid
Intelligent Systems, Proceedings of the Third International Conference on Hybrid Intelligent
Systems (HIS'03), December 14–17, 2003, Monash Conference Centre: Melbourne, VIC,
Australia, volume 105 in Frontiers in Artificial Intelligence and Applications. IOS Press:
Amsterdam, The Netherlands. isbn: 1-58603-394-8. Partly available at
http://his03.hybridsystem.com/ [accessed 2007-09-01].
11. Ajith Abraham, Yasuhiko Dote, Takeshi Furuhashi, Azuma Ohuchi, and Yukio Ohsawa,
editors. The Fourth IEEE International Workshop on Soft Computing as Transdisciplinary
Science and Technology (WSTST'05), May 25–27, 2005, Muroran, Japan, volume
29/2005 in Advances in Intelligent and Soft Computing. Springer-Verlag: Berlin/Heidelberg.
doi: 10.1007/3-540-32391-0. isbn: 3-540-25055-7. Google Books ID: gLGl5vBTnrAC.
Library of Congress Control Number (LCCN): 2005923683.
12. Ajith Abraham, Andre Carvalho, Francisco Herrera Triguero, and Vijayalakshmi Pai, editors.
World Congress on Nature and Biologically Inspired Computing (NaBIC'09), December 9–11,
2009, PSG College of Technology: Coimbatore, India. Library of Congress Control Number
(LCCN): 2009907135. Catalogue no.: CFP0995H.
13. Ajith Abraham, Aboul-Ella Hassanien, and Andre Ponce de Leon F. de Carvalho, editors.
Foundations of Computational Intelligence Volume 4: Bio-Inspired Data Mining, volume
204/2009 in Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg, 2009.
doi: 10.1007/978-3-642-01088-0.
14. Ajith Abraham, Mohamed Kamel, and Ronald Yager, editors. Proceedings of the 11th
International Conference on Hybrid Intelligent Systems (HIS'11), December 5–8, 2011, Melaka,
Malaysia. IEEE Computer Society Press: Los Alamitos, CA, USA.
15. Faris N. Abuali, Dale A. Schoenefeld, and Roger L. Wainwright. Designing Telecommunications
Networks using Genetic Algorithms and Probabilistic Minimum Spanning
Trees. In SAC'94 [734], pages 242–246, 1994. doi: 10.1145/326619.326733. Fully available
at http://doi.acm.org/10.1145/326619.326733 and
http://euler.mcs.utulsa.edu/~rogerw/papers/Abuali-Prufer-CSC-94.pdf [accessed 2008-08-28]. See also [16].
16. Faris N. Abuali, Dale A. Schoenefeld, and Roger L. Wainwright. Terminal Assignment
in a Communications Network using Genetic Algorithms. In CSC'94 [584], pages 74–81,
1994. doi: 10.1145/197530.197559. Fully available at
http://euler.mcs.utulsa.edu/~rogerw/papers/Abuali-Terminal-CSC-94.pdf [accessed 2008-08-28]. See also [15].
17. Congress on Numerical Methods in Combinatorial Optimization, March 1986, Capri, Italy.
Academic Press Professional, Inc.: San Diego, CA, USA.
18. Wolfgang Achtziger and Mathias Stolpe. Truss Topology Optimization with Discrete Design
Variables – Guaranteed Global Optimality and Benchmark Examples. Structural and Multidisciplinary
Optimization, 34(1):1–20, July 2007, Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/s00158-006-0074-2.
19. David H. Ackley. A Connectionist Machine for Genetic Hillclimbing, volume 28 in Kluwer
International Series in Engineering and Computer Science, The Springer International Series
in Engineering and Computer Science. PhD thesis, Carnegie Mellon University (CMU): Pittsburgh,
PA, USA, Kluwer Academic Publishers: Norwell, MA, USA and Springer US: Boston,
MA, USA, August 31, 1987. isbn: 0-89838-236-X. OCLC: 16005487, 475896170, and
493204271. Library of Congress Control Number (LCCN): 87013536. GBV-Identification
(PPN): 016581156 and 016643720. LC Classification: Q336 .A25 1987.
20. Proceedings of the Third Annual ACM Symposium on Theory of Computing (STOC'71), 1971,
Shaker Heights, OH, USA. ACM Press: New York, NY, USA.
21. Proceedings of the ACM Annual Conference (ACM'72), Boston, MA, USA. ACM Press: New
York, NY, USA.
22. Proceedings of the 1999 ACM SIGMOD International Conference on Management of
Data (SIGMOD'99), 1999, Philadelphia, PA, USA. ACM Press: New York, NY, USA.
isbn: 1-58113-084-8.
23. Proceedings of the 2000 ACM symposium on Applied computing (SAC'00), March 19–21,
2000, Villa Olmo, Como, Italy. ACM Press: New York, NY, USA. isbn: 1-58113-240-9.
OCLC: 57310819.
24. Proceedings of the 2001 ACM Symposium on Applied Computing (SAC'01), March 11–14,
2001, Alexis Park Resort Hotel: Las Vegas, NV, USA. ACM Press: New York, NY, USA.
isbn: 1-58113-287-5. OCLC: 47677896 and 74839643.
25. Proceedings of the Eleventh International Conference on Information and Knowledge Management
(CIKM'02), November 4–9, 2002, McLean, VA, USA. ACM Press: New York, NY,
USA. isbn: 1-58113-590-4.
26. Proceedings of the 11th International Conference on World Wide Web (WWW'02), May 7–11, 2002,
Honolulu, HI, USA. ACM Press: New York, NY, USA. isbn: 1-58113-449-5.
27. Eighth International Workshop on Learning Classifier Systems (IWLCS'05), June 25–29,
2005, Washington, DC, USA. ACM Press: New York, NY, USA. Part of [304]. See also
[1589].
28. Proceedings of the Conference on Information and Knowledge Management (CIKM'07),
November 6–10, 2007, Lisbon, Portugal. ACM Press: New York, NY, USA.
29. ACM SIGEVO Workshop on Foundations of Genetic Algorithms X (FOGA'09), January 9–11,
2009, Orlando, FL, USA. ACM Press: New York, NY, USA. ACM Order No.: 910094.
30. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'10), July 7–11,
2010, Portland Marriott Downtown Waterfront Hotel: Portland, OR, USA. ACM Press:
New York, NY, USA. Partly available at http://www.sigevo.org/gecco-2010/ [accessed 2011-02-28].
ACM Order No.: 910102. See also [31].
31. Companion Publication of the Genetic and Evolutionary Computation Conference
(GECCO'10 Companion), July 7–11, 2010, Portland Marriott Downtown Waterfront Hotel:
Portland, OR, USA. ACM Press: New York, NY, USA. ACM Order No.: 910102. See
also [30].
32. Christoph Adami, Richard K. Belew, Hiroaki Kitano, and Charles E. Taylor, editors. Proceedings
of the Sixth International Conference on Artificial Life (Artificial Life VI), June 27–29,
1998, University of California (UCLA): Los Angeles, CA, USA, volume 6 in Complex Adaptive
Systems, Bradford Books. MIT Press: Cambridge, MA, USA. isbn: 0-262-51099-5.
Google Books ID: K2M6VDCQ5mMC. OCLC: 39013808 and 245676852.
33. Proceedings of the Australasian Joint Conference on Articial Intelligence Conference
(AI10), December 710, 2010, Adelaide, SA, Australia.
34. Proceedings of the 6th International Conference on Database Theory (ICDT97). Foto N.
Afrati and Phokion G. Kolaitis, editors, volume 1186 in Lecture Notes in Computer Sci-
ence (LNCS). Springer-Verlag GmbH: Berlin, Germany, January 810, 1997, Delphi, Greece.
isbn: 3-540-62222-5.
35. Divyakant Agrawal, Pusheng Zhang, Amr El Abbadi, and Mohamed F. Mokbel, editors. Pro-
ceedings of the 18th ACM SIGSPATIAL International Conference on Advances in Geographic
Information Systems (ACM SIGSPATIAL GIS 2010), November 25, 2010, Doubletree Hotel
San Jose: San Jose, CA, USA. ACM Press: New York, NY, USA.
36. Vikas R. Agrawal. A Dissertation Entitled Data Warehouse Operational Design: View Se-
lection and Performance Simulation. PhD thesis, University of Toledo: Toledo, OH, USA,
May 2005, Mesbah Ahmed and P. S. Sundararaghavan, Advisors, Udayan Nandkeolyar and
Robert Bennett, Committee members. Fully available at http://www.utoledo.edu/
business/phd/PHDDocs/Vikas_Agrawal_-_Data_warehouse.pdf [accessed 2011-04-13].
37. Alan Agresti. An Introduction to Categorical Data Analysis, Wiley Series in Probability and
Mathematical Statistics Applied Probability and Statistics Section Series. Wiley Inter-
science: Chichester, West Sussex, UK, January 1996. isbn: 0471113387. OCLC: 32745985
and 464377546. Library of Congress Control Number (LCCN): 95021928. GBV-
Identication (PPN): 186503121. LC Classication: QA278 .A355 1996.
38. Arturo Hernandez Aguirre, Ra ul Monroy Borja, and Carlos A. Reyes Garca, editors. Ad-
vances in Articial Intelligence: 8th Mexican International Conference on Articial Intelli-
gence (MICAI), November 913, 2009, Guanajuato, Mexico, volume 5845 in Lecture Notes
in Articial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. isbn: 3-642-05257-6. Library of Congress Control Number
(LCCN): 2009937486.
39. Luis E. Agustn-Blas, Sancho Salcedo-Sanz, Pablo Vidales, Gilberto Urueta, Jose Antonio
Portilla-Figueras, and Mark Solarski. A Hybrid Grouping Genetic Algorithm for City-
wide Ubiquitous WiFi Access Deployment. In CEC09 [1350], pages 21722179, 2009.
doi: 10.1109/CEC.2009.4983210. INSPEC Accession Number: 10688805.
40. David W. Aha, Andras Janosi, William Steinbrunn, and Matthias Psterer. Heart Disease
Data Set. UCI Machine Learning Repository, Center for Machine Learning and Intelligent
Systems, Donald Bren School of Information and Computer Science, University of California:
Irvine, CA, USA, July 1, 1988. Fully available at http://archive.ics.uci.edu/ml/
datasets/Heart+Disease [accessed 2008-12-27].
41. Chang Wook Ahn. Advances in Evolutionary Algorithms Theory, Design and Practice,
volume 18 in Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg, 2006. doi: 10.1007/3-540-31759-7. isbn: 3-540-31758-9. Library of Congress Control Number (LCCN): 2005939008.
42. Chang Wook Ahn and Rudrapatna S. Ramakrishna. Clustering-Based Probabilistic Model Fitting in Estimation of Distribution Algorithms. IEICE Transactions on Information and Systems, Oxford Journals, E89-D(1):381–383, January 2006, Institute of Electronics, Information and Communication Engineers (IEICE): Tokyo, Japan. doi: 10.1093/ietisy/e89-d.1.381.
43. Chang Wook Ahn and Rudrapatna S. Ramakrishna. Augmented Compact Genetic Algorithm. In PPAM'03 [2991], pages 560–565, 2003. doi: 10.1007/978-3-540-24669-5_73. See also [44].
44. Chang Wook Ahn and Rudrapatna S. Ramakrishna. Augmented Compact Genetic Algorithm. NWC Report EC-200201, Kwang-Ju Institute of Science & Technology (K-JIST), Department of Information & Communications, New Wave Computing (NWC) Laboratory: Puk-gu, Gwangju, Korea, February 2002. Fully available at http://www.evolution.re.kr/Publication/Tech/EC-200201.pdf [accessed 2010-08-03]. See also [43].
45. Sangtae Ahn, Jeffrey A. Fessler, Doron Blatt, and Alfred O. Hero. Convergent Incremental Optimization Transfer Algorithms: Application to Tomography. IEEE Transactions on Medical Imaging, 25(3):283–296, March 2006, IEEE (Institute of Electrical and Electronics Engineers): Piscataway, NJ, USA. doi: 10.1109/TMI.2005.862740. Fully available at http://www-rcf.usc.edu/~sangtaea/papers/ahn:06:cio.pdf [accessed 2009-10-25]. Partly available at http://www-rcf.usc.edu/~sangtaea/papers/ahn:06:cio_correct_eq.jpg and www-rcf.usc.edu/~sangtaea/papers/ahn:04:iot,talk.pdf [accessed 2009-10-25]. 2004 IEEE NSS-MIC, October 21, 2004.
46. Alfred Vaino Aho, John Edward Hopcroft, and Jeffrey David Ullman. The Design and Analysis of Computer Algorithms, Addison-Wesley Series in Computer Science and Information Processing. Addison-Wesley Publishing Co. Inc.: Reading, MA, USA, international ed edition, January 1974. isbn: 0-201-00029-6. Google Books ID: O4dGcAAACAAJ. OCLC: 1147299, 252049725, 466284870, and 634120903. Library of Congress Control Number (LCCN): 74003995. GBV-Identification (PPN): 020989148, 027429121, and 132190680. LC Classification: QA76.6 .A36.
47. J. H. Ahrens and U. Dieter. Computer Methods for Sampling from Gamma, Beta, Poisson and Binomial Distributions. Computing, 12(3):223–246, September 10, 1974, Springer Verlag GmbH: Vienna, Austria. doi: 10.1007/BF02293108.
48. Marco Aiello, Christian Platzer, Florian Rosenberg, Huy Tran, Martin Vasko, and Schahram Dustdar. Web Service Indexing for Efficient Retrieval and Composition. In CEC/EEE'06 [2967], pages 424–426, 2006. doi: 10.1109/CEC-EEE.2006.96. Fully available at https://berlin.vitalab.tuwien.ac.at/~florian/papers/cec2006.pdf [accessed 2007-10-25]. INSPEC Accession Number: 9199703. See also [49, 50, 318, 761].
49. Marco Aiello, Nico van Benthem, and Elie el Khoury. Visualizing Compositions of Services from Large Repositories. In CEC/EEE'08 [1346], pages 359–362, 2008. doi: 10.1109/CECandEEE.2008.149. INSPEC Accession Number: 10475111. See also [48, 50, 204, 761].
50. Marco Aiello, Elie el Khoury, Alexander Lazovik, and Patrick Ratelband. Optimal QoS-Aware Web Service Composition. In CEC'09 [1245], pages 491–494, 2009. doi: 10.1109/CEC.2009.63. INSPEC Accession Number: 10839135. See also [48, 49, 761, 1571].
51. Jan Aikins and Howard Shrobe, editors. Proceedings of the Seventh Conference on Innovative Applications of Artificial Intelligence (IAAI'95), August 21–25, 1995, Montreal, QC, Canada. AAAI Press: Menlo Park, CA, USA. isbn: 0-929280-81-4. Partly available at http://www.aaai.org/Conferences/IAAI/iaai95.php [accessed 2007-09-06]. OCLC: 33813525, 633974570, and 635878104. GBV-Identification (PPN): 212259687.
52. P. Aiyarak, A. S. Saket, and Mark C. Sinclair. Genetic Programming Approaches for Minimum Cost Topology Optimisation of Optical Telecommunication Networks. In GALESIA'97 [3056], pages 415–420, 1997. doi: 10.1049/cp:19971216. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/aiyarak_1997_GPtootn.html [accessed 2009-09-01]. CiteSeerX: 10.1.1.55.6656. See also [2498].
53. M. Ali Akbar and Muddassar Farooq. Application of Evolutionary Algorithms in Detection of SIP based Flooding Attacks. In GECCO'09-I [2342], pages 1419–1426, 2009. doi: 10.1145/1569901.1570092. Fully available at http://nexginrc.org/papers/gecco09-ali.pdf [accessed 2009-09-03].
54. Rama Akkiraju, Biplav Srivastava, Anca Ivan, Richard Goodwin, and Tanveer Syeda-Mahmood. Semantic Matching to Achieve Web Service Discovery and Composition. In CEC/EEE'06 [2967], pages 445–447, 2006. doi: 10.1109/CEC-EEE.2006.78. INSPEC Accession Number: 9199704. See also [318].
55. Selim G. Akl, Cristian S. Calude, Michael J. Dinneen, Grzegorz Rozenberg, and Todd Wareham, editors. Proceedings of the 6th International Conference on Unconventional Computation (UC'07), August 13–17, 2007, Kingston, Canada, volume 4618/2007 in Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 978-3-540-73553-3.
56. Jarmo Tapani Alander. An Indexed Bibliography of Genetic Algorithms in Signal and Image Processing. Technical Report 94-1-SIGNAL, University of Vaasa: Finland, May 18, 1998. Fully available at ftp://ftp.uwasa.fi/cs/report94-1/gaSIGNALbib.ps.Z and http://citeseer.ist.psu.edu/319835.html [accessed 2008-05-18].
57. Jarmo Tapani Alander, editor. Proceedings of the 1st Finnish Workshop on Genetic Algorithms and Their Applications (1FWGA), November 4–5, 1992, Helsinki University of Technology: Espoo, Finland. Helsinki University of Technology, Department of Computer Science: Helsinki, Finland. Technical Report TKO-A30, partly in Finnish, published 1993.
58. Jarmo Tapani Alander, editor. Proceedings of the Second Finnish Workshop on Genetic Algorithms and Their Applications (2FWGA), March 16–18, 1994, Vaasa, Finland. University of Vaasa, Department of Information Technology and Production Economics: Vaasa, Finland. isbn: 951-683-526-0. Fully available at ftp://ftp.uwasa.fi/cs/report94-2 [accessed 2008-07-24]. issn: 1237-7589. Technical Report 94-2.
59. Jarmo Tapani Alander, editor. Proceedings of the First Nordic Workshop on Genetic Algorithms (The Third Finnish Workshop on Genetic Algorithms, 3FWGA) (1NWGA), June 9–13, 1995, Vaasa, Finland, volume 2 in Proceedings of the University of Vaasa. University of Vaasa, Department of Information Technology and Production Economics: Vaasa, Finland. Fully available at ftp://ftp.uwasa.fi/cs/1NWGA [accessed 2008-07-24]. Technical Report 95-1.
60. Jarmo Tapani Alander, editor. Proceedings of the Second Nordic Workshop on Genetic Algorithms (2NWGA), August 19–23, 1996, Vaasa, Finland, volume 13 in Proceedings of the University of Vaasa. Fully available at ftp://ftp.uwasa.fi/cs/2NWGA [accessed 2008-07-24]. In collaboration with the Finnish Artificial Intelligence Society.
61. Jarmo Tapani Alander, editor. Proceedings of the Third Nordic Workshop on Genetic Algorithms (3NWGA), August 20–22, 1997, Helsinki, Finland. Finnish Artificial Intelligence Society (FAIS): Vantaa, Finland. Fully available at ftp://ftp.uwasa.fi/cs/3NWGA [accessed 2008-07-24].
62. Jarmo Tapani Alander, editor. Proceedings of the Fourth Nordic Workshop on Genetic Algorithms (4NWGA), July 27–31, 1998, Norwegian University of Science and Technology: Trondheim, Norway. Finnish Artificial Intelligence Society (FAIS): Vantaa, Finland. Seemingly didn't take place?
63. Enrique Alba Torres and J. Francisco Chicano. Evolutionary Algorithms in Telecommunications. In MELECON'06 [2388], pages 795–798, 2006. doi: 10.1109/MELCON.2006.1653218. Fully available at http://dea.brunel.ac.uk/ugprojects/docs/melecon06.pdf [accessed 2008-08-01].
64. Enrique Alba Torres and Bernabé Dorronsoro Díaz. Solving the Vehicle Routing Problem by Using Cellular Genetic Algorithms. In EvoCOP'04 [1115], pages 11–20, 2004. Fully available at http://www.lcc.uma.es/~eat/pdf/evocop04.pdf [accessed 2008-10-27]. See also [65].
65. Enrique Alba Torres and Bernabé Dorronsoro Díaz. Computing Nine New Best-So-Far Solutions for Capacitated VRP with a Cellular Genetic Algorithm. Information Processing Letters, 98(6):225–230, June 30, 2006, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/j.ipl.2006.02.006. See also [64].
66. Enrique Alba Torres and Bernabé Dorronsoro Díaz. Cellular Genetic Algorithms, volume 42 in Operations Research/Computer Science Interfaces Series. Springer-Verlag GmbH: Berlin, Germany, June 2008. isbn: 0387776095.
67. Enrique Alba Torres, Carlos Cotta, J. Francisco Chicano, and Antonio Jesús Nebro Urbaneja. Parallel Evolutionary Algorithms in Telecommunications: Two Case Studies. In CACIC'02 [2747], 2002. Fully available at http://www.lcc.uma.es/~eat/./pdf/cacic02.pdf [accessed 2008-08-01]. CiteSeerX: 10.1.1.8.4073.
68. Enrique Alba Torres, José García-Nieto, Javid Taheri, and Albert Zomaya. New Research in Nature Inspired Algorithms for Mobility Management in GSM Networks. In EvoWorkshops'08 [1051], pages 1–10, 2008. doi: 10.1007/978-3-540-78761-7_1.
69. Rudolf F. Albrecht, Colin R. Reeves, and Nigel C. Steele, editors. Proceedings of Artificial Neural Nets and Genetic Algorithms (ANNGA'93), April 14–16, 1993, Innsbruck, Austria. Springer New York: New York, NY, USA. isbn: 0-387-82459-6 and 3-211-82459-6. Google Books ID: ZYkAQAAIAAJ. OCLC: 27936988, 300357118, 443980789, 611581837, and 633734458. GBV-Identification (PPN): 04374902X and 126104662.
70. John Aldrich. R.A. Fisher and the Making of Maximum Likelihood 1912–1922. Discussion Paper Series in Economics and Econometrics, Statistical Science, 3(12):162–176, January 1995, Institute of Mathematical Statistics, University of Southampton. doi: 10.1214/ss/1030037906. Fully available at http://projecteuclid.org/euclid.ss/1030037906 [accessed 2007-09-15]. Zentralblatt MATH identifier: 0955.62525. Mathematical Reviews number (MathSciNet): 1617519.
71. Christos E. Alexakos, Konstantinos C. Giotopoulos, Eleni J. Thermogianni, Grigorios N. Beligiannis, and Spiridon D. Likothanassis. Integrating E-learning Environments with Computational Intelligence Assessment Agents. Proceedings of World Academy of Science, Engineering and Technology, 13:233–239, May 2006. Fully available at http://www.waset.org/pwaset/v13/v13-45.pdf [accessed 2009-03-31].
72. M. Montaz Ali, Charoenchai Khompatraporn, and Zelda B. Zabinsky. A Numerical Evaluation of Several Stochastic Algorithms on Selected Continuous Global Optimization Test Problems. Journal of Global Optimization, 31(4):635–672, April 2005, Springer Netherlands: Dordrecht, Netherlands. doi: 10.1007/s10898-004-9972-2.
73. Eugene L. Allgower and Kurt Georg, editors. Computational Solution of Nonlinear Systems of Equations – Proceedings of the 1988 SIAM-AMS Summer Seminar on Computational Solution of Nonlinear Systems of Equations, July 18–29, 1988, Colorado State University: Fort Collins, CO, USA, volume 26 in Lectures in Applied Mathematics (LAM). American Mathematical Society (AMS) Bookstore: Providence, RI, USA. isbn: 0-8218-1131-2. Google Books ID: UAabWwXBifoC. OCLC: 20992934, 243444170, 311687125, and 423088702. Published in 1990.
74. Jean-Marc Alliot, Evelyne Lutton, and Marc Schoenauer, editors. European Conference on Artificial Evolution (AE'94), September 20–23, 1994, ENAC: Toulouse, France. Cépaduès: Toulouse, France. isbn: 2854284119. Google Books ID: AhBWAAAACAAJ. OCLC: 34264284 and 258151166. Published in January 1995.
75. Jean-Marc Alliot, Evelyne Lutton, Edmund M. A. Ronald, Marc Schoenauer, and Dominique Snyers, editors. Selected Papers of the European Conference on Artificial Evolution (AE'95), September 4–6, 1995, Brest, France, volume 1063 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-61108-8. Published in 1996.
76. Riyad Alshammari, Peter I. Lichodzijewski, Malcolm Ian Heywood, and A. Nur Zincir-Heywood. Classifying SSH Encrypted Traffic with Minimum Packet Header Features using Genetic Programming. In GECCO'09-II [2343], pages 2539–2546, 2009. doi: 10.1145/1570256.1570358.
77. Lee Altenberg. The Schema Theorem and Price's Theorem. In FOGA 3 [2932], pages 23–49, 1994. Fully available at http://citeseer.ist.psu.edu/old/700230.html and http://dynamics.org/Altenberg/FILES/LeeSTPT.pdf [accessed 2009-07-10].
78. Lee Altenberg. Genome Growth and the Evolution of the Genotype-Phenotype Map. In Evolution and Biocomputation – Computational Models of Evolution [206]. Springer-Verlag GmbH: Berlin, Germany, 1995. CiteSeerX: 10.1.1.39.3876. Corrections and revisions, October 1997.
79. Lee Altenberg. NK Fitness Landscapes. In Handbook of Evolutionary Computation [171], Chapter B2.7.2. Oxford University Press, Inc.: New York, NY, USA, Institute of Physics Publishing Ltd. (IOP): Dirac House, Temple Back, Bristol, UK, and CRC Press, Inc.: Boca Raton, FL, USA, 1997. Fully available at http://dynamics.org/Altenberg/FILES/LeeNKFL.pdf and http://www.cmi.univ-mrs.fr/~pardoux/LeeNKFL.pdf [accessed 2010-08-26]. CiteSeerX: 10.1.1.79.1351.
80. Lee Altenberg. Fitness Distance Correlation Analysis: An Instructive Counterexample. In ICGA'97 [166], pages 57–64, 1997. Fully available at http://dynamics.org/Altenberg/PAPERS/FDCAAIC/ and http://www.cs.uml.edu/~giam/91.510/Papers/Altenberg1997.pdf [accessed 2008-07-20].
81. Guilherme Bastos Alvarenga and Geraldo Robson Mateus. Hierarchical Tournament Selection Genetic Algorithm for the Vehicle Routing Problem with Time Windows. In HIS'04 [1421], pages 410–415, 2004. doi: 10.1109/ICHIS.2004.53. INSPEC Accession Number: 8470883.
82. Shun-Ichi Amari, editor. Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN'00), July 24–27, 2000, Como, Italy. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7695-0619-4 and 0780365410. Google Books ID: JAlWAAAAMAAJ, i7RVAAAAMAAJ, iLhVAAAAMAAJ, and tglWAAAAMAAJ. OCLC: 49364554, 59509805, and 423975593.
83. Anita Amberg, Wolfgang Domschke, and Stefan Voß. Multiple Center Capacitated Arc Routing Problems: A Tabu Search Algorithm using Capacitated Trees. European Journal of Operational Research (EJOR), 124(2):360–376, July 16, 2000, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/S0377-2217(99)00170-8.
84. 43rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference and Exhibit, April 22–25, 2002, Denver, CO, USA. American Institute of Aeronautics and Astronautics (AIAA): Reston, VA, USA.
85. Proceedings of OMAE'04, 23rd International Conference on Offshore Mechanics and Arctic Engineering, June 20–25, 2004, Vancouver, BC, Canada. American Society of Mechanical Engineers (ASME): New York, NY, USA. isbn: 0791837386, 0-7918-3743-2, 0-7918-3744-0, and 0-7918-3745-9. Order No.: I706CD. LC Classification: TC1505 .I566 2004. Three volumes.
86. Ciro Amitrano, Luca Peliti, and M. Saber. Population Dynamics in a Spin-Glass Model of Chemical Evolution. Journal of Molecular Evolution, 29(6):513–525, December 1989, Springer New York: New York, NY, USA. doi: 10.1007/BF02602923.
87. Heni Ben Amor and Achim Rettinger. Intelligent Exploration for Genetic Algorithms: Using Self-Organizing Maps in Evolutionary Computation. In GECCO'05 [304], pages 1531–1538, 2005. doi: 10.1145/1068009.1068250. See also [88].
88. Heni Ben Amor and Achim Rettinger. Intelligent Exploration for Genetic Algorithms – Using Self-Organizing Maps in Evolutionary Computation. Fachberichte Informatik 1, Universität Koblenz-Landau, Institut für Informatik: Koblenz, Rhineland-Palatinate, Germany, February 4, 2005. Fully available at http://www.uni-koblenz-landau.de/koblenz/fb4/publications/fachberichte/fb2005/rr-1-2005.pdf [accessed 2011-08-06]. issn: 1860-4471. See also [87].
89. Michele Amoretti. A Framework for Evolutionary Peer-to-Peer Overlay Schemes. In EvoWorkshops'09 [1052], pages 61–70, 2009. doi: 10.1007/978-3-642-01129-0_7.
90. IBM PhD Student Symposium at ICSOC 2005, December 2005, Amsterdam, The Netherlands.
91. Proceedings of the 2011 International Conference on Systems, Man, and Cybernetics (SMC'11), October 9–12, 2011, Anchorage, AK, USA.
92. Edgar Anderson. The Irises of the Gaspé Peninsula. Bulletin of the American Iris Society, 59:2–5, 1935, American Iris Society (AIS): Philadelphia, PA, USA. See also [305, 930].
93. Johan Andersson. A Survey of Multiobjective Optimization in Engineering Design. Technical Report LiTH-IKP-R-1097, Linköping University, Department of Mechanical Engineering: Linköping, Sweden, 2000. Fully available at http://www.lania.mx/~ccoello/andersson00.pdf.gz [accessed 2009-06-22]. CiteSeerX: 10.1.1.8.5638.
94. Philip Warren Anderson. Suggested Model for Prebiotic Evolution: The Use of Chaos. Proceedings of the National Academy of Sciences of the United States of America (PNAS), 80(11):3386–3390, June 1, 1983, National Academy of Sciences: Washington, DC, USA. Fully available at http://www.pnas.org/cgi/reprint/80/11/3386.pdf and http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=394048 [accessed 2008-07-02]. PubMed ID: 6190177.
95. David Andre. Evolution of Mapmaking Ability: Strategies for the Evolution of Learning, Planning, and Memory using Genetic Programming. In CEC'94 [1891], pages 250–255, volume 1, 1994. See also [96].
96. David Andre. The Evolution of Agents that Build Mental Models and Create Simple Plans
Using Genetic Programming. In ICGA'95 [883], pages 248–255, 1995. See also [95].
97. David Andre. Learning and Upgrading Rules for an Optical Character Recognition System Using Genetic Programming. In Handbook of Evolutionary Computation [171]. Oxford University Press, Inc.: New York, NY, USA, Institute of Physics Publishing Ltd. (IOP): Dirac House, Temple Back, Bristol, UK, and CRC Press, Inc.: Boca Raton, FL, USA, 1997.
98. David Andre and Astro Teller. Evolving Team Darwin United. In RoboCup-98: Robot Soccer World Cup II [129], pages 346–351, 1998. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/Andre_1999_ETD.html and http://www.cs.cmu.edu/afs/cs/usr/astro/public/papers/Teller_Astro.ps [accessed 2010-08-21]. CiteSeerX: 10.1.1.36.2696.
99. David Andre, Forrest H. Bennett III, and John R. Koza. Discovery by Genetic Programming of a Cellular Automata Rule that is Better than any Known Rule for the Majority Classification Problem. pages 3–11. Fully available at http://citeseer.ist.psu.edu/33008.html [accessed 2007-08-01]. See also [100].
100. David Andre, Forrest H. Bennett III, and John R. Koza. Evolution of Intricate Long-Distance Communication Signals in Cellular Automata using Genetic Programming. In Artificial Life V [1911], volume 1, 1996. Fully available at http://citeseer.ist.psu.edu/andre96evolution.html and http://www.genetic-programming.com/jkpdf/alife1996gkl.pdf [accessed 2007-10-03]. See also [99].
101. Peter John Angeline. Subtree Crossover: Building Block Engine or Macromutation? In GP'97 [1611], pages 9–17, 1997. Fully available at http://ncra.ucd.ie/COMP30290/SubtreeXoverBuildingBlockorMacromutation_angeline_gp97.ps [accessed 2010-08-20].
102. Peter John Angeline. A Historical Perspective on the Evolution of Executable Structures. Fundamenta Informaticae – Annales Societatis Mathematicae Polonae, Series IV, 35(1-2):179–195, August 1998, IOS Press: Amsterdam, The Netherlands and European Association for Theoretical Computer Science (EATCS): Rio, Greece. Fully available at http://www.natural-selection.com/publications_1998.html [accessed 2009-07-09]. CiteSeerX: 10.1.1.15.6779. Special volume: Evolutionary Computation. See also [105].
103. Peter John Angeline. Evolutionary Optimization versus Particle Swarm Optimization: Philosophy and Performance Differences. In EP'98 [2208], pages 601–610, 1998. doi: 10.1007/BFb0040811.
104. Peter John Angeline. Subtree Crossover Causes Bloat. In GP'98 [1612], pages 745–752, 1998.
105. Peter John Angeline. A Historical Perspective on the Evolution of Executable Structures. In Evolutionary Computation [863]. IOS Press: Amsterdam, The Netherlands, 1999. See also [102].
106. Peter John Angeline and Kenneth E. Kinnear, Jr, editors. Advances in Genetic Programming II, Complex Adaptive Systems, Bradford Books. MIT Press: Cambridge, MA, USA, October 26, 1996. isbn: 0-262-01158-1. Google Books ID: c3G7QgAACAAJ. OCLC: 29595260 and 59664755. GBV-Identification (PPN): 282212574.
107. Peter John Angeline and Jordan B. Pollack. Coevolving High-Level Representations. In Artificial Life III [1667], pages 55–71, 1992. Fully available at http://citeseer.ist.psu.edu/angeline94coevolving.html and http://demo.cs.brandeis.edu/papers/alife3.pdf [accessed 2007-10-05].
108. Peter John Angeline and Jordan B. Pollack. Evolutionary Module Acquisition. In EP'93 [943], pages 154–163, 1993. Fully available at http://www.demo.cs.brandeis.edu/papers/ep93.pdf and https://eprints.kfupm.edu.sa/38410/ [accessed 2009-07-11]. CiteSeerX: 10.1.1.50.938 and 10.1.1.57.3280.
109. Peter John Angeline, Robert G. Reynolds, John Robert McDonnell, and Russel C. Eberhart, editors. Proceedings of the 6th International Conference on Evolutionary Programming (EP'97), April 13–16, 1997, Indianapolis, IN, USA, volume 1213/1997 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 354062788X. Google Books ID: 4cviSAAACAAJ. OCLC: 36629908, 247514632, 316851036, 441306808, 466121928, and 473591147.
110. Peter John Angeline, Zbigniew Michalewicz, Marc Schoenauer, Xin Yao, and Ali M. S. Zalzala, editors. Proceedings of the IEEE Congress on Evolutionary Computation (CEC'99), July 6–9, 1999, Mayflower Hotel: Washington, DC, USA, volume
1-3. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7803-5536-9 and 0-7803-5537-7. Google Books ID: GJtVAAAAMAAJ, YZpVAAAAMAAJ, hM6nAAAACAAJ, and r5hVAAAAMAAJ. OCLC: 42253982 and 64431060. Library of Congress Control Number (LCCN): 99-61143. INSPEC Accession Number: 6338817. Catalogue no.: 99TH8406. CEC-99 – A joint meeting of the IEEE, Evolutionary Programming Society, Galesia, and the IEE.
111. Shoshana Anily and Awi Federgruen. Ergodicity in Parametric Non-Stationary Markov Chains: An Application to Simulated Annealing Methods. Operations Research, 35(6):867–874, November–December 1987, Institute for Operations Research and the Management Sciences (INFORMS): Linthicum, MD, USA and HighWire Press (Stanford University): Cambridge, MA, USA. JSTOR Stable ID: 171435.
112. Pavlos Antoniou, Andreas Pitsilides, Tim Blackwell, and Andries P. Engelbrecht. Employing the Flocking Behavior of Birds for Controlling Congestion in Autonomous Decentralized Networks. In CEC'09 [1350], pages 1753–1761, 2009. doi: 10.1109/CEC.2009.4983153. INSPEC Accession Number: 10688748.
113. Hendrik James Antonisse. A Grammar-Based Genetic Algorithm. In FOGA'90 [2562], pages 193–204, 1990.
114. 19th European Conference on Machine Learning and the 12th European Conference on Principles and Practice of Knowledge Discovery in Databases (ECML PKDD'08), September 15–19, 2008, Antwerp, Belgium.
115. Kamel Aouiche and Jérôme Darmont. Data Mining-Based Materialized View and Index Selection in Data Warehouses. Journal of Intelligent Information Systems, 33(1):65–93, August 2009, Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/s10844-009-0080-0.
116. Kamel Aouiche, Pierre-Emmanuel Jouve, and Jérôme Darmont. Clustering-Based Materialized View Selection in Data Warehouses. In ADBIS'06 [1828], pages 81–95, 2009. doi: 10.1007/11827252_9. Fully available at http://hal.archives-ouvertes.fr/docs/00/32/06/24/PDF/adbis_06.pdf and http://hal.archives-ouvertes.fr/docs/00/32/06/24/PS/adbis_06.ps [accessed 2011-04-12]. CiteSeerX: 10.1.1.95.8281. arXiv ID: cs/0703114v1.
117. Chatchawit Aporntewan and Prabhas Chongstitvatana. A Hardware Implementation of the Compact Genetic Algorithm. In CEC'01 [1334], pages 624–629, volume 1, 2001. doi: 10.1109/CEC.2001.934449. Fully available at http://pioneer.netserv.chula.ac.th/~achatcha/Publications/0004.pdf [accessed 2009-08-21]. INSPEC Accession Number: 7006160.
118. David L. Applegate, Robert E. Bixby, Vasek Chvatal, and William J. Cook. The Traveling Salesman Problem: A Computational Study, Princeton Series in Applied Mathematics. Princeton University Press: Princeton, NJ, USA, February 2007. isbn: 0-691-12993-2. Google Books ID: nmF4rVNJMVsC.
119. Hamid R. Arabnia, editor. Proceedings of the 2010 International Conference on Genetic and Evolutionary Methods (GEM'10), July 12–15, 2010, Monte Carlo Resort: Las Vegas, NV, USA.
120. Hamid R. Arabnia and Youngsong Mun, editors. Proceedings of the 2008 International Conference on Genetic and Evolutionary Methods (GEM'08), June 14–17, 2008, Monte Carlo Resort: Las Vegas, NV, USA. CSREA Press: Las Vegas, NV, USA. isbn: 1-60132-069-8.
121. Hamid R. Arabnia and Ashu M. G. Solo, editors. Proceedings of the 2009 International Conference on Genetic and Evolutionary Methods (GEM'09), July 13–16, 2009, Monte Carlo Resort: Las Vegas, NV, USA. CSREA Press: Las Vegas, NV, USA. isbn: 1-60132-106-6.
122. Hamid R. Arabnia, Jack Y. Yang, and Mary Qu Yang, editors. Proceedings of the 2007 International Conference on Genetic and Evolutionary Methods (GEM'07), June 25–28, 2007, Las Vegas, NV, USA. CSREA Press: Las Vegas, NV, USA. isbn: 1-60132-038-8.
123. Victoria S. Aragon and Susana C. Esquivel. An Evolutionary Algorithm to Track Changes of Optimum Value Locations in Dynamic Environments. Journal of Computer Science & Technology (JCS&T), 4(3(12)):127–133, October 2004, Iberoamerican Science & Technology Education Consortium (ISTEC; University of New Mexico): Albuquerque, NM, USA. Fully available at http://journal.info.unlp.edu.ar/journal/journal12/papers/JCST-Oct04-1.pdf [accessed 2009-07-13]. Invited paper.
124. Edoardo Ardizzone, Salvatore Gaglio, and Filippo Sorbello, editors. Proceedings of the 2nd Congress of the Italian Association for Artificial Intelligence (AI*IA) on Trends in Artificial
Intelligence (AIIA'91), October 29, 1991, Palermo, Italy, volume 549 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 0387547126 and 3-540-54712-6. OCLC: 150398646, 311526865, 320328143, 440934107, 465123834, and 466123701.
125. Maribel G. Arenas, Pierre Collet, Ágoston E. Eiben, Mark Jelasity, Juan Julian Merelo-Guervos, Ben Paechter, Mike Preuß, and Marc Schoenauer. A Framework for Distributed Evolutionary Algorithms. In PPSN VII [1870], pages 665–675, 2002. Fully available at http://citeseer.ist.psu.edu/arenas02framework.html [accessed 2008-04-06].
126. Rubén Armañanzas, Iñaki Inza, Roberto Santana, Yvan Saeys, Jose Luis Flores, Jose Antonio Lozano, Yves Van de Peer, Rosa Blanco, Víctor Robles, Concha Bielza, and Pedro Larrañaga. A Review of Estimation of Distribution Algorithms in Bioinformatics. BioData Mining, 1(6), 2008, BioMed Central Ltd.: London, UK. doi: 10.1186/1756-0381-1-6. Fully available at http://www.biodatamining.org/content/1/1/6 [accessed 2009-08-28]. PubMed ID: 18822112.
127. Grenville Armitage, Mark Claypool, and Philip Branch. Network Latency, Jitter and Loss. In Networking and Online Games: Understanding and Engineering Multiplayer Internet Games [128], Chapter 5, pages 69–82. John Wiley & Sons Ltd.: New York, NY, USA, 2006. doi: 10.1002/047003047X.ch5.
128. Grenville Armitage, Mark Claypool, and Philip Branch. Networking and Online Games: Understanding and Engineering Multiplayer Internet Games. John Wiley & Sons Ltd.: New York, NY, USA, 2006. doi: 10.1002/047003047X. isbn: 0470018577. Google Books ID: AQ7bIwAACAAJ and mK9QAAAAMAAJ. OCLC: 62896496 and 222232694.
129. Minoru Asada and Hiroaki Kitano, editors. RoboCup-98: Robot Soccer World Cup II, July 1998, Paris, France, volume 1604 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-66320-7. Published in 1999.
130. B. Ashadevi and R. Balasubramanian. Cost Effective Approach for Materialized View Selection in Data Warehousing Environment. International Journal of Computer Science and Network Security (IJCSNS), 8(10):236–242, 2008, IJCSNS: Seoul, South Korea. Fully available at http://paper.ijcsns.org/07_book/200810/20081036.pdf [accessed 2011-04-12].
131. B. Ashadevi and R. Balasubramanian. Optimized Cost Effective Approach for Selection of Materialized Views in Data Warehousing. Journal of Computer Science & Technology (JCS&T), 9(1):21–26, April 2009, Iberoamerican Science & Technology Education Consortium (ISTEC; University of New Mexico): Albuquerque, NM, USA. Fully available at http://journal.info.unlp.edu.ar/journal/journal25/papers/JCST-Apr09-4.pdf [accessed 2011-04-12].
132. Workshop on Practical Combinatorial Optimization (I ALIO/EURO), August 14–18, 1989,
Rio de Janeiro, RJ, Brazil. Asociación Latino-Iberoamericana de Investigación Operativa
(ALIO): Rio de Janeiro, RJ, Brazil and Association of European Operational Research Soci-
eties (EURO): Brussels, Belgium.
133. Workshop on Practical Combinatorial Optimization (II ALIO/EURO), November 11–15,
1996, Catholic University: Valparaíso, Chile. Asociación Latino-Iberoamericana de Inves-
tigación Operativa (ALIO): Rio de Janeiro, RJ, Brazil and Association of European Opera-
tional Research Societies (EURO): Brussels, Belgium.
134. Workshop on Applied Combinatorial Optimization (III ALIO/EURO), November 17, 1999,
Erice, Italy. Asociación Latino-Iberoamericana de Investigación Operativa (ALIO): Rio de
Janeiro, RJ, Brazil and Association of European Operational Research Societies (EURO):
Brussels, Belgium.
135. Workshop on Applied Combinatorial Optimization (IV ALIO/EURO), November 4–6, 2002,
Pucón, Chile. Asociación Latino-Iberoamericana de Investigación Operativa (ALIO): Rio de
Janeiro, RJ, Brazil and Association of European Operational Research Societies (EURO):
Brussels, Belgium.
136. The Fifth ALIO/EURO Conference on Combinatorial Optimization (V ALIO/EURO), Octo-
ber 26–28, 2005, Paris, France. Asociación Latino-Iberoamericana de Investigación Operativa
(ALIO): Rio de Janeiro, RJ, Brazil and Association of European Operational Research Soci-
eties (EURO): Brussels, Belgium.
137. Workshop on Applied Combinatorial Optimization (VI ALIO/EURO), December 15–17,
2008, Buenos Aires, Argentina. Asociación Latino-Iberoamericana de Investigación Oper-
ativa (ALIO): Rio de Janeiro, RJ, Brazil and Association of European Operational Research
Societies (EURO): Brussels, Belgium.
138. Workshop on Applied Combinatorial Optimization (VII ALIO/EURO), May 4–6, 2011, Uni-
versity of Porto, Faculty of Sciences: Porto, Portugal. Asociación Latino-Iberoamericana de
Investigación Operativa (ALIO): Rio de Janeiro, RJ, Brazil and Association of European
Operational Research Societies (EURO): Brussels, Belgium.
139. Proceedings of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing
Systems (CHI'08), April 5–10, 2008, Florence, Italy. Association for Computing Machinery
(ACM): New York, NY, USA.
140. Proceedings of the 17th International World Wide Web Conference (WWW'08), April 21–25,
2008, Beijing International Convention Center: Beijing, China. Association for Computing
Machinery (ACM): New York, NY, USA.
141. Mikhail J. Atallah, editor. Algorithms and Theory of Computation Handbook, Chapman and
Hall/CRC Applied Algorithms and Data Structures Series. CRC Press, Inc.: Boca Raton,
FL, USA, first edition, 1998. isbn: 0-8493-2649-4. Google Books ID: fyTk5AwnnoQC.
OCLC: 39606815, 54016189, 222972701, and 471519169. Library of Congress Con-
trol Number (LCCN): 98038016. GBV-Identification (PPN): 247401749. LC Classifica-
tion: QA76.9.A43 A43 1999. Produced By Suzanne Lassandro.
142. Proceedings of the Fifth Joint Conference on Information Science (JCIS'00), February 27–
March 3, 2000, Atlantic City, NJ, USA.
143. Paolo Atzeni, Albertas Čaplinskas, and Hannu Jaakkola, editors. Proceedings of the 12th
East European Conference on Advances in Databases and Information Systems (ADBIS),
September 5–9, 2008, Pori, Finland, volume 5207 in Information Systems and Application,
incl. Internet/Web and HCI (SL 3), Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-85713-6. isbn: 3-540-85712-5.
Library of Congress Control Number (LCCN): 2008933208.
144. Wai-Ho Au, Keith C. C. Chan, and Xin Yao. A Novel Evolutionary Data Mining Algorithm
with Applications to Churn Prediction. IEEE Transactions on Evolutionary Computation
(IEEE-EC), 7(6):532–545, December 2003, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/TEVC.2003.819264. CiteSeerˣ: 10.1.1.10.8230.
145. Anne Auger and Olivier Teytaud. Continuous Lunches are Free Plus the Design of
Optimal Optimization Algorithms. Rapports de Recherche inria-00369788, Insti-
tut National de Recherche en Informatique et en Automatique (INRIA), March 21,
2009. Fully available at http://hal.inria.fr/docs/00/36/97/88/PDF/
ccflRevisedVersionAugerTeytaud.pdf [accessed 2010-10-29]. See also [146].
146. Anne Auger and Olivier Teytaud. Continuous Lunches are Free Plus the Design of Optimal
Optimization Algorithms. Algorithmica, 57(1):121–146, May 2010, Springer-Verlag GmbH:
Berlin, Germany. doi: 10.1007/s00453-008-9244-5. See also [145].
147. Anne Auger, Nikolaus Hansen, Nikolas Mauny, Raymond Ros, and Marc Schoenauer. Bio-
Inspired Continuous Optimization: The Coming of Age. In CEC'07 [1343], 2007. Fully
available at http://www.lri.fr/~marc/schoenauerCEC07.pdf [accessed 2009-10-20]. In-
vited Talk at CEC'2007. See also [1181].
148. P. Augerat, José Manuel Belenguer Ribera, Enrique Benavent, A. Corberán, D. Naddef, and
G. Rinaldi. Computational Results with a Branch and Cut Code for the Capacitated Vehicle
Routing Problem. Technical Report Research Report 949-M, Université Joseph Fourier:
Grenoble, France, 1995.
149. Douglas A. Augusto and Hélio José Corrêa Barbosa. Symbolic Regression via Genetic
Programming. In SBRN'00 [981], pages 173–178, 2000. doi: 10.1109/SBRN.2000.889734.
INSPEC Accession Number: 6813089.
150. Lerina Aversano, Massimiliano Di Penta, and Kunal Taneja. A Genetic Programming Approach
to Support the Design of Service Compositions. International Journal of Computer Systems
Science and Engineering (CSSE), 21(4):247–254, July 2006, CRL Publishing Limited: Le-
icester, UK. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/
Aversano_2006_IJCSSE.html and http://www.rcost.unisannio.it/mdipenta/
papers/csse06.pdf [accessed 2011-01-19]. CiteSeerˣ: 10.1.1.145.843.
151. Erel Avineri, Mario Köppen, Keshav Dahal, Yos Sunitiyoso, and Rajkumar Roy, editors. Ap-
plications of Soft Computing: 12th Online World Conference on Soft Computing in Industrial
Applications (WSC12), October 16–26, 2007, volume 52 in Advances in Intelligent and Soft
Computing. Springer-Verlag: Berlin/Heidelberg. doi: 10.1007/978-3-540-88079-0. Library of
Congress Control Number (LCCN): 2008935493.
152. Mordecai Avriel. Nonlinear Programming: Analysis and Methods. Dover Publica-
tions: Mineola, NY, USA, September 16, 2003. isbn: 0-486-43227-0. Google Books
ID: 815RAAAAMAAJ and byF4Xb1QbvMC.
153. Proceedings of the 3rd International Conference on Bio-Inspired Models of Network, Informa-
tion, and Computing Systems (BIONETICS'08), November 25–28, 2008, Awaji Yumebutai
International Conference Center: Awaji City, Hyogo, Japan.
154. Mehmet E. Aydin, Jun Yang, and Jie Zhang. A Comparative Investigation on Heuristic
Optimization of WCDMA Radio Networks. In EvoWorkshops'07 [1050], pages 111–120,
2007. doi: 10.1007/978-3-540-71805-5_12.
155. Mehmet E. Aydin, Raymond S. K. Kwan, Cyril Leung, and Jie Zhang. Multiuser Scheduling
in HSDPA with Particle Swarm Optimization. In EvoWorkshops'09 [1052], pages 71–80,
2009. doi: 10.1007/978-3-642-01129-0_8.
156. Antonia Azzini, Matteo De Felice, Sandro Meloni, and Andrea G. B. Tettamanzi. Soft
Computing Techniques for Internet Backbone Traffic Anomaly Detection. In EvoWork-
shops'09 [1052], pages 99–104, 2009. doi: 10.1007/978-3-642-01129-0_12.
B
157. Sara Baase and Allen Van Gelder. Computer Algorithms: Introduction to Design and Analy-
sis. Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, 3rd edition, Novem-
ber 1999. isbn: 0-201-61244-5. Google Books ID: GInCQgAACAAJ. OCLC: 40813345 and
474499890. Library of Congress Control Number (LCCN): 99014185. GBV-Identification
(PPN): 301465576 and 314912401. LC Classification: QA76.9.A43 B33 2000.
158. Jaume Bacardit i Peñarroya. Pittsburgh Genetics-Based Machine Learning in the Data Min-
ing Era: Representations, Generalization, and Run-time. PhD thesis, Ramon Llull Univer-
sity: Barcelona, Catalonia, Spain, October 8, 2004, Josep Maria Garrell i Guiu, Advisor.
Fully available at http://www.cs.nott.ac.uk/~jqb/publications/thesis.pdf [accessed
2007-09-12].
159. Jaume Bacardit i Peñarroya. Analysis of the Initialization Stage of a Pittsburgh
Approach Learning Classifier System. In GECCO'05 [304], pages 1843–1850, 2005.
doi: 10.1145/1068009.1068321. Fully available at http://www.cs.nott.ac.uk/~jqb/
publications/gecco2005.pdf [accessed 2007-09-12].
160. Jaume Bacardit i Peñarroya and Martin V. Butz. Data Mining in Learning Classi-
fier Systems: Comparing XCS with GAssist. In IWLCS 03-05 [1589], pages 282–290,
2007. doi: 10.1007/978-3-540-71231-2_19. Fully available at http://www.illigal.
uiuc.edu/pub/papers/IlliGALs/2004030.pdf and http://www.psychologie.
uni-wuerzburg.de/IWLCS/abstracts/IWLCS04-Bacardit_Butz.pdf [accessed 2010-07-02].
161. Jaume Bacardit i Peñarroya and Natalio Krasnogor. Smart Crossover Operator with Multiple
Parents for a Pittsburgh Learning Classifier System. In GECCO'06 [1516], pages 1441–1448,
2006. doi: 10.1145/1143997.1144235. Fully available at http://www.cs.nott.ac.uk/
~jqb/publications/gecco2006-sx.pdf [accessed 2007-09-12].
162. Jaume Bacardit i Peñarroya and Natalio Krasnogor. A Mixed Discrete-
Continuous Attribute List Representation for Large Scale Classification Domains.
In GECCO'09-I [2342], pages 1155–1162, 2009. doi: 10.1145/1569901.1570057.
Fully available at http://www.asap.cs.nott.ac.uk/publications/pdf/
AMixedDiscreteContinuousAttributeListRepresentation.pdf [accessed 2009-09-05].
163. Paul Gustav Heinrich Bachmann. Die Analytische Zahlentheorie / Dargestellt von Paul Bach-
mann, volume Zweiter Theil in Zahlentheorie: Versuch einer Gesamtdarstellung dieser Wis-
senschaft in ihren Haupttheilen. University of Michigan Library, Scholarly Publishing Office:
Ann Arbor, MI, USA, 1894–2004. isbn: 1418169633, 141818540X, and 978-1418169633.
Fully available at http://gallica.bnf.fr/ark:/12148/bpt6k994750 [accessed 2008-05-20].
Google Books ID: SfKYPQAACAAJ, SvKYPQAACAAJ, and ujyhGQAACAAJ.
164. George Baciu, Yingxu Wang, Yiyu Yao, Witold Kinsner, Keith C. C. Chan, and Lotfi A.
Zadeh, editors. Cognitive Computing and Semantic Mining – Proceedings of the 8th IEEE
International Conference on Cognitive Informatics (ICCI'09), June 15–17, 2009, Hong Kong
Polytechnic University: Kowloon, Hong Kong / Xiānggǎng, China. IEEE Computer Society:
Piscataway, NJ, USA. isbn: 1-4244-4642-2. Google Books ID: MTx3PwAACAAJ.
165. Thomas Bäck. Generalized Convergence Models for Tournament- and (µ, λ)-Selection. In
ICGA'95 [883], pages 2–8, 1995. CiteSeerˣ: 10.1.1.56.5969.
166. Thomas Bäck, editor. Proceedings of The Seventh International Conference on Genetic Al-
gorithms (ICGA'97), July 19–23, 1997, Michigan State University: East Lansing, MI, USA.
Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-487-1. Google
Books ID: eP1QAAAAMAAJ and fI0BAAAACAAJ. OCLC: 37024995 and 247130717.
167. Thomas Bäck. Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evo-
lutionary Programming, Genetic Algorithms. Oxford University Press, Inc.: New York,
NY, USA, January 1996. isbn: 0-19-509971-0. Google Books ID: 6MOqAAAACAAJ and
EaN7kvl5coYC. OCLC: 32350061 and 246886085. Library of Congress Control Number
(LCCN): 95013506. GBV-Identification (PPN): 185320007 and 27885365X. LC Classifi-
cation: QA402.5 .B333 1996.
168. Thomas Bäck and Ulrich Hammel. Evolution Strategies Applied to Perturbed Objective
Functions. In CEC'94 [1891], pages 40–45, volume 1, 1994. doi: 10.1109/ICEC.1994.350045.
Fully available at http://citeseer.ist.psu.edu/16973.html [accessed 2008-07-19].
169. Thomas Bäck and Hans-Paul Schwefel. An Overview of Evolutionary Algorithms for Pa-
rameter Optimization. Evolutionary Computation, 1(1):1–23, Spring 1993, MIT Press:
Cambridge, MA, USA. doi: 10.1162/evco.1993.1.1.1. Fully available at http://www.
santafe.edu/events/workshops/images/7/7e/Back-ec93.pdf and www.robots.
ox.ac.uk/~cvrg/trinity2002/EC/EAs.ps.gz [accessed 2009-10-18].
170. Thomas Bäck, Frank Hoffmeister, and Hans-Paul Schwefel. A Survey of Evolution Strate-
gies. In ICGA'91 [254], pages 2–9, 1991. Fully available at http://130.203.133.121:
8080/viewdoc/summary?doi=10.1.1.42.3375, http://bi.snu.ac.kr/Info/EC/
A%20Survey%20of%20Evolution%20Strategies.pdf, and http://www6.uniovi.
es/pub/EC/GA/papers/icga91.ps.gz [accessed 2009-09-18]. CiteSeerˣ: 10.1.1.42.3375.
171. Thomas Bäck, David B. Fogel, and Zbigniew Michalewicz, editors. Handbook of Evolutionary
Computation, Computational Intelligence Library. Oxford University Press, Inc.: New York,
NY, USA, Institute of Physics Publishing Ltd. (IOP): Dirac House, Temple Back, Bristol, UK,
and CRC Press, Inc.: Boca Raton, FL, USA, January 1, 1997. isbn: 0-7503-0392-1 and
0-7503-0895-8. Google Books ID: kgqGQgAACAAJ, n5nuiIZvmpAC, and neKNGAAACAAJ.
OCLC: 173074676, 327018351, and 327018351. Library of Congress Control Number
(LCCN): 97004461. GBV-Identification (PPN): 224708430 and 364589272. LC Classifi-
cation: QA76.87 .H357 1997.
172. Thomas Bäck, Ulrich Hammel, and Hans-Paul Schwefel. Evolutionary Computation:
Comments on the History and Current State. IEEE Transactions on Evolution-
ary Computation (IEEE-EC), 1(1):3–17, April 1997, IEEE Computer Society: Wash-
ington, DC, USA. doi: 10.1109/4235.585888. Fully available at http://sci2s.
ugr.es/docencia/doctobio/EC-History-IEEETEC-1-1-1997.pdf [accessed 2009-08-05].
CiteSeerˣ: 10.1.1.6.5943. INSPEC Accession Number: 5611369.
173. Thomas Bäck, Zbigniew Michalewicz, and Xin Yao, editors. IEEE International Confer-
ence on Evolutionary Computation (CEC'97), April 13–16, 1997, Indianapolis, IN, USA.
IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7803-3949-5. Google Books
ID: 6JZVAAAAMAAJ and so7XHQAACAAJ. INSPEC Accession Number: 5577833.
174. Thomas Bäck, David B. Fogel, and Zbigniew Michalewicz, editors. Evolutionary Compu-
tation 1: Basic Algorithms and Operators. Institute of Physics Publishing Ltd. (IOP):
Dirac House, Temple Back, Bristol, UK, January 2000. isbn: 0750306645. Google Books
ID: 4HMYCq9US78C.
175. Thomas Bäck, David B. Fogel, and Zbigniew Michalewicz, editors. Evolutionary Computation
2: Advanced Algorithms and Operators. Institute of Physics Publishing Ltd. (IOP): Dirac
House, Temple Back, Bristol, UK, November 2000. isbn: 0750306653.
176. Jeffrey L. Bada, C. Bigham, and Stanley L. Miller. Impact Melting of Frozen Oceans on
the Early Earth: Implications for the Origin of Life. Proceedings of the National Academy
of Science of the United States of America (PNAS), 91(4):1248–1250, February 15, 1994,
National Academy of Sciences: Washington, DC, USA. doi: 10.1073/pnas.91.4.1248. PubMed
ID: 11539550.
177. Liviu Badea and Monica Stanciu. Refinement Operators Can Be (Weakly) Perfect. In ILP-
99 [851], pages 21–32, 1999. doi: 10.1007/3-540-48751-4_4. Fully available at http://www.
ai.ici.ro/papers/ilp99.ps.gz [accessed 2008-02-22].
178. P. Badeau, Michel Gendreau, F. Guertin, Jean-Yves Potvin, and Éric D. Taillard. A Parallel
Tabu Search Heuristic for the Vehicle Routing Problem with Time Windows. Transportation
Research Part C: Emerging Technologies, 5(2):109–122, April 1997, Elsevier Science Publish-
ers B.V.: Amsterdam, The Netherlands and Pergamon Press: Oxford, UK.
179. Philipp Andreas Baer and Roland Reichle. Communication and Collaboration in Heteroge-
neous Teams of Soccer Robots. In Robotic Soccer [1740], Chapter 1, pages 1–28. I-Tech Educa-
tion and Publishing: Vienna, Austria, 2007. Fully available at http://s.i-techonline.
com/Book/Robotic-Soccer/ISBN978-3-902613-21-9-rs01.pdf [accessed 2008-04-23].
180. John Daniel Bagley. The Behavior of Adaptive Systems which Employ Genetic and Cor-
relation Algorithms. PhD thesis, University of Michigan: Ann Arbor, MI, USA, De-
cember 1967. Fully available at http://deepblue.lib.umich.edu/handle/2027.
42/3354 [accessed 2010-08-03]. Order No.: AAI6807556. University Microfilms No.: 68-7556.
ID: bab0779.0001.001. ORA PROJECT.01-i252. See also [181].
181. John Daniel Bagley. The Behavior of Adaptive Systems which employ Genetic and Correla-
tion Algorithms. Dissertation Abstracts International (DAI), 28(12):5160B, ProQuest: Ann
Arbor, MI, USA. See also [180].
182. Ruzena Bajcsy, editor. Proceedings of the 13th International Joint Conference on Artificial In-
telligence (IJCAI'93-I), August 28–September 3, 1993, Chambéry, France, volume 1. Morgan
Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-300-X. Fully avail-
able at http://dli.iiit.ac.in/ijcai/IJCAI-93-VOL1/CONTENT/content.htm [ac-
cessed 2008-04-01]. See also [183, 927, 2259].
183. Ruzena Bajcsy, editor. Proceedings of the 13th International Joint Conference on Artificial In-
telligence (IJCAI'93-II), August 28–September 3, 1993, Chambéry, France, volume 2. Morgan
Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-300-X. Fully avail-
able at http://dli.iiit.ac.in/ijcai/IJCAI-93-VOL2/CONTENT/content.htm [ac-
cessed 2008-04-01]. See also [182, 927, 2259].
184. Javier Bajo, Emilio S. Corchado, Álvaro Herrero, and Juan M. Corchado, editors. Proceed-
ings of the 1st International Workshop on Hybrid Artificial Intelligence Systems (HAIS'06),
October 27, 2006, Ribeirão Preto, SP, Brazil. University of Salamanca: Salamanca, Spain.
isbn: 84-934181-9-6.
185. Per Bak and Kim Sneppen. Punctuated Equilibrium and Criticality in a Simple Model of
Evolution. Physical Review Letters, 71(24):4083–4086, December 13, 1993, American Phys-
ical Society: College Park, MD, USA. doi: 10.1103/PhysRevLett.71.4083. Fully available
at http://prola.aps.org/abstract/PRL/v71/i24/p4083_1 [accessed 2008-08-24].
186. Per Bak, Chao Tang, and Kurt Wiesenfeld. Self-Organized Criticality. Physical Review
A, 38(1):364–374, July 1, 1988, American Physical Society: College Park, MD, USA.
doi: 10.1103/PhysRevA.38.364. Fully available at http://prola.aps.org/abstract/
PRA/v38/i1/p364_1 [accessed 2008-08-23].
187. James E. Baker. Adaptive Selection Methods for Genetic Algorithms. In ICGA'85 [1131],
pages 101–111, 1985.
188. James E. Baker. Reducing Bias and Inefficiency in the Selection Algorithm. In
ICGA'87 [1132], pages 14–21, 1987.
189. László Bakó. Real-time Classification of Datasets with Hardware Embedded Neuromorphic
Neural Networks. Briefings in Bioinformatics, 11(3):348–363, 2010, Oxford University Press,
Inc.: New York, NY, USA. doi: 10.1093/bib/bbp066.
190. Roberto Baldacci, Maria Battarra, and Daniele Vigo. Routing a Heterogeneous Fleet
of Vehicles. Technical Report DEIS OR.INGCE 2007/1, Università degli Studi di
Bologna, Dipartimento di Elettronica, Informatica e Sistemistica (DEIS), Operations
Research Group (Ricerca Operativa): Bologna, Emilia-Romagna, Italy, January 2007.
Fully available at or.ingce.unibo.it/ricerca/technical-reports-or-ingce/
papers/bbv_hvrp.pdf [accessed 2011-10-12].
191. James Mark Baldwin. A New Factor in Evolution. The American Naturalist, 30(354):441–451,
June 1896, University of Chicago Press for The American Society of Naturalists: Chicago,
IL, USA. Fully available at http://www.brocku.ca/MeadProject/Baldwin/Baldwin_
1896_h.html [accessed 2008-09-10]. See also [192].
192. James Mark Baldwin. A New Factor in Evolution. In Adaptive Individuals in Evolving Pop-
ulations: Models and Algorithms [255], pages 59–80. Addison-Wesley Longman Publishing
Co., Inc.: Boston, MA, USA and Westview Press, 1996. See also [191].
193. Shumeet Baluja. Population-Based Incremental Learning – A Method for Integrating Genetic
Search Based Function Optimization and Competitive Learning. Technical Report CMU-CS-
94-163, Carnegie Mellon University (CMU), School of Computer Science, Computer Science
Department: Pittsburgh, PA, USA, June 2, 1994. Fully available at http://www.ri.cmu.
edu/pub_files/pub1/baluja_shumeet_1994_2/baluja_shumeet_1994_2.pdf [ac-
cessed 2011-12-05]. CiteSeerˣ: 10.1.1.61.8554.
194. Shumeet Baluja. An Empirical Comparison of Seven Iterative and Evolutionary Function Op-
timization Heuristics. Technical Report CMU-CS-95-193, Carnegie Mellon University (CMU),
School of Computer Science, Computer Science Department: Pittsburgh, PA, USA, Septem-
ber 1, 1995. CiteSeerˣ: 10.1.1.43.1108.
195. Shumeet Baluja and Richard A. Caruana. Removing the Genetics from the Standard Genetic
Algorithm. In ICML'95 [2221], pages 38–46, 1995. Fully available at http://www.cs.
cornell.edu/~caruana/ml95.ps [accessed 2009-08-20]. See also [196].
196. Shumeet Baluja and Richard A. Caruana. Removing the Genetics from the Stan-
dard Genetic Algorithm. Technical Report CMU-CS-95-141, Carnegie Mellon University
(CMU), School of Computer Science, Computer Science Department: Pittsburgh, PA,
USA, May 22, 1995. Fully available at http://www.i3s.unice.fr/~verel/supports/
articles/baluja95removing.pdf and https://eprints.kfupm.edu.sa/62112/
1/62112.pdf [accessed 2009-08-20]. CiteSeerˣ: 10.1.1.44.5424. See also [195].
197. Robert Balzer, editor. Proceedings of the 1st Annual National Conference on Artificial Intel-
ligence (AAAI'80), August 18–20, 1980, Stanford University: Stanford, CA, USA. American
Association for Artificial Intelligence: Los Altos, CA, USA and John W. Kaufmann Inc.:
Washington, DC, USA. isbn: 0-262-51050-2. Partly available at http://www.aaai.
org/Conferences/AAAI/aaai80.php [accessed 2007-09-06].
198. Carlos A. Bana e Costa, editor. Selected Readings from the Third International Summer
School on Multicriteria Decision Aid: Methods, Applications, and Software (MCDA'90),
July 23–27, 1998, Monte Estoril, Lisbon, Portugal. Springer-Verlag GmbH: Berlin, Ger-
many. isbn: 0387529500 and 3540529500. Google Books ID: F2yfGAAACAAJ. Library of
Congress Control Number (LCCN): 90010313. LC Classification: T58.62 .R43 198. Pub-
lished in 1990.
199. Saswatee Banerjee and Lakshminarayan N. Hazra. Thin Lens Design of Cooke Triplet
Lenses: Application of a Global Optimization Technique. In Novel Optical Systems and
Large-Aperture Imaging [259], pages 175–183, 1998. doi: 10.1117/12.332474.
200. Proceedings of the 10th IEEE International Conference on Cognitive Informatics & Cogni-
tive Computing (ICCI*CC'11), August 18–20, 2011, Banff, AB, Canada. Partly available
at http://www.ucalgary.ca/icci_cc2011/ [accessed 2011-02-28].
201. Jørgen Bang-Jensen, Gregory Gutin, and Anders Yeo. When the Greedy Algorithm
Fails. Discrete Optimization, 1(2):121–127, November 15, 2004, Elsevier Science Publish-
ers B.V.: Essex, UK. doi: 10.1016/j.disopt.2004.03.007. Fully available at http://www.
optimization-online.org/DB_FILE/2003/03/622.pdf.
202. Balázs Bánhelyi, Marco Biazzini, Alberto Montresor, and Márk Jelasity. Peer-to-Peer Op-
timization in Large Unreliable Networks with Branch-and-Bound and Particle Swarms. In
EvoWorkshops'09 [1052], pages 87–92, 2009. doi: 10.1007/978-3-642-01129-0_10.
203. David Banks, Leanna House, Frederick R. McMorris, Phipps Arabie, and Wolfgang Gaul,
editors. Classification, Clustering, and Data Mining Applications: Proceedings of the Meeting
of the International Federation of Classification Societies (IFCS), July 15–18, 2004, Studies in
Classification, Data Analysis, and Knowledge Organization. Springer New York: New York,
NY, USA. isbn: 3540220143.
204. Ajay Bansal, M. Brian Blake, Srividya Kona, Steffen Bleul, Thomas Weise, and Michael C.
Jäger. WSC-08: Continuing the Web Services Challenge. In CEC/EEE'08 [1346], pages
351–354, 2008. doi: 10.1109/CECandEEE.2008.146. Fully available at http://www.
it-weise.de/documents/files/BBKBWJ2008WSC08CTWSC.pdf. INSPEC Accession
Number: 10475109. Ei ID: 20091512027446. IDS (SCI): BJF01.
205. Wolfgang Banzhaf. Genetic Programming for Pedestrians. In ICGA'93 [969], pages 17–21,
1993. Fully available at ftp://ftp.cs.cuhk.hk/pub/EC/GP/papers/pedes93.
ps.gz, http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/banzhaf_mrl.html,
and http://www.cs.ucl.ac.uk/staff/W.Langdon/ftp/ftp.io.com/papers/
GenProg_forPed.ps.Z [accessed 2010-08-18]. CiteSeerˣ: 10.1.1.38.1144. Also: MRL
Technical Report 93-03.
206. Wolfgang Banzhaf and Frank H. Eeckman, editors. Evolution and Biocomputation –
Computational Models of Evolution, volume 899/1995 in Lecture Notes in Computer Sci-
ence (LNCS). Springer-Verlag GmbH: Berlin, Germany, kindle edition, April 13, 1995.
doi: 10.1007/3-540-59046-3. isbn: 0-387-59046-3 and 3-540-59046-3. Google Books
ID: dZf-yrmSZPkC. asin: B000V1DAP0. Revised papers from the Evolution as a Compu-
tational Process workshop held in July 1992 in Monterey, CA, USA.
207. Wolfgang Banzhaf and Christian W. G. Lasarczyk. Genetic Programming of an Algorith-
mic Chemistry. In GPTP'04 [2089], pages 175–190, 2004. doi: 10.1007/0-387-23254-0_11.
Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/banzhaf_
2004_GPTP.html and http://www.cs.mun.ca/~banzhaf/papers/algochem.pdf [ac-
cessed 2010-08-01]. CiteSeerˣ: 10.1.1.94.6816.
208. Wolfgang Banzhaf and Colin R. Reeves, editors. Proceedings of the Fifth Workshop on Foun-
dations of Genetic Algorithms (FOGA'98), July 22–25, 1998, Madison, WI, USA. Morgan
Kaufmann: San Mateo, CA, USA. isbn: 1-55860-559-2. Published April 2, 1991.
209. Wolfgang Banzhaf, Peter Nordin, Robert E. Keller, and Frank D. Francone. Genetic Pro-
gramming: An Introduction – On the Automatic Evolution of Computer Programs and Its Ap-
plications. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA and dpunkt.verlag:
Heidelberg, Germany, November 30, 1997. isbn: 1-558-60510-X and 3-920993-58-6.
Google Books ID: 1697qefFdtIC. OCLC: 38067741, 247738380, 468426941, and
474902629. Library of Congress Control Number (LCCN): 97051603. GBV-Identification
(PPN): 238379108, 279969872, and 307343650. LC Classification: QA76.623 .G46 1998.
210. Wolfgang Banzhaf, Riccardo Poli, Marc Schoenauer, and Terence Claus Fogarty, editors. Pro-
ceedings of the First European Workshop on Genetic Programming (EuroGP'98), April 14–
15, 1998, Paris, France, volume 1391/1998 in Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/BFb0055923. isbn: 3-540-64360-5.
See also [2195].
211. Wolfgang Banzhaf, Jason M. Daida, Ágoston E. Eiben, Max H. Garzon, Vasant Honavar,
Mark J. Jakiela, and Robert Elliott Smith, editors. Proceedings of the Genetic and Evolu-
tionary Computation Conference (GECCO'99), July 13–17, 1999, Orlando, FL, USA. Mor-
gan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-611-4. Google
Books ID: -1vTAAAACAAJ and hZlQAAAAMAAJ. OCLC: 42918215, 43044489, 59333111,
and 316858650. The 1999 Genetic and Evolutionary Computational Conference (GECCO-
99) combined the longest running conferences in evolutionary computation (ICGA) and the
world's two largest EC conferences (GP and ICGA) to create a unique opportunity to col-
lect the best in research in this growing field of computer science and engineering. See also
[2093, 2502].
212. Wolfgang Banzhaf, Guillaume Beslon, Steffen Christensen, James A. Foster, François Képès,
Virginie Lefort, Julian Francis Miller, Miroslav Radman, and Jeremy J. Ramsden. From
Artificial Evolution to Computational Evolution: A Research Agenda. Nature Reviews Ge-
netics, 7(9):729–735, September 2006, Nature Publishing Group (NPG): Houndmills, Bas-
ingstoke, Hampshire, UK. doi: 10.1038/nrg1921. Fully available at http://www.cs.mun.
ca/~banzhaf/papers/nrg2006.pdf [accessed 2008-07-20]. Guidelines/Perspectives.
213. Elena Baralis, Stefano Paraboschi, and Ernest Teniente. Materialized View Selection in a Mul-
tidimensional Database. In VLDB'97 [1434], pages 156–165, 1997. Fully available at http://
www.vldb.org/conf/1997/P156.PDF [accessed 2011-04-13]. CiteSeerˣ: 10.1.1.41.7437
and 10.1.1.78.9501.
214. Hélio José Corrêa Barbosa, Fernanda M. P. Raupp, Carlile Campos Lavor, Harry C. Lima,
and Nelson Maculan. A Hybrid Genetic Algorithm for Finding Stable Conformations of Small
Molecules. In SBRN'00 [981], pages 90–94, 2000. doi: 10.1109/SBRN.2000.889719. INSPEC
Accession Number: 6813074.
215. 11th European Conference on Machine Learning (ECML'00), May 30–June 2, 2000,
Barcelona, Catalonia, Spain.
216. Proceedings of the EUROGEN2003 Conference: Evolutionary Methods for Design Optimiza-
tion and Control with Applications to Industrial Problems (EUROGEN'03), September 15–
17, 2003, Barcelona, Catalonia, Spain. isbn: 84-95999-33-1. Partly available at http://
congress.cimne.upc.es/eurogen03/ [accessed 2007-09-16]. Published on CD.
217. Tiffany Barnes, Michel Desmarais, Cristóbal Romero, and Sebastián Ventura, ed-
itors. Proceedings of the 2nd International Conference on Educational Data
Mining (EDM'09), July 1–3, 2009, Córdoba, Spain. isbn: 84-613-2308-4.
Fully available at http://www.educationaldatamining.org/EDM2009/uploads/
proceedings/edm-proceedings-2009.pdf [accessed 2011-10-25]. OCLC: 733734163.
218. Lionel Barnett. Tangled Webs: Evolutionary Dynamics on Fitness Landscapes with Neu-
trality. Master's thesis, University of Sussex, School of Cognitive Science: Brighton, UK,
Summer 1997, Inman Harvey, Supervisor. CiteSeerˣ: 10.1.1.48.2862.
219. Lionel Barnett. Ruggedness and Neutrality – The NKp Family of Fitness Landscapes. In
Artificial Life VI [32], pages 18–27, 1998. Fully available at http://www.cogs.susx.ac.
uk/users/lionelb/downloads/publications/alife6_paper.ps.gz [accessed 2009-07-10].
CiteSeerˣ: 10.1.1.43.5010.
220. Nils Aall Barricelli. Esempi Numerici di Processi di Evoluzione. Methodos, 6(21–22):45–68,
1954.
221. Nils Aall Barricelli. Symbiogenetic Evolution Processes Realized by Artificial Methods.
Methodos, 9(35–36):143–182, 1957.
222. Nils Aall Barricelli. Numerical Testing of Evolution Theories. Part I. Theoretical Introduc-
tion and Basic Tests. Acta Biotheoretica, 16(1/2):69–98, March 1962, Springer Netherlands:
Dordrecht, Netherlands. doi: 10.1007/BF01556771. Received: 27 November 1961. See also
[223].
223. Nils Aall Barricelli. Numerical Testing of Evolution Theories. Part II. Preliminary Tests
of Performance. Symbiogenesis and Terrestrial Life. Acta Biotheoretica, 16(3/4):99–126,
September 1963, Springer Netherlands: Dordrecht, Netherlands. doi: 10.1007/BF01556602.
Received: 27 November 1961. See also [222].
224. Alwyn M. Barry, editor. Proceedings of the Bird of a Feather Workshop at Genetic and
Evolutionary Computation Conference (GECCO'02 WS), July 8, 2002, Roosevelt Hotel: New
York, NY, USA. AAAI Press: Menlo Park, CA, USA. Bird-of-a-feather Workshops, GECCO-
2002. See also [478, 1675, 1790].
225. Peter Bartalos and Mária Bieliková. Semantic Web Service Composition Framework Based
on Parallel Processing. In CEC'09 [1245], pages 495–498, 2009. doi: 10.1109/CEC.2009.27.
INSPEC Accession Number: 10839132. See also [1571].
226. Thomas Barth, Bernd Freisleben, Manfred Grauer, and Frank Thilo. Distributed Solution of
Optimal Hybrid Control Problems on Networks of Workstations. In CLUSTER 2000 [1365],
pages 162–169, 2000. doi: 10.1109/CLUSTR.2000.889027.
227. Thomas Bartz-Beielstein. Experimental Analysis of Evolution Strategies – Overview and
Comprehensive Introduction. Technical Report 157, Universität Dortmund, Collabora-
tive Research Center (Sonderforschungsbereich) 531: Dortmund, North Rhine-Westphalia,
Germany, May 24, 2004. Fully available at http://hdl.handle.net/2003/5449 and
http://ls11-www.cs.uni-dortmund.de/people/tom/158703.pdf [accessed 2009-09-09].
CiteSeerˣ: 10.1.1.5.2261.
228. Thomas Bartz-Beielstein, Marco Chiarandini, Luís Paquete, and Mike Preuß, editors. Ex-
perimental Methods for the Analysis of Optimization Algorithms. Springer-Verlag GmbH:
Berlin, Germany, February 2010. doi: 10.1007/978-3-642-02538-9. isbn: 3-642-02537-4.
Libri-Number: 9596283.
229. Pablo Manuel Rabanal Basalo, Ismael Rodríguez Laguna, and Fernando Rubio Díez. Using
River Formation Dynamics to Design Heuristic Algorithms. In UC'07 [55], pages 163–177,
2007. doi: 10.1007/978-3-540-73554-0_16.
230. Pablo Manuel Rabanal Basalo, Ismael Rodríguez Laguna, and Fernando Rubio Díez. Finding
Minimum Spanning/Distance Trees by Using River Formation Dynamics. In ANTS'08 [2580],
pages 60–71, 2008. doi: 10.1007/978-3-540-87527-7_6.
231. William Bateson. Mendels Principles of Heredity, Kessinger Publishings R Rare
Reprints. Cambridge University Press: Cambridge, UK, 19091930. isbn: 1428648194.
Google Books ID: ASoFAQAAIAAJ, NskrHgAACAAJ, WWZfDQljn8gC, YpTPAAAAMAAJ, and
dRjSCFULQegC.
232. Roberto Battiti and Giampietro Tecchiolli. The Reactive Tabu Search. ORSA
Journal on Computing, 6(2):126–140, 1994, Operations Research Society of America
(ORSA). doi: 10.1287/ijoc.6.2.126. Fully available at http://citeseer.ist.
psu.edu/141556.html and http://rtm.science.unitn.it/~battiti/archive/
reactive-tabu-search.ps.gz [accessed 2007-09-15].
233. Andreas Bauer and Wolfgang Lehner. On Solving the View Selection Problem in
Distributed Data Warehouse Architectures. In SSDBM'03 [1368], pages 43–51, 2003.
doi: 10.1109/SSDM.2003.1214953. INSPEC Accession Number: 7804318.
234. Helmut Baumgarten, editor. Das Beste der Logistik: Innovationen, Strategien, Umsetzungen.
Springer-Verlag GmbH: Berlin, Germany and Bundesvereinigung Logistik e.V. (BVL):
Bremen, Germany, 2008. isbn: 3-540-78404-7. Google Books ID: di5g8FY5X_YC.
235. James C. Bean. Genetics and Random Keys for Sequencing and Optimization. Technical
Report 92-43, University of Michigan, Department of Industrial and Operations Engineering:
Ann Arbor, MI, USA, June 1992 – December 17, 1993. Fully available at http://deepblue.
lib.umich.edu/bitstream/2027.42/3481/5/ban1152.0001.001.pdf [accessed 2010-11-22].
See also [236].
236. James C. Bean. Genetic Algorithms and Random Keys for Sequencing and Optimization.
ORSA Journal on Computing, 6(2):154–160, Spring 1994, Operations Research Society of
America (ORSA). doi: 10.1287/ijoc.6.2.154. See also [235].
237. James C. Bean and Atidel Ben Hadj-Alouane. A Dual Genetic Algorithm for Bounded Integer
Programs. Technical Report 92-53, University of Michigan, Department of Industrial and
Operations Engineering: Ann Arbor, MI, USA, October 1992. Fully available at http://
hdl.handle.net/2027.42/3480 [accessed 2008-11-15]. Revised in June 1993.
238. David Beasley, David R. Bull, and Ralph R. Martin. An Overview of Genetic Algorithms:
Part 1, Fundamentals. University Computing, 15(2):58–69, 1993, Inter-University Committee
on Computing: Cambridge, MA, USA. Fully available at http://ralph.cs.cf.ac.
uk/papers/GAs/ga_overview1.pdf [accessed 2010-07-23]. CiteSeerˣ: 10.1.1.44.3757 and
10.1.1.61.3619. See also [239].
239. David Beasley, David R. Bull, and Ralph R. Martin. An Overview of Genetic Algorithms:
Part 2, Research Topics. University Computing, 15(4):170–181, 1993, Inter-University
Committee on Computing: Cambridge, MA, USA. Fully available at http://proton.
tau.ac.il/~marek/genetic_algorithms_beasley93_overview_partII.pdf and
http://ralph.cs.cf.ac.uk/papers/GAs/ga_overview2.pdf [accessed 2010-07-23].
CiteSeerˣ: 10.1.1.23.8293 and 10.1.1.61.3643. See also [238].
240. David Beasley, David R. Bull, and Ralph R. Martin. Reducing Epistasis in Combinatorial
Problems by Expansive Coding. In ICGA'93 [969], pages 400–407, 1993. Fully available
at http://ralph.cs.cf.ac.uk/papers/GAs/expansive_coding.pdf [accessed 2010-07-29].
CiteSeerˣ: 10.1.1.23.1953.
241. John Edward Beasley. OR-Library. Brunel University, Department of Mathematical Sciences:
Uxbridge, Middlesex, UK, June 1990. Fully available at http://people.brunel.ac.uk/
~mastjjb/jeb/info.html [accessed 2011-10-12].
242. William Beaudoin, Sébastien Verel, Philippe Collard, and Cathy Escazut. Deceptiveness and
Neutrality – The ND Family of Fitness Landscapes. In GECCO'06 [1516], pages 507–514,
2006. doi: 10.1145/1143997.1144091.
243. Jörg D. Becker and Ignaz Eisele, editors. Proceedings of the Workshop on Parallel Processing:
Logic, Organization and Technology (WOPPLOT'83), June 27–29, 1983, Federal Armed
Forces University Munich (HSBw M): Neubiberg, Bavaria, Germany, volume 196/1984 in
Lecture Notes in Physics. Springer-Verlag: Berlin/Heidelberg. doi: 10.1007/BFb0018249.
isbn: 3540129170. Published in 1984.
244. Jörg D. Becker and Ignaz Eisele, editors. Proceedings of the Workshop on Parallel
Processing: Logic, Organization, and Technology (WOPPLOT'86), July 2–4, 1986, Federal
Armed Forces University Munich (HSBw M): Neubiberg, Bavaria, Germany, volume 253/1987
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-18022-2. isbn: 0-387-18022-2 and 3-540-18022-2. Published in
1987.
245. Jörg D. Becker, Ignaz Eisele, and F. W. Mündemann, editors. Parallelism, Learning, Evolution:
Workshop on Evolutionary Models and Strategies (Neubiberg, Germany, 1989-03-10/11)
and Workshop on Parallel Processing: Logic, Organization, and Technology (Wildbad Kreuth,
Germany, 1989-07-24 to 28) (WOPPLOT'89), 1989, Germany, volume 565/1991 in Lecture
Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-55027-5. isbn: 0-387-55027-5
and 3-540-55027-5. Google Books ID: AJrVD3dAousC. OCLC: 25111932, 231262527,
311366610, 320328459, and 440927259. Published in 1991.
246. Mark A. Bedau, John S. McCaskill, Norman H. Packard, and Steen Rasmussen, editors.
Proceedings of the Seventh International Conference on Artificial Life (Artificial Life
VII), August 12, 2000, Reed College: Portland, OR, USA, Complex Adaptive Systems,
Bradford Books. MIT Press: Cambridge, MA, USA. isbn: 026252290X. Google Books
ID: xLGm7KFGy4C. OCLC: 44117888.
247. Witold Bednorz, editor. Advances in Greedy Algorithms. IN-TECH Education and Publishing:
Vienna, Austria, November 2008. Fully available at http://intechweb.org/
downloadfinal.php?is=978-953-7619-27-5&type=B [accessed 2009-01-13].
248. Niko Beerenwinkel, Lior Pachter, and Bernd Sturmfels. Epistasis and Shapes of Fitness
Landscapes. Statistica Sinica, 17(4):1317–1342, October 2007, Academia Sinica, Institute of
Statistical Science: Taipei, Taiwan and International Chinese Statistical Association (ICSA).
Fully available at http://www3.stat.sinica.edu.tw/statistica/J17N4/J17N43/
J17N43.html [accessed 2009-07-11]. arXiv ID: q-bio/0603034.
249. Catriel Beeri and Peter Buneman, editors. Proceedings of the 7th International Conference
on Database Theory (ICDT'99), January 10–12, 1999, Jerusalem, Israel, volume 1540
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
isbn: 3-540-65452-6.
250. José Manuel Belenguer Ribera. CARPLIB. Universitat de València, Departament
d'Estadística i Investigació Operativa: València, Spain, November 5, 2005. Fully available
at http://www.uv.es/~belengue/carp/READ_ME [accessed 2011-09-18].
251. José Manuel Belenguer Ribera. The Capacitated Arc Routing Problem (CARPLIB).
Universitat de València, Departament d'Estadística i Investigació Operativa: València,
Spain, November 5, 2005. Fully available at http://www.uv.es/belengue/carp.html
[accessed 2010-12-03].
252. José Manuel Belenguer Ribera and Enrique Benavent. A Cutting Plane Algorithm for
the Capacitated Arc Routing Problem. Computers & Operations Research, 30(5):705–728,
April 2003, Pergamon Press: Oxford, UK and Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/S0305-0548(02)00046-1.
253. Richard K. Belew. When both Individuals and Populations Search: Adding Simple Learning
to the Genetic Algorithm. In ICGA'89 [2414], pages 34–41, 1989.
254. Richard K. Belew and Lashon Bernard Booker, editors. Proceedings of the Fourth
International Conference on Genetic Algorithms (ICGA'91), July 13–16, 1991, University of
California (UCSD): San Diego, CA, USA. Morgan Kaufmann Publishers Inc.: San Francisco,
CA, USA. isbn: 1-55860-208-9. Google Books ID: OtnJOwAACAAJ and YvBQAAAAMAAJ.
OCLC: 24132977.
255. Richard K. Belew and Melanie Mitchell, editors. Adaptive Individuals in Evolving
Populations: Models and Algorithms, volume 26 in Santa Fe Institute Studies in the Sciences
of Complexity. Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA and
Westview Press, January 15, 1996. isbn: 0-201-48364-5 and 0-201-48369-6. Google Books
ID: OKkytWTQxeEC, YurWHgAACAAJ, and qOAhIAAACAAJ.
256. Richard K. Belew and Michael D. Vose, editors. Proceedings of the 4th Workshop on
Foundations of Genetic Algorithms (FOGA'96), August 5, 1996, University of San Diego:
San Diego, CA, USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA.
isbn: 1-55860-460-X. Published March 1, 1997.
257. Bartłomiej Beliczyński, Andrzej Dzieliński, Marcin Iwanowski, and Bernardete Ribeiro,
editors. Proceedings of the 8th International Conference on Adaptive and Natural Computing
Algorithms, Part I (ICANNGA'07-I), April 11–17, 2007, Warsaw University: Warsaw,
Poland, volume 4431/2007 in Theoretical Computer Science and General Issues (SL
1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-540-71618-1. isbn: 3-540-71589-4. Partly available at http://
icannga07.ee.pw.edu.pl/ [accessed 2009-08-20]. Google Books ID: WYvMFlgm1KkC.
OCLC: 122935609, 154951366, and 318299908. Library of Congress Control Number
(LCCN): 2007923870. See also [258].
258. Bartłomiej Beliczyński, Andrzej Dzieliński, Marcin Iwanowski, and Bernardete Ribeiro,
editors. Proceedings of the 8th International Conference on Adaptive and Natural Computing
Algorithms, Part II (ICANNGA'07-II), April 11–17, 2007, Warsaw University: Warsaw,
Poland, volume 4432/2007 in Theoretical Computer Science and General Issues (SL
1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-540-71629-7. isbn: 3-540-71590-8 and 3-540-71629-7. Google Books
ID: 5yYBOAAACAAJ and V9ts51_ia2sC. OCLC: 122935609 and 154951366. Library of
Congress Control Number (LCCN): 2007923870. See also [257].
259. Kevin D. Bell, Michael K. Powers, and José M. Sasián, editors. Novel Optical Systems and
Large-Aperture Imaging, July 21, 1998, San Diego, CA, USA, volume 3430 in The Proceedings
of SPIE. Society of Photographic Instrumentation Engineers (SPIE): Bellingham, WA, USA.
Published in December 1998.
260. Richard Ernest Bellman. Dynamic Programming, Dover Books on Mathematics. Princeton
University Press: Princeton, NJ, USA, 1957–2003. isbn: 0486428095. Google Books
ID: L_ChPwAACAAJ, fyVtp3EMxasC, and wdtoPwAACAAJ. OCLC: 50866848.
261. Richard Ernest Bellman. Adaptive Control Processes: A Guided Tour. Princeton
University Press: Princeton, NJ, USA, 1961–1990. isbn: 0691079013. Google
Books ID: 8wY3PAAACAAJ, POAmAAAAMAAJ, cXJEAAAAIAAJ, and pVu0PQAACAAJ.
OCLC: 67470618.
262. David A. Belsley and Christopher F. Baum, editors. Fifth International Conference on
Computing in Economics and Finance, June 24–26, 1999, Boston College: Boston, MA, USA.
263. David A. Belsley and Erricos John Kontoghiorghes, editors. Handbook on Computational
Econometrics. Wiley Interscience: Chichester, West Sussex, UK.
doi: 10.1002/9780470748916. isbn: 0-470-74385-9. OCLC: 319499406. Library of
Congress Control Number (LCCN): 2009025907. GBV-Identification (PPN): 599321954.
LC Classification: HB143.5 .H357 2009.
264. Aharon Ben-Tal. Characterization of Pareto and Lexicographic Optimal Solutions. In
MCDM'80 [1960], pages 1–11, 1980.
265. Enrique Benavent, V. Campos, A. Corberán, and E. Mota. The Capacitated Arc Routing
Problem: Lower Bounds. Networks, 22(7):669–690, December 1992, Wiley Periodicals, Inc.:
Wilmington, DE, USA. doi: 10.1002/net.3230220706.
266. Stefano Benedettini, Christian Blum, and Andrea Roli. A Randomized Iterated Greedy
Algorithm for the Founder Sequence Reconstruction Problem. In LION 4 [2581], pages
37–51, 2010. doi: 10.1007/978-3-642-13800-3_4.
267. Ernesto Benini and Andrea Toffolo. Optimal Design of Horizontal-Axis Wind Turbines Using
Blade-Element Theory and Evolutionary Computation. Journal of Solar Energy Engineering,
124(4):357–363, November 2002, American Society of Mechanical Engineers (ASME): New
York, NY, USA. doi: 10.1115/1.1510868.
268. J. H. Bennett, editor. The Collected Papers of R.A. Fisher, volume IV. University
of Adelaide: SA, Australia, 1971–1974. Fully available at http://digital.library.
adelaide.edu.au/coll/special/fisher/stat_math.html [accessed 2008-12-08].
Collected Papers of R.A. Fisher, Relating to Statistical and Mathematical Theory and
Applications (excluding Genetics).
269. Forrest H. Bennett III. Emergence of a Multi-Agent Architecture and New Tactics For the
Ant Colony Foraging Problem Using Genetic Programming. In SAB'96 [1811], pages 430–439,
1996.
270. Forrest H. Bennett III. Automatic Creation of an Efficient Multi-Agent Architecture Using
Genetic Programming with Architecture-Altering Operations. pages 30–38.
271. Forrest H. Bennett III, John R. Koza, David Andre, and Martin A. Keane. Evolution
of a 60 Decibel Op Amp using Genetic Programming. In ICES'96 [1231], 1996.
doi: 10.1007/3-540-63173-9_65. CiteSeerˣ: 10.1.1.56.6051. See also [1607].
272. Peter J. Bentley, editor. Evolutionary Design by Computers. Morgan Kaufmann Publishers
Inc.: San Francisco, CA, USA, May 1999. isbn: 1-55860-605-X. Google Books
ID: EgC6LBAH5r8C.
273. Peter J. Bentley and S.P. Kumar. The Ways to Grow Designs: A Comparison of Embryogenies
for an Evolutionary Design Problem. In GECCO'99 [211], pages 35–43, 1999.
274. Peter J. Bentley, Doheon Lee, and Sungwon Jung, editors. Proceedings of the 7th International
Conference on Artificial Immune Systems (ICARIS'08), August 10–13, 2008, Phuket,
Thailand, volume 5132 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany.
275. Carlos Bento, Amílcar Cardoso, and Gaël Dias, editors. Progress in Artificial Intelligence –
Proceedings of the 13th Portuguese Conference on Artificial Intelligence (EPIA'05),
December 5–8, 2005, Covilhã, Portugal, volume 3808 in Lecture Notes in Computer Science
(LNCS). Springer-Verlag GmbH: Berlin, Germany.
276. Aviv Bergman and Marcus W. Feldman. Recombination Dynamics and the Fitness
Landscape. Physica D: Nonlinear Phenomena, 56(1):57–67, April 1992, Elsevier Science
Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers
Ltd.: Amsterdam, The Netherlands. doi: 10.1016/0167-2789(92)90050-W.
277. Pavel Berkhin. Survey Of Clustering Data Mining Techniques. Technical Report, Accrue
Software: San Jose, CA, USA, 2002. Fully available at http://citeseer.ist.
psu.edu/berkhin02survey.html and http://www.ee.ucr.edu/~barth/EE242/
clustering_survey.pdf [accessed 2007-08-27].
278. 17th European Conference on Machine Learning (ECML'06), September 18–22, 2006, Berlin,
Germany.
279. Ester Bernadó, Xavier Llorà, and Josep Maria Garrell i Guiu. XCS and GALE: A
Comparative Study of Two Learning Classifier Systems with Six Other Learning Algorithms on
Classification Tasks. In IWLCS'01 [1679], pages 115–132, 2001. doi: 10.1007/3-540-48104-4_8.
CiteSeerˣ: 10.1.1.1.4467.
280. Gérard Berry, Hubert Comon, and Alain Finkel, editors. Computer Aided Verification –
Proceedings of the 13th International Conference on Computer Aided Verification
(CAV'01), July 18–22, 2001, Palais de la Mutualité: Paris, France, volume 2102/2001
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-44585-4. isbn: 3-540-42345-1. Google Books ID: 1llkK-8I7YYC.
OCLC: 47282220, 47675546, 48729917, 61847966, 67015253, 76232298,
243488788, and 248623804.
281. Michael J. A. Berry and Gordon S. Linoff. Mastering Data Mining: The Art and Science
of Customer Relationship Management. Wiley Interscience: Chichester, West Sussex, UK,
December 1999. isbn: 0471331236. Google Books ID: QeVJPwAACAAJ. Library of Congress
Control Number (LCCN): 00265296. GBV-Identification (PPN): 311887082. LC
Classification: HF5415.125 .B47 2000.
282. Hugues Bersini and Jorge Carneiro, editors. Proceedings of the 5th International Conference
on Artificial Immune Systems (ICARIS'06), September 4–6, 2006, Instituto Gulbenkian de
Ciência: Oeiras, Portugal, volume 4163 in Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-37749-2.
283. Elisa Bertino and Silvana Castano, editors. Atti del Settimo Convegno Nazionale Sistemi
Evoluti per Basi di Dati, June 23–25, 1999, Villa Olmo, Como, Italy.
284. Alberto Bertoni and Marco Dorigo. Implicit Parallelism in Genetic Algorithms. Artificial
Intelligence, 61(2):307–314, June 1993, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/0004-3702(93)90071-I. See also [285].
285. Alberto Bertoni and Marco Dorigo. Implicit Parallelism in Genetic Algorithms. Technical
Report TR-93-001-Revised, Università Statale di Milano, Dipartimento di Scienze
dell'Informazione: Milano, Italy and International Computer Science Institute (ICSI),
University of California: Berkeley, CA, USA, April 1993. Fully available at ftp://ftp.
icsi.berkeley.edu/pub/techreports/1993/tr-93-001.ps.gz and http://ftp.
icsi.berkeley.edu/ftp/pub/techreports/1993/tr-93-001.pdf [accessed 2010-07-30].
CiteSeerˣ: 10.1.1.34.3275. See also [284].
286. Patricia Besson, Jean-Marc Vesin, Vlad Popovici, and Murat Kunt. Differential Evolution
Applied to a Multimodal Information Theoretic Optimization Problem. In EvoWorkshops'06
[2341], pages 505–509, 2006. doi: 10.1007/11732242_46.
287. Albert Donally Bethke. Genetic Algorithms as Function Optimizers. In CSC'78 [774],
1978. Fully available at http://hdl.handle.net/2027.42/3572 [accessed 2010-07-23].
ID: UMR0341. See also [288].
288. Albert Donally Bethke. Genetic Algorithms as Function Optimizers. PhD thesis, University
of Michigan: Ann Arbor, MI, USA, 1980. Order No.: AAI8106101. University Microfilms
No.: 8106101. National Science Foundation Grant No. MCS76-04297. See also [287, 289].
289. Albert Donally Bethke. Genetic Algorithms as Function Optimizers. Dissertation Abstracts
International (DAI), 41(9):3503B, 1980, ProQuest: Ann Arbor, MI, USA. See also [288].
290. Pete Bettinger and Jianping Zhu. A New Heuristic Method for Solving Spatially Constrained
Forest Planning Problems based on Mitigation of Infeasibilities Radiating Outward from
a Forced Choice. Silva Fennica, 40(2):315–333, 2006, Finnish Society of Forest Science,
The Finnish Forest Research Institute: Finland, Eeva Korpilahti, editor. Fully available
at http://www.metla.fi/silvafennica/full/sf40/sf402315.pdf [accessed 2010-07-24].
CiteSeerˣ: 10.1.1.133.2656.
291. Grégory Beurier, Fabien Michel, and Jacques Ferber. A Morphogenesis Model for Multiagent
Embryogeny. In ALIFE'06 [2314], 2006. Fully available at http://www.lirmm.fr/
~fmichel/publi/pdfs/beurier06alifeX.pdf [accessed 2010-08-06].
292. Hans-Georg Beyer. Some Aspects of the Evolution Strategy for Solving TSP-Like
Optimization Problems Appearing at the Design Studies of a 0.5 TeV e⁺e⁻ Linear Collider.
In PPSN II [1827], pages 361–370, 1992.
293. Hans-Georg Beyer. Towards a Theory of Evolution Strategies: Some Asymptotical Results
from the (1,+λ)-Theory. Technical Report SYS5/92, Universität Dortmund, Fachbereich
Informatik: Dortmund, North Rhine-Westphalia, Germany, December 1, 1992. Fully available
at http://ls11-www.cs.uni-dortmund.de/~beyer/coll/Bey93a/Bey93a.ps
[accessed 2009-07-11]. See also [294].
294. Hans-Georg Beyer. Toward a Theory of Evolution Strategies: Some Asymptotical Results
from the (1,+λ)-Theory. Evolutionary Computation, 1(2):165–188, Summer 1993,
MIT Press: Cambridge, MA, USA, Marc Schoenauer, editor. Fully available at http://
ls11-www.cs.uni-dortmund.de/~beyer/coll/Bey93a/Bey93a.ps [accessed 2009-07-11].
CiteSeerˣ: 10.1.1.38.3238. See also [293].
295. Hans-Georg Beyer. Toward a Theory of Evolution Strategies: The (μ, λ)-Theory.
Evolutionary Computation, 2(4):381–407, Winter 1994, MIT Press: Cambridge, MA,
USA. doi: 10.1162/evco.1994.2.4.381. Fully available at http://ls11-www.cs.
uni-dortmund.de/people/beyer/coll/Bey95a/Bey95a.ps [accessed 2009-07-10].
CiteSeerˣ: 10.1.1.47.3088.
296. Hans-Georg Beyer. Toward a Theory of Evolution Strategies: On the Benefit of
Sex – The (μ/μ, λ)-Theory. Evolutionary Computation, 3(1):81–111, Spring 1995,
MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1995.3.1.81. Fully available
at http://ls11-www.informatik.uni-dortmund.de/people/beyer/coll/
Bey95b/Bey95b.ps [accessed 2010-08-18]. CiteSeerˣ: 10.1.1.30.5489.
297. Hans-Georg Beyer. Toward a Theory of Evolution Strategies: Self-Adaptation.
Evolutionary Computation, 3(3):311–347, Fall 1995, MIT Press: Cambridge, MA, USA.
doi: 10.1162/evco.1995.3.3.311. Fully available at http://ls11-www.cs.uni-dortmund.
de/~beyer/coll/Bey95e/Bey95e.ps [accessed 2010-08-18]. CiteSeerˣ: 10.1.1.26.3527.
298. Hans-Georg Beyer. An Alternative Explanation for the Manner in which Genetic Algorithms
Operate. Biosystems, 41(1):1–15, January 1997, Elsevier Science Ireland Ltd.: East Park,
Shannon, Ireland and North-Holland Scientific Publishers Ltd.: Amsterdam, The
Netherlands. doi: 10.1016/S0303-2647(96)01657-7. CiteSeerˣ: 10.1.1.39.307.
299. Hans-Georg Beyer. Design Optimization of a Linear Accelerator using Evolution Strategy:
Solving a TSP-like Optimization Problem. In Handbook of Evolutionary Computation [171],
Chapter G4.2, pages 1–8. Oxford University Press, Inc.: New York, NY, USA, Institute of
Physics Publishing Ltd. (IOP): Dirac House, Temple Back, Bristol, UK, and CRC Press,
Inc.: Boca Raton, FL, USA, 1997.
300. Hans-Georg Beyer. The Theory of Evolution Strategies, Natural Computing Series. Springer
New York: New York, NY, USA, May 27, 2001. isbn: 3-540-67297-4. Google Books
ID: 8tbInLufkTMC. OCLC: 46394059 and 247928112. Library of Congress Control
Number (LCCN): 2001020620. LC Classification: QA76.618 B49 2001.
301. Hans-Georg Beyer and William B. Langdon, editors. Foundations of Genetic Algorithms
XI (FOGA'11), January 5–9, 2011, Hirschen-Schwarzenberg: Schwarzenberg, Austria. ACM
Press: New York, NY, USA.
302. Hans-Georg Beyer and Hans-Paul Schwefel. Evolution Strategies – A Comprehensive
Introduction. Natural Computing: An International Journal, 1(1):3–52, March 2002, Kluwer
Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands.
doi: 10.1023/A:1015059928466. Fully available at http://www.cs.bham.ac.uk/
~pxt/NIL/es.pdf [accessed 2010-08-09].
303. Hans-Georg Beyer, Eva Brucherseifer, Wilfried Jakob, Hartmut Pohlheim, Bernhard
Sendhoff, and Thanh Binh To. Evolutionary Algorithms – Terms and Definitions. Universität
Dortmund: Dortmund, North Rhine-Westphalia, Germany, version e-1.2 edition, February
25, 2002. Fully available at http://ls11-www.cs.uni-dortmund.de/people/
beyer/EA-glossary/ [accessed 2008-04-10].
304. Hans-Georg Beyer, Una-May O'Reilly, Dirk V. Arnold, Wolfgang Banzhaf, Christian Blum,
Eric W. Bonabeau, Erick Cantú-Paz, Dipankar Dasgupta, Kalyanmoy Deb, James A. Foster,
Edwin D. de Jong, Hod Lipson, Xavier Llorà, Spiros Mancoridis, Martin Pelikan,
Günther R. Raidl, Terence Soule, Jean-Paul Watson, and Eckart Zitzler, editors. Proceedings
of Genetic and Evolutionary Computation Conference (GECCO'05), June 25–27,
2005, Loews L'Enfant Plaza Hotel: Washington, DC, USA. ACM Press: New York,
NY, USA. isbn: 1-59593-010-8. Google Books ID: 1Y5QAAAAMAAJ, CY9QAAAAMAAJ,
KY1QAAAAMAAJ, TlWQAAAACAAJ, and VI5QAAAAMAAJ. OCLC: 62461063 and 63125628.
ACM Order No.: 910050. A joint meeting of the fourteenth international conference on genetic
algorithms (ICGA-2005) and the tenth annual genetic programming conference (GP-2005).
See also [2337, 2339].
305. James C. Bezdek, James M. Keller, Raghu Krishnapuram, Ludmila I. Kuncheva, and
Nikhil R. Pal. Will the Real Iris Data Please Stand Up? IEEE Transactions on Fuzzy
Systems (TFS), 7(3):368–369, June 1999, IEEE Computational Intelligence Society:
Piscataway, NJ, USA. doi: 10.1109/91.771092. INSPEC Accession Number: 6293498. See also
[92].
306. V. Bhaskar, Santosh K. Gupta, and A.K. Ray. Applications of Multiobjective Optimization
in Chemical Engineering. Reviews in Chemical Engineering, 16(1):1–54, 2000.
307. Gouri K. Bhattacharyya and Richard A. Johnson. Statistical Concepts and Methods. John
Wiley & Sons Ltd.: New York, NY, USA, March 8, 1977. isbn: 0471035327 and 0471072044.
Google Books ID: Xk9AAAACAAJ. OCLC: 2597235.
308. Leonora Bianchi, Marco Dorigo, Luca Maria Gambardella, and Walter J. Gutjahr. A Survey
on Metaheuristics for Stochastic Combinatorial Optimization. Natural Computing: An
International Journal, 8(2):239–287, June 2009, Kluwer Academic Publishers: Norwell, MA,
USA and Springer Netherlands: Dordrecht, Netherlands. doi: 10.1007/s11047-008-9098-4.
309. Arthur S. Bickel and Riva Wenig Bickel. Tree Structured Rules in Genetic Algorithms. In
ICGA'87 [1132], pages 77–81, 1987.
310. Joseph P. Bigus. Data Mining With Neural Networks: Solving Business Problems from
Application Development to Decision Support. McGraw-Hill: New York, NY, USA, May 20,
1996. isbn: 0-07-005779-6. OCLC: 34282366. Library of Congress Control Number
(LCCN): 96033779. GBV-Identification (PPN): 194728420. LC Classification: QA76.87 .B55
1996.
311. Kurt Binder and A. Peter Young. Spin Glasses: Experimental Facts, Theoretical Concepts,
and Open Questions. Reviews of Modern Physics, 58(4):801–976, October 1986, American
Physical Society: College Park, MD, USA. doi: 10.1103/RevModPhys.58.801. Fully available
at http://link.aps.org/abstract/RMP/v58/p801 [accessed 2008-07-02].
312. Lothar Birk, Günther F. Clauss, and June Y. Lee. Practical Application of Global
Optimization to the Design of Offshore Structures. In Proceedings of OMAE04, 23rd
International Conference on Offshore Mechanics and Arctic Engineering [85], pages 567–579,
volume III, 2004. Fully available at http://130.149.35.79/downloads/publikationen/2004/
OMAE2004-51225.pdf [accessed 2007-09-01]. Paper no. OMAE2004-51225.
313. Lawrence A. Birnbaum and Gregg C. Collins, editors. Proceedings of the 8th International
Workshop on Machine Learning (ML'91), June 1991, Evanston, IL, USA. Morgan Kaufmann
Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-200-3. Google Books
ID: OnNgQgAACAAJ. OCLC: 24106151, 438477848, and 1558602003. GBV-Identification
(PPN): 022371885.
314. Christopher M. Bishop. Neural Networks for Pattern Recognition. Clarendon Press
(Oxford University Press): Oxford, UK, November 3, 1995. isbn: 0-19-853849-9
and 0-19-853864-2. Google Books ID: -aAwQO_-rXwC. OCLC: 33101074,
174549000, 257120262, and 263662370. Library of Congress Control Number
(LCCN): 95040465. GBV-Identification (PPN): 082852855, 265786185, 315914459,
354835041, 475047974, 506020282, and 611503409. LC Classification: QA76.87 .B574
1995.
315. Susanne Biundo, Karen Myers, and Kanna Rajan, editors. Proceedings of the Fifteenth
International Conference on Automated Planning and Scheduling (ICAPS'05), June 5–10,
2005, Monterey, CA, USA. AAAI Press: Menlo Park, CA, USA.
316. Tim Blackwell. Particle Swarm Optimization in Dynamic Environments. In Evolutionary
Computation in Dynamic and Uncertain Environments [3019], Chapter 2, pages 29–49.
Springer-Verlag: Berlin/Heidelberg, 2007. doi: 10.1007/978-3-540-49774-5_2. Fully available
at http://igor.gold.ac.uk/~mas01tb/papers/PSOdynenv.pdf [accessed 2009-07-13].
317. M. Brian Blake, Kwok Ching Tsui, and Andreas Wombacher. The EEE-05 Challenge:
A New Web Service Discovery and Composition Competition. In EEE'05 [1371], pages
780–783, 2005. doi: 10.1109/EEE.2005.131. Fully available at http://ws-challenge.
georgetown.edu/ws-challenge/The%20EEE.htm [accessed 2007-09-02]. INSPEC Accession
Number: 8530438. Ei ID: 2006049663049.
318. M. Brian Blake, William K.W. Cheung, Michael C. Jäger, and Andreas Wombacher.
WSC-06: The Web Service Challenge. In CEC/EEE'06 [2967], pages 505–508, 2006.
doi: 10.1109/CEC-EEE.2006.98. INSPEC Accession Number: 9189338.
319. M. Brian Blake, William K.W. Cheung, Michael C. Jäger, and Andreas Wombacher. WSC-
07: Evolving the Web Service Challenge. In CEC/EEE'07 [1344], pages 505–508, 2007.
doi: 10.1109/CEC-EEE.2007.107. INSPEC Accession Number: 9868634.
320. M. Brian Blake, Thomas Weise, and Steffen Bleul. WSC-2010: Web Services Composition and
Evaluation. In SOCA'10 [1378], pages 1–4, 2010. doi: 10.1109/SOCA.2010.5707190. Fully
available at http://www.it-weise.de/documents/files/BWB2010W2WSCAE.pdf.
321. 20th European Conference on Machine Learning and the 13th European Conference on
Principles and Practice of Knowledge Discovery in Databases (ECML PKDD'09), September
7–11, 2009, Bled, Slovenia.
322. Progress in Machine Learning – Proceedings of the Second European Working Session on
Learning (EWSL'87), May 13–17, 1987, Bled, Yugoslavia (now Slovenia).
323. Woodrow "Woody" Wilson Bledsoe. The Use of Biological Concepts in the Analytical Study
of Systems. Technical Report PRI 2, Panoramic Research, Inc.: Palo Alto, CA, USA, 1961.
Presented at ORSA-TIMS National Meeting, San Francisco, California, November 10, 1961.
324. Woodrow "Woody" Wilson Bledsoe. Lethally Dependent Genes Using Instant Selection.
Technical Report PRI 1, Panoramic Research, Inc.: Palo Alto, CA, USA, 1961.
325. Woodrow "Woody" Wilson Bledsoe. An Analysis of Genetic Populations. Technical Report,
Panoramic Research, Inc.: Palo Alto, CA, USA, 1962.
326. Woodrow "Woody" Wilson Bledsoe. The Evolutionary Method in Hill Climbing: Convergence
Rates. Technical Report, Panoramic Research, Inc.: Palo Alto, CA, USA, 1962.
327. Woodrow "Woody" Wilson Bledsoe and I. Browning. Pattern Recognition and Reading by
Machine. In Proceedings of the Eastern Joint Computer Conference (EJCC) – Papers and
Discussions Presented at the Joint IRE-AIEE-ACM Computer Conference [1395], pages
225–232, 1959. doi: 10.1109/AFIPS.1959.88.
328. Maria J. Blesa and Christian Blum. Ant Colony Optimization for the Maximum Edge-Disjoint
Paths Problem. In EvoWorkshops'04 [2254], pages 160–169, 2004.
329. Maria J. Blesa, Pablo Moscato, and Fatos Xhafa. A Memetic Algorithm for the Minimum
Weighted k-Cardinality Tree Subgraph Problem. In MIC'01 [2294], pages 85–90, 2001.
CiteSeerˣ: 10.1.1.11.2478 and 10.1.1.21.5376.
330. Steffen Bleul and Kurt Geihs. Automatic Quality-Aware Service Discovery and Matching.
In HPOVUA [370], 2006.
331. Steffen Bleul and Thomas Weise. An Ontology for Quality-Aware Service Discovery. In
WESC'05 [3095], volume RC23821, 2005. Fully available at http://www.it-weise.de/
documents/files/BW2005QASD.pdf [accessed 2011-01-11]. CiteSeerˣ: 10.1.1.111.390. See
also [333].
332. Steffen Bleul and Michael Zapf. Ontology-Based Self-Organization in Service-Oriented
Architectures. In Proceedings of the Workshop SAKS 2007 [2801], 2007.
333. Steffen Bleul, Thomas Weise, and Kurt Geihs. An Ontology for Quality-Aware Service
Discovery. International Journal of Computer Systems Science and Engineering (CSSE), 21
(4):227–234, July 2006, CRL Publishing Limited: Leicester, UK. Fully available at http://
www.it-weise.de/documents/files/BWG2006QASD.pdf. Ei ID: 20064410208380. IDS
(SCI): 095XP. Special issue on Engineering Design and Composition of Service-Oriented
Applications. See also [331].
334. Steffen Bleul, Thomas Weise, and Kurt Geihs. Large-Scale Service Composition
in Semantic Service Discovery. In CEC/EEE'06 [2967], pages 427–429,
2006. doi: 10.1109/CEC-EEE.2006.59. Fully available at http://www.it-weise.
de/documents/files/BWG2006WSC.pdf. INSPEC Accession Number: 9189339. Ei
ID: 20070110350633. 1st place in 2006 WSC. See also [318, 335, 2909].
335. Steffen Bleul, Thomas Weise, and Kurt Geihs. Making a Fast Semantic Service
Composition System Faster. In CEC/EEE'07 [1344], pages 517–520, 2007.
doi: 10.1109/CEC-EEE.2007.62. Fully available at http://www.it-weise.de/
documents/files/BWG2007WSC.pdf. INSPEC Accession Number: 9868637. Ei
ID: 20083511485208. IDS (SCI): BGR20. 2nd place in 2007 WSC. See also [319, 334, 2909].
336. Tobias Blickle. Theory of Evolutionary Algorithms and Application to System Synthesis, TIK-Schriftenreihe. PhD thesis, Eidgenössische Technische Hochschule (ETH) Zürich: Zürich, Switzerland, Verlag der Fachvereine Hochschulverlag AG der ETH Zürich: Zürich, Switzerland, November 1996, Lothar Thiele and Hans-Paul Schwefel, Committee members. doi: 10.3929/ethz-a-001710359. Fully available at http://www.handshake.de/user/blickle/publications/diss.pdf and http://www.tik.ee.ethz.ch/~tec/publications/bli97a/diss.ps.gz [accessed 2007-09-07]. ETH Thesis Number: 11894.
337. Tobias Blickle and Lothar Thiele. Genetic Programming and Redundancy. In Genetic Algorithms within the Framework of Evolutionary Computation, Workshop at KI-94 [1265], pages 33–38, 1994. Fully available at ftp://ftp.tik.ee.ethz.ch/pub/people/thiele/paper/BlTh94.ps [accessed 2007-09-07]. CiteSeerx: 10.1.1.48.8474.
338. Tobias Blickle and Lothar Thiele. A Comparison of Selection Schemes Used in Genetic Algorithms. TIK-Report 11, Eidgenössische Technische Hochschule (ETH) Zürich, Department of Electrical Engineering, Computer Engineering and Networks Laboratory (TIK): Zürich, Switzerland, December 1995. Fully available at ftp://ftp.tik.ee.ethz.ch/pub/publications/TIK-Report11.ps and http://www.handshake.de/user/blickle/publications/tik-report11_v2.ps [accessed 2010-08-03]. CiteSeerx: 10.1.1.11.509. See also [340].
339. Tobias Blickle and Lothar Thiele. A Mathematical Analysis of Tournament Selection. In ICGA'95 [883], pages 9–16, 1995. Fully available at http://www.handshake.de/user/blickle/publications/tournament.ps [accessed 2010-08-04]. CiteSeerx: 10.1.1.52.9907.
340. Tobias Blickle and Lothar Thiele. A Comparison of Selection Schemes used in Evolutionary Algorithms. Evolutionary Computation, 4(4):361–394, Winter 1996, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1996.4.4.361. Fully available at http://www.handshake.de/user/blickle/publications/ECfinal.ps [accessed 2010-08-03]. CiteSeerx: 10.1.1.15.9584. See also [338].
341. Christian Blum and Daniel Merkle, editors. Swarm Intelligence – Introduction and Applications, Natural Computing Series. Springer New York: New York, NY, USA, 2008. doi: 10.1007/978-3-540-74089-6. Google Books ID: 6Ky4bVPCXqMC, B1rrPQAACAAJ, and ofRKPAAACAAJ. Library of Congress Control Number (LCCN): 2008933849.
342. Christian Blum and Andrea Roli. Metaheuristics in Combinatorial Optimization: Overview and Conceptual Comparison. ACM Computing Surveys (CSUR), 35(3):268–308, September 2003, ACM Press: New York, NY, USA. doi: 10.1145/937503.937505. Fully available at http://iridia.ulb.ac.be/~meta/newsite/downloads/ACSUR-blum-roli.pdf [accessed 2008-10-06].
343. Egbert J. W. Boers, Jens Gottlieb, Pier Luca Lanzi, Robert Elliott Smith, Stefano Cagnoni, Emma Hart, Günther R. Raidl, and Harald Tijink, editors. Applications of Evolutionary Computing, Proceedings of EvoWorkshops 2001: EvoCOP, EvoFlight, EvoIASP, EvoLearn, and EvoSTIM (EvoWorkshops'01), April 18–20, 2001, Lake Como, Milan, Italy, volume 2037/2001 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-45365-2. isbn: 3-540-41920-9. Google Books ID: EhROxow98NoC.
344. Stefan Boettcher. Extremal Optimization of Graph Partitioning at the Percolation Threshold. Journal of Physics A: Mathematical and General, 32(28):5201–5211, July 16, 1999, Institute of Physics Publishing Ltd. (IOP): Dirac House, Temple Back, Bristol, UK. doi: 10.1088/0305-4470/32/28/302. Fully available at http://www.iop.org/EJ/article/0305-4470/32/28/302/a92802.pdf and http://xxx.lanl.gov/abs/cond-mat/9901353 [accessed 2008-08-23]. arXiv ID: cond-mat/9901353v2.
345. Stefan Boettcher and Allon G. Percus. Extremal Optimization: Methods Derived from Co-Evolution. In GECCO'99 [211], pages 825–832, 1999. arXiv ID: math.OC/9904056.
346. Stefan Boettcher and Allon G. Percus. Extremal Optimization for Graph Partitioning. Physical Review E, 64(2), July 2001, American Physical Society: College Park, MD, USA. doi: 10.1103/PhysRevE.64.026114. Fully available at http://www.math.ucla.edu/~percus/Publications/eopre.pdf [accessed 2008-08-23]. arXiv ID: cond-mat/0104214.
347. Stefan Boettcher and Allon G. Percus. Optimization with Extremal Dynamics. Physical Review Letters, 86(23):5211–5214, June 4, 2001, American Physical Society: College Park, MD, USA. doi: 10.1103/PhysRevLett.86.5211. CiteSeerx: 10.1.1.94.3835. arXiv ID: cond-mat/0010337.
348. Stefan Boettcher and Allon G. Percus. Optimization with Extremal Dynamics. Complexity, 8(2):57–62, March 21, 2003, Wiley Periodicals, Inc.: Wilmington, DE, USA. doi: 10.1002/cplx.10072. Special Issue: Complex Adaptive Systems: Part I.
349. The 5th International Conference on Bioinspired Optimization Methods and their Applications (BIOMA'12), May 24–25, 2012, Bohinj, Slovenia. Partly available at http://bioma.ijs.si/conference/2012/ [accessed 2011-06-21].
350. Celia C. Bojarczuk, Heitor Silvério Lopes, and Alex Alves Freitas. Data Mining with Constrained-Syntax Genetic Programming: Applications to Medical Data Sets. In IDAMAP'01 [3103], 2001. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/bojarczuk_2001_idamap.html, http://www.graco.unb.br/alvares/emule/Data%20Mining%20with%20Constrained-Syntax%20Genetic%20Programming%20Applications%20in%20Medical%20Data%20Set.pdf, and www.ailab.si/idamap/idamap2001/papers/bojarczuk.pdf [accessed 2009-09-09]. CiteSeerx: 10.1.1.21.6146.
351. Celia C. Bojarczuk, Heitor Silvério Lopes, and Alex Alves Freitas. An Innovative Application of a Constrained-Syntax Genetic Programming System to the Problem of Predicting Survival of Patients. In EuroGP'03 [2360], pages 11–59, 2003. doi: 10.1007/3-540-36599-0_2. Fully available at http://www.cs.kent.ac.uk/people/staff/aaf/pub_papers.dir/EuroGP-2003-Celia.pdf [accessed 2007-09-09].
352. Alessandro Bollini and Marco Piastra. Distributed and Persistent Evolutionary Algorithms: A Design Pattern. In EuroGP'99 [2196], pages 173–183, 1999.
353. Eric W. Bonabeau, Marco Dorigo, and Guy Theraulaz. Swarm Intelligence: From Natural to Artificial Systems. Oxford University Press, Inc.: New York, NY, USA, August 1999. isbn: 0195131592. Google Books ID: PvTDhzqMr7cC, fcTcHvSsRMYC, and mhLj70l8HfAC. OCLC: 40331208, 232159801, and 318382300.
354. Eric W. Bonabeau, Marco Dorigo, and Guy Theraulaz. Inspiration for Optimization from Social Insect Behavior. Nature, 406:39–42, July 6, 2000. doi: 10.1038/35017500. Fully available at http://biology.unm.edu/pibbs/classes/readings/socialinsectbehavior.pdf and http://www.nature.com/nature/journal/v406/n6791/abs/406039a0.html [accessed 2011-05-02].
355. Josh C. Bongard and Chandana Paul. Making Evolution an Offer It Can't Refuse: Morphology and the Extradimensional Bypass. In ECAL'01 [1520], pages 401–412, 2001. doi: 10.1007/3-540-44811-X_43. Fully available at http://www.cs.uvm.edu/~jbongard/papers/bongardPaulEcal2001.ps.gz [accessed 2008-11-02]. CiteSeerx: 10.1.1.27.8240.
356. Josh C. Bongard and Rolf Pfeifer. Repeated Structure and Dissociation of Genotypic and Phenotypic Complexity in Artificial Ontogeny. In GECCO'01 [2570], pages 829–836, 2001. CiteSeerx: 10.1.1.11.6911.
357. John Tyler Bonner. On Development: The Biology of Form, Commonwealth Fund Publications. Harvard University Press: Cambridge, MA, USA, new ed edition, December 12, 1974. isbn: 0674634128. Google Books ID: dNa2AAAACAAJ, dda2AAAACAAJ, and lO5BGACTstMC.
358. Lashon Bernard Booker. Intelligent Behavior as an Adaptation to the Task Environment. PhD thesis, University of Michigan: Ann Arbor, MI, USA. Fully available at http://hdl.handle.net/2027.42/3746 and http://hdl.handle.net/2027.42/3747 [accessed 2010-08-03]. Order No.: AAI8214966. University Microfilms No.: 8214966. Technical Report No. 243.
359. Egon Börger. Computability, Complexity, Logic, volume 128 in Studies in Logic and the Foundations of Mathematics. Elsevier Science Publishing Company, Inc.: New York, NY, USA. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands, 1989. isbn: 0-444-87406-2. Google Books ID: T88gs0nemygC.
360. István Borgulya. A Cluster-based Evolutionary Algorithm for the Single Machine Total Weighted Tardiness-Scheduling Problem. Journal of Computing and Information Technology (CIT), 10(3):211–217, September 2002, University Computing Centre: Zagreb, Croatia. Fully available at http://cit.zesoi.fer.hr/downloadPaper.php?paper=384 [accessed 2008-04-07].
361. Erich G. Bornberg-Bauer and Hue Sun Chan. Modeling Evolutionary Landscapes: Mutational Stability, Topology, and Superfunnels in Sequence Space. Proceedings of the National Academy of Science of the United States of America (PNAS), 96(19):10689–10694, September 14, 1999, National Academy of Sciences: Washington, DC, USA, Peter G. Wolynes, editor. Fully available at http://www.pnas.org/cgi/content/abstract/96/19/10689 [accessed 2008-07-02].
362. Bruno Bosacchi, David B. Fogel, and James C. Bezdek, editors. Proceedings of SPIE's 45th International Symposium on Optical Science and Technology: Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation III, July 30 – August 4, 2000, San Diego, CA, USA, volume 4120. Society of Photographic Instrumentation Engineers (SPIE): Bellingham, WA, USA. isbn: 0-8194-3765-4. Google Books ID: tLhQAAAAMAAJ. OCLC: 45211584 and 248133501.
363. Peter A. N. Bosman and Dirk Thierens. Mixed IDEAs. Technical Report UU-CS-2000-45, Utrecht University, Department of Information and Computing Sciences: Utrecht, The Netherlands, December 2000. Fully available at http://www.cs.uu.nl/research/techreps/UU-CS-2000-45.html [accessed 2009-08-28].
364. Peter A. N. Bosman and Dirk Thierens. Advancing Continuous IDEAs with Mixture Distributions and Factorization Selection Metrics. In OBUPM'01 [2149], pages 208–212, 2001. Fully available at http://citeseer.ist.psu.edu/old/bosman01advancing.html [accessed 2009-08-28]. CiteSeerx: 10.1.1.19.7412.
365. Peter A. N. Bosman and Dirk Thierens. Multi-Objective Optimization with Diversity Preserving Mixture-based Iterated Density Estimation Evolutionary Algorithms. International Journal of Approximate Reasoning, 31(2):259–289, November 2002, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/S0888-613X(02)00090-7.
366. Peter A. N. Bosman and Dirk Thierens. A Thorough Documentation of Obtained Results on Real-Valued Continuous And Combinatorial Multi-Objective Optimization Problems Using Diversity Preserving Mixture-Based Iterated Density Estimation Evolutionary Algorithms. Technical Report UU-CS-2002-052, Utrecht University, Institute of Information and Computing Sciences: Utrecht, The Netherlands, December 2002. Fully available at http://www.cs.uu.nl/research/techreps/repo/CS-2002/2002-052.pdf [accessed 2007-08-24].
367. Proceedings of the 5th International ICST Conference on Bio-Inspired Models of Network, Information, and Computing Systems (BIONETICS'10), December 1–3, 2010, Boston, MA, USA.
368. Jack Bosworth, Norman Foo, and Bernard P. Zeigler. Comparison of Genetic Algorithms with Conjugate Gradient Methods. Technical Report 00312-1-T, University of Michigan: Ann Arbor, MI, USA, February 1972. Fully available at http://hdl.handle.net/2027.42/3761 [accessed 2010-05-29]. ID: UMR0554. Other Identifiers: UMR0554.
369. Henry Bottomley. Relationship between the Mean, Median, Mode, and Standard Deviation in a Unimodal Distribution. self-published, September 2006. Fully available at http://www.btinternet.com/~se16/hgb/median.htm [accessed 2007-09-15].
370. Karima Boudaoud, Nicolas Nobelis, and Thomas Nebe, editors. Proceedings of the 13th Annual Workshop of HP OpenView University Association (HPOVUA), May 16, 2006, Côte d'Azur, France. Infonomics-Consulting.
371. Abdelmadjid Boukra, Mohamed Ahmed-Nacer, and Sadek Bouroubi. Selection of Views to Materialize in Data Warehouse: A Hybrid Solution. International Journal of Computational Intelligence Research (IJCIR), 3(4):327–334, 2007, Research India Publications: Delhi, India. doi: 10.5019/j.ijcir.2004.113. Fully available at http://www.ijcir.com/download.php?idArticle=113 [accessed 2011-04-10].
372. Anu G. Bourgeois, editor. Proceedings of the 8th International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP), June 9–11, 2008, Ayia Napa, Cyprus, volume 5022/2008 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-69501-1. isbn: 3-540-69500-1. OCLC: 496756775. GBV-Identification (PPN): 568737482.
373. Craig Boutilier, editor. Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI'09), July 11–17, 2009, Pasadena, CA, USA. AAAI Press: Menlo Park, CA, USA. Fully available at http://ijcai.org/papers09/ [accessed 2010-06-23].
374. Denis Bouyssou. Building Criteria: A Prerequisite For MCDA. In MCDA'90 [198], pages 58–80, 1998. Fully available at http://www.lamsade.dauphine.fr/~bouyssou/CRITERIA.PDF [accessed 2009-07-17]. CiteSeerx: 10.1.1.2.7628.
375. Chris P. Bowers. Simulating Evolution with a Computational Model of Embryogeny: Obtaining Robustness from Evolved Individuals. In ECAL'05 [488], pages 149–158, 2005. Fully available at http://www.cs.bham.ac.uk/~cpb/publications/ecal05_bowers.pdf [accessed 2010-08-06].
376. George Edward Pelham Box. Evolutionary Operation: A Method for Increasing Industrial Productivity. Journal of the Royal Statistical Society: Series C – Applied Statistics, 6(2):81–101, June 1957, Blackwell Publishing for the Royal Statistical Society: Chichester, West Sussex, UK.
377. George Edward Pelham Box and Norman Richard Draper. Evolutionary Operation. A Statistical Method for Process Improvement, Wiley Publication in Applied Statistics. John Wiley & Sons Ltd.: New York, NY, USA, 1969. isbn: 0-471-09305-X. OCLC: 2592, 252982175, and 639061685. Library of Congress Control Number (LCCN): 68056159. GBV-Identification (PPN): 020827873. LC Classification: TP155.7 .B6.
378. George Edward Pelham Box, J. Stuart Hunter, and William G. Hunter. Statistics for Experimenters: Design, Innovation, and Discovery. John Wiley & Sons Ltd.: New York, NY, USA, 1978–2005. isbn: 0-471-09315-7 and 0-471-71813-0.
379. Anthony Brabazon and Michael O'Neill. Evolving Financial Models using Grammatical Evolution. In Proceedings of The Annual Conference of the South Eastern Accounting Group (SEAG) 2003 [1771], 2003.
380. Anthony Brabazon, Michael O'Neill, Robin Matthews, and Conor Ryan. Grammatical Evolution And Corporate Failure Prediction. In GECCO'02 [1675], pages 1011–1018, 2002. Fully available at http://business.kingston.ac.uk/research/intbus/paper4.pdf and http://www.cs.bham.ac.uk/~wbl/biblio/gecco2002/RWA145.pdf [accessed 2007-09-09].
381. Ronald J. Brachman, editor. Proceedings of the Fourth National Conference on Artificial Intelligence (AAAI'84), August 6–10, 1984, University of Texas: Austin, TX, USA. isbn: 0-262-51053-7. Partly available at http://www.aaai.org/Conferences/AAAI/aaai84.php [accessed 2007-09-06].
382. R. M. Brady. Optimization Strategies Gleaned from Biological Evolution. Nature, 317(6040):804–806, October 31, 1985. doi: 10.1038/317804a0. Fully available at http://www.nature.com/nature/journal/v317/n6040/pdf/317804a0.pdf [accessed 2008-09-10]. In: Letters to Nature.
383. Marco Brambilla, Irene Celino, Stefano Ceri, Dario Cerizza, Emanuele Della Valle, Federico Michele Facca, and Christina Tziviskou. Improvements and Future Perspectives on Web Engineering Methods for Automating Web Services Mediation, Choreography and Discovery. In Third Workshop of the Semantic Web Service Challenge 2006 – Challenge on Automating Web Services Mediation, Choreography and Discovery [2690], 2006. Fully available at http://sws-challenge.org/workshops/2006-Athens/papers/SWS-phase-III_polimi_cefriel_v1.0.pdf [accessed 2010-12-16]. See also [384–387, 1650, 1831].
384. Marco Brambilla, Stefano Ceri, Dario Cerizza, Emanuele Della Valle, Federico Michele Facca, Piero Fraternali, and Christina Tziviskou. Web Modeling-based Approach to Automating Web Services Mediation, Choreography and Discovery. In First Workshop of the Semantic Web Service Challenge 2006 – Challenge on Automating Web Services Mediation, Choreography and Discovery [2688], 2006. Fully available at http://sws-challenge.org/workshops/2006-Stanford/papers/01.pdf [accessed 2010-12-12]. See also [383, 385–387, 1650, 1831].
385. Marco Brambilla, Stefano Ceri, Dario Cerizza, Emanuele Della Valle, Federico Michele Facca, and Christina Tziviskou. Coping with Requirements Changes: SWS-Challenge Phase II. In Second Workshop of the Semantic Web Service Challenge 2006 – Challenge on Automating Web Services Mediation, Choreography and Discovery [2689], 2006. Fully available at http://sws-challenge.org/workshops/2006-Budva/papers/SWS-phase-Finale_polimi_cefriel.pdf [accessed 2010-12-12]. See also [383, 384, 386, 387, 1650, 1831].
386. Marco Brambilla, Irene Celino, Stefano Ceri, Dario Cerizza, Emanuele Della Valle, Federico Michele Facca, Andrea Turati, and Christina Tziviskou. WebML and Glue: An Integrated Discovery Approach for the SWS Challenge. In KWEB SWS Challenge Workshop [2450], 2007. Fully available at http://sws-challenge.org/workshops/2007-Innsbruck/papers/3-SWS-phase-IV_polimi_cefriel_v1.2.pdf [accessed 2010-12-16]. See also [383–385, 387, 1650, 1831].
387. Marco Brambilla, Stefano Ceri, Federico Michele Facca, Christina Tziviskou, Irene Celino, Dario Cerizza, Emanuele Della Valle, and Andrea Turati. WebML and Glue: An Integrated Discovery Approach for the SWS Challenge. In WI/IAT Workshops'07 [1731], pages 148–151, 2007. doi: 10.1109/WI-IATW.2007.58. INSPEC Accession Number: 9894902. See also [384–386, 1831].
388. Markus F. Brameier. On Linear Genetic Programming. PhD thesis, Universität Dortmund, Fachbereich Informatik: Dortmund, North Rhine-Westphalia, Germany, February 13, 2004, Wolfgang Banzhaf, Martin Riedmiller, and Peter Nordin, Committee members. Fully available at http://hdl.handle.net/2003/20098 and https://eldorado.tu-dortmund.de/handle/2003/20098 [accessed 2010-08-22].
389. Markus F. Brameier and Wolfgang Banzhaf. A Comparison of Genetic Programming and Neural Networks in Medical Data Analysis. Reihe computational intelligence: Design and management of complex technical processes and systems by means of computational intelligence methods, Universität Dortmund, Collaborative Research Center (Sonderforschungsbereich) 531: Dortmund, North Rhine-Westphalia, Germany, November 8, 1998. Fully available at https://eldorado.tu-dortmund.de/handle/2003/5344 [accessed 2009-07-22]. CiteSeerx: 10.1.1.36.4678. issn: 1433-3325. Reihe Computational Intelligence, Sonderforschungsbereich 531. See also [391].
390. Markus F. Brameier and Wolfgang Banzhaf. Evolving Teams of Predictors with Linear Genetic Programming. Genetic Programming and Evolvable Machines, 2(4):381–407, December 2001, Springer Netherlands: Dordrecht, Netherlands. Imprint: Kluwer Academic Publishers: Norwell, MA, USA. doi: 10.1023/A:1012978805372. CiteSeerx: 10.1.1.16.7410.
391. Markus F. Brameier and Wolfgang Banzhaf. A Comparison of Linear Genetic Programming and Neural Networks in Medical Data Mining. IEEE Transactions on Evolutionary Computation (IEEE-EC), 5(1):17–26, 2001, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/4235.910462. Fully available at http://web.cs.mun.ca/~banzhaf/papers/ieee_taec.pdf [accessed 2009-07-22]. CiteSeerx: 10.1.1.41.208. INSPEC Accession Number: 6876106. See also [389].
392. Markus F. Brameier and Wolfgang Banzhaf. Explicit Control of Diversity and Effective Variation Distance in Linear Genetic Programming. In EuroGP'02 [977], pages 37–49, 2002. doi: 10.1007/3-540-45984-7_4. Fully available at http://eldorado.uni-dortmund.de:8080/bitstream/2003/5419/1/123.pdf and http://www.cs.mun.ca/~banzhaf/papers/eurogp02_dist.pdf [accessed 2010-08-22]. CiteSeerx: 10.1.1.11.5531.
393. Markus F. Brameier and Wolfgang Banzhaf. Neutral Variations Cause Bloat in Linear GP. In EuroGP'03 [2360], pages 997–1039, 2003. CiteSeerx: 10.1.1.120.8070.
394. Markus F. Brameier and Wolfgang Banzhaf. Linear Genetic Programming, volume 1 in Genetic and Evolutionary Computation. Springer US: Boston, MA, USA and Kluwer Academic Publishers: Norwell, MA, USA, December 11, 2006. doi: 10.1007/978-0-387-31030-5. isbn: 0-387-31029-0 and 0-387-31030-4. Library of Congress Control Number (LCCN): 2006920909. Series Editors: David E. Goldberg, John R. Koza.
395. Jürgen Branke. Creating Robust Solutions by Means of Evolutionary Algorithms. In PPSN V [866], pages 119–128, 1998. doi: 10.1007/BFb0056855.
396. Jürgen Branke. The Moving Peaks Benchmark. Technical Report, University of Karlsruhe, Institute for Applied Computer Science and Formal Description Methods (AIFB): Karlsruhe, Germany, December 16, 1999. Fully available at http://www.aifb.uni-karlsruhe.de/~jbr/MovPeaks/ [accessed 2007-08-19]. See also [397].
397. Jürgen Branke. Memory Enhanced Evolutionary Algorithms for Changing Optimization Problems. In CEC'99 [110], pages 1875–1882, volume 3, 1999. doi: 10.1109/CEC.1999.785502. Fully available at http://www.aifb.uni-karlsruhe.de/~jbr/Papers/memory_final2.ps.gz [accessed 2008-12-07]. CiteSeerx: 10.1.1.2.8897. INSPEC Accession Number: 6346978.
398. Jürgen Branke. Evolutionary Optimization in Dynamic Environments, volume 3 in Genetic and Evolutionary Computation. PhD thesis, University of Karlsruhe, Department of Economics and Business Engineering: Karlsruhe, Germany, Springer US: Boston, MA, USA and Kluwer Academic Publishers: Norwell, MA, USA, December 2000–2001, Hartmut Schmeck, Georg Bol, and Lothar Thiele, Advisors. isbn: 0792376315 and 9780792376316. Google Books ID: weXuTB5JjFsC.
399. Jürgen Branke, Kalyanmoy Deb, Kaisa Miettinen, and Ralph E. Steuer, editors. Practical Approaches to Multi-Objective Optimization, November 7–12, 2004, Schloss Dagstuhl: Wadern, Germany, 04461 in Dagstuhl Seminar Proceedings. Internationales Begegnungs- und Forschungszentrum für Informatik (IBFI): Schloss Dagstuhl, Wadern, Germany. Fully available at http://drops.dagstuhl.de/portals/index.php?semnr=04461 and http://www.dagstuhl.de/de/programm/kalender/semhp/?semnr=04461 [accessed 2007-09-19]. Published in 2005.
400. Jürgen Branke, Erdem Salihoğlu, and A. Şima Uyar. Towards an Analysis of Dynamic Environments. In GECCO'05 [304], pages 1433–1440, 2005. doi: 10.1145/1068009.1068237.
401. Ivan Bratko and Sašo Džeroski, editors. Proceedings of the 16th International Conference on Machine Learning (ICML'99), June 27–30, 1999, Bled, Slovenia. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-612-2.
402. Olli Bräysy. Genetic Algorithms for the Vehicle Routing Problem with Time Windows. Arpakannus – Newsletter of the Finnish Artificial Intelligence Society (FAIS), 1:33–38, May 2001, Finnish Artificial Intelligence Society (FAIS): Vantaa, Finland. Special issue on Bioinformatics and Genetic Algorithms.
403. Olli Bräysy and Michel Gendreau. Tabu Search Heuristics for the Vehicle Routing Problem with Time Windows. TOP: An Official Journal of the Spanish Society of Statistics and Operations Research, 10(2):211–237, December 2002, Springer-Verlag GmbH: Berlin, Germany and Sociedad de Estadística e Investigación Operativa. doi: 10.1007/BF02579017.
404. Mihaela Breaban, Lenuta Alboaie, and Henri Luchian. Guiding Users within Trust Networks Using Swarm Algorithms. In CEC'09 [1350], pages 1770–1777, 2009. doi: 10.1109/CEC.2009.4983155. INSPEC Accession Number: 10688750.
405. Leo Breiman. Bagging Predictors. Machine Learning, 24(2):123–140, August 1996, Kluwer Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands. doi: 10.1007/BF00058655. Fully available at http://www.salford-systems.com/doc/BAGGING_PREDICTORS.pdf [accessed 2009-03-18]. CiteSeerx: 10.1.1.32.9399.
406. Leo Breiman. Random Forests. Machine Learning, 45(1):5–32, 2001, Kluwer Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands. doi: 10.1023/A:1010933404324. Fully available at http://www.springerlink.com/content/u0p06167n6173512/fulltext.pdf [accessed 2008-12-27].
407. Hans J. Bremermann. Optimization Through Evolution and Recombination. In Self-Organizing Systems (Proceedings of the conference sponsored by the Information Systems Branch of the Office of Naval Research and the Armour Research Foundation of the Illinois Institute of Technology.) [3037], pages 93–103, 1962. Fully available at http://holtz.org/Library/Natural%20Science/Physics/ [accessed 2007-10-31].
408. Janez Brest, Viljem Žumer, and Mirjam Sepesy Maučec. Control Parameters in Self-Adaptive Differential Evolution. In BIOMA'06 [919], pages 35–44, 2006. CiteSeerx: 10.1.1.106.8106.
409. Anne F. Brindle. Genetic Algorithms for Function Optimization. PhD thesis, University of
Alberta: Edmonton, Alberta, Canada, 1980. isbn: 0-315-01039-8. OCLC: 15896420,
59840399, and 632337062. Technical Report TR81-2.
410. David Brittain, Jon Sims Williams, and Chris McMahon. A Genetic Algorithm Approach To Planning The Telecommunications Access Network. In ICGA'97 [166], pages 623–628, 1997. Fully available at http://citeseer.ist.psu.edu/brittain97genetic.html [accessed 2008-08-01].
411. Proceedings of the 14th International Conference on Soft Computing (MENDEL'08), June 18–20, 2009, Brno, Czech Republic.
412. Proceedings of the 15th International Conference on Soft Computing (MENDEL'09), June 17–26, 2009, Brno, Czech Republic.
413. Proceedings of the 12th International Conference on Soft Computing (MENDEL'06), May 31 – June 2, 2006, Brno University of Technology: Brno, Czech Republic. Brno University of Technology, Faculty of Mechanical Engineering: Brno, Czech Republic. isbn: 80-214-3195-4.
414. Proceedings of the 1st International Mendel Conference on Genetic Algorithms on the occasion of the 130th anniversary of Mendel's laws (MENDEL'95), September 1995, Brno, Czech Republic. Brno University of Technology, Ústav Automatizace a Informatiky: Brno, Czech Republic. isbn: 80-214-0672-0.
415. Proceedings of the 2nd International Mendel Conference on Genetic Algorithms (MENDEL'96), June 26–28, 1996, Brno University of Technology: Brno, Czech Republic. Brno University of Technology, Ústav Automatizace a Informatiky: Brno, Czech Republic. isbn: 80-214-0769-7.
416. Proceedings of the 2nd International Mendel Conference on Genetic Algorithms (MENDEL'97), June 26–28, 1997, Brno University of Technology: Brno, Czech Republic. Brno University of Technology, Ústav Automatizace a Informatiky: Brno, Czech Republic. isbn: 80-214-0769-7.
417. Proceedings of the 4th International Mendel Conference on Genetic Algorithms, Optimisation Problems, Fuzzy Logic, Neural Networks, Rough Sets (MENDEL'98), June 24–26, 1998, Brno University of Technology: Brno, Czech Republic. Brno University of Technology, Ústav Automatizace a Informatiky: Brno, Czech Republic. isbn: 80-214-1131-7.
418. Proceedings of the 3rd International Conference on Genetic Algorithms (MENDEL'99), June 1997, Brno University of Technology: Brno, Czech Republic. Brno University of Technology, Ústav Automatizace a Informatiky: Brno, Czech Republic. isbn: 80-214-1131-7.
419. Proceedings of the 8th International Conference on Soft Computing (MENDEL'02), June 5–7, 2002, Brno University of Technology: Brno, Czech Republic. Brno University of Technology, Ústav Automatizace a Informatiky: Brno, Czech Republic. isbn: 80-214-2135-5.
420. Proceedings of the 10th International Conference on Soft Computing (MENDEL'04), June 16–18, 2004, Brno University of Technology: Brno, Czech Republic. Brno University of Technology, Ústav Automatizace a Informatiky: Brno, Czech Republic. isbn: 80-214-2676-4.
421. Proceedings of the 11th International Conference on Soft Computing (MENDEL'05), June 15–17, 2005, Brno University of Technology: Brno, Czech Republic. Brno University of Technology, Ústav Automatizace a Informatiky: Brno, Czech Republic. isbn: 80-214-2961-5.
422. Dimo Brockhoff and Eckart Zitzler. Are All Objectives Necessary? On Dimensionality Reduction in Evolutionary Multiobjective Optimization. In PPSN IX [2355], pages 533–542, 2006. doi: 10.1007/11844297_54. Fully available at http://www.tik.ee.ethz.ch/sop/publicationListFiles/bz2006d.pdf [accessed 2009-07-18].
423. Dimo Brockhoff and Eckart Zitzler. Dimensionality Reduction in Multiobjective Optimization: The Minimum Objective Subset Problem. In Selected Papers of the Annual International Conference of the German Operations Research Society (GOR), Jointly Organized with the Austrian Society of Operations Research (ÖGOR) and the Swiss Society of Operations Research (SVOR) [2840], pages 423–429, 2006. doi: 10.1007/978-3-540-69995-8_68. Fully available at http://www.tik.ee.ethz.ch/sop/publicationListFiles/bz2007d.pdf [accessed 2009-07-18].
424. Dimo Brockhoff and Eckart Zitzler. Improving Hypervolume-based Multiobjective Evolutionary Algorithms by using Objective Reduction Methods. In CEC'07 [1343], pages 2086–2093, 2007. doi: 10.1109/CEC.2007.4424730. Fully available at http://www.tik.ee.ethz.ch/sop/publicationListFiles/bz2007c.pdf [accessed 2009-07-18]. INSPEC Accession Number: 9889552.
425. Dimo Brockhoff, Tobias Friedrich, Nils Hebbinghaus, Christian Klein, Frank Neumann, and Eckart Zitzler. Do additional objectives make a problem harder? In GECCO'07-I [2699], pages 765–772, 2007. doi: 10.1145/1276958.1277114.
426. Carla E. Brodley, editor. Proceedings of the 21st International Conference on Machine Learning (ICML'04), July 4–8, 2004, Banff, AB, Canada, volume 69 in ACM International Conference Proceeding Series (AICPS). ACM Press: New York, NY, USA.
427. Carla E. Brodley and Andrea Pohoreckyj Danyluk, editors. Proceedings of the 18th International Conference on Machine Learning (ICML'01), June 28 – July 1, 2001, Williams College: Williamstown, MA, USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-778-1.
428. Alexander M. Bronstein and Michael M. Bronstein. Numerical Optimization. Project TOSCA – Tools for Non-Rigid Shape Comparison and Analysis, Computer Science Department, Technion – Israel Institute of Technology: Haifa, Israel, 2008. Fully available at http://tosca.cs.technion.ac.il/book/slides/Milano08_optimization.ppt [accessed 2010-08-28].
429. Donald E. Brown, Christopher L. Huntley, and Andrew R. Spillane. A Parallel Genetic Heuristic for the Quadratic Assignment Problem. In ICGA'89 [2414], pages 406–415, 1989.
430. Jason Brownlee. Learning Classifier Systems. Technical Report 070514A, Swinburne University of Technology, Faculty of Information and Communication Technologies, Centre for Information Technology Research, Complex Intelligent Systems Laboratory: Melbourne, VIC, Australia, May 2007. Fully available at http://www.ict.swin.edu.au/personal/jbrownlee/2007/TR24-2007.pdf [accessed 2008-04-03].
431. Jason Brownlee. OAT HowTo: High-Level Domain, Problem, and Algorithm Implementation. Technical Report 20071218A, Swinburne University of Technology, Faculty of Information and Communication Technologies, Centre for Information Technology Research, Complex Intelligent Systems Laboratory: Melbourne, VIC, Australia, December 2007. Fully available at http://optalgtoolkit.sourceforge.net/docs/TR46-2007.pdf [accessed 2010-03-11].
432. Jason Brownlee. OAT: The Optimization Algorithm Toolkit. Technical Report 20071220A,
Swinburne University of Technology, Faculty of Information and Communication Technolo-
gies, Centre for Information Technology Research, Complex Intelligent Systems Laboratory:
Melbourne, VIC, Australia, December 2007. Fully available at http://www.ict.swin.
edu.au/personal/jbrownlee/2007/TR47-2007.pdf [accessed 2010-03-11].
433. Axel T. Brünger, Paul D. Adams, and Luke M. Rice. New Applications of Simulated Annealing in X-Ray Crystallography and Solution NMR. Structure, 15:325–336, 1997. Fully available at http://atb.slac.stanford.edu/public/papers.php?sendfile=44 [accessed 2007-08-25].
434. Seventh International Conference on Swarm Intelligence (ANTS'10), September 8–10, 2010, Brussels, Belgium.
435. Barrett Richard Bryant, editor. Proceedings of the 12th/1997 ACM/SIGAPP Symposium on Applied Computing (SAC'97), February 28–March 2, 1997, Hyatt St. Claire: San Jose, CA, USA. ACM Press: New York, NY, USA. isbn: 0-89791-850-9. Google Books ID: JbAmPQAACAAJ. OCLC: 36711176 and 246580075.
436. Kylie Bryant. Genetic Algorithms and the Traveling Salesman Problem. Master's thesis, Harvey Mudd College (HMC), Department of Mathematics: Claremont, CA, USA, December 2000, Arthur Benjamin, Advisor, Lisette de Pillis, Committee member. Fully available at http://www.math.hmc.edu/seniorthesis/archives/2001/kbryant/kbryant-2001-thesis.pdf [accessed 2009-09-30].
437. Bradley J. Buckham and Casey Lambert. Simulated Annealing Applications. University of Victoria, Mechanical Engineering Department: Victoria, BC, Canada, November 1999, Zuomin Dong, Advisor. Fully available at http://www.me.uvic.ca/~zdong/courses/mech620/SA_App.PDF [accessed 2007-08-25]. Seminar presentation: MECH620 – Quantitative Analysis, Reasoning and Optimization Methods in CAD/CAM and Concurrent Engineering.
438. Lam Thu Bui and Sameer Alam, editors. Multi-Objective Optimization in Computational Intelligence: Theory and Practice, Premier Reference Source. Idea Group Publishing (Idea Group Inc., IGI Global): New York, NY, USA, March 30, 2008. isbn: 1599044986.
439. Vinh Bui, Lam Thu Bui, Hussein A. Abbass, Axel Bender, and Pradeep Ray. On the Role of Information Networks in Logistics: An Evolutionary Approach with Military Scenarios. In CEC'09 [1350], pages 598–605, 2009. doi: 10.1109/CEC.2009.4983000. INSPEC Accession Number: 10688573.
440. Larry Bull. On using ZCS in a Simulated Continuous Double-Auction Market. In GECCO'99 [211], pages 83–90, 1999. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gecco1999/GA-806.pdf [accessed 2007-09-12].
441. Larry Bull, editor. Applications of Learning Classifier Systems, volume 150 in Studies in Fuzziness and Soft Computing. Springer-Verlag GmbH: Berlin, Germany, April 2004. isbn: 3540211098. Google Books ID: aBIjqGag5-kC.
442. Larry Bull and Jacob Hurst. ZCS Redux. Evolutionary Computation, 10(2):185–205, Summer 2002, MIT Press: Cambridge, MA, USA. doi: 10.1162/106365602320169848. Fully available at http://www.cems.uwe.ac.uk/lcsg/papers/zcsredux.ps [accessed 2007-09-12].
443. Larry Bull and Tim Kovacs, editors. Foundations of Learning Classifier Systems, Studies in Fuzziness and Soft Computing. Springer-Verlag GmbH: Berlin, Germany, September 1, 2005. isbn: 3540250735.
444. Bernd Bullnheimer, Richard F. Hartl, and Christine Strauss. Applying the Ant System to the Vehicle Routing Problem. In MIC'97 [2828], 1997. CiteSeerx: 10.1.1.48.7946.
445. Bernd Bullnheimer, Richard F. Hartl, and Christine Strauss. An improved Ant System Algorithm for the Vehicle Routing Problem. Annals of Operations Research, 89(0):319–328, January 1999, Springer Netherlands: Dordrecht, Netherlands and J. C. Baltzer AG, Science Publishers: Amsterdam, The Netherlands. doi: 10.1023/A:1018940026670. Fully available at http://www.macs.hw.ac.uk/~dwcorne/Teaching/bullnheimer-vr.pdf [accessed 2008-10-27]. CiteSeerx: 10.1.1.49.1415.
446. Seth Bullock, Jason Noble, Richard A. Watson, and Mark A. Bedau, editors. Proceedings of the Eleventh International Conference on the Simulation and Synthesis of Living Systems (Artificial Life XI), August 5–8, 2008, Winchester, Hampshire, UK. MIT Press: Cambridge, MA, USA.
447. Bundesministerium für Wirtschaft und Technologie (BMWi): Berlin, Germany. Innovationspolitik, Informationsgesellschaft, Telekommunikation – Mobilität und Verkehrstechnologien – Das 3. Verkehrsforschungsprogramm der Bundesregierung. Bundesministerium für Wirtschaft und Technologie (BMWi), Öffentlichkeitsarbeit: Berlin, Germany, May 2008.
448. Alan Bundy, editor. Proceedings of the 8th International Joint Conference on Artificial Intelligence (IJCAI'83-I), August 1983, Karlsruhe, Germany, volume 1. William Kaufmann: Los Altos, CA, USA. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-83-VOL-1/CONTENT/content.htm [accessed 2008-04-01]. See also [449].
449. Alan Bundy, editor. Proceedings of the 8th International Joint Conference on Artificial Intelligence (IJCAI'83-II), August 1983, Karlsruhe, Germany, volume 2. William Kaufmann: Los Altos, CA, USA. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-83-VOL-2/CONTENT/content.htm [accessed 2008-04-01]. See also [448].
450. Christopher J. C. Burges. A Tutorial on Support Vector Machines for Pattern Recognition. Data Mining and Knowledge Discovery, 2(2):121–167, 1998, Springer Netherlands: Dordrecht, Netherlands, Usama Fayyad, editor. doi: 10.1023/A:1009715923555. CiteSeerx: 10.1.1.18.1083.
451. Luciana Buriol, Paulo M. França, and Pablo Moscato. A New Memetic Algorithm for the Asymmetric Traveling Salesman Problem. Journal of Heuristics, 10(5):483–506, September 2004, Springer Netherlands: Dordrecht, Netherlands. doi: 10.1023/B:HEUR.0000045321.59202.52. Fully available at http://citeseer.ist.psu.edu/544761.html and http://www.springerlink.com/content/w617486q60mphg88/fulltext.pdf [accessed 2007-09-12]. CiteSeerx: 10.1.1.20.1660.
452. Edmund K. Burke and Wilhelm Erben, editors. Selected Papers from the Third International Conference on Practice and Theory of Automated Timetabling (PATAT'00), August 16–18, 2000, Konstanz, Germany, volume 2079/2001 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-44629-X. isbn: 3-540-42421-0.
453. Edmund K. Burke and Graham Kendall, editors. Search Methodologies – Introductory Tutorials in Optimization and Decision Support Techniques. Springer Science+Business Media, Inc.: New York, NY, USA, 2005. doi: 10.1007/0-387-28356-0. isbn: 0-387-23460-8. Google Books ID: X4BcOZEa4HsC. Library of Congress Control Number (LCCN): 2005051623.
454. Edmund K. Burke, Steven Matt Gustafson, and Graham Kendall. Survey and Analysis of Diversity Measures in Genetic Programming. In GECCO'02 [1675], pages 716–723, 2002. Fully available at http://www.gustafsonresearch.com/research/publications/gecco-diversity-2002.pdf [accessed 2009-07-09]. CiteSeerx: 10.1.1.2.3757.
455. Edmund K. Burke, Steven Matt Gustafson, Graham Kendall, and Natalio Krasnogor. Advanced Population Diversity Measures in Genetic Programming. In PPSN VII [1870], pages 341–350, 2002. doi: 10.1007/3-540-45712-7_33. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/burke_ppsn2002_pp341.html and http://www.gustafsonresearch.com/research/publications/ppsn-2002.ps [accessed 2009-08-07]. CiteSeerx: 10.1.1.18.6142.
456. Edmund K. Burke, Steven Matt Gustafson, Graham Kendall, and Natalio Krasnogor. Is Increasing Diversity in Genetic Programming Beneficial? An Analysis of the Effects on Fitness. In CEC'03 [2395], pages 1398–1405, volume 2, 2003. doi: 10.1109/CEC.2003.1299834. Fully available at http://www.gustafsonresearch.com/research/publications/cec-2003.pdf [accessed 2009-07-09]. CiteSeerx: 10.1.1.1.1929.
457. Edmund K. Burke, Graham Kendall, and Eric Soubeiga. A Tabu Search Hyperheuristic for Timetabling and Rostering. Journal of Heuristics, 9(6):451–470, December 2003, Springer Netherlands: Dordrecht, Netherlands. doi: 10.1023/B:HEUR.0000012446.94732.b6. Fully available at http://www.asap.cs.nott.ac.uk/publications/pdf/Journal_eric.pdf and http://www.asap.cs.nott.ac.uk/publications/pdf/Journal_eric03.pdf [accessed 2011-04-23]. CiteSeerx: 10.1.1.1.9539.
458. Edmund K. Burke, Steven Matt Gustafson, and Graham Kendall. Diversity in Genetic Programming: An Analysis of Measures and Correlation with Fitness. IEEE Transactions on Evolutionary Computation (IEEE-EC), 8(1):47–62, February 2004, IEEE Computer Society: Washington, DC, USA, Xin Yao, editor. doi: 10.1109/TEVC.2003.819263. Fully available at http://www.gustafsonresearch.com/research/publications/gustafson-ieee2004.pdf [accessed 2009-07-09]. CiteSeerx: 10.1.1.59.6663.
459. Martin V. Butz. Anticipatory Learning Classifier Systems, Genetic and Evolutionary Computation. Springer US: Boston, MA, USA and Kluwer Academic Publishers: Norwell, MA, USA, January 25, 2002. isbn: 0792376307.
460. Martin V. Butz. Rule-Based Evolutionary Online Learning Systems: A Principled Approach
to LCS Analysis and Design, Studies in Fuzziness and Soft Computing. Springer-Verlag
GmbH: Berlin, Germany, December 22, 2005. isbn: 3540253793.
461. Martin V. Butz, Kumara Sastry, and David Edward Goldberg. Tournament Selection in XCS. Technical Report 2002020, Illinois Genetic Algorithms Laboratory (IlliGAL), Department of Computer Science, Department of General Engineering, University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, July 2002. Fully available at ftp://ftp-illigal.ge.uiuc.edu/pub/papers/IlliGALs/2002020.ps.Z [accessed 2010-08-03]. CiteSeerx: 10.1.1.19.1850.
462. Elizabeth Byrnes and Jan Aikins, editors. Proceedings of the Sixth Conference on Innovative Applications of Artificial Intelligence (IAAI'94), August 1–3, 1994, Convention Center: Seattle, WA, USA. AAAI Press: Menlo Park, CA, USA. isbn: 0-929280-62-8. Partly available at http://www.aaai.org/Conferences/IAAI/iaai94.php [accessed 2007-09-06]. OCLC: 635873487. GBV-Identification (PPN): 196175178.
C
463. Louis Caccetta and Araya Kulanoot. Computational Aspects of Hard Knapsack Problems. Nonlinear Analysis: Theory, Methods & Applications – An International Multidisciplinary Journal, 47(8):5547–5558, August 2001, Elsevier Science Publishers B.V.: Essex, UK. Imprint: Pergamon Press: Oxford, UK. doi: 10.1016/S0362-546X(01)00658-7.
464. Stefano Cagnoni, Riccardo Poli, Yung-Chien Lin, George D. Smith, David Wolfe Corne, Martin J. Oates, Emma Hart, Pier Luca Lanzi, Egbert J. W. Boers, Ben Paechter, and Terence Claus Fogarty, editors. Real-World Applications of Evolutionary Computing: EvoIASP, EvoSCONDI, EvoTel, EvoSTIM, EvoROB, and EvoFlight (EvoWorkshops'00), April 17, 2000, Edinburgh, Scotland, UK, volume 1803/2000 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-45561-2. isbn: 3-540-67353-9. Google Books ID: KQo0n1vZbi0C. OCLC: 43851409, 122976715, 174380229, and 243487744.
465. Stefano Cagnoni, Jens Gottlieb, Emma Hart, Martin Middendorf, and Günther R. Raidl, editors. Applications of Evolutionary Computing, Proceedings of EvoWorkshops 2002: EvoCOP, EvoIASP, EvoSTIM/EvoPLAN (EvoWorkshops'02), April 2–4, 2002, Kinsale, Ireland, volume 2279/2002 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-46004-7. isbn: 3-540-43432-1.
466. Stefano Cagnoni, Evelyne Lutton, and Gustavo Olague, editors. Genetic and Evolutionary Computation for Image Processing and Analysis, volume 8 in EURASIP Book Series on Signal Processing and Communications. Hindawi Publishing Corporation: New York, NY, USA, February 2008. Fully available at http://www.hindawi.com/books/9789774540011.pdf [accessed 2008-05-17].
467. Sébastien Cahon, Nordine Melab, and El-Ghazali Talbi. ParadisEO: A Framework for the Reusable Design of Parallel and Distributed Metaheuristics. Journal of Heuristics, 10(3):357–380, May 2004, Springer Netherlands: Dordrecht, Netherlands. doi: 10.1023/B:HEUR.0000026900.92269.ec.
468. Tao Cai, Feng Pan, and Jie Chen. Adaptive Particle Swarm Optimization Algorithm. In WCICA'04 [1388], pages 2245–2247, volume 3, 2004. doi: 10.1109/WCICA.2004.1341988. INSPEC Accession Number: 8152861.
469. Proceedings of the 7th Asia-Pacific Conference on Complex Systems (Complex'04), December 6–10, 2004, Cairns Convention Centre: Cairns, Australia.
470. Osvaldo Cairó, Luis Enrique Sucar, and Francisco J. Cantú, editors. Advances in Artificial Intelligence, Proceedings of the Mexican International Conference on Artificial Intelligence (MICAI'00), April 11–14, 2000, Acapulco, Mexico, volume 1793/2000 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/10720076. isbn: 3-540-67354-7.
471. Craig Caldwell and Victor S. Johnston. Tracking a Criminal Suspect Through Face-Space with a Genetic Algorithm. In ICGA'91 [254], pages 416–421, 1991.
472. Cristian S. Calude, Michael J. Dinneen, Gheorghe Păun, Grzegorz Rozenberg, and Susan Stepney, editors. Proceedings of the 5th International Conference on Unconventional Computation (UC'06), September 4–8, 2006, York, UK, volume 4135 in Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11839132. isbn: 3-540-38593-2. Library of Congress Control Number (LCCN): 2006931474.
473. William H. Calvin. The River That Flows Uphill: A Journey from the Big Bang to the Big
Brain. Macmillan Publishers Co.: New York, NY, USA, Backinprint.com: New York, NY,
USA, and Sierra Club Books: San Francisco, CA, USA, July 1986. isbn: 0025209205,
0595167004, 0792445228, and 0-87156-719-9. Google Books ID: KVdoPwAACAAJ,
OpNuPwAACAAJ, nT7rNiPqEuAC, and uavwAAAAMAAJ. OCLC: 13524443, 48962546, and
246817496. Library of Congress Control Number (LCCN): 86008490 and 87000381.
GBV-Identication (PPN): 116426063 and 277678366. LC Classication: QP356 .C35
1986b. See also [474].
474. William H. Calvin. Die Geschichte des Lebens – Vom Urknall bis zum Großhirn des Homo Sapiens. Bechtermünz Verlag GmbH: Augsburg, Germany, 1997, Friedrich Griese, Translator. isbn: 3-86047-567-3. OCLC: 57376043. See also [473].
475. Mike Campos, Eric W. Bonabeau, Guy Theraulaz, and Jean-Louis Deneubourg. Dynamic Scheduling and Division of Labor in Social Insects. Adaptive Behavior, 8(2):83–95, March 2000, SAGE Publications: Thousand Oaks, CA, USA. doi: 10.1177/105971230000800201. Fully available at http://www.icosystem.com/dynamic-scheduling-and-division-of-labor-in-social-insects/ [accessed 2011-05-02]. CiteSeerx: 10.1.1.112.3678.
476. Erick Cantú-Paz. Designing Efficient and Accurate Parallel Genetic Algorithms. PhD thesis, Illinois Genetic Algorithms Laboratory (IlliGAL), Department of Computer Science, Department of General Engineering, University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, July 1999. Fully available at http://citeseer.ist.psu.edu/cantu-paz99designing.html [accessed 2008-04-06]. Also IlliGAL report 99017. See also [477].
477. Erick Cantú-Paz. Efficient and Accurate Parallel Genetic Algorithms, volume 1 in Genetic and Evolutionary Computation. Springer US: Boston, MA, USA and Kluwer Academic Publishers: Norwell, MA, USA, December 15, 2000. isbn: 0-7923-7221-2. Google Books ID: UXkGGQbmsfAC. OCLC: 45058477. See also [476].
478. Erick Cantú-Paz, editor. Late Breaking Papers at the Genetic and Evolutionary Computation Conference (GECCO'02 LBP), July 9–13, 2002, Roosevelt Hotel: New York, NY, USA. AAAI Press: Menlo Park, CA, USA. See also [224, 1675, 1790].
479. Erick Cantú-Paz and Francisco Fernández de Vega, editors. First Workshop on Parallel Architectures and Bioinspired Algorithms (WPBA'05), June 14, 2005, Oslo, Norway.
480. Erick Cantú-Paz and Chandrika Kamath. Combining Evolutionary Algorithms with Oblique Decision Trees to Detect Bent-Double Galaxies. In Proceedings of SPIE's 45th International Symposium on Optical Science and Technology: Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation III [362], pages 63–71, 2000.
481. Erick Cantú-Paz and Chandrika Kamath. Using Evolutionary Algorithms to Induce Oblique Decision Trees. In GECCO'00 [2935], pages 1053–1060, 2000. Fully available at https://computation.llnl.gov/casc/sapphire/dtrees/oc1.html and www.evolutionaria.com/publications/gecco00-obliqueDT.ps.gz [accessed 2009-09-09]. CiteSeerx: 10.1.1.28.7300.
482. Erick Cantú-Paz and William F. Punch, editors. Evolutionary Computation and Parallel Processing Workshop, 1999, Orlando, FL, USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. Part of [211].
483. Erick Cantú-Paz, Martin Pelikan, and David Edward Goldberg. Linkage Problem, Distribution Estimation, and Bayesian Networks. Evolutionary Computation, 8(3):311–340, Fall 2000, MIT Press: Cambridge, MA, USA. doi: 10.1162/106365600750078808. CiteSeerx: 10.1.1.16.236. See also [2150].
484. Erick Cantú-Paz, James A. Foster, Kalyanmoy Deb, Lawrence Davis, Rajkumar Roy, Una-May O'Reilly, Hans-Georg Beyer, Russell K. Standish, Graham Kendall, Stewart W. Wilson, Mark Harman, Joachim Wegener, Dipankar Dasgupta, Mitchell A. Potter, Alan C. Schultz, Kathryn A. Dowsland, Natasa Jonoska, and Julian Francis Miller, editors. Proceedings of the Genetic and Evolutionary Computation Conference, Part I (GECCO'03-I), July 12–16, 2003, Holiday Inn Chicago: Chicago, IL, USA, volume 2723/2003 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-45105-6. isbn: 3-540-40602-6. Google Books ID: IlWhIKahmsC. OCLC: 52506094, 52559224, 53068317, 243490402, and 314238192. See also [485, 2078].
485. Erick Cantú-Paz, James A. Foster, Kalyanmoy Deb, Lawrence Davis, Rajkumar Roy, Una-May O'Reilly, Hans-Georg Beyer, Russell K. Standish, Graham Kendall, Stewart W. Wilson, Mark Harman, Joachim Wegener, Dipankar Dasgupta, Mitchell A. Potter, Alan C. Schultz, Kathryn A. Dowsland, Natasa Jonoska, and Julian Francis Miller, editors. Proceedings of the Genetic and Evolutionary Computation Conference, Part II (GECCO'03-II), July 12–16, 2003, Holiday Inn Chicago: Chicago, IL, USA, volume 2724/2003 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-45110-2. isbn: 3-540-40603-4. Google Books ID: eYlQAAAAMAAJ and zcN2KcCllT0C. OCLC: 52506094, 52559224, 59295690, and 314195657. See also [484, 2078].
486. Aizeng Cao, Yueting Chen, Jun Wei, and Jinping Li. A Hybrid Evolutionary Algorithm Based on EDAs and Clustering Analysis. In CCC'07 [555], pages 754–758, 2007. doi: 10.1109/CHICC.2006.4347236. INSPEC Accession Number: 9800559.
487. Hongqing Cao, Jingxian Yu, and Lishan Kang. An Evolutionary Approach for Modeling the Equivalent Circuit for Electrochemical Impedance Spectroscopy. In CEC'03 [2395], pages 1819–1825, volume 3, 2003. doi: 10.1109/CEC.2003.1299893. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/Cao_2003_Aeafmtecfeis.html [accessed 2009-06-16].
488. Mathieu S. Capcarrère, Alex Alves Freitas, Peter J. Bentley, Colin G. Johnson, and Jonathan Timmis, editors. Proceedings of the 8th European Conference on Advances in Artificial Life (ECAL'05), September 5–9, 2005, University of Kent: Canterbury, Kent, UK, volume 3630 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-28848-1.
489. Andrea Caponio, Giuseppe Leonardo Cascella, Ferrante Neri, Nadia Salvatore, and Mark Sumner. A Fast Adaptive Memetic Algorithm for Online and Offline Control Design of PMSM Drives. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics, 37(1):28–41, February 2007, IEEE Systems, Man, and Cybernetics Society: New York, NY, USA. doi: 10.1109/TSMCB.2006.883271. PubMed ID: 17278556. INSPEC Accession Number: 9299478.
490. Andrea Caponio, Ferrante Neri, and Ville Tirronen. Super-fit Control Adaptation in Memetic Differential Evolution Frameworks. Soft Computing – A Fusion of Foundations, Methodologies and Applications, 13(8-9):811–831, July 2009, Springer-Verlag: Berlin/Heidelberg. doi: 10.1007/s00500-008-0357-1.
491. Alain Cardon, Thierry Galinho, and Jean-Philippe Vacher. A Multi-Objective Genetic Algorithm in Job Shop Scheduling Problem to Refine an Agent's Architecture. In EUROGEN'99 [1894], 1999. Fully available at http://www.jeo.org/emo/cardon99.ps.gz.
492. Jorge Cardoso and Amit P. Sheth. Introduction to Semantic Web Services and Web Process Composition. In SWSWPC'04 [493], pages 1–13, 2004. doi: 10.1007/978-3-540-30581-1_1.
493. Jorge Cardoso and Amit P. Sheth, editors. Revised Selected Papers from the First International Workshop on Semantic Web Services and Web Process Composition (SWSWPC'04), July 6, 2004, Westin Horton Plaza Hotel: San Diego, CA, USA, volume 3387/2005 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/b105145. isbn: 3-540-24328-3. Google Books ID: AvChgxNuKK8C. Library of Congress Control Number (LCCN): 2004117658.
494. Jorge Cardoso, José Cordeiro, and Joaquim Filipe, editors. Proceedings of the 9th International Conference on Enterprise Information Systems (ICEIS'07), June 12–16, 2007, Funchal, Madeira, Portugal, volume SAIC. Institute for Systems and Technologies of Information, Control and Communication (INSTICC) Press: Setúbal, Portugal. isbn: 972-8865-91-0. Google Books ID: JwTJPgAACAAJ. See also [915].
495. Alessio Carenini, Dario Cerizza, Marco Comerio, Emanuele Della Valle, Flavio De Paoli, Andrea Maurino, Matteo Palmonari, Matteo Sassi, and Andrea Turati. Semantic Web Service Discovery and Selection: A Test Bed Scenario. In EON-SWSC'08 [1023], 2008. Fully available at http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-359/Paper-3.pdf [accessed 2010-12-16]. See also [2166].
496. Peter A. Cariani. Extradimensional Bypass. Biosystems, 64(1-3):47–53, January 2002, Elsevier Science Ireland Ltd.: East Park, Shannon, Ireland and North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/S0303-2647(01)00174-5.
497. Tonči Carić and Hrvoje Gold, editors. Vehicle Routing Problem. IN-TECH Education and Publishing: Vienna, Austria, September 2008. Fully available at http://intechweb.org/downloadfinal.php?is=978-953-7619-09-1&type=B [accessed 2009-01-13].
498. Anthony Jack Carlisle. Applying the Particle Swarm Optimizer to Non-Stationary Environments. PhD thesis, Auburn University, Graduate Faculty: Auburn, AL, USA, December 16, 2002, Gerry V. Dozier, Advisor, Gerry V. Dozier, Kai-Hsiung Chang, Dean Hendrix, and Stephen L. McFarland, Committee members. Fully available at http://antho.huntingdon.edu/publications/Dissertation.pdf [accessed 2009-07-13].
499. Anthony Jack Carlisle and Gerry V. Dozier. Tracking Changing Extrema with Adaptive Particle Swarm Optimizer. In WAC'02 [1432], pages 265–270, volume 13, 2002. doi: 10.1109/WAC.2002.1049555. Fully available at http://antho.huntingdon.edu/publications/default.html [accessed 2007-08-19].
500. Charles W. Carroll. An Operations Research Approach to the Economic Optimization of a Kraft Pulping Process. PhD thesis, Institute of Paper Chemistry: Appleton, WI, USA, Georgia Institute of Technology: Atlanta, GA, USA, June 1959. Fully available at http://hdl.handle.net/1853/5853 [accessed 2008-11-15].
501. Charles W. Carroll. The Created Response Surface Technique for Optimizing Nonlinear, Restrained Systems. Operations Research, 9(2):169–184, March–April 1961, Institute for Operations Research and the Management Sciences (INFORMS): Linthicum, ML, USA and HighWire Press (Stanford University): Cambridge, MA, USA. doi: 10.1287/opre.9.2.169.
502. Ted Carson and Russell Impagliazzo. Hill-Climbing finds Random Planted Bisections. In Proceedings of the twelfth annual ACM-SIAM Symposium on Discrete Algorithms [2543], pages 903–909, volume 12, 2001. Fully available at http://citeseer.ist.psu.edu/carson01hillclimbing.html and http://portal.acm.org/citation.cfm?id=365411.365805 [accessed 2007-09-11].
503. Hugh Cartwright, editor. Intelligent Data Analysis in Science, volume 4 in Oxford Chemistry
Masters. Oxford University Press, Inc.: New York, NY, USA, June 2000. isbn: 0198502338.
504. Richard A. Caruana, Larry J. Eshelman, and J. David Schaffer. Representation and Hidden Bias: Gray vs. Binary Coding for Genetic Algorithms. In ICML'88 [1659], pages 153–161, 1988.
505. George Casella and Roger L. Berger. Statistical Inference, Duxbury Advanced Series. Duxbury Thomson Learning / Duxbury Press: Pacific Grove, CA, USA, 2nd edition, June 18, 2000. isbn: 0-534-24312-6. OCLC: 46538638.
506. Felix Castro, Àngela Nebot, and Francisco Mugica. A Soft Computing Decision Support Framework to Improve the e-Learning Experience. In SpringSim'08 [2258], pages 781–788, 2008. Fully available at http://portal.acm.org/citation.cfm?id=1400674 [accessed 2009-03-29]. SESSION: 2008 Modeling & Simulation in Education (MSE'08): The Education Continuum.
507. Seventh European Conference on Machine Learning (ECML'94), April 6–8, 1994, Catania, Sicily, Italy.
508. H. John Caulfield, Shu-Heng Chen, Heng-Da Cheng, Richard J. Duro, Vasant Honavar, Etienne E. Kerre, Mikulas Luptacik, Manuel Graña Romay, Timothy K. Shih, Dan Ventura, Paul P. Wang, and Yuanyuan Yang, editors. Proceedings of the Sixth Joint Conference on Information Science (JCIS 2002), Section: The Fourth International Workshop on Frontiers in Evolutionary Algorithms (FEA'02), March 8–13, 2002, Research Triangle Park, NC, USA. Association for Intelligent Machinery, Inc.: Durham, NC, USA. isbn: 0-9707890-1-7.
509. Proceedings of the 1st International Conference on Bio Inspired mOdels of NEtwork, Information and Computing Systems (BIONETICS'06), December 11–13, 2006, Cavalese, Italy.
510. Daniel Joseph Cavicchio, Jr. Adaptive Search using Simulated Evolution. PhD thesis, University of Michigan, College of Literature, Science, and the Arts, Computer and Communication Sciences Department: Ann Arbor, MI, USA, August 1970, John Henry Holland, Advisor. Fully available at http://deepblue.lib.umich.edu/handle/2027.42/4042 [accessed 2010-08-03]. ID: bab9712.0001.001.
511. Daniel Joseph Cavicchio, Jr. Reproductive Adaptive Plans. In ACM'72 [21], pages 60–70. doi: 10.1145/800193.805822.
512. 14th European Conference on Machine Learning (ECML'03), September 22–26, 2003, Cavtat, Dubrovnik, Croatia.
513. The Second Annual Symposium on Autonomous Intelligent Networks and Systems (AINS'03), June 30–July 1, 2003, SRI International: Menlo Park, CA, USA. Center for Autonomous Intelligent Networks and Systems (CAINS), University of California (UCLA): Los Angeles, CA, USA. Fully available at http://path.berkeley.edu/ains/ [accessed 2009-07-10].
514. Proceedings of the 20th Informatics and Operations Research Meeting (20th Jornadas Argentinas de Informática e Investigación Operativa) (JAIIO'91), August 20–23, 1991, Centro Cultural General San Martín: Buenos Aires, Argentina.
515. Proceedings of the International Conference on Soft Computing and Pattern Recognition (SoCPaR'10), December 7–10, 2010, Cergy-Pontoise, France.
516. Vladimír Černý. Thermodynamical Approach to the Traveling Salesman Problem: An Efficient Simulation Algorithm. Journal of Optimization Theory and Applications, 45(1):41–51, January 1985, Springer Netherlands: Dordrecht, Netherlands and Plenum Press: New York, NY, USA. doi: 10.1007/BF00940812. Fully available at http://mkweb.bcgsc.ca/papers/cerny-travelingsalesman.pdf [accessed 2010-09-25]. Communicated by S. E. Dreyfus. Also: Technical Report, Comenius University, Mlynská Dolina, Bratislava, Czechoslovakia, 1982.
517. Miroslav Červenka and Vojtěch Křesálek. Aerodynamic Wing Optimisation Using SOMA Evolutionary Algorithm. In NICSO'08 [1620], pages 127–138, 2008. doi: 10.1007/978-3-642-03211-0_11. See also [518].
518. Miroslav Červenka and Ivan Zelinka. Application of Evolutionary Algorithm on Aerodynamic Wing Optimisation. In ECC'08 [1819], 2008. Fully available at http://www.wseas.us/e-library/conferences/2008/malta/ecc/ecc53.pdf [accessed 2010-12-27]. See also [517].
519. Deepti Chafekar, Jiang Xuan, and Khaled Rasheed. Constrained Multi-objective Optimization Using Steady State Genetic Algorithms. In GECCO'03-I [484], pages 813–824, 2003. Fully available at http://www.cs.uga.edu/~khaled/papers/385.pdf [accessed 2010-08-01]. CiteSeerx: 10.1.1.15.2094.
520. Uday Kumar Chakraborty, Kalyanmoy Deb, and Mandira Chakraborty. Analysis of Selection Algorithms: A Markov Chain Approach. Evolutionary Computation, 4(2):133–167, Summer 1996, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1996.4.2.133.
521. Lance D. Chambers, editor. Practical Handbook of Genetic Algorithms: Applications, volume
I in Practical Handbook of Genetic Algorithms. Chapman & Hall: London, UK and CRC
Press, Inc.: Boca Raton, FL, USA, 2nd ed: 2000 edition, 1995. isbn: 0849325196 and
1584882409.
522. Lance D. Chambers, editor. Practical Handbook of Genetic Algorithms: New Frontiers, vol-
ume II. CRC Press, Inc.: Boca Raton, FL, USA, 1995. isbn: 0849325293. Google Books
ID: 9RCE3pgj9K4C. OCLC: 32394368, 35865793, 54016160, 61827855, 468627012,
and 503637768.
523. Lance D. Chambers, editor. Practical Handbook of Genetic Algorithms: Complex Coding
Systems, volume III in Practical Handbook of Genetic Algorithms. Chapman & Hall: London,
UK and CRC Press, Inc.: Boca Raton, FL, USA, December 23, 1998. isbn: 0849325390.
524. Christine W. Chan, Witold Kinsner, Yingxu Wang, and D. Michael Miller, editors. Proceedings of the 3rd IEEE International Conference on Cognitive Informatics (ICCI'04), August 16–17, 2004, Victoria, Canada. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7695-2190-8.
525. Felix T.S. Chan and Manoj Kumar Tiwari, editors. Swarm Intelligence – Focus on Ant and Particle Swarm Optimization. I-Tech Education and Publishing: Vienna, Austria, December 2007. Fully available at http://s.i-techonline.com/Book/Swarm-Intelligence/Swarm-Intelligence.zip [accessed 2008-08-22].
526. Sandeep Chandran and R. Russell Rhinehart. Heuristic Random Optimizer – Version II. In ACC'02 [1335], pages 2589–2594, volume 4, 2002. doi: 10.1109/ACC.2002.1025175.
527. C. S. Chang and Chung Min Kwan. Evaluation of Evolutionary Algorithms for Multi-Objective Train Schedule Optimization. In AI'04 [2871], pages 803–815, 2004. doi: 10.1007/978-3-540-30549-1_69 and 10.1007/b104336.
528. Vira Chankong and Yacov Y. Haimes. Multiobjective Decision Making – Theory and Methodology. North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands, and Dover Publications: Mineola, NY, USA, January 1983. isbn: 0-444-00710-5. Google Books ID: fQ5VPQAACAAJ.
529. 14th Conference on Technologies and Applications of Artificial Intelligence (TAAI'09), October 30–31, 2009, Chaoyang University of Technology (CYU): Wufeng Township, Taichung County, Taiwan.
530. Abraham Charnes and William Wager Cooper. Management Models and Industrial Applications of Linear Programming. John Wiley & Sons Canada Ltd.: Toronto, ON, Canada, December 1961. isbn: 0471148504. Google Books ID: xj3WAAAACAAJ.
531. Abraham Charnes, William Wager Cooper, and R. O. Ferguson. Optimal Estimation of Executive Compensation by Linear Programming. Management Science, 1(2):138–151, January 1955, Institute for Operations Research and the Management Sciences (INFORMS): Linthicum, ML, USA and HighWire Press (Stanford University): Cambridge, MA, USA. doi: 10.1287/mnsc.1.2.138.
532. Neela Chattoraj and Jibendu Sekhar Roy. Application of Genetic Algorithm to the Optimiza-
tion of Microstrip Antennas with and without Superstrate. Mikrotalasna Revija (Microwave
Review), 2(6), November 2006, Yugoslav Society for Microwave Technique and Technolo-
gies, Serbia and Montenegro IEEE MTT-S Chapter: Novi Beograd, Serbia. Fully available
at http://www.mwr.medianis.net/pdf/Vol12No2-06-NChattoraj.pdf [accessed 2008-
09-02].
533. Leonardo Weiss F. Chaves, Erik Buchmann, Fabian Hueske, and Klemens Böhm. Towards
Materialized View Selection for Distributed Databases. In EDBT09 [1526], pages 1088–1099,
2009. doi: 10.1145/1516360.1516484. Fully available at
http://www.ipd.kit.edu/~buchmann/pdfs/weiss08matviews.pdf [accessed 2011-04-09].
534. José M. Chaves-González, Miguel A. Vega-Rodríguez, David Domínguez-González, Juan A.
Gómez-Pulido, and Juan M. Sánchez-Pérez. SS vs PBIL to Solve a Real-World Frequency
Assignment Problem in GSM Networks. In EvoWorkshops08 [1051], pages 21–30, 2008.
doi: 10.1007/978-3-540-78761-7_3.
535. Pravir K. Chawdry, Rajkumar Roy, and Raj K. Pant, editors. Soft Computing in Engineering
Design and Manufacturing – Proceedings of the 2nd On-line World Conference
on Soft Computing in Engineering Design and Manufacture (WSC2), June 1997. Springer-
Verlag GmbH: Berlin, Germany. isbn: 3-540-76214-0. Google Books ID: mxcP1mSjOlsC.
OCLC: 37696334, 247521345, and 634603077. Library of Congress Control Number
(LCCN): 97031959. GBV-Identication (PPN): 236542257 and 279731663. LC Classi-
cation: QA76.9.S63 S63 1998.
536. Sin Man Cheang, Kwong Sak Leung, and Kin Hong Lee. Genetic Parallel Programming:
Design and Implementation. Evolutionary Computation, 14(2):129–156, Summer 2006, MIT
Press: Cambridge, MA, USA. doi: 10.1162/evco.2006.14.2.129.
537. 10th European Conference on Machine Learning (ECML98), April 21–24, 1998, Chemnitz,
Sachsen, Germany.
1000 REFERENCES
538. Arbee L. P. Chen, Jui-Shang Chiu, and Frank S.C. Tseng. Evaluating Aggregate Operations
Over Imprecise Data. IEEE Transactions on Knowledge and Data Engineering, 8(2):273–284,
April 1996, IEEE Computer Society Press: Los Alamitos, CA, USA. Fully available
at http://make.cs.nthu.edu.tw/alp/alp_paper/Evaluating%20aggregate%20operations%20over%20imprecise%20data.pdf [accessed 2007-09-12].
539. Chi-hau Chen, L. F. Pau, and P. Shen-pei Wang, editors. Handbook of Pattern Recogni-
tion and Computer Vision. World Scientific Publishing Co.: Singapore, 1st edition, Au-
gust 1993. isbn: 981-02-1136-8, 981-02-2276-9, and 981-02-3071-0. Google Books
ID: 5J6J7PP0-ZMC, pCghq7-i1oMC, and wnhwKoen4xAC.
540. Chia-Mei Chen, Bing Chiang Jeng, Chia Ru Yang, and Gu Hsin Lai. Tracing Denial of
Service Origin: Ant Colony Approach. In EvoWorkshops06 [2341], pages 286–295, 2006.
doi: 10.1007/11732242_26.
541. Guoliang Chen, editor. Genetic Algorithm and its Application (Yíchuán Suànfǎ Jí Qí
Yìngyòng), National High-Tech Key Books: Communications Technology (Quánguó Gāo
Jìshù Zhòngdiǎn Túshū: Tōngxìn Jìshù Lǐngyù). Post & Telecom Press (PTPress, Rénmín
Yóudiàn Chūbǎn Shè): Běijīng, China, 1996–2001. isbn: 7115059640. Google Books
ID: qmNoAAAACAAJ. OCLC: 300052154.
542. Jing Chen, Zeng-zhi Li, Zhi-Gang Liao, and Yun-lan Wang. Distributed Service Performance
Management Based on Linear Regression and Genetic Programming. In ICMLC05 [2263],
pages 560–563, volume 1, 2005. doi: 10.1109/ICMLC.2005.1527007.
543. Jing Chen, Zeng-zhi Li, and Yun-lan Wang. Distributed Service Management Based on
Genetic Programming. In AWIC05 [2642], pages 83–88, 2005. doi: 10.1007/11495772_14.
544. Jong-Chen Chen and Michael Conrad. A Multilevel Neuromolecular Architecture that uses
the Extradimensional Bypass Principle to Facilitate Evolutionary Learning. Physica D: Nonlinear
Phenomena, 75(1–3):417–437, August 1, 1994, Elsevier Science Publishers B.V.: Amsterdam,
The Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam,
The Netherlands. doi: 10.1016/0167-2789(94)90295-X.
545. Jong-Chen Chen, Guo-Xun Liao, Jr-Sung Hsie, and Cheng-Hua Liao. A Study of the Contribution
Made by Evolutionary Learning on Dynamic Load-Balancing Problems in Distributed
Computing Systems. Expert Systems with Applications – An International Journal, 34(1):357–365,
January 2008, Elsevier Science Publishers B.V.: Essex, UK. Imprint: Pergamon Press:
Oxford, UK. doi: 10.1016/j.eswa.2006.09.036.
546. Ken Chen, Shu-Heng Chen, Heng-Da Cheng, David K.Y. Chin, Sanjoy Das, Richard J.
Duro, Zhen Jiang, Nikola Kasabov, Etienne E. Kerre, Hong Va Leong, Qing Li, Mi Lu,
Manuel Graña Romay, Dan Ventura, Paul P. Wang, and Jie Wu, editors. Proceedings of the
Seventh Joint Conference on Information Science (JCIS 2003), Section: The Fifth International
Workshop on Frontiers in Evolutionary Algorithms (FEA03), September 26–30, 2003,
Embassy Suites Hotel and Conference Center: Cary, NC, USA. Association for Intelligent
Machinery, Inc.: Durham, NC, USA. isbn: 0964345692. OCLC: 248354376. Workshop
held in conjunction with Seventh Joint Conference on Information Sciences.
547. Min-Rong Chen, Yong-zai Lu, and Gen-ke Yang. Multiobjective Extremal Optimization with
Applications to Engineering Design. Journal of Zhejiang University Science A (JZUS-A), 8(12):
1905–1911, November 2007, Zhejiang University Press: Hangzhou, Zhèjiāng, China and
Springer-Verlag GmbH: Berlin, Germany. doi: 10.1631/jzus.2007.A1905.
548. Min-Rong Chen, Yong-zai Lu, and Gen-ke Yang. Multiobjective Extremal Optimization with
Applications to Engineering Design. Journal of Zhejiang University Science A (JZUS-A), 8(12):
1905–1911, November 2007, Zhejiang University Press: Hangzhou, Zhèjiāng, China and
Springer-Verlag GmbH: Berlin, Germany. doi: 10.1631/jzus.2007.A1905.
549. Shu-Heng Chen, editor. Evolutionary Computation in Economics and Finance, volume 100 in
Studies in Fuzziness and Soft Computing. Springer-Verlag GmbH: Berlin, Germany, August 5,
2002. isbn: 3790814768.
550. Tianshi Chen, Ke Tang, Guoliang Chen, and Xin Yao. A Large Population Size Can Be
Unhelpful in Evolutionary Algorithms. Theoretical Computer Science, 2011, Elsevier Science
Publishers B.V.: Essex, UK. doi: 10.1016/j.tcs.2011.02.016.
551. Wenxiang Chen, Thomas Weise, Zhenyu Yang, and Ke Tang. Large-Scale Global
Optimization Using Cooperative Coevolution with Variable Interaction Learning. In
PPSN10-2 [2413], pages 300–309, 2010. doi: 10.1007/978-3-642-15871-1_31. Fully
available at http://mail.ustc.edu.cn/~chenwx/ppsn10Chen.pdf [accessed 2011-02-18]
and http://www.it-weise.de/documents/files/CWYT2010LSGOUCCWVIL.pdf. Ei
ID: 20104513369063. IDS (SCI): BTC53.
552. Ying-Ping Chen. Extending the Scalability of Linkage Learning Genetic Algorithms – Theory
& Practice, volume 190/2006 in Studies in Fuzziness and Soft Computing. Springer-Verlag
GmbH: Berlin, Germany, University of Illinois: Urbana-Champaign, IL, USA, 2004–2006,
David Edward Goldberg, Advisor. doi: 10.1007/b102053. isbn: 3-540-28459-1. Google
Books ID: kKr3rKhPU7oC. Order No.: AAI3130894.
553. Yuehui Chen and Ajith Abraham, editors. Proceedings of the Sixth International Conference
on Intelligent Systems Design and Applications (ISDA06), October 16–18, 2006, Jǐnán,
Shāndōng, China. IEEE Computer Society: Washington, DC, USA. isbn: 0-7695-2528-8.
Google Books ID: xgDBAgAACAAJ. Order No.: P2528.
554. Zhixiong Chen, Charles A. Shoniregun, and Yuan-Chwen You, editors. First IEEE In-
ternational Services Computing Contest (SCContest06), 2006, Chicago, IL, USA. IEEE
Computer Society Press: Los Alamitos, CA, USA. Fully available at http://iscc.
servicescomputing.org/2006/ [accessed 2010-12-17]. Part of [1373].
555. Dàizhǎn Chéng and Mǐn Wú, editors. Proceedings of the 26th Chinese Control Conference
(CCC07), July 26–31, 2007, Zhāngjiājiè, Húnán, China. IEEE (Institute of Electrical
and Electronics Engineers): Piscataway, NJ, USA. isbn: 7-81124-055-6. Google Books
ID: MCLlMQAACAAJ. OCLC: 182550073. Catalogue no.: 07EX1694.
556. Hui Cheng and Shengxiang Yang. Genetic Algorithms with Elitism-based Immigrants for
Dynamic Shortest Path Problem in Mobile Ad Hoc Networks. In CEC09 [1350], pages
3135–3140, 2009. doi: 10.1109/CEC.2009.4983340. INSPEC Accession Number: 10688935.
557. Yiu-ming Cheung, Yuping Wang, and Hailin Liu, editors. Proceedings of the 2006 Inter-
national Conference on Computational Intelligence and Security (CIS06), November 3–6,
2006, Ramada Pearl Hotel, Guangzhou, Guǎngdōng, China. IEEE (Institute of Electrical and
Electronics Engineers): Piscataway, NJ, USA. isbn: 1-4244-0605-6. Library of Congress
Control Number (LCCN): 2006931375. Catalogue no.: 06EX1512.
558. Bastien Chevreux. Genetische Algorithmen zur Molekülstrukturoptimierung [Genetic Algorithms
for Molecular Structure Optimization]. Master's thesis, Universität Heidelberg: Heidelberg,
Germany, Fachhochschule Heilbronn: Heilbronn, Germany, and Deutsches
Krebsforschungszentrum Heidelberg: Heidelberg, Germany, March 1,
1997. Fully available at http://chevreux.org/diplom/diplom.html [accessed 2010-08-01].
559. Alpha C. Chiang and Kevin Wainwright. Fundamental Methods of Mathematical Eco-
nomics. McGraw-Hill Ltd.: Maidenhead, UK, England, 4th edition, February 2, 2005.
isbn: 0070109109.
560. Altannar Chinchuluun, Panos M. Pardalos, Athanasios Migdalas, and Leonidas S. Pit-
soulis, editors. Pareto Optimality, Game Theory and Equilibria, volume 17 in Springer
Optimization and Its Applications. Springer New York: New York, NY, USA, 2008.
doi: 10.1007/978-0-387-77247-9. isbn: 0387772464. Google Books ID: kNqHxU3Tc2YC.
561. Raymond Chiong, editor. Intelligent Systems for Automated Learning and Adaptation:
Emerging Trends and Applications. Information Science Reference: Hershey, PA, USA,
September 2009. doi: 10.4018/978-1-60566-798-0. isbn: 1-60566-798-6. Google Books
ID: bjFRPgAACAAJ and r3BUPgAACAAJ. OCLC: 318420065.
562. Raymond Chiong, editor. Nature-Inspired Algorithms for Optimisation, volume 193/2009 in
Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg, April 30, 2009.
doi: 10.1007/978-3-642-00267-0. isbn: 3-642-00266-8 and 3-642-00267-6. Google Books
ID: 557PPQbkfNYC. OCLC: 400530826, 405547847, and 423750494. Library of Congress
Control Number (LCCN): 2009920517.
563. Raymond Chiong, editor. Nature-Inspired Informatics for Intelligent Applications and Knowl-
edge Discovery: Implications in Business, Science and Engineering. Information Science
Reference: Hershey, PA, USA, 2009. doi: 10.4018/978-1-60566-705-8. isbn: 1605667056.
564. Raymond Chiong and Sandeep Dhakal, editors. Natural Intelligence for Schedul-
ing, Planning and Packing Problems, volume 250 in Studies in Computational Intelli-
gence. Springer-Verlag: Berlin/Heidelberg, October 2009. doi: 10.1007/978-3-642-04039-9.
isbn: 3-642-04038-1. Google Books ID: X1XkPwAACAAJ. Library of Congress Control
Number (LCCN): 2009934305.
565. Raymond Chiong and Thomas Weise. Global Optimisation and Mobile Learning.
Learning Technology – LTTF Newsletter, 11(1–2):26–28, January–April 2009, Technical
Committee on Learning Technology. Fully available at http://www.ieeetclt.org/
issues/april2009/index.html#_Toc230200599 and http://www.it-weise.de/
documents/files/CW2009GOAML.pdf [accessed 2009-09-08].
566. Raymond Chiong, Thomas Weise, and Bee Theng Lau. Template Design using Extremal
Optimization with Multiple Search Operators. In SoCPaR09 [1224], pages 202–207, 2009.
doi: 10.1109/SoCPaR.2009.49. Fully available at http://www.it-weise.de/documents/
files/CWL2009TDUEOWMSO.pdf [accessed 2010-06-10]. INSPEC Accession Number: 11050930.
Ei ID: 20101012758280. IDS (SCI): BOP29. See also [2895].
567. Raymond Chiong, Thomas Weise, and Zbigniew Michalewicz, editors. Variants of Evolution-
ary Algorithms for Real-World Applications. Springer-Verlag: Berlin/Heidelberg, September 30,
2011–2012. doi: 10.1007/978-3-642-23424-8. Google Books ID: B2ONePP40MEC.
Library of Congress Control Number (LCCN): 2011935740.
568. Rada Chirkova. The View-Selection Problem Has an Exponential-Time Lower Bound
for Conjunctive Queries and Views. In PODS02 [2204], pages 159–168, 2002.
doi: 10.1145/543613.543634. Fully available at http://dbgroup.ncsu.edu/rychirko/
Papers/pods02.pdf [accessed 2011-03-28]. CiteSeerˣ: 10.1.1.88.2166.
569. Huidae Cho, Francisco Olivera, and Seth D. Guikema. A Derivation of the Number of
Minima of the Griewank Function. Journal of Applied Mathematics and Computation, 204(2):
694–701, October 15, 2008, Elsevier Science Publishers B.V.: Essex, UK. Fully available
at https://ceprofs.civil.tamu.edu/folivera/PapersPDFs/A%20derivation
%20of%20the%20number%20of%20minima%20of%20the%20Griewank%20function.
pdf [accessed 2009-08-16]. Special Issue on New Approaches in Dynamic Optimization to
Assessment of Economic and Environmental Systems.
570. Chi-Hon Choi, Jeffrey Xu Yu, and Gang Gou. What Difference Heuristics Make:
Maintenance-Cost View-Selection Revisited. In WAIM02 [1868], pages 313–350, 2002.
doi: 10.1007/3-540-45703-8_23.
571. Chi-Hon Choi, Jeffrey Xu Yu, and Hongjun Lu. Dynamic Materialized View
Management Based on Predicates. In APWeb03 [3082], pages 583–594, 2003.
doi: 10.1007/3-540-36901-5_58.
572. Sung-Soon Choi, Kyomin Jung, and Jeong Han Kim. Phase Transition in a
Random NK Landscape Model. In GECCO05 [304], pages 1241–1248, 2005.
doi: 10.1145/1068009.1068212. Fully available at http://web.mit.edu/kmjung/Public/
NK%20gecco%20final.pdf [accessed 2009-02-26]. Session: Genetic Algorithms. See also [573].
573. Sung-Soon Choi, Kyomin Jung, and Jeong Han Kim. Phase Transition in a Random NK
Landscape Model. Artificial Intelligence, 172(2–3):179–203, February 2008, Elsevier Science
Publishers B.V.: Essex, UK. doi: 10.1016/j.artint.2007.06.002. See also [572].
574. Proceedings of the 8th International Conference on Natural Computation (ICNC12), May 29–31,
2012, Chóngqìng, China.
575. Hosung Choo, Adrian Hutani, Luiz Cezar Trintinalia, and Hao Ling. Shape Optimisation of
Broadband Microstrip Antennas using Genetic Algorithm. Electronics Letters, 36(25):2057–2058,
December 7, 2000, Institution of Engineering and Technology (IET): Stevenage, Herts,
UK. doi: 10.1049/el:20001452.
576. Heather Christensen, Roger L. Wainwright, and Dale A. Schoenefeld. A Hybrid Algo-
rithm for the Point to Multipoint Routing Problem. In SAC97 [435], pages 263–268,
1997. doi: 10.1145/331697.331751. Fully available at
http://euler.mcs.utulsa.edu/~rogerw/papers/Heather-PMP.pdf [accessed 2008-08-28].
CiteSeerˣ: 10.1.1.22.8559.
577. Nicos Christofides, Aristide Mingozzi, and Paolo Toth. The Vehicle Routing Problem. In
Combinatorial Optimization [578], Chapter 11, pages 315–338. John Wiley & Sons Ltd.: New
York, NY, USA, 1977.
578. Nicos Christofides, Aristide Mingozzi, Paolo Toth, and C. Sandi, editors. Combinatorial
Optimization, A Wiley-Interscience Publication. John Wiley & Sons Ltd.: New York, NY,
USA, May 30 – June 11, 1977. isbn: 0471997498. OCLC: 4495250, 264568944, and
636199125. Library of Congress Control Number (LCCN): 78011131. GBV-Identification
(PPN): 027251012. LC Classification: QA402.5 .C545. Based on a series of lectures given
at the Summer School in Combinatorial Optimization held in SOGESTA, Urbino, Italy from
30th May to 11th June 1977.
579. Hyoung-Seog Chung, Seongim Choi, and Juan J. Alonso. Supersonic Business Jet Design Us-
ing Knowledge-Based Genetic Algorithm with Adaptive, Unstructured Grid Methodology. In
AIAA03 [2550], page 3791, 2003. Fully available at
http://www.lania.mx/~ccoello/chung03.pdf.gz [accessed 2009-06-17].
580. Charles West Churchman, Russell Lincoln Ackoff, and E. Leonard Arnoff. Introduction to
Operations Research. John Wiley & Sons Ltd.: New York, NY, USA and Chapman & Hall:
London, UK, 2nd edition, 1957. asin: B0000CJP9J and B001ORMIX0.
581. Vic Ciesielski and Xiang Li. Analysis of Genetic Programming Runs. In Complex04 [469],
2004. Fully available at http://goanna.cs.rmit.edu.au/~xiali/pub/ai04.vc.pdf
and http://www.cs.rmit.edu.au/~vc/papers/aspgp04.pdf [accessed 2010-11-20].
CiteSeerˣ: 10.1.1.87.4136.
582. Emilia Cimpian, Paavo Kotinurmi, Adrian Mocan, Matthew Moran, Tomas Vitvar, and Ma-
ciej Zaremba. Dynamic RosettaNet Integration on the Semantic Web Services. In First
Workshop of the Semantic Web Service Challenge 2006 – Challenge on Automating Web
Services Mediation, Choreography and Discovery [2688], 2006. Fully available at http://
sws-challenge.org/workshops/2006-Stanford/papers/03.pdf [accessed 2010-12-12].
See also [1210, 1940, 30573059].
583. Proceedings of the Third International Workshop on Local Search Techniques in Constraint
Satisfaction, September 25, 2006, Cité des Congrès: Nantes, France.
584. Dawn Cizmar, editor. Proceedings of the 22nd Annual ACM Computer Science Confer-
ence on Scaling up: Meeting the Challenge of Complexity in Real-World Computing Ap-
plications (CSC94), March 8–10, 1994, Phoenix, AZ, USA. ACM Press: New York, NY,
USA. isbn: 0-89791-634-4. Google Books ID: 6jUbOAAACAAJ, InpQAAAAMAAJ, and
nIkzMgAACAAJ. OCLC: 31861788, 51345201, 257836439, and 312016702.
585. Christopher D. Clack, editor. Advanced Research Challenges in Financial Evolutionary Com-
puting Workshop (ARC-FEC08), July 12, 2008, Renaissance Atlanta Hotel Downtown: At-
lanta, GA, USA. ACM Press: New York, NY, USA. Part of [1519].
586. Bill Clancey and Dan Weld, editors. Proceedings of the Thirteenth National Conference on
Artificial Intelligence and Eighth Innovative Applications of Artificial Intelligence Conference
(AAAI96, IAAI96), August 4–8, 1996, Portland, OR, USA. isbn: 0-262-51091-X. Partly
available at http://www.aaai.org/Conferences/AAAI/aaai96.php and http://
www.aaai.org/Conferences/IAAI/iaai96.php [accessed 2007-09-06]. 2 volumes. See also
[2207].
587. David E. Clark, editor. Evolutionary Algorithms in Molecular Design, volume 8 in Methods
and Principles in Medicinal Chemistry. Wiley-VCH Verlag GmbH & Co. KGaA: Weinheim,
Germany, September 11, 2000. isbn: 3527301550. Series editors: Raimund Mannhold, Hugo
Kubinyi, and Hendrik Timmerman.
588. The International Workshop on Modeling & Applied Simulation (MAS03), October 2–4,
2003, Claudio Hotel & Congress Center: Bergeggi, Italy.
589. Maurice Clerc. Particle Swarm Optimization. ISTE Publishing Company: London, UK,
February 24, 2006. isbn: 1905209045. Google Books ID: a0YeAAAACAAJ.
590. Manuel Clergue, Philippe Collard, Marco Tomassini, and Leonardo Vanneschi. Fit-
ness Distance Correlation And Problem Difficulty For Genetic Programming. In
GECCO02 [1675], pages 724–732, 2002. Fully available at
http://www.cs.bham.ac.uk/~wbl/biblio/gecco2002/GP072.pdf and http://www.cs.ucl.ac.uk/staff/
W.Langdon/ftp/papers/gecco2002/gecco-2002-14.pdf [accessed 2008-07-20]. See also
[2784].
591. Dave Cliff and Susi Ross. Adding Temporary Memory to ZCS. Adaptive Behavior,
3(2):101–150, Fall 1994, SAGE Publications: Thousand Oaks, CA, USA.
doi: 10.1177/105971239400300201. Fully available at ftp://ftp.informatics.sussex.
ac.uk/pub/reports/csrp/csrp347.ps.Z [accessed 2007-09-12].
592. João Clímaco, editor. Proceedings of the 11th International Conference on Multiple Criteria
Decision Making: Multicriteria Analysis (MCDM94), August 1–6, 1994, Coimbra, Portugal.
Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-62074-5. OCLC: 36181115,
636440409, and 639833152. GBV-Identication (PPN): 222685735 and 25787321X.
Published in May 1997.
593. Carlos Artemio Coello Coello. A Short Tutorial on Evolutionary Multiobjective Op-
timization. In EMO01 [3099], pages 21–40, 2001. Fully available at
http://www.cs.cinvestav.mx/~emooworkgroup/tutorial-slides-coello.pdf [accessed 2010-07-31].
CiteSeerˣ: 10.1.1.29.3997.
594. Carlos Artemio Coello Coello. Theoretical and Numerical Constraint-Handling Techniques
used with Evolutionary Algorithms: A Survey of the State of the Art. Computer Methods
in Applied Mechanics and Engineering, 191(11–12):1245–1287, January 14, 2002, Elsevier
Science Publishers B.V.: Amsterdam, The Netherlands. doi: 10.1016/S0045-7825(01)00323-1.
595. Carlos Artemio Coello Coello. A Comprehensive Survey of Evolutionary-Based Multiobjec-
tive Optimization Techniques. Knowledge and Information Systems – An International Journal
(KAIS), 1(3), August 1999, Springer-Verlag London Limited: London, UK. Fully available
at http://www.lania.mx/~ccoello/EMOO/informationfinal.ps.gz [accessed 2009-08-04].
CiteSeerˣ: 10.1.1.48.3504.
596. Carlos Artemio Coello Coello. An Updated Survey of Evolutionary Multiobjective Opti-
mization Techniques: State of the Art and Future Trends. In CEC99 [110], pages 3–13,
volume 1, 1999. doi: 10.1109/CEC.1999.781901. Fully available at http://www-course.
cs.york.ac.uk/evo/SupportingDocs/coellocoello99updated.pdf [accessed 2009-06-20].
CiteSeerˣ: 10.1.1.50.2154.
597. Carlos Artemio Coello Coello and Gary B. Lamont, editors. Applications of Multi-Objective
Evolutionary Algorithms, volume 1 in Advances in Natural Computation. World Scientific
Publishing Co.: Singapore, December 2004. isbn: 981-256-106-4. Google Books
ID: viWm0k9meOcC.
598. Carlos Artemio Coello Coello, Álvaro de Albornoz, Luis Enrique Sucar, and Osvaldo Cairó
Battistutti, editors. Advances in Artificial Intelligence: Proceedings of the Second Mexican
International Conference on Artificial Intelligence (MICAI02), April 22–26, 2002,
Mérida, Yucatán, Mexico, volume 2313/2002 in Lecture Notes in Artificial Intelligence (LNAI,
SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-46016-0. isbn: 3-540-43475-5.
599. Carlos Artemio Coello Coello, Gary B. Lamont, and David A. van Veldhuizen. Evolution-
ary Algorithms for Solving Multi-Objective Problems, volume 5 in Genetic and Evolutionary
Computation. Springer US: Boston, MA, USA and Kluwer Academic Publishers: Norwell,
MA, USA, 2nd edition, 2002–2007. doi: 10.1007/978-0-387-36797-2. isbn: 0306467623 and
0387332545. Google Books ID: NZdOgAACAAJ, rXIuAMw3lGAC, and sgX Cst yTsC.
600. Carlos Artemio Coello Coello, Arturo Hernández Aguirre, and Eckart Zitzler, editors. Proceedings
of the Third International Conference on Evolutionary Multi-Criterion Optimization
(EMO05), March 9–11, 2005, Guanajuato, Mexico, volume 3410/2005 in Theoretical
Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/b106458. isbn: 3-540-24983-4
and 3-540-31880-1. Google Books ID: AfE5fFFl5AgC. OCLC: 58601838, 58998290,
76657382, 150417695, 316700466, 318300131, and 403748504. Library of Congress
Control Number (LCCN): 2005920590.
601. William W. Cohen and Haym Hirsh, editors. Proceedings of the 11th International Confer-
ence on Machine Learning (ICML94), July 10–13, 1994, New Brunswick, NJ, USA. Morgan
Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-335-2.
602. William W. Cohen and Andrew Moore, editors. Proceedings of the 23rd International Con-
ference on Machine Learning (ICML06), June 25–29, 2006, Carnegie Mellon University
(CMU): Pittsburgh, PA, USA, volume 148 in ACM International Conference Proceeding
Series (AICPS). Omipress: Madison, WI, USA. isbn: 1-59593-383-2. Fully available
at http://www.machinelearning.org/icml2006_proc.html [accessed 2010-06-23].
603. William W. Cohen, Andrew McCallum, and Sam Roweis, editors. Proceedings of the 25th
International Conference on Machine Learning (ICML08), July 5–9, 2008, Helsinki, Fin-
land, volume 307 in ACM International Conference Proceeding Series (AICPS). Omipress:
Madison, WI, USA. Fully available at http://www.machinelearning.org/archive/
icml2008/proceedings.shtml [accessed 2010-06-23].
604. James P. Cohoon, S. U. Hegde, Worthy Neil Martin, and D. Richards. Punctuated Equilibria:
A Parallel Genetic Algorithm. In ICGA87 [1132], pages 148–154, 1987.
605. David W. Coit and Fatema Baheranwala. Solution of Stochastic Multi-Objective System
Reliability Design Problems using Genetic Algorithms. In ESREL05 [1567], pages 391–398,
2005. Fully available at http://ise.rutgers.edu/resource/research_paper/
paper_05-010.pdf [accessed 2010-12-16]. See also [2645].
606. Pierre Collet, Cyril Fonlupt, Jin-Kao Hao, Evelyne Lutton, and Marc Schoenauer, edi-
tors. Selected Papers of the 5th International Conference on Artificial Evolution, Evolution
Artificielle (EA01), October 29–31, 2001, Le Creusot, France, volume 2310/2002
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-46033-0. isbn: 3-540-43544-1.
607. Pierre Collet, Marco Tomassini, Marc Ebner, Steven Matt Gustafson, and Anikó Ekárt,
editors. Proceedings of the 9th European Conference on Genetic Programming (EuroGP06),
April 10–12, 2006, Budapest, Hungary, volume 3905/2006 in Theoretical Computer Sci-
ence and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-
Verlag GmbH: Berlin, Germany. doi: 10.1007/11729976. isbn: 3-540-33143-3. Google
Books ID: 1JNQAAAAMAAJ and ZusmAAAACAAJ. Library of Congress Control Number
(LCCN): 2006922466.
608. Pierre Collet, Nicolas Monmarché, Pierrick Legrand, Marc Schoenauer, and Evelyne Lutton,
editors. Artificial Evolution: Revised Selected Papers from the 9th International Conference,
Evolution Artificielle (EA09), October 26–28, 2009, Strasbourg, France, volume 5975 in
Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Science
(LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3642141552.
609. Yann Collette and Patrick Siarry. Multiobjective Optimization: Principles and Case Stud-
ies, volume 2 in Decision Engineering. Groupe Eyrolles: Paris, France, 2002–2004.
isbn: 3540401822. Google Books ID: XNYF4hltoF0C.
610. J. J. Collins and Malachy Eaton. Genocodes for Genetic Algorithms. In MENDEL97 [416],
pages 23–30, 1997. Fully available at http://www.csis.ul.ie/staff/jjcollins/
mendel97.ps.gz [accessed 2010-08-05]. CiteSeerˣ: 10.1.1.51.2349.
611. Proceedings of the Third Bi-Annual EUROMICRO International Conference on Massively
Parallel Computing Systems (MPCS98), April 6–9, 1998, Colorado Springs, CO, USA.
612. Benoît Colson and Philippe L. Toint. Optimizing Partially Separable Functions without
Derivatives. Technical Report 2003/20, University of Namur (FUNDP), Department
of Mathematics: Namur, Belgium, November 6, 2003. Fully available at
http://www.fundp.ac.be/pdf/publications/51357.pdf and
perso.fundp.ac.be/~phtoint/pubs/TR03-20.ps [accessed 2009-10-26]. See also [613].
613. Benoît Colson and Philippe L. Toint. Optimizing Partially Separable Functions without
Derivatives. Optimization Methods and Software, 20(4 & 5):493508, August 2005, Taylor
and Francis LLC: London, UK. doi: 10.1080/10556780500140227. See also [612].
614. Francesc Comellas. Using Genetic Programming to Design Broadcasting Algorithms for
Manhattan Street Networks. In EvoWorkshops04 [2254], pages 170–177, 2004.
615. Francesc Comellas and G. Gimenez. Genetic Programming to Design Communication Algo-
rithms for Parallel Architectures. Parallel Processing Letters (PPL), 8(4):549–560, December
1998, World Scientific Publishing Co.: Singapore. doi: 10.1142/S0129626498000547. Fully
available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/Comellas_1998_
GPD.html [accessed 2009-09-01]. CiteSeerˣ: 10.1.1.57.4752.
616. Francesc Comellas and Juan Paz-Sánchez. Reconstruction of Networks from
Their Betweenness Centrality. In EvoWorkshops08 [1051], pages 31–37, 2008.
doi: 10.1007/978-3-540-78761-7_4.
617. Diana Elena Comes, Steffen Bleul, Thomas Weise, and Kurt Geihs. A Flexible
Approach for Business Processes Monitoring. In DAIS09 [2453], pages 116–128,
2009. doi: 10.1007/978-3-642-02164-0_9. Fully available at http://www.it-weise.de/
documents/files/CBTG2009AFAFBPM.pdf. Ei ID: 20093412265790. IDS (SCI): BKH10.
618. Committee on the Fundamentals of Computer Science: Challenges and Opportunities,
Computer Science and Telecommunications Board, National Research Council of the Na-
tional Academies: Washington, DC, USA, editor. Computer Science: Reflections on the
Field, Reflections from the Field. National Academies Press: Washington, DC, USA, 2004.
isbn: 0-309-09301-5 and 0-309-54529-3. Google Books ID: sTlPLMq6ZdYC.
619. Giuseppe Confessore, Graziano Galiano, and Giuseppe Stecca. An Evolutionary Algorithm
for Vehicle Routing Problem with Real Life Constraints. In Manufacturing Systems and Technologies
for the New Frontier – The 41st CIRP Conference on Manufacturing Systems [1918],
pages 225–228, 2008. doi: 10.1007/978-1-84800-267-8_46.
620. Brian Connolly. Genetic Algorithms – Survival of the Fittest: Natural Selection
with Windows Forms. MSDN Magazine, August 2004, Microsoft Corporation: Red-
mond, WA, USA. Fully available at http://msdn.microsoft.com/de-de/magazine/
cc163934(en-us).aspx [accessed 2008-06-24].
621. Michael Conrad. Bootstrapping on the Adaptive Landscape. Biosystems, 11(2–3):81–84,
August 1979, Elsevier Science Ireland Ltd.: East Park, Shannon, Ireland
and North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands.
doi: 10.1016/0303-2647(79)90009-1. Fully available at http://dx.doi.org/10.1016/
0303-2647(79)90009-1 and http://hdl.handle.net/2027.42/23514 [accessed 2008-
11-02].
622. Michael Conrad. Adaptability: The Significance of Variability from Molecule to Ecosystem.
Plenum Press: New York, NY, USA, March 31, 1983. isbn: 0-306-41223-3. Google Books
ID: 7ewUAQAAIAAJ. OCLC: 9110177, 251709596, 299388964, and 476965274. Library
of Congress Control Number (LCCN): 82024558. GBV-Identication (PPN): 014038137.
LC Classication: QH546 .C65 1983.
623. Michael Conrad. Molecular Computing. In Advances in Computers [3036], pages 235–324.
Academic Press Professional, Inc.: San Diego, CA, USA and Elsevier Science Publishers
B.V.: Amsterdam, The Netherlands, 1990. doi: 10.1016/S0065-2458(08)60155-2.
624. Michael Conrad. The Geometry of Evolution. Biosystems, 24(1):61–81, 1990, Elsevier Science
Ireland Ltd.: East Park, Shannon, Ireland and North-Holland Scientific Publishers Ltd.:
Amsterdam, The Netherlands. doi: 10.1016/0303-2647(90)90030-5.
625. Michael Conrad. Towards High Evolvability Dynamics. In Evolutionary Systems – Biological
and Epistemological Perspectives on Selection and Self-Organization [2771], pages 33–43.
Kluwer Academic Publishers: Norwell, MA, USA, 1998.
626. Stephen A. Cook. The Complexity of Theorem-Proving Procedures. In STOC71 [20], pages
151–158, 1971. doi: 10.1145/800157.805047.
627. William J. Cook. Traveling Salesman Problem. Georgia Institute of Technology, College of
Engineering, School of Industrial and Systems Engineering, Operations Research: Atlanta,
GA, USA, in edition, September 18, 2011. Fully available at http://www.tsp.gatech.
edu/index.html [accessed 2011-09-18].
628. William J. Cook, William H. Cunningham, William R. Pulleyblank, and Alexander Schrijver.
Combinatorial Optimization, Estimation, Simulation, and Control – Wiley-Interscience Series
in Discrete Mathematics and Optimization. Wiley Interscience: Chichester, West Sussex, UK,
November 12, 1997. isbn: 0-471-55894-X. OCLC: 37451897, 247024428, 288962207,
and 473487390. Library of Congress Control Number (LCCN): 97035774. GBV-Identification
(PPN): 233568778 and 280181795. LC Classification: QA402.5 .C54523
1998.
629. Susan Coombs and Lawrence Davis. Genetic Algorithms and Communication Link Speed
Design: Constraints and Operators. In ICGA87 [1132], pages 257–260, 1987. Partly available
at http://www.questia.com/PM.qst?a=o&d=97597557 [accessed 2008-08-29].
630. D. C. Cooper, editor. Proceedings of the 2nd International Joint Conference on Artificial
Intelligence (IJCAI71), September 1–3, 1971, Imperial College London: London,
UK. isbn: 0-934613-34-6. Fully available at http://dli.iiit.ac.in/ijcai/
IJCAI-1971/CONTENT/content.htm [accessed 2008-04-01]. OCLC: 17487276.
631. Emilio S. Corchado, Juan M. Corchado, and Ajith Abraham, editors. Innovations in Hybrid
Intelligent Systems – Proceedings of the 2nd International Workshop on Hybrid Artificial
Intelligence Systems (HAIS07), November 2007, University of Salamanca: Salamanca, Spain,
volume 44/2008 in Advances in Soft Computing. Physica-Verlag GmbH & Co.: Heidelberg,
Germany. doi: 10.1007/978-3-540-74972-1. isbn: 3-540-74971-3. Library of Congress
Control Number (LCCN): 2007935489.
632. Emilio S. Corchado, Ajith Abraham, and Witold Pedrycz, editors. Proceedings of the Third
International Workshop on Hybrid Artificial Intelligence Systems (HAIS08), September 24–26,
2008, Burgos, Spain, volume 5271/2008 in Lecture Notes in Artificial Intelligence (LNAI,
SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-540-87656-4. isbn: 3-540-87655-3. Library of Congress Control Number
(LCCN): 2008935394.
633. Arthur L. Corcoran and Sandip Sen. Using Real-Valued Genetic Algorithms to Evolve
Rule Sets for Classification. In CEC'94 [1891], pages 120–124, volume 1, 1994.
doi: 10.1109/ICEC.1994.350030. CiteSeerˣ: 10.1.1.55.1864.
634. Óscar Cordón, Francisco Herrera Triguero, and Achim G. Hoffmann. Genetic Fuzzy
Systems: Evolutionary Tuning and Learning of Fuzzy Knowledge Bases, volume 19
in Advances in Fuzzy Systems. World Scientific Publishing Co.: Singapore, 2001.
isbn: 981-02-4016-3 and 981-02-4017-1. Google Books ID: BWmSV-38fKAC
and bwa9QgAACAAJ. OCLC: 47768307. Library of Congress Control Number
(LCCN): 2001275437. GBV-Identification (PPN): 334123739. LC Classification: QA402.5
.G4565 2001.
635. Thomas H. Cormen, Charles E. Leiserson, and Ronald L. Rivest. Introduction to Algorithms,
MIT Electrical Engineering and Computer Science. MIT Press: Cambridge, MA,
USA and McGraw-Hill Ltd.: Maidenhead, UK, England, 2nd edition, 1990–August 2001.
isbn: 0070131430, 0070131449, 0-2620-3141-8, 0-2625-3196-8, and 8120321413.
Google Books ID: CJY1KwAACAAJ, CZY1KwAACAAJ, ElKaGgAACAAJ, JlCNJwAACAAJ,
KVCNJwAACAAJ, NLngYyWFl YC, O9WbHAAACAAJ, OdWbHAAACAAJ, PB02AAAACAAJ,
PNWbHAAACAAJ, V7wYPAAACAAJ, WLwYPAAACAAJ, oStGQAACAAJ, iLY1NwAACAAJ, and
zukGIQAACAAJ.
636. David Wolfe Corne and Joshua D. Knowles. Techniques for Highly Multiobjective Optimisation:
Some Nondominated Points are Better than Others. In GECCO'07-I [2699], pages 773–780,
2007. Fully available at http://dbkgroup.org/knowles/pap583s7-corne.pdf
and http://www.macs.hw.ac.uk/~dwcorne/p773-corne.pdf [accessed 2011-12-05].
arXiv ID: 0908.3025v1.
637. David Wolfe Corne and Jonathan L. Shapiro, editors. Proceedings of the Workshop on
Artificial Intelligence and Simulation of Behaviour, International Workshop on Evolutionary
Computing, Selected Papers (AISB'97), April 7–8, 1997, Manchester, UK, volume 1305/1997
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/BFb0027161. isbn: 3-540-63476-2.
638. David Wolfe Corne, Marco Dorigo, Fred Glover, Dipankar Dasgupta, Pablo Moscato, Riccardo
Poli, and Kenneth V. Price, editors. New Ideas in Optimization, McGraw-Hill's Advanced
Topics In Computer Science Series. McGraw-Hill Ltd.: Maidenhead, UK, England,
May 1999. isbn: 0-07-709506-5. Google Books ID: nC35AAAACAAJ. OCLC: 40883094
and 265809212.
639. David Wolfe Corne, Joshua D. Knowles, and Martin J. Oates. The Pareto Envelope-Based
Selection Algorithm for Multiobjective Optimization. In PPSN VI [2421], pages 839–848,
2000. doi: 10.1007/3-540-45356-3_82. Fully available at http://www.lania.mx/~ccoello/corne00.ps.gz
[accessed 2009-07-18]. CiteSeerˣ: 10.1.1.32.9195.
640. David Wolfe Corne, Martin J. Oates, and George D. Smith, editors. Telecommunications Opti-
mization: Heuristic and Adaptive Techniques. John Wiley & Sons Ltd.: New York, NY, USA,
September 2000. doi: 10.1002/047084163X. isbn: 047084163X and 0471988553. Google
Books ID: EgNTAAAAMAAJ. OCLC: 43187002, 43903625, 50745182, and 123099960.
641. David Wolfe Corne, Zbigniew Michalewicz, Robert Ian McKay, Ágoston E. Eiben, David B.
Fogel, Carlos M. Fonseca, Günther R. Raidl, Kay Chen Tan, and Ali M. S. Zalzala, editors.
Proceedings of the IEEE Congress on Evolutionary Computation (CEC'05), September 2–5,
2005, Edinburgh, Scotland, UK. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 0-7803-9363-5. Google Books ID: -QIVAAAACAAJ. OCLC: 62773008 and
423962661. Catalogue no.: 05TH8834.
642. F. Corno, G. Cumani, M. Sonza Reorda, and Giovanni Squillero. Efficient Machine-Code
Test-Program Induction. In CEC'02 [944], pages 1486–1491, 2002. Fully available
at http://www.cad.polito.it/pap/db/cec2002.pdf and http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/corno_2002_emctpi.html
[accessed 2008-09-17].
643. Gérard Cornuéjols. Revival of the Gomory cuts in the 1990s. Annals of Operations Research,
149(1):63–66, February 2007, Springer Netherlands: Dordrecht, Netherlands and J. C. Baltzer
AG, Science Publishers: Amsterdam, The Netherlands. doi: 10.1007/s10479-006-0100-1. Fully
available at http://integer.tepper.cmu.edu/webpub/gomory.pdf [accessed 2009-07-05].
644. Pablo Cortés Achedad, Luis Onieva Giménez, Jesús Muñuzuri Sanz, and José Guadix
Martín. A Revision of Evolutionary Computation Techniques in Telecommunications
and an Application for the Network Global Planning Problem. In Success in Evolutionary
Computation [3014], pages 239–262. Springer-Verlag: Berlin/Heidelberg, 2008.
doi: 10.1007/978-3-540-76286-7_11.
645. Carlos Cotta and Peter Cowling, editors. Proceedings of the 9th European Conference
on Evolutionary Computation in Combinatorial Optimization (EvoCOP'09), April 15–17,
2009, Eberhard-Karls-Universität Tübingen, Fakultät für Informations- und Kognitionswissenschaften:
Tübingen, Germany, volume 5482/2009 in Theoretical Computer Science and
General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany. doi: 10.1007/978-3-642-01009-5. isbn: 3-642-01008-3.
646. Carlos Cotta and Jano I. van Hemert, editors. Proceedings of the 7th European Conference
on Evolutionary Computation in Combinatorial Optimization (EvoCOP'07), April 11–13,
2007, València, Spain, volume 4446/2007 in Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany.
647. Richard Courant. Variational Methods for the Solution of Problems of Equilibrium and
Vibrations. Bulletin of the American Mathematical Society, 49(1):1–23, 1943, American
Mathematical Society (AMS): Providence, RI, USA. doi: 10.1090/S0002-9904-1943-07818-4.
Fully available at http://projecteuclid.org/euclid.bams/1183504922 and http://www.ams.org/bull/1943-49-01/S0002-9904-1943-07818-4/home.html
[accessed 2008-11-15]. Zentralblatt MATH identifier: 0063.00985. Mathematical Reviews
number (MathSciNet): MR0007838.
648. Steven H. Cousins. Species Diversity Measurement: Choosing the Right Index. Trends
in Ecology and Evolution (TREE), 6(6):190–192, June 1991, Elsevier Science Publishers
B.V.: Amsterdam, The Netherlands and Cell Press: St. Louis, MO, USA.
doi: 10.1016/0169-5347(91)90212-G.
649. Peter Cowling, Graham Kendall, and Eric Soubeiga. A Hyperheuristic Approach
to Scheduling a Sales Summit. In PATAT'00 [452], pages 176–190, 2000.
doi: 10.1007/3-540-44629-X_11. Fully available at http://www.asap.cs.nott.ac.uk/publications/pdf/exs_patat01.pdf
and http://www.cs.nott.ac.uk/~gxk/aim/2009/reading/hh_paper_1.pdf [accessed 2011-04-23].
CiteSeerˣ: 10.1.1.63.7949.
650. Louis Anthony Cox, Jr., Lawrence Davis, and Yuping Qiu. Dynamic Anticipatory Routing In
Circuit-Switched Telecommunications Networks. In Handbook of Genetic Algorithms [695],
pages 124–143. Thomson Publishing Group, Inc.: Stamford, CT, USA, 1991.
651. Teodor Gabriel Crainic and Gilbert Laporte, editors. Fleet Management and Logistics.
Springer Netherlands: Dordrecht, Netherlands. Imprint: Kluwer Academic Publishers: Nor-
well, MA, USA, 1998. isbn: 0792381610. Google Books ID: IXed6zq9ZigC.
652. Nichael Lynn Cramer. A Representation for the Adaptive Generation of Simple Sequential
Programs. In ICGA'85 [1131], pages 183–187, 1985. Fully available at http://www.sover.net/~nichael/nlc-publications/icga85/index.html
[accessed 2007-09-06].
653. Ronald L. Crepeau. Genetic Evolution of Machine Language Software. In Proceedings of
the Workshop on Genetic Programming: From Theory to Real-World Applications [2326],
pages 121–134, 1995. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/crepeau_1995_GEMS.html
[accessed 2008-09-17]. CiteSeerˣ: 10.1.1.61.7001.
654. Matej Črepinšek, Marjan Mernik, and Shih-Hsi Liu. Analysis of Exploration and Exploitation
in Evolutionary Algorithms by Ancestry Trees. International Journal of Innovative
Computing and Applications (IJICA), 3(1), January 2011, InderScience Publishers: Geneva,
Switzerland. doi: 10.1504/IJICA.2011.037947.
655. H. Brown Cribbs, III and Robert Elliott Smith. What Can I Do With a Learning Classifier
System? In Industrial Applications of Genetic Algorithms [1496], pages 299–319. CRC Press,
Inc.: Boca Raton, FL, USA, 1998.
656. Sven F. Crone, Stefan Lessmann, and Robert Stahlbock, editors. Proceedings of the 2006
International Conference on Data Mining (DMIN'06), June 26–29, 2006, Las Vegas, NV,
USA. CSREA Press: Las Vegas, NV, USA. isbn: 1-60132-004-3.
657. Helena Cronin. The Ant and the Peacock – Altruism and Sexual Selection from Darwin to
Today. Cambridge University Press: Cambridge, UK, 1991. isbn: 0-521-32937-X and
0-521-45765-3. Google Books ID: SwEnVmakCB0C. OCLC: 29778586, 444540292, and
490467682. GBV-Identification (PPN): 040676250 and 232802165. With a foreword by
John Maynard Smith.
658. Mark Crosbie and Gene Spafford. Applying Genetic Programming to Intrusion Detection.
In Proceedings of 1995 AAAI Fall Symposium Series, Genetic Programming
Track [1606], 1995. Fully available at http://ftp.cerias.purdue.edu/pub/papers/mark-crosbie/mcrosbie-spaf-AAAI.ps.Z
[accessed 2008-06-17]. The paper appeared also as technical report COAST TR 95-05 of
COAST Laboratory, Department of Computer Sciences, Purdue University, West Lafayette,
IN, USA.
659. Jack L. Crosby. Computer Simulation in Genetics. John Wiley & Sons Ltd.: New York, NY,
USA, January 1973. isbn: 0-4711-8880-8.
660. Claudia Crosio, Francesco Cecconi, Paolo Mariottini, Gianni Cesareni, Sydney Brenner,
and Francesco Amaldi. Fugu Intron Oversize Reveals the Presence of U15 snoRNA Coding
Sequences in Some Introns of the Ribosomal Protein S3 Gene. Genome Research,
6(12):1227–1231, December 1996. Fully available at http://www.genome.org/cgi/content/abstract/6/12/1227
[accessed 2008-03-23].
661. Proceedings of the Second Conference on Artificial General Intelligence (AGI'09), March 6–9,
2009, Crowne Plaza National Airport: Arlington, VA, USA.
662. Li Ying Cui, Soundar R. T. Kumara, John Jung-Woon Yoo, and Fatih Cavdur. Large-Scale
Network Decomposition and Mathematical Programming Based Web Service Composition.
In CEC'09 [1245], pages 511–514, 2009. doi: 10.1109/CEC.2009.91. INSPEC Accession
Number: 10839128. See also [1571, 1999, 2064–2067, 3034].
663. Xunxue Cui and Chuang Lin. A Multiobjective Genetic Algorithm for Distributed Database
Management. In WCICA'04 [1388], pages 2117–2121, volume 3, 2004.
doi: 10.1109/WCICA.2004.1341959. INSPEC Accession Number: 8143777.
664. Francesco Cupertino, Ernesto Mininno, and David Naso. Compact Genetic Algorithms for
the Optimization of Induction Motor Cascaded Control. In IEMDC'07 [1391], pages 82–87,
volume 2, 2007. doi: 10.1109/IEMDC.2007.383557. INSPEC Accession Number: 9892564.
665. Djurdje Cvijović and Jacek Klinowski. Taboo Search: An Approach to the Multiple Minima
Problem. Science Magazine, 267(5198):664–666, February 3, 1995, American Association for
the Advancement of Science (AAAS): Washington, DC, USA and HighWire Press (Stanford
University): Cambridge, MA, USA. doi: 10.1126/science.267.5198.664. Fully available
at http://www-klinowski.ch.cam.ac.uk/pdfs/244.pdf [accessed 2010-10-04]. PubMed
ID: 17745843.
666. Hans Czap, Rainer Unland, Cherif Branki, and Huaglory Tianfield, editors. Self-Organization
and Autonomic Informatics (I) – International Conference on Self-Organization and Adaptation
of Multi-agent and Grid Systems (SOAS2005), December 11–13, 2005, University of
Paisley: Glasgow, Scotland, UK, Frontiers in Artificial Intelligence and Applications. IOS
Press: Amsterdam, The Netherlands. isbn: 1-58603-577-0 and 1601291337. Google
Books ID: 0wp9iwroM9gC and tPP8OgAACAAJ.
667. Zbigniew J. Czech and Piotr Czarnas. Parallel Simulated Annealing for the Vehicle Routing
Problem with Time Windows. In PDP'02 [1383], pages 376–383, 2002.
doi: 10.1109/EMPDP.2002.994313. CiteSeerˣ: 10.1.1.16.5766.
D
668. António Gaspar Lopes da Cunha and José António Colaço Gomes Covas. RPSGAe – Reduced
Pareto Set Genetic Algorithm: A Multiobjective Algorithm with Elitism: Application
to Polymer Extrusion. Xavier Gandibleux, Marc Sevaux, Kenneth Sörensen, and Vincent
T'kindt, editors. Fully available at http://www2.lifl.fr/PM2O/Reunions/04112002/gaspar.pdf
[accessed 2007-09-21]. Poster on the joint PM2O-EU/ME meeting.
669. Marcus Vinicius Carvalho da Silva, Nadia Nedjah, and Luiza de Macedo Mourelle. Evolutionary
IP Assignment for Efficient NoC-based System Design using Multi-objective Optimization.
In CEC'09 [1350], pages 2257–2264, 2009. doi: 10.1109/CEC.2009.4983221. INSPEC
Accession Number: 10688816.
670. Cihan H. Dagli, Anna L. Buczak, Joydeep Ghosh, Mark J. Embrechts, and Okan Ersoy,
editors. Intelligent Engineering Systems through Artificial Neural Networks – Smart Engineering
System Design: Neural Networks, Fuzzy Logic, EP, Complex Systems and Artificial
Life – Proceedings of the Artificial Neural Networks in Engineering Conference (ANNIE'03),
November 2–5, 2003, St. Louis, MO, USA, volume 13. American Society of Mechanical
Engineers (ASME): St. Louis, MO, USA.
671. Wei Dai, Paul Moynihan, Juanqiong Gou, Ping Zou, Xi Yang, Tiedong Chen, and Xin Wan.
Services Oriented Knowledge-based Supply Chain Application. In SCContest'07 [1374], pages
660–667, 2007. doi: 10.1109/SCC.2007.106. INSPEC Accession Number: 9869254.
672. Hosei Daigaku, Shietung Peng, Vladimir V. Savchenko, and Shuichi Yukita, editors. First
International Symposium on Cyber Worlds: Theory and Practices (CW'02), November 6–8,
2002. IEEE Computer Society Press: Los Alamitos, CA, USA. isbn: 0-7695-1862-1.
Google Books ID: lxIOAAAACAAJ. OCLC: 51208033, 65194958, 71489103, and
423965835.
673. R. J. Dakin. A Tree-Search Algorithm for Mixed Integer Programming Problems. The
Computer Journal, Oxford Journals, 8(3):250–255, 1965, British Computer Society: Swindon,
UK. doi: 10.1093/comjnl/8.3.250.
674. Gerard E. Dallal. The Little Handbook of Statistical Practice. Tufts University, Jean Mayer
USDA Human Nutrition Research Center on Aging, Biostatistics Unit: Boston, MA, USA,
July 16, 2008. Fully available at http://www.statisticalpractice.com/ [accessed 2008-08-15].
675. Hai H. Dam, Hussein A. Abbass, and Chris Lokan. DXCS: An XCS System for Distributed
Data Mining. In GECCO'05 [304], pages 1883–1890, 2005. Fully available at http://doi.acm.org/10.1145/1068009.1068326
[accessed 2007-09-12]. See also [676].
676. Hai H. Dam, Hussein A. Abbass, and Chris Lokan. DXCS: An XCS System for Distributed
Data Mining. Technical Report TR-ALAR-200504002, University of New South Wales, School
of Information Technology and Electrical Engineering, Artificial Life and Adaptive Robotics
Laboratory: Canberra, Australia, 2005. Fully available at http://www.itee.adfa.edu.au/~alar/techreps/200504002.pdf
[accessed 2007-09-12]. See also [675].
677. David B. D'Ambrosio and Kenneth Owen Stanley. A Novel Generative Encoding for Exploiting
Neural Network Sensor and Output Geometry. In GECCO'07-I [2699], pages 974–981,
2007. doi: 10.1145/1276958.1277155. Fully available at http://eplex.cs.ucf.edu/papers/dambrosio_gecco07.pdf
[accessed 2011-06-19].
678. Martin Damsbo, Brian S. Kinnear, Matthew R. Hartings, Peder T. Ruhoff, Martin F. Jarrold,
and Mark A. Ratner. Application of Evolutionary Algorithm Methods to Polypeptide
Folding: Comparison with Experimental Results for Unsolvated Ac-(Ala-Gly-Gly)5-LysH+.
Proceedings of the National Academy of Science of the United States of America
(PNAS), 101(19):7215–7222, May 2004, National Academy of Sciences: Washington,
DC, USA. Fully available at http://www.pnas.org/cgi/reprint/101/19/7215.pdf?ck=nck
and http://www.pnas.org/content/vol101/issue19/ [accessed 2007-08-27].
679. Grégoire Danoy, Bernabé Dorronsoro Díaz, and Pascal Bouvry. Overcoming Partitioning in
Large Ad Hoc Networks Using Genetic Algorithms. In GECCO'09-I [2342], pages 1347–1354,
2009. doi: 10.1145/1569901.1570082.
680. George Bernhard Dantzig and J. H. Ramser. The Truck Dispatching Problem. Management
Science, 6(1):80–91, October 1959, Institute for Operations Research and the Management
Sciences (INFORMS): Linthicum, MD, USA and HighWire Press (Stanford University): Cambridge,
MA, USA. doi: 10.1287/mnsc.6.1.80.
681. Andrea Pohoreckyj Danyluk, Léon Bottou, and Michael Littman, editors. Proceedings of the
26th International Conference on Machine Learning (ICML'09), June 14–18, 2009, Montreal,
QC, Canada, volume 382 in ACM International Conference Proceeding Series (AICPS). ACM
Press: New York, NY, USA. Fully available at http://www.machinelearning.org/archive/icml2009/abstracts.html
[accessed 2010-06-23].
682. Ludwig Danzer and Victor Klee. Lengths of Snakes in Boxes. Journal of Combinatorial
Theory, 2(3):258–265, May 1967, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands.
doi: 10.1016/S0021-9800(67)80026-7.
683. Paul J. Darwen and Xin Yao. Every Niching Method has its Niche: Fitness Sharing
and Implicit Sharing Compared. In PPSN IV [2818], pages 398–407, 1996.
doi: 10.1007/3-540-61723-X_1004. CiteSeerˣ: 10.1.1.20.9897 and 10.1.1.56.4202.
684. Charles Darwin. On the Origin of Species by Means of Natural Selection, or the Preservation
of Favoured Races in the Struggle for Life. John Murray: London, UK, 6th edition,
November 24, 1859. Fully available at http://www.gutenberg.org/etext/1228
[accessed 2007-08-05]. Google Books ID: 6gcpNAAACAAJ and nNQNAAAAYAAJ. Newly published
as [685].
685. Charles Darwin, author. Gillian Beer, editor. On the Origin of Species, Oxford World's
Classics. Oxford University Press, Inc.: New York, NY, USA, May 1998. isbn: 0-19-283438-X.
Google Books ID: LDrPI52uFQsC. OCLC: 39117382. New edition of [684].
686. Erasmus Darwin. Zoonomia; or, the Organic Laws of Life, volume I. J. Johnson: St. Paul's
Church-Yard, London, UK, second corrected edition, 1796. Fully available at http://www.gutenberg.org/etext/15707
[accessed 2008-09-10]. Entered at Stationers' Hall.
687. Angan Das and Ranga Vemuri. A Self-learning Optimization Technique for Topology
Design of Computer Networks. In EvoWorkshops'08 [1051], pages 38–51, 2008.
doi: 10.1007/978-3-540-78761-7_5.
688. Indraneel Das and John E. Dennis. A Closer Look at Drawbacks of Minimizing Weighted
Sums of Objectives for Pareto Set Generation in Multicriteria Optimization Problems. Technical
Report TR96-36, Rice University, Department of Computational & Applied Mathematics
(CAAM): Houston, TX, USA, December 1996. Fully available at http://www.caam.rice.edu/tech_reports/1996/TR96-36.ps
[accessed 2009-07-18]. See also [689].
689. Indraneel Das and John E. Dennis. A Closer Look at Drawbacks of Minimizing Weighted
Sums of Objectives for Pareto Set Generation in Multicriteria Optimization Problems. Structural
Optimization, 14(1):63–69, August 1997, Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/BF01197559. See also [688].
690. Dipankar Dasgupta, Fernando Niño, Deon Garrett, Koyel Chaudhuri, Soujanya Medapati,
Aishwarya Kaushal, and James Simien. A Multiobjective Evolutionary Algorithm for the
Task based Sailor Assignment Problem. In GECCO'09-I [2342], pages 1475–1482, 2009.
doi: 10.1145/1569901.1570099. Fully available at http://ais.cs.memphis.edu/files/papers/t13fp621-dasguptaPS.pdf
[accessed 2010-09-21].
691. Yuval Davidor. Epistasis Variance: Suitability of a Representation to Genetic Algorithms.
Complex Systems, 4(4):369–383, 1990, Complex Systems Publications, Inc.: Champaign, IL,
USA, Stephen Wolfram and Todd Rowland, editors.
692. Yuval Davidor. Epistasis Variance: A Viewpoint on GA-Hardness. In FOGA'90 [2562], pages
23–35, 1990.
693. Yuval Davidor, Hans-Paul Schwefel, and Reinhard Männer, editors. Proceedings of the
Third Conference on Parallel Problem Solving from Nature; International Conference on
Evolutionary Computation (PPSN III), October 9–14, 1994, Jerusalem, Israel, volume
866/1994 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/3-540-58484-6. isbn: 0387584846 and 3-540-58484-6. Google
Books ID: n2lXlC5mv68C. OCLC: 31132760, 190832323, 202350473, 311905312, and
422861493.
694. Lawrence Davis, editor. Genetic Algorithms and Simulated Annealing, Research Notes in
Artificial Intelligence. Pitman: London, UK, October 1987–1990. isbn: 0273087711 and
0934613443. Google Books ID: edfSSAAACAAJ. OCLC: 16355405, 257879939, and
476337140. Library of Congress Control Number (LCCN): 87021357.
695. Lawrence Davis, editor. Handbook of Genetic Algorithms, VNR Computer Library. Thomson
Publishing Group, Inc.: Stamford, CT, USA, January 1991. isbn: 0-442-00173-8
and 1850328250. Google Books ID: vTG5PAAACAAJ. OCLC: 23081440, 36495685,
231258310, and 300431834. Library of Congress Control Number (LCCN): 90012823.
GBV-Identification (PPN): 01945077X and 276468937. LC Classification: QA402.5 .H36
1991.
696. Randall Davis and Jonathan King. An Overview of Production Systems. Technical Report
STAN-CS-75-524, Stanford University, Computer Science Department: Stanford, CA, USA,
October 1975. See also [697].
697. Randall Davis and Jonathan King. An Overview of Production Systems. In Machine
Intelligence 8 [874], pages 300–334, 1977. See also [696].
698. Brian D. Davison and Khaled Rasheed. Effect of Global Parallelism on a Steady State GA.
In Evolutionary Computation and Parallel Processing Workshop [482], pages 167–170, 1999.
Fully available at http://www.cse.lehigh.edu/~brian/pubs/1999/cec99/pgado.pdf
[accessed 2010-08-01]. CiteSeerˣ: 10.1.1.112.5296. See also [2270].
699. Richard Dawkins. The Evolution of Evolvability. In Artificial Life'87 [1666], pages 201–220,
1987.
700. Richard Dawkins. The Selfish Gene. Oxford University Press, Inc.: New York, NY,
USA, 1st/2nd edition, 1976–October 1989. isbn: 0-192-86092-5. Google Books
ID: WkHO9HI7koEC.
701. Richard Dawkins. Climbing Mount Improbable. Penguin Books: London, UK and W.W.
Norton & Company: New York, NY, USA, 1st edition, 1996. isbn: 0140179186, 0141026170,
and 0670850187. Google Books ID: 5XUYHAAACAAJ, BuKaKAAACAAJ, TPzqAAAACAAJ, and
TvzqAAAACAAJ. asin: 0393039307 and 0393316823.
702. Sérgio Granato de Araújo, Aloysio de Castro Pinto Pedroza, and Antonio Carneiro de
Mesquita Filho. Evolutionary Synthesis of Communication Protocols. In ICT'03 [1776], pages
986–993, volume 2, 2003. doi: 10.1109/ICTEL.2003.1191573. Fully available at http://www.gta.ufrj.br/ftp/gta/TechReports/AMP03a.pdf
[accessed 2008-06-21]. See also [703, 705].
703. Sérgio Granato de Araújo, Aloysio de Castro Pinto Pedroza, and Antonio Carneiro de
Mesquita Filho. Uma Metodologia de Projeto de Protocolos de Comunicação Baseada em
Técnicas Evolutivas. In SBrT'03 [1274], 2003. Fully available at http://www.gta.ufrj.br/ftp/gta/TechReports/AMP03e.pdf
[accessed 2008-06-21]. See also [702, 704].
704. Sérgio Granato de Araújo, Antonio Carneiro de Mesquita Filho, and Aloysio de Castro Pinto
Pedroza. Síntese de Circuitos Digitais Otimizados via Programação Genética. In SEMISH'03
[3003], pages 273–285, volume III, 2003. Fully available at http://www.gta.ufrj.br/ftp/gta/TechReports/AMP03d.pdf
[accessed 2008-06-21].
705. Sérgio Granato de Araújo, Antonio Carneiro de Mesquita Filho, and Aloysio C. P. Pedroza.
A Scenario-Based Approach to Protocol Design Using Evolutionary Techniques. In EvoWorkshops'04
[2254], pages 178–187, 2004. See also [702].
706. Bart de Boer. Classifier Systems: A useful Approach to Machine Learning?, IR94-02. Master's
thesis, Leiden University: Leiden, The Netherlands, August 1994, Ida Sprinkhuizen-Kuyper
and Egbert J. W. Boers, Supervisors. Fully available at ftp://ftp.cs.bham.ac.uk/pub/authors/T.Kovacs/lcs.archive/DeBoer1994a.ps.gz
[accessed 2010-12-15]. CiteSeerˣ: 10.1.1.38.6367.
707. Leandro Nunes de Castro, Fernando José Von Zuben, and Helder Knidel, editors. Proceedings
of the 6th International Conference on Artificial Immune Systems (ICARIS'07),
August 26–29, 2007, Santos, Brazil, volume 4628 in Lecture Notes in Computer Science
(LNCS). Springer-Verlag GmbH: Berlin, Germany.
708. Ivanoe de Falco, Antonio Della Cioppa, Domenico Maisto, Umberto Scafuri, and Ernesto
Tarantino. Extremal Optimization as a Viable Means for Mapping in Grids. In EvoWorkshops'09
[1052], pages 41–50, 2009. doi: 10.1007/978-3-642-01129-0_5.
709. Edwin D. de Jong, Richard A. Watson, and Jordan B. Pollack. Reducing Bloat and Promoting
Diversity using Multi-Objective Methods. In GECCO'01 [2570], pages 11–18, 2001.
Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/jong_2001_gecco.html
and http://www.demo.cs.brandeis.edu/papers/rbpd_gecco01.ps.gz [accessed 2009-07-09].
CiteSeerˣ: 10.1.1.28.4549.
710. Gerdien de Jong. Evolution of Phenotypic Plasticity: Patterns of Plasticity and the Emergence
of Ecotypes. New Phytologist, 166(1):101–118, April 2005, New Phytologist Trust and
Wiley Interscience: Chichester, West Sussex, UK. doi: 10.1111/j.1469-8137.2005.01322.x.
711. Kenneth Alan De Jong. An Analysis of the Behavior of a Class of Genetic Adaptive Systems.
PhD thesis, University of Michigan: Ann Arbor, MI, USA, August 1975, John Henry
Holland, Larry K. Flanigan, Richard A. Volz, and Bernard P. Zeigler, Committee members.
Fully available at http://cs.gmu.edu/~eclab/kdj_thesis.html [accessed 2007-08-17]. Order
No.: AAI7609381.
712. Kenneth Alan De Jong. On Using Genetic Algorithms to Search Program Spaces. In
ICGA'87 [1132], pages 210–216, 1987.
713. Kenneth Alan De Jong. Learning with Genetic Algorithms: An Overview. Machine Learning,
3(2-3):121–138, October 1988, Kluwer Academic Publishers: Norwell, MA, USA and Springer
Netherlands: Dordrecht, Netherlands. doi: 10.1007/BF00113894.
714. Kenneth Alan De Jong. Genetic Algorithms are NOT Function Optimizers. In
FOGA'92 [2927], pages 5–17, 1992. Fully available at http://www.mli.gmu.edu/papers/91-95/92-12.pdf
[accessed 2011-05-30]. CiteSeerˣ: 10.1.1.161.5655.
715. Kenneth Alan De Jong. Evolutionary Computation: A Unified Approach, volume 4 in Complex
Adaptive Systems, Bradford Books. MIT Press: Cambridge, MA, USA, February 2006.
isbn: 0262041944 and 8120330021. Google Books ID: OIRQAAAAMAAJ. OCLC: 46500047,
182530408, and 276452339.
716. Kenneth Alan De Jong and William M. Spears. Learning Concept Classification Rules
using Genetic Algorithms. In IJCAI'91-II [1988], pages 651–656, 1991. Fully available
at http://citeseer.ist.psu.edu/dejong91learning.html [accessed 2007-09-12].
CiteSeerˣ: 10.1.1.104.4397 and 10.1.1.56.5640.
717. Kenneth Alan De Jong, William M. Spears, and Diana F. Gordon. Using Genetic
Algorithms for Concept Learning. Machine Learning, 13:161–188, 1993, Kluwer Academic
Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands.
Fully available at http://www.aic.nrl.navy.mil/papers/1993/AIC-93-050.ps.Z
and http://www.cs.uwyo.edu/~dspears/papers/mlj93.pdf [accessed 2010-12-14].
CiteSeerˣ: 10.1.1.115.8585 and 10.1.1.51.8124.
718. Kenneth Alan De Jong, David B. Fogel, and Hans-Paul Schwefel. A History of Evolutionary
Computation. In Evolutionary Computation 1: Basic Algorithms and Operators [174],
Chapter 6, page 40. Institute of Physics Publishing Ltd. (IOP): Dirac House, Temple Back,
Bristol, UK, 2000.
719. Kenneth Alan De Jong, Riccardo Poli, and Jonathan E. Rowe, editors. Foundations of Genetic
Algorithms 7 (FOGA'02), September 4–6, 2002, Torremolinos, Spain. Morgan Kaufmann:
San Mateo, CA, USA. isbn: 0-12-208155-2. Published in 2003.
720. Pierre-Simon Marquis de Laplace. Théorie Analytique des Probabilités (Analytical Theory
of Probability). M. V. Courcier, Imprimeur-Libraire pour les Mathématiques, quai des Augustins,
no. 57: Paris, France, 1812. Google Books ID: 6MRLAAAAMAAJ, VWuGOwAACAAJ,
VmuGOwAACAAJ, dR6 PAAACAAJ, eB6 PAAACAAJ, and nQwAAAAAMAAJ. Première Partie.
721. Luis de-Marcos, José J. Martínez, José A. Gutiérrez, Roberto Barchino, and José M.
Gutiérrez. A New Sequencing Method in Web-Based Education. In CEC'09 [1350], pages
3219–3225, 2009. doi: 10.1109/CEC.2009.4983352. INSPEC Accession Number: 10688947.
722. Jean-Baptiste Pierre Antoine de Monet, Chevalier de Lamarck. Philosophie zoologique
ou Exposition des considérations relatives à l'histoire naturelle des Animaux; à la diversité
de leur organisation et des facultés qu'ils en obtiennent; . . . . Dentu: Paris, France
and J. B. Baillière Libraire: Paris, France, 1809–1830. isbn: 1412116465. Fully available
at http://www.lamarck.cnrs.fr/ice/ice_book_detail.php?type=text&bdd=lamarck&table=ouvrages_lamarck&bookId=29
[accessed 2008-09-10]. Google Books ID: 0gkOAAAAQAAJ, L6qAG6ZPgj4C, rlAKHKY8RboC,
and vUIDAAAAQAAJ. Philosophie zoologique ou Exposition des considérations relatives à
l'histoire naturelle des Animaux; à la diversité de leur organisation et des facultés qu'ils en
obtiennent; aux causes physiques qui maintiennent en eux la vie et donnent lieu aux mouvements
qu'ils exécutent; enfin, à celles qui produisent les unes le sentiment, et les autres
l'intelligence de ceux qui en sont doués.
723. Luc de Raedt and Arno Siebes, editors. 5th European Conference on Principles of Data
Mining and Knowledge Discovery (PKDD'01), September 3–5, 2001, Freiburg, Germany,
volume 2168 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-44794-6.
isbn: 3-540-42534-9. Google Books ID: U9IXQfcoqvwC.
724. Luc de Raedt and Stefan Wrobel, editors. Proceedings of the 22nd International Conference
on Machine Learning (ICML'05), August 7–11, 2005, University of Bonn: Bonn, North
Rhine-Westphalia, Germany, volume 119 in ACM International Conference Proceeding Series
(AICPS). ACM Press: New York, NY, USA. isbn: 1-59593-180-5.
725. Fabiano Luis de Sousa and Fernando Manuel Ramos. Function Optimization using
Extremal Dynamics. In ICIPE'02 [2092], pages 115–119, volume 1, 2002.
CiteSeerˣ: 10.1.1.128.9802.
726. Fabiano Luis de Sousa and Walter Kenkiti Takahashi. Discrete Optimal Design
of Trusses by Generalized Extremal Optimization. In WCSMO6 [1226], 2005.
CiteSeerˣ: 10.1.1.76.8740.
727. Fabiano Luis de Sousa, Fernando Manuel Ramos, Pedro Paglione, and Roberto M. Girardi.
New Stochastic Algorithm for Design Optimization. AIAA Journal, 41(9):1808–1818, 2003,
American Institute of Aeronautics and Astronautics: Reston, VA, USA. doi: 10.2514/2.7299.
728. Fabiano Luis de Sousa, Valeri Vlassov, and Fernando Manuel Ramos. Generalized Extremal
Optimization: An Application in Heat Pipe Design. Applied Mathematical Modelling,
28(10):911–931, October 2004, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands.
doi: 10.1016/j.apm.2004.04.004.
729. Dominique de Werra and Alain Hertz. Tabu Search Techniques: A Tutorial and an
Application to Neural Networks. OR Spectrum – Quantitative Approaches in Management,
11(3):131–141, September 1989, Springer-Verlag: Berlin/Heidelberg.
doi: 10.1007/BF01720782. Fully available at http://www.springerlink.de/content/x25k97k0qx237553/fulltext.pdf
[accessed 2008-03-27].
730. Thomas L. Dean, editor. Proceedings of the Sixteenth International Joint Conference
on Artificial Intelligence (IJCAI'99-I), July 31 – August 6, 1999, City Conference Center:
Stockholm, Sweden, volume 1. Morgan Kaufmann Publishers Inc.: San Francisco, CA,
USA. isbn: 1-55860-613-0. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-99-VOL-1/CONTENT/content.htm
[accessed 2008-04-01]. See also [731, 2296].
731. Thomas L. Dean, editor. Proceedings of the Sixteenth International Joint Conference
on Artificial Intelligence (IJCAI'99-II), July 31 – August 6, 1999, City Conference Center:
Stockholm, Sweden, volume 2. Morgan Kaufmann Publishers Inc.: San Francisco,
CA, USA. isbn: 1-55860-613-0. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-99-VOL-2/CONTENT/content.htm
[accessed 2008-04-01]. See also [730, 2296].
732. Thomas L. Dean and Kathleen McKeown, editors. Proceedings of the 9th National Conference
on Articial Intelligence (AAAI91), July 1419, 1991, Anaheim, CA, USA. AAAI Press:
Menlo Park, CA, USA and MIT Press: Cambridge, MA, USA. isbn: 0-262-51059-6. Partly
available at http://www.aaai.org/Conferences/AAAI/aaai91.php [accessed 2007-09-06].
2 volumes.
733. James S. DeArmon. A Comparison of Heuristics for the Capacitated Chinese Postman Prob-
lem. Master's thesis, University of Maryland: College Park, MD, USA, 1981.
734. Ed Deaton, editor. Proceedings of the 1994 ACM Symposium on Applied Computing (SAC'94),
March 6–8, 1994, Phoenix Civic Plaza: Phoenix, AZ, USA. ACM Press: New York, NY, USA.
isbn: 0-89791-647-6. Google Books ID: uq1QAAAAMAAJ. OCLC: 31391245.
735. Kalyanmoy Deb. Genetic Algorithms for Multimodal Function Optimization. Master's thesis,
Clearinghouse for Genetic Algorithms, University of Alabama: Tuscaloosa, 1989. TCGA
Report No. 89002.
736. Kalyanmoy Deb. Solving Goal Programming Problems Using Multi-Objective Genetic Algo-
rithms. In CEC'99 [110], pages 77–84, volume 1, 1999. doi: 10.1109/CEC.1999.781910.
737. Kalyanmoy Deb. Evolutionary Algorithms for Multi-Criterion Optimization in
Engineering Design. In EUROGEN'99 [1894], pages 135–161, 1999. Fully
available at http://www.lania.mx/~ccoello/EMOO/deb99.ps.gz [accessed 2009-08-05].
CiteSeerx: 10.1.1.52.3761.
738. Kalyanmoy Deb. An Introduction to Genetic Algorithms. Sadhana – Academy Proceedings in
Engineering Sciences, 24(4-5):293–315, August 1999, Indian Academy of Sciences: Bangalore,
India and Springer India: New Delhi, India. doi: 10.1007/BF02823145. Fully available
at http://www.iitk.ac.in/kangal/papers/sadhana.ps.gz [accessed 2007-07-28].
739. Kalyanmoy Deb. Nonlinear Goal Programming using Multi-Objective Genetic Al-
gorithms. The Journal of the Operational Research Society (JORS), 52(3):291–302,
March 2001, Palgrave Macmillan Ltd.: Houndmills, Basingstoke, Hampshire, UK and Op-
erations Research Society: Birmingham, UK. doi: 10.1057/palgrave.jors.2601089. Fully
available at http://www.lania.mx/~ccoello/EMOO/deb01d.ps.gz [accessed 2008-11-14].
CiteSeerx: 10.1.1.6.6775.
740. Kalyanmoy Deb. Multi-Objective Optimization Using Evolutionary Algorithms, Wiley In-
terscience Series in Systems and Optimization. John Wiley & Sons Ltd.: New York, NY,
USA, May 2001. isbn: 047187339X. Google Books ID: OSTn4GSy2uQC and ll Blt03u5IC.
OCLC: 46472121 and 46694962.
741. Kalyanmoy Deb. Genetic Algorithms for Optimization. KanGAL Report 2001002, Kan-
pur Genetic Algorithms Laboratory (KanGAL), Department of Mechanical Engineering,
Indian Institute of Technology Kanpur (IIT): Kanpur, Uttar Pradesh, India, 2001. Fully
available at http://www.iitk.ac.in/kangal/papers/isna.ps.gz [accessed 2009-07-09].
CiteSeerx: 10.1.1.6.9296.
742. Kalyanmoy Deb and David Edward Goldberg. An Investigation of Niche and Species For-
mation in Genetic Function Optimization. In ICGA'89 [2414], pages 42–50, 1989.
743. Kalyanmoy Deb and David Edward Goldberg. Sufficient Conditions for Deceptive and Easy
Binary Functions. Annals of Mathematics and Artificial Intelligence, 10(4):385–408, Decem-
ber 1994, Springer Netherlands: Dordrecht, Netherlands. doi: 10.1007/BF01531277. See also
[744].
744. Kalyanmoy Deb and David Edward Goldberg. Sufficient Conditions for Deceptive and Easy
Binary Functions. Technical Report 92001, Illinois Genetic Algorithms Laboratory (IlliGAL),
Department of Computer Science, Department of General Engineering, University of Illinois
at Urbana-Champaign: Urbana-Champaign, IL, USA, 1992. See also [743].
745. Kalyanmoy Deb and Dhish Kumar Saxena. On Finding Pareto-Optimal Solutions Through
Dimensionality Reduction for Certain Large-Dimensional Multi-Objective Optimization
Problems. KanGAL Report 2005011, Kanpur Genetic Algorithms Laboratory (KanGAL),
Department of Mechanical Engineering, Indian Institute of Technology Kanpur (IIT): Kan-
pur, Uttar Pradesh, India, December 2005. Fully available at http://www.iitk.ac.in/
kangal/papers/k2005011.pdf [accessed 2009-07-18].
746. Kalyanmoy Deb and J. Sundar. Reference Point Based Multi-Objective Optimization using
Evolutionary Algorithms. In GECCO'06 [1516], 2006. doi: 10.1145/1143997.1144112. See
also [753, 756].
747. Kalyanmoy Deb, Samir Agrawal, Amrit Pratab, and T. Meyarivan. A Fast Elitist
Non-Dominated Sorting Genetic Algorithm for Multi-Objective Optimization: NSGA-
II. KanGAL Report 200001, Kanpur Genetic Algorithms Laboratory (KanGAL), De-
partment of Mechanical Engineering, Indian Institute of Technology Kanpur (IIT): Kan-
pur, Uttar Pradesh, India, 2000. Fully available at http://vision.ucsd.edu/
~sagarwal/nsga2.pdf and http://www.jeo.org/emo/deb00.ps.gz [accessed 2009-07-18].
CiteSeerx: 10.1.1.18.4257. See also [748, 749].
748. Kalyanmoy Deb, Samir Agrawal, Amrit Pratab, and T Meyarivan. A Fast Elitist Non-
Dominated Sorting Genetic Algorithm for Multi-Objective Optimization: NSGA-II. In PPSN
VI [2421], pages 849–858, 2000. doi: 10.1007/3-540-45356-3_83. Fully available at https://
eprints.kfupm.edu.sa/17643/1/17643.pdf [accessed 2008-10-20]. See also [747, 749].
749. Kalyanmoy Deb, Amrit Pratab, Samir Agrawal, and T. Meyarivan. A Fast and Elitist Mul-
tiobjective Genetic Algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation
(IEEE-EC), 6(2):182–197, April 2002, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/4235.996017. Fully available at http://dynamics.org/~altenber/UH_
ICS/EC_REFS/MULTI_OBJ/DebPratapAgarwalMeyarivan.pdf [accessed 2009-07-17]. See
also [747, 748].
750. Kalyanmoy Deb, Lothar Thiele, Marco Laumanns, and Eckart Zitzler. Scalable Multi-
Objective Optimization Test Problems. In CEC'02 [944], pages 825–830, volume 1, 2002.
doi: 10.1109/CEC.2002.1007032. CiteSeerx: 10.1.1.18.7531. INSPEC Accession Num-
ber: 7321894.
751. Kalyanmoy Deb, Riccardo Poli, Wolfgang Banzhaf, Hans-Georg Beyer, Edmund K. Burke,
Paul J. Darwen, Dipankar Dasgupta, Dario Floreano, James A. Foster, Mark Harman,
Owen E. Holland, Pier Luca Lanzi, Lee Spector, Andrea G. B. Tettamanzi, and Dirk
Thierens, editors. Proceedings of the Genetic and Evolutionary Computation Conference, Part
I (GECCO'04-I), June 26–30, 2004, Red Lion Hotel: Seattle, WA, USA, volume 3102/2004
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
isbn: 3-540-22344-4. Google Books ID: b7k G2KpRrQC. OCLC: 55857927, 181345757,
and 315013110. Library of Congress Control Number (LCCN): 2004107860. See also
[752, 1515, 2079].
752. Kalyanmoy Deb, Riccardo Poli, Wolfgang Banzhaf, Hans-Georg Beyer, Edmund K. Burke,
Paul J. Darwen, Dipankar Dasgupta, Dario Floreano, James A. Foster, Mark Harman,
Owen E. Holland, Pier Luca Lanzi, Lee Spector, Andrea G. B. Tettamanzi, and Dirk
Thierens, editors. Proceedings of the Genetic and Evolutionary Computation Conference,
Part II (GECCO'04-II), June 26–30, 2004, Red Lion Hotel: Seattle, WA, USA, volume
3103/2004 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. isbn: 3-540-22343-6. Google Books ID: ftKoWMXvHKoC. Library of Congress
Control Number (LCCN): 2004107860. See also [751, 1515, 2079].
753. Kalyanmoy Deb, J. Sundar, and Udaya Bhaskara Rao N. Reference Point Based Multi-
Objective Optimization using Evolutionary Algorithms. KanGAL Report 2005012, Kanpur
Genetic Algorithms Laboratory (KanGAL), Department of Mechanical Engineering, Indian
Institute of Technology Kanpur (IIT): Kanpur, Uttar Pradesh, India, December 2005. Fully
available at http://www.iitk.ac.in/kangal/papers/k2005012.pdf [accessed 2009-07-18].
See also [746, 756].
754. Kalyanmoy Deb, Ankur Sinha, and Saku Kukkonen. Multi-Objective Test Problems, Link-
ages, and Evolutionary Methodologies. In GECCO'06 [1516], pages 1141–1148, 2006.
doi: 10.1145/1143997.1144179. See also [755].
755. Kalyanmoy Deb, Ankur Sinha, and Saku Kukkonen. Multi-Objective Test Problems, Link-
ages, and Evolutionary Methodologies. KanGAL Report 2006001, Kanpur Genetic Algo-
rithms Laboratory (KanGAL), Department of Mechanical Engineering, Indian Institute of
Technology Kanpur (IIT): Kanpur, Uttar Pradesh, India, January 2006. Fully available
at http://www.iitk.ac.in/kangal/papers/k2006001.pdf [accessed 2009-07-11]. See also
[754].
756. Kalyanmoy Deb, J. Sundar, Udaya Bhaskara Rao N., and Shamik Chaudhuri. Reference Point
Based Multi-Objective Optimization using Evolutionary Algorithms. International Journal
of Computational Intelligence Research (IJCIR), 2(3):273–286, 2006, Research India Publica-
tions: Delhi, India. Fully available at http://www.lania.mx/~ccoello/deb06b.pdf.gz
and http://www.ripublication.com/ijcirv2/ijcirv2n3_4.pdf [accessed 2009-07-18].
See also [746, 753].
757. Stefano Debattisti, Nicola Marlat, Luca Mussi, and Stefano Cagnoni. Implementation of
a Simple Genetic Algorithm within the CUDA Architecture. Technical Report, Università
degli Studi di Parma, Dipartimento di Ingegneria dell'Informazione: Parma, Italy, 2009. Fully
available at http://www.gpgpgpu.com/gecco2009/3.pdf [accessed 2011-11-14].
758. Rina Dechter and Judea Pearl. Generalized Best-First Search Strategies and the Optimality of
A*. Journal of the Association for Computing Machinery (JACM), 32(3):505–536, July 1985,
ACM Press: New York, NY, USA. doi: 10.1145/3828.3830. CiteSeerx: 10.1.1.89.3090.
759. Rina Dechter and Richard Sutton, editors. Proceedings of the Eighteenth National Con-
ference on Artificial Intelligence and Fourteenth Conference on Innovative Applications of
Artificial Intelligence (AAAI'02, IAAI'02), July 28–August 1, 2002, Edmonton, Alberta,
Canada. AAAI Press: Menlo Park, CA, USA. Partly available at http://www.aaai.org/
Conferences/AAAI/aaai02.php and http://www.aaai.org/Conferences/IAAI/
iaai02.php [accessed 2007-09-06].
760. Garg Deepak, editor. Soft Computing. Allied Publishers: Vadodara, Gujarat, India, 2005.
isbn: 817764632X. Google Books ID: IkajJC9iGxMC.
761. Viktoriya Degeler, Ilce Georgievski, Alexander Lazovik, and Marco Aiello. Concept Mapping
for Faster QoS-Aware Web Service Composition. In SOCA'10 [1378], 2010. See also [48–
50, 320].
762. Frank K. H. A. Dehne, Todd Eavis, and Andrew Rau-Chaplin. Efficient Computation of
View Subsets. In DOLAP'07 [2556], pages 65–72, 2007. doi: 10.1145/1317331.1317343.
Fully available at http://dolap07.cs.aau.dk/dehne.pdf [accessed 2011-03-29].
CiteSeerx: 10.1.1.68.9167.
763. Marcel Dekker. Introduction to Set Theory. CRC Press, Inc.: Boca Raton, FL, USA, revised
and expanded, 3rd edition, June 22, 1999. isbn: 0-8247-7915-0.
764. Sarah Jane Delany and Michael Madden, editors. 18th Irish Conference on Artificial Intelli-
gence and Cognitive Science (AICS'07), August 29–31, 2007, Dublin Institute of Technology:
Dublin, Ireland.
765. Federico Della Croce, Roberto Tadei, and Giuseppe Volta. A Genetic Algorithm for
the Job Shop Problem. Computers &amp; Operations Research, 22(1):15–24, January 1995,
Pergamon Press: Oxford, UK and Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/0305-0548(93)E0015-L.
766. Janez Demšar. Statistical Comparisons of Classifiers over Multiple Data Sets. Journal of
Machine Learning Research (JMLR), 7:1–30, January 2006, MIT Press: Cambridge, MA,
USA. Fully available at http://jmlr.csail.mit.edu/papers/volume7/demsar06a/
demsar06a.pdf [accessed 2010-10-19]. CiteSeerx: 10.1.1.141.3142. See also [1020].
767. Jean-Louis Deneubourg and Simon Goss. Collective Patterns and Decision-Making. Ethol-
ogy, Ecology &amp; Evolution, 1(4):295–311, December 1989. Fully available at http://www.
ulb.ac.be/sciences/use/publications/JLD/53.pdf [accessed 2010-12-09].
768. Jean-Louis Deneubourg, Jacques M. Pasteels, and J. C. Verhaeghe. Probabilistic Behaviour
in Ants: A Strategy of Errors? Journal of Theoretical Biology, 105(2):259–271, 1983, El-
sevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: Academic Press
Professional, Inc.: San Diego, CA, USA. doi: 10.1016/S0022-5193(83)80007-1.
769. Berna Dengiz, Fulya Altiparmak, and Alice E. Smith. Local Search Genetic Algorithm
for Optimal Design of Reliable Networks. IEEE Transactions on Evolutionary Computation
(IEEE-EC), 1(3):179–188, September 1997, IEEE Computer Society: Washington, DC, USA.
Fully available at http://www.eng.auburn.edu/~aesmith/publications/journal/
ieeeec.pdf [accessed 2009-09-01]. CiteSeerx: 10.1.1.40.8821.
770. Roozbeh Derakhshan, Frank K. H. A. Dehne, Othmar Korn, and Bela Stantic. Sim-
ulated Annealing For Materialized View Selection in Data Warehousing Environment.
In DBA'06 [1407], pages 89–94, 2006. Fully available at http://www.inf.ethz.ch/
personal/droozbeh/docs/503-036.pdf [accessed 2010-09-11]. See also [771].
771. Roozbeh Derakhshan, Bela Stantic, Othmar Korn, and Frank K. H. A. Dehne. Par-
allel Simulated Annealing for Materialized View Selection in Data Warehousing Envi-
ronments. In ICA3PP [372], pages 121–132, 2008. doi: 10.1007/978-3-540-69501-1_14.
CiteSeerx: 10.1.1.148.4815. See also [770].
772. Anna Derezińska. Advanced Mutation Operators Applicable in C# Programs. In
SET'06 [2366], pages 283–288, 2006.
773. Ulrich Derigs, editor. Optimization and Operations Research, Encyclopedia of Life Support
Systems (EOLSS). Developed under the Auspices of the UNESCO, Eolss Publishers: Oxford,
UK.
774. Proceedings of the 1978 ACM 7th Annual Computer Science Conference (CSC'78), Febru-
ary 21–23, 1978, Detroit, MI, USA.
775. I. Devarenne, Alexandre Caminada, H. Mabed, and T. Defaix. Adaptive Local Search for
a New Military Frequency Hopping Planning Problem. In EvoWorkshops'08 [1051], pages
11–20, 2008. doi: 10.1007/978-3-540-78761-7_2.
776. Vladan Devedzic, editor. Proceedings of the 25th IASTED International Multi-Conference on
Artificial Intelligence and Applications (AIAP'07), February 12–14, 2007, Innsbruck, Aus-
tria. ACTA Press: Anaheim, CA, USA. Partly available at http://www.iasted.org/
conferences/pastinfo-549.html [accessed 2011-04-09].
777. Alexandre Devert. Building Processes Optimization: Toward an Artificial Ontogeny based
Approach. PhD thesis, Université Paris-Sud, École Doctorale d'Informatique: Paris, France
and Institut National de Recherche en Informatique et en Automatique (INRIA), Centre de
Recherche Saclay – Île-de-France: Orsay, France, May 2009, Marc Schoenauer and Nicolas
Bredèche, Advisors.
778. Alexandre Devert, Thomas Weise, and Ke Tang. A Study on Scalable Representations for
Evolutionary Optimization of Ground Structures. Evolutionary Computation, 20, 2012, MIT
Press: Cambridge, MA, USA. doi: 10.1162/EVCO_a_00054. PubMed ID: 22004002.
779. Yves Deville and Christine Solnon, editors. EPTCS 5: Proceedings 6th International Work-
shop on Local Search Techniques in Constraint Satisfaction, September 20, 2009, Lisbon,
Portugal. doi: 10.4204/EPTCS.5. arXiv ID: 0910.1404v1.
780. Yves Deville and Christine Solnon, editors. Proceedings of the 7th Workshop on Local Search
Techniques in Constraint Satisfaction, September 6, 2010, St Andrews, Scotland, UK.
781. Luc Devroye. Non-Uniform Random Variate Generation. Springer-Verlag London Lim-
ited: London, UK, 1986. isbn: 0-387-96305-7 and 3-540-96305-7. Fully available
at http://cg.scs.carleton.ca/~luc/rnbookindex.html [accessed 2007-07-05]. Google
Books ID: J9KZAQAACAAJ and SJgWGQAACAAJ. OCLC: 260101691 and 300414888.
782. Patrik D'Haeseleer. Context Preserving Crossover in Genetic Programming. In
CEC'94 [1891], pages 256–261, volume 1, 1994. doi: 10.1109/ICEC.1994.350006.
CiteSeerx: 10.1.1.56.2785. INSPEC Accession Number: 4812326.
783. C. A. Dhote and M. S. Ali. Materialized View Selection in Data Warehousing: A Survey.
Journal of Applied Sciences, 9(3):401–414, 2009, Asian Network for Scientific Information
(ANSI): Faisalabad, Pakistan. doi: 10.3923/jas.2009.401.414. Fully available at http://
docsdrive.com/pdfs/ansinet/jas/2009/401-414.pdf [accessed 2011-04-13].
784. Gianni A. Di Caro. Ant Colony Optimization and its Application to Adaptive Routing
in Telecommunication Networks. PhD thesis, Applied Sciences, Polytechnic School, Uni-
versité Libre de Bruxelles: Brussels, Belgium, September 2004, Marco Dorigo, Advisor.
Partly available at http://www.idsia.ch/~gianni/Papers/routing-chapters.pdf
and http://www.idsia.ch/~gianni/Papers/thesis-abstract.pdf [accessed 2008-07-26].
See also [786].
785. Gianni A. Di Caro and Marco Dorigo. AntNet: Distributed Stigmergetic Control for
Communications Networks. Journal of Artificial Intelligence Research (JAIR), 9:317–
365, December 1998, AI Access Foundation, Inc.: El Segundo, CA, USA and AAAI
Press: Menlo Park, CA, USA. Fully available at http://www.jair.org/media/544/
live-544-1748-jair.pdf [accessed 2009-07-21]. CiteSeerx: 10.1.1.53.4154. See also [786].
786. Gianni A. Di Caro and Marco Dorigo. Two Ant Colony Algorithms for Best-Effort Routing
in Datagram Networks. In PDCS'98 [1409], pages 541–546, 1998. Fully available at http://
citeseer.ist.psu.edu/dicaro98two.html and http://www.idsia.ch/~gianni/
Papers/PDCS98.ps.gz [accessed 2008-07-26]. See also [784, 785].
787. Gianni A. Di Caro, Frederick Ducatelle, and Luca Maria Gambardella. Wireless Commu-
nications for Distributed Navigation in Robot Swarms. In EvoWorkshops'09 [1052], pages
21–30, 2009. doi: 10.1007/978-3-642-01129-0_3.
788. Cecilia Di Chio, Stefano Cagnoni, Carlos Cotta, Marc Ebner, Anikó Ekárt, Anna Isabel
Esparcia-Alcázar, Juan Julián Merelo-Guervós, Ferrante Neri, Mike Preuß, Hendrik Richter,
Julian Togelius, and Georgios N. Yannakakis, editors. Applications of Evolutionary Computa-
tion – Proceedings of EvoApplications 2011: EvoCOMPLEX, EvoGAMES, EvoIASP, EvoIN-
TELLIGENCE, EvoNUM, and EvoSTOC, Part 1 (EvoAPPLICATIONS'11), April 27–29,
2011, Torino, Italy, volume 6624 in Theoretical Computer Science and General Issues (SL
1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-642-20525-5. Library of Congress Control Number (LCCN): 2011925061.
789. Robert P. Dick and Niraj K. Jha. MOGAC: A Multiobjective Genetic Algorithm for
Hardware-Software Co-synthesis of Hierarchical Heterogeneous Distributed Embedded Sys-
tems. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems,
17(10):920–935, October 1998, IEEE Circuits and Systems Society: Piscataway, NJ, USA.
doi: 10.1109/43.728914. Fully available at http://www.jeo.org/emo/dick98.ps.gz [ac-
cessed 2009-07-31]. CiteSeerx: 10.1.1.46.5584. INSPEC Accession Number: 6092176.
790. Dirk Dickmanns, Jürgen Schmidhuber, and Andreas Winklhofer. Der Genetische Algo-
rithmus: Eine Implementierung in Prolog. Technische Universität München, Institut für
Informatik, Lehrstuhl Professor Radig: Munich, Bavaria, Germany, 1987. Fully avail-
able at http://www.idsia.ch/~juergen/geneticprogramming.html [accessed 2007-11-01].
Fortgeschrittenenpraktikum.
791. Reinhard Diestel. Graph Theory, volume 173 in Graduate Texts in Mathematics. Springer
New York: New York, NY, USA, 3rd edition, 2006. isbn: 3540261834. Google Books
ID: aR2TMYQr2CMC.
792. Thomas G. Dietterich. Overfitting and Undercomputing in Machine Learning. ACM
Computing Surveys (CSUR), 27(3):326–327, 1995, ACM Press: New York, NY, USA.
doi: 10.1145/212094.212114. CiteSeerx: 10.1.1.55.2069.
793. Thomas G. Dietterich and William R. Swartout, editors. Proceedings of the 8th Na-
tional Conference on Artificial Intelligence (AAAI'90), July 29–August 3, 1990, Boston,
MA, USA. AAAI Press: Menlo Park, CA, USA and MIT Press: Cambridge, MA, USA.
isbn: 0-262-51057-X. Partly available at http://www.aaai.org/Conferences/AAAI/
aaai90.php [accessed 2007-09-06]. 2 volumes.
794. Jason Digalakis and Konstantinos Margaritis. A Parallel Memetic Algorithm for Solv-
ing Optimization Problems. In MIC'01 [2294], pages 121–125, 2001. Fully available
at http://citeseer.ist.psu.edu/digalakis01parallel.html and http://www.
it.uom.gr/people/digalakis/digiasfull.pdf [accessed 2007-09-12]. See also [795].
795. Jason Digalakis and Konstantinos Margaritis. Performance Comparison of Memetic Al-
gorithms. Journal of Applied Mathematics and Computation, 158:237–252, October 2004,
Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.amc.2003.08.115. Fully avail-
able at http://www.complexity.org.au/ci/draft/draft/digala02/digala02s.
pdf and http://www.complexity.org.au/ci/vol10/digala02/digala02s.pdf [ac-
cessed 2010-09-11]. CiteSeerx: 10.1.1.21.5495. See also [794].
796. Edsger Wybe Dijkstra. A Note on Two Problems in Connexion with Graphs. Numerische
Mathematik, 1:269–271, 1959. Fully available at http://www-m3.ma.tum.de/twiki/
pub/MN0506/WebHome/dijkstra.pdf [accessed 2008-06-24].
797. Werner Dilger. Einführung in die Künstliche Intelligenz. Chemnitz University of Technol-
ogy, Faculty of Computer Science, Chair of Artificial Intelligence (Künstliche Intelligenz):
Chemnitz, Sachsen, Germany, April 2006. The lecture notes of the lecture on Artificial
Intelligence by Prof. Dilger from 2006.
798. Paolo Dini. Section 1: Science, New Paradigms. Subsection 1: A Scientific Foundation
for Digital Ecosystems. In Digital Business Ecosystems [1989]. Office for Official Publi-
cations of the European Communities: Luxembourg, 2007. Fully available at http://
www.digital-ecosystems.org/book/Section1.pdf and http://www.ieee-dest.
curtin.edu.au/2007/PDini.pdf [accessed 2008-11-09]. See also the keynote talk "Structure
and Outlook of Digital Ecosystems Research", IEEE Digital Ecosystems Technologies Con-
ference, DEST'07, Cairns, Australia, February 20–23, 2007. See also [799].
799. Paolo Dini. Digital Business Ecosystems, WP 18: Socio-economic Perspectives of Sus-
tainability and Dynamic Specification of Behaviour in Digital Business Ecosystems,
Deliverable 18.4: Report on Self-Organisation from a Dynamical Systems and Computer
Science Viewpoint. Information Society Technologies, March 5, 2007. Fully available
at http://files.opaals.org/DBE/deliverables/Del_18.4_DBE_Report_on_
REFERENCES 1019
self-organisation_from_a_dynamical_systems_and_computer_viewpoint.
pdf [accessed 2008-11-09]. Contract nr. 507953.
800. Yefim Dinitz. Dinitz' Algorithm: The Original Version and Even's Version. In Theoretical
Computer Science: Essays in Memory of Shimon Even [1093], pages 218–240. Springer-Verlag
GmbH: Berlin, Germany, 2006. Fully available at http://www.cs.bgu.ac.il/~dinitz/
Papers/Dinitz_alg.pdf [accessed 2010-09-01].
801. Andrej Dobnikar, Nigel C. Steele, David W. Pearson, and Rudolf F. Albrecht, editors. Pro-
ceedings of the 4th International Conference on Artificial Neural Nets and Genetic Algorithms
(ICANNGA'99), April 6–9, 1999, Portorož, Slovenia. Birkhäuser Verlag: Basel, Switzer-
land and Springer-Verlag GmbH: Berlin, Germany. isbn: 3-211-83364-1. Google Books
ID: clKwynlfZYkC.
802. Karl Doerner, Manfred Gronalt, Richard F. Hartl, Marc Reimann, Christine Strauss,
and Michael Stummer. Savings Ants for the Vehicle Routing Problem. In EvoWork-
shops'02 [465], pages 11–20, 2002. doi: 10.1007/3-540-46004-7_2. Fully avail-
able at http://epub.wu-wien.ac.at/dyn/virlib/wp/mediate/epub-wu-01_1f1.
pdf?ID=epub-wu-01_1f1 [accessed 2008-10-27]. CiteSeerx: 10.1.1.6.8882. See also [2811].
803. Benjamin Doerr, Edda Happ, and Christian Klein. Crossover can provably be
useful in Evolutionary Computation. In GECCO'08 [1519], pages 539–546, 2008.
doi: 10.1145/1389095.1389202. Fully available at http://www.mpi-inf.mpg.de/
~doerr/papers/cpuinec.pdf and http://www.mpi-inf.mpg.de/~edda/papers/
gecco2008_1.pdf [accessed 2011-11-23]. CiteSeerx: 10.1.1.163.9588. See also [804].
804. Benjamin Doerr, Edda Happ, and Christian Klein. Crossover can provably be useful in
Evolutionary Computation. Theoretical Computer Science, pages 1–17, 2010, Elsevier Science
Publishers B.V.: Essex, UK. doi: 10.1016/j.tcs.2010.10.035. See also [803].
805. Pedro Domingos and Michael Pazzani. On the Optimality of the Simple Bayesian Clas-
sifier under Zero-One Loss. Machine Learning, 29(2-3):103–130, November 1997, Kluwer
Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Nether-
lands. doi: 10.1023/A:1007413511361. Fully available at http://citeseer.ist.psu.
edu/old/domingos97optimality.html and http://www.ics.uci.edu/~pazzani/
Publications/mlj97-pedro.pdf [accessed 2009-09-10].
806. Wolfgang Domschke. Logistik: Rundreisen und Touren, Oldenbourgs Lehr- und Handbücher
der Wirtschafts- u. Sozialwissenschaften. Oldenbourg Verlag: Munich, Bavaria, Germany,
fourth edition, 1997. isbn: 978-3-486-24273-7.
807. Marco Dorigo and Christian Blum. Ant Colony Optimization Theory: A Survey. Theo-
retical Computer Science, 344(2-3), November 17, 2005, Elsevier Science Publishers B.V.:
Essex, UK. doi: 10.1016/j.tcs.2005.05.020. Fully available at http://code.ulb.ac.be/
dbfiles/DorBlu2005tcs.pdf [accessed 2007-08-05].
808. Marco Dorigo and Luca Maria Gambardella. Ant Colony System: A Cooperative Learning
Approach to the Traveling Salesman Problem. IEEE Transactions on Evolutionary Compu-
tation (IEEE-EC), 1(1):53–66, April 1997, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/4235.585892. Fully available at http://www.idsia.ch/~luca/acs-ec97.
pdf [accessed 2009-06-27]. CiteSeerx: 10.1.1.49.7702.
809. Marco Dorigo and Thomas Stützle. Ant Colony Optimization, Bradford Books. MIT
Press: Cambridge, MA, USA, July 1, 2004. isbn: 0-262-04219-3. Google Books
ID: aefcpY8GiEC.
810. Marco Dorigo, Vittorio Maniezzo, and Alberto Colorni. The Ant System: Optimization by a
Colony of Cooperating Agents. IEEE Transactions on Systems, Man, and Cybernetics – Part
B: Cybernetics, 26(1):29–41, February 1996, IEEE Systems, Man, and Cybernetics Society:
New York, NY, USA. doi: 10.1109/3477.484436. Fully available at ftp://iridia.ulb.ac.
be/pub/mdorigo/journals/IJ.10-SMC96.pdf and http://www.agent.ai/doc/
upload/200302/dori96.pdf [accessed 2009-06-26]. CiteSeerx: 10.1.1.50.6491. INSPEC
Accession Number: 5191313.
811. Marco Dorigo, Gianni A. Di Caro, and Luca Maria Gambardella. Ant Algorithms for Discrete
Optimization. Technical Report IRIDIA/98-10, Université Libre de Bruxelles: Brussels,
Belgium, 1998. CiteSeerx: 10.1.1.72.9591. See also [812].
812. Marco Dorigo, Gianni A. Di Caro, and Luca Maria Gambardella. Ant Algorithms for
Discrete Optimization. Artificial Life, 5(2):137–172, Spring 1999, MIT Press: Cambridge,
MA, USA. doi: 10.1162/106454699568728. Fully available at ftp://ftp.idsia.ch/
pub/luca/papers/ij_23-alife99.ps.gz, http://code.ulb.ac.be/dbfiles/
DorDicGam1999al.pdf, http://www.cs.ubc.ca/~hutter/earg/papers04-05/
artificial_life.pdf, http://www.idsia.ch/~luca/abstracts/papers/ij_
23-alife99.pdf, and http://www.idsia.ch/~luca/ij_23-alife99.pdf [ac-
cessed 2010-12-10]. See also [811].
813. Marco Dorigo, Gianni A. Di Caro, and Thomas Stützle. Special Issue on Ant Algorithms.
Future Generation Computer Systems – The International Journal of Grid Computing: The-
ory, Methods and Applications (FGCS), 16(8):851–955, June 2000, Elsevier Science Publish-
ers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers Ltd.:
Amsterdam, The Netherlands.
814. Marco Dorigo, Gianni A. Di Caro, and Thomas Stützle, editors. From Ant Colonies to Arti-
ficial Ants – First International Workshop on Ant Colony Optimization and Swarm Intelligence
(ANTS'98), October 15–16, 1998, Brussels, Belgium. See also [813].
815. Marco Dorigo, Luca Maria Gambardella, Martin Middendorf, and Thomas Stützle, editors.
From Ant Colonies to Artificial Ants – Second International Workshop on Ant Colony Opti-
mization and Swarm Intelligence (ANTS'00), September 8–9, 2000, Brussels, Belgium. See
also [817].
816. Marco Dorigo, Gianni A. Di Caro, and Michael Samples, editors. From Ant Colonies to
Artificial Ants – Third International Workshop on Ant Colony Optimization (ANTS'02),
September 12–14, 2002, Brussels, Belgium, volume 2463/2002 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-45724-0.
isbn: 3-540-44146-8.
817. Marco Dorigo, Luca Maria Gambardella, Martin Middendorf, and Thomas Stützle. Special
Section on Ant Colony Optimization. IEEE Transactions on Evolutionary Computation
(IEEE-EC), 6(4):317–365, August 2002, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/TEVC.2002.802446.
818. Marco Dorigo, Mauro Birattari, Christian Blum, Luca Maria Gambardella, Francesco Mon-
dada, and Thomas Stützle, editors. Fourth International Workshop on Ant Colony Optimiza-
tion and Swarm Intelligence (ANTS'04), September 5–8, 2004, Brussels, Belgium, volume
3172/2004 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/b99492.
819. Marco Dorigo, Mauro Birattari, and Thomas Stützle. Ant Colony Optimization. IEEE
Computational Intelligence Magazine, 1(4):28–39, November 2006, IEEE Computational In-
telligence Society: Piscataway, NJ, USA. doi: 10.1109/MCI.2006.329691. INSPEC Accession
Number: 9184238.
820. Marco Dorigo, Mauro Birattari, and Thomas Stützle. Ant Colony Optimization – Artificial
Ants as a Computational Intelligence Technique. IEEE Computational Intelligence Mag-
azine, 1(4):28–39, 2006, IEEE Computational Intelligence Society: Piscataway, NJ, USA.
doi: 10.1109/MCI.2006.329691. Fully available at http://iridia.ulb.ac.be/~mbiro/
paperi/DorBirStu2006ieee-cim.pdf and http://iridia.ulb.ac.be/~mbiro/
paperi/IridiaTr2006-023r001.pdf [accessed 2010-09-26]. CiteSeerx: 10.1.1.64.9532.
INSPEC Accession Number: 9184238.
821. Marco Dorigo, Luca Maria Gambardella, Mauro Birattari, A. Martinoli, Riccardo Poli, and
Thomas Stützle, editors. Fifth International Workshop on Ant Colony Optimization and
Swarm Intelligence (ANTS'06), September 4–7, 2006, Brussels, Belgium, volume 4150/2006
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/11839088.
822. Jürgen Dorn, Peter Hrastnik, and Albert Rainer. Web Service Discovery and Composition
with MOVE. In EEE'05 [1371], pages 791–792, 2005. doi: 10.1109/EEE.2005.142. INSPEC
Accession Number: 8530441. See also [317, 823, 2257].
823. Jürgen Dorn, Albert Rainer, and Peter Hrastnik. Toward Semantic Composi-
tion of Web Services with MOVE. In CEC/EEE'06 [2967], pages 437–438, 2006.
doi: 10.1109/CEC-EEE.2006.89. Fully available at http://move.ec3.at/Papers/
WSC06.pdf [accessed 2007-10-25]. INSPEC Accession Number: 9189342. See also [318, 822, 2257].
824. Bernabé Dorronsoro Díaz. The VRP Web. Auren: Madrid, Spain and University of Málaga
(UMA), Complejo Tecnológico, Departamento de Lenguajes y Ciencias de la Computación:
Campus de Teatinos, Málaga, Spain, March 2007. Fully available at http://neo.lcc.
uma.es/radi-aeb/WebVRP/index.html [accessed 2011-10-12].
825. Bernabé Dorronsoro Díaz. Known Best Results. University of Málaga (UMA), Networking and
Emerging Optimization (NEO): Campus de Teatinos, Málaga, Spain, March 2007. Fully avail-
able at http://neo.lcc.uma.es/radi-aeb/WebVRP/results/BestResults.htm [ac-
cessed 2007-12-28].
826. Bernabé Dorronsoro Díaz, Patricia Ruiz, Grégoire Danoy, Pascal Bouvry, and Lorenzo J. Tar-
dón. Towards Connectivity Improvement in VANETs using Bypass Links. In CEC'09 [1350],
pages 2201–2208, 2009. doi: 10.1109/CEC.2009.4983214. INSPEC Accession Num-
ber: 10688809.
827. Walter Dosch and Narayan C. Debnath, editors. Proceedings of the ISCA 13th Interna-
tional Conference on Intelligent and Adaptive Systems and Software Engineering (IASSE'04),
July 1–3, 2004, Nice, France. isbn: 1-880843-51-X. Google Books ID: KigWPQAACAAJ and
TyyyAAAACAAJ.
828. Norman Richard Draper and Harry Smith. Applied Regression Analysis, Wiley Series in Prob-
ability and Mathematical Statistics – Applied Probability and Statistics Section Series. Wi-
ley Interscience: Chichester, West Sussex, UK, 1966. isbn: 0471029955. OCLC: 6486827.
GBV-Identification (PPN): 024357847. Reviewed in [879].
829. Nicole Drechsler, Rolf Drechsler, and Bernd Becker. Multi-objective Optimization in Evo-
lutionary Algorithms Using Satisfiability Classes. In International Conference on Computa-
tional Intelligence: Theory and Applications – 6th Fuzzy Days [2295], pages 108–117, 1999.
doi: 10.1007/3-540-48774-3_14. Fully available at http://citeseer.ist.psu.edu/old/
drechsler99multiobjective.html [accessed 2009-07-18].
830. Nicole Drechsler, Rolf Drechsler, and Bernd Becker. Multi-Objective Optimisation Based on
Relation Favour. In EMO01 [3099], pages 154166, 2001. doi: 10.1007/3-540-44719-9 11.
Fully available at http://www.lania.mx/

ccoello/EMOO/drechsler01.pdf.gz [ac-
cessed 2011-12-05]. CiteSeer
x
: 10.1.1.65.5318.
831. Dimiter Driankov, Peter W. Eklund, and Anca L. Ralescu, editors. Fuzzy Logic and Fuzzy Control: Proceedings of Workshops on Fuzzy Logic and Fuzzy Control (IJCAI'91 Fuzzy WS), August 24, 1991, Sydney, NSW, Australia, volume 833 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-58279-7. OCLC: 150398218, 303915934, 303915937, 303915940, 303915944, 422860424, 473465173, and 501692697. See also [1987, 1988, 2122].
832. Moshe Dror, editor. Arc Routing: Theory, Solutions and Applications. Springer-Verlag GmbH: Berlin, Germany, 2000. isbn: 0-7923-7898-9. Google Books ID: rPiHiE0ZZCkC.
833. Stefan Droste. Not All Linear Functions Are Equally Difficult for the Compact Genetic Algorithm. In GECCO'05 [304], pages 679–686, 2005. doi: 10.1145/1068009.1068124. See also [834].
834. Stefan Droste. A Rigorous Analysis of the Compact Genetic Algorithm for Linear Functions. Natural Computing: An International Journal, 5(3):257–283, September 2006, Kluwer Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands. doi: 10.1007/s11047-006-9001-0. See also [833].
835. Stefan Droste and Dirk Wiesmann. On Representation and Genetic Operators in Evolutionary Algorithms. Technical Report CI-41/98, Universität Dortmund, Collaborative Research Center (Sonderforschungsbereich) 531: Dortmund, North Rhine-Westphalia, Germany, June 1998. Fully available at http://hdl.handle.net/2003/5341 [accessed 2009-07-10]. CiteSeerX: 10.1.1.36.3354.
836. Stefan Droste, Thomas Jansen, and Ingo Wegener. On the Optimization of Unimodal Functions with the (1+1) Evolutionary Algorithm. In PPSN V [866], pages 13–22, 1998. doi: 10.1007/BFb0056845.
837. Stefan Droste, Thomas Jansen, and Ingo Wegener. Perhaps Not a Free Lunch But At Least a Free Appetizer. Reihe Computational Intelligence: Design and Management of Complex Technical Processes and Systems by Means of Computational Intelligence Methods CI-45/98, Universität Dortmund, Collaborative Research Center (Sonderforschungsbereich) 531: Dortmund, North Rhine-Westphalia, Germany, September 1998. Fully available at http://hdl.handle.net/2003/5339 [accessed 2009-03-01]. issn: 1433-3325. See also [838].
838. Stefan Droste, Thomas Jansen, and Ingo Wegener. Perhaps Not a Free Lunch But At Least a Free Appetizer. In GECCO'99 [211], pages 833–839, 1999. See also [837].
839. Stefan Droste, Thomas Jansen, and Ingo Wegener. Optimization with Randomized Search Heuristics – The (A)NFL Theorem, Realistic Scenarios, and Difficult Functions. Reihe Computational Intelligence: Design and Management of Complex Technical Processes and Systems by Means of Computational Intelligence Methods CI-91/00, Universität Dortmund, Collaborative Research Center (Sonderforschungsbereich) 531: Dortmund, North Rhine-Westphalia, Germany, August 2000. Fully available at http://hdl.handle.net/2003/5394 [accessed 2009-03-01]. issn: 1433-3325. See also [840].
840. Stefan Droste, Thomas Jansen, and Ingo Wegener. Optimization with Randomized Search Heuristics – The (A)NFL Theorem, Realistic Scenarios, and Difficult Functions. Theoretical Computer Science, 287(1):131–144, September 25, 2002, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/S0304-3975(02)00094-4. CiteSeerX: 10.1.1.35.5850. See also [839].
841. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'11), July 12–16, 2011, Dublin, Ireland.
842. Marc Dubreuil, Christian Gagné, and Marc Parizeau. Analysis of a Master-Slave Architecture for Distributed Evolutionary Computations. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics, 36(1):229–235, February 2006, IEEE Systems, Man, and Cybernetics Society: New York, NY, USA. doi: 10.1109/TSMCB.2005.856724. Fully available at http://vision.gel.ulaval.ca/~parizeau/publications/smc06.pdf [accessed 2008-04-06]. INSPEC Accession Number: 8736975.
843. Frederick Ducatelle, Martin Roth, and Luca Maria Gambardella. Design of a User Space Software Suite for Probabilistic Routing in Ad-Hoc Networks. In EvoWorkshops'07 [1050], pages 121–128, 2007. doi: 10.1007/978-3-540-71805-5_13.
844. Richard O. Duda, Peter Elliot Hart, and David G. Stork. Pattern Classification, Estimation, Simulation, and Control – Wiley-Interscience Series in Discrete Mathematics and Optimization. Wiley Interscience: Chichester, West Sussex, UK, 2nd edition, November 2000. isbn: 0-471-05669-3. Google Books ID: YoxQAAAAMAAJ, hyQgQAAACAAJ, and o3I8PgAACAAJ. OCLC: 41347061, 154744650, and 474918353. Library of Congress Control Number (LCCN): 99029981. GBV-Identification (PPN): 303957670.
845. John Duffy and Jim Engle-Warnick. Using Symbolic Regression to Infer Strategies from Experimental Data. In Fifth International Conference on Computing in Economics and Finance [262], page 150, 1999. Fully available at http://www.pitt.edu/~jduffy/papers/Usr.pdf [accessed 2010-12-02]. CiteSeerX: 10.1.1.34.4205. See also [846].
846. John Duffy and Jim Engle-Warnick. Using Symbolic Regression to Infer Strategies from Experimental Data. In Evolutionary Computation in Economics and Finance [549], Chapter 4, pages 61–84. Springer-Verlag GmbH: Berlin, Germany, 2002. Fully available at http://www.pitt.edu/~jduffy/docs/Usr.ps [accessed 2007-09-09]. See also [845].
847. Dumitru (Dan) Dumitrescu, Beatrice Lazzerini, Lakhmi C. Jain, and A. Dumitrescu. Evolutionary Computation, volume 18 in International Series on Computational Intelligence. CRC Press, Inc.: Boca Raton, FL, USA, June 2000. isbn: 0-8493-0588-8. Google Books ID: MSU9ep79JvUC. OCLC: 43894173, 247021679, and 468482770. Library of Congress Control Number (LCCN): 00030348. LC Classification: QA76.618 .E882 2000.
848. Bruce S. Duncan and Arthur J. Olson. Applications of Evolutionary Programming for the Prediction of Protein-Protein Interactions. In EP'96 [946], pages 411–417, 1996.
849. Margaret H. Dunham. Data Mining: Introductory and Advanced Topics. Prentice Hall International Inc.: Upper Saddle River, NJ, USA, August 2002. isbn: 0130888923.
850. G. Dzemyda, V. Šaltenis, and Antanas Žilinskas, editors. Stochastic and Global Optimization, Nonconvex Optimization and Its Applications. Springer Science+Business Media, Inc.: New York, NY, USA, March 1, 2002.
851. Sašo Džeroski and Peter A. Flach, editors. Proceedings of 9th International Workshop on Inductive Logic Programming (ILP-99), June 24–27, 1999, Bled, Slovenia, volume 1634/1999 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-48751-4. isbn: 3-540-66109-3. Google Books ID: ePRRHzBNvFAC.
E
852. Fernando Almeida e Costa, Luís Mateus Rocha, Ernesto Jorge Fernandes Costa, Inman Harvey, and António Coutinho, editors. Proceedings of the 9th European Conference on Advances in Artificial Life (ECAL'07), September 10–14, 2007, Lisbon, Portugal, volume 4648/2007 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-74913-4. isbn: 3-540-74912-8. Google Books ID: 8Hl-qxgNAjQC and gVLbGgAACAAJ. Library of Congress Control Number (LCCN): 2007934544.
853. Russell C. Eberhart and James Kennedy. A New Optimizer Using Particle Swarm Theory. In MHS'95 [1326], pages 39–43, 1995. doi: 10.1109/MHS.1995.494215. Fully available at http://webmining.spd.louisville.edu/Websites/COMB-OPT/FINAL-PAPERS/SwarmsPaper.pdf [accessed 2009-07-05]. INSPEC Accession Number: 5297172.
854. Russell C. Eberhart and Yuhui Shi. Comparison between Genetic Algorithms and Particle Swarm Optimization. In EP'98 [2208], pages 611–616, 1998. doi: 10.1007/BFb0040812.
855. Russell C. Eberhart and Yuhui Shi. A Modified Particle Swarm Optimizer. In CEC'98 [2496], pages 69–73, 1998. doi: 10.1109/ICEC.1998.699146. INSPEC Accession Number: 6021037.
856. Marc Ebner, Michael O'Neill, Anikó Ekárt, Leonardo Vanneschi, and Anna Isabel Esparcia-Alcázar, editors. Proceedings of the 10th European Conference on Genetic Programming (EuroGP'07), April 11–13, 2007, València, Spain, volume 4445/2007 in Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-71605-1. isbn: 3-540-71602-5. Library of Congress Control Number (LCCN): 2007923720.
857. Francis Ysidro Edgeworth. Mathematical Psychics: An Essay on the Application of Mathematics to the Moral Sciences. Kessinger Publishing: Whitefish, MT, USA and C. Kegan Paul & Co: London, UK, 1881. isbn: 0548216657 and 1163230006. Fully available at http://socserv.mcmaster.ca/econ/ugcm/3ll3/edgeworth/mathpsychics.pdf [accessed 2011-12-02]. asin: 0548216657.
858. Proceedings of the 9th International Conference on Artificial Immune Systems (ICARIS'10), July 26–29, 2010, Edinburgh, Scotland, UK.
859. Proceedings of the 2013 International Conference on Systems, Man, and Cybernetics (SMC'13), October 6–9, 2013, Edinburgh, Scotland, UK.
860. Matthias Ehrgott. Multicriteria Optimization, volume 491 in Lecture Notes in Economics and Mathematical Systems. Birkhäuser Verlag: Basel, Switzerland, 2nd edition, 2005. isbn: 3-540-21398-8. Google Books ID: yrZw9srrHroC.
861. Matthias Ehrgott, editor. Proceedings of the 19th International Conference on Multiple Criteria Decision Making: MCDM for Sustainable Energy and Transportation Systems (MCDM'08), June 7–12, 2008, Auckland, New Zealand. Partly available at https://secure.orsnz.org.nz/mcdm2008/ [accessed 2007-09-10].
862. Matthias Ehrgott, Carlos M. Fonseca, Xavier Gandibleux, Jin-Kao Hao, and Marc Sevaux, editors. Proceedings of the 5th International Conference on Evolutionary Multi-Criterion Optimization (EMO'09), April 7–10, 2009, Nantes, France, volume 5467/2009 in Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-642-01020-0. isbn: 3-642-01019-9.
863. Ágoston E. Eiben, editor. Evolutionary Computation, Theoretical Computer Science. IOS Press: Amsterdam, The Netherlands, 1999. isbn: 4-274-90269-2 and 90-5199-471-0. Google Books ID: 8LVAGQAACAAJ. This is the book edition of the journal Fundamenta Informaticae, Volume 35, Nos. 1–4, 1998.
864. Ágoston E. Eiben and C. A. Schippers. On Evolutionary Exploration and Exploitation. Fundamenta Informaticae – Annales Societatis Mathematicae Polonae, Series IV, 35(1-2):35–50, July–August 1998, IOS Press: Amsterdam, The Netherlands and European Association for Theoretical Computer Science (EATCS): Rio, Greece. Fully available at http://www.cs.vu.nl/~gusz/papers/FunInf98-Eiben-Schippers.ps [accessed 2009-07-09]. CiteSeerX: 10.1.1.29.4885.
865. Ágoston E. Eiben and James E. Smith. Introduction to Evolutionary Computing, Natural Computing Series. Springer New York: New York, NY, USA, 1st edition, November 2003. isbn: 3540401849. Google Books ID: 7IOE5VIpFpwC and RRKo9xVFW_QC.
866. Ágoston E. Eiben, Thomas Bäck, Marc Schoenauer, and Hans-Paul Schwefel, editors. Proceedings of the 5th International Conference on Parallel Problem Solving from Nature (PPSN V), September 27–30, 1998, Amsterdam, The Netherlands, volume 1498/1998 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/BFb0056843. isbn: 3-540-65078-4. Google Books ID: HLeU_1TkwTsC. OCLC: 39739752, 150419863, 243486760, and 246324853.
867. Manfred Eigen and Peter K. Schuster. The Hypercycle: A Principle of Natural Self-Organization. Springer-Verlag GmbH: Berlin, Germany, June 1979. isbn: 0387092935. Google Books ID: P54PPAAACAAJ and UNJcAAAACAAJ. OCLC: 4665354.
868. Horst A. Eiselt, Michel Gendreau, and Gilbert Laporte. Arc Routing Problems, Part I: The Chinese Postman Problem. Operations Research, 43(2):231–242, March–April 1995, Institute for Operations Research and the Management Sciences (INFORMS): Linthicum, MD, USA and HighWire Press (Stanford University): Cambridge, MA, USA. JSTOR Stable ID: 171832.
869. Anikó Ekárt and Sándor Zoltán Németh. Selection Based on the Pareto Nondomination Criterion for Controlling Code Growth in Genetic Programming. Genetic Programming and Evolvable Machines, 2(1):61–73, March 2001, Springer Netherlands: Dordrecht, Netherlands. Imprint: Kluwer Academic Publishers: Norwell, MA, USA. doi: 10.1023/A:1010070616149.
870. Sven E. Eklund. A Massively Parallel GP Engine in VLSI. In CEC'02 [944], pages 629–633, 2002. doi: 10.1109/CEC.2002.1006999. INSPEC Accession Number: 7321863.
871. El-Sayed M. El-Alfy. Discovering Classification Rules for Email Spam Filtering with an Ant Colony Optimization Algorithm. In CEC'09 [1350], pages 1778–1783, 2009. doi: 10.1109/CEC.2009.4983156. INSPEC Accession Number: 10688751.
872. Khaled El-Fakihy, Hirozumi Yamaguchi, and Gregor von Bochmann. A Method and a Genetic Algorithm for Deriving Protocols for Distributed Applications with Minimum Communication Cost. In PDCS'99 [1410], pages 863–868, 1999. Fully available at http://www-higashi.ist.osaka-u.ac.jp/~h-yamagu/resource/pdcs99.pdf [accessed 2009-08-01]. CiteSeerX: 10.1.1.27.4746. See also [3005].
873. Proceedings of the 3rd International Conference on Metaheuristics and Nature Inspired Computing (META'10), October 27–31, 2010, El Mouradi Djerba Menzel: Djerba Island, Tunisia.
874. E.W. Elcock and Donald Michie, editors. Proceedings of the Eighth Machine Intelligence Workshop (Machine Intelligence 8), 1977, NATO Advanced Study Institute: Santa Cruz, CA, USA. Ellis Horwood: Chichester, West Sussex, UK and John Wiley & Sons Ltd.: New York, NY, USA.
875. Niles Eldredge and Stephen Jay Gould. Punctuated Equilibria: An Alternative to Phyletic Gradualism. In Models in Paleobiology [2427], Chapter 5, pages 82–115. Freeman, Cooper & Co: San Francisco, CA, USA and DoubleDay: New York, NY, USA, 1972. Fully available at http://www.blackwellpublishing.com/ridley/classictexts/eldredge.pdf [accessed 2008-07-01].
876. Niles Eldredge and Stephen Jay Gould. Punctuated Equilibria: The Tempo and Mode of Evolution Reconsidered. Paleobiology, 3(2):115–151, April 1, 1977, Paleontological Society: Lawrence, KS, USA and Allen Press Online Publishing: Lawrence, KS, USA. Fully available at http://www.nileseldredge.com/pdf_files/Punctuated_Equilibria_Gould_Eldredge_1977.pdf [accessed 2008-11-09].
877. Proceedings of the 4th European Congress on Fuzzy and Intelligent Technologies (EUFIT'96), September 2–5, 1996, Aachen, North Rhine-Westphalia, Germany. ELITE Foundation: Aachen, North Rhine-Westphalia, Germany. isbn: 3896531875.
878. Khaled Elleithy, editor. Advances and Innovations in Systems, Computing Sciences and Software Engineering – Volume 1 of Proceedings of the International Conference on Systems, Computing Sciences and Software Engineering (SCSS'06). Springer-Verlag GmbH: Berlin, Germany, December 5–14, 2006, University of Bridgeport: Bridgeport, CT, USA.
879. D. M. Ellis. Book Reviews: Applied Regression Analysis by N. P. Draper and H. Smith. Journal of the Royal Statistical Society: Series C – Applied Statistics, 17(1):83–84, 1968, Blackwell Publishing for the Royal Statistical Society: Chichester, West Sussex, UK. See also [828].
880. Michael T.M. Emmerich, Nicola Beume, and Boris Naujoks. An EMO Algorithm using the Hypervolume Measure as Selection Criterion. In EMO'05 [600], pages 62–76, 2005. doi: 10.1007/b106458. CiteSeerX: 10.1.1.60.2524.
881. Andries P. Engelbrecht. Fundamentals of Computational Swarm Intelligence. John Wiley & Sons Ltd.: New York, NY, USA, 2005. isbn: 0470091916. Google Books ID: UAg_AAAACAAJ.
882. Fatma Corut Ergin, Ayşegül Yayımlı, and A. Şima Uyar. An Evolutionary Algorithm for Survivable Virtual Topology Mapping in Optical WDM Networks. In EvoWorkshops'09 [1052], pages 31–40, 2009. doi: 10.1007/978-3-642-01129-0_4.
883. Larry J. Eshelman, editor. Proceedings of the Sixth International Conference on Genetic Algorithms (ICGA'95), July 15–19, 1995, University of Pittsburgh: Pittsburgh, PA, USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-370-0. Google Books ID: 5KMsOwAACAAJ and 9xRRAAAAMAAJ.
884. Larry J. Eshelman and J. David Schaffer. Preventing Premature Convergence in Genetic Algorithms by Preventing Incest. In ICGA'91 [254], pages 115–122, 1991.
885. Larry J. Eshelman, Richard A. Caruana, and J. David Schaffer. Biases in the Crossover Landscape. In ICGA'89 [2414], pages 10–19, 1989.
886. Anna Isabel Esparcia-Alcázar, Anikó Ekárt, Sara Silva, Stephen Dignum, and A. Şima Etaner-Uyar, editors. Proceedings of the 13th European Conference on Genetic Programming (EuroGP'10), April 7–10, 2010, Istanbul, Turkey, volume 6021 in Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-642-12148-7. isbn: 3-642-12147-0. Google Books ID: 7mHr-QFI-gsC. Library of Congress Control Number (LCCN): 2010922339.
887. Nicolás S. Estévez and Hod Lipson. Dynamical Blueprints: Exploiting Levels of System-Environment Interaction. In GECCO'07-I [2699], pages 238–244, 2007. doi: 10.1145/1276958.1277009. Fully available at http://ccsl.mae.cornell.edu/papers/gecco07_estevez.pdf [accessed 2010-08-05]. CiteSeerX: 10.1.1.130.2238.
888. European Symposium on Intelligent Techniques (ESIT'99), June 3–4, 1999, Orthodox Academy of Crete (OAC): Kolymvari, Kissamos, Chania, Crete, Greece. European Network for Fuzzy Logic and Uncertainty Modeling in Information Technology (ERUDIT). Fully available at http://www.erudit.de/erudit/events/esit99/programme.htm [accessed 2009-07-20].
F
889. Xiannian Fan, Ke Tang, and Thomas Weise. Margin-Based Over-Sampling Method for Learning From Imbalanced Datasets. In PAKDD'11 [1294], 2011.
890. Günter Fandel and Thomas Gál, editors. Proceedings of the 3rd International Conference on Multiple Criteria Decision Making: Theory and Application (MCDM'79), August 20–22, 1979, Hagen/Königswinter, West Germany, volume 177 in Lecture Notes in Economics and Mathematical Systems. Springer-Verlag: Berlin/Heidelberg. isbn: 0387099638 and 3-540-09963-8. OCLC: 6143366, 263581036, 310833050, 421850617, 470793575, and 636202402. Published in May 1980.
891. Günter Fandel, Thomas Gál, and Thomas Hanne, editors. Proceedings of the 12th International Conference on Multiple Criteria Decision Making (MCDM'95), June 19–23, 1995, Hagen, North Rhine-Westphalia, Germany, volume 448 in Lecture Notes in Economics and Mathematical Systems. Springer-Verlag: Berlin/Heidelberg. isbn: 3-540-62097-4. OCLC: 36017289, 174158645, 246647660, 312770616, 476484227, 490887148, and 639833174. GBV-Identification (PPN): 222832282, 257873279, and 271804556. Published on March 21, 1997.
892. Usenet FAQs: comp.ai.neural-nets FAQ. FAQS.ORG Internet FAQ Archives: USA, July 12, 2009. Fully available at http://www.faqs.org/faqs/ai-faq/neural-nets/ [accessed 2009-07-12].
893. Monique P. Fargues and Ralph D. Hippenstiel, editors. Conference Record of the Thirty-First Asilomar Conference on Signals, Systems & Computers, November 2–5, 1997, Naval Postgraduate School (U.S.): Pacific Grove, CA, USA. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-8186-8316-3, 0-8186-8317-1, and 0-8186-8318-X. Google Books ID: LRwNPwAACAAJ and SyjnAAAACAAJ. OCLC: 39219056. issn: 1058-6393. Order No.: 97CB36163.
894. Marco Farina and Paolo Amato. On the Optimal Solution Definition for Many-Criteria Optimization Problems. In NAFIPS'00 [1522], pages 233–238, 2002. doi: 10.1109/NAFIPS.2002.1018061. Fully available at http://www.lania.mx/~ccoello/farina02b.pdf.gz [accessed 2009-07-17]. INSPEC Accession Number: 7388102.
895. Tom Fawcett and Nina Mishra, editors. Proceedings of the 20th International Conference on Machine Learning (ICML'03), August 21–24, 2003, Washington, DC, USA. AAAI Press: Menlo Park, CA, USA. isbn: 1-57735-189-4.
896. Xiang Fei, Junzhou Luo, Jieyi Wu, and Guanqun Gu. QoS Routing based on Genetic Algorithms. Computer Communications – The International Journal for the Computer and Telecommunications Industry, 22(15-16):1392–1399, October 25, 1999, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/S0140-3664(99)00113-9. Fully available at http://neo.lcc.uma.es/opticomm/main.html [accessed 2008-08-01].
897. Edward A. Feigenbaum and Julian Feldman, editors. Computers and Thought, AAAI Press Series / AAAI Press Copublications. McGraw-Hill: New York, NY, USA, 1963–1995. isbn: 0-262-56092-5. Google Books ID: YnlQAAAAMAAJ. OCLC: 33332474 and 246968117. Library of Congress Control Number (LCCN): 95215375. GBV-Identification (PPN): 189350644, 197823203, and 279066740. LC Classification: Q335.5 .C66 1995.
898. Stuart I. Feldman, Mike Uretsky, Marc Najork, and Craig E. Wills, editors. Proceedings of the 13th International Conference on World Wide Web (WWW'04), May 17–20, 2004, Manhattan, New York, NY, USA. ACM Press: New York, NY, USA. isbn: 1-58113-844-X and 1604230304. Google Books ID: Gr8dAAAACAAJ, sYW_PAAACAAJ, and syExNAAACAAJ. OCLC: 71006498, 223922047, and 327018361.
899. William Feller. An Introduction to Probability Theory and Its Applications, Volume 1, Wiley Series in Probability and Mathematical Statistics – Applied Probability and Statistics Section Series. Wiley Interscience: Chichester, West Sussex, UK, 3rd edition, 1968. isbn: 0471257087. Google Books ID: E9WLSAAACAAJ and TkfeSAAACAAJ.
900. Dieter Fensel, Fausto Giunchiglia, Deborah L. McGuinness, and Mary-Anne Williams, editors. Proceedings of the Eighth International Conference on Knowledge Representation and Reasoning (KR'02), April 22–25, 2002, Toulouse, France. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA.
901. Thomas A. Feo and Jonathan F. Bard. Flight Scheduling and Maintenance Base Planning. Management Science, 35(12):1415–1432, December 1989, Institute for Operations Research and the Management Sciences (INFORMS): Linthicum, MD, USA and HighWire Press (Stanford University): Cambridge, MA, USA. doi: 10.1287/mnsc.35.12.1415. JSTOR Stable ID: 2632228.
902. Thomas A. Feo and Mauricio G.C. Resende. Greedy Randomized Adaptive Search Procedures. Journal of Global Optimization, 6(2):109–133, March 1995, Springer Netherlands: Dordrecht, Netherlands. doi: 10.1007/BF01096763. Fully available at http://www.research.att.com/~mgcr/doc/gtut.ps.Z [accessed 2008-10-20]. CiteSeerX: 10.1.1.48.8667.
903. Vitaliy Feoktistov. Differential Evolution – In Search of Solutions, volume 5 in Springer Optimization and Its Applications. Springer New York: New York, NY, USA, December 2006. isbn: 0-387-36895-7 and 0-387-36896-5. Google Books ID: DUL7AAAACAAJ and kG7aP_v-SU4C. OCLC: 77077713, 318292315, and 494112725. Library of Congress Control Number (LCCN): 2006929851. GBV-Identification (PPN): 516152432 and 546651992. LC Classification: QA402.5 .F42 2006.
904. Alberto Fernández, Salvador García, Julián Luengo, Ester Bernadó-Mansilla, and Francisco Herrera Triguero. Genetics-Based Machine Learning for Rule Induction: State of the Art, Taxonomy, and Comparative Study. IEEE Transactions on Evolutionary Computation (IEEE-EC), June 21, 2010, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/TEVC.2009.2039140.
905. Francisco Fernández de Vega and Erick Cantú-Paz, editors. Second Workshop on Parallel Bioinspired Algorithms (WPBA'07), July 7, 2007, University College London (UCL): London, UK. ACM Press: New York, NY, USA. Part of [2700].
906. Paola Festa and Mauricio G.C. Resende. An Annotated Bibliography of GRASP. AT&T Labs Research Technical Report TD-5WYSEW, AT&T Labs: Florham Park, NJ, USA, February 29, 2004. Fully available at http://www.research.att.com/~mgcr/grasp/gannbib/gannbib.html [accessed 2008-10-20]. See also [907].
907. Paola Festa and Mauricio G.C. Resende. An Annotated Bibliography of GRASP – Part II: Applications. International Transactions in Operational Research, 16(2):131–172, March 2009, Blackwell Publishing Ltd: Chichester, West Sussex, UK. doi: 10.1111/j.1475-3995.2009.00664.x. See also [906].
908. Anthony V. Fiacco and Garth P. McCormick. Nonlinear Programming: Sequential Unconstrained Minimization Techniques. John Wiley & Sons Ltd.: New York, NY, USA and Defense Technical Information Center (DTIC): Fort Belvoir, VA, USA, January 1969. isbn: 0471258105. Google Books ID: Hjp7OwAACAAJ and TYONOgAACAAJ. See also [909].
909. Anthony V. Fiacco and Garth P. McCormick. Nonlinear Programming: Sequential Unconstrained Minimization Techniques, volume 4 in Classics in Applied Mathematics Series. Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA, new edition, April 1990. isbn: 0898712548. Google Books ID: 8NhQAAAAMAAJ, sjD_RJfxvr0C, and sjD_RJfxvr0C&dq. See also [908].
910. Volker Fickert, Winfried Kalfa, and Thomas Weise. Framework for Distributed Simulation and Measuring of Complex Relations of Operating System Components. In EDMEDIA'05 [1568], pages 3865–3870, 2005. Fully available at http://www.it-weise.de/documents/files/FKW2005OSF.pdf [accessed 2010-12-07]. See also [2900].
911. M.V. Fidelis, Heitor Silvério Lopes, and Alex Alves Freitas. Discovering Comprehensible Classification Rules with a Genetic Algorithm. In CEC'00 [1333], pages 805–810, volume 1, 2000. doi: 10.1109/CEC.2000.870381. CiteSeerX: 10.1.1.35.1828.
912. Jonathan E. Fieldsend, Richard M. Everson, and Sameer Singh. Using Unconstrained Elite Archives for Multi-Objective Optimisation. IEEE Transactions on Evolutionary Computation (IEEE-EC), 7(3):305–323, June 2003, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/TEVC.2003.810733. Fully available at http://empslocal.ex.ac.uk/people/staff/jefields/JF_06.pdf, http://eref.uqu.edu.sa/files/using_unconstrained_elite_archives_for_m.pdf, and http://www.lania.mx/~ccoello/EMOO/fieldsend03.pdf.gz [accessed 2010-12-16]. CiteSeerX: 10.1.1.14.6272 and 10.1.1.19.4563. INSPEC Accession Number: 7666738.
913. Richard Fikes and Wendy Lehnert, editors. Proceedings of the 11th National Conference on Artificial Intelligence (AAAI'93), July 11–15, 1993, Washington, DC, USA. AAAI Press: Menlo Park, CA, USA and MIT Press: Cambridge, MA, USA. isbn: 0-262-51071-5. Partly available at http://www.aaai.org/Conferences/AAAI/aaai93.php [accessed 2007-09-06].
914. Joaquim Filipe and Janusz Kacprzyk, editors. Proceedings of the 2nd International Conference on Computational Intelligence (IJCCI'10), October 24–26, 2010, Hotel Sidi Saler: València, Spain.
915. Joaquim Filipe, José Cordeiro, and Jorge Cardoso, editors. Revised Selected Papers of the 9th International Conference on Enterprise Information Systems (ICEIS'07), June 12–16, 2007, Funchal, Madeira, Portugal, volume 12/2009 in Lecture Notes in Business Information Processing. Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-88710-2. See also [494].
916. Joaquim Filipe, Ana L.N. Fred, and Bernadette Sharp, editors. Proceedings of the International Conference on Agents and Artificial Intelligence (ICAART'09), January 19–21, 2009, Porto, Portugal. Institute for Systems and Technologies of Information, Control and Communication (INSTICC) Press: Setúbal, Portugal.
917. Joaquim Filipe, Ana L.N. Fred, and Bernadette Sharp, editors. Proceedings of the 2nd International Conference on Agents and Artificial Intelligence (ICAART'10), January 22–24, 2010, València, Spain. Institute for Systems and Technologies of Information, Control and Communication (INSTICC) Press: Setúbal, Portugal.
918. Bogdan Filipič and Jurij Šilc, editors. Proceedings of the International Conference on Bioinspired Optimization Methods and their Applications (BIOMA'04), October 11–12, 2004, Jožef Stefan International Postgraduate School: Ljubljana, Slovenia. Jožef Stefan Institute: Ljubljana, Slovenia. Part of the Information Society Multiconference (IS 2004).
919. Bogdan Filipič and Jurij Šilc, editors. Proceedings of the Second International Conference on Bioinspired Optimization Methods and their Applications (BIOMA'06), October 9–10, 2006, Jožef Stefan International Postgraduate School: Ljubljana, Slovenia, Informacijska Družba (Information Society). Jožef Stefan Institute: Ljubljana, Slovenia. isbn: 961-6303-81-3. Fully available at http://is.ijs.si/is/is2006/Proceedings/C/BIOMA06_Complete.pdf [accessed 2008-11-05]. Partly available at http://bioma.ijs.si/conference/2006/index.html [accessed 2008-11-05]. Google Books ID: 0JLsAAAACAAJ and 0ZLsAAAACAAJ. Part of the Information Society Multiconference (IS 2006).
920. Bogdan Filipič and Jurij Šilc, editors. Proceedings of the Third International Conference on Bioinspired Optimization Methods and their Applications (BIOMA'08), October 13–14, 2008, Jožef Stefan International Postgraduate School: Ljubljana, Slovenia. Jožef Stefan Institute: Ljubljana, Slovenia. Fully available at http://is.ijs.si/is/is2008/zborniki/D%20-%20BIOMA.pdf [accessed 2008-11-05]. Part of the Information Society Multiconference (IS 2008).
921. Bogdan Filipič and Jurij Šilc, editors. Proceedings of the 4th International Conference on Bioinspired Optimization Methods and their Applications (BIOMA'10), May 20–21, 2010, Jožef Stefan Institute: Ljubljana, Slovenia. Jožef Stefan Institute: Ljubljana, Slovenia. Partly available at http://bioma.ijs.si/conference/ProcBIOMA2010Intro.pdf [accessed 2010-06-22].
922. Steven R. Finch. Mathematical Constants, volume 94. Cambridge University Press: Cambridge, UK, August 18, 2003. isbn: 0521818052. Partly available at http://algo.inria.fr/bsolve/ [accessed 2009-06-13]. Google Books ID: Pl5I2ZSI6uAC. LC Classification: QA41 .F54 2003. See also [923].
923. Steven R. Finch. Transitive Relations, Topologies and Partial Orders. Cambridge University Press: Cambridge, UK, June 5, 2003. Fully available at http://algo.inria.fr/csolve/posets.pdf [accessed 2009-06-13]. CiteSeerX: 10.1.1.5.402. See also [922].
924. Jenny Rose Finkel. Using Genetic Programming To Evolve an Algorithm For Factoring Numbers. In Genetic Algorithms and Genetic Programming at Stanford [1601], pages 52–60. Stanford University Bookstore, Stanford University: Stanford, CA, USA, 2003. Fully available at http://www.genetic-programming.org/sp2003/Finkel.pdf [accessed 2010-11-19]. CiteSeerX: 10.1.1.140.9317.
925. Hans Fischer. Die verschiedenen Formen und Funktionen des zentralen Grenzwertsatzes in der Entwicklung von der klassischen zur modernen Wahrscheinlichkeitsrechnung, Berichte aus der Mathematik. Shaker Verlag GmbH: Aachen, North Rhine-Westphalia, Germany, August 2000. isbn: 3-8265-7767-1. Partly available at http://www.ku-eichstaett.de/Fakultaeten/MGF/Mathematik/Didmath/Didmath.Fischer/HF_sections/content/ [accessed 2008-08-19]. OCLC: 247452236. The Different Forms and Applications of the Central Limit Theorem in its Development from Classical to Modern Probability Theory.
926. Douglas H. Fisher, editor. Proceedings of the 14th International Conference on Machine Learning (ICML'97), July 8–12, 1997, Nashville, TN, USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-486-3.
927. Michael Fisher and Richard Owens, editors. Executable Modal and Temporal Logics, Workshop Proceedings (IJCAI'93-WS), August 28, 1993, Chambéry, France, volume 897 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 0-387-58976-7 and 3-540-58976-7. OCLC: 150398117, 246324352, 312223057, 473142654, and 501656926. GBV-Identification (PPN): 180339729, 272032352, and 595124461. See also [182, 183, 2259].
928. Sir Ronald Aylmer Fisher. The Correlations between Relatives on the Supposition of Mendelian Inheritance. Philosophical Transactions of the Royal Society of Edinburgh, 52:399–433, October 1, 1918, Royal Society of Edinburgh: Edinburgh, Scotland, UK. Fully available at http://www.library.adelaide.edu.au/digitised/fisher/9.pdf [accessed 2009-07-11].
929. Sir Ronald Aylmer Fisher. Applications of Student's distribution. Metron (Roma), 5(3):90–104, 1925. Fully available at http://digital.library.adelaide.edu.au/coll/special/fisher/43.pdf [accessed 2007-09-30]. See also [268].
930. Sir Ronald Aylmer Fisher. The Use of Multiple Measurements in Taxonomic Problems. Annals of Eugenics, 7(2):179–188, 1936, University of London Galton Laboratory for National Eugenic: Harpenden, Herts., England, UK. Fully available at http://digital.library.adelaide.edu.au/coll/special/fisher/138.pdf [accessed 2008-08-16]. See also [92, 268].
931. Sir Ronald Aylmer Fisher. Statistical Methods and Scientific Inference. Hafner Press: New York, NY, USA, Macmillan Publishers Co.: New York, NY, USA, and Oliver and Boyd: London, UK, revised edition, 1956–July 1973. isbn: 0028447409. Google Books ID: 0WoGAQAAIAAJ, IHdqAAAAMAAJ, QmsGAQAAIAAJ, oyXPAAAAMAAJ, and p7RvGQAACAAJ. OCLC: 785822.
932. J. Michael Fitzpatrick and John J. Grefenstette. Genetic Algorithms in Noisy Environments. Machine Learning, 3(2–3):101–120, October 1988, Kluwer Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands. doi: 10.1007/BF00113893. Fully available at http://www.springerlink.com/content/n6w328441t63q374/fulltext.pdf [accessed 2008-07-19].
933. Peter J. Fleming, Robin Charles Purshouse, and Robert J. Lygoe. Many-Objective Optimization: An Engineering Design Perspective. In EMO'05 [600], pages 14–32, 2005.
934. Dario Floreano, Jean-Daniel Nicoud, and Francesco Mondada, editors. Proceedings of the 5th European Conference on Advances in Artificial Life (ECAL'99), September 13–17, 1999, Lausanne, Switzerland, volume 1674/1999 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-48304-7. isbn: 3-540-66452-1. Google Books ID: 2dWRlVcCBfIC and sOwRVCPs5McC.
935. Henrik Flyvbjerg and Benny Lautrup. Evolution in a Rugged Fitness Landscape. Physical Review A, 46(10), November 15, 1992, American Physical Society: College Park, MD, USA. doi: 10.1103/PhysRevA.46.6714. CiteSeerx: 10.1.1.55.4769.
936. Terence Claus Fogarty, editor. Proceedings of the Workshop on Artificial Intelligence and Simulation of Behaviour, International Workshop on Evolutionary Computing, Selected Papers (AISB'94), April 11–13, 1994, Leeds, UK, volume 865/1994 in Lecture Notes in Computer Science (LNCS). Society for the Study of Artificial Intelligence and the Simulation of Behaviour (SSAISB): Chichester, West Sussex, UK, Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-58483-8. isbn: 0-387-58483-8 and 3-540-58483-8. GBV-Identification (PPN): 164010106, 272035424, and 59512478X.
937. Terence Claus Fogarty, editor. Proceedings of the Workshop on Artificial Intelligence and Simulation of Behaviour, International Workshop on Evolutionary Computing, Selected Papers (AISB'95), April 3–4, 1995, Sheffield, UK, volume 993/1995 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-60469-3. isbn: 3-540-60469-3.
938. Terence Claus Fogarty, editor. Proceedings of the Workshop on Artificial Intelligence and Simulation of Behaviour, International Workshop on Evolutionary Computing, Selected Papers (AISB'96), April 1–2, 1996, Brighton, UK, volume 1143/1996 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/BFb0032768. isbn: 3-540-61749-3.
939. David B. Fogel, editor. Evolutionary Computation: The Fossil Record. IEEE Computer
Society Press: Los Alamitos, CA, USA and Wiley Interscience: Chichester, West Sussex, UK,
May 1998. isbn: 0-7803-3481-7. Google Books ID: ahYLAAAACAAJ. OCLC: 38270557.
Order No.: PC5737.
940. David B. Fogel. Blondie24: Playing at the Edge of AI, The Morgan Kaufmann Series in
Articial Intelligence. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, Septem-
ber 2001. isbn: 1558607838. Google Books ID: 5vpuw8L0 CAC and t5RQAAAAMAAJ.
941. David B. Fogel. In Memoriam Alex S. Fraser. IEEE Transactions on Evolutionary Computation (IEEE-EC), 6(5):429–430, October 2002, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/TEVC.2002.805212.
942. David B. Fogel and Wirt Atmar, editors. Proceedings of the 1st Annual Conference on Evolutionary Programming (EP'92), February 21–22, 1992, La Jolla Marriott Hotel: La Jolla, CA, USA. Evolutionary Programming Society: San Diego, CA, USA. Google Books ID: UA2HAAACAAJ. OCLC: 84992066.
943. David B. Fogel and Wirt Atmar, editors. Proceedings of the 2nd Annual Conference on Evolutionary Programming (EP'93), February 25–26, 1993, Radisson Hotel: La Jolla, CA, USA. Evolutionary Programming Society: San Diego, CA, USA. OCLC: 28466830.
944. David B. Fogel, Mohamed A. El-Sharkawi, Xin Yao, Hitoshi Iba, Paul Marrow, and Mark Shackleton, editors. Proceedings of the IEEE Congress on Evolutionary Computation (CEC'02), 2002, Hilton Hawaiian Village Hotel (Beach Resort & Spa): Honolulu, HI, USA, volume 1-2. IEEE Computer Society: Piscataway, NJ, USA, IEEE Computer Society Press: Los Alamitos, CA, USA. isbn: 0-7803-7282-4. Google Books ID: -ptVAAAAMAAJ and Irq5AQAACAAJ. OCLC: 51476813, 59461927, 64431055, and 181357364. INSPEC Accession Number: 7321757. CEC 2002 – A joint meeting of the IEEE, the Evolutionary Programming Society, and the IEE. Held in connection with the World Congress on Computational Intelligence (WCCI 2002). Part of [1338].
945. Lawrence Jerome Fogel, Alvin J. Owens, and Michael J. Walsh. Artificial Intelligence through Simulated Evolution. John Wiley & Sons Ltd.: New York, NY, USA, 1966. isbn: 0471265160. Google Books ID: 75RQAAAAMAAJ, CwaoPAAACAAJ, EerbAAAACAAJ, QMLaAAAAMAAJ, and rJNwOwAACAAJ. OCLC: 1208284, 223078340, and 561242276. GBV-Identification (PPN): 190979224. asin: B0000CNARU.
946. Lawrence Jerome Fogel, Peter John Angeline, and Thomas Bäck, editors. Evolutionary Programming V: Proceedings of the Fifth Annual Conference on Evolutionary Programming (EP'96), February 29 – March 3, 1996, San Diego, CA, USA, Complex Adaptive Systems, Bradford Books. MIT Press: Cambridge, MA, USA. isbn: 0-262-06190-2. Google Books ID: d hQAAAAMAAJ.
947. Gianluigi Folino, Clara Pizzuti, and Giandomenico Spezzano. GP Ensemble for Distributed Intrusion Detection Systems. In ICAPR'05 [2506], pages 54–62, 2005. doi: 10.1007/11551188_6. Fully available at http://dns2.icar.cnr.it/pizzuti/icapr05.pdf [accessed 2008-06-23].
948. Cyril Fonlupt, Jin-Kao Hao, Evelyne Lutton, Edmund M. A. Ronald, and Marc Schoenauer, editors. Selected Papers of the 4th European Conference on Artificial Evolution (AE'99), November 3–5, 1999, Dunkerque, France, volume 1829 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-67846-8. Published in 2000.
949. Carlos M. Fonseca. Decision Making in Evolutionary Optimization (Abstract of Invited Talk). In EMO'07 [2061], 2007.
950. Carlos M. Fonseca. Preference Articulation in Evolutionary Multiobjective Optimisation – Plenary Talk. In HIS'07 [1572], 2007.
951. Carlos M. Fonseca and Peter J. Fleming. Genetic Algorithms for Multiobjective Optimization: Formulation, Discussion and Generalization. In ICGA'93 [969], pages 416–423, 1993. Fully available at http://www.lania.mx/~ccoello/EMOO/fonseca93.ps.gz [accessed 2009-06-23]. CiteSeerx: 10.1.1.48.9077.
952. Carlos M. Fonseca and Peter J. Fleming. An Overview of Evolutionary Algorithms in Multiobjective Optimization. Evolutionary Computation, 3(1):1–16, Spring 1995, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1995.3.1.1. CiteSeerx: 10.1.1.50.7779.
953. Carlos M. Fonseca and Peter J. Fleming. Multiobjective Optimization and Multiple Constraint Handling with Evolutionary Algorithms – Part II: Application example. Technical Report 565, University of Sheffield, Department of Automatic Control and Systems Engineering: Sheffield, UK, January 23, 1995. CiteSeerx: 10.1.1.55.5395. See also [955].
954. Carlos M. Fonseca and Peter J. Fleming. Multiobjective Optimization and Multiple Constraint Handling with Evolutionary Algorithms – Part I: A Unified Formulation. IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans, 28(1):26–37, January 1998, IEEE Systems, Man, and Cybernetics Society: New York, NY, USA. doi: 10.1109/3468.650319. CiteSeerx: 10.1.1.52.2473. See also [955].
955. Carlos M. Fonseca and Peter J. Fleming. Multiobjective Optimization and Multiple Constraint Handling with Evolutionary Algorithms – Part II: Application example. IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans, 28(1):38–47, January 1998, IEEE Systems, Man, and Cybernetics Society: New York, NY, USA. doi: 10.1109/3468.650320. See also [953, 954].
956. Carlos M. Fonseca and Peter J. Fleming. Multiobjective Optimization and Multiple
Constraint Handling with Evolutionary Algorithms. In Practical Approaches to Multi-
Objective Optimization [399], 2004. Fully available at http://drops.dagstuhl.de/
opus/volltexte/2005/237/ [accessed 2007-09-19].
957. Carlos M. Fonseca, Peter J. Fleming, Eckart Zitzler, Kalyanmoy Deb, and Lothar Thiele, editors. Proceedings of the Second International Conference on Evolutionary Multi-Criterion Optimization (EMO'03), April 8–11, 2003, University of the Algarve: Faro, Portugal, volume 2632/2003 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-36970-8. isbn: 3-540-01869-7. Google Books ID: 4H9aKVMsFrcC. OCLC: 51942678, 51975842, 150396006, 166467742, 243489698, and 248786908.
958. Walter Fontana. Algorithmic Chemistry. In Artificial Life II [1668], pages 159–210, 1990. Fully available at http://fontana.med.harvard.edu/www/Documents/WF/Papers/alchemy.pdf [accessed 2009-08-02].
959. Walter Fontana, Peter F. Stadler, Erich G. Bornberg-Bauer, Thomas Griesmacher, Ivo L. Hofacker, Manfred Tacker, Pedro Tarazona, Edward D. Weinberger, and Peter K. Schuster. RNA Folding and Combinatory Landscapes. Physical Review E, 47(3):2083–2099, March 1993, American Physical Society: College Park, MD, USA. doi: 10.1103/PhysRevE.47.2083. Fully available at http://fontana.med.harvard.edu/www/Documents/WF/Papers/combinatory.landscapes.pdf and http://www.santafe.edu/media/workingpapers/92-10-051.pdf [accessed 2010-12-03]. CiteSeerx: 10.1.1.47.6724.
960. Kenneth D. Forbus and Howard Shrobe, editors. Proceedings of the 6th National Conference on Artificial Intelligence (AAAI'87), July 13–17, 1987, Seattle, WA, USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. Partly available at http://www.aaai.org/Conferences/AAAI/aaai87.php [accessed 2007-09-06].
961. George Forman. A Pitfall and Solution in Multi-Class Feature Selection for Text Classification. In ICML'04 [426], pages 38–46, 2004. doi: 10.1145/1015330.1015356. CiteSeerx: 10.1.1.64.9405.
962. Agostino Forestiero, Carlo Mastroianni, and Giandomenico Spezzano. Building a Peer-to-peer Information System in Grids via Self-Organizing Agents. Journal of Grid Computing, 6(2):125–140, June 2008, Springer Netherlands: Dordrecht, Netherlands. doi: 10.1007/s10723-007-9062-z. Fully available at http://grid.deis.unical.it/papers/pdf/JGC2008-ForestieroMastroianniSpezzano.pdf [accessed 2008-09-05]. See also [963].
963. Agostino Forestiero, Carlo Mastroianni, and Giandomenico Spezzano. Construction of a Peer-to-Peer Information System in Grids. In SOAS2005 [666], pages 220–236, 2005. Fully available at http://grid.deis.unical.it/papers/pdf/SOAS05-ForestieroMastroianniSpezzano.pdf [accessed 2008-09-05]. See also [962, 964–967].
964. Agostino Forestiero, Carlo Mastroianni, and Giandomenico Spezzano. Antares: An Ant-Inspired P2P Information System for a Self-Structured Grid. In BIONETICS'07 [1342], pages 151–158, 2007. doi: 10.1109/BIMNICS.2007.4610103. Fully available at http://grid.deis.unical.it/papers/pdf/ForestieroMastroianniSpezzano-Antares.pdf [accessed 2008-06-13]. See also [963].
965. Agostino Forestiero, Carlo Mastroianni, and Giandomenico Spezzano. So-Grid: A Self-Organizing Grid Featuring Bio-inspired Algorithms. ACM Transactions on Autonomous and Adaptive Systems (TAAS), 3(2:5):1–37, May 2008, ACM Press: New York, NY, USA. doi: 10.1145/1352789.1352790. Fully available at http://grid.deis.unical.it/papers/pdf/Article.pdf [accessed 2008-09-05]. See also [963].
966. Agostino Forestiero, Carlo Mastroianni, and Giandomenico Spezzano. QoS-based Dissemination of Content in Grids. Future Generation Computer Systems – The International Journal of Grid Computing: Theory, Methods and Applications (FGCS), 24(3):235–244, March 2008, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/j.future.2007.05.003. Fully available at http://grid.deis.unical.it/papers/pdf/ForestieroMastroianniSpezzano-FGCS-QoS.pdf [accessed 2008-09-05]. See also [963].
967. Agostino Forestiero, Carlo Mastroianni, and Giandomenico Spezzano. Reorganization and Discovery of Grid Information with Epidemic Tuning. Future Generation Computer Systems – The International Journal of Grid Computing: Theory, Methods and Applications (FGCS), 24(8):788–797, October 2008, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/j.future.2008.04.001. Fully available at http://grid.deis.unical.it/papers/pdf/FGCS-epidemic.pdf [accessed 2008-09-05]. See also [963].
968. M. Forina, S. Lanteri, C. Armanino, et al. PARVUS – An Extendible Package for Data Exploration, Classification and Correlation. Technical Report, Institute of Pharmaceutical and Food Analysis and Technologies: Genoa, Italy, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands, 1988–1991. isbn: 0444430121. Google Books ID: qi lAAAACAAJ. GBV-Identification (PPN): 052128725.
969. Stephanie Forrest, editor. Proceedings of the 5th International Conference on Genetic Algorithms (ICGA'93), July 17–21, 1993, University of Illinois: Urbana-Champaign, IL, USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-299-2 and 978-1-55860-299-1. Google Books ID: cfBQAAAAMAAJ.
970. Stephanie Forrest and Melanie Mitchell. Relative Building-Block Fitness and the Building-Block Hypothesis. In FOGA'92 [2927], pages 109–126, 1992. Fully available at http://web.cecs.pdx.edu/~mm/Forrest-Mitchell-FOGA.pdf [accessed 2010-12-04]. CiteSeerx: 10.1.1.83.4366.
971. Christian V. Forst, Christian M. Reidys, and Jacqueline Weber. Evolutionary Dynamics and Optimization: Neutral Networks as Model-Landscapes for RNA Secondary-Structure Folding-Landscapes. In ECAL'95 [1939], pages 128–147, 1995. doi: 10.1007/3-540-59496-5_294. CiteSeerx: 10.1.1.21.1221 and 10.1.1.56.9190. arXiv ID: adap-org/9505002v1.
972. Richard S. Forsyth. BEAGLE – A Darwinian Approach to Pattern Recognition. Kybernetes, 10(3):159–166, 1981, Thales Publications (W.O.) Ltd.: Irvine, CA, USA and Emerald Group Publishing Limited: Bingley, UK. doi: 10.1108/eb005587. Fully available at http://www.cs.ucl.ac.uk/staff/W.Langdon/ftp/papers/kybernetes_forsyth.pdf [accessed 2007-11-01]. Received December 17, 1980. (copy from British Library May 1994).
973. Richard S. Forsyth. The Evolution of Intelligence. In Machine Learning: Principles and Techniques [974], Chapter 4, pages 65–82. Chapman & Hall: London, UK, 1989. Refers also to PC/BEAGLE. See also [972].
974. Richard S. Forsyth, editor. Machine Learning: Principles and Techniques. Chapman & Hall: London, UK, 1989. isbn: 0-412-30570-4 and 0-412-30580-1. OCLC: 18255812, 246546781, and 315238891. Library of Congress Control Number (LCCN): 88022872. GBV-Identification (PPN): 017008093, 025424157, and 275121755. LC Classification: Q325 .F64 1989.
975. Richard S. Forsyth and Roy Rada. Machine Learning Applications in Expert Systems and Information Retrieval, Ellis Horwood Series in Artificial Intelligence. John Wiley & Sons Australia: Milton, QLD, Australia and Halsted Press: New York, NY, USA. isbn: 0-470-20309-9 and 0-7458-0045-9. Google Books ID: TcnaAAAAMAAJ. Contains chapters on BEAGLE. See also [972].
976. James A. Foster. Review: Discipulus: A Commercial Genetic Programming System. Genetic Programming and Evolvable Machines, 2(2):201–203, June 2001, Springer Netherlands: Dordrecht, Netherlands. Imprint: Kluwer Academic Publishers: Norwell, MA, USA. doi: 10.1023/A:1011516717456.
977. James A. Foster, Evelyne Lutton, Julian Francis Miller, Conor Ryan, and Andrea G. B. Tettamanzi, editors. Proceedings of the 5th European Conference on Genetic Programming (EuroGP'02), April 3–5, 2002, Kinsale, Ireland, volume 2278/2002 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-45984-7. isbn: 3-540-43378-3. Google Books ID: RpNQAAAAMAAJ and eCbu4GwRLusC. OCLC: 49312634, 50589656, 64651626, 66624505, 76394308, and 314133163.
978. B. R. Fox and M. B. McMahon. Genetic Operators for Sequencing Problems. In FOGA'90 [2562], pages 284–300, 1990.
979. Dieter Fox and Carla P. Gomes, editors. Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (AAAI'08), June 13–17, 2008, Chicago, IL, USA. AAAI Press: Menlo Park, CA, USA. Partly available at http://www.aaai.org/Conferences/AAAI/aaai08.php [accessed 2009-02-27].
980. John Fox. Applied Regression Analysis, Linear Models, and Related Methods. SAGE Publica-
tions: Thousand Oaks, CA, USA, February 1997. isbn: 0-8039-4540-X. OCLC: 36065974,
464653054, and 639274269. GBV-Identication (PPN): 225778130 and 509874118.
981. Felipe M. G. França and Carlos H. C. Ribeiro, editors. Proceedings of the VI Brazilian Symposium on Neural Networks (SBRN'00), November 22–25, 2000, Rio de Janeiro, RJ, Brazil. IEEE Computer Society: Washington, DC, USA. isbn: 0-7695-0856-1, 0-7695-0857-X, and 0-7695-0858-8. Google Books ID: Q6d KQAACAAJ and QqGoJgAACAAJ. OCLC: 45537396, 47880868, 51498392, 59548079, 80493808, 288979392, and 423976687.
982. Frank D. Francone, Markus Conrads, Wolfgang Banzhaf, and Peter Nordin. Homologous Crossover in Genetic Programming. In GECCO'99 [211], pages 1021–1026, volume 2, 1999. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gecco1999/GP-463.pdf [accessed 2010-08-23]. CiteSeerx: 10.1.1.153.15.
983. Eibe Frank, Mark A. Hall, Geoffrey Holmes, Richard Kirkby, Bernhard Pfahringer, Ian H. Witten, and Leonhard Trigg. WEKA – A Machine Learning Workbench for Data Mining. In The Data Mining and Knowledge Discovery Handbook [1815], Chapter 62, pages 1305–1314. Springer Science+Business Media, Inc.: New York, NY, USA, 2005. doi: 10.1007/0-387-25465-X_62. Fully available at http://www.cs.waikato.ac.nz/~ml/publications/2005/weka_dmh.pdf [accessed 2008-11-19].
984. Gene F. Franklin and Anthony N. Michel, editors. Proceedings of the 24th IEEE Conference on Decision and Control (CDC'85), December 11–13, 1985, Bonaventure Hotel & Spa: Fort Lauderdale, FL, USA. IEEE (Institute of Electrical and Electronics Engineers): Piscataway, NJ, USA. Library of Congress Control Number (LCCN): 85CH2245-9. Catalogue no.: 79-640961.
985. Daniel Raymond Frantz. Nonlinearities in Genetic Adaptive Search. PhD thesis, University of Michigan: Ann Arbor, MI, USA, September 1972, John Henry Holland, Advisor. Fully available at http://deepblue.lib.umich.edu/handle/2027.42/4950 [accessed 2010-08-03]. Order No.: AAI7311116. University Microfilms No.: 73-11116. ID: UMR1485. Published as Technical Report Nr. 138. See also [986].
986. Daniel Raymond Frantz. Nonlinearities in Genetic Adaptive Search. Dissertation Abstracts International (DAI), 33(11):5240B–5241B, ProQuest: Ann Arbor, MI, USA. See also [985].
987. Silvio Franz, Matteo Marsili, and Haijun Zhou, editors. 2nd Asian-Pacific School on Statistical Physics and Interdisciplinary Applications, March 3–14, 2008, Beijing, China. Abdus Salam International Centre for Theoretical Physics (ICTP): Triest, Italy, Chinese Center of Advanced Science and Technology (CCAST): Beijing, China, and Chinese Academy of Sciences, Kavli Institute of Theoretical Physics China (KITPC): Beijing, China.
988. Alex S. Fraser. Simulation of Genetic Systems by Automatic Digital Computers. I. Introduction. Australian Journal of Biological Science (AJBS), 10:484–491, 1957. See also [989, 990].
989. Alex S. Fraser. Simulation of Genetic Systems by Automatic Digital Computers. II. Effects of Linkage on Rates of Advance under Selection. Australian Journal of Biological Science (AJBS), 10:484–491, 1957. See also [990].
990. Alex S. Fraser. Simulation of Genetic Systems by Automatic Digital Computers. VI. Epistasis. Australian Journal of Biological Science (AJBS), 13(2):150–162, 1960. See also [988, 989].
991. William J. Frawley, Gregory Piatetsky-Shapiro, and Christopher J. Matheus. Knowledge Discovery in Databases: An Overview. AI Magazine, 13(3):57–70, Fall 1992, AAAI Press: Menlo Park, CA, USA. Fully available at http://www.aaai.org/ojs/index.php/aimagazine/article/download/1011/929 and http://www.kdnuggets.com/gpspubs/aimag-kdd-overview-1992.pdf [accessed 2009-09-09]. CiteSeerx: 10.1.1.18.1674.
992. 12th European Conference on Machine Learning (ECML'01), September 3–7, 2001, Freiburg, Germany.
993. Alex Alves Freitas. A Genetic Programming Framework for Two Data Mining Tasks: Classification and Generalized Rule Induction. In GP'97 [1611], pages 96–101, 1997. Fully available at http://kar.kent.ac.uk/21483/ [accessed 2010-12-04]. CiteSeerx: 10.1.1.47.4151 and 10.1.1.56.2449.
994. Alex Alves Freitas. Data Mining and Knowledge Discovery with Evolutionary Algorithms, Natural Computing Series. Springer New York: New York, NY, USA, 2002. isbn: 3-540-43331-7. Google Books ID: KkdZlfQJvbYC. OCLC: 49415761, 248441650, and 492385968. Library of Congress Control Number (LCCN): 2002021728. LC Classification: QA76.9.D343 F72 2002.
995. Richard M. Friedberg. A Learning Machine: Part I. IBM Journal of Research and Development, 2(1):2–13, November 1958. doi: 10.1147/rd.21.0002. See also [996].
996. Richard M. Friedberg, B. Dunham, and J. H. North. A Learning Machine: Part II. IBM Journal of Research and Development, 3(3):282–287, July 1959. doi: 10.1147/rd.33.0282. See also [995].
997. Drew Fudenberg and Jean Tirole. Game Theory. MIT Press: Cambridge, MA, USA, Au-
gust 1991. isbn: 0-2620-6141-4. Google Books ID: pFPHKwXro3QC.
998. Cory Fujiki and John Dickinson. Using the Genetic Algorithm to Generate LISP Source Code to Solve the Prisoner's Dilemma. In ICGA'87 [1132], pages 236–240, 1987.
999. Cory Fujiki. An Evaluation of Holland's Genetic Algorithm Applied to a Program Generator. Master's thesis, University of Idaho, Computer Science Department: Moscow, ID, USA, 1986.
1000. David T. Fullwood. Percolation in Two-Dimensional Grain Boundary Structures, and Polycrystal Property Closures. Master's thesis, Brigham Young University: Provo, UT, USA, December 2005. Fully available at http://contentdm.lib.byu.edu/ETD/image/etd1045.pdf [accessed 2009-06-21].
1001. Takeshi Furuhashi, editor. First Online Workshop on Soft Computing (WSC1), August 19–30, 1996, Nagoya University: Nagoya, Japan.
1002. Takeshi Furuhashi and Tomohiro Yoshikawa. Visualization Techniques for Mining of Solutions. In ISIS'07 [1579], pages 68–71, 2007. Fully available at http://isis2007.fuzzy.or.kr/submission/upload/A1484.pdf [accessed 2011-12-05].
G
1003. Bogdan Gabrys, Robert J. Howlett, and Lakhmi C. Jain, editors. Proceedings of the 10th International Conference on Knowledge-Based Intelligent Information and Engineering Systems, Part I (KES'06-I), October 9–11, 2006, Bournemouth International Centre: Bournemouth, UK, volume 4251/2006 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-46535-9. See also [1004, 1005].
1004. Bogdan Gabrys, Robert J. Howlett, and Lakhmi C. Jain, editors. Proceedings of the 10th International Conference on Knowledge-Based Intelligent Information and Engineering Systems, Part II (KES'06-II), October 9–11, 2006, Bournemouth International Centre: Bournemouth, UK, volume 4252/2006 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-46537-5. See also [1003, 1005].
1005. Bogdan Gabrys, Robert J. Howlett, and Lakhmi C. Jain, editors. Proceedings of the 10th International Conference on Knowledge-Based Intelligent Information and Engineering Systems, Part III (KES'06-III), October 9–11, 2006, Bournemouth International Centre: Bournemouth, UK, volume 4253/2006 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11893011. isbn: 3-540-46542-1. Library of Congress Control Number (LCCN): 2006933827. See also [1003, 1004].
1006. Malgorzata Gadomska and Andrzej Pacut. Performance of Ant Routing Algorithms When Using TCP. In EvoWorkshops'07 [1050], pages 1–10, 2007. doi: 10.1007/978-3-540-71805-5_1.
1007. Pablo Galiasso and Roger L. Wainwright. A Hybrid Genetic Algorithm for the Point to Multipoint Routing Problem with Single Split Paths. In SAC'01 [24], pages 327–332, 2001. doi: 10.1145/372202.372354. CiteSeerx: 10.1.1.20.8236.
1008. Ante Galić, Tonči Carić, and Hrvoje Gold. MARS – A Programming Language for Solving Vehicle Routing Problems. In Recent Advances in City Logistics. Proceedings of the 4th International Conference on City Logistics [2672], pages 47–57, 2005. Fully available at http://www.cro-grid.hr/hr/apps/rep/app/oot/fpz_galic_caric_gold_final_paper.doc [accessed 2011-01-14].
1009. Marcus Gallagher, Marcus R. Frean, and Tom Downs. Real-Valued Evolutionary Optimization using a Flexible Probability Density Estimator. In GECCO'99 [211], pages 840–846, 1999. CiteSeerx: 10.1.1.35.5113.
1010. E. A. Galperin. Pareto Analysis vis-à-vis Balance Space Approach in Multiobjective Global Optimization. Journal of Optimization Theory and Applications, 93(3):533–545, June 1997, Springer Netherlands: Dordrecht, Netherlands and Plenum Press: New York, NY, USA. doi: 10.1023/A:1022639028824.
1011. Luca Maria Gambardella and Marco Dorigo. Solving Symmetric and Asymmetric TSPs by Ant Colonies. In CEC'96 [1445], pages 622–627, 1996. doi: 10.1109/ICEC.1996.542672. Fully available at http://www.idsia.ch/~luca/icec96-acs.pdf [accessed 2009-06-26]. CiteSeerx: 10.1.1.40.4846.
1012. Luca Maria Gambardella, Éric D. Taillard, and Marco Dorigo. Ant Colonies for the Quadratic Assignment Problem. The Journal of the Operational Research Society (JORS), 50(2):167–176, February 1999, Palgrave Macmillan Ltd.: Houndmills, Basingstoke, Hampshire, UK and Operations Research Society: Birmingham, UK. doi: 10.2307/3010565. Fully available at http://www.idsia.ch/~luca/tr-idsia-4-97.pdf [accessed 2009-06-30]. CiteSeerx: 10.1.1.32.2245.
1013. Luca Maria Gambardella, Andrea E. Rizzoli, Fabrizio Oliverio, Norman Casagrande, Alberto V. Donati, Roberto Montemanni, and Enzo Lucibello. Ant Colony Optimization for Vehicle Routing in Advanced Logistics Systems. In MAS'03 [588], pages 3–9, 2003. Fully available at http://www.idsia.ch/~luca/MAS2003_18.pdf [accessed 2007-08-19].
1014. Xavier Gandibleux, Marc Sevaux, Kenneth Sörensen, and Vincent T'kindt, editors. Metaheuristics for Multiobjective Optimisation, volume 535 in Lecture Notes in Economics and Mathematical Systems. Springer-Verlag: Berlin/Heidelberg, 2004. isbn: 3-540-20637-X. Google Books ID: Lb8p1FWAh-8C and TEToXuP17RYC.
1015. X. Z. Gao, Antonio Gaspar-Cunha, Gerald Schaefer, and Jun Wang, editors. Soft Computing in Industrial Applications – 14th Online World Conference on Soft Computing in Industrial Applications (WSC14), November 17–29, 2009, volume 75 in Advances in Intelligent and Soft Computing. Springer-Verlag: Berlin/Heidelberg. isbn: 3-642-11281-1. Google Books ID: EDp3RAAACAAJ.
1016. Yong Gao and Joseph Culberson. An Analysis of Phase Transition in NK Landscapes. Journal of Artificial Intelligence Research (JAIR), 17:309–332, October 2002, AI Access Foundation, Inc.: El Segundo, CA, USA and AAAI Press: Menlo Park, CA, USA. Fully available at http://www.jair.org/media/1081/live-1081-2104-jair.pdf [accessed 2009-02-26]. CiteSeerx: 10.1.1.20.2690.
1017. Yu Gao and Yong-Jun Wang. A Memetic Differential Evolutionary Algorithm for High Dimensional Functions Optimization. In ICNC'07 [1711], pages 188–192, volume 4, 2007. doi: 10.1109/ICNC.2007.60.
1018. Yuelin Gao and Yuhong Duan. An Adaptive Particle Swarm Optimization Algorithm with New Random Inertia Weight. In ICIC'07-2 [1293], pages 342–350, 2007. doi: 10.1007/978-3-540-74282-1_39.
1019. Yuelin Gao and Zihui Ren. Adaptive Particle Swarm Optimization Algorithm With Genetic Mutation Operation. In ICNC'07 [1711], pages 211–215, volume 2, 2007. doi: 10.1109/ICNC.2007.161.
1020. Salvador García and Francisco Herrera Triguero. An Extension on Statistical Comparisons of Classifiers over Multiple Data Sets for all Pairwise Comparisons. Journal of Machine Learning Research (JMLR), 9:2677–2694, December 2008, MIT Press: Cambridge, MA, USA. Fully available at http://jmlr.csail.mit.edu/papers/volume9/garcia08a/garcia08a.pdf and http://sci2s.ugr.es/publications/ficheros/2008-Garcia-JMLR.pdf [accessed 2010-10-19]. See also [766].
1021. Alma Lilia García-Almanza and Edward P. K. Tsang. The Repository Method for Chance Discovery in Financial Forecasting. In KES'06-III [1005], pages 30–37, 2006. doi: 10.1007/11893011_5.
1022. Alma Lilia García-Almanza, Edward P. K. Tsang, and Edgar Galván-López. Evolving Decision Rules to Discover Patterns in Financial Data Sets. In Computational Methods in Financial Engineering – Essays in Honour of Manfred Gilli [1575], Chapter II-5, pages 239–255. Springer-Verlag: Berlin/Heidelberg, 2008. doi: 10.1007/978-3-540-77958-2_12. Fully available at http://www.bracil.net/finance/papers/GarciaTsangGalvan-Ecr-Book-2007.pdf [accessed 2010-06-25]. CiteSeerx: 10.1.1.153.5463.
1023. Raúl García-Castro, Asunción Gómez-Pérez, Charles J. Petrie, Emanuele Della Valle, Ulrich Küster, Michal Zaremba, and Omair Shafiq, editors. Proceedings of the 6th International Workshop on Evaluation of Ontology-based Tools and the Semantic Web Service Challenge (EON-SWSC'08), June 1–2, 2008, Tenerife, Spain, volume 359 in CEUR Workshop Proceedings (CEUR-WS.org). Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen, Sun SITE Central Europe: Aachen, North Rhine-Westphalia, Germany. Fully available at http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-359/ [accessed 2010-12-12]. See also [2166].
1024. Martin Gardner. Martin Gardner's Sixth Book of Mathematical Diversions from Scientific American. University of Chicago Press: Chicago, IL, USA, 1983. isbn: 0226282503.
1025. Michael R. Garey and David S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness, Series of Books in the Mathematical Sciences. W. H. Freeman & Co.: New York, NY, USA, 1979. isbn: 0-7167-1044-7 and 0-7167-1045-5. Google Books ID: OHrJJQAACAAJ, gB FAgAACAAJ, and mdBxHAAACAAJ. OCLC: 4195125, 476019082, 630176268, 632535534, and 633137406. Library of Congress Control Number (LCCN): 78012361. GBV-Identification (PPN): 250973014, 303019050, 317880381, 321971280, 354248944, 513349081, 584237758, and 59999438X. LC Classification: QA76.6 .G35.
1026. Michael R. Garey, David S. Johnson, and Ravi Sethi. The Complexity of Flowshop and
Jobshop Scheduling. Mathematics of Operations Research (MOR), 1(2):117129, May 1976,
Institute for Operations Research and the Management Sciences (INFORMS): Linthicum,
ML, USA. doi: 10.1287/moor.1.2.117. JSTOR Stable ID: 3689278.
1027. Wolfgang Walter Garn. Vehicle Routing Problem (VRP). Vienna University of Technol-
ogy: Vienna, Austria, July 20, 2007. Fully available at http://osiris.tuwien.ac.at/
~wgarn/VehicleRouting/vehicle_routing.html [accessed 2011-04-25].
1028. Josselin Garnier and Leila Kallel. Statistical Distribution of the Convergence Time of Evo-
lutionary Algorithms for Long-Path Problems. IEEE Transactions on Evolutionary Com-
putation (IEEE-EC), 4(1):16–30, April 2000, IEEE Computer Society: Washington, DC,
USA. doi: 10.1109/4235.843492. CiteSeerˣ: 10.1.1.31.4329. INSPEC Accession Num-
ber: 6617599.
1029. Antonio Gaspar-Cunha and Ricardo H. C. Takahashi, editors. 15th Online World Confer-
ence on Soft Computing in Industrial Applications (WSC15), November 15–27, 2010. World
Federation on Soft Computing (WFSC).
1030. Sergey Gavrilets. Fitness Landscapes and the Origin of Species, volume MPB-41 in Mono-
graphs in Population Biology (MPB). Princeton University Press: Princeton, NJ, USA,
July 2004. isbn: 0-691-11983-X. Google Books ID: YjQKGwAACAAJ.
1031. Nicholas Geard. An Exploration of NK Landscapes with Neutrality, 14213. Master's thesis,
University of Queensland, School of Information Technology and Electrical Engineering: St
Lucia, QLD, Australia, October 2001, Janet Wiles, Supervisor. Fully available at http://
eprints.ecs.soton.ac.uk/14213/1/ng_thesis.pdf [accessed 2008-07-01].
1032. Nicholas Geard, Janet Wiles, Jennifer Hallinan, Bradley Tonkes, and Benjamin
Skellett. A Comparison of Neutral Landscapes – NK, NKp and NKq. In
CEC'02 [944], pages 205–210, 2002. Fully available at http://eprints.ecs.soton.ac.
uk/14208/1/cec1-comp.pdf and http://www.staff.ncl.ac.uk/j.s.hallinan/
pubs/cec2002.pdf [accessed 2010-12-04]. CiteSeerˣ: 10.1.1.16.5159.
1033. Zong Woo Geem, Joong Hoon Kim, and G. V. Loganathan. A New Heuristic Opti-
mization Algorithm: Harmony Search. Simulation – Transactions of The Society for
Modeling and Simulation International, SAGE Journals Online, pages 60–68, Febru-
ary 2001, Society for Modeling and Simulation International (SCS): San Diego, CA, USA.
doi: 10.1177/003754970107600201.
1034. Zong Woo Geem, Kang Seok Lee, and Yongjin Park. Application of Harmony Search to
Vehicle Routing. American Journal of Applied Sciences, 2(12):1552–1557, December 2005,
Science Publications: New York, NY, USA. Fully available at http://www.scipub.org/
fulltext/ajas/ajas2121552-1557.pdf [accessed 2010-12-17].
1035. Johannes Gehrke, Raghu Ramakrishnan, and Venkatesh Ganti. RainForest – A Framework
for Fast Decision Tree Construction of Large Datasets. In VLDB'98 [1151], pages 416–427,
1998. Fully available at http://www.vldb.org/conf/1998/p416.pdf [accessed 2009-05-11].
CiteSeerˣ: 10.1.1.40.474.
1036. Johannes Gehrke, Venkatesh Ganti, Raghu Ramakrishnan, and Wei-Yin Loh. BOAT –
Optimistic Decision Tree Construction. In SIGMOD'99 [22], pages 169–180,
1999. doi: 10.1145/304182.304197. Fully available at http://doi.acm.org/10.
1145/304182.304197 and http://www.cs.cornell.edu/johannes/papers/1999/
sigmod1999-boat.pdf [accessed 2009-05-11].
1037. Alexander F. Gelbukh and Ángel Fernando Kuri Morales, editors. Advances in Artificial In-
telligence: Proceedings of the 6th Mexican International Conference on Artificial Intelligence
(MICAI'07), November 4–10, 2007, Aguascalientes, Mexico, volume 4827/2007 in Lecture
Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-76631-5.
1038. Alexander F. Gelbukh and Eduardo F. Morales, editors. Advances in Artificial Intelligence –
Proceedings of the 7th Mexican International Conference on Artificial Intelligence (MI-
CAI'08), October 27–31, 2008, Tecnológico de Monterrey, Campus Estado de México: Ati-
zapán de Zaragoza, Mexico, volume 5317/2008 in Lecture Notes in Artificial Intelligence
(LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/978-3-540-88636-5. isbn: 3-540-88635-4. Library of Congress
Control Number (LCCN): 2008936816.
1039. Alexander F. Gelbukh, Álvaro de Albornoz, and Hugo Terashima-Marín, editors. Ad-
vances in Artificial Intelligence: Proceedings of the 4th Mexican International Conference
on Artificial Intelligence (MICAI'05), November 14–18, 2005, Monterrey, Mexico, volume
3789/2005 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11579427.
isbn: 3-540-29896-7.
1040. Stuart Geman, Elie Bienenstock, and René Doursat. Neural Networks and the Bias/Variance
Dilemma. Neural Computation, 4(1):1–58, January 1992, MIT Press: Cambridge, MA, USA.
doi: 10.1162/neco.1992.4.1.1.
1041. Michael R. Genesereth, editor. Proceedings of the National Conference on Artificial Intelli-
gence (AAAI'83), August 22–26, 1983, Washington, DC, USA. AAAI Press: Menlo Park, CA,
USA. isbn: 0-262-51052-9. Partly available at http://www.aaai.org/Conferences/
AAAI/aaai83.php [accessed 2007-09-06]. OCLC: 228289483.
1042. Ian Gent, Hans van Maaren, and Toby Walsh, editors. SAT2000 – Highlights of Satisfiability
Research in the Year 2000, volume 63 in Frontiers in Artificial Intelligence and Applica-
tions. IOS Press: Amsterdam, The Netherlands, 2000. isbn: 1586030612. Google Books
ID: GGJM0lXMJQgC.
1043. Stuart Geman and Donald Geman. Stochastic Relaxation, Gibbs Distributions, and the
Bayesian Restoration of Images. IEEE Transactions on Pattern Analysis and Machine
Intelligence (TPAMI), PAMI-6(6):721–741, November 1984, IEEE Computer Society:
Piscataway, NJ, USA. doi: 10.1109/TPAMI.1984.4767596. Fully available at
http://noodle.med.yale.edu/~qian/docs/geman.pdf [accessed 2011-11-21].
1044. Hannes Geyer, Peter Ulbig, and Siegfried Schulz. Encapsulated Evolution Strategies for the
Determination of Group Contribution Model Parameters in Order to Predict Thermodynamic
Properties. In PPSN V [866], pages 978–987, 1998. doi: 10.1007/BFb0056939. See also
[1045, 1046].
1045. Hannes Geyer, Peter Ulbig, and Siegfried Schulz. Use of Evolutionary Algorithms for
the Calculation of Group Contribution Parameters in order to Predict Thermodynamic
properties – Part 2: Encapsulated evolution strategies. Computers and Chemical En-
gineering, 23(7):955–973, July 1, 1999, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/S0098-1354(99)00270-7. See also [1044, 1046].
1046. Hannes Geyer, Peter Ulbig, and Siegfried Schulz. Verschachtelte Evolutionsstrategien zur
Optimierung nichtlinearer verfahrenstechnischer Regressionsprobleme. Chemie Ingenieur
Technik (CIT), 72(4):369–373, April 2000, Wiley Interscience: Chichester, West Sussex,
UK. doi: 10.1002/1522-2640(200004)72:4<369::AID-CITE369>3.0.CO;2-W. Fully avail-
able at http://www3.interscience.wiley.com/cgi-bin/fulltext/76500452/
PDFSTART [accessed 2010-08-01]. See also [1044, 1045].
1047. Zoubin Ghahramani, editor. Proceedings of the 24th Annual International Conference on
Machine Learning (ICML'07), June 20–24, 2007, Oregon State University: Corvallis, OR,
USA, volume 227 in ACM International Conference Proceeding Series (AICPS). Omnipress:
Madison, WI, USA. Fully available at http://www.machinelearning.org/icml2007_
proc.html [accessed 2010-06-30].
1048. Ashish Ghosh and Lakhmi C. Jain, editors. Evolutionary Computation in Data Mining,
volume 163/2005 in Studies in Fuzziness and Soft Computing. Springer-Verlag GmbH:
Berlin, Germany, 2005. doi: 10.1007/3-540-32358-9. isbn: 3-540-22370-3. Google Books
ID: Caevy3rrjPUC. OCLC: 56658773, 249236978, 318293100, and 320964275. Library
of Congress Control Number (LCCN): 2004111009.
1049. Ashish Ghosh and Shigeyoshi Tsutsui, editors. Advances in Evolutionary Computing – The-
ory and Applications, Natural Computing Series. Springer New York: New York, NY,
USA, November 22, 2002. isbn: 3-540-43330-9. Google Books ID: OGMEMC9P3vMC.
OCLC: 50207987, 249607003, and 314328282. Library of Congress Control Number
(LCCN): 2002029164. GBV-Identification (PPN): 35332180X and 358205859. LC Clas-
sification: QA76.618 .A215 2003.
1050. Mario Giacobini, Anthony Brabazon, Stefano Cagnoni, Gianni A. Di Caro, Rolf Drechsler,
Muddassar Farooq, Andreas Fink, Evelyne Lutton, Penousal Machado, Stefan Minner,
Michael O'Neill, Juan Romero, Franz Rothlauf, Giovanni Squillero, Hideyuki Takagi,
A. Şima Uyar, and Shengxiang Yang, editors. Applications of Evolutionary Comput-
ing, Proceedings of EvoWorkshops 2007: EvoCoMnet, EvoFIN, EvoIASP, EvoINTERAC-
TION, EvoMUSART, EvoSTOC and EvoTransLog (EvoWorkshops'07), April 11–13, 2007,
València, Spain, volume 4448/2007 in Theoretical Computer Science and General Issues (SL
1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-540-71805-5. isbn: 3-540-71804-4. Google Books ID: DtwgBJkn7iMC.
OCLC: 122935862, 152418528, 255966727, and 318300154. Library of Congress Con-
trol Number (LCCN): 2007923848.
1051. Mario Giacobini, Anthony Brabazon, Stefano Cagnoni, Gianni A. Di Caro, Rolf Drechsler,
Anikó Ekárt, Anna Isabel Esparcia-Alcázar, Muddassar Farooq, Andreas Fink, Jon
McCormack, Michael O'Neill, Juan Romero, Franz Rothlauf, Giovanni Squillero, A. Şima
Uyar, and Shengxiang Yang, editors. Applications of Evolutionary Computing – Proceed-
ings of EvoWorkshops 2008: EvoCOMNET, EvoFIN, EvoHOT, EvoIASP, EvoMUSART,
EvoNUM, EvoSTOC, and EvoTransLog (EvoWorkshops'08), May 26–28, 2008, Naples, Italy,
volume 4974/2008 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany. doi: 10.1007/978-3-540-78761-7. isbn: 3-540-78760-7. Google Books
ID: ibbRDnV1vUkC.
1052. Mario Giacobini, Penousal Machado, Anthony Brabazon, Jon McCormack, Stefano Cagnoni,
Michael O'Neill, Gianni A. Di Caro, Ferrante Neri, Anikó Ekárt, Mike Preuß, Anna Is-
abel Esparcia-Alcázar, Franz Rothlauf, Muddassar Farooq, Ernesto Tarantino, Andreas
Fink, and Shengxiang Yang, editors. Applications of Evolutionary Computing – Proceedings
of EvoWorkshops 2009: EvoCOMNET, EvoENVIRONMENT, EvoFIN, EvoGAMES, Evo-
HOT, EvoIASP, EvoINTERACTION, EvoMUSART, EvoNUM, EvoSTOC, EvoTRANSLOG
(EvoWorkshops'09), April 15–17, 2009, Eberhard-Karls-Universität Tübingen, Fakultät für
Informations- und Kognitionswissenschaften: Tübingen, Germany, volume 5484/2009 in The-
oretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Sci-
ence (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-642-01129-0.
isbn: 3-642-01128-4. Partly available at http://evostar.na.icar.cnr.it/
EvoWorkshops/EvoWorkshops.html [accessed 2011-02-28]. Google Books ID: NvOd8Rz997AC.
1053. K. C. Giannakoglou, D. T. Tsahalis, J. Périaux, K. D. Papailiou, and Terence Claus Fogarty,
editors. Proceedings of the Conference on Evolutionary Methods for Design Optimization
and Control with Applications to Industrial Problems (EUROGEN'01), September 19–21,
2001, Athens, Greece. International Center for Numerical Methods in Engineering (CIMNE):
Barcelona, Catalonia, Spain. isbn: 84-89925-97-6. Partly available at http://www.
mech.ntua.gr/~eurogen2001 [accessed 2007-09-16]. GBV-Identification (PPN): 37304500X.
Published in 2002.
1054. Basilis Gidas. Nonstationary Markov Chains and Convergence of the Annealing Algorithm.
Journal of Statistical Physics, 39(1-2):73–131, April 1985, Springer Netherlands: Dordrecht,
Netherlands. doi: 10.1007/BF01007975.
1055. Yolanda Gil and Raymond J. Mooney, editors. Proceedings, The Twenty-First National
Conference on Artificial Intelligence and the Eighteenth Innovative Applications of Artificial
Intelligence Conference (AAAI'06 / IAAI'06), July 16–20, 2006, Boston, MA, USA. AAAI
Press: Menlo Park, CA, USA. Partly available at http://www.aaai.org/Conferences/
AAAI/aaai06.php and http://www.aaai.org/Conferences/IAAI/iaai06.php [ac-
cessed 2007-09-06].
1056. Walter Gilbert. Why Genes in Pieces? Nature, 271(5645):501, February 9, 1978.
doi: 10.1038/271501a0. PubMed ID: 622185.
1057. John H. Gillespie. Molecular Evolution Over the Mutational Landscape. Evolution – In-
ternational Journal of Organic Evolution, 38(5):1116–1129, September 1984, Society for
the Study of Evolution (SSE) and Wiley Interscience: Chichester, West Sussex, UK.
doi: 10.2307/2408444.
1058. Jean-Numa Gillet and Yunlong Sheng. Simulated Quenching with Temperature Rescaling
for Designing Diffractive Optical Elements. In Proceedings of the 18th Congress of the Inter-
national Commission for Optics [1062], pages 683–684, 1999. doi: 10.1117/12.354954.
1059. Jean-Numa Gillet and Yunlong Sheng. Iterative Simulated Quenching for Designing Irregular-
Spot-Array Generators. Applied Optics, 39(20):3456–3465, July 10, 2000, Optical Society of
America (OSA): Washington, DC, USA. doi: 10.1364/AO.39.003456.
1060. Manfred Gilli and Peter Winker. Heuristic Optimization Methods in Econometrics. In Hand-
book on Computational Econometrics [263]. Wiley Interscience: Chichester, West Sussex, UK,
October 21, 2007. doi: 10.1002/9780470748916.ch3. Fully available at http://www.unige.
ch/ses/metri/gilli/Teaching/GilliWinkerHandbook.pdf [accessed 2010-09-17].
1061. Third European Working Session on Learning (EWSL'88), October 3–5, 1988, Glasgow,
Scotland, UK.
1062. Alexander J. Glass, Joseph W. Goodman, Milton Chang, Arthur H. Guenther, and Toshim-
itsu Asakura, editors. Proceedings of the 18th Congress of the International Commission for
Optics, August 2, 1999, San Francisco, CA, USA, volume 3749 in The Proceedings of SPIE.
Society of Photographic Instrumentation Engineers (SPIE): Bellingham, WA, USA.
1063. Fred Glover. Heuristics for Integer Programming using Surrogate Constraints. Management
Science Report Series 74-6, University of Colorado, Business Research Division: Boulder,
CO, USA, 1974. asin: B0006WBQOG. ID: OL14555131M. See also [1064].
1064. Fred Glover. Heuristics for Integer Programming using Surrogate Constraints. Decision
Sciences – A Journal of the Decision Sciences Institute, 8(1):156–166, January 1977, Decision
Sciences Institute: Atlanta, GA, USA and John Wiley & Sons Ltd.: New York, NY, USA.
doi: 10.1111/j.1540-5915.1977.tb01074.x. See also [1063].
1065. Fred Glover. Future Paths for Integer Programming and Links to Artificial Intelligence.
Computers & Operations Research, 13(5):533–549, 1986, Pergamon Press: Oxford, UK and
Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/0305-0548(86)90048-1. Fully
available at http://leeds-faculty.colorado.edu/glover/TS%20-%20Future
%20Paths%20for%20Integer%20Programming.pdf [accessed 2010-10-03].
1066. Fred Glover. Tabu Search – Part I. ORSA Journal on Computing, 1(3):190–206, Sum-
mer 1989, Operations Research Society of America (ORSA). doi: 10.1287/ijoc.1.3.190.
Fully available at http://leeds-faculty.colorado.edu/glover/TS%20-%20Part
%20I-ORSA.pdf [accessed 2008-03-27]. See also [1067].
1067. Fred Glover. Tabu Search – Part II. ORSA Journal on Computing, 2(1):4–32,
Winter 1990, Operations Research Society of America (ORSA). doi: 10.1287/ijoc.2.1.4.
Fully available at http://leeds-faculty.colorado.edu/glover/TS%20-%20Part
%20II-ORSA-aw.pdf [accessed 2011-12-05]. See also [1066].
1068. Fred Glover and Gary A. Kochenberger, editors. Handbook of Metaheuristics, volume 57
in International Series in Operations Research & Management Science. Kluwer Academic
Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands, 2003.
doi: 10.1007/b101874. isbn: 0-306-48056-5 and 1-4020-7263-5. Series Editor Frederick
S. Hillier.
1069. Fred Glover, Éric D. Taillard, and Dominique de Werra. A User's Guide to Tabu Search.
Annals of Operations Research, 41(1):3–28, March 1993, Springer Netherlands: Dordrecht,
Netherlands and J. C. Baltzer AG, Science Publishers: Amsterdam, The Netherlands.
doi: 10.1007/BF02078647. Special issue on Tabu search.
1070. Helen G. Cobb and John J. Grefenstette. Genetic Algorithms for Tracking Changing En-
vironments. In ICGA'93 [969], pages 523–529, 1993. Fully available at ftp://ftp.aic.
nrl.navy.mil/pub/papers/1993/AIC-93-004.ps [accessed 2008-07-19].
1071. William L. Goffe, Gary D. Ferrier, and John Rogers. Global Optimization of Statistical Func-
tions With Simulated Annealing. Journal of Econometrics, 60(1-2):65–99, January–February
1994, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/0304-4076(94)90038-8. Fully
available at http://cook.rfe.org/anneal-preprint.pdf [accessed 2009-09-18]. Partly
available at http://econpapers.repec.org/software/codfortra/simanneal.htm
[accessed 2009-09-18].
1072. J. Peter Gogarten and Elena Hilario. Inteins, Introns, and Homing Endonucleases:
Recent Revelations about the Life Cycle of Parasitic Genetic Elements. BMC Evo-
lutionary Biology, 6(94), November 13, 2006, BioMed Central Ltd.: London, UK.
doi: 10.1186/1471-2148-6-94. Fully available at http://www.biomedcentral.com/
content/pdf/1471-2148-6-94.pdf [accessed 2008-03-23].
1073. David Edward Goldberg. Computer-Aided Gas Pipeline Operation using Genetic Algorithms
and Rule Learning. Dissertation Abstracts International (DAI), 44(10):3174B, April 1984,
ProQuest: Ann Arbor, MI, USA. ID: 750217171. See also [1074].
1074. David Edward Goldberg. Computer-aided Gas Pipeline Operation using Genetic Algorithms
and Rule Learning. PhD thesis, University of Michigan: Ann Arbor, MI, USA, 1983. Uni-
versity Microfilms No.: 8402282. See also [1073].
1075. David Edward Goldberg. Genetic Algorithms in Search, Optimization, and Ma-
chine Learning. Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA,
1989. isbn: 0-201-15767-5. Google Books ID: 2IIJAAAACAAJ, 3_RQAAAAMAAJ, and
bO9SAAAACAAJ. OCLC: 17674450, 246916144, 255664278, and 299467773. Library of
Congress Control Number (LCCN): 88006276. GBV-Identification (PPN): 250764105,
301364087, 357619773, 357663551, 478167571, 499485440, 558515053, and
62589894X. LC Classification: QA402.5 .G635 1989.
1076. David Edward Goldberg. Genetic Algorithms and Walsh Functions: Part I, A Gentle Intro-
duction. Complex Systems, 3(2):129–152, 1989, Complex Systems Publications, Inc.: Cham-
paign, IL, USA. Fully available at http://www.complex-systems.com/pdf/03-2-2.
pdf [accessed 2010-07-23].
1077. David Edward Goldberg. Genetic Algorithms and Walsh Functions: Part II, Deception
and Its Analysis. Complex Systems, 3(2):153–171, 1989, Complex Systems Publications,
Inc.: Champaign, IL, USA. Fully available at http://www.complex-systems.com/pdf/
03-2-3.pdf [accessed 2010-07-23].
1078. David Edward Goldberg. Making Genetic Algorithms Fly: A Lesson from the Wright Broth-
ers. Advanced Technology For Developers, 2:1–8, February 1993, High-Tech Communications:
Sewickley, PA, USA.
1079. David Edward Goldberg and Kalyanmoy Deb. A Comparative Analysis of Selection Schemes
Used in Genetic Algorithms. In FOGA'90 [2562], pages 69–93, 1990. Fully available
at http://www.cse.unr.edu/~sushil/class/gas/papers/Select.pdf [accessed 2010-08-03].
CiteSeerˣ: 10.1.1.101.9494.
1080. David Edward Goldberg and Jon T. Richardson. Genetic Algorithms with Sharing for Mul-
timodal Function Optimization. In ICGA'87 [1132], pages 41–49, 1987.
1081. David Edward Goldberg and Liwei Wang. Adaptive Niching via Coevolutionary Sharing. In
Genetic Algorithms and Evolution Strategy in Engineering and Computer Science: Recent
Advances and Industrial Applications [2163], Chapter 2, pages 21–38. John Wiley & Sons
Ltd.: New York, NY, USA, 1998. See also [1082].
1082. David Edward Goldberg and Liwei Wang. Adaptive Niching via Coevolutionary Sharing.
IlliGAL Report 97007, Illinois Genetic Algorithms Laboratory (IlliGAL), Department of
Computer Science, Department of General Engineering, University of Illinois at Urbana-
Champaign: Urbana-Champaign, IL, USA, August 1997. Fully available at http://
www.illigal.uiuc.edu/pub/papers/IlliGALs/97007.ps.Z and www.lania.mx/
~ccoello/EMOO/goldberg98.pdf.gz [accessed 2009-08-28]. CiteSeerˣ: 10.1.1.15.5500.
See also [1081].
1083. David Edward Goldberg, Kalyanmoy Deb, and Bradley Korb. Messy Genetic Algorithms:
Motivation, Analysis, and First Results. Complex Systems, 3(5):493–530, 1989, Com-
plex Systems Publications, Inc.: Champaign, IL, USA. Fully available at http://www.
complex-systems.com/pdf/03-5-5.pdf [accessed 2010-08-09]. asin: B00071YQK2.
1084. David Edward Goldberg, Kalyanmoy Deb, and Bradley Korb. Messy Genetic Algorithms
Revisited: Studies in Mixed Size and Scale. Complex Systems, 4(4):415–444, 1990, Com-
plex Systems Publications, Inc.: Champaign, IL, USA. Fully available at http://www.
complex-systems.com/pdf/04-4-4.pdf [accessed 2010-08-09].
1085. David Edward Goldberg, Kalyanmoy Deb, and Jeffrey Horn. Massive Multimodality, Decep-
tion, and Genetic Algorithms. In PPSN II [1827], pages 37–48, 1992. See also [1086].
1086. David Edward Goldberg, Kalyanmoy Deb, and Jeffrey Horn. Massive Multimodality,
Deception, and Genetic Algorithms. IlliGAL report, Illinois Genetic Algorithms Labora-
tory (IlliGAL), Department of Computer Science, Department of General Engineering,
University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, April 1992.
CiteSeerˣ: 10.1.1.48.8442. See also [1085].
1087. David Edward Goldberg, Kalyanmoy Deb, Hillol Kargupta, and Georges Raif Harik. Rapid,
Accurate Optimization of Difficult Problems Using Fast Messy Genetic Algorithms. In
ICGA'93 [969], pages 56–64, 1993. Fully available at https://eprints.kfupm.edu.
sa/60781/ [accessed 2008-10-18]. CiteSeerˣ: 10.1.1.49.3524. See also [1088].
1088. David Edward Goldberg, Kalyanmoy Deb, Hillol Kargupta, and Georges Raif Harik. Rapid,
Accurate Optimization of Difficult Problems Using Fast Messy Genetic Algorithms. IlliGAL
Report 93004, Illinois Genetic Algorithms Laboratory (IlliGAL), Department of Computer
Science, Department of General Engineering, University of Illinois at Urbana-Champaign:
Urbana-Champaign, IL, USA, February 1993. Fully available at http://www.illigal.
uiuc.edu/pub/papers/IlliGALs/93004.ps.Z [accessed 2010-08-09]. See also [1087].
1089. Bruce L. Golden, James S. DeArmon, and Edward K. Baker. Computational Experiments
with Algorithms for a Class of Routing Problems. Computers & Operations Research, 10(1):
47–59, 1983, Pergamon Press: Oxford, UK and Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/0305-0548(83)90026-6.
1090. Bruce L. Golden, Edward A. Wasil, James P. Kelly, and I-Ming Chao. The Impact of
Metaheuristics on Solving the Vehicle Routing Problem: Algorithms, Problem Sets, and
Computational Results. In Fleet Management and Logistics [651], pages 33–56. Springer
Netherlands: Dordrecht, Netherlands. Imprint: Kluwer Academic Publishers: Norwell, MA,
USA, 1998.
1091. Bruce L. Golden, S. (Raghu) Raghavan, and Edward A. Wasil, editors. The Vehicle
Routing Problem: Latest Advances and New Challenges, volume 43 in Operations Re-
search/Computer Science Interfaces Series. Springer-Verlag GmbH: Berlin, Germany, 2008.
doi: 10.1007/978-0-387-77778-8. isbn: 0-387-77777-6. Library of Congress Control Number
(LCCN): 2008921562.
1092. Oded Goldreich. Computational Complexity: A Conceptual Perspective. Cambridge Univer-
sity Press: Cambridge, UK, 2008. Google Books ID: EuguvA-w5OEC. Library of Congress
Control Number (LCCN): 2008006750.
1093. Oded Goldreich, Arnold L. Rosenberg, and Alan L. Selman, editors. Theoretical Computer
Science: Essays in Memory of Shimon Even, volume 3895 in Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany, 2006. doi: 10.1007/11685654.
isbn: 3-540-32880-7. OCLC: 65427613, 318290447, 487447128, 612704970, and
629703194. Library of Congress Control Number (LCCN): 2006922002. GBV-
Identication (PPN): 508402778 and 512697760. LC Classication: MLCM 2006/41498
(Q).
1094. Erik F. Golen, Bo Yuan, and Nirmala Shenoy. An Evolutionary Approach to
Underwater Sensor Deployment. In GECCO'09-I [2342], pages 1925–1926, 2009.
doi: 10.1145/1569901.1570240.
1095. Matteo Golfarelli, Dario Maio, and Stefano Rizzi. Vertical Fragmentation of Views in
Relational Data Warehouses. In Atti del Settimo Convegno Nazionale Sistemi Evoluti
per Basi di Dati [283], pages 19–33, 1999. Fully available at http://www-db.deis.
unibo.it/~srizzi/PDF/sebd99.pdf [accessed 2011-04-11]. CiteSeerˣ: 10.1.1.45.5959
and 10.1.1.64.414.
1096. Matteo Golfarelli, Vittorio Maniezzo, and Stefano Rizzi. Materialization of Fragmented Views
in Multidimensional Databases. Data & Knowledge Engineering, 49(3):325–351, June 2004,
Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland
Scientific Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/j.datak.2003.11.001.
CiteSeerˣ: 10.1.1.81.6534.
1097. Gene Howard Golub and Charles F. Van Loan. Matrix Computations, volume 3 in Johns
Hopkins Studies in the Mathematical Sciences. Johns Hopkins University (JHU) Press:
Baltimore, MD, USA, 3rd edition, 1996. isbn: 0801837391, 080185413X, and
0801854148. Google Books ID: mlOa7wPX6OYC. OCLC: 34515797. Library of Congress
Control Number (LCCN): 96014291. LC Classification: QA188 .G65 1996.
1098. Karthik Gomadam, Jon Lathem, John A. Miller, Meenakshi Nagarajan, Cary Pennington,
Amit P. Sheth, Kunal Verma, and Zixin Wu. Semantic Web Services Challenge 2006 – Phase
I. In First Workshop of the Semantic Web Service Challenge 2006 – Challenge on Automating
Web Services Mediation, Choreography and Discovery [2688], 2006. See also [2990].
1099. Takashi Gomi, editor. Proceedings of the International Symposium on Evolutionary Robotics –
From Intelligent Robotics to Artificial Life (ER'01), October 18–19, 2001, Tokyo, Japan,
volume 2217 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. isbn: 3-540-42737-6.
1100. G. Gong and Bojan Cestnik. Hepatitis Data Set. UCI Machine Learning Reposi-
tory, Center for Machine Learning and Intelligent Systems, Donald Bren School of In-
formation and Computer Science, University of California: Irvine, CA, USA, Novem-
ber 1, 1988. Fully available at http://archive.ics.uci.edu/ml/datasets/
Breast+Cancer+Wisconsin+(Original) [accessed 2008-12-27].
1101. Yiyuan Gong and Alex S. Fukunaga. Fault Tolerance in Distributed Genetic Algorithms with
Tree Topologies. In CEC'09 [1350], pages 968–975, 2009. doi: 10.1109/CEC.2009.4983050.
INSPEC Accession Number: 10688632.
1102. Juan R. González, David Alejandro Pelta, Carlos Cruz, Germán Terrazas, and Natalio
Krasnogor, editors. Proceedings of the 4th International Workshop Nature Inspired Coop-
erative Strategies for Optimization (NICSO'10), May 12–14, 2010, Granada, Spain, vol-
ume 284 in Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg.
doi: 10.1007/978-3-642-12538-6. isbn: 3-642-12537-9. Google Books ID: OAfiU1kGWWAC.
Library of Congress Control Number (LCCN): 20100924760.
1103. Teofilo F. González, editor. Handbook of Approximation Algorithms and Metaheuristics,
Chapman & Hall/CRC Computer and Information Science Series. Chapman & Hall/CRC:
Boca Raton, FL, USA, 2007. isbn: 1584885505. Google Books ID: QK3_VU8ngK8C.
1104. Erik D. Goodman, editor. Late Breaking Papers at the Genetic and Evolutionary Computation
Conference (GECCO'01 LBP), July 7–11, 2001, Holiday Inn Golden Gateway Hotel: San
Francisco, CA, USA. AAAI Press: Menlo Park, CA, USA. Google Books ID: FjhHHQAACAAJ
and FzhHHQAACAAJ. See also [2570].
1105. Janaki Gopalan, Reda Alhajj, and Ken Barker. Discovering Accurate and Interesting Classi-
fication Rules Using Genetic Algorithm. In DMIN'06 [656], pages 389–395, 2006. Fully avail-
able at http://ww1.ucmss.com/books/LFS/CSREA2006/DMI5509.pdf [accessed 2007-07-29].
1106. V. Scott Gordon and L. Darrell Whitley. Serial and Parallel Genetic Algorithms as Function
Optimizers. In ICGA'93 [969], pages 177–183, 1993. Fully available at http://eprints.
kfupm.edu.sa/64675/ and www.cs.colostate.edu/~genitor/1993/icga93.ps.
gz [accessed 2009-08-20]. CiteSeerˣ: 10.1.1.54.3472.
1107. Martina Gorges-Schleuter. ASPARAGOS: An Asynchronous Parallel Genetic Optimization
Strategy. In ICGA'89 [2414], pages 422–427, 1989.
1108. Martina Gorges-Schleuter. Explicit Parallelism of Genetic Algorithms through Population
Structures. In PPSN I [2438], pages 150–159, 1990. doi: 10.1007/BFb0029746.
1109. James Gosling and Henry McGilton. The Java Language Environment – A White Paper.
Technical Report, Sun Microsystems, Inc.: Santa Clara, CA, USA, May 1996. Fully available
at http://java.sun.com/docs/white/langenv/ [accessed 2007-07-03].
1110. James Gosling, Bill Joy, Guy Steele, and Gilad Bracha. The Java™ Language Specification,
The Java Series. Prentice Hall International Inc.: Upper Saddle River, NJ, USA, Sun Mi-
crosystems Press (SMP): Santa Clara, CA, USA, and Addison-Wesley Professional: Reading,
MA, USA, 3rd edition, May 2005. isbn: 0-321-24678-0. Fully available at http://java.
sun.com/docs/books/jls/ [accessed 2007-09-14].
1111. Simon Goss, R. Beckers, Jean-Louis Deneubourg, S. Aron, and Jacques M. Pasteels. How
Trail Laying and Trail Following can solve Foraging Problems for Ant Colonies. In NATO
Advanced Research Workshop on Behavioural Mechanisms of Food Selection [1306], pages
661–678, 1989.
1112. William Sealy Gosset. The Probable Error of a Mean. Biometrika, 6(1):1–25, March 1908,
Oxford University Press, Inc.: New York, NY, USA. Fully available at http://www.york.
ac.uk/depts/maths/histstat/student.pdf [accessed 2007-09-30]. Because the author was
not allowed to publish this article, he used the pseudonym "Student". Reprinted in [1113].
1113. William Sealy Gosset. The Probable Error of a Mean. In Student's Collected Papers [1114],
pages 11–34. Cambridge University Press: Cambridge, UK and Biometrika Office: London,
UK, 1942. Reprint of [1112].
1114. William Sealy Gosset, author. Egon Sharpe Pearson and John Wishart, editors. Student's
Collected Papers. Cambridge University Press: Cambridge, UK and Biometrika Office:
London, UK, 1942. Google Books ID: 39fvGAAACAAJ, 3tfvGAAACAAJ, SdKEAAAAIAAJ,
UoJsAAAAMAAJ, and rqM2HQAACAAJ. OCLC: 220708760, 258570743, 313515170, and
318188805. With a Foreword by Launce McMullen.
1115. Jens Gottlieb and Günther R. Raidl, editors. Proceedings of the 4th European Confer-
ence on Evolutionary Computation in Combinatorial Optimization (EvoCOP'04), April 5–7,
2004, Coimbra, Portugal, volume 3004/2004 in Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/b96499. isbn: 3-540-21367-8.
1116. Jens Gottlieb and Günther R. Raidl, editors. Proceedings of the 6th European Conference
on Evolutionary Computation in Combinatorial Optimization (EvoCOP'06), April 10–12,
2006, Budapest, Hungary, volume 3906/2006 in Lecture Notes in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-33178-6.
1117. Jens Gottlieb, Elena Marchiori, and Claudio Rossi. Evolutionary Algorithms for the Satis-
fiability Problem. Evolutionary Computation, 10(1):35–50, Spring 2002, MIT Press: Cam-
bridge, MA, USA. doi: 10.1162/106365602317301763. Fully available at http://www.
cs.ru.nl/~elenam/fsat.pdf [accessed 2010-08-01]. CiteSeerˣ: 10.1.1.77.4306. PubMed
ID: 11911782. See also [2335].
1118. Georg Gottlob and Toby Walsh, editors. Proceedings of the Eighteenth International Joint
Conference on Artificial Intelligence (IJCAI'03), August 9–15, 2003, Acapulco, Mexico. Mor-
gan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 0-127-05661-0. Fully avail-
able at http://dli.iiit.ac.in/ijcai/IJCAI-2003/content.htm [accessed 2008-04-01].
See also [1485].
1119. Gang Gou, Jeffrey Xu Yu, and Hongjun Lu. A* Search: An Efficient and Flexible Approach to Materialized View Selection. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 36(3):411–425, May 2006, IEEE Systems, Man, and Cybernetics Society: New York, NY, USA. doi: 10.1109/TSMCC.2004.843248. INSPEC Accession Number: 8906628.
1120. Paul Grace. Genetic Programming and Protocol Configuration. Master's thesis, Lancaster University, Computing Department: Lancaster, Lancashire, UK, September 2000, Gordon Blair, Supervisor. Fully available at http://www.lancs.ac.uk/postgrad/gracep/msc.pdf [accessed 2008-06-23].
1121. Jörn Grahl. Estimation of Distribution Algorithms in Logistics: Analysis, Design, and Application. PhD thesis, Universität Mannheim, Fakultät für Betriebswirtschaftslehre: Mannheim, Germany, April 16, 2008, Daniel J. Veit, Supervisor, Hans H. Bauer and Stefan Minner, Committee members. Fully available at http://madoc.bib.uni-mannheim.de/madoc/volltexte/2008/1897/ [accessed 2010-06-18]. urn: urn:nbn:de:bsz:180-madoc-18974.
1122. The 2006 China-UK Workshop on Nature Inspired Computation and Applications (CUWNICA'06), October 25–27, 2006, Grand Hotel Overseas Traders Club: Hefei, Anhui, China.
1123. Pierre-Paul Grassé. La Reconstruction du Nid et les Coordinations Inter-Individuelles chez Bellicositermes Natalensis et Cubitermes sp. La Théorie de la Stigmergie: Essai d'Interprétation des Termites Constructeurs. Insectes Sociaux, 6(1):41–80, March 1959, Springer-Verlag GmbH: Berlin, Germany and Birkhäuser Verlag: Basel, Switzerland. doi: 10.1007/BF02223791.
1124. Federico Greco, editor. Traveling Salesman Problem. IN-TECH Education and Publishing: Vienna, Austria, September 2008. Fully available at http://intechweb.org/downloadfinal.php?is=978-953-7619-10-7&type=B [accessed 2009-01-13].
1125. Garrison W. Greenwood, Xiaobo Sharon Hu, and Joseph G. D'Ambrosio. Fitness Functions for Multiple Objective Optimization Problems: Combining Preferences with Pareto Rankings. In FOGA'96 [256], pages 437–455, 1996.
1126. John J. Grefenstette. Incorporating Problem Specific Knowledge into Genetic Algorithms. In Genetic Algorithms and Simulated Annealing [694], pages 42–60. Pitman: London, UK, 1987–1990.
1127. John J. Grefenstette. Credit Assignment in Rule Discovery Systems Based on Genetic Algorithms. Machine Learning, 3(2-3):225–245, October 1988, Kluwer Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands. doi: 10.1023/A:1022614421909.
1128. John J. Grefenstette. Conditions for Implicit Parallelism. In FOGA'90 [2562], pages 252–261, 1990. Fully available at http://www.aic.nrl.navy.mil/papers/1991/AIC-91-012.ps.Z [accessed 2010-07-30]. CiteSeerX: 10.1.1.52.7729.
1129. John J. Grefenstette. Deception Considered Harmful. In FOGA'92 [2927], pages 75–91, 1992. CiteSeerX: 10.1.1.49.1448.
1130. John J. Grefenstette. Predictive Models Using Fitness Distributions of Genetic Operators. In FOGA 3 [2932], pages 139–161, 1994. Fully available at http://reference.kfupm.edu.sa/content/p/r/predictive_models_using_fitness_distribu_672029.pdf [accessed 2009-07-10]. CiteSeerX: 10.1.1.52.4341.
1131. John J. Grefenstette, editor. Proceedings of the 1st International Conference on Genetic Algorithms and their Applications (ICGA'85), June 24–26, 1985, Carnegie Mellon University (CMU): Pittsburgh, PA, USA. Lawrence Erlbaum Associates: Hillsdale, NJ, USA. isbn: 0-8058-0426-9. OCLC: 19702892, 174343913, 475815642, 493612786, 497548640, and 606101493.
1132. John J. Grefenstette, editor. Proceedings of the Second International Conference on Genetic Algorithms and their Applications (ICGA'87), July 28–31, 1987, Massachusetts Institute of Technology (MIT): Cambridge, MA, USA. Lawrence Erlbaum Associates, Inc. (LEA): Mahwah, NJ, USA. isbn: 0-8058-0158-8. Google Books ID: 0hdRAAAAMAAJ, 5N7LAQAACAAJ, and bvlQAAAAMAAJ. OCLC: 17252562.
1133. John J. Grefenstette and James E. Baker. How Genetic Algorithms Work: A Critical Look at Implicit Parallelism. In ICGA'89 [2414], pages 20–27, 1989.
1134. Horst Greiner. Robust Filter Design by Stochastic Optimization. In Optical Interference Coatings [6], pages 150–160, 1994. doi: 10.1117/12.192107.
1135. Horst Greiner. Robust Optical Coating Design with Evolutionary Strategies. Applied Optics, 35(28):5477–5483, October 1, 1996, Optical Society of America (OSA): Washington, DC, USA. doi: 10.1364/AO.35.005477.
1136. Andreas Griewank and Philippe L. Toint. Local Convergence Analysis for Partitioned Quasi-Newton Updates. Numerische Mathematik, 39(3):429–448, October 1982. doi: 10.1007/BF01407874.
1137. Andreas Griewank and Philippe L. Toint. Partitioned Variable Metric Updates for Large Structured Optimization Problems. Numerische Mathematik, 39(1):119–137, February 1982. doi: 10.1007/BF01399316.
1138. David F. Griffiths and G. A. Watson, editors. Numerical Analysis: Proceedings of the 16th Dundee Biennial Conference in Numerical Analysis, June 27–30, 1995, University of Dundee: Nethergate, Dundee, Scotland, UK, volume 344 in Pitman Research Notes in Mathematics. Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, Chapman & Hall: London, UK, and CRC Press, Inc.: Boca Raton, FL, USA. isbn: 0-582-27633-0. GBV-Identification (PPN): 198456158 and 27380491X.
1139. Crina Grosan and Ajith Abraham. Hybrid Evolutionary Algorithms: Methodologies, Architectures, and Reviews. In Hybrid Evolutionary Algorithms [1141], pages 1–17. Springer-Verlag: Berlin/Heidelberg, 2007. doi: 10.1007/978-3-540-73297-6_1. Fully available at http://www.softcomputing.net/hea1.pdf [accessed 2010-09-11]. CiteSeerX: 10.1.1.160.9007.
1140. Crina Grosan and Ajith Abraham. A Novel Global Optimization Technique for High Dimensional Functions. International Journal of Intelligent Systems, 24(4):421–440, April 2009, Wiley Interscience: Chichester, West Sussex, UK. doi: 10.1002/int.20343. Fully available at http://www.softcomputing.net/wil2009.pdf [accessed 2009-08-07].
1141. Crina Grosan, Ajith Abraham, and Hisao Ishibuchi, editors. Hybrid Evolutionary Algorithms, volume 75/2007 in Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg, 2007. doi: 10.1007/978-3-540-73297-6. isbn: 3-540-73296-9. Google Books ID: II7LCqiGFlEC. Library of Congress Control Number (LCCN): 2007932399.
1142. David A. Grossman, Luis Gravano, ChengXiang Zhai, Otthein Herzog, and David A. Evans, editors. Proceedings of ACM Thirteenth Conference on Information and Knowledge Management (CIKM'04), November 8–13, 2004, Hyatt Arlington Hotel: Washington, DC, USA. isbn: 1-58113-874-1. Partly available at http://ir.iit.edu/cikm2004/ [accessed 2011-04-11].
1143. Frédéric Gruau. Neural Network Synthesis using Cellular Encoding and the Genetic Algorithm. PhD thesis, l'École Normale Supérieure de Lyon, Laboratoire de l'Informatique du Parallélisme (LIP-IMAG), Unité de Recherche Associée au CNRS: Lyon, France and Centre d'Étude Nucléaire de Grenoble (CENG), Département de Recherche Fondamentale Matière Condensée: Grenoble, France, January 4, 1994, Maurice Nivat, Jean-Arcady Meyer, Jacques Demongeot, Michel Cosnard, Jacques Mazoyer, Pierre Peretto, and L. Darrell Whitley, Committee members. Fully available at ftp://ftp.ens-lyon.fr/pub/LIP/Rapports/PhD/PhD1994/PhD1994-01-E.ps.Z [accessed 2010-08-05]. CiteSeerX: 10.1.1.29.5939.
1144. Frédéric Gruau and L. Darrell Whitley. Adding Learning to the Cellular Development of Neural Networks: Evolution and the Baldwin Effect. Evolutionary Computation, 1(3):213–233, Fall 1993, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1993.1.3.213. CiteSeerX: 10.1.1.34.1656.
1145. Jun-hua Gu, Xiu-li Zhao, and Qing Tan. Application of Ant Colony System to Materialized Views Selection. Journal of Computer Applications (Jisuanji Yingyong), 27(11):2763–2765, November 2007, Chinese Electronic Periodical Services (CEPS): Taipei, Taiwan. Fully available at http://epub.cnki.net/grid2008/docdown/docdownload.aspx?filename=JSJY200711049&dbcode=CJFD&dflag=pdfdown [accessed 2011-04-10]. ID: CNKI:SUN:JSJY.0.2007-11-049.
1146. Zhifeng Gu, Bin Xu, and Juanzi Li. Inheritance-Aware Document-Driven Service Composition. In CEC/EEE'07 [1344], pages 513–516, 2007. doi: 10.1109/CEC-EEE.2007.56. INSPEC Accession Number: 9868636. See also [319, 1797, 2998, 3010, 3011].
1147. Frank Guerin and Wamberto Weber Vasconcelos, editors. AISB 2008 Convention: Communication, Interaction and Social Intelligence (AISB'08), April 1–4, 2008, University of Aberdeen: Aberdeen, Scotland, UK, volume 12 in Computing & Philosophy. Society for the Study of Artificial Intelligence and the Simulation of Behaviour (SSAISB): Chichester, West Sussex, UK. isbn: 1-902956-71-0. Fully available at http://www.aisb.org.uk/publications/proceedings.shtml [accessed 2008-09-10].
1148. Michael Guntsch and Martin Middendorf. Pheromone Modification Strategies for Ant Algorithms Applied to Dynamic TSP. In EvoWorkshops'01 [343], pages 213–222, 2001. doi: 10.1007/3-540-45365-2_22. Fully available at http://pacosy.informatik.uni-leipzig.de/pv/Personen/middendorf/Papers/Papers/EvoCOP2001.Report.ps [accessed 2009-06-29].
1149. Michael Guntsch, Martin Middendorf, and Hartmut Schmeck. An Ant Colony Optimization Approach to Dynamic TSP. In GECCO'01 [2570], pages 860–867, 2001. Fully available at http://pacosy.informatik.uni-leipzig.de/middendorf/Papers/GECCO2001-Pap.ps [accessed 2009-07-13]. CiteSeerX: 10.1.1.58.1398.
1150. Maozu Guo, Liang Zhao, and Lipo Wang, editors. Proceedings of the 4th International Conference on Natural Computation (ICNC'08), October 18–20, 2008, Ji'nan, Shandong, China. IEEE Computer Society: Piscataway, NJ, USA. Library of Congress Control Number (LCCN): 2008904182. Catalogue no.: CFP08CNC-PRT.
1151. Ashish Gupta, Oded Shmueli, and Jennifer Widom, editors. Proceedings of the 24th International Conference on Very Large Data Bases (VLDB'98), August 24–26, 1998, New York, NY, USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-566-5.
1152. Himanshu Gupta. Selection of Views to Materialize in a Data Warehouse. In ICDT'97 [34], pages 98–112, 1997. doi: 10.1007/3-540-62222-5_39. Fully available at http://ilpubs.stanford.edu:8090/161/1/1996-34.pdf [accessed 2011-04-12]. CiteSeerX: 10.1.1.30.3892. See also [1154].
1153. Himanshu Gupta and Inderpal Singh Mumick. Selection of Views to Materialize under Maintenance Cost Constraint. In ICDT'99 [249], pages 453–470, 1999. doi: 10.1007/3-540-49257-7_28.
1154. Himanshu Gupta and Inderpal Singh Mumick. Selection of Views to Materialize in a Data Warehouse. IEEE Transactions on Knowledge and Data Engineering, 17(1):24–43, January 2005, IEEE Computer Society Press: Los Alamitos, CA, USA. doi: 10.1109/TKDE.2005.16. INSPEC Accession Number: 8287192. See also [1152].
1155. L. S. Gurin and Leonard A. Rastrigin. Convergence of the Random Search Method in the Presence of Noise. Automation and Remote Control, 26:1505–1511, 1965, Springer Science+Business Media, Inc.: New York, NY, USA and MAIK Nauka/Interperiodica Pleiades Publishing, Inc.: Moscow, Russia.
1156. Steven Matt Gustafson. An Analysis of Diversity in Genetic Programming. PhD thesis, University of Nottingham, School of Computer Science & IT: Nottingham, UK, February 2004. Fully available at http://www.gustafsonresearch.com/thesis_html/index.html [accessed 2009-07-09]. CiteSeerX: 10.1.1.2.9746.
1157. Steven Matt Gustafson, Anikó Ekárt, Edmund K. Burke, and Graham Kendall. Problem Difficulty and Code Growth in Genetic Programming. Genetic Programming and Evolvable Machines, 5(3):271–290, September 2004, Springer Netherlands: Dordrecht, Netherlands. Imprint: Kluwer Academic Publishers: Norwell, MA, USA. doi: 10.1023/B:GENP.0000030194.98244.e3. Fully available at http://www.gustafsonresearch.com/research/publications/gustafson-gpem2004.pdf [accessed 2009-07-09]. CiteSeerX: 10.1.1.59.6659. Submitted March 4, 2003; Revised August 7, 2003.
1158. Sergio Gutiérrez, Grégory Valigiani, Pierre Collet, and Carlos Delgado Kloos. Adaptation of the ACO Heuristic for Sequencing Learning Activities. In EC-TEL'07 Posters [2962], 2007. Fully available at http://ftp.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-280/p15.pdf and http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-280/p15.pdf [accessed 2009-09-05].
1159. Tomasz Dominik Gwiazda. Genetic Algorithms Reference, Volume 1: Crossover for Single-Objective Numerical Optimization Problems. TOMASZGWIAZDA Ebooks: Łomianki, Poland, Agata Szczepańska, Translator. isbn: 83-923958-3-2 and 83-923958-4-0. OCLC: 300393138.
1160. Lorenz Gygax. Statistik für Nutztierethologen. Einführung in die statistische Denkweise: Was ist, was macht ein statistischer Test? Swiss Federal Veterinary Office (FVO), Centre for Proper Housing of Ruminants and Pigs: Zürich, Switzerland, 1.1 edition, June 2003. Fully available at http://www.lorenzgygax.ch/documents/introEtho.pdf and http://www.proximate-biology.ch/documents/introEtho.pdf [accessed 2009-07-29].
H
1161. Atidel Ben Hadj-Alouane and James C. Bean. A Genetic Algorithm for the Multiple-Choice
Integer Program. Technical Report 92-50, University of Michigan, Department of Indus-
trial and Operations Engineering: Ann Arbor, MI, USA, September 1992. Fully available
at http://hdl.handle.net/2027.42/3516 [accessed 2008-11-15]. Revised in July 1993.
1162. George Hadley. Nonlinear and Dynamic Programming, World Student Series. Addison-Wesley Professional: Reading, MA, USA, December 1964. isbn: 0201026643.
1163. Proceedings of the 27th International Conference on Machine Learning (ICML'10), June 21–24, 2010, Haifa, Israel.
1164. Karen Haigh and Nestor Rychtyckyj, editors. Proceedings of the Twenty-First Innovative Applications of Artificial Intelligence Conference (IAAI'09), July 14–16, 2009, Pasadena, CA, USA. AAAI Press: Menlo Park, CA, USA.
1165. Yacov Y. Haimes, editor. Proceedings of the 6th International Conference on Multiple Criteria Decision Making: Decision Making with Multiple Objectives (MCDM'84), June 4–8, 1984, Cleveland, OH, USA, volume 242 in Lecture Notes in Economics and Mathematical Systems. Springer-Verlag: Berlin/Heidelberg. isbn: 0387152237 and 3-540-15223-7. OCLC: 45927944, 310680663, 450849831, 470794588, 613247950, 634256419, and 638892702. Published December 31, 1985.
1166. Yacov Y. Haimes and Ralph E. Steuer, editors. Proceedings of the 14th International Conference on Multiple Criteria Decision Making: Research and Practice in Multi Criteria Decision Making (MCDM'98), June 12–18, 1998, University of Virginia: Charlottesville, VA, USA, volume 487 in Lecture Notes in Economics and Mathematical Systems. Springer-Verlag: Berlin/Heidelberg. isbn: 3-540-67266-1. Google Books ID: skkEdJir95QC.
1167. Bruce Hajek. Cooling Schedules for Optimal Annealing. Mathematics of Operations Research (MOR), 13(2):311–329, May 1988, Institute for Operations Research and the Management Sciences (INFORMS): Linthicum, MD, USA. doi: 10.1287/moor.13.2.311. Fully available at http://web.mit.edu/6.435/www/Hajek88.pdf [accessed 2010-09-25].
1168. Hesham H. Hallal, Arnaud Dury, and Alexandre Petrenko. Web-FIM: Automated Framework for the Inference of Business Software Models. In SERVICES Cup'09 [1351], pages 130–138, 2009. doi: 10.1109/SERVICES-I.2009.106. INSPEC Accession Number: 10975112.
1169. Paul Richard Halmos. Naive Set Theory, Undergraduate Texts in Mathematics. Litton Educational Publishing, Inc.: New York, NY, USA, 1st, new edition, 1960–1998. isbn: 0-3879-0092-6. Google Books ID: atw2Z3lqfBIC.
1170. Proceedings of the First International Conference on Metaheuristics and Nature Inspired Computing (META'06), November 2–4, 2006, Hammamet, Tunisia.
1171. Proceedings of the 2nd International Conference on Metaheuristics and Nature Inspired Computing (META'08), October 29–31, 2008, Hammamet, Tunisia.
1172. Ulrich Hammel and Thomas Bäck. Evolution Strategies on Noisy Functions: How to Improve Convergence Properties. In PPSN III [693], pages 159–168, 1994. doi: 10.1007/3-540-58484-6_260. CiteSeerX: 10.1.1.57.5798.
1173. Richard W. Hamming. Error-Detecting and Error-Correcting Codes. Bell System Technical Journal, 29(2):147–169, 1950, Bell Laboratories: Berkeley Heights, NJ, USA. Fully available at http://garfield.library.upenn.edu/classics1982/A1982NY35700001.pdf and http://guest.engelschall.com/~sb/hamming/ [accessed 2010-12-04].
1174. Michelle Okaley Hammond and Terence Claus Fogarty. Co-operative OuLiPian Generative Literature using Human Based Evolutionary Computing. In GECCO'05 LBP [2337], 2005. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gecco2005lbp/papers/56-hammond.pdf [accessed 2010-08-07].
1175. Lin Han and Xingshi He. A Novel Opposition-Based Particle Swarm Optimization for Noisy Problems. In ICNC'07 [1711], pages 624–629, volume 3, 2007. doi: 10.1109/ICNC.2007.119.
1176. David J. Hand, Heikki Mannila, and Padhraic Smyth. Principles of Data Mining, volume
6 in Adaptive Computation and Machine Learning, Bradford Books. Prentice-Hall Of India
Pvt. Ltd. (PHI): New Delhi, India, August 2001. isbn: 0-262-08290-X and 8120324579.
Google Books ID: 5i2WPwAACAAJ and SdZ-bhVhZGYC. OCLC: 46866355.
1177. Hisashi Handa, Dan Lin, Lee Chapman, and Xin Yao. Robust Solution of Salting Route Optimisation Using Evolutionary Algorithms. In CEC'06 [3033], pages 3098–3105, 2006. doi: 10.1109/CEC.2006.1688701. Fully available at http://escholarship.lib.okayama-u.ac.jp/industrial_engineering/2/ [accessed 2008-06-19].
1178. Julia Handl, Simon C. Lovell, and Joshua D. Knowles. Multiobjectivization by Decomposition of Scalar Cost Functions. In PPSN X [2354], pages 31–40, 2008. doi: 10.1007/978-3-540-87700-4_4. Fully available at http://dbkgroup.org/handl/ppsn_decomp.pdf [accessed 2010-07-18].
1179. Proceedings of the Ninth International Conference on Simulated Evolution And Learning (SEAL'12), December 16–19, 2012, Hanoi, Vietnam.
1180. James V. Hansen, Paul Benjamin Lowry, Rayman Meservy, and Dan McDonald. Genetic Programming for Prevention of Cyberterrorism through Dynamic and Evolving Intrusion Detection. Decision Support Systems, 43(4):1362–1374, August 2007, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.dss.2006.04.004. Fully available at http://dx.doi.org/10.1016/j.dss.2006.04.004 and http://ssrn.com/abstract=877981 [accessed 2008-06-17]. Special Issue on Clusters.
1181. Nikolaus Hansen, Raymond Ros, Nikolas Mauny, Marc Schoenauer, and Anne Auger. PSO Facing Non-Separable and Ill-Conditioned Problems. Technical Report 6447, Institut National de Recherche en Informatique et en Automatique (INRIA), February 11, 2008. Fully available at http://hal.archives-ouvertes.fr/docs/00/25/01/60/PDF/RR-6447.pdf [accessed 2009-10-20]. See also [147].
1182. Nikolaus Hansen and Andreas Ostermeier. Adapting Arbitrary Normal Mutation Distributions in Evolution Strategies: The Covariance Matrix Adaptation. In CEC'96 [1445], pages 312–317, 1996. doi: 10.1109/ICEC.1996.542381. Fully available at http://www.bionik.tu-berlin.de/ftp-papers/CMAES.ps.Z, http://www.ml.inf.ethz.ch/education/ws_06_07_seminar_mustererkennung/paper_hansen_ostermeier, and https://eprints.kfupm.edu.sa/22718/1/22718.pdf [accessed 2009-09-18]. CiteSeerX: 10.1.1.35.2147. INSPEC Accession Number: 5375406.
1183. Nikolaus Hansen and Andreas Ostermeier. Convergence Properties of Evolution Strategies with the Derandomized Covariance Matrix Adaption: The (μ/μI, λ)-CMA-ES. In EUFIT'97 [3091], pages 650–654, volume 1, 1997. CiteSeerX: 10.1.1.30.648.
1184. Nikolaus Hansen and Andreas Ostermeier. Completely Derandomized Self-Adaptation in Evolution Strategies. Evolutionary Computation, 9(2):159–195, 2001, MIT Press: Cambridge, MA, USA. Fully available at http://www.bionik.tu-berlin.de/user/niko/cmaartic.pdf, http://www.cs.colostate.edu/~whitley/CS640/cmaartic.pdf, and http://www.lri.fr/~hansen/cmaartic.pdf [accessed 2010-08-10]. CiteSeerX: 10.1.1.13.7462.
1185. Nikolaus Hansen, Andreas Ostermeier, and Andreas Gawelczyk. On the Adaptation of Arbitrary Normal Mutation Distributions in Evolution Strategies: The Generating Set Adaptation. In ICGA'95 [883], pages 57–64, 1995. CiteSeerX: 10.1.1.29.9321.
1186. Nikolaus Hansen, Sibylle D. Müller, and Petros Koumoutsakos. Reducing the Time Complexity of the Derandomized Evolution Strategy with Covariance Matrix Adaptation (CMA-ES). Evolutionary Computation, 11(1):1–18, Spring 2003, MIT Press: Cambridge, MA, USA. doi: 10.1162/106365603321828970. Fully available at http://mitpress.mit.edu/journals/pdf/evco_11_1_1_0.pdf [accessed 2010-03-08]. CiteSeerX: 10.1.1.13.1008.
1187. Nikolaus Hansen, Fabian Gemperle, Anne Auger, and Petros Koumoutsakos. When Do Heavy-Tail Distributions Help? In PPSN IX [2355], pages 62–71, 2006. doi: 10.1007/11844297_7.
1188. Nikolaus Hansen, Anne Auger, Steffen Finck, and Raymond Ros. Real-Parameter Black-Box Optimization Benchmarking 2009: Experimental Setup. Rapports de Recherche RR-6828, Institut National de Recherche en Informatique et en Automatique (INRIA), October 16, 2009. Fully available at http://coco.gforge.inria.fr/doku.php?id=bbob-2009, http://hal.archives-ouvertes.fr/inria-00362649/en/, and http://hal.inria.fr/inria-00362649/en [accessed 2009-10-28]. Version 3.
1189. Pierre Hansen. The Steepest Ascent Mildest Descent Heuristic for Combinatorial Program-
ming. In Congress on Numerical Methods in Combinatorial Optimization [17], 1986.
1190. Pierre Hansen, editor. Proceedings of the 5th International Conference on Multiple Criteria Decision Making: Essays and Surveys on Multiple Criteria Decision Making (MCDM'82), August 9–13, 1982, Mons, Brussels, Belgium, volume 209 in Lecture Notes in Economics and Mathematical Systems. Springer-Verlag: Berlin/Heidelberg. isbn: 0387119914 and 3540119914. OCLC: 9350544, 239745974, 310666373, 455931654, 470793901, 497290816, 498330203, 610956168, and 634251590. Published in March 1983.
1191. Jin-Kao Hao, Evelyne Lutton, Edmund M. A. Ronald, Marc Schoenauer, and Dominique Snyers, editors. Selected Papers of the Third European Conference on Artificial Evolution (AE'97), October 22–24, 1997, École pour les Études et la Recherche en Informatique et Électronique (EERIE): Nîmes, France, volume 1363/1998 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/BFb0026588. isbn: 3-540-64169-6. Google Books ID: DtQQwyST3RkC. OCLC: 38519371, 65845265, 190833428, 243486301, and 246325993.
1192. Yue Hao, Jiming Liu, Yuping Wang, Yiu-ming Cheung, and Hujun Yin, editors. Proceedings of the 2005 International Conference on Computational Intelligence and Security, Part 1 (CIS'05), December 15–19, 2005, Xi'an, Shaanxi, China, volume 3801 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11596448. isbn: 3540308180. Google Books ID: lGLPmtQFyJkC.
1193. Yue Hao, Jiming Liu, Yuping Wang, Yiu-ming Cheung, and Hujun Yin, editors. Proceedings of the 2005 International Conference on Computational Intelligence and Security, Part 2 (CIS'05), December 15–19, 2005, Xi'an, Shaanxi, China, volume 3802 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11596981. isbn: 3540308199. Google Books ID: IH VSc0d1EYC.
1194. Jenny A. Harding, Muhammad Shahbaz, Srinivas, and Andrew Kusiak. Data Mining in Manufacturing: A Review. Journal of Manufacturing Science and Engineering, 128(4):969–977, November 2006, American Society of Mechanical Engineers (ASME): New York, NY, USA. Fully available at http://www.icaen.uiowa.edu/~ankusiak/Journal-papers/Harding.pdf [accessed 2010-06-27].
1195. Simon Harding and Wolfgang Banzhaf. Fast Genetic Programming on GPUs. In EuroGP'07 [856], pages 90–101, 2007. doi: 10.1007/978-3-540-71605-1_9. Fully available at http://www.cs.mun.ca/~banzhaf/papers/eurogp07.pdf [accessed 2008-12-24]. CiteSeerX: 10.1.1.93.1862.
1196. Georges Raif Harik. Learning Gene Linkage to Efficiently Solve Problems of Bounded Difficulty using Genetic Algorithms. PhD thesis, University of Michigan: Ann Arbor, MI, USA, 1997, Keki B. Irani, David Edward Goldberg, Edmund H. Durfee, and Robert K. Lindsay, Committee members. CiteSeerX: 10.1.1.54.7092. UMI Order No.: GAX97-32090.
1197. Georges Raif Harik. Linkage Learning via Probabilistic Modeling in the ECGA. IlliGAL Report 99010, Illinois Genetic Algorithms Laboratory (IlliGAL), Department of Computer Science, Department of General Engineering, University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, January 1999. Fully available at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/99010.ps.Z [accessed 2009-08-27]. CiteSeerX: 10.1.1.33.5508.
1198. Georges Raif Harik, Fernando G. Lobo, and David Edward Goldberg. The Compact Genetic Algorithm. IlliGAL Report 97006, Illinois Genetic Algorithms Laboratory (IlliGAL), Department of Computer Science, Department of General Engineering, University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, August 1997. CiteSeerX: 10.1.1.48.906. See also [1199].
1199. Georges Raif Harik, Fernando G. Lobo, and David Edward Goldberg. The Compact Genetic Algorithm. IEEE Transactions on Evolutionary Computation (IEEE-EC), 3(4):287–297, November 1999, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/4235.797971. INSPEC Accession Number: 6411650. See also [1198].
1200. Venky Harinarayan, Anand Rajaraman, and Jeffrey David Ullman. Implementing Data Cubes Efficiently. In SIGMOD'96 [1428], pages 205–216, 1996. doi: 10.1145/233269.233333. See also [1201].
1201. Venky Harinarayan, Anand Rajaraman, and Jeffrey David Ullman. Implementing Data Cubes Efficiently. SIGMOD Record, 25(2):205–216, June 1996, Association for Computing Machinery Special Interest Group on Management of Data (ACM SIGMOD): New York, NY, USA. doi: 10.1145/235968.233333. Fully available at http://infolab.stanford.edu/pub/papers/cube.ps, http://www.cs.aau.dk/~simas/dat5_08/presentations/Implementing_Data_Cubes_Efficiently.pdf, and http://www.cs.sfu.ca/CC/459/han/papers/harinarayan96.pdf [accessed 2011-03-28]. CiteSeerX: 10.1.1.63.9928. See also [1200].
1202. Lisa Lavoie Harlow, Stanley A. Mulaik, and James H. Steiger, editors. What If There Were No Significance Tests?, Multivariate Applications Book Series. Lawrence Erlbaum Associates, Inc. (LEA): Mahwah, NJ, USA, August 1997. isbn: 0805826343. Google Books ID: lZ2ybZzrnroC. OCLC: 36892849 and 264623722.
1203. Emma Hart, Chris McEwan, Jonathan Timmis, and Andy Hone, editors. Proceedings of the 10th International Conference on Artificial Immune Systems (ICARIS'11), July 18–21, 2011, University of Cambridge, Computer Laboratory, William Gates Building: Cambridge, UK, volume 6209 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
1204. Peter Elliot Hart, Nils J. Nilsson, and Bertram Raphael. Correction to "A Formal Basis for the Heuristic Determination of Minimum Cost Paths". ACM SIGART Bulletin, (37):28–29, December 1972, ACM Press: New York, NY, USA. doi: 10.1145/1056777.1056779.
1205. William Eugene Hart, Natalio Krasnogor, and James E. Smith, editors. Recent Advances in
Memetic Algorithms, volume 166/2005 in Studies in Fuzziness and Soft Computing. Springer-
Verlag GmbH: Berlin, Germany, 2005. doi: 10.1007/3-540-32363-5. isbn: 3-540-22904-3.
Google Books ID: LYf7YW4DmkUC. OCLC: 56697114 and 318297267. Library of Congress
Control Number (LCCN): 2004111139.
1206. Brad Harvey, James A. Foster, and Deborah Frincke. Byte Code Genetic Programming. In GP'98 LBP [1605], 1998. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/harvey_1998_bcGP.html and http://www.csds.uidaho.edu/deb/jvm.pdf [accessed 2008-09-16].
1207. Brad Harvey, James A. Foster, and Deborah Frincke. Towards Byte Code Genetic Programming. In GECCO'99 [211], page 1234, volume 2, 1999. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/harvey_1999_TBCGP.html [accessed 2008-09-16]. CiteSeerX: 10.1.1.22.4974.
1208. Inman Harvey. The Puzzle of the Persistent Question Marks: A Case Study of Genetic Drift. In ICGA'93 [969], pages 15–22, 1993. See also [1209].
1209. Inman Harvey. The Puzzle of the Persistent Question Marks: A Case Study of Genetic Drift. Cognitive Science Research Papers (CSRP) 278, University of Sussex, School of Cognitive Science: Brighton, UK, April 1993. CiteSeerX: 10.1.1.56.3830. issn: 1350-3162. See also [1208].
1210. Thomas Haselwanter, Paavo Kotinurmi, Matthew Moran, Tomas Vitvar, and Maciej Zaremba. Dynamic B2B Integration on the Semantic Web Services: SWS Challenge Phase 2. In Second Workshop of the Semantic Web Service Challenge 2006: Challenge on Automating Web Services Mediation, Choreography and Discovery [2689], 2006. Fully available at http://sws-challenge.org/workshops/2006-Budva/papers/DERI_WSMX_SWSChallenge_II.pdf [accessed 2010-12-12]. See also [582, 1940, 3057–3059].
1211. Rainer Hauser and Reinhard Männer. Implementation of Standard Genetic Algorithm on MIMD Machines. In PPSN III [693], pages 504–513, 1994. doi: 10.1007/3-540-58484-6_293. CiteSeerX: 10.1.1.26.9333.
1212. Patrick J. Hayes, editor. Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI'81-I), August 24–28, 1981, University of British Columbia: Vancouver, BC, Canada, volume 1. John W. Kaufmann Inc.: Washington, DC, USA. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-81-VOL%201/CONTENT/content.htm [accessed 2008-04-01]. OCLC: 490050000. See also [1213].
1213. Patrick J. Hayes, editor. Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI'81-II), August 24–28, 1981, University of British Columbia: Vancouver, BC, Canada, volume 2. John W. Kaufmann Inc.: Washington, DC, USA. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-81-VOL-2/CONTENT/content.htm [accessed 2008-04-01]. OCLC: 490050000. See also [1212].
1214. Barbara Hayes-Roth and Richard E. Korf, editors. Proceedings of the 12th National Conference on Artificial Intelligence (AAAI'94), July 31 – August 4, 1994, Convention Center: Seattle, WA, USA. AAAI Press: Menlo Park, CA, USA. isbn: 0-262-61102-3. Partly available at http://www.aaai.org/Conferences/AAAI/aaai94.php [accessed 2007-09-06]. 2 volumes.
1215. Thomas D. Haynes, Roger L. Wainwright, Sandip Sen, and Dale A. Schoenefeld. Strongly Typed Genetic Programming in Evolving Cooperation Strategies. In ICGA'95 [883], pages 271–278, 1995. Fully available at http://www.mcs.utulsa.edu/~rogerw/papers/Haynes-icga95.pdf [accessed 2007-10-03]. CiteSeerX: 10.1.1.48.1037.
1216. Thomas D. Haynes, Dale A. Schoenefeld, and Roger L. Wainwright. Type Inheritance in Strongly Typed Genetic Programming. In Advances in Genetic Programming II [106], Chapter 18, pages 359–376. MIT Press: Cambridge, MA, USA, 1996. Fully available at http://www.mcs.utulsa.edu/~rogerw/papers/Haynes-hier.pdf [accessed 2011-11-23]. CiteSeerX: 10.1.1.109.5646.
1217. Jun He and Xin Yao. Towards an Analytic Framework for Analysing the Computation Time of Evolutionary Algorithms. Artificial Intelligence, 145(1-2):59–97, April 2003, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/S0004-3702(02)00381-8. Fully available at http://www.cs.bham.ac.uk/~xin/papers/published_aij_03.pdf [accessed 2011-08-06]. CiteSeerX: 10.1.1.118.4632.
1218. Richard Heady, George F. Luger, Arthur Maccabe, and Mark Servilla. The Architecture of
a Network Level Intrusion Detection System. Technical Report CS90-20, LA-SUB93-219,
W-7405, University of New Mexico, Department of Computer Science: Albuquerque, NM,
USA, August 15, 1990. doi: 10.2172/425295. Fully available at http://www.osti.gov/
energycitations/product.biblio.jsp?osti_id=425295 [accessed 2008-09-07].
1219. Wendi Rabiner Heinzelman, Anantha Chandrakasan, and Hari Balakrishnan. Energy-Efficient Communication Protocol for Wireless Microsensor Networks. In HICSS'00 [2574], page 10, volume 2, 2000. Fully available at http://pdos.csail.mit.edu/decouto/papers/heinzelman00.pdf and https://eprints.kfupm.edu.sa/37623/1/37623.pdf [accessed 2008-11-16]. INSPEC Accession Number: 6530466.
1220. J org Heitkotter and David Beasley, editors. Hitch-Hikers Guide to Evolutionary Compu-
tation: A List of Frequently Asked Questions (FAQ) (HHGT). ENCORE (The Evolu-
tioNary Computation REpository Network): Santa Fe, NM, USA, 8.1 edition, March 29,
2000. Fully available at http://www.aip.de/

ast/EvolCompFAQ/, http://www.cse.
dmu.ac.uk/

rij/gafaq/top.htm, and http://www.etsimo.uniovi.es/ftp/pub/


EC/FAQ/www/ [accessed 2010-08-07]. USENET: comp.ai.genetic.
1221. 12th European Conference on Machine Learning (ECML02), August 1923, 2002, Helsinki,
Finland.
1222. Jim Hendler and Devika Subramanian, editors. Proceedings of the Sixteenth National Con-
ference on Articial Intelligence and Eleventh Conference on Innovative Applications of Ar-
ticial Intelligence (AAAI99, IAAI99), July 1822, 1999, Orlando, FL, USA. AAAI Press:
Menlo Park, CA, USA and MIT Press: Cambridge, MA, USA. isbn: 0-262-51106-1. Partly
available at http://www.aaai.org/Conferences/AAAI/aaai99.php and http://
www.aaai.org/Conferences/IAAI/iaai99.php [accessed 2007-09-06].
1223. Eighth European Conference on Machine Learning (ECML95), April 2527, 1995, Heraclion,
Crete, Greece.
1224. Nanna Suryana Herman, Siti Mariyam Shamsuddin, and Ajith Abraham, editors. Interna-
tional Conference on SOft Computing and PAttern Recognition (SoCPaR09), December 4
7, 2009, Melaka International Trade Centre (MITC): Malacca, Malaysia. IEEE (Institute of
Electrical and Electronics Engineers): Piscataway, NJ, USA. Partly available at http://
fke.uitm.edu.my/index.php/socpar-2009 [accessed 2011-02-28].
1225. Hugo Hernandez and Christian Blum. Self-Synchronized Duty-Cycling in Sensor Networks
with Energy Harvesting Capabilities: The Static Network Case. In GECCO09-I [2342],
pages 3340, 2009. doi: 10.1145/1569901.1569907.
REFERENCES 1051
1226. José Herskovits, Sandro Mazorche, and Alfredo Canelas, editors. Proceedings of the 6th World Congresses of Structural and Multidisciplinary Optimization (WCSMO6), May 30 – June 3, 2005, Rio de Janeiro, RJ, Brazil. COPPE Publication: Rio de Janeiro, RJ, Brazil. isbn: 8528500705. OCLC: 71284315 and 254952854.
1227. José Herskovits, Alfredo Canelas, Henry Cortes, and Miguel Aroztegui, editors. International Conference on Engineering Optimization (EngOpt'08), June 1–5, 2008, Rio de Janeiro, RJ, Brazil.
1228. Alain Hertz, Éric D. Taillard, and Dominique de Werra. A Tutorial on Tabu Search. In AIRO'95 [2169], pages 13–24, 1995. Fully available at http://www.cs.colostate.edu/~whitley/CS640/hertz92tutorial.pdf and http://www.dei.unipd.it/~fisch/ricop/tabu_search_tutorial.pdf [accessed 2009-07-02]. CiteSeerx: 10.1.1.28.521 and 10.1.1.50.7347.
1229. Malcom Ian Heywood and A. Nur Zincir-Heywood. Dynamic Page Based Linear Genetic Programming. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics, 32(3):380–388, June 2002, IEEE Systems, Man, and Cybernetics Society: New York, NY, USA. doi: 10.1109/TSMCB.2002.999814. INSPEC Accession Number: 7284736.
1230. Joseph F. Hicklin. Application of the Genetic Algorithm to Automatic Program Generation. Master's thesis, University of Idaho, Computer Science Department: Moscow, ID, USA, 1986. Google Books ID: oC5IOAAACAAJ.
1231. Tetsuya Higuchi, Masaya Iwata, and Weixin Liu, editors. The First International Conference on Evolvable Systems: From Biology to Hardware, Revised Papers (ICES'96), October 7–8, 1996, Tsukuba, Japan, volume 1259/1997 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-63173-9. isbn: 3-540-63173-9.
1232. Frederick S. Hillier and Gerard J. Liebermann. Introduction to Operations Research. McGraw-Hill Ltd.: Maidenhead, UK, England, Tsinghua University Press: Beijing, China, and Holden-Day: San Francisco, CA, USA, 8th edition, 1980–2005. isbn: 0070600929, 0816238677, 7302122431, and 9751400147. Google Books ID: toNN2yNKRBIC.
1233. W. Daniel Hillis. Co-Evolving Parasites Improve Simulated Evolution as an Optimization Procedure. Physica D: Nonlinear Phenomena, 42(1–2):228–234, June 1990, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/0167-2789(90)90076-2.
1234. Philip F. Hingston, Luigi C. Barone, and Zbigniew Michalewicz, editors. Design by Evo-
lution: Advances in Evolutionary Design, Natural Computing Series. Springer New York:
New York, NY, USA. doi: 10.1007/978-3-540-74111-4. isbn: 3540741097. Google Books
ID: FzJF0sYDvw8C. asin: 3540741097.
1235. Geoffrey E. Hinton and Steven J. Nowlan. How Learning Can Guide Evolution. Complex Systems, 1(3):495–502, 1987, Complex Systems Publications, Inc.: Champaign, IL, USA. Fully available at http://htpprints.yorku.ca/archive/00000172/ [accessed 2008-09-10]. See also [1236].
1236. Geoffrey E. Hinton and Steven J. Nowlan. How Learning Can Guide Evolution. In Adaptive Individuals in Evolving Populations: Models and Algorithms [255], pages 447–454. Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA and Westview Press, 1996. See also [1235].
1237. Tomoyuki Hiroyasu, Hiroyuki Ishida, Mitsunori Miki, and Hisatake Yokouchi. Difficulties of Evolutionary Many-Objective Optimization. Intelligent Systems Design Laboratory Research Reports (ISDL Reports) 20081006004, Doshisha University, Department of Knowledge Engineering and Computer Sciences, Intelligent Systems Design Laboratory: Kyoto, Japan, October 13, 2008. Fully available at http://mikilab.doshisha.ac.jp/dia/research/report/2008/1006/004/report20081006004.html [accessed 2009-07-18].
1238. Haym Hirsh and Steve A. Chien, editors. Proceedings of the Thirteenth Innovative Applications of Artificial Intelligence Conference (IAAI'01), April 7–9, 2001, Seattle, WA, USA. AAAI Press: Menlo Park, CA, USA. isbn: 1-57735-134-7. Partly available at http://www.aaai.org/Conferences/IAAI/iaai01.php [accessed 2007-09-06].
1239. Brahim Hnich, Mats Carlsson, François Fages, and Francesca Rossi, editors. Recent Advances in Constraints – Joint ERCIM/CoLogNET International Workshop on Constraint Solving and Constraint Logic Programming, CSCLP 2005 – Revised Selected and Invited Papers, June 20–22, 2005, Uppsala, Sweden, volume 3978/2006 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11754602. isbn: 3-540-34215-X. Google Books ID: YUdkAAAACAAJ. Library of Congress Control Number (LCCN): 2006925094.
1240. Sir Charles Antony Richard (Tony) Hoare. Quicksort. The Computer Journal, Oxford Journals, 5(1):10–16, 1962, British Computer Society: Swindon, UK. doi: 10.1093/comjnl/5.1.10.
1241. Cem Hocaoğlu and Arthur C. Sanderson. Evolutionary Speciation Using Minimal Representation Size Clustering. In EP'95 [1848], pages 187–203, 1995.
1242. Frank Hoffmann, Mario Köppen, Klaus Klawonn, and Rajkumar Roy, editors. Soft Computing: Methodologies and Applications, volume 32 in Advances in Intelligent and Soft Computing. Birkhäuser Verlag: Basel, Switzerland, May 2005. doi: 10.1007/3-540-32400-3. isbn: 3-540-25726-8. Google Books ID: ATswE4Z1m14C. Library of Congress Control Number (LCCN): 2005925479. Post-conference proceedings of the 8th Online World Conferences on Soft Computing (WSC8) which took place from September 29th to October 10th, 2003, organized by the World Federation of Soft Computing (WFSC).
1243. Frank Hoffmeister and Thomas Bäck. Genetic Algorithms and Evolution Strategies – Similarities and Differences. In PPSN I [2438], pages 455–469, 1990. doi: 10.1007/BFb0029787. See also [1243].
1244. Frank Hoffmeister and Hans-Paul Schwefel. KORR 2.1 – An implementation of a (µ +, λ)-evolution strategy. Technical Report, Universität Dortmund, Fachbereich Informatik: Dortmund, North Rhine-Westphalia, Germany, November 1990. Interner Bericht.
1245. Birgit Hofreiter, editor. Proceedings of the 11th IEEE Conference on Commerce and Enterprise Computing (CEC'09), July 20–23, 2009, Vienna University of Technology: Vienna, Austria. IEEE Computer Society: Piscataway, NJ, USA and Curran Associates, Inc.: Red Hook, NY, USA. isbn: 0-7695-3755-3. Partly available at http://cec2009.isis.tuwien.ac.at/ [accessed 2011-02-28]. GBV-Identification (PPN): 615078826 and 615079865.
1246. Christian Höhn and Colin R. Reeves. Are Long Path Problems Hard for Genetic Algorithms? In PPSN IV [2818], pages 134–143, 1996. doi: 10.1007/3-540-61723-X_977.
1247. John Henry Holland. Outline for a Logical Theory of Adaptive Systems. Journal of the Association for Computing Machinery (JACM), 9(3):297–314, ACM Press: New York, NY, USA. doi: 10.1145/321127.321128.
1248. John Henry Holland. Nonlinear Environments Permitting Efficient Adaptation. In Proceedings of the Symposium on Computer and Information Sciences II [2718], pages 147–164, 1966.
1249. John Henry Holland. Adaptive Plans Optimal for Payoff-Only Environments. In HICSS'69 [2056], pages 917–920, 1969. DTIC Accession Number: AD0688839.
1250. John Henry Holland. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. University of Michigan Press: Ann Arbor, MI, USA, 1975. isbn: 0-472-08460-7. Google Books ID: JE5RAAAAMAAJ, JOv3AQAACAAJ, Qk5RAAAAMAAJ, YE5RAAAAMAAJ, and vk5RAAAAMAAJ. OCLC: 1531617 and 266623839. Library of Congress Control Number (LCCN): 74078988. GBV-Identification (PPN): 075982293. LC Classification: QH546 .H64 1975. Newly published as [1252].
1251. John Henry Holland. Adaptive Algorithms for Discovering and Using General Patterns in Growing Knowledge Bases. International Journal of Policy Analysis and Information Systems, 4(3):245–268, 1980, Plenum Press: New York, NY, USA.
1252. John Henry Holland. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. NetLibrary, Inc.: Dublin, OH, USA, May 1992. isbn: 0585038449. Google Books ID: Jev3AQAACAAJ and nALfIAAACAAJ. Reprint of [1250].
1253. John Henry Holland. Genetic Algorithms – Computer programs that evolve in ways that resemble natural selection can solve complex problems even their creators do not fully understand. Scientific American, 267(1):44–50, July 1992, Scientific American, Inc.: New York, NY, USA. Fully available at http://members.fortunecity.com/templarseries/algo.html, http://www.cc.gatech.edu/~turk/bio_sim/articles/genetic_algorithm.pdf, http://www.im.isu.edu.tw/faculty/pwu/heuristic/GA_1.pdf, and http://www2.econ.iastate.edu/tesfatsi/holland.gaintro.htm [accessed 2011-11-20].
1254. John Henry Holland and Arthur W. Burks. Adaptive Computing System Capable of Learning and Discovery. Technical Report 619349, filed on 1984-06-11, United States Patent and Trademark Office (USPTO): Alexandria, VA, USA, 1987. Fully available at http://www.freepatentsonline.com/4697242.html and http://www.patentstorm.us/patents/4697242.html [accessed 2008-04-01]. US Patent Issued on September 29, 1987, Current US Class 706/13, Genetic algorithm and genetic programming system 382/155, LEARNING SYSTEMS 706/62 MISCELLANEOUS, Foreign Patent References 8501601 WO Apr., 1985. Editors: Jerry Smith and John R. Lastova.
1255. John Henry Holland and Judith S. Reitman. Cognitive Systems based on Adaptive Algo-
rithms. ACM SIGART Bulletin, 0(63):49, June 1977, ACM Press: New York, NY, USA.
doi: 10.1145/1045343.1045373. See also [1256].
1256. John Henry Holland and Judith S. Reitman. Cognitive Systems based on Adaptive Algorithms. In Selected Papers from the Workshop on Pattern Directed Inference Systems [2869], pages 313–329, 1977. See also [1255]. Reprinted in [939].
1257. John Henry Holland, Lashon Bernard Booker, Marco Colombetti, Marco Dorigo, David Edward Goldberg, Stephanie Forrest, Rick L. Riolo, Robert Elliott Smith, Pier Luca Lanzi, Wolfgang Stolzmann, and Stewart W. Wilson. What Is a Learning Classifier System? In IWLCS'00 [1678], pages 3–32, 2000. doi: 10.1007/3-540-45027-0_1. Fully available at http://www.cs.unm.edu/~forrest/publications/Learning%20Classifier%20Systems.pdf [accessed 2010-12-12].
1258. R. B. Hollstien. Artificial Genetic Adaptation in Computer Control Systems. PhD thesis, University of Michigan: Ann Arbor, MI, USA, 1971. University Microfilms No.: 71-23.
1259. Geoffrey Holmes, Andrew Donkin, and Ian H. Witten. WEKA: A Machine Learning Workbench. In ANZIIS'98 [1359], pages 357–361, 1994. doi: 10.1109/ANZIIS.1994.396988. Fully available at http://www.cs.waikato.ac.nz/~ml/publications/1994/Holmes-ANZIIS-WEKA.pdf [accessed 2008-11-19]. INSPEC Accession Number: 4962353.
1260. Diana Holstein and Pablo Moscato. Memetic Algorithms using Guided Local Search: A Case Study. In New Ideas in Optimization [638], pages 235–244. McGraw-Hill Ltd.: Maidenhead, UK, England, 1999.
1261. Robert C. Holte. Combinatorial Auctions, Knapsack Problems, and Hill-Climbing Search. In AI'01 [2624], pages 57–66, 2001. Fully available at http://www.cs.ubc.ca/~hoos/Courses/Trento-05/Holte01.pdf [accessed 2010-10-12]. CiteSeerx: 10.1.1.28.754.
1262. Robert C. Holte and Adele Howe, editors. Proceedings of the Twenty-Second AAAI Conference on Artificial Intelligence (AAAI'07), July 22–26, 2007, Vancouver, BC, Canada. AAAI Press: Menlo Park, CA, USA. Partly available at http://www.aaai.org/Conferences/AAAI/aaai07.php [accessed 2007-09-06].
1263. Holger H. Hoos and Thomas Stützle. SATLIB: An Online Resource for Research on SAT. In SAT2000 – Highlights of Satisfiability Research in the Year 2000 [1042], pages 283–292. IOS Press: Amsterdam, The Netherlands, 2000. Fully available at http://www.cs.ubc.ca/~hoos/Publ/sat2000-satlib.pdf [accessed 2011-09-07]. CiteSeerx: 10.1.1.42.5395. See also [1264].
1264. Holger H. Hoos and Thomas Stützle, editors. SATLIB – The Satisfiability Library. TU Darmstadt: Darmstadt, Germany and University of British Columbia: Vancouver, BC, Canada, March 22, 2005. Fully available at http://www.cs.ubc.ca/~hoos/SATLIB/benchm.html and http://www.satlib.org/ [accessed 2011-09-28]. See also [1263].
1265. J. Hopf, editor. Genetic Algorithms within the Framework of Evolutionary Computation,
Workshop at KI-94. Technical Report MPI-I-94-241, September 1994.
1266. Jeffrey Horn and David Edward Goldberg. Genetic Algorithm Difficulty and the Modality of the Fitness Landscape. In FOGA 3 [2932], pages 243–269, 1994. CiteSeerx: 10.1.1.31.3340.
1267. Jeffrey Horn, David Edward Goldberg, and Kalyanmoy Deb. Long Path Problems. In PPSN III [693], pages 149–158, 1994. doi: 10.1007/3-540-58484-6_259. CiteSeerx: 10.1.1.51.9758.
1268. Jeffrey Horn, Nicholas Nafpliotis, and David Edward Goldberg. A Niched Pareto Genetic Algorithm for Multiobjective Optimization. In CEC'94 [1891], pages 82–87, volume 1, 1994. doi: 10.1109/ICEC.1994.350037. Fully available at http://www.lania.mx/~ccoello/EMOO/horn2.ps.gz [accessed 2007-08-28]. CiteSeerx: 10.1.1.34.4189. Also: IEEE Symposium on Circuits and Systems, pp. 2264–2267, 1991.
1269. Jorng-Tzong Horng, Yu-Jan Chang, Baw-Jhiune Liu, and Cheng-Yan Kao. Materialized View Selection using Genetic Algorithms in a Data Warehouse System. In CEC'99 [110], pages 22–27, 1999. doi: 10.1109/CEC.1999.785551. INSPEC Accession Number: 6339040.
1270. Jorng-Tzong Horng, Yu-Jan Chang, and Baw-Jhiune Liu. Applying Evolutionary Algorithms to Materialized View Selection in a Data Warehouse. Soft Computing – A Fusion of Foundations, Methodologies and Applications, 7(8):574–581, August 2003, Springer-Verlag: Berlin/Heidelberg. doi: 10.1007/s00500-002-0243-1.
1271. Reiner Horst and Tuy Hoang. Global Optimization: Deterministic Approaches. Springer-
Verlag GmbH: Berlin, Germany, 3rd edition, 1996. isbn: 3-540-61038-3. Google Books
ID: HyyaAAAAIAAJ and usFjGFvuBDEC.
1272. Reiner Horst, Panos M. Pardalos, and H. Edwin Romeijm. Handbook of Global Optimization, volume 62(1-2) in Nonconvex Optimization and Its Applications. Kluwer Academic Publishers: Norwell, MA, USA, 1995–2002. isbn: 1-4020-0632-2 and 1-4020-0742-6. Fully available at http://www.optimization-online.org/DB_FILE/2002/03/456.pdf [accessed 2009-06-22]. Google Books ID: 9W0Ast3pDIC.
1273. Learning and Intelligent OptimizatioN (LION 1), February 12–18, 2007, Hotel Adler: Andalo, Trento, Italy.
1274. XX Simpósio Brasileiro de Telecomunicações (SBrT'03), October 3–5, 2003, Hotel Glória: Rio de Janeiro, RJ, Brazil.
1275. Proceedings of the 4th International Conference on Bio-Inspired Models of Network, Information, and Computing Systems (BIONETICS'09), December 9–11, 2009, Hotel MERCURE Pont d'Avignon: Avignon, France.
1276. Proceedings of the International Conference on Evolutionary Computation (ICEC'10), 2010, Hotel Sidi Saler: València, Spain. Part of [914].
1277. Daniel Howard. Frontiers in the Convergence of Bioscience and Information Technologies (FBIT'07), October 11–13, 2007, Jeju City, South Korea. IEEE (Institute of Electrical and Electronics Engineers): Piscataway, NJ, USA. isbn: 0-7695-2999-2. Google Books ID: AcgoPwAACAAJ and KbupPQAACAAJ. OCLC: 191820852, 230950453, 254360958, and 314010039.
1278. Nicola Howarth. Abstract Syntax Tree Design. ANSA Documents: Official Record of the ANSA Project, 23/08/95 APM.1551.01, Architecture Projects Management Limited: Poseidon House, Castle Park, Cambridge, UK, August 23, 1995. Fully available at http://www.ansa.co.uk/ANSATech/95/Primary/155101.pdf [accessed 2007-07-03].
1279. Robert J. Howlett and Lakhmi C. Jain, editors. Proceedings of the Fourth International Conference on Knowledge-Based Intelligent Information Engineering Systems & Allied Technologies (KES'00), August 30 – September 1, 2000, University of Brighton: Sussex, UK. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7803-6400-7. Google Books ID: 369VAAAAMAAJ, IHAfPwAACAAJ, and VuFVAAAAMAAJ.
1280. Peter Hrastnik and Albert Rainer. Web Service Discovery and Composition for Virtual Enterprises. International Journal on Web Services Research (IJWSR), 4(1):23–29, 2007, Idea Group Publishing (Idea Group Inc., IGI Global): New York, NY, USA. doi: 10.4018/jwsr.2007010102. CiteSeerx: 10.1.1.68.5392.
1281. Juraj Hromkovič. Algorithmics for Hard Computing Problems: Introduction to Combinatorial Optimization, Randomization, Approximation, and Heuristics, Texts in Theoretical Computer Science – An EATCS Series. Springer-Verlag GmbH: Berlin, Germany, 2001. isbn: 3-540-66860-8. Google Books ID: Ut1QAAAAMAAJ. OCLC: 46713245. Library of Congress Control Number (LCCN): 2001031301. GBV-Identification (PPN): 319756777. LC Classification: QA76.9.A43 H76 2001.
1282. Juraj Hromkovič and I. Zámečníková. Design and Analysis of Randomized Algorithms: Introduction to Design Paradigms, Texts in Theoretical Computer Science – An EATCS Series. Springer-Verlag GmbH: Berlin, Germany, 2005. doi: 10.1007/3-540-27903-2. isbn: 3-540-23949-9. Google Books ID: EfuMfn8T5_oC. OCLC: 60800705, 318291536, and 474955415. Library of Congress Control Number (LCCN): 2005926236. GBV-Identification (PPN): 524971269. LC Classification: QA274 .H76 2005.
1283. 15th Conference on Technologies and Applications of Artificial Intelligence (TAAI'10), November 18–20, 2010, Hsinchu, Taiwan.
1284. Chien-Chang Hsu and Yao-Wen Tang. An Intelligent Mobile Learning System for On-the-Job Training of Luxury Brand Firms. In AI'06 [2406], pages 749–759, 2006. doi: 10.1007/11941439_79.
1285. P. L. Hsu, R. Lai, and C. C. Chiu. The Hybrid of Association Rule Algorithms and Genetic Algorithms for Tree Induction: An Example of Predicting the Student Course Performance. Expert Systems with Applications – An International Journal, 25(1):51–62, July 2003, Elsevier Science Publishers B.V.: Essex, UK. Imprint: Pergamon Press: Oxford, UK. doi: 10.1016/S0957-4174(03)00005-8.
1286. Ting Hu and Wolfgang Banzhaf. Measuring Rate of Evolution in Genetic Programming Using Amino Acid to Synonymous Substitution Ratio ka/ks. In GECCO'08 [1519], pages 1337–1338, 2008. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/Hu_2008_gecco.html and http://www.cs.mun.ca/~tingh/papers/GECCO08.pdf [accessed 2008-08-08].
1287. Ting Hu and Wolfgang Banzhaf. Nonsynonymous to Synonymous Substitution Ratio ka/ks: Measurement for Rate of Evolution in Evolutionary Computation. In PPSN X [2354], pages 448–457, 2008. doi: 10.1007/978-3-540-87700-4_45. Fully available at http://www.cs.mun.ca/~tingh/papers/PPSN08.pdf [accessed 2009-07-10].
1288. Ting Hu and Wolfgang Banzhaf. Evolvability and Acceleration in Evolutionary Computation. Technical Report MUN-CS-2008-04, Memorial University, Department of Computer Science: St. John's, Canada, October 2008. Fully available at http://www.cs.mun.ca/~tingh/papers/TR08.pdf and http://www.mun.ca/computerscience/research/MUN-CS-2008-04.pdf [accessed 2009-08-05].
1289. Ting Hu and Wolfgang Banzhaf. Evolvability and Speed of Evolutionary Algorithms in the Light of Recent Developments in Biology. Journal of Artificial Evolution and Applications, 2010, Hindawi Publishing Corporation: Nasr City, Cairo, Egypt. doi: 10.1155/2010/568375. Fully available at http://www.cs.mun.ca/~tingh/papers/JAEA10.pdf and http://www.hindawi.com/archive/2010/568375.html [accessed 2010-10-22]. ID: 568375.
1290. Ting Hu, Yuanzhu Peter Chen, Wolfgang Banzhaf, and Robert Benkoczi. An Evolutionary Approach to Planning IEEE 802.16 Networks. In GECCO'09-I [2342], pages 1929–1930, 2009. doi: 10.1145/1569901.1570242. Fully available at http://www.cs.mun.ca/~tingh/papers/GECCO09-network.pdf [accessed 2009-09-05].
1291. Xiaobing Hu, Mark Leeson, and Evor Hines. An Effective Genetic Algorithm for the Network Coding Problem. In CEC'09 [1350], pages 1714–1720, 2009. doi: 10.1109/CEC.2009.4983148. INSPEC Accession Number: 10688743.
1292. Xiaomin Hu, Jun Zhang, and Liming Zhang. Swarm Intelligence Inspired Multicast Routing: An Ant Colony Optimization Approach. In EvoWorkshops'09 [1052], pages 51–60, 2009. doi: 10.1007/978-3-642-01129-0_6.
1293. De-Shuang Huang, Laurent Heutte, and Marco Loog, editors. Advanced Intelligent Computing Theories and Applications. With Aspects of Contemporary Intelligent Computing Techniques – Proceedings of the Third International Conference on Intelligent Computing (ICIC'07-2), August 21–24, 2007, Qingdao, Shandong, China, volume 2 in Communications in Computer and Information Science. Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-74282-1. Fully available at http://www.springerlink.com/content/k74022/ [accessed 2008-03-28].
1294. Joshua (Zhexue) Huang, Longbing Cao, and Jaideep Srivastava, editors. Proceedings of the 15th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD'11), May 24–27, 2011, Shenzhen, Guangdong, China, Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. Partly available at http://pakdd2011.pakdd.org/ [accessed 2011-02-28].
1295. Sheng Huang, Xiaoling Wang, and Aoying Zhou. Efficient Web Service Composition Based on Syntactical Matching. In EEE'05 [1371], pages 782–783, 2005. doi: 10.1109/EEE.2005.66. See also [317].
1296. Thomas S. Huang, Anton Nijholt, Maja Pantic, and Alex Pentland, editors. Artificial Intelligence for Human Computing, ICMI 2006 and IJCAI 2007 International Workshops, Banff, Canada, November 3, 2006, Hyderabad, India, January 6, 2007, Revised Selected and Invited Papers (ICMI'06 / IJCAI'07), 2006–2007, Banff, AB, Canada and Hyderabad, India, volume 4451 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. See also [2797].
1297. V. Ling Huang, Ponnuthurai Nagaratnam Suganthan, Alex Kai Qin, and S. Baskar. Multi-objective Differential Evolution with External Archive and Harmonic Distance-Based Diversity Measure. Technical Report, Nanyang Technological University (NTU): Singapore, 2006. Fully available at http://www3.ntu.edu.sg/home/epnsugan/index_files/papers/Tech-Report-MODE-2005.pdf [accessed 2009-07-17].
1298. Yueh-Min Huang, Yen-Ting Lin, and Shu-Chen Cheng. An Adaptive Testing System for Supporting Versatile Educational Assessment. Computers & Education – An International Journal, 52(1):53–67, January 2009, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.compedu.2008.06.007.
1299. Zhenqiu Huang, Wei Jiang, Songlin Hu, and Zhiyong Liu. Effective Pruning Algorithm for QoS-Aware Service Composition. In CEC'09 [1245], pages 519–522, 2009. doi: 10.1109/CEC.2009.41. INSPEC Accession Number: 10839126. See also [1571, 1801].
1300. Claudia Huber and Günter Wächtershäuser. Peptides by Activation of Amino Acids with CO on (Ni,Fe)S Surfaces: Implications for the Origin of Life. Science Magazine, 281(5377):670–672, July 31, 1998, American Association for the Advancement of Science (AAAS): Washington, DC, USA and HighWire Press (Stanford University): Cambridge, MA, USA. doi: 10.1126/science.281.5377.670. PubMed ID: 9685253.
1301. Lorenz Huelsbergen. Toward Simulated Evolution of Machine Learning Interaction. In GP'96 [1609], pages 315–320, 1996. Fully available at http://cognet.mit.edu/library/books/mitpress/0262611279/cache/chap41.pdf [accessed 2010-11-21].
1302. Barry D. Hughes. Random Walks and Random Environments: Volume 1: Random Walks. Oxford University Press, Inc.: New York, NY, USA, May 16, 1995. isbn: 0-19-853788-3. Google Books ID: QhOen_t0LeQC.
1303. Evan J. Hughes. Evolutionary Many-Objective Optimisation: Many Once or One Many? In CEC'05 [641], pages 222–227, volume 1, 2005. doi: 10.1109/CEC.2005.1554688. Fully available at http://www.lania.mx/~ccoello/hughes05.pdf.gz [accessed 2009-07-18]. INSPEC Accession Number: 9004154.
1304. Evan J. Hughes. MSOPS-II: A General-Purpose Many-Objective Optimiser. In CEC'07 [1343], pages 3944–3951, 2007. doi: 10.1109/CEC.2007.4424985. INSPEC Accession Number: 9889597.
1305. Evan J. Hughes. Fitness Assignment Methods for Many-Objective Problems. In Multiobjective Problem Solving from Nature – From Concepts to Applications [1554], Chapter IV-1, pages 307–329. Springer New York: New York, NY, USA, 2008. doi: 10.1007/978-3-540-72964-8_15.
1306. Roger N. Hughes, editor. NATO Advanced Research Workshop on Behavioural Mechanisms of Food Selection, July 17–21, 1989, Gregynog, Wales, UK, volume 20 in NATO Advanced Science Institutes (ASI) Series G. Ecological Sciences (NATO ASI). Springer-Verlag GmbH: Berlin, Germany. isbn: 0-387-51762-6 and 3-540-51762-6. OCLC: 20854001, 311111175, and 472872808. Library of Congress Control Number (LCCN): 89026334. GBV-Identification (PPN): 025232932 and 157637646. LC Classification: QL756.5 .N38 1989.
1307. F. J. Humphreys and M. Hatherly. Recrystallization and Related Annealing Phenomena,
Pergamon Materials Series. Elsevier Science Publishers B.V.: Amsterdam, The Netherlands,
2004. isbn: 0080441645. Google Books ID: 52Gloa7HxGsC.
1308. Ming-Chuan Hung, Man-Lin Huang, Don-Lin Yang, and Nien-Lin Hsueh. Efficient Approaches for Materialized Views Selection in a Data Warehouse. Information Sciences – Informatics and Computer Science, Intelligent Systems, Applications: An International Journal, 177(6):1333–1348, March 2007, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.ins.2006.09.007.
1309. Phil Husbands and Inman Harvey, editors. Proceedings of the Fourth European Conference on Advances in Artificial Life (ECAL'97), July 28–31, 1997, Brighton, UK, Complex Adaptive Systems, Bradford Books. MIT Press: Cambridge, MA, USA. isbn: 0-262-58157-4. GBV-Identification (PPN): 234965770.
1310. Phil Husbands and Jean-Arcady Meyer, editors. Proceedings of the First European Workshop on Evolutionary Robotics (EvoRobot'98), April 16–17, 1998, Paris, France, volume 1468/1998 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-64957-3. OCLC: 246316294, 316835898, 441663096, 471681293, and 502374580. Library of Congress Control Number (LCCN): 98039439.
1311. Phil Husbands and Frank Mill. Simulated Co-Evolution as the Mechanism for Emergent Planning and Scheduling. In ICGA'91 [254], pages 264–270, 1991. Fully available at http://www.informatics.sussex.ac.uk/users/philh/pubs/icga91Husbands.pdf [accessed 2010-09-13].
1312. Marcus Hutter. Fitness Uniform Selection to Preserve Genetic Diversity. In CEC'02 [944], pages 783–788, 2002. doi: 10.1109/CEC.2002.1007025. Fully available at ftp://ftp.idsia.ch/pub/techrep/IDSIA-01-01.ps.gz, http://www.hutter1.net/ai/pfuss.pdf, http://www.hutter1.net/ai/pfuss.ps, and http://www.hutter1.net/ai/pfuss.tar [accessed 2011-04-29]. CiteSeerx: 10.1.1.106.9784. arXiv ID: cs/0103015. INSPEC Accession Number: 7336007. Also: Technical Report IDSIA-01-01, 17 January 2001. See also [1313, 1708, 1709].
1313. Marcus Hutter and Shane Legg. Fitness Uniform Optimization. IEEE Transactions on Evolutionary Computation (IEEE-EC), 10(5):568–589, October 2006, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/TEVC.2005.863127. Fully available at http://www.idsia.ch/idsiareport/IDSIA-16-06.pdf and http://www.vetta.org/documents/FitnessUniformOptimization.pdf [accessed 2011-03-01]. CiteSeerx: 10.1.1.137.2999. arXiv ID: cs/0610126v1. INSPEC Accession Number: 9101740. See also [1312, 1708, 1709].
1314. Martijn A. Huynen. Exploring Phenotype Space Through Neutral Evolution. Journal of Molecular Evolution, 43(3):165–169, September 1996, Springer New York: New York, NY, USA. doi: 10.1007/PL00006074. Fully available at http://www.santafe.edu/research/publications/wpabstract/199510100 [accessed 2009-07-10]. CiteSeerx: 10.1.1.21.3483. SFI Working Paper Abstract 95-10-100.
1315. Martijn A. Huynen, Peter F. Stadler, and Walter Fontana. Smoothness within Ruggedness: The Role of Neutrality in Adaptation. Proceedings of the National Academy of Science of the United States of America (PNAS), 93(1):397–401, January 9, 1996, National Academy of Sciences: Washington, DC, USA. Fully available at http://fontana.med.harvard.edu/www/Documents/WF/Papers/neutrality.pdf and http://www.pnas.org/cgi/reprint/93/1/397.pdf [accessed 2011-12-05]. CiteSeerx: 10.1.1.31.1013. Communicated by Hans Frauenfelder, Los Alamos National Laboratory, Los Alamos, NM, September 20, 1995 (received for review June 29, 1995).
1316. Chii-Ruey Hwang. Simulated Annealing: Theory and Applications. Acta Applicandae Mathematicae: An International Survey Journal on Applying Mathematics and Mathematical Applications, 12(1):108–111, May 1988, Springer Netherlands: Dordrecht, Netherlands. doi: 10.1007/BF00047572. Academia Sinica. Review of [2776].
I
1317. Hitoshi Iba. Emergent Cooperation for Multiple Agents Using Genetic Programming. In PPSN IV [2818], pages 32–41, 1996. doi: 10.1007/3-540-61723-X_967.
1318. Hitoshi Iba. Evolutionary Learning of Communicating Agents. Information Sciences – Informatics and Computer Science, Intelligent Systems, Applications: An International Journal, 108(1–4):181–205, July 1998, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/S0020-0255(97)10055-X. See also [1320].
1319. Hitoshi Iba. Evolving Multiple Agents by Genetic Programming. In Advances in Genetic Programming [2569], pages 447–466. MIT Press: Cambridge, MA, USA, 1996. Fully available at http://www.cs.bham.ac.uk/~wbl/aigp3/ch19.pdf [accessed 2007-10-03]. See also [1320].
1320. Hitoshi Iba, Tishihide Nozoe, and Kanji Ueda. Evolving Communicating Agents based on Genetic Programming. In CEC'97 [173], pages 297–302, 1997. doi: 10.1109/ICEC.1997.592321. Fully available at http://lis.epfl.ch/~markus/References/Iba97.pdf [accessed 2008-07-28]. INSPEC Accession Number: 5573068. See also [1318, 1319].
1321. Toshihide Ibaraki, Koji Nonobe, and Mutsunori Yagiura, editors. Fifth Metaheuristics International Conference – Metaheuristics: Progress as Real Problem Solvers (MIC'03), August 25–28, 2003, Kyoto International Conference Hall: Kyoto, Japan, volume 32 in Operations Research/Computer Science Interfaces Series. Springer-Verlag GmbH: Berlin, Germany. isbn: 0-387-25382-3. Partly available at http://www-or.amp.i.kyoto-u.ac.jp/mic2003/ [accessed 2007-09-12]. Published in 2005.
1322. Proceedings of the IEEE Aerospace Conference, March 18–20, 2000, Big Sky, MT, USA. IEEE Aerospace and Electronic Systems Society, IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7803-5846-5 and 0-7803-5847-3. INSPEC Accession Number: 6763001. LC Classification: TL3000.A1 I22 2000. Catalogue no.: 00TH8484.
1323. Joint IEEE/IEE Workshop on Natural Algorithms in Signal Processing (NASP93), November 14–16, 1993, Chelmsford, Essex, UK. IEEE Computer Society: Piscataway, NJ, USA.
1324. 1994 IEEE World Congress on Computational Intelligence (WCCI94), June 26 – July 2, 1994,
Walt Disney World Dolphin Hotel: Orlando, FL, USA. IEEE Computer Society: Piscataway,
NJ, USA.
1325. Proceedings of the 1995 American Control Conference (ACC), June 21–23, 1995, Seattle, WA, USA. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7803-2445-5 and 0-7803-2446-3. Google Books ID: EvlVAAAAMAAJ, MfhVAAAAMAAJ, e_VVAAAAMAAJ, and pPlVAAAAMAAJ. INSPEC Accession Number: 5080557.
1326. Proceedings of the Sixth International Symposium on Micro Machine and Human Science (MHS95), October 4–6, 1995, Nagoya, Japan. IEEE Computer Society: Piscataway, NJ, USA. doi: 10.1109/MHS.1995.494249. isbn: 0-7803-2676-8. Google Books ID: wqFVAAAAMAAJ. INSPEC Accession Number: 5303058.
1327. Second IEEE International Conference on Engineering of Complex Computer Systems (ICECCS96), October 21–25, 1996, Montreal, QC, Canada. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-8186-7614-0. INSPEC Accession Number: 5436943. Held jointly with 6th CSESAW and 4th IEEE RTAW.
1328. Digest of Technical Papers of the IEEE/ACM International Conference on Computer-Aided
Design (ICCAD), November 9–13, 1997, San Jose, CA, USA. IEEE Computer Society:
Piscataway, NJ, USA. isbn: 0-8186-8200-0, 0-8186-8201-9, and 0-8186-8202-7.
issn: 1093-3152. Order No.: PRO8200. Catalogue no.: 97CB36142. ACM Order No.: 477973.
1329. Proceedings of the 22nd IEEE Conference on Local Computer Networks (LCN97), November 2–5, 1997, Minneapolis, MN, USA. IEEE Computer Society: Piscataway, NJ, USA. doi: 10.1109/LCN.1997.630893. isbn: 0-8186-8141-1, 0-8186-8142-X, and 0-8186-8143-8. Google Books ID: 5e1VAAAAMAAJ, ANQWKwAACAAJ, and j_4GMwAACAAJ. OCLC: 38188270 and 60166824. issn: 0742-1303. Order No.: 97TB100179 and PR08141.
1330. 1998 IEEE World Congress on Computational Intelligence (WCCI98), May 4–9, 1998, Egan
Civic & Convention Center and Anchorage Hilton Hotel: Anchorage, AK, USA. IEEE Com-
puter Society: Piscataway, NJ, USA.
1331. IEEE International Conference on Systems, Man, and Cybernetics – Human Communication and Cybernetics (SMC99), October 12–15, 1999, Tokyo University, Department of Aeronautics and Space Engineering: Tokyo, Japan. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7803-5731-0, 0-7803-5732-9, and 0-7803-5733-7. Google Books ID: J-kOAAAACAAJ and Yv6cGAAACAAJ. OCLC: 43823469 and 213032676.
1332. Proceedings of the 2000 American Control Conference (ACC), June 28–30, 2000, Hyatt
Regency Chicago: Chicago, IL, USA. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 0-7803-5519-9. Google Books ID: pQANAAAACAAJ and xD11HAAACAAJ. INSPEC
Accession Number: 6805288.
1333. Proceedings of the IEEE Congress on Evolutionary Computation (CEC00), July 16–19, 2000, La Jolla Marriott Hotel: La Jolla, CA, USA. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7803-6375-2. Google Books ID: ttlVAAAAMAAJ. Library of Congress Control Number (LCCN): 00-018644. Catalogue no.: 00TH8512. CEC-2000 – A joint meeting of the IEEE, Evolutionary Programming Society, Galesia, and the IEE.
1334. Proceedings of the IEEE Congress on Evolutionary Computation (CEC01), May 27–30, 2001, COEX, World Trade Center: Gangnam-gu, Seoul, Korea. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7803-6657-3 and 0-7803-6658-1. Google Books ID: R6FVAAAAMAAJ, WXH_PQAACAAJ, XaBVAAAAMAAJ, and 1iEAAAACAAJ. Catalogue no.: 01TH8546. CEC-2001 – A joint meeting of the IEEE, Evolutionary Programming Society, Galesia, and the IEE.
1335. Proceedings of the American Control Conference (ACC02), May 8–10, 2002, Honeywell,
Houston, TX, USA. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7803-7298-0.
Google Books ID: 98SNAAAACAAJ and IxlaHAAACAAJ. issn: 0743-1619.
1336. IEEE AP-S International Symposium and USNC/URSI National Radio Science Meeting,
June 16–21, 2002, San Antonio, TX, USA. IEEE Computer Society: Piscataway, NJ, USA.
1337. Proceedings of the 1st IEEE International Conference on Cognitive Informatics (ICCI02), August 19–20, 2002, Calgary, AB, Canada. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7695-1724-2.
1338. 2002 IEEE World Congress on Computational Intelligence (WCCI02), May 12–17, 2002, Hilton Hawaiian Village Hotel (Beach Resort & Spa): Honolulu, HI, USA. IEEE Computer
Society: Piscataway, NJ, USA.
1339. Proceedings of the 4th IEEE International Conference on Cognitive Informatics (ICCI05),
August 8–10, 2005, University of California: Irvine, CA, USA. IEEE Computer Society:
Piscataway, NJ, USA.
1340. 6th International Conference on Hybrid Intelligent Systems (HIS06), December 13–15, 2006,
AUT Technology Park: Auckland, New Zealand. IEEE Computer Society: Piscataway, NJ,
USA. isbn: 0-7695-2662-4. Partly available at http://his06.hybridsystem.com/
[accessed 2007-09-01].
1341. 2006 IEEE World Congress on Computational Intelligence (WCCI06), July 16–21, 2006, Sheraton Vancouver Wall Centre Hotel: Vancouver, BC, Canada. IEEE Computer Society: Piscataway, NJ, USA.
1342. Proceedings of the 2nd International Conference on Bio-Inspired Models of Network, Information, and Computing Systems (BIONETICS07), December 10–13, 2007, Radisson SAS Béke Hotel: Budapest, Hungary. IEEE Computer Society: Piscataway, NJ, USA. isbn: 963-9799-05-X. Partly available at http://www.bionetics.org/2007/ [accessed 2011-02-28]. OCLC: 259234301 and 288980779. Catalogue no.: 08EX2011.
1343. Proceedings of the IEEE Congress on Evolutionary Computation (CEC07), September 25–28, 2007, Swissôtel The Stamford: Singapore. IEEE Computer Society: Piscataway, NJ, USA. isbn: 1-4244-1339-7. Google Books ID: duo6GQAACAAJ and yBzVPQAACAAJ. OCLC: 257656187.
1344. Proceedings of IEEE Joint Conference on E-Commerce Technology (9th CEC) and Enterprise Computing, E-Commerce and E-Services (4th EEE) (CEC/EEE07), July 23–26, 2007,
National Center of Sciences: Tokyo, Japan. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 0-7695-2913-5. Partly available at http://ieejc.ise.eng.osaka-u.ac.jp/
CEC2007/ [accessed 2011-02-28]. Google Books ID: FXdEPAAACAAJ. OCLC: 170925181 and
288976020.
1345. Proceedings of the First IEEE Symposium Series on Computational Intelligence (SSCI07),
April 1–5, 2007, Honolulu, HI, USA. IEEE Computer Society: Piscataway, NJ, USA.
1346. Proceedings of IEEE Joint Conference on E-Commerce Technology (10th CEC) and Enterprise Computing, E-Commerce and E-Services (5th EEE) (CEC/EEE08), July 21–24, 2008, Washington, DC, USA. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 0-7695-3340-X. Partly available at http://cec2008.cs.georgetown.edu/ [ac-
cessed 2011-02-28]. Google Books ID: XrakOgAACAAJ. OCLC: 401711110. Library of Congress
Control Number (LCCN): 2005928254. BMS Part Number CFP08321-PRT.
1347. Third IEEE International Services Computing Contest, 2008, Honolulu, HI, USA. IEEE Com-
puter Society: Piscataway, NJ, USA. Part of [1348].
1348. Proceedings of the 4th IEEE Congress on Services: Part I (SERVICES08-I), July 6–11, 2008,
Honolulu, HI, USA. IEEE Computer Society: Piscataway, NJ, USA.
1349. Third Asia International Conference on Modelling & Simulation (AMS09), May 25–26, 2009,
Bandung, Bali. IEEE Computer Society: Piscataway, NJ, USA.
1350. 10th IEEE Congress on Evolutionary Computation (CEC09), May 18–21, 2009, Trondheim,
Norway. IEEE Computer Society: Piscataway, NJ, USA. isbn: 1-4244-2958-7. Google
Books ID: 1-RCOgAACAAJ.
1351. SERVICES Cup09, 2009, Los Angeles, CA, USA. IEEE Computer Society: Piscataway, NJ,
USA. Part of [3068].
1352. Proceedings of the 2009 International Conference on Systems, Man, and Cybernetics
(SMC09), October 11–14, 2009, San Antonio, TX, USA. IEEE Computer Society: Piscataway, NJ, USA.
1353. Proceedings of the Second IEEE Symposium Series on Computational Intelligence (SSCI09),
March 30 – April 2, 2009, Sheraton Music City Hotel: Nashville, TN, USA. IEEE Computer
Society: Piscataway, NJ, USA.
1354. 11th IEEE Congress on Evolutionary Computation (CEC10), 2010, Centre de Convencions
Internacional de Barcelona: Barcelona, Catalonia, Spain. IEEE Computer Society: Piscat-
away, NJ, USA. Part of [1356].
1355. Proceedings of the Sixth International Conference on Natural Computation (ICNC10), August 10–12, 2010, Yantai, Shandong, China. IEEE Computer Society: Piscataway, NJ, USA. Catalogue no.: CFP10CNC-PRT.
1356. 2010 IEEE World Congress on Computational Intelligence (WCCI10), July 18–23, 2010, Centre de Convencions Internacional de Barcelona: Barcelona, Catalonia, Spain. IEEE Computer Society: Piscataway, NJ, USA.
1357. Proceedings of the Third IEEE Symposium Series on Computational Intelligence (SSCI11),
April 11–15, 2011, Paris, France. IEEE Computer Society: Piscataway, NJ, USA.
1358. Proceedings of the 2nd International IEEE Conference on Tools for Artificial Intelligence (TAI90), November 6–9, 1990, Hyatt Hotel, Dulles International Airport: Herndon, VA, USA. IEEE Computer Society Press: Los Alamitos, CA, USA. isbn: 0-8186-2084-6. Google Books ID: 7KaENQAACAAJ, YrY5AAAACAAJ, and nPtVAAAAMAAJ. OCLC: 23232686, 224971554, and 289019938. Catalogue no.: 90CH2915-7.
1359. Proceedings of the Second Australia and New Zealand Conference on Intelligent Information Systems (ANZIIS94), November 29 – December 2, 1994, Brisbane, QLD, Australia. IEEE Computer Society Press: Los Alamitos, CA, USA.
1360. Proceedings of the Seventh Florida Artificial Intelligence Research Symposium (FLAIRS94), May 5–7, 1994, Pensacola Beach, FL, USA. IEEE Computer Society Press: Los Alamitos,
CA, USA.
1361. Second IEEE International Conference on Evolutionary Computation (CEC95), November 29 – December 1, 1995, University of Western Australia: Perth, WA, Australia. IEEE Computer Society Press: Los Alamitos, CA, USA. isbn: 0-7803-2759-4. Google Books ID: BJdVAAAAMAAJ, uQAAAACAAJ, and zpZVAAAAMAAJ. INSPEC Accession Number: 5248032. CEC-95 – Editors not given by IEEE; organisers David Fogel and Chris deSilva.
1362. Proceedings of the IEEE International Conference on Neural Networks (ICNN95), November 27 – December 1, 1995, University of Western Australia: Perth, WA, Australia. IEEE Computer Society Press: Los Alamitos, CA, USA. isbn: 0-7803-2768-3 and 0-7803-2769-1. Google Books ID: vy-hcQAACAAJ.
1363. Proceedings of the 17th International Conference on Distributed Computing Systems
(ICDCS97), May 28–30, 1997, Baltimore, MD, USA. IEEE Computer Society Press: Los
Alamitos, CA, USA. isbn: 0-8186-7813-5. Partly available at http://computer.org/
proceedings/icdcs/7813/7813toc.htm [accessed 2011-04-09].
1364. IEEE International Conference on Systems, Man, and Cybernetics (SMC98), October 11–14, 1998, La Jolla, CA, USA, volume 1–5. IEEE Computer Society Press: Los Alamitos,
CA, USA. doi: 10.1109/ICSMC.1998.725373. isbn: 0-7803-4778-1. INSPEC Accession
Number: 6175788. issn: 1062-922X. Catalogue no.: 98CH36218.
1365. Proceedings of the IEEE International Conference on Cluster Computing (CLUSTER 2000),
November 28 – December 1, 2000, Chemnitz, Sachsen, Germany. IEEE Computer Society Press: Los Alamitos, CA, USA. isbn: 0-7695-0896-0. INSPEC Accession Number: 6805968.
1366. IEEE International Conference on Systems, Man, and Cybernetics. E-Systems and e-Man
for Cybernetics in Cyberspace (SMC01), October 7–10, 2001, Tucson, AZ, USA. IEEE Computer Society Press: Los Alamitos, CA, USA. isbn: 0-7803-7087-2. INSPEC Accession Number: 7162685. issn: 1062-922X. Catalogue no.: 01CH37236.
1367. 15th IEEE International Conference on Tools with Artificial Intelligence (ICTAI03), November 3–5, 2003, Sacramento, CA, USA. IEEE Computer Society Press: Los Alamitos, CA, USA. isbn: 0-7695-2038-3. Google Books ID: ILRVAAAAMAAJ, IvUnMwAACAAJ, and nVRFAgAACAAJ. issn: 1082-3409.
1368. Proceedings of the 15th International Conference on Scientific and Statistical Database Management (SSDBM03), July 9–11, 2003, Cambridge, MA, USA. IEEE Computer Society
Press: Los Alamitos, CA, USA. isbn: 0-7695-1964-4.
1369. Proceedings of the IEEE Congress on Evolutionary Computation (CEC04), June 20–23, 2004, Portland, OR, USA. IEEE Computer Society Press: Los Alamitos, CA, USA. isbn: 0-7803-8515-2. Google Books ID: I8_5AAAACAAJ. CEC 2004 – A joint meeting of the IEEE, the EPS, and the IEE.
1370. International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC05), November 28–30, 2005, Vienna, Austria. IEEE Computer Society Press: Los Alamitos, CA, USA. isbn: 0-7695-2504-0.
1371. Proceedings of the 2005 IEEE International Conference on e-Technology, e-Commerce, and e-Service (EEE05), March 29 – April 1, 2005, Hong Kong Baptist University, Lam Woo International Conference Center: Hong Kong (Xiānggǎng), China. IEEE Computer Society Press: Los Alamitos, CA, USA. isbn: 0-7695-2274-2. Library of Congress Control Number (LCCN): 2005. Order No.: P2274.
1372. Proceedings of the Tenth International Database Engineering and Applications Symposium
(IDEAS06), December 11–14, 2006, Delhi, India. IEEE Computer Society Press: Los Alamitos, CA, USA. isbn: 0-7695-2577-6.
1373. Proceedings of the IEEE International Conference on Services Computing (SCC06), September 18–22, 2006, Chicago, IL, USA. IEEE Computer Society Press: Los Alamitos, CA, USA.
isbn: 0-7695-2670-5. Library of Congress Control Number (LCCN): 2006928886. Order
No.: P2670.
1374. Second IEEE International Services Computing Contest (SCContest07), 2007, Salt Lake
City, UT, USA. IEEE Computer Society Press: Los Alamitos, CA, USA. Part of [3069].
1375. Proceedings of the Ninth International Conference on Hybrid Intelligent Systems (HIS09),
August 12–14, 2009, Shenyang Normal University: Shenyang, Liáoníng, China. IEEE Computer Society Press: Los Alamitos, CA, USA.
1376. Proceedings of the International Conference on Signal Processing Systems (ICSPS09),
May 15–17, 2009, Singapore. IEEE Computer Society Press: Los Alamitos, CA, USA. Library
of Congress Control Number (LCCN): 2009903184. Order No.: P3654.
1377. Proceedings of the Tenth International Conference on Hybrid Intelligent Systems (HIS10),
August 23–25, 2010, Atlanta, GA, USA. IEEE Computer Society Press: Los Alamitos, CA,
USA.
1378. Proceedings of the IEEE International Conference on Service-Oriented Computing and Applications (SOCA10), December 13–15, 2010, Perth, WA, Australia. IEEE Computer Society
Press: Los Alamitos, CA, USA. isbn: 978-1-4244-9802-4. Partly available at http://
conferences.computer.org/soca/2010/ [accessed 2011-02-28].
1379. Proceedings of the 7th International Conference on Natural Computation (ICNC11), July 26–28, 2011, Shànghǎi, China. IEEE Computer Society Press: Los Alamitos, CA, USA.
1380. Proceedings of the 3rd International Workshop on Intelligent Systems and Applications (ISA11), May 28–29, 2011, Wǔhàn, Húběi, China. IEEE Computer Society Press: Los Alamitos, CA, USA. Catalogue no.: CFP1160G-PRT.
1381. Proceedings of the 3rd International Conference on Soft Computing and Pattern Recognition
(SoCPaR11), October 14–16, 2011, Dàlián, Liáoníng, China. IEEE Computer Society Press: Los Alamitos, CA, USA.
1382. Proceedings of the 15th International Conference on Distributed Computing Systems
(ICDCS95), May 30 – June 2, 1995, Vancouver, BC, Canada. IEEE Computer Society: Washington, DC, USA. doi: 10.1109/ICDCS.1995.499995. isbn: 0-8186-7025-8. Google Books
ID: oWdCPgAACAAJ. INSPEC Accession Number: 4998824. issn: 1063-6927.
1383. 10th Euromicro Workshop on Parallel, Distributed and Network-based Processing (PDP02),
January 9–11, 2002, Canary Islands, Spain. IEEE Computer Society: Washington, DC, USA.
isbn: 0-7695-1444-8.
1384. Proceedings of the 21st IEEE Symposium on Reliable Distributed Systems (SRDS02), October 13–16, 2002, Ōsaka University: Suita, Ōsaka, Japan. IEEE Computer Society: Washington, DC, USA. isbn: 0-7695-1659-9, 0-7695-1660-2, and 0-7695-1661-0. Google Books ID: KJVVAAAAMAAJ, k8G_AAAACAAJ, and uC8GAAAACAAJ. OCLC: 50818238 and 423964963. issn: 1060-9857.
1385. International Symposium on Microarchitecture – Proceedings of the 15th Annual Workshop on Microprogramming (MICRO 15), October 5–7, 1982, Palo Alto, CA, USA. IEEE (Institute of Electrical and Electronics Engineers): Piscataway, NJ, USA. isbn: 9993629073.
1386. Proceedings of the 1999 IEEE International Conference on Robotics and Automation (ICRA99), May 10–15, 1999, Marriott Hotel, Renaissance Center: Detroit, MI, USA. IEEE (Institute of Electrical and Electronics Engineers): Piscataway, NJ, USA. isbn: 0-7803-5180-0, 0-7803-5181-9, 0-7803-5182-7, and 0-7803-5183-5. Library of Congress Control Number (LCCN): 90-640158. issn: 1050-4729. Catalogue no.: 99CH36288 and 99CH36288C.
1387. Proceedings of the 2003 IEEE Swarm Intelligence Symposium (SIS03), April 24–26, 2003, Indianapolis, IN, USA, IEEE International Symposia on Swarm Intelligence. IEEE (Institute of Electrical and Electronics Engineers): Piscataway, NJ, USA. doi: 10.1109/SIS.2003.1202237. Partly available at http://www.computelligence.org/sis/2003/ [accessed 2007-08-26]. Catalogue no.: 03EX706.
1388. Proceedings of Fifth World Congress on Intelligent Control and Automation (WCICA04),
June 15–19, 2004, Hángzhōu, Zhèjiāng, China. IEEE (Institute of Electrical and Electronics
Engineers): Piscataway, NJ, USA. isbn: 0-7803-8273-0. Library of Congress Control
Number (LCCN): 2003115847. Catalogue no.: 04EX788.
1389. Proceedings of the 2006 IEEE International Conference on Robotics and Automation
(ICRA06), May 15–19, 2006, Hilton Walt Disney World, Walt Disney World Resort: Orlando, FL, USA. IEEE (Institute of Electrical and Electronics Engineers): Piscataway, NJ,
USA. issn: 1050-4729.
1390. Proceedings of the 2007 International Conference on Computational Intelligence and Security
(CIS07), December 15–19, 2007, Harbin, Heilongjiang, China. IEEE (Institute of Electrical
and Electronics Engineers): Piscataway, NJ, USA. isbn: 0-7695-3072-9.
1391. Proceedings of the IEEE International Electric Machines & Drives Conference (IEMDC07), May 3–5, 2007, Antalya, Turkey. IEEE (Institute of Electrical and Electronics Engineers): Piscataway, NJ, USA. isbn: 1-4244-0742-7 and 1-4244-0743-5. OCLC: 156998236 and 244627557. Library of Congress Control Number (LCCN): 2006935481. GBV-Identification (PPN): 540144487. LC Classification: TK2000 .I35 2007. Catalogue no.: 07EX1597.
1392. Proceedings of the 2008 International Conference on Computational Intelligence and Security
(CIS08), December 13–17, 2008, Sūzhōu, Jiāngsū, China. IEEE (Institute of Electrical and
Electronics Engineers): Piscataway, NJ, USA.
1393. Proceedings of the 2009 International Conference on Computational Intelligence and Security
(CIS09), December 11–14, 2009, Běijīng, China. IEEE (Institute of Electrical and Electronics
Engineers): Piscataway, NJ, USA.
1394. 6th International Workshop on Memetic Algorithms (WOMA09), March 30 – April 2, 2009,
Sheraton Music City Hotel: Nashville, TN, USA. IEEE (Institute of Electrical and Electron-
ics Engineers): Piscataway, NJ, USA. Part of IEEE Symposium Series on Computational
Intelligence.
1395. Proceedings of the Eastern Joint Computer Conference (EJCC) – Papers and Discussions Presented at the Joint IRE–AIEE–ACM Computer Conference, December 1–3, 1959, Boston, MA, USA, volume 16. IEEE (Institute of Electrical and Electronics Engineers): Piscataway, NJ, USA, Association for Computing Machinery (ACM): New York, NY, USA, and Institute of Radio Engineers: New York, NY, USA. doi: 10.1109/AFIPS.1959.85. asin: B001BS7LVG.
1396. Christian Igel. Causality of Hierarchical Variable Length Representations. In CEC98 [2496], pages 324–329, 1998. doi: 10.1109/ICEC.1998.699753. Fully available at http://www.neuroinformatik.ruhr-uni-bochum.de/PEOPLE/igel/CoHVLR.ps.gz [accessed 2009-07-10]. CiteSeerˣ: 10.1.1.47.9002.
1397. Christian Igel and Marc Toussaint. On Classes of Functions for which No Free Lunch Results Hold. Information Processing Letters, 86(6):317–321, June 30, 2003, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/S0020-0190(03)00222-9. CiteSeerˣ: 10.1.1.72.4278 and 10.1.1.8.6772.
1398. Christian Igel and Marc Toussaint. Recent Results on No-Free-Lunch Theorems for Optimization. arXiv.org, Cornell University Library: Ithaca, NY, USA, March 31, 2003. Fully
available at http://www.citebase.org/abstract?id=oai:arXiv.org:cs/0303032
[accessed 2008-03-28]. arXiv ID: abs/cs/0303032.
1399. Auke Jan Ijspeert, Masayuki Murata, and Naoki Wakamiya, editors. Revised Selected Papers
of the First International Workshop on Biologically Inspired Approaches to Advanced Information Technology (BioADIT04), January 29–30, 2004, Lausanne, Switzerland, volume
3141/2004 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/b101281. isbn: 3-540-23339-3. Google Books ID: PdYn-iHZFDYC
and WOR7wu_jMMQC. OCLC: 56878906 and 56922928. Library of Congress Control Number
(LCCN): 2004113283.
1400. Kokolo Ikeda, Hajime Kita, and Shigenobu Kobayashi. Failure of Pareto-based MOEAs:
Does Non-Dominated Really Mean Near to Optimal? In CEC01 [1334], pages 957–962, volume 2, 2001. doi: 10.1109/CEC.2001.934293. INSPEC Accession Number: 7018245.
1401. Michael Iles and Dwight Lorne Deugo. A Search for Routing Strategies in a Peer-to-
Peer Network Using Genetic Programming. In SRDS02 [1384], pages 341–346, 2002.
doi: 10.1109/RELDIS.2002.1180207. INSPEC Accession Number: 7516795.
1402. Lester Ingber. Adaptive Simulated Annealing (ASA): Lessons learned. Control and Cybernetics, 25(1):33–54, 1996, Polish Academy of Sciences (Polska Akademia Nauk, PAN): Poland. Fully available at http://www.ingber.com/asa96_lessons.pdf [accessed 2010-09-25] and http://www.ingber.com/asa96_lessons.ps.gz. CiteSeerˣ: 10.1.1.15.2777.
1403. Lester Ingber. Simulated Annealing: Practice versus Theory. Mathematical and Computer Modelling, 18(11):29–57, November 1993, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/0895-7177(93)90204-C. Fully available at http://www.ingber.com/asa93_sapvt.pdf [accessed 2009-09-18]. CiteSeerˣ: 10.1.1.15.1046.
1404. Proceedings of the 3rd International Conference on Agents and Artificial Intelligence (ICAART11), January 28–30, 2011, Rome, Italy. Institute for Systems and Technologies of Information, Control and Communication (INSTICC) Press: Setúbal, Portugal.
1405. Proceedings of the 2nd International Conference on Engineering Optimization (EngOpt10),
September 6–9, 2010, Instituto Superior Técnico (IST) Congress Center: Lisbon, Portugal.
1406. Proceedings of the World Congress on Engineering and Computer Science (WCECS07), October 26–27, 2007, University of California: Berkeley, CA, USA, Lecture Notes in Engineering and Computer Science. International Association of Engineers (IAENG): Hong Kong (Xiānggǎng), China and Newswood Publications Ltd.: Hong Kong (Xiānggǎng), China. isbn: 988-98671-6-8. Google Books ID: Lg8PLwAACAAJ. OCLC: 184910962.
1407. Proceedings of the IASTED International Conference on Databases and Applications
(DBA06), February 13–15, 2006, Innsbruck, Austria. International Association of Science
and Technology for Development (IASTED): Calgary, AB, Canada. Part of [1408].
1408. Proceedings of the 24th IASTED International Multi-Conference on Applied Informatics
(AI06), February 14–16, 2006, Innsbruck, Austria. International Association of Science and
Technology for Development (IASTED): Calgary, AB, Canada.
1409. Proceedings of the Tenth IASTED International Conference on Parallel and Distributed Com-
puting and Systems (PDCS98), October 1–4, 1998, Las Vegas, NV, USA. International Association of Science and Technology for Development (IASTED): Calgary, AB, Canada and
ACTA Press: Calgary, AB, Canada.
1410. Proceedings of the Eleventh IASTED International Conference on Parallel and Distributed
Computing and Systems (PDCS99), November 3–6, 1999, Massachusetts Institute of Technology (MIT): Cambridge, MA, USA. International Association of Science and Technology
for Development (IASTED): Calgary, AB, Canada and ACTA Press: Calgary, AB, Canada.
1411. The 2012 IEEE World Congress on Computational Intelligence (WCCI12), June 10–15,
2012, International Convention Centre: Brisbane, QLD, Australia.
1412. 8th International Conference on Systems Research, Informatics and Cybernetics (InterSymp96), August 14–18, 1996, Baden-Baden, Baden-Württemberg, Germany. International Institute for Advanced Studies in Systems Research and Cybernetics (IIAS): Tecumseh, ON, Canada.
1413. Proceedings of the Symposium on Network and Distributed Systems Security (NDSS00),
February 2–4, 2000, Catamaran Resort Hotel: San Diego, CA, USA. Internet Society
(ISOC): Reston, VA, USA. isbn: 1-891562-07-X, 1-891562-08-8, and 1-891562-11-8.
Google Books ID: 49fJPgAACAAJ, LzfCPgAACAAJ, rAGIPgAACAAJ, and sGuCPgAACAAJ.
OCLC: 59522955 and 236489300.
1414. Proceedings of the First Conference on Artificial General Intelligence (AGI08), March 1–3, 2008, University of Memphis, FedEx Institute of Technology: Memphis, TN, USA. IOS Press:
Amsterdam, The Netherlands.
1415. Proceedings of the 4th International Workshop on Machine Learning (ML87), 1987, Irvine,
CA, USA.
1416. Hisao Ishibuchi and Yusuke Nojima. Optimization of Scalarizing Functions Through Evolutionary Multiobjective Optimization. In EMO07 [2061], pages 51–65, 2007. doi: 10.1007/978-3-540-70928-2_8. Fully available at http://ksuseer1.ist.psu.edu/viewdoc/summary?doi=10.1.1.78.8014 and http://www.lania.mx/~ccoello/EMOO/ishibuchi07a.pdf.gz [accessed 2009-07-18].
1417. Hisao Ishibuchi, Tsutomu Doi, and Yusuke Nojima. Incorporation of Scalarizing Fitness Functions into Evolutionary Multiobjective Optimization Algorithms. In PPSN IX [2355], pages 493–502, 2006. doi: 10.1007/11844297_50. Fully available at http://www.ie.osakafu-u.ac.jp/~hisaoi/ci_lab_e/research/pdf_file/multiobjective/PPSN_2006_EMO_Camera-Ready-HP.pdf [accessed 2009-07-18].
1418. Hisao Ishibuchi, Yusuke Nojima, and Tsutomu Doi. Comparison between Single-Objective
and Multi-Objective Genetic Algorithms: Performance Comparison and Performance Mea-
sures. In CEC06 [3033], pages 3959–3966, 2006. doi: 10.1109/CEC.2006.1688438. INSPEC
Accession Number: 9723561.
1419. Hisao Ishibuchi, Noritaka Tsukamoto, and Yusuke Nojima. Iterative Approach to
Indicator-based Multiobjective Optimization. In CEC07 [1343], pages 3967–3974, 2007.
doi: 10.1109/CEC.2007.4424988. INSPEC Accession Number: 9889480.
1420. Hisao Ishibuchi, Noritaka Tsukamoto, and Yusuke Nojima. Evolutionary Many-Objective Optimization: A Short Review. In CEC08 [1889], pages 2424–2431, 2008. doi: 10.1109/CEC.2008.4631121. Fully available at http://www.ie.osakafu-u.ac.jp/~hisaoi/ci_lab_e/research/pdf_file/multiobjective/CEC2008_Many_Objective_Final.pdf [accessed 2009-07-18]. INSPEC Accession Number: 10250829.
1421. Masumi Ishikawa, editor. Fourth International Conference on Hybrid Intelligent Systems
(HIS04), December 5–8, 2004, Kitakyushu, Japan. IEEE Computer Society: Piscataway,
NJ, USA. isbn: 0-7695-2291-2. OCLC: 58543493, 58804879, 61262247, 71500201,
254581031, and 423969575. INSPEC Accession Number: 8470883.
J
1422. Hajira Jabeen and Abdul Rauf Baig. Review of Classification using Genetic Programming. International Journal of Engineering Science and Technology, 2(2):94–103, February 2010, Engg Journals Publications (EJP): Otteri, Vandalur, Chennai, India. Fully available at http://www.ijest.info/docs/IJEST10-02-02-06.pdf [accessed 2010-06-25].
1423. David Jackson. Parsing and Translation of Expressions by Genetic Programming. In
GECCO05 [304], pages 1681–1688, 2005. doi: 10.1145/1068009.1068291.
1424. Christian Jacob, Marcin L. Pilat, Peter J. Bentley, and Jonathan Timmis, editors. Proceedings of the 4th International Conference on Artificial Immune Systems (ICARIS05), August 14–17, 2005, Banff, AB, Canada, volume 3627 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-28175-4.
1425. Matthias Jacob, Alexander Kuscher, Max Plauth, and Christoph Thiele. Automated Data
Augmentation Services Using Text Mining, Data Cleansing and Web Crawling Techniques.
In Third IEEE International Services Computing Contest [1347], pages 136–143, 2008.
doi: 10.1109/SERVICES-1.2008.67. INSPEC Accession Number: 10117645.
1426. Dean Jacobs, Jan Prins, Peter Siegel, and Kenneth Wilson. Monte Carlo Techniques in Code
Optimization. ACM SIGMICRO Newsletter, 13(4):143–148, December 1982, Association for
Computing Machinery (ACM): New York, NY, USA. See also [1427].
1427. Dean Jacobs, Jan Prins, Peter Siegel, and Kenneth Wilson. Monte Carlo Techniques in Code
Optimization. In MICRO 15 [1385], pages 143–146, 1982. See also [1426].
1428. H. V. Jagadish and Inderpal Singh Mumick, editors. Proceedings of the 1996 ACM SIGMOD
International Conference on Management of Data (SIGMOD96), June 4–6, 1996, Montreal,
QC, Canada. ACM Press: New York, NY, USA.
1429. Martin Jähne, Xiaodong Li, and Jürgen Branke. Evolutionary Algorithms and Multi-Objectivization for the Travelling Salesman Problem. In GECCO09-I [2342], pages 595–602, 2009. doi: 10.1145/1569901.1569984. Fully available at http://goanna.cs.rmit.edu.au/~xiaodong/publications/multi-objectivization-jahne-gecco09.pdf [accessed 2010-07-18].
1430. Lakhmi C. Jain and Janusz Kacprzyk, editors. New Learning Paradigms in Soft Computing, volume 84 (PR-200) in Studies in Fuzziness and Soft Computing. Springer-Verlag GmbH: Berlin, Germany, February 2002. isbn: 3-7908-1436-9. Google Books ID: u2NcykOhYYwC.
1431. Nick Jakobi. Harnessing Morphogenesis. Technical Report CSRP 423, University of Sussex,
School of Cognitive Science: Brighton, UK, 1995. CiteSeerˣ: 10.1.1.39.372.
1432. Mohammad Jamshidi, editor. Multimedia, Image Processing, and Soft Computing: Trends, Principles, and Applications: Proceedings of the 5th Biannual World Automation Congress (WAC02), June 9–13, 2002, Orlando, FL, USA, volume 13/14 in Proceedings of the World Automation Congress. Technology Software & Information Enterprise (TSI Press) Inc.: San Antonio, TX, USA. isbn: 1889335185 and 1889335193. Google Books ID: jPGbAAAACAAJ. OCLC: 51296904, 54530539, 80832353, and 248985844. Catalogue no.: D2EX548.
1433. Thomas Jansen and Ingo Wegener. A Comparison of Simulated Annealing with a Simple
Evolutionary Algorithm on Pseudo-boolean Functions of Unitation. Theoretical Computer
Science, 386(1–2):73–93, October 28, 2007, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/j.tcs.2007.06.003.
1434. Matthias Jarke, Michael J. Carey, Klaus R. Dittrich, Frederick H. Lochovsky, Pericles
Loucopoulos, and Manfred A. Jeusfeld, editors. Proceedings of 23rd International Confer-
ence on Very Large Data Bases (VLDB97), August 25–27, 1997, Athens, Greece. Morgan
Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-470-7.
1435. Jiří Jaroš and Václav Dvořák. An Evolutionary Design Technique for Collective Communications on Optimal Diameter-Degree Networks. In GECCO08 [1519], pages 1539–1546, 2008. doi: 10.1145/1389095.1389391.
1436. Grahame A. Jastrebski and Dirk V. Arnold. Improving Evolution Strategies through
Active Covariance Matrix Adaptation. In CEC06 [3033], pages 9719–9726, 2006.
doi: 10.1109/CEC.2006.1688662. INSPEC Accession Number: 9723774.
1437. Andrzej Jaszkiewicz. On the Computational Efficiency of Multiple Objective Metaheuristics: The Knapsack Problem Case Study. European Journal of Operational Research (EJOR), 158(2):418–433, October 16, 2004, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/j.ejor.2003.06.015.
1438. Edwin Thompson Jaynes, author. G. Larry Bretthorst, editor. Probability Theory: The Logic of Science. Cambridge University Press: Cambridge, UK, May 2002–June 2003, Washington University: St. Louis, MO, USA. isbn: 0521592712. Fully available at http://bayes.wustl.edu/etj/prob/book.pdf [accessed 2009-07-25]. Google Books ID: tTN4HuUNXjgC. OCLC: 49859931 and 271784842.
1439. Henrik Jeldtoft Jensen. Self-Organized Criticality: Emergent Complex Behavior in Physical and Biological Systems, volume 10 in Cambridge Lecture Notes in Physics. Cambridge University Press: Cambridge, UK, 1998. isbn: 0521483719. Google Books ID: 7u3x1Luq50cC.
1440. Mikkel T. Jensen. Guiding Single-Objective Optimization Using Multi-Objective Methods. In EvoWorkshop'03 [2253], pages 91–98, 2003. doi: 10.1007/3-540-36605-9_25. See also [1442].
1441. Mikkel T. Jensen. Reducing the Run-Time Complexity of Multiobjective EAs: The NSGA-II and other Algorithms. IEEE Transactions on Evolutionary Computation (IEEE-EC), 7(5):503–515, October 2003, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/TEVC.2003.817234. INSPEC Accession Number: 7757990.
1442. Mikkel T. Jensen. Helper-Objectives: Using Multi-Objective Evolutionary Algorithms for Single-Objective Optimisation. Journal of Mathematical Modelling and Algorithms, 3(4):323–347, December 2004, Springer Netherlands: Dordrecht, Netherlands. doi: 10.1023/B:JMMA.0000049378.57591.c6. See also [1440].
1443. Licheng Jiao, Lipo Wang, Xinbo Gao, Jing Liu, and Feng Wu, editors. Proceedings of the Second International Conference on Advances in Natural Computation, Part I (ICNC'06-I), September 24–28, 2006, Xi'an, Shanxi, China, volume 4221 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11881070. isbn: 3-540-45901-4. See also [1444].
1444. Licheng Jiao, Lipo Wang, Xinbo Gao, Jing Liu, and Feng Wu, editors. Proceedings of the Second International Conference on Advances in Natural Computation, Part II (ICNC'06-II), September 24–28, 2006, Xi'an, Shanxi, China, volume 4222 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11881223. isbn: 3-540-45907-3. See also [1443].
1445. Keisoku Jidō and Seigyo Gakkai, editors. Proceedings of the IEEE International Conference on Evolutionary Computation (CEC'96), May 20–22, 1996, Nagoya University, Symposium & Toyoda Auditorium: Nagoya, Japan. IEEE Computer Society Press: Los Alamitos, CA, USA. isbn: 0-7803-2902-3. Google Books ID: C_ZPAAAACAAJ, dNZVAAAAMAAJ, and qi7jGAAACAAJ. OCLC: 312788226 and 639955150. INSPEC Accession Number: 5398378.
1446. Wan-rong Jih and Jane Yung-jen Hsu. Dynamic Vehicle Routing Using Hybrid Genetic Algorithms. In ICRA'99 [1386], pages 453–458, volume 1, 1999. doi: 10.1109/ROBOT.1999.770019. Fully available at http://agents.csie.ntu.edu.tw/~jih/publication/ICRA99-GAinDVRP.pdf, http://neo.lcc.uma.es/radi-aeb/WebVRP/data/articles/DVRP.pdf, and http://ntur.lib.ntu.edu.tw/bitstream/246246/2007041910021646/1/00770019.pdf [accessed 2010-11-02]. INSPEC Accession Number: 6345736.
1447. Prasanna Jog, Jung Y. Suh, and Dirk Van Gucht. Parallel Genetic Algorithms Applied to the Traveling Salesman Problem. SIAM Journal on Optimization (SIOPT), 1(4):515–529, 1991, Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA. doi: 10.1137/0801031.
1448. Wilhelm Ludvig Johannsen. Elemente der exakten Erblichkeitslehre (mit Grundzügen der biologischen Variationsstatistik). Verlag von Gustav Fischer: Jena, Thuringia, Germany, second German, revised and very extended edition, 1909–1913. Fully available at http://www.zum.de/stueber/johannsen/elemente/ [accessed 2008-08-20]. Google Books ID: KF8MGQAACAAJ, RmRVAAAAMAAJ, dxb_GwAACAAJ, ggglAAAAMAAJ, l1Q1AAAAMAAJ, oSHVJgAACAAJ, and yoBUAAAAMAAJ. Limitations of natural selection on pure lines.
1449. Brad Johanson and Riccardo Poli. GP-Music: An Interactive Genetic Programming System for Music Generation with Automated Fitness Raters. Technical Report CSRP-98-13, University of Birmingham, School of Computer Science: Birmingham, UK, 1998. Fully available at http://graphics.stanford.edu/~bjohanso/gp-music/tech-report/ [accessed 2009-06-18]. See also [1453].
1450. Matthias John and Max J. Ammann. Design of a Wide-Band Printed Antenna Using a Genetic Algorithm on an Array of Overlapping Sub-Patches. In iWAT'06 [1745], pages 92–95, 2005–2006. Fully available at http://www.ctvr.ie/docs/RF%20Pubs/01608983.pdf [accessed 2008-09-03]. See also [1451].
1451. Matthias John and Max J. Ammann. Optimisation of a Wide-Band Printed Monopole Antenna using a Genetic Algorithm. In PAPC'06 [1778], pages 237–240, 2006. Fully available at http://www.ctvr.ie/docs/RF%20Pubs/LAPC_2006_MJ.pdf [accessed 2008-09-02]. See also [1450].
1452. Colin G. Johnson and Juan Jesús Romero Cardalda. Workshop Genetic Algorithms in Visual Art and Music (GAVAM'00). In GECCO'00 WS [2986], July 8, 2000, Las Vegas, NV, USA. Bird-of-a-feather Workshop at the 2000 Genetic and Evolutionary Computation Conference (GECCO-2000). Part of [2986].
1453. Colin G. Johnson and Riccardo Poli. GP-Music: An Interactive Genetic Programming System for Music Generation with Automated Fitness Raters. In GP'98 [1612], pages 181–186, 1998. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/johanson_1998_GP-Music.html [accessed 2009-06-18]. CiteSeerˣ: 10.1.1.14.1076. See also [1449].
1454. David S. Johnson, Cecilia R. Aragon, Lyle A. McGeoch, and Catherine Schevon. Optimization by Simulated Annealing: An Experimental Evaluation. Part I, Graph Partitioning. Operations Research, 37(6), November–December 1989, Institute for Operations Research and the Management Sciences (INFORMS): Linthicum, MD, USA and HighWire Press (Stanford University): Cambridge, MA, USA. doi: 10.1287/opre.37.6.865. See also [1455].
1455. David S. Johnson, Cecilia R. Aragon, Lyle A. McGeoch, and Catherine Schevon. Optimization by Simulated Annealing: An Experimental Evaluation; Part II, Graph Coloring and Number Partitioning. Operations Research, 39(3):378–406, May–June 1991, Institute for Operations Research and the Management Sciences (INFORMS): Linthicum, MD, USA and HighWire Press (Stanford University): Cambridge, MA, USA. doi: 10.1287/opre.39.3.378. Fully available at http://www.cs.uic.edu/~jlillis/courses/cs594_f02/JohnsonSA.pdf [accessed 2009-09-09]. See also [1454].
1456. Derek M. Johnson, Ankur M. Teredesai, and Robert T. Saltarelli. Genetic Programming in Wireless Sensor Networks. In EuroGP'05 [1518], pages 96–107, 2005. doi: 10.1007/b107383. Fully available at http://www.cs.rit.edu/~amt/pubs/EuroGP05FinalTeredesai.pdf [accessed 2008-06-17].
1457. Timothy D. Johnston. Selective Costs and Benefits in the Evolution of Learning. In Advances in the Study of Behavior [2332], pages 65–106. Academic Press Professional, Inc.: San Diego, CA, USA. doi: 10.1016/S0065-3454(08)60046-7.
1458. Jeffrey A. Joines and Christopher R. Houck. On the Use of Non-Stationary Penalty Functions to Solve Nonlinear Constrained Optimization Problems with GAs. In CEC'94 [1891], pages 579–584, volume 2, 1994. doi: 10.1109/ICEC.1994.349995. Fully available at http://www.cs.cinvestav.mx/~constraint/papers/gacons.ps.gz [accessed 2008-11-15]. CiteSeerˣ: 10.1.1.28.1065.
1459. Donald Forsha Jones, editor. Proceedings of the Sixth Annual Congress of Genetics, 1932, Ithaca, NY, USA. Brooklyn Botanic Gardens: New York, NY, USA and Menasha Corporation: Neenah, WI, USA.
1460. Donald R. Jones and Mark A. Beltramo. Solving Partitioning Problems with Genetic Algorithms. In ICGA'91 [254], pages 442–449, 1991.
1461. Gareth Jones. Genetic and Evolutionary Algorithms. In Encyclopedia of Computational Chemistry. Volume III: Databases and Expert Systems [2824], volume 29. John Wiley & Sons Ltd.: New York, NY, USA, 1998. Fully available at http://www.wiley.com/legacy/wileychi/ecc/samples/sample10.pdf [accessed 2010-08-01].
1462. Joel Jones. Abstract Syntax Tree Implementation Idioms. In PLoP'03 [2691], 2003. Fully available at http://hillside.net/plop/plop2003/Papers/Jones-ImplementingASTs.pdf [accessed 2010-08-19].
1463. Josh Jones and Terence Soule. Comparing Genetic Robustness in Generational vs. Steady State Evolutionary Algorithms. In GECCO'06 [1516], pages 143–150, 2006. doi: 10.1145/1143997.1144024.
1464. Terry Jones. A Description of Holland's Royal Road Function. Evolutionary Computation, 2(4):411–417, Winter 1994, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1994.2.4.409. See also [1465].
1465. Terry Jones. A Description of Holland's Royal Road Function. Working Papers 94-11-059, Santa Fe Institute: Santa Fe, NM, USA, November 1994. Fully available at http://www.santafe.edu/media/workingpapers/94-11-059.pdf [accessed 2010-12-04]. See also [1464].
1466. Terry Jones. Crossover, Macromutation, and Population-based Search. In ICGA'95 [883], pages 73–80, 1995. CiteSeerˣ: 10.1.1.42.3194.
1467. Terry Jones. Evolutionary Algorithms, Fitness Landscapes and Search. PhD thesis, University of New Mexico: Albuquerque, NM, USA, May 1995, Ron Mullin, Ian Munro, Charlie Colbourn, Douglas Hofstadter, Gregory J. E. Rawlins, John Henry Holland, and Stephanie Forrest, Advisors. Fully available at http://jon.es/research/phd.pdf and http://www.cs.unm.edu/~forrest/dissertations-and-proposals/terry.pdf [accessed 2009-07-10].
1468. Terry Jones and Stephanie Forrest. Fitness Distance Correlation as a Measure of Problem Difficulty for Genetic Algorithms. In ICGA'95 [883], pages 184–192, 1995. Fully available at http://www.santafe.edu/research/publications/workingpapers/95-02-022.ps [accessed 2009-07-10]. CiteSeerˣ: 10.1.1.42.415.
1469. Aravind K. Joshi, editor. Proceedings of the 9th International Joint Conference on Artificial Intelligence (IJCAI'85-I), August 1985, Los Angeles, CA, USA, volume 1. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-85-VOL1/CONTENT/content.htm [accessed 2008-04-01]. See also [1470].
1470. Aravind K. Joshi, editor. Proceedings of the 9th International Joint Conference on Artificial Intelligence (IJCAI'85-II), August 1985, Los Angeles, CA, USA, volume 2. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-85-VOL2/CONTENT/content.htm [accessed 2008-04-01]. See also [1469].
1471. Bryant A. Julstrom. Seeding the Population: Improved Performance in a Genetic Algorithm for the Rectilinear Steiner Problem. In SAC'94 [734], pages 222–226, 1994. doi: 10.1145/326619.326728.
1472. Bryant A. Julstrom. Evolutionary Codings and Operators for the Terminal Assignment Problem. In GECCO'09-I [2342], pages 1805–1806, 2009. doi: 10.1145/1569901.1570171.
1473. Lukasz Juszcyk, Anton Michlmayer, and Christian Platzer. Large Scale Web Service Discovery and Composition using High Performance In-Memory Indexing. In CEC/EEE'07 [1344], pages 509–512, 2007. doi: 10.1109/CEC-EEE.2007.60. Fully available at https://berlin.vitalab.tuwien.ac.at/~florian/papers/cec2007.pdf [accessed 2007-10-24]. INSPEC Accession Number: 9868635. See also [319].
K
1474. Leslie Pack Kaelbling and Alessandro Saffiotti, editors. Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence (IJCAI'05), July 30–August 5, 2005, Edinburgh, Scotland, UK. International Joint Conferences on Artificial Intelligence: Denver, CO, USA. isbn: 0938075934. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-05/IJCAI-05%20CONTENT.htm [accessed 2008-04-01]. OCLC: 442578924.
1475. Professor Kaelo. Some Population-set based Methods for Unconstrained Global Optimization. PhD thesis, Witwatersrand University, School of Computational and Applied Mathematics: Johannesburg, South Africa, 2005, M. Montaz Ali, Advisor.
1476. Professor Kaelo and M. Montaz Ali. A Numerical Study of Some Modified Differential Evolution Algorithms. European Journal of Operational Research (EJOR), 169(3):1176–1184, March 16, 2006, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/j.ejor.2004.08.047.
1477. Professor Kaelo and M. Montaz Ali. Differential Evolution Algorithms using Hybrid Mutation. Computational Optimization and Applications, 37(2):231–246, June 2007, Kluwer Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands. doi: 10.1007/s10589-007-9014-3.
1478. Tatiana Kalganova. An Extrinsic Function-Level Evolvable Hardware Approach. In EuroGP'00 [2198], pages 60–75, 2000. doi: 10.1007/b75085. CiteSeerˣ: 10.1.1.42.1386.
1479. Tatiana Kalganova and Julian Francis Miller. Evolving More Efficient Digital Circuits by Allowing Circuit Layout Evolution and Multi-Objective Fitness. In EH'99 [2614], pages 54–63, 1999. doi: 10.1109/EH.1999.785435. CiteSeerˣ: 10.1.1.47.948. INSPEC Accession Number: 6338575.
1480. Leila Kallel, Bart Naudts, and Alex Rogers, editors. Proceedings of the Second EvoNet Summer School on Theoretical Aspects of Evolutionary Computing, September 17, 1999, University of Antwerp (RUCA): Antwerp, Belgium, Natural Computing Series. Springer New York: New York, NY, USA. isbn: 3-540-67396-2. Google Books ID: 6TWWVJD2yXYC. OCLC: 174724209. Library of Congress Control Number (LCCN): 00061910. GBV-Identification (PPN): 319367169. LC Classification: QA76.618 .T47 2001.
1481. Olav Kallenberg. Foundations of Modern Probability, Probability and Its Applications, Springer Series in Statistics (SSS). Springer New York: New York, NY, USA, 2nd edition, January 8, 2002. isbn: 0-3879-4957-7 and 0-3879-5313-2. Google Books ID: L6fhXh13OyMC, TBgFslMy8V4C, and lzLVjXkERQQC. OCLC: 46937587.
1482. Olav Kallenberg. Probabilistic Symmetries and Invariance Principles, Probability and Its Applications, Springer Series in Statistics (SSS). Springer New York: New York, NY, USA, June 27, 2005. isbn: 0-3872-5115-4 and 0-3872-8861-9. Google Books ID: E2mUQjV09y4C, Zn3rKAAACAAJ, and apT0AQAACAAJ. OCLC: 60740995, 64668233, 315796434, and 317470414.
1483. Subrahmanyam Kalyanasundaram, Richard J. Lipton, Kenneth W. Regan, and Farbod Shokrieh. Improved Simulation of Nondeterministic Turing Machines. In MFCS'10 [2582], 2010. Fully available at http://www.cc.gatech.edu/~subruk/pdf/simulate.pdf [accessed 2010-06-24].
1484. Subbarao Kambhampati, editor. Proceedings of the Twentieth National Conference on Artificial Intelligence and the Seventeenth Innovative Applications of Artificial Intelligence Conference (AAAI'05 / IAAI'05), July 9–13, 2005, Pittsburgh, PA, USA. AAAI Press: Menlo Park, CA, USA and MIT Press: Cambridge, MA, USA. isbn: 1-57735-236-X. Partly available at http://www.aaai.org/Conferences/AAAI/aaai05.php and http://www.aaai.org/Conferences/IAAI/iaai06.php [accessed 2007-09-06].
1485. Subbarao Kambhampati and Craig A. Knoblock, editors. Proceedings of the IJCAI'03 Workshop on Information Integration on the Web (IIWeb'03), August 9–10, 2003, Acapulco, Mexico. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. Fully available at http://www.isi.edu/info-agents/workshops/ijcai03/proceedings.htm [accessed 2008-04-01]. Part of [1118].
1486. Peter Kampstra. Evolutionary Computing in Telecommunications – A Likely EC Success Story. Business mathematics and informatics (bmi), Vrije Universiteit Amsterdam, Faculty of Sciences: Amsterdam, The Netherlands, August 2005, Rob D. van der Mei and Ágoston E. Eiben, Supervisors. Fully available at http://www.few.vu.nl/stagebureau/werkstuk/werkstukken/werkstuk-kampstra.pdf [accessed 2008-09-04].
1487. Ali Kamrani, Wang Rong, and Ricardo Gonzalez. A Genetic Algorithm Methodology for Data Mining and Intelligent Knowledge Acquisition. Computers & Industrial Engineering, 40(4):361–377, September 2001, Elsevier Science Publishers B.V.: Essex, UK. Imprint: Pergamon Press: Oxford, UK. doi: 10.1016/S0360-8352(01)00036-5.
1488. Chih-Wei Kang and Jian-Hung Chen. Multi-Objective Evolutionary Optimization of 3D Differentiated Sensor Network Deployment. In GECCO'09-II [2343], pages 2059–2064, 2009. doi: 10.1145/1570256.1570276.
1489. Lishan Kang, Z. Cai, and Y. Yan, editors. Progress in Intelligence Computation and Intelligence: International Symposium on Intelligence, Computation and Applications (ISICA), April 4–6, 2005, Wuhan, Hubei, China. China University of Geosciences (CUG), School of Computer Science: Wuhan, Hubei, China.
1490. Lishan Kang, Yong Liu, and Sanyou Zeng, editors. Second International Symposium on Advances in Computation and Intelligence (ISICA'07), September 21–23, 2007, Wuhan, Hubei, China, volume 4683/2007 in Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-74581-5. isbn: 3-540-74580-7 and 6611353755. Google Books ID: t7zBMdjahZUC. OCLC: 170907281, 255916871, 315404653, 318300255, and 403659231. Library of Congress Control Number (LCCN): 2007933208.
1491. Zhuo Kang, Lishan Kang, Xiufen Zou, Minzhong Liu, Changhe Li, Ming Yang, Yan Li, Yuping Chen, and Sanyou Zeng. A New Evolutionary Decision Theory for Many-Objective Optimization Problems. In ISICA'07 [1490], pages 1–11, 2007. doi: 10.1007/978-3-540-74581-5_1.
1492. Proceedings of the United States Department of Energy Cyber Security Group 2004 Training Conference, April 24–27, 2004, Kansas City, KS, USA. CD-ROM Proceedings.
1493. 11th Conference on Technologies and Applications of Artificial Intelligence (TAAI'06), December 2006, Kaohsiung, Taiwan.
1494. Bahar Karaoğlu, Haluk Topçuoğlu, and Fikret Gürgen. Evolutionary Algorithms for Location Area Management. In EvoWorkshops'05 [2340], pages 175–184, 2005.
1495. John K. Karlof, editor. Integer Programming: Theory and Practice, Operations Research Series. CRC Press, Inc.: Boca Raton, FL, USA, September 22, 2005. isbn: 0-8493-1914-5, 1420039598, and 6610536953. Google Books ID: TO3xQO9XEjwC. OCLC: 58052161, 309875596, 508010376, and 629733350. Library of Congress Control Number (LCCN): 2005041912. LC Classification: T57.74 .I547 2006.
1496. Charles L. Karr and L. Michael Freeman, editors. Industrial Applications of Genetic Algorithms, International Series on Computational Intelligence. CRC Press, Inc.: Boca Raton, FL, USA, December 29, 1998. isbn: 0849398010.
1497. Firat Kart, Zhongnan Shen, and Cagdas Evren Gerede. The MIDAS System: A Service Oriented Architecture for Automated Supply Chain Management. In SCContest'06 [554], pages 487–494, 2006. doi: 10.1109/SCC.2006.103. INSPEC Accession Number: 9165413.
1498. Nikola Kasabov and Peter Alexander Whigham, editors. Proceedings of the 5th Australia-Japan Joint Workshop on Intelligent & Evolutionary Systems – From Population Genetics to Evolving Intelligent Systems (AJWIS'01), November 19–21, 2001, University of Otago: Dunedin, New Zealand.
1499. Stuart Alan Kauffman. Adaptation on Rugged Fitness Landscapes. In Lectures in the Sciences of Complexity: The Proceedings of the 1988 Complex Systems Summer School [2608], pages 527–618, volume Lecture 1, 1988.
1500. Stuart Alan Kauffman. The Origins of Order: Self-Organization and Selection in Evolution. Oxford University Press, Inc.: New York, NY, USA, May 1993. isbn: 0195058119 and 0195079515. Google Books ID: 7mOsGwAACAAJ and lZcSpRJz0dgC. OCLC: 23253930, 59892535, and 263622938.
1501. Stuart Alan Kauffman and Simon Asher Levin. Towards a General Theory of Adaptive Walks on Rugged Landscapes. Journal of Theoretical Biology, 128(1):11–45, September 7, 1987, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: Academic Press Professional, Inc.: San Diego, CA, USA. doi: 10.1016/S0022-5193(87)80029-2. PubMed ID: 3431131.
1502. Stuart Alan Kauffman and Edward D. Weinberger. The NK Model of Rugged Fitness Landscapes and its Application to Maturation of the Immune Response. Journal of Theoretical Biology, 141(2):211–245, November 21, 1989, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: Academic Press Professional, Inc.: San Diego, CA, USA. doi: 10.1016/S0022-5193(89)80019-0.
1503. Leonard Kaufman and Peter J. Rousseeuw. Finding Groups in Data – An Introduction to Cluster Analysis, volume 59 in Wiley Series in Probability and Mathematical Statistics – Applied Probability and Statistics Section Series. Wiley Interscience: Chichester, West Sussex, UK, 9th edition, March 1990–May 2005. isbn: 0-471-87876-6. Google Books ID: Q5wQAQAAIAAJ. OCLC: 19458846.
1504. Henry Kautz and Bruce W. Porter, editors. Proceedings of the Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence (AAAI'00, IAAI'00), July 30–August 3, 2000, Austin, TX, USA. AAAI Press: Menlo Park, CA, USA and MIT Press: Cambridge, MA, USA. isbn: 0-262-51112-6. Partly available at http://www.aaai.org/Conferences/AAAI/aaai00.php and http://www.aaai.org/Conferences/IAAI/iaai00.php [accessed 2007-09-06].
1505. W. H. Kautz. Unit-distance Error-Checking Codes. IRE Transactions on Electronic Computers, EC-7:177–180, 1958, Institute of Radio Engineers: New York, NY, USA, R. E. Meagher, editor.
1506. Steven M. Kay. Fundamentals of Statistical Signal Processing, Volume I: Estimation Theory, Prentice Hall Signal Processing Series. Prentice Hall PTR: Englewood Cliffs, Upper Saddle River, NJ, USA, March 26, 1993. isbn: 0133457117 and 013504135X. Google Books ID: 3GcsPwAACAAJ. OCLC: 26504848, 177055890, and 313604278.
1507. Hilmi Güneş Kayacık, Malcolm Ian Heywood, and A. Nur Zincir-Heywood. Evolving Buffer Overflow Attacks with Detector Feedback. In EvoWorkshops'07 [1050], pages 11–20, 2007. doi: 10.1007/978-3-540-71805-5_2.
1508. Hilmi Güneş Kayacık, A. Nur Zincir-Heywood, Malcolm Ian Heywood, and Stefan Burschka. Testing Detector Parameterization Using Evolutionary Exploit Generation. In EvoWorkshops'09 [1052], pages 105–110, 2009. doi: 10.1007/978-3-642-01129-0_13.
1509. Sanza T. Kazadi. Conjugate Schema and Basis Representation of Crossover and Mutation. Evolutionary Computation, 6(2):129–160, Summer 1998, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1998.6.2.129. PubMed ID: 10021744.
1510. Michael J. Kearns, Yishay Mansour, Andrew Y. Ng, and Dana Ron. An Experimental and Theoretical Comparison of Model Selection Methods. In COLT'95 [1802], pages 21–30, 1995. doi: 10.1145/225298.225301. Fully available at http://www.cis.upenn.edu/~mkearns/papers/ms.pdf [accessed 2008-03-02]. See also [1511].
1511. Michael J. Kearns, Yishay Mansour, Andrew Y. Ng, and Dana Ron. An Experimental and Theoretical Comparison of Model Selection Methods. Machine Learning, 27(1):7–50, April 1997, Kluwer Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands, Philip M. Long, editor. doi: 10.1023/A:1007344726582. Fully available at http://www.springerlink.com/content/r26rk7q55615qq43/fulltext.pdf [accessed 2009-07-12]. See also [1510].
1512. Tom Kehler and Stan Rosenschein, editors. Proceedings of the 5th National Conference on Artificial Intelligence (AAAI'86-I), August 11–15, 1986, Philadelphia, PA, USA, volume Science (Volume 1). Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. Partly available at http://www.aaai.org/Conferences/AAAI/aaai86.php [accessed 2007-09-06]. See also [1513].
1513. Tom Kehler and Stan Rosenschein, editors. Proceedings of the 5th National Conference on Artificial Intelligence (AAAI'86-II), August 11–15, 1986, Philadelphia, PA, USA, volume Engineering (Volume 2). Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. Partly available at http://www.aaai.org/Conferences/AAAI/aaai86.php [accessed 2007-09-06]. See also [1512].
1514. Maarten Keijzer. Scaled Symbolic Regression. Genetic Programming and Evolvable Machines, 5(3):259–269, September 2004, Springer Netherlands: Dordrecht, Netherlands. Imprint: Kluwer Academic Publishers: Norwell, MA, USA. doi: 10.1023/B:GENP.0000030195.77571.f9.
1515. Maarten Keijzer, editor. Late Breaking Papers at the 2004 Genetic and Evolutionary Computation Conference (GECCO'04 LBP), 2004, Red Lion Hotel: Seattle, WA, USA. Springer-Verlag GmbH: Berlin, Germany. Distributed on CD-ROM. Part of [752].
1516. Maarten Keijzer and Mike Cattolico, editors. Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation (GECCO'06), July 8–12, 2006, Renaissance Seattle Hotel: Seattle, WA, USA. ACM Press: New York, NY, USA. isbn: 1-59593-186-4 and 1-59593-187-2. Google Books ID: 6y1pAAAACAAJ. OCLC: 71335687 and 85782847. ACM Order No.: 910060.
1517. Maarten Keijzer, Una-May O'Reilly, Simon M. Lucas, Ernesto Jorge Fernandes Costa, and Terence Soule, editors. Proceedings of the 7th European Conference on Genetic Programming (EuroGP'04), April 5–7, 2004, Coimbra, Portugal, volume 3003/2004 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-21346-5.
1518. Maarten Keijzer, Andrea G. B. Tettamanzi, Pierre Collet, Jano I. van Hemert, and Marco Tomassini, editors. Proceedings of the 8th European Conference on Genetic Programming (EuroGP'05), March 30–April 1, 2005, Lausanne, Switzerland, volume 3447/2005 in Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/b107383. isbn: 3-540-25436-6 and 3-540-31989-1. Google Books ID: -Q88PwAACAAJ and lcw0Pkgc6OUC. OCLC: 59351789, 145832391, 254751818, and 318300168. Library of Congress Control Number (LCCN): 2005922866.
1519. Maarten Keijzer, Giuliano Antoniol, Clare Bates Congdon, Kalyanmoy Deb, Benjamin Doerr, Nikolaus Hansen, John H. Holmes, Gregory S. Hornby, Daniel Howard, James Kennedy, Sanjeev Kumar, Fernando G. Lobo, Julian Francis Miller, Jason H. Moore, Frank Neumann, Martin Pelikan, Jordan B. Pollack, Kumara Sastry, Kenneth Owen Stanley, Adrian Stoica, El-Ghazali Talbi, and Ingo Wegener, editors. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'08), July 12–16, 2008, Renaissance Atlanta Hotel Downtown: Atlanta, GA, USA. ACM Press: New York, NY, USA. isbn: 1-60558-130-5 and 1-60558-131-3. Partly available at http://www.sigevo.org/gecco-2008/ [accessed 2011-02-28]. OCLC: 288975508, 288975509, 299786110, 302317062, and 373680990. ACM Order No.: 910080 and 910081. See also [585, 1872, 2268, 2537].
1520. Jozef Kelemen and Petr Sosík, editors. Proceedings of the 6th European Conference on Advances in Artificial Life (ECAL'01), September 10–14, 2001, Prague, Czech Republic, volume 2159/2001 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-44811-X. isbn: 3-540-42567-5.
1521. Evelyn Fox Keller and Elisabeth A. Lloyd, editors. Keywords in Evolutionary Biology. Harvard University Press: Cambridge, MA, USA, November 1992. isbn: 0-674-50312-0 and 0-674-50313-9. Google Books ID: Hvm7sCuyRV4C and fgRpAAAACAAJ.
1522. James M. Keller and Olfa Nasraoui, editors. Proceedings of the Annual Meeting of the North American Fuzzy Information Processing Society (NAFIPS'00), June 27–29, 2002, Tulane University: New Orleans, LA, USA. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7803-7461-4. Google Books ID: mpdVAAAAMAAJ. OCLC: 50258796, 63197367, 248814326, and 268938117. Catalogue no.: 02TH8622.
1523. James Kennedy and Russel C. Eberhart. Particle Swarm Optimization. In ICNN'95 [1362], pages 1942–1948, volume 4, 1995. doi: 10.1109/ICNN.1995.488968. Fully available at http://www.engr.iupui.edu/~shi/Coference/psopap4.html [accessed 2010-12-24]. INSPEC Accession Number: 5263228.
1524. James Kennedy and Russel C. Eberhart. Swarm Intelligence: Collective, Adaptive. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 2001, Yuhui Shi, Publisher. isbn: 1558605959. Partly available at http://www.engr.iupui.edu/~eberhart/web/PSObook.html [accessed 2008-08-22]. Google Books ID: pHku3gYfL2UC and vOx-QV3sRQsC.
1525. Brian Wilson Kernighan and Dennis M. Ritchie. The C Programming Language, Prentice-Hall Software Series. Prentice Hall PTR: Englewood Cliffs, Upper Saddle River, NJ, USA, first/second edition, 1978–1988. isbn: 0-13-110163-3, 0-13-110362-8, and 0-13-110370-9. OCLC: 3608698, 251418906, 257254858, and 299378788. Library of Congress Control Number (LCCN): 77028983. GBV-Identification (PPN): 023146052, 076304213, and 423015281. LC Classification: QA76.73.C15 K47.
1526. Martin L. Kersten, Boris Novikov, Jens Teubner, Vladimir Polutin, and Stefan Manegold, editors. Proceedings of the 12th International Conference on Extending Database Technology: Advances in Database Technology (EDBT'09), March 24–26, 2009, Saint Petersburg, Russia, volume 360 in ACM International Conference Proceeding Series (AICPS). ACM Press: New York, NY, USA.
1527. Faten Kharbat, Larry Bull, and Mohammed Odeh. Mining Breast Cancer Data with XCS. In GECCO'07-I [2699], pages 2066–2073, 2007. doi: 10.1145/1276958.1277362. Fully available at http://www.cs.york.ac.uk/rts/docs/GECCO_2007/docs/p2066.pdf [accessed 2010-07-26].
1528. Vineet Khare. Performance Scaling of Multi-Objective Evolutionary Algorithms. Master's thesis, University of Birmingham, School of Computer Science: Birmingham, UK, September 21, 2002, Xin Yao and Kalyanmoy Deb, Supervisors. Fully available at http://www.lania.mx/~ccoello/EMOO/thesis_khare.pdf.gz [accessed 2011-08-07]. CiteSeerˣ: 10.1.1.15.835. See also [1530].
1529. Vineet Khare, Xin Yao, and Kalyanmoy Deb. Performance Scaling of Multi-Objective Evolutionary Algorithms. Technical Report 2002009, Kanpur Genetic Algorithms Laboratory (KanGAL), Department of Mechanical Engineering, Indian Institute of Technology Kanpur (IIT): Kanpur, Uttar Pradesh, India, October 2002. Fully available at http://citeseer.ist.psu.edu/old/597663.html and http://www.iitk.ac.in/kangal/papers/k2002009.pdf.gz [accessed 2009-07-18]. See also [1528, 1530].
1530. Vineet Khare, Xin Yao, and Kalyanmoy Deb. Performance Scaling of Multi-Objective Evolutionary Algorithms. In EMO'03 [957], pages 367–390, 2003. doi: 10.1007/3-540-36970-8_27. See also [1528, 1529].
1531. Eik Fun Khor, Kay Chen Tan, Tong Heng Lee, and Chi Keong Goh. A Study on Distribution Preservation Mechanism in Evolutionary Multi-Objective Optimization. Artificial Intelligence Review – An International Science and Engineering Journal, 23(1):31–33, March 2005, Springer Netherlands: Dordrecht, Netherlands. Imprint: Kluwer Academic Publishers: Norwell, MA, USA. doi: 10.1007/s10462-004-2902-3.
1532. Sami Khuri and Teresa Chiu. Heuristic Algorithms for the Terminal Assignment Problem. In SAC'97 [435], pages 247–251, 1997. doi: 10.1145/331697.331748. CiteSeerˣ: 10.1.1.53.9516.
1533. Sami Khuri, Thomas Bäck, and Jörg Heitkötter. The Zero/One Multiple Knapsack Problem and Genetic Algorithms. In SAC'94 [734], pages 188–193, 1994. doi: 10.1145/326619.326694. isbn: 0-89791-647-6. Fully available at http://doi.acm.org/10.1145/326619.326694 [accessed 2008-06-23]. CiteSeerˣ: 10.1.1.40.4393.
1534. Sami Khuri, Martin Schütz, and Jörg Heitkötter. Evolutionary Heuristics for the Bin Packing Problem. In ICANNGA'95 [2141], pages 285–288, 1995. Fully available at http://www6.uniovi.es/pub/EC/GA/papers/icannga95.ps.gz [accessed 2010-10-18]. CiteSeerˣ: 10.1.1.33.3373.
1535. Hyumin Kim and Yong-Hyuk Kim. Optimal Designs of Ambiguous Mobile Keypad with Alphabetical Constraints. In GECCO'09-I [2342], pages 1931–1932, 2009. doi: 10.1145/1569901.1570243.
1536. Jong-Hwan Kim and Hyun-Sik Shim. Evolutionary Programming-Based Optimal Robust Locomotion Control of Autonomous Mobile Robots. In EP'95 [1848], pages 631–644, 1995.
1537. Minkyu Kim, Varun Aggarwal, Una-May O'Reilly, Muriel Médard, and Wonsik Kim. Genetic Representations for Evolutionary Minimization of Network Coding Resources. In EvoWorkshops'07 [1050], pages 21–31, 2007. doi: 10.1007/978-3-540-71805-5_3.
1538. Moon-Chan Kim, Chang Ouk Kim, Seong Rok Hong, and Ick-Hyun Kwon. Forward
backward Analysis of RFID-enabled Supply Chain using Fuzzy Cognitive Map and Genetic
Algorithm. Expert Systems with Applications An International Journal, 35(3):11661176,
October 2008, Elsevier Science Publishers B.V.: Essex, UK. Imprint: Pergamon Press: Ox-
ford, UK. doi: 10.1016/j.eswa.2007.08.015.
1539. Kenneth E. Kinnear, Jr, editor. Advances in Genetic Programming, Volume 1, Complex Adaptive Systems, Bradford Books. MIT Press: Cambridge, MA, USA, April 7, 1994. isbn: 0-262-11188-8. Google Books ID: eu2JplnQdBkC. OCLC: 29595260, 468496888, and 473279948. GBV-Identification (PPN): 133135691 and 192075969.
1540. Kenneth E. Kinnear, Jr. Fitness Landscapes and Difficulty in Genetic Programming. In CEC'94 [1891], pages 142–147, volume 1, 1994. doi: 10.1109/ICEC.1994.350026. Fully available at http://eprints.kfupm.edu.sa/41390/ and http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/ieee94_kinnear.html [accessed 2009-07-10].
1541. Scott Kirkpatrick, C. Daniel Gelatt, Jr., and Mario P. Vecchi. Optimization by Simulated Annealing. Science Magazine, 220(4598):671–680, May 13, 1983, American Association for the Advancement of Science (AAAS): Washington, DC, USA and HighWire Press
REFERENCES 1073
(Stanford University): Cambridge, MA, USA. doi: 10.1126/science.220.4598.671. Fully available at http://fezzik.ucd.ie/msc/cscs/ga/kirkpatrick83optimization.pdf [accessed 2009-07-03]. CiteSeerx: 10.1.1.18.4175.
1542. I. M. A. Kirkwood, S. H. Shami, and Mark C. Sinclair. Discovering Simple Fault-Tolerant Routing Rules using Genetic Programming. In ICANNGA'97 [2527], 1997. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/kirkam_1997_dsftrr.html [accessed 2009-07-21]. CiteSeerx: 10.1.1.51.4447. See also [2463].
1543. Marc Kirschner and John Gerhart. Evolvability. Proceedings of the National Academy of Sciences of the United States of America (PNAS), 95(15):8420–8427, July 15, 1998, National Academy of Sciences: Washington, DC, USA. Fully available at http://www.pnas.org/cgi/content/full/95/15/8420 [accessed 2009-07-10].
1544. Hajime Kita and Yasuhito Sano. Genetic Algorithms for Optimization of Noisy Fitness Functions and Adaptation to Changing Environments. In 2003 Joint Workshop of Hayashibara Foundation and Hayashibara Forum Physics and Information and Workshop on Statistical Mechanical Approach to Probabilistic Information Processing (SMAPIP) [2636], 2003. Fully available at http://www.smapip.is.tohoku.ac.jp/~smapip/2003/hayashibara/proceedings/HajimeKita.pdf [accessed 2008-07-19].
1545. Second World Congress on Nature and Biologically Inspired Computing (NaBIC'10), December 15–17, 2010, Kitakyushu International Conference Center: Kitakyushu, Japan.
1546. M. Kiuchi, Y. Shinano, R. Hirabayashi, and Y. Saruwatari. An Exact Algorithm for the Capacitated Arc Routing Problem using Parallel Branch and Bound Method. In Abstracts of the National Conference of the Operations Research Society of Japan [2087], pages 28–29, Spring 1995. Written in Japanese.
1547. Stefan Klahold, Steffen Frank, Robert E. Keller, and Wolfgang Banzhaf. Exploring the Possibilities and Restrictions of Genetic Programming in Java Bytecode. In GP'98 LBP [1605], pages 120–124, 1998.
1548. David G. Kleinbaum, Lawrence L. Kupper, and Keith E. Muller, editors. Applied Regression Analysis and other Multivariable Methods, The Duxbury Series in Statistics and Decision Sciences. Prindle, Weber, Schmidt (PWS) Publishing Company: Boston, MA, USA, 1988. isbn: 0-871-50123-6. OCLC: 15317108, 299861986, and 476708711. Library of Congress Control Number (LCCN): 87005397. GBV-Identification (PPN): 040577813, 219461139, and 231144253. LC Classification: QA278.
1549. Jon Kleinberg and Christos Papadimitriou. Computability and Complexity. In Computer Science: Reflections on the Field, Reflections from the Field [618], Chapter 2.1, pages 37–50. National Academies Press: Washington, DC, USA, 2004. Fully available at http://www.cs.cornell.edu/home/kleinber/cstb-turing.pdf [accessed 2010-06-25].
1550. Georg Kliewer and Peter Unruh. Parallele Simulated Annealing Bibliothek: Entwicklung und Einsatz zur Flugplanoptimierung [Parallel Simulated Annealing Library: Development and Application to Flight Plan Optimization]. Master's thesis, Universität-Gesamthochschule Paderborn, Fachbereich 17: Mathematik/Informatik: Paderborn, Germany, February 1998, Burkhard Monien and S. Tschöke, Advisors. Fully available at http://www.uni-paderborn.de/cs/geokl/publications/geokl.master.ps [accessed 2010-09-25].
1551. Dimitri Knjazew and David Edward Goldberg. Solving Permutation Problems with the Ordering Messy Genetic Algorithms. In Advances in Evolutionary Computing – Theory and Applications [1049], pages 321–350. Springer New York: New York, NY, USA, 2002.
1552. Joshua D. Knowles and David Wolfe Corne. Properties of an Adaptive Archiving Algorithm for Storing Nondominated Vectors. IEEE Transactions on Evolutionary Computation (IEEE-EC), 7(2):100–116, April 2003, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/TEVC.2003.810755. Fully available at http://dbkgroup.org/knowles/KnowlesCorneIEEEToECsub2a.pdf [accessed 2010-12-16]. INSPEC Accession Number: 7617174.
1553. Joshua D. Knowles, Richard A. Watson, and David Wolfe Corne. Reducing Local Optima in Single-Objective Problems by Multi-objectivization. In EMO'01 [3099], pages 269–283, 2001. doi: 10.1007/3-540-44719-9_19. Fully available at http://www.macs.hw.ac.uk/~dwcorne/rlo.pdf [accessed 2010-07-18].
1554. Joshua D. Knowles, David Wolfe Corne, and Kalyanmoy Deb. Multiobjective Problem Solving from Nature – From Concepts to Applications, Natural Computing Series. Springer New York: New York, NY, USA, 2008. doi: 10.1007/978-3-540-72964-8. isbn: 3-540-72963-1. Google Books ID: pzq8t9rCKC8C. OCLC: 255894619, 300164898, and 315697103. Library of Congress Control Number (LCCN): 2007936865.
1555. Donald Ervin Knuth. Big Omicron and Big Omega and Big Theta. ACM SIGACT News, 8(2):18–24, April–June 1976, ACM Press: New York, NY, USA. Fully available at http://portal.acm.org/citation.cfm?id=1008328.1008329 [accessed 2010-06-24].
1556. Donald Ervin Knuth. Fundamental Algorithms, volume 1 in The Art of Computer Programming (TAOCP). Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, third edition, 1997. isbn: 0-201-89683-4. Google Books ID: 5oJQAAAAMAAJ, B31GAAAAYAAJ, and J_MySQAACAAJ.
1557. Donald Ervin Knuth. Sorting and Searching, volume 3 in The Art of Computer Programming
(TAOCP). Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, 2nd edition,
1998. isbn: 0-201-03803-X, 0-201-03809-9, 0-201-03822-6, 0-201-85394-9, and
0-201-89685-0. Google Books ID: 3YpQAAAAMAAJ, QDh9AAAACAAJ, QTh9AAAACAAJ,
STh9AAAACAAJ, SoRQAAAAMAAJ, YG2SGQAACAAJ, and jYNQAAAAMAAJ. Original from the
University of Michigan.
1558. King-Tim Ko, Kit-Sang Tang, Cheung-Yau Chan, Kim-Fung Man, and Sam Kwong. Using Genetic Algorithms to Design Mesh Networks. Computer, 30(8):56–61, August 1997, IEEE Computer Society Press: Los Alamitos, CA, USA. doi: 10.1109/2.607086.
1559. Ryan K. L. Ko, Andre Jusuf, and Siang Guan (Stephen) Lee. Genesis – Dynamic Collaborative Business Process Formulation Based on Business Goals and Criteria. In SERVICES Cup'09 [1351], pages 123–129, 2009. doi: 10.1109/SERVICES-I.2009.108. INSPEC Accession Number: 10967055.
1560. Yves Kodratoff, editor. Machine Learning: Fifth European Working Session on Learning (EWSL'91), March 6–8, 1991, Porto, Portugal, volume 482 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 0-387-53816-X.
1561. Natallia Kokash. Web Service Discovery with Implicit QoS Filtering. In IBM PhD Student Symposium at ICSOC 2005 [90], pages 61–66, 2005. Fully available at http://homepages.cwi.nl/~kokash/documents/PhDICSOC.pdf [accessed 2011-01-10]. CiteSeerx: 10.1.1.104.691.
1562. Murat Köksalan and Stanley Zionts, editors. Proceedings of the 15th International Conference on Multiple Criteria Decision Making: Multiple Criteria Decision Making in the New Millennium (MCDM'00), June 10–14, 2000, Middle East Technical University: Ankara, Turkey, volume 507 in Lecture Notes in Economics and Mathematical Systems. Springer-Verlag: Berlin/Heidelberg. Partly available at http://mcdm2000.ie.metu.edu.tr/ [accessed 2007-09-10]. Published November 9, 2001.
1563. Krasimir Kolarov. Landscape Ruggedness in Evolutionary Algorithms. In CEC'97 [173], pages 19–24, 1997. doi: 10.1109/ICEC.1997.592261. CiteSeerx: 10.1.1.40.329.
1564. Ville Kolehmainen, Pekka Toivanen, and Bartłomiej Beliczyński, editors. Revised Selected Papers from the 9th International Conference on Adaptive and Natural Computing Algorithms (ICANNGA'09), April 23–25, 2009, University of Kuopio: Kuopio, Finland, volume 5495/2009 in Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-642-04921-7. isbn: 3-642-04920-6. Library of Congress Control Number (LCCN): 2009935814.
1565. Andrei Nikolajevich Kolmogorov. Grundbegriffe der Wahrscheinlichkeitsrechnung. Springer-Verlag GmbH: Berlin, Germany and Chelsea Publishing Company: New York, NY, USA, 1933. isbn: 038706110X and 354006110X. Google Books ID: NN1aJgAACAAJ, PDM5AAAAIAAJ, jZA0PQAACAAJ, lxvQAAAAMAAJ, and ob4rAAAAYAAJ. OCLC: 64177345, 74144504, and 185672567. See also [1566].
1566. Andrei Nikolajevich Kolmogorov. Foundations of the Theory of Probability. Chelsea Publishing Company: New York, NY, USA, second edition, 1950. isbn: 0828400237. Fully available at http://www.mathematik.com/Kolmogorov/ [accessed 2007-09-15]. Google Books ID: BZ5UJwAACAAJ, EmetAAAAMAAJ, delQAAAACAAJ, puRLAAAAMAAJ, t07dPAAACAAJ, and tk7dPAAACAAJ. OCLC: 233143719. See also [1565].
1567. Krzysztof Kołowrocki, editor. Advances in Safety and Reliability – Proceedings of the European Safety and Reliability Conference (ESREL'05), June 27–30, 2005, Tri City (Gdynia, Gdańsk, Sopot), Poland. Taylor and Francis LLC: London, UK. isbn: 0415383404. Google Books ID: YTtWDEFp_3cC.
1568. Piet Kommers and Griff Richards, editors. Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications (EDMEDIA'05), June 27, 2005, Le Centre Sheraton Hotel Montréal: Montréal, QC, Canada. Association for the Advancement of Computing in Education (AACE): Chesapeake, VA, USA. Partly available at http://www.aace.org/conf/edmedia/edmed05poster.pdf [accessed 2011-02-28].
1569. Srividya Kona, Ajay Bansal, Gopal Gupta, and Thomas D. Hite. Web Service Discovery and Composition using USDL. In CEC/EEE'06 [2967], pages 430–432, 2006. doi: 10.1109/CEC-EEE.2006.95. INSPEC Accession Number: 9189340. See also [318, 1570].
1570. Srividya Kona, Ajay Bansal, Gopal Gupta, and Thomas D. Hite. Semantics-based Web Service Composition Engine. In CEC/EEE'07 [1344], pages 521–524, 2007. doi: 10.1109/CEC-EEE.2007.87. Fully available at http://www.utd.edu/~sxk038200/research/wsc07.pdf [accessed 2007-10-25]. INSPEC Accession Number: 9868638. See also [319, 1569].
1571. Srividya Kona, Ajay Bansal, M. Brian Blake, Steffen Bleul, and Thomas Weise. WSC-2009: A Quality of Service-Oriented Web Services Challenge. In CEC'09 [1245], pages 487–490, 2009. doi: 10.1109/CEC.2009.80. Fully available at http://www.it-weise.de/documents/files/KBBSW2009WAQOSOWSC.pdf. INSPEC Accession Number: 10839134. Ei ID: 20094712467206.
1572. Andreas König, Mario Köppen, Ajith Abraham, Christian Igel, and Nikola Kasabov, editors. Proceedings of the 7th International Conference on Hybrid Intelligent Systems (HIS'07), September 17–19, 2007, Fraunhofer Center FhG ITWM/FhG IESE: Kaiserslautern, Germany. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7695-2946-1. OCLC: 213890399 and 314017946. Library of Congress Control Number (LCCN): 2007936727. Order No.: P2946. Product Number E2946.
1573. Anna V. Kononova, Derek B. Ingham, and Mohamed Pourkashanian. Simple Scheduled Memetic Algorithm for Inverse Problems in Higher Dimensions: Application to Chemical Kinetics. In CEC'08 [1889], pages 3905–3912, 2008. doi: 10.1109/CEC.2008.4631328. INSPEC Accession Number: 10251020.
1574. Andreas Konstantinidis, Qingfu Zhang, and Kun Yang. A Subproblem-dependent Heuristic in MOEA/D for the Deployment and Power Assignment Problem in Wireless Sensor Network. In CEC'09 [1350], pages 2740–2747, 2009. doi: 10.1109/CEC.2009.4983286. INSPEC Accession Number: 10688881.
1575. Erricos John Kontoghiorghes, Berç Rustem, and Peter Winker, editors. Computational Methods in Financial Engineering – Essays in Honour of Manfred Gilli. Springer-Verlag: Berlin/Heidelberg, 2008. doi: 10.1007/978-3-540-77958-2. Library of Congress Control Number (LCCN): 2008921042.
1576. Mario Köppen and Kaori Yoshida. Substitute Distance Assignments in NSGA-II for Handling Many-Objective Optimization Problems. In EMO'07 [2061], pages 727–741, 2007. doi: 10.1007/978-3-540-70928-2_55.
1577. Mario Köppen and Kaori Yoshida. Visualization of Pareto-Sets in Evolutionary Multi-Objective Optimization. In HIS'07 [1572], pages 156–161, 2007. doi: 10.1109/HIS.2007.62. INSPEC Accession Number: 9877140.
1578. Mario Köppen, David H. Wolpert, and William G. Macready. Remarks on a Recent Paper on the No Free Lunch Theorems. IEEE Transactions on Evolutionary Computation (IEEE-EC), 5(3):295–296, June 2001, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/4235.930318. Fully available at http://ti.arc.nasa.gov/people/dhw/papers/76.ps and http://www.no-free-lunch.org/KoWM01.pdf [accessed 2008-03-28].
1579. Proceedings of the 8th International Symposium on Advanced Intelligent Systems (ISIS'07), September 5–8, 2007, Kensington Hotel: Sokcho, Korea. Korean Fuzzy Logic and Intelligent System Society (KFIS).
1580. Peter Korošec and Jurij Šilc. Real-Parameter Optimization Using Stigmergy. In BIOMA'06 [919], pages 73–84, 2006. Fully available at http://csd.ijs.si/silc/articles/BIOMA06DASA.pdf [accessed 2007-08-05].
1581. Witold Kosiński, editor. Advances in Evolutionary Algorithms. InTech: Winchester, Hampshire, UK. Fully available at http://www.intechopen.com/books/show/title/advances_in_evolutionary_algorithms [accessed 2011-11-21].
1582. Juhani Koski. Defectiveness of Weighting Method in Multicriterion Optimization of Structures. Communications in Applied Numerical Methods (CANM), 1(6):333–337, November 1985, Wiley Interscience: Chichester, West Sussex, UK. doi: 10.1002/cnm.1630010613.
1583. Alex Kosorukoff. Human-based Genetic Algorithm. In SMC'01 [1366], pages 3464–3469, volume 5, 2001. doi: 10.1109/ICSMC.2001.972056. See also [1584].
1584. Alex Kosorukoff. Human-based Genetic Algorithm. IlliGAL Report 2001004, Illinois Genetic Algorithms Laboratory (IlliGAL), Department of Computer Science, Department of General Engineering, University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, January 2001. CiteSeerx: 10.1.1.28.6133. See also [1583].
1585. Yannis Kotidis and Nick Roussopoulos. A Case for Dynamic View Management. ACM Transactions on Database Systems (TODS), 26(4):388–423, December 2001, Association for Computing Machinery (ACM): New York, NY, USA. doi: 10.1145/503099.503100. Fully available at http://www.db-net.aueb.gr/kotidis/Publications/TODS.pdf [accessed 2011-04-13]. CiteSeerx: 10.1.1.21.1377.
1586. Petros Koumoutsakos, J. Freund, and D. Parekh. Evolution Strategies for Parameter Optimization in Jet Flow Control. In Proceedings of the 1998 Summer Program [1925], 1998. Fully available at http://ctr.stanford.edu/Summer98/koumoutsakos.pdf [accessed 2009-06-16].
1587. Tim Kovacs. XCS Classifier System Reliably Evolves Accurate, Complete, and Minimal Representations for Boolean Functions. In WSC2 [535], pages 59–68, 1997. Fully available at http://www.cs.bris.ac.uk/Publications/Papers/1000239.pdf [accessed 2007-08-18]. CiteSeerx: 10.1.1.38.4522 and 10.1.1.58.5539.
1588. Tim Kovacs. Two Views of Classifier Systems. In IWLCS'01 [1679], pages 74–87, 2001. doi: 10.1007/3-540-48104-4_6. Fully available at http://www.cs.bris.ac.uk/Publications/Papers/1000647.pdf [accessed 2007-09-11].
1589. Tim Kovacs, Xavier Llorà, Keiki Takadama, Pier Luca Lanzi, Wolfgang Stolzmann, and Stewart W. Wilson, editors. Revised Selected Papers of the International Workshops on Learning Classifier Systems (IWLCS'03–'05), 2007, volume 4399/2007 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-71231-2. isbn: 3-540-71230-5. OCLC: 612928449. Library of Congress Control Number (LCCN): 2007923322. See also [27, 2280, 2578].
1590. John R. Koza. Non-Linear Genetic Algorithms for Solving Problems. United States Patent 4,935,877, United States Patent and Trademark Office (USPTO): Alexandria, VA, USA, 1988. Filed May 20, 1988. Issued June 19, 1990. Australian patent 611,350 issued September 21, 1991. Canadian patent 1,311,561 issued December 15, 1992.
1591. John R. Koza. Hierarchical Genetic Algorithms Operating on Populations of Computer Programs. In IJCAI'89-I [2593], pages 768–774, 1989. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/Koza89.html and http://www.genetic-programming.com/jkpdf/ijcai1989.pdf [accessed 2010-08-19].
1592. John R. Koza. A Hierarchical Approach to Learning the Boolean Multiplexer Function. In FOGA'90 [2562], pages 171–191, 1990. Fully available at http://www.genetic-programming.com/jkpdf/foga1990.pdf [accessed 2010-08-19]. CiteSeerx: 10.1.1.141.255.
1593. John R. Koza. Concept Formation and Decision Tree Induction using the Genetic Programming Paradigm. In PPSN I [2438], pages 124–128, 1990. Fully available at http://citeseer.ist.psu.edu/61578.html [accessed 2008-05-29].
1594. John R. Koza. Evolution and Co-Evolution of Computer Programs to Control Independent-Acting Agents. In SAB'90 [1876], pages 366–375, 1990. Fully available at http://www.genetic-programming.com/jkpdf/sab1990.pdf [accessed 2008-10-19]. CiteSeerx: 10.1.1.140.7387.
1595. John R. Koza. Genetic Evolution and Co-Evolution of Computer Programs. In Artificial Life II [1668], pages 603–629, 1990. Fully available at http://citeseer.ist.psu.edu/177879.html, http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/Koza_geCP.html, and http://www.genetic-programming.com/jkpdf/alife1990.pdf [accessed 2008-05-29]. CiteSeerx: 10.1.1.140.6634. Revised November 29, 1990.
1596. John R. Koza. Genetic Programming: A Paradigm for Genetically Breeding Populations of Computer Programs to Solve Problems. Technical Report STAN-CS-90-1314, Stanford University, Computer Science Department: Stanford, CA, USA, June 1990. Fully available at http://www.genetic-programming.com/jkpdf/tr1314.pdf [accessed 2010-08-19]. CiteSeerx: 10.1.1.4.9961. See also [2559].
1597. John R. Koza. Concept Formation and Decision Tree Induction using the Genetic Programming Paradigm. In PPSN I [2438], pages 124–128, 1990. doi: 10.1007/BFb0029742. CiteSeerx: 10.1.1.53.5800.
1598. John R. Koza. Non-Linear Genetic Algorithms for Solving Problems by Finding a Fit Composition of Functions. United States Patent 5,136,686, United States Patent and Trademark Office (USPTO): Alexandria, VA, USA. Filed March 28, 1990, Issued August 4, 1992.
1599. John R. Koza. Genetic Programming II: Automatic Discovery of Reusable Programs, Complex Adaptive Systems, Bradford Books. MIT Press: Cambridge, MA, USA, July 4, 1994. isbn: 0-262-11189-6. Google Books ID: t7tpQgAACAAJ. OCLC: 30725207, 180673805, 264562231, 311925261, 493266030, and 634102150. Library of Congress Control Number (LCCN): 94076375. GBV-Identification (PPN): 153763795, 186785119, and 308224353. LC Classification: QA76.6 .K693 1994.
1600. John R. Koza. A Response to the ML-95 Paper entitled "Hill Climbing beats Genetic Search on a Boolean Circuit Synthesis of Koza's". self-published. Fully available at http://www.genetic-programming.com/jktahoe24page.html [accessed 2009-06-16]. Version 1 distributed at the Twelfth International Machine Learning Conference (ML-95). Version 2 distributed at the Sixth International Conference on Genetic Algorithms (ICGA-95). Version 3 deposited at WWW site on May 26, 1996. See also [883, 1665, 2221].
1601. John R. Koza. Genetic Algorithms and Genetic Programming at Stanford. Stanford University Bookstore, Stanford University: Stanford, CA, USA, Fall 2003. Book of Student Papers from John Koza's Course at Stanford on Genetic Algorithms and Genetic Programming. Computer Science 426 (CS 426), Biomedical Informatics 226 (BMI 226).
1602. John R. Koza. Genetic Programming: On the Programming of Computers by Means of Natural Selection, Bradford Books. MIT Press: Cambridge, MA, USA, December 1992. isbn: 0-262-11170-5. Google Books ID: Bhtxo60BV0EC and Wd8TAAAACAAJ. OCLC: 26263956, 245844858, 312492070, 473279968, 476715745, and 610975642. Library of Congress Control Number (LCCN): 92025785. GBV-Identification (PPN): 110450795, 152621075, 185924921, 191489522, 237675633, 246890320, 282081046, and 49354836X. LC Classification: QA76.6 .K695 1992. 1992 first edition, 1993 second edition.
1603. John R. Koza, editor. Late Breaking Papers at the First Annual Conference Genetic Programming (GP'96 LBP), July 28–31, 1996, Stanford University: Stanford, CA, USA. Stanford University Bookstore, Stanford University: Stanford, CA, USA. isbn: 0-18-201031-7. See also [1609].
1604. John R. Koza, editor. Late Breaking Papers at the 1997 Genetic Programming Conference (GP'97 LBP), July 13–16, 1997, Stanford University: Stanford, CA, USA. Stanford University Bookstore, Stanford University: Stanford, CA, USA. isbn: 0-18-206995-8. Google Books ID: Vn-nAAAACAAJ and cFI7OAAACAAJ. OCLC: 38045123. See also [1611].
1605. John R. Koza, editor. Late Breaking Papers at the Genetic Programming 1998 Conference (GP'98 LBP), June 22–25, 1998, University of Wisconsin: Madison, WI, USA. Stanford University Bookstore, Stanford University: Stanford, CA, USA. See also [1612].
1606. John R. Koza and Eric V. Siegel, editors. Proceedings of 1995 AAAI Fall Symposium Series, Genetic Programming Track, November 10–12, 1995, Massachusetts Institute of Technology (MIT): Cambridge, MA, USA. The symposium proceedings appeared as Technical Report FS-95-01.
1607. John R. Koza, David Andre, Forrest H. Bennett III, and Martin A. Keane. Evolution of a Low-Distortion, Low-Bias 60 Decibel Op Amp with Good Frequency Generalization using Genetic Programming. In GP'96 LBP [1603], pages 94–100, 1996. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/koza_1996_eldld60db.html and http://www.genetic-programming.com/jkpdf/gp1996lbpamplifier.pdf [accessed 2009-06-16]. CiteSeerx: 10.1.1.50.6221. See also [271].
1608. John R. Koza, David Andre, Forrest H. Bennett III, and Martin A. Keane. Use of Automatically Defined Functions and Architecture-Altering Operations in Automated Circuit Synthesis Using Genetic Programming. In GP'96 [1609], pages 132–149, 1996. CiteSeerx: 10.1.1.51.3933.
1609. John R. Koza, David Edward Goldberg, David B. Fogel, and Rick L. Riolo, editors. Proceedings of the First Annual Conference of Genetic Programming (GP'96), July 28–31, 1996, Stanford University: Stanford, CA, USA, Complex Adaptive Systems, Bradford Books. MIT Press: Cambridge, MA, USA. isbn: 0-262-61127-9. Google Books ID: ndy0QgAACAAJ. OCLC: 35580073, 70655786, and 422034061. See also [1603].
1610. John R. Koza, Forrest H. Bennett III, Jeffrey L. Hutchings, Stephen L. Bade, Martin A. Keane, and David Andre. Evolving Sorting Networks using Genetic Programming and the Rapidly Reconfigurable Xilinx 6216 Field-Programmable Gate Array. In Conference Record of the Thirty-First Asilomar Conference on Signals, Systems & Computers [893], pages 404–410, volume 1, 1997. doi: 10.1109/ACSSC.1997.680275. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/koza_1997_ASILIMOAR.html [accessed 2008-12-24]. INSPEC Accession Number: 6020885.
1611. John R. Koza, Kalyanmoy Deb, Marco Dorigo, David B. Fogel, Max H. Garzon, Hitoshi Iba, and Rick L. Riolo, editors. Proceedings of the Second Annual Conference on Genetic Programming (GP'97), July 13–16, 1997, Stanford University: Stanford, CA, USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-558-60483-9. Google Books ID: PFIZAQAAIAAJ. OCLC: 60136918. GBV-Identification (PPN): 235727091. See also [1604].
1612. John R. Koza, Wolfgang Banzhaf, Kumar Chellapilla, Kalyanmoy Deb, Marco Dorigo, David B. Fogel, Max H. Garzon, David Edward Goldberg, Hitoshi Iba, and Rick L. Riolo, editors. Proceedings of the Third Annual Genetic Programming Conference (GP'98), July 22–25, 1998, University of Wisconsin: Madison, WI, USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-548-7. OCLC: 123292121. See also [1605].
1613. John R. Koza, Forrest H. Bennett III, David Andre, and Martin A. Keane. The Design of Analog Circuits by Means of Genetic Programming. In Evolutionary Design by Computers [272], Chapter 16, pages 365–385. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1999. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/koza_1999_dacGP.html and http://www.genetic-programming.com/jkpdf/edc1999.pdf [accessed 2009-06-16].
1614. John R. Koza, Forrest H. Bennett III, David Andre, and Martin A. Keane. Automatic Design of Analog Electrical Circuits using Genetic Programming. In Hugh Cartwright, editor, Intelligent Data Analysis in Science [503], Chapter 8, pages 172–200. Oxford University Press, Inc.: New York, NY, USA, 2000. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/koza_2000_idas.html [accessed 2009-06-16].
1615. John R. Koza, Martin A. Keane, Matthew J. Streeter, William Mydlowec, Jessen Yu, and
Guido Lanza. Genetic Programming IV: Routine Human-Competitive Machine Intelligence,
volume 5 in Genetic Programming Series. Springer Science+Business Media, Inc.: New York,
NY, USA, 2003. isbn: 0-387-25067-0, 0-387-26417-5, 1402074468, and 6610611831.
Google Books ID: YQxWzAEnINIC and vMaVhoI-hVUC. OCLC: 51817183, 60594679,
249361385, 255445558, 318293809, and 403761573.
1616. Dexter C. Kozen. The Design and Analysis of Algorithms, Texts and Monographs in Computer Science. Springer-Verlag GmbH: Berlin, Germany, 1992. isbn: 0-387-97687-6 and 3-540-97687-6. Google Books ID: L_AMnf9UF9QC. OCLC: 26014868, 318320484, 475812425, and 610835985. Library of Congress Control Number (LCCN): 91036759. GBV-Identification (PPN): 018981704 and 118353721. LC Classification: QA76.9.A43 K69 1991.
1617. Oliver Kramer. Self-Adaptive Heuristics for Evolutionary Computation, volume 147 in Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg, July 2008. doi: 10.1007/978-3-540-69281-2. isbn: 3-540-69280-0. Google Books ID: f4QwC_AJEjwC. OCLC: 233933242. Library of Congress Control Number (LCCN): 2008928449. LC Classification: QA76.618 .K73 2008.
1618. Natalio Krasnogor and James E. Smith. A Tutorial for Competent Memetic Algorithms: Model, Taxonomy, and Design Issues. IEEE Transactions on Evolutionary Computation (IEEE-EC), 9(5):474–488, October 2005, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/TEVC.2005.850260. Fully available at http://www.cs.nott.ac.uk/~nxk/PAPERS/IEEE-TEC-lastVersion.pdf [accessed 2008-03-28]. INSPEC Accession Number: 8608338.
1619. Natalio Krasnogor, Giuseppe Nicosia, Mario Pavone, and David Alejandro Pelta, editors. Proceedings of the 2nd International Workshop Nature Inspired Cooperative Strategies for Optimization (NICSO'07), November 8–10, 2007, Acireale, Catania, Sicily, Italy, volume 129/2008 in Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg. doi: 10.1007/978-3-540-78987-1. Library of Congress Control Number (LCCN): 2008924783.
1620. Natalio Krasnogor, María Belén Melián-Batista, José Andrés Moreno Pérez, J. Marcos Moreno-Vega, and David Alejandro Pelta, editors. Proceedings of the 3rd International Workshop Nature Inspired Cooperative Strategies for Optimization (NICSO'08), November 12–14, 2008, Puerto de la Cruz, Tenerife, Spain, volume 236/2009 in Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg. doi: 10.1007/978-3-642-03211-0. isbn: 3-642-03210-9.
1621. Ramprasad S. Krishnamachari and Panos Y. Papalambros. Hierarchical Decomposition Synthesis in Optimal Systems Design. Technical Report 95-16, University of Michigan, Department of Mechanical Engineering and Applied Mechanics, Design Laboratory: Ann Arbor, MI, USA, August 1995–September 1996. Fully available at http://hdl.handle.net/2027.42/6077 [accessed 2009-10-25]. CiteSeerx: 10.1.1.25.9434. See also [1622].
1622. Ramprasad S. Krishnamachari and Panos Y. Papalambros. Hierarchical Decomposition Synthesis in Optimal Systems Design. Journal of Mechanical Design, 119(4):448–457, December 1997, American Society of Mechanical Engineers (ASME): New York, NY, USA. doi: 10.1115/1.2826388. Fully available at http://ode.engin.umich.edu/publications/PapalambrosPapers/1996/102.pdf [accessed 2009-10-25]. See also [1621].
1623. David Kristoffersson. A Software Framework for Genetic Programming. Bachelor's thesis, Luleå University of Technology (LTU), Skellefteå Campus: Skellefteå, Sweden, Spring 2005, Johann Nordlander and Daniel Gjörwell, Advisors. Fully available at http://epubl.ltu.se/1404-5494/2005/28/LTU-HIP-EX-0528-SE.pdf [accessed 2010-11-21]. issn: 1404-5494. 2005:28 HIP. ISRN: LTU-HIP-EX–05/28–SE.
1624. Antonio Krüger and Rainer Malaka, editors. Proceedings of Artificial Intelligence in Mobile System (AIMS'03), October 12, 2003, Westin Seattle: Seattle, WA, USA. Fully available at http://w5.cs.uni-sb.de/~krueger/aims2003/ [accessed 2009-09-05]. Held in conjunction with Ubicomp 2003.
1625. Joseph B. Kruskal, Jr. On the Shortest Spanning Subtree of a Graph and the Traveling Salesman Problem. Proceedings of the American Mathematical Society, 7(1):48–50, February 1956, American Mathematical Society (AMS) Bookstore: Providence, RI, USA. JSTOR Stable ID: 2033241.
1626. Christian Kubczak, Tiziana Margaria, and Bernhard Steffen. Semantic Web Services Challenge 2006 – An Approach to Mediation and Discovery with jABC and miAamics. In Third Workshop of the Semantic Web Service Challenge 2006 – Challenge on Automating Web Services Mediation, Choreography and Discovery [2690], 2006. Fully available at http://sws-challenge.org/workshops/2006-Athens/papers/sws_stage_3.pdf [accessed 2010-12-16]. See also [1627–1632, 1712, 1831, 1832].
1627. Christian Kubczak, Tiziana Margaria, and Bernhard Steffen. Semantic Web Services Challenge 2006 – The jABC Approach to Mediation and Choreography. In Second Workshop of the Semantic Web Service Challenge 2006 – Challenge on Automating Web Services Mediation, Choreography and Discovery [2689], 2006. Fully available at http://sws-challenge.org/workshops/2006-Budva/papers/unido_sws_2006_draft.pdf [accessed 2010-12-14]. See also [1626, 1628–1632, 1712, 1831, 1832].
1628. Christian Kubczak, Ralf Nagel, Tiziana Margaria, and Bernhard Steffen. The jABC Approach to Mediation and Choreography. In First Workshop of the Semantic Web Service Challenge 2006 – Challenge on Automating Web Services Mediation, Choreography and Discovery [2688], 2006. Fully available at http://sws-challenge.org/workshops/2006-Stanford/papers/06.pdf. See also [1626, 1627, 1629–1632, 1712, 1831, 1832].
1629. Christian Kubczak, Tiziana Margaria, Bernhard Steffen, and Stefan Naujokat. Service-Oriented Mediation with jETI/jABC: Verification and Export. In WI/IAT Workshops'07 [1731], pages 144–147, 2007. doi: 10.1109/WI-IATW.2007.27. INSPEC Accession Number: 9894901. See also [1626–1628, 1630–1632, 1712, 1831, 1832].
1630. Christian Kubczak, Tiziana Margaria, Christian Winkler, and Bernhard Steffen. Semantic Web Services Challenge 2007 – An Approach to Discovery with miAamics and jABC. In KWEB SWS Challenge Workshop [2450], 2007. Fully available at http://sws-challenge.org/workshops/2007-Innsbruck/papers/2-sws_stage_4_disc.pdf [accessed 2010-12-16]. See also [1626–1629, 1631, 1632, 1712, 1831, 1832].
1631. Christian Kubczak, Christian Winkler, Tiziana Margaria, and Bernhard Steffen. An Approach to Discovery with miAamics and jABC. In WI/IAT Workshops'07 [1731], pages 157–160, 2007. doi: 10.1109/WI-IATW.2007.26. INSPEC Accession Number: 9894904. See also [1626–1630, 1712, 1831, 1832, 2166].
1632. Christian Kubczak, Tiziana Margaria, Matthias Kaiser, Jens Lemcke, and Bjorn Knuth.
Abductive Synthesis of the Mediator Scenario with jABC and GEM. In EON-
SWSC08 [1023], 2008. Fully available at http://sunsite.informatik.rwth-aachen.
de/Publications/CEUR-WS/Vol-359/Paper-5.pdf [accessed 2010-12-16]. See also [1626
1628, 1630, 1631, 1712, 1831, 1832, 2166].
1633. Michal Kudelski and Andrzej Pacut. Ant Routing with Distributed Geographical Localization of Knowledge in Ad-Hoc Networks. In EvoWorkshops09 [1052], pages 111–116, 2009. doi: 10.1007/978-3-642-01129-0_14.
1634. Ben Kuipers and Bonnie Lynn Webber, editors. Proceedings of the Fourteenth National Conference on Artificial Intelligence and Ninth Innovative Applications of Artificial Intelligence Conference (AAAI97, IAAI97), July 27–31, 1997, Providence, RI, USA. AAAI Press: Menlo Park, CA, USA and MIT Press: Cambridge, MA, USA. isbn: 0-262-51095-2. Partly available at http://www.aaai.org/Conferences/AAAI/aaai97.php and http://www.aaai.org/Conferences/IAAI/iaai97.php [accessed 2007-09-06].
1635. Saku Kukkonen and Jouni A. Lampinen. Ranking-Dominance and Many-Objective Optimization. In CEC07 [1343], pages 3983–3990, 2007. doi: 10.1109/CEC.2007.4424990. INSPEC Accession Number: 9889569.
1636. Anup Kumar, Rakesh M. Pathak, M. C. Gupta, and Yash P. Gupta. Genetic Algorithm based Approach for Designing Computer Network Topologies. In CSC93 [1654], pages 358–365, 1993. doi: 10.1145/170791.170871.
1637. Sanjeev Kumar and Peter J. Bentley. Computational Embryology: Past, Present and Future. In Advances in Evolutionary Computing – Theory and Applications [1049], pages 461–477. Springer New York: New York, NY, USA, 2002. Fully available at http://www.cs.ucl.ac.uk/staff/P.Bentley/KUBECH1.pdf [accessed 2010-08-06]. CiteSeerx: 10.1.1.63.1505.
1638. T. V. Vijay Kumar and Aloke Ghoshal. A Reduced Lattice Greedy Algorithm for Selecting Materialized Views. In ICISTM09 [2218], pages 6–18, 2009. doi: 10.1007/978-3-642-00405-6_5. See also [1639].
1639. T. V. Vijay Kumar and Aloke Ghoshal. Greedy Selection of Materialized Views. International Journal of Computer and Communication Technology (IJCCT), 1(1):47–58, 2009, Khandgiri, Bhubaneswar, Orissa, India. Fully available at http://www.interscience.in/ijcct/IJCCT_Paper5.pdf [accessed 2011-04-13]. See also [1638].
1640. Viktor M. Kureichik, Sergey P. Malyukov, Vladimir V. Kureichik, and Alexander S. Malioukov. Genetic Algorithms for Applied CAD Problems, volume 212 in Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg, 2009. doi: 10.1007/978-3-540-85281-0. isbn: 3-540-85280-8. Google Books ID: lqCoF45aYn0C.
1641. Věra Kůrková, Nigel C. Steele, Roman Neruda, and Miroslav Kárný, editors. Proceedings of the 5th International Conference on Artificial Neural Networks and Genetic Algorithms (ICANNGA01), April 22–25, 2001, Lichtenstein Palace: Prague, Czech Republic. isbn: 3-211-83651-9. OCLC: 47701381, 248270107, and 440721146.
1642. Frank Kursawe. A Variant of Evolution Strategies for Vector Optimization. In PPSN I [2438], pages 193–197, 1990. doi: 10.1007/BFb002972. Fully available at http://eprints.kfupm.edu.sa/22064/ and http://www.lania.mx/~ccoello/kursawe.ps.gz [accessed 2009-06-22]. CiteSeerx: 10.1.1.47.8050.
1643. Selahattin Kuru, editor. Proceedings of the Second Turkish Symposium on Artificial Intelligence and Neural Networks (Türk Yapay Zekâ ve Yapay Sinir Ağları Sempozyumu) (TAINN93), June 24–25, 1993, Boğaziçi Üniversitesi: İstanbul, Turkey. OCLC: 283728082.
1644. Ulrich Küster and Birgitta König-Ries. Discovery and Mediation using the DIANE Service Description. In Third Workshop of the Semantic Web Service Challenge 2006 – Challenge on Automating Web Services Mediation, Choreography and Discovery [2690], 2006. Fully available at http://sws-challenge.org/workshops/2006-Athens/papers/SWS-Challenge-2006-Athens-Full.pdf [accessed 2010-12-16]. See also [1645, 1646, 1648–1650].
1645. Ulrich Küster and Birgitta König-Ries. Service Discovery using DIANE Service Descriptions – A Solution to the SWS-Challenge Discovery Scenarios. In KWEB SWS Challenge Workshop [2450], 2007. Fully available at http://sws-challenge.org/workshops/2007-Innsbruck/papers/1-Ulrich-2007.pdf [accessed 2010-12-16]. See also [1644, 1646, 1648–1650].
1646. Ulrich Küster and Birgitta König-Ries. Semantic Mediation between Business Partners – A SWS-Challenge Solution Using DIANE Service Descriptions. In WI/IAT Workshops07 [1731], pages 139–143, 2007. doi: 10.1109/WI-IATW.2007.14. INSPEC Accession Number: 9894900. See also [1644, 1645, 1648–1650].
1647. Ulrich Küster and Birgitta König-Ries. Semantic Service Discovery with DIANE Service Descriptions. In WI/IAT Workshops07 [1731], pages 152–156, 2007. doi: 10.1109/WI-IATW.2007.87. INSPEC Accession Number: 9894903.
1648. Ulrich Küster, Christian Klein, and Birgitta König-Ries. Discovery and Mediation using the DIANE Service Description. In First Workshop of the Semantic Web Service Challenge 2006 – Challenge on Automating Web Services Mediation, Choreography and Discovery [2688], 2006. Fully available at http://sws-challenge.org/workshops/2006-Stanford/papers/07.pdf [accessed 2010-12-12]. See also [1644–1646, 1649, 1650].
1649. Ulrich Küster, Birgitta König-Ries, and Michael Klein. Discovery and Mediation using DIANE Service Descriptions. In Second Workshop of the Semantic Web Service Challenge 2006 – Challenge on Automating Web Services Mediation, Choreography and Discovery [2689], 2006. Fully available at http://sws-challenge.org/workshops/2006-Budva/papers/SWS-Challenge-2006-Budva.pdf [accessed 2010-12-14]. See also [1644–1646, 1648, 1650].
1650. Ulrich Küster, Andrea Turati, Maciej Zaremba, Birgitta König-Ries, Dario Cerizza, Emanuele Della Valle, Marco Brambilla, Stefano Ceri, Federico Michele Facca, and Christina Tziviskou. Service Discovery with SWE-ET and DIANE – A Comparative Evaluation by Means of Solutions to a Common Scenario. In ICEIS07 [494], pages 430–437, volume 4, 2007. Fully available at http://sws-challenge.org/workshops/2007-Madeira/PaperDrafts/4-ICEIS2007_DIANE_CEFRIEL.pdf [accessed 2010-12-16]. See also [383–386, 1644–1646, 1648, 1649].
1651. Ugur Kuter, Dana Nau, Marco Pistore, and Paolo Traverso. A Hierarchical Task-Network Planner based on Symbolic Model Checking. In ICAPS05 [315], pages 300–309, 2005. Fully available at http://www.cs.umd.edu/~nau/papers/kuter05hierarchical.pdf [accessed 2011-01-11]. CiteSeerx: 10.1.1.81.669.
1652. Vladimír Kvasnička, Martin Pelikan, and Jiří Pospíchal. Hill Climbing with Learning (An Abstraction of Genetic Algorithm). Neural Network World – International Journal on Non-Standard Computing and Artificial Intelligence, 6:773–796, 1996, Academy of Sciences of the Czech Republic, Institute of Computer Science: Prague, Czech Republic. Fully available at http://130.203.133.121:8080/viewdoc/summary?doi=10.1.1.56.4888 [accessed 2009-08-21]. CiteSeerx: 10.1.1.56.4888.
1653. Chung Min Kwan and C. S. Chang. Application of Evolutionary Algorithm on a Transportation Scheduling Problem – The Mass Rapid Transit. In CEC05 [641], pages 987–994, volume 2, 2005. Fully available at http://www.lania.mx/~ccoello/EMOO/kwan05.pdf.gz [accessed 2007-08-27].
1654. Stan C. Kwasny and John F. Buck, editors. Proceedings of the 21st/1993 ACM Conference on Computer Science (CSC93), February 16–18, 1993, Indianapolis, IN, USA. ACM Press: New York, NY, USA. isbn: 0-89791-558-5. Google Books ID: CJq9PwAACAAJ and fHpQAAAAMAAJ. OCLC: 28196251, 84641266, 221446969, 257836504, 258290585, and 311876190.
L
1655. Jeffrey C. Lagarias, James A. Reeds, Margaret H. Wright, and Paul E. Wright. Convergence Properties of the Nelder-Mead Simplex Method in Low Dimensions. SIAM Journal on Optimization (SIOPT), 9(1):112–147, 1998, Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA. doi: 10.1137/S1052623496303470. Fully available at http://www.aoe.vt.edu/~cliff/aoe5244/nelder_mead_2.pdf [accessed 2008-06-14].
1656. Chih-Chung Lai, Chuan-Kang Ting, and Ren-Song Ko. An Effective Genetic Algorithm for Improving Wireless Sensor Network Lifetime. In GECCO07-I [2699], pages 2260–2260, 2007. doi: 10.1145/1276958.1277395. Poster Session: Real-world applications. See also [1657].
1657. Chih-Chung Lai, Chuan-Kang Ting, and Ren-Song Ko. An Effective Genetic Algorithm to Improve Wireless Sensor Network Lifetime for Large-Scale Surveillance Applications. In CEC07 [1343], pages 3531–3538, 2007. doi: 10.1109/CEC.2007.4424930. See also [1656].
1658. Timothy Lai. Discovery of Understandable Math Formulas Using Genetic Programming. In Genetic Algorithms and Genetic Programming at Stanford [1601], pages 118–127. Stanford University Bookstore, Stanford University: Stanford, CA, USA, 2003. Fully available at http://www.genetic-programming.org/sp2003/Lai.pdf [accessed 2010-11-21].
1659. John E. Laird, editor. Proceedings of the 5th International Conference on Machine Learning (ICML88), June 12–14, 1988, University of Michigan: Ann Arbor, MI, USA. Morgan Kaufmann: San Mateo, CA, USA. isbn: 0-934613-64-8. Google Books ID: FldQAAAAMAAJ. OCLC: 18017017, 82090777, 260147544, 441149879, 471061365, 475831474, 476283025, 499414195, and 610512831. GBV-Identification (PPN): 04232307X.
1660. John E. Laird, editor. Proceedings of the 5th International Workshop on Machine Learning (ML88), June 12–14, 1988, Ann Arbor, MI, USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 0-934613-64-8.
1661. Gary B. Lamont and Eric M. Holloway. Military Network Security using Self Organized Multi-Agent Entangled Hierarchies. In GECCO09-II [2343], pages 2559–2566, 2009. doi: 10.1145/1570256.1570361.
1662. Jouni A. Lampinen and Ivan Zelinka. On Stagnation of the Differential Evolution Algorithm. In MENDEL00 [2106], pages 76–83, 2000. CiteSeerx: 10.1.1.35.7932.
1663. Ailsa H. Land and A. G. Doig. An Automatic Method of Solving Discrete Programming Problems. Econometrica – Journal of the Econometric Society, 28(3):497–520, July 1960, Wiley Interscience: Chichester, West Sussex, UK and Econometric Society. Fully available at http://jmvidal.cse.sc.edu/library/land60a.pdf [accessed 2009-06-29].
1664. Edmund Landau. Handbuch der Lehre von der Verteilung der Primzahlen. B. G. Teubner: Leipzig, Germany and AMS Chelsea Publishing: Providence, RI, USA, 1909–1953. isbn: 0821826506. Google Books ID: 2ivxUiSogLgC, 81m4AAAAIAAJ, SFm4AAAAIAAJ, fbyvOgAACAAJ, and u YAOwAACAAJ.
1665. Kevin J. Lang. Hill Climbing Beats Genetic Search on a Boolean Circuit Synthesis Problem of Koza's. In ICML95 [2221], pages 340–343, 1995. Fully available at http://www.cs.cornell.edu/Courses/cs472/2004fa/Materials/icml-GP.pdf and http://www.genetic-programming.com/lang-ml95.ps [accessed 2009-06-16]. See also [1600].
1666. Christopher Gale Langton, editor. The Proceedings of an Interdisciplinary Workshop on the Synthesis and Simulation of Living Systems (Artificial Life87), September 21, 1987, Oppenheimer Study Center: Los Alamos, NM, USA, volume 6 in Santa Fe Institute Studies in the Sciences of Complexity. Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA and Westview Press. isbn: 0201093464 and 0201093561. Google Books ID: 5W05AgAACAAJ. OCLC: 257011165 and 260158809. Published in January 1989.
1667. Christopher Gale Langton, editor. Proceedings of the Workshop on Artificial Life (Artificial Life III), June 15–19, 1992, Santa Fe, NM, USA, volume 17 in Santa Fe Institute Studies in the Sciences of Complexity. Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA and Westview Press. isbn: 0-201-62492-3 and 0-201-62494-X. OCLC: 29548167, 475766246, 501682096, 624447801, and 633335938. GBV-Identification (PPN): 181621835. Published in August 1993.
1668. Christopher Gale Langton, Charles E. Taylor, Doyne J. Farmer, and Steen Rasmussen, editors. Proceedings of the Workshop on Artificial Life (Artificial Life II), February 1990, volume X in Santa Fe Institute Studies in the Sciences of Complexity. Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA and Westview Press. isbn: 0201525704 and 0201525712. Google Books ID: 7WZAGQAACAAJ, 7mZAGQAACAAJ, and wAx7AAAACAAJ. OCLC: 23733587, 177349139, 180517573, 223313171, 257010417, and 258496439.
1669. William B. Langdon. Mapping Non-conventional Extensions of Genetic Programming. In UC06 [472], pages 166–180, 2006. doi: 10.1007/11839132_14. Fully available at http://www.cs.ucl.ac.uk/staff/wlangdon/ftp/papers/wbl_uc2002.pdf [accessed 2010-11-21].
1670. William B. Langdon. A SIMD Interpreter for Genetic Programming on GPU Graphics Cards. Computer Science Technical Report CSM-470, University of Essex, Departments of Mathematical and Biological Sciences: Wivenhoe Park, Colchester, Essex, UK, July 3, 2007. Fully available at http://cswww.essex.ac.uk/technical-reports/2007/csm_470.pdf [accessed 2008-12-24]. issn: 1744-8050. See also [1671].
1671. William B. Langdon and Wolfgang Banzhaf. A SIMD Interpreter for Genetic Programming on GPU Graphics Cards. In EuroGP08 [2080], pages 73–85, 2008. doi: 10.1007/978-3-540-78671-9_7. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/langdon_2008_eurogp.html and http://www.cs.ucl.ac.uk/staff/W.Langdon/ftp/papers/langdon_2008_eurogp.pdf [accessed 2009-08-04]. See also [1670].
1672. William B. Langdon and Riccardo Poli. Fitness Causes Bloat: Mutation. In EuroGP98 [210], pages 37–48, 1998. Fully available at ftp://ftp.cwi.nl/pub/W.B.Langdon/papers/WBL.euro98_bloatm.ps.gz and http://citeseer.ist.psu.edu/langdon98fitness.html [accessed 2007-09-09]. CiteSeerx: 10.1.1.40.8288.
1673. William B. Langdon and Riccardo Poli. Foundations of Genetic Programming. Springer-Verlag GmbH: Berlin, Germany, January 2002. isbn: 3-540-42451-2. Partly available at http://www.cs.ucl.ac.uk/staff/W.Langdon/FOGP/ [accessed 2008-02-15]. Google Books ID: PZli0s95Jd0C. OCLC: 47717957, 248023954, and 491544616. Library of Congress Control Number (LCCN): 2001049394. GBV-Identification (PPN): 334151872. LC Classification: QA76.623 .L35 2002.
1674. William B. Langdon, Terence Soule, Riccardo Poli, and James A. Foster. The Evolution of Size and Shape. In Advances in Genetic Programming [2569], Chapter 8, pages 163–190. MIT Press: Cambridge, MA, USA, 1996. Fully available at http://www.cs.bham.ac.uk/~wbl/aigp3/ch08.pdf and http://www.cs.bham.ac.uk/~wbl/aigp3/ch08.ps.gz [accessed 2011-11-23]. CiteSeerx: 10.1.1.59.1581.
1675. William B. Langdon, Erick Cantú-Paz, Keith E. Mathias, Rajkumar Roy, David Davis, Riccardo Poli, Hari Balakrishnan, Vasant Honavar, Günter Rudolph, Joachim Wegener, Larry Bull, Mitchell A. Potter, Alan C. Schultz, Julian Francis Miller, Edmund K. Burke, and Nataša Jonoska, editors. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO02), July 9–13, 2002, Roosevelt Hotel: New York, NY, USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-878-8, 1-55860-881-8, and 1-55860-903-2. Google Books ID: 7YZQAAAAMAAJ, N3kCAAAACAAJ, QDq-IgAACAAJ, W28dHQAACAAJ, and tBThPAAACAAJ. A joint meeting of the eleventh International Conference on Genetic Algorithms (ICGA-2002) and the seventh Annual Genetic Programming Conference (GP-2002). See also [224, 478, 1790].
1676. Pat Langley, editor. Proceedings of the 17th International Conference on Machine Learning (ICML00), June 29–July 2, 2000, Stanford University: Stanford, CA, USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-707-2.
1677. Pier Luca Lanzi and Rick L. Riolo. A Roadmap to the Last Decade of Learning Classifier System Research (From 1989 to 1999). In IWLCS00 [1678], page 33, 2000. doi: 10.1007/3-540-45027-0_2.
1678. Pier Luca Lanzi, Wolfgang Stolzmann, and Stewart W. Wilson, editors. Advances in Learning Classifier Systems, Revised Papers of the Third International Workshop on Learning Classifier Systems (IWLCS00), September 15–16, 2000, Paris, France, volume 1996/2001 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-42437-7. OCLC: 441809319. coden: LNCSD9. A joint workshop of the sixth International Conference on Simulation of Adaptive Behaviour (SAB2000) and the sixth International Conference on Parallel Problem Solving from Nature (PPSN VI); published in 2001.
1679. Pier Luca Lanzi, Wolfgang Stolzmann, and Stewart W. Wilson, editors. Advances in Learning Classifier Systems, Revised Papers of the Fourth International Workshop on Learning Classifier Systems (IWLCS01), July 7–8, 2001, San Francisco, CA, USA, volume 2321/2002 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-48104-4. isbn: 3-540-43793-2. Google Books ID: LX6ky1cIgXAC. Held during GECCO-2001, published in 2002. See also [2570].
1680. Pier Luca Lanzi, Wolfgang Stolzmann, and Stewart W. Wilson, editors. Revised Papers of the 5th International Workshop on Learning Classifier Systems (IWLCS02), September 7–8, 2002, Granada, Spain, volume 2661/2003 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-20544-6. Published in 2003.
1681. Pier Luca Lanzi, Daniele Loiacono, Stewart W. Wilson, and David Edward Goldberg. Generalization in the XCSF Classifier System: Analysis, Improvement, and Extension. Evolutionary Computation, 15(2):133–168, Summer 2007, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.2007.15.2.133. CiteSeerx: 10.1.1.114.7870. PubMed ID: 17535137.
1682. Patrick LaRoche and A. Nur Zincir-Heywood. 802.11 Network Intrusion Detection Using Genetic Programming. In GECCO05 WS [2339], pages 170–171, 2005. doi: 10.1145/1102256.1102295. Fully available at http://larocheonline.ca/~plaroche/papers/gecco-laroche-heywood.pdf [accessed 2008-06-16].
1683. Pedro Larrañaga and José Antonio Lozano, editors. Estimation of Distribution Algorithms – A New Tool for Evolutionary Computation, volume 2 in Genetic and Evolutionary Computation. Springer US: Boston, MA, USA and Kluwer Academic Publishers: Norwell, MA, USA, 2001. isbn: 0-7923-7466-5. Google Books ID: kyU74gxp1rsC and o0llxS4u93wC. OCLC: 47996547.
1684. Christian W. G. Lasarczyk and Wolfgang Banzhaf. An Algorithmic Chemistry for Genetic Programming. In EuroGP05 [1518], pages 1–12, 2005. doi: 10.1007/978-3-540-31989-4_1. Fully available at http://ls11-www.cs.uni-dortmund.de/downloads/papers/LaBa05.pdf and http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/eurogp_LasarczykB05.html [accessed 2010-08-01]. CiteSeerx: 10.1.1.131.4314.
1685. Christian W. G. Lasarczyk and Wolfgang Banzhaf. Total Synthesis of Algorithmic Chemistries. In GECCO05 [304], pages 1635–1640, volume 2, 2005. doi: 10.1145/1068009.1068285. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gecco2005/docs/p1635.pdf [accessed 2009-08-02].
1686. Jörg Lässig and Achim G. Hoffmann. Threshold-selecting Strategy for Best Possible Ground State Detection with Genetic Algorithms. Physical Review E, 79(4):046702–046702-8, April 2009, American Physical Society: College Park, MD, USA. doi: 10.1103/PhysRevE.79.046702. See also [1687].
1687. Jörg Lässig, Karl Heinz Hoffmann, and Mihaela Enachescu. Threshold Selecting: Best Possible Probability Distribution for Crossover Selection in Genetic Algorithms. In GECCO08 [1519], pages 2181–2185, 2008. doi: 10.1145/1388969.1389044. See also [1686].
1688. Michael Lässig and Angelo Valleriani, editors. Biological Evolution and Statistical Physics, volume 585/2002 in Lecture Notes in Physics. Springer-Verlag: Berlin/Heidelberg, 2002. doi: 10.1007/3-540-45692-9. isbn: 3-540-43188-8. Google Books ID: b7SL7kc5N7oC. issn: 0075-8450.
1689. Antonio LaTorre, José María Peña, Santiago Muelas, and Manuel Zaforas. Hybrid Evolutionary Algorithms for Large Scale Continuous Problems. In GECCO09-I [2342], pages 1863–1864, 2009. doi: 10.1145/1569901.1570205.
1690. Hui Keng Lau, Jonathan Timmis, and Ian Bate. Anomaly Detection Inspired by Immune Network Theory: A Proposal. In CEC09 [1350], pages 3045–3051, 2009. doi: 10.1109/CEC.2009.4983328. Fully available at ftp://ftp.cs.york.ac.uk/papers/rtspapers/R:Lau:2009.pdf [accessed 2009-09-05]. INSPEC Accession Number: 10688923.
1691. Marco Laumanns, Eckart Zitzler, and Lothar Thiele. A Unified Model for Multi-Objective Evolutionary Algorithms with Elitism. In CEC00 [1333], pages 46–53, volume 1, 2000. doi: 10.1109/CEC.2000.870274. Fully available at http://www.lania.mx/~ccoello/EMOO/laumanns00.ps.gz [accessed 2010-08-01]. CiteSeerx: 10.1.1.36.4389. INSPEC Accession Number: 6734602.
1692. Marco Laumanns, Lothar Thiele, Kalyanmoy Deb, and Eckart Zitzler. On the Convergence and Diversity-Preservation Properties of Multi-Objective Evolutionary Algorithms. TIK-Report 108, Kanpur Genetic Algorithms Laboratory (KanGAL), Department of Mechanical Engineering, Indian Institute of Technology Kanpur (IIT): Kanpur, Uttar Pradesh, India, June 13, 2001. Fully available at http://e-collection.ethbib.ethz.ch/view/eth:24717 [accessed 2009-07-09]. CiteSeerx: 10.1.1.28.6480.
1693. Gregory F. Lawler. Introduction to Stochastic Processes. Chapman & Hall: London, UK and CRC Press, Inc.: Boca Raton, FL, USA, 2nd edition, July 1995. isbn: 0412995115 and 158488651X. Google Books ID: 1L8uPQAACAAJ, LATJdzBlajUC, and ypD8ZFqdPlUC. OCLC: 31288133, 64084881, and 266086758.
1694. Eugène L. Lawler, Jan Karel Lenstra, A. H. G. Rinnooy-Kan, and David B. Shmoys. The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization, Wiley-Interscience Series in Discrete Mathematics, Estimation, Simulation, and Control – Wiley-Interscience Series in Discrete Mathematics and Optimization. Wiley Interscience: Chichester, West Sussex, UK, September 1985. isbn: 0-471-90413-9. OCLC: 11756468, 260186267, 468006920, 488801427, 611882247, and 631997749. Library of Congress Control Number (LCCN): 85003158. GBV-Identification (PPN): 011848790, 02528360X, 042459834, 042637112, 043364950, 078586313, 156527677, 175957169, and 190388226. LC Classification: QA164 .T73 1985.
1695. Mark Lawrence and Colin Wilsdon, editors. Operational Research Tutorial Papers – Annual Conference of the Operational Research Society, 1995, Canterbury, Kent, UK. Operations Research Society: Birmingham, UK, Hoskyns Cap Gemini Sogeti, and Stockton Press: Hampshire, UK. isbn: 0-903440-15-6. Google Books ID: pLkmAQAACAAJ. OCLC: 222984071 and 249133773.
1696. Michael Lawrence. Multiobjective Genetic Algorithms for Materialized View Selection in OLAP Data Warehouses. In GECCO06 [1516], pages 669–706, 2006. doi: 10.1145/1143997.1144120.
1697. Steve Lawrence and C. Lee Giles. Overfitting and Neural Networks: Conjugate Gradient and Backpropagation. In IJCNN00 [82], pages 1114–1119, volume 1, 2000. doi: 10.1109/IJCNN.2000.857823. CiteSeerx: 10.1.1.27.9735. INSPEC Accession Number: 6696828.
1698. Antonio Lazcano and Stanley L. Miller. How long did it take for Life to begin and evolve to Cyanobacteria? Journal of Molecular Evolution, 39(6):546–554, December 1994, Springer New York: New York, NY, USA. doi: 10.1007/BF00160399. PubMed ID: 11536653.
1699. A. C. G. Verdugo Lazo and P. N. Rathie. On the Entropy of Continuous Probability Distributions. IEEE Transactions on Information Theory (TIT), 24(1):120–122, January 1978, Institute of Electrical and Electronics Engineers (IEEE) Press: New York, NY, USA.
1700. Lucien Marie Le Cam and Jerzy Neyman, editors. Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability (June 21–July 18, 1965 and December 27, 1965–January 7, 1966), 1967, University of California, Statistical Laboratory: Berkeley, CA,
USA. University of California Press: Berkeley, CA, USA. Fully available at http://
projecteuclid.org/euclid.bsmsp/1200513453, http://projecteuclid.
org/euclid.bsmsp/1200513615, http://projecteuclid.org/euclid.bsmsp/
1200513981, and http://ucblibrary4.berkeley.edu:8088/xdlib//math/ucb/
mets/cumath_9_1_00019737.xml [accessed 2009-10-20]. Google Books ID: IX9WAAAAMAAJ,
LVI4AAAAIAAJ, NA-3QQAACAAJ, and f7xWAAAAMAAJ. OCLC: 5772953, 25234711,
25234721, 25234803, 25234809, 36997875, 71753794, 221157481, 223463967, and
300992815. issn: 0097-0433.
1701. Joshua Lederberg and Alexa T. McCray. Ome Sweet Omics – A Genealogical Treasury of Words. The Scientist – Magazine of the Life Sciences, 15(7):8, April 2001. Fully available at http://lhncbc.nlm.nih.gov/lhc/docs/published/2001/pub2001047.pdf [accessed 2009-06-28].
1702. Chang-Yong Lee and Xin Yao. Evolutionary Programming using the Mutations based on the Lévy Probability Distribution. IEEE Transactions on Evolutionary Computation (IEEE-EC), 8(1):1–13, February 2004, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/TEVC.2003.816583. CiteSeerx: 10.1.1.6.2145. INSPEC Accession Number: 8003613.
1703. Jack Yiu-Bun Lee and P. C. Wong. The Effect of Function Noise on GP Efficiency. In Progress in Evolutionary Computation, AI93 (Melbourne, Victoria, Australia, 1993-11-16) and AI94 Workshops (Armidale, NSW, Australia, 1994-11-22/23) on Evolutionary Computation, Selected Papers [3023], pages 1–16, 1993–1994. doi: 10.1007/3-540-60154-6_43.
1704. Jongsoo Lee and Prabhat Hajela. Parallel Genetic Algorithm Implementation in Multidisciplinary Rotor Blade Design. Technical Report ADA305771, Rensselaer Polytechnic Institute, Aeronautical Engineering and Mechanics Department, Mechanical Engineering: Troy, NY, USA, Defense Technical Information Center (DTIC): Fort Belvoir, VA, USA, November 1995. Fully available at http://handle.dtic.mil/100.2/ADA305771 [accessed 2009-06-17]. See also [1705].
1705. Jongsoo Lee and Prabhat Hajela. Parallel Genetic Algorithm Implementation in Multidisciplinary Rotor Blade Design. Journal of Aircraft, 33(5):962–966, September–October 1996, American Institute of Aeronautics and Astronautics (AIAA): Reston, VA, USA. doi: 10.2514/3.47042. See also [1704].
1706. Minsoo Lee and Joachim Hammer. Speeding up Materialized View Selection in Data Warehouse using a Randomized Algorithm. International Journal of Cooperative Information Systems (IJCIS), 10(3):327–353, September 2001, World Scientific Publishing Co.: Singapore. doi: 10.1142/S0218843001000370. Fully available at http://www.cise.ufl.edu/~jhammer/publications/IJCIS-DW2001/minsoolee.doc [accessed 2011-04-09].
1707. S. Lee, S. Soak, K. Kim, H. Park, and M. Jeon. Statistical Properties Analysis of Real World Tournament Selection in Genetic Algorithms. Applied Intelligence – The International Journal of Artificial Intelligence, Neural Networks, and Complex Problem-Solving Technologies, 28(2):195–205, April 2008, Springer Netherlands: Dordrecht, Netherlands. doi: 10.1007/s10489-007-0062-2.
1708. Shane Legg, Marcus Hutter, and Akshat Kumar. Tournament versus Fitness Uniform Selection. Technical Report IDSIA-04-04, Dalle Molle Institute for Artificial Intelligence (IDSIA), University of Lugano, Faculty of Informatics / University of Applied Sciences of Southern Switzerland (SUPSI), Department of Innovative Technologies: Manno-Lugano, Switzerland, March 4, 2004. Fully available at http://www.idsia.ch/idsiareport/IDSIA-04-04.pdf [accessed 2011-04-29]. See also [1313, 1709].
1709. Shane Legg, Marcus Hutter, and Akshat Kumar. Tournament versus Fitness Uniform Selection. In CEC04 [1369], pages 2144–2151, 2004. doi: 10.1109/CEC.2004.1331162. Fully available at http://www.hutter1.net/ai/fussexp.pdf, http://www.hutter1.net/ai/fussexp.ps, and http://www.hutter1.net/ai/fussexp.zip [accessed 2011-04-29]. CiteSeerx: 10.1.1.71.2663. arXiv ID: cs/0403038v1. INSPEC Accession Number: 8229180. See also [1312, 1313, 1708].
1710. Joel Lehman and Kenneth Owen Stanley. Exploiting Open-Endedness to Solve Problems Through the Search for Novelty. In Artificial Life XI [446], pages 329–336, 2008. Fully available at http://alifexi.alife.org/papers/ALIFExi_pp329-336.pdf and http://eplex.cs.ucf.edu/papers/lehman_alife08.pdf [accessed 2011-03-01].
1711. Jingsheng Lei, JingTao Yao, and Qingfu Zhang, editors. Proceedings of the Third International Conference on Advances in Natural Computation (ICNC07), August 24–27, 2007, Haikou, Hainan, China. IEEE Computer Society Press: Los Alamitos, CA, USA. isbn: 0-7695-2875-9. Google Books ID: oAIEHwAACAAJ. OCLC: 166273050 and 181014549. Held jointly with the 4th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD 2007).
1712. Jens Lemcke, Matthias Kaiser, Christian Kubczak, Tiziana Margaria, and Björn Knuth. Advances in Solving the Mediator Scenario with jABC and jABC/GEM. In SWSC08, Semantic Web Services Challenge: Proceedings of the 2008 Workshops [2166, 2590], pages 89–102. Stanford University, Computer Science Department, Stanford Logic Group: Stanford, CA, USA, 2008. Fully available at http://logic.stanford.edu/reports/LG-2009-01.pdf [accessed 2010-12-16]. Part of [2166]. See also [1626–1632, 1831, 1832].
1713. James G. Lennox. Aristotle's Philosophy of Biology: Studies in the Origins of Life Science, Cambridge Studies in Philosophy and Biology. Cambridge University Press: Cambridge, UK, April 2001. isbn: 0521650275, 0521659760, and 978-0521650274. Google Books ID: CqmXeBDkb6oC and IijTXV3 ZLMC. OCLC: 43694261 and 45592829. LC Classification: QH331 .L528 2001.
1714. Henry Leung, editor. Proceedings of the Sixth IASTED International Conference on Artificial Intelligence and Soft Computing (ASC02), July 17–19, 2002, Banff, AB, Canada. ACTA Press: Calgary, AB, Canada. isbn: 0-88986-346-6. Google Books ID: Zd3NPQAACAAJ. OCLC: 52161283 and 249212159.
1715. Kwong Sak Leung, Kin Hong Lee, and Sin Man Cheang. Genetic Parallel Programming – Evolving Linear Machine Codes on a Multiple-ALU processor. In ICAIET02 [3004], pages 207–213, 2002.
1716. Joel R. Levin. What If There Were No More Bickering about Statistical Significance Tests? Research in the Schools (RITS), 5(2):43–53, 1998, Tony Onwuegbuzie and John Slate, editors. Fully available at http://www.personal.psu.edu/users/d/m/dmr/sigtest/6mspdf.pdf [accessed 2008-08-15].
1717. Andrew Lewis, Gerhard Weis, Marcus Randall, Amir Galehdar, and David Thiel. Optimising Efficiency and Gain of Small Meander Line RFID Antennas using Ant Colony System. In CEC09 [1350], pages 1486–1492, 2009. doi: 10.1109/CEC.2009.4983118. INSPEC Accession Number: 10688713.
1718. Peter R. Lewis, Paul Marrow, and Xin Yao. Evolutionary Market Agents and Heterogeneous Service Providers: Achieving Desired Resource Allocations. In CEC09 [1350], pages 904–910, 2009. doi: 10.1109/CEC.2009.4983041. INSPEC Accession Number: 10688568.
1719. Robert Michael Lewis, Virginia Joanne Torczon, and Michael W. Trosset. Direct Search Methods: Then and Now. Technical Report NASA/CR-2000-210125, NASA Langley Research Center, Institute for Computer Applications in Science and Engineering (ICASE): Hampton, VA, USA, May 2000. Fully available at http://www.cs.wm.edu/~va/research/jcam.pdf [accessed 2010-12-25]. CiteSeerx: 10.1.1.35.5338. ICASE Report No. 2000-26. See also [1720].
1720. Robert Michael Lewis, Virginia Joanne Torczon, and Michael W. Trosset. Direct Search
Methods: Then and Now. Journal of Computational and Applied Mathematics, 124(1-2):191–207,
December 2000, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands.
Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands.
doi: 10.1016/S0377-0427(00)00423-4. See also [1719].
1721. Jin Li. FGP: A Genetic Programming Based Tool for Financial Forecasting. PhD thesis,
University of Essex: Wivenhoe Park, Colchester, Essex, UK, October 6, 2001. See also
[1722].
1722. Jin Li, Xiaoli Li, and Xin Yao. Cost-Sensitive Classification with Genetic Programming. In
CEC05 [641], pages 2114–2121, 2005. Fully available at
http://www.cs.bham.ac.uk/~xin/papers/LiLiYaoCEC05.pdf [accessed 2010-06-25].
CiteSeerx: 10.1.1.66.7637. See also [1721].
1723. Leon Y. O. Li. Vehicle Routeing for Winter Gritting. PhD thesis, Lancaster University,
Department of Management Science: Lancaster, Lancashire, UK, 1992.
1724. Leon Y. O. Li and Richard W. Eglese. An Interactive Algorithm for Vehicle Routeing for
Winter Gritting. The Journal of the Operational Research Society (JORS), 47(2):217–228,
February 1996, Palgrave Macmillan Ltd.: Houndmills, Basingstoke, Hampshire, UK and
Operations Research Society: Birmingham, UK. JSTOR Stable ID: 2584343.
1725. Wei Li. Using Genetic Algorithm for Network Intrusion Detection. In Proceedings of
the United States Department of Energy Cyber Security Group 2004 Training Confer-
ence [1492], 2004. Fully available at http://www.security.cse.msstate.edu/docs/
Publications/wli/DOECSG2004.pdf [accessed 2009-05-11].
1726. Xiang Li and Vic Ciesielski. Using Loops in Genetic Programming for A Two Class Binary
Image Classification Problem. In AI04 [2871], pages 898–909, 2004. Fully available
at http://goanna.cs.rmit.edu.au/~vc/papers/aust-ai04.pdf [accessed 2010-11-20].
CiteSeerx: 10.1.1.87.3912.
1727. Xiaodong Li. Niching without Niching Parameters: Particle Swarm Optimization
Using a Ring Topology. IEEE Transactions on Evolutionary Computation (IEEE-EC),
14(1):150–169, February 2010, IEEE Computer Society: Washington, DC, USA.
doi: 10.1109/TEVC.2009.2026270. Fully available at
http://goanna.cs.rmit.edu.au/~xiaodong/publications/rpso-ieee-tec.pdf [accessed 2011-11-10].
INSPEC Accession Number: 11088362.
1728. Xiaodong Li, Jürgen Branke, and Tim Blackwell. Particle Swarm with Speciation and
Adaptation in a Dynamic Environment. In GECCO06 [1516], pages 51–58, 2006.
doi: 10.1145/1143997.1144005.
1729. Xiaodong Li, Michael Kirley, Mengjie Zhang, David Green, Vic Ciesielski, Hussein A.
Abbass, Zbigniew Michalewicz, Tim Hendtlass, Kalyanmoy Deb, Kay Chen Tan, Jürgen
Branke, and Yuhui Shi, editors. Proceedings of the Seventh International Conference on
Simulated Evolution And Learning (SEAL08), December 7–10, 2008, Melbourne, VIC,
Australia, volume 5361/2008 in Theoretical Computer Science and General Issues (SL
1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/978-3-540-89694-4. isbn: 3-540-89693-7. Library of Congress Control Number
(LCCN): 2008939923.
1730. Yong Li and Shihua Gong. Dynamic Ant Colony Optimisation for TSP. The International
Journal of Advanced Manufacturing Technology, 22(7-8):528–533, November 2003, Springer-Verlag
London Limited: London, UK. doi: 10.1007/s00170-002-1478-9.
1731. Yuefeng Li and Vijay V. Raghavan, editors. Proceedings of the 2007 IEEE/WIC/ACM
International Conferences on Web Intelligence and Intelligent Agent Technology Workshops
(WI/IAT Workshops07), November 2–5, 2007, Silicon Valley, CA, USA. IEEE Computer
Society Press: Los Alamitos, CA, USA. isbn: 0-7695-3028-1. Google Books
ID: E3ALMwAACAAJ, TTBEPwAACAAJ, and wWCDRAAACAAJ. Library of Congress Control
Number (LCCN): 2007933794. Order No.: P3028.
1732. J. J. Liang, Thomas Philip Runarsson, Efrén Mezura-Montes, Maurice Clerc, Ponnuthurai
Nagaratnam Suganthan, Carlos Artemio Coello Coello, and Kalyanmoy Deb. Problem
Definitions and Evaluation Criteria for the CEC 2006 Special Session on Constrained Real-Parameter
Optimization. Technical Report, Nanyang Technological University (NTU): Singapore,
September 18, 2006. Fully available at
http://www.lania.mx/~emezura/util/files/tr_cec06.pdf [accessed 2009-10-26]. Partly available at
http://www3.ntu.edu.sg/home/EPNSugan/index_files/CEC-06/CEC06.htm [accessed 2009-10-26].
1733. Suiliong Liang, A. Nur Zincir-Heywood, and Malcom Ian Heywood. The Effect of Routing
under Local Information using a Social Insect Metaphor. In CEC02 [944], pages
1438–1443, 2002. Fully available at
http://users.cs.dal.ca/~mheywood/X-files/Publications/01004454.pdf [accessed 2008-07-26]. See also [1734].
1734. Suiliong Liang, A. Nur Zincir-Heywood, and Malcom Ian Heywood. Adding More Intelligence
to the Network Routing Problem: AntNet and GA-Agents. Applied Soft
Computing, 6(3):244–257, March 2006, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/j.asoc.2005.01.005. See also [1733].
1735. Weifa Liang, Hui Wang, and Maria E. Orlowska. Materialized View Selection under the
Maintenance Time Constraint. Data & Knowledge Engineering, 37(2):203–216, May 2001,
Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland
Scientific Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/S0169-023X(01)00007-6.
CiteSeerx: 10.1.1.90.9530.
1736. Pierre Liardet, Pierre Collet, Cyril Fonlupt, Evelyne Lutton, and Marc Schoenauer, editors.
Proceedings of the 6th International Conference on Artificial Evolution, Evolution Artificielle
(EA03), October 27–30, 2003, Marseilles, France, volume 2936 in Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-21523-9. Published
in 2003.
1737. Ching-Fang Liaw. A Hybrid Genetic Algorithm for the Open Shop Scheduling Problem.
European Journal of Operational Research (EJOR), 124(1):28–42, July 1, 2000, Elsevier Science
Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers
Ltd.: Amsterdam, The Netherlands. doi: 10.1016/S0377-2217(99)00168-X.
1738. Vibeke Libby, editor. Proceedings of Data Structures and Target Classification, the SPIE's
International Symposium on Optical Engineering and Photonics in Aerospace Sensing, April 1–2,
1991, Orlando, FL, USA, volume 1470 in The Proceedings of SPIE. Society of Photographic
Instrumentation Engineers (SPIE): Bellingham, WA, USA. isbn: 0819405795. Google Books
ID: Ne1RAAAAMAAJ. OCLC: 24461715.
1739. Gunar E. Liepins and Michael D. Vose. Deceptiveness and Genetic Algorithm Dynamics. In
FOGA90 [2562], pages 36–50, 1990. Fully available at
http://www.osti.gov/bridge/servlets/purl/6445602-CfqU6M/ [accessed 2007-11-05].
1740. Pedro Lima, editor. Robotic Soccer. I-Tech Education and Publishing: Vienna, Austria,
2007. isbn: 3-902613-07-6. Fully available at
http://s.i-techonline.com/Book/Robotic-Soccer/ISBN978-3-902613-21-9.html [accessed 2008-04-23].
Google Books ID: 5lAuQwAACAAJ. OCLC: 441015574.
1741. JiGuan G. Lin. On Min-Norm and Min-Max Methods of Multi-Objective Optimization.
Mathematical Programming, 103(1):1–33, May 2005, Springer-Verlag: Berlin/Heidelberg.
doi: 10.1007/s10107-003-0462-y.
1742. Charles X. Ling. Overfitting and Generalization in Learning Discrete Patterns. Neurocomputing,
8(3):341–347, August 1995, Elsevier Science Publishers B.V.: Essex, UK.
doi: 10.1016/0925-2312(95)00050-G. CiteSeerx: 10.1.1.17.3198. Optimization and Combinatorics,
Part I-III.
1743. Marc Lipsitch. Adaptation on Rugged Landscapes Generated by Iterated Local Interactions
of Neighboring Genes. In ICGA91 [254], pages 128–135, 1991.
1744. Proceedings of the 15th Portuguese Conference on Artificial Intelligence (EPIA11), October
10–13, 2011, Lisbon, Portugal. Partly available at http://epia2011.appia.pt/
[accessed 2011-04-24].
1745. Duixian Liu and Brian Gaucher, editors. 2006 IEEE International Workshop on Antenna
Technology: Small Antennas and Novel Metamaterials (iWAT06), March 6–8, 2006,
Crowne Plaza Hotel: White Plains, NY, USA. IEEE Computer Society: Piscataway,
NJ, USA. isbn: 0-7803-9443-7. Google Books ID: IxvhAAAACAAJ. OCLC: 122521207,
289019234, 326736309, and 423939179. Catalogue no.: 06EX1191.
1746. Fang Liu and Le-Ping Lin. Unsupervised Anomaly Detection Based on an Evolutionary
Artificial Immune Network. In EvoWorkshops05 [2340], pages 166–174, 2005.
1747. Pu Liu, Francis C. M. Lau, Michael J. Lewis, and Cho-li Wang. A New Asynchronous
Parallel Evolutionary Algorithm for Function Optimization. In PPSN VII [1870], pages
401–410, 2002. doi: 10.1007/3-540-45712-7_39.
1748. Tie-Yan Liu, Yiming Yang, Hao Wan, Hua-Jun Zeng, Zheng Chen, and Wei-Ying Ma. Support
Vector Machines Classification with a Very Large-Scale Taxonomy. ACM SIGKDD Explorations
Newsletter, 7(1):36–43, June 2005, Association for Computing Machinery (ACM):
New York, NY, USA. doi: 10.1145/1089815.1089821. CiteSeerx: 10.1.1.72.1506.
1749. Yongguo Liu, Kefei Chen, Xiaofeng Liao, and Wei Zhang. A Genetic Clustering
Method for Intrusion Detection. Pattern Recognition, 37(5):927–942, May 2004, Elsevier
Science Publishers B.V.: Essex, UK. Imprint: Pergamon Press: Oxford, UK.
doi: 10.1016/j.patcog.2003.09.011.
1750. Silvana Livramento, Arnaldo V. Moura, Flávio K. Miyazawa, Mario M. Harada, and Eduardo
Reck Miranda. A Genetic Algorithm for Telecommunication Network Design. In
EvoWorkshops04 [2254], pages 140–149, 2004.
1751. Xavier Llorà and Kumara Sastry. Fast Rule Matching for Learning Classifier Systems via
Vector Instructions. In GECCO06 [1516], pages 1513–1520, 2006. doi: 10.1145/1143997.1144244.
See also [1752].
1752. Xavier Llorà and Kumara Sastry. Fast Rule Matching for Learning Classifier Systems via
Vector Instructions. IlliGAL Report 2006001, Illinois Genetic Algorithms Laboratory (IlliGAL),
Department of Computer Science, Department of General Engineering, University
of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, January 2006. Fully
available at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/2006001.pdf
[accessed 2007-09-12]. See also [1751].
1753. Fernando G. Lobo, Claudio F. Lima, and Zbigniew Michalewicz, editors. Parameter
Setting in Evolutionary Algorithms, volume 54 in Studies in Computational Intelli-
gence. Springer-Verlag: Berlin/Heidelberg, March 2007. doi: 10.1007/978-3-540-69432-8.
isbn: 3-540-69431-5 and 3-540-69432-3. Google Books ID: WgMmGQAACAAJ,
vfjLsGBmi7kC, and voAuHwAACAAJ. Library of Congress Control Number
(LCCN): 2006939345.
1754. Jose Lobo, John H. Miller, and Walter Fontana. Neutrality in Technological Landscapes.
Working papers, Santa Fe Institute: Santa Fe, NM, USA, January 26, 2004. Fully available
at http://fontana.med.harvard.edu/www/Documents/WF/Papers/tech.neut.pdf,
http://time.dufe.edu.cn/jingjiwencong/waiwenziliao/neut.pdf, and
http://zia.hss.cmu.edu/miller/papers/neut.pdf [accessed 2010-12-04].
CiteSeerx: 10.1.1.61.9296.
1755. Marco Locatelli. A Note on the Griewank Test Function. Journal of Global Optimization,
25(2):169–174, February 2003, Springer Netherlands: Dordrecht, Netherlands.
doi: 10.1023/A:1021956306041.
1756. Darrell F. Lochtefeld. Multi-Objectivization in Genetic Algorithms. PhD thesis, Wright
State University (WSU), School of Graduate Studies: Dayton, OH, USA, May 26, 2011,
Frank William Ciarallo, Supervisor, Raymond R. Hill, Yan Liu, W. Paul Murdock, and
Mateen M. Rizki, Committee members. Fully available at
http://etd.ohiolink.edu/send-pdf.cgi/Lochtefeld%20Darrell%20F.pdf?wright1308765665
[accessed 2011-12-04].
1757. Darrell F. Lochtefeld and Frank William Ciarallo. Deterministic Helper Objective Sequence
Applied to the Job-Shop Scheduling Problem. In GECCO10 [30], pages 431–438, 2010.
doi: 10.1145/1830483.1830566.
1758. Darrell F. Lochtefeld and Frank William Ciarallo. Helper-Objective Optimization Strategies
for the Job-Shop Scheduling Problem. Applied Soft Computing, 11(6):4161–4174, September
2011, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.asoc.2011.03.007.
1759. A. Geoff Lockett and Gerd Islei, editors. Proceedings of the 8th International Conference
on Multiple Criteria Decision Making: Improving Decision Making in Organizations
(MCDM88), August 21–26, 1988, University of Manchester, Manchester Business School:
Manchester, UK, Lecture Notes in Economics and Mathematical Systems. Springer-Verlag:
Berlin/Heidelberg. isbn: 0387517952 and 3540517952. OCLC: 20672173, 230998532,
311213400, 613307351, and 634248012. Published in December 1989.
1760. Reinhard Lohmann. Structure Evolution and Incomplete Induction. Biological Cybernetics,
69(4):319–326, August 1993, Springer-Verlag: Berlin/Heidelberg. doi: 10.1007/BF00203128.
1761. Reinhard Lohmann. Structure Evolution and Neural Systems. In Dynamic, Genetic, and
Chaotic Programming: The Sixth-Generation [2559], pages 395–411. Wiley Interscience:
Chichester, West Sussex, UK, 1992.
1762. Jason D. Lohn, editor. The 2003 NASA/DoD Conference on Evolvable Hardware (EH03),
June 9–11, 2003, Chicago, IL, USA. IEEE Computer Society: Washington, DC, USA.
1763. Jason D. Lohn and Silvano P. Colombano. Automated Analog Circuit Synthesis using a
Linear Representation. In ICES98 [2509], pages 125–133, 1999. Fully available at
http://ti.arc.nasa.gov/people/jlohn/bio.html [accessed 2009-06-16]. See also [1764].
1764. Jason D. Lohn and Silvano P. Colombano. A Circuit Representation Technique for Automated
Circuit Design. IEEE Transactions on Evolutionary Computation (IEEE-EC), 3(3):205–219,
September 1999, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/4235.788491.
Fully available at http://ti.arc.nasa.gov/people/jlohn/bio.html [accessed 2010-08-27].
CiteSeerx: 10.1.1.6.7507. INSPEC Accession Number: 6366632. See also [1763, 1765, 1766].
1765. Jason D. Lohn, Silvano P. Colombano, Garyl L. Haith, and Dimitris Stassinopoulos. A
Parallel Genetic Algorithm for Automated Electronic Circuit Design. In CAS00 [2001], 2000.
Fully available at http://ic.arc.nasa.gov/people/jlohn/bio.html [accessed 2009-06-16].
See also [1764, 1766].
1766. Jason D. Lohn, Garyl L. Haith, Silvano P. Colombano, and Dimitris Stassinopoulos. To-
wards Evolving Electronic Circuits for Autonomous Space Applications. In Proceedings of
the IEEE Aerospace Conference [1322], 2000. doi: 10.1109/AERO.2000.878523. Fully avail-
able at http://ti.arc.nasa.gov/people/jlohn/bio.html [accessed 2009-06-16]. INSPEC
Accession Number: 6795690. See also [1764, 1765].
1767. Jason D. Lohn, William F. Kraus, and Derek Linden. Evolutionary Optimization of a
Quadrifilar Helical Antenna. In IEEE AP-S International Symposium and USNC/URSI National
Radio Science Meeting [1336], 2002. Fully available at
http://ti.arc.nasa.gov/people/jlohn/Papers/aps2002.pdf [accessed 2009-06-17]. See also [1768, 1770].
1768. Jason D. Lohn, Derek Linden, Gregory S. Hornby, William F. Kraus, and Adán Rodríguez-Arroyo.
Evolutionary Design of an X-Band Antenna for NASA's Space Technology 5 Mission.
In EH03 [1762], pages 155–163, 2003. doi: 10.1109/EH.2003.1217660. See also [1767, 1770].
1769. Jason D. Lohn, Gregory S. Hornby, and Derek Linden. An Evolved Antenna for Deployment
on NASA's Space Technology 5 Mission. In GPTP04 [2089], pages 301–315, 2004.
doi: 10.1007/b101112. Fully available at
http://ic.arc.nasa.gov/people/hornby/papers/lohn_gptp04.ps.gz [accessed 2007-08-17].
1770. Jason D. Lohn, Derek Linden, Gregory S. Hornby, William F. Kraus, Adán Rodríguez-Arroyo,
and Stephen E. Seufert. Evolutionary Design of an X-Band Antenna for NASA's Space
Technology 5 Mission. In 2004 IEEE Antennas and Propagation Society (A-P-S) International
Symposium and USNC / URSI National Radio Science Meeting [2121], pages 2313–2316,
volume 3, 2004. doi: 10.1109/APS.2004.1331834. Fully available at
http://ti.arc.nasa.gov/projects/esg/publications/lohn_papers/aps2004.pdf [accessed 2009-06-17].
See also [1767, 1768].
1771. Proceedings of The Annual Conference of the South Eastern Accounting Group (SEAG) 2003,
September 8, 2003, London Metropolitan University: London, UK.
1772. Heitor Silverio Lopes and Wagner R. Weinert. EGIPSYS: an Enhanced Gene Expression
Programming Approach for Symbolic Regression Problems. International Journal of Applied
Mathematics and Computer Science (AMCS), 14(3), September 2004, University of
Zielona Góra: Zielona Góra, Poland and Lubuskie Scientific Society: Zielona Góra, Poland.
Fully available at http://matwbn.icm.edu.pl/ksiazki/amc/amc14/amc1437.pdf
and http://www.amcs.uz.zgora.pl/?action=get_file&file=AMCS_2004_14_3_7.pdf
[accessed 2009-09-08]. Special Issue: Evolutionary Computation. AMCS Centro Federal
de Educação Tecnológica do Paraná / CPGEI Av. 7 de setembro, 3165, 80230-901 Curitiba
(PR), Brazil.
1773. Heitor Silverio Lopes, Fabiano Luis de Sousa, and Luiz Carlos Gadelha DeSouza. The Generalized
Extremal Optimization with Real Codification. In EngOpt08 [1227], 2008. Fully
available at http://www.engopt.org/nukleo/pdfs/0201_engopt_2008_mainenti_etal.pdf
[accessed 2009-09-09].
1774. Luís Seabra Lopes, Nuno Lau, Pedro Mariano, and Luís Mateus Rocha, editors. Progress in
Artificial Intelligence – Proceedings of the 14th Portuguese Conference on Artificial Intelligence
(EPIA09), October 12–15, 2009, Aveiro, Portugal, volume 5816 in Lecture Notes in
Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
1775. Antonio López Jaimes and Carlos Artemio Coello Coello. Some Techniques to Deal
with Many-Objective Problems. In GECCO09-II [2343], pages 2693–2696, 2009.
doi: 10.1145/1570256.1570386.
1776. Pascal Lorenz, Petre Dini, and Vladimir Uskov, editors. Proceedings of the 10th International
Conference on Telecommunications (ICT03), February 21–March 1, 2003, Sofitel Coralia
Maeva Beach Hotel, Tahiti, Papeete, French Polynesia. IEEE Computer Society: Piscataway,
NJ, USA. isbn: 0-7803-7661-7. Google Books ID: -TurAAAACAAJ. OCLC: 52223801,
80325313, and 423983590. Catalogue no.: 03EX628.
1777. D. Lorenzoli, S. Mussino, M. Pezze, A. Sichel, D. Tosi, and Daniela Schilling. A SOA based
Self-Adaptive PERSONAL MOBILITY MANAGER. In SCContest06 [554], pages 479–486,
2006. doi: 10.1109/SCC.2006.16. INSPEC Accession Number: 9165412.
1778. Loughborough Antennas & Propagation Conference (PAPC06), April 11–12, 2006. Loughborough
University: Loughborough, Leicestershire, UK. isbn: 0947974415. Google Books
ID: K5K4NwAACAAJ.
1779. Jorge Loureiro and Orlando Belo. An Evolutionary Approach to the Selection and Allocation
of Distributed Cubes. In IDEAS06 [1372], pages 243–248, 2006. doi: 10.1109/IDEAS.2006.9.
INSPEC Accession Number: 9353377.
1780. Jorge Loureiro and Orlando Belo. Swarm Intelligence in Cube Selection and Allocation
for Multi-Node OLAP Systems. In SCSS06 [878], pages 229–239. Springer-Verlag
GmbH: Berlin, Germany, 2006, University of Bridgeport: Bridgeport, CT, USA.
doi: 10.1007/978-1-4020-6264-3_41.
1781. Jorge Loureiro and Orlando Belo. A Metamorphosis Algorithm for the Optimization
of a Multi-Node OLAP System. In EPIA07 [2026], pages 383–394, 2007.
doi: 10.1007/978-3-540-77002-2_32.
1782. José Antonio Lozano, Pedro Larrañaga, Iñaki Inza, and Endika Bengoetxea, editors. Towards
a New Evolutionary Computation – Advances on Estimation of Distribution Algorithms,
volume 192/2006 in Studies in Fuzziness and Soft Computing. Springer-Verlag
GmbH: Berlin, Germany, 2006. doi: 10.1007/11007937. isbn: 3-540-29006-0. Google Books
ID: 0dku9OKxl6AC and r0UrGB8y2V0C. OCLC: 63763942, 181473672, and 318299594.
Library of Congress Control Number (LCCN): 2005932568.
1783. Haiming Lu. State-of-the-Art Multiobjective Evolutionary Algorithms – Pareto Ranking,
Density Estimation and Dynamic Population. PhD thesis, Oklahoma State University, Faculty
of the Graduate College: Stillwater, OK, USA, August 2002, Gary G. Yen, Advisor. Fully
available at http://www.lania.mx/~ccoello/EMOO/thesis_lu.pdf.gz [accessed 2010-08-02].
ID: AAT 3080536.
1784. Quiang Lu and Xin Yao. Clustering and Learning Gaussian Distribution for Continuous
Optimization. IEEE Transactions on Systems, Man, and Cybernetics – Part C:
Applications and Reviews, 35(2):195–204, May 2005, IEEE Systems, Man, and Cybernetics
Society: New York, NY, USA. doi: 10.1109/TSMCC.2004.841914. Fully available
at http://www.cs.bham.ac.uk/~xin/papers/published_IEEETSMC_LuYa05.pdf
[accessed 2010-12-02]. CiteSeerx: 10.1.1.65.7063. INSPEC Accession Number: 8396860.
1785. Wei Lu and Issa Traore. Detecting New Forms of Network Intrusions Using Genetic Programming.
Computational Intelligence, 20(3):475–494, August 2004, Wiley Periodicals, Inc.:
Wilmington, DE, USA. doi: 10.1111/j.0824-7935.2004.00247.x. Fully available at
http://www.isot.ece.uvic.ca/publications/journals/coi-2004.pdf [accessed 2008-06-11].
1786. Proceedings of the Third Conference on Artificial General Intelligence (AGI10), March 5–8, 2010,
Lugano, Switzerland.
1787. Sean Luke and Liviu Panait. A Survey and Comparison of Tree Generation Algorithms.
In GECCO01 [2570], pages 81–88, 2001. Fully available at
http://www.cs.gmu.edu/~sean/papers/treegenalgs.pdf [accessed 2009-08-07].
CiteSeerx: 10.1.1.28.4837.
1788. Sean Luke and Lee Spector. Evolving Graphs and Networks with Edge Encoding: A Preliminary
Report. In GP96 LBP [1603], 1996. Fully available at
http://cs.gmu.edu/~sean/papers/graph-paper.pdf [accessed 2008-08-02]. CiteSeerx: 10.1.1.25.9484.
1789. Sean Luke and Lee Spector. Evolving Teamwork and Coordination with Genetic Programming.
In GP96 [1609], pages 150–156, 1996. Fully available at
http://www.ifi.uzh.ch/groups/ailab/people/nitschke/refs/Cooperation/cooperation.pdf
[accessed 2009-08-01]. CiteSeerx: 10.1.1.29.6914.
1790. Sean Luke, Conor Ryan, and Una-May O'Reilly, editors. GECCO 2002: Graduate Student
Workshop (GECCO02 GSWS), July 9, 2002, Roosevelt Hotel: New York, NY, USA. AAAI
Press: Menlo Park, CA, USA. See also [224, 478, 1675].
1791. Sean Luke, Liviu Panait, Gabriel Balan, Sean Paus, Zbigniew Skolicki, Jeff Bassett, Robert
Hubley, and Alexander Chircop. ECJ: A Java-based Evolutionary Computation Research
System. George Mason University (GMU), Evolutionary Computation Laboratory (EClab):
Fairfax, VA, USA, 2006. Fully available at http://cs.gmu.edu/~eclab/projects/ecj/
[accessed 2010-11-02].
1792. Eduard Lukschandl, Magnus Holmlund, Eric Moden, Mats G. Nordahl, and Peter Nordin.
Induction of Java Bytecode with Genetic Programming. In GP98 LBP [1605], pages 135–142,
1998.
1793. Eduard Lukschandl, Henrik Borgvall, Lars Nohle, Mats G. Nordahl, and Peter Nordin. Evolving
Routing Algorithms with the JBGP-System. In EvoWorkshops99 [2197], pages 193–202,
1999. doi: 10.1007/10704703_16. See also [1794, 1795].
1794. Eduard Lukschandl, Henrik Borgvall, Lars Nohle, Mats G. Nordahl, and Peter Nordin.
Distributed Java Bytecode Genetic Programming with Telecom Applications. In
EuroGP00 [2198], pages 316–325, 2000. See also [1793].
1795. Eduard Lukschandl, Peter Nordin, and Mats G. Nordahl. Using the Java Method Evolver
for Load Balancing in Communication Networks. In GECCO00 LBP [2928], pages 236–239,
2000. See also [1793].
1796. F. Luna, Antonio Jesús Nebro Urbaneja, Bernabé Dorronsoro Díaz, Enrique Alba Torres,
Pascal Bouvry, and L. Hogie. Optimal Broadcasting in Metropolitan MANETs Using
Multiobjective Scatter Search. In EvoWorkshops06 [2341], pages 255–266, 2006.
doi: 10.1007/11732242_23.
1797. Sen Luo, Bin Xu, and Yixin Yan. An Accumulated-QoS-First Search Approach for Semantic
Web Service Composition. In SOCA10 [1378], 2010. See also [320, 1146, 1571, 2998, 3010,
3011].
1798. Mikulas Luptacik and Rudolf Vetschera, editors. Proceedings of the First MCDM Winter
Conference, 16th International Conference on Multiple Criteria Decision Making (MCDM02),
February 12–22, 2002, Hotel Panhans: Semmering, Austria. Partly available at
http://orgwww.bwl.univie.ac.at/mcdm2002 [accessed 2007-09-10].
1799. Jay L. Lush. Progeny Test and Individual Performance as Indicators of an Animal's Breeding
Value. Journal of Dairy Science (JDS), 18(1):1–19, January 1935, American Dairy Science
Association (ADSA): Champaign, IL, USA. Fully available at
http://jds.fass.org/cgi/reprint/18/1/1 [accessed 2007-11-27].
1800. Luc Luyet, Sacha Varone, and Nicolas Zufferey. An Ant Algorithm for the
Steiner Tree Problem in Graphs. In EvoWorkshops07 [1050], pages 42–51, 2007.
doi: 10.1007/978-3-540-71805-5_5.
M
1801. Huanyu Ma, Wei Jiang, Songlin Hu, Zhenqiu Huang, and Zhiyong Liu. Two-Phase Graph
Search Algorithm for QoS-Aware Automatic Service Composition. In SOCA10 [1378], 2010.
See also [320, 1299].
1802. Wolfgang Maass, editor. Proceedings of the Eighth Annual Conference on Computational
Learning Theory (COLT95), July 5–8, 1995, Santa Cruz, CA, USA. ACM Press: New York,
NY, USA. isbn: 0-89791-723-5. Google Books ID: L2NQAAAAMAAJ and mDtrOgAACAAJ.
OCLC: 33144512, 82595159, 283286354, 300096337, and 318352896. ACM Order
No.: 604950.
1803. Penousal Machado, Jorge Tavares, Francisco Baptista Pereira, and Ernesto Jorge Fernandes
Costa. Vehicle Routing Problem: Doing It The Evolutionary Way. In GECCO02 [1675],
page 690, 2002. Fully available at
http://osiris.tuwien.ac.at/~wgarn/VehicleRouting/GECCO02_VRPCoEvo.pdf [accessed 2008-10-27].
CiteSeerx: 10.1.1.16.8208.
1804. Michael Machtey and Paul Young. An Introduction to the General Theory of Algorithms,
volume 2 in Theory of Computation, The Computer Science Library. Elsevier Science Publishing
Company, Inc.: New York, NY, USA and North-Holland Scientific Publishers Ltd.:
Amsterdam, The Netherlands, 1978. isbn: 0-444-00226-X and 0-444-00227-8. Google
Books ID: qncEAQAAIAAJ.
1805. Kenneth J. Mackin and Eiichiro Tazaki. Emergent Agent Communication in Multi-Agent
Systems using Automatically Defined Function Genetic Programming (ADF-GP). In
SMC99 [1331], pages 138–142, volume 5, 1999. doi: 10.1109/ICSMC.1999.815536. See also
[1806, 1807].
1806. Kenneth J. Mackin and Eiichiro Tazaki. Unsupervised Training of Multiobjective Agent
Communication using Genetic Programming. In KES00 [1279], pages 738–741, volume
2, 2000. doi: 10.1109/KES.2000.884152. Fully available at
http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/Mackin00.html and
http://www.lania.mx/~ccoello/EMOO/mackin00.pdf.gz [accessed 2008-07-28]. See also [1805].
1807. Kenneth J. Mackin and Eiichiro Tazaki. Multiagent Communication Combining Genetic
Programming and Pheromone Communication. Kybernetes, 31(6):827–843, 2002, Thales
Publications (W.O.) Ltd.: Irvine, CA, USA and Emerald Group Publishing Limited: Bingley,
UK. doi: 10.1108/03684920210432808. Fully available at
http://www.emeraldinsight.com/10.1108/03684920210432808 [accessed 2008-07-28]. See also [1805].
1808. James B. MacQueen. Some Methods of Classification and Analysis of Multivariate Observations.
In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability
(June 21-July 18, 1965 and December 27, 1965-January 7, 1966) [1700], pages 281–297,
volume I: Statistics, 1967. Fully available at
http://digitalassets.lib.berkeley.edu/math/ucb/text/math_s5_v1_article-17.pdf [accessed 2009-09-20].
1809. Proceedings of the International Joint Conference on Computational Intelligence (IJCCI09),
October 5, 2009, Madeira, Portugal.
1810. Alexander Maedche, Steffen Staab, Claire Nedellec, and Eduard H. Hovy, editors. Workshop
on Ontology Learning, Proceedings of the Second Workshop on Ontology Learning
(OL2001), August 4, 2001, Seattle, WA, USA, volume 38 in CEUR Workshop Proceedings
(CEUR-WS.org). Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen,
Sun SITE Central Europe: Aachen, North Rhine-Westphalia, Germany. Fully available
at http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-47/
[accessed 2008-04-01]. Held in conjunction with the 17th International Conference on
Artificial Intelligence IJCAI2001. See also [2012].
1811. Pattie Maes, Maja J. Mataric, Jean-Arcady Meyer, Jordan B. Pollack, and Stewart W.
Wilson, editors. Proceedings of the Fourth International Conference on Simulation of Adaptive
Behavior: From Animals to Animats 4 (SAB96), September 9–13, 1996, Cape Cod,
MA, USA. MIT Press: Cambridge, MA, USA. isbn: 0-262-63178-4. Google Books
ID: V3pksEEKxUkC. issn: 1089-4365.
1812. Anne E. Magurran. Ecological Diversity and Its Measurement. Princeton University Press:
Princeton, NJ, USA and Taylor and Francis LLC: London, UK, 1987. isbn: 0-691-08485-8,
0-691-08491-2, and 0709935390. Google Books ID: GV5eJQAACAAJ, MhsuGQAACAAJ,
f38pHgAACAAJ, gH8pHgAACAAJ, iHoOAAAAQAAJ, and wvSEAAAACAAJ.
1813. Anne E. Magurran. Biological Diversity. Current Biology, 15(4):R116–R118, February 22,
2005, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.cub.2005.02.006. Fully
available at http://dx.doi.org/10.1016/j.cub.2005.02.006 [accessed 2008-11-10].
1814. Hadj Mahboubi, Kamel Aouiche, and Jérôme Darmont. Materialized View Selection
by Query Clustering in XML Data Warehouses. In CSIT06 [2687], 2006. Fully available
at http://eric.univ-lyon2.fr/~jdarmont/publications/files/CSIT06-Mahboubi-et-al.pdf
and http://hal.archives-ouvertes.fr/docs/00/32/06/32/PDF/CSIT_2006-Mahboubi-et-al.pdf
[accessed 2011-04-12]. arXiv ID: 0809.1963v1.
1815. Oded Maimon and Lior Rokach, editors. The Data Mining and Knowledge Discovery
Handbook. Springer Science+Business Media, Inc.: New York, NY, USA, 2005.
doi: 10.1007/b107408. isbn: 0-387-24435-2 and 0-387-25465-X.
1816. Mohammad A. Makhzan and Kwei-Jay Lin. Solution to a Complete Web Service
Discovery and Composition. In CEC/EEE06 [2967], pages 455–457, 2006.
doi: 10.1109/CEC-EEE.2006.83. INSPEC Accession Number: 9189347. See also [318].
1817. Masud Ahmad Malik. Evolution of the High Level Programming Languages: A Critical
Perspective. SIGPLAN Notices, 33(12):72–80, December 1998, ACM Press: New York, NY,
USA. doi: 10.1145/307824.307882.
1818. David Malkin. The Evolutionary Impact of Gradual Complexification on Complex Systems.
PhD thesis, University College London (UCL), Department of Computer Science:
London, UK, 2008. Fully available at
http://www.cs.ucl.ac.uk/staff/P.Bentley/DavidMalkinsthesis.pdf [accessed 2010-07-27].
1819. Proceedings of the 2nd European Computing Conference (ECC08), September 11–13, 2008,
Malta.
1820. Bernard Manderick and Frans Moyson. The Collective Behaviour of Ants: An Example
of Self-Organization in Massive Parallelism. In Proceedings of AAAI Spring Symposium on
Parallel Models of Intelligence: How Can Slow Components Think So Fast? [2600], 1988.
1821. Bernard Manderick, Mark de Weger, and Piet Spiessens. The Genetic Algorithm and the
Structure of the Fitness Landscape. In ICGA91 [254], pages 143–150, 1991.
1822. Vittorio Maniezzo and Matteo Milandri. An Ant-Based Framework for Very Strongly Constraint
Problem. In ANTS02 [816], pages 115–148, 2002. doi: 10.1007/3-540-45724-0_19.
1823. Vittorio Maniezzo, Luca Maria Gambardella, and Fabio de Luigi. Ant Colony Optimization.
In New Optimization Techniques in Engineering [2083], Chapter 5, pages 101–117.
Springer-Verlag GmbH: Berlin, Germany. Fully available at
http://www.idsia.ch/~luca/aco2004.pdf [accessed 2007-09-13]. CiteSeerx: 10.1.1.10.7560.
1824. Vittorio Maniezzo, Antonella Carbonaro, Matteo Golfarelli, and Stefano Rizzi. An ANTS
Algorithm for Optimizing the Materialization of Fragmented Views in Data Warehouse: Preliminary
Results. In EvoWorkshops01 [343], pages 80–89, 2001. doi: 10.1007/3-540-45365-2_9.
Fully available at http://bias.csr.unibo.it/golfarelli/Papers/evocop01.pdf
[accessed 2011-04-10].
1825. Vittorio Maniezzo, Antonella Carbonaro, Matteo Golfarelli, and Stefano Rizzi. ANTS for
Data Warehouse Logical Design. In MIC01 [2294], pages 249–254, 2001. Fully available
at http://www-db.deis.unibo.it/~srizzi/PDF/mic01.pdf [accessed 2011-04-10].
CiteSeerx: 10.1.1.10.1324 and 10.1.1.74.9797.
1826. Vittorio Maniezzo, Roberto Battiti, and Jean-Paul Watson, editors. Learning and Intelligent OptimizatioN (LION 2), December 8–12, 2007, University of Trento: Trento, Italy, volume 5313/2008 in Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-92695-5. isbn: 3-540-92694-1. Library of Congress Control Number (LCCN): 2008941292.
1827. Reinhard Männer and Bernard Manderick, editors. Proceedings of Parallel Problem Solving from Nature 2 (PPSN II), September 28–30, 1992, Brussels, Belgium. Elsevier Science Publishers B.V.: Amsterdam, The Netherlands and North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands. isbn: 0-444-89730-5. Google Books ID: jjjlAAAACAAJ. OCLC: 26362049, 180616056, and 636493751. Library of Congress Control Number (LCCN): 92023499. GBV-Identification (PPN): 110538498. LC Classification: QA76.58.
1828. Yannis Manolopoulos, Jaroslav Pokorný, and Timos K. Sellis, editors. Proceedings of the 10th East European Conference on Advances in Databases and Information Systems (ADBIS'06), September 3–7, 2006, Thessaloniki, Greece, volume 4152 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11827252. isbn: 3-540-37899-5. Library of Congress Control Number (LCCN): 2006931262.
1829. Steven Manos, Leon Poladian, Peter J. Bentley, and Maryanne Large. A Genetic Algorithm with a Variable-Length Genotype and Embryogeny for Microstructured Optical Fibre Design. In GECCO'06 [1516], pages 1721–1728, 2006. doi: 10.1145/1143997.1144278. CiteSeerX: 10.1.1.152.7629.
1830. Elena Marchiori and Adri G. Steenbeek. An Evolutionary Algorithm for Large Scale Set Covering Problems with Application to Airline Crew Scheduling. In EvoWorkshops'00 [464], pages 370–384, 2000. doi: 10.1007/3-540-45561-2_36. CiteSeerX: 10.1.1.36.8069.
1831. Tiziana Margaria, Christian Winkler, Christian Kubczak, Bernhard Steffen, Marco Brambilla, Stefano Ceri, Dario Cerizza, Emanuele Della Valle, Federico Michele Facca, and Christina Tziviskou. SWS Mediator with WEBML/WEBRATIO and JABC/JETI: A Comparison. In ICEIS'07 [494], pages 422–429, volume 4, 2007. Fully available at http://sws-challenge.org/workshops/2007-Madeira/PaperDrafts/3-SWSC06-med-webml-jabc-fin.pdf [accessed 2010-12-16]. See also [383–387, 1626–1632, 1832].
1832. Tiziana Margaria, Marco Bakera, Harald Raffelt, and Bernhard Steffen. Synthesizing the Mediator with jABC/ABC. In EON-SWSC'08 [1023], 2008. Fully available at http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-359/Paper-4.pdf [accessed 2010-12-16]. See also [1626–1632, 1712, 1831, 2166].
1833. Pedro José Marrón, editor. 5. GI/ITG KuVS Fachgespräch Drahtlose Sensornetze, July 17–18, 2006, volume 2006/07. Universität Stuttgart, Fakultät 5: Informatik, Elektrotechnik und Informationstechnik, Institut für Parallele und Verteilte Systeme (IPVS): Stuttgart, Germany. Fully available at http://elib.uni-stuttgart.de/opus/volltexte/2006/2838/pdf/TR_2006_07.pdf [accessed 2009-08-01]. urn: urn:nbn:de:bsz:93-opus-28388.
1834. Silvano Martello and Paolo Toth. Knapsack Problems: Algorithms and Computer Implementations, volume 25 in Estimation, Simulation, and Control – Wiley-Interscience Series in Discrete Mathematics and Optimization. Wiley Interscience: Chichester, West Sussex, UK, 1990. isbn: 0471924202. Google Books ID: 0dhQAAAAMAAJ and BrcjQAAACAAJ. OCLC: 21339027 and 231141136. Library of Congress Control Number (LCCN): 90012279. GBV-Identification (PPN): 032258186, 073756830, and 115819762. LC Classification: QA267.7 .M37 1990.
1835. Max Martin, Eric Drucker, and Walter Don Potter. GA, EO, and PSO Applied to the Discrete Network Configuration Problem. In GEM'08 [120], 2008. Fully available at http://www.cs.uga.edu/~potter/CompIntell/GEM_Martin-et-al.pdf [accessed 2008-08-23].
1836. Trevor Philip Martin and Anca L. Ralescu, editors. Fuzzy Logic in Artificial Intelligence, Towards Intelligent Systems, IJCAI'95 Selected Papers from Workshop (IJCAI'95 Fuzzy WS), August 19–21, 1995, volume 1188/1997 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-62474-0. isbn: 3-540-62474-0. Google Books ID: MGngXEbCI64C. OCLC: 36215016, 150397549, 301584096, and 312826327. See also [1861, 2919].
1837. William Martin and Michael J. Russel. On the Origins of Cells: A Hypothesis for the Evolutionary Transitions from Abiotic Geochemistry to Chemoautotrophic Prokaryotes, and from Prokaryotes to Nucleated Cells. Philosophical Transactions of the Royal Society B: Biological Sciences, 358(1429):59–85, 2003, Royal Society of Edinburgh: Edinburgh, Scotland, UK. doi: 10.1098/rstb.2002.1183. PubMed ID: 12594918.
1838. Worthy Neil Martin, Jens Lienig, and James P. Cohoon. Island (Migration) Models: Evolutionary Algorithms Based on Punctuated Equilibria. In Handbook of Evolutionary Computation [171], Chapter C6.3, pages 448–463. Oxford University Press, Inc.: New York, NY, USA, Institute of Physics Publishing Ltd. (IOP): Dirac House, Temple Back, Bristol, UK, and CRC Press, Inc.: Boca Raton, FL, USA, 1997.
1839. Jean-Philippe Martin-Flatin, Joe Sventek, and Kurt Geihs, editors. IFIP/IEEE International Workshop on Self-Managed Systems and Services (SelfMan'05), May 19, 2005, Nice Acropolis Exhibition Center: Nice, France.
1840. Flávio Martins, Eduardo Carrano, Elizabeth Wanner, Ricardo H. C. Takahashi, and Geraldo Robson Mateus. A Dynamic Multiobjective Hybrid Approach for Designing Wireless Sensor Networks. In CEC'09 [1350], pages 1145–1152, 2009. doi: 10.1109/CEC.2009.4983075. INSPEC Accession Number: 10688670.
1841. Radek Matoušek and Lars Nolle. GAHC: Improved GA with HC mutation. In WCECS'07 [1406], pages 915–920, 2007. Fully available at http://www.iaeng.org/publication/WCECS2007/WCECS2007_pp915-920.pdf [accessed 2009-10-24].
1842. Radek Matoušek and Pavel Ošmera, editors. Proceedings of the 9th International Conference on Soft Computing (MENDEL'03), June 4–6, 2003, Brno University of Technology: Brno, Czech Republic. Brno University of Technology, Ústav Automatizace a Informatiky: Brno, Czech Republic. isbn: 80-214-2411-7. OCLC: 249853754 and 320595534. GBV-Identification (PPN): 390844543.
1843. Yoshiyuki Matsumura, Kazuhiro Ohkura, and Kanji Ueda. Advantages of Global Discrete Recombination in (μ/μ, λ)-Evolution Strategies. In CEC'02 [944], pages 1848–1853, volume 2, 2002. doi: 10.1109/CEC.2002.1004524.
1844. Giles Mayley. Guiding or Hiding: Explorations into the Effects of Learning on the Rate of Evolution. In ECAL'97 [1309], pages 135–144, 1997. CiteSeerX: 10.1.1.50.9241.
1845. Ernst Mayr. The Growth of Biological Thought: Diversity, Evolution, and Inheritance. Harvard University Press: Cambridge, MA, USA and Belknap Press: Cambridge, MA, USA, 1982. isbn: 0-674-36445-7 and 0-674-36446-5. Google Books ID: V 0TAQAAIAAJ and pHThtE2R0UQC. OCLC: 7875904, 185404286, and 239745622.
1846. John P. McDermott, editor. Proceedings of the 10th International Joint Conference on Artificial Intelligence (IJCAI'87-I), August 1987, Milano, Italy, volume 1. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-87-VOL1/CONTENT/content.htm [accessed 2008-04-01]. See also [1847].
1847. John P. McDermott, editor. Proceedings of the 10th International Joint Conference on Artificial Intelligence (IJCAI'87-II), August 1987, Milano, Italy, volume 2. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-87-VOL2/content/content.htm [accessed 2008-04-01]. See also [1846].
1848. John Robert McDonnell, Robert G. Reynolds, and David B. Fogel, editors. Proceedings of the 4th Annual Conference on Evolutionary Programming (EP'95), March 1–2, 1995, San Diego, CA, USA, Complex Adaptive Systems, Bradford Books. MIT Press: Cambridge, MA, USA. isbn: 0-262-13317-2. Google Books ID: OPhQAAAAMAAJ and ipiiDmTrJeAC. OCLC: 32666904.
1849. Deborah L. McGuinness and George Ferguson, editors. Proceedings of the Nineteenth National Conference on Artificial Intelligence and the Sixteenth Conference on Innovative Applications of Artificial Intelligence (AAAI'04 / IAAI'04), July 25–29, 2004, San Jose, CA, USA. AAAI Press: Menlo Park, CA, USA and MIT Press: Cambridge, MA, USA. isbn: 0-262-51183-5. Partly available at http://www.aaai.org/Conferences/AAAI/aaai04.php and http://www.aaai.org/Conferences/IAAI/iaai04.php [accessed 2007-09-06].
1850. Sheila A. McIlraith and Tran Cao Son. Adapting Golog for Composition of Semantic Web Services. In KR'02 [900], pages 482–496, 2002. Fully available at http://reference.kfupm.edu.sa/content/a/d/adapting_golog_for_composition_of_semant_10675.pdf and http://www.cs.nmsu.edu/~tson/papers/kr02gl.pdf [accessed 2011-01-11].
1851. Sheila A. McIlraith, Tran Cao Son, and Honglei Zeng. Semantic Web Services. IEEE Intelligent Systems Magazine, 16(2):46–53, March–April 2001, IEEE Computer Society Press: Los Alamitos, CA, USA. doi: 10.1109/5254.920599. INSPEC Accession Number: 6941088.
1852. Robert Ian McKay, Xin Yao, Charles S. Newton, Jong-Hwan Kim, and Takeshi Furuhashi, editors. Selected Papers from the Second Asia-Pacific Conference on Simulated Evolution and Learning (SEAL'98), November 24–27, 1998, Canberra, Australia, volume 1585/1999 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-48873-1. isbn: 3-540-65907-2.
1853. Robert Ian McKay, Xuan Hoai Nguyen, Peter Alexander Whigham, and Yin Shan. Grammars in Genetic Programming: A Brief Review. In ISICA [1489], pages 3–18, 2005. Fully available at http://sc.snu.ac.kr/PAPERS/isica05.pdf [accessed 2007-08-15].
1854. Ken I. M. McKinnon. Convergence of the Nelder-Mead Simplex Method to a Nonstationary Point. SIAM Journal on Optimization (SIOPT), 9(1):148–158, 1998, Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA. doi: 10.1137/S1052623496303482.
1855. Nicholas Freitag McPhee and Justin Darwin Miller. Accurate Replication in Genetic Programming. In ICGA'95 [883], pages 303–309, 1995. Fully available at http://www.mrs.umn.edu/~mcphee/Research/Accurate_replication.ps [accessed 2011-11-23]. CiteSeerX: 10.1.1.51.6390.
1856. Nicholas Freitag McPhee and Riccardo Poli. Memory with Memory: Soft Assignment in Genetic Programming. In GECCO'08 [1519], pages 1235–1242, 2008. doi: 10.1145/1389095.1389336. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/McPhee_2008_gecco.html [accessed 2009-08-02].
1857. Jörn Mehnen, Mario Köppen, Ashraf Saad, and Ashutosh Tiwari, editors. Applications of Soft Computing – From Theory to Praxis: 13th Online World Conference on Soft Computing in Industrial Applications, November 10–21, 2008, volume 58 in Advances in Intelligent and Soft Computing. Springer-Verlag: Berlin/Heidelberg. doi: 10.1007/978-3-540-89619-7.
1858. Yi Mei, Ke Tang, and Xin Yao. A Global Repair Operator for Capacitated Arc Routing Problem. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics, 39(3):723–734, June 2009, IEEE Systems, Man, and Cybernetics Society: New York, NY, USA. doi: 10.1109/TSMCB.2008.2008906. Fully available at http://www.cs.bham.ac.uk/~xin/papers/TSMC09_MeiTangYao.pdf [accessed 2011-10-01]. CiteSeerX: 10.1.1.147.5933. PubMed ID: 19211356.
1859. Yi Mei, Ke Tang, and Xin Yao. Decomposition-Based Memetic Algorithm for Multi-Objective Capacitated Arc Routing Problem. IEEE Transactions on Evolutionary Computation (IEEE-EC), 15(2):151–165, April 2011, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/TEVC.2010.2051446. Fully available at http://ieee-cis.org/_files/EAC_Research_2009_Report_Mei.pdf and http://mail.ustc.edu.cn/~meiyi/Papers/D-MAENS.pdf [accessed 2011-02-22]. INSPEC Accession Number: 11887930. To appear.
1860. Proceedings of the Second International Workshop on Local Search Techniques in Constraint Satisfaction, October 1, 2005, Meliá Sitges Hotel: Sitges, Barcelona, Catalonia, Spain.
1861. Christopher S. Mellish, editor. Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI'95), August 20–25, 1995, Montréal, QC, Canada. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-363-8. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-95-VOL%201/content/content.htm and http://dli.iiit.ac.in/ijcai/IJCAI-95-VOL2/CONTENT/content.htm [accessed 2008-04-01]. Google Books ID: yI7UPwAACAAJ. OCLC: 41327748, 263632421, and 300469403. See also [1836, 2919].
1862. Abdelhamid Mellouk, Jun Bi, Guadalupe Ortiz, Kak Wah (Dickson) Chiu, and Manuela Popescu, editors. Proceedings of The Third International Conference on Internet and Web Applications and Services (ICIW'08), June 8–13, 2008, Athens, Greece. IEEE Computer Society Press: Los Alamitos, CA, USA. isbn: 0-7695-3163-6. Partly available at http://www.iaria.org/conferences2008/ICIW08.html [accessed 2011-02-28]. Google Books ID: GiW-PQAACAAJ. Library of Congress Control Number (LCCN): 2008922600. Product Number: E3163. BMS Part Number: CFP0816C-CDR.
1863. Leonor Albuquerque Melo. Multi-colony Ant Colony Optimization for the Node Placement Problem. In GECCO'09-II [2343], pages 2713–2716, 2009. doi: 10.1145/1570256.1570391.
1864. Adriana Menchaca-Mendez and Carlos Artemio Coello Coello. A New Proposal to Hybridize the Nelder-Mead Method to a Differential Evolution Algorithm for Constrained Optimization. In CEC'09 [1350], pages 2598–2605, 2009. doi: 10.1109/CEC.2009.4983268. Fully available at www.lania.mx/~ccoello/EMOO/menchaca09.pdf.gz [accessed 2009-09-05]. INSPEC Accession Number: 10688863.
1865. Jerry M. Mendel, Takashi Omari, and Xin Yao, editors. The First IEEE Symposium on Foundations of Computational Intelligence (FOCI'07), April 1–5, 2007, Hilton Hawaiian Village Hotel (Beach Resort & Spa): Honolulu, HI, USA. IEEE Computer Society: Piscataway, NJ, USA. isbn: 1424407036. Google Books ID: -8kaOAAACAAJ and KaekAAAACAAJ. Catalogue no.: 07EX1569. Sponsored by IEEE Computational Intelligence Society.
1866. Roberto R. F. Mendes and Arvind S. Mohais. DynDE: A Differential Evolution for Dynamic Optimization Problems. In CEC'05 [641], pages 2808–2815, volume 3, 2005. doi: 10.1109/CEC.2005.1555047. Fully available at http://www3.di.uminho.pt/~rcm/publications/DynDE.pdf [accessed 2010-08-24]. CiteSeerX: 10.1.1.125.2146. INSPEC Accession Number: 8715549.
1867. Roberto R. F. Mendes, Fabricio de B. Voznika, Alex Alves Freitas, and Julio C. Nievola. Discovering Fuzzy Classification Rules with Genetic Programming and Co-Evolution. In PKDD'01 [723], pages 314–325, 2001. doi: 10.1007/3-540-44794-6_26. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/Mendes_2001_PKDD.html and http://www.cs.kent.ac.uk/people/staff/aaf/pub_papers.dir/PKDD-2001.ps [accessed 2010-08-19]. CiteSeerX: 10.1.1.17.8631.
1868. Xiaofeng Meng, Jianwen Su, and Yujun Wang, editors. Proceedings of the Third International Conference on Advances in Web-Age Information Management (WAIM'02), August 11–13, 2002, Beijing, China, volume 2419 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-45703-8. isbn: 3-540-44045-3.
1869. Manfred Menze. Evolutionäre Algorithmen zur Ad-hoc-Tourenplanung: InWeSt – Intelligente Wechselbrückensteuerung. Technical Report, Micromata GmbH: Kassel, Hesse, Germany, November 1, 2010. Fully available at http://www.micromata.de/fileadmin/user_upload/Case_studies/cs_InWeSt.pdf [accessed 2011-09-05].
1870. Juan Julián Merelo-Guervós, Panagiotis Adamidis, Hans-Georg Beyer, José Luis Fernández-Villacañas Martín, and Hans-Paul Schwefel, editors. Proceedings of the 7th International Conference on Parallel Problem Solving from Nature (PPSN VII), September 7–11, 2002, Granada, Spain, volume 2439/2002 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-45712-7. isbn: 3-540-44139-5. Google Books ID: KD-WMBb4AhkC. OCLC: 50422940, 50801109, 66012617, and 248499908.
1871. Daniel Merkle, Martin Middendorf, and Hartmut Schmeck. Ant Colony Optimization for Resource-Constrained Project Scheduling. In GECCO'00 [2935], pages 893–900, 2000. Fully available at http://pacosy.informatik.uni-leipzig.de/pv/Personen/middendorf/Papers/ [accessed 2009-06-27]. CiteSeerX: 10.1.1.20.8614.
1872. Laurence D. Merkle and Frank W. Moore, editors. Defense Applications of Computational Intelligence Workshop (DACI'08), July 12, 2008, Renaissance Atlanta Hotel Downtown: Atlanta, GA, USA. ACM Press: New York, NY, USA. Part of [1519].
1873. Achille Messac and Christopher A. Mattson. Generating Well-Distributed Sets of Pareto Points for Engineering Design Using Physical Programming. Optimization and Engineering, 3(4):431–450, December 2002, Springer Netherlands: Dordrecht, Netherlands. Imprint: Kluwer Academic Publishers: Norwell, MA, USA. doi: 10.1023/A:1021179727569. Fully available at http://www.rpi.edu/~messac/Publications/messac_2002_pareto_pp_opte.pdf [accessed 2009-06-21].
1874. Nicholas Metropolis, Arianna W. Rosenbluth, Marshall Nicholas Rosenbluth, Augusta H. Teller, and Edward Teller. Equation of State Calculations by Fast Computing Machines. The Journal of Chemical Physics, 21(6):1087–1092, June 1953, American Institute of Physics (AIP): College Park, MD, USA. doi: 10.1063/1.1699114. Fully available at http://home.gwu.edu/~stroud/classics/Metropolis53.pdf, http://sc.fsu.edu/~beerli/mcmc/metropolis-et-al-1953.pdf, and http://scienze-como.uninsubria.it/bressanini/montecarlo-history/mrt2.pdf [accessed 2010-09-25].
1875. Proceedings of the Second International Workshop on Advanced Computational Intelligence (IWACI'09), October 8–9, 2009, Mexico City, Mexico.
1876. Jean-Arcady Meyer and Stewart W. Wilson, editors. From Animals to Animats: Proceedings of the First International Conference on Simulation of Adaptive Behavior (SAB'90), September 24–28, 1990, Paris, France. MIT Press: Cambridge, MA, USA. isbn: 0-262-63138-5. Published in 1991.
1877. Mike Meyer and James Rosenberger, editors. Proceedings of the 27th Symposium on the Interface – Computing Science and Statistics (Interface'95), June 21–24, 1995, David L. Lawrence Convention Center and Pittsburgh Vista Hotel: Pittsburgh, PA, USA.
1878. Silja Meyer-Nieberg and Hans-Georg Beyer. Self-Adaptation in Evolutionary Algorithms. In Parameter Setting in Evolutionary Algorithms [1753], Chapter 3, pages 47–75. Springer-Verlag: Berlin/Heidelberg, 2007. doi: 10.1007/978-3-540-69432-8_3. CiteSeerX: 10.1.1.60.4624.
1879. Marc Mézard, Giorgio Parisi, and Miguel Ángel Virasoro. Spin Glass Theory and Beyond: An Introduction to the Replica Method and Its Applications, volume 9 in World Scientific Lecture Notes in Physics. World Scientific Publishing Co.: Singapore, November 1986. isbn: 9971-50-115-5 and 9971-50-116-3. Google Books ID: 84kNHQAACAAJ and KKbuIAAACAAJ.
1880. Efrén Mezura-Montes and Carlos Artemio Coello Coello. Using the Evolution Strategies Self-Adaptation Mechanism and Tournament Selection for Global Optimization. In ANNIE'03 [670], pages 373–378, 2003. Fully available at http://www.lania.mx/~emezura/documentos/annie03.pdf [accessed 2008-03-20]. Theoretical Developments in Computational Intelligence Award, 1st runner up.
1881. Efrén Mezura-Montes, Carlos Artemio Coello Coello, and Ricardo Landa-Becerra. Engineering Optimization Using a Simple Evolutionary Algorithm. In ICTAI'03 [1367], pages 149–156, 2003. CiteSeerX: 10.1.1.1.2837.
1882. Efrén Mezura-Montes, Jesús Velázquez-Reyes, and Carlos Artemio Coello Coello. A Comparative Study of Differential Evolution Variants for Global Optimization. In GECCO'06 [1516], pages 485–492, 2006. doi: 10.1145/1143997.1144086. Fully available at http://delta.cs.cinvestav.mx/~ccoello/conferences/mezura-gecco2006.pdf.gz [accessed 2010-08-24].
1883. Zbigniew Michalewicz. A Step Towards Optimal Topology of Communication Networks. In Proceedings of Data Structures and Target Classification, the SPIE's International Symposium on Optical Engineering and Photonics in Aerospace Sensing [1738], pages 112–122, 1991. doi: 10.1117/12.44844.
1884. Zbigniew Michalewicz. A Perspective on Evolutionary Computation. In Progress in Evolutionary Computation, AI'93 (Melbourne, Victoria, Australia, 1993-11-16) and AI'94 Workshops (Armidale, NSW, Australia, 1994-11-22/23) on Evolutionary Computation, Selected Papers [3023], pages 73–89, 1993–1994. doi: 10.1007/3-540-60154-6_49. CiteSeerX: 10.1.1.90.7608.
1885. Zbigniew Michalewicz. Genetic Algorithms, Numerical Optimization, and Constraints. In ICGA'95 [883], pages 151–158, 1995. Fully available at http://www.cs.adelaide.edu.au/~zbyszek/Papers/p16.pdf [accessed 2009-02-28]. CiteSeerX: 10.1.1.14.1880.
1886. Zbigniew Michalewicz. Genetic Algorithms + Data Structures = Evolution Programs. Springer-Verlag GmbH: Berlin, Germany, 1996. isbn: 3-540-58090-5 and 3-540-60676-9. Google Books ID: vlhLAobsK68C. asin: 3540606769.
1887. Zbigniew Michalewicz and David B. Fogel. How to Solve It: Modern Heuristics. Springer-Verlag: Berlin/Heidelberg, 2nd edition, 2004. isbn: 3-540-22494-7. Google Books ID: RJbV -JlIUQC.
1888. Zbigniew Michalewicz and Girish Nazhiyath. Genocop III: A Co-Evolutionary Algorithm for Numerical Optimization with Nonlinear Constraints. In CEC'95 [1361], pages 647–651, volume 2, 1995. doi: 10.1109/ICEC.1995.487460. Fully available at http://www.cs.cinvestav.mx/~constraint/papers/p24.ps [accessed 2009-02-28]. CiteSeerX: 10.1.1.14.2923.
1889. Zbigniew Michalewicz and Robert G. Reynolds, editors. Proceedings of the IEEE Congress on Evolutionary Computation (CEC'08), June 1–6, 2008, Hong Kong Convention and Exhibition Centre: Hong Kong (Xianggang), China. IEEE Computer Society: Piscataway, NJ, USA. isbn: 1-4244-1822-4. Google Books ID: W5HqHwAACAAJ and 142PQAACAAJ. Part of [3104].
1890. Zbigniew Michalewicz and Marc Schoenauer. Evolutionary Algorithms for Constrained Parameter Optimization Problems. Evolutionary Computation, 4(1):1–32, Spring 1996, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1996.4.1.1. Fully available at http://www.cs.adelaide.edu.au/~zbyszek/Papers/p30.pdf [accessed 2009-02-28]. CiteSeerX: 10.1.1.13.9279.
1891. Zbigniew Michalewicz, J. David Schaffer, Hans-Paul Schwefel, David B. Fogel, and Hiroaki Kitano, editors. Proceedings of the First IEEE Conference on Evolutionary Computation (CEC'94), June 27–29, 1994, Walt Disney World Dolphin Hotel: Orlando, FL, USA. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7803-1899-4 and 0-7803-1900-1. Google Books ID: W6FVAAAAMAAJ and Zhp7RAAACAAJ. OCLC: 636736723. INSPEC Accession Number: 4812337. Part of [1324].
1892. GeoLife GPS Trajectories. Microsoft Research Asia: Beijing, China, September 30, 2010. Fully available at http://research.microsoft.com/en-us/downloads/b16d359d-d164-469e-9fd4-daa38f2b2e13/default.aspx [accessed 2011-02-19]. See also [3077].
1893. Kaisa Miettinen. Nonlinear Multiobjective Optimization, volume 12 in International Series in Operations Research & Management Science. Kluwer Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands, 1998. isbn: 0-7923-8278-1. Google Books ID: ha zLdNtXSMC. OCLC: 39545951.
1894. Kaisa Miettinen, Marko M. Mäkelä, Pekka Neittaanmäki, and Jacques Périaux, editors. Invited Lectures of the European Short Course on Genetic Algorithms and Evolution Strategies (EUROGEN'99), May 30 – June 3, 1999, University of Jyväskylä: Jyväskylä, Finland. Wiley Interscience: Chichester, West Sussex, UK. isbn: 0471999024. OCLC: 41086671, 245840787, and 313492480. Full title: Evolutionary Algorithms in Engineering and Computer Science – Recent Advances in Genetic Algorithms, Evolution Strategies, Evolutionary Programming, Genetic Programming and Industrial Applications.
1895. Kaisa Miettinen, Pekka J. Korhonen, and Jyrki Wallenius, editors. Environment and Policy – Proceedings of the 21st International Conference on Multiple Criteria Decision Making (MCDM'11), June 13–17, 2011, University of Jyväskylä: Jyväskylä, Finland.
1896. Mitsunori Miki, Tomoyuki Hiroyasu, and Junya Wako. Adaptive Temperature Schedule Determined by Genetic Algorithm for Parallel Simulated Annealing. In CEC'03 [2395], pages 459–466, volume 1, 2003. doi: 10.1109/CEC.2003.1299611. Fully available at http://mikilab.doshisha.ac.jp/dia/research/person/wako/society/cec2003/cec03_wako.pdf [accessed 2007-09-15]. INSPEC Accession Number: 8008655.
1897. Brad L. Miller and David Edward Goldberg. Genetic Algorithms, Tournament Selection, and the Effects of Noise. IlliGAL Report 95006, Illinois Genetic Algorithms Laboratory (IlliGAL), Department of Computer Science, Department of General Engineering, University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, July 1995. Fully available at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/95006.ps.Z [accessed 2009-07-10]. CiteSeerX: 10.1.1.30.6625. See also [1898].
1898. Brad L. Miller and David Edward Goldberg. Genetic Algorithms, Selection Schemes, and the Varying Effects of Noise. Evolutionary Computation, 4(2):113–131, Summer 1996, MIT Press: Cambridge, MA, USA, Marc Schoenauer, editor. doi: 10.1162/evco.1996.4.2.113. CiteSeerX: 10.1.1.31.3449. See also [1897].
1899. Brad L. Miller and Michael J. Shaw. Genetic Algorithms with Dynamic Niche Sharing for Multimodal Function Optimization. IlliGAL Report 95010, Illinois Genetic Algorithms Laboratory (IlliGAL), Department of Computer Science, Department of General Engineering, University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, December 1, 1995. CiteSeerX: 10.1.1.52.3568.
1900. George A. Miller. The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information. Psychological Review, 63(2):81–97, March 1956, American Psychological Association (APA): Washington, DC, USA. Fully available at http://www.psych.utoronto.ca/users/peterson/psy430s2001/Miller%20GA%20Magical%20Seven%20Psych%20Review%201955.pdf [accessed 2009-07-17]. PubMed ID: 13310704.
1901. Julian Francis Miller and Peter Thomson. Cartesian Genetic Programming. In EuroGP'00 [2198], pages 121–132, 2000. doi: 10.1007/b75085. Fully available at http://www.cartesiangp.co.uk/papers/2000/mteurogp2000.pdf [accessed 2009-07-04]. CiteSeerX: 10.1.1.26.8515.
1902. Julian Francis Miller, Adrian Thompson, Peter Thomson, and Terence Claus Fogarty, editors. Proceedings of Third International Conference on Evolvable Systems – From Biology to Hardware (ICES'00), April 17–19, 2000, Edinburgh, Scotland, UK, volume 1801/2000 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-46406-9. isbn: 3-540-67338-5.
1903. Julian Francis Miller, Marco Tomassini, Pier Luca Lanzi, Conor Ryan, Andrea G. B. Tettamanzi, and William B. Langdon, editors. Proceedings of the 4th European Conference on Genetic Programming (EuroGP'01), April 18–20, 2001, Lake Como, Milan, Italy, volume 2038/2001 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-41899-7. OCLC: 46502093, 48730451, 66624523, 76281028, 456658150, 501702396, 614899119, and 634229636. Library of Congress Control Number (LCCN): 2001020853. LC Classification: QA76.623 .E94 2001.
1904. Stanley L. Miller. Production of Amino Acids under Possible Primitive Earth Conditions. Science Magazine, 117(3046):528–529, May 15, 1953, American Association for the Advancement of Science (AAAS): Washington, DC, USA and HighWire Press (Stanford University): Cambridge, MA, USA. doi: 10.1126/science.117.3046.528. Fully available at http://www.issol.org/miller/miller1953.pdf [accessed 2009-08-05]. PubMed ID: 13056598.
1905. Stanley L. Miller and Harold C. Urey. Organic Compound Synthesis on the Primitive Earth: Several Questions about the Origin of Life have been Answered, but much Remains to be Studied. Science Magazine, 130(3370):245–251, July 31, 1959, American Association for the Advancement of Science (AAAS): Washington, DC, USA and HighWire Press (Stanford University): Cambridge, MA, USA. doi: 10.1126/science.130.3370.245. PubMed ID: 13668555.
1906. Hokey Min, Tomasz G. Smolinski, and Grzegorz M. Boratyn. A Genetic Algorithm-based Data Mining Approach to Profiling the Adopters And Non-Adopters of E-Purchasing. In IRI'01 [2524], pages 1–6, 2001. CiteSeerX: 10.1.1.11.4711.
1907. Marvin Lee Minsky. Steps toward Artificial Intelligence. pages 8–30, volume 49 (1). Institution of Electrical Engineers (IEE): London, UK and Institution of Engineering and Technology (IET): Stevenage, Herts, UK, January 1961. doi: 10.1109/JRPROC.1961.287775. Reprinted in [1908].
1908. Marvin Lee Minsky. Steps toward Artificial Intelligence. In Computers and Thought [897], pages 406–450. McGraw-Hill: New York, NY, USA, 1963–1995. Fully available at http://web.media.mit.edu/~minsky/papers/steps.html [accessed 2007-08-05]. Reprint of [1907].
1909. Daniele Miorandi, Paolo Dini, Eitan Altman, and Hisao Kameda, authors. Lidia A. R. Yamamoto, editor. WP 2.2 Paradigm Applications and Mapping, D2.2.2 Framework for Distributed On-line Evolution of Protocols and Services. BIONETS (European Commission Future and Emerging Technologies Project FP6-027748): Trento, Italy, 2nd edition, June 14, 2007, David Linner and Françoise Baude, Supervisors. Fully available at http://www.bionets.eu/docs/BIONETS_D2_2_2.pdf [accessed 2008-06-23]. BIONETS/CN/wp2.2/v2.1. Status: Final, Public.
1910. Teresa Miquélez, Endika Bengoetxea, and Pedro Larrañaga. Evolutionary Computation based on Bayesian Classifiers. International Journal of Applied Mathematics and Computer Science (AMCS), 14(3):335–349, September 2004, University of Zielona Góra: Zielona Góra, Poland and Lubuskie Scientific Society: Zielona Góra, Poland. Fully available at http://matwbn.icm.edu.pl/ksiazki/amc/amc14/amc1434.pdf and http://www.amcs.uz.zgora.pl/?action=get_file&file=AMCS_2004_14_3_4.pdf [accessed 2009-08-27].
1911. Proceedings of the Fifth International Workshop on the Synthesis and Simulation of Living Systems (Artificial Life V), May 16–18, 1996, Nara, Japan. MIT Press: Cambridge, MA, USA.
1912. Melanie Mitchell. An Introduction to Genetic Algorithms, Complex Adaptive Systems, Bradford Books. MIT Press: Cambridge, MA, USA, February 1998. isbn: 0-262-13316-4 and 0-262-63185-7. Google Books ID: 0eznlz0TF-IC. OCLC: 44708663 and 256769418. Library of Congress Control Number (LCCN): 95024489. GBV-Identification (PPN): 083660852, 240449282, 269703497, and 557262135. LC Classification: QH441.2 .M55 1996.
1913. Melanie Mitchell, Stephanie Forrest, and John Henry Holland. The Royal Road for Genetic Algorithms: Fitness Landscapes and GA Performance. In ECAL'91 [2789], pages 245–254, 1991. Fully available at http://web.cecs.pdx.edu/~mm/ecal92.pdf [accessed 2010-08-03]. CiteSeerX: 10.1.1.49.212.
1914. Tom M. Mitchell. Generalization as Search. In Readings in Artificial Intelligence: A Collection of Articles [2872], pages 517–542. Tioga Publishing Co.: Wellsboro, PA, USA and Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1981. See also [1915].
1915. Tom M. Mitchell. Generalization as Search. Artificial Intelligence, 18(2):203–226, March 1982, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/0004-3702(82)90040-6. CiteSeerX: 10.1.1.121.5764. See also [1914].
1916. Tom M. Mitchell and Reid G. Smith, editors. Proceedings of the 7th National Conference on Artificial Intelligence (AAAI'88), August 21–26, 1988, St. Paul, MN, USA. AAAI Press: Menlo Park, CA, USA and MIT Press: Cambridge, MA, USA. isbn: 0-262-51055-3. Partly available at http://www.aaai.org/Conferences/AAAI/aaai88.php [accessed 2007-09-06].
1917. Debasis Mitra, Fabio I. Romeo, and Alberto L. Sangiovanni-Vincentelli. Convergence and Finite-Time Behavior of Simulated Annealing. In CDC'85 [984], pages 761–767, 1985. doi: 10.1109/CDC.1985.268600.
1918. Mamoru Mitsuishi, Kanji Ueda, and Fumihiko Kimura, editors. Manufacturing Systems and Technologies for the New Frontier – The 41st CIRP Conference on Manufacturing Systems, May 26–28, 2008, Hongo Campus, The University of Tokyo: Tokyo, Japan. Springer-Verlag London Limited: London, UK. Library of Congress Control Number (LCCN): 2008924671.
1919. Michael Mitzenmacher and Eli Upfal. Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press: Cambridge, UK, 2005. isbn: 0-521-83540-2. Google Books ID: 0bAYl6d7hvkC. OCLC: 55845691, 58999923, and 502475686. Library of Congress Control Number (LCCN): 2004054540. GBV-Identification (PPN): 390183326 and 555603164.
1920. Junichi Mizoguchi, Hitoshi Hemmi, and Katsunori Shimohara. Production Genetic Algo-
rithms for Automated Hardware Design Through an Evolutionary Process. In CEC94 [1891],
pages 661664, volume 2, 1994. doi: 10.1109/ICEC.1994.349980. INSPEC Accession Num-
ber: 4812352.
1102 REFERENCES
1921. Arvind S. Mohais, Sven Schellenberg, Maksud Ibrahimov, Neal Wagner, and Zbigniew Michalewicz. An Evolutionary Approach to Practical Constraints in Scheduling: A Case-Study of the Wine Bottling Problem. In Variants of Evolutionary Algorithms for Real-World Applications [567], Chapter 2, pages 31–58. Springer-Verlag: Berlin/Heidelberg, 2011–2012. doi: 10.1007/978-3-642-23424-8_2.
1922. Masoud Mohammadian, Ruhul A. Sarker, and Xin Yao, editors. Computational Intelligence in Control. Idea Group Publishing (Idea Group Inc., IGI Global): New York, NY, USA, 2003. isbn: 1591400376. Google Books ID: N3m6XYXL5K0C.
1923. Masoud Mohammadian, Lotfi A. Zadeh, and Stephen Grossberg, editors. Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce Vol-2 (CIMCA-IAWTIC'06), November 28–December 1, 2006, Sydney, NSW, Australia, volume 02. IEEE Computer Society Press: Los Alamitos, CA, USA. isbn: 0-7695-2731-0. OCLC: 85810664 and 123929091. GBV-Identification (PPN): 593426258. ID: E2731.
1924. Mukesh K. Mohania and A. Min Tjoa, editors. Proceedings of the First International Conference on Data Warehousing and Knowledge Discovery (DaWaK'99), August 30–September 1, 1999, Florence, Italy, volume 1676 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-48298-9. isbn: 3-540-66458-0.
1925. Parviz Moin and William C. Reynolds, editors. Proceedings of the 1998 Summer Program, July 5–31, 1998. NASA Center for Turbulence Research (CTR): Stanford, CA, USA. Fully available at http://www.stanford.edu/group/ctr/Summer/SP98.html [accessed 2009-06-16].
1926. Guillermo Molina and Enrique Alba Torres. Location Discovery in Wireless Sensor Networks Using a Two-Stage Simulated Annealing. In EvoWorkshops'09 [1052], pages 11–20, 2009. doi: 10.1007/978-3-642-01129-0_2.
1927. Daniel Molina Cabrera, Manuel Lozano Márquez, and Francisco Herrera Triguero. Memetic Algorithm with Local Search Chaining for Large Scale Continuous Optimization Problems. In CEC'09 [1350], pages 830–837, 2009. doi: 10.1109/CEC.2009.4983031. INSPEC Accession Number: 10688603.
1928. Nicolás Monmarché, El-Ghazali Talbi, Pierre Collet, Marc Schoenauer, and Evelyne Lutton, editors. Revised Selected Papers of the 8th International Conference on Artificial Evolution, Evolution Artificielle (EA'07), October 29–31, 2007, Tours, France, Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-79305-2. isbn: 3-540-79304-6. Partly available at http://ea07.hant.li.univ-tours.fr/ [accessed 2007-09-09]. Published in 2008.
1929. Raúl Monroy, Gustavo Arroyo-Figueroa, Luis Enrique Sucar, and Juan Humberto Sossa Azuela, editors. Advances in Artificial Intelligence, Proceedings of the Third Mexican International Conference on Artificial Intelligence (MICAI'04), April 26–30, 2004, Mexico City, Mexico, volume 2972/2004 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/b96521. isbn: 3-540-21459-3.
1930. David J. Montana. Strongly Typed Genetic Programming. BBN Technical Report #7866, Bolt Beranek and Newman Inc. (BBN): Cambridge, MA, USA, May 7, 1993. Fully available at http://www.cs.ucl.ac.uk/staff/W.Langdon/ftp/ftp.io.com/papers/stgp.ps.Z [accessed 2010-12-04]. CiteSeerˣ: 10.1.1.38.4986. See also [1931, 1932].
1931. David J. Montana. Strongly Typed Genetic Programming. BBN Technical Report #7866, Bolt Beranek and Newman Inc. (BBN): Cambridge, MA, USA, March 25, 1994. Fully available at http://www.cs.ucl.ac.uk/staff/W.Langdon/ftp/ftp.io.com/papers/stgp2.ps.Z [accessed 2010-12-04]. CiteSeerˣ: 10.1.1.38.4986. See also [1930, 1932].
1932. David J. Montana. Strongly Typed Genetic Programming. Evolutionary Computation, 3(2):199–230, Summer 1995, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1995.3.2.199. Fully available at http://personal.d.bbn.com/~dmontana/papers/stgp.pdf and http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/montana_stgpEC.html [accessed 2011-11-23].
1933. David J. Montana and Talib Hussain. Adaptive Reconfiguration of Data Networks using Genetic Algorithms. Applied Soft Computing, 4(4):433–444, October 2004, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.asoc.2004.02.002. CiteSeerˣ: 10.1.1.95.1975. See also [1935].
1934. David J. Montana and Jason Redi. Optimizing Parameters of a Mobile Ad Hoc Network Protocol with a Genetic Algorithm. In GECCO'05 [304], pages 1993–1998, 2005. doi: 10.1145/1068009.1068342. Fully available at http://vishnu.bbn.com/papers/erni.pdf [accessed 2008-10-16]. CiteSeerˣ: 10.1.1.86.5827.
1935. David J. Montana, Talib Hussain, and Tushar Saxena. Adaptive Reconfiguration of Data Networks using Genetic Algorithms. In GECCO'02 [1675], pages 1141–1149, 2002. See also [1933].
1936. Proceedings of the 2nd International Workshop on Machine Learning (ML'83), 1983, Monticello, IL, USA.
1937. The Seventh Metaheuristics International Conference (MIC'07), June 25–29, 2007, Montréal, QC, Canada. Partly available at http://www.mic2007.ca/ [accessed 2007-09-12].
1938. H. L. Moore. Cours d'Économie Politique. By VILFREDO PARETO, Professeur à l'Université de Lausanne. Vol. I. Pp. 430. I896. Vol. II. Pp. 426. I897. Lausanne: F. Rouge. The ANNALS of the American Academy of Political and Social Science, 9(3):128–131, 1897, SAGE Publications: Thousand Oaks, CA, USA and American Academy of Political and Social Science: Philadelphia, PA, USA. doi: 10.1177/000271629700900314. See also [2127].
1939. Federico Morán, Álvaro Moreno, Juan Julián Merelo-Guervós, and Pablo Chacón, editors. Advances in Artificial Life – Proceedings of the Third European Conference on Artificial Life (ECAL'95), June 4–6, 1995, Granada, Spain, volume 929/1995 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-59496-5. isbn: 3-540-59496-5. Google Books ID: 6CujMeca5S0C.
1940. Matthew Moran, Maciej Zaremba, and Tomas Vitvar. Service Discovery and Composition with WSMX for SWS Challenge Workshop IV. In KWEB SWS Challenge Workshop [2450], 2007. Fully available at http://sws-challenge.org/workshops/2007-Innsbruck/papers/4-SWSCha_IV_2pager.pdf. See also [582, 1210, 3057–3059].
1941. Jorge J. Moré. A Collection of Nonlinear Model Problems. In Computational Solution of Nonlinear Systems of Equations – Proceedings of the 1988 SIAM-AMS Summer Seminar on Computational Solution of Nonlinear Systems of Equations [73], pages 723–762, 1988. See also [1942].
1942. Jorge J. Moré. A Collection of Nonlinear Model Problems. Technical Report MCS-P60-0289, Argonne National Laboratory: Argonne, IL, USA, February 1989. See also [1941].
1943. Conwy Lloyd Morgan. On Modification and Variation. Simulation – Transactions of The Society for Modeling and Simulation International, SAGE Journals Online, 4(99):733–740, November 20, 1896, Society for Modeling and Simulation International (SCS): San Diego, CA, USA. doi: 10.1126/science.4.99.733. See also [1944].
1944. Conwy Lloyd Morgan. On Modification and Variation. In Adaptive Individuals in Evolving Populations: Models and Algorithms [255], pages 81–90. Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA and Westview Press, 1996. See also [1943].
1945. Proceedings of the 10th International Conference on Machine Learning (ICML'93), June 27–29, 1993, University of Massachusetts (UMASS): Amherst, MA, USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-307-7.
1946. Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence (IJCAI'97-I), August 23–29, 1997, Nagoya, Japan, volume 1. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-97-VOL1/CONTENT/content.htm [accessed 2008-04-01]. See also [1947, 2260].
1947. Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence (IJCAI'97-II), August 23–29, 1997, Nagoya, Japan, volume 2. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-97-VOL2/CONTENT/content.htm [accessed 2008-04-01]. See also [1946, 2260].
1948. Naoki Mori, Hajime Kita, and Yoshikazu Nishikawa. Adaptation to a Changing Environment by Means of the Thermodynamical Genetic Algorithm. In PPSN IV [2818], pages 513–522, 1996. doi: 10.1007/3-540-61723-X_1015.
1949. Naoki Mori, Seiji Imanishi, Hajime Kita, and Yoshikazu Nishikawa. Adaptation to Changing Environments by Means of the Memory Based Thermodynamical Genetic Algorithm. In ICGA'97 [166], pages 299–306, 1997.
1950. Naoki Mori, Hajime Kita, and Yoshikazu Nishikawa. Adaptation to a Changing Environment by Means of the Feedback Thermodynamical Genetic Algorithm. In PPSN V [866], pages 149–158, 1998. doi: 10.1007/BFb0056858.
1951. David E. Moriarty, Alan C. Schultz, and John J. Grefenstette. Evolutionary Algorithms for Reinforcement Learning. Artificial Intelligence Review – An International Science and Engineering Journal, 11:241–276, September 1999, Springer Netherlands: Dordrecht, Netherlands. Imprint: Kluwer Academic Publishers: Norwell, MA, USA. doi: 10.1613/jair.613. Fully available at http://www.jair.org/media/613/live-613-1809-jair.pdf [accessed 2007-09-12]. CiteSeerˣ: 10.1.1.40.2231.
1952. Artemis Moroni, Fernando José Von Zuben, and Jônatas Manzolli. ArTbitration. In GAVAM'00 [1452], pages 143–145, 2000. See also [1953].
1953. Artemis Moroni, Fernando José Von Zuben, and Jônatas Manzolli. ArTbitrariness in Music (Musical ArTbitrariness). In GA'00 [2201], 2000. Fully available at http://www.generativeart.com/on/cic/2000/MORONI.HTM [accessed 2009-06-18]. See also [1952].
1954. Robert Morris, editor. Deep Blue Versus Kasparov: The Significance for Artificial Intelligence, Collected Papers from the 1997 AAAI Workshop (AAAI'97 WS), July 28, 1997. AAAI Press: Menlo Park, CA, USA. isbn: 1-57735-031-6. GBV-Identification (PPN): 242427014.
1955. Ronald Walter Morrison. Designing Evolutionary Algorithms for Dynamic Environments. PhD thesis, George Mason University (GMU): Fairfax, VA, USA, 2002, Kenneth Alan De Jong, Advisor. See also [1956].
1956. Ronald Walter Morrison. Designing Evolutionary Algorithms for Dynamic Environments, volume 24 in Natural Computing Series. Springer New York: New York, NY, USA, June 2004. isbn: 3-540-21231-0. Google Books ID: EaNkNDYq_8. OCLC: 55746331 and 57168232. Library of Congress Control Number (LCCN): 2004102479. See also [1955].
1957. Ronald Walter Morrison and Kenneth Alan De Jong. A Test Problem Generator for Non-Stationary Environments. In CEC'99 [110], pages 2047–2053, volume 3, 1999. doi: 10.1109/CEC.1999.785526.
1958. Ronald Walter Morrison and Kenneth Alan De Jong. Measurement of Population Diversity. In EA'01 [606], pages 1047–1074, 2001. doi: 10.1007/3-540-46033-0_3. CiteSeerˣ: 10.1.1.108.1903.
1959. Joel N. Morse. Reducing the Size of the Nondominated Set: Pruning by Clustering. Computers & Operations Research, 7(1–2):55–66, 1980, Pergamon Press: Oxford, UK and Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/0305-0548(80)90014-3.
1960. Joel N. Morse, editor. Proceedings of the 4th International Conference on Multiple Criteria Decision Making: Organizations, Multiple Agents With Multiple Criteria (MCDM'80), August 10–15, 1980, University of Delaware: Newark, DE, USA, volume 190 in Lecture Notes in Economics and Mathematical Systems. Springer-Verlag: Berlin/Heidelberg. isbn: 0387108211. Google Books ID: ms5-AAAACAAJ. OCLC: 7575489, 246648572, 310683024, 421655673, 470793740, and 612090473.
1961. Pablo Moscato. On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms. Caltech Concurrent Computation Program C3P 826, California Institute of Technology (Caltech), Caltech Concurrent Computation Program (C3P): Pasadena, CA, USA, 1989. Fully available at http://www.each.usp.br/sarajane/SubPaginas/arquivos_aulas_IA/memetic.pdf [accessed 2011-12-05]. CiteSeerˣ: 10.1.1.27.9474.
1962. Pablo Moscato. Memetic Algorithms. In Handbook of Applied Optimization [2125], Chapter 3.6.4, pages 157–167. Oxford University Press, Inc.: New York, NY, USA, 2002.
1963. Pablo Moscato and Carlos Cotta. A Gentle Introduction to Memetic Algorithms. In Handbook of Metaheuristics [1068], Chapter 5, pages 105–144. Kluwer Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands, 2003. doi: 10.1007/0-306-48056-5_5. Fully available at http://www.lcc.uma.es/~ccottap/papers/handbook03memetic.pdf [accessed 2010-09-11]. CiteSeerˣ: 10.1.1.77.5300.
1964. Sanaz Mostaghim. Multi-objective Evolutionary Algorithms: Data Structures, Convergence, and Diversity, Elektrotechnik. PhD thesis, Universität Paderborn, Fakultät für Elektrotechnik, Informatik und Mathematik: Paderborn, Germany, Shaker Verlag GmbH: Aachen, North Rhine-Westphalia, Germany, February 2005. isbn: 3-8322-3661-9. Google Books ID: HoVLAAAACAAJ and pJpeOQAACAAJ. OCLC: 62900366.
1965. Jack Mostow and Chuck Rich, editors. Proceedings of the Fifteenth National Conference on Artificial Intelligence and Tenth Innovative Applications of Artificial Intelligence Conference (AAAI'98, IAAI'98), July 26–30, 1998, Madison, WI, USA. AAAI Press: Menlo Park, CA, USA and MIT Press: Cambridge, MA, USA. isbn: 0-262-51098-7. Partly available at http://www.aaai.org/Conferences/AAAI/aaai98.php and http://www.aaai.org/Conferences/IAAI/iaai98.php [accessed 2007-09-06].
1966. Rajeev Motwani and Prabhakar Raghavan. Randomized Algorithms, Cambridge International Series on Parallel Computation. Cambridge University Press: Cambridge, UK, 1995. isbn: 0-521-47465-5. Google Books ID: QKVY4mDivBEC. OCLC: 31606869. GBV-Identification (PPN): 180148303, 310077257, 508849136, 555592650, and 563593563. LC Classification: QA274 .M68 1995.
1967. Rajeev Motwani and Prabhakar Raghavan. Randomized Algorithms. ACM Computing Surveys (CSUR), 28(1):33–37, March 1996, ACM Press: New York, NY, USA. doi: 10.1145/234313.234327.
1968. VIII Metaheuristics International Conference (MIC'09), July 13–16, 2009, Mövenpick Hotel Hamburg: Hamburg, Germany.
1969. Heinz Mühlenbein. Parallel Genetic Algorithms, Population Genetics and Combinatorial Optimization. In WOPPLOT'89 [245], pages 398–406, 1989. doi: 10.1007/3-540-55027-5_23. See also [1970].
1970. Heinz Mühlenbein. Parallel Genetic Algorithms, Population Genetics and Combinatorial Optimization. In ICGA'89 [2414], pages 416–421, 1989. See also [1969].
1971. Heinz Mühlenbein. Evolution in Time and Space – The Parallel Genetic Algorithm. In FOGA'90 [2562], pages 316–337, 1990. Fully available at http://muehlenbein.org/parallel.PDF and http://www.iais.fraunhofer.de/fileadmin/images/pics/Abteilungen/INA/muehlenbein/PDFs/evolutionary/I/1991_Evolution.PDF [accessed 2009-08-12]. CiteSeerˣ: 10.1.1.54.6763.
1972. Heinz Mühlenbein. How Genetic Algorithms Really Work – I. Mutation and Hillclimbing. In PPSN II [1827], pages 15–26, 1992. Fully available at http://muehlenbein.org/mut92.pdf [accessed 2010-09-11].
1973. Heinz Mühlenbein. The Equation for Response to Selection and its Use for Prediction. Evolutionary Computation, 5(3):303–346, Fall 1997, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1997.5.3.303. Fully available at http://citeseer.ist.psu.edu/old/730919.html [accessed 2009-08-21]. PubMed ID: 10021762.
1974. Heinz Mühlenbein and Gerhard Paaß. From Recombination of Genes to the Estimation of Distributions I. Binary Parameters. In PPSN IV [2818], pages 178–187, 1996. doi: 10.1007/3-540-61723-X_982. Fully available at ftp://ftp.ais.fraunhofer.de/pub/as/ga/gmd_as_ga-96_04.ps [accessed 2009-08-20]. CiteSeerˣ: 10.1.1.7.7030. See also [1976].
1975. Heinz Mühlenbein and Dirk Schlierkamp-Voosen. Predictive Models for the Breeder Genetic Algorithm I: Continuous Parameter Optimization. Evolutionary Computation, 1(1):25–49, Spring 1993, MIT Press: Cambridge, MA, USA, Marc Schoenauer, editor. doi: 10.1162/evco.1993.1.1.25. Fully available at http://www.muehlenbein.org/breeder93.pdf [accessed 2010-12-04]. CiteSeerˣ: 10.1.1.30.9381.
1976. Heinz Mühlenbein, Jürgen Bendisch, and Hans-Michael Voigt. From Recombination of Genes to the Estimation of Distributions II. Continuous Parameters. In PPSN IV [2818], pages 188–197, 1996. doi: 10.1007/3-540-61723-X_983. Fully available at http://www.amspr.gfai.de/techreports/ppsn96-1.pdf [accessed 2009-08-20]. CiteSeerˣ: 10.1.1.9.2941. See also [1974].
1977. Srinivas Mukkamala, Andrew H. Sung, and Ajith Abraham. Modeling Intrusion Detection Systems using Linear Genetic Programming Approach. In Robert Orchard, Chunsheng Yang, and Moonis Ali, editors, IEA/AIE'04 [2088], pages 633–642, 2004. Fully available at http://www.rmltech.com/LGP%20Based%20IDS.pdf [accessed 2008-06-17].
1978. Jörg Müller, W. Lewis Johnson, and Barbara Hayes-Roth, editors. Proceedings of the First International Conference on Autonomous Agents (AGENTS'97), February 5–8, 1997, Marina del Rey, CA, USA. ACM Press: New York, NY, USA. isbn: 0-89791-877-0. Fully available at http://sigart.acm.org/proceedings/agents97/ [accessed 2009-06-27]. Google Books ID: ktPaAQAACAAJ. OCLC: 37622491 and 71484604. Sponsor: SIGGRAPH – ACM Special Interest Group on Computer Graphics and Interactive Techniques.
1979. Masaharu Munetomo. Designing Genetic Algorithms for Adaptive Routing Algorithms in the Internet. In GECCO'02 WS [224], pages 215–216, 2002. Fully available at http://uk.geocities.com/markcsinclair/ps/etppf_mun.ps.gz [accessed 2009-09-02]. CiteSeerˣ: 10.1.1.50.2448. Partly available at http://assam.cims.hokudai.ac.jp/~munetomo/documents/gecco99-ws.ppt and http://uk.geocities.com/markcsinclair/ps/etppf_mun_slides.ps.gz [accessed 2009-09-02]. See also [1982].
1980. Masaharu Munetomo and David Edward Goldberg. Linkage Identification by Non-monotonicity Detection for Overlapping Functions. IlliGAL Report 99005, Illinois Genetic Algorithms Laboratory (IlliGAL), Department of Computer Science, Department of General Engineering, University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, January 1999. Fully available at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/99005.ps.Z [accessed 2008-10-17]. CiteSeerˣ: 10.1.1.33.9322. See also [1981].
1981. Masaharu Munetomo and David Edward Goldberg. Linkage Identification by Non-monotonicity Detection for Overlapping Functions. Evolutionary Computation, 7(4):377–398, Winter 1999, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1999.7.4.377. See also [1980].
1982. Masaharu Munetomo, Yoshiaki Takai, and Yoshiharu Sato. An Adaptive Network Routing Algorithm Employing Path Genetic Operators. In ICGA'97 [166], pages 643–649, 1997. See also [1979, 1983, 1984].
1983. Masaharu Munetomo, Yoshiaki Takai, and Yoshiharu Sato. An Adaptive Routing Algorithm with Load Balancing by a Genetic Algorithm. Transactions of the Information Processing Society of Japan (IPSJ), 39(2):219–227, 1998. In Japanese. See also [1982].
1984. Masaharu Munetomo, Yoshiaki Takai, and Yoshiharu Sato. A Migration Scheme for the Genetic Adaptive Routing Algorithm. In SMC'98 [1364], pages 2774–2779, volume 3, 1998. doi: 10.1109/ICSMC.1998.725081. See also [1982].
1985. Durga Prasand Muni, Nikhil R. Pal, and Jyotirmoy Das. A Novel Approach to Design Classifiers using Genetic Programming. IEEE Transactions on Evolutionary Computation (IEEE-EC), 8(2):183–196, April 2004, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/TEVC.2004.825567. INSPEC Accession Number: 7973773.
1986. Nitin Muttil and Shie-Yui Liong. Superior Exploration–Exploitation Balance in Shuffled Complex Evolution. Journal of Hydraulic Engineering, 130(12):1202–1205, December 2004, ASCE (American Society of Civil Engineers) Publications: Reston, VA, USA. doi: 10.1061/(ASCE)0733-9429(2004)130:12(1202).
1987. John Mylopoulos and Raymond Reiter, editors. Proceedings of the 12th International Joint Conference on Artificial Intelligence (IJCAI'91-I), August 24–30, 1991, Sydney, NSW, Australia, volume 1. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-160-0. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-91-VOL1/CONTENT/content.htm [accessed 2008-04-01]. See also [831, 1988, 2122].
1988. John Mylopoulos and Raymond Reiter, editors. Proceedings of the 12th International Joint Conference on Artificial Intelligence (IJCAI'91-II), August 24–30, 1991, Sydney, NSW, Australia, volume 2. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-160-0. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-91-VOL2/CONTENT/content.htm [accessed 2008-04-01]. See also [831, 1987, 2122].
N
1989. Francesco Nachira, Paolo Dini, Andrea Nicolai, Marion Le Louarn, and Lorena Rivera León, editors. Digital Business Ecosystems. Office for Official Publications of the European Communities: Luxembourg, 2007. isbn: 92-79-01817-5. Fully available at http://www.digital-ecosystems.org/book/2006-4156_PROOF-DCS.pdf [accessed 2009-07-21].
1990. Thomas P. Nadeau and Toby J. Teorey. Achieving Scalability in OLAP Materialized View Selection. In DOLAP'02 [2558], pages 28–34, 2002. doi: 10.1145/583890.583895. CiteSeerˣ: 10.1.1.96.8347.
1991. Proceedings of the Third Asia-Pacific Conference on Simulated Evolution And Learning (SEAL'00), October 25–27, 2000, Nagoya Congress Center: Nagoya, Japan.
1992. Third Online Workshop on Soft Computing (WSC3), August 1998, Nagoya, Japan.
1993. Fourth Online Workshop on Soft Computing (WSC4), September 1999, Nagoya, Japan.
1994. Akito Naito, Fumio Harashima, Shun-Ichi Amari, Toshio Fukuda, and Kunihiko Fukushima, editors. Proceedings of 1993 International Joint Conference on Neural Networks (IJCNN'93), October 25–29, 1993, Nagoya Congress Center: Nagoya, Japan. IEEE Computer Society Press: Los Alamitos, CA, USA. isbn: 0-7803-1421-2, 0-7803-1422-0, and 0-7803-1423-9. Library of Congress Control Number (LCCN): 93-79884. Catalogue no.: 93CH3353-0.
1995. Tadashi Nakano and Tatsuya Suda. Adaptive and Evolvable Network Services. In GECCO'04-I [751], pages 151–162, 2004. Fully available at http://www.cs.york.ac.uk/rts/docs/GECCO_2004/Conference%20proceedings/papers/3102/31020151.pdf [accessed 2008-08-29]. See also [1996, 1997].
1996. Tadashi Nakano and Tatsuya Suda. Self-Organizing Network Services with Evolutionary Adaptation. IEEE Transactions on Neural Networks, 16(5):1269–1278, September 2005, IEEE Computer Society Press: Los Alamitos, CA, USA. doi: 10.1109/TNN.2005.853421. CiteSeerˣ: 10.1.1.74.9149. INSPEC Accession Number: 8586518. See also [1995].
1997. Tadashi Nakano and Tatsuya Suda. Applying Biological Principles to Designs of Network Services. Applied Soft Computing, 7(3):870–878, June 2007, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.asoc.2006.04.006. See also [1995].
1998. Wonhong Nam, Hyunyoung Kil, and Dongwon Lee. Type-Aware Web Service Composition Using Boolean Satisfiability Solver. In CEC/EEE'08 [1346], pages 331–334, 2008. doi: 10.1109/CECandEEE.2008.108. INSPEC Accession Number: 10475104. See also [204, 1999, 2064, 2066, 3034].
1999. Wonhong Nam, Hyunyoung Kil, and Jungjae Lee. QoS-Driven Web Service Composition Using Learning-Based Depth First Search. In CEC'09 [1245], pages 507–510, 2009. doi: 10.1109/CEC.2009.50. INSPEC Accession Number: 10839131. See also [662, 1571, 1998, 2064, 2066, 2067, 3034].
2000. Srini Narayanan and Sheila A. McIlraith. Simulation, Verification and Automated Composition of Web Services. In WWW'02 [26], pages 77–88, 2002. doi: 10.1145/511446.511457. Fully available at http://reference.kfupm.edu.sa/content/s/i/simulation__verification_and_automated_c_10676.pdf, http://www.cs.toronto.edu/kr/publications/nar-mci-www11.pdf, http://www.cs.toronto.edu/~sheila/publications/nar-mci-www11.pdf, http://www.cs.uoregon.edu/classes/08W/cis607sii/papers/NarayananMcIlraith02.pdf, and http://www.icsi.berkeley.edu/~snarayan/nar-mci-www11.pdf [accessed 2011-01-11]. CiteSeerˣ: 10.1.1.16.4126.
2001. NASA High Performance Computing and Communications Computational Aerosciences Workshop (CAS'00), February 15–17, 2000, NASA Ames Research Center: Moffett Field, CA, USA.
2002. Collected Abstracts for the First International Workshop on Learning Classifier Systems (IWLCS'92), October 6–9, 1992, NASA Johnson Space Center: Houston, TX, USA. See also [2534].
2003. 9th Conference on Technologies and Applications of Artificial Intelligence (TAAI'04), November 5–6, 2004, National Chengchi University (NCCU): Muzha, Wenshan District, Taipei City, Taiwan. Fully available at http://www.taai.org.tw/taai2004/ [accessed 2010-07-30].
2004. 10th Conference on Technologies and Applications of Artificial Intelligence (TAAI'05), December 2–3, 2005, National University of Kaohsiung: Kaohsiung, Taiwan. Fully available at http://www.taai.org.tw/taai2005/ [accessed 2010-07-30].
2005. Techniques foR Implementing Constraint Programming Systems (TRICS). Technical Report TRA9/00, National University of Singapore, School of Computing: Singapore, September 22, 2000, Singapore. Workshop in conjunction with CP 2000.
2006. 12th Conference on Technologies and Applications of Artificial Intelligence (TAAI'07), November 16–17, 2007, National Yunlin University of Science and Technology: Douliou, Yunlin, Taiwan.
2007. Bart Naudts and Alain Verschoren. Epistasis on Finite and Infinite Spaces. In InterSymp'96 [1412], pages 19–23, 1996. CiteSeerˣ: 10.1.1.32.6455.
2008. Bart Naudts and Alain Verschoren. Epistasis and Deceptivity. Bulletin of the Belgian Mathematical Society – Simon Stevin, 6(1):147–154, 1999, Belgian Mathematical Society: Campus de la Plaine, Brussels, Belgium. Fully available at http://projecteuclid.org/euclid.bbms/1103149975 [accessed 2009-07-11]. CiteSeerˣ: 10.1.1.32.4912. Zentralblatt MATH identifier: 0915.68143. Mathematical Reviews number (MathSciNet): MR1674709.
2009. Bart Naudts, Dominique Suys, and Alain Verschoren. Generalized Royal Road Functions and their Epistasis. Computers and Artificial Intelligence, 19(4), 2000, Slovak Academy of Science: Bratislava, Slovakia.
2010. Boris Naujoks, Nicola Beume, and Michael T. M. Emmerich. Multi-objective Optimisation using S-metric Selection: Application to Three-Dimensional Solution Spaces. In CEC'05 [641], pages 1282–1289, volume 2, 2005. doi: 10.1109/CEC.2005.1554838. Fully available at http://www.lania.mx/~ccoello/naujoks05.pdf.gz [accessed 2009-07-18]. CiteSeerˣ: 10.1.1.60.4181. INSPEC Accession Number: 8715411.
2011. Yehuda Naveh and Pierre Flener, editors. Fifth International Workshop on Local Search Techniques in Constraint Satisfaction (LSCS'08), September 18, 2008, Sydney, NSW, Australia. Fully available at http://sysrun.haifa.il.ibm.com/hrl/lscs2008/ [accessed 2009-06-24].
2012. Bernhard Nebel, editor. Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence (IJCAI'01), August 4–10, 2001, Seattle, WA, USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-777-3 and 1-55860-813-3. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-2001/content/content.htm [accessed 2008-04-01]. Google Books ID: BHlFAAAAYAAJ. OCLC: 48481676. See also [1810].
2013. Nadia Nedjah and Luiza de Macedo Mourelle, editors. Swarm Intelligent Systems, volume 26 in Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg, 2006. doi: 10.1007/978-3-540-33869-7. isbn: 3-540-33868-3. Google Books ID: 32PmGAAACAAJ and MJOvAAAACAAJ. Library of Congress Control Number (LCCN): 2006925434. Series editor: Janusz Kacprzyk.
2014. Nadia Nedjah, Luiza de Macedo Mourelle, Ajith Abraham, and Mario Köppen, editors. 5th International Conference on Hybrid Intelligent Systems (HIS'05), November 6–9, 2005, Pontifical Catholic University of Rio de Janeiro (PUC-Rio): Rio de Janeiro, RJ, Brazil. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7695-2457-5. Partly available at http://his05.hybridsystem.com/ [accessed 2007-09-01].
2015. Nadia Nedjah, Enrique Alba Torres, and Luiza de Macedo Mourelle, editors. Parallel Evolutionary Computations, volume 22/2006 in Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg, June 2006. doi: 10.1007/3-540-32839-4. isbn: 3-540-32837-8 and 3-540-32839-4. Google Books ID: J36LHQAACAAJ and kOC8GAAACAAJ. Library of Congress Control Number (LCCN): 2006921792.
2016. Nadia Nedjah, Leandro dos Santos Coelho, and Luiza de Macedo Mourelle, editors. Multi-Objective Swarm Intelligent Systems: Theory & Experiences, volume 261/2010 in Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg, 2010. doi: 10.1007/978-3-642-05165-4. Library of Congress Control Number (LCCN): 2009940420.
2017. John Ashworth Nelder and Roger A. Mead. A Simplex Method for Function Minimization. The Computer Journal, Oxford Journals, 7(4):308–313, January 1965, British Computer Society: Swindon, UK. doi: 10.1093/comjnl/7.4.308. Fully available at http://www.garfield.library.upenn.edu/classics1979/A1979HZ22100001.pdf and http://www.rupley.com/~jar/Rupley/Code/src/simplex/nelder-mead-simplex.pdf [accessed 2010-12-24]. CiteSeerˣ: 10.1.1.144.5920.
2018. Kevin M. Nelson. A Comparison of Evolutionary Programming and Genetic Algorithms for Electronic Part Placement. In EP'95 [1848], pages 503–519, 1995.
2019. Venkata Ranga Neppalli, Chuen-Lung Chen, and Jatinder N. D. Gupta. Genetic Algorithms for the Two-Stage Bicriteria Flowshop Problem. European Journal of Operational Research (EJOR), 95(2), December 6, 1996, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/0377-2217(95)00275-8.
2020. Ferrante Neri, Niko Kotilainen, and Mikko Vapa. An Adaptive Global-Local Memetic Algorithm to Discover Resources in P2P Networks. In EvoWorkshops'07 [1050], pages 61–70, 2007. doi: 10.1007/978-3-540-71805-5_7.
2021. Ferrante Neri, Jari Toivanen, Giuseppe Leonardo Cascella, and Yew-Soon Ong. An Adaptive Multimeme Algorithm for Designing HIV Multidrug Therapies. IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB), 4(2), April 2007, IEEE Computer Society Press: Los Alamitos, CA, USA. doi: 10.1109/TCBB.2007.070202 and 10.1109/tcbb.2007.1039. Fully available at http://ntu-cg.ntu.edu.sg/ysong/journal/Adaptive-Multimeme-HIV.pdf [accessed 2011-04-23]. CiteSeerˣ: 10.1.1.124.5677. PubMed ID: 17473319.
2022. Ferrante Neri, Jari Toivanen, and Raino A. E. Mäkinen. An Adaptive Evolutionary
Algorithm with Intelligent Mutation Local Searchers for Designing Multidrug Therapies for
HIV. Applied Intelligence – The International Journal of Artificial Intelligence, Neural
Networks, and Complex Problem-Solving Technologies, 27(3):219–235, December 2007, Springer
Netherlands: Dordrecht, Netherlands. doi: 10.1007/s10489-007-0069-8.
2023. Arnold Neumaier. Global Optimization and Constraint Satisfaction. In GICOLAG06 [2024],
2006. Fully available at http://www.mat.univie.ac.at/~neum/glopt.html [accessed 2009-06-11].
2024. Arnold Neumaier, Immanuel Bomze, Laurence A. Wolsey, and Ioannis Emiris, editors. Global
Optimization – Integrating Convexity, Optimization, Logic Programming, and Computational
Algebraic Geometry (GICOLAG06), December 4–8, 2006, International Erwin Schrödinger
Institute for Mathematical Physics (ESI): Vienna, Austria. Fully available at
http://www.mat.univie.ac.at/~neum/gicolag.html [accessed 2009-06-11].
2025. Frank Neumann and Ingo Wegener. Can Single-Objective Optimization Profit from Multi-
objective Optimization? In Multiobjective Problem Solving from Nature – From Concepts
to Applications [1554], pages 115–130. Springer New York: New York, NY, USA, 2008.
doi: 10.1007/978-3-540-72964-8_6.
2026. José Neves, Manuel Filipe Santos, and José Manuel Machado, editors. Progress in
Artificial Intelligence – Proceedings of the 13th Portuguese Conference on Artificial
Intelligence; Workshops: GAIW, AIASTS, ALEA, AMITA, BAOSW, BI, CMBSB, IROBOT,
MASTA, STCS, and TEMA (EPIA07), December 3–7, 2007, Guimarães, Portugal, volume
4874 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-77002-2.
isbn: 3-540-77000-3. Library of Congress Control Number (LCCN): 2007939905.
2027. Proceedings of the 12th IEEE Congress on Evolutionary Computation (CEC11), June 5–8,
2011, New Orleans, LA, USA.
2028. Mark E. J. Newman and Robin Engelhardt. Effect of Neutral Selection on the Evolution of
Molecular Species. Proceedings of the Royal Society B: Biological Sciences, 265(1403):1333–1338,
July 22, 1998, Royal Society Publishing: London, UK. doi: 10.1098/rspb.1998.0438.
Fully available at http://www.santafe.edu/media/workingpapers/98-01-001.pdf
[accessed 2010-12-04]. CiteSeerx: 10.1.1.43.9103. arXiv ID: adap-org/9712005v1.
2029. Sir Isaac Newton. Philosophiæ Naturalis Principia Mathematica. Apud Guil. & Joh. Innys,
Regiæ Societatis Typographos: London, UK, July 5, 1687. Fully available at
http://gdz.sub.uni-goettingen.de/no_cache/dms/load/img/?IDDOC=294010 [accessed 2008-08-21].
2030. Jerzy Neyman and Egon Sharpe Pearson. On the Use and Interpretation of Certain Test
Criteria for Purposes of Statistical Inference, Part I. Biometrika, 20A(1-2):175–240, July 1928,
Oxford University Press, Inc.: New York, NY, USA. doi: 10.1093/biomet/20A.1-2.175.
Reprinted in [2031].
2031. Jerzy Neyman and Egon Sharpe Pearson. Joint Statistical Papers. Cambridge University
Press: Cambridge, UK, University of California Press: Berkeley, CA, USA, Hodder
Arnold: London, UK, and Lubrecht & Cramer, Ltd.: Port Jervis, NY, USA, October 1967.
isbn: 0520009916 and 0852647069. Google Books ID: KH1GHgAACAAJ. OCLC: 527105.
asin: B000JM5XB0 and B000P0O0T2.
2032. Xuan Hoai Nguyen. A Flexible Representation for Genetic Programming: Lessons from
Natural Language Processing. PhD thesis, University of New South Wales, University College,
School of Information Technology and Electrical Engineering: Sydney, NSW, Australia,
December 2004. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/hoai_thesis.html
and http://www.cs.ucl.ac.uk/staff/W.Langdon/ftp/papers/hoai_thesis.tar.gz
[accessed 2010-07-26]. OCLC: 224847044, 455148847, and 648818432.
2033. Xuan Hoai Nguyen, Robert Ian McKay, and Daryl Leslie Essam. Some Experimental Results
with Tree Adjunct Grammar Guided Genetic Programming. In EuroGP02 [977], pages
229–238, 2002. doi: 10.1007/3-540-45984-7_22. Fully available at
http://sc.snu.ac.kr/PAPERS/TAGGGP.pdf [accessed 2007-09-09].
2034. Xuan Hoai Nguyen, Robert Ian McKay, Daryl Leslie Essam, and R. Chau. Solving
the Symbolic Regression Problem with Tree-Adjunct Grammar Guided Genetic
Programming: The Comparative Results. In CEC02 [944], pages 1326–1331, 2002.
doi: 10.1109/CEC.2002.1004435. Fully available at http://sc.snu.ac.kr/PAPERS/MCEC2002.pdf
[accessed 2007-09-09]. INSPEC Accession Number: 7328878.
2035. Giuseppe Nicosia, Vincenzo Cutello, Peter J. Bentley, and Jonathan Timmis, editors.
Proceedings of the 3rd International Conference on Artificial Immune Systems (ICARIS04),
September 13–16, 2004, Catania, Sicily, Italy, volume 3239 in Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-23097-1.
2036. Søren R. Nielsen, David Pisinger, and Per Marquardsen. Automatic Transformation of
Constraint Satisfaction Problems to Integer Linear Form – An Experimental Study. In
TRICS [2005], 2000. CiteSeerx: 10.1.1.36.9656 and 10.1.1.37.2112.
2037. Stefan Niemczyk. Ein Modellproblem mit einstellbarer Schwierigkeit zur Evaluierung
Evolutionärer Algorithmen (A Model Problem with Adjustable Difficulty for the Evaluation of
Evolutionary Algorithms). Master's thesis, University of Kassel, Fachbereich 16:
Elektrotechnik/Informatik, Distributed Systems Group: Kassel, Hesse, Germany, May 5, 2008,
Thomas Weise, Advisor, Kurt Geihs and Albert Zündorf, Committee members. Fully available
at http://www.it-weise.de/documents/files/N2008MPMES.pdf.
2038. Stefan Niemczyk and Thomas Weise. A General Framework for Multi-Model Estimation
of Distribution Algorithms. Technical Report, University of Kassel, Fachbereich
16: Elektrotechnik/Informatik, Distributed Systems Group: Kassel, Hesse, Germany,
March 10, 2010. Fully available at http://www.it-weise.de/documents/files/NW2010AGFFMMEODA.pdf
[accessed 2010-06-19]. See also [2917].
2039. Nils J. Nilsson, editor. Proceedings of the 3rd International Joint Conference on Artificial
Intelligence (IJCAI73), August 20–23, 1973, Stanford University: Stanford, CA, USA. Fully
available at http://dli.iiit.ac.in/ijcai/IJCAI-73/CONTENT/content.htm [accessed 2008-04-01].
2040. Jorge Nocedal and Stephen J. Wright. Numerical Optimization, volume 18 in Springer Series
in Operations Research and Financial Engineering. Springer-Verlag GmbH: Berlin, Germany,
2nd edition, 2006. doi: 10.1007/978-0-387-40065-5. isbn: 0-387-98793-2. Google Books
ID: 2B9LUYAP3n4C, eNlPAAAAMAAJ, and epc5fX0lqRIC. Library of Congress Control
Number (LCCN): 2006923897.
2041. David Noever and Subbiah Baskaran. Steady-State vs. Generational Genetic Algorithms: A
Comparison of Time Complexity and Convergence Properties. Technical Report 92-07-032,
Santa Fe Institute: Santa Fe, NM, USA.
2042. Andreas Nolte and Rainer Schrader. A Note on the Finite Time Behaviour of Simulated
Annealing. In SOR96 [3092], pages 175–180, 1996. See also [2043].
2043. Andreas Nolte and Rainer Schrader. A Note on the Finite Time Behaviour of Simulated
Annealing. Mathematics of Operations Research (MOR), 25(3):476–484, August 2000, Institute
for Operations Research and the Management Sciences (INFORMS): Linthicum, MD, USA.
doi: 10.1287/moor.25.3.476.12211. Fully available at
http://www.zaik.de/~paper/unzip.html?file=zaik1999-347.ps [accessed 2009-07-03].
CiteSeerx: 10.1.1.40.9456. Revised version from March 1999. See also [2042].
2044. Peter Nordin. A Compiling Genetic Programming System that Directly Manipulates the
Machine Code. In Advances in Genetic Programming Volume 1 [1539], Chapter 14, pages
311–331. MIT Press: Cambridge, MA, USA, 1994. Machine code GP on Sun SPARC and i868.
2045. Peter Nordin. Evolutionary Program Induction of Binary Machine Code and its
Applications. PhD thesis, Universität Dortmund, Fachbereich Informatik: Dortmund, North
Rhine-Westphalia, Germany, Krehl Verlag: Münster, Germany. isbn: 3931546071.
OCLC: 40285878. GBV-Identification (PPN): 231072783. asin: 3931546071 and B0033DLUWS.
2046. Peter Nordin and Wolfgang Banzhaf. Evolving Turing-Complete Programs for a
Register Machine with Self-Modifying Code. In ICGA95 [883], pages 318–325,
1995. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/Nordin_1995_tcp.html
[accessed 2008-09-16]. CiteSeerx: 10.1.1.34.4526.
2047. Peter Nordin and Wolfgang Banzhaf. Programmatic Compression of Images and Sound.
In GP96 [1609], pages 345–350, 1996. Fully available at
http://www.cs.mun.ca/~banzhaf/papers/gp96.pdf [accessed 2010-08-22]. CiteSeerx: 10.1.1.32.7927.
2048. Peter Nordin, Frank D. Francone, and Wolfgang Banzhaf. Explicitly Defined Introns
and Destructive Crossover in Genetic Programming. In Proceedings of the Workshop
on Genetic Programming: From Theory to Real-World Applications [2326], pages 6–22,
1995. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/nordin_1995_introns.html
[accessed 2010-08-22]. CiteSeerx: 10.1.1.38.8934. See also [2049, 2051].
2049. Peter Nordin, Frank D. Francone, and Wolfgang Banzhaf. Explicitly Defined Introns and
Destructive Crossover in Genetic Programming. SysReport 3/95, Universität Dortmund,
Fachbereich Informatik, Systems Analysis: Dortmund, North Rhine-Westphalia, Germany, 1995.
Fully available at http://www.cs.mun.ca/~banzhaf/papers/ML95.pdf [accessed 2010-08-22].
CiteSeerx: 10.1.1.38.8934. See also [2048, 2051].
2050. Peter Nordin, Wolfgang Banzhaf, and Frank D. Francone. Efficient Evolution of Machine
Code for CISC Architectures using Instruction Blocks and Homologous Crossover. In
Advances in Genetic Programming [2569], Chapter 12, pages 275–299. MIT Press:
Cambridge, MA, USA, 1996. Fully available at http://www.cs.bham.ac.uk/~wbl/aigp3/ch12.pdf
and http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/nordin_1999_aigp3.html [accessed 2008-09-16].
2051. Peter Nordin, Frank D. Francone, and Wolfgang Banzhaf. Explicitly Defined Introns and
Destructive Crossover in Genetic Programming. In Advances in Genetic Programming II [106],
pages 111–134. MIT Press: Cambridge, MA, USA, 1996. See also [2048, 2049].
2052. Peter Nordin, Wolfgang Banzhaf, and Markus F. Brameier. Evolution of a World
Model for a Miniature Robot using Genetic Programming. Robotics and Autonomous
Systems, 25(1-2):105–116, October 31, 1998, Elsevier Science Publishers B.V.:
Amsterdam, The Netherlands. doi: 10.1016/S0921-8890(98)00004-9. Fully available at
http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/Nordin_1998_RAS.html [accessed 2008-09-16].
CiteSeerx: 10.1.1.54.1211.
2053. Peter Nordin, Frank Hoffmann, Frank D. Francone, and Markus F. Brameier. AIM-GP
and Parallelism. In CEC99 [110], pages 1059–1066, 1999. Fully available at
http://www.cs.mun.ca/~banzhaf/papers/cec_parallel.pdf [accessed 2008-09-16].
CiteSeerx: 10.1.1.6.5450.
2054. Michael G. Norman and Pablo Moscato. A Competitive and Cooperative Approach to
Complex Combinatorial Search. Caltech Concurrent Computation Program 790, California
Institute of Technology (Caltech), Caltech Concurrent Computation Program (C3P): Pasadena,
CA, USA, 1989. CiteSeerx: 10.1.1.44.776. See also [2055].
2055. Michael G. Norman and Pablo Moscato. A Competitive and Cooperative Approach to
Complex Combinatorial Search. In JAIIO91 [514], pages 3.15–3.29, 1991. Also published as
Technical Report Caltech Concurrent Computation Program, Report. 790, California Institute
of Technology, Pasadena, California, USA, 1989. See also [2054].
2056. Proceedings of the Second Hawaii International Conference on System Sciences (HICSS69),
January 22–24, 1969, University of Hawaii at Manoa: Honolulu, HI, USA. North-Holland
Scientific Publishers Ltd.: Amsterdam, The Netherlands.
2057. M. Nowak and Peter K. Schuster. Error Thresholds of Replication in Finite Populations –
Mutation Frequencies and the Onset of Muller's Ratchet. Journal of Theoretical Biology,
137(4):375–395, April 20, 1989, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands.
Imprint: Academic Press Professional, Inc.: San Diego, CA, USA, D. Kirschner, Y. Iwasa,
and L. Wolpert, editors. PubMed ID: 2626057.
O
2058. Martin J. Oates and David Wolfe Corne. Global Web Server Load Balancing using
Evolutionary Computational Techniques. Soft Computing – A Fusion of Foundations,
Methodologies and Applications, 5(4):297–312, August 2001, Springer-Verlag:
Berlin/Heidelberg. doi: 10.1007/s005000100103.
2059. Shigeru Obayashi. Multidisciplinary Design Optimization of Aircraft Wing Planform
Based on Evolutionary Algorithms. In SMC98 [1364], pages 3148–3153, volume 4, 1998.
doi: 10.1109/ICSMC.1998.726486. Fully available at
http://www.lania.mx/~ccoello/obayashi98a.pdf.gz [accessed 2009-06-16]. INSPEC Accession
Number: 6195712.
2060. Shigeru Obayashi and Daisuke Sasaki. Visualization and Data Mining of Pareto
Solutions Using Self-Organizing Map. In EMO03 [957], pages 796–806, 2003.
doi: 10.1007/3-540-36970-8_56. Fully available at
http://www.ifs.tohoku.ac.jp/edge/library/Final%20EMO2003.pdf [accessed 2009-07-18].
2061. Shigeru Obayashi, Kalyanmoy Deb, Carlo Poloni, Tomoyuki Hiroyasu, and Tadahiko
Murata, editors. Proceedings of the Fourth International Conference on Evolutionary
Multi-Criterion Optimization (EMO07), March 5–8, 2007, Matsushima, Sendai,
Japan, volume 4403/2007 in Theoretical Computer Science and General Issues (SL
1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/978-3-540-70928-2. isbn: 3-540-70927-4, 3-540-70928-2, and
6610865078. Google Books ID: f9Lcf6imJ8MC. OCLC: 84611501, 180728448,
315394594, 318300130, and 403748482. Library of Congress Control Number
(LCCN): 2007921125.
2062. Gabriela Ochoa, Inman Harvey, and Hilary Buxton. Error Thresholds and Their Relation to
Optimal Mutation Rates. In ECAL99 [934], pages 54–63, 1999. doi: 10.1007/3-540-48304-7.
Fully available at ftp://ftp.cogs.susx.ac.uk/pub/users/inmanh/ecal99.ps.gz
[accessed 2009-07-10]. CiteSeerx: 10.1.1.43.9263.
2063. Christopher K. Oei, David Edward Goldberg, and Shau-Jin Chang. Tournament Selection,
Niching, and the Preservation of Diversity. IlliGAL Report 91011, Illinois Genetic
Algorithms Laboratory (IlliGAL), Department of Computer Science, Department of General
Engineering, University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA,
December 1991. Fully available at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/91011.ps.Z
[accessed 2008-03-15].
2064. Seog-Chan Oh, Byung-Won On, Eric J. Larson, and Dongwon Lee. BF*: Web Services
Discovery and Composition as Graph Search Problem. In EEE05 [1371], pages 784–786,
2005. doi: 10.1109/EEE.2005.41. INSPEC Accession Number: 8530439. See also [317, 662,
1999, 2065–2067, 3034].
2065. Seog-Chan Oh, Hyunyoung Kil, Dongwon Lee, and Soundar R. T. Kumara. Algorithms
for Web Services Discovery and Composition Based on Syntactic and Semantic Service
Descriptions. In CEC/EEE06 [2967], pages 433–435, 2006. doi: 10.1109/CEC-EEE.2006.12.
Fully available at http://pike.psu.edu/publications/eee06.pdf [accessed 2010-12-10].
CiteSeerx: 10.1.1.77.9275. INSPEC Accession Number: 9189341. See also [318, 662,
1998, 2064, 2066, 2067, 3034].
2066. Seog-Chan Oh, John Jung-Woon Yoo, Hyunyoung Kil, Dongwon Lee, and Soundar R. T.
Kumara. Semantic Web-Service Discovery and Composition Using Flexible Parameter
Matching. In CEC/EEE07 [1344], pages 533–536, 2007. doi: 10.1109/CEC-EEE.2007.86.
INSPEC Accession Number: 9868641. See also [319, 662, 1999, 2064, 2065, 2067, 3034].
2067. Seog-Chan Oh, Ju-Yeon Lee, Seon-Hwa Cheong, Soo-Min Lim, Min-Woo Kim, Sang-Seok
Lee, Jin-Bum Park, Sang-Do Noh, and Mye M. Sohn. WSPR*: Web-Service
Planner Augmented with A* Algorithm. In CEC09 [1245], pages 515–518, 2009.
doi: 10.1109/CEC.2009.13. INSPEC Accession Number: 10839129. See also [662, 1571, 1999,
2064–2066, 3034].
2068. Miloš Ohlídal, Jiří Jaroš, Josef Schwarz, and Václav Dvořák. Evolutionary Design of OAB
and AAB Communication Schedules for Interconnection Network. In EvoWorkshops06 [2341],
pages 267–278, 2006. doi: 10.1007/11732242_24.
2069. Tatsuya Okabe, Yaochu Jin, Bernhard Sendhoff, and Markus Olhofer. Voronoi-based
Estimation of Distribution Algorithm for Multi-objective Optimization. In CEC04 [1369],
pages 1594–1601, volume 2, 2004. doi: 10.1109/CEC.2004.1331086. Fully available
at http://www.honda-ri.org/intern/hriliterature/PN%2008-04 and
http://www.soft-computing.de/cec-2004-1445-final.pdf [accessed 2009-08-29]. INSPEC
Accession Number: 8229117.
2070. Nicole Oldham, Christopher Thomas, Amit P. Sheth, and Kunal Verma. METEOR-S Web
Service Annotation Framework with Machine Learning Classification. In SWSWPC04 [493],
pages 137–146, 2004. doi: 10.1007/978-3-540-30581-1_12.
2071. Gina Oliveira and Stefano Vita. A Multi-Objective Evolutionary Algorithm with ε-dominance
to Calculate Multicast Routes with QoS Requirements. In CEC09 [1350], pages 182–189,
2009. doi: 10.1109/CEC.2009.4982946. INSPEC Accession Number: 10688528.
2072. Anne L. Olsen. Penalty Functions and the Knapsack Problem. In CEC94 [1891], pages
554–558, volume 2, 1994. doi: 10.1109/ICEC.1994.350000.
2073. Donald M. Olsson and Lloyd S. Nelson. The Nelder-Mead Simplex Procedure for Function
Minimization. Technometrics, 17(1):45–51, February 1975, American Statistical Association
(ASA): Alexandria, VA, USA and American Society for Quality (ASQ): Milwaukee, WI,
USA. JSTOR Stable ID: 1267998.
2074. Mihai Oltean. Searching for a Practical Evidence of the No Free Lunch Theorems. In
BioADIT04 [1399], pages 472–483, 2004. doi: 10.1007/b101281. Fully available at
http://www.cs.ubbcluj.ro/~moltean/oltean_bioadit_springer2004.pdf [accessed 2009-03-01].
2075. Beatrice M. Ombuki-Berman and Franklin T. Hanshar. Using Genetic Algorithms for Multi-
depot Vehicle Routing. In Bio-inspired Algorithms for the Vehicle Routing Problem [2162],
pages 77–99. Springer-Verlag: Berlin/Heidelberg, 2009. doi: 10.1007/978-3-540-85152-3_4.
2076. Proceedings of the 2007 IEEE Symposium on Artificial Life (CI-ALife07), April 1–5, 2007,
Honolulu, HI, USA. Omnipress: Madison, WI, USA and IEEE (Institute of Electrical and
Electronics Engineers): Piscataway, NJ, USA. isbn: 1-4244-0701-X.
2077. Michael O'Neill and Conor Ryan, editors. Grammatical Evolution Workshop (GEWS02),
2002, Roosevelt Hotel: New York, NY, USA. AAAI Press: Menlo Park, CA, USA. Fully
available at http://www.grammatical-evolution.org/gews2002/ [accessed 2009-07-09].
Partly available at http://www.grammatical-evolution.org/pubs.html [accessed 2009-07-09].
Part of [1790].
2078. Michael O'Neill and Conor Ryan, editors. Proceedings of the Second Grammatical Evolution
Workshop (GEWS03), July 12, 2003, Holiday Inn Chicago: Chicago, IL, USA. See also
[484, 485].
2079. Michael O'Neill and Conor Ryan, editors. The 3rd Grammatical Evolution Workshop
(GEWS04), June 26, 2004, Red Lion Hotel: Seattle, WA, USA. Published as CD. See
also [751, 752].
2080. Michael O'Neill, Leonardo Vanneschi, Steven Matt Gustafson, Anna Isabel
Esparcia-Alcázar, Ivanoe de Falco, Antonio Della Cioppa, and Ernesto Tarantino, editors.
Genetic Programming – Proceedings of the 11th European Conference on Genetic
Programming (EuroGP08), March 26–28, 2008, Naples, Italy, volume 4971/2008 in
Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer
Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-78671-9.
isbn: 3-540-78670-8. Partly available at
http://evostar.iti.upv.es/index.php?option=com_content&view=article&id=46
[accessed 2011-02-28]. Google Books ID: OAHUcHbXoNkC. OCLC: 280852852. Library of
Congress Control Number (LCCN): 2008922954.
2081. Yew-Soon Ong and Andy J. Keane. Meta-Lamarckian Learning in Memetic Algorithms.
IEEE Transactions on Evolutionary Computation (IEEE-EC), 8(2):99–110, 2004, IEEE
Computer Society: Washington, DC, USA. doi: 10.1109/TEVC.2003.819944. Fully available
at http://eprints.soton.ac.uk/22794/1/ong_04.pdf [accessed 2011-04-23]. INSPEC
Accession Number: 7973767.
2082. Seizo Onoe, Mohsen Guizani, Hsiao-Hwa Chen, and Mamoru Sawahashi, editors.
Proceedings of the International Conference on Wireless Communications and Mobile Computing
(IWCMC06), July 3–6, 2006, Vancouver, BC, Canada. ACM Press: New York, NY, USA.
isbn: 1-59593-306-9. OCLC: 288959906.
2083. Godfrey C. Onwubolu and B. V. Babu, editors. New Optimization Techniques in Engineering,
volume 141 in Studies in Fuzziness and Soft Computing. Springer-Verlag GmbH: Berlin,
Germany. isbn: 3-540-20167-X. Google Books ID: Yrsq-Pz0kAAC.
2084. Aleksandr Ivanovich Oparin. The Origin of Life. Courier Dover Publications: North
Chelmsford, MA, USA, 2nd edition, 1953–2003. isbn: 0486495221 and 0486602133.
Partly available at http://www.valencia.edu/~orilife/textos/The%20Origin%20of%20Life.pdf.
Google Books ID: 3CYJOwAACAAJ, 65c5AAAAMAAJ, Jv8psJCtI0gC,
lDqJwAACAAJ, kW8ZKwAACAAJ, and qZB2o69KWSwC.
2085. Stan Openshaw. Building an Automated Modelling System to Explore a Universe of Spatial
Interaction Models. Geographical Analysis – An International Journal of Theoretical
Geography, 20(1):31–46, 1988, Wiley Periodicals, Inc.: Wilmington, DE, USA and Ohio State
University, Department of Geography: Columbus, OH, USA.
2086. Stan Openshaw and Ian Turton. Building New Spatial Interaction Models Using
Genetic Programming. In AISB94 [936], 1994. Fully available at
http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/openshaw_1994_bnsim.html and
http://www.geog.leeds.ac.uk/papers/94-1/94-1.pdf [accessed 2008-09-16].
2087. Abstracts of the National Conference of the Operations Research Society of Japan. Operations
Research Society of Japan (ORSJ): Tokyo, Japan.
2088. Robert Orchard, Chunsheng Yang, and Moonis Ali, editors. Proceedings of the 17th
International Conference on Industrial and Engineering Applications of Artificial
Intelligence and Expert Systems (IEA/AIE04), May 17–20, 2004, Ottawa, ON, Canada,
volume 3029/2004 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in
Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/b97304.
isbn: 3-540-22007-0. Google Books ID: T52Lf6oWKuMC. OCLC: 55592278, 59669961,
66574763, and 249601813. Library of Congress Control Number (LCCN): 2004105117.
2089. Una-May O'Reilly, Gwoing Tina Yu, Rick L. Riolo, and Bill Worzel, editors. Genetic
Programming Theory and Practice II, Proceedings of the Second Workshop on Genetic
Programming (GPTP04), May 13–15, 2004, University of Michigan, Center for the Study of
Complex Systems (CSCS): Ann Arbor, MI, USA, volume 8 in Genetic Programming Series.
Kluwer Publishers: Boston, MA, USA. doi: 10.1007/b101112. isbn: 0-387-23253-2 and
0-387-23254-0.
2090. Agustín Orfila, Juan M. Estévez-Tapiador, and Arturo Ribagorda. Evolving High-Speed,
Easy-to-Understand Network Intrusion Detection Rules with Genetic Programming. In
EvoWorkshops09 [1052], pages 93–98, 2009. doi: 10.1007/978-3-642-01129-0_11.
2091. Leslie E. Orgel. Prebiotic Adenine Revisited: Eutectics and Photochemistry. Origins of Life
and Evolution of the Biosphere, 34(4):361–369, August 2004, Springer Netherlands:
Dordrecht, Netherlands. doi: 10.1023/B:ORIG.0000029882.52156.c2.
2092. Helcio R. B. Orlande, editor. 4th International Conference on Inverse Problems in
Engineering: Theory and Practice – Inverse Problems in Engineering: Theory and Practice
(ICIPE02), May 26–31, 2002, Angra dos Reis, Rio de Janeiro, Brazil. United Engineering
Foundation (UEF): New York, NY, USA. doi: 10.1080/089161502753341870. Fully available
at http://www.lttc.coppe.ufrj.br/4icipe/ [accessed 2009-07-03].
2093. Late Breaking Papers at Genetic and Evolutionary Computation Conference (GECCO99
LBP), July 13–17, 1999, Orlando, FL, USA. See also [211, 2502].
2094. The Second International Workshop on Learning Classifier Systems (IWLCS99), July 13,
1999, Orlando, FL, USA. Co-located with GECCO-99. See also [211].
2095. Albert Orriols-Puig and Ester Bernadó-Mansilla. Evolutionary Rule-Based Systems
for Imbalanced Data Sets. Soft Computing – A Fusion of Foundations, Methodologies
and Applications, 13(3), February 2009, Springer-Verlag: Berlin/Heidelberg.
doi: 10.1007/s00500-008-0319-7.
2096. Albert Orriols-Puig, Jorge Casillas, and Ester Bernadó-Mansilla. Genetic-based Machine
Learning Systems are Competitive for Pattern Recognition. Evolutionary
Intelligence, 1(3):209–232, October 2008, Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/s12065-008-0013-9.
2097. First European Working Session on Learning (EWSL86), February 3–4, 1986, Orsay, France.
2098. Proceedings of the 10th International Symposium on Experimental Algorithms (SEA11),
May 5–7, 2011, Orthodox Academy of Crete (OAC): Kolymvari, Kissamos, Chania, Crete,
Greece.
2099. Domingo Ortiz-Boyer, César Hervás-Martínez, and Carlos A. Reyes García. CIXL2: A
Crossover Operator for Evolutionary Algorithms Based on Population Features. Journal of
Artificial Intelligence Research (JAIR), 24:1–48, July 2005, AI Access Foundation, Inc.: El
Segundo, CA, USA and AAAI Press: Menlo Park, CA, USA. Fully available at
http://www.aaai.org/Papers/JAIR/Vol24/JAIR-2401.pdf and
http://www.cs.cmu.edu/afs/cs/project/jair/pub/volume24/ortizboyer05a-html/Ortiz-Boyer.html
[accessed 2009-10-24].
2100. Emilio G. Ortiz-García, Lucas Martínez-Bernabeu, Sancho Salcedo-Sanz, Francisco Flórez,
José Antonio Portilla-Figueras, and Ángel M. Pérez-Bellido. A Parallel Evolutionary
Algorithm for the Hub Location Problem with Fully Interconnected Backbone and Access
Networks. In CEC09 [1350], pages 1501–1506, 2009. doi: 10.1109/CEC.2009.4983120. INSPEC
Accession Number: 10688715.
2101. Henry F. Osborn. Ontogenic and Phylogenic Variation. Science Magazine, 4(100):786–789,
November 27, 1896, American Association for the Advancement of Science (AAAS):
Washington, DC, USA and HighWire Press (Stanford University): Cambridge, MA, USA.
doi: 10.1126/science.4.100.786.
2102. Martin J. Osborne and Ariel Rubinstein. A Course in Game Theory. MIT Press: Cambridge,
MA, USA, July 1994. isbn: 0-262-65040-1. Google Books ID: 5ntdaYX4LPkC.
2103. Ibrahim H. Osman. An Introduction to Metaheuristics. In Operational Research Tutorial
Papers – Annual Conference of the Operational Research Society [1695], pages 92–122, 1995.
2104. Ibrahim H. Osman and James P. Kelly, editors. 1st Metaheuristics International Conference –
Meta-Heuristics: The Theory and Applications (MIC95), July 22–25, 1995, Breckenridge,
CO, USA. Kluwer Publishers: Boston, MA, USA. isbn: 0-7923-9700-2. Published in 1996.
2105. Ibrahim H. Osman and Gilbert Laporte. Metaheuristics: A bibliography. Annals
of Operations Research, 63(5):511–623, October 1996, Springer Netherlands: Dordrecht,
Netherlands and J. C. Baltzer AG, Science Publishers: Amsterdam, The Netherlands.
doi: 10.1007/BF02125421. Fully available at
http://sb.aub.edu.lb/PersonalSites/DrOsman/docs/Articles-pdf/BIBmh96.pdf [accessed 2010-09-17].
2106. Pavel Ošmera, editor. Proceedings of the 6th International Conference on Soft Computing
(MENDEL00), June 7–9, 2000, Brno University of Technology: Brno, Czech Republic.
Brno University of Technology, Ústav Automatizace a Informatiky: Brno, Czech Republic.
isbn: 80-214-1609-2.
2107. Pavel Ošmera and Radek Matoušek, editors. Proceedings of the 13th International Conference
on Soft Computing (MENDEL07), September 5–7, 2007, Prague, Czech Republic. Brno
University of Technology, Faculty of Mechanical Engineering: Brno, Czech Republic.
2108. Fernando E. B. Otero, Monique M. S. Silva, and Alex Alves Freitas. Genetic Programming
for Attribute Construction in Data Mining. In GECCO02 [1675], page 1270, 2002. See also
[2109].
2109. Fernando E. B. Otero, Monique M. S. Silva, Alex Alves Freitas, and Julio C. Nievola. Genetic
Programming for Attribute Construction in Data Mining. In EuroGP03 [2360], pages
101–121, 2003. doi: 10.1007/3-540-36599-0_36. Fully available at
http://www.cs.kent.ac.uk/people/staff/aaf/pub_papers.dir/EuroGP-2003-Fernando.pdf
[accessed 2007-09-09]. See also [2108].
2110. James G. Oxley. Matroid Theory, volume 3 in Oxford Graduate Texts in Mathematics.
Oxford University Press, Inc.: New York, NY, USA, 1992. isbn: 0-19-853563-5 and
0-19-920250-8. OCLC: 26053360, 440044001, and 474911564. Library of Congress
Control Number (LCCN): 92020802. GBV-Identification (PPN): 111250307, 230783953,
383918693, 470663685, 501250778, and 522307191. LC Classification: QA166.6 .O95
1992.
2111. Akira Oyama. Wing Design Using Evolutionary Algorithm. PhD thesis, Tohoku University,
Department of Aeronautics and Space Engineering: Sendai, Japan, March 2000, Kazuhiro
Nakahashi and Shigeru Obayashi, Advisors. Fully available at
http://flab.eng.isas.ac.jp/member/oyama/index2e.html [accessed 2009-06-16].
CiteSeerx: 10.1.1.24.3306.
P
2112. Ingo Paenke, Jürgen Branke, and Yaochu Jin. On the Influence of Phenotype Plasticity on
Genotype Diversity. In FOCI07 [1865], pages 33–40, 2007. doi: 10.1109/FOCI.2007.372144.
Fully available at http://www.honda-ri.org/intern/Publications/PN%2052-06
and http://www.soft-computing.de/S001P005.pdf [accessed 2008-11-10]. Best student
paper.
2113. Charles Campbell Palmer. An Approach to a Problem in Network Design using Genetic
Algorithms. PhD thesis, Polytechnic University: New York, NY, USA, April 1994, Aaron
Kershenbaum, Supervisor, Susan Flynn Hummel, Richard M. van Slyke, and Robert R.
Boorstyn, Committee members. Fully available at
ftp://ftp.cs.cuhk.hk/pub/EC/thesis/phd/palmer-phd.ps.gz [accessed 2010-07-28].
CiteSeerx: 10.1.1.38.1716. UMI Order No.: GAX94-31740. See also [2114].
2114. Charles Campbell Palmer and Aaron Kershenbaum. An Approach to a Problem in Network
Design using Genetic Algorithms. Networks, 26(3):151–163, 1995, Wiley Periodicals, Inc.:
Wilmington, DE, USA. doi: 10.1002/net.3230260305. See also [2113].
2115. Charles Campbell Palmer and Aaron Kershenbaum. Representing Trees in Genetic
Algorithms. In CEC94 [1891], pages 379–384, volume 1, 1994. CiteSeerx: 10.1.1.36.1655.
See also [2116].
2116. Charles Campbell Palmer and Aaron Kershenbaum. Representing Trees in Genetic
Algorithms. In Handbook of Evolutionary Computation [171], Chapter G1.3. Oxford University
Press, Inc.: New York, NY, USA, Institute of Physics Publishing Ltd. (IOP): Dirac House,
Temple Back, Bristol, UK, and CRC Press, Inc.: Boca Raton, FL, USA, 1997. Fully available
at http://www.cs.cinvestav.mx/~constraint/papers/palmer94representing.ps
[accessed 2010-07-26]. See also [2115].
2117. Themis Palpanas, Nick Koudas, and Alberto Mendelzon. On Space Constrained Set Selection
Problem. Data & Knowledge Engineering, 67(1):200–218, October 2008, Elsevier Science
Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers
Ltd.: Amsterdam, The Netherlands. doi: 10.1016/j.datak.2008.06.011.
2118. Guanyu Pan, Quansheng Dou, and Xiaohua Liu. Performance of two Improved Particle
Swarm Optimization In Dynamic Optimization Environments. In ISDA06 [553], pages
1024–1028, volume 2, 2006. doi: 10.1109/ISDA.2006.253752. INSPEC Accession Number: 9275253.
2119. Hui Pan, Ling Wang, and Bo Liu. Particle Swarm Optimization for Function Optimization in
Noisy Environment. Applied Mathematics and Computation, 181(2):908–919, October 15, 2006,
Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.amc.2006.01.066.
2120. Giselher Pankratz and Veikko Krypczyk. Benchmark Data Sets for Dynamic Vehicle
Routing Problems. FernUniversität in Hagen, Lehrstuhl für Wirtschaftsinformatik: Hagen,
North Rhine-Westphalia, Germany, June 12, 2007. Fully available at
http://www.fernuni-hagen.de/WINF/inhfrm/benchmark_data.htm [accessed 2008-10-27].
2121. Hsueh-Yuan Pao. 2004 IEEE Antennas and Propagation Society (A-P-S) International
Symposium and USNC / URSI National Radio Science Meeting, June 20–25, 2004. IEEE
Computer Society: Piscataway, NJ, USA. doi: 10.1109/APS.2004.1330376. isbn: 0-7803-8302-8.
2122. Mike P. Papazoglou and John Zeleznikow, editors. The Next Generation of Information Systems: From Data to Knowledge – A Selection of Papers Presented at Two IJCAI-91 Workshops (IJCAI'91 WS), August 26, 1991, Sydney, NSW, Australia, volume 611 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 0-387-55616-8 and 3-540-55616-8. OCLC: 26095701, 123310072, 230127949, 231363363, 320295775, 321349775, 423036170, 441767836, 471535298, and 499252204. Library of Congress Control Number (LCCN): 92021889. GBV-Identification (PPN): 119292769. LC Classification: QA76.9.D3 N485 1992. See also [831, 1987, 1988].
2123. Proceedings of the 5th International Conference on Simulated Evolution And Learning (SEAL'04), October 26–29, 2004, Paradise Hotel: Busan, South Korea.
2124. Panos M. Pardalos and Ding-Zhu Du, editors. Handbook of Combinatorial Optimization, volume 1–3. Kluwer Academic Publishers: Norwell, MA, USA, October 31, 1990. isbn: 0-7923-5018-9, 0-7923-5019-7, 0-7923-5285-8, and 0-7923-5293-9. OCLC: 38485795 and 490888316. Library of Congress Control Number (LCCN): 98002942. LC Classification: QA402.5 .H335 1998.
2125. Panos M. Pardalos and Mauricio G.C. Resende, editors. Handbook of Applied Opti-
mization. Oxford University Press, Inc.: New York, NY, USA, February 22, 2002.
isbn: 0-19-512594-0. OCLC: 45532495 and 248337411.
2126. Panos M. Pardalos, Nguyen Van Thoai, and Reiner Horst. Introduction to Global Optimization, volume 3 & 48 in Nonconvex Optimization and Its Applications. Springer Science+Business Media, Inc.: New York, NY, USA, 1st/2nd edition, 1995–2000. isbn: 0-7923-3556-2 and 0-7923-6756-1. Google Books ID: D8LxE6PB8fIC, atHlWfTt9KAC, dbu02-1JbLIC, and w6bRM8W-oTgC.
2127. Vilfredo Federico Pareto. Cours d'Économie Politique. F. Rouge: Lausanne/Paris, France, 1896–1897. Partly available at http://ann.sagepub.com/cgi/reprint/9/3/128.pdf [accessed 2009-04-23]. 2 volumes. See also [1938].
2128. Jong-man Park, Jeon-gue Park, Chong-hyun Lee, and Mun-sung Han. Robust and Efficient Genetic Crossover Operator: Homologous Recombination. In IJCNN'93 [1994], pages 2975–2978, volume 3, 1993. doi: 10.1109/IJCNN.1993.714347. INSPEC Accession Number: 4962425.
2129. Julia K. Parrish and William M. Hamner, editors. Animal Groups in Three Dimensions: How Species Aggregate. Cambridge University Press: Cambridge, UK, December 1997. doi: 10.2277/0521460247. isbn: 0521460247. Google Books ID: zj4I4HNZSR0C. OCLC: 37156372 and 243890243.
2130. Konstantinos E. Parsopoulos. Cooperative Micro-Differential Evolution for High-Dimensional Problems. In GECCO'09-I [2342], pages 531–538, 2009. doi: 10.1145/1569901.1569975.
2131. Janice Partyka and Randolph Hall. Vehicle Routing Software Survey, volume 37 (1). Lion-
heart Publishing, Inc.: Marietta, GA, USA, February 2010. Fully available at http://www.
lionhrtpub.com/orms/surveys/Vehicle_Routing/vrss.html [accessed 2011-04-25]. See
also [2132].
2132. Janice Partyka and Randolph Hall. Vehicle Routing Software Survey – On the Road to Connectivity: Creative integration of computer, communication and location technologies help a wide range of industries thrive in difficult times. OR/MS Today, February 2010, Lionheart Publishing, Inc.: Marietta, GA, USA and Institute for Operations Research and the Management Sciences (INFORMS): Linthicum, ML, USA. Fully available at http://www.lionhrtpub.com/orms/orms-2-10/frsurvey.html [accessed 2011-04-25]. See also [2131].
2133. Dilip Patel, Shushma Patel, and Yingxu Wang, editors. Proceedings of the 2nd IEEE International Conference on Cognitive Informatics (ICCI'03), August 18–20, 2003, London, UK. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7695-1986-5.
2134. Norman R. Paterson. Genetic Programming with Context-Sensitive Grammars. PhD thesis, University of St Andrews, School of Computer Science: St Andrews, Scotland, UK, September 2002, Michael Livesey, Supervisor. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/paterson_thesis.html [accessed 2009-06-29].
2135. David A. Patterson and John L. Hennessy. Computer Organization and Design: The
Hardware/Software Interface, The Morgan Kaufmann Series in Computer Architecture
and Design. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 3rd edition,
2004. isbn: 0-12-088433-X and 1-558-60604-1. Google Books ID: 1lD9LZRcIZ8C.
OCLC: 56213091 and 503402668. GBV-Identication (PPN): 390849650.
2136. Diane B. Paul. FITNESS: Historical Perspectives. In Keywords in Evolutionary Biology [1521], pages 112–114. Harvard University Press: Cambridge, MA, USA, 1992.
2137. Linus Carl Pauling. The Nature of the Chemical Bond and the Structure of Molecules
and Crystals: An Introduction to Modern Structural Chemistry. Cornell University Press:
Ithaca, NY, USA, 3rd edition, 1960. isbn: 0-8014-0333-2. Fully available at http://
osulibrary.oregonstate.edu/specialcollections/coll/pauling/ [accessed 2007-
07-12].
2138. Terry Payne and Valentina Tamma, editors. AAAI Fall Symposium on Agents and the Semantic Web, November 3–6, 2005, Arlington, VA, USA, volume FS-05-01. AAAI Press: Menlo Park, CA, USA.
2139. Darren Pearce, editor. The 12th White House Papers – Graduate Research in Cognitive and Computing Sciences at Sussex. Cognitive Science Research Papers (CSRP) CSRP 512, University of Sussex, School of Cognitive Science: Brighton, UK, November 1999, Isle of Thorns, UK. Fully available at ftp://ftp.informatics.sussex.ac.uk/pub/reports/csrp/csrp512.ps.Z [accessed 2009-09-02]. issn: 1350-3162.
2140. Judea Pearl. Heuristics: Intelligent Search Strategies for Computer Problem Solving, The Addison-Wesley Series in Artificial Intelligence. Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, 1984. isbn: 0-201-05594-5. Google Books ID: 0XtQAAAAMAAJ and 1HpQAAAAMAAJ. OCLC: 9644728 and 251831871. Library of Congress Control Number (LCCN): 83012217. GBV-Identification (PPN): 013593463 and 021186472. LC Classification: Q335 .P38 1984.
2141. David W. Pearson, Nigel C. Steele, and Rudolf F. Albrecht, editors. Proceedings of the 2nd International Conference on Artificial Neural Nets and Genetic Algorithms (ICANNGA'95), April 18–21, 1995, Alès, France. Springer New York: New York, NY, USA. isbn: 3-211-82692-0. Google Books ID: 77hQAAAAMAAJ. OCLC: 32241732.
2142. David W. Pearson, Nigel C. Steele, and Rudolf F. Albrecht, editors. Proceedings of the 6th International Conference on Artificial Neural Nets and Genetic Algorithms (ICANNGA'03), April 23–25, 2003, Roanne, France.
2143. Karl Pearson. The Problem of the Random Walk. Nature, 72:294, July 27, 1905.
doi: 10.1038/072294b0.
2144. Witold Pedrycz and Athanasios V. Vasilakos, editors. Computational Intelligence in Telecommunications Networks. CRC Press, Inc.: Boca Raton, FL, USA, September 15, 2000. isbn: 084931075X. Google Books ID: 7lVj5pqSxXIC and dE6d-jPoDEEC. OCLC: 43894172 and 55103484.
2145. Martin Pelikan. Hierarchical Bayesian Optimization Algorithm – Toward a New Generation of Evolutionary Algorithms, volume 170/2005 in Studies in Fuzziness and Soft Computing. Springer-Verlag GmbH: Berlin, Germany, 2005. doi: 10.1007/b10910. isbn: 3-540-23774-7. Google Books ID: R0QHqcaTfIC. OCLC: 57730016 and 288213853. Library of Congress Control Number (LCCN): 2004116659.
2146. Martin Pelikan. Analysis of Estimation of Distribution Algorithms and Genetic Algorithms on NK Landscapes. In GECCO'08 [1519], pages 1033–1040, 2008. doi: 10.1145/1389095.1389287. arXiv ID: 0801.3111. Session: Genetic Algorithms Papers.
2147. Martin Pelikan and David Edward Goldberg. Genetic Algorithms, Clustering, and the Breaking of Symmetry. IlliGAL Report 2000013, Illinois Genetic Algorithms Laboratory (IlliGAL), Department of Computer Science, Department of General Engineering, University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, March 11, 2003. Fully available at www.illigal.uiuc.edu/pub/papers/IlliGALs/2000013.ps.Z [accessed 2009-08-28]. CiteSeerx: 10.1.1.35.8620. See also [2148].
2148. Martin Pelikan and David Edward Goldberg. Genetic Algorithms, Clustering, and the Breaking of Symmetry. In PPSN VI [2421], pages 385–394, 2000. doi: 10.1007/3-540-45356-3_38. See also [2147].
2149. Martin Pelikan and Kumara Sastry, editors. Proceedings of the Optimization by Building and Using Probabilistic Models Workshop (OBUPM'01), July 7, 2001, Holiday Inn Golden Gateway Hotel: San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. Part of [2570].
2150. Martin Pelikan, David Edward Goldberg, and Erick Cantú-Paz. Linkage Problem, Distribution Estimation, and Bayesian Networks. Technical Report 98013, Illinois Genetic Algorithms Laboratory (IlliGAL), Department of Computer Science, Department of General Engineering, University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, November 1998. Fully available at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/98013.ps.Z [accessed 2009-07-11]. CiteSeerx: 10.1.1.45.213. See also [483]. Newly published as [2152].
2151. Martin Pelikan, David Edward Goldberg, and Erick Cantú-Paz. BOA: The Bayesian Optimization Algorithm. In GECCO'99 [211], pages 525–532, 1999. Fully available at https://eprints.kfupm.edu.sa/28537/ [accessed 2008-10-18]. CiteSeerx: 10.1.1.46.8131. See also [2152].
2152. Martin Pelikan, David Edward Goldberg, and Erick Cantú-Paz. BOA: The Bayesian Optimization Algorithm. IlliGAL Report 99003, Illinois Genetic Algorithms Laboratory (IlliGAL), Department of Computer Science, Department of General Engineering, University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, January 1999. Fully available at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/99003.ps.Z [accessed 2009-07-11]. CiteSeerx: 10.1.1.45.934. See also [2150, 2151].
2153. Martin Pelikan, David Edward Goldberg, and Fernando G. Lobo. A Survey of Optimization by Building and Using Probabilistic Models. Technical Report 99018, Illinois Genetic Algorithms Laboratory (IlliGAL), Department of Computer Science, Department of General Engineering, University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, September 1999. Fully available at http://eprints.kfupm.edu.sa/21437/ and http://www.google.de/url?sa=t&source=web&ct=res&cd=8&url=http%3A%2F%2Fwww.illigal.uiuc.edu%2Fpub%2Fpapers%2FIlliGALs%2F99018.ps.Z&ei=N5JPSoz9DZOqmgP_wfiZCw&usg=AFQjCNEuIEIxh7My7MJ2wIKuLw7tQ34DMg [accessed 2009-07-04]. CiteSeerx: 10.1.1.40.1920. See also [2154, 2156].
2154. Martin Pelikan, David Edward Goldberg, and Fernando G. Lobo. A Survey of Optimization by Building and Using Probabilistic Models. In ACC [1332], pages 3289–3293, volume 5, 2000. doi: 10.1109/ACC.2000.879173. See also [2153, 2156].
2155. Martin Pelikan, Heinz Mühlenbein, and A. O. Rodriguez, editors. Proceedings of the Optimization by Building and Using Probabilistic Models Workshop (OBUPM2000), 2000, Riviera Hotel and Casino: Las Vegas, NV, USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. Part of [2935].
2156. Martin Pelikan, David Edward Goldberg, and Fernando G. Lobo. A Survey of Optimization by Building and Using Probabilistic Models. Computational Optimization and Applications, 21(1):5–20, January 2002, Kluwer Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands. doi: 10.1023/A:1013500812258. Fully available at http://w3.ualg.pt/~flobo/papers/pmbga-survey.pdf [accessed 2009-07-04]. See also [2153, 2154].
2157. Martin Pelikan, Kumara Sastry, and David Edward Goldberg. Multiobjective hBOA, Clus-
tering, and Scalability. IlliGAL Report 2005005, Illinois Genetic Algorithms Laboratory
(IlliGAL), Department of Computer Science, Department of General Engineering, Univer-
sity of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, February 2005. Fully
available at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/2005005.pdf
[accessed 2009-08-28]. arXiv ID: cs/0502034v1. See also [2398].
2158. Martin Pelikan, Kumara Sastry, and Erick Cantú-Paz, editors. Scalable Optimization via Probabilistic Modeling – From Algorithms to Applications, volume 33 in Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg, 2006. doi: 10.1007/978-3-540-34954-9. isbn: 3-540-34953-7. Google Books ID: znzGDXPh6NAC. OCLC: 72437004, 180920474, and 318298233. Library of Congress Control Number (LCCN): 2006928618.
2159. David Alejandro Pelta and Natalio Krasnogor, editors. Proceedings of the 1st International Workshop Nature Inspired Cooperative Strategies for Optimization (NICSO'06), June 29–30, 2006, Granada, Spain. Rosillos: Granada, Spain. isbn: 8468993239. Google Books ID: jzgDGQAACAAJ.
2160. Parag C. Pendharkar and Gary J. Koehler. A General Steady State Distribution based Stopping Criteria for Finite Length Genetic Algorithms. European Journal of Operational Research (EJOR), 176(3):1436–1451, February 2007, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands, Roman Slowinski, Jesus Artalejo, Jean-Charles Billaut, Robert Dyson, and Lorenzo Peccati, editors. doi: 10.1016/j.ejor.2005.10.050.
2161. Fei Peng, Ke Tang, Guoliang Chen, and Xin Yao. Population-based Algorithm Portfolios for Numerical Optimization. IEEE Transactions on Evolutionary Computation (IEEE-EC), 14(5):782–800, March 29, 2010, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/TEVC.2010.2040183. INSPEC Accession Number: 11559790.
2162. Francisco Baptista Pereira and Jorge Tavares, editors. Bio-inspired Algorithms for the Vehi-
cle Routing Problem, volume 161 in Studies in Computational Intelligence. Springer-Verlag:
Berlin/Heidelberg, 2009. doi: 10.1007/978-3-540-85152-3. Library of Congress Control Num-
ber (LCCN): 2008933225.
2163. J. Periaux, Carlo Poloni, and Gabriel Winter, authors. D. Quagliarella, editor. Genetic Algorithms and Evolution Strategy in Engineering and Computer Science: Recent Advances and Industrial Applications. John Wiley & Sons Ltd.: New York, NY, USA, 1998. isbn: 0471977101. Google Books ID: y_NQAAAAMAAJ. OCLC: 38935523.
2164. Timothy Perkis. Stack Based Genetic Programming. In CEC'94 [1891], pages 148–153, volume 1, 1994. Fully available at ftp://coast.cs.purdue.edu/pub/doc/genetic_algorithms/GP/Stack-Genetic-Programming.ps.gz [accessed 2010-08-19]. CiteSeerx: 10.1.1.53.2622.
2165. Adrian Perrig and Dawn Xiaodong Song. A First Step Towards the Automatic Generation of Security Protocols. In NDSS'00 [1413], pages 73–83, 2000. Fully available at http://sparrow.ece.cmu.edu/~adrian/projects/protgen/protgen.pdf, http://www.cs.berkeley.edu/~dawnsong/papers/apg.pdf, and http://www.isoc.org/isoc/conferences/ndss/2000/proceedings/031.pdf [accessed 2009-09-02]. CiteSeerx: 10.1.1.42.2688.
2166. Charles J. Petrie, editor. Semantic Web Services Challenge: Proceedings of the 2008 Work-
shops, 2009, LG-2009-01. Stanford University, Computer Science Department, Stanford Logic
Group: Stanford, CA, USA. Fully available at http://logic.stanford.edu/reports/
LG-2009-01.pdf [accessed 2010-12-12]. See also [1023].
2167. Alan Petrowski. A Clearing Procedure as a Niching Method for Genetic Algorithms. In CEC'96 [1445], pages 798–803, 1996. doi: 10.1109/ICEC.1996.542703. Fully available at http://sci2s.ugr.es/docencia/cursoMieres/clearing-96.pdf [accessed 2008-12-16]. CiteSeerx: 10.1.1.33.8027. INSPEC Accession Number: 5375492.
2168. Alan Petrowski. An Efficient Hierarchical Clustering Technique for Speciation. Technical Report, Institut National des Télécommunications: Évry Cedex, France, 1997. CiteSeerx: 10.1.1.54.1131.
2169. Ferdinando Pezzella, editor. Giornate di Lavoro (Entreprise Systems: Management of Technological and Organizational Changes, Gestione del cambiamento tecnologico ed organizzativo nei sistemi d'impresa) (AIRO'95), September 20–22, 1995, Ancona, Italy. Associazione Italiana di Ricerca Operativa – Optimization and Decision Sciences (AIRO): Italy.
2170. Patrick C. Phillips. The Language of Gene Interaction. Genetics, 149(3):1167–1171, July 1998, Genetics Society of America (GSA): Bethesda, MD, USA. Fully available at http://www.genetics.org/cgi/reprint/149/3/1167.pdf [accessed 2009-07-11].
2171. Jiratta Phuboon-on and Raweewan Auepanwiriyakul. Selecting Materialized View Using
Two-Phase Optimization with Multiple View Processing Plan. Technical Report 27, World
Academy of Science, Engineering and Technology (WASET): Baky, AZ, USA, 2007. Fully
available at http://www.waset.org/journals/waset/v27/v27-30.pdf [accessed 2011-
04-13].
2172. Samuel Pierre. Inferring New Design Rules by Machine Learning: A Case Study of Topological Optimization. IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans, 28(5):575–585, September 1998, IEEE Systems, Man, and Cybernetics Society: New York, NY, USA. doi: 10.1109/3468.709602. issn: 6006762.
2173. Samuel Pierre and Ali Elgibaoui. Improving Communication Network Topologies using Tabu Search. In LCN'97 [1329], pages 44–53, 1997. doi: 10.1109/LCN.1997.630901. INSPEC Accession Number: 5766134.
2174. Samuel Pierre and Fabien Houeto. Assigning Cells to Switches in Cellular Mobile Networks using Taboo Search. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics, 32(3):351–356, June 2000, IEEE Systems, Man, and Cybernetics Society: New York, NY, USA. doi: 10.1109/TSMCB.2002.999810. PubMed ID: 18238132. See also [2175].
2175. Samuel Pierre and Fabien Houeto. A Tabu Search Approach for Assigning Cells to Switches in Cellular Mobile Networks. Computer Communications – The International Journal for the Computer and Telecommunications Industry, 25(5):464–477, March 15, 2002, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/S0140-3664(01)00371-1. See also [2174].
2176. Samuel Pierre and Gabriel Ioan Ivascu, editors. Proceedings of the 2nd IEEE International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob'06), June 19–21, 2006, Montreal, QC, Canada. IEEE Computer Society: Piscataway, NJ, USA. isbn: 1-4244-0494-0. OCLC: 77118231 and 122525160. Order No.: 06EX1436.
2177. Samuel Pierre and Gisèle Legault. An Evolutionary Approach for Configuring Economical Packet Switched Computer Networks. Artificial Intelligence in Engineering, 10(2):127–134, 1996, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/0954-1810(95)00022-4. See also [2178].
2178. Samuel Pierre and Gisèle Legault. A Genetic Algorithm for Designing Distributed Computer Network Topologies. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics, 28(2):249–258, May 1998, IEEE Systems, Man, and Cybernetics Society: New York, NY, USA. doi: 10.1109/3477.662766. See also [2177].
2179. Massimo Pigliucci, Courtney J. Murren, and Carl D. Schlichting. Review Article: Phenotypic Plasticity in Evolution. Journal of Experimental Biology (JEB), 209(12):2362–2367, June 2006, The Company of Biologists Ltd. (COB): Cambridge, MA, USA. doi: 10.1242/jeb.02070. Fully available at http://jeb.biologists.org/cgi/reprint/209/12/2362.pdf [accessed 2008-09-11].
2180. Martin Pincus. A Monte Carlo Method for the Approximate Solution of Certain Types of Constrained Optimization Problems. Operations Research, 18(6):1225–1228, November–December 1970, Institute for Operations Research and the Management Sciences (INFORMS): Linthicum, ML, USA and HighWire Press (Stanford University): Cambridge, MA, USA.
2181. János D. Pinter. Global Optimization in Action – Continuous and Lipschitz Optimization: Algorithms, Implementations and Applications, volume 6 in Nonconvex Optimization and Its Applications. Springer Science+Business Media, Inc.: New York, NY, USA. Imprint: Kluwer Academic Publishers: Norwell, MA, USA, 1996. isbn: 0792337573. Google Books ID: G8pF982ckNsC. GOiA won the 2000 INFORMS Computing Society prize (typed in by E. Loew June 2008).
2182. 15th European Conference on Machine Learning (ECML'04), September 20–24, 2004, Pisa, Italy.
2183. Proceedings of the 1st Workshop on Machine Learning (ML'80), 1980, Pittsburgh, PA, USA.
2184. R. L. Plackett. Some Theorems in Least Squares. Biometrika, 37(1–2):149–157, 1950, Oxford University Press, Inc.: New York, NY, USA.
2185. Michael Defoin Platel, Manuel Clergue, and Philippe Collard. Maximum Homologous Crossover for Linear Genetic Programming. In EuroGP'03 [2360], pages 29–48, 2003. Fully available at http://hal.archives-ouvertes.fr/docs/00/15/97/39/PDF/eurogp_03.pdf, http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/platel83.html, and http://www.i3s.unice.fr/~clergue/siteCV/publi/eurogp_03.pdf [accessed 2010-12-04]. CiteSeerx: 10.1.1.105.1838.
2186. Michael Defoin Platel, Sebastien Verel, Manuel Clergue, and Philippe Collard. From Royal Road to Epistatic Road for Variable Length Evolution Algorithm. In EA'03 [1736], pages 3–14, 2003. doi: 10.1007/b96080. Fully available at http://arxiv.org/abs/0707.0548 and http://www.i3s.unice.fr/~verel/publi/ea03-fromRRtoER.pdf [accessed 2007-12-01].
2187. Michael Defoin Platel, Stefan Schliebs, and Nikola Kasabov. Quantum-Inspired Evolutionary Algorithm: A Multimodel EDA. IEEE Transactions on Evolutionary Computation (IEEE-EC), 13(6):1218–1232, December 2009, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/TEVC.2008.2003010. INSPEC Accession Number: 10998923.
2188. Alexander Podlich. Intelligente Planung und Optimierung des Güterverkehrs auf Straße und Schiene mit Evolutionären Algorithmen. Master's thesis, University of Kassel, Fachbereich 16: Elektrotechnik/Informatik, Distributed Systems Group: Kassel, Hesse, Germany, February 2009, Thomas Weise, Christian Gorldt, and Manfred Menze, Advisors, Kurt Geihs and Bernd Scholz-Reiter, Committee members.
2189. Alexander Podlich, Thomas Weise, Manfred Menze, and Christian Gorldt. Intelligente Wechselbrückensteuerung für die Logistik von Morgen. In GSN'09 [2836], pages 1–10, 2009. Fully available at http://eceasst.cs.tu-berlin.de/index.php/eceasst/article/viewFile/205/207, http://journal.ub.tu-berlin.de/index.php/eceasst/article/viewFile/205/207, and http://www.it-weise.de/documents/files/PWMG2009IWFDLVM.pdf [accessed 2010-11-02].
2190. Hartmut Pohlheim. GEATbx Introduction – Evolutionary Algorithms: Overview, Methods and Operators. GEATbx.com: Berlin, Germany, version 3.7 edition, November 2005. Fully available at http://www.geatbx.com/download/GEATbx_Intro_Algorithmen_v38.pdf [accessed 2007-07-03]. Documentation for GEATbx version 3.7 (Genetic and Evolutionary Algorithm Toolbox for use with Matlab).
2191. Riccardo Poli. Parallel Distributed Genetic Programming. Technical Report CSRP-96-15, University of Birmingham, School of Computer Science: Birmingham, UK, September 1996. CiteSeerx: 10.1.1.54.7666. See also [2192].
2192. Riccardo Poli. Parallel Distributed Genetic Programming. In New Ideas in Optimization [638], pages 403–432. McGraw-Hill Ltd.: Maidenhead, UK, England, 1999. Fully available at http://cswww.essex.ac.uk/staff/rpoli/papers/Poli-NIO-1999-PDGP.pdf [accessed 2007-11-04]. CiteSeerx: 10.1.1.15.118. See also [2191].
2193. Riccardo Poli and William B. Langdon. A New Schema Theory for Genetic Programming with One-point Crossover and Point Mutation. In GP'97 [1611], pages 278–285, 1997. CiteSeerx: 10.1.1.16.990. See also [2194].
2194. Riccardo Poli and William B. Langdon. A New Schema Theory for Genetic Programming with One-Point Crossover and Point Mutation. Technical Report CSRP-97-3, University of Birmingham, School of Computer Science: Birmingham, UK, January 1997. Fully available at ftp://ftp.cs.bham.ac.uk/pub/tech-reports/1997/CSRP-97-03.ps.gz and http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/poli_1997_schemaTR.html [accessed 2008-06-17]. Revised August 1997. See also [2193].
2195. Riccardo Poli, William B. Langdon, Marc Schoenauer, Terence Claus Fogarty, and Wolfgang Banzhaf, editors. Late Breaking Papers at EuroGP'98: The First European Workshop on Genetic Programming (EuroGP'98 LBP), April 14–15, 1998, Paris, France. Distributed at the workshop. See also [210].
2196. Riccardo Poli, Peter Nordin, William B. Langdon, and Terence Claus Fogarty, editors. Proceedings of the Second European Workshop on Genetic Programming (EuroGP'99), May 26–27, 1999, Göteborg, Sweden, volume 1598/1999 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-65899-8. OCLC: 634229593. Library of Congress Control Number (LCCN): 99028995. LC Classification: QA76.623 .E94 1999.
2197. Riccardo Poli, Hans-Michael Voigt, Stefano Cagnoni, David Wolfe Corne, George D. Smith, and Terence Claus Fogarty, editors. Proceedings of the First European Workshops on Evolutionary Image Analysis, Signal Processing and Telecommunications: EvoIASP'99, EuroEcTel'99 (EvoWorkshops'99), May 26–27, 1999, Göteborg, Sweden, volume 1596/1999 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/10704703. isbn: 3-540-65837-8. Google Books ID: CYDLxvi0-SYC. OCLC: 41165439, 59385457, 190833621, and 202788825.
2198. Riccardo Poli, Wolfgang Banzhaf, William B. Langdon, Julian Francis Miller, Peter Nordin, and Terence Claus Fogarty, editors. Proceedings of the Third European Conference on Genetic Programming (EuroGP'00), April 15–16, 2000, Edinburgh, Scotland, UK, volume 1802/2000 in Lecture Notes in Computer Science (LNCS). EvoGP, The Genetic Programming Working Group of EvoNET, The Network of Excellence in Evolutionary Computing: France, Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/b75085. isbn: 3-540-67339-3. Google Books ID: KhaR0XZHedEC. OCLC: 43811252, 174380171, 243487736, and 313540688. In conjunction with ICES2000 and EvoWorkshops2000 on evolutionary computing.
2199. Riccardo Poli, William B. Langdon, and Nicholas Freitag McPhee. A Field Guide
to Genetic Programming. Lulu Enterprises UK Ltd: London, UK, March 2008.
isbn: 1-4092-0073-6. Fully available at http://www.gp-field-guide.org.uk/
and http://www.lulu.com/items/volume_63/2167000/2167025/2/print/book.
pdf [accessed 2008-03-27]. Google Books ID: 3PBrqNK5fFQC. OCLC: 225855345. With contri-
butions by John R. Koza.
2200. Riccardo Poli, Nicholas Freitag McPhee, Luca Citi, and Ellery Crane. Memory with Memory in Genetic Programming. Journal of Artificial Evolution and Applications, 2009(570606), 2009, Hindawi Publishing Corporation: Nasr City, Cairo, Egypt, Leonardo Vanneschi, editor. doi: 10.1155/2009/570606. Fully available at http://www.hindawi.com/journals/jaea/2009/570606.html [accessed 2009-09-09].
2201. 3rd Generative Art International Conference (GA'00), December 14–16, 2000, Politecnico di Milano: Milano, Italy. Politecnico di Milano, Generative Design Lab: Milano, Italy. Fully available at http://www.generativeart.com/on/cic/GA2000.htm [accessed 2009-06-18].
2202. Shankar R. Ponnekanti and Armando Fox. SWORD: A Developer Toolkit for Web Service Composition. In WWW'02 [26], 2002. Fully available at http://www2002.org/CDROM/alternate/786/ [accessed 2011-01-11].
2203. H. Vincent Poor. An Introduction to Signal Detection and Estimation, Springer Texts in Electrical Engineering. Birkhäuser Verlag: Basel, Switzerland, 2nd edition, 1994. isbn: 0-387-94173-8 and 3-540-94173-8. Google Books ID: eI1DBIUbzAQC. OCLC: 28891765.
2204. Lucian Popa, Serge Abiteboul, and Phokion G. Kolaitis, editors. Proceedings of the Twenty-First ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems (PODS'02), June 26, 2002, Madison, WI, USA. ACM Press: New York, NY, USA. isbn: 1-58113-507-6.
2205. Proceedings of the 4th International Conference on Metaheuristics and Nature Inspired Computing (META'12), October 27–31, 2012, Port El Kantaoui, Sousse, Tunisia.
2206. Bruce W. Porter and Raymond J. Mooney, editors. Proceedings of the 7th International Workshop on Machine Learning (ML'90), June 21–23, 1990, Austin, TX, USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-141-4.
2207. Agent Modeling, Papers from the 1996 AAAI Workshop, August 4, 1996, Portland, OR, USA. isbn: 1-57735-008-1. GBV-Identification (PPN): 230032508. Technical report WS-96-02 of the American Association for Artificial Intelligence. See also [586].
2208. Vincent William Porto, N. Saravanan, D. Waagen, and Ágoston E. Eiben, editors. Evolutionary Programming VII – Proceedings of the 7th International Conference on Evolutionary Programming (EP'98), May 25–27, 1998, Mission Valley Marriott: San Diego, CA, USA, volume 1447/1998 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/BFb0040753. isbn: 3-540-64891-7. OCLC: 39539448, 150397169, 247514628, and 313127561. In cooperation with IEEE Neural Networks Council.
2209. 16th European Conference on Machine Learning (ECML'05), October 3–7, 2005, Porto, Portugal.
2210. Mitchell A. Potter and Kenneth Alan De Jong. A Cooperative Coevolutionary Approach to Function Optimization. In PPSN III [693], pages 249–257, 1994. doi: 10.1007/3-540-58484-6_269. CiteSeerx: 10.1.1.54.7033.
2211. Mitchell A. Potter and Kenneth Alan De Jong. Cooperative Coevolution: An Architecture for Evolving Coadapted Subcomponents. Evolutionary Computation, 8(1):1–29, Spring 2000, MIT Press: Cambridge, MA, USA. doi: 10.1162/106365600568086. Fully available at http://mitpress.mit.edu/journals/EVCO/Potter.pdf and http://mitpress.mit.edu/journals/pdf/evco_8_1_1_0.pdf [accessed 2010-09-13]. CiteSeerx: 10.1.1.134.2926.
2212. Jean-Yves Potvin. A Review of Bio-inspired Algorithms for Vehicle Routing. In Bio-inspired Algorithms for the Vehicle Routing Problem [2162], pages 1–34. Springer-Verlag: Berlin/Heidelberg, 2009. doi: 10.1007/978-3-540-85152-3_1.
2213. Kata Praditwong, Mark Harman, and Xin Yao. Software Module Clustering as a Multi-Objective Search Problem. IEEE Transactions on Software Engineering, 37(2):264–282, March–April 2011, IEEE Computer Society: Piscataway, NJ, USA. doi: 10.1109/TSE.2010.26. Fully available at http://www.cs.bham.ac.uk/~xin/papers/PraditwongHarmanYao10TSE.pdf [accessed 2011-08-07]. INSPEC Accession Number: 11883163.
2214. Bhanu Prasad, editor. Proceedings of the 1st Indian International Conference on Artificial Intelligence (IICAI-03), December 18–20, 2003, Hyderabad, India. isbn: 0-9727412-0-8.
2215. Bhanu Prasad, editor. Proceedings of the 2nd Indian International Conference on Artificial Intelligence (IICAI-05), December 20–22, 2005, Pune, India. isbn: 0-9727412-1-6.
2216. Bhanu Prasad, editor. Proceedings of the 3rd Indian International Conference on Artificial Intelligence (IICAI-07), December 17–19, 2007, Pune, India.
2217. Bhanu Prasad, Pawan Lingras, and Ashwin Ram, editors. Proceedings of the 4th Indian International Conference on Artificial Intelligence (IICAI-09), December 16–18, 2009, Tumkur, India.
2218. Sushil K. Prasad, Susmi Routray, Reema Khurana, and Sartaj Sahni, editors. Proceedings of the Third International Conference on Information Systems, Technology and Management (ICISTM'09), March 12–13, 2009, Ghaziabad, India, volume 31 in Communications in Computer and Information Science. Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-642-00405-6. isbn: 3-642-00404-0. Library of Congress Control Number (LCCN): 2009921607.
2219. Steven D. Prestwich, S. Armagan Tarim, and Brahim Hnich. Template Design Under Demand Uncertainty by Integer Linear Local Search. International Journal of Production Research, 44(22):4915–4928, November 2006, Taylor and Francis LLC: London, UK. doi: 10.1080/00207540600621060.
2220. Kenneth V. Price, Rainer M. Storn, and Jouni A. Lampinen. Differential Evolution – A Practical Approach to Global Optimization, Natural Computing Series. Birkhäuser Verlag: Basel, Switzerland, 2005. isbn: 3-540-20950-6 and 3-540-31306-0. Google Books ID: S67vX-KqVqUC. OCLC: 63127211, 63381529, and 318292603.
2221. Armand Prieditis and Stuart J. Russell, editors. Proceedings of the Twelfth International Conference on Machine Learning (ICML’95), July 9–12, 1995, Tahoe City, CA, USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-377-8.
2222. Fifth Annual Princeton Conference on Information Science and Systems, March 1971. Prince-
ton University Press: Princeton, NJ, USA.
2223. Les Proll and Barbara Smith. Integer Linear Programming and Constraint Programming Approaches to a Template Design Problem. INFORMS Journal on Computing (JOC), 10(3):265–275, Summer 1998, Institute for Operations Research and the Management Sciences (INFORMS): Linthicum, MD, USA and HighWire Press (Stanford University): Cambridge, MA, USA. doi: 10.1287/ijoc.10.3.265.
2224. Philip E. Protter. Stochastic Integration and Differential Equations, volume 21 in Stochastic Modelling and Applied Probability. Springer-Verlag: Berlin/Heidelberg, 2nd edition, 2004. isbn: 3-540-00313-4. Google Books ID: mJkFuqwr5xgC.
2225. Proceedings of the Fourth International Workshop on Local Search Techniques in Constraint Satisfaction (LSCS’07), September 23, 2007, Providence, RI, USA.
2226. Adam Prügel-Bennett. Modelling GA Dynamics. In Proceedings of the Second Evonet Summer School on Theoretical Aspects of Evolutionary Computing [1480], pages 59–86, 1999. Fully available at http://eprints.ecs.soton.ac.uk/13205 [accessed 2010-08-01].
2227. William F. Punch, Douglas Zongker, and Erik D. Goodman. The Royal Tree Problem, a Benchmark for Single and Multiple Population Genetic Programming. In Advances in Genetic Programming II [106], pages 299–316. MIT Press: Cambridge, MA, USA, 1996. Fully available at http://citeseer.ist.psu.edu/147908.html [accessed 2007-10-14]. CiteSeerˣ: 10.1.1.48.6434.
2228. Robin Charles Purshouse. On the Evolutionary Optimisation of Many Objectives. PhD thesis, University of Sheffield, Department of Automatic Control and Systems Engineering: Sheffield, UK, September 2003, Peter J. Fleming, Advisor. CiteSeerˣ: 10.1.1.10.8816.
2229. Robin Charles Purshouse and Peter J. Fleming. The Multi-Objective Genetic Algorithm Applied to Benchmark Problems – An Analysis. Research Report 796, University of Sheffield, Department of Automatic Control and Systems Engineering: Sheffield, UK, August 2001. Fully available at http://citeseer.ist.psu.edu/old/499210.html and http://www.lania.mx/~ccoello/EMOO/purshouse01.pdf.gz [accessed 2009-08-07].
2230. Robin Charles Purshouse and Peter J. Fleming. Conflict, Harmony, and Independence: Relationships in Evolutionary Multi-Criterion Optimisation. In EMO’03 [957], pages 16–30, 2003. doi: 10.1007/3-540-36970-8_2. Fully available at http://www.lania.mx/~ccoello/EMOO/purshouse03.pdf.gz [accessed 2009-07-18].
2231. Robin Charles Purshouse and Peter J. Fleming. Evolutionary Many-Objective Optimisation: An Exploratory Analysis. In CEC’03 [2395], pages 2066–2073, volume 3, 2003. doi: 10.1109/CEC.2003.1299927. Fully available at http://www.lania.mx/~ccoello/EMOO/purshouse03b.pdf.gz. INSPEC Accession Number: 8494841.
Q
2232. D. Quagliarella, J. Périaux, Carlo Poloni, and Gabriel Winter, editors. Genetic Algorithms and Evolution Strategy in Engineering and Computer Science: Recent Advances and Industrial Applications – Proceedings of the 2nd European Short Course on Genetic Algorithms and Evolution Strategies (EUROGEN’97), November 28–December 4, 1997, Triest, Italy, Recent Advances in Industrial Applications. John Wiley & Sons Ltd.: New York, NY, USA. isbn: 0-471-97710-1. GBV-Identification (PPN): 236675036 and 252253957.
2233. R. J. Quick, Victor J. Rayward-Smith, and George D. Smith. The Royal Road Functions: Description, Intent and Experimentation. In AISB’96 [938], pages 223–235, 1996. doi: 10.1007/BFb0032786.
2234. John Ross Quinlan. C4.5: Programs for Machine Learning, Morgan Kaufmann Series in Machine Learning. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1993. isbn: 1-558-60238-0. Google Books ID: AABrrnZWgPEC. OCLC: 26547590, 249341510, 257317819, and 473499718. Library of Congress Control Number (LCCN): 92032653. GBV-Identification (PPN): 376624361.
2235. Alejandro Quintero and Samuel Pierre. Assigning Cells to Switches in Cellular Mobile Networks: A Comparative Study. Computer Communications – The International Journal for the Computer and Telecommunications Industry, 26(9):950–960, June 2, 2003, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/S0140-3664(02)00224-4. See also [2237].
2236. Alejandro Quintero and Samuel Pierre. Evolutionary Approach to Optimize the Assignment of Cells to Switches in Personal Communication Networks. Computer Communications – The International Journal for the Computer and Telecommunications Industry, 26(9):927–938, June 2, 2003, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/S0140-3664(02)00238-4. See also [2235, 2237].
2237. Alejandro Quintero and Samuel Pierre. Sequential and Multi-Population Memetic Algorithms for Assigning Cells to Switches in Mobile Networks. Computer Networks, 43(3):247–261, October 22, 2003, Elsevier Science Publishers B.V.: Essex, UK. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands, Ian F. Akyildiz, editor. doi: 10.1016/S1389-1286(03)00270-6. See also [2235, 2236].
2238. Mohammad Adil Qureshi. Evolving Agents. In GP’96 [1609], pages 369–374, 1996. Fully available at http://www.cs.ucl.ac.uk/staff/W.Langdon/ftp/papers/AQ.gp96.ps.gz [accessed 2007-09-17]. See also [2239, 2240].
2239. Mohammad Adil Qureshi. Evolving Agents. Research Note RN/96/4, University College London (UCL): London, UK, January 1996. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/qureshi_1996_eaRN.html and http://www.cs.ucl.ac.uk/staff/W.Langdon/ftp/papers/AQ.gp96.ps.gz [accessed 2008-09-02]. See also [2238].
2240. Mohammad Adil Qureshi. The Evolution of Agents. PhD thesis, University College London (UCL), Department of Computer Science: London, UK, July 2001, Jon Crowcroft, Supervisor. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/qureshi_thesis.html [accessed 2009-09-02]. CiteSeerˣ: 10.1.1.4.3698. See also [2238, 2239].
R
2241. Nicholas J. Radcliffe. Equivalence Class Analysis of Genetic Algorithms. Complex Systems, 5(2):183–205, 1991, Complex Systems Publications, Inc.: Champaign, IL, USA. Fully available at ftp://ftp.epcc.ed.ac.uk/pub/tr/90/tr9003.ps.Z [accessed 2010-07-26]. CiteSeerˣ: 10.1.1.7.2988.
2242. Nicholas J. Radcliffe. The Algebra of Genetic Algorithms. Technical Report TR92-11, University of Edinburgh, Edinburgh Parallel Computing Centre (EPCC): Edinburgh, Scotland, UK, 1992. Fully available at ftp://ftp.epcc.ed.ac.uk/pub/tr/92/tr9211.ps.Z [accessed 2010-07-26]. See also [2244].
2243. Nicholas J. Radcliffe. Non-Linear Genetic Representations. In PPSN II [1827], pages 259–268, 1992. Fully available at http://users.breathe.com/njr/papers/ppsn92.pdf [accessed 2010-07-26]. CiteSeerˣ: 10.1.1.7.1530.
2244. Nicholas J. Radcliffe. The Algebra of Genetic Algorithms. Annals of Mathematics and Artificial Intelligence, 10(4):339–384, December 1994, Springer Netherlands: Dordrecht, Netherlands. doi: 10.1007/BF01531276. Fully available at http://stochasticsolutions.com/pdf/amai94.pdf [accessed 2010-07-27]. CiteSeerˣ: 10.1.1.21.9675, 10.1.1.51.2195, and 10.1.1.8.2450. See also [2242].
2245. Nicholas J. Radcliffe and Patrick David Surry. Formal Memetic Algorithms. In AISB’94 [936], pages 1–16, 1994. doi: 10.1007/3-540-58483-8_1. CiteSeerˣ: 10.1.1.38.9885.
2246. Nicholas J. Radcliffe and Patrick David Surry. Fitness Variance of Formae and Performance Prediction. In FOGA 3 [2932], pages 51–72, 1994. Fully available at http://users.breathe.com/njr/papers/foga94.pdf [accessed 2009-07-10]. CiteSeerˣ: 10.1.1.40.4508 and 10.1.1.57.635.
2247. Nicholas J. Radcliffe and Patrick David Surry. Fundamental Limitations on Search Algorithms: Evolutionary Computing in Perspective. In Computer Science Today – Recent Trends and Developments [2777], pages 275–291. Springer-Verlag GmbH: Berlin, Germany, 1995. doi: 10.1007/BFb0015249. CiteSeerˣ: 10.1.1.49.2376.
2248. Sabine Radke and Deutsches Institut für Wirtschaftsforschung (DIW): Berlin, Germany. Verkehr in Zahlen 2006/2007. Deutscher Verkehrs-Verlag GmbH: Hamburg, Germany and Bundesministerium für Verkehr, Bau- und Stadtentwicklung: Berlin, Germany, 2006. isbn: 3-87154-349-7. OCLC: 255962850. GBV-Identification (PPN): 526782250.
2249. R. Raghuram. Computer Simulation of Electronic Circuits. New Age International (P) Ltd., Publishers: New Delhi, India, 1989–August 2007. isbn: 8122401112. Google Books ID: kVs2liYNir0C.
2250. Yahya Rahmat-Samii and Eric Michielssen, editors. Electromagnetic Optimization by Genetic
Algorithms, Wiley Series in Microwave and Optical Engineering. John Wiley & Sons Ltd.:
New York, NY, USA, June 1999. isbn: 0-471-29545-0.
2251. Günther R. Raidl. A Hybrid GP Approach for Numerically Robust Symbolic Regression. In GP’98 [1612], pages 323–328, 1998. Fully available at http://www.ads.tuwien.ac.at/publications/bib/pdf/raidl-98c.pdf and http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/raidl_1998_hGPnrsr.html [accessed 2009-09-09]. CiteSeerˣ: 10.1.1.20.6555.
2252. Günther R. Raidl and Jens Gottlieb, editors. Proceedings of the 5th European Conference on Evolutionary Computation in Combinatorial Optimization (EvoCOP’05), March 30–April 1, 2005, Lausanne, Switzerland, volume 3448/2005 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-25337-8.
2253. Günther R. Raidl, Jean-Arcady Meyer, Martin Middendorf, Stefano Cagnoni, Juan Jesús Romero Cardalda, David Wolfe Corne, Jens Gottlieb, Agnès Guillot, Emma Hart, Colin G. Johnson, and Elena Marchiori, editors. Applications of Evolutionary Computing, Proceedings of EvoWorkshop 2003: EvoBIO, EvoCOP, EvoIASP, EvoMUSART, EvoROB, and EvoSTIM (EvoWorkshop’03), April 14–16, 2003, University of Essex: Wivenhoe Park, Colchester, Essex, UK, volume 2611/2003 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-00976-0.
2254. Günther R. Raidl, Stefano Cagnoni, Jürgen Branke, David Wolfe Corne, Rolf Drechsler, Yaochu Jin, Colin G. Johnson, Penousal Machado, Elena Marchiori, Franz Rothlauf, George D. Smith, and Giovanni Squillero, editors. Applications of Evolutionary Computing – Proceedings of EvoWorkshops 2004: EvoBIO, EvoCOMNET, EvoHOT, EvoIASP, EvoMUSART, and EvoSTOC (EvoWorkshops’04), April 5–7, 2004, Coimbra, Portugal, volume 3005/2004 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/b96500. isbn: 3-540-21378-3. Google Books ID: 1K9iHdLz0BQC. OCLC: 54880715 and 249136935. Library of Congress Control Number (LCCN): 2004102415.
2255. Tapani Raiko, Pentti Haikonen, and Jaakko Väyrynen, editors. AI and Machine Consciousness – Proceedings of the 13th Finnish Artificial Intelligence Conference (STeP’08), August 20–22, 2008, Helsinki University of Technology: Espoo, Finland, volume 24 in Publications of the Finnish Artificial Intelligence Society. Nokia Research Center: Helsinki, Finland. Fully available at http://www.stes.fi/step2008/proceedings/step2008proceedings.pdf [accessed 2009-10-25].
2256. Albert Rainer. Web Service Composition using Answer Set Programming. In PuK’05 [2409], 2005. Fully available at http://www-is.informatik.uni-oldenburg.de/~sauer/puk2005/paper/4_puk2005_rainer.pdf [accessed 2011-01-11]. CiteSeerˣ: 10.1.1.136.1048.
2257. Albert Rainer and Jürgen Dorn. MOVE: A Generic Service Composition Framework for Service Oriented Architectures. In CEC’09 [1245], pages 503–506, 2009. doi: 10.1109/CEC.2009.56. INSPEC Accession Number: 10839130. See also [822, 823, 1571].
2258. Hassan Rajaei, Gabriel Wainer, and Michael Chinni, editors. Proceedings of the 2008 Spring Simulation Multiconference (SpringSim’08), April 14–17, 2008, Crowne Plaza Ottawa Hotel: Ottawa, ON, Canada, volume 40 in Simulation Series. Society for Computer Simulation, International: San Diego, CA, USA. isbn: 1-56555-319-5. Google Books ID: Y5juPgAACAAJ. OCLC: 316331262.
2259. Anca L. Ralescu, editor. Fuzzy Logic in Artificial Intelligence, Workshop Proceedings (IJCAI’93 Fuzzy WS), August 28, 1993, Chambéry, France, volume 847 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 0-387-58409-9 and 3-540-58409-9. OCLC: 31010125, 246325420, 311898734, 422861463, 473142458, 474000977, 497538589, and 502373916. GBV-Identification (PPN): 165340169, 272032549, and 595124968. See also [182, 183, 927].
2260. Anca L. Ralescu and James G. Shanahan, editors. Fuzzy Logic in Artificial Intelligence, Selected and Invited Workshop Papers (IJCAI’97 Fuzzy WS), August 23–24, 1997, Nagoya, Japan, volume 1566 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-66374-6. GBV-Identification (PPN): 300375778 and 534418899. See also [1946, 1947].
2261. Theodore K. Ralphs. Vehicle Routing Data Sets. COmputational INfrastructure for Operations Research (COIN-OR Foundation, Inc.), October 3, 2003. Fully available at http://www.coin-or.org/SYMPHONY/branchandcut/VRP/data/ [accessed 2008-10-27].
2262. Theodore K. Ralphs, L. Kopman, William R. Pulleyblank, and L. E. Trotter. On the Capacitated Vehicle Routing Problem. Mathematical Programming, 94(2–3):343–359, January 2003, Springer-Verlag: Berlin/Heidelberg. doi: 10.1007/s10107-002-0323-0.
2263. Proceedings of the 2005 International Conference on Machine Learning and Cybernetics (ICMLC’05), August 18–21, 2005, Ramada Pearl Hotel: Guangzhou, Guangdong, China. isbn: 0-7803-9091-1.
2264. Krishna Raman, Yue Zhang, Mark Panahi, and Kwei-Jay Lin. Customizable Business Process Composition with Query Optimization. In CEC/EEE’08 [1346], pages 363–366, 2008. doi: 10.1109/CECandEEE.2008.152. INSPEC Accession Number: 10475112. See also [204, 3067, 3073, 3074].
2265. Valliappan Ramasamy. Syntactical & Semantical Web Services Discovery And Composition. In CEC/EEE’06 [2967], pages 439–441, 2006. doi: 10.1109/CEC-EEE.2006.86. INSPEC Accession Number: 9189343. See also [318].
2266. Federico Ramírez and Olac Fuentes. Spectral Analysis Using Evolution Strategies. In ASC’02 [1714], pages 357–391, 2002. Fully available at http://www.cs.utep.edu/ofuentes/ASC02_RF.pdf [accessed 2009-09-18].
2267. Soraya Rana. Examining the Role of Local Optima and Schema Processing in Genetic Search. PhD thesis, Colorado State University, Department of Computer Science, GENITOR Research Group in Genetic Algorithms and Evolutionary Computation: Fort Collins, CO, USA, July 1, 1999. Fully available at http://www.cs.colostate.edu/~genitor/dissertations/rana.ps.gz [accessed 2010-07-23]. CiteSeerˣ: 10.1.1.18.2879.
2268. William Rand, Sevan G. Ficici, and Rick L. Riolo, editors. Evolutionary Computation and Multi-Agent Systems and Simulation Workshop (ECoMASS’08), July 12, 2008, Renaissance Atlanta Hotel Downtown: Atlanta, GA, USA. ACM Press: New York, NY, USA. Part of [1519].
2269. Alain T. Rappaport and Reid G. Smith, editors. Proceedings of the Second Conference on Innovative Applications of Artificial Intelligence (IAAI’90), May 1–3, 1990, Washington, DC, USA. AAAI Press: Menlo Park, CA, USA. isbn: 0-262-68068-8. Partly available at http://www.aaai.org/Conferences/IAAI/iaai90.php [accessed 2007-09-06]. Published in 1991.
2270. Khaled Rasheed and Brian D. Davison. Effect of Global Parallelism on the Behavior of a Steady State Genetic Algorithm for Design Optimization. In CEC’99 [110], pages 534–541, 1999. Fully available at http://www.cse.lehigh.edu/~brian/pubs/1999/cec99/pgado.cec99.pdf [accessed 2010-08-01]. CiteSeerˣ: 10.1.1.114.2664. See also [698].
2271. Reza Rastegar and Arash Hariri. A Step Forward in Studying the Compact Genetic Algorithm. Evolutionary Computation, 14(3):277–289, Fall 2006, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.2006.14.3.277. arXiv ID: 0901.0598v1.
2272. Rajeev Rastogi and Kyuseok Shim. PUBLIC: A Decision Tree Classifier that Integrates Building and Pruning. In VLDB’98 [1151], pages 404–415, 1998. Fully available at http://eprints.kfupm.edu.sa/60034/ and http://www.vldb.org/conf/1998/p404.pdf [accessed 2009-05-11].
2273. Tapabrata Ray, Kang Tai, and Kin Chye Seow. Multiobjective Design Optimization by an Evolutionary Algorithm. Engineering Optimization, 33(4):399–424, 2001, Taylor and Francis LLC: London, UK and Informa plc: London, UK. doi: 10.1080/03052150108940926.
2274. Victor J. Rayward-Smith, editor. Applications of Modern Heuristic Methods – Proceedings of the UNICOM Seminar on Adaptive Computing and Information Processing, January 25–27, 1994. Alfred Waller Ltd.: Henley-on-Thames, Oxfordshire, UK, Nelson Thornes Ltd.: Cheltenham, Gloucestershire, UK, and Unicom Seminars Ltd.: Uxbridge, Middlesex, UK. isbn: 1872474284. Google Books ID: BjOTAAAACAAJ and NmBRAAAAMAAJ.
2275. Victor J. Rayward-Smith. A Unified Approach to Tabu Search, Simulated Annealing and Genetic Algorithms. In Applications of Modern Heuristic Methods – Proceedings of the UNICOM Seminar on Adaptive Computing and Information Processing [2274], pages 55–78, volume 1, 1994.
2276. Victor J. Rayward-Smith, Ibrahim H. Osman, Colin R. Reeves, and George D. Smith, editors.
Modern Heuristic Search Methods. John Wiley & Sons Ltd.: New York, NY, USA, 1996.
isbn: 0470866500 and 0471962805. Google Books ID: DHFRAAAAMAAJ, Sd4LAAAACAAJ,
and btnFAAAACAAJ.
2277. Ingo Rechenberg. Cybernetic Solution Path of an Experimental Problem. Royal Aircraft
Establishment: Farnborough, Hampshire, UK, August 1965. Library Translation 1122.
Reprinted in [939].
2278. Ingo Rechenberg. Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution, volume 15 in Problemata. PhD thesis, Technische Universität Berlin: Berlin, Germany, Friedrich Frommann Verlag: Stuttgart, Germany, 1971–1973. isbn: 3-7728-0373-3 and 3-7728-0374-1. Google Books ID: QcNNGQAACAAJ. OCLC: 9020616, 462818122, and 500569005. Library of Congress Control Number (LCCN): 74320689. GBV-Identification (PPN): 024852090. LC Classification: QH371 .R33.
2279. Ingo Rechenberg. Evolutionsstrategie ’94, volume 1 in Werkstatt Bionik und Evolutionstechnik. Frommann-Holzboog Verlag: Bad Cannstadt, Stuttgart, Baden-Württemberg, Germany, 1994. isbn: 3-7728-1642-8. Google Books ID: savAAAACAAJ. OCLC: 75424354. GBV-Identification (PPN): 153251220. Libri-Number: 8263485. See also [2278].
2280. Seventh International Workshop on Learning Classifier Systems (IWLCS’04), June 26, 2004, Red Lion Hotel: Seattle, WA, USA. Held during GECCO 2004. See also [751, 752, 1589].
2281. R. Reddy, editor. Proceedings of the 5th International Joint Conference on Artificial Intelligence (IJCAI’77-I), August 22–25, 1977, Massachusetts Institute of Technology (MIT): Cambridge, MA, USA, volume 1. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-77-VOL1/CONTENT/content.htm [accessed 2008-04-01]. See also [2282].
2282. R. Reddy, editor. Proceedings of the 5th International Joint Conference on Artificial Intelligence (IJCAI’77-II), August 22–25, 1977, Massachusetts Institute of Technology (MIT): Cambridge, MA, USA, volume 2. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-77-VOL2/CONTENT/content.htm [accessed 2008-04-01]. See also [2281].
2283. Colin R. Reeves, editor. Modern Heuristic Techniques for Combinatorial Problems, Advanced Topics in Computer Science Series. Blackwell Publishing Ltd: Chichester, West Sussex, UK, April 1993. isbn: 0077092392, 0-470-22079-1, and 0-632-03238-3. Google Books ID: G6NfQgAACAAJ. OCLC: 26805931. Library of Congress Control Number (LCCN): 92033335. GBV-Identification (PPN): 121998312 and 17228810X. LC Classification: QA402.5 .M62 1993. Outgrowth of a conference held at Bangor, North Wales in 1990.
2284. Colin R. Reeves and Christine C. Wright. An Experimental Design Perspective on Genetic Algorithms. In FOGA 3 [2932], pages 7–22, 1994. Fully available at http://www.cs.colostate.edu/~whitley/CS640/walshdesign.ps.gz [accessed 2010-09-14]. CiteSeerˣ: 10.1.1.54.1692.
2285. Colin R. Reeves and Christine C. Wright. Epistasis in Genetic Algorithms: An Experimental Design Perspective. In ICGA’95 [883], pages 217–224, 1995. CiteSeerˣ: 10.1.1.39.29.
2286. Dirk Reichelt, Peter Gmilkowsky, and Sebastian Linser. A Study of an Iterated Local Search on the Reliable Communication Networks Design Problem. In EvoWorkshops’05 [2340], pages 156–165, 2005.
2287. Christian M. Reidys and Peter F. Stadler. Neutrality in Fitness Landscapes. Journal of Applied Mathematics and Computation, 117(2–3):321–350, January 25, 2001, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/S0096-3003(99)00166-6. CiteSeerˣ: 10.1.1.46.962.
2288. Gerhard Reinelt. TSPLIB. Office Research Group Discrete Optimization, Ruprecht Karls University of Heidelberg: Heidelberg, Germany, 1995. Fully available at http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/ [accessed 2010-11-23]. See also [2289, 2290].
2289. Gerhard Reinelt. TSPLIB – A Traveling Salesman Problem Library. ORSA Journal on Computing, 3(4):376–384, Fall 1991, Operations Research Society of America (ORSA). doi: 10.1287/ijoc.3.4.376. See also [2288, 2290].
2290. Gerhard Reinelt. TSPLIB 95. Technical Report, Universität Heidelberg, Institut für Mathematik: Heidelberg, Germany, 1995. Fully available at http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/DOC.PS [accessed 2011-09-18]. See also [2288, 2289].
2291. Mark J. Rentmeesters, Wei K. Tsai, and Kwei-Jay Lin. A Theory of Lexicographic Multi-Criteria Optimization. In ICECCS’96 [1327], pages 76–96, 1996. doi: 10.1109/ICECCS.1996.558386.
2292. Alfréd Rényi. Probability Theory. Dover Publications: Mineola, NY, USA, May 2007. isbn: 0486458679. Google Books ID: Z9ueAQAACAAJ and yx8IAQAAIAAJ. OCLC: 77708371.
2293. Proceedings of the Fourth Joint Conference on Information Science (JCIS 1998), Section: The Second International Workshop on Frontiers in Evolutionary Algorithms (FEA’98), October 24–28, 1998, Research Triangle Park, NC, USA, volume 2. Workshop held in conjunction with Fourth Joint Conference on Information Sciences.
2294. Mauricio G.C. Resende, Jorge Pinho de Sousa, and Ana Viana, editors. 4th Metaheuristics International Conference – Metaheuristics: Computer Decision-Making (MIC’01), July 16–20, 2001, Porto, Portugal, volume 86 in Applied Optimization. Springer-Verlag GmbH: Berlin, Germany. isbn: 1-4020-7653-3. Partly available at http://paginas.fe.up.pt/~gauti/MIC2001/ [accessed 2007-09-12]. OCLC: 53123574 and 639072881. Library of Congress Control Number (LCCN): 2003061990. Published on November 30, 2003.
2295. Bernd Reusch, editor. International Conference on Computational Intelligence: Theory and Applications – 6th Fuzzy Days, May 25–28, 1996, Dortmund, North Rhine-Westphalia, Germany, volume 1625/1999 in Lecture Notes in Computer Science (LNCS). International Biometric Society (IBS): Washington, DC, USA. doi: 10.1007/3-540-48774-3. isbn: 3-540-66050-X. Google Books ID: KQm0joXBr8IC. OCLC: 41380465 and 209204083.
2296. Proceedings of the IJCAI-99 Workshop on Intelligent Information Integration (IJCAI’99 WS), July 31, 1999, City Conference Center: Stockholm, Sweden, volume 23 in CEUR Workshop Proceedings (CEUR-WS.org). Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen, Sun SITE Central Europe: Aachen, North Rhine-Westphalia, Germany. Fully available at http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-23/ [accessed 2008-04-01]. Held in conjunction with the Sixteenth International Joint Conference on Artificial Intelligence. See also [730, 731].
2297. Bernardete Ribeiro, Rudolf F. Albrecht, Andrej Dobnikar, David W. Pearson, and Nigel C. Steele, editors. Proceedings of the 7th International Conference on Adaptive and Natural Computing Algorithms (ICANNGA’05), March 21–23, 2005, University of Coimbra: Coimbra, Portugal. Springer Verlag GmbH: Vienna, Austria. Partly available at http://icannga05.dei.uc.pt/ [accessed 2007-08-31].
2298. Celso C. Ribeiro and Pierre Hansen, editors. 3rd Metaheuristics International Conference – Essays and Surveys in Metaheuristics (MIC’99), July 19–23, 1999, Angra dos Reis, Rio de Janeiro, Brazil, volume 15 in Operations Research/Computer Science Interfaces Series. Springer-Verlag GmbH: Berlin, Germany. isbn: 0-7923-7520-3. Published in 2001.
2299. James P. Rice. Mathematical Statistics and Data Analysis, Discussion Paper Series In Economics And Econometrics. Academic Internet Publishers, Inc. (AIPI): www.cram101.com, University of Southampton, 1995–April 2006. isbn: 0495110892, 0534209343, and 0534399428. Google Books ID: BXQOAAAACAAJ, bIkQAQAAIAAJ, and bbCPAAAACAAJ. OCLC: 71364578 and 315216152.
2300. Judith Rich Harris. Parental Selection: A Third Selection Process in the Evolution of Human Hairlessness and Skin Color. Medical Hypotheses, 66(6):1053–1059, 2006, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.mehy.2006.01.027. PubMed ID: 16527428.
2301. Sebastian Richly, Georg Püschel, Dirk Habich, and Sebastian Götz. MapReduce for Scalable Neural Nets Training. In SERVICES Cup’10 [2457], pages 99–106, 2010. doi: 10.1109/SERVICES.2010.36. INSPEC Accession Number: 11535225.
2302. Hendrik Richter. Behavior of Evolutionary Algorithms in Chaotically Changing Fitness Landscapes. In PPSN VIII [3028], pages 111–120, 2008. doi: 10.1007/b100601.
2303. Mark Ridley. Evolution. Blackwell Publishing Ltd: Chichester, West Sussex, UK, 8th edition, 2003. isbn: 0192892878, 0-632-03481-5, 0-86542-226-5, and 1-4051-0345-0. Google Books ID: HCh7n6zuWeUC, WU wAAAAMAAJ, b-HGB9PqXCUC, xPAPgcZflzYC, and ysddSQAACAAJ. GBV-Identification (PPN): 076515354 and 126473129. Libri-Number: 3190331.
2304. John Riedl and Randall W. Hill Jr., editors. Proceedings of the Fifteenth Conference on Innovative Applications of Artificial Intelligence (IAAI’03), August 12–14, 2003, Acapulco, Mexico. isbn: 1-57735-188-6. Partly available at http://www.aaai.org/Conferences/IAAI/iaai03.php [accessed 2007-09-06].
2305. Rupert J. Riedl. A Systems-Analytical Approach to Macro-Evolutionary Phenomena. Quarterly Review of Biology (QRB), Chicago Journals, 52(4):351–370, December 1977, Stony Brook University: Stony Brook, NY, USA. doi: 10.1086/410123.
2306. Rick L. Riolo and Bill Worzel, editors. Genetic Programming Theory and Practice, Proceedings of the Genetic Programming Theory Practice 2003 Workshop (GPTP’03), May 15–17, 2003, University of Michigan, Center for the Study of Complex Systems (CSCS): Ann Arbor, MI, USA. Kluwer Publishers: Boston, MA, USA and Springer New York: New York, NY, USA. isbn: 1402075812. Partly available at http://www.cscs.umich.edu/gptp-workshops/gptp2003/ [accessed 2007-09-28].
2307. Rick L. Riolo, Terence Soule, and Bill Worzel, editors. Genetic Programming Theory and Practice IV, Proceedings of the Genetic Programming Theory Practice 2006 Workshop (GPTP’06), May 11–13, 2006, University of Michigan, Center for the Study of Complex Systems (CSCS): Ann Arbor, MI, USA. Springer-Verlag GmbH: Berlin, Germany. Partly available at http://www.cscs.umich.edu/gptp-workshops/gptp2006/ [accessed 2007-09-28].
2308. Rick L. Riolo, Terence Soule, and Bill Worzel, editors. Genetic Programming Theory and Practice VI, Proceedings of the Sixth Workshop on Genetic Programming (GPTP’08), May 15–17, 2008, University of Michigan, Center for the Study of Complex Systems (CSCS): Ann Arbor, MI, USA, Genetic and Evolutionary Computation. Springer US: Boston, MA, USA and Kluwer Academic Publishers: Norwell, MA, USA. doi: 10.1007/978-0-387-87623-8. Library of Congress Control Number (LCCN): 2008936134.
2309. Rick L. Riolo, Una-May O’Reilly, and Trent McConaghy, editors. Genetic Programming Theory and Practice VII, Proceedings of the Seventh Workshop on Genetic Programming (GPTP’09), May 14–16, 2009, University of Michigan, Center for the Study of Complex Systems (CSCS): Ann Arbor, MI, USA, Genetic and Evolutionary Computation. Springer US: Boston, MA, USA and Kluwer Academic Publishers: Norwell, MA, USA. doi: 10.1007/978-1-4419-1626-6. Library of Congress Control Number (LCCN): 2009939939.
2310. Rick L. Riolo, Trent McConaghy, and Ekaterina Vladislavleva, editors. Genetic Programming Theory and Practice VIII, Proceedings of the Eighth Workshop on Genetic Programming (GPTP’10), May 20–22, 2010, University of Michigan, Center for the Study of Complex Systems (CSCS): Ann Arbor, MI, USA, volume 8 in Genetic and Evolutionary Computation. Springer US: Boston, MA, USA and Kluwer Academic Publishers: Norwell, MA, USA. doi: 10.1007/978-1-4419-7747-2. Library of Congress Control Number (LCCN): 20100938320.
2311. José L. Risco-Martín, Francisco Fernández de Vega, and Juan Lanchares, editors. Third Workshop on Parallel Architectures and Bioinspired Algorithms (WPBA’10), September 11–12, 2010, Vienna, Austria.
2312. Irina Rish. An Empirical Study of the Naive Bayes Classifier. In IJCAI’01 [2012], pages 41–46, 2001. Fully available at http://www.cc.gatech.edu/~isbell/classes/reading/papers/Rish.pdf [accessed 2007-08-11].
2313. Herbert Robbins and Sutton Monro. A Stochastic Approximation Method. The Annals of Mathematical Statistics, 22(3):400–407, September 1951, Institute of Mathematical Statistics: Beachwood, OH, USA. doi: 10.1214/aoms/1177729586. Fully available at http://projecteuclid.org/euclid.aoms/1177729586 [accessed 2008-10-11]. Zentralblatt MATH identifier: 0054.05901. Mathematical Reviews number (MathSciNet): MR42668.
2314. Luís Mateus Rocha, Larry S. Yaeger, Mark A. Bedau, Dario Floreano, Robert L. Goldstone, and Alessandro Vespignani, editors. ALIFE X: Proceedings of the Tenth International Conference on the Simulation and Synthesis of Living Systems (ALIFE’06), June 3–7, 2006, Bloomington, IN, USA, Bradford Books. MIT Press: Cambridge, MA, USA. isbn: 0-262-68162-5. Library of Congress Control Number (LCCN): 2006922394. GBV-Identification (PPN): 516051768. LC Classification: QH324.2 .I556 2006.
2315. Miguel Rocha, Pedro Sousa, Paulo Cortez, and Miguel Rio. Evolutionary Computation for Quality of Service Internet Routing Optimization. In EvoWorkshops’07 [1050], pages 71–80, 2007. doi: 10.1007/978-3-540-71805-5_8.
2316. Alex Rogers and Adam Prügel-Bennett. Modelling the Dynamics of a Steady-State Genetic Algorithm. In FOGA’98 [208], pages 57–68, 1998. Fully available at http://eprints.ecs.soton.ac.uk/451/ [accessed 2010-08-01]. CiteSeerˣ: 10.1.1.28.1226.
2317. Daniel S. Rokhsar, Philip Warren Anderson, and Daniel L. Steingart. Self-Organization in Prebiological Systems: Simulations of a Model for the Origin of Genetic Information. Journal of Molecular Evolution, 23(2):119–126, June 1986, Springer New York: New York, NY, USA, Martin Kreitman, editor. doi: 10.1007/BF02099906.
2318. Learning and Intelligent OptimizatioN (LION 5), January 17–21, 2011, Rome, Italy.
2319. Cristóbal Romero and Sebastian Ventura. Educational Data Mining: A Survey From 1995 to 2005. Expert Systems with Applications – An International Journal, 33(1):135–146, July 2007, Elsevier Science Publishers B.V.: Essex, UK. Imprint: Pergamon Press: Oxford, UK. doi: 10.1016/j.eswa.2006.04.005.
2320. Juan Romero and Penousal Machado, editors. The Art of Artificial Evolution: A Handbook on Evolutionary Art and Music, Natural Computing Series. Springer New York: New York, NY, USA, November 2007. doi: 10.1007/978-3-540-72877-1. Library of Congress Control Number (LCCN): 2007937632.
2321. Simon Ronald. Preventing Diversity Loss in a Routing Genetic Algorithm with Hash Tagging. Complexity International, 2(2), April 1995, Monash University, Faculty of Information Technology: Victoria, Australia. Fully available at http://www.complexity.org.au/ci/vol02/sr_hash/ [accessed 2009-07-09].
2322. Simon Ronald. Genetic Algorithms and Permutation-Encoded Problems. Diversity Preser-
vation and a Study of Multimodality. PhD thesis, University of South Australia (UniSA),
Department of Computer and Information Science: Mawson Lakes, SA, Australia, 1996.
2323. Simon Ronald. Robust Encodings in Genetic Algorithms: A Survey of Encoding Issues. In
CEC97 [173], pages 43–48, 1997. doi: 10.1109/ICEC.1997.592265.
2324. Simon Ronald, John Asenstorfer, and Millist Vincent. Representational Redundancy
in Evolutionary Algorithms. In CEC95 [1361], pages 631–637, volume 2, 1995.
doi: 10.1109/ICEC.1995.487457.
2325. Agostinho Rosa, editor. Proceedings of the International Conference on Evolutionary Com-
putation (ICEC09), 2009, Madeira, Portugal. Part of [1809].
2326. Justinian P. Rosca. Proceedings of the Workshop on Genetic Programming: From Theory to
Real-World Applications. Technical Report 95.2, University of Rochester, National Resource
Laboratory for the Study of Brain and Behavior: Rochester, NY, USA, Morgan Kaufmann:
San Mateo, CA, USA, July 9, 1995, Tahoe City, CA, USA. Held in conjunction with the
twelfth International Conference on Machine Learning.
2327. Justinian P. Rosca. An Analysis of Hierarchical Genetic Programming. Technical Re-
port TR566, University of Rochester, Computer Science Department: Rochester, NY, USA,
March 1995. Fully available at ftp://ftp.cs.rochester.edu/pub/u/rosca/gp/95.tr566.ps.gz [accessed 2009-07-10]. CiteSeerx: 10.1.1.26.2552.
2328. Justinian P. Rosca. Generality versus Size in Genetic Programming. In GP96 [1609], pages 381–387, 1996. Fully available at ftp://ftp.cs.rochester.edu/pub/u/rosca/gp/96.gp.ps.gz and http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/rosca_1996_gVsGP.html [accessed 2011-11-23]. CiteSeerx: 10.1.1.54.2363.
2329. Justinian P. Rosca and Dana H. Ballard. Causality in Genetic Programming. In ICGA95 [883], pages 256–263, 1995. CiteSeerx: 10.1.1.26.2503.
2330. Florian Rosenberg, Christoph Nagl, and Schahram Dustdar. Applying Distributed Business Rules – The VIDRE Approach. In SCContest06 [554], pages 471–478, 2006. doi: 10.1109/SCC.2006.22. INSPEC Accession Number: 9165411.
2331. Richard S. Rosenberg. Simulation of Genetic Populations with Biochemical Properties. PhD thesis, University of Michigan: Ann Arbor, MI, USA, June 1967. Fully available at http://deepblue.lib.umich.edu/handle/2027.42/7321 [accessed 2010-08-03]. University Microfilms No.: UMR3370. ID: bad1552.0001.001. ORA Project 08333.
2332. Jay S. Rosenblatt, Robert A. Hinde, Colin Beer, and Marie-Claire Busnel, editors. Advances
in the Study of Behavior, volume 12 in Advances in the Study of Behavior. Academic Press
Professional, Inc.: San Diego, CA, USA. isbn: 0-12-004512-5. OCLC: 646758333. GBV-Identification (PPN): 011437138.
2333. Howard H. Rosenbrock. An Automatic Method for Finding the Greatest or Least Value
of a Function. The Computer Journal, Oxford Journals, 3(3):175–184, March 1960, British
Computer Society: Swindon, UK. doi: 10.1093/comjnl/3.3.175.
2334. Paul L. Rosin and Freddy Fierens. Improving Neural Network Generalisation. In IGARSS95 [2609], pages 1255–1257, volume 2, 1995. doi: 10.1109/IGARSS.1995.521718. Fully available at http://users.cs.cf.ac.uk/Paul.Rosin/resources/papers/overfitting.pdf [accessed 2009-07-12]. CiteSeerx: 10.1.1.48.3136. INSPEC Accession Number: 5112834.
2335. Claudio Rossi, Elena Marchiori, and Joost N. Kok. An Adaptive Evolutionary Algorithm for the Satisfiability Problem. In SAC00 [23], pages 463–469, volume 1, 2000. doi: 10.1145/335603.335912. CiteSeerx: 10.1.1.37.4771. See also [1117].
2336. Gerald P. Roston. A Genetic Methodology for Configuration Design, CMU-RI-TR-94-42. PhD thesis, Carnegie Mellon University (CMU), Department of Mechanical Engineering: Pittsburgh, PA, USA, December 1994, Robert Sturges, Jr. and William Red Whittaker, Advisors. Fully available at http://www.ri.cmu.edu/pubs/pub_3335.html [accessed 2010-08-19]. CiteSeerx: 10.1.1.152.7521.
2337. Franz Rothlauf, editor. Late Breaking Papers at Genetic and Evolutionary Computation Conference (GECCO05 LBP), June 25–29, 2005, Loews L'Enfant Plaza Hotel: Washington, DC, USA. Also distributed on CD-ROM at GECCO-2005. See also [304, 2339].
2338. Franz Rothlauf. Representations for Genetic and Evolutionary Algorithms, volume 104 in Studies in Fuzziness and Soft Computing. Physica-Verlag GmbH & Co.: Heidelberg, Germany, 2nd edition, 2002–2006. isbn: 3-540-25059-X and 3790814962. Google Books ID: fQrSUwop4JkC, rNxNAAAACAAJ, and tHYlQfvsP6cC. Foreword by David E. Goldberg.
2339. Franz Rothlauf, Misty Blowers, Jürgen Branke, Stefano Cagnoni, Ivan I. Garibay, Özlem Garibay, Jörn Grahl, Gregory S. Hornby, Edwin D. de Jong, Tim Kovacs, Sanjeev Kumar, Claudio F. Lima, Xavier Llorà, Fernando G. Lobo, Laurence D. Merkle, Julian Francis Miller, Jason H. Moore, Michael O'Neill, Martin Pelikan, Terry P. Riopka, Marylyn D. Ritchie, Kumara Sastry, Stephen L. Smith, Hal Stringer, Keiki Takadama, Marc Toussaint, Stephen C. Upton, Alden H. Wright, and Shengxiang Yang, editors. Proceedings of the 2005 Workshops on Genetic and Evolutionary Computation (GECCO05 WS), June 25–26, 2005, Loews L'Enfant Plaza Hotel: Washington, DC, USA. ACM Press: New York, NY, USA. See also [304, 2337].
2340. Franz Rothlauf, Jürgen Branke, Stefano Cagnoni, David Wolfe Corne, Rolf Drechsler, Yaochu Jin, Penousal Machado, Elena Marchiori, Juan Romero, George D. Smith, and Giovanni Squillero, editors. Applications of Evolutionary Computing, Proceedings of EvoWorkshops 2005: EvoBIO, EvoCOMNET, EvoHOT, EvoIASP, EvoMUSART, and EvoSTOC (EvoWorkshops05), March 30 – April 1, 2005, Lausanne, Switzerland, volume 3449/2005 in Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/b106856. isbn: 3-540-25396-3. Google Books ID: -4dQAAAAMAAJ and 3X86XlOROyMC. OCLC: 59009564, 59223261, 145832358, and 254368497. Library of Congress Control Number (LCCN): 2005922824.
2341. Franz Rothlauf, Jürgen Branke, Stefano Cagnoni, Ernesto Jorge Fernandes Costa, Carlos Cotta, Rolf Drechsler, Evelyne Lutton, Penousal Machado, Jason H. Moore, Juan Romero, George D. Smith, Giovanni Squillero, and Hideyuki Takagi, editors. Applications of Evolutionary Computing – Proceedings of EvoWorkshops 2006: EvoBIO, EvoCOMNET, EvoHOT, EvoIASP, EvoINTERACTION, EvoMUSART, and EvoSTOC (EvoWorkshops06), April 10–12, 2006, Budapest, Hungary, volume 3907/2006 in Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11732242. isbn: 3-540-33237-5. Google Books ID: S4VQAAAAMAAJ. OCLC: 65216918, 67831340, 145829509, 150398193, 181551989, and 316263582. Library of Congress Control Number (LCCN): 2006922618.
2342. Franz Rothlauf, Günther R. Raidl, Anna Isabel Esparcia-Alcázar, Ying-Ping Chen, Gabriela Ochoa, Ender Özcan, Marc Schoenauer, Anne Auger, Hans-Georg Beyer, Nikolaus Hansen, Steffen Finck, Raymond Ros, L. Darrell Whitley, Garnett Wilson, Simon Harding, William B. Langdon, Man Leung Wong, Laurence D. Merkle, Frank W. Moore, Sevan G. Ficici, William Rand, Rick L. Riolo, Nawwaf Kharma, William R. Buckley, Julian Francis Miller, Kenneth Owen Stanley, Jaume Bacardit i Peñarroya, Will N. Browne, Jan Drugowitsch, Nicola Beume, Mike Preuß, Stephen Frederick Smith, Stefano Cagnoni, Alexandru Floares, Aaron Baughman, Steven Matt Gustafson, Maarten Keijzer, Arthur Kordon, and Clare Bates Congdon, editors. Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation (GECCO09-I), July 8–12, 2009, Delta Centre-Ville Hotel: Montreal, QC, Canada. Association for Computing Machinery (ACM): New York, NY, USA. Partly available at http://www.sigevo.org/gec-summit-2009/instructions-papers.html [accessed 2011-02-28]. See also [2343].
2343. Franz Rothlauf, Günther R. Raidl, Anna Isabel Esparcia-Alcázar, Ying-Ping Chen, Gabriela Ochoa, Ender Özcan, Marc Schoenauer, Anne Auger, Hans-Georg Beyer, Nikolaus Hansen, Steffen Finck, Raymond Ros, L. Darrell Whitley, Garnett Wilson, Simon Harding, William B. Langdon, Man Leung Wong, Laurence D. Merkle, Frank W. Moore, Sevan G. Ficici, William Rand, Rick L. Riolo, Nawwaf Kharma, William R. Buckley, Julian Francis Miller, Kenneth Owen Stanley, Jaume Bacardit i Peñarroya, Will N. Browne, Jan Drugowitsch, Nicola Beume, Mike Preuß, Stephen L. Smith, Stefano Cagnoni, Alexandru Floares, Aaron Baughman, Steven Matt Gustafson, Maarten Keijzer, Arthur Kordon, and Clare Bates Congdon, editors. Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference (GECCO09-II), July 8–12, 2009, Delta Centre-Ville Hotel: Montreal, QC, Canada. Association for Computing Machinery (ACM): New York, NY, USA. ACM Order No.: 910092. See also [2342].
2344. Rick D. Routledge. Diversity Indices: Which ones are admissible? Journal of Theoretical Biology, 76(4):503–515, February 21, 1979, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: Academic Press Professional, Inc.: San Diego, CA, USA. doi: 10.1016/0022-5193(79)90015-8.
2345. J. P. Royston. Some Techniques for Assessing Multivariate Normality Based on the Shapiro-Wilk W. Journal of the Royal Statistical Society: Series C (Applied Statistics), 32(2):121–133, 1983, Blackwell Publishing for the Royal Statistical Society: Chichester, West Sussex, UK.
2346. Stefan Rudlof and Mario Köppen. Stochastic Hill Climbing with Learning by Vectors of Normal Distribution. In WSC1 [1001], pages 60–70, 1996. Fully available at http://eprints.kfupm.edu.sa/66958/1/66958.pdf [accessed 2009-08-21]. Second corrected and enhanced version, 1997-09-03.
2347. William Michael Rudnick. Genetic Algorithms and Fitness Variance with an Application to the Automated Design of Artificial Neural Networks. PhD thesis, Oregon Graduate Institute
of Science & Technology: Beaverton, OR, USA, 1992. UMI Order No.: GAX92-22642.
2348. Günter Rudolph. An Evolutionary Algorithm for Integer Programming. In PPSN III [693], pages 139–148, 1994. doi: 10.1007/3-540-58484-6_258. Fully available at http://ls11-www.cs.uni-dortmund.de/people/rudolph/publications/papers/PPSN94.pdf [accessed 2010-08-12]. CiteSeerx: 10.1.1.40.5981.
2349. Günter Rudolph. How Mutation and Selection Solve Long-Path Problems in Polynomial Expected Time. Evolutionary Computation, 4(2):195–205, Summer 1996, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1996.4.2.195. CiteSeerx: 10.1.1.42.2612.
2350. Günter Rudolph. Convergence Properties of Evolutionary Algorithms, volume 35 in Forschungsergebnisse zur Informatik. PhD thesis, Universität Dortmund: Dortmund, North Rhine-Westphalia, Germany, 1996. isbn: 3-86064-554-4. Google Books ID: V9f9AAAACAAJ. Published 1997.
2351. Günter Rudolph. Self-Adaptation and Global Convergence: A Counter-Example. In CEC99 [110], pages 646–651, volume 1, 1999. doi: 10.1109/CEC.1999.781994. Fully available at http://ls11-www.cs.uni-dortmund.de/people/rudolph/publications/papers/CEC99.pdf [accessed 2009-07-10]. CiteSeerx: 10.1.1.36.3755 and 10.1.1.56.9366.
2352. Günter Rudolph. Self-Adaptive Mutations may lead to Premature Convergence. IEEE Transactions on Evolutionary Computation (IEEE-EC), 5(4):410–414, 2001, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/4235.942534. See also [2353].
2353. Günter Rudolph. Self-Adaptive Mutations may lead to Premature Convergence. Technical Report CI73/99, Universität Dortmund, Fachbereich Informatik: Dortmund, North Rhine-Westphalia, Germany, April 30, 2001. Fully available at http://ls11-www.cs.uni-dortmund.de/people/rudolph/publications/papers/tec335.pdf [accessed 2009-07-10]. CiteSeerx: 10.1.1.28.8514 and 10.1.1.36.5633. See also [2352].
2354. Günter Rudolph, Simon M. Lucas, and Carlo Poloni. Proceedings of 10th International Conference on Parallel Problem Solving from Nature (PPSN X), September 13–17, 2008, Dortmund, North Rhine-Westphalia, Germany, volume 5199/2008 in Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-540-87700-4. isbn: 3-540-87699-5. Google Books ID: OO-5OgAACAAJ and PPxMVCKTbtoC. Library of Congress Control Number (LCCN): 2008934679.
2355. Thomas Philip Runarsson, Hans-Georg Beyer, Edmund K. Burke, Juan Julián Merelo-Guervós, L. Darrell Whitley, and Xin Yao, editors. Proceedings of 9th International Conference on Parallel Problem Solving from Nature (PPSN IX), September 9–13, 2006, University of Iceland: Reykjavik, Iceland, volume 4193/2006 in Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11844297. isbn: 3-540-38990-3. Google Books ID: L2QFAQAACAAJ. OCLC: 71747790, 255610906, 315830350, and 318289018. Library of Congress Control Number (LCCN): 2006931845.
2356. Stuart J. Russell and Peter Norvig. Artificial Intelligence: A Modern Approach (AIMA). Prentice Hall International Inc.: Upper Saddle River, NJ, USA and Pearson Education: Upper Saddle River, NJ, USA, 2nd edition, December 20, 2002. isbn: 0-13-080302-2, 0-13-790395-2, and 8120323823. Google Books ID: 0GldGwAACAAJ, 5WfMAQAACAAJ, 5mfMAQAACAAJ, and DvqIIwAACAAJ.
2357. Tomas Ruzgas, Rimantas Rudzkis, and Mindaugas Kavaliauskas. Application of Clustering in the Non-Parametric Estimation of Distribution Density. Nonlinear Analysis: Modelling and Control (NLAMC), 11(4):393–411, November 4, 2006, Lithuanian Association of Nonlinear Analysts (LANA): Vilnius, Lithuania. Fully available at http://www.lana.lt/journal/23/Ruzgas.pdf [accessed 2009-08-28].
2358. Conor Ryan, J. J. Collins, and Michael O'Neill. Grammatical Evolution: Evolving Programs for an Arbitrary Language. In EuroGP98 [210], pages 83–95, 1998. Fully available at http://www.grammatical-evolution.org/papers/eurogp98.ps [accessed 2009-07-04]. CiteSeerx: 10.1.1.38.7697.
2359. Conor Ryan, Michael O'Neill, and J. J. Collins. Grammatical Evolution: Solving Trigonometric Identities. In MENDEL98 [417], pages 111–119, 1998. Fully available at http://www.grammatical-evolution.org/papers/mendel98/index.html [accessed 2010-12-02]. CiteSeerx: 10.1.1.30.4253.
2360. Conor Ryan, Terence Soule, Maarten Keijzer, Edward P. K. Tsang, Riccardo Poli, and
Ernesto Jorge Fernandes Costa, editors. Proceedings of the 6th European Conference on
Genetic Programming (EuroGP03), April 14–16, 2003, University of Essex, Departments
of Mathematical and Biological Sciences: Wivenhoe Park, Colchester, Essex, UK, vol-
ume 2610/2003 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany. doi: 10.1007/3-540-36599-0. isbn: 3-540-00971-X. Google Books
ID: pVhXLZKMF-YC.
2361. Nestor Rychtyckyj and Daniel Shapiro, editors. Proceedings of the Twenty-second Innovative Applications of Artificial Intelligence Conference (IAAI10), July 11–15, 2010, Westin Peachtree Plaza: Atlanta, GA, USA. AAAI Press: Menlo Park, CA, USA.
2362. Proceedings of the Second National Conference on Evolutionary Computation and Global Optimization (Materiały II Krajowej Konferencji Algorytmy Ewolucyjne i Optymalizacja Globalna), September 16–19, 1997, Rytro, Poland. Google Books ID: IGP4tgAACAAJ.
S
2363. Paul S. Andrews, Jonathan Timmis, Nick D. L. Owens, Uwe Aickelin, Emma Hart, Andy Hone, and Andrew M. Tyrrell, editors. Proceedings of the 8th International Conference on Artificial Immune Systems (ICARIS09), August 9–12, 2009, York, UK, volume 5666 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
2364. Leonardo Sá, Pedro Vieira, and Antonio Carneiro de Mesquita Filho. Evolutionary Synthesis of Low-Sensitivity Antenna Matching Networks using Adjacency Matrix Representation. In CEC09 [1350], pages 1201–1208, 2009. doi: 10.1109/CEC.2009.4983082. INSPEC Accession Number: 10688677.
2365. Ashish Sabharwal. Combinatorial Problems I: Finding Solutions. In 2nd Asian-Pacific School on Statistical Physics and Interdisciplinary Applications [987], 2008. Fully available at http://www.cs.cornell.edu/~sabhar/tutorials/kitpc08-combinatorial-problems-I.ppt [accessed 2010-08-29].
2366. Krzysztof Sacha, editor. Software Engineering Techniques: Design for Quality – IFIP Working Conference on Software Engineering Techniques (SET06), October 17–20, 2006, Warsaw University: Warsaw, Poland. Springer-Verlag: Berlin/Heidelberg. isbn: 0-387-39387-0. Co-located with VIII Conference on Software Engineering KKIO 2006.
2367. Lorenza Saitta, editor. Proceedings of the 13th International Conference on Machine Learning
(ICML96), July 3–6, 1996, Bari, Italy. Morgan Kaufmann Publishers Inc.: San Francisco,
CA, USA. isbn: 1-55860-419-7.
2368. Proceedings of the Third World Congress on Nature and Biologically Inspired Computing
(NaBIC11), November 19–21, 2011, Salamanca, Spain.
2369. Peter Salamon, Paolo Sibani, and Richard Frost. Facts, Conjectures, and Improvements
for Simulated Annealing, volume 7 in SIAM Monographs on Mathematical Modeling and
Computation. Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA,
USA, 2002. isbn: 0898715083. Google Books ID: jhAldlYvClcC.
2370. Sancho Salcedo-Sanz and Xin Yao. A Hybrid Hopfield Network-Genetic Algorithm Approach for the Terminal Assignment Problem. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics, 34(6):2343–2353, December 2004, IEEE Systems, Man, and Cybernetics Society: New York, NY, USA. doi: 10.1109/TSMCB.2004.836471. CiteSeerx: 10.1.1.62.7879. INSPEC Accession Number: 8207612. See also [2371].
2371. Sancho Salcedo-Sanz and Xin Yao. Assignment of Cells to Switches in a Cellular Mobile Network using a Hybrid Hopfield Network-Genetic Algorithm Approach. Applied Soft Computing, 8(1):216–224, January 2008, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.asoc.2007.01.002. See also [2370].
2372. Sancho Salcedo-Sanz and Xin Yao. Assignment of Cells to Switches in a Cellular Mobile Network using a Hybrid Hopfield Network-Genetic Algorithm Approach. Applied Soft Computing, 8(1):216–224, January 2008, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.asoc.2007.01.002.
2373. Sancho Salcedo-Sanz, Yong Xu, and Xin Yao. Hybrid Meta-Heuristics Algorithms for Task Assignment in Heterogeneous Computing Systems. Computers & Operations Research, 33(3):820–835, March 2006, Pergamon Press: Oxford, UK and Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.cor.2004.08.010. Fully available at http://www.cs.bham.ac.uk/~xin/papers/SalcedoSanzXuYao2006.pdf [accessed 2009-08-10].
2374. Sancho Salcedo-Sanz, José Antonio Portilla-Figueras, Emilio G. Ortiz-García, Ángel M. Pérez-Bellido, Christopher Thraves, Antonio Fernández-Anta, and Xin Yao. Optimal Switch Location in Mobile Communication Networks using Hybrid Genetic Algorithms. Applied Soft Computing, 8(4):1486–1497, September 2008, Elsevier Science Publishers B.V.: Essex, UK, Rajkumar Roy, editor. doi: 10.1016/j.asoc.2007.10.021. Fully available at http://nical.ustc.edu.cn/papers/asc2007b.pdf [accessed 2008-09-01]. CiteSeerx: 10.1.1.87.8731.
2375. Muhammad Saleem and Muddassar Farooq. BeeSensor: A Bee-Inspired Power Aware Routing Protocol for Wireless Sensor Networks. In EvoWorkshops07 [1050], pages 81–90, 2007. doi: 10.1007/978-3-540-71805-5_9.
2376. Abdellah Salhi, Hugh Glaser, and David de Roure. Parallel Implementation of a Genetic-Programming based Tool for Symbolic Regression. DSSE Technical Reports DSSE-TR-97-3, University of Southampton, Department of Electronics and Computer Science, Declarative Systems & Software Engineering Group: Highfield, Southampton, UK, July 18, 1997. Fully available at http://www.dsse.ecs.soton.ac.uk/techreports/95-03/97-3.html [accessed 2007-09-09]. See also [2377].
2377. Abdellah Salhi, Hugh Glaser, and David de Roure. Parallel Implementation of a Genetic-Programming Based Tool for Symbolic Regression. Information Processing Letters, 66(6):299–307, June 30, 1998, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/S0020-0190(98)00056-8. See also [2376].
2378. David Salomon. Assemblers and Loaders, Ellis Horwood Series in Computers and Their Applications. Ellis Horwood: Chichester, West Sussex, UK, 1993. isbn: 0-13-052564-2. OCLC: 26404704, 473153955, and 636481452. Library of Congress Control Number (LCCN): 92028178. GBV-Identification (PPN): 110448634. LC Classification: QA76.76.A87 S35 1992.
2379. Proceedings of the Eighth Joint Conference on Information Science (JCIS 2005), Section: The Sixth International Workshop on Frontiers in Evolutionary Algorithms (FEA05), July 16–21, 2005, Salt Lake City Marriott City Center: Salt Lake City, UT, USA. Partly available at http://fs.mis.kuas.edu.tw/~cobol/JCIS2005/jcis05/Track4.html [accessed 2007-09-16]. Workshop held in conjunction with the Eighth Joint Conference on Information Sciences.
2380. Rafał Sałustowicz and Jürgen Schmidhuber. Probabilistic Incremental Program Evolution: Stochastic Search Through Program Space. Evolutionary Computation, 5(2):123–141, February 1997, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1997.5.2.123. See also [2381].
2381. Rafał Sałustowicz and Jürgen Schmidhuber. Probabilistic Incremental Program Evolution: Stochastic Search Through Program Space. In ECML97 [2781], pages 213–220, 1997. doi: 10.1007/3-540-62858-4_86. Fully available at ftp://ftp.idsia.ch/pub/juergen/ECML_PIPE.ps.gz and http://eprints.kfupm.edu.sa/59137/1/59137.pdf [accessed 2009-08-23]. CiteSeerx: 10.1.1.45.231. See also [2380].
2382. Claude Sammut and Achim G. Hoffmann, editors. Proceedings of the 19th International Conference on Machine Learning (ICML02), July 8–12, 2002, University of New South Wales: Sydney, NSW, Australia. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-873-7.
2383. Arthur L. Samuel. Some Studies in Machine Learning using the Game of Checkers. IBM Journal of Research and Development, 3(3):210–229, July 1959. doi: 10.1147/rd.33.0210. See also [2384, 2385].
2384. Arthur L. Samuel. Some Studies in Machine Learning Using the Game of Checkers. In Computers and Thought [897], pages 71–105. McGraw-Hill: New York, NY, USA, 1963–1995. See also [2383, 2385].
2385. Arthur L. Samuel. Some Studies in Machine Learning using the Game of Checkers. II – Recent Progress. IBM Journal of Research and Development, 11(6):601–617, November 1967. Fully available at http://pages.cs.wisc.edu/~dyer/cs540/handouts/samuel-checkers.pdf and http://pages.cs.wisc.edu/~jerryzhu/cs540/handouts/samuel-checkers.pdf [accessed 2010-08-18]. See also [2383].
2386. Proceedings of the 2014 International Conference on Systems, Man, and Cybernetics (SMC14), October 5–8, 2014, San Diego, CA, USA.
2387. Harilaos G. Sandalidis, Constandinos X. Mavromoustakis, and Peter P. Stavroulakis. Performance Measures of an Ant Based Decentralised Routing Scheme for Circuit Switching Communication Networks. Soft Computing – A Fusion of Foundations, Methodologies and Applications, 5(4):313–317, August 2001, Springer-Verlag: Berlin/Heidelberg. doi: 10.1007/s005000100104. Fully available at http://www.springerlink.com/content/g1u7lygrugwk3nl5/fulltext.pdf [accessed 2008-09-03].
2388. Francisco Sandoval, Carlos Camacho-Peñalosa, and Antonio Puerta, editors. IEEE Mediterranean Electrotechnical Conference (MELECON06), May 16–19, 2006, Benalmádena (Málaga), Spain, volume 2. IEEE Computer Society: Washington, DC, USA. isbn: 1-4244-0087-2. Google Books ID: cXMpAAAACAAJ. OCLC: 75490138, 81252104, 237880307, and 317661437.
2389. Yasuhito Sano and Hajime Kita. Optimization of Noisy Fitness Functions by Means of Genetic Algorithms Using History of Search. In PPSN VI [2421], pages 571–580, 2000. doi: 10.1007/3-540-45356-3_56.
2390. Yasuhito Sano and Hajime Kita. Optimization of Noisy Fitness Functions by means of Genetic Algorithms using History of Search with Test of Estimation. In CEC02 [944], pages 360–365, 2002. doi: 10.1109/CEC.2002.1006261.
2391. Bruno Sareni and Laurent Krähenbühl. Fitness Sharing and Niching Methods Revisited. IEEE Transactions on Evolutionary Computation (IEEE-EC), 2(3):97–106, September 1998, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/4235.735432. Fully available at http://hal.archives-ouvertes.fr/hal-00359799/en/ [accessed 2010-08-02]. INSPEC Accession Number: 6114042. ID: hal-00359799, version 1.
2392. Manish Sarkar. Evolutionary Programming-Based Fuzzy Clustering. In EP96 [946], pages 247–256, 1996.
2393. Ruhul A. Sarker, Hussein A. Abbass, and Samin Karim. An Evolutionary Algorithm for Constrained Multiobjective Optimization Problems. In AJWIS01 [1498], pages 113–122, 2001. Fully available at http://www.lania.mx/~ccoello/EMOO/sarker01a.pdf.gz [accessed 2008-11-14]. CiteSeerx: 10.1.1.8.995.
2394. Ruhul A. Sarker, Masoud Mohammadian, and Xin Yao, editors. Evolutionary Optimization,
volume 48 in International Series in Operations Research & Management Science. Kluwer
Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands,
2002. doi: 10.1007/b101816. isbn: 0-306-48041-7 and 0-7923-7654-4.
2395. Ruhul A. Sarker, Robert G. Reynolds, Hussein A. Abbass, Kay Chen Tan, Robert Ian McKay, Daryl Leslie Essam, and Tom Gedeon, editors. Proceedings of the IEEE Congress on Evolutionary Computation (CEC03), December 8–13, 2003, Canberra, Australia. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7803-7804-0. Google Books ID: ALhVAAAAMAAJ, CyBpAAAACAAJ, hblVAAAAMAAJ, and u7hVAAAAMAAJ. INSPEC Accession Number: 8188933. Catalogue no.: 03TH8674. CEC 2003 – A joint meeting of the IEEE, the IEAust, the EPS, and the IEE. Congress on Evolutionary Computation, IEEE/Neural Networks Council, Evolutionary Programming Society, Galesia, IEE.
2396. Warren S. Sarle. Stopped Training and Other Remedies for Overfitting. In Interface95 [1877], pages 352–360, 1995. Fully available at ftp://ftp.sas.com/pub/neural/inter95.ps.Z [accessed 2009-07-12]. CiteSeerx: 10.1.1.42.3920.
2397. Warren S. Sarle. What is overfitting and how can I avoid it? In Usenet FAQs: comp.ai.neural-nets FAQ [892], Chapter Part 3 of 7: General, Section 3. FAQS.ORG Internet FAQ Archives: USA, 2009. Fully available at http://www.faqs.org/faqs/ai-faq/neural-nets/part3/section-3.html [accessed 2008-02-29].
2398. Kumara Sastry and David Edward Goldberg. Multiobjective hBOA, Clustering, and Scalability. In GECCO05 [304], pages 663–670, 2005, Martin Pelikan, Advisor. doi: 10.1145/1068009.1068122. See also [2157].
2399. Kumara Sastry and David Edward Goldberg. Modeling Tournament Selection With Replacement Using Apparent Added Noise. In GECCO01 [2570], page 781, 2001. CiteSeerx: 10.1.1.16.786. See also [2400].
2400. Kumara Sastry and David Edward Goldberg. Modeling Tournament Selection With Replacement Using Apparent Added Noise. IlliGAL Report 2001014, Illinois Genetic Algorithms Laboratory (IlliGAL), Department of Computer Science, Department of General Engineering, University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, January 2001. Fully available at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/2001014.ps.Z [accessed 2010-08-03]. See also [2399].
2401. Kumara Sastry and David Edward Goldberg. Let's Get Ready to Rumble Redux: Crossover versus Mutation Head to Head on Exponentially Scaled Problems. In GECCO07-I [2699], pages 1380–1387, 2007. doi: 10.1145/1276958.1277215. See also [2402].
2402. Kumara Sastry and David Edward Goldberg. Let's Get Ready to Rumble Redux: Crossover versus Mutation Head to Head on Exponentially Scaled Problems. IlliGAL Report 2007005, Illinois Genetic Algorithms Laboratory (IlliGAL), Department of Computer Science, Department of General Engineering, University of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, February 11, 2007. Fully available at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/2007005.pdf [accessed 2008-07-21]. See also [2401].
2403. Hiroyuki Sato, Arturo Hernández Aguirre, and Kiyoshi Tanaka. Controlling Dominance Area of Solutions and Its Impact on the Performance of MOEAs. In EMO07 [2061], pages 5–20, 2007. doi: 10.1007/978-3-540-70928-2_5.
2404. Satoshi Sato, Tadashi Ishihara, and Hikaru Inooka. Systematic Design via the Method of Inequalities. IEEE Control Systems Magazine, pages 57–65, October 1996, IEEE Computer Society: Piscataway, NJ, USA. doi: 10.1109/37.537209.
2405. Shinji Sato. Simulated Quenching: A New Placement Method for Module Generation. In ICCAD [1328], pages 538–541, 1997. doi: 10.1109/ICCAD.1997.643591. Fully available at http://www.sigda.org/Archives/ProceedingArchives/Iccad/Iccad97/papers/1997/iccad97/pdffiles/09c_2.pdf [accessed 2010-09-25]. INSPEC Accession Number: 5781467.
2406. Abdul Sattar and Byeong-Ho Kang, editors. Advances in Artificial Intelligence. Proceedings of the 19th Australian Joint Conference on Artificial Intelligence (AI06), December 4–6, 2006, Hobart, Australia, volume 4304/2006 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11941439. isbn: 3-540-49787-0. Library of Congress Control Number (LCCN): 2006936897.
2407. Franklin E. Satterthwaite. Random Balance Experimentation. Technometrics, 1(2):111–137, May 1959, American Statistical Association (ASA): Alexandria, VA, USA and American Society for Quality (ASQ): Milwaukee, WI, USA. JSTOR Stable ID: 1266466.
2408. Franklin E. Satterthwaite. REVOP or Random Evolutionary Operation. Technical Report
10-10-59, Merrimack College: North Andover, MA, USA, 1959.
2409. Jürgen Sauer, editor. 19. Workshop Planen, Scheduling und Konfigurieren, Entwerfen (PuK05), September 11, 2005, Koblenz, Germany. Fully available at http://www-is.informatik.uni-oldenburg.de/~sauer/puk2005/paper.html [accessed 2011-01-11].
2410. Yoshikazu Sawaragi, Koichi Inoue, and Hirotaka Nakayama, editors. Proceedings of the 7th International Conference on Multiple Criteria Decision Making (MCDM1986), August 18–22, 1986, Kyoto, Japan. isbn: 0-387-17718-3, 0-387-17719-1, 3-540-17718-3, and 3-540-17719-1.
2411. Dhish Kumar Saxena and Kalyanmoy Deb. Dimensionality Reduction of Objectives and Constraints in Multi-Objective Optimization Problems: A System Design Perspective. In CEC08 [1889], pages 3204–3211, 2008. doi: 10.1109/CEC.2008.4631232. INSPEC Accession Number: 10250936.
2412. Robert Schaefer, Carlos Cotta, Joanna Kołodziej, and Günter Rudolph, editors. Proceedings of the 11th International Conference on Parallel Problem Solving From Nature, Part 1 (PPSN XI-1), September 11–15, 2010, AGH University of Science and Technology: Kraków, Poland, volume 6238 in Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-642-15843-9. Partly available at http://home.agh.edu.pl/~ppsn/ [accessed 2011-02-28]. Library of Congress Control Number (LCCN): 2010934114.
2413. Robert Schaefer, Carlos Cotta, Joanna Kołodziej, and Günter Rudolph, editors. Proceedings of the 11th International Conference on Parallel Problem Solving From Nature, Part 2 (PPSN10-2), September 11–15, 2010, AGH University of Science and Technology: Kraków, Poland, volume 6239 in Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-642-15870-6. Partly available at http://home.agh.edu.pl/~ppsn/ [accessed 2011-02-28]. Library of Congress Control Number (LCCN): 2010934114.
2414. J. David Schaffer, editor. Proceedings of the 3rd International Conference on Genetic Algorithms (ICGA89), June 4–7, 1989, George Mason University (GMU): Fairfax, VA, USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-066-3. Google Books ID: BPBQAAAAMAAJ and TYvqPAAACAAJ. OCLC: 19722958, 84251559, 180515150, 231091143, 257885359, and 264619464.
2415. J. David Schaer and L. Darrell Whitley, editors. Proceedings of the International Workshop
on Combinations of GeneticAlgorithms and Neural Networks (COGANN92), June 2, 1992,
Baltimore, MD, USA. IEEE (Institute of Electrical and Electronics Engineers): Piscataway,
NJ, USA. doi: 10.1109/COGANN.1992.273951. isbn: 0-8186-2787-5. OCLC: 27675198,
65776456, 174278971, 224222497, 311746303, 475797050, and 496330593. INSPEC
Accession Number: 4487943. Catalogue no.: 92TH0435-8.
2416. J. David Schaer, Richard A. Caruana, Larry J. Eshelman, and Rajarshi Das. A Study
of Control Parameters Aecting Online Performance of Genetic Algorithms for Function
Optimization. In ICGA89 [2414], pages 5160, 1989.
2417. J. David Schaer, Larry J. Eshelman, and Daniel Outt. Spurious Correlations and Prema-
ture Convergence in Genetic Algorithms. In FOGA90 [2562], pages 102112, 1990.
2418. Robert E. Schapire. The Strength of Weak Learnability. Machine Learning, 5:
197227, June 1990, Kluwer Academic Publishers: Norwell, MA, USA and Springer
Netherlands: Dordrecht, Netherlands. doi: 10.1007/BF00116037. Fully available
at http://citeseer.ist.psu.edu/schapire90strength.html and http://www.
springerlink.com/content/x02406w7q5038735/fulltext.pdf [accessed 2007-09-15].
2419. R. Schilling, W. Haase, Jacques Périaux, and H. Baier, editors. Proceedings of the Sixth Conference on Evolutionary and Deterministic Methods for Design, Optimization and Control with Applications to Industrial and Societal Problems (EUROGEN'05), September 12–14, 2005, Technische Universität München: Munich, Bavaria, Germany. Technische Universität München: Munich, Bavaria, Germany. Partly available at http://www.flm.mw.tum.de/EUROGEN05/ [accessed 2007-09-16]. In association with ECCOMAS and ERCOFTAC.
2420. Jürgen Schmidhuber. Evolutionary Principles in Self-Referential Learning. (On Learning how to Learn: The Meta-Meta-... Hook.). Master's thesis, Technische Universität München, Institut für Informatik, Lehrstuhl Professor Radig: Munich, Bavaria, Germany, May 14, 1987. Fully available at http://www.idsia.ch/~juergen/diploma.html [accessed 2007-11-01].
2421. Marc Schoenauer, Kalyanmoy Deb, Günter Rudolph, Xin Yao, Evelyne Lutton, Juan Julián Merelo-Guervós, and Hans-Paul Schwefel, editors. Proceedings of the 6th International Conference on Parallel Problem Solving from Nature (PPSN VI), September 18–20, 2000, Paris, France, volume 1917/2000 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-45356-3. isbn: 3-540-41056-2. Google Books ID: GDz2nCAJpJ4C and gI26Cld2BY0C. OCLC: 44914243, 247592642, and 313676512.
2422. Armin Scholl and Robert Klein. Bin Packing. Friedrich Schiller University of Jena, Wirtschaftswissenschaftliche Fakultät: Jena, Thuringia, Germany, September 2, 2003. Fully available at http://www.wiwi.uni-jena.de/Entscheidung/binpp/ [accessed 2010-10-12]. See also [1834].
2423. Armin Scholl, Robert Klein, and Christian Jürgens. BISON: A Fast Hybrid Procedure for Exactly Solving the One-Dimensional Bin Packing Problem. Computers & Operations Research, 24(7):627–645, July 1997, Pergamon Press: Oxford, UK and Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/S0305-0548(96)00082-2.
2424. Ruud Schoonderwoerd. Collective Intelligence for Network Control. Master's thesis, Delft University of Technology, Faculty of Technical Mathematics and Informatics: Delft, The Netherlands, May 30, 1996, Henk Koppelaar, Eugene J. H. Kerckhoffs, Leon J. M. Rothkrantz, and Janet L. Bruten, Committee members. Fully available at http://staff.washington.edu/paymana/swarm/schoon96-thesis.pdf [accessed 2008-06-12]. See also [2425].
2425. Ruud Schoonderwoerd, Owen E. Holland, Janet L. Bruten, and Leon J. M. Rothkrantz. Ant-Based Load Balancing in Telecommunications Networks. Adaptive Behavior, 5(2):169–207, Fall 1996, SAGE Publications: Thousand Oaks, CA, USA. doi: 10.1177/105971239700500203. Fully available at http://www.ifi.uzh.ch/ailab/teaching/FG06/ [accessed 2009-06-27]. CiteSeerˣ: 10.1.1.4.3655. See also [2424, 2426].
2426. Ruud Schoonderwoerd, Owen E. Holland, and Janet L. Bruten. Ant-like Agents for Load Balancing in Telecommunications Networks. In AGENTS'97 [1978], pages 209–216, 1997. doi: 10.1145/267658.267718. Fully available at http://sigart.acm.org/proceedings/agents97/A153/A153.ps and http://staff.washington.edu/paymana/swarm/schoon97-icaa.pdf [accessed 2009-06-26]. See also [2425].
2427. Thomas J. M. Schopf, editor. Models in Paleobiology. Freeman, Cooper & Co: San Francisco,
CA, USA and DoubleDay: New York, NY, USA, January 1972. isbn: 0877353255. Google
Books ID: Db0eAAAACAAJ and OjjlOgAACAAJ. OCLC: 572084.
2428. Herb Schorr and Alain T. Rappaport, editors. Proceedings of the First Conference on Innovative Applications of Artificial Intelligence (IAAI'89), March 28–30, 1989, Stanford University: Stanford, CA, USA. AAAI Press: Menlo Park, CA, USA. Partly available at http://www.aaai.org/Conferences/IAAI/iaai89.php [accessed 2007-09-06]. Published in 1991.
2429. Nicol N. Schraudolph and Richard K. Belew. Dynamic Parameter Encoding for Genetic Algorithms. Machine Learning, 9:9–21, June 1992, Kluwer Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands. doi: 10.1023/A:1022624728869. Fully available at http://cnl.salk.edu/~schraudo/pubs/SchBel92.pdf. CiteSeerˣ: 10.1.1.42.1387.
2430. Alexander Schrijver. A Course in Combinatorial Optimization. self-published, January 22, 2009. Fully available at http://homepages.cwi.nl/~lex/files/dict.pdf [accessed 2010-08-04].
2431. Crystal Schuil, Matthew Valente, Justin Werfel, and Radhika Nagpal. Collective Construction Using Lego Robots. In AAAI'06 / IAAI'06 [1055], volume 2, 2006. Fully available at http://www.eecs.harvard.edu/~rad/ssr/papers/aaai06-schuil.pdf [accessed 2008-06-12].
2432. Peter K. Schuster and Manfred Eigen. The Hypercycle, A Principle of Natural Self-
Organization. Springer-Verlag GmbH: Berlin, Germany, 1979. isbn: 0-387-09293-5. Google
Books ID: Af49AAAAYAAJ and P54PPAAACAAJ. OCLC: 4665354.
2433. Hans-Paul Schwefel. Kybernetische Evolution als Strategie der experimentellen Forschung in der Strömungstechnik. Master's thesis, Technische Universität Berlin: Berlin, Germany, 1965.
2434. Hans-Paul Schwefel. Experimentelle Optimierung einer Zweiphasendüse Teil I. Technical Report 35, AEG Research Institute: Berlin, Germany, 1968. Project MHD-Staustrahlrohr 11.034/68.
2435. Hans-Paul Schwefel. Evolutionsstrategie und numerische Optimierung. PhD thesis, Technische Universität Berlin, Institut für Meß- und Regelungstechnik, Institut für Biologie und Anthropologie: Berlin, Germany, 1975. OCLC: 52361662 and 251655294. GBV-Identification (PPN): 156923181.
2436. Hans-Paul Schwefel. Numerical Optimization of Computer Models. John Wiley & Sons Ltd.: New York, NY, USA, 1981. isbn: 0-471-09988-0. OCLC: 8011455 and 301251708. Library of Congress Control Number (LCCN): 81173223. GBV-Identification (PPN): 01489498X. LC Classification: QA402.5 .S3813.
2437. Hans-Paul Schwefel. Evolution and Optimum Seeking, Sixth-Generation Computer Technology Series. John Wiley & Sons Ltd.: New York, NY, USA, 1995. isbn: 0-471-57148-2. Google Books ID: dfNQAAAAMAAJ. OCLC: 30701094 and 222866514. Library of Congress Control Number (LCCN): 94022270. GBV-Identification (PPN): 194661636. LC Classification: QA402.5 .S375 1995.
2438. Hans-Paul Schwefel and Reinhard Männer, editors. Proceedings of the 1st Workshop on Parallel Problem Solving from Nature (PPSN I), October 1–3, 1990, FRG Dortmund: Dortmund, North Rhine-Westphalia, Germany, volume 496/1991 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/BFb0029723. isbn: 0-387-54148-9 and 3-540-54148-9. Google Books ID: W4r2KQAACAAJ, a7LiAAAACAAJ, and cZhQAAAAMAAJ.
2439. A. Carlisle Scott and Philip Klahr, editors. Proceedings of the Fourth Conference on Innovative Applications of Artificial Intelligence (IAAI'92), July 12–16, 1992, San Jose, CA, USA. AAAI Press: Menlo Park, CA, USA. isbn: 0-262-69155-8. Partly available at http://www.aaai.org/Conferences/IAAI/iaai912.php [accessed 2007-09-06].
2440. Ian Scriven, Andrew Lewis, and Sanaz Mostaghim. Dynamic Search Initialisation Strategies for Multi-Objective Optimisation in Peer-to-Peer Networks. In CEC'09 [1350], pages 1515–1522, 2009. doi: 10.1109/CEC.2009.4983122. INSPEC Accession Number: 10688717.
2441. Ninth International Workshop on Learning Classifier Systems (IWLCS'06), July 8–9, 2006, Seattle, WA, USA. Held during GECCO-2006. See also [1516].
2442. Proceedings of the 28th International Conference on Machine Learning (ICML'11), June 11–14, 2011, Seattle, WA, USA.
2443. Michèle Sebag and Antoine Ducoulombier. Extending Population-Based Incremental Learning to Continuous Search Spaces. In PPSN V [866], pages 418–427, 1998. CiteSeerˣ: 10.1.1.42.1884.
2444. Anthony V. Sebald and Lawrence Jerome Fogel, editors. Proceedings of the 3rd Annual Conference on Evolutionary Programming (EP'94), February 24–26, 1994, San Diego, CA, USA. World Scientific Publishing: River Edge, NJ, USA. isbn: 9810218109. Library of Congress Control Number (LCCN): 95118447.
2445. Jimmy Secretan, Nicholas Beato, David B. D'Ambrosio, Adelein Rodriguez, Adam Campbell, and Kenneth Owen Stanley. Evolving Pictures Collaboratively Online. In CHI'08 [139], pages 1759–1768, 2008. doi: 10.1145/1357054.1357328. Fully available at http://eplex.cs.ucf.edu/papers/secretan_chi08.pdf [accessed 2011-02-23].
2446. Robert Sedgewick. Algorithms in Java, Parts 1–4 (Fundamentals: Data Structures, Sorting, Searching). Addison-Wesley Professional: Reading, MA, USA, 3rd edition, September 2002. isbn: 0-201-36120-5. Google Books ID: hyvdUQUmf2UC. GBV-Identification (PPN): 247734144, 356227073, 511295804, 525555153, and 595338534. With Java consultation by Michael Schidlowsky.
2447. Alberto Maria Segre, editor. Proceedings of the 6th International Workshop on Machine Learning (ML'89), June 26–27, 1989, Cornell University Press: Ithaca, NY, USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-036-1.
2448. Y. Ahmet Şekercioğlu, Andreas Pitsilides, and Athanasios V. Vasilakos. Computational Intelligence in Management of ATM Networks: A Survey of Current State of Research. In ESIT'99 [888], 1999. Fully available at http://www.cs.ucy.ac.cy/networksgroup/pubs/published/1999/CI-ATM-London99.pdf and http://www.erudit.de/erudit/events/esit99/12525_P.pdf [accessed 2008-09-03].
2449. Bart Selman, Hector Levesque, and David Mitchell. A New Method for Solving Hard Satisfiability Problems. In AAAI'92 [2639], pages 440–446, 1992. Fully available at http://www.aaai.org/Papers/AAAI/1992/AAAI92-068.pdf [accessed 2009-06-24].
2450. KWEB SWS Challenge Workshop, June 6–7, 2007, Innsbruck Congress Center: Innsbruck, Austria. Semantic Technology Institute (STI) Innsbruck, Austria: Innsbruck, Austria. Fully available at http://sws-challenge.org/wiki/index.php/Workshop_Innsbruck [accessed 2010-12-12].
2451. Bernhard Sendhoff, Martin Kreutz, and Werner von Seelen. A Condition for the Genotype-Phenotype Mapping: Causality. In ICGA'97 [166], pages 73–80, 1997. Fully available at ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/manuscripts/articles/frame-label.ps.gz [accessed 2010-08-12]. CiteSeerˣ: 10.1.1.38.6750. arXiv ID: adap-org/9711001v1.
2452. Bernhard Sendhoff, Martin Kreutz, and Werner von Seelen. Causality and the Analysis of Local Search in Evolutionary Algorithms. INI Internal Report IR-INI 97-16, Ruhr-Universität Bochum, Institut für Neuroinformatik: Bochum, North Rhine-Westphalia, Germany, September 1997. Fully available at ftp://ftp.neuroinformatik.ruhr-uni-bochum.de/pub/manuscripts/IRINI/irini97-16/irini97-16.ps.gz [accessed 2009-07-15]. issn: 0943-2752. See also [2451].
2453. Twittie Senivongse and Gina Oliveira, editors. 9th IFIP WG 6.1 International Conference on Distributed Applications and Interoperable Systems (DAIS'09), June 9–11, 2009, Lisbon, Portugal, volume 5523/2009 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-642-02164-0. isbn: 3-642-02163-8. Partly available at http://discotec09.di.fc.ul.pt/ [accessed 2011-02-28].
2454. Proceedings of the 2012 International Conference on Systems, Man, and Cybernetics (SMC'12), October 7–10, 2012, Seoul, South Korea.
2455. Isabelle Servet, Louise Travé-Massuyès, and Daniel Stern. Telephone Network Traffic Overloading Diagnosis and Evolutionary Computation Technique. In AE'97 [1191], pages 137–144, 1997. doi: 10.1007/BFb0026596.
2456. Isabelle Servet, Louise Travé-Massuyès, and Daniel Stern. Evolutionary Computation Techniques for Traffic Supervision Based On A Model Of Telephone Networks Built From Qualitative Knowledge. Studies in Informatics and Control, 6(1), March 1997, National Institute for R&D in Informatics (Institutul Naţional de Cercetare-Dezvoltare în Informatica, ICI): Bucharest (Bucuresti), Romania. Fully available at http://www.ici.ro/SIC/sic1997_1/ser97.htm [accessed 2009-08-22].
2457. SERVICES Cup'10, 2010.
2458. Mark Shackleton, Rob Shipman, and Marc Ebner. An Investigation of Redundant Genotype-Phenotype Mappings and their Role in Evolutionary Search. In CEC'00 [1333], pages 493–500, volume 1, 2000. doi: 10.1109/CEC.2000.870337. Fully available at http://citeseer.ist.psu.edu/old/409243.html and http://wwwi2.informatik.uni-wuerzburg.de/mitarbeiter/ebner/research/publications/uniWu/redundant.ps.gz [accessed 2009-07-10]. INSPEC Accession Number: 6734665.
2459. M. Zubair Shafiq, Muddassar Farooq, and Syed Ali Khayam. A Comparative Study of Fuzzy Inference Systems, Neural Networks and Adaptive Neuro Fuzzy Inference Systems for Portscan Detection. In EvoWorkshops'08 [1051], pages 52–61, 2008. doi: 10.1007/978-3-540-78761-7_6.
2460. M. Zubair Shafiq, S. Momina Tabish, and Muddassar Farooq. Are Evolutionary Rule Learning Algorithms Appropriate for Malware Detection? In GECCO'09-I [2342], pages 1915–1916, 2009. doi: 10.1145/1569901.1570233. Fully available at http://www.nexginrc.org/papers/gecco09-zubair.pdf [accessed 2009-09-05]. See also [2461].
2461. M. Zubair Shafiq, S. Momina Tabish, and Muddassar Farooq. On the Appropriateness of Evolutionary Rule Learning Algorithms for Malware Detection. In GECCO'09-II [2343], pages 2609–2616, 2009. doi: 10.1145/1570256.1570370. Fully available at http://nexginrc.org/papers/iwlcs09-zubair.pdf [accessed 2009-09-05]. See also [2460].
2462. Biren Shah, Karthik Ramachandran, and Vijay V. Raghavan. A Hybrid Approach for Data Warehouse View Selection. International Journal of Data Warehousing and Mining (IJDWM), 2(2):1–37, 2006, Idea Group Publishing (Idea Group Inc., IGI Global): New York, NY, USA. doi: 10.4018/jdwm.2006040101. CiteSeerˣ: 10.1.1.85.5014.
2463. S. H. Shami, I. M. A. Kirkwood, and Mark C. Sinclair. Evolving Simple Fault-Tolerant Routing Rules using Genetic Programming. Electronics Letters, 33(17):1440–1441, August 14, 1997, Institution of Engineering and Technology (IET): Stevenage, Herts, UK. doi: 10.1049/el:19970996. See also [1542].
2464. Yun-Wei Shang and Yu-Huang Qiu. A Note on the Extended Rosenbrock Function. Evolutionary Computation, 14(1):119–126, Spring 2006, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.2006.14.1.119. PubMed ID: 16536893.
2465. Claude Elwood Shannon. A Mathematical Theory of Communication. Bell System Technical Journal, 27:379–423/623–656, July–October 1948, Bell Laboratories: Berkeley Heights, NJ, USA. Fully available at http://plan9.bell-labs.com/cm/ms/what/shannonday/paper.html [accessed 2007-09-15]. Reprinted in [2466, 2467, 2521].
2466. Claude Elwood Shannon, author. Neil James Alexander Sloane and Aaron D. Wyner, editors. Claude Elwood Shannon: Collected Papers. Institute of Electrical and Electronics Engineers (IEEE) Press: New York, NY, USA, 1993. isbn: 0780304349. Google Books ID: 06QKAAAACAAJ. OCLC: 26403297, 246915902, and 318255507. See also [2465].
2467. Claude Elwood Shannon and Warren Weaver. The Mathematical Theory of Communication. University of Illinois Press (UIP): Champaign, IL, USA, 1949–1962. isbn: 0252725468 and 0252725484. Google Books ID: BHi4OgAACAAJ, H1MiAAAACAAJ, VB_COwAACAAJ, ZiYIAQAAIAAJ, and dk0n_eGcqsUC. OCLC: 13553507, 250803800, and 318164805. See also [2465].
2468. Nicholas Peter Sharples. Evolutionary Approaches to Adaptive Protocol Design. In The 12th White House Papers – Graduate Research in Cognitive and Computing Sciences at Sussex [2139], pages 60–62, 1999. See also [2470].
2469. Nicholas Peter Sharples. Evolutionary Approaches to Adaptive Protocol Design. PhD thesis, University of Sussex, School of Cognitive Science: Brighton, UK, August 2001–2003. Google Books ID: -BA3HAAACAAJ. OCLC: 59295838 and 195331508. asin: B001ABO1DK. See also [2470].
2470. Nicholas Peter Sharples and Ian Wakeman. Protocol Construction Using Genetic Search Techniques. In EvoWorkshops'00 [464], pages 73–95, 2000. doi: 10.1007/3-540-45561-2_23. See also [2469].
2471. Jude W. Shavlik, editor. Proceedings of the 15th International Conference on Machine Learning (ICML'98), July 24–27, 1998, Madison, WI, USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-556-8.
2472. Katharine Jane Shaw. Using Genetic Algorithms for Practical Multi-Objective Production Schedule Optimisation. PhD thesis, University of Sheffield, Department of Automatic Control and Systems Engineering: Sheffield, UK, 1997. OCLC: 53552498. asin: B001AJODJE.
2473. Katharine Jane Shaw and Peter J. Fleming. Including Real-Life Problem Preferences in Genetic Algorithms to Improve Optimisation of Production Schedules. In GALESIA'97 [3056], pages 239–244, 1997. INSPEC Accession Number: 5780801.
2474. J. Shekel. Test Functions for Multimodal Search Techniques. In Fifth Annual Princeton Conference on Information Science and Systems [2222], pages 354–359, 1971. Partly available at http://en.wikipedia.org/wiki/Shekel_function and http://www-optima.amp.i.kyoto-u.ac.jp/member/student/hedar/Hedar_files/TestGO_files/Page2354.htm [accessed 2007-11-06].
2475. Proceedings of the Third Joint Conference on Information Science (JCIS 1997), Section: The First International Workshop on Frontiers in Evolutionary Algorithms (FEA'98), March 1–5, 1997, Sheraton Imperial Hotel & Convention Center: Research Triangle Park, NC, USA. Workshop held in conjunction with the Third Joint Conference on Information Sciences.
2476. David J. Sheskin. Handbook of Parametric and Nonparametric Statistical Procedures. Chap-
man & Hall: London, UK and CRC Press, Inc.: Boca Raton, FL, USA, 2nd/3rd edi-
tion, 2004. isbn: 0203489535, 0849331196, 1-58488-133-X, and 1-58488-440-1.
Google Books ID: L9c4IAAACAAJ, bmwhcJqq01cC, t6smAAAACAAJ, and tasmAAAACAAJ.
OCLC: 42643434, 52134708, 247821285, and 265211382.
2477. Amit P. Sheth, Steffen Staab, Mike Dean, Massimo Paolucci, Diana Maynard, Timothy W. Finin, and Krishnaprasad Thirunarayan, editors. Proceedings of the 7th International Semantic Web Conference – The Semantic Web (ISWC'08), October 26–30, 2008, Kongresszentrum: Karlsruhe, Germany, volume 5318 in Information Systems and Application, incl. Internet/Web and HCI (SL 3), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-88563-3. Google Books ID: -YeUh7-PCkgC. Library of Congress Control Number (LCCN): 2008937502.
2478. Masaya Shinkai, Arturo Hernández Aguirre, and Kiyoshi Tanaka. Mutation Strategy Improves GA's Performance on Epistatic Problems. In CEC'02 [944], pages 968–973, volume 1, 2002. doi: 10.1109/CEC.2002.1007056.
2479. Rob Shipman. Genetic Redundancy: Desirable or Problematic for Evolutionary Adaptation? In ICANNGA'99 [801], pages 1–11, 1999. CiteSeerˣ: 10.1.1.32.2032.
2480. Rob Shipman, Mark Shackleton, Marc Ebner, and Richard A. Watson. Neutral Search Spaces for Artificial Evolution: A Lesson from Life. In Artificial Life VII [246], pages 162–167, 2000. Fully available at http://homepages.feis.herts.ac.uk/~nehaniv/al7ev/shipmanUH.ps and http://www.demo.cs.brandeis.edu/papers/alife7_shipman.ps [accessed 2011-12-05]. CiteSeerˣ: 10.1.1.17.664.
2481. Rob Shipman, Mark Shackleton, and Inman Harvey. The Use of Neutral Genotype-Phenotype Mappings for Improved Evolutionary Search. BT Technology Journal, 18(4):103–111, October 2000, Springer Netherlands: Dordrecht, Netherlands. doi: 10.1023/A:1026714927227. CiteSeerˣ: 10.1.1.9.5548.
2482. Shinichi Shirakawa and Tomoharu Nagao. Graph Structured Program Evolution: Evolution of Loop Structures. In GPTP'09 [2309], pages 177–194, 2009. doi: 10.1007/978-1-4419-1626-6_11. Fully available at http://www.cscs.umich.edu/PmWiki/Farms/GPTP-09/uploads/Main/shirakawa-03-24-2009.pdf [accessed 2010-11-21].
2483. Amit Shukla, Prasad M. Deshpande, and Jeffrey F. Naughton. Materialized View Selection for Multidimensional Datasets. In VLDB'98 [1151], pages 488–499, 1998. Fully available at http://www.vldb.org/conf/1998/p488.pdf [accessed 2011-04-10].
2484. Patrick Siarry and Gérard Berthiau. Fitting of Tabu Search to Optimize Functions of Continuous Variables. International Journal for Numerical Methods in Engineering, 40(13):2449–2457, 1997, Wiley Interscience: Chichester, West Sussex, UK. doi: 10.1002/(SICI)1097-0207(19970715)40:13<2449::AID-NME172>3.0.CO;2-O.
2485. Patrick Siarry and Zbigniew Michalewicz, editors. Advances in Metaheuristics for Hard Optimization, Natural Computing Series. Springer New York: New York, NY, USA, November 2007. doi: 10.1007/978-3-540-72960-0. isbn: 3-540-72959-3. Google Books ID: TkMR1b-VCR4C. Library of Congress Control Number (LCCN): 2007929485. Series editors: G. Rozenberg, Thomas Bäck, Ágoston E. Eiben, J.N. Kok, and H.P. Spaink.
2486. Patrick Siarry, Gérard Berthiau, François Durdin, and Jacques Haussy. Enhanced Simulated Annealing for Globally Minimizing Functions of Many-Continuous Variables. ACM Transactions on Mathematical Software (TOMS), 23(2):209–228, June 1997, ACM Press: New York, NY, USA. doi: 10.1145/264029.264043.
2487. Wojciech Władysław Siedlecki and Jack Sklansky. Constrained Genetic Optimization via Dynamic Reward-Penalty Balancing and its Use in Pattern Recognition. In ICGA'89 [2414], pages 141–150, 1989. See also [2488].
2488. Wojciech Władysław Siedlecki and Jack Sklansky. Constrained Genetic Optimization via Dynamic Reward-Penalty Balancing and its Use in Pattern Recognition. In Handbook of Pattern Recognition and Computer Vision [539], Chapter 1.3.3, pages 108–124. World Scientific Publishing Co.: Singapore, 1993. See also [2487].
2489. Eric V. Siegel and John R. Koza, editors. Papers from the 1995 AAAI Fall Symposium on Genetic Programming, November 10–12, 1995, Massachusetts Institute of Technology (MIT): Cambridge, MA, USA. AAAI Press: Menlo Park, CA, USA. isbn: 0-929280-92-X. Google Books ID: 03UmAAAACAAJ. GBV-Identification (PPN): 226307441. AAAI Technical Report FS-95-01.
2490. Sidney Siegel and N. John Castellan Jr. Nonparametric Statistics for The Behavioral Sciences, Humanities/Social Sciences/Languages. McGraw-Hill: New York, NY, USA, 1956–1988. isbn: 0-07-057357-3 and 070573434X. Google Books ID: ElwEAAAACAAJ, MO1lGwAACAAJ, QYFCGgAACAAJ, and YmxEAAAAIAAJ. OCLC: 16131891 and 255536918.
2491. Karl Sigurjonsson. Taboo Search Based Metaheuristic for Solving Multiple Depot VRPPD with Intermediary Depots. Master's thesis, Technical University of Denmark (DTU), Informatics and Mathematical Modelling (IMM): Lyngby, Denmark, September 1, 2008. Fully available at http://orbit.dtu.dk/getResource?recordId=224453&objectId=1&versionId=1 [accessed 2010-11-02].
2492. Sara Silva, James A. Foster, Miguel Nicolau, Penousal Machado, and Mario Giacobini, editors. Proceedings of the 14th European Conference on Genetic Programming (EuroGP'11), April 27–29, 2011, Torino, Italy, volume 6621/2011 in Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-642-20407-4. Partly available at http://evostar.dei.uc.pt/call-for-contributions/eurogp/ [accessed 2011-02-28]. Library of Congress Control Number (LCCN): 2011925001.
2493. Kwang Mon Sim and Weng Hong Sun. Multiple Ant-Colony Optimization for Network Routing. In CW'02 [672], pages 277–281, 2002. doi: 10.1109/CW.2002.1180890.
2494. Dan Simon. Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches. John Wiley & Sons Ltd.: New York, NY, USA, June 2006. isbn: 0470045345 and 0471708585. Google Books ID: UiMVoP_7TZkC. OCLC: 64084871 and 85821130.
2495. George Gaylord Simpson. The Baldwin Effect. Evolution – International Journal of Organic Evolution, 7(2):110–117, June 1953, Society for the Study of Evolution (SSE) and Wiley Interscience: Chichester, West Sussex, UK.
2496. Patrick K. Simpson, editor. The 1998 IEEE International Conference on Evolutionary Computation (CEC'98), 1998, Egan Civic & Convention Center and Anchorage Hilton Hotel: Anchorage, AK, USA. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7803-4869-9 and 0-7803-4870-2. Google Books ID: FMRVAAAAMAAJ, OUizJgAACAAJ, and uvhJAAAACAAJ. INSPEC Accession Number: 6016643. Catalogue no.: 98TH8360. Part of [1330].
2497. Karl Sims. Artificial Evolution for Computer Graphics. In SIGGRAPH'91 [2702], pages 319–328, 1991. doi: 10.1145/122718.122752. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/sims91a.html [accessed 2009-06-19] and http://www.naturewizard.com/papers/evolution%20-%20p319-sims.pdf [accessed 2009-06-18].
2498. Mark C. Sinclair. Minimum Cost Topology Optimisation of the COST 239 European Optical Network. In ICANNGA'95 [2141], pages 26–29, 1995. CiteSeerˣ: 10.1.1.53.6542. See also [2500, 2501].
2499. Mark C. Sinclair. Evolutionary Telecommunications: A Summary. In GECCO'99 WS [2502], pages 209–212, 1999. Fully available at http://uk.geocities.com/markcsinclair/ps/etppf_sin_sum.ps.gz [accessed 2008-08-01]. CiteSeerˣ: 10.1.1.38.6797. Partly available at http://uk.geocities.com/markcsinclair/ps/etppf_sin_slides.ps.gz [accessed 2008-08-01].
2500. Mark C. Sinclair. Optical Mesh Network Topology Design using Node-Pair Encoding Genetic Programming. In GECCO'99 [211], pages 1192–1197, volume 2, 1999. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/sinclair_1999_OMNTDNEGP.html [accessed 2008-07-21]. CiteSeerˣ: 10.1.1.49.453. See also [2498].
2501. Mark C. Sinclair. Node-Pair Encoding Genetic Programming for Optical Mesh Network Topology Design. In Telecommunications Optimization: Heuristic and Adaptive Techniques [640], Chapter 6, pages 99–114. John Wiley & Sons Ltd.: New York, NY, USA, 2000. See also [2498].
2502. Mark C. Sinclair, David Wolfe Corne, and George D. Smith, editors. Proceedings of Evolutionary Telecommunications: Past, Present and Future – A Bird-of-a-Feather Workshop at GECCO'99 (GECCO'99 WS), July 13, 1999, Orlando, FL, USA. Fully available at http://uk.geocities.com/markcsinclair/etppf.html [accessed 2008-06-25]. See also [211, 2093].
2503. Gulshan Singh and Kalyanmoy Deb. Comparison of Multi-Modal Optimization Algorithms based on Evolutionary Algorithms. In GECCO'06 [1516], pages 1305–1312, 2006. doi: 10.1145/1143997.1144200. Fully available at https://ceprofs.civil.tamu.edu/ezechman/CVEN689/References/singh_deb_2006.pdf [accessed 2011-03-14]. Genetic algorithms session.
2504. Madan G. Singh, editor. Systems and Control Encyclopedia. Pergamon Press: Ox-
ford, UK, 1987. isbn: 0-08-028709-3, 0080359337, and 0080406017. Google Books
ID: EbE9AAAACAAJ, ErE9AAAACAAJ, and re9QAAAAMAAJ.
2505. Rama Shankar Singh, Costas B. Krimbas, Diane B. Paul, and John Beatty, editors. Think-
ing about Evolution: Historical, Philosophical, and Political Perspectives. Cambridge Uni-
versity Press: Cambridge, UK, November 2000. isbn: 0521620708. Google Books
ID: HmVbYJ93d-AC. LC Classication: QH366.2 .T53 2000.
2506. Sameer Singh, Maneesha Singh, Chid Apte, and Petra Perner, editors. Pattern Recognition and Data Mining – Third International Conference on Advances in Pattern Recognition, Part I (ICAPR'05), August 22–25, 2005, Bath, UK, volume 3686/2005 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11551188. isbn: 3-540-28757-4.
2507. Abhishek Sinha and David Edward Goldberg. A Survey of Hybrid Genetic and Evolu-
tionary Algorithms. IlliGAL Report 2003004, Illinois Genetic Algorithms Laboratory (Il-
liGAL), Department of Computer Science, Department of General Engineering, University
of Illinois at Urbana-Champaign: Urbana-Champaign, IL, USA, January 2003. Fully avail-
able at http://www.illigal.uiuc.edu/pub/papers/IlliGALs/2003004.ps.Z [ac-
cessed 2010-09-11].
2508. Transportation Optimization Portal TOP. SINTEF: Trondheim, Norway, October 21, 2010.
Fully available at http://www.sintef.no/Projectweb/TOP/ [accessed 2011-10-13].
2509. Moshe Sipper, Daniel Mange, and Andrés Pérez-Uribe, editors. Proceedings of the Second International Conference on Evolvable Systems: From Biology to Hardware (ICES'98), September 23–25, 1998, Lausanne, Switzerland, volume 1478/1998 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/BFb0057601. isbn: 3-540-64954-9.
2510. Michael Sipser. Introduction to the Theory of Computation. Thomson Course Technology: Boston, MA, USA, Cengage Learning (Wadsworth Publishing Company): Stamford, CT, USA, and Prindle, Weber, Schmidt (PWS) Publishing Company: Boston, MA, USA, 1997–2006. isbn: 0-534-94728-X, 0534948103, and 0619217642. Google Books ID: -wlWAAAACAAJ, 4CR8AQAACAAJ, wlWAAAACAAJ, and eRYFAAAACAAJ. OCLC: 35558950 and 64006340.
2511. Evren Sirin, Bijan Parsia, Dan Wu, James Hendler, and Dana Nau. HTN Planning for Web Service Composition using SHOP2. Web Semantics: Science, Services and Agents on the World Wide Web, 1(4):377–396, October 2004, Elsevier Science Publishers B.V.: Essex, UK. International Semantic Web Conference 2003.
2512. Evren Sirin, Bijan Parsia, and James Hendler. Template-based Composition of Semantic Web Services. In AAAI Fall Symposium on Agents and the Semantic Web [2138], pages 85–92, 2005. Fully available at http://www.mindswap.org/papers/extended-abstract.pdf [accessed 2011-01-12]. CiteSeerˣ: 10.1.1.74.2584.
2513. Jaroslaw Skaruz and Franciszek Seredynski. Detecting Web Application Attacks with Use of Gene Expression Programming. pages 2029–2035. doi: 10.1109/CEC.2009.4983190. INSPEC Accession Number: 10688785.
2514. Jaroslaw Skaruz and Franciszek Seredynski. Web Application Security through Gene Expression Programming. In EvoWorkshops'09 [1052], pages 1–10, 2009. doi: 10.1007/978-3-642-01129-0_1.
2515. Benjamin Skellett, Benjamin Cairns, Nicholas Geard, Bradley Tonkes, and Janet Wiles. Rugged NK Landscapes Contain the Highest Peaks. In GECCO'05 [304], pages 579–584, 2005. CiteSeerˣ: 10.1.1.1.5011.
2516. Hendrik Skubch. Hierarchical Strategy Learning for FLUX Agents. Master's thesis, Technische Universität Dresden: Dresden, Sachsen, Germany, February 18, 2007, Michael Thielscher, Supervisor. See also [2517].
2517. Hendrik Skubch. Hierarchical Strategy Learning for FLUX Agents: An Applied Technique. VDM Verlag Dr. Müller AG und Co. KG: Saarbrücken, Saarland, Germany, 2006. isbn: 3-8364-5271-5. Google Books ID: HONbJgAACAAJ and syxkPQAACAAJ. See also [2516].
2518. Hendrik Skubch, Michael Wagner, Roland Reichle, Stefan Triller, and Kurt Geihs. Towards a Comprehensive Teamwork Model for Highly Dynamic Domains. In ICAART'10 [917], pages 121–127, volume 2, 2010.
2519. Proceedings of the 3rd International Workshop on Machine Learning (ML88), 1985, Skytop,
PA, USA.
2520. Derek H. Sleeman and Peter Edwards, editors. Proceedings of the 9th International Workshop on Machine Learning (ML'92), July 1–3, 1992, Aberdeen, Scotland, UK. Morgan Kaufmann
Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-247-X.
2521. David Slepian, editor. Key Papers in the Development of Information Theory. Insti-
tute of Electrical and Electronics Engineers (IEEE) Press: New York, NY, USA, 1974.
isbn: 0471798525 and 0879420286. Google Books ID: AItQAAAAMAAJ, JSfdGgAACAAJ,
VPFeAQAACAAJ, and VfFeAQAACAAJ. OCLC: 940335, 214976152, and 257564453. See
also [2465].
2522. Proceedings of the 7th International Conference on Soft Computing, Evolutionary Computation, Genetic Programming, Fuzzy Logic, Rough Sets, Neural Networks, Fractals, Bayesian Methods (MENDEL'01), June 6–8, 2001, Brno University of Technology: Brno, Czech Republic. Slezská Univerzita v Opavě, Filozoficko-přírodovědecká fakulta, Ústav informatiky: Opava, Czech Republic. isbn: 80-7248-107-X.
2523. N. J. H. Small. Miscellanea: Marginal Skewness and Kurtosis in Testing Multivariate Normality. Journal of the Royal Statistical Society: Series C (Applied Statistics), 29(1):85–87, 1980, Blackwell Publishing for the Royal Statistical Society: Chichester, West Sussex, UK.
2524. Waleed W. Smari, editor. Third International Conference on Information Reuse and Integration (IRI'01), November 27–29, 2001, Luxor Hotel: Las Vegas, NV, USA. International Society for Computers and Their Applications, Inc. (ISCA): Cary, NC, USA. isbn: 1-880843-41-2. Google Books ID: QnxyAAAACAAJ. OCLC: 55764490.
2525. Mikhail I. Smirnov, editor. Autonomic Communication – Revised Selected Papers from the First International IFIP Workshop (WAC'04), October 18–19, 2004, Berlin, Germany, volume 3457/2005 in Computer Communication Networks and Telecommunications, Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11520184. isbn: 3-540-27417-0. Google Books ID: kE3DeVNfaMcC and pfPiPgAACAAJ. OCLC: 61029791, 145832250, 255319050, 318300920, and 403681622. Library of Congress Control Number (LCCN): 2005928376.
2526. Alice E. Smith and David W. Coit. Penalty Functions. In Handbook of Evolutionary Com-
putation [171], Chapter C 5.2. Oxford University Press, Inc.: New York, NY, USA, Institute
of Physics Publishing Ltd. (IOP): Dirac House, Temple Back, Bristol, UK, and CRC Press,
Inc.: Boca Raton, FL, USA, 1997. Fully available at http://www.cs.cinvestav.mx/~constraint/papers/chapter.pdf [accessed 2008-11-15].
2527. George D. Smith, Nigel C. Steele, and Rudolf F. Albrecht, editors. Proceedings of the International Conference on Artificial Neural Nets and Genetic Algorithms (ICANNGA'97),
April 14, 1997, Vienna, Austria, SpringerComputerScience. Springer Verlag GmbH: Vienna,
Austria. isbn: 3-211-83087-1. Google Books ID: KqZQAAAAMAAJ. OCLC: 40146054 and
246391089. Published in 1998.
2528. John Maynard Smith. Natural Selection and the Concept of a Protein Space. Nature, 225(5232):563–564, February 7, 1970. doi: 10.1038/225563a0. PubMed ID: 5411867. Received November 7, 1969.
2529. Joshua R. Smith. Designing Biomorphs with an Interactive Genetic Algorithm. In ICGA'91 [254], pages 535–538, 1991. Fully available at http://web.media.mit.edu/~jrs/biomorphs.pdf [accessed 2009-06-18].
2530. Michael H. Smith, M. Lee, James M. Keller, and John Yen, editors. Proceedings of the Biennial Conference of the North American Fuzzy Information Processing Society (NAFIPS'96), June 19–22, 1996, Berkeley, CA, USA. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 0-7803-3225-3 and 0-7803-3226-1. Google Books ID: 6chVAAAAMAAJ and
X9SOJgAACAAJ. OCLC: 35188367 and 63197371.
2531. Murray Smith. Neural Networks for Statistical Modeling. John Wiley & Sons Ltd.: New York,
NY, USA, Van Nostrand Reinhold Company, and International Thomson Computer Press:
London, UK, 1993. isbn: 0442013108 and 1850328420. Google Books ID: TjjFAAAACAAJ,
TzjFAAAACAAJ, and lEADKgAACAAJ. OCLC: 27145760 and 34181200.
2532. Peter W. H. Smith and Kim Harries. Code Growth, Explicitly Defined Introns, and Alternative Selection Schemes. Evolutionary Computation, 6(4):339–360, Winter 1998, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1998.6.4.339. CiteSeerx: 10.1.1.24.2202.
2533. Reid G. Smith and A. Carlisle Scott, editors. Proceedings of the Third Conference on Innovative Applications of Artificial Intelligence (IAAI'91), June 14–19, 1991, Anaheim, CA, USA. AAAI Press: Menlo Park, CA, USA. isbn: 0-262-69148-5. Partly available at http://www.aaai.org/Conferences/IAAI/iaai91.php [accessed 2007-09-06].
2534. Robert Elliott Smith, editor. A Report on The First International Workshop on Learning Classifier Systems (IWLCS-92). Technical Report, NASA Johnson Space Center: Houston, TX, USA, 1992. CiteSeerx: 10.1.1.45.4955. See also [2002].
2535. Shana Shiang-Fong Smith. Using Multiple Genetic Operators to Reduce Premature Convergence in Genetic Assembly Planning. Computers in Industry – An International, Application Oriented Research Journal, 54(1):35–49, May 2004, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. doi: 10.1016/j.compind.2003.08.001.
2536. Stephen Frederick Smith. A Learning System based on Genetic Adaptive Algorithms. PhD
thesis, University of Pittsburgh: Pittsburgh, PA, USA, 1980. Order No.: AAI8112638. University Microfilms No.: 81-12638.
2537. Stephen L. Smith and Stefano Cagnoni, editors. Medical Applications of Genetic and Evolutionary Computation Workshop (MedGEC'08), July 12, 2008, Renaissance Atlanta Hotel
Downtown: Atlanta, GA, USA. ACM Press: New York, NY, USA. Part of [1519].
2538. Tom Smith, Phil Husbands, Paul Layzell, and Michael O'Shea. Fitness Landscapes and Evolvability. Evolutionary Computation, 10(1):1–34, Summer 2002, MIT Press: Cambridge,
MA, USA. doi: 10.1162/106365602317301754. Fully available at http://wotan.liu.edu/
docis/dbl/evocom/2002_10_1_1_FLAE.html [accessed 2007-07-28].
2539. Donald L. Snyder and Michael I. Miller. Random Point Processes in Time and Space, Springer
Texts in Electrical Engineering. Springer New York: New York, NY, USA, June 19, 1991.
isbn: 0387975772 and 3540975772. Google Books ID: MIkwAQAACAAJ. OCLC: 23287518
and 246549179.
2540. Lawrence V. Snyder and Mark S. Daskin. A Random-Key Genetic Algorithm for the Generalized Traveling Salesman Problem. European Journal of Operational Research (EJOR), 174(1):38–53, October 1, 2006, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/j.ejor.2004.09.057. Fully available at http://www.lehigh.edu/~lvs2/Papers/gtsp.pdf and http://www.lehigh.edu/~lvs2/Papers/gtsp_ejor.pdf [accessed 2010-11-22]. CiteSeerx: 10.1.1.1.9730, 10.1.1.71.5810, and 10.1.1.87.6258.
2541. Marcus Henrique Soares Mendes, Gustavo Luís Soares, and João Antonio de Vasconcelos. PID Step Response Using Genetic Programming. In SEAL'10 [2585], pages 359–368, 2010. doi: 10.1007/978-3-642-17298-4_37.
2542. Elliott Sober. The Two Faces of Fitness. In Thinking about Evolution: Historical, Philosophical, and Political Perspectives [2505], Chapter 15, pages 309–321. Cambridge University Press: Cambridge, UK, 2000. Fully available at http://philosophy.wisc.edu/sober/tff.pdf [accessed 2008-08-11].
2543. Proceedings of the twelfth annual ACM-SIAM Symposium on Discrete Algorithms, 2001,
Washington, DC, USA. Society for Industrial and Applied Mathematics (SIAM): Philadel-
phia, PA, USA. isbn: 0-89871-490-7.
2544. AISB 2002 Convention: Logic, Language and Learning (AISB'02), April 3–5, 2002, Imperial College London: London, UK. Society for the Study of Artificial Intelligence and the Simulation of Behaviour (SSAISB): Chichester, West Sussex, UK. Fully available at http://www.aisb.org.uk/publications/proceedings.shtml [accessed 2008-09-10].
2545. AISB 2003 Convention: Cognition in Machines and Animals (AISB'03), April 7–11, 2003, University of Wales: Aberystwyth, Wales, UK. Society for the Study of Artificial Intelligence and the Simulation of Behaviour (SSAISB): Chichester, West Sussex, UK. Fully available at http://www.aisb.org.uk/publications/proceedings.shtml [accessed 2008-09-10].
2546. AISB 2004 Convention: Motion, Emotion and Cognition (AISB'04), March 29–April 1, 2004, University of Leeds: Leeds, UK. Society for the Study of Artificial Intelligence and the Simulation of Behaviour (SSAISB): Chichester, West Sussex, UK. Fully available at http://www.aisb.org.uk/publications/proceedings.shtml [accessed 2008-09-10].
2547. Social Intelligence and Interaction in Animals, Robots and Agents (AISB'05), April 12–15, 2005, University of Hertfordshire: Hatfield, England, UK. Society for the Study of Artificial Intelligence and the Simulation of Behaviour (SSAISB): Chichester, West Sussex, UK. Fully available at http://www.aisb.org.uk/publications/proceedings.shtml [accessed 2008-09-10].
2548. Adaptation in Artificial and Biological Systems (AISB'06), April 3–6, 2006, University of Bristol: Bristol, UK. Society for the Study of Artificial Intelligence and the Simulation of Behaviour (SSAISB): Chichester, West Sussex, UK. Fully available at http://www.aisb.org.uk/publications/proceedings.shtml [accessed 2008-09-10].
2549. Artificial and Ambient Intelligence (AISB'07), April 1–4, 2007, Newcastle University: Newcastle upon Tyne, UK. Society for the Study of Artificial Intelligence and the Simulation of Behaviour (SSAISB): Chichester, West Sussex, UK. Fully available at http://www.aisb.org.uk/publications/proceedings.shtml [accessed 2008-09-10].
2550. Bambang I. Soemarwoto, O. J. Boelens, M. Allan, M. T. Arthur, K. Bütesch, N. Ceresola, and W. Fritz, editors. 21st Applied Aerodynamics Conference, Towards the Simulation of Unsteady Manoeuvre Dominated by Vortical Flow, Common Exercise I of WEAG THALES JP12.15 (AIAA'03), June 23–26, 2003, Hilton Walt Disney World, Walt Disney World Resort: Orlando, FL, USA. American Institute of Aeronautics and Astronautics (AIAA): Reston, VA, USA. isbn: 1563476002 and 9781563476006.
2551. Richard Soland, Ambrose Goicoechea, L. Duckstein, and Stanley Zionts, editors. Proceedings of the 9th International Conference on Multiple Criteria Decision Making: Theory and Applications in Business, Industry, and Government (MCDM'90), August 5–8, 1990, Fairfax, VA, USA. Springer New York: New York, NY, USA. isbn: 0-387-97805-4
and 3-540-97805-4. OCLC: 25282714, 232664357, 471769870, 633632497, and
638193262. Library of Congress Control Number (LCCN): 92002703. GBV-Identication
(PPN): 120911701. Published in July 1992.
2552. Ashu M. G. Solo, editor. Proceedings of the 2011 International Conference on Genetic and
Evolutionary Methods (GEM'11), July 18–21, 2011, Las Vegas, NV, USA.
2553. Dawn Xiaodong Song. A Linear Genetic Programming Approach to Intrusion Detection.
Master's thesis, Dalhousie University: Halifax, NS, Canada, March 2003, Malcolm Ian Heywood and A. Nur Zincir-Heywood, Advisors. Fully available at http://users.cs.dal.ca/~mheywood/Thesis/DSong.pdf [accessed 2008-06-16]. CiteSeerx: 10.1.1.91.3779. See also [2555].
2554. Dawn Xiaodong Song, Adrian Perrig, and Doantam Phan. AGVI – Automatic Generation, Verification, and Implementation of Security Protocols. In CAV'01 [280], pages 241–245, 2001. doi: 10.1007/3-540-44585-4_21.
2555. Dawn Xiaodong Song, Malcolm Ian Heywood, and A. Nur Zincir-Heywood. A Linear Genetic Programming Approach to Intrusion Detection. In GECCO'03-II [485], pages 2325–2336, 2003. Fully available at http://users.cs.dal.ca/~mheywood/X-files/Publications/27242325.pdf [accessed 2008-06-16]. CiteSeerx: 10.1.1.61.7475. See also [2553].
2556. Il-Yeol Song and Torben Bach Pedersen, editors. Proceedings of the ACM Tenth International Workshop on Data Warehousing and OLAP (DOLAP'07), November 9, 2007, Lisbon,
Portugal. ACM Press: New York, NY, USA. isbn: 978-1-59593-827-5. Part of [28].
2557. Il-Yeol Song and Karen C. Davis, editors. Proceedings of the 7th ACM International Workshop
on Data Warehousing and OLAP (DOLAP'04), November 12–13, 2004, Hyatt Arlington
Hotel: Washington, DC, USA. ACM Press: New York, NY, USA. isbn: 1-58113-977-2.
Part of [1142].
2558. Il-Yeol Song and Dimitri Theodoratos, editors. Proceedings of the Fifth International Workshop on Data Warehousing and OLAP (DOLAP'02), November 8, 2002, McLean, VA, USA.
ACM Press: New York, NY, USA. Part of [25? ].
2559. Branko Soucek and IRIS Group, editors. Dynamic, Genetic, and Chaotic Programming: The
Sixth-Generation, Sixth Generation Computer Technologies. Wiley Interscience: Chichester,
West Sussex, UK, April 1992. isbn: 047155717X. Partly available at http://www.
amazon.com/gp/reader/047155717X/ref=sib_dp_pt/002-6076954-4198445#
reader-link [accessed 2008-05-29]. Google Books ID: tIFQAAAAMAAJ.
2560. Terence Soule and James A. Foster. Removal Bias: A New Cause of Code Growth in Tree Based Evolutionary Programming. In CEC'98 [2496], pages 781–786, 1998. doi: 10.1109/ICEC.1998.700151. CiteSeerx: 10.1.1.35.3659. INSPEC Accession Number: 6016655.
2561. James C. Spall. Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control, Wiley-Interscience Series in Discrete Mathematics and Optimization. Wiley Interscience: Chichester, West Sussex, UK, first edition, June 2003. isbn: 0-471-33052-3
and 0-471-72213-8. Google Books ID: f66OIvvkKnAC. OCLC: 50773216, 51235522,
474647590, and 637018778. Library of Congress Control Number (LCCN): 2002038049.
LC Classication: QA274 .S63 2003.
2562. Bruce M. Spatz and Gregory J. E. Rawlins, editors. Proceedings of the First Workshop on
Foundations of Genetic Algorithms (FOGA'90), July 15–18, 1990, Indiana University, Bloomington Campus: Bloomington, IN, USA. Morgan Kaufmann Publishers Inc.: San Francisco,
CA, USA. isbn: 1-55860-170-8. Google Books ID: Df12yLrlUZYC. OCLC: 24168098,
263581771, and 311490755. Published July 1, 1991.
2563. William M. Spears and Kenneth Alan De Jong. On the Virtues of Parameterized Uniform
Crossover. In ICGA'91 [254], pages 230–236, 1991. Fully available at http://www.mli.gmu.edu/papers/91-95/91-18.pdf [accessed 2010-08-02]. CiteSeerx: 10.1.1.51.8141.
2564. William M. Spears and Kenneth Alan De Jong. Using Genetic Algorithms for Supervised Concept Learning. In TAI'90 [1358], pages 335–341, 1990. doi: 10.1109/TAI.1990.130359. Fully available at http://www.cs.uwyo.edu/~wspears/papers/ieee90.pdf [accessed 2009-07-21]. CiteSeerx: 10.1.1.57.147. INSPEC Accession Number: 4076618.
2565. William M. Spears and Worthy Neil Martin. Proceedings of the Sixth Workshop on Foundations of Genetic Algorithms (FOGA'00), July 21–23, 2000, Charlottesville, VA, USA. Morgan
Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-734-X. Google Books
ID: QPFQAAAAMAAJ and uiX9TNaSxxYC.
2566. Lee Spector. Evolving Control Structures with Automatically Defined Macros. In Papers from the 1995 AAAI Fall Symposium on Genetic Programming [2489], pages 99–105, 1995. Fully available at http://helios.hampshire.edu/lspector/pubs/ADM-fallsymp-e.pdf and http://www.aaai.org/Papers/Symposia/Fall/1995/FS-95-01/FS95-01-014.pdf [accessed 2010-08-21]. CiteSeerx: 10.1.1.51.6239. See also
[2567].
2567. Lee Spector. Simultaneous Evolution of Programs and their Control Structures. In Advances in Genetic Programming II [106], Chapter 7, pages 137–154. MIT Press: Cambridge, MA, USA, 1996. Fully available at http://hampshire.edu/~lasCCS/pubs/AiGP2-post-final-e.pdf, http://helios.hampshire.edu/lspector/pubs/AiGP2-post-final-e.pdf, and http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/spector_1996_aigp2.html [accessed 2010-08-21]. CiteSeerx: 10.1.1.39.9677.
2568. Lee Spector. Automatic Quantum Computer Programming: A Genetic Programming Approach, volume 7 in Genetic Programming Series. Springer Science+Business Media, Inc.: New York, NY, USA, paperback 2007 edition, 2006. isbn: 0-387-36496-X and
0-387-36791-8. Google Books ID: 9wxzm8Gm-fUC and mEKfvtxmn9MC. Library of
Congress Control Number (LCCN): 2006931640. Series editor: John Koza.
2569. Lee Spector, William B. Langdon, Una-May O'Reilly, and Peter John Angeline, editors.
Advances in Genetic Programming, volume 3 in Complex Adaptive Systems, Bradford Books.
MIT Press: Cambridge, MA, USA, July 16, 1996. isbn: 0-262-19423-6. Google Books
ID: 5Qwbal3AY6oC. OCLC: 29595260, 42274498, 42579104, 60710976, 60863107,
154701926, and 222641812.
2570. Lee Spector, Erik D. Goodman, Annie S. Wu, William B. Langdon, Hans-Michael Voigt,
Mitsuo Gen, Sandip Sen, Marco Dorigo, Shahram Pezeshk, Max H. Garzon, and Edmund K. Burke, editors. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'01), July 7–11, 2001, Holiday Inn Golden Gateway Hotel: San Francisco, CA,
USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-774-9.
Google Books ID: 81QKAAAACAAJ and cYVQAAAAMAAJ. See also [1104].
2571. Herbert Spencer. The Principles of Biology, volume 1. D. Appleton and Company: New York,
NY, USA and Williams and Norgate: London & Edinburgh, UK, 1st edition, 1864–1868. Fully
available at http://books.google.com/books?id=rHoOAAAAYAAJ and http://www.
archive.org/details/ThePrinciplesOfBiology [accessed 2009-06-11].
2572. W. Spendley, G. R. Hext, and F. R. Himsworth. Sequential Application of Simplex Designs
in Optimisation and Evolutionary Operation. Technometrics, 4(4):441–461, November 1962,
American Statistical Association (ASA): Alexandria, VA, USA and American Society for
Quality (ASQ): Milwaukee, WI, USA. OCLC: 486202935. JSTOR Stable ID: 1266283.
2573. Christian Spieth, Felix Streichert, Nora Speer, and Andreas Zell. Utilizing an Island
Model for EA to Preserve Solution Diversity for Inferring Gene Regulatory Networks. In
CEC'04 [1369], pages 146–151, volume 1, 2004. doi: 10.1109/CEC.2004.1330850. Fully
available at http://www-ra.informatik.uni-tuebingen.de/software/JCell/
publications.html [accessed 2007-08-24].
2574. Ralph H. Sprague, Jr., editor. Proceedings of the 33rd Hawaii International Conference
on System Sciences (HICSS'00), January 4–7, 2000, Maui, HI, USA. IEEE Computer Society: Washington, DC, USA. isbn: 0-7695-0493-0, 0769504949, and 0769504957.
Google Books ID: 7QIEAAAACAAJ and IK9VAAAAMAAJ. OCLC: 43892251, 228269935,
and 423974973. Order No.: PR00493.
2575. Genetic Programming Theory and Practice V, Proceedings of the Genetic Programming Theory and Practice 2007 Workshop (GPTP'07), May 17–19, 2007, University of Michigan, Center
for the Study of Complex Systems (CSCS): Ann Arbor, MI, USA, Genetic and Evolution-
ary Computation. Springer US: Boston, MA, USA and Kluwer Academic Publishers: Nor-
well, MA, USA. Partly available at http://www.cscs.umich.edu/gptp-workshops/
gptp2007/ [accessed 2007-09-28].
2576. Proceedings of the V International Workshop on Nature Inspired Cooperative Strategies for
Optimization (NICSO'11), October 20–22, 2011, Cluj-Napoca, Romania, Studies in Computational Intelligence. Springer-Verlag: Berlin/Heidelberg. Partly available at http://www.nicso2011.org/ [accessed 2011-05-25].
2577. Selected Papers from the First Asia-Pacific Conference on Simulated Evolution and Learning (SEAL'96), November 9–12, 1996, Taejon, South Korea, volume 1285/1997 in
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/BFb0028514.
2578. Sixth International Workshop on Learning Classifier Systems (IWLCS'03), July 12–16, 2003, Holiday Inn Chicago: Chicago, IL, USA. Springer-Verlag GmbH: Berlin, Germany. Held during GECCO-2003. Part of [484, 485]. See also [1589].
2579. The Tenth International Workshop on Learning Classifier Systems (IWLCS'07), July 8, 2007, University College London (UCL), Department of Computer Science: London, UK, Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. Held in association with GECCO-2007. See also [2699, 2700].
2580. Sixth International Conference on Ant Colony Optimization and Swarm Intelligence
(ANTS'08), September 22–28, 2008, Brussels, Belgium, Lecture Notes in Computer Science
(LNCS). Springer-Verlag GmbH: Berlin, Germany.
2581. Learning and Intelligent OptimizatioN (LION 4), January 18–22, 2010, Ca' Dolfin Building, Università Ca' Foscari: Venice, Italy, Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
2582. Proceedings of the 35th International Symposium on Mathematical Foundations of Computer
Science (MFCS'10), August 21–29, 2010, Masaryk University, Faculty of Informatics: Brno,
Czech Republic, Advanced Research in Computing and Software Science (ARCoSS), Lecture
Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
2583. Proceedings of the 9th Mexican International Conference on Artificial Intelligence (MICAI'10), November 8–13, 2010, Autonomous University of Hidalgo State: Pachuca, Mexico,
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
2584. Proceedings of the 9th International Symposium on Experimental Algorithms (SEA'10), May 20–22, 2010, Hotel Continental Terme: Ischia Porto, Ischia Island, Naples, Italy, volume
6049 in Programming and Software Engineering Series, Lecture Notes in Computer Sci-
ence (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3642131921. Google Books
ID: XerGSAAACAAJ.
2585. Proceedings of the Eighth International Conference on Simulated Evolution And Learning
(SEAL'10), December 1–4, 2010, Indian Institute of Technology Kanpur (IIT): Kanpur,
Uttar Pradesh, India, Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH:
Berlin, Germany.
2586. Evolution Artificielle: Proceedings of the 10th Biannual International Conference on Artificial Evolution (AE'11), October 24–26, 2011, Angers Congress Centre: Angers, France, Lecture
Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. Partly avail-
able at http://www.info.univ-angers.fr/ea2011/ [accessed 2011-03-30].
2587. Proceedings of the Fourth Conference on Artificial General Intelligence (AGI'11), August 3–6, 2011, Mountain View, CA, USA, Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
2588. Proceedings of the 4th European Event on Bio-Inspired Algorithms for Continuous Parameter Optimisation (EvoNUM'11), 2011, Torino, Italy. Springer-Verlag GmbH: Berlin, Germany. Partly available at http://evostar.dei.uc.pt/call-for-contributions/evoapplications/evonum/ [accessed 2011-02-28]. Part of [788? ].
2589. Proceedings of the 10th International Conference on Adaptive and Natural Computing Algorithms (ICANNGA'11), April 14–16, 2011, University of Ljubljana: Ljubljana, Slovenia,
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
2590. Proceedings of the 7th Semantic Web Services Challenge Workshop (SWSC'08), October 26,
2008, Kongresszentrum: Karlsruhe, Germany. Springer-Verlag GmbH: Berlin, Germany and
Stanford University, Computer Science Department, Stanford Logic Group: Stanford, CA,
USA. Fully available at http://sws-challenge.org/wiki/index.php/Workshop_
Karlsruhe [accessed 2010-12-12]. Part of [2166, 2477].
2591. Proceedings of the 11th UK Performance Engineering Workshop (UKPEW'95), September 5–9, 1995, Liverpool John Moores University: Liverpool, UK. Springer-Verlag London Limited:
London, UK.
2592. Giovanni Squillero. MicroGP – An Evolutionary Assembly Program Generator. Genetic Programming and Evolvable Machines, 6(3):247–263, September 2005, Springer Netherlands: Dordrecht, Netherlands. Imprint: Kluwer Academic Publishers: Norwell, MA, USA.
doi: 10.1007/s10710-005-2985-x.
2593. N. S. Sridharan, editor. Proceedings of the 11th International Joint Conference on Artificial Intelligence (IJCAI'89-I), August 1989, Detroit, MI, USA, volume 1. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-094-9. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-89-VOL1/CONTENT/content.htm [accessed 2008-04-01]. See also [2594].
2594. N. S. Sridharan, editor. Proceedings of the 11th International Joint Conference on Artificial Intelligence (IJCAI'89-II), August 1989, Detroit, MI, USA, volume 2. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-094-9. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-89-VOL2/CONTENT/content.htm [accessed 2008-04-01]. See also [2593].
2595. International Conference on Swarm, Evolutionary and Memetic Computing (SEMCCO'10), December 16–18, 2010, SRM University, Department of Electrical and Electronics Engineering: Chennai, India.
2596. Marc St-Hilaire, Steven Chamberland, and Samuel Pierre. A Local Search Heuristic for
the Global Planning of UMTS Networks. In IWCMC'06 [2082], pages 1405–1410, 2006.
doi: 10.1145/1143549.1143831. SESSION: R2-D: general symposium. See also [2597].
2597. Marc St-Hilaire, Steven Chamberland, and Samuel Pierre. A Tabu Search Heuristic for
the Global Planning of UMTS Networks. In WiMob'06 [2176], pages 148–151, 2006.
doi: 10.1109/WIMOB.2006.1696339. INSPEC Accession Number: 9132969. See also [2596].
2598. Peter F. Stadler. Fitness Landscapes. In Biological Evolution and Statistical Physics [1688], pages 183–204. Springer-Verlag: Berlin/Heidelberg, 2002. doi: 10.1007/3-540-45692-9_10. Fully available at http://citeseer.ist.psu.edu/old/stadler02fitness.html and http://www.bioinf.uni-leipzig.de/~studla/Publications/PREPRINTS/01-pfs-004.pdf [accessed 2009-06-29]. Based on his talk The Structure of Fitness Landscapes given at the International Workshop on Biological Evolution and Statistical Physics, May 10–14, 2000, located at the Max-Planck-Institut für Physik komplexer Systeme, Nöthnitzer Str. 38, D-01187 Dresden, Germany.
2599. Peter Stagge and Christian Igel. Structure Optimization and Isomorphisms. In Proceedings of the Second Evonet Summer School on Theoretical Aspects of Evolutionary Computing [1480], pages 409–422. Springer New York: New York, NY, USA, 1999, University of Antwerp (RUCA): Antwerp, Belgium. Fully available at http://citeseer.ist.psu.edu/old/520775.html [accessed 2009-07-10].
2600. Proceedings of AAAI Spring Symposium on Parallel Models of Intelligence: How Can Slow Components Think So Fast?, March 22–24, 1988, Stanford, CA, USA. OCLC: 21247441.
2601. Kenneth Owen Stanley and Risto Miikkulainen. A Taxonomy for Artificial Embryogeny. Artificial Life, 9(2):93–130, 2003, MIT Press: Cambridge, MA, USA. Fully available at http://nn.cs.utexas.edu/downloads/papers/stanley.alife03.pdf [accessed 2010-08-03].
2602. Johannes Starlinger, Florian Leitner, Alfonso Valencia, and Ulf Leser. SOA-Based Integration
of Text Mining Services. In SERVICES Cup'09 [1351], pages 99–106, 2009. doi: 10.1109/SERVICES-I.2009.100. INSPEC Accession Number: 10982213.
2603. Ioannis Stavrakakis and Mikhail I. Smirnov, editors. Revised and Selected Papers from the Second IFIP Workshop on Autonomic Communication (WAC'05), October 2–5, 2005, Vouliagmeni, Athens, Greece, volume 3854/2006 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11687818. isbn: 3-540-32992-7. Google Books ID: CqEULQAACAAJ. OCLC: 68018967, 145829334, 150400331, 181547043, 314893626, 318300921, and 403681669. Library of Congress Control Number (LCCN): 2006921997. Post-workshop proceedings of the Second IFIP TC6 WG6.3 and WG6.6 International Workshop on Autonomic Communication.
2604. Adam Stawowy. Evolutionary based Heuristic for Bin Packing Problem. Computers & Industrial Engineering, 55(2):465–474, September 2008, Elsevier Science Publishers B.V.: Essex,
UK. Imprint: Pergamon Press: Oxford, UK. doi: 10.1016/j.cie.2008.01.007.
2605. Pawel A. Stefanski. Genetic Programming Using Abstract Syntax Trees. self-published, 1993. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/icga93-gp_stefanski.html [accessed 2007-08-15]. Notes from Genetic Programming Workshop at ICGA-93. See also [969].
2606. Irene A. Stegun, author. Milton Abramowitz, editor. Handbook of Mathematical Func-
tions with Formulas, Graphs, and Mathematical Tables. Dover Publications: Mineola, NY,
USA, 1964. isbn: 0486612724. Google Books ID: MtU8uP7XMvoC and USNBNAAACAAJ.
OCLC: 18003605, 174191235, 185061828, 224642767, 232366868, 256560954, and
299883180.
2607. Gerd Steierwald, Hans Dieter Künne, and Walter Vogt. Stadtverkehrsplanung: Grundlagen, Methoden, Ziele. Springer-Verlag GmbH: Berlin, Germany, 2., neu bearbeitete und erweiterte Auflage edition, 2005. doi: 10.1007/b138349. isbn: 3-540-40588-7. Google
Books ID: 1ohJ05DCZ2oC. OCLC: 630487728. GBV-Identication (PPN): 524942374
and 557339782.
2608. Daniel L. Stein, editor. Lectures in the Sciences of Complexity: The Proceedings of the 1988 Complex Systems Summer School, June–July 1988, Santa Fe, NM, USA, Santa Fe Institute Studies in the Sciences of Complexity. Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA and Westview Press. isbn: 0201510154. Google Books
ID: T2GIPQAACAAJ, VaB5OwAACAAJ, and aBPCAAAACAAJ. Published in September 1989.
2609. Tammy I. Stein, editor. Proceedings of the International Geoscience and Remote Sensing Symposium, Quantitative Remote Sensing for Science and Applications (IGARSS'95), July 10–14, 1995, Congress Center: Firenze, Italy. IEEE Computer Society: Piscataway,
NJ, USA. isbn: 0-7803-2567-2. Google Books ID: 2vVVAAAAMAAJ and C VVAAAAMAAJ.
OCLC: 36158995, 223757670, and 421884354. Catalogue no.: 95CH35770.
2610. Ralph E. Steuer. Multiple Criteria Optimization: Theory, Computation and Application.
R. E. Krieger Publishing Company: Malabar, FL, USA, reprint edition, August 1989.
isbn: 0894643932. Google Books ID: r5P0AQAACAAJ.
2611. Terry Stewart. Extrema Selection: Accelerated Evolution on Neutral Networks. In CEC'01 [1334], pages 25–29, volume 1, 2001. doi: 10.1109/CEC.2001.934366. Fully available at http://rob.ccmlab.ca:8080/~terry/papers/2001-Extrema_Selection.pdf [accessed 2009-07-10]. CiteSeerx: 10.1.1.2.9166.
2612. Theo Stewart, editor. Proceedings of the 13th International Conference on Multiple Criteria Decision Making: Trends in Multicriteria Decision Making (MCDM'97), January 6–10, 1997, University of Cape Town: Cape Town, South Africa, volume 465 in Lecture Notes in
Economics and Mathematical Systems. Springer-Verlag: Berlin/Heidelberg. Partly available
at http://www.uct.ac.za/depts/maths/mcdm97 [accessed 2007-09-10]. Published January
15, 1999.
2613. T. R. Stickland, C. M. N. Tofts, and Nigel R. Franks. A Path Choice Algorithm for Ants.
Naturwissenschaften – The Science of Nature, 79(12):567–572, December 1992, Springer-Verlag: Berlin/Heidelberg. doi: 10.1007/BF01131415.
2614. Adrian Stoica, Jason D. Lohn, and Didier Keymeulen, editors. Evolvable Hardware – Proceedings of the 1st NASA/DoD Workshop on Evolvable Hardware (EH'99), June 19–21, 1999, Jet
Propulsion Laboratory, California Institute of Technology (Caltech): Pasadena, CA, USA.
IEEE Computer Society: Washington, DC, USA. isbn: 0-7695-0256-3.
2615. Robert Roth Stoll. Set Theory and Logic. Dover Publications: Mineola, NY, USA, reprint
edition, June 1979. isbn: 0-4866-3829-4. Google Books ID: 3-nrPB7BQKMC.
2616. Tobias Storch and Ingo Wegener. Real Royal Road Functions for Constant Population Size. In GECCO'03-II [485], pages 1406–1417, 2003. doi: 10.1007/3-540-45110-2_14. See also
[2617, 2618].
2617. Tobias Storch and Ingo Wegener. Real Royal Road Functions for Constant Population
Size. Reihe Computational Intelligence: Design and Management of Complex Technical Processes and Systems by Means of Computational Intelligence Methods, CI-167/04, Universität Dortmund, Collaborative Research Center (Sonderforschungsbereich) 531: Dortmund, North
Rhine-Westphalia, Germany, February 2004. Fully available at http://hdl.handle.net/
2003/5456 [accessed 2008-07-22]. issn: 1433-3325. See also [2616, 2618].
2618. Tobias Storch and Ingo Wegener. Real Royal Road Functions for Constant Population Size.
Theoretical Computer Science, 320(1):123134, June 2, 2004, Elsevier Science Publishers
B.V.: Essex, UK, G. Ausiello and D. Sannella, editors. doi: 10.1016/j.tcs.2004.03.047. See
also [2616, 2617].
2619. Rainer M. Storn. On the Usage of Differential Evolution for Function Optimization. In NAFIPS'96 [2530], pages 519–523, 1996. doi: 10.1109/NAFIPS.1996.534789. Fully available at https://eprints.kfupm.edu.sa/55484/1/55484.pdf and www.icsi.berkeley.edu/~storn/bisc1.ps.gz [accessed 2009-09-18]. INSPEC Accession Number: 5358034.
2620. Rainer M. Storn. Differential Evolution (DE) for Continuous Function Optimization (An Algorithm by Kenneth Price and Rainer Storn). International Computer Science Institute (ICSI), University of California: Berkeley, CA, USA, 2010. Fully available at http://www.icsi.berkeley.edu/~storn/code.html [accessed 2010-12-28].
2621. Rainer M. Storn and Kenneth V. Price. Differential Evolution – A Simple and Efficient Adaptive Scheme for Global Optimization over Continuous Spaces. Technical Report TR-95-012, International Computer Science Institute (ICSI), University of California: Berkeley, CA, USA, March 1995. Fully available at http://http.icsi.berkeley.edu/~storn/TR-95-012.pdf [accessed 2009-07-04]. CiteSeerx: 10.1.1.1.9696.
2622. Felix Streichert. Evolutionäre Algorithmen: Implementation und Anwendungen im Asset-Management-Bereich (Evolutionary Algorithms and their Application to Asset Management). Master's thesis, Universität Stuttgart, Institut A für Mechanik: Stuttgart, Germany, August 2001, Arnold Kirstner and Werner Koch, Supervisors. Fully available at http://www-ra.informatik.uni-tuebingen.de/mitarb/streiche [accessed 2007-08-17].
2623. Roman Grigorevich Strongin and Yaroslav D. Sergeyev. Global Optimization with Non-Convex Constraints, volume 45 in Nonconvex Optimization and Its Applications. Springer Science+Business Media, Inc.: New York, NY, USA, 2000. isbn: 0-7923-6490-2. Google Books ID: xh_GF9Dor3AC.
2624. Eleni Stroulia and Stan Matwin, editors. Advances in Artificial Intelligence – Proceedings of the 14th Biennial Conference of the Canadian Society for Computational Studies of Intelligence (AI'01), June 7–9, 2001, Ottawa, ON, Canada, volume 2056 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-45153-6. isbn: 3-540-42144-0.
2625. Thomas Stützle, editor. Learning and Intelligent OptimizatioN (LION 3), January 14–18, 2009, Trento, Italy, Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-642-11169-3.
2626. Ponnuthurai Nagaratnam Suganthan, Nikolaus Hansen, J. J. Liang, Kalyanmoy Deb, Ying-Ping Chen, Anne Auger, and Santosh Tiwari. Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter Optimization. KanGAL Report 2005005, Kanpur Genetic Algorithms Laboratory (KanGAL), Department of Mechanical Engineering, Indian Institute of Technology Kanpur (IIT): Kanpur, Uttar Pradesh, India, May 2005. Fully available at http://www.iitk.ac.in/kangal/papers/k2005005.pdf [accessed 2007-10-07]. See also [641, 2627].
2627. Ponnuthurai Nagaratnam Suganthan, Nikolaus Hansen, J. J. Liang, Kalyanmoy Deb, Ying-Ping Chen, Anne Auger, and Santosh Tiwari. Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter Optimization. Technical Report May-30-05, Nanyang Technological University (NTU): Singapore, May 30, 2005. Fully available at http://www.ntu.edu.sg/home/epnsugan/index_files/CEC-05/Tech-Report-May-30-05.pdf [accessed 2007-10-07]. See also [641, 2626].
2628. Sarina Sulaiman, Siti Mariyam Shamsuddin, Fadni Forkan, Ajith Abraham, and Shahida Sulaiman. Intelligent Web Caching for E-learning Log Data. In AMS'09 [1349], pages 136–141, 2009. doi: 10.1109/AMS.2009.88. Fully available at http://www.softcomputing.net/ams2009_1.pdf [accessed 2009-03-29].
2629. André Sülow, Nicole Drechsler, and Rolf Drechsler. Robust Multi-Objective Optimization in High Dimensional Spaces. In EMO'07 [2061], pages 715–726, 2007. doi: 10.1007/978-3-540-70928-2_54. Fully available at http://www.informatik.uni-bremen.de/agra/doc/konf/emo2007_RobustOptimization.pdf [accessed 2009-07-18].
2630. Fuchun Sun, Yingxu Wang, Jianhua Lu, Bo Zhang, Witold Kinsner, and Lotfi A. Zadeh, editors. Proceedings of the 9th IEEE International Conference on Cognitive Informatics (ICCI'10), July 7–9, 2010, Tsinghua University: Beijing, China. IEEE Computer Society Press: Los Alamitos, CA, USA. Partly available at http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=5560187 [accessed 2011-02-28]. Catalogue no.: CFP10312-CDR.
2631. Jianyong Sun, Qingfu Zhang, Jin Li, and Xin Yao. A Hybrid Estimation of Distribution Algorithm for CDMA Cellular System Design. In SEAL'06 [2856], pages 905–912, 2006. doi: 10.1007/11903697_114. Fully available at http://dces.essex.ac.uk/staff/qzhang/conferencepaper/SEAL06SunZhangLIYao.pdf [accessed 2009-08-10]. See also [2632].
2632. Jianyong Sun, Qingfu Zhang, Jin Li, and Xin Yao. A Hybrid Estimation of Distribution Algorithm for CDMA Cellular System Design. International Journal of Computational Intelligence and Applications (IJCIA), 7(2):187–200, June 2008, World Scientific Publishing Co.: Singapore and Imperial College Press Co.: London, UK. doi: 10.1142/S1469026808002235. See also [2631].
2633. Xia Sun and Ziqiang Wang. An Efficient Materialized View Selection Algorithm Based on PSO. In ISA'09 [2847], pages 11–14, 2009. doi: 10.1109/IWISA.2009.5072711. INSPEC Accession Number: 10717741.
2634. Patrick David Surry. A Prescriptive Formalism for Constructing Domain-specific Evolutionary Algorithms. PhD thesis, University of Edinburgh, Edinburgh Parallel Computing Centre (EPCC): Edinburgh, Scotland, UK, 1998. CiteSeerx: 10.1.1.39.7961. Google Books ID: HOHxSAAACAAJ. OCLC: 281436714 and 606028198.
2635. Andrew M. Sutton, Monte Lunacek, and L. Darrell Whitley. Differential Evolution and Non-Separability: Using Selective Pressure to Focus Search. In GECCO'07-I [2699], pages 1428–1435, 2007. doi: 10.1145/1276958.1277221. Fully available at http://www.cs.colostate.edu/~sutton/pubs/Sutton2007de.ps.gz [accessed 2009-10-20].
2636. Masuo Suzuki, H. Nishimori, Yoshiyuki Kabashima, Kazuyuki Tanaka, and T. Tanaka, editors. 2003 Joint Workshop of Hayashibara Foundation and Hayashibara Forum – Physics and Information – and Workshop on Statistical Mechanical Approach to Probabilistic Information Processing (SMAPIP), July 11, 2003, Okayama, Japan. Partly available at http://www.smapip.is.tohoku.ac.jp/~smapip/2003/hayashibara/ [accessed 2009-07-11].
2637. Reiji Suzuki and Takaya Arita. Repeated Occurrences of the Baldwin Effect Can Guide Evolution on Rugged Fitness Landscapes. In CI-ALife'07 [2076], pages 8–14, 2007. doi: 10.1109/ALIFE.2007.367652. Fully available at http://www.alife.cs.is.nagoya-u.ac.jp/~reiji/publications/2007_ieeealife_suzuki.pdf [accessed 2008-09-08]. INSPEC Accession Number: 9507350.
2638. Reiji Suzuki and Takaya Arita. How Learning Can Guide Evolution of Communication. In Artificial Life XI [446], pages 608–615, 2008. CiteSeerx: 10.1.1.161.4097.
2639. William R. Swartout, Paul Rosenbloom, and Peter Szolovits, editors. Proceedings of the Tenth National Conference on Artificial Intelligence (AAAI'92), July 12–16, 1992, San Jose Convention Center: San Jose, CA, USA. AAAI Press: Menlo Park, CA, USA and MIT Press: Cambridge, MA, USA. isbn: 0-262-51063-4. Fully available at http://www.aaai.org/Library/AAAI/aaai92contents.php [accessed 2009-06-24]. Google Books ID: Q55QAAAAMAAJ, ZZxQAAAAMAAJ, and zSkYAAAACAAJ.
2640. Philip H. Sweany and Steven J. Beaty. Instruction Scheduling Using Simulated Annealing. In MPCS'98 [611], 1998. CiteSeerx: 10.1.1.67.3149.
2641. Gilbert Syswerda. A Study of Reproduction in Generational and Steady State Genetic Algorithms. In FOGA'90 [2562], pages 94–101, 1990.
2642. Piotr S. Szczepaniak, Janusz Kacprzyk, and Adam Niewiadomski, editors. Advances in Web Intelligence, Proceedings of the Third International Atlantic Web Intelligence Conference (AWIC'05), June 6–9, 2005, Łódź, Poland, volume 3528/2005 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-26219-9.
2643. George G. Szpiro. A Search for Hidden Relationships: Data Mining with Genetic Algorithms. Computational Economics, 10(3):267–277, August 1997, Springer Netherlands: Dordrecht, Netherlands. doi: 10.1023/A:1008673309609. Fully available at http://ideas.repec.org/a/kap/compec/v10y1997i3p267-77.html [accessed 2009-09-10]. CiteSeerx: 10.1.1.21.6037.
T
2644. Heidi A. Taboada and David W. Coit. Data Clustering of Solutions for Multiple Objective System Reliability Optimization Problems. Quality Technology & Quantitative Management – An International Journal, 4(2):191–210, June 2007, Chung Hua University, Department of Industrial Engineering & System Management: Hsinchu, Taiwan. Fully available at http://web2.cc.nctu.edu.tw/~qtqm/qtqmpapers/2007V4N2/2007V4N2_F3.pdf [accessed 2010-12-16]. See also [2645].
2645. Heidi A. Taboada, Fatema Baheranwala, David W. Coit, and Naruemon Wattanapongsakorn. Practical Solutions for Multi-Objective Optimization: An Application to System Reliability Design Problems. Reliability Engineering & System Safety, 92(3):314–322, March 2007, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands, European Safety and Reliability Association (ESRA), and Safety Engineering and Risk Analysis Division (SERAD) of the American Society of Mechanical Engineers (ASME): New York, NY, USA. doi: 10.1016/j.ress.2006.04.014. Fully available at http://www.rci.rutgers.edu/~coit/RESS_2007.pdf [accessed 2007-09-10]. See also [605].
2646. Walter Alden Tackett. Recombination, Selection, and the Genetic Construction of Computer Programs. PhD thesis, University of Southern California (USC), Department of Electrical Engineering Systems: Los Angeles, CA, USA, April 17, 1994. Fully available at http://www.cs.ucl.ac.uk/staff/W.Langdon/ftp/ftp.io.com/papers/watphd.tar.Z [accessed 2007-09-07]. CENG Technical Report 94-13.
2647. Genichi Taguchi. Introduction to Quality Engineering: Designing Quality into Products and Processes. Asian Productivity Organization (APO): Chiyoda-ku, Tokyo, Japan and Kraus International Publications: Millwood, NY, USA, January 1986. isbn: 9283310837 and 9283310845. Google Books ID: 1NtTAAAAMAAJ, NeA3JAAACAAJ, NuA3JAAACAAJ, and yMv-AQAACAAJ. OCLC: 16277853 and 180475635. Translation of Sekkeisha no tame no hinshitsu kanri.
2648. Éric D. Taillard. Parallel Iterative Search Methods for Vehicle Routing Problems. Networks, 23(8):661–673, December 1993, Wiley Periodicals, Inc.: Wilmington, DE, USA. doi: 10.1002/net.3230230804.
2649. Éric D. Taillard, Gilbert Laporte, and Michel Gendreau. Vehicle Routeing with Multiple Use of Vehicles. The Journal of the Operational Research Society (JORS), 47(8):1065–1070, August 1996, Palgrave Macmillan Ltd.: Houndmills, Basingstoke, Hampshire, UK and Operations Research Society: Birmingham, UK. CiteSeerx: 10.1.1.49.3923. JSTOR Stable ID: 3010414.
2650. Éric D. Taillard, Luca Maria Gambardella, Michel Gendreau, and Jean-Yves Potvin. Adaptive Memory Programming: A Unified View of Metaheuristics. European Journal of Operational Research (EJOR), 135(1):1–16, November 6, 2001, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/S0377-2217(00)00268-X. Fully available at http://ina2.eivd.ch/Collaborateurs/etd/articles.dir/ejor135_1_1_16.pdf [accessed 2008-11-11]. CiteSeerx: 10.1.1.54.8460.
2651. Hideyuki Takagi. Active User Intervention in an EC Search. In FEA'00 [2853], pages 995–998, 2000. Fully available at http://www.design.kyushu-u.ac.jp/~takagi/TAKAGI/IECpaper/JCIS2K_2.pdf [accessed 2010-09-03].
2652. Hideyuki Takagi. Interactive Evolutionary Computation: Fusion of the Capacities of EC Optimization and Human Evaluation. Proceedings of the IEEE, 89(9):1275–1296, September 2001, Institute of Electrical and Electronics Engineers (IEEE) Press: New York, NY, USA. doi: 10.1109/5.949485. Fully available at http://www.design.kyushu-u.ac.jp/~takagi/TAKAGI/IECsurvey.html [accessed 2007-08-29]. INSPEC Accession Number: 7053972.
2653. Hiromitsu Takahashi and Akira Matsuyama. An Approximate Solution for the Steiner Problem in Graphs. Mathematica Japonica, 24(6):573–577, 1980, Japanese Association of Mathematical Sciences (JAMS): Sakai, Osaka, Japan. Fully available at http://monet.skku.ac.kr/course_materials/undergraduate/al/lecture/2006/TM.pdf [accessed 2010-09-27].
2654. Ricardo H. C. Takahashi, Elizabeth Wanner, Kalyanmoy Deb, and Salvatore Greco, editors. Proceedings of the Sixth International Conference on Evolutionary Multi-Criterion Decision Making (EMO'11), April 5, 2011, Hotel Estalagem das Minas Gerais: Ouro Preto, MG, Brazil.
2655. Ahmet Cagatay Talay. A Gateway Access-Point Selection Problem and Traffic Balancing in Wireless Mesh Networks. In EvoWorkshops'07 [1050], pages 161–168, 2007. doi: 10.1007/978-3-540-71805-5_18.
2656. Ahmet Cagatay Talay and Sema Oktug. A GA/Heuristic Hybrid Technique for Routing and Wavelength Assignment in WDM Networks. In EvoWorkshops'04 [2254], pages 150–159, 2004.
2657. El-Ghazali Talbi, Pierre Liardet, Pierre Collet, Evelyne Lutton, and Marc Schoenauer, editors. Revised Selected Papers of the 7th International Conference on Artificial Evolution, Évolution Artificielle (EA'05), October 26–28, 2005, Lille, France, volume 3871 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-33589-7. Published in 2006.
2658. Seyed Hamid Talebian and Sameem Abdul Kareem. Using Genetic Algorithm to Select Materialized Views Subject to Dual Constraints. In ICSPS'09 [1376], pages 633–638, 2009. doi: 10.1109/ICSPS.2009.162. INSPEC Accession Number: 10789147.
2659. 13th Conference on Technologies and Applications of Artificial Intelligence (TAAI'08), November 21–22, 2008, Tamkang University, Lanyang campus: Chiao-hsi Shiang, I-lan County, Taiwan.
2660. Honghua Tan and Qi Luo, editors. Proceedings of the IITA International Conference on Control, Automation and Systems Engineering (CASE'09), July 11–12, 2009, Zhangjiajie, Hunan, China. IEEE Computer Society Press: Los Alamitos, CA, USA.
2661. Kay Chen Tan, Meng-Hiot Lim, Xin Yao, and Lipo Wang, editors. Recent Advances in Simulated Evolution and Learning – Proceedings of the Fourth Asia-Pacific Conference on Simulated Evolution And Learning (SEAL'02), November 18–22, 2002, Singapore, volume 2 in Advances in Natural Computation. World Scientific Publishing Co.: Singapore. isbn: 981-238-952-0. Google Books ID: Vn4bdMbo488C.
2662. Kay Chen Tan, Eik Fun Khor, Tong Heng Lee, and Ramasubramanian Sathikannan. An Evolutionary Algorithm with Advanced Goal and Priority Specification for Multi-Objective Optimization. Journal of Artificial Intelligence Research (JAIR), 18:183–215, February 2003, AI Access Foundation, Inc.: El Segundo, CA, USA and AAAI Press: Menlo Park, CA, USA. Fully available at http://www.jair.org/media/842/live-842-1969-jair.pdf [accessed 2009-06-23]. CiteSeerx: 10.1.1.14.6112.
2663. L. G. Tan and Mark C. Sinclair. Wavelength Assignment between the Central Nodes of the COST 239 European Optical Network. In UKPEW'95 [2591], pages 235–247, 1995. Fully available at http://uk.geocities.com/markcsinclair/ps/ukpew11_tan.ps.gz [accessed 2009-09-02]. CiteSeerx: 10.1.1.55.3604.
2664. Ming Tan, Hong-Bin Fang, Guo-Liang Tian, and Gang Wei. Testing Multivariate Normality in Incomplete Data of Small Sample Size. Journal of Multivariate Analysis (JMVA), 93(1):164–179, March 2005, Elsevier Science Publishers B.V.: Essex, UK. Imprint: Academic Press Professional, Inc.: San Diego, CA, USA. doi: 10.1016/j.jmva.2004.02.014.
2665. Ying Tan, Yuhui Shi, and Kay Chen Tan, editors. Advances in Swarm Intelligence – Proceedings of the First International Conference on Swarm Intelligence, Part I (ICSI'10-I), June 12–15, 2010, Beijing, China, volume 6145/2010 in Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-642-13495-1. isbn: 3-642-13494-7. Library of Congress Control Number (LCCN): 2010927598.
2666. Ying Tan, Yuhui Shi, and Kay Chen Tan, editors. Advances in Swarm Intelligence – Proceedings of the First International Conference on Swarm Intelligence, Part II (ICSI'10-II), June 12–15, 2010, Beijing, China, volume 6146/2010 in Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-642-13498-2. isbn: 3-642-13497-1. Library of Congress Control Number (LCCN): 2010927598.
2667. Jing Tang, Meng-Hiot Lim, and Yew-Soon Ong. Diversity-Adaptive Parallel Memetic Algorithm for Solving Large Scale Combinatorial Optimization Problems. Soft Computing – A Fusion of Foundations, Methodologies and Applications, 11(9):873–888, July 2007, Springer-Verlag: Berlin/Heidelberg. doi: 10.1007/s00500-006-0139-6.
2668. Ke Tang, Xin Yao, Ponnuthurai Nagaratnam Suganthan, Cara MacNish, Ying-Ping Chen, Chih-Ming Chen, and Zhenyu Yang. Benchmark Functions for the CEC'2008 Special Session and Competition on Large Scale Global Optimization. Technical Report, University of Science and Technology of China (USTC), School of Computer Science and Technology, Nature Inspired Computation and Applications Laboratory (NICAL): Hefei, Anhui, China, 2007. Fully available at http://nical.ustc.edu.cn/cec08ss.php and http://sci2s.ugr.es/programacion/workshop/Tech.Report.CEC2008.LSGO.pdf [accessed 2009-08-15].
2669. Ke Tang, Yi Mei, and Xin Yao. Memetic Algorithm with Extended Neighborhood Search for Capacitated Arc Routing Problems. IEEE Transactions on Evolutionary Computation (IEEE-EC), 13(5):1151–1166, October 2009, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/TEVC.2009.2023449. Fully available at http://staff.ustc.edu.cn/~ketang/papers/TangMeiYao_TEVC09.pdf [accessed 2011-10-01]. CiteSeerx: 10.1.1.160.9949.
2670. Ke Tang, Xiaodong Li, Ponnuthurai Nagaratnam Suganthan, Zhenyu Yang, and Thomas Weise. Benchmark Functions for the CEC'2010 Special Session and Competition on Large-Scale Global Optimization. Technical Report, University of Science and Technology of China (USTC), School of Computer Science and Technology, Nature Inspired Computation and Applications Laboratory (NICAL): Hefei, Anhui, China, January 8, 2010. Fully available at http://www.it-weise.de/documents/files/TLSYW2009BFFTCSSACOLSGO.pdf.
2671. Ke Tang, Pu Wang, and Tianshi Chen. Towards Maximizing The Area Under The ROC Curve For Multi-Class Classification Problems. In AAAI'11 [3], pages 483–488, 2011. Fully available at http://aaai.org/ocs/index.php/AAAI/AAAI11/paper/download/3485/3882 [accessed 2011-09-08].
2672. Eiichi Taniguchi and Russell G. Thompson, editors. Recent Advances in City Logistics. Proceedings of the 4th International Conference on City Logistics, July 12–14, 2005, Langkawi, Malaysia. Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. isbn: 0-08-044799-6. Google Books ID: dMMQb83KH-EC.
2673. Eiichi Taniguchi, Russell G. Thompson, and Tadashi Yamada. Data Collection for Modelling, Evaluating, and Benchmarking City Logistics Schemes. In Recent Advances in City Logistics. Proceedings of the 4th International Conference on City Logistics [2672], pages 1–14, 2005.
2674. Ajay Kumar Tanwani and Muddassar Farooq. Performance Evaluation of Evolutionary Algorithms in Classification of Biomedical Datasets. In GECCO'09-II [2343], pages 2617–2624, 2009. doi: 10.1145/1570256.1570371. Fully available at http://nexginrc.org/nexginrcAdmin/PublicationsFiles/iwlcs09-ajay.pdf [accessed 2010-07-26].
2675. Proceedings of the 12th International Conference on Parallel Problem Solving From Nature (PPSN'12), September 1–5, 2012, Taormina, Italy.
2676. Jose Juan Tapia, Enrique Morett, and Edgar E. Vallejo. A Clustering Genetic Algorithm for Genomic Data Mining. In Foundations of Computational Intelligence Volume 4: Bio-Inspired Data Mining [13], pages 249–275. Springer-Verlag: Berlin/Heidelberg, 2009. doi: 10.1007/978-3-642-01088-0_11.
2677. Jonathan Tate, Benjamin Woolford-Lim, Ian Bate, and Xin Yao. Comparing Design of Experiments and Evolutionary Approaches to Multi-Objective Optimisation of Sensornet Protocols. In CEC'09 [1350], pages 1137–1144, 2009. doi: 10.1109/CEC.2009.4983074. Fully available at ftp://ftp.cs.york.ac.uk/papers/rtspapers/R:Tate:2009a.pdf and http://www.cercia.ac.uk/projects/research/SEBASE/pdfs/2009/jt-bwl-ib-xy-cec2009.pdf [accessed 2009-11-26]. INSPEC Accession Number: 10688669.
2678. Advance Papers of the Fourth International Joint Conference on Artificial Intelligence (IJCAI'75), September 3–8, 1975, Tbilisi, Georgia, USSR, volume 1 and 2. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-75-VOL-1&2/CONTENT/content.htm [accessed 2008-04-01].
2679. Technical Committee CEN/TC 119 – Swap body for combined transport: Brussels, Belgium. Swap Bodies – Non-Stackable Swap Bodies of Class C – Dimensions and General Requirements, EN-284:2006. CEN-CENELEC: Brussels, Belgium, November 10, 2006. See also: DIN 70013/14, BS 3951:Part 1, ISO 668, ISO 1161, ISO 6346, 88/218/EEC, UIC 592-4, UIC 596-6.
2680. Y. S. Teh and G. P. Rangaiah. Tabu Search for Global Optimization of Continuous Functions with Application to Phase Equilibrium Calculations. Computers and Chemical Engineering, 27(11):1665–1679, November 15, 2003, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/S0098-1354(03)00134-0.
2681. Astro Teller. Genetic Programming, Indexed Memory, the Halting Problem, and other Curiosities. In FLAIRS'94 [1360], pages 270–274, 1994. Fully available at http://www.cs.cmu.edu/afs/cs/usr/astro/public/papers/Curiosities.ps [accessed 2009-07-21]. CiteSeerx: 10.1.1.39.5589.
2682. Astro Teller. Turing Completeness in the Language of Genetic Programming with Indexed Memory. In CEC'94 [1891], pages 136–141, 1994. doi: 10.1109/ICEC.1994.350027. Fully available at http://www.astroteller.net/pdf/Turing.pdf and http://www.cs.cmu.edu/afs/cs/usr/astro/public/papers/Turing.ps [accessed 2010-11-21].
2683. Marc Terwilliger, Ajay K. Gupta, Ashfaq A. Khokhar, and Garrison W. Greenwood. Localization Using Evolution Strategies in Sensornets. In CEC'05 [641], pages 322–327, volume 1, 2005. doi: 10.1109/CEC.2005.1554701. Fully available at http://twig.lssu.edu/publications/LESS.pdf [accessed 2009-09-18]. INSPEC Accession Number: 9004167.
2684. Igor V. Tetko, David J. Livingstone, and Alexander I. Luik. Neural Network Studies. 1. Comparison of Overfitting and Overtraining. Journal of Chemical Information and Computer Sciences, 35(5):826–833, September 1995, American Chemical Society: Washington, DC, USA. doi: 10.1021/ci00027a006.
2685. Andrea G. B. Tettamanzi and Marco Tomassini. Soft Computing: Integrating Evolutionary, Neural, and Fuzzy Systems. Springer-Verlag GmbH: Berlin, Germany, 2001. isbn: 3-540-42204-8. Google Books ID: 86jQw6v30KoC.
2686. Sam R. Thangiah. Vehicle Routing with Time Windows using Genetic Algorithms. In Practical Handbook of Genetic Algorithms: New Frontiers [522], pages 253–277. CRC Press, Inc.: Boca Raton, FL, USA, 1995. CiteSeerx: 10.1.1.23.7903.
2687. Proceedings of the 4th International Multiconference on Computer Science and Information Technology (CSIT'06), April 5–7, 2006, Amman, Jordan. The Computing Research Repository (CoRR): Ithaca, NY, USA.
2688. First Workshop of the Semantic Web Service Challenge 2006 – Challenge on Automating Web Services Mediation, Choreography and Discovery, March 8–10, 2006, Stanford University, David Packard EE Building: Stanford, CA, USA. The Digital Enterprise Research Institute (DERI) Stanford: Stanford, CA, USA. Fully available at http://sws-challenge.org/wiki/index.php/Workshop_Stanford [accessed 2010-12-12].
2689. Second Workshop of the Semantic Web Service Challenge 2006 – Challenge on Automating Web Services Mediation, Choreography and Discovery, June 15–16, 2006, Hotel Maestral: Przno, Sveti Stefan, Budva, Republic of Montenegro. The Digital Enterprise Research Institute (DERI) Stanford: Stanford, CA, USA. Fully available at http://sws-challenge.org/wiki/index.php/Workshop_Budva [accessed 2010-12-12].
2690. Third Workshop of the Semantic Web Service Challenge 2006 – Challenge on Automating Web Services Mediation, Choreography and Discovery, November 10–11, 2006, Georgia Center: Athens, GA, USA. The Digital Enterprise Research Institute (DERI) Stanford: Stanford, CA, USA. Fully available at http://sws-challenge.org/wiki/index.php/Workshop_Athens [accessed 2010-12-12].
2691. Proceedings of The 10th Conference on Pattern Languages of Programs (PLoP'03), September 8–12, 2003, University of Illinois, Allerton House: Monticello, IL, USA. The Hillside Group: Champaign, IL, USA. Fully available at http://hillside.net/plop/plop2003/papers.html [accessed 2010-08-19].
2692. Dimitri Theodoratos and Wugang Xu. Constructing Search Spaces for Materialized View Selection. In DOLAP'04 [2557], pages 112–121, 2004. doi: 10.1145/1031763.1031783.
2693. Guy Theraulaz and Eric W. Bonabeau. Coordination in Distributed Building. Science Magazine, 269(5224):686–688, American Association for the Advancement of Science (AAAS): Washington, DC, USA and HighWire Press (Stanford University): Cambridge, MA, USA. doi: 10.1126/science.269.5224.686.
2694. Guy Theraulaz and Eric W. Bonabeau. Swarm Smarts. Scientific American, pages 72–79, March 2000, Scientific American, Inc.: New York, NY, USA. Fully available at http://www.santafe.edu/~vince/press/SciAm-SwarmSmarts.pdf [accessed 2008-06-12].
2695. Lothar Thiele, Kaisa Miettinen, Pekka J. Korhonen, and Julian Molina. A Preference-based Interactive Evolutionary Algorithm for Multiobjective Optimization. HSE Working Paper W-412, Helsinki School of Economics (HSE, Helsingin kauppakorkeakoulu): Helsinki, Finland, January 2007. Fully available at ftp://ftp.tik.ee.ethz.ch/pub/people/thiele/paper/TMKM07.pdf and http://hsepubl.lib.hse.fi/pdf/wp/w412.pdf [accessed 2009-07-18]. issn: 1235-5674.
2696. Dirk Thierens. On the Scalability of Simple Genetic Algorithms. Technical Report UU-CS-1999-48, Utrecht University, Department of Information and Computing Sciences: Utrecht, The Netherlands, 1999. Fully available at http://igitur-archive.library.uu.nl/math/2007-0118-201740/thierens_99_on_the_scalability.pdf and http://www.cs.uu.nl/research/techreps/repo/CS-1999/1999-48.pdf [accessed 2008-08-12]. CiteSeerx: 10.1.1.40.5295. urn: URN:NBN:NL:UI:10-1874/18986.
2697. Dirk Thierens and David Edward Goldberg. Convergence Models of Genetic Algorithm Selection Schemes. In PPSN III [693], pages 119–129, 1994. doi: 10.1007/3-540-58484-6_256. Fully available at http://www.cs.uu.nl/groups/DSS/publications/convergence/convMdl.ps and http://www.csie.ntu.edu.tw/~b91069/PDF/convMdl.pdf [accessed 2010-12-04].
2698. Dirk Thierens, David Edward Goldberg, and Ângela Guimarães Pereira. Domino Convergence, Drift, and the Temporal-Salience Structure of Problems. In CEC'98 [2496], pages 535–540, 1998. doi: 10.1109/ICEC.1998.700085. CiteSeerx: 10.1.1.46.6116.
2699. Dirk Thierens, Hans-Georg Beyer, Josh C. Bongard, Jürgen Branke, John Andrew Clark, Dave Cliff, Clare Bates Congdon, Kalyanmoy Deb, Benjamin Doerr, Tim Kovacs, Sanjeev Kumar, Julian Francis Miller, Jason H. Moore, Frank Neumann, Martin Pelikan, Riccardo Poli, Kumara Sastry, Kenneth Owen Stanley, Thomas Stützle, Richard A. Watson, and Ingo Wegener, editors. Proceedings of 9th Genetic and Evolutionary Computation Conference (GECCO'07-I), July 7–11, 2007, University College London (UCL): London, UK. ACM Press: New York, NY, USA. isbn: 1-59593-697-1. OCLC: 184827588, 232392364, 271369878, and 288957084. ACM Order No.: 910070. See also [2700].
2700. Dirk Thierens, Hans-Georg Beyer, Josh C. Bongard, Jürgen Branke, John Andrew Clark, Dave Cliff, Clare Bates Congdon, Kalyanmoy Deb, Benjamin Doerr, Tim Kovacs, Sanjeev Kumar, Julian Francis Miller, Jason H. Moore, Frank Neumann, Martin Pelikan, Riccardo Poli, Kumara Sastry, Kenneth Owen Stanley, Thomas Stützle, Richard A. Watson, and Ingo Wegener, editors. Genetic and Evolutionary Computation Conference Companion Material (GECCO'07-II), July 7–11, 2007, University College London (UCL): London, UK. ACM Press: New York, NY, USA. ACM Order No.: 910071. See also [2699].
2701. Hervé Thiriez and Stanley Zionts, editors. Proceedings of the 1st International Conference on Multiple Criteria Decision Making (MCDM'75), May 21–23, 1975, Jouy-en-Josas, France, volume 130 in Lecture Notes in Economics and Mathematical Systems. Springer-Verlag: Berlin/Heidelberg. isbn: 0387077944. OCLC: 2332357. Published in 1976.
2702. James J. Thomas, editor. Proceedings of the 18th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH'91), July 28 – August 2, 1991, Las Vegas, NV, USA. SIGGRAPH: ACM Special Interest Group on Computer Graphics and Interactive Techniques, ACM Press: New York, NY, USA. isbn: 0-89791-436-8. Order No.: 428911.
2703. Richard K. Thompson and Alden H. Wright. Additively Decomposable Fitness Functions. Technical Report, University of Montana, Computer Science Department: Missoula, MT, USA, 1996. Fully available at http://www.cs.umt.edu/u/wright/papers/add_decomp.ps.gz [accessed 2007-11-27].
2704. Henk Tijms. Understanding Probability: Chance Rules in Everyday Life. Cambridge University Press: Cambridge, UK, August 16, 2004. isbn: 0521540364. Google Books ID: y2361S44XLsC. OCLC: 54079595.
2705. Jonathan Timmis and Peter J. Bentley, editors. Proceedings of the 1st International Conference on Artificial Immune Systems (ICARIS'02), University of Kent: Canterbury, Kent, UK. isbn: 1902671325.
2706. Jonathan Timmis, Peter J. Bentley, and Emma Hart, editors. Proceedings of the 2nd International Conference on Artificial Immune Systems (ICARIS'03), September 1–3, 2003, Edinburgh, Scotland, UK, volume 2787 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-40766-9.
2707. Ville Tirronen, Ferrante Neri, Tommi Kärkkäinen, Kirsi Majava, and Tuomo Rossi. An Enhanced Memetic Differential Evolution in Filter Design for Defect Detection in Paper Production. Evolutionary Computation, 16(4):529–555, Winter 2008, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.2008.16.4.529. PubMed ID: 19053498.
2708. Philippe L. Toint. Test Problems for Partially Separable Optimization and Results for the Routine PSPMIN. Technical Report 83/4, University of Namur (FUNDP), Department of Mathematics: Namur, Belgium, 1983.
2709. Proceedings of the 6th International Joint Conference on Artificial Intelligence (IJCAI'79-I), August 20–23, 1979, Tokyo, Japan, volume 1. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-79-VOL1/CONTENT/content.htm [accessed 2008-04-01]. See also [2710].
2710. Proceedings of the 6th International Joint Conference on Artificial Intelligence (IJCAI'79-II), August 20–23, 1979, Tokyo, Japan, volume 2. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-79-VOL2/CONTENT/content.htm [accessed 2008-04-01]. See also [2709].
2711. Shisanu Tongchim and Xin Yao. Parallel Evolutionary Programming. In CEC'04 [1369], pages 1362–1367, volume 2, 2004. doi: 10.1109/CEC.2004.1331055. Fully available at http://www.cs.bham.ac.uk/~xin/papers/TongchimYao_CEC04.pdf [accessed 2011-11-10]. INSPEC Accession Number: 8229086.
2712. Virginia Joanne Torczon. Multi-Directional Search: A Direct Search Algorithm for Parallel Machines. PhD thesis, Rice University, Department of Computational & Applied Mathematics (CAAM): Houston, TX, USA, May 1989. CiteSeerx: 10.1.1.48.9967. Supervisors: John E. Dennis, Jr., Andrew E. Boyd, Kenneth W. Kennedy, Jr., and Richard A. Tapia.
2713. P. Tormos, A. Lova, F. Barber, L. Ingolotti, M. Abril, and M. A. Salido. A Genetic Algorithm for Railway Scheduling Problems. In Metaheuristics for Scheduling in Industrial and Manufacturing Applications [2992], Chapter 10, pages 255–276. Springer-Verlag: Berlin/Heidelberg, January 2008. doi: 10.1007/978-3-540-78985-7_10. Fully available at http://arrival.cti.gr/uploads/Documents.0081/ARRIVAL-TR-0081.pdf [accessed 2011-02-19]. CiteSeerx: 10.1.1.157.9570. STREP (Member of the FET Open Scheme) Project Number FP6-021235-2: ARRIVAL – Algorithms for Robust and online Railway optimization: Improving the Validity and reliAbility of Large scale systems; ARRIVAL-TR-0081.
2714. Aimo Törn and Antanas Žilinskas. Global Optimization, volume 350/1989 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany, 1989. doi: 10.1007/3-540-50871-6. isbn: 0-387-50871-6 and 3-540-50871-6.
2715. Proceedings of the 1st International Workshop on Local Search Techniques in Constraint Satisfaction (LSCS'04), September 27, 2004, Toronto, ON, Canada.
2716. Douglas E. Torres D. and Claudio M. Rocco S. Empirical Models Based on Hybrid Intelligent Systems for Assessing the Reliability of Complex Networks. In EvoWorkshops'05 [2340], pages 147–155, 2005.
2717. Paolo Toth and Daniele Vigo, editors. The Vehicle Routing Problem, volume 9 in SIAM
Monographs on Discrete Mathematics and Applications. Society for Industrial and Applied
Mathematics (SIAM): Philadelphia, PA, USA, 2002. isbn: 0898715792. Google Books
ID: TeMgA5S74skC.
2718. Julius T. Tou, editor. Proceedings of the Symposium on Computer and Information Sciences II. Academic Press: London, New York, August 22–24, 1966, Columbus, OH, USA. Google Books ID: V NBAAAAIAAJ. GBV-Identification (PPN): 173850669.
2719. Marc Toussaint and Christian Igel. Neutrality: A Necessity for Self-Adaptation. In CEC'02 [944], pages 1354–1359, 2002. doi: 10.1109/CEC.2002.1004440. CiteSeerx: 10.1.1.16.8492. arXiv ID: nlin/0204038v1.
2720. Amy J.C. Trappey, Charles V. Trappey, and Chang-Ru Wu. Genetic Algorithm Dynamic Performance Evaluation for RFID Reverse Logistic Management. Expert Systems with Applications – An International Journal, 37(11):7329–7335, November 2010, Elsevier Science Publishers B.V.: Essex, UK. Imprint: Pergamon Press: Oxford, UK. doi: 10.1016/j.eswa.2010.04.026. See also [2721].
2721. Amy J.C. Trappey, Charles V. Trappey, and Chang-Ru Wu. Corrigendum to "Genetic Algorithm Dynamic Performance Evaluation for RFID Reverse Logistic Management" [Expert Systems with Applications 37 (11) (2010) 7329–7335]. Expert Systems with Applications – An International Journal, 38(3):2920, March 2011, Elsevier Science Publishers B.V.: Essex, UK. Imprint: Pergamon Press: Oxford, UK. doi: 10.1016/j.eswa.2010.10.005. See also [2720].
2722. Ioan Cristian Trelea. The Particle Swarm Optimization Algorithm: Convergence Analysis and Parameter Selection. Information Processing Letters, 85(6):317–325, March 31, 2003, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/S0020-0190(02)00447-7.
2723. Krzysztof Trojanowski. Evolutionary Algorithms with Redundant Genetic Material for Non-
Stationary Environments. PhD thesis, Warsaw University: Warsaw, Poland, June 21, 2000,
Zbigniew Michalewicz, Advisor.
2724. Michael W. Trosset. I Know It When I See It: Toward a Definition of Direct Search Methods. SIAG/OPT Views and News: A Forum for the SIAM Activity Group on Optimization, 9:7–10, Fall 1997, SIAM Activity Group on Optimization – SIAG/OPT: Philadelphia, PA, USA. Fully available at http://www.mcs.anl.gov/~leyffer/views/09.pdf [accessed 2010-12-25].
2725. Constantino Tsallis and Ricardo Ferreira. On the Origin of Self-Replicating Information-Containing Polymers from Oligomeric Mixtures. Physics Letters A, 99(9):461–463, December 26, 1983, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/0375-9601(83)90958-1.
2726. Edward P. K. Tsang, James M. Butler, and Jin Li. EDDIE Beats the Bookies. International Journal of Software, Practice and Experience, 28(10):1033–1043, August 1998, John Wiley & Sons Ltd.: New York, NY, USA. doi: 10.1002/(SICI)1097-024X(199808)28:10<1033::AID-SPE198>3.0.CO;2-1. Fully available at http://cswww.essex.ac.uk/Research/CSP/finance/Eddie/Overview/ [accessed 2010-06-25]. CiteSeerx: 10.1.1.63.7597. See also [2727, 2728].
2727. Edward P. K. Tsang, Jin Li, Sheri Marina Markose, Hakan Er, Abdellah Salhi, and Giulia Iori. EDDIE in Financial Decision Making. Journal of Management and Economics, 4(4), November 2000, Universidad de Buenos Aires, Facultad de Ciencias Económicas: Buenos Aires, Argentina. Fully available at http://www.bracil.net/finance/papers/Tsang-Eddie-JMgtEcon2000.pdf [accessed 2010-06-25]. CiteSeerx: 10.1.1.64.7200. See also [2726, 2728].
2728. Edward P. K. Tsang, Paul Yung, and Jin Li. EDDIE-Automation – A Decision Support Tool for Financial Forecasting. Decision Support Systems, 37(4):559–565, September 2004, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/S0167-9236(03)00087-3. Fully available at http://www.bracil.net/finance/papers/TsYuLi-Eddie-Dss2004.pdf [accessed 2010-06-25]. See also [2726, 2727].
2729. Christian F. Tschudin. Fraglets – A Metabolistic Execution Model for Communication Protocols. In AINS'03 [513], 2003. Fully available at http://cn.cs.unibas.ch/pub/doc/2003-ains.pdf and http://path.berkeley.edu/ains/final/007%20-%2022-tschudin.pdf [accessed 2008-05-02]. CiteSeerx: 10.1.1.133.8894.
2730. Christian F. Tschudin and Lidia A. R. Yamamoto. Fraglets Instruction Set. Uni-
versity of Basel, Computer Science Department, Computer Networks Group: Basel,
Switzerland, September 24, 2007. Fully available at http://www.fraglets.net/
frag-instrset-20070924.txt [accessed 2009-07-21].
2731. Christian F. Tschudin and Lidia A. R. Yamamoto. A Metabolic Approach to Protocol Resilience. In WAC'04 [2525], pages 191–206, 2004. doi: 10.1007/11520184_15. Fully available at http://cn.cs.unibas.ch/pub/doc/2004-wac-lncs.pdf and http://www.autonomic-communication.org/wac/wac2004/program/presentations/wac2004-tschudin-yamamoto-slides.pdf [accessed 2008-06-20].
2732. Shigeyoshi Tsutsui and Yoshiji Fujimoto. Solving Quadratic Assignment Problems by Genetic Algorithms with GPU Computation: A Case Study. In GECCO'09-I [2342], pages 2523–2530, 2009. doi: 10.1145/1570256.1570355. Fully available at http://www2.hannan-u.ac.jp/~tsutsui/ps/gecco/wk3006-tsutsui.pdf [accessed 2011-11-14].
2733. Shigeyoshi Tsutsui and Ashish Ghosh. Genetic Algorithms with a Robust Solution Searching Scheme. IEEE Transactions on Evolutionary Computation (IEEE-EC), 1(3):201–208, September 1997, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/4235.661550.
2734. Shigeyoshi Tsutsui, Ashish Ghosh, and Yoshiji Fujimoto. A Robust Solution Searching Scheme in Genetic Search. In PPSN IV [2818], pages 543–552, 1996. doi: 10.1007/3-540-61723-X_1018. Fully available at http://www.hannan-u.ac.jp/~tsutsui/ps/ppsn-iv.pdf [accessed 2008-07-19].
2735. Gunnar Tufte. Phenotypic, Developmental and Computational Resources: Scaling in Artificial Development. In GECCO'08 [1519], 2008. doi: 10.1145/1389095.1389261.
2736. Proceedings of the 5th Indian International Conference on Artificial Intelligence (IICAI-11), December 14–16, 2011, Tumkur, India.
2737. Alan Mathison Turing. On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, s2-42(1):230–265, 1937, Oxford University Press: Oxford, UK. doi: 10.1112/plms/s2-42.1.230. Fully available at http://www.abelard.org/turpap2/tp2-ie.asp and http://www.thocp.net/biographies/turing_alan.html [accessed 2007-08-11]. See also [2738].
2738. Alan Mathison Turing. On Computable Numbers, with an Application to the Entscheidungsproblem: A Correction. Proceedings of the London Mathematical Society, s2-43(1):544–546, 1937, Oxford University Press: Oxford, UK. doi: 10.1112/plms/s2-43.6.544. Fully available at http://www.scribd.com/doc/2937039/Alan-M-Turing-On-Computable-Numbers [accessed 2010-06-24]. See also [2737].
2739. Peter Turney, L. Darrell Whitley, and Russell W. Anderson. Evolution, Learning, and Instinct: 100 Years of the Baldwin Effect. Evolutionary Computation, 4(3):iv–viii, Fall 1996, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1996.4.3.iv. Fully available at http://www.alife.cs.is.nagoya-u.ac.jp/~reiji/baldwin/editorial.html and http://www.apperceptual.com/baldwin-editorial.html [accessed 2010-09-11].
2740. Peter Turney, Joanna Bryson, and Reiji Suzuki. The Baldwin Effect: A Bibliography. Nagoya University, Graduate School of Computer Science, Department of Complex Systems Science: Nagoya, Japan, August 2008. Fully available at http://www.alife.cs.is.nagoya-u.ac.jp/~reiji/baldwin/ [accessed 2010-09-12].
2741. William Thomas Tutte. Graph Theory, Cambridge Mathematical Library. Cambridge Uni-
versity Press: Cambridge, UK, 2001. isbn: 0521794897. Google Books ID: uTGhooU37h4C.
2742. Proceedings of the 4th Workshop on Parallel Processing: Logic, Organisation, and Technology (WOPPLOT'92), September 1992, Tutzing, Germany. It is not clear to the author whether this has actually taken place or not.
2743. Andrew M. Tyrrell, Pauline C. Haddow, and Jim Torresen, editors. Proceedings of the 5th International Conference on Evolvable Systems: From Biology to Hardware (ICES'03), March 17–20, 2003, Trondheim, Norway, volume 2606/2003 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-36553-2. isbn: 3-540-00730-X. Google Books ID: LU1bIwGaWIC.
2744. Gwo-Hshiung Tzeng, Hsiao-Fan Wang, Ue-Pyng Wen, and Po-Lung Yu, editors. Proceedings of the 10th International Conference on Multiple Criteria Decision Making: Expand and Enrich the Domains of Thinking and Application (MCDM'92), July 19–24, 1992, Taipei, Taiwan. Springer New York: New York, NY, USA. isbn: 0-387-94297-1 and 3-540-94297-1. OCLC: 30027483, 263351019, 311938982, and 633632538. GBV-Identification (PPN): 148474152 and 257197648. Published in June 1994.
U
2745. Hidetoshi Uchiyama, Kanda Runapongsa, and Toby J. Teorey. A Progressive View Materialization Algorithm, 1999. doi: 10.1145/319757.319786. Fully available at http://gear.kku.ac.th/~krunapon/research/pub/progressive.pdf [accessed 2011-04-08]. CiteSeerx: 10.1.1.20.9685 and 10.1.1.57.9921.
2746. Tatsuo Unemi. SBART 2.4: An IEC Tool for Creating 2D Images, Movies, and Collage. In GAVAM'00 [1452], pages 153–157, 2000. CiteSeerx: 10.1.1.31.3280.
2747. Proceedings of the 8th Argentinean Conference of Computer Science, Congreso Argentino de Ciencias de la Computación (CACIC'02), October 15–17, 2002, Universidad de Buenos Aires (UBA): Buenos Aires, Argentina.
2748. AISB 2000 Convention: Artificial Intelligence and Society (AISB'00), April 17–21, 2000, University of Birmingham: Birmingham, UK. Fully available at http://www.aisb.org.uk/publications/proceedings.shtml [accessed 2008-09-10].
2749. 4th International Conference on Hybrid Artificial Intelligence Systems (HAIS'09), June 10–12, 2009, Salamanca, Spain, Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). University of Burgos, Applied Computational Intelligence Group (GICAP): Burgos, Spain, Springer-Verlag GmbH: Berlin, Germany. Co-located with the 10th International Work-Conference on Artificial Neural Networks (IWANN2009).
2750. New State of MCDM in 21st Century – Proceedings of the 20th International Conference on Multiple Criteria Decision Making (MCDM'09), June 21–26, 2009, University of Electronic Science and Technology of China (Shahe Campus): Chéngdū, Sìchuān, China.
2751. Proceedings of the Seventh Conference on Evolutionary and Deterministic Methods for Design, Optimization and Control with Applications to Industrial and Societal Problems (EUROGEN'07), June 11–13, 2007, University of Jyväskylä: Jyväskylä, Finland. Partly available at http://www.mit.jyu.fi/scoma/Eurogen2007/ [accessed 2007-09-16].
2752. Proceedings of the First International Workshop on Advanced Computational Intelligence (IWACI'08), June 7–8, 2008, University of Macau: Macau, China.
2753. Genetic Programming Theory and Practice IX, Proceedings of the Ninth Workshop on Genetic Programming (GPTP'11), May 12–14, 2011, University of Michigan, Center for the Study of Complex Systems (CSCS): Ann Arbor, MI, USA.
2754. 2011 International Workshop on Nature Inspired Computation and Its Applications (IWNICA'11), April 17, 2011, University of Science and Technology of China (USTC): Hefei, Ānhuī, China. Partly available at http://www.cercia.ac.uk/projects/research/NICaiA/2011workshopsinfor.php [accessed 2011-03-20].
2755. The 2004 International Workshop on Nature Inspired Computation and Applications (IWNICA'04), October 24–29, 2004, University of Science and Technology of China (USTC), School of Computer Science and Technology, Nature Inspired Computation and Applications Laboratory (NICAL): Hefei, Ānhuī, China.
2756. The 2004 UK-China Workshop on Nature Inspired Computation and Applications (UCWNICA'04), October 25–29, 2004, University of Science and Technology of China (USTC), School of Computer Science and Technology, Nature Inspired Computation and Applications Laboratory (NICAL): Hefei, Ānhuī, China.
2757. The 2008 International Workshop on Nature Inspired Computation and Applications (IWNICA'08), May 27–29, 2008, University of Science and Technology of China (USTC), School of Computer Science and Technology, Nature Inspired Computation and Applications Laboratory (NICAL): Hefei, Ānhuī, China.
2758. The 2010 International Workshop on Nature Inspired Computation and Applications (IWNICA'10), October 23–27, 2010, University of Science and Technology of China (USTC), School of Computer Science and Technology, Nature Inspired Computation and Applications Laboratory (NICAL): Hefei, Ānhuī, China.
2759. AISB 2001 Convention: Agents and Cognition (AISB'01), March 21–24, 2001, University of York: Heslington, York, UK. isbn: 1902956170, 1902956189, 1902956197, 1902956205, 1902956213, and 1902956221.
2760. Rasmus K. Ursem. Models for Evolutionary Algorithms and Their Applications in System Identification and Control Optimization. PhD thesis, University of Aarhus, Department of Computer Science: Århus, Denmark, April 1, 2003, Thiemo Krink and Brian H. Mayoh, Advisors. Fully available at http://www.daimi.au.dk/~ursem/publications/RKU_thesis_2003.pdf [accessed 2009-07-09]. CiteSeerx: 10.1.1.13.4971.
V
2761. Rob J. M. Vaessens, Emile H. L. Aarts, and Jan Karel Lenstra. A Local Search Template. In PPSN II [1827], pages 67–76, 1992. CiteSeerx: 10.1.1.54.4457. See also [2762].
2762. Rob J. M. Vaessens, Emile H. L. Aarts, and Jan Karel Lenstra. A Local Search Template. Computers & Operations Research, 25(11):969–979, November 1, 1998, Pergamon Press: Oxford, UK and Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/S0305-0548(97)00093-2. See also [2761].
2763. Gregory Valigiani, Evelyne Lutton, Cyril Fonlupt, and Pierre Collet. Optimisation par Hommilière de Chemins Pédagogiques pour un Logiciel d'e-Learning. Technique et Science Informatique (TSI), 26(10):1245–1267, 2007, TSI: Cachan cedex, France. doi: 10.3166/tsi.26.1245-1267.
2764. Satyanarayana R. Valluri, Soujanya Vadapalli, and Kamalakar Karlapalem. View Relevance Driven Materialized View Selection in Data Warehousing Environment. In ADC'02 [3081], 2002. doi: 10.1145/563932.563927. Fully available at http://crpit.com/confpapers/CRPITV5Valluri.pdf [accessed 2011-03-29]. See also [2765].
2765. Satyanarayana R. Valluri, Soujanya Vadapalli, and Kamalakar Karlapalem. View Relevance Driven Materialized View Selection in Data Warehousing Environment. Australian Computer Science Communications, 24(2), January–February 2002, IEEE Computer Society Press: Los Alamitos, CA, USA. doi: 10.1145/563932.563927. Fully available at http://crpit.com/confpapers/CRPITV5Valluri.pdf [accessed 2011-03-29]. CiteSeerx: 10.1.1.18.9497. See also [2764].
2766. Werner Van Belle. Automatic Generation of Concurrency Adaptors by Means of Learning Algorithms. PhD thesis, Vrije Universiteit Brussel, Faculteit van de Wetenschappen, Departement Informatica: Brussels, Belgium, August 2001, Theo D'Hondt and Tom Tourwé, Advisors. Fully available at http://borg.rave.org/adaptors/Thesis6.pdf [accessed 2008-06-23]. See also [2767].
2767. Werner Van Belle, Tom Mens, and Theo D'Hondt. Using Genetic Programming to Generate Protocol Adaptors for Interprocess Communication. In ICES'03 [2743], pages 67–73, 2003. Fully available at http://prog.vub.ac.be/Publications/2003/vub-prog-tr-03-33.pdf and http://werner.yellowcouch.org/Papers/genadap03/ICES-online.pdf [accessed 2008-06-23]. See also [2766].
2768. Klemens van Betteray. Gesetzliche und handelsspezifische Anforderungen an die Rückverfolgung. In Vorträge des 7. VDEB-Infotags 2004 – Mobile Computing, RFID und Rückverfolgbarkeit [2800], 2004. EU Verordnung 178/2002.
2769. Alex Van Breedam. An Analysis of the Behavior of Heuristics for the Vehicle Routing Prob-
lem for a Selection of Problems with Vehicle-related, Customer-related, and Time-related
Constraints. PhD thesis, University of Antwerp (RUCA): Antwerp, Belgium, 1994.
2770. Alex Van Breedam. An Analysis of the Behavior of Heuristics for the Vehicle Routing Prob-
lem for a Selection of Problems with Vehicle-Related, Customer-Related, and Time-Related
Constraints. PhD thesis, University of Antwerp (RUCA): Antwerp, Belgium, 1994.
2771. Gertrudis Van de Vijver, Stanley N. Salthe, and Manuela Delpos, editors. Evolutionary Systems – Biological and Epistemological Perspectives on Selection and Self-Organization. Kluwer Academic Publishers: Norwell, MA, USA, 1998. isbn: 0792352602. OCLC: 39556318. Library of Congress Control Number (LCCN): 98036791. LC Classification: QH360.5 .E96 1998.
2772. J. Koen van der Hauw. Evaluating and Improving Steady State Evolutionary Algorithms on Constraint Satisfaction Problems. Master's thesis, Leiden University: Leiden, The Netherlands, August 9, 1996, Ágoston E. Eiben and Han La Poutré, Supervisors. CiteSeerx: 10.1.1.29.1568.
2773. Christiaan M. van der Walt and Etienne Barnard. Data Characteristics that Determine Classifier Performance. In ICAPR'05 [2506], pages 160–165, 2005. Fully available at http://www.meraka.org.za/pubs/CvdWalt.pdf and http://www.patternrecognition.co.za/publications/cvdwalt_data_characteristics_classifiers.pdf [accessed 2009-09-09].
2774. Bas C. van Fraassen. Laws and Symmetry, Clarendon Paperbacks. Oxford University Press,
Inc.: New York, NY, USA, 1989. isbn: 0198248601. Google Books ID: LzXuE1Sri wC.
2775. Jano I. van Hemert and Carlos Cotta, editors. Proceedings of the 8th European Conference on Evolutionary Computation in Combinatorial Optimization (EvoCOP'08), March 26–28, 2008, Naples, Italy, volume 4972/2008 in Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. isbn: 3-540-78603-1. Library of Congress Control Number (LCCN): 2008922955.
2776. Peter J. M. van Laarhoven and Emile H. L. Aarts, editors. Simulated Annealing: Theory and Applications, volume 37 in Mathematics and its Applications. Kluwer Academic Publishers: Norwell, MA, USA, 1987. isbn: 90-277-2513-6. Google Books ID: -IgUab6Dp IC and URDXYgEACAAJ. OCLC: 15548651, 230902948, 471714266, 489996631, and 638812897. Library of Congress Control Number (LCCN): 87009666. GBV-Identification (PPN): 020442645 and 043026036. LC Classification: QA402.5 .L3 1987. Reviewed in [1316].
2777. Jan van Leeuwen, editor. Computer Science Today – Recent Trends and Developments, volume 1000/1995 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany, 1995. doi: 10.1007/BFb0015232. isbn: 3-540-60105-8. Google Books ID: YHxQAAAAMAAJ.
2778. Erik van Nimwegen and James P. Crutchfield. Optimizing Epochal Evolutionary Search: Population-Size Dependent Theory. Machine Learning, 45(1):77–114, October 2001, Kluwer Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht, Netherlands, Leslie Pack Kaelbling, editor. doi: 10.1023/A:1012497308906.
2779. Erik van Nimwegen, James P. Crutchfield, and Martijn A. Huynen. Neutral Evolution of Mutational Robustness. Proceedings of the National Academy of Science of the United States of America (PNAS), 96(17):9716–9720, August 17, 1999, National Academy of Sciences: Washington, DC, USA. Fully available at http://www.pnas.org/cgi/reprint/96/17/9716.pdf [accessed 2008-07-02].
2780. Erik van Nimwegen, James P. Crutchfield, and Melanie Mitchell. Statistical Dynamics of the Royal Road Genetic Algorithm. Theoretical Computer Science, 229(1–2):41–102, November 6, 1999, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/S0304-3975(99)00119-X. CiteSeerx: 10.1.1.43.3594.
2781. Maarten van Someren and Gerhard Widmer, editors. 9th European Conference on Machine Learning (ECML'97), April 23–25, 1997, Prague, Czech Republic, volume 1224/1997 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-62858-4. isbn: 3-540-62858-4. Google Books ID: BC4- V0D-qUC and w4lfPwAACAAJ. OCLC: 36676064, 56867902, and 82399375.
2782. Harry L. Van Trees. Detection, Estimation, and Modulation Theory, Part I. Wiley Interscience: Chichester, West Sussex, UK, 1968–February 2007. isbn: 0471093904, 0471095176, and 0471899550. Google Books ID: 2kkHAAAACAAJ, NSddAAAACAAJ, RywEAAAACAAJ, RywEAAAACAAJ, Xzp7VkuFqXYC, and udD3AQAACAAJ. OCLC: 49665206, 85820539, 234257633, and 249067915.
2783. David A. van Veldhuizen and Laurence D. Merkle. Multiobjective Evolutionary Algorithms: Classifications, Analyses, and New Innovations. PhD thesis, Air University, Air Force Institute of Technology: Wright-Patterson Air Force Base, OH, USA, June 1999, Gary B. Lamont, Supervisor, Richard F. Deckro and Thomas F. Reid, Committee members. Fully available at http://citeseer.ist.psu.edu/old/vanveldhuizen99multiobjective.html, http://handle.dtic.mil/100.2/ADA364478, and http://neo.lcc.uma.es/emoo/veldhuizen99a.ps.gz [accessed 2009-08-05]. ID: AFIT/DS/ENG/99-01.
2784. Leonardo Vanneschi and Marco Tomassini. A Study on Fitness Distance Correlation and Problem Difficulty for Genetic Programming. In GECCO'02 GSWS [1790], pages 307–310, 2002. Fully available at http://personal.disco.unimib.it/Vanneschi/GECCO_2002_PHD_WORKSHOP.pdf and http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/vanneschi_2002_gecco_workshop.html [accessed 2008-07-20]. See also [590].
2785. Leonardo Vanneschi, Manuel Clergue, Philippe Collard, Marco Tomassini, and Sébastien Verel. Fitness Clouds and Problem Hardness in Genetic Programming. In GECCO'04-II [752], pages 690–701, 2004. doi: 10.1007/b98645. Fully available at http://hal.archives-ouvertes.fr/hal-00160055/en/ [accessed 2007-10-14], http://hal.inria.fr/docs/00/16/00/55/PDF/fc_gp.pdf, http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/vanneschi_fca_gecco2004.html [accessed 2008-07-20], and http://www.i3s.unice.fr/~verel/publi/gecco04-FCandPbHardnessGP.pdf [accessed 2007-10-14].
2786. Leonardo Vanneschi, Marco Tomassini, Philippe Collard, and Sébastien Verel. Negative Slope Coefficient. A Measure to Characterize Genetic Programming. In EuroGP'06 [607], pages 178–189, 2006. doi: 10.1007/11729976_16. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/eurogp06_VanneschiTomassiniCollardVerel.html [accessed 2008-07-20].
2787. Leonardo Vanneschi, Steven Matt Gustafson, Alberto Moraglio, Ivanoe de Falco, and Marc Ebner, editors. Proceedings of the 12th European Conference on Genetic Programming (EuroGP'09), April 15–17, 2009, Eberhard-Karls-Universität Tübingen, Fakultät für Informations- und Kognitionswissenschaften: Tübingen, Germany, volume 5481/2009 in Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/978-3-642-01181-8. isbn: 3-642-01180-2.
2788. Vladimir Naumovich Vapnik. The Nature of Statistical Learning Theory, Information Science and Statistics. Springer-Verlag GmbH: Berlin, Germany, 1995–1999. isbn: 0-387-98780-0. Google Books ID: sna9BaxVbj8C. OCLC: 41959173 and 247524710.
2789. Francisco J. Varela and Paul Bourgine, editors. Toward a Practice of Autonomous Systems: Proceedings of the First European Conference on Artificial Life (ECAL'91), December 11–13, 1991, Paris, France, Bradford Books. MIT Press: Cambridge, MA, USA. isbn: 0-262-72019-1. Published in 1992.
2790. Himanshu Vashishtha, Michael Smit, and Eleni Stroulia. Moving Text Analysis Tools to the Cloud. In SERVICES Cup'10 [2457], pages 107–114, 2010. doi: 10.1109/SERVICES.2010.91. INSPEC Accession Number: 11535163.
2791. Athanasios V. Vasilakos, Kostas G. Anagnostakis, and Witold Pedrycz. Application of Computational Intelligence Techniques in Active Networks. Soft Computing – A Fusion of Foundations, Methodologies and Applications, 5(4):264–271, August 2001, Springer-Verlag: Berlin/Heidelberg. doi: 10.1007/s005000100100.
2792. Vesselin K. Vassilev and Julian Francis Miller. The Advantages of Landscape Neutrality in Digital Circuit Evolution. In ICES'00 [1902], pages 252–263, 2000. doi: 10.1007/3-540-46406-9_25. CiteSeerx: 10.1.1.41.3777.
2793. Miguel A. Vega-Rodríguez, Juan A. Gómez-Pulido, Enrique Alba Torres, David Vega-Pérez, Silvio Priem-Mendes, and Guillermo Molina. Evaluation of Different Metaheuristics Solving the RND Problem. In EvoWorkshops'07 [1050], pages 101–110, 2007. doi: 10.1007/978-3-540-71805-5_11.
2794. Miguel A. Vega-Rodríguez, David Vega-Pérez, Juan A. Gómez-Pulido, and Juan M. Sánchez-Pérez. Radio Network Design Using Population-Based Incremental Learning and Grid Computing with BOINC. In EvoWorkshops'07 [1050], pages 91–100, 2007. doi: 10.1007/978-3-540-71805-5_10.
2795. Goran Velinov, Danilo Gligoroski, and Margita Kon-Popovska. Hybrid Greedy and Genetic Algorithms for Optimization of Relational Data Warehouses. In AIAP'07 [776], pages 470–475, 2007.
2796. Goran Velinov, Boro Jakimovski, Darko Cerepnalkoski, and Margita Kon-Popovska. Improvement of Data Warehouse Optimization Process by Workflow Gridification. In ADBIS [143], pages 295–304, 2008. doi: 10.1007/978-3-540-85713-6_21.
2797. Manuela M. Veloso, editor. AI and its benefits to society – Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI'07), January 6–12, 2007, Hyderabad, India. AAAI Press: Menlo Park, CA, USA. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-2007/CONTENT/contents2007.html and http://www.ijcai.org/papers07/contents.php [accessed 2008-04-01]. See also [1296].
2798. Gerhard Venter and Jaroslaw Sobieszczanski-Sobieski. Particle Swarm Optimization. In 43rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference and Exhibit [84], 2002. CiteSeerx: 10.1.1.57.7431. See also [2799].
2799. Gerhard Venter and Jaroslaw Sobieszczanski-Sobieski. Particle Swarm Optimization. AIAA Journal, 41(8):1583–1589, August 2003, American Institute of Aeronautics and Astronautics: Reston, VA, USA. Fully available at http://pdf.aiaa.org/getfile.cfm?urlX=2%3CWIG7D%2FQKU%3E6B5%3AKF5%2B%5CQ%3AK%3E%0A&urla=%26%2B%22L%2F%22P%22%40%0A&urlb=%21%2A%20%20%20%0A&urlc=%21%2A0%20%20%0A&urld=%21%2A0%20%20%0A&urle=%27%28%22H%22%23PJAT%20%20%20%0A [accessed 2010-12-24]. See also [2798].
2800. Vorträge des 7. VDEB-Infotags 2004 – Mobile Computing, RFID und Rückverfolgbarkeit, July 28, 2004. Verband der EDV-Software- und Beratungsunternehmen (VDEB): Aachen, North Rhine-Westphalia, Germany and VDEB Verband IT-Mittelstand e.V.: Stuttgart, Germany.
2801. Proceedings of the Workshop SAKS 2007, March 1, 2007, Bern, Switzerland. Verband der
Elektrotechnik, Elektronik und Informationstechnik e.V. (VDE): Frankfurt am Main, Ger-
many.
2802. Sébastien Verel, Philippe Collard, and Manuel Clergue. Measuring the Evolvability Landscape to Study Neutrality. In GECCO'06 [1516], pages 613–614, 2006. doi: 10.1145/1143997.1144107. arXiv ID: 0709.4011v1.
2803. Proceedings of the Second European Congress on Fuzzy and Intelligent Technologies (EUFIT'94), September 20–23, 1994, Aachen, North Rhine-Westphalia, Germany. Verlag der Augustinus Buchhandlung: Aachen, North Rhine-Westphalia, Germany.
2804. Proceedings of the First European Congress on Fuzzy and Intelligent Technologies (EUFIT'93), September 7–9, 1993, Aachen, North Rhine-Westphalia, Germany. Verlag der Augustinus Buchhandlung: Aachen, North Rhine-Westphalia, Germany and ELITE Foundation: Aachen, North Rhine-Westphalia, Germany.
2805. Proceedings of the Third European Congress on Intelligent Techniques and Soft Computing (EUFIT'95), August 28–31, 1995, Aachen, North Rhine-Westphalia, Germany. Verlag der Augustinus Buchhandlung: Aachen, North Rhine-Westphalia, Germany, Verlag und Druck Mainz GmbH: Aachen, North Rhine-Westphalia, Germany, and ELITE Foundation: Aachen, North Rhine-Westphalia, Germany.
2806. Proceedings of the 6th European Congress on Intelligent Techniques and Soft Computing (EUFIT'98), September 7–10, 1998, Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen: Aachen, North Rhine-Westphalia, Germany. Verlag und Druck Mainz GmbH: Aachen, North Rhine-Westphalia, Germany. isbn: 3-89653-500-5.
2807. Proceedings of the 7th European Congress on Intelligent Techniques and Soft Computing (EUFIT'99), September 13–16, 1999, Aachen, North Rhine-Westphalia, Germany. Verlag und Druck Mainz GmbH: Aachen, North Rhine-Westphalia, Germany.
2808. William T. Vetterling, Saul A. Teukolsky, William H. Press, and Brian P. Flannery. Numerical Recipes in C++ Example Book – The Art of Scientific Computing. Cambridge University Press: Cambridge, UK, second edition, February 7, 2002. isbn: 0521750342. Google Books ID: gwijz-OyIYEC. OCLC: 48265610, 314335535, 423496209, 469630365, and 610691802. Library of Congress Control Number (LCCN): 2001052753. GBV-Identification (PPN): 336225350 and 374254729. LC Classification: QA76.73.C153 N83 2002.
2809. Sixth European Conference on Machine Learning (ECML'93), April 5–7, 1993, Vienna, Austria.
2810. 6th Metaheuristics International Conference (MIC'05), August 22–26, 2005, Vienna, Austria. Partly available at http://www.mic2005.org/ [accessed 2007-09-12].
2811. Savings Ants for the Vehicle Routing Problem. Technical Report 63, Vienna University of Economics and Business Administration: Vienna, Austria, December 2001. ID: oai:epub.wu-wien.ac.at:epub-wu-01 1f1. See also [802].
2812. Daniele Vigo. VRPLIB: A Vehicle Routing Problem LIBrary. Università degli Studi di Bologna, Dipartimento di Elettronica, Informatica e Sistemistica (DEIS), Operations Research Group (Ricerca Operativa): Bologna, Emilia-Romagna, Italy, October 3, 2003. Fully available at http://or.ingce.unibo.it/research/vrplib-a-vehicle-routing-problem-resources-library and http://www.or.deis.unibo.it/research_pages/ORinstances/VRPLIB/VRPLIB.html [accessed 2011-10-12].
2813. Daniele Vigo and Maria Battarra. Database of Heterogeneous Vehicle Routing Problems. Università degli Studi di Bologna, Dipartimento di Elettronica, Informatica e Sistemistica (DEIS), Operations Research Group (Ricerca Operativa): Bologna, Emilia-Romagna, Italy, January 19, 2007. Fully available at http://apice54.ingce.unibo.it/hvrp/index.php [accessed 2011-10-12].
2814. Frank J. Villegas, Tom Cwik, Yahya Rahmat-Samii, and Majid Manteghi. A Parallel Electromagnetic Genetic-Algorithm Optimization (EGO) Application for Patch Antenna Design. IEEE Transactions on Antennas and Propagation, 52(9):2424–2435, September 3, 2004, IEEE Antennas & Propagation Society: Marsfield, NSW, Australia. doi: 10.1109/TAP.2004.834071.
2815. Andrei Vladimirescu. The SPICE Book. John Wiley & Sons Ltd.: New York, NY, USA,
1994. isbn: 0471609269.
2816. Bernhard Friedrich Voigt. Der Handlungsreisende – wie er sein soll und was er zu thun hat, um Aufträge zu erhalten und eines glücklichen Erfolgs in seinen Geschäften gewiß zu sein – von einem alten Commis-Voyageur. Voigt: Ilmenau, Germany, 1832. OCLC: 258013333. Excerpt: . . . Durch geeignete Auswahl und Planung der Tour kann man oft so viel Zeit sparen, daß wir einige Vorschläge zu machen haben. . . . Der wichtigste Aspekt ist, so viele Orte wie möglich zu erreichen, ohne einen Ort zweimal zu besuchen. . . . . Newly published as [2817].
2817. Bernhard Friedrich Voigt. Der Handlungsreisende – wie er sein soll und was er zu thun hat, um Aufträge zu erhalten und eines glücklichen Erfolgs in seinen Geschäften gewiß zu sein – von einem alten Commis-Voyageur. Verlag Bernd Schramm: Kiel, Germany, 1981. isbn: 3-921361-24-9. New edition of [2816].
2818. Hans-Michael Voigt, Werner Ebeling, Ingo Rechenberg, and Hans-Paul Schwefel, editors. Pro-
ceedings of the 4th International Conference on Parallel Problem Solving from Nature (PPSN
IV), September 2224, 1996, Berlin, Germany, volume 1141/1996 in Lecture Notes in Com-
puter Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-61723-X.
isbn: 3-540-61723-X. Google Books ID: uN0mgUx-VRUC. OCLC: 35331240, 150419867,
and 246323673.
2819. Christian Voigtmann. Integration Evolutionärer Klassifikatoren in Weka. Bachelor's thesis, University of Kassel, Fachbereich 16: Elektrotechnik/Informatik, Distributed Systems Group: Kassel, Hesse, Germany, October 26, 2008, Thomas Weise, Advisor, Kurt Geihs and Heinrich Werner, Committee members. See also [2855, 2902].
2820. A. Volgenant. Solving Some Lexicographic Multi-Objective Combinatorial Problems. European Journal of Operational Research (EJOR), 139(3):578–584, June 16, 2002, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers Ltd.: Amsterdam, The Netherlands. doi: 10.1016/S0377-2217(01)00214-4.
2821. Paul T. von Hippel. Mean, Median, and Skew: Correcting a Textbook Rule. Journal
of Statistics Education (JSE), 13(2), July 2005, American Statistical Association (ASA):
Alexandria, VA, USA. Fully available at http://www.amstat.org/publications/jse/
v13n2/vonhippel.html [accessed 2007-09-15].
2822. Richard von Mises. Probability, Statistics, and Truth. W. Hodge & Co.: Glasgow, Scotland, UK, Dover Publications: Mineola, NY, USA, and Allen & Unwin: Crows Nest, NSW, Australia, 2nd edition, 1939–1957. isbn: 0486242145. Google Books ID: KSE4AAAAMAAJ, enxeAAAAIAAJ, and wFFK P8Cpk0C. OCLC: 8320292 and 256165308.
2823. Richard von Mises. Mathematical Theory of Probability and Statistics. Academic Press
Professional, Inc.: San Diego, CA, USA and Elsevier Science Publishers B.V.: Essex, UK,
1964. isbn: 0127273565. Google Books ID: 3s8-AAAAIAAJ and 7gnFJQAACAAJ.
2824. Paul von Ragué Schleyer and Johnny Gasteiger, editors. Encyclopedia of Computational Chemistry. Volume III: Databases and Expert Systems. John Wiley & Sons Ltd.: New York, NY, USA, September 1998. isbn: 0-471-96588-X. OCLC: 490503997. Library of Congress Control Number (LCCN): 98037164. LC Classification: QD39.3.E46 E53 1998.
2825. Matthias von Randow. Güterverkehr und Logistik als tragende Säule der Wirtschaft zukunftssicher gestalten. In Das Beste Der Logistik: Innovationen, Strategien, Umsetzungen [234], pages 49–53. Springer-Verlag GmbH: Berlin, Germany and Bundesvereinigung Logistik e.V. (BVL): Bremen, Germany, 2008.
2826. Michael D. Vose. Generalizing the Notion of Schema in Genetic Algorithms. Artificial Intelligence, 50(3):385–396, August 1991, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/0004-3702(91)90019-G.
2827. Michael D. Vose and Alden H. Wright. Form Invariance and Implicit Parallelism. Evolutionary Computation, 9(3):355–370, Fall 2001, MIT Press: Cambridge, MA, USA. doi: 10.1162/106365601750406037. Fully available at http://www.cs.umt.edu/u/wright/papers/vose_wright2001form_invariance.ps.gz [accessed 2010-07-30]. CiteSeerx: 10.1.1.7.4294. PubMed ID: 11522211.
2828. Stefan Voß, Silvano Martello, Ibrahim H. Osman, and Catherine Roucairol, editors. 2nd Metaheuristics International Conference – Meta-Heuristics: Advances and Trends in Local Search Paradigms for Optimization (MIC'97), July 21–24, 1997, Sophia-Antipolis, France. Kluwer Publishers: Boston, MA, USA. isbn: 0-7923-8369-9. Published in 1998.
W
2829. Conrad Hal Waddington. Canalization of Development and the Inheritance of Acquired Characters. Nature, 150(3811):563–565, November 14, 1942. doi: 10.1038/150563a0. Fully available at http://www.nature.com/nature/journal/v150/n3811/pdf/150563a0.pdf [accessed 2008-09-10]. See also [2832].
2830. Conrad Hal Waddington. Genetic Assimilation of an Acquired Character. Evolution – International Journal of Organic Evolution, 7(2):118–128, June 1953, Society for the Study of Evolution (SSE) and Wiley Interscience: Chichester, West Sussex, UK. JSTOR Stable ID: 2405747.
2831. Conrad Hal Waddington. The Baldwin Effect, Genetic Assimilation and Homeostasis. Evolution – International Journal of Organic Evolution, 7(4):386–387, December 1953, Society for the Study of Evolution (SSE) and Wiley Interscience: Chichester, West Sussex, UK. JSTOR Stable ID: 2405346.
2832. Conrad Hal Waddington. Canalization of Development and the Inheritance of Acquired Characters. In Adaptive Individuals in Evolving Populations: Models and Algorithms [255], pages 91–97. Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA and Westview Press, 1996. See also [2829].
2833. Andreas Wagner. Robustness and Evolvability in Living Systems, volume 9 in Princeton Studies in Complexity. Princeton University Press: Princeton, NJ, USA, 2005–2007. isbn: 0691122407 and 0691134049. Partly available at http://press.princeton.edu/titles/8002.html [accessed 2009-07-10]. Google Books ID: 7O1bGwAACAAJ.
2834. Andreas Wagner. Robustness, Evolvability, and Neutrality. FEBS Letters, 579(8):1772–1778, March 21, 2005, Federation of European Biochemical Societies (FEBS): London, UK and Elsevier Science Publishers B.V.: Essex, UK, Robert Russell and Giulio Superti-Furga, editors. doi: 10.1016/j.febslet.2005.01.063. PubMed ID: 15763550.
2835. Günter P. Wagner and Lee Altenberg. Complex Adaptations and the Evolution of Evolvability. Evolution – International Journal of Organic Evolution, 50(3):967–976, June 1996, Society for the Study of Evolution (SSE) and Wiley Interscience: Chichester, West Sussex, UK. Fully available at http://dynamics.org/Altenberg/PAPERS/CAEE/ [accessed 2009-07-10]. CiteSeerx: 10.1.1.28.1524.
2836. Michael Wagner, Dieter Hogrefe, Kurt Geihs, and Klaus David, editors. First Workshop on Global Sensor Networks (GSN'09), March 6, 2009, University of Kassel, Fachbereich 16: Elektrotechnik/Informatik, Distributed Systems Group: Kassel, Hesse, Germany, volume 17. European Association of Software Science and Technology (EASST; Universität Potsdam, Institute for Informatics): Potsdam, Germany. Fully available at http://eceasst.cs.tu-berlin.de/index.php/eceasst/issue/view/24 [accessed 2009-09-07]. Partly available at http://ipvs.informatik.uni-stuttgart.de/vs/gsn09/ [accessed 2011-02-28].
2837. Tobias Wagner, Nicola Beume, and Boris Naujoks. Pareto-, Aggregation-, and Indicator-Based Methods in Many-Objective Optimization. Reihe Computational Intelligence: Design and Management of Complex Technical Processes and Systems by Means of Computational Intelligence Methods CI-217/06, Universität Dortmund, Collaborative Research Center (Sonderforschungsbereich) 531: Dortmund, North Rhine-Westphalia, Germany, September 2006. Fully available at http://hdl.handle.net/2003/26125 [accessed 2009-07-19]. issn: 1433-3325. See also [2838].
2838. Tobias Wagner, Nicola Beume, and Boris Naujoks. Pareto-, Aggregation-, and Indicator-Based Methods in Many-Objective Optimization. In EMO'07 [2061], pages 742–756, 2007. doi: 10.1007/978-3-540-70928-2_56. See also [2837].
2839. Markus Wälchli and Torsten Braun. Efficient Signal Processing and Anomaly Detection in Wireless Sensor Networks. In EvoWorkshops'09 [1052], pages 81–86, 2009. doi: 10.1007/978-3-642-01129-0_9.
2840. Karl-Heinz Waldmann and Ulrike M. Stocker, editors. Selected Papers of the Annual International Conference of the German Operations Research Society (GOR), Jointly Organized with the Austrian Society of Operations Research (ÖGOR) and the Swiss Society of Operations Research (SVOR), September 6–8, 2006, University of Karlsruhe: Karlsruhe, Germany, volume 2006 in Operations Research Proceedings. Springer-Verlag: Berlin/Heidelberg. doi: 10.1007/978-3-540-69995-8. isbn: 3-540-69994-5. Google Books ID: HOwaluJXznkC. OCLC: 185135009, 255964381, 300964495, 307478719, and 315261951.
2841. Donald E. Walker and Lewis M. Norton, editors. Proceedings of the 1st International Joint Conference on Artificial Intelligence (IJCAI'69), May 7–9, 1969, Washington, DC, USA. isbn: 0-934613-21-4. Fully available at http://dli.iiit.ac.in/ijcai/IJCAI-69/CONTENT/content.htm [accessed 2008-04-01]. OCLC: 15386096.
2842. David Alexander Ross Wallace. Groups, Rings and Fields, Undergraduate Texts in Mathematics. Springer New York: New York, NY, USA, February 2004. isbn: 3540761772.
2843. David Wallin and Conor Ryan. Maintaining Diversity in EDAs for Real-Valued Optimisation Problems. In FBIT'07 [1277], pages 795–800, 2007. doi: 10.1109/FBIT.2007.132. INSPEC Accession Number: 9979463.
2844. David Wallin and Conor Ryan. On the Diversity of Diversity. In CEC'07 [1343], pages 95–102, 2007. doi: 10.1109/CEC.2007.4424459. INSPEC Accession Number: 9889406.
2845. David L. Waltz, editor. Proceedings of the National Conference on Artificial Intelligence (AAAI'82), August 18–20, 1982, Carnegie Mellon University (CMU): Pittsburgh, PA, USA. American Association for Artificial Intelligence: Los Altos, CA, USA and John W. Kaufmann Inc.: Washington, DC, USA. Partly available at http://www.aaai.org/Conferences/AAAI/aaai82.php [accessed 2007-09-06]. OCLC: 159789798.
2846. Chunzhi Wang and Hongwei Chen, editors. Proceedings of the 2nd International Workshop on Intelligent Systems and Applications (ISA'10), May 22–23, 2010, Wǔhàn, Húběi, China. IEEE Computer Society Press: Los Alamitos, CA, USA. Partly available at http://www.icaiss.org/isa2010/ [accessed 2011-04-10]. Catalogue no.: CFP1060G-ART.
2847. Chunzhi Wang and Zhiwei Ye, editors. Proceedings of the International Workshop on Intelligent Systems and Applications (ISA'09), May 23–24, 2009, Wǔhàn, Húběi, China. IEEE Computer Society Press: Los Alamitos, CA, USA. Partly available at http://www.icaiss.org/isa2009/indexen.html [accessed 2011-04-10]. Library of Congress Control Number (LCCN): 2009900554. Catalogue no.: CFP0960G.
2848. Haiying Wang, Kay Soon Low, Kexin Wei, and Junqing Sun, editors. Proceedings of the 5th International Conference on Natural Computation (ICNC'09), August 14–16, 2009, Tiānjīn, China. IEEE Computer Society: Piscataway, NJ, USA. Library of Congress Control Number (LCCN): 2009903793. Order No.: P3736. Catalogue no.: CFP09CNC-PRT.
2849. Lipo Wang, editor. Support Vector Machines: Theory and Applications, volume 177/2005 in Studies in Fuzziness and Soft Computing. Springer-Verlag GmbH: Berlin, Germany, August 2005. doi: 10.1007/b95439. isbn: 3-540-24388-7. OCLC: 60800790, 316265870, 318299016, and 403848183. Library of Congress Control Number (LCCN): 2005921894.
2850. Lipo Wang, Kefei Chen, and Yew-Soon Ong, editors. Proceedings of the First International Conference on Advances in Natural Computation, Part I (ICNC'05-I), August 27–29, 2005, Chángshā, Húnán, China, volume 3610 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11539087. isbn: 3-540-28323-4. See also [2851, 2852].
2851. Lipo Wang, Kefei Chen, and Yew-Soon Ong, editors. Proceedings of the First International Conference on Advances in Natural Computation, Part II (ICNC'05-II), August 27–29, 2005, volume 3611 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11539117. isbn: 3-540-28325-0. See also [2850, 2852].
2852. Lipo Wang, Kefei Chen, and Yew-Soon Ong, editors. Proceedings of the First International Conference on Advances in Natural Computation, Part III (ICNC'05-III), August 27–29, 2005, volume 3612 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11539902. isbn: 3-540-28320-X. See also [2850, 2851].
2853. Paul P. Wang, editor. Proceedings of the Fifth Joint Conference on Information Science (JCIS 2000), Section: The Third International Workshop on Frontiers in Evolutionary Algorithms (FEA'00), 2000, Atlantic City, NJ, USA. Part of [142].
2854. Pu Wang, Edward P. K. Tsang, Thomas Weise, Ke Tang, and Xin Yao. Using GP to Evolve Decision Rules for Classification in Financial Data Sets. In ICCI'10 [2630], pages 722–727, 2010. doi: 10.1109/COGINF.2010.5599820. Fully available at http://mail.ustc.edu.cn/~wuyou308/paper1.pdf and http://www.it-weise.de/documents/files/WTWTY2010UGPTEDRFCIFDS.pdf [accessed 2010-12-09]. CiteSeerx: 10.1.1.170.3794. INSPEC Accession Number: 11579674. Ei ID: 20105013469298.
2855. Pu Wang, Thomas Weise, and Raymond Chiong. Novel Evolutionary Algorithms for Supervised Classification Problems: An Experimental Study. Evolutionary Intelligence, 4(1):3–16, January 12, 2011, Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/s12065-010-0047-7. Fully available at http://www.it-weise.de/documents/files/WWC2011NEAFSCPAES.pdf. Ei ID: IP51227209. See also [2819, 2854, 2894].
2856. Tzai-Der Wang, Xiaodong Li, Shu-Heng Chen, Xufa Wang, Hussein A. Abbass, Hitoshi Iba, Guoliang Chen, and Xin Yao, editors. Proceedings of the 6th International Conference on Simulated Evolution and Learning (SEAL'06), October 15–18, 2006, University of Science and Technology of China (USTC): Héféi, Ānhuī, China, volume 4247 in Theoretical Computer Science and General Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11903697. isbn: 3-540-47331-9. OCLC: 496560934. Library of Congress Control Number (LCCN): 318289226.
2857. Yingxu Wang, Du Zhang, Jean-Claude Latombe, and Witold Kinsner, editors. Proceedings of the Seventh IEEE International Conference on Cognitive Informatics (ICCI'08), August 14, 2008, Stanford University: Stanford, CA, USA. IEEE Computer Society: Piscataway, NJ, USA.
2858. Yingxu Wang, Bernard Widrow, Bo Zhang, Witold Kinsner, Kenji Sugawara, Fuchun Sun, Thomas Weise, Yixin Zhong, and Du Zhang. Perspectives on Cognitive Informatics and Its Future Development: Summary of Plenary Panel II of IEEE ICCI'10. In ICCI'10 [2630], pages 17–25, 2010. doi: 10.1109/COGINF.2010.5599693. Fully available at http://www.it-weise.de/documents/files/WWZKSSWZZ2010POCIAIFDSOPPIIOII.pdf [accessed 2010-12-09]. Ei ID: 20105013469178. See also [2859, 2890].
2859. Yingxu Wang, Bernard Widrow, Bo Zhang, Witold Kinsner, Kenji Sugawara, Fuchun Sun, Jianhua Lu, Thomas Weise, and Du Zhang. Perspectives on the Field of Cognitive Informatics and Its Future Development. The International Journal of Cognitive Informatics and Natural Intelligence (IJCINI), 5(1):1–17, January–March 2011, Idea Group Publishing (Idea Group Inc., IGI Global): New York, NY, USA. doi: 10.4018/jcini.2011010101. See also [2858, 2890].
2860. Yu Wang, Bin Li, and Thomas Weise. Estimation of Distribution and Differential Evolution Cooperation for Large Scale Economic Load Dispatch Optimization of Power Systems. Information Sciences – Informatics and Computer Science Intelligent Systems Applications: An International Journal, 180(12):2405–2420, June 2010, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.ins.2010.02.015. Fully available at http://www.it-weise.de/documents/files/WLW2010EODADECFLSELDOOPS.pdf [accessed 2010-06-19]. Ei ID: 20101412821019. IDS (SCI): 593PM.
2861. Yu Wang, Bin Li, Thomas Weise, Jianyu Wang, Bo Yuan, and Qiongjie Tian. Self-Adaptive Learning Based Particle Swarm Optimization. Information Sciences – Informatics and Computer Science Intelligent Systems Applications: An International Journal, 181(20):4515–4538, October 15, 2011, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.ins.2010.07.013. Ei ID: IP51031102.
2862. Zai Wang, Tianshi Chen, Ke Tang, and Xin Yao. A Multi-Objective Approach to Redundancy Allocation Problem in Parallel-Series Systems. In CEC'09 [1350], pages 582–589, 2009. doi: 10.1109/CEC.2009.4982998. Fully available at http://nical.ustc.edu.cn/uploads/cec2009/P252.pdf [accessed 2009-09-05]. INSPEC Accession Number: 10688610.
2863. Zai Wang, Ke Tang, and Xin Yao. Multi-objective Approaches to Optimal Testing Resource Allocation in Modular Software Systems. IEEE Transactions on Reliability, 59(3):563–575, September 2010, IEEE Reliability Society: Lafayette, CO, USA. doi: 10.1109/TR.2010.2057310. Fully available at http://staff.ustc.edu.cn/~ketang/papers/WangTangYao_TR10a.pdf [accessed 2011-02-22]. CiteSeerx: 10.1.1.174.7029. INSPEC Accession Number: 11489129.
2864. Zhao-Xia Wang, Zeng-Qiang Chen, and Zhu-Zhi Yuan. QoS Routing Optimization Strategy using Genetic Algorithm in Optical Fiber Communication Networks. Journal of Computer Science & Technology (JCS&T), 19(2):213–217, March 2004, Iberoamerican Science & Technology Education Consortium (ISTEC; University of New Mexico): Albuquerque, NM, USA. doi: 10.1007/BF02944799.
2865. Ziqiang Wang and Dexian Zhang. Optimal Genetic View Selection Algorithm Under Space Constraint. International Journal of Information Technology (IJIT), 11(5):44–51, 2005, Singapore Computer Society (SCS): Singapore. Fully available at http://www.cs.hcmus.edu.vn/elib/bitstream/123456789/83427/1/1684.pdf and http://www.icis.ntu.edu.sg/scs-ijit/115/115_6.pdf [accessed 2011-03-28]. CiteSeerx: 10.1.1.103.5672.
2866. 18th European Conference on Machine Learning (ECML'07), September 17–21, 2007, Warsaw, Poland.
2867. Proceedings of the 6th International Conference on Hybrid Artificial Intelligence Systems (HAIS'11), May 23–25, 2011, Warsaw, Poland.
2868. Satoshi Watanabe. Knowing and Guessing: A Quantitative Study of Inference and Information. John Wiley & Sons Ltd.: New York, NY, USA, 1969. isbn: 0471921300. Google Books ID: fNXjAQAACAAJ. OCLC: 10971, 258565060, and 264995516.
2869. Donald Arthur Waterman and Frederick Hayes-Roth, editors. Selected Papers from the
Workshop on Pattern Directed Inference Systems. Academic Press: New York, NY, USA,
1977, Honolulu, HI, USA. isbn: 0127375503. Google Books ID: 1npQAAAAMAAJ and
xHtQAAAAMAAJ. OCLC: 3843456, 84255208, 247410354, and 310813094.
2870. James Dewey Watson, Tania A. Baker, Stephen P. Bell, Alexander Gann, Michael L. Levine, and Richard Losick. Molecular Biology of the Gene. Pearson Education: Upper Saddle River, NJ, USA. Imprint: Benjamin/Cummings Publishing Company, Inc.: San Francisco, CA, USA, fifth edition, 1987–2003. isbn: 0321223683, 0321248643, 080534635X, 0805346422, and 0805346430. OCLC: 51861844, 56972646, 423675724, and 488848386. Library of Congress Control Number (LCCN): 2003042863. LC Classification: QH506 .M6627 2004.
2871. Geoffrey I. Webb and Xinghuo Yu, editors. Advances in Artificial Intelligence – Proceedings of the 17th Australian Joint Conference on Artificial Intelligence (AI'04), December 4–6, 2004, Central Queensland University: Cairns, Australia, volume 3339/2004 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/b104336. isbn: 3-540-24059-4. Google Books ID: bIWyepRUTKkC. OCLC: 57209655, 57313586, 67000961, 76556374, 224922858, 300264487, 318299687, 473571883, and 496631785. Library of Congress Control Number (LCCN): 2004116040.
2872. Bonnie Lynn Webber and Nils J. Nilsson, editors. Readings in Artificial Intelligence: A Collection of Articles. Tioga Publishing Co.: Wellsboro, PA, USA and Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1981. isbn: 0934613036 and 0935382038. Google Books ID: 2X9QAAAAMAAJ, J2xkAAAAMAAJ, NxTUAQAACAAJ, YzJzHgAACAAJ, and ZX9QAAAAMAAJ. OCLC: 7732055, 12722542, 263999188, 299402077, and 310662867.
2873. Bruce H. Weber and David J. Depew, editors. Evolution and Learning: The Baldwin Effect Reconsidered, Bradford Books. MIT Press: Cambridge, MA, USA, July 2003. isbn: 0-262-23229-4. Google Books ID: yBtRzBilw1MC. OCLC: 50440849. Library of Congress Control Number (LCCN): 2002031992. GBV-Identification (PPN): 353052698. LC Classification: BF698.95 .E95 2003.
2874. David C. Wedge and Douglas B. Kell. Rapid Prediction of Optimum Population Size in Genetic Programming using a Novel Genotype-Fitness Correlation. In GECCO'08 [1519], pages 1315–1322, 2008. Fully available at http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/Wedge_2008_gecco.html [accessed 2009-07-10].
2875. Bill Wedley, editor. Proceedings of the 17th International Conference on Multiple Criteria Decision Making (MCDM'04), August 5–8, 2004, Whistler Conference Center: Whistler, BC, Canada. Partly available at http://www.bus.sfu.ca/events/mcdm/ [accessed 2007-09-10].
2876. Ingo Wegener. Komplexitätstheorie – Grenzen der Effizienz von Algorithmen. Springer-Verlag GmbH: Berlin, Germany, 2003. isbn: 3-540-00161-1. Google Books ID: rFvDwDP0hmkC. Newly published as [2877].
2877. Ingo Wegener. Complexity Theory: Exploring the Limits of Efficient Algorithms. Springer-Verlag: Berlin/Heidelberg, 2005, Randall Pruim, Translator. isbn: 3-540-21045-8. Google Books ID: u7DZSDSUYlQC. See also [2876].
2878. Yaxing Wei, Peng Yue, Upendra Dadi, Min Min, Chengfang Hu, and Liping Di. Active Acquisition of Geospatial Data Products in a Collaborative Grid Environment. In SCContest'06 [554], pages 455–462, 2006. doi: 10.1109/SCC.2006.46. INSPEC Accession Number: 9165409.
2879. Karsten Weicker. Evolutionäre Algorithmen, Leitfaden der Informatik. B. G. Teubner: Leipzig, Germany, March 2002. isbn: 3-519-00362-7. Google Books ID: 3-519-00362-7. OCLC: 51984239. GBV-Identification (PPN): 098084860.
2880. Karsten Weicker and Nicole Weicker. Burden and Benefits of Redundancy. In FOGA'00 [2565], pages 313–333, 2001. Fully available at http://www.imn.htwk-leipzig.de/~weicker/publications/redundancy.pdf [accessed 2008-11-02].
2881. Edward D. Weinberger. Correlated and Uncorrelated Fitness Landscapes and How to Tell the Difference. Biological Cybernetics, 63(5):325–336, September 1993, Springer-Verlag: Berlin/Heidelberg. doi: 10.1007/BF00202749.
2882. Edward D. Weinberger. Local Properties of Kauffman's NK Model, a Tuneably Rugged Energy Landscape. Physical Review A, 44(10):6399–6413, November 1991, American Physical Society: College Park, MD, USA. doi: 10.1103/PhysRevA.44.6399.
2883. Edward D. Weinberger. NP Completeness of Kauffman's N-k Model, A Tuneable Rugged Fitness Landscape. Working Papers 96-02-003, Santa Fe Institute: Santa Fe, NM, USA, February 1996. Fully available at http://www.santafe.edu/media/workingpapers/96-02-003.pdf [accessed 2010-12-04].
2884. Edward D. Weinberger and Peter F. Stadler. Why Some Fitness Landscapes are Fractal. Journal of Theoretical Biology, 163(2):255–275, July 21, 1993, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands. Imprint: Academic Press Professional, Inc.: San Diego, CA, USA. doi: 10.1006/jtbi.1993.1120. Fully available at http://www.tbi.univie.ac.at/papers/Abstracts/93-01-01.ps.gz [accessed 2008-06-01].
2885. Thomas Weise. Herleitungen zu Warteschlangenmodellen (Queuing Theory). University of Kassel, Fachbereich 16: Elektrotechnik/Informatik, Distributed Systems Group: Kassel, Hesse, Germany, November 22, 2005. Fully available at http://www.it-weise.de/documents/files/W2005WS.pdf [accessed 2010-12-07].
2886. Thomas Weise. Genetic Programming for Sensor Networks. Technical Report, University of Kassel, Fachbereich 16: Elektrotechnik/Informatik, Distributed Systems Group: Kassel, Hesse, Germany, January 2006. Fully available at http://www.it-weise.de/documents/files/W2006DGPFa.pdf. See also [2896].
2887. Thomas Weise. SIGOA+DGPF: Evolutionary Computation and Genetic Programming for
Distributed Computing. University of Kassel, Fachbereich 16: Elektrotechnik/Informatik,
Distributed Systems Group: Kassel, Hesse, Germany, August 6, 2007.
2888. Thomas Weise. Evolving Distributed Algorithms with Genetic Programming. PhD thesis, University of Kassel, Fachbereich 16: Elektrotechnik/Informatik, Distributed Systems Group: Kassel, Hesse, Germany, May 4, 2009, Kurt Geihs, Supervisor, Kurt Geihs, Christian F. Tschudin, Birgit Vogel-Heuser, and Albert Zündorf, Committee members. Fully available at http://www.it-weise.de/documents/files/W2009DISS.pdf. urn: urn:nbn:de:hebis:34-2009051127217. Won the Dissertation Award of The Association of German Engineers (Verein Deutscher Ingenieure, VDI). See also [2892, 2898, 2899].
2889. Thomas Weise. Evolving Distributed Algorithms with Genetic Programming. University of Basel, Computer Science Department, Computer Networks Group: Basel, Switzerland, January 15, 2009. See also [2888].
2890. Thomas Weise. CI/CC and Evolutionary Computation. July 9, 2010. Fully available
at http://www.it-weise.de/documents/files/W2010CCAEC.pdf. Position Paper.
See also [2858].
2891. Thomas Weise. Genetic Programming. University of Science and Technology of China (USTC), School of Computer Science and Technology, Nature Inspired Computation and Applications Laboratory (NICAL): Héféi, Ānhuī, China, August 11, 2010.
2892. Thomas Weise. Global Optimization Algorithms – Theory and Application. it-weise.de (self-published): Germany, 2009. Fully available at http://www.it-weise.de/projects/book.pdf.
2893. Thomas Weise and Raymond Chiong. Evolutionary Approaches and Their Applications to Distributed Systems. In Intelligent Systems for Automated Learning and Adaptation: Emerging Trends and Applications [561], Chapter 6, pages 114–149. Information Science Reference: Hershey, PA, USA, 2009. doi: 10.4018/978-1-60566-798-0.ch006.
2894. Thomas Weise and Raymond Chiong. Evolutionary Data Mining Approaches for Rule-based and Tree-based Classifiers. In ICCI'10 [2630], pages 696–703, 2010. doi: 10.1109/COGINF.2010.5599821. Fully available at http://www.it-weise.de/documents/files/WC2010EDMAFRBATBC.pdf [accessed 2010-06-24]. INSPEC Accession Number: 11579675. Ei ID: 20105013469299. See also [2855].
2895. Thomas Weise and Raymond Chiong. A Novel Extremal Optimization Algorithm for the Template Design Problem. International Journal of Organizational and Collective Intelligence (IJOCI), 2(2):1–17, April–June 2011, Idea Group Publishing (Idea Group Inc., IGI Global): New York, NY, USA. See also [566].
2896. Thomas Weise and Kurt Geihs. Genetic Programming Techniques for Sensor Networks. In 5. GI/ITG KuVS Fachgespräch Drahtlose Sensornetze [1833], pages 21–25, 2006. Fully available at http://elib.uni-stuttgart.de/opus/volltexte/2006/2838/ and http://www.it-weise.de/documents/files/WG2006DGPFb.pdf [accessed 2007-11-07]. CiteSeerx: 10.1.1.152.8029 and 10.1.1.61.6782. See also [2886, 2897].
2897. Thomas Weise and Kurt Geihs. DGPF – An Adaptable Framework for Distributed Multi-Objective Search Algorithms Applied to the Genetic Programming of Sensor Networks. In BIOMA'06 [919], pages 157–166, 2006. Fully available at http://www.it-weise.de/documents/files/WG2006DGPFc.pdf. CiteSeerx: 10.1.1.74.8243. See also [2896].
2898. Thomas Weise and Ke Tang. Evolving Distributed Algorithms with Genetic Programming. IEEE Transactions on Evolutionary Computation (IEEE-EC), 16(2), September 26, 2011–2012, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/TEVC.2011.2112666. See also [2888].
2899. Thomas Weise and Michael Zapf. Evolving Distributed Algorithms with Genetic Programming: Election. In GEC'09 [3000], pages 577–584, 2009. doi: 10.1145/1543834.1543913. Fully available at http://www.it-weise.de/documents/files/WZ2009EDAWGPE.pdf. Ei ID: 20093012219539. See also [2888].
2900. Thomas Weise, Volker Fickert, Thomas Ziegs, Rico Roberg, Rene Kreiner, Roland Fischer, Stefan Gelbke, Harry Briem, and et al. VisioOS – Visual Simulation of Operating Systems. Chemnitz University of Technology, Faculty of Computer Science, Operating Systems Group: Chemnitz, Sachsen, Germany, May 15, 2005, Winfried Kalfa, Supervisor. Fully available at http://www.it-weise.de/documents/files/WEA2005VISIOS.pdf [accessed 2010-12-07]. See also [910].
2901. Thomas Weise, Mirko Dietrich, Marc Kirchhoff, Lado Kumsiashvili, Stefan Niemczyk, and Alexander Podlich. DGPF – Various other Documents and Slides. University of Kassel, Fachbereich 16: Elektrotechnik/Informatik, Distributed Systems Group: Kassel, Hesse, Germany, October 26, 2006. Fully available at http://www.it-weise.de/documents/files/W2006DGPFo.pdf [accessed 2010-12-07]. See also [2888].
2902. Thomas Weise, Stefan Achler, Martin Göb, Christian Voigtmann, and Michael Zapf. Evolving Classifiers – Evolutionary Algorithms in Data Mining. Kasseler Informatikschriften (KIS) 2007, 4, University of Kassel, Fachbereich 16: Elektrotechnik/Informatik: Kassel, Hesse, Germany, September 28, 2007. Fully available at http://www.it-weise.de/documents/files/WAGVZ2007DMC.pdf [accessed 2010-12-07]. CiteSeerx: 10.1.1.89.6165. urn: urn:nbn:de:hebis:34-2007092819260.
2903. Thomas Weise, Steffen Bleul, and Kurt Geihs. Web Service Composition Systems for the Web Service Challenge – A Detailed Review. Technical Report 2007, 7, University of Kassel, Fachbereich 16: Elektrotechnik/Informatik: Kassel, Hesse, Germany, November 19, 2007. Fully available at http://www.it-weise.de/documents/files/WBG2007WSCb.pdf [accessed 2007-11-20]. CiteSeerx: 10.1.1.86.7774. urn: urn:nbn:de:hebis:34-2007111919638. See also [334, 335, 2908].
2904. Thomas Weise, Kurt Geihs, and Philipp Andreas Baer. Genetic Programming for Proactive Aggregation Protocols. In ICANNGA'07-I [257], pages 167–173, 2007. doi: 10.1007/978-3-540-71618-1. Fully available at http://www.it-weise.de/documents/files/W2007DGPFb.pdf. CiteSeerx: 10.1.1.152.9906. Ei ID: 20080311022959. IDS (SCI): BGC96. See also [2911].
2905. Thomas Weise, Michael Zapf, and Kurt Geihs. Rule-based Genetic Programming. In BIONETICS'07 [1342], pages 8–15, 2007. doi: 10.1109/BIMNICS.2007.4610073. Fully available at http://www.it-weise.de/documents/files/WZG2007RBGP.pdf. Ei ID: 20084111632894. IDS (SCI): BKM77.
2906. Thomas Weise, Michael Zapf, Mohammad Ullah Khan, and Kurt Geihs. Genetic Programming meets Model-Driven Development. Kasseler Informatikschriften (KIS) 2007, 2, University of Kassel, Fachbereich 16: Elektrotechnik/Informatik: Kassel, Hesse, Germany, July 2, 2007. Fully available at http://www.it-weise.de/documents/files/WZKG2007DGPFd.pdf [accessed 2007-09-17]. CiteSeerx: 10.1.1.94.317. urn: urn:nbn:de:hebis:34-2007070218786. See also [2907, 2916].
2907. Thomas Weise, Michael Zapf, Mohammad Ullah Khan, and Kurt Geihs. Genetic Programming meets Model-Driven Development. In HIS'07 [1572], pages 332–335, 2007. doi: 10.1109/HIS.2007.11. Fully available at http://www.it-weise.de/documents/files/WZKG2007DGPFg.pdf. Ei ID: 20082911386590. See also [2906, 2916].
2908. Thomas Weise, Steen Bleul, Diana Elena Comes, and Kurt Geihs. Dierent Ap-
proaches to Semantic Web Service Composition. In ICIW08 [1862], pages 9096,
2008. doi: 10.1109/ICIW.2008.32. Fully available at http://www.it-weise.de/
documents/files/WBCG2008ICIW.pdf. INSPEC Accession Number: 10067008. Ei
ID: 20083711536637. IDS (SCI): BRC96. See also [2903].
2909. Thomas Weise, Steffen Bleul, Marc Kirchhoff, and Kurt Geihs. Semantic Web Service
Composition for Service-Oriented Architectures. In CEC/EEE08 [1346], pages 355–358,
2008. doi: 10.1109/CECandEEE.2008.148. Fully available at http://www.it-weise.de/
documents/files/WBKG2008SWSCFSOA.pdf. INSPEC Accession Number: 10475110. Ei
ID: 20091512027447. IDS (SCI): BJF01. See also [204, 334, 335].
2910. Thomas Weise, Stefan Niemczyk, Hendrik Skubch, Roland Reichle, and Kurt Geihs.
A Tunable Model for Multi-Objective, Epistatic, Rugged, and Neutral Fitness Landscapes.
In GECCO08 [1519], pages 795–802, 2008. doi: 10.1145/1389095.1389252.
Fully available at http://www.it-weise.de/documents/files/WNSRG2008GECCO.pdf.
CiteSeerx: 10.1.1.152.8952 and 10.1.1.153.308. Ei ID: 20085111786051.
2911. Thomas Weise, Michael Zapf, and Kurt Geihs. Evolving Proactive Aggregation Protocols.
In EuroGP08 [2080], pages 254–265, 2008. doi: 10.1007/978-3-540-78671-9_22. Fully
available at http://www.it-weise.de/documents/files/WZG2008DGPFa.pdf. Ei
ID: 20083011392084. IDS (SCI): BHN52. See also [2904].
2912. Thomas Weise, Alexander Podlich, and Christian Gorldt. Solving Real-World Vehicle Routing
Problems with Evolutionary Algorithms. In Natural Intelligence for Scheduling, Planning
and Packing Problems [564], Chapter 2, pages 29–53. Springer-Verlag: Berlin/Heidelberg,
2009. doi: 10.1007/978-3-642-04039-9_2. Fully available at http://www.it-weise.de/
documents/files/WPG2009SRWVRPWEA.pdf [accessed 2010-06-19]. See also [2188, 2913, 2914].
2913. Thomas Weise, Alexander Podlich, Manfred Menze, and Christian Gorldt. Optimierte
Güterverkehrsplanung mit Evolutionären Algorithmen. Industrie Management – Zeitschrift
für industrielle Geschäftsprozesse, 10(3):37–40, June 2, 2009, GITO mbH Verlag für
Industrielle Informationstechnik und Organisation: Berlin, Germany. Fully available at
http://www.it-weise.de/documents/files/WPMG2009OGMEA.pdf and http://www.
logistics.de/logistik/branchen.nsf/D4FF2D93CA69B47FC12575CF004870C8/
$File/gueterverkehrsplanung_optimierung_algorithmen_weise_podlich_
menze_gorldt_gitopdf.pdf [accessed 2009-09-07]. See also [2912, 2914].
2914. Thomas Weise, Alexander Podlich, Kai Reinhard, Christian Gorldt, and Kurt Geihs.
Evolutionary Freight Transportation Planning. In EvoWorkshops09 [1052], pages 768–777,
2009. doi: 10.1007/978-3-642-01129-0_87. Fully available at http://www.it-weise.de/
documents/files/WPRGG2009EFTP.pdf. Ei ID: 20093012218527. IDS (SCI): BJH22. See
also [2912].
2915. Thomas Weise, Michael Zapf, Raymond Chiong, and Antonio Jesús Nebro Urbaneja. Why Is
Optimization Difficult? In Nature-Inspired Algorithms for Optimisation [562], Chapter 1,
pages 1–50. Springer-Verlag: Berlin/Heidelberg, 2009. doi: 10.1007/978-3-642-00267-0_1.
Fully available at http://www.it-weise.de/documents/files/WZCN2009WIOD.pdf.
2916. Thomas Weise, Michael Zapf, Mohammad Ullah Khan, and Kurt Geihs. Combining Genetic
Programming and Model-Driven Development. International Journal of Computational
Intelligence and Applications (IJCIA), 8(1):37–52, March 2009, World Scientific Publishing Co.:
Singapore and Imperial College Press Co.: London, UK. doi: 10.1142/S1469026809002436.
Fully available at http://www.it-weise.de/documents/files/WZKG2009DGPFz.
pdf. Ei ID: 20092012083173. See also [2906, 2907].
2917. Thomas Weise, Stefan Niemczyk, Raymond Chiong, and Mingxu Wan. A Framework for
Multi-Model EDAs with Model Recombination. In EvoNUM11 [2588], pages 304–313,
2011. doi: 10.1007/978-3-642-20525-5_31. Fully available at http://www.it-weise.de/
documents/files/WNCW2011AFFMMEWMR.pdf. See also [2038].
2918. August Weismann. The Germ-Plasm – A Theory of Heredity. Charles Scribner's Sons:
New York, NY, USA and Electronic Scholarly Publishing (ESP): Seattle, WA, USA, 1893,
W. Newton Parker and Harriet Rönnfeld, Translators. Fully available at http://www.esp.
org/books/weismann/germ-plasm/facsimile/ [accessed 2008-09-10].
2919. Gerhard Weiß and Sandip Sen, editors. Adaption and Learning in Multi-Agent Systems,
IJCAI95 Workshop Proceedings (IJCAI-95 WS), August 21, 1995, Montreal, QC, Canada,
volume 1042/1996 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes in
Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/3-540-60923-7.
isbn: 3-540-60923-7. Google Books ID: joFQAAAAMAAJ. OCLC: 34318834, 150397844,
and 246327094. See also [1836, 1861].
2920. Justin Werfel and Radhika Nagpal. Extended Stigmergy in Collective Construction. IEEE
Intelligent Systems Magazine, 21(2):20–28, March–April 2006, IEEE Computer Society Press:
Los Alamitos, CA, USA. doi: 10.1109/MIS.2006.25. Fully available at http://hebb.mit.
edu/people/jkwerfel/ieeeis06.pdf [accessed 2008-06-12].
2921. Justin Werfel, Yaneer Bar-Yam, Daniela Rus, and Radhika Nagpal. Distributed Construction
by Mobile Robots with Enhanced Building Blocks. In ICRA06 [1389], pages 2787–2794, 2006.
doi: 10.1109/ROBOT.2006.1642123. Fully available at http://www.eecs.harvard.edu/
~rad/ssr/papers/icra06-werfel.pdf [accessed 2008-06-12]. CiteSeerx: 10.1.1.87.6701.
INSPEC Accession Number: 9120372.
2922. Gregory M. Werner and Michael G. Dyer. Evolution of Communication in Artificial
Organisms. In Artificial Life II [1668], pages 659–687, 1990. Fully available at
http://www.isrl.uiuc.edu/~amag/langev/paper/werner92evolutionOf.html [accessed 2008-07-28].
2923. A. Wetzel. Evaluation of the Effectiveness of Genetic Algorithms in Combinatorial
Optimization. University of Pittsburgh: Pittsburgh, PA, USA, 1983. Unpublished manuscript,
technical report.
2924. James F. Whidborne, Da-Wei Gu, and Ian Postlethwaite. Algorithms for the Method of
Inequalities – A Comparative Study. In ACC [1325], pages 3393–3397, volume 5, 1995. FA19
= 9:15.
2925. James M. Whitacre, Philipp Rohlfshagen, Axel Bender, and Xin Yao. The Role of Degenerate
Robustness in the Evolvability of Multi-Agent Systems in Dynamic Environments. In PPSN
XI-1 [2412], pages 284–293, 2010. doi: 10.1007/978-3-642-15844-5_29.
2926. R. C. White, jr. A Survey of Random Methods for Parameter Optimization. Simulation –
Transactions of The Society for Modeling and Simulation International, SAGE
Journals Online, 17(5):197–205, 1971, Society for Modeling and Simulation International
(SCS): San Diego, CA, USA. doi: 10.1177/003754977101700504. Fully available at http://
alexandria.tue.nl/extra1/erap/publichtml/7101031.pdf [accessed 2009-07-04].
TH-Report 70-E-16.
2927. L. Darrell Whitley, editor. Proceedings of the Second Workshop on Foundations of Genetic
Algorithms (FOGA92), July 26–29, 1992, Vail, CO, USA. Morgan Kaufmann Publishers
Inc.: San Francisco, CA, USA. isbn: 1-55860-263-1. OCLC: 27723077, 312001104,
438787312, 475768457, 490655463, and 612351520. issn: 1081-6593. Published Febru-
ary 1, 1993.
2928. L. Darrell Whitley, editor. Late Breaking Papers at Genetic and Evolutionary Computation
Conference (GECCO00 LBP), July 10–12, 2000, Riviera Hotel and Casino: Las Vegas, NV,
USA. AAAI Press: Menlo Park, CA, USA. See also [2935, 2986].
2929. L. Darrell Whitley. The GENITOR Algorithm and Selection Pressure: Why Rank-Based
Allocation of Reproductive Trials is Best. In ICGA89 [2414], pages 116–121, 1989.
Fully available at http://citeseer.ist.psu.edu/531140.html [accessed 2007-08-21].
CiteSeerx: 10.1.1.18.8195.
2930. L. Darrell Whitley. A Genetic Algorithm Tutorial. Technical Report CS-93-103,
Colorado State University, Computer Science Department: Fort Collins, CO, USA,
March 10, 1993. Fully available at ftp://ftp.cse.cuhk.edu.hk/pub/EC/GA/
papers/tutor93.ps.gz, http://www.cs.iastate.edu/~honavar/ga_tutorial.pdf,
http://www.cs.uga.edu/~potter/CompIntell/ga_tutorial.pdf,
http://www.emunix.emich.edu/~evett/AI/ga_tutorial.pdf,
http://www.stat.ucla.edu/~yuille/courses/stat202c/ga_tutorial.pdf,
and http://www.wins.uva.nl/~mes/psdocs/ga-tr103.ps.gz [accessed 2010-08-03].
CiteSeerx: 10.1.1.38.4806. See
also [2931].
2931. L. Darrell Whitley. A Genetic Algorithm Tutorial. Statistics and Computing, 4(2):65–85,
June 1994, Springer Netherlands: Dordrecht, Netherlands and Samizdat Press: Golden, CO,
USA. doi: 10.1007/BF00175354. Fully available at http://samizdat.mines.edu/ga_
tutorial/ga_tutorial.ps [accessed 2007-08-12]. See also [2930].
2932. L. Darrell Whitley and Michael D. Vose, editors. Proceedings of the Third Workshop on
Foundations of Genetic Algorithms (FOGA 3), July 31–August 2, 1994, Estes Park, CO,
USA. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-356-5.
Google Books ID: SMuOAAAACAAJ and z2vTGQAACAAJ. OCLC: 32999554, 60271926, and
312293288. GBV-Identication (PPN): 194472310. Published June 1, 1995.
2933. L. Darrell Whitley, V. Scott Gordon, and Keith E. Mathias. Lamarckian Evolution,
The Baldwin Effect and Function Optimization. In PPSN III [693], pages 5–15, 1994.
doi: 10.1007/3-540-58484-6_245. CiteSeerx: 10.1.1.18.2428.
2934. L. Darrell Whitley, Soraya Rana, John Dzubera, and Keith E. Mathias. Evaluating
Evolutionary Algorithms. Artificial Intelligence, 85(1-2):245–276, August 1996, Elsevier
Science Publishers B.V.: Essex, UK. doi: 10.1016/0004-3702(95)00124-7. CiteSeerx: 10.1.1.53.134.
2935. L. Darrell Whitley, David Edward Goldberg, Erick Cantú-Paz, Lee Spector, Ian C. Parmee,
and Hans-Georg Beyer, editors. Proceedings of the Genetic and Evolutionary Computation
Conference (GECCO00), July 8–12, 2000, Riviera Hotel and Casino: Las Vegas, NV, USA.
Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. isbn: 1-55860-708-0. Google
Books ID: GYRQAAAAMAAJ and o-8HAAAACAAJ. A joint meeting of the Ninth International
Conference on Genetic Algorithms (ICGA-2000) and the Fifth Annual Genetic Programming
Conference (GP-2000). See also [2928, 2986].
2936. Dirk Wiesmann, Ulrich Hammel, and Thomas Bäck. Robust Design of Multilayer Optical
Coatings by Means of Evolutionary Algorithms. IEEE Transactions on Evolutionary
Computation (IEEE-EC), 2(4):162–167, November 1998, IEEE Computer Society: Washington,
DC, USA. doi: 10.1109/4235.738986. INSPEC Accession Number: 6149966. See also [2937].
2937. Dirk Wiesmann, Ulrich Hammel, and Thomas Bäck. Robust Design of Multilayer Optical
Coatings by Means of Evolutionary Strategies. Technical Report, Universität Dortmund,
Collaborative Research Center (Sonderforschungsbereich) 531: Dortmund, North
Rhine-Westphalia, Germany, March 31–November 8, 1998. Fully available at http://hdl.
handle.net/2003/5348 [accessed 2009-07-11]. CiteSeerx: 10.1.1.36.5624. See also [2936].
2938. Wikipedia – The Free Encyclopedia. Wikimedia Foundation, Inc.: San Francisco, CA, USA,
2009. Fully available at http://en.wikipedia.org/ [accessed 2009-06-08].
2939. Frank Wilcoxon. Individual Comparisons by Ranking Methods. Biometrics Bulletin, 1(6):80–83,
December 1945, Blackwell Publishing Ltd: Chichester, West Sussex, UK. Fully available
at http://sci2s.ugr.es/keel/pdf/algorithm/articulo/wilcoxon1945.pdf
and http://webspace.ship.edu/pgmarr/Geo441/Readings/Wilcoxon%201945
%20-%20Individual%20Comparisons%20by%20Ranking%20Methods.pdf [accessed 2009-
07-27].
2940. Herbert S. Wilf. Algorithms and Complexity. A K Peters, Ltd.: Natick, MA, USA, second
edition, December 2002. isbn: 1-56881-178-0. Google Books ID: jGD9pNFKI2UC. Library
of Congress Control Number (LCCN): 2002072765. LC Classication: QA63 .W55 2002.
2941. Claus O. Wilke. Evolutionary Dynamics in Time-Dependent Environments. PhD thesis,
Ruhr-Universität Bochum, Fakultät für Physik und Astronomie: Bochum, North Rhine-Westphalia,
Germany, Shaker Verlag GmbH: Aachen, North Rhine-Westphalia, Germany,
July 1999. isbn: 3-8265-6199-6. Fully available at http://wlab.biosci.utexas.
edu/~wilke/ps/PhD.ps.gz [accessed 2007-08-19].
2942. Claus O. Wilke. Adaptive Evolution on Neutral Networks. Bulletin of Mathematical
Biology, 63(4):715–730, July 2001, Springer New York: New York, NY, USA.
doi: 10.1006/bulm.2001.0244. arXiv ID: physics/0101021v1. PubMed ID: 11497165.
2943. Daniel N. Wilke, Schalk Kok, and Albert A. Groenwold. Comparison of Linear and Classical
Velocity Update Rules in Particle Swarm Optimization: Notes on Diversity. International
Journal for Numerical Methods in Engineering, 70(8):962–984, 2007, Wiley Interscience:
Chichester, West Sussex, UK. doi: 10.1002/nme.1867.
2944. George C. Williams. Pleiotropy, Natural Selection, and the Evolution of Senescence.
Evolution – International Journal of Organic Evolution, 11(4):398–411, December 1957,
Society for the Study of Evolution (SSE) and Wiley Interscience: Chichester, West Sussex, UK.
doi: 10.2307/2406060. Reprinted in [2946].
2945. George C. Williams. Adaptation and Natural Selection: A Critique of Some Current Evo-
lutionary Thought. Princeton University Press: Princeton, NJ, USA, September 28, 1966.
isbn: 0-691-02615-7. Google Books ID: TztkT5lIGOwC. OCLC: 35230452. GBV-
Identication (PPN): 083639411 and 223362034.
2946. George C. Williams. Pleiotropy, Natural Selection, and the Evolution of Senescence. Science
of Aging Knowledge Environment (SAGE KE), 2001(1), October 3, 2001, American Associ-
ation for the Advancement of Science (AAAS): Washington, DC, USA and HighWire Press
(Stanford University): Cambridge, MA, USA. Reprint of [2944].
2947. Dominic Wilson and Devinder Kaur. Using Quotient Graphs to Model Neutrality in
Evolutionary Search. In GECCO08 [1519], pages 2233–2238, 2008. doi: 10.1145/1388969.1389051.
2948. Edward Osborne Wilson. Sociobiology: The New Synthesis. Belknap Press: Cambridge, MA,
USA and Harvard University Press: Cambridge, MA, USA, 1975. See also [2949].
2949. Edward Osborne Wilson. Sociobiology: The New Synthesis. Belknap Press: Cambridge, MA,
USA, twenty-fifth anniversary edition, March 2000. isbn: 0674002350. See also
[2948].
2950. Garnett Wilson and Wolfgang Banzhaf. Discovery of Email Communication Networks
from the Enron Corpus with a Genetic Algorithm using Social Network Analysis. In
CEC09 [1350], pages 3256–3263, 2009. doi: 10.1109/CEC.2009.4983357. INSPEC
Accession Number: 10688952.
2951. P. B. Wilson and M. D. Macleod. Low Implementation Cost IIR Digital Filter Design using
Genetic Algorithms. In NASP93 [1323], pages 4/1–4/8, 1993.
2952. Stewart W. Wilson. ZCS: A Zeroth Level Classifier System. Evolutionary
Computation, 2(1):1–18, Spring 1994, MIT Press: Cambridge, MA, USA.
doi: 10.1162/evco.1994.2.1.1. Fully available at ftp://ftp.cs.bham.ac.uk/pub/
authors/T.Kovacs/lcs.archive/Wilson1994a.ps.gz and http://www.eskimo.com/~wilson/ps/zcs.pdf
[accessed 2010-12-15]. CiteSeerx: 10.1.1.38.6456.
2953. Stewart W. Wilson. Classifier Fitness Based on Accuracy. Evolutionary Computation, 3(2):
149–175, Summer 1995, MIT Press: Cambridge, MA, USA. doi: 10.1162/evco.1995.3.2.149.
Fully available at ftp://ftp.cs.bham.ac.uk/pub/authors/T.Kovacs/lcs.
archive/Wilson1995a.ps.gz and http://www.eskimo.com/~wilson/ps/xcs.pdf
[accessed 2010-12-15]. CiteSeerx: 10.1.1.38.6508.
2954. Stewart W. Wilson. Generalization in the XCS Classifier System. In GP98 [1612], pages
665–674, 1998. Fully available at ftp://ftp.cs.bham.ac.uk/pub/authors/T.Kovacs/
lcs.archive/Wilson1998a.ps.gz [accessed 2010-12-15]. CiteSeerx: 10.1.1.38.6472.
2955. Stewart W. Wilson. State of XCS Classifier System Research. In IWLCS00 [1678], pages
63–82, 2000. Fully available at http://www.eskimo.com/~wilson/ps/state.ps.gz
[accessed 2010-12-15]. CiteSeerx: 10.1.1.54.1204.
2956. Stewart W. Wilson and David Edward Goldberg. A Critical Review of Classifier Systems.
In ICGA89 [2414], pages 244–255, 1989. Fully available at
http://www.eskimo.com/~wilson/ps/wg.ps.gz [accessed 2007-09-12].
2957. Øjvind Winge. Wilhelm Johannsen: The Creator of the Terms Gene, Genotype, Phenotype
and Pure Line. Journal of Heredity, Oxford Journals, 49(2):83–88, March 1958,
American Genetic Association: Newport, OR, USA. Fully available at http://jhered.
oxfordjournals.org/cgi/reprint/49/2/83.pdf [accessed 2008-08-21]. See also [1448].
2958. Hans Winkler. Verbreitung und Ursache der Parthenogenesis im Pflanzen- und Tierreiche.
Verlag von Gustav Fischer: Jena, Thuringia, Germany, 1920. Fully available
at http://www.archive.org/details/verbreitungundur00wink [accessed 2009-06-28].
Google Books ID: Y7xUAAAAMAAJ and kZwZAAAAIAAJ. asin: B0018HWPMU.
2959. Gabriel Winter, Jacques Périaux, M. Galán, and P. Cuesta, editors. Genetic Algorithms
in Engineering and Computer Science, European Short Course on Genetic Algorithms and
Evolution Strategies (EUROGEN95), December 1995, University of Las Palmas: Las Pal-
mas de Gran Canaria, Spain, Wiley Series in Computational Methods in Applied Sciences.
Wiley Interscience: Chichester, West Sussex, UK. isbn: 0-471-95859-X. Google Books
ID: WPRQAAAAMAAJ. OCLC: 35318862, 174238271, 246825979, and 639660095. Library
of Congress Control Number (LCCN): 96177148. GBV-Identication (PPN): 191484377
and 27870042X. LC Classication: QA76.87 .G46 1995.
2960. Paul C. Winter, G. Ivor Hickey, and Hugh L. Fletcher. Instant Notes in Genetics. BIOS
Scientific Publishers Ltd.: Oxford, UK, Taylor and Francis LLC: London, UK, and Science
Press: Beijing, China, 1998–2006. isbn: 0-387-91562-1, 041537619X, 1-85996-166-5,
1-85996-262-9, and 703007307X. Google Books ID: -yhJHgAACAAJ, 966qAQAACAAJ,
VOjyJQAACAAJ, iKYFAAAACAAJ, and pB7PAQAACAAJ.
2961. Ian H. Witten and Eibe Frank. Data Mining: Practical Machine Learning Tools and
Techniques with Java Implementations, The Morgan Kaufmann Series in Data Manage-
ment Systems. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, Octo-
ber 1999. isbn: 1558605525. Google Books ID: 6lVEKlrTq8EC, PtBQAAAAMAAJ, and
b9BQAAAAMAAJ. OCLC: 42420775, 42621057, and 300306645. Library of Congress Con-
trol Number (LCCN): 99046067. LC Classication: QA76.9.D343 W58 2000.
2962. Martin Wolpers, Ralf Klamma, and Erik Duval, editors. Proceedings of the EC-TEL
2007 Poster Session (EC-TEL07 Posters), September 17–20, 2007, Crete, Greece,
volume 280 in CEUR Workshop Proceedings (CEUR-WS.org). Rheinisch-Westfälische Technische
Hochschule (RWTH) Aachen, Sun SITE Central Europe: Aachen, North Rhine-Westphalia,
Germany. Fully available at http://sunsite.informatik.rwth-aachen.
de/Publications/CEUR-WS/Vol-280/ [accessed 2009-09-05]. See also [2962].
2963. David H. Wolpert and William G. Macready. No Free Lunch Theorems for Search. Technical
Report SFI-TR-95-02-010, Santa Fe Institute: Santa Fe, NM, USA, February 6, 1995. Fully
available at http://www.santafe.edu/research/publications/workingpapers/
95-02-010.pdf [accessed 2009-06-30]. CiteSeerx: 10.1.1.33.5447.
2964. David H. Wolpert and William G. Macready. No Free Lunch Theorems for Optimization.
IEEE Transactions on Evolutionary Computation (IEEE-EC), 1(1):67–82,
April 1997, IEEE Computer Society: Washington, DC, USA. doi: 10.1109/4235.585893.
CiteSeerx: 10.1.1.39.6926.
2965. Laurence A. Wolsey. Integer Programming, volume 52 in Estimation, Simulation, and
Control – Wiley-Interscience Series in Discrete Mathematics and Optimization. Wiley
Interscience: Chichester, West Sussex, UK, 1998. isbn: 0-471-28366-5. Google Books
ID: x7RvQgAACAAJ. OCLC: 38976179 and 472310378. Library of Congress Control Num-
ber (LCCN): 98007296. GBV-Identication (PPN): 244286620. LC Classication: T57.74
.W67 1998.
2966. Jan Wolters. Simulated Annealing. GRIN Verlag GmbH: Munich, Bavaria, Germany, 2007.
doi: 10.3239/9783640298761. isbn: 3640303830. Google Books ID: pI9LGg8u09IC.
2967. Andreas Wombacher, Christian Huemer, and Markus Stolze, editors. Proceedings of 2006
IEEE Joint Conference on E-Commerce Technology and Enterprise Computing, E-Commerce
and E-Services (CEC/EEE06), June 26–29, 2006, Westin San Francisco Airport: Millbrae,
CA, USA. IEEE Computer Society: Piscataway, NJ, USA. isbn: 0-7695-2511-3.
Partly available at http://semanticweb.org/wiki/CEC/EEE2006 [accessed 2011-02-28].
OCLC: 72439855, 84651797, 232341468, 289020000, and 326786916. Library of
Congress Control Number (LCCN): 2006927609. Order No.: P2511. LC Classification:
HF5548.32 .I337 2006.
2968. D. F. Wong, Hon Wai Leong, and Chung Laung Liu. Simulated Annealing for VLSI Design,
volume 24 in Kluwer International Series in Engineering and Computer Science, Frontiers
in Logic Programming Architecture and Machine Design, Kluwer International Series in
Engineering and Computer Science: VLSI, Computer Architecture, and Digital Signal Pro-
cessing. Springer Netherlands: Dordrecht, Netherlands. isbn: 0898382564. Google Books
ID: 1ldJR6BDjkIC.
2969. Man Leung Wong and Kwong Sak Leung. Data Mining Using Grammar Based Genetic
Programming and Applications, volume 3 in Genetic Programming Series. Springer Sci-
ence+Business Media, Inc.: New York, NY, USA, January 2000. isbn: 079237746X. Google
Books ID: j8hQAAAAMAAJ. OCLC: 43031203.
2970. John R. Woodward. Evolving Turing Complete Representations. In CEC03 [2395], pages
830–837, volume 2, 2003. doi: 10.1109/CEC.2003.1299753. Fully available at http://www.
cs.bham.ac.uk/~wbl/biblio/gp-html/Woodward_2003_Etcr.html [accessed 2008-11-08].
CiteSeerx: 10.1.1.3.6859.
2971. Nimit Worakul, Wibul Wongpoowarak, and Prapaporn Boonme. Optimization in Development
of Acetaminophen Syrup Formulation. Drug Development and Industrial Pharmacy, 28
(3):345–351, March 2002, Informa Healthcare: London, UK. doi: 10.1081/DDC-120002850.
PubMed ID: 12026227. See also [2972].
2972. Nimit Worakul, Wibul Wongpoowarak, and Prapaporn Boonme. Optimization in Development
of Acetaminophen Syrup Formulation – Erratum. Drug Development and Industrial
Pharmacy, 28(8):1043–1045, September 2002, Informa Healthcare: London, UK. See also
[2971].
2973. Robert P. Worden. A Speed Limit for Evolution. Journal of Theoretical Biology, 176(1):
137–152, September 7, 1995, Elsevier Science Publishers B.V.: Amsterdam, The Netherlands.
Imprint: Academic Press Professional, Inc.: San Diego, CA, USA. doi: 10.1006/jtbi.1995.0183.
Fully available at http://dspace.dial.pipex.com/jcollie/sle/ [accessed 2009-07-10].
2974. 5th Online World Conference on Soft Computing in Industrial Applications (WSC5), 2000.
World Federation on Soft Computing (WFSC).
2975. 6th Online World Conference on Soft Computing in Industrial Applications (WSC6), 2001.
World Federation on Soft Computing (WFSC).
2976. 7th Online World Conference on Soft Computing in Industrial Applications (WSC7), 2002.
World Federation on Soft Computing (WFSC).
2977. 8th Online World Conference on Soft Computing in Industrial Applications (WSC8), 2003.
World Federation on Soft Computing (WFSC).
2978. 9th Online World Conference on Soft Computing in Industrial Applications (WSC9), 2004.
World Federation on Soft Computing (WFSC).
2979. 10th Online World Conference on Soft Computing in Industrial Applications (WSC10), 2005.
World Federation on Soft Computing (WFSC).
2980. 11th Online World Conference on Soft Computing in Industrial Applications (WSC11),
September 18–October 6, 2006. World Federation on Soft Computing (WFSC).
2981. Alden H. Wright. Genetic Algorithms for Real Parameter Optimization. In FOGA90 [2562],
pages 205–218, 1990. CiteSeerx: 10.1.1.25.5297.
2982. Alden H. Wright, Richard K. Thompson, and Jian Zhang. The Computational Com-
plexity of N-K Fitness Functions. IEEE Transactions on Evolutionary Computation
(IEEE-EC), 4(4):373–379, November 2000, IEEE Computer Society: Washington, DC,
USA. doi: 10.1109/4235.887236. Fully available at http://www.cs.umt.edu/u/wright/
papers/nkcomplexity.ps.gz [accessed 2009-02-26]. CiteSeerx: 10.1.1.40.3093. INSPEC
Accession Number: 6791340.
2983. Alden H. Wright, Michael D. Vose, Kenneth Alan De Jong, and Lothar M. Schmitt, editors.
Revised Selected Papers of the 8th International Workshop on Foundations of Genetic
Algorithms (FOGA05), January 5–9, 2005, Aizu-Wakamatsu City, Japan, volume 3469/2005
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/11513575_1. Partly available at http://www.cs.umt.edu/foga05/
[accessed 2007-09-01]. Published August 22, 2005.
2984. Margaret H. Wright. Direct Search Methods: Once Scorned, Now Respectable. In Numerical
Analysis – Proceedings of the 16th Dundee Biennial Conference in Numerical Analysis [1138],
pages 191–208, 1995. Fully available at http://cm.bell-labs.com/cm/cs/
doc/96/4-02.ps.gz and http://www.math.wm.edu/~buckaroo/pubs/LeToTr00a/
LeToTr00a.pdf [accessed 2010-12-24]. CiteSeerx: 10.1.1.80.9399.
2985. Sewall Wright. The Roles of Mutation, Inbreeding, Crossbreeding and Selection in Evolution.
In Proceedings of the Sixth Annual Congress of Genetics [1459], pages 356–366,
volume 1, 1932. Fully available at http://www.blackwellpublishing.com/ridley/
classictexts/wright.pdf [accessed 2009-06-29].
2986. Annie S. Wu, editor. Proceedings of the 2000 Genetic and Evolutionary Computation Con-
ference Workshop Program (GECCO00 WS), July 8, 2000. GECCO-2000 Bird-of-a-feather
workshops. See also [2928, 2935].
2987. Hsien-Chung Wu. Evolutionary Computation. self-published, February 2005. Fully available
at http://nknucc.nknu.edu.tw/~hcwu/pdf/evolec.pdf [accessed 2010-08-01].
2988. Nelson Wu. Differential Evolution for Optimisation in Dynamic Environments. Technical
Report, Royal Melbourne Institute of Technology (RMIT) University, School of Computer
Science and Information Technology: Melbourne, VIC, Australia, November 2006.
CiteSeerx: 10.1.1.93.3234.
2989. Tin-Yu Wu, Guan-Hsiung Liaw, Sing-Wei Huang, Wei-Tsong Lee, and Chung-Chi Wu. A
GA-based Mobile RFID Localization Scheme for Internet of Things. Personal and Ubiquitous
Computing, Springer-Verlag London Limited: London, UK. doi: 10.1007/s00779-011-0398-9.
2990. Zixin Wu, Karthik Gomadam, Ajith Ranabahu, Amit P. Sheth, and John A. Miller. Auto-
matic Composition of Semantic Web Services Using Process Mediation. In ICEIS07 [494],
pages 453–462, 2007. Fully available at http://knoesis.cs.wright.edu/library/
publications/download/SWSChallenge-TR-METEOR-S-Feb2007.pdf
[accessed 2010-12-16]. See also [1098].
2991. Roman Wyrzykowski, Jack Dongarra, Marcin Paprzycki, and Jerzy Wasniewski, editors.
Proceedings of the 5th International Conference on Parallel Processing and Applied Math-
ematics, revised papers (PPAM03), September 7–10, 2003, Czestochowa, Poland, volume
3019/2004 in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/b97218. isbn: 3-540-21946-3. Library of Congress Control Number
(LCCN): 2004104391. GBV-Identication (PPN): 386648921. LC Classication: QA76.58
.P69 2003.
X
2992. Fatos Xhafa and Ajith Abraham, editors. Metaheuristics for Scheduling in Indus-
trial and Manufacturing Applications, volume 128 in Studies in Computational Intel-
ligence. Springer-Verlag: Berlin/Heidelberg, 2008. doi: 10.1007/978-3-540-78985-7.
isbn: 3-540-78984-7. Google Books ID: sn4ksQOnE5MC. Library of Congress Control
Number (LCCN): 2008924363. Libri-Number: 3608646.
2993. Fatos Xhafa, Francisco Herrera Triguero, Mario Köppen, and José Manuel Benítez, editors.
8th International Conference on Hybrid Intelligent Systems (HIS08), October 10–12, 2008,
Barcelona, Catalonia, Spain. IEEE Computer Society: Piscataway, NJ, USA. Partly available
at http://his2008.lsi.upc.edu/ [accessed 2009-03-02]. Library of Congress Control Number
(LCCN): 2008928438. IEEE Computer Society Order Number P3326. BMS Part Number
CFP08360-CDR.
2994. Bowei Xi, Zhen Liu, Mukund Raghavachari, Cathy H. Xia, and Li Zhang. A Smart Hill-Climbing
Algorithm for Application Server Configuration. In WWW04 [898], pages 287–296,
2004. doi: 10.1145/988672.988711. CiteSeerx: 10.1.1.2.1211. Session: Server performance
and scalability.
2995. Proceedings of the Third International Workshop on Advanced Computational Intelligence
(IWACI10), August 25–27, 2010, Xi'an Jiaotong-Liverpool University: Sūzhōu, Jiāngsū,
China.
2996. Huayang Xie. An Analysis of Selection in Genetic Programming. PhD thesis, Victoria
University of Wellington: Wellington, New Zealand, 2009. Fully available
at http://researcharchive.vuw.ac.nz/bitstream/handle/10063/837/thesis.pdf
[accessed 2011-11-25].
2997. Shengwu Xiong, Weiwu Wang, and Feng Li. A New Genetic Programming Approach in
Symbolic Regression. In ICTAI03 [1367], pages 161–167, 2003. doi: 10.1109/TAI.2003.1250185.
INSPEC Accession Number: 7862118.
2998. Bin Xu, Tao Li, Zhifeng Gu, and Gang Wu. SWSDS: Quick Web Service Discovery
and Composition in SEWSIP. In CEC/EEE06 [2967], pages 449–451, 2006.
doi: 10.1109/CEC-EEE.2006.85. INSPEC Accession Number: 9189345. See also [318, 1146,
1797, 3010, 3011].
2999. Lei Xu, Adam Krzyżak, and Erkki Oja. Rival Penalized Competitive Learning for Clustering
Analysis, RBF Net, and Curve Detection. IEEE Transactions on Neural Networks,
4(4):636–649, July 1993, IEEE Computer Society Press: Los Alamitos, CA,
USA. doi: 10.1109/72.238318. Fully available at http://www.cse.cuhk.edu.hk/~lxu/
papers/journal/IEEETNNrpcl93.PDF [accessed 2009-08-28].
3000. Lihong Xu, Erik D. Goodman, and Yongsheng Ding, editors. Proceedings of the First
ACM/SIGEVO Summit on Genetic and Evolutionary Computation (GEC09), June 12–14,
2009, Hua-Ting Hotel & Towers: Shànghǎi, China. ACM Press: New York, NY, USA. Partly
available at http://www.sigevo.org/gec-summit-2009/. ACM Order No.: 910093.
3001. Yong Xu, Sancho Salcedo-Sanz, and Xin Yao. Editorial to the Special Issue on Nature
Inspired Approaches to Networks and Telecommunications. International Journal
of Computational Intelligence and Applications (IJCIA), 5(2):iii–viii, June 2005, World
Scientific Publishing Co.: Singapore and Imperial College Press Co.: London, UK.
doi: 10.1142/S1469026805001532.
3002. Yong Xu, Sancho Salcedo-Sanz, and Xin Yao. Metaheuristic Approaches to Traffic Grooming
in WDM Optical Networks. International Journal of Computational Intelligence and
Applications (IJCIA), 5(2):231–249, June 2005, World Scientific Publishing Co.: Singapore
and Imperial College Press Co.: London, UK. doi: 10.1142/S1469026805001593. Fully
available at http://www.cs.bham.ac.uk/~xin/papers/IJCIA_TrafficGrooming2005.pdf
[accessed 2009-08-10].
3003. XXX Seminário Integrado de Software e Hardware in Proceedings of XXIII Congresso da
Sociedade Brasileira de Computação (SEMISH03), August 4–5, 2003.
Y
3004. S. Yaacob, Meenakshi Nagarajan, and A. Chekima, editors. Proceedings of International
Conference on Artificial Intelligence in Engineering and Technology (ICAIET02),
June 17–18, 2002, University of Malaysia Sabah: Kota Kinabalu, Sabah, Malaysia.
isbn: 983-2188-92-X. Google Books ID: ejBaPgAACAAJ.
3005. Hirozumi Yamaguchi, Kozo Okano, Teruo Higashino, and Kenichi Taniguchi. Synthesis of
Protocol Entities Specifications from Service Specifications in a Petri Net Model with Registers.
In ICDCS95 [1382], pages 510–517, 1995. CiteSeerx: 10.1.1.57.1320. See also
[872].
3006. Lidia A. R. Yamamoto and Christian F. Tschudin. Experiments on the Automatic Evolution
of Protocols Using Genetic Programming. In WAC05 [2603], pages 13–28, 2005.
doi: 10.1007/11687818_2. Fully available at http://cn.cs.unibas.ch/people/ly/doc/
wac2005-lyct.pdf [accessed 2008-06-20]. See also [3007].
3007. Lidia A. R. Yamamoto and Christian F. Tschudin. Experiments on the Automatic Evo-
lution of Protocols Using Genetic Programming. Technical Report CS-2005-002, Univer-
sity of Basel, Computer Science Department, Computer Networks Group: Basel, Switzer-
land, April 21, 2005. Fully available at http://cn.cs.unibas.ch/people/ly/doc/
wac2005tr-lyct.pdf [accessed 2008-06-20]. See also [3006, 3008].
3008. Lidia A. R. Yamamoto and Christian F. Tschudin. Genetic Evolution of Pro-
tocol Implementations and Congurations. In SelfMan05 [1839], 2005. Fully
available at http://cn.cs.unibas.ch/pub/doc/2005-selfman.pdf and http://
ksuseer1.ist.psu.edu/viewdoc/summary?doi=10.1.1.86.8032 [accessed 2009-07-22].
CiteSeerx: 10.1.1.86.8032. See also [3007].
3009. Lidia A. R. Yamamoto, Daniel Schreckling, and Thomas Meyer. Self-Replicating
and Self-Modifying Programs in Fraglets. In BIONETICS07 [1342], 2007.
doi: 10.1109/BIMNICS.2007.4610104. Fully available at http://cn.cs.unibas.
ch/people/ly/doc/bionetics2007-ysm.pdf [accessed 2008-05-04]. INSPEC Accession
Number: 10186239.
3010. Yixin Yan, Bin Xu, and Zhifeng Gu. Automatic Service Composition Using AND/OR Graph.
In CEC/EEE08 [1346], pages 335–338, 2008. doi: 10.1109/CECandEEE.2008.124. INSPEC
Accession Number: 10475105. See also [204, 1146, 1797, 2998, 3011].
3011. Yixin Yan, Bin Xu, Zhifeng Gu, and Sen Luo. A QoS-Driven Approach for Semantic Service
Composition. In CEC09 [1245], pages 523–526, 2009. doi: 10.1109/CEC.2009.44. INSPEC
Accession Number: 10839127. See also [1146, 1571, 1797, 2998, 3010].
3012. Yong Yan, Peng Wang, Chen Wang, Haofeng Zhou, Wei Wang, and Baile Shi. ANNE: An
Efficient Framework on View Selection Problem. In APWeb04 [3045], pages 384–394, 2004.
doi: 10.1007/978-3-540-24655-8_41.
3013. Yuhong Yan and Xianrong Zheng. A Planning Graph Based Algorithm for Semantic Web
Service Composition. In CEC/EEE08 [1346], pages 339342, 2008. doi: 10.1109/CECan-
dEEE.2008.135. INSPEC Accession Number: 10475106. See also [204].
3014. Ang Yang, Yin Shan, and Lam Thu Bui, editors. Success in Evolutionary Computation,
volume 92/2008 in Studies in Computational Intelligence. Springer-Verlag: Berlin/Hei-
delberg, January 16, 2008. doi: 10.1007/978-3-540-76286-7. isbn: 3-540-76285-X and
3-540-76286-8. Google Books ID: wytCwGYoAowC. OCLC: 181090645, 244024657,
244040858, 261325352, and 300248166. Library of Congress Control Number
(LCCN): 2007939404.
3015. Jian Yang, Kamalakar Karlapalem, and Qing Li. A Framework for Designing Materialized
Views in Data Warehousing Environment. In ICDCS97 [1363], pages 458–465, 1997.
doi: 10.1109/ICDCS.1997.603380. CiteSeerˣ: 10.1.1.54.5417. INSPEC Accession
Number: 5622820.
3016. Jian Yang, Kamalakar Karlapalem, and Qing Li. Algorithms for Materialized View
Design in Data Warehousing Environment. In VLDB97 [1434], pages 136–145,
1997. Fully available at http://www.vldb.org/conf/1997/P136.PDF [accessed 2011-04-13].
CiteSeerˣ: 10.1.1.102.838.
3017. Jin-Hyuk Yang and Chan-Jin Chung. ASVMRT: Materialized View Selection Algorithm
in Data Warehouse. Journal of Information Processing Systems (JIPS), 2(2):67–75,
June 2006, Korea Information Processing Society (KIPS): Seoul, South Korea. Fully available
at http://jips-k.org/dlibrary/JIPS_v02_no2_paper1.pdf [accessed 2006-04-12].
3018. Shengxiang Yang and Changhe Li. A Clustering Particle Swarm Optimizer for Locating and
Tracking Multiple Optima in Dynamic Environments. IEEE Transactions on Evolutionary
Computation (IEEE-EC), 14(6):959–974, December 2010, IEEE Computer Society:
Washington, DC, USA. doi: 10.1109/TEVC.2010.2046667. INSPEC Accession Number: 11675042.
3019. Shengxiang Yang, Yew-Soon Ong, and Yaochu Jin, editors. Evolutionary Computa-
tion in Dynamic and Uncertain Environments, volume 51/2007 in Studies in Compu-
tational Intelligence. Springer-Verlag: Berlin/Heidelberg, 2007. isbn: 3-540-49772-2
and 3-540-49774-9. Partly available at http://www.soft-computing.de/Jin_
CEC04T.pdf.gz [accessed 2009-07-12]. Google Books ID: 8w_HGQAACAAJ and ciktAAAACAAJ.
OCLC: 77013308, 180965995, and 318293101.
3020. Zhenyu Yang, Ke Tang, and Xin Yao. Large Scale Evolutionary Optimization using Coop-
erative Coevolution. Information Sciences Informatics and Computer Science Intelligent
Systems Applications: An International Journal, 178(15), August 1, 2008, Elsevier Science
Publishers B.V.: Essex, UK. doi: 10.1016/j.ins.2008.02.017. Fully available at http://
nical.ustc.edu.cn/papers/yangtangyao_ins.pdf [accessed 2009-08-07].
3021. Ziheng Yang and Joseph P. Bielawski. Statistical Methods for Detecting Molecular
Adaptation. Trends in Ecology and Evolution (TREE), 15(12):496–503, December 1, 2000, Elsevier
Science Publishers B.V.: Amsterdam, The Netherlands and Cell Press: St. Louis, MO, USA,
Katrina A. Lythgoe, editor. doi: 10.1016/S0169-5347(00)01994-7. Fully available at http://
citeseer.ist.psu.edu/old/yang00statistical.html [accessed 2009-07-10].
3022. Jinhui Yao, Shiping Chen, Chen Wang, David Levy, and John Zic. Accountability as a Service
for the Cloud: From Concept to Implementation with BPEL. In SERVICES Cup10 [2457],
pages 91–98, 2010. doi: 10.1109/SERVICES.2010.79. INSPEC Accession Number: 11535248.
3023. Xin Yao, editor. Progress in Evolutionary Computation, AI93 (Melbourne, Victoria, Aus-
tralia, 1993-11-16) and AI94 Workshops (Armidale, NSW, Australia, 1994-11-22/23) on
Evolutionary Computation, Selected Papers, 1993–1994, Australia, volume 956/1995 in
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-60154-6. isbn: 3-540-60154-6. Google Books ID: -17wqjv133EC.
OCLC: 32892128, 150398021, 246326423, and 633901751.
3024. Xin Yao. An Empirical Study of Genetic Operators in Genetic Algorithms. Microprocessing
and Microprogramming, 38(1-5):707–714, September 1993, Elsevier Science Publishers
B.V.: Amsterdam, The Netherlands. Imprint: North-Holland Scientific Publishers
Ltd.: Amsterdam, The Netherlands. doi: 10.1016/0165-6074(93)90215-7. Fully available
at http://www.cs.bham.ac.uk/~xin/papers/euro93_final.pdf [accessed 2011-11-10].
CiteSeerˣ: 10.1.1.24.7090.
3025. Xin Yao, editor. Evolutionary Computation: Theory and Applications. World Scientific
Publishing Co.: Singapore, January 28, 1996. isbn: 981-02-2306-4. Google Books
ID: GP7ChxbJOE4C.
3026. Xin Yao, Yong Liu, and Guangming Lin. Evolutionary Programming Made Faster. IEEE
Transactions on Evolutionary Computation (IEEE-EC), 3(2):82–102, July 1999, IEEE
Computer Society: Washington, DC, USA. doi: 10.1109/4235.771163. Fully available
at http://eprints.kfupm.edu.sa/38413/ and http://www.u-aizu.ac.jp/~yliu/
publication/tec22r2_online.ps.gz [accessed 2009-08-07]. CiteSeerˣ: 10.1.1.14.1492.
INSPEC Accession Number: 6290499.
3027. Xin Yao, Feng-Sheng Wang, K. Padmanabhan, and Sancho Salcedo-Sanz. Hybrid Evolution-
ary Approaches to Terminal Assignment in Communications Networks. In Recent Advances in
Memetic Algorithms [1205], pages 129–159. Springer-Verlag GmbH: Berlin, Germany, 2005.
doi: 10.1007/3-540-32363-5_7.
3028. Xin Yao, Edmund K. Burke, José Antonio Lozano, Jim Smith, Juan Julián Merelo-Guervós,
John A. Bullinaria, Jonathan E. Rowe, Peter Tiňo, Ata Kabán, and Hans-Paul Schwefel,
editors. Proceedings of the 8th International Conference on Parallel Problem Solving
from Nature (PPSN VIII), September 18–22, 2004, Birmingham, UK, volume 3242/2004 in
Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/b100601. isbn: 3-540-23092-0. Google Books ID: dsI61Gy0_yAC.
OCLC: 56583354, 57501425, and 249904829. Library of Congress Control Number
(LCCN): 2004112163.
3029. Yiyu Yao, Zhongzhi Shi, Yingxu Wang, and Witold Kinsner, editors. Proceedings of the
Fifth IEEE International Conference on Cognitive Informatics (ICCI06), July 17–19, 2006,
Beijing, China. IEEE Computer Society: Piscataway, NJ, USA. isbn: 1-4244-0475-4.
3030. Frank Yates. The Design and Analysis of Factorial Experiments. Imperial Bureau
of Soil Science, Commonwealth Agricultural Bureaux: Harpenden, England, UK, 1937–1952.
isbn: 0851982204. Google Books ID: 5-7RAAAAMAAJ and YW1OAAAAMAAJ.
asin: B00086SIZ0. Technical Communication No. 35.
3031. Tao Ye. Large-Scale Network Parameter Conguration using On-line Simulation Framework.
PhD thesis, Rensselaer Polytechnic Institute, Department of Electrical, Computer and
Systems Engineering: Troy, NY, USA, ProQuest: Ann Arbor, MI, USA, March 2003, Shivkumar
Kalyanaraman, Advisor, Shivkumar Kalyanaraman, Biplab Sikdar, Christopher D.
Carothers, and Aparna Gupta, Committee members. Fully available at http://www.
ecse.rpi.edu/Homepages/shivkuma/research/papers/tao-phd-thesis.pdf [ac-
cessed 2008-10-16]. Order No.: AAI3088530. See also [3032].
3032. Tao Ye and Shivkumar Kalyanaraman. An Adaptive Random Search Algorithm for
Optimizing Network Protocol Parameters. Technical Report, Rensselaer Polytechnic Institute,
Department of Electrical, Computer and Systems Engineering: Troy, NY, USA, 2001. Fully
available at http://www.ecse.rpi.edu/Homepages/shivkuma/research/papers/
tao-icnp2001.pdf [accessed 2008-10-16]. CiteSeerˣ: 10.1.1.24.8138. See also [3031].
3033. Gary G. Yen, Simon M. Lucas, Gary B. Fogel, Graham Kendall, Ralf Salomon, Byoung-Tak
Zhang, Carlos Artemio Coello Coello, and Thomas Philip Runarsson, editors. Proceedings
of the IEEE Congress on Evolutionary Computation (CEC06), 2006, Sheraton Vancouver
Wall Centre Hotel: Vancouver, BC, Canada. IEEE Computer Society: Piscataway, NJ, USA.
isbn: 0-7803-9487-9. Google Books ID: 4j5kNwAACAAJ and ZP7RPQAACAAJ.
OCLC: 181016874 and 192047219. Catalogue no.: 04TH8846. Part of [1341].
3034. John Jung-Woon Yoo, Soundar R. T. Kumara, and Dongwon Lee. A Web Service Composi-
tion Framework Using Integer Programming with Non-functional Objectives and Constraints.
In CEC/EEE08 [1346], pages 347–350, 2008. doi: 10.1109/CECandEEE.2008.144. INSPEC
Accession Number: 10475108. See also [204, 662, 1998, 1999, 2064, 2066, 2067].
3035. Hee Yong Youn and We-Duke Cho, editors. Proceedings of the 10th International Conference
on Ubiquitous Computing (UbiComp08), September 21–24, 2008, COEX, World Trade
Center: Gangnam-gu, Seoul, Korea, volume 344 in ACM International Conference Proceeding
Series (AICPS). ACM Press: New York, NY, USA.
3036. Marshall C. Yovits, editor. Advances in Computers, volume 31 in Advances in Computers.
Academic Press Professional, Inc.: San Diego, CA, USA and Elsevier Science Publishers
B.V.: Amsterdam, The Netherlands, October 1990. isbn: 0-12-012131-X. Google Books
ID: hIZDUKGj1w8C. OCLC: 23189502, 444245898, and 493579034. Library of Congress
Control Number (LCCN): 59-15761.
3037. Marshall C. Yovits, George T. Jacobi, and Gordon D. Goldstein, editors. Self-Organizing
Systems (Proceedings of the conference sponsored by the Information Systems Branch of the
Office of Naval Research and the Armour Research Foundation of the Illinois Institute of
Technology.), May 22–24, 1962, Chicago, IL, USA. Spartan Books: Washington, DC, USA.
asin: B000GXZFFG.
3038. Fei Yu, editor. Proceedings of the International Symposium on Electronic Commerce and
Security (ISECS08), August 3–5, 2008, Guangzhou, Guangdong, China. isbn: 0-7695-3258-6.
Google Books ID: soBiPgAACAAJ. OCLC: 253559152 and 262691080.
3039. Gwoing Tina Yu. Program Evolvability Under Environmental Variations and Neutrality. In
ECAL07 [852], pages 835–844, 2007. doi: 10.1007/978-3-540-74913-4_84. See also [3040].
3040. Gwoing Tina Yu. Program Evolvability Under Environmental Variations and Neutrality.
In GECCO07-II [2700], pages 2973–2978, 2007. doi: 10.1145/1274000.1274041. Workshop
Session Evolution of natural and artificial systems - metaphors and analogies in single and
multi-objective problems. See also [3039].
3041. Gwoing Tina Yu and Peter J. Bentley. Methods to Evolve Legal Phenotypes. In PPSN
V [866], pages 280–291, 1998. doi: 10.1007/BFb0056843. Fully available at http://www.
cs.ucl.ac.uk/staff/p.bentley/YUBEC2.pdf [accessed 2010-08-06].
3042. Gwoing Tina Yu and Julian Francis Miller. Finding Needles in Haystacks Is Not Hard
with Neutrality. In EuroGP02 [977], pages 46–54, 2002. doi: 10.1007/3-540-45984-7_2.
Fully available at http://www.cs.mun.ca/~tinayu/index_files/addr/public_
html/EuroGP2002.pdf [accessed 2009-07-10]. CiteSeerˣ: 10.1.1.5.1303.
3043. Gwoing Tina Yu, Lawrence Davis, Cem M. Baydar, and Rajkumar Roy, editors. Evolu-
tionary Computation in Practice, volume 88/2008 in Studies in Computational Intelligence.
Springer-Verlag: Berlin/Heidelberg, 2008. doi: 10.1007/978-3-540-75771-9. Google Books
ID: om-uG-vk6GgC. Library of Congress Control Number (LCCN): 2007940149. Series
editor: Janusz Kacprzyk.
3044. Jeffrey Xu Yu, Xin Yao, Chi-Hon Choi, and Gang Gou. Materialized View Selection as
Constrained Evolutionary Optimization. IEEE Transactions on Systems, Man, and Cybernetics
– Part C: Applications and Reviews, 33(4):458–467, November 2003, IEEE Systems, Man, and
Cybernetics Society: New York, NY, USA. doi: 10.1109/TSMCC.2003.818494. Fully available
at http://www.cs.bham.ac.uk/~xin/papers/published_tsmc_view_nov03.pdf
[accessed 2009-04-09]. CiteSeerˣ: 10.1.1.6.4190. INSPEC Accession Number: 7782484.
3045. Jeffrey Xu Yu, Xuemin Lin, Hongjun Lu, and Yanchun Zhang, editors. Proceedings of
the 6th Asia-Pacific Web Conference on Advanced Web Technologies and Applications
(APWeb04), April 14–17, 2004, Hangzhou, Zhejiang, China, volume 3007 in Lecture Notes in
Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/b96838.
isbn: 3-540-21371-6. Google Books ID: yHylerhe4uIC. Library of Congress Control
Number (LCCN): 2004102546.
3046. Tian-Li Yu, Scott Santarelli, and David Edward Goldberg. Military Antenna Design Using
a Simple Genetic Algorithm and hBOA. In Scalable Optimization via Probabilistic Modeling
– From Algorithms to Applications [2158], Chapter 12, pages 275–289. Springer-Verlag:
Berlin/Heidelberg, 2006. doi: 10.1007/978-3-540-34954-9.
3047. Jing Yuan, Yu Zheng, Chengyang Zhang, Wenlei Xie, Xing Xie, Guangzhong Sun, and Yan
Huang. T-Drive: Driving Directions Based on Taxi Trajectories. In ACM SIGSPATIAL GIS
2010 [35], pages 99–108, 2010. doi: 10.1145/1869790.1869807.
3048. Deniz Yuret and Michael de la Maza. Dynamic Hill Climbing: Overcoming the
Limitations of Optimization Techniques. In TAINN93 [1643], pages 208–212, 1993.
Fully available at http://www.denizyuret.com/pub/tainn93.html [accessed 2010-07-23].
CiteSeerˣ: 10.1.1.53.6367.
Z
3049. Lotfi A. Zadeh, Youakim Badr, Ajith Abraham, Yukio Ohsawa, Richard Chbeir, Fernando
Ferri, Mario Köppen, and Dominique Laurent, editors. Proceedings of the 5th IEEE/ACM
International Conference on Soft Computing as Transdisciplinary Science and Technology
(CSTST08), October 27–31, 2008, University of Cergy-Pontoise: Cergy-Pontoise, France.
Association for Computing Machinery (ACM): New York, NY, USA. Library of Congress
Control Number (LCCN): 2009417688.
3050. Vladimir Zakian. New Formulation for the Method of Inequalities. Proceedings of the
Institution of Electrical Engineers, 126:579–584, 1979, Institution of Electrical Engineers (IEE):
London, UK and Institution of Engineering and Technology (IET): Stevenage, Herts, UK.
See also [3051].
3051. Vladimir Zakian. New Formulation for the Method of Inequalities. In Madan G. Singh,
editor, Systems and Control Encyclopedia [2504], pages 3206–3215, volume 5. Pergamon
Press: Oxford, UK, 1987. See also [3050].
3052. Vladimir Zakian. Perspectives on the Principle of Matching and the Method of Inequali-
ties. Control Systems Centre Report 769, University of Manchester Institute of Science and
Technology (UMIST), Control Systems Centre: Manchester, UK, 1992. See also [3053].
3053. Vladimir Zakian. Perspectives on the Principle of Matching and the Method of Inequalities.
International Journal of Control, 65(1):147–175, September 1996, Taylor and Francis LLC:
London, UK. doi: 10.1080/00207179608921691. See also [3052].
3054. Vladimir Zakian and U. Al-Naib. Design of Dynamical and Control Systems by the Method
of Inequalities. IEE Proceedings D: Control Theory and Applications, 120(11):1421–1427,
1973, Institution of Engineering and Technology (IET): Stevenage, Herts,
UK and Institution of Electrical Engineers (IEE): London, UK.
3055. Ali M. S. Zalzala, editor. First International Conference on Genetic Algorithms in
Engineering Systems: Innovations and Applications (GALESIA95), September 12–14, 1995, Sheffield,
UK, volume 414 in IET Conference Publications. Institution of Engineering and Technology
(IET): Stevenage, Herts, UK.
3056. Ali M. S. Zalzala, editor. Proceedings of the Second IEE/IEEE International Conference
On Genetic Algorithms in Engineering Systems: Innovations and Applications (GALESIA97),
September 2–4, 1997, University of Strathclyde: Glasgow, Scotland, UK, volume
1997 (CP446) in IET Conference Publications. Institution of Engineering and Technology
(IET): Stevenage, Herts, UK. isbn: 0-85296-693-8. Google Books ID: uBPdAAAACAAJ.
OCLC: 37864166, 38989931, 84640200, 288957978, and 312682751. coden: IECPB4.
3057. Maciej Zaremba, Tomas Vitvar, Matthew Moran, Thomas Haselwanter, and Adina Sirbu.
WSMX Discovery for SWS Challenge. In Third Workshop of the Semantic Web Service
Challenge 2006 Challenge on Automating Web Services Mediation, Choreography and
Discovery [2690], 2006. Fully available at http://sws-challenge.org/workshops/
2006-Athens/papers/DERI-sws-challenge-3.pdf [accessed 2010-12-16]. See also [582,
1210, 1940, 3058, 3059].
3058. Maciej Zaremba, Tomas Vitvar, Matthew Moran, Marco Brambilla, Stefano Ceri, Dario
Cerizza, Emanuele Della Valle, Federico Michele Facca, and Christina Tziviskou. Towards
Semantic Interoperability – In-depth Comparison of Two Approaches to Solving Semantic
Web Service Challenge Mediation Tasks. In ICEIS07 [494], pages 413–421, volume
4, 2007. Fully available at http://sws-challenge.org/workshops/2007-Madeira/
PaperDrafts/2-DERI-Milano%20comparison.pdf [accessed 2010-12-16]. See also [582,
1210, 1940, 3057, 3059].
3059. Maciej Zaremba, Maximilian Herold, Raluca Zaharia, and Tomas Vitvar. Data and
Process Mediation Support for B2B Integration. In EON-SWSC08 [1023], 2008. Fully
available at http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/
Vol-359/ [accessed 2010-12-16]. See also [582, 1210, 1940, 2166, 3057, 3058].
3060. Marvin Zelen and Norman C. Severo. Probability Functions. In Handbook of Mathemati-
cal Functions with Formulas, Graphs, and Mathematical Tables [2606], Chapter 26. Dover
Publications: Mineola, NY, USA, 1964.
3061. Zhi-hui Zhan and Jun Zhang. Discrete Particle Swarm Optimization for Multiple
Destination Routing Problems. In EvoWorkshops09 [1052], pages 117–122, 2009.
doi: 10.1007/978-3-642-01129-0_15.
3062. Chuan Zhang and Jian Yang. Genetic Algorithm for Materialized View Selection
in Data Warehouse Environments. In DaWaK99 [1924], pages 116–125, 1999.
doi: 10.1007/3-540-48298-9_12.
3063. Chuan Zhang, Xin Yao, and Jian Yang. Evolving Materialized Views in Data Warehouse.
In CEC99 [110], pages 823–829, 1999. doi: 10.1109/CEC.1999.782507. Fully available
at http://www.cs.bham.ac.uk/~xin/papers/ZhangYaoYangCEC99.pdf [accessed 2011-04-09].
CiteSeerˣ: 10.1.1.45.4743 and 10.1.1.63.5130. INSPEC Accession Number: 6338891.
3064. Chuan Zhang, Xin Yao, and Jian Yang. An Evolutionary Approach to Materialized Views
Selection in a Data Warehouse Environment. IEEE Transactions on Systems, Man, and
Cybernetics – Part C: Applications and Reviews, 31(3):282–294, August 2001, IEEE
Systems, Man, and Cybernetics Society: New York, NY, USA. doi: 10.1109/5326.971656.
CiteSeerˣ: 10.1.1.13.9065. INSPEC Accession Number: 7141147.
3065. Du Zhang, Yingxu Wang, and Witold Kinsner, editors. Proceedings of the Sixth IEEE
International Conference on Cognitive Informatics (ICCI07), August 6–8, 2007, Lake Tahoe, CA,
USA. IEEE Computer Society: Piscataway, NJ, USA. isbn: 1-4244-1327-3.
3066. Guoqiang Peter Zhang. Neural Networks for Classification: A Survey. IEEE Transactions
on Systems, Man, and Cybernetics – Part C: Applications and Reviews, 30(4):451–462,
2000, IEEE Systems, Man, and Cybernetics Society: New York, NY, USA.
doi: 10.1109/5326.897072. Fully available at http://vis.lbl.gov/~romano/mlgroup/
papers/neural-networks-survey.pdf [accessed 2010-07-24].
3067. Jing Zhang, Weiran Nie, Mark Panahi, Yichin Chang, and Kwei-Jay Lin. Business
Process Composition with QoS Optimization. In CEC09 [1245], pages 499–502, 2009.
doi: 10.1109/CEC.2009.81. INSPEC Accession Number: 10839133. See also [1571, 2264,
3073, 3074].
3068. Liang-Jie Zhang, editor. Proceedings of the 5th World Conference on Services: Part I
(SERVICES09-I), July 6–10, 2009, Los Angeles, CA, USA. IEEE Computer Society:
Piscataway, NJ, USA.
3069. Liang-Jie Zhang, Wil van der Aalst, and Patrick C. K. Hung, editors. Proceedings of the IEEE
International Conference on Services Computing (SCC07), July 9–13, 2007, Salt Lake City,
UT, USA. IEEE Computer Society Press: Los Alamitos, CA, USA. isbn: 0-7695-2925-9.
Library of Congress Control Number (LCCN): 2007927952. Order No.: P2925.
3070. P. Zhang and A. H. Coonick. Coordinated Synthesis of PSS Parameters in Multi-Machine
Power Systems Using the Method of Inequalities Applied to Genetic Algorithms. IEEE
Transactions on Power Systems, 15(2):811–816, May 2000, IEEE Power & Energy Society (PES):
Piscataway, NJ, USA. doi: 10.1109/59.867178. Fully available at http://www.lania.mx/
~ccoello/EMOO/zhang00.pdf.gz [accessed 2008-11-14]. CiteSeerˣ: 10.1.1.8.9768.
3071. Qingzhou Zhang, Xia Sun, and Ziqiang Wang. An Efficient MA-Based Materialized Views
Selection Algorithm. In CASE09 [2660], pages 315–318, 2009. doi: 10.1109/CASE.2009.111.
INSPEC Accession Number: 10813304.
3072. Shichao Zhang and Ray Jarvis, editors. Advances in Artificial Intelligence. Proceedings
of the 18th Australian Joint Conference on Artificial Intelligence (AI05),
December 5–9, 2005, University of Technology, Sydney (UTS): Sydney, NSW, Australia,
volume 3809/2005 in Lecture Notes in Artificial Intelligence (LNAI, SL7), Lecture Notes
in Computer Science (LNCS).
Springer-Verlag GmbH: Berlin, Germany. doi: 10.1007/11589990. isbn: 3-540-30462-2.
Library of Congress Control Number (LCCN): 2005936732.
3073. Yue Zhang, Tao Yu, Krishna Raman, and Kwei-Jay Lin. Strategies for Efficient Syntactical
and Semantic Web Services Discovery and Composition. In CEC/EEE06 [2967], pages
452–454, 2006. doi: 10.1109/CEC-EEE.2006.84. INSPEC Accession Number: 9189346. See also
[318, 2264, 3067, 3074].
3074. Yue Zhang, Krishna Raman, Mark Panahi, and Kwei-Jay Lin. Heuristic-based Service
Composition for Business Processes with Branching and Merging. In CEC/EEE07 [1344],
pages 525–528, 2007. doi: 10.1109/CEC-EEE.2007.53. INSPEC Accession Number: 9868639. See
also [319, 2264, 3067, 3073].
3075. R. T. Zheng, N. Q. Ngo, P. Shum, S. C. Tjin, and L. N. Binh. A Staged Continuous Tabu
Search Algorithm for the Global Optimization and its Applications to the Design of Fiber
Bragg Gratings. Computational Optimization and Applications, 30(3):319–335, March 2005,
Kluwer Academic Publishers: Norwell, MA, USA and Springer Netherlands: Dordrecht,
Netherlands. doi: 10.1007/s10589-005-4563-9.
3076. Shijue Zheng, Shaojun Xiong, Yin Huang, and Shixiao Wu. Using Methods of Association
Rules Mining Optimization in Web-Based Mobile-Learning System. In ISECS08 [3038],
pages 967–970, 2008. doi: 10.1109/ISECS.2008.223. INSPEC Accession Number: 10183507.
3077. Yu Zheng, Quannan Li, Yukun Chen, Xing Xie, and Wei-Ying Ma. Understanding
Mobility based on GPS Data. In UbiComp08 [3035], pages 312–321, 2008.
doi: 10.1145/1409635.1409677. See also [1892, 3078].
3078. Yu Zheng, Like Liu, Longhao Wang, and Xing Xie. Learning Transportation Mode from
Raw GPS Data for Geographic Applications on the Web. In WWW08 [140], pages 247–256,
2008. doi: 10.1145/1367497.1367532. Fully available at http://www2008.org/papers/
pdf/p247-zhengA.pdf [accessed 2011-02-19]. See also [1892, 3077].
3079. Jinghui Zhong, Xiaomin Hu, Jun Zhang, and Min Gu. Comparison of Performance
between Different Selection Strategies on Simple Genetic Algorithms. In CIMCA-
IAWTIC06 [1923], pages 1115–1121, 2006. doi: 10.1109/CIMCA.2005.1631619. Fully
available at http://www.ee.cityu.edu.hk/~jzhang/papers/zhongjinghui06.pdf
[accessed 2010-08-03]. CiteSeerˣ: 10.1.1.140.3747. INSPEC Accession Number: 9109532.
3080. Chi Zhou, Weimin Xiao, Thomas M. Tirpak, and Peter C. Nelson. Evolving Accurate and
Compact Classification Rules with Gene Expression Programming. IEEE Transactions on
Evolutionary Computation (IEEE-EC), 7(6):519–531, December 2003, IEEE Computer
Society: Washington, DC, USA. doi: 10.1109/TEVC.2003.819261. INSPEC Accession
Number: 7826902.
3081. Xiaofang Zhou, editor. Proceedings of the 13th Australasian Database Conference (ADC02),
January–February 2002, Monash University: Melbourne, VIC, Australia, volume 5 in
Conferences in Research and Practice in Information Technology (CRPIT). ACM Press: New
York, NY, USA. isbn: 0-909-92583-6.
3082. Xiaofang Zhou, Maria E. Orlowska, and Yanchun Zhang, editors. Proceedings of the 5th
Asia-Pacific Web Conference (APWeb03), April 23–25, 2003, Xi'an, Shanxi, China, volume 2642
in Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-36901-5. isbn: 3-540-02354-2.
3083. Jianping Zhu, Pete Bettinger, and Rongxia (Tiffany) Li. Additional Insight into the
Performance of a New Heuristic for Solving Spatially Constrained Forest Planning Problems.
Silva Fennica, 41(4):687–698, Finnish Society of Forest Science, The Finnish Forest Research
Institute: Finland, Eeva Korpilahti, editor. Fully available at http://www.metla.fi/
silvafennica/full/sf41/sf414687.pdf [accessed 2008-08-23].
3084. Kenny Qili Zhu. A Diversity-Controlling Adaptive Genetic Algorithm for the Vehicle
Routing Problem with Time Windows. In ICTAI03 [1367], pages 176–183, 2003.
doi: 10.1109/TAI.2003.1250187. INSPEC Accession Number: 7862120.
3085. Liming Zhu, Roger L. Wainwright, and Dale A. Schoenefeld. A Genetic Algorithm
for the Point to Multipoint Routing Problem with Varying Number of Requests. In
CEC98 [2496], pages 171–176, 1998. doi: 10.1109/ICEC.1998.699496. Fully available
at http://euler.mcs.utulsa.edu/~rogerw/papers/Zhu-ICEC98.pdf [accessed 2009-09-03].
CiteSeerˣ: 10.1.1.32.5655.
3086. Weihang Zhu. A Study of Parallel Evolution Strategy Pattern Search on a GPU Computing
Platform. In GEC09 [3000], pages 765–771, 2009. doi: 10.1145/1543834.1543939. See also
[3087].
3087. Weihang Zhu. Nonlinear Optimization with a Massively Parallel Evolution Strategy-
Pattern Search Algorithm on Graphics Hardware. Applied Soft Computing, 11(2):1770–1781,
March 2011, Elsevier Science Publishers B.V.: Essex, UK. doi: 10.1016/j.asoc.2010.05.020.
See also [3086].
3088. Karin Zielinski and Rainer Laur. Stopping Criteria for Constrained Optimization with
Particle Swarms. In BIOMA06 [919], pages 45–54, 2006. Fully available at http://www.item.
uni-bremen.de/staff/zilli/zielinski06stopping_PSO.pdf [accessed 2007-09-13]. See
also [3089].
3089. Karin Zielinski and Rainer Laur. Stopping Criteria for a Constrained Single-Objective
Particle Swarm Optimization Algorithm. Informatica, 31(1):51–59, 2007, Slovensko
društvo Informatika (Slovenian Society Informatika). Fully available at http://
www.informatica.si/vols/vol31.html and http://www.item.uni-bremen.de/
staff/zilli/zielinski07informatica.pdf [accessed 2007-09-13]. Extension of [3088].
3090. Antanas Žilinskas. Algorithm AS 133: Optimization of One-Dimensional Multimodal
Functions. Journal of the Royal Statistical Society: Series C Applied Statistics, 27(3):367–375,
1978, Blackwell Publishing for the Royal Statistical Society: Chichester, West Sussex, UK.
doi: 10.2307/2347182.
3091. Hans-Jürgen Zimmermann, editor. Proceedings of the 5th European Congress on Intelligent
Techniques and Soft Computing (EUFIT97), September 8–11, 1997, Aachen, North Rhine-
Westphalia, Germany. ELITE Foundation: Aachen, North Rhine-Westphalia, Germany and
Wissenschaftsverlag Mainz: Germany.
3092. Uwe Zimmermann, Ulrich Derigs, Wolfgang Gaul, Rolf H. Möhring, and Karl-Peter Schuster,
editors. Operations Research Proceedings 1996, Selected Papers of the Symposium on
Operations Research (SOR96), September 3–6, 1996, Braunschweig, Lower Saxony, Germany.
Springer-Verlag: Berlin/Heidelberg. isbn: 3540626301. Google Books ID: FK5gAQAACAAJ.
OCLC: 36865524, 174184822, and 243863641. Published 1997.
3093. Lyudmila Zinchenko, Matthias Radecker, and Fabio Bisogno. Application of the Univariate
Marginal Distribution Algorithm to Mixed Analogue - Digital Circuit Design and Optimisation.
In EvoWorkshops07 [1050], pages 431–438, 2007. doi: 10.1007/978-3-540-71805-5_48.
3094. Stanley Zionts, editor. Proceedings of the 2nd International Conference on Multiple
Criteria Decision Making (MCDM77), August 22–26, 1977, Buffalo, NY, USA, volume 155 in
Lecture Notes in Economics and Mathematical Systems. Springer-Verlag: Berlin/Heidelberg.
isbn: 0-387-08661-7 and 3-540-08661-7.
3095. Christian Zirpins, Guadalupe Ortiz, Winfried Lamersdorf, and Wolfgang Emmerich, editors.
First International Workshop on Engineering Service Compositions (WESC05). Technical
Report RC23821, IBM Research Division: Yorktown Heights, NY, USA, December 12, 2005,
Amsterdam, The Netherlands. Partly available at http://fresco-www.informatik.
uni-hamburg.de/wesc05/ [accessed 2011-02-28].
3096. Eckart Zitzler and Simon Künzli. Indicator-based Selection in Multiobjective Search. In
PPSN VIII [3028], pages 832–842, 2004. Fully available at http://www.tik.ee.ethz.
ch/sop/publicationListFiles/zk2004a.pdf [accessed 2009-07-18].
3097. Eckart Zitzler and Lothar Thiele. An Evolutionary Algorithm for Multiobjective
Optimization: The Strength Pareto Approach. TIK-Report 43, Eidgenössische Technische
Hochschule (ETH) Zürich, Department of Electrical Engineering, Computer Engineering
and Networks Laboratory (TIK): Zürich, Switzerland, May 1998. Fully available
at http://www.tik.ee.ethz.ch/sop/publicationListFiles/zt1998a.pdf
[accessed 2009-07-10]. CiteSeerˣ: 10.1.1.40.7696.
3098. Eckart Zitzler, Kalyanmoy Deb, and Lothar Thiele. Comparison of Multiobjective
Evolutionary Algorithms: Empirical Results. Evolutionary Computation, 8(2):173–195,
Summer 2000, MIT Press: Cambridge, MA, USA. doi: 10.1162/106365600568202. Fully
available at http://sci2s.ugr.es/docencia/cursoMieres/EC-2000-Comparison.pdf
and http://www.lania.mx/~ccoello/EMOO/zitzler99b.ps.gz [accessed 2009-08-07].
CiteSeerˣ: 10.1.1.30.5848. PubMed ID: 10843520.
3099. Eckart Zitzler, Kalyanmoy Deb, Lothar Thiele, Carlos Artemio Coello Coello, and
David Wolfe Corne, editors. Proceedings of the First International Conference on
Evolutionary Multi-Criterion Optimization (EMO01), March 7–9, 2001, Eidgenössische
Technische Hochschule (ETH) Zürich: Zürich, Switzerland, volume 1993/2001 in Lecture
Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin, Germany.
doi: 10.1007/3-540-44719-9. isbn: 3-540-41745-1. Google Books ID: F29RAAAAMAAJ
and d52UjQ5pfcIC. OCLC: 46240282, 59550134, 76254334, 77755314, 150396610,
243488365, and 313725155.
3100. Eckart Zitzler, Marco Laumanns, and Lothar Thiele. SPEA2: Improving the Strength
Pareto Evolutionary Algorithm. TIK-Report 101, Eidgenössische Technische Hochschule
(ETH) Zürich, Department of Electrical Engineering, Computer Engineering and
Networks Laboratory (TIK): Zürich, Switzerland, May 2001. Fully available at http://
www.tik.ee.ethz.ch/sop/publicationListFiles/zlt2001a.pdf [accessed 2009-07-10].
CiteSeerˣ: 10.1.1.17.2440. Errata added 2001-09-27.
3101. Mark Zlochin, Mauro Birattari, Nicolas Meuleau, and Marco Dorigo. Model-Based
Search for Combinatorial Optimization: A Critical Survey. Annals of Operations
Research, 132(1-4):373–395, November 2004, Springer Netherlands: Dordrecht,
Netherlands and J. C. Baltzer AG, Science Publishers: Amsterdam, The Netherlands.
doi: 10.1023/B:ANOR.0000039526.52305.af.
3102. Constantin Zopounidis, editor. Proceedings of the 18th International Conference on Multiple
Criteria Decision Making (MCDM06), June 9–13, 2006, MAICh (Mediterranean Agronomic
Institute of Chania) Conference Centre: Chania, Crete, Greece. Partly available at http://
www.dpem.tuc.gr/fel/mcdm2006/ [accessed 2007-09-10].
3103. Blaz Zupan, Elpida Keravnou, and Nada Lavrac, editors. Proceedings Intelligent Data Anal-
ysis in Medicine and Pharmacology (IDAMAP01), September 4, 2001, Custom House Hotel
at ExCeL: London, UK, The Springer International Series in Engineering and Computer
Science. Springer US: Boston, MA, USA.
3104. Jacek M. Zurada, Gary G. Yen, and Jun Wang, editors. Computational Intelligence:
Research Frontiers – IEEE World Congress on Computational Intelligence Plenary/Invited
Lectures (WCCI), June 1–6, 2008, Hong Kong Convention and Exhibition Centre: Hong
Kong (Xiānggǎng), China, volume 5050/2008 in Theoretical Computer Science and General
Issues (SL 1), Lecture Notes in Computer Science (LNCS). Springer-Verlag GmbH: Berlin,
Germany. doi: 10.1007/978-3-540-68860-0. isbn: 3-540-68858-7 and 3540688609. Google
Books ID: X-WH9-BP6JoC. OCLC: 228414687, 243453135, 244010459, and 315620902.
Library of Congress Control Number (LCCN): 2008927523.
3105. Katharina Anna Zweig. On Local Behavior and Global Structures in the Evolution
of Complex Networks. PhD thesis, Eberhard-Karls-Universität Tübingen, Fakultät für
Informations- und Kognitionswissenschaften: Tübingen, Germany, July 2007.
Fully available at http://www-pr.informatik.uni-tuebingen.de/mitarbeiter/
katharinazweig/downloads/Diss.pdf [accessed 2008-06-12]. See also [3106].
3106. Katharina Anna Zweig and Michael Kaufmann. Evolutionary Algorithms for the
Self-Organized Evolution of Networks. In GECCO'05 [304], pages 563–570, 2005.
doi: 10.1145/1068009.1068105. Fully available at http://www-pr.informatik.
uni-tuebingen.de/mitarbeiter/katharinazweig/downloads/299-lehmann.
pdf [accessed 2008-05-28]. See also [3105].
Index
Symbols
(1, 1)-EAs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
(1 + 1)-EAs . . . . . . . . . . . . . . . . . . . . . 266
(μ ⁺, λ)-EAs . . . . . . . . . . . . . . . . . . . . 267
(μ, λ) . . . . . . . . . . . . . . . . . . . . . . . . 266
(μ, λ)-EAs . . . . . . . . . . . . . . . . . . . . . 266
(μ/ρ, λ)-EAs . . . . . . . . . . . . . . . . . . . . 267
(μ/ρ + λ)-EAs . . . . . . . . . . . . . . . . . . . . 267
(μ + 1)-EAs . . . . . . . . . . . . . . . . . . . . . 267
(μ + λ)-EAs . . . . . . . . . . . . . . . . . . . . . 266
1/5th Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
χ² Distribution . . . . . . . . . . . . . . . . . . . 684
μGP . . . . . . . . . . . . . . . . . . . . . . . . . 404
σ-algebra . . . . . . . . . . . . . . . . . . . . . . 656
τ-EO . . . . . . . . . . . . . . . . . . . . . . . . . 494
k-Means Clustering . . . . . . . . . . . . . . . . . . . . . . . . 454
NP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
NP-Completeness . . . . . . . . . . . . . . . 117, 147, 247
NP-Hardness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
P . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
1-PX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
2-PX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
80x86 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
A
A∗ Search . . . . . . . . . . . . . . . . . . 32, 519 f.
AAAI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Abiogenesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
Ackley
shifted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617
ACO. . . . . . . . . . . . . 3, 32, 37, 207, 471 f., 477, 945
Acquisition
module. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
Adaptation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
self . . . . . . . . . . . . . . . . . . . . . . . . . . 360, 370, 419
Adaptive Walk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
fitter dynamics . . . . . . . . . . . . . . . . . 116
greedy dynamics . . . . . . . . . . . . . . . . . . . . . . 116
one-mutant. . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
ADF. . . . . . . . . . . . . . . . . . . . . . . 273, 400 f., 403, 406
Adjacency. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
adjacent neighbors . . . . . . . . . . . . . . . . . . . . . . . . . 554
ADM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401, 403
Admissible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
AE/EA. . 319, 353, 374, 384, 414, 421, 429, 460,
468, 474, 484
AGA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Aggregation
linear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
AGI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
AI . . . . . . . . . . . . . . . . . . . . . . . . . 31, 35, 76, 130, 536
AICS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
AIIA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
AIM-GP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
AIMGP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
AIMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
AINS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Aircraft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
AISB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
AJWS . . . 321, 355, 375, 385, 416, 423, 430, 461,
469, 475, 485
Algorithm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
bucket brigade . . . . . . . . . . . . . . . . . . . . . . . . 458
determinism. . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
deterministic . . . . . . . . . . . . . . . . . . . . . 147, 511
evolutionary . . . . . . . . . . . . . . . . . . . . . . . 37, 253
generational . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
genetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Las Vegas. . . . . . . . . . . . . . . . . . . . . 41, 106, 148
memetic. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Monte Carlo. . . . . . . . . . . . . . . . . . . . . . 106, 148
optimization. . . . . . . . . . . . . . . . . . . . . . . 23, 103
Bayesian . . . . . . . . . . . . . . . . . . . . 184
probabilistic . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
randomized. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
stochastic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Algorithmic Chemistry . . . . . . . . . . . . . . . . . . . . . 327
ALIO/EURO. 133, 228, 235, 237, 251, 321, 355,
375, 385, 416, 423, 430, 461, 469, 475,
479, 485, 491, 496, 500, 502, 506, 518,
521, 524, 528
Allele. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82 f.
Amplifier . . . . . . . . . . . . . . . . . . . . . . . 47
ANN. . . 187, 191 f., 204, 273, 389, 396, 406, 536,
541
Annealing
simulated. . . . . . . . . . . . 3, 25, 32 f., 36, 156 f.,
160, 181, 231, 243–249, 252, 308, 412,
463 f., 493, 541, 557, 716, 740, 945
Ant
artificial . . . . . . . . . . . . . . . . . . . . . . 57
Ant Colony Optimization . 3, 32, 37, 207, 471 f.,
477, 945
Antenna
design. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Antisymmetry . . . . . . . . . . . . . . . . . . . . . 645
ANTS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474, 484
ANZIIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
AO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Artificial Ant . . 57, 61, 64, 69, 76, 78, 107, 380, 566
Artificial Development . . . . . . . . . . . . . . . 272
Artificial Embryogeny . . . . . . . . . . . . . . . . 272
Artificial Intelligence . . . . . . . . 31, 35, 76, 536
Artificial Neural Network . . 187, 191 f., 204, 273,
389, 396, 406, 536, 541
Artificial Ontogeny . . . . . . . . . . . . . . . . . 272
ASC 134, 321, 355, 375, 385, 416, 423, 430, 461,
469, 475, 485
Asexual Reproduction . . . . . . . . . . . . . . . . . 307, 325
Assimilation
genetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
AST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
Asymmetry . . . . . . . . . . . . . . . . . . . . . . 645
Autocorrelation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Automatically Defined Function . 273, 400 f., 403
Automatically Defined Macro . . . . . . . . 401, 403
B
Baldwin. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
eect. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
Basin- . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Bayesian Optimization Algorithm . . . . . . . . . 184
BB. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 523
BBH. . . . . 84, 121, 183, 217, 341, 346, 558 f., 566
bcGP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
Bernoulli
distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . 674
experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 674
trial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 674
Best-First Search. . . . . . . . . . . . . . . . . . . . . . . . . . . 519
Best-Limited Search. . . . . . . . . . . . . . . . . . . . . . . . 519
BFS . . . . . . . . . . . . . . 32, 115, 515 ff., 519
Bias. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 692
BibTeX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 945
Big-O notation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
Bijective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
Bijectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
Bin Packing . . . . . . . . . . . . . . . 25, 39, 181, 183, 238
BinInt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153 f., 562 f.
Binomial Distribution . . . . . . . . . . . . . . . . . . . . . . 674
BIOMA. . 320, 354, 374, 384, 415, 422, 429, 460,
468, 474, 484
BIONETICS. . 130, 320, 354, 374, 384, 415, 422,
429, 460, 468, 474, 484
BitCount. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
Black Box. . . . . . . . . . . . . . . . . . . . 35 f., 42, 91, 1209
Bloat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Block
building. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
BLUE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 696
BOA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
Boolean. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
Branch And Bound. . . . . . . . . . . . . . . . . . . . . 32, 523
Breadth-First Search. . . . . 32, 115, 515, 517, 519
Breeding Selection . . . . . . . . . . . . . . . . . . . . . . . . . 289
Brute Force . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Bucket Brigade Algorithm. . . . . . . . . . . . . . . . . . 458
Building Block . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
Building Block Hypothesis . . . 84, 121, 183, 217,
341, 346, 558 f., 566
building blocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Bypass
extradimensional . . . . . . . . . . . . . . . . . . . . . . 219
C
C . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405 f.
C 745 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
CACSD. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Candidate
solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Capacitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
CarpeNoctem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Cartesian Genetic Programming. . . . . . . . . . . . 174
Catastrophe
complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
Causality . . . . . . . . . . . . . . . . . . . . . 84, 162, 218, 571
CDF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 658, 660
continuous . . . . . . . . . . . . . . . . . . . 658
discrete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 658
CEC 317, 353, 373, 383, 414, 421, 429, 459, 467,
474, 484
CEGNA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
Cellular Encoding . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Central Limit Theorem. . . . . . . . . . . . . . . . . . . . . 681
CGA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
cGA . . . . . . . . . . . . . . . . . . . . . . . 439 ff.
CGP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
CGPS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405 f.
Change
non-synonymous . . . . . . . . . . . . . . . . . . . . . . 173
synonymous . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Chemistry. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21, 509
Chi-square Distribution . . . . . . . . . . . . . . . . . . . . 684
Chromosome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
Chromosomes
string
fixed-length . . . . . . . . . . . . . . . . . 328
variable-length. . . . . . . . . . . . . . . . . . . . . . 339
tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
CI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 696
CIMCA. . 320, 354, 374, 385, 415, 422, 430, 461,
468, 475, 485
Circuit
analog
design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Circuit Layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
CIS . 320, 355, 374, 385, 415, 422, 430, 461, 468,
475, 485
CISC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
Class
equivalence. . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
Classification . . . . . . . . . . . . . . . . . . . 154
Classifier System . . . . . . . . . . . . . 339, 457 f.
learning . . . . . . . 3, 32, 264, 457 f., 536, 945
Classifier Systems
learning . . . . . . . . . . . . . . . . . . . . . . 536
Clearing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
Climbing
hill . . . . . . . . . . . . . . . . . . . . . . . 32 f.,
41, 85, 96, 99, 114, 116, 157, 160, 165,
167, 169, 172, 176, 181, 210 f., 229–232,
238 ff., 243, 245 f., 248, 252, 266,
296, 359 f., 377, 400, 412, 425, 463 f.,
482, 501, 559, 563, 565, 735, 737, 945
random restarts. . . . . . . . . . . . . . . . . . . . . 501
CLT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 681
CO₂ . . . . . . . . . . . . . . . . . . . . . . . . . 537
Co-Evolution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
Code
gray. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Code Bloat. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
Coding. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Coefficient
negative slope . . . . . . . . . . . . . . . . . . 163
Coefficient of Variation . . . . . . . . . . . . . . 663
Coevolution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
cooperative. . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
COGANN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
Combination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
Combinatorial Optimization. . . . . . . . . . . . . . . . 181
Combinatorial Problem. . . . . . . . . . . . . . . . . . . . . . 25
Combinatorics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654
Competition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Complete Enumeration. . . . . . . . . . . . . . . . . . . . . 212
Completeness . . . . . . . . . . . . 85, 146, 157, 218, 514
weak . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Complexity. . . . . . . . . . . . . . . . . . . . . . . . . . . . 143, 203
complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Complexity Catastrophe . . . . . . . . . . . . . . . . . . . 555
Computational Embryogeny. . . . . . . . . . . . . . . . 272
Computing
evolutionary. . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
soft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Confidence
coefficient . . . . . . . . . . . . . . . . . . . 697
interval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 696
Constrained-domination . . . . . . . . . . . . . . . . . . . . . 75
Constraint Handling. . . . . . . . . . . . . . . . . . . . . . . . . 70
Constraints . . . . . . . . . . . . . . . . . . . . . . . . . 22, 70, 509
Continuous Distributions . . . . . . . . . . . . . . . . . . . 676
Continuous Optimization. . . . . . . . . . . . . . . . . . . . 27
Continuous Problem. . . . . . . . . . . . . . . . . . . . . . . . . 27
Controller
robot. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
domino . . . . . . . . . . . . . . . . . . . . 153 f., 218, 562
non-uniform. . . . . . . . . . . . . . . . . . . . . . . . . . . 153
premature . . . . . . . . . . . . . . . . . 151 f., 482, 547
prevention . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
Convergence Prevention . . . . . . . . . . . . . . . . . . . . 546
Cooperation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Copyleft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 947
Copyright . . . . . . . . . . . . . . . . . . . . . . . . . 4, 947, 955 f.
Correlation
fitness distance . . . . . . . . . . . . . . . . 163
genotype-fitness . . . . . . . . . . . . . . . . 163
operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Count . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 660
Covariance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663
estimate. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664
CPU. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
Creation. . . . . . . . . . . . . . . . . . . . . 306, 328, 340, 389
Criterion
termination . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Criticality
self-organized . . . . . . . . . . . . . . . . . . . . . . . . . 493
CRM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
Crossover . . . . . . . . . . . . . 262, 307, 335, 340 f., 399
1-PX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
2-PX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
average. . . . . . . . . . . . . . . . . . . . . . . . . . . 337, 362
headless chicken . . . . . . . . . . . . . . . . . . 339, 399
strong . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
weak. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .400
homologous. . . . . . . . . . . . . . . . . . 338, 340, 407
k-PX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
MPX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
multi-point . . . . . . . . . . . . . . . . . . . . . . . 335, 338
point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
random. . . . . . . . . . . . . . . . . . . . . . . . . . .339, 399
single-point. . . . . . . . . . . . . . . . . . . . . . . 335, 400
SPX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
sticky . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
strong context preserving. . . . . . . . . . . . . . 400
subtree. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
TPX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
two-point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
uniform. . . . . . . . . . . . . . . . . 337, 346, 362, 377
UX. . . . . . . . . . . . . . . . . . . . . . . . . . 337, 346, 362
CS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339, 457 f.
CSTST . . 132, 320, 355, 374, 385, 415, 422, 430,
461, 468, 475, 485
Cumulative Distribution Function . . . . . . . . . 658
Customer Relationship Management . . . . . . . 536
Cut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
Cut & Splice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
Cutting-Plane Method . . . . . . . . . . . . . . . . . . . . . . 32
CVRP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
Cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
D
Darwin. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .464
Data Mining . . . . . . . . . . . . . . . . . . . . . . . . . . 154, 536
DE. . . . 3, 27, 32, 84, 191, 207, 262 f., 419 f., 425,
444, 463, 580, 771, 794, 796
Death Penalty. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Deceptiveness. . . . . . . . . . . . . . . . 115, 167, 182, 557
Deceptivity. . . . . . . . . . . . . . . . . . . . . . . . . . . . 167, 182
Decile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
Decision Maker
external . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Decision Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
Decoder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Decreasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
monotonically . . . . . . . . . . . . . . . . . . . . . . . . . 647
Defined Length . . . . . . . . . . . . . . . . . . . 341
DELB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
Delivery. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
Depth-First Search. . . . . . . . . . 32, 115, 516 f., 519
iterative deepening . . . . . . . . . . . . 32, 516
Depth-Limited Search . . . . . . . . . . . . . . . . . . . . 516 f.
DERL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
DES. . . 3, 27, 32, 84, 191, 207, 262 f., 419 f., 425,
444, 463, 580, 771, 794, 796
Design
antenna . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
circuit
analog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Design Of Experiments. . . . . . . . . . . . . . . . . . . . . 545
Design of Experiments . . . . . . . . . . . . . . . . . . . . . 178
Determinism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Deterministic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
Deterministic Selection. . . . . . . . . . . . . . . . . . . . . 289
Deutsche Post. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
Development
artificial . . . . . . . . . . . . . . . . . . . 272
Developmental Approach. . . . . . . . . . . . . . . . . . . 272
Deviation
standard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 662
DFS . . . . . . . . . . . . . . . . . . . . . . . . 32, 115, 516 f., 519
DHL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537 f., 550
Differential Evolution . . . . . . . . . . . . . . . . 3,
27, 32, 84, 191, 207, 262 f., 419 f., 425,
444, 463, 580, 771, 794, 796
Differentiation . . . . . . . . . . . . . . . . . . . . 52
Difficult . . . . . . . . . . . . . . . . . . . . . . 138
Dilemma
prisoner's . . . . . . . . . . . . . . . . . . . 380
Dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
Dimensionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
Dinosaur . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Diodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Direct Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Discrete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Discrete Distributions . . . . . . . . . . . . . . . . . . . . . . 668
Discrete Optimization . . . . . . . . . . . . . . . . . . . . . . . 29
Discrete Problem. . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
Distribution . . . . . . . . . . . . . . . . . . . . . . 658, 668, 676
χ² . . . . . . . . . . . . . . . . . . . . . . . 684
Binomial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 674
chi-square . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 684
continuous . . . . . . . . . . . . . . . . . . . . . . . . . . . . 676
discrete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 668
estimation of, algorithm . . . . . . . . . . . . . . . 27,
32, 267, 367, 427, 431, 442, 447, 452 f.,
455, 554, 775
exponential . . . . . . . . . . . . . . . . . . . . . . . . . . . 682
normal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 678
multivariate . . . . . . . . . . . . . . . . . . . . . . . . 680
standard. . . . . . . . . . . . . . . . . . . . . . . . . . . . 678
Poisson . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671
Student's t . . . . . . . . . . . . . . . . . . . 687
t . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687
uniform . . . . . . . . . . . . . . . . . . . . . . . . . . 668, 676
continuous . . . . . . . . . . . . . . . . . . . . . . . . . . 676
discrete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 668
Diversication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Diversity . . . . . . . . . . . . 70, 154, 216, 330, 452, 547
Divisor
greatest common . . . . . . . . . . . . . . . . . . . . . . 180
DLS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516 f.
DNA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272, 338
DOE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
DoE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
Domination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
constrained. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
rank. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
resistance. . . . . . . . . . . . . . . . . . . . . . . . . . 69, 198
Domino
convergence . . . . . . . . . . . . . . . . 153 f., 218, 562
Domino Convergence . . . . . . . . . . . . . . . . . . . . . . . 154
Don't Care . . . . . . . . . . . . . . . . . . . . . 342
Downhill Simplex. . . 32, 104, 264, 420, 463, 501,
945
DPE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
Drunkard's Walk . . . . . . . . . . . . . . . . . . . 114
DTM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145, 147, 149
Duplication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306 f.
DVRP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
E
E-Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 f., 945
E-code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
EA . . . . . . . . . . . . . . . . 3, 32, 36 f., 75 ff.,
84, 86, 93 ff., 121, 155 ff., 162 f., 172 f.,
183 f., 207, 210, 212, 229 f., 245, 253 ff.,
258, 261–267, 269 f., 272 ff., 277, 279,
285 ff., 289, 294, 302, 306 f., 309, 312,
324 f., 327, 335, 359 f., 377, 379 f., 393,
412, 419, 427, 434, 452 f., 457, 463 f.,
466, 512, 538, 541 f., 544 ff., 551, 562,
566 ff., 577, 622, 699, 717, 746, 945
hybrid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
EARL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
EBCOA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
EC . . . 3, 24, 31 f., 36, 48, 70, 156, 180, 253, 379,
537, 553, 945
ECML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132, 461
ECML PKDD. . . . . . . . . . . . . . . . . . . . . . . . . 135, 462
Ecological Selection . . . . . . . . . . . . . . . . . . . . . . . . 260
Economics
mathematical . . . . . . . . . . . . . . . . . . . . . . 23, 509
EDA . . 27, 32, 267, 367, 427, 431, 434, 442, 447,
452 f., 455, 554, 775
Edge Encoding. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
EDI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
Editing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
EDM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Effect
Baldwin . . . . . . . . . . . . . . . . . . . . . 464
hiding . . . . . . . . . . . . . . . . . . . . . . 466
Effectiveness . . . . . . . . . . . . . . . . . . . . 113
Efficiency . . . . . . . . . . . . . . . . . . . . . . 113
Pareto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Elitism. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267, 546
Embryogeny . . . . . . . . . . . . . . . . . . . . . 272
Embryogenesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Embryogenic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Embryogeny
artificial . . . . . . . . . . . . . . . . . . . 272
computational . . . . . . . . . . . . . . . . . . . . . . . . . 272
EMO 55, 65, 202, 253, 257, 259 f., 279, 311, 319,
353, 374, 384, 414, 421, 429, 460, 467,
474, 484
EMOO. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
EN 284 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
Encapsulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396 f.
Encoding
cellular . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Endnote . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 945
Endogenous . . . . . . . . . . 90, 360, 365 ff., 481
EngOpt . . 132, 228, 234, 237, 251, 320, 355, 375,
385, 415, 422, 430, 461, 468, 475, 478,
485, 490, 496, 500, 502, 506, 518, 521,
524, 528
Enterprise Resource Planning . . . . . . . . . . . . . . 536
Entropy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
continuous . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
dierential . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
information . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
Enumeration
complete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
exhaustive . . . 113, 116 f., 143, 168, 210, 212
Environment Selection . . . . . . . . . . . . . . . . . . . . . 261
Environmental Selection. . . . . . . . . . . . . . . . . . . . 260
EO . . . . . . . . . . . . . . . . . . . 3, 25, 32, 172, 493 f., 945
GEO. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
Eoarchean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
EP. . . . . . . . . . . . . . . . . . . 27, 32, 264, 380, 413, 415
Ephemeral Random Constants . . . . . . . . . . . . . 532
EPIA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
Epistacy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Epistasis . 163, 177, 181, 204, 389, 404, 554, 560,
562, 569, 577 f.
Epistatic Road . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560
Epistatic Variance. . . . . . . . . . . . . . . . . . . . . . . . . . 163
Equilibrium. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
punctuated. . . . . . . . . . . . . . . . . . . . . . . 172, 494
Equivalence
class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
Ergodicity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247 f.
ERL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
ERP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 692
α . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
β . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
mean square . . . . . . . . . . . . . . . . . . . . . . . . . . 692
threshold. . . . . . . . . . . . . . . . . . . . . . . . . 163, 562
type 1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
type 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
Error Threshold . . . . . . . . . . . . . . . . . . . . . . . 163, 562
ES. . . 3, 27, 32, 37, 155, 162, 188, 191, 216, 229,
263 f., 266 f., 286, 289, 302, 359 f., 362,
364, 367 f., 370 ff., 377, 419, 425, 427,
444, 481, 566, 580 f., 760, 786, 788, 945
differential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3,
27, 32, 84, 191, 207, 262 f., 419 f., 425,
444, 463, 580, 771, 794, 796
Estimation Of Distribution Algorithm. . . . . . 27,
32, 267, 367, 427, 431, 442, 447, 452 f.,
455, 554, 775
Estimation Theory . . . . . . . . . . . . . . . . . . . . . . . . . 691
Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 691 f.
best linear unbiased . . . . . . . . . . . . . . . . . . . 696
maximum likelihood. . . . . . . . . . . . . . . . . . . 695
point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 692
unbiased . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 692
EUFIT . . 132, 320, 355, 375, 385, 415, 422, 430,
461, 468, 475, 485
EUROGEN . . . . . . . . . . . . . . . . . . . . . . . . . . . 354, 374
EuroGP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
INDEX 1195
Event
certain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654
conflicting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654
elementary . . . . . . . . . . . . . . . . . . . . . . . . . . . . 653
impossible. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654
random . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654
EvoCOP. 321, 355, 375, 385, 416, 422, 430, 461,
468, 475, 485
Evolution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21, 509
Baldwinian. . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
chemical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
differential . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
grammatical . . . . . . . . . . . . . . . . . 204, 273, 403
Lamarckian . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
Evolution Strategy . . . . . . . . . . . . . . . . . . . 3, 27, 32,
37, 155, 162, 188, 191, 216, 229, 263 f.,
266 f., 286, 289, 302, 359 f., 362, 364,
367 f., 370 ff., 377, 419, 425, 427, 444,
481, 566, 580 f., 760, 786, 788, 945
differential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3,
27, 32, 84, 191, 207, 262 f., 419 f., 425,
444, 463, 580, 771, 794, 796
Evolutionary Algorithm . . . . . . 3, 32, 36 f., 75 ff.,
84, 86, 93 ff., 121, 155 ff., 162 f., 172 f.,
183 f., 207, 210, 212, 229 f., 245, 253 ff.,
258, 261–267, 269 f., 272 ff., 277, 279,
285 ff., 289, 294, 302, 306 f., 309, 312,
324 f., 327, 335, 359 f., 377, 379 f., 393,
412, 419, 427, 434, 452 f., 457, 463 f.,
466, 512, 538, 541 f., 544 ff., 551, 562,
566 ff., 577, 622, 699, 717, 746, 945
generational . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
hybrid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
multi-objective . . . . . . . . . . . . . . . . . . . . . . . . 253
steady-state . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Evolutionary Computation . . 3, 24, 31 f., 36, 48,
70, 156, 180, 253, 379, 537, 553, 945
Evolutionary Multi-Objective Optimization 253
Evolutionary Operation . . . . . . . . . . . . . . . 264, 501
randomized . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
Evolutionary Programming . 27, 32, 264, 380, 413
Evolutionary Reinforcement Learning. . . . . . 457
Evolvability . . . . . . . . . . . . . . . . . . . . . . . . . . . 163, 172
EvoNUM 321, 355, 375, 386, 416, 423, 430, 461,
469, 502
EVOP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
EvoRobots. . . .322, 356, 375, 386, 416, 423, 431,
462, 469, 476, 486
EvoWorkshops318, 353, 373, 383, 414, 421, 429,
459, 467, 474, 484
EWSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134, 461
Exhaustive Enumeration . . 113, 116 f., 143, 168,
210, 212
Exogenous. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90, 360
expand. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
Expected value. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 661
Experiment
Miller-Urey . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
Explicit Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Explicit Representation . . . . . . . . . . . . . . . . . . . . 270
Exploitation. . . . . . . . . . . . . . . . . . . . . . 156, 162, 216
Exploration. . . . . . . . . . . . . . . . . . 155, 216, 360, 512
Exponential Distribution . . . . . . . . . . . . . . . . . . . 682
External Decision Maker . . . . . . . . . . 75 ff., 281
Extinctive Selection . . . . . . . . . . . . . . . . . . . 183, 265
left . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
right . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Extradimensional Bypass. . . . . . . . . . . . . . . . . . . 219
Extrema Selection. . . . . . . . . . . . . . . . . . . . . . . . . . 174
Extremal Optimization . . 3, 25, 32, 172, 493 f., 945
generalized . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
Extremum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
global . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
F
Factorial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654
False Negative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
False Positive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
FDC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
FDL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4, 947
FEA. 320, 354, 374, 384, 415, 422, 429, 460, 468
Feasibility. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Feature Selection. . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Fibonacci Path. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
Filter
Butterworth. . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Chebycheff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Finite State Machine . . . . . . . . . . . . . . . . . . . . . . . 380
First Hitting Time . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Fitness . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94, 257, 259
assignment process . . . . . . . . . . . . . . . . . . . . 274
nature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
optimization . . . . . . . . . . . . . . . . . . . . . . . . . . 263
Fitness Assignment . . . . . . . . . . . . . . . . . . . . . . . . 274
Pareto ranking . . . . . . . . . . . . . . . . . . . . . . . . 275
Prevalence ranking . . . . . . . . . . . . . . . . . . . . 275
Tournament . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
weighted sum . . . . . . . . . . . . . . . . . . . . . . . . . 275
Fitness Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Fitness Landscape. . . . . . . . . . . . . . . . . . . . . . . . . . . 93
deceptive. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
ND . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
neutral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
NK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
NKp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
NKq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
p-Spin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
rugged . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
technological . . . . . . . . . . . . . . . . . . . . . . . . . . 556
FLAIRS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
FOCI . . . . 321, 355, 375, 386, 416, 423, 431, 461,
469, 475, 485
FOGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
Forma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 f., 163
analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Formae . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Free Lunch
no . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
Freight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
Frequency
absolute. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
relative. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
Full . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
automatically-defined . . . . . . 273, 400 f., 403
benchmark . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
cumulative distribution. . . . . . . . . . . . . . . . 658
fitness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Griewangk . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601
shifted. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
Griewank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601
shifted. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
heuristic . . . . . . . . . . . . . . . . . . . . . . 34, 274, 518
monotone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
objective . . . . . . . . . . . . . . . . . . . 22, 36, 44, 509
conflicting . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
harmonic . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
independent . . . . . . . . . . . . . . . . . . . . . . . . . 56
penalty. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
adaptive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
dynamic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
probability density . . . . . . . . . . . . . . . . . . . . 659
probability mass . . . . . . . . . . . . . . . . . . . . . . 659
royal road . . . . 154, 173, 455, 559–562, 566
sphere. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
trap . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167, 557 f.
Weierstraß . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Functional . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
Functionality. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
FWGA . . . . . . . . . . . . . . . . . . . . . . 134, 355, 475, 485
G
G3P. . . . . . . . . . . . . . . . . . . . . . . . . 273, 381, 403, 409
GA . . . . . . . . . . . . . . . . 3, 25, 32, 37, 43, 48, 82–
85, 90, 95, 101, 105, 119, 139, 154 f.,
163, 167, 174, 183, 188, 207, 215, 217,
248, 262–265, 271, 278, 290 f., 296,
325–329, 331, 335, 337 ff., 341 f., 345–
348, 350, 357, 377, 380 f., 389, 398,
407, 427, 434, 439, 454 f., 457 f., 463 ff.,
541, 554, 558 f., 562 f., 565 f., 568, 573,
580, 945
compact. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
messy . . . . . . . . . . . . . . . . . 184, 327, 347 f., 395
Gads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
GALESIA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
Gauss-Markov Theorem. . . . . . . . . . . . . . . . . . . . 696
GCD. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
GE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204, 273, 403
GEC321, 355, 375, 385, 416, 422, 430, 461, 468,
475, 485
GECCO . 317, 353, 373, 383, 414, 421, 429, 459,
467, 473, 484
GEM 321, 355, 375, 385, 416, 422, 430, 461, 469
Gene . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
reuse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Generality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
Generalized Extremal Optimization. . . . . . . . 495
Generation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Generational . . . . . . . . . . . . . . . . . . . . . . . . . 265, 546 f.
Genetic Algorithm . . . 3, 25, 32, 37, 43, 48, 82–
85, 90, 95, 101, 105, 119, 139, 154 f.,
163, 167, 174, 183, 188, 207, 215, 217,
248, 262–265, 271, 278, 290 f., 296,
325–329, 331, 335, 337 ff., 341 f., 345–
348, 350, 357, 377, 380 f., 389, 398,
407, 427, 434, 439, 454 f., 457 f., 463 ff.,
541, 554, 558 f., 562 f., 565 f., 568, 573,
580, 945
compact. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
human based . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
interactive. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
messy. . . . . . . . . . . . . . . . . . . . . . 184, 347 f., 395
Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . 380
real-coded. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
Genetic Assimilation . . . . . . . . . . . . . . . . . . . . . . . 465
Genetic Programming. . . . . . . . . . . . . . . . . . . . . . . . 3,
32, 37, 107, 155, 162 f., 173, 175, 180,
187, 192, 195, 262 ff., 273, 302, 304 f.,
379 ff., 386 f., 389, 396, 398, 400 f., 403,
405 f., 408, 411 f., 427, 434, 447, 531,
534, 536, 561, 566, 568, 824, 945
byte code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
cartesian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
compiling system. . . . . . . . . . . . . . . . . . . . . . 405
crossover
homologous . . . . . . . . . . . . . . . . . . . . 338, 407
function nodes . . . . . . . . . . . . . . . . . . . . . . . . 387
grammar-guided . . . . . . . . 273, 381, 403, 409
graph-based . . . . . . . . . . . . . . . . . . . . . . . 32, 409
linear . . . . 32, 327, 381, 400, 403–408, 412
page-based. . . . . . . . . . . . . . . . . . . . . . . . . . 408
non-terminal nodes . . . . . . . . . . . . . . . . . . . . 387
stack-based . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
Standard. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
standard . . 32, 273, 386 f., 389, 399, 403 f., 412
strongly-typed . . . . . . . . . . . . . . . . . . . . . . . . . . 3,
32, 37, 107, 155, 162 f., 173, 175, 180,
187, 192, 195, 262 ff., 273, 302, 304 f.,
379 ff., 386 f., 389, 396, 398, 400 f., 403,
405 f., 408, 411 f., 427, 434, 447, 531,
534, 536, 561, 566, 568, 824, 945
terminal nodes . . . . . . . . . . . . . . . . . . . . . . . . 387
tree-based . . . . . . . . . . . . . . . . . . . 381, 386, 389
Genetic Repair . . . . . . . . . . . . . . . . . . . . . . . . . . . 346 f.
GENITOR. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Genome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82, 326
Genomes
string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
Genotype. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39, 82
Genotype-Phenotype Mapping . . . . . . . 39 ff., 71,
86 f., 90 f., 95 f., 101, 103 f., 111, 113,
119, 162, 168, 174 f., 179, 181 ff., 204,
209, 215 ff., 219, 254, 258, 263, 270–
273, 325, 327, 348 ff., 357, 360, 377,
380, 404, 406 f., 433 f., 437, 443 f., 481,
542, 569, 580, 711, 834
direct . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
GEO. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
GEWS . . . . . . . . . . . . . . . . . . . . . . 132, 228, 321, 385
GFC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
GGGP. . . . . . . . . . . . . . . . . . . 32, 273, 381, 403, 409
GICOLAG. . . . 134, 228, 235, 237, 251, 321, 356,
375, 386, 416, 423, 431, 461, 469, 475,
479, 485, 491, 496, 500, 502, 506, 518,
521, 524, 528
Glass
spin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93, 557
Global Optimization . . . . . . . . . . . . . . 3, 22, 31, 48,
51, 55, 90, 93, 99, 105, 107, 139, 151,
154 f., 172, 174, 187 f., 215, 231, 243,
267, 509, 511, 542, 553, 566, 649, 945
Goal Attainment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Goal Programming . . . . . . . . . . . . . . . . . . . . . . 72, 75
GP . . . . 3, 32, 37, 107, 155, 162 f., 173, 175, 180,
187, 192, 195, 262 ff., 273, 302, 304 f.,
379 ff., 384, 386 f., 389, 396, 398, 400 f.,
403, 405 f., 408, 411 f., 427, 434, 447,
531, 534, 536, 561, 566, 568, 824, 945
cartesian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
grammar-guided . . . . . . . . . . . . . 273, 381, 403
graph-based . . . . . . . . . . . . . . . . . . . . . . . 32, 409
linear . . . . 32, 327, 381, 400, 403 408, 412
standard . . 32, 273, 386 f., 389, 399, 403 f., 412
STGP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3,
32, 37, 107, 155, 162 f., 173, 175, 180,
187, 192, 195, 262 ff., 273, 302, 304 f.,
379 ff., 386 f., 389, 396, 398, 400 f., 403,
405 f., 408, 411 f., 427, 434, 447, 531,
534, 536, 561, 566, 568, 824, 945
GPM . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 ff., 71,
86 f., 90 f., 95 f., 101, 103 f., 111, 113,
119, 162, 168, 174 f., 179, 181 ff., 204,
209, 215 ff., 219, 254, 258, 263, 270–
273, 325, 327, 348 ff., 357, 360, 377,
380, 404, 406 f., 433 f., 437, 443 f., 481,
542, 569, 580, 711, 834
direct . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
explicit. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
indirect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
ontogenic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
GPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
GPTP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
GR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
GR Hypothesis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
Gradient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53, 104
descent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Gradient Descent
stochastic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
Grammar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
Grammar-Guided Genetic Programming . . 273,
381, 403
Grammatical Evolution. . . . . . . . . . . 204, 273, 403
Graph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25, 512, 651
acyclic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
directed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
finite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
infinite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
theory. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .651
undirected . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
weighted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
GRASP . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 157, 499
Gray Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Greedy Search . . . . . . . . . . . . . . 32, 177, 464, 519 f.
Griewangk Function. . . . . . . . . . . . . . . . . . . . . . . . 601
shifted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
Griewank Function. . . . . . . . . . . . . . . . . . . . . . . . . 601
shifted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
Grow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
GSM/GPRS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
H
HAIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
Halting Criterion. . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Hardness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146 f.
Harmony Search. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Haystack
needle in a . . . . . . . . . 103, 168, 175, 180, 465
HBGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
hBOA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
HC. 32 f., 41, 85, 96, 99, 114, 116, 157, 160, 165,
167, 169, 172, 176, 181, 210 f., 229–
232, 238 ff., 243, 245 f., 248, 252, 266,
296, 359 f., 377, 400, 412, 425, 463 f.,
482, 501, 559, 563, 565, 735, 737, 945
with random restarts . . . . . . . . . . . . . . . . . . 501
HCwL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Headless Chicken. . . . . . . . . . . . . . . . . . . . . . 339, 399
Herman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
Hessian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Heuristic . . . . . . . . . . . . . . . . . . . . . . . 34, 36, 274, 518
admissible. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
monotonic. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
Hiding Effect . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
Hill Climbing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 f.,
41, 85, 96, 99, 114, 116, 157, 160, 165,
167, 169, 172, 176, 181, 210 f., 229–
232, 238 ff., 243, 245 f., 248, 252, 266,
296, 359 f., 377, 400, 412, 425, 463 f.,
482, 501, 559, 563, 565, 735, 737, 945
multi-objective . . . . . . . . . . . . . . . . . . . . . . . . 230
randomized restarts . . . . . . . . . . . . . . . . . . . 232
stochastic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
with random restarts . . . . . . . . . . . . . . . . . . 501
Hill Climbing With Random Restarts. . . . . . 501
HIS . 128, 227, 234, 236, 250, 319, 353, 374, 384,
415, 421, 429, 460, 468, 474, 478, 484,
490, 496, 500, 502, 506, 518, 521, 524,
528
Homologous Crossover . . . . . . . . . . . . 338, 340, 407
Homology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
HRO. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
HS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
HX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
Hybridization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463 f.
Hyperplane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
Hypothesis
building block. . . . . . . . . . . . . . . . . . . . . . . . . 346
I
IAAI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128, 474, 484
ICAART. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
ICANNGA . . . 319, 353, 374, 384, 415, 422, 429,
460, 468, 474, 484
ICARIS . . . . . . . . . . . . . . . . . . . . . . . . . . 130, 227, 320
ICCI 128, 319, 353, 374, 384, 415, 422, 429, 460,
468, 474, 484
ICEC. . . . 322, 356, 376, 386, 416, 423, 431, 462,
469, 476, 486
ICGA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
ICML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129, 460
ICMLC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134, 462
ICNC. . . . 319, 354, 374, 384, 415, 422, 429, 460,
468, 474, 484
ICSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475, 485
ICTAI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
IDDFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 516 f.
IEA/AIE. . . . . . . . . . . . . . . . . . . . . . . . . 133, 475, 485
IGA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
IICAI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
IJCAI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
IJCCI . . . 321, 356, 375, 386, 416, 423, 431, 462,
469, 476, 486
Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 704
Implicit Parallelism . . . . . . . . . . . . 263, 345 f.
in.west . . . . . . . . . . . . . . . . . . . . . . . . . . . 537, 543, 549
Increasing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
monotonically . . . . . . . . . . . . . . . . . . . . . . . . . 647
Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
Indirect Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Indirect Representation . . . . . . . . . . . . . . . . . . . . 272
Individual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Inductor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
inequalities
method of . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Infeasibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Infix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
Informed Search . . . . . . . . . . . . . . . . . . . . . . . . 32, 518
Injective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644
Injectivity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644
Input-Processing-Output . . . . . . . . . . . . . . . . . . . 379
Integer Programming. . . . . . . . . . . . . . . . . . . . . . . . 29
mixed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Intel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
Intelligence
artificial . . . . . . . . . . . . . . . 31, 35, 76, 536
swarm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 37
Intensication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Interval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
condence. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 696
Intrinsic Parallelism. . . . . . . . . . . . . . . . . . . . . . . . 263
Intron. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
explicitly defined . . . . . . . . . . . . . . . . . . . . 407
implicit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
Inversion. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
IPO. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
Irreflexiveness . . . . . . . . . . . . . . . . . . . . . . . . . 645
ISA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133, 475, 485
isGoal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
ISICA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
ISIS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Isolated . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Isomorphism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
isomorphism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Isotropic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
Iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Iteratively Deepening Depth-First Search . . 516
IUMDA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
IWACI. . . 322, 356, 376, 386, 417, 423, 431, 462,
469, 476, 486
IWLCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
IWNICA. 321, 356, 375, 386, 416, 423, 431, 462,
469, 476, 486
J
JAPHET . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
JBGP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
JME. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
Job Shop Scheduling . . . . . . . . . . . . . . . . . . . . . . . . 26
JSS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 f.
Juxtapositional . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
JVM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
K
k-PK. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
k-PX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
Kauffman NK . . . . . . . . . . . . . . . . . . . . . . . . . 553
Keys
random . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
Knapsack Problem . . . . . . . . . . . . . . . . . . . . . . . . . 147
Kung-Fu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Kurtosis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 665
excess . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 665
L
Lamarck . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164, 464
Lamarckism. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
Landscape
tness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
problem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Laplace
assumption. . . . . . . . . . . . . . . . . . . . . . . . . . . . 654
Large Numbers
law of . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 668
Large-scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Las Vegas Algorithm. . . . . . . . . . . . . . .41, 106, 148
Layout
circuit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
LCS . . . . . . . . . . . . . . . . . 3, 32, 264, 457 f., 536, 945
Michigan-style . . . . . . . . . . . . . . . . . . . . . . . . 458
Pitt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
Pittsburgh-style . . . . . . . . . . . . . . . . . . . . . . . 458
Learning
linkage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
Learning Classier System . . . . 3, 32, 264, 457 f.,
536, 945
Michigan-style . . . . . . . . . . . . . . . . . . . . . . . . 458
Pittsburgh-style . . . . . . . . . . . . . . . . . . . . . . . 458
Left-total . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644
Length
defined . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
Lexicographic Optimization . . . . . . . . . . . . . . . . . 59
LGP . . . . . . . . . . . . . . 32, 327, 381, 400, 403–408, 412
page-based . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
LGPL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 955
License
FDL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4, 947
LGPL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 955
Lifting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397 f.
Likelihood
function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 693
Linear Aggregation. . . . . . . . . . . . . . . . . . . . . . . . . . 62
Linear Genetic Programming . 32, 327, 381, 400,
403–408, 412
page-based . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
Linear Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
Linkage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
Linkage Learning. . . . . . . . . . . . . . . . . . . . . . . . . . . 347
LION. . . . 133, 228, 234, 237, 251, 321, 355, 375,
385, 416, 422, 430, 461, 469, 475, 478,
485, 490, 496, 500, 502, 506, 518, 521,
524, 528
List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
addItem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
appendList . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
count . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
createList . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
deleteItem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
deleteRange . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
insertItem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
listToSet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
removeItem . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
reverseList . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
setToList . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
subList. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
search (sorted) . . . . . . . . . . . . . . . . . . . . . . . . 650
search (unsorted). . . . . . . . . . . . . . . . . . . . . . 650
sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
sorting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
LLN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 668
Local Search. . . . . . . . . . . . . . . . . 229, 243, 463, 514
Locality . . . . . . . . . . . . . . . . . . . . . . . . . . . 84, 162, 218
Locus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Logistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537, 622
Long k-Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
Long Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
LS-1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
LSCS . . . . 133, 228, 235, 237, 251, 321, 355, 375,
385, 416, 423, 430, 461, 469, 475, 478,
485, 491, 496, 500, 502, 506, 518, 521,
524, 528
Lunch
no free . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
M
MA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 37, 463 f.
Machine
finite state . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
Turing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
deterministic. . . . . . . . . . . . . . . . . . . . . . . . 145
non-deterministic . . . . . . . . . . . . . . . . . . . 146
Machine Learning. . . . . . . . . . . . 187, 212, 380, 536
Macro
automatically-dened. . . . . . . . . . . . . 401, 403
Management
customer relationship. . . . . . . . . . . . . . . . . .536
Many-Objectivity . . . . . . . . . . . . 197 f., 201 f., 1213
Mapping
Genotype-Phenotype . . . . . . . . . . . . . . . . . . 270
genotype-phenotype . . . . . . . . . . . 39 ff., 71,
86 f., 90 f., 95 f., 101, 103 f., 111, 113,
119, 162, 168, 174 f., 179, 181 ff., 204,
209, 215 ff., 219, 254, 258, 263, 270–
273, 325, 327, 348 ff., 357, 360, 377,
380, 404, 406 f., 433 f., 437, 443 f., 481,
542, 569, 580, 711, 834
ontogenic . . . . . . . . . . . . . . . . . . . . . . . . . . 86, 273
phenotype-genotype . . . . . . . . . . . . . . . . . . . 464
MAS . . 37, 59, 125, 190, 207, 226, 315, 352, 382,
459, 473
Mask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
defined length . . . . . . . . . . . . . . . . . . . . . . . . 341
order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
Mating Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
Maximum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89, 660
global . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52, 89
Maximum Likelihood Estimator . . . . . . . . . . . . 695
MCDA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
MCDM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76, 129
MDVRP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
Mean
arithmetic. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 661
Median. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 665, 667
Meme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Memetic Algorithm . . . . . . . . . . . . . . . 32, 37, 463 f.
Memory Consumption. . . . . . . . . . . . . . . . . . . . . . 514
MENDEL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
Messy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
Messy GA . . . . . . . . . . . . . . . . . . . . . . 184, 347 f., 395
Messy Genome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
META. . . 228, 235, 237, 251, 322, 356, 376, 386,
417, 423, 431, 462, 469, 476, 479, 486,
491, 497, 500, 506
Metaheuristic . . . . . . . . . . . 3, 24, 31 f., 34 ff.,
39, 41, 109, 112 f., 139, 141, 143, 165,
168 f., 197 f., 213, 225, 253 f., 262, 367,
489, 540 f., 1209
Method
raindrop. . . . . . . . . . . . . . . . . . . . . . . . . . . 36, 235
Method Of Inequalities . . . . . . . . . 72–75, 77
Metropolis . . . . . . . . . . . . . . . . . . . . . . . . . . . 243 ff.
mGA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184, 347, 349
MIC . . 227, 234, 236, 250, 320, 354, 374, 384, 415,
422, 429, 460, 468, 474, 478, 484, 490,
496, 500, 506
MICAI. . . 131, 320, 354, 374, 384, 415, 422, 429,
460, 468, 474, 484
MicroGP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
Miller-Urey experiment. . . . . . . . . . . . . . . . . . . . . 258
Min-Max
weighted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Minimization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Minimum. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89, 660
global . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52, 89
Mining
data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
MIP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
ML. . . . . . . . . . . . . . . . . . . . . . . . . . 187, 212, 380, 536
MLE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 695
Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Modularity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Module
acquisition . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
MOEA . . 55, 65, 72, 202, 253, 257, 259 f., 274, 279,
311
MOI . . . . . . . . . . . . . . . . . . . . . . . . . . . 72–75, 77
Moment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664
about mean . . . . . . . . . . . . . . . . . . . . . . . . . . . 664
central . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664
standardized . . . . . . . . . . . . . . . . . . . . . . . . . . 664
statistical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664
Monotone
function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
heuristic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
Monotonic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
Monotonicity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
Monte Carlo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Monte Carlo Algorithm. . . . . . . . . . . 106, 148, 243
MOP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55, 197
Morphogenesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
MPX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335, 338, 407
MSE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 692
Multi-Agent System . 37, 59, 125, 190, 207, 226,
315, 352, 382, 459, 473
Multi-Criterion Decision Making . . . . . . . . . . . . 76
Multi-Modality . . 139, 151, 161, 165, 255, 452, 454
Multi-Objective Evolutionary Algorithm 55, 65,
202, 253, 257, 259 f., 279, 311
Multi-Objectivity . 39, 54–57, 59–62, 64 f., 70,
76 f., 89, 93 ff., 101, 104, 106, 158, 182,
197 f., 201, 213, 230, 232, 238, 253 ff.,
274, 308, 434, 453, 482, 534, 537 f.,
554, 566, 570, 716, 728, 737, 841, 906,
1213, 1221 ff.
Multi-Objectivization . . . . . . . . . . . . . . . . . . . . . . 158
Mutation . . . . . . . 262, 306 f., 330, 340, 393 f., 543
bit-flip . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
multi-point . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
single-point . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
strength. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
N
NaBIC. . . 322, 356, 375, 386, 416, 423, 431, 462,
469, 476, 486
Naïveté . . . . . . . . . . . . . . . . . . . . . . 279, 536, 639
Natural Selection. . . . . . . . . . . . . . . . . . . . . . . . . . . 260
ND . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167, 172 f., 557
ND Landscape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
Needle-In-A-Haystack . . 103, 168, 175, 180, 465,
563
Negative
false. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
Negative Slope Coefficient . . . . . . . . . . . . . . . . . . 163
Neighborhood Search. . . . . . . . . . . . . . . . . . . . . . . 512
Neighboring Relation. . . . . . . . . . . . . . . . . . . . . . . . 86
Nelder & Mead. 32, 104, 264, 420, 463, 501, 945
Network
artificial neural . . 187, 191 f., 204, 273, 389,
396, 406, 536, 541
Bayesian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
neutral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
random Boolean. . . . . . . . . . . . . . . . . . . . . . . 174
Neural Network
artificial . . 187, 191 f., 204, 273, 389, 396, 406,
536, 541
Neutral
network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Neutrality. . . . . . . . . . . . . 171, 556 f., 559, 569, 578
NFL . . . . . . . . . . . . . . . . 95, 114, 168, 209–213
NIAH. . . . . . . . . . . . . . . . . . . 103, 168, 175, 180, 465
Niche Count . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
Niche Preemption Principle . . . . . . . . . . . . . . . . 151
NICSO . . 131, 227, 234, 236, 250, 320, 354, 374,
385, 415, 422, 430, 460, 468, 474, 478,
484, 490, 496, 500, 502, 506, 518, 521,
524, 528
NK. . . . . . . . . . . . . . . . . . . . . . . . . . 161, 179, 553, 556
NKp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173, 556
NKq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173, 556
No Free Lunch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
No Free Lunch Theorem . . 95, 114, 168, 209–213
Node Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
nodeWeight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187, 192 f.
Non-Decreasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
Non-Dominated . . 67 ff., 73, 79, 197 f., 200 ff., 277,
279, 284, 1210
Non-Increasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
Non-Synonymous Change . . . . . . . . . . . . . . . . . . 173
Nonile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
Nonseparability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
full . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
Norm
Tschebychev . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Normal Distribution. . . . . . . . . . . . . . . . . . . . . . . . 678
standard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 678
Notation
reverse polish . . . . . . . . . . . . . . . . . . . . . . . . . 404
NSC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
NSGA-II . . . . . . . . . . . . . . . . . . . . 198, 202, 266, 453
NTM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146, 149
Numbers
complex. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
integer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
law of large . . . . . . . . . . . . . . . . . . . . . . . . . . . 668
natural . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
rational . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
real . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
Numerical Problem. . . . . . . . . . . . . . . . . . . . . . . . . . 27
NWGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
O
Objective Function . . . . . . . . . . . . . . . . . 36, 44, 544
conflicting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
harmonic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
independent . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
OBUPM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
Off-The-Shelf . . . . . . . . . . . . . . . . . . . . . . . 43, 146
OneMax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
Ontogenic Mapping. . . . . . . . . . . . . . . . . . . . . 86, 273
Ontogeny
artificial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Operations Research. . . . . . . . . . . . . . . 23, 197, 509
Operator
search. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Operator Correlation. . . . . . . . . . . . . . . . . . . . . . . 163
Optimality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
Pareto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Optimization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
algorithm. . . . . . . . . . . . . . . . . . . . . . . . . . 23, 103
classication of . . . . . . . . . . . . . . . . . . . . . . 31
bayesian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
combinatorial . . . . . . . . . . . . . . . . . . . . . . . . . 181
continuous . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
derivative-based . . . . . . . . . . . . . . . . . . . . . . . . 53
derivative-free . . . . . . . . . . . . . . . . . . . . . . . . . . 53
difficulties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
discrete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
extremal . . . . . . . . . 3, 25, 32, 172, 493 f., 945
generalized . . . . . . . . . . . . . . . . . . . . . . . . . 495
global . . . . . . . . . . . . . . . . . . . . . . . . . . 22, 31, 509
interactive. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
lexicographic . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
multi-objective . . . . . . . . . . . . . . . . . . . . . . . . . 55
Pareto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
prevalence . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
weighted sum . . . . . . . . . . . . . . . . . . . . . . . . 62
offline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
online . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
particle swarm. 3, 27, 32, 37, 155, 188, 191,
207, 463, 481 f., 580, 945
problem . . . . . . . . . . . . . . . . . . . . . . . . . . . 22, 103
random. . . . . . . . . . . . 3, 32, 36, 116, 232, 505
setup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
structure of . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
taxonomy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
termination criterion . . . . . . . . . . . . . . . . . . 105
Optimum. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51, 89
global . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
isolated . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
lexicographical . . . . . . . . . . . . . . . . . . . . . . . . . 60
local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51, 88 f.
optimal set . . . . . . . . . . . . . . . . . . . . . . . . . 54, 57
extracting . . . . . . . . . . . . . . . . . . . . . . . . . . 309
obtain by deletion . . . . . . . . . . . . . . . . . . 309
pruning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
updating. . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
updating by insertion . . . . . . . . . . . . . . 309
Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
linear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
partial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65, 645
simple. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
total . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
Overfitting . . . . . . . . . . . 191, 193, 534, 536, 568
Overgeneralization . . . . . . . . . . . . . . . . . . . . . . . . . 194
Oversimplification . . . . . . . . . . . . . . . . . . . . . 194, 568
Overspecification . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
P
p-Spin. . . . . . . . . . . . . . . . . . . . . . . . . . . . 161, 179, 556
PADO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
PAES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
PAKDD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Parallelism
implicit . . . . . . . . . . . . . . . . . . . . . . . . . 263, 345 f.
intrinsic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
Parameter
endogenous . . . . . . . . . . . . . . 90, 360, 370, 481
exogenous . . . . . . . . . . . . . . . . . . . . . . . . . 90, 360
Parental Selection . . . . . . . . . . . . . . . . . . . . . . . . 260 f.
Pareto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
frontier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
optimal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
ranking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Pareto Ranking . . . . 68, 76, 260, 275 ff., 279, 281,
284, 300, 1210
Pareto-Ranking . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546
Partial Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
Partial order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Particle Swarm Optimization. 3, 27, 32, 37, 155,
188, 191, 207, 463, 481 f., 580, 945
Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
Fibonacci . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
long. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
long k . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
PBIL . . . . . . . . . . . . . . . . . . . . 438, 442 ff., 452
real-coded. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
PBILc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
PDF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 659 f.
Penalty
adaptive. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Death . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
dynamic. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Percentile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
Permission . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Permutation . . . . . . . . . . . . . 25, 333, 395, 643, 655
tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
Pertinency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Perturbation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
PESA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
PGM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
Phase
juxtapositional . . . . . . . . . . . . . . . . . . . . . . . . 349
primordial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
Phenome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Phenotype . . . . . . . . . . . . . . . . . . . . . . . . . . . 39, 43, 82
plasticity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
Phenotype-Genotype Mapping . . . . . . . . . . . . . 464
PIPE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447, 449, 452
Pitfall
siren . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Pitt approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
enterprise resource . . . . . . . . . . . . . . . . . . . . 536
Plasticity
phenotypic . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
Pleiotropy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177, 179
PMF. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 659 f.
Point
crossover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
saddle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Point Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . 692
Poisson . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671
Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671
Poisson Distribution . . . . . . . . . . . . . . . . . . . . . . . 671
Population. . . . . . . . . . . . . . . . . . . . . . . . . 90, 253, 265
Positive
false. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
Power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
PPSN. . . . 318, 353, 374, 384, 414, 421, 429, 459,
467, 474, 484
PPT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
Preemption
niche . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Prehistory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671
Premature Convergence . . . . . . . . . . 151, 482, 547
Preservative Selection . . . . . . . . . . . . . . . . . . . . . . 265
Prevalence. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77 f.
Prevention
convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
Primordial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
Principle
niche preemption. . . . . . . . . . . . . . . . . . . . . . 151
Prisoner's Dilemma . . . . . . . . . . . . . . . . . . . . . . . . 380
Probabilistic Algorithm . . . . . . . . . . . . . . . . . . . . . 33
Probability
von Mises [2822]. . . . . . . . . . . . . . . . . . . . . . .655
Bernoulli . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654
Conditional . . . . . . . . . . . . . . . . . . . . . . . . . . . 657
Kolmogorov . . . . . . . . . . . . . . . . . . . . . . . . . . . 656
of success. . . . . . . . . . . . . . . . . . . . . . . . . 163, 172
space. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 656
of a random variable . . . . . . . . . . . . . . . . 657
Probability Density Function. . . . . . . . . . . . . . . 659
Probability Mass Function . . . . . . . . . . . . . . . . . 659
Probit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 680
Problem
bin packing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
combinatorial . . . . . . . . . . . . . . . . . . . . . . 25, 472
continuous . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
discrete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Knapsack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
numerical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
optimization. . . . . . . . . . . . . . . . . . . . . . . 22, 103
multi-objective. . . . . . . . . . . . . . . . . . . . . . . 55
satisfiability . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
traveling salesman 25, 34, 45 f., 55, 91, 118,
122, 147, 209, 243, 327, 333, 350, 377,
425, 463, 621
Vehicle Routing . . . . . . . . . . . . . . . . . . . . . . . 537
vehicle routing . . . . 25, 147, 266, 350, 537 f.,
540 f., 548, 622, 634, 914
Problem Landscape . . . . . . . . . . . . . . . . . . . . . . . . . 95
Problem Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Product
Cartesian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 642
Production Systems . . . . . . . . . . . . . . . . . . . . . . . . 457
Programming
genetic. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3,
32, 37, 107, 155, 162 f., 173, 175, 180,
187, 192, 195, 262 ff., 273, 302, 304 f.,
379 ff., 386 f., 389, 396, 398, 400 f., 403,
405 f., 408, 411 f., 427, 434, 447, 531,
534, 536, 561, 566, 568, 824, 945
cartesian. . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
grammar-guided. . . . . . . . . . . 273, 381, 403
linear . . 32, 327, 381, 400, 403–408, 412
standard. 32, 273, 386 f., 389, 399, 403 f.,
412
strongly-typed. . . . . . . . . . . . . . . . . . . . . . . . 3,
32, 37, 107, 155, 162 f., 173, 175, 180,
187, 192, 195, 262 ff., 273, 302, 304 f.,
379 ff., 386 f., 389, 396, 398, 400 f., 403,
405 f., 408, 411 f., 427, 434, 447, 531,
534, 536, 561, 566, 568, 824, 945
integer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
mixed. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Property . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
constant innovation . . . . . . . . . . . . . . . . . . . 174
Proximity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Pruning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
PSO . . 3, 27, 32, 37, 155, 188, 191, 207, 463, 481 f.,
580, 945
psoReproduce . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
PVRP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
Q
QBB. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
Quadric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
Quantile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 666
Quartile. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
Quenching
simulated . . . . . . . . . . . . . . . . . . . . 247 ff., 252
Quintile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
R
Rail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
Raindrop Method . . . . . . . . . . . . . . . . 36, 235 f., 945
RAM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Ramped Half-and-Half . . . . . . . . . . . . . . . . . . . . . 390
Random
event. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654
Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . 653
experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 656
variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 657
continous . . . . . . . . . . . . . . . . . . . . . . . . . . . 658
discrete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 658
Random Access Machine . . . . . . . . . . . . . . . . . . . 145
Random Boolean Network. . . . . . . . . . . . . . . . . . 174
Random Keys. . . . . . . . . . . . . . . . . . .349 f., 377, 834
random neighbors . . . . . . . . . . . . . . . . . . . . . 455, 554
Random Optimization . . 3, 32, 36, 116, 232, 505
Random Sampling . . . . 113 ff., 118, 181, 240, 729
Random Selection. . . . . . . . . . . . . . . . . . . . . . . . . . 302
Random Walk . . . . 113–116, 118, 163, 168, 172,
174 f., 181, 210, 212, 240, 245 f., 267,
302, 514, 732
Randomized Algorithm. . . . . . . . . . . . . . . . . . . . . . 33
Range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646, 661
interquartile. . . . . . . . . . . . . . . . . . . . . . . . . . . 667
Ranking
Pareto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
RBN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Reachability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
Real-Coded . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
Recombination. . . . . . . . . . 262, 307, 335, 399, 544
discrete . . . . . . . . . . . . . . . . . . . . . . . . . . 337, 362
dominant . . . . . . . . . . . . . . . . . . . . 337, 362, 377
homologous . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
intermediate. . . . . . . . . . . . . . . . . . . . . . 338, 362
multi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
Reduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Redundancy . . . . . . . . . . . . . . . . . . . . . . . 86, 174, 569
Reflexiveness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
logistic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
symbolic . . 119, 121, 187, 191 f., 380 f., 387,
401, 403, 411 f., 531–535, 905 f., 908,
910
Relation
binary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .644
types and properties of . . . . . . . . . . . . . 644
equivalence. . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
partial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
total . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
Repairing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Repairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Representation
explicit. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
indirect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Reproduction. . . . . . . . . . . . . . . . . . . . . . . . . . 306, 542
asexual . . . . . . . . . . . . . . . . . . . . . . . . . . . 307, 325
binary. . . . . . . . . 335, 338, 340, 348, 398, 407
nullary . . . . . . . . . . . . . . . . . . . . . . 328, 340, 389
sexual . . . . . . . . . . . . . . . . . . 257, 262, 307, 325
unary . . . 330, 333, 340, 347 f., 393, 395 ff.
Resistance
dominance . . . . . . . . . . . . . . . . . . . . . . . . . 69, 198
Resistor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Reuse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
REVOP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
RFD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 477
RISC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
River Formation Dynamics . . . . . . . . . . . . . 32, 477
RO. . . . . . . . . . . . . . . . . . . . . 3, 32, 36, 116, 232, 505
Road. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
royal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
variable-length. . . . . . . . . . . . . . . . . . . . . . 559
VLR. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
RoboCup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Robustness. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
Root2path . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563, 565
Root2paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
Royal Road. . . . . . . . . . . . . . . . . . . . . . . . . . 173, 558 f.
variable-length . . . . . . . . . . . . . . . . . . . . . . . . 559
VLR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
Royal Road Function. . 154, 173, 455, 558–562,
566, 1221
Royal Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
RPCL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
RPN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
1204 INDEX
Ruggedness . . . . . . . . 161, 554, 556, 571, 573, 578
S
S-Expression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
SA . . . . . . . . . . . . . . . . . . . . . 3, 25, 32 f., 36, 156 f.,
160, 181, 231, 243–249, 252, 308, 412,
463 f., 493, 541, 557, 716, 740, 945
Saddle Point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Salience
high. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
low. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
Sample space. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 653
Sampling
random. . . . . . . . . 113 ff., 118, 181, 240, 729
SAT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
SC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31, 34
Scalability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Scalability Requirement . . . . . . . . . . . . . . . . . . . . 219
Scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
large . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
small . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Schedule
temperature. . . . . . . . . . . . . . . . . . . . . . . . . . . 247
adaptive . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
exponential . . . . . . . . . . . . . . . . . . . . . . . . . 249
linear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
logarithmic . . . . . . . . . . . . . . . . . . . . . . . . . 248
polynomial . . . . . . . . . . . . . . . . . . . . . . . . . 249
Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
job shop. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Schema. . . . . . . . . . . . . . . . . . . . . . . . . . . 119, 163, 341
Theorem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
theorem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
Schema Theorem . . . . . 119, 167, 217, 263, 341 f.,
344 ff., 558
Schwefel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
SCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302 f., 305, 546
SEA 135, 228, 235, 237, 251, 322, 356, 376, 386,
417, 423, 431, 462, 469, 476, 479, 486,
491, 497, 500, 502, 506, 518, 521, 525,
528
SEAL. . . . 130, 320, 354, 374, 384, 415, 422, 429,
460, 468, 474, 484
Search
A* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 519
best-first . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
breadth-first . . . . . . 32, 115, 515, 517, 519
depth-first. . . . . . . . . . . 32, 115, 516 f., 519
iterative deepening . . . . . . . . . . . . 32, 516
depth-limited. . . . . . . . . . . . . . . . . . . . . . . . 516 f.
greedy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 519
harmony . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
informed . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 518
local . . . . . . . . . . . . . . . . . . . . 229, 243, 463, 514
Neighborhood . . . . . . . . . . . . . . . . . . . . . . . . . 512
State Space. . . . . . . . . . . . . . . . . . . . . . . . 32, 511
tabu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
uninformed . . . . . . . . . . . . . . . . . . . . 32, 34, 514
Search Operation. . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Search Procedure
greedy randomized adaptive . . . . . 157, 499
Search Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Searching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
Seeding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254, 258
Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260 f., 285
breeding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
deterministic . . . . . . . . . . . . . . . . . . . . . . . . . . 289
ecological . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
elitist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
environment. . . . . . . . . . . . . . . . . . . . . . . . . . . 261
environmental . . . . . . . . . . . . . . . . . . . . . . . . . 260
extinctive . . . . . . . . . . . . . . . . . . . . . . . . 183, 265
left . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
right . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
extrema . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Fitness Proportionate. . . . . . . . . . . 290, 292 f.
mating . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
natural . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
Node. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
parental . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260 f.
preservative . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
random . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
ranking
linear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
Roulette Wheel . . . . . . . . . . . . . . . . . 290, 292 f.
sexual . . . . . . . . . . . . . . . . . . . . . . . . . . 260 f., 287
survival . . . . . . . . . . . . . . . . . . . . 260 f., 287, 302
threshold. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
Tournament . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
non-deterministic . . . . . . . . . . . . . . . . . . . 300
with replacement. . . . . . . . . . . . . . . 298, 300
without replacement. . . . . . . . . . . . . . 298 f.
tournament . . . . . . . . . . . . . . . . . . . . . . . . . . . 573
truncation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
Self-Adaptation. . . . . . . . . . . . . . 219, 360, 370, 419
Self-Organization. . . . . . . . . . . . . . . . . . . . . . . . . . . 420
criticality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
Self-Organized Criticality . . . . . . . . . . . . . . . . . . 493
SEMCCO322, 356, 376, 386, 417, 423, 431, 462,
470, 476, 486
Sensor Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
Separability . . . . . . . . . . . . . . . . . . . . . . . . . 178, 204 f.
non . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
partial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25, 639
cardinality . . . . . . . . . . . . . . . . . . . . . . . . . . . . 639
Cartesian product . . . . . . . . . . . . . . . . . . . . . 642
complement . . . . . . . . . . . . . . . . . . . . . . . . . . . 642
connected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
countable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 642
difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 642
empty. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
intersection . . . . . . . . . . . . . . . . . . . . . . . . . . . 641
List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
membership. . . . . . . . . . . . . . . . . . . . . . . . . . . 639
operations on . . . . . . . . . . . . . . . . . . . . . . . . . 641
optimal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54, 57
Power set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 642
relations between. . . . . . . . . . . . . . . . . . . . . . 640
special . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
subset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
theory. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .639
Tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
uncountable . . . . . . . . . . . . . . . . . . . . . . . . . . . 642
union . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641
Sexual Reproduction. . . . . . . . . 257, 262, 307, 325
Sexual Selection . . . . . . . . . . . . . . . . . . . . . 260 f., 287
SGP . . . . . . . . 32, 273, 386 f., 389, 399, 403 f., 412
SH. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
Sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277, 546
Variety Preserving . . . . . . . . . . . . . . . . . . . . 279
SHCC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399 f.
SHCLVND. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
SI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 37
Simple Order. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
Simplex
downhill . . . 32, 104, 264, 420, 463, 501, 945
Simulated Annealing. . . . . 3, 25, 32 f., 36, 156 f.,
160, 181, 231, 243–249, 252, 308, 412,
463 f., 493, 541, 557, 716, 740, 945
Simulated Quenching. . . . . . . . . . . . . . . . . . 247
temperature schedule. . . . . . . . . . . . . . . . . . 247
Simulated Quenching. . . . . . . . . . . 156, 247 ff., 252
Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
Simulation Dynamics. . . . . . . . . . . . . . . . . . . . . . . 163
Single-Objectivity . . . . . . . . . . . . . . . . . . . 39, 54, 56,
60 ., 70 f., 88, 93, 95, 100 f., 106, 109,
158, 160, 198, 201 f., 209, 238, 253 f.,
274, 278, 287, 325, 360, 362, 434, 439,
501, 541, 580, 713, 724, 905, 1221 ff.
Siren Pitfall . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
SIS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476, 486
Skewness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 665
Small-scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
SMC131, 227, 234, 236, 251, 320, 354, 374, 385,
415, 422, 430, 460, 468, 474, 478, 484,
490, 496, 500, 502, 506, 518, 521, 524,
528
SOC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
SoCPaR . 133, 321, 355, 375, 385, 416, 423, 430,
461, 469, 475, 485
Soft Computing . . . . . . . . . . . . . . . . . . . . . . . . . 31, 34
Solution
candidate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Space
objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
problem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
sample . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 653
search. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
state . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
Sparc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
SPEA 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
SPEA-2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
Sphere . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368, 581
shifted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610
SPICE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Spin Glass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Spin-Glass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
Splice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
SPX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
SQ. . . . . . . . . . . . . . . . . . . . . . . . . . . 247 ff., 252
SSCI 322, 356, 376, 386, 417, 423, 431, 462, 470,
476, 486
SSEA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266, 324
SSS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
Standard Deviation. . . . . . . . . . . . . . . . . . . . . . . 662 f.
Standard Genetic Programming . 32, 273, 386 f.,
389, 399, 403 f., 412
State Machine
finite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
State Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
Search. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 511
State Space Search . . . . . . . . . . . 34 f., 511 f., 514 f.
Static-. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Satisfiability Problem . . . . . . . . . . . . . . . . . . . 147
Statistical Independence. . . . . . . . . . . . . . . . . . . . 657
Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 659
Steady-State . 157, 255, 266, 286, 289, 324, 546 ff.
Steady-State Evolutionary Algorithm. 266, 324
STeP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
STGP . 3, 32, 37, 107, 155, 162 f., 173, 175, 180,
187, 192, 195, 262 ff., 273, 302, 304 f.,
379 ff., 386 f., 389, 396, 398, 400 f., 403,
405 f., 408, 411 f., 427, 434, 447, 531,
534, 536, 561, 566, 568, 824, 945
Sticky Crossover . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
Stigmergy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
sematectonic . . . . . . . . . . . . . . . . . . . . . . . . . . 471
sign-based. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
Stochastic Algorithm. . . . . . . . . . . . . . . . . . . . . . . . 33
Stochastic Gradient Descent. . . . . . . . . . . . . . . . 232
Stopping Criterion . . . . . . . . . . . . . . . . . . . . . . . . . 105
Strategy
evolutionary. . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Strongly-Typed Genetic Programming . . . . 387,
392 f., 398
Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Student's t-Distribution . . . . . . . . . . . . . . . . . 687
Subset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
Subtree . . . . . . . . 181, 393, 395–400, 404, 1211
Success
probability of . . . . . . . . . . . . . . . . . . . . . 163, 172
Sum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 661
sqr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 662
weighted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Support Vector Machine . . . . . . . . . . . . . . . . . . . 536
Surjective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644
Surjectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644
Survival Selection. . . . . . . . . . . . . . . 260 f., 287, 302
SVM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
Swap Body. . . . . . . . . . . . . . . . . . . . . . . . . . . . 539, 542
Swarm
particle/optimization. . . . 3, 27, 32, 37, 155,
188, 191, 207, 463, 481 f., 580, 945
Swarm Intelligence. . . . . . . . . . . . . . . . . . . . . . . 32, 37
Symbolic Regression . 119, 121, 187, 191 f., 380 f.,
387, 401, 403, 411 f., 531–535, 905 f.,
908, 910
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
Symmetric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
Symmetry. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
Synonymous Change . . . . . . . . . . . . . . . . . . . . . . . 173
System
classifier . . . . . . . . . . . . . . . . . . . . . 339, 457 f.
learning . . . . . . 3, 32, 264, 457 f., 536, 945
multi-agent. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
T
t-Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687
TAAI . . . . . . . . . . . . . . . . . . . . . . . . 134, 462, 476, 486
Tabu Search . 3, 25, 32, 36, 155, 231, 463 f., 489,
512, 541
TAI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
TAINN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Technological Landscape . . . . . . . . . . . . . . . . . . . 556
Telematics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549 f.
Temperature Schedule. . . . . . . . . . . . . . . . . . . . . . 247
Termination Criterion. . . . . . . . . . . . . . . . . . . . . . 105
Test
statistical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 699
TGP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381, 386, 389
Theorem
central limit . . . . . . . . . . . . . . . . . . . . . . . . . . . 681
Schema. . . . . . . . . . . . . . . . . . . . . . . . . . . 341, 558
ugly duckling. . . . . . . . . . . . . . . . . . . . . . . . . . 212
Threshold
error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163, 562
Threshold Selection . . . . . . . . . . . . . . . . . . . . . . . . 289
Time Consumption. . . . . . . . . . . . . . . . . . . . . . . . . 514
TODO. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
Total Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
Tournament Selection . . . . . . . . . . . . . . . . . . . . . . 573
TPX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
Trac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
Train . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
Transistor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Transitive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
Transitivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
Trap Function . . . . . . . . . . . . . . . . . . . . . . . 167, 557 f.
Traveling Salesman Problem25, 34, 45 f., 55, 91,
118, 122, 147, 209, 243, 327, 333, 350,
377, 425, 463, 621
Tree. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
decision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
royal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
Tree Genomes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
Truck . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
Truck-Meets-Truck . . . . . . . . . . . . . . . . . . . . . . . . . 544
Truncation Selection . . . . . . . . . . . . . . . . . . . . . . . 289
TS. . 3, 25, 32, 36, 155, 231, 463 f., 489, 512, 541
TSoR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
TSP25, 34, 45 f., 55, 91, 118, 122, 147, 209, 243,
327, 333, 350, 377, 425, 463, 621
TSR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
Tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
Turing Completeness . . . . . . . . . . . . . . . . . . . . . . . 406
Turing Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
deterministic . . . . . . . . . . . . . . . . . . . . . . . . . . 145
non-deterministic. . . . . . . . . . . . . . . . . . . . . . 146
Two-Point Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
Type. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
Type 1 Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
Type 2 Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
U
Ugly Duckling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
UMDA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434, 437
Unbiasedness. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Underspecification . . . . . . . . . . . . . . . . . . . . . . 348
Uni-Modality . . . . . . . . . . . . . . . . . . . . . 165, 452, 454
Uniform Distribution
continuous . . . . . . . . . . . . . . . . . . . . . . . . . . . . 676
discrete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 668
uniformSelectNode . . . . . . . . . . . . . . . . . . . . . . . . . 392
Uninformed Search . . . . . . . . . . . . . . . . . . . . . 32, 514
UX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337, 346, 362
V
Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 662
epistatic. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
estimator. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 662
Variation
coefficient of . . . . . . . . . . . . . . . . . . . . . . . . 663
Variety Preserving . . . . . . . . . . . . . . . . . . . . . . . . . 279
Vehicle Routing Problem. . . . . 25, 147, 266, 350,
537 f., 540 f., 548, 622, 634, 914
Capacitated . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
Multiple Depot. . . . . . . . . . . . . . . . . . . . . . . . 538
Periodic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
with Backhauls. . . . . . . . . . . . . . . . . . . . . . . . 538
with Pick-up and Delivery. . . . . . . . . . . . . 538
VIL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
VRP . . 25, 147, 266, 350, 537 f., 540 f., 548, 622,
634, 914
VRPB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
VRPPD. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
W
Walk
Adaptive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
adaptive
fitter dynamics . . . . . . . . . . . . . . . . . . . . . 116
greedy dynamics . . . . . . . . . . . . . . . . . . . . 116
one-mutant . . . . . . . . . . . . . . . . . . . . . . . . . 116
drunkard's . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
random . . . . . 113–116, 118, 163, 168, 172,
174 f., 181, 210, 212, 240, 245 f., 267,
302, 514, 732
WCCI . . . 318, 353, 374, 384, 414, 421, 429, 460,
467, 474, 484
Wechselbrücksteuerung. . . . . . . . . . . . . . . . . . . . . 537
Weierstraß Function. . . . . . . . . . . . . . . . . . . . . . 165
Weighted Sum. . . . . . . . . . . . . . . . . . . . . . 62, 78, 275
WHCC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
Wildcard. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
WOMA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
WOPPLOT . . 133, 228, 235, 237, 251, 321, 355,
375, 385, 416, 423, 430, 461, 469, 475,
478, 485, 491, 496, 500, 502, 506, 518,
521, 524, 528
WPBA . . 322, 356, 375, 386, 416, 423, 431, 462,
469, 476, 486
Wrapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396 f.
WSC131, 320, 354, 374, 385, 415, 422, 430, 461,
468, 475, 484
X
XCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
Y
Yellow Box. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
Z
Z80 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
ZCS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .458
List of Figures
1.1 Terms that hint: Optimization Problem (inspired by [428]) . . . . . . . . . . . . . 21
1.2 One example instance of a bin packing problem. . . . . . . . . . . . . . . . . . . . . 25
1.3 An example for circuit layouts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
1.4 The graph of g(x) as given in Equation 1.3 with the four roots x*1 to x*4. . . . . . . . 28
1.5 A truss optimization problem like the one given in [18]. . . . . . . . . . . . . . . . . 29
1.6 A rough sketch of the problem type relations. . . . . . . . . . . . . . . . . . . . . . . 30
1.7 Overview of optimization algorithms. . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
1.8 Black box treatment of objective functions in metaheuristics. . . . . . . . . . . . . . 35
1.9 The robotic soccer team CarpeNoctem (on the right side) on the fourth day of the
RoboCup German Open 2009. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.1 A sketch of the stone's throw in Example E2.1. . . . . . . . . . . . . . . . . . . . . . 44
2.2 An example for Traveling Salesman Problems, based on data from GoogleMaps. . . . 45
3.1 Global and local optima of a two-dimensional function. . . . . . . . . . . . . . . . . . 52
3.2 Examples for functions with multiple optima . . . . . . . . . . . . . . . . . . . . . . 54
3.3 The possible relations between objective functions as given in [2228, 2230]. . . . . . 56
3.4 Two functions f1 and f2 with different maxima x1 and x2. . . . . . . . . . . . . . . . . 58
3.5 Two functions f3 and f4 with different minima x1, x2, x3, and x4. . . . . . . . . . . . 58
3.6 Optimization using the weighted sum approach (rst example). . . . . . . . . . . . . 62
3.7 Optimization using the weighted sum approach (second example). . . . . . . . . . . 63
3.8 A problematic constellation for the weighted sum approach. . . . . . . . . . . . . . . 64
3.9 Examples of Pareto fronts [2915]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.10 Optimization using the Pareto Frontier approach. . . . . . . . . . . . . . . . . . . . . 67
3.11 Optimization using the Pareto Frontier approach (second example). . . . . . . . . . 68
3.12 Optimization using the Pareto-based Method of Inequalities approach (rst example). 73
3.13 Optimization using the Pareto-based Method of Inequalities approach (rst example). 74
3.14 An external decision maker providing an Evolutionary Algorithm with utility values. 76
4.1 Examples for real-coding candidate solutions. . . . . . . . . . . . . . . . . . . . . . . 82
4.2 The relation of genome, genes, and the problem space. . . . . . . . . . . . . . . . . . 83
4.3 Neighborhoods under non-injective genotype-phenotype mappings. . . . . . . . . . . 87
4.4 Minima of a single objective function. . . . . . . . . . . . . . . . . . . . . . . . . . . 88
5.1 An example optimization problem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.2 The problem landscape of the example problem derived with searchOp1. . . . . . . . 97
5.3 The problem landscape of the example problem derived with searchOp2. . . . . . . . 98
6.1 Spaces, Sets, and Elements involved in an optimization process. . . . . . . . . . . . . 102
7.1 Use dedicated algorithms! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
7.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
8.1 Some examples for random walks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
9.1 A graph coloring-based example for properties and formae. . . . . . . . . . . . . . . . 120
9.2 Example for formae in symbolic regression. . . . . . . . . . . . . . . . . . . . . . . . 121
11.1 Different possible properties of fitness landscapes (minimization). . . . . . . . . . . . 140
12.1 Illustration of some functions and their rising speed, gleaned from [2365]. . . . . . . 145
12.2 Turing machines. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
13.1 Premature convergence in the objective space. . . . . . . . . . . . . . . . . . . . . . . 152
13.2 Pareto front approximation sets [2915]. . . . . . . . . . . . . . . . . . . . . . . . . . . 153
13.3 Loss of Diversity leads to Premature Convergence. . . . . . . . . . . . . . . . . . . . 155
13.4 Exploration versus Exploitation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
13.5 Multi-Objectivization à la Sum-of-Parts. . . . . . . . . . . . . . . . . . . . . . . . . . 159
14.1 The landscape difficulty increases with increasing ruggedness. . . . . . . . . . . . . . 161
15.1 Increasing landscape difficulty caused by deceptivity. . . . . . . . . . . . . . . . . . . 167
16.1 Landscape difficulty caused by neutrality. . . . . . . . . . . . . . . . . . . . . . . . . 171
16.2 Possible positive influence of neutrality. . . . . . . . . . . . . . . . . . . . . . . . . . 173
17.1 Pleiotropy and epistasis in a dinosaur's genome. . . . . . . . . . . . . . . . . . . . . . 178
17.2 The influence of epistasis on the fitness landscape. . . . . . . . . . . . . . . . . . . . 180
17.3 Epistasis in the permutation-representation for bin packing. . . . . . . . . . . . . . . 182
18.1 A robust local optimum vs. an unstable global optimum. . . . . . . . . . . . . . . . 189
19.1 Overfitting due to complexity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
19.2 Fitting noise. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
19.3 Oversimplification. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
20.1 Approximated distribution of the domination rank in random samples for several
population sizes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
20.2 The proportion of non-dominated candidate solutions for several population sizes ps
and dimensionalities n. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
23.1 A visualization of the No Free Lunch Theorem. . . . . . . . . . . . . . . . . . . . . . 211
23.2 Fitness Landscapes and the NFL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
23.3 The puzzle of optimization algorithms. . . . . . . . . . . . . . . . . . . . . . . . . . . 214
24.1 Examples for an increase of the dimensionality of a search space G (1d) to G' (2d). . 220
27.1 A sketch of annealing in metallurgy as role model for Simulated Annealing. . . . . . 244
27.2 Different temperature schedules for Simulated Annealing. . . . . . . . . . . . . . . . 248
28.1 The basic cycle of Evolutionary Algorithms. . . . . . . . . . . . . . . . . . . . . . . . 254
28.2 The family of Evolutionary Algorithms. . . . . . . . . . . . . . . . . . . . . . . . . . 264
28.3 The conguration parameters of Evolutionary Algorithms. . . . . . . . . . . . . . . . 269
28.4 An example scenario for Pareto ranking. . . . . . . . . . . . . . . . . . . . . . . . . . 276
28.5 The sharing potential in the Variety Preserving Example E28.17. . . . . . . . . . . . 282
28.6 Selection with and without replacement. . . . . . . . . . . . . . . . . . . . . . . . . . 286
28.7 The four example fitness cases. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
28.8 The number of expected offspring in truncation selection. . . . . . . . . . . . . . . . 290
28.9 Examples for the idea of roulette wheel selection. . . . . . . . . . . . . . . . . . . . . 294
28.10 The number of expected offspring in roulette wheel selection. . . . . . . . . . . . . . 295
28.11 The number of expected offspring in tournament selection. . . . . . . . . . . . . . . 297
28.12 The expected numbers of occurrences for different values of n and c. . . . . . . . . . 305
29.1 A Sketch of a DNA Sequence. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
29.2 Value-altering mutation of string chromosomes. . . . . . . . . . . . . . . . . . . . . . 330
29.3 Mutation in binary- and real-coded GAs . . . . . . . . . . . . . . . . . . . . . . . . . 332
29.4 Permutation applied to a string chromosome. . . . . . . . . . . . . . . . . . . . . . . 333
29.5 Crossover (recombination) operators for fixed-length string genomes. . . . . . . . . . 335
29.6 The points sampled by different crossover operators. . . . . . . . . . . . . . . . . . . 338
29.7 Search operators for variable-length strings (additional to those from Section 29.3.2
and Section 29.3.3). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
29.8 Crossover of variable-length string chromosomes. . . . . . . . . . . . . . . . . . . . . 341
29.9 An example for schemata in a three bit genome. . . . . . . . . . . . . . . . . . . . . 342
29.10 Two linked genes and their destruction probability under single-point crossover. . . 347
29.11 An example for two subsequent applications of the inversion operation [1196]. . . . 348
30.1 Sampling Results from Normal Distributions . . . . . . . . . . . . . . . . . . . . . . 364
31.1 Genetic Programming in the context of the IPO model. . . . . . . . . . . . . . . . . 379
31.2 Some important works on Genetic Programming illustrated on a time line. . . . . . . 380
31.3 The formula e^(sin x) + 3·√|x| expressed as a tree. . . . . . . . . . . . . . . . . . . . 387
31.4 The AST representation of algorithms/programs. . . . . . . . . . . . . . . . . . . . . 388
31.5 Tree creation by the full method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
31.6 Tree creation by the grow method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
31.7 Possible tree mutation operations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
31.8 Mutation of a mathematical expression represented as a tree. . . . . . . . . . . . . . 395
31.9 Tree permutation: (asexually) shuffling subtrees. . . . . . . . . . . . . . . . . . . . . 395
31.10 Tree editing: (asexual) optimization. . . . . . . . . . . . . . . . . . . . . . . . . . . 396
31.11 An example for tree encapsulation. . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
31.12 An example for tree wrapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
31.13An example for tree lifting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
31.14Tree recombination by exchanging subtrees. . . . . . . . . . . . . . . . . . . . . . . . 399
31.15Recombination of two mathematical expressions represented as trees. . . . . . . . . . 399
31.16Comparison of functions and macros. . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
31.17 The impact of insertion operations in Genetic Programming. . . . . . . . . . . . . . 405
34.1 Comparison between information in a population and in a model. . . . . . . . . . . . 428
34.2 A rough sketch of the UMDA process. . . . . . . . . . . . . . . . . . . . . . . . . . . 435
36.1 The Baldwin effect. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
46.1 An example for a possible search space sketched as acyclic graph. . . . . . . . . . . . 513
49.1 An example genotype of symbolic regression with x = x1 ∈ R. . . . . . . . . . . . . 532
49.2 φ(x), the evolved φ1(x) ≈ φ(x), and φ2(x). . . . . . . . . . . . . . . . . . . . . . . . 535
49.3 The freight traffic on German roads in billion ton-kilometers. . . . . . . . . . . . . . 537
49.4 Different flavors of the VRP and their relation to the in.west system. . . . . . . . . . 539
49.5 The structure of the phenotypes x. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
49.6 Some mutation operators from the freight planning EA. . . . . . . . . . . . . . . . . 543
49.7 Two examples for the freight plan evolution. . . . . . . . . . . . . . . . . . . . . . . . 548
49.8 An overview of the in.west system. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
49.9 The YellowBox, a mobile sensor node. . . . . . . . . . . . . . . . . . . . . . . . . . . 550
50.1 Ackley's "Trap" function [19, 1467]. . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
50.2 The perfect Royal Trees. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
50.3 Example fitness evaluation of Royal Trees. . . . . . . . . . . . . . . . . . . . . . . . . 561
50.4 The root2path for l = 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 564
50.5 An example for the fitness landscape model. . . . . . . . . . . . . . . . . . . . . . . . 567
50.6 An example for the epistasis mapping z ↦ e_4(z). . . . . . . . . . . . . . . . . . . . . 570
50.7 An example for r_γ with γ = 0..10 and ∆f = 5. . . . . . . . . . . . . . . . . . . . . . 571
50.8 The basic problem hardness. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 574
50.9 Experimental results for the ruggedness. . . . . . . . . . . . . . . . . . . . . . . . . . 574
50.10 Experiments with ruggedness and deceptiveness. . . . . . . . . . . . . . . . . . . . . 576
50.11 Experiments with epistasis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
50.12 The results from experiments with neutrality. . . . . . . . . . . . . . . . . . . . . . . 578
50.13 Expectation and reality: experiments involving both epistasis and neutrality. . . . . 579
50.14 Expectation and reality: experiments involving both ruggedness and epistasis. . . . 579
50.15 Examples for the sphere function (v = 100). . . . . . . . . . . . . . . . . . . . . . . 581
50.16 Examples for Schwefel's problem 2.22. . . . . . . . . . . . . . . . . . . . . . . . . . . 583
50.17 Examples for Schwefel's problem 1.2. . . . . . . . . . . . . . . . . . . . . . . . . . . 585
50.18 Examples for Schwefel's problem 2.21. . . . . . . . . . . . . . . . . . . . . . . . . . . 587
50.19 The 2d-example for the generalized Rosenbrock's function. . . . . . . . . . . . . . . 589
50.20 The 1d-example for the step function (v = 10). . . . . . . . . . . . . . . . . . . . . . 591
50.21 Examples for the quartic function with noise. . . . . . . . . . . . . . . . . . . . . . . 593
50.22 Examples for Schwefel's problem 2.26. . . . . . . . . . . . . . . . . . . . . . . . . . . 595
50.23 Examples for the generalized Rastrigin's function. . . . . . . . . . . . . . . . . . . . 597
50.24 Examples for Ackley's function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
50.25 Examples for the generalized Griewank function. . . . . . . . . . . . . . . . . . . . . 601
50.26 Examples for the general penalized function 1. . . . . . . . . . . . . . . . . . . . . . 604
50.27 Examples for the general penalized function 2. . . . . . . . . . . . . . . . . . . . . . 606
50.28 Examples for Levy's function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608
50.29A close-to-real-world Vehicle Routing Problem . . . . . . . . . . . . . . . . . . . . . 622
51.1 Set operations performed on sets A and B inside a set A. . . . . . . . . . . . . . . . 641
51.2 Properties of a binary relation R with domain A and codomain B. . . . . . . . . . . 644
53.1 Examples for the discrete uniform distribution . . . . . . . . . . . . . . . . . . . . . 670
53.2 Examples for the Poisson distribution. . . . . . . . . . . . . . . . . . . . . . . . . . . 672
53.3 Examples for the binomial distribution. . . . . . . . . . . . . . . . . . . . . . . . . . 675
53.4 Some examples for the continuous uniform distribution. . . . . . . . . . . . . . . . . 677
53.5 Examples for the normal distribution. . . . . . . . . . . . . . . . . . . . . . . . . . . 679
53.6 Some examples for the exponential distribution. . . . . . . . . . . . . . . . . . . . . . 683
53.7 Examples for the χ² distribution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 685
53.8 Examples for Student's t-distribution. . . . . . . . . . . . . . . . . . . . . . . . . . . 688
53.9 The PMF and CMF of the dice throw . . . . . . . . . . . . . . . . . . . . . . . . . . 690
53.10 The numbers thrown in the dice example . . . . . . . . . . . . . . . . . . . . . . . . 691
List of Tables
2.1 The unique tours for Example E2.2. . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.1 Applications and Examples of Multi-Objective Optimization. . . . . . . . . . . . . . 59
3.2 Applications and Examples of Constraint Optimization. . . . . . . . . . . . . . . . . 70
10.1 Applications and Examples of Optimization. . . . . . . . . . . . . . . . . . . . . . . . 123
10.2 Conferences and Workshops on Optimization. . . . . . . . . . . . . . . . . . . . . . . 126
18.1 Applications and Examples of robust methods. . . . . . . . . . . . . . . . . . . . . . 189
20.1 Applications and Examples of Many-Objective Optimization. . . . . . . . . . . . . . 197
21.1 Applications and Examples of Large-Scale Optimization. . . . . . . . . . . . . . . . . 203
22.1 Applications and Examples of Dynamic Optimization. . . . . . . . . . . . . . . . . . 207
25.1 Applications and Examples of Metaheuristics. . . . . . . . . . . . . . . . . . . . . . . 226
25.2 Conferences and Workshops on Metaheuristics. . . . . . . . . . . . . . . . . . . . . . 227
26.1 Applications and Examples of Hill Climbing. . . . . . . . . . . . . . . . . . . . . . . 234
26.2 Conferences and Workshops on Hill Climbing. . . . . . . . . . . . . . . . . . . . . . . 234
26.3 Applications and Examples of Raindrop Method. . . . . . . . . . . . . . . . . . . . . 236
26.4 Conferences and Workshops on Raindrop Method. . . . . . . . . . . . . . . . . . . . 236
26.5 Bin Packing test data from [2422], Data Set 1. . . . . . . . . . . . . . . . . . . . . . 239
27.1 Applications and Examples of Simulated Annealing. . . . . . . . . . . . . . . . . . . 250
27.2 Conferences and Workshops on Simulated Annealing. . . . . . . . . . . . . . . . . . . 250
28.1 The Pareto domination relation of the individuals illustrated in Figure 28.4. . . . . . 277
28.2 An example for Variety Preserving based on Figure 28.4. . . . . . . . . . . . . . . . . 283
28.3 The distance and sharing matrix of the example from Table 28.2. . . . . . . . . . . . 283
28.4 Applications and Examples of Evolutionary Algorithms. . . . . . . . . . . . . . . . . 314
28.5 Conferences and Workshops on Evolutionary Algorithms. . . . . . . . . . . . . . . . 317
29.1 Applications and Examples of Genetic Algorithms. . . . . . . . . . . . . . . . . . . . 351
29.2 Conferences and Workshops on Genetic Algorithms. . . . . . . . . . . . . . . . . . . 353
30.1 Applications and Examples of Evolution Strategies. . . . . . . . . . . . . . . . . . . . 373
30.2 Conferences and Workshops on Evolution Strategies. . . . . . . . . . . . . . . . . . . 373
31.1 Applications and Examples of Genetic Programming. . . . . . . . . . . . . . . . . . . 382
31.2 Conferences and Workshops on Genetic Programming. . . . . . . . . . . . . . . . . . 383
31.3 Applications and Examples of Linear Genetic Programming. . . . . . . . . . . . . . . 408
31.4 Applications and Examples of Grammar-Guided Genetic Programming. . . . . . . . 409
31.5 Applications and Examples of Graph-based Genetic Programming. . . . . . . . . . . 409
32.1 Applications and Examples of Evolutionary Programming. . . . . . . . . . . . . . . . 414
32.2 Conferences and Workshops on Evolutionary Programming. . . . . . . . . . . . . . . 414
33.1 Applications and Examples of Differential Evolution. . . . . . . . . . . . . . . . . . . 421
33.2 Conferences and Workshops on Differential Evolution. . . . . . . . . . . . . . . . . . 421
34.1 Applications and Examples of Estimation Of Distribution Algorithms. . . . . . . . . 428
34.2 Conferences and Workshops on Estimation Of Distribution Algorithms. . . . . . . . 429
35.1 Applications and Examples of Learning Classifier Systems. . . . . . . . . . . . . . . . 459
35.2 Conferences and Workshops on Learning Classifier Systems. . . . . . . . . . . . . . . 459
36.1 Applications and Examples of Memetic Algorithms. . . . . . . . . . . . . . . . . . . 467
36.2 Conferences and Workshops on Memetic Algorithms. . . . . . . . . . . . . . . . . . . 467
37.1 Applications and Examples of Ant Colony Optimization. . . . . . . . . . . . . . . . . 473
37.2 Conferences and Workshops on Ant Colony Optimization. . . . . . . . . . . . . . . . 473
38.1 Applications and Examples of River Formation Dynamics. . . . . . . . . . . . . . . . 478
38.2 Conferences and Workshops on River Formation Dynamics. . . . . . . . . . . . . . . 478
39.1 Applications and Examples of Particle Swarm Optimization. . . . . . . . . . . . . . . 483
39.2 Conferences and Workshops on Particle Swarm Optimization. . . . . . . . . . . . . . 483
40.1 Applications and Examples of Tabu Search. . . . . . . . . . . . . . . . . . . . . . . . 490
40.2 Conferences and Workshops on Tabu Search. . . . . . . . . . . . . . . . . . . . . . . 490
41.1 Applications and Examples of Extremal Optimization. . . . . . . . . . . . . . . . . . 496
41.2 Conferences and Workshops on Extremal Optimization. . . . . . . . . . . . . . . . . 496
42.1 Applications and Examples of GRASPs. . . . . . . . . . . . . . . . . . . . . . . . . . 500
42.2 Conferences and Workshops on GRASPs. . . . . . . . . . . . . . . . . . . . . . . . . 500
43.1 Applications and Examples of Downhill Simplex. . . . . . . . . . . . . . . . . . . . . 502
43.2 Conferences and Workshops on Downhill Simplex. . . . . . . . . . . . . . . . . . . . 502
44.1 Conferences and Workshops on Random Optimization. . . . . . . . . . . . . . . . . . 506
46.1 Applications and Examples of Uninformed Search. . . . . . . . . . . . . . . . . . . . 518
46.2 Conferences and Workshops on Uninformed Search. . . . . . . . . . . . . . . . . . . . 518
46.3 Applications and Examples of Informed Search. . . . . . . . . . . . . . . . . . . . . . 520
46.4 Conferences and Workshops on Informed Search. . . . . . . . . . . . . . . . . . . . . 521
47.1 Applications and Examples of Branch And Bound. . . . . . . . . . . . . . . . . . . . 524
47.2 Conferences and Workshops on Branch And Bound. . . . . . . . . . . . . . . . . . . 524
48.1 Applications and Examples of Cutting-Plane Method. . . . . . . . . . . . . . . . . . 528
48.2 Conferences and Workshops on Cutting-Plane Method. . . . . . . . . . . . . . . . . . 528
49.1 Sample Data A = {(x_i, y_i) : i ∈ [0..8]} for Equation 49.8. . . . . . . . . . . . . . . 534
49.2 The configurations used in the full-factorial experiments. . . . . . . . . . . . . . . . . 546
49.3 The measurements taken during the experiments. . . . . . . . . . . . . . . . . . . . . 546
49.4 The best and the worst evaluation results in the full-factorial tests. . . . . . . . . . . 547
50.1 Some long Root2paths for l from 1 to 11 with underlined "bridge" elements. . . . . . 564
50.2 Properties of the sphere function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581
50.3 Results obtained for the sphere function with function evaluations. . . . . . . . . . 581
50.4 Properties of Schwefel's problem 2.22. . . . . . . . . . . . . . . . . . . . . . . . . . . 583
50.5 Results obtained for Schwefel's problem 2.22 with function evaluations. . . . . . . . 583
50.6 Properties of Schwefel's problem 1.2. . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
50.7 Results obtained for Schwefel's problem 1.2 with function evaluations. . . . . . . . . 585
50.8 Properties of Schwefel's problem 2.21. . . . . . . . . . . . . . . . . . . . . . . . . . . 587
50.9 Results obtained for Schwefel's problem 2.21 with function evaluations. . . . . . . . 587
50.10 Properties of the generalized Rosenbrock's function. . . . . . . . . . . . . . . . . . . 589
50.11 Results obtained for the generalized Rosenbrock's function with function evaluations. 590
50.12 Properties of the step function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591
50.13 Results obtained for the step function with function evaluations. . . . . . . . . . . . 591
50.14 Properties of the quartic function with noise. . . . . . . . . . . . . . . . . . . . . . . 593
50.15 Results obtained for the quartic function with noise with function evaluations. . . . 593
50.16 Properties of Schwefel's problem 2.26. . . . . . . . . . . . . . . . . . . . . . . . . . . 595
50.17 Results obtained for Schwefel's problem 2.26 with function evaluations. . . . . . . . 595
50.18 Properties of the generalized Rastrigin's function. . . . . . . . . . . . . . . . . . . . 597
50.19 Results obtained for the generalized Rastrigin's function with function evaluations. 597
50.20 Properties of Ackley's function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
50.21 Results obtained for Ackley's function with function evaluations. . . . . . . . . . . . 600
50.22 Properties of the generalized Griewank function. . . . . . . . . . . . . . . . . . . . . 601
50.23 Results obtained for the generalized Griewank function with function evaluations. . 601
50.24 Properties of the generalized penalized function 1. . . . . . . . . . . . . . . . . . . . 604
50.25 Results obtained for the generalized penalized function 1 with function evaluations. 605
50.26 Properties of the generalized penalized function 2. . . . . . . . . . . . . . . . . . . . 606
50.27 Results obtained for the generalized penalized function 2 with function evaluations. 607
50.28 Properties of Levy's function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608
50.29 Results obtained for Levy's function with function evaluations. . . . . . . . . . . . . 608
50.30 Properties of the shifted sphere function. . . . . . . . . . . . . . . . . . . . . . . . . 610
50.31 Results obtained for the shifted sphere function with function evaluations. . . . . . 610
50.32 Properties of the shifted and rotated elliptic function. . . . . . . . . . . . . . . . . . 611
50.33 Results obtained for the shifted and rotated elliptic function with function evaluations. 611
50.34 Properties of shifted Schwefel's problem 2.21. . . . . . . . . . . . . . . . . . . . . . 612
50.35 Properties of Schwefel's modified problem 2.21 with optimum on the bounds. . . . . 613
50.36 Results obtained for Schwefel's modified problem 2.21 with optimum on the bounds
with function evaluations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613
50.37 Properties of the shifted Rosenbrock's function. . . . . . . . . . . . . . . . . . . . . 614
50.38 Results obtained for the shifted Rosenbrock's function with function evaluations. . 614
50.39 Properties of shifted Ackley's function. . . . . . . . . . . . . . . . . . . . . . . . . . 616
50.40 Properties of the shifted and rotated Ackley's function. . . . . . . . . . . . . . . . . 617
50.41 Results obtained for the shifted and rotated Ackley's function with function evaluations. 617
50.42 Properties of the shifted Rastrigin's function. . . . . . . . . . . . . . . . . . . . . . . 618
50.43 Results obtained for the shifted Rastrigin's function with function evaluations. . . . 618
50.44 Properties of the shifted and rotated Rastrigin's function. . . . . . . . . . . . . . . . 619
50.45 Results obtained for the shifted and rotated Rastrigin's function with function evaluations. 619
50.46 Properties of Schwefel's problem 2.13. . . . . . . . . . . . . . . . . . . . . . . . . . . 620
50.47 Results obtained for Schwefel's problem 2.13 with function evaluations. . . . . . . . 620
50.48 Properties of the shifted Griewank function. . . . . . . . . . . . . . . . . . . . . . . 621
53.1 Special Quantiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
53.2 Parameters of the discrete uniform distribution. . . . . . . . . . . . . . . . . . . . . . 669
53.3 Parameters of the Poisson distribution. . . . . . . . . . . . . . . . . . . . . . . . . . . 671
53.4 Parameters of the Binomial distribution. . . . . . . . . . . . . . . . . . . . . . . . . . 674
53.5 Parameters of the continuous uniform distribution. . . . . . . . . . . . . . . . . . . . 676
53.6 Parameters of the normal distribution. . . . . . . . . . . . . . . . . . . . . . . . . . . 678
53.7 Some values of the standardized normal distribution. . . . . . . . . . . . . . . . . . . 680
53.8 Parameters of the exponential distribution. . . . . . . . . . . . . . . . . . . . . . . . 682
53.9 Parameters of the χ² distribution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 684
53.10 Some values of the χ² distribution. . . . . . . . . . . . . . . . . . . . . . . . . . . . 686
53.11 Parameters of Student's t-distribution. . . . . . . . . . . . . . . . . . . . . . . . . . 687
53.12 Table of Student's t-distribution with right-tail probabilities. . . . . . . . . . . . . . 689
53.13Parameters of the dice throw experiment. . . . . . . . . . . . . . . . . . . . . . . . . 691
53.14Example for paired samples (a, b). . . . . . . . . . . . . . . . . . . . . . . . . . . . . 701
53.15Example for unpaired samples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 702
54.1 The precise developer environment used for implementing and testing. . . . . . . . . 705
List of Algorithms
1.1 x ← optimizeBinPacking(k, b, a) . . . . . . . . . . . . . . . . (see Example E1.1) . . . 40
6.1 Example Iterative Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
8.1 x ← randomSampling(f) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
8.2 x ← randomWalk(f) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
8.3 x ← exhaustiveSearch(f, y) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
26.1 x ← hillClimbing(f) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
26.2 X ← hillClimbingMO(OptProb, as) . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
26.3 X ← hillClimbingMORR(OptProb, as) . . . . . . . . . . . . . . . . . . . . . . . . . 233
27.1 x ← simulatedAnnealing(f) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
28.1 X ← simpleEA(f, ps, mps) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
28.2 X ← simpleMOEA(OptProb, ps, mps) . . . . . . . . . . . . . . . . . . . . . . . . . . 256
28.3 X ← elitistMOEA(OptProb, ps, as, mps) . . . . . . . . . . . . . . . . . . . . . . . . 268
28.4 v ← assignFitnessParetoRank(pop, cmp_f) . . . . . . . . . . . . . . . . . . . . . . . 276
28.5 v ← assignFitnessVariety(pop, cmp_f) . . . . . . . . . . . . . . . . . . . . . . . . . . 280
28.6 v ← assignFitnessTournament(q, r, pop, cmp_f) . . . . . . . . . . . . . . . . . . . . 285
28.7 mate ← truncationSelection_r(k, pop, v, mps) . . . . . . . . . . . . . . . . . . . . . 289
28.8 mate ← truncationSelection_w(mps, pop, v) . . . . . . . . . . . . . . . . . . . . . . 289
28.9 mate ← rouletteWheelSelection_r(pop, v, mps) . . . . . . . . . . . . . . . . . . . . . 292
28.10 mate ← rouletteWheelSelection_w(pop, v, mps) . . . . . . . . . . . . . . . . . . . . 293
28.11 mate ← tournamentSelection_r(k, pop, v, mps) . . . . . . . . . . . . . . . . . . . . 298
28.12 mate ← tournamentSelection_w(k, pop, v, mps) . . . . . . . . . . . . . . . . . . . . 298
28.13 mate ← tournamentSelection_w(k, pop, v, mps) . . . . . . . . . . . . . . . . . . . . 299
28.14 mate ← tournamentSelectionND_r(…, k, pop, v, mps) . . . . . . . . . . . . . . . . 300
28.15 mate ← randomSelection_r(pop, mps) . . . . . . . . . . . . . . . . . . . . . . . . . 302
28.16 v′ ← clearingFilter(pop, …, k, v) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
28.17 P ← scpFilter(pop, c, f) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
28.18 pop ← createPop(ps) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
28.19 pop ← reproducePopEA(mate, ps, mr, cr) . . . . . . . . . . . . . . . . . . . . . . . 308
28.20 P_new ← updateOptimalSet(P_old, p_new, cmp_f) . . . . . . . . . . . . . . . . . 309
28.21 P_new ← updateOptimalSet(P_old, p_new, cmp_f) (2nd Version) . . . . . . . . . 309
28.22 P_new ← updateOptimalSetN(P_old, P, cmp_f) . . . . . . . . . . . . . . . . . . . 310
28.23 P ← extractBest(pop, cmp_f) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
28.24 P_new ← pruneOptimalSet_c(P_old, k) . . . . . . . . . . . . . . . . . . . . . . . . 311
28.25 (P_l, lst, cnt) ← divideAGA(P_old, d) . . . . . . . . . . . . . . . . . . . . . . . . . 313
28.26 P_new ← pruneOptimalSetAGA(P_old, d, k) . . . . . . . . . . . . . . . . . . . . . 314
29.1 g ← createGAFLBin(n) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
29.2 g ← createGAPerm(n) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
29.3 g ← mutateGAFLBinRand(g_p, n) . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
29.4 g ← mutateSinglePerm(n) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
29.5 g ← mutateMultiPerm(g_p) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
29.6 g ← recombineMPX(g_p1, g_p2, k) . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
29.7 g ← recombineUX(g_p1, g_p2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
29.8 x ← gpm_RK(g, S) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
30.1 X ← generalES(f, …, …, …) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
30.2 g′ ← recombineDiscrete(P) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
30.3 g′ ← recombineIntermediate(P) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
30.4 g ← mutate_{G,…}, w = … . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
30.5 g ← mutate_{G,…}, w = … . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
30.6 g ← mutate_{G,M}, w = M . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
30.7 x ← (1+1)-ES_{1/5}(f, L, a, σ_0) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
30.8 mutate_{W,…}(…) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
31.1 g ← createGPFull(d, N, …) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
31.2 g ← createGPGrow(d, N, …) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
31.3 g ← createGPRamped(d, N, …) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
31.4 t_c ← selectNode_uni(t_r) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
31.5 g ← mutateGP(g_p) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
31.6 g ← recombineGP(g_p1, g_p2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
34.1 x ← PlainEDA(f, ps, mps) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
34.2 X ← EDA-MO(OptProb, ps, mps) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
34.3 x ← UMDA(f, ps, mps) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
34.4 pop ← sampleModelUMDA(M, ps) . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
34.5 M ← buildModelUMDA(mate) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
34.6 x ← PBIL(f, ps, mps) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
34.7 M ← buildModelPBIL(mate, M) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
34.8 x ← cGA(f, ps) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
34.9 M ← buildModelCGA(mate, M, ps) . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
34.10 (true, false) ← terminationCriterionCGA(M) . . . . . . . . . . . . . . . . . . . . . 440
34.11 x ← transformModelCompGA(M) . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
34.12 x ← SHCLVND(f, ps, mps) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
34.13 pop ← sampleModelSHCLVND(M, ps) . . . . . . . . . . . . . . . . . . . . . . . . . 443
34.14 M ← buildModelSHCLVND(mate, M) . . . . . . . . . . . . . . . . . . . . . . . . . 443
34.15 x ← RCPBIL(f, ps, mps) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
34.16 M ← sampleModelRCPBIL(mate, M) . . . . . . . . . . . . . . . . . . . . . . . . . 445
34.17 M ← buildModelRCPBIL(mate, M) . . . . . . . . . . . . . . . . . . . . . . . . . . 446
34.18 x ← PIPE(f, ps) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
34.19 N ← newModelNodePIPE(N, …, P) . . . . . . . . . . . . . . . . . . . . . . . . . . 448
34.20 pop ← sampleModelPIPE(M, ps) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 449
34.21 (N, t) ← buildTreePIPE(…) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
34.22 M ← buildModelPIPE(mate, M, x) . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
34.23 N ← increasePropPIPE(…, t) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
34.24 N ← pruneTreePIPE(…) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
39.1 x ← PSO(f, ps) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
46.1 P ← expandP(g) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
46.2 X ← BFS(r, isGoal) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
46.3 X ← DFS(r, isGoal) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
46.4 X ← DLS(r, isGoal, d) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
46.5 x ← IDDFS(r, isGoal) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
46.6 X ← greedySearch(r, isGoal, …) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
50.1 r ← f_lp(x) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
50.2 r ← buildRPermutation(…) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 572
50.3 translate(…) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575
List of Listings
1 The CVS command for anonymously checking out the complete book sources. . . . . 4
17.1 A program computing the greatest common divisor. . . . . . . . . . . . . . . . . . . 180
31.1 A genotype of an individual in Brameier and Banzhaf's LGP system. . . . . . . . . . 407
31.2 The data for Task 91 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
50.1 An example Royal Road function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
50.2 A Java code snippet computing the sphere function. . . . . . . . . . . . . . . . . . . 581
50.3 A Java code snippet computing Schwefel's problem 2.22. . . . . . . 583
50.4 A Java code snippet computing Schwefel's problem 1.2. . . . . . . 585
50.5 A Java code snippet computing Schwefel's problem 2.21. . . . . . . 587
50.6 A Java code snippet computing the generalized Rosenbrock's function. . . . . . . 589
50.7 A Java code snippet computing the step function. . . . . . . 591
50.8 A Java code snippet computing the quartic function with noise. . . . . . . 593
50.9 A Java code snippet computing Schwefel's problem 2.26 function. . . . . . . 595
50.10 A Java code snippet computing the generalized Rastrigin's function. . . . . . . 597
50.11 A Java code snippet computing Ackley's function. . . . . . . 599
50.12 A Java code snippet computing the generalized Griewank function. . . . . . . 601
50.13 A Java code snippet computing the generalized penalized function 1. . . . . . . 604
50.14 A Java code snippet computing the generalized penalized function 2. . . . . . . 606
50.15 A Java code snippet computing the shifted sphere function. . . . . . . 610
50.16 A Java code snippet computing shifted Schwefel's problem 2.21. . . . . . . 612
50.17 A Java code snippet computing the shifted Rosenbrock's function. . . . . . . 614
50.18 A Java code snippet computing shifted Ackley's function. . . . . . . 616
50.19 A Java code snippet computing the shifted Rastrigin's function. . . . . . . 618
50.20 A Java code snippet computing the shifted Griewank function. . . . . . . 621
50.21 The location data of the first example problem instance. . . . . . . 626
50.22 The order data of the first example problem instance. . . . . . . 627
50.23 The truck data of the first example problem instance. . . . . . . 628
50.24 The container data of the first example problem instance. . . . . . . 629
50.25 The driver data of the first example problem instance. . . . . . . 629
50.26 The distance data of the first example problem instance. . . . . . . 631
50.27 The move data of the solution to the first example problem instance. . . . . . . 633
50.28 An example output of the features of an example plan (with errors). . . . . . . 634
55.1 The basic interface for everything involved in optimization. . . . . . . . . . . . . . . 707
55.2 The interface for all nullary search operations. . . . . . . . . . . . . . . . . . . . . . . 708
55.3 The interface for all unary search operations. . . . . . . . . . . . . . . . . . . . . . . 708
55.4 The interface for all binary search operations. . . . . . . . . . . . . . . . . . . . . . . 709
55.5 The interface for all ternary search operations. . . . . . . . . . . . . . . . . . . . . . 710
55.6 The interface for all n-ary search operations. . . . . . . 711
55.7 The interface common to all genotype-phenotype mappings. . . . . . . . . . . . . . . 711
55.8 The interface for objective functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . 712
55.9 The interface for termination criteria. . . . . . . . . . . . . . . . . . . . . . . . . . . 712
55.10 An interface to all single-objective optimization algorithms. . . . . . . 713
55.11 An interface to all multi-objective optimization algorithms. . . . . . . 716
55.12 The interface for temperature schedules. . . . . . . 717
55.13 The interface for selection algorithms. . . . . . . 717
55.14 The interface for fitness assignment processes. . . . . . . 718
55.15 The interface for comparator functions. . . . . . . 719
55.16 The interface for model building and updating. . . . . . . 719
55.17 The interface for sampling points from a model. . . . . . . 720
56.1 The base class for everything involved in optimization. . . . . . . . . . . . . . . . . . 723
56.2 A base class for all single-objective optimization methods. . . . . . . . . . . . . . . . 724
56.3 A base class for all multi-objective optimization methods. . . . . . . . . . . . . . . . 728
56.4 The random sampling algorithm randomSampling given in Algorithm 8.1. . . . . . 729
56.5 The random walk algorithm randomWalk given in Algorithm 8.2. . . . . . . . . . 732
56.6 The Hill Climbing algorithm hillClimbing given in Algorithm 26.1. . . . . . . . . . 735
56.7 The Multi-Objective Hill Climbing algorithm hillClimbingMO given in
Algorithm 26.2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 737
56.8 The Simulated Annealing algorithm simulatedAnnealing given in Algorithm 27.1. 740
56.9 The logarithmic temperature schedule as specified in Equation 27.16. . . . . . . 743
56.10 The logarithmic temperature schedule as specified in Equation 27.17. . . . . . . 744
56.11 A base class providing the features of Algorithm 28.1 in a generational way (see Section 28.1.4.1). . . . . . . 746
56.12 A base class providing the features of Algorithm 28.2 in a generational way. . . . . . . 750
56.13 The truncation selection method given in Algorithm 28.8 on page 289. . . . . . . 754
56.14 The tournament selection method given in Algorithm 28.11 on page 298. . . . . . . 756
56.15 The random selection method given in Algorithm 28.15 on page 302. . . . . . . 758
56.16 Pareto ranking fitness assignment. . . . . . . 759
56.17 An example implementation of the Evolution Strategy Algorithm 30.1 on page 361. . . . . . . 760
56.18 An individual record also holding endogenous information. . . . . . . 770
56.19 An example implementation of the Differential Evolution algorithm which uses ternary crossover. . . . . . . 771
56.20 The Estimation of Distribution Algorithm PlainEDA given in Algorithm 34.1. . . . . . . 775
56.21 The creator for the initial model to be used in UMDA. . . . . . . 781
56.22 The model building method used in UMDA. . . . . . . 782
56.23 Sample points from the UMDA model. . . . . . . 783
56.24 Create uniformly distributed real vectors. . . . . . . 784
56.25 Modify a real vector by adding normally distributed random numbers with a given standard deviation. . . . . . . 786
56.26 A strategy mutation operator for Evolution Strategy. . . . . . . 787
56.27 Modify a real vector by adding uniformly distributed random numbers. . . . . . . 789
56.28 Modify a real vector by adding normally distributed random numbers. . . . . . . 791
56.29 A weighted average crossover operator. . . . . . . 792
56.30 The binomial crossover operator from Differential Evolution. . . . . . . 794
56.31 The exponential crossover operator from Differential Evolution. . . . . . . 796
56.32 The dominant recombination operator defined in Section 30.3.1. . . . . . . 798
56.33 The intermediate recombination operator defined in Section 30.3.2. . . . . . . 799
56.34 A uniform random string generator. . . . . . . 801
56.35 A single bit-flip mutator. . . . . . . 802
56.36 The uniform crossover operator. . . . . . . 803
56.37 Create uniformly distributed permutations. . . . . . . 804
56.38 Modify a permutation by swapping two genes. . . . . . . 806
56.39 Modify a permutation by repeatedly swapping two genes. . . . . . . 808
56.40 The base class for search operations for trees. . . . . . . 809
56.41 A utility class to construct random paths in a tree, i.e., to select random nodes. . . . . . . 811
56.42 The ramped-half-and-half creation procedure. . . . . . . 814
56.43 A simple sub-tree replacement mutator. . . . . . . 815
56.44 A binary search operation for trees. . . . . . . 817
56.45 A multiplexing nullary search operation. . . . . . . 818
56.46 A multiplexing unary search operation. . . . . . . 820
56.47 A multiplexing binary search operation. . . . . . . 822
56.48 A base class for tree nodes. . . . . . . 824
56.49 A type description for tree nodes. . . . . . . 827
56.50 A set of tree node types. . . . . . . 829
56.51 The identity mapping for cases where G = X. . . . . . . 833
56.52 The Random Keys genotype-phenotype mapping introduced in Algorithm 29.8. . . . . . . 834
56.53 The step-limit termination criterion. . . . . . . 836
56.54 A comparator realizing the lexicographic relationship. . . . . . . 837
56.55 A comparator realizing the Pareto dominance relationship. . . . . . . 838
56.56 The individual record. . . . . . . 839
56.57 The individual record for multi-objective optimization. . . . . . . 841
56.58 A base class for archiving non-dominated individuals. . . . . . . 843
56.59 Some constants useful for optimization. . . . . . . 850
56.60 A small and handy class for computing some useful statistics. . . . . . . 851
56.61 Some simple utilities for converting objects to strings. . . . . . . 854
57.1 The Sphere Function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 859
57.2 f1 as defined in Equation 26.1 on page 238. . . . . . . 860
57.3 f2 as defined in Equation 26.2 on page 238. . . . . . . 861
57.4 f3 as defined in Equation 26.3 on page 238. . . . . . . 862
57.5 Testing the function from Task 64. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 863
57.6 An example output of the Function test program. . . . . . . . . . . . . . . . . . . . . 868
57.7 Solving the Sphere functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 885
57.8 An example output of the Sphere test program. . . . . . . . . . . . . . . . . . . . . . 887
57.9 An Instance of the Bin Packing Problem. . . . . . . . . . . . . . . . . . . . . . . . . 888
57.10 The first objective function f1 from Task 67. . . . . . . 892
57.11 The second objective function f2 from Task 67. . . . . . . 893
57.12 The third objective function f3 from Task 67. . . . . . . 894
57.13 The fourth objective function f4 from Task 67. . . . . . . 896
57.14 The fifth objective function f5 from Task 67. . . . . . . 897
57.15 Solving the Bin Packing problem. . . . . . . 899
57.16 An example output of the Bin Packing test program. . . . . . . 903
57.17 The data of the att48 problem instance. . . . . . . 904
57.18 A single-objective test program for symbolic regression. . . . . . . 905
57.19 A multi-objective test program for symbolic regression. . . . . . . 906
57.20 An objective function for Symbolic Regression. . . . . . . 908
57.21 An objective function based on Listing 57.20, but also including tree-size information. . . . . . . 910
57.22 An objective function for Symbolic Regression. . . . . . . 910
57.23 Some example functions to match with regression. . . . . . . 912
57.24 An instance of this class holds the information of one location. . . . . . . 914
57.25 An instance of this class holds the information of one order. . . . . . . 915
57.26 An instance of this class holds the information of one truck. . . . . . . 918
57.27 An instance of this class holds the information of one container. . . . . . . 920
57.28 An instance of this class holds the information of one driver. . . . . . . 921
57.29 This class holds the complete information about the distances between locations. . . . . . . 923
57.30 An object which holds a transportation move. . . . . . . 925
57.31 An object which can be used to compute the features of a candidate solution of the benchmark problem. . . . . . . 931
