
2009 11th IEEE International Conference on High Performance Computing and Communications

A Coarse-grained Parallel Genetic Algorithm with Migration for Shortest Path Routing Problem
Salman Yussof, Rina Azlin Razali, Ong Hang See, Azimah Abdul Ghapar, Marina Md Din

Abstract— Shortest path routing is the type of routing widely used in computer networks nowadays. Even though shortest path routing algorithms are well established, other alternative methods may have their own advantages. One such alternative is to use a GA-based routing algorithm. Based on previous research, GA-based routing algorithms have been found to be more scalable and insensitive to variations in network topologies. However, it is also known that GA-based routing algorithms are not fast enough for real-time computation. In this paper, we propose a coarse-grained parallel genetic algorithm for solving the shortest path routing problem with the aim of reducing its computation time. The migration scheme, which is commonly used in coarse-grained parallel genetic algorithms, is also employed in the proposed algorithm. The algorithm is developed and run on an MPI cluster. This paper studies the effect of migration on the proposed algorithm and the performance of the algorithm as compared to its serial counterpart.

Keywords - Coarse grained, message passing interface, parallel genetic algorithm, shortest path routing.

I. INTRODUCTION

Routing in a computer network refers to the task of finding a path from a source node to a destination node. Given a particular network, it is very likely that there is more than one path that can be used. The task of a routing algorithm is to find the shortest path. Shortest path routing algorithms such as Dijkstra's algorithm and the Bellman-Ford algorithm are commonly used in computer networks nowadays [1].

Even though shortest path routing algorithms are already well established, researchers are trying to find alternative methods for finding shortest paths through a network. One such alternative is to use a genetic algorithm (GA). GA is a multi-purpose search and optimization algorithm inspired by the theory of genetics and natural selection [2]. The problem to be solved using GA is encoded as a chromosome that consists of several genes. The solution of the problem is represented by a group of chromosomes referred to as a population. In each iteration of the algorithm, the chromosomes in the population undergo one or more genetic operations such as crossover and mutation. The result of the genetic operations becomes the next generation of solutions. This process continues until either the solution is found or a certain termination condition is met. The idea behind GA is to have the chromosomes in the population slowly converge to an optimal solution. At the same time, the algorithm is supposed to maintain enough diversity so that it can search a large search space. It is the combination of these two characteristics that makes GA a good search and optimization algorithm.

One of the earliest GA-based shortest path routing algorithms is the one proposed by Munetomo et al. [3], [4]. Munetomo proposed a GA-based routing algorithm to generate alternate paths that can be quickly used in the case of link failures. In the proposed algorithm, the chromosome is encoded as a list of node IDs that are on the path from the source node to the destination node. Since different paths can have different numbers of nodes, the chromosomes are of variable length. This algorithm employs crossover, mutation and migration genetic operators in generating the next generation of solutions. Chang et al. [5] also proposed a GA-based routing algorithm for solving the shortest path routing problem. Similar to Munetomo's algorithm, the chromosome in this algorithm consists of a sequence of node IDs that are on the path from source to destination. However, there are several differences in the details of the GA implementation, such as in the crossover and mutation operations. Some researchers implemented a hybrid GA, where GA is combined with another algorithm to solve the shortest path routing problem. One example of this is the algorithm proposed by Hamdan et al. [6], who combined GA with the Hopfield network. Another example is the algorithm proposed by Riedl [7], who combined GA with a local heuristic search.

Chang et al. [5] have shown that using GA for the shortest path routing problem has several advantages. The first advantage is that GA is insensitive to variations in network topologies with respect to route optimality and convergence speed. The second advantage is that GA-based routing is scalable in the sense that the real computation size does not increase very much as the network size gets larger. However, this literature also pointed out that GA is not fast enough for real-time computation and that, in order to achieve a really fast computation time with GA, a hardware implementation of GA is required.

In this paper, we propose a parallel genetic algorithm for the shortest path routing problem. The motivation behind this proposal is that a parallel implementation of GA should be able to improve its computation time. The proposed algorithm is implemented on a message passing interface (MPI) cluster.

II. PARALLEL GENETIC ALGORITHM

GA is generally able to find good solutions in a reasonable amount of time, but as it is applied to harder and bigger problems, there is an increase in the time required to find

978-0-7695-3738-2/09 $25.00 © 2009 IEEE
DOI 10.1109/HPCC.2009.25
adequate solutions. To tackle this problem, there have been multiple efforts to make GA faster, and one of the promising choices is to use a parallel implementation [8].

In parallel GA (PGA), there are multiple computing nodes. The task of each computing node depends on the type of parallel GA used. There are four major types of parallel GAs: (1) master-slave GA, (2) coarse-grained GA, (3) fine-grained GA and (4) hierarchical hybrids. In master-slave GA, one computing node becomes the master and the other computing nodes become the slaves. The master node holds the population and performs most of the GA operations. However, the master can assign one or more computation-intensive tasks (e.g. fitness evaluation or crossover) to the slaves. This is done by sending one or more chromosomes to the slaves; the master then waits for the slaves to return their results. In coarse-grained GA, the population is divided among the computing nodes. Each computing node then has a sub-population, and each node executes GA on its own sub-population. To ensure that good solutions can spread to other nodes, the nodes occasionally (with a certain probability) exchange chromosomes with each other. This exchange is called migration, and it involves a node sending one of its chromosomes to other nodes. The other nodes then replace one of their chromosomes with the one received. In fine-grained GA, each computing node holds only a single chromosome. The computing nodes are normally arranged in a spatial structure where each node can only communicate with several neighboring nodes. The population is the collection of all the chromosomes in all the nodes. To execute a genetic operation, a computing node has to interact with its neighbors. Since the neighborhoods overlap, the good traits of a superior individual can eventually spread to the entire population. Fine-grained GA has the highest level of parallelism among the four types of parallel GAs, but it also has a large communication overhead due to the high frequency of interactions between neighboring nodes. The final PGA type is the hierarchical hybrid, which is structured in two levels. At the higher level, the algorithm operates as a coarse-grained GA, while at the lower level it operates as a fine-grained (or master-slave) GA.

For each parallel GA type, there are multiple variations proposed by researchers to improve its performance or to suit a particular problem. For example, Golub et al. [9] proposed a master-slave GA where the master only creates the population and lets the slaves perform the whole evolution process. Ling Tan et al. [10] proposed a coarse-grained GA which has a special computing node assigned to collect the best chromosomes from all the nodes and distribute one or more of the fittest ones to the other nodes. This is done to reduce the delays in propagating the globally fittest chromosome to all the computing nodes. Toro et al. [11] also proposed a modification to the coarse-grained GA where a master node is assigned to gather the whole population from all the computing nodes once every several generations. The master node then sorts the chromosomes according to some objective function and distributes them again to the computing nodes.

Parallel GA has been successfully used on various problems such as design optimization [12], transport routing [13], time series forecasting [14], network design [15] and sorting [16].

III. PROPOSED PARALLEL GENETIC ALGORITHM FOR SHORTEST PATH ROUTING

A. Overview

For the proposed algorithm, the coarse-grained GA has been chosen. The main reason for this choice is the use of an MPI cluster. In an MPI environment, communication between computers has a large overhead. Therefore, coarse-grained GA is the most suitable type of PGA to use, since it has the lowest communication overhead compared to the other PGA types.

In our PGA implementation, all the computing nodes randomly create their own sub-populations and each of them executes GA on its own sub-population. However, one of the computing nodes is assigned a special task: to gather results from all the other nodes and then choose the best result (the one that gives the shortest path) to be the output of the PGA. This node is called the collector node.

The operations done by the computing nodes are outlined below:
1. Randomly initialize the initial sub-population. Sub-population refers to the population in each computing node.
2. Evaluate the fitness of each chromosome in the sub-population.
3. Create the mating pool, which consists of all the chromosomes in the current sub-population.
4. Apply the crossover operator several times to create n new children for the new sub-population (where n is the size of the sub-population). The parents are selected using the selection operator. Crossover is only performed if one or both of the parents have not yet mated.
5. Apply the mutation operator on the chromosomes in the mating pool. Each chromosome has a certain probability of being mutated.
6. With a certain probability (as defined by the migration rate), send the best chromosome to all the other computing nodes.
7. Check whether any chromosome has migrated from the other computing nodes. For each chromosome received, replace the worst chromosome in the population with the migrated chromosome if the migrated chromosome has better fitness.
8. Repeat from step 2 until the sub-population converges or the maximum number of iterations has been reached. The value for the maximum number of iterations used in all the experiments is 100.
9. Send the best chromosome to the collector node.

The operations done by the collector node are outlined below:
1. Receive the best chromosomes from each of the computing nodes.
2. Choose the best chromosome from the ones received. This is the shortest path found by the parallel algorithm.

B. Genetic Representation

A communication network can be modeled as a directed graph G(N,E), where N is the set of nodes representing routers and E is the set of edges representing the links that connect the routers [1]. Each edge (i,j) is associated with an integer representing the cost of sending data from node i to node j and vice versa.

In the proposed algorithm, each chromosome is encoded as a series of node IDs that are on the path from source to destination. The first gene in the chromosome is always the source and the last gene is always the destination. Since different paths may have different numbers of intermediate nodes, the chromosomes are of variable length. However, the maximum length of a chromosome cannot exceed the total number of nodes in the network. Any repeated node in a chromosome signifies that the path represented by the chromosome contains a loop, and in network routing any loop should be eliminated.

C. Initial Sub-population

In the beginning, the sub-population is filled with chromosomes that represent random paths. Even though the paths are random, they must be valid paths, where the chromosomes consist of a sequence of nodes that are on the path from sender to receiver. The number of chromosomes generated for each sub-population, S_n, depends on the total population size and the number of computing nodes, as depicted in Eq. 1:

S_n = P / N    (1)

where S_n represents the sub-population size of the nth node, P represents the total population size and N represents the total number of computing nodes.

The algorithm used to generate the random paths is adapted from Chang [5]. The algorithm is as follows:
1. Start from the source node.
2. Randomly choose, with equal probability, one of the nodes that are connected to the current node.
3. If the chosen node has not been visited before, mark that node as the next node in the path. Otherwise, find another node.
4. If all the neighboring nodes have been visited, go back to step 1.
5. Otherwise, repeat step 2 using the chosen node as the current node.
6. Do this until the destination node is found.

D. Fitness Function

Each chromosome in the population is associated with a fitness value that is calculated using a fitness function. This value indicates how good the solution represented by a particular chromosome is [17]. This information is then used to pick the chromosomes that will contribute to the formation of the next generation of solutions. The fitness function used in the proposed algorithm is defined as follows:

f_i = 1 / c_i    (2)

where f_i represents the fitness value of the ith chromosome and c_i represents the total cost of the path represented by the ith chromosome. This gives a higher fitness value to shorter paths.

E. Selection

In order to generate the next generation of solutions, a mating pool that consists of the members of the current population is created. The chromosomes in the mating pool are then subjected to genetic operations such as crossover and mutation. Parents for the crossover operation are selected using a selection operator. The idea behind the selection operator is to give chromosomes with better fitness values a higher chance to reproduce through the crossover operation. Parents with good fitness values are expected to produce children with even better fitness values. According to [17], there are several categories of selection schemes, among them proportional selection, rank-based selection and tournament selection. Within each category, there are several variations.

For the proposed algorithm, the selection scheme used is pairwise tournament selection with tournament size s = 2. In this scheme, a parent for the crossover operation is selected by randomly choosing two chromosomes from the population. The one with the higher fitness of the two is selected as a parent. To select two parents, this operation is performed twice.

F. Crossover

The first genetic operation applied to the chromosomes in the mating pool is crossover. The idea behind crossover is to create an information exchange between two chromosomes [17]. By doing so, the algorithm explores new paths and will hopefully find better paths in the process.

In order to perform crossover, two chromosomes from the mating pool are selected using the selection operator. These two chromosomes become the parent chromosomes. To ensure that the paths generated by the crossover operation are still valid paths, the two chromosomes selected must have at least one common node other than the sending and receiving nodes. If more than one common node exists, one of them is randomly chosen with equal probability. The chosen node is called the crossover point. The actual crossover operation is similar to the one described in [3] – [5]. For example, assume that we have the following parent chromosomes:

Parent chromosome 1 = [A B C G H I X Y Z]
Parent chromosome 2 = [A K L M I T U Z]

where A and Z are the source node and destination node respectively. In this example, the common node is node I. Therefore, the crossover operation will exchange the first portion of chromosome 1 with the second portion of chromosome 2 and vice versa. As a result, the following child chromosomes will be generated:

Child chromosome 1 = [A B C G H I T U Z]
Child chromosome 2 = [A K L M I X Y Z]

These two chromosomes then become new members of the population.

G. Mutation

The objective of the mutation operation is to create diversity in the population [17]. Mutation allows the algorithm to search areas of the search space that were previously unexplored. Each chromosome produced by the crossover operation has a small chance of being mutated, based on the mutation probability p_m. For all the experiments, the value of p_m is set to 0.05.

The mutation operation used in the proposed algorithm is similar to the one used in [5]. For each chromosome that is chosen to be mutated, a mutation point is chosen randomly, with equal probability, among the intermediate nodes on the path from sender to receiver (i.e. the sending and receiving nodes cannot be chosen as the mutation point). Once the mutation point is chosen, the chromosome is changed starting from the node after the mutation point onwards. For example, assume that the following chromosome has been chosen to be mutated:

Original chromosome = [A C E F G H I Y Z]

where A and Z are the sending node and the receiving node respectively. Assume also that node G has been chosen as the mutation point. The mutated chromosome becomes:

Mutated chromosome = [A C E F G x1 x2 x3 … Z]

The mutated chromosome now contains a new path from G to Z, where xi is the ith new node on the path. The new path is generated randomly, in the same way as the paths in the initial population.

H. Migration

Migration is a genetic operation commonly used in coarse-grained parallel GA implementations [8]. In coarse-grained PGA, each computing node has its own sub-population that evolves independently and in isolation. Compared to serial GA, this results in low diversity of the population because different sub-populations do not interact with each other. Migration is an operation that can be used to increase sub-population diversity by having the computing nodes share their results with each other. Migration involves each computing node sending one of its chromosomes to the other nodes. At the same time, each computing node receives migrated chromosomes from the other nodes. A received chromosome can replace one of the chromosomes currently in the sub-population. The migration rate is controlled by the migration probability, p_mg.

There are several different migration strategies, as discussed in [8]. A computing node can either send its best chromosome or a random chromosome from its sub-population. The receiving node, on the other hand, can either replace its worst chromosome or any random chromosome in its sub-population. For the proposed algorithm, the migration strategy used is to send the best chromosome and replace the worst chromosome.

IV. EXPERIMENT AND DISCUSSION

A. Experiment Setup

The proposed algorithm was implemented as a C++ program. The program is run on an MPI cluster of six machines. Each machine has a dual-core processor, and a computing node is associated with a single processing core. Therefore, up to 12 computing nodes can be run on this parallel computer. In all the simulations conducted for this paper, the algorithm was run using ten computing nodes and one collector node.

The objective of this experiment is to evaluate the performance of the proposed algorithm as compared to the non-parallel implementation of the same algorithm. The performance is evaluated based on result accuracy and execution time. Accuracy measures the percentage of the shortest paths returned by the algorithm that are actually shortest paths (as obtained from Dijkstra's algorithm). Execution time measures the time taken by the algorithm to obtain all the results from the beginning till the end of the simulation. The experiment also studies the effect of the migration rate on the proposed algorithm's performance.

Two types of network are used in the simulation: the n x n mesh network and the Waxman network [18]. The Waxman network is a random graph where the existence of a link between two nodes i and j is defined by the following probability:

p_ij = α exp(−d(i,j) / (βL)),  0 < α, β < 1    (3)

where d(i,j) is the distance between the two nodes and L is the maximum inter-nodal distance in the topology. A larger value of α generates a graph with higher density, and a smaller value of β increases the density of short edges relative to longer ones. In all the experiments, the values of α and β are set to 0.2 and 0.1 respectively. However, to avoid having disconnected nodes, each node must be connected to at least one other node. The network topologies used are the 10x10 mesh network, the 15x15 mesh network, the 100-node Waxman network and the 225-node Waxman network. Each link in the network is given a randomly generated cost value, c_k(i,j) ~ uniform[1,20].

For each network topology, the algorithm was run with three different population sizes: 4000, 5000 and 6000. These sizes refer to the size of the whole population. Since there are ten computing nodes, the sub-population sizes are 400, 500 and 600 respectively.

The results reported in the next section are averaged over 20 runs. For each run, a new network with a new set of link metrics is randomly generated using different seeds. For the Waxman network, this also means that a different network topology is generated on each run. In each run, a total of 500 source-destination pairs are randomly chosen and the shortest path for each of them is calculated.
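To make Eq. 3 concrete, a Waxman topology of the kind used in these experiments can be generated roughly as follows. This is a hedged sketch, not the authors' code: the unit-square node placement, the adjacency-matrix representation, the use of `rand()`, and the rule for attaching an isolated node to its successor are all our assumptions; the paper only specifies Eq. 3, α = 0.2, β = 0.1, and that no node may be left disconnected.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <vector>

struct Point { double x, y; };

static double dist(const Point& a, const Point& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// Generate a Waxman random graph: link (i,j) exists with probability
// p_ij = alpha * exp(-d(i,j) / (beta * L))   (Eq. 3)
// where L is the maximum inter-nodal distance. Returns a symmetric
// adjacency matrix for n nodes placed uniformly in the unit square.
std::vector<std::vector<int>> waxman(int n, double alpha, double beta,
                                     unsigned seed) {
    std::srand(seed);
    std::vector<Point> p(n);
    for (auto& pt : p) {
        pt.x = std::rand() / (double)RAND_MAX;
        pt.y = std::rand() / (double)RAND_MAX;
    }
    // L: maximum inter-nodal distance in the topology.
    double L = 0.0;
    for (int i = 0; i < n; ++i)
        for (int j = i + 1; j < n; ++j)
            L = std::max(L, dist(p[i], p[j]));

    std::vector<std::vector<int>> adj(n, std::vector<int>(n, 0));
    for (int i = 0; i < n; ++i) {
        for (int j = i + 1; j < n; ++j) {
            double pij = alpha * std::exp(-dist(p[i], p[j]) / (beta * L));
            if (std::rand() / (double)RAND_MAX < pij)
                adj[i][j] = adj[j][i] = 1;
        }
        // As in the paper, avoid disconnected nodes by forcing at least
        // one link per node. Attaching an isolated node to the next node
        // is our assumption; the paper does not say how this is done.
        bool isolated = true;
        for (int j = 0; j < n; ++j)
            if (adj[i][j]) { isolated = false; break; }
        if (isolated) { int j = (i + 1) % n; adj[i][j] = adj[j][i] = 1; }
    }
    return adj;
}
```

Note that forcing a minimum degree of one, as the paper describes, guarantees no isolated node but not a fully connected graph; link costs (uniform[1,20] in the experiments) would be assigned in a separate pass.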
B. Experimental Result

The first experiment studies the effect of migration on the accuracy and execution time of the proposed algorithm. Figures 1 to 4 present the accuracy with respect to the migration rate for the four network topologies used. For the mesh networks, having migration does not necessarily result in better accuracy: having no migration (p_mg = 0) appears to be better than having a high migration rate. However, the best result is achieved by using a low migration rate of 0.1 or 0.2. For the Waxman networks, having a non-zero migration rate is definitely better than having no migration at all. Again, the best result is achieved with a low migration rate of 0.1 or 0.2.

Fig. 1 Accuracy for 10x10 mesh network (accuracy vs. migration rate for population sizes 4000, 5000 and 6000)
Fig. 2 Accuracy for 15x15 mesh network
Fig. 3 Accuracy for 100-node Waxman network
Fig. 4 Accuracy for 225-node Waxman network

Figures 5 to 8 present the execution time for the four network topologies used. Theoretically, with a higher migration rate, the execution time should increase due to the increased communication overhead between the computing nodes. For the 10x10 mesh, 15x15 mesh and 100-node Waxman topologies this seems to be the case: the execution time increases with a higher migration rate. But for the 225-node Waxman network, the trend is the opposite: as the migration rate increases, the execution time decreases. The 225-node Waxman network also gives quite low accuracy (only around 60% to 70%) compared to the other network topologies (over 90%). This suggests that for large Waxman networks the proposed algorithm can prematurely converge to a non-optimal solution due to the small number of possible paths available to the same destination. Even migration does not help in this case: because of the small number of feasible paths, the chromosomes sent between computing nodes will always be the same. With a higher migration rate, more similar chromosomes are sent, and this contributes to faster premature convergence and thus a smaller execution time.

Fig. 5 Execution time for 10x10 mesh network (execution time in seconds vs. migration rate for population sizes 4000, 5000 and 6000)
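For reference, the send-best/replace-worst migration rule whose effect is analyzed above (Section III-H) reduces to a few lines. The sketch below is ours, not the paper's: chromosomes are represented only by their fitness values (f = 1/c, Eq. 2, so higher is better), and the function name is an assumption; the real implementation exchanges encoded paths between MPI processes.

```cpp
#include <algorithm>
#include <vector>

// One migration event between two sub-populations, reduced to fitness
// values: the sender emits its best chromosome, and the receiver
// replaces its worst chromosome only if the migrant is fitter.
void migrate(const std::vector<double>& sender,
             std::vector<double>& receiver) {
    double best = *std::max_element(sender.begin(), sender.end());
    auto worst = std::min_element(receiver.begin(), receiver.end());
    if (best > *worst)   // accept only an improving migrant
        *worst = best;
}
```

When every node's best chromosome is the same, as conjectured above for the large Waxman topologies, this rule only overwrites each receiver's worst individual with an identical migrant, which is consistent with the faster premature convergence observed.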
The second experiment evaluates the performance of the proposed algorithm as compared to the serial version of the same algorithm. This is done by running the serial version of the algorithm several times, each time with a different population size. The population size used varies from 1000 to 5000, with an increment of 500. The result is then compared with the proposed PGA with a total population size of 4000 and 10 computing nodes (each computing node then has a sub-population of 400). The migration rate used for this experiment is 0.1. Table I shows the computation time required by the proposed PGA (T_PGA) and the computation time of its serial counterpart (T_GA) to get relatively the same accuracy. Based on the results, the computation time of PGA is only around 12% to 18% of the computation time required by the serial version of the algorithm. The results also show that the reduction in computation time is more apparent for smaller networks.

TABLE I: COMPUTATION TIME REQUIRED BY BOTH THE PROPOSED PGA AND ITS SERIAL COUNTERPART TO GET RELATIVELY THE SAME ACCURACY

Network Topology   | T_PGA (seconds) | T_GA (seconds) | T_PGA / T_GA * 100
10x10 mesh         | 2872.73         | 19440.7        | 14.78 %
15x15 mesh         | 5666.97         | 30945          | 18.31 %
100-node Waxman    | 2765.28         | 22623.1        | 12.22 %
225-node Waxman    | 6201.05         | 35739.6        | 17.35 %

Fig. 6 Execution time for 15x15 mesh network
Fig. 7 Execution time for 100-node Waxman network
Fig. 8 Execution time for 225-node Waxman network

V. CONCLUSIONS

This paper proposed a coarse-grained parallel genetic algorithm for solving the shortest path routing problem. The aim is to produce a faster GA-based shortest path routing algorithm. Several experiments were performed to study the effect of the migration rate on the proposed algorithm and to evaluate the performance of the algorithm as compared to its serial counterpart. Based on the results obtained, a low migration rate of around 0.1 to 0.2 gives the best results. The results also show that, compared to the serial implementation of the same algorithm, the proposed algorithm achieves a greatly reduced computation time.

REFERENCES
[1] J. F. Kurose, K. W. Ross, Computer Networking: A Top-down Approach, Addison-Wesley, 4th Edition, 2007.
[2] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, 1989.
[3] M. Munetomo, Y. Takai, Y. Sato, "A migration scheme for the genetic adaptive routing algorithm", in Proc. IEEE International Conference on Systems, Man and Cybernetics, vol. 3, pp. 2774 – 2779, October 1998.
[4] M. Munetomo, N. Yamaguchi, K. Akama, Y. Sato, "Empirical investigations on the genetic adaptive routing algorithm in the Internet", in Proc. Congress on Evolutionary Computation, vol. 2, pp. 1236 – 1243, May 2001.
[5] Chang Wook Ahn, R. S. Ramakrishna, "A genetic algorithm for shortest path routing problem and the sizing of populations", IEEE Transactions on Evolutionary Computation, vol. 6, issue 6, pp. 1 – 7, February 1999.
[6] M. Hamdan, M. E. El-Hawary, "Hopfield-genetic approach for solving the routing problem in computer networks", in Proc. Canadian Conference on Electrical and Computer Engineering, vol. 2, pp. 823 – 827, May 2002.
[7] A. Riedl, "A hybrid genetic algorithm for routing optimization in IP networks utilizing bandwidth and delay metrics", in Proc. IEEE Workshop on IP Operations and Management, pp. 166 – 170, 2002.
[8] E. Cantú-Paz, Efficient and Accurate Parallel Genetic Algorithms, Kluwer Academic Publishers, 2001.
[9] M. Golub, D. Jakobovic, "A new model of global parallel genetic algorithm", in Proc. 22nd International Conference on Information Technology Interfaces, pp. 363 – 368, June 2000.
[10] Ling Tan, D. Taniar, K. A. Smith, "A new parallel genetic algorithm", in Proc. International Symposium on Parallel Architectures, Algorithms and Networks, pp. 284 – 289, May 2002.
[11] F. de Toro, J. Ortega, J. Fernandez, A. Diaz, "PSFGA: a parallel genetic algorithm for multiobjective optimization", in Proc. 10th Euromicro Workshop on Parallel, Distributed and Network-based Processing, pp. 384 – 391, January 2002.
[12] M. M. Atiqullah, "Problem independent parallel genetic algorithm for design optimization", in Proc. International Symposium on Parallel and Distributed Processing, pp. 204 – 211, April 2002.
[13] N. Meghanathan, G. W. Skelton, "Intelligent transport route planning using parallel genetic algorithm and MPI in high performance computing cluster", in Proc. International Conference on Advanced Computing and Communications, pp. 578 – 583, December 2007.
[14] S. E. Eklund, "Time series forecasting using massively parallel genetic programming", in Proc. International Symposium on Parallel and Distributed Processing, April 2003.
[15] Runhe Huang, Jianhua Ma, T. L. Kunii, E. Tsuboi, "Parallel genetic algorithm for communication network design", in Proc. 2nd Aizu International Symposium on Parallel Algorithms/Architecture Synthesis, pp. 370 – 377, March 1997.
[16] Myung-Mook Han, "Applying parallel genetic algorithm for sorting problem", in Proc. IEEE International Conference on Fuzzy Systems, vol. 3, pp. 1796 – 1801, August 1999.
[17] D. Dumitrescu, B. Lazzerini, L. C. Jain, A. Dumitrescu, Evolutionary Computation, CRC Press, 2000.
[18] B. M. Waxman, "Routing for multipoint connections", IEEE Journal on Selected Areas in Communications, vol. 6, no. 9, pp. 1617 – 1622, December 1988.
