
Received: 10 February 2017 Revised: 16 August 2017 Accepted: 7 October 2017

DOI: 10.1002/tal.1449

RESEARCH ARTICLE

Constrained mean-variance mapping optimization for truss optimization problems

Mohamad Aslani1, Parnian Ghasemi2, Amir H. Gandomi3

1 Department of Aerospace and Mechanical Engineering, Iowa State University, Ames, Iowa, USA
2 Department of Civil Engineering, Iowa State University, Ames, Iowa, USA
3 School of Business, Stevens Institute of Technology, Hoboken, New Jersey, USA

Correspondence
Amir H. Gandomi, School of Business, Stevens Institute of Technology, Hoboken, New Jersey 07030, USA.
Email: a.h.gandomi@stevens.edu

Summary
Truss optimization is a complex structural problem that involves geometric and mechanical constraints. In the present study, constrained mean-variance mapping optimization (MVMO) algorithms are introduced for solving truss optimization problems. Single-solution and population-based variants of MVMO are coupled with an adaptive exterior penalty scheme to handle geometric and mechanical constraints. These tools are explained and tuned for weight minimization of trusses with 10 to 200 members and up to 1,200 nonlinear constraints. The results are compared with those obtained from the literature and from a classical genetic algorithm. They show that MVMO has a rapid rate of convergence and that its final solution can clearly outperform those of other algorithms described in the literature. These observations suggest that constrained MVMO is an attractive tool for engineering optimization, particularly for computationally expensive problems in which the rate of convergence and global convergence are important.

KEYWORDS

exterior penalty, mean-variance mapping optimization, stochastic search, truss weight

1 INTRODUCTION

Weight minimization of skeletal structures has historically been used as a benchmark for heuristic and evolutionary optimization algorithms. The objective (representing the weight of the structure) and constraints (i.e., mechanical properties of the structure such as nodal displacement, member stress,[1, 2] or vibration[3, 4]) lead to nonconvex systems with local optima. The set of design variables differs for each problem and includes (but is not limited to) the structure topology,[5-7] its shape,[8, 9] its size,[10, 11] and combinations of these factors.[12] To solve such problems reliably and efficiently, researchers have developed specialized optimization algorithms,[10, 13] as well as hybridized and tuned variants of current techniques.[14-16]
Population-based evolutionary search algorithms such as the genetic algorithm (GA),[17] particle swarm,[18] ant colony,[19] bee colony,[20] big bang-big crunch,[21] krill herd optimization,[22] firefly,[23] and cuckoo search[24] (see Ishibuchi et al. and Gandomi et al.[25, 26] for reviews) have been employed in recent years to solve complex real-world problems. These algorithms utilize a variety of tools to explore favorable regions of the search domain and exploit them further to locate global minima. A change in an algorithm's behavior is accomplished either manually (i.e., by changing the parameters) or, more rigorously, through an adaptive strategy. Even with all these techniques available, nonconvex global optimization remains challenging because irregularities in the objectives, the constraints, or both narrow the margin of success for any specific algorithm: an algorithm that works well for one problem may be unsuitable for another. There has therefore always been a desire to design and implement new ideas in this field.
Borrowing the ideas of selection, mutation, and crossover from evolutionary algorithms and employing a mapping method, Erlich
et al.[27] designed a new optimization algorithm, namely, mean-variance mapping optimization (MVMO). MVMO shares philosophical properties
with population-based stochastic algorithms. For instance, elitism is intrinsic to evolutionary algorithms, and MVMO similarly preserves an archive
of the best points, which implicitly resembles elitism. Moreover, the archive of the n best solutions supplies the mean and variance from which the
mapping function is built and upon which the new offspring (child) essentially depends.
Given the mean, variance, and a parameter called a shape variable, a transformation can be constructed and, as explained later, this transformation
automatically switches the behavior of the algorithm, that is, it will explore and exploit the search domain. Finally, the elitism criteria dictate that
the parent is the first-ranked solution in the archive and a new generation is born following the constructed mapping function. One of the impor-

Struct Design Tall Spec Build. 2018;27:e1449. wileyonlinelibrary.com/journal/tal Copyright © 2017 John Wiley & Sons, Ltd. 1 of 17
https://doi.org/10.1002/tal.1449

tant features of MVMO is its rapid rate of convergence while at the same time carrying out a global search. This feature is especially important for
problems with expensive function evaluations where a moderately good solution is assumed to be acceptable.[28, 29]
MVMO was initially designed to work with a single solution on a normalized domain (i.e., xi ∈ [0, 1]). However, if multiple sessions of MVMO
run simultaneously, a population-based approach is achieved.[30] With extra information available from unfavorable regions, a multiparent crossover
can occur to support further exploration of these regions. This extra effort has been shown to make the algorithm robust and reliable in the sense
of a general optimization algorithm.[30-32]
MVMO was designed for unconstrained problems, and its performance had not previously been tested in a real-world application where multiple
nonlinear constraints exist and the convergence rate is important. For the truss structures considered in this paper, these constraints are
nodal displacements and member stresses, and if a solution point violates any of them, a penalty is applied through an adaptive quadratic function.
Therefore, in this paper, performances of constrained single-solution and population-based MVMO are evaluated to find the global weight mini-
mization of truss structures. Parameters of the algorithm (MVMO and penalty function) are tuned, and the results are presented for a variety of
population sizes (linearly proportional to the number of variables).
The remainder of this paper is organized as follows: In Section 2, two variants of MVMO (single-solution and population-based) are presented,
followed by an adaptive penalty function to handle constraints. In Section 3, the weight minimization problem of truss structures is formally defined,
and finally, Section 4 describes the results for a variety of cases.

2 MEAN-VARIANCE MAPPING OPTIMIZATION

2.1 Single-solution MVMO


In contrast to many evolutionary algorithms that utilize a population of points, MVMO was originally introduced as a single-solution algorithm. The
internal search range of all variables is restricted to xi ∈ [0, 1]; therefore, the desired range for a general problem is scaled to unit length.
The distinctive property of MVMO is its ability to carry out a global search using the best-so-far solutions. Figure 1a depicts the properties of
such a search for two variables, in which red circles indicate the newly generated points. For a conservative search, most of the points concentrate
around the mean value, although several points probe a broader region. For an exploratory search, fewer points are concentrated
around the mean, as shown in Figure 1b. These types of search are controlled by the values chosen for the shape variables.
A particular form of mapping function, which depends on the mean value and shape variables, is central to understanding the mechanism of MVMO.
The mapping function is used to transfer a random point in [0, 1] to a point x_i, or M : x_rand → x_i. This mapping occurs in such a fashion that the new point
is located in the vicinity of the mean value with specific criteria (see Figure 2a). The mean value (x̄_i) and shape variable (s_i) are calculated using the
archive of the best points with size n:

    x̄_i = (1/n) Σ_{j=1}^{n} x_i^{(j)},
    s_i = −f_s ln(v_i),                                                  (1)

where v_i is the variance of the data in the set (for each variable), or

    v_i = (1/n) Σ_{j=1}^{n} (x_i^{(j)} − x̄_i)²,                          (2)

where f_s > 0 is a coefficient and (j) indexes the points in the archive. The value s_i is used to construct the vector of shape variables s_i = (s_1i, s_2i) for the ith
variable in the problem; the algorithm for finding this vector is explained later. We also note that because v_i is less than one on the normalized domain, s_i is always positive.
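As a concrete illustration, the archive statistics of Equations 1 and 2 can be sketched in Python (a minimal sketch; the function name `archive_stats` and the list-of-lists archive layout are my own assumptions, not the authors' implementation):

```python
import math

def archive_stats(archive, fs):
    """Per-variable mean, variance, and shape variable from the archive
    of the n best solutions (Equations 1 and 2).

    archive: list of n solution vectors, each normalized to [0, 1].
    fs:      positive scaling factor f_s for the shape variable.
    """
    n = len(archive)
    dim = len(archive[0])
    mean, shape = [], []
    for i in range(dim):
        xi = [sol[i] for sol in archive]
        m = sum(xi) / n
        v = sum((x - m) ** 2 for x in xi) / n
        mean.append(m)
        # On the normalized domain v < 1, so -ln(v) > 0 and s_i > 0;
        # zero variance gives an unbounded shape variable.
        shape.append(-fs * math.log(v) if v > 0.0 else float("inf"))
    return mean, shape
```

A zero-variance coordinate (all archive entries equal) yields an unbounded shape variable, which pins the mapping to the mean value.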

FIGURE 1 Mapped points for (a) a conservative search with a large value for shape variables and (b) an exploration search with small value for
shape variables

The inputs of the mapping function are x_rand and a function H, which depends on the mean value and shape variables:

    x_i = M(x_rand, H)
        = H(x̄_i, s_i, x_rand) + (1 − H1 + H0) x_rand − H0,               (3)

where the function H depends on the mean value, shape variables, and x:

    H(x̄_i, s_i, x) = x̄_i (1 − e^(−x s_1i)) + (1 − x̄_i) e^(−(1−x) s_2i),  (4)

along with the definitions H0 ≡ H(x̄_i, s_i, 0) and H1 ≡ H(x̄_i, s_i, 1).


The contour plot of Figure 2a shows the mapping function for a constant mean value of 0.5. As shown, if the random value is near x̄, the
mapped variable remains nearly constant, independent of the values of s. However, the mapped variables change drastically for small values of s when the
random point is far from the mean value. Also, Figure 2b shows the result of the mapping function for constant shape variables.
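The transformation of Equations 3 and 4 can be sketched as follows (Python; the function names `h` and `mvmo_map` are illustrative). By construction, x_rand = 0 and x_rand = 1 map to the interval endpoints, while for large shape variables interior points are pulled toward the mean:

```python
import math

def h(x_mean, s1, s2, x):
    """Transformation function H of Equation 4."""
    return (x_mean * (1.0 - math.exp(-x * s1))
            + (1.0 - x_mean) * math.exp(-(1.0 - x) * s2))

def mvmo_map(x_rand, x_mean, s1, s2):
    """Mapping M of Equation 3: transforms a uniform random number in
    [0, 1] into a point concentrated near the archive mean x_mean."""
    h0 = h(x_mean, s1, s2, 0.0)
    h1 = h(x_mean, s1, s2, 1.0)
    hx = h(x_mean, s1, s2, x_rand)
    return hx + (1.0 - h1 + h0) * x_rand - h0
```

Because H is increasing in x and 1 − H1 + H0 ≥ 0, the map is monotone on [0, 1] and never leaves the normalized domain.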
The shape variable vector s_i consists of two shape variables, that is, s_i = (s_1i, s_2i). One of the shape variables is always set to s_i, which is given in
Equation 1, and the other is a search distance d_i, chosen based on the value of s_i. If s_i is greater than this distance, then d_i is increased; otherwise, it
is decreased, so that d_i tracks s_i. Initially, this distance is set to a large number and s_1i = s_2i = s_i. Numerical experiments have shown that the range d_i^initial ∈ [1, 5]
is wide enough to produce an acceptable range for the variables. The algorithm to find s_i = (s_1i, s_2i) is summarized as

    IF (s_i > d_i) THEN
        d_i = d_i × Δd
    ELSE
        d_i = d_i / Δd
    END IF
    IF (rand ≥ 0.5) THEN
        s_i = (s_i, d_i)
    ELSE
        s_i = (d_i, s_i)
    END IF

If shape variables are equal, then the mapping function is symmetric. This is shown in Figure 2b. Asymmetric mapping is achieved once shape
variables are not equal as illustrated by the contours in Figure 3a,b. Also, as si → (∞, ∞), the new point converges to the mean value x̄ i . The flowchart
of the single-solution MVMO is summarized in Figure 4.
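A Python sketch of the shape-pair selection above (the function name and tuple return are my own; the random branch mirrors the rand ≥ 0.5 test):

```python
import random

def shape_pair(s_i, d_i, delta_d):
    """Select the shape pair (s_1i, s_2i) and update the search
    distance d_i, following the IF/ELSE rules of Section 2.1.

    delta_d > 1, so d_i is pushed toward s_i and oscillates around it.
    """
    if s_i > d_i:
        d_i = d_i * delta_d   # distance lags behind s_i: increase it
    else:
        d_i = d_i / delta_d   # distance overshot s_i: decrease it
    # Randomly assign d_i to the first or second shape slot, which
    # produces the asymmetric mappings of Figure 3.
    if random.random() >= 0.5:
        pair = (s_i, d_i)
    else:
        pair = (d_i, s_i)
    return pair, d_i
```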

FIGURE 2 Mapping function for (a) constant mean and (b) symmetric mapping when shape variables are equal

FIGURE 3 Asymmetric mapping when shape variables are not equal



FIGURE 4 Flowchart for the single-solution mean-variance mapping optimization algorithm

Once the mean, variance, and shape variables are constructed, a number of variables (the number of mapped variables, m) are chosen randomly to be
varied via the mapping function. This number introduces another parameter into the MVMO algorithm. Thus, the user needs to tune the algorithm
for the problem at hand using the following parameters:

• Size of archive (n)


• Number of mapped variables (m)
• Factor (fs )
• Increment (Δdi )
• Number of iterations (Ntot )

Appropriate assignment of these parameters mimics both the exploration and the exploitation of an evolutionary algorithm with implicit elitism. Numerical
experiments show that an archive size of 4 ≤ n ≤ 6 is appropriate for the problems of interest in this study. Generally, a larger archive size
corresponds to a more conservative search and, as a result, requires more computation. Also, the factor f_s in Equation 1 produces a conservative search
(when accuracy is needed) if f_s > 1 and the opposite (a global search) if f_s < 1. Ideally, an adaptive strategy is required, which is the topic of the next section.

2.2 Adaptive strategies for fs and Δdi


Appropriate values for f_s and Δd_i are not straightforward to set. Both determine the properties of the shape variables used in the
mapping algorithm. Ideally, f_s should be less than one initially and gradually become greater than one as the algorithm evolves. For this purpose, the
following strategy is adopted:

    f_s = f_s^min + μ² (f_s^max − f_s^min),                              (5)

where f_s^min = 0.8, f_s^max = 2, and μ = #iteration/N_tot. This schedule allows a global search initially, with a quadratic increase in f_s as the run
progresses. In the original algorithm, a random number in the range [0, 1] was used; however, numerical experiments showed that the progressive
increase of f_s in Equation 5 gives equally good results.

The search increment Δd_i is responsible for the change in d_i that determines one of the shape variables. Thus, it can adopt a behavior similar to that of the
factor f_s. It is desirable to have d_i oscillate around s_i, and therefore Δd_i > 1. With this in mind, the following schedule is chosen:

    Δd_i = 1 + Δd_0 × rand,
    Δd_0 = Δd_0^min + μ² (Δd_0^max − Δd_0^min),                          (6)

with Δd_0^min = 0.01 and Δd_0^max = 0.4. Thus, as the algorithm evolves, d_i oscillates around s_i, with Δd_i bounded from above by 1 + Δd_0^max.
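The two schedules of Equations 5 and 6 can be sketched together (Python, using the parameter values stated above; the function names are illustrative):

```python
import random

def fs_schedule(iteration, n_tot, fs_min=0.8, fs_max=2.0):
    """Quadratic ramp of the factor f_s (Equation 5): global search
    early (f_s < 1), conservative search late (f_s up to 2)."""
    mu = iteration / n_tot
    return fs_min + mu ** 2 * (fs_max - fs_min)

def delta_d_schedule(iteration, n_tot, d0_min=0.01, d0_max=0.4):
    """Increment Delta_d_i of Equation 6; always > 1, so the search
    distance d_i keeps oscillating around s_i."""
    mu = iteration / n_tot
    d0 = d0_min + mu ** 2 * (d0_max - d0_min)
    return 1.0 + d0 * random.random()
```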

2.3 Population-based algorithm: MVMO-SH


Population-based MVMO was first introduced by Rueda et al.[30] and is originally called MVMO-SH. In this version, the algorithm treats the simple
MVMO as a mutation operator. A pool of NP candidates is ranked based on their fitness. This pool is divided into good and bad subgroups where a
different crossover operator is applied to each group. The size of each group will change dynamically as the algorithm evolves. For the group with
good points (particles), the crossover instantly assigns the parent to be the best point of the subset and saves it for the next iteration (generation);
however, MVMO is applied with corresponding mean and variance for each particle. For the group with bad points, a three-parent crossover operator
is used that replaces the mean for the MVMO operator. The flowchart of the algorithm is summarized in Figure 5, which also includes the crossover
operator. It should be emphasized that each particle has its own archive (memory) for the MVMO mutation operator.
The number of particles in the good group is determined using the following function:

    N_GP = round(g_GP × N_P),
    g_GP = g_GP^initial + μ (g_GP^final − g_GP^initial),                 (7)

where g_GP^initial = 0.7 and g_GP^final = 0.2. Thus, as the algorithm evolves, the set of good points becomes progressively smaller.
In the original algorithm, a local search method is invoked randomly while creating the pool of particles, along with the fitness evaluation step (the hybrid
version). However, for the current application, similar results are obtained without this extra effort. This could be due to the irregularity of the
objective function, which causes gradient-based algorithms to perform poorly.
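The shrinking good-group size of Equation 7 can be sketched as follows (Python; the function name is illustrative, and reading Equation 7 as rounding the product g_GP · N_P is my interpretation):

```python
def good_group_size(iteration, n_tot, n_p, g_initial=0.7, g_final=0.2):
    """Number of particles in the 'good' group (Equation 7): the
    fraction shrinks linearly from 70% to 20% of the pool of N_P."""
    mu = iteration / n_tot
    g_gp = g_initial + mu * (g_final - g_initial)
    return round(g_gp * n_p)
```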

2.4 Handling constraints


MVMO minimizes unconstrained problems or problems that involve only explicit bounds on their variables. However, weight minimization
problems are usually subject to implicit physical and mechanical constraints. A single-objective problem with n variables and m constraints takes
the following form:

minimize f(x)
with respect to x = (x1 , · · · , xn ) (8)

subject to gi (x) ≤ 0, ∀i = 1, · · · , m,

where x is the vector of variables, f(x) is the objective function, and g_i(x) is the ith normalized constraint function. To address the constraints, a constraint
handling strategy is required. A simple way to redefine the problem as an unconstrained one is to rule out sample points for which the constraints
are not satisfied. This approach has been widely used in population-based heuristic methods such as GA, in which a newly generated solution does
not enter the population unless it satisfies all the constraints. It can work for particular problems or algorithms, but it becomes
inefficient unless a rigorous method for finding a feasible point is available.
Another widely used approach is a penalty function. Penalty functions penalize (increase, in the case of a minimization problem)
the objective value for each constraint in proportion to its degree of closeness to, or violation of, the corresponding limit. The redefined
unconstrained optimization problem is
minimize F(x, k) ≡ f(x) + p(x, k)
(9)
with respect to x = (x1 , · · · , xn ),
where the function p(x, k) ≥ 0 is yet to be defined based on one of two major approaches, namely, interior and exterior penalty methods. Theoretically, the
unconstrained problem has converged once ‖∇_x F(x, k)‖ ≤ ε. Ideally, the magnitude of the penalty function is small initially, while an initial guess is constructed.
In interior penalty methods, the magnitude of the penalty rapidly increases (ideally goes to infinity) as the point approaches the constraint limits.
The penalty function is usually defined as an inverse or logarithmic function of the constraints that approaches zero during convergence
toward the optimal point. Thus, this method is useful if one wishes to maintain feasible points during the course of the iterations.
In exterior penalty methods, however, solution points are allowed to violate constraints with a finite amount of penalization, which ideally goes to
infinity as the solution evolves. Thus, the solution approaches the optimal point, usually from the infeasible region of the search domain, and the
penalization adaptively changes to limit the violation.
Generally, exterior penalty methods have been shown to be more robust than their interior counterparts. Moreover, given the nature of the
constraints in truss structures, an exterior penalty method seems better suited, because the optimal point usually occurs where one or a few
constraints are active. In this study, we use the following form of the exterior penalty method:

FIGURE 5 Flowchart for the population-based mean-variance mapping optimization (MVMO) algorithm


    p(x, k) = k Σ_{i=1}^{m} V_i(x),                                      (10)

where Vi is defined as a form of quadratic function:


    V_i(x) = { 0          if g_i(x) ≤ 0,
             { g_i(x)²    otherwise,                                     (11)

The penalty scaling factor is defined as k = (αμ)^β with positive values for α and β. If we set μ = #iteration/N_tot, the penalty scaling factor
increases monotonically, which constitutes an adaptive penalty function. This feature makes the above function ideal for the exterior penalty method.
Numerical experiments show that by setting α = 10^6 and β = 2, results with acceptable accuracy are obtained, that is, g_i(x) ≤ 10^−6. Another
approach is to multiply the penalty value by the objective function, that is, p(x, k) = k f(x) Σ_{i=1}^{m} V_i(x). Both approaches result in similar convergence
rates for the problems studied in this paper.
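Under the values tuned above (α = 10⁶, β = 2), the adaptive exterior penalty of Equations 9 through 11 can be sketched as (Python; the function signature is my own):

```python
def penalized_objective(f_val, g_vals, iteration, n_tot,
                        alpha=1e6, beta=2.0):
    """Adaptive exterior quadratic penalty (Equations 9-11).

    f_val:  objective value f(x)
    g_vals: normalized constraint values g_i(x); feasible when <= 0
    """
    mu = iteration / n_tot
    k = (alpha * mu) ** beta              # monotonically increasing scaling
    violation = sum(g * g for g in g_vals if g > 0.0)  # quadratic V_i
    return f_val + k * violation
```

Feasible points are returned unpenalized; a violated constraint is charged k·g², with k growing as (αμ)^β over the run.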

The rapid increase in the penalty function ensures the accuracy of the final answer. However, the Hessian of F(x, k) becomes ill-conditioned, and
thus, the minimization problem becomes more difficult as the value of the penalty scaling function increases (slowing down the convergence rate),
which is a major issue for gradient-based algorithms.

3 TRUSS WEIGHT MINIMIZATION PROBLEM

Efficient structural design has been the focus of several studies in the literature.[33] These problems vary in their definitions of the objective
and constraints. In this study, the focus is on structural weight minimization subject to displacement constraints on the nodes and stress constraints on the elements.
From an analytical point of view, the problem of weight minimization of a truss structure that is subjected to stress and displacement limits is
defined as

    minimize          W(A) = ρg Σ_{i=1}^{NEL} A_i l_i

    with respect to   A = (A_1, · · · , A_NEL)

                      ⎧ (u_(x,y,z),k − u^max_(x,y,z),k) / u^max_(x,y,z),k ≤ 0
    subject to        ⎨ (σ_i − σ_i^max) / σ_i^max ≤ 0                          i = 1, · · · , NEL;  k = 1, · · · , NNO,      (12)
                      ⎩ A_i^min ≤ A_i ≤ A_i^max

where g is the gravitational acceleration, ρ is the material density, A is the vector of cross-sections, NEL is the number of elements in the structure, NNO
is the total number of nodes, A_i^min and A_i^max are the lower and upper bounds of A_i, u_(x,y,z),k is the displacement of the kth node in the x, y, or z direction with
lower and upper limits u^min_(x,y,z),k and u^max_(x,y,z),k, σ_i is the stress in the ith element with lower and upper limits σ_i^min and σ_i^max, and finally l_i
is the length of the ith element:

    l_i = √( (x_1i − x_2i)² + (y_1i − y_2i)² + (z_1i − z_2i)² ),         (13)

where x(1,2)i , y(1,2)i , z(1,2)i are coordinates of the nodes of the ith element.
The cross-sections of the members are the only design variables in this problem; they are treated as continuous variables, and the lengths of the
elements are kept constant. The finite element stiffness (displacement) method is implemented in order to evaluate the nodal displacements
and member stresses. The method has been verified using benchmark problems with analytical solutions.
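The objective and normalized constraints of Equation 12 can be sketched as follows (Python; the function names are my own, and taking absolute values of the responses is my reading of the symmetric displacement and stress limits):

```python
def truss_weight(areas, lengths, rho_g):
    """Objective of Equation 12: W(A) = rho * g * sum_i A_i * l_i."""
    return rho_g * sum(a * l for a, l in zip(areas, lengths))

def normalized_constraints(displacements, u_max, stresses, sigma_max):
    """Normalized constraints of Equation 12, feasible when g <= 0:
    (|u_k| - u_max)/u_max and (|sigma_i| - sigma_max)/sigma_max."""
    g = [(abs(u) - u_max) / u_max for u in displacements]
    g += [(abs(s) - sigma_max) / sigma_max for s in stresses]
    return g
```

The list returned here is exactly what the exterior penalty of Section 2.4 consumes: any positive entry is squared and charged against the weight.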

4 TEST PROBLEMS

In this section, single-solution and population-based MVMO algorithms and the proposed penalty method are analyzed using several truss weight
minimization problems. The population-based simulations have a size of 2k, 5k, and 10k where k is the number of variables. The corresponding
results are marked MVMO-2, MVMO-5, and MVMO-10. The best solution obtained over 10 independent runs (i.e., Monte Carlo simulations) and the number of
function evaluations (NFE) needed to reach it are reported in the tables. For all algorithms (MVMO-2, 5, 10), the convergence history in terms of the archive-average
solution versus function evaluations is shown as well. To provide a basic comparison of the algorithms in terms of convergence and global
optimality, convergence histories obtained from a binary-coded GA are also included in those plots. In the GA implementation, we set the population size to 50, the crossover
probability to 1, and the mutation probability to 0.01, and the top 10% of the genes are retained to enforce elitism at every
generation.
Because the computational cost is directly dependent on the number of function calls, we set a maximum allowable NFE. Compared to reporting
CPU time on a machine, NFE is an accurate measurement tool, as the computational architecture, resources, coding style, and so forth would be
different for each implementation of a specific algorithm. This number is 2 × 103 k (k = NEL , which is the number of variables) and sets Ntot (number
of iterations) to be 1 × 103 , 200, and 100 for MVMO-2, 5, and 10, respectively.
A number of truss designs reported in the literature violate the constraints slightly.[34] Because the goal of this paper is to
show the capabilities of the proposed method, in some cases (indicated by the symbol ∗) cross-section areas that result in a slight violation of the stress and
displacement constraints are also reported. This relaxation allows the algorithm to find a lower weight. The corresponding final solution appears under the column labeled
MVMO* in the tables. These cases are considered only when the violation threshold is greater than 0.5%.
Finally, for all cases, simulations are initialized with a uniform random distribution in an initial range. The lower bound of this initial range is defined
for each problem, and the upper range is chosen to be 100 times this lower bound.

4.1 10-bar truss


The first test problem is a cantilever 10-bar truss with 10 independent design variables, as shown in Figure 6. This problem has been studied before[34-37]
as a benchmark for optimization methods. A material density of 0.1 lb/in³ and a modulus of elasticity of 10,000 ksi are used in this problem.

FIGURE 6 10-bar truss geometry and loading condition

TABLE 1 Cross-section areas and total weights for the 10-bar truss
Variables (in²) Lee & Geem*[34] Li et al.*[36] Farshi & Ziazi[35] MVMO MVMO*

A1 23.25 23.743 23.527 23.3587 23.5344


A2 0.102 0.101 0.1 0.1000 0.1000
A3 25.73 25.287 25.294 25.1161 25.3002
A4 14.51 14.413 14.376 14.3499 14.2751
A5 0.1 0.1 0.1 0.1000 0.1000
A6 1.977 1.969 1.9698 1.9696 1.9665
A7 12.21 12.362 12.4041 12.4181 12.3751
A8 12.61 12.694 12.8245 12.8205 12.8332
A9 20.36 20.323 20.3304 20.4066 20.2260
A10 0.1 0.103 0.1 0.1000 0.1000
Weight (lb) 4, 668.8 4, 677.7 4, 677.8 4, 676.986 4, 668.175
NFE 15, 000 75, 000 - 12, 200 16, 100

Note. MVMO = mean-variance mapping optimization; NFE = number of function evaluations.

FIGURE 7 10-bar truss convergence history for single-point, population-based mean-variance mapping optimization (MVMO) and genetic
algorithm

Nodal displacements and member stresses are limited to 2 in and 25 ksi, respectively. The lower bound on the cross-sectional areas is 0.1 in². Following
Figure 6, the loading condition in this problem is P1 = 150 kips and P2 = 50 kips.
Table 1 compares the solution from MVMO with those of a harmony search,[34] a particle swarm optimizer,[36] and the method of force-centers.[35] As shown,
in the best case, MVMO obtained a design with a lower structural weight after approximately 12,200 function evaluations. Figure 7 depicts the

convergence properties of MVMO. Single-solution MVMO has a rapid initial convergence and can obtain a solution within less than 1% of the
optimal weight. However, MVMO-2 converges faster than the other algorithms. Although GA shows a good initial convergence rate, it fails to come
within 10% of the optimal solution. The study by Lee and Geem[34] contains violated constraints (∼3%). When allowed to relax the
constraints in a similar manner,[34] MVMO is able to find a lower total weight for the structure.

4.2 17-bar truss


Many researchers have studied the weight minimization of the 17-bar truss shown in Figure 8. In this problem, the material density is 0.268 lb/in³, and the
modulus of elasticity is 30,000 ksi. A single load of 100 kips is applied at node (9). The only constraint in this problem is a displacement limit
of ±2 in imposed on the nodes in both the x and y directions. Seventeen independent variables, representing each member's cross-section area, exist in
this problem. The minimum cross-section area is 0.1 in².
Table 2 summarizes the results obtained from an augmented Lagrangian-based GA,[38] harmony search,[34] and the MVMO implementation.
MVMO is able to locate a superior solution compared with the studies in the literature. The last column of the table shows the results of MVMO
when the displacement constraint is allowed to relax slightly, as in Lee and Geem.[34] In this case, MVMO successfully locates a point with a lower total weight for the
structure. The number of function evaluations needed to find the best result is of the same order as for harmony search; however, the modified GA technique
of Adeli and Kumar[38] converges to a solution within 0.5% of the best solution in 6,000 steps. The convergence history for this problem is shown in
Figure 9. The single-solution MVMO is characterized by its rapid convergence, as shown in the plot, with the population-based versions
able to converge to an optimal solution relatively quickly. All population-based MVMO simulations converge to a moderately good solution.
Compared with the GA implementation for the same problem, both algorithms show rapid initial convergence; however, GA fails to
exploit the search domain further and falls into a local minimum.

FIGURE 8 17-bar truss geometry and loading condition

TABLE 2 Cross-section areas and total weights for the 17-bar truss


Variables (in2 ) Adeli & Kumar[38] Lee & Geem*[34] MVMO MVMO*

A1 16.029 15.821 15.9302 15.9341


A2 0.107 0.108 0.1000 0.1000
A3 12.183 11.996 12.0699 12.0601
A4 0.100 0.1 0.1000 0.1000
A5 8.417 8.15 8.0673 8.0491
A6 5.715 5.507 5.5616 5.5614
A7 11.331 11.829 11.933 11.893
A8 0.105 0.1 0.1000 0.1000
A9 7.301 7.934 7.9446 7.9405
A10 0.115 0.1 0.1000 0.1000
A11 4.046 5.66 4.0553 4.0532
A12 0.101 4.061 0.1000 0.1000
A13 5.611 5.565 5.6568 5.6542
A14 4.046 0.1 0.1000 0.1001
A15 5.152 5.656 5.5580 5.5423
A16 0.107 0.1 4.000 4.006
A17 5.286 5.582 5.5785 5.5545
Weight (lb) 2, 594.42 2, 580.81 2, 581.889 2, 578.594
NFE 6, 000 20, 000 20, 800 20, 500

Note. MVMO = mean-variance mapping optimization; NFE = number of function evaluations.



FIGURE 9 17-bar truss convergence history for single-point, population-based mean-variance mapping optimization (MVMO) and genetic
algorithm (GA)

FIGURE 10 18-bar truss geometry and loading condition

FIGURE 11 18-bar truss convergence history for single-point, population-based mean-variance mapping optimization (MVMO) and genetic
algorithm (GA)

4.3 18-bar truss


The cantilever 18-bar truss of Figure 10 is an optimization problem analyzed by Imai and Schmit,[39] Lee and Geem,[34] and Kaveh and Talatahari.[40]
For this problem, the material density is 0.1 lb/in³. The stress constraint on members is 20,000 psi (the same limit in tension and compression). Constraints
on Euler buckling strength are also considered, where the allowable limit for the ith member is computed as

    σ_i^b = − K E A_i / L_i²,                                            (14)
where K is the buckling constant (K = 4), E is the modulus of elasticity (E = 10,000 ksi), and L_i is the length of the element. Following the figure, a
downward load of F = 20 kips is applied at nodes (1), (2), (4), (6), and (8). The members are divided into four groups, indicated by a bar, as follows:

• Ā1: A1, A4, A8, A12, A16
• Ā2: A2, A6, A10, A14, A18
• Ā3: A3, A7, A11, A15
• Ā4: A5, A9, A13, A17

with the minimum cross-section area of 0.1 in2 for each member.
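A quick numerical check of Equation 14 (Python; the defaults follow K = 4 and E = 10,000 ksi as given above, with A_i in in² and L_i in inches, so the limit comes out in ksi):

```python
def buckling_limit(area, length, K=4.0, E=10_000.0):
    """Allowable Euler buckling stress of Equation 14:
    sigma_b = -K * E * A_i / L_i**2 (compression is negative)."""
    return -K * E * area / length ** 2
```

For example, under these constants a 1 in² member of length 100 in has an allowable compressive stress of 4 ksi.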
Following the trend of the previous problems, we present two sets of results for MVMO. Under both conditions, MVMO finds a better optimal point
than studies with similar conditions. As previously mentioned, MVMO is characterized by its rapid rate of convergence, as can be seen for
this problem in Figure 11. All algorithms show relatively good convergence for this problem; however, single-solution MVMO
does exceptionally well compared with the population-based MVMO and GA. This behavior can be attributed to the size
of the problem: there are only four design variables, so this is considered a relatively easy problem. The results of the 18-bar truss
are shown in Table 3. Similarly, the results indicate that the algorithm is able to find a solution that is better than or equal to the results reported in the
literature.

TABLE 3 Cross-section areas and total weights for the 18-bar truss
Variables (in2 ) Imai & Schmit*[39] Lee & Geem*[34] MVMO MVMO*

Ā 1 9.998 9.980 10.0 9.9785


Ā 2 21.650 21.630 21.651 21.6200
Ā 3 12.500 12.490 12.500 12.4694
Ā 4 7.072 7.057 7.071 7.0569
Weight (lb) 6, 430.0 6, 421.88 6, 430.526 6, 418.725
NFE - 2, 000 1, 900 2, 200

Note. MVMO = mean-variance mapping optimization; NFE = number of function evaluations.

FIGURE 12 72-bar truss geometry and loading condition



4.4 72-bar truss


The 72-bar truss problem of Figure 12 is a spatial structure studied by many researchers to analyze their proposed algorithms.[35, 41, 42] In this problem, the
material density is 0.1 lb/in³, and the modulus of elasticity is 10,000 ksi. The structure is subjected to two independent loading conditions:

1. P_x = 5.0 kips, P_y = 5.0 kips, and P_z = −5.0 kips on node 17;

2. P_z = −5.0 kips on nodes 17, 18, 19, and 20.

The members of the 72-bar truss are divided into 16 groups:

Ā1 = A1 ∼ A4, Ā2 = A5 ∼ A12, Ā3 = A13 ∼ A16, Ā4 = A17 ∼ A18, Ā5 = A19 ∼ A22, Ā6 = A23 ∼ A30, Ā7 = A31 ∼ A34, Ā8 = A35 ∼ A36,
Ā9 = A37 ∼ A40, Ā10 = A41 ∼ A48, Ā11 = A49 ∼ A52, Ā12 = A53 ∼ A54, Ā13 = A55 ∼ A58, Ā14 = A59 ∼ A66, Ā15 = A67 ∼ A70, Ā16 = A71 ∼ A72.
The resultant compressive/tensile stress in each member needs to be less than 25 ksi, and the maximum displacement of the uppermost nodes is
limited to 0.25 in in the x and y directions. The minimum cross-section area of all members is 0.1 in2.
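As a concrete illustration, the grouping and constraint definitions above can be assembled programmatically. The sketch below is a minimal Python illustration, not the paper's implementation; the structural analysis that produces member stresses and nodal displacements (a finite element solve) is assumed to exist elsewhere and is not shown.

```python
# Sketch: expand the 16 group areas into 72 member areas and turn the
# stress/displacement limits into normalized constraints g_i <= 0.
# The member index ranges are transcribed from the grouping in the text.
GROUPS = [(1, 4), (5, 12), (13, 16), (17, 18),
          (19, 22), (23, 30), (31, 34), (35, 36),
          (37, 40), (41, 48), (49, 52), (53, 54),
          (55, 58), (59, 66), (67, 70), (71, 72)]

STRESS_LIMIT = 25.0   # ksi, tension and compression
DISP_LIMIT = 0.25     # in, x and y at the uppermost nodes
A_MIN = 0.1           # in^2, lower bound on every cross-section

def expand_areas(group_areas):
    """Map the 16 design variables onto the 72 individual members."""
    areas = [0.0] * 72
    for a, (lo, hi) in zip(group_areas, GROUPS):
        for m in range(lo - 1, hi):  # GROUPS is 1-based, inclusive
            areas[m] = a
    return areas

def constraint_violations(stresses, displacements):
    """Normalized constraints; a positive value means the limit is violated."""
    g = [abs(s) / STRESS_LIMIT - 1.0 for s in stresses]
    g += [abs(d) / DISP_LIMIT - 1.0 for d in displacements]
    return g
```

In a full run, `expand_areas` would feed a finite element solver for each load case, and the collected violations would enter the exterior penalty term.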
Results for this case are summarized in Table 4. The results indicate that MVMO is able to locate an optimal point where the total weight of the
structure is minimal. The convergence plot of Figure 13 shows that single-solution MVMO has a superior convergence rate and that the population-based

TABLE 4 Cross-section areas and total weights for the 72-bar truss

Variables (in2)   Camp[1]   Kaveh & Khayatazad[42]   Kaveh & Talataheri[43]   MVMO
Ā1                1.8807    1.8365                   1.9042                   0.1565
Ā2                0.5142    0.5020                   0.5162                   0.5452
Ā3                0.1000    0.1000                   0.1000                   0.4100
Ā4                0.1000    0.1004                   0.1000                   0.5692
Ā5                1.2711    1.2522                   1.2582                   0.5225
Ā6                0.5151    0.5033                   0.5035                   0.5168
Ā7                0.1000    0.1001                   0.1000                   0.1000
Ā8                0.1000    0.1001                   0.1000                   0.1000
Ā9                0.5317    0.5730                   0.5178                   1.2679
Ā10               0.5134    0.5498                   0.5214                   0.5114
Ā11               0.1000    0.1004                   0.1000                   0.1000
Ā12               0.1000    0.1001                   0.1007                   0.1000
Ā13               0.1565    0.1575                   0.1566                   1.8852
Ā14               0.5429    0.5222                   0.5421                   0.5121
Ā15               0.4081    0.4355                   0.4132                   0.1000
Ā16               0.5733    0.5971                   0.5756                   0.1000
Weight (lb)       379.632   380.458                  379.66                   379.503
NFE               21,542    19,048                   13,200                   14,700

Note. MVMO = mean-variance mapping optimization; NFE = number of function evaluations.

FIGURE 13 72-bar truss convergence history for single-point, population-based mean-variance mapping optimization (MVMO) and genetic
algorithm (GA)

MVMO results in the best solution. The GA also converges quickly at first in this problem but fails to exploit the search domain further.

4.5 200-bar truss


The final test case in this paper is a 200-bar truss problem (see Figure 14). Lamberti[13] studied this case by comparing the performance of an
improved simulated annealing algorithm and the harmony search algorithm of Lee and Geem.[34] In this problem, the material density is
7,833.413 kg/m3, and the modulus of elasticity is 206.91 GPa. The structure is subjected to three independent loading conditions:

1. Px = +4,449.741 N (i.e., 1,000 lbf) on nodes 1, 6, 15, 20, 29, 34, 43, 48, 57, 62, and 71;
2. Py = −44,497.412 N (i.e., 10,000 lbf) on nodes 1, 2, 3, 4, 5, 6, 8, 10, 12, 14, 15, 16, 17, 18, 19, 20, 22, 24, 26, 28, 29, 30, 31, 32, 33, 34, 36, 38, 40,
42, 43, 44, 45, 46, 47, 48, 50, 52, 54, 56, 57, 58, 59, 60, 61, 62, 64, 66, 68, 70, 71, 72, 73, 74, and 75;
3. both of the above loadings acting simultaneously.

FIGURE 14 200-bar truss geometry and loading condition



These loading conditions divide the 200-bar truss into 29 sub-groups, with the cross-section of each group as an optimization variable, as indicated in
Table 5. The resultant compressive/tensile stress in each member needs to be less than 68.97 MPa (i.e., 10,000 psi), which creates 1,200 constraints
for this problem. There is no constraint on nodal displacement. The minimum cross-section area of all members is 0.000064516 m2 (i.e., 0.1 in2).
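The 1,200-constraint count follows directly from the problem data: each of the 200 members is stress-checked against both a tensile and a compressive limit under each of the three load cases. A one-line sanity check:

```python
# How the 200-bar problem arrives at 1,200 stress constraints:
# every member is checked in tension and in compression, per load case.
n_members = 200
n_load_cases = 3
n_bounds = 2  # tensile and compressive limits (+/- 68.97 MPa)

n_constraints = n_members * n_load_cases * n_bounds
print(n_constraints)  # 1200
```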
Results for this case are summarized in Table 6. We compared the results with the lowest weight reported by Lamberti,[13] which carries a 0.0709%
violation of constraints. MVMO is able to find a point with a total weight lower than that reported by Lamberti.[13] Unlike the other problems,
the NFE limit is set to 29 × 10³ for this problem because of its high computational cost. The convergence plot of Figure 15 compares the single- and
population-based MVMO algorithms with the binary-coded GA. Once again, we can see the excellent convergence rate of single-solution MVMO
and the global convergence of the population-based algorithm.

TABLE 5 Element grouping for the 200-bar truss structure

Group   Elements
Ā1      1–4
Ā2      5, 8, 11, 14, 17
Ā3      19–24
Ā4      18, 25, 56, 63, 94, 101, 132, 139, 170, 177
Ā5      26, 29, 32, 35, 38
Ā6      6, 7, 9, 10, 12, 13, 15, 16, 27, 28, 30, 31, 33, 34, 36, 37
Ā7      39, 40, 41, 42
Ā8      43, 46, 49, 52, 55
Ā9      57–62
Ā10     64, 67, 70, 73, 76
Ā11     44, 45, 47, 48, 50, 51, 53, 54, 65, 66, 68, 69, 71, 72, 74, 75
Ā12     77–80
Ā13     81, 84, 87, 90, 93
Ā14     95, 96, 97, 98, 99, 100
Ā15     102, 105, 108, 111, 114
Ā16     82, 83, 85, 86, 88, 89, 91, 92, 103, 104, 106, 107, 109, 110, 112, 113
Ā17     115–118
Ā18     119, 122, 125, 128, 131
Ā19     133–138
Ā20     140, 143, 146, 149, 152
Ā21     120, 121, 123, 124, 126, 127, 129, 130, 141, 142, 144, 145, 147, 148, 150, 151
Ā22     153–156
Ā23     157, 160, 163, 166, 169
Ā24     171–176
Ā25     178, 181, 184, 187, 190
Ā26     158, 159, 161, 162, 164, 165, 167, 168, 179, 180, 182, 183, 185, 186, 188, 189
Ā27     191–194
Ā28     195, 197, 198, 200
Ā29     196, 199

TABLE 6 Cross-section areas and total weights for the 200-bar truss

Variables (in2)   Lamberti*[13]   MVMO*        Variables (in2)   Lamberti*[13]   MVMO*
Ā1                0.1468          0.139898     Ā16               0.5734          0.573891
Ā2                0.9400          0.938817     Ā17               0.1327          0.234063
Ā3                0.1000          0.100010     Ā18               7.9717          7.948509
Ā4                0.1000          0.100000     Ā19               0.1000          0.100000
Ā5                1.9400          1.937514     Ā20               8.9717          8.947209
Ā6                0.2962          0.292482     Ā21               0.7049          0.758253
Ā7                0.1000          0.107811     Ā22               0.4196          0.201802
Ā8                3.1042          3.093440     Ā23               10.8636         10.831769
Ā9                0.1000          0.102188     Ā24               0.1000          0.100043
Ā10               4.1042          4.092139     Ā25               11.8606         11.830469
Ā11               0.4034          0.401827     Ā26               1.0339          0.898948
Ā12               0.1912          0.171093     Ā27               6.6818          7.073386
Ā13               5.4284          5.379026     Ā28               10.8113         10.967431
Ā14               0.1000          0.125551     Ā29               13.8404         13.643608
Ā15               6.4284          6.377726
Weight (kg)       11,542.137      11,537.6101
NFE               10,833          16,200

Note. MVMO = mean-variance mapping optimization; NFE = number of function evaluations.



FIGURE 15 200-bar truss convergence history for single-point, population-based mean-variance mapping optimization (MVMO) and genetic
algorithm (GA)

5 CONCLUSION

The MVMO algorithm is investigated on a variety of small- to large-scale spatial truss weight minimization problems. Both single-solution and
population-based algorithms are thoroughly discussed, and an adaptive (dynamic) quadratic exterior penalty method is employed and tuned to
handle the nonlinear mechanical and geometric constraints of truss structure problems. Adaptive strategies are intrinsic to MVMO, so the transition
from exploration to exploitation occurs automatically. For a fixed computational cost, the qualitative behavior in terms of convergence history for a
single-solution MVMO, a four-member population-based MVMO, and a binary-coded GA is presented; the results generally exhibit rapid conver-
gence for single-solution MVMO, as well as the capability of the population-based version to carry out a global search. Quantitative results showed
that, for all cases, MVMO was able to find a superior solution. In particular, MVMO with a population size of twice the number of variables consistently
converged to the best known solution with a minimum number of function evaluations. The combination of the mapping functions,
the archive of best points, the adaptive strategies employed in MVMO, and the dynamic exterior penalty function makes it a robust and reliable tool
for computationally expensive problems with hundreds of constraints where a moderately accurate solution is assumed to be sufficiently good.
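As a sketch of the penalty scheme summarized above (illustrative only; the growth factor and update schedule here are assumptions, not the paper's tuned values), a generic adaptive quadratic exterior penalty can be written as:

```python
# Generic adaptive quadratic exterior penalty. Only violated constraints
# (g_i > 0) contribute, and the penalty weight r grows while the best
# solution remains infeasible. The growth factor 2.0 is an illustrative
# assumption, not the tuned value from the paper.

def penalized_weight(weight, violations, r):
    """Quadratic exterior penalty on the structural weight."""
    penalty = sum(max(0.0, g) ** 2 for g in violations)
    return weight + r * penalty

def update_penalty_weight(r, any_violated, growth=2.0):
    """Dynamic scheme: inflate r when the current best is infeasible."""
    return r * growth if any_violated else r
```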

ORCID

Amir H. Gandomi http://orcid.org/0000-0002-2798-0104

REFERENCES
[1] C. Camp, M. Farshchin, Eng. Struct. 2014, 62, 87.
[2] S. Bureerat, N. Pholdee, J. Comput. Civil Eng. 2015, 30(2), 04015019.
[3] A. Kaveh, A. Zolghadr, Comput. Struct. 2012, 102, 14.
[4] H. M. Gomes, Expert Syst. Appl. 2011, 38(1), 957.
[5] A. Kaveh, A. Zolghadr, Appl. Soft Comput. 2013, 13(5), 2727.
[6] J.-P. Li, Eng. Optim. 2015, 47(1), 107.
[7] J. N. Richardson, R. F. Coelho, S. Adriaenssens, Eng. Optim. 2016, 48(2), 334.
[8] L. Pyl, C. Sitters, W. De Wilde, Adv. Eng. Software 2013, 62, 9.
[9] A. Kaveh, S. Javadi, Acta Mech. 2014, 225(6), 1595.
[10] S. Degertekin, M. Hayalioglu, Comput. Struct. 2013, 119, 177.
[11] A. Kaveh, R. Sheikholeslami, S. Talatahari, M. Keshvari-Ilkhichi, Adv. Eng. Software 2014, 67, 136.
[12] L. F. F. Miguel, L. F. F. Miguel, Expert Syst. Appl. 2012, 39(10), 9458.
[13] L. Lamberti, Comput. Struct. 2008, 86(19), 1936.
[14] A. Kaveh, S. Talatahari, Asian J. Civil Eng. 2008, 9(4), 329.
[15] H. Rahami, A. Kaveh, M. Aslani, R. Najian Asl, Iran Univ. of Sci. Technol. 2011, 1(1), 29.
[16] W. Hare, J. Nutini, S. Tesfamariam, Adv. Eng. Software 2013, 59, 19.
[17] L. Davis, Handbook of Genetic Algorithms, 1991.

[18] R. C. Eberhart, J. Kennedy, in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, 1, New York, NY 1995, 39–43.
[19] M. Dorigo, M. Birattari, T. Stutzle, IEEE Comput. Intell. Mag. 2006, 1(4), 28.
[20] D. Karaboga, B. Basturk, J. Global Optim. 2007, 39(3), 459.
[21] O. K. Erol, I. Eksin, Adv. Eng. Software 2006, 37(2), 106.
[22] A. H. Gandomi, A. H. Alavi, Commun. Nonlinear Sci. Numer. Simul. 2012, 17(12), 4831.
[23] S. Talatahari, A. H. Gandomi, G. J. Yun, Struct. Des. Tall Special Build. 2014, 23(5), 350.
[24] A. H. Gandomi, S. Talatahari, X. S. Yang, S. Deb, Struct. Des. Tall Special Build. 2013, 22(17), 1330.
[25] H. Ishibuchi, N. Tsukamoto, Y. Nojima, Evolutionary many-objective optimization: A short review, in IEEE Congress on Evolutionary Computation Hong
Kong, China 2008, 2419–2426.
[26] A. H. Gandomi, X. S. Yang, S. Talatahari, A. H. Alavi, Metaheuristic Applications in Structures and Infrastructures, Newnes Cambridge, MA 2013.
[27] I. Erlich, A mean-variance optimization algorithm, in 2010 IEEE World Congress on Computational Intelligence, Niskayuna, NY, USA IEEE 2010,
344–349.
[28] J. L. Rueda, I. Erlich, Evaluation of the mean-variance mapping optimization for solving multimodal problems, in Swarm Intelligence (SIS), Singapore
2013 IEEE Symposium on, IEEE 2013, 7–14.
[29] J. L. Rueda, I. Erlich, MVMO for bound constrained single-objective computationally expensive numerical optimization, in Evolutionary Computation
(CEC), 2015 IEEE Congress on, Sendai, Japan IEEE 2015, 1011–1017.
[30] J. L. Rueda, I. Erlich, Testing MVMO on learning-based real-parameter single objective benchmark optimization problems, in 2015 IEEE Congress on
Evolutionary Computation (CEC), Sendai, Japan IEEE 2015, 1025–1032.
[31] I. Erlich, J. L. Rueda, S. Wildenhues, F. Shewarega, Evaluating the mean-variance mapping optimization on the IEEE-CEC 2014 test suite, in Evolutionary
Computation (CEC), 2014 IEEE Congress on, Beijing, China IEEE 2014, 1625–1632.
[32] J. L. Rueda, I. Erlich, Hybrid mean-variance mapping optimization for solving the IEEE-CEC 2013 competition problems, in 2013 IEEE Congress on
Evolutionary Computation, Cancun, Mexico IEEE 2013, 1664–1671.
[33] A. Kaveh, Advances in Metaheuristic Algorithms for Optimal Design of Structures, Gewerbestrasse, Switzerland Springer 2014.
[34] K. S. Lee, Z. W. Geem, Comput. Struct. 2004, 82(9), 781.
[35] B. Farshi, A. Alinia-Ziazi, Int. J. Solids Struct. 2010, 47(18), 2508.
[36] L. Li, Z. Huang, F. Liu, Q. Wu, Comput. Struct. 2007, 85(7), 340.
[37] O. Hasançebi, S. K. Azad, S. K. Azad, Int. J. Optim. Civil Eng 2013, 3(2), 209.
[38] H. Adeli, S. Kumar, J. Aerosp. Eng. 1995, 8(3), 156.
[39] K. Imai, L. A. Schmit, J. Struct. Div. 1981, 107(5), 745.
[40] A. Kaveh, S. Talatahari, Struct. Multi. Optim. 2011, 43(3), 339.
[41] R. Sedaghati, Int. J. Solids Struct. 2005, 42(21), 5848.
[42] A. Kaveh, M. Khayatazad, Comput. Struct. 2013, 117, 82.
[43] A. Kaveh, S. Talatahari, Comput. Struct. 2009, 87(17), 1129.

Mohamad Aslani received his PhD from the Department of Aerospace Engineering at Iowa State University, where he developed numerical
methods for large-scale compressible fluid flow analysis. He also holds master's and bachelor's degrees in Mechanical
Engineering from Iowa State University and the University of Tehran. He is joining the Department of Mathematics at Florida State University as
a postdoctoral research associate. His research interests are computational mechanics, machine learning, and algorithm development for
engineering-based problems.

Parnian Ghasemi received her BS degree in Civil Engineering from Sharif University of Technology, Iran. She is pursuing her PhD in
Geotech-Material Engineering at Iowa State University, USA, where she is working as a graduate research and teaching assistant. Her
research interests include pavement design, finite element analysis, application of machine learning and optimization techniques in pave-
ment design, and asphalt laboratory testing.

Amir H. Gandomi is an assistant professor of Analytics & Information Systems at the School of Business, Stevens Institute of Technology. Prior
to joining Stevens, Dr. Gandomi was a distinguished research fellow at the headquarters of the BEACON NSF center located at Michigan State Uni-
versity. He received his PhD in engineering and has been a lecturer at several universities. Dr. Gandomi has published over 130 journal
papers and 4 books. Some of those publications are now among the hottest papers in the field and collectively have been cited about 8,000
times (h-index = 46). Recently, he was named a 2017 Clarivate Analytics Highly Cited Researcher (the top 1%) and ranked 20th in the
GP bibliography among more than 10,000 researchers. He has also served as associate editor, editor, and guest editor for several prestigious
journals and has delivered several keynote/invited talks. Dr. Gandomi is part of a NASA technology cluster on Big Data, Artificial Intelli-
gence, Machine Learning, and Autonomy. His research interests are global optimization and (big) data mining using machine learning and
evolutionary computation in particular.

How to cite this article: Aslani M, Ghasemi P, Gandomi AH. Constrained mean-variance mapping optimization for truss optimization
problems. Struct Design Tall Spec Build. 2018;27:e1449. https://doi.org/10.1002/tal.1449
