DOI: 10.1002/tal.1449
RESEARCH ARTICLE
1 INTRODUCTION
Weight minimization of skeletal structures has historically been used as a benchmark for heuristic and evolutionary optimization algorithms. The objective (representing the weight of the structure) and the constraints (i.e., mechanical properties of the structure such as nodal displacement, member stress,[1, 2] or vibration[3, 4]) lead to nonconvex systems with local optima. The set of design variables differs for each problem and includes (but is not limited to) the structure topology,[5-7] its shape,[8, 9] its size,[10, 11] and combinations of these factors.[12] To solve such problems reliably and efficiently, researchers have developed specialized optimization algorithms,[10, 13] as well as hybridized and tuned variants of existing techniques.[14-16]
Population-based evolutionary search algorithms such as the genetic algorithm (GA),[17] particle swarm,[18] ant colony,[19] bee colony,[20] big bang-big crunch,[21] krill-herd optimization,[22] firefly,[23] and cuckoo search[24] (see Ishibuchi et al. and Gandomi et al.[25, 26] for a review) have been employed in recent years to solve complex real-world problems. These algorithms use a variety of tools to explore favorable regions of the search domain and then exploit them further to locate global minima. A change in an algorithm's behavior is sometimes accomplished manually (i.e., by changing its parameters) or, more rigorously, through an adaptive strategy. Even with all these techniques available in the literature, nonconvex global optimization remains challenging because irregularities in the objectives, the constraints, or both narrow the margin of success for any specific algorithm: an algorithm that works well for one problem may be unsuitable for another, so there has always been a desire to design and implement new ideas in this field.
Borrowing the ideas of selection, mutation, and crossover from evolutionary algorithms and employing a mapping method, Erlich et al.[27] designed a new optimization algorithm, namely, mean-variance mapping optimization (MVMO). MVMO shares philosophical properties with population-based stochastic algorithms. For instance, elitism is intrinsic in an evolutionary algorithm, and MVMO similarly preserves an archive of the best points, which implicitly resembles elitism. Moreover, the archive of n-best solutions guides the construction of a mapping function based on the mean and variance of the archive, upon which the new offspring (child) essentially depends. Given the mean, the variance, and a parameter called a shape variable, a transformation can be constructed and, as explained later, this transformation automatically switches the behavior of the algorithm, that is, whether it explores or exploits the search domain. Finally, the elitism criteria dictate that the parent is the first-ranked solution in the archive, and a new generation is born following the constructed mapping function. One of the important features of MVMO is its rapid rate of convergence while simultaneously carrying out a global search. This feature is especially important for
problems with expensive function evaluations where a moderately good solution is assumed to be acceptable.[28, 29]
MVMO was initially designed to work with a single solution on a normalized domain (i.e., x_i ∈ [0, 1]). However, if multiple sessions of MVMO run simultaneously, a population-based approach is achieved.[30] With extra information available from unfavorable regions, a multiparent crossover can occur to support further exploration of those regions. This extra effort has been shown to make the algorithm robust and reliable in the sense of a general optimization algorithm.[30-32]
MVMO is an algorithm for unconstrained problems, and its performance has never been tested in a real-world application where multiple nonlinear constraints exist and the convergence rate is important. For the truss structures considered in this paper, these constraints are nodal displacements and member stresses; if a solution point violates any of them, a penalty is applied through an adaptive quadratic function. Therefore, in this paper, the performance of constrained single-solution and population-based MVMO is evaluated for the global weight minimization of truss structures. The parameters of the algorithm (MVMO and the penalty function) are tuned, and results are presented for a variety of population sizes (linearly proportional to the number of variables).
The remainder of this paper is organized as follows: In Section 2, two variants of MVMO (single-solution and population-based) are presented,
followed by an adaptive penalty function to handle constraints. In Section 3, the weight minimization problem of truss structures is formally defined,
and finally, Section 4 describes the results for a variety of cases.
where f_s > 0 is a coefficient and (j) indexes the points in the archive. The value s_i is used to build the vector of shape variables s_i = (s_i1, s_i2) for the ith variable of the problem; the algorithm for finding this vector is explained later. We also note that because v_i is less than one, s_i is always positive.
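The construction of s_i from the archive statistics can be sketched as follows. This is a minimal illustration assuming the standard MVMO relation s_i = −ln(v_i)·f_s from the original formulation; function and variable names are illustrative, and the small numerical guard against a zero variance is an added assumption:

```python
import math

def shape_variable(archive_values, fs):
    """Compute mean and shape variable s_i for one normalized variable.

    archive_values: values of variable i stored in the n-best archive,
                    each normalized to [0, 1].
    fs:             positive scaling factor (Equation 1).
    """
    n = len(archive_values)
    mean = sum(archive_values) / n
    # Population variance of the archived values; on [0, 1] the variance
    # v_i is less than one, so -ln(v_i) and hence s_i are positive.
    var = sum((x - mean) ** 2 for x in archive_values) / n
    # Numerical guard (an added assumption): avoid log(0) when all
    # archived values coincide.
    var = max(var, 1e-12)
    s_i = -math.log(var) * fs
    return mean, s_i
```

A smaller archive variance (i.e., a converged archive) produces a larger s_i, which, per the mapping function below, pulls new samples toward the archive mean.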
FIGURE 1 Mapped points for (a) a conservative search with large values for the shape variables and (b) an exploration search with small values for the shape variables
The inputs of the mapping function are x_rand and a function H, which depends on the mean value and the shape variables:

x_i = M(x_rand, H) = H(x̄_i, s_i, x_rand) + (1 − H_1 + H_0) x_rand − H_0,   (3)
If the shape variables are equal, the mapping function is symmetric, as shown in Figure 2b. Asymmetric mapping is achieved when the shape variables are not equal, as illustrated by the contours in Figure 3a,b. Also, as s_i → (∞, ∞), the new point converges to the mean value x̄_i. The flowchart of the single-solution MVMO is summarized in Figure 4.
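The mapping of Equation 3 can be illustrated with a short sketch. The transformation function H below follows the form used in the original MVMO papers, h(x̄, s1, s2, x) = x̄(1 − e^{−x·s1}) + (1 − x̄)·e^{−(1−x)·s2}; this specific H is an assumption where the excerpt does not reproduce it, and names are illustrative:

```python
import math

def h(x_mean, s1, s2, x):
    """Assumed MVMO transformation function h(x_mean, s1, s2, x)."""
    return (x_mean * (1.0 - math.exp(-x * s1))
            + (1.0 - x_mean) * math.exp(-(1.0 - x) * s2))

def mapped_point(x_mean, s1, s2, x_rand):
    """Equation 3: map a uniform sample x_rand in [0, 1] toward the
    archive mean, controlled by the shape variables s1 and s2."""
    hx = h(x_mean, s1, s2, x_rand)   # H evaluated at the random sample
    h0 = h(x_mean, s1, s2, 0.0)      # H_0: H evaluated at x = 0
    h1 = h(x_mean, s1, s2, 1.0)      # H_1: H evaluated at x = 1
    return hx + (1.0 - h1 + h0) * x_rand - h0
```

By construction, x_rand = 0 and x_rand = 1 map to 0 and 1, so the normalized domain is preserved, and as the shape variables grow the mapped point collapses onto the archive mean, reproducing the exploitation behavior described above.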
FIGURE 2 Mapping function for (a) constant mean and (b) symmetric mapping when shape variables are equal
Once the mean, variance, and shape variables are constructed, a number of variables (the number of mapped variables, m) are chosen randomly to be varied based on the mapping function. This number introduces another parameter into the MVMO algorithm; thus, the user needs to tune the algorithm for the problem at hand with the following parameters:
Appropriate assignment of these parameters mimics both the exploration and exploitation of an evolutionary algorithm with implicit elitism. Numerical experiments show that an archive size of 4 ≤ n ≤ 6 is appropriate for the problems of interest in this study. Generally, a larger archive size corresponds to a more conservative search and, as a result, requires more computation. Also, the factor f_s in Equation 1 yields a conservative search (when accuracy is needed) if f_s > 1 and the opposite (a global search) if f_s < 1. Ideally, an adaptive strategy is required, which is the topic of the next section.
where f_s^min = 0.8, f_s^max = 2, and 𝜇 = #iteration∕N_tot. This functionality allows a global search initially and a quadratic increase in f_s thereafter. In the original algorithm, a random number in the range [0, 1] was used; however, numerical experiments showed that the progressive increase of f_s in Equation 5 gives decent results as well.
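Equation 5 itself is not reproduced in this excerpt; based on the description (an initial global search followed by a quadratic increase of f_s between f_s^min = 0.8 and f_s^max = 2), one plausible reconstruction is:

```python
def fs_schedule(iteration, n_total, fs_min=0.8, fs_max=2.0):
    """Progressive scaling factor: global search early (fs < 1),
    conservative search late (fs > 1), increasing quadratically.
    Assumed form: fs = fs_min + mu**2 * (fs_max - fs_min)."""
    mu = iteration / n_total
    return fs_min + mu ** 2 * (fs_max - fs_min)
```

Under this reconstruction, f_s crosses 1 (the switch from global to conservative search) a little after the midpoint of the run.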
The search diameter Δd_i is responsible for the change in d_i that determines one of the shape variables; thus, it can adopt a behavior similar to that of the factor f_s. It is desirable to have d_i oscillate around s_i, and therefore Δd_i > 1. With this in mind, the following algorithm is chosen, with Δd_0^min = 0.01 and Δd_0^max = 0.4. Thus, as the algorithm evolves, Δd_i oscillates around s_i with a bound that scales maximally with 1 + Δd_0^max.
N_GP = round(g_GP N_P),   (7)
g_GP = g_GP^initial + 𝜇 (g_GP^final − g_GP^initial),

where g_GP^initial = 0.7 and g_GP^final = 0.2. Thus, as the algorithm evolves, the set of good points becomes smaller and smaller.
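Equation 7 translates directly into code; variable names are illustrative:

```python
def n_good_points(iteration, n_total, n_particles,
                  g_initial=0.7, g_final=0.2):
    """Equation 7: the fraction of 'good' particles shrinks linearly
    from 70% to 20% of the population as the search progresses."""
    mu = iteration / n_total
    g = g_initial + mu * (g_final - g_initial)
    return round(g * n_particles)
```

For a population of 10 particles, the good-point set therefore shrinks from 7 particles at the start of the run to 2 at the end.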
In the original algorithm, a local search method is invoked randomly to create the pool of particles along with the fitness-evaluation step (the hybrid version). However, for the current application, similar results are obtained without this extra effort, possibly because the irregularity of the objective function causes gradient-based algorithms to perform poorly.
minimize f(x)
with respect to x = (x1 , · · · , xn ) (8)
subject to gi (x) ≤ 0, ∀i = 1, · · · , m,
where x is the vector of variables, f(x) is the objective function, and g_i(x) is the ith normalized constraint function. To handle the constraints, a constraint-handling strategy is required. A simple way to redefine the problem as an unconstrained one is to rule out sample points for which the constraints are not satisfied. This approach has been widely used in population-based heuristic methods such as GA, in which a newly generated solution does not enter the population unless it satisfies all the constraints. This approach works for particular problems or algorithms, but it can become inefficient unless a rigorous approach for finding a feasible point is available.
Another widely used approach is a penalty function, which penalizes (increases, in the case of a minimization problem) the objective value in proportion to the degree of closeness to, or violation of, each constraint. The redefined unconstrained optimization problem is
minimize F(x, k) ≡ f(x) + p(x, k)
(9)
with respect to x = (x1 , · · · , xn ),
where the function p(x, k) ≥ 0 is yet to be defined based on one of two major approaches, namely, interior and exterior penalty methods. Theoretically, the unconstrained problem has converged once ||∇_x F(x, k)|| ≤ 𝜀. Ideally, the magnitude of the penalty function is small initially so that an initial guess can be constructed. In interior penalty methods, the magnitude of the penalty rapidly increases (ideally going to infinity) as the point approaches the constraint limits. The penalty function is usually defined as an inverse or logarithmic function of the constraints that approaches zero during convergence toward the optimal point. Thus, this method is useful if one wishes to keep the iterates feasible on the way to the optimal point. In exterior penalty methods, however, solution points are allowed to violate the constraints with a finite amount of penalization, which ideally goes to infinity as the solution evolves. Thus, the solution approaches the optimal point, usually from the infeasible part of the search domain, and the penalty adaptively changes to limit the violation.
Generally, exterior penalty methods have been shown to be more robust than their interior counterparts. Moreover, given the nature of the constraints in truss structures, an exterior penalty method is better suited because the optimal point usually occurs where one or more constraints are active. In this study, we use the following form of exterior penalty method:
FIGURE 5 Flowchart for the population-based mean-variance mapping optimization (MVMO) algorithm
p(x, k) = k ∑_{i=1}^{m} V_i(x),   (10)
The penalty scaling factor is defined as k = (𝛼𝜇)^𝛽 with positive values for 𝛼 and 𝛽. If we set 𝜇 = #iteration∕N_tot, the penalty scaling function increases monotonically, which constitutes an adaptive penalty function. This feature makes the above function ideal for the exterior penalty method. Numerical experiments show that by setting 𝛼 = 10^6 and 𝛽 = 2, results with acceptable accuracy are obtained, that is, g_i(x) ≤ 10^−6. Another approach is to multiply the penalty value by the objective function, that is, p(x, k) = k f(x) ∑_{i=1}^{m} V_i(x). Both approaches result in similar convergence rates for the problems studied in this paper.
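The penalty of Equation 10 with the adaptive scaling k = (𝛼𝜇)^𝛽 can be sketched as follows; the quadratic violation term V_i(x) = max(0, g_i(x))² is an assumption, chosen to be consistent with the "adaptive quadratic function" described in the introduction:

```python
def penalty(constraints, iteration, n_total, alpha=1e6, beta=2.0):
    """Adaptive quadratic exterior penalty (Equation 10).

    constraints: values g_i(x) of the normalized constraints; a point
                 is feasible when every g_i(x) <= 0.
    V_i(x) = max(0, g_i(x))**2 is assumed for the violation term.
    """
    mu = iteration / n_total
    k = (alpha * mu) ** beta          # monotonically increasing scaling
    return k * sum(max(0.0, g) ** 2 for g in constraints)
```

Feasible points incur no penalty at any iteration, while the cost of a fixed violation grows as 𝜇², matching the exterior-penalty behavior described above.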
The rapid increase in the penalty function ensures the accuracy of the final answer. However, the Hessian of F(x, k) becomes ill-conditioned, and
thus, the minimization problem becomes more difficult as the value of the penalty scaling function increases (slowing down the convergence rate),
which is a major issue for gradient-based algorithms.
Efficient structural design problems have been the focus of several studies in the literature.[33] These problems vary in their definitions of objective and constraints. In this study, the focus is on structural weight minimization subject to constraints on nodal displacements and member stresses.
From an analytical point of view, the problem of weight minimization of a truss structure that is subjected to tension and displacement limits is
defined as
minimize W(A) = 𝜌 g ∑_{i=1}^{NEL} A_i l_i

where g is the gravitational acceleration, 𝜌 is the material density, A is the vector of cross-sections, NEL is the number of elements in the structure, NNO is the total number of nodes, A_i^min and A_i^max are the lower and upper bounds of A_i, u_(x,y,z),k is the displacement of the kth node in the x, y, or z direction with lower and upper limits u_(x,y,z),k^min and u_(x,y,z),k^max, 𝜎_i is the stress in the ith element with lower and upper limits 𝜎_i^min and 𝜎_i^max, and finally l_i is the length of the ith element:
l_i = √((x_1i − x_2i)² + (y_1i − y_2i)² + (z_1i − z_2i)²),   (13)
where x(1,2)i , y(1,2)i , z(1,2)i are coordinates of the nodes of the ith element.
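The weight objective and Equation 13 translate directly into code; this is a minimal sketch assuming consistent units, with illustrative data structures (a node-coordinate map and a list of member end-node pairs):

```python
import math

def truss_weight(areas, nodes, elements, rho, g):
    """Objective W(A) = rho * g * sum_i A_i * l_i.

    areas:    cross-section A_i for each member (the design variables).
    nodes:    dict node_id -> (x, y, z) coordinates.
    elements: list of (node1, node2) pairs, one per member.
    rho, g:   material density and gravitational acceleration.
    """
    total = 0.0
    for a, (n1, n2) in zip(areas, elements):
        (x1, y1, z1), (x2, y2, z2) = nodes[n1], nodes[n2]
        # Member length l_i from Equation 13.
        li = math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2)
        total += a * li
    return rho * g * total
```

Because the node coordinates are fixed, the member lengths can be precomputed once, making each objective evaluation a single dot product over the cross-section vector.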
The cross-sections of the members are the only design variables in this problem and are treated as continuous variables; the lengths of the elements are kept constant. The finite element stiffness (displacement) method is implemented to evaluate nodal displacements and member stresses. The method has been verified using benchmark problems with analytical solutions.
4 TEST PROBLEMS
In this section, the single-solution and population-based MVMO algorithms and the proposed penalty method are analyzed using several truss weight minimization problems. The population-based simulations use population sizes of 2k, 5k, and 10k, where k is the number of variables; the corresponding results are marked MVMO-2, MVMO-5, and MVMO-10. The best solution obtained after 10 independent runs (i.e., Monte Carlo simulations) and the number of function evaluations (NFE) needed to reach this result are reported in the tables. For all algorithms (MVMO-2, 5, 10), the convergence history in terms of the archive-average solution versus function evaluations is shown as well. To perform a basic comparison of the algorithms in terms of convergence and global optimality, the convergence history obtained from a binary-coded GA is also included in those plots. In the GA implementation, we set the population size to 50, the crossover probability to 1, and the mutation probability to 0.01, and the top 10% of the genes are set aside to enforce elitism at every generation.
Because the computational cost depends directly on the number of function calls, we set a maximum allowable NFE. Compared to reporting CPU time on a particular machine, NFE is a more accurate measure, as the computational architecture, resources, coding style, and so forth differ for each implementation of a specific algorithm. This number is 2 × 10³ k (k = NEL, the number of variables), which sets N_tot (the number of iterations) to 1 × 10³, 200, and 100 for MVMO-2, 5, and 10, respectively.
A number of truss designs reported in the literature violate the constraints slightly.[34] Because the goal of this paper is to show the capabilities of the proposed method, in some cases (indicated by the symbol ∗) the cross-section areas result in a slight violation of the stress and displacement constraints; this relaxation allows the algorithm to find a lower weight. The final solution appears under the column labeled MVMO* in the tables. These cases are considered only when the violation threshold is greater than 0.5%.
Finally, for all cases, simulations are initialized with a uniform random distribution over an initial range. The lower bound of this range is defined for each problem, and the upper bound is chosen to be 100 times the lower bound.
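The initialization scheme described above can be sketched as follows (function and parameter names are illustrative):

```python
import random

def initialize_population(lower_bounds, pop_size, factor=100.0, seed=None):
    """Uniform random initialization in [lb, factor * lb] for each
    variable, as described for the test problems (upper bound of the
    initial range = 100x the lower bound)."""
    rng = random.Random(seed)
    return [[rng.uniform(lb, factor * lb) for lb in lower_bounds]
            for _ in range(pop_size)]
```

Each of the 10 independent runs would draw a fresh population this way (e.g., with a different seed per run).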
TABLE 1 Cross-section areas and total weights for the 10-bar truss
Variables (in2 ) Lee & Geem*[34] Li et al.*[36] Farshi & Alinia-Ziazi[35] MVMO MVMO*
FIGURE 7 10-bar truss convergence history for single-point, population-based mean-variance mapping optimization (MVMO) and genetic
algorithm
Members are constrained in displacement and stress to 2 in and 25 ksi, respectively. The lower bound of cross-sectional areas is 0.1 in2 . Following
Figure 6, the loading condition in this problem is P1 = 150 kips and P2 = 50 kips.
Table 1 compares the solution from MVMO with a harmony search,[34] a particle swarm optimizer,[36] and the method of force-centers.[35] As shown, in the best case MVMO obtains a design with a lower structure weight after approximately 12,200 function evaluations. Figure 7 depicts the convergence properties of MVMO. Single-solution MVMO has an initial rapid convergence that obtains a solution within less than 1% of the optimal weight; however, MVMO-2 converges faster than the other algorithms. Although the GA shows a good initial convergence rate, it fails to locate a solution within 10% of the optimum. The study by Lee and Geem[34] contains violated constraints (∼3%). When the constraints are relaxed similarly to that study,[34] MVMO is able to find a better optimum for the total weight of the structure.
FIGURE 9 17-bar truss convergence history for single-point, population-based mean-variance mapping optimization (MVMO) and genetic
algorithm (GA)
FIGURE 11 18-bar truss convergence history for single-point, population-based mean-variance mapping optimization (MVMO) and genetic
algorithm (GA)
𝜎_i^b = − (K E A_i) / L_i²,   (14)

where K is the buckling constant (K = 4), E is the modulus of elasticity (E = 10,000 ksi), and L_i is the length of the element. Following the figure, a downward load of F = 20 kips is applied on nodes (1), (2), (4), (6), and (8). The members are divided into four groups, indicated by a bar, as follows:
• Ā 1 : A1 , A4 , A8 , A12 , A16
• Ā 2 : A2 , A6 , A10 , A14 , A18
• Ā 3 : A3 , A7 , A11 , A15
• Ā 4 : A5 , A9 , A13 , A17
with the minimum cross-section area of 0.1 in2 for each member.
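The buckling constraint of Equation 14 is a one-line computation; the sketch below assumes kip-inch units as in the problem statement (areas in in², lengths in in, stresses in ksi):

```python
def buckling_limit(area, length, K=4.0, E=10000.0):
    """Equation 14: allowable compressive (buckling) stress in ksi,
    sigma_b = -K * E * A / L**2, for a member of cross-section `area`
    (in^2) and length `length` (in). The limit is negative because
    it bounds compressive stress."""
    return -K * E * area / length ** 2
```

A member then satisfies the buckling constraint when its compressive stress 𝜎_i is not more negative than this limit, i.e., 𝜎_i ≥ 𝜎_i^b.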
Following the trend of the previous problems, we present two sets of results for MVMO. Under both conditions, MVMO finds a better optimal point than studies with similar conditions. As previously mentioned, MVMO is characterized by its rapid rate of convergence, as can be seen for this problem in Figure 11. All algorithms show relatively good convergence for this problem; however, single-solution MVMO does exceptionally well compared to the population-based MVMO and the GA. This behavior can be attributed to the size of the problem: there are only four design variables, so it is considered a relatively easy problem. The results for the 18-bar truss are shown in Table 3. Again, the results indicate that the algorithm is able to find a solution that is better than or equal to the results reported in the literature.
TABLE 3 Cross-section areas and total weights for the 18-bar truss
Variables (in2 ) Imai & Schmit*[39] Lee & Geem*[34] MVMO MVMO*
TABLE 4 Cross-section areas and total weights for the 72-bar truss
Variables (in2 ) Camp[1] Kaveh & Khayatazad[42] Kaveh & Talatahari[43] MVMO
FIGURE 13 72-bar truss convergence history for single-point, population-based mean-variance mapping optimization (MVMO) and genetic
algorithm (GA)
MVMO results in the best solution. The GA also converges quickly in this problem initially but fails to exploit the search domain further.
1. Px = +4, 449.741N (i.e., 1,000 lbf) on nodes 1, 6, 15, 20, 29, 34, 43, 48, 57, 62 and 71
2. Py = −44, 497.412N (i.e. 10,000 lbf) on nodes 1, 2, 3, 4, 5, 6, 8, 10, 12, 14, 15, 16, 17, 18, 19, 20, 22, 24, 26, 28, 29, 30, 31, 32, 33, 34, 36, 38, 40,
42, 43, 44, 45, 46, 47, 48, 50, 52, 54, 56, 57, 58, 59, 60, 61, 62, 64, 66, 68, 70, 71, 72, 73, 74, 75
3. Both of the above loadings acting simultaneously.
This loading condition divides the 200-bar truss into 29 subgroups, with the cross-section of each group as an optimization variable, as indicated in Table 5. The resultant compressive/tensile stress in each member must be less than 68.97 MPa (i.e., 10,000 psi), which creates 1,200 constraints for this problem. There is no constraint on nodal displacement. The minimum cross-section area of all members is 0.000064516 m² (i.e., 0.1 in²).
Results for this case are summarized in Table 6. We compare the results with the lowest weights reported by Lamberti,[13] which involve a 0.0709% violation of the constraints. MVMO is able to find a point with a total weight lower than the results reported by Lamberti.[13] Unlike the other problems, the NFE limit is set to 29 × 10³ for this problem because of its large computational cost. The convergence plot of Figure 15 compares the single-solution and population-based MVMO algorithms with the binary-coded GA. Once again, we can see the excellent convergence rate of single-solution MVMO and the global convergence of the population-based algorithms.
TABLE 6 Cross-section areas and total weights for the 200-bar truss
Variables (in2 ) Lamberti*[13] MVMO* Variables (in2 ) Lamberti*[13] MVMO*
FIGURE 15 200-bar truss convergence history for single-point, population-based mean-variance mapping optimization (MVMO) and genetic
algorithm (GA)
5 CONCLUSION
The MVMO algorithm is investigated on a variety of small- to large-scale spatial truss weight minimization problems. Both single-solution and population-based variants are thoroughly discussed, and an adaptive (dynamic) quadratic exterior penalty method is employed and tuned to handle the nonlinear mechanical and geometrical constraints of truss structure problems. Adaptive strategies are intrinsic in MVMO, so the transition from exploration to exploitation occurs automatically. For a fixed computational cost, the qualitative behavior in terms of convergence history is presented for single-solution MVMO, population-based MVMO, and a binary-coded GA, with results that generally exhibit rapid convergence for single-solution MVMO, as well as the capability of the population-based version to carry out a global search. Quantitative results showed that, for all cases, MVMO was able to find a superior solution. In particular, MVMO with a population size of twice the number of variables consistently converged to the globally optimal solution with a minimum number of function evaluations. The combination of the mapping functions, the archive of best points, the adaptive strategies employed in MVMO, and the dynamic exterior penalty function makes it a robust and reliable tool for computationally expensive problems with hundreds of constraints where a moderately accurate solution is assumed to be sufficiently good.
REFERENCES
[1] C. Camp, M. Farshchin, Eng. Struct. 2014, 62, 87.
[2] S. Bureerat, N. Pholdee, J. Comput. Civil Eng. 2015, 30(2), 04015019.
[3] A. Kaveh, A. Zolghadr, Comput. Struct. 2012, 102, 14.
[4] H. M. Gomes, Expert Syst. Appl. 2011, 38(1), 957.
[5] A. Kaveh, A. Zolghadr, Appl. Soft Comput. 2013, 13(5), 2727.
[6] J.-P. Li, Eng. Optim. 2015, 47(1), 107.
[7] J. N. Richardson, R. F. Coelho, S. Adriaenssens, Eng. Optim. 2016, 48(2), 334.
[8] L. Pyl, C. Sitters, W. De Wilde, Adv. Eng. Software 2013, 62, 9.
[9] A. Kaveh, S. Javadi, Acta Mech. 2014, 225(6), 1595.
[10] S. Degertekin, M. Hayalioglu, Comput. Struct. 2013, 119, 177.
[11] A. Kaveh, R. Sheikholeslami, S. Talatahari, M. Keshvari-Ilkhichi, Adv. Eng. Software 2014, 67, 136.
[12] L. F. F. Miguel, L. F. F. Miguel, Expert Syst. Appl. 2012, 39(10), 9458.
[13] L. Lamberti, Comput. Struct. 2008, 86(19), 1936.
[14] A. Kaveh, S. Talatahari, Asian J. Civil Eng. 2008, 9(4), 329.
[15] H. Rahami, A. Kaveh, M. Aslani, R. Najian Asl, Iran Univ. of Sci. Technol. 2011, 1(1), 29.
[16] W. Hare, J. Nutini, S. Tesfamariam, Adv. Eng. Software 2013, 59, 19.
[17] L. Davis, Handbook of Genetic Algorithms, 1991.
[18] R. C. Eberhart, J. Kennedy, in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, 1, New York, NY 1995, 39–43.
[19] M. Dorigo, M. Birattari, T. Stutzle, IEEE Comput. Intell. Mag. 2006, 1(4), 28.
[20] D. Karaboga, B. Basturk, J. Global Optim. 2007, 39(3), 459.
[21] O. K. Erol, I. Eksin, Adv. Eng. Software 2006, 37(2), 106.
[22] A. H. Gandomi, A. H. Alavi, Commun. Nonlinear Sci. Numer. Simul. 2012, 17(12), 4831.
[23] S. Talatahari, A. H. Gandomi, G. J. Yun, Struct. Des. Tall Special Build. 2014, 23(5), 350.
[24] A. H. Gandomi, S. Talatahari, X. S. Yang, S. Deb, Struct. Des. Tall Special Build. 2013, 22(17), 1330.
[25] H. Ishibuchi, N. Tsukamoto, Y. Nojima, Evolutionary many-objective optimization: A short review, in IEEE Congress on Evolutionary Computation Hong
Kong, China 2008, 2419–2426.
[26] A. H. Gandomi, X. S. Yang, S. Talatahari, A. H. Alavi, Metaheuristic Applications in Structures and Infrastructures, Newnes Cambridge, MA 2013.
[27] I. Erlich, A mean-variance optimization algorithm, in 2010 IEEE World Congress on Computational Intelligence, Niskayuna, NY, USA IEEE 2010,
344–349.
[28] J. L. Rueda, I. Erlich, Evaluation of the mean-variance mapping optimization for solving multimodal problems, in Swarm Intelligence (SIS), Singapore
2013 IEEE Symposium on, IEEE 2013, 7–14.
[29] J. L. Rueda, I. Erlich, Mvmo for bound constrained single-objective computationally expensive numerical optimization, in Evolutionary Computation
(CEC), 2015 IEEE Congress on, Sendai, Japan IEEE 2015, 1011–1017.
[30] J. L. Rueda, I. Erlich, Testing mvmo on learning-based real-parameter single objective benchmark optimization problems, in 2015 IEEE Congress on
Evolutionary Computation (CEC), Sendai, Japan IEEE 2015, 1025–1032.
[31] I. Erlich, J. L Rueda, S. Wildenhues, F. Shewarega, Evaluating the mean-variance mapping optimization on the ieee-cec 2014 test suite, in Evolutionary
Computation (CEC), 2014 IEEE Congress on, Beijing, China IEEE 2014, 1625–1632.
[32] J. L. Rueda, I. Erlich, Hybrid mean-variance mapping optimization for solving the ieee-cec 2013 competition problems, in 2013 IEEE Congress on
Evolutionary Computation, Cancun, Mexico IEEE 2013, 1664–1671.
[33] A. Kaveh, Advances in Metaheuristic Algorithms for Optimal Design of Structures, Gewerbestrasse, Switzerland Springer 2014.
[34] K. S. Lee, Z. W. Geem, Comput. Struct. 2004, 82(9), 781.
[35] B. Farshi, A. Alinia-Ziazi, Int. J. Solids Struct. 2010, 47(18), 2508.
[36] L. Li, Z. Huang, F. Liu, Q. Wu, Comput. Struct. 2007, 85(7), 340.
[37] O. Hasançebi, S. K. Azad, S. K. Azad, Int. J. Optim. Civil Eng 2013, 3(2), 209.
[38] H. Adeli, S. Kumar, J. Aerosp. Eng. 1995, 8(3), 156.
[39] K. Imai, L. A. Schmit, J. Struct. Div. 1981, 107(5), 745.
[40] A. Kaveh, S. Talatahari, Struct. Multi. Optim. 2011, 43(3), 339.
[41] R. Sedaghati, Int. J. Solids Struct. 2005, 42(21), 5848.
[42] A. Kaveh, M. Khayatazad, Comput. Struct. 2013, 117, 82.
[43] A. Kaveh, S. Talatahari, Comput. Struct. 2009, 87(17), 1129.
Mohamad Aslani received his PhD from the department of Aerospace Engineering at Iowa State University where he developed numerical
methods for large-scale compressible fluid flow analysis. He also holds master's and bachelor's degrees from the departments of Mechanical
Engineering at Iowa State University and University of Tehran. He is joining the department of Mathematics at Florida State University as
a postdoctoral research associate. His research interests are computational mechanics, machine learning, and algorithm development for
engineering-based problems.
Parnian Ghasemi received her BS degree in Civil Engineering from Sharif University of Technology, Iran. She is pursuing her PhD in
Geotech-Material Engineering at Iowa State University, USA, where she is working as a graduate research and teaching assistant. Her
research interests include pavement design, finite element analysis, application of machine learning and optimization techniques in pave-
ment design, and asphalt laboratory testing.
Amir H. Gandomi is an assistant professor of Analytics & Information Systems at School of Business, Stevens Institute of Technology. Prior
to joining Stevens, Dr. Gandomi was a distinguished research fellow at the headquarters of the BEACON NSF Center located at Michigan State University. He received his PhD in engineering and was previously a lecturer at several universities. Dr. Gandomi has published over 130 journal
papers and 4 books. Some of those publications are now among the hottest papers in the field and collectively have been cited about 8,000
times (h-index = 46). Recently, he has been named as 2017 Clarivate Analytics Highly Cited Researcher (The Top 1%) and ranked 20th in
GP bibliography among more than 10,000 researchers. He has also served as associate editor, editor, and guest editor in several prestigious
journals and has delivered several keynote/invited talks. Dr. Gandomi is part of a NASA technology cluster on Big Data, Artificial Intelli-
gence, Machine Learning, and Autonomy. His research interests are global optimization and (big) data mining using machine learning and
evolutionary computations in particular.
How to cite this article: Aslani M, Ghasemi P, Gandomi AH. Constrained mean-variance mapping optimization for truss optimization
problems. Struct Design Tall Spec Build. 2018;27:e1449. https://doi.org/10.1002/tal.1449