
XXIV ICTAM, 21-26 August 2016, Montreal, Canada

ON CONVERGENCE SPEEDUP IN TOPOLOGY OPTIMIZATION

Ole Sigmund∗
Department of Mechanical Engineering, Solid Mechanics, Technical University of Denmark

Summary This paper introduces a simple-to-implement, multiscale-inspired approach for improving convergence speed in topology optimization.
To ensure convergence toward globally optimal Michell-like structures, topology optimization approaches often apply continuation
schemes in which e.g. the penalization exponent is increased gradually. In this way, one nudges the process from an initially convex
problem (the variable thickness sheet problem) toward a penalized, black-and-white solution. Iteration counts for such continuation approaches
typically run into many hundreds or even thousands. By introducing an extra constraint that limits the p-norm of the difference between the
local density field and a smoothed (homogenized) one, the continuation scheme can be eliminated. It is demonstrated that this approach
systematically creates extremely detailed and highly optimized Michell-like structures within at most 200 iterations.

THEORY AND METHOD

The standard density-based minimum compliance topology optimization problem reads

\begin{aligned}
\min_{\boldsymbol{\rho}} : \quad & C(\boldsymbol{\rho}) \\
\text{s.t.} : \quad & \mathbf{K}(\hat{\boldsymbol{\rho}}(\boldsymbol{\rho}))\,\mathbf{D} = \mathbf{F} \\
: \quad & V(\hat{\boldsymbol{\rho}}(\boldsymbol{\rho})) \le V^{*} \\
: \quad & \mathbf{0} \le \boldsymbol{\rho} \le \mathbf{1}
\end{aligned}
\qquad (1)

where ρ is the vector of element-based design variables, ρ̂(ρ) are the (density-)filtered, physical design variables using a filter
size rmin, C(ρ) is the compliance, K, F and D are the global stiffness matrix, load and displacement vectors, respectively, and V
and V∗ are the volume and the volume bound, respectively. The optimization problem (1) is set up for a density filtering approach;
however, it can easily be simplified to a sensitivity filtering approach by substituting ρ̂(ρ) with ρ, or applied with PDE-based
filtering [1]. The relation between density design variables and local (isotropic) stiffness is modelled by the SIMP interpolation
scheme
E(\hat{\rho}) = E_{\min} + \hat{\rho}^{\,q}\,(E_0 - E_{\min}), \qquad E_{\min} \ll E_0 \qquad (2)
where E0 and Emin are Young’s modulus of solid and void material, respectively, and q is the penalization factor.
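As a concrete illustration (a minimal sketch, not taken from the paper), the modified SIMP interpolation (2) and its derivative, which enters the chain rule for the compliance sensitivities, can be written in a few lines of Python; the values E0 = 1, Emin = 10⁻⁹ and q = 3 below are common choices assumed here only for the example.

```python
import numpy as np

def simp_stiffness(rho_hat, E0=1.0, Emin=1e-9, q=3.0):
    """Modified SIMP interpolation: E = Emin + rho_hat^q * (E0 - Emin)."""
    return Emin + rho_hat**q * (E0 - Emin)

def simp_stiffness_derivative(rho_hat, E0=1.0, Emin=1e-9, q=3.0):
    """dE/drho_hat, needed when assembling compliance sensitivities."""
    return q * rho_hat**(q - 1.0) * (E0 - Emin)

# Example: interpolated stiffness for a few filtered densities
print(simp_stiffness(np.array([0.0, 0.5, 1.0])))  # -> [1e-09, ~0.125, 1.0]
```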
In its standard form, the optimization problem (1) is solved by selecting an appropriate penalization factor (e.g. q = 3) and
filter radius and is then run until convergence. However, this usually results in convergence towards suboptimal topologies,
where design features tend to agglomerate, resulting in suboptimal objective values and feature sizes much bigger than those
allowed by the filter size. In order to circumvent this, researchers often use continuation approaches where e.g. the (SIMP)
penalization factor is increased from 1 to 3 in steps of 0.2, cf. [2]. The increase in penalization factor is performed upon convergence
or every, say, 100 iterations. Although this scheme can probably be tuned, e.g. by a recent automatic continuation approach [3], total
iteration counts are often reported to exceed a thousand.
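For illustration only, a bare-bones version of such a continuation loop might look as follows; `opt_step` stands in for one design update (e.g. an OC or MMA step), and the convergence test and parameter values are assumptions, not prescriptions from the paper.

```python
import numpy as np

def solve_with_continuation(opt_step, x, q_start=1.0, q_end=3.0, dq=0.2,
                            iters_per_level=100, tol=1e-3):
    """Continuation heuristic: raise the SIMP penalization q in steps of dq,
    moving on either when the design has converged or after a fixed budget."""
    total_iters = 0
    for q in np.arange(q_start, q_end + 1e-9, dq):
        for _ in range(iters_per_level):
            x_new = opt_step(x, q)           # one OC/MMA-type design update
            total_iters += 1
            converged = np.max(np.abs(x_new - x)) < tol
            x = x_new
            if converged:
                break                        # advance to the next q level
    return x, total_iters
```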
To reduce the iteration count, the original optimization formulation (1) is augmented with an extra constraint

\begin{aligned}
\min_{\boldsymbol{\rho}} : \quad & C(\boldsymbol{\rho}) \\
\text{s.t.} : \quad & \mathbf{K}(\hat{\boldsymbol{\rho}}(\boldsymbol{\rho}))\,\mathbf{D} = \mathbf{F} \\
: \quad & V(\hat{\boldsymbol{\rho}}(\boldsymbol{\rho})) \le V^{*} \\
: \quad & \frac{\|\bar{\boldsymbol{\rho}}(\hat{\boldsymbol{\rho}}(\boldsymbol{\rho})) - \boldsymbol{\rho}_c\|_p}{\|\boldsymbol{\rho}_c\|_p} \le \epsilon,
\qquad \big( V(\boldsymbol{\rho}_c) = V^{*} \big) \\
: \quad & \mathbf{0} \le \boldsymbol{\rho} \le \mathbf{1}
\end{aligned}
\qquad (3)

where ‖ · ‖p denotes the p-norm, ϵ is a small number that sets the allowed error, ρ̄ is a smoothed version of the physical density
field using a large filter radius Rmin, and ρc is an auxiliary smoothed density field, with subscript c for “coarse” to make the
association to multiscale approaches, although its meaning and function can be seen from several different perspectives. The
basic requirement on the coarse field ρc is that it satisfies the volume constraint. If this is fulfilled and ϵ is small enough,
the volume constraint on the physical density field (third line of (3)) is automatically satisfied. However, the
volume constraint is maintained in the optimization problem since it tends to stabilize convergence and allows some freedom
in selecting ϵ. The choice of coarse-scale filter radius Rmin for the ρ̄ constraint influences the distance between local features,
i.e. it introduces a weakly enforced maximum length scale both on solid and on void regions of the optimized designs.

∗ Email: sigmund@mek.dtu.dk

Figure 1: a) Optimized Michell-like half-cantilever structure based on the continuation approach. b) Result without continuation.
c) Result with the proposed scheme and a large coarse-scale filter radius (radius indicated with circle). d) Result with the proposed
scheme and a small coarse-scale filter radius (radius indicated with circle) and a finer mesh.
The added p-norm constraint in (3) effectively introduces a local smoothed density constraint everywhere in the design
domain. Hence, choosing different properties of the coarse field ρc allows a range of interesting features and effects to be
controlled. Here we discuss and apply a concept where ρc is obtained as the optimized solution of the variable thickness
sheet problem: one first solves the convex q = 1 SIMP problem (i.e. the variable thickness sheet problem in 2d), possibly
on a coarse mesh, which afterwards can be smoothed and projected onto the fine mesh. This added local density constraint prevents
design features from agglomerating and hence ensures convergence to very detailed and highly optimal Michell-like solutions.
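A minimal sketch of evaluating the added constraint, assuming 2d density fields stored as NumPy arrays, is given below. The box filter standing in for the coarse-scale smoothing, the function names and the values of p and ϵ are illustrative assumptions; in the method itself, ρc would be the (smoothed, projected) solution of the coarse q = 1 problem with V(ρc) = V*.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smooth(field, R):
    """Crude stand-in for a density filter of radius R (simple box filter)."""
    return uniform_filter(field, size=max(int(2 * R) + 1, 1), mode="nearest")

def coarse_match_constraint(rho_phys, rho_c, R_coarse, p=4, eps=0.05):
    """Value of ||rho_bar - rho_c||_p / ||rho_c||_p - eps; feasible if <= 0."""
    rho_bar = smooth(rho_phys, R_coarse)               # smoothed physical field
    num = np.sum(np.abs(rho_bar - rho_c) ** p) ** (1.0 / p)
    den = np.sum(np.abs(rho_c) ** p) ** (1.0 / p)
    return num / den - eps
```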

RESULTS

Preliminary results are presented in Fig. 1. Subfigure a) shows an optimized half beam obtained through the continuation
approach using 1472 iterations. Subfigure b) shows the optimized half beam obtained without the continuation approach after
273 iterations. Neither the visual resemblance to an analytical Michell structure nor a quantitative comparison to subfigure a)
in terms of objective function speaks in favour of subfigure b). Subfigures c) and d) are obtained using the following strategy:
A) ρc is obtained from running (1) on a coarse mesh. B) Based on A), (3) is run with q = 2 and a minimum filter size of
rmin = 1.2 times the element size, either for up to 100 iterations or until a certain measure of non-discreteness is reached, whichever comes
first; then the added constraint is turned off. After 150 iterations the local filter is switched off, and the optimization is
continued up to a maximum of 200 iterations. This makes for a fair comparison of approaches and settings if the goal is to
provide an efficient algorithm that can converge to excellent designs in fewer than 200 iterations. Clearly, there may be settings
that can speed up convergence even further. Usually one would not remove the local filter entirely; however, it is necessary
here in order not to penalize fine-detail structures in terms of optimized compliance values: fine-featured structures have
large perimeters and hence more intermediate-density elements than coarse-scale structures. The resulting designs in
subfigures c) and d) are obtained using two different coarse-scale filter sizes Rmin, as indicated with circles. Both the visual
comparison to the analytical Michell solution and the objective function values compare favourably with those of subfigure a),
which were obtained with the inefficient standard continuation approach.
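A rough sketch of this two-stage schedule is given below, assuming the coarse variable thickness sheet solve, the fine-scale design update, the measure of non-discreteness and the coarse-to-fine projection are available as callables returning/accepting NumPy arrays; every name and threshold is a placeholder rather than part of the published method.

```python
def two_stage_optimization(coarse_solve, fine_step, grayness, upsample,
                           q=2.0, ndisc_tol=0.05, max_iters=200):
    """Stage A: convex q = 1 (variable thickness sheet) solve on a coarse mesh
    gives rho_c. Stage B: fine-scale run with q = 2; the coarse-match
    constraint is dropped after 100 iterations or once the design is nearly
    discrete (whichever comes first), the local filter is dropped after 150
    iterations, and the run stops at 200 iterations."""
    rho_c = upsample(coarse_solve(q=1.0))    # stage A, projected to the fine mesh
    x = rho_c.copy()                         # start the fine run from the coarse field
    constraint_on = True
    for it in range(1, max_iters + 1):
        filter_on = it <= 150
        x = fine_step(x, rho_c, q=q,
                      constraint_on=constraint_on, filter_on=filter_on)
        if constraint_on and (it >= 100 or grayness(x) < ndisc_tol):
            constraint_on = False            # "whichever comes first"
    return x
```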

References
[1] B. Lazarov and O. Sigmund. Filters in topology optimization based on Helmholtz-type differential equations. International Journal for Numerical
Methods in Engineering, 86(6):765–781, 2011.
[2] O. Sigmund, N. Aage, and E. Andreassen. On the (non-)optimality of Michell structures. Submitted, 2016.
[3] S. Rojas-Labanda and M. Stolpe. Automatic penalty continuation in structural topology optimization. Structural and Multidisciplinary Optimization,
52(6):1205–1221, 2015.
