λ^2 I; or M^T D^T D M < λ^2 D^T D. Therefore

    α(M) = inf { λ | M^T P M < λ^2 P, P = P^T > 0, P ∈ P }.

Therefore α(M) is the optimal value of the GEVP

    minimize    λ
    subject to  P > 0,  P ∈ P,  M^T P M < λ^2 P.

3.2. Lyapunov function search

Consider the differential inclusion (DI)

    dx/dt = A(t)x(t),   A(t) ∈ Co {A_1, ..., A_L}    (6)

where Co denotes the convex hull. We ask whether the DI is stable, i.e., whether all trajectories of the system (6) converge to zero as t → ∞. A sufficient condition is the existence of a quadratic Lyapunov function V(z) = z^T P z that decreases along every trajectory of (6), i.e., P > 0 with

    A_i^T P + P A_i < 0,   i = 1, ..., L,

which is an LMIP in P.

Now suppose the DI has a control input u, with [A(t) B(t)] ∈ Co {[A_1 B_1], ..., [A_L B_L]}; the state feedback u(t) = Kx(t) yields the closed-loop DI (8), with vertex matrices A_i + B_i K. Our objective is to design the matrix K such that (8) is quadratically stable. This is the "quadratic stabilizability" problem (see [11], and [12, 13, 14]; related references are [15, 16, 17] and [18]). System (8) is quadratically stable for some state-feedback K if there exist P > 0 and K such that

    (A_i + B_i K)^T P + P(A_i + B_i K) < 0,   i = 1, ..., L.

Note that this matrix inequality is not convex in P and K. However, with the linear fractional transformation Y = P^{-1}, W = KP^{-1}, we may rewrite it as

    (A_i + B_i W Y^{-1})^T Y^{-1} + Y^{-1} (A_i + B_i W Y^{-1}) < 0.

Multiplying this inequality on the left and right by Y (such a congruence preserves the inequality) we get an LMI in Y and W:

    Y A_i^T + W^T B_i^T + A_i Y + B_i W < 0,   i = 1, ..., L.
If this LMIP in Y and W has a solution, then the Lyapunov function V(z) = z^T Y^{-1} z proves the quadratic stability of the closed-loop system with state-feedback u(t) = W Y^{-1} x(t).

In other words, we can synthesize a linear state feedback for the DI (6) by solving a set of simultaneous Lyapunov inequalities. We will briefly discuss the implications of this result for robust control synthesis in section 5.

Let us also note that by synthesizing a state feedback for the DI (6), we have also synthesized a suitable state feedback for the nonlinear, time-varying, uncertain system

    dx/dt = f(x, u, t, p)

where p is some parameter vector, provided we have

    [∂f/∂x  ∂f/∂u] ∈ Co {[A_1 B_1], ..., [A_L B_L]}

for all x, u, t, and p.

3.4. System realization

For the discrete-time LTI system

    x(k+1) = A x(k) + B u(k)
    y(k)   = C x(k),

where x : Z_+ → R^n, u : Z_+ → R^{n_u}, y : Z_+ → R^{n_y}, and {A, B, C} is a minimal realization, the system realization problem is to find a change of state coordinates x = T x~ with two competing constraints: First, the input-to-state transfer matrix, T^{-1}(zI - A)^{-1} B, should be "small", in order to avoid state overflow in numerical implementation; and second, the state-to-output transfer matrix, C(zI - A)^{-1} T, should be "small", in order to minimize effects of state quantization at the output. We refer the reader to [19] and [20]; the forthcoming book [21], by M. Gevers, describes a number of digital filter realization problems.

If it is known that the RMS value of the input is bounded by γ and the RMS value of the state is required to be less than, say, one, we have an H_∞ norm bound on the input-to-state map:

    ‖T^{-1}(zI - A)^{-1} B‖_∞ < 1/γ.    (9)

Next, suppose that the state quantization noise is modeled as a unit white noise sequence w (i.e., E w(k) = 0, E w(k)w(j)^T = σ^2 δ_{kj} I) injected directly into the state, and its effect on the output is measured by the total noise power appearing in the output, which is just σ^2 times the square of the H_2 norm of the state-to-output transfer matrix:

    p_noise^2 = σ^2 ‖C(zI - A)^{-1} T‖_2^2.    (10)

Our problem is then to compute T to minimize the output noise power (10) subject to the overflow avoidance constraint (9). We will show that this problem can be expressed as an EVP.

The constraint (9) is equivalent to the existence of P > 0 such that

    [ A^T P A - P + T^{-T} T^{-1}    A^T P B           ]
    [ B^T P A                        B^T P B - I/γ^2   ]  <  0.

The output noise power can be expressed as

    p_noise^2 = σ^2 Tr T^T W_obs T,

where W_obs is the observability Gramian of the original system {A, B, C}, i.e., the unique solution of the Lyapunov equation

    A^T W_obs A - W_obs + C^T C = 0.

With X = T^{-T} T^{-1}, the realization problem becomes: minimize σ^2 Tr W_obs X^{-1} subject to X > 0 and the LMI (in P > 0 and X)

    [ A^T P A - P + X    A^T P B           ]
    [ B^T P A            B^T P B - I/γ^2   ]  <  0.    (11)

This is a convex problem in X and P, and can be transformed into the EVP

    minimize    Tr Y
    subject to  (11),  P > 0,
                [ Y              W_obs^{1/2} ]
                [ W_obs^{1/2}    X           ]  >  0.

More sophisticated realization problems are readily reduced to LMI problems. For example, we can have a bound on the RMS value of each component of the state, and minimize the maximum quantization-induced noise power of the components of the output.

We note that a very simple realization problem, in which the overflow constraint and the noise objective are both expressed as H_2 constraints, has a well-known "analytic" solution: T is chosen so that the system is, except for a constant scaling, balanced. Our point is that more sophisticated realization problems, which reflect much more accurately the true engineering specifications, are also readily solved, not "analytically" but as LMI problems.

3.5. Inverse problem of optimal control

Given a system

    dx/dt = A x(t) + B u(t)

    z(t) = [ Q^{1/2}   0       ] [ x(t) ]
           [ 0         R^{1/2} ] [ u(t) ]

    x(0) = x_0,
assuming (A, B) is stabilizable, (Q, A) is detectable and R > 0, the LQR ("optimal control") problem is to determine the input u that minimizes the performance index

    ∫_0^∞ z(t)^T z(t) dt.

The solution of this problem can be expressed as a state-feedback u = Kx with K = -R^{-1} B^T P, where P is the unique positive-definite solution of the ARE

    A^T P + P A - P B R^{-1} B^T P + Q = 0.

The "inverse optimal control problem" is: given a gain K, determine whether there exist Q ≥ 0 and R > 0, with (Q, A) observable, such that u(t) = Kx(t) is the optimal control law for the corresponding LQR problem.

This inverse optimal control problem was originally considered in a famous paper of Kalman [22], who gave the solution in the case of a single actuator in terms of the loop transfer function. Anderson and Moore [23, pp. 131-133] give a solution based on the singular-value plot of an appropriate loop transfer matrix in the case of multiple actuators and known matrix R.

The general inverse optimal control problem is readily reduced to an LMI problem. It is equivalent to finding R > 0 and Q ≥ 0 such that there exist a positive-definite P and a positive-definite W satisfying

    (A + BK)^T P + P(A + BK) + K^T R K + Q < 0,
    A^T W + W A < Q,   and   B^T P + R K = 0.

This is an LMIP in P, W, R and Q.

4. Solving LMI-based problems

The most important point is:

    LMIPs, EVPs, and GEVPs are tractable

in a sense that can be made precise from a number of theoretical and practical viewpoints. (This is to be contrasted with much less tractable problems, e.g., the general problem of robustness analysis for a system with real parameter perturbations.)

From a theoretical standpoint:

- we can immediately write down necessary and sufficient optimality conditions;
- there is a well-developed duality theory (for GEVPs, in a limited sense);
- these problems can be solved in polynomial time (indeed with a variety of interpretations of the term "polynomial-time").

The most important practical implication is that there are effective and powerful algorithms for the solution of these problems, that is, algorithms that rapidly compute the global optimum, with non-heuristic stopping criteria. Thus, on exit, the algorithms can prove that the global optimum has been obtained to within some prespecified accuracy.

There are a number of general algorithms for the solution of these problems, for example, the ellipsoid algorithm (see e.g., [24, 25]). The ellipsoid method has polynomial-time complexity, and works in practice for smaller problems, but can be slow for larger problems. Other algorithms specifically for LMI-based problems are discussed in, e.g., [26, 27].

Recently, various researchers [28, 1, 29, 30] have developed interior-point methods for solving LMI-based problems, based on the work of Nesterov and Nemirovsky [31]. Numerical experience shows that these algorithms solve LMI problems with extreme efficiency. In some specific cases (one is discussed below) these methods can solve LMI-based problems with computational effort that is comparable to that required to "evaluate" the "analytic" solutions of similar problems.

4.1. Multiple Lyapunov inequalities

In this section we consider a specific family of LMI problems for which efficient interior methods have been developed and extensively tested. Consider the EVP: minimize (over P) Tr CP subject to

    A_i^T P + P A_i + B_i < 0,   i = 1, ..., L,

where A_i, B_i, P ∈ R^{n×n}.

With new primal-dual methods [29], this problem can be solved in O(L^{1.2} n^4) operations. Of course, the cost associated with solving the L independent Lyapunov equations

    A_i^T P_i + P_i A_i + B_i = 0,   i = 1, ..., L,

is O(L n^3).

Thus the ratio of the cost of solving multiple simultaneous Lyapunov inequalities to the cost of solving the same number of Lyapunov equations is O(L^{0.2} n).
In other words:

    the cost of solving multiple simultaneous Lyapunov inequalities is not much more than solving the same number of Lyapunov equations,

even though the former problem has no "analytic solution" while the latter does. Similar statements hold for multiple Riccati inequalities.

6. Conclusion

We have shown that many problems in systems and control can be cast as convex optimization problems involving LMIs. These problems do not have "analytic solutions" but can be solved extremely efficiently. The list of problems we have presented is by no means exhaustive. Other problems include: