
Author name(s)

Book title
Monograph
March 10, 2012
Springer
Use the template dedic.tex together with the Springer document class SVMono for monograph-type books or SVMult for contributed volumes to style a quotation or a dedication at the very beginning of your book in the Springer layout.
Foreword
Use the template foreword.tex together with the Springer document class SVMono
(monograph-type books) or SVMult (edited books) to style your foreword in the
Springer layout.
The foreword covers introductory remarks preceding the text of a book that are written by a person other than the author or editor of the book. If applicable, the foreword precedes the preface, which is written by the author or editor of the book.
Place, month year Firstname Surname
Preface
Use the template preface.tex together with the Springer document class SVMono
(monograph-type books) or SVMult (edited books) to style your preface in the
Springer layout.
A preface is a book's preliminary statement, usually written by the author or editor of a work, which states its origin, scope, purpose, plan, and intended audience, and which sometimes includes afterthoughts and acknowledgments of assistance.
When written by a person other than the author, it is called a foreword. The preface or foreword is distinct from the introduction, which deals with the subject of the work.
Customarily, acknowledgments are included as the last part of the preface.
Place(s), Firstname Surname
month year Firstname Surname
Acknowledgements
Use the template acknow.tex together with the Springer document class SVMono
(monograph-type books) or SVMult (edited books) if you prefer to set your acknowledgement section as a separate chapter instead of including it as the last part of your preface.
Contents
Part I BACKGROUND
1 Set Theoretic Methods in Control  3
  1.1 Convex sets  3
    1.1.1 Set terminology  3
    1.1.2 Ellipsoidal set  4
    1.1.3 Polyhedral set  6
  1.2 Set invariance theory  10
    1.2.1 Basic definitions  10
    1.2.2 Ellipsoidal invariant sets  12
    1.2.3 Polyhedral invariant sets  16
  1.3 Enlarging the domain of attraction  24
    1.3.1 Problem formulation  25
    1.3.2 Saturation nonlinearity modeling - A linear differential inclusion approach  25
    1.3.3 Enlarging the domain of attraction - Ellipsoidal set approach  28
    1.3.4 Enlarging the domain of attraction - Polyhedral set approach  34
2 Optimal and Constrained Control - An Overview  41
  2.1 Dynamic programming  41
  2.2 Pontryagin's maximum principle  43
  2.3 Model predictive control  44
    2.3.1 Implicit model predictive control  44
    2.3.2 Recursive feasibility and stability  49
    2.3.3 Explicit model predictive control - Parameterized vertices  50
  2.4 Vertex control  57
References  66
Glossary  71
Solutions  73
Acronyms
Use the template acronym.tex together with the Springer document class SVMono (monograph-type books) or SVMult (edited books) to style your list(s) of abbreviations or symbols in the Springer layout.
Lists of abbreviations, symbols and the like are easily formatted with the help of the Springer-enhanced description environment.
ABC  Spelled-out abbreviation and definition
BABI Spelled-out abbreviation and definition
CABR Spelled-out abbreviation and definition
Part I
BACKGROUND
Use the template part.tex together with the Springer document class SVMono
(monograph-type books) or SVMult (edited books) to style your part title page and,
if desired, a short introductory text (maximum one page) on its verso page in the
Springer layout.
Chapter 1
Set Theoretic Methods in Control
Abstract The first aim of this chapter is to briefly review some of the set families used in control and to comment on the strengths and weaknesses of each of them. The tools of choice throughout the manuscript will be ellipsoidal and polyhedral sets, due to their mix of numerical applicability and flexibility. Then the concepts of robust invariant and robust controlled invariant sets are introduced, and some algorithms are given for computing such sets. The chapter ends with an original contribution on estimating the domain of attraction for time-varying and uncertain discrete-time systems with a saturated input.
1.1 Convex sets
1.1.1 Set terminology
Definition 1.1. (Convex set) A set $C \subseteq \mathbb{R}^n$ is convex if for all $x_1 \in C$ and $x_2 \in C$ it holds that
$$\lambda x_1 + (1-\lambda)x_2 \in C, \quad \forall \lambda : 0 \le \lambda \le 1$$
The point
$$x = \lambda x_1 + (1-\lambda)x_2$$
where $0 \le \lambda \le 1$ is called a convex combination of the pair $x_1$ and $x_2$. The set of all such points is the segment connecting $x_1$ and $x_2$. In other words, a set $C$ is said to be convex if the line segment between any two points in $C$ lies in $C$.

Definition 1.2. (Convex function) A function $f : C \to \mathbb{R}$ with $C \subseteq \mathbb{R}^n$ is convex if and only if the set $C$ is convex and
$$f(\lambda x_1 + (1-\lambda)x_2) \le \lambda f(x_1) + (1-\lambda)f(x_2)$$
for all $x_1 \in C$, $x_2 \in C$ and for all $0 \le \lambda \le 1$.
Definition 1.3. (Closed set) A set $C$ is closed if it contains its own boundary. In other words, any point outside $C$ has a neighborhood disjoint from $C$.

Definition 1.4. (Closure of a set) The closure of a set $C$ is the intersection of all closed sets containing $C$. The closure of a set $C$ is denoted as $\mathrm{cl}(C)$.
Definition 1.5. (Bounded set) A set $C \subseteq \mathbb{R}^n$ is bounded if it is contained in some ball $B_R = \{x \in \mathbb{R}^n : \|x\|_2 \le R\}$ of finite radius $R > 0$.

Definition 1.6. (Compact set) A set $C \subseteq \mathbb{R}^n$ is compact if it is closed and bounded.

Definition 1.7. (C-set) A set $S \subseteq \mathbb{R}^n$ is a C-set if it is a convex and compact set containing the origin in its interior.

Definition 1.8. (Convex hull) The convex hull of a set $C \subseteq \mathbb{R}^n$ is the smallest convex set containing $C$.
Definition 1.9. (Support function) The support function of a set $C \subseteq \mathbb{R}^n$, evaluated at $z \in \mathbb{R}^n$, is defined as
$$\phi_C(z) = \sup_{x \in C} z^T x$$
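For a polytope in half-space form, the supremum in Definition 1.9 is a linear program. A minimal sketch with scipy (the helper name `support_function` is ours):

```python
import numpy as np
from scipy.optimize import linprog

def support_function(F, g, z):
    """phi_C(z) = sup_{x in C} z^T x for the polytope C = {x : F x <= g}.

    linprog minimizes, so we maximize z^T x by minimizing (-z)^T x."""
    res = linprog(-z, A_ub=F, b_ub=g, bounds=[(None, None)] * len(z))
    return -res.fun

# Unit box [-1, 1]^2: phi(z) = |z_1| + |z_2|
F = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
g = np.ones(4)
print(round(support_function(F, g, np.array([2.0, -3.0])), 6))  # 5.0
```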
1.1.2 Ellipsoidal set

Ellipsoidal sets, or ellipsoids, are a well-known class of convex sets. Ellipsoids are widely used in the dynamical systems and control field due to their simple numerical representation [19], [44]. Next we provide a formal definition of ellipsoidal sets and a few of their properties.

Definition 1.10. (Ellipsoidal set) An ellipsoidal set $E(P, x_0) \subseteq \mathbb{R}^n$ with center $x_0$ and shape matrix $P$ is a set of the form
$$E(P, x_0) = \{x \in \mathbb{R}^n : (x - x_0)^T P^{-1} (x - x_0) \le 1\}$$
where $P \in \mathbb{R}^{n \times n}$ is a positive definite matrix.

If the ellipsoid is centered at the origin, then it is possible to write
$$E(P) = \{x \in \mathbb{R}^n : x^T P^{-1} x \le 1\}$$

Define $Q = P^{\frac{1}{2}}$ as the symmetric square-root factor of the matrix $P$, which satisfies $Q^T Q = Q Q^T = P$. With the matrix $Q$, it is possible to give an alternative, dual representation of an ellipsoidal set
$$D(Q, x_0) = \{x \in \mathbb{R}^n : x = x_0 + Qz, \ z^T z \le 1\}$$
Ellipsoidal sets are the most popular in the control field, since they are associated with powerful tools such as the Lyapunov equation or Linear Matrix Inequalities (LMI) [59], [19]. When using ellipsoidal sets, almost all the optimization problems arising in the control field can be reduced to the optimization of a linear function under LMI constraints. This optimization problem is convex and is now a powerful tool in many control applications.

A linear matrix inequality is a condition of the type [59], [19]
$$F(x) \succeq 0$$
where $x \in \mathbb{R}^n$ is a vector variable and the matrix $F(x)$ is affine in $x$, that is
$$F(x) = F_0 + \sum_{i=1}^{n} F_i x_i$$
with symmetric matrices $F_i \in \mathbb{R}^{m \times m}$.

LMIs can be either feasibility conditions or constraints for optimization problems. Optimization of a linear function over LMI constraints is called semidefinite programming, which can be considered an extension of linear programming. Nowadays, a major benefit of using LMIs is that several polynomial-time algorithms for solving LMI problems have been developed and implemented in software packages such as LMI Lab [28], YALMIP [50], CVX [31], etc.
The Schur complement lemma is one of the most important results when working with LMIs. It states that nonlinear conditions of the special forms
$$R(x) \succ 0, \quad P(x) - Q(x)^T R(x)^{-1} Q(x) \succ 0$$
or
$$P(x) \succ 0, \quad R(x) - Q(x) P(x)^{-1} Q(x)^T \succ 0$$
can be equivalently written as the LMI
$$\begin{bmatrix} P(x) & Q(x)^T \\ Q(x) & R(x) \end{bmatrix} \succ 0$$

The Schur complements thus allow one to convert certain nonlinear matrix inequalities into LMIs. For example, it is well known [44] that the support function of the ellipsoid $E(P, x_0)$, evaluated at the vector $z$, is
$$\phi_{E(P, x_0)}(z) = z^T x_0 + \sqrt{z^T P z} \quad (1.1)$$
Based on equation (1.1), it is apparent that the ellipsoid $E(P)$ is a subset of the polyhedral set $P(f, 1) = \{x \in \mathbb{R}^n : |f^T x| \le 1\}$ with $f \in \mathbb{R}^n$ if and only if
$$f^T P f \le 1$$
or, by using the Schur complements, this condition can be rewritten as [19], [36]
$$\begin{bmatrix} 1 & f^T P \\ P f & P \end{bmatrix} \succeq 0 \quad (1.2)$$
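The subset condition $f^T P f \le 1$, the support-function value from (1.1), and the LMI (1.2) can be cross-checked numerically; the matrix $P$ and vector $f$ below are illustrative values of our own, not taken from the text:

```python
import numpy as np

# A sample ellipsoid E(P) and slab {x : |f^T x| <= 1}
P = np.array([[2.0, 0.5], [0.5, 1.0]])
f = np.array([0.4, 0.3])

# Condition f^T P f <= 1 from the text
cond = f @ P @ f  # 0.53 for these values
assert cond <= 1.0

# Matching support-function test: E(P) lies in the slab iff
# phi_{E(P)}(f) = sqrt(f^T P f) <= 1 (and likewise at -f, by symmetry)
support = np.sqrt(f @ P @ f)
assert support <= 1.0

# Equivalent LMI (1.2): the block matrix [[1, f^T P], [P f, P]] is PSD
M = np.block([[np.array([[1.0]]), (P @ f)[None, :]],
              [(P @ f)[:, None], P]])
assert np.all(np.linalg.eigvalsh(M) >= -1e-12)
print("E(P) is contained in the slab")
```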
Obviously, an ellipsoidal set $E(P, x_0) \subseteq \mathbb{R}^n$ is uniquely defined by its matrix $P$ and by its center $x_0$. Since the matrix $P$ is symmetric, the complexity of the representation is
$$\frac{n(n+1)}{2} + n = \frac{n(n+3)}{2}$$
The main drawback of ellipsoids, however, is that, having a fixed and symmetrical structure, they may be too conservative, and this conservativeness is increased by the related operations. It is well known that [44]:
- The convex hull of a set of ellipsoids is, in general, not an ellipsoid.
- The sum of two ellipsoids is, in general, not an ellipsoid.
- The difference of two ellipsoids is, in general, not an ellipsoid.
- The intersection of two ellipsoids is, in general, not an ellipsoid.
1.1.3 Polyhedral set

Polyhedral sets provide a useful geometrical representation for the linear constraints that appear in diverse fields such as control and optimization. In a convex setting, they offer a good compromise between complexity and flexibility. Due to their linear and convex nature, the basic set operations are relatively easy to implement [45], [64]. Principally, this is related to their dual (half-space/vertex) representation [54], which allows one to choose the formulation best suited to a particular problem. With respect to their flexibility, it is worth noticing that any convex body can be approximated arbitrarily closely by a polytope [20].

We start this section by recalling some theoretical concepts.
Definition 1.11. (Hyperplane) A hyperplane $H \subseteq \mathbb{R}^n$ is a set of the form
$$H = \{x \in \mathbb{R}^n : f^T x = g\}$$
where $f \in \mathbb{R}^n$, $g \in \mathbb{R}$.

Definition 1.12. (Half-space) A closed half-space $H \subseteq \mathbb{R}^n$ is a set of the form
$$H = \{x \in \mathbb{R}^n : f^T x \le g\}$$
where $f \in \mathbb{R}^n$, $g \in \mathbb{R}$.

Definition 1.13. (Polyhedral set) A convex polyhedral set $P(F, g)$ is a set of the form
$$P(F, g) = \{x \in \mathbb{R}^n : F_i x \le g_i, \ i = 1, 2, \ldots, n_1\}$$
where $F_i \in \mathbb{R}^{1 \times n}$ denotes the $i$-th row of the matrix $F \in \mathbb{R}^{n_1 \times n}$ and $g_i$ is the $i$-th component of the vector $g \in \mathbb{R}^{n_1}$.

A polyhedral set includes the origin if and only if $g \ge 0$, and includes the origin in its interior if and only if $g > 0$.

Definition 1.14. (Polytope) A polytope is a bounded polyhedral set.

Definition 1.15. (Dimension of a polytope) A polytope $P \subseteq \mathbb{R}^n$ is of dimension $d \le n$ if there exists a $d$-dimensional ball with radius $\varepsilon > 0$ contained in $P$, and there exists no $(d+1)$-dimensional ball with radius $\varepsilon > 0$ contained in $P$.
Definition 1.16. (Face, facet, vertex, edge) A face $F_a^i$ of the polytope $P(F, g)$ is defined as a set of the form
$$F_a^i = P \cap \{x \in \mathbb{R}^n : F_i x = g_i\}$$
The intersection of two faces of dimension $n-1$ usually gives an $(n-2)$-dimensional face. The faces of the polytope $P$ of dimension $0$, $1$ and $n-1$ are called vertices, edges and facets, respectively.
One of the fundamental properties of polytopes is that they can be presented in half-space representation as in Definition 1.13 or in vertex representation as follows
$$P(V) = \left\{x \in \mathbb{R}^n : x = \sum_{i=1}^{r} \alpha_i v_i, \ 0 \le \alpha_i \le 1, \ \sum_{i=1}^{r} \alpha_i = 1\right\}$$
where $v_i \in \mathbb{R}^n$ denotes the $i$-th column of the matrix $V \in \mathbb{R}^{n \times r}$.
Fig. 1.1 Half-space representation of polytopes.

Fig. 1.2 Vertex representation of polytopes.
This dual (half-space/vertex) representation has very practical consequences in methodological and numerical applications. Due to this duality, we are allowed to use either representation when solving a particular problem. Note that the transformation from one representation to the other may be time-consuming, with several well-known algorithms: Fourier-Motzkin elimination [25], CDD [26], Equality Set Projection [39].

Note that the expression $x = \sum_{i=1}^{r} \alpha_i v_i$ with a given set of vectors $v_1, v_2, \ldots, v_r$ and
$$\sum_{i=1}^{r} \alpha_i = 1, \quad \alpha_i \ge 0$$
is called a convex combination of the vectors $v_1, v_2, \ldots, v_r$; the set of all such points $x$ is the convex hull of these vectors, denoted as
$$\mathrm{Conv}\{v_1, v_2, \ldots, v_r\}$$
Definition 1.17. (Simplex) A simplex $C \subseteq \mathbb{R}^n$ is an $n$-dimensional polytope which is the convex hull of its $n+1$ vertices.

For example, a $2$-simplex is a triangle, a $3$-simplex is a tetrahedron, and a $4$-simplex is a pentachoron.

Definition 1.18. (Minimal representation) A half-space or vertex representation of a polytope $P$ is minimal if and only if the removal of any facet or any vertex would change $P$, i.e. there are no redundant half-spaces or redundant vertices.

A minimal representation of a polytope can be achieved by removing from the half-space (vertex) representation all the redundant half-spaces (vertices), whose definition is provided next.
Definition 1.19. (Redundant half-space) For a given polytope $P(F, g)$, let the polyhedral set $P(\tilde{F}, \tilde{g})$ be defined by removing the $i$-th half-space, i.e. by deleting the row $F_i$ from the matrix $F$ and the corresponding component $g_i$ from the vector $g$. The half-space $F_i x \le g_i$ is redundant if and only if
$$g_i^\star \le g_i$$
with
$$g_i^\star = \max_{\tilde{F} x \le \tilde{g}} F_i x$$

Definition 1.20. (Redundant vertex) For a given polytope $P(V)$, let the polytope $P(\tilde{V})$ be defined by removing the $i$-th vertex $v_i$ from the matrix $V$. The vertex $v_i$ is redundant if and only if
$$p_i^\star \le 1$$
where
$$p_i^\star = \min_{\tilde{V} p = v_i, \ p \ge 0} \mathbf{1}^T p$$
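Definition 1.19 translates directly into one LP per half-space. A possible sketch with scipy (the helper name `remove_redundant` and the tolerance are our own choices):

```python
import numpy as np
from scipy.optimize import linprog

def remove_redundant(F, g, tol=1e-9):
    """Return a minimal half-space representation of {x : F x <= g}.

    Row i is redundant iff max F_i x subject to the remaining rows
    is still <= g_i (Definition 1.19)."""
    keep = list(range(len(g)))
    for i in range(len(g)):
        rows = [j for j in keep if j != i]
        res = linprog(-F[i], A_ub=F[rows], b_ub=g[rows],
                      bounds=[(None, None)] * F.shape[1])
        if res.status == 0 and -res.fun <= g[i] + tol:
            keep.remove(i)
    return F[keep], g[keep]

# Unit box plus a redundant cut x1 <= 2
F = np.array([[1.0, 0], [-1, 0], [0, 1], [0, -1], [1, 0]])
g = np.array([1.0, 1, 1, 1, 2])
Fr, gr = remove_redundant(F, g)
print(len(gr))  # 4
```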
Now some basic operations on polytopes will be briefly reviewed. Note that although the focus here lies on polytopes, most of the operations described are applicable, directly or with minor changes, to general polyhedral sets. Additional details on polytope computation can be found in [65], [32], [27].

Definition 1.21. (Intersection) The intersection of two polytopes $P_1 \subseteq \mathbb{R}^n$, $P_2 \subseteq \mathbb{R}^n$ is a polytope
$$P_1 \cap P_2 = \{x \in \mathbb{R}^n : x \in P_1, \ x \in P_2\}$$
Definition 1.22. (Minkowski sum) The Minkowski sum of two polytopes $P_1 \subseteq \mathbb{R}^n$, $P_2 \subseteq \mathbb{R}^n$ is a polytope
$$P_1 \oplus P_2 = \{x_1 + x_2 : x_1 \in P_1, \ x_2 \in P_2\}$$
It is well known that if $P_1$ and $P_2$ are given in vertex representation, i.e.
$$P_1 = \mathrm{Conv}\{v_{11}, v_{12}, \ldots, v_{1p}\}, \quad P_2 = \mathrm{Conv}\{v_{21}, v_{22}, \ldots, v_{2q}\}$$
then the Minkowski sum can be computed as [65]
$$P_1 \oplus P_2 = \mathrm{Conv}\{v_{1i} + v_{2j} : i = 1, 2, \ldots, p, \ j = 1, 2, \ldots, q\}$$
Definition 1.23. (Pontryagin difference) The Pontryagin difference of two polytopes $P_1 \subseteq \mathbb{R}^n$, $P_2 \subseteq \mathbb{R}^n$ is a polytope
$$P_1 \ominus P_2 = \{x_1 \in \mathbb{R}^n : x_1 + x_2 \in P_1, \ \forall x_2 \in P_2\}$$
Fig. 1.3 Minkowski sum $P_1 \oplus P_2$.

Fig. 1.4 Pontryagin difference $P_1 \ominus P_2$.
Note that the Pontryagin difference is not the inverse of the Minkowski sum. For two polytopes $P_1$ and $P_2$, it only holds that
$$(P_1 \ominus P_2) \oplus P_2 \subseteq P_1$$
Definition 1.24. (Projection) Given a polytope $P \subseteq \mathbb{R}^{n_1 + n_2}$, the orthogonal projection onto the $x_1$-space $\mathbb{R}^{n_1}$ is defined as
$$\mathrm{Proj}_{x_1}(P) = \{x_1 \in \mathbb{R}^{n_1} : \exists x_2 \in \mathbb{R}^{n_2} \text{ such that } [x_1^T \ x_2^T]^T \in P\}$$

It is well known that the Minkowski sum operation on polytopes in their half-space representation is complexity-wise equivalent to a projection [65]. Current projection methods for polytopes that can operate in general dimensions can be grouped into four classes: Fourier elimination [40], block elimination [4], vertex-based approaches and wrapping-based techniques [39].
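Of the four classes above, the vertex-based approach is the easiest to sketch: project each vertex and take the convex hull of the images. This assumes the vertex representation is available; the helper name `project_vertices` is ours:

```python
import numpy as np
from scipy.spatial import ConvexHull

def project_vertices(V, dims):
    """Vertex-based projection: Proj(P) = Conv{projection of each vertex}."""
    W = np.unique(V[:, dims], axis=0)  # drop duplicate projected points
    if W.shape[1] == 1:                # 1-D case: the hull is an interval
        return np.array([[W.min()], [W.max()]])
    hull = ConvexHull(W)
    return W[hull.vertices]

# Project the 3-D unit cube onto the (x1, x2)-plane: the unit square
cube = np.array([[i, j, k] for i in (0, 1) for j in (0, 1) for k in (0, 1)],
                dtype=float)
sq = project_vertices(cube, [0, 1])
print(len(sq))  # 4
```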
Fig. 1.5 Projection of a 2-dimensional polytope $P$ onto the line $x_1$.

Apparently the complexity of the representation of polytopes is not a function of the space dimension only, but may be arbitrarily large. For the half-space (vertex) representation, the complexity of a polytope is a linear function of the number of rows of the matrix $F$ (the number of columns of the matrix $V$). As far as the complexity issue is concerned, it is worth saying that neither of these representations can be regarded as more convenient. Indeed, one can define an arbitrary polytope with relatively few vertices which nevertheless has a surprisingly large number of facets; this happens, for example, when some vertices contribute to many facets. Equally, one can define an arbitrary polytope with relatively few facets but relatively many more vertices; this happens, for example, when some facets have many vertices.

The main advantage of polytopes is their flexibility. It is well known [20] that any convex body can be approximated arbitrarily closely by a polytope. In particular, for a given bounded, convex and closed set $S$ and a given $\varepsilon$ with $0 < \varepsilon < 1$, there exists a polytope $P$ such that
$$(1 - \varepsilon) S \subseteq P \subseteq S$$
for an inner approximation of the set $S$, and
$$S \subseteq P \subseteq (1 + \varepsilon) S$$
for an outer approximation of the set $S$.
1.2 Set invariance theory

1.2.1 Basic definitions

Set invariance is a fundamental concept in analysis and controller design for constrained systems, since constraint satisfaction can be guaranteed for all time if and only if the initial state is contained in an invariant set. Two types of systems will be considered in this section, namely, autonomous discrete-time uncertain nonlinear systems
$$x(k+1) = f(x(k), w(k)) \quad (1.3)$$
and systems with external control inputs
$$x(k+1) = f(x(k), u(k), w(k)) \quad (1.4)$$
where $x(k) \in \mathbb{R}^n$, $u(k) \in \mathbb{R}^m$ and $w(k) \in \mathbb{R}^d$ are, respectively, the system state, the control input and an unknown disturbance.

The state vector $x(k)$, the control vector $u(k)$ and the disturbance $w(k)$ are subject to the constraints
$$x(k) \in X, \quad u(k) \in U, \quad w(k) \in W, \quad \forall k \ge 0 \quad (1.5)$$
where the sets $X$, $U$ and $W$ are assumed to be closed and bounded.

Definition 1.25. (Robust positively invariant set) [14], [41] The set $\Omega \subseteq X$ is robust positively invariant for the system (1.3) if and only if
$$f(x(k), w(k)) \in \Omega$$
for all $x(k) \in \Omega$ and for all $w(k) \in W$.

Hence if the state vector of system (1.3) reaches a robust positively invariant set, it will remain inside the set in spite of the disturbance $w(k)$. The term positively refers to the fact that only forward evolutions of the system (1.3) are considered; it will be omitted in the following sections for brevity.

The maximal robust invariant set $\Omega_{\max} \subseteq X$ is a robust invariant set that contains all the robust invariant sets contained in $X$.

Definition 1.26. (Robust contractive set) [14] For a given scalar $\lambda$ with $0 \le \lambda \le 1$, the set $\Omega \subseteq X$ is robust $\lambda$-contractive for the system (1.3) if and only if
$$f(x(k), w(k)) \in \lambda \Omega$$
for all $x(k) \in \Omega$ and for all $w(k) \in W$.

Obviously, in Definition 1.26, with $\lambda = 1$ we recover robust invariance.

Definition 1.27. (Robust controlled invariant set) [14], [41] The set $C \subseteq X$ is robust controlled invariant for the system (1.4) if for all $x(k) \in C$ there exists a control value $u(k) \in U$ such that
$$x(k+1) = f(x(k), u(k), w(k)) \in C$$
for all $w(k) \in W$.

The maximal robust controlled invariant set $C_{\max} \subseteq X$ is a robust controlled invariant set that contains all the robust controlled invariant sets contained in $X$.

Definition 1.28. (Robust controlled contractive set) [14] For a given scalar $\lambda$ with $0 < \lambda \le 1$, the set $C \subseteq X$ is robust controlled $\lambda$-contractive for the system (1.4) if for all $x(k) \in C$ there exists a control value $u(k) \in U$ such that
$$x(k+1) = f(x(k), u(k), w(k)) \in \lambda C$$
for all $w(k) \in W$.
1.2.2 Ellipsoidal invariant sets

Ellipsoidal sets are the most popular choice for robust stability analysis and controller synthesis of constrained systems, due to their computational efficiency via LMIs and their fixed complexity [19], [59]. This approach, however, may lead to conservative results. To begin, let us consider the following system
$$x(k+1) = A(k)x(k) + B(k)u(k) \quad (1.6)$$
where the matrices $A(k) \in \mathbb{R}^{n \times n}$, $B(k) \in \mathbb{R}^{n \times m}$ satisfy
$$A(k) = \sum_{i=1}^{q} \alpha_i(k) A_i, \quad B(k) = \sum_{i=1}^{q} \alpha_i(k) B_i, \quad \sum_{i=1}^{q} \alpha_i(k) = 1, \quad \alpha_i(k) \ge 0, \ i = 1, 2, \ldots, q \quad (1.7)$$
with given and fixed matrices $A_i \in \mathbb{R}^{n \times n}$ and $B_i \in \mathbb{R}^{n \times m}$, $i = 1, 2, \ldots, q$.

Remark 1.1. Matrices $A(k)$ and $B(k)$ given as
$$A(k) = \sum_{i=1}^{q_1} \alpha_i(k) A_i, \quad B(k) = \sum_{j=1}^{q_2} \beta_j(k) B_j$$
$$\sum_{i=1}^{q_1} \alpha_i(k) = 1, \ \alpha_i(k) \ge 0, \ i = 1, \ldots, q_1, \quad \sum_{j=1}^{q_2} \beta_j(k) = 1, \ \beta_j(k) \ge 0, \ j = 1, \ldots, q_2 \quad (1.8)$$
can be translated into the form of (1.7) as follows
$$\begin{aligned}
x(k+1) &= \sum_{i=1}^{q_1} \alpha_i(k) A_i x(k) + \sum_{j=1}^{q_2} \beta_j(k) B_j u(k) \\
&= \sum_{i=1}^{q_1} \alpha_i(k) A_i x(k) + \sum_{i=1}^{q_1} \alpha_i(k) \sum_{j=1}^{q_2} \beta_j(k) B_j u(k) \\
&= \sum_{i=1}^{q_1} \alpha_i(k) \Big( A_i x(k) + \sum_{j=1}^{q_2} \beta_j(k) B_j u(k) \Big) \\
&= \sum_{i=1}^{q_1} \alpha_i(k) \Big( \sum_{j=1}^{q_2} \beta_j(k) A_i x(k) + \sum_{j=1}^{q_2} \beta_j(k) B_j u(k) \Big) \\
&= \sum_{i=1}^{q_1} \alpha_i(k) \sum_{j=1}^{q_2} \beta_j(k) \big( A_i x(k) + B_j u(k) \big) \\
&= \sum_{i=1, j=1}^{q_1, q_2} \alpha_i(k) \beta_j(k) \big( A_i x(k) + B_j u(k) \big)
\end{aligned}$$
Consider the polytope $P_c$ whose vertices are given by taking all possible combinations of $(A_i, B_j)$ with $i = 1, 2, \ldots, q_1$ and $j = 1, 2, \ldots, q_2$. Since
$$\sum_{i=1, j=1}^{q_1, q_2} \alpha_i(k) \beta_j(k) = \sum_{i=1}^{q_1} \alpha_i(k) \sum_{j=1}^{q_2} \beta_j(k) = 1$$
it is clear that $(A(k), B(k))$ can be expressed as a convex combination of the vertices of $P_c$.
Both the state vector $x(k)$ and the control vector $u(k)$ are subject to the constraints
$$x(k) \in X, \ X = \{x : |F_i x| \le 1, \ i = 1, 2, \ldots, n_1\}, \quad u(k) \in U, \ U = \{u : |u_i| \le u_{i,\max}, \ i = 1, 2, \ldots, m\} \quad (1.9)$$
where $F_i \in \mathbb{R}^{1 \times n}$ is the $i$-th row of the matrix $F \in \mathbb{R}^{n_1 \times n}$ and $u_{i,\max}$ is the $i$-th component of the vector $u_{\max} \in \mathbb{R}^m$. It is assumed that the matrix $F$ and the vector $u_{\max}$ are constant, with $u_{\max} > 0$, so that the origin is contained in the interior of $X$ and $U$.
Let us now consider the problem of checking robust controlled invariance. The ellipsoid $E(P) = \{x \in \mathbb{R}^n : x^T P^{-1} x \le 1\}$ is controlled invariant if and only if for all $x \in E(P)$ there exists an input $u = \Phi(x) \in U$ such that
$$(A_i x + B_i \Phi(x))^T P^{-1} (A_i x + B_i \Phi(x)) \le 1 \quad (1.10)$$
for all $i = 1, 2, \ldots, q$.

It is well known [17] that for time-varying and uncertain linear discrete-time systems (1.6), it is sufficient to check condition (1.10) only for all $x$ on the boundary of $E(P)$, i.e. for all $x$ such that $x^T P^{-1} x = 1$. Therefore condition (1.10) can be transformed into
$$(A_i x + B_i \Phi(x))^T P^{-1} (A_i x + B_i \Phi(x)) \le x^T P^{-1} x, \quad i = 1, 2, \ldots, q \quad (1.11)$$

One possible choice for $u = \Phi(x)$ is a linear state feedback controller $u = Kx$. By denoting $A_{ci} = A_i + B_i K$ with $i = 1, 2, \ldots, q$, condition (1.11) is equivalent to
$$x^T A_{ci}^T P^{-1} A_{ci} x \le x^T P^{-1} x, \quad i = 1, 2, \ldots, q$$
or
$$A_{ci}^T P^{-1} A_{ci} \preceq P^{-1}, \quad i = 1, 2, \ldots, q$$
By using the Schur complements, this condition can be rewritten as
$$\begin{bmatrix} P^{-1} & A_{ci}^T \\ A_{ci} & P \end{bmatrix} \succeq 0, \quad i = 1, 2, \ldots, q$$
The condition obtained here is not linear in $P$. It is, however, equivalent to
$$P - A_{ci} P A_{ci}^T \succeq 0, \quad i = 1, 2, \ldots, q$$
which, by using the Schur complements again, can be written as
$$\begin{bmatrix} P & A_{ci} P \\ P A_{ci}^T & P \end{bmatrix} \succeq 0, \quad i = 1, 2, \ldots, q$$
By substituting $A_{ci} = A_i + B_i K$ with $i = 1, 2, \ldots, q$, one obtains
$$\begin{bmatrix} P & A_i P + B_i K P \\ P A_i^T + P K^T B_i^T & P \end{bmatrix} \succeq 0, \quad i = 1, 2, \ldots, q$$
This condition is still nonlinear, since both $P$ and $K$ are unknown. It can, however, be re-parameterized into a linear condition by setting $Y = KP$. The above condition is then equivalent to
$$\begin{bmatrix} P & A_i P + B_i Y \\ P A_i^T + Y^T B_i^T & P \end{bmatrix} \succeq 0, \quad i = 1, 2, \ldots, q \quad (1.12)$$
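The chain of equivalences above can be spot-checked numerically; the matrices $P$ and $A_c$ below are our own toy values, chosen so that $A_c$ is contractive with respect to $P$:

```python
import numpy as np

# Illustrative P and contractive A_c (our own numbers, not the book's)
P = np.array([[2.0, 0.5], [0.5, 1.0]])
A_c = np.array([[0.5, 0.1], [0.0, 0.4]])
P_inv = np.linalg.inv(P)

def is_psd(X, tol=1e-9):
    return bool(np.all(np.linalg.eigvalsh(X) >= -tol))

form1 = is_psd(P_inv - A_c.T @ P_inv @ A_c)   # A_c^T P^-1 A_c <= P^-1
form2 = is_psd(P - A_c @ P @ A_c.T)           # P - A_c P A_c^T >= 0
form3 = is_psd(np.block([[P, A_c @ P],
                         [P @ A_c.T, P]]))    # Schur-complement LMI form
assert form1 == form2 == form3
print(form1)  # True
```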
Condition (1.12) is necessary and sufficient for the ellipsoid $E(P)$ with linear state feedback $u = Kx$ to be robust invariant. Concerning the constraint satisfaction (1.9), based on equation (1.2) it is clear that:
- The state constraints are satisfied if and only if $E(P)$ is a subset of $X$, hence
$$\begin{bmatrix} 1 & F_i P \\ P F_i^T & P \end{bmatrix} \succeq 0, \quad i = 1, 2, \ldots, n_1 \quad (1.13)$$
- The input constraints are satisfied if and only if $E(P)$ is a subset of the polyhedral set $X_u$, where
$$X_u = \{x \in \mathbb{R}^n : |K_i x| \le u_{i,\max}, \ i = 1, 2, \ldots, m\}$$
and $K_i$ is the $i$-th row of the matrix $K \in \mathbb{R}^{m \times n}$; hence
$$\begin{bmatrix} u_{i,\max}^2 & K_i P \\ P K_i^T & P \end{bmatrix} \succeq 0, \quad i = 1, 2, \ldots, m$$
By noticing that $K_i P = Y_i$, where $Y_i$ is the $i$-th row of the matrix $Y \in \mathbb{R}^{m \times n}$, one gets
$$\begin{bmatrix} u_{i,\max}^2 & Y_i \\ Y_i^T & P \end{bmatrix} \succeq 0 \quad (1.14)$$
Define a row vector $T_i \in \mathbb{R}^{1 \times m}$ with a single nonzero entry, equal to $1$, in the $i$-th position:
$$T_i = [0 \ \ldots \ 0 \ 1 \ 0 \ \ldots \ 0]$$
It is clear that $Y_i = T_i Y$. Therefore equation (1.14) can be transformed into
$$\begin{bmatrix} u_{i,\max}^2 & T_i Y \\ Y^T T_i^T & P \end{bmatrix} \succeq 0 \quad (1.15)$$
Among all the ellipsoids satisfying the invariance condition (1.12) and the constraint-satisfaction conditions (1.13), (1.15), we would like to choose the largest one. In the literature, the size of the ellipsoid $E(P)$ is usually measured by the determinant or the trace of the matrix $P$; see [63]. Here the trace of $P$ is chosen due to its linearity. The trace of a square matrix is defined as the sum of the elements on its main diagonal; maximizing the trace corresponds to maximizing the sum of the eigenvalues of the matrix. With this objective function, the problem of maximizing the robust invariant ellipsoid can be formulated as
$$J = \max_{P, Y} \ \mathrm{trace}(P) \quad (1.16)$$
subject to
- the invariance condition (1.12),
- the constraint-satisfaction conditions (1.13), (1.15).
It is clear that the solution $P$, $Y$ of problem (1.16) may lead to a controller $K = Y P^{-1}$ such that the closed-loop system with matrices $A_{ci} = A_i + B_i K$, $i = 1, 2, \ldots, q$ is at the stability margin. In other words, the ellipsoid $E(P)$ thus obtained might not be contractive (although being invariant); indeed, the system trajectories might not converge to the origin. In order to ensure $x(k) \to 0$ as $k \to \infty$, it is required that for all $x$ on the boundary of $E(P)$, i.e. for all $x$ such that $x^T P^{-1} x = 1$,
$$(A_i x + B_i \Phi(x))^T P^{-1} (A_i x + B_i \Phi(x)) < 1, \quad i = 1, 2, \ldots, q$$
With the same argument as in the above discussion, one can conclude that the ellipsoid $E(P)$ with the linear controller $u = Kx$ is robust contractive if the following set of LMI conditions is satisfied
$$\begin{bmatrix} P & A_i P + B_i Y \\ P A_i^T + Y^T B_i^T & P \end{bmatrix} \succ 0, \quad i = 1, 2, \ldots, q \quad (1.17)$$
1.2.3 Polyhedral invariant sets

The problem of computing the domain of attraction using polyhedral sets is addressed in this section. With linear constraints on the state and control variables, polyhedral invariant sets are preferable to ellipsoidal invariant sets, since they offer a better approximation of the domain of attraction [24], [35], [12]. To begin, let us consider the following uncertain discrete-time system
$$x(k+1) = A(k)x(k) + B(k)u(k) + D(k)w(k) \quad (1.18)$$
where $x(k) \in \mathbb{R}^n$, $u(k) \in \mathbb{R}^m$ and $w(k) \in \mathbb{R}^d$ are, respectively, the state, input and disturbance vectors. The matrices $A(k) \in \mathbb{R}^{n \times n}$, $B(k) \in \mathbb{R}^{n \times m}$ and $D(k) \in \mathbb{R}^{n \times d}$ satisfy
$$A(k) = \sum_{i=1}^{q} \alpha_i(k) A_i, \quad B(k) = \sum_{i=1}^{q} \alpha_i(k) B_i, \quad D(k) = \sum_{i=1}^{q} \alpha_i(k) D_i, \quad \sum_{i=1}^{q} \alpha_i(k) = 1, \quad \alpha_i(k) \ge 0$$
where the matrices $A_i$, $B_i$ and $D_i$ are given.
The state, the control and the disturbance are subject to the following polytopic constraints
$$x(k) \in X, \ X = \{x \in \mathbb{R}^n : F_x x \le g_x\}, \quad u(k) \in U, \ U = \{u \in \mathbb{R}^m : F_u u \le g_u\}, \quad w(k) \in W, \ W = \{w \in \mathbb{R}^d : F_w w \le g_w\} \quad (1.19)$$
where the matrices $F_x$, $F_u$, $F_w$ and the vectors $g_x$, $g_u$ and $g_w$ are assumed to be constant, with $g_x > 0$, $g_u > 0$, $g_w > 0$, such that the origin is contained in the interior of $X$, $U$ and $W$. Recall that the inequalities are element-wise.
When the control input takes the form of the state feedback $u(k) = Kx(k)$, the closed-loop system is
$$x(k+1) = A_c(k)x(k) + D(k)w(k) \quad (1.20)$$
where
$$A_c(k) = A(k) + B(k)K \in \mathrm{Conv}\{A_{ci}\}$$
with $A_{ci} = A_i + B_i K$, $i = 1, 2, \ldots, q$.

The state constraints of the closed-loop system are of the form
$$x \in X_c, \quad X_c = \{x \in \mathbb{R}^n : F_c x \le g_c\} \quad (1.21)$$
where
$$F_c = \begin{bmatrix} F_x \\ F_u K \end{bmatrix}, \quad g_c = \begin{bmatrix} g_x \\ g_u \end{bmatrix}$$
The following definition plays an important role in computing a robust invariant set for system (1.20) under constraints (1.21).

Definition 1.29. (Pre-image set) For the system (1.20), the one-step admissible pre-image set of the set $X_c$ is a set $X_c^1 \subseteq X_c$ such that for all $x \in X_c^1$ it holds that
$$A_{ci} x + D_i w \in X_c$$
for all $w \in W$ and all $i = 1, 2, \ldots, q$.

The pre-image set $X_c^1 = \mathrm{Pre}(X_c)$ can be computed as [16], [13]
$$X_c^1 = \left\{x \in X_c : F_c A_{ci} x \le g_c - \max_{w \in W} F_c D_i w, \ i = 1, 2, \ldots, q\right\} \quad (1.22)$$
where the maximum is taken row-wise.
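When $W$ is a polytope, each row-wise maximum in (1.22) is a small LP, so the half-space description of $X_c^1$ can be assembled directly. A sketch (the helper name `preimage_halfspaces` and the toy one-dimensional system are ours):

```python
import numpy as np
from scipy.optimize import linprog

def preimage_halfspaces(Fc, gc, Acs, Ds, Fw, gw):
    """Half-space description (F, g) of the one-step pre-image set (1.22).

    Stacks F_c x <= g_c with F_c A_ci x <= g_c - max_{w in W} F_c D_i w."""
    F_rows, g_rows = [Fc], [gc]
    nw = Fw.shape[1]
    for Ac, D in zip(Acs, Ds):
        shift = np.empty(len(gc))
        for r, row in enumerate(Fc @ D):       # row-wise max over W
            res = linprog(-row, A_ub=Fw, b_ub=gw,
                          bounds=[(None, None)] * nw)
            shift[r] = -res.fun
        F_rows.append(Fc @ Ac)
        g_rows.append(gc - shift)
    return np.vstack(F_rows), np.concatenate(g_rows)

# Toy 1-D system x+ = 0.5 x + w with |x| <= 1, |w| <= 0.1 (our own numbers)
F, g = preimage_halfspaces(np.array([[1.0], [-1.0]]), np.array([1.0, 1.0]),
                           [np.array([[0.5]])], [np.array([[1.0]])],
                           np.array([[1.0], [-1.0]]), np.array([0.1, 0.1]))
print(g)  # last two entries: 1 - 0.1 = 0.9
```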
Example 1.1. Consider the following uncertain system
$$x(k+1) = A(k)x(k) + Bu(k) + Dw(k)$$
where
$$A(k) = \alpha(k)A_1 + (1 - \alpha(k))A_2, \quad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad D = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$
with $0 \le \alpha(k) \le 1$ and
$$A_1 = \begin{bmatrix} 1.1 & 1 \\ 0 & 1 \end{bmatrix}, \quad A_2 = \begin{bmatrix} 0.6 & 1 \\ 0 & 1 \end{bmatrix}$$
The constraints on the state, on the input and on the disturbance are
$$x \in X, \ X = \{x \in \mathbb{R}^2 : F_x x \le g_x\}, \quad u \in U, \ U = \{u \in \mathbb{R} : F_u u \le g_u\}, \quad w \in W, \ W = \{w \in \mathbb{R}^2 : F_w w \le g_w\}$$
where
$$F_x = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ -1 & 0 \\ 0 & -1 \end{bmatrix}, \ g_x = \begin{bmatrix} 3 \\ 3 \\ 3 \\ 3 \end{bmatrix}, \quad F_u = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \ g_u = \begin{bmatrix} 2 \\ 2 \end{bmatrix}, \quad F_w = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ -1 & 0 \\ 0 & -1 \end{bmatrix}, \ g_w = \begin{bmatrix} 0.2 \\ 0.2 \\ 0.2 \\ 0.2 \end{bmatrix}$$
The feedback controller is chosen as
$$K = \begin{bmatrix} -0.3856 & -1.0024 \end{bmatrix}$$
With this feedback controller, the closed-loop matrices are
$$A_{c1} = \begin{bmatrix} 1.1 & 1 \\ -0.3856 & -0.0024 \end{bmatrix}, \quad A_{c2} = \begin{bmatrix} 0.6 & 1 \\ -0.3856 & -0.0024 \end{bmatrix}$$
The state constraint set $X_c$ is
$$X_c = \left\{x \in \mathbb{R}^2 : \begin{bmatrix} 1 & 0 \\ -1 & 0 \\ 0.3590 & 0.9333 \\ -0.3590 & -0.9333 \end{bmatrix} x \le \begin{bmatrix} 3.0000 \\ 3.0000 \\ 0.9311 \\ 0.9311 \end{bmatrix}\right\}$$
Based on equation (1.22), the one-step admissible pre-image set $X_c^1$ of the set $X_c$ is defined as
$$X_c^1 = \left\{x \in \mathbb{R}^2 : \begin{bmatrix} F_c \\ F_c A_{c1} \\ F_c A_{c2} \end{bmatrix} x \le \begin{bmatrix} g_c \\ g_c - \max_{w \in W} F_c w \\ g_c - \max_{w \in W} F_c w \end{bmatrix}\right\} \quad (1.23)$$
After removing redundant inequalities, the set $X_c^1$ can be represented as
$$X_c^1 = \left\{x \in \mathbb{R}^2 : \begin{bmatrix} 1 & 0 \\ -1 & 0 \\ 0.3590 & 0.9333 \\ -0.3590 & -0.9333 \\ 0.7399 & 0.6727 \\ -0.7399 & -0.6727 \\ -0.3753 & 0.9269 \\ 0.3753 & -0.9269 \end{bmatrix} x \le \begin{bmatrix} 3.0000 \\ 3.0000 \\ 0.9311 \\ 0.9311 \\ 1.8835 \\ 1.8835 \\ 1.7474 \\ 1.7474 \end{bmatrix}\right\}$$
The sets $X$, $X_c$ and $X_c^1$ are presented in Figure 1.6.
are presented in Figure 1.6.
It is clear that the set X
c
is robust invariant if it equals to its one step admis-
sible pre-image set, that is for all x and for all w W, it holds that
A
i
x +D
i
w
for all i =1, 2, . . . , q. Based on this observation, the following algorithmcan be used
for computing a robust invariant set for system (1.20) with respect to constraints
(1.21)
Procedure 2.1: Robust invariant set computation
Input: The matrices A
c1
,A
c2
,. . .,A
cq
, D
1
,D
2
,. . .,D
q
and the sets X
c
, W
1.2 Set invariance theory 19
3 2 1 0 1 2 3
3
2
1
0
1
2
3
x
1
x
2
X
X
c
X
c
1
Fig. 1.6 One step pre-image set for example 1.1.
Output: The robust invariant set $\Omega$.
1. Set $i = 0$, $F_0 = F_c$, $g_0 = g_c$ and $X_0 = \{x \in \mathbb{R}^n : F_0 x \le g_0\}$.
2. Set $X_i = X_0$.
3. Eliminate redundant inequalities of the following polytope
$$P = \left\{x \in \mathbb{R}^n : \begin{bmatrix} F_0 \\ F_0 A_{c1} \\ F_0 A_{c2} \\ \vdots \\ F_0 A_{cq} \end{bmatrix} x \le \begin{bmatrix} g_0 \\ g_0 - \max_{w \in W} F_0 D_1 w \\ g_0 - \max_{w \in W} F_0 D_2 w \\ \vdots \\ g_0 - \max_{w \in W} F_0 D_q w \end{bmatrix}\right\}$$
4. Set $X_0 = P$.
5. If $X_0 = X_i$, then stop and set $\Omega = X_0$. Else continue.
6. Set $i = i + 1$ and go to step 2.
The natural question regarding Procedure 2.1 is whether there exists a finite index $i$ such that $X_0 = X_i$. In the absence of disturbance, the following theorem holds [15].

Theorem 1.1. [15] Assume that the system (1.20) is robustly asymptotically stable. Then there exists a finite index $i = i_{\max}$ such that $X_0 = X_i$.

Apparently the sensitive part of Procedure 2.1 is step 5: checking the equality of two polytopes $X_0$ and $X_i$ is computationally demanding, since one has to check both $X_0 \subseteq X_i$ and $X_i \subseteq X_0$. Note that if at step $i$ of Procedure 2.1 the set is invariant, then the following set of inequalities
P_r = { x ∈ R^n : [ F_0 A_c1; F_0 A_c2; . . . ; F_0 A_cq ] x ≤ [ g_0 - max_{w∈W} F_0 D_1 w; g_0 - max_{w∈W} F_0 D_2 w; . . . ; g_0 - max_{w∈W} F_0 D_q w ] }
is redundant with respect to the set

X_0 = { x ∈ R^n : F_0 x ≤ g_0 }

Hence procedure 2.1 can be modified for computing a robust invariant set as follows.
Procedure 2.2: Robust invariant set computation
Input: The matrices A_c1, A_c2, . . . , A_cq, D_1, D_2, . . . , D_q and the sets X_c, W
Output: The robust invariant set Ω
1. Set i = 0, F_0 = F_c, g_0 = g_c and X_0 = { x ∈ R^n : F_0 x ≤ g_0 }.
2. Eliminate redundant inequalities of the following polytope

P = { x ∈ R^n : [ F_0; F_0 A_c1; F_0 A_c2; . . . ; F_0 A_cq ] x ≤ [ g_0; g_0 - max_{w∈W} F_0 D_1 w; g_0 - max_{w∈W} F_0 D_2 w; . . . ; g_0 - max_{w∈W} F_0 D_q w ] }
starting from the following set of inequalities

{ x ∈ R^n : F_0 A_cj x ≤ g_0 - max_{w∈W} F_0 D_j w }

with j = q, q-1, . . . , 1.
3. If all of these inequalities are redundant, then stop and set Ω = X_0. Else continue.
4. Eliminate redundant inequalities of the polytope P for the set of inequalities { x ∈ R^n : F_0 x ≤ g_0 }.
5. Set X_0 = P.
6. Set i = i + 1 and go to step 2.
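The redundancy tests in steps 2-4 reduce to one linear program per inequality: a row c x ≤ d is redundant with respect to {x : F x ≤ g} exactly when max c x over that polytope does not exceed d. A minimal sketch with scipy (assuming the polytope is bounded so the LP is solvable):

```python
import numpy as np
from scipy.optimize import linprog

def is_redundant(c, d, F, g):
    """True if c@x <= d holds for every x with F@x <= g,
    i.e. the inequality is redundant for the polytope {x : F x <= g}."""
    # linprog minimizes, so maximize c@x by minimizing -c@x;
    # bounds=(None, None) lifts the default x >= 0 restriction.
    res = linprog(-np.asarray(c, dtype=float), A_ub=F, b_ub=g,
                  bounds=(None, None), method="highs")
    return res.status == 0 and -res.fun <= d + 1e-9

# Polytope {x : |x| <= 1} in R^1:
F = np.array([[1.0], [-1.0]])
g = np.array([1.0, 1.0])
```

For instance, x ≤ 2 is redundant for {|x| ≤ 1} (the LP maximum is 1), while x ≤ 0.5 is not.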
It is well known [29], [43], [17] that the set Ω resulting from procedure 2.1 or procedure 2.2 turns out to be the maximal robust invariant set for system (1.20) with respect to constraints (1.19), that is Ω = Ω_max.
Example 1.2. Consider the uncertain system in example 1.1 with the same constraints on the state, on the input and on the disturbance. Applying procedure 2.2, the maximal robust invariant set is obtained as

Ω_max = { x ∈ R^n : [ 0.3590 0.9333; -0.3590 -0.9333; 0.6739 0.7388; -0.6739 -0.7388; 0.8979 0.4401; -0.8979 -0.4401; 0.3753 0.9269; -0.3753 -0.9269 ] x ≤ [ 0.9311; 0.9311; 1.2075; 1.2075; 1.7334; 1.7334; 1.7474; 1.7474 ] }
The sets X, X_c and Ω_max are depicted in Figure 1.7.

Fig. 1.7 Maximal robust invariant set Ω_max for example 1.2.
Definition 1.30. (One step robust controlled set) Given the polytopic system (1.18), the one step robust controlled set of the set C_0 = { x ∈ R^n : F_0 x ≤ g_0 } is given by all states that can be steered into C_0 in one step when a suitable control action is applied. The one step robust controlled set, denoted C_1, can be shown to be [16], [13]

C_1 = { x ∈ R^n : ∃u ∈ U : F_0 (A_i x + B_i u) ≤ g_0 - max_{w∈W} F_0 D_i w }   (1.24)

for all w ∈ W and for all i = 1, 2, . . . , q.
Remark 1.2. If the set C_0 is robust controlled invariant, then C_0 ⊆ C_1. Hence C_1 is a robust controlled invariant set.
Recall that the set Ω_max is a maximal robust invariant set. Define C_N as the set of all states that can be steered to Ω_max in no more than N steps along an admissible trajectory, i.e. a trajectory satisfying control, state and disturbance constraints. This set can be generated recursively by the following procedure.
Procedure 2.3: Robust controlled invariant set computation
Input: The matrices A_1, A_2, . . . , A_q, D_1, D_2, . . . , D_q and the sets X, U, W and the maximal robust invariant set Ω_max
Output: The robust controlled invariant set C_N
1. Set i = 0 and C_0 = Ω_max and let the matrices F_0, g_0 be the half-space representation of the set C_0, i.e. C_0 = { x ∈ R^n : F_0 x ≤ g_0 }.
2. Compute the expanded set P_i ⊂ R^{n+m}

P_i = { (x, u) ∈ R^{n+m} : [ F_i (A_1 x + B_1 u); F_i (A_2 x + B_2 u); . . . ; F_i (A_q x + B_q u) ] ≤ [ g_i - max_{w∈W} F_i D_1 w; g_i - max_{w∈W} F_i D_2 w; . . . ; g_i - max_{w∈W} F_i D_q w ] }
3. Compute the projection of P_i on R^n

P_i^n = { x ∈ R^n : ∃u ∈ U such that (x, u) ∈ P_i }
4. Set

C_{i+1} = P_i^n ∩ X

and let the matrices F_{i+1}, g_{i+1} be the half-space representation of the set C_{i+1}, i.e. C_{i+1} = { x ∈ R^n : F_{i+1} x ≤ g_{i+1} }.
5. If C_{i+1} = C_i, then stop and set C_N = C_i. Else continue.
6. If i = N, then stop. Else continue.
7. Set i = i + 1 and go to step 2.
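The projection in step 3 is a polyhedral projection; for a single scalar input it can be carried out by Fourier-Motzkin elimination. A minimal numpy sketch (a hypothetical helper, not the book's implementation; in practice redundancy removal must follow each call, since the pairing step can square the number of rows):

```python
import numpy as np

def fm_eliminate_last(A, c):
    """Eliminate the last variable from {z : A z <= c} by Fourier-Motzkin,
    returning (A_new, c_new) over the remaining variables."""
    a, b = A[:, :-1], A[:, -1]                  # rows read: a x + b u <= c
    pos, neg = b > 1e-9, b < -1e-9
    zero = ~pos & ~neg
    rows = [(a[zero], c[zero])]                 # u-free rows survive as-is
    for i in np.where(pos)[0]:
        for j in np.where(neg)[0]:
            # positive combination cancelling u: (-b_j)*row_i + b_i*row_j
            rows.append(((-b[j]) * a[i] + b[i] * a[j],
                         (-b[j]) * c[i] + b[i] * c[j]))
    A_new = np.vstack([np.atleast_2d(r[0]) for r in rows])
    c_new = np.hstack([np.atleast_1d(r[1]) for r in rows])
    return A_new, c_new

# {(x, u) : x + u <= 1, -u <= 0} projects onto {x : x <= 1}
A = np.array([[1.0, 1.0], [0.0, -1.0]])
c = np.array([1.0, 0.0])
Ap, cp = fm_eliminate_last(A, c)
```

Eliminating one variable at a time this way yields exactly the set P_i^n of step 3 when applied to the rows of P_i together with the input constraints u ∈ U.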
As a consequence of the fact that Ω_max is a robust invariant set, it follows that C_{i-1} ⊆ C_i for each i; therefore each C_i is a robust controlled invariant set and the sets C_i form a sequence of nested polytopes.
Note that the complexity of the set C_N does not have any analytic dependence on N and may increase without bound, thus placing a practical limitation on the choice of N.
The set C_N resulting from procedure 2.3 is robust controlled invariant, in the sense that for all x(k) ∈ C_N there exists an input u(k) ∈ R^m such that x(k+1) ∈ C_N for all unknown admissible α_i(k) and for all w(k) ∈ W. If the parameters α_i(k) are not known a priori, but their instantaneous value is available at each sampling period, then the following procedure can be used for reducing conservativeness in computing C_N [16].
Procedure 2.4: Controlled invariant set computation
Input: The matrices A_1, A_2, . . . , A_q, D_1, D_2, . . . , D_q and the sets X, U, W and the invariant set Ω_max
Output: The controlled invariant set C_N
1. Set i = 0 and C_0 = Ω_max and let the matrices F_0, g_0 be the half-space representation of the set C_0, i.e. C_0 = { x ∈ R^n : F_0 x ≤ g_0 }.
2. Compute the expanded sets P_ij ⊂ R^{n+m}

P_ij = { (x, u) ∈ R^{n+m} : F_i (A_j x + B_j u) ≤ g_i - max_{w∈W} F_i D_j w },  j = 1, 2, . . . , q
3. Compute the projections of P_ij on R^n

P_ij^n = { x ∈ R^n : ∃u ∈ U such that (x, u) ∈ P_ij },  j = 1, 2, . . . , q
4. Set

C_{i+1} = X ∩ ⋂_{j=1}^{q} P_ij^n

and let the matrices F_{i+1}, g_{i+1} be the half-space representation of the set C_{i+1}, i.e. C_{i+1} = { x ∈ R^n : F_{i+1} x ≤ g_{i+1} }.
5. If C_{i+1} = C_i, then stop and set C_N = C_i. Else continue.
6. If i = N, then stop. Else continue.
7. Set i = i + 1 and go to step 2.
For a better understanding of the two procedures 2.3 and 2.4, it is important to underline where the one step controlled set is calculated. In procedure 2.3 this set is represented by P_i^n, while in procedure 2.4 the one step controlled set corresponds to the intersection ⋂_{j=1}^{q} P_ij^n.
Example 1.3. Consider the uncertain system in example 1.1. The constraints on the state, on the input and on the disturbance are the same.
Using procedure 2.3, one obtains the robust controlled invariant sets C_N shown in Figure 1.8 for N = 1 and N = 7. The set C_1 is the set of all states that can be steered into Ω_max in one step when a suitable control action is applied. The set C_7 is the set of all states that can be steered into Ω_max in seven steps when a suitable control action is applied. Note that C_7 = C_8, so C_7 is the maximal robust controlled invariant set.
The set C_7 can be presented in half-space representation as
Fig. 1.8 Robust controlled invariant set for example 1.3.
C_7 = { x ∈ R^n : [ 0.3731 0.9278; -0.3731 -0.9278; 0.4992 0.8665; -0.4992 -0.8665; 0.1696 0.9855; -0.1696 -0.9855; 0.2142 0.9768; -0.2142 -0.9768; 0.7399 0.6727; -0.7399 -0.6727; 1.0000 0; -1.0000 0 ] x ≤ [ 1.3505; 1.3505; 1.3946; 1.3946; 1.5289; 1.5289; 1.4218; 1.4218; 1.8835; 1.8835; 3.0000; 3.0000 ] }
1.3 Enlarging the domain of attraction

This section presents an original contribution to the estimation of the domain of attraction for uncertain and time-varying linear discrete-time systems with a saturated input. Ellipsoidal and polyhedral sets will be used for characterizing the domain of attraction. The use of ellipsoidal sets is attractive for its simple characterization as the solution of an LMI problem, while the use of polyhedral sets offers a better approximation of the domain of attraction.
1.3.1 Problem formulation
Consider the following time-varying or uncertain linear discrete-time system
x(k +1) = A(k)x(k) +B(k)u(k) (1.25)
where

A(k) = Σ_{i=1}^{q} α_i(k) A_i,  B(k) = Σ_{i=1}^{q} α_i(k) B_i,  Σ_{i=1}^{q} α_i(k) = 1,  α_i(k) ≥ 0   (1.26)

with given matrices A_i ∈ R^{n×n} and B_i ∈ R^{n×m}, i = 1, 2, . . . , q.
Both the state vector x(k) and the control vector u(k) are subject to the constraints

x(k) ∈ X,  X = { x ∈ R^n : F_i x ≤ g_i, i = 1, 2, . . . , n_1 }
u(k) ∈ U,  U = { u ∈ R^m : u_il ≤ u_i ≤ u_iu, i = 1, 2, . . . , m }   (1.27)

where F_i ∈ R^{1×n} is the i-th row of the matrix F_x ∈ R^{n_1×n}, g_i is the i-th component of the vector g_x ∈ R^{n_1×1}, and u_il and u_iu are respectively the i-th components of the vectors u_l and u_u, which are the lower and upper bounds of the input u. It is assumed that the matrix F_x and the vectors u_l ∈ R^{m×1}, u_u ∈ R^{m×1} are constant, with u_l < 0 and u_u > 0, such that the origin is contained in the interior of X and U.
Assume that, using established results in control theory (LQR, LQG, LMI based, etc.), one can find a feedback gain K ∈ R^{m×n} such that

u(k) = Kx(k)   (1.28)

robustly quadratically stabilizes the system (1.25). We would like to estimate the domain of attraction of the origin for the closed loop system

x(k+1) = A(k)x(k) + B(k) sat(Kx(k))   (1.29)
1.3.2 Saturation nonlinearity modeling - A linear differential inclusion approach

In this section, a linear differential inclusion approach used for modeling the saturation function is briefly reviewed. This modeling framework was first proposed by Hu et al. in [36], [37], [38]. Its generalization was then developed by Alamo et al. [1], [2]. The main idea of the differential inclusion approach is to use an auxiliary vector variable v, and to compose the output of the saturation function as a convex combination of the actual control signal u and v.
Fig. 1.9 The saturation function
The saturation function is defined as follows

sat(u_i) = { u_il if u_i ≤ u_il;  u_i if u_il ≤ u_i ≤ u_iu;  u_iu if u_iu ≤ u_i }   (1.30)

for i = 1, 2, . . . , m, where u_il and u_iu are respectively the lower bound and the upper bound of u_i.
To present the approach simply, let us first consider the case m = 1. In this case u and v are scalars. Clearly, if

u_l ≤ v ≤ u_u   (1.31)

then the saturation function can be rewritten as a convex combination of u and v

sat(u) = λu + (1 - λ)v   (1.32)

with 0 ≤ λ ≤ 1, or

sat(u) = Conv{u, v}   (1.33)

Figure 1.10 illustrates this fact.
Analogously, for m = 2 and v such that

u_1l ≤ v_1 ≤ u_1u,  u_2l ≤ v_2 ≤ u_2u   (1.34)

the saturation function can be expressed as

sat(u) = λ_1 [u_1; u_2] + λ_2 [u_1; v_2] + λ_3 [v_1; u_2] + λ_4 [v_1; v_2]   (1.35)

where
Σ_{i=1}^{4} λ_i = 1,  λ_i ≥ 0   (1.36)

Fig. 1.10 Linear differential inclusion approach. Case 1: u ≤ u_l gives sat(u) = u_l; case 2: u_l ≤ u ≤ u_u gives sat(u) = u; case 3: u_u ≤ u gives sat(u) = u_u.
or, equivalently

sat(u) = Conv{ [u_1; u_2], [u_1; v_2], [v_1; u_2], [v_1; v_2] }   (1.37)
Denote now D_m as the set of m×m diagonal matrices whose diagonal elements are either 0 or 1. For example, if m = 2 then

D_2 = { [0 0; 0 0], [1 0; 0 0], [0 0; 0 1], [1 0; 0 1] }

There are 2^m elements in D_m. Denote each element of D_m as E_i, i = 1, 2, . . . , 2^m, and define E_i^- = I - E_i. For example, if

E_1 = [0 0; 0 0]

then

E_1^- = [1 0; 0 1] - [0 0; 0 0] = [1 0; 0 1]

Clearly, if E_i ∈ D_m then E_i^- is also in D_m. The generalization of the results (1.33)-(1.37) is reported in the following lemma [36], [37], [38].
Lemma 1.1. [37] Consider two vectors u ∈ R^m and v ∈ R^m such that u_il ≤ v_i ≤ u_iu for all i = 1, 2, . . . , m. Then it holds that

sat(u) ∈ Conv{ E_i u + E_i^- v, i = 1, 2, . . . , 2^m }   (1.38)

Consequently, there exist λ_i, i = 1, 2, . . . , 2^m, with λ_i ≥ 0 and Σ_{i=1}^{2^m} λ_i = 1, such that

sat(u) = Σ_{i=1}^{2^m} λ_i (E_i u + E_i^- v)
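The 2^m matrices of D_m are easy to enumerate, and since each component of a vertex E_i u + E_i^- v is either u_k or v_k, the convex hull in (1.38) is an axis-aligned box, so membership reduces to a componentwise interval test. A small numerical check of Lemma 1.1 (an illustrative sketch, not a proof):

```python
import itertools
import numpy as np

m = 2
u_l, u_u = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

# The 2^m diagonal 0/1 matrices of D_m:
D_m = [np.diag(d) for d in itertools.product([0.0, 1.0], repeat=m)]

u = np.array([2.0, -3.0])        # violates both bounds
v = np.array([0.5, -0.5])        # satisfies u_l <= v <= u_u
sat_u = np.clip(u, u_l, u_u)

# Vertices E_i u + (I - E_i) v of the convex hull in (1.38):
verts = np.array([E @ u + (np.eye(m) - E) @ v for E in D_m])

# Hull is a box, so check sat(u) componentwise against vertex extremes:
inside = np.all((sat_u >= verts.min(axis=0) - 1e-12) &
                (sat_u <= verts.max(axis=0) + 1e-12))
```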
1.3.3 Enlarging the domain of attraction - Ellipsoidal set approach

The aim of this section is twofold. First, we provide an invariance condition of ellipsoidal sets for discrete-time linear time-varying or uncertain systems with a saturated input and state constraints. This invariance condition is an extended version of the previously published results in [37] for the robust case. Secondly, we propose a method for computing a nonlinear controller u(k) = sat(Kx(k)) which makes a given ellipsoid invariant.
For simplicity of exposition, consider the case in which the upper and the lower bounds are opposite and equal to u_max, precisely

-u_l = u_u = u_max
It is assumed that the polyhedral constraint set X is symmetric with g_i = 1 for all i = 1, 2, . . . , n_1. Clearly this assumption can always be made, since

F_i x ≤ g_i  ⟺  (F_i / g_i) x ≤ 1

for all g_i > 0.
For a matrix H ∈ R^{m×n}, define X_c as the intersection between the state constraint set X and the polyhedral set F(H, u_max) = { x : |Hx| ≤ u_max }, i.e.

X_c = { x ∈ R^n : [ F_x; H; -H ] x ≤ [ 1; u_max; u_max ] }
We are now ready to state the main result of this section.

Theorem 1.2. If there exist a symmetric matrix P ∈ R^{n×n} and a matrix H ∈ R^{m×n} such that

[ P   (A_i + B_i(E_j K + E_j^- H))P;   P(A_i + B_i(E_j K + E_j^- H))^T   P ] ⪰ 0   (1.39)

for all i = 1, 2, . . . , q, j = 1, . . . , 2^m, and E(P) ⊆ X_c, then the ellipsoid E(P) is a robust invariant set.
Proof. Assume that there exist matrices P and H such that the conditions (1.39) are satisfied. Based on Lemma 1.1 and by choosing v = Hx with |Hx| ≤ u_max, one has

sat(Kx) = Σ_{j=1}^{2^m} λ_j (E_j Kx + E_j^- Hx)

and subsequently

x(k+1) = Σ_{i=1}^{q} α_i(k) [ A_i + B_i Σ_{j=1}^{2^m} λ_j (E_j K + E_j^- H) ] x(k)
       = Σ_{i=1}^{q} α_i(k) [ Σ_{j=1}^{2^m} λ_j A_i + B_i Σ_{j=1}^{2^m} λ_j (E_j K + E_j^- H) ] x(k)
       = Σ_{i=1}^{q} α_i(k) Σ_{j=1}^{2^m} λ_j [ A_i + B_i (E_j K + E_j^- H) ] x(k)
       = Σ_{i=1,j=1}^{q,2^m} α_i(k) λ_j [ A_i + B_i (E_j K + E_j^- H) ] x(k) = A_c(k) x(k)

where

A_c(k) = Σ_{i=1,j=1}^{q,2^m} α_i(k) λ_j [ A_i + B_i (E_j K + E_j^- H) ]
From the fact that

Σ_{i=1,j=1}^{q,2^m} α_i(k) λ_j = Σ_{i=1}^{q} α_i(k) ( Σ_{j=1}^{2^m} λ_j ) = 1

it is clear that A_c(k) belongs to the polytope P_c, the vertices of which are given by taking all possible combinations of A_i + B_i(E_j K + E_j^- H), where i = 1, 2, . . . , q and j = 1, 2, . . . , 2^m.
The ellipsoid E(P) = { x ∈ R^n : x^T P^{-1} x ≤ 1 } is invariant if and only if for all x ∈ R^n such that x^T P^{-1} x ≤ 1 it holds that

x^T A_c(k)^T P^{-1} A_c(k) x ≤ 1   (1.40)

With the same argument as in Section 1.2.2, it is clear that condition (1.40) can be transformed to

[ P   A_c(k)P;   P A_c(k)^T   P ] ⪰ 0   (1.41)

The left-hand side of equation (1.40) can be treated as a function of k and reaches its maximum at one of the vertices of A_c(k), so the set of LMI conditions to be satisfied to check invariance is the following
[ P   (A_i + B_i(E_j K + E_j^- H))P;   P(A_i + B_i(E_j K + E_j^- H))^T   P ] ⪰ 0

for all i = 1, 2, . . . , q, j = 1, . . . , 2^m.
Note that conditions (1.39) involve the product of the two unknown parameters H and P. By denoting Y = HP, the LMI conditions (1.39) can be rewritten as

[ P   (A_i P + B_i E_j KP + B_i E_j^- Y);   (P A_i^T + P K^T E_j B_i^T + Y^T E_j^- B_i^T)   P ] ⪰ 0   (1.42)

for all i = 1, 2, . . . , q, j = 1, . . . , 2^m. Thus the unknown matrices P and Y enter linearly in the conditions (1.42).
Again, as in Section 1.2.2, in general one would like to have the largest invariant ellipsoid for system (1.25) under the feedback u(k) = sat(Kx(k)) with respect to constraints (1.27). This can be done by solving the following LMI problem

J = max_{P,Y} trace(P)   (1.43)

subject to

Invariance condition

[ P   (A_i P + B_i E_j KP + B_i E_j^- Y);   (P A_i^T + P K^T E_j B_i^T + Y^T E_j^- B_i^T)   P ] ⪰ 0   (1.44)

for all i = 1, 2, . . . , q, j = 1, . . . , 2^m

Constraint satisfaction

On the state: [ 1   F_i P;   P F_i^T   P ] ⪰ 0,  i = 1, 2, . . . , n_1

On the input: [ u_imax^2   Y_i;   Y_i^T   P ] ⪰ 0,  i = 1, 2, . . . , m
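By Schur complement (for P ≻ 0), the state-constraint LMI above is equivalent to F_i P F_i^T ≤ 1, i.e. the support value sqrt(F_i P F_i^T) of E(P) in direction F_i^T does not exceed the bound; the input LMI plays the same role for |Hx| ≤ u_max through Y = HP. A quick numerical containment check under these definitions:

```python
import numpy as np

def ellipsoid_in_halfspace(P, f, g):
    """E(P) = {x : x' P^{-1} x <= 1} lies in {x : f@x <= g} iff the
    support value sqrt(f P f') of the ellipsoid is at most g."""
    return float(np.sqrt(f @ P @ f)) <= g + 1e-12

P = np.eye(2)                                                # unit disk
ok = ellipsoid_in_halfspace(P, np.array([1.0, 0.0]), 1.0)    # support 1: touches
bad = ellipsoid_in_halfspace(P, np.array([2.0, 0.0]), 1.0)   # support 2 > 1
```

Running this test for every row of F_x (and of H) verifies a posteriori that a candidate P returned by the SDP solver indeed satisfies the constraint-satisfaction block of (1.43).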
Example 1.4. Consider the following uncertain discrete time system

x(k+1) = A(k)x(k) + B(k)u(k)

with

A(k) = α(k)A_1 + (1 - α(k))A_2,  B(k) = α(k)B_1 + (1 - α(k))B_2

and

A_1 = [1 0.1; 0 1],  B_1 = [0; 1],  A_2 = [1 0.2; 0 1],  B_2 = [0; 1.5]
At each sampling time, α(k) ∈ [0, 1] is a uniformly distributed pseudo-random number. The constraints are

-10 ≤ x_1 ≤ 10,  -10 ≤ x_2 ≤ 10
-1 ≤ u ≤ 1

The feedback matrix gain is chosen as

K = [-1.8112  -0.8092]
By solving the optimization problem (1.43) one obtains the matrices P and Y

P = [5.0494 -8.9640; -8.9640 28.4285],  Y = [0.4365 -4.2452]

Hence

H = YP^{-1} = [-0.4058  -0.2773]

Based on the LMI problem (1.17), an invariant ellipsoid E(P_1) is obtained under the linear feedback u(k) = Kx(k) with

P_1 = [1.1490 -3.1747; -3.1747 9.9824]
Figure 1.11 presents invariant sets under different kinds of controllers. The set E(P) is obtained with the saturated controller u(k) = sat(Kx(k)) while the set E(P_1) is obtained with the linear controller u(k) = Kx(k).

Fig. 1.11 Invariant sets with different kinds of controllers for example 1.4. The set E(P) is obtained with the saturated controller u(k) = sat(Kx(k)) while the set E(P_1) is obtained with the linear controller u(k) = Kx(k).
Figure 1.12 shows different state trajectories of the closed loop system with the controller u(k) = sat(Kx(k)), depending on the realizations of α(k) and on the initial conditions.

Fig. 1.12 State trajectories of the closed loop system for example 1.4.
In the first part of this section, theorem 1.2 was exploited in the following manner: if the ellipsoid E(P) is robust invariant for the system

x(k+1) = A(k)x(k) + B(k) sat(Kx(k))

then there exists a linear controller u(k) = Hx(k), |Hx(k)| ≤ u_max, such that the ellipsoid E(P) is robust invariant for the system

x(k+1) = A(k)x(k) + B(k)Hx(k)

and the matrix gain H ∈ R^{m×n} is obtained from the optimization problem (1.43).
Theorem 1.2 will now be exploited in a different manner. We would like to design a saturated feedback u(k) = sat(Kx(k)) that makes a given invariant ellipsoid E(P) contractive with a maximal contraction factor. This invariant ellipsoid E(P) can be obtained together with a linear feedback gain u(k) = Hx(k) aiming to maximize some convex objective function J(P), for example trace(P). Designing the invariant ellipsoid E(P) and the controller u(k) = Hx(k) can be done by solving the LMI problem (1.17). In the second stage, based on the gain H and the ellipsoid E(P), a saturated controller u(k) = sat(Kx(k)) which aims to maximize some contraction factor 1 - g is computed.
It is worth noticing that the invariance condition (1.17) corresponds to condition (1.42) with E_j = 0 and E_j^- = I - E_j = I. Following the proof of theorem 1.2, it is clear that for the following system

x(k+1) = A(k)x(k) + B(k) sat(Kx(k))
the ellipsoid E(P) is contractive with the contraction factor 1 - g if

(A_i + B_i(E_j K + E_j^- H))^T P^{-1} (A_i + B_i(E_j K + E_j^- H)) - P^{-1} ⪯ -g P^{-1}

for all i = 1, 2, . . . , q and for all j = 1, 2, . . . , 2^m such that E_j ≠ 0. By using the Schur complement, this problem can be converted into an LMI optimization as
J = max_{g,K} g   (1.45)

subject to

[ (1 - g)P^{-1}   (A_i + B_i(E_j K + E_j^- H))^T;   (A_i + B_i(E_j K + E_j^- H))   P ] ⪰ 0

for all i = 1, 2, . . . , q and j = 1, 2, . . . , 2^m with E_j ≠ 0. Recall that here the only unknown parameters are the matrix K ∈ R^{m×n} and the scalar g, the matrices P and H being given.
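For a single fixed vertex A_cl = A_i + B_i(E_j K + E_j^- H) the best contraction factor in the condition above is computable directly: with S a Cholesky factor of P (P = S S^T), congruence by S turns A_cl^T P^{-1} A_cl ⪯ (1 - g) P^{-1} into the eigenvalue bound λ_max(S^T A_cl^T P^{-1} A_cl S) ≤ 1 - g. A numpy sketch with a trivial vertex (illustrative data, not from the text):

```python
import numpy as np

def best_contraction(A_cl, P):
    """Largest g with A_cl' P^{-1} A_cl <= (1-g) P^{-1} (Loewner order):
    congruence by S, where P = S S', reduces it to an eigenvalue bound."""
    S = np.linalg.cholesky(P)
    P_inv = np.linalg.inv(P)
    lam_max = np.max(np.linalg.eigvalsh(S.T @ A_cl.T @ P_inv @ A_cl @ S))
    return 1.0 - lam_max

g = best_contraction(0.5 * np.eye(2), np.eye(2))   # lam_max = 0.25, so g = 0.75
```

Taking the minimum of this value over all vertices (i, j) with E_j ≠ 0 gives the contraction factor achieved by a fixed K, which is what the LMI problem (1.45) maximizes over K.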
Remark 1.3. The proposed two-stage control design benefits from global uniqueness properties of the solution. This is due to the one-way dependence of the two (prioritized) objectives: the trace maximization precedes the associated contraction factor.
Example 1.5. Consider the uncertain system in example 1.4 with the same constraints on the state vector and on the input vector. In the first stage, by solving the optimization problem (1.17), one obtains the matrices P and Y

P = [100.0000 -43.1051; -43.1051 100.0000],  Y = [-3.5691 -6.5121]

Hence H = YP^{-1} = [-0.0783 -0.0989].
In the second stage, by solving the optimization problem (1.45), one obtains the feedback gain K

K = [-0.3342  -0.7629]
Figure 1.13 shows the invariant ellipsoid E(P). This figure also shows the state trajectories of the closed loop system under the saturated feedback u(k) = sat(Kx(k)) for different initial conditions and different realizations of α(k).

Fig. 1.13 Invariant ellipsoid and state trajectories of the closed loop system for example 1.5.

For the initial condition x(0) = [4 10]^T, Figure 1.14 presents the state trajectories of the closed loop system with the saturated controller u(k) = sat(Kx(k)) and with the linear controller u(k) = Hx(k). It can be observed that the time to regulate the plant to the origin using the linear controller is longer than using the saturated controller. The explanation is that with the controller u(k) = Hx(k) the control action saturates only at some points of the boundary of the ellipsoid E(P), while with the controller u(k) = sat(Kx(k)) it saturates not only on the boundary of E(P) but also inside E(P). This effect can be seen in Figure 1.15, which also shows the realization of α(k).

Fig. 1.14 State trajectories as a function of time for example 1.5. The green line is obtained by using the saturated feedback u(k) = sat(Kx(k)). The dashed red line is obtained by using the linear feedback gain u(k) = Hx(k).
Fig. 1.15 Input trajectory and realization of α(k) as a function of time for example 1.5. The green line is obtained by using the saturated feedback u(k) = sat(Kx(k)). The dashed red line is obtained by using the linear feedback gain u(k) = Hx(k).

1.3.4 Enlarging the domain of attraction - Polyhedral set approach

In this section, the problem of estimating the domain of attraction is addressed by using polyhedral sets.
For the given linear state feedback controller u(k) = Kx(k), it is clear that the largest polyhedral invariant set is the maximal robust invariant set Ω_max. The set Ω_max can be readily found using procedure 2.1 or procedure 2.2. From this point on, it is assumed that the set Ω_max is known.
Our aim in this section is to find the largest polyhedral invariant set characterizing an estimation of the domain of attraction for system (1.25) under the saturated controller u(k) = sat(Kx(k)). To this aim, recall from Lemma 1.1 that the saturation function can be expressed as

sat(Kx) = Σ_{i=1}^{2^m} λ_i (E_i Kx + E_i^- v),  Σ_{i=1}^{2^m} λ_i = 1,  λ_i ≥ 0   (1.46)
with u_l ≤ v ≤ u_u, where E_i is an element of D_m (the set of m×m diagonal matrices whose diagonal elements are either 0 or 1) and E_i^- = I - E_i.
It is worth noticing that the parameters λ_i in (1.46) are a function of the state x [22]. To see this, let us consider the case in which m = 1, u_l = -1, u_u = 1, and assume v = 0. In this case, one has

sat(Kx) = λ_1 Kx

If, for example, -1 ≤ Kx ≤ 1, then λ_1 = 1. If Kx = 2, then λ_1 = 0.5. If Kx = 5, then λ_1 = 0.2.
Similarly, if for example v = 0.5Kx, then

sat(Kx) = (λ_1 + 0.5λ_2)Kx

If -1 ≤ Kx ≤ 1, then λ_1 = 1, λ_2 = 0. If Kx = 2, then λ_1 = 0, λ_2 = 1.
With equation (1.46) the closed loop system can be rewritten as

x(k+1) = Σ_{i=1}^{q} α_i(k) [ A_i x(k) + B_i Σ_{j=1}^{2^m} λ_j (E_j Kx(k) + E_j^- v) ]
       = Σ_{i=1}^{q} α_i(k) [ Σ_{j=1}^{2^m} λ_j A_i x(k) + B_i Σ_{j=1}^{2^m} λ_j (E_j Kx(k) + E_j^- v) ]
       = Σ_{i=1}^{q} α_i(k) Σ_{j=1}^{2^m} λ_j [ A_i x(k) + B_i (E_j Kx(k) + E_j^- v) ]

or

x(k+1) = Σ_{j=1}^{2^m} λ_j Σ_{i=1}^{q} α_i(k) [ (A_i + B_i E_j K) x(k) + B_i E_j^- v ]   (1.47)
The variable v ∈ R^m can be considered as an external controllable input for system (1.47). Hence, the problem of finding the largest polyhedral invariant set Ω_s for system (1.25) boils down to the problem of computing the largest controlled invariant set for system (1.47).
Since the parameters λ_j are a function of the state x(k), they are known once the state vector x(k) is available. Therefore, system (1.47) can be considered as an uncertain system with respect to the parameters α_i and a time-varying system with respect to the known parameters λ_j. Hence the following procedure, based on the results in Section 1.2.3, can be used to obtain the largest polyhedral invariant set Ω_s for system (1.47): procedure 2.4 is used to deal with the known λ_j, while procedure 2.3 is used to deal with the uncertain α_i.
Procedure 2.5: Invariant set computation
Input: The matrices A_1, A_2, . . . , A_q and the sets X, U and the invariant set Ω_max
Output: The invariant set Ω_s
1. Set t = 0 and C_0 = Ω_max and let the matrices F_0, g_0 be the half-space representation of the set C_0, i.e. C_0 = { x ∈ R^n : F_0 x ≤ g_0 }.
2. Compute the expanded sets P_tj ⊂ R^{n+m}

P_tj = { (x, v) ∈ R^{n+m} : [ F_t((A_1 + B_1 E_j K)x + B_1 E_j^- v); F_t((A_2 + B_2 E_j K)x + B_2 E_j^- v); . . . ; F_t((A_q + B_q E_j K)x + B_q E_j^- v) ] ≤ [ g_t; g_t; . . . ; g_t ] }
3. Compute the projections of P_tj on R^n

P_tj^n = { x ∈ R^n : ∃v ∈ U such that (x, v) ∈ P_tj },  j = 1, 2, . . . , 2^m
4. Set

C_{t+1} = X ∩ ⋂_{j=1}^{2^m} P_tj^n

and let the matrices F_{t+1}, g_{t+1} be the half-space representation of the set C_{t+1}, i.e. C_{t+1} = { x ∈ R^n : F_{t+1} x ≤ g_{t+1} }.
5. If C_{t+1} = C_t, then stop and set Ω_s = C_t. Else continue.
6. Set t = t + 1 and go to step 2.
It is clear that C_{t-1} ⊆ C_t, since the set Ω_max is robust invariant. Hence each C_t is a robust controlled invariant set. The set sequence C_0, C_1, . . . converges to Ω_s, which is the largest polyhedral invariant set.
Remark 1.4. Each one of the polytopes C_t is an estimation of the domain of attraction for system (1.25) under the saturated controller u(k) = sat(Kx(k)). That means that procedure 2.5 can be stopped at any time before converging to the true largest invariant set Ω_s.
It is worth noticing that the matrix H ∈ R^{m×n} resulting from the optimization problem (1.43) can also be employed for computing the polyhedral invariant set Ω_s^H with respect to the saturated controller u(k) = sat(Kx(k)). Clearly the set Ω_s^H is a subset of Ω_s, since the vector v is now restricted to the form v(k) = Hx(k). In this case, based on equation (1.47), one gets

x(k+1) = Σ_{j=1}^{2^m} λ_j Σ_{i=1}^{q} α_i(k) (A_i + B_i E_j K + B_i E_j^- H) x(k)   (1.48)
Define the set X_H as follows

X_H = { x ∈ R^n : F_H x ≤ g_H }   (1.49)

where

F_H = [ F_x; H; -H ],  g_H = [ g_x; u_u; -u_l ]

With the set X_H, the following procedure can be used for computing the polyhedral invariant set Ω_s^H.
Procedure 2.6: Invariant set computation
Input: The matrices A_1, A_2, . . . , A_q and the set X_H and the invariant set Ω_max
Output: The invariant set Ω_s^H
1. Set t = 0 and C_0 = Ω_max and let the matrices F_0, g_0 be the half-space representation of the set C_0, i.e. C_0 = { x ∈ R^n : F_0 x ≤ g_0 }.
2. Compute the sets P_tj

P_tj = { x ∈ R^n : [ F_t(A_1 + B_1 E_j K + B_1 E_j^- H)x; F_t(A_2 + B_2 E_j K + B_2 E_j^- H)x; . . . ; F_t(A_q + B_q E_j K + B_q E_j^- H)x ] ≤ [ g_t; g_t; . . . ; g_t ] }
3. Set

C_{t+1} = X_H ∩ ⋂_{j=1}^{2^m} P_tj

and let the matrices F_{t+1}, g_{t+1} be the half-space representation of the set C_{t+1}, i.e. C_{t+1} = { x ∈ R^n : F_{t+1} x ≤ g_{t+1} }.
4. If C_{t+1} = C_t, then stop and set Ω_s^H = C_t. Else continue.
5. Set t = t + 1 and go to step 2.
Since the matrices (A_i + B_i E_j K + B_i E_j^- H) are asymptotically stable for i = 1, 2, . . . , q and j = 1, 2, . . . , 2^m, procedure 2.6 terminates in finite time [17]. In other words, there exists t = t_max such that C_{t_max} = C_{t_max+1}.
Example 1.6. Consider again example 1.4. The constraints on the state vector and on the input vector are the same. The feedback controller is

K = [-1.8112  -0.8092]

By using procedure 2.5 one obtains the robust polyhedral invariant set Ω_s as shown in Figure 1.16. Procedure 2.5 terminated with t = 121. Figure 1.16 also shows the robust polyhedral invariant set Ω_s^H obtained with the auxiliary matrix H, where

H = [-0.4058  -0.2773]

and the robust polyhedral invariant set Ω_max obtained with the controller u(k) = Kx. The sets Ω_s and Ω_s^H can be presented in half-space representation as
Fig. 1.16 Robust invariant sets with different kinds of controllers and different methods for example 1.6. The polyhedral set Ω_s is obtained with respect to the controller u(k) = sat(Kx(k)). The polyhedral set Ω_s^H is obtained with respect to the controller u(k) = sat(Kx(k)) using an auxiliary matrix H. The polyhedral set Ω_max is obtained with the controller u(k) = Kx.

Ω_s = { x ∈ R^2 : [ 0.9996 0.0273; -0.9996 -0.0273; 0.9993 0.0369; -0.9993 -0.0369; 0.9731 0.2305; -0.9731 -0.2305; 0.9164 0.4004; -0.9164 -0.4004; 0.8434 0.5372; -0.8434 -0.5372; 0.7669 0.6418; -0.7669 -0.6418; 0.6942 0.7198; -0.6942 -0.7198; 0.6287 0.7776; -0.6287 -0.7776; 0.5712 0.8208; -0.5712 -0.8208 ] x ≤ [ 3.5340; 3.5340; 3.5104; 3.5104; 3.4720; 3.4720; 3.5953; 3.5953; 3.8621; 3.8621; 4.2441; 4.2441; 4.7132; 4.7132; 5.2465; 5.2465; 5.8267; 5.8267 ] }
Ω_s^H = { x ∈ R^2 : [ 0.8256 0.5642; -0.8256 -0.5642; 0.9999 0.0108; -0.9999 -0.0108; 0.9986 0.0532; -0.9986 -0.0532; 0.6981 0.7160; -0.6981 -0.7160; 0.9791 0.2033; -0.9791 -0.2033; 0.4254 0.9050; -0.4254 -0.9050 ] x ≤ [ 2.0346; 2.0346; 2.3612; 2.3612; 2.3467; 2.3467; 2.9453; 2.9453; 2.3273; 2.3273; 4.7785; 4.7785 ] }
Figure 1.17 presents state trajectories of the closed loop system with the controller u(k) = sat(Kx(k)) for different initial conditions and different realizations of α(k).

Fig. 1.17 State trajectories of the closed loop system with the controller u(k) = sat(Kx(k)) for example 1.6.
Chapter 2
Optimal and Constrained Control - An Overview

Abstract In this chapter some of the most important approaches to constrained and optimal control are briefly reviewed. The chapter is organized in the following sections:
1. Dynamic programming.
2. Pontryagin's maximum principle.
3. Model predictive control, implicit and explicit solution.
4. Vertex control.
2.1 Dynamic programming
The purpose of this section is to present a brief introduction to dynamic programming, which provides a sufficient condition for optimality.
Dynamic programming was developed by R.E. Bellman in the early fifties [5], [6], [7], [8]. It is a powerful method for solving control problems for various classes of systems, e.g. linear, time-varying or nonlinear. The optimal solution is expressed in a time-varying state-feedback form.
Dynamic programming is based on the principle of optimality [9]:
An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision.
To begin, let us consider the following optimal control problem

min_{x,u} { Σ_{k=0}^{N-1} L(x(k), u(k)) + E(x(N)) }   (2.1)

subject to

x(k+1) = f(x(k), u(k)),  k = 0, 1, . . . , N-1
u(k) ∈ U,  k = 0, 1, . . . , N-1
x(k) ∈ X,  k = 0, 1, . . . , N
x(0) = x_0

where
x(k) and u(k) are respectively the state and control variables.
N > 0 is called the time horizon.
L(x(k), u(k)) is the Lagrange objective function, which represents a cost along the trajectory.
E(x(N)) is the Mayer objective function, which represents the terminal cost.
U and X are the sets of constraints on the input and state variables, respectively.
x(0) is the initial condition.
Define the value function V_i(x(i)) as follows

V_i(x(i)) = min_u { E(x(N)) + Σ_{k=i}^{N-1} L(x(k), u(k)) }   (2.2)

subject to

x(k+1) = f(x(k), u(k)),  k = i, i+1, . . . , N-1
u(k) ∈ U,  k = i, i+1, . . . , N-1
x(k) ∈ X,  k = i, i+1, . . . , N

for i = N, N-1, N-2, . . . , 0.
Clearly V_i(x(i)) is the optimal cost over the remaining horizon [i, N], starting from the state x(i). Based on the principle of optimality, one has

V_i(x(i)) = min_{u(i)} { L(x(i), u(i)) + V_{i+1}(x(i+1)) }

By substituting

x(i+1) = f(x(i), u(i))

one gets

V_i(x(i)) = min_{u(i)} { L(x(i), u(i)) + V_{i+1}(f(x(i), u(i))) }   (2.3)
The problem (2.3) is much simpler than the one in (2.1) because it involves only
one decision variable u(i). To actually solve this problem, we work backwards in
time from i = N, starting with
V
N
(x(N)) = E(x(N))
Based on the value function V_{i+1}(x(i+1)) with i = N-1, N-2, \ldots, 0, the optimal control values u^*(i) can be obtained as
\[
u^*(i) = \arg\min_{u(i)}\left\{L(x(i),u(i)) + V_{i+1}(f(x(i),u(i)))\right\}
\]
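As an illustrative sketch of this backward recursion, the snippet below discretizes a scalar system on state and input grids and propagates V_i backwards from V_N = E; the dynamics, costs, grids and horizon here are hypothetical choices, not taken from the text.

```python
import numpy as np

# Backward dynamic programming on grids (illustrative numbers, not from the text)
f = lambda x, u: 0.9 * x + u            # dynamics x(k+1) = f(x(k), u(k))
L = lambda x, u: x**2 + u**2            # Lagrange (stage) cost
E = lambda x: 10.0 * x**2               # Mayer (terminal) cost

X = np.linspace(-2.0, 2.0, 81)          # state grid (constraint set X)
U = np.linspace(-1.0, 1.0, 41)          # input grid (constraint set U)
N = 10                                  # time horizon

V = E(X)                                # V_N(x) = E(x)
for i in range(N - 1, -1, -1):
    # Q_tab[j, l] = L(x_j, u_l) + V_{i+1}(f(x_j, u_l)), cf. (2.3)
    Q_tab = np.empty((X.size, U.size))
    for l, u in enumerate(U):
        xn = np.clip(f(X, u), X[0], X[-1])
        Q_tab[:, l] = L(X, u) + np.interp(xn, X, V)
    u_star = U[np.argmin(Q_tab, axis=1)]   # optimal feedback u*(i) on the grid
    V = Q_tab.min(axis=1)                  # V_i(x)
```

Because the value function is propagated backwards from V_N = E, the loop yields a time-varying state feedback u^*(i), as stated at the start of the section.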
2.2 Pontryagin's maximum principle
The second direction in optimal control theory is Pontryagin's maximum principle [57], [23]. In contrast to the classical calculus of variations, this approach allows us to solve, in a very general way, control problems in which the control input is subject to constraints. The maximum principle is therefore a basic mathematical technique used for calculating the optimal control values in many important problems of mathematics, engineering, economics, etc. Here, for illustration, we consider the following simple optimal control problem
\[
\min_{x,u}\left\{\sum_{k=0}^{N-1} L(x(k),u(k)) + E(x(N))\right\} \tag{2.4}
\]
subject to
\[
\begin{cases}
x(k+1) = f(x(k),u(k)), & k = 0,1,\ldots,N-1\\
u(k) \in U, & k = 0,1,\ldots,N-1\\
x(0) = x_0
\end{cases}
\]
For simplicity the state variables are considered unconstrained. To solve the optimal control problem (2.4) with Pontryagin's maximum principle, the following Hamiltonian H_k(\cdot) is defined
\[
H_k(x(k),u(k),\lambda(k+1)) = L(x(k),u(k)) + \lambda^T(k+1) f(x(k),u(k)) \tag{2.5}
\]
where \lambda(k) with k = 1, 2, \ldots, N are called the co-state or adjoint variables. For problem (2.4), these variables must satisfy the so-called co-state equation
\[
\lambda(k) = \frac{\partial H_k}{\partial x(k)}, \quad k = 1, 2, \ldots, N-1
\]
and
\[
\lambda(N) = \frac{\partial E(x(N))}{\partial x(N)}
\]
For given state and co-state variables, the optimal control value is achieved by choosing the control u^*(k) that minimizes the Hamiltonian at each time instant, i.e.
\[
H_k(x^*(k),u^*(k),\lambda^*(k+1)) \le H_k(x^*(k),u(k),\lambda^*(k+1)), \quad \forall u(k) \in U
\]
Note that a convexity assumption on the Hamiltonian is needed, i.e. the function H_k(x(k),u(k),\lambda(k+1)) is convex with respect to u(k).
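When the Hamiltonian is quadratic in the input, minimizing it over a box constraint reduces to clipping the unconstrained minimizer; the scalar sketch below checks this closed form against a brute-force search. All numbers, and the quadratic stage cost itself, are illustrative assumptions, not from the text.

```python
import numpy as np

# For f(x,u) = a*x + b*u and L(x,u) = q*x^2 + r*u^2, the Hamiltonian (2.5)
# H_k = q*x^2 + r*u^2 + lam*(a*x + b*u) is convex (quadratic) in u, so its
# minimizer over a box U = [-umax, umax] is the unconstrained minimizer
# -b*lam/(2*r) clipped to the box.  All numbers are illustrative assumptions.
a, b, q, r, umax = 0.9, 1.0, 1.0, 0.5, 1.0

def H(x, u, lam):
    return q * x**2 + r * u**2 + lam * (a * x + b * u)

U_grid = np.linspace(-umax, umax, 20001)
rng = np.random.default_rng(0)
ok = True
for _ in range(200):
    x, lam = rng.uniform(-3.0, 3.0, size=2)
    u_closed = np.clip(-b * lam / (2.0 * r), -umax, umax)   # closed form
    u_brute = U_grid[np.argmin(H(x, U_grid, lam))]          # brute force
    ok = ok and abs(u_closed - u_brute) < 1e-3
```

The clipping behaviour is exactly the saturation effect produced by input constraints in the maximum principle.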
2.3 Model predictive control
Model predictive control (MPC), or receding horizon control, is one of the most advanced control approaches and has, in the last decades, become the leading industrial control technology for systems with constraints [21], [53], [18], [58], [30], [51]. MPC is an optimization based strategy, where a model of the plant is used to predict the future evolution of the system, see [53], [51]. This prediction uses the current state of the plant as the initial state and, at each time instant k, the controller computes an optimal control sequence. The first control in the sequence is applied to the plant at time instant k, and at time instant k+1 the optimization procedure is repeated with a new plant measurement. This open-loop optimal feedback mechanism of MPC compensates for the prediction error due to structural mismatch between the model and the real system, as well as for disturbances and measurement noise.
The main advantage which makes MPC industrially desirable is that it can take constraints into account in the control problem. This feature is very important for several reasons:
Often the best performance, which may correspond to the most efficient operation, is obtained when the system is made to operate near the constraints.
The possibility to explicitly express constraints in the problem formulation offers a natural way to state complex control objectives.
2.3.1 Implicit model predictive control
Consider the problem of regulating to the origin the following discrete-time linear time-invariant system
\[
x(k+1) = Ax(k) + Bu(k) \tag{2.6}
\]
where x(k) \in \mathbb{R}^n and u(k) \in \mathbb{R}^m are respectively the state and the input variables, and A \in \mathbb{R}^{n \times n} and B \in \mathbb{R}^{n \times m} are the system matrices. Both the state vector x(k) and the control vector u(k) are subject to polytopic constraints
\[
\begin{cases}
x(k) \in X, & X = \{x : F_x x \le g_x\}\\
u(k) \in U, & U = \{u : F_u u \le g_u\}
\end{cases} \quad \forall k \ge 0 \tag{2.7}
\]
where the matrices F_x \in \mathbb{R}^{n_1 \times n}, F_u \in \mathbb{R}^{m_1 \times m} and the vectors g_x \in \mathbb{R}^{n_1}, g_u \in \mathbb{R}^{m_1} are assumed to be constant with g_x > 0, g_u > 0, such that the origin is contained in the interior of X and U. Here the inequalities are taken element-wise.
It is assumed that the pair (A, B) is stabilizable, i.e. all uncontrollable states have
stable dynamics.
1. So it has been named OLOF (= Open Loop Optimal Feedback) control, after the author of [33], whose name, by chance, is Per Olof.
Provided that the state x(k) is available from the measurements, the finite horizon MPC optimization problem is defined as
\[
V(x(k)) = \min_{\mathbf{u}=[u(0),u(1),\ldots,u(N-1)]}\left\{\sum_{t=1}^{N} x^T(t)Qx(t) + \sum_{t=0}^{N-1} u^T(t)Ru(t)\right\} \tag{2.8}
\]
subject to
\[
\begin{cases}
x(t+1) = Ax(t) + Bu(t), & t = 0,1,\ldots,N-1\\
x(t) \in X, & t = 1,2,\ldots,N\\
u(t) \in U, & t = 0,1,\ldots,N-1\\
x(0) = x(k)
\end{cases}
\]
where
Q \in \mathbb{R}^{n \times n} is a real symmetric positive semi-definite matrix.
R \in \mathbb{R}^{m \times m} is a real symmetric positive definite matrix.
N is a fixed integer greater than 0, called the time horizon or the prediction horizon.
The conditions on Q and R guarantee that the cost function is well-defined. In terms of eigenvalues, the eigenvalues of Q should be non-negative, while those of R should be positive.
It is clear that the first term x^T(t)Qx(t) penalizes the deviation of the state x from the origin, while the second term u^T(t)Ru(t) measures the input deviation. In other words, selecting Q large means that, to keep the cost small, the state x(t) must be small. On the other hand, selecting R large means that the control input u(t) must be small to keep the cost small.
An alternative is a performance measure based on l_1 norms or l_\infty norms
\[
\min_{\mathbf{u}=[u(0),u(1),\ldots,u(N-1)]}\left\{\sum_{t=1}^{N} \|Qx(t)\|_1 + \sum_{t=0}^{N-1} \|Ru(t)\|_1\right\} \tag{2.9}
\]
\[
\min_{\mathbf{u}=[u(0),u(1),\ldots,u(N-1)]}\left\{\sum_{t=1}^{N} \|Qx(t)\|_\infty + \sum_{t=0}^{N-1} \|Ru(t)\|_\infty\right\} \tag{2.10}
\]
Based on the state space model (2.6), the future state variables are computed sequentially using the set of future control parameters
\[
\begin{cases}
x(1) = Ax(0) + Bu(0)\\
x(2) = Ax(1) + Bu(1) = A^2x(0) + ABu(0) + Bu(1)\\
\quad\vdots\\
x(N) = A^Nx(0) + A^{N-1}Bu(0) + A^{N-2}Bu(1) + \ldots + Bu(N-1)
\end{cases} \tag{2.11}
\]
The set of equations (2.11) can be rewritten in a compact matrix form as
\[
\mathbf{x} = A_a x(0) + B_a \mathbf{u} = A_a x(k) + B_a \mathbf{u} \tag{2.12}
\]
with
\[
\mathbf{x} = [x^T(1)\; x^T(2)\; \ldots\; x^T(N)]^T
\]
and
\[
A_a = \begin{bmatrix} A\\ A^2\\ \vdots\\ A^N \end{bmatrix}, \quad
B_a = \begin{bmatrix}
B & 0 & \ldots & 0\\
AB & B & \ddots & \vdots\\
\vdots & \vdots & \ddots & 0\\
A^{N-1}B & A^{N-2}B & \ldots & B
\end{bmatrix}
\]
The MPC optimization problem (2.8) can be expressed as
\[
V(x(k)) = \min_{\mathbf{u}}\left\{\mathbf{x}^T Q_a \mathbf{x} + \mathbf{u}^T R_a \mathbf{u}\right\} \tag{2.13}
\]
where
\[
Q_a = \begin{bmatrix}
Q & 0 & \ldots & 0\\
0 & Q & \ldots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & Q
\end{bmatrix}, \quad
R_a = \begin{bmatrix}
R & 0 & \ldots & 0\\
0 & R & \ldots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & R
\end{bmatrix}
\]
and by substituting (2.12) in (2.13), one gets
\[
V(x(k)) = \min_{\mathbf{u}}\left\{\mathbf{u}^T H \mathbf{u} + 2x^T(k)F\mathbf{u} + x^T(k)Yx(k)\right\} \tag{2.14}
\]
where
\[
H = B_a^T Q_a B_a + R_a, \quad F = A_a^T Q_a B_a \quad \text{and} \quad Y = A_a^T Q_a A_a \tag{2.15}
\]
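As a sketch, the prediction matrices and the condensed cost (2.14)-(2.15) can be built directly with numpy; the code below uses the system and weights that also appear in Example 2.1, and cross-checks the condensed cost against a plain simulation of (2.8).

```python
import numpy as np

# Building A_a, B_a, Q_a, R_a and the condensed matrices (2.15) with numpy,
# using the system and weights that also appear in Example 2.1.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0], [0.7]])
Q, R, N = np.eye(2), np.eye(1), 3
n, m = B.shape

A_a = np.vstack([np.linalg.matrix_power(A, t) for t in range(1, N + 1)])
B_a = np.zeros((N * n, N * m))
for t in range(1, N + 1):                  # block row of x(t)
    for j in range(t):                     # block column of u(j)
        B_a[(t - 1) * n:t * n, j * m:(j + 1) * m] = \
            np.linalg.matrix_power(A, t - 1 - j) @ B

Q_a, R_a = np.kron(np.eye(N), Q), np.kron(np.eye(N), R)
H = B_a.T @ Q_a @ B_a + R_a                # (2.15)
F = A_a.T @ Q_a @ B_a
Y = A_a.T @ Q_a @ A_a

# sanity check: condensed cost (2.14) equals the simulated cost (2.8)
x0, u = np.array([1.0, -0.5]), np.array([0.3, -0.2, 0.1])
cost_condensed = u @ H @ u + 2.0 * x0 @ F @ u + x0 @ Y @ x0
x, cost_sim = x0, 0.0
for t in range(N):
    cost_sim += u[t] ** 2                  # u'(t) R u(t), R = 1
    x = A @ x + B[:, 0] * u[t]
    cost_sim += x @ x                      # x'(t) Q x(t), Q = I
assert np.isclose(cost_condensed, cost_sim)
```

The double loop fills B_a block by block exactly as in (2.11); the final assertion verifies that the algebraic condensation did not change the cost.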
Consider now the constraints on state and on input along the horizon. From (2.7) it can be shown that
\[
F_x^a \mathbf{x} \le g_x^a, \quad F_u^a \mathbf{u} \le g_u^a \tag{2.16}
\]
where
\[
F_x^a = \begin{bmatrix}
F_x & 0 & \ldots & 0\\
0 & F_x & \ldots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & F_x
\end{bmatrix}, \quad
g_x^a = \begin{bmatrix} g_x\\ g_x\\ \vdots\\ g_x \end{bmatrix}, \quad
F_u^a = \begin{bmatrix}
F_u & 0 & \ldots & 0\\
0 & F_u & \ldots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & F_u
\end{bmatrix}, \quad
g_u^a = \begin{bmatrix} g_u\\ g_u\\ \vdots\\ g_u \end{bmatrix}
\]
Using (2.12), the state constraints along the horizon can be expressed as
\[
F_x^a (A_a x(k) + B_a \mathbf{u}) \le g_x^a
\]
or
\[
F_x^a B_a \mathbf{u} \le -F_x^a A_a x(k) + g_x^a \tag{2.17}
\]
Combining (2.16), (2.17), one obtains
\[
G\mathbf{u} \le Ex(k) + W \tag{2.18}
\]
where
\[
G = \begin{bmatrix} F_u^a\\ F_x^a B_a \end{bmatrix}, \quad
E = \begin{bmatrix} 0\\ -F_x^a A_a \end{bmatrix}, \quad
W = \begin{bmatrix} g_u^a\\ g_x^a \end{bmatrix}
\]
Based on (2.13) and (2.18), the MPC quadratic program formulation can be defined as
\[
V_1(x(k)) = \min_{\mathbf{u}}\left\{\mathbf{u}^T H \mathbf{u} + 2x^T(k)F\mathbf{u}\right\} \tag{2.19}
\]
subject to
\[
G\mathbf{u} \le Ex(k) + W
\]
where the term x^T(k)Yx(k) is removed since it does not influence the optimal argument. The value of the cost function at the optimum is simply obtained from (2.19) by
\[
V(x(k)) = V_1(x(k)) + x^T(k)Yx(k)
\]
A simple on-line algorithm for MPC is
Algorithm 3.1. Model predictive control - Implicit approach
1. Measure or estimate the current state of the system x(k).
2. Compute the control signal sequence u by solving (2.19).
3. Apply the first element of the control sequence u as input to the system (2.6).
4. Wait for the next time instant k := k+1.
5. Go to step 1 and repeat.
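The receding-horizon loop above can be sketched as follows; the solver choice (scipy's SLSQP), the system matrices, weights, bounds and horizon are illustrative assumptions of this sketch, not prescribed by the text.

```python
import numpy as np
from scipy.optimize import minimize

# Receding-horizon loop (steps 1-5 above) sketched with a generic NLP solver;
# the system, weights, bounds and horizon are illustrative assumptions.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([1.0, 0.7])
N = 3

def predict(x0, u):                        # model-based prediction of x(1..N)
    xs, x = [], x0
    for uk in u:
        x = A @ x + B * uk
        xs.append(x)
    return np.array(xs)

def cost(u, x0):                           # finite-horizon cost (2.8), Q=I, R=1
    xs = predict(x0, u)
    return float(np.sum(xs**2) + np.sum(u**2))

def state_con(u, x0):                      # x(t) in X along the horizon
    xs = predict(x0, u)
    return np.concatenate([2.0 - xs[:, 0], 2.0 + xs[:, 0],
                           5.0 - xs[:, 1], 5.0 + xs[:, 1]])

x = np.array([2.0, 1.0])
for k in range(20):
    res = minimize(cost, np.zeros(N), args=(x,), method="SLSQP",
                   bounds=[(-1.0, 1.0)] * N,
                   constraints=[{"type": "ineq", "fun": state_con, "args": (x,)}])
    x = A @ x + B * res.x[0]               # step 3: apply only the first control
```

At each instant only u^*(0) is applied and the problem is re-solved from the new measurement, which is exactly the open-loop optimal feedback mechanism described above.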
Example 2.1. Consider the following discrete time linear time invariant system
\[
x(k+1) = \begin{bmatrix} 1 & 1\\ 0 & 1 \end{bmatrix} x(k) + \begin{bmatrix} 1\\ 0.7 \end{bmatrix} u(k) \tag{2.20}
\]
and the MPC problem with weighting matrices Q = I and R = 1 and the prediction horizon N = 3.
The constraints are
\[
-2 \le x_1 \le 2, \quad -5 \le x_2 \le 5, \quad -1 \le u \le 1
\]
Based on equations (2.15) and (2.18), the MPC problem can be described as a QP problem
\[
\min_{\mathbf{u}=[u(0),u(1),u(2)]}\left\{\mathbf{u}^T H \mathbf{u} + 2x^T(k)F\mathbf{u}\right\}
\]
with
\[
H = \begin{bmatrix}
12.1200 & 6.7600 & 2.8900\\
6.7600 & 5.8700 & 2.1900\\
2.8900 & 2.1900 & 2.4900
\end{bmatrix}, \quad
F = \begin{bmatrix}
5.1000 & 2.7000 & 1.0000\\
13.7000 & 8.5000 & 3.7000
\end{bmatrix}
\]
and subject to the following constraints
\[
G\mathbf{u} \le W + Ex(k)
\]
where
\[
G = \begin{bmatrix}
1 & 0 & 0\\ -1 & 0 & 0\\ 0 & 1 & 0\\ 0 & -1 & 0\\ 0 & 0 & 1\\ 0 & 0 & -1\\
1 & 0 & 0\\ 0.7 & 0 & 0\\ -1 & 0 & 0\\ -0.7 & 0 & 0\\
1.7 & 1 & 0\\ 0.7 & 0.7 & 0\\ -1.7 & -1 & 0\\ -0.7 & -0.7 & 0\\
2.4 & 1.7 & 1\\ 0.7 & 0.7 & 0.7\\ -2.4 & -1.7 & -1\\ -0.7 & -0.7 & -0.7
\end{bmatrix}, \quad
E = \begin{bmatrix}
0 & 0\\ 0 & 0\\ 0 & 0\\ 0 & 0\\ 0 & 0\\ 0 & 0\\
-1 & -1\\ 0 & -1\\ 1 & 1\\ 0 & 1\\
-1 & -2\\ 0 & -1\\ 1 & 2\\ 0 & 1\\
-1 & -3\\ 0 & -1\\ 1 & 3\\ 0 & 1
\end{bmatrix}, \quad
W = \begin{bmatrix}
1\\ 1\\ 1\\ 1\\ 1\\ 1\\ 2\\ 5\\ 2\\ 5\\ 2\\ 5\\ 2\\ 5\\ 2\\ 5\\ 2\\ 5
\end{bmatrix}
\]
For the initial condition x(0) = [2\; 1]^T and by using the implicit MPC method, Figure 2.1 shows the state trajectories as a function of time.
Fig. 2.1 State trajectories as a function of time.
Figure 2.2 presents the input trajectory as a function of time.
Fig. 2.2 Input trajectory as a function of time.
2.3.2 Recursive feasibility and stability
Recursive feasibility of the optimization problem and stability of the resulting closed-loop system are two important aspects when designing an MPC controller.
Recursive feasibility of the optimization problem (2.19) means that if the problem (2.19) is feasible at time instant k, it will also be feasible at time instant k+1. In other words, there exists an admissible control value that holds the system within the state constraints. Feasibility problems can arise due to model errors, disturbances or the choice of the cost function.
Stability analysis necessitates the use of Lyapunov theory [42], since the presence of the constraints makes the closed-loop system nonlinear. In addition, it is well known that unstable input-constrained systems cannot be globally stabilized [62], [52], [60]. Another problem is that the control law is generated by the solution of an optimization problem and generally there does not exist any simple closed-form expression for the solution, although it can be shown that the solution is a piecewise affine state feedback law [11].
Recursive feasibility and stability can be assured by adding a terminal cost function to the objective function in (2.8) and by including the final state of the planning horizon in a terminal positively invariant set. Let the matrix P \in \mathbb{R}^{n \times n} be the unique solution of the following discrete-time algebraic Riccati equation
\[
P = A^T P A - A^T P B (B^T P B + R)^{-1} B^T P A + Q \tag{2.21}
\]
and the matrix gain K \in \mathbb{R}^{m \times n} is defined as
\[
K = -(B^T P B + R)^{-1} B^T P A \tag{2.22}
\]
It is well known [3], [46], [47], [48] that the matrix gain K is a solution of the optimization problem (2.8) when the time horizon N = \infty and there are no constraints on the state vector and on the input vector. In this case the cost function is defined as
\[
V(x(0)) = \sum_{k=0}^{\infty}\left(x^T(k)Qx(k) + u^T(k)Ru(k)\right) = \sum_{k=0}^{\infty} x^T(k)\left(Q + K^T R K\right)x(k) = x^T(0)Px(0)
\]
Once the stabilizing feedback gain u(k) = Kx(k) is defined, the terminal set \Omega can be computed as a maximal invariant set associated with the control law u(k) = Kx(k) for system (2.6) and with respect to the constraints (2.7). Generally, the terminal set is chosen to be in ellipsoidal or polyhedral form, see Section 1.2.2 and Section 1.2.3.
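The Riccati equation (2.21) can be solved by fixed-point iteration; the sketch below does this for the system of Example 2.1 with Q = I and R = 0.1 and recovers the gain (2.22). The iterative scheme itself is just one convenient choice; the text only requires the solution P.

```python
import numpy as np

# Fixed-point (Riccati difference equation) iteration on (2.21) and the
# gain (2.22), for the system of Example 2.1 with Q = I, R = 0.1.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0], [0.7]])
Q, R = np.eye(2), np.array([[0.1]])

P = np.eye(2)
for _ in range(1000):
    S = np.linalg.inv(B.T @ P @ B + R)
    P_new = A.T @ P @ A - A.T @ P @ B @ S @ B.T @ P @ A + Q
    if np.allclose(P_new, P, atol=1e-12):
        break
    P = P_new

K = -np.linalg.inv(B.T @ P @ B + R) @ B.T @ P @ A    # (2.22)
```

The closed-loop matrix A + BK is then Schur stable, and x^T P x is the unconstrained infinite-horizon cost used as the terminal cost in (2.23).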
Consider now the following MPC optimization problem
\[
\min_{\mathbf{u}=[u(0),u(1),\ldots,u(N-1)]}\left\{x^T(N)Px(N) + \sum_{t=0}^{N-1}\left(x^T(t)Qx(t) + u^T(t)Ru(t)\right)\right\} \tag{2.23}
\]
subject to
\[
\begin{cases}
x(t+1) = Ax(t) + Bu(t), & t = 0,1,\ldots,N-1\\
x(t) \in X, & t = 1,2,\ldots,N\\
u(t) \in U, & t = 0,1,\ldots,N-1\\
x(N) \in \Omega\\
x(0) = x(k)
\end{cases}
\]
then the following theorem holds [53]
Theorem 2.1. [53] Assuming feasibility at the initial state, the MPC controller
(2.23) guarantees recursive feasibility and asymptotic stability.
Proof. See [53].
The MPC problem considered here uses both a terminal cost function and a ter-
minal set constraint and is called the dual-mode MPC. This MPC scheme is the most
attractive version in the MPC literature. In general, it offers better performance when
compared with other MPC versions and allows a wider range of control problems to
be handled.
2.3.3 Explicit model predictive control - Parameterized vertices
Note that implicit model predictive control requires running on-line optimization algorithms to solve a quadratic programming (QP) problem associated with the objective function (2.8), or to solve a linear programming (LP) problem with the objective function (2.9), (2.10). Although computational speed and optimization algorithms are continuously improving, solving a QP or LP problem can be computationally costly, especially when the prediction horizon is large, and this has traditionally limited MPC to applications with a relatively low complexity/sampling interval ratio.
Indeed, the state vector can be interpreted as a vector of parameters in the optimization problem (2.23). The exact optimal solution can be expressed as a piecewise affine function of the state over a polyhedral partition of the state space, and the MPC control computation can be moved off-line, see [11], [18], [61], [55]. The control action is then computed on-line by exploiting lookup tables and search trees.
Several solutions have been proposed in the literature for constructing a polyhedral partition of the state space [11], [61], [55]. In [11], [10] some iterative techniques use a QP or LP to find feasible points and then split the parameters space by inverting one by one the constraint hyper-planes. As an alternative, in [61] the authors construct the unconstrained critical region and then enumerate the others based on the combinations of active constraints. When the cost function is quadratic, the uniqueness of the optimal solution is guaranteed and the methods proposed in [11], [10], [61] work very well.
It is worth noticing that when l_1 or l_\infty norms are used as a performance measure, the cost function is only positive semi-definite, which indicates that the uniqueness of the optimal solution is not guaranteed. A control law has a practical advantage if the control action presents no jumps on the frontiers of the polyhedral partition. When the optimal solution is not unique, the methods in [11], [10], [61] allow discontinuities as long as, during the exploration of the parameters space, the optimal basis is chosen arbitrarily.
V(x(k)) = min
z
c
T
z (2.24)
subject to
G
l
z E
l
x(k) +W
l
with
\[
z = [\varepsilon^T(1)\; \varepsilon^T(2)\; \ldots\; \varepsilon^T(N_\varepsilon)\; u^T(0)\; u^T(1)\; \ldots\; u^T(N-1)]^T
\]
where \varepsilon(i), i = 1, 2, \ldots, N_\varepsilon are slack variables and N_\varepsilon depends on the norm used and on the time horizon N. Details of how to compute the vector c, the vector W_l and the matrices G_l, E_l are omitted here, as they are by now well known in the literature [10].
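As a sketch of how such norm costs become linear programs, the one-step problem below rewrites an l_\infty objective with slack variables and solves it with scipy's linprog; the system numbers and the one-step simplification are assumptions of this illustration.

```python
import numpy as np
from scipy.optimize import linprog

# One-step sketch of the slack-variable construction behind (2.24):
# min_u ||A x0 + B u||_inf + |R u|, |u| <= 1, as an LP in z = [e_x, e_u, u].
# The system numbers and the one-step horizon are illustrative assumptions.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([1.0, 0.7])
x0 = np.array([1.0, -0.4])
R = 0.1

c = np.array([1.0, 1.0, 0.0])            # minimize e_x + e_u
A_ub, b_ub = [], []
for s in (1.0, -1.0):
    for i in range(2):                   # s*(A x0 + B u)_i - e_x <= 0
        A_ub.append([-1.0, 0.0, s * B[i]])
        b_ub.append(-s * (A @ x0)[i])
    A_ub.append([0.0, -1.0, s * R])      # s*R*u - e_u <= 0
    b_ub.append(0.0)
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0.0, None), (0.0, None), (-1.0, 1.0)])

# brute-force check over a fine input grid
U = np.linspace(-1.0, 1.0, 20001)
vals = np.max(np.abs((A @ x0)[:, None] + B[:, None] * U[None, :]), axis=0) \
       + R * np.abs(U)
```

At the optimum the slack e_x equals the l_\infty norm of the state term and e_u the input term, which is how the linear objective c^T z reproduces the norm cost.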
The feasible domain for the LP problem (2.24) is defined by a finite number of inequalities with a right hand side linearly dependent on the vector of parameters x(k), describing in fact a parameterized polytope [49]
\[
P(x(k)) = \{z : G_l z \le E_l x(k) + W_l\} \tag{2.25}
\]
For simplicity, it is assumed that for all x(k) \in X, the polyhedral set P(x(k)) is bounded. With this assumption, P(x(k)) can be expressed in a dual (generator based) form as
\[
P(x(k)) = \mathrm{Conv}\{v_i(x(k)),\; i = 1, 2, \ldots, n_v\} \tag{2.26}
\]
where the v_i are the parameterized vertices. Each parameterized vertex in (2.26) is characterized by a set of saturated inequalities. Once this set of active constraints is identified, one can write the linear dependence of the parameterized vertex on the vector of parameters
\[
v_i(x(k)) = G_{li}^{-1} E_{li} x(k) + G_{li}^{-1} W_{li} \tag{2.27}
\]
where G_{li}, E_{li}, W_{li} correspond to the subset of saturated inequalities for the i-th parameterized vertex.
As a first conclusion, the construction of the dual description (2.25), (2.26) requires the determination of the set of parameterized vertices. Efficient algorithms exist in this direction [49], the main idea being the analogy with a non-parameterized polytope in a higher dimension.
When the vector of parameters x(k) varies inside the parameters space, the vertices of the feasible domain (2.25) may split, slope or merge. This means that each parameterized vertex v_i is defined only over a specific region in the parameters space. These regions VD_i are called validity domains and can be constructed using simple projection mechanisms [49].
Once the entire family of parameterized vertices and their validity domains is available, the optimal solution can be constructed. It is clear that the space of feasible parameters can be partitioned into non-degenerate polyhedral regions R_k \subset \mathbb{R}^n such that the minimum
\[
\min\left\{c^T v_i(x(k)) \,\middle|\, v_i(x(k)) \text{ vertex of } P(x(k)) \text{ valid over } R_k\right\} \tag{2.28}
\]
is attained by a constant subset of vertices of P(x(k)), denoted v_i^*(x(k)). The complete solution over R_k is
\[
z_k^*(x(k)) = \mathrm{Conv}\{v_{1k}^*(x(k)), v_{2k}^*(x(k)), \ldots, v_{sk}^*(x(k))\} \tag{2.29}
\]
The following theorem holds regarding the structure of the polyhedral partition of the parameters space [56].
Theorem 2.2. [56] Consider the multi-parametric program (2.24) and let v_i(x(k)), i = 1, 2, \ldots, n_v be the parameterized vertices of the feasible domain (2.25), (2.26) with their corresponding validity domains VD_i. If a parameterized vertex takes part in the description of the optimal solution for a region R_k, then it will be part of the family of optimal solutions over its entire validity domain VD_i.
Proof. See [56].
Theorem 2.2 states that if a parameterized vertex is selected as an optimal candi-
date, then it covers all its validity domain.
It is worth noticing that the complete optimal solution (2.29) takes into account the eventual non-uniqueness of the optimum, and it defines the entire family of optimal solutions using the parameterized vertices and their validity domains.
Once the entire family of optimal solutions is available, the continuity of the control law can be guaranteed as follows. First, suppose the optimal solution is unique. In this case there is no decision to be made, the explicit solution being the collection of the parameterized vertices and their validity domains; the continuity is intrinsic.
Conversely, the family of optimal solutions can be enriched in the presence of several optimal parameterized vertices
\[
\begin{cases}
z_k^*(x(k)) = \alpha_{1k} v_{1k}^* + \alpha_{2k} v_{2k}^* + \ldots + \alpha_{sk} v_{sk}^*\\
\alpha_{ik} \ge 0, \quad i = 1, 2, \ldots, s\\
\alpha_{1k} + \alpha_{2k} + \ldots + \alpha_{sk} = 1
\end{cases} \tag{2.30}
\]
passing to an infinite number of candidates (any function included in the convex combination of vertices being optimal). As mentioned previously, the vertices of the feasible domain split, slope and merge. The changes occur with a preservation of continuity. Hence the continuity of the control law is guaranteed by the continuity of the parameterized vertices. The interested reader is referred to [56] for further discussions on the related concepts and constructive procedures.
Example 2.2. To illustrate the parameterized vertices concept, consider the following feasible domain for the MPC optimization problem
\[
P(x(k)) = P_1 \cap P_2(x(k)) \tag{2.31}
\]
where P_1 is a fixed polytope
\[
P_1 = \left\{z \in \mathbb{R}^2 : \begin{bmatrix} 0 & 1\\ 1 & 0\\ 0 & -1\\ -1 & 0 \end{bmatrix} z \le \begin{bmatrix} 1\\ 1\\ 0\\ 0 \end{bmatrix}\right\} \tag{2.32}
\]
and P_2(x(k)) is a parameterized polyhedral set
\[
P_2(x(k)) = \left\{z \in \mathbb{R}^2 : \begin{bmatrix} -1 & 0\\ 0 & -1 \end{bmatrix} z \le \begin{bmatrix} -1\\ -1 \end{bmatrix} x(k) + \begin{bmatrix} 0.5\\ 0.5 \end{bmatrix}\right\} \tag{2.33}
\]
Note that P_2(x(k)) is an unbounded set. From equation (2.33), it is clear that:
If x(k) \le 0.5, then x(k) - 0.5 \le 0. It follows that P_1 \subset P_2(x(k)). The polytope P(x(k)) = P_1 can be presented in half-space representation as
\[
P(x(k)) = \left\{z \in \mathbb{R}^2 : \begin{bmatrix} 0 & 1\\ 1 & 0\\ 0 & -1\\ -1 & 0 \end{bmatrix} z \le \begin{bmatrix} 1\\ 1\\ 0\\ 0 \end{bmatrix}\right\}
\]
and in vertex representation as
\[
P(x(k)) = \mathrm{Conv}\{v_1, v_2, v_3, v_4\}
\]
where
\[
v_1 = \begin{bmatrix} 0\\ 0 \end{bmatrix}, \quad
v_2 = \begin{bmatrix} 1\\ 0 \end{bmatrix}, \quad
v_3 = \begin{bmatrix} 0\\ 1 \end{bmatrix}, \quad
v_4 = \begin{bmatrix} 1\\ 1 \end{bmatrix}
\]
If 0.5 \le x(k) \le 1.5, then 0 \le x(k) - 0.5 \le 1. It follows that P_1 \cap P_2(x(k)) \neq \emptyset. Note that for the polytope P(x(k)) the half-spaces -z_1 \le 0 and -z_2 \le 0 are redundant. The polytope P(x(k)) can be presented in half-space representation as
\[
P(x(k)) = \left\{z \in \mathbb{R}^2 : \begin{bmatrix} 0 & 1\\ 1 & 0\\ -1 & 0\\ 0 & -1 \end{bmatrix} z \le \begin{bmatrix} 0\\ 0\\ -1\\ -1 \end{bmatrix} x(k) + \begin{bmatrix} 1\\ 1\\ 0.5\\ 0.5 \end{bmatrix}\right\}
\]
and in vertex representation as
\[
P(x(k)) = \mathrm{Conv}\{v_4, v_5, v_6, v_7\}
\]
with
\[
v_5 = \begin{bmatrix} 1\\ x(k) - 0.5 \end{bmatrix}, \quad
v_6 = \begin{bmatrix} x(k) - 0.5\\ 1 \end{bmatrix}, \quad
v_7 = \begin{bmatrix} x(k) - 0.5\\ x(k) - 0.5 \end{bmatrix}
\]
If 1.5 < x(k), then x(k) - 0.5 > 1. It follows that P_1 \cap P_2(x(k)) = \emptyset. Hence P(x(k)) = \emptyset.
In conclusion, the parameterized vertices of P(x(k)) are
\[
v_1 = \begin{bmatrix} 0\\ 0 \end{bmatrix}, \quad
v_2 = \begin{bmatrix} 1\\ 0 \end{bmatrix}, \quad
v_3 = \begin{bmatrix} 0\\ 1 \end{bmatrix}, \quad
v_4 = \begin{bmatrix} 1\\ 1 \end{bmatrix}, \quad
v_5 = \begin{bmatrix} 1\\ x(k) - 0.5 \end{bmatrix}, \quad
v_6 = \begin{bmatrix} x(k) - 0.5\\ 1 \end{bmatrix}, \quad
v_7 = \begin{bmatrix} x(k) - 0.5\\ x(k) - 0.5 \end{bmatrix}
\]
and the validity domains are
\[
VD_1 = (-\infty, 0.5], \quad VD_2 = [0.5, 1.5], \quad VD_3 = (1.5, +\infty)
\]
Table 2.1 presents the validity domains and their corresponding parameterized vertices.

Table 2.1 Validity domains and their parameterized vertices
VD_1: v_1, v_2, v_3, v_4
VD_2: v_4, v_5, v_6, v_7
VD_3: \emptyset
Figure 2.3 shows the polyhedral sets P_1 and P_2(x(k)) with x(k) = 0.3, x(k) = 0.9 and x(k) = 1.5.
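A minimal numerical check of Example 2.2 (the intersection is an axis-aligned box, so its vertices can be listed directly):

```python
import numpy as np

# Numerical check of Example 2.2: P1 is the unit box and P2(x) the quadrant
# {z : z1 >= x - 0.5, z2 >= x - 0.5}, so P1 ∩ P2(x) is the box
# [max(0, x - 0.5), 1]^2, empty as soon as x - 0.5 > 1.
def vertices(x):
    lo = max(0.0, x - 0.5)
    if lo > 1.0:
        return []                              # validity domain VD3
    return [(lo, lo), (1.0, lo), (lo, 1.0), (1.0, 1.0)]
```

For x(k) = 0.3 this returns the four vertices v_1, v_2, v_3, v_4 of P_1; for x(k) = 0.9 it returns the vertices v_7, v_5, v_6, v_4 of Table 2.1; and for x(k) = 1.6 it returns the empty set.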
Example 2.3. Consider the discrete time linear time invariant system of Example 2.1 with the same constraints on the state and input variables. Here we will use the MPC formulation which guarantees recursive feasibility and stability.
Fig. 2.3 Polyhedral sets P_1 and P_2(x(k)) with x(k) = 0.3, x(k) = 0.9 and x(k) = 1.5 for Example 2.2. For x(k) \le 0.5, P_1 \cap P_2(x(k)) = P_1. For 0.5 \le x(k) \le 1.5, P_1 \cap P_2(x(k)) \neq \emptyset. For x(k) > 1.5, P_1 \cap P_2(x(k)) = \emptyset.
By solving equations (2.21) and (2.22) with weighting matrices
\[
Q = \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}, \quad R = 0.1
\]
one obtains
\[
P = \begin{bmatrix} 1.5076 & -0.1173\\ -0.1173 & 1.2014 \end{bmatrix}, \quad K = [-0.7015\; -1.0576]
\]
The terminal set \Omega is computed as the maximal polyhedral invariant set of Section 1.2.3
\[
\Omega = \left\{x \in \mathbb{R}^2 : \begin{bmatrix}
0.7979 & -0.6029\\
-0.7979 & 0.6029\\
1 & 0\\
-1 & 0\\
0.5528 & 0.8333\\
-0.5528 & -0.8333
\end{bmatrix} x \le \begin{bmatrix}
2.5740\\ 2.5740\\ 2\\ 2\\ 0.7879\\ 0.7879
\end{bmatrix}\right\}
\]
Figure 2.4 shows the state space partition obtained by using the parameterized vertices framework as a method to construct an explicit solution to the MPC problem (2.23) with prediction horizon N = 2.

Fig. 2.4 State space partition. Number of regions N_r = 7.
The MPC law is piecewise affine over the seven regions of Figure 2.4:
\[
u(k) = \begin{cases}
-0.7015 x_1(k) - 1.0576 x_2(k) & \text{if } x(k) \in \text{Region 1}\\
-0.5564 x_1(k) - 1.1673 x_2(k) + 0.4683 & \text{if } x(k) \in \text{Region 4}\\
-0.5564 x_1(k) - 1.1673 x_2(k) - 0.4683 & \text{if } x(k) \in \text{Region 7}\\
\pm 1 & \text{if } x(k) \in \text{Regions 2, 5 and 3, 6}
\end{cases}
\]
Region 1 is the terminal set \Omega, on which the unconstrained gain K is optimal, while on Regions 2, 5 and 3, 6 the control saturates at the input bounds. Each region is a polyhedron whose facets have normal vectors drawn from the directions [0.7979\; -0.6029], [0.5528\; 0.8333], [0.4303\; 0.9027], [0.3704\; 0.9289], [0.2742\; 0.9617], [0.7071\; 0.7071] and [1\; 0], and Regions 5, 6 and 7 are the reflections through the origin of Regions 2, 3 and 4, respectively.
For the initial condition x(0) = [-2\; 2.33]^T, Figure 2.5 shows the state trajectories as a function of time.
Figure 2.6 presents the input trajectory as a function of time.
Fig. 2.5 State trajectories as a function of time.
Fig. 2.6 Input trajectory as a function of time.
2.4 Vertex control
The vertex control framework was first proposed by Gutman et al. in [34]. It gives a necessary and sufficient condition for stabilizing a discrete time linear time invariant system with polyhedral state and control constraints. The condition is that at each vertex of the controlled invariant set C_N (see Section 1.2.3) there exists an admissible control action that brings the state to the interior of the set C_N. This condition was then extended to the uncertain plant case by Blanchini in [13]. A stabilizing controller is given by the convex combination of vertex controls in each sector, with a Lyapunov function given by shrunken images of the frontier of the set C_N [34], [13].
To begin, let us consider now the system of the form
x(k +1) = A(k)x(k) +B(k)u(k) +D(k)w(k) (2.34)
where x(k) \in \mathbb{R}^n, u(k) \in \mathbb{R}^m and w(k) \in \mathbb{R}^d are, respectively, the state, input and disturbance vectors. The matrices A(k) \in \mathbb{R}^{n \times n}, B(k) \in \mathbb{R}^{n \times m} and D(k) \in \mathbb{R}^{n \times d} satisfy
\[
A(k) = \sum_{i=1}^{q}\alpha_i(k)A_i, \quad
B(k) = \sum_{i=1}^{q}\alpha_i(k)B_i, \quad
D(k) = \sum_{i=1}^{q}\alpha_i(k)D_i
\]
with
\[
\sum_{i=1}^{q}\alpha_i(k) = 1, \quad \alpha_i(k) \ge 0
\]
where the matrices A_i, B_i and D_i are given.
The state variables, the control variables and the disturbance are subject to the following polytopic constraints
\[
\begin{cases}
x(k) \in X, & X = \{x \in \mathbb{R}^n : F_x x \le g_x\}\\
u(k) \in U, & U = \{u \in \mathbb{R}^m : F_u u \le g_u\}\\
w(k) \in W, & W = \{w \in \mathbb{R}^d : F_w w \le g_w\}
\end{cases} \tag{2.35}
\]
where the matrices F_x, F_u, F_w and the vectors g_x, g_u and g_w are assumed to be constant with g_x > 0, g_u > 0, g_w > 0, such that the origin is contained in the interior of X, U and W.
Using the results in Section 1.2.3, it is assumed that the robust controlled invariant set C_N with some fixed integer N > 0 is determined in the form of a polytope, i.e.
\[
C_N = \{x \in \mathbb{R}^n : F_N x \le g_N\} \tag{2.36}
\]
Any state x(k) \in C_N can be decomposed as follows
\[
x = s x_s + (1-s)x_0 \tag{2.37}
\]
where 0 \le s \le 1, x_s \in C_N and x_0 is the origin. In other words, the state x is expressed as a convex combination of the origin and one other point x_s \in C_N.
Consider the following optimization problem
\[
s^* = \min_{s, x_s} s \tag{2.38}
\]
subject to
\[
\begin{cases}
s x_s = x\\
F_N x_s \le g_N\\
0 \le s \le 1
\end{cases}
\]
The following theorem holds.
Theorem 2.3. For any state x \in C_N with x not the origin, the optimal solution of the problem (2.38) is reached if and only if x is written as a convex combination of the origin and one point belonging to the frontier of the set C_N.
Proof. If the optimal solution x_s is strictly inside the set C_N, then let x_s^* be the intersection between the frontier of C_N and the line connecting x and x_s. One obtains
\[
x = \tilde{s} x_s^* + (1-\tilde{s})x_0 = \tilde{s} x_s^*
\]
with \tilde{s} < s. Hence for the optimal solution x_s^*, it holds that x_s^* \in \mathrm{Fr}(C_N).
Fig. 2.7 Graphical illustration of the proof of Theorem 2.3.
Remark 2.1. The optimal solution s^* of the problem (2.38) can be seen as a measure of the distance from the state x to the origin.
Remark 2.2. The optimization problem (2.38) can be transformed into an LP problem as
\[
s^* = \min_{s} s \tag{2.39}
\]
subject to
\[
\begin{cases}
F_N x \le s g_N\\
0 \le s \le 1
\end{cases}
\]
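For x \in C_N the LP (2.39) has a simple closed form (the Minkowski gauge of C_N at x), which the sketch below evaluates on a hypothetical box; the set itself is an assumption of this illustration, not one computed in the text.

```python
import numpy as np

# The LP (2.39) admits the closed form s* = max_j (F_N x)_j / (g_N)_j for
# x in C_N (the Minkowski gauge of C_N at x).  The set below is a
# hypothetical box, not one from the text.
F_N = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
g_N = np.array([2.0, 2.0, 3.0, 3.0])

def s_star(x):
    return max(0.0, float(np.max(F_N @ x / g_N)))
```

s^*(x) equals 1 exactly on the frontier of C_N and decreases linearly toward 0 at the origin, which is the distance-like interpretation of Remark 2.1.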
Based on Theorem 2.3, it is clear that the explicit solution of the problem (2.38) is given over a set of pyramids P_C^{(j)}, each formed by one facet of C_N as a base and the origin as a common vertex. These pyramids P_C^{(j)} can be decomposed further into a sequence of simplices C_N^{(j)}, each formed by n vertices x_1^{(j)}, x_2^{(j)}, \ldots, x_n^{(j)} of the base of P_C^{(j)} and the origin, having the following properties:
C_N^{(j)} has nonempty interior.
\mathrm{Int}(C_N^{(j)}) \cap \mathrm{Int}(C_N^{(l)}) = \emptyset for j \neq l.
\bigcup_j C_N^{(j)} = C_N.
Fig. 2.8 Graphical illustration of the simplex decomposition.
Let
\[
U^{(j)} = [u_1^{(j)}\; u_2^{(j)}\; \ldots\; u_n^{(j)}]
\]
be the m \times n matrix defined by chosen admissible control values satisfying (2.34) at the vertices x_1^{(j)}, x_2^{(j)}, \ldots, x_n^{(j)}.
Remark 2.3. Maximizing the control action at a vertex v \in \mathbb{R}^n of the controlled invariant set C_N can be achieved by solving the following optimization problem
\[
J = \max_{u} \|u\|_p \tag{2.40}
\]
subject to
\[
\begin{cases}
F_N(A_i v + B_i u) \le g_N - \max\limits_{w \in W} F_N D_i w, & i = 1, 2, \ldots, q\\
F_u u \le g_u
\end{cases}
\]
where \|u\|_p is the p-norm of the vector u. Since the set C_N is robust controlled invariant, problem (2.40) is always feasible.
For all x(k) \in C_N, there exists an index j such that x(k) \in C_N^{(j)} and hence x_s^*(k) \in C_N^{(j)}. Therefore x_s^*(k) can be written as a convex combination of x_1^{(j)}, x_2^{(j)}, \ldots, x_n^{(j)}, i.e.
3. By admissible control values we understand any control actions that bring the state to the interior of the feasible set in a finite number of steps, see [34].
\[
x_s^*(k) = \beta_1 x_1^{(j)} + \beta_2 x_2^{(j)} + \ldots + \beta_n x_n^{(j)} \tag{2.41}
\]
with
\[
\beta_i \ge 0, \quad i = 1, 2, \ldots, n, \qquad \sum_{i=1}^{n}\beta_i = 1
\]
By substituting x(k) = s^*(k)x_s^*(k) in equation (2.41), one obtains
\[
x(k) = s^*(k)\left\{\beta_1 x_1^{(j)} + \beta_2 x_2^{(j)} + \ldots + \beta_n x_n^{(j)}\right\}
\]
By denoting
\[
\gamma_i = s^*(k)\beta_i, \quad i = 1, 2, \ldots, n
\]
one gets
\[
x(k) = \gamma_1 x_1^{(j)} + \gamma_2 x_2^{(j)} + \ldots + \gamma_n x_n^{(j)} \tag{2.42}
\]
with
\[
\gamma_i \ge 0, \quad i = 1, 2, \ldots, n, \qquad \sum_{i=1}^{n}\gamma_i = s^*(k)
\]
Remark 2.4. Let x_1, x_2, \ldots, x_{n_c} be the vertices of the polytope C_N and n_c be the number of vertices. It is clear that the optimization problem (2.39) is equivalent to the following LP problem
\[
\min_{\gamma}\left\{\gamma_1 + \gamma_2 + \ldots + \gamma_{n_c}\right\} \tag{2.43}
\]
subject to
\[
\begin{cases}
\gamma_1 x_1 + \gamma_2 x_2 + \ldots + \gamma_{n_c} x_{n_c} = x(k)\\
\gamma_i \ge 0, \quad i = 1, 2, \ldots, n_c\\
\sum_{i=1}^{n_c}\gamma_i \le 1
\end{cases}
\]
Equation (2.42) can be rewritten in a compact form as
\[
x(k) = X^{(j)}\gamma
\]
where
\[
X^{(j)} = [x_1^{(j)}\; x_2^{(j)}\; \ldots\; x_n^{(j)}], \quad \gamma = [\gamma_1\; \gamma_2\; \ldots\; \gamma_n]^T
\]
Since C_N^{(j)} has nonempty interior, X^{(j)} is invertible. It follows that
\[
\gamma = \left(X^{(j)}\right)^{-1} x(k) \tag{2.44}
\]
Consider the following control law
\[
u(k) = \gamma_1 u_1^{(j)} + \gamma_2 u_2^{(j)} + \ldots + \gamma_n u_n^{(j)} \tag{2.45}
\]
or
\[
u(k) = U^{(j)}\gamma \tag{2.46}
\]
By substituting equation (2.44) in equation (2.46) one gets
\[
u(k) = U^{(j)}\left(X^{(j)}\right)^{-1}x(k) = K^{(j)}x(k) \tag{2.47}
\]
with
\[
K^{(j)} = U^{(j)}\left(X^{(j)}\right)^{-1} \tag{2.48}
\]
Hence for x \in C_N^{(j)} the controller is a linear state feedback law whose gains are obtained simply by linear interpolation of the control values at the vertices of the simplex:
\[
u = K^{(j)}x, \quad \forall x \in C_N^{(j)} \tag{2.49}
\]
The piecewise linear control law (2.49) was first proposed by Gutman et al. in [34] for the discrete-time linear time invariant system case. In this original work [34] the authors called this varying state feedback control vertex control. This approach was then extended to the uncertain plant case by Blanchini in [13].
Remark 2.5. Clearly, once the piecewise linear function (2.49) is pre-calculated, the control action can be computed by determining the simplex that contains the current state. An alternative approach for computing the control action is to solve on-line the LP problem (2.43) and then apply the control law
\[
u(k) = \gamma_1 u_1 + \gamma_2 u_2 + \ldots + \gamma_{n_c} u_{n_c} \tag{2.50}
\]
where u_1, u_2, \ldots, u_{n_c} are the control values at the vertices x_1, x_2, \ldots, x_{n_c}.
The following theorem holds.
Theorem 2.4. For system (2.34) and constraints (2.35), the vertex control law (2.45) or (2.49) guarantees recursive feasibility for all x \in C_N.
Proof. A basic explanation is provided in the original work [34]; here a new, simpler argument is proposed based on the convexity of the set C_N and the linearity of the system (2.34). For recursive feasibility, one has to prove that for all x(k) \in C_N
\[
\begin{cases}
F_u u(k) \le g_u\\
x(k+1) = A(k)x(k) + B(k)u(k) + D(k)w(k) \in C_N
\end{cases}
\]
For all x(k) \in C_N, there exists an index j such that x(k) \in C_N^{(j)}. It follows that
\[
F_u u(k) = F_u\left\{\gamma_1 u_1^{(j)} + \gamma_2 u_2^{(j)} + \ldots + \gamma_n u_n^{(j)}\right\}
= \gamma_1 F_u u_1^{(j)} + \gamma_2 F_u u_2^{(j)} + \ldots + \gamma_n F_u u_n^{(j)}
\le \gamma_1 g_u + \gamma_2 g_u + \ldots + \gamma_n g_u
= \left(\sum_{i=1}^{n}\gamma_i\right) g_u = s^*(k)g_u \le g_u
\]
and

x(k+1) = A(k)x(k) + B(k)u(k) + D(k)w(k)
       = A(k) Σ_{i=1}^{n} λ_i x_i^(j) + B(k) Σ_{i=1}^{n} λ_i u_i^(j) + D(k)w(k)
       = Σ_{i=1}^{n} λ_i (A(k)x_i^(j) + B(k)u_i^(j) + D(k)w(k)) + (1 - s*(k)) D(k)w(k)
One has

F_N x(k+1) = Σ_{i=1}^{n} λ_i F_N (A(k)x_i^(j) + B(k)u_i^(j) + D(k)w(k)) + (1 - s*(k)) F_N D(k)w(k)
           ≤ Σ_{i=1}^{n} λ_i g_N + (1 - s*(k)) F_N D(k)w(k)
           ≤ s*(k) g_N + (1 - s*(k)) F_N D(k)w(k)
Since the set C_N is robustly controlled invariant and contains the origin in its interior, it follows that

max_{w ∈ W} F_N D(k)w(k) ≤ g_N

Hence

F_N x(k+1) ≤ s*(k) g_N + (1 - s*(k)) g_N = g_N

or, in other words, x(k+1) ∈ C_N. □
In the absence of disturbances, i.e. w(k) = 0, ∀k ≥ 0, the following theorem holds.

Theorem 2.5. Consider the uncertain system (2.34) with input and state constraints (2.35). The closed-loop system with the piecewise linear control law (2.45) or (2.49) is robustly asymptotically stable.
Proof. A proof is given in [34], [13]. Here we give another proof providing valuable insight into the vertex control scheme. Consider the following positive function

V(x) = s*(k)                                                    (2.51)

V(x) is a Lyapunov function candidate. For any x(k) ∈ C_N, there exists an index j such that x(k) ∈ C_N^(j). Hence

x(k) = s*(k) x_{s*}(k),   and   u(k) = s*(k) u_{s*}(k)

where

u_{s*}(k) = (λ_1 u_1^(j) + λ_2 u_2^(j) + … + λ_n u_n^(j)) / s*(k)

It follows that

x(k+1) = A(k)x(k) + B(k)u(k)
       = A(k) s*(k) x_{s*}(k) + B(k) s*(k) u_{s*}(k)
       = s*(k) (A(k) x_{s*}(k) + B(k) u_{s*}(k)) = s*(k) x_s(k+1)
where x_s(k+1) = A(k)x_{s*}(k) + B(k)u_{s*}(k) ∈ C_N. Hence s*(k) gives a possible decomposition (2.37) of x(k+1).
By using the interpolation based on linear programming (2.39), one gets another decomposition, namely

x(k+1) = s*(k+1) x_{s*}(k+1)

with x_{s*}(k+1) ∈ C_N. It follows that s*(k+1) ≤ s*(k), and V(x) is a non-increasing function.
From the facts that the level curves of the function V(x) = s*(k) are given by scaling the frontier of the feasible set, and that the contractiveness of the control values at the vertices of the feasible set guarantees that there is no initial condition x(0) ∈ C_N such that s*(k) = s*(0) = 1 for sufficiently large and finite k, one concludes that V(x) = s*(k) is a Lyapunov function for all x(k) ∈ C_N. Hence the closed-loop system with the vertex control law (2.49) is robustly asymptotically stable. □
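In this proof, V(x) = s*(k) acts as the gauge (Minkowski) function of C_N evaluated along the trajectory. As an illustrative sketch (using a hypothetical unit square rather than the book's C_N), the gauge of a polytope given by its vertices can be computed with a small linear program:

```python
import numpy as np
from scipy.optimize import linprog

def gauge(x, vertices):
    """Gauge (Minkowski) function of conv(vertices) at x:
    minimize sum(mu) subject to  V @ mu = x,  mu >= 0,
    where the columns of V are the polytope vertices."""
    V = np.asarray(vertices, dtype=float).T
    n = V.shape[1]
    res = linprog(c=np.ones(n), A_eq=V, b_eq=np.asarray(x, dtype=float),
                  bounds=[(0, None)] * n, method="highs")
    assert res.success
    return res.fun

# Illustrative polytope (unit square), not the C_N of the example below.
square = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
print(gauge((0.5, 0.5), square))   # ~0.5: halfway to the boundary
print(gauge((1.0, 0.0), square))   # ~1.0: on the boundary
```

Evaluating this LP at x(k) along a closed-loop trajectory would reproduce the non-increasing sequence s*(k) used in the proof.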
Example 2.4. Consider the discrete-time system of Examples 2.1 and 2.3:

x(k+1) = [ 1  1 ] x(k) + [ 1   ] u(k)                           (2.52)
         [ 0  1 ]        [ 0.7 ]

The constraints are

-2 ≤ x_1 ≤ 2,   -5 ≤ x_2 ≤ 5,   -1 ≤ u ≤ 1                     (2.53)
Based on Procedure 2.3 in Section 1.2.3, the controlled invariant set C_N is computed with N = 6. The inner invariant set together with the feedback gain K can be found in Example 2.3. Note that C_6 = C_7; in this case the polytope C_6 is a maximal controlled invariant set. The set C_N is depicted in Figure 2.9.
Fig. 2.9 Controlled invariant set C_N and state space partition of vertex control.
The set of vertices of C_N is given by the matrix V(C_N) below, together with the control matrix U_v:

V(C_N) = [  2.00   1.30  -0.10  -2.00  -2.00  -1.30   0.10   2.00 ]   (2.54)
         [  1.00   1.70   2.40   3.03  -1.00  -1.70  -2.40  -3.03 ]

and

U_v = [ -1  -1  -1  -1   1   1   1   1 ]                        (2.55)
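As a consistency check, the gains of the vertex control law (2.56) can be reproduced from the vertices (2.54) and the vertex controls (2.55) via K^(j) = U^(j)(X^(j))^{-1}. The pairing of adjacent vertices into simplices, and the use of the more precise vertex coordinate 3.0333 (the value quoted below for x(0)), are assumptions of this sketch:

```python
import numpy as np

# First four columns of V(C_N) in (2.54); 3.0333 is the more precise value
# of the last coordinate, matching the initial condition x(0) quoted below.
v1, v2, v3, v4 = (2.0, 1.0), (1.3, 1.7), (-0.1, 2.4), (-2.0, 3.0333)
v8 = (2.0, -3.0333)              # by symmetry, v8 = -v4

# Vertex controls from U_v in (2.55): -1 at v1..v4, +1 at v5..v8.
# Simplices C_N^(1)..C_N^(4) are assumed spanned by the vertex pairs
# (v1, v8), (v1, v2), (v2, v3), (v3, v4).
pairs = [((v1, v8), (-1.0, 1.0)),
         ((v1, v2), (-1.0, -1.0)),
         ((v2, v3), (-1.0, -1.0)),
         ((v3, v4), (-1.0, -1.0))]

gains = []
for (xa, xb), (ua, ub) in pairs:
    X = np.column_stack([xa, xb])    # X^(j): simplex vertices as columns
    U = np.array([[ua, ub]])         # U^(j): corresponding vertex controls
    K = U @ np.linalg.inv(X)         # equation (2.48)
    gains.append(np.round(K, 4).ravel().tolist())
print(gains)   # reproduces the four gain vectors of (2.56)
```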
The vertex control law is

u(k) = -0.2521 x_1(k) - 0.4959 x_2(k),  if x(k) ∈ C_N^(1) or x(k) ∈ C_N^(5)
u(k) = -0.3333 x_1(k) - 0.3333 x_2(k),  if x(k) ∈ C_N^(2) or x(k) ∈ C_N^(6)
u(k) = -0.2128 x_1(k) - 0.4255 x_2(k),  if x(k) ∈ C_N^(3) or x(k) ∈ C_N^(7)
u(k) = -0.1408 x_1(k) - 0.4225 x_2(k),  if x(k) ∈ C_N^(4) or x(k) ∈ C_N^(8)
                                                                (2.56)
with

C_N^(1) = { x ∈ R^2 : [  1.0000   0.0000 ]       [ 2 ]
                      [ -0.4472   0.8944 ]  x ≤  [ 0 ]
                      [ -0.8349  -0.5505 ]       [ 0 ] }

C_N^(2) = { x ∈ R^2 : [  0.7071   0.7071 ]       [ 2.1213 ]
                      [  0.4472  -0.8944 ]  x ≤  [ 0 ]
                      [ -0.7944   0.6075 ]       [ 0 ] }

C_N^(3) = { x ∈ R^2 : [  0.7944  -0.6075 ]       [ 0 ]
                      [  0.4472   0.8944 ]  x ≤  [ 2.1019 ]
                      [ -0.9991  -0.0416 ]       [ 0 ] }

C_N^(4) = { x ∈ R^2 : [  0.9991   0.0416 ]       [ 0 ]
                      [ -0.8349  -0.5505 ]  x ≤  [ 0 ]
                      [  0.3162   0.9487 ]       [ 2.2452 ] }

C_N^(5) = { x ∈ R^2 : [ -1.0000   0.0000 ]       [ 2 ]
                      [  0.4472  -0.8944 ]  x ≤  [ 0 ]
                      [  0.8349   0.5505 ]       [ 0 ] }

C_N^(6) = { x ∈ R^2 : [ -0.7071  -0.7071 ]       [ 2.1213 ]
                      [ -0.4472   0.8944 ]  x ≤  [ 0 ]
                      [  0.7944  -0.6075 ]       [ 0 ] }

C_N^(7) = { x ∈ R^2 : [ -0.7944   0.6075 ]       [ 0 ]
                      [ -0.4472  -0.8944 ]  x ≤  [ 2.1019 ]
                      [  0.9991   0.0416 ]       [ 0 ] }

C_N^(8) = { x ∈ R^2 : [ -0.9991  -0.0416 ]       [ 0 ]
                      [  0.8349   0.5505 ]  x ≤  [ 0 ]
                      [ -0.3162  -0.9487 ]       [ 2.2452 ] }
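A minimal simulation sketch of this closed loop, working directly from the vertex representation: at each step the simplex (cone) containing x(k) is located by solving a 2x2 system, and the control is interpolated from the vertex controls as in (2.45). The cyclic vertex ordering and the more precise coordinate 3.0333 are assumptions of this sketch:

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([1.0, 0.7])

# Vertices of C_N in cyclic order (from (2.54), with v_{i+4} = -v_i)
# and the corresponding vertex controls (from (2.55)).
V = [(2, 1), (1.3, 1.7), (-0.1, 2.4), (-2, 3.0333),
     (-2, -1), (-1.3, -1.7), (0.1, -2.4), (2, -3.0333)]
U = [-1, -1, -1, -1, 1, 1, 1, 1]

def vertex_control(x, tol=1e-9):
    """Find the cone spanned by two adjacent vertices that contains x,
    then return the interpolated control of equation (2.45)."""
    for i in range(8):
        Xj = np.column_stack([V[i], V[(i + 1) % 8]])
        lam = np.linalg.solve(Xj, x)       # equation (2.44)
        if np.all(lam >= -tol):            # x lies in this cone
            return lam[0] * U[i] + lam[1] * U[(i + 1) % 8]
    raise ValueError("x outside C_N")

x = np.array([-2.0, 3.0333])               # initial condition of Fig. 2.11
for k in range(100):
    u = vertex_control(x)
    assert abs(u) <= 1 + 1e-6              # input constraint (2.53)
    x = A @ x + B * u
    assert abs(x[0]) <= 2 + 1e-6 and abs(x[1]) <= 5 + 1e-6  # state constraints
print(np.linalg.norm(x))   # small: the trajectory has converged
```

The run keeps every constraint of (2.53) satisfied and drives the state to the origin, matching the qualitative behavior shown in Figures 2.10-2.12.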
Figure 2.10 presents state trajectories of the closed-loop system with the vertex controller for different initial conditions.
For the initial condition x(0) = [-2.0000 3.0333]^T, Figure 2.11 shows the state trajectories as a function of time.
Fig. 2.10 State trajectories of the closed loop system with the vertex controller.
Fig. 2.11 State trajectories of the closed loop system with the vertex controller as a function of time.
Figure 2.12 presents the input trajectory and the interpolation coefficient s* as a function of time. As expected, the function s*(k) is positive and non-increasing.
From Figure 2.12, it is worth noticing that with the vertex controller the control values are saturated only on the frontier of the set C_N, i.e. when s* = 1. Note also that the state trajectory is at some moments parallel to the frontier of the set C_N, i.e. when s* is constant. At these moments the control values are also constant, due to the choice of the control values at the vertices of the set C_N.
References
1. Alamo, T., Cepeda, A., Limon, D.: Improved computation of ellipsoidal invariant sets for saturated control systems. In: Decision and Control, 2005 and 2005 European Control Conference. CDC-ECC'05. 44th IEEE Conference on, pp. 6216-6221. IEEE (2005)
Fig. 2.12 Input trajectory and the interpolating coefficient s* as a function of time.
2. Alamo, T., Cepeda, A., Limon, D., Camacho, E.: Estimation of the domain of attraction for saturated discrete-time systems. International Journal of Systems Science 37(8), 575-583 (2006)
3. Anderson, B., Moore, J.: Optimal control: linear quadratic methods, vol. 1. Prentice Hall, Englewood Cliffs, NJ (1990)
4. Balas, E.: Projection with a minimal system of inequalities. Computational Optimization and Applications 10(2), 189-193 (1998)
5. Bellman, R.: On the theory of dynamic programming. Proceedings of the National Academy of Sciences of the United States of America 38(8), 716 (1952)
6. Bellman, R.: The theory of dynamic programming. Bull. Amer. Math. Soc. 60(6), 503-515 (1954)
7. Bellman, R.: Dynamic programming and Lagrange multipliers. Proceedings of the National Academy of Sciences of the United States of America 42(10), 767 (1956)
8. Bellman, R.: Dynamic programming. Science 153(3731), 34 (1966)
9. Bellman, R., Dreyfus, S.: Applied dynamic programming (1962)
10. Bemporad, A., Borrelli, F., Morari, M.: Model predictive control based on linear programming - the explicit solution. Automatic Control, IEEE Transactions on 47(12), 1974-1985 (2002)
11. Bemporad, A., Morari, M., Dua, V., Pistikopoulos, E.: The explicit linear quadratic regulator for constrained systems. Automatica 38(1), 3-20 (2002)
12. Bitsoris, G.: Positively invariant polyhedral sets of discrete-time linear systems. International Journal of Control 47(6), 1713-1726 (1988)
13. Blanchini, F.: Nonquadratic Lyapunov functions for robust control. Automatica 31(3), 451-461 (1995)
14. Blanchini, F.: Set invariance in control. Automatica 35(11), 1747-1767 (1999)
15. Blanchini, F., Miani, S.: On the transient estimate for linear systems with time-varying uncertain parameters. Circuits and Systems I: Fundamental Theory and Applications, IEEE Transactions on 43(7), 592-596 (1996)
16. Blanchini, F., Miani, S.: Stabilization of LPV systems: state feedback, state estimation and duality. In: Decision and Control, 2003. Proceedings. 42nd IEEE Conference on, vol. 2, pp. 1492-1497. IEEE (2003)
17. Blanchini, F., Miani, S.: Set-theoretic methods in control. Springer (2008)
18. Borrelli, F.: Constrained optimal control of linear and hybrid systems, vol. 290. Springer Verlag (2003)
19. Boyd, S., El Ghaoui, L., Feron, E., Balakrishnan, V.: Linear matrix inequalities in system and control theory, vol. 15. Society for Industrial Mathematics (1994)
20. Bronstein, E.: Approximation of convex sets by polytopes. Journal of Mathematical Sciences 153(6), 727-762 (2008)
21. Camacho, E., Bordons, C.: Model predictive control. Springer Verlag (2004)
22. Cao, Y., Lin, Z.: Stability analysis of discrete-time systems with actuator saturation by a saturation-dependent Lyapunov function. Automatica 39(7), 1235-1241 (2003)
23. Clarke, F.: Optimization and nonsmooth analysis, vol. 5. Society for Industrial Mathematics (1990)
24. Cwikel, M., Gutman, P.: Convergence of an algorithm to find maximal state constraint sets for discrete-time linear dynamical systems with bounded controls and states. Automatic Control, IEEE Transactions on 31(5), 457-459 (1986)
25. Dantzig, G.: Fourier-Motzkin elimination and its dual. Tech. rep., DTIC Document (1972)
26. Fukuda, K.: cdd/cdd+ reference manual. Institute for Operations Research, ETH-Zentrum (1997)
27. Fukuda, K.: Frequently asked questions in polyhedral computation. Report, http://www.ifor.math.ethz.ch/fukuda/polyfaq/polyfaq.html, ETH, Zürich (2000)
28. Gahinet, P., Nemirovskii, A., Laub, A., Chilali, M.: The LMI control toolbox. In: Decision and Control, 1994. Proceedings of the 33rd IEEE Conference on, vol. 3, pp. 2038-2041. IEEE (1994)
29. Gilbert, E., Tan, K.: Linear systems with state and control constraints: The theory and application of maximal output admissible sets. Automatic Control, IEEE Transactions on 36(9), 1008-1020 (1991)
30. Goodwin, G., Seron, M., De Dona, J.: Constrained control and estimation: an optimisation approach. Springer Verlag (2005)
31. Grant, M., Boyd, S.: CVX: Matlab software for disciplined convex programming (2008)
32. Grünbaum, B.: Convex polytopes, vol. 221. Springer Verlag (2003)
33. Gutman, P.: A linear programming regulator applied to hydroelectric reservoir level control. Automatica 22(5), 533-541 (1986)
34. Gutman, P., Cwikel, M.: Admissible sets and feedback control for discrete-time linear dynamical systems with bounded controls and states. Automatic Control, IEEE Transactions on 31(4), 373-376 (1986)
35. Gutman, P., Cwikel, M.: An algorithm to find maximal state constraint sets for discrete-time linear dynamical systems with bounded controls and states. Automatic Control, IEEE Transactions on 32(3), 251-254 (1987)
36. Hu, T., Lin, Z.: Control systems with actuator saturation: analysis and design. Birkhauser (2001)
37. Hu, T., Lin, Z., Chen, B.: Analysis and design for discrete-time linear systems subject to actuator saturation. Systems & Control Letters 45(2), 97-112 (2002)
38. Hu, T., Lin, Z., Chen, B.: An analysis and design method for linear systems subject to actuator saturation and disturbance. Automatica 38(2), 351-359 (2002)
39. Jones, C., Kerrigan, E., Maciejowski, J.: Equality set projection: A new algorithm for the projection of polytopes in halfspace representation. University of Cambridge, Dept. of Engineering (2004)
40. Keerthi, S., Sridharan, K.: Solution of parametrized linear inequalities by Fourier elimination and its applications. Journal of Optimization Theory and Applications 65(1), 161-169 (1990)
41. Kerrigan, E.: Robust constraint satisfaction: Invariant sets and predictive control. Department of Engineering, University of Cambridge, UK (2000)
42. Khalil, H., Grizzle, J.: Nonlinear systems, vol. 3. Prentice Hall (1992)
43. Kolmanovsky, I., Gilbert, E.: Theory and computation of disturbance invariant sets for discrete-time linear systems. Mathematical Problems in Engineering 4(4), 317-367 (1998)
44. Kurzhanskiy, A., Varaiya, P.: Ellipsoidal toolbox (ET). In: Decision and Control, 2006. 45th IEEE Conference on, pp. 1498-1503. IEEE (2006)
45. Kvasnica, M., Grieder, P., Baotič, M., Morari, M.: Multi-parametric toolbox (MPT). Hybrid Systems: Computation and Control, pp. 121-124 (2004)
46. Kwakernaak, H., Sivan, R.: Linear optimal control systems, vol. 172. Wiley-Interscience, New York (1972)
47. Lee, E.: Foundations of optimal control theory. Tech. rep., DTIC Document (1967)
48. Lewis, F., Syrmos, V.: Optimal control. Wiley-Interscience (1995)
49. Loechner, V., Wilde, D.: Parameterized polyhedra and their vertices. International Journal of Parallel Programming 25(6), 525-549 (1997)
50. Löfberg, J.: YALMIP: A toolbox for modeling and optimization in MATLAB. In: Computer Aided Control Systems Design, 2004 IEEE International Symposium on, pp. 284-289. IEEE (2004)
51. Maciejowski, J.: Predictive control: with constraints. Pearson Education (2002)
52. Macki, J., Strauss, A.: Introduction to optimal control theory. Springer (1982)
53. Mayne, D., Rawlings, J., Rao, C., Scokaert, P.: Constrained model predictive control: Stability and optimality. Automatica 36(6), 789-814 (2000)
54. Motzkin, T., Raiffa, H., Thompson, G., Thrall, R.: The double description method (1953)
55. Olaru, S., Dumur, D.: A parameterized polyhedra approach for explicit constrained predictive control. In: Decision and Control, 2004. CDC. 43rd IEEE Conference on, vol. 2, pp. 1580-1585. IEEE (2004)
56. Olaru, S., Dumur, D.: On the continuity and complexity of control laws based on multiparametric linear programs. In: Decision and Control, 2006. 45th IEEE Conference on, pp. 5465-5470. IEEE (2006)
57. Pontryagin, L., Gamkrelidze, R.: The mathematical theory of optimal processes, vol. 4. CRC (1986)
58. Rossiter, J.: Model-based predictive control: a practical approach. CRC (2003)
59. Scherer, C., Weiland, S.: Linear matrix inequalities in control. Lecture Notes, Dutch Institute for Systems and Control, Delft, The Netherlands (2000)
60. Schmitendorf, W., Barmish, B.: Null controllability of linear systems with constrained controls. SIAM Journal on Control and Optimization 18, 327 (1980)
61. Seron, M., Goodwin, G., De Doná, J.: Characterisation of receding horizon control for constrained linear systems. Asian Journal of Control 5(2), 271-286 (2003)
62. Sontag, E.: Algebraic approach to bounded controllability of linear systems. International Journal of Control 39(1), 181-188 (1984)
63. Tarbouriech, S., Garcia, G., da Silva, J., Queinnec, I.: Stability and stabilization of linear systems with saturating actuators. Springer (2011)
64. Veres, S.: Geometric bounding toolbox (GBT) for MATLAB. Official website: http://www.sysbrain.com (2003)
65. Ziegler, G.: Lectures on polytopes, vol. 152. Springer (1995)
Glossary
Use the template glossary.tex together with the Springer document class SVMono
(monograph-type books) or SVMult (edited books) to style your glossary in the
Springer layout.
glossary term Write here the description of the glossary term. Write here the de-
scription of the glossary term. Write here the description of the glossary term.
glossary term Write here the description of the glossary term. Write here the de-
scription of the glossary term. Write here the description of the glossary term.
glossary term Write here the description of the glossary term. Write here the de-
scription of the glossary term. Write here the description of the glossary term.
glossary term Write here the description of the glossary term. Write here the de-
scription of the glossary term. Write here the description of the glossary term.
glossary term Write here the description of the glossary term. Write here the de-
scription of the glossary term. Write here the description of the glossary term.
Solutions
Problems of Chapter ??
?? The solution is revealed here.
?? Problem Heading
(a) The solution of the first part is revealed here.
(b) The solution of the second part is revealed here.