Lecture Notes on Optimization
Pravin Varaiya
Contents

1 Introduction
2 Optimization over an Open Set
3 Optimization with Equality Constraints
4 Linear Programming
5 Nonlinear Programming
6 Discrete-Time Optimal Control
7 Continuous-Time Linear Optimal Control
8 Continuous-Time Optimal Control
9 Dynamic Programming
PREFACE to this edition
Notes on Optimization was published in 1971 as part of the Van Nostrand Reinhold Notes on Sys-
tem Sciences, edited by George L. Turin. Our aim was to publish short, accessible treatments of
graduate-level material in inexpensive books (the price of a book in the series was about five dol-
lars). The effort was successful for several years. Van Nostrand Reinhold was then purchased by a
conglomerate which cancelled Notes on System Sciences because it was not sufficiently profitable.
Books have since become expensive. However, the World Wide Web has again made it possible to
publish cheaply.
Notes on Optimization has been out of print for 20 years. However, several people have been
using it as a text or as a reference in a course. They have urged me to re-publish it. The idea of
making it freely available over the Web was attractive because it reaffirmed the original aim. The
only obstacle was to retype the manuscript in LaTeX. I thank Kate Klohe for doing just that.
I would appreciate knowing if you find any mistakes in the book, or if you have suggestions for
(small) changes that would improve it.
Berkeley, California P.P. Varaiya
September, 1998
PREFACE
These Notes were developed for a ten-week course I have taught for the past three years to first-year
graduate students of the University of California at Berkeley. My objective has been to present,
in a compact and unified manner, the main concepts and techniques of mathematical programming
and optimal control to students having diverse technical backgrounds. A reasonable knowledge of
advanced calculus (up to the Implicit Function Theorem), linear algebra (linear independence, basis,
matrix inverse), and linear differential equations (transition matrix, adjoint solution) is sufficient for
the reader to follow the Notes.
The treatment of the topics presented here is deep. Although the coverage is not encyclopedic,
an understanding of this material should enable the reader to follow much of the recent technical
literature on nonlinear programming, (deterministic) optimal control, and mathematical economics.
The examples and exercises given in the text form an integral part of the Notes and most readers will
need to attend to them before continuing further. To facilitate the use of these Notes as a textbook,
I have incurred the cost of some repetition in order to make almost all chapters self-contained.
However, Chapter V must be read before Chapter VI, and Chapter VII before Chapter VIII.
The selection of topics, as well as their presentation, has been influenced by many of my students
and colleagues, who have read and criticized earlier drafts. I would especially like to acknowledge
the help of Professors M. Athans, A. Cohen, C.A. Desoer, J-P. Jacob, E. Polak, and Mr. M. Ripper. I
also want to thank Mrs. Billie Vrtiak for her marvelous typing in spite of starting from a not terribly
legible handwritten manuscript. Finally, I want to thank Professor G.L. Turin for his encouraging
and patient editorship.
Berkeley, California P.P. Varaiya
November, 1971
Chapter 1
INTRODUCTION
In this chapter, we present our model of the optimal decision-making problem, illustrate decision-
making situations by a few examples, and briefly introduce two more general models which we
cannot discuss further in these Notes.
1.1 The Optimal Decision Problem
These Notes show how to arrive at an optimal decision assuming that complete information is given.
The phrase "complete information is given" means that the following requirements are met:
1. The set of all permissible decisions is known, and
2. The cost of each decision is known.
When these conditions are satisfied, the decisions can be ranked according to whether they incur
greater or lesser cost. An optimal decision is then any decision which incurs the least cost among
the set of permissible decisions.
In order to model a decision-making situation in mathematical terms, certain further requirements
must be satisfied, namely,
1. The set of all decisions can be adequately represented as a subset of a vector space with each
vector representing a decision, and
2. The cost corresponding to these decisions is given by a real-valued function.
Some illustrations will help.
Example 1: The Pot Company (Potco) manufactures a smoking blend called Acapulco Gold.
The blend is made up of tobacco and mary-john leaves. For legal reasons the fraction α of mary-john
in the mixture must satisfy 0 < α < 1/2. From extensive market research Potco has determined
their expected volume of sales as a function of α and the selling price p. Furthermore, tobacco can
be purchased at a fixed price, whereas the cost of mary-john is a function of the amount purchased.
If Potco wants to maximize its profits, how much mary-john and tobacco should it purchase, and
what price p should it set?
Example 2: Tough University provides “quality” education to undergraduate and graduate stu-
dents. In an agreement signed with Tough’s undergraduates and graduates (TUGs), “quality” is
defined as follows: every year, each u (undergraduate) must take eight courses, one of which is a
seminar and the rest of which are lecture courses, whereas each g (graduate) must take two seminars
and five lecture courses. A seminar cannot have more than 20 students and a lecture course cannot
have more than 40 students. The University has a faculty of 1000. The Weary Old Radicals (WORs)
have a contract with the University which stipulates that every junior faculty member (there are 750
of these) shall be required to teach six lecture courses and two seminars each year, whereas every
senior faculty member (there are 250 of these) shall teach three lecture courses and three seminars
each year. The Regents of Tough rate Tough's President at α points per u and β points per g "pro-
cessed” by the University. Subject to the agreements with the TUGs and WORs how many u’s and
g’s should the President admit to maximize his rating?
Example 3: (See Figure 1.1.) An engineer is asked to construct a road (broken line) connecting
point a to point b. The current profile of the ground is given by the solid line. The only requirement
is that the final road should not have a slope exceeding 0.001. If it costs $c per cubic foot to excavate
or fill the ground, how should he design the road to meet the specifications at minimum cost?
Example 4: Mr. Shell is the manager of an economy which produces one output, wine. There
are two factors of production, capital and labor. If K(t) and L(t) respectively are the capital stock
used and the labor employed at time t, then the rate of output of wine W(t) at time t is given by the
production function
W(t) = F(K(t), L(t))
As Manager, Mr. Shell allocates some of the output rate W(t) to the consumption rate C(t), and
the remainder I(t) to investment in capital goods. (Obviously, W, C, I, and K are being measured
in a common currency.) Thus, C(t) = W(t) − I(t) = (1 − s(t))W(t), where s(t) = I(t)/W(t)
∈ [0, 1] is the fraction of output which is saved and invested.

Figure 1.1: Admissible set of the example.

Suppose that the capital stock decays
exponentially with time at a rate δ > 0, so that the net rate of growth of capital is given by the
following equation:
K̇(t) = (d/dt)K(t) = −δK(t) + s(t)W(t) = −δK(t) + s(t)F(K(t), L(t)).   (1.1)
The labor force is growing at a constant birth rate of β > 0. Hence,
L̇(t) = βL(t).   (1.2)
Suppose that the production function F exhibits constant returns to scale, i.e., F(λK, λL) =
λF(K, L) for all λ > 0. If we define the relevant variables in terms of per capita of labor,
w = W/L, c = C/L, k = K/L, and if we let f(k) = F(k, 1), then we see that F(K, L) = LF(K/L, 1) =
Lf(k), whence the consumption per capita of labor becomes c(t) = (1 − s(t))f(k(t)). Using these
definitions and equations (1.1) and (1.2) it is easy to see that k(t) satisfies the differential equation (1.3):

k̇(t) = s(t)f(k(t)) − µk(t),   (1.3)

where µ = (δ + β). The first term on the right-hand side of (1.3) is the increase in the capital-to-labor
ratio due to investment, whereas the second term is the decrease due to depreciation and increase in
the labor force.
Suppose there is a planning horizon time T, and at time 0 Mr. Shell starts with capital-to-labor
ratio k₀. If "welfare" over the planning period [0, T] is identified with total consumption ∫₀ᵀ c(t) dt,
what should Mr. Shell's savings policy s(t), 0 ≤ t ≤ T, be so as to maximize welfare? What
savings policy maximizes welfare subject to the additional restriction that the capital-to-labor ratio
at time T should be at least k_T? If future consumption is discounted at rate α > 0 and if the time
horizon is ∞, the welfare function becomes ∫₀^∞ e^(−αt) c(t) dt. What is the optimum policy
corresponding to this criterion?
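As a concrete illustration, the growth dynamics (1.3) are easy to simulate numerically. The sketch below uses a forward-Euler discretization with an assumed Cobb–Douglas technology f(k) = k^(1/2) and arbitrary illustrative values for s, δ, β, and k₀; none of these numbers come from the text.

```python
import numpy as np

# Forward-Euler simulation of the per-capita capital dynamics (1.3):
#   kdot(t) = s(t) f(k(t)) - mu k(t),  mu = delta + beta.
# Assumptions (illustrative only): Cobb-Douglas f(k) = sqrt(k),
# constant savings policy s(t) = 0.3, delta = 0.05, beta = 0.02.

def f(k):
    return np.sqrt(k)          # assumed production function per capita

def simulate(k0=1.0, s=0.3, delta=0.05, beta=0.02, T=50.0, dt=0.01):
    mu = delta + beta
    n = int(T / dt)
    k = np.empty(n + 1)
    k[0] = k0
    for i in range(n):
        k[i + 1] = k[i] + dt * (s * f(k[i]) - mu * k[i])
    return k

k = simulate()
print(f"k(T) = {k[-1]:.4f}")   # approaches the steady state (s/mu)**2 ≈ 18.37
```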
These examples illustrate the kinds of decision-making problems which can be formulated math-
ematically so as to be amenable to solution by the theory presented in these Notes. We must always
remember that a mathematical formulation is inevitably an abstraction and the gain in precision may
have occurred at a great loss of realism. For instance, Example 2 is a caricature (see also a faintly
related but more elaborate formulation in Bruno [1970]), whereas Example 4 is light-years away
from reality. In the latter case, the value of the mathematical exercise is greater the more insensitive
are the optimum savings policies with respect to the simplifying assumptions of the mathematical
model. (In connection with this example and related models see the critique by Koopmans [1967].)
In the examples above, the set of permissible decisions is represented by the set of all points
in some vector space which satisfy certain constraints. Thus, in the first example, a permissible
decision is any two-dimensional vector (α, p) satisfying the constraints 0 < α < 1/2 and 0 < p.
In the second example, any vector (u, g) with u ≥ 0, g ≥ 0, constrained by the number
of faculty and the agreements with the TUGs and WORs is a permissible decision. In the last
example, a permissible decision is any real-valued function s(t), 0 ≤ t ≤ T, constrained by
0 ≤ s(t) ≤ 1. (It is of mathematical but not conceptual interest to note that in this case a decision
is represented by a vector in a function space which is infinite-dimensional.) More concisely then,
these Notes are concerned with optimizing (i.e. maximizing or minimizing) a real-valued function
over a vector space subject to constraints. The constraints themselves are presented in terms of
functional inequalities or equalities.
At this point, it is important to realize that the distinction between the function which is to be
optimized and the functions which describe the constraints, although convenient for presenting the
mathematical theory, may be quite artificial in practice. For instance, suppose we have to choose
the durations of various traffic lights in a section of a city so as to achieve optimum traffic flow.
Let us suppose that we know the transportation needs of all the people in this section. Before we
can begin to suggest a design, we need a criterion to determine what is meant by “optimum traffic
flow.” More abstractly, we need a criterion by which we can compare different decisions, which in
this case are different patterns of traffic-light durations. One way of doing this is to assign as cost to
each decision the total amount of time taken to make all the trips within this section. An alternative
and equally plausible goal may be to minimize the maximum waiting time (that is the total time
spent at stop lights) in each trip. Now it may happen that these two objective functions may be
inconsistent in the sense that they may give rise to different orderings of the permissible decisions.
Indeed, it may be the case that the optimum decision according to the first criterion may lead to
very long waiting times for a few trips, so that this decision is far from optimum according to the
second criterion. We can then redefine the problem as minimizing the first cost function (total time
for trips) subject to the constraint that the waiting time for any trip is less than some reasonable
bound (say one minute). In this way, the second goal (minimum waiting time) has been modified
and reintroduced as a constraint. This interchangeability of goals and constraints also appears at a
deeper level in much of the mathematical theory. We will see that in most of the results the objective
function and the functions describing the constraints are treated in the same manner.
1.2 Some Other Models of Decision Problems
Our model of a single decision-maker with complete information can be generalized along two
very important directions. In the first place, the hypothesis of complete information can be relaxed
by allowing that decision-making occurs in an uncertain environment. In the second place, we
can replace the single decision-maker by a group of two or more agents whose collective decision
determines the outcome. Since we cannot study these more general models in these Notes, we
merely point out here some situations where such models arise naturally and give some references.
1.2.1 Optimization under uncertainty.
A person wants to invest $1,000 in the stock market. He wants to maximize his capital gains, and
at the same time minimize the risk of losing his money. The two objectives are incompatible, since
the stock which is likely to have higher gains is also likely to involve greater risk. The situation
is different from our previous examples in that the outcome (future stock prices) is uncertain. It is
customary to model this uncertainty stochastically. Thus, the investor may assign probability 0.5 to
the event that the price of shares in Glamor company increases by $100, probability 0.25 that the
price is unchanged, and probability 0.25 that it drops by $100. A similar model is made for all the
other stocks that the investor is willing to consider, and a decision problem can be formulated as
follows. How should $1,000 be invested so as to maximize the expected value of the capital gains
subject to the constraint that the probability of losing more than $100 is less than 0.1?
As another example, consider the design of a controller for a chemical process where the decision
variables are temperature, input rates of various chemicals, etc. Usually there are impurities in the
chemicals and disturbances in the heating process which may be regarded as additional inputs of a
random nature and modeled as stochastic processes. After this, just as in the case of the portfolio-
selection problem, we can formulate a decision problem in such a way as to take into account these
random disturbances.
If the uncertainties are modelled stochastically as in the example above, then in many cases
the techniques presented in these Notes can be usefully applied to the resulting optimal decision
problem. To do justice to these decision-making situations, however, it is necessary to give great
attention to the various ways in which the uncertainties can be modelled mathematically. We also
need to worry about finding equivalent but simpler formulations. For instance, it is of great signif-
icance to know that, given appropriate conditions, an optimal decision problem under uncertainty
is equivalent to another optimal decision problem under complete information. (This result, known
as the Certainty-Equivalence principle in economics, has been extended and baptized the Separation
Theorem in the control literature. See Wonham [1968].) Unfortunately, to be able to deal with
these models, we need a good background in Statistics and Probability Theory besides the material
presented in these Notes. We can only refer the reader to the extensive literature on Statistical De-
cision Theory (Savage [1954], Blackwell and Girshick [1954]) and on Stochastic Optimal Control
(Meditch [1969], Kushner [1971]).
1.2.2 The case of more than one decision-maker.
Agent Alpha is chasing agent Beta. The place is a large circular field. Alpha is driving a fast, heavy
car which does not maneuver easily, whereas Beta is riding a motor scooter, slow but with good
maneuverability. What should Alpha do to get as close to Beta as possible? What should Beta
do to stay out of Alpha’s reach? This situation is fundamentally different from those discussed so
far. Here there are two decision-makers with opposing objectives. Each agent does not know what
the other is planning to do, yet the effectiveness of his decision depends crucially upon the other’s
decision, so that optimality cannot be defined as we did earlier. We need a new concept of rational
(optimal) decision-making. Situations such as these have been studied extensively and an elaborate
structure, known as the Theory of Games, exists which describes and prescribes behavior in these
situations. Although the practical impact of this theory is not great, it has proved to be among the
most fruitful sources of unifying analytical concepts in the social sciences, notably economics and
political science. The best single source for Game Theory is still Luce and Raiffa [1957], whereas
the mathematical content of the theory is concisely displayed in Owen [1968]. The control theorist
will probably be most interested in Isaacs [1965], and Blaquiere, et al., [1969].
The difficulty caused by the lack of knowledge of the actions of the other decision-making agents
arises even if all the agents have the same objective, since a particular decision taken by our agent
may be better or worse than another decision depending upon the (unknown) decisions taken by the
other agents. It is of crucial importance to invent schemes to coordinate the actions of the individual
decision-makers in a consistent manner. Although problems involving many decision-makers are
present in any system of large size, the number of results available is pitifully small. (See Mesarovic,
et al., [1970] and Marschak and Radner [1971].) In the author’s opinion, these problems represent
one of the most important and challenging areas of research in decision theory.
Chapter 2
OPTIMIZATION OVER AN OPEN
SET
In this chapter we study in detail the first example of Chapter 1. We first establish some notation
which will be in force throughout these Notes. Then we study our example. This will generalize
to a canonical problem, the properties of whose solution are stated as a theorem. Some additional
properties are mentioned in the last section.
2.1 Notation
2.1.1
All vectors are column vectors, with two consistent exceptions mentioned in 2.1.3 and 2.1.5 below
and some other minor and convenient exceptions in the text. Prime denotes transpose, so that if
x ∈ R^n then x′ is the row vector x′ = (x_1, . . . , x_n), and x = (x_1, . . . , x_n)′. Vectors are normally
denoted by lower-case letters, the ith component of a vector x ∈ R^n is denoted x_i, and different
vectors denoted by the same symbol are distinguished by superscripts as in x^j and x^k. 0 denotes
both the zero vector and the real number zero, but no confusion will result.
Thus if x = (x_1, . . . , x_n)′ and y = (y_1, . . . , y_n)′ then x′y = x_1y_1 + . . . + x_ny_n as in ordinary
matrix multiplication. If x ∈ R^n we define |x| = +(x′x)^(1/2).
2.1.2
If x = (x_1, . . . , x_n)′ and y = (y_1, . . . , y_n)′ then x ≥ y means x_i ≥ y_i, i = 1, . . . , n. In particular if
x ∈ R^n, then x ≥ 0, if x_i ≥ 0, i = 1, . . . , n.
2.1.3
Matrices are normally denoted by capital letters. If A is an m × n matrix, then A^j denotes the jth
column of A, and A_i denotes the ith row of A. Note that A_i is a row vector. A_i^j denotes the entry
of A in the ith row and jth column; this entry is sometimes also denoted by the lower-case letter
a_{ij}, and then we also write A = {a_{ij}}. I denotes the identity matrix; its size will be clear from the
context. If confusion is likely, we write I_n to denote the n × n identity matrix.
2.1.4
If f : R^n → R^m is a function, its ith component is written f_i, i = 1, . . . , m. Note that f_i : R^n → R.
Sometimes we describe a function by specifying a rule to calculate f(x) for every x. In this case
we write f : x → f(x). For example, if A is an m × n matrix, we can write f : x → Ax to denote
the function f : R^n → R^m whose value at a point x ∈ R^n is Ax.
2.1.5
If f : R^n → R is a differentiable function, the derivative of f at x̂ is the row vector
((∂f/∂x_1)(x̂), . . . , (∂f/∂x_n)(x̂)). This derivative is denoted by (∂f/∂x)(x̂) or f_x(x̂) or
∂f/∂x|_{x=x̂} or f_x|_{x=x̂}, and if the argument x̂ is clear from the context it may be dropped. The
column vector (f_x(x̂))′ is also denoted ∇_x f(x̂), and is called the gradient of f at x̂. If
f : (x, y) → f(x, y) is a differentiable function from R^n × R^m into R, the partial derivative of f
with respect to x at the point (x̂, ŷ) is the n-dimensional row vector f_x(x̂, ŷ) = (∂f/∂x)(x̂, ŷ) =
((∂f/∂x_1)(x̂, ŷ), . . . , (∂f/∂x_n)(x̂, ŷ)), and similarly f_y(x̂, ŷ) = (∂f/∂y)(x̂, ŷ) =
((∂f/∂y_1)(x̂, ŷ), . . . , (∂f/∂y_m)(x̂, ŷ)). Finally, if f : R^n → R^m is a differentiable function with
components f_1, . . . , f_m, then its derivative at x̂ is the m × n matrix

(∂f/∂x)(x̂) = f_x(x̂) = [ f_{1x}(x̂) ; . . . ; f_{mx}(x̂) ],

i.e., the matrix whose ith row is f_{ix}(x̂) and whose (i, j) entry is (∂f_i/∂x_j)(x̂).
2.1.6
If f : R^n → R is twice differentiable, its second derivative at x̂ is the n × n matrix
(∂²f/∂x∂x)(x̂) = f_{xx}(x̂), where (f_{xx}(x̂))_i^j = (∂²f/∂x_j∂x_i)(x̂). Thus, in terms of the notation
in Section 2.1.5 above, f_{xx}(x̂) = (∂/∂x)(f_x)′(x̂).
2.2 Example
We consider in detail the first example of Chapter 1. Define the following variables and functions:
α = fraction of mary-john in the proposed mixture,
p = sale price per pound of mixture,
v = total amount of mixture produced,
f(α, p) = expected sales volume (as determined by market research) of the mixture as a function of (α, p).
Since it is not profitable to produce more than can be sold, we must have v = f(α, p). Let
m = amount (in pounds) of mary-john purchased, and
t = amount (in pounds) of tobacco purchased.
Evidently,
m = αv, and t = (1 − α)v.
Let
P_1(m) = purchase price of m pounds of mary-john, and
P_2 = purchase price per pound of tobacco.
Then the total cost as a function of α, p is
C(α, p) = P_1(αf(α, p)) + P_2(1 − α)f(α, p).
The revenue is
R(α, p) = pf(α, p),
so that the net profit is
N(α, p) = R(α, p) − C(α, p).
The set of admissible decisions is Ω, where Ω = {(α, p) | 0 < α < 1/2, 0 < p < ∞}. Formally, we
have the following decision problem:
Maximize N(α, p)
subject to (α, p) ∈ Ω.
Suppose that (α*, p*) is an optimal decision, i.e.,
(α*, p*) ∈ Ω and N(α*, p*) ≥ N(α, p) for all (α, p) ∈ Ω.   (2.1)
We are going to establish some properties of (α*, p*). First of all we note that Ω is an open subset
of R^2. Hence there exists ε > 0 such that
(α, p) ∈ Ω whenever |(α, p) − (α*, p*)| < ε.   (2.2)
In turn (2.2) implies that for every vector h = (h_1, h_2)′ in R^2 there exists η > 0 (η of course
depends on h) such that
((α*, p*) + δ(h_1, h_2)) ∈ Ω for 0 ≤ δ ≤ η.   (2.3)
Figure 2.1: Admissible set of the example.
Combining (2.3) with (2.1) we obtain (2.4):
N(α*, p*) ≥ N(α* + δh_1, p* + δh_2) for 0 ≤ δ ≤ η.   (2.4)
Now we assume that the function N is differentiable so that by Taylor's theorem
N(α* + δh_1, p* + δh_2) = N(α*, p*) + δ[(∂N/∂α)(α*, p*)h_1 + (∂N/∂p)(α*, p*)h_2] + o(δ),   (2.5)
where
o(δ)/δ → 0 as δ → 0.   (2.6)
Substitution of (2.5) into (2.4) yields
0 ≥ δ[(∂N/∂α)(α*, p*)h_1 + (∂N/∂p)(α*, p*)h_2] + o(δ).
Dividing by δ > 0 gives
0 ≥ [(∂N/∂α)(α*, p*)h_1 + (∂N/∂p)(α*, p*)h_2] + o(δ)/δ.   (2.7)
Letting δ approach zero in (2.7), and using (2.6), we get
0 ≥ (∂N/∂α)(α*, p*)h_1 + (∂N/∂p)(α*, p*)h_2.   (2.8)
Thus, using the facts that N is differentiable, (α*, p*) is optimal, and Ω is open, we have concluded
that the inequality (2.8) holds for every vector h ∈ R^2. Clearly this is possible only if
(∂N/∂α)(α*, p*) = 0, (∂N/∂p)(α*, p*) = 0.   (2.9)
Before evaluating the usefulness of property (2.9), let us prove a direct generalization.
2.3 The Main Result and its Consequences

2.3.1 Theorem
Let Ω be an open subset of R^n. Let f : R^n → R be a differentiable function. Let x* be an optimal
solution of the following decision-making problem:
Maximize f(x)
subject to x ∈ Ω.   (2.10)
Then
(∂f/∂x)(x*) = 0.   (2.11)
Proof: Since x* ∈ Ω and Ω is open, there exists ε > 0 such that
x ∈ Ω whenever |x − x*| < ε.   (2.12)
In turn, (2.12) implies that for every vector h ∈ R^n there exists η > 0 (η depending on h) such that
(x* + δh) ∈ Ω whenever 0 ≤ δ ≤ η.   (2.13)
Since x* is optimal, we must then have
f(x*) ≥ f(x* + δh) whenever 0 ≤ δ ≤ η.   (2.14)
Since f is differentiable, by Taylor's theorem we have
f(x* + δh) = f(x*) + (∂f/∂x)(x*)δh + o(δ),   (2.15)
where
o(δ)/δ → 0 as δ → 0.   (2.16)
Substitution of (2.15) into (2.14) yields
0 ≥ δ(∂f/∂x)(x*)h + o(δ),
and dividing by δ > 0 gives
0 ≥ (∂f/∂x)(x*)h + o(δ)/δ.   (2.17)
Letting δ approach zero in (2.17) and taking (2.16) into account, we see that
0 ≥ (∂f/∂x)(x*)h.   (2.18)
Since the inequality (2.18) must hold for every h ∈ R^n, we must have
0 = (∂f/∂x)(x*),
and the theorem is proved. ♦
Table 2.1

Case   Does there exist an optimal    At how many points in Ω     Further consequences
       decision for (2.10)?           is (2.11) satisfied?
1      Yes                            Exactly one point, say x*   x* is the unique optimal decision
2      Yes                            More than one point
3      No                             None
4      No                             Exactly one point
5      No                             More than one point
2.3.2 Consequences.
Let us evaluate the usefulness of (2.11) and its special case (2.9). Equation (2.11) gives us n
equations which must be satisfied at any optimal decision x* = (x*_1, . . . , x*_n)′. These are
(∂f/∂x_1)(x*) = 0, (∂f/∂x_2)(x*) = 0, . . . , (∂f/∂x_n)(x*) = 0.   (2.19)
Thus, every optimal decision must be a solution of these n simultaneous equations in n variables, so
that the search for an optimal decision from Ω is reduced to searching among the solutions of (2.19).
In practice this may be a very difficult problem since these may be nonlinear equations and it may
be necessary to use a digital computer. However, in these Notes we shall not be overly concerned
with numerical solution techniques (but see 2.4.6 below).
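To make (2.19) concrete, here is a small sketch that solves the stationarity equations symbolically for an assumed objective f(x_1, x_2) = −(x_1 − 1)² − (x_2 + 2)²; the function is an illustration, not from the text.

```python
import sympy as sp

# Solve the n simultaneous equations (2.19), i.e. grad f = 0, for an
# assumed concave objective f(x1, x2) = -(x1 - 1)**2 - (x2 + 2)**2.
x1, x2 = sp.symbols("x1 x2", real=True)
f = -(x1 - 1)**2 - (x2 + 2)**2

stationary = sp.solve([sp.diff(f, x1), sp.diff(f, x2)], [x1, x2], dict=True)
print(stationary)  # [{x1: 1, x2: -2}] -- the unique candidate for an optimum
```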
The theorem may also have conceptual significance. We return to the example and recall that
N = R − C. Suppose that R and C are differentiable, in which case (2.9) implies that at every
optimal decision (α*, p*),
(∂R/∂α)(α*, p*) = (∂C/∂α)(α*, p*), (∂R/∂p)(α*, p*) = (∂C/∂p)(α*, p*),
or, in the language of economic analysis, marginal revenue = marginal cost. We have obtained an
important economic insight.
2.4 Remarks and Extensions
2.4.1 A warning.
Equation (2.11) is only a necessary condition for x* to be optimal. There may exist decisions x̃ ∈ Ω
such that f_x(x̃) = 0 but x̃ is not optimal. More generally, any one of the five cases in Table 2.1 may
occur. The diagrams in Figure 2.2 illustrate these cases. In each case Ω = (−1, 1).
Note that in the last three figures there is no optimal decision, since the limit points −1 and +1 are
not in the set of permissible decisions Ω = (−1, 1). In summary, the theorem does not give us any
clues concerning the existence of an optimal decision, and it does not give us sufficient conditions
either.
Figure 2.2: Illustration of 2.4.1 (Cases 1–5; in each case Ω = (−1, 1)).
2.4.2 Existence.
If the set of permissible decisions Ω is a closed and bounded subset of R^n, and if f is continuous,
then it follows by the Weierstrass Theorem that there exists an optimal decision. But if Ω is closed
we cannot assert that the derivative of f vanishes at the optimum. Indeed, in the third figure above,
if Ω = [−1, 1], then +1 is the optimal decision but the derivative is positive at that point.
2.4.3 Local optimum.
We say that x* ∈ Ω is a locally optimal decision if there exists ε > 0 such that f(x*) ≥ f(x)
whenever x ∈ Ω and |x* − x| ≤ ε. It is easy to see that the theorem (i.e., (2.11)) holds for local
optima also.
2.4.4 Second-order conditions.
Suppose f is twice differentiable and let x* ∈ Ω be optimal or even locally optimal. Then
f_x(x*) = 0, and by Taylor's theorem
f(x* + δh) = f(x*) + (1/2)δ²h′f_{xx}(x*)h + o(δ²),   (2.20)
where o(δ²)/δ² → 0 as δ → 0. Now for δ > 0 sufficiently small, f(x* + δh) ≤ f(x*), so that dividing
by δ² > 0 yields
0 ≥ (1/2)h′f_{xx}(x*)h + o(δ²)/δ²,
and letting δ approach zero we conclude that h′f_{xx}(x*)h ≤ 0 for all h ∈ R^n. This means that
f_{xx}(x*) is a negative semi-definite matrix. Thus, if we have a twice differentiable objective function,
we get an additional necessary condition.
2.4.5 Sufficiency for local optimum.
Suppose at x* ∈ Ω, f_x(x*) = 0 and f_{xx}(x*) is strictly negative definite. Then from the expansion
(2.20) we can conclude that x* is a local optimum.
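A quick numerical way to check these second-order conditions is to look at the eigenvalues of the Hessian at a stationary point. The sketch below does this for the same assumed objective used earlier; the function and the point are illustrative, not from the text.

```python
import numpy as np

# Check the second-order conditions of 2.4.4/2.4.5 at a stationary point:
# f_xx negative semi-definite is necessary; strictly negative definite
# (all eigenvalues < 0) is sufficient for a local optimum.
# Assumed example: f(x) = -(x1 - 1)**2 - (x2 + 2)**2, stationary at (1, -2).
hessian = np.array([[-2.0, 0.0],
                    [0.0, -2.0]])    # f_xx at x* for the assumed f

eigs = np.linalg.eigvalsh(hessian)
print(eigs)                           # [-2. -2.]
print("local optimum" if np.all(eigs < 0) else "inconclusive")
```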
2.4.6 A numerical procedure.
At any point x̃ ∈ Ω the gradient ∇_x f(x̃) is a direction along which f(x) increases, i.e.,
f(x̃ + ε∇_x f(x̃)) > f(x̃) for all ε > 0 sufficiently small. This observation suggests the following
scheme for finding a point x* ∈ Ω which satisfies (2.11). We can formalize the scheme as an algorithm.
Step 1. Pick x^0 ∈ Ω. Set i = 0. Go to Step 2.
Step 2. Calculate ∇_x f(x^i). If ∇_x f(x^i) = 0, stop. Otherwise let x^{i+1} = x^i + d_i∇_x f(x^i) and go
to Step 3.
Step 3. Set i = i + 1 and return to Step 2.
The step size d_i can be selected in many ways. For instance, one choice is to take d_i to be an
optimal decision for the following problem:
Max {f(x^i + d∇_x f(x^i)) | d > 0, (x^i + d∇_x f(x^i)) ∈ Ω}.
This requires a one-dimensional search. Another choice is to let d_i = d_{i−1} if
f(x^i + d_{i−1}∇_x f(x^i)) > f(x^i); otherwise let d_i = (1/k)d_{i−1}, where k is the smallest positive
integer such that f(x^i + (1/k)d_{i−1}∇_x f(x^i)) > f(x^i). To start the process we let d_{−1} > 0 be
arbitrary.
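A direct transcription of this scheme, using the second step-size rule above, might look as follows. The quadratic test function and the stopping tolerance are assumptions for illustration (the Notes state the algorithm with the exact test ∇_x f = 0, and here Ω = R^2 so the membership check is omitted):

```python
import numpy as np

def gradient_ascent(f, grad, x0, d_init=1.0, tol=1e-8, max_iter=10_000):
    """Sketch of the scheme in 2.4.6 with the second step-size rule:
    keep the previous step d if it still increases f, otherwise shrink
    it to d/k for the smallest positive integer k giving an increase."""
    x, d = np.asarray(x0, dtype=float), d_init
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:     # stands in for the exact test grad f = 0
            break
        if f(x + d * g) <= f(x):
            k = 1
            while f(x + (d / k) * g) <= f(x) and k < 1_000:
                k += 1
            d = d / k
        x = x + d * g
    return x

# Illustrative concave objective with maximum at (1, -2).
f = lambda x: -(x[0] - 1)**2 - (x[1] + 2)**2
grad = lambda x: np.array([-2 * (x[0] - 1), -2 * (x[1] + 2)])
print(gradient_ascent(f, grad, x0=[0.0, 0.0]))   # ≈ [ 1. -2.]
```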
Exercise: Let f be continuously differentiable. Let {d_i} be produced by either of these choices and
let {x^i} be the resulting sequence. Then
1. f(x^{i+1}) > f(x^i) if x^{i+1} ≠ x^i,
2. if x* ∈ Ω is a limit point of the sequence {x^i}, then f_x(x*) = 0.
For other numerical procedures the reader is referred to Zangwill [1969] or Polak [1971].
Chapter 3
OPTIMIZATION OVER SETS
DEFINED BY EQUALITY
CONSTRAINTS
We first study a simple example and examine the properties of an optimal decision. This will
generalize to a canonical problem, and the properties of its optimal decisions are stated in the form
of a theorem. Additional properties are summarized in Section 3 and a numerical scheme is applied
to determine the optimal design of resistive networks.
3.1 Example
We want to find the rectangle of maximum area inscribed in an ellipse defined by
f_1(x, y) = x²/a² + y²/b² = α.   (3.1)
The problem can be formalized as follows (see Figure 3.1):
Maximize f_0(x, y) = 4xy
subject to (x, y) ∈ Ω = {(x, y) | f_1(x, y) = α}.   (3.2)
The main difference between problem (3.2) and the problems studied in the last chapter is that
the set of permissible decisions Ω is not an open set. Hence, if (x*, y*) is an optimal decision we
cannot assert that f_0(x*, y*) ≥ f_0(x, y) for all (x, y) in an open set containing (x*, y*). Returning
to problem (3.2), suppose (x*, y*) is an optimal decision. Clearly then either x* ≠ 0 or y* ≠ 0. Let
us suppose y* ≠ 0. Then from Figure 3.1 it is evident that there exist (i) ε > 0, (ii) an open set V
containing (x*, y*), and (iii) a differentiable function g : (x* − ε, x* + ε) → V such that
f_1(x, y) = α and (x, y) ∈ V iff y = g(x).¹   (3.3)
In particular this implies that y* = g(x*), and that f_1(x, g(x)) = α whenever |x − x*| < ε.
¹ Note that y* ≠ 0 implies f_{1y}(x*, y*) ≠ 0, so that this assertion follows from the Implicit Function Theorem. The
assertion is false if y* = 0. In the present case let 0 < ε ≤ a − x* and g(x) = +b[α − (x/a)²]^(1/2).
Figure 3.1: Illustration of the example.
Since (x*, y*) = (x*, g(x*)) is optimal for (3.2), it follows that x* is an optimal solution for (3.4):
Maximize f̂_0(x) = f_0(x, g(x))
subject to |x − x*| < ε.   (3.4)
But the constraint set in (3.4) is an open set (in R^1) and the objective function f̂_0 is differentiable,
so that by Theorem 2.3.1, f̂_{0x}(x*) = 0, which we can also express as
f_{0x}(x*, y*) + f_{0y}(x*, y*)g_x(x*) = 0.   (3.5)
Using the fact that f_1(x, g(x)) ≡ α for |x − x*| < ε, we see that
f_{1x}(x*, y*) + f_{1y}(x*, y*)g_x(x*) = 0,
and since f_{1y}(x*, y*) ≠ 0 we can evaluate g_x(x*),
g_x(x*) = −f_{1y}^{−1}f_{1x}(x*, y*),
and substitute in (3.5) to obtain the condition (3.6):
f_{0x} − f_{0y}f_{1y}^{−1}f_{1x} = 0 at (x*, y*).   (3.6)
Thus an optimal decision (x*, y*) must satisfy the two equations f_1(x*, y*) = α and (3.6). Solving
these yields
x* = ±(α/2)^(1/2) a, y* = ±(α/2)^(1/2) b.
Evidently there are two optimal decisions, (x*, y*) = ±(α/2)^(1/2)(a, b), and the maximum area is
m(α) = 2αab.   (3.7)
The condition (3.6) can be interpreted differently. Define
λ* = f_{0y}f_{1y}^{−1}(x*, y*).   (3.8)
Then (3.6) and (3.8) can be rewritten as (3.9):
(f_{0x}, f_{0y}) = λ*(f_{1x}, f_{1y}) at (x*, y*).   (3.9)
In terms of the gradients of f_0, f_1, (3.9) is equivalent to
∇f_0(x*, y*) = [∇f_1(x*, y*)]λ*,   (3.10)
which means that at an optimal decision the gradient of the objective function f_0 is normal to the
plane tangent to the constraint set Ω.
Finally we note that
λ* = ∂m/∂α,   (3.11)
where m(α) = maximum area.
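The solution of this example is easy to verify symbolically. The sketch below solves f_1 = α together with the stationarity of the Lagrangian form of 3.3.2; the use of sympy is a convenience for illustration, not part of the Notes.

```python
import sympy as sp

# Verify the inscribed-rectangle example: maximize f0 = 4xy subject to
# f1 = x**2/a**2 + y**2/b**2 = alpha, via stationarity of the Lagrangian
# L = f0 - lam*(f1 - alpha)  (cf. 3.3.2).
x, y, lam = sp.symbols("x y lam", positive=True)
a, b, alpha = sp.symbols("a b alpha", positive=True)

f0 = 4 * x * y
f1 = x**2 / a**2 + y**2 / b**2
L = f0 - lam * (f1 - alpha)

sols = sp.solve([sp.diff(L, x), sp.diff(L, y), f1 - alpha], [x, y, lam], dict=True)
print(sols)
# Expect x = a*sqrt(alpha/2), y = b*sqrt(alpha/2), lam = 2*a*b; note that
# lam equals dm/dalpha for m(alpha) = 2*alpha*a*b, consistent with (3.11).
```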
3.2 General Case
3.2.1 Theorem.
Let f_i : R^n → R, i = 0, 1, . . . , m (m < n), be continuously differentiable functions and let x* be
an optimal decision of problem (3.12):
Maximize f_0(x)
subject to f_i(x) = α_i, i = 1, . . . , m.   (3.12)
Suppose that at x* the derivatives f_{ix}(x*), i = 1, . . . , m, are linearly independent. Then there exists
a vector λ* = (λ*_1, . . . , λ*_m)′ such that
f_{0x}(x*) = λ*_1 f_{1x}(x*) + . . . + λ*_m f_{mx}(x*).   (3.13)
Furthermore, let m(α_1, . . . , α_m) be the maximum value of (3.12) as a function of α = (α_1, . . . , α_m)′.
Let x*(α) be an optimal decision for (3.12). If x*(α) is a differentiable function of α then m(α) is
a differentiable function of α, and
(λ*)′ = ∂m/∂α.   (3.14)
Proof. Since f_{ix}(x*), i = 1, . . . , m, are linearly independent, then by re-labeling the coordinates of
x if necessary, we can assume that the m × m matrix [(∂f_i/∂x_j)(x*)], 1 ≤ i, j ≤ m, is nonsingular.
By the Implicit Function Theorem (see Fleming [1965]) it follows that there exist (i) ε > 0, (ii) an
open set V in R^n containing x*, and (iii) a differentiable function g : U → R^m, where
U = {(x_{m+1}, . . . , x_n)′ | |x_{m+ℓ} − x*_{m+ℓ}| < ε, ℓ = 1, . . . , n − m}, such that
f_i(x_1, . . . , x_n) = α_i, 1 ≤ i ≤ m, and (x_1, . . . , x_n) ∈ V
iff
x_j = g_j(x_{m+1}, . . . , x_n), 1 ≤ j ≤ m, and (x_{m+1}, . . . , x_n) ∈ U   (3.15)
(see Figure 3.2).
In particular this implies that x*_j = g_j(x*_{m+1}, . . . , x*_n), 1 ≤ j ≤ m, and
f_i(g(x_{m+1}, . . . , x_n), x_{m+1}, . . . , x_n) = α_i, i = 1, . . . , m.   (3.16)
For convenience, let us define w = (x_1, . . . , x_m)′, u = (x_{m+1}, . . . , x_n)′ and f = (f_1, . . . , f_m)′.
Then, since x* = (w*, u*) = (g(u*), u*) is optimal for (3.12), it follows that u* is an optimal
decision for (3.17):
Maximize f̂_0(u) = f_0(g(u), u)
subject to u ∈ U.   (3.17)
But U is an open subset of R^{n−m} and f̂_0 is a differentiable function on U (since f_0 and g are
differentiable), so that by Theorem 2.3.1, f̂_{0u}(u*) = 0, which we can also express using the chain
rule for derivatives as
f̂_{0u}(u*) = f_{0w}(x*)g_u(u*) + f_{0u}(x*) = 0.   (3.18)
Differentiating (3.16) with respect to u = (x_{m+1}, . . . , x_n)′, we see that
f_w(x*)g_u(u*) + f_u(x*) = 0,
and since the m × m matrix f_w(x*) is nonsingular we can evaluate g_u(u*),
g_u(u*) = −[f_w(x*)]^{−1}f_u(x*),
and substitute in (3.18) to obtain the condition
−f_{0w}f_w^{−1}f_u + f_{0u} = 0 at x* = (w*, u*).   (3.19)
Next, define the m-dimensional column vector λ* by
(λ*)′ = f_{0w}f_w^{−1}|_{x*}.   (3.20)
Then (3.19) and (3.20) can be written as (3.21):
(f_{0w}(x*), f_{0u}(x*)) = (λ*)′(f_w(x*), f_u(x*)).   (3.21)
Since x = (w, u), this is the same as
f_{0x}(x*) = (λ*)′f_x(x*) = λ*_1 f_{1x}(x*) + . . . + λ*_m f_{mx}(x*),
Figure 3.2: Illustration of the theorem; Ω = {x | f_i(x) = α_i, i = 1, . . . , m}.
which is equation (3.13).
To prove (3.14), we vary α in a neighborhood of a fixed value, say ᾱ. We define w*(α) =
(x*_1(α), . . . , x*_m(α))′ and u*(α) = (x*_{m+1}(α), . . . , x*_n(α))′. By hypothesis, f_w is nonsingular at
x*(ᾱ). Since f(x) and x*(α) are continuously differentiable by hypothesis, it follows that f_w is
nonsingular at x*(α) for α in a neighborhood of ᾱ, say N. We have the equations
f(w*(α), u*(α)) = α,   (3.22)
−f_{0w}f_w^{−1}f_u + f_{0u} = 0 at (w*(α), u*(α)),   (3.23)
for α ∈ N. Also, m(α) = f_0(x*(α)), so that
m_α = f_{0w}w*_α + f_{0u}u*_α.   (3.24)
Differentiating (3.22) with respect to α gives
f_w w*_α + f_u u*_α = I,
so that
w*_α + f_w^{−1}f_u u*_α = f_w^{−1},
and multiplying on the left by f_{0w} gives
f_{0w}w*_α + f_{0w}f_w^{−1}f_u u*_α = f_{0w}f_w^{−1}.
Using (3.23), this equation can be rewritten as
f_{0w}w*_α + f_{0u}u*_α = f_{0w}f_w^{−1}.   (3.25)
In (3.25), if we substitute from (3.20) and (3.24), we obtain (3.14) and the theorem is proved. ♦
3.2.2 Geometric interpretation.
The equality constraints of the problem (3.12) define an (n − m)-dimensional surface
Ω = {x | f_i(x) = α_i, i = 1, . . . , m}.
The hypothesis of linear independence of {f_{ix}(x*) | 1 ≤ i ≤ m} guarantees that the tangent plane
through Ω at x* is described by
{h | f_{ix}(x*)h = 0, i = 1, . . . , m},   (3.26)
so that the set of column vectors orthogonal to this tangent surface is
{λ_1∇_x f_1(x*) + . . . + λ_m∇_x f_m(x*) | λ_i ∈ R, i = 1, . . . , m}.
Condition (3.13) is therefore equivalent to saying that at an optimal decision x*, the gradient of the
objective function ∇_x f_0(x*) is normal to the tangent surface (3.26).

3.2.3 Algebraic interpretation.
Let us again define w = (x_1, . . . , x_m)′ and u = (x_{m+1}, . . . , x_n)′. Suppose that f_w(x̃) is nonsin-
gular at some point x̃ = (w̃, ũ) in Ω which is not necessarily optimal. Then the Implicit Function
Theorem enables us to solve, in a neighborhood of x̃, the m equations f(w, u) = α. u can then vary
arbitrarily in a neighborhood of ũ. As u varies, w must change according to w = g(u) (in order to
maintain f(w, u) = α), and the objective function changes according to f̂_0(u) = f_0(g(u), u). The
derivative of f̂_0 at ũ is
f̂_{0u}(ũ) = [f_{0w}g_u + f_{0u}]|_{x̃} = −λ̃′f_u(x̃) + f_{0u}(x̃),
where
λ̃′ = f_{0w}f_w^{−1}|_{x̃}.   (3.27)
Therefore, the direction of steepest increase of f̂_0 at ũ is
∇_u f̂_0(ũ) = −f_u′(x̃)λ̃ + f_{0u}′(x̃),   (3.28)
and if ũ is optimal, ∇_u f̂_0(ũ) = 0, which together with (3.27) is equation (3.13). We shall use (3.27)
and (3.28) in the last section.
3.3 Remarks and Extensions
3.3.1 The condition of linear independence.
The necessary condition (3.13) need not hold if the derivatives f_{ix}(x*), 1 ≤ i ≤ m, are not linearly
independent. This can be checked in the following example:
Minimize sin(x_1² + x_2²)
subject to (π/2)(x_1² + x_2²) = 1.   (3.29)
3.3.2 An alternative condition.
Keeping the notation of Theorem 3.2.1, define the Lagrangian function L : R^{n+m} → R by
L : (x, λ) → f_0(x) − Σ_{i=1}^m λ_i f_i(x). The following is a reformulation of (3.13), and its proof is left as
an exercise.
Let x* be optimal for (3.12), and suppose that f_{ix}(x*), 1 ≤ i ≤ m, are linearly independent.
Then there exists λ* ∈ R^m such that (x*, λ*) is a stationary point of L, i.e., L_x(x*, λ*) = 0 and
L_λ(x*, λ*) = 0.
3.3.3 Second-order conditions.
Since we can convert the problem (3.12) into a problem of maximizing f̂_0 over an open set, all
the comments of Section 2.4 apply to the function f̂_0. However, it is useful to translate these
remarks in terms of the original functions f_0 and f. This is possible because the function g is
uniquely specified by (3.16) in a neighborhood of x*. Furthermore, if f is twice differentiable, so
is g (see Fleming [1965]). It follows that if the functions f_i, 0 ≤ i ≤ m, are twice continuously
differentiable, then so is f̂_0, and a necessary condition for x* to be optimal for (3.12) is (3.13)
together with the condition that the (n − m) × (n − m) matrix f̂_{0uu}(u*) be negative semi-definite.
Furthermore, if this matrix is negative definite then x* is a local optimum. The following exercise
expresses f̂_{0uu}(u*) in terms of derivatives of the functions f_i.
Exercise: Show that
f̂_{0uu}(u*) = [g_u′ ⋮ I] [ L_{ww}  L_{wu} ; L_{uw}  L_{uu} ] [ g_u ; I ] evaluated at (w*, u*),
where
g_u(u*) = −[f_w(x*)]^{−1}f_u(x*), L(x) = f_0(x) − Σ_{i=1}^m λ*_i f_i(x).
3.3.4 A numerical procedure.
We assume that the derivatives f_{ix}(x), 1 ≤ i ≤ m, are linearly independent for all x. Then the
following algorithm is a straightforward adaptation of the procedure in Section 2.4.6.
Step 1. Find x^0 arbitrary so that f_i(x^0) = α_i, 1 ≤ i ≤ m. Set k = 0 and go to Step 2.
Step 2. Find a partition x = (w, u)² of the variables such that f_w(x^k) is nonsingular. Calculate λ^k
by (λ^k)′ = f_{0w}f_w^{−1}|_{x^k}, and ∇_u f̂_0^k(u^k) = −f_u′(x^k)λ^k + f_{0u}′(x^k). If ∇_u f̂_0^k(u^k) = 0, stop.
Otherwise go to Step 3.
Step 3. Set ũ^k = u^k + d_k∇_u f̂_0^k(u^k). Find w̃^k such that f_i(w̃^k, ũ^k) = α_i, 1 ≤ i ≤ m. Set
x^{k+1} = (w̃^k, ũ^k), set k = k + 1, and return to Step 2.
² This is just a notational convenience. The w variable may consist of any m components of x.
Remarks. As before, the step sizes d_k > 0 can be selected in various ways. The practical applicability
of the algorithm depends upon two crucial factors: the ease with which we can find a partition
x = (w, u) so that f_w(x^k) is nonsingular, thus enabling us to calculate λ^k; and the ease with which
we can find w̃^k so that f(w̃^k, ũ^k) = α. In the next section we apply this algorithm to a practical
problem where these two steps can be carried out without too much difficulty.
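In outline, one iteration of this procedure can be sketched as below. The toy problem (maximize −(x_1² + x_2²) subject to x_1 + x_2 = 1, with w = x_1, u = x_2), the fixed step size, and the use of scipy's scalar root finder to recover w̃^k are all assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import brentq

# One concrete instance of the procedure in 3.3.4:
#   maximize f0(x) = -(x1**2 + x2**2)  subject to  f1(x) = x1 + x2 = 1,
# with the partition w = x1, u = x2 (f_w = 1 is trivially nonsingular).

f0 = lambda x1, x2: -(x1**2 + x2**2)

def solve_w(u):                       # recover w from f1(w, u) = 1
    return brentq(lambda w: w + u - 1.0, -100.0, 100.0)

u, d = 0.0, 0.1                       # feasible start: x^0 = (1, 0)
for k in range(200):
    w = solve_w(u)
    lam = -2.0 * w / 1.0              # (lam)' = f0w * fw^{-1} at x^k
    grad_u = -1.0 * lam + (-2.0 * u)  # (3.28): -fu' lam + f0u'
    if abs(grad_u) < 1e-10:
        break
    u = u + d * grad_u                # Step 3; w is re-solved next pass

print(w, u, f0(w, u))                 # ≈ 0.5 0.5 -0.5, the constrained optimum
```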
3.3.5 Design of resistive networks.
Consider a network N with n + 1 nodes and b branches. We choose one of the nodes as datum
and denote by e = (e_1, . . . , e_n)′ the vector of node-to-datum voltages. Orient the network graph
and let v = (v_1, . . . , v_b)′ and j = (j_1, . . . , j_b)′ respectively denote the vectors of branch voltages
and branch currents. Let A be the n × b reduced incidence matrix of the network graph. Then the
Kirchhoff current and voltage laws respectively yield the equations
Aj = 0 and A′e = v.   (3.30)
Next we suppose that each branch k contains a (possibly nonlinear) resistive element of the form
shown in Figure 3.3, so that
j_k − j_{sk} = g_k(v_{rk}) = g_k(v_k − v_{sk}), 1 ≤ k ≤ b,   (3.31)
where v_{rk} is the voltage across the resistor. Here j_{sk}, v_{sk} are the source current and voltage in the
kth branch, and g_k is the characteristic of the resistor. Using the obvious vector notation j_s ∈ R^b,
v_s ∈ R^b for the sources, v_r ∈ R^b for the resistor voltages, and g = (g_1, . . . , g_b)′, we can rewrite
(3.31) as (3.32):
j − j_s = g(v − v_s) = g(v_r).   (3.32)
Although (3.31) implies that the current (j_k − j_{sk}) through the kth resistor depends only on the
voltage v_{rk} = (v_k − v_{sk}) across itself, no essential simplification is achieved. Hence, in (3.32) we
shall assume that g_k is a function of v_r. This allows us to include coupled resistors and voltage-
controlled current sources. Furthermore, let us suppose that there are ℓ design parameters p =
(p_1, . . . , p_ℓ)′ which are under our control, so that (3.32) is replaced by (3.33):
j − j_s = g(v_r, p) = g(v − v_s, p).   (3.33)
Figure 3.3: The kth branch.
If we combine (3.30) and (3.33) we obtain (3.34):
Ag(A′e − v_s, p) = i_s,   (3.34)
where we have defined i_s = Aj_s. The network design problem can then be stated as finding p, v_s, i_s
so as to minimize some specified function f_0(e, p, v_s, i_s). Formally, we have the optimization prob-
lem (3.35):
Minimize f_0(e, p, v_s, i_s)
subject to Ag(A′e − v_s, p) − i_s = 0.   (3.35)
We shall apply the algorithm of 3.3.4 to this problem. To do this we make the following assumption.
Assumption: (a) f_0 is differentiable. (b) g is differentiable and the n × n matrix A(∂g/∂v)(v, p)A′
is nonsingular for all v ∈ R^b, p ∈ R^ℓ. (c) The network N described by (3.34) is determinate, i.e.,
for every value of (p, v_s, i_s) there is a unique e = E(p, v_s, i_s) satisfying (3.34).
In terms of the notation of 3.3.4, if we let x = (e, p, v_s, i_s), then assumption (b) allows us to
identify w = e and u = (p, v_s, i_s). Also let f(x) = f(e, p, v_s, i_s) = Ag(A′e − v_s, p) − i_s. Now the
crucial part of the algorithm is to obtain λ^k at some point x^k. To this end let x̃ = (ẽ, p̃, ṽ_s, ĩ_s) be a
fixed point. Then the corresponding λ = λ̃ is given by (see (3.27))
λ̃′ = f_{0w}(x̃)f_w^{−1}(x̃) = f_{0e}(x̃)f_e^{−1}(x̃).   (3.36)
From the definition of f we have
f_e(x̃) = AG(ṽ_r, p̃)A′,
where ṽ_r = A′ẽ − ṽ_s, and G(ṽ_r, p̃) = (∂g/∂v_r)(ṽ_r, p̃). Therefore, λ̃ is the solution (unique by
assumption (b)) of the following linear equation:
AG′(ṽ_r, p̃)A′λ̃ = f_{0e}′(x̃).   (3.37)
Now (3.37) has the following extremely interesting physical interpretation. If we compare (3.34)
with (3.37) we see immediately that λ̃ is the vector of node-to-datum response voltages of a linear
network N(ṽ_r, p̃) driven by the current sources f_{0e}′(x̃). Furthermore, this network has the same
graph as the original network (since they have the same incidence matrix); moreover, its branch
admittance matrix, G′(ṽ_r, p̃), is the transpose of the incremental branch admittance matrix (evaluated
at (ṽ_r, p̃)) of the original network N. For this reason, N(ṽ_r, p̃) is called the adjoint network (of N)
at (ṽ_r, p̃).
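Numerically, one adjoint analysis is just one linear solve. The sketch below builds equation (3.37) for an assumed two-node (plus datum), three-branch linear network; the incidence matrix, conductances, and source gradient are made-up illustrative data.

```python
import numpy as np

# Solve the adjoint equation (3.37):  A G' A' lam = f0e'(x~).
# Illustrative data only: 2 non-datum nodes, 3 branches, with constant
# branch conductances, so G is the diagonal admittance matrix.
A = np.array([[ 1.0, -1.0,  0.0],   # reduced incidence matrix (n x b)
              [ 0.0,  1.0, -1.0]])
G = np.diag([2.0, 1.0, 0.5])        # incremental branch admittances
f0e = np.array([1.0, 0.0])          # assumed gradient f0e'(x~)

lam = np.linalg.solve(A @ G.T @ A.T, f0e)
print(lam)                           # node-to-datum response of the adjoint network
```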
Once we have obtained λ̃ we can obtain ∇_u f̂_0(ũ) using (3.28). Elementary calculations yield (3.38):

∇_u f̂_0(ũ) = [ f̂_{0p}′(ũ) ]   [ −[(∂g/∂p)(ṽ_r, p̃)]′A′ ]       [ f_{0p}′(x̃) ]
             [ f̂_{0vs}′(ũ) ] = [  G′(ṽ_r, p̃)A′         ] λ̃ +  [ f_{0vs}′(x̃) ]
             [ f̂_{0is}′(ũ) ]   [  I                     ]       [ f_{0is}′(x̃) ]   (3.38)
We can now state the algorithm.
Step 1. Select u^0 = (p^0, v_s^0, i_s^0) arbitrary. Solve (3.34) to obtain e^0 = E(p^0, v_s^0, i_s^0). Let k = 0 and
go to Step 2.
Step 2. Calculate v_r^k = A′e^k − v_s^k. Calculate f_{0e}′(x^k). Calculate the node-to-datum response λ^k of
the adjoint network N(v_r^k, p^k) driven by the current source f_{0e}′(x^k). Calculate ∇_u f̂_0(u^k) from
(3.38). If this gradient is zero, stop. Otherwise go to Step 3.
Step 3. Let u^{k+1} = (p^{k+1}, v_s^{k+1}, i_s^{k+1}) = u^k − d_k∇_u f̂_0(u^k), where d_k > 0 is a predetermined
step size.³ Solve (3.34) to obtain e^{k+1} = E(p^{k+1}, v_s^{k+1}, i_s^{k+1}). Set k = k + 1 and return to Step 2.
³ Note the minus sign in the expression u^k − d_k∇_u f̂_0(u^k). Remember we are minimizing f_0, which is equivalent to
maximizing (−f_0).
Remark 1. Each iteration from u^k to u^{k+1} requires one linear network analysis step (the
computation of λ^k in Step 2), and one nonlinear network analysis step (the computation of e^{k+1} in
Step 3). This latter step may be very complex.
Remark 2. In practice we can control only some of the components of v_s and i_s, the rest being
fixed. The only change this requires in the algorithm is that in Step 3 we set
p^{k+1} = p^k − d_k f̂_{0p}′(u^k) just as before, whereas v_{sj}^{k+1} = v_{sj}^k − d_k(∂f̂_0/∂v_{sj})(u^k) and
i_{sm}^{k+1} = i_{sm}^k − d_k(∂f̂_0/∂i_{sm})(u^k), with j and m ranging only over the controllable components and
the rest of the components equal to their specified values.
Remark 3. The interpretation of λ as the response of the adjoint network has been exploited for
particular functions f_0 in a series of papers (Director and Rohrer [1969a], [1969b], [1969c]). Their
derivation of the adjoint network does not appear as transparent as the one given here. Although
we have used the incidence matrix A to obtain our network equation (3.34), one can use a more
general cutset matrix. Similarly, more general representations of the resistive elements may be
employed. In every case the "adjoint" network arises from a network interpretation of (3.27),
[f_w(x̃)]′λ̃ = f_{0w}′(x̃),
with the transpose of the matrix giving rise to the adjective "adjoint."
Exercise [DC biasing of transistor circuits (see Dowell and Rohrer [1971])]: Let N be a transistor
circuit, and let (3.34) model the dc behavior of this circuit. Suppose that i_s is fixed, v_{sj} for j ∈ J
are variable, and v_{sj} for j ∉ J are fixed. For each choice of v_{sj}, j ∈ J, we obtain the vector e and
hence the branch voltage vector v = A′e. Some of the components v_t, t ∈ T, will correspond to
bias voltages for the transistors in the network, and we wish to choose v_{sj}, j ∈ J, so that v_t is as
close as possible to a desired bias voltage v_t^d, t ∈ T. If we choose nonnegative numbers α_t, with
relative magnitudes reflecting the importance of the different transistors, then we can formulate the
criterion
f_0(e) = Σ_{t∈T} α_t |v_t − v_t^d|².
(i) Specialize the algorithm above for this particular case.
(ii) How do the formulas change if the network equations are written using an arbitrary cutset matrix
instead of the incidence matrix?
Chapter 4
OPTIMIZATION OVER SETS
DEFINED BY INEQUALITY
CONSTRAINTS: LINEAR
PROGRAMMING
In the first section we study in detail Example 2 of Chapter I, and then we define the general linear
programming problem. In the second section we present the duality theory for linear program-
ming and use it to obtain some sensitivity results. In Section 3 we present the Simplex algorithm
which is the main procedure used to solve linear programming problems. In Section 4 we apply
the results of Sections 2 and 3 to study the linear programming theory of a competitive economy.
Additional miscellaneous comments are collected in the last section. For a detailed and readily ac-
cessible treatment of the material presented in this chapter see the companion volume in this Series
(Sakarovitch [1971]).
4.1 The Linear Programming Problem
4.1.1 Example.
Recall Example 2 of Chapter I. Let g and u respectively be the number of graduate and undergradu-
ate students admitted. Then the number of seminars demanded per year is (2g + u)/20, and the
number of lecture courses demanded per year is (5g + 7u)/40. On the supply side of our accounting,
the faculty can offer 2(750) + 3(250) = 2250 seminars and 6(750) + 3(250) = 5250 lecture courses.
Because of his contractual agreements, the President must satisfy
(2g + u)/20 ≤ 2250, or 2g + u ≤ 45,000,
and
(5g + 7u)/40 ≤ 5250, or 5g + 7u ≤ 210,000.
Since negative g or u is meaningless, there are also the constraints g ≥ 0, u ≥ 0. Formally then the
President faces the following decision problem:
Maximize αg + βu
subject to 2g + u ≤ 45,000,
5g + 7u ≤ 210,000,
g ≥ 0, u ≥ 0.   (4.1)
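For concreteness, the President's problem (4.1) can be handed to an off-the-shelf LP solver. The sketch below uses scipy.optimize.linprog with assumed rating weights α = β = 1 (the Notes leave α and β unspecified):

```python
from scipy.optimize import linprog

# Solve (4.1) for assumed weights alpha = beta = 1.  linprog minimizes,
# so we negate the objective; g >= 0, u >= 0 are the default bounds.
alpha, beta = 1.0, 1.0
res = linprog(c=[-alpha, -beta],
              A_ub=[[2, 1], [5, 7]],
              b_ub=[45_000, 210_000],
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, -res.fun)   # optimal (g, u) and the President's rating
```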
It is convenient to use a more general notation. So let x = (g, u)′, c = (α, β)′,
b = (45000, 210000, 0, 0)′, and let A be the 4 × 2 matrix

A = [  2   1 ]
    [  5   7 ]
    [ −1   0 ]
    [  0  −1 ].
Then (4.1) can be rewritten as (4.2):¹
Maximize c′x
subject to Ax ≤ b.   (4.2)
Let A_i, 1 ≤ i ≤ 4, denote the rows of A. Then the set Ω of all vectors x which satisfy the constraints
in (4.2) is given by Ω = {x | A_i x ≤ b_i, 1 ≤ i ≤ 4} and is the polygon OPQR in Figure 4.1.
For each choice x, the President receives the payoff c′x. Therefore, the surface of constant payoff
k, say, is the hyperplane π(k) = {x | c′x = k}. These hyperplanes for different values of k are
parallel to one another since they have the same normal c. Furthermore, as k increases π(k) moves
in the direction c. (Obviously we are assuming in this discussion that c ≠ 0.) Evidently an optimal
decision is any point x* ∈ Ω which lies on a hyperplane π(k) which is farthest along the direction
c. We can rephrase this by saying that x* ∈ Ω is an optimal decision if and only if the plane π*
through x* does not intersect the interior of Ω, and furthermore at x* the direction c points away
from Ω. From this condition we can immediately draw two very important conclusions: (i) at least
one of the vertices of Ω is an optimal decision, and (ii) x* yields a higher payoff than all points
in the cone K* consisting of all rays starting at x* and passing through Ω, since K* lies "below"
π*. The first conclusion is the foundation of the powerful Simplex algorithm which we present in
Section 3. Here we pursue consequences of the second conclusion. For the situation depicted in
Figure 4.1 we can see that x* = Q is an optimal decision and the cone K* is shown in Figure 4.2.
Now x* satisfies A_1x* = b_1, A_2x* = b_2, and A_3x* < b_3, A_4x* < b_4, so that K* is given by
K* = {x* + h | A_1h ≤ 0, A_2h ≤ 0}.
Since c′x* ≥ c′y for all y ∈ K*, we conclude that
c′h ≤ 0 for all h such that A_1h ≤ 0, A_2h ≤ 0.   (4.3)
We pause to formulate the generalization of (4.3) as an exercise.
¹ Recall the notation introduced in 2.1.2, so that x ≤ y means x_i ≤ y_i for all i.
Figure 4.1: Ω = OPQR (A_1 ⊥ QR, A_2 ⊥ PQ, c ⊥ π*; the payoff k increases in the direction c).
Exercise 1: Let A_i, 1 ≤ i ≤ k, be n-dimensional row vectors. Let c ∈ R^n, and let b_i, 1 ≤ i ≤ k,
be real numbers. Consider the problem
Maximize c′x
subject to A_ix ≤ b_i, 1 ≤ i ≤ k.
For any x satisfying the constraints, let I(x) ⊂ {1, . . . , k} be such that A_ix = b_i for i ∈ I(x), and
A_ix < b_i for i ∉ I(x). Suppose x* satisfies the constraints. Show that x* is optimal if and only if
c′h ≤ 0 for all h such that A_ih ≤ 0, i ∈ I(x*).
Returning to our problem, it is clear that (4.3) is satisfied as long as c lies between A_1 and A_2.
Mathematically this means that (4.3) is satisfied if and only if there exist λ*_1 ≥ 0, λ*_2 ≥ 0 such that²
c′ = λ*_1A_1 + λ*_2A_2.   (4.4)
As c varies, the optimal decision will change. We can see from our analysis that the situation is as
follows (see Figure 4.1):
² Although this statement is intuitively obvious, its generalization to n dimensions is a deep theorem known as Farkas'
lemma (see Section 2).
Figure 4.2: K* is the cone generated by Ω at x*.
1. x* = Q is optimal iff c lies between A_1 and A_2, iff c′ = λ*_1A_1 + λ*_2A_2 for some
λ*_1 ≥ 0, λ*_2 ≥ 0;
2. x* ∈ QP is optimal iff c lies along A_2, iff c′ = λ*_2A_2 for some λ*_2 ≥ 0;
3. x* = P is optimal iff c lies between A_3 and A_2, iff c′ = λ*_2A_2 + λ*_3A_3 for some
λ*_2 ≥ 0, λ*_3 ≥ 0; etc.
These statements can be made in a more elegant way as follows:
x* ∈ Ω is optimal iff there exist λ*_i ≥ 0, 1 ≤ i ≤ 4, such that
(a) c′ = Σ_{i=1}^4 λ*_iA_i, (b) if A_ix* < b_i then λ*_i = 0.   (4.5)
For purposes of application it is useful to separate those constraints which are of the form x_i ≥ 0
from the rest, and to reformulate (4.5) accordingly. We leave this as an exercise.
Exercise 2: Show that (4.5) is equivalent to (4.6) below. (Here A_i = (a_{i1}, a_{i2}).) x* ∈ Ω is optimal
iff there exist λ*_1 ≥ 0, λ*_2 ≥ 0 such that

(a) c_i ≤ λ*_1 a_{1i} + λ*_2 a_{2i}, i = 1, 2,
(b) if a_{j1} x*_1 + a_{j2} x*_2 < b_j then λ*_j = 0, j = 1, 2,
(c) if c_i < λ*_1 a_{1i} + λ*_2 a_{2i} then x*_i = 0, i = 1, 2.
(4.6)
4.1.2 Problem formulation.
A linear programming problem (or LP in brief) is any decision problem of the form (4.7):

Maximize c_1 x_1 + c_2 x_2 + . . . + c_n x_n
subject to
a_{i1} x_1 + a_{i2} x_2 + . . . + a_{in} x_n ≤ b_i, 1 ≤ i ≤ k,
a_{i1} x_1 + . . . . . . . . . + a_{in} x_n ≥ b_i, k + 1 ≤ i ≤ ℓ,
a_{i1} x_1 + . . . . . . . . . + a_{in} x_n = b_i, ℓ + 1 ≤ i ≤ m,
and
x_j ≥ 0, 1 ≤ j ≤ p,
x_j ≤ 0, p + 1 ≤ j ≤ q,
x_j arbitrary, q + 1 ≤ j ≤ n,
(4.7)

where the c_j, a_{ij}, b_i are fixed real numbers.
There are two important special cases:
Case I: (4.7) is of the form (4.8):

Maximize Σ_{j=1}^{n} c_j x_j
subject to Σ_{j=1}^{n} a_{ij} x_j ≤ b_i, 1 ≤ i ≤ m,
x_j ≥ 0, 1 ≤ j ≤ n.
(4.8)

Case II: (4.7) is of the form (4.9):

Maximize Σ_{j=1}^{n} c_j x_j
subject to Σ_{j=1}^{n} a_{ij} x_j = b_i, 1 ≤ i ≤ m,
x_j ≥ 0, 1 ≤ j ≤ n.
(4.9)
Although (4.7) appears to be more general than (4.8) and (4.9), such is not the case.
Proposition: Every LP of the form (4.7) can be transformed into an equivalent LP of the form (4.8).
Proof.
Step 1: Replace each inequality constraint Σ a_{ij} x_j ≥ b_i by Σ (−a_{ij}) x_j ≤ (−b_i).
Step 2: Replace each equality constraint Σ a_{ij} x_j = b_i by two inequality constraints: Σ a_{ij} x_j ≤ b_i and Σ (−a_{ij}) x_j ≤ (−b_i).
Step 3: Replace each variable x_j which is constrained x_j ≤ 0 by a variable y_j = −x_j constrained y_j ≥ 0, and then replace a_{ij} x_j by (−a_{ij}) y_j for every i and c_j x_j by (−c_j) y_j.
Step 4: Replace each variable x_j which is not constrained in sign by a pair of variables
y_j − z_j = x_j constrained y_j ≥ 0, z_j ≥ 0, and then replace a_{ij} x_j by a_{ij} y_j + (−a_{ij}) z_j for every i and
c_j x_j by c_j y_j + (−c_j) z_j. Evidently the resulting LP has the form (4.8) and is equivalent to the
original one. ♦
Proposition: Every LP of the form (4.7) can be transformed into an equivalent LP of the form (4.9).
Proof.
Step 1: Replace each inequality constraint Σ a_{ij} x_j ≤ b_i by the equality constraint Σ a_{ij} x_j + y_i = b_i where y_i is an additional variable constrained y_i ≥ 0.
Step 2: Replace each inequality constraint Σ a_{ij} x_j ≥ b_i by the equality constraint Σ a_{ij} x_j − y_i = b_i where y_i is an additional variable constrained y_i ≥ 0. (The new variables added in these steps are called slack variables.)
Step 3, Step 4: Repeat these steps from the previous proposition. Evidently the new LP has the
form (4.9) and is equivalent to the original one. ♦
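The four steps above are mechanical enough to automate. The following sketch (in Python with numpy; all data are made up for illustration) carries a small instance of (4.7) into the form (4.8); appending slack variables instead, as in the second proposition, would produce the form (4.9).

    import numpy as np

    # An instance of (4.7): maximize c'x with one "<=" row, one ">=" row, one
    # "=" row, and x1 >= 0, x2 <= 0, x3 unrestricted in sign.
    c = np.array([1.0, 2.0, -1.0])
    A_le = np.array([[1.0, 1.0, 1.0]]); b_le = np.array([4.0])
    A_ge = np.array([[1.0, -1.0, 0.0]]); b_ge = np.array([-2.0])
    A_eq = np.array([[0.0, 1.0, 2.0]]); b_eq = np.array([3.0])

    # Steps 1 and 2: negate ">=" rows; split each "=" row into two "<=" rows.
    A = np.vstack([A_le, -A_ge, A_eq, -A_eq])
    b = np.concatenate([b_le, -b_ge, b_eq, -b_eq])

    # Step 3: x2 <= 0 becomes y2 = -x2 >= 0 (negate its column of A and of c).
    A[:, 1] *= -1.0
    c = c.copy(); c[1] *= -1.0

    # Step 4: the free x3 becomes y3 - z3 with y3 >= 0, z3 >= 0
    # (append the negated column for z3).
    A = np.hstack([A, -A[:, [2]]])
    c = np.append(c, -c[2])

    # Result: an LP of the form (4.8), maximize c'u subject to A u <= b, u >= 0.
    print(A.shape, b.shape, c.shape)   # (4, 4) (4,) (4,)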
4.2 Qualitative Theory of Linear Programming
4.2.1 Main results.
We begin by quoting a fundamental result. For a proof the reader is referred to (Mangasarian
[1969]).
Farkas’ Lemma. Let A_i, 1 ≤ i ≤ k, be n-dimensional row vectors. Let c ∈ R^n be a column vector.
The following statements are equivalent:
(i) for all x ∈ R^n, A_i x ≤ 0 for 1 ≤ i ≤ k implies c′x ≤ 0,
(ii) there exist λ_1 ≥ 0, . . . , λ_k ≥ 0 such that c′ = Σ_{i=1}^{k} λ_i A_i.
An algebraic version of this result is sometimes more convenient.
Farkas’ Lemma (algebraic version). Let A be a k × n matrix. Let c ∈ R^n. The following statements
are equivalent:
(i) for all x ∈ R^n, Ax ≤ 0 implies c′x ≤ 0,
(ii) there exists λ ≥ 0, λ ∈ R^k, such that A′λ = c.
Using this result it is possible to derive the main results following the intuitive reasoning of 4.1.
We leave this development as two exercises and follow a more elegant but less intuitive approach.
Exercise 1: With the same hypothesis and notation of Exercise 1 in 4.1, use the first version of
Farkas’ lemma to show that there exist λ*_i ≥ 0 for i ∈ I(x*) such that

Σ_{i∈I(x*)} λ*_i A_i = c′.
Exercise 2: Let x* satisfy the constraints for problem (4.8). Use the previous exercise to show
that x* is optimal iff there exist λ*_1 ≥ 0, . . . , λ*_m ≥ 0 such that

(a) c_j ≤ Σ_{i=1}^{m} λ*_i a_{ij}, 1 ≤ j ≤ n,
(b) if Σ_{j=1}^{n} a_{ij} x*_j < b_i then λ*_i = 0, 1 ≤ i ≤ m,
(c) if Σ_{i=1}^{m} λ*_i a_{ij} > c_j then x*_j = 0, 1 ≤ j ≤ n.
In the remaining discussion, c ∈ R^n, b ∈ R^m are fixed vectors, and A = {a_{ij}} is a fixed m × n
matrix, whereas x ∈ R^n and λ ∈ R^m will be variable. Consider the pair of LPs (4.10) and (4.11)
below. (4.10) is called the primal problem and (4.11) is called the dual problem.
Maximize c_1 x_1 + . . . + c_n x_n
subject to a_{i1} x_1 + . . . + a_{in} x_n ≤ b_i, 1 ≤ i ≤ m,
x_j ≥ 0, 1 ≤ j ≤ n.
(4.10)

Minimize λ_1 b_1 + . . . + λ_m b_m
subject to λ_1 a_{1j} + . . . + λ_m a_{mj} ≥ c_j, 1 ≤ j ≤ n,
λ_i ≥ 0, 1 ≤ i ≤ m.
(4.11)
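Before developing the theory, it may help to see the pair (4.10)–(4.11) numerically. The following sketch (Python; the data are made up, and scipy's linprog is assumed available) solves both problems; the two optimal values coincide, as Theorem 1 below asserts. Since linprog minimizes, the primal objective is negated.

    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    b = np.array([8.0, 9.0])
    c = np.array([3.0, 4.0])

    # Primal (4.10): maximize c'x subject to Ax <= b, x >= 0.
    primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

    # Dual (4.11): minimize lambda'b subject to A'lambda >= c, lambda >= 0,
    # written as -A'lambda <= -c.
    dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2)

    print(-primal.fun, dual.fun)   # both 17.0 here: no duality gap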
Definition: Let Ω_p = {x ∈ R^n | Ax ≤ b, x ≥ 0} be the set of all points satisfying the constraints
of the primal problem. Similarly let Ω_d = {λ ∈ R^m | λ′A ≥ c′, λ ≥ 0}. A point x ∈ Ω_p (λ ∈ Ω_d) is
said to be a feasible solution or feasible decision for the primal (dual).
The next result is trivial.
Lemma 1: (Weak duality) Let x ∈ Ω_p, λ ∈ Ω_d. Then

c′x ≤ λ′Ax ≤ λ′b.  (4.12)

Proof: x ≥ 0 and λ′A − c′ ≥ 0 implies (λ′A − c′)x ≥ 0, giving the first inequality. b − Ax ≥ 0 and
λ′ ≥ 0 implies λ′(b − Ax) ≥ 0, giving the second inequality. ♦
Corollary 1: If x* ∈ Ω_p and λ* ∈ Ω_d are such that c′x* = (λ*)′b, then x* is optimal for (4.10) and λ* is
optimal for (4.11).
Theorem 1: (Strong duality) Suppose Ω_p ≠ φ and Ω_d ≠ φ. Then there exists x* which is optimum
for (4.10) and λ* which is optimum for (4.11). Furthermore, c′x* = (λ*)′b.
Proof: Because of Corollary 1 it is enough to prove the last statement, i.e., we must show that
there exist x ≥ 0, λ ≥ 0, such that Ax ≤ b, A′λ ≥ c and b′λ − c′x ≤ 0. By introducing slack
variables y ∈ R^m, µ ∈ R^n, r ∈ R, this is equivalent to the existence of x ≥ 0, y ≥ 0, λ ≥ 0, µ ≤ 0,
r ≥ 0 such that

[ A    I_m   0     0    0 ] [x]   [b]
[ 0    0     A′    I_n  0 ] [y] = [c]
[ −c′  0     b′    0    1 ] [λ]   [0]
                            [µ]
                            [r]

By the algebraic version of Farkas’ Lemma, this is possible if and only if

A′ξ − cθ ≤ 0, ξ ≤ 0,
Aw + bθ ≤ 0, −w ≤ 0,
θ ≤ 0
(4.13)

implies

b′ξ + c′w ≤ 0.  (4.14)
Case (i): Suppose (w, ξ, θ) satisfies (4.13) and θ < 0. Then (ξ/θ) ∈ Ω_d, (w/(−θ)) ∈ Ω_p, so that by
Lemma 1, c′w/(−θ) ≤ b′ξ/θ, which is equivalent to (4.14) since θ < 0.
Case (ii): Suppose (w, ξ, θ) satisfies (4.13) and θ = 0, so that −A′ξ ≥ 0, −ξ ≥ 0, Aw ≤ 0, w ≥ 0.
By hypothesis, there exist x ∈ Ω_p, λ ∈ Ω_d. Hence, −b′ξ = b′(−ξ) ≥ (Ax)′(−ξ) = x′(−A′ξ) ≥ 0,
and c′w ≤ (A′λ)′w = λ′(Aw) ≤ 0, so that b′ξ + c′w ≤ 0. ♦
The existence part of the above result can be strengthened.
Theorem 2: (i) Suppose Ω_p ≠ φ. Then there exists an optimum decision for the primal LP iff
Ω_d ≠ φ.
(ii) Suppose Ω_d ≠ φ. Then there exists an optimum decision for the dual LP iff Ω_p ≠ φ.
Proof: Because of the symmetry of the primal and dual it is enough to prove only (i). The
sufficiency part of (i) follows from Theorem 1, so that only the necessity remains. Suppose, in
contradiction, that Ω_d = φ. We will show that sup {c′x | x ∈ Ω_p} = +∞. Now, Ω_d = φ means
there does not exist λ ≥ 0 such that A′λ ≥ c. Equivalently, there do not exist λ ≥ 0, µ ≥ 0 such
that

[A′ ⋮ −I_n] [λ]
            [µ] = c.

By Farkas’ Lemma there then exists w ∈ R^n such that Aw ≤ 0, −w ≤ 0, and c′w > 0. By hypothesis,
Ω_p ≠ φ, so there exists x ≥ 0 such that Ax ≤ b. But then for any θ > 0, A(x + θw) ≤ b,
(x + θw) ≥ 0, so that (x + θw) ∈ Ω_p. Also, c′(x + θw) = c′x + θc′w. Evidently then, sup
{c′x | x ∈ Ω_p} = +∞, so that there is no optimal decision for the primal. ♦
Remark: In Theorem 2(i), the hypothesis that Ω_p ≠ φ is essential. Consider the following exercise.
Exercise 3: Exhibit a pair of primal and dual problems such that neither has a feasible solution.
Theorem 3: (Optimality condition) x* ∈ Ω_p is optimal if and only if there exists λ* ∈ Ω_d such that

Σ_{j=1}^{n} a_{ij} x*_j < b_i implies λ*_i = 0,
and
Σ_{i=1}^{m} λ*_i a_{ij} > c_j implies x*_j = 0.
(4.15)

((4.15) is known as the condition of complementary slackness.)
Proof: First of all we note that for x* ∈ Ω_p, λ* ∈ Ω_d, (4.15) is equivalent to (4.16):

(λ*)′(Ax* − b) = 0, and (A′λ* − c)′x* = 0.  (4.16)
Necessity. Suppose x* ∈ Ω_p is optimal. Then from Theorem 2, Ω_d ≠ φ, so that by Theorem 1
there exists λ* ∈ Ω_d such that c′x* = (λ*)′b. By Lemma 1 we always have
c′x* ≤ (λ*)′Ax* ≤ (λ*)′b, so that we must have c′x* = (λ*)′Ax* = (λ*)′b. But (4.16) is just an
equivalent rearrangement of these two equalities.
Sufficiency. Suppose (4.16) holds for some x* ∈ Ω_p, λ* ∈ Ω_d. The first equality in (4.16) yields
(λ*)′b = (λ*)′Ax* = (A′λ*)′x*, while the second yields (A′λ*)′x* = c′x*, so that c′x* = (λ*)′b.
By Corollary 1, x* is optimal. ♦
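Condition (4.16) is easy to check numerically. Continuing the small primal–dual pair solved after (4.11) (a sketch; A, b, c, primal and dual are the arrays and results from that example):

    import numpy as np

    x, lam = primal.x, dual.x
    slack_p = b - A @ x        # lambda*_i must vanish where this is > 0
    slack_d = A.T @ lam - c    # x*_j must vanish where this is > 0
    assert np.all(np.abs(lam * slack_p) < 1e-8)   # (lambda*)'(Ax* - b) = 0
    assert np.all(np.abs(x * slack_d) < 1e-8)     # (A'lambda* - c)'x* = 0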
The conditions x* ∈ Ω_p, λ* ∈ Ω_d in Theorem 3 can be replaced by the weaker x* ≥ 0, λ* ≥ 0,
provided we strengthen (4.15) as in the following result, whose proof is left as an exercise.
Theorem 4: (Saddle point) x* ≥ 0 is optimal for the primal if and only if there exists λ* ≥ 0 such
that

L(x, λ*) ≤ L(x*, λ*) ≤ L(x*, λ) for all x ≥ 0 and all λ ≥ 0,  (4.17)

where L : R^n × R^m → R is defined by

L(x, λ) = c′x − λ′(Ax − b).  (4.18)

Exercise 4: Prove Theorem 4.
Remark. The function L is called the Lagrangian. A pair (x*, λ*) satisfying (4.17) is said to form
a saddle-point of L over the set {x | x ∈ R^n, x ≥ 0} × {λ | λ ∈ R^m, λ ≥ 0}.
4.2.2 Results for problem (4.9).
It is possible to derive analogous results for LPs of the form (4.9). We state these results as exercises,
indicating how to use the results already obtained. We begin with a pair of LPs:

Maximize c_1 x_1 + . . . + c_n x_n
subject to a_{i1} x_1 + . . . + a_{in} x_n = b_i, 1 ≤ i ≤ m,
x_j ≥ 0, 1 ≤ j ≤ n.
(4.19)

Minimize λ_1 b_1 + . . . + λ_m b_m
subject to λ_1 a_{1j} + . . . + λ_m a_{mj} ≥ c_j, 1 ≤ j ≤ n.
(4.20)

Note that in (4.20) the λ_i are unrestricted in sign. Again (4.19) is called the primal and (4.20) the
dual. We let Ω_p, Ω_d denote the sets of all x, λ satisfying the constraints of (4.19), (4.20) respectively.
Exercise 5: Prove Theorems 1 and 2 with Ω_p and Ω_d interpreted as above. (Hint: Replace (4.19)
by the equivalent LP: maximize c′x, subject to Ax ≤ b, (−A)x ≤ (−b), x ≥ 0. This is now of the
form (4.10). Apply Theorems 1 and 2.)
Exercise 6: Show that x* ∈ Ω_p is optimal iff there exists λ* ∈ Ω_d such that

x*_j > 0 implies Σ_{i=1}^{m} λ*_i a_{ij} = c_j.

Exercise 7: x* ≥ 0 is optimal iff there exists λ* ∈ R^m such that

L(x, λ*) ≤ L(x*, λ*) ≤ L(x*, λ) for all x ≥ 0, λ ∈ R^m,

where L is defined in (4.18). (Note that, unlike (4.17), λ is not restricted in sign.)
Exercise 8: Formulate a dual for (4.7), and obtain the result analogous to Exercise 5.
4.2.3 Sensitivity analysis.
We investigate how the maximum value of (4.10) or (4.19) changes as the vectors b and c change.
The matrix A will remain fixed. Let Ω_p and Ω_d be the sets of feasible solutions for the pair (4.10) and
(4.11) or for the pair (4.19) and (4.20). We write Ω_p(b) and Ω_d(c) to denote the explicit dependence
on b and c respectively. Let B = {b ∈ R^m | Ω_p(b) ≠ φ} and C = {c ∈ R^n | Ω_d(c) ≠ φ}, and for
(b, c) ∈ B × C define

M(b, c) = max {c′x | x ∈ Ω_p(b)} = min {λ′b | λ ∈ Ω_d(c)}.  (4.21)
For 1 ≤ i ≤ m, ε ∈ R, b ∈ R^m denote

b(i, ε) = (b_1, b_2, . . . , b_{i−1}, b_i + ε, b_{i+1}, . . . , b_m)′,

and for 1 ≤ j ≤ n, ε ∈ R, c ∈ R^n denote

c(j, ε) = (c_1, c_2, . . . , c_{j−1}, c_j + ε, c_{j+1}, . . . , c_n)′.
We define in the usual way the right and left hand partial derivatives of M at a point (b̂, ĉ) ∈ B × C
as follows:

∂M⁺/∂b_i (b̂, ĉ) = lim_{ε→0, ε>0} (1/ε){M(b̂(i, ε), ĉ) − M(b̂, ĉ)},
∂M⁻/∂b_i (b̂, ĉ) = lim_{ε→0, ε>0} (1/ε){M(b̂, ĉ) − M(b̂(i, −ε), ĉ)},
∂M⁺/∂c_j (b̂, ĉ) = lim_{ε→0, ε>0} (1/ε){M(b̂, ĉ(j, ε)) − M(b̂, ĉ)},
∂M⁻/∂c_j (b̂, ĉ) = lim_{ε→0, ε>0} (1/ε){M(b̂, ĉ) − M(b̂, ĉ(j, −ε))}.

Let B̊, C̊ denote the interiors of B, C respectively.
Theorem 5: At each (b̂, ĉ) ∈ B̊ × C̊, the partial derivatives given above exist. Furthermore, if
x̂ ∈ Ω_p(b̂), λ̂ ∈ Ω_d(ĉ) are optimal, then

∂M⁺/∂b_i (b̂, ĉ) ≤ λ̂_i ≤ ∂M⁻/∂b_i (b̂, ĉ), 1 ≤ i ≤ m,  (4.22)
∂M⁺/∂c_j (b̂, ĉ) ≥ x̂_j ≥ ∂M⁻/∂c_j (b̂, ĉ), 1 ≤ j ≤ n.  (4.23)
Proof: We first show (4.22), (4.23) assuming that the partial derivatives exist. By strong duality
M(b̂, ĉ) = λ̂′b̂, and by weak duality M(b̂(i, ε), ĉ) ≤ λ̂′b̂(i, ε), so that

(1/ε){M(b̂(i, ε), ĉ) − M(b̂, ĉ)} ≤ (1/ε)λ̂′{b̂(i, ε) − b̂} = λ̂_i, for ε > 0,
(1/ε){M(b̂, ĉ) − M(b̂(i, −ε), ĉ)} ≥ (1/ε)λ̂′{b̂ − b̂(i, −ε)} = λ̂_i, for ε > 0.

Taking limits as ε → 0, ε > 0, gives (4.22).
On the other hand, M(b̂, ĉ) = ĉ′x̂, and M(b̂, ĉ(j, ε)) ≥ (ĉ(j, ε))′x̂, so that

(1/ε){M(b̂, ĉ(j, ε)) − M(b̂, ĉ)} ≥ (1/ε){ĉ(j, ε) − ĉ}′x̂ = x̂_j, for ε > 0,
(1/ε){M(b̂, ĉ) − M(b̂, ĉ(j, −ε))} ≤ (1/ε){ĉ − ĉ(j, −ε)}′x̂ = x̂_j, for ε > 0,

which give (4.23) as ε → 0, ε > 0.
Finally, the existence of the right and left partial derivatives follows from Exercises 8, 9 below. ♦
We recall some fundamental definitions from convex analysis.
Definition: X ⊂ R^n is said to be convex if x, y ∈ X and 0 ≤ θ ≤ 1 implies (θx + (1 − θ)y) ∈ X.
Definition: Let X ⊂ R^n and f : X → R. (i) f is said to be convex if X is convex, and x, y ∈ X,
0 ≤ θ ≤ 1 implies f(θx + (1 − θ)y) ≤ θf(x) + (1 − θ)f(y). (ii) f is said to be concave if −f is
convex, i.e., x, y ∈ X, 0 ≤ θ ≤ 1 implies f(θx + (1 − θ)y) ≥ θf(x) + (1 − θ)f(y).
Exercise 8: (a) Show that Ω_p, Ω_d, and the sets B ⊂ R^m, C ⊂ R^n defined above are convex sets.
(b) Show that for fixed c ∈ C, M(·, c) : B → R is concave and for fixed b ∈ B, M(b, ·) : C → R
is convex.
Exercise 9: Let X ⊂ R^n, and f : X → R be convex. Show that at each point x̂ in the interior of
X, the left and right hand partial derivatives of f exist. (Hint: First show that for
ε_2 > ε_1 > 0 > δ_1 > δ_2,

(1/ε_2){f(x̂(i, ε_2)) − f(x̂)} ≥ (1/ε_1){f(x̂(i, ε_1)) − f(x̂)} ≥ (1/δ_1){f(x̂(i, δ_1)) − f(x̂)} ≥ (1/δ_2){f(x̂(i, δ_2)) − f(x̂)}.

Then the result follows immediately.)
Remark 1: Clearly if (∂M/∂b_i)(b̂, ĉ) exists, then we have equality in (4.22), and then this result
compares with (3.14).
Remark 2: We can also show without difficulty that M(·, c) and M(b, ·) are piecewise linear (more
accurately, linear plus constant) functions on B and C respectively. This is useful in some
computational problems.
Remark 3: The variables of the dual problem are called Lagrange variables or dual variables or
shadow-prices. The reason behind the last name will be clear in Section 4.
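A finite-difference check of (4.22) is straightforward. The sketch below reuses the primal–dual example solved after (4.11); at its non-degenerate optimum the one-sided derivatives agree, so the difference quotient matches the optimal dual variable λ̂_i.

    import numpy as np
    from scipy.optimize import linprog

    eps = 1e-3
    M0 = -linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2).fun
    for i in range(len(b)):
        b_eps = b.copy(); b_eps[i] += eps
        Mi = -linprog(-c, A_ub=A, b_ub=b_eps, bounds=[(0, None)] * 2).fun
        print((Mi - M0) / eps, dual.x[i])   # approximately equal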
4.3 The Simplex Algorithm
4.3.1 Preliminaries
We now present the celebrated Simplex algorithm for finding an optimum solution to any LP of the
form (4.24):

Maximize c_1 x_1 + . . . + c_n x_n
subject to a_{i1} x_1 + . . . + a_{in} x_n = b_i, 1 ≤ i ≤ m,
x_j ≥ 0, 1 ≤ j ≤ n.
(4.24)
As mentioned in 4.1 the algorithm rests upon the observations that if an optimal decision exists, then at least
one vertex of the feasible set Ω_p is an optimal solution. Since Ω_p has only finitely many vertices (see
Corollary 1 below), we only have to investigate a finite set. The practicability of this investigation
depends on the ease with which we can characterize the vertices of Ω_p. This is done in Lemma 1.
In the following we let A^j denote the jth column of A, i.e., A^j = (a_{1j}, . . . , a_{mj})′. We begin with
a precise definition of a vertex.
Definition: x ∈ Ω_p is said to be a vertex of Ω_p if x = λy + (1 − λ)z, with y, z in Ω_p and
0 < λ < 1, implies x = y = z.
Definition: For x ∈ Ω_p, let I(x) = {j | x_j > 0}.
Lemma 1: Let x ∈ Ω_p. Then x is a vertex of Ω_p iff {A^j | j ∈ I(x)} is a linearly independent set.
Exercise 1: Prove Lemma 1.
Corollary 1: Ω_p has at most Σ_{j=1}^{m} n!/((n − j)! j!) vertices.
Lemma 2: Let x* be an optimal decision of (4.24). Then there is a vertex z* of Ω_p which is optimal.
Proof: If {A^j | j ∈ I(x*)} is linearly independent, let z* = x* and we are done. Hence suppose
{A^j | j ∈ I(x*)} is linearly dependent, so that there exist γ_j, not all zero, such that

Σ_{j∈I(x*)} γ_j A^j = 0.

For θ ∈ R define z(θ) ∈ R^n by

z_j(θ) = x*_j + θγ_j, j ∈ I(x*),
z_j(θ) = 0, j ∉ I(x*).

Then

Az(θ) = Σ_{j∈I(x*)} z_j(θ)A^j = Σ_{j∈I(x*)} x*_j A^j + θ Σ_{j∈I(x*)} γ_j A^j = b + θ · 0 = b.

Since x*_j > 0 for j ∈ I(x*), it follows that z(θ) ≥ 0 when

|θ| ≤ min {x*_j / |γ_j| | j ∈ I(x*), γ_j ≠ 0} = θ* say.

Hence z(θ) ∈ Ω_p whenever |θ| ≤ θ*. Since x* is optimal we must have

c′x* ≥ c′z(θ) = c′x* + θ Σ_{j∈I(x*)} c_j γ_j for −θ* ≤ θ ≤ θ*.

Since θ can take on positive and negative values, the inequality above can hold only if
Σ_{j∈I(x*)} c_j γ_j = 0, and then c′x* = c′z(θ), so that z(θ) is also an optimal solution for |θ| ≤ θ*. But from the
definition of z(θ) it is easy to see that we can pick θ_0 with |θ_0| = θ* such that z_{j_0}(θ_0) = x*_{j_0} + θ_0 γ_{j_0} = 0
for at least one j = j_0 in I(x*). Then

I(z(θ_0)) ⊂ I(x*) − {j_0}.

Again, if {A^j | j ∈ I(z(θ_0))} is linearly independent, then we let z* = z(θ_0) and we are done.
Otherwise we repeat the procedure above with z(θ_0). Clearly, in a finite number of steps we will
find an optimal decision z* which is also a vertex. ♦
At this point we abandon the geometric term “vertex” and switch to established LP terminology.
Definition: (i) z is said to be a basic feasible solution if z ∈ Ω_p and {A^j | j ∈ I(z)} is linearly
independent. The set I(z) is then called the basis at z, and x_j, j ∈ I(z), are called the basic
variables at z; x_j, j ∉ I(z), are called the non-basic variables at z.
Definition: A basic feasible solution z is said to be non-degenerate if I(z) has m elements.
Notation: Let z be a non-degenerate basic feasible solution, and let j_1 < j_2 < . . . < j_m
constitute I(z). Let D(z) denote the m × m non-singular matrix D(z) = [A^{j_1} ⋮ A^{j_2} ⋮ · · · ⋮ A^{j_m}], let
c(z) denote the m-dimensional column vector c(z) = (c_{j_1}, . . . , c_{j_m})′, and define λ(z) by λ′(z) =
c′(z)[D(z)]⁻¹. We call λ(z) the shadow-price vector at z.
Lemma 3: Let z be a non-degenerate basic feasible solution. Then z is optimal if and only if

λ′(z)A^j ≥ c_j for all j ∉ I(z).  (4.25)
Proof: By Exercise 6 of Section 4.2, z is optimal iff there exists λ such that

λ′A^j = c_j for j ∈ I(z),  (4.26)
λ′A^j ≥ c_j for j ∉ I(z).  (4.27)

But since z is non-degenerate, (4.26) holds iff λ = λ(z), and then (4.27) is the same as (4.25). ♦
4.3.2 The Simplex Algorithm.
The algorithm is divided into two parts: In Phase I we determine if Ω_p is empty or not, and if not,
we obtain a basic feasible solution. Phase II starts with a basic feasible solution and determines if
it is optimal or not, and if not, obtains another basic feasible solution with a higher value. Iterating
on this procedure, in a finite number of steps, either we obtain an optimum solution or we discover
that no optimum exists, i.e., sup {c′x | x ∈ Ω_p} = +∞. We shall discuss Phase II first.
We make the following simplifying assumption. We will comment on it later.
Assumption of non-degeneracy. Every basic feasible solution is non-degenerate.
Phase II:
Step 1. Let z_0 be a basic feasible solution obtained from Phase I or by any other means. Set k = 0
and go to Step 2.
Step 2. Calculate [D(z_k)]⁻¹, c(z_k), and the shadow-price vector λ′(z_k) = c′(z_k)[D(z_k)]⁻¹. For
each j ∉ I(z_k) calculate c_j − λ′(z_k)A^j. If all these numbers are ≤ 0, stop, because z_k is optimal
by Lemma 3. Otherwise pick any ĵ ∉ I(z_k) such that c_ĵ − λ′(z_k)A^ĵ > 0 and go to Step 3.
Step 3. Let I(z_k) consist of j_1 < j_2 < . . . < j_m. Compute the vector
γ^k = (γ^k_{j_1}, . . . , γ^k_{j_m})′ = [D(z_k)]⁻¹A^ĵ. If γ^k ≤ 0, stop, because by Lemma 4 below, there is no
finite optimum. Otherwise go to Step 4.
Step 4. Compute θ = min {(z^k_j / γ^k_j) | j ∈ I(z_k), γ^k_j > 0}. Evidently 0 < θ < ∞. Define z_{k+1} by

z^{k+1}_j = z^k_j − θγ^k_j, j ∈ I(z_k),
z^{k+1}_j = θ, j = ĵ,
z^{k+1}_j = 0, j ≠ ĵ and j ∉ I(z_k).
(4.28)

By Lemma 5 below, z_{k+1} is a basic feasible solution with c′z_{k+1} > c′z_k. Set k = k + 1 and return
to Step 2.
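A minimal dense-matrix sketch of Phase II follows (Python with numpy; it recomputes each inverse from scratch instead of updating it as in Remark 1 below, it relies on the non-degeneracy assumption, and it uses a small tolerance in place of exact comparisons).

    import numpy as np

    def simplex_phase2(A, b, c, basis):
        """Phase II for: maximize c'x subject to Ax = b, x >= 0, starting
        from the basic feasible solution whose basis I(z) is given."""
        m, n = A.shape
        while True:
            D = A[:, basis]                       # D(z): the basic columns
            z_B = np.linalg.solve(D, b)           # values of basic variables
            lam = np.linalg.solve(D.T, c[basis])  # shadow prices lambda(z)
            reduced = c - lam @ A                 # c_j - lambda'(z) A^j
            j_hat = next((j for j in range(n)
                          if j not in basis and reduced[j] > 1e-9), None)
            if j_hat is None:                     # Step 2: optimal (Lemma 3)
                x = np.zeros(n); x[basis] = z_B
                return x, lam
            gamma = np.linalg.solve(D, A[:, j_hat])   # Step 3
            if np.all(gamma <= 1e-9):
                raise ValueError("no finite optimum (Lemma 4)")
            theta, i_out = min((z_B[i] / gamma[i], i)
                               for i in range(m) if gamma[i] > 1e-9)
            basis[i_out] = j_hat                  # Step 4: exchange columns

For the equality-form version of the earlier numerical example (slack variables appended), simplex_phase2(np.array([[2., 1., 1., 0.], [1., 3., 0., 1.]]), np.array([8., 9.]), np.array([3., 4., 0., 0.]), [2, 3]) returns x = (3, 2, 0, 0) and λ = (1, 1), in agreement with the duality example above.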
Lemma 4: If γ^k ≤ 0, then sup {c′x | x ∈ Ω_p} = ∞.
Proof: Define z(θ) by

z_j(θ) = z^k_j − θγ^k_j, j ∈ I(z_k),
z_j(θ) = θ, j = ĵ,
z_j(θ) = 0, j ∉ I(z_k) and j ≠ ĵ.
(4.29)

First of all, since γ^k ≤ 0 it follows that z(θ) ≥ 0 for θ ≥ 0. Next, Az(θ) = Az_k − θ Σ_{j∈I(z_k)} γ^k_j A^j +
θA^ĵ = Az_k by definition of γ^k. Hence, z(θ) ∈ Ω_p for θ ≥ 0. Finally,

c′z(θ) = c′z_k − θc′(z_k)γ^k + θc_ĵ
       = c′z_k + θ{c_ĵ − c′(z_k)[D(z_k)]⁻¹A^ĵ}
       = c′z_k + θ{c_ĵ − λ′(z_k)A^ĵ}.
(4.30)

But from Step 2, {c_ĵ − λ′(z_k)A^ĵ} > 0, so that c′z(θ) → ∞ as θ → ∞. ♦
Lemma 5: z_{k+1} is a basic feasible solution and c′z_{k+1} > c′z_k.
Proof: Let j̃ ∈ I(z_k) be such that γ^k_{j̃} > 0 and z^k_{j̃} = θγ^k_{j̃}. Then from (4.28) we see that z^{k+1}_{j̃} = 0,
hence

I(z_{k+1}) ⊂ (I(z_k) − {j̃}) ∪ {ĵ},  (4.31)

so that it is enough to prove that A^ĵ is linearly independent of {A^j | j ∈ I(z_k), j ≠ j̃}. But if this were
not the case, we would have γ^k_{j̃} = 0, giving a contradiction. Finally, if we compare (4.28) and (4.29), we see
from (4.30) that

c′z_{k+1} − c′z_k = θ{c_ĵ − λ′(z_k)A^ĵ},

which is positive from Step 2. ♦
Corollary 2: In a finite number of steps Phase II will obtain an optimal solution or will determine
that sup {c′x | x ∈ Ω_p} = ∞.
Corollary 3: Suppose Phase II terminates at an optimal basic feasible solution z*. Then λ(z*) is an
optimal solution of the dual of (4.24).
Exercise 2: Prove Corollaries 2 and 3.
Remark 1: By the non-degeneracy assumption, I(z_{k+1}) has m elements, so that in (4.31) we must
have equality. We see then that D(z_{k+1}) is obtained from D(z_k) by replacing the column A^{j̃} by
the column A^ĵ. More precisely, if D(z_k) = [A^{j_1} ⋮ · · · ⋮ A^{j_{i−1}} ⋮ A^{j̃} ⋮ A^{j_{i+1}} ⋮ · · · ⋮ A^{j_m}] and if
j_k < ĵ < j_{k+1}, then D(z_{k+1}) = [A^{j_1} ⋮ · · · ⋮ A^{j_{i−1}} ⋮ A^{j_{i+1}} ⋮ · · · ⋮ A^{j_k} ⋮ A^ĵ ⋮ A^{j_{k+1}} ⋮ · · · ⋮ A^{j_m}]. Let E be the
matrix E = [A^{j_1} ⋮ · · · ⋮ A^{j_{i−1}} ⋮ A^ĵ ⋮ A^{j_{i+1}} ⋮ · · · ⋮ A^{j_m}]. Then [D(z_{k+1})]⁻¹ = P E⁻¹ where the matrix P
permutes the columns of D(z_{k+1}) such that E = D(z_{k+1})P. Next, if A^ĵ = Σ_{ℓ=1}^{m} γ_{j_ℓ} A^{j_ℓ}, it is easy
to check that E⁻¹ = M[D(z_k)]⁻¹ where M is the identity matrix with its ith column replaced by

(−γ_{j_1}/γ_{j̃}, . . . , −γ_{j_{i−1}}/γ_{j̃}, 1/γ_{j̃}, −γ_{j_{i+1}}/γ_{j̃}, . . . , −γ_{j_m}/γ_{j̃})′.

Then [D(z_{k+1})]⁻¹ = PM[D(z_k)]⁻¹, so that these inverses can be easily computed.
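In code the update described above is a single rank-one operation (a numpy sketch; for simplicity the permutation P is ignored, i.e., the entering column simply takes the leaving column's slot in the basis):

    import numpy as np

    def update_inverse(D_inv, A_jhat, i):
        """Replace the ith basic column by A^jhat and return the new inverse
        [D(z_{k+1})]^{-1} = M [D(z_k)]^{-1}, with M as in Remark 1."""
        gamma = D_inv @ A_jhat          # coordinates of A^jhat in the old basis
        M = np.eye(len(gamma))
        M[:, i] = -gamma / gamma[i]     # ith column of M built from gamma
        M[i, i] = 1.0 / gamma[i]
        return M @ D_inv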
Remark 2: The similarity between Step 2 of Phase II and Step 2 of the algorithm in 3.3.4 is
striking. The basic variables at z_k correspond to the variables w_k and the non-basic variables
correspond to u_k. For each j ∉ I(z_k) we can interpret the number c_j − λ′(z_k)A^j to be the net
increase in the objective value per unit increase in the jth component of z_k. This net increase is due
to the direct increase c_j minus the indirect decrease λ′(z_k)A^j due to the compensating changes in
the basic variables necessary to maintain feasibility. The analogous quantity in 3.3.4 is
(∂f_0/∂u_j)(x_k) − (λ^k)′(∂f/∂u_j)(x_k).
Remark 3: By eliminating any dependent equations in (4.24) we can guarantee that the matrix A
has rank m. Hence at any degenerate basic feasible solution z_k we can always find Ī(z_k) ⊃ I(z_k)
such that Ī(z_k) has m elements and {A^j | j ∈ Ī(z_k)} is a linearly independent set. We can apply
Phase II using Ī(z_k) instead of I(z_k). But then in Step 4 it may turn out that θ = 0, so that
z_{k+1} = z_k. The reason for this is that Ī(z_k) is not unique, so that we have to try various
alternatives for Ī(z_k) until we find one for which θ > 0. In this way the non-degeneracy
assumption can be eliminated. For details see (Canon, et al., [1970]).
We now describe how to obtain an initial basic feasible solution.
Phase I:
Step 1. By multiplying some of the equality constraints in (4.24) by −1 if necessary, we can assume
that b ≥ 0. Replace the LP (4.24) by the LP (4.32) involving the variables x and y:

Maximize −Σ_{i=1}^{m} y_i
subject to a_{i1} x_1 + . . . + a_{in} x_n + y_i = b_i, 1 ≤ i ≤ m,
x_j ≥ 0, y_i ≥ 0, 1 ≤ j ≤ n, 1 ≤ i ≤ m.
(4.32)
Go to Step 2.
Step 2. Note that (x_0, y_0) = (0, b) is a basic feasible solution of (4.32). Apply Phase II to (4.32)
starting with this solution. Phase II must terminate in an optimum basic feasible solution (x*, y*),
since the value of the objective function in (4.32) lies between −Σ_{i=1}^{m} b_i and 0. Go to Step 3.
Step 3. If y* = 0, x* is a basic feasible solution for (4.24). If y* ≠ 0, by Exercise 3 below, (4.24)
has no feasible solution.
Exercise 3: Show that (4.24) has a feasible solution iff y* = 0.
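Phase I can reuse the Phase II routine sketched earlier on the auxiliary problem (4.32); under the non-degeneracy assumption every auxiliary variable leaves the basis when (4.24) is feasible. A sketch:

    import numpy as np

    def phase1(A, b):
        """Find an initial basic feasible solution of (4.24) by applying
        Phase II (simplex_phase2 above) to the auxiliary LP (4.32)."""
        m, n = A.shape
        A = np.where(b[:, None] < 0, -A, A)   # Step 1: arrange b >= 0
        b = np.abs(b)
        A_aux = np.hstack([A, np.eye(m)])     # columns for the y variables
        c_aux = np.concatenate([np.zeros(n), -np.ones(m)])   # max -sum(y_i)
        basis = list(range(n, n + m))         # (x, y) = (0, b) is basic feasible
        x_aux, _ = simplex_phase2(A_aux, b, c_aux, basis)
        if np.any(x_aux[n:] > 1e-9):          # Step 3: y* != 0
            raise ValueError("(4.24) has no feasible solution")
        return x_aux[:n], basis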
4.4 LP Theory of a Firm in a Competitive Economy
4.4.1 Activity analysis of the firm.
We think of a firm as a system which transforms inputs into outputs. There are m kinds of inputs
and k kinds of outputs. Inputs are usually classified into raw materials such as iron ore, crude oil,
or raw cotton; intermediate products such as steel, chemicals, or textiles; capital goods³ such as
machines of various kinds, or factory buildings, office equipment, or computers; and finally various
kinds of labor services. The firm’s outputs themselves may be raw materials (if it is a mining
company) or intermediate products (if it is a steel mill) or capital goods (if it manufactures lathes)
or finished goods (if it makes shirts or bakes cookies) which go directly to the consumer. Labor is
not usually considered an output since slavery is not practiced; however, it may be considered an
output in a “closed,” dynamic Malthusian framework where the increase in labor is a function of the
output. (See the von Neumann model in (Nikaido [1968]), p. 141.)
Within the firm, this transformation can be conducted in different ways, i.e., different combinations
of inputs can be used to produce the same combination of outputs, since human labor can
do the same job as some machines and machines can replace other kinds of machines, etc. This
substitutability among inputs is a fundamental concept in economics. We formalize it by specifying
which transformation possibilities are available to the firm.
By an input vector we mean any m-dimensional vector r = (r_1, . . . , r_m)′ with r ≥ 0, and by an
output vector we mean any k-dimensional vector y = (y_1, . . . , y_k)′ with y ≥ 0. We now make three
basic assumptions about the firm.
(i) The transformation of inputs into outputs is organized into a finite number, say n, of processes
or activities.
(ii) Each activity combines the m inputs in fixed proportions into the k outputs in fixed proportions.
Furthermore, each activity can be conducted at any non-negative intensity or level. Precisely,
the jth activity is characterized completely by two vectors A^j = (a_{1j}, a_{2j}, . . . , a_{mj})′ and
B^j = (b_{1j}, . . . , b_{kj})′ so that if it is conducted at a level x_j ≥ 0, then it combines (transforms) the
input vector (a_{1j} x_j, . . . , a_{mj} x_j)′ = x_j A^j into the output vector (b_{1j} x_j, . . . , b_{kj} x_j)′ = x_j B^j. Let
A be the m × n matrix A = [A^1 ⋮ · · · ⋮ A^n] and B be the k × n matrix B = [B^1 ⋮ · · · ⋮ B^n].

³It is more accurate to think of the services of capital goods rather than these goods themselves as inputs. It is these services which are consumed in the transformation into outputs.
(iii) If the firm conducts all the activities simultaneously with the jth activity at level x_j ≥ 0, 1 ≤ j ≤
n, then it transforms the input vector x_1 A^1 + . . . + x_n A^n into the output vector x_1 B^1 + . . . + x_n B^n.
With these assumptions we know all the transformations technically possible as soon as we specify
the matrices A and B. Which of these possible transformations will actually take place depends
upon their relative profitability and availability of inputs. We study this next.
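As a small numerical illustration of assumption (iii) (all figures made up; Python with numpy):

    import numpy as np

    # m = 2 inputs, k = 2 outputs, n = 3 activities.
    A = np.array([[1.0, 2.0, 0.5],   # a_ij: input i used per unit level of j
                  [3.0, 1.0, 1.0]])
    B = np.array([[2.0, 0.0, 1.0],   # b_ij: output i made per unit level of j
                  [0.0, 1.0, 1.0]])
    x = np.array([1.0, 2.0, 4.0])    # activity levels

    print(A @ x)   # total input vector  x_1 A^1 + ... + x_n A^n = (7, 9)
    print(B @ x)   # total output vector x_1 B^1 + ... + x_n B^n = (6, 6)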
4.4.2 Short-term behavior.
In the short term, the firm cannot change the amount available to it of some of the inputs such as
capital equipment, certain kinds of labor, and perhaps some raw materials. Let us suppose that these
inputs are 1, 2, . . . , ℓ and they are available in the amounts r̄_1, . . . , r̄_ℓ, whereas the supply of the
remaining inputs can be varied. We assume that the firm is operating in a competitive economy,
which means that the unit prices p = (p_1, . . . , p_k)′ of the outputs and q = (q_1, . . . , q_m)′ of the
inputs are fixed. Then the manager of the firm, if he is maximizing the firm’s profits, faces the
following decision problem:

Maximize p′y − Σ_{j=ℓ+1}^{m} q_j r_j
subject to y = Bx,
a_{i1} x_1 + . . . + a_{in} x_n ≤ r̄_i, 1 ≤ i ≤ ℓ,
a_{i1} x_1 + . . . + a_{in} x_n ≤ r_i, ℓ + 1 ≤ i ≤ m,
x_j ≥ 0, 1 ≤ j ≤ n; r_i ≥ 0, ℓ + 1 ≤ i ≤ m.
(4.33)

The decision variables are the activity levels x_1, . . . , x_n and the short-term input supplies r_{ℓ+1}, . . . , r_m.
The coefficients of B and A are the fixed technical coefficients of the firm, the r̄_i are the fixed short-term
supplies, whereas the p_i, q_j are prices determined by the whole economy, which the firm accepts
as given. Under realistic conditions (4.33) has an optimal solution, say, x*_1, . . . , x*_n, r*_{ℓ+1}, . . . , r*_m.
4.4.3 Long-term equilibrium behavior.
In the long run the supplies of the first ℓ inputs are also variable and the firm can change these
supplies from r̄_1, . . . , r̄_ℓ by buying or selling these inputs at the market prices q_1, . . . , q_ℓ. Whether
the firm will actually change these inputs will depend upon whether it is profitable to do so, and in
turn this depends upon the prices p, q. We say that the prices (p*, q*) and a set of input supplies
r* = (r*_1, . . . , r*_m) are in (long-term) equilibrium if the firm has no profit incentive to change r*
under the prices (p*, q*).
Theorem 1: p*, q*, r* are in equilibrium if and only if q* is an optimal solution of (4.34):

Minimize (r*)′q
subject to A′q ≥ B′p*,
q ≥ 0.
(4.34)

Proof: Let c = B′p*. By definition, p*, q*, r* are in equilibrium iff for all fixed ∆ ∈ R^m,
M(∆) ≤ M(0), where M(∆) is the maximum value of the LP (4.35):

Maximize c′x − (q*)′∆
subject to Ax ≤ r* + ∆,
x ≥ 0.
(4.35)
For ∆ = 0, (4.34) becomes the dual of (4.35), so that by the strong duality theorem, M(0) = (r*)′q*.
Hence p*, q*, r* are in equilibrium iff

c′x − (q*)′∆ ≤ M(0) = (r*)′q*,  (4.36)

whenever x is feasible for (4.35). By weak duality, if x is feasible for (4.35) and q is feasible for
(4.34),

c′x − (q*)′∆ ≤ q′(r* + ∆) − (q*)′∆,  (4.37)

and, in particular, for q = q*,

c′x − (q*)′∆ ≤ (q*)′(r* + ∆) − (q*)′∆ = (q*)′r*,

so that (4.36) holds. ♦
Remark 1: We have shown that (p*, q*, r*) are in long-term equilibrium iff q* is an optimum
solution to the dual (namely (4.34)) of (4.38):

Maximize c′x
subject to Ax ≤ r*,
x ≥ 0.
(4.38)

This relation between p*, q*, r* has a very nice economic interpretation. Recall that c = B′p*, i.e.,
c_j = p*_1 b_{1j} + p*_2 b_{2j} + . . . + p*_k b_{kj}. Now b_{ij} is the amount of the ith output produced by operating the
jth activity at a unit level x_j = 1. Hence, c_j is the revenue per unit level operation of the jth activity,
so that c′x is the revenue when the n activities are operated at levels x. On the other hand, if the jth
activity is operated at level x_j = 1, it uses an amount a_{ij} of the ith input. If the ith input is valued at
q*_i, then the input cost of operating at x_j = 1 is Σ_{i=1}^{m} q*_i a_{ij}, so that the input cost of operating the n
activities at levels x is (A′q*)′x = (q*)′Ax. Thus, if x* is the optimum activity levels for (4.38), then
the output revenue is c′x* and the input cost is (q*)′Ax*. But from (4.16), (q*)′(Ax* − r*) = 0, so
that

c′x* = (q*)′r*,  (4.39)

i.e., at the optimum activity levels, in equilibrium, total revenue = total cost of input supplies. In
fact, we can say even more. From (4.15) we see that if x*_j > 0 then

c_j = Σ_{i=1}^{m} q*_i a_{ij},

i.e., at the optimum, the revenue of an activity operated at a positive level = input cost of that activity.
Also, if

c_j < Σ_{i=1}^{m} q*_i a_{ij},

then x*_j = 0, i.e., if the revenue of an activity is less than its input cost, then at the optimum it is
operated at zero level. Finally, again from (4.15), if in equilibrium the optimum ith input supply r*_i
is greater than the optimum demand for the ith input,

r*_i > Σ_{j=1}^{n} a_{ij} x*_j,

then q*_i = 0, i.e., the equilibrium price of an input which is in excess supply must be zero; in other
words it must be a free good.
Remark 2: Returning to the short-term decision problem (4.33), suppose that
(λ*_1, . . . , λ*_ℓ, λ*_{ℓ+1}, . . . , λ*_m) is an optimum solution of the dual of (4.33). Suppose that the market
prices of inputs 1, . . . , ℓ are q_1, . . . , q_ℓ. Let us denote by M(∆_1, . . . , ∆_ℓ) the optimum value of
(4.33) when the amounts of the inputs in fixed supply are r̄_1 + ∆_1, . . . , r̄_ℓ + ∆_ℓ. Then if
(∂M/∂∆_i)|_{∆=0} exists, we can see from (4.22) that it is always profitable to increase the ith input
by buying some additional amount at price q_i if λ*_i > q_i, and conversely it is profitable to sell some
of the ith input at price q_i if λ*_i < q_i. Thus λ*_i can be interpreted as the firm’s internal valuation of
the ith input or the firm’s imputed or shadow price of the ith input. This interpretation has wide
applicability, which we mention briefly. Often engineering design problems can be formulated as
LPs of the form (4.10) or (4.19), where some of the coefficients b_i are design parameters. The
design procedure is to fix these parameters at some nominal value b̄_i, and carry out the
optimization problem. Suppose the resulting optimal dual variables are λ*_i. Then we see (assuming
differentiability) that it is worth increasing b̄_i if the unit cost of increasing this parameter is less
than λ*_i, and it is worth decreasing this parameter if the reduction in total cost per unit decrease is
greater than λ*_i.
4.4.4 Long-term equilibrium of a competitive, capitalist economy.
The profit-maximizing behavior of the firm presented above is one of the two fundamental building
blocks in the equilibrium theory of a competitive, capitalist economy. Unfortunately we cannot
present the details here. We shall limit ourselves to a rough sketch. We think of the economy as
a feedback process involving firms and consumers. Let us suppose that there are a total of h commodities
in the economy including raw materials, intermediate and capital goods, labor, and finished
products. By adding zero rows to the matrices (A, B) characterizing a firm we can suppose that all
the h commodities are possible inputs and all the h commodities are possible outputs. Of course,
for an individual firm most of the inputs and most of the outputs will be zero. The sole purpose for
making this change is that we no longer need to distinguish between prices of inputs and prices of
outputs. We observe the economy starting at time T. At this time there exists within the economy
an inventory of the various commodities which we can represent by a vector ω = (ω_1, . . . , ω_h) ≥ 0.
ω is that portion of the outputs produced prior to T which have not been consumed up to T. We are
assuming that this is a capitalist economy, which means that the ownership of ω is divided among
the various consumers j = 1, . . . , J. More precisely, the jth consumer owns the vector of commodities
ω(j) ≥ 0, and Σ_{j=1}^{J} ω(j) = ω. We are including in ω(j) the amount of his labor services which
consumer j is willing to sell. Now suppose that at time T the prevailing prices of the h commodities
are λ = (λ_1, . . . , λ_h)′ ≥ 0. Next, suppose that the managers of the various firms assume that the
prices λ are not going to change for a long period of time. Then, from our previous analysis we
know that the manager of the ith firm will plan to buy input supplies r(i) ≥ 0, r(i) ∈ R^h, such
that (λ, r(i)) is in long-term equilibrium, and he will plan to produce an optimum amount, say y(i).
Here i = 1, 2, . . . , I, where I is the total number of firms. We know that r(i) and y(i) depend on
λ, so that we explicitly write r(i, λ), y(i, λ). We also recall that (see (4.39))

λ′r(i, λ) = λ′y(i, λ), 1 ≤ i ≤ I.  (4.40)
Now the ith manager can buy r(i) from only two sources: outputs from other firms, and the consumers
who collectively own ω. Similarly, the ith manager can sell his planned output y(i) either as
input supplies to other firms or to the consumers. Thus, the net supply offered for sale to consumers
is S(λ), where

S(λ) = Σ_{j=1}^{J} ω(j) + Σ_{i=1}^{I} y(i, λ) − Σ_{i=1}^{I} r(i, λ).  (4.41)

We note two important facts. First of all, from (4.40), (4.41) we immediately conclude that

λ′S(λ) = Σ_{j=1}^{J} λ′ω(j),  (4.42)

that is, the value of the supply offered to consumers is equal to the value of the commodities (and
labor) which they own. The second point is that there is no reason to expect that S(λ) ≥ 0.
Now we come to the second building block of equilibrium theory. The value of the jth consumer’s
possessions is λ′ω(j). The theory assumes that he will plan to buy a set of commodities d(j) =
(d_1(j), . . . , d_h(j))′ ≥ 0 so as to maximize his satisfaction subject to the constraint λ′d(j) = λ′ω(j).
Here also d(j) will depend on λ, so we write d(j, λ). If we add up the buying plans of all the
consumers we obtain the total demand

D(λ) = Σ_{j=1}^{J} d(j, λ) ≥ 0,  (4.43)

which also satisfies

λ′D(λ) = Σ_{j=1}^{J} λ′ω(j).  (4.44)

The most basic question of equilibrium theory is to determine conditions under which there exists a
price vector λ_E such that the economy is in equilibrium, i.e., S(λ_E) = D(λ_E), because if such an
equilibrium price λ_E exists, then at that price the production plans of all the firms and the buying
plans of all the consumers can be realized. Unfortunately we must stop at this point since we cannot
proceed further without introducing some more convex analysis and the fixed point theorem. For
a simple treatment the reader is referred to (Dorfman, Samuelson, and Solow [1958], Chapter 13).
For a much more general mathematical treatment see (Nikaido [1968], Chapter V).
4.5 Miscellaneous Comments
4.5.1 Some mathematical tricks.
It is often the case in practical decision problems that the objective is not well-defined. There may
be a number of plausible objective functions. In our LP framework this situation can be formulated
as follows. The constraints are given as usual by Ax ≤ b, x ≥ 0. However, there are, say, k
objective functions (c^1)′x, . . . , (c^k)′x. It is reasonable then to define a single objective function
f_0(x) by f_0(x) = minimum {(c^1)′x, (c^2)′x, . . . , (c^k)′x}, so that we have the decision problem

Maximize f_0(x)
subject to Ax ≤ b, x ≥ 0.
(4.45)

This is not an LP since f_0 is not linear. However, the following exercise shows how to transform
(4.45) into an equivalent LP.
Exercise 1: Show that (4.45) is equivalent to (4.46) below, in the sense that x* is optimal for (4.45)
iff (x*, y*) = (x*, f_0(x*)) is optimal for (4.46).

Maximize y
subject to Ax ≤ b, x ≥ 0,
y ≤ (c^i)′x, 1 ≤ i ≤ k.
(4.46)
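The reduction in Exercise 1 is easy to carry out numerically. The sketch below (Python; data made up, scipy assumed) maximizes min{(c^1)′x, (c^2)′x} by solving (4.46) in the variables (x_1, x_2, y):

    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[1.0, 1.0]]); b = np.array([4.0])
    C = np.array([[3.0, 1.0],        # rows are (c^1)', (c^2)'
                  [1.0, 3.0]])

    # Constraints: Ax <= b, and y - (c^i)'x <= 0 for each i.
    A_ub = np.vstack([np.hstack([A, np.zeros((1, 1))]),
                      np.hstack([-C, np.ones((2, 1))])])
    b_ub = np.concatenate([b, np.zeros(2)])
    obj = np.array([0.0, 0.0, -1.0])            # maximize y = minimize -y
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None), (None, None)])
    print(res.x)   # approximately (2, 2, 8): the max-min payoff is 8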
Exercise 1 will also indicate how to do Exercise 2.
Exercise 2: Obtain an equivalent LP for (4.47):

Maximize Σ_{j=1}^{n} c_j(x_j)
subject to Ax ≤ b, x ≥ 0,
(4.47)

where the c_j : R → R are concave, piecewise-linear functions of the kind shown in Figure 4.3.
The above-given assumption of the concavity of the c_j is crucial. In the next exercise, the
interpretation of “equivalent” is purposely left ambiguous.
Exercise 3: Construct an example of the kind (4.47), where the c_j are piecewise linear (but not
concave), and such that there is no equivalent LP.
It turns out, however, that even if the c_j are not concave, an elementary modification of the Simplex
algorithm can be given to obtain a “local” optimal decision. See (Miller [1963]).
4.5.2 Scope of linear programming.
LP is today the single most important optimization technique. This is because many decision prob-
lems can be adequately formulated as LPs, and, given the capabilities of modern computers, the
Simplex method (together with its variants) is an extremely powerful technique for solving LPs in-
volving thousands of variables. To obtain a feeling for the scope of LP we refer the reader to the
book by one of the originators of LP (Dantzig [1963]).
[Figure 4.3: A concave, piecewise-linear function c_j(x_j) of the form used in Exercise 2.]
Chapter 5
OPTIMIZATION OVER SETS
DEFINED BY INEQUALITY
CONSTRAINTS: NONLINEAR
PROGRAMMING
In many decision-making situations the assumption of linearity of the constraint inequalities in LP
is quite restrictive. The linearity of the objective function is not restrictive as shown in the first
exercise below. In Section 1 we present the general nonlinear programming problem (NP) and
prove the Kuhn-Tucker theorem. Section 2 deals with Duality theory for the case where appropriate
convexity conditions are satisfied. Two applications are given. Section 3 is devoted to the important
special case of quadratic programming. The last section is devoted to computational considerations.
5.1 Qualitative Theory of Nonlinear Programming
5.1.1 The problem and elementary results.
The general NP is a decision problem of the form:

Maximize f_0(x)
subject to f_i(x) ≤ 0, i = 1, . . . , m,
(5.1)

where x ∈ R^n, and f_i : R^n → R, i = 0, 1, . . . , m, are differentiable functions. As in Chapter 4,
x ∈ R^n is said to be a feasible solution if it satisfies the constraints of (5.1), and Ω ⊂ R^n is the
subset of all feasible solutions; x* ∈ Ω is said to be an optimal decision or optimal solution if
f_0(x*) ≥ f_0(x) for x ∈ Ω. From the discussion in 4.1.2 it is clear that equality constraints and sign
constraints on some of the components of x can all be transformed into the form (5.1). The next
exercise shows that we could restrict ourselves to objective functions which are linear; however, we
will not do this.
Exercise 1: Show that (5.2), with variables y ∈ R, x ∈ R^n, is equivalent to (5.1):

Maximize y
subject to f_i(x) ≤ 0, 1 ≤ i ≤ m, and y − f_0(x) ≤ 0.
(5.2)
Returning to problem (5.1), we are interested in obtaining conditions which any optimal decision
must satisfy. The argument parallels very closely that developed in Exercise 1 of 4.1 and Exercise 1
of 4.2. The basic idea is to linearize the functions f_i in a neighborhood of an optimal decision x*.
Definition: Let x be a feasible solution, and let I(x) ⊂ {1, . . . , m} be such that f_i(x) = 0 for
i ∈ I(x), f_i(x) < 0 for i ∉ I(x). (The set I(x) is called the set of active constraints at x.)
Definition: (i) Let x ∈ Ω. A vector h ∈ R^n is said to be an admissible direction for Ω at x if there
exists a sequence x^k, k = 1, 2, . . . , in Ω and a sequence of numbers ε_k, k = 1, . . . , with ε_k > 0
for all k, such that

lim_{k→∞} x^k = x,
lim_{k→∞} (1/ε_k)(x^k − x) = h.

(ii) Let C(Ω, x) = {h | h is an admissible direction for Ω at x}. C(Ω, x) is called the tangent cone
of Ω at x. Let K(Ω, x) = {x + h | h ∈ C(Ω, x)}. (See Figures 5.1 and 5.2 and compare them with
Figures 4.1 and 4.2.)
If we take x^k = x and ε_k = 1 for all k, we see that 0 ∈ C(Ω, x), so that the tangent cone is always
nonempty. Two more properties are stated below.
Exercise 2: (i) Show that C(Ω, x) is a cone, i.e., if h ∈ C(Ω, x) and θ ≥ 0, then θh ∈ C(Ω, x).
(ii) Show that C(Ω, x) is a closed subset of R^n. (Hint for (ii): For m = 1, 2, . . . , let h^m and
{x^{mk}, ε_{mk} > 0}_{k=1}^{∞} be such that x^{mk} → x and (1/ε_{mk})(x^{mk} − x) → h^m as k → ∞. Suppose
that h^m → h as m → ∞. Show that there exist subsequences {x^{mk_m}, ε_{mk_m}}_{m=1}^{∞} such that
x^{mk_m} → x and (1/ε_{mk_m})(x^{mk_m} − x) → h as m → ∞.)
In the definition of C(Ω, x) we made no use of the particular functional description of Ω. The
following elementary result is more interesting in this light and should be compared with (2.18) in
Chapter 2 and Exercise 1 of 4.1.
Lemma 1: Suppose x* ∈ Ω is an optimum decision for (5.1). Then

f_{0x}(x*)h ≤ 0 for all h ∈ C(Ω, x*).  (5.3)

[Figure 5.1: Ω = PQR. The figure shows the constraint surfaces {x | f_i(x) = 0}, i = 1, 2, 3, the optimal point x* = Q, the direction of increasing payoff, and the level sets π(k) = {x | f_0(x) = k}.]

Proof: Let x^k ∈ Ω, ε_k > 0, k = 1, 2, 3, . . . , be such that

lim_{k→∞} x^k = x*,
lim_{k→∞} (1/ε_k)(x^k − x*) = h.  (5.4)

Note that in particular (5.4) implies

lim_{k→∞} (1/ε_k)|x^k − x*| = |h|.  (5.5)

Since f_0 is differentiable, by Taylor’s theorem we have

f_0(x^k) = f_0(x* + (x^k − x*)) = f_0(x*) + f_{0x}(x*)(x^k − x*) + o(|x^k − x*|).  (5.6)

Since x^k ∈ Ω, and x* is optimal, we have f_0(x^k) ≤ f_0(x*), so that

0 ≥ f_{0x}(x*) (x^k − x*)/ε_k + o(|x^k − x*|)/ε_k.

Taking limits as k → ∞, using (5.4) and (5.5), we can see that

0 ≥ lim_{k→∞} f_{0x}(x*) (x^k − x*)/ε_k + lim_{k→∞} {o(|x^k − x*|)/|x^k − x*|} · {|x^k − x*|/ε_k} = f_{0x}(x*)h. ♦
[Figure 5.2: C(Ω, x*) is the tangent cone of Ω at x*, and K(Ω, x*) = x* + C(Ω, x*).]
The basic problem that remains is to characterize the set C(Ω, x*) in terms of the derivatives of the
functions f_i. Then we can apply Farkas’ Lemma just as in Exercise 1 of 4.2.
Lemma 2: Let x* ∈ Ω. Then

C(Ω, x*) ⊂ {h | f_{ix}(x*)h ≤ 0 for all i ∈ I(x*)}.  (5.7)

Proof: Let h ∈ R^n and x^k ∈ Ω, ε_k > 0, k = 1, 2, . . . , satisfy (5.4). Since f_i is differentiable, by
Taylor’s theorem we have

f_i(x^k) = f_i(x*) + f_{ix}(x*)(x^k − x*) + o(|x^k − x*|).

Since x^k ∈ Ω, f_i(x^k) ≤ 0, and if i ∈ I(x*), f_i(x*) = 0, so that f_i(x^k) ≤ f_i(x*). Following the
proof of Lemma 1 we can conclude that 0 ≥ f_{ix}(x*)h. ♦
Lemma 2 gives us a partial characterization of C(Ω, x*). Unfortunately, in general the inclusion
sign in (5.7) cannot be reversed. The main reason for this is that the set {f_{ix}(x*) | i ∈ I(x*)} is not
in general linearly independent.
Exercise 3: Let x ∈ R², f_1(x_1, x_2) = (x_1 − 1)³ + x_2, and f_2(x_1, x_2) = −x_2. Let
(x*_1, x*_2) = (1, 0). Then I(x*) = {1, 2}. Show that

C(Ω, x*) ≠ {h | f_{ix}(x*)h ≤ 0, i = 1, 2}.

(Note that {f_{1x}(x*), f_{2x}(x*)} is not a linearly independent set; see Lemma 4 below.)
5.1.2 Kuhn-Tucker Theorem.
Definition: Let x* ∈ Ω. We say that the constraint qualification (CQ) is satisfied at x* if
C(Ω, x*) = {h | f_{ix}(x*)h ≤ 0 for all i ∈ I(x*)},
and we say that CQ is satisfied if CQ is satisfied at all x ∈ Ω. (Note that by Lemma 2, C(Ω, x*) is
always a subset of the right-hand side.)
Compare the next result with Exercise 2 of 4.2.
Theorem 1: (Kuhn and Tucker [1951]) Let x* be an optimum solution of (5.1), and suppose that
CQ is satisfied at x*. Then there exist λ*_i ≥ 0, for i ∈ I(x*), such that

f_{0x}(x*) = Σ_{i∈I(x*)} λ*_i f_{ix}(x*).  (5.8)

Proof: By Lemma 1 and the definition of CQ it follows that f_{0x}(x*)h ≤ 0 whenever f_{ix}(x*)h ≤ 0
for all i ∈ I(x*). By the Farkas’ Lemma of 4.2.1 it follows that there exist λ*_i ≥ 0 for i ∈ I(x*)
such that (5.8) holds. ♦
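Condition (5.8) can be checked numerically at a candidate point. A made-up example (Python with numpy): maximize f_0(x) = −(x_1 − 2)² − (x_2 − 2)² subject to f_1(x) = x_1 + x_2 − 2 ≤ 0, whose optimum is x* = (1, 1) with the single constraint active.

    import numpy as np

    x_star = np.array([1.0, 1.0])
    f0_x = -2.0 * (x_star - 2.0)     # gradient (row) of f0 at x*: (2, 2)
    f1_x = np.array([1.0, 1.0])      # gradient of f1 at x*
    lam = 2.0                        # a multiplier with lam >= 0
    assert np.allclose(f0_x, lam * f1_x)   # (5.8) holds at x*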
In the original formulation of the decision problem we often have equality constraints of the form
r_j(x) = 0, which get replaced by r_j(x) ≤ 0, −r_j(x) ≤ 0 to give the form (5.1). It is convenient in
application to separate the equality constraints from the rest. Theorem 1 can then be expressed as
Theorem 2.
Theorem 2: Consider the problem (5.9):

Maximize f_0(x)
subject to f_i(x) ≤ 0, i = 1, . . . , m,
r_j(x) = 0, j = 1, . . . , k.
(5.9)

Let x* be an optimum decision and suppose that CQ is satisfied at x*. Then there exist λ*_i ≥ 0, i =
1, . . . , m, and µ*_j, j = 1, . . . , k, such that

f_{0x}(x*) = Σ_{i=1}^{m} λ*_i f_{ix}(x*) + Σ_{j=1}^{k} µ*_j r_{jx}(x*),  (5.10)

and

λ*_i = 0 whenever f_i(x*) < 0.  (5.11)

Exercise 4: Prove Theorem 2.
An alternative form of Theorem 1 will prove useful for computational purposes (see Section 4).
Theorem 3: Consider (5.9), and suppose that CQ is satisfied at an optimal solution x*. Define
ψ : R^n → R by

ψ(h) = max {−f_{0x}(x*)h, f_1(x*) + f_{1x}(x*)h, . . . , f_m(x*) + f_{mx}(x*)h},

and consider the decision problem

Minimize ψ(h)
subject to −ψ(h) − f_{0x}(x*)h ≤ 0,
−ψ(h) + f_i(x*) + f_{ix}(x*)h ≤ 0, 1 ≤ i ≤ m,
−1 ≤ h_i ≤ 1, i = 1, . . . , n.
(5.12)

Then h = 0 is an optimal solution of (5.12).
Exercise 5: Prove Theorem 3. (Note that by Exercise 1 of 4.5, (5.12) can be transformed into an
LP.)
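Since (5.12) becomes an LP in the variables (h, ψ) after the transformation of Exercise 1 of 4.5, it can be solved directly; in the computational methods of Section 4 the same LP is solved at non-optimal points x to obtain a direction of improvement. A sketch (Python; scipy assumed; f0_x is the gradient row of f_0 at x, f_vals[i] = f_i(x) and f_grads[i] = f_{ix}(x)):

    import numpy as np
    from scipy.optimize import linprog

    def direction_lp(f0_x, f_vals, f_grads):
        n = len(f0_x)
        # Rows: -psi - f0_x h <= 0 and f_i + f_ix h - psi <= 0.
        A_ub = np.vstack([np.hstack([-np.asarray(f0_x), [-1.0]]),
                          np.hstack([np.asarray(f_grads),
                                     -np.ones((len(f_vals), 1))])])
        b_ub = np.concatenate([[0.0], -np.asarray(f_vals)])
        obj = np.concatenate([np.zeros(n), [1.0]])    # minimize psi
        bounds = [(-1, 1)] * n + [(None, None)]
        res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        return res.x[:n], res.x[n]                    # (h, psi(h))

At an optimal x* satisfying CQ the optimal value is ψ = 0 and h = 0 is optimal, in agreement with Theorem 3.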
Remark: For problem (5.9) define the Lagrangian function L:

(x_1, . . . , x_n; λ_1, . . . , λ_m; µ_1, . . . , µ_k) ↦ f_0(x) − Σ_{i=1}^{m} λ_i f_i(x) − Σ_{j=1}^{k} µ_j r_j(x).

Then Theorem 2 is equivalent to the following statement: if CQ is satisfied and x* is optimal, then
there exist λ* ≥ 0 and µ* such that L_x(x*, λ*, µ*) = 0 and L(x*, λ*, µ*) ≤ L(x*, λ, µ) for all
λ ≥ 0, µ.
There is a very important special case when the necessary conditions of Theorem 1 are also
sufficient. But first we need some elementary properties of convex functions which are stated as an
exercise. Some additional properties which we will use later are also collected here.
Recall the definition of convex and concave functions in 4.2.3.
Exercise 6: Let X ⊂ R^n be convex. Let h : X → R be a differentiable function. Then
(i) h is convex iff h(y) ≥ h(x) + h_x(x)(y − x) for all x, y in X,
(ii) h is concave iff h(y) ≤ h(x) + h_x(x)(y − x) for all x, y in X,
(iii) h is concave and convex iff h is affine, i.e., h(x) ≡ α + b′x for some
fixed α ∈ R, b ∈ R^n.
Suppose that h is twice differentiable. Then
(iv) h is convex iff h_{xx}(x) is positive semidefinite for all x in X,
(v) h is concave iff h_{xx}(x) is negative semidefinite for all x in X,
(vi) h is convex and concave iff h_{xx}(x) ≡ 0.
Theorem 4: (Sufficient condition) In (5.1) suppose that f_0 is concave and f_i is convex for
i = 1, . . . , m. Then
(i) Ω is a convex subset of R^n, and
(ii) if there exist x* ∈ Ω, λ*_i ≥ 0, i ∈ I(x*), satisfying (5.8), then x* is an optimal solution of
(5.1).
Proof:
(i) Let y, z be in Ω so that f_i(y) ≤ 0, f_i(z) ≤ 0 for i = 1, . . . , m. Let 0 ≤ θ ≤ 1. Since f_i is
convex we have

f_i(θy + (1 − θ)z) ≤ θf_i(y) + (1 − θ)f_i(z) ≤ 0, 1 ≤ i ≤ m,

so that (θy + (1 − θ)z) ∈ Ω, hence Ω is convex.
(ii) Let x ∈ Ω be arbitrary. Since f_0 is concave, by Exercise 6 we have

f_0(x) ≤ f_0(x*) + f_{0x}(x*)(x − x*),

so that by (5.8)

f_0(x) ≤ f_0(x*) + Σ_{i∈I(x*)} λ*_i f_{ix}(x*)(x − x*).  (5.13)

Next, f_i is convex so that again by Exercise 6,

f_i(x) ≥ f_i(x*) + f_{ix}(x*)(x − x*);

but f_i(x) ≤ 0, and f_i(x*) = 0 for i ∈ I(x*), so that

f_{ix}(x*)(x − x*) ≤ 0 for i ∈ I(x*).  (5.14)

Combining (5.14) with the fact that λ*_i ≥ 0, we conclude from (5.13) that f_0(x) ≤ f_0(x*), so that
x* is optimal. ♦
Exercise 7: Under the hypothesis of Theorem 4, show that the subset Ω* of Ω, consisting of all the
optimal solutions of (5.1), is a convex set.
Exercise 8: A function h : X → R defined on a convex set X ⊂ R^n is said to be strictly convex if
h(θy + (1 − θ)z) < θh(y) + (1 − θ)h(z) whenever 0 < θ < 1 and y, z are in X with y ≠ z. h is
said to be strictly concave if −h is strictly convex. Under the hypothesis of Theorem 4, show that
an optimal solution to (5.1) is unique (if it exists) if either f_0 is strictly concave or if the
f_i, 1 ≤ i ≤ m, are strictly convex. (Hint: Show that in (5.13) we have strict inequality if x ≠ x*.)
5.1.3 Sufficient conditions for CQ.
As stated, it is usually impractical to verify if CQ is satisfied for a particular problem. In this
subsection we give two conditions which guarantee CQ. These conditions can often be verified in
practice. Recall that a function g : R^n → R is said to be affine if g(x) ≡ α + b′x for some fixed
α ∈ R and b ∈ R^n.
We adopt the formulation (5.1) so that

Ω = {x ∈ R^n | f_i(x) ≤ 0, 1 ≤ i ≤ m}.
Lemma 3: Suppose x* ∈ Ω and suppose there exists h* ∈ R^n such that for each i ∈ I(x*), either
f_{ix}(x*)h* < 0, or f_{ix}(x*)h* = 0 and f_i is affine. Then CQ is satisfied at x*.
Proof: Let h ∈ R^n be such that f_{ix}(x*)h ≤ 0 for i ∈ I(x*). Let δ > 0. We will first show that
(h + δh*) ∈ C(Ω, x*). To this end let ε_k > 0, k = 1, 2, . . . , be a sequence converging to 0 and set
x^k = x* + ε_k(h + δh*). Clearly x^k converges to x*, and (1/ε_k)(x^k − x*) converges to (h + δh*).
Also, for i ∈ I(x*), if f_{ix}(x*)h* < 0, then

f_i(x^k) = f_i(x*) + ε_k f_{ix}(x*)(h + δh*) + o(ε_k|h + δh*|)
         ≤ δε_k f_{ix}(x*)h* + o(ε_k|h + δh*|)
         < 0 for sufficiently large k,

whereas for i ∈ I(x*), if f_i is affine, then

f_i(x^k) = f_i(x*) + ε_k f_{ix}(x*)(h + δh*) ≤ 0 for all k.

Finally, for i ∉ I(x*) we have f_i(x*) < 0, so that f_i(x^k) < 0 for sufficiently large k. Thus we
have also shown that x^k ∈ Ω for sufficiently large k, and so by definition (h + δh*) ∈ C(Ω, x*).
Since δ > 0 can be arbitrarily small, and since C(Ω, x*) is a closed set by Exercise 2, it follows that
h ∈ C(Ω, x*). ♦
Exercise 9: Suppose x* ∈ Ω and suppose there exists x̂ ∈ R^n such that for each i ∈ I(x*), either
f_i(x̂) < 0 and f_i is convex, or f_i(x̂) ≤ 0 and f_i is affine. Then CQ is satisfied at x*. (Hint: Show
that h* = x̂ − x* satisfies the hypothesis of Lemma 3.)
Lemma 4: Suppose x* ∈ Ω and suppose there exists h* ∈ R^n such that f_{ix}(x*)h* ≤ 0 for
i ∈ I(x*), and {f_{ix}(x*) | i ∈ I(x*), f_{ix}(x*)h* = 0} is a linearly independent set. Then CQ is
satisfied at x*.
Proof: Let h ∈ R^n be such that f_{ix}(x*)h ≤ 0 for all i ∈ I(x*). Let δ > 0. We will show that
(h + δh*) ∈ C(Ω, x*). Let J_δ = {i | i ∈ I(x*), f_{ix}(x*)(h + δh*) = 0} consist of p elements.
Clearly J_δ ⊂ J = {i | i ∈ I(x*), f_{ix}(x*)h* = 0}, so that {f_{ix}(x*) | i ∈ J_δ} is linearly
independent. Partition x = (w, u) with w ∈ R^p in such a way that {f_{iw}(x*) | i ∈ J_δ} is a basis in R^p.
By the Implicit Function Theorem, there exist ρ > 0, an open set V ⊂ R^n containing
x* = (w*, u*), and a differentiable function g : U → R^p, where U = {u ∈ R^{n−p} | |u − u*| < ρ},
such that

f_i(w, u) = 0, i ∈ J_δ, and (w, u) ∈ V
iff
u ∈ U, and w = g(u).

Next we partition h, h* as h = (ξ, η), h* = (ξ*, η*) corresponding to the partition of x = (w, u).
Let ε_k > 0, k = 1, 2, . . . , be any sequence converging to 0, and set u^k = u* + ε_k(η + δη*), w^k =
g(u^k), and finally x^k = (w^k, u^k).
We note that u^k converges to u*, so w^k = g(u^k) converges to w* = g(u*). Thus, x^k converges
to x*. Now (1/ε_k)(x^k − x*) = (1/ε_k)(w^k − w*, u^k − u*) = (1/ε_k)(g(u^k) − g(u*), ε_k(η + δη*)).
Since g is differentiable, it follows that (1/ε_k)(x^k − x*) converges to (g_u(u*)(η + δη*), η + δη*).
But for i ∈ J_δ we have

0 = f_{ix}(x*)(h + δh*) = f_{iw}(x*)(ξ + δξ*) + f_{iu}(x*)(η + δη*).  (5.15)

Also, for i ∈ J_δ, 0 = f_i(g(u), u) for u ∈ U, so that 0 = f_{iw}(x*)g_u(u*) + f_{iu}(x*), and hence

0 = f_{iw}(x*)g_u(u*)(η + δη*) + f_{iu}(x*)(η + δη*).  (5.16)

If we compare (5.15) and (5.16) and recall that {f_{iw}(x*) | i ∈ J_δ} is a basis in R^p, we can conclude
that (ξ + δξ*) = g_u(u*)(η + δη*), so that (1/ε_k)(x^k − x*) converges to (h + δh*).
It remains to show that x^k ∈ Ω for sufficiently large k. First of all, for i ∈ J_δ, f_i(x^k) =
f_i(g(u^k), u^k) = 0, whereas for i ∉ J_δ, i ∈ I(x*),

f_i(x^k) = f_i(x*) + f_{ix}(x*)(x^k − x*) + o(|x^k − x*|)
         = f_i(x*) + ε_k f_{ix}(x*)(h + δh*) + o(ε_k) + o(|x^k − x*|),

and since f_i(x*) = 0 whereas f_{ix}(x*)(h + δh*) < 0, we can conclude that f_i(x^k) < 0 for sufficiently
large k. Finally, for i ∉ I(x*), f_i(x*) < 0 so that f_i(x^k) < 0 for sufficiently large k. Thus,
x^k ∈ Ω for sufficiently large k. Hence, (h + δh*) ∈ C(Ω, x*).
To finish the proof we note that δ > 0 can be made arbitrarily small, and C(Ω, x*) is closed by
Exercise 2, so that h ∈ C(Ω, x*). ♦
The next lemma applies to the formulation (5.9). Its proof is left as an exercise since it is very
similar to the proof of Lemma 4.
Lemma 5: Suppose x* is feasible for (5.9) and suppose there exists h* ∈ R^n such that the set
{f_{ix}(x*) | i ∈ I(x*), f_{ix}(x*)h* = 0} ∪ {r_{jx}(x*) | j = 1, . . . , k} is linearly independent, and
f_{ix}(x*)h* ≤ 0 for i ∈ I(x*), r_{jx}(x*)h* = 0 for 1 ≤ j ≤ k. Then CQ is satisfied at x*.
Exercise 10: Prove Lemma 5.
5.2 Duality Theory
Duality theory is perhaps the most beautiful part of nonlinear programming. It has resulted in many
applications within nonlinear programming, in terms of suggesting important computational algo-
rithms, and it has provided many unifying conceptual insights into economics and management
science. We can only present some of the basic results here, and even so some of the proofs are
relegated to the Appendix at the end of this Chapter since they depend on advanced material. How-
ever, we will give some geometric insight. In 2.3 we give some applications of duality theory and in
2.2 we refer to some of the important generalizations. The results in 2.1 should be compared with
Theorems 1 and 4 of 4.2.1 and the results in 4.2.3.
It may be useful to note in the following discussion that most of the results do not require differ-
entiability of the various functions.
5.2.1 Basic results.
Consider problem (5.17), which we call the primal problem:

Maximize f_0(x)
subject to f_i(x) ≤ b̂_i, 1 ≤ i ≤ m,
x ∈ X,     (5.17)

where x ∈ R^n, f_i : R^n → R, 1 ≤ i ≤ m, are given convex functions, f_0 : R^n → R is a given concave function, X is a given convex subset of R^n, and b̂ = (b̂_1, . . . , b̂_m)′ is a given vector. For convenience, let f = (f_1, . . . , f_m)′ : R^n → R^m. We wish to examine the behavior of the maximum value of (5.17) as b̂ varies. So we define

Ω(b) = {x | x ∈ X, f(x) ≤ b},  B = {b | Ω(b) ≠ φ},

and M : B → R ∪ {+∞} by

M(b) = sup{f_0(x) | x ∈ X, f(x) ≤ b} = sup{f_0(x) | x ∈ Ω(b)},

so that in particular if x* is an optimal solution of (5.17) then M(b̂) = f_0(x*). We need to consider the following problem also. Let λ ∈ R^m, λ ≥ 0, be fixed:

Maximize f_0(x) − λ′(f(x) − b̂)
subject to x ∈ X,     (5.18)

and define

m(λ) = sup{f_0(x) − λ′(f(x) − b̂) | x ∈ X}.

Problem (5.19) is called the dual problem:

Minimize m(λ)
subject to λ ≥ 0.     (5.19)

Let m* = inf{m(λ) | λ ≥ 0}.
Remark 1: The set X in (5.17) is usually equal to R^n and then, of course, there is no reason to separate it out. However, it is sometimes possible to include some of the constraints in X in such a way that the calculation of m(λ) by (5.18) and the solution of the dual problem (5.19) become simple. For example, see the problems discussed in Sections 2.3.1 and 2.3.2 below.
Remark 2: It is sometimes useful to know that Lemmas 1 and 2 below hold without any convexity conditions on f_0, f, X. Lemma 1 shows that the cost function of the dual problem is convex, which is useful information since there are computational techniques which apply to convex cost functions but not to arbitrary nonlinear cost functions. Lemma 2 shows that the optimum value of the dual problem is always an upper bound for the optimum value of the primal.
Lemma 1: m : R^m_+ → R ∪ {+∞} is a convex function. (Here R^m_+ = {λ ∈ R^m | λ ≥ 0}.)
Exercise 1: Prove Lemma 1.
Lemma 2: (Weak duality) If x is feasible for (5.17), i.e., x ∈ Ω(b̂), and if λ ≥ 0, then

f_0(x) ≤ M(b̂) ≤ m* ≤ m(λ).     (5.20)

Proof: Since f(x) − b̂ ≤ 0 and λ ≥ 0, we have λ′(f(x) − b̂) ≤ 0. So

f_0(x) ≤ f_0(x) − λ′(f(x) − b̂), for x ∈ Ω(b̂), λ ≥ 0.

Hence

f_0(x) ≤ sup{f_0(x) | x ∈ Ω(b̂)} = M(b̂)
≤ sup{f_0(x) − λ′(f(x) − b̂) | x ∈ Ω(b̂)}, and since Ω(b̂) ⊂ X,
≤ sup{f_0(x) − λ′(f(x) − b̂) | x ∈ X} = m(λ).

Thus, we have

f_0(x) ≤ M(b̂) ≤ m(λ) for x ∈ Ω(b̂), λ ≥ 0,

and since M(b̂) is independent of λ, if we take the infimum with respect to λ ≥ 0 in the right-hand inequality we get (5.20). ♦
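The chain of inequalities (5.20) is easy to check numerically. The following sketch (in Python, with a made-up one-dimensional instance; all names and data are illustrative, not from the text) estimates M(b̂) and m(λ) on a grid and confirms that every dual value bounds the primal value from above.

    # Weak duality (5.20) for: maximize f0(x) = -(x-2)^2 s.t. f1(x) = x <= 1.
    import numpy as np

    b_hat = 1.0
    xs = np.linspace(-5.0, 5.0, 100001)      # grid standing in for X = R

    def f0(x):
        return -(x - 2.0) ** 2                # concave objective

    def f1(x):
        return x                              # convex constraint function

    feasible = xs[f1(xs) <= b_hat]
    M = f0(feasible).max()                    # M(b_hat), attained at x = 1

    def m(lam):
        # m(lambda) = sup_x { f0(x) - lambda * (f1(x) - b_hat) }
        return (f0(xs) - lam * (f1(xs) - b_hat)).max()

    for lam in [0.0, 1.0, 2.0, 3.0]:
        assert M <= m(lam)                    # (5.20): M(b_hat) <= m(lambda)
        print(f"lambda = {lam:3.1f}   m(lambda) = {m(lam):8.4f}   M = {M:.4f}")

Here m(λ) = λ²/4 − λ analytically, minimized at λ = 2 where m(2) = −1 = M(b̂), so the duality gap closes, as Theorem 2 below predicts for this stable convex instance.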
The basic problem of Duality Theory is to determine conditions under which M(b̂) = m* in (5.20). We first give a simple sufficiency condition.
Definition: A pair (x̂, λ̂) with x̂ ∈ X and λ̂ ≥ 0 is said to satisfy the optimality conditions if

x̂ is an optimal solution of (5.18) with λ = λ̂,     (5.21)
x̂ is feasible for (5.17), i.e., f_i(x̂) ≤ b̂_i for i = 1, . . . , m,     (5.22)
λ̂_i = 0 when f_i(x̂) < b̂_i; equivalently, λ̂′(f(x̂) − b̂) = 0.     (5.23)

λ̂ ≥ 0 is said to be an optimal price vector if there is x̂ ∈ X such that (x̂, λ̂) satisfy the optimality conditions. Note that in this case x̂ ∈ Ω(b̂) by virtue of (5.22).
The next result is equivalent to Theorem 4(ii) of Section 1 if X = R^n and f_i, 0 ≤ i ≤ m, are differentiable.
Theorem 1: (Sufficiency) If (x̂, λ̂) satisfy the optimality conditions, then x̂ is an optimal solution to the primal, λ̂ is an optimal solution to the dual, and M(b̂) = m*.
Proof: Let x ∈ Ω(b̂), so that λ̂′(f(x) − b̂) ≤ 0. Then

f_0(x) ≤ f_0(x) − λ̂′(f(x) − b̂)
≤ sup{f_0(x) − λ̂′(f(x) − b̂) | x ∈ X}
= f_0(x̂) − λ̂′(f(x̂) − b̂)   by (5.21)
= f_0(x̂)   by (5.23),

so that x̂ is optimal for the primal, and hence by definition f_0(x̂) = M(b̂). Also

m(λ̂) = f_0(x̂) − λ̂′(f(x̂) − b̂) = f_0(x̂) = M(b̂),

so that from Weak Duality λ̂ is optimal for the dual. ♦
We now proceed to a much more detailed investigation.
Lemma 3: B is a convex subset of R^m, and M : B → R ∪ {+∞} is a concave function.
Proof: Let b, b̃ belong to B, let x ∈ Ω(b), x̃ ∈ Ω(b̃), and let 0 ≤ θ ≤ 1. Then (θx + (1 − θ)x̃) ∈ X since X is convex, and

f_i(θx + (1 − θ)x̃) ≤ θf_i(x) + (1 − θ)f_i(x̃)

since f_i is convex, so that

f(θx + (1 − θ)x̃) ≤ θb + (1 − θ)b̃,     (5.24)

hence

(θx + (1 − θ)x̃) ∈ Ω(θb + (1 − θ)b̃),

and therefore B is convex. Also, since f_0 is concave,

f_0(θx + (1 − θ)x̃) ≥ θf_0(x) + (1 − θ)f_0(x̃).

Since (5.24) holds for all x ∈ Ω(b) and x̃ ∈ Ω(b̃), it follows that

M(θb + (1 − θ)b̃) ≥ sup{f_0(θx + (1 − θ)x̃) | x ∈ Ω(b), x̃ ∈ Ω(b̃)}
≥ θ sup{f_0(x) | x ∈ Ω(b)} + (1 − θ) sup{f_0(x̃) | x̃ ∈ Ω(b̃)}
= θM(b) + (1 − θ)M(b̃). ♦
Definition: Let X ⊂ R^n and let g : X → R ∪ {∞, −∞}. A vector λ ∈ R^n is said to be a supergradient (respectively, subgradient) of g at x̂ ∈ X if

g(x) ≤ g(x̂) + λ′(x − x̂) for x ∈ X
(respectively, g(x) ≥ g(x̂) + λ′(x − x̂) for x ∈ X).

(See Figure 5.3.)
[Figure 5.3: Illustration of supergradients and stability. Three panels plot M(b) for b ∈ B near b̂: one where M is not stable at b̂, one where M is stable at b̂, and one showing the line M(b̂) + λ′(b − b̂) when λ is a supergradient at b̂.]
Definition: The function M : B → R ∪ {∞} is said to be stable at b̂ ∈ B if there exists a real number K such that

M(b) ≤ M(b̂) + K|b − b̂| for b ∈ B.

(In words, M is stable at b̂ if M does not increase infinitely steeply in a neighborhood of b̂. See Figure 5.3.)
A more geometric way of thinking about supergradients is the following. Define the subset A ⊂ R^{1+m} by

A = {(r, b) | b ∈ B, and r ≤ M(b)}.

Thus A is the set lying “below” the graph of M. We call A the hypograph¹ of M. Since M is concave it follows immediately that A is convex (in fact these are equivalent statements).
¹ From the Greek “hypo” meaning below or under. This neologism contrasts with the epigraph of a function, which is the set lying above the graph of the function.
Definition: A vector (λ_0, λ_1, . . . , λ_m) is said to be the normal to a hyperplane supporting A at a point (r̂, b̂) if

λ_0 r̂ + Σ_{i=1}^m λ_i b̂_i ≥ λ_0 r + Σ_{i=1}^m λ_i b_i for all (r, b) ∈ A.     (5.25)

(In words, A lies below the hyperplane π̂ = {(r, b) | λ_0 r + Σ λ_i b_i = λ_0 r̂ + Σ λ_i b̂_i}.) The supporting hyperplane is said to be non-vertical if λ_0 ≠ 0. See Figure 5.4.
Exercise 2: Show that if b̂ ∈ B, b̃ ≥ b̂, and r̃ ≤ M(b̂), then b̃ ∈ B, M(b̃) ≥ M(b̂), and (r̃, b̃) ∈ A.
Exercise 3: Assume that b̂ ∈ B and M(b̂) < ∞. Show that (i) if λ = (λ_1, . . . , λ_m)′ is a supergradient of M at b̂, then λ ≥ 0 and (1, −λ_1, . . . , −λ_m)′ defines a non-vertical hyperplane supporting A at (M(b̂), b̂); (ii) if (λ_0, −λ_1, . . . , −λ_m)′ defines a hyperplane supporting A at (M(b̂), b̂), then λ_0 ≥ 0 and λ_i ≥ 0 for 1 ≤ i ≤ m; furthermore, if the hyperplane is non-vertical, then ((λ_1/λ_0), . . . , (λ_m/λ_0))′ is a supergradient of M at b̂.
We will prove only one part of the next crucial result. The reader who is familiar with the Separation Theorem for convex sets should be able to construct a proof for the second part based on Figure 5.4, or see the Appendix at the end of this Chapter.
Lemma 4: (Gale [1967]) M is stable at b̂ iff M has a supergradient at b̂.
Proof: (Sufficiency only) Let λ be a supergradient at b̂; then

M(b) ≤ M(b̂) + λ′(b − b̂) ≤ M(b̂) + |λ||b − b̂|. ♦
The next two results give important alternative interpretations of supergradients.
Lemma 5: Suppose that x̂ is optimal for (5.17). Then λ̂ is a supergradient of M at b̂ iff λ̂ is an optimal price vector, and then (x̂, λ̂) satisfy the optimality conditions.
Proof: By hypothesis, f_0(x̂) = M(b̂), x̂ ∈ X, and f(x̂) ≤ b̂. Let λ̂ be a supergradient of M at b̂. By Exercise 2, (M(b̂), f(x̂)) ∈ A, and by Exercise 3, λ̂ ≥ 0 and

M(b̂) − λ̂′b̂ ≥ M(b̂) − λ̂′f(x̂),

so that λ̂′(f(x̂) − b̂) ≥ 0. But then λ̂′(b̂ − f(x̂)) = 0, giving (5.23). Next let x ∈ X. Then (f_0(x), f(x)) ∈ A, hence again by Exercise 3,

M(b̂) − λ̂′b̂ ≥ f_0(x) − λ̂′f(x).

Since f_0(x̂) = M(b̂) and λ̂′(f(x̂) − b̂) = 0, we can rewrite the inequality above as
f_0(x̂) − λ̂′(f(x̂) − b̂) ≥ f_0(x) − λ̂′(f(x) − b̂),

so that (5.21) holds. It follows that (x̂, λ̂) satisfy the optimality conditions.
Conversely, suppose x̂ ∈ X, λ̂ ≥ 0 satisfy (5.21), (5.22), and (5.23). Let x ∈ Ω(b), i.e., x ∈ X, f(x) ≤ b. Then λ̂′(f(x) − b) ≤ 0, so that

f_0(x) ≤ f_0(x) − λ̂′(f(x) − b)
= f_0(x) − λ̂′(f(x) − b̂) + λ̂′(b − b̂)
≤ f_0(x̂) − λ̂′(f(x̂) − b̂) + λ̂′(b − b̂)   by (5.21)
= f_0(x̂) + λ̂′(b − b̂)   by (5.23)
= M(b̂) + λ̂′(b − b̂).

Hence

M(b) = sup{f_0(x) | x ∈ Ω(b)} ≤ M(b̂) + λ̂′(b − b̂),

so that λ̂ is a supergradient of M at b̂. ♦
[Figure 5.4: Hypograph and supporting hyperplane. Two panels show M(b) and the set A: in the first there is no non-vertical hyperplane supporting A at (M(b̂), b̂); in the second a non-vertical hyperplane π̂ with normal (λ_0, . . . , λ_m) supports A at (M(b̂), b̂).]
Lemma 6: Suppose that b̂ ∈ B and M(b̂) < ∞. Then λ̂ is a supergradient of M at b̂ iff λ̂ is an optimal solution of the dual (5.19) and m(λ̂) = M(b̂).
Proof: Let λ̂ be a supergradient of M at b̂. Let x ∈ X. By Exercises 2 and 3,

M(b̂) − λ̂′b̂ ≥ f_0(x) − λ̂′f(x),

or

M(b̂) ≥ f_0(x) − λ̂′(f(x) − b̂),

so that

M(b̂) ≥ sup{f_0(x) − λ̂′(f(x) − b̂) | x ∈ X} = m(λ̂).

By weak duality (Lemma 2) it follows that M(b̂) = m(λ̂) and λ̂ is optimal for (5.19).
Conversely, suppose λ̂ ≥ 0 and m(λ̂) = M(b̂). Then for any x ∈ X,

M(b̂) ≥ f_0(x) − λ̂′(f(x) − b̂),

and if moreover f(x) ≤ b, then λ̂′(f(x) − b) ≤ 0, so that

M(b̂) ≥ f_0(x) − λ̂′(f(x) − b̂) + λ̂′(f(x) − b) = f_0(x) − λ̂′b + λ̂′b̂ for x ∈ Ω(b).

Hence

M(b) = sup{f_0(x) | x ∈ Ω(b)} ≤ M(b̂) + λ̂′(b − b̂),

so that λ̂ is a supergradient. ♦
We can now summarize our results as follows.
Theorem 2: (Duality) Suppose b̂ ∈ B, M(b̂) < ∞, and M is stable at b̂. Then
(i) there exists an optimal solution λ̂ for the dual, and m(λ̂) = M(b̂),
(ii) λ̂ is optimal for the dual iff λ̂ is a supergradient of M at b̂,
(iii) if λ̂ is any optimal solution for the dual, then x̂ is optimal for the primal iff (x̂, λ̂) satisfy the optimality conditions (5.21), (5.22), and (5.23).
Proof: (i) follows from Lemmas 4 and 6. (ii) is implied by Lemma 6. The “if” part of (iii) follows from Theorem 1, whereas the “only if” part of (iii) follows from Lemma 5. ♦
Corollary 1: Under the hypothesis of Theorem 2, if λ̂ is an optimal solution to the dual, then (∂M⁺/∂b_i)(b̂) ≤ λ̂_i ≤ (∂M⁻/∂b_i)(b̂).
Exercise 4: Prove Corollary 1. (Hint: See Theorem 5 of 4.2.3.)
5.2.2 Interpretation and extensions.
It is easy to see, using convexity properties, that if X = R^n and f_i, 0 ≤ i ≤ m, are differentiable, then the optimality conditions (5.21), (5.22), and (5.23) are equivalent to the Kuhn-Tucker condition (5.8). Thus the condition of stability of M at b̂ plays a role similar to that of the constraint qualification. However, by Lemmas 4 and 6, stability is equivalent to the existence of optimal dual variables, whereas CQ is only a sufficient condition. In other words, if CQ holds at x̂ then M is stable at b̂. In particular, if X = R^n and the f_i are differentiable, the various conditions of Section 1.3 imply stability. Here we give one sufficient condition which implies stability for the general case.
Lemma 7: If b̂ is in the interior of B, in particular if there exists x ∈ X such that f_i(x) < b̂_i for 1 ≤ i ≤ m, then M is stable at b̂.
The proof rests on the Separation Theorem for convex sets, and depends only on the facts that M is concave, M(b̂) < ∞ (without loss of generality), and b̂ is in the interior of B. For details see the Appendix.
Much of duality theory can be given an economic interpretation similar to that in Section 4.4. Thus, we can think of x as the vector of n activity levels, f_0(x) the corresponding revenue, X as constraints due to physical or long-term limitations, b as the vector of current resource supplies, and finally f(x) the amount of these resources used up at activity levels x. The various convexity conditions are generalizations of the economic hypothesis of non-increasing returns-to-scale. The primal problem (5.17) is the short-term decision problem faced by the firm. Next, if the current resources can be bought or sold at prices λ̂ = (λ̂_1, . . . , λ̂_m)′, the firm faces the decision problem (5.18). If, for a price system λ̂, an optimal solution of (5.17) is also an optimal solution of (5.18), then we can interpret λ̂ as a system of equilibrium prices, just as in 4.2. Assuming the realistic condition b̂ ∈ B, M(b̂) < ∞, we can see from Theorem 2 and its Corollary 1 that there exists an equilibrium price system iff (∂M⁺/∂b_i)(b̂) < ∞, 1 ≤ i ≤ m; if we interpret (∂M⁺/∂b_i)(b̂) as the marginal revenue of the ith resource, we can say that equilibrium prices exist iff the marginal productivity of every (variable) resource is finite. These ideas are developed in (Gale [1967]).
[Figure 5.5: If M is not concave there may be no supporting hyperplane at (M(b̂), b̂).]
Referring to Figure 5.3 or Figure 5.4, and comparing with Figure 5.5, it is evident that if M is not concave or, equivalently, if its hypograph A is not convex, there may be no hyperplane supporting A at (M(b̂), b̂). This is the reason why duality theory requires the often restrictive convexity hypothesis on X and f_i. It is possible to obtain the duality theorem under conditions slightly weaker than convexity, but since these conditions are not easily verifiable we do not pursue this direction any further (see Luenberger [1968]). A much more promising development has recently taken place. The basic idea involved is to consider supporting A at (M(b̂), b̂) by (non-vertical) surfaces π̂ more general than hyperplanes; see Figure 5.6. Instead of (5.18) we would then have the more general problem (5.26):

Maximize f_0(x) − F(f(x) − b̂)
subject to x ∈ X,     (5.26)

where F : R^m → R is chosen so that π̂ (in Figure 5.6) is the graph of the function b → M(b̂) − F(b − b̂). Usually F is chosen from a class of functions φ parameterized by µ = (µ_1, . . . , µ_k) ≥ 0. Then for each fixed µ ≥ 0 we have (5.27) instead of (5.26):

Maximize f_0(x) − φ(µ; f(x) − b̂)
subject to x ∈ X.     (5.27)
[Figure 5.6: The surface π̂ supports A at (M(b̂), b̂).]
If we let

ψ(µ) = sup{f_0(x) − φ(µ; f(x) − b̂) | x ∈ X},

then the dual problem is

Minimize ψ(µ)
subject to µ ≥ 0,

in analogy with (5.19).
The economic interpretation of (5.27) would be that if the prevailing (non-uniform) price system is φ(µ; ·), then the resources f(x) − b̂ can be bought (or sold) for the amount φ(µ; f(x) − b̂). For such an interpretation to make sense we should have φ(µ; b) ≥ 0 for b ≥ 0, and φ(µ; b) ≥ φ(µ; b̃) whenever b ≥ b̃. A relatively unnoticed, but quite interesting, development along these lines is presented in (Frank [1969]). Also see (Arrow and Hurwicz [1960]).
For non-economic applications, of course, no such limitation on φ is necessary. The following references are pertinent: (Gould [1969]), (Greenberg and Pierskalla [1970]), (Banerjee [1971]). For more details concerning the topics of 2.1 see (Geoffrion [1970a]), and for a mathematically more elegant treatment see (Rockafellar [1970]).
5.2.3 Applications.
Decentralized resource allocation.
Parts (i) and (iii) of Theorem 2 make duality theory attractive for computational purposes. In particular, from Theorem 2(iii), if we have an optimal dual solution λ̂ then the optimal primal solutions are those optimal solutions of (5.18) for λ = λ̂ which also satisfy the feasibility condition (5.22) and the “complementary slackness” condition (5.23). This is useful because generally speaking (5.18) is easier to solve than (5.17), since (5.18) has fewer constraints.
Consider a decision problem in a large system (e.g., a multi-divisional firm). The system is made up of k sub-systems (divisions), and the decision variable of the ith sub-system is a vector x^i ∈ R^{n_i}, 1 ≤ i ≤ k. The ith sub-system has individual constraints of the form x^i ∈ X^i, where X^i is a convex set. Furthermore, the sub-systems share some resources in common, and this limitation is expressed as f^1(x^1) + . . . + f^k(x^k) ≤ b̂, where the f^i : R^{n_i} → R^m are convex functions and b̂ ∈ R^m is the vector of available common resources. Suppose that the objective function of the large system is additive, i.e., it is of the form f_0^1(x^1) + . . . + f_0^k(x^k), where the f_0^i : R^{n_i} → R are concave functions. Thus we have the decision problem (5.28):

Maximize Σ_{i=1}^k f_0^i(x^i)
subject to x^i ∈ X^i, 1 ≤ i ≤ k,
Σ_{i=1}^k f^i(x^i) ≤ b̂.     (5.28)

For λ ∈ R^m, λ ≥ 0, the problem corresponding to (5.18) is

Maximize Σ_{i=1}^k f_0^i(x^i) − λ′(Σ_{i=1}^k f^i(x^i) − b̂)
subject to x^i ∈ X^i, 1 ≤ i ≤ k,

which decomposes into k separate problems:

Maximize f_0^i(x^i) − λ′f^i(x^i)
subject to x^i ∈ X^i, 1 ≤ i ≤ k.     (5.29)

If we let m_i(λ) = sup{f_0^i(x^i) − λ′f^i(x^i) | x^i ∈ X^i} and m(λ) = Σ_{i=1}^k m_i(λ) + λ′b̂, then the dual problem is

Minimize m(λ)
subject to λ ≥ 0.     (5.30)

Note that (5.29) may be much easier to solve than (5.28) because, first of all, (5.29) involves fewer constraints, but perhaps more importantly the decision problems in (5.29) are decentralized, whereas in (5.28) all the decision variables x^1, . . . , x^k are coupled together; in fact, if k is very large it may be practically impossible to solve (5.28), whereas (5.29) may be trivial if the dimensions of the x^i are small.
Assuming that (5.28) has an optimal solution and the stability condition is satisfied, we need to find an optimal dual solution so that we can use Theorem 2(iii). For simplicity suppose that the f_0^i, 1 ≤ i ≤ k, are strictly concave, and also suppose that (5.29) has an optimal solution for every λ ≥ 0. Then by Exercise 8 of Section 1, for each λ ≥ 0 there is a unique optimal solution of (5.29), say x^i(λ). Consider the following algorithm.
Step 1. Select λ^0 ≥ 0 arbitrarily. Set p = 0, and go to Step 2.
Step 2. Solve (5.29) for λ = λ^p and obtain the optimal solution x^p = (x^1(λ^p), . . . , x^k(λ^p)). Compute e^p = Σ_{i=1}^k f^i(x^i(λ^p)) − b̂. If e^p ≤ 0, x^p is feasible for (5.28) and can easily be seen to be optimal; stop. Otherwise go to Step 3.
Step 3. Set λ^{p+1} according to

λ_i^{p+1} = λ_i^p             if e_i^p ≤ 0,
λ_i^{p+1} = λ_i^p + d_p e_i^p  if e_i^p > 0,

where d_p > 0 is chosen a priori. Set p = p + 1 and return to Step 2.
It can be shown that if the step sizes d_p are chosen properly, x^p will converge to the optimum solution of (5.28). For more detail see (Arrow and Hurwicz [1960]), and for other decentralization schemes for solving (5.28) see (Geoffrion [1970b]).
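The following is a minimal numerical sketch of this price-adjustment scheme, assuming k = 2 scalar sub-systems with strictly concave objectives f_0^i(x) = −(x − c_i)², a single shared resource with f^i(x) = x and b̂ = 3, and a crude grid search standing in for the sub-problems (5.29); the data and the constant step size d are illustrative choices, not from the text.

    import numpy as np

    c = [4.0, 2.0]                 # peaks of the two sub-system objectives
    b_hat = 3.0                    # common resource supply b-hat
    grid = np.linspace(0.0, 10.0, 10001)   # stands in for X^i = [0, 10]

    def solve_subproblem(i, lam):
        """Sub-problem (5.29): maximize f_0^i(x) - lam * f^i(x) over X^i."""
        values = -(grid - c[i]) ** 2 - lam * grid
        return grid[np.argmax(values)]

    lam, d = 0.0, 0.2              # lambda^0 and step size d_p (constant here)
    for p in range(200):
        x = [solve_subproblem(i, lam) for i in range(2)]
        e = sum(x) - b_hat         # e^p = sum_i f^i(x^i(lambda^p)) - b_hat
        if e <= 1e-3:              # Step 2: feasible (to grid tolerance); stop
            break
        lam = lam + d * e          # Step 3: raise the price while violated

    print(f"x = ({x[0]:.3f}, {x[1]:.3f}), lambda = {lam:.3f}, usage = {sum(x):.3f}")

Analytically x^i(λ) = c_i − λ/2 here, so the iteration λ ← λ + d(3 − λ) converges to the equilibrium price λ = 3 with x = (2.5, 0.5), at which the shared resource is exactly used up.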
Control of water quality in a stream.
The discussion in this section is mainly based on (Kendrick, et al. [1971]). For an informal discussion of schemes of pollution control which derive their effectiveness from duality theory see (Solow [1971]); see also (Dorfman and Jacoby [1970]).
Figure 5.7 is a schematic diagram of a part of a stream into which N sources (industries and municipalities) discharge polluting effluents. The pollutants consist of various materials, but for simplicity of exposition we assume that their impact on the quality of the stream is measured in terms of a single quantity, namely the biochemical oxygen demand (BOD) which they place on the dissolved oxygen (DO) in the stream. Since the DO in the stream is used to break down the pollutants chemically into harmless substances, the quality of the stream improves with the amount of DO and decreases with increasing BOD. It is a well-advertised fact that if the DO drops below a certain concentration, then life in the stream is seriously threatened; indeed, the stream can “die.” Therefore, it is important to treat the effluents before they enter the stream in order to reduce the BOD to concentration levels which can be safely absorbed by the DO in the stream. In this example we are concerned with finding the optimal balance between the costs of waste treatment and the costs of high BOD in the stream.
We first derive the equations which govern the evolution in time of BOD and DO in the N areas of the stream. The fluctuations of BOD and DO will be cyclical with a period of 24 hours. Hence, it is enough to study the problem over a 24-hour period. We divide this period into T intervals, t = 1, . . . , T. During interval t and in area i let

z_i(t) = concentration of BOD, measured in mg/liter,
q_i(t) = concentration of DO, measured in mg/liter,
s_i(t) = concentration of BOD of the effluent discharge, in mg/liter, and
m_i(t) = amount of effluent discharge, in liters.

The principle of conservation of mass gives us equations (5.31) and (5.32):

z_i(t + 1) − z_i(t) = −α_i z_i(t) + ψ_{i−1}z_{i−1}(t)/v_i − ψ_i z_i(t)/v_i + s_i(t)m_i(t)/v_i,     (5.31)

q_i(t + 1) − q_i(t) = β_i(q^s − q_i(t)) + ψ_{i−1}q_{i−1}(t)/v_i − ψ_i q_i(t)/v_i − α_i z_i(t) − η_i/v_i,
t = 1, . . . , T and i = 1, . . . , N.     (5.32)
[Figure 5.7: Schematic of stream with effluent discharges. Areas 0, 1, . . . , i − 1, i, i + 1, . . . , N, N + 1 lie along the direction of flow; area i has concentrations z_i, q_i and receives the treated discharge (1 − π_i)s_i from source i.]
Here, v_i = volume of water in area i, measured in liters, and ψ_i = volume of water which flows from area i to area i + 1 in each period, measured in liters. α_i is the rate of decay of BOD per interval; this decay occurs by combination of BOD and DO. β_i is the rate of generation of DO; the increase in DO is due to various natural oxygen-producing biochemical reactions in the stream, and the increase is proportional to (q^s − q_i), where q^s is the saturation level of DO in the stream. Finally, η_i is the DO requirement of the bottom sludge. The v_i, ψ_i, α_i, η_i, q^s are parameters of the stream and are assumed known; they may vary with the time interval t. Also z_0(t), q_0(t), which are the concentrations immediately upstream from area 1, are assumed known. Finally, the initial concentrations z_i(1), q_i(1), i = 1, . . . , N, are assumed known.
Now suppose that the waste treatment facility in area i removes in interval t a fraction π_i(t) of the concentration s_i(t) of BOD. Then (5.31) is replaced by

z_i(t + 1) − z_i(t) = −α_i z_i(t) + ψ_{i−1}z_{i−1}(t)/v_i − ψ_i z_i(t)/v_i + (1 − π_i(t))s_i(t)m_i(t)/v_i.     (5.33)
We now turn to the costs associated with waste treatment and pollution. The cost of waste treatment can be readily identified. In period t the ith facility treats m_i(t) liters of effluent with a BOD concentration of s_i(t) mg/liter, of which the facility removes a fraction π_i(t). Hence, the cost in period t will be f_i(π_i(t), s_i(t), m_i(t)), where the function f_i must be monotonically increasing in all of its arguments. We further assume that f_i is convex.
The costs associated with increased amounts of BOD and reduced amounts of DO are much more difficult to quantify, since the stream is used by many institutions for a variety of purposes (e.g., agricultural, industrial, municipal, recreational), and the disutility caused by a decrease in the water quality varies with the user. Therefore, instead of attempting to quantify these costs, let us suppose that some minimum water quality standards are set. Let q be the minimum acceptable DO concentration and let z̄ be the maximum permissible BOD concentration. Then we face the following NP:

Maximize −Σ_{i=1}^N Σ_{t=1}^T f_i(π_i(t), s_i(t), m_i(t))
subject to (5.32), (5.33), and
−q_i(t) ≤ −q,   i = 1, . . . , N; t = 1, . . . , T,
z_i(t) ≤ z̄,    i = 1, . . . , N; t = 1, . . . , T,
0 ≤ π_i(t) ≤ 1, i = 1, . . . , N; t = 1, . . . , T.     (5.34)
Suppose that all the treatment facilities are in the control of a single public agency. Then, assuming that the agency is required to maintain the standards (q, z̄) and that it does this at minimum cost, it will solve the NP (5.34) and arrive at an optimal solution. Let the minimum cost be m(q, z̄). But if there is no such centralized agency, then the individual polluters may not (and usually do not) have any incentive to cooperate among themselves to achieve these standards. Furthermore, it does not make sense to enforce legally a minimum standard q_i(t) ≥ q, z_i(t) ≤ z̄ on every polluter, since the pollution levels in the ith area depend upon the pollution levels in all the other areas lying upstream. On the other hand, it may be economically and politically acceptable to tax individual polluters in proportion to the amount of pollutants discharged by the individual. The question we now pose is whether there exist tax rates such that if each individual polluter minimizes its own total cost (i.e., cost of waste treatment + tax on remaining pollutants), then the resulting water quality will be acceptable and, furthermore, the resulting amount of waste treatment is carried out at the minimum expenditure of resources (i.e., will be an optimal solution of (5.34)).
It should be clear from the duality theory that the answer is in the affirmative. To see this, let w_i(t) = (z_i(t), −q_i(t))′, let w(t) = (w_1(t), . . . , w_N(t)), and let w = (w(1), . . . , w(T)). Then we can solve (5.32) and (5.33) for w and obtain

w = b + Ar,     (5.35)

where the matrix A and the vector b depend upon the known parameters and initial conditions, and r is the NT-dimensional vector with components (1 − π_i(t))s_i(t)m_i(t). Note that the coefficients of the matrix A must be non-negative, because an increase in any component of r cannot decrease the BOD levels and cannot increase the DO levels. Using (5.35) we can rewrite (5.34) as follows:

Maximize −Σ_i Σ_t f_i(π_i(t), s_i(t), m_i(t))
subject to b + Ar ≤ w̄,
0 ≤ π_i(t) ≤ 1, i = 1, . . . , N; t = 1, . . . , T,     (5.36)

where the 2NT-dimensional vector w̄ has its components equal to −q or z̄ in the obvious manner.
By the duality theorem there exist a 2NT-dimensional vector λ* ≥ 0 and an optimal solution π_i*(t), i = 1, . . . , N; t = 1, . . . , T, of the problem
Maximize −Σ_i Σ_t f_i(π_i(t), s_i(t), m_i(t)) − λ*′(b + Ar − w̄)
subject to 0 ≤ π_i(t) ≤ 1, i = 1, . . . , N; t = 1, . . . , T,     (5.37)

such that {π_i*(t)} is also an optimal solution of (5.36) and, furthermore, the optimal values of (5.36) and (5.37) are equal. If we let p* = A′λ* ≥ 0, and we write the components of p* as p_i*(t) to match the components (1 − π_i(t))s_i(t)m_i(t) of r, we can see that (5.37) is equivalent to the set of NT problems:

Maximize −f_i(π_i(t), s_i(t), m_i(t)) − p_i*(t)(1 − π_i(t))s_i(t)m_i(t)
subject to 0 ≤ π_i(t) ≤ 1,
i = 1, . . . , N; t = 1, . . . , T.     (5.38)

Thus, p_i*(t) is the optimum tax per mg of BOD discharged in area i during period t.
Before we leave this example let us note that the optimum dual variable, or shadow price, λ* plays an important role in a larger framework. We noted earlier that the quality standard (q, z̄) was somewhat arbitrary. Now suppose it is proposed to change the standard in the ith area during period t to q + ∆q_i(t) and z̄ + ∆z_i(t). If the corresponding components of λ* are λ_i^{q*}(t) and λ_i^{z*}(t), then the change in the minimum cost necessary to achieve the new standard will be approximately λ_i^{q*}(t)∆q_i(t) + λ_i^{z*}(t)∆z_i(t). This estimate can now serve as a basis for a benefits/cost analysis of the proposed new standard.
5.3 Quadratic Programming
An important special case of NP is the quadratic programming (QP) problem:

Maximize c′x − (1/2)x′Px
subject to Ax ≤ b, x ≥ 0,     (5.39)

where x ∈ R^n is the decision variable, c ∈ R^n and b ∈ R^m are fixed, A is a fixed m × n matrix, and P = P′ is a fixed positive semi-definite matrix.
Theorem 1: A vector x* ∈ R^n is optimal for (5.39) iff there exist λ* ∈ R^m and µ* ∈ R^n such that

Ax* ≤ b, x* ≥ 0,
c − Px* = A′λ* − µ*, λ* ≥ 0, µ* ≥ 0,
(λ*)′(Ax* − b) = 0, (µ*)′x* = 0.     (5.40)
Proof: By Lemma 3 of 1.3, CQ is satisfied, hence the necessity of these conditions follows from Theorem 2 of 1.2. On the other hand, since P is positive semi-definite, it follows from Exercise 6 of Section 1.2 that f_0 : x → c′x − (1/2)x′Px is a concave function, so that the sufficiency of these conditions follows from Theorem 4 of 1.2. ♦
From (5.40) we can see that x* is optimal for (5.39) iff there is a solution (x*, y*, λ*, µ*) to (5.41), (5.42), and (5.43):

Ax + I_m y = b,
−Px − A′λ + I_n µ = −c,     (5.41)

x ≥ 0, y ≥ 0, λ ≥ 0, µ ≥ 0,     (5.42)

µ′x = 0, λ′y = 0.     (5.43)
Suppose we try to solve (5.41) and (5.42) by Phase I of the Simplex algorithm (see 4.3.2). Then we must apply Phase II to the LP:

Maximize −Σ_{i=1}^m z_i − Σ_{j=1}^n ξ_j
subject to Ax + I_m y + z = b,
−Px − A′λ + I_n µ + ξ = −c,
x ≥ 0, y ≥ 0, λ ≥ 0, µ ≥ 0, z ≥ 0, ξ ≥ 0,     (5.44)

starting with the basic feasible solution z = b, ξ = −c. (We have assumed, without loss of generality, that b ≥ 0 and −c ≥ 0.) If (5.41) and (5.42) have a solution, then the maximum value in (5.44) is 0. We have the following result.
Lemma 1: If (5.41), (5.42), and (5.43) have a solution, then there is an optimal basic feasible solution of (5.44) which is also a solution of (5.41), (5.42), and (5.43).
Proof: Let x̂, ŷ, λ̂, µ̂ be a solution of (5.41), (5.42), and (5.43). Then x̂, ŷ, λ̂, µ̂, ẑ = 0, ξ̂ = 0 is an optimal solution of (5.44). Furthermore, from (5.42) and (5.43) we see that at most (n + m) components of (x̂, ŷ, λ̂, µ̂) are non-zero. But then a repetition of the proof of Lemma 1 of 4.3.1 will also prove this lemma. ♦
This lemma suggests that we can apply the Simplex algorithm of 4.3.2 to solve (5.44), starting with the basic feasible solution z = b, ξ = −c, in order to obtain a solution of (5.41), (5.42), and (5.43). However, Step 2 of the Simplex algorithm must be modified as follows to maintain (5.43): if a variable x_j is currently in the basis, do not consider µ_j as a candidate for entry into the basis; if a variable y_i is currently in the basis, do not consider λ_i as a candidate for entry into the basis. If it is not possible to remove the z_i and ξ_j from the basis, stop.
The above algorithm is due to Wolfe [1959]. The behavior of the algorithm is summarized below.
Theorem 2: Suppose P is positive definite. The algorithm will stop in a finite number of steps at an optimal basic feasible solution (x̂, ŷ, λ̂, µ̂, ẑ, ξ̂) of (5.44). If ẑ = 0 and ξ̂ = 0, then (x̂, ŷ, λ̂, µ̂) solves (5.41), (5.42), and (5.43) and x̂ is an optimal solution of (5.39). If ẑ ≠ 0 or ξ̂ ≠ 0, then there is no solution to (5.41), (5.42), (5.43), and there is no feasible solution of (5.39).
For a proof of this result, as well as for a generalization of the algorithm which permits positive semi-definite P, see (Canon, Cullum, and Polak [1970], p. 159 ff).
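By Theorem 1, verifying optimality for (5.39) reduces to checking the finite system (5.40). The sketch below performs this check on a tiny instance; the data and the candidate (x*, λ*, µ*), worked out by hand from the single active constraint, are purely illustrative.

    # Check the optimality conditions (5.40) for a small QP:
    # maximize c'x - (1/2) x'Px  subject to  Ax <= b, x >= 0.
    import numpy as np

    P = np.array([[2.0, 0.0], [0.0, 2.0]])   # positive definite
    c = np.array([2.0, 3.0])
    A = np.array([[1.0, 1.0]])
    b = np.array([1.0])

    def satisfies_5_40(x, lam, mu, tol=1e-8):
        primal = np.all(A @ x <= b + tol) and np.all(x >= -tol)
        # stationarity: c - Px = A'lam - mu
        stationary = np.allclose(c - P @ x, A.T @ lam - mu, atol=tol)
        signs = np.all(lam >= -tol) and np.all(mu >= -tol)
        slack = abs(lam @ (A @ x - b)) < tol and abs(mu @ x) < tol
        return primal and stationary and signs and slack

    # Candidate: x1 + x2 <= 1 active; solving by hand gives
    # x* = (0.25, 0.75), lam* = 1.5, mu* = 0.
    x_star = np.array([0.25, 0.75])
    lam_star = np.array([1.5])
    mu_star = np.zeros(2)
    print(satisfies_5_40(x_star, lam_star, mu_star))   # True => optimal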
5.4 Computational Method
We return to the general NP (5.45):

Maximize f_0(x)
subject to f_i(x) ≤ 0, i = 1, . . . , m,     (5.45)

where x ∈ R^n and f_i : R^n → R, 0 ≤ i ≤ m, are differentiable. Let Ω ⊂ R^n denote the set of feasible solutions. For x̂ ∈ Ω define the function ψ(x̂) : R^n → R by

ψ(x̂)(h) = max{−f_0x(x̂)h, f_1(x̂) + f_1x(x̂)h, . . . , f_m(x̂) + f_mx(x̂)h}.

Consider the problem:

Minimize ψ(x̂)(h)
subject to −ψ(x̂)(h) − f_0x(x̂)h ≤ 0,
−ψ(x̂)(h) + f_i(x̂) + f_ix(x̂)h ≤ 0, 1 ≤ i ≤ m,
−1 ≤ h_j ≤ 1, 1 ≤ j ≤ n.     (5.46)
[Figure 5.8: h(x_k) is a feasible direction: level sets of f_0 through x_k, the constraint boundaries f_1 = 0, f_2 = 0, f_3 = 0, the gradient vectors of f_0, f_1, f_2, f_3 at x_k, and the direction h(x_k).]
Call h(x̂) an optimum solution of (5.46), and let h_0(x̂) = ψ(x̂)(h(x̂)) be the minimum value attained. (Note that by Exercise 1 of 4.5.1, (5.46) can be solved as an LP.)
The following algorithm is due to Topkis and Veinott [1967].
Step 1. Find x_0 ∈ Ω, set k = 0, and go to Step 2.
Step 2. Solve (5.46) for x̂ = x_k and obtain h_0(x_k), h(x_k). If h_0(x_k) = 0, stop; otherwise go to Step 3.
Step 3. Compute an optimum solution µ(x_k) to the one-dimensional problem

Maximize f_0(x_k + µh(x_k))
subject to (x_k + µh(x_k)) ∈ Ω, µ ≥ 0,

and go to Step 4.
Step 4. Set x_{k+1} = x_k + µ(x_k)h(x_k), set k = k + 1, and return to Step 2.
The performance of the algorithm is summarized below.
Theorem 1: Suppose that the set

Ω(x_0) = {x | x ∈ Ω, f_0(x) ≥ f_0(x_0)}

is compact and has a non-empty interior which is dense in Ω(x_0). Let x* be any limit point of the sequence x_0, x_1, . . . , x_k, . . . generated by the algorithm. Then the Kuhn-Tucker conditions are satisfied at x*.
For a proof of this result and for more efficient algorithms the reader is referred to (Polak [1971]).
Remark: If h_0(x_k) < 0 in Step 2, then the direction h(x_k) satisfies f_0x(x_k)h(x_k) > 0 and f_i(x_k) + f_ix(x_k)h(x_k) < 0, 1 ≤ i ≤ m. For this reason h(x_k) is called a (desirable) feasible direction. (See Figure 5.8.)
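As noted above, (5.46) can be solved as an LP by treating the value ψ(x̂)(h) as an additional scalar variable, say σ. A sketch, with made-up values for f_i(x̂) and the gradients at the current point, using scipy.optimize.linprog purely for illustration:

    import numpy as np
    from scipy.optimize import linprog

    # Example data at the current point x_hat (n = 2, m = 1):
    grad_f0 = np.array([1.0, 0.0])            # f_0x(x_hat)
    f1, grad_f1 = -0.5, np.array([1.0, 1.0])  # f_1(x_hat), f_1x(x_hat)

    # Variables: (sigma, h1, h2); minimize sigma.
    c = np.array([1.0, 0.0, 0.0])
    # -sigma - f_0x h <= 0  and  -sigma + f_1(x_hat) + f_1x h <= 0
    A_ub = np.array([[-1.0, -grad_f0[0], -grad_f0[1]],
                     [-1.0,  grad_f1[0],  grad_f1[1]]])
    b_ub = np.array([0.0, -f1])
    bounds = [(None, None), (-1.0, 1.0), (-1.0, 1.0)]   # -1 <= h_j <= 1

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    sigma, h = res.x[0], res.x[1:]
    print("h0(x_hat) =", round(sigma, 4), " h(x_hat) =", h.round(4))

A negative optimal σ (here σ = −0.75 with h = (0.75, −1)) certifies that h is a desirable feasible direction in the sense of the Remark above.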
5.5 Appendix
The proofs of Lemmas 4 and 7 of Section 2 are based on the following extremely important theorem (see Rockafellar [1970]).
Separation theorem for convex sets: Let F, G be convex subsets of R^n such that the relative interiors of F, G are disjoint. Then there exist λ ∈ R^n, λ ≠ 0, and θ ∈ R such that

λ′g ≤ θ for all g ∈ G,
λ′f ≥ θ for all f ∈ F.

Proof of Lemma 4: Since M is stable at b̂ there exists K such that

M(b) − M(b̂) ≤ K|b − b̂| for all b ∈ B.     (5.47)

In R^{1+m} consider the sets

F = {(r, b) | b ∈ R^m, r > K|b − b̂|},
G = {(r, b) | b ∈ B, r ≤ M(b) − M(b̂)}.

It is easy to check that F, G are convex, and (5.47) implies that F ∩ G = φ. Hence there exist (λ_0, . . . , λ_m) ≠ 0 and θ such that

λ_0 r + Σ_{i=1}^m λ_i b_i ≤ θ for (r, b) ∈ G,     (5.48)
λ_0 r + Σ_{i=1}^m λ_i b_i ≥ θ for (r, b) ∈ F.     (5.49)
From the definition of F, and the fact that (λ_0, . . . , λ_m) ≠ 0, it can be verified that (5.49) can hold only if λ_0 > 0. Also from (5.49) we can see that Σ_{i=1}^m λ_i b̂_i ≥ θ, whereas from (5.48) Σ_{i=1}^m λ_i b̂_i ≤ θ, so that Σ_{i=1}^m λ_i b̂_i = θ. But then from (5.48) we get

M(b) − M(b̂) ≤ (1/λ_0)[θ − Σ_{i=1}^m λ_i b_i] = Σ_{i=1}^m (−λ_i/λ_0)(b_i − b̂_i). ♦
Proof of Lemma 7: Since b̂ is in the interior of B, there exists ε > 0 such that

b ∈ B whenever |b − b̂| < ε.     (5.50)

In R^{1+m} consider the sets

F = {(r, b̂) | r > M(b̂)},
G = {(r, b) | b ∈ B, r ≤ M(b)}.

Evidently F, G are convex and F ∩ G = φ, so that there exist (λ_0, . . . , λ_m) ≠ 0 and θ such that

λ_0 r + Σ_{i=1}^m λ_i b̂_i ≥ θ for r > M(b̂),     (5.51)
λ_0 r + Σ_{i=1}^m λ_i b_i ≤ θ for (r, b) ∈ G.     (5.52)

From (5.50), and the fact that (λ_0, . . . , λ_m) ≠ 0, we can see that (5.51) and (5.52) imply λ_0 > 0. From (5.51) and (5.52) we get

λ_0 M(b̂) + Σ_{i=1}^m λ_i b̂_i = θ,

so that (5.52) implies

M(b) ≤ M(b̂) + Σ_{i=1}^m (−λ_i/λ_0)(b_i − b̂_i). ♦
Chapter 6
SEQUENTIAL DECISION PROBLEMS:
DISCRETE-TIME OPTIMAL
CONTROL
In this chapter we apply the results of the last two chapters to situations where decisions have to be
made sequentially over time. A very important class of problems where such situations arise is in
the control of dynamical systems. In the first section we give two examples, and in Section 2 we
derive the main result.
6.1 Examples
The trajectory of a vertical sounding rocket is controlled by adjusting the rate of fuel ejection which generates the thrust force. Specifically, suppose that the equations of motion are given by (6.1):

ẋ_1(t) = x_2(t),
ẋ_2(t) = −(C_D/x_3(t))ρ(x_1(t))x_2²(t) − g + (C_T/x_3(t))u(t),
ẋ_3(t) = −u(t),     (6.1)

where x_1(t) is the height of the rocket from the ground at time t, x_2(t) is the (vertical) speed at time t, and x_3(t) is the weight of the rocket (= weight of remaining fuel) at time t. The “dot” denotes differentiation with respect to t. These equations can be derived from the force equations under the assumption that there are four forces acting on the rocket, namely: inertia = x_3ẍ_1 = x_3ẋ_2; the drag force = C_D ρ(x_1)x_2², where C_D is a constant and ρ(x_1) is a friction coefficient depending on atmospheric density, which is a function of x_1; the gravitational force = gx_3, with g assumed constant; and the thrust force C_T ẋ_3, assumed proportional to the rate of fuel ejection. See Figure 6.1. The decision variable at time t is u(t), the rate of fuel ejection. At time 0 we assume that (x_1(0), x_2(0), x_3(0)) = (0, 0, M); that is, the rocket is on the ground, at rest, with initial fuel of weight M. At a prescribed final time t_f, it is desired that the rocket be at a position as high above the ground as possible. Thus, the decision problem can be formalized as (6.2):

Maximize x_1(t_f)
subject to ẋ(t) = f(x(t), u(t)), 0 ≤ t ≤ t_f,
x(0) = (0, 0, M),
u(t) ≥ 0, x_3(t) ≥ 0, 0 ≤ t ≤ t_f,     (6.2)

where x = (x_1, x_2, x_3)′ and f : R^{3+1} → R^3 is the right-hand side of (6.1). The constraint inequalities u(t) ≥ 0 and x_3(t) ≥ 0 are obvious physical constraints.
[Figure 6.1: Forces acting on the rocket: inertia x_3ẍ_1, drag C_D ρ(x_1)x_2², gravitational force gx_3, and thrust C_T ẋ_3.]
The decision problem (6.2) differs from those considered so far in that the decision variables, which are functions u : [0, t_f] → R, cannot be represented as vectors in a finite-dimensional space. We shall treat such problems in great generality in the succeeding chapters. For the moment we assume that for computational or practical reasons it is necessary to approximate or restrict the permissible functions u(·) to be constant over the intervals [0, t_1), [t_1, t_2), . . . , [t_{N−1}, t_f), where t_1, t_2, . . . , t_{N−1} are fixed a priori. But then if we let u(i) be the constant value of u(·) over [t_i, t_{i+1}), we can reformulate (6.2) as (6.3):

Maximize x_1(t_N)   (t_N = t_f)
subject to x(t_{i+1}) = g(i, x(t_i), u(i)), i = 0, 1, . . . , N − 1,
x(t_0) = x(0) = (0, 0, M),
u(i) ≥ 0, x_3(t_i) ≥ 0, i = 0, 1, . . . , N.     (6.3)

In (6.3), g(i, x(t_i), u(i)) is the state of the rocket at time t_{i+1} when it is in state x(t_i) at time t_i and u(t) ≡ u(i) for t_i ≤ t < t_{i+1}.
As another example, consider a simple inventory problem where time enters discretely in a natural fashion. The Squeezme Toothpaste Company wants to plan its production and inventory schedule for the coming month. It is assumed that the demand on the ith day, 0 ≤ i ≤ 30, is d_1(i) for their orange brand and d_2(i) for their green brand. To meet unexpected demand it is necessary that the inventory stock of either brand should not fall below s > 0. If we let s(i) = (s_1(i), s_2(i))′ denote the stock at the beginning of the ith day, and m(i) = (m_1(i), m_2(i))′ denote the amounts manufactured on the ith day, then clearly

s(i + 1) = s(i) + m(i) − d(i),

where d(i) = (d_1(i), d_2(i))′. Suppose that the initial stock is ŝ, the cost of storing inventory s for one day is c(s), and the cost of manufacturing the amount m is b(m). Then the cost-minimization decision problem can be formalized as (6.4):

Maximize −Σ_{i=0}^{30} (c(s(i)) + b(m(i)))
subject to s(i + 1) = s(i) + m(i) − d(i), 0 ≤ i ≤ 29,
s(0) = ŝ,
s(i) ≥ (s, s)′, m(i) ≥ 0, 0 ≤ i ≤ 30.     (6.4)

Before we formulate the general problem, let us note that (6.3) and (6.4) are in the form of nonlinear programming problems. The reason for treating these problems separately is their practical importance, and the fact that the conditions of optimality take on a special form.
6.2 Main Result
The general problem we consider is of the form (6.5):

Maximize Σ_{i=0}^{N−1} f_0(i, x(i), u(i))
subject to
dynamics: x(i + 1) − x(i) = f(i, x(i), u(i)), i = 0, . . . , N − 1,
initial condition: q_0(x(0)) ≤ 0, g_0(x(0)) = 0,
final condition: q_N(x(N)) ≤ 0, g_N(x(N)) = 0,
state-space constraint: q_i(x(i)) ≤ 0, i = 1, . . . , N − 1,
control constraint: h_i(u(i)) ≤ 0, i = 0, . . . , N − 1.     (6.5)

Here x(i) ∈ R^n, u(i) ∈ R^p, and f_0(i, ·, ·) : R^{n+p} → R, f(i, ·, ·) : R^{n+p} → R^n, q_i : R^n → R^{m_i}, g_i : R^n → R^{ℓ_i}, h_i : R^p → R^{s_i} are given differentiable functions. We follow control theory terminology and refer to x(i) as the state of the system at time i, and u(i) as the control or input at time i.
We use the formulation mentioned in the Remark following Theorem 3 of 5.1.2, and construct the Lagrangian function L by

L(x(0), . . . , x(N); u(0), . . . , u(N − 1); p(1), . . . , p(N); λ^0, . . . , λ^N; α^0, α^N; γ^0, . . . , γ^{N−1})
= Σ_{i=0}^{N−1} f_0(i, x(i), u(i)) − {Σ_{i=0}^{N−1} (p(i + 1))′(x(i + 1) − x(i) − f(i, x(i), u(i)))
+ Σ_{i=0}^N (λ^i)′q_i(x(i)) + (α^0)′g_0(x(0)) + (α^N)′g_N(x(N)) + Σ_{i=0}^{N−1} (γ^i)′h_i(u(i))}.
Suppose that CQ is satisfied for (6.5), and x*(0), . . . , x*(N); u*(0), . . . , u*(N − 1) is an optimal solution. Then by Theorem 2 of 5.1.2 there exist p*(i) in R^n for 1 ≤ i ≤ N, λ^{i*} ≥ 0 in R^{m_i} for 0 ≤ i ≤ N, α^{i*} in R^{ℓ_i} for i = 0, N, and γ^{i*} ≥ 0 in R^{s_i} for 0 ≤ i ≤ N − 1, such that
(A) the derivative of L evaluated at these points vanishes, and
(B) (λ^{i*})′q_i(x*(i)) = 0 for 0 ≤ i ≤ N, and (γ^{i*})′h_i(u*(i)) = 0 for 0 ≤ i ≤ N − 1.
We explore condition (A) by taking various partial derivatives.
Differentiating L with respect to x(0) gives

f_0x(0, x*(0), u*(0)) − {−(p*(1))′ − (p*(1))′[f_x(0, x*(0), u*(0))] + (λ^{0*})′[q_0x(x*(0))] + (α^{0*})′[g_0x(x*(0))]} = 0,

or

p*(0) − p*(1) = [f_x(0, x*(0), u*(0))]′p*(1) + [f_0x(0, x*(0), u*(0))]′ − [q_0x(x*(0))]′λ^{0*},     (6.6)

where we have defined

p*(0) = [g_0x(x*(0))]′α^{0*}.     (6.7)
Differentiating L with respect to x(i), 1 ≤ i ≤ N − 1, and re-arranging terms gives

p*(i) − p*(i + 1) = [f_x(i, x*(i), u*(i))]′p*(i + 1) + [f_0x(i, x*(i), u*(i))]′ − [q_ix(x*(i))]′λ^{i*}.     (6.8)

Differentiating L with respect to x(N) gives

p*(N) = −[g_Nx(x*(N))]′α^{N*} − [q_Nx(x*(N))]′λ^{N*}.

It is convenient to replace α^{N*} by −α^{N*}, so that the equation above becomes (6.9):

p*(N) = [g_Nx(x*(N))]′α^{N*} − [q_Nx(x*(N))]′λ^{N*}.     (6.9)
Differentiating L with respect to u(i), 0 ≤ i ≤ N − 1, gives

[f_0u(i, x*(i), u*(i))]′ + [f_u(i, x*(i), u*(i))]′p*(i + 1) − [h_iu(u*(i))]′γ^{i*} = 0.     (6.10)

We summarize our results in a convenient form in Table 6.1.

Table 6.1. Suppose x*(0), . . . , x*(N); u*(0), . . . , u*(N − 1) maximizes Σ_{i=0}^{N−1} f_0(i, x(i), u(i)) subject to
dynamics, i = 0, . . . , N − 1: x(i + 1) − x(i) = f(i, x(i), u(i)),
initial condition: q_0(x*(0)) ≤ 0, g_0(x*(0)) = 0,
final conditions: q_N(x*(N)) ≤ 0, g_N(x*(N)) = 0,
state-space constraint, i = 1, . . . , N − 1: q_i(x*(i)) ≤ 0,
control constraint, i = 0, . . . , N − 1: h_i(u*(i)) ≤ 0.
Then there exist p*(1), . . . , p*(N); λ^{0*}, . . . , λ^{N*}; α^{0*}, α^{N*}; γ^{0*}, . . . , γ^{(N−1)*} such that
adjoint equations, i = 0, . . . , N − 1:
p*(i) − p*(i + 1) = [f_x(i, x*(i), u*(i))]′p*(i + 1) + [f_0x(i, x*(i), u*(i))]′ − [q_ix(x*(i))]′λ^{i*},
transversality conditions:
p*(0) = [g_0x(x*(0))]′α^{0*},
p*(N) = [g_Nx(x*(N))]′α^{N*} − [q_Nx(x*(N))]′λ^{N*},
optimality of the control, i = 0, . . . , N − 1:
[f_0u(i, x*(i), u*(i))]′ + [f_u(i, x*(i), u*(i))]′p*(i + 1) = [h_iu(u*(i))]′γ^{i*},
non-negativity and complementary slackness:
λ^{0*} ≥ 0, (λ^{0*})′q_0(x*(0)) = 0,
λ^{N*} ≥ 0, (λ^{N*})′q_N(x*(N)) = 0,
λ^{i*} ≥ 0, (λ^{i*})′q_i(x*(i)) = 0, i = 1, . . . , N − 1,
γ^{i*} ≥ 0, (γ^{i*})′h_i(u*(i)) = 0, i = 0, . . . , N − 1.

Remark 1: Considerable elegance and mnemonic simplification is achieved if we define the Hamiltonian function H by
H(i, x, u, p) = f_0(i, x, u) + p′f(i, x, u).

The dynamic equations then become

x*(i + 1) − x*(i) = [H_p(i, x*(i), u*(i), p*(i + 1))]′, 0 ≤ i ≤ N − 1,     (6.11)

and the adjoint equations (6.6) and (6.8) become

p*(i) − p*(i + 1) = [H_x(i, x*(i), u*(i), p*(i + 1))]′ − [q_ix(x*(i))]′λ^{i*}, 0 ≤ i ≤ N − 1,

whereas (6.10) becomes

[h_iu(u*(i))]′γ^{i*} = [H_u(i, x*(i), u*(i), p*(i + 1))]′, 0 ≤ i ≤ N − 1.     (6.12)
Remark 2: If we linearize the dynamic equations about the optimal solution we obtain

δx(i + 1) − δx(i) = [f_x(i, x*(i), u*(i))]δx(i) + [f_u(i, x*(i), u*(i))]δu(i),

whose homogeneous part is

z(i + 1) − z(i) = [f_x(i, x*(i), u*(i))]z(i),

which has for its adjoint the system

r(i) − r(i + 1) = [f_x(i, x*(i), u*(i))]′r(i + 1).     (6.13)

Since the homogeneous part of the linear difference equations (6.6), (6.8) is (6.13), we call (6.6), (6.8) the adjoint equations, and the p*(i) are called adjoint variables.
Remark 3: If the f_0(i, ·, ·) are concave and the remaining functions in (6.5) are linear, then CQ is satisfied, and the necessary conditions of Table 6.1 are also sufficient. Furthermore, in this case we can see from (6.12) that u*(i) is an optimal solution of

Maximize H(i, x*(i), u, p*(i + 1))
subject to h_i(u) ≤ 0.

For this reason the result is sometimes called the maximum principle.
Remark 4: The conditions (6.7), (6.9) are called transversality conditions for the following reason. Suppose q_0 ≡ 0 and q_N ≡ 0, so that the initial and final conditions read g_0(x(0)) = 0, g_N(x(N)) = 0, which describe surfaces in R^n. Conditions (6.7), (6.9) become respectively p*(0) = [g_0x(x*(0))]′α^{0*} and p*(N) = [g_Nx(x*(N))]′α^{N*}, which means that p*(0) and p*(N) are respectively orthogonal, or transversal, to the initial and final surfaces. Furthermore, we note that in this case the initial and final conditions specify (ℓ_0 + ℓ_N) conditions, whereas the transversality conditions specify (n − ℓ_0) + (n − ℓ_N) conditions. Thus, we have a total of 2n boundary conditions for the 2n-dimensional system of difference equations (6.5), (6.8); but note that these 2n boundary conditions are mixed, i.e., some of them refer to the initial time 0 and the rest refer to the final time N.
Exercise 1: For the regulator problem,

Minimize (1/2)Σ_{i=0}^{N−1} x(i)′Qx(i) + (1/2)Σ_{i=0}^{N−1} u(i)′Pu(i)
subject to x(i + 1) − x(i) = Ax(i) + Bu(i), 0 ≤ i ≤ N − 1,
x(0) = x̂(0),
u(i) ∈ R^p, 0 ≤ i ≤ N − 1,

where x(i) ∈ R^n, A and B are constant matrices, x̂(0) is fixed, Q = Q′ is positive semi-definite, and P = P′ is positive definite, show that the optimal solution is unique and can be obtained by solving a 2n-dimensional linear difference equation with mixed boundary conditions.
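One way to organize the computation behind Exercise 1 is a backward sweep that propagates a quadratic value function V_i(x) = (1/2)x′S_i x; this is the standard discrete-time Riccati recursion, which the exercise does not spell out, and the matrices below are illustrative.

    import numpy as np

    n, p_dim, N = 2, 1, 20
    A = np.array([[0.0, 1.0], [-0.5, -0.1]])
    B = np.array([[0.0], [1.0]])
    Q = np.eye(n)                 # Q = Q' positive semi-definite
    P = np.eye(p_dim)             # P = P' positive definite
    Ad = np.eye(n) + A            # x(i+1) = (I + A) x(i) + B u(i)

    # Backward sweep: S_N = 0, then S_i and feedback gains K_i.
    S = np.zeros((N + 1, n, n))
    K = np.zeros((N, p_dim, n))
    for i in range(N - 1, -1, -1):
        G = P + B.T @ S[i + 1] @ B
        K[i] = np.linalg.solve(G, B.T @ S[i + 1] @ Ad)
        S[i] = Q + Ad.T @ S[i + 1] @ (Ad - B @ K[i])

    # Forward pass from the given initial state x(0).
    x = np.array([1.0, 0.0])
    cost = 0.0
    for i in range(N):
        u = -K[i] @ x
        cost += 0.5 * (x @ Q @ x + u @ P @ u)
        x = Ad @ x + B @ u
    print("optimal cost (1/2) sum x'Qx + u'Pu =", round(cost, 4))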
Exercise 2: Show that the minimal fuel problem,

Minimize Σ_{i=0}^{N−1} Σ_{j=1}^p |(u(i))_j|,
subject to x(i + 1) − x(i) = Ax(i) + Bu(i), 0 ≤ i ≤ N − 1,
x(0) = x̂(0), x(N) = x̂(N),
u(i) ∈ R^p, |(u(i))_j| ≤ 1, 1 ≤ j ≤ p, 0 ≤ i ≤ N − 1,

can be transformed into a linear programming problem. Here x̂(0), x̂(N) are fixed, and A and B are as in Exercise 1.
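A sketch of the transformation asked for in Exercise 2, using the usual splitting u = u⁺ − u⁻ with u⁺, u⁻ ≥ 0, so that |u| = u⁺ + u⁻ at the optimum; the splitting is the standard device, not spelled out in the text, and the system data are made up.

    import numpy as np
    from scipy.optimize import linprog

    n, p, N = 2, 1, 10
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([[0.0], [1.0]])
    Ad = np.eye(n) + A                      # x(i+1) = Ad x(i) + B u(i)
    x0 = np.array([1.0, 0.0])
    xN = np.zeros(n)                        # drive the state to the origin

    # Unrolled dynamics: x(N) = Ad^N x0 + sum_i Ad^(N-1-i) B u(i) = xN.
    # Variables: u_plus(0..N-1), u_minus(0..N-1), each in [0, 1].
    G = np.hstack([np.linalg.matrix_power(Ad, N - 1 - i) @ B for i in range(N)])
    A_eq = np.hstack([G, -G])               # G u_plus - G u_minus
    b_eq = xN - np.linalg.matrix_power(Ad, N) @ x0
    c = np.ones(2 * N * p)                  # minimize sum (u_plus + u_minus)

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0.0, 1.0)] * (2 * N * p))
    u = res.x[:N * p] - res.x[N * p:]       # recover u = u_plus - u_minus
    print("fuel =", round(res.fun, 4), " u =", u.round(3))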
Chapter 7
SEQUENTIAL DECISION PROBLEMS:
CONTINUOUS-TIME OPTIMAL
CONTROL OF LINEAR SYSTEMS
We will investigate decision problems similar to those studied in the last chapter with one (math-
ematically) crucial difference. A choice of control has to be made at each instant of time t where
t varies continuously over a finite interval. The evolution in time of the state of the systems to be
controlled is governed by a differential equation of the form:
ẋ(t) = f(t, x(t), u(t)),

where x(t) ∈ R^n and u(t) ∈ R^p are respectively the state and control of the system at time t.
To understand the main ideas and techniques of analysis it will prove profitable to study the linear
case first. The general nonlinear case is deferred to the next chapter. In Section 1 we present the
general linear problem and study the case where the initial and final conditions are particularly
simple. In Section 2 we study more general boundary conditions.
7.1 The Linear Optimal Control Problem
We consider a dynamical system governed by the linear differential equation (7.1):

ẋ(t) = A(t)x(t) + B(t)u(t), t ≥ t_0.     (7.1)

Here A(·) and B(·) are n × n- and n × p-matrix valued functions of time; we assume that they are piecewise continuous. The control u(·) is constrained to take values in a fixed set Ω ⊂ R^p and to be piecewise continuous.
Definition: A piecewise continuous function u : [t_0, ∞) → Ω will be called an admissible control. U denotes the set of all admissible controls.
Let c ∈ R^n, x_0 ∈ R^n be fixed and let t_f ≥ t_0 be a fixed time. We are concerned with the decision problem (7.2):

Maximize c′x(t_f)
subject to
dynamics: ẋ(t) = A(t)x(t) + B(t)u(t), t_0 ≤ t ≤ t_f,
initial condition: x(t_0) = x_0,
final condition: x(t_f) ∈ R^n,
control constraint: u(·) ∈ U.     (7.2)
Definition: (i) For any piecewise continuous function u(·) : [t_0, t_f] → R^p, for any z ∈ R^n, and any t_0 ≤ t_1 ≤ t_2 ≤ t_f, let

φ(t_2, t_1, z, u)

denote the state of (7.1) at time t_2, if at time t_1 it is in state z and the control u(·) is applied.
(ii) Let

K(t_2, t_1, z) = {φ(t_2, t_1, z, u) | u ∈ U}.

Thus, K(t_2, t_1, z) is the set of states reachable at time t_2, starting at time t_1 in state z and using admissible controls. We call K the reachable set.
Definition: Let Φ(t, τ), t_0 ≤ τ ≤ t ≤ t_f, be the transition-matrix function of the homogeneous part of (7.1); i.e., Φ satisfies the differential equation

∂Φ/∂t (t, τ) = A(t)Φ(t, τ)

and the boundary condition

Φ(t, t) ≡ I_n.

The next result is well known. (See Desoer [1970].)
Lemma 1: φ(t_2, t_1, z, u) = Φ(t_2, t_1)z + ∫_{t_1}^{t_2} Φ(t_2, τ)B(τ)u(τ)dτ.
Exercise 1: (i) Assuming that Ω is convex, show that U is a convex set. (ii) Assuming that U is convex, show that K(t_2, t_1, z) is a convex set. (It is a deep result that K(t_2, t_1, z) is convex even if Ω is not convex (see Neustadt [1963]), provided we include in U every measurable function u : [t_0, ∞) → Ω.)
Definition: Let K ⊂ R^n and let x* ∈ K. We say that c is the outward normal to a hyperplane supporting K at x* if c ≠ 0 and

c′x* ≥ c′x for all x ∈ K.
The next result gives a geometric characterization of the optimal solutions of (7.2).
Lemma 2: Suppose c ≠ 0. Let u*(·) ∈ U and let x*(t) = φ(t, t_0, x_0, u*). Then u* is an optimal solution of (7.2) iff
(i) x*(t_f) is on the boundary of K = K(t_f, t_0, x_0), and
(ii) c is the outward normal to a hyperplane supporting K at x*(t_f). (See Figure 7.1.)
[Figure 7.1: c is the outward normal to the hyperplane π* = {x | c′x = c′x*(t_f)} supporting K at x*(t_f).]
Proof: Clearly (i) is implied by (ii), because if x*(t_f) is in the interior of K there is δ > 0 such that (x*(t_f) + δc) ∈ K; but then

c′(x*(t_f) + δc) = c′x*(t_f) + δ|c|² > c′x*(t_f).

Finally, from the definition of K it follows immediately that u* is optimal iff c′x*(t_f) ≥ c′x for all x ∈ K. ♦
The result above characterizes the optimal control u* in terms of the final state x*(t_f). The beauty and utility of the theory lies in the following result, which translates this characterization directly in terms of u*.
Theorem 1: Let u*(·) ∈ U and let x*(t) = φ(t, t_0, x_0, u*), t_0 ≤ t ≤ t_f. Let p*(t) be the solution of (7.3) and (7.4):

adjoint equation: ṗ*(t) = −A′(t)p*(t), t_0 ≤ t ≤ t_f,     (7.3)
final condition: p*(t_f) = c.     (7.4)

Then u*(·) is optimal iff

(p*(t))′B(t)u*(t) = sup{(p*(t))′B(t)v | v ∈ Ω},     (7.5)

for all t ∈ [t_0, t_f], except possibly for a finite set.
Proof: u*(·) is optimal iff for every u(·) ∈ U,

(p*(t_f))′[Φ(t_f, t_0)x_0 + ∫_{t_0}^{t_f} Φ(t_f, τ)B(τ)u*(τ)dτ]
≥ (p*(t_f))′[Φ(t_f, t_0)x_0 + ∫_{t_0}^{t_f} Φ(t_f, τ)B(τ)u(τ)dτ],

which is equivalent to (7.6):

∫_{t_0}^{t_f} (p*(t_f))′Φ(t_f, τ)B(τ)u*(τ)dτ ≥ ∫_{t_0}^{t_f} (p*(t_f))′Φ(t_f, τ)B(τ)u(τ)dτ.     (7.6)

Now by properties of the adjoint equation we know that (p*(t))′ = (p*(t_f))′Φ(t_f, t), so that (7.6) is equivalent to (7.7):

∫_{t_0}^{t_f} (p*(τ))′B(τ)u*(τ)dτ ≥ ∫_{t_0}^{t_f} (p*(τ))′B(τ)u(τ)dτ,     (7.7)

and the sufficiency of (7.5) is immediate.
To prove the necessity, let D be the finite set of points where the function B(·) or u*(·) is discontinuous. We shall show that if u*(·) is optimal then (7.5) is satisfied for t ∉ D. Indeed, if this is not the case, then there exist t* ∈ [t_0, t_f], t* ∉ D, and v ∈ Ω such that

(p*(t*))′B(t*)u*(t*) < (p*(t*))′B(t*)v,

and since t* is a point of continuity of B(·) and u*(·), it follows that there exists δ > 0 such that

(p*(t))′B(t)u*(t) < (p*(t))′B(t)v, for |t − t*| < δ.     (7.8)

Define ũ(·) ∈ U by

ũ(t) = v for |t − t*| < δ, t ∈ [t_0, t_f],
ũ(t) = u*(t) otherwise.

Then (7.8) implies that

∫_{t_0}^{t_f} (p*(t))′B(t)ũ(t)dt > ∫_{t_0}^{t_f} (p*(t))′B(t)u*(t)dt.

But then from (7.7) we see that u*(·) cannot be optimal, giving a contradiction. ♦
Corollary 1: For t_0 ≤ t_1 ≤ t_2 ≤ t_f,

(p*(t_2))′x*(t_2) ≥ (p*(t_2))′x for all x ∈ K(t_2, t_1, x*(t_1)).     (7.9)

Exercise 2: Prove Corollary 1.
Remark 1: The geometric meaning of (7.9) is the following. Taking t_1 = t_0 in (7.9), we see that if u*(·) is optimal, i.e., if c = p*(t_f) is the outward normal to a hyperplane supporting K(t_f, t_0, x_0) at x*(t_f), then x*(t) is on the boundary of K(t, t_0, x_0) and p*(t) is the normal to a hyperplane supporting K(t, t_0, x_0) at x*(t). This normal is obtained by transporting backwards in time, via the adjoint differential equation, the outward normal p*(t_f) at time t_f. The situation is illustrated in Figure 7.2.
Remark 2: If we define the Hamiltonian function H by

H(t, x, u, p) = p′(A(t)x + B(t)u),

and we define M by

M(t, x, p) = sup{H(t, x, u, p) | u ∈ Ω},

then (7.5) can be rewritten as

H(t, x*(t), u*(t), p*(t)) = M(t, x*(t), p*(t)).     (7.10)

This condition is known as the maximum principle.
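When A and B are constant and Ω = [α, β], Theorem 1 can be applied directly: p*(t) = exp(A′(t_f − t))c solves (7.3)-(7.4), and (7.5) picks u*(t) according to the sign of (p*(t))′B. A numerical sketch with made-up data (the forward Euler integration is used only for illustration):

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0], [-2.0, -0.1]])
    B = np.array([[0.0], [1.0]])
    c = np.array([1.0, 0.0])                # maximize c'x(t_f)
    alpha, beta = -1.0, 1.0                 # Omega = [alpha, beta]
    t0, tf, steps = 0.0, 5.0, 2000
    ts, dt = np.linspace(t0, tf, steps, retstep=True)

    x = np.zeros(2)                         # x(t_0) = 0
    for t in ts[:-1]:
        p = expm(A.T * (tf - t)) @ c        # adjoint solution, p(t_f) = c
        switch = (p @ B).item()             # sign of p*(t)'B decides u*(t)
        u = beta if switch > 0 else alpha
        x = x + dt * (A @ x + B.flatten() * u)   # forward Euler step
    print("c'x(t_f) =", round(float(c @ x), 4))

The resulting control is bang-bang, taking only the extreme values α and β, which is exactly the qualitative behavior established in Exercises 4 and 5 below.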
Exercise 3: (i) Show that m(t) = M(t, x*(t), p*(t)) is a Lipschitz function of t. (ii) If A(t), B(t) are constant, show that m(t) is constant. (Hint: Show that (dm/dt) ≡ 0.)
The next two exercises show how we can obtain important qualitative properties of an optimal control.
Exercise 4: Suppose that Ω is bounded and closed. Show that there exists an optimal control u*(·) such that u*(t) belongs to the boundary of Ω for all t.
Exercise 5: Suppose Ω = [α, β], so that B(t) is an n × 1 matrix. Suppose that A(t) ≡ A and B(t) ≡ B are constant matrices and A has n real eigenvalues. Show that there is an optimal control u*(·) and t_0 ≤ t_1 ≤ t_2 ≤ . . . ≤ t_n ≤ t_f such that u*(t) ≡ α or β on [t_i, t_{i+1}), 0 ≤ i ≤ n. (Hint: First show that (p*(t))′B = γ_1 exp(δ_1 t) + . . . + γ_n exp(δ_n t) for some γ_i, δ_i in R.)
Exercise 6: Assume that K(t_f, t_0, x_0) is convex (see the remark in Exercise 1 above). Let f_0 : R^n → R be a differentiable function, and suppose that the objective function in (7.2) is f_0(x(t_f)) instead of c′x(t_f). Suppose u*(·) is an optimal control. Show that u*(·) satisfies the maximum principle (7.10), where p*(·) is the solution of the adjoint equation (7.3) with the final condition

p*(t_f) = [f_0x(x*(t_f))]′.

Also show that this condition is sufficient for optimality if f_0 is concave. (Hint: Use Lemma 1 of 5.1.1 to show that if u*(·) is optimal, then f_0x(x*(t_f))(x*(t_f) − x) ≥ 0 for all x ∈ K(t_f, t_0, x_0).)
7.2 More General Boundary Conditions
We consider the following generalization of (7.2); the notation of the previous section is retained.

Maximize c′x(t_f)
subject to
dynamics: ẋ(t) = A(t)x(t) + B(t)u(t), t_0 ≤ t ≤ t_f,
initial condition: G_0 x(t_0) = b_0,
final condition: G_f x(t_f) = b_f,
control constraint: u(·) ∈ U, i.e., u : [t_0, t_f] → Ω and u(·) piecewise continuous.     (7.11)

In (7.11), G_0 and G_f are fixed matrices of dimensions ℓ_0 × n and ℓ_f × n respectively, while b_0 ∈ R^{ℓ_0} and b_f ∈ R^{ℓ_f} are fixed vectors.
We will analyze the problem in the same way as before. That is, we first characterize optimality in terms of the state at the final time, and then translate these conditions in terms of the control. For convenience, let

T_0 = {z ∈ R^n | G_0 z = b_0},
T_f = {z ∈ R^n | G_f z = b_f}.

Definition: Let p ∈ R^n and let z* ∈ T_0. We say that p is orthogonal to T_0 at z*, and we write p ⊥ T_0(z*), if
p′(z − z*) = 0 for all z ∈ T_0.

Similarly, if z* ∈ T_f, we write p ⊥ T_f(z*) if

p′(z − z*) = 0 for all z ∈ T_f.

[Figure 7.2: Illustration of (7.9) for t_1 = t_0: the reachable sets K(t, t_0, x_0) at times t_0, t_1, t_2, t_f, with x*(t) on the boundary of each set and p*(t) the outward normal to a supporting hyperplane at x*(t); p*(t_f) = c.]
Definition: Let X(t_f) = {Φ(t_f, t_0)z + w | z ∈ T_0, w ∈ K(t_f, t_0, 0)}.
Exercise 1: X(t_f) = {φ(t_f, t_0, z, u) | z ∈ T_0, u(·) ∈ 𝒰}.
Lemma 1: Let x*(t_0) ∈ T_0 and u*(·) ∈ 𝒰. Let x*(t) = φ(t, t_0, x*(t_0), u*), and suppose that x*(t_f) ∈ T_f.
(i) Suppose that Ω is convex. If u*(·) is optimal, there exist p̂_0 ∈ R, p̂_0 ≥ 0, and p̂ ∈ R^n, not both zero, such that
(p̂_0 c + p̂)'x*(t_f) ≥ (p̂_0 c + p̂)'x for all x ∈ X(t_f),   (7.12)
p̂ ⊥ T_f(x*(t_f)),   (7.13)
[Φ(t_f, t_0)]'(p̂_0 c + p̂) ⊥ T_0(x*(t_0)).   (7.14)
(ii) Conversely, if there exist p̂_0 > 0 and p̂ such that (7.12) and (7.13) are satisfied, then u*(·) is optimal and (7.14) is also satisfied.
Proof: Clearly u*(·) is optimal iff
c'x*(t_f) ≥ c'x for all x ∈ X(t_f) ∩ T_f.   (7.15)
(i) Suppose that u*(·) is optimal. In R^{1+n} define sets S_1, S_2 by
S_1 = {(r, x) | r > c'x*(t_f), x ∈ T_f},   (7.16)
S_2 = {(r, x) | r = c'x, x ∈ X(t_f)}.   (7.17)
First of all, S_1 ∩ S_2 = ∅, because otherwise there exists x ∈ X(t_f) ∩ T_f such that c'x > c'x*(t_f), contradicting optimality of u*(·) by (7.15).
Secondly, S_1 is convex since T_f is convex. Since Ω is convex by hypothesis, it follows by Exercise 1 of Section 1 that S_2 is convex.
But then by the separation theorem for convex sets (see 5.5) there exist p̂_0 ∈ R, p̂ ∈ R^n, not both zero, such that
p̂_0 r_1 + p̂'x_1 ≥ p̂_0 r_2 + p̂'x_2 for all (r_i, x_i) ∈ S_i, i = 1, 2.   (7.18)
In particular (7.18) implies that
p̂_0 r + p̂'x*(t_f) ≥ p̂_0 c'x + p̂'x for all x ∈ X(t_f), r > c'x*(t_f).   (7.19)
Letting r → ∞ we conclude that (7.19) can hold only if p̂_0 ≥ 0. On the other hand, letting r → c'x*(t_f) we see that (7.19) can hold only if
p̂_0 c'x*(t_f) + p̂'x*(t_f) ≥ p̂_0 c'x + p̂'x for all x ∈ X(t_f),   (7.20)
which is the same as (7.12). Also from (7.18) we get
p̂_0 r + p̂'x ≥ p̂_0 c'x*(t_f) + p̂'x*(t_f) for all r > c'x*(t_f), x ∈ T_f,
which can hold only if
p̂_0 c'x*(t_f) + p̂'x ≥ p̂_0 c'x*(t_f) + p̂'x*(t_f) for all x ∈ T_f,
or
p̂'(x − x*(t_f)) ≥ 0 for all x ∈ T_f.   (7.21)
But {x − x*(t_f) | x ∈ T_f} = {z | G_f z = 0} is a subspace of R^n, so that (7.21) can hold only if
p̂'(x − x*(t_f)) = 0 for all x ∈ T_f,
which is the same as (7.13). Finally, (7.12) always implies (7.14), because by the definition of X(t_f) and Exercise 1, {Φ(t_f, t_0)(z − x*(t_0)) + x*(t_f)} ∈ X(t_f) for all z ∈ T_0, so that from (7.12) we get
0 ≥ (p̂_0 c + p̂)'Φ(t_f, t_0)(z − x*(t_0)) for all z ∈ T_0,
which can hold only if (7.14) holds.
(ii) Now suppose that p̂_0 > 0 and p̂ are such that (7.12), (7.13) are satisfied. Let x̃ ∈ X(t_f) ∩ T_f. Then from (7.13) we conclude that
p̂'x*(t_f) = p̂'x̃,
so that from (7.12) we get
p̂_0 c'x*(t_f) ≥ p̂_0 c'x̃;
but then by (7.15) u*(·) is optimal. ♦
Remark 1: If it is possible to choose p̂_0 > 0, then p̂_0 = 1, p̂ = (p̂/p̂_0) will also satisfy (7.12), (7.13), and (7.14). In particular, in part (ii) of the Lemma we may assume p̂_0 = 1.
Remark 2: It would be natural to conjecture that in part (i) p̂_0 may be chosen > 0. But in Figure 7.3 below we illustrate a 2-dimensional situation where T_0 = {x_0}, T_f is the vertical line, and T_f ∩ X(t_f) consists of just one vector. It follows that the control u*(·) ∈ 𝒰 for which x*(t_f) = φ(t_f, t_0, x_0, u*) ∈ T_f is optimal for any c. Clearly then, for some c (in particular for the c in Figure 7.3) we are forced to set p̂_0 = 0. In higher dimensions the reasons may be more complicated, but basically if T_f is "tangent" to X(t_f) we may be forced to set p̂_0 = 0 (see Exercise 2 below). Finally, we note that part (i) is not too useful if p̂_0 = 0, since then (7.12), (7.13), and (7.14) hold for any vector c whatsoever. Intuitively, p̂_0 = 0 means that it is so difficult to satisfy the initial and final boundary conditions in (7.11) that optimization becomes a secondary matter.
Remark 3: In (i) the convexity of Ω is only used to guarantee that K(t_f, t_0, 0) is convex. But it is known that K(t_f, t_0, 0) is convex even if Ω is not (see Neustadt [1963]).
Exercise 2: Suppose there exists z in the interior of X(t_f) such that z ∈ T_f. Then in part (i) we must have p̂_0 > 0.
We now translate the conditions obtained in Lemma 1 in terms of the control u*.
Theorem 1: Let x*(t_0) ∈ T_0 and u*(·) ∈ 𝒰. Let x*(t) = φ(t, t_0, x*(t_0), u*) and suppose that x*(t_f) ∈ T_f.
(i) Suppose that Ω is convex. If u*(·) is optimal for (7.11), then there exist a number p*_0 ≥ 0 and a function p* : [t_0, t_f] → R^n, not both identically zero, satisfying
adjoint equation: ṗ*(t) = −A'(t)p*(t), t_0 ≤ t ≤ t_f,   (7.22)
initial condition: p*(t_0) ⊥ T_0(x*(t_0)),   (7.23)
final condition: (p*(t_f) − p*_0 c) ⊥ T_f(x*(t_f)),   (7.24)
and the maximum principle
H(t, x*(t), u*(t), p*(t)) = M(t, x*(t), p*(t))   (7.25)
holds for all t ∈ [t_0, t_f] except possibly for a finite set.
(ii) Conversely, suppose there exist p*_0 > 0 and p*(·) satisfying (7.22), (7.23), (7.24), and (7.25). Then u*(·) is optimal.
[Here H(t, x, u, p) = p'(A(t)x + B(t)u), M(t, x, p) = sup{H(t, x, v, p) | v ∈ Ω}.]
Proof: A repetition of a part of the argument in the proof of Theorem 1 of Section 1 shows that if p* satisfies (7.22), then (7.25) is equivalent to (7.26):
(p*(t_f))'x*(t_f) ≥ (p*(t_f))'x for all x ∈ K(t_f, t_0, x*(t_0)).   (7.26)
(i) Suppose u*(·) is optimal and Ω is convex. Then by Lemma 1 there exist p̂_0 ≥ 0 and p̂ ∈ R^n, not both zero, such that (7.12), (7.13), and (7.14) are satisfied. Let p*_0 = p̂_0 and let p*(·) be the solution of (7.22) with the final condition
p*(t_f) = p*_0 c + p̂ = p̂_0 c + p̂.
Then (7.14) and (7.13) are respectively equivalent to (7.23) and (7.24), whereas since K(t_f, t_0, x*(t_0)) ⊂ X(t_f), (7.26) is implied by (7.12).
(ii) Suppose p*_0 > 0 and (7.22), (7.23), (7.24), and (7.26) are satisfied. Let p̂_0 = p*_0 and p̂ = p*(t_f) − p*_0 c, so that (7.24) becomes equivalent to (7.13). Next, if x ∈ X(t_f) we have
(p̂_0 c + p̂)'x = (p*(t_f))'x = (p*(t_f))'(Φ(t_f, t_0)z + w)
for some z ∈ T_0 and some w ∈ K(t_f, t_0, 0). Hence
(p̂_0 c + p̂)'x = (p*(t_f))'Φ(t_f, t_0)(z − x*(t_0)) + (p*(t_f))'(w + Φ(t_f, t_0)x*(t_0))
= (p*(t_0))'(z − x*(t_0)) + (p*(t_f))'(w + Φ(t_f, t_0)x*(t_0)).
But by (7.23) the first term on the right vanishes, and since (w + Φ(t_f, t_0)x*(t_0)) ∈ K(t_f, t_0, x*(t_0)), it follows from (7.26) that the second term is bounded by (p*(t_f))'x*(t_f). Thus
(p̂_0 c + p̂)'x*(t_f) ≥ (p̂_0 c + p̂)'x for all x ∈ X(t_f),
and so u*(·) is optimal by Lemma 1. ♦
Figure 7.3: A situation where p̂_0 = 0: T_0 = {x_0}, T_f is a vertical line, and K(t_f, t_0, x_0) = X(t_f) meets T_f in the single point x*(t_f).
Exercise 3: Suppose that the control constraint set is Ω(t), which varies continuously with t, and we require that u(t) ∈ Ω(t) for all t. Show that Theorem 1 also holds for this case where, in (7.25), M(t, x, p) = sup{H(t, x, v, p) | v ∈ Ω(t)}.
Exercise 4: How would you use Exercise 3 to solve Example 3 of Chapter 1?
Chapter 8
SEQUENTIAL DECISION PROBLEMS:
CONTINUOUS-TIME OPTIMAL
CONTROL OF NONLINEAR SYSTEMS
We now present a sweeping generalization of the problem studied in the last chapter. Unfortunately we are forced to omit the proofs of the results, since they require a level of mathematical sophistication beyond the scope of these Notes. However, it is possible to convey the main ideas of the proofs at an intuitive level, and we shall do so. (For complete proofs see (Lee and Markus [1967]) or (Pontryagin, et al. [1962]).) The principal result, which is a direct generalization of Theorem 1 of 7.2, is presented in Section 1. An alternative form of the objective function is discussed in Section 2. Section 3 deals with the minimum-time problem and Section 4 considers the important special case of linear systems with quadratic cost. Finally, in Section 5 we discuss the so-called singular case and also analyze Example 4 of Chapter 1.
8.1 Main Results
8.1.1 Preliminary results based on differential equation theory.
We are interested in the optimal control of a system whose dynamics are governed by the nonlinear differential equation
ẋ(t) = f(t, x(t), u(t)), t_0 ≤ t ≤ t_f,   (8.1)
where x(t) ∈ R^n is the state and u(t) ∈ R^p is the control. Suppose u*(·) is an optimal control and x*(·) is the corresponding trajectory. In the case of linear systems we obtained the necessary conditions for optimality by comparing x*(·) with trajectories x(·) corresponding to other admissible controls u(·). This comparison was possible because we had an explicit characterization of x(·) in terms of u(·). Unfortunately, when f is nonlinear such a characterization is not available. Instead we shall settle for a comparison between the trajectory x*(·) and trajectories x(·) obtained by perturbing the control u*(·) and the initial condition x*(t_0). We can then estimate the difference between x(·) and x*(·) by the solution to a linear differential equation, as shown in Lemma 1 below. But first we need to impose some regularity conditions on the differential equation (8.1). We assume throughout that the function f : [t_0, t_f] × R^n × R^p → R^n satisfies the following conditions:
1. for each fixed t ∈ [t_0, t_f], f(t, ·, ·) : R^n × R^p → R^n is continuously differentiable in the remaining variables (x, u),
2. except for a finite subset D ⊂ [t_0, t_f], the functions f, f_x, f_u are continuous on [t_0, t_f] × R^n × R^p, and
3. for every finite α, there exist finite numbers β and γ such that
|f(t, x, u)| ≤ β + γ|x| for all t ∈ [t_0, t_f], x ∈ R^n, u ∈ R^p with |u| ≤ α.
The following result is proved in every standard treatise on differential equations.
Theorem 1: For every z ∈ R^n, for every t_1 ∈ [t_0, t_f], and for every piecewise continuous function u(·) : [t_0, t_f] → R^p, there exists a unique solution
x(t) = φ(t, t_1, z, u(·)), t_1 ≤ t ≤ t_f,
of the differential equation
ẋ(t) = f(t, x(t), u(t)), t_1 ≤ t ≤ t_f,
satisfying the initial condition
x(t_1) = z.
Furthermore, for fixed t_1 ≤ t_2 in [t_0, t_f] and fixed u(·), the function φ(t_2, t_1, ·, u(·)) : R^n → R^n is differentiable. Moreover, the n × n matrix-valued function Φ defined by
Φ(t_2, t_1, z, u(·)) = (∂φ/∂z)(t_2, t_1, z, u(·))
is the solution of the linear homogeneous differential equation
(∂Φ/∂t)(t, t_1, z, u(·)) = [(∂f/∂x)(t, x(t), u(t))] Φ(t, t_1, z, u(·)), t_1 ≤ t ≤ t_f,
with the initial condition
Φ(t_1, t_1, z, u(·)) = I_n.
Now let Ω ⊂ R^p be a fixed set and let 𝒰 be the set of all piecewise continuous functions u(·) : [t_0, t_f] → Ω. Let u*(·) ∈ 𝒰 be fixed and let D* be the set of discontinuity points of u*(·). Let x*_0 ∈ R^n be a fixed initial condition.
Definition: π = (t_1, ..., t_m; ℓ_1, ..., ℓ_m; u_1, ..., u_m) is said to be perturbation data for u*(·) if
1. m is a nonnegative integer,
2. t_0 < t_1 < t_2 < ... < t_m < t_f, and t_i ∉ D* ∪ D, i = 1, ..., m (recall that D is the set of discontinuity points of f),
3. ℓ_i ≥ 0, i = 1, ..., m, and
4. u_i ∈ Ω, i = 1, ..., m.
Let ε(π) > 0 be such that for 0 ≤ ε ≤ ε(π) we have [t_i − εℓ_i, t_i] ⊂ [t_0, t_f] for all i, and [t_i − εℓ_i, t_i] ∩ [t_j − εℓ_j, t_j] = ∅ for i ≠ j. Then for 0 ≤ ε ≤ ε(π), the perturbed control u_{(π,ε)}(·) ∈ 𝒰 corresponding to π is defined by
u_{(π,ε)}(t) = u_i for all t ∈ [t_i − εℓ_i, t_i], i = 1, ..., m,
u_{(π,ε)}(t) = u*(t) otherwise.
Definition: Any vector ξ ∈ R^n is said to be a perturbation for x*_0, and a function x_{(ξ,ε)} defined for ε > 0 is said to be a perturbed initial condition if
lim_{ε→0} x_{(ξ,ε)} = x*_0
and
lim_{ε→0} (1/ε)(x_{(ξ,ε)} − x*_0) = ξ.
Now let x*(t) = φ(t, t_0, x*_0, u*(·)) and let x_ε(t) = φ(t, t_0, x_{(ξ,ε)}, u_{(π,ε)}(·)). Let Φ(t_2, t_1) = Φ(t_2, t_1, x*(t_1), u*(·)). The following lemma gives an estimate of x*(t) − x_ε(t). The proof of the lemma is a straightforward exercise in estimating differences of solutions to differential equations, and it is omitted (see, for example, (Lee and Markus [1967])).
Lemma 1: lim_{ε→0} (1/ε)|x_ε(t) − x*(t) − εh_{(π,ξ)}(t)| = 0 for t ∈ [t_0, t_f], where h_{(π,ξ)}(·) is given by
h_{(π,ξ)}(t) = Φ(t, t_0)ξ for t ∈ [t_0, t_1),
h_{(π,ξ)}(t) = Φ(t, t_0)ξ + Φ(t, t_1)[f(t_1, x*(t_1), u_1) − f(t_1, x*(t_1), u*(t_1))]ℓ_1 for t ∈ [t_1, t_2),
h_{(π,ξ)}(t) = Φ(t, t_0)ξ + Σ_{j=1}^{i} Φ(t, t_j)[f(t_j, x*(t_j), u_j) − f(t_j, x*(t_j), u*(t_j))]ℓ_j for t ∈ [t_i, t_{i+1}),
h_{(π,ξ)}(t) = Φ(t, t_0)ξ + Σ_{j=1}^{m} Φ(t, t_j)[f(t_j, x*(t_j), u_j) − f(t_j, x*(t_j), u*(t_j))]ℓ_j for t ∈ [t_m, t_f].
(See Figure 8.1.)
We call h_{(π,ξ)}(·) the linearized (trajectory) perturbation corresponding to (π, ξ).
Definition: For z ∈ R^n and t ∈ [t_0, t_f] let
K(t, t_0, z) = {φ(t, t_0, z, u(·)) | u(·) ∈ 𝒰}
be the set of states reachable at time t, starting at time t_0 in state z, and using controls u(·) ∈ 𝒰.
Definition: For each t ∈ [t_0, t_f], let
Q(t) = {h_{(π,0)}(t) | π is perturbation data for u*(·), and h_{(π,0)}(·) is the linearized perturbation corresponding to (π, 0)}.
Remark: By Lemma 1, (x*(t) + εh_{(π,ξ)}(t)) belongs to the set K(t, t_0, x_{(ξ,ε)}) up to an error of order o(ε). In particular, for ξ = 0, the set x*(t) + Q(t) can serve as an approximation to the set K(t, t_0, x*_0). More precisely, we have the following result, which we leave as an exercise.
Figure 8.1: Illustration for Lemma 1: the perturbed control u_{(π,ε)}(·) equals u_1, u_2, u_3 on intervals of lengths εℓ_1, εℓ_2, εℓ_3 ending at t_1, t_2, t_3, and equals u*(·) elsewhere; the perturbed trajectory x_ε(·), starting from x_{(ξ,ε)}, differs from x*(·) by approximately εh_{(π,ξ)}.
Exercise 1: (Recall the definition of the tangent cone in 5.1.1.) Show that
Q(t) ⊂ C(K(t, t_0, x*_0), x*(t)).   (8.2)
We can now prove a generalization of Theorem 1 of 7.1.
Theorem 2: Consider the optimal control problem (8.3):
Maximize ψ(x(t_f))
subject to
dynamics: ẋ(t) = f(t, x(t), u(t)), t_0 ≤ t ≤ t_f,
initial condition: x(t_0) = x*_0,
final condition: x(t_f) ∈ R^n,
control constraint: u(·) ∈ 𝒰, i.e., u : [t_0, t_f] → Ω and u(·) piecewise continuous,   (8.3)
where ψ : R^n → R is differentiable and f satisfies the conditions listed earlier.
Let u*(·) ∈ 𝒰 be an optimal control and let x*(t) = φ(t, t_0, x*_0, u*(·)), t_0 ≤ t ≤ t_f, be the corresponding trajectory. Let p*(t), t_0 ≤ t ≤ t_f, be the solution of (8.4) and (8.5):
adjoint equation: ṗ*(t) = −[(∂f/∂x)(t, x*(t), u*(t))]'p*(t), t_0 ≤ t ≤ t_f,   (8.4)
final condition: p*(t_f) = ∇ψ(x*(t_f)).   (8.5)
Then u*(·) satisfies the maximum principle
H(t, x*(t), u*(t), p*(t)) = M(t, x*(t), p*(t))   (8.6)
for all t ∈ [t_0, t_f] except possibly for a finite set. [Here H(t, x, u, p) = p'f(t, x, u), M(t, x, p) = sup{H(t, x, v, p) | v ∈ Ω}.]
Proof: Since u*(·) is optimal we must have
ψ(x*(t_f)) ≥ ψ(z) for all z ∈ K(t_f, t_0, x*_0),
and so by Lemma 1 of 5.1.1
ψ_x(x*(t_f))h ≤ 0 for all h ∈ C(K(t_f, t_0, x*_0), x*(t_f)),
and in particular, from (8.2),
ψ_x(x*(t_f))h ≤ 0 for all h ∈ Q(t_f).   (8.7)
Now suppose that (8.6) does not hold for some t* ∉ D* ∪ D. Then there exists v ∈ Ω such that
p*(t*)'[f(t*, x*(t*), v) − f(t*, x*(t*), u*(t*))] > 0.   (8.8)
If we consider the perturbation data π = (t*; 1; v), then (8.8) is equivalent to
p*(t*)'h_{(π,0)}(t*) > 0.   (8.9)
Now from (8.4) we can see that p*(t*)' = p*(t_f)'Φ(t_f, t*). Also h_{(π,0)}(t_f) = Φ(t_f, t*)h_{(π,0)}(t*), so that (8.9) is equivalent to
p*(t_f)'h_{(π,0)}(t_f) > 0,
which contradicts (8.7). ♦
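The proof suggests a practical test for a candidate control: integrate x* forward, p* backward via (8.4)-(8.5), and measure how far u*(t) is from maximizing the Hamiltonian at each t. A sketch under illustrative assumptions (scalar dynamics, a grid over Ω, Euler integration):

```python
import numpy as np

f  = lambda t, x, u: -x + u           # assumed dynamics
fx = lambda t, x, u: -1.0             # its x-derivative
grad_psi = lambda x: -2.0 * x         # assumed psi(x) = -x^2

T, N = 1.0, 1000
h, ts = T / N, np.linspace(0.0, T, 1001)
u_cand = np.ones(N + 1)               # candidate control to be tested

x = np.zeros(N + 1); x[0] = 1.0       # forward state pass
for i in range(N):
    x[i + 1] = x[i] + h * f(ts[i], x[i], u_cand[i])

p = np.zeros(N + 1); p[N] = grad_psi(x[N])   # backward adjoint pass
for i in range(N, 0, -1):
    p[i - 1] = p[i] + h * fx(ts[i], x[i], u_cand[i]) * p[i]

Omega = np.linspace(-1.0, 1.0, 201)   # grid over the control set
gap = max(np.max(p[i] * f(ts[i], x[i], Omega)) - p[i] * f(ts[i], x[i], u_cand[i])
          for i in range(N + 1))
print("max Hamiltonian gap:", gap)    # near zero only if (8.6) holds
```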
8.1.2 More general boundary conditions.
In Theorem 2 the initial condition is fixed and the final condition is free. The problem involving more general boundary conditions is much more complicated and requires more refined analysis. Specifically, Lemma 1 needs to be extended to Lemma 2 below. But first we need some simple properties of the sets Q(t), which we leave as exercises.
Exercise 2: Show that
(i) Q(t) is a cone, i.e., if h ∈ Q(t) and λ ≥ 0, then λh ∈ Q(t),
(ii) for t_0 ≤ t_1 ≤ t_2 ≤ t_f, Φ(t_2, t_1)Q(t_1) ⊂ Q(t_2).
Definition: Let C(t) denote the closure of Q(t).
Exercise 3: Show that
(i) C(t) is a convex cone,
(ii) for t_0 ≤ t_1 ≤ t_2 ≤ t_f, Φ(t_2, t_1)C(t_1) ⊂ C(t_2).
Remark: From Lemma 1 we know that if h ∈ C(t), then (x*(t) + εh) belongs to K(t, t_0, x*(t_0)) up to an error of order o(ε). Lemma 2, below, asserts further that if h is in the interior of C(t), then in fact (x*(t) + εh) ∈ K(t, t_0, x*(t_0)) for ε > 0 sufficiently small. The proof of the lemma depends upon some deep topological results and is omitted. Instead we offer a plausibility argument.
Lemma 2: Let h belong to the interior of the cone C(t). Then for all ε > 0 sufficiently small,
(x*(t) + εh) ∈ K(t, t_0, x*_0).   (8.10)
Plausibility argument. (8.10) is equivalent to
εh ∈ K(t, t_0, x*(t_0)) − {x*(t)},   (8.11)
where we have moved the origin to x*(t). The situation is depicted in Figure 8.2.
Figure 8.2: Illustration for Lemma 2: the cone C(t), the set K(t, t_0, x*_0) − {x*(t)}, the cross-sections Ĉ(ε) and K̂(ε) at distance o(ε) from one another, and the point εh at distance δε from the boundary of Ĉ(ε).
Let Ĉ(ε) be the cross-section of C(t) by a plane orthogonal to h and passing through εh. Let K̂(ε) be the cross-section of K(t, t_0, x*_0) − {x*(t)} by the same plane. We note the following:
(i) by Lemma 1, the distance between Ĉ(ε) and K̂(ε) is of the order o(ε);
(ii) since h is in the interior of C(t), the minimum distance between εh and the boundary of Ĉ(ε) is δε, where δ > 0 is independent of ε.
Hence for ε > 0 sufficiently small, εh must be trapped inside the set K̂(ε).
(This would constitute a proof except that for the argument to work we need to show that there are no "holes" in K̂(ε) through which εh can "escape." The complications in a rigorous proof arise precisely from this drawback in our plausibility argument.) ♦
Lemmas 1 and 2 give us a characterization of K(t, t_0, x*_0) in a neighborhood of x*(t) when we perturb the control u*(·), leaving the initial condition fixed. Lemma 3 extends Lemma 2 to the case when we also allow the initial condition to vary over a fixed surface in a neighborhood of x*_0.
Let g_0 : R^n → R^{ℓ_0} be a differentiable function such that the ℓ_0 × n matrix g_{0x}(x) has rank ℓ_0 for all x. Let b_0 ∈ R^{ℓ_0} be fixed and let T_0 = {x | g_0(x) = b_0}. Suppose that x*_0 ∈ T_0 and let T_0(x*_0) = {ξ | g_{0x}(x*_0)ξ = 0}. Thus, T_0(x*_0) + {x*_0} is the plane through x*_0 tangent to the surface T_0. The proof of Lemma 3 is similar to that of Lemma 2 and is omitted also.
Lemma 3: Let h belong to the interior of the cone {C(t) + Φ(t, t_0)T_0(x*_0)}. For ε ≥ 0 let h(ε) ∈ R^n be such that lim_{ε→0} h(ε) = 0 and lim_{ε→0} (1/ε)h(ε) = h. Then for ε > 0 sufficiently small there exists x_0(ε) ∈ T_0 such that
(x*(t) + h(ε)) ∈ K(t, t_0, x_0(ε)).
We can now prove the main result of this chapter. We keep all the notation introduced above. Further, let g_f : R^n → R^{ℓ_f} be a differentiable function such that g_{fx}(x) has rank ℓ_f for all x. Let b_f ∈ R^{ℓ_f} be fixed and let T_f = {x | g_f(x) = b_f}. Finally, if x*(t_f) ∈ T_f, let T_f(x*(t_f)) = {ξ | g_{fx}(x*(t_f))ξ = 0}.
Theorem 3: Consider the optimal control problem (8.12):
Maximize ψ(x(t_f))
subject to
dynamics: ẋ(t) = f(t, x(t), u(t)), t_0 ≤ t ≤ t_f,
initial conditions: g_0(x(t_0)) = b_0,
final conditions: g_f(x(t_f)) = b_f,
control constraint: u(·) ∈ 𝒰, i.e., u : [t_0, t_f] → Ω and u(·) piecewise continuous.   (8.12)
Let u*(·) ∈ 𝒰, let x*_0 ∈ T_0, and let x*(t) = φ(t, t_0, x*_0, u*(·)) be the corresponding trajectory. Suppose that x*(t_f) ∈ T_f, and suppose that (u*(·), x*_0) is optimal. Then there exist a number p*_0 ≥ 0 and a function p* : [t_0, t_f] → R^n, not both identically zero, satisfying
adjoint equation: ṗ*(t) = −[(∂f/∂x)(t, x*(t), u*(t))]'p*(t), t_0 ≤ t ≤ t_f,   (8.13)
initial condition: p*(t_0) ⊥ T_0(x*_0),   (8.14)
final condition: (p*(t_f) − p*_0 ∇ψ(x*(t_f))) ⊥ T_f(x*(t_f)).   (8.15)
Furthermore, the maximum principle
H(t, x*(t), u*(t), p*(t)) = M(t, x*(t), p*(t))   (8.16)
holds for all t ∈ [t_0, t_f] except possibly for a finite set. [Here H(t, x, u, p) = p'f(t, x, u) and M(t, x, p) = sup{H(t, x, v, p) | v ∈ Ω}.]
Proof: We break the proof up into a series of steps.
Step 1. By repeating the argument presented in the proof of Theorem 2 we can see that (8.16) is equivalent to
p*(t_f)'h ≤ 0 for all h ∈ C(t_f).   (8.17)
Step 2. Define two convex sets S_1, S_2 in R^{1+n} as follows:
S_1 = {(y, h) | y > 0, h ∈ T_f(x*(t_f))},
S_2 = {(y, h) | y = ψ_x(x*(t_f))h, h ∈ {C(t_f) + Φ(t_f, t_0)T_0(x*_0)}}.
We claim that the optimality of (u*(·), x*_0) implies that S_1 ∩ Relative Interior (S_2) = ∅. Suppose this is not the case. Then there exists h ∈ T_f(x*(t_f)) such that
ψ_x(x*(t_f))h > 0,   (8.18)
h ∈ Interior {C(t_f) + Φ(t_f, t_0)T_0(x*_0)}.   (8.19)
Now by assumption g_{fx}(x*(t_f)) has maximum rank. Since g_{fx}(x*(t_f))h = 0, it follows from the Implicit Function Theorem that for ε > 0 sufficiently small there exists h(ε) ∈ R^n such that
g_f(x*(t_f) + h(ε)) = b_f,   (8.20)
and, moreover, h(ε) → 0 and (1/ε)h(ε) → h as ε → 0. From (8.19) and Lemma 3 it follows that for ε > 0 sufficiently small there exist x_0(ε) ∈ T_0 and u_ε(·) ∈ 𝒰 such that
x*(t_f) + h(ε) = φ(t_f, t_0, x_0(ε), u_ε(·)).
Hence we can conclude from (8.20) that the pair (x_0(ε), u_ε(·)) satisfies the initial and final conditions, and the corresponding value of the objective function is
ψ(x*(t_f) + h(ε)) = ψ(x*(t_f)) + ψ_x(x*(t_f))h(ε) + o(|h(ε)|),
and since h(ε) = εh + o(ε) we get
ψ(x*(t_f) + h(ε)) = ψ(x*(t_f)) + εψ_x(x*(t_f))h + o(ε);
but then from (8.18)
ψ(x*(t_f) + h(ε)) > ψ(x*(t_f))
for ε > 0 sufficiently small, thereby contradicting the optimality of (u*(·), x*_0).
Step 3. By the separation theorem for convex sets there exist p̂_0 ∈ R and p̂_1 ∈ R^n, not both zero, such that
p̂_0 y_1 + p̂'_1 h_1 ≥ p̂_0 y_2 + p̂'_1 h_2 for all (y_i, h_i) ∈ S_i, i = 1, 2.   (8.21)
Arguing in exactly the same fashion as in the proof of Lemma 1 of 7.2, we can conclude that (8.21) is equivalent to the following conditions:
p̂_0 ≥ 0, p̂_1 ⊥ T_f(x*(t_f)),   (8.22)
Φ(t_f, t_0)'(p̂_0 ∇ψ(x*(t_f)) + p̂_1) ⊥ T_0(x*_0),   (8.23)
and
(p̂_0 ψ_x(x*(t_f)) + p̂'_1)h ≤ 0 for all h ∈ C(t_f).   (8.24)
If we let p*_0 = p̂_0 and p*(t_f) = p̂_0 ∇ψ(x*(t_f)) + p̂_1, then (8.22), (8.23), and (8.24) translate respectively into (8.15), (8.14), and (8.17). ♦
8.2 Integral Objective Function
In many control problems the objective function is not given as a function ψ(x(t_f)) of the final state, but rather as an integral of the form
∫_{t_0}^{t_f} f_0(t, x(t), u(t)) dt.   (8.25)
The dynamics of the state, the boundary conditions, and the control constraints are the same as before. We proceed to show how such objective functions can be treated as a special case of the problems of the last section. To this end we define the augmented system with state variable x̃ = (x_0, x) ∈ R^{1+n} as follows:
x̃˙(t) = (ẋ_0(t), ẋ(t)) = f̃(t, x̃(t), u(t)) = (f_0(t, x(t), u(t)), f(t, x(t), u(t))).
The initial and final conditions, which are of the form g_0(x) = b_0, g_f(x) = b_f, are augmented to
g̃_0(x̃) = (x_0, g_0(x)) = b̃_0 = (0, b_0) and g̃_f(x̃) = g_f(x) = b_f.
Evidently then the problem of maximizing (8.25) is equivalent to the problem of maximizing
ψ(x̃(t_f)) = x_0(t_f),
subject to the augmented dynamics and constraints, which is of the form treated in Theorem 3 of Section 1, and we get the following result.
Theorem 1: Consider the optimal control problem (8.26):
Maximize ∫_{t_0}^{t_f} f_0(t, x(t), u(t)) dt
subject to
dynamics: ẋ(t) = f(t, x(t), u(t)), t_0 ≤ t ≤ t_f,
initial conditions: g_0(x(t_0)) = b_0,
final conditions: g_f(x(t_f)) = b_f,
control constraint: u(·) ∈ 𝒰.   (8.26)
Let u*(·) ∈ 𝒰, let x*_0 ∈ T_0, let x*(t) = φ(t, t_0, x*_0, u*(·)), and suppose that x*(t_f) ∈ T_f. If (u*(·), x*_0) is optimal, then there exists a function p̃* = (p*_0, p*) : [t_0, t_f] → R^{1+n}, not identically zero, and with p*_0(t) ≡ constant and p*_0(t) ≥ 0, satisfying
(augmented) adjoint equation: p̃˙*(t) = −[(∂f̃/∂x̃)(t, x*(t), u*(t))]'p̃*(t),
initial condition: p*(t_0) ⊥ T_0(x*_0),
final condition: p*(t_f) ⊥ T_f(x*(t_f)).
Furthermore, the maximum principle
H̃(t, x*(t), p̃*(t), u*(t)) = M̃(t, x*(t), p̃*(t))
holds for all t ∈ [t_0, t_f] except possibly for a finite set. [Here H̃(t, x, p̃, u) = p̃'f̃(t, x, u) = p_0 f_0(t, x, u) + p'f(t, x, u), and M̃(t, x, p̃) = sup{H̃(t, x, p̃, v) | v ∈ Ω}.]
Finally, if f_0 and f do not explicitly depend on t, then M̃(t, x*(t), p̃*(t)) ≡ constant.
Exercise 1: Prove Theorem 1. (Hint: For the final part show that (d/dt)M̃(t, x*(t), p̃*(t)) ≡ 0.)
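The augmentation is also the standard numerical device for integral costs: adjoin a coordinate x_0 with ẋ_0 = f_0 and read the objective off the terminal value of x_0. A minimal sketch (f, f_0, and the data are illustrative assumptions):

```python
import numpy as np

f0 = lambda t, x, u: 1.0 - u * u              # assumed running payoff
f  = lambda t, x, u: np.array([-x[0] + u])    # assumed dynamics

def f_tilde(t, xt, u):
    """Augmented dynamics for xt = (x0, x): x0' = f0, x' = f."""
    return np.concatenate(([f0(t, xt[1:], u)], f(t, xt[1:], u)))

def integral_payoff(u_of_t, x_init, t0=0.0, tf=1.0, steps=1000):
    h, xt, t = (tf - t0) / steps, np.concatenate(([0.0], x_init)), t0
    for _ in range(steps):
        xt, t = xt + h * f_tilde(t, xt, u_of_t(t)), t + h
    return xt[0]      # psi(x_tilde(tf)) = x0(tf), the integral of f0

print(integral_payoff(lambda t: 0.5, np.array([1.0])))   # about 0.75
```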
8.3 Variable Final Time
8.3.1 Main result.
In the problems considered up to now the final time t_f is assumed to be fixed. In many important cases the final time is itself a decision variable. One such case is the minimum-time problem, where we want to transfer the state of the system from a given initial state to a specified final state in minimum time. More generally, consider the optimal control problem (8.27):
Maximize ∫_{t_0}^{t_f} f_0(t, x(t), u(t)) dt
subject to
dynamics: ẋ(t) = f(t, x(t), u(t)), t_0 ≤ t ≤ t_f,
initial condition: g_0(x(t_0)) = b_0,
final condition: g_f(x(t_f)) = b_f,
control constraint: u(·) ∈ 𝒰,
final-time constraint: t_f ∈ (t_0, ∞).   (8.27)
We analyze (8.27) by converting the variable time interval [t_0, t_f] into the fixed-time interval [0, 1]. This change of time-scale is achieved by regarding t as a new state variable and selecting a new time variable s which ranges over [0, 1]. The equation for t is
dt(s)/ds = α(s), 0 ≤ s ≤ 1,
with initial condition
t(0) = t_0.
Here α(s) is a new control variable constrained by α(s) ∈ (0, ∞). Now if x(·) is the solution of
ẋ(t) = f(t, x(t), u(t)), t_0 ≤ t ≤ t_f, x(t_0) = x_0,   (8.28)
and if we define
z(s) = x(t(s)), v(s) = u(t(s)), 0 ≤ s ≤ 1,
then it is easy to see that z(·) is the solution of
(dz/ds)(s) = α(s)f(t(s), z(s), v(s)), 0 ≤ s ≤ 1, z(0) = x_0.   (8.29)
Conversely, from the solution z(·) of (8.29) we can obtain the solution x(·) of (8.28) by
x(t) = z(s(t)), t_0 ≤ t ≤ t_f,
where s(·) : [t_0, t_f] → [0, 1] is the functional inverse of t(·); in fact, s(·) is the solution of the differential equation ṡ(t) = 1/α(s(t)), s(t_0) = 0.
With these ideas in mind it is natural to consider the fixed-final-time optimal control problem (8.30), where the state vector is (t, z) ∈ R^{1+n} and the control is (α, v) ∈ R^{1+p}:
Maximize ∫_0^1 f_0(t(s), z(s), v(s))α(s) ds
subject to
dynamics: (ż(s), ṫ(s)) = (f(t(s), z(s), v(s))α(s), α(s)),
initial constraint: g_0(z(0)) = b_0, t(0) = t_0,
final constraint: g_f(z(1)) = b_f, t(1) ∈ R,
control constraint: (v(s), α(s)) ∈ Ω × (0, ∞) for 0 ≤ s ≤ 1, and v(·), α(·) piecewise continuous.   (8.30)
The relation between problems (8.27) and (8.30) is established in the following result.
Lemma 1: (i) Let x*_0 ∈ T_0, u*(·) ∈ 𝒰, t*_f ∈ (t_0, ∞), and let x*(t) = φ(t, t_0, x*_0, u*(·)) be the corresponding trajectory. Suppose that x*(t*_f) ∈ T_f, and suppose that (u*(·), x*_0, t*_f) is optimal for (8.27). Define z*_0, v*(·), and α*(·) by
z*_0 = x*_0,
v*(s) = u*(t_0 + s(t*_f − t_0)), 0 ≤ s ≤ 1,
α*(s) = (t*_f − t_0), 0 ≤ s ≤ 1.
Then ((v*(·), α*(·)), z*_0) is optimal for (8.30).
(ii) Let z*_0 ∈ T_0, and let (v*(·), α*(·)) be an admissible control for (8.30) such that the corresponding trajectory (t*(·), z*(·)) satisfies the final conditions of (8.30). Suppose that ((v*(·), α*(·)), z*_0) is optimal for (8.30). Define x*_0, u*(·) ∈ 𝒰, and t*_f by
x*_0 = z*_0,
u*(t) = v*(s*(t)), t_0 ≤ t ≤ t*_f,
t*_f = t*(1),
where s*(·) is the functional inverse of t*(·). Then (u*(·), x*_0, t*_f) is optimal for (8.27).
Exercise 1: Prove Lemma 1.
Theorem 1: Let u*(·) ∈ 𝒰, let x*_0 ∈ T_0, let t*_f ∈ (t_0, ∞), let x*(t) = φ(t, t_0, x*_0, u*(·)), t_0 ≤ t ≤ t*_f, and suppose that x*(t*_f) ∈ T_f. If (u*(·), x*_0, t*_f) is optimal for (8.27), then there exists a function p̃* = (p*_0, p*) : [t_0, t*_f] → R^{1+n}, not identically zero, and with p*_0(t) ≡ constant and p*_0(t) ≥ 0, satisfying
(augmented) adjoint equation: p̃˙*(t) = −[(∂f̃/∂x̃)(t, x*(t), u*(t))]'p̃*(t),   (8.31)
initial condition: p*(t_0) ⊥ T_0(x*_0),   (8.32)
final condition: p*(t*_f) ⊥ T_f(x*(t*_f)).   (8.33)
Also the maximum principle
H̃(t, x*(t), p̃*(t), u*(t)) = M̃(t, x*(t), p̃*(t))   (8.34)
holds for all t ∈ [t_0, t*_f] except possibly for a finite set. Furthermore, t*_f must be such that
H̃(t*_f, x*(t*_f), p̃*(t*_f), u*(t*_f)) = 0.   (8.35)
Finally, if f_0 and f do not explicitly depend on t, then M̃(t, x*(t), p̃*(t)) ≡ 0.
Proof: By Lemma 1, z*_0 = x*_0, v*(s) = u*(t_0 + s(t*_f − t_0)), and α*(s) = (t*_f − t_0) for 0 ≤ s ≤ 1 constitute an optimal solution for (8.30). The resulting trajectory is
z*(s) = x*(t_0 + s(t*_f − t_0)), t*(s) = t_0 + s(t*_f − t_0), 0 ≤ s ≤ 1,
so that in particular z*(1) = x*(t*_f).
By Theorem 1 of Section 2, there exists a function λ̃* = (λ*_0, λ*, λ*_{n+1}) : [0, 1] → R^{1+n+1}, not identically zero, and with λ*_0(s) ≡ constant and λ*_0(s) ≥ 0, satisfying
adjoint equation:
λ̇*_0(s) = 0,
λ̇*(s) = −{[(∂f_0/∂z)(t*(s), z*(s), v*(s))]'λ*_0(s) + [(∂f/∂z)(t*(s), z*(s), v*(s))]'λ*(s)}α*(s),
λ̇*_{n+1}(s) = −{[(∂f_0/∂t)(t*(s), z*(s), v*(s))]λ*_0(s) + [(∂f/∂t)(t*(s), z*(s), v*(s))]'λ*(s)}α*(s),   (8.36)
initial condition: λ*(0) ⊥ T_0(z*_0),   (8.37)
final condition: λ*(1) ⊥ T_f(z*(1)), λ*_{n+1}(1) = 0.   (8.38)
Furthermore, the maximum principle
λ*_0(s)f_0(t*(s), z*(s), v*(s))α*(s) + λ*(s)'f(t*(s), z*(s), v*(s))α*(s) + λ*_{n+1}(s)α*(s)
= sup{[λ*_0(s)f_0(t*(s), z*(s), w)β + λ*(s)'f(t*(s), z*(s), w)β + λ*_{n+1}(s)β] | w ∈ Ω, β ∈ (0, ∞)}   (8.39)
holds for all s ∈ [0, 1] except possibly for a finite set.
Let s*(t) = (t − t_0)/(t*_f − t_0), t_0 ≤ t ≤ t*_f, and define p̃* = (p*_0, p*) : [t_0, t*_f] → R^{1+n} by
p*_0(t) = λ*_0(s*(t)), p*(t) = λ*(s*(t)), t_0 ≤ t ≤ t*_f.   (8.40)
First of all, p̃* is not identically zero. Because if p̃* ≡ 0, then from (8.40) we have (λ*_0, λ*) ≡ 0, and then from (8.36), λ*_{n+1} ≡ constant; but from (8.38), λ*_{n+1}(1) = 0, so that we would have λ̃* ≡ 0, which is a contradiction. It is trivial to verify that p̃*(·) satisfies (8.31), and, on the other hand, (8.37) and (8.38) respectively imply (8.32) and (8.33). Next, (8.39) is equivalent to
λ*_0(s)f_0(t*(s), z*(s), v*(s)) + λ*(s)'f(t*(s), z*(s), v*(s)) + λ*_{n+1}(s) = 0   (8.41)
and
λ*_0(s)f_0(t*(s), z*(s), v*(s)) + λ*(s)'f(t*(s), z*(s), v*(s))
= sup{[λ*_0(s)f_0(t*(s), z*(s), w) + λ*(s)'f(t*(s), z*(s), w)] | w ∈ Ω}.   (8.42)
(Indeed, the supremum in (8.39) over β ∈ (0, ∞) is finite only if the bracketed expression is nonpositive for every w ∈ Ω, and it is attained at β = α*(s) > 0 only if that expression vanishes at w = v*(s).)
Evidently (8.42) is equivalent to (8.34), and (8.35) follows from (8.41) and the fact that λ*_{n+1}(1) = 0. Finally, the last assertion of the Theorem follows from (8.35) and the fact that M̃(t, x*(t), p̃*(t)) ≡ constant if f_0 and f are not explicitly dependent on t. ♦
8.3.2 Minimum-time problems.
We consider the following special case of (8.27):
Maximize ∫_{t_0}^{t_f} (−1) dt
subject to
dynamics: ẋ(t) = f(t, x(t), u(t)), t_0 ≤ t ≤ t_f,
initial condition: x(t_0) = x_0,
final condition: x(t_f) = x_f,
control constraint: u(·) ∈ 𝒰,
final-time constraint: t_f ∈ (t_0, ∞).   (8.43)
In (8.43), x_0 and x_f are fixed, so that the optimal control problem consists of finding a control which transfers the system from state x_0 at time t_0 to state x_f in minimum time. Applying Theorem 1 to this problem gives Theorem 2.
Theorem 2: Let t*_f ∈ (t_0, ∞) and let u* : [t_0, t*_f] → Ω be optimal. Let x*(·) be the corresponding trajectory. Then there exists a function p* : [t_0, t*_f] → R^n, not identically zero, satisfying
adjoint equation: ṗ*(t) = −[(∂f/∂x)(t, x*(t), u*(t))]'p*(t), t_0 ≤ t ≤ t*_f,
initial condition: p*(t_0) ∈ R^n,
final condition: p*(t*_f) ∈ R^n.
Also the maximum principle
H(t, x*(t), p*(t), u*(t)) = M(t, x*(t), p*(t))   (8.44)
holds for all t ∈ [t_0, t*_f] except possibly for a finite set.
Finally,
M(t*_f, x*(t*_f), p*(t*_f)) ≥ 0,   (8.45)
and if f does not depend explicitly on t, then
M(t, x*(t), p*(t)) ≡ constant.   (8.46)
Exercise 2: Prove Theorem 2.
We now study a simple example illustrating Theorem 2.
Example 1: The motion of a particle is described by
mẍ(t) + σẋ(t) = u(t),
where m = mass, σ = coefficient of friction, u = applied force, and x = position of the particle. For simplicity we suppose that x ∈ R, u ∈ R, and u(t) is constrained by |u(t)| ≤ 1. Starting with an initial condition x(0) = x_{01}, ẋ(0) = x_{02}, we wish to find an admissible control which brings the particle to the state x = 0, ẋ = 0 in minimum time.
Solution: Taking x_1 = x and x_2 = ẋ, we rewrite the particle dynamics as
ẋ_1(t) = x_2(t), ẋ_2(t) = −αx_2(t) + bu(t),   (8.47)
where α = (σ/m) > 0 and b = (1/m) > 0. The control constraint set is Ω = [−1, 1].
Suppose that u*(·) is optimal and x*(·) is the corresponding trajectory. By Theorem 2 there exists a nonzero solution p*(·) of
ṗ*_1(t) = 0, ṗ*_2(t) = −p*_1(t) + αp*_2(t),   (8.48)
such that (8.44), (8.45), and (8.46) hold. Now the transition matrix function of the homogeneous part of (8.47) is
Φ(t, τ) = [1, (1/α)(1 − e^{−α(t−τ)}); 0, e^{−α(t−τ)}]   (rows separated by the semicolon),
so that the solution of (8.48) is
[p*_1(t); p*_2(t)] = [1, 0; (1/α)(1 − e^{αt}), e^{αt}][p*_1(0); p*_2(0)],
or
p*_1(t) ≡ p*_1(0),
and
p*_2(t) = (1/α)p*_1(0) + e^{αt}(−(1/α)p*_1(0) + p*_2(0)).   (8.49)
The Hamiltonian H is given by
H(x*(t), p*(t), v) = (p*_1(t) − αp*_2(t))x*_2(t) + bp*_2(t)v
= e^{αt}(p*_1(0) − αp*_2(0))x*_2(t) + bp*_2(t)v,
so that from the maximum principle we can immediately conclude that
u*(t) = +1 if p*_2(t) > 0, u*(t) = −1 if p*_2(t) < 0, and u*(t) = ? if p*_2(t) = 0.   (8.50)
Furthermore, since the right-hand side of (8.47) does not depend on t explicitly, we must also have
e^{αt}(p*_1(0) − αp*_2(0))x*_2(t) + bp*_2(t)u*(t) ≡ constant.   (8.51)
We now proceed to analyze the consequences of (8.49) and (8.50). First of all, since p*_1(t) ≡ p*_1(0), p*_2(·) can have three qualitatively different forms.
Case 1. −p*_1(0) + αp*_2(0) > 0: Evidently then, from (8.49), p*_2(t) must be a strictly monotonically increasing function, so that from (8.50) u*(·) can behave in one of two ways:
either
u*(t) = −1 for t < t̂ (with p*_2(t) < 0 for t < t̂) and u*(t) = +1 for t > t̂ (with p*_2(t) > 0 for t > t̂),
or
u*(t) ≡ +1 and p*_2(t) > 0 for all t.
Case 2. −p*_1(0) + αp*_2(0) < 0: Evidently u*(·) can behave in one of two ways:
either
u*(t) = +1 for t < t̂ (with p*_2(t) > 0 for t < t̂) and u*(t) = −1 for t > t̂ (with p*_2(t) < 0 for t > t̂),
or
u*(t) ≡ −1 and p*_2(t) < 0 for all t.
Case 3. −p*_1(0) + αp*_2(0) = 0: In this case p*_2(t) ≡ (1/α)p*_1(0). Also, since p*(t) ≢ 0, we must have in this case p*_1(0) ≠ 0. Hence u*(·) can behave in one of two ways:
either
u*(t) ≡ +1 and p*_2(t) ≡ (1/α)p*_1(0) > 0,
or
u*(t) ≡ −1 and p*_2(t) ≡ (1/α)p*_1(0) < 0.
Thus, the optimal control u* is always equal to +1 or −1, and it can switch at most once between these two values. The optimal control is given by
u*(t) = sgn p*_2(t) = sgn[(1/α)p*_1(0) + e^{αt}(−(1/α)p*_1(0) + p*_2(0))].
Thus the search for the optimal control reduces to finding p*_1(0) and p*_2(0) such that the solution of the differential equation
ẋ_1 = x_2,
ẋ_2 = −αx_2 + b sgn[(1/α)p*_1(0) + e^{αt}(−(1/α)p*_1(0) + p*_2(0))],   (8.52)
with initial condition
x_1(0) = x_{10}, x_2(0) = x_{20},   (8.53)
also satisfies the final condition
x_1(t*_f) = 0, x_2(t*_f) = 0,   (8.54)
for some t*_f > 0; and then t*_f is the minimum time.
There are at least two ways of solving the two-point boundary value problem (8.52), (8.53), and (8.54). One way is to guess at the value of p*(0), and then integrate (8.52) and (8.53) forward in time and check if (8.54) is satisfied. If (8.54) is not satisfied, then modify p*(0) and repeat. An alternative is to guess at the value of p*(0), and then integrate (8.52) and (8.54) backward in time and check if (8.53) is satisfied. The latter approach is more advantageous because we know that any trajectory obtained by this procedure is optimal for initial conditions which lie on the trajectory. Let us follow this procedure.
Suppose we choose p*(0) such that −p*_1(0) + αp*_2(0) = 0 and p*_2(0) > 0. Then we must have u*(t) ≡ 1. Integrating (8.52) and (8.54) backward in time gives us a trajectory ξ(t) where
ξ̇_1(t) = −ξ_2(t),
ξ̇_2(t) = αξ_2(t) − b,
with
ξ_1(0) = ξ_2(0) = 0.
This gives
ξ_1(t) = (b/α)(−t + (e^{αt} − 1)/α), ξ_2(t) = (b/α)(1 − e^{αt}),
which is the curve OA in Figure 8.3.
On the other hand, if p*(0) is such that −p*_1(0) + αp*_2(0) = 0 and p*_2(0) < 0, then u*(t) ≡ −1 and we get
ξ_1(t) = −(b/α)(−t + (e^{αt} − 1)/α), ξ_2(t) = −(b/α)(1 − e^{αt}),
which is the curve OB.
Figure 8.3: Backward integration of (8.52) and (8.54), showing in the (ξ_1, ξ_2)-plane the curves OA (u* ≡ 1) and OB (u* ≡ −1) through the origin, and the switched curves OCD and OEF.
Next suppose p*(0) is such that −p*_1(0) + αp*_2(0) > 0 and p*_2(0) < 0. Then [(1/α)p*_1(0) + e^{αt}(−(1/α)p*_1(0) + p*_2(0))] will have a negative value for t ∈ (0, t̂) and a positive value for t ∈ (t̂, ∞). Hence, if we integrate (8.52), (8.54) backwards in time we get a trajectory ξ(t) where
ξ̇_1(t) = −ξ_2(t),
ξ̇_2(t) = αξ_2(t) − b for t < t̂, and ξ̇_2(t) = αξ_2(t) + b for t > t̂,
with ξ_1(0) = 0, ξ_2(0) = 0. This gives us the curve OCD. Finally, if p*(0) is such that −p*_1(0) + αp*_2(0) < 0 and p*_2(0) > 0, then u*(t) = 1 for t < t̂ and u*(t) = −1 for t > t̂, and we get the curve OEF.
We see then that the optimal control u*(·) has the following characterizing properties:
u*(t) = 1 if x*(t) is above BOA or on OA, and u*(t) = −1 if x*(t) is below BOA or on OB.
Hence we can synthesize the optimal control in feedback form: u*(t) = ψ(x*(t)), where the function ψ : R^2 → {1, −1} is given by (see Figure 8.4)
ψ(x_1, x_2) = 1 if (x_1, x_2) is above BOA or on OA, and ψ(x_1, x_2) = −1 if (x_1, x_2) is below BOA or on OB.
Figure 8.4: Optimal trajectories of Example 1: the switching curve BOA in the (x_1, x_2)-plane, with u* ≡ 1 above it and u* ≡ −1 below it.
8.4 Linear System, Quadratic Cost
An important class of problems which arises in practice is the case when the dynamics are linear and the objective function is quadratic. Specifically, consider the optimal control problem (8.55):
Minimize ∫_0^T (1/2)[x'(t)P(t)x(t) + u'(t)Q(t)u(t)] dt
subject to
dynamics: ẋ(t) = A(t)x(t) + B(t)u(t), 0 ≤ t ≤ T,
initial condition: x(0) = x_0,
final condition: G_f x(T) = b_f,
control constraint: u(t) ∈ R^p, u(·) piecewise continuous.   (8.55)
In (8.55) we assume that P(t) is an n × n symmetric, positive semi-definite matrix, whereas Q(t) is a p × p symmetric, positive definite matrix. G_f is a given ℓ_f × n matrix, and x_0 ∈ R^n, b_f ∈ R^{ℓ_f} are given vectors. T is a fixed final time.
We apply Theorem 1 of Section 2, so that we must search for a number p*_0 ≥ 0 and a function p* : [0, T] → R^n, not both zero, such that
ṗ*(t) = −p*_0(−P(t)x*(t)) − A'(t)p*(t),   (8.56)
and
p*(T) ⊥ T_f(x*(T)) = {ξ | G_f ξ = 0}.   (8.57)
The Hamiltonian function is
H̃(t, x*(t), p̃*(t), v) = −(1/2)p*_0[x*(t)'P(t)x*(t) + v'Q(t)v] + p*(t)'[A(t)x*(t) + B(t)v],
so that the optimal control u*(t) must maximize
−(1/2)p*_0 v'Q(t)v + p*(t)'B(t)v for v ∈ R^p.   (8.58)
If p*_0 > 0, this will imply
u*(t) = (1/p*_0)Q^{-1}(t)B'(t)p*(t),   (8.59)
whereas if p*_0 = 0, then we must have
p*(t)'B(t) ≡ 0,   (8.60)
because otherwise (8.58) cannot have a maximum.
We make the following assumption about the system dynamics.
Assumption: The control system ẋ(t) = A(t)x(t) + B(t)u(t) is controllable over the interval [0, T]. (See (Desoer [1970]) for a definition of controllability and for the properties we use below.)
Let Φ(t, τ) be the transition matrix function of the homogeneous linear differential equation ẋ(t) = A(t)x(t). Then the controllability assumption is equivalent to the statement that for any ξ ∈ R^n,
ξ'Φ(T, τ)B(τ) = 0, 0 ≤ τ ≤ T, implies ξ = 0.   (8.61)
Next we claim that if the system is controllable, then p*_0 ≠ 0. For if p*_0 = 0, then from (8.56) we can see that
p*(t) = (Φ(T, t))'p*(T),
and hence from (8.60),
(p*(T))'Φ(T, t)B(t) = 0, 0 ≤ t ≤ T;
but then from (8.61) we get p*(T) = 0. Hence if p*_0 = 0, then we must have p̃*(t) ≡ 0, which is a contradiction. Thus, under the controllability assumption, p*_0 > 0, and hence the optimal control is given by (8.59). Now if p*_0 > 0 it is trivial that p̃*(t) = (1, (p*(t)/p*_0)) will satisfy all the necessary conditions, so that we can assume that p*_0 = 1. The optimal trajectory and the optimal control are obtained by solving the following two-point boundary value problem:
ẋ*(t) = A(t)x*(t) + B(t)Q^{-1}(t)B'(t)p*(t),
ṗ*(t) = P(t)x*(t) − A'(t)p*(t),
x*(0) = x_0, G_f x*(T) = b_f, p*(T) ⊥ T_f(x*(T)).
For further details regarding the solution of this boundary value problem and for related topics see (Lee and Markus [1967]).
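When A, B, P, Q are constant and the final state is fully specified (the special case G_f = I, so the transversality condition on p*(T) is vacuous), this boundary value problem is linear and can be solved in one shot by propagating the combined Hamiltonian system. A sketch under those assumptions, with illustrative data:

```python
import numpy as np
from scipy.linalg import expm

# assumed constant data
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
P, Q = np.eye(2), np.eye(1)
T, x0, xf = 1.0, np.array([1.0, 0.0]), np.zeros(2)

n = A.shape[0]
S = B @ np.linalg.inv(Q) @ B.T
M = np.block([[A, S], [P, -A.T]])     # d/dt (x, p) = M (x, p), as above
E = expm(M * T)                       # transition matrix over [0, T]
E11, E12 = E[:n, :n], E[:n, n:]

# x(T) = E11 x0 + E12 p(0) = xf: a linear equation for the unknown p(0)
p0 = np.linalg.solve(E12, xf - E11 @ x0)
print("p*(0) =", p0)
```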
8.5 The Singular Case
In applying the necessary conditions derived in this chapter it sometimes happens that H(t, x*(t), p*(t), v) is independent of v for values of t lying in a non-zero interval. In such cases the maximum principle does not help in selecting the optimal value of the control. We are faced with the so-called singular case (because we are in trouble, not because the situation is rare). We illustrate this by analyzing Example 4 of Chapter 1.
The problem can be summarized as follows:
Maximize ∫_0^T c(t) dt = ∫_0^T (1 − s(t))f(k(t)) dt
subject to
dynamics: k̇(t) = s(t)f(k(t)) − µk(t), 0 ≤ t ≤ T,
initial constraint: k(0) = k_0,
final constraint: k(T) ∈ R,
control constraint: s(t) ∈ [0, 1], s(·) piecewise continuous.
We make the following assumptions regarding the production function f:
f_k(k) > 0, f_{kk}(k) < 0 for all k,   (8.62)
lim_{k→0} f_k(k) = ∞.   (8.63)
Assumption (8.62) says that the marginal product of capital is positive and that this marginal product decreases with increasing capital. Assumption (8.63) is mainly for technical convenience and can be dispensed with without difficulty.
Now suppose that s* : [0, T] → [0, 1] is an optimal savings policy, and let k*(t), 0 ≤ t ≤ T, be the corresponding trajectory of the capital-to-labor ratio. Then by Theorem 1 of Section 2, there exist a number p*_0 ≥ 0 and a function p* : [0, T] → R, not both identically zero, such that
ṗ*(t) = −p*_0(1 − s*(t))f_k(k*(t)) − p*(t)[s*(t)f_k(k*(t)) − µ]   (8.64)
with the final condition
p*(T) = 0,   (8.65)
and the maximum principle holds. First of all, if p*_0 = 0 then from (8.64) and (8.65) we must also have p*(t) ≡ 0. Hence we must have p*_0 > 0, and then by replacing (p*_0, p*) by (1/p*_0)(p*_0, p*) we can assume without losing generality that p*_0 = 1, so that (8.64) simplifies to
ṗ*(t) = −(1 − s*(t))f_k(k*(t)) − p*(t)[s*(t)f_k(k*(t)) − µ].   (8.66)
The maximum principle says that
H(t, k*(t), p*(t), s) = (1 − s)f(k*(t)) + p*(t)[sf(k*(t)) − µk*(t)]
is maximized over s ∈ [0, 1] at s*(t). Since H is affine in s, with coefficient (p*(t) − 1)f(k*(t)) and f > 0, this immediately implies that
s*(t) = 1 if p*(t) > 1, s*(t) = 0 if p*(t) < 1, and s*(t) = ? if p*(t) = 1.   (8.67)
We analyze separately the three cases above.
Case 1. p*(t) > 1, s*(t) = 1: Then the dynamic equations become
k̇*(t) = f(k*(t)) − µk*(t),
ṗ*(t) = −p*(t)[f_k(k*(t)) − µ].   (8.68)
The behavior of the solutions of (8.68) is depicted in the (k, p)-, (k, t)- and (p, t)-planes in Figure 8.5. Here k_G and k_M are the solutions of f_k(k_G) − µ = 0 and f(k_M) − µk_M = 0. Such solutions exist and are unique by virtue of the assumptions (8.62) and (8.63). Furthermore, we note from (8.62) that k_G < k_M, and f_k(k) − µ ≷ 0 according as k ≶ k_G, whereas f(k) − µk ≷ 0 according as k ≶ k_M. (See Figure 8.6.)
Figure 8.5: Illustration for Case 1: solutions of (8.68) in the (k, p)-plane (where f_k ≷ µ according as k ≶ k_G and f ≷ µk according as k ≶ k_M), together with the corresponding (k, t)- and (p, t)-plots.
Case 2. p*(t) < 1, s*(t) = 0: Then the dynamic equations are
k̇*(t) = −µk*(t),
ṗ*(t) = −f_k(k*(t)) + µp*(t),
giving rise to the behavior illustrated in Figure 8.7.
Case 3. p*(t) = 1, s*(t) = ?: (Possibly singular case.) Evidently if p*(t) = 1 only for a finite set of times t, then we do not have to worry about this case. We face the singular case only if p*(t) = 1 for t ∈ I, where I is a non-zero interval. But then we have ṗ*(t) = 0 for t ∈ I, so that from (8.66) we get
−(1 − s*(t))f_k(k*(t)) − [s*(t)f_k(k*(t)) − µ] = 0 for t ∈ I,
so
−f_k(k*(t)) + µ = 0 for t ∈ I,
or
k*(t) = k_G for t ∈ I.   (8.69)
In turn then we must have k̇*(t) = 0 for t ∈ I, so that
s*(t)f(k_G) − µk_G = 0 for t ∈ I,
and hence,
s*(t) = µk_G/f(k_G) for t ∈ I.   (8.70)
Figure 8.6: Illustration for assumptions (8.62), (8.63): the concave graph of f(k), the line µk of slope µ, and the points k_G (where f_k = µ) and k_M (where f = µk).
Thus in the singular case the optimal solution is characterized by (8.69) and (8.70), as in Figure 8.8.
We can now assemble the separate cases to obtain the optimal control. First of all, from the final condition (8.65) we know that for t close to T, p*(t) < 1, so that we are in Case 2. We face two possibilities. Either (A)
p*(t) < 1 for all t ∈ [0, T],
and then s*(t) = 0, k*(t) = k_0 e^{−µt}, for 0 ≤ t ≤ T; or (B)
there exists t_2 ∈ (0, T) such that p*(t_2) = 1 and p*(t) < 1 for t_2 < t ≤ T.
We then have three possibilities depending on the value of k*(t_2):
(Bi) k*(t_2) < k_G: then ṗ*(t_2) < 0, so that p*(t) > 1 for t < t_2 and we are in Case 1, so that s*(t) = 1 for t < t_2. In particular we must have k_0 < k_G.
(Bii) k*(t_2) > k_G: then ṗ*(t_2) > 0, but then p*(t_2 + ε) > 1 for ε > 0 sufficiently small, and since p*(T) = 0 there must exist t_3 ∈ (t_2, T) such that p*(t_3) = 1. This contradicts the definition of t_2, so that this possibility cannot arise.
(Biii) k*(t_2) = k_G: then we can have a singular arc in some interval (t_1, t_2), so that p*(t) = 1, k*(t) = k_G, and s*(t) = µ(k_G/f(k_G)) for t ∈ (t_1, t_2). For t < t_1 we either have p*(t) > 1, s*(t) = 1 if k_0 < k_G, or we have p*(t) < 1, s*(t) = 0 if k_0 > k_G.
The various possibilities are illustrated in Figure 8.9.
The capital-to-labor ratio k_G is called the golden mean, and the singular solution is called the golden path. The reason for this term is contained in the following exercise.
Exercise 1: A capital-to-labor ratio k̂ is said to be sustainable if there exists ŝ ∈ [0, 1] such that ŝf(k̂) − µk̂ = 0. Show that k_G is the unique sustainable capital-to-labor ratio which maximizes sustainable consumption (1 − ŝ)f(k̂).
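A concrete instance may help: for an assumed Cobb-Douglas technology f(k) = k^θ, 0 < θ < 1 (which satisfies (8.62) and (8.63)), the golden mean and the singular savings rate of (8.69)-(8.70) can be written down directly, and the golden savings rate turns out to equal θ. A short computation:

```python
# Golden mean for an assumed Cobb-Douglas technology f(k) = k**theta.
theta, mu = 0.3, 0.05

k_G = (theta / mu) ** (1.0 / (1.0 - theta))  # solves f_k(k_G) = mu, cf. (8.69)
s_G = mu * k_G / k_G**theta                  # singular savings rate, cf. (8.70)
c_G = (1.0 - s_G) * k_G**theta               # sustained consumption on the arc

print(k_G, s_G, c_G)    # s_G = mu * k_G**(1 - theta) = theta exactly
```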
Figure 8.7: Illustration for Case 2: solutions of the Case 2 equations in the (k, p)-plane, together with the corresponding (k, t)- and (p, t)-plots.
8.6 Bibliographical Remarks
The results presented in this chapter appeared in English in full detail for the first time in 1962 in the
book by Pontryagin, et al., cited earlier. That book contains many extensions and many examples
and it is still an important source. However, the derivation of the maximum principle given in the
book by Lee and Markus is more satisfactory. Several important generalizations of the maximum
principle have appeared. On the one hand these include extensions to infinite-dimensional state
spaces and on the other hand they allow for constraints on the state more general than merely initial
and final constraints. For a unified, but mathematically difficult, treatment see (Neustadt [1969]).
For a less rigorous treatment of state-space constraints see (Jacobson, et al. [1971]), whereas for a
discussion of the singular case consult (Kelley, et al. [1968]).
For an applications-oriented treatment of this subject the reader is referred to (Athans and Falb
[1966]) and (Bryson and Ho [1969]). For applications of the maximum principle to optimal eco-
nomic growth see (Shell [1967]). There is no single source of computational methods for optimal
control problems. Among the many useful techniques which have been proposed see (Lasdon, et
al., [1967]), (Kelley [1962]), (McReynolds [1966]), and (Balakrishnan and Neustadt [1964]); also
consult (Jacobson and Mayne [1970]), and (Polak [1971]).
Figure 8.8: Case 3, the singular case: p* ≡ 1 and k* ≡ k_G on the singular arc, shown in the (k, p)-plane and in the (k, t)- and (p, t)-plots.
Figure 8.9: The optimal solution of the example: time-plots of p*, s*, and k* over [0, T] for Case (A), Case (Bi), and Case (Biii).
Chapter 9
Dynamic Programming
SEQUENTIAL DECISION PROBLEMS: DYNAMIC PROGRAMMING FORMULATION
The sequential decision problems discussed in the last three chapters were analyzed by variational methods, i.e., the necessary conditions for optimality were obtained by comparing the optimal decision with decisions in a small neighborhood of the optimum. Dynamic programming (DP) is a technique which compares the optimal decision with all the other decisions. This global comparison, therefore, leads to optimality conditions which are sufficient. The main advantage of DP, besides the fact that it gives sufficiency conditions, is that DP permits very general problem formulations which do not require differentiability or convexity conditions or even the restriction to a finite-dimensional state space. The only disadvantage of DP (which unfortunately often rules out its use) is that it can easily give rise to enormous computational requirements.
In the first section we develop the main recursion equation of DP for discrete-time problems. The second section deals with the continuous-time problem. Some general remarks and bibliographical references are collected in the final section.
9.1 Discrete-time DP
We consider a problem formulation similar to that of Chapter VI. However, for notational convenience we neglect final conditions and state-space constraints.
Maximize Σ_{i=0}^{N−1} f_0(i, x(i), u(i)) + Φ(x(N))
subject to
dynamics: x(i + 1) = f(i, x(i), u(i)), i = 0, 1, ..., N − 1,
initial condition: x(0) = x_0,
control constraint: u(i) ∈ Ω_i, i = 0, 1, ..., N − 1.   (9.1)
In (9.1), the state x(i) and the control u(i) belong to arbitrary sets X and U respectively. X and U may be finite sets, or finite-dimensional vector spaces (as in the previous chapters), or even infinite-dimensional spaces. x_0 ∈ X is fixed. The Ω_i are fixed subsets of U. Finally, f_0(i, ·, ·) : X × U → R, Φ : X → R, and f(i, ·, ·) : X × U → X are fixed functions.
The main idea underlying DP involves embedding the optimal control problem (9.1), in which the system starts in state x_0 at time 0, into a family of optimal control problems with the same dynamics, objective function, and control constraint as in (9.1) but with different initial states and initial times. More precisely, for each x ∈ X and each k between 0 and N − 1, consider the following problem:
Maximize Σ_{i=k}^{N−1} f_0(i, x(i), u(i)) + Φ(x(N))
subject to
dynamics: x(i + 1) = f(i, x(i), u(i)), i = k, k + 1, ..., N − 1,
initial condition: x(k) = x,
control constraint: u(i) ∈ Ω_i, i = k, k + 1, ..., N − 1.   (9.2)
Since the initial time k and initial state x are the only parameters in the problem above, we will sometimes use the index (9.2)_{k,x} to distinguish between different problems. We begin with an elementary but crucial observation.
Lemma 1: Suppose u*(k), ..., u*(N − 1) is an optimal control for (9.2)_{k,x}, and let x*(k) = x, x*(k + 1), ..., x*(N) be the corresponding optimal trajectory. Then for any ℓ, k ≤ ℓ ≤ N − 1, u*(ℓ), ..., u*(N − 1) is an optimal control for (9.2)_{ℓ,x*(ℓ)}.
Proof: Suppose not. Then there exists a control û(ℓ), û(ℓ + 1), ..., û(N − 1), with corresponding trajectory x̂(ℓ) = x*(ℓ), x̂(ℓ + 1), ..., x̂(N), such that
Σ_{i=ℓ}^{N−1} f_0(i, x̂(i), û(i)) + Φ(x̂(N)) > Σ_{i=ℓ}^{N−1} f_0(i, x*(i), u*(i)) + Φ(x*(N)).   (9.3)
But then consider the control ũ(k), ..., ũ(N − 1) with
ũ(i) = u*(i) for i = k, ..., ℓ − 1, and ũ(i) = û(i) for i = ℓ, ..., N − 1;
the corresponding trajectory, starting in state x at time k, is x̃(k), ..., x̃(N), where
x̃(i) = x*(i) for i = k, ..., ℓ, and x̃(i) = x̂(i) for i = ℓ + 1, ..., N.
The value of the objective function corresponding to this control for the problem (9.2)_{k,x} is
Σ_{i=k}^{N−1} f_0(i, x̃(i), ũ(i)) + Φ(x̃(N))
= Σ_{i=k}^{ℓ−1} f_0(i, x*(i), u*(i)) + Σ_{i=ℓ}^{N−1} f_0(i, x̂(i), û(i)) + Φ(x̂(N))
> Σ_{i=k}^{N−1} f_0(i, x*(i), u*(i)) + Φ(x*(N)),
by (9.3), so that u*(k), ..., u*(N − 1) cannot be optimal for (9.2)_{k,x}, contradicting the hypothesis. ♦
From now on we assume that an optimal solution to (9.2)_{k,x} exists for all 0 ≤ k ≤ N − 1 and all x ∈ X. Let V(k, x) be the maximum value of (9.2)_{k,x}. We call V the (maximum) value function.
Theorem 1: Define V(N, ·) by V(N, x) = Φ(x). Then V(k, x) satisfies the backward recursion equation
V(k, x) = max{f_0(k, x, u) + V(k + 1, f(k, x, u)) | u ∈ Ω_k}, 0 ≤ k ≤ N − 1.   (9.4)
Proof: Let x ∈ X, let u*(k), ..., u*(N − 1) be an optimal control for (9.2)_{k,x}, and let x*(k) = x, ..., x*(N) be the corresponding trajectory. Let u(k), ..., u(N − 1) be any control, with corresponding trajectory x(k) = x, ..., x(N). We have
Σ_{i=k}^{N−1} f_0(i, x*(i), u*(i)) + Φ(x*(N)) ≥ Σ_{i=k}^{N−1} f_0(i, x(i), u(i)) + Φ(x(N)).   (9.5)
By Lemma 1, the left-hand side of (9.5) is equal to
f_0(k, x, u*(k)) + V(k + 1, f(k, x, u*(k))).
On the other hand, by the definition of V we have
Σ_{i=k}^{N−1} f_0(i, x(i), u(i)) + Φ(x(N)) = f_0(k, x, u(k)) + {Σ_{i=k+1}^{N−1} f_0(i, x(i), u(i)) + Φ(x(N))} ≤ f_0(k, x, u(k)) + V(k + 1, f(k, x, u(k))),
with equality if and only if u(k + 1), ..., u(N − 1) is optimal for (9.2)_{k+1,x(k+1)}. Combining these two facts we get
f_0(k, x, u*(k)) + V(k + 1, f(k, x, u*(k))) ≥ f_0(k, x, u(k)) + V(k + 1, f(k, x, u(k)))
for all u(k) ∈ Ω_k, which is equivalent to (9.4). ♦
Corollary 1: Let u(k), ..., u(N − 1) be any control for the problem (9.2)_{k,x} and let x(k) = x, ..., x(N) be the corresponding trajectory. Then
V(ℓ, x(ℓ)) ≤ f_0(ℓ, x(ℓ), u(ℓ)) + V(ℓ + 1, f(ℓ, x(ℓ), u(ℓ))), k ≤ ℓ ≤ N − 1,
and equality holds for all k ≤ ℓ ≤ N − 1 if and only if the control is optimal for (9.2)_{k,x}.
Corollary 2: For k = 0, 1, ..., N − 1, let ψ(k, ·) : X → Ω_k be such that
f_0(k, x, ψ(k, x)) + V(k + 1, f(k, x, ψ(k, x))) = max{f_0(k, x, u) + V(k + 1, f(k, x, u)) | u ∈ Ω_k}.
Then ψ(k, ·), k = 0, ..., N − 1, is an optimal feedback control, i.e., for any k, x the control u*(k), ..., u*(N − 1) defined by u*(ℓ) = ψ(ℓ, x*(ℓ)), k ≤ ℓ ≤ N − 1, where
x*(ℓ + 1) = f(ℓ, x*(ℓ), ψ(ℓ, x*(ℓ))), k ≤ ℓ ≤ N − 1, x*(k) = x,
is optimal for (9.2)_{k,x}.
Remark: Theorem 1 and Corollary 2 are the main results of DP. The recursion equation (9.4) allows us to compute the value function, and in evaluating the maximum in (9.4) we also obtain the optimum feedback control. Note that this feedback control is optimum for all initial conditions. However, unless we can find a "closed-form" analytic solution to (9.4), the DP formulation may necessitate a prohibitive amount of computation, since we would have to compute and store the values of V and ψ for all k and x. For instance, suppose N = 10 and the state-space X is a finite set with 20 elements. Then we have to compute and store 10 × 20 values of V, which is a reasonable amount. But now suppose X = R^n and we approximate each dimension of x by 20 values. Then for N = 10, we have to compute and store 10 × (20)^n values of V. For n = 3 this number is 80,000, and for n = 5 it is 32,000,000, which is quite impractical for existing computers. This "curse of dimensionality" seriously limits the applicability of DP to problems where we cannot solve (9.4) analytically.
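For a finite state set, the recursion (9.4) and the feedback ψ of Corollary 2 translate directly into a pair of nested loops. A sketch on an assumed toy problem (the sets and functions below are illustrative, not from the text):

```python
import numpy as np

def solve_dp(N, X, Omega, f, f0, Phi):
    """Backward recursion (9.4) over a finite state list X.

    f(k, x, u) returns the next state (also an element of X here),
    f0(k, x, u) the stage payoff, Phi(x) the terminal payoff.
    Returns the value table V[k][x] and the feedback table psi[k][x]."""
    V = np.empty((N + 1, len(X)))
    psi = np.empty((N, len(X)), dtype=int)
    V[N] = [Phi(x) for x in X]
    for k in range(N - 1, -1, -1):
        for xi, x in enumerate(X):
            vals = [f0(k, x, u) + V[k + 1][f(k, x, u)] for u in Omega]
            psi[k][xi] = int(np.argmax(vals))   # index of a maximizing u
            V[k][xi] = max(vals)
    return V, psi

# assumed toy problem: a walk on the states {0, 1, 2, 3}
X, Omega, N = [0, 1, 2, 3], [-1, 0, 1], 5
f  = lambda k, x, u: min(max(x + u, 0), 3)     # clipped move
f0 = lambda k, x, u: -0.1 * abs(u)             # small cost for moving
Phi = lambda x: float(x == 3)                  # reward for ending at 3
V, psi = solve_dp(N, X, Omega, f, f0, Phi)
print(V[0])      # value of each starting state at time 0
```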
Exercise 1: An instructor is preparing to lead his class for a long hike. He assumes that each person can take up to W pounds in his knapsack. There are N possible items to choose from. Each unit of item i weighs w_i pounds. The instructor assigns a number U_i > 0 for each unit of item i. These numbers represent the relative utility of that item during the hike. How many units of each item should be placed in each knapsack so as to maximize total utility? Formulate this problem by DP.
9.2 Continuous-time DP
We consider a continuous-time version of (9.2):
$$\begin{array}{ll} \text{Maximize} & \displaystyle\int_{t_0}^{t_f} f_0(t, x(t), u(t))\,dt + \Phi(x(t_f)) \\ \text{subject to} & \\ \text{dynamics:} & \dot{x}(t) = f(t, x(t), u(t)), \quad t_0 \le t \le t_f, \\ \text{initial condition:} & x(t_0) = x_0, \\ \text{control constraint:} & u: [t_0, t_f] \to \Omega \text{ and } u(\cdot) \text{ piecewise continuous.} \end{array} \tag{9.6}$$
In (9.6), $x \in R^n$, $u \in R^p$, $\Omega \subset R^p$. $\Phi: R^n \to R$ is assumed differentiable, and $f_0, f$ are assumed to satisfy the conditions stated in VIII.1.1.
As before, for $t_0 \le t \le t_f$ and $x \in R^n$, let $V(t, x)$ be the maximum value of the objective function over the interval $[t, t_f]$ starting in state $x$ at time $t$. Then it is easy to see that $V$ must satisfy
$$V(t, x) = \max\Big\{\int_t^{t+\Delta} f_0(\tau, x(\tau), u(\tau))\,d\tau + V(t+\Delta, x(t+\Delta)) \,\Big|\, u: [t, t+\Delta] \to \Omega\Big\}, \quad \Delta \ge 0, \tag{9.7}$$
and
$$V(t_f, x) = \Phi(x). \tag{9.8}$$
In (9.7), $x(\tau)$ is the solution of
$$\dot{x}(\tau) = f(\tau, x(\tau), u(\tau)), \quad t \le \tau \le t+\Delta, \qquad x(t) = x.$$
Let us suppose that $V$ is differentiable in $t$ and $x$. Then from (9.7) we get
$$V(t, x) = \max\Big\{f_0(t, x, u)\Delta + V(t, x) + \frac{\partial V}{\partial x}(t, x) f(t, x, u)\Delta + \frac{\partial V}{\partial t}(t, x)\Delta + o(\Delta) \,\Big|\, u \in \Omega\Big\}, \quad \Delta > 0.$$
Dividing by $\Delta > 0$ and letting $\Delta$ approach zero, we get the Hamilton-Jacobi-Bellman partial differential equation for the value function:
$$\frac{\partial V}{\partial t}(t, x) + \max\Big\{f_0(t, x, u) + \frac{\partial V}{\partial x}(t, x) f(t, x, u) \,\Big|\, u \in \Omega\Big\} = 0. \tag{9.9}$$
Theorem 1: Suppose there exists a differentiable function $V: [t_0, t_f] \times R^n \to R$ which satisfies (9.9) and the boundary condition (9.8). Suppose there exists a function $\psi: [t_0, t_f] \times R^n \to \Omega$, with $\psi$ piecewise continuous in $t$ and Lipschitz in $x$, satisfying
$$f_0(t, x, \psi(t, x)) + \frac{\partial V}{\partial x}(t, x) f(t, x, \psi(t, x)) = \max\Big\{f_0(t, x, u) + \frac{\partial V}{\partial x}(t, x) f(t, x, u) \,\Big|\, u \in \Omega\Big\}. \tag{9.10}$$
Then $\psi$ is an optimal feedback control for the problem (9.6), and $V$ is the value function.
Proof: Let $t \in [t_0, t_f]$ and $x \in R^n$. Let $\hat{u}: [t, t_f] \to \Omega$ be any piecewise continuous control and let $\hat{x}(\tau)$ be the solution of
$$\dot{\hat{x}}(\tau) = f(\tau, \hat{x}(\tau), \hat{u}(\tau)), \quad t \le \tau \le t_f, \qquad \hat{x}(t) = x. \tag{9.11}$$
Let $x^*(\tau)$ be the solution of
$$\dot{x}^*(\tau) = f(\tau, x^*(\tau), \psi(\tau, x^*(\tau))), \quad t \le \tau \le t_f, \qquad x^*(t) = x. \tag{9.12}$$
Note that the hypothesis concerning $\psi$ guarantees a solution of (9.12). Let $u^*(\tau) = \psi(\tau, x^*(\tau))$, $t \le \tau \le t_f$. To show that $\psi$ is an optimal feedback control we must show that
$$\int_t^{t_f} f_0(\tau, x^*(\tau), u^*(\tau))\,d\tau + \Phi(x^*(t_f)) \;\ge\; \int_t^{t_f} f_0(\tau, \hat{x}(\tau), \hat{u}(\tau))\,d\tau + \Phi(\hat{x}(t_f)). \tag{9.13}$$
To this end we note that
$$\begin{aligned} V(t_f, x^*(t_f)) - V(t, x^*(t)) &= \int_t^{t_f} \frac{dV}{d\tau}(\tau, x^*(\tau))\,d\tau \\ &= \int_t^{t_f} \Big\{\frac{\partial V}{\partial \tau}(\tau, x^*(\tau)) + \frac{\partial V}{\partial x}(\tau, x^*(\tau))\,\dot{x}^*(\tau)\Big\}\,d\tau \\ &= -\int_t^{t_f} f_0(\tau, x^*(\tau), u^*(\tau))\,d\tau, \end{aligned} \tag{9.14}$$
using (9.9) and (9.10). On the other hand,
$$\begin{aligned} V(t_f, \hat{x}(t_f)) - V(t, \hat{x}(t)) &= \int_t^{t_f} \Big\{\frac{\partial V}{\partial \tau}(\tau, \hat{x}(\tau)) + \frac{\partial V}{\partial x}(\tau, \hat{x}(\tau))\,\dot{\hat{x}}(\tau)\Big\}\,d\tau \\ &\le -\int_t^{t_f} f_0(\tau, \hat{x}(\tau), \hat{u}(\tau))\,d\tau, \end{aligned} \tag{9.15}$$
using (9.9). From (9.14), (9.15), (9.8), and the fact that $x^*(t) = \hat{x}(t) = x$, we conclude that
$$V(t, x) = \Phi(x^*(t_f)) + \int_t^{t_f} f_0(\tau, x^*(\tau), u^*(\tau))\,d\tau \;\ge\; \Phi(\hat{x}(t_f)) + \int_t^{t_f} f_0(\tau, \hat{x}(\tau), \hat{u}(\tau))\,d\tau,$$
so that (9.13) is proved. It also follows that $V$ is the maximum value function. ♦
• Exercise 1: Obtain the value function and the optimal feedback control for the linear regulator problem:
$$\begin{array}{ll} \text{Minimize} & \displaystyle\frac{1}{2} x'(T) P(T) x(T) + \frac{1}{2} \int_0^T \{x'(t) P(t) x(t) + u'(t) Q(t) u(t)\}\,dt \\ \text{subject to} & \\ \text{dynamics:} & \dot{x}(t) = A(t) x(t) + B(t) u(t), \quad 0 \le t \le T, \\ \text{initial condition:} & x(0) = x_0, \\ \text{control constraint:} & u(t) \in R^p, \end{array}$$
where $P(t) = P'(t)$ is positive semi-definite and $Q(t) = Q'(t)$ is positive definite. [Hint: Obtain the partial differential equation satisfied by $V(t, x)$ and try a solution of the form $V(t, x) = x' R(t) x$ where $R$ is unknown.]
9.3 Miscellaneous Remarks
There is a vast literature dealing with the theory and applications of DP. The most elegant applications of DP are to various problems in operations research where one can obtain "closed-form" analytic solutions to the recursion equation for the value function. See (Bellman and Dreyfus [1962]) and (Wagner [1969]). In the case of sequential decision-making under uncertainty, DP is about the only available general method. For an excellent introduction to this area of application see (Howard [1960]). For an important application of DP to computational considerations for optimal control problems see (Jacobson and Mayne [1970]). Larson [1968] has developed computational techniques which greatly increase the range of applicability of DP where closed-form solutions are not available. Finally, the book of Bellman [1957] is still excellent reading.
Bibliography
[1] K.J. Arrow and L. Hurwicz. Decentralization and Computation in Resource Allocation. Essays in Economics and Econometrics. University of North Carolina Press, 1960. in Pfouts, R.W. (ed.).
[2] M. Athans and P.L. Falb. Optimal Control. McGraw-Hill, 1966.
[3] A.V. Balakrishnan and L.W. Neustadt. Computing Methods in Optimization Problems. Aca-
demic Press, 1964.
[4] K. Banerjee. Generalized Lagrange Multipliers in Dynamic Programming. PhD thesis, Col-
lege of Engineering, University of California, Berkeley, 1971.
[5] R.E. Bellman. Dynamic Programming. Princeton University Press, 1957.
[6] R.E. Bellman and S.E. Dreyfus. Applied Dynamic Programming. Princeton University Press,
1962.
[7] D. Blackwell and M.A. Girshick. Theory of Games and Statistical Decisions. John Wiley,
1954.
[8] J.E. Bruns. The function of operations research specialists in large urban schools. IEEE Trans. on Systems Science and Cybernetics, SSC-6(4), 1970.
[9] A.E. Bryson and Y.C. Ho. Applied Optimal Control. Blaisdell, 1969.
[10] M.D. Canon, C.D. Cullum, and E. Polak. Theory of Optimal Control and Mathematical Programming. McGraw-Hill, 1970.
[11] G. Dantzig. Linear Programming and Extensions. Princeton University Press, 1963.
[12] C.A. Desoer. Notes for a Second Course on Linear Systems. Van Nostrand Reinhold, 1970.
[13] S.W. Director and R.A. Rohrer. On the design of resistance n-port networks by digital com-
puter. IEEE Trans. on Circuit Theory, CT-16(3), 1969a.
[14] S.W. Director and R.A. Rohrer. The generalized adjoint network and network sensitivities.
IEEE Trans. on Circuit Theory, CT-16(3), 1969b.
[15] S.W. Director and R.A. Rohrer. Automated network design–the frequency-domain case. IEEE
Trans. on Circuit Theory, CT-16(3), 1969c.
[16] R. Dorfman and H.D. Jacoby. A Model of Public Decisions Illustrated by a Water Pollution Policy Problem. Public Expenditures and Policy Analysis. Markham Publishing Co., 1970. in Haveman, R.H. and Margolis, J. (eds.).
[17] R. Dorfman, P.A. Samuelson, and R.M. Solow. Linear Programming and Economic Analysis. McGraw-Hill, 1958.
[18] R.I. Dowell and R.A. Rohrer. Automated design of biasing circuits. IEEE Trans. on Circuit
Theory, CT-18(1), 1971.
[19] Cowles Foundation Monograph. The Economic Theory of Teams. John Wiley, 1971. to appear.
[20] W.H. Fleming. Functions of Several Variables. Addison-Wesley, 1965.
[21] C.R. Frank. Production Theory and Indivisible Commodities. Princeton University Press,
1969.
[22] D. Gale. A geometric duality theorem with economic applications. Review of Economic
Studies, XXXIV(1), 1967.
[23] A.M. Geoffrion. Duality in Nonlinear Programming: a Simplified Application-Oriented Treat-
ment. The Rand Corporation, 1970a. Memo RM-6134-PR.
[24] A.M. Geoffrion. Primal resource directive approaches for optimizing nonlinear decomposable
programs. Operations Research, 18, 1970b.
[25] F.J. Gould. Extensions of Lagrange multipliers in nonlinear programming. SIAM J. Appl. Math., 17, 1969.
[26] H.J. Greenberg and W.P. Pierskalla. Surrogate mathematical programming. Operations Research, 18, 1970.
[27] H.J. Kushner. Introduction to Stochastic Control. Holt, Rinehart, and Winston, 1971.
[28] R.A. Howard. Dynamic Programming and Markov Processes. MIT Press, 1960.
[29] R. Isaacs. Differential Games. John Wiley, 1965.
[30] D.H. Jacobson, M.M. Lele, and J.L. Speyer. New necessary conditions of optimality for problems with state-variable inequality constraints. J. Math. Analysis and Applications, 1971. to appear.
[31] D.H. Jacobson and D.Q. Mayne. Differential Dynamic Programming. American Elsevier
Publishing Co., 1970.
[32] S. Karlin. Mathematical Methods and Theory in Games, Programming, and Economics, vol-
ume 1. Addison-Wesley, 1959.
[33] H.J. Kelley. Method of Gradients. Optimization Techniques. Academic Press, 1962. in Leitmann, G. (ed.).
[34] H.J. Kelley, R.E. Kopp, and H.G. Moyer. Singular Extremals. Topics in Optimization. Academic Press, 1970. in Leitmann, G. (ed.).
[35] D.A. Kendrick, H.S. Rao, and C.H. Wells. Water quality regulation with multiple polluters. In
Proc. 1971 Jt. Autom. Control Conf., Washington U., St. Louis, August 11-13 1971.
[36] T.C. Koopmans. Objectives, constraints, and outcomes in optimal growth models. Economet-
rica, 35(1), 1967.
[37] H.W. Kuhn and A.W. Tucker. Nonlinear programming. In Proc. Second Berkeley Symp. on
Math. Statistics and Probability. University of California Press, Berkeley, 1951.
[38] R.E. Larson. State Increment Dynamic Programming. American Elsevier Publishing Co.,
1968.
[39] L.S. Lasdon, S.K. Mitter, and A.D. Waren. The conjugate gradient method for optimal control
problems. IEEE Trans. on Automatic Control, AC-12(1), 1967.
[40] E.B. Lee and L. Markus. Foundations of Optimal Control Theory. John Wiley, 1967.
[41] R. Luce and H. Raiffa. Games and Decisions. John Wiley, 1957.
[42] D.G. Luenberger. Quasi-convex programming. SIAM J. Applied Math, 16, 1968.
[43] O.L. Mangasarian. Nonlinear Programming. McGraw-Hill, 1969.
[44] S.R. McReynolds. The successive sweep method and dynamic programming. J. Math. Analy-
sis and Applications, 19, 1967.
[45] J.S. Meditch. Stochastic Optimal Linear Estimation and Control. McGraw-Hill, 1969.
[46] M.D. Mesarovic, D. Macko, and Y. Takahara. Theory of Hierarchical, Multi-level Systems. Academic Press, 1970.
[47] C.E. Miller. The Simplex Method for Local Separable Programming. Recent Advances in Mathematical Programming. McGraw-Hill, 1963. in Graves, R.L. and Wolfe, P. (eds.).
[48] L.W. Neustadt. The existence of optimal controls in the absence of convexity conditions. J.
Math. Analysis and Applications, 7, 1963.
[49] L.W. Neustadt. A general theory of extremals. J. Computer and System Sciences, 3(1), 1969.
[50] H. Nikaido. Convex Structures and Economic Theory. Academic Press, 1968.
[51] G. Owen. Game Theory. W.B. Saunders & Co., 1968.
[52] E. Polak. Computational Methods in Optimization: A Unified Approach. Academic Press,
1971.
[53] L.S. Pontryagin, V.G. Boltyanskii, R.V. Gamkrelidze, and E.F. Mishchenko. The Mathematical Theory of Optimal Processes. Interscience, 1962.
[54] R.T. Rockafellar. Convex Analysis. Princeton University Press, 1970.
[55] M. Sakarovitch. Notes on Linear Programming. Van Nostrand Reinhold, 1971.
[56] L.J. Savage. The Foundations of Statistics. John Wiley, 1954.
[57] K. Shell. Essays in the Theory of Optimal Economic Growth. MIT Press, 1967.
[58] R.M. Solow. The economist’s approach to pollution and its control. Science, 173(3996), 1971.
[59] D.M. Topkis and A.F. Veinott, Jr. On the convergence of some feasible directions algorithms for nonlinear programming. SIAM J. on Control, 5(2), 1967.
[60] H.M. Wagner. Principles of Operations Research. Prentice-Hall, 1969.
[61] P. Wolfe. The simplex method for quadratic programming. Econometrica, 27, 1959.
[62] W.M. Wonham. On the separation theorem of optimal control. SIAM J. on Control, 6(2), 1968.
[63] W.I. Zangwill. Nonlinear Programming: A Unified Approach. Prentice-Hall, 1969.
Index
Active constraint, 50
Adjoint equation
augmented, 98, 105
continuous-time, 85, 91
discrete-time, 80
Adjoint network, 23
Affine function, 54
Basic feasible solution, 39
Basic variable, 39
Certainty-equivalence principle, 5
Complementary slackness, 34
Constraint qualification
definition, 53
sufficient conditions, 55
Continuous-time optimal control
necessary condition, 101, 103
problem formulation, 101, 103
sufficient condition, 91, 125
Control of water quality, 67
Convex function
definition, 37
properties, 37, 54, 55
Convex set, 37
Derivative, 8
Design of resistive network, 15
Discrete-time optimal control
necessary condition, 78
problem formulation, 77
sufficient condition, 80, 123
Dual problem, 33, 58
Duality theorem, 33, 63
Dynamic programming, DP
optimality conditions, 123, 125
problem formulation, 121, 124
Epigraph, 61
Equilibrium of an economy, 45, 64
Farkas’ Lemma, 32
Feasible direction, 72
algorithm, 71
Feasible solution, 33, 49
Game theory, 5
Gradient, 8
Hamilton-Jacobi-Bellman equation, 125
Hamiltonian H, H̃, 78, 99, 101
Hypograph, 61
Knapsack problem, 124
Lagrange multipliers, 21, 37
Lagrangian function, 21, 35, 54
Linear programming, LP
duality theorem, 33, 35
optimality condition, 34
problem formulation, 31
theory of the firm, 42
Maximum principle
continuous-time, 86, 91, 101, 103
discrete-time, 80
Minimum fuel problem, 81
Minimum-time problem, 107
example, 108
Non-degeneracy condition, 39
Nonlinear programming, NP
duality theorem, 63
necessary condition, 50, 53
problem formulation, 49
sufficient condition, 54
Optimal decision, 1
Optimal economic growth, 2, 113, 117
Optimal feedback control, 123, 125
Optimization over open set
necessary condition, 11
sufficient condition, 13
Optimization under uncertainty, 4
Optimization with equality constraints
necessary condition, 17
sufficient condition, 21
Optimum tax, 70
Primal problem, 33
Quadratic cost, 81, 112
Quadratic programming, QP
optimality condition, 70
problem formulation, 70
Wolfe algorithm, 71
Recursion equation for dynamic programming,
124
Regulator problem, 81, 112
Resource allocation problem, 65
Separation theorem for convex sets, 73
Separation theorem for stochastic control, 5
Shadow prices, 37, 39, 45, 70
Simplex algorithm, 37
Phase I, 41
Phase II, 39
Singular case for control, 113
Slack variable, 32
State-space constraint
continuous-time problem, 117
discrete-time problem, 77
Subgradient, 60
Supergradient, 60
Supporting hyperplane, 61, 84
Tangent, 50
Transversality condition
continuous-time problem, 91
discrete-time problem, 80
Value function, 123
Variable final time, 103
Vertex, 38
Weak duality theorem, 33, 58
Wolfe algorithm, 71

3.3. t (i) Specialize the algorithm above for this particular case. (ii) How do the formulas change if the network equations are written using an arbitrary cutset matrix instead of the incidence matrix? . REMARKS AND EXTENSIONS f0 (e) = t∈T 25 αt |vt −v d |2 .

26 CHAPTER 3. OPTIMIZATION WITH EQUALITY CONSTRAINTS .

000 . Additional miscellaneous comments are collected in the last section. and then we define the general linear programming problem. Recall Example 2 of Chapter I. 000 and 5g+7u 40 ≤ 5250 or 5g + 7u ≤ 210. In the second section we present the duality theory for linear programming and use it to obtain some sensitivity results. 27 . the faculty can 40 offer 2(750) + 3(250) = 2250 seminars and 6(750) + 3(250) = 5250 lecture courses. and the number of 20 lecture courses demanded per year is 5g+7u .1. 4.1 Example. For a detailed and readily accessible treatment of the material presented in this chapter see the companion volume in this Series (Sakarovitch [1971]).Chapter 4 OPTIMIZATION OVER SETS DEFINED BY INEQUALITY CONSTRAINTS: LINEAR PROGRAMMING In the first section we study in detail Example 2 of Chapter I. Then the number of seminars demanded per year is 2g+u . On the supply side of our accounting.1 The Linear Programming Problem 4. Let g and u respectively be the number of graduate and undergraduate students admitted. In Section 3 we present the Simplex algorithm which is the main procedure used to solve linear programming problems. Because of his contractual agreements. In section 4 we apply the results of Sections 2 and 3 to study the linear programming theory of competitive economy. the President must satisfy 2g+u 20 ≤ 2250 or 2g + u ≤ 45.

A2 x∗ = b2 . and (ii) x∗ yields a higher payoff than all points in the cone K ∗ consisting of all rays starting at x∗ and passing through Ω. Furthermore. and A3 x∗ < b3 . the surface of constant payoff k say. so that x ≤ y means xi ≤ yi for all i.28 CHAPTER 4. (Obviously we are assuming in this discussion that c = 0. b = (45000. (4. u) . Then the set Ω of all vectors x which satisfy the constraints in (4.2.1) can be rewritten as (4.3) as an exercise.2) is given by Ω = {x|Ai x ≤ bi . so that K ∗ is given by K ∗ = {x∗ + h|A1 h ≤ 0 . 1 (4. Here we pursue consequences of the second conclusion. and futhermore at x∗ the direction c points away from Ω.1. . 1 ≤ i ≤ 4. LINEAR PROGRAMMING Since negative g or u is meaningless. the President receives the payoff c x. We can rephrase this by saying that x∗ ∈ Ω is an optimal decision if and only if the plane π ∗ through x∗ does not intersect the interior of Ω. as k increases π(k) moves in the direction c. Now x∗ satisfies Ax x∗ = b1 .2.1. 0) and let A be the 4×2 matrix   2 1  5 7  . denote the rows of A.1 we can see that x∗ = Q is an optimal decision and the cone K ∗ is shown in Figure 4. β) . A=  −1 0  0 −1 Then (4.2) Let Ai . 0. A4 x∗ < b4 .1) It is convenient to use a more general notation. 210000. A2 h ≤ 0 .3) Recall the notation introduced in 1. u ≥ 0 . 000 5g + 7u ≤ 210. So let x = (g. (4. u ≥ 0. For the situation depicted in Figure 4. Therefore. For each choice x. We pause to formulate the generalization of (4. Since c x∗ ≥ c y for all y ∈ K ∗ we conclude that c h ≤ 0 for all h such that A1 h ≤ 0. 1 ≤ i ≤ 4} and is the polygon OP QR in Figure 4.2)1 Maximize c x subject to Ax ≤ b . The first conclusion is the foundation of the powerful Simplex algorithm which we present in Section 3. is the hyperplane π(k) = {x|c x = k}. These hyperplanes for different values of k are parallel to one another since they have the same normal c.) Evidently an optimal decision is any point x∗ ∈ Ω which lies on a hyperplane π(k) which is farthest along the direction c. 000 g ≥ 0. Formally then the President faces the following decision problem: Maximize αg + βu subject to 2g + u ≤ 45. there are also the constraints g ≥ 0. From this condition we can immediately draw two very important conclusions: (i) at least one of the vertices of Ω is an optimal decision. c = (α. A2 h ≤ 0} . since K ∗ lies “below” π ∗ .

.4) - - {x|A2 x = b2 } - x1 2 As c varies. its generalization to n dimensions is a deep theorem known as Farkas’ lemma (see Section 2). i ∈ I(x). Let c ∈ Rn . 1 ≤ i ≤ k.3) is satisfied as long as c lies between A1 and A2 . let I(x) ⊂ {1.3) is satisfied if and only if there exist λ∗ ≥ 0. the optimal decision will change. Suppose x∗ satisfies the constraints. i ∈ I(x∗ ). Show that x∗ is optimal if an only if / c h ≤ 0 for all h such that Ai h ≤ 0 . THE LINEAR PROGRAMMING PROBLEM x2 29 - π∗ - π(k) = {x|c x = k} direction of increasing payoff k - P - Q = x∗ - c ⊥ π∗ A2 ⊥ P Q A1 ⊥ QR A3 . Returning to our problem. .1.1): Although this statement is intuitively obvious. n} be such that Ai (x) = bi . Consider the problem Maximize c x subject to Ai x ≤ bi . O A4 R {x|A1 x = b1 } Figure 4. Ai x < bi . be real numbers. be n-dimensional row vectors. Mathematically this means that (4. it is clear that (4. For any x satisfying the constraints. . . 1 ≤ i ≤ k.4. A1 + λ∗ A2 . λ∗ ≥ 0 such that 1 2 c = λ∗ . and let bi . 1 2 (4. We can see from our analysis that the situation is as follows (see Figure 4.1: Ω = OP QR. Exercise 1: Let Ai . i ∈ I(x). 1 ≤ i ≤ k . 2 .

such that i 4 (a) c = i=1 λ∗ ai . x∗ = P is optimal iff c lies between A3 and A2 iff c = λ∗ A2 + λ∗ A3 for some λ∗ ≥ 0.5) is equivalent to (4. 1 ≤ i ≤ 4.30 CHAPTER 4. i = 1. x∗ ∈ QP is optimal iff c lies along A2 iff c = λ∗ A2 for some λ∗ ≥ 0. 2. 2 2 3. i i (4.) x∗ ∈ Ω is optimal iff there exist λ∗ ≥ 0 . 1 2 (b) if aj1 x∗ + aj2 x∗ < bj then x∗ = 0. i = 1. (b) if Ai x∗ < bi then λ∗ = 0 .6) . λ∗ ≥ 0 such that 1 2 (a) ci ≤ λ∗ a1i + λ∗ a2i . and to reformulate (4.6). Exercise 2: Show that (4. below. λ∗ ≥ 1 2 1 2 0. 2.5) For purposes of application it is useful to separate those constraints which are of the form xi ≥ 0. 1. from the rest. LINEAR PROGRAMMING P x∗ = Q K∗ A2 c A1 A3 O R π∗ A4 Figure 4. 2 1i i (4.2: K ∗ is the cone generated by Ω at x∗ . 1 2 j (c) if ci < λ∗ + λ∗ a2i then x∗ = 0. (Here Ai = (ai1 . 2. ai2 ). j = 1. 2. λ∗ ≥ 2 3 2 3 0. etc.5) accordingly We leave this as an exercise. These statements can be made in a more elegant way as follows: x∗ ∈ Ω is optimal iff there exists λ∗ ≥ 0 . x∗ = Q is optimal iff c lies between A1 and A2 iff c = λ∗ A1 + λ∗ A2 for some λ∗ ≥ 0.

. Step 3: Replace each variable xj which is constrained xj ≤ 0 by a variable yj = −xj constrained yj ≥ 0 and then replace aij xj by (−aij )yj for every i and cj xj by (−cj )yj .7) can be transformed into an equivalent LP of the form (4. .8) xj ≥ 0 Case II: (4. ail x1 + . . . such is not the case.4. . . xj ≥ 0 . 1≤j≤p. .2 Problem formulation. q + 1 ≤ j ≤ n . . + ain xn ≤ bi . . 1≤j≤n. .7) is of the form (4. . . THE LINEAR PROGRAMMING PROBLEM 31 4. . . + ain xn = bi .7) appears to be more general than (4. . p + 1 ≤ j ≤ q. bi are fixed real numbers.9).1. . + 1 ≤ i ≤ m . There are two important special cases: Case I: (4.8) and (4. xj arbitary . .7) is of the form (4. + cn xn subject to ail x1 + ai2 x2 + . A linear programming problem (or LP in brief) is any decision problem of the form 4. Maximize c1 x1 + c2 x2 + . l ≤ i ≤ k .8): n (4. . j=1 subject to 1≤i≤m. Step 2: Replace each equality constraint aij xj = bi by two inequality constraints: aij xj ≤ bi .9): n Maximize j=1 n cj xj aij xj = bi . and xj ≥ 0 . Step 1: Replace each inequality constraint aij xj ≥ bi by (−aij )xj ≤ (−bi ). .7) Maximize j=1 n cj xj aij xj ≤ bi .8). + ain xn ≥ bi . . aij . k + 1 ≤ i ≤ .7. 1≤j≤n (4. . Proposition: Every LP of the form (4. Proof. . (−aij )xj ≤ (−bi ). where the cj .9) xj ≥ 0 Although (4. . j=1 subject to 1≤i≤m.1. (4. ail x1 + .

1 ≤ i ≤ k. (ii) there exists λ ≥ 0. λ ∈ Rk . . . Evidently the new LP has the form (4. and A = {aij } is a fixed m × n matrix. Step 2: Replace each inequality constraint aij xj ≥ bi by the equality constraint aij xj − yi = bi where yi is an additional variable constrained by yi ≥ 0. 1 ≤ j ≤ m. . Farkas’ Lemma. An algebraic version of this result is sometimes more convenient. .9) and is equivalent to the original one. Evidently the resulting LP has the form (4. i j In the remaining discussion. Let A be a k × n matrix.32 CHAPTER 4. Let c ∈ Rn be a column vector. be n-dimensional row vectors. For a proof the reader is referred to (Mangasarian [1969]). such that A λ = c.9) Proof. zj ≥ 0 and then replace aij xj by aij yj + (−aij )zj for every i and cj xj by cj yj + (−cj )zj .11) . Exercise 1: With the same hypothesis and notation of Exercise 1 in 4.) Step 3. Ax ≤ 0 implies c x ≤ 0. ♦ Proposition: Every LP of the form (4. . b ∈n are fixed vectors. (The new variables added in these steps are called slack variables. λ∗ ≥ 0 such that m 1 m i∈I(x∗ ) (a) cj ≤ n i=1 λ∗ aij . c ∈ Rn . ♦ 4. Let Ai . Consider the pair of LPs (4.1). Using this result it is possible to derive the main results following the intuitive reasoning of (4. Let c ∈ Rn .2. 1 ≤ i ≤ m (c) if j i λ∗ aij > cj then x∗ = 0 . Farkas’ Lemma (algebraic version). . The following statements are equivalent: (i) for all x ∈ Rn . k (ii) there exists λ1 ≥ 0. Use the previous exercise to show that x∗ is optimal iff there exist λ∗ ≥ 0. λk ≥ 0 such that c = i=1 λi Ai . 1 ≤ j ≤ n i m i=1 (b) if j=1 aij x∗ < bi then λ∗ = 0 . Step 1: Replace each inequality constraint aij xj ≤ bi by the equality constraint aij xj + yi = bi where yi is an additional variable constrained yi ≥ 0. whereas x ∈ Rn and λ ∈ Rm will be variable. (i) for all x ∈ Rn . use the first version of Farkas lemma to show that there exist λ∗ ≥ 0 for i ∈ I(x∗ ) such that λ∗ Ai = c .17). . We leave this development as two exercises and follow a more elegant but less intuitive approach.7) can be transformed into an equivalent LP of the from (4. The following statements are equivalent. i i Exercise 2: Let x∗ satisfy the constraints for problem (4.8) and is equivalent to the original one. Ai x ≤ 0 for 1 ≤ i ≤ k implies c x ≤ 0. Step 4: Repeat these steps from the previous proposition.2 Qualitative Theory of Linear Programming 4.1. .1 Main results. LINEAR PROGRAMMING Step 4: Replace each variable xj which is not constrained in sign by a pair of variables yj −z j = xj constrained yj ≥ 0. We begin by quoting a fundamental result.10) and (4.

1 ≤ i ≤ m xj ≥ 0 .12) Proof: x ≥ 0 and λ A − c ≥ 0 implies (λ A−c )x ≥ 0 giving the first inequality. ξ ≤ 0 . x ≥ 0} be the set of all points satisfying the constraints of the primal problem. Then there exists x∗ which is optimum for (4. + λm bm subject to λ1 a1j + . λ ∈ Ωd .10) and λ∗ is optimal for (4.4. . Lemma 1: (Weak duality) Let x ∈ Ωp . QUALITATIVE THEORY OF LINEAR PROGRAMMING below. A λ ≥ c and b λ−c x ≤ 0. we must show that there exist x ≥ 0. λ ≥ 0}. (4. this is possible only if A ξ − cθ ≤ 0 . . Similarly let Ωd = {λ ∈ Rm |λ A ≥ c . . A point x ∈ Ωp (λ ∈ Ωd ) is said to be a feasible solution or feasible decision for the primal (dual). + ain xn ≤ bi . . Furthermore. such that Ax ≤ b.11). Theorem 1: (Strong duality) Suppose Ωp = φ and Ωd = φ. λ ≥ 0. y ≥ 0. this is equivalent to the existence of x ≥ 0.11) Definition: Let Ωp = {x ∈ Rn |Ax ≤ b. then x∗ is optimal for (4. r ∈ R. r ≤ 0 such that   A −c Im A b −I n     1    x y λ µ r     b  = c    0 By the algebraic version of Farkas’ Lemma.10) and λ∗ which is optimum for (4. . 1 ≤ j ≤ n .14) (4. . Maximize c1 x1 + . ♦ Corollary 1: If x∗ ∈ Ω and λ∗ ∈ Ωd such that c x∗ = (λ∗ ) b.10) is called the primal problem and (4.2. + λm amj ≥ cj . 1 ≤ j ≤ n λi ≥ 0 . Then c x ≤ λ Ax ≤ λ b. The next result is trivial. .10) Maximize λ1 b1 + .11). (4.13) . µ ≤ 0. . c x∗ = (λ∗ ) b. i. (4.11) is called the dual problem.e. + cn xn subject to ai1 x1 + . θ≤0 implies b ξ + c w ≤ 0. By introducing slack variables y ∈ Rm .. Proof: Because of the Corollary 1 it is enough to prove the last statement. b−Ax ≥ 0 and λ ≥ 0 implies λ (b−Ax) ≥ 0 giving the second inequality. λ ≥ 0. −w ≤ 0 . Aw = bθ ≤ 0 . µ ∈ Rm . 33 (4. 1 ≤ i ≤ m . (4.

15) λ∗ aij < cj implies x∗ = 0 . µ ≤ 0 such that   λ | A −In  − − −  = c | µ By Farkas’ Lemma there exists w ∈ Rn such that Aw ≤ 0. The sufficiency part of (i) follows from Theorem 1. Now. Also. (4. in contradiction. the hypothesis that Ωp = φ is essential. The first equality in (4. Equivalently. sup {c x|x ∈ Ωp } = +∞ so that there is no optimal decision for the primal. Proof Because of the symmetry of the primal and dual it is enough to prove only (i). Sufficiency. Evidently then.14) since θ < 0. Suppose x∗ ∈ Ωp is optimal. Hence.16) yields (λ∗ ) b = (λ∗ ) Ax∗ = (A λ∗ ) x∗ . Then there exists an optimum decision for the primal LP iff Ωd = φ.34 CHAPTER 4. ξ. Aw ≤ 0. Suppose. Case (ii): Suppose (w. By Corollary 1. A(x + θw) ≤ b.) Proof: First of all we note that for x∗ ∈ Ωp .16): (λ∗ ) (Ax∗ − b) = 0.13) and θ = 0. there exist x ∈ Ωp . Ωd = φ. Then (ξ/θ) ∈ Ωd . so that −A ξ ≥ 0. and c w ≤ (A λ) w = λ (Aw) ≤ 0. which is equivalent to (4. i j and m i=1 ((4. there does not exist λ ≥ 0. but then for any θ > 0. λ ∈ Ωd . −b ξ = b (−ξ) ≥ (Ax) (−ξ) = x (−A ξ) ≥ 0. Then there exists an optimum decision for the dual LP iff Ωp = φ. Theorem 3: (Optimality condition) x∗ ∈ Ωp is optimal if and only if there exists λ∗ ∈ Ωd such that m j=1 aij x∗ < bi implies λ∗ = 0 . so that c x∗ = (λ∗ ) b.16) is just an equivalent rearrangement of these two equalities. Then from Theorem 2. By hypothesis. ξ. Theorem 2: (i) Suppose Ωp = φ. x∗ is optimal. θ) satisfies (4. LINEAR PROGRAMMING Case (i): Suppose (w. ♦ Remark: In Theorem 2(i). ♦ . and (A λ∗ − c) x∗ = 0 . θ) satisfies (4.13) and θ < 0. λ∗ ∈ Ωd . and c w > 0. so that only the necessity remains.16) holds for some x∗ ∈ Ωp . By Lemma 1 we always have c x∗ ≤ (λ∗ ) Ax∗ ≤ (λ∗ ) b so that we must have c x∗ = (λ∗ ) Ax∗ = (λ∗ ) b. that Ωd = φ. w ≥ 0. while the second yields (A λ∗ ) x∗ = c x∗ . (ii) Suppose Ωd = φ. Exercise 3: Exhibit a pair of primal and dual problems such that neither has a feasible solution. so there exists x ≥ 0 such that Ax ≤ b. Ωp = φ. By hypothesis. We will show that sup {c x|x ∈ Ωp } = +∞. (4. c (x + θw) = c x + θc w. But (4. λ∗ ∈ Ωd .15) is known as the condition of complementary slackness.16) Necessity. ♦ The existence part of the above result can be strengthened. (x + θw) ≥ 0. −w ≤ 0. Consider the following exercise.15) is equivalent to (4. (w/−θ) ∈ Ωp . so that by Lemma 1 c w/(−θ) ≤ b ξ/θ. Suppose (4. So that b ξ + c w ≤ 0. −ξ ≥ 0. Ωd = φ means there does not exist λ ≥ 0 such that A λ ≥ c. so that (x + θw) ∈ Ωp . j i (4. so that by Theorem 1 there exists λ∗ ∈ Ωd such that c x∗ = (λ∗ ) b.

) Exercise 8: Formulate a dual for (4.9). subject to Ax ≤ b.20) respectively. QUALITATIVE THEORY OF LINEAR PROGRAMMING 35 The conditions x∗ ∈ Ωp .2.) Exercise 6: Show that x∗ ∈ Ωp is optimal iff there exists λ∗ ∈ Ωd such that m x∗ j > 0 implies i=1 λ∗ aij = cj . . Apply Theorems 1 and 2. . (4.2 Results for problem (4.18). x ≥ 0.20) the dual. (Note that. Theorem 4: (Saddle point) x∗ ≥ 0 is optimal for the primal if and only if there exists λ∗ ≥ 0 such that L(x. λ∗ ≥ 0 provided we strengthen (4. λ∗ ) ≤ L(x∗ . .4. Ωd denote the set of all x.20) the λi are unrestricted in sign. A pair (x∗ .9). and allλ ≥ 0. 1 ≤ i ≤ m . We let Ωp .17) is said to form a saddle-point of L over the set {x|x ∈ Rn . xj ≥ 0 . (4. indicating how to use the results already obtained. 1 ≤ j ≤ n . Replace (4. The function L is called the Lagrangian. It is possible to derive analogous results for LPs of the form (4. . where L is defined in (4. whose proof is left as an exercise. + λm amj ≥ cj (4. + λm bm subject to λ1 a1j + . 1≤j≤n .7). Exercise 5: Prove Theorems 1 and 2 with Ωp and Ωd interpreted as above. λ ∈ Rm . .19).10). 4. λ) for all x ≥ 0. unlike (4. λ satisfying the constraints of (4. i Exercise 7: x∗ ≥ 0 is optimal iff there exists λ∗ ∈ Rm such that L(x.19) by the equivalent LP: maximize c x. . This is now of the form (4.15) as in the following result. We state these results as exercises. where L: Rn xRm → R is defined by L(x. x ≥ 0} × {λ|λ ∈ Rm . λ∗ ) ≤ L(x∗ .19) .17) Exercise 4: Prove Theorem 4.2. λ∗ ) ≤ L(x∗ . λ∗ ) satisfying (4. .18) (4. λ) for all x ≥ 0. + cn xn subject to ail x1 + . and obtain the result analogous to Exercise 5. . Remark.17). We begin with a pair of LPs: Maximize c1 x1 + . . λ∗ ) ≤ L(x∗ . (−A)x ≤ (−b). + ain xn = bi . (Hint. Again (4. Minimize λ1 b1 + . x∗ ∈ Ωd in Theorem 3 can be replaced by the weaker x∗ ≥ 0. λ ≥ 0}. λ) = c x − λ (Ax − b) (4. λ is not restricted in sign.19) is called the primal and (4.20) Note that in (4.

We investigate how the maximum value of (4. The matrix A will remain fixed. c ) − M (ˆ c)} . b(i. c) − M (ˆ −ε). We write Ωp (b) and Ωd (c) to denote the explicit dependence on b and c respectively. For 1 ≤ i ≤ m. the partial derivatives given above exist. c) ◦ ◦ ˆ ≤ λi ≤ ∂M − ˆ ˆ ∂bi (b. . ε). . . ◦ ◦ Theorem 5: At each (ˆ c) ∈B × C .11) or for the pair (4. cj+1 . ˆ ∂M − ˆ ˆ ∂cj (b. c)} . c) .10) or (4. c) ∈ B × C define M (b. LINEAR PROGRAMMING 4. if b. bi+1 . . Furthermore. ε)) − M (ˆ c} . . c) = lim ε→0 ε>0 1 ˆ ˆ ε {M (b. cj + ε. ˆ as follows: ∂M + ˆ ˆ ∂bi (b. and for (b. .21) = lim ε→0 ε>0 1 ˆ ˆ ε {M (b(i. .2. . . c − M (ˆ c(j. and for 1 ≤ j ≤ n. b.20).19) changes as the vectors b and c change. c2 . cn ) . −ε))} . ˆ c ∂M + ˆ ˆ ∂bi (b. ˆ ∂M + ˆ ˆ ∂cj (b. C respectively. c) = lim ε→0 ε>0 1 ˆ ˆ ε {M (b. ˆ Let B . . then ˆ b). c(j. 1≤i≤m. ε) = (c1 . c) = lim ε→0 ε>0 1 ˆ ˆ ε {M (b. bi−1 . . . b.3 Sensitivity analysis. . ˆ ∂M − ˆ ˆ ∂bi (b. Let Ωp and Ωd be the sets of feasible solutions for the pair (4. b2 . ε) = (b1 . c) (4.19) and (4. ˆ x ∈ Ωp (ˆ λ ∈ Ωd (ˆ) are optimal. bm ) . ε ∈ R. (4. c) = max {c x|x ∈ Ωp (b)} = min {λ b|λ ∈ Ωd (c)} .36 CHAPTER 4.22) . b ∈ Rm denote b(i. . bi + ε.10) and (4. . We define in the usual way the right and left hand partial derivatives of M at a point (ˆ c) ∈ B × C b. . cj−1 . Let B = {b ∈ Rm |Ωp (b) = φ} and C = {c ∈ Rn |Ωd (c) = φ}. c ∈ Rn denote c(j. C denote the interiors of B. ε ∈ R. b.

c(j.24): Maximize c1 x1 + . + cn xn subject to ail x1 + . so that b. ·) : C → R is convex. ˆ b(i. ε2 )) − f (ˆ)} ≥ (1/ε1 ){f (ˆ(i. 1 ≤ i ≤ m xj ≥ 0 . δ2 )) − f (ˆ)}.14). and x. y ∈ X. ♦ We recall some fundamental definitions from convex analysis.3. ˆ ˆˆ b. c) : B → R is concave and for fixed b ∈ B. c)} b(i. THE SIMPLEX ALGORITHM 37 ∂M + ˆ ˆ ∂cj (b. 1 ˆ c) − M (ˆ c(j. (4.24) . . and f : X → R be convex. ˆ ε λ {b − b(i.22). c) ≤ λ ˆ ε). for ε > 0. Remark 3: The variables of the dual problem are called Lagrange variables or dual variables or shadow-prices. . Definition: X ⊂ Rn is said to be convex if x. M (b. 9 below. gives (4. 1 ≤ j ≤ n . −ε))} ≤ 1 {ˆ − c(j. (i) f is said to be convex if X is convex. 0 ≤ θ ≤ 1 implies f (θx + (1 − θ)y) ≤ θf (x) + (1 − θ)f (y).1 Preliminaries We now present the celebrated Simplex algorithm for finding an optimum solution to any LP of the form (4.. Then the result follows immediately. the left and right hand partial derivatives of f exist. This is useful in some computational problems. −ε)} x = xj . ε)) x. so that b. and M (ˆ c(j. > 0. c) and M (b. c)} ≥ ε {ˆ(j. ε)) ≥ (ˆ(j. ˆ b. . Taking limits as ε → 0.4. ˆ ≤ ≥ 1ˆ ˆ ˆ ˆ ε λ {b(i.e. (ii) f is said to be concave if −f is convex.22). ε > 0.(1/ε2 ){f (ˆ(i. ε)) − M (b.3 The Simplex Algorithm 4.23) as ε → 0. Exercise 9: Let X ⊂ Rn . y ∈ X. ε > 0. for ε > 0. (4. (4. δ1 )) − f (ˆ))} ≥ (1/δ2 ){f (ˆ(i. ε).23) Proof: We first show (4. c) ≥ xj ≥ ˆ ∂M − ˆ ˆ ∂cj (b. 4.23) assuming that the partial derivatives exist. c) . Finally. c)} 1 ˆ c) − M (ˆ −ε). c ) − M (b. for ε > 0. By strong duality ˆ b. (Hint: First show that for ε2 > ε1 > 0 > δ1 > δ2 . x. C ⊂ Rn defined above are convex sets. Remark 2: We can also show without difficulty that M (·. ˆ b(i. ˆ 1 ˆ ˆ ˆ ˆ ε {M (b(i. ˆ c ˆ ˆ ˆ ε ε which give (4. Definition: Let X ⊂ Rn and f : X → R.) x x x x Remark 1: Clearly if (∂M/∂bi )(ˆ exists. and the sets B ⊂ Rm . 1≤j≤n. y ∈ X and 0 ≤ θ ≤ 1 implies (θx + (1 − θ)y) ∈ X. ˆ c ˆ 1 1 ˆ ˆ ˆ ˆ c ˆ ˆ ˆ ε {M (b. M (·. then we have equality in (4. M (ˆ c) = c x. Ωd . linear plus constant) functions on B and C respectively. ˆ ε {M (b.3. On the other hand. 0 ≤ θ ≤ 1 implies f (θx + (1 − θ)y) ≥ θf (x) + (1 − θ)f (y). (b) Show that for fixed c ∈ C. ε) − c} x = xj . The reason behind the last name will be clear in Section 4. + ain xn = bi . i. Exercise 8: (a) Show that Ωp . the existence of the right and left partial derivatives follows from Exercises 8. Show that at each point x in the interior of ˆ X. and then this result b) compares with 3. {M (b. ε1 )) − f (ˆ))} ≥ x x x x (1/δ1 ){f (ˆ(i. M (ˆ c) = λ ˆ and by weak duality M (ˆ ε). ε) − b}λi . .22). for ε 1ˆ ˆ ˆ −ε)} = λi . ·) are piecewise linear (more accurately.

24). . cj γj = J∈I(x∗ ) Since θ can take on positive and negative values. with y. LINEAR PROGRAMMING As mentioned in 4. Since x∗ > 0 for j ∈ I(x∗ ). amj ) . j∈I(x∗ ) j=1 For θ ∈ R define z(θ) ∈ Rn by zj (θ) = x∗ = θγj . let z ∗ = x∗ and we are done.. j ∈ I(x∗ ) . i. We begin with a precise definition of a vertex. Definition: For x ∈ Ωp . j x∗ Aj + θ j j∈I(x∗ ) j∈I(x∗ ) Az(θ) = j∈I(x∗ ) zj (θ)Aj = γj Aj =b+θ·0=b. Since Ωp has only finitely many vertices (see Corollary 1 below). (n − j)! Lemma 2: Let x∗ be an optimal decision of (4. m n! Corollary 1: Ωp has at most vertices. the inequality above can hold on if 0. implies x = y = z. This is done in Lemma 1.1 the algorithm rests upon the observations that if an optimal exists. Hence suppose {Aj |j ∈ I(x∗ )} is linearly dependent so that there exist γj . not all zero. and then c x∗ = c z(θ). so that z(θ) is also an optimal solution for |θ| ≤ θ ∗ . Then there is a vertex z ∗ of Ωp which is optimal. . . Hence z(θ) ∈ Ωp whenever |θ| ≤ θ ∗ . Then. let I(x) = {j|xj > 0}. z in Ωp and 0 < λ < 1. we only have to investigate a finite set. I(z(θ0 )) ⊂ I(x∗ ) − {j0 } . In the following we let Aj denote the jth column of A. it follows that z(θ) ≥ 0 when j |θ| ≤ min x∗ j |γj | j ∈ I(x∗ ) = θ ∗ say . such that γj Aj = 0 .e. j ∈ I(x∗ ) j x∗ = 0 . Lemma 1: Let x ∈ Ωp . Then x is a vertex of Ωp iff {Aj |j ∈ I(x)} is a linearly independent set. Proof: If {Aj |j ∈ I(x∗ )} is linearly independent. then at least one vertex of the feasible set Ωp is an optimal solution. The practicability of this investigation depends on the ease with which we can characterize the vertices of Ωp . Since x∗ is optimal we must have c x∗ ≥ c z(θ) = c x∗ + θ j∈I(x∗ ) cj yj for −∗ θ ≤ θ ≤ θ ∗ . But from the definition of z(θ) it is easy to see that we can pick θ0 with |θ0 | = θ ∗ such that zj (θ0 ) = x∗ +θ0 γj = 0 j for at least one j = j0 in I(x∗ ). .38 CHAPTER 4. . Exercise 1: Prove Lemma 1. Definition: x ∈ Ωp is said to be a vertex of Ωp if x = λy + (1 − λ)z. Aj = (a1j .

3. Assumption of non-degeneracy. either we obtain an optimum solution or we discover that no optimum exists. < jm . j ∈ I(z) . The algorithm is divided into two parts: In Phase I we determine if Ωp is empty or not.c(z k ). Then z is optimal if and only if λ (z)A ≥ cj .. j j Step 3. Notation: Let z be a non-degenerate basic feasible solution. are called the basic variables at z. z is optimal iff there exists λ such that λ Aj = cj . in a finite number of steps. The set I(z) is then called the basis at z. Calculate [D(z k )]−1 . . and let j1 < j2 < . . Definition: (i) z is said to be a basic feasible solution if z ∈ Ωp .2 The Simplex Algorithm. for . Set k = 0 and go to Step 2.2. . jm ]. j2 . in a finite number of steps we will find an optimal decision z ∗ which is also vertex. Let D(z) denote the m × m non-singular matrix D(z) = [Aj1 . j ∈ I(z) . let c(z) denote the m-dimensional column vector c(z) = (cj1 . . cjm ) and define λ(z) by λ (z) = c (z)[D(z)]−1 . Proof: By Exercise 6 of Section 2. (4. . . i. If all these numbers are ≤ 0. and the shadow-price vector λ (z k ) = c (z k )[D(z k )]−1 . Definition: A basic feasible solution z is said to be non-degenerate if I(z) has m elements. j ∈ I(z) . For each j ∈ I(z k ) calculate cj − λ (z k )Aj . Evidently 0 < θ < ∞. because z k is optimal ˆ by Lemma 3. and xj . then we let z ∗ = z(θ0 ) and we are done. If γ k ≤ 0. We will comment on it later.25) (4.26) holds iff λ = λ(z) and then (4. . Compute the vector ˆ k k γ k = (γj1 . stop. sup {c x|x ∈ Ωp } = +∞. . Define z k+1 by . for . Otherwise we repeat the procedure above with z(θ0 ).27) ♦ But since z is non-degenerate. .3.4. j ∈ I(z). . . Lemma 3: Let z be a non-degenerate basic feasible solution. and if not. .27) is the same as (4.25). there is no finite optimum. < jm . . k k k Step 4. Otherwise go to Step 4. γjm ) = [D(z k )]−1 Aj . We call λ(z) the shadow-price vector at z. 4. We make the following simplifying assumption. THE SIMPLEX ALGORITHM 39 Again. for all . λ Aj ≥ cj . Clearly. . Let z 0 be a basic feasible solution obtained from Phase I or by any other means. ♦ At this point we abandon the geometric term “vertex” and how to established LP terminology. we obtain a basic feasible solution. and {Aj |j ∈ I(z)} is linearly independent. Compute θ = min {(zj γj )|j ∈ i(z). j ∈ I(z) are called the non-basic variables at z. .26) (4. (4. stop. Phase II: Step 1. Step 2.e. xj . Iterating on this procedure. Every basic feasible solution is non-degenerate. and if not obtains another basic feasible solution with a higher value. because by Lemma 4 below. Phase II starts with a basic feasible solution and determines if it is optimal or not. if {Aj |j ∈ I(z(θ0 ))} is linearly independent. . .A constitute I(z). . We shall discuss Phase II first. Let I(z k ) consist of j1 < j2 < . Otherwise pick any ˆ ∈ I(z k ) such that cˆ − λ (z k )Aj > 0 and go to Step 3.A . . γj > 0}.

Set k = k + 1 and return to Step 2. Hence. j j j j j hence I(z k+1 ) ⊂ (I(z) − {˜ j}) {ˆ . Exercise 2: Prove Corollaries 2 and 3. I(z k+1 ) has m elements. k = 0. Next.40 CHAPTER 4. zj = 0 j First of all. j ∈ I(z) and j = ˆ . Corollary 3: Suppose Phase II terminates at an optimal basic feasible solution z ∗ .29) k γj Aj + θA = Az by definition of ˆ j γk. z(θ) ∈ Ωp for θ ≥ 0. z k+1 is a basic feasible solution with c z k+1 > c z k .28) and (4.24). Then γ(z ∗ ) is an optimal solution of the dual of (4. Finally.30) = c z + θ{cˆ − λ j ˆ (z k )Aj }i .31) ˜ so that it is enough to prove that Aj is independent of {Aj |j ∈ I(z). j = ˜ But if this is not the j}.30) that c zk+1 − c zk = θ{cˆ − γ (z k )Aj } . Remark 1: By the non-degeneracy assumption.28) By Lemma 5 below. we see case. sup {c x|x ∈ Ωp } = ∞. ♦ ˆ Corollary 2: In a finite number of steps Phase II will obtain an optimal solution or will determine that sup{c x|x ∈ Ωp } = ∞. j ∈ I(z) zj (θ) = . we must have γ˜ j from (4. so that c z(θ) → ∞ as θ → ∞. j ∈ I(z) θ . Proof: Define z(θ) by  k  zj − θγj . since γ k ≤ 0 it follows that z(θ) ≥ 0 for θ ≥ 0. j Lemma 5: z k+1 is a basic feasible solution and c z k+1 > c z k .29). so that in (4. ♦ ˆ But from step 2 {cˆ − λ (z k )Aj } > 0. LINEAR PROGRAMMING  k k  zj − θγj . j which is positive from Step 2.28) we see that z˜ = 0. Lemma 4: If γ k ≤ 0. k+1 k k k Proof: Let ˜ ∈ I(z k ) be such that γ˜ > 0 and z˜ = θγ˜ . k k c z(θ) = c z − θc (z )γ + θcˆ j ˆ = c z + θ{cˆ − c (z k )[D(z k )]−1 Aj } j (4. j} (4. We see then that D(z k+1 ) is obtained from D(z k ) by replacing the column Aj by . Finally if we compare (4. giving a contradiction. j k+1 zj (4. Then from (4. Az(θ) = Az − θ j∈I(z) (4. j =ˆ θ j  .31) we must have equality. j=ˆ j =  k zj = 0 . j=ˆ and j ∈ I(z) .

. . .A ]. . . The reason for this is that I(z k ) is not unique. But then in Step 4 it may turn out that θ = 0 so that k+1 = z k . jk . + ain xn + yi = bi . . .32) involving the variables x and y: m Maximize − i=1 yi (4.  .A . . 1 ≤ j ≤ n . Remark 2: The similarity between Step 2 of Phase II and Step 2 of the algorithm in 3. et al. Phase I: Step I. . y i ≥ 0 . it is easy to check that E −1 = M [D(z k )]−1 where  1 1 . .A . In this way the non-degeneracy alternatives for I(z assumption can be eliminated.A . . jk+1 . .A . . . THE SIMPLEX ALGORITHM 41 . .A . j . 1 ↑ ith column Then [D(z k+1 )]−1 = P M [D(z k )]−1 . For details see (Canon. .3.3. ji+1 . ji−1 . jm ] and if .A . jm . .3. matrix E = [Aj1 . This net increase is due to the direct increase cj minus the indirect decrease λ (z k )Aj due to the compensating changes in the basic variables necessary to maintain feasibility. . . ji+1 . .24) by the LP (4. jm ].   1 M =      −γ j1 γ˜ j           1 γ˜ j −γ jm γ˜ j 1 . .A .4 is striking. ˜. ji−1 . . . .24) we can guarantee that the matrix A ¯ has rank n. . . by multiplying some of the equality constraints in (4. if ˆ Aj m = =1 γj Aj .A . . Remark 3: By eliminating any dependent equations in (4. . . Let E be the . 1 ≤ i ≤ m .A ..32) subject to ail x1 + . . . . Then [D(z k+1 )]−1 = P E −1 where the matrix P permutes the columns of D(z k+1 ) such that E = D(z k+1 )P .A . Next. . . jk < ˆ < jk+1 then D(z k+1 ) = [Aj1 . We can apply ¯ such that I(z ¯ Phase II using I(z k ) instead of I(z k ). ji+1 . . . Hence at any degenerate basic feasible solution z k we can always find I(z k ) ⊃ I(z k ) ¯ k ) has m elements and {Aj |j ∈ I(z k )} is a linearly independent set. .4 is (∂f0 /∂uj )(xk ) − (λk ) (∂f /∂uj )(xk ). 1 ≤ i ≤ m . so that these inverses can be easily computed. .24) by −1 if necessary.A j . . [1970]). j . . The basic variables at z k correspond to the variables wk and non-basic variables correspond to uk . . .A . xj ≥ 0 .4. j. . . The analogous quantity in 3. . we can assume that b ≥ 0. More precisely if D(z k ) = [Aj1 . . so that we have to try various ¯ z ¯ k ) until we find one for which θ > 0. . For each j ∈ I(z k ) we can interpret the number cj − λ (z k )Aj to be the net increase in the objective value per unit increase in the jth component of z k . ˆ.A . . ji−1 . . . . ˆ the column Aj . . Replace the LP (4. We now describe how to obtain an initial basic feasible solution. . ˆ. . ..

. . LINEAR PROGRAMMING Step 2.24) has no feasible solution. . If = 0. . Apply phase II to (4.4 LP Theory of a Firm in a Competitive Economy 4. yk ) with y ≥ 0. . y∗ x∗ y∗ 4. . say n.32). different combinations of inputs can be used to produce the same combination of outputs. n ]. . . If = 0. . Furthermore. rm ) with r ≥ 0. Step 3. The firm’s outputs themselves may be raw materials (if it is a mining company) or intermediate products (if it is a steel mill) or capital goods (if it manufactures lathes) or finished goods (if it makes shirts or bakes cookies) which go directly to the consumer. A be the m × n matrix [A1 . . i. and by an output vector we mean any k-dimensional vector y = (y1 . amj ) and B j = (bij . Precisely. We now make three basic assumptions about the firm. office equipment. . . . Phase II must terminate in an optimum based feasible solution (x∗ . it may be considered an output in a “closed. . . .32) starting with this solution. Note that (x0 . each activity can be conducted at any non-negative intensity or level. . This substitutability among inputs is a fundamental concept in economics. . . y 0 ) = (0. .” dynamic Malthusian framework where the increase in labor is a function of the output. chemicals. this transformation can be conducted in different ways.42 Go to step 2. CHAPTER 4. (i) The transformation of inputs into outputs is organized into a finite number. then it combines (transforms) the input vector (a1j xj . We formalize it by specifying which transformation possibilities are available to the firm. the jth activity is characterized completely by two vectors Aj = (a1j . or textiles. . Go to Step 3. .1 Activity analysis of the firm.A . . p. . Labor is not usually considered an output since slavery is not practiced. By an input vector we mean any m-dimensional vector r = (r1 . a2j . intermediate products such as steel. n ] and B be the k × n matrix B = [B 1 .. however. . 3 . We think of a firm as a system which transforms input into outputs.4. . . by Exercise 3 below.24). bkj xj ) = xj B j . finally various kinds of labor services. or computers.) Within the firm.B It is more accurate to think of the services of capital goods rather than these goods themselves as inputs. of processes or activities. Inputs are usually classified into raw materials such as iron ore. (ii) Each activity combines the k inputs in fixed proportions into the m outputs in fixed proportions. . or factory buildings. or raw cotton. . . y ∗ ) m since the value of the objective function in (4. . It is these services which are consumed in the transformation into outputs. . There are m kinds of inputs and k kinds of outputs. 141. (See the von Neumann model in (Nikaido [1968]). Let . since human labor can do the same job as some machines and machines can replace other kinds of machines. Exercise 3: Show that (4. . .e. b) is a basic feasible solution of (4. bkj ) so that if it is conducted at a level xj ≥ 0.32) lies between − i=1 bi and 0. . . is a basic feasible solution for (4. . capital goods 3 such as machines of various kinds. amj xj ) = xj Aj into the output vector (b1j xj . etc. .24) has a feasible solution iff y ∗ = 0. crude oil. (4.

. In the long run the supplies of the first inputs are also variable and the firm can change these ∗ supplies from r1 . . x∗ . r ∗ are in equilibrium iff for all fixed ∆ ∈ Rm . + 1 ≤ i ≤ m .35): Maximize c x − (q ∗ ) ∆ subject to Ax ≤ r ∗ + ∆ . . . . .4.34) Proof: Let c = B p∗ . . + xn B n . faces the following decision problem: m Maximize p y − j= +1 q j rj (4.35) . .3 Long-term equilibrium behavior. and perhaps some raw materials.4. xn . Theorem 1: p∗ . then it transforms the input vector x1 A1 + . . . p∗ .4. q ∗ . . M (∆) ≤ M (0) where M (∆) is the maximum value of the LP (4.4. . . . . . . . Which of these possible transformations will actually take place depends upon their relative profitability and availability of inputs. whereas the pi . r ∗+1 . . Let us suppose that these ∗ inputs are 1. . q ∗ . rm ) are in (long-term) equilibrium if the firm has no profit incentive to change r ∗ under the prices (p∗ .34): Minimize (r ∗ ) q subject to A q ≥ B p∗ q≥0. . ai1 x1 + . The decision variables are the activity levels x1 . Whether the firm will actually change these inputs will depend upon whether it is profitable to do so. 1 ≤ i ≤ . We assume that the firm is operating in a competitive economy which means that the unit prices p = (p1 . . qj are prices determined by the whole economy. (4. . ri ≥ 0 . pk ) of the outputs. q. say. . . . . . With these assumptions we know all the transformations technically possible as soon as we specify the matrices A and B. . 1 ≤ j ≤ n. 4. + xn An into the output vector x1 B 1 + . x≥0. . and q = (q1 . r ∗ by buying or selling these inputs at the market price q1 . . xj ≥ 0. . . rm . and in turn this depends upon the prices p. .33) has an optimal solution. and the short-term input supplies r +1 . + ain xn ≤ ri . . the ri are the fixed shortterm supplies. . the firm cannot change the amount available to it of some of the inputs such as capital equipment. q ∗ ). We say that the prices (p∗ . r ∗ are in equilibrium if and only if q ∗ is an optimal solution of (4. + ain xn ≤ ri . (4. By definition. Under realistic conditions (4. q . . . ∗ ai1 x1 + . ∗ The coefficients of B and A are the fixed technical coefficients of the firm. x∗ . . LP THEORY OF A FIRM IN A COMPETITIVE ECONOMY 43 (iii) If the firm conducts all the activities simultaneously with the jth activity at level xj ≥ 0. We study this next. r ∗ . . . which the firm ac∗ cepts as given. 1 ≤ j ≤ n. Then the manager of the firm. . . if he is maximizing the firm’s profits. . whereas the supply of the remaining inputs can be varied.33) subject to y = Bx. n 1 4. . . In the short-term. .2 Short-term behavior. and they are available in the amounts r1 . 2. q ∗ ) and a set of input supplies ∗ ∗ r ∗ = (r1 . certain kinds of labor. . rm . + 1 ≤ i ≤ m . . . qm ) of the inputs is fixed. . .

On the other hand if the jth activity is operated at level xj = 1. cj = p∗ b1j + p∗ b2j + . Also if m cj < i=1 ∗ qi aij .38) This relation between p∗ . at the optimum. q ∗ . Recall that c = B p∗ . r ∗ are in long-term equilibrium iff q ∗ is an optimum solution to the dual (namely (4. + p∗ bkj . i. at the optimum activity levels. . if x∗ is the optimum activity levels for (4. if an equilibrium the optimum ith input supply ri is greater than the optimum demand for the ith input. (4. Finally.e.44 CHAPTER 4.34). Hence. By weak duality if x is feasible for (4. From (4. r ∗ has a very nice economic interpretation. .16).35). Thus. If the ith input is valued at m a∗ . again from (4. in equilibrium.15) we see that if x= astj > 0 then m cj = i=1 ∗ qi aij . if the revenue of an activity is less than its input cost. total revenues = total cost of input supplies. i. M (0) = (r ∗ ) q ∗ .34)) of (4.35) and q is feasible for (4.. c x − (q ∗ ) ∆ ≤ (q ∗ ) (r ∗ + ∆) − (q ∗ ) ∆ = (q ∗ ) r ∗ ♦ (4. the revenue of an activity operated at a positive level = input cost of that activity. cj is the revenue per unit level operation of the jth activity so that c x is the revenue when the n activities are operated at levels x. then the input cost of operating at xj = 1. x∗ j . But from (4. c x − (q ∗ ) ∆ ≤ q (r ∗ = ∆) − (q ∗ ) ∆ .. (4. (4.e.39) i.34) becomes the dual of (4. q ∗ .38) then the output revenue is c x∗ and the input cost is (q ∗ ) Ax∗ . we can say even more.. i. Hence p∗ . r ∗ are in equilibrium iff c x − (q ∗ ) ∆ ≤ M (0) = (r ∗ ) q ∗ . (4.e.15). (q ∗ ) (Ax∗ − r ∗ ) = 0 so that c x∗ = (q ∗ ) r ∗ . for q = q ∗ . so that the input cost of operating the n activities at levels x is (A q ∗ ) = (q ∗ ) Ax. it uses an amount aij of the ith input. In fact. then = 0. q ∗ . and.38): Maximize c x subject to Ax ≤ r ∗ x≥0. LINEAR PROGRAMMING For ∆ = 0.. is i i=1 qi aij . in particular.37) Remark 1: We have shown that (p∗ . then at the optimum it is ∗ operated at zero level.35) so that by the strong duality theorem.e. Now bij is the amount of the ith output produced by operating the 1 2 k jth activity at a unit level xj = 1.36) whenever x is feasible for (4.

. Let us suppose that there are a total of h commodities in the economy including raw materials. LP THEORY OF A FIRM IN A COMPETITIVE ECONOMY n ∗ ri > j=1 45 aij x∗ . . Now suppose that at time T the prevailing prices of the h commodities are λ = (λ1 . intermediate and capital goods. Next. which we mention briefly. the jth consumer owns the vector of commodiJ ties ω(j) ≥ 0. Unfortunately we cannot present the details here. The profit-maximizing behavior of the firm presented above is one of the two fundamental building blocks in the equilibrium theory of a competitive. . . . from our previous analysis we know that the manager of the ith firm will plan to buy input supplies r(i) ≥ 0. . . Often engineering design problems can be formulated as LPs of the form (4. This interpretation has wide applicability.33). Of course. the equilibrium price of an input which is in excess supply must be zero. capitalist economy. and j=1 ω(j) = ω. which means that the ownership of ω is divided among the various consumers j = 1. . ω is that portion of the outputs produced prior to T which have not been consumed up to T . . J. . Remark 2: Returning to the short-term decision problem (4.4. r ∗ + ∆ .33) when the amounts of the inputs in fixed supply are r1 + ∆1 .e. We think of the economy as a feedback process involving firms and consumers.10) or (4. r(i) ∈ Rh . . . Let us denote by M (∆1 . . .22) that it is always profitable to increase the ith input by buying some additional amount at price qi if λ∗ > qi .. . Thus λ∗ can be interpreted as the firm’s internal valuation of i i the ith input or the firm’s imputed or shadow price of the ith input. i. . i 4. where some of the coefficients bi are design parameters. . λh ) ≥ 0. λ∗ ) is an optimum solution of the dual of (4. We observe the economy starting at time T . At this time there exists within the economy an inventory of the various commodities which we can represent by a vector ω = (ω1 . q .4 Long-term equilibrium of a competitive. . . .4. for an individual firm most of the inputs and most of the outputs will be zero. suppose that (λ∗ . λ∗+1 . B) characterizing a firm we can suppose that all the h commodities are possible inputs and all the h commodities are possible outputs.19). . . capitalist economy. . By adding zero rows to the matrices (A.4. Suppose that the market m 1 prices of inputs 1. suppose that the managers of the various firms assume that the prices λ are not going to change for a long period of time. . . . . We are assuming that this is a capitalist economy. Suppose the resulting optimal dual variables are λ∗ . . . and carry out the i optimization problem. . More precisely. ωh ) ≥ 0. and it is worth decreasing this parameter if the reduction in total cost per unit decrease is i greater than λ∗ . We shall limit ourselves to a rough sketch. . . ∆ ) the optimum value of ∗ (4. λ∗ . such . the sole purpose for making this change is that we no longer need to distinguish between prices of inputs and prices of outputs. j ∗ then qi = 0. labor. . .33). then we see (assuming i differentiability) that it is worth increasing b∗ if the unit cost of increasing this parameter is less i than λ∗ . The design procedure is to fix these parameters at some nominal value b∗ . and finished products. in other words it must be a free good. we can see from (4. Then if (∂M/∂∆i )|∆=0 exists. Then. and conversely it is profitable to sell some i of the ith input at price qi if λ∗ < qi . 
We are including in ω(j) the amount of his labor services which consumer j is willing to sell. . are q1 . .

then at that price the production plans of all the firms and the buying plan of all the consumers can be realized. We also recall that (see (4. where I is the total number of firms. 1 ≤ i ≤ I . λ). λ).40). Chapter V). . We know that r(i) and y(i) depend on λ. . λ) . (4. Thus. S(λE ) = D(λE ). (4. The second point is that there is no reason to expect that S(λ) ≥ 0.41) We note two important facts. . (4. Similarly. and Solow [1958]. and he will plan to produce an optimum amount. 2. say y(i). Here also d(j) will depend on λ. . i.5 Miscellaneous Comments . and the consumers who collectively own ω. .e. For a simple treatment the reader is referred to (Dorfman. λ) ≥ 0 . Now we come to the second building block of equilibrium theory. If we add up the buying plans of all the consumers we obtain the total demand J D(λ) = j=1 d(j.41) we immediately conclude that J λ S(λ) = j=1 λ ω(j) . r(i)) is in long term equilibrium. so we write d(j. First of all. Samuelson. Chapter 13). The value of the jth consumer’s possessions is λ ω(j). where J I i S(λ) = j=1 ω(j) + i=1 y(i. . The theory assumes that he will plan to buy a set of commodities d(j) = (d1 (j). Here i = 1. the ith manager can sell his planned output y(i) either as input supplies to other firms or to the consumers. dh (j)) ≥ 0 so as to maximize his satisfaction subject to the constraint λ d(j) = λ ω(j).40) Now the ith manager can buy r(i) from only two sources: outputs from other firms. LINEAR PROGRAMMING that (λ.38)) λ r(i.43) which also satisfies J λ D(λ) = j=1 λ ω(j) . Unfortunately we must stop at this point since we cannot proceed further without introducing some more convex analysis and the fixed point theorem.. because if such an equilibrium price λE exists.46 CHAPTER 4. λ) . y(i. For a much more general mathematical treatment see (Nikaido [1968]. . (4. λ) = λ y(i. (4. so that we explicitly write r(i. (4. the net supply offered for sale to consumers is S(λ). 4. λ).44) The most basic question of equilibrium theory is to determine conditions under which there exists a price vector λE such that the economy is in equilibrium. λ) − i=1 r(i. from (4. I. .42) that is the value of the supply offered to consumers is equal to the value of the commodities (and labor) which they own.

x ≤ 0 . given the capabilities of modern computers. . an elementary modification of the Simplex algorithm can be given to obtain a “local” optimal decision.1 Some mathematical tricks. Exercise 1: Show that (4. In our LP framework this situation can be formulated as follows. that even if the ci are not concave. the interpretation of “equivalent” is purposely left ambiguous. f0 (x∗ )) is optimal for (4. 1 ≤ i ≤ k . so that we have the decision problem. . However. . . . (4. y ∗ ) = (x∗ . k objective functions (c1 ) x. . the Simplex method (together with its variants) is an extremely powerful technique for solving LPs involving thousands of variables.45) This is not a LP since f0 is not linear. Maximize y subject to Ax ≤ b. LP is today the single most important optimization technique. (ck ) x}. Maximize f0 (x) subject to Ax ≤ b. and such that there is no equivalent LP.47). the following exercise shows how to transform (4.3. There may be a number of plausible objective functions. there are.45) iff (x∗ . in the sense that x∗ is optimal for (4.47) subject to Ax ≤ b.4. .45) is equivalent to (4. .5.46) below. However. (c2 ) x. x ≥ 0.47): n (4. This is because many decision problems can be adequately formulated as LPs.46). In the next exercise.2 Scope of linear programming. where ci : R → R are concave. Exercise 1 will also indicate how to do Exercise 2. x ≥ 0 . and. piecewise-linear functions of the kind shown in Figure 4. x ≤ 0 y ≤ (ci ) x . See (Miller [1963]). It turns out however.5. 4.45) into an equivalent LP. It is often the case in practical decision problems that the objective is not well-defined.5. To obtain a feeling for the scope of LP we refer the reader to the book by one of the originators of LP (Dantzig [1963]). Exercise 3: Construct an example of the kind (4. It is reasonable then to define a single objective function f0 (x) by f0 (x) = minimum {(c1 ) x. Exercise 2: Obtain an equivalent LP for (4. where the ci are piecewise linear (but not concave). MISCELLANEOUS COMMENTS 47 4. The above-given assumption of the concavity of the ci is crucial. . (ck ) x. say.46) Maximize j=1 ci (xi ) (4. The constraints are given as usual by Ax ≤ b.

xi Figure 4. . LINEAR PROGRAMMING ci (xi ) . . .48 CHAPTER 4.3: A function of the form used in Exercise 2.

however. . . The general NP is a decision problem of the form: Maximize f0 (x) subject to (x) ≤ 0 .1). we will not do this. x∗ ∈ Ω is said to be an optimal decision or optimal solution if f0 (x∗ ) ≥ f0 (x) for x ∈ Ω.1 Qualitative Theory of Nonlinear Programming 5. Section 3 is devoted to the important special case of quadratic programming.1. is equivalent to (5. m. 1. i = 1.Chapter 5 OPTIMIZATION OVER SETS DEFINED BY INEQUALITY CONSTRAINTS: NONLINEAR PROGRAMMING In many decision-making situations the assumption of linearity of the constraint inequalities in LP is quite restrictive. (5. . From the discussion in 4. i = 0. . Exercise 1: Show that (5. Two applications are given. with variables y ∈ R. fi : Rn → R. Section 2 deals with Duality theory for the case where appropriate convexity conditions are satisfied. As in Chapter 4.2).1).2 it is clear that equality constraints and sign constraints on some of the components of x can all be transformed into the form (5. x ∈ Rn is said to be a feasible solution if it satisfies the constraints of (5.1 The problem and elementary results. The last section is devoted to computational considerations. . 5. m. The next exercise shows that we could restrict ourselves to objective functions which are linear. In Section 1 we present the general nonlinear programming problem (NP) and prove the Kuhn-Tucker theorem. . are differentiable functions.1. . and Ω ⊂ Rn is the subset of all feasible solutions. x ∈ Rn .1): 49 . The linearity of the objective function is not restrictive as shown in the first exercise below. .1) where x ∈ Rn .

. Lemma 1: Suppose x∗ ∈ Ω is an optimum decision for (5.1 and 4. Proof: Let xk ∈ Ω. . x∗ ) . m} be such that fi (x) = 0 for ı ∈ I(x). . Show that there exist subsequences {xmkm .50 CHAPTER 5. (ii) Let C(Ω.3) .e.) Definition: (i) Let x ∈ Ω. .) In the definition of C(Ω. then θh ∈ C(Ω. be such that (5. x)}. εmk > 0}∞ be such that xmk → x and (1/εmk )(xmk − x) → hm as k → ∞. . 2. Let K(Ω. εmkm }∞ such that m=1 xmkm → x and (1/εmkm )(xmkm − x) → h as m → ∞. Suppose k=1 that hm → h as m → ∞. The following elementary result is more interesting in this light and should be compared with (2. . x) is called the tangent cone of Ω at x. (ii) Show that C(Ω.1 and Exercise 1 of 4. . and y − f0 (x) ≤ 0 . x) = {h|h is an admissible direction for Ω at x}. and let I(x) ⊂ {1. The argument parallels very closely that developed in Exercise 1 of 4. Then f0x (x∗ )h ≤ 0 for all h ∈ C(Ω.2) Returning to problem (5. . . x) we made no use of the particular functional description of Ω. fi (x) < 0 for i ∈ I(x). with εk > 0 for all k such that k lim x = x . The basic idea is to linearize the functions fi in a neighborhood of an optimal decision x∗ . A vector h ∈ Rn is said to be an admissible direction for Ω at x if there exists a sequence xk . (5. . Two more properties are stated below. 2. Definition: Let x be a feasible solution. .1. . . x) = {x + h|h ∈ C(Ω.1).) If we take xk = x and εk = 1 for all k. k = 1. (See Figures 5. . C(Ω. 3. if h ∈ C(Ω. . . εk > 0. . let hm and {xmk . we are interested in obtaining conditions which any optimal decision must satisfy. k = 1. x) is a cone. k→∞ k→∞ lim 1 (xk εk − x) = h .1 and 5. . x) and θ ≥ 0. we see that 0 ∈ C(Ω.18) in Chapter 2 and Exercise 1 of 4. Exercise 2: (i) Show that C(Ω. (The set I(x) is called the set of active constraints at x.2. . . k = 1. x) so that the tangent cone is always nonempty. in Ω and a sequence of numbers εk . 2.2 and compare them with Figures 4.1). i. (Hint for (ii): For m = 1. x). NONLINEAR PROGRAMMING Maximize y subject to fi (x) ≤ 0..2. 1 ≤ i ≤ m. x) is a closed subset of Rn .

{x|f1 (x) = 0} Figure 5. (5. Since xk ∈ Ω.6) εk + o(|xk −x∗ |) εk . ♦ k→∞ lim o(|xk −x∗ |) |xk −x∗ | k→∞ lim |xk −x∗ | εk . we have f0 (xk ) ≤ f0 (x∗ ).4) and (5. by Taylor’s theorem we have f0 (xk ) = f0 (x∗ + (xk − x∗ )) = f0 (x∗ ) + f0x (x∗ )(xk − x∗ ) + o(|xk − x∗ |) . 1 (xk εk k→∞ k→∞ lim − x∗ ) = h .5) k→∞ Since f0 is differentiable. we can see that 0≥ = k→∞ lim f0x (x∗ ) (xk −x∗ ) εk + f0x (x∗ )h.5). so that 0 ≥ f0x (x∗ ) (x k −x∗ ) (5. (5. and x∗ is optimal.4) implies lim 1 εk |xk − x∗ | = |h| .1.5. QUALITATIVE THEORY OF NONLINEAR PROGRAMMING direction of increasing payoff π(k) = {x|f0 (x) = k} Q x∗ 51 {x|f3 (x) = 0} P {x|f2 (x) = 0} Ω R .4) Note that in particular (5. Taking limits as k → ∞.1: Ω = P QR k ∗ lim x = x . using (5.

Let (x∗ . . . 2. fi (x∗ ) = 0.7) Proof: Let h ∈ Rn and xk ∈ Ω. so that fi (xk ) ≤ fi (x∗ ). x∗ ) in terms of the derivatives of the functions fi . x∗ ) - - - - 0 - C(Ω. Exercise 3: Let x ∈ R2 . . fi (xk ) ≤ 0. and if i ∈ I(x∗ ). Since xk ∈ Ω. x2 ) = (x1 − 1)3 + x2 .7) cannot be reversed. Then C(Ω. x∗ ) ⊂ {h|fix (x∗ )h ≤ 0 for all i ∈ I(x∗ )} .2: C(Ω. 0). Then we can apply Farkas’ Lemma just as in Exercise 1 of 4. k = 1. Unfortunately.52 CHAPTER 5. x∗ ) is the tangent cone of Ω at x∗ .2. Lemma 2: Let x∗ ∈ Ω. Following the proof of Lemma 1 we can conclude that 0 ≥ fix (x∗ )h. x∗ ). by Taylor’s theorem we have fi (xk ) = fi (x∗ ) + fix (x∗ )(xk − x∗ ) + o(|xk − x∗ |) . NONLINEAR PROGRAMMING x∗ K(Ω. x2 ) = −x2 . 2}. and f2 (x1 . The main reason for this is that the set {fix (x∗ )|i ∈ I(x∗ )} is not in general linearly independent. Show that 1 2 - - - . satisfy (5.4). Since fi is differentiable. . (5. ♦ Lemma 2 gives us a partial characterization of C(Ω. in general the inclusion sign in (5. εk > 0. f1 (x1 . The basic problem that remains is to characterize the set C(Ω. Then I(x∗ ) = {1. x∗ ) Figure 5. x∗ ) = (1.

5.1. QUALITATIVE THEORY OF NONLINEAR PROGRAMMING
C(Ω, x∗ ) = {h|fix (x∗ )h ≤ 0 , i = 1, 2, }. (Note that {f1x (x∗ ), f2x (x∗ )} is not a linearly independent set; see Lemma 4 below.)

53

5.1.2 Kuhn-Tucker Theorem.
Definition: Let x∗ ∈ Ω. We say that the constraint qualification (CQ) is satisfied at x∗ if C(Ω, x) = {h|fix (x∗ )h ≤ 0 for all i ∈ I(x∗ )}, and we say that CQ is satisfied if CQ is satisfied at all x ∈ Ω. (Note that by Lemma 2 C(Ω, x) is always a subset of the right-hand side.) Compare the next result with Exercise 2 of 4.2. Theorem 1: (Kuhn and Tucker [1951]) Let x∗ be an optimum solution of (5.1), and suppose that CQ is satisfied at x∗ . Then there exist λ∗ ≥ 0, for i ∈ I(x∗ ), such that i f0x (x∗ ) =
i∈I(x∗ )

λ∗ fix (x∗ ) i

(5.8)

Proof: By Lemma 1 and the definition of CQ it follows that f0x (x∗ )h ≤ 0 whenever fix (x∗ )h ≤ 0 for all i ∈ I(x∗ ). By the Farkas’ Lemma of 4.2.1 it follows that there exist λ∗ ≥ 0 for i ∈ I(x∗ ) i such that (5.8) holds. ♦ In the original formulation of the decision problem we often have equality constraints of the form rj (x) = 0, which get replaced by rj (x) ≤ 0, −rj (x) ≤ 0 to give the form (5.1). It is convenient in application to separate the equality constraints from the rest. Theorem 1 can then be expressed as Theorem 2.

Theorem 2: Consider the problem (5.9). Maximize f0 (x) subject to fi (x) ≤ 0 , i = 1, . . . , m, rj (x) = 0 , j = 1, . . . , k .

(5.9)

Let x∗ be an optimum decision and suppose that CQ is satisfied at x∗ . Then there exist λ∗ ≥ 0, i = i 1, . . . , m, and µ∗ , j = 1, . . . , k such that j
m

f0x (x∗ ) =
i=1

λ∗ fix (x∗ ) + i

k j=1

µ∗ rjx (x∗ ) , j

(5.10)

and λ∗ = 0 whenever fi (x∗ ) < 0 . i Exercise 4: Prove Theorem 2. (5.11)

54

CHAPTER 5. NONLINEAR PROGRAMMING

An alternative form of Theorem 1 will prove useful for computational purposes (see Section 4). Theorem 3: Consider (5.9), and suppose that CQ is satisfied at an optimal solution x∗ . Define ψ : Rn → R by ψ(h) = max {−f0x (x∗ )h, f1 (x∗ ) + f1x (x∗ )h, . . . , fm (x∗ ) + fmx (x∗ )h} , and consider the decision problem Minimize ψ(h) subject to −ψ(h) − f0x (x∗ )h ≤ 0, −ψ(h) + fi (x∗ ) + fix (x∗ )h ≤ 0 , 1 ≤ i ≤ m −1 ≤ hi ≤ 1 , i = 1, . . . , n . Then h = 0 is an optimal solution of (5.12). Exercise 5: Prove Theorem 3. (Note that by Exercise 1 of 4.5, (5.12) can be transformed into a LP.) Remark: For problem (5.9) define the Lagrangian function L:
m k

(5.12)

(x1 , . . . , xn ; λ1 , . . . , λm ; µ1 , . . . , µk ) → f0 (x) −
i=1

λi fi (x) −
j=1

µj rj (x).

Then Theorem 2 is equivalent to the following statement: if CQ is satisfied and x∗ is optimal, then there exist λ∗ ≥ 0 and µ∗ such that Lx (x∗ , λ∗ , µ∗ ) = 0 and L(x∗ , λ∗ , µ∗ ) ≤ L(x∗ , λ, µ) for all λ ≥ 0, µ. There is a very important special case when the necessary conditions of Theorem 1 are also sufficient. But first we need some elementary properties of convex functions which are stated as an exercise. Some additional properties which we will use later are also collected here. Recall the definition of convex and concave functions in 4.2.3. Exercise 6: Let X ⊂ Rn be convex. Let h : X → R be a differentiable function. Then (i) h is convex iff h(y) ≥ h(x) + hx (x)(y − x) for all x, y, in X, (ii) h is concave iff h(y) ≤ h(x) + hx (x)(y − x) for all x, y in X, (iii) h is concave and convex iff h is affine, i.e. h(x) ≡ α + b x for some fixed α ∈ R, b ∈ Rn . Suppose that h is twice differentiable. Then (iv) h is convex iff hxx (x) is positive semidefinite for all x in X, (v) h is concave iff hxx (x) is negative semidefinite for all x in X, (vi) h is convex and concave iff hxx (x) ≡ 0. Theorem 4: (Sufficient condition) In (5.1) suppose that f0 is concave and fi is convex for i = 1, . . . , m. Then (i) Ω is a convex subset of Rn , and (ii) if there exist x∗ ∈ Ω, λ∗ ≥ 0, i ∈ I(x∗ ), satisfying (5.8), then x∗ is an optimal solution of i (5.1). Proof: (i) Let y, z be in Ω so that fi (y) ≤ 0, fi (z) ≤ 0 for i = 1, . . . , m. Let 0 ≤ θ ≤ 1. Since fi is convex we have

5.1. QUALITATIVE THEORY OF NONLINEAR PROGRAMMING
fi (θy + (1 − θ)z) ≤ θfi (y) + (1 − θ)fi (z) ≤ 0 , 1 ≤ i ≤ m, so that (θy + (1 − θ)z) ∈ Ω, hence Ω is convex. (ii) Let x ∈ Ω be arbitrary. Since f0 is concave, by Exercise 6 we have f0 (x) ≤ f0 (x∗ ) + f0x (x∗ )(x − x∗ ) , so that by (5.8) f0 (x) ≤ f0 (x∗ ) +
i∈I(x∗ )

55

λ∗ fix (x∗ )(x − x∗ ) . i

(5.13)

Next, fi is convex so that again by Exercise 6, fi (x) ≥ fi (x∗ ) + fix (x∗ )(x − x∗ ) ; but fi (x) ≤ 0, and fi (x∗ ) = 0 for i ∈ I(x∗ ), so that fix (x∗ )(x − x∗ ) ≤ 0 for i ∈ I(x∗ ) . (5.14) Combining (5.14) with the fact that λ∗ ≥ 0, we conclude from (5.13) that f0 (x) ≤ f0 (x∗ ), so that i x∗ is optimal. ♦ Exercise 7: Under the hypothesis of Theorem 4, show that the subset Ω∗ of Ω, consisting of all the optimal solutions of (5.1), is a convex set. Exercise 8: A function h : X → R defined on a convex set X ⊂ Rn is said to be strictly convex if h(θy + (1 − θ)z) < θh(y) + (1 − θ)h(z) whenever 0 < θ < 1 and y, z are in X with y = z. h is said to be strictly concave if −h is strictly convex. Under the hypothesis of Theorem 4, show that an optimal solution to (5.1) is unique (if it exists) if either f0 is strictly concave or if the fi , 1 ≤ i ≤ m, are strictly convex. (Hint: Show that in (5.13) we have strict inequality if x = x∗ .)

5.1.3 Sufficient conditions for CQ.
As stated, it is usually impractical to verify if CQ is satisfied for a particular problem. In this subsection we give two conditions which guarantee CQ. These conditions can often be verified in practice. Recall that a function g : Rn → R is said to be affine if g(x) ≡ α + b x for some fixed α ∈ R and b ∈ Rn . We adopt the formulation (5.1) so that Ω = {x ∈ Rn |fi (x) ≤ 0 , 1 ≤ i ≤ m} . Lemma 3: Suppose x∗ ∈ Ω and suppose there exists h∗ ∈ Rn such that for each i ∈ I(x∗ ), either fix (x∗ )h∗ < 0, or fix (x∗ )h∗ = 0 and fi is affine. Then CQ is satisfied at x∗ . Proof: Let h ∈ Rn be such that fix (x∗ )h ≤ 0 for i ∈ I(x∗ ). Let δ > 0. We will first show that (h + δh∗ ) ∈ C(Ω, x∗ ). To this end let εk > 0, k = 1, 2, . . . , be a sequence converging to 0 and set xk = x∗ + εk (h + δh∗ ). Clearly xk converges to x∗ , and (1/εk )(xk − x∗ ) converges to (h + δh∗ ). Also for i ∈ I(x∗ ), if fix (x∗ )h < 0, then fi (xk ) = fi (x∗ ) + εk fix (x∗ )(h + δh∗ ) + o(εk |h + δh∗ |) ≤ δεk fix (x∗ )h∗ + o(εk |h + δh∗ |) < 0 for sufficiently large k , whereas for i ∈ I(x∗ ), if fi is affine, then

and finally xk = (sk . an open set V ⊂ Rn containing x∗ = (w∗ . u∗ ). Let δ > 0. NONLINEAR PROGRAMMING fi (xk ) = fi (x∗ ) + εk fix (x∗ )(h + δh∗ ) ≤ 0 for all k . and (w. x∗ ). Thus. u). for i ∈ I(x∗ ) we have fi (x∗ ) < 0. or f (ˆ) ≤ 0 and f is affine. so wk = g(uk ) converges to w∗ = g(u∗ ). and since C(Ω. and so by definition (h + δh∗ ) ∈ C(Ω. Finally. x∗ ). consist of p elements. h∗ as h = (ξ. We will show that (h + δh∗ ) ∈ C(Ω. be any sequence converging to 0. Since δ > 0 can be arbitrarily small. and set uk = u∗ + εk (η + δη ∗ ). it follows that (1/εk )(xk − x∗ ) converges to (gu (u∗ )(η + δη ∗ ). η ∗ ) corresponding to the partition of x = (w.56 CHAPTER 5. Proof: Let h ∈ Rn be such that fix (x∗ )h ≤ 0 for all i ∈ I(x∗ ). and w = g(u) . so that fi (xk ) < 0 for sufficiently large k. x∗ ). By the Implicit Function Theorem. whereas for i ∈ Jδ . h∗ = (ξ ∗ . But for i ∈ Jδ we have 0 = fix (x∗ )(h + δh∗ ) = fiw (x∗ )(ξ + δξ ∗ ) + fiu (x∗ )(η + δη ∗ ) . Let εk > 0. Clearly Jδ ⊂ J = {i|i ∈ I(x∗ ). so that {fix (x∗ . and {fix (x∗ )|i ∈ I(x∗ ).15) Also. fi (xk ) = fi (x∗ ) + fix (x∗ )(xk − x∗ ) + o(|xk − x∗ |) fi (x∗ ) + εk fix (x∗ )(h + δh∗ ) + o(εk ) + o(|xk − x∗ |). We note that uk converges to u∗ . fix (x∗ )h∗ = 0} is a linearly independent set. η).15) and (5. Since g is differentiable. u) ∈ V iff u ∈ U. (Hint: Show fi (x i i x i that h∗ = x − x∗ satisfies the hypothesis of Lemma 3. x∗ ) is a closed set by Exercise 2. Then CQ is satisfied at x∗ . i ∈ I(x∗ ). where U = {u ∈ Rn−p ||u − u∗ | < ρ}. xk converges to x∗ . u) for u ∈ U so that 0 = fiw (x∗ )gu (u∗ ) + fiu (x∗ ). Next we partition h. uk − u∗ ) = (1/εk )(g(uk ) − g(u∗ ). Thus we have also shown that xk ∈ Ω for sufficiently large k. (5. there exist ρ > 0. . εk (η + δη ∗ )).) ˆ Lemma 4: Suppose x∗ ∈ Ω and suppose there exists h∗ ∈ Rn such that fix (x∗ )h∗ ≤ 0 for i ∈ I(x∗ ). and hence 0 = fiw (x∗ )gu (u∗ )(η + δη ∗ ) + fiu (x∗ )(η + δη ∗ ) . Let Jδ = {i|i ∈ I(x∗ ). . uk ). 2 .16) and recall that {fiw (x∗ )|i ∈ Jδ } is a basis in Rp we can conclude that (ξ + δξ ∗ ) = gu (u∗ )(η + δη ∗ ) so that (1/εk )(xk − x∗ ) converges to (h + hδh∗ ). and a differentiable function g : U → Rp . for i ∈ Jδ . η + δη ∗ ). k = 1. ♦ Exercise 9: Suppose x∗ ∈ Ω and suppose there exists x ∈ Rn such that for each i ∈ I(x∗ ). It remains to show that xk ∈ Ω for sufficiently large k. u∗ )|i ∈ Jδ } is linearly independent. uk ) = 0. fi (xk ) = fi (g(uk ). . 0 = fi (g(u). it follows that h ∈ C(Ω. wk = g(uk ). such that fi (w. . First of all. for i ∈ Jδ . (5. fi x(x∗ )h∗ = 0}. Then CQ is satisfied at x∗ . either ˆ ∗ ) < 0 and f is convex. i ∈ Jδ . fix (x∗ )(h + δh∗ ) = 0}. u) = 0.16) If we compare (5. Now (1/εk )(xk − x∗ ) = (1/εk )(wk − w∗ .

let f = (f1 . .9) and suppose there exists h∗ ∈ Rn such that the set {fix (x∗ )|i ∈ I(x∗ ). . λ ≥ 0. we can conclude that fi (xk ) < 0 for sufficiently large k. Then CQ is satisfied at x∗ . Consider problem (5.2 Duality Theory Duality theory is perhaps the most beautiful part of nonlinear programming. We wish to examine the behavior of the maximum value of (5. Exercise 10: Prove Lemma 5 5. 1 ≤ i ≤ m b x∈X .5. (5.17) where x ∈ Rn . be fixed. X is a given convex subset of Rn and ˆ = (ˆ1 . . . fm ) : Rn → Rm .2 we refer to some of the important generalizations. we will give some geometric insight. ˆm ) is a given vector. Lemma 5: Suppose x∗ is feasible for (5. so that in particular if x∗ is an optimal solution of (5. . rjx (x∗ )h∗ = 0 for 1 ≤ j ≤ k. In 2. (5. x∗ ). So we define b Ω(b) = {x|x ∈ X. and fix (x∗ )h∗ ≤ 0 for i ∈ I(x∗ ). Hence. fi : Rn → R. It may be useful to note in the following discussion that most of the results do not require differentiability of the various functions. . B = {b|Ω(b) = φ}. and C(Ω. Maximize f0 (x) − λ (f (x) − ˆ b) subject to x ∈ X . f (x) ≤ b} = sup{f0 (x)|x ∈ Ω(b)} .2.3 we give some application of duality theory and in 2. . Let λ ∈ Rm . To finish the proof we note that δ > 0 can be made arbitrarily small. DUALITY THEORY 57 and since fi (x∗ ) = 0 whereas fix (x∗ )(h + δh∗ ) < 0.17) which we call the primal problem: Maximize f0 (x) subject to fi (x) ≤ ˆi . so that h ∈ C(Ω. f (x) ≤ b}. f0 : Rn → R is a given concave function. (h + δh∗ ) ∈ C(Ω. k} is linearly independent. . 5. fix (x∗ )h∗ = 0} {rjx (x∗ )|j = 1. . and it has provided many unifying conceptual insights into economics and management science. are given convex functions. However. x∗ ) is closed by Exercise 2.1 and the results in 4.1 should be compared with Theorems 1 and 4 of 4. x∗ ).3. It has resulted in many applications within nonlinear programming. 1 ≤ i ≤ m.18) . . The results in 2.17) then M (ˆ = f0 (ˆ).2. ♦ The next lemma applies to the formulation (5.2. Thus. and M : B→R {+∞} by M (b) = sup{f0 (x)|x ∈ X.2. xk ∈ Ω for sufficiently large k. and even so some of the proofs are relegated to the Appendix at the end of this Chapter since they depend on advanced material.1 Basic results. Its proof is left as an exercise since it is very similar to the proof of Lemma 4.9). .17) as ˆ varies. b b b For convenience. We need to consider b) x the following problem also. in terms of suggesting important computational algorithms. We can only present some of the basic results here. .

20) . x ∈ Ω(ˆ and if λ ≥ 0.3.17) is usually equal to Rn and then.20). Lemma 1 shows that the cost function of the dual problem is convex which is useful information since there are computation techniques which apply to convex cost functions but not to arbitrary nonlinear cost functions. i.18) and the solution of the dual problem (5. Lemma 2 shows that the optimum value of the dual problem is always an upper bound for the optimum value of the primal. of course. we have f0 (x) ≤ M (ˆ ≤ m(λ) for x ∈ Ω(ˆ λ ≥ 0 . and λ ≥ 0. (Here R+ = {λ ∈ Rn |λ ≥ 0}.19) is called the dual problem: Minimize m(λ) subject to λ ≥ 0 . We first give a simple sufficiency condition. f0 (x) ≤ M (ˆ ≤ m∗ ≤ m(λ) . λ) with x ∈ X. f. X.58 and define CHAPTER 5. Remark 1: The set X in (5. b).1 and 2. ≤ sup {f0 (x) − λ (f (x) − b)|x b)} b) ≤ sup {f0 (x) − λ (f (x) − ˆ ∈ X} = m(λ) . and λ ≤ 0 is said to satisfy the optimality conditions if x ˆ ˆ (5. (5.3. Lemma 2: (Weak duality) If x is feasible for (5. b b) f0 (x) ≤ f0 (x) − λ (f (x) − ˆ for x ∈ Ω(ˆ λ ≥ 0 . b). ♦ ˆ = m∗ in The basic problem of Duality Theory is to determine conditions under which M (b) (5. However. it is sometimes possible to include some of the constraints in X in such a way that the calculation of m(λ) by (5. b) b). b)|x Thus. Remark 2: It is sometimes useful to know that Lemmas 1 and 2 below hold without any convexity conditions on f0 ..17). if we take the infimum with respect to λ ≥ 0 in the right-hand b) inequality we get (5. b)|x Problem (5. NONLINEAR PROGRAMMING m(λ) = sub{f0 (x) − λ (f (x) − ˆ ∈ X} . then b). we have λ (f (x) − ˆ ≤ 0. So. Hence f0 (x) ≤ sup {f0 (x)|x ∈ Ω(ˆ = M (ˆ b)} b) ˆ ∈ Ω(ˆ and since Ω(ˆ ⊂ X.2 below.19) Let m∗ = inf {m(λ)|λ ≥ 0}.20). For example see the problems discussed in Sections 2.e. ˆ Definition: A pair (ˆ.) Exercise 1: Prove Lemma 1. n n Lemma 1: m : R+ → R {+∞} is a convex function. there is no reason to separate it out.19) become simple. and since M (ˆ is independent of λ. b) Proof: Since f (x) − ˆ ≤ 0.

ˆ 59 (5. ˆ ˆ m(λ) = f0 (ˆ) − λ (f (ˆ) − ˆ x x b) ˆ .18) with λ = λ. . x since X is convex. f0 (θx + (1 − θ)˜) ≥ θf0 (x) + (1 − θ)f0 (˜) . are differentiable. equivalently. then x is an optimal solution to x ˆ ˆ ˆ the primal. λ) satisfy the optimality conditions. Also.e. fi (ˆ) ≤ ˆi for i = 1. Then b). . λ (f (ˆ) − ˆ = 0.22) ˆ ˆ λi = 0 when fi (ˆ) < ˆi .21) x is feasible for (5. let x ∈ Ω(b). since f0 is concave. .23) ˆ λ ≥ 0 is said to be an optimal price vector if there is x ∈ X such that (ˆ. B is convex. and M (ˆ = m∗ . Then (θx + (1 − θ)˜) ∈ X b ˜ b). Proof: Let b. i. condition. f0 (ˆ) = M (b) x ˆ so that from Weak Duality λ is optimal for the dual..17).21) x x b) = f0 (ˆ) by (5. ˜ belong to B. Lemma 3: B is a convex subset of Rm . ˆ x b (5.24) . DUALITY THEORY ˆ x is optimal solution of (5. so that b fi (θx + (1 − θ)˜) ≤ θb + (1 − θ)˜ . and fi . b) ˆ f0 (x) ≤ f0 (x) − λ (f (x) − ˆ b) ˆ (f (x) − ˆ ∈ X} ≤ sup{f0 (x) − λ b)|x ˆ = f0 (ˆ) − λ (f (ˆ) − ˆ by (5. x hence (θx + (1 − θ)˜) ∈ Ω(θb + (1 − θ)˜ x b) and therefore. and fi (θx + (1 − θ)˜) ≤ θfi (x) + (1 − θ)fi (˜) x x since fi is convex. m . x ∈ Ω(˜ let 0 ≤ θ ≤ 1.2. ♦ We now proceed to a much more detailed investigation. Theorem 1: (Sufficiency) If (ˆ. λ) satisfy the optimality ˆ x ˆ ˆ by virtue of (5. Note that in this case x ∈ Ω(b) ˆ The next result is equivalent to Theorem 4(ii) of Section 1 if X = Rn .23) x so that x is optimal for the primal. x b x b) (5. and hence by definition f0 (ˆ) = M (ˆ Also ˆ x b). 0 ≤ i ≤ m.5.22). x x (5. and M : B → R {+∞} is a concave function. λ is an optimal solution to the dual. b) ˆ Proof: Let x ∈ Ω(ˆ so that λ (f (x) − ˆ ≤ 0. .

∈M (ˆ b B . Definition: The function M : B → R number K such that {∞} is said to be stable at ˆ ∈ B if there exists a real b M (b) ≤ M (ˆ + K|b − ˆ for b ∈ B .60 CHAPTER 5. ♦ Definition: Let X ⊂ Rn and let g : X → R {∞. −∞}.) x ˆ (See Figure 5-3. λ is a supergradient at ˆ b Figure 5. b) b| (In words. x ∈ Ω(˜ b) x ˜ b)} ≥ sup{f0 (x)|x ∈ Ω(b)} + (1 − θ) sup {f0 (˜)|˜ ∈ Ω(˜ x x b)} ˜ = θM (b) + (1 − θ)M (b). - ˆ b - - b . x ˆ (g(x) ≥ g(ˆ) + λ (x − x) for x ∈ X.24) holds for all x ∈ Ω(b) and x ∈ Ω(˜ it follows that ˜ b) M (θb + (1 − θ)ˆ ≥ sup {f0 (θx + (1 − θ)˜)|x ∈ Ω(b).) M (b) M (b) M (ˆ b) b) .) A more geometric way of thinking about subgradients is the following.3: Illustration of supergradient of stability. Figure 5. NONLINEAR PROGRAMMING Since (5. A vector λ ∈ Rn is said to be a supergradient (subgradient) of g at x ∈ X if ˆ g(x) ≤ g(ˆ) + λ (x − x) for x ∈ X. M is stable at ˆ if M does not increase infinitely steeply in a neighborhood of ˆ See b b. Define the subset A ⊂ R1+m by . b M (b) - ˆ b M is stable at ˆ b M (ˆ + λ (b − ˆ b) b) - . b ˆ b M is not stable at ˆ b .3.

The reader who is familiar with the Separation Theorem of convex sets should be able to construct a proof for the second part based on Figure 5. .23). . b)|b ∈ B. hence again by Exercise 3 ˆ M (ˆ − λ ˆ ≥ f0 (x) − λ f (x) . x M (ˆ − λ ˆ ≥ M (ˆ − λ f (ˆ) . and then (ˆ. b) ˆ b b) ˆ x ˆ ˆ b so that λ (f (ˆ) − ˆ ≥ 0. b) ˆ b ˆ Since f0 (ˆ) = M (ˆ and λ (f (ˆ) − ˆ = 0. Show that (i) if λ = (λ1 . λ ≥ 0 and ˆ By Exercise 2.2. and M (ˆ < ∞. M (b) ≤ M (ˆ + λ (b − ˆ b) b) ≤ M (ˆ + |λ||b − ˆ . λm ) is said to be the normal to a hyperplane supporting A at a point(ˆ. This neologism contrasts with the epigraph of a function which is the set lying above the graph of the function. . . b) ∈ A . b b b. f (ˆ) = M (ˆ x ∈ X. i=1 (5. 61 Thus A is the set lying ”below” the graph of M . Exercise 2: Show that if ˆ ∈ B. (M (b). A lies below the hyperplane π = {(r. ˆ ˆ Lemma 5: Suppose that x is optimal for (5. Then λ is a supergradient of M at ˆ iff λ is an ˆ b ˆ satisfy the optimality conditions. or see the Appendix at the end of this Chapter. . λ) x ˆ Proof: By hypothesis. ˜ ≥ ˆ and r ≤ M (ˆ then ˜ ∈ B. .4. . b. futhermore. . b b). . we can rewrite the inequality above as x b). r b) ˆ ∈ B. DUALITY THEORY A = {(r. λm ) is a Exercise 3: Assume that b b) supergradient of M at ˆ then λ ≥ 0. . λ1 . x b) From the Greek “hypo” meaning below or under. ˆ x b. . −λ1 . ˜ b). −λm ) defines a hyperplane supporting A at b). ˆ then Let λ be a supergradient at b. and r ≤ M (b)} . . Definition: A vector (λ0 .25) λi bi }. 1 . and f (ˆ) ≤ ˆ Let λ be a supergradient of M at ˆ x b). if the hyperplane is non-vertical then b). Next let x ∈ X. . (λm /λ0 )) is a supergradient of M at ˆ b. b)|λ0 r+ λi bi = λ0 r + ˆ ˆ hyperplane is said to be non-vertical if λ0 = 0. We will prove only one part of the next crucial result. −λm ) defines a non-vertical hyperplane b supporting A at (M (ˆ ˆ (ii) if (λ0 . ˜ ∈ A. ˆ f (ˆ)) ∈ A and by Exercise 3. Since M is concave it follows immediately that A is convex (in fact these are equivalent statements).5. (M (ˆ ˆ then λ0 ≥ 0. b) ((λ1 /λ0 . . b). But then λ (ˆ − f (ˆ)) = 0.17).) The supporting (In words. M (˜ and (˜. See Figure 5. Then x b) x (f0 (x). .4. giving (5. and (1. Lemma 4: (Gale [1967]) M is stable at ˆ iff M has a supergradient at ˆ Proof: (Sufficiency only) b b. . f (x)) ∈ A. . b) b| ♦ The next two results give important alternative interpretations of supergradients. λi ≥ 0 for 1 ≤ i ≤ m. . . We call A the hypograph1 of M . . −λ1 . ˆ if r b) m m λ0 r + ˆ i=1 b λiˆi ≥ λ0 r + λi bi for all (r. optimal price vector.

A b ˆ b No non-vertical hyperplane supporting A at (M (ˆ ˆ b). b) ˆ b) Hence M (b) = sup{f0 (x)|x ∈ Ω(b)} ≤ M (ˆ + λ (b − ˆ . Then λ is a supergradient of M at ˆ iff λ is an b b) b ˆ optimal solution of the dual (5. f (x) ≤ b. b) ˆ b) ˆ so that λ is a supergradient of M at ˆ b.4: Hypograph and supporting hyperplane. By Exercises 2 and 3 Proof: Let λ b.21) holds.23). It follows that (ˆ. λ ≥ 0 satisfy (5. i. ˆ ˆ (f (x) − b) ≤ 0 so that x ∈ X.62 M (b) CHAPTER 5. A ˆ b b π is a non-vertical hyperplane supporting A at (M (ˆ ˆ ˆ b).21) by (5. Let x ∈ Ω(b). suppose x ∈ X. . by (5. b) Figure 5. NONLINEAR PROGRAMMING M (ˆ b) .. x ˆ ˆ Conversely. x x b) b) so that (5. and M (ˆ < ∞. λm ) . and (5. . . (5. λ) satisfy the optimality conditions. .21).19) and m(λ) = M (ˆ b). ♦ ˆ ˆ Lemma 6: Suppose that ˆ ∈ B. ˆ be a supergradient of M at ˆ Let x ∈ X.23) . b) M (b) π ˆ M (ˆ b) (λ0 . Then λ ˆ f0 (x) ≤ f0 (x) λ (f (x) − b) ˆ = f0 (x) − λ (f (x) − ˆ + λ (b − ˆ b) ˆ b) ˆ (f (ˆ) − ˆ + λ (b − ˆ ˆ ≤ f0 (ˆ) − λ x x b) b) ˆ (b − ˆ = f0 (ˆ) + λ x b) = M (ˆ + λ (b − ˆ .e.22). ˆ ˆ f0 (ˆ) + λ (f (ˆ) − ˆ ≥ f0 (x) − λ (f (x) − ˆ .

b) b) so that ˆ ˆ M (ˆ ≥ sup{f0 (x) − λ (f (x) − ˆ ∈ X} = m(λ) . and (5. M (b) = sup{f0 (x)|x ∈ Ω(b)} ≤ M (ˆ + λ (b − ˆ .) 5. b However.2 Interpretation and extensions. b) ˆ ≥ 0. then λ (f (x) − b) ≤ 0. if X = Rn and fi . (Hint: See Theorem 5 of 4.21). and m(λ) = M (ˆ b). The “if” part of (iii) follows from Theorem 1. b) b)|x ˆ ˆ By weak duality (Lemma 2) it follows that M (ˆ = m(λ) and λ is optimal for (5. ˆ ˆ (i) there exists an optimal solution λ for the dual.2. then x is optimal for the primal iff (ˆ. b) b) ˆ and if moreover f (x) ≤ b.22).2. Thus the condition of stability of M at ˆ plays a similar role to the constraint qualification.6.23) are equivalent to the Kuhn-Tucker condition (5. In other words if CQ holds at x then M is stable at ˆ In particular. Hence. ♦ We can now summarize our results as follows. ˆ ˆ (ii) λ is optimal for the dual iff λ is a supergradient of M at ˆ b. (5. Lemma 7: If ˆ is in the interior of B. then M is stable at ˆ b.22).5. (ii) is implied by Lemma 6. . Theorem 2: (Duality) Suppose ˆ ∈ B. n and the f are differentiable. and M is stable at ˆ Then b b) b. if λ is an optimal solution to the dual then (∂M + /∂bi )(ˆ ≤ λi ≤ (∂M − /∂bi )(ˆ b) ˆ b). b) ˆ b) 63 ˆ so that λ is a supergradient. the various conditions of Section 1. by Lemmas 4. Exercise 4: Prove Corollary 1. DUALITY THEORY ˆ M (ˆ − λ ˆ ≥ f0 (x) − λ f (x) b) ˆ b or ˆ M (ˆ ≥ f0 (x) − λ (f (x) − ˆ . so that ˆ M (ˆ ≥ f0 (x) − λ (f (x) − ˆ + λ (f (x) − b) b) b) ˆ ˆ ˆb = f0 (x) − λ b + λ ˆ for x ∈ Ω(b) . ˆ M (ˆ ≥ f0 (x) − λ (f (x) − ˆ . 0 ≤ i ≤ m.23). λ) satisfy the (iii) if λ ˆ x ˆ optimality conditions of (5. in particular if there exists x ∈ X such that fi (x) < ˆi for b b 1 ≤ i ≤ m. 6 stability is equivalent to the existence of optimal dual variables. Proof: (i) follows from Lemmas 4. Here if X = R i we give one sufficient condition which implies stability for the general case. whereas the “only if” part of (iii) follows from Lemma 5. ˆ is any optimal solution for the dual.21).2. then the optimality conditions (5.3 imply stability.19). (5. whereas CQ is only a sufficient condition.8). M (ˆ < ∞. ˆ b.3. and m(λ) = M (ˆ Then for any x ∈ X ˆ Conversely suppose λ b). It is easy to see using convexity properties that. and (5. ♦ ˆ Corollary 1: Under the hypothesis of Theorem 2. are differentiable.

if its hypograph A is not convex. if we interpret (∂M + /∂bi )(ˆ b) b) as the marginal revenue of the ith resource. there may be no hyperplane supporting A at (M (ˆ ˆ This is the reason why duality theory requires the often restrictive convexity hypothb). and comparing with Figure 5. .2.64 CHAPTER 5. Assuming the realistic condition ˆ ∈ B. NONLINEAR PROGRAMMING The proof rests on the Separation Theorem for convex sets. the firm faces the decision problem ˆ an optimal solution of (5.26): Maximize f0 (x) − F (f (x) − ˆ b) subject to x ∈ X . Referring to Figure 5. .3 or Figure 5. These ideas are developed in (Gale [1967]). b) ˆ eral than hyperplanes. ˆ then we can interpret λ as a system of equilibrium prices just as in 4. we can say that equilibrium prices exist iff marginal productivities of every (variable) resource is finite. and only depends on the fact that M is concave. The basic idea involved is to consider supporting A at (M (ˆ ˆ by (non-vertical) surfaces π more genb). λm ) . (5. if the current ˆ resources can be bought or sold at prices λ = (λ1 .4. M (b) M (ˆ b) .18). and ˆ is the interior of B. The various convexity conditions are generalizations of the economic hypothesis of non-increasing returns-to-scale.6. It is possible to obtain the duality theorem under conditions slightly weaker than convexity but since these conditions are not easily verifiable we do not pursue this direction any further (see Luenberger [1968]).18) we would then have more general problem of the form (5. (5. esis on X and fi . If for a price system λ.17) also is an optimal solution for (5. .5: If M is not concave there may be no supporting hyperplane at (M (ˆ ˆ b). For details see the b) b Appendix.17) is the short-term decision problem faced by the firm. b as the vector of current resource supplies. M (ˆ < ∞ we can see from Theorem 2 and its Corollary 1 that there exists b b) an equilibrium price system iff (∂M + /∂bi )(ˆ < ∞. equivalently. f0 (x) the corresponding revenue. X as constraints due to physical or long-term limitations. b). Much of duality theory can be given an economic interpretation similar to that in Section 4.4. see Figure 5. b). . Instead of (5.5 it is evident that if M is not concave or. 1 ≤ i ≤ m. Thus. M (ˆ < ∞ without loss of generality. Next.18). A ˆ b b Figure 5. we can think of x as the vector of n activity levels.26) . A much more promising development has recently taken place. and finally f (x) the amount of these resources used up at activity levels x. The primal problem (5.

µk ) ≥ 0.26): Maximize f0 (x) − φ(µ. In particular ˆ from Theorem 2 (iii). For more details concerning the topics of 2.3 Applications.6: The surface π supports A at (M (ˆ ˆ ˆ b). no such limitation on φ is necessary. . such an interpretation to make sense we should have φ(µ. M (b) π ˆ (5. in analogy with (5.2.6) is the graph of the function b → M (ˆ − ˆ b) F (b − ˆ Usually F is chosen from a class of functions φ parameterized by µ = (µ1 . Also see (Arrow and Hurwicz [1960]). The following references are pertinent: (Gould [1969]).19). . For non-economic applications. f (x) − ˆ For b b). (Banerjee [1971]). of course. b) ≥ 0 for b ≥ 0. b). f (x) − ˆ b) subject to x ∈ X . if we have an optimal dual solution λ then the optimal primal solutions are ˆ which also satisfy the feasibility condition (5.18) for λ = λ . f (x) − ˆ ∈ X} .5. b)|x then the dual problem is Minimize ψ(µ) subject to µ ≥ 0 . ·) then the resources f (x) − ˆ can be bought (or sold) for the amount φ(µ.2. b). A ˆ b b Figure 5. . presented in (Frank [1969]). The economic interpretation of (5. ˜ b) whenever b ≥ ˜ A relatively unnoticed. If we let ψ(µ) =sup{f0 (x) − φ(µ.27) instead of (5. Parts (i) and (iii) of Theorem 2 make duality theory attractive for computation purposes.1 see (Geoffrion [1970a]) and for a mathematically more elegant treatment see (Rockafellar [1970]). (Greenberg and Pierskalla [1970]). 5.27) M (ˆ b) .22) and those optimal solutions of (5.27) would be that if the prevailing (non-uniform) price system is φ(µ. . DUALITY THEORY 65 where F : Rm → R is chosen so that π (in Figure 5. b) ≥ φ(µ. Decentralized resource allocation. Then for each fixed µ ≥ 0 we have (5. but quite interesting development along these lines is b. and φ(µ.

. .28) whereas (5.23). Thus we have the decision problem (5.19) is k i Maximize f0 (xi ) − λ f i (xi ) − λ ( i=1 f i (xi ) − ˆ b) subject to xi ∈ X i . and m(λ) = i=1 (5.28) For λ ∈ Rm . . 1 ≤ i ≤ k.28) all the decision variables x1 .29) involves fewer constraints. . a multi-divisional firm). which decomposes into k separate problems: i Maximize f0 (xi ) − λ f i (xi ) i ∈X . + f0 (xk ) where f0 : Rni → R are concave functions.29) may be trivial if the dimensions of xi are small. and also suppose that (5. . it is the form f0 (x1 ) + . 1 ≤ i ≤ k. The sub-system has individual constraints of the form xi ∈ X i where xi is a convex set.29). for each λ ≥ 0 there is a unique optimal solution of (5.28): k Maximize i=1 k i=1 i f0 (xi ) subject to xi ∈ X i . the problem corresponding to (5. .28) has an optimal solution and the stability condition is satisfied.30) Note that (5. and the decision variable of the ith sub-system is a vector xi ∈ Rni . Then by Exercise 8 of Section 1. if k is very large it may be practically impossible to solve (5. 1 ≤ i ≤ k. in fact. 1≤i≤k . Consider the following algorithm. . b (5. This is useful because generally speaking (5. + f k (xk ) ≤ ˆ where f i : Rni → Rm are convex functions and ˆ ∈ Rm b b is the vector of available common resources. subject to x i k i If we let mi (λ) = sup{f0 (xi ) − λ f i (xi )|xi ∈ X i }. but perhaps more importantly the decision problems in (5. the sub-systems share some resources in common and this limitation is expressed as f 1 (x1 ) + . Suppose that the objective function of the large system 1 k i is additive. we need to find an optimal dual solution so that we can use Theorem 2(iii). . i. mi (λ) + λ ˆ then the dual problem is Minimize m(λ) .29) are decentralized whereas in (5. say xi (λ). .18) is easier to solve than (5. Furthermore.17) since (5. f i (xi ) ≤ ˆ . xk are coupled together. Consider a decision problem in a large system (e. are strictly concave. (5. λ ≥ 0. first of all.28) because.g. For simplicity suppose that the i f0 . Assuming that (5.29) has an optimal solution for every λ ≥ 0. 1 ≤ i ≤ k . (5. subject to λ ≥ 0 . .18) has fewer constraints.e.29) may be much easier to solve than (5. The system is made up of k sub-systems (divisions).66 CHAPTER 5. NONLINEAR PROGRAMMING the “complementary slackness” condition (5.29) b.

Step 2. then life in the stream is seriously threatened. For more detail see (Arrow and Hurwicz [1960]). We divide this period into T intervals. . Select λ0 ≥ 0 arbitrary. it is important to treat the effluents before they enter the stream in order to reduce the BOD to concentration levels which can be safely absorbed by the DO in the stream. Solve (5. N.2.32) . but for simplicity of exposition we assume that their impact on the quality of the stream is measured in terms of a single quantity. Set p = 0. Set p = p + 1 and return to Step 3. It is a well-advertized fact that if the DO drops below a certain concentration. .31) and (5. namely the biochemical oxygen demand (BOD) which they place on the dissolved oxygen (DO) in the stream. . indeed. et al. T . The principle of conservation of mass gives us equations (5. The pollutants consist of various materials.) Figure 5.7 is a schematic diagram of a part of a stream into which n sources (industries and municipalities) discharge polluting effluents. t = 1. The discussion in this section is mainly based on (Kendrick. Hence. . . .28) and can easily be seen to be b. [1971]). (5.5. Step 3. DUALITY THEORY 67 Step 1. It can be shown that if the step sizes dp are chosen properly. t = 1. . and mi (t) = amount of effluent discharge in liters. .” Therefore. . In this example we are concerned with finding the optimal balance between costs of waste treatment and costs of high BOD in the stream. . k Compute ep = i=1 f i (xi (λp )) − ˆ If ep ≥ 0. .29) for λ = λp and obtain the optimal solution xp = (x1 (λp ). . optimal. . During interval t and in area i let zi (t) = concentration of BOD measured in mg/liter. See (Dorfman and Jacoby [1970].31) s qi (t + 1) − qi (t) = βi (qi − qi (t)) + ψi−1 qi−1 (t) − ψi qi (t) vi vi +αi zi (t) − ηi vi . The fluctuations of BOD and DO will be cyclical with a period of 24 hours. and go to Step 2. . (5.28). Control of water quality in a stream. xk (λp )).32): zi (t + 1) − zi (t) = −αi zi (t) + ψi−1 zi−1 (t) vi − ψi zi (t) vi + si (t)mi (t) vi . . Set λp=1 according to λp+1 = i if ep ≥ 0 λp i i p p ep if ep < 0 λi − d i i where dp > 0 is chosen a priori. and for other decentralization schemes for solving (5. . xp will converge to the optimum solution of (5. the quality of the stream improves with the amount of DO and decreases with increasing BOD. the stream can “die. qi (t) = concentration of DO measured in mg/liter. xp is feasible for (5. it is enough to study the problem over a 24-hour period.28) see (Geoffrion [1970b]). T and i = 1.. Since the DO in the stream is used to breakdown chemically the pollutants into harmless substances. si (t) = concentration of BOD of effluent discharge in mg/liter. We first derive the equations which govern the evolution in time of BOD and DO in the n areas of the streams. For an informal discussion of schemes of pollution control which derive their effectiveness from duality theory see (Solow [1971]).

vi = volume of water in area i measured in liters. qi (1). . Therefore. . zi−1 qi−1 . αi . αi is the rate of decay of BOD per interval. NONLINEAR PROGRAMMING direction of flow 0 z0 q0 (1 − π1 )si given 1 z1 q1 1 − πi−1 i−1 i zi qi i+1 zi+1 qi+1 N N +1 . Finally. Now suppose that the waste treatment facility in area i removes in interval t a fraction πi (t) of the concentration si (t) of BOD.. zN qN (1 (1 − πi )si − πi+1 )si+1 (1 − πN )sN si si−a si si+1 sN Figure 5. Finally. They may vary with the time interval t..33) We now turn to the costs associated with waste treatment and pollution. i = 1. . Then we face the ¯ .. ψi . The vi . The costs associated with increased amounts of BOD and reduced amounts of DO are much more difficult to quantify since the stream is used by many institutions for a variety of purposes (e. q0 (t) which are the concentrations immediately upstream from area 1 are assumed known. Hence. We further assume that f is convex. recreational). In period t the ith facility treats mi (t) liters of effluent with a BOD concentration si (t) mg/liter of which the facility removes a fraction πi (t).68 CHAPTER 5. This decay occurs by combination of BOD and DO.. βi is the rate of generation of DO. ηi . and the disutility caused by a decrease in the water quality varies with the user. Then (5. .7: Schematic of stream with effluent discharges.. the cost in period t will be fi (πi (t). agricultural. industrial. Let q be the minimum acceptable DO concentration and let z be the maximum permissible BOD concentration. q s are parameters of the stream and are assumed known. The cost of waste treatment can be readily identified. (5. the initial concentrations zi (1). Also z0 (t). N are assumed known. instead of attempting to quantify these costs let us suppose that some minimum water quality standards are set. Here. municipal.31) is replaced by zi (t + 1) − zi (t) = −αi zi (t) + ψi zi−1 vi − ψi zi (t) vi + (1−πi (t))si (t)mi (t) vi . . ψi = volume of water which flows from area i to are i + 1 in each period measured in liters. The increase in DO is due to various natural oxygen-producing biochemical reactions in the stream and the increase is proportional to (q s − qi ) where q s is the saturation level of DO in the stream. . ηi is the DO requirement in the bottom sludge.g.. mi (t)) where the function must be monotonically increasing in all of its arguments. si (t).

. . i = 1. the resulting amount of waste treatment is carried out at the minimum expenditure of resources (i. . On the other hand. . (5. N . . . N. si (t). and we write the components of p∗ as p∗ (t) to match i .5. . . . z ). it does not make sense to enforce legally a minimum standard qi (t) ≥ q. si (t). . . Let the minimum cost be m(q. ¯ ¯ ∗ ≥ 0. .e. then the individual polluters may not (and usually do not) have any incentive to cooperate among themselves to achieve these standards. −qi (t)) .36) and.35) where the matrix A and the vector b depend upon the known parameters and initial conditions.e. i = 1. . then the resulting water quality will be acceptable and. N . Furthermore. . . Suppose that all the treatment facilities are in the control of a single public agency. and an optimal solution By the duality theorem there exists a 2N T -dimensional vector λ ∗ πi (t). . mi (t)) (5. Using (5. and r is the NT-dimensional vector with components (1 − πi (t))si (t)mi (t). . . DUALITY THEORY following NP: N T 69 Maximize − i=1 t=1 fi (πi (t). . N . (5. Note that the coefficients of the matrix must be non-negative because an increase in any component of r cannot decrease the BOD levels and cannot increase the DO levels.. t = 1.33) for w and obtain w = b + Ar . . . It should be clear from the duality theory that the answer is in the affirmative. t = 1. Then we can solve (5.33). T. . . . . and −qi (t) ≤ −q . . .36) and (5. ¯ 0 ≤ πi (t) ≤ 1 . . . . T. t = 1. . . . (5. . furthermore. T . i = 1. . But if ¯ there is no such centralized agency. . . and let w = (w(1). . furthermore.37) ∗ such that {πi (t)} is also an optimal solution of (5. i = 1. N .. will be an optimal solution of (5.34) and arrive at an optimal solution. . of the problem: Maximize − i t fi (πi (t). If we let p∗ = A λ∗ ≥ 0.32). . . cost of waste treatment + tax on remaining pollutants). . mi (t)) (5. . it may be economically and politically acceptable to tax individual polluters in proportion to the amount of pollutants discharged by the individual. si (t). . i = 1. ¯ 0 ≤ πi (t) ≤ 1 . w(t)).2. . mi (t)) − λ∗ (b + Ar − w) subject to 0 ≤ πi (t) ≤ 1. t = 1. T. zi (t) ≤ z . . T . z ) and it does this at a minimum cost it will ¯ solve the NP (5. . .32) and (5.36) subject to b + Ar ≤ w . the optimal values of (5. where the 2N T -dimensional vector w has its components equal to −q or z in the obvious manner. To see this let wi (t) = (zi (t).34) subject to (5. wN (t)). . t = 1. . i = 1. zi (t) ≤ z on every polluter since the ¯ pollution levels in the ith area depend upon the pollution levels on all the other areas lying upstream.35) we can rewrite (5. . The question we now pose is whether there exist tax rates such that if each individual polluter minimizes its own total cost (i.34) as follows: Maximize − i t fi (πi (t). t = 1.34)). let w(t) = (w1 (t). T. . . .37) are equal. . Then assuming that the agency is required to maintain the standards (q. N . .

.3. i Before we leave this example let us note that the optimum dual variable or shadow price λ∗ plays an important role in a larger framework.38) Thus. (5. (5.3 Quadratic Programming An important special case of NP is the quadratic programming (QP) problem: Maximize c x − 1 x P x 2 subject to Ax ≤ b. NONLINEAR PROGRAMMING with the components (1 − πi (t))si (t)mi (t) of r we can see that (5.41). (λ∗ ) (Ax∗ − b) = 0 . λ∗ . x ≥ 0 y ≥ 0. λ∗ ≥ 0. µ∗ ) to (5.2 that f0 : x → c x − 1/2 x P x is a concave function.41) and (5. and (5. ♦ From (5.42) by Phase I of the Simplex algorithm (see 4. x ≥ 0 . si (t). T . . (5. . . z ) ¯ was somewhat arbitrary. CQ is satisfied.2). (5. b ∈ Rm are fixed. If the corresponding components of λ∗ are λq∗ (t) and λz∗ (t). µ∗ ∈ Rn . c ∈ Rn . y ∗ . mi (t)) − p∗ (t)(1 − πi (t))si (t)mi (t) i 0 ≤ πi (t) ≤ 1 . A is a fixed m × n matrix and P = P is a fixed positive semi-definite matrix. t = 1. x∗ ≥ 0 c − P x∗ = A λ∗ − µ∗ . λ ≥ 0.39) where x ∈ Rn is the decision variable .40) Proof: By Lemma 3 of 1. (5. hence the necessity of these conditions follows from Theorem 2 of 1. µ∗ ≥ 0 . We noted earlier that the quality standard (q.3. ¯ i i then the change in the minimum cost necessary to achieve the new standard will be approximately λq∗ (t)∆qi (t) + λz∗ (t)∆zi (t). so that the sufficiency of these conditions follows from Theorem 4 of 1.43) Suppose we try to solve (5. such that Ax∗ ≤ b. N . 5.39) iff there exist λ∗ ∈ Rm .2. (µ∗ ) x∗ = 0 .42).70 CHAPTER 5. Theorem 1: A vector x∗ ∈ Rn is optimal for (5. On the other hand.43): Ax + Im Y = b −P x − A λ + In µ = −c . µ ≥ 0 . .40) we can see that x∗ is optimal for (5.2.39) iff there is a solution (x∗ . This estimate can now serve as a basis in making a benefits/cost i i analysis of the proposed new standard. . since P is positive semi-definite it follows from Exercise 6 of Section 1. p∗ (t) is optimum tax per mg of BOD in area i during period t. i = 1. µ x = 0 .41) (5. Now suppose it is proposed to change the standard in the ith area during period t to q + ∆qi (t) and z + ∆zi (t). . Then we must apply Phase II to the LP: m n Maximize − i=1 zi − j=1 ξj . .42) (5.37) is equivalent to the set of N T problems: Maximize − fi (πi (t).λ y = 0 .

λ. If z = 0 and ξ = 0 then (ˆ.2 to solve (5. µ be a solution of (5. z . and (5. (We have assumed.39). x x 1 ≤ i ≤ m . . λ.45). and there is no feasible solution of (5. . do not consider µj as a candidate for entry into the basis.1 will x ˆ ˆ ˆ also prove this lemma. then there is an optimal basic feasible solution of (5. and (5. x x −ψ(ˆ)(h) + fi (ˆ)fix h ≤ 0 .3. (5. Furthermore.41).4. 1 ≤ j ≤ n . . λ. . fm (ˆ) + fmx (ˆ)h}.) If (5. z = 0. (5. Maximize f0 (x) subject to fi (x) ≤ 0. y .43).3. from (5.43). Lemma 1: If (5. in order to obtain a solution of (5.46) . and (5. then there is no (5. y .42).42) have a solution then the maximum value in (5. stop. . Cullum. −1 ≤ hj ≤ 1 . and (5.4 Computational Method We return to the general NP (5. Let Ω ⊂ Rn denote the set of feasible solutions. ♦ This lemma suggests that we can apply the Simplex algorithm of 4.44) is 0. and (5.43). (5.43) have a solution. that b ≥ 0 and −c ≥ 0. If it not possible to remove the zi and ξj from the basis.41). without loss of generality.42). z ≥ 0. if a variable yi is currently in the basis. ξ = −c. Theorem 2: Suppose P is positive definite. The above algorithm is due to Wolfe [1959]. (5. do not consider λi as a candidate for entry into the basis. If z = 0 or ξ ˆ ˆ solution to (5. p. 159 ff).5. y . y ≥ 0. y . λ. µ. starting with the basic feasible solution z = b. The behavior of the algorithm is summarized below. µ.41).39).45) where x ∈ Rn . (5.44) starting with a basic feasible solution z = b. ξ ≥ 0. For x ∈ Ω define the function ψ(ˆ) : Rn → R by ˆ x ψ(ˆ)(h) = max{−f0x (ˆ)h. y . Step 2 of the Simplex algorithm must be modified as follows to satisfy (5. (5. . x x x x x x Consider the problem: Minimize ψ(ˆ)(h) x subject to − ψ(ˆ)(h) − f0x (ˆ)h ≤ 0 .44). 0 ≤ i ≤ m.42).42).42).41) and (5. But then a repetition of the proof of Lemma 1 of 4. COMPUTATIONAL METHOD subject to Ax + Im y +z =b −P x − A λ + In µ + ξ = −c x ≥ 0. i = 1. µ ≥ 0. . and Polak [1970]. (5. For a proof of this result as well as for a generalization of the algorithm which permits positive semi-definite P see (Cannon. ξ = −c.41). However.43). ˆ Proof: Let x. ξ = 0 is ˆ ˆ ˆ ˆ ˆ ˆ ˆ ˆ ˆ an optimal solution of (5. ξ) of (5. m .42). λ ≥ 0. We have the following result. Then x. f1 (ˆ) + f1x (ˆ)h. µ) solve x ˆ ˆ ˆ ˆ ˆ ˆ x ˆ ˆ ˆ ˆ = 0.44) which is also a solution f (5. 5. The algorithm will stop in a finite number of steps at an ˆ optimal basic feasible solution (ˆ.41).44).44).43) we see that at most (n + m) components of (ˆ. . λ. µ) are non-zero. (5.42) and (5.41).43): If a variable xj is currently in the basis.43) and x is an optimal solution of (5. fi : Rn → R. are differentiable. (5. 71 (5.

For this reason h(xk ) is called a (desirable) feasible direction.8: h(xk ) is a feasible direction. Find x0 ∈ Ω.46) can be solved as an LP. Step 1. h(xk ). Step 2. and has a non-empty interior. µ ≥ 0 . . generated by the algorithm. . Remark: If h0 (xk ) < 0 in Step 2.8.46) and let h0 (ˆ) = ψ(ˆ)(h(ˆ)) be the minimum value atx x x x tained. (Note that by Exercise 1 of 4. Let x∗ be any limit point of the sequence x0 . Maximize f0 (xk + µh(xk )) . Call h(ˆ) an optimum solution of (5.1 (5.) The following algorithm is due to Topkis and Veinott [1967]. set k = k + 1 and return to Step 2. . Solve (5. Set xk+1 = xk + µ(xk )h(xk ). set k = 0. and go to Step 2. and fi (xk )+ fix (xK )h(xk ) < 0. .5. . For a proof of this result and for more efficient algorithms the reader is referred to (Polak [1971]). otherwise go to Step ˆ 3. which is dense in Ω(x0 ). Compute an optimum solution µ(xk ) to the one-dimensional problem. . then the direction h(xk ) satisfies f0x (xk )h(xk ) > 0. If h0 (xk ) = 0. xk . Step 4. The performance of the algorithm is summarized below. stop. NONLINEAR PROGRAMMING f0 (xk )f0 (x) = F0 (x∗ ) > f0 (xk ) f1 (xk ) f0 (x) = f0 (xk ) f2 = 0 . . and go to Step 4. Step 3. 1 ≤ i ≤ m. (See Figure 5.46) for x = xk and obtain h0 (xk ). subject to (xk + µh(xk )) ∈ Ω. . x1 . Theorem 1: Suppose that the set Ω(x0 ) = {x|x ∈ Ω. f0 (x) ≥ f0 (x0 )} is compact.72 CHAPTER 5. f3 (xk ) xk h(xk ) f2 (xk ) f1 = 0 Ω f3 = 0 Figure 5. Then the Kuhn-Tucker conditions are satisfied at x∗ .) .

48) m b λiˆi ≤ θ. .7 of Section 2 are based on the following extremely important theorem (see Rockafeller [1970]).49) can hold m only if λ0 > 0. Then there exists λ ∈ Rn . there exist (λ0 . b)|b ∈ B. Evidently. it can be verified that (5. F. b) (5. Proof of Lemma 4: Since M is stable at ˆ there exists K such that b M (b) − M (ˆ ≤ K|b − ˆ for all b ∈ B . . r ≤ M (b) − M (ˆ .49) we can see that m i=1 b λiˆi ≥ θ.5 Appendix The proofs of Lemmas 4.47) λ0 r + i=1 m λi bi ≤ θ for (r. b) ∈ F .49) λ0 r + i=1 b λiˆi ≥ θ . and θ ∈ R such that λ g ≤ θ for all g ∈ G λ f ≥ θ for all f ∈ F . G are convex. λm ) = 0. . b)|b ∈ Rm .48) we get m 1 λ0 [θ m M (b) − M (ˆ ≤ b) − i=1 λ i bi ] = i=1 (− λi )(bi − ˆ b). λ0 ♦ Proof of Lemma 7: Since ˆ is in the interior of B. G be convex subsets of Rn such that the relative interiors of F. Let F. λ = 0. Separation theorem for convex sets. But then from (5. b) ∈ G . λm ) = 0. r ≤ M (b)} . b) b| In R1+m consider the sets F = {(r. b| In R1+m consider the sets F = {(r. (5. .47) implies that F ∩ G = φ. i=1 so that i=1 b λiˆi = θ. . . so that there exist (λ0 . APPENDIX 73 5. G are disjoint.50) . there exists ε > 0 such that b b ∈ B whenever |b − ˆ < ε . b)} It is easy to check that F. and (5. Also from (5. for r > M (ˆ . λm ) = 0. . and the fact that (λ0 .48) λi bi ≥ θ for (r. . Hence. . b)|b ∈ B. b|} G = {(r.5. and θ such that m (5. . . i=1 λ0 r + From the definition of F . r > K|b − ˆ . and θ such that m (5. ˆ > M (ˆ b)|r b} G = {(r. G are convex and F ∩ G = φ. . whereas from (5.5.

50) and (5. . . NONLINEAR PROGRAMMING λ0 r + i=1 b λiˆi ≤ θ . b) ∈ G . . (5.50).51) From (5.52) implies M (b) ≤ (ˆ + b) m m i=1 b λiˆi = θ . λm ) = 0 we can see that (5. for (r. b λ0 ♦ .51) we get λ0 M (ˆ + b) so that (5. (− i=1 λi )(bi − ˆi ) . and the fact that (λ0 .49).51) imply λ0 > 0. From (5.74 m CHAPTER 5.(5. .

with initial fuel of weight M . In the first section we give two examples. gravitational force = gx3 with g assumed constant. The decision variable at ˙ time t is u(t). Thus.1) where x1 (t) is the height of the rocket from the ground at time t. namely: inertia = x3 x1 = x3 x2 . at rest. x2 (0). ρ(x1 ) is a friction coefficient depending on atmospheric 2 density which is a function of x1 . At a prescribed final time tf . A very important class of problems where such situations arise is in the control of dynamical systems.1). The “dot” denotes differentiation with respect to t. the rate of fuel ejection. and thrust force CT x3 . 6. it is desired that the rocket be at a position as high above the ground as possible. ˙ CT x3 (t) u(t) (6. Specifically suppose that the equations of motion are given by (6. M ). that is. x3 (t) is the weight of the rocket (= weight of remaining fuel) at time t. drag ¨ ˙ force = CD ρ(x1 )x2 where CD is constant. 0. and in Section 2 we derive the main result.Chapter 6 SEQUENTIAL DECISION PROBLEMS: DISCRETE-TIME OPTIMAL CONTROL In this chapter we apply the results of the last two chapters to situations where decisions have to be made sequentially over time. These equations can be derived from the force equations under the assumption that there are four forces acting on the rocket. At time 0 we assume that (x1 (0). x1 (t) = x2 (t) ˙ x2 (t) = − xCD ρ(x1 (t))x2 (t) − g + ˙ 2 3 (t) x3 (t) = −u(t) .1. assumed proportional to rate of fuel ejection. the rocket is on the ground. x3 (0)) = (0. the 75 . See Figure 6.1 Examples The trajectory of a vertical sounding rocket is controlled by adjusting the rate of fuel ejection which generates the thrust force. x2 (t) is the (vertical) speed at time t.

76

CHAPTER 6. DISCRETE-TIME OPTIMAL CONTROL

decision problem can be formalized as (6.2). Maximize x1 (tf ) subject to x(t) = f (x(t), u(t)), 0 ≤ t ≤ tf ˙ x(0) = (0, 0, M ) u(t) ≥ 0, x3 (t) ≥ 0, 0 ≤ t ≤ tf ,

(6.2)

where x = (x1 , x2 , x3 ) , f : R3+1 → R3 is the right-hand side of (6.1). The constraint inequalities u(t) ≥ 0 and x3 (t) ≥ 0 are obvious physical constraints.

x3 x1 = inertia ¨

CD ϕ(x1 )x2 = drag 2

gx3 = gravitational force ˙ CR x3 = thrust Figure 6.1: Forces acting on the rocket. The decision problem (6.2) differs from those considered so far in that the decision variables, which are functions u : [0, tf ] → R, cannot be represented as vectors in a finite-dimensional space. We shall treat such problems in great generality in the succeeding chapters. For the moment we assume that for computational or practical reasons it is necessary to approximate or restrict the permissible function u(·) to be constant over the intervals [0, t1 ), [t1 , t2 ), . . . , [tN −1 , tf ), where t1 , t2 , . . . , tN −1 are fixed a priori. But then if we let u(i) be the constant value of u(·) over [ti , ti+1 ), we can reformulate (6.2) as (6.3): Maximize x1 (tN )(tN = tf ) subject to x(ti+1 ) = g(i, x(ti ), u(i)), i = 0, 1, . . . , N − 1 x(t0 ) = x(0) = (0, 0, M ) u(i) ≥ 0, x3 (ti ) ≥ 0, i = 0, 1, . . . , N .

(6.3)

In (6.3) g(i, x(t1 ), u(i)) is the state of the rocket at time ti+1 when it is in state x(ti ) at time ti and u(t) ≡ u(i) for ti ≤ t < ti+1 . As another example consider a simple inventory problem where time enters discretely in a natural fashion. The Squeezme Toothpaste Company wants to plan its production and inventory schedule for the coming month. It is assumed that the demand on the ith day, 0 ≤ i ≤ 30, is d1 (i) for

6.2. MAIN RESULT

77

their orange brand and d2 (i) for their green brand. To meet unexpected demand it is necessary that the inventory stock of either brand should not fall below s > 0. If we let s(i) = (s1 (i), s2 (i)) denote the stock at the beginning of the ith day, and m(i) = (m1 (i), m2 (i)) denote the amounts manufactured on the ith day, then clearly s(i + 1) + s(i) + m(i) − d(i) , where d(i) = (d1 (i), d2 (i)) . Suppose that the initial stock is s, and the cost of storing inventory s ˆ for one day is c(s) whereas the cost of manufacturing amount m is b(m). The the cost-minimization decision problem can be formalized as (6.4):
30

Maximize
i=0

(c(s(i)) + b(m(i))) (6.4)

subject to s(i + 1) = s(i) + m(i) − d(i), 0 ≤ i ≤ 29 s(0) = s ˆ s(i) ≥ (s, s) , m(i) ≥ 0, 0 ≤ i ≤ 30 .

Before we formulate the general problem let us note that (6.3) and (6.4) are in the form of nonlinear programming problems. The reason for treating these problems separately is because of their practical importance, and because the conditions of optimality take on a special form.

6.2 Main Result
The general problem we consider is of the form (6.5).
N −1

Maximize
i=0

f0 (i, x(i), u(i))

subject to dynamics : x(i + 1) − x(i) = f (i, x(i), u(i)), i = 0, . . . , N − 1 , initial condition: q0 (x(0) ≤ 0, g0 (x(0)) = 0 , final condition: qN (x(N )) ≤ 0, gN (x(N )) = 0 , state-space constraint: qi (x(i)) ≤ 0, i = 1, . . . , N − 1 , control constraint: hi (u(i)) ≤ 0, i = 0, . . . , N − 1 .

(6.5)

Here x(i) ∈ Rn , u(i) ∈ Rp , f0 (i, ·, ·) : Rn+p → R, f (i, ·, ·) : Rn+p → Rn , qi : Rn → Rmi , gi : Rn → R i , hi : Rp → Rsi are given differentiable functions. We follow the control theory terminology, and refer to x(i) as the state of the system at time i, and u(i) as the control or input at time i. We use the formulation mentioned in the Remark following Theorem 3 of V.1.2, and construct the Lagrangian function L by L(x(0), . . . , x(N ); u(0), . . . , u(N − 1); p(1), . . . , p(N ); λ0 , . . . , λN ; α0 , αN ; γ 0 , . . . , γ N −1 )

78
N −1 N −1

CHAPTER 6. DISCRETE-TIME OPTIMAL CONTROL
=
i=0 N i i=0 0

f0 (i, x(i), u(i)) −
i=0

(p(i + 1)) (x(i + 1) − x(i) − f (i, x(i), u(i)))+
N −1 N

(λ ) qi (x(i)) + (α ) g0 (x(0)) + (α ) gN (x(N )) +
i=0

(γ i ) hi (u(i))

.

Suppose that CQ is satisfied for (6.5), and x∗ (0), . . . , x∗ (N ); u∗ (0), . . . , u∗ (N − 1), is an optimal solution. Then by Theorem 2 of 5.1.2, there exist p∗ (i) in Rn for 1 ≤ i ≤ N, λi∗ ≥ 0 in Rmi for 0 ≤ i ≤ N, αi∗ in R i for i = 0, N, and γ i∗ ≥ 0 in Rsi for 0 ≤ i ≤ N − 1, such that (A) the derivative of L evaluated at these points vanishes, and (B) λi∗ qi (x∗ (i)) = 0 for 0 ≤ i ≤ N , γ i∗ hi (u∗ (i)) = 0 for 0 ≤ i ≤ N − 1 . We explore condition (A) by taking various partial derivatives. Differentiating L with respect to x(0) gives f0x (0, x∗ (0), u∗ (0)) − {−(p∗ (1)) − (p∗ (1)) [fx (0, x∗ (0), u∗ (0))] +(λ0∗ ) [q0x (x∗ (0))] + (α0∗ ) [g0x (x∗ (0))]} = 0 , or p∗ (0) − p∗ (1) = [fx (0, x∗ (0), u∗ (x))] p∗ (1) +[f0x (0, x∗ (0), u∗ (0))] − [q0x (x∗ (0))] λ0∗ , where we have defined p∗ (0) = [g0x (x∗ (x))] α0∗ . Differentiating L with respect to x(i), 1 ≤ i ≤ N − 1, and re-arranging terms gives p∗ (i) − p∗ (i + 1) = [fx (i, x∗ (i), u∗ (i))] p∗ (i + 1) +[f0x (i, x∗ (i), u∗ (i))] − [qix (x∗ (i))] λi∗ . Differentiating L with respect to x(N ) gives, p∗ (N ) = −[gN x (x∗ (N ))] αN∗ − [qN x (x∗ (N ))] λN∗ . It is convenient to replace αN∗ by −αN∗ so that the equation above becomes (6.9) p∗ (N ) = [gN x (x∗ (N ))] αN∗ − [qN x (x∗ (N ))] λN∗ . Differentiating L with respect to u(i), 0 ≤ i ≤ N − 1 gives [f0u (i, x∗ (i), u∗ (i))] + [fu (i, x∗ (i), u∗ (i))] p∗ (i + l) − [hiu (u∗ (i))] γ i∗ = 0 . We summarize our results in a convenient form in Table 6.1 Remark 1: Considerable elegance and mnemonic simplification is achieved if we define the Hamiltonian function H by (6.10) (6.9) (6.8) (6.7)

(6.6)

. . N − 1 p∗ (i) − p∗ (i + 1) = [fx (i. . x∗ (i). . x∗ (N ). . . . . . . N − 1 hi (u∗ (i)) ≤ 0 [f0u (i. u∗ (N − 1) maximizes N −1 i=0 then there exist p∗ (N ). p∗ (N ) = [gN x (x∗ (N ))] αN∗ − [qN x (x∗ (N ))] λN∗ (λN∗ ) qN (x∗ (N )) = 0 λi∗ ≥ 0. x(i).2. . x(i). u(i)) Table 6. such that f0 (i.1: initial condition: q0 (x∗ (0)) ≤ 0.6. u∗ (i)] p∗ (i + 1) +[f0x (i. γ 0∗ . . MAIN RESULT Suppose x∗ (0). u(i)) subject to the constraints below dynamics: i = 0. . αN∗ . u∗ (i))] + [fu (i. . λN∗ . . α0∗ . . N − 1 x(i + 1) − x(i) = f (i. x∗ (i)u∗ (i))] . . . . N − 1 qi (x∗ (i)) ≤ 0 control constraint: i = 0. λ0∗ . . . gN (x∗ (N )) = 0 state space constraint: i = 1. x∗ (i). . . γ N −1∗ . . . . x∗ (i). . . (λi∗ ) qi (x∗ (i)) = 0 γ i∗ ≥ 0 (γ i∗ ) hi (u∗ (i) = 0 79 . . u∗ (0). . g0 (x∗ (0)) = 0 final conditions: qN (x∗ (N )) ≤ 0. u∗ (i)] − [qix (x∗ (i)] γ i∗ transversality conditions: p∗ (0) = [g0x (x∗ (0))] α0∗ λ0∗ ≥ 0. (λ0∗ ) q0 (x∗ (0)) = 0 λN∗ ≥ 0. p∗ (i1 ) = [hiu (u∗ (i))] γ i∗ adjoint equations: i = 0. . .

8) is (6.e.13). Remark 2: If we linearize the dynamic equations about the optimal solution we obtain δx(i + 1) − δx(i) = [fx (i. x. (6. ·) are concave and the remaining function in (6. u∗ (i))]z(i) .6). . x∗ (i). p∗ (i + 1))] − [qix (x∗ (i))] λi∗ . whose homogeneous part is z(i + 1) − z(i) = [fx (i.12) Since the homogeneous part of the linear difference equations (6. p∗ (i + 1))] . Conditions (6. u∗ (i))] r(i + 1) . in this case we see from (6.12). p∗ (N ) = [gN x (x(N ))] αN∗ which means that p∗ (0) and p∗ (N ) are respectively orthogonal or transversal to the initial and final surfaces.8) become (6.. so that the initial and final conditions read g0 (x(0)) = 0.13) (6. p) = f0 (i.10) becomes [hiu (u∗ (i))] γ i∗ = [Hu (i.13) that u∗ (i) is an optimal solution of Maximize H(i.8) the adjoint equations. qN ≡ 0. Thus. but note that these 2n boundary conditions are mixed. 0≤i≤N −1. Suppose q0 ≡ 0. p∗ (i + 1)). x∗ (i). u. u∗ (i). gN (x(N )) = 0. x∗ (i). u∗ (i). x∗ (i). p∗ (i + 1))] . we have a total of 2n boundary conditions for the 2n-dimensional system of difference equations (6. subject to hi (u) ≤ 0 . u∗ (i). u) . and the p∗ (i) are called adjoint variables. i. x∗ (i). which describe surfaces in Rn . x. we note that in this case the initial and final conditions specify ( 0 + n ) conditions whereas the transversality conditions specify (n − 0 ) + (n − n ) conditions. u∗ (i))]δu(i) . x∗ (i). then CQ is satisfied. u. (6.6). which has for it adjoint the system r(i) − r(i + 1) = [fx (i. DISCRETE-TIME OPTIMAL CONTROL H(i. (6. x∗ (i.9) become respectively p∗ (0) = [g0x (x∗ (0))] α0∗ . Furthermore. (6. Furthermore.80 CHAPTER 6.5) are linear. Remark 4: The conditions (6. The dynamic equations then become x∗ (i + 1) − x∗ (i) = [Hp (i. some of them refer to the initial time 0 and the rest refer to the final time.1 are also sufficient. (6.6) and (6.11) p∗ (i) − p∗ (i + 1) = [Hx (i.7). and the adjoint equations (6.9) are called transversality conditions for the following reason. (i). For this reason the result is sometimes called the maximum principle. (6. u) + p f (i. and the necessary conditions of Table 6. 0≤i≤N −1 . u∗ (i))]δx(i) + [fu (i. x. u∗ (i). we call (6.7).5). 0 ≤ i ≤ N − 1 . x∗ . whereas (6. x∗ (i). Remark 3: If the f0 (i. ·.

. 0 ≤ i ≤ N − 1 .2. A and B are constant matrices. x(N ) are fixed. A and B are as ˆ ˆ in Exercise 1. x(0) is fixed. ˆ ˆ u(i) ∈ Rp . 0 ≤ i ≤ N − 1 can be transformed into a linear programming problem.6. 0 ≤ i ≤ N − 1 x(0) = x(0). |u(i))j | ≤ 1. Here x(0). ˆ and P = P is positive definite. 1 2 N −1 N −1 81 x(i) Qx(i) + where x(i) ∈ Rn . Q = Q is positive semi-definite. Maximize 1 u(i) P u(i) 2 i=0 i=0 subject to x(i + 1) − x(i) = Ax(i) + Bu(i). 1 ≤ j ≤ p. Exercise 2: Show that the minimal fuel problem. ˆ u(i) ∈ Rp . show that the optimal solution is unique and can be obtained by solving a 2n-dimensional linear difference equation with mixed boundary conditions. j=1 subject to x(i + 1) − x(i) = Ax(i) + Bu(i). MAIN RESULT Exercise 1: For the regulator problem. x(N ) = x(N ) . 0 ≤ i ≤ N − 1 x(0) = x(0).   N −1 i=0 Minimize  P |(u(i))j | .

DISCRETE-TIME OPTIMAL CONTROL .82 CHAPTER 6.

˙ (7. In Section 2 we study more general boundary conditions. We are concerned with the 83 . t ≥ t0 .Chapter 7 SEQUENTIAL DECISION PROBLEMS: CONTINUOUS-TIME OPTIMAL CONTROL OF LINEAR SYSTEMS We will investigate decision problems similar to those studied in the last chapter with one (mathematically) crucial difference. and to be piecewise continuous.1 The Linear Optimal Control Problem We consider a dynamical system governed by the linear differential equation (7.1): x(t) = A(t)x(t) + B(t)u(t). To understand the main ideas and techniques of analysis it will prove profitable to study the linear case first. Definition: A piecewise continuous function u : [t0 . ˙ where x(t) ∈ Rn and u(t) ∈ Rp are respectively the state and control of the system at time t. u(t)) . A choice of control has to be made at each instant of time t where t varies continuously over a finite interval. In Section 1 we present the general linear problem and study the case where the initial and final conditions are particularly simple. The control u(·) is constrained to take values in a fixed set Ω ⊂ Rp . we assume that they are piecewise continuous functions. x0 ∈ Rn be fixed and let tf ≥ t0 be a fixed time.1) Here A(·) and B(·) are n × n. Let c ∈ Rn . U denotes the set of all admissible controls. x(t). 7. The general nonlinear case is deferred to the next chapter.and n × p-matrix valued functions of time. ∞) → Ω will be called an admissible control. The evolution in time of the state of the systems to be controlled is governed by a differential equation of the form: x(t) = f (t.

x0 . and any t0 ≤ t1 ≤ t2 ≤ tf let φ(t2 . z) = {φ(t2 . ˙ initial condition: x(t0 ) = x0 . τ )B(τ )u(τ )dτ . CHAPTER 7. for any z ∈ Rn . Φ satisfies the differential equation ∂Φ ∂t (t.84 decision problem (7. t0 .2) Definition: (i) For any piecewise continuous function u(·) : [t0 . and let x∗ ∈ K. and (ii) c is the outward normal to a hyperplane supporting K at x∗ . z) is the set of states reachable at time t2 starting at time t1 in state z and using admissible controls. if a time t1 it is in state z. show that U is a convex set. t1 )z + t2 t1 Φ(t2 .) Lemma 1: φ(t2 .) Definition: Let K ⊂ Rn . t1 . z. Then u∗ is an optimal solution of (2) iff (i) x∗ (tf ) is on the boundary of K = K(tf . We say that c is the outward normal to a hyperplane supporting K at x∗ if c = 0. x0 ). Lemma 2: Suppose c = 0. t1 .) Proof: Clearly (i) is implied by (ii) because if x∗ (tf ) is in the interior of K there is δ > 0 such that (x∗ (tf ) + δc) ∈ K. t1 . u) denote the state of (7. (See Desoer [1970].e. z. but then . and the boundary condition Φ(t.1). provided we include in U any measurable function u : [t0 . Definition: Let Φ(t. u∗ ). t0 ≤ τ ≤ t ≤ tf .1) at time t2 . t1 . CONTINUOUS-TIME LINEAR OPTIMAL CONTROL Maximize c x(tf ).. ∞) → Ω. t1 . z) is a convex set. subject to dynamics: x(t) = A(t)x(t) + B(t)u(t) . (ii) Let K(t2 . The next result is well-known. be the transition-matrix function of the homogeneous part of (7. z) is convex even if Ω is not convex (see Neustadt [1963]). t0 ≤ t ≤ tf . tf ] → Rp . τ ). (7. The next result gives a geometric characterization of the optimal solutions of (2). z. τ ) = A(t)Φ(t. final condition: x(tf ) ∈ Rn . t1 . We call K the reachable set. t) ≡ In . K(t2 . (It is a deep result that K(t2 .1. τ ) . (ii) Assuming that U is convex show that K(t2 . and c x∗ ≥ c x for all x ∈ K . u) = Φ(t2 . t0 . Thus. u)|u ∈ U} . control constraint: u(·) ∈ U . (See Figure 7. and the control u(·) is applied. t1 .2). Exercise 1: (i) Assuming that Ω is convex. i. Let u∗ (·) ∈ U and let x∗ (t) = φ(t.

5) which is equivalent to (7.3) and (7. c (x∗ (tf ) + δc) = c x∗ (tf ) + δ|c|2 > c x∗ (tf ) . for all t ∈ [t0 . Then u∗ (·) is optimal iff (p∗ (t)) B(t)u∗ (t) = sup{(p∗ (t)) B(t)v|v ∈ Ω} . t0 )x0 + t0f Φ(tf .7. x0 . except possibly for a finite set.3) (7.6). tf ]. t0 )x0 + t0f Φ(tf . from the definition of K it follows immediately that u∗ is optimal iff c x∗ (tf ) ≥ c x for all x∈K. THE LINEAR OPTIMAL CONTROL PROBLEM x3 c 85 x∗ (tf ) c x2 K π ∗ = {x|c x = c x∗ (tf )} x1 Figure 7. τ )B(τ )u(τ )dτ (7. τ )B(τ )u (τ )dτ t ≥ t0f (p∗ (tf )) Φ(tf .1: c is the outward normal to π ∗ supporting K at x∗ (tf ) . tf ∗ ∗ t0 (p (tf )) Φ(tf . t0 ≤ t ≤ tf . ˙ final condition: p∗ (tf ) = c . τ )B(τ )u∗ (τ )dτ ] t ≥ (p∗ (tf )) [Φ(tf . Theorem 1: Let u∗ (·) ∈ U and let x∗ (t) = φ(t. t0 ≤ t ≤ tf .1. Finally. Let p∗ (t) be the solution of (7.6) . The beauty and utility of the theory lies in the following result which translates this characterization directly in terms of u∗ .4) (7.4): adjoint equation: p∗ (t) = −A (t)p∗ (t) . τ )B(τ )u(τ )dτ ] . ♦ The result above characterizes the optimal control u∗ in terms of the final state x∗ (tf ). t0 . Proof: u∗ (·) is optimal iff for every u(·) ∈ U (p∗ (tf )) [Φ(tf . t (7. u∗ ).

♦ But then from (7. u∗ (t). then there exists t∗ ∈ [t0 . (p∗ (t2 ))x∗ (t2 ) ≥ (p∗ (t2 )) x for all x ∈ K(t2 . Indeed if this is not the case. it follows that there exists δ > 0 such that (p∗ (t)) B(t)u∗ (t) < (p∗ (t)) B(t)v. Define u(·) ∈ U by ˜ u(t) = ˜ Then (7.7) and the sufficiency of (7. tf ] u∗ (t) otherwise . u.8) implies that tf ∗ t0 (p (t)) (7.e.9). t0 . Corollary 1: For t0 ≤ t1 ≤ t2 ≤ tf . t0 . p)|u ∈ Ω}. tf ]. p∗ (t)) . via the adjoint differential equation. tf ∗ t0 (p (τ )) B(τ )u∗ (τ )dτ ≥ tf ∗ t0 (p (τ )) B(τ )u(τ )dτ. giving a contradiction. x. x∗ (t1 )). t) so that (7. x. t0 . p) = p (A(t)x + B(t)u) . B(t)˜(t)dt > u tf ∗ t0 (p (t)) B(t)u∗ (t)dt . t∗ ∈ D. and since t∗ is a point of continuity of B(·) and u∗ (·). This condition is known as the maximum principle. x∗ (t).86 CHAPTER 7. CONTINUOUS-TIME LINEAR OPTIMAL CONTROL Now by properties of the adjoint equation we know that p∗ (t)) = (p∗ (tf )) Φ(tf . x∗ (t). we see that if u∗ (·) is optimal. (7. (7.6) is equivalent to (7. u. then (7. The situation is illustrated in Figure 7.5) is immediate. the outward normal p∗ (tf ) at time tf . Taking t1 = t0 in (7. Remark 1: The geometric meaning of (7. and we define M by M (t. p) = sup{H(t. t ∈ [t0 .5) is satisfied for t ∈ D.8) v |t − t∗ | < δ. t1 . p∗ (t)) = M (t. if c = p∗ (tf ) is the outward normal to a hyperplane supporting K(tf .7) we see that u∗ (·) cannot be optimal. then x∗ (t) is on the boundary of K(t.2. x0 ) at x∗ (tf ). for |t − t∗ | < δ . We shall show that if u∗ (·) is optimal then (7. and v ∈ Ω such that (p∗ (t∗ )) B(t∗ )u∗ (t∗ ) < (p∗ (t∗ )) B(t∗ )v .5) can be rewritten as H(t. x0 ) at x∗ (t). (7.10) . To prove the necessity let D be the finite set of points where the function B(·) or u∗ (·) is discontinuous. Remark 2: If we define the Hamiltonian function H by H(t.7).9) Exercise 2: Prove Corollary 1. i. This normal is obtained by transporting backwards in time.9) is the following. x0 ) and p∗ (t) is the normal to a hyperplane supporting K(t. x..

2) is f0 (x(tf )) instead of c x(tf ). Also show that this condition is sufficient for optimality if f0 is concave. x0 ) is convex (see remark in Exercise 1 above). (Hint: Use Lemma 1 of 5. { ] → ⊗ and u(·)piecewise continuous. We will analyze the problem in the same way as before. That is. ≤ tn ≤ tf such that u∗ (t) ≡ α or β on [ti . Suppose u∗ (·) is an optimal control. B(t) are constant.) Exercise 6: Assume that K(tf . t0 ≤ t ≤ tf . and then translate these conditions in terms of the control.2 More General Boundary Conditions We consider the following generalization of (7.) 7. (7. For convenience let T 0 = {z ∈ Rn |G0 z = b0 } .1 to show that if u∗ (·) is optimal. while b0 ∈ R 0 .) The next two exercises show how we can obtain important qualitative properties of an optimal control.7. ˙ initial condition: G0 x(t0 ) = b0 . t0 .e. Suppose that A(t) ≡ A and B(t) ≡ B are constant matrices and A has n real eigenvalues. Definition: Let p ∈ Rn . control constraint: u(·) ∈ U . T f = {z ∈ Rn |Gf z = bf } . . then f0x (x∗ (tf )(x∗ (tf ) − x) ≤ for all x ∈ K(tf . we first characterize optimality in terms of the state at the final time.10) where p∗ (·) is the solution of the adjoint equation (7. (Hint: first show that (p∗ (t)) B = γ1 exp(δ1 t) + .2). (ii) If A(t).. : [ . Maximize c x(tf ) subject to dynamics: x(t) = A(t)x(t) + B(t)u(t). δi in R. . so that B(t) is an n × 1 matrix.2. + γn exp(δn (t)) for some γi . 0 ≤ i ≤ n. x0 ). . MORE GENERAL BOUNDARY CONDITIONS 87 Exercise 3: (i) Show that m(t) = M (t. β]. Show that there exists an optimal control u∗ (·) such that u∗ (t) belongs to the boundary of Ω for all t. i. bf ∈ R f are fixed vectors.1. Show that there is an optimal control u∗ (·) and t0 ≤ t1 ≤ t2 ≤ . Exercise 4: Suppose that Ω is bounded and closed. Let z ∗ ∈ T 0 . . Exercise 5: Suppose Ω = [α. x∗ (t). (Hint: Show that (dm/dt) ≡ 0.3) with the final condition p∗ (tf ) = f0 (x∗ (tf )) . Let f0 : Rn → R be a differentiable function and suppose that the objective function in (7. final condition: Gf x(tf ) = bf . Show that u∗ (·) satisfies the maximum principle (7.11) In (7. We say that p is orthogonal to T 0 at z ∗ and we write p ⊥ T 0 (z ∗ ) if . t0 .11) G0 and Gf are fixed matrices of dimensions 0 xn and f × n respectively. The notion of the previous section is retained. show that m(t) is constant. p∗ (t)) is a Lipschitz function of t. ti+1 ).

CONTINUOUS-TIME LINEAR OPTIMAL CONTROL Figure 7. t0 .2: Illustration of (7. x0 ) p∗ (tf ) = c x∗ (tf ) CHAPTER 7.88 Rn Rn p∗ (t2 ) p∗ (t 1) Rn Rn x0 0) = x∗ (t1 ) K(t1 . x0 ) x∗ (t = K(t0 . x0 ) K(tf . t0 .9) for t1 = t0 . x0 ) x∗ (t2 ) K(t2 . t1 t2 tf t t0 . t0 . t0 .

r > c x∗ (tf ). p0 ≥ 0 and p ∈ Rn . t0 . (i) Suppose the Ω is convex.19) can hold only if p0 ≥ 0. (i) Suppose that u∗ (·) is optimal.12) (7.18) . p ∈ Rn . Since Ω is convex by hypothesis it follows by Exercise 1 of Section 1 that S 2 is convex.19) Letting r → ∞ we conclude that (7. Exercise 1: X(tf ) = {Φ(tf . S 2 by S 1 = {(r. such that (ˆ0 c + p) x∗ (tf ) ≥ (ˆ0 c + p) x for all x ∈ X(tf ) .13) (7. ˆ ˆ ˆ ˆ (7. (7. p ˆ (7. t0 )] (ˆ0 c + p) ⊥ T 0 (x∗ (t0 )) . u)|z ∈ T 0 . such that p0 r 1 + p x1 ≥ p0 r 2 + p x2 for all (r i .20) (7. z. 89 Lemma 1: Let x∗ (t0 ) ∈ T 0 and u∗ (·) ∈ U .13) are satisfied. 0)}. w ∈ K(tf . x ∈ T f } .14) is also satisfied.17) First of all S 1 ∩ S 2 = φ because otherwise there exists x ∈ X(tf ) ∩ T f such that c x > c x∗ (tf ) contradicting optimality of u∗ (·) by (7. ˆ [Φ(tf . MORE GENERAL BOUNDARY CONDITIONS p (z − z ∗ ) = 0 for all z ∈ T 0 . x∗ (t0 ). i = 1. not ˆ ˆ both zero. In R1+m define sets S 1 . 2. Similarly if z ∗ ∈ T f .12) and (7. u∗ ). Proof: Clearly u∗ (·) is optimal iff c x∗ (tf ) ≥ c x for all x ∈ X(tf ) ∩ T f . ˆ ˆ ˆ ˆ which is the same as (7.7.18) implies that p0 r + p x∗ (tf ) ≥ p0 c x + p x for all x ∈ X(tf ). Secondly. On the other hand letting r → ˆ c x∗ (tf ) we see that (7. not both ˆ ˆ ˆ zero. p ⊥ T f (z ∗ ) if p (z − z ∗ ) = 0 for all z ∈ T f . t0 .18) we get (7. S 2 = {(r. u(·) ∈ U}. and suppose that x∗ (tf ) ∈ T f . xi ) ∈ S i . p ˆ p ˆ p ⊥ T f (x∗ (tf )) .16) (7.5) there exists p0 ∈ R. If u∗ (·) is optimal. Let x∗ (t) = φ(t. S 1 is convex since T f is convex. x ∈ X(tf )} .19) can hold only if p0 c x∗ (tf ) + p x∗ (tf ) ≥ p0 c x + p x for all x ∈ X(tf ) .14) (ii) Conversely if there exist p0 > 0.2.15) (7. there exist p0 ∈ R. and p such that (7.15). x)|r = c x . t0 )z + w|z ∈ T 0 .12). Also from (7. ˆ ˆ ˆ ˆ In particular (7. Definition: Let X(tf ) = {Φ(tf . then u∗ (·) is ˆ ˆ optimal and (7. But then by the separation theorem for convex sets (see 5. t0 . x)|r > c x∗ (tf ).

ˆ ˆ ˜ Then from (7. u∗ ) ∈ T f is optimal for any c. ˆ ˆ ˆ ˆ or p (x − x∗ (tf )) ≥ 0 for all x ∈ T f ˆ (7. x ∈ T f . we illustrate a 2-dimensional situation where T 0 = {x0 }. (ii) Now suppose that p0 > 0 and p are such that (7.14) holds. ˆ which is the same as (7. .3 below. 0) is convex even if Ω is not (see Neustadt [1963]). p = (ˆ/ˆ0 ) will also satisfy (7. because by the definition of X(tf ) and Exercise 1. But in Figure ˆ 7.3) we are forced to set p0 = 0. t0 .13). but basically if T ˆ0 f Exercise 2 below). and (7. ♦ Remark 1: If it is possible to choose p0 > 0 then p0 = 1.12) we get 0 ≥ (ˆ0 c + p) Φ(tf . But it is known that K(tf . Remark 3: In (i) the convexity of Ω is only used to guarantee that K(tf .90 CHAPTER 7. t0 )(z − x∗ (t0 )) for all z ∈ T 0 . (7.13).13).14). t0 . ˆ ˆ˜ so that from (7. since then (7.15) u∗ (·) is optimal. Clearly then for some c (in particular for the c in Figure 7. in part (ii) of the Lemma we may assume p0 = 1.12) we get p0 c x∗ (tf ) ≥ p0 c x . ˆ We now translate the conditions obtained in Lemma 1 in terms of the control u∗ . x0 . ˆ and (7.12).12). Intuitively p0 = 0 means that it is so difficult to satisfy ˆ the initial and final boundary conditions in (7.21) can hold only if p (x − x∗ (tf )) = 0 for all x ∈ T f . Then in part (i) we must have p0 > 0. {Φ(tf .14) hold for any vector c whatsoever. so that (7. so that from (7. ˆ Remark 2: it would be natural to conjecture that in part (i) p0 may be chosen > 0.11) that optimization becomes a secondary matter. ˆ ˆ ˆ ˆ which can hold only if p1 c x∗ (tf ) + p x ≥ p0 c x∗ (tf ) + p x∗ (tf ) for all x ∈ T f .14). It follows that the control u∗ (·) ∈ U for which x∗ (tf ) = φ(tf .12). ˆ ˆ ˆ p p (7. p ˆ which can hold only if (7. In higher dimensions the reasons may be more ˆ f is “tangent” to X(t ) we may be forced to set p = 0 (see complicated. In particular. T f is the vertical line. t0 . ˆ ˆ ˜ but then by (7. 0) is convex. we note that part (i) is not too useful if p0 = 0. Finally (7. (7.13) are satisfied. and T f ∩ X(tf ) consists of just one vector. Finally.12) always implies (7. Let x ∈ X(tf ) ∩ T f . t0 )(z − x∗ (t0 )) + x∗ (tf )} ∈ X(tf ) for all z ∈ T 0 . CONTINUOUS-TIME LINEAR OPTIMAL CONTROL p0 r + p x ≥ p0 c x∗ (tf ) + p x∗ (tf ) for all r > c x∗ (tf ).13) we conclude that p x∗ (tf ) = p x .21) But {x − x∗ (tf )|x ∈ T f } = {z|Gf z = 0} is a subspace of Rn . Exercise 2: Suppose there exists z in the interior of X(tf ) such that z ∈ T f .

and (7.26) is implied by (7. p) = sup{H(t.25). x∗ (t0 )) . then there exist a number p∗ ≥ 0. p∗ (t)) . M (t. If u∗ (·) is optimal for (7. t0 .11).24) becomes equivalent to (7. ˆ ˆ ˆ 0 Then (7.22).23). [Here H(t. (7.12). (7. 0 Then u∗ (·) is optimal.7. (7. x.23) and (7. (ii) Conversely suppose there exist p∗ > 0 and p∗ (·) satisfying (7. t0 ≤ t ≤ tf ˙ (7.22) initial condition: p∗ (t0 )⊥T 0 (x∗ (t0 )) (7. t0 . p∗ (t)) = M (t. u∗ ) and suppose that x∗ (tf ) ∈ T f .] Proof: A repetition of a part of the argument in the proof of Theorem 1 of Section 1 show that if p∗ satisfies (7.14) and (7. u. MORE GENERAL BOUNDARY CONDITIONS 91 Theorem 1: Let x∗ (t0 ) ∈ T 0 and u∗ (·) ∈ U . x. then (7. Let p∗ = p0 and let p∗ (·) be the solution ˆ 0 of (7. 0 and the maximum principle H(t. not ˆ ˆ both zero. p ∈ Rn . whereas since K(tf . and (7.13) are respectively equivalent to (7.25) holds for all t ∈ [t0 .25) is equivalent to (7.24). (7. such that (7. tf ] → Rn . Let x∗ (t) = φ(t. and a 0 function p∗ : [t0 . . x∗ (t). so that (7. (ii) Suppose p∗ > 0 and (7.22) with the final condition p∗ (tf ) = p∗ c + p = p0 c + p .26) (i) Suppose u∗ (·) is optimal and Ω is convex. Then by Lemma 1 there exist p ≥ 0. (7. tf ] except possibly for a finite set. p)|v ∈ Ω}. (7.24) (7.24).12).26): (p∗ (tf )) x∗ (tf ) ≥ (p∗ (tf )) x for all x ∈ K(tf . (7. x∗ (t).22). u∗ (t). satisfying adjoint equation: p∗ (t) = −A (t)p∗ (t) . x∗ (t0 )) ⊂ X(tf ). not both identically zero. x∗ (t0 ). x.13).23). (7. t0 . Next if x ∈ X(tf ) we have 0 (ˆ0 c + p) x = (p∗ (tf )) x p ˆ = (p∗ (tf )) (Φ(tf .24). (i) Suppose that Ω is convex. p) = p (A(t)x + B(t)u).22).14) are satisfied.2.23) final condition: (p∗ (tf ) − p∗ c)⊥T f (x∗ (tf )) . v.13) and (7.26) are satisfied. Let p0 = p∗ and p = ˆ ˆ 0 0 p∗ (tf ) − p∗ c. t0 )z + w) .

3: Situation where p0 = 0 ˆ x0 = T 0 c t . x0 ) = X(tf ) f f Tf Figure 7. CONTINUOUS-TIME LINEAR OPTIMAL CONTROL Tf . t0 .92 CHAPTER 7. x (t ) = X(t ) ∗ K(tf .

t0 )x∗ (t0 )) . p ˆ p ˆ and so u∗ (·) is optimal by Lemma 1. p)|v ∈ Ω(t)}. x.2. and we require that u(t) ∈ Ω(t) for all t. t0 )x∗ (t0 )) = (p∗ (t0 )) (z − x∗ (t0 )) +(p∗ (tf )) (w + Φ(tf . 0). p) =sup{H(t. Thus (ˆ0 c + p) x∗ (tf ) ≥ (ˆ0 c + p) x for all x ∈ X(tf ) . and since (w+φ(tf . t0 )(z − x∗ (t0 )) p ˆ +(p∗ (tf )) (w + φ(tf . Show that Theorem 1 also holds for this case where.23) the first term on the right vanishes. x. x∗ (t0 )). ♦ Exercise 3: Suppose that the control constraint set is Ω(t) which varies continuously with t. in (7.7. v. t0 . t0 )x∗ (t0 )) ∈ K(tf .25). it follows from (7. Exercise 4: How would you use Exercise 3 to solve Example 3 of Chapter 1? .26) that the second term is bounded by (p∗ (tf )) x∗ (tf ). 93 But by (7. M (t. t0 . Hence (ˆ0 c + p) x = (p∗ (f )) Φ(tf . MORE GENERAL BOUNDARY CONDITIONS for some z ∈ T 0 and some w ∈ K(tf .

94 CHAPTER 7. CONTINUOUS-TIME LINEAR OPTIMAL CONTROL .

t0 ≤ t ≤ tf . ˙ Rn Rp u∗ (·) (8. it is possible to convey the main ideas of the proofs at an intuitive level and we shall do so.1) where x(t) ∈ is the state and u(t) ∈ is the control.2 is presented in Section 1. Suppose is an optimal control ∗ (·) is the corresponding trajectory.1. x. (t). We are interested in the optimal control of a system whose dynamics are governed by the nonlinear differential equation x(t) = f (t. tf ] × Rn × Rp → Rn satisfies the following conditions: 95 . However. In the case of linear systems we obtained the necessary and x conditions for optimality by comparing x∗ (·) with trajectories x(·) corresponding to other admissible controls u(·). Section 3 deals with the minimum-time problem and Section 4 considers the important special case of linear systems with quadratic cost. [1962]. et al. An alternative form of the objective function is discussed in Section 2. Unfortunately when f is nonlinear such a characterization is not available. This comparison was possible because we had an explicitly characterization of x(·) in terms of u(·). But first we need to impose some regularity conditions on the differential equation (8.1 Main Results 8. 8.) The principal result. in Section 5 we discuss the so-called singular case and also analyze Example 4 of Chapter 1.1 Preliminary results based on differential equation theory. We can then estimate the difference between x(·) and x∗ (·) by the solution to a linear differential equation as shown in Lemma 1 below. (For complete proofs see (Lee and Markus [1967] or Pontryagin. u(t)) . We assume throughout that the function f : [t0 .Chapter 8 SEQUENTIAL DECISION PROBLEMS: CONTINUOUS-TIME OPTIMAL CONTROL OF NONLINEAR SYSTEMS We now present a sweeping generalization of the problem studied in the last chapter.1). which is a direct generalization of Theorem 1 of 7. Instead we shall settle for a comparison between the trajectory x∗ (·) and trajectories x(·) obtained by perturbing the control u∗ (·) and the initial condition x∗ (t0 ). Unfortunately we are forced to omit the proofs of the results since they require a level of mathematical sophistication beyond the scope of these Notes.. Finally.

u(·)). . . CONINUOUS-TIME OPTIMAL CONTROL 1. m is a nonnegative integer. . tf ]. t1 . there exists a unique solution x(t) = φ(t. (·)) = [ ∂f (t. . u(·)) = In . there exist finite number β and γ such that |f (t. for every t1 ∈ [t0 . fx . i = 1. . Now let Ω ⊂ Rp be a fixed set and let U be set of all piecewise continuous functions u(·) : [t0 . u). 2. Furthermore. i = 1. z. tf ] → Ω. tf ]×Rn × Rp . . t1 . 2. t0 < t1 < t2 < . u(·)) is the solution of the linear homogeneous differential equation ∂Φ ∂t (t. i = 1. ˙ satisfying the initial condition x(t1 ) = z . . u1 . for every finite α. . Let x∗ ∈ Rn be a fixed initial condition. x. . u ∈ Rp with |u| ≤ α . um ) is said to be a perturbation data for u∗ (·) if 1. . Let u∗ (·) ∈ U be fixed and let D∗ be the set of discontinuity points of u∗ (·). ·. . x(t). u(t))]Φ(t. z. for fixed t1 ≤ t2 in [t0 . . t1 ≤ t ≤ tf . tf ]. t1 . t1 . . t1 . (t). z. u. The following result is proved in every standard treatise on differential equations. . u(·)) = ∂φ ∂z (t2 . u)| ≤ β + γ|x| for all t ∈ [t0 . tm < tf . m. t1 . 1. z. . . f (t. i D. t1 ≤ t ≤ tf . 0 Definition: π = (t1 . tf ] and fixed u(·). tf ]. m. t1 . fu are continuous on [t0 . . tf ]. . z. of the differential equation x(t) = f (t. ∂x and the initial condition Φ(t1 . x. u(t)) . . u(·)) . except for a finite subset D ⊂ [t0 . the functions f. ui ∈ Ω. ·) : Rn xRp → Rn is continuously differentiable in the remaining variables (x. x ∈ Rn . for each fixed t ∈ [t0 . the function φ(t2 . tf ] → Rp . and every piecewise continuous function u(·) : [t0 . . . Moreover. m. and 3. . and 4. . . t1 ≤ t ≤ tf . m (recall that D is the set of ≥ 0. . u(·)) : Rn → Rn is differentiable. and ti ∈ D ∗ discontinuity points of f ). .96 CHAPTER 8. z. 3. tm . . the n × n matrix-valued function Φ defined by Φ(t2 . Theorem 1: For every z ∈ Rn . ·.

the perturbed control u(π. u∗ tj ))] . t0 .1. tj ] = φ for i = j. m u∗ (t) otherwise . and it is omitted (see for example (Lee and Markus [1967])).ε) (·)). t ∈ [ti . 0 More precisely we have the following result which we leave as an exercise. x∗ (tj ). and using controls u(·) ∈ U . z. t1 . Remark: By Lemma 1 (x∗ (t)+εh(π. u∗ (t1 ))] i . u(π. u∗ (·)) and let xε (t) = φ(t. t ∈ [t0 . ti+1 ) . tm )[f (tj .1. x∗ (tj ).0) (t)|πis a perturbation data for u∗ (·). x∗ (t1 ). x∗ (t1 ). ti ] . uj ) − f (tj . ti ] [tj −ε j . 0)} . i = 1. u∗ (tj ))] Φ(t. . 0 Now let x∗ (t) = φ(t. u∗ (·)). t0 . t2 ) j = Φ(t. In particular for ξ = 0. MAIN RESULTS 97 Let ε(π) > 0 be such that for 0 ≤ ε ≤ ε(π) we have [ti − ε i . t0 . t0 )ξ + j=1 j (See Figure 8. t0 . . Lemma 1: lim |xε (t) − x∗ (t) − εh(π. u(·))|u(·) ∈ U} be the set of states reachable at time t.ε) (·) ∈ U corresponding to π is defined by u(π. The following lemma gives an estimate of x∗ (t) − xε (t). tf ]. t0 )ξ + j=1 m Φ(t. . t ∈ [tm . x∗ (tj ). tf ] for all i.ε) = x∗ . and a function x(ξ. u1 ) − f (t1 . t1 ]. t ∈ [t0 . The proof of the lemma is a straightforward exercise in estimating differences of solutions to differential equations.ε) . t1 ) = 0 Φ(t2 . uj ) − f (tj . and [ti −ε i .ξ) ) belongs to the set K(t. x∗ (t1 ). t1 ) 1 . x∗ . t0 )ξ + Φ(t. tj )[f (tj . x(ξ. where h(π. tf ] let K(t. Let Φ(t2 .ξ) (·) the linearized (trajectory) perturbation corresponding to (π. x(ξ. t0 . Definition: For z ∈ Rn . the set x∗ (t) + Q(t) can serve as an approximation to the set K(t. x∗ ).0) (·)is the linearized perturbation corresponding to(π. Definition: Any vector ξ ∈ Rn is said to be a perturbation for x∗ . starting at time t0 in state z. tf ] . t0 .ε) (t) = Φ(t. . ξ). let Q(t) = {h(π.ε) (t)| = 0 for t ∈ [t0 . 0 ε→0 and ε→0 lim 1 ε (x(ξ. . Then for 0 ≤ ε ≤ ε(π).ε) − x∗ ) = ξ . t0 )ξ = Φ(t.) We call h(π. z) = {φ(t. t1 )[f (t1 . Definition: For each t ∈ [t0 .ε) (·) is given by ε→0 h(π.ε) (t) = ui for all t ∈ [ti − ε i . x∗ (tj ).ε) defined for 0 ε > 0 is said to be a perturbed initial condition if lim x(ξ. t ∈ [t1 . and h(π.ε) ) up to an error of order o(ε). = Φ(t.8. ti ] ⊂ [t0 .

2) (8.4) . Let p∗ (t). ˙ initial condition: x(t0 ) = x∗ . (8.1. t0 ≤ t ≤ tf . u∗ (·)).1. x∗ (t). ˙ ∂x (8.4) and (8. u : [t0 . tf ] → Ω and u(·) piecewise continuous .e. x∗ (t)) .1: Illustration for Lemma 1. u(t)) .1.5): adjoint equation: p∗ (t) = −[ ∂f (t.3) where ψ : Rn → R is differentiable and f satisfies the conditions listed earlier.) Show that Q(t) ⊂ C(K(t.98 u CHAPTER 8. Let u∗ (·) ∈ U be an optimal control and let x∗ (t) = φ(t. 0 final condition: x(tf ) ∈ Rn .3): Maximize ψ(x(tf )) subject to dynamics: x(t) = f (t. Exercise 1: (Recall the definition of the tangent cone in 5. control constraint: u(·) ∈ U . Theorem 2: Consider the optimal control problem (8. be the solution of (8.. i. ε) εhπξ t1 | | | | t2 t3 tf Figure 8. t0 ≤ t ≤ tf . u∗ (t))] p∗ (t). x∗ ). t0 ≤ t ≤ tf . 0 We can now prove a generalization of Theorem 1 of 7. t0 . x(t). t0 ≤ t ≤ tf . CONINUOUS-TIME OPTIMAL CONTROL u1 u(πε) (·) u∗ (·) ε | 1 u2 t1 | | u3 ε 2 ε | | 3 t0 x t2 t3 tf x∗ (·) | | xε (·) x( ξ. be the 0 corresponding trajectory. x∗ . t0 .

t∗ ). x∗ (t).1.0) (t∗ ) so that (8. If we consider the perturbation data π = (t∗ . t1 )C(t1 ) ⊂ C(t2 ) . u∗ (t∗ ))] > 0 . Φ(t2 . x∗ (tf )) . Remark: From Lemma 1 we know that if h ∈ C(t) then (x∗ (t) + εh) belongs to K(t. (8. i. Also h(π. x∗ (t0 )) up to an error of order o(ε). 99 (8. The proof of the lemma depends upon some deep topological results and is omitted. But first we need some simple properties of the sets Q(t) which we leave as exercises. v). p∗ (t)) = M (t.2 More general boundary conditions. x. u∗ (t).1. Definition: Let C(t) denote the closure of Q(t). t0 .2) ψx (x∗ (tf ))h ≤ 0 for all h ∈ Q(tf ) . (ii) for t0 ≤ t1 ≤ t2 ≤ tf . ).0) (tf ) = Φ(tf . Lemma 2.0) (tf ) > 0 which contradicts (8. 1. below. x. u. t∗ )h(π. 0 and so by Lemma 1 of 5. Then for all ε > 0 sufficiently small. x∗ ) .8. Φ(t2 . Specifically. asserts further that if h is in the interior of C(t) then in fact (x∗ (t) + εh) ∈ K(t.4) we can see that p∗ (t∗ ) = p∗ (tf ) Φ(tf .1. The problem involving more general boundary conditions is much more complicated and requires more refined analysis. (ii) for t0 ≤ t1 ≤ t2 ≤ tf . x. Exercise 3: Show that (i) C(t) is a convex cone. [Here H(t. x∗ (t). p∗ (t)) ψ(x∗ (tf )) . then λh ∈ Q(t). p) = p f (t.8) is equivalent to p∗ (t∗ ) h(π.9) is equivalent to p∗ (tf ) h(π.9) Now from (8. v) − f (t∗ .8) 8.1 ψ(x∗ (tf ))h ≤ 0 for all h ∈ C(K(tf . x(t∗ ). .7). Instead we offer a plausibility argument. p∗ (t∗ ) [f (t∗ . tf ] except possibly for a finite set. x(t∗ ).7) Now suppose that (8.6) does not hold from some t∗ ∈ D ∗ ∪ D. MAIN RESULTS final condition: p∗ (tf ) = Then u∗ (·) satisfies the maximum principle H(t.e. ♦ (8.5) (8. t0 . Lemma 1 needs to be extended to Lemma 2 below. Exercise 2: Show that (i) Q(t) is a cone. if h ∈ Q(t) and λ ≥ 0. u.. p)|v ∈ Ω}].0) (t∗ ) > 0 . p) = sup{H(t. Then there exists v ∈ Ω such that (8. t0 . v. In Theorem 2 the initial condition is fixed and the final condition is free. Proof: Since u∗ (·) is optimal we must have ψ(x∗ (tf )) ≥ ψ(z) for all z ∈ K(tf . then (8. t1 )Q(t1 ) ⊂ Q(t2 ) . x∗ ). x. M (t. 0 and in particular from (8. t0 . Lemma 2: Let h belong to the interior of the cone C(t). x∗ (t0 )) for ε > 0 sufficiently small.6) for all t ∈ [t0 .

100

CHAPTER 8. CONINUOUS-TIME OPTIMAL CONTROL
(x∗ (t) + εh) ∈ K(t, t0 , x∗ ) . 0 (8.10)

Plausibility argument. (8.10) is equivalent to εh ∈ K(t, t0 , x∗ (t0 )) − {x∗ (t)} , where we have moved the origin to x∗ (t). The situation is depicted in Figure 8.2. ˆ K(ε) ˆ C(ε) o(ε) K(t1 , t0 , x∗ ) − {x∗ (t)} εh 0

(8.11)

h

δε

C(t)

Figure 8.2: Illustration for Lemma 2. ˆ Let C(ε) be the cross-section of C(t) by a plane orthogonal to h and passing through εh. Let ˆ K(ε) be the cross-section of K(t, t0 , x∗ ) − {x∗ (t0 )} by the same plane. We note the following: 0 ˆ ˆ (i) by Lemma 1 the distance between C(ε) and K(ε) is of the order o(ε); ˆ (ii) since h is in the interior of C(t), the minimum distance between εh and C(ε) is δε where δ > 0 is independent of ε. ˆ Hence for ε > 0 sufficiently small εh must be trapped inside the set K(ε). (This would constitute a proof except that for the argument to work we need to show that there ˆ are no “holes” in K(ε) through which εh can “escape.” The complications in a rigorous proof arise precisely from this drawback in our plausibility argument.) ♦ ∗ ) in a neighborhood of x∗ (t) when we Lemmas 1 and 2 give us a characterization of K(t, t0 , x0 perturb the control u∗ (·) leaving the initial condition fixed. Lemma 3 extends Lemma 2 to the case when we also allow the initial condition to vary over a fixed surface in a neighborhood of x∗ . 0 0 Let g0 : Rn → R 0 be a differentiable function such that the 0 × n matrix gx (x) has rank 0 n 0 0 0 ∗ 0 0 for all x. Let b ∈ R be fixed and let T = {x|g (x) − b }. Suppose that x0 ∈ T and let 0 (x∗ ) = {ξ|g 0 (x∗ )ξ = 0}. Thus, T 0 (x∗ ) + {x∗ } is the plane through x∗ tangent to the surface T x 0 0 0 0 0 T 0 . The proof of Lemma 3 is similar to that of Lemma 2 and is omitted also. Lemma 3: Let h belong to the interior of the cone {C(t)+Φ(t, t0 )T 0 (x∗ )}. For ε ≥ 0 let h(ε) ∈ Rn 0 1 be such that lim h(ε) = 0, and lim ( )h(ε) = h. Then for ε > 0 sufficiently small there exists ε→0 ε x0 (ε) ∈ T 0 such that (x∗ (t) + h(ε)) ∈ K(t, t0 , x0 (ε)) .

8.1. MAIN RESULTS

101

We can now prove the main result of this chapter. We keep all the notation introduced above. f Further, let gf : Rn → R f be a differentiable function such that gx (x) has rank f for all x. f ∈ Rn be fixed and let T f = {x|g f (x) − bf }. Finally, if x∗ (t ) ∈ T f let T f (x∗ (t )) = Let b f f f {ξ|gx (x∗ (tf ))ξ = 0}. Theorem 3: Consider the optimal control problem (8.12): Maximize ψ(x(tf )) subject to dynamics: x(t) = f (t, x(t), u(t)) , t0 ≤ t ≤ tf , ˙ initial conditions: g0 (x(t0 )) = b0 , final conditions: gf (x(tf )) = bf , control constraint: u(·) ∈ U , i.e., u : [t0 , tf ] → Ω and u(·) piecewise continuous .

(8.12)

Let u∗ (·) ∈ U , let x∗ ∈ T 0 and let x∗ (t) = φ(t, t0 , x∗ , u∗ (·)) be the corresponding trajectory. 0 0 Suppose that x∗ (tf ) ∈ T f , and suppose that (u∗ (·), x∗ ) is optimal. Then there exist a number 0 p∗ ≥ 0, and a function p∗ : [t0 , tf ] → Rn , not both identically zero, satisfying 0 adjoint equation: p∗ (t) = −[ ∂f (t, x∗ (t), u∗ (t))] p∗ (t), t0 ≤ t ≤ tf , ˙ ∂x initial condition: p∗ (t0 )⊥T 0 (x∗ ) , 0 final condition: (p∗ (tf ) − p∗ ψ(x∗ (tf )))⊥T f (x∗ (tf )) . 0 Furthermore, the maximum principle H(t, x∗ (t), u∗ (t), p∗ (t)) = M (t, x∗ (t), p∗ (t)) (8.16) (8.13) (8.14) (8.15)

holds for all t ∈ [t0 , tf ] except possibly for a finite set. [Here H(t, x, p, u) = p f (t, x, u, ), M (t, x, p) = sup{H(t, x, v, p)|v ∈ Ω}]. Proof: We break the proof up into a series of steps. Step 1. By repeating the argument presented in the proof of Theorem 2 we can see that (8.15) is equivalent to p∗ (tf ) h ≤ 0 for all h ∈ C(tf ) . (8.17)

Step 2. Define two convex sets S1 , S2 in R1+m as follows: S1 = {(y, h)|y > 0, h ∈ T f (x∗ (tf ))}, S2 = {(y, h)|y = ψx (x∗ (tf ))h, h ∈ {C(tf ) + Φ(tf , t0 )T 0 (x∗ )}} . 0 We claim that the optimality of (u∗ (·), x∗ ) implies that S1 ∩ Relative Interior (S2 ) = φ. Suppose 0 this is not the case. Then there exists h ∈ T f (x∗ (tf )) such that ψx (x∗ (tf ))h > 0 , (8.18)

102

CHAPTER 8. CONINUOUS-TIME OPTIMAL CONTROL
h ∈ Interior{C(tf ) + Φ(tf , t0 )T 0 (x∗ )} . 0 (8.19)

f f Now by assumption gx (x∗ (tf ) has maximum rank. Since gx (x∗ (tf ))h = 0 it follows that the Implicit Function Theorem that for ε > 0 sufficiently small there exists h(ε) ∈ Rn such that

gf (x∗ (tf ) + h(ε)) = bf ,

(8.20)

and, moreover, h(ε) → 0, (1/ε)h(ε) → h as ε → 0. From (8.18) and Lemma 3 it follows that for ε > 0 sufficiently small there exists x0 (ε) ∈ T 0 and uε (·) ∈ U such that x∗ (tf ) + h(ε) = φ(tf , t0 , x0 (ε), uε (·)) . Hence we can conclude from (8.20) that the pair (x0 (ε), uε (·)) satisfies the initial and final conditions, and the corresponding value of the objective function is ψ(x∗ (tf ) + h(ε)) = ψ(x∗ (tf )) + ψx (x∗ (tf ))h(ε) + o(|h(ε)|) , and since h(ε) = εh + o(ε) we get ψ(x∗ (tf ) + h(ε)) = ψ(x∗ (tf )) + ε)ψx (x∗ (tf ))h + o(ε) ; but then from (8.18) ψ(x∗ (tf ) + h(ε)) > ψ(x∗ (tf )) for ε > 0 sufficiently small, thereby contradicting the optimality of (u∗ (·), x∗ ). 0 Step 3. By the separation theorem for convex sets there exist p0 ∈ R, p1 ∈ Rn , not both zero, such ˆ ˆ that p0 y 1 + p1 h1 ≥ p0 y 2 + p1 h2 for all (y i , hi ) ∈ S1 , i = 1, 2 . ˆ ˆ ˆ ˆ (8.21)

Arguing in exactly the same fashion as in the proof of Lemma 1 of 7.2 we can conclude that (8.21) is equivalent to the following conditions: p0 ≥ 0 , ˆ p1 ⊥T f (x∗ (tf )) , ˆ Φ(tf , t0 ) (ˆ0 ψ(x∗ (tf )) + p1 )⊥T 0 (x∗ ) , p ˆ 0 and (ˆ0 ψx (x∗ (tf )) + p1 )h ≤ 0 for all h ∈ C(tf ) . p ˆ (8.24)

(8.22)

(8.23)

If we let p∗ = p0 and p∗ (tf ) = p0 ψ(x∗ (tf )) + p1 then (8.22), (8.23), and (8.24) translate respecˆ0 ˆ ˆ ˆ tively into (8.15), (8.14), and (8.17). ♦

tf ] → R1+m .2 Integral Objective Function In many control problems the objective function is not given as a function ψ(x(tf )) of the final state. (Hint: For the final part show that (d/dt) M (t. then there exists a function p∗ = (p∗ . then M ˜ ˜ Exercise 1: Prove Theorem 1. x. u∗ (t))] p∗ (t) . x(t).2. u(t))dt (8. x∗ (t). u(t))dt . To this end we defined the augmented system with state variable x = (x0 . If 0 0 (u∗ (·). x∗ (t). u(t)) = x(t) ˙ f (t. and with p∗ (t) ≡ constant and p∗ (t) ≥ 0. x.25) is equivalent to the ˜ x problem of maximizing ψ(˜(tf )) = x0 (tf ) . p∗ (t)) ≡ constant. u∗ (·)). u(t)). x. tf ] except possibly for a finite set. x(t). and we get the following result. Let u∗ (·) ∈ U. p) = sup{H(t.8. u(t)) x0 (t) ˙ ˜ ˜ . p∗ ) : [t0 . the boundary conditions. (8. gf (x) = bf are augmented g0 (˜) = ˜ x x0 g0 (x) = ˜0 = b 0 b0 and gf (˜) = gf (x) = bf . x∗ .26) subject to dynamics: x(t) = f (t.26): tf Maximize t0 f0 (t. control constraint: u(·) ∈ U . x(t). u) + p f (t. and M (t. u) = ˜ ˜˜ ˜ ˜ p0 f0 (t. not identically ˜ 0 0 zero. x= ˜ = f (t. t0 ≤ t ≤ tf . x(t). x∗ ) is optimal. v)|v ∈ Ω}. We proceed to show how such objective functions can be treated as a special case of the problems of the last section. and suppose that x∗ (tf ) ∈ T f . t0 .) ˜ · ˜ . Finally. x(t). Futhermore. x(t). Evidently then the problem of maximizing (8. satisfying 0 0 (augmented) adjoint equation: p∗ (t) = −[ ∂ f (t. p. x∗ (t). x. ˜ ˜ ∂x ˜ initial condition: p∗ (t0 )⊥T 0 (x∗ ) . u(t)) The initial and final conditions which are of the form g0 (x) = b0 . x) ∈ ˜ R1+m as follows: · f0 (t. the maximum principle ˜ ˜ H(t. u).25) The dynamics of the state. but rather as an integral of the form tf t0 f0 (t. Theorem 1: Consider the optimal control problem (8. let x∗ ∈ T o and let x∗ (t) = φ(t. p∗ (t)) ≡ 0. final conditions: gf (x(tf )) = bf .] ˜ ˜ ˜ (t. and control constraints are the same as before. ˙ initial conditions: g0 (x(t0 )) = b0 . p∗ (t). p∗ (t)) ˜ ˜ ˜ holds for all t ∈ [t0 . u∗ (t)) = M (t. x∗ (t). INTEGRAL OBJECTIVE FUNCTION 103 8. [Here H(t. x. p. if f0 and f do not explicitly depend on t. x. u) = p f (t. x subject to the augmented dynamics and constraints which is of the form treated in Theorem 3 of Section 1. 0 final condition: p∗ (tf )⊥T f (x∗ (tf )) . x∗ (t).

u(t)). x(t). (8.27) We analyze (8.29) Conversely from the solution z(·) of (8. with initial condition t(0) = t0 . t0 ≤ t ≤ tf . 0 ≤ s ≤ 1 . control constraint: u(·) ∈ U . where s(·) : [t0 . x. then it is easy to see that z(·) is the solution of dz ds (s) (8. 0 ≤ s ≤ 1 . u(t))dt subject to dynamics: x(t) = f (t.104 CHAPTER 8. (t). t0 ≤ t ≤ tf .28) = α(s)f (s. In the problem considered up to now the final time tf is assumed to be fixed.3. One such case is the minimum-time problem where we want to transfer the state of the system from a given initial state to a specified final state in minimum time. Now if x(·) is the solution of x(t) = f (t. 0 ≤ s ≤ 1 z(0) = x0 . tf ] → [0.27) by converting the variable time interval [t0 . v(s) = u(t(s)) . The equation for t is dt(s) ds = α(s) . Here α(s) is a new control variable constrained by α(s) ∈ (0. 1].1 Main result. (8. In many important cases the final time is itself a decision variable. CONINUOUS-TIME OPTIMAL CONTROL 8. This change of time-scale is achieved by regarding t as a new state variable and selecting a new time variable s which ranges over [0.27). final condition: gf (x(t)f )) = bf . u(t)) . tf ] into a fixed-time interval [0. x(t0 ) = x0 ˙ and if we define z(s) = x(t(s)). x(t). ∞) .3 Variable Final Time 8. t0 ≤ t ≤ tf . ˙ initial condition: g0 (x(t0 )) = b0 . . ˙ . final-time constraint: tf ∈ (t0 . s(·) is the solution of the differential equation s(t) = 1/α(s(t)). 1]. More generally. ∞). tf Maximize t0 f0 (t.29) we can obtain the solution x(·) of (8. 1] is the functional inverse of s(t). consider the optimal control problem (8.28) by x(t) = z(s(t)) . in fact. v(s)) . z(s). s(t0 ) = 0.

t0 . t0 ≤ t ≤ t∗ .27). 0 0 f α∗ (s) = (t∗ − t0 ) . v(s))α(s)ds (8. x∗ . Suppose that ((v ∗ (·). t∗ ) is optimal 0 0 f f for (8. ˙ initial constraint: g0 (z(0)) = b0 . z0 . ∗ (ii) Let z0 ∈ T 0 . u∗ (·)) be the 0 0 f corresponding trajectory. t(1) ∈ R . ∞) and let x∗ (t) = φ(t. t0 ≤ t ≤ tf . x∗ . α(s)) ∈ Ω × (0. 0 ·∗ ˜ (8. 0 u∗ (t) = v ∗ (s∗ (t)) . 0≤s≤1. z(s). final constraint: gf (z(1)) = bf . and ˜ 0 f with p∗ (t) ≡ constant and p∗ (t) ≥ 0.8. and α∗ (·) by ∗ z0 = x∗ 0 ∗ (s) = u∗ (t + s(t∗ − t )) v . v) ∈ R1+p : 1 Maximize subject to ˙ dynamics: (z(s). p∗ ) : [t0 . If (u∗ (·).30) such that the correspond∗ ing trajectory (t∗ (·). t(s)) = (f (t(s). u∗ (·) ∈ U . f ∗ where s∗ (·) is functional inverse of t∗ (·). z0 ) is optimal for (8. ∞) for 0 ≤ s ≤ 1 and v(·). f ∗ Then ((v ∗ (·). let t∗ ∈ (0. Define z0 . and suppose that x∗ (t∗ ) ∈ T f . where the state vector (t. then there exists a function p∗ = (p∗ . z ∗ (·)) satisfies the final conditions of (8.3. u∗ (t))] p∗ (t) .27) and (8. α(·) piecewise continuous. and t∗ by 0 f ∗ x∗ = z0 . Then (u∗ (·). 0≤s≤1. not identically zero. z0 ) is optimal for (8. x∗ . z) ∈ R1+m . z(s). satisfying 0 0 (augmented) adjoint equation: p (t) = −[ ∂ f (t. t∗ ∈ (t0 . and suppose that (u∗ (·).27). and let 0 f x∗ (t) = φ(t. u∗ (·)). α∗ (·)).30). f t∗ = t∗ (1) . x∗ . t∗ ] → R1+m . t∗ ) is optimal for (8.30). Define x∗ . v(s))α(s). α∗ (·)).32) . α∗ (·)) be an admissible control for (8. x∗ (t). Theorem 1: Let u∗ (·) ∈ U. v ∗ (·). and the control (α. and let (v ∗ (·).30).27). α(s)).30) is established in the following result. t(0) = t0 . t∗ ) is optimal for 0 f f ∗ (8. 0 f0 (t(s). ˜ ˜ ∂x ˜ initial condition: p∗ (t0 )⊥T 0 (x∗ ) . t0 . ∞).30). f Exercise 1: Prove Lemma 1. control constraint: (v(s). let x∗ ∈ T 0 . The relation between problems (8. VARIABLE FINAL TIME 105 With these ideas in mind it is natural to consider the fixed-final-time optimal control problem (8. Suppose that x∗ (t∗ ) ∈ T f .31) (8. u∗ (·) ∈ U.30) Lemma 1: (i) Let x∗ ∈ T 0 .

t∗ must be such that f ˆ H(t∗ . n+1 Furthermore. u∗ (t)) = M (t.106 CHAPTER 8. z ∗ (s). v ∗ (s))] λ∗ (s) n+1 0 +[ ∂f (t∗ (s). λ∗ .40) we have (λ∗ . if f0 and f do not explicitly depend on t. λ∗ ) ≡ 0 and ˜ ˜ 0 ˜ then from (8. ˜ ∗ Proof: By Lemma 1. tf ] except possibly for a finite set. 1] except possibly for a finite set. p∗ is not identically zero. the maximum principle λ∗ (s)f0 (t∗ (s). β ∈ (0. x∗ (t). t∗ (s) = t0 + s(t∗ − t0 ). there exists a function λ∗ = (λ∗ . w)β + λ∗ (s)β]|w ∈ Ω. z ∗ (s). z ∗ (s).38) final condition: λ∗ (1)⊥T f (z ∗ (1)) . ˜ ˜ holds for all t ∈ [t0 . The resulting trajectory is z ∗ (s) = x∗ (t0 + s(t∗ − t0 )). 1] → R1+n+1 . z ∗ (s). p∗ (t∗ ).39) holds for all s ∈ [0.34) (8. x∗ (t).30). f f (8. u∗ (t∗ )) = 0 . v ∗ (s) = u∗ (t0 + s(t∗ − t0 )) and α∗ (s) = (t∗ − t0 ) for 0 ≤ s ≤ 1 0 f f constitute an optimal solution for (8. but from (8. λ∗ ) : [0. t0 ≤ t ≤ t∗ . z ∗ (s). v ∗ (s))] λ∗ (s) ˙  λ∗ (t)   0 ∂z    = −  +[ ∂f (t∗ (s). CONINUOUS-TIME OPTIMAL CONTROL final condition: p∗ (t∗ )⊥T f (x∗ (t∗ )) . z ∗ (s). p∗ (t)) ≡ 0.36) (8. Let s∗ (t) = (t − t0 )/(t∗ − t0 ). t0 ≤ t ≤ t∗ . not 0 n+1 ∗ (s) ≡ constant and λ∗ (s) ≥ 0.33) Also the maximum principle ˜ ˜ H(t. p∗ ) : [t0 . x∗ (t∗ ).37) (8. v ∗ (s))] λ∗ (s)}α∗ (s) adjoint equation:    ∂z   ∂f0 ∗  λ∗ (t)  ˙  {[ ∂t (t (s). v ∗ (s))] λ∗ (s)}α∗ (s) ∂t  ∗ initial condition: λ∗ (0)⊥T 0 (z0 )        (8. f ˜ By Theorem 1 of Section 2. then from (8. and with λ0 0 (8. ∞)} n+1 (8.40) First of all.35)   ˙ 0 λ∗ (t) 0  {[ ∂f0 (t∗ (s). λ∗ ≡ constant. v ∗ (s))α∗ (s) 0 +λ∗ (s) f (t∗ (s). and define p∗ = (p∗ . 0 ≤ s ≤ 1 . Because if p∗ ≡ 0. p∗ (t). p∗ (t)) . satisfying identically zero. 0 0 f (8. f f ˜ f f ˆ Finally. z ∗ (s).38). so that in particular f f z ∗ (1) = x∗ (t∗ ). p∗ (t) = λ∗ (s∗ (t)). λ∗ (1) = 0 . λ∗ (1) = 0 so that we would have λ∗ ≡ 0 n+1 n+1 . w)β 0 +λ∗ (s) f (t∗ (s). z0 = x∗ .36). t∗ ] → R1+n by ˜ 0 f f f p∗ (t) = λ∗ (s∗ (t)). then M (t. v ∗ (s))α∗ (s) + λ∗ (s)α∗ (s) n+1 = sup{[λ∗ (s)f0 (t∗ (s). x∗ (t). Furthermore. z ∗ (s).

2 Minimum-time problems . z ∗ (s). x∗ (t). w) + λ∗ (s) f (t∗ (s). We consider the following special case of (8.35) and the fact that M (t.3. p∗ (t)) ≡ ˜ constant if f0 . u(t)).3. (8.44) . t∗ ] except possibly for a finite set. p∗ (t)) holds for all t ∈ [t0 . 0 (8.38) respectively imply (8. so that the optimal control problem consists of finding a control which transfers the system from state x0 at time t0 to state xf in minimum time. xf are fixed. t0 ≤ t ≤ t∗ .41) and the fact that λ∗ (1) = 0. z ∗ (s). t0 ≤ t ≤ tf ˙ initial condition: x(t0 ) = x0 .37) ˜ and (8. p∗ (tf )) ≥ 0 f and if f does not depend explicitly on t then M (t. x∗ (t). z ∗ (s). not identically zero. Then there exists a function p∗ : [t0 . final-time constraint: tf ∈ (t0 .42) is equivalent to (8. and. control constraint: u(·) ∈ U .46) (8. u∗ (t)) = M (t. M (t∗ . the last assertion of the Theorem follows from (8. p∗ (t).43). x∗ (t). u∗ (t))] p∗ (t).8. z ∗ (s). v ∗ (s)) 0 +λ∗ (s) f (t∗ (s). v ∗ (s)) + λ∗ (s) = 0 n+1 and λ∗ (s)f0 (t∗ (s).31). final condition: p∗ (t∗ ) ∈ Rn . z ∗ (s). Applying Theorem 1 to this problem gives Theorem 2. n+1 ˜ Finally. w)]|w ∈ Ω}. (8.35) follows from (8.41) Evidently (8.27): tf Maximize t0 (−1)dt subject to dynamics: x(t) = f (t. (t)) ≡ constant .43) In (8. x(t). t∗ ] → Rn . f Finally. ∞) and let u∗ : [t0 . Theorem 2: Let t∗ ∈ (t0 . x∗ (t).34) and (8.32) and (8. ♦ 8. Next. ∞) . p∗ . f are not explicitly dependent on t. satisfying f adjoint equation: p∗ (t) = −[ ∂f (t. ˙ f ∂x initial condition: p∗ (t0 ) ∈ Rn . (8. f Also the maximum principle H(t. VARIABLE FINAL TIME 107 which is a contradiction. final condition: x(tf ) = xf .42) (8. It is trivial to verify that p∗ (·) satisfies (8. t∗ ] → Ω be optimal. x∗ (t). x0 . on the other hand (8.39) is equivalent to λ∗ (s)f0 (t∗ (s).33). x∗ (tf ). z ∗ (s). v ∗ (s)) + λ∗ (s) f (t∗ (s). Let x∗ (·) be the corresponding f f trajectory.45) (8. v ∗ (s)) 0 = Sup {[λ∗ (s)f0 (t∗ (s).

1 1 and p∗ (t) = 2 The Hamiltonian H is given by H(x∗ (t). Suppose that u∗ (·) is optimal and x∗ (·) is the corresponding trajectory. p∗ (t). The control constraint set is Ω = [−1. Now the transition matrix function of the homogeneous part of (8. u = applied force. ˙ Solution: Taking x1 = x. By Theorem 2 there exists a non-zero solution p∗ (·) of p∗ (t) ˙1 p∗ (t) ˙2 =− 0 0 1 −α p∗ (t) 1 p∗ (t) 2 (8. and (8. x ˙ where m = mass. v) = (p∗ (t) − αp∗ (t))x∗ (t) + bp∗ (t)v 1 2 2 2 = eαt (p∗ (0) − αp∗ (0))x∗ (t) + pb∗ (t)v . CONINUOUS-TIME OPTIMAL CONTROL Exercise 2: Prove Theorem 2. 1 2 (8. σ = coefficient of friction.47) is 1 0 1 α (1 Φ(t. 1]. x(0) = x02 we wish to find an admissible control which brings the ˙ particle to the state x = 0. For simplicity we suppose that x ∈ R. (8. (8.49) .46) hold. and x = position of the particle. u ∈ R and u(t) constrained by |u(t)| ≤ 1. Example 1: The motion of a particle is described by m¨(t) + σ x(t) = u(t) .48) is p∗ (t) 1 p∗ (t) 2 or = e−α(t−τ ) − e−α(t−τ ) ) . We now study a simple example illustrating Theorem 2.47) where α = (σ/m) > 0 and b = (1/m) > 0. Starting with an initial condition x(0) = x01 .44). 1 2 2 2 1 ∗ α p1 (0) 1 + eαt (− α p∗ (0) + p∗ (0)) . 1 1 αt α (1 − e ) 0 eαt p∗ (0) 1 p∗ (0) 2 .45). x2 = x we rewrite the particle dynamics as ˙ x1 (t) ˙ x2 (t) ˙ = 0 1 0 −α x1 (t) x2 (t) + 0 b u(t) .48) such that (8. p∗ (t) ≡ p∗ (0) .108 CHAPTER 8. x = 0 in minimum time. τ ) = so that the solution of (8.

49) we see that p∗ (t) must be a strictly 1 2 2 monotonically increasing function so that from (8. 2 ˆ and p∗ (t) < 0 for t > t. 2 Case 2.3. u (t) = 2  ? if p∗ (t) = 0 .8.50).47) does not depend on t explicitly we must also have eαt (p∗ (0) − αp∗ (0))x∗ (t) + bp∗ (t)u∗ (t) ≡ constant. −p∗ (0) + αp∗ (0) < 0 : Evidently u∗ (·) can behave in one of two ways: 1 2 either u∗ (t) = or u∗ (t) ≡ −1 and p∗ (t) < 0 for all t.50) Furthermore. ˆ −1 for t > t 2 >0. . <0. Case 3. VARIABLE FINAL TIME so that from the maximum principle we can immediately conclude that   +1 if p∗ (t) > 0. since the right-hand side of (8.50) u∗ (·) can behave in one of two ways: either ˆ ˆ −1 for t < t and p∗ (t) < 0 for t < t. First of all since p∗ (t) ≡ 1 can have three qualitatively different forms. 2 ∗ −1 if p∗ (t) < 0. Also since p∗ (t) ≡ 0.49) and (8. −p∗ (0) + αp∗ (0) > 0: Evidently then.51) We now proceed to analyze the consequences of (8. we must 1 2 2 1 have in this case p∗ (0) = 0. Case 1. −p∗ (0) + αp∗ (0) = 0 : In this case p∗ (t) ≡ (1/α)p∗ (0). ˆ ˆ +1 for t > t and p2 u∗ (t) = or u∗ (t) ≡ +1 and p∗ (t) > 0 for all t. 2 ∗ (t) > 0 for t > t. p∗ (·) 1 2 (8. from (8. 2 109 (8. Hence u∗ (·) we can behave in one of two ways: 1 either u∗ (t) ≡ +1 and p∗ (t) ≡ 2 or u∗ (t) ≡ −1 and p∗ (t) ≡ 2 1 ∗ α p1 (0) 1 ∗ α p1 (0) ˆ ˆ +1 for t < t and p∗ (t) > 0 for t < t. 1 2 2 2 p∗ (0).

Let us follow this procedure. CONINUOUS-TIME OPTIMAL CONTROL Thus.52) and (8.52) and (8.54) is satisfied. Suppose we choose p∗ (0) such that −p∗ (0) = αp∗ (0) = 0 and p∗ (0) > 0. f f There are at least two ways of solving the two-point boundary value problem (8.110 CHAPTER 8.3. If (8. the optimal control u∗ is always equal to +1 or -1 and it can switch at most once between these two values. Integrating (8. which is the curve OB. (8.53) is satisfied. ξ2 (t) = − α (1 − eαt ) .54) backward in time and check of (8.53). with ξ1 (0) − ξ2 (0) = 0 . On the other hand.54). The optimal control is given by u∗ (t) = sgn p∗ (t) 2 1 1 = sgn [ α p∗ (0) + eαt (− α p∗ (0) + p∗ (0))] .53) forward in time and check if (8. .54) (8.52). 1 1 2 Thus the search for the optimal control reduces to finding p∗ (0). and then t∗ is the minimum time. ˙ 1 1 2 with initial condition x1 (0) = x10 . Then we must have 1 2 2 ∗ (t) ≡ 1.53) (8. x20 = x20 also satisfies the final condition x1 (t∗ ) = 0. f f (8. The latter approach is more advantageous because we know that any trajectory obtained by this procedure is optimal for initial conditions which lie on the trajectory. One way is to guess at the value of p∗ (0) and then integrate (8.54) is not satisfied then modify p∗ (0) and repeat. then u∗ (t) ≡ −1 1 2 2 and we get b ξ1 (t) = − α (−t + eαt −1 α ) b . if p∗ (0) is such that −p∗ (0) + αp∗ (0) = 0 and p∗ (0) < 0. x2 (t∗ ) = 0 .52) and (8. which is the curve OA in Figure 8.52) for some t∗ > 0. p∗ (0) such that the solution of the 1 2 differential equation x = x2 ˙ 1 1 x2 = −αx2 + b sgn[ α p∗ (0) + eαt (− α p∗ (0) + p∗ (0))] . An alternative is to guess at the value of p∗ (0) and then integrate (8. and (8. ξ2 (t) = b α (1 − eαt ) . This gives b ξ1 (t) = α (−t + eαt −1 α ) .54) backward in time give us a trajectory ξ(t) where u ˙ ˙ ξ1 (t) = −ξ2 (t) ˙ ξ2 (t) = αξ2 (t) − b .

8. Then [(1/α)p∗ (0) + 1 2 2 1 ˆ + p∗ (0))] will have a negative value for t ∈ (0. Hence we can synthesize the optimal control in feedback from: u∗ (t) = ψ(x∗ (t)) where the B u∗ ≡ −1 x2 u∗ ≡ 1 u∗ ≡ −1 O x1 u∗ ≡ 1 A Figure 8. and we get the 2 2 curve OEF . VARIABLE FINAL TIME u∗ ≡ −1 B C D u∗ ≡ 1 O E 111 ξ1 ξ2 u∗ ≡ 1 A F u∗ ≡ −1 Figure 8.3.54). and p∗ (0) < 0. (8. ∞).4: Optimal trajectories of Example 1. This give us the curve OCD. if we integrate (8. . We see then that the optimal control u∗ (·) has the following characterizing properties: u∗ (t) = 1 if x∗ (t) is above BOA or on OA −1 if x∗ (t) is below BOA or on OB . Finally if p∗ (0) is such that −p∗ (0) + 1 ˆ ˆ αp∗ (0) < 0.54) backwards in time we get trajectory ξ(t) where eαt (−(1/α)p∗ (0) 1 ˙ ξ(t) = −ξ2 (t) ˙ ξ2 (t) = αξ2 (t)+ ˆ −b for t < t ˆ .52) and (8. then u∗ (t) = 1 for t < t and u∗ (t) = −1 for t > t. ξ2 (0) = 0.52).3: Backward integration of (8. b for t > t with ξ1 (0) = 0. Hence. t) and a positive value for t ∈ 2 ˆ (t. Next suppose p∗ (0) is such that −p∗ (0) + αp∗ (0) > 0. and p∗ (0) < 0.

T is a fixed final time. x2 ) = 1 if (x1 . bf ∈ R f are given vectors. this will imply 0 u∗ (t) = whereas if p∗ = 0.58) (t)p∗ (t) .112 CHAPTER 8. final condition: Gf x(t) = bf . control constraint: u(t) ∈ Rp . ˙ 0 and p∗ (t)⊥T f (x∗ (t)) = {ξ|Gf ξ = 0} . and x0 ∈ Rn . Gf is a given f × n matrix. 0 ≤ t ≤ T . 2 0 If p∗ > 0. such that p p∗ (t) = −p∗ (−P (t)x∗ (t)) − A (t)p∗ (t) .55) subject to dynamics: x(t) = A(t)x(t) + B(t)u(t). 8.60) 1 −1 p∗ Q (t)B 0 (8. consider the optimal control problem (8. Specifically. not both zero.59) . (8. then we must have 0 p∗ (t) B(t) ≡ 0 because otherwise (8. u(·) piecewise continuous. v) = − 1 p∗ [x∗ (t) P (t)x∗ (t) + v Q(t)v] ˜ 2 0 +p∗ (t) [A(t)x∗ (t) + B(t)v] so that the optimal control u∗ (t) must maximize − 1 p∗ v Q(t)v + p∗ (t) B(t)v for v ∈ Rp . T ] → Rn . x2 ) is above BOA or on OA −1 if (x1 . In (8. positive semi-definite matrix whereas Q(t) is a p × p symmetric. p∗ (t). (8.56) we assume that P (t) is an n × n symmetric.58) cannot have a maximum. x2 ) is below BOA or on OB .4 Linear System. The Hamiltonian function is H(t. so that we must search for a number p∗ ≥ 0 and a function 0 ∗ : [0.56) (8. x∗ (t). ˙ initial condition: x(0) = x0 .57) (8. −1} is given by (see Figure 8. We apply Theorem 1 of Section 2.4) ψ(x1 . CONINUOUS-TIME OPTIMAL CONTROL function ψ : R2 → {1. Quadratic Cost An important class of problems which arise in practice is the case when the dynamics are linear and the objective function is quadratic.55): T Minimize 0 1 [x (t)P (t)x(t) + u (t)Q(t)u(t)]dt 2 (8. positive definite matrix.

v) is independent of v for values of t lying in a non-zero interval. t)) p∗ (T ) and hence from (8.56) 0 0 we can see that p∗ (t) = (Φ(T. s(·) piecewise continuous.59). τ ) be the transition matrix function of the homogeneous linear differential equation x(t) = ˙ A(t)x(t). (p∗ (t)/p∗ )) will satisfy all the necessary ˆ 0 0 conditions so that we can assume that p∗ = 1. 0 c(t)dt = T 0 (1 − s(t))f (k(t))dt .) Let Φ(t. Thus. For further details regarding the solution of this boundary value problem and for related topics see (See and Markus [1967]). p∗ (t).60) (p∗ (t)) Φ(T. implies ξ = 0 .8. control constraint: s(t) ∈ [0. (See (Desoer [1970]) for a definition of controllability and for the properties we use below. T ]. x∗ (t). τ )B(τ ) = 0 . 0 ≤ τ ≤ T . The problem can be summarized as follows: T Maximize subject to ˙ dynamics: k(t) = s(t)f (k(t)) − µk(t) . We are faced with the so-called singular case (because we are in trouble–not because the situation is rare).5 The Singular Case In applying the necessary conditions derived in this chapter it sometimes happens that H(t. but then from (8. because if p∗ = 0 then from (8. We illustrate this by analyzing Example 4 of Chapter 1.5. THE SINGULAR CASE 113 We make the following assumption about the system dynamics. 1]. final constraint: k(t) ∈ R . under the controllability assumption. 8. t)B(t) = 0 . Now if p∗ > 0 it is trivial that p∗ (t) = (1. In such cases the maximum principle does not help in selecting the optimal value of the control.61) we get p∗ (T ) = 0. p∗ (T )⊥T f (x∗ (T )) . 0 ≤ t ≤ T . then we must have p∗ (t) ≡ 0 which is a ˜ 0 contradiction. Assumption: The control system x(t) = A(t)x(t) + B(t)u(t) is controllable over the interval ˙ [0. and hence the optimal control is 0 given by (8. (8. p∗ > 0. 0 ≤ t ≤ T initial constraint: k(0) = k0 . Gf x∗ (T ) = bf . Then the controllability assumption is equivalent to the statement that for any ξ ∈ Rn ξ Φ(t.61) Next we claim that if the system is controllable then p∗ = 0. Hence if p∗ = 0. The optimal trajectory and the optimal control is 0 obtained by solving the following two-point boundary value problem: x∗ (t) = A(t)x∗ (t) + B(t)Q−1 (t)B (t)p∗ (t) ˙ p(t) = P (t)x∗ (t) − A (t)p∗ (t) ˙ x∗ (0) = x0 .

we note from (8.63).114 CHAPTER 8. (8. Here kG . p∗ ) by (1/p∗ )(p∗ .64) and the maximum principle holds. p∗ (t) > 1. ˙ The maximum principle says that H(t. lim fk (k) = ∞ . T ] → R.62) and (8. and fk (k) − µ > 0 according as k > kG whereas f (k) − µk < 0 according as k > kM . t)− and (p.62) says that the marginal product of capital is positive and this marginal product decreases with increasing capital. and a function p∗ : [0. 0 ≤ t ≤ T .64) simplifies to 0 p∗ (t) = −1(1 − s∗ (t))fk (k∗ (t)) − p∗ (t)[s∗ (t)fk (k∗ (t)) − µ] . p∗ ) we 0 0 0 0 can assume without losing generality that p∗ = 1. Now suppose that s∗ : [0. (See Figure 8. Such solutions exist and are unique by virtue of the assumptions (8. p∗ (t). CONINUOUS-TIME OPTIMAL CONTROL We make the following assumptions regarding the production function f : fk (k) > 0.62) k→0 (8. be the corresponding trajectory of the capital-to-labor ratio. which immediately implies that   1 if p∗ (t) > 1 ∗ s (t) = 0 if p∗ (t) < 1  ? if p∗ (t) = 1 We analyze separately the three cases above.63) is mainly for technical convenience and can be dispensed with without difficulty.) < < < > . not both identically zero. so that (8. Case 1. there exist a number p∗ ≥ 0. 1] at s∗ (t). if p∗ = 0 then from (8. such that 0 p∗ (t) = −p∗ (1 − s∗ (t))fk (k∗ (t)) − p∗ (t)[s∗ (t)fk (k∗ (t)) − µ] ˙ 0 with the final condition p∗ (T ) = 0 . Futhermore. k∗ (t). Hence we must have p∗ > 0 and then by replacing (p∗ .64) and (8. T ] → [0.68) is depicted in the (k.66) (8.67) The behavior of the solutions of (8.63) Assumption (8. (k. Then by Theorem 1 of Section 2.65) we must also 0 have p∗ (t) ≡ 0.62) that kG < kM . Assumption (8.5. kH are the solutions of fk (kG ) − µ = 0 and f (kM ) − µk = 0. p)−. s) = (1 − s)f (k∗ (t)) + p∗ (t)[sf (k∗ (t)) − µk∗ (t)] is maximized over s ∈ [0. fkk (K) < 0 for all k .65) (8. p∗ (t) = −p∗ (t)[fk (k∗ (t)) − µ] .68) (8. First of all. 1] is an optimal savings policy and let k∗ (t). t)−planes in Figure 8. (8. s∗ (t) = 1 : Then the dynamic equations become ˙ k∗ (t) = f (k∗ (t)) − µk∗ (t) . ˙ (8.6.

p∗ (t) = −fk (k∗ (t)) + µp∗ (t) . ˙ In turn then we must have k∗ (t) = 0 for t ∈ I so that s∗ (t)f (kG ) − µKG = 0 for t ∈ I .70) .5. Case 3. p∗ (t) < 1. ˙ giving rise to the behavior illustrated in Figure 8. so −fk (k∗ (t)) + µ = 0 for t ∈ I . (8.7. s∗ (t) = 0: Then the dynamic equations are ˙ k∗ (t) = −µk∗ (t) .8. or k∗ (t) = kG for t ∈ I . We face the singular case only if p∗ (t) = 1 for t ∈ I.66) ˙ we get −(1 − s∗ (t))fk (k∗ (t)) − [s∗ (t)fk (k∗ (t)) − µ] = 0 for t ∈ I . and hence.69) (8. where I is a non-zero interval. THE SINGULAR CASE 115 p l fk > µ fk < µ k kM f > µk kG p kM f < µk k t l t Figure 8. p∗ (t) = 1.5: Illustration for Case 1. Case 2.) Evidently if p∗ (t) = 1 only for a finite set of times t then we do not have to worry about this case. s∗ (t) =?: (Possibly singular case. But then we have p∗ (t) = 0 for t ∈ I so that from (8. kG s∗ (t) = µ f (kG ) for t ∈ I .

ˆ Exercise 1: A capital-to-labor ratio k is said to be sustainable if there exists s ∈ [0. p∗ (t) < 1 so that we are in Case 2. k kG kM Figure 8.69) and (8. and s∗ (t) = µ(kG /f (kG )) for t ∈ (t1 . CONINUOUS-TIME OPTIMAL CONTROL µk line of slope µ f (k) . We can now assemble separate cases to obtain the optimal control.70).116 f CHAPTER 8. (8. k∗ (t) = kG . This contradicts the definition of t2 so that this possibility cannot arise. The various possibilities are illustrated in Figure 8. The reason for this term is contained in the following exercise. t2 ). The capital-to-labor ratio kG is called the golden mean and the singular solution is called the golden path. s∗ (t) > 1 if k0 < kG . from the final condition (8. or we have p∗ (t) < 1. In particular we must have k < k . t2 ) so that p∗ (t) = 1.65) we know that for t close to T. for 0 ≤ t ≤ T . . or (B) there exists t2 ∈ (0. For t < t1 we either have p∗ (t) > 1. 1] such that ˆ ˆ − µk = 0.62).6: Illustration for assumptions (8. s∗ (t) = 0 if k > kG . We then have three possibilities depending on the value of k∗ (t2 ): (Bi) k∗ (t2 ) < kG : then p∗ (t2 ) < 0 so that p∗ (t) > 1 for t < t2 and we are in Case 1 so that ˙ ∗ (t) = 1 for t < t . T ] and then s∗ (t) = 0. Thus in the singular case the optimal solution is characterized by (8. Show that kG is the unique sustainable capital-to-labor ratio which maximizes ˆ sf (k) ˆ sustainable consumption (1 − s)f (k). We face two possibilities: Either (A) p∗ (t) < 1 for all t < [0.63). s 2 0 G (Bii) k∗ (t2 ) > kG : then p∗ (2 ) > 0 but then p∗ (t2 + ε) > 1 for ε > 0 sufficiently small and since ˙ p∗ (T ) = 0 there must exist t3 ∈ (t2 .9. T ) such that p∗ (t3 ) = 1. (Biii) k∗ (t2 ) − kG : then we can have a singular arc in some interval (t1 . T ) such that p∗ (t2 ) = 1 and p∗ (t) < 1 for t2 < t ≤ T . k∗ (t) = k0 e−µt . First of all.8. as in Figure 8.

also consult (Jacobson and Mayne [1970]). Several important generalizations of the maximum principle have appeared. the derivation of the maximum principle given in the book by Lee and Markus is more satisfactory. . whereas for a discussion of the singular case consult (Kelley. et al. On the one hand these include extensions to infinite-dimensional state spaces and on the other hand they allow for constraints on the state more general than merely initial and final constraints. and (Polak [1971]).6 Bibliographical Remarks The results presented in this chapter appeared in English in full detail for the first time in 1962 in the book by Pontryagin. 8. For applications of the maximum principle to optimal economic growth see (Shell [1967]). However.8. (McReynolds [1966]). but mathematically difficult. et al. et al. (Kelley [1962])... For an applications-oriented treatment of this subject the reader is referred to (Athans and Falb [1966]) and (Bryson and Ho [1969]). cited earlier. BIBLIOGRAPHICAL REMARKS p k 117 l k kG p t l t Figure 8. and (Balakrishnan and Neustadt [1964]). et al. [1967]).6. For a unified. Among the many useful techniques which have been proposed see (Lasdon.7: Illustration for Case 2. treatment see (Neustadt [1969]). For a less rigorous treatment of state-space constraints see (Jacobson. [1968]). That book contains many extensions and many examples and it is still an important source. [1971]). There is no single source of computational methods for optimal control problems.

Figure 8.8: Case 3, the singular case: behavior of k and p around kG.

Figure 8.9: The optimal solution of the example: panels for Cases (A), (Bi), and (Biii), showing p∗, s∗, and k∗ against t, with the levels kG, k0, µkG/f(kG) and the switch times t1, t2, T.


Chapter 9

Dynamic programming

SEQUENTIAL DECISION PROBLEMS: DYNAMIC PROGRAMMING FORMULATION

The sequential decision problems discussed in the last three chapters were analyzed by variational methods, i.e., the necessary conditions for optimality were obtained by comparing the optimal decision with decisions in a small neighborhood of the optimum. Dynamic programming (DP) is a technique which compares the optimal decision with all the other decisions. This global comparison, therefore, leads to optimality conditions which are sufficient. The main advantage of DP, besides the fact that it gives sufficiency conditions, is that DP permits very general problem formulations which do not require differentiability or convexity conditions or even the restriction to a finite-dimensional state space. The only disadvantage (which unfortunately often rules out its use) of DP is that it can easily give rise to enormous computational requirements.

In the first section we develop the main recursion equation of DP for discrete-time problems. The second section deals with the continuous-time problem. Some general remarks and bibliographical references are collected in the final section.

9.1 Discrete-time DP

We consider a problem formulation similar to that of Chapter VI; however, for notational convenience we neglect final conditions and state-space constraints:

Maximize Σ_{i=0}^{N−1} f0(i, x(i), u(i)) + Φ(x(N))   (9.1)
subject to
dynamics: x(i+1) = f(i, x(i), u(i)), i = 0, 1, …, N−1,
initial condition: x(0) = x0,
control constraint: u(i) ∈ Ωi, i = 0, 1, …, N−1.

In (9.1), the state x(i) and the control u(i) belong to arbitrary sets X and U respectively. X and U may be finite sets, or finite-dimensional vector spaces (as in the previous chapters), or even infinite-dimensional spaces. x0 ∈ X is fixed. The Ωi are fixed subsets of U. Finally, f0(i, ·, ·): X × U → R, f(i, ·, ·): X × U → X, and Φ: X → R are fixed functions.

The main idea underlying DP involves embedding the optimal control problem (9.1), in which the system starts in state x0 at time 0, into a family of optimal control problems with the same dynamics, objective function, and control constraint as in (9.1), but with different initial states and initial times. More precisely, for each x ∈ X and k between 0 and N−1, consider the following problem:

Maximize Σ_{i=k}^{N−1} f0(i, x(i), u(i)) + Φ(x(N))   (9.2)
subject to
dynamics: x(i+1) = f(i, x(i), u(i)), i = k, k+1, …, N−1,
initial condition: x(k) = x,
control constraint: u(i) ∈ Ωi, i = k, …, N−1.

Since the initial time k and initial state x are the only parameters in the problem above, we will sometimes use the index (9.2)_{k,x} to distinguish between different problems. We begin with an elementary but crucial observation.

Lemma 1: Suppose u∗(k), …, u∗(N−1) is an optimal control for (9.2)_{k,x}, and let x∗(k) = x, x∗(k+1), …, x∗(N) be the corresponding optimal trajectory. Then for any ℓ, k ≤ ℓ ≤ N−1, u∗(ℓ), …, u∗(N−1) is an optimal control for (9.2)_{ℓ,x∗(ℓ)}.

Proof: Suppose not. Then there exists a control û(ℓ), û(ℓ+1), …, û(N−1), with corresponding trajectory x̂(ℓ) = x∗(ℓ), x̂(ℓ+1), …, x̂(N), such that

Σ_{i=ℓ}^{N−1} f0(i, x̂(i), û(i)) + Φ(x̂(N)) > Σ_{i=ℓ}^{N−1} f0(i, x∗(i), u∗(i)) + Φ(x∗(N)).   (9.3)

But then consider the control ũ(k), …, ũ(N−1) with

ũ(i) = u∗(i) for i = k, …, ℓ−1, and ũ(i) = û(i) for i = ℓ, …, N−1,

and the corresponding trajectory, starting in state x at time k, x̃(k), …, x̃(N), where

x̃(i) = x∗(i) for i = k, …, ℓ, and x̃(i) = x̂(i) for i = ℓ+1, …, N.

The value of the objective function corresponding to this control for the problem (9.2)_{k,x} is

Σ_{i=k}^{ℓ−1} f0(i, x∗(i), u∗(i)) + Σ_{i=ℓ}^{N−1} f0(i, x̂(i), û(i)) + Φ(x̂(N)) > Σ_{i=k}^{N−1} f0(i, x∗(i), u∗(i)) + Φ(x∗(N))

by (9.3), so that u∗(k), …, u∗(N−1) cannot be optimal for (9.2)_{k,x}, contradicting the hypothesis. ♦
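Lemma 1 can be tested numerically on a small instance. The sketch below is not from the text: it builds a random problem with hypothetical sizes (three states, two controls, N = 4), finds an optimal control for (9.2)_{0,x0} by brute-force enumeration, and checks that every tail of it is optimal for the corresponding problem (9.2)_{ℓ,x∗(ℓ)}.

```python
# Toy numerical check of Lemma 1: tails of optimal controls are optimal.
from itertools import product
import random

random.seed(0)
N, X, U = 4, range(3), range(2)
f0  = {(i, x, u): random.random() for i in range(N) for x in X for u in U}
f   = {(i, x, u): random.randrange(3) for i in range(N) for x in X for u in U}
Phi = {x: random.random() for x in X}

def value(k, x, controls):
    # Objective of (9.2)_{k,x} under the control sequence controls[0..N-1-k].
    total = 0.0
    for i in range(k, N):
        total += f0[(i, x, controls[i - k])]
        x = f[(i, x, controls[i - k])]
    return total + Phi[x]

def best(k, x):
    # Brute force over all control sequences for (9.2)_{k,x}.
    return max(product(U, repeat=N - k), key=lambda c: value(k, x, c))

u_star = best(0, 0)
traj = [0]                                  # trajectory under u_star from x = 0
for i in range(N):
    traj.append(f[(i, traj[-1], u_star[i])])

for l in range(N):   # Lemma 1: the tail u_star[l:] is optimal for (9.2)_{l,x*(l)}
    assert value(l, traj[l], u_star[l:]) == value(l, traj[l], best(l, traj[l]))
print("Lemma 1 verified on the toy instance")
```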

From now on we assume that an optimal solution to (9.2)_{k,x} exists for all 0 ≤ k ≤ N−1 and all x ∈ X. Let V(k, x) be the maximum value of (9.2)_{k,x}. We call V the (maximum) value function.

Theorem 1: Define V(N, ·) by V(N, x) = Φ(x). Then V(k, x) satisfies the backward recursion equation

V(k, x) = Max{ f0(k, x, u) + V(k+1, f(k, x, u)) | u ∈ Ωk },  0 ≤ k ≤ N−1.   (9.4)

Proof: Let x ∈ X, let u∗(k), …, u∗(N−1) be an optimal control for (9.2)_{k,x}, and let x∗(k) = x, …, x∗(N) be the corresponding trajectory. We have

Σ_{i=k}^{N−1} f0(i, x∗(i), u∗(i)) + Φ(x∗(N)) = f0(k, x, u∗(k)) + { Σ_{i=k+1}^{N−1} f0(i, x∗(i), u∗(i)) + Φ(x∗(N)) }.   (9.5)

By Lemma 1 the left-hand side of (9.5) is V(k, x), whereas the term in braces is equal to V(k+1, f(k, x, u∗(k))). On the other hand, let u(k), …, u(N−1) be any control for (9.2)_{k,x} and let x(k) = x, …, x(N) be the corresponding trajectory. Then, by the definition of V, we have

Σ_{i=k}^{N−1} f0(i, x(i), u(i)) + Φ(x(N)) ≤ f0(k, x, u(k)) + V(k+1, f(k, x, u(k))),

with equality if and only if u(k+1), …, u(N−1) is optimal for (9.2)_{k+1,x(k+1)}. Combining these two facts we get

f0(k, x, u∗(k)) + V(k+1, f(k, x, u∗(k))) ≥ f0(k, x, u(k)) + V(k+1, f(k, x, u(k))) for all u(k) ∈ Ωk,

which is equivalent to (9.4). ♦

Corollary 1: Let u(k), …, u(N−1) be any control for the problem (9.2)_{k,x} and let x(k) = x, …, x(N) be the corresponding trajectory. Then

V(ℓ, x(ℓ)) ≥ f0(ℓ, x(ℓ), u(ℓ)) + V(ℓ+1, x(ℓ+1)),  k ≤ ℓ ≤ N−1,

and equality holds for all k ≤ ℓ ≤ N−1 if and only if the control is optimal for (9.2)_{k,x}.

Corollary 2: For k = 0, 1, …, N−1, let ψ(k, ·): X → Ωk be such that

f0(k, x, ψ(k, x)) + V(k+1, f(k, x, ψ(k, x))) = Max{ f0(k, x, u) + V(k+1, f(k, x, u)) | u ∈ Ωk }.

Then ψ(k, ·), k = 0, …, N−1, is an optimal feedback control, i.e., for any k, x the control u∗(k), …, u∗(N−1) defined by u∗(ℓ) = ψ(ℓ, x∗(ℓ)), k ≤ ℓ ≤ N−1, where

x∗(ℓ+1) = f(ℓ, x∗(ℓ), ψ(ℓ, x∗(ℓ))), k ≤ ℓ ≤ N−1, x∗(k) = x,

is optimal for (9.2)_{k,x}.
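Theorem 1 and Corollary 2 translate directly into a backward sweep when X and the Ωk are finite. The following is a minimal sketch, not from the text: the problem data (f0, f, Φ, Ωk, N) are hypothetical placeholders, and the code computes V by (9.4) together with the feedback ψ of Corollary 2, then confirms that rolling ψ forward attains V(0, x0).

```python
# Backward recursion (9.4) and the feedback control of Corollary 2,
# for a hypothetical finite problem instance.
N = 4
X = range(3)                                     # state space
Omega = {k: range(2) for k in range(N)}          # control constraint sets
f0  = lambda k, x, u: float(x * u - u)           # per-stage reward (assumed)
f   = lambda k, x, u: (x + u) % 3                # dynamics (assumed)
Phi = lambda x: float(x)                         # terminal reward (assumed)

V   = {(N, x): Phi(x) for x in X}                # V(N, x) = Phi(x)
psi = {}                                         # optimal feedback control
for k in reversed(range(N)):                     # backward sweep of (9.4)
    for x in X:
        q = {u: f0(k, x, u) + V[(k + 1, f(k, x, u))] for u in Omega[k]}
        psi[(k, x)] = max(q, key=q.get)          # maximizer defines psi(k, x)
        V[(k, x)] = q[psi[(k, x)]]

# Rolling the feedback control forward from (0, x0) attains V(0, x0).
x, total = 0, 0.0
for k in range(N):
    u = psi[(k, x)]
    total += f0(k, x, u)
    x = f(k, x, u)
print(total + Phi(x) == V[(0, 0)])               # prints True
```

Note that, as Corollary 2 asserts, the same table psi is optimal from every initial pair (k, x), not just from (0, x0).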

Remark: Theorem 1 and Corollary 2 are the main results of DP. The recursion equation (9.4) allows us to compute the value function, and in evaluating the maximum in (9.4) we also obtain the optimum feedback control. Note that this feedback control is optimum for all initial conditions. However, unless we can find a "closed-form" analytic solution to (9.4), the DP formulation may necessitate a prohibitive amount of computation, since we would have to compute and store the values of V and ψ for all k and x. For instance, suppose N = 10 and the state-space X is a finite set with 20 elements. Then we have to compute and store 10 × 20 values of V, which is a reasonable amount. But now suppose X = R^n and we approximate each dimension of x by 20 values. Then we have to compute and store 10 × (20)^n values of V. For n = 3 this number is 80,000, and for n = 5 it is 32,000,000, which is quite impractical for existing computers. This "curse of dimensionality" seriously limits the applicability of DP to problems where we cannot solve (9.4) analytically.

Exercise 1: An instructor is preparing to lead his class for a long hike. He assumes that each person can take up to W pounds in his knapsack. There are N possible items to choose from. Each unit of item i weighs wi pounds. The instructor assigns a number Ui > 0 for each unit of item i. These numbers represent the relative utility of that item during the hike. How many units of each item should be placed in each knapsack so as to maximize total utility? Formulate this problem by DP.
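One way to formulate the exercise, sketched here for a single knapsack with hypothetical weights, utilities, and capacity: take the items as the stages and the remaining weight capacity as the state, so that (9.4) becomes the classical integer-knapsack recursion. The class-wide problem simply repeats this for each person.

```python
# A DP formulation of the knapsack exercise (hypothetical data):
# stage i = item i, state = remaining capacity in integer pounds,
# control = number of units of item i taken.
w    = [3, 4, 2]           # w_i: pounds per unit of item i (assumed)
Util = [5.0, 7.0, 3.0]     # U_i: utility per unit of item i (assumed)
W = 10                     # capacity of one knapsack (assumed)
N = len(w)

# V[i][c] = maximum utility obtainable from items i..N-1 with capacity c.
V = [[0.0] * (W + 1) for _ in range(N + 1)]
for i in reversed(range(N)):
    for c in range(W + 1):
        V[i][c] = max(n * Util[i] + V[i + 1][c - n * w[i]]
                      for n in range(c // w[i] + 1))

# Recover the optimal numbers of units by re-maximizing stage by stage.
c, plan = W, []
for i in range(N):
    n = max(range(c // w[i] + 1),
            key=lambda n: n * Util[i] + V[i + 1][c - n * w[i]])
    plan.append(n)
    c -= n * w[i]
print(V[0][W], plan)       # prints 17.0 [0, 2, 1] for these numbers
```

Here the state space has W + 1 points per stage, so the computation is modest; this is the kind of problem for which the remark's "curse of dimensionality" does not bite.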

9.2 Continuous-time DP

We consider a continuous-time version of (9.2):

Maximize ∫_{t0}^{tf} f0(t, x(t), u(t)) dt + Φ(x(tf))   (9.6)
subject to
dynamics: ẋ(t) = f(t, x(t), u(t)), t0 ≤ t ≤ tf,
initial condition: x(t0) = x0,
control constraint: u: [t0, tf] → Ω, with u(·) piecewise continuous.

In (9.6), x ∈ R^n, u ∈ R^p, and Ω ⊂ R^p. Φ: R^n → R is assumed differentiable, and f0, f are assumed to satisfy the conditions stated in VIII.1.1. As before, for t0 ≤ t ≤ tf and x ∈ R^n, let V(t, x) be the maximum value of the objective function over the interval [t, tf] starting in state x at time t. Then it is easy to see that V must satisfy

V(t, x) = Max{ ∫_t^{t+∆} f0(τ, x(τ), u(τ)) dτ + V(t+∆, x(t+∆)) | u: [t, t+∆] → Ω },  ∆ ≥ 0,   (9.7)

and

V(tf, x) = Φ(x).   (9.8)

In (9.7), x(τ) is the solution of

ẋ(τ) = f(τ, x(τ), u(τ)), t ≤ τ ≤ t+∆, x(t) = x.

Let us suppose that V is differentiable in t and x. Then from (9.7) we get

V(t, x) = Max{ f0(t, x, u)∆ + V(t, x) + (∂V/∂t)(t, x)∆ + (∂V/∂x)(t, x) f(t, x, u)∆ + o(∆) | u ∈ Ω }.

Dividing by ∆ > 0 and letting ∆ approach zero, we get the Hamilton-Jacobi-Bellman partial differential equation for the value function:

(∂V/∂t)(t, x) + Max{ f0(t, x, u) + (∂V/∂x)(t, x) f(t, x, u) | u ∈ Ω } = 0.   (9.9)

Theorem 1: Suppose there exists a differentiable function V: [t0, tf] × R^n → R which satisfies (9.9) and the boundary condition (9.8). Suppose there exists a function ψ: [t0, tf] × R^n → Ω, with ψ piecewise continuous in t and Lipschitz in x, satisfying

f0(t, x, ψ(t, x)) + (∂V/∂x)(t, x) f(t, x, ψ(t, x)) = Max{ f0(t, x, u) + (∂V/∂x)(t, x) f(t, x, u) | u ∈ Ω }.   (9.10)

Then ψ is an optimal feedback control for the problem (9.6), and V is the value function.

Proof: Let t ∈ [t0, tf] and x ∈ R^n. Let x∗(τ) be the solution of

ẋ∗(τ) = f(τ, x∗(τ), ψ(τ, x∗(τ))), t ≤ τ ≤ tf, x∗(t) = x.   (9.11)

Note that the hypothesis concerning ψ guarantees a solution of (9.11). Let u∗(τ) = ψ(τ, x∗(τ)), t ≤ τ ≤ tf. Let û: [t, tf] → Ω be any piecewise continuous control and let x̂(τ) be the solution of

ẋ̂(τ) = f(τ, x̂(τ), û(τ)), t ≤ τ ≤ tf, x̂(t) = x.   (9.12)

To show that ψ is an optimal feedback control we must show that

∫_t^{tf} f0(τ, x∗(τ), u∗(τ)) dτ + Φ(x∗(tf)) ≥ ∫_t^{tf} f0(τ, x̂(τ), û(τ)) dτ + Φ(x̂(tf)).   (9.13)

To this end we note that

V(tf, x∗(tf)) − V(t, x∗(t)) = ∫_t^{tf} { (∂V/∂τ)(τ, x∗(τ)) + (∂V/∂x)(τ, x∗(τ)) ẋ∗(τ) } dτ = − ∫_t^{tf} f0(τ, x∗(τ), u∗(τ)) dτ,   (9.14)

using (9.9), (9.10), and (9.11). On the other hand,

V(tf, x̂(tf)) − V(t, x̂(t)) = ∫_t^{tf} { (∂V/∂τ)(τ, x̂(τ)) + (∂V/∂x)(τ, x̂(τ)) ẋ̂(τ) } dτ ≥ − ∫_t^{tf} f0(τ, x̂(τ), û(τ)) dτ,   (9.15)

using (9.9). From (9.8), (9.14), (9.15), and the fact that x∗(t) = x̂(t) = x, we conclude that

V(t, x) = Φ(x∗(tf)) + ∫_t^{tf} f0(τ, x∗(τ), u∗(τ)) dτ ≥ Φ(x̂(tf)) + ∫_t^{tf} f0(τ, x̂(τ), û(τ)) dτ,

so that (9.13) is proved. It also follows that V is the maximum value function. ♦
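It may help to see Theorem 1 at work on a scalar instance with hypothetical data; the exercise that follows treats the general matrix case. Recast in this chapter's maximization form, take f0(t, x, u) = −(x² + u²), Φ(x) = −x², and ẋ = ax + bu with u ∈ R. Trying V(t, x) = −r(t)x² in (9.9) and (9.10) gives the Riccati equation ṙ = b²r² − 2ar − 1 with r(T) = 1, and the feedback ψ(t, x) = −b r(t) x. The sketch below (hypothetical a, b, T, and a crude Euler discretization) integrates r backward and compares the payoff of ψ with two arbitrary competing controls, as (9.13) predicts.

```python
# Numerical illustration of Theorem 1 on a scalar linear-quadratic instance:
#   maximize  integral_0^T -(x^2 + u^2) dt - x(T)^2,   dx/dt = a x + b u.
a, b, T, dt = 0.5, 1.0, 2.0, 1e-4
steps = int(T / dt)

# Integrate the Riccati equation r' = b^2 r^2 - 2 a r - 1 backward from r(T) = 1.
r = [0.0] * (steps + 1)
r[steps] = 1.0
for i in reversed(range(steps)):
    r[i] = r[i + 1] - dt * (b * b * r[i + 1] ** 2 - 2 * a * r[i + 1] - 1)

def payoff(control):
    # Payoff of an arbitrary feedback law, from x(0) = 1, by Euler simulation.
    x, J = 1.0, 0.0
    for i in range(steps):
        u = control(i, x)
        J -= dt * (x * x + u * u)
        x += dt * (a * x + b * u)
    return J - x * x                     # add the terminal term Phi(x) = -x^2

J_psi = payoff(lambda i, x: -b * r[i] * x)   # the claimed optimal feedback
J_0   = payoff(lambda i, x: 0.0)             # two arbitrary competitors
J_lin = payoff(lambda i, x: -x)

print(round(-r[0], 4))                       # V(0, 1) = -r(0)
print(round(J_psi, 4), round(J_0, 4), round(J_lin, 4))
# J_psi is approximately -r(0), and both competitors do strictly worse.
```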

Exercise 1: Obtain the value function and the optimal feedback control for the linear regulator problem:

Minimize ½ x′(T)P(T)x(T) + ½ ∫_0^T { x′(t)P(t)x(t) + u′(t)Q(t)u(t) } dt
subject to
dynamics: ẋ(t) = A(t)x(t) + B(t)u(t), 0 ≤ t ≤ T,
initial condition: x(0) = x0,
control constraint: u(t) ∈ R^p,

where P(t) = P′(t) is positive semi-definite, and Q(t) = Q′(t) is positive definite. [Hint: Obtain the partial differential equation satisfied by V(t, x), and try a solution of the form V(t, x) = x′R(t)x where R is unknown.]

9.3 Miscellaneous Remarks

There is a vast literature dealing with the theory and applications of DP. The most elegant applications of DP are to various problems in operations research where one can obtain "closed-form" analytic solutions to the recursion equation for the value function. See (Bellman and Dreyfus [1962]) and (Wagner [1969]). Larson [1968] has developed computational techniques which greatly increase the range of applicability of DP where closed-form solutions are not available. In the case of sequential decision-making under uncertainties DP is about the only available general method. For an excellent introduction to this area of application see (Howard [1960]). For an important application of DP to computational considerations for optimal control problems see (Jacobson and Mayne [1970]). Finally, the book of Bellman [1957] is still excellent reading.

Bibliography

[1] K.J. Arrow and L. Hurwicz. Decentralization and computation in resource allocation. In R.W. Pfouts (ed.), Essays in Economics and Econometrics. University of North Carolina Press, 1963.
[2] M. Athans and P.L. Falb. Optimal Control. McGraw-Hill, 1966.
[3] A.V. Balakrishnan and L.W. Neustadt (eds.). Computing Methods in Optimization Problems. Academic Press, 1964.
[4] K. Banerjee. Generalized Lagrange Multipliers in Dynamic Programming. PhD thesis, College of Engineering, University of California, Berkeley, 1971.
[5] R. Bellman. Dynamic Programming. Princeton University Press, 1957.
[6] R. Bellman and S. Dreyfus. Applied Dynamic Programming. Princeton University Press, 1962.
[7] D. Blackwell and M.A. Girshick. Theory of Games and Statistical Decisions. John Wiley, 1954.
[8] J. Bruns. The function of operations research specialists in large urban schools. IEEE Trans. on Systems Science and Cybernetics, SSC-6(4), 1970.
[9] A.E. Bryson and Y.C. Ho. Applied Optimal Control. Blaisdell, 1969.
[10] M.D. Cannon, C.D. Cullum, and E. Polak. Theory of Optimal Control and Mathematical Programming. McGraw-Hill, 1970.
[11] G.B. Dantzig. Linear Programming and Extensions. Princeton University Press, 1963.
[12] C.A. Desoer. Notes for a Second Course on Linear Systems. Van Nostrand Reinhold, 1970.
[13] S.W. Director and R.A. Rohrer. Automated network design–the frequency-domain case. IEEE Trans. on Circuit Theory, CT-16(3), 1969a.
[14] S.W. Director and R.A. Rohrer. On the design of resistance n-port networks by digital computer. IEEE Trans. on Circuit Theory, CT-16(3), 1969b.
[15] S.W. Director and R.A. Rohrer. The generalized adjoint network and network sensitivities. IEEE Trans. on Circuit Theory, CT-16(3), 1969c.

[16] R. Dorfman and H.D. Jacoby. A model of public decisions illustrated by a water pollution policy problem. In R.H. Haveman and J. Margolis (eds.), Public Expenditures and Policy Analysis. Markham Publishing Co., 1970.
[17] R. Dorfman, P.A. Samuelson, and R.M. Solow. Linear Programming and Economic Analysis. McGraw-Hill, 1958.
[18] R.A. Dowell and R.A. Rohrer. Automated design of biasing circuits. IEEE Trans. on Circuit Theory, CT-18(1), 1971.
[19] The Economic Theory of Teams. Cowles Foundation Monograph, to appear.
[20] W.H. Fleming. Functions of Several Variables. Addison-Wesley, 1965.
[21] C.R. Frank. Production Theory and Indivisible Commodities. Princeton University Press, 1969.
[22] D. Gale. A geometric duality theorem with economic applications. Review of Economic Studies, XXXIV(1), 1967.
[23] A.M. Geoffrion. Duality in Nonlinear Programming: a Simplified Application-Oriented Treatment. Memo RM-6134-PR, The Rand Corporation, 1970a.
[24] A.M. Geoffrion. Primal resource directive approaches for optimizing nonlinear decomposable programs. Operations Research, 18, 1970b.
[25] F.J. Gould. Extensions of Lagrange multipliers in nonlinear programming. SIAM J. Applied Math., 17, 1969.
[26] H.J. Greenberg and W.P. Pierskalla. Surrogate mathematical programming. Operations Research, 18, 1970.
[27] R.A. Howard. Dynamic Programming and Markov Processes. MIT Press, 1960.
[28] R. Isaacs. Differential Games. John Wiley, 1965.
[29] D.H. Jacobson, M.M. Lele, and J.L. Speyer. New necessary conditions of optimality for problems with state-variable inequality constraints. J. Math. Analysis and Applications, to appear, 1971.
[30] D.H. Jacobson and D.Q. Mayne. Differential Dynamic Programming. American Elsevier Publishing Co., 1970.
[31] S. Karlin. Mathematical Methods and Theory in Games, Programming, and Economics, volume 1. Addison-Wesley, 1959.
[32] H.J. Kelley. Method of Gradients. In G. Leitmann (ed.), Optimization Techniques. Academic Press, 1962.
[33] H.J. Kelley, R.E. Kopp, and H.G. Moyer. Singular Extremals. In G. Leitmann (ed.), Topics in Optimization. Academic Press, 1967.

[34] D. Kendrick, H.S. Rao, and C.H. Wells. Water quality regulation with multiple polluters. In Proc. 1971 Jt. Autom. Control Conf., Washington U., St. Louis, August 11-13 1971.
[35] T.C. Koopmans. Objectives, constraints, and outcomes in optimal growth models. Econometrica, 35(1), 1967.
[36] H.W. Kuhn and A.W. Tucker. Nonlinear programming. In Proc. Second Berkeley Symp. on Math. Statistics and Probability. University of California Press, Berkeley, 1951.
[37] H.J. Kushner. Introduction to Stochastic Control. Holt, Rinehart, and Winston, 1971.
[38] R.E. Larson. State Increment Dynamic Programming. American Elsevier Publishing Co., 1968.
[39] L.S. Lasdon, S.K. Mitter, and A.D. Waren. The conjugate gradient method for optimal control problems. IEEE Trans. on Automatic Control, AC-12(1), 1967.
[40] E.B. Lee and L. Markus. Foundation of Optimal Control Theory. John Wiley, 1967.
[41] R.D. Luce and H. Raiffa. Games and Decisions. John Wiley, 1957.
[42] D.G. Luenberger. Quasi-convex programming. SIAM J. Applied Math., 16, 1968.
[43] O.L. Mangasarian. Nonlinear Programming. McGraw-Hill, 1969.
[44] S.R. McReynolds. The successive sweep method and dynamic programming. J. Math. Analysis and Applications, 19, 1967.
[45] J.S. Meditch. Stochastic Optimal Linear Estimation and Control. McGraw-Hill, 1969.
[46] M.D. Mesarovic, D. Macko, and Y. Takahara. Theory of Hierarchical, Multi-level Systems. Academic Press, 1970.
[47] C.E. Miller. The Simplex Method for Local Separable Programming. In R.L. Graves and P. Wolfe (eds.), Recent Advances in Mathematical Programming. McGraw-Hill, 1963.
[48] L.W. Neustadt. The existence of optimal controls in the absence of convexity conditions. J. Math. Analysis and Applications, 7, 1963.
[49] L.W. Neustadt. A general theory of extremals. J. Computer and System Sciences, 3(1), 1969.
[50] H. Nikaido. Convex Structures and Economic Theory. Academic Press, 1968.
[51] G. Owen. Game Theory. W.B. Saunders & Co., 1968.
[52] E. Polak. Computational Methods in Optimization: A Unified Approach. Academic Press, 1971.
[53] L.S. Pontryagin, V.G. Boltyanski, R.V. Gamkrelidze, and E.F. Mishchenko. The Mathematical Theory of Optimal Processes. Interscience, 1962.

[54] R.T. Rockafellar. Convex Analysis. Princeton University Press, 1970.
[55] M. Sakarovitch. Notes on Linear Programming. Van Nostrand Reinhold, 1971.
[56] L.J. Savage. The Foundations of Statistics. John Wiley, 1954.
[57] K. Shell (ed.). Essays in the Theory of Optimal Economic Growth. MIT Press, 1967.
[58] R.M. Solow. The economist's approach to pollution and its control. Science, 173(3996), 1971.
[59] D.M. Topkis and A. Veinott, Jr. On the convergence of some feasible directions algorithms for nonlinear programming. SIAM J. on Control, 5(2), 1967.
[60] H.M. Wagner. Principles of Operations Research. Prentice-Hall, 1969.
[61] P. Wolfe. The simplex method for quadratic programming. Econometrica, 27, 1959.
[62] W.M. Wonham. On the separation theorem of stochastic control. SIAM J. on Control, 6(2), 1968.
[63] W.I. Zangwill. Nonlinear Programming: A Unified Approach. Prentice-Hall, 1969.

Index

Active constraint, 50
Adjoint equation: augmented, 80; continuous-time, 98, 101; discrete-time, 77
Adjoint network, 8
Affine function, 55
Basic feasible solution, 39
Certainty-equivalence principle, 91
Complementary slackness, 37
Constraint qualification: definition, 53
Continuous-time optimal control: necessary condition, 99, 101; problem formulation, 95; sufficient condition, 103
Control of water quality, 5
Convex function: definition, 54; properties, 55
Convex set, 54
Derivative, 8
Design of resistive network, 5
Discrete-time optimal control: necessary condition, 77; sufficient condition, 78, 80
Dual problem, 33, 61
Duality theorem, 34, 61
Dynamic programming (DP): continuous-time, 124; discrete-time, 121; optimality conditions, 123, 125; problem formulation, 121
Epigraph, 55
Equilibrium of an economy, 31
Farkas' Lemma, 32
Feasible direction, 71
Feasible solution, 33
Game theory, 5
Gradient, 8
Hamiltonian H, H̃, 78, 101
Hamilton-Jacobi-Bellman equation, 125
Hypograph, 58
Knapsack problem, 124
Lagrange multipliers, 21, 35
Lagrangian function, 35, 54
Linear programming: duality theorem, 34; optimality condition, 37; problem formulation, 33
Maximum principle: continuous-time, 101, 103; discrete-time, 86, 91
Minimum fuel problem, 105
Minimum-time problem, 107

Non-degeneracy condition, 39
Nonlinear programming: duality theorem, 61, 63; necessary condition, 49, 50; problem formulation, 49; sufficient condition, 53
Optimal decision, 1
Optimal economic growth, 108, 112
Optimal feedback control, 123, 125
Optimization over open set: necessary condition, 8; sufficient condition, 13
Optimization under uncertainty, 4
Optimization with equality constraints: necessary condition, 17; sufficient condition, 21
Optimum tax, 2
Phase I, 41
Phase II, 42
Primal problem, 58
Quadratic cost, 112
Quadratic programming: optimality condition, 70; problem formulation, 70
Recursion equation for dynamic programming, 123
Regulator problem, 91
Resource allocation problem, 4
Separation theorem for convex sets, 60
Separation theorem for stochastic control, 91
Shadow prices, 33, 37, 65
Simplex algorithm, 39; example, 45
Singular case for control, 113, 117
Slack variable, 38
State-space constraint: continuous-time problem, 117; discrete-time problem, 81
Subgradient, 60
Supergradient, 60
Supporting hyperplane, 60
Tangent, 50
Transversality condition: continuous-time problem, 103; discrete-time problem, 77, 80
Value function, 123, 125
Variable final time, 103, 105
Vertex, 39
Weak duality theorem, 61
Wolfe algorithm, 70, 71
