A. P. Sakis Meliopoulos
Appendix B
Linear Programming
B.1 Introduction
This appendix presents the theory and solution techniques for optimization problems of
the linear programming variety. The linear programming problem is studied in its
standard form. Two classes of solution methods are introduced: (a) those based on the
simplex method, and (b) those based on the interior point method. Various extensions
are discussed, with emphasis on those applicable to large scale systems.
Standard Form

The linear programming problem in standard form is:

Min  z = Σ_{j=1}^{n} c_j x_j

Subject to:
Σ_{j=1}^{n} a_ij x_j = b_i,    i = 1, 2, ..., m
x_j ≥ 0,                       j = 1, 2, ..., n

A problem with "less than or equal to" constraints,

Min  z = Σ_{j=1}^{n} c_j x_j

Subject to:
Σ_{j=1}^{n} a_ij x_j ≤ b_i,    i = 1, 2, ..., m
x_j ≥ 0,                       j = 1, 2, ..., n

is written in matrix form as

Min  z = c^T x
Subject to:  Ax ≤ b,  x ≥ 0
The above problem is transformed into the standard form upon introduction of the slack
variables s:

Min  z = c^T x
Subject to:  Ax + s = b,  x ≥ 0,  s ≥ 0

Free Variables: Assume that the optimization problem contains free variables,
that is, one or more variables x_i may assume positive, zero, or negative values.
There are two ways to handle this case:

Approach A. A free variable x_i is substituted with two nonnegative variables:

x_i = u_i - v_i,   u_i ≥ 0,   v_i ≥ 0
Bounded Variables: Consider a problem with lower and upper bounds on the variables:

Min  c^T x
Subject to:  Ax = b,   l ≤ x ≤ h

Substituting x = y + l:

c^T (y + l) = c^T l + c^T y
A (y + l) = b
0 ≤ y ≤ h - l

and the problem becomes

Min  c^T y
Subject to:  Ay = b',   y + s = d,   y ≥ 0,   s ≥ 0

where b' = b - A l,  d = h - l, and s are slack/surplus variables.
A "greater than or equal to" constraint is converted into a "less than or equal to"
constraint upon multiplication by -1:

Σ_{j=1}^{n} a_ij x_j ≥ b_i    becomes    - Σ_{j=1}^{n} a_ij x_j ≤ -b_i

Example B.1: Convert the following problem into a linear program in standard form.
After the "greater than or equal to" constraints are rewritten as "less than or equal
to" constraints, the problem reads:

Minimize:
x0 = 75/9 - (12/9) x1 + (62/9) x2

Subject to:
-15 x1 - 2 x2 ≤ 60
141 x1 - 28 x2 ≤ 210
-141 x1 + 28 x2 ≤ 60
x1, x2 ≥ 0
Upon introduction of slack variables:
Minimize:
x0 = 75/9 - (12/9) x1 + (62/9) x2

Subject to:
-15 x1 -2 x2 + s1 = 60
141 x1 - 28 x2 + s2 = 210
- 141 x1 + 28 x2 + s3 = 60
x1, x2, s1, s2, s3 ≥ 0
Note that in the above, the constant term in the objective function may be omitted
without altering the problem.
Example B.2: Convert the following problem into a linear program in standard
form.

Minimize:  |x| + |y| + |z|
Subject to:
x + y ≤ 1
2x + z = 3
x, y, z are free

Solution: Each free variable is substituted with the difference of two nonnegative
variables, x = x1 - x2, y = y1 - y2, z = z1 - z2, with

x1, x2 ≥ 0
y1, y2 ≥ 0
z1, z2 ≥ 0

Since at the optimum at most one variable of each pair is nonzero, |x| = x1 + x2,
|y| = y1 + y2, and |z| = z1 + z2. Upon introduction of the slack variable s, the
problem in standard form is:

Minimize:  x1 + x2 + y1 + y2 + z1 + z2
Subject to:
x1 - x2 + y1 - y2 + s = 1
2x1 - 2x2 + z1 - z2 = 3
x1, x2, y1, y2, z1, z2, s ≥ 0
In summary, any optimization problem with linear objective and constraints can
be transformed into the standard form. Thus, it is sufficient to study the
standard form only.
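As an illustration of these transformations, the following minimal sketch (Python/numpy) converts a small inequality-constrained problem to standard form by splitting a free variable and adding slack variables. The problem data and variable names are hypothetical and chosen only for illustration.

import numpy as np

# Hypothetical example:  minimize 2*x1 + 3*x2,  with x1 >= 0 and x2 free,
# subject to:  x1 + x2 <= 4   and   x1 - 2*x2 >= -1.
# Step 1: write x2 = u - v with u, v >= 0 (Approach A for free variables).
# Step 2: multiply the ">=" constraint by -1 so it becomes "<=".
# Step 3: add one slack variable per "<=" constraint.
# Standard-form variable order: [x1, u, v, s1, s2]
c = np.array([2.0, 3.0, -3.0, 0.0, 0.0])
A = np.array([[ 1.0, 1.0, -1.0, 1.0, 0.0],    # x1 + (u - v) + s1 = 4
              [-1.0, 2.0, -2.0, 0.0, 1.0]])   # -x1 + 2(u - v) + s2 = 1
b = np.array([4.0, 1.0])
# The problem is now: minimize c^T x subject to A x = b, x >= 0.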
B.4 Definitions
The discussion of Section B.3 suggests that it suffices to study the solution of the
linear program in standard form. For this purpose, a number of definitions will
be introduced. For conformity, all definitions will refer to the following linear
program in standard form:
Minimize:   c^T x        (B.1)
Subject to:
Ax = b                   (B.2)
x ≥ 0                    (B.3)
where:
b is an m x 1 vector
x is an n x 1 vector of unknown variables
A is an m x n matrix
c is an n x 1 vector
Definition: Basic Solution. A basic solution is a vector x which has at least n - m
zero entries and which satisfies the constraint Ax = b. A basic solution can be
constructed as follows: Select (n - m) unknowns from the vector x and set them
equal to zero. These variables are called nonbasic variables and are denoted with
the vector x_N. Then

Ax = B x_B = b

where B is a square matrix. The matrix B is a submatrix of A and shall be called
the basis matrix. If B has rank m, the solution for x_B is given by x_B = B^{-1} b. The
solution

x_B = B^{-1} b        (B.4a)
x_N = 0               (B.4b)

is called a basic solution. The variables included in x_B are called basic variables.
Definition: Degenerate Basic Solution. If one or more components of a basic
solution x_B are equal to zero, then this solution is called a degenerate basic
solution.

Definition: Basic Feasible Solution. Consider a basic solution. If x_B ≥ 0, then
the solution is called a basic feasible solution.
Consider, as an example, the following linear program:

Minimize:  x0 = -3x1 - 5x2
Subject to:
x1 + 2x2 + x3 = 8
3x1 + 2x2 + x4 = 18
x1, x2, x3, x4 ≥ 0

Here n = 4 and m = 2, so each basic solution is obtained by setting two nonbasic
variables to zero. The basic solutions are listed below:

x1    x2     x3     x4     Feasible   Degenerate
0     0      8      18     Yes        No
0     4      0      10     Yes        No
0     9      -10    0      No         No
8     0      0      -6     No         No
6     0      2      0      Yes        No
5     1.5    0      0      Yes        No
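The table above can be reproduced mechanically: choose every set of n - m nonbasic variables, set them to zero, and solve B x_B = b for the remaining (basic) variables. A short sketch using numpy and the data of this example follows; the numerical tolerances are arbitrary choices.

import itertools
import numpy as np

A = np.array([[1.0, 2.0, 1.0, 0.0],
              [3.0, 2.0, 0.0, 1.0]])
b = np.array([8.0, 18.0])
n, m = 4, 2

for basic in itertools.combinations(range(n), m):    # choose the m basic variables
    B = A[:, basic]
    if abs(np.linalg.det(B)) < 1e-12:
        continue                                     # singular basis matrix, skip
    x = np.zeros(n)
    x[list(basic)] = np.linalg.solve(B, b)           # x_B = B^{-1} b, x_N = 0
    feasible = bool(np.all(x >= -1e-9))
    degenerate = bool(np.any(np.abs(x[list(basic)]) < 1e-9))
    print(x, "feasible" if feasible else "infeasible",
          "degenerate" if degenerate else "nondegenerate")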
The fundamental theorem of linear programming refers to the linear program in
standard form:

Minimize:  x0 = c^T x
Subject to:
Ax = b
x ≥ 0

where:
x is an n x 1 vector of unknown variables
b is an m x 1 vector of constants
n ≥ m, and the rank of A is m.

The fundamental theorem states that:
(a) If there is a feasible solution, there is a basic feasible solution.
(b) If there is an optimal feasible solution, there is an optimal basic feasible
solution.
Proof of (a): Let x = (x1, ..., xn ) be a feasible solution. The constraint Ax = b is
written as follows:
x1 a1 + x2 a2 + ... + xn an = b        (B.5)
Note that a_j denotes the jth column of matrix A. If more than m variables, say p of
them, are greater than zero, then the columns a1, ..., ap are linearly dependent
(since p > m = rank A) and we can find numbers y_i, i = 1, 2, ..., p, not all zero,
such that

y1 a1 + ... + yp ap = 0        (B.6)

Subtracting ε times (B.6) from (B.5) yields another solution with components

x_i - ε y_i,   i = 1, 2, ..., p        (B.7)

Select ε such that at least one of the x_i - ε y_i becomes zero while all others remain
nonnegative. In this case, the number p will be decreased by at least 1. This
process may be repeated as long as equation (B.6) can be written with nonzero y_i
values, that is, as long as p > m. That is, as long as p > m, the number of nonzero
variables can be decreased by one. Thus, eventually p = m.
Proof of (b): Let x = [x1 x2 ... xn] be an optimal feasible solution. Then:

Case 1: If p (≤ m) variables x1, x2, ..., xp are nonzero, the optimal feasible solution
is also an optimal basic feasible solution. The theorem holds in this case.

Case 2: If p (> m) variables x1, x2, ..., xp are nonzero, then, without loss of
generality, we can assume that

Σ_{i=1}^{p} x_i a_i = b        (B.8)

and, since the columns a1, ..., ap are linearly dependent, there exist numbers y_i,
not all zero, such that

Σ_{i=1}^{p} y_i a_i = 0        (B.9)

Combining the two equations, for any ε:

Σ_{i=1}^{p} (x_i - ε y_i) a_i = b

The last equation suggests that the vector x - εy is a solution. The value of the
objective function is

c^T x - ε c^T y        (ε positive or negative)

Since c^T x is the minimum value of the objective function and ε can be positive or
negative, c^T y must equal zero. Then select ε such that one entry is zeroed. This
process can be repeated until only m entries are nonzero.
In the next paragraph we will introduce a number of geometric concepts that will
enable a geometric interpretation of the fundamental theorem. The importance of
the fundamental theorem of linear programming is that it provides the necessary
information to construct an algorithm for the computation of the optimal
solution. In subsequent paragraphs we will introduce the simplex method which
is based on this theorem.
Consider the linear program in standard form,

Minimize:  c^T x
Subject to:  Ax = b,  x ≥ 0

and let K = { x : Ax = b, x ≥ 0 } denote its feasible region. Then:

a. If K = ∅ (empty set), the linear program has no solution.
b. If K ≠ ∅ (nonempty set), then the linear program has a solution (bounded or
unbounded). If the set K is bounded, the linear program solution is bounded. If
the set K is unbounded, the linear program solution may be bounded or
unbounded.
Definition: Equivalence Between Extreme Points and Basic Solutions. An
extreme point of a convex set is one which cannot be expressed as a convex
combination of two other points in the set. Consider the convex set K:

K = { x : Ax = b, x ≥ 0 }
[Figure: geometric illustration of the feasible region K. (a) Point A is a feasible
solution and point B is a basic feasible solution. (b) The corresponding points A'
and B'.]
Assume, without loss of generality, that the current basic feasible solution has basic
variables x1, x2, ..., xm, so that

x1 a1 + x2 a2 + ... + xm am = b        (B.10)

Assume that the nonbasic variable xk (k > m) is to be brought into the basis. The
vector ak of matrix A can be expressed as a linear combination of

{a1, a2, ..., am}

This is always possible since the vectors a1, a2, ..., am form a basis (they are
linearly independent). Therefore,

ak = y_{1k} a1 + y_{2k} a2 + ... + y_{mk} am        (B.11)
Multiplying (B.11) by ε and subtracting from (B.10) yields, for any ε,

(x1 - ε y_{1k}) a1 + ... + (xm - ε y_{mk}) am + ε ak = b        (B.12)

Thus a new feasible solution is obtained if ε ≥ 0 is selected so that all components
remain nonnegative. Increasing ε from zero, the first basic variable to reach zero
determines the largest allowable value:

ε = min { x_i / y_{ik} :  y_{ik} > 0 }        (B.13)

What happens if all y_{ik} ≤ 0? In this case, the vector x(ε) is a feasible solution
which may become as large as desired. Thus, in this case, the problem has an
unbounded solution.

In summary, a new basic feasible solution can be obtained from the vector x(ε)
by appropriate selection of the parameter ε. The vector x(ε) will be denoted with

x_B(ε) = x_B - ε y_k,    x_k(ε) = ε        (B.14)

where

y_k = B^{-1} a_k        (B.15)
Consider again the linear program in standard form:

Minimize:  c^T x
Subject to:
Ax = b
x ≥ 0

Partition the vector x into the basic variables x_b and the nonbasic variables x_r, so
that the constraints read

Ax = B x_b + A_r x_r = b        (B.16)

Solving for the basic variables:

x_b = B^{-1} b - B^{-1} A_r x_r
x_b = x_B - B^{-1} A_r x_r

The value of the objective function evaluated at the general solution x is:

c^T x = [c_B^T : c_r^T] [x_b ; x_r] = c_B^T x_b + c_r^T x_r
      = c_B^T x_B - c_B^T B^{-1} A_r x_r + c_r^T x_r
      = z0 + (c_r^T - c_B^T B^{-1} A_r) x_r
In the above expression, observe that in order to improve the objective function the
quantity (c_r^T - c_B^T B^{-1} A_r) x_r must be negative. Since x_r can be arbitrarily
selected, a procedure can be devised where only one variable is selected to enter the
solution. For best results, select the most negative component of the vector
(c_r^T - c_B^T B^{-1} A_r). This will determine the entering variable k.

The components of the vector (c_r^T - c_B^T B^{-1} A_r) are called the reduced cost
coefficients. In summary, the optimality condition involves the following steps:
Compute (c_r^T - c_B^T B^{-1} A_r) and select the most negative value; its index
determines the entering variable k. If all components are nonnegative, the optimum
has been reached.
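The optimality test translates directly into a few lines of numpy. The sketch below computes the reduced cost vector c_r^T - c_B^T B^{-1} A_r for a given basis and reports the entering variable; the function name and the starting basis are choices made for this illustration, and the data are those of the small example introduced earlier (x0 = -3x1 - 5x2, with slacks x3, x4).

import numpy as np

def reduced_costs(A, c, basic):
    """Return the nonbasic indices and their reduced cost coefficients."""
    nonbasic = [j for j in range(A.shape[1]) if j not in basic]
    B = A[:, basic]
    lam = np.linalg.solve(B.T, c[basic])        # lambda^T = c_B^T B^{-1}
    r = c[nonbasic] - A[:, nonbasic].T @ lam    # c_r^T - c_B^T B^{-1} A_r
    return nonbasic, r

A = np.array([[1.0, 2.0, 1.0, 0.0],
              [3.0, 2.0, 0.0, 1.0]])
c = np.array([-3.0, -5.0, 0.0, 0.0])
nonbasic, r = reduced_costs(A, c, basic=[2, 3])   # start with the slack basis
print(r)                                          # [-3. -5.]  -> not optimal
print("entering variable:", nonbasic[int(np.argmin(r))])   # x2 (most negative)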
Example B.3: Solve the following linear program using the simplex method.

Minimize:  x0 = -3x1 - 5x2
Subject to:
x1 + 2x2 ≤ 8
3x1 + 2x2 ≤ 18
x1, x2 ≥ 0

Solution: Upon introduction of the slack variables x3 and x4, the problem in
standard form is:

Minimize:  x0 = -3x1 - 5x2
Subject to:
[1 2 1 0; 3 2 0 1] [x1; x2; x3; x4] = [8; 18]
x1, x2, x3, x4 ≥ 0

The variables are rearranged so that the starting basic variables (the slacks x3
and x4) appear first, y = [x3 x4 x1 x2]^T, and the problem is written as:

Minimize:  x0 = [0 0 -3 -5] y
Subject to:
[1 0 1 2; 0 1 3 2] y = [8; 18]
y ≥ 0
At this point, the first iteration of the simplex method can be performed.
1st iteration: The Optimality Condition is applied to determine the entering
variable:
c_r^T - c_B^T B^{-1} A_r = [-3 -5] - [0 0] B^{-1} A_r = [-3 -5]

The most negative component is -5; thus the entering variable is y4 (= x2), i.e. k = 4.
The candidate new solution is

y(ε) = [8 - 2ε, 18 - 2ε, 0, ε]^T

where the value of ε is to be selected from:

ε = min_{i = 1, 2} { y_{i0} / y_{ik} :  y_{ik} > 0 } = min [8/2, 18/2] = min [4, 9] = 4

Set ε = 4 to obtain the new basic feasible solution:

y_new = [0, 10, 0, 4]^T

Thus, the leaving variable is y1. Note that the new solution is also a basic feasible
solution. The objective function computed at the new solution is:

x0 = [0 0 -3 -5] y_new = -20
2nd iteration: Again, the solution is cast in the form (B.16) with the following
definition (the current basic variables x4 and x2 are placed first):

z1 = y2 = x4,   z2 = y4 = x2,   z3 = y1 = x3,   z4 = y3 = x1

The problem becomes:

Minimize:  x0 = [0 -5 0 -3] z
Subject to:
[0 2 1 1; 1 2 0 3] z = [8; 18]
z ≥ 0
The Optimality Condition is applied to determine the entering variable:

c_r^T - c_B^T B^{-1} A_r = [0 -3] - [0 -5] [0 2; 1 2]^{-1} [1 1; 0 3]
                         = [0 -3] - [0 -5] [-1 1; 0.5 0] [1 1; 0 3]
                         = [0 -3] - [-2.5 -2.5]
                         = [2.5 -0.5]

The most negative component is -0.5, corresponding to z4 (= x1); thus z4 is the
entering variable.
The candidate new solution is

z(ε) = [10 - 2ε, 4 - 0.5ε, 0, ε]^T

where

ε = min { y_{i0} / y_{ik} :  y_{ik} > 0 } = min (10/2, 4/0.5) = min (5, 8) = 5

Setting ε = 5, the new basic feasible solution is

z_new = [0, 1.5, 0, 5]^T

Thus, the leaving variable is z1 (= y2 = x4). The value of the objective function
evaluated at the solution is:

x0 = [0 -5 0 -3] z_new = -22.5
Further application of the optimality condition reveals that the above solution is
optimal. Thus, no more iterations are needed. The solution in terms of the
original variables x is:
x1 = y3 = z4 = 5
x2 = y4 = z2 = 1.5
x3 = y1 = z3 = 0.0
x4 = y2 = z1 = 0.0
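The result of Example B.3 can be checked numerically with any off-the-shelf LP solver. The short sketch below uses scipy.optimize.linprog on the original inequality form; it is only a verification aid, not part of the method developed above.

from scipy.optimize import linprog

# Example B.3:  min -3*x1 - 5*x2  s.t.  x1 + 2*x2 <= 8,  3*x1 + 2*x2 <= 18,  x >= 0
res = linprog(c=[-3.0, -5.0],
              A_ub=[[1.0, 2.0], [3.0, 2.0]],
              b_ub=[8.0, 18.0],
              bounds=[(0, None), (0, None)])
print(res.x, res.fun)   # approximately [5.0, 1.5] and -22.5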
B.8 Convergence
Consider a linear program in standard form:

Minimize:  c^T x
Subject to:
Ax = b
x ≥ 0

Assume that the problem has a solution. The simplex method will converge to the
solution, and the solution will occur at a basic feasible solution. The maximum
number of basic feasible solutions of a linear program is n!/(m!(n-m)!). Thus, the
simplex method will converge in a finite number of steps; the maximum number of
steps is n!/(m!(n-m)!). In general, the actual number of steps is much less than this
maximum. In large system applications, however, the maximum possible number of
iterations n!/(m!(n-m)!) may be prohibitively large. For this reason, computational
details are of tremendous importance in the simplex method. Some of the
computational considerations will be discussed next.
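The combinatorial bound n!/(m!(n-m)!) grows very quickly with the problem size; the two-line sketch below simply evaluates it for a few (arbitrary) sizes.

from math import comb

for n, m in [(4, 2), (20, 10), (200, 100)]:
    print(n, m, comb(n, m))   # 6, 184756, and roughly 9.05e58 basic solutions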
In summary: the constraint set Ax = b consists of m equations in n unknowns and has
an infinite number of solutions; the number of basic feasible solutions N is finite,
N ≤ n!/(m!(n-m)!) (some of them may be degenerate); and the number of simplex
iterations is at most N.
The computations of each simplex iteration are conveniently organized in a tableau.
Assume that the basic variables are x_i, i = 1, 2, ..., m, and the nonbasic variables
are x_i, i = m+1, ..., n. The tableau is:

Basic   x1   x2   ...  xm   x_{m+1}      ...  x_k        ...  x_n        RHS
x1      1    0    ...  0    y_{1,m+1}    ...  y_{1,k}    ...  y_{1,n}    y_{10}
x2      0    1    ...  0    y_{2,m+1}    ...  y_{2,k}    ...  y_{2,n}    y_{20}
...
xm      0    0    ...  1    y_{m,m+1}    ...  y_{m,k}    ...  y_{m,n}    y_{m0}
cost    0    0    ...  0    r_{m+1}      ...  r_k        ...  r_n        -z0

The entries y_{ij} are the coordinates of column a_j with respect to the current basis,
the column y_{i0} is the current basic solution, the r_j are the reduced cost
coefficients of the nonbasic variables, and z0 is the current value of the objective
function.
Assume that variable i has been selected to leave the basis and variable k has
been selected to enter the basis. The new solution may be found with the
following steps:
1. Replace x_i with x_k in the list of basic variables.
2. Divide the ith row by y_{i,k}.
3. For every other row j (j = 1, 2, ..., m, j ≠ i), multiply the new ith row by y_{j,k}
and subtract it from row j.

The new column y_{j0} provides the new basic feasible solution. The above procedure
is called pivoting on y_{i,k}. The tableau is usually augmented by the coefficients of
the objective function, as shown above. In this case, the cost row is treated the same
way: the coefficients of the basic variables are always zero, and the remaining entries
define the vector [c_r^T - c_B^T B^{-1} A_r]. The entering variable is then selected by
inspection (most negative reduced cost). The leaving variable, once the entering
variable k is known, is determined as follows: the minimum of the ratios
y_{i0}/y_{i,k} over the rows with y_{i,k} > 0 determines the leaving variable.
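The pivoting steps 1-3 translate directly into array operations. Below is a minimal sketch (numpy); the function name is a choice made here, and the tableau is stored with the cost row last and the right-hand side in the final column. The data used are the starting tableau of Example B.4 below.

import numpy as np

def pivot(T, i, k):
    """Pivot the tableau T on row i, column k (steps 2 and 3 above).
    Rows: one per basic variable plus the cost row; last column is the RHS."""
    T = T.astype(float).copy()
    T[i, :] /= T[i, k]                       # normalize the pivot row
    for j in range(T.shape[0]):              # eliminate column k from all other rows
        if j != i:
            T[j, :] -= T[j, k] * T[i, :]
    return T

# Starting tableau of Example B.4 (rows: x3, x4, cost):
T0 = np.array([[ 1.0,  2.0, 1.0, 0.0,  8.0],
               [ 3.0,  2.0, 0.0, 1.0, 18.0],
               [-3.0, -5.0, 0.0, 0.0,  0.0]])
T1 = pivot(T0, i=0, k=1)    # x2 enters, x3 leaves (pivot on element (1,2))
print(T1)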
Example B.4: Solve the linear program of Example B.3 using the tableau form.

Solution: The solution is obtained with the following sequence of tableaus (the cost
row is shown last). The starting tableau is:

Basic   x1    x2    x3    x4    RHS
x3      1     2     1     0     8
x4      3     2     0     1     18
cost    -3    -5    0     0     0

In the first iteration, by inspection of the tableau, the entering variable is x2. The
leaving variable is computed to be x3. Pivoting on the element (1,2) of the matrix
yields the following tableau:
Basic   x1     x2    x3     x4    RHS
x2      0.5    1     0.5    0     4
x4      2      0     -1     1     10
cost    -0.5   0     2.5    0     20
By inspection of the tableau, the entering variable must be x1. The leaving
variable is computed to be x4. Pivoting on the element (2,1) of the matrix yields
the following tableau:
Basic   x1    x2    x3      x4      RHS
x2      0     1     0.75    -0.25   1.5
x1      1     0     -0.5    0.5     5
cost    0     0     2.25    0.25    22.5

All reduced cost coefficients are now nonnegative; the optimal solution has been
reached: x1 = 5, x2 = 1.5, with x0 = -22.5.
The simplex method, as has been presented so far, is an algorithm which starts
from a basic feasible solution and progressively moves to the optimal solution.
Many times it is considerably difficult to identify a basic feasible solution. If the
problem is infeasible, then obviously a basic feasible solution does not exist.
Here we shall examine the question of the existence of a basic feasible solution. If
such a solution can be found, the simplex method can be initiated from it.
The existence of a basic feasible solution is addressed by means of an auxiliary
linear problem. This problem is defined with the aid of artificial variables as
follows:
Minimize:   Σ_{i=1}^{m} y_i        (B.17a)
Subject to:
Ax + y = b                         (B.17b)
x ≥ 0,  y ≥ 0                      (B.17c)
It is easy to show that if the original problem has a feasible solution, then the
optimal solution of the above problem will be a basic feasible solution of the
original problem. The reader is encouraged to provide a formal proof of this
statement. Conversely, if at the optimal solution of the above problem y ≠ 0, then
the original problem has no feasible solution.

The definition and solution of the above auxiliary problem is called Phase I of the
simplex method. Once Phase I has been completed, the artificial variables y may
be dropped. The simplex method is then initiated on the original problem from
the basic feasible solution that was found (Phase II).
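Constructing the Phase I problem is mechanical: make the right-hand side nonnegative, append one artificial column per constraint, and minimize their sum. A minimal sketch follows; the function name is an assumption made here, and the sketch only builds the auxiliary problem of (B.17), it does not solve it. The data are those of Example B.5 below.

import numpy as np

def phase_one_problem(A, b):
    """Return (A_aux, b_aux, c_aux) for the Phase I auxiliary problem (B.17)."""
    A = np.asarray(A, dtype=float).copy()
    b = np.asarray(b, dtype=float).copy()
    m, n = A.shape
    neg = b < 0                     # flip rows with negative RHS so that y = b >= 0
    A[neg, :] *= -1.0
    b[neg] *= -1.0
    A_aux = np.hstack([A, np.eye(m)])                    # [A | I], artificials y1..ym
    c_aux = np.concatenate([np.zeros(n), np.ones(m)])    # minimize the sum of the y_i
    return A_aux, b, c_aux

A_aux, b_aux, c_aux = phase_one_problem([[1, 1, 1], [5, 1, 2]], [5, 8])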
Example B.5: Consider a linear program whose constraints are:

x1 + x2 + x3 = 5
5x1 + x2 + 2x3 = 8
x1, x2, x3 ≥ 0

Compute a basic feasible solution using Phase I and then solve the problem.

Solution: Phase I of this problem is defined as follows:

Minimize:   y1 + y2
Subject to:
[1 1 1 1 0; 5 1 2 0 1] [x1; x2; x3; y1; y2] = [5; 8]
x1, x2, x3, y1, y2 ≥ 0
The sequence of Phase I tableaus is:

Tableau 1 (starting tableau):

Basic   x1   x2   x3   y1   y2   RHS
y1      1    1    1    1    0    5
y2      5    1    2    0    1    8
cost    0    0    0    1    1    0

Tableau 2 (after eliminating the cost coefficients of the basic variables y1, y2):

Basic   x1   x2   x3   y1   y2   RHS
y1      1    1    1    1    0    5
y2      5    1    2    0    1    8
cost    -6   -2   -3   0    0    -13

Tableau 3:

Basic   x1   x2     x3     y1   y2     RHS
y1      0    0.8    0.6    1    -0.2   3.4
x1      1    0.2    0.4    0    0.2    1.6
cost    0    -0.8   -0.6   0    1.2    -3.4

Tableau 4:

Basic   x1   x2   x3     y1      y2      RHS
x2      0    1    0.75   1.25    -0.25   4.25
x1      1    0    0.25   -0.25   0.25    0.75
cost    0    0    0      1       1       0
Comments: The solution from tableau to tableau proceeds as follows. In the first
iteration the entering variable is x1, the leaving variable is y2. Therefore pivoting
is performed on y2,1. In the second iteration the entering variable is x2, the leaving
variable is y1. Therefore pivoting is performed on y1,2. Completion of Phase I
yields the following basic feasible solution:
x1 = .75
x2 = 4.25
x3 = 0
The Big M-Method: The two phases, I and II, can be integrated in one linear
program problem as follows:
Step 1. Artificial variables are introduced as in phase I.
Step 2. The objective function is augmented with the artificial variables multiplied
by a large positive constant M:

Minimize:   c^T x + M Σ_i y_i
Subject to:  Ax + y = b,  x ≥ 0,  y ≥ 0

For a sufficiently large value of M, the artificial variables are driven to zero at the
optimum whenever the original problem is feasible.
The basic idea in the dual simplex method is to start from a basic solution which satisfies
the optimality condition (all reduced cost coefficients nonnegative), but possibly
violating feasibility (one or more variables negative) and to construct a new basic
solution which again satisfies the optimality condition, but has moved closer to the
feasible region. Obviously, when feasibility is satisfied, the optimal solution has been
reached since the optimality condition is always satisfied. This method is suitable to a
class of problems for which an optimal but infeasible starting solution can be determined
easily.
The method will be developed for a linear program in standard form:

Min  z = c^T x
Subject to:  Ax = b,  x ≥ 0

Assume that a basic solution is available which satisfies the optimality (dual
feasibility) condition:

x_B = A_B^{-1} b,   x_r = 0,   c_r^T - c_B^T A_B^{-1} A_r ≥ 0

where the usual notation is applied. Note that if, in addition to the above conditions,

x_B ≥ 0
then this solution is the optimal solution. If, however, one or more of the entries of x B is
negative, it is necessary to design an algorithm which will generate a new basic dual
feasible solution. The new solution should be selected in such a way that one of the
negative variables returns to zero. The algorithm that performs this task is illustrated in
Figure B.3.
[Figure B.3: The dual simplex algorithm. Block A: if x_B ≥ 0, STOP (the solution is
optimal); otherwise select a negative basic variable, say x_j, to leave the basis.
Block B: compute

θ = min_i { (c_B^T y^i - c_i) / y_j^i :  y^i = B^{-1} a_i,  y_j^i < 0 }

The minimizing index k is the entering variable. Compute the new solution and
repeat.]
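One pass through the two selection rules of Figure B.3 can be sketched as follows (numpy). This is only an illustration under the stated assumptions (dense data, dual feasible starting basis); the function name and tolerances are choices made here, and bookkeeping such as basis updates is omitted.

import numpy as np

def dual_simplex_step(A, b, c, basic):
    """One iteration of the dual simplex selection rules (Figure B.3)."""
    B = A[:, basic]
    x_B = np.linalg.solve(B, b)
    if np.all(x_B >= -1e-9):
        return "optimal", x_B
    j = int(np.argmin(x_B))                      # block A: most negative basic variable leaves
    lam = np.linalg.solve(B.T, c[basic])         # lambda^T = c_B^T B^{-1}
    best, k = None, None
    for i in range(A.shape[1]):
        if i in basic:
            continue
        y = np.linalg.solve(B, A[:, i])          # y^i = B^{-1} a_i
        if y[j] < -1e-9:
            ratio = (lam @ A[:, i] - c[i]) / y[j]    # (c_B^T y^i - c_i) / y_j^i
            if best is None or ratio < best:
                best, k = ratio, i
    if k is None:
        return "infeasible", x_B                 # no eligible column
    return (j, k), x_B                           # leaving position j, entering index k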
The algorithm of Figure B.3 can be developed in a number of different ways. The reader
is referred to the extensive literature on the subject. Here the essentials of the algorithm
will be proved.
From block A of the algorithm, it is obvious that the algorithm moves closer to the
feasible region since negative variables are selected to leave the basis. Now if it is
proved that the selection of the entering variable in block B will result in a new basic
solution which satisfies the optimality condition, the proof will be completed. This can
be proven with simple matrix manipulations as follows. First, observe that for every
basic variable j the reduced cost coefficient is zero:

c_j - c_B^T A_B^{-1} a_j = c_j - c_B^T e_j = 0

Then, let j be the leaving variable and k the entering variable. The new matrix of the
basis will be

A_B' = A_B - a_j e_j^T + a_k e_j^T        (B.23)

where e_j is the unit vector with a 1 in the position of the leaving variable j. Define

y^k = A_B^{-1} a_k

Then the inverse of the new basis matrix is

(A_B')^{-1} = E A_B^{-1}

where E is the elementary matrix

E = [ e_1  e_2  ...  η  ...  e_m ],    η = [ -y_1^k/y_j^k, ..., 1/y_j^k, ..., -y_m^k/y_j^k ]^T

(the vector η occupies the position of the leaving variable j). The above matrix
identity can be proved by direct substitution. The new solution will be optimal if the
reduced cost coefficients remain nonnegative:

c_i - c_B'^T (A_B')^{-1} a_i ≥ 0

By direct computation,

c_B'^T = c_B^T - c_j e^T + c_k e^T,    e = [0, ..., 0, 1, 0, ..., 0]^T  (1 in the place of variable j)

Thus:

c_B'^T (A_B')^{-1} a_i = (c_B^T - c_j e^T + c_k e^T) E A_B^{-1} a_i
                       = c_B^T E y^i - c_j e^T E y^i + c_k e^T E y^i

Note that

E y^i = y^i - (y_j^i / y_j^k)(y^k - e)

and

e^T E y^i = y_j^i / y_j^k

Thus

c_B'^T (A_B')^{-1} a_i = c_B^T y^i - (y_j^i / y_j^k) c_B^T y^k + (y_j^i / y_j^k) c_j
                          - (y_j^i / y_j^k) c_j + (y_j^i / y_j^k) c_k
                       = c_B^T y^i - (y_j^i / y_j^k) [c_B^T y^k - c_k]

and the new reduced cost coefficients are

rcc'_i = c_i - c_B'^T (A_B')^{-1} a_i = c_i - c_B^T y^i + (y_j^i / y_j^k) [c_B^T y^k - c_k]
       = rcc_i + (y_j^i / y_j^k) [c_B^T y^k - c_k]

It must be shown that rcc'_i ≥ 0 for every i. Note that the entering variable k has
been selected in such a way that

y_j^k < 0

and, since the previous solution was a basic dual feasible solution,

c_B^T y^k - c_k ≤ 0

Therefore, for every i with y_j^i ≥ 0 the added term (y_j^i / y_j^k)[c_B^T y^k - c_k] is
nonnegative and rcc'_i ≥ rcc_i ≥ 0. For every i with y_j^i < 0, since k attains the
minimum in

θ = min_i { (c_B^T y^i - c_i) / y_j^i ;  y^i = B^{-1} a_i,  y_j^i < 0 }

it follows that

(c_B^T y^i - c_i) / y_j^i  ≥  (c_B^T y^k - c_k) / y_j^k

or, upon multiplication by y_j^i < 0,

c_i - c_B^T y^i + (y_j^i / y_j^k)(c_B^T y^k - c_k) ≥ 0

that is, rcc'_i ≥ 0.
Hence, the new reduced cost coefficients will be nonnegative. This completes the
proof. The algorithm of Figure B.3 will be demonstrated with an example.
Example B.8: Solve the following problem via the dual simplex method:

Minimize:   x1 + x2 + 3x3
Subject to:
-0.75 x1 + 0.75 x2 - 0.625 x3 + x4 = -0.05
0.25 x1 - 0.25 x2 - 0.625 x3 + x5 = -0.10
x1, x2, x3, x4, x5 ≥ 0

Solution: The starting basic (dual feasible) solution is

x_B = [x4; x5] = [-0.05; -0.10],    x_r = [x1; x2; x3] = 0
1st iteration: By selection, the leaving variable is x5. The entering variable is
determined by computing y^i = A_B^{-1} a_i for the nonbasic variables:

y^1 = [-0.75; 0.25],    y^2 = [0.75; -0.25],    y^3 = [-0.625; -0.625]

θ = min { -1/(-0.25), -3/(-0.625) } = min { 4, 4.8 } = 4

The above minimum corresponds to variable x2. Thus x2 is the entering variable.
new solution is:
xB
x 2
.4
= =
x 4
-.35
.75 1
AB =
-.25 0
A B1
x1
0
x r = x 2 = 0
x 3
0
0 -4
=
1 3
1
CB =
0
2nd iteration: By selection, the leaving variable is x4. The entering variable is
determined by computing y^i = A_B^{-1} a_i for the nonbasic variables:

y^1 = [-1; 0],    y^3 = [2.5; -2.5],    y^5 = [-4; 3]

θ = min { -0.5/(-2.5) } = 0.2

The above minimum corresponds to variable x3. Thus x3 is the entering variable.
The new solution is:

x_B = [x2; x3] = [0.05; 0.14],    x_r = [x1; x4; x5] = 0

Since all basic variables are now nonnegative, this is the optimal solution.
Many practical problems include upper bounds on the variables:

Minimize:   c^T x        (B.24a)
Subject to:
Ax = b                   (B.24b)
x ≤ h                    (B.24c)
x ≥ 0                    (B.24d)
Introduction of slack variables s will transform the problem into standard form:

Minimize:   c^T x
Subject to:
[A 0; I I] [x; s] = [b; h]        (B.25)
x, s ≥ 0
Thus, in the considered solution, the variables s will always be basic and
nonnegative. This solution shall be called an extended dual feasible solution.
Now consider the application of the dual simplex method to the above problem.
Observe that in block A of Figure B.3, the leaving variable will be an x variable.
Observe also that, in block B of Figure B.3, the quantity to be computed is

θ = min_i { ( [c_B^T 0] z^i - c_i ) / z_j^i ;  z_j^i < 0 }

In the above expression the index i extends over all nonbasic variables. Since an
extended basic dual feasible solution is assumed, the vector z^i is given by

z^i = [B 0; E I]^{-1} [a^i; e^i] = [ B^{-1} a^i ;  -E B^{-1} a^i + e^i ]

Thus the entry z_j^i is simply the jth entry of the vector B^{-1} a^i. Define
y^i = B^{-1} a^i. Then the above expression becomes

θ = min_i { (c_B^T y^i - c_i) / y_j^i ;  y^i = B^{-1} a^i,  y_j^i < 0 }        (B.26)
Let the minimum correspond to variable k. Then variable k will be the entering
variable.
If in the new solution, one or more variables exceed their upper bounds, the
simple transformation
xl = -x'l + hl
will bring the solution into the assumed form. The above transformation applied
to basic variables leaves invariant the dual feasibility of the solution.
The algorithm of the dual simplex method with upper bounds is illustrated in
Figure B.4. The algorithm will be demonstrated with an example.
[Figure B.4: The dual simplex algorithm with upper bounds. If x_B ≥ 0, STOP;
otherwise select a negative basic variable x_j to leave the basis, compute

θ = min_i { (c_B^T y^i - c_i) / y_j^i :  y^i = B^{-1} a^i,  y_j^i < 0 }

to determine the entering variable k, compute the new solution, and replace any
variable that exceeds its upper bound using x_l = -x'_l + h_l.]
Consider the following problem:

Minimize:   x0 = x1 + x2 + x3 + x4 + 3x5 + 3x6
Subject to:
-0.25x1 + 0.25x2 - 0.25x3 + 0.25x4 - 0.25x5 - 0.5x6 + x7 = -0.05
-0.125x1 + 0.125x2 + 0.375x3 - 0.375x4 - 0.125x5 - 0.25x6 + x8 = -0.05
0 ≤ xi ≤ 0.18, i = 1, 2, ..., 6,    x7, x8 ≥ 0

The starting extended dual feasible solution is

x_B = [x7; x8] = [-0.05; -0.05],    x_r = [x1 x2 x3 x4 x5 x6]^T = 0

By selection of the most negative variable (in this case both variables have the
same negative value) the leaving variable is x7. The entering variable is
determined with the optimality condition. For this purpose, the vectors
y^i = B^{-1} a^i (= a^i, since B = I) are computed:

y^1 = [-0.25; -0.125]    y^2 = [0.25; 0.125]     y^3 = [-0.25; 0.375]
y^4 = [0.25; -0.375]     y^5 = [-0.25; -0.125]   y^6 = [-0.5; -0.25]

Then:

θ = min [ -1/(-0.25), -1/(-0.25), -3/(-0.25), -3/(-0.5) ] = min [ 4, 4, 12, 6 ] = 4
              (x1)         (x3)         (x5)        (x6)
The above minimum corresponds to variables x1 or x3. Either can enter the
solution. Select x1 as the entering variable. The new solution is:

x1 = 0.20
x8 = -0.025

Note that x1 violates its upper bound (0.18). Thus, replace it with

x1 = -x'1 + 0.18

Upon substitution, the problem retains the same form; the column of x'1 is the
negative of the column of x1 and the right hand side is adjusted accordingly.
From theory, it is known that this solution is an extended dual basic feasible
solution. As an exercise, the dual feasibility of the solution will be verified by
computing the reduced cost coefficients. The basic variables are x'1 and x8, so that
c_B^T = [-1 0] and c_B^T A_B^{-1} = [-4 0]:

rcc2 = 1 - [-4 0] [0.25; 0.125]   = 2 > 0
rcc3 = 1 - [-4 0] [-0.25; 0.375]  = 0
rcc4 = 1 - [-4 0] [0.25; -0.375]  = 2 > 0
rcc5 = 3 - [-4 0] [-0.25; -0.125] = 2 > 0
rcc6 = 3 - [-4 0] [-0.5; -0.25]   = 1 > 0
rcc7 = 0 - [-4 0] [1; 0]          = 4 > 0
2nd iteration: By selection, the leaving variable is x8. The entering variable is
determined by:

θ = min [ -2/(-0.5), -4/(-0.5) ] = min [ 4, 8 ] = 4
           (x4)        (x7)

The minimum corresponds to variable x4; thus x4 enters and x8 leaves the basis.
The new basis is {x'1, x4} with

A_B^{-1} = [3 2; 1 -2]

and basic solution x'1 = -0.07, x4 = 0.05. Since x'1 is still negative, another
iteration is required.

3rd iteration: By selection, the leaving variable is x'1. The entering variable is
determined by:

θ = min [ -2/(-1), -1/(-2) ] = min [ 2, 0.5 ] = 0.5
           (x5)      (x6)

The minimum corresponds to variable x6; thus x6 enters and x'1 leaves the basis.
The new basis is {x6, x4} with

A_B^{-1} = [-1.5 -1; 1 -2]

and basic solution x6 = 0.035, x4 = 0.05. Since x4, x6 ≥ 0 and all variables are
within their bounds, this is the optimal solution. In terms of the original variables,
this solution is:

x1 = 0.18
x4 = 0.05
x6 = 0.035
B.12 Duality

Consider the linear program in standard form (the primal problem):

Min  z = c^T x
Subject to:  Ax = b,  x ≥ 0

The corresponding dual linear program is:

Max  z = λ^T b
Subject to:  λ^T A ≤ c^T

The above linear programs have the following property (duality theorem of linear
programming): If either of the linear programs has a finite optimal solution, so does
the other. The optimal value of the objective function is the same for both problems.
There are two very important properties that connect the primal and dual linear programs.
The first is the property of complementary slackness.
B.12.1 Complementary Slackness
Consider the primal linear program in standard form and the corresponding dual problem.
Let the optimal solution for the primal problem be:

x = [x_B ; x_r] = [A_B^{-1} b ; 0]

The complementary slackness theorem states that the optimal solution λ of the dual
problem will satisfy the following necessary and sufficient conditions:

For each basic variable i:      x_i > 0  implies  λ^T a_i = c_i
For each nonbasic variable i:   λ^T a_i < c_i  implies  x_i = 0
The implication of this theorem is that once the optimal solution of the primal problem is
known, then the optimal solution of the dual problem can be immediately computed from
the equation:
λ^T A_B = c_B^T
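The equation λ^T A_B = c_B^T says that the dual solution is obtained from one linear solve with the optimal basis matrix. As an illustration, the sketch below uses the optimal basis of Example B.3 (basic variables x1 and x2); the variable names are chosen here.

import numpy as np

A = np.array([[1.0, 2.0, 1.0, 0.0],
              [3.0, 2.0, 0.0, 1.0]])
c = np.array([-3.0, -5.0, 0.0, 0.0])
basic = [0, 1]                                    # optimal basis of Example B.3: x1, x2
lam = np.linalg.solve(A[:, basic].T, c[basic])    # solves A_B^T lambda = c_B
print(lam)                  # optimal dual variables
print(lam @ [8.0, 18.0])    # equals the optimal primal objective, -22.5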
B.12.2 Sensitivity
Consider the primal linear program in standard form and the corresponding dual problem.
Let the optimal solutions for the primal and dual problem be:
x = [x_B ; x_r] = [A_B^{-1} b ; 0],    λ^T = c_B^T A_B^{-1}

Assuming that the solution is not degenerate (all basic variables are strictly
positive), the following holds:

Δz = λ^T Δb
This property enables the computation of the sensitivity of the objective function with
respect to the constants b.
Consider a worst case example problem (involving the coefficients 5, 5^2, 5^3, ..., 5^n).
It can be proven that the number of iterations required for the solution of this
problem using the described simplex algorithm (the entering variable is always the
one with the most negative reduced cost coefficient) is 2^n. The reader is
encouraged to work out this proof by mathematical induction. (Hint: work the
problem for n = 2, then for n = 3. Then examine the common parts of the two
solutions.) Note that even for moderate values of n, i.e. n = 100, the number of
required iterations is N = 2^100, about 1.27 x 10^30!
For such problems the computational requirements of the simplex method are
extremely high. Recently, a new class of methods has been developed for linear
programming which overcomes the drawbacks of the simplex method for large scale
problems. These methods start from a feasible solution, which is an interior point
of the constraint set Ax = b, and project to a point which is closer to the optimal
solution. For this reason these methods are called interior point methods, to be
distinguished from the simplex type methods, which move from vertex to vertex.

Interior point methods have undergone extensive development, with noticeable
milestones established by Khachian [xxx], Karmarkar [xxx], and Fiacco &
McCormick [xxx]. Presently, the most efficient method is a variation of a barrier
method based on the primal dual interior point method. This method is presented
here.
We study the linear problem defined with:

Minimize:   x0 = c^T x
Subject to:
A_e x = b_e
A_i x ≤ b_i
x ≤ h
x ≥ 0

Upon introduction of the slack variables s and w, the problem becomes:

Minimize:   c^T x
Subject to:
A_e x = b_e
A_i x + s = b_i
x + w = h
x > 0,  s > 0,  w > 0        (26)
Let y, π, and ν denote the Lagrange multipliers for the equality, inequality, and
upper bound constraints in Eq. (26). The inequalities are dealt with by logarithmic
barrier functions [5], yielding the following Lagrangian:

L(x, s, w, y, π, ν) = c^T x - μ Σ (ln x_i + ln s_i + ln w_i)
                      - y^T (A_e x - b_e) - π^T (A_i x + s - b_i) - ν^T (h - x - w)

The first order optimality conditions are:

A_e x = b_e
A_i x + s = b_i
x + w = h
A_e^T y + A_i^T π - ν + z = c
X Z e = μ e
S M e = -μ e
W N e = μ e        (27)

where:
X = diag(x_1, ..., x_n),   S = diag(s_1, ..., s_I),   W = diag(w_1, ..., w_n),
Z = diag(z_1, ..., z_n),   M = diag(π_1, ..., π_I),   N = diag(ν_1, ..., ν_n),
e = (1, 1, ..., 1)^T,
z = μ X^{-1} e is the vector of dual slack variables,
n: the dimension of the control variables x,
I: the number of inequality constraints.

Equations (27) are solved iteratively using Newton's method, which consists of
successively solving the linearized equations (27). At each iteration the linearized
system is solved for the corrections (Δx, Δs, Δw, Δy, Δπ, Δν, Δz). The corrections
Δz, Δν, and Δw are eliminated using

Δz = -X^{-1} Z Δx - Z e + μ X^{-1} e
Δν = W^{-1} N Δx - N e + μ W^{-1} e
Δw = -Δx

and the remaining equations are solved for Δx, Δs, Δy, and Δπ.
In order to maintain feasibility, the full Newton step has to be scaled before
updating the variables. Specifically, the step lengths α_P and α_D in the primal
and the dual space are first computed on the basis of feasibility. Next the variables
are updated with:

x^{k+1} = x^k + α_P Δx
s^{k+1} = s^k + α_P Δs
w^{k+1} = w^k + α_P Δw
y^{k+1} = y^k + α_D Δy
π^{k+1} = π^k + α_D Δπ
ν^{k+1} = ν^k + α_D Δν
z^{k+1} = z^k + α_D Δz        (28)
The algorithm terminates when the normalized duality gap (objective function
value of the primal problem minus objective function value of the dual problem)
becomes smaller than a predetermined tolerance ε:

[ c^T x - (b_e^T y + b_i^T π + h^T ν) ] / ( 1 + |b_e^T y + b_i^T π + h^T ν| )  <  ε        (29)
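For readers who want to experiment, the sketch below implements a bare-bones primal-dual interior point iteration for the simpler standard form (min c^T x, Ax = b, x ≥ 0) rather than the full formulation (26). The centering factor, step-length fraction, and starting point are common textbook choices made here, not the specific rules of the method above.

import numpy as np

def step_length(v, dv, frac=0.995):
    """Largest step in [0, 1] keeping v + alpha*dv strictly positive."""
    neg = dv < 0
    if not np.any(neg):
        return 1.0
    return min(1.0, frac * np.min(-v[neg] / dv[neg]))

def pdip_lp(A, b, c, tol=1e-8, max_iter=100):
    """Bare-bones primal-dual interior point method for: min c^T x, A x = b, x >= 0."""
    m, n = A.shape
    x, z, y = np.ones(n), np.ones(n), np.zeros(m)        # strictly interior start
    for _ in range(max_iter):
        r_p = b - A @ x                                  # primal residual
        r_d = c - A.T @ y - z                            # dual residual
        gap = (x @ z) / n                                # average complementarity
        if max(np.linalg.norm(r_p), np.linalg.norm(r_d), gap) < tol:
            break
        r_c = 0.1 * gap - x * z                          # perturbed complementarity, mu = 0.1*gap
        d = x / z
        # Reduced ("normal equations") form of the Newton system:
        dy = np.linalg.solve(A @ (d[:, None] * A.T),
                             r_p + A @ (d * r_d - r_c / z))
        dx = d * (A.T @ dy - r_d) + r_c / z
        dz = (r_c - z * dx) / x
        x += step_length(x, dx) * dx                     # primal step
        alpha_d = step_length(z, dz)                     # dual step
        y += alpha_d * dy
        z += alpha_d * dz
    return x, y, z

# Example B.3 in standard form (slacks x3, x4):
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [3.0, 2.0, 0.0, 1.0]])
b = np.array([8.0, 18.0])
c = np.array([-3.0, -5.0, 0.0, 0.0])
x, y, z = pdip_lp(A, b, c)
print(np.round(x, 4))     # approaches [5, 1.5, 0, 0]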
In addition to the interior point method, the dual simplex method with upper bounds
remains extremely efficient for a certain class of problems.
B.15 Problems
Problem PB.1:
A certain engineering problem results in the following
optimization problem.
Minimize:   J = |Pg2| + |Pg3|
Subject to:
|1.9 + 0.294 Pg2 - 0.156 Pg3| ≤ 1.7
-0.5 ≤ Pg2 + Pg3 ≤ 2.0
-2.1 ≤ Pg2 ≤ 0.15
-1.1 ≤ Pg3 ≤ 0.35
Minimize:   x1 + x2 + 2x3
Subject to:
-0.9x1 + 0.9x2 - 0.5x3 + x4 = 0.05
-1.1x1 + 1.1x2 + 0.5x3 - x5 = 0.05
x1, x2, x3, x4, x5 ≥ 0
Minimize:   6x1 + 4x2
Subject to:
2x1 + 2x2 - x3 = 3
3x1 + x2 - x4 = 2
x1, x2, x3, x4 ≥ 0
Problem PB.6: Solve the following linear program problem using the dual
simplex method with upper bounds:
Minimize:   x4
Subject to:
-0.3x1 - 0.022x2 + 0.022x3 + x4 = -0.013
x1 ≤ 0.15
x2 ≤ 0.0536
x3 ≤ 0.5964
x4
x1, x2, x3, x4 ≥ 0
Problem PB.7: The following linear program results from a certain security
constraint economic dispatch:
Minimize:   x1 + x2 + x3 + x4 + 5x5
Subject to:
-0.21x1 + 0.21x2 - 0.20x3 + 0.20x4 - 0.25x5 + x6 = -0.04
-0.12x1 + 0.12x2 + 0.30x3 - 0.30x4 - 0.125x5 + x7 = -0.05
x1 ≤ 0.20
x2 ≤ 0.20
x3 ≤ 0.15
x4 ≤ 0.05
x5 ≤ 0.35
x1, x2, x3, x4, x5, x6, x7 ≥ 0
(b) Perform one iteration of the dual simplex method with upper bounds.
(c) Compute the new solution and cast problem in a form ready for the next
iteration.
Problem PB.8: Solve the following linear program using the dual simplex
method with upper bounds. Use the following starting solution:
xi = 0, i = 1, 2, ..., 6,   x7 = -0.0875,   x8 = -0.013
Minimize x1 + x2
Subject to 0.269x1 - 0.269x2 + 0.2x3 - 0.2x4 + 0.053x5 - 0.053x6 + x7 = - 0.0875
- 0.3x3 + 0.3x4 + 0.022x5 - 0.022x6 + x8 = - 0.013
x1 ≤ 1.0
x2 ≤ 0.8
x3 ≤ 0.05
x4 ≤ 0.1
x5 ≤ 0.0536
x6 ≤ 0.5964
x1, x2, x3, x4, x5, x6 ≥ 0
Problem PB.9: Consider the following linear programming problem.
Minimize x1 + x2
Subject to 0.25x1 - 0.25x2 + 0.1x3 - 0.1x4 - 0.05x5 + x6 = - 0.05
x1 ≤ 0.1
x2 ≤ 0.5
x3 ≤ 0.18
x4 ≤ 0.12
x5 ≤ 0.5
x6 unbounded
x1, x2, x3, x4, x5, x6 ≥ 0
Consider the following basic solution
x1 = x2 = x3 = x4 = x5 = 0
x6 = - 0.05
(a) Is this basic solution dual feasible? Is this basic solution an extended dual
feasible solution?
(b) Solve the above problem with the dual simplex method with upper bounds.
Problem PB.10: Solve the following linear program using the barrier method
based on the primal - dual interior point method.
Minimize: x0 = x1 + x2 + x3 + x4 + 3x5 + 3x6
Subject to: -0.25x1 + 0.25x2 - 0.25x3 + 0.25x4 - 0.25x5 - 0.5x6 + x7 = -0.05
-0.125x1 + 0.125x2 + 0.375x3 - 0.375x4 - 0.125x5 - 0.25x6 + x8 = -0.05
x1, x2, ..., x6 ≤ 0.18
x1, x2, x3, ..., x8 ≥ 0