European Journal of Operational Research 11 (1982) 367-379
North-Holland Publishing Company

Equivalent weights for lexicographic multi-objective programs: Characterizations and computations

Hanif D. SHERALI
Department of Industrial Engineering and Operations Research, Virginia Polytechnic Institute and State University, Blacksburg, VA 24061, U.S.A.

Received September 1981
Revised December 1981

Thanks are due to Professor A.L. Soyster and two anonymous referees for their input and suggestions in the writing of this paper.

A recent paper [19] demonstrated the existence of a set of equivalent weights for which the optimal solution set of a preemptive priority multi-objective program is precisely equal to the set of optimal solutions to the resulting nonpreemptive program with the objective function given as a linear weighting of the multiple objectives. This paper addresses two further issues. Firstly, for some important special cases or applications, it is demonstrated that not only is the computation of a set of equivalent weights feasible, but it is also highly desirable. Two algorithms are presented to compute a set of equivalent weights. One method is a direct specialization of the approach adopted in [19], whereas the second approach is an alternative technique. The latter method is shown to yield weights of uniformly smaller values than the former method, while being of the same computational complexity, and is hence preferable. Secondly, as opposed to constructing one vector of equivalent weights, a characterization is provided for the entire set of equivalent weights.

1. Introduction

In several applications, the task of mathematical modelling is complicated by the fact that there exists more than one objective function, and these objectives conflict in the sense that an improvement in any one of them may be accompanied by a worsening in the value of some of the others. In such contexts, one formulates a multiple objective mathematical program of the form

maximize {f_i(x), i = 1, ..., r},  subject to x ∈ X  (1)

where f_1(x), ..., f_r(x) are the objective functions and X is a nonempty and closed feasible region.
Several methods have been proposed in order to resolve the conflict between the objective functions in (1), and these methods have been amply surveyed by Hwang and Masud [10], MacCrimmon [15] and Roy [17]. The computational aspects associated with these methods have been discussed by Belenson and Kapur [3], Benayoun, deMontgolfier and Tergny [4] and by Evans and Steuer [7]. Invariably, these methods concentrate on selecting one or more solutions from the set of efficient points, namely, points at which it is not possible to simultaneously improve all the alternative objective function values. For a characterization of such points, the reader is referred to Ecker and Kouada [6] and to Yu and Zeleny [21].
The present paper relates to two specific strategies which have been advocated for decision making in the context of multiple objective programs. These strategies, considered by Yu [20], are called the lexicographic ordering and one-dimensional methods. In the former method, one ranks the objective functions according to some subjective priority so that a marginal improvement for a particular objective preempts arbitrarily large improvements in objectives of subsequent ranks. In the latter approach, one
constructs a real valued utility or value function u[f_1(x), ..., f_r(x)], and then maximizes u(·) over the feasible region X. Though any form of the utility function could be used, typically a linear utility function of the following form is selected:

u[f_1(x), ..., f_r(x)] = Σ_{i=1}^{r} λ_i f_i(x)  (2)

where λ_1, ..., λ_r are nonnegative weights which reflect a subjective evaluation of the objectives f_1(·), ..., f_r(·) respectively.
In the context of goal programming [11], the lexicographic and the one-dimensional methods are known as the preemptive and the nonpreemptive methods respectively. Mathematically, a (linear) goal program is of the form

GP: minimize {L_i d_i^- + U_i d_i^+, i = 1, ..., r},
subject to
g_i · x + d_i^- − d_i^+ = G_i,  i = 1, ..., r,
Dx = d,  x ≥ 0,
d_i^-, d_i^+ ≥ 0,  i = 1, ..., r.

Here, g_i · x, i = 1, ..., r are r linear objective functions, each with a stated goal G_i, i = 1, ..., r respectively. The quantities d_i^- and d_i^+ respectively measure deviations of g_i · x below and above the goal G_i, and the term L_i d_i^- + U_i d_i^+ ascribes a suitable penalty to these deviations. The constraints Dx = d, x ≥ 0 represent the constraints on the problem. Note that the goal program GP is of the form (1) with L_i d_i^- + U_i d_i^+, i = 1, ..., r being the r "objective functions f_1(·), ..., f_r(·)", and the remaining restrictions representing problem constraints. Hence, one may either resolve GP as a preemptive program with objective priority ranking 1, ..., r, or one may convert GP into a nonpreemptive program via some utility function (2).
Now, in order to resolve (1) as a preemptive program with objective priority ranking 1, ..., r, one typically adopts the following procedure known as the sequential method [11]. (Assume in the following that X is nonempty and bounded.) First, one maximizes f_1(x) subject to x ∈ X, and determines an optimal solution x* with, say, f_1(x*) = β_1. Next, one solves the problem of maximizing f_2(x) subject to f_1(x) ≥ β_1 and x ∈ X, and so on. In general, at the qth iteration, one solves

maximize {f_q(x): f_i(x) ≥ β_i, i = 1, ..., q − 1, and x ∈ X}.  (3)

If either (3) has a unique optimum, or if q = r, then the optimal solution to (3) is a preemptive optimum. Otherwise, one proceeds to iteration q + 1. (In case there is no easy way of determining whether or not (3) has a unique optimum, one simply proceeds until q = r.)
One important insight in relation to (3) for the special case in which f_i(·), i = 1, ..., r are linear functions, is that the feasible region of (3) is a face of X. The reason is that {x: f_1(x) ≥ β_1} ∩ X defines a face of X, and {x: f_2(x) ≥ β_2} ∩ {x: f_1(x) ≥ β_1, x ∈ X} defines a face of {x: f_1(x) ≥ β_1, x ∈ X}, which is also a face of X, and so on. Hence, the extreme points of the feasible region in (3) are a subset of the extreme points of X, and the sequential method determines a preemptive solution which is an extreme point of X [21].
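For linear objectives and a polyhedral X, the sequential method can be carried out with any LP solver. A minimal sketch follows, assuming scipy.optimize.linprog as the solver and the maximization form of (1); the function and variable names are illustrative only, and in practice a small tolerance is subtracted from each β_q to guard against round-off.

```python
# Minimal sketch of the sequential (preemptive) method of Section 1 for linear
# objectives max {c_q . x} over X = {x: A_eq x = b_eq, bounds}, q = 1, ..., r.
import numpy as np
from scipy.optimize import linprog

def sequential_method(C, A_eq, b_eq, bounds, tol=1e-8):
    """C: list of objective vectors c_1, ..., c_r in preemptive priority order."""
    fixed_obj, fixed_val = [], []          # accumulated constraints c_i . x >= beta_i
    x = None
    for c in C:
        c = np.asarray(c, dtype=float)
        res = linprog(-c,                  # linprog minimizes, so negate to maximize
                      A_ub=-np.array(fixed_obj) if fixed_obj else None,
                      b_ub=-np.array(fixed_val) if fixed_val else None,
                      A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
        x = res.x
        beta = float(c @ x)                # optimal value of the current priority, cf. (3)
        fixed_obj.append(c)                # keep c . x >= beta (minus a tolerance)
        fixed_val.append(beta - tol)       # ... at all later iterations
    return x                               # a preemptive optimum (an extreme point of X)
```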
A recent paper [19] addressed the question of whether or not there exists a set of (equivalent) weights {λ_1, ..., λ_r} so that the optimal solution set of the nonpreemptive program with a utility function given by (2) is identical to the set of preemptive optimal solutions. The answer turned out to be affirmative for the case of multiple objective linear programs as well as for several discrete programming problems [19]. The importance of this result is twofold. Mathematically, it asserts that the nonpreemptive approach subsumes the preemptive approach for the above cases. Practically, it suggests that if a set of equivalent weights could be conveniently determined in some cases, then one could solve a preemptive program as a nonpreemptive program using standard, available computer packages. Indeed, this is the thrust of the present paper. The main result of [19], which is a refinement of an analogous result for maximal efficient faces due to Ecker, Hegner and Kouada [5], is stated below for the important special case of linear multiple objective programs.

Theorem 1. Consider a multiple objective linear program (MOLP) (1) with f_i(x) ≡ c_i · x, i = 1, ..., r and X = {x: Ax = b, x ≥ 0} ≠ ∅, and with a preemptive priority ordering c_1 · x, ..., c_r · x. Then there exists a scalar M_0 ≥ 0 such that for any M ≥ M_0, the set of weights {λ_i = M^{r−i}, i = 1, ..., r} is a set of equivalent weights in the sense that x̄ is a preemptive optimal solution to MOLP if and only if x̄ solves the problem

maximize {c̄ · x: x ∈ X}  (4)

where

c̄ = Σ_{i=1}^{r} M^{r−i} c_i.  (5)

Proof. See [19].

The development in [19] is directed toward establishing the validity of Theorem 1, with no attempt being made at actually providing a computational framework for solving preemptive priority MOLP's as linear programs. In fact, a counterexample is provided for a previous method [8] which attempts to accomplish this task. The present paper follows up on [19] and addresses two further issues. Firstly, for some important special cases of (1), it is demonstrated that not only can one obtain with a minimal amount of effort a set of equivalent weights for solving a preemptive priority problem (1) as a nonpreemptive program, but for these special cases, it is also computationally highly advisable. Secondly, for the case of problems MOLP, in contrast to determining a single set of equivalent weights, we provide a characterization for the entire set of equivalent weights. These two issues are considered in turn over the following two sections.

2. Computation of a set of equivalent weights

In this section we consider the following multiple objective program (MOP), which is a special case of (1):

MOP: maximize {c_i · x, i = 1, ..., r},
subject to
x ∈ X = {x: Ax = b, 0 ≤ x ≤ u},  (6)
x ∈ Ω.  (7)

Here, c_1, ..., c_r are integer n-dimensional vectors, the matrix A and the vector b have m rows, u is a vector of finite upper bounds, and either one of the following two conditions holds:
(i) Ω ≡ R^n and X ≠ ∅, with every extreme point of X being integer valued; or
(ii) Ω ≡ {x ∈ R^n: x is integer}, and X ∩ Ω ≠ ∅.
Examples of sets which satisfy condition (i) are the well-known unimodular constraint sets which occur in network structured programs [1], whereas sets which satisfy (ii) occur in integer programming problems. In both the network and the discrete case, several alternative optima for a given objective function may exist [2,18]. Hence, among all solutions which optimize the first priority objective, one may seek solutions which optimize the second priority objective, and so on. For example, one may seek to assign jobs to machines to minimize cost, delay, machine wear, error in accuracy, etc., in some priority order. In the context of goal programs GP, one may have a network in which the goals represent demands or supplies at certain nodes, for which an excess or shortage of products is permissible, but at a certain cost. Or, MOP may be some integer goal program [11].
Now, given a preemptive priority ordering c_1 · x, ..., c_r · x, one alternative is to use the standard sequential approach as described in Section 1. On the other hand, suppose that we are able to determine a set of equivalent weights {λ_1, ..., λ_r} such that an optimal solution to the nonpreemptive problem (NPP)

NPP: maximize {c̄ · x: x ∈ X ∩ Ω}  (8)

is a preemptive optimal solution, where

c̄ = Σ_{i=1}^{r} λ_i c_i.  (9)
Then in lieu of the sequential approach, we may opt to solve program NPP (8). Observe the computational advantage of adopting this latter strategy. In the case of condition (i), embodied for example by network structured problems, program (8) is a simple network flow linear programming problem for which highly efficient solution codes exist [12]. The sequential method, on the other hand, would introduce side constraints which would ruin the structure of the problem. Next, in the case of condition (ii), embodied for example by integer programming problems, program NPP (8) is a single linear integer program. As opposed to this, the sequential method would need to solve a series of integer programming problems. Of course, post-optimality analysis via cutting planes is possible for iterations 2, ..., r, but degeneracy problems are likely to be troublesome [18]. Besides, in the presence of dual degeneracy, it may not be easy to check whether or not alternative optimal integer solutions exist at a given iteration, and one may have to proceed up to iteration r before terminating the computations. Section 4 discusses these computational considerations in some further detail. With this motivation, let us proceed to prescribe a set of equivalent weights for Problem MOP under conditions (i) or (ii).
Toward this end, define

X_e = {x: x is an extreme point of X} if (i) holds, and X_e = X ∩ Ω if (ii) holds,  (10)

and let

X*_e = {x ∈ X_e: x is a preemptive optimal solution}.  (11)

Then, since X is bounded, the set of preemptive optimal solutions under condition (i) is simply the convex hull of X*_e [21], and under condition (ii), this set is precisely X*_e. Hence, under either condition (i) or (ii), we need to construct a set of equivalent weights {λ_1, ..., λ_r} such that x ∈ X_e is an optimal solution to NPP given by (8), (9) if and only if x ∈ X*_e. Below, we describe two algorithms which are proven to yield a set of equivalent weights. The computation in both these algorithms revolves around determining an upper bound UB(z) on the range of values taken on by a linear objective z · x over the set X_e of (10). Namely,

UB(z) ≥ max_{x ∈ X_e} z · x − min_{x ∈ X_e} z · x.  (12)

Note that if z = (z_1, ..., z_n), and we let

P(z) = {j: z_j > 0},  N(z) = {j: z_j < 0}  (13)

respectively denote the positive and negative component indices of z, then we may define (recalling that MOP restricts 0 ≤ x ≤ u for some finite u = (u_1, ..., u_n))

UB(z) = Σ_{j ∈ P(z)} u_j z_j − Σ_{j ∈ N(z)} u_j z_j = Σ_{j=1}^{n} u_j |z_j|.  (14)

Observe that sometimes a special structure may permit an alternative, easily derived, tighter upper bounding function UB(·). (An example is provided in Section 4.)
Below, we present a scheme (Algorithm 1) which is a specialization of the development in [19] for determining a set of equivalent weights in the present context. Hence, Theorem 1 validates this scheme. However, we provide an alternative proof for the method because it is straightforward and avoids the complicated issues which arise in the general case. On the other hand, Algorithm 2 adopts a different means for computing the set of equivalent weights. This procedure, which is of the same computational complexity as Algorithm 1, is shown later to yield a set of equivalent weights of uniformly smaller magnitudes. Hence, this scheme is preferable to the one obtained via the specialization of Theorem 1, namely, Algorithm 1.

Algorithm 1. Compute UB(c_i) for i = 1, ..., r and let

γ = max{UB(c_i), i = 1, ..., r},  M = (1 + γ).  (15)

Then, compute

λ_{1i} = M^{r−i} for i = 1, ..., r  (16)

and let c̄ = Σ_{i=1}^{r} λ_{1i} c_i. (Hence, the superscripts on M are exponents.) Then, Problem NPP gives a preemptive optimal solution.
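The bound (14) and Algorithm 1 amount to a few lines of arithmetic. The following is an illustrative sketch, assuming the objectives c_1, ..., c_r and the bound vector u are supplied as numpy arrays; the function names are ours.

```python
# Sketch of the upper bound (14) and of Algorithm 1: gamma, M, and the weights M^(r-i).
import numpy as np

def UB(z, u):
    # (14): sum_j u_j |z_j| bounds the range of z . x over {x: 0 <= x <= u}
    return float(np.dot(u, np.abs(z)))

def algorithm1_weights(C, u):
    """C: objective vectors c_1, ..., c_r in priority order; returns (weights, c_bar)."""
    r = len(C)
    gamma = max(UB(c, u) for c in C)                  # (15)
    M = 1 + gamma
    lam = [M ** (r - i) for i in range(1, r + 1)]     # (16): lambda_{1i} = M^(r-i)
    c_bar = sum(l * np.asarray(c, dtype=float) for l, c in zip(lam, C))
    return lam, c_bar                                 # c_bar is the NPP objective (9)
```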

Lemma 1. Consider the multiple objective program MOP under either condition (i) or (ii) and let c_1 · x, ..., c_r · x be the preemptive priority ordering. Then the set of weights {λ_{11}, ..., λ_{1r}} given by Algorithm 1 is a set of equivalent weights for the nonpreemptive problem NPP.

Proof. Let c̄ be given by (9) with λ_i ≡ λ_{1i} for i = 1, ..., r. Observe that if X*_e has more than one element, then for each x* ∈ X*_e, c̄ · x* = ν, a constant, because c_i · x* is a constant for each such x*. Hence, as argued above, it is sufficient to show that for every choice of x̄ ∈ X_e − X*_e, it is true that c̄ · x̄ < c̄ · x* = ν, where x* ∈ X*_e.
Thus, let x̄ ∈ X_e − X*_e and x* ∈ X*_e be an arbitrary choice. Since x̄ ∉ X*_e, there exists a q ∈ {1, ..., r} such that

c_i · x̄ = c_i · x* for i ≤ q − 1 and c_q · x̄ ≤ c_q · x* − 1.  (17)

Here, we assume c_0 ≡ 0 in case i = 0, and we have used the fact that c_q, x̄ and x* are integer vectors. Hence, using (17),

c̄ · x* − c̄ · x̄ = Σ_{i=1}^{r} M^{r−i}(c_i · x* − c_i · x̄) = Σ_{i=q}^{r} M^{r−i}(c_i · x* − c_i · x̄)
            = M^{r−q}(c_q · x* − c_q · x̄) + Σ_{i=q+1}^{r} M^{r−i}(c_i · x* − c_i · x̄)
            ≥ M^{r−q} − γ Σ_{i=q+1}^{r} M^{r−i},

since by (12) and (15), (c_i · x* − c_i · x̄) ≥ −UB(c_i) ≥ −γ for i = q + 1, ..., r. Hence, substituting j = i − q in the last summation above,

c̄ · x* − c̄ · x̄ ≥ M^{r−q} − γ Σ_{j=1}^{r−q} M^{r−q−j} = M^{r−q} − γ (M^{r−q} − 1)/(M − 1) = M^{r−q} − (M^{r−q} − 1) = 1 > 0

by (15), and the proof is complete.

As an alternative to the above procedure, consider the following scheme for determining a set of equivalent weights. This scheme involves the same overall order of computations as does Algorithm 1, and as Lemma 3 below indicates, the weights for Algorithm 2 are never more than the weights for Algorithm 1.

Algorithm 2. Initialize by letting F_r = c_r, λ_{2r} = 1 and the counter i = r − 1.
Main step: Given F_{i+1}, compute λ_{2i} and F_i according to

λ_{2i} = 1 + UB(F_{i+1}),  (18)
F_i = λ_{2i} c_i + F_{i+1}.  (19)

If i = 1, then solve Problem NPP with c̄ = F_1 to obtain a preemptive optimal solution. Otherwise, decrease i by one and repeat the main step.
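A corresponding sketch of Algorithm 2, under the same assumptions as before (UB as in (14), numpy array inputs, illustrative names), works backwards from F_r = c_r:

```python
# Sketch of Algorithm 2: build F_r, F_{r-1}, ..., F_1 using (18) and (19).
import numpy as np

def UB(z, u):
    # (14): sum_j u_j |z_j|
    return float(np.dot(u, np.abs(z)))

def algorithm2_weights(C, u):
    """C: objective vectors c_1, ..., c_r in priority order; returns (weights, F_1)."""
    r = len(C)
    lam = [0.0] * r
    lam[r - 1] = 1.0                                    # lambda_{2r} = 1
    F = np.asarray(C[r - 1], dtype=float)               # F_r = c_r
    for i in range(r - 2, -1, -1):                      # priorities r-1 down to 1 (0-based)
        lam[i] = 1 + UB(F, u)                           # (18): lambda_{2i} = 1 + UB(F_{i+1})
        F = lam[i] * np.asarray(C[i], dtype=float) + F  # (19): F_i = lambda_{2i} c_i + F_{i+1}
    return lam, F                                       # F = F_1 is the equivalent objective
```

Solving NPP with the returned objective F_1, using whatever LP, network, or integer programming code suits condition (i) or (ii), then yields a preemptive optimum by Lemma 2 below.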

Lemma 2. Consider the multiple objective program MOP under either condition (i) or (ii) and let c_1 · x, ..., c_r · x be the preemptive priority ordering. Then the set of weights {λ_{21}, ..., λ_{2r}} given by Algorithm 2 is a set of equivalent weights for the nonpreemptive problem NPP.

Proof. As in Lemma 1, let x* ∈ X*_e and let x̄ ∈ X_e − X*_e. We need to show that for any such choice of x* and x̄, it is true that c̄ · x* > c̄ · x̄, where c̄ ≡ F_1 is computed by Algorithm 2 according to (9) with λ_i ≡ λ_{2i} for i = 1, ..., r. Again, since x̄ ∉ X*_e, there exists a q ∈ {1, ..., r} such that (17) holds. Consequently,

c̄ · x* − c̄ · x̄ = Σ_{i=1}^{r} λ_{2i}(c_i · x* − c_i · x̄) = Σ_{i=q}^{r} λ_{2i}(c_i · x* − c_i · x̄)
            = λ_{2q}(c_q · x* − c_q · x̄) + Σ_{i=q+1}^{r} λ_{2i}(c_i · x* − c_i · x̄).

Using (17), this yields

c̄ · x* − c̄ · x̄ ≥ λ_{2q} + Σ_{i=q+1}^{r} λ_{2i}(c_i · x* − c_i · x̄) = λ_{2q} + F_{q+1} · x* − F_{q+1} · x̄
            ≥ λ_{2q} − UB(F_{q+1}) = λ_{2q} − (λ_{2q} − 1) = 1 > 0 [from (18)].

This completes the proof.

Lemma 3. Consider the sets of equivalent weights {λ_{11}, ..., λ_{1r}} and {λ_{21}, ..., λ_{2r}} given by Algorithms 1 and 2 respectively. Then λ_{2i} ≤ λ_{1i} for each i = 1, ..., r.

Proof. We prove this result by induction. Observe from (15), (16) and (18) that λ_{1r} = λ_{2r} = 1 and λ_{2,r−1} = 1 + UB(c_r) ≤ 1 + γ = M = λ_{1,r−1}. Hence, assume that λ_{2,r−p} ≤ λ_{1,r−p} for some p ≤ r − 2. We need to show that this implies λ_{2,r−p−1} ≤ λ_{1,r−p−1}.
Using (18), (19), we obtain

λ_{2,r−p−1} = 1 + UB(F_{r−p}) = 1 + UB(F_{r−p+1} + λ_{2,r−p} c_{r−p}).

Now, using (14) and the fact that |θ_1 + θ_2| ≤ |θ_1| + |θ_2| for θ_1, θ_2 ∈ R,

λ_{2,r−p−1} ≤ 1 + UB(F_{r−p+1}) + UB(λ_{2,r−p} c_{r−p})
         = 1 + UB(F_{r−p+1}) + λ_{2,r−p} UB(c_{r−p})
         = λ_{2,r−p} + λ_{2,r−p} UB(c_{r−p})   by (18)
         = λ_{2,r−p}[1 + UB(c_{r−p})] ≤ λ_{2,r−p}(1 + γ) = M λ_{2,r−p} ≤ M λ_{1,r−p}   (induction hypothesis)
         = λ_{1,r−p−1},

and the proof is complete.

In concluding this section, let us provide two pertinent remarks.

Remark 1. Note that while computing the set of equivalent weights via Algorithm 2, say, if it turns out at a particular stage q that λ_{2q} is large enough to create an integer overflow, then the effort up to this stage is not wasted. In other words, the single objective vector F_{q+1} = Σ_{i=q+1}^{r} λ_{2i} c_i available at this stage may be used to equivalently replace c_{q+1}, ..., c_r in the sequential preemptive approach. The reason is that the sequential procedure seeks optima over subsets of X_e, and for solutions in X_e, the above analysis has basically established that while optimizing F_{q+1} over X_e or a subset thereof, an improvement in c_i preempts any feasible improvements in c_{i+1}, ..., c_r for any i ∈ {q + 1, ..., r − 1}.

Remark 2. Consider the multiple objective version of the Extreme Point Mathematical Program [13]

maximize {c_i · x, i = 1, ..., r},
subject to Dx ≤ d,
x ∈ X_e of (10) under condition (i),

where the objectives are indexed according to a preemptive priority ranking. Then it is easy to show that the set of equivalent weights determined in this section is also valid for converting this program into an ordinary Extreme Point Mathematical Program with objective function coefficients given by (9). Applications of such problems include network problems with side constraints and production scheduling problems using Leontief substitution systems.

3. Characterization of the set of equivalent weights

In this section, we consider the multiple objective linear program

MOLP: maximize {c_i · x, i = 1, ..., r},
subject to x ∈ X = {x: Ax = b, x ≥ 0}  (20)

where the multiple objective vectors c_1, ..., c_r have the preemptive priority ranking 1, ..., r and where X is a nonempty and compact polyhedron. (To avoid complications, remarks pertaining to the generalization of the development in this section to the case of unbounded X-sets are given at the end of this section.) As before, let us define the sets

X_e = {x: x is an extreme point of X},  (21)
X*_e = {x* ∈ X_e: x* is a preemptive optimal solution for MOLP}.  (22)

The task undertaken in the present section is to characterize the entire set of (equivalent) weights {λ_1, ..., λ_r} for which any optimal solution to

LP(c): maximize {c · x: x ∈ X}  (23)

with c = Σ_{i=1}^{r} λ_i c_i is a preemptive optimum, and vice versa.
Toward this end, let us review a well-known result in linear programming [1]. Consider the linear program (23) and let x_e be an extreme point of X, i.e., x_e ∈ X_e. Then the set of objective vectors c for which x_e is an optimal solution to LP(c) is a closed, polyhedral convex cone of the form

{c ∈ R^n: cD(x_e) ≥ 0}  (24)

where D(x_e) is n × (n − m), given that A is m × n of rank m. Note that in terms of the linear programming simplex tableau, the condition cD(x_e) ≥ 0 merely asserts that the appropriate reduced cost coefficients for the nonbasic variables are nonnegative. Clearly, D(x_e) depends on x_e.
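One concrete choice of D(x_e), stated here as an assumption since the text leaves the construction implicit: for each nonbasic index j, place the updated column B^{-1}a_j in the basic rows and −1 in row j, so that the components of cD(x_e) are the reduced costs z_j − c_j of the nonbasic variables for the maximization problem (23).

```python
# Hypothetical sketch of one choice of D(x_e) for a basis of A (m x n, rank m):
# column k (for nonbasic j) has B^{-1} a_j in the basic rows and -1 in row j, so
# (c D)_k = c_B B^{-1} a_j - c_j, the reduced cost of x_j for a maximization LP.
import numpy as np

def D_matrix(A, basic_idx):
    m, n = A.shape
    basic_idx = list(basic_idx)
    nonbasic_idx = [j for j in range(n) if j not in basic_idx]
    B_inv = np.linalg.inv(A[:, basic_idx])
    D = np.zeros((n, n - m))
    for k, j in enumerate(nonbasic_idx):
        D[basic_idx, k] = B_inv @ A[:, j]
        D[j, k] = -1.0
    return D        # x_e is optimal for LP(c) if and only if c @ D >= 0 componentwise
```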
Now, consider the set X*_e and note that X*_e may possibly be of cardinality greater than one. For each x* ∈ X*_e, let D(x*) be as defined above and denote the intersection over sets of the type (24) for x* ∈ X*_e by

C(X*_e) = ∩_{x* ∈ X*_e} {c ∈ R^n: cD(x*) ≥ 0}.  (25)

Since X*_e is a finite discrete set, it follows that C(X*_e) is also a closed, polyhedral convex cone. Observe that any c ∈ C(X*_e) has the property that every x* ∈ X*_e (by virtue of the intersection) is an optimal solution to LP(c). However, LP(c) may also have alternative optimal solutions among points in X_e − X*_e, i.e., among
nonpreemptive points. Hence, we need to exclude some objective vectors from (25). Of course, the first question is whether there exist vectors c in C(X*_e) for which the set of optimal extreme point solutions to LP(c) is precisely X*_e, so that the set of optimal solutions is then the convex hull of X*_e. Theorem 1 asserts that the answer is yes. In fact, as argued previously, the sequential approach determines the convex hull of X*_e as a face of X. Therefore, there exists a supporting hyperplane of X whose intersection with X is precisely the convex hull of X*_e. In other words, there exists a c* such that the convex hull of X*_e is precisely the set of alternative optimal solutions to LP(c*).
Now, we are interested in characterizing the entire set of objective vectors c for which the set of alternative optimal solutions to LP(c) is precisely the convex hull of X*_e. Theorem 2 below establishes this set of objective vectors to be the relative interior of C(X*_e), denoted r.i. C(X*_e). Here, for a nonempty convex set S ⊆ R^n, r.i. S may be defined as follows [16, Theorem 6.4, p. 47]:

r.i. S = {y ∈ S: for each x ∈ S, [(1 − μ)x + μy] ∈ S for some μ > 1}.  (26)
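As a small illustration of (26), not from the original text: take S = {(t, 0): t ≥ 0} ⊂ R^2. The point (1, 0) qualifies, since for any (t, 0) ∈ S one can choose μ > 1 close enough to 1 that (1 − μ)t + μ remains nonnegative, whereas (0, 0) fails the test against (1, 0) because (1 − μ) · 1 < 0 for every μ > 1. Thus r.i. S = {(t, 0): t > 0}, even though the interior of S in R^2 is empty; it is in this sense that the relative interior of the cone C(X*_e) should be read below.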
However, for the purpose of defining a set of equivalent weights, we must further consider only those objective vectors which in addition are expressible in the form Σ_i λ_i c_i. Thus, given Theorem 2, the set of equivalent weights may be characterized as

A(X*_e) = {(λ_1, ..., λ_r): Σ_{i=1}^{r} λ_i c_i ∈ r.i. C(X*_e)}.

Again, Theorem 1 asserts (implicitly) that A(X*_e) is nonempty. Hence, if (λ_1, ..., λ_r) ∈ A(X*_e) and if c̄ = Σ_{i=1}^{r} λ_i c_i, then LP(c̄) has an optimal solution set given precisely by the convex hull of X*_e. Conversely, if c̄ = Σ_{i=1}^{r} λ_i c_i is such that the convex hull of X*_e is precisely the set of optimal solutions to LP(c̄), then (λ_1, ..., λ_r) ∈ A(X*_e).

Theorem 2. Suppose that X*_e represents the set of all extreme point optimal solutions to LP(c*) for some objective vector c* (which is known to exist as demonstrated above). Let C(X*_e) be as defined by Eq. (25). Then the set of alternative optimal solutions to LP(c) is precisely the convex hull of X*_e if and only if c ∈ r.i. C(X*_e).

Proof. Let c ∈ r.i. C(X*_e). Since c ∈ C(X*_e), the convex hull of X*_e is by definition (25) a subset of the set of optimal solutions to LP(c). By contradiction, let x̄ ∉ convex hull of X*_e also solve LP(c). Thus,

c · x̄ = c · x* for each x* ∈ X*_e.  (27)

Further, by definition,

c* · x̄ < c* · x* for each x* ∈ X*_e.  (28)

Now, since c* ∈ C(X*_e) and c ∈ r.i. C(X*_e), (26) asserts that there exists a μ* > 1 such that [(1 − μ*)c* + μ*c] ∈ C(X*_e). However, multiplying (27) by μ* and (28) by (1 − μ*) < 0 and adding, we obtain

[(1 − μ*)c* + μ*c] · x̄ > [(1 − μ*)c* + μ*c] · x* for each x* ∈ X*_e,

which contradicts the fact that [(1 − μ*)c* + μ*c] ∈ C(X*_e).
Conversely, for some objective vector c, suppose that the convex hull of X*_e represents precisely the set of all alternative optimal solutions to LP(c) and suppose that, on the contrary, c ∉ r.i. C(X*_e). Then by (26), there exists a ĉ ∈ C(X*_e) such that [(1 − μ)ĉ + μc] ∉ C(X*_e) for each μ > 1. Now, since both c and ĉ belong to C(X*_e), we have c · x* = δ_1 and ĉ · x* = δ_2 for each x* ∈ X*_e, for some δ_1, δ_2. Hence,

[(1 − μ)ĉ + μc] · x* = δ_2(1 − μ) + δ_1 μ for each μ > 1,  (29)

which is a constant for each x* ∈ X*_e. Thus, each x* ∈ X*_e has the same value for the objective [(1 − μ)ĉ + μc] · x.
Now, in the trivial case that X*_e ≡ X_e of (21), the proof follows because (29) implies that for any μ > 1, [(1 − μ)ĉ + μc] ∈ C(X*_e), a contradiction. Therefore, let us suppose that X*_e is a proper subset of X_e. Then by hypothesis, there
exists an ε_2 > 0 satisfying

c · x* ≥ c · x + ε_2 for each x* ∈ X*_e, x ∈ X_e − X*_e.  (30)

Now, let ∞ > ε_1 ≥ 0 be the range of ĉ · x over X_e. Then,

ĉ · x* ≤ ĉ · x + ε_1 for each x* ∈ X*_e, x ∈ X_e − X*_e.  (31)

For μ > 1, multiplying (30) by μ and (31) by (1 − μ) < 0 and adding, we obtain for each x* ∈ X*_e and each x ∈ X_e − X*_e,

[(1 − μ)ĉ + μc] · x* ≥ [(1 − μ)ĉ + μc] · x + με_2 + (1 − μ)ε_1 > [(1 − μ)ĉ + μc] · x  (32)

for each μ > 1 if ε_2 ≥ ε_1, or for each 1 < μ < ε_1/(ε_1 − ε_2) if ε_2 < ε_1. But for any such μ it follows from (29) and (32) that the convex hull of X*_e is precisely the set of all alternative optimal solutions to the linear program LP[(1 − μ)ĉ + μc]. This leads to a contradiction since we have found a μ > 1 for which [(1 − μ)ĉ + μc] ∈ C(X*_e), and the proof is complete.

In concluding this section, let us make a remark about the development in case X is an unbounded set. If the set of preemptive optimal solutions is bounded, then the same analysis as above holds. Thus, suppose that the set of preemptive optimal solutions is unbounded. In this case, we need to consider bounds u so that x < u for each extreme point x of X. Then, for the sake of the development, we need to replace X by the set X ∩ {x: x ≤ u} and accordingly define X_e and X*_e based on this new set X via (21) and (22) respectively. Now suppose we characterize the set of equivalent weights as A(X*_e) using the new definitions of X_e and X*_e. If {λ_1, ..., λ_r} ∈ A(X*_e), then the convex hull of X*_e is precisely the set of optimal solutions to LP(c̄) with c̄ = Σ_{i=1}^{r} λ_i c_i. But since MOLP is assumed to have a preemptive optimum, at least one point x* ∈ X*_e satisfies x* < u. Consequently, if the constraints x ≤ u are removed from LP(c̄), then the points in the convex hull of X*_e remain optimal solutions. Furthermore, the additional alternative optimal solutions to LP(c̄) are those which belong to X ∩ {affine hull of X*_e}. But this set of optimal solutions to LP(c̄) is precisely the set of preemptive optimal solutions. Conversely, if {λ_1, ..., λ_r} is a set of equivalent weights, then with c̄ = Σ_{i=1}^{r} λ_i c_i, the problem LP(c̄) over the set X ∩ {x: x ≤ u} must have the convex hull of X*_e as precisely the set of optimal solutions, by the definition of equivalent weights. By Theorem 2, it follows that {λ_1, ..., λ_r} ∈ A(X*_e).

4. Numerical illustrations and computational considerations

In this section, we will illustrate the algorithmic development of Section 2 with a numerical example, and will discuss related computational considerations. In this regard, the issue under consideration is a comparison between obtaining a set of equivalent weights via Algorithm 2 and the subsequent resolution of one program NPP (Eq. (8)), versus the resolution of (up to) r programs via postoptimality analysis using the sequential approach described in Section 1.
First, consider problems MOP characterized by condition (ii). Using the sequential approach in this case is complicated by the fact that the nonconvexity of the problem does not permit an easy way of performing postoptimality analyses. One may choose to totally enumerate all alternative optimal solutions (using a branch and bound technique [18]) for the objective c_1 · x, and then search among these solutions for a preemptive optimum. Since combinatorial discrete programs can turn out to have several alternative optimal solutions, such a strategy may be computationally burdensome. Another option available is to perform postoptimality analysis using Gomory type fractional or all-integer cutting planes [18]. It has been the experience of several researchers (see [9,18], for example) that these techniques introduce degeneracy into the problem, which plagues it in that hundreds of cutting planes may be required to update the solution to a problem at a given stage. Besides, there are some discrete programs such as fixed charge location problems, set covering and partitioning problems, integer knapsack problems, etc., which possess a special structure, and for which there exist specially designed techniques capable of solving larger than usual problems [18]. In these cases, the side constraints introduced by the sequential approach become undesirable. On the other hand, Algorithm 2 easily computes an equivalent objective through simple arithmetic operations and the upper bounding function (14). Hence, in this case, the advantage in solving NPP is clear.
Next, consider problems MOP characterized by condition (i). Since in this case we are dealing with linear programs, postoptimality analysis is readily implementable. However, particularly for specially structured sets X (e.g., network structured problems), the sequential approach indeed involves additional effort. This point is addressed next.
One option available to implement the sequential method is as follows. First, solve the linear program maximize {c_1 · x: x ∈ X}. For the optimal basis obtained, compute the dual solution using the objective c_2 · x. Now, maintaining the dual solutions for both objectives 1 and 2, perform simplex pivots by entering those nonbasic variables which lead to no change in the value of objective 1, but improve the value of objective 2. In this manner, after having resolved objectives 1, 2, ..., i (for i < r), and having dual solutions corresponding to the current primal basic feasible solution for each of the objectives 1, 2, ..., i, i + 1, perform simplex iterations which leave objectives 1, 2, ..., i unchanged in value, but improve the value of objective i + 1, until no further iterations of this type are possible. At completion, by convexity, a preemptive optimal solution would be available. Since network structured programs typically have several alternative optimal solutions (see [12], for example), one may typically have to continue this process until the last objective function r.
Now, let us examine this strategy. Two points are worth noting. One is that a special code will have to be
designed to adopt this approach. That is, a standard computer package available for solving, say, network
problems cannot be used. Packages of this type are highly sophisticated [12], and are not easily modified by
an analyst who is not conversant with the special data and list structures used by such codes. Secondly,
maintaining the several dual solutions and searching through the reduced costs using each of these dual
solutions sequentially, can be time consuming.
Alternatively, one may adopt the following strategy, which maintains the structure of X (and hence permits one to use the special, standard computer packages), but involves somewhat additional effort. This is the Dantzig-Wolfe [1] method, which rewrites the problem maximize {c_i · x: c_j · x ≥ β_j for j = 1, ..., i − 1, and x ∈ X} as the following problem MP(e) in the variables γ_1, ..., γ_e:

MP(e): maximize Σ_{k=1}^{e} γ_k (c_i · x^k),
subject to Σ_{k=1}^{e} γ_k (c_j · x^k) ≥ β_j,  j = 1, ..., i − 1,  (33)
       Σ_{k=1}^{e} γ_k = 1,  (34)
       γ_1, ..., γ_e ≥ 0,

where x^1, ..., x^e are the extreme points of X. Problem MP is known as the master program, and it has written x ∈ X as a convex combination of its extreme points. Since e can be very large, one solves a restriction MP(q) of MP(e), using only a few extreme points x^1, ..., x^q, say. Letting π_j*, j = 1, ..., i − 1 and π_0* be the corresponding optimal dual variables associated with (33) and (34) respectively, one determines if any of the other extreme points of X need be considered by pricing out the best point [1] via the following subproblem SP:

SP: maximize {[c_i − Σ_{j=1}^{i−1} π_j* c_j] · x: x ∈ X}.  (35)

If the optimal objective value of (35) exceeds π_0*, then the corresponding optimal extreme point of X is
introduced into MP via an associated γ-variable. Otherwise, the problem is solved.
Observe that optimization over X occurs only in (35), and hence, its structure is preserved. More importantly, note from the objective in (35) that the method is actually using weighted sums of objectives (though not "equivalent" weights) in sequentially determining a preemptive optimum. The extra computational effort involved here is clearly apparent.
As an example, consider a situation in which three jobs are to be assigned to three machines in a one-to-one fashion (assignment problem). The objectives, in priority order, are to minimize total cost, total machining time, and a total measure of machining error. These three objectives are given below, where the entry in row i, column j designates the "penalty" for assigning job i to machine j. A solution will be represented by the permutation (i, j, k) of 1, 2, 3, where jobs 1, 2 and 3 are to be assigned to machines i, j, k respectively for this solution.

c_1, c_2, c_3: [three 3 × 3 penalty matrices; their entries are illegible in this copy, but the row maxima and minima needed below appear in the UB(·) computations.]
Algorithm 2. To implement this method, one may use UB(·) as defined by (14). However, another UB(·) which satisfies (12) in this case is one which sums up the maximum entries in each row, say, and subtracts from this the sum of the minimum entries in each row of the cost matrix.
Thus, we set F_3 ≡ c_3, and compute UB(c_3) = (4 + 3 + 5) − (2 + 2 + 2) = 6. Consequently, λ_22 = 1 + 6 = 7 and F_2 = 7c_2 + c_3, for which UB(F_2) = (30 + 37 + 19) − (9 + 17 + 9) = 51. Hence, finally, λ_21 = 1 + 51 = 52 and F_1 = 52c_1 + F_2 = 52c_1 + 7c_2 + c_3. The optimal assignment for this matrix is (3, 1, 2), which is the optimal preemptive solution.

Note that Algorithm 1 computes UB(c_1) = (4 + 2 + 3) − (2 + 1 + 0) = 6, UB(c_2) = (4 + 5 + 2) − (1 + 2 + 1) = 7 and UB(c_3) = (4 + 3 + 5) − (2 + 2 + 2) = 6. Hence, it sets γ = max{6, 7, 6} = 7 and M = 7 + 1 = 8. The equivalent weighted objective it derives is 64c_1 + 8c_2 + c_3, as opposed to 52c_1 + 7c_2 + c_3 obtained by Algorithm 2 above.
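Once the weights are available, NPP for this example is a single standard assignment problem. A minimal sketch, assuming scipy.optimize.linear_sum_assignment as the solver and numpy arrays c1, c2, c3 holding the three penalty matrices:

```python
# Solve NPP for the example: one assignment problem with cost matrix 52 c1 + 7 c2 + c3.
import numpy as np
from scipy.optimize import linear_sum_assignment

def preemptive_assignment(c1, c2, c3):
    F1 = 52 * c1 + 7 * c2 + c3                 # equivalent objective from Algorithm 2
    rows, cols = linear_sum_assignment(F1)     # minimizes total weighted penalty
    return tuple(cols + 1)                     # machine assigned to each job, 1-based
```

With the paper's data this would return the assignment (3, 1, 2) reported above.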
Next, consider the first strategy outlined in this section for the sequential approach in the present case. Let the assignment constraint set be X = {x: Σ_{j=1}^{3} x_ij = 1 for i = 1, 2, 3, Σ_{i=1}^{3} x_ij = 1 for j = 1, 2, 3, x ≥ 0}, where x_ij = 1 if job i is assigned to machine j and is zero otherwise. (All extreme points of X are zero-one assignment solutions [1].) Let w^i = (w^i_1, ..., w^i_6) be the dual variables associated with the 6 constraints in X, for the objective vector c_i, i = 1, 2, 3. Then, solving min{c_1 · x: x ∈ X}, we obtain the assignment (1, 2, 3) (with degenerate basic variables x_21 and x_32). For this, the dual solution is w^1 = (2, 1, 0, 0, −1, 0), and we compute w^2 = (5, 4, 1, 2, −1, 0). Pricing out the nonbasic variables, we find that entering x_23 into the basis leaves objective 1 unchanged but improves objective 2. Performing this pivot we obtain the assignment (1, 3, 2) with degenerate basic variables x_21 and x_33. The associated dual solutions are w^1 = (2, 1, 0, 0, −1, 0) and w^2 = (4, 3, 1, 1, −1, 0). Checking the nonbasic variables, we find that no improvement in objective 2 is possible without worsening objective 1. Thus, we compute w^3 = (4, 3, 2, 0, −3, 0). Now, we find on checking the nonbasic variables that entering x_13 into the basis improves objective 3, leaving objectives 1, 2 unchanged. This pivot leads to the assignment (3, 1, 2), with degenerate basic variables x_11 and x_33. Updating the vector w^3, we now have w^1 = (2, 1, 0, 0, −1, 0), w^2 = (4, 3, 1, 1, −1, 0) and w^3 = (2, 1, 2, −2, −3, 0). Checking the nonbasic variables, we find that no improvement in objective 3 is possible without worsening objectives 1 or 2, and we terminate with (3, 1, 2) as a preemptive optimum. The extra
pricing of nonbasic variables, and pivoting effort, plus the fact that a standard network code would have to be modified, compares unfavorably with the simpler nonpreemptive approach.
Finally, consider briefly the Dantzig-Wolfe approach. To begin with, we solve min{c_1 · x: x ∈ X} to get x^1 ≡ (1, 2, 3), with β_1 ≡ c_1 · x^1 = 4. The method then proceeds as follows, using the notation of MP(·) and SP introduced earlier, except that the problems are minimizations, and constraints (33) are of the ≤ type:
MP(1) with i = 2, β_1 = 4 gives π_1* = 0, π_0* = 9, objective value = 9.
SP with objective function (c_2 − 0c_1) · x gives x^2 ≡ (2, 1, 3). A column γ_2 corresponding to x^2 is introduced into MP(·) since c_2 · x^2 = 4 < π_0* = 9.
MP(2) with i = 2, β_1 = 4 gives π_1* = −5, π_0* = 29, objective value = 9.
SP with objective (c_2 + 5c_1) · x gives x^3 ≡ (1, 3, 2). Since (c_2 + 5c_1) · x^3 = 28 < π_0* = 29, we enter γ_3 corresponding to x^3 into MP(·).
MP(3) with i = 2, β_1 = 4 gives π_1* = −4, π_0* = 24, objective value = 8.
SP with objective (c_2 + 4c_1) · x has a value = π_0* = 24. So, we now consider the third objective.
MP(3) with i = 3, β_1 = 4 and β_2 = 8 gives π_1* = −5, π_2* = 0, π_0* = 32, objective value = 12.
SP with objective (c_3 + 5c_1) · x gives x^4 ≡ (1, 2, 3) with (c_3 + 5c_1) · x^4 = 28 < π_0* = 32. A column γ_4 corresponding to x^4 is entered into MP.
MP(4) with i = 3, β_1 = 4, β_2 = 8 gives π_1* = −21, π_2* = −4, π_0* = 128, objective value = 12.
SP with objective (c_3 + 21c_1 + 4c_2) · x gives x^5 = (3, 1, 2) with (c_3 + 21c_1 + 4c_2) · x^5 = 126 < π_0* = 128. A column γ_5 corresponding to x^5 is entered into MP.
MP(5) with i = 3, β_1 = 4, β_2 = 8 gives π_1* = −11, π_2* = −2, π_0* = 70, objective value = 10.
SP with objective (c_3 + 11c_1 + 2c_2) · x gives an optimal value = π_0* = 70 and hence, the procedure terminates. Since γ_5 = 1 at the final master program MP(5), the optimal preemptive solution is x^5 ≡ (3, 1, 2). Although the subproblems preserve the structure of X, the additional effort here is clearly evident.
In conclusion, we note that the use of equivalent weights, inexpensively obtained for the cases discussed in this paper, facilitates the resolution of the preemptive priority multi-objective program. Further, as noted in Remark 1 of Section 2, if r is large enough to cause an integer overflow in the weights, one can always aggregate the objective vectors c_r, c_{r−1}, ..., c_i, for as small an i ≥ 1 as possible, using the equivalent weights. This aggregated objective can then serve as a last priority objective in a sequential approach.

References

[1] M.S. Bazaraa and J.J. Jarvis, Linear Programming and Network Flows (Wiley, New York, 1977).
[2] M.S. Bazaraa and H.D. Sherali, A versatile scheme for ranking the extreme points of an assignment polytope, Naval Res. Logist. Quart., to appear.
[3] S.M. Belenson and K.C. Kapur, An algorithm for solving multicriterion linear programming problems with examples, Operational Res. Quart. 24 (1) (1973) 65-77.
[4] R. Benayoun, J. deMontgolfier and J. Tergny, Linear programming with multiple objective functions: Step method, Math. Programming 1 (1971) 366-375.
[5] J.G. Ecker, N.S. Hegner and I.A. Kouada, Generating all maximal efficient faces for multiple objective linear programs, J. Optimization Theory Appl. 30 (3) (1980) 353-381.
[6] J.G. Ecker and I.A. Kouada, Finding efficient points for linear multiple objective programs, Math. Programming 8 (3) (1975) 375-377.
[7] J.P. Evans and R.E. Steuer, A revised simplex method for linear multiple objective programs, Math. Programming 5 (1) (1973) 54-72.
[8] D.B. Field, Goal programming for forest management, Forest Sci. 19 (2) (1973) 125-135.
[9] R.S. Garfinkel and G.L. Nemhauser, Integer Programming (Wiley, New York, 1972).
[10] C.L. Hwang and A.S.M. Masud, Eds., Multiple Objective Decision Making - Methods and Applications, Lecture Notes in Economics and Mathematical Systems 164 (Springer, Berlin, 1979).
[11] J.P. Ignizio, Goal Programming and Extensions (Lexington Books, 1976).
[12] J.L. Kennington and R.V. Helgason, Algorithms for Network Programming (Wiley, New York, 1980).
[13] M.J.L. Kirby, H.R. Love and K. Swarup, Extreme point mathematical programming problems, Management Sci. 18 (9) (1972) 540-549.
[14] J. Kornbluth, A survey of goal programming, Omega 1 (1973) 193-205.
[15] K.R. MacCrimmon, An overview of multiple objective decision making, in: J. Cochrane and M. Zeleny, Eds., Multiple Criteria Decision Making (University of South Carolina Press, 1973).
[16] R.T. Rockafellar, Convex Analysis (Princeton University Press, 1970).
[17] B. Roy, Problems and methods with multiple objective functions, Math. Programming 1 (1971) 239-266.
[18] H. Salkin, Integer Programming (Addison-Wesley, Reading, MA, 1975).
[19] H.D. Sherali and A.L. Soyster, Preemptive and non-preemptive multi-objective programming: Relationships and counterexamples, J. Optimization Theory Appl., to appear.
[20] P.L. Yu, Domination structures and nondominated solutions, in: G. Leitmann and A. Marzollo, Eds., Multicriteria Decision Making (Springer, Berlin, 1975).
[21] P.L. Yu and M. Zeleny, The set of all nondominated solutions in linear cases and a multicriteria simplex method, J. Math. Anal. Appl. 49 (2) (1975) 430-468.
