5.1 Necessary conditions for an interior optimum


One variable
From your previous study of mathematics, you probably know that if the function f of a single variable is
differentiable then there is a relationship between the solutions of the problem
max_x f (x) subject to x ∈ I,
where I is an interval of numbers, and the points at which the first derivative of f is zero. What precisely is
this relationship?
We call a point x such that f '(x) = 0 a stationary point of f . Consider the cases in the three figures.
In the left figure, the unique stationary point x* is the global maximizer.
In the middle figure, there are three stationary points: x*, x', and x''. The point x* is the global
maximizer, while x' is a local (though not global) minimizer and x'' is a local (but not global) maximizer.
In the right figure, there are two stationary points: x' and x''. The point x' is neither a local maximizer
nor a local minimizer; x'' is a global minimizer.
We see that
- a stationary point is not necessarily a global maximizer, or even a local maximizer, or even a local optimizer of any sort (maximizer or minimizer) (consider x' in the right-hand figure), and
- a global maximizer is not necessarily a stationary point (consider a in the right-hand figure).
That is, being a stationary point is neither a necessary condition nor a sufficient condition for solving the
problem. So what is the relation between stationary points and maximizers?
Although a maximizer may not be a stationary point, the only case in which it is not is when it is one of the
endpoints of the interval I on which f is defined. That is, any point interior to this interval that is a maximum
must be a stationary point.
Proposition
Let f be a differentiable function of a single variable defined on the interval I. If a point x in the interior
of I is a local or global maximizer or minimizer of f then f '(x) = 0.
This result gives a necessary condition for x to be a maximizer (or a minimizer) of f : if x is a maximizer (or a
minimizer) and lies in the interior of I then x is a stationary point of f . The condition is obviously not sufficient
for a point to be a maximizer: it is satisfied also, for example, at points that are minimizers. Because the
first derivative is involved, we refer to the condition as a first-order condition.
Thus among all the points in the interval I, only the endpoints (if any) and the stationary points of f can be
maximizers of f . Most functions have a relatively small number of stationary points, so the following
procedure to find the maximizers is useful.
Procedure for solving a single-variable maximization problem on an interval
Let f be a differentiable function of a single variable and let I be an interval of numbers. If the problem
max_x f (x) subject to x ∈ I has a solution, it may be found as follows.
1. Find all the stationary points of f (the points x for which f '(x) = 0) that are in the constraint set I, and calculate the value of f at each such point.
2. Find the values of f at the endpoints, if any, of I.
3. The points x you have found at which the value f (x) is largest are the maximizers of f .
The variant of this procedure in which the last step involves choosing the points x at which f (x) is smallest
may be used to solve the analogous minimization problem.
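For a function with a closed form, the procedure can be carried out mechanically. The following is a minimal sketch in Python using sympy; the library choice, the function name solve_interval_max, and the illustrative objective x³ − 3x are assumptions rather than part of the text, and the sketch presumes that f '(x) = 0 can be solved exactly and that the problem has a solution.

```python
# A minimal sketch of the procedure, assuming sympy can solve f'(x) = 0 exactly
# and that the problem has a solution (e.g. by the extreme value theorem).
import sympy as sp

def solve_interval_max(f_expr, x, a, b):
    """Maximize f_expr over [a, b] by comparing stationary points and endpoints."""
    # Step 1: stationary points of f that lie in the constraint set [a, b].
    stationary = list(sp.solveset(sp.diff(f_expr, x), x, sp.Interval(a, b)))
    # Step 2: add the endpoints of the interval.
    candidates = stationary + [sp.sympify(a), sp.sympify(b)]
    # Step 3: the candidates at which f is largest are the maximizers.
    values = [(c, f_expr.subs(x, c)) for c in candidates]
    best = max(v for _, v in values)
    return [c for c, v in values if v == best], best

x = sp.Symbol('x', real=True)
# Illustrative problem (not from the text): maximize x**3 - 3*x on [0, 2].
# The stationary point in [0, 2] is x = 1 with value -2; the endpoints give 0 and 2.
print(solve_interval_max(x**3 - 3*x, x, 0, 2))   # ([2], 2)
```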
Example
Consider the problem
max_x x² subject to x ∈ [−1, 2].
This problem satisfies the conditions of the extreme value theorem, and hence has a solution. Let f (x)
= x². We have f '(x) = 2x, so the function has a single stationary point, x = 0, which is in the
constraint set. The value of the function at this point is f (0) = 0. The values of f at the endpoints of
the interval on which it is defined are f (−1) = 1 and f (2) = 4. Thus the global maximizer of the
function on [−1, 2] is x = 2 and the global minimizer is x = 0.
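The same arithmetic can be checked directly; the lines below are a sketch using sympy (an assumed choice of tool, not part of the text).

```python
import sympy as sp

x = sp.Symbol('x', real=True)
f = x**2

# Stationary points of f that lie in the constraint set [-1, 2].
print(sp.solveset(sp.diff(f, x), x, sp.Interval(-1, 2)))   # {0}

# Values at the stationary point and at the two endpoints.
for c in (0, -1, 2):
    print(c, f.subs(x, c))   # 0 -> 0, -1 -> 1, 2 -> 4
# The largest value, 4, is attained at x = 2 (the maximizer); the smallest, 0, at x = 0.
```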
Example
Consider the problem
max_x −x² subject to x ∈ (−∞, ∞).
This problem does not satisfy the conditions of the extreme value theorem, so that the theorem does
not tell us whether it has a solution. Let f (x) = −x². We have f '(x) = −2x, so that the function has a
single stationary point, x = 0, which is in the constraint set. The constraint set has no endpoints, so x =
0 is the only candidate for a solution to the problem. We conclude that if the problem has a solution
then the solution is x = 0. In fact, the problem does have a solution: we have f (x) ≤ 0 for all x and
f (0) = 0, so the solution is indeed x = 0.
Example
Consider the problem
max_x x² subject to x ∈ (−∞, ∞).
Like the problem in the previous example, this problem does not satisfy the conditions of the extreme
value theorem, so that the theorem does not tell us whether it has a solution. Let f (x) = x². We have
f '(x) = 2x, so that the function has a single stationary point, x = 0, which is in the constraint set. The
constraint set has no endpoints, so x = 0 is the only candidate for a solution to the problem. We
conclude that if the problem has a solution then the solution is x = 0. In fact, the problem does not
have a solution: the function f increases without bound as x increases (or decreases) without bound.
Many variables
Consider a maximum of a function of two variables. At this maximum the function must decrease in every
direction (otherwise the point would not be a maximum!). In particular, the maximum must be a maximum
along a line parallel to the x-axis and also a maximum along a line parallel to the y-axis. Hence, given the result
for a function of a single variable, at the maximum both the partial derivative with respect to x and the partial
derivative with respect to y must be zero. Extending this idea to many dimensions gives us the following result,
where f_i' is the partial derivative of f with respect to its ith argument.
Proposition
Let f be a differentiable function of n variables defined on the set S. If the point x in the interior of S is
a local or global maximizer or minimizer of f then
f_i'(x) = 0 for i = 1, ..., n.
As for the analogous result for functions of a single variable, this result gives a necessary condition for a
maximum (or minimum): if a point is a maximizer then it satisfies the condition. As before, the condition is
called a first-order condition. Any point at which all the partial derivatives of f are zero is called a
stationary point of f .
As for functions of a single variable, the result tells us that the only points that can be global maximizers are
either stationary points or boundary points of the set S. Thus the following procedure locates all global
maximizers and global minimizers of a differentiable function.
Procedure for solving a many-variable maximization problem on a set
Let f be a differentiable function of n variables and let S be a set of n-vectors. If the problem
max_x f (x) subject to x ∈ S has a solution, it may be found as follows.
1. Find all the stationary points of f (the points x for which f_i'(x) = 0 for i = 1, ..., n) in the constraint set S and calculate the value of f at each point.
2. Find the largest and smallest values of f on the boundary of S.
3. The points x you have found at which the value of f is largest are the maximizers of f .
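The interior step of this procedure, solving the system of equations f_i'(x) = 0, can again be delegated to a computer algebra system. The sketch below is an illustration in Python with sympy; the helper name interior_candidates is invented, the boundary step is problem-specific and not automated here, and for concreteness the helper is applied to the objective x² + y² + y − 1 of the final example below.

```python
import sympy as sp

def interior_candidates(f_expr, variables):
    """Return the stationary points of f_expr, i.e. the solutions of the system of
    first-order conditions f_i'(x) = 0 for i = 1, ..., n.  Deciding which of them lie
    in the constraint set, and examining the boundary, remain problem-specific steps."""
    gradient = [sp.diff(f_expr, v) for v in variables]
    return sp.solve(gradient, variables, dict=True)

x, y = sp.symbols('x y', real=True)
f = x**2 + y**2 + y - 1                  # objective of the final example in this section
print(interior_candidates(f, [x, y]))    # [{x: 0, y: -1/2}]
```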
This method is much less generally useful than the analogous method for functions of a single variable because
for many problems finding the largest and smallest values of f on the boundary of S is difficult. For this
reason, we devote considerable attention, in the next two parts, to other, better methods for solving
maximization problems with constraints.
Here are some examples, however, where the method may be fairly easily applied.
Example
Consider the problem
max_{x,y} [−(x − 1)² − (y + 2)²] subject to −∞ < x < ∞ and −∞ < y < ∞.
This problem does not satisfy the conditions of the extreme value theorem (because the constraint set
is not bounded), so the theorem does not tell us whether the problem has a solution. The first-order
conditions are
−2(x − 1) = 0 and −2(y + 2) = 0,
which have a unique solution, (x, y) = (1, −2). The constraint set has no boundary points, so we
conclude that if the problem has a solution, this solution is (x, y) = (1, −2). In fact, the problem does
have a solution, because the value of the objective function at (1, −2) is 0, and its value at any point is
nonpositive.
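As a check on this reasoning, the first-order conditions and the value at their solution can be computed mechanically; the snippet below is a sketch using sympy (an assumed tool, not part of the text).

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = -(x - 1)**2 - (y + 2)**2

# First-order conditions: -2(x - 1) = 0 and -2(y + 2) = 0.
print(sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True))   # [{x: 1, y: -2}]

# The value at (1, -2) is 0; the objective is a sum of nonpositive terms,
# so no point gives a larger value, confirming that (1, -2) is the maximizer.
print(f.subs({x: 1, y: -2}))   # 0
```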
Example
Consider the problem
max_{x,y} [(x − 1)² + (y − 1)²] subject to 0 ≤ x ≤ 2 and −1 ≤ y ≤ 3.
This problem satisfies the conditions of the extreme value theorem, and hence has a solution. The first-
order conditions are
2(x − 1) = 0 and 2(y − 1) = 0,
which have a unique solution, (x, y) = (1, 1), which is in the constraint set. The value of the objective
function at this point is 0.
Now consider the behavior of the objective function on the boundary of the constraint set, which is a
rectangle.
- If x = 0 and −1 ≤ y ≤ 3 then the value of the objective function is 1 + (y − 1)². The problem of finding y to maximize this function subject to −1 ≤ y ≤ 3 satisfies the conditions of the extreme value theorem, and thus has a solution. The first-order condition is 2(y − 1) = 0, which has a unique solution y = 1, which is in the constraint set. The value of the objective function at this point is 1. On the boundary of the set {(0, y): −1 ≤ y ≤ 3}, namely at the points (0, −1) and (0, 3), the value of the objective function is 5. Thus on this part of the boundary, the points (0, −1) and (0, 3) are the only candidates for a solution of the original problem.
- A similar analysis leads to the conclusion that the points (2, −1) and (2, 3) are the only candidates for a maximizer on the part of the boundary for which x = 2 and −1 ≤ y ≤ 3, the points (0, −1) and (2, −1) are the only candidates for a maximizer on the part of the boundary for which 0 ≤ x ≤ 2 and y = −1, and the points (0, 3) and (2, 3) are the only candidates for a maximizer on the part of the boundary for which 0 ≤ x ≤ 2 and y = 3.
The value of the objective function at all these candidates for a solution on the boundary of the
constraint set is 5.
Finally, comparing the values of the objective function at the candidates for a solution that are (a)
interior to the constraint set (namely (1, 1)) and (b) on the boundary of the constraint set, we conclude
that the problem has four solutions, (0, −1), (0, 3), (2, −1), and (2, 3).
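The candidates identified above can also be enumerated and compared mechanically; the following sketch (again assuming sympy) lists the interior stationary point, the stationary points of the four one-variable edge problems, and the four corners, and confirms that the largest value, 5, occurs only at the corners.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = (x - 1)**2 + (y - 1)**2

# Interior candidate: the unique stationary point of f, with value 0.
print(sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True))   # [{x: 1, y: 1}]

# Boundary candidates on the rectangle [0, 2] x [-1, 3]: the stationary point of
# each one-variable edge problem, plus the four corners.
candidates = [(0, 1), (2, 1), (1, -1), (1, 3),        # edge stationary points
              (0, -1), (0, 3), (2, -1), (2, 3)]       # corners
for px, py in candidates:
    print((px, py), f.subs({x: px, y: py}))
# The largest value, 5, occurs at the four corners, so they are the maximizers.
```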
Example
Consider the problems
max_{x,y} x² + y² + y − 1 subject to x² + y² ≤ 1
and
min_{x,y} x² + y² + y − 1 subject to x² + y² ≤ 1.
In each case the constraint set, {(x, y): x² + y² ≤ 1}, is compact. The objective function is continuous,
so by the extreme value theorem, the problem has a solution.
We apply the procedure as follows, denoting the objective function by f .
We have f_1'(x, y) = 2x and f_2'(x, y) = 2y + 1, so the stationary points are the solutions of 2x
= 0 and 2y + 1 = 0. Thus the function has a single stationary point, (x, y) = (0, −1/2), which is
in the constraint set. The value of the function at this point is f (0, −1/2) = −5/4.
The boundary of the constraint set is the set of points (x, y) such that x² + y² = 1, that is, the
circle of radius 1 centered at the origin. Thus for a point (x, y) on the boundary we have
f (x, y) = x² + (1 − x²) + y − 1 = y. We have −1 ≤ y ≤ 1 on the boundary, so the maximum of the
function on the boundary is 1, which is achieved at (x, y) = (0, 1), and the minimum is −1,
achieved at (x, y) = (0, −1).
Looking at all the values we have found, we see that the global maximum of f is 1, achieved at
(0, 1), and the global minimum is −5/4, achieved at (0, −1/2).
Notice that the reasoning about the behavior of the function on the boundary of the constraint set is
straightforward in this example because we are able easily to express the value of the function on the boundary
in terms of a single variable (y). In many other problems, doing so is not easy.
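One way to carry out this single-variable reduction on the boundary mechanically is to parametrize the circle; the sketch below (assuming sympy, with the parametrization x = cos t, y = sin t as the only added ingredient) reproduces the values found above.

```python
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)
f = x**2 + y**2 + y - 1

# Interior: the single stationary point and the value of f there.
interior = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)[0]
print(interior, f.subs(interior))   # {x: 0, y: -1/2}  -5/4

# Boundary: on x**2 + y**2 = 1, parametrize by x = cos(t), y = sin(t).
on_boundary = sp.simplify(f.subs({x: sp.cos(t), y: sp.sin(t)}))
print(on_boundary)                  # sin(t), which ranges over [-1, 1]

# So the global maximum is 1, at (0, 1); the global minimum is -5/4, at (0, -1/2).
```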
Copyright 1997-2005 by Martin J. Osborne