
Design Optimization Lecture 11

Assistant Professor: Dr. Daniel Neufeld

KU Aerospace

6/5/2012

Review
Last lecture, we discussed some challenges associated with non-linear programming
o The solutions don't necessarily lie on constraint intersection points and can be anywhere in the feasible region
o Feasible regions can be discontinuous
o The optimum point may not be mathematically demonstrable

Only local maxima or minima can be mathematically demonstrated, and only if the necessary conditions are met


Second Derivative Test for N variables

1. If there is a point x* where the gradient of f is [0 ... 0] and D_k > 0 for all k = 1, 2, ..., n
o then f has a local minimum at x*

2. If (-1)^k D_k > 0 for all k = 1, 2, ..., n
o then f has a local maximum at x*

3. Otherwise f has a saddle point at x*

Note: the (-1)^k term causes every second term to change sign, i.e. -D_1, +D_2, -D_3, ... The D_k terms are the determinants of the leading submatrices of the Hessian, H, beginning with the first element at (1,1), then the block (1,1), (1,2), (2,1), (2,2), and so on
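The minor test above can be sketched in a few lines of Python (not from the lecture; the example Hessians are illustrative assumptions):

```python
import numpy as np

def classify_stationary_point(H, tol=1e-12):
    """Classify a stationary point from its Hessian H using the
    leading-principal-minor test: D_k > 0 for all k -> minimum,
    (-1)^k D_k > 0 for all k -> maximum, otherwise saddle."""
    H = np.asarray(H, dtype=float)
    n = H.shape[0]
    # D_k is the determinant of the top-left k-by-k submatrix of H
    D = [np.linalg.det(H[:k, :k]) for k in range(1, n + 1)]
    if all(d > tol for d in D):
        return "minimum"
    # alternating signs: -D_1, +D_2, -D_3, ...
    if all((-1) ** k * d > tol for k, d in enumerate(D, start=1)):
        return "maximum"
    return "saddle"

print(classify_stationary_point([[2, 0], [0, 2]]))    # minimum
print(classify_stationary_point([[-2, 0], [0, -2]]))  # maximum
print(classify_stationary_point([[2, 0], [0, -2]]))   # saddle
```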

Necessary Conditions for Local Optima


[Figure: the MATLAB "Peaks" surface over x, y, with a local max, a saddle point, and a local min labeled]

Algorithm Categories for NLP


Generally, the methods fall into three categories
We've already discussed the application of some stochastic approaches in integer programming
Such methods are also suitable for NLP
Generally, deterministic methods have the fastest solution times, but are more prone to getting trapped in local solutions


Gradient Based Algorithms


First, we'll look at gradient-based optimization, which encompasses the most widely used algorithms for solving non-linear problems
However, we need to look at some preliminary considerations first
Gradient-based optimization schemes employ the principle of finding locations with zero slope
o Solving for points where the partial derivatives of a function are all zero
o Testing each point for maximum, minimum, or saddle point using the second derivative test


Gradient Based Algorithms


Most gradient-based algorithms follow these principles:
1. Calculate a search direction (could be orthogonal, steepest descent, second order, etc.)
2. Formulate a 1-D equation in the search direction with respect to step size
3. Differentiate the resulting 1-D equation, set it to zero, and solve for the step size
4. Check for convergence; if not converged, return to 1

An exact solution of these equations is rarely possible, so we need to use numerical methods:
o Newton's Method
o Secant Method
o Bisection Method
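Steps 1-4 can be sketched with steepest descent on a quadratic objective f(x) = 0.5 x^T A x - b^T x, where step 3 happens to have a closed-form solution (the matrix, vector, and starting point here are illustrative assumptions, not from the lecture):

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-8, max_iter=500):
    """Minimize f(x) = 0.5 x^T A x - b^T x for symmetric positive
    definite A, following steps 1-4 above."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = A @ x - b                  # step 1: direction is -gradient
        if np.linalg.norm(g) < tol:    # step 4: convergence check
            break
        # steps 2-3: d/d(alpha) f(x - alpha*g) = 0 gives this step size
        alpha = (g @ g) / (g @ (A @ g))
        x = x - alpha * g
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_star = steepest_descent(A, b, np.zeros(2))
print(x_star)  # the minimizer satisfies A @ x = b
```

For general objectives, step 3 has no closed form, which is why the root-finding methods listed above are needed.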


Root Finding
There are 3 widely used methods for determining the roots of any continuous equation
Roots are defined as the x values where the function f(x) is zero
Methods
o Bisection Method
o Secant Method
o Newton's Method

Why study root finding?
o Recall from calculus that a function is at a maximum or minimum when f'(x) = 0
o Finding the roots of the derivative of a function will locate the maximum or minimum value
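As a tiny illustration of this idea (a hypothetical example, not from the slides): to minimize f(x) = x^2 - 4x + 1, find the root of its derivative:

```python
# Hypothetical illustration: minimize f(x) = x**2 - 4*x + 1 by finding
# the root of its derivative fp(x) = 2*x - 4.
def fp(x):
    return 2.0 * x - 4.0

x_star = 2.0          # the root of fp, found here by inspection
assert fp(x_star) == 0.0
print(x_star)         # x = 2 minimizes f, since f''(x) = 2 > 0
```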

Bisection Method
We're trying to find the value of x where f(x) = 0
We'll call this value c, such that f(c) = 0
Intermediate Value Theorem
o If f is continuous over the interval [a, b], then for any u between f(a) and f(b), there is a value c between a and b where f(c) = u

This means that if we know the roots of a continuous function lie between two values a, b such that:
o f(a) > 0
o f(b) < 0

Then we know the roots are somewhere between a and b.


Bisection Method (2)


If we know that the roots of a function are between a, b, we could take a first guess to narrow down the search for the roots
For a first guess, we could just take the half-way point between a, b.

First guess: c = (a + b) / 2

We can then check whether f(c) = 0
If not, we need to adjust the boundaries for a better guess
We do this by applying the following condition

Bisection Method (3)


Recall, we have two initial guesses a, b such that
o f(a) > 0
o f(b) < 0

And we've calculated c = (a + b) / 2

We can make a new interval as follows:
If f(a), f(c) have different signs
o We pick a new interval, [a, c]

If f(c), f(b) have different signs
o We pick a new interval, [c, b]

We can apply this recursively until the gap between the upper and lower boundary becomes very small
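The procedure above can be sketched in Python (a minimal implementation, assuming f is continuous and f(a), f(b) have opposite signs; applied here to the sin(x) example that follows):

```python
import math

def bisect(f, a, b, tol=1e-3, max_iter=100):
    """Bisection root finding as described on these slides."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2.0
        fc = f(c)
        if abs(b - a) < tol or fc == 0.0:
            return c
        if fa * fc < 0:      # sign change in [a, c]: root is there
            b = c
        else:                # otherwise the root is in [c, b]
            a, fa = c, fc
    return (a + b) / 2.0

root = bisect(math.sin, -1.5, 2.5, tol=1e-3)
print(root)  # close to 0, the root of sin on this interval
```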

Bisection Method (4)


Find f(x) = sin(x) = 0, where [a, b] = [-1.5, +2.5]

Step 1: c = (a + b) / 2 = 0.5

[Figure: plot of sin(x) over the interval, with the root at x = 0 and the points a, b, c marked]

f(c) > 0 and f(a) < 0

Therefore, the solution is between a and c. We can discard b.

Bisection Method (5)


Make a new point b where point c was located

Step 2: c = (a + b) / 2 = -0.5

[Figure: plot of sin(x), with the halved interval and the new midpoint marked]

f(c) < 0 and f(b) > 0

Therefore, the solution is between c and b. We can discard a.

Bisection Method (6)


Make a new point a where point c was located

Step 3: c = (a + b) / 2

[Figure: plot of sin(x); the interval has halved again and the midpoint sits near the root]

Again, the boundary whose function value shares the sign of f(c) is replaced by c, so the root stays bracketed.

Bisection Method (7)


Make a new point where point c was located

Step 4: c = (a + b) / 2

[Figure: plot of sin(x); the bracketing interval is now tightly clustered around the root]

Clearly, the interval between the solution and the boundaries continues to shrink as the bisection method proceeds

Bisection Method (8)


We can stop the method when the interval between a, b becomes small enough
Then, we can say that
o f(c) ≈ 0, where c = (a + b) / 2
o Provided |b - a| < ε, a chosen tolerance

In the example of finding the roots of
o f(x) = sin(x)

We know that the true answer is x = 0 from trigonometry

Bisection Method (9)


If we choose an error tolerance ε = 10^-3
The solution converges in 12 iterations

[Figure: f(c) versus iteration number; the estimate oscillates and settles toward zero]

Final estimate: x = 9.76 × 10^-4
The true answer is x = 0, so clearly, we're within our tolerance

Bisection Method (10)


Properties of the Bisection Method
o Can be very slow to converge compared to other approaches
o For example, if we choose ε = 10^-6, it takes 26 iterations
o If we choose ε = 10^-10, it takes 37 iterations

[Figure: number of iterations (y-axis) versus tolerance exponent x, i.e. ε = 10^-x (x-axis); the iteration count grows linearly as the tolerance tightens]

Bisection Method (11)


The number of iterations required to solve to a desired tolerance can be calculated as follows:
We know that the error margin is reduced by a factor of 2 over every iteration
o |b_n - a_n| = (1/2)|b_(n-1) - a_(n-1)|, or the interval between a and b at iteration n will be half of the interval at iteration n - 1

And the next will be half again, therefore after n iterations, the interval will be
o |b_n - a_n| = |b_0 - a_0| / 2^n

Setting this equal to the tolerance ε:
o 2^n = |b_0 - a_0| / ε

n = log2(|b_0 - a_0| / ε)
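As a quick check of this formula on the sin(x) example (assuming the stopping test is on the interval width; counts can differ slightly for other stopping tests):

```python
import math

def bisection_iterations(a0, b0, eps):
    """n = log2(|b0 - a0| / eps), rounded up to a whole iteration."""
    return math.ceil(math.log2(abs(b0 - a0) / eps))

# the sin(x) example: |b0 - a0| = 4, eps = 1e-3
print(bisection_iterations(-1.5, 2.5, 1e-3))  # 12, matching the slide
```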

Bisection Method (12)


Note: if the function isn't continuous, the bisection method won't work
For example, f(x) = 1/x (the sign change at x = 0 is a discontinuity, not a root)

Bisection Method (13)


Example Problem
Find the square root of 11 (without using the square root button on your calculator)
Solved on the blackboard
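One way to set this up (a sketch of the blackboard exercise; the bracket [3, 4] is an assumption, chosen because 3^2 < 11 < 4^2): find the root of f(x) = x^2 - 11 by bisection:

```python
def f(x):
    return x * x - 11.0

a, b = 3.0, 4.0            # f(3) < 0 and f(4) > 0, so the root is bracketed
while abs(b - a) > 1e-6:
    c = (a + b) / 2.0
    if f(a) * f(c) < 0:    # sign change in [a, c]
        b = c
    else:                  # sign change in [c, b]
        a = c
root = (a + b) / 2.0
print(root)                # about 3.3166, close to sqrt(11)
```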


Newton's Method
Newton's Method is an iterative approach for root finding
The basic principle is to transform the given function into a straight line (its tangent) and solve for the root of that line
Doing so repeatedly eventually finds the root
Convergence rates are significantly faster than those of the bisection method


Newton's Method
We assume a starting point for x
Determine the slope of the function at that point
Draw a straight line with that slope and calculate the x-intercept of that line
Replace the previous x with the new one
Repeat


Newton's Method
Newton's Method works as follows
Start with the point x_0, f(x_0)
The slope there is f'(x_0) = (f(x_0) - 0) / (x_0 - x_1), where x_1 is the x-intercept of the tangent line
We need the value x_1 where the tangent line reaches zero, so rearranging:
o x_1 = x_0 - f(x_0) / f'(x_0)

In general:
o x_(n+1) = x_n - f(x_n) / f'(x_n)
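The iteration above, sketched in Python (the test function sin(x) and starting point are assumptions, chosen to match the bisection example):

```python
import math

def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton's Method: x_(n+1) = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:          # converged: f(x) is essentially zero
            return x
        x = x - fx / fprime(x)     # step to the tangent's x-intercept
    return x

root = newton(math.sin, math.cos, 0.5)
print(root)  # converges to 0 in a handful of iterations
```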

Newtons Method (5)


Unlike bisection method, Newtons Method is not guaranteed to converge for continuous functions Sometimes the derivative information is undefined


Newtons Method (6)


Find the square root of 11 using Newton's Method
Solved on the blackboard
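A sketch of the blackboard exercise: applying x_(n+1) = x_n - f(x_n) / f'(x_n) to f(x) = x^2 - 11, with f'(x) = 2x (the starting guess of 3.0 is an assumption):

```python
x = 3.0                                  # initial guess near sqrt(11)
for _ in range(6):
    x = x - (x * x - 11.0) / (2.0 * x)   # equivalently (x + 11/x) / 2
print(x)                                 # about 3.3166248, i.e. sqrt(11)
```

Note how few iterations this takes compared with bisection, reflecting Newton's much faster convergence rate.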


Secant Method
What if we can't perform analytical derivatives on our function?
If f(x) is not differentiable analytically, we can use approximate derivatives
Recall that
o f'(x) ≈ (f(x + h) - f(x)) / h

This can be subbed into Newton's Method
Alternatively, we can employ a different strategy called the Secant Method
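A sketch of Newton's Method with this finite-difference slope substituted for the derivative (the step size h and the test function are assumptions):

```python
import math

def newton_fd(f, x0, h=1e-6, tol=1e-10, max_iter=50):
    """Newton's Method with the forward-difference approximation
    f'(x) ~ (f(x + h) - f(x)) / h in place of the derivative."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        slope = (f(x + h) - fx) / h    # approximate derivative
        x = x - fx / slope
    return x

root = newton_fd(math.sin, 0.5)
print(root)  # close to 0, without using cos(x) explicitly
```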


Secant Method
Start with TWO guesses, points x_0 and x_1, with values f(x_0) and f(x_1)
The slope of the secant line through them is
o slope ≈ (f(x_1) - f(x_0)) / (x_1 - x_0)

We need to calculate the x_2 value such that the secant line reaches zero, which gives
o x_2 = x_1 - f(x_1) * (x_1 - x_0) / (f(x_1) - f(x_0))

In general:
o x_(n+1) = x_n - f(x_n) * (x_n - x_(n-1)) / (f(x_n) - f(x_(n-1)))
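The update above, sketched in Python (the test function and starting guesses are assumptions):

```python
import math

def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Secant Method: each step replaces the derivative with the
    slope of the line through the two most recent points."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol:
            return x1
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0, x1, f1 = x1, f1, x2, f(x2)   # shift the point pair
    return x1

root = secant(math.sin, 0.5, 1.0)
print(root)  # converges to 0
```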

Local and Global Optimization


Recall that since were taking derivatives and setting them to zero, we wont know whether we have a global or local optimization We can move the staring point (or interval) around to check if the answer is consistent We can apply the second derivative tests to see if we have a maximum, minimum or saddle point
KU Aerospace 6/5/2012 29
