These slides are a supplement to the book Numerical Methods with Matlab: Implementations and Applications, by Gerald W. Recktenwald, © 2000–2006, Prentice-Hall, Upper Saddle River, NJ. These slides are copyright © 2000–2006 Gerald W. Recktenwald. The PDF version of these slides may be downloaded or stored or printed for noncommercial, educational use. The repackaging or sale of these slides in any form, without written consent of the author, is prohibited. The latest version of this PDF file, along with other supplemental material for the book, can be found at www.prenhall.com/recktenwald or web.cecs.pdx.edu/~gerry/nmm/.
Version 1.12
Overview
Preliminary considerations and bracketing
Fixed Point Iteration
Bisection
Newton's Method
The Secant Method
Hybrid Methods: the built-in fzero function
Roots of Polynomials
[Figure: geometry with overall dimensions w and h, material dimension b, and diagonals d1, d2; the angle θ is determined by these dimensions.]
w sin θ = h cos θ + b

Given overall dimensions w and h, and the material dimension b, what is the value of θ? An analytical solution θ = f(w, h, b) exists, but is not obvious. Use a numerical root-finding procedure to find the value of θ that satisfies

f(θ) = w sin θ - h cos θ - b = 0
Roots of f (x) = 0
Any function of one variable can be put in the form f(x) = 0. Example: to find the x that satisfies

cos(x) = x,

find the root of

f(x) = cos(x) - x = 0

[Figure: plots of y = x, y = cos(x), and f(x) = cos(x) - x on [0, 1.5]; the solution is where f(x) crosses zero.]
General Considerations
Is this a special function that will be evaluated often?
How much precision is needed?
How fast and robust must the method be?
Is the function a polynomial?
Does the function have singularities?
Root-Finding Procedure
The basic strategy is
1. Plot the function. The plot provides an initial guess, and an indication of potential problems.
2. Select an initial guess.
3. Iteratively refine the initial guess with a root-finding algorithm.
Bracketing
A root is bracketed on the interval [a, b] if f(a) and f(b) have opposite sign. A sign change occurs for singularities as well as roots.

[Figures: a sign change of f between a and b may indicate a root or a singularity.]

Bracketing is used to make initial guesses at the roots, not to accurately estimate the values of the roots.
Bracketing Algorithm
Algorithm 6.1 Bracket Roots
brackPlot Function

brackPlot looks for brackets of a user-defined f(x), plots the brackets and f(x), and returns the brackets in a two-column matrix.

Syntax:
brackPlot('myFun',xmin,xmax)
brackPlot('myFun',xmin,xmax,nx)

where
myFun        is the name of an m-file that evaluates f(x)
xmin, xmax   define the range of x to search
nx           is the number of subintervals on [xmin,xmax] used to check
             for sign changes of f(x). Default: nx = 20
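The sign-change search that brackPlot performs can be sketched in Python (an illustrative translation, not the Matlab source; the plotting step is omitted):

```python
def bracket_roots(f, xmin, xmax, nx=20):
    """Search nx subintervals of [xmin, xmax] for sign changes of f.

    Returns a list of (a, b) pairs that bracket a root (or a
    singularity) of f.  Python sketch of the search performed by
    brackPlot; brackPlot itself also plots f(x) and the brackets.
    """
    dx = (xmax - xmin) / nx
    brackets = []
    a = xmin
    fa = f(a)
    for _ in range(nx):
        b = a + dx
        fb = f(b)
        if fa * fb <= 0.0:          # sign change (or exact zero) on [a, b]
            brackets.append((a, b))
        a, fa = b, fb
    return brackets

# Example: f(x) = x - x^(1/3) - 2 has one sign change on [0, 5]
brs = bracket_roots(lambda x: x - x**(1/3) - 2, 0, 5)
```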
(1)

>> Xb = brackPlot('sin',-4*pi,4*pi)
Xb =
  -12.5664  -11.2436
   -9.9208   -8.5980
   -7.2753   -5.9525
   -3.3069   -1.9842
   -0.6614    0.6614
    1.9842    3.3069
    5.9525    7.2753
    8.5980    9.9208
   11.2436   12.5664

[Figure: sin(x) on [-4π, 4π] with the brackets marked.]
(1)
f (x) = x x 2 = 0 we need an m-le function to evaluate f (x) for any scalar or vector of x values.
File fx3.m:
function f = fx3(x) % fx3 Evaluates f(x) = x - x^(1/3) - 2 f = x - x.^(1/3) - 2;
1/3
page 14
(2)

>> Xb = brackPlot('fx3',0,5)
Xb =
    3.4211    3.6842

[Figure: f(x) = x - x^(1/3) - 2 on [0, 5] with the bracket marked.]
(3)

Note: When an inline function object is supplied to brackPlot, the name of the object is not surrounded in quotes:
brackPlot(f,0,5)    instead of    brackPlot('fun',0,5)
Root-Finding Algorithms
These algorithms are applied after initial guesses at the root(s) are identified with bracketing (or guesswork).
Fixed Point Iteration

Fixed point iteration is a simple method. It only works when the iteration function is convergent. Given f(x) = 0, rewrite it as

x_new = g(x_old)

Algorithm 6.2 Fixed Point Iteration
initialize: x0 = ...
for k = 1, 2, ...
    x_k = g(x_{k-1})
    if converged, stop
end
Convergence Criteria
An automatic root-finding procedure needs to monitor progress toward the root and stop when the current guess is close enough to the desired root.

Convergence checking avoids searching to unnecessary accuracy.
Convergence checking can consider whether two successive approximations to the root are close enough to be considered equal.
Convergence checking can examine whether f(x) is sufficiently close to zero at the current guess.
(1)

To solve

x - x^(1/3) - 2 = 0

rewrite it as

x = g1(x) = 2 + x^(1/3)

or

x = g2(x) = (x - 2)^3

or

x = g3(x) = (6 + 2x^(1/3)) / (3 - x^(-2/3))
(2)

k    g1(x_{k-1})      g2(x_{k-1})       g3(x_{k-1})
0    3                 3                3
1    3.4422495703      1                3.5266442931
2    3.5098974493     -1                3.5213801474
3    3.5197243050     -27               3.5213797068
4    3.5211412691     -24389            3.5213797068
5    3.5213453678     -1.451 × 10^13    3.5213797068
6    3.5213747615     -3.055 × 10^39    3.5213797068
7    3.5213789946     -2.852 × 10^118   3.5213797068
8    3.5213796042     (diverges)        3.5213797068
9    3.5213796920                       3.5213797068

where g1(x) = 2 + x^(1/3), g2(x) = (x - 2)^3, and g3(x) = (6 + 2x^(1/3))/(3 - x^(-2/3)).
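The convergent iteration g1 from the table can be reproduced with a short fixed-point routine; a Python sketch of Algorithm 6.2 (the xtol and maxit stopping parameters are illustrative choices, not from the book):

```python
def fixed_point(g, x0, xtol=5e-10, maxit=50):
    """Fixed point iteration x_k = g(x_{k-1}).

    Converges only when g is a convergent iteration function near
    the root.  Sketch of Algorithm 6.2 with an illustrative stopping
    test on successive iterates.
    """
    x = x0
    for _ in range(maxit):
        x_new = g(x)
        if abs(x_new - x) < xtol:   # successive guesses agree
            return x_new
        x = x_new
    raise RuntimeError("fixed point iteration did not converge")

# g1(x) = 2 + x^(1/3) is convergent for x - x^(1/3) - 2 = 0
root = fixed_point(lambda x: 2.0 + x**(1.0/3.0), 3.0)
```

Trying the same routine with g2(x) = (x - 2)^3 overflows, matching the divergent column of the table.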
Bisection
Given a bracketed root, halve the interval while continuing to bracket the root.

[Figure: one step of bisection; the midpoint x1 of [a1, b1] replaces the endpoint whose f value has the same sign as f(x1).]
Bisection (2)
The midpoint of the current bracket is

x_m = (a + b)/2

which is computed in the equivalent form

x_m = a + (b - a)/2

preferred because it is less susceptible to roundoff.
Bisection Algorithm
Algorithm 6.3
Bisection
initialize: a = ..., b = ...
for k = 1, 2, ...
    x_m = a + (b - a)/2
    if sign(f(x_m)) = sign(f(a))
        a = x_m
    else
        b = x_m
    end
    if converged, stop
end
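Algorithm 6.3 can be sketched in Python (an illustrative translation; the xtol and maxit stopping parameters are my choices, not from the book):

```python
import math

def bisect(f, a, b, xtol=5e-10, maxit=100):
    """Bisection: halve [a, b] while keeping the root bracketed.

    Sketch of Algorithm 6.3; requires f(a) and f(b) to have
    opposite signs.
    """
    fa = f(a)
    if fa * f(b) > 0.0:
        raise ValueError("root is not bracketed by [a, b]")
    for _ in range(maxit):
        xm = a + (b - a) / 2.0      # midpoint form that limits roundoff
        fm = f(xm)
        if math.copysign(1.0, fm) == math.copysign(1.0, fa):
            a, fa = xm, fm          # root lies in [xm, b]
        else:
            b = xm                  # root lies in [a, xm]
        if b - a < xtol:            # bracket is small enough
            break
    return a + (b - a) / 2.0

root = bisect(lambda x: x - x**(1/3) - 2, 3.0, 4.0)
```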
Bisection Example
Solve x - x^(1/3) - 2 = 0 with the initial bracket [3, 4]:

 k    a             b             xmid          f(xmid)
 1    3             4             3.5           -0.01829449
 2    3.5           4             3.75           0.19638375
 3    3.5           3.75          3.625          0.08884159
 4    3.5           3.625         3.5625         0.03522131
 5    3.5           3.5625        3.53125        0.00845016
 6    3.5           3.53125       3.515625      -0.00492550
 7    3.515625      3.53125       3.5234375      0.00176150
 8    3.515625      3.5234375     3.51953125    -0.00158221
 9    3.51953125    3.5234375     3.52148438     0.00008959
10    3.51953125    3.52148438    3.52050781    -0.00074632
Analysis of Bisection
(1)

Let δ_n be the size of the bracketing interval at the nth stage of bisection. Then

δ_0 = b - a = initial bracket size
δ_1 = δ_0 / 2
δ_2 = δ_1 / 2 = δ_0 / 4
...
δ_n = δ_0 / 2^n

or

δ_n / δ_0 = 2^(-n)   equivalently   n = log2(δ_0 / δ_n)
Analysis of Bisection
(2)

δ_n / δ_0 = 2^(-n)   or   n = log2(δ_0 / δ_n)

 n    δ_n / δ_0       function evaluations
 5    3.1 × 10^-2      7
10    9.8 × 10^-4     12
20    9.5 × 10^-7     22
30    9.3 × 10^-10    32
40    9.1 × 10^-13    42
50    8.9 × 10^-16    52
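The table entries follow directly from n = log2(δ_0/δ_n); a quick Python check (illustrative, the helper name is made up):

```python
import math

def bisections_needed(delta0, delta_n):
    """Number of halvings to shrink a bracket of size delta0 down to
    delta_n, from n = log2(delta0 / delta_n), rounded up."""
    return math.ceil(math.log2(delta0 / delta_n))

# Reproduce a line of the table: after n = 20 halvings the bracket
# is 2**-20, about 9.5e-7, of its original size.
reduction = 2.0**-20
```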
Convergence Criteria
An automatic root-finding procedure needs to monitor progress toward the root and stop when the current guess is close enough to the desired root.

Convergence checking avoids searching to unnecessary accuracy.

Check whether successive approximations are close enough to be considered the same:
|x_k - x_{k-1}| < δ_x

Check whether f(x) is close enough to zero:
|f(x_k)| < δ_f
Convergence Criteria on x

Absolute tolerance: stop when |x_k - x_{k-1}| < δ_x.

Relative tolerance: stop when |x_k - x_{k-1}| / |x_k| < δ_x, i.e. the tolerance is measured relative to the magnitude of x.
Convergence Criteria on f(x)

If f(x) is flat near the root, it is easy to satisfy a tolerance on f(x) for a large range of x. A tolerance on x is more conservative.

If f(x) is steep near the root, it is possible to satisfy a tolerance on x when |f(x)| is still large. A tolerance on f(x) is more conservative.

[Figures: flat and steep f(x) near the root.]
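The two criteria are often combined in practice; a minimal Python sketch (the tolerance names and defaults are illustrative assumptions):

```python
def converged(x_new, x_old, f_new, xtol=5e-9, ftol=5e-9):
    """Combined stopping test: successive guesses agree to within xtol
    (relative to the size of the current guess), or |f| is below ftol.
    Which test is the conservative one depends on the slope of f near
    the root, as discussed above.
    """
    dx_small = abs(x_new - x_old) < xtol * max(1.0, abs(x_new))
    f_small = abs(f_new) < ftol
    return dx_small or f_small
```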
Newton's Method
(1)
For a current guess x_k, use f(x_k) and the slope f'(x_k) to predict where f(x) crosses the x axis.

[Figure: tangent lines at x1 and x2 successively predicting the crossing points x2 and x3.]
Newton's Method

(2)

Expand f(x) in a Taylor series around x_k:

f(x_{k+1}) = f(x_k) + (x_{k+1} - x_k) f'(x_k) + O((x_{k+1} - x_k)^2)

where

f'(x_k) = df/dx evaluated at x_k
Newton's Method

(3)

The goal is to find x such that f(x) = 0. Set f(x_{k+1}) = 0 in the truncated Taylor series and solve for x_{k+1}:

x_{k+1} = x_k - f(x_k) / f'(x_k)
Algorithm 6.4
initialize: x1 = ...
for k = 2, 3, ...
    x_k = x_{k-1} - f(x_{k-1}) / f'(x_{k-1})
    if converged, stop
end
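Algorithm 6.4 can be sketched in Python (an illustrative translation; xtol and maxit are my choices, not from the book):

```python
def newton(f, fprime, x0, xtol=5e-10, maxit=20):
    """Newton's method: x_k = x_{k-1} - f(x_{k-1}) / f'(x_{k-1}).

    Sketch of Algorithm 6.4.  Fails (diverges, or divides by zero)
    when f'(x) is near zero at a guess.
    """
    x = x0
    for _ in range(maxit):
        dx = -f(x) / fprime(x)
        x += dx
        if abs(dx) < xtol:          # step is small enough
            return x
    raise RuntimeError("Newton iteration did not converge")

f = lambda x: x - x**(1.0/3.0) - 2.0
fp = lambda x: 1.0 - (1.0/3.0) * x**(-2.0/3.0)
root = newton(f, fp, 3.0)
```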
(1)

Solve:

x - x^(1/3) - 2 = 0

First derivative:

f'(x) = 1 - (1/3) x^(-2/3)
(2)

The iteration formula is

x_{k+1} = x_k - (x_k - x_k^(1/3) - 2) / (1 - (1/3) x_k^(-2/3))

k    x_k           f'(x_k)       f(x_k)
0    3             0.83975005    -0.44224957
1    3.52664429    0.85612976     0.00450679
2    3.52138015    0.85598641     ...
3    3.52137971    0.85598640     ...
4    3.52137971    0.85598640     ...
Newton's method converges much more quickly than bisection.
Newton's method requires an analytical formula for f'(x).
The algorithm is simple as long as f'(x) is available.
Iterations are not guaranteed to stay inside the original bracket.
Divergence of Newton's Method

Since

x_{k+1} = x_k - f(x_k) / f'(x_k)

the new guess, x_{k+1}, will be far from the old guess whenever f'(x_k) ≈ 0.

[Figure: a nearly flat tangent at x1, where f'(x1) ≈ 0, sends the next iterate far from the root.]
Secant Method
(1)
Given two guesses x_{k-1} and x_k, the next guess at the root is where the line through (x_{k-1}, f(x_{k-1})) and (x_k, f(x_k)) crosses the x axis.

[Figure: the secant line through two points on f(x) locating the next estimate.]
Secant Method
(2)

Start from Newton's formula

x_{k+1} = x_k - f(x_k) / f'(x_k)

and replace the derivative with the finite-difference approximation

f'(x_k) ≈ (f(x_k) - f(x_{k-1})) / (x_k - x_{k-1})

to get

x_{k+1} = x_k - f(x_k) (x_k - x_{k-1}) / (f(x_k) - f(x_{k-1}))
Secant Method
(3)
Two versions of this formula are equivalent in exact math:

x_{k+1} = x_k - f(x_k) (x_k - x_{k-1}) / (f(x_k) - f(x_{k-1}))        (*)

and

x_{k+1} = (x_{k-1} f(x_k) - x_k f(x_{k-1})) / (f(x_k) - f(x_{k-1}))   (**)

Equation (*) is better since it is of the form x_{k+1} = x_k + Δ. Even if Δ is inaccurate, the change in the estimate of the root will be small at convergence because f(x_k) will also be small.

Equation (**) is susceptible to catastrophic cancellation:

f(x_k) → f(x_{k-1}) as convergence approaches, so cancellation error in the denominator can be large.
|f(x)| → 0 as convergence approaches, so underflow is possible.
NMM: Finding the Roots of f (x) = 0 page 41
Secant Algorithm
Algorithm 6.5
initialize: x1 = ..., x2 = ...
for k = 2, 3, ...
    x_{k+1} = x_k - f(x_k)(x_k - x_{k-1}) / (f(x_k) - f(x_{k-1}))
    if converged, stop
end
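Algorithm 6.5 can be sketched in Python, written in the update form (*) above (an illustrative translation; xtol and maxit are my choices, not from the book):

```python
def secant(f, x0, x1, xtol=5e-10, maxit=50):
    """Secant method in the form x_{k+1} = x_k + Delta, which avoids
    the catastrophic cancellation of the alternative formula.

    Sketch of Algorithm 6.5; needs two initial guesses.
    """
    f0, f1 = f(x0), f(x1)
    for _ in range(maxit):
        denom = f1 - f0
        if denom == 0.0:            # flat secant line; cannot improve
            return x1
        dx = -f1 * (x1 - x0) / denom
        x0, f0 = x1, f1
        x1 = x1 + dx
        f1 = f(x1)
        if abs(dx) < xtol:          # step is small enough
            return x1
    raise RuntimeError("secant iteration did not converge")

root = secant(lambda x: x - x**(1.0/3.0) - 2.0, 4.0, 3.0)
```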
Solve:

x - x^(1/3) - 2 = 0

k    x_{k-1}        x_k           f(x_k)
0    4              3             -0.44224957
1    3              3.51734262    -0.00345547
2    3.51734262     3.52141665     0.00003163
3    3.52141665     3.52137970     2.034 × 10^-9
4    3.52137959     3.52137971     1.332 × 10^-15
5    3.52137971     3.52137971     0.0

Conclusions:
Converges almost as quickly as Newton's method.
No need to compute f'(x).
The algorithm is simple.
Two initial guesses are necessary.
Iterations are not guaranteed to stay inside the original bracket.
Divergence of the Secant Method

Since

x_{k+1} = x_k - f(x_k)(x_k - x_{k-1}) / (f(x_k) - f(x_{k-1}))

the new guess, x_{k+1}, will be far from the old guess whenever f(x_k) ≈ f(x_{k-1}) and |f(x_k)| is not small.

[Figure: a nearly horizontal secant line through f(x2) and f(x3) sends the next iterate far from the root.]
fzero Function
(1)
fzero is a hybrid method that combines bisection, the secant method, and reverse quadratic interpolation.

Syntax:
r = fzero('fun',x0)
r = fzero('fun',x0,options)
r = fzero('fun',x0,options,arg1,arg2,...)

If x0 is a scalar, fzero tries to create its own bracket. If x0 is a two-element vector, fzero uses the vector as a bracket.
Reverse Quadratic Interpolation

Find the point where the x axis intersects the sideways parabola passing through three pairs of (x, f(x)) values.

[Figure: a sideways parabola through three (x, f(x)) points and its intersection with the x axis.]
fzero Function
(2)
The next root estimate is:
The result of reverse quadratic interpolation (RQI), if that result is inside the current bracket.
The result of a secant step, if RQI fails and the secant result is inside the current bracket.
The result of a bisection step, if both RQI and the secant method fail to produce guesses inside the current bracket.
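The "keep the interpolation step only if it stays inside the bracket" logic can be sketched in Python. This is a deliberately simplified illustration, not Matlab's actual implementation: it uses only a secant step with a bisection fallback, whereas the real fzero also tries reverse quadratic interpolation through three points.

```python
import math

def hybrid_step(a, b, fa, fb):
    """One step of a simplified fzero-like hybrid: try a secant step
    through the bracket endpoints and keep it only if it lands inside
    the bracket; otherwise fall back to bisection.  Illustrative only.
    """
    if fb != fa:
        x = b - fb * (b - a) / (fb - fa)    # secant estimate
        if a < x < b:                       # accept only inside bracket
            return x
    return a + (b - a) / 2.0                # bisection fallback

# Drive the step to the root of cos(x) - x = 0 on the bracket [0, 1]
a, b = 0.0, 1.0
fa, fb = math.cos(a) - a, math.cos(b) - b
for _ in range(60):
    x = hybrid_step(a, b, fa, fb)
    fx = math.cos(x) - x
    if fx * fa <= 0:                        # root is in [a, x]
        b, fb = x, fx
    else:                                   # root is in [x, b]
        a, fa = x, fx
```

Because every candidate step is forced back inside the bracket, the hybrid keeps bisection's robustness while usually converging much faster.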
fzero Function
(3)
Optional parameters to control fzero are specified with the optimset function. Examples:

Tell fzero to display the results of each step:
>> options = optimset('Display','iter');
>> x = fzero('myFun',x0,options)

Tell fzero to use a relative tolerance of 5 × 10^-9:
>> options = optimset('TolX',5e-9);
>> x = fzero('myFun',x0,options)

Tell fzero to suppress all printed output, and use a relative tolerance of 5 × 10^-4:
>> options = optimset('Display','off','TolX',5e-4);
>> x = fzero('myFun',x0,options)
fzero Function
Allowable options (specified via optimset):

Option     Value    Effect
Display    iter     Show results of each iteration
           final    Show root and original bracket
           off      Suppress all printed output
TolX       tol      Iterate until the tolerance tol on x is satisfied

The default values of Display and TolX are equivalent to

options = optimset('Display','iter','TolX',eps)
Roots of Polynomials
Repeated roots
Complex roots
Sensitivity of roots to small perturbations in the polynomial coefficients (conditioning)

[Figure: three cubics f1(x), f2(x), f3(x) with distinct real roots, repeated real roots, and complex roots.]
Bairstow's method
Müller's method
Laguerre's method
Jenkins-Traub method
Companion matrix method
roots Function
(1)
No initial guess is required.
Returns all roots of the polynomial.
Solves an eigenvalue problem for the companion matrix.

For the polynomial

c1 x^n + c2 x^(n-1) + ... + cn x + c(n+1) = 0

the coefficients are stored in order of decreasing powers. Then, for a third-order polynomial:

>> c = [c1 c2 c3 c4];
>> r = roots(c)
roots Function
(2)

The eigenvalues of the companion matrix

A = [ -c2/c1   -c3/c1   -c4/c1   -c5/c1
         1        0        0        0
         0        1        0        0
         0        0        1        0 ]

are the same as the roots of

c1 x^4 + c2 x^3 + c3 x^2 + c4 x + c5 = 0
roots Function
(3)

The statements

c = ...                      % vector of polynomial coefficients
r = roots(c);

are equivalent to

c = ...
n = length(c);
A = diag(ones(1,n-2),-1);    % ones on the subdiagonal
A(1,:) = -c(2:n) ./ c(1);    % first row from the coefficients
r = eig(A);
roots Examples
Roots of the quadratics

x^2 - 3x + 2 = 0,   x^2 - 10x + 25 = 0,   x^2 - 17x + 72.5 = 0

are found with

>> roots([1 -3 2])
ans =
     2
     1
>> roots([1 -10 25])
ans =
     5
     5
>> roots([1 -17 72.5])
ans =
   8.5000 + 0.5000i
   8.5000 - 0.5000i
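These results can be cross-checked against the quadratic formula; a Python check (not Matlab, and the helper name is made up):

```python
import cmath

def quadratic_roots(a, b, c):
    """Roots of a*x**2 + b*x + c = 0 via the quadratic formula.
    cmath.sqrt yields the complex pair when the discriminant is
    negative, matching the third roots() example above."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

r1 = quadratic_roots(1, -3, 2)       # distinct real roots 2 and 1
r2 = quadratic_roots(1, -10, 25)     # repeated real root 5
r3 = quadratic_roots(1, -17, 72.5)   # complex pair 8.5 +/- 0.5i
```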