Euler's Method
Approximate the derivative by a finite difference:
dv/dt ≈ Δv/Δt = (v_{i+1} − v_i)/(t_{i+1} − t_i)
so the velocity can be stepped forward as
v_{i+1} = v_i + (g − (c/m) v_i)(t_{i+1} − t_i)   for i = 1, 2, 3, ...
or
v_{i+1} = v_i + Δt (g − (c/m) v_i)   for i = 1, 2, 3, ...

Comments:
Much easier than the exact solution method.
A numerical solution can be obtained very easily with a computer (MATLAB here).
Higher accuracy can be obtained by reducing the time step Δt.

For a system's properties or composition:
change = increases − decreases
When change = 0, increases = decreases; this is called the steady-state calculation.

[Figure 1.3: velocity (m/s) versus time (s) for the numerical solution.]
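The stepping formula above can be sketched in code. The slides use MATLAB; this is an equivalent Python sketch, and the parameter values (g = 9.81 m/s², c = 12.5 kg/s, m = 68.1 kg, Δt = 2 s) are illustrative assumptions, not taken from the slides.

```python
# Euler's method for dv/dt = g - (c/m) v, stepped as
# v[i+1] = v[i] + (g - (c/m) * v[i]) * dt, starting from v(0) = 0.

def euler_velocity(g=9.81, c=12.5, m=68.1, dt=2.0, t_end=12.0):
    """Return lists of times and velocities computed with Euler's method."""
    t, v = 0.0, 0.0
    ts, vs = [t], [v]
    while t < t_end - 1e-12:
        v = v + (g - (c / m) * v) * dt   # one Euler step
        t = t + dt
        ts.append(t)
        vs.append(v)
    return ts, vs

ts, vs = euler_velocity()
# As t grows, v approaches the steady state g*m/c, where increases = decreases.
```

Reducing dt improves the accuracy, exactly as the comments above note.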
Objectives
Learning how to quantify error
Understanding how round-off errors occur because digital computers have a limited ability to represent numbers
Recognising that truncation errors occur when exact mathematical formulations are represented by approximations
Knowing how to use Taylor series to estimate truncation errors
How to get derivative approximations from Taylor series: finite-difference approximation

Accuracy and Precision
Inaccuracy is bias; imprecision is uncertainty. Together:
Errors = inaccuracy (bias) + imprecision (uncertainty)
The concept of accuracy and precision: a) inaccurate and imprecise, b) accurate and imprecise, c) inaccurate and precise, and d) accurate and precise.
TYPES OF ERROR
3. Data (e.g. shuttle mass is not constant; measurement)
4. Blunder (e.g. carelessness)
5. Truncation (e.g. chopping an infinite series)
6. Round-off (e.g. the value of π)

Error Definitions (1/2)
Relative true error (in percentage):
ε_t = |(true value − approximation)/(true value)| × 100%
But if you know the true value, why bother with the numerical approximation? We have to find a way to approximate the error.
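The two error measures read as follows in code (a minimal Python sketch; the function names and the 22/7 example are mine):

```python
# Relative true error: epsilon_t = |true - approximation| / |true| * 100%.
def true_error_pct(true_value, approx):
    return abs(true_value - approx) / abs(true_value) * 100.0

# When the true value is unknown, approximate the error from successive
# iterates instead: epsilon_a = |new - old| / |new| * 100%.
def approx_error_pct(new, old):
    return abs(new - old) / abs(new) * 100.0

# Example: 22/7 as an approximation of pi is off by about 0.04 %.
err = true_error_pct(3.141592653589793, 22 / 7)
```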
ROUND-OFF ERROR
Computer Number Representation (1/3)
π = 3.141592653589793238462643...
Humans: base-10 (decimal). Computers: base-2 (binary).
The value of π above consists of 25 significant figures, which can be stored in a supercomputer; compare 0.005678 = 5.678 × 10⁻³.
There are two major implications of using a finite number of binary digits (bits) to represent numbers.

For sufficiently small h, the first- and other lower-order terms usually account for a disproportionately high percentage of the error; only a few terms are needed.
Check example 4.3, p. 96: approximation of cos(x) at π/3 from π/4 by the Taylor series expansion (TSE).
Examples of Roots of Equation(s)
Polynomial functions: f(x) = ax² + bx + c = 0
f(x) = cosh(x) cos(x) − 1 = 0
Reference: C5. Oblique shock wave angle:
f(x) = 2 cot(x) · (9 sin²x − 1)/(9(1.4 + cos 2x) + 2) − tan(π/9) = 0
Column buckling:
f(x) = 8x Σ_{n=1}^{∞} 1/(2n−1)² − 1 = 0
Bisection Method

Graphical Method: a bungee-jumper problem. Check example 5.1 out!
f(m) = sqrt(g m / c_d) · tanh(sqrt(g c_d / m) · t) − v(t)
f(m) = sqrt(9.81 m / 0.25) · tanh(sqrt(0.25 × 9.81 / m) × 4) − 36
root = mass_crit ≈ 145 kg

Computational Procedure, Step 1
Step 1: From given x_L and x_R, compute the mean value
x_M = (x_L + x_R)/2
then compute f(x_M), which could be + (Case A) or − (Case B).

[Figure: plots of f(x) for Case A and Case B, marking x_L, x_M, x_R and the signs of f(x_M) and f(x_R).]
Bisection Method
Computational Procedure, Steps 2 & 3
Step 2: Compute the product of f(x_M) and f(x_R):
Case A: f(x_M) f(x_R) > 0  →  root in x_L ≤ x ≤ x_M
Case B: f(x_M) f(x_R) < 0  →  root in x_M ≤ x ≤ x_R

Examples 5.3 and 5.4: check them out!
Bisection Method
Computational Procedure, Step 4
Case A: f(x_1) < 0; Case B: f(x_1) > 0. Iterate until the approximate error satisfies ε_a ≤ ε_s, where ε_s is the stopping tolerance, e.g. ε_s = 0.001%.
Can you say a few things about the ratio of the true error from any iteration to that from the previous iteration? Fixed-point iteration has a linear convergence characteristic!
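Putting the steps together gives the following sketch (Python rather than the course's MATLAB; the bracket [50, 200] and the tolerance are my choices):

```python
import math

def bisect(f, xl, xr, es=1e-4, max_iter=100):
    """Bisection: halve [xl, xr] until the approximate relative error
    in the midpoint falls below the stopping tolerance es."""
    if f(xl) * f(xr) > 0:
        raise ValueError("root is not bracketed by [xl, xr]")
    xm_old = xl
    for _ in range(max_iter):
        xm = (xl + xr) / 2.0                 # Step 1: midpoint
        if f(xm) * f(xr) > 0:                # Step 2, Case A: root in [xl, xm]
            xr = xm
        else:                                # Case B: root in [xm, xr]
            xl = xm
        if xm != 0 and abs((xm - xm_old) / xm) < es:  # Step 4: stop test
            break
        xm_old = xm
    return xm

# Bungee-jumper mass equation from the slides (example 5.1)
f = lambda m: math.sqrt(9.81 * m / 0.25) * math.tanh(math.sqrt(0.25 * 9.81 / m) * 4) - 36
root = bisect(f, 50.0, 200.0)   # lands near the critical mass of about 142-145 kg
```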
Newton-Raphson Method
Sometimes it diverges.

[Figure: tangent line at (x_0, f(x_0)) locating the next iterate; cases where the iterates run away from the root.]

x_{i+1} = x_i − f(x_i)/f′(x_i)

Try a bungee-jumper problem out. What is the convergence rate of Newton-Raphson? Quadratic?
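The update formula in a Python sketch (the sqrt(2) test problem is my example, not the slides'):

```python
def newton_raphson(f, fprime, x0, es=1e-10, max_iter=50):
    """Newton-Raphson: x_{i+1} = x_i - f(x_i)/f'(x_i)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        if x_new != 0 and abs((x_new - x) / x_new) < es:
            return x_new
        x = x_new
    return x

# Root of f(x) = x^2 - 2, i.e. sqrt(2); note how few iterations it takes,
# which reflects the quadratic convergence the slide asks about.
root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```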
Fixed-Point Iteration
Converge or Diverge?
Key:
converge: |g′(x)| < 1
diverge: |g′(x)| > 1
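A quick check of the criterion (Python sketch; g(x) = e⁻ˣ is my example, chosen because |g′| < 1 near its fixed point):

```python
import math

def fixed_point(g, x0, n=50):
    """Iterate x = g(x) a fixed number of times."""
    x = x0
    for _ in range(n):
        x = g(x)
    return x

# g(x) = exp(-x): |g'(x)| = exp(-x) < 1 near the fixed point, so the
# iteration converges (linearly) to the root of x = exp(-x), about 0.5671.
x = fixed_point(lambda t: math.exp(-t), 1.0)
```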
Newton-Raphson Method: Poor Convergence

Secant Method
What happens if you can't find the derivative of the function (the slope)? In that case, let's estimate the slope from
f′(x_i) ≈ (f(x_i) − f(x_{i−1}))/(x_i − x_{i−1})
Hence we can estimate the new root from
x_{i+1} = x_i − f(x_i)(x_i − x_{i−1})/(f(x_i) − f(x_{i−1}))
Notice that now you need two initial guesses!
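As a Python sketch (the cube-root test problem and the two guesses are mine):

```python
def secant(f, x0, x1, es=1e-10, max_iter=100):
    """Secant method: the slope is estimated from the last two iterates,
    so no derivative is needed, but two initial guesses are."""
    for _ in range(max_iter):
        fx0, fx1 = f(x0), f(x1)
        x2 = x1 - fx1 * (x1 - x0) / (fx1 - fx0)
        if x2 != 0 and abs((x2 - x1) / x2) < es:
            return x2
        x0, x1 = x1, x2
    return x1

# Cube root of 5 as the root of f(x) = x^3 - 5
root = secant(lambda x: x ** 3 - 5.0, 1.0, 2.0)
```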
Newton-Raphson Method: MORE Poor Convergence
Convergence problems usually come when
- an inflection point is close to the root
- the initial guess is close to a local maximum/minimum
- the iteration jumps from the solution of interest to the next one (multiple roots)
- the iteration hits a zero slope!
Cure:
- start with an initial guess as close to the root as possible
- use a bracketing method

Modified Secant Method
It is not convenient to start from two arbitrary guesses. Let's now estimate the slope of the function from
f′(x_i) ≈ (f(x_i + δx_i) − f(x_i))/(δx_i),   δ = small perturbation fraction (e.g. 10⁻⁶)
Hence
x_{i+1} = x_i − δx_i f(x_i)/(f(x_i + δx_i) − f(x_i))
Try a bungee-jumper problem with the modified secant method.
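The modified secant update in a Python sketch (the quadratic test problem and single guess are mine):

```python
def modified_secant(f, x0, delta=1e-6, es=1e-10, max_iter=100):
    """Modified secant: one guess; the slope comes from a small
    perturbation delta*x of the current iterate."""
    x = x0
    for _ in range(max_iter):
        dx = delta * x if x != 0 else delta
        slope = (f(x + dx) - f(x)) / dx
        x_new = x - f(x) / slope
        if x_new != 0 and abs((x_new - x) / x_new) < es:
            return x_new
        x = x_new
    return x

# Root of f(x) = x^2 - 9 starting from a single guess
root = modified_secant(lambda x: x * x - 9.0, 5.0)
```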
MATLAB Function: fzero
The MATLAB fzero function is designed to find the real roots of a single equation.
The function combines the best qualities of reliable (but slow) bracketing methods with fast (but possibly unreliable) open methods.
Try it out with our bungee-jumper problem!

MATLAB: Polynomials
Roots of polynomials are also very common in engineering.
Example: column buckling
f(x) = 8x Σ_{n=1}^{∞} 1/(2n−1)² − 1 = 0
We can use all the methods we have learned, but we would only determine one root at a time. We want all roots in one shot!
The MATLAB function roots is designed to compute all roots in one shot. Try it out.
Note that an inverse function of roots is poly.
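NumPy mirrors this pair of MATLAB functions: np.roots computes all roots of a polynomial from its coefficients via a companion-matrix eigenvalue problem, and np.poly is its inverse. The cubic below is my example, not from the slides.

```python
import numpy as np

# All roots of x^3 - 6x^2 + 11x - 6 in one shot (its roots are 1, 2 and 3)
coeffs = [1.0, -6.0, 11.0, -6.0]
r = np.roots(coeffs)

# poly is the inverse operation: rebuild the coefficients from the roots
c = np.poly(r)
```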
Chapter 5
Optimization
Reference: C7
Objectives
1. Understanding why and where optimization occurs in engineering and scientific problem solving.
2. Recognizing the difference between one-dimensional and multidimensional optimization.
3. Distinguishing between global and local optima.
4. Knowing how to recast a maximization problem so that it can be solved with a minimizing algorithm.
5. Being able to define the golden ratio and understand why it makes one-dimensional optimization efficient.
6. Locating the optimum of a single-variable function with the golden-section search.
7. Locating the optimum of a single-variable function with parabolic interpolation.
8. Knowing how to apply the fminbnd function to determine the minimum of a one-dimensional function.
9. Being able to develop MATLAB contour and surface plots to visualize two-dimensional functions.
10. Knowing how to apply the fminsearch function to determine the minimum of a multidimensional function.

Reading and Homework
Reading: Chapter 6 in Chapra
Homework: 6.3, 4, 9, 10, 11, 16, 19, 20, 23
You've got a problem
Simple projectile: find the minimum initial velocity for reaching an altitude of at least 200 m within 4 seconds.
Initial velocity ~ power ~ cost.
More often than not, the design is also subject to restrictions, or constraints, which may take the form of equalities or inequalities.
[Figure: altitude as a function of time for an object initially projected upward with an initial velocity.]

What's Optimization?
Optimization is the term often used for minimizing or maximizing a function.
In engineering, optimization is closely related to design (creating something that is as effective as possible: performance vs. limitations).
Some optimization methods do just that by solving the root problem f′(x) = 0.
Golden-Section Search
It is very similar in spirit to the bisection method for root location.
Golden ratio (φ):
(l₁ + l₂)/l₁ = l₁/l₂ = φ
φ² − φ − 1 = 0  →  φ = (1 + √5)/2 = 1.61803398...
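A Python sketch of the search (the quadratic test function and the bracket [0, 5] are my choices):

```python
import math

def golden_section_min(f, xl, xu, es=1e-8, max_iter=200):
    """Golden-section search for a minimum of f on [xl, xu]: interior
    points placed with the golden ratio shrink the bracket by the
    constant factor phi - 1 = 0.618... per step."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0      # 1/phi = phi - 1
    for _ in range(max_iter):
        d = inv_phi * (xu - xl)
        x1, x2 = xl + d, xu - d                 # interior points, x1 > x2
        if f(x1) < f(x2):                       # minimum lies in [x2, xu]
            xl = x2
        else:                                   # minimum lies in [xl, x1]
            xu = x1
        if abs(xu - xl) < es:
            break
    return (xl + xu) / 2.0

# Minimum of (x - 2)^2 + 1 on [0, 5]
xmin = golden_section_min(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
```

This sketch re-evaluates both interior points on every pass; the usual implementation reuses one of them, which is exactly why the golden ratio makes the method efficient.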
Parabolic Interpolation
A second-order polynomial
often provides a good
approximation to the shape
of f(x) near an optimum.
There is only one parabola
connecting three points.
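The vertex of that single parabola has a closed form, the standard three-point formula; a Python sketch (the test function is my example):

```python
def parabolic_step(f, x1, x2, x3):
    """Return the vertex of the parabola through (x1,f1), (x2,f2), (x3,f3),
    used as the next estimate of the optimum."""
    f1, f2, f3 = f(x1), f(x2), f(x3)
    num = (x2 - x1) ** 2 * (f2 - f3) - (x2 - x3) ** 2 * (f2 - f1)
    den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
    return x2 - 0.5 * num / den

# For a function that IS a parabola, one step lands exactly on the optimum
x4 = parabolic_step(lambda x: (x - 2.0) ** 2, 0.0, 1.0, 4.0)
```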
Chapter 5
System of Equations 1
Reference: C7 and 8

Objectives
Understanding what systems of equations are and where they occur in engineering and science
Understanding how to use the graphical method to solve small sets of equations
Understanding how to use the Gauss elimination method for large sets of equations
Understanding the concepts of singularity and ill-conditioning
Understanding how to improve the accuracy of the solution by pivoting
Recognising special types of systems: banded systems
You've got a problem
In the particular case that the line crosses through the origin, if the linear equation is written in the form y = f(x), then f has the property
f(ax) = a f(x)
where a is any scalar. A function which satisfies this property is called a linear function.
Linear models are often obtained by assuming that quantities of interest vary only to a small extent from some "background" state.
Try this in MATLAB!
m₂g + k₃(x₃ − x₂) − k₂(x₂ − x₁) = 0
m₃g − k₃(x₃ − x₂) = 0
Standard Methods
Methods for small sets of equations, e.g. the graphical method and Cramer's rule.
Direct methods: solving sets of equations directly, without mathematical approximation. Usually good for modest sets of equations. Examples are Gauss elimination, LU decomposition and Cholesky decomposition.
Iterative methods: solving sets of equations with some mathematical approximation. Usually faster for large sets of equations. Examples are Jacobi, Gauss-Seidel, successive over-relaxation (SOR)/under-relaxation (SUR).

The overdetermined case (M > N): the number (M) of (independent) equations is greater than the number (N) of unknowns; there exists no solution satisfying all the equations strictly.
The underdetermined case (M < N): the number (M) of equations is less than the number (N) of unknowns; the solution is not unique, but numerous.
Gauss Elimination
Find the maximum member force in this truss (10 kN load):
solve {x} from [A]{x1} = {b1}.
How about this one (the 10 kN load moved)? Solve {x} from [A]{x2} = {b2}.
Redo Gauss elimination again? Very time consuming; not so smart!
How about we factorise [A]?
1. [A]{x1} = [L][U]{x1} = {b1}
2. [A]{x2} = [L][U]{x2} = {b2}
Same [L] and [U]; decompose once.
Step 2: Substitution
After elimination, [U] is upper triangular (its last pivot is a″₃₃), and
[L] = [ 1    0    0
        f₂₁  1    0
        f₃₁  f₃₂  1 ]
where f₂₁ = a₂₁/a₁₁, f₃₁ = a₃₁/a₁₁, f₃₂ = a′₃₂/a′₂₂.
Ex.
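The factor-once, solve-many idea in code: a plain-Python Doolittle sketch without pivoting (real codes pivot for stability); the 3×3 system used for the demonstration is the same one solved with Gauss-Seidel later in these slides.

```python
def lu_decompose(A):
    """Doolittle factorisation A = L U with a unit diagonal in L.
    The entries of L are exactly the elimination factors
    (f21 = a21/a11, f31 = a31/a11, f32 = a'32/a'22, ...)."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            f = U[i][k] / U[k][k]
            L[i][k] = f
            for j in range(k, n):
                U[i][j] -= f * U[k][j]
    return L, U

def lu_solve(L, U, b):
    """Forward substitution L y = b, then back substitution U x = y."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

# Decompose once, then solve for as many right-hand sides as needed
L, U = lu_decompose([[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]])
x = lu_solve(L, U, [7.85, -19.3, 71.4])   # x = [3.0, -2.5, 7.0] up to round-off
```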
MATLAB: the backslash operator
It checks whether [A] is in a format where a solution can be obtained without full Gauss elimination. These include systems that are (a) sparse and banded, (b) triangular, or (c) symmetric.
Iterative methods provide an alternative to the direct methods discussed earlier. The basic concept of iterative methods is to guess and iterate, just as in chapters 3 and 4.
The main advantage of iterative methods shows when we deal with a large set of linear or nonlinear equations:
Direct methods: FLOPS ~ O(N³)
Iterative methods: FLOPS ~ O(N²)
Objectives
Understanding iterative methods for solving a set of equations, including Jacobi and Gauss-Seidel
Knowing how relaxation can improve the convergence of the iterative methods
Knowing the techniques to solve a set of nonlinear equations

Gauss-Seidel Method (1/2)
Let's try the Gauss-Seidel method to solve the following set of equations:
3x₁ − 0.1x₂ − 0.2x₃ = 7.85
0.1x₁ + 7x₂ − 0.3x₃ = −19.3
0.3x₁ − 0.2x₂ + 10x₃ = 71.4
Step 1: Guess x₁ = x₂ = x₃ = 0 (iteration 0).
Step 2: Update x₁, x₂, x₃ by solving equations 1, 2 and 3 consecutively:
x₁ = (7.85 + 0.1x₂ + 0.2x₃)/3 = (7.85 + 0.1 × 0 + 0.2 × 0)/3 = 2.616667
(use the new x₁ to update x₂ and x₃)
x₂ = (−19.3 − 0.1x₁ + 0.3x₃)/7 = (−19.3 − 0.1 × 2.616667 + 0.3 × 0)/7 = −2.794524
(use the new x₂ to update x₃)
x₃ = (71.4 − 0.3x₁ + 0.2x₂)/10 = (71.4 − 0.3 × 2.616667 + 0.2 × (−2.794524))/10 = 7.005610
After iteration 1: x₁ = 2.616667, x₂ = −2.794524, x₃ = 7.005610.
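The two steps, iterated, in a Python sketch specialised to this system (20 sweeps is an arbitrary cap of mine; a real code would stop on the ε_a test):

```python
def gauss_seidel(n_sweeps=20):
    x1 = x2 = x3 = 0.0                              # Step 1: initial guess
    for _ in range(n_sweeps):                       # Step 2, repeated
        x1 = (7.85 + 0.1 * x2 + 0.2 * x3) / 3.0     # uses the newest x2, x3
        x2 = (-19.3 - 0.1 * x1 + 0.3 * x3) / 7.0    # uses the new x1
        x3 = (71.4 - 0.3 * x1 + 0.2 * x2) / 10.0    # uses the new x1, x2
    return x1, x2, x3

x1, x2, x3 = gauss_seidel()   # converges to (3, -2.5, 7)
```

Jacobi differs only in that each sweep uses the values from the previous sweep throughout, rather than the freshly updated ones.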
Gauss-Seidel Method (2/2)
Check convergence with the approximate relative errors, e.g.
ε_a,2 = |(x₂^(iter 1) − x₂^(iter 0))/x₂^(iter 1)| × 100% ≤ ε_s
ε_a,3 = |(x₃^(iter 1) − x₃^(iter 0))/x₃^(iter 1)| × 100% ≤ ε_s

[Figure: update patterns of Gauss-Seidel versus Jacobi.]
Convergence: Relaxation
With the relaxation factor λ, we can either accelerate or decelerate the convergence rate of the solution:
x_i^new = λ x_i^new + (1 − λ) x_i^old
(the new solution for the next iteration is a weighted combination of the new solution from the iteration and the old solution from before the iteration)
The relaxation factor acts as a weight between the old and new solutions.
Convergence acceleration: successive over-relaxation (SOR): 1 < λ < 2
Convergence deceleration: successive under-relaxation (SUR): 0 < λ < 1
Tips:
Always start with plain Gauss-Seidel or Jacobi.
If the solution starts to converge to a certain value, gradually increase λ to accelerate the convergence, but
if the solution doesn't seem to converge to any value, gradually decrease λ to decelerate the convergence.

Nonlinear Systems: Graphical Method
Solution methods for nonlinear systems usually combine one of the methods from chapters 3 and 4 (root finding) and one of the methods from chapters 5, 6 and 7 (solution of equation systems). Examples include
successive substitution (a fixed-point iteration + the Gauss-Seidel method)
the Newton-Raphson method + the Jacobi method.
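The weighted update itself is one line; a minimal sketch (the numbers in the example are mine):

```python
def relax(x_new, x_old, lam):
    """Relaxation: lam in (1, 2) over-relaxes (SOR, accelerates),
    lam in (0, 1) under-relaxes (SUR, decelerates), and lam = 1
    is plain Gauss-Seidel/Jacobi."""
    return lam * x_new + (1.0 - lam) * x_old

# Suppose an iteration moved an unknown from 1.0 to 2.0:
sor = relax(2.0, 1.0, 1.5)   # over-relaxed: pushed past 2.0 to 2.5
sur = relax(2.0, 1.0, 0.5)   # under-relaxed: held back at 1.5
```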
Successive Substitution

Nonlinear: Newton-Raphson
Newton-Raphson method

Nonlinear: Newton-Raphson
Generalization: matrix form
Ex.
Similar to the Newton-Raphson method

Readings and Homework