
Computational Methods in Aerospace Engineering
Chapter 1: Introduction to Numerical Methods - Mathematical Modeling, Numerical Methods, Problem Solving

Ways to solve mathematical problems
- Analytical methods (standard mathematics) give exact solutions.
- Numerical methods give approximate solutions.

What can, in most cases, be done only with analytical methods?
Analyse simple problems:
- Boundary layer over a flat plate
- Deformation under simple loads
- A simplified model for the dynamics of aircraft

What about real, complicated engineering problems? (Example 1: Car Crash Analysis; Example 2: Helicopter Crash Analysis)

How to analyse real, complicated engineering problems:
One way is to learn to simplify, i.e. break a big complicated problem into small simple problems, and use the different tools you learn in this class. This is the first step you should take before moving on to more advanced tools, e.g. numerical methods. In this class, you'll learn how we can use computers to help us analyse complicated problems numerically.

Why should we study numerical methods?


1. Enhance your problem-solving skills:
You can handle large systems of equations, nonlinearities, and complicated geometries that are impossible to solve analytically with standard calculus.
2. Allow you to use canned software with insight:
You can use commercial programs that involve numerical methods with understanding, and validate their results. Don't treat them as black boxes.
3. Allow you to solve problems with your own programs:
without buying expensive software that may not be suitable for your problems.
4. Be an efficient vehicle for learning to use computers:
Numerical methods are designed for computer implementation, which illustrates the computer's power and limitations (e.g. too long a computational time; too little memory storage).
5. Be a vehicle to reinforce your understanding of mathematics:
One function of numerical methods is to reduce higher mathematics to basic arithmetic operations.
The secret to successful numerical analysis
- Understand the physical meaning of the given problem.
- Understand the limitations of each numerical method: no universal method works best for every problem!
- Understand how each numerical method produces errors: no numerical method produces an error-free solution!

Objective of this course: to be able to solve a given problem with suitable numerical methods.

Example: Space Shuttle Landing
This is Example 1.1 from Numerical Methods in Engineering. What is the space shuttle's velocity during its landing? The forces are:

    F_1 = shuttle weight = mg
    F_2 = drag = cv

Space Shuttle Landing - Exact Solution

From Newton's second law, a linear ordinary differential equation:

    m \frac{dv}{dt} = mg - cv  \quad\Rightarrow\quad  \frac{dv}{dt} = g - \frac{c}{m} v

Solution methods
- Exact solution: possible in this case (assume v(t=0) = 0):

    v(t) = \frac{mg}{c} \left( 1 - e^{-(c/m)t} \right); \qquad v(t \to \infty) = \frac{mg}{c}

  Given m = 90,000 kg, c = 450 kg/s and g = 9.8 m/s^2:

    v(t) = 1960 \left( 1 - e^{-0.005t} \right); \qquad v(t \to \infty) = 1960 m/s

- Numerical solution: let's try this out!


Space Shuttle Landing - Numerical Solution

This is called a finite-difference approximation: estimate dv/dt from

    \frac{dv}{dt} \approx \frac{\Delta v}{\Delta t} = \frac{v_2 - v_1}{t_2 - t_1}

The equation of motion becomes

    \frac{v_{i+1} - v_i}{t_{i+1} - t_i} = g - \frac{c}{m} v_i \quad \text{for } i = 1, 2, 3, \ldots

or

    v_{i+1} = v_i + \Delta t \left( g - \frac{c}{m} v_i \right) \quad \text{for } i = 1, 2, 3, \ldots

new value = old value + slope x step size. This is formally called Euler's method.

Space Shuttle Landing - Even More Accurate Numerical Result
With \Delta t = 5 s, the numerical solution is almost exact!
[Figure: velocity (m/s) vs. time (s), comparing the numerical solutions with \Delta t = 30 s and \Delta t = 5 s against the exact solution.]
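A minimal MATLAB sketch of this Euler scheme for the shuttle ODE, assuming v(0) = 0 and the parameter values quoted above (the plotting details are our own):

    % Euler's method for dv/dt = g - (c/m)*v with v(0) = 0
    m  = 90000;            % mass (kg)
    c  = 450;              % drag coefficient (kg/s)
    g  = 9.8;              % gravity (m/s^2)
    dt = 5;                % step size (s); try 30 to see the accuracy drop
    t  = 0:dt:600;
    v  = zeros(size(t));
    for i = 1:numel(t)-1
        v(i+1) = v(i) + dt*(g - (c/m)*v(i));   % new = old + slope*step
    end
    vexact = (m*g/c)*(1 - exp(-(c/m)*t));      % exact solution
    plot(t, v, 'o-', t, vexact, '-')
    xlabel('time (s)'), ylabel('velocity (m/s)')
    legend('Euler', 'exact', 'Location', 'southeast')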

Space Shuttle Landing - MATLAB
[Figure 1.2: velocity (m/s) vs. time (s), numerical solution (\Delta t = 30 s) vs. exact solution.]
Comments:
- Much easier than the exact solution method.
- The numerical solution can be obtained very easily with a computer (MATLAB here).
- Higher accuracy can be obtained by reducing the time step \Delta t.

Technical terms

    Dependent variable = f(Independent variable, Parameters, Forcing function)

- Independent variable: time and/or space
- Parameters: the system's properties or composition
- Forcing function: external influences

change = increases - decreases, with respect to time: called the time-variable (transient) computation (Figure 1.3).
change = 0 = increases - decreases, i.e. increases = decreases: called the steady-state calculation (Figure 1.4).


Lesson Learnt - Numerical Errors

No numerical solution is perfect; that is, it has error! Ideally, we want the error to be as small as we can make it, but this is not possible in practice; we have to understand how these errors occur and control them to be within a given tolerance!

Round-Off and Truncation Errors

Objectives
- Learning how to quantify error
- Understanding how round-off errors occur, because digital computers have a limited ability to represent numbers
- Recognising that truncation errors occur when exact mathematical formulations are represented by approximations
- Knowing how to use Taylor series to estimate truncation errors
- How to get derivative approximations from the Taylor series: the finite-difference approximation

Inaccurate (bias) + Imprecise (uncertainty) = Errors
[Figure: the concept of accuracy and precision: a) inaccurate and imprecise, b) accurate and imprecise, c) inaccurate and precise, and d) accurate and precise.]
TYPES OF ERROR

Errors are introduced from:
1. Mathematical modeling (e.g. discretization, theory vs. reality)
2. Propagation (e.g. from step to step)
3. Data (e.g. the shuttle mass is not constant; measurement)
4. Blunders (e.g. carelessness)
5. Truncation (e.g. chopping an infinite series)
6. Round-off (e.g. the value of \pi)

Error Definitions (1/2)

True value = approximation + error
True error (relative to the true value):

    E_t = \text{true value} - \text{approximation}

Relative true error (in percent):

    \varepsilon_t = \frac{\text{true value} - \text{approximation}}{\text{true value}} \times 100\%

But if you know the true value, why bother with the numerical approximation?! We have to find a way to approximate the error.

Numerical Methods' Concerns
- Round-off error: computer approximation is not perfect.
- Truncation error: mathematical approximation is not perfect.

Error Definitions (2/2)

Relative approximate error (relative to the approximate value, in percent):

    \varepsilon_a = \frac{\text{present approx} - \text{previous approx}}{\text{present approx}} \times 100\%

When should we stop the calculation of the approximation? Stopping criterion:

    |\varepsilon_a| < \varepsilon_s

There is a relation between these errors and the number of significant figures in the approximation (Scarborough, 1966). For n significant figures,

    \varepsilon_s = (0.5 \times 10^{2-n})\%

This relation is pretty conservative!

SIGNIFICANT FIGURES

\pi = 3.14159 has 6 significant figures. It is easier to write it in floating-point form, i.e. \pi = 0.314159 \times 10^1.

This means the values 0.0001278, 0.001278 and 0.01278 all have 4 significant figures, because they can be written as 0.1278 \times 10^{-3}, 0.1278 \times 10^{-2} and 0.1278 \times 10^{-1}, respectively.

Two major facets of round-off errors
- Computer number representation: digital computers have size and precision limits on their ability to represent numbers.
- Arithmetic manipulations of computer numbers: certain number manipulations are highly sensitive to round-off errors, arising either from mathematical considerations or from the way computers perform arithmetic operations.

ROUND-OFF ERROR
Computer Number Representation (1/3)

How do computers represent numbers? For example:

    5.5 = 5 \times 10^0 + 5 \times 10^{-1}    (human: base-10/decimal)
    101.1_2 = 1 \times 2^2 + 0 \times 2^1 + 1 \times 2^0 + 1 \times 2^{-1} = 5.5    (computer: base-2/binary)

\pi = 3.141592653589793238462643...

The value of \pi above consists of 25 significant figures, which can be stored in a supercomputer, whereas a typical personal computer may store the value of \pi with only 10 significant figures, as \pi = 3.141592653. This causes round-off error, which will also propagate if such a value is used repeatedly in a computation.

There are two major implications of using a finite number of binary digits (bits) to represent numbers: integer representation and floating-point representation.
[Figure: the binary representation of the decimal integer -173 on a 16-bit computer using the signed magnitude method.]
Computer Number Representation (3/3)

Floating-point representation: s \times b^e, e.g. 0.005678 = 5.678 \times 10^{-3}

Range: computers can only represent finite ranges of numbers, depending on the bit word size.
- 16-bit word size: integers from -32,768 to 32,767; floating-point numbers from about 10^{-38} to 10^{39}
- 32-bit word size: integers from -2,147,483,648 to 2,147,483,647; floating-point numbers from about 10^{-308} to 10^{308}

[Figure: the manner in which a floating-point number is stored in an 8-byte word in IEEE double-precision format.]
MATLAB: try realmax, realmin and eps with format long output.
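A quick console check of these limits in MATLAB; the values in the comments are the usual IEEE double-precision results:

    format long
    realmax            % about 1.7977e+308, largest finite double
    realmin            % about 2.2251e-308, smallest normalized double
    eps                % about 2.2204e-16, machine epsilon
    0.1 + 0.2 == 0.3   % logical 0 (false): 0.1 is not exact in base 2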

Computer Number Representation (2/2)

Precision:
- Some numbers cannot be represented exactly, e.g. \pi = 3.141592653589793...
- Some exact base-10 numbers cannot be represented exactly in the base-2 representation used by computers, e.g. 0.1.

In 16- and 32-bit word sizes:
- \pi = 3.141593 in a 16-bit word size: about seven base-10 digits of precision
- \pi = 3.14159265358979 in a 32-bit word size: about fifteen base-10 digits of precision
- 32-bit/double precision is standard in MATLAB.

Computer Numerical Process
Ex. 1.557 + 0.04341 with a 4-digit mantissa and a 1-digit exponent:

    0.1557 \times 10^1 + 0.004341 \times 10^1 = 0.160041 \times 10^1 \to 0.1600 \times 10^1
Arithmetic Manipulations of Computer Numbers
Consider a hypothetical decimal computer with a 4-digit mantissa and a 1-digit exponent.
- Addition/subtraction: 1.557 + 0.04341 = ? 1.600
- Subtractive cancellation: 0.7642 \times 10^3 - 0.7641 \times 10^3 = ? 0.1000 \times 10^0
- Large computations (adding 0.0001 ten thousand times): the result is 0.999999999999991, not 1.
- Adding a large and a small number: 4000 + 0.0010 = ? 4000
- Infinite series: the initial terms are relatively large compared to the later terms. To fix this, start the summation from the small terms first (reverse order).
- Smearing: individual terms in a sum are much larger than the sum itself, e.g. a series of mixed signs.
- Inner products: also prone to round-off error, and very common, e.g. in the solution of simultaneous linear algebraic equations. To minimise it, use double precision (MATLAB always does).

The Taylor Series
A complete Taylor series expansion (TSE):

    f(x_{i+1}) = f(x_i) + f'(x_i)h + \frac{f''(x_i)}{2!}h^2 + \frac{f^{(3)}(x_i)}{3!}h^3 + \cdots + \frac{f^{(n)}(x_i)}{n!}h^n + R_n

where R_n is the remainder for the nth-order approximation,

    R_n = \frac{f^{(n+1)}(\xi)}{(n+1)!}h^{n+1}

and \xi is a value of x that lies somewhere between x_i and x_{i+1}.
Comments:
- The nth-order TSE is exact for an nth-order polynomial.
- Other differentiable and continuous functions, such as sinusoids, cannot be represented exactly by a finite number of terms.
- Each additional term contributes some improvement, however slight, to the approximation.
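Before moving on to truncation errors, here is a small MATLAB experiment of our own that reproduces two of the round-off effects listed above, in double precision:

    s = 0;
    for k = 1:10000
        s = s + 0.0001;    % 0.0001 has no exact binary representation
    end
    s - 1                  % small nonzero residual instead of exactly 0
    big = 4e3; small = 1e-13;
    (big + small) - big    % returns 0: the small addend is completely lost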

Truncation Errors

Truncation errors are those that result from using an approximation in place of an exact mathematical procedure. Consider an approximation of the time derivative of velocity:

    \frac{dv}{dt} \approx \frac{\Delta v}{\Delta t} = \frac{v(t_{i+1}) - v(t_i)}{t_{i+1} - t_i}

To gain some insight into the properties of truncation errors, we turn to the Taylor series expansion, an approximation of a continuous function.

The Taylor Series
[Figure: an approximation of f(x) = -0.1x^4 - 0.15x^3 - 0.5x^2 - 0.25x + 1.2 at x = 1, expanded from x = 0, for increasing orders of the TSE.]
Using the TSE to Estimate Truncation Errors

Let us consider

    \frac{dv}{dt} \approx \frac{\Delta v}{\Delta t} = \frac{v(t_{i+1}) - v(t_i)}{t_{i+1} - t_i}

and try to use the TSE to analyse it:

    v(t_{i+1}) = v(t_i) + v'(t_i)(t_{i+1} - t_i) + \frac{v''(t_i)}{2!}(t_{i+1} - t_i)^2 + \cdots

What happens if we keep only the first-order terms? Solving for v'(t_i):

    v'(t_i) = \frac{v(t_{i+1}) - v(t_i)}{t_{i+1} - t_i} - \frac{v''(\xi)}{2!}(t_{i+1} - t_i)

so the truncation error = O(t_{i+1} - t_i): a first-order approximation.
Q1: Why does the truncation error increase with step size? A1: According to the TSE, the leading error term is proportional to the step size.
Q2: How about the round-off error? A2: As the step size decreases, you need to increase the number of iterations, so round-off accumulates.

Total Numerical Error
Total numerical error = truncation error + round-off error. In practice, we must estimate the error in our calculations, mostly from experience and judgment.
Some suggestions:
- Avoid subtractive cancellation.
- Use extended-precision arithmetic.
- Always add/subtract the smallest numbers first.
A few more tips:
- Check accuracy by plugging the solution back into the equation.
- Do the calculation with a different step size/method/parameter to appreciate the errors, convergence rates and stability characteristics.

Truncation Errors - The Remainder

Truncation errors are determined from the remainder:

    \text{truncation error} \approx \frac{R_n}{h} = \frac{f^{(n+1)}(\xi)}{(n+1)!}h^n = O(h^n)

If you approximate the function with a linear approximation, what is the power of the truncation error?
- Truncation error = O(h): reduce h by half and you reduce the truncation error by half!
How about if we use a quadratic approximation?
- Truncation error = O(h^2): reduce h by half and you reduce the truncation error to 1/4!
For sufficiently small h, the first- and other lower-order terms usually account for a disproportionately high percentage of the error; only a few terms are needed.
Check example 4.3, p. 96: approximation of cos(x) at \pi/3, expanded from \pi/4, by the TSE.

Readings and Homework
Readings: Chapter 4 in Chapra
Questions: 4.1, 4.2, 4.4, 4.6, 4.8, 4.9, 4.12
Chapter 3 - Roots of Equations 1
Reference: C5

Examples of Roots of Equation(s)
- Polynomial functions: f(x) = ax^2 + bx + c = 0
- Tension in a catenary cable: f(x) = 0, a transcendental equation involving sinh
- Vibration of a stop sign pole: f(x) = \cosh x \cos x + 1 = 0
- Oblique shock wave angle (with \gamma = 1.4): f(x) = 2 \cot x \, \frac{M^2 \sin^2 x - 1}{M^2(\gamma + \cos 2x) + 2} - \tan\theta = 0
- Column buckling: f(x) = 0, a polynomial equation

You've Got a Problem

Medical studies: a bungee jumper's free-fall velocity should not exceed 36 m/s after 4 s of free fall.
Problem: determine the mass of a bungee jumper at which this criterion is exceeded, given a drag coefficient of c_d = 0.25 kg/m.
Analytical solution:

    v(t) = \sqrt{\frac{mg}{c_d}} \tanh\left( \sqrt{\frac{g c_d}{m}}\, t \right)

Can you solve for m explicitly?
Alternative: look for where f(m) = 0, where

    f(m) = \sqrt{\frac{mg}{c_d}} \tanh\left( \sqrt{\frac{g c_d}{m}}\, t \right) - v(t)
         = \sqrt{\frac{9.81 m}{0.25}} \tanh\left( \sqrt{\frac{0.25 \times 9.81}{m}} \times 4 \right) - 36 = 0

Objectives
- Understanding what roots problems are and where they occur in engineering
- Knowing how to determine a root graphically
- Knowing how to determine a root with the bisection method
- Knowing how to determine a root with the false position method
Roots of Equations - Standard Methods
- Graphical method: a very efficient way to scan for roots
- Bracketing methods (always work, but converge slowly): bisection method, false position method
- Open methods (do not always work, but converge more quickly): one-point iteration method, Newton-Raphson method, secant method

Bisection Method
IDEA: If we know the root x is between x_L and x_R, that is,

    x_L < x < x_R

then f(x_L) and f(x_R) must be opposite in sign, e.g. f(x_L) < 0 and f(x_R) > 0.

Graphical Method
A bungee jumper problem: check example 5.1!

    f(m) = \sqrt{\frac{mg}{c_d}} \tanh\left( \sqrt{\frac{g c_d}{m}}\, t \right) - v(t)
         = \sqrt{\frac{9.81 m}{0.25}} \tanh\left( \sqrt{\frac{0.25 \times 9.81}{m}} \times 4 \right) - 36

The root is mass_crit \approx 145 kg, read off the graph.

Bisection Method - Computational Procedure, Step 1
Step 1: From the given x_L and x_R, compute the mean value

    x_M = \frac{x_L + x_R}{2}

then compute f(x_M), which could be positive (Case A) or negative (Case B).
Bisection Method - Computational Procedure, Steps 2 & 3
Step 2: Compute the product of f(x_M) and f(x_R):
- Case A: f(x_M) f(x_R) > 0, so the root satisfies x_L < x < x_M
- Case B: f(x_M) f(x_R) < 0, so the root satisfies x_M < x < x_R
Step 3: If Case A, set the new x_R = x_M; if Case B, set the new x_L = x_M.

Bisection Method - Examples 5.3 and 5.4
Check examples 5.3 and 5.4!

Bisection Method - Computational Procedure, Step 4
Step 4: Check the convergence criterion

    |f(x_M)| < \varepsilon

where \varepsilon is the tolerance. Or use

    \left| \frac{x_R^{new} - x_R^{old}}{x_R^{new}} \right| \times 100\% < \varepsilon_s

where \varepsilon_s is the stopping tolerance, e.g. \varepsilon_s = 0.05%.
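A minimal MATLAB sketch of this procedure for the bungee-jumper function above; the bracket [50, 200] and the use of the midpoint change as the error measure are our choices:

    % Bisection for f(m) = sqrt(g*m/cd)*tanh(sqrt(g*cd/m)*t) - v
    g = 9.81; cd = 0.25; t = 4; v = 36;
    f = @(m) sqrt(g*m/cd).*tanh(sqrt(g*cd./m)*t) - v;
    xL = 50; xR = 200;              % f(xL) < 0 and f(xR) > 0
    es = 0.05; ea = 100; xM = xL;   % stopping tolerance in percent
    while ea > es
        xMold = xM;
        xM = (xL + xR)/2;           % Step 1: midpoint
        if f(xM)*f(xR) > 0
            xR = xM;                % Case A: root in [xL, xM]
        else
            xL = xM;                % Case B: root in [xM, xR]
        end
        ea = abs((xM - xMold)/xM)*100;   % Step 4: approximate error
    end
    xM                              % about 142.7 kg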


False Position Method
IDEA: Similar to the bisection method, but it also uses the values of the function at x_L and x_R in the computation. From similar triangles (\tan\theta = \tan\beta):

    \frac{f(x_R)}{x_R - x_1} = \frac{f(x_L)}{x_L - x_1}

so x_1 can be computed from

    x_1 = \frac{x_L f(x_R) - x_R f(x_L)}{f(x_R) - f(x_L)}

which will replace x_L or x_R until the solution converges.
False Position Method - Computational Procedure, Step 1
Step 1: From the given x_L and x_R, compute f(x_L) and f(x_R), and also

    x_1 = \frac{x_L f(x_R) - x_R f(x_L)}{f(x_R) - f(x_L)}

then compute f(x_1), which could be negative (Case A) or positive (Case B).

False Position Method - Computational Procedure, Steps 2 & 3
Step 2: Compute the product of f(x_1) and f(x_R):
- Case A: f(x_1) f(x_R) < 0, so the root satisfies x_1 < x < x_R
- Case B: f(x_1) f(x_R) > 0, so the root satisfies x_L < x < x_1
Step 3: If Case A, set the new x_L = x_1; if Case B, set the new x_R = x_1.

False Position Method - Computational Procedure, Step 4
Step 4: Check the convergence criterion

    |f(x_1)| < \varepsilon

where \varepsilon is the tolerance. Or use

    \left| \frac{x_1^{new} - x_1^{old}}{x_1^{new}} \right| \times 100\% < \varepsilon_s

where \varepsilon_s is the stopping tolerance, e.g. \varepsilon_s = 0.001%.

False Position Method - Examples 5.5 and 5.6
Repeat examples 5.1 and 5.3 (as in example 5.5). Try example 5.6, where the bisection method is preferred to the false position method. Why?

Reading and Homework
Reading: Chapter 5 in Chapra
Homework: 5.1, 5.3, 5.5, 5.6, 5.8, 5.10

Chapter 4 - Roots of Equations 2
Reference: C6

Standard Methods (Figure 5.7)
- Graphical method: a very efficient way to scan for roots
- Bracketing methods (always work, but converge slowly): bisection method, false position method. We did these last time!
- Open methods (do not always work, but converge more quickly): one-point iteration method, Newton-Raphson method, secant method
Objectives
- Recognising the difference between bracketing and open methods for root location
- Understanding the fixed-point iteration method and its convergence characteristics
- Knowing how to solve a roots problem with the Newton-Raphson method, and its quadratic convergence characteristics
- Knowing how to implement both the secant and the modified secant methods
- Knowing how to use MATLAB's fzero function to estimate roots
- Knowing how to use MATLAB's roots function to find the roots of polynomials

Fixed-Point Iteration
Simple, but it may not converge. Rearrange the function so that x is on the left-hand side of the equation: x = g(x). For a given x_i, we can compute a new estimate x_{i+1} from x_{i+1} = g(x_i). The error can be estimated from

    \varepsilon_a = \left| \frac{x_{i+1} - x_i}{x_{i+1}} \right| \times 100\%

Bungee-jumper example: estimate the maximum jumper mass! Rearranging v(t) = \sqrt{mg/c_d} \tanh(\sqrt{g c_d/m}\, t) for m gives

    m = \frac{c_d v^2(t)}{g \tanh^2\left( \sqrt{g c_d/m}\, t \right)} = \frac{0.25 \times 36^2}{9.81 \tanh^2\left( 4\sqrt{9.81 \times 0.25/m} \right)}

Can you say a few things about the ratio of the true error at any iteration to that of the previous iteration? Fixed-point iteration has a linear convergence characteristic!
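A fixed-point sketch of this rearrangement in MATLAB; the initial guess of 100 kg and the tolerance are our choices, and since |g'(m)| < 1 near the root the iteration converges, though only linearly:

    % Fixed-point iteration m = gfun(m) for the bungee-jumper mass
    g = 9.81; cd = 0.25; t = 4; v = 36;
    gfun = @(m) cd*v^2/(g*tanh(sqrt(g*cd/m)*t)^2);
    m = 100; ea = 100; es = 0.01;
    while ea > es
        mold = m;
        m = gfun(mold);                 % x_{i+1} = g(x_i)
        ea = abs((m - mold)/m)*100;     % approximate error (%)
    end
    m                                   % about 142.7 kg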

Bracketing Methods vs. Open Methods
- Bracketing method (e.g. bisection, 2 initial guessed values): always converges.
- Open method (e.g. Newton-Raphson, 1 initial guessed value): sometimes converges, sometimes diverges, but when it converges it is faster.

Newton-Raphson Method
The most widely used! Concept: use the slope to estimate the root.

    x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}

Try the bungee-jumper problem. What is the convergence rate of Newton-Raphson? Quadratic?
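A minimal Newton-Raphson sketch; the function (finding the square root of 2) is an illustrative choice of ours, picked because its derivative is easy to write down:

    % Newton-Raphson: x_{i+1} = x_i - f(x_i)/f'(x_i)
    f  = @(x) x^2 - 2;
    df = @(x) 2*x;
    x = 1;                      % initial guess
    for i = 1:6
        x = x - f(x)/df(x);     % the error roughly squares each pass
    end
    x                           % 1.414213562373095...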

Fixed-Point Iteration - Converge or Diverge?
From x_{i+1} = g(x_i), we can tell whether the fixed-point iteration will converge or not. Key:
- converge: |g'| < 1
- diverge: |g'| > 1
Newton-Raphson Method - Poor Convergence
Convergence problems usually come when:
- an inflection point is close to the root
- the initial guess is close to a local maximum/minimum
- the iteration jumps from the solution of interest to the next (multiple roots)
- the iteration hits a zero slope!
Cure: start with an initial guess as close to the root as possible, or use a bracketing method.

Secant Method
What happens if you can't find the derivative of the function (the slope)? In that case, estimate the slope from

    f'(x_i) \approx \frac{f(x_i) - f(x_{i-1})}{x_i - x_{i-1}}

Hence we can estimate the new root from

    x_{i+1} = x_i - \frac{f(x_i)(x_i - x_{i-1})}{f(x_i) - f(x_{i-1})}

Notice that now you need two initial guesses!

Modified Secant Method
It is not convenient to start from two arbitrary guesses. Let's instead estimate the slope of the function from a small perturbation of x_i:

    f'(x_i) \approx \frac{f(x_i + \delta x_i) - f(x_i)}{\delta x_i}

where \delta is a small perturbation fraction (e.g. 10^{-6}), so that

    x_{i+1} = x_i - \frac{\delta x_i \, f(x_i)}{f(x_i + \delta x_i) - f(x_i)}

Try the bungee-jumper problem with the modified secant method.
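A modified-secant sketch for the bungee-jumper function; the initial guess of 50 kg and delta = 10^{-6} follow the usual textbook choice, while the tolerance is ours:

    % Modified secant method for f(m) = 0
    g = 9.81; cd = 0.25; t = 4; v = 36;
    f = @(m) sqrt(g*m/cd).*tanh(sqrt(g*cd./m)*t) - v;
    m = 50; delta = 1e-6; ea = 100; es = 1e-4;
    while ea > es
        mold = m;
        slope = (f(m + delta*m) - f(m))/(delta*m);  % perturbed slope
        m = m - f(m)/slope;                         % Newton-type update
        ea = abs((m - mold)/m)*100;
    end
    m                                               % about 142.7 kg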
MATLAB Function: fzero
The MATLAB fzero function is designed to find the real root of a single equation. It combines the best qualities of reliable (but slow) bracketing methods with fast (but possibly unreliable) open methods. Try it out on our bungee-jumper problem!

MATLAB: Polynomials
Roots of polynomials are also very common in engineering; column buckling, for example, leads to a polynomial equation f(x) = 0. We could use any of the methods we have learned, but we would only determine one root at a time, and we want all the roots in one shot! The MATLAB function roots is designed to compute all the roots of a polynomial in one shot. Try it out. Note that the inverse function of roots is poly.
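Sketches of both functions; the cubic's coefficients are an illustrative choice of ours, not the buckling polynomial from the slide:

    % fzero on the bungee-jumper problem
    g = 9.81; cd = 0.25; t = 4; v = 36;
    f = @(m) sqrt(g*m/cd).*tanh(sqrt(g*cd./m)*t) - v;
    mcrit = fzero(f, 100)     % initial guess 100 kg -> about 142.7

    % roots/poly on x^3 - 6x^2 + 11x - 6
    p = [1 -6 11 -6];
    r = roots(p)              % all roots in one shot: 3, 2, 1
    poly(r)                   % recovers [1 -6 11 -6]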
Reading and Homework
Reading: Chapter 6 in Chapra
Homework: 6.3, 6.4, 6.9, 6.10, 6.11, 6.16, 6.19, 6.20, 6.23

Chapter 5 - Optimization
Reference: C7

Objectives
1. Understanding why and where optimization occurs in engineering and scientific problem solving.
2. Recognizing the difference between one-dimensional and multidimensional optimization.
3. Distinguishing between global and local optima.
4. Knowing how to recast a maximization problem so that it can be solved with a minimizing algorithm.
5. Being able to define the golden ratio and understand why it makes one-dimensional optimization efficient.
6. Locating the optimum of a single-variable function with the golden-section search.
7. Locating the optimum of a single-variable function with parabolic interpolation.
8. Knowing how to apply the fminbnd function to determine the minimum of a one-dimensional function.
9. Being able to develop MATLAB contour and surface plots to visualize two-dimensional functions.
10. Knowing how to apply the fminsearch function to determine the minimum of a multidimensional function.
You've Got a Problem
Simple projectile: find the minimum initial velocity needed to reach an altitude of at least 200 m within 4 seconds. Initial velocity ~ power ~ cost.
[Figure: altitude as a function of time for an object initially projected upward with an initial velocity.]

More often than not, the design is also subject to restrictions, or constraints, which may take the form of equalities or inequalities. The majority of available methods are designed for unconstrained optimization, where no restrictions are placed on the design variables. In the more difficult problem of constrained optimization, the minima are usually located where the F(x) surface meets the constraints.

What's Optimization?
Optimization is the term often used for minimizing or maximizing a function.
- In engineering, optimization is closely related to design (creating something that is as effective as possible: performance vs. limitations). Some optimization methods do this by solving the root problem f'(x) = 0.
- The function F(x), called the merit function or objective function, is the quantity that we wish to keep as small as possible, such as cost or weight.
- The components of x, known as the design variables, are the quantities that we are free to adjust (lengths, areas, angles, etc.).
- A maximization problem can be recast for a minimizer: max f(x) is equivalent to min -f(x).
- Problems may be one-dimensional or two- (and higher-) dimensional.
- Optimization is a large topic, with many books dedicated to it.

One-Dimensional Optimization
For the golden-section search (next slide), the two interior points are

    x_1 = x_l + d, \qquad x_2 = x_u - d, \qquad d = (\varphi - 1)(x_u - x_l)

- If f(x_1) < f(x_2): set x_l = x_2
- If f(x_2) < f(x_1): set x_u = x_1
The approximate error is

    \varepsilon_a = (2 - \varphi) \left| \frac{x_u - x_l}{x_{opt}} \right| \times 100\%

The benefit of using the golden ratio is that we don't have to recalculate all the function values (see the figure): x_2^{new} = x_1^{old}. It reduces time by 1) the cost per evaluation and 2) the number of evaluations.
There is no guaranteed way of finding the global optimum. One suggested procedure is to make several computer runs using different starting points and pick the best result.

Golden-Section Search
It is very similar in spirit to the bisection method for root location.
Golden ratio (\varphi):

    \frac{l_1 + l_2}{l_1} = \frac{l_1}{l_2} = \varphi \quad\Rightarrow\quad \varphi^2 - \varphi - 1 = 0 \quad\Rightarrow\quad \varphi = \frac{1 + \sqrt{5}}{2} = 1.61803398...
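A golden-section sketch following the update rules above; the test function and bracket are a common textbook choice (minimum near x = 1.4276), not prescribed by the slides:

    % Golden-section search for a one-dimensional minimum
    f   = @(x) x.^2/10 - 2*sin(x);
    xl  = 0; xu = 4; es = 0.01;
    phi = (1 + sqrt(5))/2;
    x1  = xl + (phi - 1)*(xu - xl);
    x2  = xu - (phi - 1)*(xu - xl);
    while (2 - phi)*abs((xu - xl)/x1)*100 > es   % x1 stands in for xopt
        if f(x1) < f(x2)                    % minimum lies in [x2, xu]
            xl = x2; x2 = x1;               % reuse x1 as the new x2
            x1 = xl + (phi - 1)*(xu - xl);
        else                                % minimum lies in [xl, x1]
            xu = x1; x1 = x2;               % reuse x2 as the new x1
            x2 = xu - (phi - 1)*(xu - xl);
        end
    end
    xopt = x1                               % about 1.4276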
2
Parabolic Interpolation
A second-order polynomial often provides a good approximation to the shape of f(x) near an optimum, and there is only one parabola connecting three points.
- x_1, x_2, x_3 are the initial guesses.
- x_4 is the optimum of the parabolic fit to the guesses.

MATLAB function: fminbnd
It combines the slow, dependable golden-section search with the faster, but possibly unreliable, parabolic interpolation.
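Usage on the same sample function as above; the bracket [0, 4] is our choice:

    f = @(x) x.^2/10 - 2*sin(x);
    [xmin, fval] = fminbnd(f, 0, 4)   % xmin about 1.4275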

Multidimensional Optimization - MATLAB function: fminsearch
It can be used to determine the minimum of a multidimensional function. It is based on the Nelder-Mead method, a direct search method (it does not require derivatives).

Laboratory
The figure shows the cross section of a channel carrying water. Determine w, d and \theta that minimize the length of the wetted perimeter while maintaining a cross-sectional area of 8 m^2. (Minimizing the wetted perimeter results in the least resistance to the flow.) Use both methods.
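A small fminsearch sketch; the two-dimensional test function (Rosenbrock) and start point are illustrative choices of ours, not the channel problem:

    f = @(x) (x(1) - 1)^2 + 100*(x(2) - x(1)^2)^2;   % Rosenbrock function
    [xmin, fval] = fminsearch(f, [-1, 1])            % converges near [1, 1]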

Examples of Systems of Equations
Structural problems; fluid/aerodynamic problems.

Chapter 5 - System of Equations 1
Reference: C7 and 8

Objectives
- Understanding what systems of equations are and where they occur in engineering and science
- Understanding how to use the graphical method to solve small sets of equations
- Understanding how to use the Gauss elimination method for large sets of equations
- Understanding the concepts of singularity and ill-conditioning
- Understanding how to improve the accuracy of the solution by pivoting
- Recognising special types of systems: banded systems

In the particular case that the line crosses through the origin, if the linear equation is written in the form y = f(x), then f has the properties

    f(x + y) = f(x) + f(y)
    f(ax) = a f(x)

where a is any scalar. A function which satisfies these properties is called a linear function. Many nonlinear equations may be reduced to linear equations by assuming that the quantities of interest vary only to a small extent from some "background" state.

You've Got a Problem
Find the equilibrium positions of three masses hanging on cords; assume each cord behaves as a linear spring and follows Hooke's law:

    m_1 g + k_2(x_2 - x_1) - k_1 x_1 = 0
    m_2 g + k_3(x_3 - x_2) - k_2(x_2 - x_1) = 0
    m_3 g - k_3(x_3 - x_2) = 0

In matrix form? Try this in MATLAB!
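The same equilibrium, rearranged to [K]{x} = {b} and solved in MATLAB; the spring constants and masses are illustrative values of ours:

    % Equilibrium of three masses on linear springs: K*x = b
    k1 = 50; k2 = 100; k3 = 50;            % spring constants (N/m)
    m1 = 65; m2 = 75; m3 = 60; g = 9.81;   % masses (kg), gravity (m/s^2)
    K = [k1+k2, -k2,    0;
         -k2,   k2+k3, -k3;
          0,    -k3,    k3];
    b = g*[m1; m2; m3];
    x = K\b                                 % equilibrium positions (m)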

Standard Methods
- Methods for small sets of equations, e.g. the graphical method and Cramer's rule.
- Direct methods: solving sets of equations directly, without mathematical approximation. Usually good for modest sets of equations. Examples are Gauss elimination, LU decomposition and Cholesky decomposition.
- Iterative methods: solving sets of equations with some mathematical approximation. Usually faster for large sets of equations. Examples are Jacobi, Gauss-Seidel, and successive over-relaxation (SOR)/under-relaxation (SUR).

The overdetermined case (M > N): the number M of (independent) equations is greater than the number N of unknowns, so there exists no solution satisfying all the equations strictly.
The underdetermined case (M < N): the number M of equations is less than the number N of unknowns, so the solution is not unique, but numerous.
Graphical Method
It is a very good idea to visualise the behaviour of each equation before actually finding the solution. The graphical method provides a way to do so, but it is restricted to small sets of equations (usually n < 3).

Gauss Elimination
Systematically eliminate unknowns and back-substitute for the solution; this is extendable to large sets of equations.
- We begin with the full system. After forward elimination, the last row holds one equation in one unknown, and you know how to solve that! Back substitution is then how we get the full solution.
- Try example 8.3; now go from eq. 8.8 to 8.13; check section 8.2.2.
- For large systems, the number of floating-point operations (flops) is proportional to n^3, which comes from forward elimination.
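A naive Gauss elimination sketch in MATLAB (no pivoting, so it assumes nonzero pivots; see the next slide for why pivoting matters):

    function x = gausselim(A, b)
    % Naive Gauss elimination: forward elimination, then back substitution
    n = numel(b);
    for k = 1:n-1                           % forward elimination
        for i = k+1:n
            factor = A(i,k)/A(k,k);
            A(i,k:n) = A(i,k:n) - factor*A(k,k:n);
            b(i) = b(i) - factor*b(k);
        end
    end
    x = zeros(n,1);
    x(n) = b(n)/A(n,n);                     % back substitution
    for i = n-1:-1:1
        x(i) = (b(i) - A(i,i+1:n)*x(i+1:n))/A(i,i);
    end
    end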

Lesson from the Graphical Method
- Parallel lines, no solution: singularity.
- Coincident lines, an infinite number of solutions: also singularity.
- Nearly parallel lines, sensitive to round-off error: ill-conditioned, a very dangerous system for numerical solution.

(Partial) Pivoting
How can you do Gauss elimination on this system of equations, where a_11 (the coefficient of x_1) is 0?

    2x_2 + 3x_3 = 8
    4x_1 + 6x_2 + 7x_3 = 3
    2x_1 + 3x_2 + 6x_3 = 5

Swap it with the equation having the largest coefficient of x_1: partial pivoting.
How about if a_11 is very small?

    0.0003x_1 + 3.0000x_2 = 2.0001
    1.0000x_1 + 1.0000x_2 = 1.0000

The exact solution is x_1 = 1/3 and x_2 = 2/3. This system is sensitive to round-off error: try solving it by Gauss elimination with and without partial pivoting.
Banded System
Many systems of equations resulting from engineering problems are banded systems (you will see this in the lab). A tridiagonal system:

    [ x x 0 0 ] [x_1]   [r_1]
    [ x x x 0 ] [x_2] = [r_2]
    [ 0 x x x ] [x_3]   [r_3]
    [ 0 0 x x ] [x_4]   [r_4]

where x marks the non-zero elements. We can take advantage of this banded structure in terms of both memory storage and computational time: for a tridiagonal system with n equations, the flops count is only proportional to n, instead of n^3 as for a general system. A huge saving!

Laboratory
Three reactors linked by pipes. The rate of mass transfer through each pipe is equal to the product of the flow Q (m^3/s) and the concentration c (mg/m^3) of the reactor from which the flow originates. Find the concentrations.

Solving Linear Equations with MATLAB
MATLAB provides two direct ways to solve systems of linear algebraic equations:
- The most efficient way, backslash or left-division: >> x = A\b
- Another way, matrix inversion: >> x = inv(A)*b

Chapter 6 - Systems of Equations 2
Reference: C9 and 10
Standard Methods
- Methods for small sets of equations, e.g. the graphical method and Cramer's rule.
- Direct methods: solving sets of equations directly, without mathematical approximation. Usually good for modest sets of equations. Examples are Gauss elimination, LU decomposition and Cholesky decomposition.
- Iterative methods: solving sets of equations with some mathematical approximation. Usually faster for large sets of equations. Examples are Jacobi, Gauss-Seidel, and successive over-relaxation (SOR)/under-relaxation (SUR).

Why LU Decomposition?
Find the maximum member force in a truss under a 10 kN load: solve {x_1} from [A]{x_1} = {b_1}. How about another load case, the 10 kN applied elsewhere: solve {x_2} from [A]{x_2} = {b_2}. Redo Gauss elimination again? Very time consuming; not so smart! How about we factorise [A]?
1. [A]{x_1} = [L][U]{x_1} = {b_1}
2. [A]{x_2} = [L][U]{x_2} = {b_2}
Same [L] and [U]: decompose once!
Objectives
- Understanding LU decomposition
- Knowing how to express Gauss elimination as an LU decomposition
- Extending LU decomposition to symmetric systems: Cholesky decomposition
- Inverse matrix vs. stimulus-response computations
- Understanding how the magnitude of the condition number can be used to estimate the precision of the solutions

Gauss elimination involves two steps: forward elimination and backward substitution. The forward elimination step comprises the bulk of the computational effort, especially for large systems of equations. LU factorization (decomposition) methods separate the time-consuming elimination of the matrix [A] from the manipulations of the right-hand side {b} (for example, when the inputs of the system {b} change). Thus, once [A] has been factored, or decomposed, multiple right-hand-side vectors can be evaluated in an efficient manner.
LU Decomposition (1/2)

Step 1: LU decomposition. Factor [A] = [L][U], so that [A]{x} = [L][U]{x} = {b}:

    [L] = | 1    0    0 |      [U] = | u_11 u_12 u_13 |
          | l_21 1    0 |            | 0    u_22 u_23 |
          | l_31 l_32 1 |            | 0    0    u_33 |

Step 2: Substitution.
Step 2.1: Forward substitution, solve [L]{d} = {b}:

    | 1    0    0 | |d_1|   |b_1|
    | l_21 1    0 | |d_2| = |b_2|
    | l_31 l_32 1 | |d_3|   |b_3|

Step 2.2: Back substitution, solve [U]{x} = {d} (same as in Gauss elimination):

    | u_11 u_12 u_13 | |x_1|   |d_1|
    | 0    u_22 u_23 | |x_2| = |d_2|
    | 0    0    u_33 | |x_3|   |d_3|

LU Decomposition (2/2)

[U] is the upper triangular matrix left by forward elimination:

    [U] = | a_11 a_12  a_13   |
          | 0    a'_22 a'_23  |
          | 0    0     a''_33 |

and [L] collects the forward-elimination factors:

    [L] = | 1    0    0 |     where f_21 = a_21/a_11, f_31 = a_31/a_11, f_32 = a'_32/a'_22
          | f_21 1    0 |
          | f_31 f_32 1 |

Work out example 9.1 or 10.1 (how to find L and U) and 9.2 (how to solve the system with LU).
MATLAB function: [L,U] = lu(A)
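A sketch of reusing one factorization for several right-hand sides; the 3x3 matrix is the one used in the Gauss-Seidel example later in these slides, not the truss:

    A = [3 -0.1 -0.2; 0.1 7 -0.3; 0.3 -0.2 10];
    [L, U, P] = lu(A);           % P*A = L*U, with partial pivoting
    b1 = [7.85; -19.3; 71.4];
    b2 = [1; 0; 0];              % a second load case
    x1 = U\(L\(P*b1));           % forward, then back substitution
    x2 = U\(L\(P*b2));           % same L and U: decompose once!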
MATLAB: the backslash operator
MATLAB checks whether [A] is in a format where a solution can be obtained without full Gauss elimination. These include systems that are (a) sparse and banded, (b) triangular, or (c) symmetric. If any of these cases is detected, the solution is obtained with the efficient techniques available for such systems: banded solvers, back and forward substitution, and Cholesky factorization. If none of these simplified solutions is possible and the matrix is square, a general triangular factorization is computed by Gauss elimination with partial pivoting and the solution is obtained with substitution.

Cholesky Decomposition
Similar to LU decomposition, but designed specially for symmetric systems: only half the storage and half the computational time.

    [A] = [U]^T [U]

So: solve [U]^T {d} = {b}, then [U]{x} = {d}.
MATLAB function: U = chol(A)

Inverse Matrix and Stimulus-Response Computations
We can find [A^{-1}] easily from LU decomposition: since [A][A^{-1}] = [I], i.e. [L][U][A^{-1}] = [I], we can solve for [A^{-1}] one column at a time. For example, the first column comes from the subsystem

    [L][U] {first column of A^{-1}} = {1; 0; 0}

Is there any physical meaning? Think about the three-bungee-jumper system. The right-hand-side vector {1; 0; 0} represents a unit load applied to jumper 1 (a stimulus to the system), and the solution of the subsystem represents what happens to all three bungee jumpers (the response of the system). You can apply a unit load to jumpers 2 and 3, respectively, to get their responses. In fact, because this is a linear system, you can superimpose these unit-load responses to represent any stimulus!

    [A]{x} = {b}, i.e. [Interactions]{response} = {stimuli}

so with {x} = [A^{-1}]{b}:

    x_1 = a^{-1}_{11} b_1 + a^{-1}_{12} b_2 + a^{-1}_{13} b_3
    x_2 = a^{-1}_{21} b_1 + a^{-1}_{22} b_2 + a^{-1}_{23} b_3
    x_3 = a^{-1}_{31} b_1 + a^{-1}_{32} b_2 + a^{-1}_{33} b_3

See Ex. 10.2 or 11.2.

Readings and Homework
Readings: Chapters 9 and 10 in Chapra
Homework: try to solve the problems in the previous lecture with LU and Cholesky decompositions; 9.6, 9.9, 10.3
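A sketch combining both ideas, using the symmetric spring matrix from the earlier three-mass example (illustrative values):

    K = [150 -100    0;          % symmetric, positive definite
        -100  150  -50;
           0  -50   50];
    U = chol(K);                 % K = U'*U, U upper triangular
    e1 = [1; 0; 0];              % unit load on mass 1 (stimulus)
    d  = U'\e1;                  % forward substitution: U'*d = e1
    x  = U\d                     % response = first column of inv(K)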

Matrix Condition Number and Ill-Conditioned Matrices
We have seen that an ill-conditioned matrix is always associated with solution sensitivity to round-off error. We can quantify the condition of a matrix by the matrix condition number:

    Cond[A] = ||A|| \, ||A^{-1}||

It can also be shown that

    \frac{||\Delta X||}{||X||} \le Cond[A] \, \frac{||\Delta A||}{||A||}

If the coefficients of [A] are known to t-digit precision (~10^{-t}) and Cond[A] = 10^c, the solution {X} may be valid to only t - c digits (~10^{c-t}).

Chapter 7 - Systems of Equations 3
Reference: C12 (or C11)
Standard Methods
- Methods for small sets of equations, e.g. the graphical method and Cramer's rule.
- Direct methods: solving sets of equations directly, without mathematical approximation. Usually good for modest sets of equations. Examples are Gauss elimination, LU decomposition and Cholesky decomposition.
- Iterative methods: solving sets of equations with some mathematical approximation. Usually faster for large sets of equations. Examples are Jacobi, Gauss-Seidel, and successive over-relaxation (SOR)/under-relaxation (SUR).

Why Iterative Methods?
Iterative methods provide an alternative to the direct methods discussed earlier. The basic concept for iterative methods is "guess and go", just as in chapters 3 and 4. The main advantage of iterative methods appears when we deal with a large set of linear or nonlinear equations:
- Direct methods: FLOPS ~ O(N^3)
- Iterative methods: FLOPS ~ O(N^2)
Objectives
- Understanding iterative methods for solving a set of equations, including Jacobi and Gauss-Seidel
- Knowing how relaxation can improve the convergence of the iterative methods
- Knowing the techniques to solve a set of nonlinear equations

Gauss-Seidel Method (1/2)
Let's try the Gauss-Seidel method on the following set of equations:

    3x_1 - 0.1x_2 - 0.2x_3 = 7.85
    0.1x_1 + 7x_2 - 0.3x_3 = -19.3
    0.3x_1 - 0.2x_2 + 10x_3 = 71.4

Step 1: Guess x_1 = x_2 = x_3 = 0 (iteration 0).
Step 2: Update x_1, x_2 and x_3 by solving equations 1, 2 and 3 consecutively, using each new value as soon as it is available (the new x_1 updates x_2 and x_3; the new x_2 updates x_3):

    x_1 = (7.85 + 0.1x_2 + 0.2x_3)/3 = (7.85 + 0.1(0) + 0.2(0))/3 = 2.616667
    x_2 = (-19.3 - 0.1x_1 + 0.3x_3)/7 = (-19.3 - 0.1(2.616667) + 0.3(0))/7 = -2.794524
    x_3 = (71.4 - 0.3x_1 + 0.2x_2)/10 = (71.4 - 0.3(2.616667) + 0.2(-2.794524))/10 = 7.005610

After iteration 1: x_1 = 2.616667, x_2 = -2.794524, x_3 = 7.005610.
Gauss-Seidel Method (2/2)
Step 3: Is x^{iter 1} close enough to x^{iter 0}, as measured by the approximate error? i.e.

    \varepsilon_{a,1} = | (x_1^{iter 1} - x_1^{iter 0}) / x_1^{iter 1} | \times 100\% \le \varepsilon_s
    \varepsilon_{a,2} = | (x_2^{iter 1} - x_2^{iter 0}) / x_2^{iter 1} | \times 100\% \le \varepsilon_s
    \varepsilon_{a,3} = | (x_3^{iter 1} - x_3^{iter 0}) / x_3^{iter 1} | \times 100\% \le \varepsilon_s

- Yes: stop the calculation.
- No: repeat step 2 until x^{iter} is close enough to x^{iter-1} (as defined by the approximate error).

Gauss-Seidel vs. Jacobi
Another such method is Jacobi: it updates all the unknowns simultaneously using only the values from the previous iteration, whereas Gauss-Seidel uses each new value as soon as it is available.
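A Gauss-Seidel sketch for the 3x3 system above; the stopping tolerance is our choice:

    A = [3 -0.1 -0.2; 0.1 7 -0.3; 0.3 -0.2 10];
    b = [7.85; -19.3; 71.4];
    x = zeros(3,1); es = 1e-5; ea = 100;
    while max(ea) > es
        xold = x;
        for i = 1:3                 % new values used immediately
            j = [1:i-1, i+1:3];
            x(i) = (b(i) - A(i,j)*x(j))/A(i,i);
        end
        ea = abs((x - xold)./x)*100;
    end
    x                               % converges to [3; -2.5; 7]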
Convergence
A big question for iterative methods is: will the solution converge? For diagonal dominance, i.e.

    |a_{ii}| > \sum_{j=1, j \ne i}^{n} |a_{ij}|

the solution always converges! Luckily, lots of engineering problems are in the form of diagonal dominance.
How about an arbitrary system of equations; is there any way to ensure convergence? For a linear system there is a way to check it, but for a nonlinear system there is no way to do so!

Nonlinear Systems - Graphical Method
What are x_1 and x_2 in

    x_1^2 + x_1 x_2 = 10
    x_2 + 3 x_1 x_2^2 = 57

Why the graphical method? It is easy to visualise the behaviour of each equation before actually finding the solution; in particular, it shows whether there is more than one solution for the system! It also makes it easy to pick the initial guess for the solution method.

Relaxation
With a relaxation factor \lambda, we can either accelerate or decelerate the convergence rate of the solution:

    x_i^{new} = \lambda x_i^{new} + (1 - \lambda) x_i^{old}

where x_i^{new} on the right-hand side is the new solution from the iteration and x_i^{old} is the solution before the iteration; the left-hand side is the new solution used for the next iteration. The relaxation factor acts as a weight between the old and new solutions.
- Convergence acceleration, successive over-relaxation (SOR): 1 < \lambda < 2
- Convergence deceleration, successive under-relaxation (SUR): 0 < \lambda < 1
Tips:
- Always start with plain Gauss-Seidel or Jacobi (\lambda = 1).
- If the solution starts to converge to a certain value, gradually increase \lambda to accelerate the convergence, but
- if the solution doesn't seem to converge to any value, gradually decrease \lambda to decelerate (and stabilise) the convergence.

Nonlinear Systems - Solution Methods
Solution methods for nonlinear systems usually combine one of the methods from chapters 3 and 4 (root finding) with one of the methods from chapters 5, 6 and 7 (solution of equation systems). Examples include:
- successive substitution (fixed-point iteration + the Gauss-Seidel method)
- the Newton-Raphson method + a linear solver such as the Jacobi method.
Successive Substitution
The one-variable fixed-point idea x = g(x) carries over to the two-variable case: rearrange to x_1 = g_1(x_1, x_2) and x_2 = g_2(x_1, x_2), and iterate.

Nonlinear Systems: Newton-Raphson
A first-order Taylor series of each equation gives the Newton-Raphson method for systems. For the two-variable case, algebraic manipulation yields

    x_{1,i+1} = x_{1,i} - \frac{ f_{1,i} \, \partial f_{2,i}/\partial x_2 - f_{2,i} \, \partial f_{1,i}/\partial x_2 }{ |J| }
    x_{2,i+1} = x_{2,i} - \frac{ f_{2,i} \, \partial f_{1,i}/\partial x_1 - f_{1,i} \, \partial f_{2,i}/\partial x_1 }{ |J| }

where |J| is the determinant of the Jacobian of the system. The Jacobian matrix is the matrix of all first-order partial derivatives of a vector-valued function.
Generalization, in matrix form:

    [J]\{\Delta x\} = -\{f\}, \qquad \{x\}_{i+1} = \{x\}_i + \{\Delta x\}

where [J] is the Jacobian matrix and \{f\}^T = [f_{1,i} \; f_{2,i} \; \ldots \; f_{n,i}]. This is similar to the one-variable Newton-Raphson method.
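A Newton-Raphson sketch for the two-equation system shown earlier (x_1^2 + x_1 x_2 = 10, x_2 + 3 x_1 x_2^2 = 57), with the Jacobian written out by hand; the initial guess is the common textbook one:

    x = [1.5; 3.5];                         % initial guess
    for i = 1:6
        f = [x(1)^2 + x(1)*x(2) - 10;
             x(2) + 3*x(1)*x(2)^2 - 57];
        J = [2*x(1) + x(2),  x(1);              % df1/dx1, df1/dx2
             3*x(2)^2,       1 + 6*x(1)*x(2)];  % df2/dx1, df2/dx2
        x = x + J\(-f);                     % [J]{dx} = -{f}
    end
    x                                       % converges to [2; 3]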
Readings and Homework
Readings: Chapter 12 (or 11) in Chapra
Homework: try to solve the problems in the previous lecture with the iterative methods: Jacobi, Gauss-Seidel and the relaxation method.

Formally, the derivative of the function f at a is the limit of the difference quotient as h approaches zero, if this limit exists:

    f'(a) = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h}

If the limit exists, then f is differentiable. The process of finding a derivative is called differentiation.
