
Approximations and Numerical Methods

Kenny TM∼

August 20, 2006

1 Why do we need approximation?

Approximation means an inexact representation of something that is still close
enough to be useful. Physics and all other kinds of science use approximation
extensively.

Why do we need approximation? All natural scientific theories, unlike
mathematical ones, are derived from observations of experiments. As apparatus
cannot be made exact, we can only obtain an estimate of what is going on.
Insisting on exactness in calculation would be nitpicking.

Approximation usually leads to a much simpler expression, without losing a
lot of information or making excessive assumptions. Therefore it is used a lot of
the time. Sometimes an exact solution is even impossible, and approximation is a
must.

2 Some common approximations

The following shows some common approximations, valid when x is much less than
1, usually written as x ≪ 1. I will state why these approximations are possible
in the next section.

(1 + x)ⁿ ≈ 1 + nx (1)
sin x ≈ x (2)
cos x ≈ 1 − x²/2 ≈ 1 (3)
tan x ≈ x (4)
eˣ ≈ 1 + x (5)
ln(1 + x) ≈ x (6)
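These approximations are easy to check numerically. The sketch below (my own illustration, not part of the original notes) compares each formula against the exact value at x = 0.1, using only the standard math module and taking n = 3 for the binomial case:

```python
import math

x = 0.1  # a value much less than 1

# (approximation, exact value) pairs for formulas (1)-(6), with n = 3 in (1)
pairs = [
    (1 + 3 * x, (1 + x) ** 3),        # (1 + x)^n ≈ 1 + nx
    (x, math.sin(x)),                 # sin x ≈ x
    (1 - x ** 2 / 2, math.cos(x)),    # cos x ≈ 1 − x²/2
    (x, math.tan(x)),                 # tan x ≈ x
    (1 + x, math.exp(x)),             # e^x ≈ 1 + x
    (x, math.log(1 + x)),             # ln(1 + x) ≈ x
]

for approx, exact in pairs:
    print(f"{approx:+.6f}  vs  {exact:+.6f}  (error {abs(approx - exact):.2e})")
```

All the errors are on the order of x² or smaller, and they shrink rapidly as x → 0.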

For example, when a ≫ b, i.e., a is much greater than b, then

(a + b)ⁿ = aⁿ (1 + b/a)ⁿ (7)
         ≈ aⁿ (1 + nb/a). (8)

Approximations can be applied recursively. For example,

1/(eˣ + x) ≈ 1/(1 + x + x) = 1/(1 + 2x) ≈ 1 − 2x.

A practical example is the simple pendulum. The equation of motion of a
simple pendulum is

ml²θ̈ = −mgl sin θ.

However, this ODE cannot be solved in terms of elementary functions. Nevertheless,
if we restrict the range of θ to θ ≪ 1, then the equation can be simplified to

ml²θ̈ = −mglθ

as sin θ ≈ θ. We can easily identify it as an SHM, and the angular frequency is
√(g/l).
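How good is the small-angle approximation? As a sketch (not part of the original notes), the linearized period T = 2π√(l/g) can be compared with the exact pendulum period T = 4√(l/g)·K(sin(θ₀/2)), where K is the complete elliptic integral of the first kind, evaluated here by a simple midpoint rule; l = 1 m and g = 9.81 m/s² are assumed values:

```python
import math

def pendulum_period(theta0, l=1.0, g=9.81, steps=10000):
    """Exact period of a simple pendulum released from angle theta0:
    T = 4 sqrt(l/g) K(sin(theta0/2)), with the complete elliptic
    integral K evaluated by the midpoint rule."""
    k = math.sin(theta0 / 2)
    h = (math.pi / 2) / steps
    K = sum(h / math.sqrt(1 - (k * math.sin((i + 0.5) * h)) ** 2)
            for i in range(steps))
    return 4 * math.sqrt(l / g) * K

# small-angle (SHM) period: T = 2π/ω with ω = sqrt(g/l)
small_angle = 2 * math.pi * math.sqrt(1.0 / 9.81)

print(small_angle)           # ≈ 2.006 s
print(pendulum_period(0.1))  # barely differs when θ0 ≪ 1
print(pendulum_period(1.0))  # noticeably longer for a large θ0
```

For θ₀ = 0.1 rad the two periods agree to about 0.06%, which is why the SHM approximation is so useful.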

3 Taylor series

The Taylor series gives a general recipe for how approximation can be done.

Suppose we want to approximate a given function f(x) near x = x₀. The
easiest approximation is f(x) ≈ f(x₀). This is known as the zeroth order approx-
imation. The result of this approximation is often unsatisfactory. A better one
is to use a line to approximate it. The line should, of course, pass through
(x₀, f(x₀)) and have slope f′(x₀) so that the line is tangent to the curve. Using
the usual point-slope form we have

f(x) ≈ f(x₀) + f′(x₀)(x − x₀). (9)

This is known as the first order approximation. This approximation is the most
widely used one, since the form is linear and thus easy to interpret, yet
does not deviate much from the real function.
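Formula (9) is easy to turn into code. The helper below is an illustrative sketch (the name first_order is mine, not from the notes); the derivative is supplied by hand, and √x near x₀ = 1 serves as the test function:

```python
import math

def first_order(f, fprime, x0):
    """Return the tangent-line approximation of f near x0,
    as in f(x) ≈ f(x0) + f'(x0)(x − x0)."""
    return lambda x: f(x0) + fprime(x0) * (x - x0)

# Example: sqrt(x) near x0 = 1, where f'(x) = 1/(2 sqrt(x)),
# giving sqrt(x) ≈ 1 + (x − 1)/2.
approx = first_order(math.sqrt, lambda x: 0.5 / math.sqrt(x), 1.0)

print(approx(1.1))       # ≈ 1.05
print(math.sqrt(1.1))    # ≈ 1.0488
```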

Figure 1: Zeroth and first order approximation.

It is possible, though, that the first order approximation is not enough. We can
make it more precise by approximating with a parabola, thus yielding the second
order approximation

f(x) ≈ f(x₀) + f′(x₀)(x − x₀) + f″(x₀)(x − x₀)²/2, (10)

and, in general, the n-th order approximation,

f(x) ≈ Σ (k = 0 to n) f⁽ᵏ⁾(x₀)/k! · (x − x₀)ᵏ. (11)

If n = ∞, the right hand side is an exact representation of f. This is called the
Taylor series, or Taylor expansion, of f. In particular, if x₀ = 0, then the expression
reduces to

f(x) ≈ Σ (k = 0 to n) f⁽ᵏ⁾(0)/k! · xᵏ,

which is known as the Maclaurin series.
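As a small sketch of the Maclaurin series in action (my own example, not from the notes), here are the partial sums for eˣ, where every derivative at 0 equals 1, so the k-th term is simply xᵏ/k!:

```python
import math

def maclaurin_exp(x, n):
    """n-th order Maclaurin polynomial of e^x: sum of x^k / k! for k = 0..n."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

# Higher orders approach the exact value math.exp(1) ≈ 2.71828...
for n in (1, 2, 4, 8):
    print(n, maclaurin_exp(1.0, n))
```

Already at order 8 the polynomial agrees with eˣ at x = 1 to about five decimal places.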

All the approximations in the last section are in fact first order approximations
(except the cosine one, which is second order). If you substitute everything into
(9), you will find how they are derived.

Example: Find the first order approximation of cos⁻¹ x near x = 0. Using
(9),

cos⁻¹ x ≈ cos⁻¹ 0 + (d/dx cos⁻¹ x)|ₓ₌₀ · x
        = π/2 − x.
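A quick numerical check of this result (a sketch using the standard math module):

```python
import math

# compare the first order approximation π/2 − x with the exact acos
for x in (0.05, 0.1, 0.2):
    approx = math.pi / 2 - x
    exact = math.acos(x)
    print(f"x = {x}: approx = {approx:.6f}, exact = {exact:.6f}")
```

The error is of order x³ (the next Taylor term is −x³/6), so the agreement is excellent for small x.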

4 Non-numerical approximation

Approximation does not appear just in formulae, but overwhelmingly more
often in assumptions about the system. Here are some common assumptions:

Symmetry — Ignoring defects in a system, and assuming a certain kind of sym-
metry, since symmetric objects often eliminate the need for special
advanced functions. Examples are:
Spherical symmetry — the Earth is a sphere.
Circular symmetry — the trajectory of the Moon around the Earth is a circle.

Infinity — When one observes an object close enough to it, it seems to ex-
tend to infinity. Assuming it is infinite or semi-infinite usually provides
simplification and new symmetry, if the calculation really converges at
all.
Infinite rod — a straight rod is infinite in length. It then exhibits transla-
tional symmetry along the direction of the rod, and rotational sym-
metry about the rod¹.
Infinite plane — a flat sheet is infinite in area. It then exhibits transla-
tional symmetry, and rotational symmetry about the normal direc-
tion of the plane.
Lattice — a lattice fills the whole space. It again exhibits translational
and rotational symmetry.

¹ The rotational symmetry is always there even if the rod is not infinite, though.
Infinitesimal — the collision time between two objects is infinitesimal.
Averaging — a fast-varying quantity can be replaced by its average value,
which is constant, since we cannot catch up with its pace. For instance,
in an AC circuit the power consumed by a load calculated using P = IV is in
fact an averaged value.
Perfect — a fluid with zero viscosity, a gas that totally satisfies the ideal gas law, a
mirror with infinite optical density that reflects all incident light.
Uniform — the density and temperature of a fluid are uniformly distributed.
Constant — the acceleration of a car, or that due to Earth's gravity, is constant;
a heater provides constant power, etc.
Point mass/charge — an object can be assumed to be a point mass or point charge
for easy calculation. For the former, the rotating motion of a finite-sized
mass can be ignored. For the latter, the irregularity of the electric field can
be neglected. Also, when one is far enough from a finite object, it can be
considered as a point too.
Vacuum — the atmosphere is often taken to be a vacuum, ignoring air friction,
refractive index, etc.

Isolated system — no energy lost to or gained from the surroundings, no external
force or any other influence from outer space, etc.
Perpetual — a system that has been running infinitely long, and will run
infinitely long more, such as SHM and waves.
Classical physics — applying classical physics to quantum-scale objects, e.g.
explaining the diamagnetism of materials.

5 Numerical root-finding

When the system is so complicated that approximation cannot simplify every-
thing down to earth, and no solution can be found, we have to apply numerical
methods. Numerical methods are methods to find a numerical, rather than
analytical, answer, which is very often the case encountered by engineers. The
usual numerical methods needed at this level are to solve an algebraic equation,
evaluate an integral, or solve an ODE. In these notes we will only deal with
root-finding, which is the most frequently needed one.

5.1 Iterative method

Suppose we have an equation, x = f(x); how do we solve it?

The simple analysis is: if x = f(x), then x = f(f(x)) = f(f(f(x))) = . . . ad
infinitum. The iterative method is built upon this. The general procedure is

1. Write the equation to solve in the form x = f(x).

2. Make a reasonable guess of x, say, x = x₀.
3. Calculate x₁ = f(x₀).
4. Continue with xₙ₊₁ = f(xₙ).
5. Stop when xₙ reaches the precision needed.
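The steps above can be sketched in a few lines of code. The example equation x = cos x is my own choice of a well-behaved fixed-point problem (it is not from the notes):

```python
import math

def iterate(f, x0, tol=1e-8, max_steps=100):
    """Fixed-point iteration: repeat x_{n+1} = f(x_n) until successive
    values differ by less than tol.  Returns None if it fails to settle
    within max_steps (the method does not always converge)."""
    x = x0
    for _ in range(max_steps):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return None

# Example: solve x = cos x, a classic convergent fixed-point problem.
root = iterate(math.cos, 0.5)
print(root)  # ≈ 0.7390851, where cos x = x
```

Returning None on failure matters, because, as the next example shows, the iteration can easily diverge.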

Example: find the solution to f(x) = x⁷ − x¹³ = 1. This equation cannot
be solved algebraically, and hence a numerical method is needed. Firstly, we
rewrite the equation into

x = x⁷ − x¹³ + x − 1

so that the iterative method can be applied. Now, when x = −2, f(x) = 8064 > 1; when
x = 2, f(x) = −8064 < 1, therefore the solution should lie somewhere between
−2 and 2. Let us take x₀ = 0. Then applying the iterative method we have

n    xₙ
0    0
1    −1
2    −2
3    8061
4    −6.06815 × 10⁵⁰

It does not seem to work! This is one of the big problems of the iterative method and of any
other numerical root-finding method: if you choose the wrong starting point,
the method fails. Another reason for the failure may be that our x = f(x) is not quite
good. Observe that when |x| > 1, the expression blows up extremely fast, and
that is also why the method fails here. Instead, if we write

x = (x⁷ − 1)^(1/13),
6
then the following table is obtained:

n    xₙ
0    0
1    −1
2    −1.05477
3    −1.07144
4    −1.07694
5    −1.07878
6    −1.07941
7    −1.07962
8    −1.07969

Much nicer! We see that x stabilizes around −1.080, and we may say the solution
is x ≈ −1.080.
Figure 2: Iterative method, where it fails.

5.2 Bisection method

Owing to the instability of the iterative method, another method, the bisection
method, is used when it fails.

Figure 3: Iterative method, where it succeeds.

The basic idea of the bisection method is that, if we find that f(a) < 0 and
f(b) > 0 for a continuous function f, then there must be a point x such that
a < x < b and f(x) = 0.

As a numerical method, we often pick the x that lies exactly in the middle
of a and b, i.e., x = (a + b)/2. If f(x) = 0, we are done. More often it is not, and we
need to find another x to test. However, we can halve the interval length
by knowing the sign of f(x): if f(x) and f(a) are of the same sign, then the
root must lie in the interval (x, b); otherwise, it is in (a, x). By keeping on
halving, we eventually close in on the exact root.

The general procedure of using the bisection method to solve an equation is

1. Rewrite the equation into the form f(x) = 0.

2. Make a reasonable guess of x₋₂ and x₋₁ such that x₋₂ < x₋₁ and f(x₋₂) and
f(x₋₁) are of different sign.
3. Take x₀ as the average value of x₋₂ and x₋₁, i.e., x₀ = (x₋₂ + x₋₁)/2.
4. Evaluate f(x₀).
   • If f(x₀)f(x₋₂) < 0, then set x₁ = (x₋₂ + x₀)/2.
   • If f(x₀)f(x₋₁) < 0, then set x₁ = (x₋₁ + x₀)/2.

5. Continue with evaluating f(xₙ) and comparing the signs with f(xₙ₋₁) and
f(xₙ₋₂).

6. Stop when xₙ reaches the precision needed.
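The procedure can be sketched as follows. This version keeps the bracketing interval (a, b) explicitly rather than the x₋₂, x₋₁ bookkeeping above, which is the more common way to code it; the tolerance value is an assumption:

```python
def bisect(f, a, b, tol=1e-9):
    """Bisection: f(a) and f(b) must have opposite signs.  Repeatedly
    halve the bracketing interval, keeping the half where the sign
    change (and hence the root) lies."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must differ in sign")
    while (b - a) / 2 > tol:
        m = (a + b) / 2
        fm = f(m)
        if fm == 0:
            return m
        if fa * fm < 0:        # sign change in (a, m): root is there
            b = m
        else:                  # otherwise the root is in (m, b)
            a, fa = m, fm
    return (a + b) / 2

# The example equation below: g(x) = x^7 − x^13 − 1 = 0, bracketed on (−2, 2).
g = lambda x: x ** 7 - x ** 13 - 1
print(bisect(g, -2.0, 2.0))  # ≈ −1.0797
```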

Example: find the solution to f(x) = x⁷ − x¹³ = 1. Firstly the equation should
be rewritten as

g(x) = x⁷ − x¹³ − 1 = 0.

Again we know g(−2) > 0 and g(2) < 0, hence we set x₋₂ = −2 and x₋₁ = 2, and
thus x₀ = 0. Continuing gives the following table:

n     xₙ            g(xₙ)
−2    −2            +
−1    2             −
0     0             −
1     −1            −
2     −1.5          +
3     −1.25         +
4     −1.125        +
5     −1.0625       −
6     −1.09375      +
7     −1.078125     −
8     −1.0859375    +
9     −1.08203125   +
10    −1.080078125  +

We can see that x settles down around −1.08, and therefore x ≈ −1.08. Compared
with the iterative method, the bisection method can always find the root, because it
just brackets the interval where the root exists, whereas with the iterative method the
successive values may diverge away. On the other hand, the bisection method is
slow: as we can see, we repeated 10 times in the bisection method to arrive at −1.08,
but only 4 times in the (successful) iterative method.

5.3 Newton’s method

Newton’s method is a numerical method that is more stable than the iterative
method, and much faster than bisection method.

Newton’s method can be considered as the application of first order ap-


proximation. Given a guess x0 to the equation f (x) = 0, we first the first order

9
y

Figure 4: Bisection method.

approximation of f near x = x0 . Then, we find the x-intercept of this line, which


is supposed to be the zero of f . Of course it is not, but this new x can be used
to approximate another line, and eventually the exact solution can be found.

The general procedure is:

1. Rewrite the equation into the form f(x) = 0.

2. Make a reasonable guess of x₀.
3. Define xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ).
4. Repeat until xₙ is precise enough.
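These steps translate directly into code. The sketch below uses the same example equation as the other methods, with the derivative supplied by hand; the tolerance and step limit are assumptions:

```python
def newton(f, fprime, x0, tol=1e-12, max_steps=50):
    """Newton's method: follow the tangent line at x_n down to its
    x-intercept, x_{n+1} = x_n − f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_steps):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    return x

# Example equation g(x) = x^7 − x^13 − 1 = 0 and its derivative.
g = lambda x: x ** 7 - x ** 13 - 1
gprime = lambda x: 7 * x ** 6 - 13 * x ** 12

print(newton(g, gprime, -1.0))  # ≈ −1.079731352
```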

Example: find the solution to f(x) = x⁷ − x¹³ = 1. Firstly the equation should
be rewritten as g(x) = x⁷ − x¹³ − 1 = 0. Therefore g′(x) = 7x⁶ − 13x¹². Using
Newton's method with an initial guess x₀ = −1, we obtain

n xn
0 −1
1 −1.166666667
2 −1.113191097
3 −1.08615279
4 −1.080008019
5 −1.079731886
6 −1.079731352
7 −1.079731352
8 −1.079731352

We can then say that x ≈ −1.079731352.

Notice how Newton's method converges to the solution. Once it has found
roughly where the solution is, the error diminishes quadratically, i.e., the number
of correct digits roughly doubles each iteration. Compare this with the bisection
method, whose speed is only linear, i.e., the precision increases at a steady pace.

Despite Newton’s method’s great speed, it still has limitations where us-
ing such method is not prefered. Newton’s method involves derivative of a
function. If the derivative is difficult to calculate, we cannot use Newton’s
method efficiently. Also, if f 0 (xn ) is very small, then the next term may be sent
to somewhere totally irrelevant, and the method will fail.

5.4 Using your calculator wisely

Modern scientific calculators often include numerical integration as a built-in
function, and newer models also include numerical differentiation and even
Newton's method. Using these calculators we can eliminate the need for ana-
lyzing the behavior of the function, and pay more attention to the real physical
problems. Some calculators even provide a programming (macro) function
so that repetitive tasks can be done automatically. Table 1 lists calculators
where numerical methods are built-in.

Figure 5: Newton's method.


Model                     ∫f(x)dx   df/dx|ₓ₌ₓ₀   f(x) = 0   Prog
Casio fx-50F
Casio fx-3900Pv              X                               X
Casio fx-3650P               X          X                    X
Casio fx-991MS*              X          X           X
Casio fx-991ES*              X          X           X
Sharp EL-506V                X          X                    •
Citizen SRP-285II                                            •
Hewlett Packard HP-30S                                       •

Table 1: Comparison of calculators. Note: "*" means the calculator is not
HKEAA approved. "•" means the programming function is limited,
and no complex programs can be written.
