
1. Introduction to Perturbation Theory & Asymptotic Expansions


Example 1.0.1. Consider
x − 2 = ε cosh x    (1.1)
For ε ≠ 0 we cannot solve this in closed form. (Note: ε = 0 ⇒ x = 2.)
The equation defines a function x : (−ε₀, ε₀) → ℝ (some range either side of 0).
We might look for a solution of the form x = x0 + ε x1 + ε² x2 + ⋯ and by substituting this into equation (1.1) we have
x0 + ε x1 + ε² x2 + ⋯ − 2 = ε cosh(x0 + ε x1 + ⋯)
Now for ε = 0, x0 = 2, and so for a suitably small ε
2 + ε x1 − 2 ≈ ε cosh(2 + ε x1 + ⋯) ⇒ x1 ≈ cosh(2)
x(ε) = 2 + ε cosh(2) + ⋯
For example, if we set ε = 10⁻² we get x ≈ 2.037622..., where the exact solution is
x = 2.039068...
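As a sanity check (my addition, not part of the notes), a few lines of Python compare the one-term expansion with a bisection root of equation (1.1):

```python
import math

def f(x, eps):
    # residual of equation (1.1): x - 2 = eps*cosh(x)
    return x - 2 - eps * math.cosh(x)

def bisect(eps, lo=1.0, hi=3.0, tol=1e-12):
    # simple bisection for the root near x = 2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo, eps) * f(mid, eps) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

eps = 1e-2
exact = bisect(eps)
approx = 2 + eps * math.cosh(2)           # one-term perturbation result
print(round(exact, 6), round(approx, 6))  # 2.039068 vs 2.037622
```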
1.1. Landau or Order Notation.
Definition 1.1.1. Let f and g be real functions defined on some open set containing zero, ie: 0 ∈ D ⊆ ℝ. We say:
(1) f(x) = O(g(x)) as x → 0 (big O) if ∃ K > 0 and δ > 0 such that (−δ, δ) ⊆ D and ∀ x ∈ (−δ, δ), |f(x)| ≤ K|g(x)|
(2) f(x) = o(g(x)) as x → 0 (little o) if lim_{x→0} f(x)/g(x) = 0
(3) f(x) ∼ g(x) as x → 0 (asymptotically equivalent) if lim_{x→0} f(x)/g(x) = 1
Remark 1.1.2.
(1) Could define these for x → x0 or x → ∞
(2) Abuse of notation: in, say, sin x = x + O(x²), O(x²) should be an equivalence class, ie: sin x − x ∈ O(x²)
Lemma 1.1.3. If lim_{x→0} |f(x)|/|g(x)| = m < ∞ then f(x) = O(g(x))
Proof. Suppose lim_{x→0} |f(x)|/|g(x)| = m. Then, given η > 0, there is δ > 0 such that for |x| < δ,
|m| − η < |f(x)|/|g(x)| < |m| + η
so
|f(x)| < (|m| + η)|g(x)|
But this is just the definition of O with |m| + η for K. □
Example 1.1.4.
(i) x² = o(x) as x → 0 since x²/x = x → 0 as x → 0
(ii) 3x² = O(x²) since 3x²/x² → 3 as x → 0
(iii) x = o(|x|^(1/2)) as x → 0
(iv) 2x/(1 + x²) = o(1) since 2x/(1 + x²) → 0 as x → 0
(v) 2x/(1 + x²) = O(x)
(vi) sin x = x + o(x) as x → 0 since lim_{x→0} (sin x − x)/x = lim_{x→0} (cos x − 1)/1 = 0
(vii) sin x = x + O(x³) since lim_{x→0} (sin x − x)/x³ = lim_{x→0} (cos x − 1)/(3x²) = lim_{x→0} (−sin x)/(6x) = lim_{x→0} (−cos x)/6 = −1/6
(viii) sin x ∼ x − x³/3! since lim_{x→0} sin x / (x − x³/3!) = 1
(ix) sin x = x − x³/3! + O(x⁴)
The definition of f′(x) in o notation is:
f(x + h) = f(x) + f′(x)h + o(h) as h → 0
The Taylor series for f(x + h) is given by
f(x + h) = f(x) + f′(x)h + ⋯ + f⁽ⁿ⁾(x)hⁿ/n! + o(hⁿ)
If the Taylor series is a convergent power series then o(hⁿ) can be replaced by O(hⁿ⁺¹).
Any convergent power series satisfies
Σ_{n=0}^∞ aₙxⁿ = Σ_{n=0}^N aₙxⁿ + O(x^(N+1))

Examples:
√(1 + 2x) = 1 + (1/2)(2x) + (1/2)(−1/2)(2x)²/2! + O(x³) = 1 + x − x²/2 + O(x³)
ln(1 + x) = x − x²/2 + O(x³)
x² sin(1/x) = O(x²). It is important to note that this is big O with no limit: x² sin(1/x) ≁ x², even though |x² sin(1/x)|/x² = |sin(1/x)| ≤ 1, since sin(1/x) has no limit as x → 0.

1.2. The Fundamental Theorem of Perturbation Theory.
Theorem 1.2.1. If A0 + A1 ε + ⋯ + A_N ε^N = O(ε^(N+1)) as ε → 0, then A0 = A1 = ⋯ = A_N = 0
Proof. Suppose A0 + A1 ε + ⋯ + A_N ε^N = O(ε^(N+1)) but not all A_k are zero. Let A_M be the first non-zero one. Consider
(A_M ε^M + A_{M+1} ε^(M+1) + ⋯ + A_N ε^N)/ε^(N+1) = (A_M + A_{M+1} ε + ⋯ + A_N ε^(N−M))/ε^(N+1−M) → ∞ as ε → 0
Then we have a contradiction with big O, which requires this quotient to stay bounded. □

1.3. Perturbation Theory of Algebraic Equations.
Example 1.3.1. Consider x² − 3x + 2 + ε = 0. Assume the roots have the following expansion: x = x0 + ε x1 + ε² x2 + O(ε³); then by substitution we get
(x0 + ε x1 + ε² x2 + ⋯)² − 3(x0 + ε x1 + ε² x2 + ⋯) + 2 + ε = O(ε³)
⇒ (x0² − 3x0 + 2) + ε(2x0x1 − 3x1 + 1) + ε²(x1² + 2x0x2 − 3x2) = O(ε³)
Terms in ε⁰:
x0² − 3x0 + 2 = 0 ⇒ x0 = 1 or x0 = 2
Terms in ε¹:
2x0x1 − 3x1 + 1 = 0
if x0 = 1 then x1 = 1
otherwise if x0 = 2 then x1 = −1
Terms in ε²:
x1² + 2x0x2 − 3x2 = 0
if x0 = 1 then x1 = 1, and so x2 = 1
otherwise if x0 = 2 then x1 = −1, and so x2 = −1
∴ x = 1 + ε + ε² + O(ε³) or x = 2 − ε − ε² + O(ε³)
We can solve x² − 3x + 2 + ε = 0 directly to get x = (3 ± √(1 − 4ε))/2.
Now √(1 − 4ε) = 1 − 2ε − 2ε² + O(ε³), and substituting this into (3 ± √(1 − 4ε))/2 we get
x = (3 ± (1 − 2ε − 2ε²))/2
which is the same answer as above.
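A quick numerical check (my addition) that the three-term expansions agree with the exact quadratic-formula roots to O(ε³):

```python
import math

def exact_roots(eps):
    # roots of x^2 - 3x + 2 + eps = 0 from the quadratic formula
    d = math.sqrt(1 - 4 * eps)
    return (3 - d) / 2, (3 + d) / 2

def perturbative_roots(eps):
    # three-term expansions found above
    return 1 + eps + eps**2, 2 - eps - eps**2

eps = 0.01
for x_exact, x_pert in zip(exact_roots(eps), perturbative_roots(eps)):
    print(x_exact, x_pert)   # the pairs agree to O(eps^3)
```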
Example 1.3.2 (Singular Perturbation).
Consider ε x² − 2x + 1 = 0 and again assume there is an expansion x = x0 + ε x1 + ε² x2 + O(ε³). We get for terms in ε⁰:
−2x0 + 1 = 0 ⇒ x0 = 1/2
terms in ε¹:
x0² − 2x1 = 0 ⇒ x1 = 1/8
terms in ε²:
2x0x1 − 2x2 = 0 ⇒ x2 = 1/16
and so we have
x = 1/2 + ε/8 + ε²/16 + ⋯
and this gives us one of the roots, but where is the second?
The exact solution is given by:
x = (1 ± √(1 − ε))/ε
and the other root should be
x = 2/ε − 1/2 + O(ε)
Last time, in example 1.3.2, we did not find the other root of ε x² − 2x + 1 = 0 using the expansion of form x0 + ε x1 + ε² x2 + O(ε³).
If instead we try w = ε x, then w²/ε − 2w/ε + 1 = 0 ⇒ w² − 2w + ε = 0. This, we can assume in the usual way, has an expansion of the form w = w0 + ε w1 + ε² w2 + O(ε³), and:
Terms in ε⁰:
w0² − 2w0 = 0 ⇒ w0 = 0 or 2
Terms in ε¹:
2w0w1 − 2w1 + 1 = 0 ⇒ w1 = 1/2 (if w0 = 0) or w1 = −1/2 (if w0 = 2)
⇒ x = w/ε = 1/2 + O(ε) (from w0 = 0), or x = 2/ε − 1/2 + O(ε) (from w0 = 2)
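Both branches can be checked numerically (my addition): the rescaled expansion recovers the large root ~2/ε that the regular expansion missed:

```python
import math

def roots_exact(eps):
    # both roots of eps*x^2 - 2x + 1 = 0
    d = math.sqrt(1 - eps)
    return (1 - d) / eps, (1 + d) / eps

eps = 0.01
small, large = roots_exact(eps)
print(small, 0.5 + eps / 8 + eps**2 / 16)  # regular root
print(large, 2 / eps - 0.5)                # singular root ~ 2/eps
```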
Example 1.3.3 (Non-Regular Expansions). Consider x² − x(2 + ε) + 1 = 0.
Assume x = x0 + ε x1 + ε² x2 + O(ε³)
ε⁰: x0² − 2x0 + 1 = 0 ⇒ x0 = 1 twice
ε¹: (1 + ε x1)² − (1 + ε x1)(2 + ε) + 1 = O(ε²) ⇒ 2ε x1 − 2ε x1 − ε = O(ε²) ⇒ −ε = O(ε²)
This contradicts the assumption that there was a regular expansion.
The exact roots are:
x = (2 + ε ± √((2 + ε)² − 4))/2 = (1/2)(2 + ε ± √(4ε + ε²)) = (1/2)(2 + ε ± 2√ε √(1 + ε/4)) = 1 ± ε^(1/2) + O(ε)
Try x = x0 + ε^(1/2) x1 + ε x2 + ⋯
ε⁰: x0 = 1 as before.
ε^(1/2): (1 + ε^(1/2) x1)² − (1 + ε^(1/2) x1)(2 + ε) + 1 = O(ε) ⇒ 2ε^(1/2) x1 − 2ε^(1/2) x1 = O(ε) ⇒ 0 = 0
ε¹: (1 + ε^(1/2) x1 + ε x2)² − (1 + ε^(1/2) x1 + ε x2)(2 + ε) + 1 = O(ε^(3/2))
which gives x1² = 1 ⇒ x1 = ±1, so x = 1 ± ε^(1/2) + O(ε)


1.4. Perturbation Theory of ODEs.
Example 1.4.1 (Regular Problem). Consider the following ODE:
ẋ + x = ε x², x(0) = 1
We try an expansion of the form x(t) = x0(t) + ε x1(t) + O(ε²), which leads to:
ε⁰:
ẋ0 + x0 = 0, x0(0) = 1 ⇒ x0(t) = e^(−t)
ε¹:
ẋ1 + x1 = x0² = e^(−2t), x1(0) = 0 (no ε in x(0) = 1) ⇒ x1(t) = e^(−t) − e^(−2t)
and so
x = e^(−t) + ε(e^(−t) − e^(−2t)) + O(ε²)
For t ∈ [0, ∞) the constants in the O definition can be chosen independent of t.
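This equation happens to be a Bernoulli equation, so the two-term expansion can be checked against the closed form x(t) = 1/((1 − ε)eᵗ + ε) (this closed form is my addition, obtained by substituting u = 1/x; it is not derived in the notes):

```python
import math

def x_exact(t, eps):
    # closed form via u = 1/x, which turns the ODE into u' - u = -eps
    return 1.0 / ((1 - eps) * math.exp(t) + eps)

def x_series(t, eps):
    # two-term regular expansion found above
    return math.exp(-t) + eps * (math.exp(-t) - math.exp(-2 * t))

eps = 0.05
err = max(abs(x_exact(t / 10, eps) - x_series(t / 10, eps)) for t in range(101))
print(err)  # O(eps^2), uniformly in t -- a regular problem
```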


Sometimes we want only ε > 0 versions of o, O, with one-sided limits lim_{ε→0⁺}.
More generally we use a sequence of gauge functions 1, ε^(1/2), ε, ε^(3/2), ⋯, or in general δ0(ε), δ1(ε), . . . , where
δ_{n+1}(ε) = o(δ_n(ε)) is what is needed.
Example 1.4.2 (Singular Model Equation).
Consider ε ẋ + x = 1, x(0) = 0, and suppose x = x0 + ε x1 + O(ε²)
Then ε(ẋ0 + ε ẋ1) + x0 + ε x1 = 1 + O(ε²)
ε⁰: x0 = 1, but x(0) = 0: we cannot satisfy the initial condition.
We can rescale time, ie: t = ε τ, which gives τ = t/ε and dx/dτ = (dx/dt)(dt/dτ) = ε ẋ, so
dx/dτ + x = 1, x(0) = 0
Now use x = x0 + ε x1 + O(ε²)
ε⁰:
dx0/dτ + x0 = 1, x0(0) = 0 ⇒ x0 = 1 − e^(−τ) = 1 − e^(−t/ε)
ε¹:
dx1/dτ + x1 = 0, x1(0) = 0 ⇒ x1 = 0 (similarly for ε², ε³ etc...)
Therefore the solution is:
x = 1 − e^(−t/ε)
Example 1.4.3 (Singular In The Domain).
Consider ẋ + ε x² = 1, x(0) = 0, t > 0, and assume x = x0 + ε x1 + ε² x2 + O(ε³)
ε⁰:
ẋ0 = 1, x0(0) = 0 ⇒ x0 = t
ε¹:
(ẋ0 + ε ẋ1) + ε(t + ε x1)² = 1 + O(ε²) ⇒ 1 + ε ẋ1 + ε(t² + 2ε t x1) = 1 + O(ε²)
⇒ ε(ẋ1 + t²) = O(ε²) ⇒ ẋ1 + t² = 0 ⇒ x1 = −t³/3
(If we carry on we find the ε² term involves t⁵.) The solution is:
x = t − ε t³/3 + O(ε²)
This is not regular (uniform) for t ∈ [0, ∞): the correction ε t³/3 is no longer small compared with t once t ∼ ε^(−1/2).
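Since ẋ = 1 − εx² is separable, the exact solution is x(t) = tanh(√ε t)/√ε (this closed form is my addition, not derived in the notes), which makes the breakdown of the expansion easy to see numerically:

```python
import math

def x_exact(t, eps):
    # separable: dx/(1 - eps x^2) = dt  =>  x = tanh(sqrt(eps) t)/sqrt(eps)
    s = math.sqrt(eps)
    return math.tanh(s * t) / s

def x_series(t, eps):
    # two-term expansion found above
    return t - eps * t**3 / 3

eps = 0.01
for t in (1.0, 5.0, 15.0):
    print(t, abs(x_exact(t, eps) - x_series(t, eps)))
# the error grows rapidly once t is comparable with 1/sqrt(eps) = 10
```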


Example 1.4.4 (Damped Harmonic Motion with Small Damping (ε > 0)).
Consider ẍ + ε ẋ + x = 0, x(0) = 0, ẋ(0) = 1, and assume x = x0 + ε x1 + ε² x2 + O(ε³)
ε⁰:
ẍ0 + x0 = 0, x0(0) = 0, ẋ0(0) = 1 ⇒ x0 = sin t
ε¹:
ẍ1 + ẋ0 + x1 = 0 ⇒ ẍ1 + x1 = −cos t, x1(0) = 0, ẋ1(0) = 0
and we should get x1 = −(t/2) sin t, whereby the solution is:
x = sin t − ε (t/2) sin t + ⋯
This again is not uniform for t ∈ [0, ∞).
The exact solution is x = (1/√(1 − ε²/4)) e^(−εt/2) sin(√(1 − ε²/4) t); the expansion above is only good for small t.
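A short comparison of the two-term expansion with the exact solution (my addition) shows the secular term −ε(t/2) sin t driving the error once εt is no longer small:

```python
import math

def x_exact(t, eps):
    w = math.sqrt(1 - eps**2 / 4)     # damped frequency
    return math.exp(-eps * t / 2) * math.sin(w * t) / w

def x_series(t, eps):
    # two-term expansion with the secular term
    return math.sin(t) - eps * (t / 2) * math.sin(t)

eps = 0.1
for t in (1.0, 10.0, 50.0):
    print(t, abs(x_exact(t, eps) - x_series(t, eps)))
# fine while eps*t << 1, poor once t ~ 1/eps
```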

1.5. Asymptotic Expansions.
Definition 1.5.1. A sequence of functions {φₙ}, n = 0, 1, 2, . . . , is called an asymptotic sequence as x → x0 if
lim_{x→x0} φ_{n+1}(x)/φ_n(x) = 0
(ie: φ_{n+1}(x) = o(φ_n(x)))
Note: We could have x0 = ∞ or a one-sided limit x → x0⁺
Examples:
(i) x^(−1/2), 1, x^(1/2), x, x^(3/2), . . . as x → 0⁺
(ii) 1, 1/x, 1/(x ln x), 1/(x^(3/2) ln x), . . . as x → ∞
(iii) tan x, (x − π)², (sin x)³, . . . as x → π


Taylor's Theorem:
f(x) = Σ_{n=0}^N f⁽ⁿ⁾(x0)(x − x0)ⁿ/n! + R_N(x) (the remainder), for f ∈ C^(N+1)[x0 − r, x0 + r], r > 0
There are remainder formulas to bound R_N(x); for example,
Cauchy: |R_N(x)| ≤ M_N r^(N+1)/(N + 1)!, M_N > 0, so R_N = O((x − x0)^(N+1)) as x → x0.
It is important to remember that Taylor's theorem does not imply R_N → 0 as N → ∞. In fact we know plenty of power series that do not converge, eg: Σ_{n=0}^∞ (−1)ⁿ n! xⁿ diverges for all x ≠ 0.
Another famous non-convergent series is Stirling's formula for n!:
ln n! = n ln n − n + (1/2) ln(2πn) + 1/(12n) − 1/(360n³) + O(1/n⁵)

Definition 1.5.2. Let {φₙ} be an asymptotic sequence as x → x0.
The sum Σ_{n=0}^N aₙφₙ(x) is called an asymptotic expansion of f with N terms if
f(x) − Σ_{n=0}^N aₙφₙ(x) = o(φ_N(x)).
The aₙ are called the coefficients of the asymptotic expansion, and Σ aₙφₙ(x) is called an asymptotic series.
Note: Some people use the stronger definition f(x) − Σ_{n=0}^N aₙφₙ(x) = O(φ_{N+1}(x)).
Notation 1.5.3. f(x) ∼ Σ_{n=0}^∞ aₙφₙ(x) as x → x0
Clearly any Taylor series is an asymptotic series.


Example: Find an asymptotic expansion for f(x) = ∫₀^∞ e^(−t)/(1 + xt) dt.
(1 + xt)^(−1) = 1 − xt + x²t² − ⋯ ⇒ f(x) = ∫₀^∞ (1 − xt + x²t² − ⋯)e^(−t) dt
Now, it can be shown that ∫₀^∞ tⁿ e^(−t) dt = n!, and hence
f(x) = 1 − x + 2!x² − 3!x³ + ⋯ = Σ (−1)ⁿ n! xⁿ
which diverges by the ratio test: |n! xⁿ|/|(n − 1)! x^(n−1)| = n|x| → ∞ for x ≠ 0.
It could still be an asymptotic expansion however; we'd need to check
f(x) − Σ_{n=0}^N (−1)ⁿ n! xⁿ = o(xᴺ)
This is a special case of:
Lemma 1.5.4 (Watson's Lemma). Let f be a function with a convergent power series with radius of convergence R, and f(t) = O(e^(γt)) as t → ∞ (for some γ > 0). Then:
∫₀^∞ e^(−at) f(t) dt ∼ Σ_{n=0}^∞ f⁽ⁿ⁾(0)/a^(n+1) as a → ∞
In the last example we had f(x) = ∫₀^∞ e^(−t)/(1 + xt) dt as x → 0⁺; let xt = u, a = 1/x, then
f = a ∫₀^∞ e^(−au)/(1 + u) du (which looks like Watson's)

Example 1.5.5 (Incomplete Gamma Function).
γ(a, x) = ∫₀ˣ t^(a−1) e^(−t) dt = ∫₀ˣ t^(a−1) Σ_{n=0}^∞ (−t)ⁿ/n! dt = Σ_{n=0}^∞ (−1)ⁿ/n! ∫₀ˣ t^(n+a−1) dt = Σ_{n=0}^∞ (−1)ⁿ x^(n+a)/(n!(n + a))
Note: The power series under the integral is convergent, hence uniformly convergent on [0, x]. We have a convergent power series for γ(a, x) in x.
Example 1.5.6.
Ei(x) = PV ∫_{−∞}^x (eᵗ/t) dt (exponential integral)
E1(x) = ∫₁^∞ (e^(−tx)/t) dt; taking a Cauchy principal value, it turns out that E1(−x) = −Ei(x)
Ei(x) = γ + ln|x| + x + x²/(2·2!) + x³/(3·3!) + O(x⁴)
where γ = −∫₀^∞ e^(−x) ln x dx = lim_{n→∞} (Σ_{k=1}^n 1/k − ln n) ≈ 0.5772 is Euler's constant.

2. ODEs In The Plane
We consider systems of ODEs of the form:
ẋ = u(x, y)
ẏ = v(x, y)
where x(0) = x0, y(0) = y0.
(Note: A 2nd order ODE can be expressed as a coupled system of 1st order ODEs, since if ẍ + f(x)ẋ + g(x) = 0 then if we say ẋ = y it follows that ẏ = ẍ, so we get
ẋ = y
ẏ = −f(x)y − g(x)
where in this case it turns out that u(x, y) = y, v(x, y) = −f(x)y − g(x).)
2.1. Linear Plane Autonomous Systems.
ẋ = ax + by
ẏ = cx + dy
The 1D case is easy: ẋ = ax, x(t) = x(0)e^(at).
Example 2.1.1.
Consider ẋ = 3x, ẏ = 2y.
If we say ~x = (x, y)ᵀ then we can write ~x′ = [[3, 0], [0, 2]] ~x
We solve these two ODEs to get
x(t) = x(0)e^(3t), y(t) = y(0)e^(2t), which may be written as
~x(t) = [[e^(3t), 0], [0, e^(2t)]] (x(0), y(0))ᵀ
We can construct a phase plot with solutions being curves in the plane called trajectories or orbits.
We could eliminate t as follows:
y = y(0) (x/x(0))^(2/3), x(0) ≠ 0
Figure 2.1.1:

Example 2.1.2.
ẋ = x
ẏ = −y
We solve in both cases to get x(t) = x(0)eᵗ, y(t) = y(0)e^(−t), and eliminate t such that:
y = y(0) (x/x(0))^(−1)
Noting that ~x′ = [[1, 0], [0, −1]] ~x, we end up with a phase plot (a saddle) which looks like this
Figure 2.1.2:
Similar to the above examples, consider

Example 2.1.3 (Simple Harmonic Motion: ẍ + x = 0).
ẋ = y, ẏ = −x, ie: ~x′ = [[0, 1], [−1, 0]] ~x
x(t) = A cos t + B sin t, x(0) = A
ẋ(t) = −A sin t + B cos t, ẋ(0) = B
~x = [[cos t, sin t], [−sin t, cos t]] (x(0), y(0))ᵀ = (x(0) cos t + y(0) sin t, −x(0) sin t + y(0) cos t)ᵀ
where the matrix is a rotation matrix. The orbits are circles.
Figure 2.1.3:

Example 2.1.4 (Damped Harmonic Motion: ẍ + bẋ + x = 0).
ẋ = y, ẏ = −x − by, ie: ~x′ = [[0, 1], [−1, −b]] ~x
Try x(t) = e^(λt), giving the characteristic polynomial
λ² + bλ + 1 = 0 ⇒ λ = −b/2 ± √(b²/4 − 1)
If b is small then b²/4 − 1 < 0, such that
λ = −b/2 ± i√(1 − b²/4) = −α ± iω
and so
x(t) = e^(−αt)(A cos(ωt) + B sin(ωt))
Figure 2.1.4:
Theorem 2.1.5. Let A ∈ ℝ²ˣ² be a real matrix with eigenvalues λ1, λ2. Then:
(i) If λ1 ≠ λ2 are real then there exists an invertible matrix P such that
P⁻¹AP = [[λ1, 0], [0, λ2]]
(ii) If λ1 = λ2 = λ then either A is diagonal, A = λI, or A is not diagonal and there is a P such that
P⁻¹AP = [[λ, 1], [0, λ]]
(iii) If λ1 = α + iβ, λ2 = α − iβ, β ≠ 0, then there is a P such that
P⁻¹AP = [[α, β], [−β, α]]
How does this help? Put ~x′ = A~x and let ~y = P⁻¹~x.
Then ~x = P~y and ~y′ = P⁻¹~x′ = P⁻¹A~x, and so
~y′ = P⁻¹AP~y
This allows us to generalise the work we did above.
Example 2.1.6. For case (i) in the theorem, consider ~u′ = P⁻¹AP~u.
Then
u1′ = λ1u1, u2′ = λ2u2
(since P⁻¹AP is just the diagonal matrix of eigenvalues acting on ~u)
Then ui(t) = ui(0)e^(λit), and so
~u(t) = [[e^(λ1t), 0], [0, e^(λ2t)]] (u1(0), u2(0))ᵀ
Now ~x = P~u, so
~x(t) = P [[e^(λ1t), 0], [0, e^(λ2t)]] (u1(0), u2(0))ᵀ = P [[e^(λ1t), 0], [0, e^(λ2t)]] P⁻¹ (x1(0), x2(0))ᵀ
u1 and u2 are related by eliminating t:
u1(t) = u1(0)e^(λ1t) ⇒ (u1/u1(0))^(1/λ1) = eᵗ, then u2 = u2(0) (u1/u1(0))^(λ2/λ1).

Figure 2.1.5: phase portraits for λ1 > λ2 > 0 and λ1 < λ2 < 0.
For λ1 ≠ λ2 there are distinct eigenvectors ~v1, ~v2 with A~vi = λi~vi.
Half-lines through the origin in the eigen-directions are trajectories.
Figure 2.1.6: e.g. λ1 > 0 > λ2, with the eigen-directions marked.

2.2. Phase Space Plots.
Eigenvalues real, different, same sign, both positive: Node (source)
Eigenvalues real, different, same sign, both negative: Node (sink)
Eigenvalues real, different, opposite sign: Saddle
Eigenvalues real, equal, λ > 0, A = λI: Node (source)
Eigenvalues real, equal, λ < 0, A = λI: Node (sink)
Eigenvalues real, equal, λ > 0, A ≠ λI: Degenerate source
Eigenvalues real, equal, λ < 0, A ≠ λI: Degenerate sink
Eigenvalues complex, λ1 = α + iβ, λ2 = α − iβ, α < 0, β ≠ 0: Stable spiral
Eigenvalues complex, λ1 = α + iβ, λ2 = α − iβ, α > 0, β ≠ 0: Unstable spiral
Eigenvalues purely imaginary, λ1 = iβ, λ2 = −iβ, β ≠ 0: Ellipse (centre)
Figure 2.2.1:

Example 2.2.1. Consider
ẋ = −3x + y
ẏ = 2x − 2y
The critical point of this system is at (0, 0)ᵀ, and the Jacobian is given by
A = [[−3, 1], [2, −2]]
We find the eigenvalues by setting det(A − λI) = det [[−3 − λ, 1], [2, −2 − λ]] = 0, ie:
λ² + 5λ + 4 = 0 ⇒ λ = −4, −1. This corresponds to a node (sink).
As for the associated eigenvectors:
A + 4I = [[1, 1], [2, 2]] ⇒ ~v = (1, −1)ᵀ is in the null space (hence an eigenvector)
A + I = [[−2, 1], [2, −1]] ⇒ ~v = (1, 2)ᵀ is the other eigenvector
We can get further information to help in curve sketching by considering isoclines:
dy/dx = (2x − 2y)/(−3x + y)
Figure 2.2.2:
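The 2×2 eigenvalue computation is short enough to script directly (a sketch of mine, using the trace/determinant form of the characteristic equation):

```python
import math

def eig2(a, b, c, d):
    # eigenvalues of [[a, b], [c, d]] via lambda^2 - (trace)lambda + det = 0
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    s = math.sqrt(abs(disc))
    if disc >= 0:
        return (tr - s) / 2, (tr + s) / 2                       # real pair
    return complex(tr / 2, -s / 2), complex(tr / 2, s / 2)      # complex pair

print(eig2(-3, 1, 2, -2))   # (-4.0, -1.0): a sink, as found above
```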
2.3. Linear Systems. A linear system for which the eigenvalues are wholly imaginary (λ = ±iβ) is called a centre.
The characteristic equation, in general, of A ∈ ℝ²ˣ² is λ² − (trace A)λ + det A = 0.
In this case (λ imaginary), λ² + β² = 0, so trace A = 0, det A > 0.
Consider a simple case:
ẋ = y
ẏ = −cx
Eliminate t to get
dy/dx = ẏ/ẋ = −cx/y
∫ y dy = −∫ cx dx ⇒ y²/2 + cx²/2 = const
This is the equation for an ellipse (for c > 0).
To determine the direction of the arrows, set y = 0; then on the x axis ẋ = 0, ẏ = −cx, and (ẋ, ẏ)ᵀ is a vector in the direction of solutions: for x positive, ẏ is negative, so the motion is clockwise.
The purely oscillatory nature of these solutions doesn't reflect most real-life situations (there is no damping).
Figure 2.3.1: x(t) and y(t).
2.4. Linear Approximations.
Consider ẋ = u(x, y), ẏ = v(x, y). Critical points occur when u = v = 0.
Let (x0, y0) be a critical point, put ξ = x − x0, η = y − y0 and Taylor expand about (x0, y0) to get (near the equilibrium point)
u(x, y) = u(x0, y0) + ξ ∂u/∂x(x0, y0) + η ∂u/∂y(x0, y0) + O(ξ² + η²) as (ξ, η) → (0, 0)
v(x, y) = v(x0, y0) + ξ ∂v/∂x(x0, y0) + η ∂v/∂y(x0, y0) + O(ξ² + η²) as (ξ, η) → (0, 0)
where u(x0, y0) = v(x0, y0) = 0, and so
(ξ′, η′)ᵀ = [[∂u/∂x, ∂u/∂y], [∂v/∂x, ∂v/∂y]] (ξ, η)ᵀ + O(ξ² + η²)
We can now make the approximation
(ξ′, η′)ᵀ = [[∂u/∂x, ∂u/∂y], [∂v/∂x, ∂v/∂y]]|_(x0, y0) (ξ, η)ᵀ
which is a linear system.

Example 2.4.1 (Predator-Prey).
We wish to model the dynamics between predators and prey. Without considering external & environmental variables, as the number of predators increases we expect the population growth rate of the prey to lessen; this then should result in a slow-down in the growth rate of predators (as there is more competition for fewer prey).
Let x be a population of prey (eg: rabbits) and y a population of predators (eg: foxes), with x, y > 0. (Note: this model relies on large x and y such that we are able to talk about derivatives etc., since x, y are really integers!)
A simple model is
ẋ = x(a − αy)
ẏ = y(−c + γx)
(a, c, α, γ > 0).
Then u(x, y) = x(a − αy), v(x, y) = y(−c + γx), and so for an equilibrium,
u = v = 0: x(a − αy) = 0, y(−c + γx) = 0
so for u = 0, either x = 0 or a − αy = 0 (y = a/α)
for v = 0, either y = 0 or −c + γx = 0 (x = c/γ)
Therefore the critical points are at
(0, 0), (c/γ, a/α)
More specifically, if we put a = 1, α = 1/2, c = 3/4, γ = 1/4 then:
ẋ = x(1 − y/2)
ẏ = y(−3/4 + x/4)
with critical points at (0, 0), (3, 2)
Near (0, 0):
(ξ′, η′)ᵀ = [[1, 0], [0, −3/4]] (ξ, η)ᵀ
The corresponding eigenvalues and eigenvectors being:
λ1 = 1, ~v1 = (1, 0)ᵀ; λ2 = −3/4, ~v2 = (0, 1)ᵀ
This is a saddle. Near (3, 2):
(ξ′, η′)ᵀ = [[0, −3/2], [1/2, 0]] (ξ, η)ᵀ
with eigenvalues:
λ1,2 = ±i√3/2
ie: ξ² + 3η² = const, so ellipses.
Now
dy/dx = [y(x/4 − 3/4)]/[x(1 − y/2)] ⇒ (3/4)ln x − x/4 + ln y − y/2 = const.
It is possible, albeit tricky, to show this is a closed curve.
Figure 2.4.1:
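The closed-curve claim can be supported numerically (my addition; the RK4 integrator and step size are my choices): along a computed orbit the first integral above stays constant to high accuracy.

```python
import math

# predator-prey with a=1, alpha=1/2, c=3/4, gamma=1/4 (the notes' values)
def rhs(x, y):
    return x * (1 - y / 2), y * (-3 / 4 + x / 4)

def rk4_step(x, y, h):
    k1 = rhs(x, y)
    k2 = rhs(x + h / 2 * k1[0], y + h / 2 * k1[1])
    k3 = rhs(x + h / 2 * k2[0], y + h / 2 * k2[1])
    k4 = rhs(x + h * k3[0], y + h * k3[1])
    return (x + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def first_integral(x, y):
    # (3/4)ln x - x/4 + ln y - y/2, constant along orbits
    return 0.75 * math.log(x) - x / 4 + math.log(y) - y / 2

x, y = 4.0, 2.0
e0 = first_integral(x, y)
for _ in range(20000):          # integrate to t = 20
    x, y = rk4_step(x, y, 0.001)
print(abs(first_integral(x, y) - e0))   # ~0: the orbit stays on a level curve
```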
Example 2.4.2 (Circular Pendulum).
We consider a circular pendulum given (in non-dimensional units) by ẍ + sin x = 0, where x is an angle.
Figure 2.4.2: pendulum of mass m, angle x, weight mg.
Note that for small angles x, sin x ≈ x, and so we get ẍ + x = 0 (simple harmonic motion).
We shall solve ẍ + sin x = 0 qualitatively, since it can't easily be solved analytically.
Let ẋ = y = u, ẏ = −sin x = v.
The critical points are at
ẋ = ẏ = 0 ⇒ y = 0, x = nπ, n ∈ ℤ
The Jacobian is
[[∂u/∂x, ∂u/∂y], [∂v/∂x, ∂v/∂y]] = [[0, 1], [−cos x, 0]] = [[0, 1], [(−1)^(n+1), 0]] at y = 0, x = nπ
Near the critical point we consider the linearisation with this matrix.
The characteristic equation is det [[−λ, 1], [(−1)^(n+1), −λ]] = 0 ⇒ λ² + (−1)ⁿ = 0
If n is even, λ² + 1 = 0 ⇒ λ = ±i; if n is odd, λ² − 1 = 0 ⇒ λ = ±1
For n odd we get eigenvectors ~v1 = (1, 1)ᵀ, ~v2 = (1, −1)ᵀ, which is a saddle.
For n even we get a centre.
The centres correspond to small swings.
The saddles correspond to swings just large enough that they stop at the top.
Everywhere else corresponds to big swings where, with no damping, the pendulum doesn't stop.
Figure 2.4.3:
2.5. Non-Linear Oscillators.
ẍ + x = 0 represents simple harmonic motion (SHM) with solution x(t) = A cos t + B sin t, a centre.
Consider ẍ + 2εẋ + x = 0, making the substitution ẋ = y. The system of ODEs is given by
ẋ = y
ẏ = −2εy − x
and the roots of the resulting characteristic polynomial are
λ = −ε ± i√(1 − ε²)
and so for 0 < ε < 1 we get a stable spiral.
Example 2.5.1 (Stiff Spring System).
In a simple spring, force and hence acceleration is proportional to the extension of the spring (Hooke's Law); instead we think of a stiff spring with force proportional to x + εx³. For small x this behaves like x, for large x like εx³. So consider
ẍ + x + εx³ = 0,
ẋ = y
ẏ = −x − εx³
with u = y, v = −x − εx³.
The critical points are at u = v = 0 ⇒ y = 0, x + εx³ = 0 ⇒ x = 0 or 1 + εx² = 0 (no real solutions).
The only critical point is at (0, 0):
[[∂u/∂x, ∂u/∂y], [∂v/∂x, ∂v/∂y]] = [[0, 1], [−1 − 3εx², 0]] = [[0, 1], [−1, 0]] evaluated at (0, 0)
This is a centre: λ = ±i.
Example 2.5.2 (Soft Spring).
We could change the sign before ε and simulate a soft spring, ie: ẍ + x − εx³ = 0.
The critical points in this case lie at (0, 0), (±1/√ε, 0), and the Jacobian is given by
[[0, 1], [−1 + 3εx², 0]]
For x close to ±1/√ε the linearisation has real eigenvalues λ = ±√2, so these equilibria are unstable (saddles); in physical terms, past these points the force no longer restores the spring.
3. Limit Cycles
Orbits, that is, trajectories of a system of ODEs, cannot cross.
Figure 3.0.1: an orbit x(t) with x(0) = x(T).
If ~x = (x(t), y(t))ᵀ and ~x(0) = ~x(T) (for T ≠ 0), then ~x′(0) = ~x′(T) and the solution repeats.
When this happens we have a periodic orbit, eg a clock, oscillator, cycle in the economy, biology etc...
Supposing we have such a cycle, what can be said about what happens around it? It could be the case that both outside and inside the orbit we spiral towards it; but then again, something else could happen instead. The equilibrium point at the centre doesn't give us this information.
Figure 3.0.2:
Example 3.0.3 (Cooked up). Consider a system of the contrived form:
ẋ = y − x(x² + y² − 1)    (1)
ẏ = −x − y(x² + y² − 1)    (2)

We see that by construction, when x, y satisfy x² + y² = 1, we obtain simple harmonic motion. There is only one equilibrium point, and that occurs at (0, 0). The Jacobian at this point is given by
[[∂u/∂x, ∂u/∂y], [∂v/∂x, ∂v/∂y]]|_(x,y)=(0,0) = [[1, 1], [−1, 1]]
The characteristic equation is λ² − 2λ + 2 = 0, which has roots λ = 1 ± i. This is an unstable spiral.
If instead, however, we look at this in polar coordinates
r² = x² + y², tan θ = y/x
Differentiating implicitly we get:
2rṙ = 2xẋ + 2yẏ, θ̇ = (xẏ − yẋ)/(x² + y²)
If we multiply (1) and (2) by x & y respectively we get:
xẋ = x(y − x(x² + y² − 1)), yẏ = y(−x − y(x² + y² − 1))
xẋ + yẏ = −(x² + y²)(r² − 1) ⇒ rṙ = −r²(r² − 1)
⇒ ṙ = −r(r² − 1)
This reveals that there is also an equilibrium at r = 1 (ie: for r = 1 we stay on the circle). Furthermore for r > 1, ṙ < 0, giving a spiral inwards towards the circle, whilst for r < 1, ṙ > 0, which is the unstable spiral we found above.
Figure 3.0.3: ṙ > 0 inside r = 1, ṙ < 0 outside.
Now if we multiply (1) and (2) by y & x respectively and subtract we get:
xẏ − yẋ = −x² − xy(r² − 1) − y² + xy(r² − 1) = −(x² + y²) = −r²
⇒ θ̇ = −r²/r² = −1
The equilibrium point correctly told us that locally we have an unstable spiral, but it failed to illuminate the behaviour as we move further out.
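The radial equation ṙ = −r(r² − 1) can be integrated numerically (my addition; the scalar RK4 step is my choice) to confirm that orbits starting inside and outside both approach the limit cycle r = 1:

```python
def r_dot(r):
    # radial equation found above: dr/dt = -r(r^2 - 1)
    return -r * (r * r - 1)

def evolve(r, h=0.001, steps=20000):
    # RK4 on the scalar radial ODE, out to t = 20
    for _ in range(steps):
        k1 = r_dot(r)
        k2 = r_dot(r + h / 2 * k1)
        k3 = r_dot(r + h / 2 * k2)
        k4 = r_dot(r + h * k3)
        r += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return r

print(evolve(0.1), evolve(3.0))  # both tend to the limit cycle r = 1
```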

We shall establish a couple of results that allow us to determine when & where there exist no closed orbits. First recall:
Theorem 3.0.4 (Divergence Theorem in the plane). Let C be a closed curve, let A ⊆ ℝ² be the region it encloses, and u, v be functions with continuous derivatives. Then
∬_A (∂u/∂x + ∂v/∂y) dx dy = ∮_C u dy − v dx = ∮_C (u (dy/ds) − v (dx/ds)) ds
where (x(s), y(s)) is a parameterisation of C.

Theorem 3.0.5 (Bendixson's Negative Criterion). Consider the system
ẋ = u(x, y)
ẏ = v(x, y)
with u and v continuously differentiable. Let A ⊆ ℝ² be a region of the plane in which
∂u/∂x + ∂v/∂y
does not change sign. Then there is no closed orbit contained within A.
Proof. Suppose for contradiction there exists a closed orbit in A. Then this orbit forms a closed curve C in A.
Figure 3.0.4: a closed orbit with x(0) = x(T).
Let A′ ⊆ A be the region enclosed by C (ie: ∂A′ = C). Along the orbit dx/dt = u and dy/dt = v, so
∮_C u dy − v dx = ∫₀^T (u (dy/dt) − v (dx/dt)) dt = ∫₀^T (ẋẏ − ẏẋ) dt = 0
but by the divergence theorem this equals
∬_{A′} (∂u/∂x + ∂v/∂y) dx dy ≠ 0
since ∂u/∂x + ∂v/∂y is either > 0 or < 0 ∀(x, y) ∈ A. This is a contradiction. □
Example 3.0.6 (the cooked-up example revisited).
ẋ = u = y − x(x² + y² − 1)
ẏ = v = −x − y(x² + y² − 1)
∂u/∂x = −3x² − y² + 1, ∂v/∂y = −x² − 3y² + 1, and so
∂u/∂x + ∂v/∂y = −4x² − 4y² + 2 = −4(x² + y²) + 2
This changes sign across the circle x² + y² = 1/2, and so we cannot rule out a closed orbit in a region straddling it. (Though we can be confident there is no closed orbit inside √(x² + y²) = r < 1/√2.)
Bendixson's criterion is not much use for answering the question: is there a closed orbit between r = 1/√2 and r = 1?

Example 3.0.7 (Damped Harmonic Motion).
For ẍ + 2εẋ + x = 0 we have ẋ = u = y, ẏ = v = −2εy − x.
This is a stable spiral at (0, 0), and
∂u/∂x + ∂v/∂y = 0 − 2ε, which is constant.
Hence there is no sign change for ∂u/∂x + ∂v/∂y, and so no closed orbits.

Example 3.0.8 (General Damped Oscillator).
This system is characterised by ẍ + f(x)ẋ + g(x) = 0 with damping f(x) > 0, which with the usual substitution ẋ = y yields
ẋ = y
ẏ = −f(x)y − g(x)
Now ∂u/∂x + ∂v/∂y = 0 − f(x) is always negative, and so general damped systems of this form have no closed orbits.

3.1. The Poincaré-Bendixson Theorem.
Theorem 3.1.1 (Poincaré-Bendixson Theorem). Given a bounded region A ⊆ ℝ² and an orbit C of a system of ODEs which remains in A ∀t ≥ 0, then C must approach either a limit cycle or an equilibrium.
Remark 3.1.2.
(1) orbit = trajectory = solution curve
(2) limit cycle = closed orbit that nearby orbits approach
(3) We can use this result with time running backwards, ie: orbits "come from" unstable equilibria or closed orbits.
3.2. Energy (brief).
Consider an oscillator of the form ẍ + f(x) = 0, characterised by
ẋ = y
ẏ = −f(x)
Since there is no damping we expect energy to be conserved.
Consider E = ẋ²/2 + F(x) = y²/2 + F(x), where ẋ²/2 can be considered kinetic energy and F(x) the potential energy. Set F′ = f; then
dE(x, y)/dt = yẏ + f(x)ẋ = (ẍ + F′(x))ẋ = 0 along solution curves.
So E is constant on a solution (x(t), y(t)); we call this a first integral.
Example 3.2.1 (Duffing's Equation).
This is the hard spring system we met earlier: ẍ + ω²x + εx³ = 0.
f(x) = ω²x + εx³ ⇒ F(x) = ω²x²/2 + εx⁴/4 + some constant we need not worry about in this context of constant solution curves. Then
E(x, y) = y²/2 + ω²x²/2 + εx⁴/4
for ε > 0. As E(x, y) is constant, the solutions are bounded for all t (closed curves).
We can say that x² + y² ≤ max(2E, 2E/ω²), since y²/2 + ω²x²/2 ≤ E; ie: solutions stay in a circle.
For E constant we can check that for y ≠ 0 there are two solutions for x.

4. Lindstedt's Method
Example 4.0.2 (Duffing's Equation). ẍ + ω²x + εx³ = 0, 0 < ε ≪ 1
We know the solutions are periodic for y(0) = ẋ(0) ≠ 0. The solutions will resemble slightly square ellipses.
Figure 4.0.1:
Consider a straightforward expansion x = x0 + εx1 + ε²x2 + O(ε³)
Terms in ε⁰:
ẍ0 + ω²x0 = 0 ⇒ x0 = a cos ωt
We may suppose without loss of generality that the initial conditions for this system are x(0) = a and ẋ(0) = 0, since we are just picking out a particular solution curve; we still get all of them.
Terms in ε¹:
ẍ1 + ω²x1 + x0³ = 0 ⇒ ẍ1 + ω²x1 = −a³ cos³(ωt)
To deal with cos³(ωt) we use the identity
cos³(ωt) = (3/4) cos(ωt) + (1/4) cos(3ωt)
and so
ẍ1 + ω²x1 = −a³((3/4) cos(ωt) + (1/4) cos(3ωt))
Since cos(ωt) appears in the homogeneous solution, we would need to introduce a t sin(ωt) term (a secular term), and this is at odds with what we already know about this system: that it is bounded as t → ∞.
The trick to get rid of the secular term is to introduce another series:
τ = Ωt where Ω = Ω0 + εΩ1 + ⋯
Example 4.0.3 (Lindstedt's Method). We try again with Duffing's equation (prefixing the cubic term with a minus sign this time) using the above idea and expecting to see a solution that has periodic orbits for ε small enough.
Without loss of generality we rescale time (setting ω = 1) such that we try to solve
ẍ + x − εx³ = 0
We define τ = Ωt where Ω = Ω0 + εΩ1 + ⋯, and so x(t) becomes x(τ). Coupling this with the standard expansion we use for x, we have
x(τ) = x0((Ω0 + εΩ1 + ⋯)t) + εx1((Ω0 + εΩ1 + ⋯)t) + ⋯
We want xi(τ) = xi(τ + 2π) (ie: period 2π for i = 0, 1, 2, . . . ).
For ε = 0 we have Ω0 = 1 (had we not set ω = 1 earlier then we'd have instead Ω0 = ω; actually we could choose what we want for Ω0 and, depending on how difficult we want to make things, this choice determines Ω1 later. It makes sense to keep things simple and choose Ω0 such that for ε = 0 we have x(τ) = x(t)).
Now dτ/dt = Ω, so dx/dt = (dx/dτ)(dτ/dt) = Ω dx/dτ (so ẋ = Ωx′), and d²x/dt² = Ω² d²x/dτ².
Returning to the ODE we have
Ω²x″ + x − εx³ = 0
(1 + εΩ1 + ⋯)²(x0″ + εx1″ + ⋯) + (x0 + εx1 + ⋯) − ε(x0 + εx1 + ⋯)³ = 0

Terms in ε⁰:
x0″ + x0 = 0
As before we will assume initial conditions x(0) = a, ẋ(0) = 0, to pick a point on some curve. We know there exists a solution with these properties by assuming ellipsoidal-shaped orbits; and by varying a we get all the curves.
Since there is no ε in the initial conditions we get
x0(0) = a, x1(0) = 0, x0′(0) = x1′(0) = 0
Then the solution for x0 is
x0 = a cos τ
Terms in ε¹:
x1″ + 2Ω1x0″ + x1 − x0³ = 0
x1″ + x1 = −2Ω1(a cos τ)″ + (a cos τ)³ = 2Ω1a cos τ + a³ cos³ τ
x1″ + x1 = (2Ω1a + (3/4)a³) cos(τ) + (1/4)a³ cos(3τ)
The whole point of doing this was to eliminate the cos(τ) term, which would have introduced a secular term, and so we set 2Ω1a + (3/4)a³ = 0:
Ω1 = −(3/8)a²
So it now remains to solve
x1″ + x1 = (1/4)a³ cos(3τ)
We try x1 = A cos(3τ): −9A cos(3τ) + A cos(3τ) = (1/4)a³ cos(3τ) ⇒ A = −a³/32
The general solution for x1, before applying the initial conditions, is
x1 = α cos(τ) + β sin(τ) − (a³/32) cos(3τ)
Now x1(0) = 0 ⇒ α = a³/32, and x1′(0) = 0 ⇒ β = 0, giving
x1 = (a³/32) cos(τ) − (a³/32) cos(3τ)
Thus
x(t) = a cos(Ωt) + ε (a³/32)(cos(Ωt) − cos(3Ωt)) + O(ε²)
where Ω = 1 − (3/8)a²ε + ⋯
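The Lindstedt solution derived above can be tested against a direct numerical integration of ẍ + x − εx³ = 0 (my addition; the RK4 integrator and step size are my choices). With the corrected frequency the error stays O(ε²) over many periods:

```python
import math

def duffing_rk4(a, eps, t_end, h=0.001):
    # integrate x'' + x - eps x^3 = 0, x(0)=a, x'(0)=0; return (t, x) samples
    x, y, t, out = a, 0.0, 0.0, []
    f = lambda x, y: (y, -x + eps * x**3)
    while t < t_end:
        k1 = f(x, y); k2 = f(x + h/2*k1[0], y + h/2*k1[1])
        k3 = f(x + h/2*k2[0], y + h/2*k2[1]); k4 = f(x + h*k3[0], y + h*k3[1])
        x += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        y += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += h
        out.append((t, x))
    return out

def lindstedt(t, a, eps):
    w = 1 - 3 * a**2 * eps / 8            # corrected frequency Omega
    return a*math.cos(w*t) + eps*a**3/32*(math.cos(w*t) - math.cos(3*w*t))

a, eps = 1.0, 0.01
err = max(abs(x - lindstedt(t, a, eps)) for t, x in duffing_rk4(a, eps, 30.0))
print(err)   # O(eps^2), with no secular growth
```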


5. Method of Multiple Scales
In the last section we used Lindstedt's method to take account of varying frequencies; now we develop a more general method for situations with two time scales.
An example of where this is relevant is the damped circular pendulum
ẍ + 2εẋ + sin x = 0
There are two things going on here: there is an oscillation, which is captured by one time scale, and a slow loss of energy due to the damping term, captured by another time scale. By considering two scales we are able to capture different features of the system.
Figure 5.0.2:
Consider small oscillations and small energy loss:
ẍ + εẋ + x = 0 (D.H.M.)
If we did a standard expansion we'd end up with a solution
x(t, ε) = sin t − ε(t/2) sin t + ⋯
and this is not a uniform expansion for the solution. The exact solution for this system is:
x = (1/√(1 − ε²/4)) e^(−εt/2) sin(t√(1 − ε²/4)), where x(0) = 0, ẋ(0) = 1
5.1. The Method.
(1) Introduce a new variable T = εt, and think of t as "fast time" and T as "slow time".
(2) Treat t and T as independent variables for the function x(t, T, ε). Using the chain rule we have
dx/dt = (∂x/∂t)(∂t/∂t) + (∂x/∂T)(∂T/∂t) = ∂x/∂t + ε ∂x/∂T
d²x/dt² = (∂/∂t + ε ∂/∂T)(∂x/∂t + ε ∂x/∂T) = ∂²x/∂t² + 2ε ∂²x/∂t∂T + ε² ∂²x/∂T²
(3) Try an expansion x = x0(t, T) + εx1(t, T) + ⋯, where of course T = εt.
(4) Use the extra freedom of x depending on T to kill off any secular terms.
5.2. Application of Multiple Scales to D.H.M.
Example 5.2.1. We consider ẍ + εẋ + x = 0 with initial conditions x(0) = 0, ẋ(0) = 1; this becomes
∂²x/∂t² + 2ε ∂²x/∂t∂T + ε² ∂²x/∂T² + ε(∂x/∂t + ε ∂x/∂T) + x = 0
Now let x = x0 + εx1 + ε²x2 + ⋯ to get
(∂²/∂t² + 2ε ∂²/∂t∂T + ε² ∂²/∂T²)(x0 + εx1 + ε²x2 + ⋯) + ε(∂/∂t + ε ∂/∂T)(x0 + εx1 + ε²x2 + ⋯) + x0 + εx1 + ε²x2 + ⋯ = 0
Terms in ε⁰:
∂²x0/∂t² + x0 = 0
This is a PDE with solution
x0 = A0(T) cos t + B0(T) sin t
where A0(T), B0(T) are some functions of T (as opposed to just constants). Using the initial conditions x(0) = 0, ẋ(0) = 1:
x0(0, 0) = A0(0) cos 0 + B0(0) sin 0 = 0 ⇒ A0(0) = 0
∂x0/∂t(0, 0) = −A0(0) sin 0 + B0(0) cos 0 = 1 ⇒ B0(0) = 1
Terms in ε¹:
∂²x1/∂t² + 2 ∂²x0/∂t∂T + ∂x0/∂t + x1 = 0
Now ∂x0/∂t = −A0(T) sin t + B0(T) cos t and ∂²x0/∂t∂T = −A0′(T) sin t + B0′(T) cos t,
and so it remains to solve
∂²x1/∂t² + x1 = (2A0′(T) + A0(T)) sin t − (2B0′(T) + B0(T)) cos t
The RHS contains terms in sin t, cos t, which will induce secular terms. We therefore choose A0(T) and B0(T) such that they go away. In other words:
2A0′ + A0 = 0 ⇒ A0 = A0(0)e^(−T/2) = 0
2B0′ + B0 = 0 ⇒ B0 = B0(0)e^(−T/2) = e^(−T/2)
and so
x0(t) = e^(−T/2) sin t = e^(−εt/2) sin t
To get the x1 term we would need higher-order terms to fully specify it. Notice that, with the exception of constants, the first-order part captures most of the features of the exact solution.
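The leading-order multiple-scales solution e^(−εt/2) sin t can be compared with a direct numerical integration (my addition; the RK4 integrator is my choice): the error stays small uniformly on a long time interval, unlike the secular expansion:

```python
import math

def dhm_rk4(eps, t_end, h=0.001):
    # x'' + eps x' + x = 0, x(0)=0, x'(0)=1; return (t, x) samples
    x, y, t, out = 0.0, 1.0, 0.0, []
    f = lambda x, y: (y, -eps * y - x)
    while t < t_end:
        k1 = f(x, y); k2 = f(x + h/2*k1[0], y + h/2*k1[1])
        k3 = f(x + h/2*k2[0], y + h/2*k2[1]); k4 = f(x + h*k3[0], y + h*k3[1])
        x += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        y += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += h
        out.append((t, x))
    return out

eps = 0.1
err = max(abs(x - math.exp(-eps*t/2)*math.sin(t)) for t, x in dhm_rk4(eps, 50.0))
print(err)  # small uniformly on [0, 50]
```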
In general we would have multiple time scales: T0 = t, T1 = εt, . . . , Tn = εⁿt.
If we consider a series x0(T0, T1, . . . , Tn) + εx1(T0, T1, . . . , Tn) + ⋯ then
d/dt = ∂/∂T0 + ε ∂/∂T1 + ε² ∂/∂T2 + ⋯

Example 5.2.2 (Van Der Pols Equation). Consider


x
+ (x2 1)x + x = 0

Immediately we see that if x2 > 1 we have damping, x2 < 1 we have negative


damping; so it would seem there is a tendency to head towards x = 1. Perhaps we
could use energy methods; anyhow...
 2

2
+
2
+

(x0 + x1 + )
t2
tT


(x0 + x1 + ) + x0 + x1 + = 0
+
+((x20 + 1)

T
Terms in 0 :
2 x0
+ x0 = 0 x0 = A(T ) cos t + B(T ) sin t
t2
We can write this in complex form
x0 = A0 (T )eit + A0 (T )eit (noting that z + z = 2Re(z))
We can justify this step since in general, if z = a + ib then
(a + ib)eit + (a ib)eit = a(eit + eit ) + ib(eit eit ) = 2a cos t 2b sin t

Terms in 1 :

2 x0
x0
2 x1
+
x
+
2
+ (x20 1)
=0
1
t2
tT
t
h
i
2 x1
+ x1 + 2(A ieit A ieit ) + (Aeit + Aeit ) 1 (iAeit iAeit ) = 0

2
t
A
where A = A (T ) =
T
If we wish to find A(T ) as opposed to x1 we need only consider secular terms. So
we need to equate coefficients of eit (and et ) to zero and kill them off.
considering terms in eit we have:
2iA iA iA2 A + 2iA2 A = 0
2A A + A2 A = 0

Similarly, if we consider eit we get the conjugate of this expression; ie; whatever
A kills off the eit terms also kills off the eit terms.
We should now use the polar form of A, as |A2 A| = |A3 |, and if we write A = |A|ei
with = arg(A) and |A| = 21 a then
x0 = a cos(t + ) and we now wish to find a
Now

dA
=
dT

1
2

da i 1 i d
e + 2 aie
and so it remains to solve
dT
dT
a ei + iaei 12 aei +

a2 2i a i
4 e
2e

which dividing through by ei gives


1
1
a = a + a3 ia = 0
2
8
We now equate real and imaginary parts:
= 0 is constant

=0

26

Which leaves us the following ODE


1
1
a a3
2
8
There are a number of methods to solve this; one of which being making the substitution cos = 12 a and using the fact that
a =

d (csc + cot )
1 csc + cot
csc2 + csc cot
1
=
=
= d
sin
sin csc + cot
csc + cot
csc + cot
Z
1
d = log(csc() + cot())

sin
This would still require a bit of work and so we shall instead treat a2 as a variable
and use partial fractions.
$$\frac{da^2}{dT} = 2a\frac{da}{dT} = a^2 - \frac{a^4}{4}$$
so
$$\int\frac{4}{a^2(4 - a^2)}\,da^2 = \int\left(\frac{1}{a^2} + \frac{1}{4 - a^2}\right)da^2 = \log|a^2| - \log|4 - a^2| = T + T_0$$

Hence
$$\frac{a^2}{4 - a^2} = ke^{T} \implies a^2(1 + ke^{T}) = 4ke^{T} \implies a^2 = \frac{4k}{e^{-T} + k}$$
$$a(T) = \frac{2}{\sqrt{1 + k^{-1}e^{-T}}}$$
$$x_0(t,T) = \frac{2}{\sqrt{1 + k^{-1}e^{-T}}}\cos(t + \theta)$$
Note that $a(T) \to 2$ as $T \to \infty$ whatever the (positive) initial amplitude: every solution settles onto a limit cycle of amplitude 2.
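The envelope result can be sanity-checked against a direct numerical integration of Van der Pol's equation. The following is a minimal pure-Python sketch (the RK4 integrator, step size, $\epsilon = 0.1$ and the initial conditions are illustrative choices, not from the notes); it runs until the slow time $T = \epsilon t$ is large, where $a(T) = 2/\sqrt{1 + k^{-1}e^{-T}}$ predicts an amplitude of 2:

```python
def vdp_step(x, v, eps, h):
    """One RK4 step for x'' + eps*(x**2 - 1)*x' + x = 0,
    written as the system x' = v, v' = -eps*(x**2 - 1)*v - x."""
    f = lambda x, v: (v, -eps * (x * x - 1.0) * v - x)
    k1x, k1v = f(x, v)
    k2x, k2v = f(x + h / 2 * k1x, v + h / 2 * k1v)
    k3x, k3v = f(x + h / 2 * k2x, v + h / 2 * k2v)
    k4x, k4v = f(x + h * k3x, v + h * k3v)
    return (x + h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

def vdp_amplitude(eps=0.1, x0=0.5, t_settle=200.0, h=0.01):
    """Amplitude (max |x| over a bit more than one period) after the
    slow time T = eps*t has become large, so the envelope has settled."""
    x, v = x0, 0.0
    for _ in range(int(t_settle / h)):
        x, v = vdp_step(x, v, eps, h)
    amp = 0.0
    for _ in range(int(7.0 / h)):  # a little over one period 2*pi
        x, v = vdp_step(x, v, eps, h)
        amp = max(amp, abs(x))
    return amp
```

Starting either inside ($x(0) = 0.5$) or outside ($x(0) = 3$) the limit cycle, the computed amplitude comes out close to 2, matching $a(T) \to 2$.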

Example 5.2.3 (Duffing's Equation).

We return again to Duffing's equation $\ddot{x} + \omega^2 x + \epsilon x^3 = 0$ in the context of multiple scales. Let $T = \epsilon t$; then
$$\dot{x} = \frac{\partial x}{\partial t} + \epsilon\frac{\partial x}{\partial T}, \qquad \ddot{x} = \frac{\partial^2 x}{\partial t^2} + 2\epsilon\frac{\partial^2 x}{\partial t\,\partial T} + \epsilon^2\frac{\partial^2 x}{\partial T^2}$$
and so putting this into the ODE above gives
$$\left(\frac{\partial^2}{\partial t^2} + 2\epsilon\frac{\partial^2}{\partial t\,\partial T} + \cdots\right)(x_0 + \epsilon x_1 + \cdots) + \omega^2(x_0 + \epsilon x_1 + \cdots) + \epsilon(x_0 + \epsilon x_1 + \cdots)^3 = 0$$
Terms in $\epsilon^0$:
$$\frac{\partial^2 x_0}{\partial t^2} + \omega^2 x_0 = 0 \implies x_0 = Ae^{i\omega t} + \bar{A}e^{-i\omega t}$$
Terms in $\epsilon^1$:
$$\frac{\partial^2 x_1}{\partial t^2} + \omega^2 x_1 + 2\frac{\partial^2 x_0}{\partial t\,\partial T} + x_0^3 = 0$$
Now
$$2\frac{\partial^2 x_0}{\partial t\,\partial T} = 2i\omega(A'e^{i\omega t} - \bar{A}'e^{-i\omega t})$$
whilst
$$x_0^3 = A^3e^{3i\omega t} + 3A^2\bar{A}e^{i\omega t} + 3A\bar{A}^2e^{-i\omega t} + \bar{A}^3e^{-3i\omega t}$$
and so
$$\frac{\partial^2 x_1}{\partial t^2} + \omega^2 x_1 + 2i\omega(A'e^{i\omega t} - \bar{A}'e^{-i\omega t}) + A^3e^{3i\omega t} + 3A^2\bar{A}e^{i\omega t} + 3A\bar{A}^2e^{-i\omega t} + \bar{A}^3e^{-3i\omega t} = 0$$


We now wish to kill off the secular terms $e^{i\omega t}$ and $e^{-i\omega t}$.
Terms in $e^{i\omega t}$:
$$2i\omega A' + 3A^2\bar{A} = 0$$
Again set $A = \frac{1}{2}ae^{i\theta}$, so that $A' = \frac{1}{2}a'e^{i\theta} + \frac{1}{2}ai\theta'e^{i\theta}$. Then
$$2i\omega\left(\frac{1}{2}a'e^{i\theta} + \frac{1}{2}ai\theta'e^{i\theta}\right) + \frac{3}{4}a^2e^{2i\theta}\cdot\frac{1}{2}ae^{-i\theta} = 0$$
$$i\omega a' - \omega a\theta' + \frac{3}{8}a^3 = 0$$
The imaginary part gives $\omega a' = 0 \implies a = a_0$ (a constant).
This leaves us with a simple ODE to solve,
$$\omega\theta' = \frac{3}{8}a_0^2$$
with solution
$$\theta(T) = \frac{3a_0^2}{8\omega}T = \frac{3\epsilon a_0^2}{8\omega}t \qquad (\text{since } T = \epsilon t)$$
Thus the solution we get, working as far as the first term $x_0$, is
$$x = x_0 + O(\epsilon) = a_0\cos\!\left(\left(\omega + \frac{3\epsilon a_0^2}{8\omega}\right)t\right) + O(\epsilon)$$
which, as far as the first term goes, is identical (except for the sign of $\frac{3\epsilon a_0^2}{8\omega}$, since we started with $+\epsilon x^3$ in this example) to that which we got from Lindstedt's method.
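The predicted frequency shift can also be checked numerically. A pure-Python sketch (the values $\omega = 1$, $\epsilon = 0.1$, $a_0 = 1$ and the step size are illustrative choices): starting from $x(0) = a_0$, $\dot{x}(0) = 0$, the first zero crossing of $x$ should occur at a quarter of the period $2\pi/(\omega + 3\epsilon a_0^2/8\omega)$:

```python
import math

def duffing_quarter_period(omega=1.0, eps=0.1, a0=1.0, h=1e-4):
    """Time of the first zero crossing of x'' + omega**2*x + eps*x**3 = 0
    with x(0) = a0, x'(0) = 0, found by RK4 plus linear interpolation."""
    f = lambda x, v: (v, -omega * omega * x - eps * x ** 3)
    x, v, t = a0, 0.0, 0.0
    while x > 0.0:
        x_prev = x
        k1x, k1v = f(x, v)
        k2x, k2v = f(x + h / 2 * k1x, v + h / 2 * k1v)
        k3x, k3v = f(x + h / 2 * k2x, v + h / 2 * k2v)
        k4x, k4v = f(x + h * k3x, v + h * k3v)
        x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += h
    # interpolate between the last positive and first non-positive x
    return t - h + h * x_prev / (x_prev - x)

# multiple-scales prediction for the quarter period
predicted = (math.pi / 2) / (1.0 + 3 * 0.1 * 1.0 ** 2 / (8 * 1.0))
```

With these numbers the measured crossing agrees with the prediction to roughly the expected $O(\epsilon^2)$ accuracy.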
6. The WKB Method

In this section we focus on eigenvalue problems for 2nd order ODEs, specifically
$$\frac{d^2y}{dx^2} + \lambda^2 r(x)y = 0, \qquad y(0) = y(1) = 0 \tag{6.1}$$
i.e. $y$ is a function of $x$.

6.1. Motivation. Stationary waves, for example on the string of a musical instrument, satisfy
$$T\frac{\partial^2 y}{\partial x^2} = \rho\frac{\partial^2 y}{\partial t^2}$$
where $T$ is the tension, $\rho$ the density of the string, $y$ the displacement, and $x$ the distance along the string.

Figure 6.1.1: [a string under tension $T$ with displacement $y(x)$, pinned at both ends]

$y(0,t) = y(1,t) = 0$: the string is pinned down at both ends.
Standing wave solutions have the form
$$y(x,t) = y(x)\cos\omega t$$
and so we reduce the wave equation to
$$\frac{d^2y}{dx^2} + \frac{\rho(x)\omega^2}{T}y = 0$$
This is in the form of (6.1) with $\lambda = \omega$ and $r = \dfrac{\rho(x)}{T}$.

6.1.1. Constant density. Consider the case $\rho(x) = \rho$ constant; then
$$\frac{d^2y}{dx^2} + \frac{\omega^2}{c^2}y = 0 \qquad \left(\text{where } c^2 = \frac{T}{\rho}\right)$$
This gives
$$y(x) = A\cos(\nu x) + B\sin(\nu x) \qquad \left(\text{where } \nu^2 = \frac{\omega^2}{c^2}\right)$$
Using the boundary conditions gives
$$y(0) = 0 \implies A = 0, \qquad y(1) = 0 \implies B\sin\nu = 0 \implies \nu = n\pi \quad (B \neq 0,\ n \in \mathbb{Z})$$
thus solutions have the form
$$y = B\sin(n\pi x) \tag{6.2}$$
Solutions only exist for $\nu = n\pi$, $n \in \mathbb{Z}$; i.e. for special, or eigen, values.


In general we only get closed-form solutions for simple $r(x)$.
Note: we are using boundary conditions $y(0)$, $y(1)$ rather than initial conditions $y(0)$, $y'(0)$. This means solutions do not exist for every value of the parameter (i.e. they have to fit).
Like eigenvectors, the eigenfunctions $y$ can be scaled by any $B$.
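The "fitting" of solutions can be made concrete by shooting: solve the initial value problem $y'' + \nu^2 y = 0$, $y(0) = 0$, $y'(0) = 1$, whose solution is $y = \sin(\nu x)/\nu$, and scan for the values of $\nu$ at which it happens to vanish again at $x = 1$. A minimal sketch (pure Python; the scan range and step are arbitrary choices):

```python
import math

def endpoint(nu):
    """y(1) for y'' + nu**2 * y = 0 with y(0) = 0, y'(0) = 1,
    using the closed form y(x) = sin(nu*x)/nu."""
    return math.sin(nu) / nu

def eigenvalues_below(nu_max, step=1e-3):
    """Scan for sign changes of endpoint(nu): each bracketed root is an
    eigenvalue, and they should land on nu = n*pi."""
    roots, nu = [], step
    f_prev = endpoint(nu)
    while nu < nu_max:
        nu_next = nu + step
        f_next = endpoint(nu_next)
        if f_prev * f_next <= 0.0:
            roots.append(0.5 * (nu + nu_next))
        nu, f_prev = nu_next, f_next
    return roots
```

`eigenvalues_below(10.0)` returns three values, close to $\pi$, $2\pi$ and $3\pi$, exactly as the fitting condition demands.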
6.2. The WKB Approximation. The method is named for Wentzel, Kramers and Brillouin (though the work done by Jeffreys two years earlier had not been recognised by these three, and so he is often denied credit for it).
If $\lambda$ or $\omega$ is very large, the zeros of (6.2) are very close together. We intend to use an asymptotic expansion for large $\lambda$.
If $r$ were constant in (6.1) we would have solutions of the form
$$y(x) = Ae^{\pm i\lambda\sqrt{r}\,x}$$
So we will try solutions of the form
$$y = e^{\lambda g(x,\lambda)}$$
Then
$$\frac{dy}{dx} = \lambda g'(x,\lambda)e^{\lambda g(x,\lambda)} \qquad\text{and}\qquad \frac{d^2y}{dx^2} = \lambda^2(g'(x,\lambda))^2e^{\lambda g(x,\lambda)} + \lambda g''(x,\lambda)e^{\lambda g(x,\lambda)}$$
We substitute into (6.1) and divide through by $\lambda^2 e^{\lambda g(x,\lambda)}$ to get
$$\frac{1}{\lambda}g'' + (g')^2 + r = 0$$
This is a 1st order ODE in $g'$, i.e. writing $h = g'$:
$$\frac{1}{\lambda}h' + h^2 + r = 0$$
Noting that $\frac{1}{\lambda}$ behaves like $\epsilon$ (since we are considering $\lambda$ large), we look for an expansion of the form
$$h(x) = h_0(x) + \frac{1}{\lambda}h_1(x) + \frac{1}{\lambda^2}h_2(x) + O\!\left(\frac{1}{\lambda^3}\right)$$
and since $g' = h$ we may take $g = g_0(x) + \frac{1}{\lambda}g_1(x) + \frac{1}{\lambda^2}g_2(x) + O\!\left(\frac{1}{\lambda^3}\right)$ with $g_i' = h_i$, so that the expression for $y$ becomes
$$y = e^{\lambda g_0 + g_1 + \frac{1}{\lambda}g_2 + O(\frac{1}{\lambda^2})}$$


Example 6.2.1 (Airy's Equation).

Consider
$$y'' + \lambda^2 xy = 0$$
Using $y = e^{\lambda g_0 + g_1 + \frac{1}{\lambda}g_2 + O(\frac{1}{\lambda^2})}$, we have
$$y' = e^{\lambda g_0 + g_1 + \cdots}(\lambda g_0' + g_1' + \cdots)$$
$$y'' = e^{\lambda g_0 + g_1 + \cdots}(\lambda g_0' + g_1' + \cdots)^2 + e^{\lambda g_0 + g_1 + \cdots}(\lambda g_0'' + g_1'' + \cdots)$$
We substitute this into the original equation (dividing out the exponential) to get
$$(\lambda g_0' + g_1' + \cdots)^2 + (\lambda g_0'' + g_1'' + \cdots) + \lambda^2 x = 0$$
Terms in $\lambda^2$:
$$(g_0')^2 + x = 0 \implies g_0' = \pm ix^{\frac{1}{2}} \implies g_0 = \pm\frac{2}{3}ix^{\frac{3}{2}} + A$$
We ignore the constant, as it merely factorises out of $y$ as a constant multiple of the solution and is of no real value to us.
Terms in $\lambda^1$, using $g_0 = \frac{2}{3}ix^{\frac{3}{2}}$:
$$g_0'' + 2g_0'g_1' = 0 \implies \frac{1}{2}ix^{-\frac{1}{2}} + 2ix^{\frac{1}{2}}g_1' = 0 \implies \frac{1}{4x} + g_1' = 0$$
$$g_1 = -\frac{1}{4}\log(x) + B$$
We didn't need the absolute value signs for $\log$ in this case since $0 < x \leq 1$. Also, if we were to consider the other solution for $g_0$, then since $g_0''$ inherits the same sign we still get the same solution for $g_1$.
The general solution is
$$y = Ae^{\frac{2}{3}i\lambda x^{3/2} - \frac{1}{4}\log(x)} + Be^{-\frac{2}{3}i\lambda x^{3/2} - \frac{1}{4}\log(x)}$$
$$= \frac{1}{x^{1/4}}\left[C\cos\left(\tfrac{2}{3}\lambda x^{3/2}\right) + D\sin\left(\tfrac{2}{3}\lambda x^{3/2}\right)\right] + O\!\left(\frac{1}{\lambda}\right) \quad\text{as } \lambda \to \infty$$
In general, if $y'' + \lambda^2 r(x)y = 0$ then
$$y(x,\lambda) = \frac{1}{r(x)^{1/4}}\left[A\cos\left(\lambda\int_a^x\sqrt{r(t)}\,dt\right) + B\sin\left(\lambda\int_a^x\sqrt{r(t)}\,dt\right)\right] + O\!\left(\frac{1}{\lambda}\right) \quad\text{as } \lambda \to \infty$$
Returning to the particular problem, we still haven't found or used the boundary conditions. To find approximate eigenvalues for $\lambda$ large:
$$y(0) = 0 \implies C = 0$$
$$y(1) = 0 \implies \sin\left(\tfrac{2}{3}\lambda\right) = 0 \implies \tfrac{2}{3}\lambda = n\pi \implies \lambda \approx \tfrac{3}{2}n\pi \quad (\text{for large } n \in \mathbb{Z})$$
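The estimate $\lambda_n \approx \frac{3}{2}n\pi$ can be compared with numerically computed eigenvalues via the shooting method. The sketch below (pure Python; the RK4 step, bracket width and the choice $n = 5$ are illustrative) integrates $y'' + \lambda^2 xy = 0$ from $y(0) = 0$, $y'(0) = 1$ and bisects on $y(1) = 0$ near the WKB estimate:

```python
import math

def shoot(lam, h=1e-3):
    """y(1) for y'' + lam**2 * x * y = 0, y(0) = 0, y'(0) = 1, via RK4."""
    f = lambda x, y, v: (v, -lam * lam * x * y)
    x, y, v = 0.0, 0.0, 1.0
    for _ in range(int(round(1.0 / h))):
        k1y, k1v = f(x, y, v)
        k2y, k2v = f(x + h / 2, y + h / 2 * k1y, v + h / 2 * k1v)
        k3y, k3v = f(x + h / 2, y + h / 2 * k2y, v + h / 2 * k2v)
        k4y, k4v = f(x + h, y + h * k3y, v + h * k3v)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        x += h
    return y

def airy_eigenvalue(n):
    """Bisect shoot(lam) = 0 in a bracket around the WKB estimate 3*n*pi/2."""
    lo, hi = 1.5 * math.pi * n - 2.0, 1.5 * math.pi * n + 1.0
    f_lo = shoot(lo)
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if f_lo * shoot(mid) <= 0.0:
            hi = mid
        else:
            lo, f_lo = mid, shoot(mid)
    return 0.5 * (lo + hi)
```

For $n = 5$ the computed eigenvalue lies a little below the WKB estimate $\frac{15\pi}{2} \approx 23.56$, with relative error already under 2%.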

Example 6.2.2.

This time consider $y'' + \lambda^2(1 + 3\sin^2 x)^2y = 0$, $y(0) = y(1) = 0$, again using $y = e^{\lambda g_0 + g_1 + \cdots}$, so that
$$(\lambda g_0' + g_1' + \cdots)^2 + (\lambda g_0'' + g_1'' + \cdots) + \lambda^2(1 + 3\sin^2 x)^2 = 0$$
Terms in $\lambda^2$:
$$(g_0')^2 + (1 + 3\sin^2 x)^2 = 0 \implies g_0' = \pm i(1 + 3\sin^2 x)$$
We now use the double-angle identity $\sin^2 x = \frac{1}{2}(1 - \cos 2x)$ to get
$$g_0' = \pm i\left(\tfrac{5}{2} - \tfrac{3}{2}\cos 2x\right) \implies g_0 = \pm i\left(\tfrac{5}{2}x - \tfrac{3}{4}\sin 2x\right)$$
Terms in $\lambda^1$ (taking the positive $g_0'$):
$$2g_0'g_1' + g_0'' = 0 \implies 2(1 + 3\sin^2 x)g_1' + 3\sin 2x = 0 \implies g_1' = -\frac{3\sin 2x}{2(1 + 3\sin^2 x)}$$
$$g_1 = -\tfrac{1}{2}\log(1 + 3\sin^2 x)$$
Again, in this example we'd gain no new information if we used the other form of $g_0$. Putting it all together we have
$$y = \frac{1}{(1 + 3\sin^2 x)^{1/2}}\left[C\cos\left(\lambda\left(\tfrac{5}{2}x - \tfrac{3}{4}\sin 2x\right)\right) + D\sin\left(\lambda\left(\tfrac{5}{2}x - \tfrac{3}{4}\sin 2x\right)\right)\right]$$
Now we use the boundary conditions:
$$y(0) = 0 \implies C = 0, \qquad y(1) = 0 \implies \lambda\left(\tfrac{5}{2} - \tfrac{3}{4}\sin 2\right) = n\pi$$
$$\lambda_n \approx \frac{n\pi}{\tfrac{5}{2} - \tfrac{3}{4}\sin 2}$$
For example, plugging in some numbers, say $n = 8$, our approximation yields $\lambda_8 \approx 13.824$, whilst the exact value is $13.820$.
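The quoted comparison can be reproduced by the same shooting idea as for Airy's equation. A sketch (pure Python; the step size and bracket width are illustrative choices): integrate $y'' + \lambda^2(1 + 3\sin^2 x)^2y = 0$ from $y(0) = 0$, $y'(0) = 1$ and bisect on $y(1) = 0$ near the WKB estimate for $n = 8$:

```python
import math

def shoot(lam, h=1e-3):
    """y(1) for y'' + lam**2 * (1 + 3*sin(x)**2)**2 * y = 0,
    y(0) = 0, y'(0) = 1, via RK4."""
    r = lambda x: (1.0 + 3.0 * math.sin(x) ** 2) ** 2
    f = lambda x, y, v: (v, -lam * lam * r(x) * y)
    x, y, v = 0.0, 0.0, 1.0
    for _ in range(int(round(1.0 / h))):
        k1y, k1v = f(x, y, v)
        k2y, k2v = f(x + h / 2, y + h / 2 * k1y, v + h / 2 * k1v)
        k3y, k3v = f(x + h / 2, y + h / 2 * k2y, v + h / 2 * k2v)
        k4y, k4v = f(x + h, y + h * k3y, v + h * k3v)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        x += h
    return y

def eigenvalue_near(lam_guess, half_width=0.5):
    """Bisect shoot(lam) = 0 in [lam_guess - half_width, lam_guess + half_width]."""
    lo, hi = lam_guess - half_width, lam_guess + half_width
    f_lo = shoot(lo)
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if f_lo * shoot(mid) <= 0.0:
            hi = mid
        else:
            lo, f_lo = mid, shoot(mid)
    return 0.5 * (lo + hi)

# WKB estimate for n = 8
wkb8 = 8 * math.pi / (2.5 - 0.75 * math.sin(2.0))
```

With these settings the bisected eigenvalue comes out close to the quoted exact value 13.820, while `wkb8` reproduces the approximation 13.824.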
