Proof. Suppose
|f(x)|/|g(x)| → m < ∞ as x → x₀;
we show f(x) = O(g(x)) as x → x₀. Given ε > 0 there is a δ > 0 such that
|f(x)/g(x) − m| < ε whenever |x − x₀| < δ,
and then
||f(x)|/|g(x)| − |m|| ≤ |f(x)/g(x) − m| < ε, so |f(x)|/|g(x)| < |m| + ε,
hence
|f(x)| < (|m| + ε)|g(x)|.
But this is just the definition of O with |m| + ε for K. □
Example 1.1.4.
(i) x² = o(x) as x → 0 since x²/x = x → 0 as x → 0
(ii) 3x² = O(5x²) since 3x²/(5x²) = 3/5 is bounded
(iii) x = o(|x^{1/2}|) as x → 0 since x/|x^{1/2}| → 0 as x → 0
(iv) 2x/(1 + x²) = o(1) since 2x/(1 + x²) → 0 as x → 0
(v) 2x/(1 + x²) = O(x)
(vi) sin x = x + o(x) as x → 0 since lim_{x→0} (sin x − x)/x = lim_{x→0} (cos x − 1)/1 = 0
(vii) sin x − x = O(x³) as x → 0 since
lim_{x→0} (sin x − x)/x³ = lim_{x→0} (cos x − 1)/(3x²) = lim_{x→0} (−sin x)/(6x) = lim_{x→0} (−cos x)/6 = −1/6
(viii) sin x − x ∼ −x³/3! since lim_{x→0} (sin x − x)/(−x³/3!) = 1
(ix) sin x = x − x³/3! + O(x⁴)
For a Taylor expansion,
f(x + h) = Σ_{n=0}^{N} f⁽ⁿ⁾(x)hⁿ/n! + o(hᴺ) as h → 0.
If the Taylor series is a convergent power series then o(hᴺ) can be replaced by O(hᴺ⁺¹).
Any convergent power series satisfies
Σ_{n=0}^{∞} aₙxⁿ = Σ_{n=0}^{N} aₙxⁿ + O(xᴺ⁺¹).
Examples:
√(1 + 2x) = 1 + (1/2)(2x) + ((1/2)(−1/2)/2!)(2x)² + O(x³) = 1 + x − x²/2 + O(x³)
ln(1 + x) = x − x²/2 + O(x³)
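As a quick numerical sanity check on these two truncations (a sketch of my own, not part of the notes; the factor 10 in the tolerances is an arbitrary safety margin):

```python
import math

# sqrt(1 + 2x) = 1 + x - x^2/2 + O(x^3) and ln(1 + x) = x - x^2/2 + O(x^3).
x = 0.01
sqrt_err = abs(math.sqrt(1 + 2 * x) - (1 + x - x**2 / 2))
log_err = abs(math.log(1 + x) - (x - x**2 / 2))

# Both errors should be of the size of the first omitted term, i.e. O(x^3).
assert sqrt_err < 10 * x**3
assert log_err < 10 * x**3
```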
x² sin(1/x) = O(x²) as x → 0. It is important to note that this is big O with no limit: |x² sin(1/x)| ≤ x², though
x² sin(1/x) / x² = sin(1/x),
and this has no limit as x → 0.
(A_M εᴹ + A_{M+1} εᴹ⁺¹ + ⋯ + A_N εᴺ)/εᴺ⁺¹ = A_M εᴹ⁻ᴺ⁻¹ + A_{M+1} εᴹ⁻ᴺ + ⋯ → ∞ as ε → 0 (since M ≤ N).
Then we have a contradiction with big O.
x = 1 + ε + ε² + O(ε³) or x = 2 − ε − ε² + O(ε³).
We can solve x² − 3x + 2 + ε = 0 directly to get x = (3 ± √(1 − 4ε))/2.
Now √(1 − 4ε) = 1 − 2ε − 2ε² + O(ε³) and substituting this into the formula we get
x = (3 ± (1 − 2ε − 2ε²))/2 + O(ε³),
which is the same answer as above.
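The agreement can be checked numerically (a sketch of my own, not part of the notes; the tolerance scale is my choice):

```python
import math

# Compare the two-term perturbation expansions of the roots of
# x^2 - 3x + 2 + eps = 0 with the exact quadratic formula.
eps = 0.01
exact_minus = (3 - math.sqrt(1 - 4 * eps)) / 2   # root near 1
exact_plus = (3 + math.sqrt(1 - 4 * eps)) / 2    # root near 2

approx_minus = 1 + eps + eps**2
approx_plus = 2 - eps - eps**2

# Errors should be O(eps^3).
assert abs(exact_minus - approx_minus) < 10 * eps**3
assert abs(exact_plus - approx_plus) < 10 * eps**3
```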
Example 1.3.2 (Singular Perturbation).
Consider εx² − 2x + 1 = 0 and again assume there is an expansion x₀ + εx₁ + ε²x₂ + O(ε³). We get for terms in ε⁰:
−2x₀ + 1 = 0 ⇒ x₀ = 1/2
terms in ε¹:
x₀² − 2x₁ = 0 ⇒ x₁ = 1/8
terms in ε²:
2x₀x₁ − 2x₂ = 0 ⇒ x₂ = 1/16
and so we have
x = 1/2 + ε/8 + ε²/16 + ⋯
and this gives us one of the roots, but where is the second?
The exact solution is given by:
x = (1 ± √(1 − ε))/ε
Last time, in Example 1.3.2, we did not find the other root of εx² − 2x + 1 = 0 using the expansion of form x₀ + εx₁ + ε²x₂ + O(ε³).
If instead we try w = εx, then the equation becomes w²/ε − 2w/ε + 1 = 0, ie w² − 2w + ε = 0. This, we can assume in the usual way, has an expansion of the form w₀ + εw₁ + ε²w₂ + O(ε³), and:
Terms in ε⁰:
w₀² − 2w₀ = 0 ⇒ w₀ = 0 or 2
Terms in ε¹:
2w₀w₁ − 2w₁ + 1 = 0 ⇒ w₁ = 1/2 (for w₀ = 0) or −1/2 (for w₀ = 2)
Hence
w = ε/2 + O(ε²) or w = 2 − ε/2 + O(ε²)
⇒ x = w/ε = 1/2 + O(ε) or x = 2/ε − 1/2 + O(ε)
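Both leading-order roots can be checked against the exact quadratic formula (a numerical sketch of my own, not part of the notes):

```python
import math

# The rescaling w = eps*x recovers both roots of eps*x^2 - 2x + 1 = 0:
# x ~ 1/2 + O(eps) (regular) and x ~ 2/eps - 1/2 + O(eps) (singular).
eps = 0.001
small_root = (1 - math.sqrt(1 - eps)) / eps
large_root = (1 + math.sqrt(1 - eps)) / eps

assert abs(small_root - 0.5) < eps            # 1/2 + O(eps)
assert abs(large_root - (2 / eps - 0.5)) < eps
```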
Example 1.3.3 (Non-Regular Expansions). Consider x² − x(2 + ε) + 1 = 0.
Assume x = x₀ + εx₁ + ε²x₂ + O(ε³).
ε⁰: x₀² − 2x₀ + 1 = 0 ⇒ x₀ = 1 twice
ε¹: (1 + εx₁)² − (1 + εx₁)(2 + ε) + 1 = O(ε²) ⇒ ε(2x₁ − 2x₁ − 1) = O(ε²) ⇒ −1 = O(ε)
This contradicts the assumption that there was a regular expansion.
The exact roots are:
x = (2 + ε ± √((2 + ε)² − 4))/2 = (1/2)(2 + ε ± √(4ε + ε²)) = (1/2)(2 + ε ± 2√ε √(1 + ε/4)) = 1 ± √ε + O(ε)
Try x = x₀ + ε^{1/2}x₁ + εx₂ + ⋯
ε⁰: x₀ = 1 as before.
ε^{1/2}: (1 + ε^{1/2}x₁)² − (1 + ε^{1/2}x₁)(2 + ε) + 1 = O(ε) ⇒ 2ε^{1/2}x₁ − 2ε^{1/2}x₁ = O(ε) ⇒ 0 = 0
ε¹: (1 + ε^{1/2}x₁ + εx₂)² − (1 + ε^{1/2}x₁ + εx₂)(2 + ε) + 1 = O(ε^{3/2}) ⇒ ε(x₁² − 1) = O(ε^{3/2}) ⇒ x₁ = ±1
giving x = 1 ± ε^{1/2} + O(ε), in agreement with the exact roots.
(For ẋ = −x + εx², x(0) = 1, with x = x₀ + εx₁ + ⋯ we have x₀ = e^{−t}.)
Terms in ε¹:
ẋ₁ + x₁ = x₀² = e^{−2t}, x₁(0) = 0 (no ε in x(0) = 1)
x₁(t) = e^{−t} − e^{−2t}
and so
x = e^{−t} + ε(e^{−t} − e^{−2t}) + O(ε²)
A sequence of functions {φₙ(x)} is called an asymptotic sequence as x → x₀ if
φₙ₊₁(x)/φₙ(x) → 0 as x → x₀.
(i) x^{−1/2}, 1, x^{1/2}, x, x^{3/2}, … as x → 0⁺
(ii) 1, 1/x, 1/(x ln x), 1/x^{3/2}, 1/(x² ln x), … as x → ∞
x 2 ln(x)
N
z }| {
X
f (n) (x)(x x0 )n
+ RN (x) , f C N +1 [x0 r, x0 + r], r > 0
f (x) =
n!
n=0
There are remainder formulas to bound RN (x), for example,
MN rN +1
, MN > 0, RN = O((x x0 )N +1 ) as x x0 .
Cauchy: |RN (x)|
(n + 1)!
It is important to remember that Taylor's theorem does not say R_N → 0 as N → ∞. In fact we know plenty of power series that do not converge, eg
Σ_{n=0}^{∞} (−1)ⁿ n! xⁿ diverges for all x ≠ 0.
We write f(x) ∼ Σ_{n=0}^{∞} aₙφₙ(x) as x → x₀ if, for each N,
f(x) − Σ_{n=0}^{N} aₙφₙ(x) = o(φ_N(x)) as x → x₀.
Consider f(x) = ∫₀^∞ e^{−t}/(1 + xt) dt. Since
(1 + xt)^{−1} = 1 − xt + x²t² − ⋯
we have, formally,
f(x) ∼ ∫₀^∞ (1 − xt + x²t² − ⋯)e^{−t} dt.
Now ∫₀^∞ tⁿe^{−t} dt = n! and hence
f(x) ∼ Σ_{n=0}^{∞} (−1)ⁿ n! xⁿ as x → 0⁺.
In ∫₀^∞ e^{−t}/(1 + xt) dt as x → 0⁺, let xt = u, a = 1/x; then the integral becomes
a ∫₀^∞ e^{−au}/(1 + u) du (looks like Watson's lemma).
Similarly, using e^{−t} = Σₙ (−t)ⁿ/n!,
∫₀^x t^{a−1} e^{−t} dt = Σ_{n=0}^{N} ((−1)ⁿ/n!) ∫₀^x t^{n+a−1} dt + ⋯ = Σ_{n=0}^{N} (−1)ⁿ x^{n+a}/(n!(n + a)) + ⋯
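For the divergent series f(x) ∼ Σ (−1)ⁿ n! xⁿ above, a fixed truncation still approximates the integral to roughly the size of the first omitted term. The following sketch (my own, not part of the notes; quadrature scheme and tolerances are my choices) checks this:

```python
import math

def f_integral(x, upper=40.0, steps=200000):
    # Crude midpoint rule for f(x) = int_0^inf e^{-t}/(1 + x t) dt;
    # the tail beyond t = 40 is below e^{-40} and is ignored.
    h = upper / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        total += math.exp(-t) / (1 + x * t)
    return total * h

def partial_sum(x, N):
    # Truncation of the divergent series sum_{n=0}^{N} (-1)^n n! x^n.
    return sum((-1)**n * math.factorial(n) * x**n for n in range(N + 1))

x = 0.05
err = abs(f_integral(x) - partial_sum(x, 5))
# First omitted term is 6! x^6 ~ 1.1e-5, even though the series diverges.
assert err < 1e-4
```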
E₁(x) = ∫ₓ^∞ (e^{−t}/t) dt, and Ei(x) = −PV ∫₋ₓ^∞ (e^{−t}/t) dt (Cauchy principal value); it turns out that E₁(−x) = −Ei(x), and
Ei(x) = γ + ln(x) + x + x²/(2·2!) + x³/(3·3!) + O(x⁴)
where
γ = −∫₀^∞ e^{−x} ln(x) dx = lim_{n→∞} (Σ_{k=1}^{n} 1/k − ln(n)) ≈ 0.5772
Figure 2.1.1:
Example 2.1.2.
ẋ = x
ẏ = −y
We solve in both cases to get x(t) = x(0)eᵗ, y(t) = y(0)e^{−t} and eliminate t such that:
y = y(0)(x/x(0))^{−1}
Noting that ẋ = [1 0; 0 −1]x, we end up with a phase plot which looks like this
Similar to the above example, consider
Figure 2.1.2:
ẋ = Bx with B = [0 1; −1 0]. Then
(x(t); y(t)) = [cos t sin t; −sin t cos t](x(0); y(0)) = (x(0)cos t + y(0)sin t; −x(0)sin t + y(0)cos t)
where [cos t sin t; −sin t cos t] is the rotation matrix.
The orbits are circles.
Figure 2.1.3:
For ẍ + bẋ + x = 0 the characteristic equation λ² + bλ + 1 = 0 gives
λ = −b/2 ± √(b²/4 − 1).
If b is small then b²/4 < 1, and so
x(t) = e^{−bt/2}(A cos(ωt) + B sin(ωt)), ω = √(1 − b²/4)
Figure 2.1.4:
Theorem 2.1.5. Let A ∈ R^{2×2} be a real matrix with eigenvalues λ₁, λ₂. Then:
(i) If λ₁ ≠ λ₂ are real then there exists an invertible matrix P such that
P⁻¹AP = [λ₁ 0; 0 λ₂]
(ii) If λ₁ = λ₂ = λ then either A is diagonal, A = λI, or A is not diagonal and there is a P such that
P⁻¹AP = [λ 1; 0 λ]
(iii) If λ₁ = α + iβ, λ₂ = α − iβ, β ≠ 0, then there is a P such that
P⁻¹AP = [α β; −β α]
How does this help? Put ẋ = Ax and let y = P⁻¹x;
then x = Py and ẏ = P⁻¹ẋ = P⁻¹Ax,
and so
ẏ = P⁻¹APy
This allows us to generalise the work we did above.
Example 2.1.6. For case (i) in the theorem, consider u̇ = P⁻¹APu;
then
u̇₁ = λ₁u₁, u̇₂ = λ₂u₂
(since P⁻¹AP is just the diagonal matrix of eigenvalues acting on u).
Then uᵢ(t) = uᵢ(0)e^{λᵢt} and so
u(t) = [e^{λ₁t} 0; 0 e^{λ₂t}] u(0)
Now x = Pu, so
x(t) = P [e^{λ₁t} 0; 0 e^{λ₂t}] u(0) = P [e^{λ₁t} 0; 0 e^{λ₂t}] P⁻¹ x(0)
u₁ and u₂ are related by eliminating t:
u₁(t) = u₁(0)e^{λ₁t} ⇒ (u₁/u₁(0))^{1/λ₁} = eᵗ, and then
u₂ = u₂(0)(u₁/u₁(0))^{λ₂/λ₁}
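The eigen-decomposition solution x(t) = P diag(e^{λ₁t}, e^{λ₂t}) P⁻¹ x(0) can be checked against direct time stepping. A sketch (my own example matrix A = [1 1; 0 −1], not from the notes):

```python
import math

# Eigenvalues of A = [[1, 1], [0, -1]]: l1 = 1, l2 = -1;
# eigenvectors (1, 0) and (1, -2) form the columns of P.
l1, l2 = 1.0, -1.0
P = [[1.0, 1.0], [0.0, -2.0]]
detP = P[0][0] * P[1][1] - P[0][1] * P[1][0]
Pinv = [[P[1][1] / detP, -P[0][1] / detP],
        [-P[1][0] / detP, P[0][0] / detP]]

def exact(x0, t):
    # x(t) = P diag(e^{l1 t}, e^{l2 t}) P^{-1} x(0)
    u = [Pinv[0][0] * x0[0] + Pinv[0][1] * x0[1],
         Pinv[1][0] * x0[0] + Pinv[1][1] * x0[1]]
    u = [u[0] * math.exp(l1 * t), u[1] * math.exp(l2 * t)]
    return [P[0][0] * u[0] + P[0][1] * u[1],
            P[1][0] * u[0] + P[1][1] * u[1]]

def rk4(x0, t, n=2000):
    def f(x):
        return [x[0] + x[1], -x[1]]   # A x
    h = t / n
    x = list(x0)
    for _ in range(n):
        k1 = f(x)
        k2 = f([x[i] + h / 2 * k1[i] for i in range(2)])
        k3 = f([x[i] + h / 2 * k2[i] for i in range(2)])
        k4 = f([x[i] + h * k3[i] for i in range(2)])
        x = [x[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(2)]
    return x

a, b = exact([1.0, 1.0], 1.0), rk4([1.0, 1.0], 1.0)
assert abs(a[0] - b[0]) < 1e-6 and abs(a[1] - b[1]) < 1e-6
```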
Figure 2.1.5: phase portraits for λ₁ > λ₂ > 0 and λ₁ < λ₂ < 0.
For λ₁ ≠ λ₂, distinct eigenvectors v₁, v₂, Avᵢ = λᵢvᵢ.
Half lines through the origin in eigen-directions are trajectories.
Figure 2.1.6: eg λ₁ > 0 > λ₂, with the eigen-directions marked.
Figure 2.2.1: classification of linear planar systems.
- Eigenvalues real, different, same sign, both positive: Node: Source
- Eigenvalues real, different, same sign, both negative: Node: Sink
- Eigenvalues real, different, opposite sign: Saddle
- Eigenvalues real, equal, λ > 0, A = λI: Node: Source
- Eigenvalues real, equal, λ < 0, A = λI: Node: Sink
- Eigenvalues real, equal, λ > 0, A ≠ λI: Degenerate Source
- Eigenvalues real, equal, λ < 0, A ≠ λI: Degenerate Sink
- Eigenvalues complex, λ₁ = α + iβ, λ₂ = α − iβ, α < 0, β ≠ 0: Stable Spiral
- Eigenvalues complex, λ₁ = α + iβ, λ₂ = α − iβ, α > 0, β ≠ 0: Unstable Spiral
- Eigenvalues purely imaginary, λ₁ = iβ, λ₂ = −iβ, β ≠ 0: Centre (Ellipse)
ẋ = −3x + y
ẏ = 2x − 2y
The critical points of this system are at (0; 0), and the Jacobian is given by
A = [−3 1; 2 −2]
We find the eigenvalues by setting
det(A − λI) = det[−3 − λ, 1; 2, −2 − λ] = 0, ie λ² + 5λ + 4 = 0,
giving λ = −1, −4.
Figure 2.2.2:
2.3. Linear Systems. A linear system for which the eigenvalues are wholly imaginary (λ = ±iβ) is called a centre.
The characteristic equation of A ∈ R^{2×2} is, in general, λ² − (trace A)λ + det A = 0.
In this case (for λ imaginary), λ² + β² = 0, so trace A = 0, det A > 0.
Consider a simple case:
ẋ = y
ẏ = −cx (c > 0)
Eliminate t to get
dy/dx = ẏ/ẋ = −cx/y
∫ y dy = −∫ cx dx ⇒ y²/2 + cx²/2 = const
This is the equation for an ellipse.
To determine the direction of the arrows, set y = 0; then on the x axis ẋ = 0, ẏ = −cx, and (ẋ, ẏ) is a vector in the direction of solutions: for x positive, ẏ is negative, so the motion is clockwise.
The oscillatory nature of these graphs does not reflect any real-life situations.
Figure 2.3.1: x(t) and y(t).
2.4. Linear Approximations.
Consider ẋ = u(x, y), ẏ = v(x, y). Critical points occur when u = v = 0.
Let (x₀, y₀) be a critical point; put ξ = x − x₀, η = y − y₀ and Taylor expand about (x₀, y₀) to get (near the equilibrium point)
u(x, y) = u(x₀, y₀) + ξ ∂u/∂x(x₀, y₀) + η ∂u/∂y(x₀, y₀) + O(ξ² + η²) as (ξ, η) → (0, 0)
v(x, y) = v(x₀, y₀) + ξ ∂v/∂x(x₀, y₀) + η ∂v/∂y(x₀, y₀) + O(ξ² + η²) as (ξ, η) → (0, 0)
and so, since u(x₀, y₀) = v(x₀, y₀) = 0,
(ξ̇; η̇) = [∂u/∂x ∂u/∂y; ∂v/∂x ∂v/∂y](ξ; η) + O(ξ² + η²)
We can now make the approximation
(ξ̇; η̇) = [∂u/∂x ∂u/∂y; ∂v/∂x ∂v/∂y]|(x₀, y₀) (ξ; η)
which is a linear system.
For v = 0, either y = 0 or −c + γx = 0 (x = c/γ), giving critical points (0, 0), (c/γ, a/b).
More specifically, if we put a = 1, b = 1/2, c = 3/4, γ = 1/4 then:
ẋ = x(1 − y/2)
ẏ = y(−3/4 + x/4)
with critical points at (0, 0), (3, 2).
Near (0, 0):
(ξ̇; η̇) = [1 0; 0 −3/4](ξ; η)
Near (3, 2):
(ξ̇; η̇) = [0 −3/2; 1/2 0](ξ; η)
with eigenvalues
λ₁,₂ = ±i√3/2
ie ξ² + 3η² = const near (3, 2), so ellipses.
For the full orbits,
dy/dx = y(x/4 − 3/4)/(x(1 − y/2)) ⇒ (3/4)ln x + ln y − y/2 − x/4 = const.
It is possible, albeit tricky, to show this is a closed curve.
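Numerical evidence for closed curves: the quantity H = (3/4)ln x + ln y − y/2 − x/4 should stay constant along trajectories. A sketch (my own, not from the notes; step size and tolerance are my choices):

```python
import math

# Check that H is (numerically) conserved along
# xdot = x(1 - y/2), ydot = y(-3/4 + x/4).
def f(s):
    x, y = s
    return [x * (1 - y / 2), y * (-0.75 + x / 4)]

def H(s):
    x, y = s
    return 0.75 * math.log(x) + math.log(y) - y / 2 - x / 4

s = [1.0, 1.0]
h = 0.001
h0 = H(s)
for _ in range(20000):        # RK4 up to t = 20
    k1 = f(s)
    k2 = f([s[i] + h / 2 * k1[i] for i in range(2)])
    k3 = f([s[i] + h / 2 * k2[i] for i in range(2)])
    k4 = f([s[i] + h * k3[i] for i in range(2)])
    s = [s[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
         for i in range(2)]
assert abs(H(s) - h0) < 1e-6
```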
Figure 2.4.1: x(t) and y(t).
Example 2.4.2 (Circular Pendulum).
We consider a circular pendulum given by ẍ + sin x = 0 (in non-dimensional units), where x is an angle.
Figure 2.4.2: pendulum of mass m under gravity mg.
Note that for small angles x, x ≈ sin x and so we get ẍ + x = 0 (simple harmonic motion).
We shall solve ẍ + sin x = 0 qualitatively since it can't easily be solved analytically.
Let ẋ = y = u, ẏ = −sin x = v.
The critical points are at ẋ = ẏ = 0 ⇒ y = 0, x = nπ, n ∈ Z.
The Jacobian is [0 1; −cos x 0]; at y = 0, x = nπ this is [0 1; −(−1)ⁿ 0].
The characteristic equation is det[−λ 1; −(−1)ⁿ −λ] = 0 ⇒ λ² + (−1)ⁿ = 0.
If n even, λ² + 1 = 0 ⇒ λ = ±i; if n odd, λ² − 1 = 0 ⇒ λ = ±1.
For n odd we get eigenvectors v₁ = (1; 1), v₂ = (1; −1), which is a saddle.
Figure 2.4.3: phase portrait in the (x, ẋ) plane.
2.5. Non-Linear Operators.
ẍ + x = 0 represents simple harmonic motion (SHM) with solution x(t) = A cos(t) + B sin(t), a centre.
Consider ẍ + 2εẋ + x = 0, making the substitution ẋ = y. The system of ODEs is given by
ẋ = y
ẏ = −2εy − x
and the roots of the resulting characteristic polynomial are
λ = −ε ± i√(1 − ε²)
and so for 0 < ε < 1 we get a stable spiral.
Consider now a hard spring, ẍ + x + x³ = 0:
ẋ = y
ẏ = −x − x³, with u = y, v = −x − x³
The critical points are at u = v = 0 ⇒ y = 0, x + x³ = 0 ⇒ x = 0 or 1 + x² = 0 (no real solutions).
[∂u/∂x ∂u/∂y; ∂v/∂x ∂v/∂y] = [0 1; −1 − 3x² 0] = [0 1; −1 0] evaluated at (0, 0)
This is a centre, λ = ±i.
Example 2.5.2 (Soft Spring).
We could change the sign to simulate a soft spring, ie ẍ + x − x³ = 0.
The critical points in this case lie at (0, 0), (±1, 0) and the Jacobian is given by
[0 1; −1 + 3x² 0]
For x close to ±1 the eigenvalues are λ = ±√2, a saddle.
Figure 3.0.1: x(t).
Figure 3.0.1:
x(t)
and ~x(0) = ~x(T ) (for T 6= 0) then ~x (0) = ~x (T ).
y(t)
when this happens we have a periodic orbit, eg a clock, oscillator, cycle in the
economy, biology etc...
Suppposing we have such a cycle what can be said about what happens around it?
It could be the case that both outside and inside the orbit we spiral towards it;
but then again, something else could happen instead. The equilibrium point at the
centre doesnt give us this information.
If ~x =
Figure 3.0.2:
Example 3.0.3 (Cooked up). Consider a system of the contrived form:
ẋ = y − x(x² + y² − 1)   (1)
ẏ = −x − y(x² + y² − 1)   (2)
[∂f/∂x ∂f/∂y; ∂g/∂x ∂g/∂y]|(x,y)=(0,0) = [1 1; −1 1]
The characteristic equation is λ² − 2λ + 2 = 0, which has roots λ = 1 ± i. This is an unstable spiral.
If instead, however, we look at this in polar coordinates,
r² = x² + y², tan θ = y/x
Differentiating implicitly we get:
2rṙ = 2xẋ + 2yẏ, θ̇ = (xẏ − yẋ)/(x² + y²)
If we multiply (1) and (2) by x and y respectively we get:
xẋ = x(y − x(x² + y² − 1)), yẏ = y(−x − y(x² + y² − 1))
xẋ + yẏ = −(x² + y²)(r² − 1) ⇒ rṙ = −r²(r² − 1)
ṙ = −r(r² − 1)
This reveals that there is also an equilibrium point at r = 1 (ie for r = 1 we stay on the circle). Furthermore, for r > 1, ṙ < 0, whilst for r < 1, ṙ > 0: orbits on both sides approach the circle, which is therefore a stable limit cycle enclosing the unstable spiral we found above.
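The attraction of the circle r = 1 can be verified by integrating the system from both sides (a sketch of my own, not part of the notes; times and tolerances are my choices):

```python
import math

# Orbits of (1)-(2) approach the limit cycle r = 1 from both sides.
def f(s):
    x, y = s
    d = x * x + y * y - 1
    return [y - x * d, -x - y * d]

def flow_radius(s, t=20.0, n=20000):
    # RK4 integration, returning the final radius.
    h = t / n
    for _ in range(n):
        k1 = f(s)
        k2 = f([s[i] + h / 2 * k1[i] for i in range(2)])
        k3 = f([s[i] + h / 2 * k2[i] for i in range(2)])
        k4 = f([s[i] + h * k3[i] for i in range(2)])
        s = [s[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(2)]
    return math.hypot(s[0], s[1])

# Start inside (r = 0.2) and outside (r = 2) the limit cycle:
assert abs(flow_radius([0.2, 0.0]) - 1.0) < 1e-3
assert abs(flow_radius([2.0, 0.0]) - 1.0) < 1e-3
```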
Figure 3.0.3: ṙ < 0 for r > 1 and ṙ > 0 for r < 1.
Now if we multiply (1) and (2) by y and x respectively and subtract we get:
xẏ − yẋ = −x² − xy(r² − 1) − y² + xy(r² − 1) = −(x² + y²) = −r²
θ̇ = −r²/r² = −1
The equilibrium point correctly told us that locally we have an unstable spiral, but it failed to illuminate the behaviour as we move further out.
We shall establish a couple of results that allow us to determine when and where there exist no closed orbits. First recall:
Theorem 3.0.4 (Divergence Theorem in the plane). Let C be a closed curve, let A ⊂ R² be the region it encloses, and let u, v be functions with continuous derivatives. Then
∬_A (∂u/∂x + ∂v/∂y) dx dy = ∮_C (u dy − v dx) = ∮_C (u (dy/ds) − v (dx/ds)) ds
Theorem (Bendixson's negative criterion). Let A ⊂ R² be a region of the plane for which
∂u/∂x + ∂v/∂y
does not change sign. Then there is no closed orbit of
ẋ = u(x, y)
ẏ = v(x, y)
contained within A.
Proof. Suppose for contradiction there exists a closed orbit in A. Then this orbit forms a closed curve C in A.
Figure 3.0.4: a closed orbit with x(0) = x(T).
Let A′ ⊆ A be the region enclosed by C (ie ∂A′ = C). Then by the divergence theorem
∬_{A′} (∂u/∂x + ∂v/∂y) dx dy = ∮_C (u dy − v dx) = ∫₀^T (u (dy/dt) − v (dx/dt)) dt = ∫₀^T (ẋẏ − ẏẋ) dt = 0
but ∂u/∂x + ∂v/∂y is either > 0 or < 0 for all (x, y) ∈ A, hence
∬_{A′} (∂u/∂x + ∂v/∂y) dx dy ≠ 0
a contradiction, and so there are no closed orbits. □
tion ẋ = y yields
ẋ = y
ẏ = −f(x)y − g(x)
Now
∂u/∂x + ∂v/∂y = 0 − f(x) = −f(x)
which is always negative (for f(x) > 0), and so general damped systems of this form have no closed orbits.
3.1. The Poincaré–Bendixson Theorem.
Theorem 3.1.1 (Poincaré–Bendixson Theorem). Given a region A ⊂ R² and an orbit C of a system of ODEs which remains in A for all t ≥ 0, then C must approach either a limit cycle or an equilibrium.
Remark 3.1.2.
(1) orbit = trajectory = solution curve
(2) limit cycle = closed orbit that nearby orbits approach
(3) We can use this result with time running backwards, ie orbits come from unstable equilibria or closed orbits.
3.2. Energy (brief).
Consider an oscillator of the form ẍ + f(x) = 0, characterised by
ẋ = y
ẏ = −f(x)
Since there is no damping we expect energy to be conserved.
Consider E = y²/2 + F(x) = ẋ²/2 + F(x), where ẋ²/2 can be considered kinetic energy and F(x) the potential energy, with F′ = f. Then
dE(x, y)/dt = yẏ + f(x)ẋ = (ẍ + f(x))ẋ = 0
so dE/dt = 0 along solution curves.
So E is constant on a solution (x(t), y(t)); we call this a first integral.
Set f(x) = ω²x + βx³; then
F(x) = ω²x²/2 + βx⁴/4 + some constant we need not worry about in this context of constant solution curves. Then
E(x, y) = y²/2 + ω²x²/2 + βx⁴/4
for β > 0. As E(x, y) is constant, the solutions are bounded for all t (closed curves).
We can say that x² + y² ≤ max(2, 2/ω²)(y²/2 + ω²x²/2 + βx⁴/4) = max(2, 2/ω²)E.
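The conservation of E can be confirmed numerically (a sketch of my own, not part of the notes; the parameter values ω = 1, β = 1/2 and the tolerance are my choices):

```python
# E = y^2/2 + w^2 x^2/2 + b x^4/4 should be constant along
# xdot = y, ydot = -w^2 x - b x^3 (here w^2 = 1, b = 0.5).
w2, b = 1.0, 0.5

def f(s):
    x, y = s
    return [y, -w2 * x - b * x**3]

def E(s):
    x, y = s
    return y * y / 2 + w2 * x * x / 2 + b * x**4 / 4

s = [1.0, 0.0]
e0 = E(s)
h = 0.001
for _ in range(20000):        # RK4 up to t = 20
    k1 = f(s)
    k2 = f([s[i] + h / 2 * k1[i] for i in range(2)])
    k3 = f([s[i] + h / 2 * k2[i] for i in range(2)])
    k4 = f([s[i] + h * k3[i] for i in range(2)])
    s = [s[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
         for i in range(2)]
assert abs(E(s) - e0) < 1e-6
```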
4. Lindstedt's Method
Example 4.0.2 (Duffing's Equation). ẍ + ω²x + εx³ = 0, 0 < ε ≪ 1.
We know the solutions are periodic (closed curves in the phase plane).
Figure 4.0.1:
Consider a straightforward expansion x = x₀ + εx₁ + ε²x₂ + O(ε³).
Terms in ε⁰:
ẍ₀ + ω²x₀ = 0 ⇒ x₀ = a cos(ωt)
We may suppose without loss of generality that the initial conditions for this system are x(0) = a and ẋ(0) = 0.
Terms in ε¹:
ẍ₁ + ω²x₁ = −x₀³ = −a³((3cos(ωt))/4 + (cos(3ωt))/4)
Since cos(ωt) appears in the homogeneous solution we would need to introduce a t sin(ωt) term (a secular term), and this is at odds with what we already know about this system: that it is bounded as t → ∞.
The trick to get rid of the secular term is to introduce another series:
τ = ωt where ω = ω₀ + εω₁ + ⋯
Example 4.0.3 (Lindstedt's Method). We try again with Duffing's equation (prefixing the x³ term with a minus sign this time) using the above idea, expecting to see a solution that has periodic orbits for ε small enough.
Without loss of generality, we rescale time (setting ω = 1) such that we try to solve
ẍ + x − εx³ = 0
We define τ = ωt where ω = ω₀ + εω₁ + ⋯, and so x(t) becomes x(τ). Coupling this with the standard expansion we use for x, we have
x(τ) = x₀((ω₀ + εω₁ + ⋯)t) + εx₁((ω₀ + εω₁ + ⋯)t) + ⋯
We want xᵢ(τ) = xᵢ(τ + 2π) (ie period 2π for i = 0, 1, 2, …).
For ε = 0 we have ω₀ = 1 (had we not set ω = 1 earlier then we'd have instead ω₀ = ω).
Terms in ε⁰:
x₀″ + x₀ = 0
As before we will assume initial conditions x(0) = a, ẋ(0) = 0 to pick a point on some curve. We know there exists a solution with these properties by assuming ellipsoidal-shaped orbits; and by varying a we get all the curves.
Since there is no ε in the initial conditions we get
x₀(0) = a, x₁(0) = 0, x₀′(0) = x₁′(0) = 0
Then the solution for x₀ is
x₀ = a cos(τ)
Terms in ε¹:
x₁″ + 2ω₁x₀″ + x₁ − x₀³ = 0
x₁″ + x₁ = (2ω₁a + (3/4)a³)cos(τ) + (1/4)a³cos(3τ)
The whole point of doing this was to eliminate the cos(τ) term, which would have introduced a secular term, and so we set 2ω₁a + (3/4)a³ = 0:
ω₁ = −(3/8)a²
So it now remains to solve
x₁″ + x₁ = (1/4)a³cos(3τ)
We try x₁ = A cos(3τ): −9A cos(3τ) + A cos(3τ) = (1/4)a³cos(3τ) ⇒ A = −a³/32.
The general solution for x₁ before applying boundary conditions is
x₁ = α cos(τ) + β sin(τ) − (a³/32)cos(3τ)
Now x₁(0) = 0 ⇒ α = a³/32 and x₁′(0) = 0 ⇒ β = 0, giving
x₁ = (a³/32)cos(τ) − (a³/32)cos(3τ)
Thus
x(t) = a cos(ωt) + ε(a³/32)(cos(ωt) − cos(3ωt)) + O(ε²)
where ω = 1 − (3/8)εa² + ⋯
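The Lindstedt approximation can be compared directly against a numerical integration (a sketch of my own, not part of the notes; parameter values, time range and tolerance are my choices):

```python
import math

# Compare x(t) ~ a cos(w t) + eps (a^3/32)(cos(w t) - cos(3 w t)),
# w = 1 - (3/8) eps a^2, with RK4 on xdot = y, ydot = -x + eps x^3.
eps, a = 0.05, 1.0
w = 1 - 0.375 * eps * a * a

def f(s):
    x, y = s
    return [y, -x + eps * x**3]

s = [a, 0.0]
h = 0.001
t = 0.0
max_err = 0.0
for _ in range(20000):        # up to t = 20 ~ O(1/eps)
    k1 = f(s)
    k2 = f([s[i] + h / 2 * k1[i] for i in range(2)])
    k3 = f([s[i] + h / 2 * k2[i] for i in range(2)])
    k4 = f([s[i] + h * k3[i] for i in range(2)])
    s = [s[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
         for i in range(2)]
    t += h
    approx = (a * math.cos(w * t)
              + eps * a**3 / 32 * (math.cos(w * t) - math.cos(3 * w * t)))
    max_err = max(max_err, abs(s[0] - approx))
# The error stays small (roughly O(eps^2)) uniformly on this range.
assert max_err < 0.01
```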
Figure 5.0.2:
Consider small oscillations and small energy, ẍ + εẋ + sin x = 0; for small x this becomes
ẍ + εẋ + x = 0 (D.H.M.)
If we did a standard expansion we'd end up with a solution
x(t, ε) = sin t − (ε/2)t sin t + ⋯
and this is not a uniform expansion for the solution. The exact solution for this system is:
x = (1/√(1 − ε²/4)) e^{−εt/2} sin(t√(1 − ε²/4)) where x(0) = 0, ẋ(0) = 1
5.1. The Method.
(1) Introduce a new variable T = εt, and think of t as fast time and T as slow time.
(2) Treat t and T as independent variables for the function x(t, T, ε). Using the chain rule we have
dx/dt = ∂x/∂t + (∂x/∂T)(dT/dt) = ∂x/∂t + ε ∂x/∂T
d²x/dt² = ∂²x/∂t² + 2ε ∂²x/∂t∂T + ε² ∂²x/∂T²
(3) Try an expansion x = x₀(t, T) + εx₁(t, T) + ⋯ where of course T = εt.
(4) Use the extra freedom of x depending on T to kill off any secular terms.
5.2. Application of Multiple Scales to D.H.M.
Example 5.2.1. We consider ẍ + εẋ + x = 0 with boundary conditions x(0) = 0, ẋ(0) = 1. This becomes
∂²x/∂t² + 2ε ∂²x/∂t∂T + ε² ∂²x/∂T² + ε(∂x/∂t + ε ∂x/∂T) + x = 0
Now let x = x₀ + εx₁ + ε²x₂ + ⋯ to get
(∂²/∂t² + 2ε ∂²/∂t∂T + ε² ∂²/∂T²)(x₀ + εx₁ + ε²x₂ + ⋯) + ε(∂/∂t + ε ∂/∂T)(x₀ + εx₁ + ε²x₂ + ⋯) + x₀ + εx₁ + ε²x₂ + ⋯ = 0
Terms in ε⁰:
∂²x₀/∂t² + x₀ = 0
so x₀ = A₀(T)cos t + B₀(T)sin t; from x(0) = 0 we get A₀(0) = 0, and from ẋ(0) = 1 we get B₀(0) = 1.
Terms in ε¹:
∂²x₁/∂t² + 2 ∂²x₀/∂t∂T + ∂x₀/∂t + x₁ = 0
Now ∂x₀/∂t = −A₀(T)sin t + B₀(T)cos t and ∂²x₀/∂t∂T = −(dA₀/dT)sin t + (dB₀/dT)cos t,
and so it remains to solve
∂²x₁/∂t² + x₁ = 2(dA₀/dT)sin t − 2(dB₀/dT)cos t + A₀(T)sin t − B₀(T)cos t
The RHS contains terms in sin t, cos t which will induce secular terms. We therefore choose A₀(T) and B₀(T) such that they go away. In other words:
dA₀/dT + (1/2)A₀ = 0 ⇒ A₀ = A₀(0)e^{−T/2} = 0
dB₀/dT + (1/2)B₀ = 0 ⇒ B₀ = B₀(0)e^{−T/2} = e^{−T/2}
and so
x₀ = e^{−T/2} sin t = e^{−εt/2} sin t
More generally one can introduce a hierarchy of time scales T₀ = t, T₁ = εt, T₂ = ε²t, … with
d/dt = ∂/∂T₀ + ε ∂/∂T₁ + ε² ∂/∂T₂ + ⋯
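The leading-order multiple-scales solution can be compared with the exact damped solution quoted earlier (a sketch of my own, not part of the notes; time range and tolerance are my choices):

```python
import math

# Compare x0 = e^{-eps t/2} sin t with the exact solution
# x = e^{-eps t/2} sin(t sqrt(1 - eps^2/4)) / sqrt(1 - eps^2/4).
eps = 0.1
wd = math.sqrt(1 - eps**2 / 4)
max_err = 0.0
for i in range(3001):
    t = 0.01 * i                       # t in [0, 30]
    exact = math.exp(-eps * t / 2) * math.sin(wd * t) / wd
    approx = math.exp(-eps * t / 2) * math.sin(t)
    max_err = max(max_err, abs(exact - approx))
# The frequency error is O(eps^2), so the two agree closely here.
assert max_err < 0.05
```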
Consider now van der Pol's equation, ẍ + ε(x² − 1)ẋ + x = 0. With the multiple-scales substitutions this becomes
(∂²/∂t² + 2ε ∂²/∂t∂T + ⋯)(x₀ + εx₁ + ⋯) + ε((x₀ + εx₁ + ⋯)² − 1)(∂/∂t + ε ∂/∂T)(x₀ + εx₁ + ⋯) + x₀ + εx₁ + ⋯ = 0
Terms in ε⁰:
∂²x₀/∂t² + x₀ = 0 ⇒ x₀ = A(T)cos t + B(T)sin t
We can write this in complex form
x₀ = A(T)e^{it} + Ā(T)e^{−it} (noting that z + z̄ = 2Re(z))
We can justify this step since in general, if z = a + ib, then
(a + ib)e^{it} + (a − ib)e^{−it} = a(e^{it} + e^{−it}) + ib(e^{it} − e^{−it}) = 2a cos t − 2b sin t
Terms in ε¹:
∂²x₁/∂t² + x₁ + 2 ∂²x₀/∂t∂T + (x₀² − 1) ∂x₀/∂t = 0
∂²x₁/∂t² + x₁ + 2(A′ie^{it} − Ā′ie^{−it}) + [(Ae^{it} + Āe^{−it})² − 1](iAe^{it} − iĀe^{−it}) = 0
where A′ = A′(T) = ∂A/∂T.
If we wish to find A(T) as opposed to x₁ we need only consider secular terms. So we need to equate coefficients of e^{it} (and e^{−it}) to zero and kill them off.
Considering terms in e^{it} we have:
2iA′ − iA − iA²Ā + 2iA²Ā = 0
2A′ − A + A²Ā = 0
Similarly, if we consider e^{−it} we get the conjugate of this expression; ie whatever kills off the e^{it} terms also kills off the e^{−it} terms.
We should now use the polar form of A (as |A²Ā| = |A|³): write A = |A|e^{iθ} with θ = arg(A) and |A| = a/2; then
x₀ = a cos(t + θ), and we now wish to find a.
Now
dA/dT = (1/2)(da/dT)e^{iθ} + (1/2)a i e^{iθ}(dθ/dT)
and so it remains to solve
(da/dT)e^{iθ} + ia(dθ/dT)e^{iθ} − (1/2)a e^{iθ} + (a²/4)(a/2)e^{iθ} = 0
Dividing by e^{iθ} and separating real and imaginary parts: the imaginary part gives a dθ/dT = 0, so θ is constant, and the real part gives
da/dT = a/2 − a³/8 = a(4 − a²)/8
One could integrate this directly using ∫ (1/sin φ) dφ; recall
1/sin φ = (1/sin φ)(csc φ + cot φ)/(csc φ + cot φ) = (csc²φ + csc φ cot φ)/(csc φ + cot φ) = −(d/dφ)(csc φ + cot φ)/(csc φ + cot φ)
∫ (1/sin φ) dφ = −log(csc φ + cot φ)
This would still require a bit of work, and so we shall instead treat a² as a variable and use partial fractions.
da²/dT = 2a (da/dT) = a² − a⁴/4
∫ 1/(a²(4 − a²)) da² = (1/4)∫ (1/a² + 1/(4 − a²)) da² = (1/4)(log|a²| − log|4 − a²|) = (1/4)(T + T₀)
Hence
a²/(4 − a²) = ke^{T} ⇒ a²(1 + ke^{T}) = 4ke^{T} ⇒ a² = 4k/(e^{−T} + k)
a(T) = 2/√(1 + k⁻¹e^{−T})
x₀(t, T) = 2 cos(t + θ)/√(1 + k⁻¹e^{−T})
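Since a(T) → 2 as T → ∞, the multiple-scales analysis predicts oscillations settling onto a limit cycle of amplitude ≈ 2 for small ε. A numerical sketch (my own, not part of the notes; ε, times and tolerance are my choices):

```python
# Van der Pol: xdot = y, ydot = -x - eps (x^2 - 1) y; amplitude -> 2.
eps = 0.1

def f(s):
    x, y = s
    return [y, -x - eps * (x * x - 1) * y]

s = [0.5, 0.0]
h = 0.01
amp = 0.0
for step in range(10000):             # RK4 up to t = 100
    k1 = f(s)
    k2 = f([s[i] + h / 2 * k1[i] for i in range(2)])
    k3 = f([s[i] + h / 2 * k2[i] for i in range(2)])
    k4 = f([s[i] + h * k3[i] for i in range(2)])
    s = [s[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
         for i in range(2)]
    if step >= 8000:                  # record amplitude for t in [80, 100]
        amp = max(amp, abs(s[0]))
assert abs(amp - 2.0) < 0.1
```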
Consider now Duffing's equation ẍ + ω²x + εx³ = 0 by multiple scales. With T = εt,
ẋ = dx/dt = ∂x/∂t + ε ∂x/∂T, ẍ = d²x/dt² = ∂²x/∂t² + 2ε ∂²x/∂t∂T + ε² ∂²x/∂T²
and so putting this into the ODE above gives
(∂²/∂t² + 2ε ∂²/∂t∂T + ε² ∂²/∂T²)(x₀ + εx₁ + ⋯) + ω²(x₀ + εx₁ + ⋯) + ε(x₀ + εx₁ + ⋯)³ = 0
Terms in ε⁰:
∂²x₀/∂t² + ω²x₀ = 0 ⇒ x₀ = Ae^{iωt} + Āe^{−iωt}
Terms in ε¹:
∂²x₁/∂t² + 2 ∂²x₀/∂t∂T + ω²x₁ + x₀³ = 0
Now
2 ∂²x₀/∂t∂T = 2(A′iωe^{iωt} − iωĀ′e^{−iωt})
whilst
x₀³ = A³e^{3iωt} + 3A²Āe^{iωt} + 3AĀ²e^{−iωt} + Ā³e^{−3iωt}
and so
∂²x₁/∂t² + ω²x₁ + 2(A′iωe^{iωt} − iωĀ′e^{−iωt}) + A³e^{3iωt} + 3A²Āe^{iωt} + 3AĀ²e^{−iωt} + Ā³e^{−3iωt} = 0
We now wish to kill off the secular terms e^{iωt} and e^{−iωt}.
Terms in e^{iωt}:
2iωA′ + 3A²Ā = 0
Again set A = (1/2)ae^{iθ} such that A′ = (1/2)a′e^{iθ} + (1/2)a i θ′e^{iθ}. Then
2iω((1/2)a′e^{iθ} + (1/2)a i θ′e^{iθ}) + (3/4)a²e^{2iθ}(1/2)a e^{−iθ} = 0
iωa′ − ωaθ′ + (3/8)a³ = 0
The imaginary term iωa′ = 0 ⇒ a = a₀ (a constant).
This leaves us with a simple ODE to solve:
θ′ = 3a₀²/(8ω)
with solution
θ(T) = (3a₀²/(8ω))T = (3a₀²/(8ω))εt (since T = εt)
Thus the solution we get, working as far as the first term x₀, is
x = x₀ + O(ε) = a₀ cos((ω + 3εa₀²/(8ω))t) + O(ε)
which, as far as the first term goes, is identical (except for the sign of (3/8)εa₀², since we started with +εx³ in this example, and taking ω = 1) to that which we got for Lindstedt's method.
6. The WKB Method
In this section we focus on eigenvalue problems for 2nd order ODEs, specifically
d²y/dx² + λ²r(x)y = 0, y(0) = y(1) = 0   (6.1)
ie y is a function of x.
Figure 6.1.1: a stretched string pinned at both ends.
For a string of density ρ(x) and tension T satisfying the wave equation ρ(x) ∂²y/∂t² = T ∂²y/∂x², with y(0, t) = y(1, t) = 0 (the string is pinned down at both ends), standing wave solutions have the form
y(x, t) = y(x)cos(ωt)
and so we reduce the wave equation to
d²y/dx² + (ρ(x)ω²/T)y = 0
This is in the form of (6.1) where λ = ω, r = ρ(x)/T.
For constant ρ this gives
y(x) = A cos(kx) + B sin(kx) (where k² = ω²/c², c² = T/ρ)   (6.2)
Substituting y = e^{g(x)} into (6.1) gives
g″ + (g′)² + λ²r = 0
and writing g = λh,
(1/λ)h″ + (h′)² + r = 0
Noting that the h″ term carries a factor 1/λ (since we are considering λ large), we look for an expansion of the form
h(x) = h₀(x) + (1/λ)h₁(x) + (1/λ²)h₂(x) + ⋯
Equivalently, the expression for y becomes
y = e^{g₀ + g₁ + g₂ + ⋯} with g₀ = λh₀, g₁ = h₁, g₂ = (1/λ)h₂ = O(1/λ), and so on.
Example. Consider y″ + λ²xy = 0 and try y = e^{g₀ + g₁ + g₂ + ⋯}:
(g₀′ + g₁′ + ⋯)² + (g₀″ + g₁″ + ⋯) + λ²x = 0
Terms in λ²:
(g₀′)² + λ²x = 0 ⇒ g₀′ = ±iλx^{1/2}
g₀ = ±(2/3)iλx^{3/2} + A
Terms in λ:
g₀″ + 2g₀′g₁′ = 0 ⇒ (1/2)iλx^{−1/2} + 2iλx^{1/2}g₁′ = 0
1/(4x) + g₁′ = 0
g₁ = −(1/4)log(x) + B
We didn't need the absolute value signs for log in this case since 0 < x ≤ 1. Also, if we were to consider the other solution for g₀ then, since g₀″ inherits the same sign, we still get the same solution for g₁.
The general solution is
y(x) = x^{−1/4}[C cos((2/3)λx^{3/2}) + D sin((2/3)λx^{3/2})]
In general, if y″ + λ²r(x)y = 0 then
y(x, λ) = (1/r(x)^{1/4})[A cos(λ∫ₐˣ √(r(t)) dt) + B sin(λ∫ₐˣ √(r(t)) dt)] + O(1/λ) as λ → ∞
Returning to the particular problem, we still haven't found λ or used the boundary conditions. To find approximate eigenvalues for λ large:
y(0) = 0 ⇒ C = 0
y(1) = 0 ⇒ sin((2/3)λ) = 0 ⇒ (2/3)λ = nπ
λ ≈ (3/2)nπ (for large n ∈ Z)
Example 6.2.2.
This time consider y″ + λ²(1 + 3sin²x)²y = 0, y(0) = y(1) = 0, again using y = e^{g₀ + g₁ + ⋯} such that
(g₀′ + g₁′ + ⋯)² + (g₀″ + g₁″ + ⋯) + λ²(1 + 3sin²x)² = 0
Terms in λ²:
(g₀′)² + λ²(1 + 3sin²x)² = 0 ⇒ g₀′ = ±iλ(1 + 3sin²x)
We now use the trig identity sin²x = (1/2)(1 − cos 2x) to get
g₀′ = ±iλ(5/2 − (3/2)cos 2x)
g₀ = ±iλ((5/2)x − (3/4)sin 2x)
Terms in λ (taking the positive g₀):
g₀″ + 2g₀′g₁′ = 0 ⇒ g₁′ = −(3 sin 2x)/(2(1 + 3sin²x))
g₁ = −(1/2)log(1 + 3sin²x)
Again, in this example we'd gain no new information if we used the other form of g₀. Putting it all together we have
y = (1 + 3sin²x)^{−1/2}[C cos(λ((5/2)x − (3/4)sin 2x)) + D sin(λ((5/2)x − (3/4)sin 2x))]
Now we use the boundary conditions:
y(0) = 0 ⇒ C = 0, y(1) = 0 ⇒ λ(5/2 − (3/4)sin 2) = nπ
λₙ ≈ nπ/(5/2 − (3/4)sin 2)
For example, plugging in some numbers, say n = 8, our approximation yields λ₈ = 13.824 whilst the exact value is 13.820.
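The quoted value for n = 8 follows directly from the eigenvalue formula (a quick arithmetic check, my own sketch):

```python
import math

# WKB eigenvalue formula from Example 6.2.2:
# lambda_n ~ n*pi / (5/2 - (3/4) sin 2).
def wkb_eigenvalue(n):
    return n * math.pi / (2.5 - 0.75 * math.sin(2.0))

lam8 = wkb_eigenvalue(8)
assert abs(lam8 - 13.824) < 0.001   # matches the value quoted above
```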