
Differential Equations
M.T.Nair
Department of Mathematics, IIT Madras

CONTENTS

PART I: Ordinary Differential Equations

1. First order ODE
1.1 Introduction
1.2 Direction Field and Isoclines
1.3 Initial Value Problem
1.4 Linear ODE
1.5 Equations with Variables Separated
1.6 Homogeneous equations
1.7 Exact Equations
1.8 Equations reducible to homogeneous or variable separable or linear or exact form

2. Second and higher order linear ODE
2.1 Second order linear homogeneous ODE
2.2 Second order linear homogeneous ODE with constant coefficients
2.3 Second order linear non-homogeneous ODE

3. System of first order linear homogeneous ODE

4. Power series method
4.1 The method and some examples
4.2 Legendre's equation and Legendre polynomials
4.3 Power series solution around singular points
4.4 Orthogonality of functions

5. Sturm-Liouville problem (SLP)

6. References

Lectures for the course MA2020, July-November 2012.

1 First order ODE

1.1 Introduction

An ordinary differential equation (ODE) is an equation involving an unknown function and its
derivatives with respect to an independent variable x:

F(x, y, y^(1), . . . , y^(k)) = 0.

Here, y is the unknown function, x is the independent variable and y^(j) represents the j-th derivative
of y. We shall also denote

y' = y^(1), y'' = y^(2), y''' = y^(3).

Thus, a first order ODE is of the form

F(x, y, y') = 0. (∗)

Sometimes the above equation can be put in the form:

y' = f(x, y). (1)

By a solution of (∗) we mean a function y = φ(x) defined on an interval I := (a, b) which is
differentiable and satisfies (∗), i.e.,

F(x, φ(x), φ'(x)) = 0, x ∈ I.

Example 1.1.
y' = x.

Note that, for every constant C, y = x^2/2 + C satisfies the DE for every x ∈ R.

The above simple example shows that a DE can have more than one solution. In fact, we obtain a
family of parabolas as solution curves. But, if we require the solution curve to pass through a certain
specified point, then we may get a unique solution. In the above example, if we demand that

y(x0) = y0

for some given x0, y0, then we must have

y0 = x0^2/2 + C,

so that the constant C must be

C = y0 − x0^2/2.

Thus, the solution, in this case, must be

y = x^2/2 + y0 − x0^2/2.
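As a quick sanity check (ours, not part of the original notes), the following Python sketch verifies numerically that y = x^2/2 + y0 − x0^2/2 passes through (x0, y0) and has slope x everywhere; the helper name ivp_solution is a hypothetical one chosen here.

```python
def ivp_solution(x, x0, y0):
    # Solution of y' = x through (x0, y0): y = x**2/2 + y0 - x0**2/2.
    return x**2 / 2 + y0 - x0**2 / 2

# The solution passes through the prescribed point ...
x0, y0 = 1.0, 3.0
assert ivp_solution(x0, x0, y0) == y0

# ... and its derivative (estimated by a central difference) equals x.
h = 1e-6
for x in (-2.0, 0.5, 4.0):
    dy = (ivp_solution(x + h, x0, y0) - ivp_solution(x - h, x0, y0)) / (2 * h)
    assert abs(dy - x) < 1e-6
```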

1.2 Direction Field and Isoclines

Suppose y = φ(x) is a solution of DE (1). Then this curve is also called an integral curve of the
DE. At each point on this curve, the tangent must have the slope f(x, y). Thus, the DE prescribes a
direction at each point on the integral curve y = φ(x). Such directions can be represented by small
line segments with arrows pointing in the direction. The set of all such directed line segments is called
the direction field of the DE.

The set of all points in the plane where f(x, y) is a constant is called an isocline. Thus, the family
of isoclines helps us locate integral curves geometrically.

Isoclines for the DE y' = x + y are the straight lines x + y = C.

1.3 Initial Value Problem

An equation of the form


y' = f(x, y) (1)

together with a condition of the form

y(x0) = y0 (2)

is called an initial value problem. The condition (2) is called an initial condition.

THEOREM 1.2. Suppose f is defined in an open rectangle R = I × J, where I and J are open
intervals, say I = (a, b), J = (c, d):

R := {(x, y) : a < x < b, c < y < d}.

If f is continuous and has continuous partial derivative ∂f/∂y in R, then for every (x0, y0) ∈ R, there
exists a unique function y = φ(x) defined in an interval (x0 − h, x0 + h) ⊆ (a, b) which satisfies (1)-(2).

Remark 1.3. The conditions prescribed are sufficient conditions that guarantee the existence and
uniqueness of a solution for the initial value problem. They are not necessary conditions. A unique
solution for the initial value problem can exist without the prescribed conditions on f as in the above
theorem.


A solution of (1) of the form

y = φ(x, C),

where C is an arbitrary constant varying in some subset of R, is called a general solution of (1).

A solution y for a particular value of C is called a particular solution of (1).

If general solutions of (1) are given implicitly in the form

u(x, y, C) = 0

with an arbitrary constant C, then the above equation is called the complete integral of (1).

A complete integral for a particular value of C is called a particular integral of (1).

Remark 1.4. Under the assumptions of Theorem 1.2, if x0 ∈ I, then existence of a solution y for (1)
is guaranteed in some neighbourhood I0 ⊆ I of x0, and it satisfies the integral equation

y(x) = y0 + ∫_{x0}^{x} f(t, y(t)) dt.

A natural question would be:

Is the family of all solutions of (1) defined on I0 a one-parameter family, so that any two
solutions in that family differ only by a constant?

It is known that for a general nonlinear equation (1), the answer is not in the affirmative. However, for
linear equations the answer is in the affirmative.

1.4 Linear ODE

If f depends on y in a linear fashion, then the equation (1) is called a linear DE. A general form of
the linear first order DE is:

y' + p(x)y = q(x). (3)

Here is a procedure to arrive at a solution of (3):

Assume first that there is a solution for (3) and that, after multiplying both sides of (3) by a
differentiable function μ(x), the LHS is of the form (μ(x)y)'. Then (3) will be converted into:

(μ(x)y)' = μ(x)q(x)

so that

μ(x)y = ∫ μ(x)q(x) dx + C.

Thus, μ must be chosen in such a manner that

μ'y + μy' = μ(y' + py).

Therefore, we must have

μ'y = μpy, i.e., μ'/μ = p, i.e., dμ/μ = p dx,

i.e.,

μ(x) := e^{∫ p(x) dx}.

Thus, y takes the form

y = (1/μ(x)) [ ∫ μ(x)q(x) dx + C ], μ(x) := e^{∫ p(x) dx}. (4)

It can be easily seen that the function y defined by (4) satisfies the DE (3). Thus existence of a
solution for (3) is proved for continuous functions p and q.

Suppose there are two functions φ and ψ which satisfy (3). Then η(x) := φ(x) − ψ(x) would
satisfy

η'(x) + p(x)η(x) = 0.

Hence, using the arguments in the previous paragraph, we obtain

η(x) = C μ(x)^{−1}

for some constant C.

Now, if φ(x0) = y0 = ψ(x0), then we must have η(x0) = 0 so that C μ(x0)^{−1} = 0. Hence, we obtain
C = 0 and hence, φ = ψ. Thus, we have proved the existence and uniqueness for the linear DE only
by assuming that p and q are continuous.

Example 1.5.
y' = x + y.

Writing the equation as y' − y = x, we have p(x) = −1, so μ = e^{−∫ dx} = e^{−x} and hence,

y = e^{x} [ ∫ e^{−x} x dx + C ] = e^{x} [ −x e^{−x} − e^{−x} + C ].

Thus,

y = −x − 1 + C e^{x}.

y(0) = 0 ⇒ 0 = −1 + C ⇒ C = 1.

Hence,

y = −x − 1 + e^{x}.

Note that

y' = −1 + e^{x} = −1 + (x + y + 1) = x + y.
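The worked example can be checked numerically; this short Python sketch (ours, not from the notes) confirms both the differential equation and the initial condition for the solution found above.

```python
import math

def y(x):
    # Solution of y' = x + y with y(0) = 0 (Example 1.5): y = e**x - x - 1.
    return math.exp(x) - x - 1

assert y(0.0) == 0.0                         # initial condition
h = 1e-6
for x in (-1.0, 0.0, 2.0):
    dy = (y(x + h) - y(x - h)) / (2 * h)     # central-difference derivative
    assert abs(dy - (x + y(x))) < 1e-5       # y' = x + y
```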

1.5 Equations with Variables Separated

If f(x, y) in (1) is of the form

f(x, y) = f1(x) f2(y)

for some functions f1, f2, then we say that (1) is an equation with separated variables. In this
case (1) takes the form:

y' = f1(x) f2(y);

equivalently,

y' / f2(y) = f1(x),

assuming that f2(y) is not zero at any point in the interval of interest. Hence, in this case, a general
solution is given implicitly by

∫ dy/f2(y) = ∫ f1(x) dx + C.

Example 1.6.
y' = xy.

Equivalently,

dy/y = x dx.

Hence,

log |y| = x^2/2 + C,

i.e.,

y = C1 e^{x^2/2}.

Note that

y = C1 e^{x^2/2} ⇒ y' = C1 e^{x^2/2} · x = xy.

An equation with separated variables can also be written as

M(x) dx + N(y) dy = 0.

In this case, a solution is implicitly defined by

∫ M(x) dx + ∫ N(y) dy = C. (5)

An equation of the form

M1(x) N1(y) dx + M2(x) N2(y) dy = 0 (6)

can be brought to the form (5): after dividing (6) by N1(y) M2(x) we obtain

(M1(x)/M2(x)) dx + (N2(y)/N1(y)) dy = 0.

1.6 Homogeneous equations

A function f : R R is said to be homogeneous of degree n if

f (x, y) = n f (x, y) R

for some n N.

The differential equation (1) is called a homogeneous equation if f is homogeneous of degree


0, i.e., if
f (x, y) = f (x, y) R.

Suppose (1) is a homogeneous equation. Then we have


x y y
y 0 = f (x, y) = f ( , ) = f (1, u), u := .
x x x
Now,
y du
u= = ux = y = u + x = y 0 = f (1, u).
x dx
Thus,
du dx
=
f (1, u) u x
and hence, u and therefore, y is implicitly defined by
Z Z
du dx
= + C.
f (1, u) u x
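To illustrate the reduction, consider the simple example y' = (x + y)/x (an illustrative choice of ours, not one of the worked examples). Here f(1, u) = 1 + u, so du/(f(1, u) − u) = du = dx/x, giving u = log x + C and hence y = x log x + Cx for x > 0. A quick numeric check in Python:

```python
import math

C = 0.7  # arbitrary constant

def y(x):
    # y = x*log(x) + C*x solves the homogeneous equation y' = (x + y)/x, x > 0.
    return x * math.log(x) + C * x

h = 1e-6
for x in (0.5, 1.0, 3.0):
    dy = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(dy - (x + y(x)) / x) < 1e-5
```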

1.7 Exact Equations

Suppose (1) is of the form

M(x, y) dx + N(x, y) dy = 0, (7)

where M and N are such that there exists u(x, y) with continuous first partial derivatives satisfying

M(x, y) = ∂u/∂x, N(x, y) = ∂u/∂y. (8)

Then (7) takes the form

(∂u/∂x) dx + (∂u/∂y) dy = 0;

equivalently,

du = 0.

Then the general solution is implicitly defined by

u(x, y) = C.

Equation (7) with M and N satisfying (8) is called an exact differential equation.

Note that, in the above, if u(x, y) has continuous second partial derivatives ∂²u/∂x∂y
and ∂²u/∂y∂x, then

∂M/∂y = ∂N/∂x.

In fact, this is also a sufficient condition for (7) to be an exact differential equation.

THEOREM 1.7. Suppose M and N are continuous and have continuous first partial derivatives
∂M/∂y and ∂N/∂x in I × J, and

∂M/∂y = ∂N/∂x.

Then the equation (7) is exact, and in that case the complete integral of (7) is given by

∫_{x0}^{x} M(x, y) dx + ∫_{y0}^{y} N(x0, y) dy = C.

Proof. Note that for any differentiable function g(y),

u(x, y) := ∫_{x0}^{x} M(x, y) dx + g(y)

satisfies ∂u/∂x = M(x, y). Then

∂u/∂y = ∫_{x0}^{x} (∂M/∂y) dx + g'(y) = ∫_{x0}^{x} (∂N/∂x) dx + g'(y) = N(x, y) − N(x0, y) + g'(y).

Thus,

∂u/∂y = N ⟺ g'(y) = N(x0, y) ⟺ g(y) = ∫_{y0}^{y} N(x0, y) dy.

Thus, taking

g(y) = ∫_{y0}^{y} N(x0, y) dy and u(x, y) := ∫_{x0}^{x} M(x, y) dx + g(y),

we obtain (8), and the complete integral of (7) is given by

∫_{x0}^{x} M(x, y) dx + ∫_{y0}^{y} N(x0, y) dy = C.

Example 1.8.
y cos xy dx + x cos xy dy = 0.

Taking u(x, y) = sin xy, we have ∂u/∂x = y cos xy and ∂u/∂y = x cos xy.

Hence, sin xy = C. Also,

y cos xy dx + x cos xy dy = 0 ⇒ y' = −y/x ⇒ dx/x + dy/y = 0.

Hence, log |xy| = C.

Example 1.9.

(2x/y^3) dx + ((y^2 − 3x^2)/y^4) dy = 0.

In this case

∂M/∂y = −6x/y^4 = ∂N/∂x.

Hence, the given DE is exact, and u is given by

u(x, y) = ∫ M dx + ∫ N(0, y) dy = x^2/y^3 + ∫ (1/y^2) dy = x^2/y^3 − 1/y,

so that the complete integral is given by u(x, y) = C.
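The exactness criterion and the potential u found in Example 1.9 can be verified numerically. The following Python sketch (our own check, using central differences for the partial derivatives) confirms ∂M/∂y = ∂N/∂x and that u reproduces M and N:

```python
def M(x, y): return 2 * x / y**3
def N(x, y): return (y**2 - 3 * x**2) / y**4
def u(x, y): return x**2 / y**3 - 1 / y   # candidate potential from Example 1.9

h = 1e-5
for (x, y) in [(1.0, 2.0), (-0.5, 1.5)]:
    My = (M(x, y + h) - M(x, y - h)) / (2 * h)
    Nx = (N(x + h, y) - N(x - h, y)) / (2 * h)
    assert abs(My - Nx) < 1e-6                 # exactness criterion
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
    uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    assert abs(ux - M(x, y)) < 1e-6            # u_x = M
    assert abs(uy - N(x, y)) < 1e-6            # u_y = N
```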

1.8 Equations reducible to homogeneous or variable separable or linear or exact form

1.8.1 Reducible to homogeneous or variable separable form

Note that the function

f(x, y) = (ax + by + c) / (a1x + b1y + c1)

is not homogeneous if either c ≠ 0 or c1 ≠ 0, and in such a case,

dy/dx = f(x, y) (1)

is not homogeneous. We shall convert this equation into a homogeneous equation in terms of new
variables. Consider the change of variables:

X = x − h, Y = y − k.

Then

ax + by + c = a(X + h) + b(Y + k) + c = aX + bY + (ah + bk + c),

a1x + b1y + c1 = a1(X + h) + b1(Y + k) + c1 = a1X + b1Y + (a1h + b1k + c1).

There are two cases:

Case (i): det [ a b ; a1 b1 ] ≠ 0.

In this case there exists a unique pair (h, k) such that

ah + bk + c = 0, (2)

a1h + b1k + c1 = 0 (3)

are satisfied. Hence, observing that

dY/dX = (dY/dy)(dy/dx)(dx/dX) = dy/dx,
the equation (1) takes the form

dY/dX = (aX + bY) / (a1X + b1Y).

This is a homogeneous equation. If Y = φ(X) is a solution of this homogeneous equation, then a
solution of (1) is given by

y = k + φ(x − h).

Case (ii): det [ a b ; a1 b1 ] = 0. In this case either

a1 = λa, b1 = λb for some λ ∈ R

or

a = λa1, b = λb1 for some λ ∈ R.

Assume that a1 = λa and b1 = λb for some λ ∈ R. Then (1) takes the form

dy/dx = (ax + by + c) / (a1x + b1y + c1) = (ax + by + c) / (λ(ax + by) + c1).

Taking z = ax + by, we obtain

dz/dx = a + b dy/dx = a + b (z + c) / (λz + c1).

This is an equation in variable separable form.

Example 1.10.

dy/dx = (2x + y − 1) / (4x + 2y + 5).

Taking z = 2x + y,

dz/dx = 2 + dy/dx = 2 + (z − 1)/(2z + 5) = (5z + 9)/(2z + 5),

i.e.,

((2z + 5)/(5z + 9)) dz = dx.

Note that

(2z + 5)/(5z + 9) = (1/5)(10z + 25)/(5z + 9) = (1/5)(2(5z + 9) + 7)/(5z + 9) = 2/5 + (7/5) · 1/(5z + 9).

Hence,

∫ ((2z + 5)/(5z + 9)) dz = x + C ⇒ (2z)/5 + (7/25) log |5z + 9| = x + C

⇒ (2(2x + y))/5 + (7/25) log |5(2x + y) + 9| = x + C.

Thus, the solution y is given implicitly by

(4x + 2y)/5 + (7/25) log |10x + 5y + 9| = x + C.
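The implicit solution of Example 1.10 can be tested numerically: integrating the ODE with a standard RK4 scheme (our own sketch, not in the notes), the quantity G(x, y) := (4x + 2y)/5 + (7/25) log|10x + 5y + 9| − x should stay constant along the computed solution.

```python
import math

def f(x, y):
    return (2 * x + y - 1) / (4 * x + 2 * y + 5)

def G(x, y):  # implicit solution of Example 1.10, up to a constant
    return (4 * x + 2 * y) / 5 + (7 / 25) * math.log(abs(10 * x + 5 * y + 9)) - x

# Integrate the ODE with classical RK4 from (0, 0) up to x = 1.
x, y, h = 0.0, 0.0, 1e-3
c0 = G(x, y)
for _ in range(1000):
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    x += h
assert abs(G(x, y) - c0) < 1e-8   # G stays constant along the solution
```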

1.8.2 Reducible to linear form

Bernoulli's equation:

y' + p(x)y = q(x)y^n.

Write it as

y^{−n} y' + p(x) y^{−n+1} = q(x).

Taking z = y^{−n+1},

dz/dx = (−n + 1) y^{−n} dy/dx = (−n + 1)[q(x) − p(x)z],

i.e.,

dz/dx + (−n + 1) p(x) z = (−n + 1) q(x).

Hence,

z = (1/μ(x)) [ ∫ μ(x)(−n + 1) q(x) dx + C ], μ(x) = e^{(−n+1) ∫ p(x) dx}.
(x)
Example 1.11.

dy/dx + xy = x^3 y^3.

Here, n = 3 so that −n + 1 = −2 and

μ(x) = e^{(−n+1) ∫ p(x) dx} = e^{−2 ∫ x dx} = e^{−x^2}.

Then

z = (1/μ(x)) [ ∫ μ(x)(−n + 1) q(x) dx + C ] = e^{x^2} [ ∫ e^{−x^2} (−2x^3) dx + C ]

= e^{x^2} [ (x^2 + 1) e^{−x^2} + C ] = x^2 + 1 + C e^{x^2},

since ∫ −2x^3 e^{−x^2} dx = (x^2 + 1) e^{−x^2}. Since z = y^{−2}, this gives:

(x^2 + 1 + C e^{x^2}) y^2 = 1.
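The complete integral of Example 1.11 can be verified numerically; this Python sketch (our own check, taking the positive root of the implicit relation for an arbitrary C) confirms that y satisfies the Bernoulli equation:

```python
import math

C = 0.5  # arbitrary constant in the complete integral

def y(x):
    # From (x**2 + 1 + C*e**(x**2)) * y**2 = 1, positive root.
    return 1.0 / math.sqrt(x**2 + 1 + C * math.exp(x**2))

h = 1e-6
for x in (-1.0, 0.3, 1.2):
    dy = (y(x + h) - y(x - h)) / (2 * h)
    # y' + x*y = x**3 * y**3
    assert abs(dy + x * y(x) - x**3 * y(x)**3) < 1e-6
```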

1.8.3 Reducible to exact equations

Suppose M(x, y) and N(x, y) are functions with continuous partial derivatives ∂M/∂x, ∂M/∂y, ∂N/∂x, ∂N/∂y.
Consider the differential equation

M(x, y) dx + N(x, y) dy = 0.

Recall that it is an exact equation if and only if

∂M/∂y = ∂N/∂x.

Suppose the equation is not exact. Then we look for a function μ := μ(x) such that

μ(x)[M(x, y) dx + N(x, y) dy] = 0 (∗)

is exact. So, the requirement on μ should be

∂(μM)/∂y = ∂(μN)/∂x, i.e., μ ∂M/∂y = μ ∂N/∂x + μ'N,

i.e.,

μ'/μ = (1/N)(∂M/∂y − ∂N/∂x).

Thus:

If g := (1/N)(∂M/∂y − ∂N/∂x) is a function of x alone, then the above differential equation
for μ can be solved, and with the resulting μ := e^{∫ g dx} the equation (∗) is an exact equation.

Similarly, looking for a function ν = ν(y) such that

ν(y)[M(x, y) dx + N(x, y) dy] = 0 (∗∗)

becomes exact, we arrive at the equation

ν'(y)/ν(y) = (1/M)(∂N/∂x − ∂M/∂y).

Hence, we can make the following statement:

If h := (1/M)(∂N/∂x − ∂M/∂y) is a function of y alone, then the above differential equation for
ν can be solved, and with the resulting ν := e^{∫ h dy} the equation (∗∗) is an exact equation.

Definition 1.12. Each of the functions μ(x) and ν(y) in the above discussion, if it exists, is called an
integrating factor.

Example 1.13.

(y + xy^2) dx − x dy = 0.

Note that ∂M/∂y = 1 + 2xy, ∂N/∂x = −1, and

(1/N)(∂M/∂y − ∂N/∂x) = ((1 + 2xy) + 1)/(−x) = −2(1 + xy)/x,

(1/M)(∂N/∂x − ∂M/∂y) = (−1 − (1 + 2xy))/(y + xy^2) = −2(1 + xy)/(y(1 + xy)) = −2/y.

Thus,

ν := e^{∫ (−2/y) dy} = 1/y^2

is an integrating factor, i.e.,

(1/y^2)[(y + xy^2) dx − x dy] = 0, i.e., (1/y + x) dx − (x/y^2) dy = 0,

is an exact equation. Then

u = ∫ M dx + ∫ N(0, y) dy = ∫ (1/y + x) dx = x/y + x^2/2.

Thus the complete integral is given by x/y + x^2/2 = C.
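The effect of the integrating factor in Example 1.13 can be seen numerically: before multiplying by 1/y^2 the exactness criterion fails, and afterwards it holds. A small Python sketch (our own check, using central differences):

```python
def M(x, y): return y + x * y**2
def N(x, y): return -x
def nu(y):   return 1.0 / y**2   # integrating factor found above

h = 1e-5
for (x, y) in [(1.0, 2.0), (0.5, -1.5)]:
    # Original equation is not exact ...
    My = (M(x, y + h) - M(x, y - h)) / (2 * h)
    Nx = (N(x + h, y) - N(x - h, y)) / (2 * h)
    assert abs(My - Nx) > 0.1
    # ... but after multiplying by nu(y) = 1/y**2 it is.
    My2 = (nu(y + h) * M(x, y + h) - nu(y - h) * M(x, y - h)) / (2 * h)
    Nx2 = (nu(y) * N(x + h, y) - nu(y) * N(x - h, y)) / (2 * h)
    assert abs(My2 - Nx2) < 1e-6
```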

2 Second and higher order linear ODE

A second order linear ODE is of the form

y'' + a(x)y' + b(x)y = f(x), (1)

where a(x), b(x), f(x) are functions defined on some interval I. The equation (1) is said to be

1. homogeneous if f(x) = 0 for all x ∈ I, and

2. non-homogeneous if f(x) ≠ 0 for some x ∈ I.

THEOREM 2.1. (Existence and uniqueness) Suppose a(x), b(x), f(x) are continuous functions
(defined on some interval I). Then for every x0 ∈ I, y0 ∈ R, z0 ∈ R, there exists a unique solution y
for (1) such that

y(x0) = y0, y'(x0) = z0.

2.1 Second order linear homogeneous ODE

Consider the second order linear homogeneous ODE:

y'' + a(x)y' + b(x)y = 0. (2)

Note that:

If y1 and y2 are solutions of (2), then for any α, β ∈ R, the function αy1 + βy2 is also a solution
of (2).

Definition 2.2. Let y1 and y2 be functions defined on an interval I.

1. y1 and y2 are said to be linearly dependent if there exists λ ∈ R such that either y1(x) = λy2(x)
or y2(x) = λy1(x) for all x ∈ I; equivalently, there exist α, β ∈ R, with at least one of them nonzero, such
that

αy1(x) + βy2(x) = 0 ∀ x ∈ I.

2. y1 and y2 are said to be linearly independent if they are not linearly dependent, i.e., for
α, β ∈ R,

αy1(x) + βy2(x) = 0 ∀ x ∈ I ⇒ α = 0, β = 0.

We shall prove:

THEOREM 2.3. The following hold.

1. The differential equation (2) has two linearly independent solutions.

2. If y1 and y2 are linearly independent solutions of (2), then every solution y of (2) can be expressed
as

y = αy1 + βy2

for some α, β ∈ R.

Definition 2.4. Let y1 and y2 be differentiable functions (on an interval I). Then the function

W(y1, y2)(x) := det [ y1(x) y2(x) ; y1'(x) y2'(x) ]

is called the Wronskian of y1, y2.

Once the functions y1, y2 are fixed, we shall denote W(y1, y2)(x) by W(x).

Note that:

If y1 and y2 are linearly dependent, then W(x) = 0 for all x ∈ I.

Equivalently:

If W(x0) ≠ 0 for some x0 ∈ I, then y1 and y2 are linearly independent.
THEOREM 2.5. Consider a nonsingular matrix A = [ a1 b1 ; a2 b2 ]. Let x0 ∈ I. Let y1 and y2 be
the unique solutions of (2) satisfying the conditions

y1(x0) = a1, y2(x0) = b1,
y1'(x0) = a2, y2'(x0) = b2.

Then y1 and y2 are linearly independent solutions of (2).

Proof. Since A is the matrix whose determinant is W(x0), and det(A) ≠ 0, the proof follows from the
earlier observation.

LEMMA 2.6. Let y1 and y2 be solutions of (2) and x0 ∈ I. Then

W(x) = W(x0) e^{−∫_{x0}^{x} a(t) dt}.

In particular, if y1 and y2 are solutions of (2), then

W(x0) = 0 at some point x0 ∈ I ⟺ W(x) = 0 at every point x ∈ I.

Proof. Since y1 and y2 are solutions of (2), we have

y1'' + a(x)y1' + b(x)y1 = 0,

y2'' + a(x)y2' + b(x)y2 = 0.

Hence,

(y1y2'' − y2y1'') + a(x)(y1y2' − y2y1') = 0.

Note that

W = y1y2' − y2y1', W' = y1y2'' − y2y1''.

Hence

W' + a(x)W = 0.

Therefore,

W(x) = W(x0) e^{−∫_{x0}^{x} a(t) dt}.

THEOREM 2.7. Let y1 and y2 be solutions of (2) and x0 ∈ I. Then

y1 and y2 are linearly independent ⟺ W(x) ≠ 0 for every x ∈ I.

Proof. We have already observed that if W(x0) ≠ 0 for some x0 ∈ I, then y1 and y2 are linearly
independent. Hence, it remains to prove that if y1 and y2 are linearly independent, then W(x) ≠ 0
for every x ∈ I.

Suppose W(x0) = 0 for some x0 ∈ I. Then by Lemma 2.6, W(x) = 0 for every x ∈ I, i.e.,

y1y2' − y2y1' = 0 on I.

Let I0 = {x ∈ I : y1(x) ≠ 0}. Then we have

(y1y2' − y2y1') / y1^2 = 0 on I0,

i.e.,

(d/dx)(y2/y1) = 0 on I0.

Hence, there exists λ ∈ R such that

y2/y1 = λ on I0.

Hence, y2 = λy1 on I, showing that y1 and y2 are linearly dependent.

THEOREM 2.8. Let y1 and y2 be linearly independent solutions of (2). Then every solution y of
(2) can be expressed as

y = αy1 + βy2

for some α, β ∈ R.

Proof. Let y be a solution of (2), and for x0 ∈ I, let

y0 := y(x0), z0 := y'(x0).

Let W(x) be the Wronskian of y1, y2. Since y1 and y2 are linearly independent solutions of (2), by
Theorem 2.7, W(x0) ≠ 0. Hence, there exists a unique pair (α, β) of real numbers such that

[ y1(x0) y2(x0) ; y1'(x0) y2'(x0) ] [ α ; β ] = [ y0 ; z0 ].

Let

φ(x) = αy1(x) + βy2(x), x ∈ I.

Then φ is a solution of (2) satisfying

φ(x0) = αy1(x0) + βy2(x0) = y0, φ'(x0) = αy1'(x0) + βy2'(x0) = z0.

By the existence and uniqueness theorem, we obtain φ(x) = y(x) for all x ∈ I, i.e.,

y = αy1 + βy2.

Theorem 2.5 and Theorem 2.8 give Theorem 2.3.

Now, the question is how to get linearly independent solutions for (2).

THEOREM 2.9. Let y1 be a nonzero solution of (2). Then

y2(x) := y1(x) ∫ (μ(x)/y1(x)^2) dx, μ(x) := e^{−∫_{x0}^{x} a(t) dt},

is a solution of (2), and y1, y2 are linearly independent.

Proof. Let y2(x) = y1(x)v(x), where

v(x) := ∫ (μ(x)/y1(x)^2) dx, μ(x) := e^{−∫_{x0}^{x} a(t) dt}.

Then

y2' = y1v' + y1'v, y2'' = y1v'' + y1'v' + y1'v' + y1''v = y1v'' + 2y1'v' + y1''v.

Hence,

y2'' + ay2' + by2 = y1v'' + 2y1'v' + y1''v + a(y1v' + y1'v) + by1v
= y1v'' + 2y1'v' + (y1'' + ay1' + by1)v + ay1v'
= y1v'' + 2y1'v' + ay1v'.

Note that

v' = μ(x)/y1(x)^2, i.e., y1^2 v' = μ.

Hence, differentiating and using μ' = −aμ,

y1^2 v'' + 2y1y1'v' = −aμ, i.e., y1(y1v'' + 2y1'v') = −aμ,

so that

y2'' + ay2' + by2 = y1v'' + 2y1'v' + ay1v' = −aμ/y1 + aμ/y1 = 0.

Clearly, y1 and y2 are linearly independent.

Motivation for the above expression for y2:

If y1 and y2 are solutions of (2), then we know that

(d/dx)(y2/y1) = (y1y2' − y2y1')/y1^2 = W(x)/y1^2 = C e^{−∫_{x0}^{x} a(t) dt} / y1^2.

Hence,

y2 = y1 ∫ ( C e^{−∫_{x0}^{x} a(t) dt} / y1^2 ) dx.
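As a concrete illustration of Theorem 2.9 (our own example, not from the notes): for y'' − 2y' + y = 0 with known solution y1 = e^x, we have a(x) = −2, so μ(x) = e^{2x} (taking x0 = 0) and y2 = y1 ∫ μ/y1^2 dx = e^x ∫ 1 dx = x e^x. A numeric check that y2 really solves the equation:

```python
import math

def y2(x):
    # Reduction-of-order solution for y'' - 2y' + y = 0 with y1 = e**x.
    return x * math.exp(x)

h, x = 1e-4, 0.5
d1 = (y2(x + h) - y2(x - h)) / (2 * h)              # y2'
d2 = (y2(x + h) - 2 * y2(x) + y2(x - h)) / h**2     # y2''
assert abs(d2 - 2 * d1 + y2(x)) < 1e-4              # y2'' - 2 y2' + y2 = 0
```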

2.2 Second order linear homogeneous ODE with constant coefficients

The DE in this case is of the form

y'' + py' + qy = 0, (1)

where p, q are real constants. Let us look for a solution of (1) in the form y = e^{λx} for some λ, real or
complex. Assuming that such a solution exists, from (1) we have

(λ^2 + pλ + q) e^{λx} = 0,

so that λ must satisfy the auxiliary equation:

λ^2 + pλ + q = 0. (2)

We have the following cases:

1. (2) has two distinct real roots λ1, λ2;

2. (2) has two distinct complex roots λ1 = α + iβ, λ2 = α − iβ;

3. (2) has a multiple root λ0.

In case 1, e^{λ1 x}, e^{λ2 x} are linearly independent solutions.

In case 2, e^{αx} cos βx, e^{αx} sin βx are linearly independent solutions.

In case 3, e^{λ0 x}, x e^{λ0 x} are linearly independent solutions.

Example 2.10.
y'' + y' − 2y = 0.

Auxiliary equation: λ^2 + λ − 2 = 0 has two distinct real roots: λ1 = 1, λ2 = −2.

General solution: y = C1 e^{x} + C2 e^{−2x}.

Example 2.11.
y'' + 2y' + 5y = 0.

Auxiliary equation: λ^2 + 2λ + 5 = 0 has two complex roots: λ1 = −1 + 2i, λ2 = −1 − 2i.

General solution: y = e^{−x}[C1 cos 2x + C2 sin 2x].

Example 2.12.
y'' − 4y' + 4y = 0.

Auxiliary equation: λ^2 − 4λ + 4 = 0 has a multiple root: λ0 = 2.

General solution: y = e^{2x}[C1 + C2 x].
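The three cases can be checked mechanically. The sketch below (ours, not part of the notes) computes the roots of the auxiliary equation for Examples 2.10 and 2.11 and verifies, by finite differences, that x e^{2x} solves the double-root equation of Example 2.12:

```python
import cmath, math

def roots(p, q):
    # Roots of the auxiliary equation l**2 + p*l + q = 0.
    d = cmath.sqrt(p * p - 4 * q)
    return (-p + d) / 2, (-p - d) / 2

# Example 2.10: distinct real roots 1 and -2.
assert sorted(r.real for r in roots(1, -2)) == [-2.0, 1.0]
# Example 2.11: complex pair -1 +/- 2i.
r1, r2 = roots(2, 5)
assert {r1, r2} == {complex(-1, 2), complex(-1, -2)}
# Example 2.12: double root 2; check y = x*e**(2x) solves y'' - 4y' + 4y = 0.
h, x = 1e-4, 0.7
y = lambda t: t * math.exp(2 * t)
d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
d1 = (y(x + h) - y(x - h)) / (2 * h)
assert abs(d2 - 4 * d1 + 4 * y(x)) < 1e-4
```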

2.3 Second order linear non-homogeneous ODE

Consider the nonhomogeneous ODE:

y'' + a(x)y' + b(x)y = f(x). (1)

We observe that if y0 is a solution of the homogeneous equation

y'' + a(x)y' + b(x)y = 0 (2)

and y* is a particular solution of the nonhomogeneous equation (1), then

y = y0 + y*

is a solution of the nonhomogeneous equation (1). Also, if y* is a particular solution of the nonho-
mogeneous equation (1) and if y is any solution of the nonhomogeneous equation (1), then y − y* is
a solution of the homogeneous equation (2). Thus, knowing a particular solution y* of the nonho-
mogeneous equation (1) and a general solution y0 of the homogeneous equation (2), we obtain a general
solution of the nonhomogeneous equation (1) as

y = y0 + y*.

If the coefficients are constants, then we know a method of obtaining two linearly independent solutions
for the homogeneous equation (2), and thus we obtain a general solution for the homogeneous equation
(2).

How to get a particular solution for the nonhomogeneous equation (1)?

2.3.1 Method of variation of parameters

Suppose y1 and y2 are linearly independent solutions of the homogeneous ODE:

y'' + a(x)y' + b(x)y = 0. (2)

Then, look for a solution of (1) in the form

y = u1y1 + u2y2,

where u1 and u2 are functions to be determined. Assume for a moment that such a solution exists.
Then

y' = u1y1' + u2y2' + u1'y1 + u2'y2.

We shall look for u1, u2 such that

u1'y1 + u2'y2 = 0. (3)

Then, we have

y' = u1y1' + u2y2', (4)

y'' = u1y1'' + u2y2'' + u1'y1' + u2'y2'. (5)

Substituting (4)-(5) in (1),

(u1y1'' + u2y2'' + u1'y1' + u2'y2') + a(x)(u1y1' + u2y2') + b(x)(u1y1 + u2y2) = f(x),

i.e.,

u1[y1'' + a(x)y1' + b(x)y1] + u2[y2'' + a(x)y2' + b(x)y2] + u1'y1' + u2'y2' = f(x),

i.e.,

u1'y1' + u2'y2' = f(x). (6)

Now, (3) and (6):

[ y1 y2 ; y1' y2' ] [ u1' ; u2' ] = [ 0 ; f ]

gives

u1' = −y2f/W, u2' = y1f/W.

Hence,

u1 = −∫ (y2f/W) dx + C1, u2 = ∫ (y1f/W) dx + C2.

Thus,

y = ( −∫ (y2f/W) dx + C1 ) y1 + ( ∫ (y1f/W) dx + C2 ) y2

is the general solution. Thus we have proved the following theorem.

THEOREM 2.13. If y1, y2 are linearly independent solutions of the homogeneous equation (2), and
if W(x) is their Wronskian, then a general solution of the nonhomogeneous equation (1) is given by

y = u1y1 + u2y2,

where

u1 = −∫ (y2f/W) dx + C1, u2 = ∫ (y1f/W) dx + C2.
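The formulas of Theorem 2.13 can be exercised numerically on a simple example of our own choosing: for y'' + y = x we have y1 = cos x, y2 = sin x and W = 1, and computing u1, u2 by quadrature should reproduce a particular solution (here it comes out as x − sin x, i.e., x plus the homogeneous term −sin x fixed by the choice of lower limit 0).

```python
import math

f  = lambda x: x          # right-hand side of y'' + y = x
y1 = lambda x: math.cos(x)
y2 = lambda x: math.sin(x)
W  = 1.0                  # Wronskian of cos, sin

def quad(g, a, b, n=2000):
    # Composite trapezoid rule for the integrals defining u1, u2.
    h = (b - a) / n
    return h * (g(a) / 2 + sum(g(a + i * h) for i in range(1, n)) + g(b) / 2)

def y_particular(x):
    u1 = -quad(lambda t: y2(t) * f(t) / W, 0.0, x)
    u2 =  quad(lambda t: y1(t) * f(t) / W, 0.0, x)
    return u1 * y1(x) + u2 * y2(x)

for x in (0.5, 1.0, 2.0):
    assert abs(y_particular(x) - (x - math.sin(x))) < 1e-5
```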

Analogously, the following theorem can also be proved:

THEOREM 2.14. If y1, y2, . . . , yn are linearly independent solutions of the homogeneous equation

y^(n) + a1(x)y^(n−1) + · · · + a_{n−1}(x)y^(1) + a_n(x)y = 0,

where a1, a2, . . . , an are continuous functions on an interval I, and if W(x) is their Wronskian, i.e.,

W(x) = det
[ y1        y2        · · ·  yn
  y1'       y2'       · · ·  yn'
  . . .
  y1^(n−1)  y2^(n−1)  · · ·  yn^(n−1) ],

then a general solution of the nonhomogeneous equation

y^(n) + a1(x)y^(n−1) + · · · + a_{n−1}(x)y^(1) + a_n(x)y = f(x)

is given by

y = (u1 + C1)y1 + (u2 + C2)y2 + · · · + (un + Cn)yn,

where u1', u2', . . . , un' are obtained by solving the system

[ y1        y2        · · ·  yn
  y1'       y2'       · · ·  yn'
  . . .
  y1^(n−1)  y2^(n−1)  · · ·  yn^(n−1) ] [ u1' ; u2' ; . . . ; un' ] = [ 0 ; 0 ; . . . ; f ].

Remark 2.15. Suppose the right hand side of (1) is of the form f(x) = f1(x) + f2(x). Then it can
be easily seen that:

If y1 and y2 are solutions of

y'' + a(x)y' + b(x)y = f1(x) and y'' + a(x)y' + b(x)y = f2(x),

respectively, then y1 + y2 is a solution of

y'' + a(x)y' + b(x)y = f1(x) + f2(x).

2.3.2 Method of undetermined coefficients

This method applies when the coefficients of (1) are constants and f is of certain special forms. So, consider

y'' + py' + qy = f, (1)

where p, q are constants.

Case (i): f(x) = P(x)e^{αx}, where P is a polynomial of degree n, and α ∈ R:

We look for a solution of the form

y = Q(x)e^{αx},

where Q is a polynomial of degree n. Substituting the above expression in the DE, we obtain:

[Q'' + (2α + p)Q' + (α^2 + pα + q)Q] e^{αx} = P(x) e^{αx}.

Thus, we must have

Q'' + (2α + p)Q' + (α^2 + pα + q)Q = P(x).

Note that the above equation determines such a Q only if α^2 + pα + q ≠ 0, i.e., α is not a root of the auxiliary
equation λ^2 + pλ + q = 0. In that case, we can determine Q by comparing coefficients of the powers x^k
for k = 0, 1, . . . , n.

If α is a root of the auxiliary equation λ^2 + pλ + q = 0, then we must look for a solution of the
form

y = Q̃(x)e^{αx},

where Q̃ is a polynomial of degree n + 1, or we must look for a solution of the form

y = xQ(x)e^{αx},

where Q is a polynomial of degree n. Proceeding as above we can determine Q provided 2α + p ≠ 0,
i.e., if α is not a double root of the auxiliary equation λ^2 + pλ + q = 0.

If α is a double root of the auxiliary equation λ^2 + pλ + q = 0, then we must look for a solution
of the form

y = Q̂(x)e^{αx},

where Q̂ is a polynomial of degree n + 2, or we must look for a solution of the form

y = x^2 Q(x)e^{αx},

where Q is a polynomial of degree n, which we can determine by comparing coefficients of the powers of
x.

Case (ii): f(x) = P1(x)e^{αx} cos βx + P2(x)e^{αx} sin βx, where P1 and P2 are polynomials and α, β are
real numbers:

We look for a solution of the form

y = Q1(x)e^{αx} cos βx + Q2(x)e^{αx} sin βx,

where Q1 and Q2 are polynomials with

deg Qj = max{deg P1, deg P2}, j ∈ {1, 2}.

Substituting the above expression in the DE, we can determine the coefficients of Q1, Q2 if α + iβ is not a
root of the auxiliary equation λ^2 + pλ + q = 0.

If α + iβ is a simple root of the auxiliary equation λ^2 + pλ + q = 0, then we look for a solution of
the form

y = x[Q1(x)e^{αx} cos βx + Q2(x)e^{αx} sin βx],

where Q1 and Q2 are polynomials with deg Qj = max{deg P1, deg P2}, j ∈ {1, 2}.

The following example illustrates the second part of case (ii) above:
Example 2.16. We find the general solution of

y'' + 4y = x sin 2x.

The auxiliary equation corresponding to the homogeneous equation y'' + 4y = 0 is:

λ^2 + 4 = 0.

Its solutions are λ = ±2i. Hence, the general solution of the homogeneous equation is:

y0 = A cos 2x + B sin 2x.

Note that the non-homogeneous term, f(x) = x sin 2x, is of the form

f(x) = P1(x)e^{αx} cos βx + P2(x)e^{αx} sin βx,

with P1(x) = 0, P2(x) = x, α = 0, β = 2. Also, 2i = α + iβ is a simple root of the auxiliary equation. Hence,
a particular solution is of the form

y* = x[Q1(x)e^{αx} cos βx + Q2(x)e^{αx} sin βx],

where Q1 and Q2 are polynomials with deg Qj = max{deg P1, deg P2} = 1. Thus, a particular
solution is of the form

y* = x[(A0 + A1x) cos 2x + (B0 + B1x) sin 2x].

Differentiating:

y*' = [A0 + (2A1 + 2B0)x + 2B1x^2] cos 2x + [B0 + (2B1 − 2A0)x − 2A1x^2] sin 2x,
This example was included in the notes on November 23, 2012 (mtnair).
y*'' + 4y* = 2[B0 + (2B1 − 2A0)x − 2A1x^2] cos 2x
− 2[A0 + (2A1 + 2B0)x + 2B1x^2] sin 2x
+ [(2B1 − 2A0) − 4A1x] sin 2x + [(2A1 + 2B0) + 4B1x] cos 2x
+ 4x[(A0 + A1x) cos 2x + (B0 + B1x) sin 2x].

Hence, y*'' + 4y* = x sin 2x if and only if

2[B0 + (2B1 − 2A0)x − 2A1x^2] + [(2A1 + 2B0) + 4B1x] + 4x(A0 + A1x) = 0,

−2[A0 + (2A1 + 2B0)x + 2B1x^2] + [(2B1 − 2A0) − 4A1x] + 4x(B0 + B1x) = x.

Comparing coefficients gives

A0 = 0, A1 = −1/8, B0 = 1/16, B1 = 0,

so that

y* = x[(A0 + A1x) cos 2x + (B0 + B1x) sin 2x] = −(x^2/8) cos 2x + (x/16) sin 2x.

Thus, the general solution of the equation is:

y = A cos 2x + B sin 2x − (x^2/8) cos 2x + (x/16) sin 2x.
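The particular solution just found can be confirmed by finite differences; this short Python check (ours, not part of the notes) verifies y*'' + 4y* = x sin 2x at a few points:

```python
import math

def yp(x):
    # Particular solution of y'' + 4y = x*sin(2x) from Example 2.16.
    return -(x**2 / 8) * math.cos(2 * x) + (x / 16) * math.sin(2 * x)

h = 1e-4
for x in (0.3, 1.0, 2.5):
    d2 = (yp(x + h) - 2 * yp(x) + yp(x - h)) / h**2   # yp''
    assert abs(d2 + 4 * yp(x) - x * math.sin(2 * x)) < 1e-4
```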

Remark 2.17. The above method can be generalized, in a natural way, to the higher order equation

y^(n) + a1 y^(n−1) + · · · + a_{n−1} y^(1) + a_n y = f(x),

where f is of the form

f(x) = P1(x)e^{αx} cos βx + P2(x)e^{αx} sin βx

with P1 and P2 being polynomials and α, β real numbers.

2.3.3 Equations reducible to constant coefficients case

A particular type of equation with non-constant coefficients can be reduced to one with constant
coefficients. Here it is: Consider

x^n y^(n) + a1 x^{n−1} y^(n−1) + · · · + a_{n−1} x y^(1) + a_n y = f(x). (1)

In this case, we make the change of variable x ↦ z defined by

x = e^z.

Then the equation (1) can be brought to the form

D^n y + b1 D^{n−1} y + · · · + b_{n−1} Dy + b_n y = f(e^z), D := d/dz,

where b1, b2, . . . , bn are constants. Let us consider the case n = 2:

x^2 y'' + a1 x y' + a2 y = f(x).

Taking x = e^z,

dy/dz = (dy/dx)(dx/dz) = y'x,

d^2y/dz^2 = (d/dz)(y'x) = (dy'/dz)x + y'(dx/dz) = y''x^2 + y'x = y''x^2 + dy/dz.

Hence we have

x^2 y'' + a1 x y' + a2 y = (d^2y/dz^2 − dy/dz) + a1 dy/dz + a2 y = d^2y/dz^2 + (a1 − 1) dy/dz + a2 y.

Thus, the equation takes the form:

d^2y/dz^2 + (a1 − 1) dy/dz + a2 y = f(e^z).

Note also that

d^3y/dz^3 = (d/dz)(y''x^2 + y'x) = y'''x^3 + 2y''x^2 + y''x^2 + y'x
= y'''x^3 + 3y''x^2 + y'x
= y'''x^3 + 3(d^2y/dz^2 − dy/dz) + dy/dz.

Hence,

x^3 y''' + a x^2 y'' + b x y' + c y
= [d^3y/dz^3 − 3(d^2y/dz^2 − dy/dz) − dy/dz] + a(d^2y/dz^2 − dy/dz) + b dy/dz + c y
= d^3y/dz^3 + (a − 3) d^2y/dz^2 + (b − a + 2) dy/dz + c y.
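The transformation can be illustrated on a concrete Euler equation of our own choosing: for x^2 y'' − 2y = 0 (a1 = 0, a2 = −2), the transformed equation has auxiliary equation m^2 − m − 2 = 0 with roots m = 2 and m = −1, so y = x^2 and y = 1/x should solve the original ODE. A numeric sketch:

```python
# Euler equation x**2 y'' + a1*x*y' + a2*y = 0 with a1 = 0, a2 = -2:
# transformed auxiliary equation m**2 + (a1 - 1)m + a2 = 0 has roots 2 and -1,
# giving candidate solutions y = x**2 and y = 1/x.
a1, a2 = 0.0, -2.0
h = 1e-4
for y in (lambda x: x**2, lambda x: 1.0 / x):
    x = 1.3
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    assert abs(x**2 * d2 + a1 * x * d1 + a2 * y(x)) < 1e-4
```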

3 System of first order linear homogeneous ODE

Consider the system:

dx1/dt = a x1 + b x2,
dx2/dt = c x1 + d x2.

The above system can be written in matrix notation as:

(d/dt) [ x1 ; x2 ] = [ a b ; c d ] [ x1 ; x2 ] (1)

or more compactly as:

dX/dt = AX,

where

X = [ x1 ; x2 ], A = [ a b ; c d ].

Here, we used the convention:

(d/dt) [ f ; g ] = [ f' ; g' ].
In this case we look for a solution of the form

X = [ α1 e^{λt} ; α2 e^{λt} ] =: [ α1 ; α2 ] e^{λt}.

Substituting this into the system of equations we get

λ [ α1 ; α2 ] e^{λt} = [ a b ; c d ] [ α1 ; α2 ] e^{λt}.

Equivalently,

[ a b ; c d ] [ α1 ; α2 ] = λ [ α1 ; α2 ].

That is,

[ a − λ  b ; c  d − λ ] [ α1 ; α2 ] = [ 0 ; 0 ]. (2)

Thus, if λ0 is a root of the equation

det [ a − λ  b ; c  d − λ ] = 0, (3)

then there is a nonzero vector [α1, α2]^T satisfying (2), and X = [ α1 ; α2 ] e^{λ0 t} is a solution of the system
(1).

Definition 3.1. The equation (3) is called the auxiliary equation for the system (1).

Let us consider the following cases:

Case (i): Suppose the roots of the auxiliary equation (3) are real and distinct, say λ1 and λ2. Suppose

[ α1^(1) ; α2^(1) ] and [ α1^(2) ; α2^(2) ]

are nonzero solutions of (2) corresponding to λ = λ1 and λ = λ2, respectively. Then the vector valued
functions

X1 = [ α1^(1) ; α2^(1) ] e^{λ1 t}, X2 = [ α1^(2) ; α2^(2) ] e^{λ2 t}

are solutions of (1), and they are linearly independent. In this case, the general solution of (1) is given
by C1X1 + C2X2.

Case (ii): Suppose the roots of the auxiliary equation (3) are complex and non-real. Since the entries of
the matrix are real, these roots are conjugate to each other. Thus, they are of the form α + iβ and
α − iβ with β ≠ 0. Suppose [ γ1 ; γ2 ] is a nonzero solution of (2) corresponding to λ = α + iβ. The
numbers γ1 and γ2 need not be real. Thus,

[ γ1 ; γ2 ] = [ γ1^(1) + iγ1^(2) ; γ2^(1) + iγ2^(2) ].

Then, the vector valued function

X := [ γ1 ; γ2 ] e^{(α+iβ)t} = [ γ1^(1) + iγ1^(2) ; γ2^(1) + iγ2^(2) ] e^{αt} [cos βt + i sin βt]

is a solution of (1). Note that

X = X1 + iX2,

where

X1 = [ γ1^(1) cos βt − γ1^(2) sin βt ; γ2^(1) cos βt − γ2^(2) sin βt ] e^{αt},

X2 = [ γ1^(1) sin βt + γ1^(2) cos βt ; γ2^(1) sin βt + γ2^(2) cos βt ] e^{αt}.

We see that X1 and X2 are also solutions of (1), and they are linearly independent. In this case,
a general solution of (1) is given by C1X1 + C2X2.

Case (iii): Suppose λ0 is a double root of the auxiliary equation (3). In this case there are two
subcases:

There are two linearly independent solutions for (2).

There is only one (up to scalar multiples) nonzero solution for (2).

In the first case, if

[ α1^(1) ; α2^(1) ] and [ α1^(2) ; α2^(2) ]

are the linearly independent solutions of (2) corresponding to λ = λ0, then the vector valued functions

X1 = [ α1^(1) ; α2^(1) ] e^{λ0 t}, X2 = [ α1^(2) ; α2^(2) ] e^{λ0 t}

are solutions of (1), and the general solution of (1) is given by

C1X1 + C2X2.

In the second case, let u := [ α1 ; α2 ] be a nonzero solution of (2) corresponding to λ = λ0, and let
v := [ β1 ; β2 ] be such that

(A − λ0 I)v = u.

Then

X = C1 u e^{λ0 t} + C2 [tu + v] e^{λ0 t}

is the general solution.

Remark 3.2. Another method of solving a system is to convert the given system into a second order
equation for one of x1 and x2, solve it, and then obtain the other.
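Case (i) can be illustrated concretely (our own example, not from the notes): for A = [0 1; −2 −3] the auxiliary equation λ^2 + 3λ + 2 = 0 gives λ = −1, −2 with eigenvectors (1, −1) and (1, −2), so X(t) = C1 (1, −1) e^{−t} + C2 (1, −2) e^{−2t}. A numeric check that dX/dt = AX:

```python
import math

A = [[0.0, 1.0], [-2.0, -3.0]]

def X(t, C1=1.0, C2=2.0):
    # General solution built from eigenpairs (-1, (1,-1)) and (-2, (1,-2)).
    e1, e2 = math.exp(-t), math.exp(-2 * t)
    return [C1 * e1 + C2 * e2, -C1 * e1 - 2 * C2 * e2]

h = 1e-6
for t in (0.0, 0.5, 1.5):
    x = X(t)
    dX = [(a - b) / (2 * h) for a, b in zip(X(t + h), X(t - h))]
    Ax = [A[0][0] * x[0] + A[0][1] * x[1], A[1][0] * x[0] + A[1][1] * x[1]]
    assert all(abs(d - v) < 1e-6 for d, v in zip(dX, Ax))
```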

4 Power series method

4.1 The method and some examples

Consider the differential equation:
$$y'' + f(x)y' + g(x)y = r(x). \tag{1}$$
We would like to see if the above equation has a solution of the form
$$y = \sum_{n=0}^{\infty} c_n (x - x_0)^n \tag{2}$$
in some interval $I$ containing some known $x_0$, where $c_0, c_1, \ldots$ are to be determined.



Recall from calculus: Suppose the power series $\sum_{n=0}^{\infty} a_n (x - x_0)^n$ converges at some point other than $x_0$. Then:

There exists $\rho > 0$ such that the series converges at every $x$ with $|x - x_0| < \rho$ and diverges at every $x$ with $|x - x_0| > \rho$.

$\sum_{n=0}^{\infty} a_n (x - x_0)^n = 0$ for all $x$ with $|x - x_0| < \rho$ implies $a_n = 0$ for all $n = 0, 1, 2, \ldots$.

The series can be differentiated term by term in the interval $(x_0 - \rho, x_0 + \rho)$ any number of times, i.e.,
$$\frac{d^k}{dx^k} \sum_{n=0}^{\infty} a_n (x - x_0)^n = \sum_{n=k}^{\infty} n(n-1) \cdots (n - k + 1)\, a_n (x - x_0)^{n-k}$$
for every $x$ with $|x - x_0| < \rho$ and for every $k \in \mathbb{N}$.

If $f(x) := \sum_{n=0}^{\infty} a_n (x - x_0)^n$ for $|x - x_0| < \rho$, then $a_n = \dfrac{f^{(n)}(x_0)}{n!}$.

The above number $\rho$ is called the radius of convergence of the series $\sum_{n=0}^{\infty} a_n (x - x_0)^n$.

Definition 4.1. A (real valued) function $f$ defined in a neighbourhood of a point $x_0 \in \mathbb{R}$ is said to be analytic at $x_0$ if it can be expressed as
$$f(x) = \sum_{n=0}^{\infty} a_n (x - x_0)^n, \qquad |x - x_0| < \rho,$$
for some $\rho > 0$, where $a_0, a_1, \ldots$ are real numbers.

Recall that if $p(x)$ and $q(x)$ are polynomials given by
$$p(x) = a_0 + a_1 x + \cdots + a_n x^n, \qquad q(x) = b_0 + b_1 x + \cdots + b_n x^n,$$
then
$$p(x)q(x) = a_0 b_0 + (a_0 b_1 + a_1 b_0)x + \cdots + (a_0 b_n + a_1 b_{n-1} + \cdots + a_n b_0)x^n + \cdots.$$
Motivated by this, for convergent power series $\sum_{n=0}^{\infty} a_n (x - x_0)^n$ and $\sum_{n=0}^{\infty} b_n (x - x_0)^n$, we define
$$\Big( \sum_{n=0}^{\infty} a_n (x - x_0)^n \Big) \Big( \sum_{n=0}^{\infty} b_n (x - x_0)^n \Big) = \sum_{n=0}^{\infty} c_n (x - x_0)^n, \qquad c_n := \sum_{k=0}^{n} a_k b_{n-k}.$$

Now, it may be too much to expect a solution of the form (2) for the differential equation (1) for arbitrary continuous functions $f, g, r$: equation (1) only requires the solution to have a second derivative, whereas a function with a power series expansion is differentiable infinitely many times. But it is not too much to expect a solution of the form (2) if $f, g, r$ themselves have power series expansions about $x_0$. The power series method is based on such assumptions.

The idea is to consider those cases when $f, g, r$ also have power series expansions about $x_0$, say
$$f(x) = \sum_{n=0}^{\infty} a_n (x - x_0)^n, \qquad g(x) = \sum_{n=0}^{\infty} b_n (x - x_0)^n, \qquad r(x) = \sum_{n=0}^{\infty} d_n (x - x_0)^n.$$
Then substitute the expressions for $f, g, r, y$ into (1) and obtain the coefficients $c_n$, $n \in \mathbb{N}_0$, by comparing coefficients of $(x - x_0)^k$ for $k = 0, 1, 2, \ldots$.

Note that this case includes the situations when:

any of the functions $f, g, r$ is a polynomial;

any of the functions $f, g, r$ is a rational function, i.e., a function of the form $p(x)/q(x)$ where $p(x)$ and $q(x)$ are polynomials; in that case the point $x_0$ should not be a zero of $q(x)$.

Example 4.2.
$$y'' + y = 0. \tag{$*$}$$
In this case, $f = 0$, $g = 1$, $r = 0$. So, we may assume that the equation has a power series solution around any point $x_0 \in \mathbb{R}$. For simplicity, let $x_0 = 0$, and assume that the solution is of the form $y = \sum_{n=0}^{\infty} c_n x^n$. Note that
$$(*) \iff \sum_{n=2}^{\infty} n(n-1) c_n x^{n-2} + \sum_{n=0}^{\infty} c_n x^n = 0 \iff \sum_{n=0}^{\infty} (n+2)(n+1) c_{n+2} x^n + \sum_{n=0}^{\infty} c_n x^n = 0$$
$$\iff \sum_{n=0}^{\infty} [(n+2)(n+1) c_{n+2} + c_n] x^n = 0 \iff (n+2)(n+1) c_{n+2} + c_n = 0 \quad \forall\, n \in \mathbb{N}_0 := \mathbb{N} \cup \{0\}$$
$$\iff c_{n+2} = -\frac{c_n}{(n+2)(n+1)} \quad \forall\, n \in \mathbb{N}_0 \iff c_{2n} = \frac{(-1)^n c_0}{(2n)!}, \quad c_{2n+1} = \frac{(-1)^n c_1}{(2n+1)!} \quad \forall\, n \in \mathbb{N}_0.$$

Thus, if $y = \sum_{n=0}^{\infty} c_n x^n$ is a solution of $(*)$, then
$$y = \sum_{n=0}^{\infty} c_n x^n = \sum_{n=0}^{\infty} c_{2n} x^{2n} + \sum_{n=0}^{\infty} c_{2n+1} x^{2n+1} = c_0 \cos x + c_1 \sin x$$
for arbitrary $c_0$ and $c_1$. We can see that this, indeed, is a solution.

The following theorem specifies conditions under which a power series solution is possible.

THEOREM 4.3. Let $p$, $q$, $r$ be analytic at a point $x_0$. Then every solution of the equation
$$y'' + p(x)y' + q(x)y = r(x)$$
can be represented as a power series in powers of $x - x_0$.

4.2 Legendre's equation and Legendre polynomials

The differential equation
$$(1 - x^2) y'' - 2x y' + \lambda(\lambda + 1) y = 0 \tag{$*$}$$
is called the Legendre equation. Here, $\lambda$ is a real constant. Note that the above equation can also be written as
$$\frac{d}{dx} \Big[ (1 - x^2) \frac{dy}{dx} \Big] + \lambda(\lambda + 1) y = 0.$$
Note that $(*)$ can also be written as
$$y'' - \frac{2x y'}{1 - x^2} + \frac{\lambda(\lambda + 1) y}{1 - x^2} = 0.$$
It is of the form (1) with
$$f(x) = -\frac{2x}{1 - x^2}, \qquad g(x) = \frac{\lambda(\lambda + 1)}{1 - x^2}, \qquad r(x) = 0.$$
Clearly, $f, g, r$ are rational functions, and have power series expansions around the point $x_0 = 0$. Let us assume that a solution of $(*)$ is of the form $y = \sum_{n=0}^{\infty} c_n x^n$. Substituting the expressions for $y, y', y''$ into $(*)$, we obtain
$$(1 - x^2) \sum_{n=2}^{\infty} n(n-1) c_n x^{n-2} - 2x \sum_{n=1}^{\infty} n c_n x^{n-1} + \lambda(\lambda + 1) \sum_{n=0}^{\infty} c_n x^n = 0,$$
i.e.,
$$\sum_{n=2}^{\infty} n(n-1) c_n x^{n-2} - \sum_{n=2}^{\infty} n(n-1) c_n x^n - \sum_{n=1}^{\infty} 2n c_n x^n + \lambda(\lambda + 1) \sum_{n=0}^{\infty} c_n x^n = 0,$$
i.e.,
$$\sum_{n=0}^{\infty} (n+2)(n+1) c_{n+2} x^n - \sum_{n=2}^{\infty} n(n-1) c_n x^n - \sum_{n=1}^{\infty} 2n c_n x^n + \sum_{n=0}^{\infty} \lambda(\lambda + 1) c_n x^n = 0.$$
Equating coefficients of $x^k$ to $0$ for $k \in \mathbb{N}_0$, we obtain
$$2c_2 + \lambda(\lambda + 1) c_0 = 0, \qquad 6c_3 - 2c_1 + \lambda(\lambda + 1) c_1 = 0,$$
$$(n+2)(n+1) c_{n+2} + [-n(n-1) - 2n + \lambda(\lambda + 1)] c_n = 0,$$
i.e.,
$$2c_2 + \lambda(\lambda + 1) c_0 = 0, \qquad 6c_3 + [-2 + \lambda(\lambda + 1)] c_1 = 0,$$
$$(n+2)(n+1) c_{n+2} + (\lambda - n)(\lambda + n + 1) c_n = 0,$$
i.e.,
$$c_2 = -\frac{\lambda(\lambda + 1)}{2}\, c_0, \qquad c_3 = \frac{2 - \lambda(\lambda + 1)}{6}\, c_1, \qquad c_{n+2} = -\frac{(\lambda - n)(\lambda + n + 1)}{(n+2)(n+1)}\, c_n.$$
Note that if $\lambda = k$ is a positive integer, then the coefficient of $x^{n+2}$ is zero for $n \in \{k, k+1, \ldots\}$. Thus, in this case we have $y = y_1(x) + y_2(x)$, where:

if $\lambda = k$ is an even integer, then $y_1(x)$ is a polynomial of degree $k$ with only even powers of $x$, and $y_2(x)$ is a power series with only odd powers of $x$;

if $\lambda = k$ is an odd integer, then $y_2(x)$ is a polynomial of degree $k$ with only odd powers of $x$, and $y_1(x)$ is a power series with only even powers of $x$.

Now, suppose $\lambda = k$ is a positive integer. Then, from the iterative formula
$$c_{n+2} = -\frac{(k - n)(k + n + 1)}{(n+2)(n+1)}\, c_n$$
we may take $c_k \neq 0$, and then $c_{k+2} = 0$, so that
$$c_{k+2j} = 0 \quad \text{for } j \in \mathbb{N}.$$
Thus,
$$c_{k-2} = -\frac{k(k-1)}{2(2k-1)}\, c_k,$$
$$c_{k-4} = -\frac{(k-2)(k-3)}{4(2k-3)}\, c_{k-2} = (-1)^2 \frac{k(k-1)(k-2)(k-3)}{2 \cdot 4 \cdot (2k-1)(2k-3)}\, c_k,$$
$$c_{k-6} = -\frac{(k-4)(k-5)}{6(2k-5)}\, c_{k-4} = (-1)^3 \frac{k(k-1)(k-2)(k-3)(k-4)(k-5)}{2 \cdot 4 \cdot 6 \cdot (2k-1)(2k-3)(2k-5)}\, c_k.$$
In general, for $2\ell < k$,
$$c_{k-2\ell} = (-1)^{\ell} \frac{k(k-1)(k-2) \cdots (k - 2\ell + 1)}{[2 \cdot 4 \cdots (2\ell)]\,(2k-1)(2k-3) \cdots (2k - 2\ell + 1)}\, c_k = (-1)^{\ell} \frac{k!\,(k-1)!\,(2k - 2\ell - 1)!}{(k - 2\ell)!\,\ell!\,(k - \ell - 1)!\,(2k-1)!}\, c_k.$$
Taking
$$c_k := \frac{(2k)!}{2^k (k!)^2},$$
it follows that
$$c_{k-2\ell} = (-1)^{\ell} \frac{(2k - 2\ell)!}{2^k\, \ell!\, (k - \ell)!\, (k - 2\ell)!}.$$
Definition 4.4. The polynomial
$$P_n(x) = \sum_{\ell=0}^{M_n} (-1)^{\ell} \frac{(2n - 2\ell)!}{2^n\, \ell!\, (n - \ell)!\, (n - 2\ell)!}\, x^{n - 2\ell}$$
is called the Legendre polynomial of degree $n$. Here, $M_n = n/2$ if $n$ is even and $M_n = (n-1)/2$ if $n$ is odd.

It can be seen that
$$P_0(x) = 1, \quad P_1(x) = x, \quad P_2(x) = \frac{1}{2}(3x^2 - 1), \quad P_3(x) = \frac{1}{2}(5x^3 - 3x),$$
$$P_4(x) = \frac{1}{8}(35x^4 - 30x^2 + 3), \quad P_5(x) = \frac{1}{8}(63x^5 - 70x^3 + 15x).$$
Also,
$$P_n(-x) = \sum_{k=0}^{M_n} (-1)^k \frac{(2n - 2k)!}{2^n\, k!\, (n - k)!\, (n - 2k)!}\, (-x)^{n - 2k} = (-1)^n P_n(x).$$

Rodrigues' formula: $\displaystyle P_n(x) = \frac{1}{n!\,2^n} \frac{d^n}{dx^n} (x^2 - 1)^n$.
Let
$$f(x) = (x^2 - 1)^n = \sum_{r=0}^{n} (-1)^r \binom{n}{r} x^{2n - 2r}.$$
Then
$$f'(x) = \sum_{r} (-1)^r \binom{n}{r} (2n - 2r)\, x^{2n - 2r - 1},$$
$$f''(x) = \sum_{r} (-1)^r \binom{n}{r} (2n - 2r)(2n - 2r - 1)\, x^{2n - 2r - 2},$$
and, retaining only the terms of degree at least $n$ (i.e., $r \le M_n$; the others vanish under $n$-fold differentiation),
$$f^{(n)}(x) = \sum_{r=0}^{M_n} (-1)^r \binom{n}{r} (2n - 2r)(2n - 2r - 1) \cdots (n - 2r + 1)\, x^{n - 2r} = \sum_{r=0}^{M_n} (-1)^r \frac{n!}{r!\,(n - r)!} \frac{(2n - 2r)!}{(n - 2r)!}\, x^{n - 2r} = n!\, 2^n P_n(x).$$
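Rodrigues' formula can be verified by exact polynomial arithmetic. The sketch below (pure Python, using `fractions`) compares the coefficient list produced by differentiating $(x^2 - 1)^n$ $n$ times against the explicit sum defining $P_n$:

```python
from fractions import Fraction
from math import factorial

def legendre_sum(n):
    # Ascending coefficients of P_n from the explicit sum in Definition 4.4.
    coef = [Fraction(0)] * (n + 1)
    for l in range(n // 2 + 1):
        coef[n - 2 * l] = Fraction((-1) ** l * factorial(2 * n - 2 * l),
                                   2 ** n * factorial(l) * factorial(n - l)
                                   * factorial(n - 2 * l))
    return coef

def legendre_rodrigues(n):
    # Expand (x^2 - 1)^n, differentiate n times, divide by n! 2^n.
    p = [Fraction(0)] * (2 * n + 1)
    for r in range(n + 1):
        p[2 * n - 2 * r] = Fraction((-1) ** r * factorial(n),
                                    factorial(r) * factorial(n - r))
    for _ in range(n):                       # term-by-term differentiation
        p = [k * p[k] for k in range(1, len(p))]
    return [c / (factorial(n) * 2 ** n) for c in p]

assert legendre_rodrigues(4) == legendre_sum(4)
```

Since the arithmetic is exact rational, the comparison is an equality of coefficient lists, not a floating-point approximation.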


Generating function: $\displaystyle \frac{1}{\sqrt{1 - 2xu + u^2}} = \sum_{n=0}^{\infty} P_n(x)\, u^n$.

For a fraction $\alpha$, we use the binomial expansion:
$$(1 + \theta)^{\alpha} = 1 + \sum_{n=1}^{\infty} \binom{\alpha}{n} \theta^n, \qquad \binom{\alpha}{n} := \frac{1}{n!}\, \alpha(\alpha - 1) \cdots (\alpha - n + 1).$$

Thus, for $\alpha = -1/2$,
$$\binom{-1/2}{n} = \frac{1}{n!} \Big( -\frac{1}{2} \Big) \Big( -\frac{1}{2} - 1 \Big) \cdots \Big( -\frac{1}{2} - n + 1 \Big) = \frac{(-1)^n}{n!} \Big( \frac{1}{2} \Big) \Big( \frac{3}{2} \Big) \Big( \frac{5}{2} \Big) \cdots \Big( \frac{2n - 1}{2} \Big) = \frac{(-1)^n}{n!\,2^n} \Big[ \frac{(2n)!}{2^n n!} \Big] = (-1)^n \frac{(2n)!}{2^{2n} (n!)^2}.$$
Thus,
$$(1 - \theta)^{-1/2} = \sum_{n=0}^{\infty} a_n \theta^n, \qquad a_n := \frac{(2n)!}{2^{2n} (n!)^2}.$$
Also,
$$(2xu - u^2)^n = \sum_{k=0}^{n} \frac{n!}{k!\,(n-k)!} (2xu)^{n-k} (-u^2)^k = \sum_{k=0}^{n} (-1)^k \frac{n!}{k!\,(n-k)!}\, 2^{n-k} x^{n-k} u^{n+k}.$$
Thus,
$$(2xu - u^2)^n = \sum_{k=0}^{n} b_{n,k}\, x^{n-k} u^{n+k}, \qquad b_{n,k} = (-1)^k \frac{n!}{k!\,(n-k)!}\, 2^{n-k}.$$
Taking $\theta = 2xu - u^2$, we have
$$(1 - 2xu + u^2)^{-1/2} = \sum_{n=0}^{\infty} a_n \Big[ \sum_{k=0}^{n} b_{n,k}\, x^{n-k} u^{n+k} \Big]$$
$$= a_0 + a_1 b_{1,0}\, x u + (a_1 b_{1,1} + a_2 b_{2,0}\, x^2) u^2 + (a_2 b_{2,1}\, x + a_3 b_{3,0}\, x^3) u^3 + (a_2 b_{2,2} + a_3 b_{3,1}\, x^2 + a_4 b_{4,0}\, x^4) u^4 + \cdots$$
$$= f_0(x) + f_1(x) u + f_2(x) u^2 + \cdots,$$
where
$$f_n(x) = \sum_{k=0}^{M_n} a_{n-k}\, b_{n-k,k}\, x^{n - 2k}.$$
Since
$$a_{n-k}\, b_{n-k,k} = \frac{[2(n-k)]!}{2^{2(n-k)} [(n-k)!]^2} \cdot (-1)^k \frac{(n-k)!}{k!\,(n - 2k)!}\, 2^{n - 2k} = (-1)^k \frac{(2n - 2k)!}{2^n\, k!\,(n-k)!\,(n - 2k)!},$$
we have
$$f_n(x) = P_n(x).$$
Thus,
$$\frac{1}{\sqrt{1 - 2xu + u^2}} = \sum_{n=0}^{\infty} P_n(x)\, u^n.$$
Note that, taking $x = 1$,
$$\sum_{n=0}^{\infty} u^n = \frac{1}{1 - u} = \frac{1}{\sqrt{1 - 2u + u^2}} = \sum_{n=0}^{\infty} P_n(1)\, u^n,$$
so that $P_n(1) = 1$ for all $n$.

Recurrence formulae:

1. $(n+1) P_{n+1}(x) = (2n+1)\, x P_n(x) - n P_{n-1}(x)$.

2. $n P_n(x) = x P_n'(x) - P_{n-1}'(x)$.

3. $(2n+1) P_n(x) = P_{n+1}'(x) - P_{n-1}'(x)$.

4. $P_{n+1}'(x) = x P_n'(x) + (n+1) P_n(x)$.

5. $(1 - x^2) P_n'(x) = n [P_{n-1}(x) - x P_n(x)]$.

Proofs. 1. Recall that the generating function for $(P_n)$ is $(1 - 2xt + t^2)^{-1/2}$, i.e.,
$$(1 - 2xt + t^2)^{-1/2} = \sum_{n=0}^{\infty} P_n(x)\, t^n.$$
Differentiating with respect to $t$:
$$(x - t)(1 - 2xt + t^2)^{-3/2} = \sum_{n=1}^{\infty} n P_n(x)\, t^{n-1},$$
i.e.,
$$(x - t)(1 - 2xt + t^2)^{-1/2} = (1 - 2xt + t^2) \sum_{n=1}^{\infty} n P_n(x)\, t^{n-1},$$
i.e.,
$$(x - t) \sum_{n=0}^{\infty} P_n(x)\, t^n = (1 - 2xt + t^2) \sum_{n=1}^{\infty} n P_n(x)\, t^{n-1} = (1 - 2xt + t^2) \sum_{n=0}^{\infty} (n+1) P_{n+1}(x)\, t^n.$$
Equating the coefficients of $t^n$, we obtain
$$x P_n(x) - P_{n-1}(x) = (n+1) P_{n+1}(x) - 2x\, n P_n(x) + (n-1) P_{n-1}(x),$$
i.e.,
$$(n+1) P_{n+1}(x) = (2n+1)\, x P_n(x) - n P_{n-1}(x).$$
2. Differentiating the generating function with respect to $t$:
$$(x - t)(1 - 2xt + t^2)^{-3/2} = \sum_{n=1}^{\infty} n P_n(x)\, t^{n-1}.$$
Differentiating it with respect to $x$:
$$t\,(1 - 2xt + t^2)^{-3/2} = \sum_{n=0}^{\infty} P_n'(x)\, t^n.$$
Hence,
$$(x - t)\, t\, (1 - 2xt + t^2)^{-3/2} = \sum_{n=1}^{\infty} n P_n(x)\, t^n = \sum_{n=0}^{\infty} n P_n(x)\, t^n.$$
Thus,
$$(x - t) \sum_{n=0}^{\infty} P_n'(x)\, t^n = \sum_{n=0}^{\infty} n P_n(x)\, t^n.$$
Equating the coefficients of $t^n$, we obtain $n P_n(x) = x P_n'(x) - P_{n-1}'(x)$.

3. Differentiating the recurrence relation in (1) with respect to $x$ and then using the expression for $x P_n'(x)$ from (2), we get the result in (3).

4. Differentiating the recurrence relation in (1) with respect to $x$ leads to
$$(n+1) P_{n+1}'(x) = (2n+1) P_n(x) + (n+1)\, x P_n'(x) + n [x P_n'(x) - P_{n-1}'(x)].$$
Now, using (2) leads to the required relation.

5. The recurrence relations in (2) and (4) (with $n$ replaced by $n - 1$ in (4)) imply the required relation.
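Both the generating function and recurrence (1) can be checked numerically. A small sketch that evaluates $P_n(x)$ via recurrence (1) and compares the partial sum $\sum P_n(x) u^n$ with $(1 - 2xu + u^2)^{-1/2}$; the sample point $x = 0.4$, $u = 0.3$ is an arbitrary choice with $|u| < 1$:

```python
import math

def P(n, x):
    # Legendre polynomials via recurrence (1): (n+1)P_{n+1} = (2n+1)x P_n - n P_{n-1}.
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

# Generating function: sum P_n(x) u^n should equal (1 - 2xu + u^2)^(-1/2).
x, u = 0.4, 0.3
series = sum(P(n, x) * u ** n for n in range(40))
closed = 1.0 / math.sqrt(1 - 2 * x * u + u * u)
assert abs(series - closed) < 1e-12
assert P(7, 1.0) == 1.0        # P_n(1) = 1, as derived above
```

Forty terms suffice here since $|P_n(x)| \le 1$ on $[-1, 1]$ and $|u|^{40}$ is negligible.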


Exercise 4.5. 1. Show that $P_n'(1) = \dfrac{n(n+1)}{2}$.
(Hint: Use the fact that $P_n(x)$ satisfies the Legendre equation.)

2. Using the generating function derive

(a) $P_n(-1) = (-1)^n$,

(b) $P_n(-x) = (-1)^n P_n(x)$. (Hint: Replace $x$ by $y := -x$ and then $t$ by $\tau := -t$.)

3. Find the values of $\displaystyle \int_{-1}^{1} x [P_n(x)]^2\, dx$, $\displaystyle \int_{-1}^{1} x^2 [P_n(x)]^2\, dx$, $\displaystyle \int_{-1}^{1} x^2 P_{n+1}(x) P_{n-1}(x)\, dx$.
(Hint: Use the recurrence formulae.)

4. Prove that for every polynomial $q(x)$ of degree $n$, there exists a unique $(n+1)$-tuple $(a_0, a_1, \ldots, a_n)$ of real numbers such that $q(x) = a_0 P_0(x) + a_1 P_1(x) + \cdots + a_n P_n(x)$.
(Hint: Use induction on the degree.)

4.3 Power series solution around singular points

Look at the DE:
$$x^2 y'' - (1 + x) y = 0.$$
Does it have a nonzero solution of the form $\sum_{n=0}^{\infty} a_n x^n$? Following our method of substitution and determination of coefficients, it can be seen that $a_n = 0$ for all $n \in \mathbb{N}_0$.

What went wrong?

Note that the above DE is the same as
$$y'' - \frac{1 + x}{x^2}\, y = 0,$$
which is of the form
$$y'' + p(x) y' + q(x) y = 0 \tag{1}$$
with $p(x) = 0$ and $q(x) = -\dfrac{1 + x}{x^2}$. Note that $q(x)$ is not analytic at $x_0 = 0$.
Definition 4.6. A point $x_0 \in \mathbb{R}$ is called a regular point of (1) if $p(x)$ and $q(x)$ are analytic at $x_0$. If $x_0$ is not a regular point of (1), then it is called a singular point of (1).
Example 4.7. 1. Consider $(x - 1) y'' + x y' + \dfrac{y}{x} = 0$. This takes the form (1) with
$$p(x) = \frac{x}{x - 1}, \qquad q(x) = \frac{1}{x(x - 1)}.$$
Note that $x = 0$ and $x = 1$ are singular points of the DE. All other points in $\mathbb{R}$ are regular points.

2. Consider the Cauchy equation: $x^2 y'' + 2x y' - 2y = 0$. This takes the form (1) with
$$p(x) = \frac{2}{x}, \qquad q(x) = -\frac{2}{x^2}.$$
Note that $x = 0$ is the only singular point of this DE.

Definition 4.8. A singular point $x_0 \in \mathbb{R}$ of the DE (1) is called a regular singular point if $(x - x_0)\, p(x)$ and $(x - x_0)^2\, q(x)$ are analytic at $x_0$. Otherwise, $x_0$ is called an irregular singular point of (1).

Example 4.9. Consider $x^2 (x - 2) y'' + 2 y' + (x + 1) y = 0$. This takes the form (1) with
$$p(x) = \frac{2}{x^2 (x - 2)}, \qquad q(x) = \frac{x + 1}{x^2 (x - 2)}.$$
Note that
$$x\, p(x) = \frac{2}{x(x - 2)}, \qquad x^2 q(x) = \frac{x + 1}{x - 2},$$
$$(x - 2)\, p(x) = \frac{2}{x^2}, \qquad (x - 2)^2 q(x) = \frac{(x + 1)(x - 2)}{x^2}.$$
We see that

$x = 0$ is an irregular singular point,

$x = 2$ is a regular singular point.

Example 4.10. Consider the DE
$$y'' + \frac{b(x)}{x}\, y' + \frac{c(x)}{x^2}\, y = 0,$$
where $b(x)$ and $c(x)$ are analytic at $0$. Note that the above equation is of the form (1) with $p(x) = \dfrac{b(x)}{x}$ and $q(x) = \dfrac{c(x)}{x^2}$. Thus, $0$ is a regular singular point of the given DE.

4.3.1 Frobenius method

It is known that a DE of the form
$$y'' + \frac{b(x)}{x}\, y' + \frac{c(x)}{x^2}\, y = 0, \tag{1}$$
where $b(x)$ and $c(x)$ are analytic at $0$, has a solution of the form
$$y(x) = x^r \sum_{n=0}^{\infty} a_n x^n$$
for some real or complex number $r$ and for some real numbers $a_0, a_1, a_2, \ldots$ with $a_0 \neq 0$.

Note that (1) is the same as
$$x^2 y'' + x\, b(x)\, y' + c(x)\, y = 0, \tag{2}$$
and it reduces to the Euler-Cauchy equation when $b(x)$ and $c(x)$ are constant functions.

Substituting the expression for $y$ into (2), we get
$$x^2 \sum_{n=0}^{\infty} (n+r)(n+r-1) a_n x^{n+r-2} + x\, b(x) \sum_{n=0}^{\infty} (n+r) a_n x^{n+r-1} + c(x) \sum_{n=0}^{\infty} a_n x^{n+r} = 0.$$
That is,
$$\sum_{n=0}^{\infty} (n+r)(n+r-1) a_n x^{n+r} + b(x) \sum_{n=0}^{\infty} (n+r) a_n x^{n+r} + c(x) \sum_{n=0}^{\infty} a_n x^{n+r} = 0. \tag{3}$$
Let
$$b(x) = \sum_{n=0}^{\infty} b_n x^n, \qquad c(x) = \sum_{n=0}^{\infty} c_n x^n.$$
Comparing coefficients of $x^r$, we get
$$[r(r-1) + b_0 r + c_0]\, a_0 = 0.$$
Since $a_0 \neq 0$, this gives the quadratic equation $r(r-1) + b_0 r + c_0 = 0$, called the indicial equation of (1).

Let $r_1, r_2$ be the roots of the indicial equation. Then one of the solutions is
$$y_1(x) = x^{r_1} \sum_{n=0}^{\infty} a_n x^n,$$
where $a_0, a_1, \ldots$ are obtained by comparing coefficients of $x^{n+r}$, $n = 0, 1, 2, \ldots$, in (3) for $r = r_1$.


Another solution, linearly independent of $y_1$, is obtained using the method of variation of parameters. Recall that, in this method, we

assume the second solution $y_2$ to be of the form $y_2(x) = u(x)\, y_1(x)$,

substitute the expressions for $y_2, y_2', y_2''$ into (2),

use the fact that $y_1(x)$ satisfies (2),

obtain a first order ODE for $u(x)$, and

solve it to obtain an expression for $u(x)$.

We have seen that
$$y_2(x) = y_1(x) \int \frac{e^{-\int p(x)\,dx}}{[y_1(x)]^2}\, dx, \qquad p(x) := \frac{b(x)}{x}.$$
In case $y_1(x)$ is already in a simple form, the above expression can be used directly. Otherwise, one may use the above steps to reach an appropriate expression for $y_2(x)$ by making use of the series expression for $y_1(x)$.

By the above procedure we have the following (see Kreyszig):

Case 1: If $r_1$ and $r_2$ are distinct and do not differ by an integer, then $y_2$ is of the form
$$y_2(x) = x^{r_2} \sum_{n=0}^{\infty} A_n x^n.$$
Case 2: If $r_1 = r_2 = r$, say, i.e., $r$ is a double root, then $y_2$ is of the form
$$y_2(x) = y_1(x) \ln(x) + x^r \sum_{n=1}^{\infty} A_n x^n.$$
Case 3: If $r_1$ and $r_2$ differ by a nonzero integer, with $r_1 > r_2$, then $y_2$ is of the form
$$y_2(x) = k\, y_1(x) \ln(x) + x^{r_2} \sum_{n=0}^{\infty} A_n x^n,$$
where the constant $k$ may turn out to be zero.

The method described above is called the Frobenius method.³


³George Frobenius (1849-1917) was a German mathematician.

Example 4.11. Let us find linearly independent solutions for the Euler-Cauchy equation:
$$x^2 y'' + b_0 x y' + c_0 y = 0.$$
Note that this is of the form (2) with $b(x) = b_0$, $c(x) = c_0$, constants. Assuming a solution of the form $y = x^r \sum_{n=0}^{\infty} a_n x^n$, we obtain
$$\sum_{n=0}^{\infty} (n+r)(n+r-1) a_n x^{n+r} + b_0 \sum_{n=0}^{\infty} (n+r) a_n x^{n+r} + c_0 \sum_{n=0}^{\infty} a_n x^{n+r} = 0.$$
Now, equating the coefficient of $x^r$ to $0$, we get the indicial equation $[r(r-1) + b_0 r + c_0]\, a_0 = 0$ with $a_0 \neq 0$, so that
$$r^2 - (1 - b_0) r + c_0 = 0.$$
For a root $r$ of this equation and $n \in \mathbb{N}$,
$$[(n+r)(n+r-1) + b_0 (n+r) + c_0]\, a_n = 0,$$
and we can take $a_n = 0$ for all $n \in \mathbb{N}$. Thus, $y_1(x) = x^r$. The other solution is given by
$$y_2(x) = y_1(x) \int \frac{e^{-\int p(x)\,dx}}{[y_1(x)]^2}\, dx, \qquad p(x) := \frac{b_0}{x},$$
i.e.,
$$y_2(x) = x^r \int \frac{x^{-b_0}}{x^{2r}}\, dx = x^r \int \frac{dx}{x^{2r + b_0}}.$$
If $r$ is a double root, then $2r + b_0 = 1$, so that
$$y_2(x) = x^r \ln(x).$$
If $r$ is not a double root, then
$$y_2(x) = x^r \int \frac{dx}{x^{2r + b_0}} = \frac{x^r\, x^{1 - 2r - b_0}}{1 - 2r - b_0} = \frac{x^{1 - r - b_0}}{1 - 2r - b_0}.$$
If $r = r_1$ and $r_2$ are the roots, then $r_1 + r_2 = 1 - b_0$, so that $1 - r_1 - b_0 = r_2$ and hence
$$y_2(x) = \frac{x^{r_2}}{r_2 - r_1}.$$
Thus, $x^{r_1}$ and $x^{r_2}$ are linearly independent solutions.

Example 4.12. Consider the DE:
$$x(x - 1) y'' + (3x - 1) y' + y = 0. \tag{$*$}$$

This is of the form (1) with $b(x) = \dfrac{3x - 1}{x - 1}$, $c(x) = \dfrac{x}{x - 1}$. Now, taking $y = x^r \sum_{n=0}^{\infty} a_n x^n$, we obtain
$$x(x - 1) y'' = (x^2 - x) \sum_{n=0}^{\infty} (n+r)(n+r-1) a_n x^{n+r-2} = \sum_{n=0}^{\infty} (n+r)(n+r-1) a_n x^{n+r} - \sum_{n=0}^{\infty} (n+r)(n+r-1) a_n x^{n+r-1},$$
$$(3x - 1) y' = (3x - 1) \sum_{n=0}^{\infty} (n+r) a_n x^{n+r-1} = \sum_{n=0}^{\infty} 3(n+r) a_n x^{n+r} - \sum_{n=0}^{\infty} (n+r) a_n x^{n+r-1}.$$
Hence, $(*)$ becomes
$$\sum_{n=0}^{\infty} [(n+r)(n+r-1) + 3(n+r) + 1] a_n x^{n+r} - \sum_{n=0}^{\infty} [(n+r)(n+r-1) + (n+r)] a_n x^{n+r-1} = 0.$$
Equating the coefficient of $x^{r-1}$ to $0$, we get the indicial equation $r(r-1) + r = 0$, i.e., $r^2 = 0$. Thus, $r = 0$ is a double root of the indicial equation. Hence, we obtain
$$\sum_{n=0}^{\infty} [n(n-1) + 3n + 1] a_n x^n - \sum_{n=1}^{\infty} [n(n-1) + n] a_n x^{n-1} = 0,$$
i.e.,
$$\sum_{n=0}^{\infty} (n+1)^2 a_n x^n - \sum_{n=1}^{\infty} n^2 a_n x^{n-1} = 0, \quad \text{i.e.,} \quad \sum_{n=0}^{\infty} (n+1)^2 a_n x^n - \sum_{n=0}^{\infty} (n+1)^2 a_{n+1} x^n = 0.$$
Thus, $a_{n+1} = a_n$ for all $n \in \mathbb{N}_0$, and consequently, taking $a_0 = 1$,
$$y_1(x) = \sum_{n=0}^{\infty} x^n = \frac{1}{1 - x}.$$
Now,
$$y_2(x) = y_1(x) \int \frac{e^{-\int p\,dx}}{[y_1(x)]^2}\, dx, \qquad p(x) := \frac{3x - 1}{x(x - 1)}.$$
Note that
$$\int p(x)\, dx = \int \frac{3}{x - 1}\, dx - \int \frac{1}{x(x - 1)}\, dx = \int \frac{3}{x - 1}\, dx + \int \frac{1}{x}\, dx - \int \frac{1}{x - 1}\, dx$$
$$= 3 \ln|x - 1| + \ln|x| - \ln|x - 1| = 2 \ln|x - 1| + \ln|x| = \ln|(x - 1)^2 x|,$$
so that
$$\frac{e^{-\int p\,dx}}{[y_1(x)]^2} = \frac{1}{(x - 1)^2 x\, [y_1(x)]^2} = \frac{1}{x}.$$
Thus,
$$y_2(x) = \frac{\ln(x)}{1 - x}.$$
Example 4.13. Consider the DE:
$$(x^2 - 1) x^2 y'' - (x^2 + 1) x y' + (x^2 + 1) y = 0. \tag{$*$}$$
This is of the form (1) with $b(x) = -\dfrac{x^2 + 1}{x^2 - 1}$, $c(x) = \dfrac{x^2 + 1}{x^2 - 1}$. Now, taking $y = x^r \sum_{n=0}^{\infty} a_n x^n$, we obtain
$$(x^2 - 1) x^2 y'' = \sum_{n=0}^{\infty} (n+r)(n+r-1) a_n x^{n+r+2} - \sum_{n=0}^{\infty} (n+r)(n+r-1) a_n x^{n+r},$$
$$(x^2 + 1) x y' = \sum_{n=0}^{\infty} (n+r) a_n x^{n+r+2} + \sum_{n=0}^{\infty} (n+r) a_n x^{n+r},$$
$$(x^2 + 1) y = \sum_{n=0}^{\infty} a_n x^{n+r+2} + \sum_{n=0}^{\infty} a_n x^{n+r}.$$
Thus, $(*)$ takes the form
$$\sum_{n=0}^{\infty} [(n+r)(n+r-1) - (n+r) + 1] a_n x^{n+r+2} - \sum_{n=0}^{\infty} [(n+r)(n+r-1) + (n+r) - 1] a_n x^{n+r} = 0. \tag{$**$}$$
Equating the coefficient of $x^r$ to $0$, we get the indicial equation
$$[r(r-1) + r - 1]\, a_0 = 0, \quad \text{i.e.,} \quad r^2 - 1 = 0.$$
The roots are $r_1 = 1$ and $r_2 = -1$. For $r_1 = 1$, $(**)$ takes the form
$$\sum_{n=0}^{\infty} [(n+1)n - (n+1) + 1] a_n x^{n+3} - \sum_{n=0}^{\infty} [(n+1)n + (n+1) - 1] a_n x^{n+1} = 0,$$
i.e.,
$$\sum_{n=0}^{\infty} n^2 a_n x^{n+3} - \sum_{n=0}^{\infty} n(n+2) a_n x^{n+1} = 0.$$
This implies $a_1 = 0$ and
$$n^2 a_n - (n+2)(n+4) a_{n+2} = 0 \quad \forall\, n \in \mathbb{N}_0.$$
Hence, $a_n = 0$ for all $n \in \mathbb{N}$, so that $y(x) = a_0 x$. Taking $y_1(x) = x$, we obtain the second solution $y_2$ as
$$y_2(x) = y_1 \int \frac{e^{-\int p\,dx}}{y_1^2}\, dx,$$
where
$$p = -\frac{x^2 + 1}{(x^2 - 1) x} = -\frac{(x^2 - 1) + 2}{(x^2 - 1) x} = -\Big[ \frac{1}{x - 1} + \frac{1}{x + 1} - \frac{1}{x} \Big].$$
Hence, $e^{-\int p\,dx} = \dfrac{x^2 - 1}{x}$, so that
$$y_2(x) = y_1 \int \frac{e^{-\int p\,dx}}{y_1^2}\, dx = x \int \Big( \frac{x^2 - 1}{x} \Big) \frac{1}{x^2}\, dx = x \int \frac{x^2 - 1}{x^3}\, dx = x \Big( \ln(x) + \frac{1}{2x^2} \Big).$$
Thus,
$$y_1 = x, \qquad y_2 = x \ln(x) + \frac{1}{2x}$$
are linearly independent solutions.

Remark 4.14. It can be seen that if we take the solution as $y = x^r \sum_{n=0}^{\infty} A_n x^n$ with $r = -1$, then we arrive at $A_n = 0$ for all $n$, so that the requirement $A_0 \neq 0$ is violated, and the resulting expression will not be a second linearly independent solution.

4.3.2 Bessel's equation

Bessel's equation is given by
$$x^2 y'' + x y' + (x^2 - \nu^2) y = 0,$$
where $\nu$ is a non-negative real number. This is a special case of the equation
$$y'' + p(x) y' + q(x) y = 0,$$
where $p, q$ are such that $x p(x)$ and $x^2 q(x)$ are analytic at $0$, i.e., $0$ is a regular singular point. Thus, the Frobenius method can be applied.

Taking a solution $y$ of the form $y = x^r \sum_{n=0}^{\infty} a_n x^n$, we have
$$\sum_{n=0}^{\infty} (n+r)(n+r-1) a_n x^{n+r} + \sum_{n=0}^{\infty} (n+r) a_n x^{n+r} + \sum_{n=2}^{\infty} a_{n-2} x^{n+r} - \nu^2 \sum_{n=0}^{\infty} a_n x^{n+r} = 0.$$
Equating the coefficient of $x^r$ to $0$: $[r(r-1) + r - \nu^2]\, a_0 = 0$, i.e., $r^2 - \nu^2 = 0$.

Equating the coefficient of $x^{r+1}$ to $0$: $[(r+1)^2 - \nu^2]\, a_1 = 0$.

Coefficient of $x^{r+n}$, $n \ge 2$: $[(n+r)(n+r-1) + (n+r) - \nu^2]\, a_n + a_{n-2} = 0$.

Thus, the roots of the indicial equation are $r_1 = \nu$, $r_2 = -\nu$. Taking $r = r_1 = \nu$, we have $a_1 = 0$ and
$$a_n = -\frac{a_{n-2}}{(n+r)(n+r-1) + (n+r) - \nu^2} = -\frac{a_{n-2}}{n^2 + 2n\nu}, \qquad n = 2, 3, \ldots.$$
Hence, $a_{2n-1} = 0$ for all $n \in \mathbb{N}$ and
$$a_{2n} = -\frac{a_{2n-2}}{(2n)^2 + 4n\nu} = -\frac{a_{2n-2}}{2^2\, n\, (n + \nu)}, \qquad n \in \mathbb{N}.$$
It is a usual convention to take
$$a_0 = \frac{1}{2^{\nu}\, \Gamma(\nu + 1)}, \qquad \Gamma(\alpha) := \int_0^{\infty} e^{-t}\, t^{\alpha - 1}\, dt, \quad \alpha > 0.$$
Recall that $\Gamma(\alpha + 1) = \alpha\, \Gamma(\alpha)$. Then we have
$$a_2 = -\frac{a_0}{2^2 (1 + \nu)} = -\frac{1}{2^{2+\nu}\, (\nu + 1)\, \Gamma(\nu + 1)} = -\frac{1}{2^{2+\nu}\, \Gamma(\nu + 2)},$$
$$a_4 = -\frac{a_2}{2^2 \cdot 2\, (2 + \nu)} = (-1)^2 \frac{1}{2^{4+\nu}\, 2\, \Gamma(\nu + 3)},$$
and in general
$$a_{2n} = \frac{(-1)^n}{2^{2n+\nu}\, n!\, \Gamma(\nu + n + 1)}.$$
The corresponding solution is
$$J_{\nu}(x) = \sum_{n=0}^{\infty} \frac{(-1)^n}{2^{2n+\nu}\, n!\, \Gamma(n + \nu + 1)}\, x^{2n+\nu},$$
which is called the Bessel function of the first kind of order $\nu$.
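The series can be summed numerically, with `math.gamma` supplying $\Gamma(n + \nu + 1)$. Assuming the standard numerical value $2.404825557695773$ for the first positive zero of $J_0$, a quick sanity check:

```python
import math

def J(nu, x, terms=60):
    # Partial sum of the series defining J_nu; math.gamma supplies Gamma(n + nu + 1).
    s = 0.0
    for n in range(terms):
        s += (-1) ** n / (math.gamma(n + 1) * math.gamma(n + nu + 1)) \
             * (x / 2) ** (2 * n + nu)
    return s

assert abs(J(0, 0.0) - 1.0) < 1e-15      # J_0(0) = 1
# First positive zero of J_0 (assumed value): the series should vanish there.
assert abs(J(0, 2.404825557695773)) < 1e-10
```

For moderate $x$ the factorially decaying terms make a 60-term truncation far more accurate than double precision requires.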

Observe:

Since the Bessel equation involves only $\nu^2$, it follows that
$$J_{-\nu}(x) = \sum_{n=0}^{\infty} \frac{(-1)^n}{2^{2n-\nu}\, n!\, \Gamma(n - \nu + 1)}\, x^{2n-\nu}$$
is also a solution.

If $\nu$ is not an integer, then $J_{\nu}(x)$ and $J_{-\nu}(x)$ are linearly independent solutions.

If $\nu$ is an integer, say $\nu = k \in \mathbb{N}$, then
$$J_{-k}(x) = (-1)^k J_k(x), \tag{$*$}$$
so that $J_k$ and $J_{-k}$ are linearly dependent.

To see the above relation $(*)$, note that
$$J_k(x) = \sum_{n=0}^{\infty} \frac{(-1)^n}{2^{2n+k}\, n!\, \Gamma(k + n + 1)}\, x^{2n+k} = \sum_{n=0}^{\infty} \frac{(-1)^n}{2^{2n+k}\, n!\, (n + k)!}\, x^{2n+k}.$$
Also, recall that $1/\Gamma(\alpha) \to 0$ as $\alpha \to -m$ for each $m \in \mathbb{N}_0$; hence, in the series for $J_{-k}$, the terms with $n = 0, 1, \ldots, k - 1$ vanish. Hence, for $\nu = k \in \mathbb{N}$,
$$J_{-k}(x) = \sum_{n=0}^{\infty} \frac{(-1)^n}{2^{2n-k}\, n!\, \Gamma(n - k + 1)}\, x^{2n-k} = \sum_{n=k}^{\infty} \frac{(-1)^n}{2^{2n-k}\, n!\, (n - k)!}\, x^{2n-k} = \sum_{n=0}^{\infty} \frac{(-1)^{n+k}}{2^{2n+k}\, (n + k)!\, n!}\, x^{2n+k} = (-1)^k J_k(x).$$
Now, for an integer $k$, to obtain a second solution of the Bessel equation which is linearly independent of $J_k$, we can use the general method: write the Bessel equation as
$$y'' + p(x) y' + q(x) y = 0$$
and, knowing a solution $y_1$, obtain $y_2 := y_1(x) \displaystyle\int \frac{e^{-\int p(x)\,dx}}{y_1^2}\, dx$. Note that
$$p(x) = \frac{1}{x}, \qquad q(x) = \frac{x^2 - k^2}{x^2}.$$
Thus, the second solution according to the above formula is
$$Y_k(x) = J_k(x) \int \frac{dx}{x\, [J_k(x)]^2}.$$
This is called a Bessel function of the second kind of order $k$.

Now, we observe a few more relations:

1. $\big( x^{\nu} J_{\nu}(x) \big)' = x^{\nu} J_{\nu - 1}(x)$.

2. $\big( x^{-\nu} J_{\nu}(x) \big)' = -x^{-\nu} J_{\nu + 1}(x)$.

3. $J_{\nu - 1}(x) + J_{\nu + 1}(x) = \dfrac{2\nu}{x} J_{\nu}(x)$.

4. $J_{\nu - 1}(x) - J_{\nu + 1}(x) = 2 J_{\nu}'(x)$.

Proofs:
Note that
$$\big( x^{\nu} J_{\nu}(x) \big)' = \sum_{n=0}^{\infty} (-1)^n \frac{(2n + 2\nu)\, x^{2n + 2\nu - 1}}{2^{2n+\nu}\, n!\, \Gamma(n + \nu + 1)} = \sum_{n=0}^{\infty} (-1)^n \frac{2(n + \nu)\, x^{2n + 2\nu - 1}}{2^{2n+\nu}\, n!\, (n + \nu)\, \Gamma(n + \nu)} = \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n + 2\nu - 1}}{2^{2n+\nu-1}\, n!\, \Gamma(n + \nu)}$$
$$= x^{\nu} \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n + \nu - 1}}{2^{2n+\nu-1}\, n!\, \Gamma(n + \nu)} = x^{\nu} J_{\nu - 1}(x).$$
This proves (1). To prove (2), note that
$$\big( x^{-\nu} J_{\nu}(x) \big)' = \sum_{n=1}^{\infty} (-1)^n \frac{2n\, x^{2n - 1}}{2^{2n+\nu}\, n!\, \Gamma(n + \nu + 1)} = \sum_{n=0}^{\infty} (-1)^{n+1} \frac{2(n+1)\, x^{2n + 1}}{2^{2n+\nu+2}\, (n+1)!\, \Gamma(n + \nu + 2)}$$
$$= \sum_{n=0}^{\infty} (-1)^{n+1} \frac{x^{2n + 1}}{2^{2n+\nu+1}\, n!\, \Gamma(n + \nu + 2)} = -x^{-\nu} \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n + \nu + 1}}{2^{2n+\nu+1}\, n!\, \Gamma(n + \nu + 2)} = -x^{-\nu} J_{\nu + 1}(x).$$
Proofs of (3) & (4): From (1) and (2),
$$J_{\nu - 1}(x) + J_{\nu + 1}(x) = x^{-\nu} \big( x^{\nu} J_{\nu}(x) \big)' - x^{\nu} \big( x^{-\nu} J_{\nu}(x) \big)'$$
$$= x^{-\nu} [x^{\nu} J_{\nu}'(x) + \nu x^{\nu - 1} J_{\nu}(x)] - x^{\nu} [x^{-\nu} J_{\nu}'(x) - \nu x^{-\nu - 1} J_{\nu}(x)] = \frac{2\nu}{x} J_{\nu}(x),$$
$$J_{\nu - 1}(x) - J_{\nu + 1}(x) = x^{-\nu} \big( x^{\nu} J_{\nu}(x) \big)' + x^{\nu} \big( x^{-\nu} J_{\nu}(x) \big)'$$
$$= x^{-\nu} [x^{\nu} J_{\nu}'(x) + \nu x^{\nu - 1} J_{\nu}(x)] + x^{\nu} [x^{-\nu} J_{\nu}'(x) - \nu x^{-\nu - 1} J_{\nu}(x)] = 2 J_{\nu}'(x).$$
Using the fact that $\Gamma(\frac{1}{2}) = \sqrt{\pi}$, it can be shown (verify!) that
$$J_{1/2}(x) = \sqrt{\frac{2}{\pi x}}\, \sin x, \qquad J_{-1/2}(x) = \sqrt{\frac{2}{\pi x}}\, \cos x.$$
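Relation (3) and the half-order formulas can be checked against a truncated series; a sketch (the sample points $x = 1.7$ and $\nu = 2$ are arbitrary choices):

```python
import math

def J(nu, x, terms=60):
    # Truncated Bessel series, as an independent check of the identities above.
    return sum((-1) ** n / (math.gamma(n + 1) * math.gamma(n + nu + 1))
               * (x / 2) ** (2 * n + nu) for n in range(terms))

x = 1.7
# Identity (3): J_{nu-1}(x) + J_{nu+1}(x) = (2 nu / x) J_nu(x), with nu = 2.
nu = 2.0
assert abs(J(nu - 1, x) + J(nu + 1, x) - (2 * nu / x) * J(nu, x)) < 1e-12
# Half-order closed forms:
assert abs(J(0.5, x) - math.sqrt(2 / (math.pi * x)) * math.sin(x)) < 1e-12
assert abs(J(-0.5, x) - math.sqrt(2 / (math.pi * x)) * math.cos(x)) < 1e-12
```

Identity (4) can be checked the same way by differencing $J$ numerically.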
x x

4.4 Orthogonality of functions

Definition 4.15. Functions $f$ and $g$ defined on an interval $[a, b]$ are said to be orthogonal with respect to a nonzero weight function $w$ if
$$\int_a^b f(x)\, g(x)\, w(x)\, dx = 0.$$
A sequence $(f_n)$ of functions is said to be an orthogonal sequence of functions with respect to $w$ if
$$\int_a^b f_i(x)\, f_j(x)\, w(x)\, dx = 0 \quad \text{for } i \neq j.$$
[Here, we assume that the above integral exists; that is the case if, for example, the functions are continuous, or bounded and piecewise continuous.]

Note that, for $n, m \in \mathbb{N}$,
$$\int_0^{2\pi} \sin(nx)\sin(mx)\, dx = \begin{cases} \pi & \text{if } n = m, \\ 0 & \text{if } n \neq m, \end{cases} \qquad \int_0^{2\pi} \cos(nx)\cos(mx)\, dx = \begin{cases} \pi & \text{if } n = m, \\ 0 & \text{if } n \neq m, \end{cases}$$
$$\int_0^{2\pi} \sin(nx)\cos(mx)\, dx = 0.$$
Thus, writing
$$f_{2n-2}(x) = \cos(nx), \qquad f_{2n-1}(x) = \sin(nx) \quad \text{for } n \in \mathbb{N},$$
$(f_n)$ is an orthogonal sequence of functions with respect to $w = 1$.

Notation: We shall denote
$$\langle f, g \rangle_w := \int_a^b f(x)\, g(x)\, w(x)\, dx$$
and call this quantity the scalar product of $f$ and $g$ with respect to $w$. If $w(x) = 1$ for every $x \in [a, b]$, then we write $\langle f, g \rangle := \langle f, g \rangle_w$. We observe that

$\langle f, f \rangle_w \geq 0$,

$\langle f + g, h \rangle_w = \langle f, h \rangle_w + \langle g, h \rangle_w$,

$\langle c f, g \rangle_w = c\, \langle f, g \rangle_w$.

If $f$ and $w$ are continuous functions with $w(x) > 0$ on $[a, b]$, then
$$\langle f, f \rangle_w = 0 \iff f = 0.$$

Exercise 4.16. Let $f_1, \ldots, f_n$ be linearly independent continuous functions. Let $g_1 = f_1$ and define $g_2, \ldots, g_n$ iteratively as follows:
$$g_{j+1} = f_{j+1} - \sum_{i=1}^{j} \frac{\langle f_{j+1}, g_i \rangle_w}{\langle g_i, g_i \rangle_w}\, g_i, \qquad j = 1, \ldots, n - 1.$$
Prove that $g_1, \ldots, g_n$ are nonzero orthogonal functions with respect to $w$.

Definition 4.17. Functions $f_1, f_2, \ldots$ are said to be linearly independent if for every $n \in \mathbb{N}$, $f_1, \ldots, f_n$ are linearly independent, i.e., for every $n \in \mathbb{N}$, if $\alpha_1, \ldots, \alpha_n$ are scalars such that $\alpha_1 f_1 + \cdots + \alpha_n f_n = 0$, then $\alpha_i = 0$ for $i = 1, \ldots, n$.

Definition 4.18. A sequence $(f_n)$ on $[a, b]$ is said to be an orthonormal sequence of functions with respect to $w$ if $(f_n)$ is an orthogonal sequence with respect to $w$ and $\langle f_n, f_n \rangle_w = 1$ for every $n \in \mathbb{N}$.

Exercise 4.19. Let $f_j(x) = x^{j-1}$ for $j \in \mathbb{N}$. Find $g_1, g_2, \ldots$ as per the formula in Exercise 4.16 with $w(x) = 1$ and $[a, b] = [-1, 1]$. Observe that, for each $n \in \mathbb{N}$, $g_n$ is a scalar multiple of the Legendre polynomial $P_{n-1}$.
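The construction of Exercises 4.16 and 4.19 can be carried out exactly. A sketch (pure Python, exact rational arithmetic) applying the Gram-Schmidt formula to $1, x, x^2, x^3$ on $[-1, 1]$ with $w = 1$:

```python
from fractions import Fraction

def inner(p, q):
    # Exact \int_{-1}^{1} p(x) q(x) dx for ascending-coefficient polynomials;
    # odd powers integrate to zero over the symmetric interval.
    return sum(2 * ci * cj / (i + j + 1)
               for i, ci in enumerate(p) for j, cj in enumerate(q)
               if (i + j) % 2 == 0)

# Gram-Schmidt on f_j(x) = x^{j-1} over [-1, 1] with w = 1 (Exercise 4.19).
fs = [[Fraction(0)] * j + [Fraction(1)] for j in range(4)]   # 1, x, x^2, x^3
gs = []
for f in fs:
    g = list(f)
    for h in gs:
        coef = inner(f, h) / inner(h, h)
        pad = h + [Fraction(0)] * (len(g) - len(h))
        g = [gi - coef * hi for gi, hi in zip(g, pad)]
    gs.append(g)

# g_3 is a scalar multiple of P_2(x) = (3x^2 - 1)/2: here x^2 - 1/3.
assert gs[2] == [Fraction(-1, 3), Fraction(0), Fraction(1)]
```

Each $g_n$ comes out monic here; rescaling by the leading coefficient of $P_{n-1}$ recovers the Legendre polynomial exactly.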

4.4.1 Orthogonality of Legendre polynomials

Recall that for non-negative integers $n$, the Legendre equation is given by
$$(1 - x^2) y'' - 2x y' + \lambda_n y = 0, \qquad \lambda_n := n(n+1).$$
This equation can be written as:
$$[(1 - x^2) y']' + \lambda_n y = 0. \tag{$*$}$$
Recall that for each $n \in \mathbb{N}_0$, the Legendre polynomial
$$P_n(x) = \sum_{k=0}^{M_n} (-1)^k \frac{(2n - 2k)!}{2^n\, k!\, (n - k)!\, (n - 2k)!}\, x^{n - 2k}, \qquad M_n := \begin{cases} \frac{n}{2} & \text{if } n \text{ even}, \\ \frac{n-1}{2} & \text{if } n \text{ odd}, \end{cases}$$
satisfies the equation $(*)$. Thus,
$$[(1 - x^2) P_n']' + \lambda_n P_n = 0, \tag{$*$_1}$$
$$[(1 - x^2) P_m']' + \lambda_m P_m = 0. \tag{$*$_2}$$
Multiplying $(*_1)$ by $P_m$ and $(*_2)$ by $P_n$:
$$[(1 - x^2) P_n']' P_m + \lambda_n P_n P_m = 0, \qquad [(1 - x^2) P_m']' P_n + \lambda_m P_m P_n = 0,$$
and subtracting,
$$\{ [(1 - x^2) P_n']' P_m - [(1 - x^2) P_m']' P_n \} + (\lambda_n - \lambda_m) P_n P_m = 0,$$
i.e.,
$$[(1 - x^2) P_n' P_m]' - [(1 - x^2) P_m' P_n]' + (\lambda_n - \lambda_m) P_n P_m = 0.$$
Integrating over $[-1, 1]$,
$$\int_{-1}^{1} \{ [(1 - x^2) P_n' P_m]' - [(1 - x^2) P_m' P_n]' \}\, dx + (\lambda_n - \lambda_m) \int_{-1}^{1} P_n P_m\, dx = 0,$$
and since $(1 - x^2)$ vanishes at $\pm 1$,
$$(\lambda_n - \lambda_m) \int_{-1}^{1} P_n P_m\, dx = 0.$$
Thus,
$$n \neq m \implies \lambda_n \neq \lambda_m \implies \int_{-1}^{1} P_n P_m\, dx = 0.$$
Using the expression for $P_n$, it can be shown that
$$\int_{-1}^{1} P_n^2\, dx = \frac{2}{2n + 1}.$$
Hence, $\Big\{ \sqrt{\tfrac{2n+1}{2}}\, P_n : n \in \mathbb{N}_0 \Big\}$ is an orthonormal sequence of polynomials.
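Both conclusions can be verified exactly for small $n$, since the $P_n$ have rational coefficients. A sketch using exact integration of polynomials over $[-1, 1]$:

```python
from fractions import Fraction

def legendre_coeffs(n):
    # Ascending coefficient lists of P_n via the recurrence
    # (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}.
    a, b = [Fraction(1)], [Fraction(0), Fraction(1)]     # P_0 and P_1
    if n == 0:
        return a
    for k in range(1, n):
        nxt = [Fraction(0)] * (k + 2)
        for i, c in enumerate(b):                        # (2k+1)/(k+1) * x P_k
            nxt[i + 1] += Fraction(2 * k + 1, k + 1) * c
        for i, c in enumerate(a):                        # - k/(k+1) * P_{k-1}
            nxt[i] -= Fraction(k, k + 1) * c
        a, b = b, nxt
    return b

def inner(p, q):
    # Exact \int_{-1}^{1} p(x) q(x) dx; odd powers integrate to zero.
    return sum(2 * ci * cj / (i + j + 1)
               for i, ci in enumerate(p) for j, cj in enumerate(q)
               if (i + j) % 2 == 0)

assert inner(legendre_coeffs(3), legendre_coeffs(5)) == 0                # orthogonality
assert inner(legendre_coeffs(5), legendre_coeffs(5)) == Fraction(2, 11)  # 2/(2n+1)
```

Since the computation is exact, the orthogonality relation holds as an identity rather than up to rounding.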

Remark 4.20. Recall that for $n \in \mathbb{N}_0$, the Legendre polynomial $P_n(x)$ is of degree $n$, and $P_0, P_1, P_2, \ldots$ are orthogonal. Hence $P_0, P_1, P_2, \ldots$ are linearly independent. We recall the following result from Linear Algebra:
If $q_0, q_1, \ldots, q_n$ are polynomials which are

1. linearly independent, and

2. of degree at most $n$ for each $j = 0, 1, \ldots, n$,

then every polynomial $q$ of degree at most $n$ can be uniquely represented as
$$q = c_0 q_0 + c_1 q_1 + \cdots + c_n q_n.$$
If, in the above, $q_0, q_1, \ldots, q_n$ are also orthogonal, i.e., $\langle q_j, q_k \rangle = 0$ for $j \neq k$, then we obtain
$$c_j = \frac{\langle q, q_j \rangle}{\langle q_j, q_j \rangle}, \qquad j = 0, 1, \ldots, n.$$
Thus,
$$q = \sum_{j=0}^{n} c_j q_j = \sum_{j=0}^{n} \frac{\langle q, q_j \rangle}{\langle q_j, q_j \rangle}\, q_j.$$
In particular:

If $q$ is a polynomial of degree $n$, then
$$q = \sum_{j=0}^{n} \frac{\langle q, P_j \rangle}{\langle P_j, P_j \rangle}\, P_j,$$
where $P_0, P_1, \ldots$ are Legendre polynomials.

From Real Analysis, we recall that:

For every continuous function f defined on a closed and bounded interval [a, b], there exists a
sequence (qn ) of polynomials such that (qn ) converges to f uniformly on [a, b], i.e., for every
> 0 there exists a positive integer N such that

|f (x) qn (x)| n N , x [a, b].

The above result is known as Weierstrass approximation theorem. Using the above result it can be
shown that:

If q0 , q1 , . . . , are nonzero orthogonal polynomials on [a, b] such that max deg(qj ) n, then
0jn
every continuous function f defined on [a, b] can be represented as

X hq, qj i
f= cj qj , cj := , j N0 . ()
j=0
hqj , qj i

The equality in the above should be understood in the sense that



X
kf cj qj k 0 as n
j=n

where kgk2 := hg, gi.

The expansion in $(*)$ above is called the Fourier expansion of $f$ with respect to the orthogonal polynomials $q_n$, $n \in \mathbb{N}_0$. If we take $P_0, P_1, P_2, \ldots$ on $[-1, 1]$, then the corresponding Fourier expansion is known as the Fourier-Legendre expansion.

4.4.2 Orthogonality relations for Bessel functions

Recall that for a positive integer $n \in \mathbb{N}$, the Bessel function of the first kind of order $n$,
$$J_n(x) = \sum_{j=0}^{\infty} \frac{(-1)^j}{2^{2j+n}\, j!\, \Gamma(n + j + 1)}\, x^{2j+n},$$
is given by a power series, and it satisfies the Bessel equation:
$$x^2 J_n'' + x J_n' + (x^2 - n^2) J_n = 0.$$
THEOREM 4.21. If $\alpha$ and $\beta$ are positive zeros of $J_n(x)$, then
$$\int_0^1 x\, J_n(\alpha x)\, J_n(\beta x)\, dx = \begin{cases} 0 & \text{if } \alpha \neq \beta, \\ \frac{1}{2}\, [J_{n+1}(\alpha)]^2 & \text{if } \alpha = \beta. \end{cases}$$

Proof. Observe that, for $\lambda \in \mathbb{R}$, if $z = \lambda x$ and $y(x) = J_n(\lambda x)$, then
$$y'(x) = \lambda J_n'(z), \qquad y''(x) = \lambda^2 J_n''(z).$$
Thus, we have
$$z^2 J_n''(z) + z J_n'(z) + (z^2 - n^2) J_n(z) = 0 \iff \lambda^2 x^2 \frac{y''(x)}{\lambda^2} + \lambda x \frac{y'(x)}{\lambda} + (\lambda^2 x^2 - n^2) y(x) = 0$$
$$\iff x^2 y''(x) + x y'(x) + (\lambda^2 x^2 - n^2) y(x) = 0.$$
Now, let
$$u(x) = J_n(\alpha x), \qquad v(x) = J_n(\beta x).$$
Thus, we have
$$x^2 u'' + x u' + (\alpha^2 x^2 - n^2) u = 0, \qquad x^2 v'' + x v' + (\beta^2 x^2 - n^2) v = 0,$$
i.e.,
$$x u'' + u' + \Big( \alpha^2 x - \frac{n^2}{x} \Big) u = 0, \qquad x v'' + v' + \Big( \beta^2 x - \frac{n^2}{x} \Big) v = 0.$$
Multiplying the first equation by $v$ and the second by $u$ and subtracting,
$$x [v u'' - u v''] + [v u' - u v'] + (\alpha^2 - \beta^2)\, x u v = 0,$$
i.e.,
$$\frac{d}{dx} \big[ x (v u' - u v') \big] + (\alpha^2 - \beta^2)\, x u v = 0.$$
Integrating,
$$\int_0^1 \frac{d}{dx} \big[ x (v u' - u v') \big]\, dx + (\alpha^2 - \beta^2) \int_0^1 x u v\, dx = 0.$$
Since $x(vu' - uv')$ vanishes at $x = 0$, and $u(1) = J_n(\alpha) = 0$ and $v(1) = J_n(\beta) = 0$, it follows that
$$(\alpha^2 - \beta^2) \int_0^1 x u v\, dx = 0.$$
Hence,
$$\alpha \neq \beta \implies \int_0^1 x\, J_n(\alpha x)\, J_n(\beta x)\, dx = 0.$$
Next, we consider the case $\alpha = \beta$. Multiplying
$$x^2 u'' + x u' + (\alpha^2 x^2 - n^2) u = 0$$
by $2u'$,
$$2x^2 u' u'' + 2x u' u' + 2(\alpha^2 x^2 - n^2) u' u = 0,$$
i.e.,
$$[x^2 (u')^2]' + 2(\alpha^2 x^2 - n^2) u' u = 0.$$
Also,
$$[\alpha^2 x^2 u^2 - n^2 u^2]' = \alpha^2 (2x^2 u u' + 2x u^2) - n^2 (2 u u') = 2(\alpha^2 x^2 - n^2) u' u + 2\alpha^2 x u^2.$$
Thus,
$$[x^2 (u')^2]' + [\alpha^2 x^2 u^2 - n^2 u^2]' - 2\alpha^2 x u^2 = 0,$$
so that
$$\big[ x^2 (u')^2 \big]_0^1 + \big[ \alpha^2 x^2 u^2 - n^2 u^2 \big]_0^1 - 2\alpha^2 \int_0^1 x u^2\, dx = 0.$$
Since $u(1) = J_n(\alpha) = 0$ and $u(0) = J_n(0) = 0$, it follows that
$$[u'(1)]^2 - 2\alpha^2 \int_0^1 x u^2\, dx = 0,$$
i.e.,
$$\int_0^1 x\, [J_n(\alpha x)]^2\, dx = \frac{1}{2\alpha^2} [u'(1)]^2 = \frac{1}{2} [J_n'(\alpha)]^2 = \frac{1}{2} [J_{n+1}(\alpha)]^2.$$
The last equality follows since
$$(x^{-n} J_n)' = -x^{-n} J_{n+1}, \quad \text{i.e.,} \quad x^{-n} J_n' - n x^{-n-1} J_n = -x^{-n} J_{n+1},$$
so that, taking $x = \alpha$ and using $J_n(\alpha) = 0$,
$$J_n'(\alpha) = -J_{n+1}(\alpha).$$
Thus, $[J_n'(\alpha)]^2 = [J_{n+1}(\alpha)]^2$, and the proof is complete.
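Theorem 4.21 can be checked numerically for $n = 0$, assuming the standard numerical values of the first two positive zeros of $J_0$. The sketch below uses a truncated Bessel series and a composite Simpson rule:

```python
import math

def J(nu, x, terms=80):
    # Truncated series for the Bessel function of the first kind.
    return sum((-1) ** n / (math.gamma(n + 1) * math.gamma(n + nu + 1))
               * (x / 2) ** (2 * n + nu) for n in range(terms))

def simpson(f, a, b, m=2000):
    # Composite Simpson rule with m (even) subintervals.
    h = (b - a) / m
    return (f(a) + f(b)
            + 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, m // 2 + 1))
            + 2 * sum(f(a + 2 * i * h) for i in range(1, m // 2))) * h / 3

# First two positive zeros of J_0 (assumed standard numerical values):
alpha, beta = 2.404825557695773, 5.520078110286311
I0 = simpson(lambda x: x * J(0, alpha * x) * J(0, beta * x), 0.0, 1.0)
I1 = simpson(lambda x: x * J(0, alpha * x) ** 2, 0.0, 1.0)
assert abs(I0) < 1e-8                            # distinct zeros: orthogonal
assert abs(I1 - 0.5 * J(1, alpha) ** 2) < 1e-8   # equal zeros: J_{n+1}(alpha)^2 / 2
```

The integrands are smooth on $[0, 1]$, so Simpson's rule converges well beyond the tolerances asserted here.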

5 Sturm-Liouville problem (SLP)

Definition 5.1. For continuous real valued functions $p, q, r$ defined on an interval $[a, b]$ such that $r'$ exists and is continuous and $p(x) > 0$ for all $x \in [a, b]$, consider the differential equation
$$(r(x) y')' + [q(x) + \lambda p(x)] y = 0, \tag{1}$$
together with the boundary conditions
$$k_1 y(a) + k_2 y'(a) = 0, \tag{2}$$
$$\ell_1 y(b) + \ell_2 y'(b) = 0. \tag{3}$$
The problem of determining a scalar $\lambda$ and a corresponding nonzero function $y$ satisfying (1)-(3) is called a Sturm-Liouville problem (SLP). A scalar (real or complex number) $\lambda$ for which there is a nonzero function $y$ satisfying (1)-(3) is called an eigenvalue of the SLP, and in that case the function $y$ is called a corresponding eigenfunction.

We assume the following known result.

THEOREM 5.2. Under the assumptions on $p, q, r$ given in Definition 5.1, the set of all eigenvalues of the SLP is a countably infinite set.⁴

THEOREM 5.3. Eigenfunctions corresponding to distinct eigenvalues are orthogonal on $[a, b]$ with respect to the weight function $p(x)$.

Proof. Suppose $\lambda_1$ and $\lambda_2$ are eigenvalues of the SLP with corresponding eigenfunctions $y_1$ and $y_2$, respectively. Let us denote
$$Ly := [r(x) y']' + q(x) y.$$
Then we have
$$Ly_1 = -\lambda_1 p y_1, \qquad Ly_2 = -\lambda_2 p y_2,$$
so that
$$(Ly_1) y_2 - (Ly_2) y_1 = (\lambda_2 - \lambda_1)\, p\, y_1 y_2,$$
and hence
$$\int_a^b [(Ly_1) y_2 - (Ly_2) y_1]\, dx = (\lambda_2 - \lambda_1) \int_a^b p\, y_1 y_2\, dx.$$
Note that
$$(Ly_1) y_2 - (Ly_2) y_1 = [(r y_1') y_2 - (r y_2') y_1]'.$$
⁴A set $S$ is said to be countably infinite if it is in one-one correspondence with the set $\mathbb{N}$ of natural numbers. For example, other than $\mathbb{N}$ itself, the set $\mathbb{Z}$ of all integers and the set $\mathbb{Q}$ of all rational numbers are countably infinite. However, the set $\{x \in \mathbb{R} : 0 < x < 1\}$ is not countably infinite. An infinite set which is not countably infinite is called an uncountable set. For example, the set $\{x \in \mathbb{R} : 0 < x < 1\}$ is an uncountable set; so also is the set of all irrational numbers in $\{x \in \mathbb{R} : 0 < x < 1\}$.
Hence
$$\int_a^b [(Ly_1) y_2 - (Ly_2) y_1]\, dx = [(r y_1') y_2 - (r y_2') y_1](b) - [(r y_1') y_2 - (r y_2') y_1](a).$$
Using the boundary conditions, the expression on the right can be shown to be $0$. Thus, we obtain
$$(\lambda_2 - \lambda_1) \int_a^b p\, y_1 y_2\, dx = [(r y_1') y_2 - (r y_2') y_1](b) - [(r y_1') y_2 - (r y_2') y_1](a) = 0.$$
Therefore, if $\lambda_2 \neq \lambda_1$, we obtain $\displaystyle \int_a^b p\, y_1 y_2\, dx = 0$.

THEOREM 5.4. Every eigenvalue of the SLP (1)-(3) is real.

Proof. Let us denote
$$Ly := [r(x) y']' + q(x) y.$$
Suppose $\lambda := \mu + i\nu$ is an eigenvalue of the SLP with corresponding eigenfunction $y(x) = u(x) + i v(x)$, where $\mu, \nu \in \mathbb{R}$, and $u, v$ are real valued functions. Then we have
$$L(u + iv) = -(\mu + i\nu)\, p\, (u + iv),$$
i.e.,
$$Lu + i\, Lv = -p(\mu u - \nu v) - i\, p(\nu u + \mu v).$$
Hence,
$$Lu = -p(\mu u - \nu v), \qquad Lv = -p(\nu u + \mu v),$$
so that
$$(Lu) v - (Lv) u = \nu\, p\, (v^2 + u^2),$$
and hence
$$\int_a^b [(Lu) v - (Lv) u]\, dx = \nu \int_a^b p\, (v^2 + u^2)\, dx.$$
But,
$$(Lu) v - (Lv) u = [(r u') v - (r v') u]'.$$
Hence,
$$\int_a^b [(Lu) v - (Lv) u]\, dx = \int_a^b [(r u') v - (r v') u]'\, dx = [(r u') v - (r v') u](b) - [(r u') v - (r v') u](a).$$
Using the fact that $u$ and $v$ satisfy the boundary conditions (2)-(3), it can be shown that
$$[(r u') v - (r v') u](b) - [(r u') v - (r v') u](a) = 0.$$
Thus, we obtain $\displaystyle \nu \int_a^b p\, (v^2 + u^2)\, dx = 0$. Since $\displaystyle \int_a^b p\, (v^2 + u^2)\, dx > 0$, we obtain $\nu = 0$, and hence $\lambda = \mu \in \mathbb{R}$.

THEOREM 5.5. If y1 and y2 are eigenfunctions corresponding to the same eigenvalue λ of the SLP,
then y1 and y2 are linearly dependent.

Proof. Suppose y1 and y2 are eigenfunctions corresponding to an eigenvalue λ of the SLP. Then we
have
    Ly1 = −λpy1,    Ly2 = −λpy2.

Hence,
    (Ly1)y2 − (Ly2)y1 = 0.

But,
    (Ly1)y2 − (Ly2)y1 = [(r y1')y2 − (r y2')y1]' = −[rW(y1, y2)]'.

Thus [rW(y1, y2)]' = 0, so that rW(y1, y2) is a constant function, say

    r(x)W(y1, y2)(x) = c,  a constant.

But, by the boundary condition (2), we have

    k1 y1(a) + k2 y1'(a) = 0,
    k1 y2(a) + k2 y2'(a) = 0,

i.e.,
    [ y1(a)  y1'(a) ] [ k1 ]   [ 0 ]
    [ y2(a)  y2'(a) ] [ k2 ] = [ 0 ].

Since (k1, k2) ≠ (0, 0), the determinant of the coefficient matrix must vanish, i.e., W(y1, y2)(a) = 0,
so that r(a)W(y1, y2)(a) = 0, and hence c = 0. Since r does not vanish on [a, b], this implies that
W(y1, y2) is the zero function, and hence y1 and y2 are linearly dependent.

Example 5.6. For λ ∈ ℝ, consider the SLP:

    y'' + λy = 0,    y(0) = 0 = y(π).

Note that, for λ = 0, the problem has only the zero solution. Hence, 0 is not an eigenvalue of the
problem.

If λ < 0, say λ = −μ² with μ > 0, then a general solution is given by

    y(x) = C1 e^{μx} + C2 e^{−μx}.

Now, y(0) = 0 implies C1 + C2 = 0, and y(π) = 0 implies C1 e^{μπ} + C2 e^{−μπ} = 0. It follows that
C1 = 0 = C2. Hence, the SLP does not have any negative eigenvalues.

Next suppose that λ > 0, say λ = μ² with μ > 0. Then a general solution is given by

    y(x) = C1 cos(μx) + C2 sin(μx).

Note that y(0) = 0 implies C1 = 0. Now, y(π) = 0 implies C2 sin(μπ) = 0. Hence, for those values
of μ for which sin(μπ) = 0, we obtain a nonzero solution. Now,

    sin(μπ) = 0 ⟺ μ = n for some n ∈ ℤ.

Thus the eigenvalues and corresponding eigenfunctions of the SLP are

    λn := n²,    yn(x) := sin(nx),    n ∈ ℕ.
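The conclusion λn = n² can be corroborated numerically: replacing −y'' = λy, y(0) = y(π) = 0, by a central-difference approximation turns the SLP into a matrix eigenvalue problem whose smallest eigenvalues approach 1, 4, 9, ... . A sketch, assuming NumPy is available (the helper name `dirichlet_eigs` is ours):

```python
import numpy as np

def dirichlet_eigs(n_interior=200, L=np.pi):
    """Approximate eigenvalues of -y'' = lambda*y, y(0) = y(L) = 0,
    via central differences on n_interior interior grid points."""
    h = L / (n_interior + 1)
    T = (np.diag(np.full(n_interior, 2.0))
         + np.diag(np.full(n_interior - 1, -1.0), 1)
         + np.diag(np.full(n_interior - 1, -1.0), -1)) / h**2
    return np.sort(np.linalg.eigvalsh(T))

eigs = dirichlet_eigs()
print(eigs[:3])  # close to 1, 4, 9
```

Refining the grid drives the approximate eigenvalues toward n²; since the O(h²) discretization error grows with n, only the lowest eigenvalues are accurate on a fixed grid.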

Example 5.7. For λ ∈ ℝ, consider the SLP:

    y'' + λy = 0,    y'(0) = 0 = y'(π).

Note that, for λ = 0, y(x) = α + βx is a general solution of the DE. Now, y'(0) = 0 and y'(π) = 0
imply β = 0. Hence, y(x) = 1 is an eigenfunction corresponding to the eigenvalue λ = 0.

If λ < 0, say λ = −μ² with μ > 0, then a general solution is given by

    y(x) = C1 e^{μx} + C2 e^{−μx}.

Note that y'(x) = μC1 e^{μx} − μC2 e^{−μx}. Hence,

    y'(0) = 0 = y'(π)  ⟹  C1 − C2 = 0,    C1 e^{μπ} − C2 e^{−μπ} = 0.

Hence C1 = C2 = 0, so that the SLP does not have any negative eigenvalues.

Next suppose that λ > 0, say λ = μ² with μ > 0. Then a general solution is given by

    y(x) = C1 cos(μx) + C2 sin(μx).

Then,
    y'(x) = −μC1 sin(μx) + μC2 cos(μx).

Now, y'(0) = 0 implies C2 = 0, and hence y'(π) = 0 implies C1 sin(μπ) = 0, so that a nonzero
solution requires sin(μπ) = 0. Note that

    sin(μπ) = 0 ⟺ μ = n for some n ∈ ℤ.

Thus the eigenvalues and corresponding eigenfunctions of the SLP are

    λn := n²,    yn(x) := cos(nx),    n ∈ ℕ0.
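The same finite-difference idea adapts to these Neumann conditions: ghost points enforce y'(0) = 0 = y'(π), and the smallest discrete eigenvalue is now 0, matching the eigenvalue λ0 = 0 with eigenfunction y0(x) = 1 found above. A sketch assuming NumPy (the helper name `neumann_eigs` is ours):

```python
import numpy as np

def neumann_eigs(n=200, L=np.pi):
    """Approximate eigenvalues of -y'' = lambda*y with y'(0) = y'(L) = 0,
    using central differences; ghost points encode the Neumann conditions."""
    h = L / n
    A = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        A[i, i] = 2.0
        if i > 0:
            A[i, i - 1] = -1.0
        if i < n:
            A[i, i + 1] = -1.0
    A[0, 1] = -2.0      # ghost point y_{-1} = y_1, from y'(0) = 0
    A[n, n - 1] = -2.0  # ghost point y_{n+1} = y_{n-1}, from y'(L) = 0
    return np.sort(np.linalg.eigvals(A).real) / h**2

eigs = neumann_eigs()
print(eigs[:3])  # close to 0, 1, 4
```

Unlike the Dirichlet case, 0 now appears as an eigenvalue, with a constant eigenvector approximating y0(x) = 1.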


Exercise 5.8. For λ ∈ ℝ, consider the SLP:

    y'' + λy = 0,    y(0) = 0,    y'(π) = 0.

Show that the eigenvalues and the corresponding eigenfunctions for the above SLP are given by

    λn = ((2n − 1)/2)²,    yn(x) = sin( ((2n − 1)/2) x ),    n ∈ ℕ.


Exercise 5.9. Consider the Schrödinger equation

    −(ℏ²/2m) ψ''(x) = λψ(x),    x ∈ [0, ℓ],

along with the boundary conditions

    ψ(0) = 0 = ψ(ℓ).

Show that the eigenvalues and the corresponding eigenfunctions for the above SLP are given by

    λn = ℏ²π²n²/(2mℓ²),    ψn(x) = √(2/ℓ) sin(nπx/ℓ),    n ∈ ℕ.

Exercise 5.10. Let

    Ly := [r(x)y']' + q(x)y.

Prove that

    ⟨Ly, z⟩_p = ⟨y, Lz⟩_p    for all y, z ∈ C²[a, b] satisfying the boundary conditions (2)–(3),

for every weight function p(x) > 0 on [a, b].

Definition 5.11. An orthogonal sequence (φn) of nonzero functions in C[a, b] is called a complete
system for C[a, b] with respect to a weight function w if every f ∈ C[a, b] can be written as

    f = Σ_{n=1}^∞ cn φn,

where the equality above is in the sense that

    ∫_a^b ( f(x) − Σ_{n=1}^N cn φn(x) )² w(x) dx → 0 as N → ∞.

It can be seen that cn = ⟨f, φn⟩_w / ⟨φn, φn⟩_w.
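To see this mode of convergence in action, take the sine system φn(x) = sin(nx) on [0, π] from Example 5.6 with weight w(x) = 1 and expand f(x) = x. Integration by parts gives cn = ⟨f, φn⟩_w / ⟨φn, φn⟩_w = 2(−1)^{n+1}/n, and the mean-square error of the partial sums decreases toward 0. A Python sketch (the helper names and the choice f(x) = x are ours, for illustration only):

```python
import math

def c(n):
    """Coefficient c_n = <f, phi_n>_w / <phi_n, phi_n>_w for f(x) = x,
    phi_n(x) = sin(nx), w = 1 on [0, pi] (closed form via integration by parts)."""
    return 2.0 * (-1) ** (n + 1) / n

def mean_square_error(N, m=2000):
    """Trapezoidal estimate of integral_0^pi (f(x) - sum_{n<=N} c_n phi_n(x))^2 dx."""
    h = math.pi / m
    total = 0.0
    for k in range(m + 1):
        x = k * h
        s = sum(c(n) * math.sin(n * x) for n in range(1, N + 1))
        r = (x - s) ** 2
        total += r if 0 < k < m else 0.5 * r
    return total * h

errs = [mean_square_error(N) for N in (1, 5, 25)]
print(errs)  # strictly decreasing toward 0
```

By Parseval's relation the error after N terms equals 2π Σ_{n>N} 1/n² ≈ 2π/N, so the expansion converges in the mean even though every partial sum vanishes at x = π while f(π) = π ≠ 0.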

References

[1] William E. Boyce and Richard C. DiPrima (2012): Elementary Differential Equations, John
Wiley and Sons, Inc.
