
M.Sc.

First Year Mathematics

Numerical Analysis & Differential

Equations

WELCOME

Dear Students,

We welcome you as a student of the

first year M.Sc. degree course.

This paper deals with the subject

NUMERICAL ANALYSIS AND

DIFFERENTIAL EQUATIONS. The

learning material for this paper will

be supplemented by contact lectures.

In this book, the first five units deal with numerical analysis. Units 6 to 8 discuss the theory of ordinary differential equations, and the rest of the book deals with partial differential equations.

Learning through the Distance

Education mode, as you are all

aware, involves self-learning and self-assessment, and in this regard you are expected to take part with disciplined and dedicated effort.

For our part, we assure you of our guidance and support.

With best regards.


SYLLABUS

Unit 1: Introduction, Bisection

method, iteration methods based on

first degree equation, iteration

methods based on second degree

equation, Rate of convergence,

General iteration methods, Methods

for complex roots. Polynomial

equations.

Unit 2: Introduction, Direct methods,

Error analysis for direct methods,


iteration methods, Eigenvalues and eigenvectors, Power method.

Unit 3: Introduction, Lagrange and

Newton interpolations, Finite

difference Operators, Interpolating

polynomials using finite differences,


Hermite interpolation, Piecewise and

spline interpolation.

Unit 4: Introduction, Numerical

Differentiation, Extrapolation

methods, Partial Differentiation,

Numerical integration, Methods

based on interpolation, Composite

integration methods, Romberg

method.

Unit 5: Introduction, Difference

equation, Numerical methods, Single


step methods.

Unit 6: Introduction, Initial value

problems for the homogeneous

equation, Solutions of the

homogeneous equation, The

Wronskian and linear independence.


Reduction of the order of a

homogeneous equation, The non-

homogeneous equation,

Homogeneous equations with

analytic coefficients, The Legendre

equation.

Unit 7: Introduction, The Euler

equation, Second order equations

with regular singular points - an example, second order equations with regular singular points - the

general case, A convergence proof,

The exceptional cases, The Bessel

equation. The Bessel equation

(continued)

Unit 8: Introduction, Equations with

variables separated. Exact equations.


The method of successive

approximations. The Lipschitz

condition. Convergence of the

successive approximations, Non-local

existence of solutions,

Approximations to, and uniqueness of, solutions.

Unit 9: Partial differential equations

Origins of first order partial

differential equations Cauchy's

problem for first order equations

Linear equations of the first order

Integral surfaces passing through a

given curve.

Unit 10: Nonlinear partial differential

equations of the first order -

Cauchy's method of characteristics


compatible systems of First-order

equations Charpit's method

special types of first-order equations.

Text Books

1. Numerical Methods for Scientific

and Engineering Computation by

M.K. Jain, S.R.K. Iyengar and R.K. Jain, Fourth Edition, New Age International Publishers, 2003.

Chapter 2: Sections 2.1 to 2.6,

2.8 to 2.9,

Chapter 3: Sections 3.1 to 3.6

Chapter 4: Sections 4.1 to 4.6.

Chapter 5: Sections 5.1, 5.2,

5.4 to 5.7, 5.9 to 5.11, omitting

double integration. Chapter 6

Sections 6.1 to 6.4


2. An Introduction to ordinary

differential equations by E.A.

Coddington, Prentice Hall of

India, 1987.

3. Elements of Partial Differential

equations by I.N. Sneddon, Tata

McGraw Hill Book Company,

1986.

From Textbook 2

Chapter 3: Sections 1 to 8,

Chapter 4: Sections 1 to 8,

Chapter 5: Sections 1 to 8.

From Textbook 3

Chapter 2: Sections 2.1 to 2.11


CONTENT

TOPIC

Unit 5: ORDINARY DIFFERENTIAL EQUATIONS - INITIAL VALUE PROBLEMS

5.1 Introduction
5.2 Difference Equations
5.3 Numerical Methods
5.4 Single Step Methods

Unit 6: LINEAR EQUATIONS WITH VARIABLE COEFFICIENTS

6.1 Introduction
6.2 Initial Value Problems for the Homogeneous Equation
6.3 Solutions of the Homogeneous Equation
6.4 The Wronskian and Linear Independence
6.5 Reduction of the Order of a Homogeneous Equation
6.6 The Non-Homogeneous Equation
6.7 Homogeneous Equations with Analytic Coefficients
6.8 The Legendre Equation

Unit 7: LINEAR EQUATIONS WITH REGULAR SINGULAR POINTS

7.1 Introduction
7.2 The Euler Equation
7.3 Second Order Equations with Regular Singular Points - An Example
7.4 Second Order Equations with Regular Singular Points - The General Case
7.5 A Convergence Proof
7.6 The Exceptional Cases
7.7 The Bessel Equation
7.8 The Bessel Equation (continued)

Unit 8: EXISTENCE AND UNIQUENESS OF SOLUTIONS TO FIRST ORDER EQUATIONS

8.1 Introduction
8.2 Equations with Variables Separated
8.3 Exact Equations
8.4 The Method of Successive Approximations
8.5 The Lipschitz Condition
8.6 Convergence of the Successive Approximations
8.7 Non-local Existence of Solutions

Unit 9: PARTIAL DIFFERENTIAL EQUATIONS OF THE FIRST ORDER

9.1 Partial Differential Equations
9.2 Origins of First-Order Partial Differential Equations
9.3 Cauchy's Problem for First-Order Equations
9.4 Linear Equations of the First Order
9.5 Integral Surfaces Passing Through a Given Curve

Unit 10: NONLINEAR PARTIAL DIFFERENTIAL EQUATIONS OF THE FIRST ORDER

10.1 Introduction
10.2 Cauchy's Method of Characteristics
10.3 Compatible Systems of First-Order Equations
10.4 Charpit's Method
10.5 Special Types of First-Order Equations
Unit - 5

ORDINARY DIFFERENTIAL
EQUATIONS INITIAL VALUE
PROBLEMS

5.1 INTRODUCTION

An ordinary differential equation is

a relation between a function, its

derivatives, and the variable upon

which they depend. The most general

form of an ordinary differential

equation is given by

φ(t, y, y', y'', ..., y^(m)) = 0

where m represents the highest

order derivative, and y and its

derivatives are functions of t. The

order of the differential equation is


the order of its highest derivative and

its degree is the degree of the

derivative of the highest order after

the equation has been rationalized

in derivatives. If no product of the

dependent variable y(t) with itself or

any one of its derivatives occur, then

the equation is said to be linear,

otherwise it is nonlinear. A linear

differential equation of order m can

be expressed in the form

Σ (p = 0 to m) a_p(t) y^(p)(t) = r(t)

in which the a_p(t) are known functions.

If the general nonlinear differential

equation of order m can be written as

y^(m) = F(t, y, y', ..., y^(m-1))
then the equation is called a

canonical representation of the

differential equation. In such a form,

the highest order derivative is

expressed in terms of the lower order

derivatives and the independent

variable.

Initial value problem

A general solution of an ordinary

differential equation is a relation

between y, t and m arbitrary


constants, which satisfies the

equation, but which contains no

derivatives. The solution may be an

implicit relation of the form

w(t, y, c_1, c_2, ..., c_m) = 0
or an explicit function of t of the form

y = w(t, c_1, c_2, ..., c_m)

The m arbitrary constants c_1, c_2, ..., c_m can be determined by prescribing m conditions of the form

y^(v)(t_0) = η_v,  v = 0, 1, 2, ..., m - 1

at one point t = t_0; these are called

initial conditions. The point t_0 is

called the initial point. The

differential equation together with

the initial conditions is called an mth

order initial value problem.

If the m conditions are prescribed at

more than one point to determine the

m arbitrary constants in the general


solution, these are called boundary

conditions. The differential equation

together with the boundary

conditions is known as the boundary

value problem.

Reduction of Higher Order

Equations to the System of

First Order Differential

Equations

The mth order differential equation

with initial conditions may be written


as an equivalent system of m first

order initial value problems.

Let u_1 = y. Then

u_1' = u_2
u_2' = u_3
...
u_{m-1}' = u_m
u_m' = F(t, u_1, u_2, ..., u_m)

u_1(t_0) = η_0, u_2(t_0) = η_1, ..., u_m(t_0) = η_{m-1}

which in vector notation becomes

u' = f(t, u),  u(t_0) = η

where u = [u_1, u_2, ..., u_m]^T, f = [u_2, u_3, ..., u_m, F]^T, and η = [η_0, η_1, ..., η_{m-1}]^T. Thus, the

methods of solution of the first order

initial value problem


= f(t,u), u(t0) = 0
du
dt

may be used to solve the system of

first order initial value problems and

the mth order initial value problem.

Example 1.

Convert the following second order

initial value problem into a system of

first order initial value problems

Solution.

t y'' - y' + 4t³ y = 0

y(1) = 1,  y'(1) = 2.

Let v_1(t) = y(t), v_2(t) = y'(t).

Then v_1' = y'(t) = v_2(t), and v_2' = y''.

Hence, the system becomes

v_1' = v_2,  v_1(1) = 1

v_2' = (1/t)(v_2 - 4t³ v_1),  v_2(1) = 2

or v' = f(t, v), v(1) = d

where v = [v_1, v_2]^T, f = [f_1, f_2]^T, d = [d_1, d_2]^T, with f_1 = v_2, f_2 = (v_2 - 4t³ v_1)/t, d_1 = 1, d_2 = 2.
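The first order system from Example 1 can then be handed to any single step integrator. The following sketch (not part of the original text; the step size and the classical Runge-Kutta integrator are our own choices) integrates the system from t = 1 to t = 2:

```python
def f(t, v):
    # Right-hand side of the first-order system from Example 1:
    # v1' = v2,  v2' = (v2 - 4 t^3 v1) / t
    v1, v2 = v
    return [v2, (v2 - 4 * t**3 * v1) / t]

def rk4_step(f, t, v, h):
    # One classical fourth-order Runge-Kutta step for a 2-component system
    k1 = f(t, v)
    k2 = f(t + h/2, [v[i] + h/2 * k1[i] for i in range(2)])
    k3 = f(t + h/2, [v[i] + h/2 * k2[i] for i in range(2)])
    k4 = f(t + h,   [v[i] + h * k3[i] for i in range(2)])
    return [v[i] + h/6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]

t, v, h = 1.0, [1.0, 2.0], 0.01   # initial conditions y(1) = 1, y'(1) = 2
for _ in range(100):              # integrate from t = 1 to t = 2
    v = rk4_step(f, t, v, h)
    t += h
y_at_2 = v[0]
```

The exact solution here happens to be y(t) = A sin t² + B cos t², so the computed y_at_2 can be checked against it.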

Example 2.

Convert the following system of two simultaneous third-order equations into a system of six simultaneous first-order equations:

y_1''' = f_1(t, y_1, y_2, y_1', y_2', y_1'', y_2'')
y_2''' = f_2(t, y_1, y_2, y_1', y_2', y_1'', y_2'')

y_1(t_0) = η_0, y_1'(t_0) = η_1, y_1''(t_0) = η_2
y_2(t_0) = b_0, y_2'(t_0) = b_1, y_2''(t_0) = b_2

We make the following substitutions:

u_1(t) = y_1(t), u_3(t) = y_1'(t), u_5(t) = y_1''(t)

u_2(t) = y_2(t), u_4(t) = y_2'(t), u_6(t) = y_2''(t)

Then, by observing that

u_3'(t) = y_1''(t) = u_5(t), and u_4'(t) = y_2''(t) = u_6(t),

we can write the original system as

u_1' = u_3(t), u_1(t_0) = η_0
u_2' = u_4(t), u_2(t_0) = b_0
u_3' = u_5(t), u_3(t_0) = η_1
u_4' = u_6(t), u_4(t_0) = b_1
u_5' = f_1(t, u_1, u_2, u_3, u_4, u_5, u_6), u_5(t_0) = η_2
u_6' = f_2(t, u_1, u_2, u_3, u_4, u_5, u_6), u_6(t_0) = b_2.

Existence and Uniqueness

The existence and uniqueness of the

solution of the initial value problem

is guaranteed by the following

theorem:

Theorem.

We assume that f(t, u) satisfies the

following conditions:

i. f (t, u) is a real function


ii. f(t, u) is defined and continuous in the strip t ∈ [t_0, b], u ∈ (-∞, ∞).

iii. There exists a constant L such that for any t ∈ [t_0, b] and any v_1 and v_2,

|f(t, v_1) - f(t, v_2)| ≤ L |v_1 - v_2|

where L is called the Lipschitz constant. Then, for any u_0, the initial value problem has a unique solution u(t) for t ∈ [t_0, b].

We will always assume the existence

and uniqueness of the solution and

also that f(t, u) has continuous

partial derivatives with respect to t

and u of as high an order as we

desire.
Test Equations

The behavior of the solution of the

initial value problem in the

neighbourhood of any point (t, u) can

be predicted by considering the

linearized form of the differential

equation

u' = f(t, u)

The nonlinear function f(t, u) can be

linearized by expanding the function

about the point


( ) in

t ,u the Taylor

series and truncating it after the first

order term. We have


( ) ( )( ) ( ) ( )
f
u = t,u + t- t tt,u + u-u
f
u

t ,u

or u =u+c

f
( )
where =( u ) t ,u
( ) ( ) ( ) ( )
f
and c = f t ,u -u u t ,u + t- t
f
t

t ,u

Substituting w = u + (c / ),we get

u = w = [w (c / .)] + c = w,
or w = w.

This equation is called the test

equation for the nonlinear equation

u= f(t, u). the solution of this test


1
equation, w(t) = k e , k constant, is

shown in Fig.5.1(a)

t
Fig.5.1 (a) Representation of w = e
Similarly, the test equation for the second order initial value problem

y'' = f(t, y, y')

y(t_0) = η_0,  y'(t_0) = η_1

can also be obtained. Expanding f(t, y, y') in a Taylor series about the point (t̄, ȳ, ȳ') and truncating after the first order terms, we get

y'' = f(t̄, ȳ, ȳ') + (t - t̄) f_t(t̄, ȳ, ȳ') + (y - ȳ) f_y(t̄, ȳ, ȳ') + (y' - ȳ') f_{y'}(t̄, ȳ, ȳ')

   = -b y' - c y + k

where b = -f_{y'}(t̄, ȳ, ȳ'), c = -f_y(t̄, ȳ, ȳ'), and

k = f(t̄, ȳ, ȳ') + (t - t̄) f_t(t̄, ȳ, ȳ') - ȳ f_y(t̄, ȳ, ȳ') - ȳ' f_{y'}(t̄, ȳ, ȳ')

Setting y = w + (k/c), we get

y'' = w'' = -b w' - c[w + (k/c)] + k

or w'' = -b w' - c w.

This differential equation is equivalent to the following system of equations

u' = A u

where

u = [u_1, u_2]^T,  A = [[0, 1], [-c, -b]],  u_1 = w,  u_2 = w'

The nature of the solutions depends on the roots λ_1 and λ_2 of the characteristic equation of the matrix A, that is, of the equation

λ² + bλ + c = 0

The roots of the characteristic equation are

λ_{1,2} = [-b ± √(b² - 4c)] / 2
We now consider the following cases:

i. b > 0, c ≥ 0, b > 2√c:

The characteristic roots are real, distinct and negative. Therefore, the solutions are exponentially decreasing. For c = 0, the test equation becomes

w'' + b w' = 0, b > 0.

ii. b < 0, c ≥ 0, |b| > 2√c:

The characteristic roots are real, distinct and positive. Therefore, the solutions are exponentially increasing. For c = 0, the test equation becomes

w'' + b w' = 0, b < 0.

iii. c > 0 and |b| < 2√c:

The characteristic roots are a complex pair. Therefore, the solutions are oscillating. If b < 0, then the solution is an oscillating function whose amplitude becomes unbounded as t → ∞. If b > 0, then the solution is a damped oscillating function as t → ∞. For b = 0, the test equation becomes

w'' + c w = 0, c > 0

whose solution is periodic with period 2π/√c. The solution is shown in Fig. 5.1(b). We observe that λ_1 and λ_2 (b = 0, c > 0) are pure imaginary numbers.

Fig.5.1 (b) Periodic solution

The solution of the test differential

equation will also be periodic if we

allow λ to be a pure imaginary

number.

Thus, the nature of the solutions of

the systems of equations of higher


order equations may be discussed by

using the test equation w' = λw for

i. λ real,

ii. λ pure imaginary, and

iii. λ complex.
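The case analysis above can be mechanized. The sketch below (the function name and tolerances are our own; this is an illustration, not part of the original text) classifies the second order test equation w'' + b w' + c w = 0 by computing the roots of λ² + bλ + c = 0:

```python
import cmath

def classify_test_equation(b, c):
    # Classify solutions of w'' + b w' + c w = 0 via the roots of
    # lambda^2 + b*lambda + c = 0, following cases (i)-(iii) above.
    r1 = (-b + cmath.sqrt(b*b - 4*c)) / 2
    r2 = (-b - cmath.sqrt(b*b - 4*c)) / 2
    if abs(r1.imag) > 1e-12:            # complex pair: case (iii)
        if b == 0:
            return "periodic"
        return "damped oscillation" if b > 0 else "growing oscillation"
    if max(r1.real, r2.real) < 0:       # real, negative: case (i)
        return "exponentially decreasing"
    return "exponentially increasing or neutral"
```

For instance, b = 3, c = 2 gives real negative roots (decreasing solutions), while b = 0, c = 4 gives a pure imaginary pair (periodic solutions).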

Systems of Linear First Order

Differential Equations with

Constant Coefficients

A first order linear system with constant coefficients is of the form

u' = A u + b(t)

where A = [a_ij], i, j = 1, 2, ..., m is an m × m constant matrix and b(t) is an m-dimensional vector. The general solution may be written as

u(t) = Σ (i = 1 to m) k_i e^{λ_i t} u_i + φ(t)

where the matrix A has distinct eigenvectors u_i. The function φ(t) is a particular solution. The matrix equation may be decoupled by applying a similarity transformation. Let U = [u_1, u_2, ..., u_m] be the matrix of eigenvectors. Set u = U z. Then,

u' = U z' = A U z + b(t).

Premultiplying by U^{-1}, we get

z' = U^{-1} A U z + U^{-1} b(t)

or z' = D z + h(t)

where h(t) = U^{-1} b(t) and D = U^{-1} A U is the diagonal matrix with diagonal elements λ_1, λ_2, ..., λ_m. The above equation gives a decoupled system for z_1, z_2, ..., z_m, whose solution is given by

z_i(t) = e^{λ_i t} ( ∫ e^{-λ_i t} h_i(t) dt + c_i ),  i = 1, 2, ..., m

Theorem.

The solutions of the system with b(t)

= 0 are said to be stable

(asymptotically stable), i.e., u_i → 0 as t → ∞, if and only if all the eigenvalues of the matrix A have negative real parts.



Example 3.

Find the solution of the system of equations

du/dt = A u

where u = [u_1, u_2]^T and A = [[-3, 4], [-2, 3]].

The solution of the system of equations can be obtained by finding the eigenvalues and eigenvectors of A. The characteristic equation of A is

|A - λI| = λ² - 1 = 0

and the eigenvalues are λ = ±1.

For λ = 1, we have the system

[[-4, 4], [-2, 2]] [u_1, u_2]^T = [0, 0]^T

The eigenvector is v^(1) = [1, 1]^T.

For λ = -1, we have the system

[[-2, 4], [-2, 4]] [u_1, u_2]^T = [0, 0]^T

The eigenvector is v^(2) = [2, 1]^T.

Hence the solution of the system is

[u_1, u_2]^T = c_1 e^t v^(1) + c_2 e^{-t} v^(2)

Componentwise, we can write the solution as

u_1(t) = c_1 e^t + 2 c_2 e^{-t},  u_2(t) = c_1 e^t + c_2 e^{-t}
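The eigenvalues of the matrix in Example 3 can be verified numerically; the short check below uses NumPy (the signs of A are as reconstructed above, so treat this as an illustration):

```python
import numpy as np

# Coefficient matrix from Example 3 (signs as reconstructed here)
A = np.array([[-3.0, 4.0],
              [-2.0, 3.0]])

# The general solution is u(t) = c1 e^{l1 t} v1 + c2 e^{l2 t} v2,
# where (lambda_i, v_i) are the eigenpairs of A.
eigvals, eigvecs = np.linalg.eig(A)
lams = sorted(eigvals.real)
```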

Example 4.

Find the solution of the initial value problem

du/dt = A u,  u(0) = [1, 0]^T

where A = [[-2, 1], [1, -20]] and u = [u_1, u_2]^T. Is the system asymptotically stable?

The solution of the initial value problem can be obtained by finding the eigenvalues and eigenvectors of A. The characteristic equation of A is

|A - λI| = λ² + 22λ + 39 = 0

The eigenvalues are

λ_1 = -11 + √82 and λ_2 = -11 - √82

For λ = λ_1 = -11 + √82, we have the system

[[9 - √82, 1], [1, -9 - √82]] [u_1, u_2]^T = [0, 0]^T

The eigenvector is v^(1) = [1, √82 - 9]^T.

For λ = λ_2 = -11 - √82, we have the system

[[9 + √82, 1], [1, √82 - 9]] [u_1, u_2]^T = [0, 0]^T

The eigenvector is v^(2) = [9 - √82, 1]^T.

The general solution of the system is given by

u = c_1 e^{(√82 - 11)t} v^(1) + c_2 e^{-(√82 + 11)t} v^(2)

Substituting in the initial conditions, we get c_1 + (9 - √82) c_2 = 1 and (√82 - 9) c_1 + c_2 = 0. The solution is c_1 = 0.99694, c_2 = -0.05522.

The particular solution is

u = 0.99694 e^{(√82 - 11)t} [1, √82 - 9]^T - 0.05522 e^{-(√82 + 11)t} [9 - √82, 1]^T

The eigenvalues of the matrix are real and negative. Therefore, the solutions of the system tend to 0 as t → ∞ and the system is asymptotically stable.
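The stability conclusion of Example 4 follows directly from the theorem above: every eigenvalue must have a negative real part. A minimal numerical check (an illustration, not part of the original text):

```python
import numpy as np

A = np.array([[-2.0, 1.0],
              [1.0, -20.0]])

eigvals = np.linalg.eigvals(A)
# Stability theorem above: u -> 0 as t -> infinity iff every
# eigenvalue of A has a negative real part.
asymptotically_stable = bool(np.all(eigvals.real < 0))
largest = max(eigvals.real)   # should equal -11 + sqrt(82)
```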

Example 5.

Obtain a general solution of the system of equations

du_1/dt = -5u_1 + 2u_2 + t
du_2/dt = 2u_1 - 2u_2 + e^{-t}

In matrix form, we write the given system as

du/dt = A u + b, where A = [[-5, 2], [2, -2]], u = [u_1, u_2]^T and b = [t, e^{-t}]^T

Consider the homogeneous system u' = A u. The characteristic equation of A is

|A - λI| = λ² + 7λ + 6 = 0

The eigenvalues of A are λ = -6, λ = -1.

For λ = -6, we have the system

[[1, 2], [2, 4]] [u_1, u_2]^T = [0, 0]^T

The eigenvector is v^(1) = [2, -1]^T.

For λ = -1, we have the system

[[-4, 2], [2, -1]] [u_1, u_2]^T = [0, 0]^T

The eigenvector is v^(2) = [1, 2]^T.

The matrix of eigenvectors is

U = [[2, 1], [-1, 2]] and U^{-1} = (1/5) [[2, -1], [1, 2]]

Set u = U z. We obtain

z' = D z + h(t),

where

h(t) = U^{-1} b = (1/5) [2t - e^{-t}, t + 2e^{-t}]^T and D = [[-6, 0], [0, -1]]

Hence we have

z_1' = -6 z_1 + (1/5)(2t - e^{-t})
z_2' = -z_2 + (1/5)(t + 2e^{-t})

The solutions of these first order differential equations are

z_1 = c_1 e^{-6t} + (1/450)(30t - 5 - 18 e^{-t}),  z_2 = c_2 e^{-t} + (1/5)(t - 1 + 2t e^{-t})

The general solution is u = U z, that is

u_1 = 2 z_1 + z_2,  u_2 = -z_1 + 2 z_2

Therefore,

u_1(t) = 2 c_1 e^{-6t} + c_2 e^{-t} + (1/450)(150t - 100 - 36 e^{-t} + 180 t e^{-t})

u_2(t) = -c_1 e^{-6t} + 2 c_2 e^{-t} + (1/450)(150t - 175 + 18 e^{-t} + 360 t e^{-t})
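A quick way to gain confidence in a hand-derived general solution like the one in Example 5 is to substitute it back into the differential equations, approximating the derivatives by central differences. The sketch below does this for a couple of arbitrary constants (an illustration; the function names and test points are our own):

```python
import math

def u(t, c1, c2):
    # General solution from Example 5 (as reconstructed here)
    z1 = c1 * math.exp(-6*t) + (30*t - 5 - 18*math.exp(-t)) / 450
    z2 = c2 * math.exp(-t) + (t - 1 + 2*t*math.exp(-t)) / 5
    return 2*z1 + z2, -z1 + 2*z2      # u1 = 2 z1 + z2, u2 = -z1 + 2 z2

def residual(t, c1, c2, h=1e-6):
    # Compare central-difference derivatives with the right-hand sides
    u1p = (u(t+h, c1, c2)[0] - u(t-h, c1, c2)[0]) / (2*h)
    u2p = (u(t+h, c1, c2)[1] - u(t-h, c1, c2)[1]) / (2*h)
    u1, u2 = u(t, c1, c2)
    r1 = u1p - (-5*u1 + 2*u2 + t)
    r2 = u2p - (2*u1 - 2*u2 + math.exp(-t))
    return abs(r1) + abs(r2)
```

The residual should vanish up to finite-difference error for any choice of c1, c2 if the general solution is correct.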

5.2 DIFFERENCE EQUATIONS


Let x_n, x_{n+1}, ..., x_{n+k} be a set of k + 1 equispaced points with spacing h and u_n, u_{n+1}, ..., u_{n+k} be the corresponding values of a function u(x) at these points, that is, x_{n+i} = x_n + ih, u_{n+i} = u(x_{n+i}), i = 0, 1, ..., k,

for some integer k. A relationship

between u_n and the differences Δu_n, Δ²u_n, ..., Δ^k u_n is called a difference equation and hence it can be regarded as a relation among u_n, u_{n+1}, ..., u_{n+k}. The order of a

difference equation is the number of

intervals separating the largest and

the smallest argument of the

dependent variable.

F(u_n, u_{n+1}, ..., u_{n+k}) = 0

If the function F is linear in u_n, u_{n+1}, ..., u_{n+k}, then the difference equation is called linear. A linear difference equation of order k can be written as

a_0 u_{n+k} + a_1 u_{n+k-1} + ... + a_k u_n = g_n,  a_0 ≠ 0.

If the coefficients a_0, a_1, ..., a_k are constants, then the equation defines a linear difference equation of order k with constant coefficients. For example, the difference equation

Δ²u_n + 3Δu_n + 5u_n = 0

or (u_{n+2} - 2u_{n+1} + u_n) + 3(u_{n+1} - u_n) + 5u_n = 0

or u_{n+2} + u_{n+1} + 3u_n = 0

is a linear difference equation of order 2, with constant coefficients.


If g_n = 0, then the difference

equation is said to be homogeneous,

otherwise, it is said to be

inhomogeneous. We shall consider

only the linear difference equations

with constant coefficients.

The solution of the difference equation consists of a solution u_n^(H), called the complementary solution, of the homogeneous part

a_0 u_{n+k} + a_1 u_{n+k-1} + ... + a_k u_n = 0

and a particular solution u_n^(p) of the inhomogeneous part. The general solution is written as

u_n = u_n^(H) + u_n^(p)
For solving, we assume a solution of the form u_n = Aξ^n, where A ≠ 0 is a constant. Substituting, we get

Aξ^n (a_0 ξ^k + a_1 ξ^{k-1} + ... + a_k) = 0

or a_0 ξ^k + a_1 ξ^{k-1} + ... + a_k = 0

which is a polynomial equation of degree k, called the characteristic equation of the difference equation. Let ξ_1, ξ_2, ..., ξ_k be the roots of the above equation. We have the following cases:

Real and Distinct Roots

If ξ_1, ξ_2, ..., ξ_k are all real and distinct, then we have

u_n^(H) = c_1 ξ_1^n + c_2 ξ_2^n + ... + c_k ξ_k^n

where c_1, c_2, ..., c_k are arbitrary constants.

Real and Repeated Roots

Let ξ_1 (= ξ_2) be a double root and all other roots ξ_3, ξ_4, ..., ξ_k be distinct. It can be verified that ξ_1^n and n ξ_1^n are two linearly independent solutions. Then, we have

u_n^(H) = (c_1 + n c_2) ξ_1^n + c_3 ξ_3^n + ... + c_k ξ_k^n

Similarly, when ξ_1 is a root of multiplicity p and the other roots ξ_{p+1}, ξ_{p+2}, ..., ξ_k are distinct, we have

u_n^(H) = (c_1 + n c_2 + ... + n^{p-1} c_p) ξ_1^n + c_{p+1} ξ_{p+1}^n + ... + c_k ξ_k^n
Complex Roots

For a polynomial with real coefficients, the complex roots occur in conjugate pairs. Let ξ_1 = α + iβ = re^{iθ} and ξ_2 = α - iβ = re^{-iθ}, where r = √(α² + β²) and θ = tan^{-1}(β/α), be the complex roots and the roots ξ_3, ξ_4, ..., ξ_k be real and distinct. Then, we have

u_n^(H) = [c_1 cos(nθ) + c_2 sin(nθ)] |ξ_1|^n + c_3 ξ_3^n + ... + c_k ξ_k^n

The constants c_1, c_2, ..., c_k can be determined from the given conditions. The particular solution u_n^(p) depends on the form of g_n. When g_n = g is a constant, we assume the solution as u_n^(p) = q, a constant. Substituting, we get

(a_0 + a_1 + ... + a_k) q = g

or u_n^(p) = q = g / (a_0 + a_1 + ... + a_k)

When g_n is of the form g_n = B + nC, then we write the particular solution as u_n^(p) = q + nr. Substituting and comparing the coefficients of n and the constant term on both sides, we determine q and r.

Consider the homogeneous difference equation with constant coefficients. If ξ_1, ξ_2, ..., ξ_k are all real and distinct, then

u_n = c_1 ξ_1^n + c_2 ξ_2^n + ... + c_k ξ_k^n.

Suppose now we require that u_n → 0 as n → ∞. Then a necessary and sufficient condition is |ξ_i| < 1, i = 1(1)k. If u_n is to be bounded, then some of the ξ_i may satisfy |ξ_i| < 1 and others may satisfy |ξ_i| = 1.

Assume that |ξ_1| = 1 = |ξ_2| and |ξ_i| < 1, i = 3(1)k. Then

|u_n| ≤ |c_1| + |c_2| as n → ∞

If ξ_1 = ξ_2 is a double root, then

u_n = (c_1 + n c_2) ξ_1^n + c_3 ξ_3^n + ... + c_k ξ_k^n

Now, let |ξ_i| < 1, i = 3(1)k. If |ξ_1| = 1, then u_n → ∞ as n → ∞ and the solution is unbounded. However, if |ξ_1| = |ξ_2| < 1, then u_n → 0 as n → ∞. If ξ_1 and ξ_2 form a complex pair and |ξ_i| < 1, i = 3(1)k, then u_n → 0 as n → ∞ if |ξ_1| < 1.

Hence, u_n → 0 as n → ∞ if the roots satisfy |ξ_i| < 1, i = 1(1)k. If the roots are of magnitude 1 and simple, then u_n is bounded. In other cases, u_n is unbounded.

In many applications, the coefficients involve some parameter(s) and we need to determine the ranges for these parameters so that the roots satisfy |ξ_i| < 1 for all i. It is not always possible to find the roots for all values of the parameter(s) to check whether |ξ_i| < 1. In such cases, the values of the parameter(s) for which |ξ_i| < 1 can be determined by using the conformal mapping

ξ = (1 + z) / (1 - z)

which maps the interior of the unit circle |ξ| = 1 onto the left half plane Re(z) < 0, and the unit circle |ξ| = 1 onto the imaginary axis.

Simplifying, we can write z = (ξ - 1)/(ξ + 1). We find that ξ = 1 is mapped onto z = 0 and ξ = -1 is taken to z = -∞.

Fig.5.2 Mapping of the interior of the unit circle

onto the left half plane


We can then apply the Routh-Hurwitz

criterion which gives the necessary

and sufficient conditions for the roots

of characteristic equation to have

negative real parts.

Theorem (Hurwitz). Let

p(z) = a_0 z^k + a_1 z^{k-1} + ... + a_k

and

D = | a_1  a_3  a_5  ...  a_{2k-1} |
    | a_0  a_2  a_4  ...  a_{2k-2} |
    | 0    a_1  a_3  ...  a_{2k-3} |
    | 0    a_0  a_2  ...  a_{2k-4} |
    | ...                          |
    | 0    0    0    ...  a_k      |

where a_j ≥ 0 for j = 1, 2, ..., k and a_j = 0 for j > k. Then, the real parts of the roots of p(z) = 0 are negative if and only if the leading principal minors of D are positive.

Proof.

We have

k = 1: a_0 > 0, a_1 > 0

k = 2: a_0 > 0, a_1 > 0, a_2 > 0

k = 3: a_0 > 0, a_1 > 0, a_2 > 0, a_3 > 0, a_1 a_2 - a_3 a_0 > 0

as the necessary and sufficient conditions for the real parts of the roots to be negative. Hence, these are the necessary and sufficient conditions that the roots of the original polynomial c_0 ξ + c_1 = 0, or c_0 ξ² + c_1 ξ + c_2 = 0, or c_0 ξ³ + c_1 ξ² + c_2 ξ + c_3 = 0, from which p(z) is derived, satisfy |ξ_i| < 1.

For example, consider the equation c_0 ξ² + c_1 ξ + c_2 = 0. Substituting ξ = (1 + z)/(1 - z) and simplifying, we get

c_0 (1 + z)² + c_1 (1 - z²) + c_2 (1 - z)² = 0

or (c_0 - c_1 + c_2) z² + 2(c_0 - c_2) z + (c_0 + c_1 + c_2) = 0

where a_0 = c_0 - c_1 + c_2, a_1 = 2(c_0 - c_2), and a_2 = c_0 + c_1 + c_2.

In the case of a polynomial of degree k, we get the first and last coefficients as

a_0 = c_0 - c_1 + c_2 - ... + (-1)^k c_k and a_k = c_0 + c_1 + c_2 + ... + c_k.

If any of the a_i's is equal to zero and the other a_i's are greater than zero, then it indicates that a root lies on the unit circle |ξ| = 1. If any of the a_i's is less than zero, then there is at least one root for which |ξ_i| > 1.
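The substitution ξ = (1 + z)/(1 - z) can be carried out mechanically for any degree by expanding (1 + z)^(k-i)(1 - z)^i. The following sketch (function name and coefficient ordering are our own; treat it as an illustration of the mapping described above) returns the transformed coefficients, with a[j] the coefficient of z^j:

```python
from math import comb

def unit_circle_to_half_plane(c):
    # For p(xi) = c[0] xi^k + c[1] xi^(k-1) + ... + c[k], substitute
    # xi = (1 + z)/(1 - z) and multiply by (1 - z)^k.  Roots of the
    # returned polynomial (a[j] = coefficient of z^j) have negative
    # real parts exactly when the roots of p satisfy |xi| < 1.
    k = len(c) - 1
    a = [0] * (k + 1)
    for i, ci in enumerate(c):
        # coefficient of z^j in (1 + z)^(k-i) * (1 - z)^i
        for j in range(k + 1):
            s = sum(comb(k - i, m) * comb(i, j - m) * (-1) ** (j - m)
                    for m in range(max(0, j - i), min(k - i, j) + 1))
            a[j] += ci * s
    return a

# Quadratic c0 xi^2 + c1 xi + c2 with (c0, c1, c2) = (2, 3, 1):
# expect z^2: c0 - c1 + c2 = 0, z: 2(c0 - c2) = 2, 1: c0 + c1 + c2 = 6
coeffs = unit_circle_to_half_plane([2, 3, 1])
```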

Example 6.

Find the general solutions of the difference equations

i. Δ²u_n - 3Δu_n + 2u_n = 0,

ii. Δ²u_n + Δu_n + (1/4) u_n = 0,

iii. Δ²u_n - 2Δu_n + 2u_n = 0,

iv. Δ²u_{n+1} - (1/3) Δ²u_n = 0.

Is the solution bounded?

Solution.

We substitute for the forward differences in the difference equations.

i. (u_{n+2} - 2u_{n+1} + u_n) - 3(u_{n+1} - u_n) + 2u_n = 0

or u_{n+2} - 5u_{n+1} + 6u_n = 0.

Substituting u_n = Aξ^n, where A ≠ 0 is a constant, we obtain the characteristic equation

ξ² - 5ξ + 6 = 0

whose roots are ξ = 2, 3. Thus, the general solution of the difference equation becomes

u_n = c_1 (2^n) + c_2 (3^n).

Since u_n → ∞ as n → ∞, the solution is unbounded.

ii. (u_{n+2} - 2u_{n+1} + u_n) + (u_{n+1} - u_n) + (1/4) u_n = 0

or u_{n+2} - u_{n+1} + (1/4) u_n = 0.

Substituting u_n = Aξ^n, where A ≠ 0 is a constant, we obtain the characteristic equation

ξ² - ξ + (1/4) = 0

whose roots are ξ = 1/2, 1/2. Thus, the general solution becomes

u_n = (c_1 + n c_2)(1/2)^n.

Since u_n → 0 as n → ∞, the solution is bounded.

iii. (u_{n+2} - 2u_{n+1} + u_n) - 2(u_{n+1} - u_n) + 2u_n = 0

or u_{n+2} - 4u_{n+1} + 5u_n = 0.

Substituting u_n = Aξ^n, A ≠ 0, a constant, we obtain the characteristic equation

ξ² - 4ξ + 5 = 0

whose roots are

ξ = 2 ± i = r e^{±iθ}, where r = |ξ| = √5 and θ = tan^{-1}(1/2).

Therefore, the general solution becomes

u_n = (c_1 cos nθ + c_2 sin nθ)(√5)^n.

Since u_n → ∞ as n → ∞, the solution is unbounded.

iv. (u_{n+3} - 2u_{n+2} + u_{n+1}) - (1/3)(u_{n+2} - 2u_{n+1} + u_n) = 0

or 3u_{n+3} - 7u_{n+2} + 5u_{n+1} - u_n = 0.

Substituting u_n = Aξ^n, A ≠ 0, a constant, we obtain the characteristic equation

3ξ³ - 7ξ² + 5ξ - 1 = 0

whose roots are ξ = 1, 1, 1/3. Therefore, the general solution becomes

u_n = c_1 + n c_2 + (1/3)^n c_3.

Since u_n → ∞ as n → ∞, the solution is unbounded.
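A general solution like those above can be checked by iterating the recurrence directly and comparing with the closed form. A minimal sketch (the helper name and the chosen initial values are our own):

```python
def iterate(coeffs, init, n):
    # Iterate the homogeneous linear difference equation
    # a0 u_{n+k} + a1 u_{n+k-1} + ... + ak u_n = 0,
    # i.e. u_{n+k} = -(a1 u_{n+k-1} + ... + ak u_n) / a0.
    a0, rest = coeffs[0], coeffs[1:]
    u = list(init)
    while len(u) <= n:
        u.append(-sum(a * x for a, x in zip(rest, u[::-1])) / a0)
    return u[n]

# Case (i): u_{n+2} - 5 u_{n+1} + 6 u_n = 0.  With u_0 = 2, u_1 = 5
# the closed form is u_n = 2^n + 3^n (c1 = c2 = 1).
vals = [iterate([1, -5, 6], [2, 5], n) for n in range(8)]
```

Each iterated value should match 2^n + 3^n, confirming both the characteristic roots and the unbounded growth.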

Example 7.

Solve the difference equation

Δ²y_j + 3Δy_j - 4y_j = j²

with the initial conditions y_0 = 0, y_2 = 2.

Solution.

We substitute for the forward differences in the difference equation and get

(y_{j+2} - 2y_{j+1} + y_j) + 3(y_{j+1} - y_j) - 4y_j = j²

or y_{j+2} + y_{j+1} - 6y_j = j².

The general solution of the difference equation is of the form

y_j = y^(H) + y^(p)

where y^(H) is the complementary solution and y^(p) is the particular solution.

To determine the solution y^(H), we assume the solution in the form y_j = Aξ^j, substitute into the homogeneous difference equation, and get the characteristic equation

ξ² + ξ - 6 = 0.

The roots of the characteristic equation are ξ = -3 and 2.

The complementary function y^(H) is therefore

y^(H) = c_1 (-3)^j + c_2 (2)^j.

We assume the particular solution in the form

y^(p) = a j² + b j + c

and substitute into the difference equation. We obtain

(a(j+2)² + b(j+2) + c) + (a(j+1)² + b(j+1) + c) - 6(a j² + b j + c) = j².

Comparing the right-hand side with the left-hand side, we get the following equations:

-4a = 1,

6a - 4b = 0,

5a + 3b - 4c = 0,

whose solution is a = -1/4, b = -3/8 and c = -19/32.

The general solution becomes

y_j = c_1 (-3)^j + c_2 (2)^j - (1/4) j² - (3/8) j - 19/32.

The initial conditions give

j = 0: c_1 + c_2 - 19/32 = 0

j = 2: 9 c_1 + 4 c_2 - 75/32 = 2

Solving for c_1 and c_2, we obtain

y_j = (1/160) [63 (-3)^j + 32 (2)^j - 40 j² - 60 j - 95]
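The closed form from Example 7 can be verified against the recurrence y_{j+2} = j² - y_{j+1} + 6 y_j, seeded with the closed form's first two values (an illustrative check, not part of the original text):

```python
def y_closed(j):
    # Closed-form solution obtained in Example 7
    return (63 * (-3)**j + 32 * 2**j - 40*j*j - 60*j - 95) / 160

# Recurrence form of y_{j+2} + y_{j+1} - 6 y_j = j^2
y = [y_closed(0), y_closed(1)]
for j in range(8):
    y.append(j*j - y[j + 1] + 6*y[j])
```

The two sequences should agree, and y_closed(0) = 0, y_closed(2) = 2 reproduce the prescribed conditions.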

Example 8.

Find the range for α, so that the roots of the characteristic equations of the difference equations

i. (1 - 5α) y_{n+2} - (1 + 8α) y_{n+1} + α y_n = 0

ii. (1 - 9α) y_{n+3} - (1 + 19α) y_{n+2} + 5α y_{n+1} - α y_n = 0

are less than one in magnitude.

i. The characteristic equation of the difference equation is obtained as

(1 - 5α) ξ² - (1 + 8α) ξ + α = 0.

Since the coefficients involve a parameter, we use the Routh-Hurwitz criterion. Setting ξ = (1 + z)/(1 - z), we obtain the transformed characteristic equation as

(1 - 5α)(1 + z)² - (1 + 8α)(1 - z²) + α(1 - z)² = 0

or (2 + 4α) z² + (2 - 12α) z - 12α = 0.

The Routh-Hurwitz criterion is satisfied if

2 + 4α > 0,  2 - 12α > 0,  -12α > 0.

The second and third inequalities are satisfied for all α < 0. From the first inequality, we get α > -(1/2). Therefore, |ξ| < 1 for all α satisfying α ∈ (-1/2, 0). For α = 0, we get ξ² - ξ = 0, whose roots are ξ = 0, 1, indicating a root on the unit circle |ξ| = 1.

ii. The characteristic equation of the difference equation is obtained as

(1 - 9α) ξ³ - (1 + 19α) ξ² + 5α ξ - α = 0.

Since the coefficients involve a parameter, we use the Routh-Hurwitz criterion. Setting ξ = (1 + z)/(1 - z), we obtain the transformed characteristic equation as

(1 - 9α)(1 + z)³ - (1 + 19α)(1 + z)²(1 - z) + 5α(1 + z)(1 - z)² - α(1 - z)³ = 0

or (2 + 16α) z³ + (4 - 16α) z² + (2 - 48α) z - 24α = 0

or a_0 z³ + a_1 z² + a_2 z + a_3 = 0

where a_0 = 2 + 16α, a_1 = 4 - 16α, a_2 = 2 - 48α, a_3 = -24α.

The Routh-Hurwitz criterion is satisfied if

a_0 = 2(1 + 8α) > 0, a_1 = 4(1 - 4α) > 0, a_2 = 2(1 - 24α) > 0, a_3 = -24α > 0, and

a_1 a_2 - a_0 a_3 = 8[(1 - 4α)(1 - 24α) + 6α(1 + 8α)] = 8[1 - 22α + 144α²] > 0.
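The range derived in case (i) can be spot-checked by computing the root magnitudes directly for sample values of α. A minimal sketch using NumPy (the sample values and helper name are our own):

```python
import numpy as np

def max_root_magnitude(alpha):
    # Characteristic equation of case (i):
    # (1 - 5a) xi^2 - (1 + 8a) xi + a = 0
    roots = np.roots([1 - 5*alpha, -(1 + 8*alpha), alpha])
    return float(max(abs(roots)))

inside = max_root_magnitude(-0.25)   # alpha in (-1/2, 0): expect |xi| < 1
outside = max_root_magnitude(0.5)    # alpha outside the derived range
```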

CYP QUESTIONS.

1. Convert the following differential

equation into a system of first

order ordinary differential

equations and obtain the

corresponding initial value

problem.

y''' = 3y'' + 6t y' + 10t² y + sin t

y(0) = 1, y'(0) = 2, y''(0) = 3.

2. Convert the following system of

second order ordinary

differential equations into a


system of first order ordinary

differential equations, and write

the corresponding initial value

problem.

y'' = e^t + y' + u' + u + y,  y(1) = 3, y'(1) = 1

3. Using the matrix method, solve

the following system


dx/dt = -(3/100) x,  x(0) = 50

dy/dt = (3/100) x - (3/50) y,  y(0) = 0

4. Using the matrix method and

diagonalisation procedure,

obtain the solution of the system

of equations

u_1' = -3u_1 + u_2 + 3 cos t

u_2' = u_1 - 3u_2 - 2 cos t - 3 sin t.
5. Find yj from the difference

equation
Δ²y_{j+1} + (1/2) Δ²y_j = 0,  j = 0, 1, 2, ...

when y_0 = 0, y_1 = 1/2, y_2 = 1/4. Is this method numerically stable?

6. Solve the difference equation

u_{j+1} - 2(sin x) u_j + u_{j-1} = 0

when u 0 = 0 and u 1 = cos x.

7. Find the general solution of the

difference equation.
u_{j+1} - 2u_j = j 2^{-j}.

8. Show that all solutions of the

difference equation

y_{j+1} - 2λ y_j + y_{j-1} = 0

are bounded when j → ∞ if -1 < λ < 1, while for all other complex values of λ there is at least one unbounded solution.

5.3 NUMERICAL METHODS

The first step in obtaining a

numerical solution of the differential

equation is to partition the interval

[t 0 , b] on which the solution is

desired, into a finite number of

subintervals by the points t_0 < t_1 < t_2 < ... < t_N = b.

The points are called the mesh points

or the grid points. The spacing

between the points is given by h_j = t_j - t_{j-1}, j = 1, 2, ..., N, which is called

the mesh spacing or step length. For

simplicity, we assume that the points


are spaced uniformly, i.e.

h j = h = constant, j = 1,2,..., N.

The uniform mesh points are given by

t j = t 0 + j h , j = 0, 1,2, ....N.

In numerical methods, we determine a number u_j, which is an approximation to the solution u(t) at the point t_j. The set of numbers {u_j}, i.e., u_0, u_1, ..., u_N, is the numerical solution of the initial value problem.

The difference equations. There are many difference approximations possible for a given differential equation. As an example, let us develop expressions for the first derivative in terms of the forward, backward and central difference operators. We assume that the function u(t) may be expanded in a Taylor series in the closed interval t − h ≤ t ≤ t + h. We write

u(t ± h) = u(t) ± (h/1!) u′(t) + (h²/2!) u″(t) ± ... + (±1)^p (h^p/p!) u^(p)(t) + ...

where a prime denotes differentiation with respect to t. We then have

[u(t + h) − u(t)]/h = du/dt + O(h),

where the notation O(h) means that the first term neglected is of order h. Similarly, we obtain

[u(t) − u(t − h)]/h = du/dt + O(h),

and [u(t + h) − u(t − h)]/(2h) = du/dt + O(h²).

A difference approximation to u′(t) at t = t_j is obtained by neglecting the error term. We have

u′(t_j) ≈ (i) (u_{j+1} − u_j)/h,
(ii) (u_j − u_{j−1})/h,
(iii) (u_{j+1} − u_{j−1})/(2h).

We use these approximations for u′(t) in the differential equation at the mesh point t_j. This gives

i. (u_{j+1} − u_j)/h = f(t_j, u_j)
ii. (u_j − u_{j−1})/h = f(t_j, u_j)
iii. (u_{j+1} − u_{j−1})/(2h) = f(t_j, u_j)

The difference equations (i) and (ii) are of first order and the difference equation (iii) is of second order. The methods (i) and (ii) are called single step methods and the method (iii) is called a two step or a multistep method.
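The accuracy orders quoted above can be checked numerically. The following is a minimal Python sketch (the function names are illustrative, not from the text); it confirms that the one-sided formulas are O(h) accurate while the central formula is O(h²):

```python
import math

def forward(u, t, h):   # (u(t+h) - u(t)) / h, O(h) accurate
    return (u(t + h) - u(t)) / h

def backward(u, t, h):  # (u(t) - u(t-h)) / h, O(h) accurate
    return (u(t) - u(t - h)) / h

def central(u, t, h):   # (u(t+h) - u(t-h)) / (2h), O(h^2) accurate
    return (u(t + h) - u(t - h)) / (2 * h)

u = math.sin            # sample test function with u'(t) = cos(t)
t, exact = 1.0, math.cos(1.0)
for h in (0.1, 0.05):
    print(h, abs(forward(u, t, h) - exact), abs(central(u, t, h) - exact))
```

Halving h roughly halves the one-sided errors but quarters the central error, consistent with the stated orders.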

Local Truncation Error

The error in an approximation is the difference between the exact solution u(t_{j+1}) at t = t_{j+1} and the solution value u_{j+1} determined from a numerical method using exact arithmetic, and is called the local truncation error or local discretization error. We have

T_{j+1} = u(t_{j+1}) − u_{j+1}, j = 0, 1, ..., N − 1.


Convergence

The numerical solution u_j will contain errors. We shall be concerned with the effect of these errors on the solution, and ask what happens as we try to get a more accurate solution by taking more grid points. A method is convergent if, as more grid points are taken or the step size is decreased, the numerical solution converges to the exact solution, in the absence of roundoff errors.

Stability

A method is stable if the cumulative effect of all errors, including roundoff errors, is bounded, independent of the number of mesh points.


The concepts of stability and convergence can be understood by analyzing the numerical solutions of the test equation u′ = λu, which is the linearised form of the nonlinear equation u′ = f(t, u).

Analysis of the Numerical Solution of the Test Equation u′ = λu. Consider the initial value problem for the test equation as u′ = λu, u(t_0) = η_0.

The exact solution of the initial value problem is

u(t) = η_0 e^{λ(t − t_0)}.

Substituting t = t_{j+1} and t = t_j and dividing, we get

u(t_{j+1})/u(t_j) = [η_0 e^{λ(t_{j+1} − t_0)}] / [η_0 e^{λ(t_j − t_0)}] = e^{λ(t_{j+1} − t_j)}

or u(t_{j+1}) = e^{λh} u(t_j),

where t_{j+1} − t_j = h. A numerical method can be obtained by writing an approximation to e^{λh}. We may use the following Padé approximations:

e^{λh} ≈ 1 + λh (first order)
≈ 1 + λh + λ²h²/2 (second order)
≈ 1 + λh + λ²h²/2 + λ³h³/6 (third order)
≈ 1 + λh + λ²h²/2 + λ³h³/6 + λ⁴h⁴/24 (fourth order)
≈ [1 + (λh/2)] / [1 − (λh/2)] (second order)

Let this approximation be denoted by E(λh), that is, E(λh) ≈ e^{λh}.



The numerical method is defined by

u_{j+1} = E(λh) u_j.

Adding the truncation error, we find that the exact solution satisfies the equation

u(t_{j+1}) = E(λh) u(t_j) + T_j.

The exact solution is u(t_{j+1}) = e^{λh} u(t_j).

Let ε_j = u_j − u(t_j). We get from the above equations

ε_{j+1} + u(t_{j+1}) = E(λh)[ε_j + u(t_j)]
or ε_{j+1} = −u(t_{j+1}) + E(λh)[ε_j + u(t_j)]
= −e^{λh} u(t_j) + E(λh)[ε_j + u(t_j)]
= [E(λh) − e^{λh}] u(t_j) + E(λh) ε_j.
The first term on the right hand side is the truncation error and the second term is the propagation error, that is, the error carried over from the jth step. E(λh) is called the propagation factor. The truncation error is in our control while the propagation error is not in our control. For the next step, we get

ε_{j+2} = [E(λh) − e^{λh}] u(t_{j+1}) + E(λh) ε_{j+1}
= [E(λh) − e^{λh}] e^{λh} u(t_j) + E(λh){[E(λh) − e^{λh}] u(t_j) + E(λh) ε_j}
= [E(λh) − e^{λh}][e^{λh} + E(λh)] u(t_j) + E²(λh) ε_j
= [E²(λh) − e^{2λh}] u(t_j) + E²(λh) ε_j.
Again, the first term is the truncation error at the step j + 2, while the second term is the error propagated from the step j. At the step j + k, the second term is of the form E^k(λh) ε_j. As j → ∞, meaningful results are obtained only if the propagation error decays or is at least bounded. Hence, the propagation factor should satisfy the condition

|E(λh)| ≤ 1.

If this condition is satisfied, then the method is called absolutely stable. If |E(λh)| < e^{λh}, then the method is called relatively stable. If λ > 0, then E(λh) always satisfies the condition E(λh) < e^{λh} and hence the methods derived from these approximations are relatively stable.

If λ < 0, then the methods are absolutely stable if |E(λh)| ≤ 1. The intervals for λh derived from this condition are called the intervals of absolute stability. We have the following examples.

i. First order approximation. We have

|E(λh)| = |1 + λh| < 1.

From −1 < 1 + λh < 1, we get the interval of stability as −2 < λh < 0, or λh ∈ (−2, 0).

ii. Second order approximation: we have

|E(λh)| = |1 + λh + λ²h²/2| < 1

or −1 < 1 + λh + λ²h²/2 < 1.

Simplifying, we get −1 < [(λh + 1)² + 1]/2 < 1

or −2 < (λh + 1)² + 1 < 2.

The left inequality is always satisfied. The right inequality gives (λh + 1)² < 1, or |λh + 1| < 1, giving λh ∈ (−2, 0). Hence, the interval of stability of the second order method is also (−2, 0).

iii. Third order approximation: we have

|E(λh)| = |1 + λh + λ²h²/2 + λ³h³/6| < 1.

We find that λh ∈ (−2.5, 0).

iv. Fourth order approximation: we have

|E(λh)| = |1 + λh + λ²h²/2 + λ³h³/6 + λ⁴h⁴/24| < 1.

We find that λh ∈ (−2.78, 0).

v. Second order Padé approximation: we have

|E(λh)| = |1 + (λh/2)| / |1 − (λh/2)| < 1, or |1 + (λh/2)| < |1 − (λh/2)|.

For λ < 0, this inequality is always satisfied. Hence, the interval of absolute stability is (−∞, 0). Such methods are also called unconditionally stable methods.
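These intervals can be verified by direct sampling. The following is a minimal Python sketch (function names are illustrative), assuming real λh, which checks |E(λh)| ≤ 1 on each quoted interval:

```python
# Verify the quoted absolute-stability intervals by sampling E(lh)
# along the negative real axis for each approximation to e^{lh}.
def E1(x): return 1 + x
def E2(x): return 1 + x + x**2 / 2
def E3(x): return 1 + x + x**2 / 2 + x**3 / 6
def E4(x): return 1 + x + x**2 / 2 + x**3 / 6 + x**4 / 24
def Epade(x): return (1 + x / 2) / (1 - x / 2)

def stable_on(E, a, b, n=2000):
    # True if |E(lh)| <= 1 at every interior sample point of (a, b)
    return all(abs(E(a + (b - a) * k / n)) <= 1 + 1e-12 for k in range(1, n))

print(stable_on(E1, -2, 0), stable_on(E3, -2.5, 0), stable_on(E4, -2.78, 0))
```

The Padé quotient passes the test on arbitrarily long negative intervals, illustrating unconditional stability.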

Euler Method

u_{j+1} = u_j + h f_j, j = 0, 1, 2, ..., N − 1, where f_j = f(t_j, u_j). This is called the Euler method. Applying the method at the mesh points t_j, j = 0, 1, 2, ..., N − 1, we obtain the numerical solution from

u_1 = u_0 + h f_0
u_2 = u_1 + h f_1
...
u_N = u_{N−1} + h f_{N−1}

Choosing a suitable value of h and using the initial condition, u_1 is obtained from the first equation, u_2 from the second equation, and so on. The method is called an explicit method, since using u_j, h and f_j we can calculate u_{j+1} directly. The Euler method is the simplest method. The truncation error in the method is given by

T_{j+1} = u(t_{j+1}) − u_{j+1} = u(t_{j+1}) − [u(t_j) + h f(t_j, u_j)] = (h²/2) u″(ξ),

where t_j < ξ < t_{j+1}. If we denote max_{[t_0, b]} |T_{j+1}| = T and max_{[t_0, b]} |u″(t)| = M_2, then we have

T ≤ (h²/2) M_2.

The local truncation error is of O(h²) as h → 0.

Example 1.

Use the Euler method to solve numerically the initial value problem

u′ = −2tu², u(0) = 1

with h = 0.2, 0.1 and 0.05 on the interval [0, 1]. Neglecting the roundoff errors, determine the bound for the error. Apply Richardson's extrapolation to improve the computed value u(1.0). We have

u_{j+1} = u_j − 2h t_j u_j², j = 0, 1, 2, 3, 4

with h = 0.2. The initial condition gives u_0 = 1.

For j = 0: t_0 = 0, u_0 = 1,
u(0.2) ≈ u_1 = u_0 − 2h t_0 u_0² = 1.0.

For j = 1: t_1 = 0.2, u_1 = 1,
u(0.4) ≈ u_2 = u_1 − 2h t_1 u_1² = 1 − 2(0.2)(0.2)(1)² = 0.92.

For j = 2: t_2 = 0.4, u_2 = 0.92,
u(0.6) ≈ u_3 = u_2 − 2h t_2 u_2² = 0.92 − 2(0.2)(0.4)(0.92)² = 0.78458.

Similarly, we get

u(0.8) ≈ u_4 = 0.63684, u(1.0) ≈ u_5 = 0.50706.
When h = 0.1, we get

for j = 0: t_0 = 0, u_0 = 1,
u(0.1) ≈ u_1 = u_0 − 2h t_0 u_0² = 1.0.

For j = 1: t_1 = 0.1, u_1 = 1,
u(0.2) ≈ u_2 = u_1 − 2h t_1 u_1² = 1 − 2(0.1)(0.1)(1)² = 0.98.

For j = 2: t_2 = 0.2, u_2 = 0.98,
u(0.3) ≈ u_3 = u_2 − 2h t_2 u_2² = 0.98 − 2(0.1)(0.2)(0.98)² = 0.94158.

Similarly, we get

u(0.4) ≈ u_4 = 0.88839, u(0.5) ≈ u_5 = 0.82525,
u(0.6) ≈ u_6 = 0.75715, u(0.7) ≈ u_7 = 0.68835,
u(0.8) ≈ u_8 = 0.62202, u(0.9) ≈ u_9 = 0.56011,
u(1.0) ≈ u_10 = 0.50364.
For h = 0.05, we get

u(0.05) ≈ 1.0, u(0.1) ≈ 0.995,
u(0.15) ≈ 0.9851, u(0.2) ≈ 0.97054,
u(0.25) ≈ 0.9517, u(0.3) ≈ 0.92906,
u(0.35) ≈ 0.90316, u(0.4) ≈ 0.87461,
u(0.45) ≈ 0.84401, u(0.5) ≈ 0.81195,
u(0.55) ≈ 0.77899, u(0.6) ≈ 0.74561,
u(0.65) ≈ 0.71225, u(0.7) ≈ 0.67928,
u(0.75) ≈ 0.64698, u(0.8) ≈ 0.61559,
u(0.85) ≈ 0.58527, u(0.9) ≈ 0.55615,
u(0.95) ≈ 0.52831, u(1.0) ≈ 0.50179.

The truncation error in the Euler method is given by

TE = (h²/2) u″(ξ),
|TE| ≤ (h²/2) max_{0 ≤ t ≤ 1} |u″(t)|.

Since the exact solution is u(t) = 1/(1 + t²), we get

|TE| ≤ (h²/2) max_{0 ≤ t ≤ 1} |2(1 − 3t²)/(1 + t²)³| ≤ 2h²,

as the absolute maximum of (1 − 3t²) in [0, 1] is 2.

The error in the Euler method is of the form

u(t_j) − u_j(h) = c_1 h + c_2 h² + c_3 h³ + ...

Richardson's extrapolation gives

u^(k)(h) = [2^k u^(k−1)(h/2) − u^(k−1)(h)] / (2^k − 1).

We have the following extrapolated values of u(1.0).

Extrapolated Values for u(1.0)

h      u^(0)(h)   u^(1)(h)   u^(2)(h)
0.20   0.50706
0.10   0.50364    0.50022
0.05   0.50179    0.49994    0.49985

The extrapolated values approach the exact value u(1.0) = 0.5.
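The h = 0.2 and h = 0.1 entries of the table, and the first extrapolated value, can be reproduced with a short Python sketch (the function names are illustrative, not from the text):

```python
# Euler's method for u' = -2 t u^2, u(0) = 1, plus one level of
# Richardson extrapolation at t = 1 (Example 1).
def euler(f, t0, u0, h, n):
    t, u = t0, u0
    for _ in range(n):
        u = u + h * f(t, u)   # u_{j+1} = u_j + h f(t_j, u_j)
        t = t + h
    return u

f = lambda t, u: -2 * t * u * u
u_h2 = euler(f, 0.0, 1.0, 0.2, 5)    # h = 0.2, approximates u(1.0)
u_h1 = euler(f, 0.0, 1.0, 0.1, 10)   # h = 0.1
extrap = 2 * u_h1 - u_h2             # (2^1 u(h/2) - u(h)) / (2^1 - 1)
print(round(u_h2, 5), round(u_h1, 5), round(extrap, 5))
```

Since the Euler error is O(h), one extrapolation level removes the leading c_1 h term and the result moves much closer to the exact value 0.5.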

Example 2.

Show that in the Euler method the bound of the truncation error, when applied to the test equation u′ = λu, u(a) = B, λ > 0, can be written as

|u(t_j) − u(t_j, h)| ≤ (hM/2λ)[exp{λ(t_j − a)} − 1],

where M = max |u″(t)|. Generalise the result when applied to the problem u′ = f(t, u), u(a) = B.

Applying the Euler method to the test equation u′ = λu, we get

u_{j+1} = u_j + λh u_j = (1 + λh) u_j.

The exact solution satisfies the equation

u(t_{j+1}) = (1 + λh) u(t_j) + T_{j+1},

where T_{j+1} is the truncation error given by T_{j+1} = (h²/2) u″(ξ_j).

Subtracting the two equations and setting ε_j = u(t_j) − u_j, we obtain

u(t_{j+1}) − u_{j+1} = (1 + λh)[u(t_j) − u_j] + T_{j+1}
or ε_{j+1} = (1 + λh) ε_j + T_{j+1}.

Hence, |ε_{j+1}| ≤ |1 + λh| |ε_j| + |T_{j+1}|.

Let A = |1 + λh| and

|T_{j+1}| = (h²/2)|u″(ξ_j)| ≤ (h²/2) M = T,

where M = max_{[a, b]} |u″(t)|. Denote E_{j+1} = max |ε_{j+1}|. Then, for j = 0, 1, 2, ..., we get

E_1 ≤ A E_0 + T
E_2 ≤ A E_1 + T ≤ A² E_0 + (1 + A) T
E_3 ≤ A E_2 + T ≤ A³ E_0 + (1 + A + A²) T
...
E_j ≤ A^j E_0 + (1 + A + A² + ... + A^{j−1}) T
= A^j E_0 + [(A^j − 1)/(A − 1)] T, where A ≠ 1.

Let E_0 = 0, that is, there is no initial error. Now,

(1 + λh)^j < exp[λjh] ≤ exp[λ(t_j − a)], λ > 0.

Since λ > 0, we have A − 1 = λh, and we get

E_j < [h²M/(2λh)] [exp{λ(t_j − a)} − 1] = (hM/2λ)[exp{λ(t_j − a)} − 1].

Since max |ε_j| = E_j,

|ε_j| = |u(t_j) − u(t_j, h)| ≤ (hM/2λ)[exp{λ(t_j − a)} − 1].

For the problem u′ = f(t, u), u(a) = B, the same argument with the Lipschitz constant L in place of λ gives

|u(t_j) − u(t_j, h)| ≤ (hM/2L)[exp{L(t_j − a)} − 1],

where |∂f/∂y| ≤ L.

Example 3.

The system y′ = z, z′ = −by − az, where 0 < a < 2b, b > 0, is to be integrated by the Euler method with known initial values. What is the largest step length h for which all solutions of the corresponding difference equation are bounded?

Proof.

Applying the Euler method, we obtain

y_{j+1} = y_j + h z_j,
z_{j+1} = z_j + h(−b y_j − a z_j) = −bh y_j + (1 − ah) z_j,

which may be written as

[ y_{j+1} ]   [  1      h    ] [ y_j ]
[ z_{j+1} ] = [ −bh   1 − ah ] [ z_j ]

or u_{j+1} = A u_j, where u_j = [y_j  z_j]^T and

A = [  1      h    ]
    [ −bh   1 − ah ].

The characteristic equation of the matrix A is given by

λ² − (2 − ah)λ + 1 − ah + bh² = 0.

Putting λ = (1 + z)/(1 − z) in the characteristic equation, we get the reduced characteristic equation as

(4 − 2ah + bh²) z² + 2h(a − bh) z + bh² = 0.

The roots of the characteristic equation will lie within the unit circle, or equivalently the roots of the reduced characteristic equation will lie in the left half of the z-plane, if and only if the following conditions are satisfied:

4 − 2ah + bh² > 0, a − bh > 0, bh² > 0.

The inequality a − bh > 0 gives h < a/b. The inequality bh² > 0 is satisfied since b > 0. Hence, we obtain h < a/b.

For h = a/b, or a = bh, we get the characteristic equation as

λ² − (2 − bh²)λ + 1 = 0.

The roots are λ = [(2 − bh²) ± √((2 − bh²)² − 4)]/2.

Since b > 0, (2 − bh²)² − 4 < 0, the roots are a complex pair and |λ| = 1. Thus, the largest step length is given by h ≤ a/b.
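This conclusion can be checked numerically. The following is a minimal Python sketch, using the hypothetical values a = 1, b = 1 (which satisfy 0 < a < 2b); it computes the spectral radius of the Euler iteration matrix directly from its characteristic equation:

```python
# Roots of m^2 - (2 - a h) m + (1 - a h + b h^2) = 0 give the growth
# factors of the Euler difference scheme for y' = z, z' = -b y - a z.
import cmath

def spectral_radius(h, a=1.0, b=1.0):
    tr, det = 2 - a * h, 1 - a * h + b * h * h   # trace and determinant of A
    disc = cmath.sqrt(tr * tr - 4 * det)
    return max(abs((tr + disc) / 2), abs((tr - disc) / 2))

# bounded up to h = a/b = 1, unbounded just beyond it
print(spectral_radius(1.0), spectral_radius(1.1))
```

At h = a/b the roots are a complex pair of modulus exactly 1, and slightly beyond this step length the modulus exceeds 1, as the analysis predicts.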
Backward Euler Method

The equation at the mesh point t = t_{j+1} may be written as u_{j+1} = u_j + h f_{j+1}, where f_{j+1} = f(t_{j+1}, u_{j+1}). This is called the backward Euler method. The solution values u_1, u_2, ..., u_N are determined from the following equations:

u_1 = u_0 + h f_1
u_2 = u_1 + h f_2
...
u_N = u_{N−1} + h f_N

The method involves the unknown value u_{j+1} on both sides of the equation and hence defines u_{j+1} implicitly. Such a method is called an implicit method.
Since f(t, u) is nonlinear, this produces a nonlinear algebraic equation for the solution u_{j+1}. For the solution of this nonlinear equation, we can use the Newton-Raphson method. Applying the Newton-Raphson method, we get

u_{j+1}^(s+1) = u_j + h[f(t_{j+1}, u_{j+1}^(s)) + (u_{j+1}^(s+1) − u_{j+1}^(s)) ∂f/∂u]

or, after rearrangement,

(1 − h ∂f/∂u)(u_{j+1}^(s+1) − u_{j+1}^(s)) = u_j − u_{j+1}^(s) + h f(t_{j+1}, u_{j+1}^(s)),

where u_{j+1}^(s) is the sth iterate of u_{j+1} and ∂f/∂u is evaluated at (t_{j+1}, u_{j+1}^(s)). We iterate until

|u_{j+1}^(s+1) − u_{j+1}^(s)| ≤ TOL,

where TOL is a specified absolute error tolerance. Since the solution u(t) varies continuously, we may take the initial approximation as u_{j+1}^(0) = u_j. The local truncation error at the point t = t_{j+1} in the method is given by

T_{j+1} = u(t_{j+1}) − u_{j+1}
= u(t_{j+1}) − [u(t_j) + h f(t_{j+1}, u(t_{j+1}))]
= −(h²/2) u″(ξ), t_j < ξ < t_{j+1}.

It may be noted that the truncation errors of the Euler and backward Euler methods are equal in magnitude but of opposite signs.
Example 4.

Solve the initial value problem u′ = −2tu², u(0) = 1, with h = 0.2, on the interval [0, 0.4], using the backward Euler method.

Proof.

The backward Euler method gives

u_{j+1} = u_j − 2h t_{j+1} u_{j+1}², j = 0, 1.

This is an implicit equation in u_{j+1} and can be solved by using an iterative method. We generally use the Newton-Raphson method. Denote

F(u_{j+1}) = u_{j+1} − u_j + 0.4 t_{j+1} u_{j+1}².

We have F′(u_{j+1}) = 1 + 0.8 t_{j+1} u_{j+1} and

u_{j+1}^(s+1) = u_{j+1}^(s) − F(u_{j+1}^(s)) / F′(u_{j+1}^(s)), s = 0, 1, ...

Taking u_{j+1}^(0) = u_j, we obtain

for j = 0: t_1 = 0.2, u_1^(0) = u_0 = 1,
F(u_1^(0)) = 0.08, F′(u_1^(0)) = 1.16, u_1^(1) = 0.93103448,
F(u_1^(1)) = 0.00038050, F′(u_1^(1)) = 1.14896552, u_1^(2) = 0.93070332,
F(u_1^(2)) = 0.14 × 10⁻⁷, F′(u_1^(2)) = 1.14891253, u_1^(3) = 0.93070331.

Therefore, u(0.2) ≈ u_1 = 0.93070331.

For j = 1: t_2 = 0.4, u_2^(0) = 0.93070331,
F(u_2^(0)) = 0.13859338, F′(u_2^(0)) = 1.29782506, u_2^(1) = 0.82391436,
F(u_2^(1)) = 0.00182463, F′(u_2^(1)) = 1.26365260, u_2^(2) = 0.82247043,
F(u_2^(2)) = 0.00000034, F′(u_2^(2)) = 1.26319054, u_2^(3) = 0.82247016,
F(u_2^(3)) = 0.40 × 10⁻⁸, F′(u_2^(3)) = 1.26319045, u_2^(4) = 0.82247016.

Therefore, u(0.4) ≈ u_2 = 0.82247016.

The exact values are u(0.2) = 0.96153846 and u(0.4) = 0.86206897.
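Example 4 can be reproduced with a short Python sketch of the backward Euler/Newton-Raphson loop (the function name and tolerance are illustrative):

```python
# Backward Euler with a Newton-Raphson inner iteration for
# u' = -2 t u^2, u(0) = 1, h = 0.2 (Example 4).
def backward_euler_step(tj1, uj, h, tol=1e-10):
    u = uj                                     # initial iterate u^(0) = u_j
    while True:
        F = u - uj + 2 * h * tj1 * u * u       # F(u) = u - u_j + 2 h t_{j+1} u^2
        dF = 1 + 4 * h * tj1 * u               # F'(u)
        u_new = u - F / dF
        if abs(u_new - u) <= tol:
            return u_new
        u = u_new

u1 = backward_euler_step(0.2, 1.0, 0.2)        # approximates u(0.2)
u2 = backward_euler_step(0.4, u1, 0.2)         # approximates u(0.4)
print(round(u1, 8), round(u2, 8))
```

The Newton iterates converge in three to four steps, matching the tabulated values 0.93070331 and 0.82247016.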

Mid-Point Method

The equation may be written as

u_{j+1} = u_{j−1} + 2h f_j, j = 1, 2, ..., N − 1.

This is called the mid-point method. The solution values are given by

u_2 = u_0 + 2h f_1
u_3 = u_1 + 2h f_2
......................
u_N = u_{N−2} + 2h f_{N−1}

The value u_0 is known from the initial condition. The value u_1 is unknown and has to be determined by some other method, after which the formula is used to determine successively u_2, u_3, ..., u_N.

The local truncation error is given by

T_{j+1} = u(t_{j+1}) − u_{j+1}
= u(t_{j+1}) − [u(t_{j−1}) + 2h f(t_j, u(t_j))]
= (h³/3) u‴(ξ), t_{j−1} < ξ < t_{j+1}.

The bound on the truncation error is given by

|T_{j+1}| ≤ (h³/3) M_3, where M_3 = max_{t ∈ [t_0, b]} |u‴(t)|.

Example 5.

Solve the initial value problem

u′ = −2tu², u(0) = 1,

using the mid-point method with h = 0.2, over the interval [0, 1]. Use the Taylor series method of second order to compute u(0.2). Determine the percentage relative error at t = 1.

The mid-point method gives

u_{j+1} = u_{j−1} − 4h t_j u_j² = u_{j−1} − 0.8 t_j u_j², j = 1, 2, 3, 4.

We calculate u_1 from the Taylor series method of second order. We have

u_1 ≈ u(h) = u(0) + h u′(0) + (h²/2) u″(0),

where u(0) = 1, u′(0) = 0, u″(0) = −2.

We obtain u_1 ≈ u(0.2) = 1 + 0 + (0.04/2)(−2) = 0.96.

Now, we obtain u_2, u_3, u_4, u_5. We have, for

j = 1: u_0 = 1, u_1 = 0.96, t_1 = 0.2,
u(0.4) ≈ u_2 = u_0 − 0.8 t_1 u_1² = 1 − (0.8)(0.2)(0.96)² = 0.852544.
j = 2: u_1 = 0.96, u_2 = 0.852544, t_2 = 0.4,
u(0.6) ≈ u_3 = u_1 − 0.8 t_2 u_2² = 0.96 − (0.8)(0.4)(0.852544)² = 0.7274139.
j = 3: u_2 = 0.852544, u_3 = 0.7274139, t_3 = 0.6,
u(0.8) ≈ u_4 = u_2 − 0.8 t_3 u_3² = 0.852544 − (0.8)(0.6)(0.7274139)² = 0.5985611.
j = 4: u_3 = 0.7274139, u_4 = 0.5985611, t_4 = 0.8,
u(1.0) ≈ u_5 = u_3 − 0.8 t_4 u_4² = 0.7274139 − (0.8)(0.8)(0.5985611)² = 0.4981176.

The exact solution is u(t) = 1/(1 + t²). Therefore, u(1) = 0.5.

The percentage error is defined as

p.e. = (|u − u*| / |u|) × 100,

where u is the exact solution and u* is the approximate solution. Therefore,

p.e. = (|0.5 − 0.4981176| / 0.5) × 100 = 0.38%.
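The computation of Example 5 can be reproduced with a minimal Python sketch (variable names are illustrative):

```python
# Mid-point method u_{j+1} = u_{j-1} + 2 h f_j for u' = -2 t u^2,
# u(0) = 1, h = 0.2, with u_1 taken from the second order Taylor method.
h = 0.2
f = lambda t, u: -2 * t * u * u
u_prev, u_curr = 1.0, 0.96     # u_0, and u_1 = 1 + (h^2/2) u''(0) with u''(0) = -2
t = h
for _ in range(4):             # j = 1, 2, 3, 4
    u_prev, u_curr = u_curr, u_prev + 2 * h * f(t, u_curr)
    t += h
print(round(u_curr, 7))        # approximation to u(1.0)
```

The final value agrees with the worked result 0.4981176, a relative error of about 0.38% against the exact u(1) = 0.5.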

Example 6.

Find single-step methods for the differential equation y′ = f(t, y) which produce exact results for

i. y(t) = a + b e^{−t}
ii. y(t) = a + b cos t + c sin t.

i. Since the function y(t) depends upon two arbitrary parameters a and b, we need three relations in y(t) and y′(t) to eliminate a and b and get the required numerical method. We assume a relation between y(t_{j+1}), y(t_j) and y′(t_j) to obtain a single-step method. We have

y(t_{j+1}) = a + b e^{−t_{j+1}},
y(t_j) = a + b e^{−t_j},
y′(t_j) = −b e^{−t_j},

or

| y(t_{j+1})   1   e^{−t_{j+1}} |
| y(t_j)       1   e^{−t_j}     | = 0.
| y′(t_j)      0   −e^{−t_j}    |

Simplifying, we get the numerical method

y_{j+1} = y_j + (1 − e^{−h}) y′_j.

ii. We have three arbitrary parameters a, b and c, and we would require four relations to eliminate the parameters. We have

y(t_{j+1}) = a + b cos t_{j+1} + c sin t_{j+1},
y′(t_{j+1}) = −b sin t_{j+1} + c cos t_{j+1},
y(t_j) = a + b cos t_j + c sin t_j,
y′(t_j) = −b sin t_j + c cos t_j.

We eliminate a and get

y(t_{j+1}) − y(t_j) = b(cos t_{j+1} − cos t_j) + c(sin t_{j+1} − sin t_j).

Eliminating b and c, we obtain

| y(t_{j+1}) − y(t_j)   cos t_{j+1} − cos t_j   sin t_{j+1} − sin t_j |
| y′(t_{j+1})           −sin t_{j+1}            cos t_{j+1}          | = 0.
| y′(t_j)               −sin t_j                cos t_j              |

Simplifying, we have

[y(t_{j+1}) − y(t_j)][−sin t_{j+1} cos t_j + cos t_{j+1} sin t_j]
− y′(t_{j+1})[cos t_j (cos t_{j+1} − cos t_j) + sin t_j (sin t_{j+1} − sin t_j)]
+ y′(t_j)[cos t_{j+1} (cos t_{j+1} − cos t_j) + sin t_{j+1} (sin t_{j+1} − sin t_j)] = 0

or (y_{j+1} − y_j)(−sin h) − y′_{j+1}(cos h − 1) + y′_j(1 − cos h) = 0

or y_{j+1} = y_j + [(1 − cos h)/sin h] (y′_{j+1} + y′_j).

5.4 SINGLESTEP METHODS

The methods for the solution of the initial value problem

u′ = f(t, u), u(t_0) = η_0, t ∈ [t_0, b],

can be classified mainly into two types. They are

i. singlestep methods and
ii. multistep methods.

In singlestep methods, the solution at any point is obtained using the solution at only the previous point. Thus, a general single step method can be written as

u_{j+1} = u_j + h φ(t_{j+1}, t_j, u_{j+1}, u_j, h),

where φ is a function of the arguments t_j, t_{j+1}, u_j, u_{j+1}, h and also depends on f. We often write it as φ(t, u, h). This function is called the increment function. If u_{j+1} can be obtained simply by evaluating the right hand side, then the method is called an explicit method. In this case, the method is of the form

u_{j+1} = u_j + h φ(t_j, u_j, h).

If the right hand side depends on u_{j+1} also, then it is called an implicit method. The general form in this case is as given in the general single step method above.

Local Truncation Error or Discretization Error

The true (exact) value u(t_j) satisfies the equation

u(t_{j+1}) = u(t_j) + h φ(t_{j+1}, t_j, u(t_{j+1}), u(t_j), h) + T_{j+1},

where T_{j+1} is called the local truncation error or discretization error of the method. Therefore, the truncation error is given by

T_{j+1} = u(t_{j+1}) − u(t_j) − h φ(t_{j+1}, t_j, u(t_{j+1}), u(t_j), h).

Order of a method

The order of a method is the largest integer p for which

|(1/h) T_{j+1}| = O(h^p).
We now derive single step methods which have different increment functions.

Taylor Series Method

The fundamental numerical method for the solution of the initial value problem is the Taylor series method. We assume that the function u(t) can be expanded in a Taylor series about any point t_j, that is,

u(t) = u(t_j) + (t − t_j) u′(t_j) + (1/2!)(t − t_j)² u″(t_j) + ... + (1/p!)(t − t_j)^p u^(p)(t_j) + (1/(p+1)!)(t − t_j)^{p+1} u^(p+1)(t_j + θh).

This expansion holds for t ∈ [t_0, b] and 0 < θ < 1.

Substituting t = t_{j+1}, we get

u(t_{j+1}) = u(t_j) + h u′(t_j) + (h²/2!) u″(t_j) + ... + (h^p/p!) u^(p)(t_j) + (h^{p+1}/(p+1)!) u^(p+1)(t_j + θh)
= u(t_j) + h φ(t_j, u(t_j), h) + (h^{p+1}/(p+1)!) u^(p+1)(t_j + θh),

where h φ(t_j, u(t_j), h) = h u′(t_j) + (h²/2!) u″(t_j) + ... + (h^p/p!) u^(p)(t_j).

Denote by φ(t_j, u_j, h) the value obtained from φ(t_j, u(t_j), h) by using an approximate value u_j in place of the exact value u(t_j). Neglecting the error term, we have the method

u_{j+1} = u_j + h φ(t_j, u_j, h), j = 0, 1, ..., N − 1

to approximate u(t_{j+1}). The error or the truncation error of the method is given by

T_{j+1} = (h^{p+1}/(p+1)!) u^(p+1)(t_j + θh).

The method is called the Taylor series method of order p. Substituting p = 1, we get

u_{j+1} = u_j + h u′_j = u_j + h f(t_j, u_j),

which is the Euler method. Therefore, the Euler method can also be called the Taylor series method of order 1.


It is necessary to know u′(t_j), u″(t_j), ..., u^(p)(t_j). If t_j and u(t_j) are known, then the derivatives can be calculated as follows.

First, the known values t_j and u(t_j) are substituted into the differential equation to give u′(t_j) = f(t_j, u(t_j)). Next, the differential equation u′ = f(t, u) is differentiated to obtain expressions for the higher order derivatives of u(t). Thus, we have

u′ = f(t, u),
u″ = f_t + f f_u,
u‴ = f_tt + 2f f_tu + f² f_uu + f_u(f_t + f f_u),
.............................................

where f_t, f_u, ... represent the partial derivatives of f with respect to t and u. The values u′(t_j), u″(t_j), ... can be computed by substituting t = t_j.

Therefore, if t_j and u(t_j) are known exactly, then the method can be used to compute u_{j+1} with an error

(h^{p+1}/(p+1)!) u^(p+1)(t_j + θh).

The number of terms to be included is fixed by the permissible error. If this error is ε and the series is truncated at the term u^(p)(t_j), then

h^{p+1} |u^(p+1)(t_j + θh)| < (p + 1)! ε
or h^{p+1} |f^(p)(t_j + θh)| < (p + 1)! ε.

We assume that an estimate of |f^(p)(t_j + θh)| is known. For a given h and ε, the above inequalities will determine p, and if p and ε are specified, then they will give an upper bound on h.

Since t_j + θh is not known, |f^(p)(t_j + θh)| is replaced by its maximum value in [t_0, b]. A way of determining this value is as follows. Write one more non-vanishing term in the series than is required and then differentiate this series p times. The maximum value of this quantity in [t_0, b] gives a rough required bound.


Example 1.

Given the initial value problem u′ = t² + u², u(0) = 0, determine the first three non-zero terms in the Taylor series for u(t) and hence obtain the value of u(1). Also determine t when the error in u(t), obtained from the first two non-zero terms, is to be less than 10⁻⁶ after rounding.

We have

u(0) = 0, u′(0) = 0,
u″ = 2t + 2uu′, u″(0) = 0,
u‴ = 2 + 2[uu″ + (u′)²], u‴(0) = 2,
u^(4) = 2(uu‴ + 3u′u″), u^(4)(0) = 0,
u^(5) = 2[uu^(4) + 4u′u‴ + 3(u″)²], u^(5)(0) = 0,
u^(6) = 2(uu^(5) + 5u′u^(4) + 10u″u‴), u^(6)(0) = 0,
u^(7) = 2(uu^(6) + 6u′u^(5) + 15u″u^(4) + 10(u‴)²), u^(7)(0) = 80,
u^(8)(0) = u^(9)(0) = u^(10)(0) = 0,
u^(11) = 2[uu^(10) + 10u′u^(9) + 45u″u^(8) + 120u‴u^(7) + 210u^(4)u^(6) + 126(u^(5))²], u^(11)(0) = 38400.

Thus, the Taylor series for u(t) becomes

u(t) = t³/3 + t⁷/63 + 2t¹¹/2079 + ...


The approximate value of u(1) is given by

u(1) = 1/3 + 1/63 + 2/2079 ≈ 0.350168.

If only the first two non-zero terms are used, then the value of t is obtained from

|2t¹¹/2079| < 0.5 × 10⁻⁷.

Solving, we get t ≈ 0.41.
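The series value at t = 1 and the quoted cutoff t ≈ 0.41 can be checked with a short Python sketch (variable names are illustrative):

```python
# Three-term Taylor series for u' = t^2 + u^2, u(0) = 0 (Example 1),
# evaluated exactly with rational arithmetic at t = 1.
from fractions import Fraction

u1 = Fraction(1, 3) + Fraction(1, 63) + Fraction(2, 2079)
print(float(u1))                      # ~0.350168

# two-term truncation error |2 t^11 / 2079| near the quoted cutoff
err = lambda t: 2 * t**11 / 2079
print(err(0.40), err(0.41))
```

The error bound passes at t = 0.40 and fails at t = 0.41, so the admissible root lies between the two, consistent with t ≈ 0.41.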

Example 2.

Find the three term Taylor series solution for the third order initial value problem

W‴ + WW″ = 0, W(0) = 0, W′(0) = 0, W″(0) = 1.

Find the bound on the error for t ∈ [0, 0.2].

Proof.

We find

W‴ = −WW″, W‴(0) = 0,
W^(4) = −(WW‴ + W′W″), W^(4)(0) = 0,
W^(5) = −[WW^(4) + 2W′W‴ + (W″)²], W^(5)(0) = −1,
W^(6)(0) = 0, W^(7)(0) = 0, W^(8)(0) = 11,
W^(9)(0) = 0, W^(10)(0) = 0, W^(11)(0) = −375.

The Taylor series solution is

W(t) = t²/2! − t⁵/5! + (11/8!) t⁸ + E_8,

where |E_8| ≤ (t⁹/9!) max |W^(9)(t)|.

Writing the next term, we have

W(t) = t²/2! − t⁵/5! + (11/8!) t⁸ − (375/11!) t¹¹.

We find W^(9)(t) ≈ −(375/2) t²

and max_{0 ≤ t ≤ 0.2} |W^(9)(t)| = 7.5.

Hence, |E_8| ≤ 7.5 (0.2)⁹/9! ≈ (1.06) 10⁻¹¹.


Runge-Kutta Methods

From the application point of view, the Taylor series method has a major disadvantage: the method requires the evaluation of partial derivatives of higher order manually. This is not possible in many practical applications. Therefore, we need to develop methods which do not require the evaluation and computation of higher order derivatives. The most important class of methods in this direction are the Runge-Kutta methods. However, all these methods should compare with the Taylor series method when they are expanded about a point t = t_j. We shall first explain the principle involved in the Runge-Kutta methods. Integrating the differential equation u′ = f(t, u) on the interval [t_j, t_{j+1}], we get

∫ from t_j to t_{j+1} (du/dt) dt = ∫ from t_j to t_{j+1} f(t, u) dt.

By the mean value theorem of integral calculus, we obtain

u(t_{j+1}) = u(t_j) + h f(t_j + θh, u(t_j + θh)), 0 < θ < 1.

Any value of θ ∈ [0, 1] produces a numerical method. Consider the following cases.

Case θ = 0. When θ = 0, we obtain the approximation

u_{j+1} = u_j + h f(t_j, u_j),

which is the Euler method.

Case θ = 1. When θ = 1, we obtain u(t_{j+1}) = u(t_j) + h f(t_j + h, u(t_{j+1})). We have the numerical method as u_{j+1} = u_j + h f_{j+1}, which is the backward Euler method.

If we approximate u_{j+1} ≈ u_j + h f_j, that is, by the Euler method, in the argument of f, we get

u_{j+1} = u_j + h f(t_{j+1}, u_j + h f_j).

If we set

K_1 = h f_j,
K_2 = h f(t_{j+1}, u_j + K_1),

we get the method as u_{j+1} = u_j + K_2.
Case θ = 1/2. When θ = 1/2, we obtain

u(t_{j+1}) = u(t_j) + h f(t_j + h/2, u(t_j + h/2)).

However, t_j + (h/2) is not a nodal point. If we approximate u(t_j + h/2) in the above equation by the Euler method with spacing h/2, we get

u(t_j + h/2) ≈ u_j + (h/2) f_j.

Then, we have the approximation

u_{j+1} = u_j + h f(t_j + h/2, u_j + (h/2) f_j).

If we set

K_1 = h f_j,
K_2 = h f(t_j + h/2, u_j + (1/2) K_1),

then this can be written as u_{j+1} = u_j + K_2.

Alternately, if we use the approximation

u′(t_j + h/2) ≈ (1/2)[u′(t_j) + u′(t_j + h)]

and the Euler method, we obtain

u_{j+1} = u_j + (h/2)[f(t_j, u_j) + f(t_{j+1}, u_j + h f_j)].

If we set

K_1 = h f_j,
K_2 = h f(t_{j+1}, u_j + K_1),

then this can be written as

u_{j+1} = u_j + (1/2)(K_1 + K_2).

This method is also called the Euler-Cauchy method.

The integrand on the right hand side, f(t, u), is the slope of the solution curve, which varies continuously in the interval [t_j, t_{j+1}]. In deriving the Euler method, we may interpret that the slope of the solution curve, which varies continuously on [t_j, t_{j+1}], is approximated by the slope at the initial point, that is, by f(t_j, u_j) ≈ f(t_j, u(t_j)).

Runge-Kutta methods use a weighted average of slopes on the given interval [t_j, t_{j+1}], instead of a single slope. Thus, the general Runge-Kutta method may be defined as u_{j+1} = u_j + h [weighted average of slopes on the given interval]. Consider v slopes on [t_j, t_{j+1}]. Define

K_1 = h f(t_j + c_1 h, u_j + a_11 K_1 + a_12 K_2 + ... + a_1v K_v)
K_2 = h f(t_j + c_2 h, u_j + a_21 K_1 + a_22 K_2 + ... + a_2v K_v)
.....................................................................
K_v = h f(t_j + c_v h, u_j + a_v1 K_1 + a_v2 K_2 + ... + a_vv K_v).

The Runge-Kutta method is now defined by

u_{j+1} = u_j + W_1 K_1 + W_2 K_2 + ... + W_v K_v.

This is also called a v-stage Runge-Kutta method. It is a fully implicit method which uses v evaluations of f. The matrix of coefficients a_ij is the full v × v matrix

A = [ a_11  a_12  ...  a_1v ]
    [ a_21  a_22  ...  a_2v ]
    [ ......................]
    [ a_v1  a_v2  ...  a_vv ].

It is very difficult to derive the implicit methods. If in A we set the elements in the upper triangular part to zero, then the K's and u_{j+1} define semi-implicit methods, where

K_1 = h f(t_j + c_1 h, u_j + a_11 K_1)
K_2 = h f(t_j + c_2 h, u_j + a_21 K_1 + a_22 K_2)
.. .. .. .. .. ..
K_v = h f(t_j + c_v h, u_j + a_v1 K_1 + a_v2 K_2 + ... + a_vv K_v).

If in A we set the elements on the diagonal and in the upper triangular part to zero, then the K's and u_{j+1} define explicit methods, where

K_1 = h f(t_j, u_j)
K_2 = h f(t_j + c_2 h, u_j + a_21 K_1)
K_3 = h f(t_j + c_3 h, u_j + a_31 K_1 + a_32 K_2)
.. .. .. .. .. ..
K_v = h f(t_j + c_v h, u_j + a_v1 K_1 + a_v2 K_2 + ... + a_{v,v−1} K_{v−1}).

We shall first consider the derivation of explicit Runge-Kutta methods.


Explicit Runge-Kutta Methods

Second Order Methods

Consider the following Runge-Kutta method with two slopes:

K_1 = h f(t_j, u_j),
K_2 = h f(t_j + c_2 h, u_j + a_21 K_1),
u_{j+1} = u_j + W_1 K_1 + W_2 K_2,

where the parameters c_2, a_21, W_1 and W_2 are chosen to make u_{j+1} closer to u(t_{j+1}). There are four parameters to be determined. Now, Taylor series expansion about t_j gives

u(t_{j+1}) = u(t_j) + h u′(t_j) + (h²/2!) u″(t_j) + (h³/3!) u‴(t_j) + ...
= u(t_j) + h f(t_j, u(t_j)) + (h²/2!)(f_t + f f_u)_{t_j} + (h³/3!)[f_tt + 2f f_tu + f² f_uu + f_u(f_t + f f_u)]_{t_j} + ...

We also have

K_1 = h f_j,
K_2 = h f(t_j + c_2 h, u_j + a_21 h f_j)
= h[f_j + h(c_2 f_t + a_21 f f_u)_{t_j} + (h²/2)(c_2² f_tt + 2 c_2 a_21 f f_tu + a_21² f² f_uu)_{t_j} + ...].

Substituting the values of K_1 and K_2, we get

u_{j+1} = u_j + (W_1 + W_2) h f_j + h² W_2 (c_2 f_t + a_21 f f_u)_{t_j} + (h³/2) W_2 (c_2² f_tt + 2 c_2 a_21 f f_tu + a_21² f² f_uu)_{t_j} + ...

Comparing the coefficients of h and h², we obtain

W_1 + W_2 = 1,
c_2 W_2 = 1/2,
a_21 W_2 = 1/2.

The solution of this system is

a_21 = c_2, W_2 = 1/(2c_2), W_1 = 1 − 1/(2c_2),

where c_2 ≠ 0 is arbitrary. It is not possible to compare the coefficients of h³, as there are five terms in the Taylor expansion and only three terms in the method. Therefore, the Runge-Kutta method using two evaluations of f is

u_{j+1} = u_j + (1 − 1/(2c_2)) K_1 + (1/(2c_2)) K_2,

where K_1 = h f(t_j, u_j), K_2 = h f(t_j + c_2 h, u_j + c_2 K_1).

Substituting, we get

u_{j+1} = u_j + h f_j + (h²/2)(f_t + f f_u)_{t_j} + (h³ c_2/4)(f_tt + 2f f_tu + f² f_uu)_{t_j} + ...

The local truncation error is given by

T_{j+1} = u(t_{j+1}) − u_{j+1}
= h³[(1/6 − c_2/4)(f_tt + 2f f_tu + f² f_uu)_{t_j} + (1/6) f_u(f_t + f f_u)_{t_j}] + ...,

which shows that the method is of second order. It may be noted that every Runge-Kutta method should reduce to a quadrature formula when f(t, u) is independent of u, with the W_i's as weights and the c_i's as abscissas.

The free parameter c_2 is usually taken between 0 and 1. Sometimes, c_2 is chosen such that one of the W_i's in the method is zero. For example, the choice c_2 = 1/2 makes W_1 = 0.
If c_2 = 1/2, we get

u_{j+1} = u_j + K_2,

where K_1 = h f(t_j, u_j), K_2 = h f(t_j + h/2, u_j + (1/2) K_1),

which is the Euler method with spacing h/2. It is also called the modified Euler-Cauchy method. It reduces to the mid-point quadrature rule when f(t, u) is independent of u.

For c_2 = 1, we get

u_{j+1} = u_j + (1/2)(K_1 + K_2),

where K_1 = h f(t_j, u_j), K_2 = h f(t_j + h, u_j + K_1),

which reduces to the trapezoidal rule when f(t, u) is independent of u. This method is also called the Euler-Cauchy or Heun method.
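Both second order variants can be exercised on the running example u′ = −2tu², u(0) = 1, whose exact solution is 1/(1 + t²). The following is a minimal Python sketch (function names are illustrative) of the general two-slope family, here run with c_2 = 1 (the Euler-Cauchy/Heun method):

```python
# Two-slope explicit Runge-Kutta family: a_21 = c2, W2 = 1/(2 c2),
# W1 = 1 - W2. c2 = 1/2 gives the modified Euler-Cauchy method,
# c2 = 1 the Euler-Cauchy/Heun method.
def rk2(f, t0, u0, h, n, c2):
    t, u = t0, u0
    W2 = 1 / (2 * c2)
    W1 = 1 - W2
    for _ in range(n):
        K1 = h * f(t, u)
        K2 = h * f(t + c2 * h, u + c2 * K1)
        u = u + W1 * K1 + W2 * K2
        t = t + h
    return u

f = lambda t, u: -2 * t * u * u
err = lambda h: abs(rk2(f, 0.0, 1.0, h, round(1 / h), 1.0) - 0.5)
print(err(0.1), err(0.05), err(0.1) / err(0.05))  # error ratio near 4
```

Halving h reduces the error at t = 1 by a factor close to 4, confirming the second order of the method.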

Minimization of Local

Truncation Error

An alternative way of choosing the

arbitrary parameter is to minimize the

sum of the absolute values of the


coefficients in the term T j+1 . Such

a choice gives an optimal method in

the sense of minimum truncation

error. Here, we use the Lot kin's

bounds which are defined by


1+j i+j
f L
| i j |< j-1 , i,j=0,1,2.......
t u M

We find that

|f| < M, |fu| < L, |ft| < LM,
|ftt| < L²M, |ftu| < L², |fuu| < L²/M.

Thus, |Tj+1| becomes

|Tj+1| < ML²h³[4|1/6 − c2/4| + 1/3]

The minimum value of |Tj+1| occurs for c2 = 2/3, in which case |Tj+1| < ML²h³/3. We have the optimal method as
uj+1 = uj + (1/4)(K1 + 3K2)

where K1 = hf(tj, uj),
K2 = hf(tj + (2/3)h, uj + (2/3)K1)
If the arbitrary parameters are

determined by putting the leading

coefficients in |T j+1 | to zero, then

such a formula is called nearly

optimal. It may be noted that the

explicit Runge-Kutta methods using

two evaluations of f have one

arbitrary parameter and have


produced second order methods.

Third Order Methods

We now use three evaluations of f and define the method as

uj+1 = uj + W1K1 + W2K2 + W3K3

where K1 = hf(tj, uj),
K2 = hf(tj + c2h, uj + α21K1),
and K3 = hf(tj + c3h, uj + α31K1 + α32K2)

Expanding K2, K3 in Taylor series about tj, substituting, collecting the terms and comparing the terms up to O(h³), we obtain the following six equations in the eight parameters W1, W2, W3, c2, c3, α21, α31 and α32. Therefore, the methods contain two arbitrary parameters. The equations are

α21 = c2, α31 + α32 = c3, W1 + W2 + W3 = 1
c2W2 + c3W3 = 1/2, c2²W2 + c3²W3 = 1/3, c2α32W3 = 1/6
Multiplying the fourth and fifth equations by c2α32 and using the sixth equation, we get

c2²α32W2 + c3/6 = (1/2)c2α32
c2³α32W2 + c3²/6 = (1/3)c2α32

Eliminating W2 from these two equations, we find that no solution exists unless

α32 = c3(c3 − c2) / (c2(2 − 3c2))

Usually, c2 and c3 are arbitrarily chosen and α32 is determined from this relation. However, if c2 = c3, then we immediately obtain from the fourth and fifth equations that c2 = 2/3. The values of the remaining parameters are obtained from the above six equations.
When c2 = c3, we get c2 = 2/3 and α21 = 2/3. We get the values of the other parameters as α31 = 0, α32 = 2/3, W1 = 2/8, W2 = 3/8 and W3 = 3/8. The Runge-Kutta method is obtained as

uj+1 = uj + (1/8)(2K1 + 3K2 + 3K3)

where K1 = hf(tj, uj),
K2 = hf(tj + (2/3)h, uj + (2/3)K1)
and K3 = hf(tj + (2/3)h, uj + (2/3)K2)
We note that these methods use three evaluations of f, have two arbitrary parameters and produce third order methods.
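The third order method just derived can be sketched as follows (an illustration, not part of the text); the test problem u' = -2tu^2 with exact solution 1/(1 + t^2) is an assumption used throughout this section's examples.

```python
# Third order Runge-Kutta (Nystrom) method:
# u_{j+1} = u_j + (2 K1 + 3 K2 + 3 K3) / 8.

def rk3_step(f, t, u, h):
    K1 = h * f(t, u)
    K2 = h * f(t + 2 * h / 3, u + 2 * K1 / 3)
    K3 = h * f(t + 2 * h / 3, u + 2 * K2 / 3)
    return u + (2 * K1 + 3 * K2 + 3 * K3) / 8

f = lambda t, u: -2 * t * u * u   # exact solution u(t) = 1/(1 + t^2)
u, t = 1.0, 0.0
for _ in range(2):                # two steps of h = 0.2
    u = rk3_step(f, t, u, 0.2)
    t += 0.2
print(u)   # close to the exact value 1/1.16 = 0.862069
```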

Fourth Order Methods


We now use four evaluations of f and define the method as

uj+1 = uj + W1K1 + W2K2 + W3K3 + W4K4

where K1 = hf(tj, uj)
K2 = hf(tj + c2h, uj + α21K1)
K3 = hf(tj + c3h, uj + α31K1 + α32K2)
K4 = hf(tj + c4h, uj + α41K1 + α42K2 + α43K3)

The parameters c2, c3, c4, α21, ..., α43 and W1, ..., W4 are chosen to make uj+1 closer to u(tj+1). Expanding K2, K3, K4 in Taylor series about tj, substituting, and matching the coefficients of h, h², h³ and h⁴, we obtain the following system of equations
c2 = α21
c3 = α31 + α32
c4 = α41 + α42 + α43
W1 + W2 + W3 + W4 = 1
W2c2 + W3c3 + W4c4 = 1/2
W2c2² + W3c3² + W4c4² = 1/3
W3c2α32 + W4(c2α42 + c3α43) = 1/6
W2c2³ + W3c3³ + W4c4³ = 1/4
W3c2²α32 + W4(c2²α42 + c3²α43) = 1/12
W3c2c3α32 + W4(c2α42 + c3α43)c4 = 1/8
W4c2α32α43 = 1/24

We have 11 equations in 13 unknowns. Therefore, there are two arbitrary parameters. Since the terms up to O(h⁴) are compared, the truncation error is of O(h⁵) and the order of the method is 4. The simplest solution of the equations is given by

c2 = c3 = 1/2, c4 = 1, W2 = W3 = 1/3,
W1 = W4 = 1/6, α21 = 1/2, α31 = 0, α32 = 1/2, α41 = 0, α42 = 0, α43 = 1.
Thus, the fourth order method becomes

uj+1 = uj + (1/6)(K1 + 2K2 + 2K3 + K4)

K1 = hf(tj, uj)
K2 = hf(tj + (1/2)h, uj + (1/2)K1)
K3 = hf(tj + (1/2)h, uj + (1/2)K2)
K4 = hf(tj + h, uj + K3)
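The classical fourth order step can be written directly from these formulas; the sketch below (an illustration, not part of the text) applies it to the problem solved in Example 4 below.

```python
# One step of the classical fourth order Runge-Kutta method.

def rk4_step(f, t, u, h):
    K1 = h * f(t, u)
    K2 = h * f(t + h / 2, u + K1 / 2)
    K3 = h * f(t + h / 2, u + K2 / 2)
    K4 = h * f(t + h, u + K3)
    return u + (K1 + 2 * K2 + 2 * K3 + K4) / 6

# u' = -2 t u^2, u(0) = 1 (the problem of Example 4 below)
f = lambda t, u: -2 * t * u * u
u1 = rk4_step(f, 0.0, 1.0, 0.2)
u2 = rk4_step(f, 0.2, u1, 0.2)
print(u1, u2)
```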

We now list the second, third and

fourth order Runge-Kutta methods.

Second Order Methods (parameters c2, α21; weights W1, W2)

Euler-Cauchy (Heun): c2 = 1, α21 = 1; W1 = 1/2, W2 = 1/2
Modified Euler-Cauchy: c2 = 1/2, α21 = 1/2; W1 = 0, W2 = 1
Optimal: c2 = 2/3, α21 = 2/3; W1 = 1/4, W2 = 3/4

Third Order Methods (parameters c2, α21; c3, α31, α32; weights W1, W2, W3)

Nystrom: c2 = 2/3, α21 = 2/3; c3 = 2/3, α31 = 0, α32 = 2/3; W1 = 2/8, W2 = 3/8, W3 = 3/8
Nearly optimal: c2 = 1/2, α21 = 1/2; c3 = 3/4, α31 = 0, α32 = 3/4; W1 = 2/9, W2 = 3/9, W3 = 4/9
Classical: c2 = 1/2, α21 = 1/2; c3 = 1, α31 = −1, α32 = 2; W1 = 1/6, W2 = 4/6, W3 = 1/6
Heun: c2 = 1/3, α21 = 1/3; c3 = 2/3, α31 = 0, α32 = 2/3; W1 = 1/4, W2 = 0, W3 = 3/4

Fourth Order Methods (parameters c2, α21; c3, α31, α32; c4, α41, α42, α43; weights W1, W2, W3, W4)

Classical: c2 = 1/2, α21 = 1/2; c3 = 1/2, α31 = 0, α32 = 1/2; c4 = 1, α41 = 0, α42 = 0, α43 = 1; W1 = 1/6, W2 = 2/6, W3 = 2/6, W4 = 1/6
Kutta: c2 = 1/3, α21 = 1/3; c3 = 2/3, α31 = −1/3, α32 = 1; c4 = 1, α41 = 1, α42 = −1, α43 = 1; W1 = 1/8, W2 = 3/8, W3 = 3/8, W4 = 1/8

It can be observed that all explicit Runge-Kutta methods satisfy the equations

ci = Σ (j = 1 to i−1) αij, and Σ (i = 1 to v) Wi = 1

Since the ci's can be obtained from the αij's, the number of independent parameters is equal to

(number of αij's) + (number of Wi's) = [1 + 2 + ... + (v−1)] + v = v(v+1)/2
High Order Methods
We note that the second order

method requires two function

evaluations for each step of

integration. Similarly, we find that

the third and fourth order methods

require three and four function

evaluations respectively for each step

of integration.

The minimum number of function evaluations v for a given order p is given in the following table.

p 2 3 4 5 6 ...
v 2 3 4 6 8 ...

There is a jump in v from 4 to 6 when p goes from 4 to 5. Hence, methods with p < v are not generally used.


Example 3.

Given the initial value problem u' = −2tu², u(0) = 1, estimate u(0.4) using

i. the modified Euler-Cauchy method, and
ii. the Heun method, with h = 0.2.

Compare the results with the exact solution u(t) = 1/(1 + t²).

i. The modified Euler-Cauchy method is given by

uj+1 = uj + K2

where K1 = hf(tj, uj) = 0.2(−2tjuj²) = −0.4tjuj²
and K2 = hf(tj + h/2, uj + (1/2)K1) = −0.4(tj + 0.1)(uj + (1/2)K1)²
For j = 0, we have
t0 = 0, u0 = 1, K1 = 0, K2 = −0.4(0.1)(1)² = −0.04
u(0.2) ≈ u1 = u0 + K2 = 1 − 0.04 = 0.96

For j = 1, we have
t1 = 0.2, u1 = 0.96, K1 = −0.4(0.2)(0.96)² = −0.073728
K2 = −0.4(0.2 + 0.1)(0.96 − 0.036864)² = −0.102262
u(0.4) ≈ u2 = u1 + K2 = 0.96 − 0.102262 = 0.857738

ii. The Heun method is given by

uj+1 = uj + (1/2)(K1 + K2)

where K1 = hf(tj, uj) = −0.4tjuj²
and K2 = hf(tj + h, uj + K1) = −0.4(tj + 0.2)(uj + K1)²

For j = 0, we have
t0 = 0, u0 = 1, K1 = 0, K2 = −0.4(0.2)(1)² = −0.08
u(0.2) ≈ u1 = 1 + (1/2)(0 − 0.08) = 0.96

For j = 1, we have
t1 = 0.2, u1 = 0.96, K1 = −0.4(0.2)(0.96)² = −0.073728
K2 = −0.4(0.2 + 0.2)(0.96 − 0.073728)² = −0.125676
u(0.4) ≈ u2 = u1 + (1/2)(K1 + K2) = 0.96 + (1/2)(−0.073728 − 0.125676) = 0.860298

The exact solution gives u(0.2) = 0.961538, u(0.4) = 0.862069.

The absolute errors in the numerical solutions are

Modified Euler-Cauchy method: ε(0.2) = 0.001538, ε(0.4) = 0.004331.
Heun method: ε(0.2) = 0.001538, ε(0.4) = 0.001771.


Example 4.

Solve the initial value problem

u' = −2tu², u(0) = 1

with h = 0.2 on the interval [0, 0.4]. Use the fourth order classical Runge-Kutta method. Compare with the exact solution.

For j = 0, we get

t0 = 0, u0 = 1
K1 = hf(t0, u0) = −2(0.2)(0)(1)² = 0
K2 = hf(t0 + h/2, u0 + (1/2)K1) = −2(0.2)(0.1)(1)² = −0.04
K3 = hf(t0 + h/2, u0 + (1/2)K2) = −2(0.2)(0.1)(0.98)² = −0.038416
K4 = hf(t0 + h, u0 + K3) = −2(0.2)(0.2)(0.961584)² = −0.0739715
u(0.2) ≈ u1 = 1 + (1/6)[0 − 0.08 − 0.076832 − 0.0739715] = 0.9615328

For j = 1, we get

t1 = 0.2, u1 = 0.9615328
K1 = hf(t1, u1) = −2(0.2)(0.2)(0.9615328)² = −0.0739636
K2 = hf(t1 + h/2, u1 + (1/2)K1) = −2(0.2)(0.3)(0.9245510)² = −0.1025753
K3 = hf(t1 + h/2, u1 + (1/2)K2) = −2(0.2)(0.3)(0.9102451)² = −0.0994255
K4 = hf(t1 + h, u1 + K3) = −2(0.2)(0.4)(0.8621073)² = −0.1189166
u(0.4) ≈ u2 = 0.9615328 + (1/6)[−0.0739636 − 0.2051506 − 0.1988510 − 0.1189166] = 0.8620525

The absolute errors in the numerical solutions are
ε(0.2) = |0.961539 − 0.961533| = 0.000006
ε(0.4) = |0.862069 − 0.862053| = 0.000016

Estimation of Local

Truncation Error

In the numerical solution of

differential equations, it is desirable

to have an estimate of the local

truncation error of the solutions at

each step. This enables us to adjust


the step size. In our discussion, the

rounding error will be ignored. A

simple procedure for estimating the

truncation error is called

extrapolation. Here, we find the

difference between the two solutions

uj+1 and u*j+1, where uj+1 and u*j+1 are calculated using the step sizes h/2 and h respectively. We have

|u(tj+1) − uj+1| < |u(tj+1) − u*j+1|

and εj+1 = u(tj+1) − uj+1 ≈ uj+1 − u*j+1.

For Runge - Kutta methods, using


one-step and two half-steps

(extrapolation) procedure can be

very expensive as the cost of

computation increases with the


number of function evaluations. We list the total number of function evaluations v* required per step for a p-th order method, using one-step and two half-steps:

p 2 3 4 5
v* 5 8 11 14

where v* is the total number of evaluations of f per step.

This method is generally used to calculate the total (not local) error for any method. An economical procedure is the Runge-Kutta-Fehlberg method. Here, we use a Runge-Kutta method of lower order to compute uj+1 and a higher order Runge-Kutta method to compute u*j+1 using the same step size. An important feature of this method is that the lower order method has its Ki's in common with the higher order method. The Runge-Kutta-Fehlberg fourth order pair of formulas is

uj+1 = uj + (25/216)K1 + (1408/2565)K3 + (2197/4104)K4 − (1/5)K5

with Tj+1 = O(h⁵), and

u*j+1 = uj + (16/135)K1 + (6656/12825)K3 + (28561/56430)K4 − (9/50)K5 + (2/55)K6

with T*j+1 = O(h⁶), where

K1 = hf(tj, uj)
K2 = hf(tj + (1/4)h, uj + (1/4)K1)
K3 = hf(tj + (3/8)h, uj + (3/32)K1 + (9/32)K2)
K4 = hf(tj + (12/13)h, uj + (1932/2197)K1 − (7200/2197)K2 + (7296/2197)K3)
K5 = hf(tj + h, uj + (439/216)K1 − 8K2 + (3680/513)K3 − (845/4104)K4)
K6 = hf(tj + (1/2)h, uj − (8/27)K1 + 2K2 − (3544/2565)K3 + (1859/4104)K4 − (11/40)K5)
The formula for uj+1 is of fourth order and requires 5 function evaluations, and with an additional function evaluation we obtain u*j+1. Thus, uj+1 − u*j+1 requires only six function evaluations, as compared to 11 function evaluations in the case of the one-step and two half-steps procedure. Thus, an estimate of the fourth order accuracy may be determined during the course of computation with only six function evaluations. This method can also be used to construct a variable mesh method, the mesh varying with the accuracy of the solutions required.
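A single Runge-Kutta-Fehlberg step with its error estimate can be sketched as follows (an illustration using the standard Fehlberg coefficients listed above; the test problem is an assumption):

```python
# One Runge-Kutta-Fehlberg 4(5) step: six evaluations of f give the
# fourth order value u4 and fifth order value u5; u5 - u4 estimates the
# local truncation error of u4.

def rkf45_step(f, t, u, h):
    K1 = h * f(t, u)
    K2 = h * f(t + h / 4, u + K1 / 4)
    K3 = h * f(t + 3 * h / 8, u + 3 * K1 / 32 + 9 * K2 / 32)
    K4 = h * f(t + 12 * h / 13,
               u + 1932 * K1 / 2197 - 7200 * K2 / 2197 + 7296 * K3 / 2197)
    K5 = h * f(t + h,
               u + 439 * K1 / 216 - 8 * K2 + 3680 * K3 / 513 - 845 * K4 / 4104)
    K6 = h * f(t + h / 2,
               u - 8 * K1 / 27 + 2 * K2 - 3544 * K3 / 2565
                 + 1859 * K4 / 4104 - 11 * K5 / 40)
    u4 = u + 25 * K1 / 216 + 1408 * K3 / 2565 + 2197 * K4 / 4104 - K5 / 5
    u5 = (u + 16 * K1 / 135 + 6656 * K3 / 12825 + 28561 * K4 / 56430
            - 9 * K5 / 50 + 2 * K6 / 55)
    return u4, u5 - u4   # solution and error estimate

u4, err = rkf45_step(lambda t, u: -2 * t * u * u, 0.0, 1.0, 0.2)
print(u4, err)   # u(0.2) is exactly 1/1.04 = 0.961538...
```

In a variable mesh code the step h would be accepted or reduced according to the size of the returned error estimate.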

System of Equations

Consider the system of n equations

du/dt = f(t, u1, u2, ...., un)
u(t0) = η

where u = [u1, u2, ...., un]^T, f = [f1, f2, ....., fn]^T, η = [η1, η2, ...., ηn]^T.

All the methods derived earlier can be used to solve the system of equations by writing the methods in vector form. Let us now apply some of these methods.

Taylor Series Method

We write the method in vector form as

uj+1 = uj + hu'j + (h²/2!)u''j + .... + (h^p/p!)uj^(p), j = 0, 1, 2, ....., N−1

where

uj^(k) = [u1,j^(k), u2,j^(k), ......, un,j^(k)]^T, with ui,j^(k) = [d^(k−1)fi/dt^(k−1)](tj, u1,j, u2,j, ......, un,j), i = 1, ...., n.

In particular, the Euler method can be written as

uj+1 = uj + hu'j, j = 0, 1, 2, ....., N−1.

Runge-Kutta Method of Second Order

Consider the Euler-Cauchy or Heun method. We write it in vector form as

uj+1 = uj + (1/2)(K1 + K2)

where Ki1 = hfi(tj, u1,j, u2,j, ..., un,j),
Ki2 = hfi(tj + h, u1,j + K11, u2,j + K21, ..., un,j + Kn1),
i = 1, 2, ........., n.

In explicit form, this becomes

[u1,j+1, u2,j+1, ...., un,j+1]^T = [u1,j, u2,j, ...., un,j]^T + (1/2)([K11, K21, ...., Kn1]^T + [K12, K22, ...., Kn2]^T), j = 0, 1, ...., N−1.

Runge-Kutta (Classical) Method of Fourth Order

We write the vector form as

uj+1 = uj + (1/6)(K1 + 2K2 + 2K3 + K4)

where

Km = [K1m, K2m, ...., Knm]^T, m = 1, 2, 3, 4

and

Ki1 = hfi(tj, u1,j, u2,j, ....., un,j)
Ki2 = hfi(tj + h/2, u1,j + (1/2)K11, u2,j + (1/2)K21, ...., un,j + (1/2)Kn1)
Ki3 = hfi(tj + h/2, u1,j + (1/2)K12, u2,j + (1/2)K22, ...., un,j + (1/2)Kn2)
Ki4 = hfi(tj + h, u1,j + K13, u2,j + K23, ...., un,j + Kn3),
i = 1(1)n.

In explicit form, this becomes

[u1,j+1, u2,j+1, ...., un,j+1]^T = [u1,j, u2,j, ...., un,j]^T + (1/6)(K1 + 2K2 + 2K3 + K4), j = 0, 1, ......, N−1.

Example 5.

Compute an approximation to u(1), u'(1) and u''(1) with the Taylor series method of second order and step length h = 1, for the initial value problem

u''' + 2u'' + u' − u = cos t, 0 ≤ t ≤ 1
u(0) = 0, u'(0) = 1, u''(0) = 2

after reducing it to a system of first order equations.

Set u = v1, v'1 = v2, v'2 = v3. Note that v2 = v'1 = u', v3 = v'2 = u'' and v'3 = u'''.

The system of equations is

v'1 = v2, v1(0) = 0
v'2 = v3, v2(0) = 1
v'3 = cos t − 2v3 − v2 + v1, v3(0) = 2,

or v' = [v2, v3, cos t − 2v3 − v2 + v1]^T.

The Taylor series method of second order is

v(t0 + h) = v0 + hv'0 + (h²/2)v''0 = v0 + v'0 + (1/2)v''0,

since h = 1. We have

v'0 = [v2(0), v3(0), 1 − 2v3(0) − v2(0) + v1(0)]^T = [1, 2, −4]^T

Now v'' = [v'2, v'3, −sin t − 2v'3 − v'2 + v'1]^T and v''0 = [2, −4, 7]^T.

Hence

v(1) = v0 + v'0 + (1/2)v''0 = [0, 1, 2]^T + [1, 2, −4]^T + (1/2)[2, −4, 7]^T = [2, 1, 3/2]^T

Therefore, u(1) = 2, u'(1) = 1, u''(1) = 3/2.
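The arithmetic of Example 5 can be checked with a short script (an illustration, not part of the text):

```python
import math

# One second order Taylor step v(1) = v0 + v0' + (1/2) v0'' for the system
# v1' = v2, v2' = v3, v3' = cos t - 2 v3 - v2 + v1.

def vprime(t, v):
    v1, v2, v3 = v
    return [v2, v3, math.cos(t) - 2 * v3 - v2 + v1]

def vdoubleprime(t, vp):
    # differentiate v' once more; the primes of v1, v2, v3 come from vp
    v1p, v2p, v3p = vp
    return [v2p, v3p, -math.sin(t) - 2 * v3p - v2p + v1p]

v0 = [0.0, 1.0, 2.0]
vp0 = vprime(0.0, v0)            # [1, 2, -4]
vpp0 = vdoubleprime(0.0, vp0)    # [2, -4, 7]
v1 = [v0[i] + vp0[i] + 0.5 * vpp0[i] for i in range(3)]
print(v1)   # [2.0, 1.0, 1.5], i.e. u(1) = 2, u'(1) = 1, u''(1) = 3/2
```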

Example 6.

Solve the system of equations

u' = −3u + 2v, u(0) = 0
v' = 3u − 4v, v(0) = 0.5

with h = 0.2 on the interval [0, 0.4]. Use the

i. Euler-Cauchy method, and
ii. classical Runge-Kutta fourth order method.

i. The Euler-Cauchy method is given by

uj+1 = uj + (1/2)(K1 + K2), j = 0, 1
K1 = hf(tj, uj)
K2 = hf(tj + h, uj + K1)
For j = 0, we have

t0 = 0, u0 = 0, v0 = 0.5
K11 = hf1(t0, u0, v0) = 0.2(−3u0 + 2v0) = 0.2[−3 × 0 + 2 × 0.5] = 0.2
K21 = hf2(t0, u0, v0) = 0.2(3u0 − 4v0) = 0.2[3 × 0 − 4 × 0.5] = −0.4
K12 = hf1(t0 + h, u0 + K11, v0 + K21) = 0.2[−3(u0 + K11) + 2(v0 + K21)] = 0.2[−3(0 + 0.2) + 2(0.5 − 0.4)] = −0.08
K22 = hf2(t0 + h, u0 + K11, v0 + K21) = 0.2[3(u0 + K11) − 4(v0 + K21)] = 0.2[3(0.2) − 4(0.1)] = 0.04
u(0.2) ≈ u1 = u0 + (1/2)(K11 + K12) = 0 + (1/2)(0.2 − 0.08) = 0.06
v(0.2) ≈ v1 = v0 + (1/2)(K21 + K22) = 0.5 + (1/2)(−0.4 + 0.04) = 0.32

For j = 1, we have

t1 = 0.2, u1 = 0.06, v1 = 0.32
K11 = hf1(t1, u1, v1) = 0.2(−3u1 + 2v1) = 0.2(−3 × 0.06 + 2 × 0.32) = 0.092
K21 = hf2(t1, u1, v1) = 0.2(3u1 − 4v1) = 0.2(3 × 0.06 − 4 × 0.32) = −0.22
K12 = hf1(t1 + h, u1 + K11, v1 + K21) = 0.2[−3(u1 + K11) + 2(v1 + K21)] = 0.2[−3(0.06 + 0.092) + 2(0.32 − 0.22)] = −0.0512
K22 = hf2(t1 + h, u1 + K11, v1 + K21) = 0.2[3(u1 + K11) − 4(v1 + K21)] = 0.2[3(0.06 + 0.092) − 4(0.32 − 0.22)] = 0.0112
u(0.4) ≈ u2 = u1 + (1/2)(K11 + K12) = 0.06 + (1/2)(0.092 − 0.0512) = 0.0804
v(0.4) ≈ v2 = v1 + (1/2)(K21 + K22) = 0.32 + (1/2)(−0.22 + 0.0112) = 0.2156

ii. For the classical Runge-Kutta fourth order method, we obtain the following results.

For j = 0, we have

t0 = 0, u0 = 0, v0 = 0.5
K11 = hf1(t0, u0, v0) = h(−3u0 + 2v0) = 0.2(−3 × 0 + 2 × 0.5) = 0.2
K21 = hf2(t0, u0, v0) = h(3u0 − 4v0) = 0.2(3 × 0 − 4 × 0.5) = −0.4
K12 = hf1(t0 + h/2, u0 + (1/2)K11, v0 + (1/2)K21) = h[−3(u0 + (1/2)K11) + 2(v0 + (1/2)K21)] = 0.2[−3 × 0.1 + 2 × 0.3] = 0.06
K22 = hf2(t0 + h/2, u0 + (1/2)K11, v0 + (1/2)K21) = h[3(u0 + (1/2)K11) − 4(v0 + (1/2)K21)] = 0.2[3 × 0.1 − 4 × 0.3] = −0.18
K13 = hf1(t0 + h/2, u0 + (1/2)K12, v0 + (1/2)K22) = h[−3(u0 + (1/2)K12) + 2(v0 + (1/2)K22)] = 0.2[−3 × 0.03 + 2 × 0.41] = 0.146
K23 = hf2(t0 + h/2, u0 + (1/2)K12, v0 + (1/2)K22) = h[3(u0 + (1/2)K12) − 4(v0 + (1/2)K22)] = 0.2[3 × 0.03 − 4 × 0.41] = −0.31
K14 = hf1(t0 + h, u0 + K13, v0 + K23) = h[−3(u0 + K13) + 2(v0 + K23)] = 0.2[−3 × 0.146 + 2 × 0.19] = −0.0116
K24 = hf2(t0 + h, u0 + K13, v0 + K23) = h[3(u0 + K13) − 4(v0 + K23)] = 0.2[3 × 0.146 − 4 × 0.19] = −0.0644
u(0.2) ≈ u1 = u0 + (1/6)(K11 + 2K12 + 2K13 + K14) = 0 + (1/6)(0.2 + 0.12 + 0.292 − 0.0116) = 0.1001
v(0.2) ≈ v1 = v0 + (1/6)(K21 + 2K22 + 2K23 + K24) = 0.5 + (1/6)(−0.4 − 0.36 − 0.62 − 0.0644) = 0.2593
For j = 1, we have

t1 = 0.2, u1 = 0.1001, v1 = 0.2593
K11 = hf1(t1, u1, v1) = h(−3u1 + 2v1) = 0.2(−3 × 0.1001 + 2 × 0.2593) = 0.0437
K21 = hf2(t1, u1, v1) = h(3u1 − 4v1) = 0.2(3 × 0.1001 − 4 × 0.2593) = −0.1474
K12 = hf1(t1 + h/2, u1 + (1/2)K11, v1 + (1/2)K21) = h[−3(u1 + (1/2)K11) + 2(v1 + (1/2)K21)] = 0.2[−3 × 0.1220 + 2 × 0.1856] = 0.0010
K22 = hf2(t1 + h/2, u1 + (1/2)K11, v1 + (1/2)K21) = h[3(u1 + (1/2)K11) − 4(v1 + (1/2)K21)] = 0.2[3 × 0.1220 − 4 × 0.1856] = −0.0753
K13 = hf1(t1 + h/2, u1 + (1/2)K12, v1 + (1/2)K22) = 0.2[−3 × 0.1006 + 2 × 0.2217] = 0.0283
K23 = hf2(t1 + h/2, u1 + (1/2)K12, v1 + (1/2)K22) = 0.2[3 × 0.1006 − 4 × 0.2217] = −0.1170
K14 = hf1(t1 + h, u1 + K13, v1 + K23) = h[−3(u1 + K13) + 2(v1 + K23)] = 0.2[−3 × 0.1284 + 2 × 0.1423] = −0.0201
K24 = hf2(t1 + h, u1 + K13, v1 + K23) = h[3(u1 + K13) − 4(v1 + K23)] = 0.2[3 × 0.1284 − 4 × 0.1423] = −0.0368
u(0.4) ≈ u2 = u1 + (1/6)(K11 + 2K12 + 2K13 + K14) = 0.1001 + (1/6)[0.0437 + 0.0020 + 0.0566 − 0.0201] = 0.1138
v(0.4) ≈ v2 = v1 + (1/6)(K21 + 2K22 + 2K23 + K24) = 0.2593 + (1/6)(−0.1474 − 0.1506 − 0.2340 − 0.0368) = 0.1645

The exact solution is u(t) = (1/5)(e^(−t) − e^(−6t)), v(t) = (1/10)(2e^(−t) + 3e^(−6t)),
and u(0.2) = 0.1035, v(0.2) = 0.2541,
u(0.4) = 0.1159, v(0.4) = 0.1613.
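The vector form of the classical fourth order method, applied to the system of Example 6, can be sketched as follows (an illustration, not part of the text; components are handled with plain lists):

```python
# Classical RK4 for a system u' = f(t, u), with u and f(t, u) as lists.

def rk4_system(f, t, u, h):
    add = lambda x, y, s: [xi + s * yi for xi, yi in zip(x, y)]
    K1 = [h * fi for fi in f(t, u)]
    K2 = [h * fi for fi in f(t + h / 2, add(u, K1, 0.5))]
    K3 = [h * fi for fi in f(t + h / 2, add(u, K2, 0.5))]
    K4 = [h * fi for fi in f(t + h, add(u, K3, 1.0))]
    return [u[i] + (K1[i] + 2 * K2[i] + 2 * K3[i] + K4[i]) / 6
            for i in range(len(u))]

# the system of Example 6: u' = -3u + 2v, v' = 3u - 4v
f = lambda t, w: [-3 * w[0] + 2 * w[1], 3 * w[0] - 4 * w[1]]
w, t = [0.0, 0.5], 0.0
for _ in range(2):              # two steps of h = 0.2 to reach t = 0.4
    w = rk4_system(f, t, w, 0.2)
    t += 0.2
print(w)   # close to [0.1138, 0.1645]
```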

Implicit Runge-Kutta Methods

The implicit Runge-Kutta method using v slopes is defined as

uj+1 = uj + Σ (m = 1 to v) WmKm

where

ci = Σ (j = 1 to v) αij, i = 1, 2, ....., v

and αij, 1 ≤ i, j ≤ v, and W1, W2, ..., Wv are arbitrary parameters. The slopes Km are defined implicitly. The number of unknown parameters is v(v+1). We now give the derivation for the case v = 1. We have

uj+1 = uj + W1K1
K1 = hf(tj + c1h, uj + α11K1).

The Taylor series gives

uj+1 = u(tj) + hu'(tj) + (h²/2)u''(tj) + ....
= u(tj) + hf(tj, u(tj)) + (h²/2)(ft + f fu)tj + ....

and

K1 = h[f(tj, uj) + (c1hft + α11K1fu + ...)tj]
= hfj + h²(c1ft + α11f fu)tj + O(h³)

Substituting, we get

u(tj+1) = u(tj) + W1[hfj + h²(c1ft + α11f fu)tj + O(h³)].

Comparing the coefficients of h and h², we get

W1 = 1, W1c1 = 1/2, W1α11 = 1/2.

We obtain W1 = 1, c1 = α11 = 1/2. The second order implicit Runge-Kutta method becomes

uj+1 = uj + K1
K1 = hf(tj + h/2, uj + (1/2)K1)

For obtaining the value of K1, we need to solve a nonlinear algebraic equation in the one variable K1.


For v = 2. the implicit Runge Kutta

method becomes

uj+1= uj+W1K1+W2K2

K1 = hf(tj+c1h,uj+11K1+12K2)

K2 = hf(tj+c2h,uj+21K1+22K2)

Taylor series expansion of K 1 , K 2

given in this recursive form is very

difficult.. Since. K1, K2 can be

expanded in powers of h, we can

write

2 3
K 1 = hA 1 + h B 1 + h C 1 + ...

2 3
K 2 = hA 2 + h B 2 + h C 2 + ...

Substitute in, expand in Taylor series

and compare the coefficients of h.


2 3 4
h , h and h . Solving

resulting,equations, we obtain the

parameter values as
W1 = 1 / 2, W2 = 1 / 2, c1 = (3 3) / 6, c2 = (3 + 3) / 6

11 = 1 / 4, 11 = (3 23) / 12, 21 = (3 + 23) / 12, 22 = 1 / 4

Since the truncation error is of O(h⁵), the order of the method is 4. The method is given by

uj+1 = uj + (1/2)(K1 + K2)
K1 = hf(tj + ((3 − √3)/6)h, uj + (1/4)K1 + ((3 − 2√3)/12)K2)
K2 = hf(tj + ((3 + √3)/6)h, uj + ((3 + 2√3)/12)K1 + (1/4)K2)

For obtaining the values of K1, K2 we need to solve a system of two nonlinear algebraic equations in the two unknowns K1, K2.

Example 7.

Solve the initial value problem u' = −2tu², u(0) = 1 with h = 0.2 on the interval [0, 0.4]. Use the second order implicit Runge-Kutta method.

The second order implicit Runge-Kutta method is given by

uj+1 = uj + K1, j = 0, 1
K1 = hf(tj + h/2, uj + (1/2)K1)

which gives

K1 = −h(2tj + h)(uj + (1/2)K1)²
This is an implicit equation in K1 and can be solved by using an iterative method. We generally use the Newton-Raphson method. We write

F(K1) = K1 + h(2tj + h)(uj + (1/2)K1)² = K1 + 0.2(2tj + 0.2)(uj + (1/2)K1)²

We have

F'(K1) = 1 + h(2tj + h)(uj + (1/2)K1) = 1 + 0.2(2tj + 0.2)(uj + (1/2)K1)

The Newton-Raphson method gives

K1^(s+1) = K1^(s) − F(K1^(s)) / F'(K1^(s)), s = 0, 1, ....

We assume K1^(0) = hf(tj, uj), j = 0, 1.

Now, we obtain

j = 0: t0 = 0, u0 = 1, K1^(0) = −h(2t0)u0² = 0,
F(K1^(0)) = 0.04, F'(K1^(0)) = 1.04, K1^(1) = −0.0384615,
F(K1^(1)) = 0.00001483, F'(K1^(1)) = 1.03923077, K1^(2) = −0.0384757,
F(K1^(2)) = 0.3 × 10⁻⁸.

Therefore, K1 = K1^(2) = −0.0384757,
and u(0.2) ≈ u1 = u0 + K1 = 0.9615243.

j = 1: t1 = 0.2, u1 = 0.9615243, K1^(0) = −h(2t1)u1² = −0.0739623,
F(K1^(0)) = 0.02861128, F'(K1^(0)) = 1.11094517, K1^(1) = −0.0997163,
F(K1^(1)) = 0.00001989, F'(K1^(1)) = 1.10939993, K1^(2) = −0.0997342,
F(K1^(2)) = 0.35 × 10⁻⁷, F'(K1^(2)) = 1.10939885, K1^(3) = −0.0997342.

Therefore, K1 = K1^(3) = −0.0997342,
and u(0.4) ≈ u2 = u1 + K1 = 0.8617901.
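Example 7 can be reproduced with a short script (an illustration, not part of the text); the tolerance used to stop the Newton iteration is an assumption.

```python
# Second order implicit Runge-Kutta step for f(t, u) = -2 t u^2:
# u_{j+1} = u_j + K1 with K1 = h f(t_j + h/2, u_j + K1/2), and K1 found
# by the Newton-Raphson iteration described in the text.

def implicit_rk2_step(t, u, h, tol=1e-10):
    F = lambda K: K + h * (2 * t + h) * (u + K / 2) ** 2
    dF = lambda K: 1 + h * (2 * t + h) * (u + K / 2)
    K = -2 * h * t * u * u           # starting guess K^(0) = h f(t_j, u_j)
    while abs(F(K)) > tol:
        K = K - F(K) / dF(K)
    return u + K

u, t, h = 1.0, 0.0, 0.2
for _ in range(2):
    u = implicit_rk2_step(t, u, h)
    t += h
print(u)   # close to 0.8617901; exact u(0.4) = 1/1.16 = 0.862069
```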

Second Order Equations

The second order and higher order equations can be solved by considering an equivalent system of first order equations. However, we can also derive single step methods to solve second order or higher order equations directly. Such methods are useful when we consider oscillatory systems. We usually consider the second order initial value problem

u'' = f(t, u, u'), t0 ≤ t ≤ b
u(t0) = u0, u'(t0) = u'0.

We present a few methods to solve this problem directly.

Taylor Series Method of Order p

We write the Taylor series method to find uj+1 and u'j+1 as

uj+1 = uj + hu'j + (h²/2!)u''j + .... + (h^p/p!)uj^(p)
u'j+1 = u'j + hu''j + ... + (h^p/p!)uj^(p+1),
j = 0, 1, ...., N−1.

The truncation error is O(h^(p+1)) in both uj+1 and u'j+1. The higher order derivatives are calculated as follows:

u''j = f(tj, uj, u'j) = fj
u'''j = (ft + u'fu + f fu')j = (∂/∂t + u'(∂/∂u) + f(∂/∂u'))fj = Dfj
uj^(iv) = [ftt + (u')²fuu + f²fu'u' + 2u'ftu + 2f ftu' + 2u'f fuu' + fu'(ft + u'fu + f fu') + f fu]j
= D²fj + (fu')jDfj + fj(fu)j, ....

where D = ∂/∂t + u'j(∂/∂u) + fj(∂/∂u') and D² = D(D).

Therefore

uj+1 = uj + hu'j + (h²/2)fj + (h³/6)Dfj + (h⁴/24)[D²fj + (fu')jDfj + fj(fu)j] + ....
u'j+1 = u'j + hfj + (h²/2)Dfj + (h³/6)[D²fj + (fu')jDfj + fj(fu)j] + ....

Runge-Kutta Methods

For the second order initial value problem, we define the method as

uj+1 = uj + hu'j + W1K1 + W2K2
u'j+1 = u'j + (1/h)(W*1K1 + W*2K2)

K1 = (h²/2)f(tj, uj, u'j)
K2 = (h²/2)f(tj + c2h, uj + c2hu'j + α21K1, u'j + (1/h)b21K1)

where c2, α21, W1, W2, W*1, W*2 are arbitrary constants to be determined. Note that b21K1/h is an O(h) term in the argument of f in K2. Because of the definition of K1, we choose b21 = 2c2.

Expanding K1 and K2 in Taylor series about tj, we get

K1 = (h²/2)fj
K2 = (h²/2)fj + (h³/2)c2Dfj + (h⁴/4)(c2²D²fj + α21fjfu)j + ....

where D and D² are as defined above. Substituting, we obtain

uj+1 = uj + hu'j + (h²/2)(W1 + W2)fj + (h³/2)c2W2Dfj + (h⁴/4)(W2c2²D²fj + W2α21fjfu) + O(h⁵)
u'j+1 = u'j + (h/2)(W*1 + W*2)fj + (h²/2)c2W*2Dfj + (h³/4)(W*2c2²D²fj + W*2α21fjfu) + O(h⁴)

Comparing with the Taylor expansions, we get

W1 + W2 = 1, W*1 + W*2 = 2
c2W2 = 1/3, c2W*2 = 1
The coefficient of h⁴ in uj+1 and of h³ in u'j+1 will not match the corresponding coefficients in the Taylor expansions for any choice of c2, α21, W2 and W*2. Therefore, the error is O(h⁴) in uj+1 and O(h³) in u'j+1. We have four equations to determine the six parameters c2, α21, W1, W2, W*1 and W*2.

Choose c2 = 2/3 and α21 = c2. Then, we obtain

W1 = W2 = 1/2, c2 = α21 = 2/3, b21 = 4/3,
W*1 = 1/2, W*2 = 3/2
Thus, the Runge-Kutta method becomes

uj+1 = uj + hu'j + (1/2)(K1 + K2)
u'j+1 = u'j + (1/(2h))(K1 + 3K2)
K1 = (h²/2)f(tj, uj, u'j)
K2 = (h²/2)f(tj + (2/3)h, uj + (2/3)hu'j + (2/3)K1, u'j + (4/(3h))K1)

If we use four evaluations of f, then we get the following Runge-Kutta method.

uj+1 = uj + hu'j + (1/3)(K1 + K2 + K3)
u'j+1 = u'j + (1/(3h))(K1 + 2K2 + 2K3 + K4)
K1 = (h²/2)f(tj, uj, u'j)
K2 = (h²/2)f(tj + h/2, uj + (1/2)hu'j, u'j + (1/h)K1)
K3 = (h²/2)f(tj + h/2, uj + (1/2)hu'j + (1/4)K1, u'j + (1/h)K2)
K4 = (h²/2)f(tj + h, uj + hu'j + K3, u'j + (2/h)K3)

The error is O(h⁵) in uj+1 and O(h⁴) in u'j+1.

Now, let the function f be independent of u'. Then, the equation u'' = f(t, u) is called the special second order equation. The initial value problem becomes

u'' = f(t, u), t0 ≤ t ≤ b
u(t0) = u0, u'(t0) = u'0

Consider the Runge-Kutta method using 2 evaluations of f. Since fu' = 0, we have

uj^(iv) = D²fj + fj(fu)j

where D = ∂/∂t + u'j(∂/∂u) and D² is suitably defined. Therefore, it is possible to compare the O(h³) terms in uj+1 and u(tj+1).

Comparing the O(h³) terms we get two more equations, besides the four equations above. We now have the equations

W1 + W2 = 1, c2W2 = 1/3, c2²W*2 = 2/3, α21W*2 = 2/3,
W*1 + W*2 = 2, c2W*2 = 1.

This system uniquely determines the six parameters. The solution is

c2 = 2/3, α21 = c2² = 4/9, W2 = 1/2, W1 = 1/2,
W*1 = 1/2, W*2 = 3/2
Therefore, the method is given by

uj+1 = uj + hu'j + (1/2)(K1 + K2)
u'j+1 = u'j + (1/(2h))(K1 + 3K2)
K1 = (h²/2)f(tj, uj)
K2 = (h²/2)f(tj + (2/3)h, uj + (2/3)hu'j + (4/9)K1)

The error in both uj+1 and u'j+1 is O(h⁴).
The Runge-Kutta method using four evaluations is given by

uj+1 = uj + hu'j + (1/96)(23K1 + 75K2 − 27K3 + 25K4)
u'j+1 = u'j + (1/(96h))(23K1 + 125K2 − 81K3 + 125K4)
K1 = (h²/2)f(tj, uj)
K2 = (h²/2)f(tj + (2/5)h, uj + (2/5)hu'j + (4/25)K1)
K3 = (h²/2)f(tj + (2/3)h, uj + (2/3)hu'j + (4/9)K1)
K4 = (h²/2)f(tj + (4/5)h, uj + (4/5)hu'j + (8/25)(K1 + K2))

The error in both uj+1 and u'j+1 is O(h⁵). This method is called the Runge-Kutta-Nystrom method.

Example 8.

The Runge-Kutta method is used for solving the initial value problem

u'' = u', u(0) = 1, u'(0) = 1

with step length h. Prove that

[uj+1, u'j+1]^T = A[uj, u'j]^T

where A is a real 2 × 2 matrix. Compute the solution at t = 0.2 with h = 0.1. Compare with the exact solution u(t) = e^t.

We have f(t, u, u') = u'. Therefore,

K1 = (h²/2)f(tj, uj, u'j) = (h²/2)u'j
K2 = (h²/2)f(tj + h/2, uj + (1/2)hu'j, u'j + (1/h)K1) = (h²/2)(u'j + (1/h)K1) = (h²/2)(1 + h/2)u'j
K3 = (h²/2)f(tj + h/2, uj + (1/2)hu'j + (1/4)K1, u'j + (1/h)K2) = (h²/2)(u'j + (1/h)K2) = (h²/2)(1 + h/2 + h²/4)u'j
K4 = (h²/2)f(tj + h, uj + hu'j + K3, u'j + (2/h)K3) = (h²/2)(u'j + (2/h)K3) = (h²/2)(1 + h + h²/2 + h³/4)u'j

uj+1 = uj + hu'j + (1/3)(K1 + K2 + K3)
= uj + hu'j + (1/3)[(h²/2)u'j + (h²/2)(1 + h/2)u'j + (h²/2)(1 + h/2 + h²/4)u'j]
= uj + (h + h²/2 + h³/6 + h⁴/24)u'j

u'j+1 = u'j + (1/(3h))(K1 + 2K2 + 2K3 + K4)
= u'j + (1/(3h))[(h²/2)u'j + h²(1 + h/2)u'j + h²(1 + h/2 + h²/4)u'j + (h²/2)(1 + h + h²/2 + h³/4)u'j]
= (1 + h + h²/2 + h³/6 + h⁴/24)u'j

Therefore,

[uj+1, u'j+1]^T = [[1, E(h) − 1], [0, E(h)]] [uj, u'j]^T

where E(h) = 1 + h + h²/2 + h³/6 + h⁴/24, so that

A = [[1, E(h) − 1], [0, E(h)]]

The matrix A for h = 0.1 becomes

A = [[1, 0.1051708], [0, 1.1051708]]


For j = 0, we have
u0 = u'0 = 1, t1 = 0.1

[u1, u'1]^T = [[1, 0.1051708], [0, 1.1051708]][1, 1]^T = [1.1051708, 1.1051708]^T

For j = 1, we have
u1 = 1.1051708, u'1 = 1.1051708, t2 = 0.2

[u2, u'2]^T = [[1, 0.1051708], [0, 1.1051708]][1.1051708, 1.1051708]^T = [1.2214025, 1.2214025]^T

The exact solution is u(t) = e^t and the exact values at t = 0.2 are given by

u2 = u'2 = 1.2214028.
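The matrix form of Example 8 lends itself to a short check (an illustration, not part of the text):

```python
# With f(t, u, u') = u', each Runge-Kutta step reduces to multiplication
# by A = [[1, E(h) - 1], [0, E(h)]], E(h) = 1 + h + h^2/2 + h^3/6 + h^4/24.

def E(h):
    return 1 + h + h**2 / 2 + h**3 / 6 + h**4 / 24

def step(h, u, up):
    return u + (E(h) - 1) * up, E(h) * up

u, up = 1.0, 1.0
for _ in range(2):              # two steps of h = 0.1 to reach t = 0.2
    u, up = step(0.1, u, up)
print(u, up)   # both close to 1.2214025; exact e^0.2 = 1.2214028
```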

Example 9.

Solve the initial value problem

u'' = (1 + t²)u, u(0) = 1, u'(0) = 0, t ∈ [0, 0.4]

by the Runge-Kutta-Nystrom method with h = 0.2. Compare with the exact solution u(t) = e^(t²/2).
For j = 0, we have

t0 = 0, u0 = 1, u'0 = 0
K1 = (h²/2)f(t0, u0) = (h²/2)(1 + t0²)u0 = ((0.2)²/2)(1 + 0) = 0.02
K2 = (h²/2)f(t0 + (2/5)h, u0 + (2/5)hu'0 + (4/25)K1)
= (h²/2)[1 + (t0 + (2/5)h)²][u0 + (2/5)hu'0 + (4/25)K1]
= ((0.2)²/2)[1 + (0.08)²][1 + 0 + (4/25)(0.02)]
= (0.02)(1.0064)(1.0032) = 0.0201924
K3 = (h²/2)f(t0 + (2/3)h, u0 + (2/3)hu'0 + (4/9)K1)
= ((0.2)²/2)[1 + (0.1333333)²][1 + 0 + (4/9)(0.0201924)]
= (0.02)(1.0177778)(1.0089744) = 0.0205382
K4 = (h²/2)f(t0 + (4/5)h, u0 + (4/5)hu'0 + (8/25)(K1 + K2))
= ((0.2)²/2)[1 + (0.16)²][1 + 0 + (8/25)(0.0401924)]
= (0.02)(1.0256)(1.0128616) = 0.0207758

u1 = u0 + hu'0 + (1/96)(23K1 + 75K2 − 27K3 + 25K4)
= 1 + (1/96)[23(0.02) + 75(0.0201924) − 27(0.0205382) + 25(0.0207758)]
= 1 + (1/96)(1.9392936) = 1.0202010

u'1 = u'0 + (1/(96h))(23K1 + 125K2 − 81K3 + 125K4)
= 0 + (1/(96 × 0.2))[23(0.02) + 125(0.0201924) − 81(0.0205382) + 125(0.0207758)]
= (1/19.2)(3.9174308) = 0.2040329
For j = 1, we have

t1 = 0.2, u1 = 1.0202010, u'1 = 0.2040329
K1 = (h²/2)f(t1, u1) = (h²/2)(1 + t1²)u1 = ((0.2)²/2)[1 + (0.2)²](1.0202010) = (0.02)(1.04)(1.0202010) = 0.0212202
K2 = (h²/2)f(t1 + (2/5)h, u1 + (2/5)hu'1 + (4/25)K1)
= (0.02)[1 + (0.28)²][1.0202010 + (0.08)(0.2040329) + (0.16)(0.0212202)]
= (0.02)(1.0784)(1.0399189) = 0.0224290
K3 = (h²/2)f(t1 + (2/3)h, u1 + (2/3)hu'1 + (4/9)K1)
= (0.02)[1 + (0.3333333)²][1.0202010 + (0.1333333)(0.2040329) + (4/9)(0.0212202)]
= (0.02)(1.1111111)(1.0568366) = 0.0234853
K4 = (h²/2)f(t1 + (4/5)h, u1 + (4/5)hu'1 + (8/25)(K1 + K2))
= (0.02)[1 + (0.36)²][1.0202010 + (0.16)(0.2040329) + (0.32)(0.0436492)]
= (0.02)(1.1296)(1.0668140) = 0.0241015

u2 = u1 + hu'1 + (1/96)(23K1 + 75K2 − 27K3 + 25K4)
= 1.0202010 + (0.2)(0.2040329) + (1/96)[23(0.0212202) + 75(0.0224290) − 27(0.0234853) + 25(0.0241015)]
= 1.0202010 + 0.0408066 + (1/96)(2.1386740) = 1.0832855

u'2 = u'1 + (1/(96h))(23K1 + 125K2 − 81K3 + 125K4)
= 0.2040329 + (1/19.2)[23(0.0212202) + 125(0.0224290) − 81(0.0234853) + 125(0.0241015)]
= 0.2040329 + (1/19.2)(4.4020678) = 0.4333073

Therefore, we have

u(0.2) ≈ u1 = 1.0202010, u'(0.2) ≈ u'1 = 0.2040329,
u(0.4) ≈ u2 = 1.0832855, u'(0.4) ≈ u'2 = 0.4333073.

The exact solution is u(t) = e^(t²/2). The exact values are given by

u(0.2) = 1.0202013, u'(0.2) = 0.2040403,
u(0.4) = 1.0832871, u'(0.4) = 0.4333148.
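Example 9 can be reproduced with a short script (an illustration, not part of the text):

```python
# One Runge-Kutta-Nystrom step for u'' = f(t, u), advancing u and u'.

def rkn_step(f, t, u, up, h):
    K1 = h * h / 2 * f(t, u)
    K2 = h * h / 2 * f(t + 2 * h / 5, u + 2 * h * up / 5 + 4 * K1 / 25)
    K3 = h * h / 2 * f(t + 2 * h / 3, u + 2 * h * up / 3 + 4 * K1 / 9)
    K4 = h * h / 2 * f(t + 4 * h / 5, u + 4 * h * up / 5 + 8 * (K1 + K2) / 25)
    u_new = u + h * up + (23 * K1 + 75 * K2 - 27 * K3 + 25 * K4) / 96
    up_new = up + (23 * K1 + 125 * K2 - 81 * K3 + 125 * K4) / (96 * h)
    return u_new, up_new

f = lambda t, u: (1 + t * t) * u        # u'' = (1 + t^2) u
t, u, up = 0.0, 1.0, 0.0
for _ in range(2):                      # two steps of h = 0.2
    u, up = rkn_step(f, t, u, up, 0.2)
    t += 0.2
print(u, up)   # close to the exact values e^{0.08} and 0.4 e^{0.08}
```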

Double click this page to view clearly


Unit -6

LINEAR EQUATIONS WITH


VARIABLE COEFFICIENTS

6.1 INTRODUCTION

A linear differential equation of order n with variable coefficients is an equation of the form

a0(x)y^(n) + a1(x)y^(n−1) + ... + an(x)y = b(x),

where a0, a1, ...., an, b are complex-valued functions on some real interval I. Points where a0(x) = 0 are called singular points, and often the equation requires special consideration at such points. Therefore in this unit we assume that a0(x) ≠ 0 on I. By dividing by a0 we can obtain an equation of the same form, but with a0 replaced by the constant 1. Thus we consider the equation

y^(n) + a1(x)y^(n−1) + ... + an(x)y = b(x).    (1)

As in the case when a1, ......, an are constants we designate the left side by L(y). Thus

L(y) = y^(n) + a1(x)y^(n−1) + ... + an(x)y.

If b(x) = 0 for all x on I we say L(y) = 0 is a homogeneous equation, whereas if b(x) ≠ 0 for some x in I, the equation L(y) = b(x) is called a non-homogeneous equation.
We give a meaning to L itself as an operator which takes each function φ, which has n derivatives on I, into the function L(φ) on I whose value at x is given by

L(φ)(x) = φ^(n)(x) + a1(x)φ^(n−1)(x) + .... + an(x)φ(x).

Thus a solution of (1) on I is a function φ on I which has n derivatives there, and which satisfies L(φ) = b.

In this unit we assume that the complex-valued functions a1, ....., an, b are continuous on some real interval I.
Most of the results we developed for the case when a1, ......, an are constants continue to be valid in the more general case we are now considering. The major difficulty with linear equations with variable coefficients, from a practical point of view, is that it is rare that we can solve the equations in terms of elementary functions, such as the exponential and trigonometric functions. However, in case a1, ..., an, b have convergent power series expansions the solutions will have this property also, and these series solutions can be obtained by a simple formal process.
6.2 Initial value problems for the homogeneous equation

Although in many cases it is not possible to express a solution of (1) in terms of elementary functions, it can be proved that solutions always exist.

Theorem. Let a1, ...., an be continuous functions on an interval I containing the point x0. If α1, ...., αn are any n constants, there exists a solution φ of

L(y) = y^(n) + a1(x)y^(n−1) + ... + an(x)y = 0

on I satisfying

φ(x0) = α1, φ'(x0) = α2, ....., φ^(n−1)(x0) = αn.
We stress two things about this theorem:

i. the solution exists on the entire interval I where a1, ..., an are continuous, and

ii. every initial value problem has a solution.

Neither of these results may be true if the coefficient of y^(n) vanishes somewhere in I. For example, consider the equation xy' + y = 0, whose coefficients are continuous for all real x. This equation together with the initial condition y(1) = 1 has the solution φ1, where

φ1(x) = 1/x.

But this solution exists only for 0 < x < ∞. Also, if φ is any solution, then

x φ(x) = c,

where c is some constant. Thus only the trivial solution (c = 0) exists at the origin, which implies that the only initial value problem

xy' + y = 0, y(0) = α1,

which has a solution is the one for which α1 = 0.
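The behavior at the origin can be checked directly. Below is a minimal numerical sketch (the helper names are ours, not the text's) confirming that φ1(x) = 1/x satisfies the equation and the initial condition, and that xφ(x) stays constant:

```python
# Check of the example above: phi(x) = 1/x solves x*y' + y = 0 with
# phi(1) = 1, and x*phi(x) = 1 (the constant c) for every x > 0,
# so this solution cannot be continued to x = 0.
phi  = lambda x: 1.0 / x
dphi = lambda x: -1.0 / x**2
assert phi(1.0) == 1.0                        # the initial condition y(1) = 1
for x in (0.1, 1.0, 10.0):
    print(x * dphi(x) + phi(x), x * phi(x))   # ~0 and ~1 at each sample x
```
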

Just as in the case where the coefficients aj (j = 1, ..., n) are constants, the uniqueness of the solution given in the theorem is demonstrated with the aid of an estimate for

||φ(x)|| = [ |φ(x)|² + |φ'(x)|² + ... + |φ^(n-1)(x)|² ]^(1/2).

Theorem.

Let b1, ..., bn be non-negative constants such that for all x in I

|aj(x)| ≤ bj, (j = 1, ..., n),

and define k by k = 1 + b1 + ... + bn. If x0 is a point in I, and φ is a solution of L(y) = 0 on I, then

||φ(x0)|| e^(-k|x-x0|) ≤ ||φ(x)|| ≤ ||φ(x0)|| e^(k|x-x0|) for all x in I. (2)

Proof.

Since L(φ) = 0 we have

φ^(n)(x) = -a1(x)φ^(n-1)(x) - ... - an(x)φ(x),

and therefore

|φ^(n)(x)| ≤ |a1(x)| |φ^(n-1)(x)| + ... + |an(x)| |φ(x)|
≤ b1 |φ^(n-1)(x)| + ... + bn |φ(x)|.
We remark that if I is a closed bounded interval, that is, of the form a ≤ x ≤ b with a, b real, and if the aj are continuous on I, then there always exist finite constants bj such that |aj(x)| ≤ bj on I.

Theorem.

Let x0 be in I and let α1, ..., αn be any n constants. There is at most one solution φ of L(y) = 0 on I satisfying

φ(x0) = α1, φ'(x0) = α2, ..., φ^(n-1)(x0) = αn.

Proof.

Let φ, ψ be two solutions of L(y) = 0 on I satisfying the conditions at x0, and consider χ = φ - ψ. We wish to prove χ(x) = 0 for all x in I. Even though the functions aj are continuous on I they need not be bounded there. However, let x be any point on I other than x0, and let J be any closed bounded interval in I which contains x0 and x. On this interval the functions aj are bounded, that is,

|aj(x)| ≤ bj, (j = 1, ..., n),

on J for some constants bj, which may depend on J. Now we apply the above theorem to χ defined on J. We have L(χ) = 0 on J, and ||χ(x0)|| = 0. Therefore (2) implies that ||χ(x)|| ≤ 0 on J, and hence φ(x) = ψ(x). Since x was chosen to be any point in I other than x0, we have proved φ(x) = ψ(x) for all x in I.
6.3 Solutions of the homogeneous equation

If φ1, ..., φm are any m solutions of the n-th order equation L(y) = 0 on an interval I, and c1, ..., cm are any m constants, then

L(c1φ1 + ... + cmφm) = c1L(φ1) + ... + cmL(φm),

which implies that c1φ1 + ... + cmφm is also a solution. In words, any linear combination of solutions is again a solution. The trivial solution is the function which is identically zero on I.

As in the case of an L with constant coefficients, every solution of L(y) = 0 is a linear combination of any n linearly independent solutions. Recall that n functions φ1, ..., φn defined on an interval I are said to be linearly independent if the only constants c1, ..., cn such that c1φ1(x) + ... + cnφn(x) = 0 for all x in I are the constants c1 = c2 = ... = cn = 0. We construct n linearly independent solutions, and show that every solution is a linear combination of these.

Theorem.

There exist n linearly independent solutions of L(y) = 0 on I.

Proof.

Let x0 be a point in I. There is a solution φ1 of L(y) = 0 satisfying

φ1(x0) = 1, φ1'(x0) = 0, ..., φ1^(n-1)(x0) = 0.
In general, for each i = 1, 2, ..., n there is a solution φi satisfying

φi^(i-1)(x0) = 1, φi^(j-1)(x0) = 0, j ≠ i.

The solutions φ1, ..., φn are linearly independent on I, for suppose there are constants c1, c2, ..., cn such that

c1φ1(x) + c2φ2(x) + ... + cnφn(x) = 0

for all x in I. Differentiating, we see that

c1φ1'(x) + c2φ2'(x) + ... + cnφn'(x) = 0
c1φ1''(x) + c2φ2''(x) + ... + cnφn''(x) = 0
...
c1φ1^(n-1)(x) + c2φ2^(n-1)(x) + ... + cnφn^(n-1)(x) = 0

for all x in I. In particular, the equations must hold at x0. Putting x = x0 in the first equation we find that c1·1 + 0 + ... + 0 = 0, or c1 = 0. Putting x = x0 in the remaining equations we obtain c2 = c3 = ... = cn = 0, and thus the solutions φ1, ..., φn are linearly independent.
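This construction can be imitated numerically: integrate the initial value problems with unit initial data and obtain a basis. The sketch below uses our own helper names (not from the text), a hand-rolled Runge-Kutta step, and the sample equation y'' + (1/x)y' - (1/x²)y = 0 for x > 0, whose exact basis at x0 = 1 is (x + 1/x)/2 and (x - 1/x)/2:

```python
# Numerical sketch: build the basis solutions phi1, phi2 of
# y'' + (1/x) y' - (1/x**2) y = 0 on x > 0 with the unit initial
# conditions of the proof, using a simple fixed-step RK4 integrator.
def rk4_solve(f, x0, y0, x1, n=1000):
    """Integrate the first-order system y' = f(x, y) from x0 to x1."""
    h = (x1 - x0) / n
    x, y = x0, list(y0)
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
        k3 = f(x + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
        k4 = f(x + h,   [yi + h*ki   for yi, ki in zip(y, k3)])
        y = [yi + h/6*(a + 2*b + 2*c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        x += h
    return y

def f(x, y):          # y = (phi, phi'); phi'' = -(1/x) phi' + (1/x**2) phi
    return [y[1], -(1/x)*y[1] + (1/x**2)*y[0]]

phi1 = rk4_solve(f, 1.0, [1.0, 0.0], 2.0)   # phi1(1) = 1, phi1'(1) = 0
phi2 = rk4_solve(f, 1.0, [0.0, 1.0], 2.0)   # phi2(1) = 0, phi2'(1) = 1
# Exact values for comparison: phi1 = (x + 1/x)/2, phi2 = (x - 1/x)/2.
print(phi1[0], phi2[0])                      # ~1.25 and ~0.75 at x = 2
```
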

Theorem.

Let φ1, ..., φn be the n solutions of L(y) = 0 on I satisfying the above initial conditions. If φ is any solution of L(y) = 0 on I, there are n constants c1, ..., cn such that

φ = c1φ1 + ... + cnφn.

Proof.

Let

φ(x0) = α1, φ'(x0) = α2, ..., φ^(n-1)(x0) = αn,

and consider the function

ψ = α1φ1 + α2φ2 + ... + αnφn.

It is a solution of L(y) = 0, and clearly

ψ(x0) = α1φ1(x0) + α2φ2(x0) + ... + αnφn(x0) = α1,

since

φ1(x0) = 1, φ2(x0) = 0, ..., φn(x0) = 0.

Using the other relations in the initial conditions we see that

ψ(x0) = α1, ψ'(x0) = α2, ..., ψ^(n-1)(x0) = αn.

Thus ψ is a solution of L(y) = 0 having the same initial conditions at x0 as φ. By the uniqueness theorem we must have φ = ψ, that is,

φ = α1φ1 + α2φ2 + ... + αnφn.
Linear space of functions

A set of functions which has the property that, if φ1, φ2 belong to the set, and c1, c2 are any two constants, then c1φ1 + c2φ2 belongs to the set also, is called a linear space of functions. We have just seen that the set of all solutions of L(y) = 0 on an interval I is a linear space of functions. If a linear space of functions contains n functions φ1, ..., φn which are linearly independent and such that every function in the space can be represented as a linear combination of these, then φ1, ..., φn is called a basis for the linear space, and the dimension of the linear space is the integer n. The content of the preceding theorem is that the functions φ1, ..., φn satisfying the initial conditions form a basis for the solutions of L(y) = 0 on I, and this linear space of functions has dimension n.

CYP QUESTIONS

1. Consider the equation y'' + (1/x)y' - (1/x²)y = 0 for x > 0.

a. Show that there is a solution of the form x^r, where r is a constant.

b. Find two linearly independent solutions for x > 0, and prove that they are linearly independent.

c. Find the two solutions φ1, φ2 satisfying

φ1(1) = 1, φ2(1) = 0,
φ1'(1) = 0, φ2'(1) = 1.

2. Find two linearly independent solutions of the equation

(3x - 1)² y'' + (9x - 3)y' - 9y = 0

for x > 1/3. (Hint: compare question 1 with x replaced by 3x - 1.)

3. Consider the equation

L(y) = y'' + a1(x)y' + a2(x)y = 0,

where a1, a2 are continuous on some interval I, and a1 has a continuous derivative there.

a. If φ is a solution of L(y) = 0, let φ = uv, and determine a first order differential equation for v which will make u the solution of an equation in which the first derivative term is absent.

b. Solve this differential equation for v.

c. Show that u will then satisfy the equation

u'' + Λ(x)u = 0,

where Λ = a2 - a1²/4 - a1'/2.

6.4 THE WRONSKIAN AND LINEAR INDEPENDENCE

In order to show that any set of n linearly independent solutions of L(y) = 0 can serve as a basis for the solutions of L(y) = 0, we consider the Wronskian W(φ1, ..., φn) of any n solutions φ1, ..., φn. Recall that this is defined to be the determinant

W(φ1, ..., φn) =

| φ1         φ2         ...  φn         |
| φ1'        φ2'        ...  φn'        |
| ...                                   |
| φ1^(n-1)   φ2^(n-1)   ...  φn^(n-1)   |
Theorem.

If φ1, ..., φn are n solutions of L(y) = 0 on an interval I, they are linearly independent there if, and only if,

W(φ1, ..., φn)(x) ≠ 0 for all x in I.


Proof.

First suppose W(φ1, ..., φn)(x) ≠ 0 for all x in I. If there are constants c1, ..., cn such that c1φ1(x) + ... + cnφn(x) = 0 for all x in I, then clearly

c1φ1'(x) + ... + cnφn'(x) = 0
...
c1φ1^(n-1)(x) + ... + cnφn^(n-1)(x) = 0

for all x in I. For a fixed x in I these are n linear homogeneous equations satisfied by c1, ..., cn. The determinant of the coefficients is just W(φ1, ..., φn)(x), which is not zero. Hence there is only one solution to the system, namely

c1 = c2 = ... = cn = 0.

Therefore φ1, ..., φn are linearly independent on I.

Conversely, suppose φ1, ..., φn are linearly independent on I. Suppose there is an x0 in I such that

W(φ1, ..., φn)(x0) = 0.

Then this implies that the system of n linear equations

c1φ1(x0) + ... + cnφn(x0) = 0
c1φ1'(x0) + ... + cnφn'(x0) = 0
...
c1φ1^(n-1)(x0) + ... + cnφn^(n-1)(x0) = 0

has a solution c1, ..., cn, where not all the constants c1, ..., cn are zero. Let c1, ..., cn be such a solution, and consider the function

ψ = c1φ1 + ... + cnφn.

Now L(ψ) = 0, and we see that

ψ(x0) = 0, ψ'(x0) = 0, ..., ψ^(n-1)(x0) = 0.

From the uniqueness theorem it follows that ψ(x) = 0 for all x in I, and thus

c1φ1(x) + ... + cnφn(x) = 0

for all x in I. But this contradicts the fact that φ1, ..., φn are linearly independent on I. Thus the supposition that there was a point x0 in I such that W(φ1, ..., φn)(x0) = 0 must be false. We have consequently proved that W(φ1, ..., φn)(x) ≠ 0 for all x in I.

Theorem.

Let φ1, ..., φn be n linearly independent solutions of L(y) = 0 on an interval I. If φ is any solution of L(y) = 0 on I, it can be represented in the form

φ = c1φ1 + ... + cnφn,

where c1, ..., cn are constants. Thus any set of n linearly independent solutions of L(y) = 0 on I is a basis for the solutions of L(y) = 0 on I.

Proof.

Let x0 be a point in I, and suppose

φ(x0) = α1, φ'(x0) = α2, ..., φ^(n-1)(x0) = αn.

We show that there exist unique constants c1, ..., cn such that ψ = c1φ1 + ... + cnφn is a solution of L(y) = 0 satisfying

ψ(x0) = α1, ψ'(x0) = α2, ..., ψ^(n-1)(x0) = αn.

By the uniqueness result we then have φ = ψ, or

φ = c1φ1 + ... + cnφn.

The initial conditions for ψ are equivalent to the following equations for c1, ..., cn:

c1φ1(x0) + ... + cnφn(x0) = α1
c1φ1'(x0) + ... + cnφn'(x0) = α2
...
c1φ1^(n-1)(x0) + ... + cnφn^(n-1)(x0) = αn.

This is a set of n linear equations for c1, ..., cn. The determinant of the coefficients is W(φ1, ..., φn)(x0), which is not zero since φ1, ..., φn are linearly independent. Therefore there is a unique solution c1, ..., cn of the equations, and this completes the proof.

Theorem.

Let φ1, ..., φn be n solutions of L(y) = 0 on an interval I, and let x0 be any point in I. Then

W(φ1, ..., φn)(x) = exp[ -∫(x0 to x) a1(t) dt ] W(φ1, ..., φn)(x0).

Proof.

We first prove this result for the simple case n = 2, and then give a proof which is valid for general n. The latter proof makes use of some general properties of determinants.

Proof for the case n = 2. In this case

W(φ1, φ2) = φ1φ2' - φ1'φ2,

and therefore

W'(φ1, φ2) = φ1'φ2' + φ1φ2'' - φ1''φ2 - φ1'φ2' = φ1φ2'' - φ1''φ2.

Since φ1, φ2 satisfy y'' + a1(x)y' + a2(x)y = 0, we get

φ1'' = -a1φ1' - a2φ1,
φ2'' = -a1φ2' - a2φ2.

Thus

W'(φ1, φ2) = φ1(-a1φ2' - a2φ2) - (-a1φ1' - a2φ1)φ2 = -a1(φ1φ2' - φ1'φ2) = -a1 W(φ1, φ2).

We see that W(φ1, φ2) satisfies the linear first order equation

y' + a1(x)y = 0,

and hence

W(φ1, φ2)(x) = c exp[ -∫(x0 to x) a1(t) dt ],

where c is a constant. By putting x = x0, we obtain

c = W(φ1, φ2)(x0),

thus proving the result in case n = 2.
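As a concrete check of this formula, take y'' + (1/x)y' - (1/x²)y = 0 on x > 0, where a1(x) = 1/x, with the basis normalized at x0 = 1; the code below is our own illustration, not from the text:

```python
import math

# Check of the n = 2 case: for y'' + (1/x) y' - (1/x**2) y = 0 an exact
# basis with unit initial data at x0 = 1 is phi1 = (x + 1/x)/2 and
# phi2 = (x - 1/x)/2, and the formula predicts
# W(x) = exp(-∫_1^x dt/t) * W(1) = W(1)/x.
def wronskian(x):
    phi1, dphi1 = (x + 1/x) / 2, (1 - 1/x**2) / 2
    phi2, dphi2 = (x - 1/x) / 2, (1 + 1/x**2) / 2
    return phi1 * dphi2 - dphi1 * phi2

for x in (1.0, 1.5, 2.0, 5.0):
    predicted = math.exp(-math.log(x)) * wronskian(1.0)   # = 1/x here
    print(x, wronskian(x), predicted)
```
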

Proof for a general n. We let W = W(φ1, ..., φn) for brevity. From the definition of W as a determinant it follows that its derivative W' is a sum of n determinants

W' = V1 + ... + Vn,

where Vk differs from W only in its k-th row, and the k-th row of Vk is obtained by differentiating the k-th row of W. Thus

W' =
| φ1'        ...  φn'        |           | φ1         ...  φn         |
| φ1'        ...  φn'        |           | φ1'        ...  φn'        |
| φ1''       ...  φn''       |  + ... +  | ...                        |
| ...                        |           | φ1^(n-2)   ...  φn^(n-2)   |
| φ1^(n-1)   ...  φn^(n-1)   |           | φ1^(n)     ...  φn^(n)     |

The first n - 1 determinants V1, ..., Vn-1 are all zero, since they each have two identical rows. Since φ1, ..., φn are solutions of L(y) = 0 we have

φi^(n) = -a1 φi^(n-1) - ... - an φi, (i = 1, ..., n),

and therefore W' = Vn, whose first n - 1 rows are those of W and whose last row has i-th entry -Σ(j = 0 to n-1) a(n-j) φi^(j). The value of this determinant is unchanged if we multiply any row by a number and add it to the last row. We multiply the first row by an, the second by a(n-1), ..., the (n-1)-st row by a2, and add these to the last row, obtaining

W' =
| φ1           ...  φn           |
| φ1'          ...  φn'          |
| ...                            |
| φ1^(n-2)     ...  φn^(n-2)     |
| -a1φ1^(n-1)  ...  -a1φn^(n-1)  |  = -a1 W.

Therefore W satisfies the linear first order equation y' + a1(x)y = 0, and thus

W(x) = exp[ -∫(x0 to x) a1(t) dt ] W(x0).

A consequence of the above theorem is that n solutions φ1, ..., φn of L(y) = 0 on an interval I are linearly independent there if, and only if, W(φ1, ..., φn)(x0) ≠ 0 for any particular x0 in I.

CYP QUESTIONS

1. Consider the equation

L(y) = y'' + a1(x)y' + a2(x)y = 0,

where a1, a2 are continuous on some interval I. Let φ1, φ2 and ψ1, ψ2 be two bases for the solutions of L(y) = 0. Show that there is a non-zero constant k such that

W(ψ1, ψ2)(x) = kW(φ1, φ2)(x).

2. Consider the equation y'' + a(x)y = 0, where a is a continuous function on -∞ < x < ∞ which is of period ξ > 0. Let φ1, φ2 be the basis for the solutions satisfying

φ1(0) = 1, φ2(0) = 0,
φ1'(0) = 0, φ2'(0) = 1.

a. Show that W(φ1, φ2)(x) = 1 for all x. Show that there is at least one non-trivial solution φ of period ξ if, and only if,

φ1(ξ) + φ2'(ξ) = 2.

b. Show that there exists a non-trivial solution φ satisfying

φ(x + ξ) = -φ(x)

if, and only if,

φ1(ξ) + φ2'(ξ) = -2.

(Hint: show that such a φ exists if, and only if, φ(ξ) = -φ(0) and φ'(ξ) = -φ'(0).)

c. If φ1(ξ) + φ2'(ξ) = -2 show that there exists a non-trivial solution of period 2ξ.

3. a. Let φ be a real-valued non-trivial solution of y'' + α(x)y = 0 on a < x < b, and let ψ be a real-valued non-trivial solution of y'' + β(x)y = 0 on a < x < b. Here α, β are real-valued continuous functions. Suppose that

β(x) > α(x), (a < x < b).

Show that if x1 and x2 are successive zeros of φ on a < x < b, then ψ must vanish at some point ξ, x1 < ξ < x2. (Hint: suppose ψ(x) > 0 for x1 < x < x2, and assume φ(x) > 0 for x1 < x < x2. Then

(φ'ψ - φψ')' = φ''ψ - φψ'' = (β - α)φψ,

and an integration yields

φ'(x2)ψ(x2) - φ'(x1)ψ(x1) > 0,

since φ(x1) = φ(x2) = 0. Show that φ'(x2) < 0 and φ'(x1) > 0.)

b. Show that any solution of y'' + xy = 0 on 0 < x < ∞ has an infinity of zeros there. (Hint: consider the equation y'' + y = 0, and use (a) with α(x) = 1, β(x) = x, φ(x) = cos x.)

4. Let φ and ψ be two real-valued linearly independent solutions of

y'' + α(x)y = 0

on a < x < b, where α is real-valued. Show that between any two successive zeros of φ there is a zero of ψ. (Hint: suppose φ(x1) = φ(x2) = 0, and ψ(x) > 0 for x1 ≤ x ≤ x2. Let χ = φ/ψ, and show that

χ' = -W(φ, ψ)/ψ², x1 ≤ x ≤ x2.

Apply Rolle's theorem to χ on x1 ≤ x ≤ x2. Note that φ and ψ cannot vanish simultaneously, for W(φ, ψ)(x) ≠ 0.)

5. One solution of

L(y) = y'' + (1/(4x²))y = 0

for x > 0 is φ(x) = x^(1/2). Show that there is another solution ψ of the form ψ = uφ, where u is some function. (Hint: try to find u so that L(uφ) = 0. This is a variation of the variation of constants idea.)

6. Consider the equation y'' + α(x)y = 0, where α is a real-valued continuous function on 0 < x < ∞.

a. If α(x) ≥ ε for 0 < x < ∞, where ε is a positive constant, show that every solution has an infinity of zeros on 0 < x < ∞.

7. Consider the equation y'' + α(x)y = 0, where α is a real-valued continuous function for a < x < b.

a. If φ is a non-trivial solution which has a zero at x0, show that φ'(x0) ≠ 0.

b. Show that the zeros of a non-trivial solution are isolated, that is, if φ(x0) = 0, there is no sequence of distinct xn → x0, (n → ∞), such that φ(xn) = 0.
6.5 REDUCTION OF THE ORDER OF A HOMOGENEOUS EQUATION

Suppose we have found by some means one solution φ1 of the equation

L(y) = y^(n) + a1(x)y^(n-1) + ... + an(x)y = 0.

It is then possible to take advantage of this information to reduce the order of the equation to be solved by one. The idea is the same one employed in the variation of constants method. We try to find solutions of L(y) = 0 of the form uφ1, where u is some function. If uφ1 is to be a solution we must have

0 = L(uφ1) = (uφ1)^(n) + a1(uφ1)^(n-1) + ... + a(n-1)(uφ1)' + an(uφ1),

and, expanding the derivatives of the product uφ1, the right side becomes a sum of terms involving u, u', ..., u^(n).

The coefficient of u in this equation is just L(φ1) = 0. Therefore, if v = u', this is a linear equation of order n - 1 in v,

φ1 v^(n-1) + ... + [n φ1^(n-1) + (n-1) a1 φ1^(n-2) + ... + a(n-1) φ1] v = 0. (1)

The coefficient of v^(n-1) is φ1, and hence if φ1(x) ≠ 0 on an interval I this equation has n - 1 linearly independent solutions v2, ..., vn on I. If x0 is some point in I, and

uk(x) = ∫(x0 to x) vk(t) dt, (k = 2, ..., n),

then uk' = vk and the functions

φ1, u2φ1, ..., unφ1 (2)

are solutions of L(y) = 0. Moreover these functions form a basis for the solutions of L(y) = 0 on I. For suppose we have constants c1, ..., cn such that

c1φ1 + c2u2φ1 + ... + cnunφ1 = 0.

Since φ1(x) ≠ 0 on I this implies c1 + c2u2 + ... + cnun = 0, and differentiating we obtain c2u2' + ... + cnun' = 0, or c2v2 + ... + cnvn = 0. Since v2, ..., vn are linearly independent on I we have c2 = c3 = ... = cn = 0, and we obtain c1 = 0 also. Thus the functions in (2) form a basis for the solutions of L(y) = 0 on I.
Theorem.

Let φ1 be a solution of L(y) = 0 on an interval I, and suppose φ1(x) ≠ 0 on I. If v2, ..., vn is any basis on I for the solutions of the linear equation (1) of order n - 1, and if vk = uk' (k = 2, ..., n), then φ1, u2φ1, ..., unφ1 is a basis for the solutions of L(y) = 0 on I.

The case n = 2 of the theorem merits further discussion, since in this case the equation for v is linear of the first order, and therefore can be solved explicitly. Here we have

L(y) = y'' + a1(x)y' + a2(x)y = 0,

and if φ1 is a solution on I we have

L(uφ1) = (uφ1)'' + a1(uφ1)' + a2(uφ1)
= u''φ1 + 2u'φ1' + uφ1'' + a1u'φ1 + a1uφ1' + a2uφ1
= u''φ1 + u'(2φ1' + a1φ1).

Thus, if v = u', and u is such that L(uφ1) = 0,

φ1 v' + (2φ1' + a1φ1) v = 0. (2a)

But this equation is a linear equation of order one, and can always be solved explicitly provided φ1(x) ≠ 0 on I. Indeed v satisfies

φ1² v' + (2φ1φ1' + a1φ1²) v = 0, (3)

which is just the previous equation multiplied by φ1. Thus

(φ1² v)' + a1(φ1² v) = 0,

which implies that

φ1²(x) v(x) = c exp[ -∫(x0 to x) a1(t) dt ],

where x0 is a point in I, and c is a constant. Since any constant multiple of a solution of (3) is again a solution, we see that

v(x) = (1/φ1²(x)) exp[ -∫(x0 to x) a1(t) dt ]

is a solution of (3), and also of (2a). Therefore two independent solutions of

L(y) = y'' + a1(x)y' + a2(x)y = 0 (4)

on I are φ1 and φ2, where

φ2(x) = φ1(x) ∫(x0 to x) (1/φ1²(s)) exp[ -∫(x0 to s) a1(t) dt ] ds. (5)

Theorem.

If φ1 is a solution of (4) on an interval I, and φ1(x) ≠ 0 on I, a second solution φ2 of (4) on I is given by (5). The functions φ1, φ2 form a basis for the solutions of (4) on I.

As a simple example consider the equation

y'' - (2/x²)y = 0, (0 < x < ∞).

It is easy to verify that the function φ1 given by φ1(x) = x² is a solution on 0 < x < ∞, and since this function does not vanish on this interval there is another independent solution φ2 of the form φ2 = uφ1. If v = u' we find that v satisfies

x² v' + 4x v = 0, or x v' + 4v = 0.

A solution of this is given by

v(x) = x^(-4), (0 < x < ∞),

and therefore a choice for u is

u(x) = -1/(3x³), (0 < x < ∞).

This leads to

φ2(x) = -1/(3x), (0 < x < ∞),

but since any constant times a solution is a solution, we may as well choose for a second solution φ2(x) = x^(-1). Thus x², x^(-1) form a basis for the solutions on 0 < x < ∞.
CYP QUESTIONS

1. A differential equation and a function φ1 are given in each of the following. Verify that the function φ1 satisfies the equation, and find a second independent solution.

a. x²y'' - 7xy' + 15y = 0, φ1(x) = x³, (x > 0).

b. x²y'' - xy' + y = 0, φ1(x) = x, (x > 0).

c. y'' - 4xy' + (4x² - 2)y = 0, φ1(x) = e^(x²).

d. xy'' - (x + 1)y' + y = 0, φ1(x) = e^x, (x > 0).

e. (1 - x²)y'' - 2xy' + 2y = 0, φ1(x) = x, (0 < x < 1).

f. y'' - 2xy' + 2y = 0, φ1(x) = x, (x > 0).

2. One solution of x³y''' - 3x²y'' + 6xy' - 6y = 0 for x > 0 is φ1(x) = x. Find a basis for the solutions for x > 0.

3. Consider the equation

L(y) = y''' + a1(x)y'' + a2(x)y' + a3(x)y = 0.

Suppose φ1, φ2 are given linearly independent solutions of L(y) = 0.

a. Let φ = uφ1, and compute the equation of order two satisfied by v = u' in order that L(φ) = 0. Show that (φ2/φ1)' is a solution of this equation of order two.

b. Use the fact that (φ2/φ1)' satisfies the equation of order two to reduce the order of this equation by one.

4. Two solutions of x³y''' - 3x²y'' + 6xy' - 6y = 0, (x > 0), are φ1(x) = x, φ2(x) = x². Use this information to find a third independent solution. (Hint: see question 3.)
5. Consider the equation y'' + a1(x)y' + a2(x)y = 0, where a1, a2 are continuous on some interval I containing x0. Suppose φ1 is a solution such that φ1(x) ≠ 0 for all x in I.

a. Show that there is a second solution φ2 on I such that W(φ1, φ2)(x0) = 1.

b. Compute such a φ2 in terms of φ1, by solving the first order equation

φ1(x)φ2'(x) - φ1'(x)φ2(x) = exp[ -∫(x0 to x) a1(t) dt ]

for φ2.

6.6 THE NON-HOMOGENEOUS EQUATION

Let a1, ..., an, b be continuous functions on an interval I, and consider the equation

L(y) = y^(n) + a1(x)y^(n-1) + ... + an(x)y = b(x).

We have already seen that, in the case where the ak are all constants, this equation may be solved using the variation of constants method. The method does not depend on the fact that the ak are constants, and is therefore valid for the above equation. We outline briefly the results.

If ψp is a particular solution of the above equation, every other solution ψ has the form

ψ = ψp + c1φ1 + ... + cnφn,

where c1, ..., cn are constants, and φ1, ..., φn is a basis for the solutions of L(y) = 0. Every such ψ is a solution of L(y) = b(x). A particular solution ψp can be found which has the form

ψp = u1φ1 + ... + unφn,

where u1, ..., un are functions satisfying

u1'φ1 + ... + un'φn = 0
u1'φ1' + ... + un'φn' = 0
...
u1'φ1^(n-2) + ... + un'φn^(n-2) = 0
u1'φ1^(n-1) + ... + un'φn^(n-1) = b.
1 n

If x0 is any point of I we may take for uk the function given by

uk(x) = ∫(x0 to x) Wk(t)b(t) / W(φ1, ..., φn)(t) dt, (k = 1, ..., n),

and then ψp has the form

ψp(x) = Σ(k=1 to n) φk(x) ∫(x0 to x) Wk(t)b(t) / W(φ1, ..., φn)(t) dt. (1)

Here W(φ1, ..., φn) is the Wronskian of the basis φ1, ..., φn, and Wk is the determinant obtained from W(φ1, ..., φn) by replacing the k-th column (φk, φk', ..., φk^(n-1)) by (0, 0, ..., 0, 1).
k
Theorem.

Let b be continuous on an interval I, and let φ1, ..., φn be a basis for the solutions of L(y) = 0 on I. Every solution ψ of L(y) = b(x) can be written as

ψ = ψp + c1φ1 + ... + cnφn,

where ψp is a particular solution of L(y) = b(x), and c1, ..., cn are constants. Every such ψ is a solution of L(y) = b(x). A particular solution ψp is given by (1).

Illustration.

As an illustration let us find all solutions of the equation

y'' - (2/x²)y = x, (0 < x < ∞).

We have already seen that a basis for the solutions of the homogeneous equation is given by

φ1(x) = x², φ2(x) = x^(-1).

A solution ψp of the non-homogeneous equation has the form

ψp = u1x² + u2x^(-1),

where u1, u2 satisfy

x²u1' + x^(-1)u2' = 0,
2xu1' - x^(-2)u2' = x.

Now W(φ1, φ2)(x) = -3, and we find that

u1'(x) = 1/3, u2'(x) = -x³/3.

For u1, u2 we may take

u1(x) = x/3, u2(x) = -x⁴/12,

and then we see that

ψp(x) = x³/3 - x³/12 = x³/4.

Every solution ψ then has the form

ψ(x) = x³/4 + c1x² + c2x^(-1),

where c1, c2 are constants.
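The result is easy to verify by substitution. A short sketch (our own names, not from the text):

```python
# Check of the illustration: psi_p(x) = x**3/4 solves y'' - (2/x**2) y = x,
# and adding c1*x**2 + c2/x leaves the right-hand side unchanged, since
# those two terms solve the homogeneous equation.
def lhs(x, c1=0.0, c2=0.0):
    psi   = x**3/4 + c1*x**2 + c2/x          # candidate solution
    d2psi = 3*x/2 + 2*c1 + 2*c2/x**3         # its second derivative
    return d2psi - (2/x**2)*psi

print(lhs(2.0))                               # 2.0, i.e. b(x) = x at x = 2
```
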

Since we can always solve the non-homogeneous equation L(y) = b(x) by using algebraic methods and an integration, we now concentrate our attention on methods for solving the homogeneous equation.

CYP QUESTIONS

1. One solution of x²y'' - 2y = 0 on 0 < x < ∞ is φ1(x) = x². Find all solutions of x²y'' - 2y = 2x - 1 on 0 < x < ∞.

2. One solution of x²y'' - xy' + y = 0, (x > 0), is φ1(x) = x. Find the solution ψ of x²y'' - xy' + y = x² satisfying ψ(1) = 1, ψ'(1) = 0.

3. a. Show that there is a basis φ1, φ2 for the solutions of

x²y'' + 4xy' + (2 + x²)y = 0, (x > 0),

of the form

φ1(x) = ψ1(x)/x², φ2(x) = ψ2(x)/x².

(Hint: if φ is a solution, let φ = ψ/x².)

b. Find all solutions of x²y'' + 4xy' + (2 + x²)y = x² for x > 0.

4. a. Consider the equation

L(y) = y'' + a1(x)y' + a2(x)y = b(x),

where a1, a2, b are continuous on some interval I. Suppose φ1 is a solution of L(y) = 0 such that φ1(x) ≠ 0 for all x in I. Show that there is a particular solution ψp of L(y) = b(x) of the form ψp = upφ1, where vp = up' is a particular solution of the first order equation

φ1(x)v' + [2φ1'(x) + a1(x)φ1(x)]v = b(x).

b. Use the idea in (a) to find all solutions of x²y'' - xy' + y = x² for x > 0. (Hint: from question 2 one solution of x²y'' - xy' + y = 0 is given by φ1(x) = x.)

5. Show that the function ψp given by (1) satisfies

ψp(x0) = ψp'(x0) = ... = ψp^(n-1)(x0) = 0.

6. Let g(x, t) be defined by

g(x, t) = Σ(k=1 to n) φk(x)Wk(t) / W(t),

where W = W(φ1, ..., φn) is the Wronskian of n linearly independent solutions of L(y) = 0. Then the ψp of (1) can be written as

ψp(x) = ∫(x0 to x) g(x, t) b(t) dt.

i. Prove that g(x, t) = K(x, t)/W(t), where

K(x, t) =
| φ1(t)        φ2(t)        ...  φn(t)        |
| φ1'(t)       φ2'(t)       ...  φn'(t)       |
| ...                                         |
| φ1^(n-2)(t)  φ2^(n-2)(t)  ...  φn^(n-2)(t)  |
| φ1(x)        φ2(x)        ...  φn(x)        |

ii. Show that

g(t, t) = 0, ∂g/∂x(t, t) = 0, ..., ∂^(n-2)g/∂x^(n-2)(t, t) = 0, ∂^(n-1)g/∂x^(n-1)(t, t) = 1.

7. Consider the equation y'' + y = b(x), where b is a continuous function on 1 ≤ x < ∞ satisfying

∫(1 to ∞) |b(t)| dt < ∞.

a. Show that a particular solution ψp is given by

ψp(x) = ∫(1 to x) sin(x - t) b(t) dt.

b. Show that any solution is bounded on 1 ≤ x < ∞.

6.7 HOMOGENEOUS EQUATIONS WITH ANALYTIC COEFFICIENTS

If g is a function defined on an interval I containing a point x0, we say that g is analytic at x0 if g can be expanded in a power series about x0 which has a positive radius of convergence. Thus g is analytic at x0 if it can be represented in the form

g(x) = Σ(k=0 to ∞) ck(x - x0)^k,

where the ck are constants, and the series converges for |x - x0| < r0, r0 > 0. Recall that one of the important properties of a function g which has the above form is that all of its derivatives exist on |x - x0| < r0, and they may be computed by differentiating the series term by term. Thus, for example,

g'(x) = Σ(k=1 to ∞) k ck (x - x0)^(k-1),

and

g''(x) = Σ(k=2 to ∞) k(k-1) ck (x - x0)^(k-2),

and the differentiated series converge on |x - x0| < r0 also.

If the coefficients a1, ..., an of L are analytic at x0 it turns out that the solutions are also. In fact solutions can be computed by a formal algebraic process.

Illustration.

We illustrate by considering the example L(y) = y'' - xy = 0. Here a1(x) = 0, a2(x) = -x, and hence a1, a2 are analytic for all real x0. We try for a solution the series

φ(x) = c0 + c1x + c2x² + ... = Σ(k=0 to ∞) ck x^k.

Then

φ''(x) = 2c2 + 3·2c3x + 4·3c4x² + ... = 2c2 + Σ(k=1 to ∞) (k+2)(k+1)c_{k+2} x^k.

Also

xφ(x) = c0x + c1x² + c2x³ + ... = Σ(k=1 to ∞) c_{k-1} x^k,

and thus

φ''(x) - xφ(x) = 2c2 + Σ(k=1 to ∞) [(k+2)(k+1)c_{k+2} - c_{k-1}] x^k.

In order for φ to be a solution of L(y) = 0 we must have

φ''(x) - xφ(x) = 0,

or

2c2 + Σ(k=1 to ∞) [(k+2)(k+1)c_{k+2} - c_{k-1}] x^k = 0,

and this is true only if all the coefficients of the powers of x are zero. Thus

2c2 = 0, (k+2)(k+1)c_{k+2} - c_{k-1} = 0, (k = 1, 2, ...).

This gives an infinite set of equations, which can be solved for the ck. Thus, for k = 1, we have

3·2c3 = c0, or c3 = c0/(3·2).

Putting k = 2 we find

c4 = c1/(4·3).

Continuing in this way we see that

c5 = c2/(5·4) = 0, c6 = c3/(6·5) = c0/(6·5·3·2), c7 = c4/(7·6) = c1/(7·6·4·3).

It can be shown by induction that

c_{3m} = c0 / [2·3·5·6···(3m-1)(3m)], (m = 1, 2, ...),
c_{3m+1} = c1 / [3·4·6·7···(3m)(3m+1)], (m = 1, 2, ...),
c_{3m+2} = 0, (m = 0, 1, 2, ...).

Thus all the constants are determined in terms of c0 and c1. Collecting together terms with c0 and c1 as a factor we have

φ(x) = c0[1 + x³/(3·2) + x⁶/(6·5·3·2) + ...] + c1[x + x⁴/(4·3) + x⁷/(7·6·4·3) + ...].

Let φ1, φ2 represent the two series in the brackets. Thus

φ1(x) = 1 + Σ(m=1 to ∞) x^(3m) / [2·3·5·6···(3m-1)(3m)],

φ2(x) = x + Σ(m=1 to ∞) x^(3m+1) / [3·4·6·7···(3m)(3m+1)].

We have shown, in a formal way, that φ = c0φ1 + c1φ2 satisfies y'' - xy = 0 for any two constants c0, c1. In particular the choice c0 = 1, c1 = 0 shows that φ1 satisfies this equation, and the choice c0 = 0, c1 = 1 implies φ2 also satisfies the equation.

The only question that remains concerns the convergence of the series defining φ1(x) and φ2(x). It is readily checked by the ratio test that both series converge for all finite x.
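The recurrence above also gives a practical way to compute the solutions. The sketch below (our own helper names, a hedged illustration) builds the coefficients from 2c2 = 0 and (k+2)(k+1)c_{k+2} = c_{k-1}, then checks that the truncated series very nearly satisfies y'' - xy = 0 at a sample point:

```python
# Generate coefficients of the series solution of y'' - x*y = 0 from the
# recurrence derived in the text, and test the truncated series.
def series_coeffs(c0, c1, n=40):
    c = [c0, c1, 0.0]                       # 2*c2 = 0
    while len(c) < n:
        k = len(c) - 2                      # next index to fill is k + 2
        c.append(c[k - 1] / ((k + 2) * (k + 1)))
    return c

def deriv(c):                               # coefficients of the derivative
    return [k * c[k] for k in range(1, len(c))]

def horner(c, x):                           # evaluate sum of c_k * x**k
    s = 0.0
    for ck in reversed(c):
        s = s * x + ck
    return s

c = series_coeffs(1.0, 0.0)                 # phi_1: c0 = 1, c1 = 0
x = 0.8
print(abs(horner(deriv(deriv(c)), x) - x * horner(c, x)))  # ~0 (tiny tail)
```
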


Example.

Let us consider the series for φ1(x). Writing it as

φ1(x) = 1 + Σ(m=1 to ∞) dm(x),

we see that

d_{m+1}(x)/dm(x) = x^(3m+3) · [2·3·5·6···(3m-1)(3m)] / ( [2·3·5·6···(3m-1)(3m)(3m+2)(3m+3)] · x^(3m) ),

and therefore

|d_{m+1}(x)/dm(x)| = |x|³ / [(3m+2)(3m+3)],

which tends to zero as m → ∞, provided only that |x| < ∞. Summarizing, we have found in a purely formal way two series, which are convergent for all finite x, and

thus represent two functions φ1, φ2, and from the way we obtained φ1, φ2 it is apparent that they are solutions of the equation y'' - xy = 0 on -∞ < x < ∞. They are linearly independent solutions, for it is clear from the series (1) defining φ1 and φ2 that

φ1(0) = 1, φ2(0) = 0,
φ1'(0) = 0, φ2'(0) = 1,

and therefore W(φ1, φ2)(0) = 1 ≠ 0. The method illustrated by this example works in general when the coefficients are analytic, and always yields a convergent power series solution for any initial value problem. We state this result formally.

Theorem.

(Existence Theorem for Analytic Coefficients.) Let x0 be a real number, and suppose that the coefficients a1, ..., an in

L(y) = y^(n) + a1(x)y^(n-1) + ... + an(x)y

have convergent power series expansions in powers of x - x0 on an interval |x - x0| < r0, r0 > 0. If α1, ..., αn are n constants, there exists a solution φ of the problem

L(y) = 0, y(x0) = α1, ..., y^(n-1)(x0) = αn,

with a power series expansion

φ(x) = Σ(k=0 to ∞) ck(x - x0)^k (2)

convergent for |x - x0| < r0. We have k! ck = α_{k+1}, (k = 0, 1, ..., n-1), and ck for k ≥ n may be computed in terms of c0, c1, ..., c_{n-1} by substituting the series (2) into L(y) = 0.

It follows from this theorem and the uniqueness theorem that any solution φ of L(y) = 0 on |x - x0| < r0 has a convergent power series expansion there of the form (2).
CYP QUESTIONS

1. Find two linearly independent power series solutions (in powers of x) of the following equations:

a. y'' - xy' + y = 0

b. y'' + 3x²y' - xy = 0

c. y'' - x²y = 0

d. y'' + x²y' + x²y = 0

e. y'' + y = 0

For what values of x do the series converge?

2. Find the solution φ of y'' + (x - 1)²y' - (x - 1)y = 0 in the form

φ(x) = Σ(k=0 to ∞) ck(x - 1)^k,

which satisfies φ(1) = 1, φ'(1) = 0. (Hint: let x - 1 = t.)

3. Find the solution φ of (1 + x²)y'' + y = 0 of the form

φ(x) = Σ(k=0 to ∞) ck x^k,

which satisfies φ(0) = 0, φ'(0) = 1. What is the largest r > 0 such that the series for φ converges for |x| < r?

4. The equation y'' + e^x y = 0 has a solution of the form

φ(x) = Σ(k=0 to ∞) ck x^k

which satisfies φ(0) = 1, φ'(0) = 0. Compute c0, c1, c2, c3, c4, c5. (Hint: ck = φ^(k)(0)/k! and φ''(x) = -e^x φ(x).)

5. Compute the solution φ of y''' - xy = 0 which satisfies φ(0) = 1, φ'(0) = 0, φ''(0) = 0.
6. The equation

(1 - x²)y'' - 2xy' + α(α + 1)y = 0,

where α is a constant, is called the Legendre equation.

a. Show that if it is written in the form

y'' + a1(x)y' + a2(x)y = 0, (*)

then a1, a2 have convergent power series expansions (in powers of x) on |x| < 1.

b. Compute two linearly independent solutions for |x| < 1. (Hint: leave the equation in the form (*).)

c. Show that if α is a non-negative integer n there is a polynomial solution of degree n.
7. The equation

(1 - x²)y'' - xy' + α²y = 0,

where α is a constant, is called the Chebyshev equation.

a. Compute two linearly independent series solutions for |x| < 1.

b. Show that for every non-negative integer α = n there is a polynomial solution of degree n. When appropriately normalized these are called the Chebyshev polynomials.

8. The equation

   y'' − 2xy' + 2αy = 0,

   where α is a constant, is called the Hermite equation.

   a. Find two linearly independent solutions on −∞ < x < ∞.

   b. Show that there is a polynomial solution of degree n, in case α = n is a non-negative integer.

   c. Show that the polynomial H_n defined by

      H_n(x) = (−1)^n e^{x²} (d^n/dx^n) e^{−x²}

      is a solution of the Hermite equation in case α = n is a non-negative integer. This solution H_n is called the n-th Hermite polynomial. (Hint: If u(x) = e^{−x²}, show that u'(x) + 2xu(x) = 0. Differentiate this equation n times to obtain

      H_{n+1}(x) − 2xH_n(x) + 2nH_{n−1}(x) = 0   (*)

      for n ≥ 1. Differentiate H_n to obtain

      H_n'(x) = 2xH_n(x) − H_{n+1}(x)   (**)

      for n ≥ 0. Use (*) and (**) to show that H_n is a solution of the Hermite equation.)

9. Compute H_0, H_1, H_2, H_3.

6.8. THE LEGENDRE EQUATION

Some of the important differential equations met in physical problems are second order linear equations with analytic coefficients. One of these is the Legendre equation

L(y) = (1 − x²)y'' − 2xy' + α(α + 1)y = 0,

where α is a constant. If we write this equation as

y'' − [2x/(1 − x²)]y' + [α(α + 1)/(1 − x²)]y = 0,

we see that the functions a_1, a_2 given by

a_1(x) = −2x/(1 − x²),  a_2(x) = α(α + 1)/(1 − x²)

are analytic at x = 0. Indeed,

1/(1 − x²) = 1 + x² + x⁴ + ... = Σ_{k=0}^∞ x^{2k},

and this series converges for |x| < 1. Thus a_1 and a_2 have the series expansions

a_1(x) = Σ_{k=0}^∞ (−2)x^{2k+1},  a_2(x) = Σ_{k=0}^∞ α(α + 1)x^{2k},

which converge for |x| < 1. From the previous theorem it follows that the solutions of L(y) = 0 on |x| < 1 have convergent power series expansions there. We proceed to find a basis for these solutions.

Let φ be any solution of the Legendre equation on |x| < 1, and suppose

φ(x) = c_0 + c_1x + c_2x² + ... = Σ_{k=0}^∞ c_k x^k.

We have

φ'(x) = c_1 + 2c_2x + 3c_3x² + ... = Σ_{k=1}^∞ k c_k x^{k−1},

−2xφ'(x) = Σ_{k=0}^∞ (−2k c_k) x^k,

φ''(x) = 2c_2 + 3·2c_3x + ... = Σ_{k=2}^∞ k(k − 1)c_k x^{k−2},

−x²φ''(x) = Σ_{k=0}^∞ [−k(k − 1)c_k] x^k.

Note that φ''(x) may also be written as

φ''(x) = Σ_{k=0}^∞ (k + 2)(k + 1)c_{k+2} x^k.

We obtain

L(φ)(x) = (1 − x²)φ''(x) − 2xφ'(x) + α(α + 1)φ(x)

= Σ_{k=0}^∞ [(k + 2)(k + 1)c_{k+2} − k(k − 1)c_k − 2k c_k + α(α + 1)c_k] x^k

= Σ_{k=0}^∞ [(k + 2)(k + 1)c_{k+2} + (α + k + 1)(α − k)c_k] x^k.

For φ to satisfy L(φ) = 0 we must have all the coefficients of the powers of x equal to zero. Hence

(k + 2)(k + 1)c_{k+2} + (α + k + 1)(α − k)c_k = 0,  (k = 0, 1, 2, ...).

This is the recursion relation which gives c_{k+2} in terms of c_k. For k = 0 we obtain

c_2 = −[α(α + 1)/2!] c_0,

and for k = 1 we get

c_3 = −[(α + 2)(α − 1)/3!] c_1.

Similarly, letting k = 2, 3, we obtain

c_4 = −[(α + 3)(α − 2)/(4·3)] c_2 = [(α + 3)(α + 1)α(α − 2)/4!] c_0,

c_5 = −[(α + 4)(α − 3)/(5·4)] c_3 = [(α + 4)(α + 2)(α − 1)(α − 3)/5!] c_1.

The pattern now becomes clear, and it follows by induction that for m = 1, 2, ...,

c_{2m} = (−1)^m [(α + 2m − 1)(α + 2m − 3)···(α + 1)α(α − 2)···(α − 2m + 2)/(2m)!] c_0,

c_{2m+1} = (−1)^m [(α + 2m)(α + 2m − 2)···(α + 2)(α − 1)(α − 3)···(α − 2m + 1)/(2m + 1)!] c_1.

All coefficients are determined in terms of c_0 and c_1, and we must have

φ(x) = c_0 φ_1(x) + c_1 φ_2(x),

where

φ_1(x) = 1 − [α(α + 1)/2!]x² + [(α + 3)(α + 1)α(α − 2)/4!]x⁴ − ...,

or

φ_1(x) = 1 + Σ_{m=1}^∞ (−1)^m [(α + 2m − 1)(α + 2m − 3)···(α + 1)α(α − 2)···(α − 2m + 2)/(2m)!] x^{2m},

and

φ_2(x) = x − [(α + 2)(α − 1)/3!]x³ + [(α + 4)(α + 2)(α − 1)(α − 3)/5!]x⁵ − ...,

or

φ_2(x) = x + Σ_{m=1}^∞ (−1)^m [(α + 2m)(α + 2m − 2)···(α + 2)(α − 1)(α − 3)···(α − 2m + 1)/(2m + 1)!] x^{2m+1}.

Both φ_1 and φ_2 are solutions of the Legendre equation, those corresponding to the choices c_0 = 1, c_1 = 0 and c_0 = 0, c_1 = 1, respectively. They form a basis for the solutions, since

φ_1(0) = 1, φ_2(0) = 0,
φ_1'(0) = 0, φ_2'(0) = 1.
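The recursion just derived lends itself to direct computation. A minimal Python sketch (names ours, not the text's) generates the coefficients from c_{k+2} = −(α + k + 1)(α − k)c_k / [(k + 2)(k + 1)]; with c_0 = 1, c_1 = 0 it produces φ_1, and with c_0 = 0, c_1 = 1 it produces φ_2.

```python
# Sketch of the Legendre recursion
#   c_{k+2} = -(alpha+k+1)(alpha-k) / ((k+2)(k+1)) * c_k.

def legendre_series_coeffs(alpha, c0, c1, n):
    c = [0.0] * (n + 1)
    c[0], c[1] = c0, c1
    for k in range(n - 1):
        c[k + 2] = -(alpha + k + 1) * (alpha - k) / ((k + 2) * (k + 1)) * c[k]
    return c

# For alpha = 2 the even solution terminates: phi_1(x) = 1 - 3x^2,
# illustrating the polynomial case discussed next.
cs = legendre_series_coeffs(2, 1.0, 0.0, 6)
print(cs)
```

The factor (α − k) makes every coefficient beyond c_{α+2} vanish when α is a non-negative integer of the right parity, which is exactly the termination phenomenon described below.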

We notice that if a is a non-negative

even integer

n = 2m, (m = 0, 1,2,...),

Double click this page to view clearly


then 1 has only a finite number of

non-zero terms. Indeed, in this case


1 is a polynomial of degree n

containing only even powers of x. For

example,

1(x) = 1, (=0),
2
1(x) = 1 3x , (=2),
2 3.5 4
1(x) = 1 10x + 3 x , (=4).

The solution 2 is not a polynomial

in this case since none of the

coefficients in the series vanish.

A similar situation occurs when a is

a positive odd integer n. Then 2 is

a polynomial of degree n having only

odd powers of x. and 1 is not a

polynomial. For example,


2(x) = x, (=1),
5 3
2(x) = x- 3 x , (=3),
14 3 21 5
2(x) = x- 3 x + 5 x (=5).

We consider in more detail these polynomial solutions when α = n, a non-negative integer. The polynomial solution P_n of degree n of

(1 − x²)y'' − 2xy' + n(n + 1)y = 0,   (1)

satisfying P_n(1) = 1, is called the n-th Legendre polynomial. In order to justify this definition we must show that there is just one such solution for each non-negative integer n. This will be established by way of a slight detour, which is of interest in itself.

Let φ be the polynomial of degree n defined by

φ(x) = (d^n/dx^n)(x² − 1)^n.

This φ satisfies the Legendre equation. Indeed, let u(x) = (x² − 1)^n. Then we obtain by differentiating

(x² − 1)u'(x) − 2nxu(x) = 0.

Differentiating this expression n + 1 times yields

(x² − 1)u^(n+2)(x) + 2x(n + 1)u^(n+1)(x) + (n + 1)n u^(n)(x) − 2nx u^(n+1)(x) − 2n(n + 1)u^(n)(x) = 0.

Since φ = u^(n) we obtain

(1 − x²)φ''(x) − 2xφ'(x) + n(n + 1)φ(x) = 0,

and we have shown that φ satisfies the Legendre equation.

This polynomial also satisfies

φ(1) = 2^n n!.

This can be seen by noting that

φ(x) = [(x² − 1)^n]^(n) = [(x − 1)^n (x + 1)^n]^(n)
     = n!(x + 1)^n + terms with (x − 1) as a factor.

Hence φ(1) = n! 2^n, as stated.

It is now clear that the function P_n given by

P_n(x) = [1/(2^n n!)] (d^n/dx^n)(x² − 1)^n

is the n-th Legendre polynomial, provided we can show that there is no other polynomial solution of (1) which is 1 at x = 1.

Suppose ψ is any polynomial solution of (1). Then for some constant c we must have ψ = cφ_1 or ψ = cφ_2, according as n is even or odd. Suppose n is even, for example. Then, for |x| < 1,

ψ = cφ_1 + dφ_2

for some constants c, d, since φ_1, φ_2 form a basis for the solutions on |x| < 1. But then ψ − cφ_1 is a polynomial, whereas dφ_2 is not a polynomial in case d ≠ 0. Hence d = 0. In particular the function P_n satisfies P_n = cφ_1 for some constant c, if n is even. Since

1 = P_n(1) = cφ_1(1),

we see that φ_1(1) ≠ 0. A similar result holds if n is odd. Thus no non-trivial polynomial solution of the Legendre equation can be zero at x = 1. From this it follows that there is only one polynomial P_n satisfying (1) and P_n(1) = 1, for if P̃_n were another, then P_n − P̃_n would be a polynomial solution, and

P_n(1) − P̃_n(1) = 0.

The first few Legendre polynomials are

P_0(x) = 1,  P_1(x) = x,  P_2(x) = (3/2)x² − 1/2,
P_3(x) = (5/2)x³ − (3/2)x,  P_4(x) = (35/8)x⁴ − (15/4)x² + 3/8.
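The formula P_n(x) = [1/(2^n n!)] (d^n/dx^n)(x² − 1)^n can be checked mechanically. A small pure-Python sketch (names ours) expands (x² − 1)^n as a coefficient list, differentiates n times, and divides by 2^n n!; exact fractions are used so the listed polynomials are reproduced exactly.

```python
from math import factorial
from fractions import Fraction

def poly_mul(p, q):
    # multiply two polynomials stored as coefficient lists [a0, a1, ...]
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_diff(p):
    # differentiate a coefficient list once
    return [i * a for i, a in enumerate(p)][1:] or [0]

def legendre(n):
    u = [1]
    for _ in range(n):
        u = poly_mul(u, [-1, 0, 1])     # build (x^2 - 1)^n
    for _ in range(n):
        u = poly_diff(u)                # apply d^n/dx^n
    return [Fraction(a, 2**n * factorial(n)) for a in u]

print(legendre(2))   # coefficients of -1/2 + (3/2)x^2
print(legendre(3))   # coefficients of -(3/2)x + (5/2)x^3
```

Summing the coefficient list evaluates P_n at x = 1, which gives 1 for every n, in agreement with the normalization P_n(1) = 1 proved above.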
CYP QUESTIONS

1. Show that P_n(−x) = (−1)^n P_n(x), and hence that P_n(−1) = (−1)^n.

2. Show that the coefficient of x^n in P_n(x) is

   (2n)! / [2^n (n!)²].

3. Show that there are constants α_0, α_1, ..., α_n such that

   x^n = α_0 P_0(x) + α_1 P_1(x) + ... + α_n P_n(x).

   (Hint: For n = 0, 1 = P_0(x). For n = 1, x = P_1(x). Use induction.)

4. Show that any polynomial of degree n is a linear combination of P_0, P_1, ..., P_n. (Hint: Question 3.)

5. Show that

   ∫_{−1}^{1} P_n(x)P_m(x) dx = 0,  (n ≠ m).

   (Hint: Note that

   [(1 − x²)P_n']' = −n(n + 1)P_n,
   [(1 − x²)P_m']' = −m(m + 1)P_m.

   Hence

   P_m[(1 − x²)P_n']' − P_n[(1 − x²)P_m']' = {(1 − x²)[P_m P_n' − P_m' P_n]}' = [m(m + 1) − n(n + 1)]P_m P_n.

   Integrate from −1 to 1.)

6. Show that

   ∫_{−1}^{1} P_n²(x) dx = 2/(2n + 1).

   (Hint: Let u(x) = (x² − 1)^n. Then

   P_n(x) = [1/(2^n n!)] u^(n)(x).

   Show that u^(k)(1) = u^(k)(−1) = 0 if 0 ≤ k < n. Then, integrating by parts,

   ∫_{−1}^{1} u^(n)(x)u^(n)(x) dx = [u^(n)(x)u^(n−1)(x)]_{−1}^{1} − ∫_{−1}^{1} u^(n+1)(x)u^(n−1)(x) dx

   = −∫_{−1}^{1} u^(n+1)(x)u^(n−1)(x) dx

   = ... = (−1)^n ∫_{−1}^{1} u^(2n)(x)u(x) dx

   = (2n)! ∫_{−1}^{1} (1 − x²)^n dx.

   To compute the latter integral let x = sin θ, and obtain

   ∫_{−1}^{1} (1 − x²)^n dx = 2∫_0^{π/2} cos^{2n+1}θ dθ = 2^{2n+1}(n!)²/(2n + 1)!.)

7. Let P be any polynomial of degree n, and let

   P = c_0 P_0 + c_1 P_1 + ... + c_n P_n,   (*)

   where c_0, c_1, ..., c_n are constants. Show that

   c_k = [(2k + 1)/2] ∫_{−1}^{1} P(x)P_k(x) dx,  (k = 0, 1, ..., n).

   (Hint: Multiply (*) by P_k and integrate from −1 to 1. Use the results of questions 5 and 6.)

8. Using the fact that P_0(x) = 1 is a solution of

   (1 − x²)y'' − 2xy' = 0,

   find a second independent solution. Verify that the function Q_1 defined by

   Q_1(x) = (x/2) log[(1 + x)/(1 − x)] − 1,  (|x| < 1),

   is a solution of the Legendre equation when α = 1.
Unit - 7

LINEAR EQUATIONS WITH REGULAR SINGULAR POINTS

7.1 INTRODUCTION

In this chapter we continue our investigation of linear equations with variable coefficients

a_0(x)y^(n) + a_1(x)y^(n−1) + ... + a_n(x)y = 0.   (1)

We shall assume that the coefficients a_0, a_1, ..., a_n are analytic at some point x_0, and we shall be interested in the important case when a_0(x_0) = 0. A point x_0 such that a_0(x_0) = 0 is called a singular point of the equation. In this case we cannot apply directly the existence result concerning initial value problems at x_0. Indeed, it is usually rather difficult to determine the nature of the solutions in the vicinity of such singular points. However there is a large class of equations for which the singularity is rather weak, in the sense that slight modifications of the methods used for solving equations with analytic coefficients serve to yield solutions near the singularities.

We say that x_0 is a regular singular point for (1) if the equation can be written in the form

(x − x_0)^n y^(n) + b_1(x)(x − x_0)^{n−1} y^(n−1) + ... + b_n(x)y = 0   (2)

near x_0, where the functions b_1, ..., b_n are analytic at x_0. If the functions b_1, ..., b_n can be written in the form

b_k(x) = (x − x_0)^k β_k(x),  (k = 1, ..., n),

where β_1, ..., β_n are analytic at x_0, we see that (2) becomes

y^(n) + β_1(x)y^(n−1) + ... + β_n(x)y = 0

upon dividing out (x − x_0)^n.

An equation of the form

c_0(x)(x − x_0)^n y^(n) + c_1(x)(x − x_0)^{n−1} y^(n−1) + ... + c_n(x)y = 0

has a regular singular point at x_0 if c_0, c_1, ..., c_n are analytic at x_0, and c_0(x_0) ≠ 0. This is because we may divide by c_0(x), for x near x_0, to obtain an equation of the form (2) with b_k(x) = c_k(x)/c_0(x), and it can be shown that these b_k are analytic at x_0.

We first consider the simplest case of an equation having a regular singular point. This is the Euler equation, which is the case of (2) with b_1, ..., b_n all constants. Next we investigate the general equation of the second order with a regular singular point, and indicate how solutions may be obtained near the singular point. For x > x_0 such solutions turn out to be of the form

φ(x) = (x − x_0)^r σ(x) + (x − x_0)^s ρ(x) log(x − x_0),

where r, s are constants, and σ, ρ are analytic at x_0. As an example the solutions of the important Bessel equation are computed in detail. Regular singular points at infinity are briefly discussed.

The method used is to show that the coefficients of the series for the analytic functions σ, ρ can be computed in a recursive fashion, and then to indicate that the series obtained actually converge near the singular point. Fortunately many of the equations with singular points which arise in physical problems have regular singular points.

To indicate how lucky we are in this situation consider the equation

x²y'' − y' − (3/4)y = 0.

The origin x_0 = 0 is a singular point, but not a regular singular point, since the coefficient −1 of y' is not of the form xb_1(x), where b_1 is analytic at 0. Nevertheless we may formally solve this equation by a series

Σ_{k=0}^∞ c_k x^k,

where the coefficients c_k satisfy the recursion formula

(k + 1)c_{k+1} = (k² − k − 3/4)c_k,  (k = 0, 1, 2, ...).

If c_0 ≠ 0, the ratio test shows that

|c_{k+1}x^{k+1}| / |c_k x^k| = [(k² − k − 3/4)/(k + 1)] |x| → ∞

as k → ∞, provided |x| ≠ 0. Thus the series will only converge for x = 0, and therefore does not represent a function near x = 0, much less a solution of the above differential equation.
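The blow-up of the coefficients can be seen numerically. A small sketch (names ours) of the recursion above shows the consecutive ratios |c_{k+1}/c_k| growing roughly like k, so the formal series has radius of convergence zero.

```python
# Sketch: consecutive coefficient ratios for the recursion
#   (k+1) c_{k+1} = (k^2 - k - 3/4) c_k
# from the non-regular singular equation x^2 y'' - y' - (3/4) y = 0.

def ratios(c0, kmax):
    c, out = c0, []
    for k in range(kmax):
        c_next = (k * k - k - 0.75) * c / (k + 1)
        out.append(abs(c_next / c))
        c = c_next
    return out

r = ratios(1.0, 12)
print(r)   # the ratios grow without bound, unlike a convergent series
```

By contrast, for the Frobenius series constructed in sec. 7.3 below the analogous ratios tend to zero.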

7.2. THE EULER EQUATION

The simplest example of a second order equation having a regular singular point at the origin is the Euler equation

L(y) = x²y'' + axy' + by = 0,   (1)

where a, b are constants. We first consider this equation for x > 0, and observe that the coefficient of y^(k) in L(y) is a constant times x^k. If r is any constant, x^r has the property that its k-th derivative times x^k is a constant times x^r. For example,

x(x^r)' = rx^r,  x²(x^r)'' = r(r − 1)x^r.

This suggests trying for a solution of L(y) = 0 a power of x. We find that

L(x^r) = [r(r − 1) + ar + b]x^r.

If q is the polynomial defined by

q(r) = r(r − 1) + ar + b,

we may write

L(x^r) = q(r)x^r,   (2)

and it is clear that if r_1 is a root of q then

L(x^{r_1}) = 0.

Thus the function φ_1 given by φ_1(x) = x^{r_1} is a solution of (1) for x > 0. If r_2 is the other root of q, and r_2 ≠ r_1, we obtain another solution φ_2 given by φ_2(x) = x^{r_2}. In case the roots r_1, r_2 of q are equal we know that

q(r_1) = 0,  q'(r_1) = 0,

and this suggests differentiating (2) with respect to r. Indeed,

(∂/∂r) L(x^r) = L((∂/∂r) x^r) = L(x^r log x) = [q'(r) + q(r) log x] x^r,

and if r = r_1 we see that

L(x^{r_1} log x) = 0.

Therefore φ_2(x) = x^{r_1} log x is a second solution associated with the root r_1 in this case.


In either case the solutions 1 . 2

are linearly independent for x > 0.

The proof is easy. If r 1 r 2 and c 1 ,

c 2 are constants such that

r1 r2
c1x + c2x = 0, (x>0),
then

r2-r1
c1 + c2x = 0, (x>0).

and differentiating we obtain

c2
x = 0, (x>0),

or c 2 = 0. We see that c 1 = 0.

We have glossed over one point in

the above calculations, and that is

the definition of xr in case r is

complex.
This possibility must be taken into

account since the roots of q could be


r
complex. We define x for r complex
r r
by x = e log x, (x > 0).

Then we have

' '
( )
' r log x 1 r r-1
x = r(log x) e = rx x = rx ,

and

r ( )
x
r
=

r ( r
) r
e logx = (logx)e log x =x logx,
r

which are the formulas we used in

the calculations.

Solutions of (1) can be found for x < 0 also. In this case consider (−x)^r, where r is a constant. Then we have for x < 0

[(−x)^r]' = −r(−x)^{r−1},  [(−x)^r]'' = r(r − 1)(−x)^{r−2},

and hence

x[(−x)^r]' = r(−x)^r,  x²[(−x)^r]'' = r(r − 1)(−x)^r.

Thus

L[(−x)^r] = q(r)(−x)^r,  (x < 0).

Also,

(∂/∂r)[(−x)^r] = (−x)^r log(−x),  (x < 0),

as can be easily checked. Therefore we see that if the roots r_1, r_2 of q are distinct, two independent solutions φ_1, φ_2 of (1) for x < 0 are given by

φ_1(x) = (−x)^{r_1},  φ_2(x) = (−x)^{r_2},  (x < 0),

and if r_1 = r_2, two solutions are given by

φ_1(x) = (−x)^{r_1},  φ_2(x) = (−x)^{r_1} log(−x),  (x < 0).

These are just the formulas for the solutions obtained for x > 0, with x replaced by −x everywhere. Since |x| = x for x > 0, and |x| = −x for x < 0, we can write the solutions for any x ≠ 0 in the following way:

φ_1(x) = |x|^{r_1},  φ_2(x) = |x|^{r_2},  (x ≠ 0),

in case r_1 ≠ r_2, and

φ_1(x) = |x|^{r_1},  φ_2(x) = |x|^{r_1} log|x|,  (x ≠ 0),

in case r_1 = r_2.

Theorem.

Consider the second order Euler equation

x²y'' + axy' + by = 0,  (a, b constants),

and the polynomial q given by

q(r) = r(r − 1) + ar + b.

A basis for the solutions of the Euler equation on any interval not containing x = 0 is given by

φ_1(x) = |x|^{r_1},  φ_2(x) = |x|^{r_2},

in case r_1, r_2 are distinct roots of q, and by

φ_1(x) = |x|^{r_1},  φ_2(x) = |x|^{r_1} log|x|,

if r_1 is a root of q of multiplicity two.

As an example let us consider the equation

x²y'' + xy' + y = 0

for x ≠ 0. The polynomial q is given by

q(r) = r(r − 1) + r + 1 = r² + 1,

and its roots are r_1 = i, r_2 = −i. Thus a basis for the solutions is furnished by

φ_1(x) = |x|^i,  φ_2(x) = |x|^{−i},  (x ≠ 0),

where we have

|x|^i = e^{i log|x|}.

Note that in this case another basis ψ_1, ψ_2 is given by

ψ_1(x) = cos(log|x|),  ψ_2(x) = sin(log|x|),  (x ≠ 0).
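As a quick illustration of the theorem, the indicial roots can be computed directly from the quadratic formula, since q(r) = r(r − 1) + ar + b = r² + (a − 1)r + b. A small sketch (names ours):

```python
import cmath

def euler_indicial_roots(a, b):
    """Roots of q(r) = r(r - 1) + a r + b = r^2 + (a - 1) r + b."""
    disc = cmath.sqrt((a - 1) ** 2 - 4 * b)
    return (-(a - 1) + disc) / 2, (-(a - 1) - disc) / 2

# The example above: x^2 y'' + x y' + y = 0 has q(r) = r^2 + 1,
# so the roots are i and -i and a basis is |x|^i, |x|^{-i}.
r1, r2 = euler_indicial_roots(1, 1)
print(r1, r2)
```

When the two returned roots coincide, the theorem supplies the second solution |x|^{r_1} log|x| instead of a second power.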

The extension of the result of the theorem to the Euler equation of the n-th order

L(y) = x^n y^(n) + a_1 x^{n−1} y^(n−1) + ... + a_n y = 0,

where a_1, ..., a_n are constants, is straightforward. We have for any constant r

x^k [|x|^r]^(k) = r(r − 1)···(r − k + 1)|x|^r,  (x ≠ 0),

and hence

L(|x|^r) = q(r)|x|^r,

where q is now the polynomial of degree n

q(r) = r(r − 1)···(r − n + 1) + a_1 r(r − 1)···(r − n + 2) + ... + a_n.

This polynomial is called the indicial polynomial for the Euler equation. Differentiating k times with respect to r we obtain

(∂^k/∂r^k) L(|x|^r) = L((∂^k/∂r^k)|x|^r) = L(|x|^r log^k|x|)

= [q^(k)(r) + kq^(k−1)(r) log|x| + (k(k − 1)/2!)q^(k−2)(r) log²|x| + ... + q(r) log^k|x|] |x|^r.

If r_1 is a root of q of multiplicity m_1, then

q(r_1) = 0, q'(r_1) = 0, ..., q^(m_1 − 1)(r_1) = 0,

and we see that

|x|^{r_1}, |x|^{r_1} log|x|, ..., |x|^{r_1} log^{m_1 − 1}|x|

are solutions of L(y) = 0. Repeating this process for each root of q we obtain the following result.

Theorem.

Let r_1, ..., r_s be the distinct roots of the indicial polynomial q for the Euler equation, and suppose r_i has multiplicity m_i. Then the n functions

|x|^{r_1}, |x|^{r_1} log|x|, ..., |x|^{r_1} log^{m_1 − 1}|x|;
|x|^{r_2}, |x|^{r_2} log|x|, ..., |x|^{r_2} log^{m_2 − 1}|x|;
... ;
|x|^{r_s}, |x|^{r_s} log|x|, ..., |x|^{r_s} log^{m_s − 1}|x|

form a basis for the solutions of the n-th order Euler equation on any interval not containing x = 0.
CYP QUESTIONS

1. Find all solutions of the following equations for x > 0:

   a. x²y'' + 2xy' − 6y = 0
   b. 2x²y'' + xy' − y = 0
   c. x²y'' + xy' − 4y = x
   d. x²y'' − 5xy' + 9y = x³
   e. x³y''' + 2x²y'' − xy' + y = 0

2. Find all solutions of the following equations for |x| > 0:

   a. x²y'' + xy' + 4y = 1
   b. x²y'' − 3xy' + 5y = 0
   c. x²y'' − (2 + i)xy' + 3iy = 0
   d. x²y'' + xy' − 4y = x
3. Let φ be a solution for x > 0 of the Euler equation

   x²y'' + axy' + by = 0,

   where a, b are constants. Let Ψ(t) = φ(e^t).

   a. Show that Ψ satisfies the equation

      Ψ''(t) + (a − 1)Ψ'(t) + bΨ(t) = 0.

   b. Compute the characteristic polynomial of the equation satisfied by Ψ, and compare it with the indicial polynomial of the given Euler equation.

   c. Show that φ(x) = Ψ(log x).

4. Suppose the constants a, b in the Euler equation

   x²y'' + axy' + by = 0

   are real. Let r_1, r_2 denote the roots of the indicial polynomial q.

   a. If r_1 = σ + iν with ν ≠ 0, show that r_2 = r̄_1 = σ − iν.

   b. If r_1 = σ + iν with ν ≠ 0, show that the functions φ_1, φ_2 given by

      φ_1(x) = |x|^σ cos(ν log|x|),
      φ_2(x) = |x|^σ sin(ν log|x|)

      form a basis for the solutions of the Euler equation on any interval not containing x = 0.

5. The logarithm of a negative number can be defined in the following way. If x < 0, then −x > 0, and we have

   x = (−x)(−1) = (−x)e^{iπ}.

   We define

   log x = log[(−x)e^{iπ}] = log(−x) + log e^{iπ} = log(−x) + iπ,  (x < 0).

   Thus log x, for x < 0, is a complex number. Using this definition, let x^r = e^{r log x}, for x > 0 and for x < 0.

   a. Show that

      x^r = e^{iπr}|x|^r,  (x < 0).

   b. Let r_1, r_2 be the roots of the indicial polynomial for the Euler equation x²y'' + axy' + by = 0. Show that two independent solutions for |x| > 0 are given by x^{r_1}, x^{r_2} if r_1 ≠ r_2, and by x^{r_1}, x^{r_1} log x if r_1 = r_2.

6. Let

   L(y) = x²y'' + axy' + by,

   where a, b are constants, and let q be the indicial polynomial q(r) = r(r − 1) + ar + b.

   a. Show that the equation L(y) = x^k has a solution of the form φ(x) = cx^k if q(k) ≠ 0. Compute c. (Hint: L(cx^k) = cL(x^k) = cq(k)x^k.)

   b. Suppose k is a root of q of multiplicity one. Show that there is a solution of L(y) = x^k of the form φ(x) = cx^k log x. Compute c.

   c. Find a solution of L(y) = x^k in case k is a double root of q.

7.3 SECOND ORDER EQUATIONS WITH REGULAR SINGULAR POINTS — AN EXAMPLE

A second order equation with a regular singular point at x_0 has the form

(x − x_0)²y'' + a(x)(x − x_0)y' + b(x)y = 0,   (1)

where a, b are analytic at x_0. Thus a, b have power series expansions

a(x) = Σ_{k=0}^∞ α_k(x − x_0)^k,  b(x) = Σ_{k=0}^∞ β_k(x − x_0)^k,

which are convergent on some interval |x − x_0| < r_0, for some r_0 > 0. We shall be interested in finding solutions near x_0. In order to simplify our notation we shall assume x_0 = 0.

If x_0 ≠ 0 it is easy to change the equation into an equivalent equation with a regular singular point at the origin. We let t = x − x_0 and

ã(t) = a(x_0 + t) = Σ_{k=0}^∞ α_k t^k,  b̃(t) = b(x_0 + t) = Σ_{k=0}^∞ β_k t^k.

The power series for ã, b̃ converge on the interval |t| < r_0 about t = 0. Let φ be any solution of (1), and define φ̃ by

φ̃(t) = φ(x_0 + t).

Then

dφ̃/dt (t) = dφ/dx (x_0 + t),  d²φ̃/dt² (t) = d²φ/dx² (x_0 + t),

and we see that φ̃ satisfies

t²u'' + ã(t)tu' + b̃(t)u = 0,   (2)

where now u' = du/dt. This is an equation with a regular singular point at t = 0. Conversely, if φ̃ satisfies (2), the function φ given by

φ(x) = φ̃(x − x_0)

satisfies (1). In this sense (2) is equivalent to (1).

With x_0 = 0 in (1) we may write (1) as

L(y) = x²y'' + a(x)xy' + b(x)y = 0,   (3)

where a, b are analytic at the origin, and have power series expansions

a(x) = Σ_{k=0}^∞ α_k x^k,  b(x) = Σ_{k=0}^∞ β_k x^k,   (4)

which are convergent on an interval |x| < r_0, r_0 > 0. The Euler equation is the special case of (3) with a, b constant. The effect of the higher order terms (terms with x as a factor) in the series (4) is to introduce series into the solutions of (3).

Example

Consider the equation

L(y) = x²y'' + (3/2)xy' + xy = 0,   (5)

which has a regular singular point at the origin. We first consider the case x > 0. Since (5) is not an Euler equation we cannot expect it to have a solution of the form x^r there. However we try for a solution of the form

φ(x) = x^r Σ_{k=0}^∞ c_k x^k = c_0 x^r + c_1 x^{r+1} + ...,  (c_0 ≠ 0),   (6)

that is, x^r times a power series. This simple idea works. We operate formally and see what conditions must be satisfied by r and c_0, c_1, c_2, ... in order that this φ be a solution of the equation. Computing we find that

φ'(x) = c_0 r x^{r−1} + c_1(r + 1)x^r + c_2(r + 2)x^{r+1} + ...,

φ''(x) = c_0 r(r − 1)x^{r−2} + c_1(r + 1)r x^{r−1} + c_2(r + 2)(r + 1)x^r + ...,

and hence

x²φ''(x) = c_0 r(r − 1)x^r + c_1(r + 1)r x^{r+1} + c_2(r + 2)(r + 1)x^{r+2} + ...,

(3/2)xφ'(x) = (3/2)c_0 r x^r + (3/2)c_1(r + 1)x^{r+1} + (3/2)c_2(r + 2)x^{r+2} + ...,

xφ(x) = c_0 x^{r+1} + c_1 x^{r+2} + ....
Adding, we obtain

L(φ)(x) = [r(r − 1) + (3/2)r]c_0 x^r + {[(r + 1)r + (3/2)(r + 1)]c_1 + c_0}x^{r+1} + {[(r + 2)(r + 1) + (3/2)(r + 2)]c_2 + c_1}x^{r+2} + ....

If we let

q(r) = r(r − 1) + (3/2)r = r(r + 1/2),

this may be written as

L(φ)(x) = q(r)c_0 x^r + [q(r + 1)c_1 + c_0]x^{r+1} + [q(r + 2)c_2 + c_1]x^{r+2} + ...

= q(r)c_0 x^r + x^r Σ_{k=1}^∞ [q(r + k)c_k + c_{k−1}]x^k.

If φ is to satisfy L(φ)(x) = 0, all coefficients of the powers of x must vanish. Since we assumed c_0 ≠ 0, this implies

q(r) = 0,  q(r + k)c_k + c_{k−1} = 0,  (k = 1, 2, ...).   (7)

The polynomial q is called the indicial polynomial for (5). It is the coefficient of the lowest power of x appearing in L(φ)(x), and from (7) we see that its roots are the only permissible values of r for which there are solutions of the form (6). In our example these roots are

r_1 = 0,  r_2 = −1/2.

The second set of equations in (7) determines c_1, c_2, ... in terms of c_0 and r. If q(r + k) ≠ 0 for k = 1, 2, ..., then

c_k = −c_{k−1}/q(r + k),  (k = 1, 2, ...).

Thus

c_k = (−1)^k c_0 / [q(r + k)q(r + k − 1)···q(r + 1)],  (k = 1, 2, ...).

If r_1 = 0,

q(r_1 + k) = q(k) = k(k + 1/2) ≠ 0 for k = 1, 2, ...,

and if r_2 = −1/2,

q(r_2 + k) = q(k − 1/2) = (k − 1/2)k ≠ 0 for k = 1, 2, ....

Letting c_0 = 1 and r = r_1 = 0 we obtain, at least formally, a solution φ_1 given by

φ_1(x) = 1 + Σ_{k=1}^∞ (−1)^k x^k / [q(k)q(k − 1)···q(1)].

Similarly, letting c_0 = 1 and r = r_2 = −1/2 we obtain, at least formally, a solution φ_2 given by

φ_2(x) = x^{−1/2} [1 + Σ_{k=1}^∞ (−1)^k x^k / (q(k − 1/2)q(k − 3/2)···q(1/2))].

These functions φ_1, φ_2 will be solutions provided the series converge on some interval containing x = 0. Let us write the series for φ_1 in the form

φ_1(x) = Σ_{k=0}^∞ d_k(x),  d_0(x) = 1,  d_k(x) = (−1)^k x^k / [q(k)q(k − 1)···q(1)],  (k = 1, 2, ...).

Using the ratio test we obtain

|d_{k+1}(x)| / |d_k(x)| = |x| / q(k + 1) = |x| / [(k + 1)(k + 3/2)] → 0

as k → ∞, provided |x| < ∞. Thus the series defining φ_1 is convergent for all finite x. The same can be shown to hold for the series multiplying x^{−1/2} in the expression for φ_2. Thus φ_1, φ_2 are solutions of (5) for all x > 0.

To obtain solutions for x < 0 we note that all the above computations go through if x^r is replaced everywhere by |x|^r, where

|x|^r = e^{r log|x|}.

Thus two solutions of (5) which are valid for all x ≠ 0 are given by

φ_1(x) = 1 + Σ_{k=1}^∞ (−1)^k x^k / [q(k)q(k − 1)···q(1)],

and

φ_2(x) = |x|^{−1/2} [1 + Σ_{k=1}^∞ (−1)^k x^k / (q(k − 1/2)q(k − 3/2)···q(1/2))].

The above example illustrates the general fact that an equation (3) with a regular singular point at the origin always has a solution of the form

φ(x) = |x|^r Σ_{k=0}^∞ c_k x^k,

where r is a constant, and the series converges on the interval |x| < r_0. Moreover r, and the constants c_k, may be computed by substituting φ into the differential equation.
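The example can be checked numerically. The sketch below (names ours) sums the series for φ_1 using c_k = −c_{k−1}/q(k) and verifies by central finite differences that the partial sum nearly satisfies (5); the step size h and the truncation order n are arbitrary choices of ours.

```python
# Sketch: partial sums of phi_1 for x^2 y'' + (3/2) x y' + x y = 0,
# with q(r) = r (r + 1/2) and c_k = -c_{k-1} / q(k), c_0 = 1.

def q(r):
    return r * (r + 0.5)

def phi1(x, n=20):
    s, c = 1.0, 1.0
    for k in range(1, n + 1):
        c *= -1.0 / q(k)          # c_k = -c_{k-1} / q(k)
        s += c * x**k
    return s

def residual(x, h=1e-5, n=20):
    # finite-difference approximations of phi_1' and phi_1''
    d1 = (phi1(x + h, n) - phi1(x - h, n)) / (2 * h)
    d2 = (phi1(x + h, n) - 2 * phi1(x, n) + phi1(x - h, n)) / h**2
    return x * x * d2 + 1.5 * x * d1 + x * phi1(x, n)

res = abs(residual(0.5))
print(res)   # small: the partial sum nearly satisfies the equation
```

The rapid decay of the coefficients (each step divides by q(k) ~ k²) is the same fact established by the ratio test above.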
CYP QUESTIONS

1. Find the singular points of the following equations, and determine those which are regular singular points.

   i. x²y'' + (x + x²)y' − y = 0
   ii. 3x²y'' + x²y' − 2xy = 0
   iii. x²y'' − 5y' + 3x²y = 0
   iv. xy'' + 4y = 0
   v. (1 − x²)y'' − 2xy' + 2y = 0
   vi. (x² + x − 2)²y'' + 3(x + 2)y' + (x − 1)y = 0
   vii. x²y'' + (sin x)y' + (cos x)y = 0

2. Compute the indicial polynomials, and their roots, for the following equations:

   i. x²y'' + (x + x²)y' − y = 0
   ii. x²y'' + xy' + (x² − 1/4)y = 0
   iii. 4x²y'' + (4x⁴ − 5x)y' + (x² + 2)y = 0
   iv. x²y'' + (x − 3x²)y' + e^x y = 0
   v. x²y'' + (sin x)y' + (cos x)y = 0

3. a. Show that −1 and 1 are regular singular points for the Legendre equation

      (1 − x²)y'' − 2xy' + α(α + 1)y = 0.

   b. Find the indicial polynomial, and its roots, corresponding to the point x = 1.

4. Find a solution of the form

   φ(x) = x^r Σ_{k=0}^∞ c_k x^k,  (x > 0),

   for the following equations:

   a. 2x²y'' + (x² − x)y' + y = 0
   b. x²y'' + (x − x²)y' + y = 0

7.4 SECOND ORDER EQUATIONS WITH REGULAR SINGULAR POINTS — GENERAL CASE

We now try to find a solution of the form

φ(x) = x^r Σ_{k=0}^∞ c_k x^k,  (c_0 ≠ 0),   (1)

for the equation

x²y'' + a(x)xy' + b(x)y = 0,   (2)

where

a(x) = Σ_{k=0}^∞ α_k x^k,  b(x) = Σ_{k=0}^∞ β_k x^k,   (3)

for |x| < r_0. Then

φ'(x) = x^{r−1} Σ_{k=0}^∞ (k + r)c_k x^k,

φ''(x) = x^{r−2} Σ_{k=0}^∞ (k + r)(k + r − 1)c_k x^k,

and hence

b(x)φ(x) = x^r (Σ_{k=0}^∞ β_k x^k)(Σ_{k=0}^∞ c_k x^k) = x^r Σ_{k=0}^∞ b̃_k x^k,  where b̃_k = Σ_{j=0}^k c_j β_{k−j},

a(x)xφ'(x) = x^r (Σ_{k=0}^∞ α_k x^k)(Σ_{k=0}^∞ (k + r)c_k x^k) = x^r Σ_{k=0}^∞ ã_k x^k,  where ã_k = Σ_{j=0}^k (j + r)c_j α_{k−j},

x²φ''(x) = x^r Σ_{k=0}^∞ (k + r)(k + r − 1)c_k x^k.

Thus

L(φ)(x) = x^r Σ_{k=0}^∞ [(k + r)(k + r − 1)c_k + ã_k + b̃_k] x^k,

and we must have

[ ]_k = (k + r)(k + r − 1)c_k + ã_k + b̃_k = 0,  (k = 0, 1, 2, ...).

Using the definitions of ã_k, b̃_k we can write the bracket [ ]_k as

[ ]_k = (k + r)(k + r − 1)c_k + Σ_{j=0}^k (j + r)c_j α_{k−j} + Σ_{j=0}^k c_j β_{k−j}

= [(k + r)(k + r − 1) + (k + r)α_0 + β_0]c_k + Σ_{j=0}^{k−1} [(j + r)α_{k−j} + β_{k−j}]c_j.

For k = 0 we must have

r(r − 1) + rα_0 + β_0 = 0,   (4)

since c_0 ≠ 0. The second degree polynomial q given by

q(r) = r(r − 1) + rα_0 + β_0

is called the indicial polynomial for (2), and the only admissible values of r are the roots of q. We see that

[ ]_k = q(r + k)c_k + d_k = 0,  (k = 1, 2, ...),   (5)

where

d_k = Σ_{j=0}^{k−1} [(j + r)α_{k−j} + β_{k−j}]c_j,  (k = 1, 2, ...).   (6)

Note that d_k is a linear combination of c_0, c_1, ..., c_{k−1} with coefficients involving the known functions a, b, and r. Leaving r and c_0 indeterminate for the moment, we solve the equations (5), (6) successively in terms of c_0 and r. The solutions we denote by C_k(r), and the corresponding d_k by D_k(r). Thus

D_1(r) = (rα_1 + β_1)c_0,  C_1(r) = −D_1(r)/q(r + 1),

and in general

D_k(r) = Σ_{j=0}^{k−1} [(j + r)α_{k−j} + β_{k−j}] C_j(r),   (7)

C_k(r) = −D_k(r)/q(r + k),  (k = 1, 2, ...).   (8)
The C_k thus determined are rational functions of r (quotients of polynomials), and the only points where they cease to exist are the points r for which q(r + k) = 0 for some k = 1, 2, .... Only two such possible points exist. Let us define Φ by

Φ(x, r) = c_0 x^r + x^r Σ_{k=1}^∞ C_k(r)x^k.   (9)

If the series in (9) converges for 0 < x < r_0, then clearly

L(Φ)(x, r) = c_0 q(r)x^r.   (10)

We have now arrived at the following situation. If the φ given by (1) is a solution of (2), then r must be a root of the indicial polynomial q, and the c_k (k ≥ 1) are determined uniquely in terms of r and c_0 to be the C_k(r) of (8), provided q(r + k) ≠ 0, k = 1, 2, .... Conversely, if r is a root of q, and if the C_k(r) can be determined (that is, q(r + k) ≠ 0 for k = 1, 2, ...), then the function φ given by φ(x) = Φ(x, r) is a solution of (2) for any choice of c_0, provided the series in (9) can be shown to be convergent.

Let r_1, r_2 be the two roots of q, and suppose we have labeled them so that Re r_1 ≥ Re r_2. Then q(r_1 + k) ≠ 0 for any k = 1, 2, .... Thus C_k(r_1) exists for all k = 1, 2, ..., and letting c_0 = C_0(r_1) = 1 we see that the function φ_1 given by

φ_1(x) = x^{r_1} Σ_{k=0}^∞ C_k(r_1)x^k,  (C_0(r_1) = 1),   (11)

is a solution of (2), provided the series is convergent.

If r_2 is a root of q distinct from r_1, and q(r_2 + k) ≠ 0 for k = 1, 2, ..., then clearly C_k(r_2) is defined for k = 1, 2, ..., and the function φ_2 given by

φ_2(x) = x^{r_2} Σ_{k=0}^∞ C_k(r_2)x^k,  (C_0(r_2) = 1),   (12)

is another solution of (2), provided the series is convergent. The condition q(r_2 + k) ≠ 0 for k = 1, 2, ... is the same as r_1 ≠ r_2 + k for k = 1, 2, ...,

or r_1 − r_2 is not a positive integer.

Notice that since α_0 = a(0), β_0 = b(0), the indicial polynomial q can be written as

q(r) = r(r − 1) + a(0)r + b(0).

Theorem.

Consider the equation

x²y'' + a(x)xy' + b(x)y = 0,

where a, b have convergent power series expansions for

|x| < r_0, r_0 > 0.

Let r_1, r_2 (Re r_1 ≥ Re r_2) be the roots of the indicial polynomial

q(r) = r(r − 1) + a(0)r + b(0).

For 0 < |x| < r_0 there is a solution φ_1 of the form

φ_1(x) = |x|^{r_1} Σ_{k=0}^∞ c_k x^k,  (c_0 = 1),

where the series converges for |x| < r_0. If r_1 − r_2 is not zero or a positive integer, there is a second solution φ_2 for 0 < |x| < r_0 of the form

φ_2(x) = |x|^{r_2} Σ_{k=0}^∞ c̃_k x^k,  (c̃_0 = 1),

where the series converges for |x| < r_0.

The coefficients c_k, c̃_k can be obtained by substitution of the solutions into the differential equation.

As we have seen in (11), (12), the coefficients c_k, c̃_k appearing in the solutions φ_1, φ_2 of the above theorem are given by

c_k = C_k(r_1),  c̃_k = C_k(r_2),  (k = 0, 1, 2, ...),

where the C_k(r), (k = 1, 2, ...), are the solutions of the equations (7), (8), with C_0(r) = 1.

It is easy to check, as in the case of the Euler equation, that the calculations made for x > 0 remain valid for x < 0 provided x^r is replaced everywhere by |x|^r. Thus all that remains to be proved in the theorem is the convergence of the series involved in φ_1 and φ_2. This will be done in sec. 7.5.

If r_1 − r_2 is either zero or a positive integer we shall say that we have an exceptional case. The Euler equation shows that if r_1 = r_2 we must expect solutions involving log x. It turns out that even in the case when r_1 − r_2 is a positive integer, log x may appear. In sec. 7.6 we show how to obtain a solution associated with r_2 in the exceptional cases.
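The recursion (7), (8) is mechanical enough to state as code. A sketch (names ours) that computes the C_k(r) from the expansion coefficients α_k of a(x) and β_k of b(x), tried on the example of sec. 7.3 (a(x) = 3/2, b(x) = x, root r_1 = 0):

```python
# Sketch of the Frobenius recursion:
#   q(r) = r(r-1) + alpha_0 r + beta_0,
#   q(r+k) C_k(r) = - sum_{j=0}^{k-1} [(j+r) alpha_{k-j} + beta_{k-j}] C_j(r),
# with C_0(r) = 1.

def frobenius_coeffs(alpha, beta, r, n):
    def q(s):
        return s * (s - 1) + alpha[0] * s + beta[0]
    C = [1.0]                                  # C_0(r) = 1
    for k in range(1, n + 1):
        D = sum(((j + r) * alpha[k - j] + beta[k - j]) * C[j]
                for j in range(k))
        C.append(-D / q(r + k))
    return C

# Example of sec. 7.3: a(x) = 3/2, b(x) = x, so alpha = (3/2, 0, ...),
# beta = (0, 1, 0, ...), and r_1 = 0.
alpha = [1.5] + [0.0] * 5
beta = [0.0, 1.0] + [0.0] * 4
C = frobenius_coeffs(alpha, beta, 0.0, 4)
print(C)
```

For this example the recursion reduces to c_k = −c_{k−1}/q(k) with q(k) = k(k + 1/2), reproducing the coefficients found there (c_1 = −2/3, c_2 = 2/15, ...).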

CYP QUESTIONS

1. Find all solutions of the form

   φ(x) = |x|^r Σ_{k=0}^∞ c_k x^k,  (|x| > 0),

   for the following equations:

   a. 3x²y'' + 5xy' + 3xy = 0
   b. x²y'' + xy' + x²y = 0
   c. x²y'' + xy' + (x² − 1/4)y = 0

   Test each of the series involved for convergence.

2. Consider the equation

   x²y'' + xe^x y' + y = 0.

   a. Compute the indicial polynomial, and show that its roots are i and −i.

   b. Compute the coefficients c_1, c_2, c_3 in the solution

      φ(x) = x^i Σ_{k=0}^∞ c_k x^k,  (c_0 = 1).

3. a. Find a solution φ of the form

      φ(x) = |x − 1|^r Σ_{k=0}^∞ c_k(x − 1)^k

      for the Legendre equation

      (1 − x²)y'' − 2xy' + α(α + 1)y = 0.

      For what values of x does the series converge? (Hint: Do not divide by x + 1 and multiply by x − 1, but note that x = (x − 1) + 1. Express the coefficients in terms of powers of x − 1.)

4. The equation

   xy'' + (1 − x)y' + αy = 0,

   where α is a constant, is called the Laguerre equation.

   a. Show that this equation has a regular singular point at x = 0.

   b. Compute the indicial polynomial and its roots.

   c. Find a solution φ of the form

      φ(x) = x^r Σ_{k=0}^∞ c_k x^k.

   d. Show that if α = n, a non-negative integer, there is a polynomial solution of degree n.

5. a. Let L_n denote the polynomial

      L_n(x) = e^x (d^n/dx^n)(x^n e^{−x}).

      Show that L_n satisfies the Laguerre equation if α = n. This polynomial is called the n-th Laguerre polynomial. (Hint: See the treatment of the Legendre polynomials.)

   b. Compute L_0, L_1, L_2.

7.5 A CONVERGENCE PROOF

Under consideration is the equation

   x²y″ + a(x)xy′ + b(x)y = 0,

with

   a(x) = Σ_{k=0}^∞ α_k x^k,   b(x) = Σ_{k=0}^∞ β_k x^k,   (1)

where these series converge for |x| < r_0 for some r_0 > 0. The indicial polynomial q is given by

   q(r) = r(r − 1) + α_0 r + β_0,   (2)

and its two roots are r_1, r_2, with Re r_1 ≥ Re r_2.

The series we must show to be convergent are determined from

   Σ_{k=0}^∞ c_k(r) x^k,   (3)

where the c_k(r) are given recursively by

   c_0(r) = 1,

   q(r + k) c_k(r) = − Σ_{j=0}^{k−1} [(j + r) α_{k−j} + β_{k−j}] c_j(r),   (4)

   (k = 1, 2, ...).

We must prove that the series (3) converges for |x| < r_0 if r = r_1, and if r = r_2, provided r_1 − r_2 is not a positive integer.
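The recursion (4) is easy to run in exact rational arithmetic. A minimal sketch (my own illustration, not from the text): for the equation with a(x) = 1 and b(x) = x² we have q(r) = r², and with r = 0 the recursion should reproduce the coefficients (−1)^m/(2^{2m}(m!)²) (these reappear in Sec. 7.7 as the Bessel function J_0).

```python
from fractions import Fraction

def frobenius_coeffs(alpha, beta, r, kmax):
    """c_0 = 1;  q(r+k) c_k = -sum_{j<k} [(j+r) alpha_{k-j} + beta_{k-j}] c_j."""
    q = lambda s: s * (s - 1) + alpha.get(0, 0) * s + beta.get(0, 0)
    c = [Fraction(1)]
    for k in range(1, kmax + 1):
        s = sum(((j + r) * alpha.get(k - j, 0) + beta.get(k - j, 0)) * c[j]
                for j in range(k))
        c.append(-s / q(r + k))
    return c

# a(x) = 1 (alpha_0 = 1), b(x) = x^2 (beta_2 = 1), root r = 0.
coeffs = frobenius_coeffs({0: 1}, {2: 1}, 0, 6)
# Expected: c_2 = -1/4, c_4 = 1/64, c_6 = -1/2304, odd coefficients zero.
```

The dictionaries `alpha` and `beta` hold the power-series coefficients of a(x) and b(x); using `Fraction` keeps every c_k exact.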

We note that

   q(r) = (r − r_1)(r − r_2),

and hence that

   q(r_1 + k) = k(k + r_1 − r_2),   q(r_2 + k) = k(k + r_2 − r_1).

Therefore

   |q(r_1 + k)| ≥ k(k − |r_1 − r_2|),   |q(r_2 + k)| ≥ k(k − |r_1 − r_2|).   (5)

Now let ρ be any number satisfying 0 < ρ < r_0. Since the series in (1) are convergent for |x| = ρ, there is a constant M > 0 such that

   |α_j| ρ^j ≤ M,   |β_j| ρ^j ≤ M,   (j = 0, 1, 2, ...).   (6)
_ _
Using (5) and (6) in (4) we obtain

   k(k − |r_1 − r_2|) |c_k(r_1)| ≤ M Σ_{j=0}^{k−1} (j + 1 + |r_1|) ρ^{j−k} |c_j(r_1)|,   (7)

   (k = 1, 2, ...).

Let N be that integer satisfying

   N − 1 ≤ |r_1 − r_2| < N,

and let us define γ_0, γ_1, ... by

   γ_0 = |c_0(r_1)| = 1,   γ_k = |c_k(r_1)|,   (k = 1, 2, ..., N − 1),

and

   k(k − |r_1 − r_2|) γ_k = M Σ_{j=0}^{k−1} (j + 1 + |r_1|) ρ^{j−k} γ_j,   (8)

   (k = N, N + 1, ...).

Then, comparing the definition of the γ_k with (7), we see that

   |c_k(r_1)| ≤ γ_k,   (k = 0, 1, 2, ...).   (9)


We show that the series

   Σ_{k=0}^∞ γ_k x^k   (10)

is convergent for |x| < ρ. Replacing k by k + 1 in (8) we obtain

   (k + 1)(k + 1 − |r_1 − r_2|) γ_{k+1} = ρ^{−1} [k(k − |r_1 − r_2|) + M(k + 1 + |r_1|)] γ_k

for k ≥ N. Thus

   |γ_{k+1} x^{k+1}| / |γ_k x^k| = [k(k − |r_1 − r_2|) + M(k + 1 + |r_1|)] |x| / [ρ(k + 1)(k + 1 − |r_1 − r_2|)],

which tends to |x|/ρ as k → ∞. Thus, by the ratio test, the series (10) converges for |x| < ρ. Using (9) and the comparison test we see that the series

   Σ_{k=0}^∞ c_k(r_1) x^k,   (c_0(r_1) = 1),

converges for |x| < ρ. But since ρ is any number satisfying 0 < ρ < r_0, we have shown that this series converges for |x| < r_0.

The same computations with r_1 replaced by r_2 everywhere show that

   Σ_{k=0}^∞ c_k(r_2) x^k,   (c_0(r_2) = 1),

converges for |x| < r_0, provided r_1 − r_2 is not a positive integer.

7.6 THE EXCEPTIONAL CASES

We divide the exceptional cases into two groups according as the roots r_1, r_2 (Re r_1 ≥ Re r_2) of the indicial polynomial satisfy

   (i) r_1 = r_2,
   (ii) r_1 − r_2 is a positive integer.

We try to find solutions for 0 < x < r_0. We are going to work in a purely formal way in order to discover the form that the solutions should take.

For such x we have

   L(φ)(x, r) = c_0 q(r) x^r,   (1)

where φ is given by

   φ(x, r) = c_0 x^r + x^r Σ_{k=1}^∞ C_k(r) x^k.   (2)

The C_k(r) are determined recursively by the formulas

   C_0(r) = c_0 ≠ 0,
   q(r + k) C_k(r) = −D_k(r),   (3)

   D_k(r) = Σ_{j=0}^{k−1} [(j + r) α_{k−j} + β_{k−j}] C_j(r),   (k = 1, 2, ...).

In case (i) we have q(r_1) = 0, q′(r_1) = 0, and this suggests formally differentiating (1) with respect to r. We obtain

   (∂/∂r) L(φ)(x, r) = L(∂φ/∂r)(x, r) = c_0 [q′(r) + (log x) q(r)] x^r,

and we see that if r = r_1 = r_2, c_0 = 1, then

   φ_2(x) = (∂φ/∂r)(x, r_1)

will yield a solution of our equation, provided the series involved converge. Computing formally from (2) we find

   φ_2(x) = x^{r_1} Σ_{k=0}^∞ C_k′(r_1) x^k + (log x) x^{r_1} Σ_{k=0}^∞ C_k(r_1) x^k
          = x^{r_1} Σ_{k=0}^∞ C_k′(r_1) x^k + (log x) φ_1(x),

where φ_1 is the solution already obtained:

   φ_1(x) = x^{r_1} Σ_{k=0}^∞ C_k(r_1) x^k,   (C_0(r_1) = 1).

Note that C_k′(r_1) exists for all k = 0, 1, 2, ..., since C_k is a rational function of r whose denominator is not zero at r = r_1. Also C_0(r) = 1 implies that C_0′(r_1) = 0, and thus the series multiplying x^{r_1} in φ_2 starts with the first power of x.

Let us now turn to the case (ii), and suppose that r_1 = r_2 + m, where m is a positive integer. If c_0 is given, C_1(r_2), ..., C_{m−1}(r_2) all exist as finite numbers, but since

   q(r + m) C_m(r) = −D_m(r),   (4)

we run into trouble in trying to compute C_m(r_2). Now q(r) = (r − r_1)(r − r_2), and hence

   q(r + m) = (r − r_2)(r + m − r_2).

If D_m(r) also has r − r_2 as a factor (i.e., D_m(r_2) = 0), this would cancel the same factor in q(r + m), and (4) would give C_m(r_2) as a finite number. Then C_{m+1}(r_2), C_{m+2}(r_2), ... all exist. In this rather special situation we will have a solution φ_2 of the form

   φ_2(x) = x^{r_2} Σ_{k=0}^∞ C_k(r_2) x^k,   (C_0(r_2) = 1).

We can always arrange it so that D_m(r_2) = 0 by choosing

   C_0(r) = r − r_2.

From (3) we see that D_k(r) is linear homogeneous in C_0(r), ..., C_{k−1}(r), and hence D_k(r) has C_0(r) = r − r_2 as a factor. Thus C_m(r_2) will exist as a finite number.

Letting

   ψ(x, r) = x^r Σ_{k=0}^∞ C_k(r) x^k,   (C_0(r) = r − r_2),   (5)

we find formally that

   L(ψ)(x, r) = (r − r_2) q(r) x^r.   (6)

Putting r = r_2 we obtain formally a solution ψ given by

   ψ(x) = ψ(x, r_2).

However C_0(r_2) = C_1(r_2) = ... = C_{m−1}(r_2) = 0. Thus the series for ψ actually starts with the m-th power of x, and hence ψ has the form

   ψ(x) = x^{r_2 + m} σ(x) = x^{r_1} σ(x),

where σ is some power series. It is not difficult to see that ψ is just a constant multiple of the solution φ_1 already obtained.

To get a solution really associated with r_2 we differentiate (6) with respect to r, obtaining

   (∂/∂r) L(ψ)(x, r) = L(∂ψ/∂r)(x, r) = q(r) x^r + (r − r_2)[q′(r) + (log x) q(r)] x^r.

Now letting r = r_2 we find that the φ_2 given by

   φ_2(x) = (∂ψ/∂r)(x, r_2)

is a solution, provided the series involved are convergent. It has the form

   φ_2(x) = x^{r_2} Σ_{k=0}^∞ C_k′(r_2) x^k + (log x) x^{r_2} Σ_{k=0}^∞ C_k(r_2) x^k,

where C_0(r) = r − r_2. Since C_0(r_2) = ... = C_{m−1}(r_2) = 0, we may write this as

   φ_2(x) = x^{r_2} Σ_{k=0}^∞ C_k′(r_2) x^k + c (log x) φ_1(x),

where c is some constant.

The method used in this section to obtain solutions is called the Frobenius method. All the series obtained converge for |x| < r_0, and the φ_2 computed formally will be a solution in both the cases (i) and (ii). This requires justifying the differentiation of the various series term by term with respect to r, and this can be done.

Another approach which leads to a justification of the results is the following. Once we have discovered what form a second solution φ_2 should take, we can substitute this back into the equation and compute the coefficients of the various series involved. Then a proof of the convergence of these series can be patterned after the convergence proof in Sec. 7.5. We omit this proof.

Solutions for x < 0 can be obtained by replacing x^{r_1}, x^{r_2}, log x everywhere by |x|^{r_1}, |x|^{r_2}, log |x| respectively. We summarize our results in the following theorem.


Theorem.

Consider the equation

   x²y″ + a(x)xy′ + b(x)y = 0,

where a, b have power series expansions which are convergent for |x| < r_0, r_0 > 0. Let r_1, r_2 (Re r_1 ≥ Re r_2) be the roots of the indicial polynomial

   q(r) = r(r − 1) + a(0)r + b(0).

If r_1 = r_2 there are two linearly independent solutions φ_1, φ_2 for 0 < |x| < r_0 of the form

   φ_1(x) = |x|^{r_1} σ_1(x),   φ_2(x) = |x|^{r_1 + 1} σ_2(x) + (log |x|) φ_1(x),

where σ_1, σ_2 have power series expansions which are convergent for |x| < r_0, and σ_1(0) ≠ 0.

If r_1 − r_2 is a positive integer there are two linearly independent solutions φ_1, φ_2 for 0 < |x| < r_0 of the form

   φ_1(x) = |x|^{r_1} σ_1(x),
   φ_2(x) = |x|^{r_2} σ_2(x) + c (log |x|) φ_1(x),

where σ_1, σ_2 have power series expansions which are convergent for |x| < r_0, σ_1(0) ≠ 0, σ_2(0) ≠ 0, and c is a constant. It may happen that c = 0.

CYP QUESTIONS.

1. Consider the following three equations near x = 0:

   i. 2x²y″ + (5x + x²)y′ + (x² − 2)y = 0
   ii. 4x²y″ + 4xe^x y′ + 3(cos x)y = 0
   iii. (1 − x²)x²y″ + 3(x + x²)y′ + y = 0.

   a. Compute the roots r_1, r_2 of the indicial equation for each relative to x = 0.

2. Consider the equation

      x²y″ + xy′ + (x² − α²)y = 0,

   where α is a non-negative constant.

   a. Compute the indicial polynomial and its two roots.
   b. Discuss the nature of the solutions near the origin. Consider all cases carefully. Do not compute the solutions.
3. Obtain two linearly independent solutions of the following equations which are valid near x = 0:

   a. x²y″ + 3xy′ + (1 + x)y = 0
   b. x²y″ + 2x²y′ − 2y = 0
   c. x²y″ + 5xy′ + (3 − x²)y = 0
   d. x²y″ − 2x(x + 1)y′ + 2(x + 1)y = 0
   e. x²y″ + xy′ + (x² − 1)y = 0
   f. x²y″ − 2x²y′ + (4x − 2)y = 0

4. Show that the solutions φ_1, φ_2 in the above theorem are linearly independent for 0 < x < r_0.

5. Show that ψ(x, r_2), where ψ is given by (5), is a constant times φ_1(x), where φ_1 is given by

      φ_1(x) = x^{r_1} Σ_{k=0}^∞ C_k(r_1) x^k,   (C_0(r_1) = 1).
6. Consider the equation

      xy′ + a(x)y = 0,

   where

      a(x) = Σ_{k=0}^∞ α_k x^k,

   and the series converges for |x| < r_0, r_0 > 0.

   a. Show formally that there is a solution of the form

      φ(x) = x^r Σ_{k=0}^∞ c_k x^k,   (c_0 = 1),

   where r + α_0 = 0, and x > 0.

   b. Prove that the series obtained converges for |x| < r_0. (Hint: use the method of Sec. 7.5.)
7. Consider the equation

      x²y″ + a(x)xy′ + b(x)y = 0,   (*)

   where a, b have power series expansions which are convergent for |x| < r_0, r_0 > 0. Let r_1, r_2 be the roots of the indicial polynomial, Re r_1 ≥ Re r_2. Let φ_1 be a solution for x > 0 corresponding to r_1:

      φ_1(x) = x^{r_1} σ_1(x),   (σ_1(0) = 1),

   where σ_1 has a power series expansion valid for |x| < r_0.

   a. Let φ be any other solution of (*), and write φ = uφ_1. Show that v = u′ satisfies the equation

      xv′ + [2r_1 + a(x) + 2xσ_1′(x)/σ_1(x)] v = 0.   (**)

   b. Since σ_1′/σ_1 has a power series expansion on some interval |x| < ρ_1, ρ_1 > 0, show that the v satisfying (**) has the form

      v(x) = x^{−2r_1 − α_0} Σ_{k=0}^∞ d_k x^k,

   where the power series converges for |x| < ρ_0, and ρ_0 is the smaller of the two numbers r_0, ρ_1. (Hint: question 6.)

   c. Using the results of (a), (b), show that a second solution φ_2 of (*) exists of the form

      φ_2(x) = c (log x) φ_1(x) + x^{r_2} σ_2(x),   (x > 0),

   where c is a constant, and σ_2 has a power series expansion which converges for |x| < ρ_0.

7.7 THE BESSEL EQUATION

If α is a constant, Re α ≥ 0, the Bessel equation of order α is the equation

   x²y″ + xy′ + (x² − α²)y = 0.

This has the form

   x²y″ + a(x)xy′ + b(x)y = 0,

with a(x) = 1, b(x) = x² − α². Since a, b are analytic at x = 0, the Bessel equation has the origin as a regular singular point. The indicial polynomial q is given by

   q(r) = r(r − 1) + r − α² = r² − α²,

whose two roots r_1, r_2 are r_1 = α, r_2 = −α. We shall construct solutions for x > 0.

Let us consider the case α = 0 first. Since the roots are both equal to zero in this case, it follows from the previous theorem that there are two solutions φ_1, φ_2 of the form

   φ_1(x) = σ_1(x),   φ_2(x) = xσ_2(x) + (log x)φ_1(x),

where σ_1, σ_2 have power series expansions which converge for all finite x. Let us compute φ_1, φ_2. For the moment let

   φ_1(x) = Σ_{k=0}^∞ c_k x^k.

Substituting into the equation we see that

   c_1 = 0,   [k(k − 1) + k] c_k + c_{k−2} = 0,   (k = 2, 3, ...).

The second set of equations is the same as

   c_k = −c_{k−2}/k²,   (k = 2, 3, ...).

The choice c_0 = 1 implies

   c_2 = −1/2²,   c_4 = −c_2/4² = 1/(2²·4²),   ...,

and in general

   c_{2m} = (−1)^m / (2²·4²···(2m)²) = (−1)^m / (2^{2m} (m!)²),   (m = 1, 2, ...).

Since c_1 = 0 we have c_3 = c_5 = ... = 0. Thus φ_1 contains only even powers of x, and we obtain

   φ_1(x) = Σ_{m=0}^∞ (−1)^m x^{2m} / (2^{2m} (m!)²),

where as usual 0! = 1, and 2⁰ = 1.

The function defined by this series is called the Bessel function of zero order of the first kind and is denoted by J_0. Thus

   J_0(x) = Σ_{m=0}^∞ ((−1)^m / (m!)²) (x/2)^{2m}.

It is easily checked by the ratio test that this series indeed converges for all finite x.
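Because the series converges so rapidly, a short partial sum already matches J_0 to machine precision. A small numerical sketch (my own check, not from the text), compared against the known value J_0(1) ≈ 0.7651976866:

```python
import math

def j0_series(x, terms):
    """Partial sum of J0(x) = sum_m (-1)^m / (m!)^2 * (x/2)^(2m)."""
    return sum((-1) ** m / math.factorial(m) ** 2 * (x / 2) ** (2 * m)
               for m in range(terms))

approx = j0_series(1.0, 12)   # about a dozen terms suffice for moderate x
```

The factorial-squared denominators make the terms shrink super-geometrically, which is exactly what the ratio test shows.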
We now determine a second solution φ_2 for the Bessel equation of order zero. Letting φ_1 = J_0, this solution has the form

   φ_2(x) = Σ_{k=0}^∞ c_k x^k + (log x) φ_1(x),   (c_0 = 0).

We obtain

   φ_2′(x) = Σ_{k=1}^∞ k c_k x^{k−1} + φ_1(x)/x + (log x) φ_1′(x),

   φ_2″(x) = Σ_{k=2}^∞ k(k − 1) c_k x^{k−2} − φ_1(x)/x² + 2φ_1′(x)/x + (log x) φ_1″(x).

Thus

   L(φ_2)(x) = x²φ_2″(x) + xφ_2′(x) + x²φ_2(x)
             = c_1 x + 2²c_2 x² + Σ_{k=3}^∞ (k²c_k + c_{k−2}) x^k + 2xφ_1′(x) + (log x) L(φ_1)(x),

and since L(φ_1)(x) = 0 we have

   c_1 x + 2²c_2 x² + Σ_{k=3}^∞ (k²c_k + c_{k−2}) x^k = −2xφ_1′(x) = Σ_{m=1}^∞ (−1)^{m+1} 4m x^{2m} / (2^{2m} (m!)²).

Hence

   c_1 = 0,   2²c_2 = 1,   3²c_3 + c_1 = 0,   ...,

and we see that, since the series on the right has only even powers of x,

   c_1 = c_3 = c_5 = ... = 0.



The recursion relation for the other coefficients is

   (2m)² c_{2m} + c_{2m−2} = (−1)^{m+1} m / (2^{2m−2} (m!)²),   (m = 2, 3, ...).

We have

   c_2 = 1/2²,   c_4 = −(1/(2²·4²))(1 + 1/2),   c_6 = (1/(2²·4²·6²))(1 + 1/2 + 1/3),   ...,

and it can be shown by induction that

   c_{2m} = ((−1)^{m−1} / (2^{2m} (m!)²)) (1 + 1/2 + ... + 1/m),   (m = 1, 2, ...).
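The induction claim is easy to spot-check with exact arithmetic. A small verification sketch of my own (taking c_0 = 0, as required for the second solution), testing the closed form against the recursion (2m)²c_{2m} + c_{2m−2} = (−1)^{m+1} m / (2^{2m−2}(m!)²):

```python
from fractions import Fraction
from math import factorial

def c(m):
    """Closed form c_{2m} = (-1)^(m-1)/(2^(2m)(m!)^2) * (1 + 1/2 + ... + 1/m)."""
    if m == 0:
        return Fraction(0)          # c_0 = 0 for the second solution
    h = sum(Fraction(1, j) for j in range(1, m + 1))
    return Fraction((-1) ** (m - 1), 4 ** m * factorial(m) ** 2) * h

# Check the recursion for the first fifteen coefficients.
for m in range(1, 16):
    rhs = Fraction((-1) ** (m + 1) * m, 4 ** (m - 1) * factorial(m) ** 2)
    assert (2 * m) ** 2 * c(m) + c(m - 1) == rhs
```

Using `Fraction` makes the check exact, so it confirms the coefficients rather than merely approximating them.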

The solution thus determined is called a Bessel function of zero order of the second kind, and is denoted by K_0. Hence

   K_0(x) = Σ_{m=1}^∞ ((−1)^{m−1} / (m!)²) (1 + 1/2 + ... + 1/m) (x/2)^{2m} + (log x) J_0(x).

Using the ratio test it is easy to check that the series on the right is convergent for all finite x.

CYP QUESTIONS

1. Prove that the series defining J_0 and K_0 converge for |x| < ∞.

2. Suppose φ is any solution of x²y″ + xy′ + x²y = 0 for x > 0, and let ψ(x) = x^{1/2}φ(x). Show that ψ satisfies the equation

      y″ + (1 + 1/(4x²)) y = 0

   for x > 0.

3. Show that J_0 has an infinity of positive zeros. (Hint: if ψ_0(x) = x^{1/2}J_0(x), then ψ_0 satisfies

      y″ + (1 + 1/(4x²)) y = 0,   (x > 0).

   The function χ given by χ(x) = sin x satisfies y″ + y = 0.)
4. a. If λ ≥ 0 and φ_λ(x) = x^{1/2}J_0(λx), show that

      φ_λ″ + λ²φ_λ = −φ_λ/(4x²).

   (Hint: φ_λ(x) = λ^{−1/2}ψ_0(λx), where ψ_0 is defined in question 3.)

   b. If λ, μ are positive constants, show that

      (λ² − μ²) ∫₀¹ φ_λ(x) φ_μ(x) dx = φ_λ(1) φ_μ′(1) − φ_μ(1) φ_λ′(1).   (**)

   (Hint: multiply the equation for φ_λ by φ_μ, the equation for φ_μ by φ_λ, and subtract to obtain

      (φ_λ′φ_μ − φ_λφ_μ′)′ = (μ² − λ²) φ_λ φ_μ.

   Integrate from 0 to 1.)

   c. If λ ≠ μ and J_0(λ) = 0, J_0(μ) = 0, show that

      ∫₀¹ φ_λ(x) φ_μ(x) dx = ∫₀¹ x J_0(λx) J_0(μx) dx = 0.

5. Using the notation of question 4 show that if J_0(λ) = 0 then

      ∫₀¹ φ_λ²(x) dx = ∫₀¹ x J_0²(λx) dx = (1/2)(J_0′(λ))².

   (Hint: relation (**) in question 4 is valid for any positive λ and μ. Differentiate it with respect to μ, and then set μ = λ; integrate from 0 to 1.)

6. If λ > 0 is such that J_0(λ) = 0, prove that J_0′(λ) ≠ 0. (Hint: if J_0(λ) = J_0′(λ) = 0, the uniqueness theorem would imply J_0(x) = 0 for all x > 0. Alternately, use question 5.) (Remark: the result of this exercise can be used to show that the positive zeros of J_0 are denumerable, that is, they may be put into a one-to-one correspondence with the positive integers.)

7. Show that J_0′ satisfies the Bessel equation of order one

      x²y″ + xy′ + (x² − 1)y = 0.

8. Since J_0(0) = 1, and J_0 is continuous, J_0(x) ≠ 0 in some interval 0 ≤ x < δ, for some δ > 0. Let 0 < x_0 < δ.

   a. Show that there is a second solution φ_2 of the Bessel equation of order zero which has the form

      φ_2(x) = J_0(x) ∫_{x_0}^x dt / (t J_0²(t)),   (0 < x < δ).

   b. Show that J_0 and φ_2 are linearly independent on 0 < x < δ.

7.8. THE BESSEL EQUATION

Now we compute solutions for the Bessel equation of order α, where α ≠ 0 and Re α ≥ 0:

   L(y) = x²y″ + xy′ + (x² − α²)y = 0.

As before we restrict attention to the case x > 0. The roots of the indicial equation are r_1 = α, r_2 = −α.

First we determine a solution corresponding to the root r_1 = α. From the theorem of Sec. 7.6 such a solution φ_1 has the form

   φ_1(x) = x^α Σ_{k=0}^∞ c_k x^k,   (c_0 ≠ 0).
k=0

We find, after a little calculation, that

   L(φ_1)(x) = 0·c_0 x^α + [(α + 1)² − α²] c_1 x^{α+1} + x^α Σ_{k=2}^∞ {[(α + k)² − α²] c_k + c_{k−2}} x^k = 0.

Thus we have c_1 = 0,

   [(α + k)² − α²] c_k + c_{k−2} = 0,   (k = 2, 3, ...).

Since (α + k)² − α² = k(2α + k) ≠ 0 for k = 2, 3, ..., and c_1 = 0, it follows that

   c_1 = c_3 = c_5 = ... = 0.

We find

   c_2 = −c_0/(2(2α + 2)) = −c_0/(2²(α + 1)),

   c_4 = −c_2/(4(2α + 4)) = c_0/(2⁴·2!·(α + 1)(α + 2)),

   c_6 = −c_4/(6(2α + 6)) = −c_0/(2⁶·3!·(α + 1)(α + 2)(α + 3)),

and, in general,

   c_{2m} = (−1)^m c_0 / (2^{2m} m! (α + 1)(α + 2)···(α + m)).

Our solution thus becomes

   φ_1(x) = c_0 x^α + c_0 x^α Σ_{m=1}^∞ (−1)^m x^{2m} / (2^{2m} m! (α + 1)···(α + m)).   (1)

For α = 0, c_0 = 1, this reduces to J_0(x).

It is usual to choose

   c_0 = 1 / (2^α Γ(α + 1)),   (2)

where Γ is the gamma function defined by

   Γ(z) = ∫₀^∞ e^{−x} x^{z−1} dx,   (Re z > 0).

It is readily seen that

   Γ(z + 1) = z Γ(z).   (3)

Indeed, integrating by parts, we have

   Γ(z + 1) = lim_{T→∞} ∫₀^T e^{−x} x^z dx
            = lim_{T→∞} [ −x^z e^{−x} |₀^T + z ∫₀^T e^{−x} x^{z−1} dx ]
            = z lim_{T→∞} ∫₀^T e^{−x} x^{z−1} dx = z Γ(z),

since T^z e^{−T} → 0 as T → ∞. If z is a positive integer n,

   Γ(n + 1) = n!.

Thus the gamma function is an extension of the factorial function to numbers which are not integers.
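Both properties are easy to confirm numerically; a quick sketch using Python's standard-library gamma function:

```python
import math

# Gamma(n+1) = n! for positive integers n:
assert math.gamma(5) == math.factorial(4)           # Gamma(5) = 4! = 24

# The functional equation Gamma(z+1) = z * Gamma(z) at a non-integer point:
assert abs(math.gamma(3.5) - 2.5 * math.gamma(2.5)) < 1e-12
```

This is only a floating-point check, of course; the identity itself is exact, as the integration by parts above shows.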

The relation (3) can be used to define Γ(z) for z such that Re z ≤ 0, provided z is not zero or a negative integer. To see this, suppose N is the positive integer such that

   −N < Re z ≤ −N + 1.

Then Re (z + N) > 0, and we can define Γ(z) in terms of Γ(z + N) by

   Γ(z) = Γ(z + N) / (z(z + 1)···(z + N − 1)),   (Re z ≤ 0),

provided z ≠ 0, −1, ..., −N + 1. The gamma function is not defined at 0, −1, −2, ....

Returning to (1), if we use the c_0 given by (2) we obtain a solution of the Bessel equation of order α which is denoted by J_α, and is called the Bessel function of order α of the first kind:

   J_α(x) = (x/2)^α Σ_{m=0}^∞ ((−1)^m / (m! Γ(m + α + 1))) (x/2)^{2m},   (Re α ≥ 0).

Notice that this formula for J_α reduces to J_0 when α = 0, since Γ(m + 1) = m!.

There are now two cases according as r_1 − r_2 = 2α is a positive integer or not. If 2α is not a positive integer, there is another solution φ_2 of the form

   φ_2(x) = x^{−α} Σ_{k=0}^∞ c_k x^k.

We find that our calculations for the root r_1 = α carry over, provided only that we replace α by −α everywhere. Thus

   J_{−α}(x) = (x/2)^{−α} Σ_{m=0}^∞ ((−1)^m / (m! Γ(m − α + 1))) (x/2)^{2m}

gives a second solution in case 2α is not a positive integer.

Since Γ(m − α + 1) exists for m = 0, 1, 2, ..., provided α is not a positive integer, we see that J_{−α} exists in this case, even if r_1 − r_2 = 2α is a positive integer. Thus, if α is not zero or a positive integer, J_α and J_{−α} form a basis for the solutions of the Bessel equation of order α for x > 0.
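The series for J_α can be checked numerically against a classical closed form, J_{1/2}(x) = (2/(πx))^{1/2} sin x (this closed form is standard, and is derived in the exercises below). A small sketch of my own:

```python
import math

def jv_series(a, x, terms=30):
    """Partial sum of J_a(x) = sum_m (-1)^m/(m! Gamma(m+a+1)) (x/2)^(2m+a)."""
    return sum((-1) ** m / (math.factorial(m) * math.gamma(m + a + 1))
               * (x / 2) ** (2 * m + a) for m in range(terms))

x = 1.3
closed = math.sqrt(2 / (math.pi * x)) * math.sin(x)   # J_{1/2}(x) in closed form
approx = jv_series(0.5, x)
```

The two values agree to machine precision; with a = 0 the same function reproduces the J_0 series of Sec. 7.7.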


The only remaining case is that for which α is a positive integer, say α = n. There is a solution φ_2 of the form

   φ_2(x) = x^{−n} Σ_{k=0}^∞ c_k x^k + c (log x) J_n(x).

We find that

   L(φ_2)(x) = x²φ_2″(x) + xφ_2′(x) + (x² − n²)φ_2(x)
             = 0·c_0 x^{−n} + [(1 − n)² − n²] c_1 x^{1−n} + x^{−n} Σ_{k=2}^∞ {[(k − n)² − n²] c_k + c_{k−2}} x^k
               + 2cxJ_n′(x) + c (log x) L(J_n)(x) = 0,

and since L(J_n)(x) = 0 we have, on multiplying by x^n,

   (1 − 2n) c_1 x + Σ_{k=2}^∞ [k(k − 2n) c_k + c_{k−2}] x^k = −2c Σ_{m=0}^∞ (2m + n) d_{2m} x^{2m+2n}.   (5)

Here we have put

   J_n(x) = Σ_{m=0}^∞ d_{2m} x^{2m+n},

and hence

   d_{2m} = (−1)^m / (2^{2m+n} m! (m + n)!).   (6)

The series on the right side of (5) begins with x^{2n}, and since n is a positive integer we have c_1 = 0. Further, if n > 1,

   k(k − 2n) c_k + c_{k−2} = 0,   (k = 2, 3, ..., 2n − 1),

and this implies



c_1 = c_3 = c_5 = ... = c_{2n−1} = 0,

whereas

   c_2 = c_0/(2²(n − 1)),   c_4 = c_0/(2⁴·2!·(n − 1)(n − 2)),

and in general

   c_{2j} = c_0 / (2^{2j} j! (n − 1)···(n − j)),   (j = 1, 2, ..., n − 1).   (7)

Comparing the coefficients of x^{2n} in (5) we obtain

   c_{2n−2} = −2cnd_0 = −c/(2^{n−1}(n − 1)!).

On the other hand from (7) it follows that

   c_{2n−2} = c_0 / (2^{2n−2} (n − 1)! (n − 1)!),

and therefore

   c = −c_0 / (2^{n−1} (n − 1)!).   (8)

Since the series on the right side of (5) contains only even powers of x, the same must be true of the series on the left side of (5), and this implies c_{2n+1} = c_{2n+3} = ... = 0.

The coefficient c_{2n} is undetermined, but the remaining coefficients c_{2n+2}, c_{2n+4}, ... are obtained from the equations

   2m(2n + 2m) c_{2n+2m} + c_{2n+2m−2} = −2c(n + 2m) d_{2m},   (m = 1, 2, ...).

For m = 1 we have

   c_{2n+2} = −(cd_2/2)(1 + 1/(n + 1)) − c_{2n}/(4(n + 1)).

We now choose c_{2n} so that

   c_{2n}/(4(n + 1)) = (cd_2/2)(1 + 1/2 + ... + 1/n).

Since 4(n + 1) d_2 = −d_0, this gives

   c_{2n} = −(cd_0/2)(1 + 1/2 + ... + 1/n).

With this choice of c_{2n} we have

   c_{2n+2} = −(cd_2/2)[1 + (1 + 1/2 + ... + 1/(n + 1))].
For m = 2 we obtain

   c_{2n+4} = −(cd_4/2)(1/2 + 1/(n + 2)) − c_{2n+2}/(2²·2·(n + 2)).

Since 2²·2·(n + 2) d_4 = −d_2,

   −c_{2n+2}/(2²·2·(n + 2)) = −(cd_4/2)[1 + (1 + 1/2 + ... + 1/(n + 1))],

and hence

   c_{2n+4} = −(cd_4/2)[(1 + 1/2) + (1 + 1/2 + ... + 1/(n + 2))].

It can be shown by induction that

   c_{2n+2m} = −(cd_{2m}/2)[(1 + 1/2 + ... + 1/m) + (1 + 1/2 + ... + 1/(m + n))],   (m = 1, 2, ...).

Finally, we obtain for our solution φ_2 the function given by

   φ_2(x) = c_0 x^{−n} + c_0 x^{−n} Σ_{j=1}^{n−1} x^{2j} / (2^{2j} j! (n − 1)···(n − j))
          − (cd_0/2)(1 + 1/2 + ... + 1/n) x^n
          − (c/2) Σ_{m=1}^∞ d_{2m} [(1 + 1/2 + ... + 1/m) + (1 + 1/2 + ... + 1/(m + n))] x^{n+2m}
          + c (log x) J_n(x),


where c_0 and c are constants related by (8), and d_{2m} is given by (6). When c = 1 the resulting function φ_2 is often denoted by K_n. In this case c_0 = −2^{n−1}(n − 1)!, and therefore we may write

   K_n(x) = −(1/2)(x/2)^{−n} Σ_{j=0}^{n−1} ((n − j − 1)!/j!) (x/2)^{2j}
          − (1/(2·n!))(1 + 1/2 + ... + 1/n)(x/2)^n
          − (1/2)(x/2)^n Σ_{m=1}^∞ ((−1)^m/(m!(m + n)!)) [(1 + 1/2 + ... + 1/m) + (1 + 1/2 + ... + 1/(m + n))] (x/2)^{2m}
          + (log x) J_n(x).

This formula reduces to the one for K_0(x) when n = 0, provided we interpret the first two sums on the right as zero in this case. The function K_n is called a Bessel function of order n of the second kind.

CYP QUESTIONS

1. a. Prove that the series defining J_α and J_{−α} converge for |x| < ∞.

   b. Prove that the infinite series involved in the definition of K_n converges for |x| < ∞.

2. Let φ be any solution for x > 0 of the Bessel equation of order α,

      x²y″ + xy′ + (x² − α²)y = 0,

   and put ψ(x) = x^{1/2}φ(x). Show that ψ satisfies the equation

      y″ + (1 + (1/4 − α²)/x²) y = 0

   for x > 0.

3. a. Show that

      x^{1/2} J_{1/2}(x) = (2/π)^{1/2} sin x.

   b. Show that

      x^{1/2} J_{−1/2}(x) = (2/π)^{1/2} cos x.

   (Hint: from question 2, ψ(x) = x^{1/2}J_{1/2}(x) satisfies y″ + y = 0 for x > 0, and hence ψ(x) = c_1 cos x + c_2 sin x, where c_1, c_2 are constants. Show that c_1 = 0 and c_2 = (2/π)^{1/2}.)

   (Note: it can be shown that Γ(1/2) = π^{1/2}.)

4. a. Show that J_0′(x) = −J_1(x).

   b. Show that K_0′(x) = −K_1(x).

5. Prove that between any two positive zeros of J_0 there is a zero of J_1. (Hint: use question 4 and Rolle's theorem.)

6. Show that if α > 0 then J_α has an infinity of positive zeros. (Hint: if ψ(x) = x^{1/2}J_α(x) then ψ satisfies

      y″ + φ(x) y = 0,   where   φ(x) = 1 + (1/4 − α²)/x².   (*)

   For all large enough x, say x > x_0, φ(x) > 1/4. Compare (*) with the equation y″ + (1/4)y = 0 satisfied by χ(x) = sin (x/2). Apply the result of question 4 of Sec. 6.4.)

7. For fixed α ≥ 0 and λ > 0, let ψ_λ(x) = x^{1/2}J_α(λx). Show that

      ψ_λ″ + λ²ψ_λ = (α² − 1/4) ψ_λ / x².

   (Hint: ψ_λ(x) = λ^{−1/2}ψ(λx), where ψ is defined in question 6.)

8. If λ, μ are positive, show that

      (λ² − μ²) ∫₀¹ ψ_λ(x) ψ_μ(x) dx = ψ_λ(1) ψ_μ′(1) − ψ_μ(1) ψ_λ′(1).   (*)

   (Hint: use question 7 to show that

      (ψ_λ′ψ_μ − ψ_λψ_μ′)′ = (μ² − λ²) ψ_λ ψ_μ,

   and then integrate from 0 to 1.)

9. If α > 0, and λ, μ are positive zeros of J_α, show that

      ∫₀¹ ψ_λ(x) ψ_μ(x) dx = ∫₀¹ x J_α(λx) J_α(μx) dx = 0

   if λ ≠ μ. (Hint: question 8.)

10. If α > 0, λ > 0, and J_α(λ) = 0, show that

      ∫₀¹ ψ_λ²(x) dx = ∫₀¹ x J_α²(λx) dx = (1/2)(J_α′(λ))².

   (Hint: differentiate (*) in question 8 with respect to μ, and then set μ = λ; integrate from 0 to 1.)

11. Define 1/Γ(k), when k is a non-positive integer, to be zero. Show that if n is a positive integer the formula for J_{−n}(x) gives

      J_{−n}(x) = (−1)^n J_n(x).

12. a. Use the formula for J_α(x) to show that

      (x^α J_α)′(x) = x^α J_{α−1}(x).

    b. Prove that

      (x^{−α} J_α)′(x) = −x^{−α} J_{α+1}(x).

13. Show that

      J_{α−1}(x) − J_{α+1}(x) = 2J_α′(x)   and   J_{α−1}(x) + J_{α+1}(x) = 2αx^{−1} J_α(x).

   (Hint: use the results of question 12.)

14. a. Show that between any two positive zeros of J_α there is a zero of J_{α+1}. (Hint: use question 12(b) and Rolle's theorem.)

    b. Show that between any two positive zeros of J_{α+1} there is a zero of J_α. (Hint: use question 12(a) and Rolle's theorem.)
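Identities like the recursion in question 13 are easy to spot-check numerically with a truncated version of the series derived in Sec. 7.8 (a sketch of my own, not part of the exercises):

```python
import math

def jv(a, x, terms=30):
    """Truncated series J_a(x) = sum_m (-1)^m/(m! Gamma(m+a+1)) (x/2)^(2m+a)."""
    return sum((-1) ** m / (math.factorial(m) * math.gamma(m + a + 1))
               * (x / 2) ** (2 * m + a) for m in range(terms))

x, a = 1.7, 2.0
# Check J_{a-1}(x) + J_{a+1}(x) = (2a/x) J_a(x) at a sample point.
lhs = jv(a - 1, x) + jv(a + 1, x)
rhs = (2 * a / x) * jv(a, x)
```

Such a check does not prove the identity, but it catches sign and index errors instantly when working through the exercises.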
Unit - 8

EXISTENCE AND
UNIQUENESS OF
SOLUTIONS TO FIRST
ORDER EQUATIONS

8.1 INTRODUCTION

In this chapter we consider the general first order equation

   y′ = f(x, y),   (1)

where f is some continuous function. Only in rather special cases is it possible to find explicit analytic expressions for the solutions of (1). We have already considered one such special case, namely the linear equation

   y′ = g(x)y + h(x),   (2)

where g, h are continuous on some interval I. Any solution of (2) can be written in the form

   φ(x) = e^{Q(x)} ∫_{x_0}^x e^{−Q(t)} h(t) dt + c e^{Q(x)},   (3)

where

   Q(x) = ∫_{x_0}^x g(t) dt,

x_0 is in I, and c is a constant. In Secs. 8.2 and 8.3 we indicate procedures which can be used to solve other important special cases of (1).
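Formula (3) can be checked symbolically for a concrete choice (my own illustration: g(x) = 1, h(x) = 1, x_0 = 0, so Q(x) = x, and (2) reads y′ = y + 1):

```python
import sympy as sp

x, t, c = sp.symbols('x t c')
Q = x                                    # Q(x) = integral of g(t) = 1 from 0 to x
# phi as given by formula (3):
phi = sp.exp(Q) * sp.integrate(sp.exp(-t) * 1, (t, 0, x)) + c * sp.exp(Q)
# phi should satisfy y' = g(x) y + h(x) = y + 1 identically:
residual = sp.simplify(sp.diff(phi, x) - (phi + 1))
```

Here `residual` simplifies to 0 for every value of the constant c, confirming that (3) produces solutions of (2) in this instance.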

Our main purpose is to prove that a

wide class of equations of the form

(1) have solutions, and that solutions

to initial value problems are unique.

If f is not linear, there are certain limitations which must be expected concerning any general existence theorem.

Example.

y′ = y².

Solution.

Here f(x, y) = y², and we see f has derivatives of all orders with respect to x and y at every point in the (x, y)-plane. A solution of this equation satisfying the initial condition φ(1) = −1 is given by

   φ(x) = −1/x.

However, this solution ceases to exist at x = 0, even though f is a nice function there. This example shows that any general existence theorem for (1) can only assert the existence of a solution on some interval near the initial point.
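This example is easy to reproduce with a computer algebra system (a sketch of my own using sympy's ODE solver):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
# Solve y' = y^2 with the initial condition y(1) = -1.
sol = sp.dsolve(sp.Eq(y(x).diff(x), y(x) ** 2), y(x), ics={y(1): -1})
# The solution is y(x) = -1/x, which blows up at x = 0
# even though f(x, y) = y^2 is smooth everywhere.
```

The blow-up at x = 0 is visible directly from the solution's closed form; no property of f at x = 0 warns of it.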

The above phenomenon does not occur in the case of the linear equation (2), for it is clear from (3) that any solution exists on all of the interval I. This points up one of the fundamental difficulties we encounter when we consider nonlinear equations: the equation often gives no clue as to how far a solution will exist.

We prove that initial value problems

for equation (1) have unique


solutions which can be obtained by

an approximation process, provided f

satisfies an additional condition, the

Lipschitz condition. We first

concentrate our attention on the case

when f is real-valued, and later show

how the results carry over to the

situation when f is complex-valued.

8.2. EQUATIONS WITH VARIABLES


SEPARATED

A first order equation y′ = f(x, y) is said to have the variables separated if f can be written in the form

   f(x, y) = g(x)/h(y),

where g, h are functions of a single argument. In this case we may write our equation as

   h(y) dy = g(x) dx.   (1)

More precisely, let us discuss the equation (1) in the case g and h are continuous real-valued functions. If φ is a real-valued solution of (1) on some interval I containing a point x_0, then

   h(φ(x)) φ′(x) = g(x)

for all x in I, and therefore

   ∫_{x_0}^x h(φ(t)) φ′(t) dt = ∫_{x_0}^x g(t) dt   (2)

for all x in I. Letting u = φ(t) in the integral on the left, we see that (2) may be written as

   ∫_{y_0}^y h(u) du = ∫_{x_0}^x g(t) dt,

where y_0 = φ(x_0) and y = φ(x).

Conversely, suppose x and y are related by the formula

   ∫_{y_0}^y h(u) du = ∫_{x_0}^x g(t) dt,   (3)

and that this defines implicitly a differentiable function φ for x in I. Then this function satisfies

   ∫_{y_0}^{φ(x)} h(u) du = ∫_{x_0}^x g(t) dt

for all x in I, and differentiating we obtain

   h(φ(x)) φ′(x) = g(x),

which shows that φ is a solution of (1) on I.

Theorem 1.

Let g, h be continuous real-valued functions for a ≤ x ≤ b, c ≤ y ≤ d respectively, and consider the equation

   h(y) dy = g(x) dx.   (1)

If G, H are any functions such that G′ = g, H′ = h, and c is any constant such that the relation

   H(y) = G(x) + c

defines a real-valued differentiable function φ for x in some interval I contained in a ≤ x ≤ b, then φ will be a solution of (1) on I. Conversely, if φ is a solution of (1) on I, it satisfies the relation

   H(φ(x)) = G(x) + c

on I, for some constant c.

Proof.

Integrating (1) we obtain

   ∫ h(y) dy = ∫ g(x) dx + c,

where c is a constant, and the integrals are anti-derivatives. Thus

   H(y) = ∫ h(y) dy,   G(x) = ∫ g(x) dx

represent any two functions H, G such that H′ = h, G′ = g. Then any differentiable function φ which is defined implicitly by the relation

   H(y) = G(x) + c   (4)

will be a solution of (1). Therefore it is usual to identify any solution thus obtained with the relation (4).

Example.

Suppose h(y) = 1. Then y′ = g(x), and every solution has the form

   φ(x) = G(x) + c,   (5)

where G is any function on a ≤ x ≤ b such that G′ = g, and c is a constant. Moreover, if c is any constant, (5) defines a solution of y′ = g(x). Thus we have found all solutions of y′ = g(x) on a ≤ x ≤ b.

Example.

Consider the case g(x) = 1, for then (1) becomes

   h(y) dy = dx.   (6)

Thus, if H′ = h, any differentiable function φ defined implicitly by the relation

   H(y) = x + c,   (7)

where c is a constant, will be a solution of (6). For instance, consider

   y′ = y².   (8)

Here h(y) = 1/y², which we note is not continuous at y = 0. We have

   dy/y² = dx,

and thus the relation (7) becomes

   −1/y = x + c.

Thus, if c is any constant, the function φ given by

   φ(x) = −1/(x + c)   (9)

is a solution of (8), provided x ≠ −c.

Example.

   y′ = 3y^{2/3}.   (10)

This leads to

   dy / y^{2/3} = 3 dx

if y ≠ 0, and hence to

   y = (x + c)³,

where c is a constant. Thus the function φ given by

   φ(x) = (x + c)³   (11)

will be a solution of (10) for any constant c.

There may be several solutions satisfying a given initial condition. Thus the two functions φ and ψ given by

   φ(x) = x³,   ψ(x) = 0,   (−∞ < x < ∞),

are solutions of (10) which pass through the origin. Actually the situation is much worse than appears, for there are infinitely many functions which are solutions of (10) passing through the origin. To see this let k be any positive number, and define φ_k by

   φ_k(x) = 0,   (−∞ < x ≤ k),
   φ_k(x) = (x − k)³,   (k < x < ∞).

Then it is not difficult to see that φ_k is a solution of (10) for all real x, and clearly φ_k(0) = 0.
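Both solutions through the origin can be verified pointwise. A quick plain-Python check of my own (for x > 0, where the fractional power is unambiguous):

```python
# y' = 3 y^(2/3): verify phi(x) = x^3 and the zero function both satisfy it.
f = lambda y: 3 * y ** (2 / 3)

for xv in [0.5, 1.0, 2.0]:
    phi, dphi = xv ** 3, 3 * xv ** 2       # phi(x) = x^3, phi'(x) = 3x^2
    assert abs(dphi - f(phi)) < 1e-9       # phi' = 3 phi^(2/3)

assert f(0.0) == 0.0                       # the zero solution also satisfies it
```

The failure of uniqueness here is tied to h(y) = y^{−2/3} blowing up at y = 0; the Lipschitz condition of the next sections rules this out.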

CYP QUESTIONS

1. Find all real-valued solutions of the following equations:

   a. y′ = x²y
   b. yy′ = x
   c. y′ = (x + x²)/(y − y²)
   d. y′ = e^x y/(1 + e^x)
   e. y′ = x²y² − 4x²

2. a. Show that the solution φ of y′ = y² which passes through the point (x_0, y_0) is given by

      φ(x) = y_0 / (1 − y_0(x − x_0)).

   (Note: the identically zero solution can be obtained from this formula by letting y_0 = 0.)

   b. For which x is φ a well-defined function?
   c. For which x is φ a solution of the problem y′ = y², y(x_0) = y_0?

1/
3. a. Find the solution of y = 2y
2
passing through the point

(x 0 , y 0 ), where y 0 > 0.

b. find all solutions of this

equation passing through

(x 0- 0).

4. A function f defined for real x, y is said to be homogeneous of degree k if

      f(tx, ty) = t^k f(x, y)

   for all t, x, y. In case f is homogeneous of degree zero we have f(tx, ty) = f(x, y), and then we say the equation y′ = f(x, y) is homogeneous. Such equations can be reduced to ones with variables separated. To see this, let y = ux in y′ = f(x, y). Then we obtain

      xu′ + u = f(x, ux) = f(1, u),

   and hence

      u′ = (f(1, u) − u)/x,

   which is an equation for u with variables separated.

   Find all real-valued solutions of the following equations:

   a. y′ = (x + y)/(x − y)
   b. y′ = y²/(xy + x²)
   c. y′ = (x² + xy + y²)/x²
   d. y′ = (y + x e^{2y/x})/x
5. The equation

      y′ = (a_1 x + b_1 y + c_1)/(a_2 x + b_2 y + c_2),   (*)

   where a_1, b_1, c_1, a_2, b_2, c_2 are constants (c_1, c_2 not both 0), can be reduced to a homogeneous equation. Assume we do not have the simple equation y′ = c_1/c_2, and let x = ξ + h, y = η + k, where h, k are constants. Then (*) becomes

      dη/dξ = (a_1 ξ + b_1 η + (a_1 h + b_1 k + c_1)) / (a_2 ξ + b_2 η + (a_2 h + b_2 k + c_2)).

   If h, k satisfy

      a_1 h + b_1 k + c_1 = 0,   a_2 h + b_2 k + c_2 = 0,   (**)

   the equation becomes homogeneous.

   If the equations (**) have no solution, then a_1 b_2 − a_2 b_1 = 0, and in this case either the substitution u = a_1 x + b_1 y + c_1 or u = a_2 x + b_2 y + c_2 leads to a separation of variables.

   Solve the following equations:

   a. y′ = (x − y + 2)/(x + y − 1)
   b. y′ = (2x + 3y + 1)/(x − 2y − 1)
   c. y′ = (x + y + 1)/(2x + 2y − 1)

6. a. Show that the method of question 5 can be used to reduce an equation of the form

      y′ = f((a_1 x + b_1 y + c_1)/(a_2 x + b_2 y + c_2))

   to a homogeneous equation.

   b. Solve the equation

      y′ = (1/2)((x + y − 1)/(x + 2))².

7. Suppose there is a family F of curves in a region S in the plane with the property that through each point (x, y) of S there passes one, and only one, curve C of F, and that the slope of the tangent of C at (x, y) is given by f(x, y), where f is continuous. If a curve in F can be written as (x, φ(x)), where x runs over some interval I, then φ is a solution of y' = f(x, y). If ψ is any solution of the equation y' = −1/f(x, y), then the curve C' given by the points (x, ψ(x)) will have a tangent at each of its points (x, y) which is perpendicular to the curve in F passing through (x, y). The set G of all such curves C' is called the set of orthogonal trajectories to the family F.
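   For instance (a check of our own, not from the text), the circles x² + y² = c have slope −x/y at (x, y), while the lines y = cx through the origin have slope y/x; perpendicularity shows up as a slope product of −1:

   ```python
   # Slope of the circle x^2 + y^2 = c through (x, y) is -x/y (implicit
   # differentiation); slope of the line through the origin and (x, y) is
   # y/x.  Their product should be -1 wherever both are defined.
   points = [(1.0, 2.0), (0.5, -1.5), (-2.0, 0.75)]
   for x, y in points:
       product = (-x / y) * (y / x)
       print(product)   # -1 (up to rounding) at each point
   ```
   
   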

The following relations determine a family of curves, one curve for each value of the constant c. Find the orthogonal trajectories of these families.

   a. x² + y² = c, (c > 0)
   b. y = cx
   c. y = cx²
   d. x²/2 + y²/3 = c, (c > 0)
   e. x² − y² = c
   f. y = c e^{x²}
8.3. EXACT EQUATIONS

Suppose the first order equation y' = f(x, y) is written in the form

   M(x, y) + N(x, y) y' = 0,   (1)

where M, N are real-valued functions defined for real x, y on some rectangle R. The equation (1) is said to be exact in R if there exists a function F having continuous first partial derivatives there such that

   ∂F/∂x = M,  ∂F/∂y = N   (2)

in R.

Theorem 2.

Suppose the equation

   M(x, y) + N(x, y) y' = 0   (1)

is exact in a rectangle R, and F is a real-valued function such that

   ∂F/∂x = M,  ∂F/∂y = N   (2)

in R. Every differentiable function φ defined implicitly by a relation

   F(x, y) = c,  (c = constant),

is a solution of (1), and every solution of (1) whose graph lies in R arises this way.

Proof.

If (1) is exact in R, and F is a function satisfying (2), then (1) becomes

   ∂F/∂x (x, y) + ∂F/∂y (x, y) y' = 0.

If φ is any solution on some interval I, then

   ∂F/∂x (x, φ(x)) + ∂F/∂y (x, φ(x)) φ'(x) = 0   (3)

for all x in I. If Φ(x) = F(x, φ(x)), then equation (3) just says that Φ'(x) = 0, and hence

   F(x, φ(x)) = c,

where c is some constant. Thus the solution φ must be a function which is given implicitly by the relation

   F(x, y) = c.   (4)

Looking at this argument in reverse we see that if φ is a differentiable function on some interval I defined implicitly by the relation (4), then

   F(x, φ(x)) = c

for all x in I, and a differentiation yields (3). Thus φ is a solution of (1).

Example

If

   y' = −x/y   (5)

is written in the form

   x dx + y dy = 0,

it is clear that the left side is the differential of (x² + y²)/2. Thus any differentiable function which is defined by the relation

   x² + y² = c,  (c = constant),

is a solution of (5). Note that the equation (5) does not make sense when y = 0.

The above example is also a special case of an equation with variables separated. Indeed any such equation is a special case of an exact equation, for if we write the equation as

   g(x) dx = h(y) dy,

it is clear that an F is given by

   F(x, y) = G(x) − H(y),

where G' = g, H' = h.

Theorem.

Let M, N be two real-valued functions which have continuous first partial derivatives on some rectangle

   R: |x − x₀| ≤ a,  |y − y₀| ≤ b.

Then the equation M(x, y) + N(x, y) y' = 0 is exact in R if, and only if,

   ∂M/∂y = ∂N/∂x   (6)

in R.
Proof.

Suppose M(x, y) dx + N(x, y) dy = 0 is exact, and F is a function which has continuous second derivatives such that

   ∂F/∂x = M,  ∂F/∂y = N.

Then

   ∂²F/∂y∂x = ∂M/∂y,  ∂²F/∂x∂y = ∂N/∂x,

and since for such a function

   ∂²F/∂y∂x = ∂²F/∂x∂y,

we must have

   ∂M/∂y = ∂N/∂x.

Now suppose (6) is satisfied in R. We need to find a function F satisfying

   ∂F/∂x = M,  ∂F/∂y = N.

To see how to do this, we note that if we had such a function, then

   F(x, y) − F(x₀, y₀) = F(x, y) − F(x₀, y) + F(x₀, y) − F(x₀, y₀)

      = ∫_{x₀}^{x} ∂F/∂x (s, y) ds + ∫_{y₀}^{y} ∂F/∂y (x₀, t) dt

      = ∫_{x₀}^{x} M(s, y) ds + ∫_{y₀}^{y} N(x₀, t) dt.

Similarly we would have

   F(x, y) − F(x₀, y₀) = F(x, y) − F(x, y₀) + F(x, y₀) − F(x₀, y₀)

      = ∫_{y₀}^{y} ∂F/∂y (x, t) dt + ∫_{x₀}^{x} ∂F/∂x (s, y₀) ds

      = ∫_{y₀}^{y} N(x, t) dt + ∫_{x₀}^{x} M(s, y₀) ds.   (7)

We now define F by the formula

   F(x, y) = ∫_{x₀}^{x} M(s, y) ds + ∫_{y₀}^{y} N(x₀, t) dt.   (8)

This definition implies that F(x₀, y₀) = 0, and that

   ∂F/∂x (x, y) = M(x, y)

for all (x, y) in R. From (7) we would guess that F is also given by

   F(x, y) = ∫_{y₀}^{y} N(x, t) dt + ∫_{x₀}^{x} M(s, y₀) ds.   (9)

This is in fact true, and is a consequence of the assumption (6). Once this has been shown, it is clear from (9) that

   ∂F/∂y (x, y) = N(x, y)

for all (x, y) in R, and we have found our F.

In order to show that (9) is valid, where F is the function given by (8), let us consider the difference

   F(x, y) − [ ∫_{y₀}^{y} N(x, t) dt + ∫_{x₀}^{x} M(s, y₀) ds ]

      = ∫_{x₀}^{x} [M(s, y) − M(s, y₀)] ds − ∫_{y₀}^{y} [N(x, t) − N(x₀, t)] dt

      = ∫_{x₀}^{x} [ ∫_{y₀}^{y} ∂M/∂y (s, t) dt ] ds − ∫_{y₀}^{y} [ ∫_{x₀}^{x} ∂N/∂x (s, t) ds ] dt

      = ∫_{x₀}^{x} ∫_{y₀}^{y} [ ∂M/∂y (s, t) − ∂N/∂x (s, t) ] dt ds,

which is zero by virtue of (6). This completes our proof of the Theorem.


Example.

Consider the equation

   y' = (3x² − 2xy) / (x² − 2y),   (10)

which we write as

   (3x² − 2xy) dx + (2y − x²) dy = 0.

Here

   M(x, y) = 3x² − 2xy,  N(x, y) = 2y − x²,

and a computation shows that

   ∂M/∂y (x, y) = ∂N/∂x (x, y) = −2x,

which shows that our equation is exact for all x, y. We know there is an F such that

   ∂F/∂x = M,  ∂F/∂y = N.

Thus F satisfies

   ∂F/∂x (x, y) = 3x² − 2xy,

which implies that for each fixed y,

   F(x, y) = x³ − x²y + f(y),   (11)

where f is independent of x. Now ∂F/∂y = N tells us that

   −x² + f'(y) = 2y − x²,

or that f'(y) = 2y. Thus a choice for f is given by f(y) = y², and placing this back into (11) we obtain finally

   F(x, y) = x³ − x²y + y².

Any differentiable function φ which is defined implicitly by a relation

   x³ − x²y + y² = c,   (12)

where c is a constant, will be a solution of (10), and all solutions of (10) arise in this way. Often the solutions are identified with the relations (12). It is proved in advanced calculus texts that (12) will define a unique differentiable function near, and passing through, a given point (x₀, y₀) provided that

   F(x₀, y₀) = c, and that ∂F/∂y (x₀, y₀) ≠ 0.

Notice that the only points (x₀, y₀) satisfying (12) for which

   ∂F/∂y (x₀, y₀) = 0

are those satisfying

   −x₀² + 2y₀ = 0,

and these are precisely the points where the given equation (10) is not defined. Thus, if (x₀, y₀) is a point for which (3x₀² − 2x₀y₀) / (x₀² − 2y₀) is defined, there will be a unique solution of (10) whose graph passes through (x₀, y₀).
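The construction in the proof can be mirrored numerically (a sketch of ours, not from the text): check ∂M/∂y = ∂N/∂x by finite differences, and build F from formula (8) by quadrature with (x₀, y₀) = (0, 0), comparing against the closed form x³ − x²y + y² found above.

```python
# Worked example data: M = 3x^2 - 2xy, N = 2y - x^2, base point (0, 0).
# F(x,y) = ∫_0^x M(s,y) ds + ∫_0^y N(0,t) dt, evaluated by the midpoint rule.
def M(x, y): return 3 * x * x - 2 * x * y
def N(x, y): return 2 * y - x * x

def midpoint(g, a, b, n=2000):
    # composite midpoint rule for ∫_a^b g
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

def F(x, y):
    return midpoint(lambda s: M(s, y), 0.0, x) + midpoint(lambda t: N(0.0, t), 0.0, y)

x, y = 1.3, -0.7
eps = 1e-5
dM_dy = (M(x, y + eps) - M(x, y - eps)) / (2 * eps)
dN_dx = (N(x + eps, y) - N(x - eps, y)) / (2 * eps)
print(dM_dy, dN_dx)                        # both near -2x = -2.6
print(F(x, y), x**3 - x**2 * y + y**2)     # the two values agree closely
```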

CYP QUESTIONS

1. The equations below are written in the form M(x, y) dx + N(x, y) dy = 0, where M, N exist on the whole plane. Determine which equations are exact there, and solve these.

   a. 2xy dx + (x² + 3y²) dy = 0
   b. (x² + xy) dx + xy dy = 0
   c. eˣ dx + e^y (y + 1) dy = 0
   d. cos x cos²y dx − sin x sin 2y dy = 0
   e. x²y² dx − x²y² dy = 0
   f. (x + y) dx + (x − y) dy = 0
   g. (2y e^{2x} + 2x cos y) dx + (e^{2x} − x² sin y) dy = 0
   h. (3x² log |x| + x² + y) dx + x dy = 0

2. Even though an equation M(x, y) dx + N(x, y) dy = 0 may not be exact, sometimes it is not too difficult to find a function u, nowhere zero, such that

   u(x, y) M(x, y) dx + u(x, y) N(x, y) dy = 0

   is exact. Such a function is called an integrating factor. For example, y dx − x dy = 0 is not exact, but multiplying the equation by u(x, y) = 1/y² makes it exact for y ≠ 0. Solutions are then given by y = cx.

   Find an integrating factor for each of the following equations, and solve them.

   a. (2y³ + 2) dx + 3xy² dy = 0
   b. cos x cos y dx − 2 sin x sin y dy = 0
   c. (5x³y² + 2y) dx + (3x⁴y + 2x) dy = 0
   d. (e^y + x e^y) dx + x e^y dy = 0

3. Consider the equation M(x, y) dx + N(x, y) dy = 0, where M, N have continuous first partial derivatives on some rectangle R. Prove that a function u on R, having continuous first partial derivatives, is an integrating factor if, and only if,

   u (∂M/∂y − ∂N/∂x) = N ∂u/∂x − M ∂u/∂y

   on R.
4. a. Under the same conditions as in question 3, show that if the equation

      M(x, y) dx + N(x, y) dy = 0

      has an integrating factor u, which is a function of x alone, then

      p = (1/N) (∂M/∂y − ∂N/∂x)

      is a continuous function of x alone.

   b. If p is continuous and independent of y, show that an integrating factor is given by u(x) = e^{P(x)}, where P is any function satisfying P' = p.

5. a. Under the same conditions as in question 3, show that if

      M(x, y) dx + N(x, y) dy = 0

      has an integrating factor u, which is a function of y alone, then

      q = (1/M) (∂N/∂x − ∂M/∂y)

      is a continuous function of y alone.

   b. If q is continuous, and independent of x, show that an integrating factor is given by u(y) = e^{Q(y)}, where Q is any function such that Q' = q.

6. Consider the linear equation of the first order

   y' + a(x) y = b(x),

   where a, b are continuous on some interval I.

   a. Show that there is an integrating factor which is a function of x alone. (Hint: question 4.)

   b. Solve this equation, using an integrating factor.

8.4 THE METHOD OF SUCCESSIVE APPROXIMATIONS

We now face up to the general problem of finding solutions of the equation

   y' = f(x, y),   (1)

where f is any continuous real-valued function defined on some rectangle

   R: |x − x₀| ≤ a,  |y − y₀| ≤ b,  (a, b > 0),

in the real (x, y) plane. Our object is to show that on some interval I containing x₀ there is a solution φ of (1) satisfying

   φ(x₀) = y₀.   (2)

By this we mean there is a real-valued differentiable function φ satisfying (2) such that the points (x, φ(x)) are in R for x in I, and

   φ'(x) = f(x, φ(x))

for all x in I. Such a function φ is called a solution to the initial value problem

   y' = f(x, y),  y(x₀) = y₀,   (3)

on I.

Our first step will be to show that the initial value problem is equivalent to an integral equation, namely

   y = y₀ + ∫_{x₀}^{x} f(t, y) dt   (4)

on I. By a solution of this equation on I is meant a real-valued continuous function φ on I such that (x, φ(x)) is in R for all x in I, and

   φ(x) = y₀ + ∫_{x₀}^{x} f(t, φ(t)) dt   (5)

for all x in I.

Theorem 4.

A function φ is a solution of the initial value problem (3) on an interval I if, and only if, it is a solution of the integral equation (4) on I.

Proof.

Suppose φ is a solution of the initial value problem on I. Then

   φ'(t) = f(t, φ(t))   (6)

on I. Since φ is continuous on I, and f is continuous on R, the function F defined by

   F(t) = f(t, φ(t))

is continuous on I. Integrating (6) from x₀ to x we obtain

   φ(x) = φ(x₀) + ∫_{x₀}^{x} f(t, φ(t)) dt,

and since φ(x₀) = y₀ we see that φ is a solution of (4).

Conversely, suppose φ satisfies (5) on I. Differentiating we find, using the fundamental theorem of integral calculus, that

   φ'(x) = f(x, φ(x))

for all x on I. Moreover from (5) it is clear that φ(x₀) = y₀, and thus φ is a solution of the initial value problem (3).
We now turn our attention to solving (4). As a first approximation to a solution we consider the function φ₀ defined by

   φ₀(x) = y₀.

This function satisfies the initial condition φ₀(x₀) = y₀, but does not in general satisfy (4). However, if we compute

   φ₁(x) = y₀ + ∫_{x₀}^{x} f(t, φ₀(t)) dt = y₀ + ∫_{x₀}^{x} f(t, y₀) dt,

we might expect that φ₁ is a closer approximation to a solution than φ₀. In fact, if we continue the process and define successively

   φ₀(x) = y₀,
   φ_{k+1}(x) = y₀ + ∫_{x₀}^{x} f(t, φ_k(t)) dt  (k = 0, 1, 2, ...),   (7)

we might expect, on taking the limit as k → ∞, that we would obtain

   φ_k(x) → φ(x),

where φ would satisfy

   φ(x) = y₀ + ∫_{x₀}^{x} f(t, φ(t)) dt.

Thus φ would be our desired solution. We call the functions φ₀, φ₁, ... defined by (7) successive approximations to a solution of the integral equation (4), or the initial value problem (3). One way to picture the successive approximations is to think of a machine S (for solving) which converts functions φ into new functions S(φ) defined by

   S(φ)(x) = y₀ + ∫_{x₀}^{x} f(t, φ(t)) dt.

A solution of the initial value problem (3) would then be a function which moves through the machine untouched, that is, a function φ satisfying S(φ) = φ. Starting with φ₀(x) = y₀, we see that S converts φ₀ into φ₁, and then φ₁ into φ₂. In general S(φ_k) = φ_{k+1}, and ultimately we end up with a φ such that S(φ) = φ; see Fig.

Of course we need to show that the φ_k merit the name, that is, we need to show that all the φ_k exist on some interval I containing x₀, and that they converge there to a solution of (4), or of (3).

Example.

Consider the problem

   y' = xy,  y(0) = 1.   (8)

Solution. The integral equation corresponding to this problem is

   y = 1 + ∫_{0}^{x} t y dt,

and the successive approximations are given by

   φ₀(x) = 1,
   φ_{k+1}(x) = 1 + ∫_{0}^{x} t φ_k(t) dt  (k = 0, 1, 2, ...).

Thus

   φ₁(x) = 1 + ∫_{0}^{x} t dt = 1 + x²/2,

   φ₂(x) = 1 + ∫_{0}^{x} t (1 + t²/2) dt = 1 + x²/2 + x⁴/(2·4),

and we recognize φ_k(x) as a partial sum for the series expansion of the function

   φ(x) = e^{x²/2}.

We know that this series converges for all real x, and this just means that

   φ_k(x) → φ(x),  (k → ∞),

for all real x. The function φ is the solution of the problem (8).
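The computation above can be automated (a sketch of ours, not from the text): each φ_k is a polynomial, so the integral in (7) can be carried out exactly on coefficient lists.

```python
# Picard iteration for y' = x*y, y(0) = 1.  A polynomial sum(c[n]*x^n) is
# stored as its coefficient list c.
def picard_step(coeffs):
    # t * phi_k(t) shifts every power up by one; integrating from 0 to x
    # divides the coefficient of t^n by n+1 and shifts up once more.
    shifted = [0.0] + list(coeffs)                         # t * phi_k(t)
    integral = [0.0] + [c / (n + 1) for n, c in enumerate(shifted)]
    integral[0] = 1.0                                      # add y0 = 1
    return integral

phi = [1.0]                                                # phi_0(x) = 1
for _ in range(3):
    phi = picard_step(phi)

print(phi)   # 1 + x^2/2 + x^4/8 + x^6/48, a partial sum of e^{x^2/2}
```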

Let us now show that there is an interval I containing x₀ where all the functions φ_k, k = 0, 1, 2, ..., defined by (7) exist. Since f is continuous on R, it is bounded there, that is, there exists a constant M > 0 such that

   |f(x, y)| ≤ M

for all (x, y) in R. Let α be the smaller of the two numbers a, b/M. Then we prove that the φ_k are all defined on |x − x₀| ≤ α.

Theorem.

The successive approximations φ_k, defined by (7), exist as continuous functions on

   I: |x − x₀| ≤ α = minimum {a, b/M},

and (x, φ_k(x)) is in R for x in I. Indeed, the φ_k satisfy

   |φ_k(x) − y₀| ≤ M |x − x₀|   (9)

for all x in I.

Note: Since for x in I, |x − x₀| ≤ b/M, the inequality (9) implies that

   |φ_k(x) − y₀| ≤ b

for x in I, which shows that the points (x, φ_k(x)) are in R for x in I. The precise geometric interpretation of the inequality (9) is that the graph of each φ_k lies in the region T in R bounded by the two lines y − y₀ = M(x − x₀), y − y₀ = −M(x − x₀), and the lines x − x₀ = α, x − x₀ = −α.
Proof of Theorem. Clearly φ₀ exists on I as a continuous function, and satisfies (9) with k = 0. Now

   φ₁(x) = y₀ + ∫_{x₀}^{x} f(t, y₀) dt,

and hence

   |φ₁(x) − y₀| = | ∫_{x₀}^{x} f(t, y₀) dt | ≤ | ∫_{x₀}^{x} |f(t, y₀)| dt | ≤ M |x − x₀|,

which shows that φ₁ satisfies the inequality (9). Since f is continuous on R, the function F₀ defined by F₀(t) = f(t, y₀) is continuous on I. Thus φ₁, which is given by

   φ₁(x) = y₀ + ∫_{x₀}^{x} F₀(t) dt,

is continuous on I.

Now assume the theorem has been proved for the functions φ₀, φ₁, ..., φ_k. We prove it is valid for φ_{k+1}. Indeed the proof is just a repetition of the above. We know that (t, φ_k(t)) is in R for t in I. Thus the function F_k given by

   F_k(t) = f(t, φ_k(t))

exists for t in I. It is continuous on I since f is continuous on R, and φ_k is continuous on I. Therefore φ_{k+1}, which is given by

   φ_{k+1}(x) = y₀ + ∫_{x₀}^{x} F_k(t) dt,

exists as a continuous function on I. Moreover

   |φ_{k+1}(x) − y₀| ≤ | ∫_{x₀}^{x} |F_k(t)| dt | ≤ M |x − x₀|,

which shows that φ_{k+1} satisfies (9). The theorem is thus proved by induction.
CYP QUESTIONS

1. Consider the initial value problem

   y' = 3y + 1,  y(0) = 2.

   a. Show that all the successive approximations φ₀, φ₁, ... exist for all real x.

   b. Compute the first four approximations φ₀, φ₁, φ₂, φ₃ to the solution.

2. For each of the following problems compute the first four successive approximations φ₀, φ₁, φ₂, φ₃:

   a. y' = x² + y²,  y(0) = 0
   b. y' = 1 + xy,  y(0) = 1
   c. y' = y²,  y(0) = 0
   d. y' = y²,  y(0) = 1

3. a. Show that all the successive approximations for the problem

      y' = y²,  y(0) = 1,

      exist for all real x.

   b. Find a solution of the initial value problem in (a). On what interval does it exist?

   c. Assuming there is just one solution of the problem in (a), indicate why the successive approximations found in (a) can not converge to a solution for all real x.

4. Consider the problem

   y' = x² + y²,  y(0) = 0, on R: |x| ≤ 1, |y| ≤ 1.

   a. Compute an upper bound M for the function f(x, y) = x² + y² on R.

   b. On what interval containing x = 0 will all the successive approximations exist, and be such that their graphs are in R?

5. Let f be a real-valued continuous function defined on the rectangle

   R: |x − x₀| ≤ a,  |y − y₀| ≤ b,  (a, b > 0).

   Let φ be a real-valued function defined on an interval I containing x₀.

   a. Define carefully what it would mean to say that φ is a solution on I of the initial value problem

      y'' = f(x, y),  y(x₀) = y₀,  y'(x₀) = y₁.   (*)

   b. Define carefully what it would mean to say that φ is a solution on I of the integral equation

      y = y₀ + (x − x₀) y₁ + ∫_{x₀}^{x} (x − t) f(t, y) dt.   (**)

   c. Show that φ is a solution of the initial value problem (*) on I if and only if φ is a solution of the integral equation (**) on I. (Hint: In proving the statement one way it is useful to use the rule that

      d/dx ∫_{x₀}^{x} F(t, x) dt = F(x, x) + ∫_{x₀}^{x} ∂F/∂x (t, x) dt,

      if F and ∂F/∂x are continuous. In proving the statement the other way, let F(x) = f(x, φ(x)), and solve y'' = F(x), y(x₀) = y₀, y'(x₀) = y₁, by the variation of constants method.)

6. Let f be the same as in question 5.

   a. Define a sequence of successive approximations φ₀, φ₁, φ₂, ... for (*), or (**), in question 5. (Hint: Let φ₀(x) = y₀.)

7. a. Find a sequence of successive approximations for the problem

      y'' = x − y,  y(0) = 1,  y'(0) = 0,

      and show that the sequence tends to a limit for all real x.

8. Let f be a real-valued continuous function defined on the strip

   S: |x| ≤ a,  |y| < ∞,  (a > 0),

   and let I denote the interval |x| ≤ a. Suppose φ is a real-valued function on I.

   a. Define what it would mean to say that φ is a solution on I of the initial value problem

      y'' + α²y = f(x, y),  y(0) = 0,  y'(0) = 1,  (α > 0).   (*)

   b. Show that φ is a solution of (*) on I if and only if φ is a solution of the integral equation

      y = (sin αx)/α + (1/α) ∫_{0}^{x} sin(α(x − t)) f(t, y) dt   (**)

      on I. (Hint: see the hint in question 5.)

   c. Define a sequence of successive approximations φ₀, φ₁, φ₂, ... for the initial value problem (*), or the integral equation (**), and show that each φ_k is defined as a continuous function on I. (Hint: Let φ₀(x) = 0. It is a result in advanced calculus that if a function g is continuous in (t, x), then ∫_{0}^{x} g(t, x) dt is continuous in x.)

8.5. THE LIPSCHITZ CONDITION

Let f be a function defined for (x, y) in a set S. We say f satisfies a Lipschitz condition on S if there exists a constant K > 0 such that

   |f(x, y₁) − f(x, y₂)| ≤ K |y₁ − y₂|

for all (x, y₁), (x, y₂) in S. The constant K is called a Lipschitz constant.

If f is continuous and satisfies a Lipschitz condition on the rectangle R, then the successive approximations converge to a solution of the initial value problem on |x − x₀| ≤ α. Before we prove this, let us remark that a Lipschitz condition is a rather mild restriction on f.

Theorem.

Suppose S is either a rectangle

   |x − x₀| ≤ a,  |y − y₀| ≤ b,  (a, b > 0),

or a strip

   |x − x₀| ≤ a,  |y| < ∞,  (a > 0),

and that f is a real-valued function defined on S such that ∂f/∂y exists, is continuous on S, and

   | ∂f/∂y (x, y) | ≤ K,  ((x, y) in S),

for some K > 0. Then f satisfies a Lipschitz condition on S with Lipschitz constant K.

Proof.

We have

   f(x, y₁) − f(x, y₂) = ∫_{y₂}^{y₁} ∂f/∂y (x, t) dt,

and hence

   |f(x, y₁) − f(x, y₂)| ≤ | ∫_{y₂}^{y₁} | ∂f/∂y (x, t) | dt | ≤ K |y₁ − y₂|

for all (x, y₁), (x, y₂) in S.


Example.

Let

   f(x, y) = xy²  on  R: |x| ≤ 1, |y| ≤ 1.

Here

   | ∂f/∂y (x, y) | = |2xy| ≤ 2

for (x, y) on R, so f satisfies a Lipschitz condition on R with K = 2. This function does not satisfy a Lipschitz condition on the strip

   S: |x| ≤ 1,  |y| < ∞,

since

   |f(x, y₁) − f(x, 0)| / |y₁ − 0| = |x| |y₁|,

which tends to infinity as |y₁| → ∞, if |x| ≠ 0.

Example.

The function

   f(x, y) = y^{2/3}

does not satisfy a Lipschitz condition on

   R: |x| ≤ 1,  |y| ≤ 1.

Indeed, if y₁ > 0,

   |f(x, y₁) − f(x, 0)| / |y₁ − 0| = y₁^{2/3} / y₁ = 1 / y₁^{1/3},

which is unbounded as y₁ → 0.
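An empirical check of the first example (our own sketch, not from the text): sampling difference quotients of f(x, y) = xy² over a grid in the square confirms that none exceeds the Lipschitz constant K = 2 supplied by the theorem.

```python
# For f(x,y) = x*y^2 on |x| <= 1, |y| <= 1 the theorem gives the Lipschitz
# constant K = max|df/dy| = max|2xy| = 2.  Sample difference quotients
# |f(x,y1) - f(x,y2)| / |y1 - y2| and record the worst one seen.
def f(x, y):
    return x * y * y

pts = [i / 10 for i in range(-10, 11)]          # grid over [-1, 1]
worst = max(
    abs(f(x, y1) - f(x, y2)) / abs(y1 - y2)
    for x in pts for y1 in pts for y2 in pts if y1 != y2
)
print(worst)   # stays below the Lipschitz constant K = 2
```

On this grid the quotient equals |x||y₁ + y₂|, so the sampled maximum is 1.9 (attained with x = ±1 and y₁, y₂ near ±1); the supremum 2 is approached as the grid is refined.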

CYP QUESTIONS

1. By computing appropriate Lipschitz constants, show that the following functions satisfy Lipschitz conditions on the sets S indicated:

   a. f(x, y) = 4x² + y², on S: |x| ≤ 1, |y| ≤ 1
   b. f(x, y) = x² cos²y + y sin²x, on S: |x| ≤ 1, |y| < ∞
   c. f(x, y) = x³ e^{−xy²}, on S: 0 ≤ x ≤ a, |y| < ∞, (a > 0)
   d. f(x, y) = a(x)y² + b(x)y + c(x), on S: |x| ≤ 1, |y| ≤ 2, (a, b, c are continuous functions on |x| ≤ 1)
   e. f(x, y) = a(x)y + b(x), on S: |x| ≤ 1, |y| < ∞, (a, b are continuous functions on |x| ≤ 1)

2. a. Show that the function f given by f(x, y) = y^{1/2} does not satisfy a Lipschitz condition on

      R: |x| ≤ 1,  0 ≤ y ≤ 1.

   b. Show that this f satisfies a Lipschitz condition on any rectangle R of the form

      R: |x| ≤ a,  b ≤ y ≤ c,  (a, b, c > 0).

3. a. Show that the function f given by f(x, y) = x²|y| satisfies a Lipschitz condition on

      R: |x| ≤ 1,  |y| ≤ 1.

   b. Show that ∂f/∂y does not exist at (x, 0) if x ≠ 0.

4. Show that the assumption that ∂f/∂y be continuous on S is superfluous in Theorem 6. (Hint: For each fixed x the mean value theorem implies that

   f(x, y₁) − f(x, y₂) = ∂f/∂y (x, ξ) (y₁ − y₂),

   where ξ is some point between y₁ and y₂.)

8.6. CONVERGENCE OF THE SUCCESSIVE APPROXIMATIONS

We now prove the main existence theorem.

Theorem. (Existence Theorem).

Let f be a continuous real-valued function on the rectangle

   R: |x − x₀| ≤ a,  |y − y₀| ≤ b,  (a, b > 0),

and let

   |f(x, y)| ≤ M

for all (x, y) in R. Further suppose that f satisfies a Lipschitz condition with constant K in R. Then the successive approximations

   φ₀(x) = y₀,  φ_{k+1}(x) = y₀ + ∫_{x₀}^{x} f(t, φ_k(t)) dt,  (k = 0, 1, 2, ...),

converge on the interval

   I: |x − x₀| ≤ α = min {a, b/M}

to a solution φ of the initial value problem y' = f(x, y), y(x₀) = y₀, on I.

Note: If f is just continuous on R it is possible to show that there is a solution of the initial value problem on I. Since more sophisticated methods from advanced calculus are required for the proof of this, we shall forego such a proof. However, in order to show that the successive approximations converge to a solution, something more than the continuity of f must be assumed; see question 3.

Proof of Theorem. (a) Convergence of {φ_k(x)}. The key to the proof is the observation that φ_k may be written as

   φ_k = φ₀ + (φ₁ − φ₀) + (φ₂ − φ₁) + ... + (φ_k − φ_{k−1}),

and hence φ_k(x) is a partial sum for the series

   φ₀(x) + Σ_{p=1}^{∞} [φ_p(x) − φ_{p−1}(x)].   (1)

Therefore to show that the sequence {φ_k(x)} converges is equivalent to showing that the series (1) converges. To prove the latter we must estimate the terms φ_p(x) − φ_{p−1}(x) of this series.

The functions φ_p all exist as continuous functions on I, and (x, φ_p(x)) is in R for x in I. Moreover,

   |φ₁(x) − φ₀(x)| ≤ M |x − x₀|   (2)

for x in I. Writing down the relations defining φ₂ and φ₁, and subtracting, we obtain

   |φ₂(x) − φ₁(x)| ≤ | ∫_{x₀}^{x} |f(t, φ₁(t)) − f(t, φ₀(t))| dt |,

and since f satisfies the Lipschitz condition

   |f(x, y₁) − f(x, y₂)| ≤ K |y₁ − y₂|,

we have

   |φ₂(x) − φ₁(x)| ≤ K | ∫_{x₀}^{x} |φ₁(t) − φ₀(t)| dt |.

Using (2) we obtain

   |φ₂(x) − φ₁(x)| ≤ KM | ∫_{x₀}^{x} |t − x₀| dt |.

Thus, if x ≥ x₀,

   |φ₂(x) − φ₁(x)| ≤ KM ∫_{x₀}^{x} (t − x₀) dt = KM (x − x₀)²/2.

The same result is valid in case x ≤ x₀.

We shall prove by induction that

   |φ_p(x) − φ_{p−1}(x)| ≤ M K^{p−1} |x − x₀|^p / p!   (3)

for x in I. We have seen that this is true for p = 1 and p = 2. Let us assume x ≥ x₀; the proof is similar for x ≤ x₀. Assume (3) for p = m. Using the definitions of φ_{m+1} and φ_m we obtain

   φ_{m+1}(x) − φ_m(x) = ∫_{x₀}^{x} [f(t, φ_m(t)) − f(t, φ_{m−1}(t))] dt,

and thus

   |φ_{m+1}(x) − φ_m(x)| ≤ ∫_{x₀}^{x} |f(t, φ_m(t)) − f(t, φ_{m−1}(t))| dt.

Using the Lipschitz condition we get

   |φ_{m+1}(x) − φ_m(x)| ≤ K ∫_{x₀}^{x} |φ_m(t) − φ_{m−1}(t)| dt.

Since we have assumed (3) for p = m, this yields

   |φ_{m+1}(x) − φ_m(x)| ≤ M K^m ∫_{x₀}^{x} (|t − x₀|^m / m!) dt = M K^m |x − x₀|^{m+1} / (m+1)!.

This is just (3) for p = m + 1, and hence (3) is valid for all p = 1, 2, ..., by induction.

It follows from (3) that the infinite series

   φ₀(x) + Σ_{p=1}^{∞} [φ_p(x) − φ_{p−1}(x)]   (1)

is absolutely convergent on I, that is, the series

   |φ₀(x)| + Σ_{p=1}^{∞} |φ_p(x) − φ_{p−1}(x)|   (4)

is convergent on I. Indeed, from (3) we see that

   |φ_p(x) − φ_{p−1}(x)| ≤ (M/K) (K |x − x₀|)^p / p!,

which shows that the p-th term of the series in (4) is less than or equal to M/K times the p-th term of the power series for e^{K|x−x₀|}. Since the power series for e^{K|x−x₀|} is convergent, the series (4) is convergent for x in I. This implies that the series (1) is convergent on I. Therefore the k-th partial sum of (1), which is just φ_k(x), tends to a limit φ(x) as k → ∞, for each x in I.
(b) Properties of the limit φ. This limit function φ is a solution to our problem on I. First, let us show that φ is continuous on I. This may be seen in the following way. If x₁, x₂ are in I,

   |φ_{k+1}(x₁) − φ_{k+1}(x₂)| = | ∫_{x₂}^{x₁} f(t, φ_k(t)) dt | ≤ M |x₁ − x₂|,

which implies, by letting k → ∞,

   |φ(x₁) − φ(x₂)| ≤ M |x₁ − x₂|.   (5)

This shows that as x₂ → x₁, φ(x₂) → φ(x₁), that is, φ is continuous on I. Also, letting x₁ = x, x₂ = x₀ in (5) we obtain

   |φ(x) − y₀| ≤ M |x − x₀|,  (x in I),

which implies that the points (x, φ(x)) are in R for all x in I.
(c) Estimate for |φ(x) − φ_k(x)|. We now estimate |φ(x) − φ_k(x)|. We have

   φ(x) = φ₀(x) + Σ_{p=1}^{∞} [φ_p(x) − φ_{p−1}(x)]

and

   φ_k(x) = φ₀(x) + Σ_{p=1}^{k} [φ_p(x) − φ_{p−1}(x)].

Therefore, using (3), we find that

   |φ(x) − φ_k(x)| = | Σ_{p=k+1}^{∞} [φ_p(x) − φ_{p−1}(x)] |

      ≤ Σ_{p=k+1}^{∞} |φ_p(x) − φ_{p−1}(x)|

      ≤ (M/K) Σ_{p=k+1}^{∞} (Kα)^p / p!

      ≤ (M/K) ((Kα)^{k+1} / (k+1)!) Σ_{p=0}^{∞} (Kα)^p / p!

      = (M/K) ((Kα)^{k+1} / (k+1)!) e^{Kα}.   (6)

Letting ε_k = (Kα)^{k+1} / (k+1)!, we see that ε_k → 0 as k → ∞, since ε_k is a general term for the series for e^{Kα}. In terms of ε_k, (6) may be written as

   |φ(x) − φ_k(x)| ≤ (M/K) e^{Kα} ε_k,  (ε_k → 0, k → ∞).   (7)

The limit is a solution. To

complete the proof we must

show that

f(t,(t)) dt
x
(x) = y0 + (8)
x0

For all x in I. the right side of 8

makes sense for is continuous

on I, f is. continuous on R, and

thus the function F given by

F(t) = f(t, (t))

Is continuous on I. Now

f(t,(t)) dt
x
k+1(x) = y0 +
x0
and k+1(x) (x), as k .

Thus to prove (8) we must show

that for each x in 1

(
x x

x0
f t,k(t) dt) x0
( )
f t,(t) dt, ( k ) (9)

We have

| f(t, (t)) dt|


x x

x0
(
f t,(t) dt ) x0
k

f(t, (t)) |dt|


x
|f(t,(t))
x
| dt k
x0 x0


x
K| |(t) k(t)|dt| (10)
x0

Using the fact that f satisfies a

Lipschitz condition. The estimate

can now be used in 10 to obtain

| |
x


x
( )
f t,(t) dt
x0
( )
f t,k(t) dt Me k|x x0|
K

x0

Double click this page to view clearly


Which tends to zero as k ,

for each x in I. This proves 9,

and hence that satisfies 8.

Thus our proof of theorem is now

complete.

The estimate 6 of how well the

kth approximation k

approximates the solution is

worthy of special attention.

Theorem.
The kth successive

approximation k to the solution

of the initial value problem of

Theorem 7 satisfies
k+1

|(x)- (x)|
k
M (K)
K (k+1) ! e
k

For all x in I.
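This bound is easy to evaluate in practice (a sketch of ours; the function name `picard_error_bound` is our own). With M = 2, K = 1, α = 1/2, the data of question 1 below, the bound for k = 3 already guarantees an error below 0.01:

```python
# Evaluate the a-priori error bound
#   |phi(x) - phi_k(x)| <= (M/K) * (K*alpha)^(k+1)/(k+1)! * e^(K*alpha).
from math import exp, factorial

def picard_error_bound(M, K, alpha, k):
    return (M / K) * (K * alpha) ** (k + 1) / factorial(k + 1) * exp(K * alpha)

print(picard_error_bound(2, 1, 0.5, 3))   # about 0.0086, i.e. < 0.01
```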
CYP QUESTIONS

1. Consider the problem

   y' = 1 − 2xy,  y(0) = 0.

   a. Since the differential equation is linear, an expression can be found for the solution. Find it.

   b. Consider the above problem on

      R: |x| ≤ 1/2,  |y| ≤ 1.

      If f(x, y) = 1 − 2xy, show that

      |f(x, y)| ≤ 2,  ((x, y) in R),

      and that all the successive approximations to the solution exist on |x| ≤ 1/2, and their graphs remain in R.

   c. Show that f satisfies a Lipschitz condition on R with Lipschitz constant K = 1, and therefore the successive approximations converge to a solution φ of the initial value problem on |x| ≤ 1/2.

   d. Show that the approximation φ₃ satisfies

      |φ(x) − φ₃(x)| < 0.01

      for |x| ≤ 1/2.

   e. Compute φ₃.

2. Consider the problem

   y' = 1 + y²,  y(0) = 0.

   a. Using separation of variables, find the solution φ of this problem. (It is not difficult to convince oneself that the separation of variables technique gives the only solution of the problem.) On what interval does φ exist?

   b. Show that all the successive approximations φ₀, φ₁, φ₂, ... exist for all real x.

   c. Show that φ_k(x) → φ(x) for each x satisfying |x| ≤ 1/2. (Hint: Consider f(x, y) = 1 + y² on

      R: |x| ≤ 1/2,  |y| ≤ 1,

      and show that α = 1/2.)

3. On the square

   R: |x| ≤ 1,  |y| ≤ 1,

   let f be defined by

      f(x, y) = 0,  if x = 0, |y| ≤ 1,
             = 2x,  if 0 < |x| ≤ 1, −1 ≤ y < 0,
             = 2x − 4y/x,  if 0 < |x| ≤ 1, 0 ≤ y ≤ x²,
             = −2x,  if 0 < |x| ≤ 1, x² ≤ y ≤ 1.

   a. Show that this f is continuous on R, and |f(x, y)| ≤ 2 on R. (It might help to make a sketch.)

   b. Show that this f does not satisfy a Lipschitz condition on R.

   c. Show that the successive approximations φ₀, φ₁, φ₂, ... for the problem y' = f(x, y), y(0) = 0, satisfy

      φ₀(x) ≡ 0,  φ_{2m−1}(x) = x²,  φ_{2m}(x) = −x²,  (m = 1, 2, ...).

   d. Prove that neither of the convergent subsequences in (c) converges to a solution of the initial value problem. (Note: This problem has a solution, but the above shows that it can not be obtained by using successive approximations.)

4. Let f satisfy the conditions of the previous theorem. Show that the successive approximations

   φ₀(x) = y₀,
   φ_{k+1}(x) = y₀ + (x − x₀) y₁ + ∫_{x₀}^{x} (x − t) f(t, φ_k(t)) dt,  (k = 0, 1, 2, ...),

   converge on the interval

   I: |x − x₀| ≤ α = minimum {a, b/M₁},

   where M₁ = |y₁| + (aM/2), to a solution of the initial value problem y'' = f(x, y), y(x₀) = y₀, y'(x₀) = y₁.

8.7 NONLOCAL EXISTENCE OF SOLUTIONS

There are many cases when a solution to the initial value problem exists on the entire interval |x − x₀| ≤ a, and in such cases we say that a solution exists nonlocally.

Example. An example of nonlocal existence is furnished by the linear equation

   y' + g(x) y = h(x).   (1)

The solutions exist on every interval where g and h are continuous. Suppose g and h are continuous on |x − x₀| ≤ a, and that K is a positive constant such that

   |g(x)| ≤ K,  (|x − x₀| ≤ a).

Then if we write (1) as y' = f(x, y) = −g(x) y + h(x), we see that

   |f(x, y₁) − f(x, y₂)| = |g(x)| |y₁ − y₂| ≤ K |y₁ − y₂|

for all (x, y₁), (x, y₂) in the strip

   S: |x − x₀| ≤ a,  |y| < ∞.

We can show that if f satisfies a Lipschitz condition in a strip S, instead of in a rectangle R, then solutions will exist on the entire interval.

Theorem.

Let f be a real-valued continuous function on the strip

   S: |x − x₀| ≤ a,  |y| < ∞,  (a > 0),

and suppose f satisfies a Lipschitz condition on S with constant K > 0. Then the successive approximations φ_k for the problem

   y' = f(x, y),  y(x₀) = y₀   (2)

exist on the entire interval |x − x₀| ≤ a, and converge there to a solution φ of (2).

Proof.

The successive approximations are given by

   φ₀(x) = y₀,
   φ_{k+1}(x) = y₀ + ∫_{x₀}^{x} f(t, φ_k(t)) dt,  (k = 0, 1, 2, ...).

An induction argument establishes the existence of each φ_k for |x − x₀| ≤ a. Since f is continuous on S, the function F₀ given by F₀(x) = f(x, y₀) is continuous for |x − x₀| ≤ a, and hence bounded there. Let M be any positive constant such that

   |f(x, y₀)| ≤ M,  (|x − x₀| ≤ a).   (3)

The proof of convergence follows from

   |φ₁(x) − φ₀(x)| = | ∫_{x₀}^{x} f(t, y₀) dt | ≤ | ∫_{x₀}^{x} |f(t, y₀)| dt | ≤ M |x − x₀|,

due to (3).

The approximations need no longer satisfy the inequality |φ_k(x) − y₀| ≤ M |x − x₀| ≤ b for the M given in (3), since M bounds f only along y = y₀. However, we note that (3) is valid, and this implies that

   |φ_k(x) − y₀| ≤ Σ_{p=1}^{k} |φ_p(x) − φ_{p−1}(x)| ≤ (M/K) Σ_{p=1}^{k} (K |x − x₀|)^p / p! ≤ (M/K)(e^{Ka} − 1)

for |x − x₀| ≤ a. If we let

   b = (M/K)(e^{Ka} − 1),

we see that the approximations satisfy

   |φ_k(x) − y₀| ≤ b,  (|x − x₀| ≤ a),

and taking the limit as k → ∞ we obtain

   |φ(x) − y₀| ≤ b,  (|x − x₀| ≤ a).

Now since f is continuous on

   R: |x − x₀| ≤ a,  |y − y₀| ≤ b,

it is bounded there, that is, there is a positive constant N such that |f(x, y)| ≤ N for (x, y) in R. The continuity of φ may now be exhibited. Indeed, for x₁, x₂ in our interval |x − x₀| ≤ a,

   |φ_{k+1}(x₁) − φ_{k+1}(x₂)| = | ∫_{x₂}^{x₁} f(t, φ_k(t)) dt | ≤ N |x₁ − x₂|,

which implies, on letting k → ∞,

   |φ(x₁) − φ(x₂)| ≤ N |x₁ − x₂|.

The remainder of the proof is a repetition of parts (c) and (d) of the proof of the theorem in Sec. 8.6, with α replaced by a everywhere.

Corollary.

Suppose f is a real-valued continuous function on the plane

   |x| < ∞,  |y| < ∞,

which satisfies a Lipschitz condition on each strip

   S_a: |x| ≤ a,  |y| < ∞,

where a is any positive number. Then every initial value problem y' = f(x, y), y(x₀) = y₀, has a solution which exists for all real x.

Proof.

If x is any real number there is an a > 0 such that x is contained inside an interval |x − x₀| ≤ a. For this a the function f satisfies the conditions of Theorem 9 on the strip

   |x − x₀| ≤ a,  |y| < ∞,

since this strip is contained in the strip

   |x| ≤ |x₀| + a,  |y| < ∞.

Thus {φ_k(x)} tends to φ(x), where φ is a solution to the initial value problem.

Example.

An example of a nonlinear equation

satisfying the conditions of this

corollary is
2 x
y e 2
y' =
1+y
2 + x cos y (4)

If we let f(x, y) denote the right side

of (4) we see that f is continuous on

the plane. Since

f (y43y2) x 2
(x,y) = 1 + y2 e x sin y,
y
( )
We have

| yf (x,y)| 3e + 2
For all (x,y) in the strip
S : |x| , |y| <

Hence, satisfies a Lipschitz condition

on S with Lipschitz constant


2
K = 3e + .

Therefore equation (4), together with

any initial condition y(x 0 ) = y 0 , is a

problem which has a solution existing

for all real x.


Note that the function f given by f(x,
2
y) = y does not satisfy a Lipschitz

condition on any strip S , although it

satisfies one on any rectangle R. As

we have seen in Sec. 1 the problem


2 =
y = y , y(1) -1 ahs a solution

which exists only for x > 0.
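A small numerical check (our addition): the function φ(x) = −1/x does satisfy y' = y² pointwise for x > 0, which can be confirmed with finite differences; its blow-up as x → 0+ is exactly what prevents a global solution.

```python
# phi(x) = -1/x solves y' = y^2 with y(1) = -1, but only on x > 0:
# phi blows up as x -> 0+, so continuity of f(x, y) = y^2 alone
# yields local existence only (f is Lipschitz on rectangles, not strips).
def phi(x):
    return -1.0 / x

h = 1e-6
# Centered finite differences confirm phi' = phi^2 at sample points x > 0.
residuals = [abs((phi(x + h) - phi(x - h)) / (2 * h) - phi(x) ** 2)
             for x in (0.5, 1.0, 2.0)]
```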

CYP QUESTIONS

1. Consider the equation

   y' = (3x² + 1) cos² y + (x² − 2x) sin 2y

   on the strip S: |x| ≤ a (a > 0). If f(x, y) denotes the right side of this equation, show that f satisfies a Lipschitz condition on the strip S, and hence every initial value problem y' = f(x, y), y(x0) = y0, has a solution which exists for all real x.
2. Let

   f(x, y) = cos y / (1 − x²), (|x| < 1).

   a. Show that f satisfies a Lipschitz condition on every strip Sa: |x| ≤ a, where 0 < a < 1.

   b. Show that every initial value problem y' = f(x, y), y(0) = y0, (|y0| < ∞), has a solution which exists for |x| < 1.

3. Consider the equation

   y' = f(x) p(cos y) + g(x) q(sin y),

   where f, g are continuous for all real x, and p, q are polynomials. Show that every initial value problem for this equation has a solution which exists for all real x.

4. Let f be a real-valued continuous function on the strip

   S: |x − x0| ≤ a, |y| < ∞, (a > 0),

   and suppose that f satisfies on S a Lipschitz condition with constant K > 0. Show that the successive approximations

   φ0(x) = y0,
   φ_{k+1}(x) = y0 + (x − x0)y1 + ∫_{x0}^{x} (x − t) f(t, φk(t)) dt, (k = 0, 1, 2, ...)

   exist as continuous functions on the whole interval I: |x − x0| ≤ a, and converge on I to a solution of the initial value problem y'' = f(x, y), y(x0) = y0, y'(x0) = y1.


5. Prove the Corollary to the theorem in Sec. 8.7 for the initial value problem

   y'' = f(x, y), y(x0) = y0, y'(x0) = y1.

6. Let f be a real-valued continuous function on the strip

   S: |x| ≤ a, |y| < ∞, (a > 0),

   and suppose f satisfies a Lipschitz condition on S with constant K > 0. Show that the successive approximations

   φ0(x) = 0,
   φ_{k+1}(x) = sin(αx)/α + ∫_{0}^{x} (sin α(x − t)/α) f(t, φk(t)) dt, (α > 0), (k = 0, 1, 2, ...)

   exist as continuous functions on I: |x| ≤ a, and converge there to a solution of the initial value problem y'' + α²y = f(x, y), y(0) = 0, y'(0) = 1.

7. Prove the corollary to the theorem in Sec. 8.7 for the initial value problem

   y'' + α²y = f(x, y), y(0) = 0, y'(0) = 1.

8. Let q be a real-valued continuous function on I: |x| ≤ a, where a > 0. Consider the initial value problem

   y'' + α²y = q(x)y, (α > 0), y(0) = 0, y'(0) = 1.   (*)

   a. Show that there is a solution φ of (*) on I, and give an integral equation which φ also satisfies.

   b. If q is continuous for all real x, show that there is a solution of (*) for all real x. (Hint: See questions 4, 5, 6, 7.)

Theorem.

Let f, g be continuous on R, and suppose f satisfies a Lipschitz condition there with Lipschitz constant K. Let φ, ψ be solutions of (1), (2) below, respectively, on an interval I containing x0, with graphs contained in R. If the inequalities

|f(x, y) − g(x, y)| ≤ ε, |y1 − y2| ≤ δ

are valid, then

|φ(x) − ψ(x)| ≤ δ exp(K|x − x0|) + (ε/K)(exp(K|x − x0|) − 1)   (5)

for all x in I.

Proof.

Consider the two initial value problems

y' = f(x, y), y(x0) = y1,   (1)

and

y' = g(x, y), y(x0) = y2,   (2)

where f, g are both continuous real-valued functions on

R: |x − x0| ≤ a, |y − y0| ≤ b, (a, b > 0),

and (x0, y1), (x0, y2) are points in R. We shall show that if g is close to f, and y2 close to y1, then any solution ψ of (2) on an interval I containing x0 is close to a solution φ of (1) on I. Suppose there exist nonnegative constants ε, δ such that

|f(x, y) − g(x, y)| ≤ ε, ((x, y) in R)   (3)

and

|y1 − y2| ≤ δ.   (4)

If we take g = f and y0 = y1 = y2, we see that we may choose ε = 0, δ = 0, and we have:

Corollary 1.

Let f be continuous and satisfy a Lipschitz condition on R. If φ and ψ are two solutions of

y' = f(x, y), y(x0) = y0,

on an interval I containing x0, then φ(x) = ψ(x) for all x in I.

We remark that some restriction on f,

in addition to continuity, is required

in order to guarantee uniqueness.

Example.

Consider y' = 3y^(2/3), y(0) = 0. Here f(x, y) = 3y^(2/3), and thus f is continuous for all (x, y). The two functions φ, ψ given by

φ(x) = x³, ψ(x) = 0, (−∞ < x < ∞),

are both solutions of this problem. Of course, as we have seen in Sec. 5, this f does not satisfy a Lipschitz condition on any rectangle containing the origin.

Intuitively, if we have a sequence of functions gk → f on R, and a sequence yk → y0, we would expect that the solutions ψk of

y' = gk(x, y), y(x0) = yk   (6)

would tend to the solution φ of

y' = f(x, y), y(x0) = y0.   (7)

This is a direct consequence of (5). Suppose the gk are continuous on R and there are constants εk such that

|f(x, y) − gk(x, y)| ≤ εk (all (x, y) in R)   (8)

and constants δk such that

|yk − y0| ≤ δk,

where εk and δk tend to 0 as k → ∞. Applying (5) we obtain:

Corollary 2.

Let f be continuous and satisfy a Lipschitz condition on R. Let the gk (k = 1, 2, ...) be continuous on R and satisfy (8) for some constants εk → 0 (k → ∞), and let yk → y0 (k → ∞). If ψk is a solution of (6) on an interval I containing x0, and φ is the solution of (7) on I, then ψk(x) → φ(x) on I.
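The estimate (5) can be seen exactly on a pair of problems with known solutions (an illustration added here, not from the original): take f(x, y) = y and g(x, y) = y + ε, both with y(0) = 1. Then K = 1, δ = 0, and the gap |φ − ψ| = ε(eˣ − 1) attains the bound in (5) with equality.

```python
import math

eps = 1e-3   # |f - g| <= eps everywhere, with g(x, y) = y + eps
K = 1.0      # Lipschitz constant of f(x, y) = y with respect to y

def phi(x):
    # Solution of y' = y, y(0) = 1.
    return math.exp(x)

def psi(x):
    # Solution of y' = y + eps, y(0) = 1.
    return (1 + eps) * math.exp(x) - eps

def bound(x):
    # Right-hand side of estimate (5) with delta = 0.
    return (eps / K) * (math.exp(K * abs(x)) - 1)

gaps = [(abs(phi(x) - psi(x)), bound(x)) for x in (0.5, 1.0, 2.0)]
```

Here the perturbed equation is an exactly solvable stand-in for the gk of Corollary 2; letting eps → 0 recovers ψk → φ.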
Proof of Theorem 10. From (1), (2) we see that

φ(x) = y1 + ∫_{x0}^{x} f(t, φ(t)) dt,

ψ(x) = y2 + ∫_{x0}^{x} g(t, ψ(t)) dt,

and hence

φ(x) − ψ(x) = y1 − y2 + ∫_{x0}^{x} [f(t, φ(t)) − g(t, ψ(t))] dt

= y1 − y2 + ∫_{x0}^{x} [f(t, φ(t)) − f(t, ψ(t))] dt + ∫_{x0}^{x} [f(t, ψ(t)) − g(t, ψ(t))] dt.

Using (3), (4) and the fact that f satisfies a Lipschitz condition with constant K, we obtain for x ≥ x0

|φ(x) − ψ(x)| ≤ δ + K ∫_{x0}^{x} |φ(t) − ψ(t)| dt + ε(x − x0).   (9)

If

E(x) = ∫_{x0}^{x} |φ(t) − ψ(t)| dt,

we see that (9) may be written as

E'(x) ≤ δ + K E(x) + ε(x − x0).   (10)

This is a first-order differential inequality which we may solve in the same way we solve first-order linear differential equations. Multiplying (10) by exp(−K(x − x0)) we get, after changing x to t,

[exp(−K(t − x0)) E(t)]' ≤ δ exp(−K(t − x0)) + ε(t − x0) exp(−K(t − x0)).

An integration from x0 to x yields

exp(−K(x − x0)) E(x) ≤ (δ/K)[1 − exp(−K(x − x0))] + (ε/K²)[−K(x − x0) − 1] exp(−K(x − x0)) + ε/K².

Multiplying both sides of this inequality by exp(K(x − x0)) we find

E(x) ≤ (δ/K)[exp(K(x − x0)) − 1] + (ε/K²) exp(K(x − x0)) − (ε/K²)[K(x − x0) + 1],

and using this in (9) we obtain finally

|φ(x) − ψ(x)| ≤ δ exp(K(x − x0)) + (ε/K)(exp(K(x − x0)) − 1).

This is just (5) for x ≥ x0. A similar proof holds in case x ≤ x0.

CYP QUESTIONS

1. Consider the initial value problem

   y' = xy + y¹⁰, y(0) = 1/10.   (*)

   a. Show that a solution φ of this problem exists for |x| ≤ 1/2. (Hint: Consider this problem on R: |x| ≤ 1/2, |y − 1/10| ≤ 1/10. If g(x, y) = xy + y¹⁰, show that |g(x, y)| < 1/5 for (x, y) in R, and hence that the α of Theorem 7 may be taken to be 1/2.)

   b. For small |y| the problem (*) can be approximated by the problem

      y' = xy, y(0) = 1/10.

      Compute a solution ψ of this problem, and show that its graph is in R for |x| ≤ 1/2.

   c. Show that

      |φ(x) − ψ(x)| ≤ (2/5¹⁰)(exp(|x|/2) − 1)

      for |x| ≤ 1/2. (Hint: Apply Theorem 10 with f(x, y) = xy on R.)

   d. Prove also that

      |φ(x) − ψ(x)| ≤ (1/5¹⁰)(exp(|x|) − 1).
2. Consider the problem

   y' = y + λx² sin y, y(0) = 1,

   where λ is some real parameter, |λ| ≤ 1.

   a. Show that the solution φλ of this problem exists for |x| ≤ 1.

   b. Prove that

      |φλ(x) − exp(x)| ≤ |λ|(exp(|x|) − 1)

      for |x| ≤ 1.

3. Let f be a continuous function for (x, y, λ) in

   R: |x − x0| ≤ a, |y − y0| ≤ b, |λ − λ0| ≤ c,

   where a, b, c > 0, and suppose there is a constant K > 0 such that

   |f(x, y1, λ) − f(x, y2, λ)| ≤ K |y1 − y2|

   for all (x, y1, λ), (x, y2, λ) in R. Further suppose that ∂f/∂λ exists and there is a constant L > 0 such that

   |∂f/∂λ (x, y, λ)| ≤ L

   for all (x, y, λ) in R. If φλ represents the solution of y' = f(x, y, λ), y(x0) = y0, show that

   |φλ(x) − φλ0(x)| ≤ (L|λ − λ0|/K)(exp(K|x − x0|) − 1)

   for all x for which φλ, φλ0 exist.
4. Apply question 3 to the initial value problem

   y'' + λ²y = q(x)y, y(x0) = y0,

   where λ is real, and q is continuous for |x − x0| ≤ a.

5. Let f, g be as in the previous theorem, and consider the two initial value problems

   y'' = f(x, y), y(x0) = y0, y'(x0) = y1,   (*)
   y'' = g(x, y), y(x0) = z0, y'(x0) = z1.   (**)

   Suppose φ, ψ are solutions of (*) and (**), respectively, on an interval I containing x0. State, and prove, analogues of Theorem 10 and Corollaries 1 and 2. (Hint: By question 5, Sec. 8.4,

   φ(x) = y0 + (x − x0)y1 + ∫_{x0}^{x} (x − t) f(t, φ(t)) dt,
   ψ(x) = z0 + (x − x0)z1 + ∫_{x0}^{x} (x − t) g(t, ψ(t)) dt.

   If |y0 − z0| ≤ δ0, |y1 − z1| ≤ δ1, show that the estimate (5) is valid with δ = δ0 + aδ1, K replaced by aK, and ε replaced by ε(1 + a/2).)

6. Let f be a real-valued continuous function on the strip

   S: |x| ≤ a, |y| < ∞, (a > 0),

   and suppose f satisfies a Lipschitz condition on S. Show that the solution of the initial value problem

   y'' + α²y = f(x, y), y(0) = 0, y'(0) = 1, (α > 0),

   is unique. (Hint: Apply question 5.) (Note: From question 6, Sec. 7, it follows that a solution exists on |x| ≤ a.)


7. Let φ and ψ be solutions of the two problems

   y'' + α²y = f(x, y), y(x0) = y0, y'(x0) = y1,
   y'' + α²y = g(x, y), y(x0) = z0, y'(x0) = z1,

   respectively, with α > 0. State and prove analogues of Theorem 10, and Corollaries 1 and 2, for this situation. (Hint: Apply question 5.)
Unit - 9

PARTIAL DIFFERENTIAL
EQUATIONS OF THE FIRST
ORDER

9.1 PARTIAL DIFFERENTIAL


EQUATIONS

We now proceed to the study of

partial differential equations proper.

Such equations arise in geometry and

physics when the number of

independent variables in the problem

under discussion is two or more.


When such is the case, any

dependent variable is likely to be a

function of more than one variable,

so that it possesses not ordinary

derivatives with respect to a single

variable but partial derivatives with


respect to several variables. For

instance, in the study of thermal

effects in a solid body the

temperature θ may vary from point to point in the solid as well as from time to time, and, as a consequence, the derivatives

∂θ/∂x, ∂θ/∂y, ∂θ/∂z, ∂θ/∂t

will, in general, be nonzero. Furthermore, in any particular problem it may happen that higher derivatives of the types

∂²θ/∂x², ∂²θ/∂x∂t, ∂³θ/∂x²∂t, etc.

may be of physical significance.


When the laws of physics are applied

to a problem of this kind, we

sometimes obtain a relation between

the derivatives of the kind

( )
2 2

F x , ..., 2 , ..., x t , ... = 0
x

Such an equation relating partial

derivatives is called a partial

differential equation.

We define the order of a partial

differential equation to be the order

of the derivative of highest order

occurring in the equation. If, for example, we take θ to be the dependent variable and x, y and t to be the independent variables, then the equation

∂²θ/∂x² = ∂θ/∂t

is a second-order equation in two variables. The equation

(∂θ/∂x)² + ∂θ/∂t = 0

is a first-order equation in two variables, while

x(∂θ/∂x) + y(∂θ/∂y) + ∂θ/∂t = 0

is a first-order equation in three variables.

In this unit we shall consider partial differential equations of the first order, i.e., equations of the type

F(x, y, θ, ∂θ/∂x, ∂θ/∂y) = 0.

In the main we shall suppose that there are two independent variables x and y and that the dependent variable is denoted by z. If we write

p = ∂z/∂x, q = ∂z/∂y,

we see that such an equation can be written in the symbolic form

f(x, y, z, p, q) = 0.

9.2 ORIGINS OF FIRSTORDER


PARTIAL DIFFERENTIAL EQUATIONS

We shall examine the interesting

question of how they arise. Suppose

that we consider the equation

2 2 2 2
x +y +( zc ) = a (1)
in which the constants a and c are

arbitrary. Then equation represents

the set of all spheres whose centers

lie along the z axis. If we

differentiate this equation with

respect to x, we obtain the relation

x + p(z c) = 0 while if we

differentiate it with respect to y, we

find that y + q(z c) = 0

Eliminating the arbitrary constant c

from these two equations, we obtain

the partial differential equation

yp xq = 0 (2)

which is of the first order. In some

sense, then, the set of all spheres

with centers on the z axis is


characterized by the partial

differential equation (2).

Example

The equation

x² + y² = (z − c)² tan² α   (3)

in which both of the constants c and α are arbitrary, represents the set of all right circular cones whose axes coincide with the line Oz. If we differentiate equation (3) first with respect to x and then with respect to y, we find that

p(z − c) tan² α = x, q(z − c) tan² α = y,   (4)

and, upon eliminating c and α from these relations, we see that for these cones also the equation (2) is satisfied.
Now what the spheres and cones

have in common is that they are

surfaces of revolution which have the

line Oz as axes of symmetry. All

surfaces of revolution with this

property are characterized by an

equation of the form

z = f(x² + y²)   (5)

where the function f is arbitrary. Now if we write x² + y² = u and differentiate equation (5) with respect to x and y, respectively, we obtain the relations

p = 2x f'(u), q = 2y f'(u).

Thus we see that the function z

defined by each of the equations (1),


(3) and (5) is, in some sense, a

solution of the equation (2).
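A quick numerical confirmation (our addition, not part of the original text) that any surface of revolution z = f(x² + y²) satisfies equation (2), yp − xq = 0; the choice f = sin is arbitrary.

```python
import math

def z(x, y):
    # A surface of revolution about Oz: z = f(x^2 + y^2), with f = sin
    # chosen arbitrarily for the check.
    return math.sin(x * x + y * y)

def dz(x, y, wrt, h=1e-6):
    # Centered finite-difference approximation to p = dz/dx or q = dz/dy.
    if wrt == 'x':
        return (z(x + h, y) - z(x - h, y)) / (2 * h)
    return (z(x, y + h) - z(x, y - h)) / (2 * h)

# Residual of equation (2), y*p - x*q, at a few sample points.
residuals = [y * dz(x, y, 'x') - x * dz(x, y, 'y')
             for (x, y) in [(0.3, 0.7), (1.1, -0.4), (-0.8, 0.2)]]
```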

We shall now generalize this

argument slightly. Consider the equation

F(x, y, z, a, b) = 0   (6)

where a and b denote arbitrary constants. If we differentiate this equation with respect to x and y, respectively, we obtain the relations

∂F/∂x + p ∂F/∂z = 0, ∂F/∂y + q ∂F/∂z = 0.   (7)

The set of equations (6) and (7) constitute three equations involving two arbitrary constants a and b, and, in the general case, it will be possible to eliminate a and b from these equations to obtain a relation of the kind

f(x, y, z, p, q) = 0   (8)

showing that the system of surfaces (6) gives rise to a partial differential equation (8) of the first order.

The obvious generalization of the relation (5) is a relation between x, y and z of the type

F(u, v) = 0   (9)

where u and v are known functions of x, y and z, and F is an arbitrary function of u and v. If we differentiate equation (9) with respect to x and y, respectively, we obtain the equations

(∂F/∂u)(∂u/∂x + p ∂u/∂z) + (∂F/∂v)(∂v/∂x + p ∂v/∂z) = 0,
(∂F/∂u)(∂u/∂y + q ∂u/∂z) + (∂F/∂v)(∂v/∂y + q ∂v/∂z) = 0,

and if we now eliminate ∂F/∂u and ∂F/∂v from these equations, we obtain the equation

p ∂(u,v)/∂(y,z) + q ∂(u,v)/∂(z,x) = ∂(u,v)/∂(x,y)   (10)

which is a partial differential equation of the type (8).

It should be observed, however, that

the partial differential equation (10)

is a linear equation; i.e., the powers

of p and q are both unity, whereas


equation (8) need not be linear. For example, the equation

(x − a)² + (y − b)² + z² = 1,

which represents the set of all spheres of unit radius with center in the plane xOy, leads to the first-order nonlinear differential equation

z²(1 + p² + q²) = 1.
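The computation above can be spot-checked numerically (an added sketch; the center (a, b) is arbitrary): on the upper half of each unit sphere, z²(1 + p² + q²) = 1 holds identically.

```python
import math

a, b = 0.4, -0.3   # arbitrary center in the plane xOy

def z(x, y):
    # Upper half of the unit sphere (x - a)^2 + (y - b)^2 + z^2 = 1.
    return math.sqrt(1.0 - (x - a) ** 2 - (y - b) ** 2)

h = 1e-6
residuals = []
for (x, y) in [(0.5, 0.0), (0.2, -0.1), (0.6, 0.1)]:
    p = (z(x + h, y) - z(x - h, y)) / (2 * h)   # p = dz/dx
    q = (z(x, y + h) - z(x, y - h)) / (2 * h)   # q = dz/dy
    residuals.append(z(x, y) ** 2 * (1 + p * p + q * q) - 1.0)
```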

CYP QUESTIONS

1. Eliminate the constants a and b from the following equations:

   a. z = (x + a)(y + b)
   b. 2z = (ax + y)² + b
   c. ax² + by² + z² = 1

2. Eliminate the arbitrary function f from the equations:

   a. z = xy + f(x² + y²)
   b. z = x + y + f(xy)
   c. z = f(xy/z)
   d. z = f(x − y)
   e. f(x² + y² + z², z² − 2xy) = 0
9.3 CAUCHY'S PROBLEM FOR
FIRST- ORDER EQUATIONS

The business of an existence theorem

is to establish conditions under which

we can assert whether or not a given

partial differential equation has a


solution at all; the further step of

proving that the solution, when it

exists, is unique requires a


uniqueness theorem. The conditions

to be satisfied in the case of a first-

order partial differential equation are

conveniently crystallized in the

classic problem of Cauchy, which in

the case of two independent

variables may be stated as follows:

Cauchy's Problem. If

a. x0(μ), y0(μ), and z0(μ) are functions which, together with their first derivatives, are continuous in the interval M defined by μ1 < μ < μ2;

b. And if F(x,y,z,p,q) is a

continuous function of x,y,z,p,

and q in a certain region U of the

xyzpq space, then it is required


to establish the existence of a

function φ(x, y) with the following properties:

1. φ(x, y) and its partial derivatives with respect to x and y are continuous functions of x and y in a region R of the xy space.

2. For all values of x and y lying in R, the point {x, y, φ(x,y), φx(x,y), φy(x,y)} lies in U, and

   F[x, y, φ(x,y), φx(x,y), φy(x,y)] = 0.

3. For all μ belonging to the interval M, the point {x0(μ), y0(μ)} belongs to the region R, and φ{x0(μ), y0(μ)} = z0(μ).

Stated geometrically, what we wish to prove is that there exists a surface z = φ(x, y) which passes through the curve Γ whose parametric equations are

x = x0(μ), y = y0(μ), z = z0(μ)   (1)

and at every point of which the direction (p, q, −1) of the normal is such that

F(x, y, z, p, q) = 0.   (2)

The significant point is that the

theorem cannot be proved with this

degree of generality. To prove the

existence of a solution of equation


(2) passing through a curve Γ with equations (1), it is necessary to make some further assumptions about the form of the function F and the nature of the curve Γ. There are, therefore, a whole class of existence theorems depending on the nature of these special assumptions. The classic theorem in this field is that due to Sonja Kowalewski:

Theorem.

If g(y) and all its derivatives are continuous for |y − y0| < δ, if x0 is a given number and z0 = g(y0), q0 = g'(y0), and if f(x, y, z, q) and all its partial derivatives are continuous in a region S defined by

|x − x0| < δ, |y − y0| < δ, |q − q0| < δ,

then there exists a unique function φ(x, y) such that:

a. φ(x, y) and all its partial derivatives are continuous in a region R defined by |x − x0| < δ1, |y − y0| < δ2;

b. For all (x, y) in R, z = φ(x, y) is a solution of the equation

   ∂z/∂x = f(x, y, z, ∂z/∂y);

c. For all values of y in the interval |y − y0| < δ2, φ(x0, y) = g(y).

Before passing on to the discussion

of the solution of first-order partial

differential equations, we shall say


a word about different kinds of

solutions. Relations of the type

F(x,y,z,,b) = 0

Led to partial differential equations of

the first order. Any suclrelation which

contains two arbitrary constants

and b and is a solution of a partial

differential equation of the first order

is said to be a complete solution or

a complete integral of that equation.

On the other hand, any relation of the type

F(u, v) = 0

involving an arbitrary function F connecting two known functions u and v of x, y, and z and providing a solution of a first-order partial differential equation is called a general solution or a general integral of that equation.

It is obvious that in some sense a

general integral provides a much

broader set of solutions of the partial

differential equation in question than

does a complete integral.

9.4. LINEAR EQUATIONS OF THE


FIRST ORDER

We now consider partial differential equations of the form

Pp + Qq = R   (1)

where P, Q and R are given functions of x, y, and z (which do not involve p or q), p denotes ∂z/∂x, q denotes ∂z/∂y, and we wish to find a relation between x, y and z involving an

arbitrary function. The first

systematic theory of equations of this

type was given by Lagrange. For that

reason equation (1) is frequently

referred to as Lagrange's equation.

Its generalization to n independent

variables is obviously the equation

X1 p1 + X2 p2 + ... + Xn pn = Y   (2)

where X1, X2, ..., Xn and Y are functions of the n independent variables x1, x2, ..., xn and of a dependent variable z; pi denotes ∂z/∂xi (i = 1, 2, ..., n). It should be observed that

in this connection the term linear


means that p and q (or, in the general case, p1, p2, ..., pn) appear to the first degree only, but P, Q, R may be any functions of x, y, and z. This is in contrast to the situation in the theory of ordinary differential equations, where z must also appear linearly. For example, the equation

x ∂z/∂x + y ∂z/∂y = z² + x²

is linear, whereas the equation

x dz/dx = z² + x²

is not.

The method of solving linear

equations of the form (1) is

contained in:
Theorem.

The general solution of the linear partial differential equation

Pp + Qq = R   (1)

is

F(u, v) = 0,   (3)

where F is an arbitrary function and

u(x, y, z) = c1, v(x, y, z) = c2   (5)

form a solution of the equations

dx/P = dy/Q = dz/R.   (4)

Proof.

If the equations (5) satisfy the equations (4), then the equations

ux dx + uy dy + uz dz = 0 and dx/P = dy/Q = dz/R

must be compatible; i.e., we must have

P ux + Q uy + R uz = 0.

Similarly we must have

P vx + Q vy + R vz = 0.

Solving these equations for P, Q and R, we have

P / [∂(u,v)/∂(y,z)] = Q / [∂(u,v)/∂(z,x)] = R / [∂(u,v)/∂(x,y)].   (8)

Now we already showed that the relation

F(u, v) = 0

leads to the partial differential equation

p ∂(u,v)/∂(y,z) + q ∂(u,v)/∂(z,x) = ∂(u,v)/∂(x,y).   (9)

Substituting from equations (8) into equation (9), we see that (3) is a solution of the equation (1) if u and v are given by equations (5).

Example.

Find the general solution of the differential equation

x² ∂z/∂x + y² ∂z/∂y = (x + y)z.

Solution. The integral surfaces of this equation are generated by the integral curves of the equations

dx/x² = dy/y² = dz/((x + y)z).   (10)

The first equation of this set obviously has the integral

x⁻¹ − y⁻¹ = c1,   (11)

and it follows immediately from the equations (10) that

(dx − dy)/(x² − y²) = dz/((x + y)z),

which has the integral

(x − y)/z = c2.   (12)

Combining the solutions (11) and (12), we see that the integral curves of the equations (10) are given by equation (12) and the equation

xy/z = c3,   (13)

and that the curves given by these equations generate the surface

F((x − y)/z, xy/z) = 0   (14)

where the function F is arbitrary. It should be observed that this surface can be expressed by equations such as

z = xy f((x − y)/z) or z = (x − y) g(xy/z)

(in which f and g denote arbitrary functions), which are apparently different from equation (14).
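The two integrals found above can be checked numerically (an added sketch, not part of the original): along the characteristic curves dx/dt = x², dy/dt = y², dz/dt = (x + y)z of the equation, the quantities u = 1/x − 1/y and v = (x − y)/z stay constant.

```python
def rhs(s):
    # Characteristic system of x^2 z_x + y^2 z_y = (x + y) z.
    x, y, z = s
    return (x * x, y * y, (x + y) * z)

def rk4_step(s, h):
    # One classical Runge-Kutta step for the 3-component system.
    def shift(state, k, c):
        return tuple(si + c * ki for si, ki in zip(state, k))
    k1 = rhs(s)
    k2 = rhs(shift(s, k1, h / 2))
    k3 = rhs(shift(s, k2, h / 2))
    k4 = rhs(shift(s, k3, h))
    return tuple(si + h * (a + 2 * b + 2 * c + d) / 6
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def invariants(s):
    # u from (11) and v from (12): both constant on characteristics.
    x, y, z = s
    return (1 / x - 1 / y, (x - y) / z)

state = (1.0, 2.0, 1.0)
u0, v0 = invariants(state)
for _ in range(100):            # march to t = 0.1, well before blow-up
    state = rk4_step(state, 0.001)
u1, v1 = invariants(state)
```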

The theory we have developed for the

case of two independent variables

can, of course, be readily extended to

the case of n independent variables,

though in this case it is simpler to

make use of an analytical method of

proof than one which depends on the


appreciation of geometrical ideas.

The general theorem is:

Theorem.

If ui(x1, x2, ..., xn, z) = ci (i = 1, 2, ..., n) are independent solutions of the equations

dx1/P1 = dx2/P2 = ... = dxn/Pn = dz/R,

then the relation Φ(u1, u2, ..., un) = 0, in which the function Φ is arbitrary, is a general solution of the linear partial differential equation

P1 ∂z/∂x1 + P2 ∂z/∂x2 + ... + Pn ∂z/∂xn = R.

Proof.

To prove this theorem we first of all note that if the solutions of the equations

dx1/P1 = dx2/P2 = ... = dxn/Pn = dz/R   (15)

are

ui(x1, x2, ..., xn, z) = ci, i = 1, 2, ..., n,   (16)

then the n equations

Σ_{j=1}^{n} (∂ui/∂xj) dxj + (∂ui/∂z) dz = 0, i = 1, 2, ..., n,   (17)

must be compatible with the equations (15). In other words, we must have

Σ_{j=1}^{n} Pj ∂ui/∂xj + R ∂ui/∂z = 0, i = 1, 2, ..., n.   (18)

Solving the set of n equations (18) for Pi, we find that

Pi / [∂(u1, u2, ..., un)/∂(x1, ..., x_{i−1}, z, x_{i+1}, ..., xn)] = R / [∂(u1, u2, ..., un)/∂(x1, ..., xn)], i = 1, 2, ..., n,   (19)

where ∂(u1, u2, ..., un)/∂(x1, x2, ..., xn) denotes the Jacobian determinant whose entry in row i and column j is ∂ui/∂xj.

Consider now the relation

Φ(u1, u2, ..., un) = 0.   (20)

Differentiating it with respect to xi, we obtain the equation

Σ_{j=1}^{n} (∂Φ/∂uj)(∂uj/∂xi + (∂uj/∂z)(∂z/∂xi)) = 0,

and there are n such equations, one for each value of i. Eliminating the n quantities ∂Φ/∂u1, ..., ∂Φ/∂un from these equations, we obtain the relation

∂(u1, ..., un)/∂(x1, ..., xn) + Σ_{j=1}^{n} (∂z/∂xj) ∂(u1, ..., un)/∂(x1, ..., x_{j−1}, z, x_{j+1}, ..., xn) = 0.   (21)

Substituting from equations (19) into the equation (21), we see that the function z defined by the relation (20) is a solution of the equation

P1 ∂z/∂x1 + P2 ∂z/∂x2 + ... + Pn ∂z/∂xn = R,   (22)

as we desired to show.

Example.

If u is a function of x, y and z which satisfies the partial differential equation

(y − z) ∂u/∂x + (z − x) ∂u/∂y + (x − y) ∂u/∂z = 0,

show that u contains x, y and z only in the combinations x + y + z and x² + y² + z².

In this case the auxiliary equations are

dx/(y − z) = dy/(z − x) = dz/(x − y) = du/0,

and they are equivalent to the three relations

du = 0,
dx + dy + dz = 0,
x dx + y dy + z dz = 0,

which show that the integrals are

u = c1, x + y + z = c2, x² + y² + z² = c3.

Hence the general solution is of the form u = f(x + y + z, x² + y² + z²).
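A numerical spot-check (our addition) that any smooth function of the two combinations solves the equation; the particular f chosen below is arbitrary.

```python
import math

def u(x, y, z):
    # An arbitrary smooth function of the two combinations:
    # f(s, t) = s * exp(-t) with s = x + y + z, t = x^2 + y^2 + z^2.
    s = x + y + z
    t = x * x + y * y + z * z
    return s * math.exp(-t)

h = 1e-6
residuals = []
for (x, y, z0) in [(0.5, -0.2, 0.8), (1.0, 0.3, -0.4)]:
    # Centered finite-difference partial derivatives of u.
    ux = (u(x + h, y, z0) - u(x - h, y, z0)) / (2 * h)
    uy = (u(x, y + h, z0) - u(x, y - h, z0)) / (2 * h)
    uz = (u(x, y, z0 + h) - u(x, y, z0 - h)) / (2 * h)
    residuals.append((y - z0) * ux + (z0 - x) * uy + (x - y) * uz)
```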

CYP QUESTIONS

Find the general integrals of the linear partial differential equations:

1. z(xp − yq) = y² − x²
2. px(z − 2y²) = (z − qy)(z − y² − 2x³)
3. px(x + y) = qy(x + y) − (x − y)(2x + 2y + z)
4. y²p − xyq = x(z − 2y)
5. (y + zx)p − (x + yz)q = x² − y²
6. x(x² + 3y²)p − y(3x² + y²)q = 2z(y² − x²)
9.5. INTEGRAL SURFACES PASSING
THROUGH A GIVEN CURVE

We shall now indicate how a general solution may be used to determine the integral surface which passes through a given curve. We shall suppose that we have found two solutions

u(x, y, z) = c1, v(x, y, z) = c2   (1)

of the auxiliary equations. Then, as we saw, any solution of the corresponding linear equation is of the form

F(u,v) = 0 (2)

arising from a relation

F(c1, c2) = 0   (3)

between the constants c1 and c2. The

problem we have to consider is that

of determining the function F in

special circumstances.

If we wish to find the integral surface

which passes through the curve C

whose parametric equations are

x = x(t), y = y(t), z = z(t)

where t is a parameter, then the

particular solution (1) must be such

that u{x(t), y(t), z(t)} = c 1 , v{x(t),

y(t), z(t)} = c 2

We therefore have two equations

from which we may eliminate the

single variable t to obtain a relation

of the type (3). The solution, we are

seeking is then given by equation

(2).
Example 3.

Find the integral surface of the linear partial differential equation

x(y² + z)p − y(x² + z)q = (x² − y²)z

which contains the straight line x + y = 0, z = 1.

The auxiliary equations

dx/(x(y² + z)) = dy/(−y(x² + z)) = dz/((x² − y²)z)

have the integrals

xyz = c1, x² + y² − 2z = c2.   (4)

For the curve in question we have the freedom equations

x = t, y = −t, z = 1.

Substituting these values in the pair of equations (4), we have the pair

−t² = c1, 2t² − 2 = c2,

and eliminating t from them, we find the relation

2c1 + c2 + 2 = 0,

showing that the desired integral surface is

x² + y² + 2xyz − 2z + 2 = 0.
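The result can be verified numerically (an added sketch, not part of the original): solving the surface equation for z and forming p, q by finite differences, the PDE residual vanishes, and the surface contains the line x + y = 0, z = 1.

```python
def z(x, y):
    # Solve x^2 + y^2 + 2xyz - 2z + 2 = 0 for z (valid where xy != 1).
    return (x * x + y * y + 2.0) / (2.0 - 2.0 * x * y)

h = 1e-6
residuals = []
for (x, y) in [(0.5, -0.5), (0.3, 0.2), (-0.4, 0.6)]:
    p = (z(x + h, y) - z(x - h, y)) / (2 * h)
    q = (z(x, y + h) - z(x, y - h)) / (2 * h)
    # Residual of x(y^2 + z)p - y(x^2 + z)q = (x^2 - y^2)z on the surface.
    residuals.append(x * (y * y + z(x, y)) * p
                     - y * (x * x + z(x, y)) * q
                     - (x * x - y * y) * z(x, y))

# The straight line x + y = 0, z = 1 lies on the surface:
on_line = [z(t, -t) for t in (-1.0, 0.0, 2.0)]
```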

CYP QUESTIONS

1. Find the equation of the integral surface of the differential equation

   2y(z − 3)p + (2x − z)q = y(2x − 3)

   which passes through the circle z = 0, x² + y² = 2x.

2. Find the general integral of the partial differential equation

   (2xy − 1)p + (z − 2x²)q = 2(x − yz),

   and also the particular integral which passes through the line x = 1, y = 0.

3. Find the integral surface of the equation

   (x − y)y²p + (y − x)x²q = (x² + y²)z

   through the curve xz = a³, y = 0.

4. Find the general solution of the equation

   2x(y + z²)p + y(2y + z²)q = z³,

   and deduce that yz(z² + yz − 2y) = x² is a solution.

5. Find the general integral of the equation

   (x − y)p + (y − x − z)q = z,

   and the particular solution through the circle z = 1, x² + y² = 1.

6. Find the general solution of the differential equation

   x(z + 2a)p + (xz + 2yz + 2ya)q = z(z + a).

   Find also the integral surfaces which pass through the curves:

   a. y = 0, z² = 4ax
   b. y = 0, z³ + x(z + a)² = 0
Unit -10

NONLINEAR PARTIAL
DIFFERENTIAL EQUATIONS
OF THE FIRST ORDER

We turn now to the more difficult

problem of finding the solutions of

the partial differential equation

F (x, y, z, p, q) = 0 (1)

in which the function F is not

necessarily linear in p and q.

We saw that the partial differential equation of the two-parameter system

f(x, y, z, a, b) = 0   (2)
was of this form. It will be shown a

little later that the converse is also true; i.e., that any partial differential equation of the type (1) has solutions
equation of the type (1) has solutions

of the type (2). Any envelope of the

system (2) touches at each of its

points a member of the system. It

possesses therefore the same set of

values (x,y,z,p,q) as the particular

surface, so that it must also be a

solution of the differential equation.

In this way we are led to three classes of integrals of a partial differential equation of the type (1):

a. Two-parameter systems of surfaces

   f(x, y, z, a, b) = 0.   (2)
Such an integral is called a

complete integral.

b. If we take any one-parameter subsystem

   f{x, y, z, a, φ(a)} = 0

   of the system (2) and form its envelope, we obtain a solution of equation (1). When the function φ(a) which defines this subsystem is arbitrary, the solution obtained is called the general integral of (1) corresponding to the complete integral (2). When a definite function φ(a) is used, we obtain a particular case of the general integral.
If the envelope of the two-parameter

system (2) exists, it is also a solution

of the equation (1); it is called the

singular integral of the equation.

We can illustrate these three kinds of solution with reference to the partial differential equation

z²(1 + p² + q²) = 1.   (3)

We showed that

(x − a)² + (y − b)² + z² = 1   (4)

was a solution of this equation with a and b arbitrary. Since it contains two arbitrary constants, the solution (4) is thus a complete integral of the equation (3).
Putting b = a in equation (4), we obtain the one-parameter subsystem

(x − a)² + (y − a)² + z² = 1,

whose envelope is obtained by eliminating a between this equation and

x + y − 2a = 0,

so that it has equation

(x − y)² + 2z² = 2.   (5)

Differentiating both sides of this equation with respect to x and y, respectively, we obtain the relations

2zp = y − x, 2zq = x − y,

from which it follows immediately

that (5) is an integral surface of the

equation (3). It is a solution of type

(b); i.e., it is a general integral of the

equation (3).

The envelope of the two-parameter system (4) is obtained by eliminating a and b from equation (4) and the two equations

x − a = 0, y − b = 0,

which gives z² = 1, i.e., the pair of planes z = ±1; this is the singular integral of equation (3). Again,

(y − mx − c)² = (1 + m²)(1 − z²)   (6)

is a complete integral of equation (3), since it contains two arbitrary constants m and c, and it cannot be derived from the complete integral (4) by a simple change in the values of a and b. It can be readily shown, however, that the solution (6) is the envelope of the one-parameter subsystem of (4) obtained by taking b = ma + c.

CYP QUESTIONS

1. verify that z = x + by + + b

- b is a complete integral of the

partial differential equation

z = px + qy + p + q pq

where and b are arbitrary

constants. Show that the

envelope of all plans

corresponding to complete

integrals provides a singular

solution of the differential

equation, and determine a


general solution by finding the

envelope of those planes that

pass through the origin.

2. Verify that the equations

a. Z= 2x + + 2y + b
2 -1
b. z + = 2(1 + )(x+y)

are both complete integrals

of the partial differential

equation
1 1
Z= p + q

show, further, that the

complete integral (b) is the

envelope of the

oneparameter subsystem

obtained by taking

b = - - 1+

in the solution ().


10.2 CAUCHY'S METHOD OF
CHARACTERISTICS

We shall now consider methods of solving the nonlinear partial differential equation

F(x, y, z, ∂z/∂x, ∂z/∂y) = 0.   (1)

In this section we shall consider a

method, due to Cauchy, which is

based largely on geometrical ideas.

The plane passing through the point P(x0, y0, z0) with its normal parallel to the direction n defined by the direction ratios (p0, q0, −1) is uniquely specified by the set of numbers (x0, y0, z0, p0, q0). Conversely, any such set of five real numbers defines a plane in three-dimensional space. For this reason a set of five numbers (x, y, z, p, q) is called a plane element of the space. In particular, a plane element (x0, y0, z0, p0, q0) whose components satisfy an equation

F(x, y, z, p, q) = 0   (2)

is called an integral element of the equation (2) at the point (x0, y0, z0).

It is theoretically possible to solve an equation of the type (2) to obtain an expression

q = G(x, y, z, p)   (3)

from which to calculate q when x, y, z and p are known. Keeping x0, y0, and z0
Figure 16

fixed and varying p, we obtain a set

of plane elements {x 0 ,y 0 ,z 0 ,p,G

(x 0 ,y 0 ,z 0 ,p)}, which depend on the

single parameter p. As p varies, we

obtain a set of plane elements all

of which pass through the point P and

which therefore envelop a cone with

vertex p;
the cone so generated is called the

elementary cone of equation (2) at

the point P (cf. Fig. 16).

Consider now a surface S whose equation is

z = g(x, y)    (4)

If the function g(x, y) and its first partial derivatives g_x(x, y), g_y(x, y) are continuous in a certain region R of the xy plane, then the tangent plane at each point of S determines a plane element of the type

{x₀, y₀, g(x₀, y₀), g_x(x₀, y₀), g_y(x₀, y₀)}    (5)

which we shall call the tangent element of the surface S at the point {x₀, y₀, g(x₀, y₀)}.

It is obvious on geometrical grounds

that:
Theorem.
A necessary and sufficient condition

that a surface be an integral surface

of a partial differential equation is

that at each point its tangent element

should touch the elementary cone of

the equation.

A curve C with parametric equations

x = x(t),  y = y(t),  z = z(t)    (6)

lies on the surface (4) if

z(t) = g {x(t),y(t)}

for all values of t in the

appropriate interval I. If P₀ is the point on this curve determined by the parameter t₀, then the direction ratios of the tangent line P₀P₁ (cf. Fig. 17) are {x'(t₀), y'(t₀), z'(t₀)}, where x'(t₀) denotes the value of dx/dt when t = t₀, etc. This direction will be perpendicular to the direction (p₀, q₀, -1) if

z'(t₀) = p₀x'(t₀) + q₀y'(t₀)

For this reason we say that any set

{x(t),y(t),z(t),p(t),q(t)} (7)

of five real functions satisfying the

condition

z'(t) = p(t)x'(t) + q(t)y'(t) (8)

defines a strip at the point (x,y,z) of

the curve C. If such a strip is also an

integral element of equation (2),


we say that it is an integral strip of

equation (2); ie., the set of functions

(7) is an integral strip of equation (2)

provided they satisfy condition (8)


and the further condition

F{x(t), y(t), z(t), p(t), q(t)} = 0    (9)

for all t in I.

If at each point the curve (6) touches

a generator of the elementary cone,

we say that the corresponding strip

is a characteristic strip. We shall now

derive the equations determining a

characteristic strip. The point (x +

dx, y + dy, z + dz) lies in the tangent

plane to the elementary cone at P if

dz = p dx + q dy (10)
where p, q satisfy the relation (2).

Differentiating (10) with respect to p, we obtain

0 = dx + (dq/dp) dy    (11)

where, from (2),

F_p + F_q (dq/dp) = 0    (12)

Solving the equations (10), (11), and

(12) for the ratios of dy, dz to dx, we

obtain

dx/F_p = dy/F_q = dz/(pF_p + qF_q)    (13)

so that along a characteristic strip x'(t), y'(t), z'(t) must be proportional to F_p, F_q, pF_p + qF_q, respectively. If we choose the parameter t in such a way that

x'(t) = F_p,  y'(t) = F_q    (14)

then

z'(t) = pF_p + qF_q    (15)

Along a characteristic strip p is a function of t, so that

p'(t) = (∂p/∂x) x'(t) + (∂p/∂y) y'(t)
      = (∂p/∂x) F_p + (∂p/∂y) F_q
      = (∂p/∂x) F_p + (∂q/∂x) F_q

since ∂p/∂y = ∂q/∂x.

Differentiating equation (2) with

respect to x, we find that

F_x + p F_z + F_p (∂p/∂x) + F_q (∂q/∂x) = 0

so that on a characteristic strip

p'(t) = -(F_x + pF_z)    (16)


and it can be shown similarly that

q'(t) = -(F_y + qF_z)    (17)

Collecting equations (14) to (17)

together, we see that we have the

following system of five ordinary

differential equations for the

determination of the characteristic

strip

x'(t) = F_p,  y'(t) = F_q,  z'(t) = pF_p + qF_q,
p'(t) = -(F_x + pF_z),  q'(t) = -(F_y + qF_z)    (18)

These equations are known as the

characteristic equations of the

differential equation (2). If the

functions which appear in equations

(18) satisfy a Lipschitz condition,

there is a unique solution of the

equations for each prescribed set of

initial values of the variables.


Therefore the characteristic strip is

determined uniquely by any initial


element (x₀, y₀, z₀, p₀, q₀) and any initial value t₀ of t.

The main theorem about

characteristic strips is:

Theorem.

Along every characteristic strip of the

equation F(x,y,z,p,q) = 0 the

function F(x,y,z,p,q) is a constant.

Proof.

Along a characteristic strip we have

d/dt F{x(t), y(t), z(t), p(t), q(t)}
  = F_x x' + F_y y' + F_z z' + F_p p' + F_q q'
  = F_x F_p + F_y F_q + F_z (pF_p + qF_q) - F_p (F_x + pF_z) - F_q (F_y + qF_z)
  = 0

So that F(x,y,z,p,q) = k, a constant

along the strip.

As a corollary we have immediately:



Theorem:

If a characteristic strip contains at

least one integral element of

F(x,y,z,p,q) = 0 it is an integral strip

of the equation F(x, y, z, z_x, z_y) = 0.
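The cancellation that drives the theorem above can be checked mechanically. The following sketch (using the sympy library, assuming it is available; the five partial derivatives of F are treated as independent symbols, a simplification of ours) substitutes the characteristic equations (18) into dF/dt and confirms that the result vanishes identically.

```python
# Sketch: dF/dt = F_x x' + F_y y' + F_z z' + F_p p' + F_q q' vanishes
# when the characteristic equations (18) are substituted.
import sympy as sp

p, q = sp.symbols('p q')
Fx, Fy, Fz, Fp, Fq = sp.symbols('F_x F_y F_z F_p F_q')

# x' = F_p, y' = F_q, z' = pF_p + qF_q, p' = -(F_x + pF_z), q' = -(F_y + qF_z)
dF_dt = (Fx*Fp + Fy*Fq + Fz*(p*Fp + q*Fq)
         - Fp*(Fx + p*Fz) - Fq*(Fy + q*Fz))

print(sp.simplify(dF_dt))  # 0
```

Since the expression is identically zero as a polynomial in the symbols, F is constant along every characteristic strip, which is exactly the content of the theorem.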

We are now in a position to solve

Cauchy's problem. Suppose we wish

to find the solution of the partial

differential equation (1) which passes

through a curve whose freedom

equations are

x = θ(v),  y = φ(v),  z = ψ(v)    (19)

then in the solution

x = x(p₀, q₀, x₀, y₀, z₀, t₀, t),  etc.    (20)

of the characteristic equations (18)

we may take x₀ = θ(v), y₀ = φ(v), z₀ = ψ(v) as the initial values of x, y, z. The corresponding initial values of p₀, q₀ are determined by the relations

ψ'(v) = p₀ θ'(v) + q₀ φ'(v)

F{θ(v), φ(v), ψ(v), p₀, q₀} = 0

If we substitute these values of x₀, y₀, z₀, p₀, q₀ and the appropriate value of t₀ in equation (20), we find

that x, y, z can be expressed in terms

of the two parameters t, v, to give

x = X₁(v, t),  y = Y₁(v, t),  z = Z₁(v, t)    (21)

Eliminating v, t from these three

equations, we get a relation

Φ(x, y, z) = 0

which is the equation of the integral surface of equation (1) through the given curve (19). We shall illustrate this procedure by an example.

Example.

Find the solution of the equation

z = ½(p² + q²) + (p - x)(q - y)

which passes through the x-axis.

Solution.

It is readily shown that the initial values are x₀ = v, y₀ = 0, z₀ = 0, p₀ = 0, q₀ = 2v, t₀ = 0. The characteristic equations of this partial differential equation are

dx/dt = p + q - y,  dy/dt = p + q - x,  dz/dt = p(p + q - y) + q(p + q - x)

dp/dt = p + q - y,  dq/dt = p + q - x



from which it follows immediately that

x = v + p,  y = q - 2v

Also it is readily shown that

d/dt (p + q - x) = p + q - x,  d/dt (p + q - y) = p + q - y

giving

p + q - x = v e^t,  p + q - y = 2v e^t

Hence we have

x = v(2e^t - 1),  y = v(e^t - 1),  p = 2v(e^t - 1),  q = v(e^t + 1)    (22)

Substituting in the third of the characteristic equations, we have

dz/dt = 5v²e^(2t) - 3v²e^t

with solution

z = (5/2)v²(e^(2t) - 1) - 3v²(e^t - 1)    (23)



Now from the first pair of equations (22) we have

e^t = (y - x)/(2y - x),  v = x - 2y

so that, substituting in (23), we obtain the solution

z = ½ y(4x - 3y)
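The surface obtained in the example can be verified directly. The sketch below (using the sympy library, assuming it is available) checks that z = ½y(4x - 3y) satisfies the original equation and contains the x-axis.

```python
# Sketch: check the solution of the worked example above.
import sympy as sp

x, y = sp.symbols('x y')
z = sp.Rational(1, 2)*y*(4*x - 3*y)        # candidate integral surface
p, q = sp.diff(z, x), sp.diff(z, y)        # p = z_x, q = z_y

# The equation is z = (p^2 + q^2)/2 + (p - x)(q - y)
residual = z - ((p**2 + q**2)/2 + (p - x)*(q - y))
print(sp.simplify(residual))  # 0
print(z.subs(y, 0))           # 0, so the surface passes through the x-axis
```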

CYP QUESTIONS

1. Find the characteristics of the

equation pq = z, and determine

the integral surface which passes


through the parabola x = 0, y² = z.

2. Write down, and integrate

completely, the equations for the

characteristics of
(1 + q²)z = px
Expressing x, y, z and p in terms

of φ, where q = tan φ, determine

the integral surface which passes


through the parabola x² = 2z, y = 0.

3. Determine the characteristics of


the equation z = p² - q², and find
the integral surface which passes

through the parabola 4z + x² = 0,

y = 0.

4. Integrate the equations for the

characteristics of the equation


p² + q² = 4z

expressing x, y, z, and p in terms

of q, and then find the solutions

of this equation which reduce to


z = x² + 1 when y = 0.
10.3 COMPATIBLE SYSTEMS OF
FIRST-ORDER EQUATIONS

We shall next consider the condition

to be satisfied in order that every

solution of the first order partial

differential equation

f(x, y, z, p, q) = 0 (1)

is also a solution of the equation

g(x, y, z, p, q) = 0 (2)

When such a situation arises, the

equations are said to be compatible.

If

J = ∂(f, g)/∂(p, q) ≠ 0    (3)

We can solve equations (1) and (2)

to obtain the explicit expressions


p = φ(x, y, z),  q = ψ(x, y, z)    (4)

for p and q. The condition that the

pair of equations (1) and (2) should

be compatible reduces then to the

condition that the system of

equations (4) should be completely

integrable, i.e., that the equation

φ dx + ψ dy - dz = 0

should be integrable. We see that

the condition that this equation is

integrable is

-φψ_z + ψφ_z - (ψ_x - φ_y) = 0

which is equivalent to

ψ_x + φψ_z = φ_y + ψφ_z    (5)
Substituting from equations (4) into

equation (1) and differentiating with

regard to x and z, respectively, we

obtain the equations


f_x + f_p φ_x + f_q ψ_x = 0

f_z + f_p φ_z + f_q ψ_z = 0

from which it is readily deduced that

f_x + φf_z + f_p (φ_x + φφ_z) + f_q (ψ_x + φψ_z) = 0

Similarly we may deduce from equation (2) that

g_x + φg_z + g_p (φ_x + φφ_z) + g_q (ψ_x + φψ_z) = 0

Solving these equations, we find that

ψ_x + φψ_z = (1/J) {∂(f, g)/∂(x, p) + φ ∂(f, g)/∂(z, p)}    (6)

where J is defined by equation (3).


If we had differentiated the given pair of equations with respect to y and z, we should have obtained

φ_y + ψφ_z = -(1/J) {∂(f, g)/∂(y, q) + ψ ∂(f, g)/∂(z, q)}    (7)

so that, substituting from equations (6) and (7) into equation (5) and replacing φ, ψ by p, q, respectively, we see that the condition that the two equations should be compatible is that

[f, g] = 0    (8)

where

[f, g] = ∂(f, g)/∂(x, p) + p ∂(f, g)/∂(z, p) + ∂(f, g)/∂(y, q) + q ∂(f, g)/∂(z, q)    (9)

Example.

Show that the equations xp = yq,

z(xp + yq) = 2xy are compatible and

then solve them.



Solution.

We may take f = xp - yq, g = z(xp + yq) - 2xy, so that

∂(f, g)/∂(x, p) = 2xy,  ∂(f, g)/∂(z, p) = -x(xp + yq)

∂(f, g)/∂(y, q) = -2xy,  ∂(f, g)/∂(z, q) = y(xp + yq)

from which it follows that

[f, g] = (xp + yq)(yq - xp) = 0

since xp = yq. The equations are therefore compatible.

It is readily shown that p = y/z, q = x/z, so that we have to solve

z dz = y dx + x dy

which has solution z² = 2xy + c₁, where c₁ is a constant.
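Both the bracket computation and the solution of this example can be checked symbolically. A sketch using the sympy library (the helper jac is our own shorthand for the Jacobians appearing in (9)):

```python
# Sketch: check [f, g] = 0 on xp = yq, and that z^2 = 2xy + c_1 gives p = y/z.
import sympy as sp

x, y, z, p, q, c1 = sp.symbols('x y z p q c_1')
f = x*p - y*q
g = z*(x*p + y*q) - 2*x*y

def jac(u, v, s, t):
    # Jacobian d(u, v)/d(s, t)
    return sp.diff(u, s)*sp.diff(v, t) - sp.diff(u, t)*sp.diff(v, s)

bracket = (jac(f, g, x, p) + p*jac(f, g, z, p)
           + jac(f, g, y, q) + q*jac(f, g, z, q))

# The bracket equals (xp + yq)(yq - xp), which vanishes where xp = yq:
print(sp.expand(bracket - (x*p + y*q)*(y*q - x*p)))  # 0

zsol = sp.sqrt(2*x*y + c1)
print(sp.simplify(sp.diff(zsol, x) - y/zsol))        # 0, i.e. p = y/z
```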



CYP QUESTIONS

1. Show that the equations


xp - yq = x,  x²p + q = xz are

compatible and find the solution.

2. Show that the equation z = px + qy is compatible with any equation f(x, y, z, p, q) = 0 that is homogeneous in x, y, and z.

Solve completely the simultaneous equations

z = px + qy,  2xy(p² + q²) = z(yp + xq)

3. Show that the equations f(x, y,

p, q) = 0, g(x, y, p, q) = 0 are

compatible if

∂(f, g)/∂(x, p) + ∂(f, g)/∂(y, q) = 0
Verify that the equations p =

P(x,y), q = Q(x,y) are compatible

if
∂P/∂y = ∂Q/∂x

4. If u₁ = ∂u/∂x, u₂ = ∂u/∂y, u₃ = ∂u/∂z, show that the equations f(x, y, z, u₁, u₂, u₃) = 0, g(x, y, z, u₁, u₂, u₃) = 0 are compatible if

∂(f, g)/∂(x, u₁) + ∂(f, g)/∂(y, u₂) + ∂(f, g)/∂(z, u₃) = 0

10.4. CHARPIT'S METHOD

A method of solving the partial

differential equation

f(x, y, z, p, q) = 0 (1)



due to Charpit, is based on the

considerations of the last section.

The fundamental idea in Charpit's

method is the introduction of a

second partial differential equation of

the first order

g(x, y, z, p, q, a) = 0 (2)
which contains an arbitrary constant

a and which is such that:

a. Equations (1) and (2) can be

solved to give

p = p(x, y, z, a),  q = q(x, y, z, a)

b. The equation

dz = p(x,y,z,a) dx + q (x,y,z,a) dy (3)


is integrable.

When such a function g has been

found, the solution of equation

F(x,y,z,a,b) = 0 (4)
containing two arbitrary

constants a, b will be a solution

of equation (1). From these considerations it will be seen that

equation (4) is a complete

integral of equation (1).

The main problem then is the

determination of the second equation

(2), but this has already been solved

in the last section, since we need

only seek an equation g = 0

compatible with the given equation

f = 0. The conditions for this are

symbolized in equations (3) and (8)

of the last section. Expanding the

latter equation, we see that it is

equivalent to the linear partial

differential equation
f_p ∂g/∂x + f_q ∂g/∂y + (pf_p + qf_q) ∂g/∂z - (f_x + pf_z) ∂g/∂p - (f_y + qf_z) ∂g/∂q = 0    (5)

for the determination of g. Our

problem then is to find a solution of

this equation, as simple as possible,

involving an arbitrary constant a, and

this we do by finding an integral of

the subsidiary equations

dx/f_p = dy/f_q = dz/(pf_p + qf_q) = dp/{-(f_x + pf_z)} = dq/{-(f_y + qf_z)}    (6)

in accordance with Theorem 3. These

equations, which are known as


Charpit's equations, are equivalent to

the characteristic equations (18) of Sec. 10.2. Once an integral g(x, y, z, p, q, a)

of this kind has been found, the

problem reduces to solving for p,q,

and then integrating equation (3) by



the methods of Sec.6 of Chap. 1.

It should be noted that not all of

Charpit's equations (6) need be used,

but that p or q must occur in the

solution obtained.

Example 7.

Find a complete integral of the

equation

p²x + q²y = z    (7)

The auxiliary equations are

dx/(2px) = dy/(2qy) = dz/{2(p²x + q²y)} = dp/(p - p²) = dq/(q - q²)

from which it follows that

(p² dx + 2px dp)/(p²x) = (q² dy + 2qy dq)/(q²y)

and hence that

p²x = aq²y    (8)

where a is a constant. Solving equations (7) and (8) for p, q, we have

p = √{az/((1 + a)x)},  q = √{z/((1 + a)y)}

so that equation (3) becomes in this case

√{(1 + a)/z} dz = √(a/x) dx + √(1/y) dy

with solution

√{(1 + a)z} = √(ax) + √y + b

which is therefore a complete integral of (7).
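As a check, the sketch below (using the sympy library, with the symbols assumed positive so that the square roots simplify cleanly) verifies that the complete integral just found does satisfy p²x + q²y = z.

```python
# Sketch: verify that sqrt((1 + a)z) = sqrt(ax) + sqrt(y) + b solves
# p^2 x + q^2 y = z, by solving the complete integral for z explicitly.
import sympy as sp

x, y, a, b = sp.symbols('x y a b', positive=True)
z = (sp.sqrt(a*x) + sp.sqrt(y) + b)**2 / (1 + a)   # complete integral, solved for z
p, q = sp.diff(z, x), sp.diff(z, y)

print(sp.simplify(p**2*x + q**2*y - z))  # 0
```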
PROBLEMS

Find the complete integrals of the

equations:
1. (p² + q²)y = qz

2. p = (z + qy)²

3. z² = pqxy

4. xp + 3yq = 2(z - x²q²)

5. px⁵ - 4q³x² + 6x²z - 2 = 0

6. 2(y + zq) = q(xp + yq)

7. 2(z + xp + yq) = yp²

10.5 SPECIAL TYPES OF FIRST-ORDER EQUATIONS

In this section we shall consider some

special types of first order partial

differential equations whose solutions

may be obtained easily by Charpit's

method.
a. Equations Involving Only p and q. For equations of the type

f(p, q) = 0    (1)

Charpit's equations reduce to

dx/f_p = dy/f_q = dz/(pf_p + qf_q) = dp/0 = dq/0

An obvious solution of these equations is

p = a    (2)
the corresponding value of q

being obtained from (1) in the

form

f(a,q) = 0 (3)
so that q = Q(a), a constant. The

solution of the equation is then

z = ax + Q(a)y + b (4)

where b is a constant.
We have chosen the equation dp

= 0 to provide our second

equation. In some problems the

amount of computation involved

is considerably reduced if we

take instead dq = 0, leading to q


= a.

Example 8.

Find a complete integral of the

equation pq = 1.

In this case Q(a) = 1/a, so that we

see, from equation (4), that a

complete integral is
z = ax + y/a + b

which is equivalent to a²x + y - az = c
where a, c are arbitrary constants.
b) Equations Not Involving the Independent Variables. If the partial differential equation is of the type

f(z, p, q) = 0    (5)

Charpit's equations take the forms

dx/f_p = dy/f_q = dz/(pf_p + qf_q) = dp/(-pf_z) = dq/(-qf_z)

the last of which leads to the

relation

p = aq (6)

Solving (5) and (6), we obtain

expressions for p, q from which a

complete integral follows

immediately.
Example.

Find a complete integral of the equation p²z² + q² = 1.

Putting p = aq, we find that

q²(1 + a²z²) = 1,  q = 1/√(1 + a²z²),  p = a/√(1 + a²z²)

Hence

√(1 + a²z²) dz = a dx + dy

which leads to the complete integral

az√(1 + a²z²) + log{az + √(1 + a²z²)} = 2a(ax + y + b)
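The implicit integral above can be checked without solving for z, since for a surface given implicitly by F(x, y, z) = 0 we have p = -F_x/F_z and q = -F_y/F_z. A sketch using the sympy library:

```python
# Sketch: check the implicit complete integral of p^2 z^2 + q^2 = 1.
import sympy as sp

x, y, z = sp.symbols('x y z')
a, b = sp.symbols('a b', positive=True)

w = sp.sqrt(1 + a**2*z**2)
F = a*z*w + sp.log(a*z + w) - 2*a*(a*x + y + b)

# For an implicit surface F(x, y, z) = 0:  p = -F_x/F_z,  q = -F_y/F_z
p = -sp.diff(F, x) / sp.diff(F, z)
q = -sp.diff(F, y) / sp.diff(F, z)

print(sp.simplify(p**2*z**2 + q**2 - 1))  # 0
```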

c) Separable Equations. We say that a first-order partial differential equation is separable if it can be written in the form

f(x, p) = g(y, q)    (7)

For such an equation Charpit's equations become

dx/f_p = -dy/g_q = dz/(pf_p - qg_q) = -dp/f_x = dq/g_y

so that we have an ordinary differential equation

dp/dx + f_x/f_p = 0

in x and p, which may be solved

to give p as a function of x and

an arbitrary constant a. Writing

this equation in the form f_p dp + f_x dx = 0, we see that its

solution is f(x, p) = a. Hence we

determine p, q from the relations

f(x, p) = a,  g(y, q) = a    (8)


and then proceed as in the

general theory.

Example.
Find a complete integral of the equation p²y(1 + x²) = qx².

Solution.

We first observe that we can write the equation in the form

p²(1 + x²)/x² = q/y = a²

so that

p = ax/√(1 + x²),  q = a²y

and hence a complete integral is

z = a√(1 + x²) + ½a²y² + b

where a and b are constants.
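A quick check of the separable example (a sketch using the sympy library, with the symbols assumed positive):

```python
# Sketch: verify z = a*sqrt(1 + x^2) + (1/2) a^2 y^2 + b solves
# p^2 y (1 + x^2) = q x^2.
import sympy as sp

x, y, a, b = sp.symbols('x y a b', positive=True)
z = a*sp.sqrt(1 + x**2) + sp.Rational(1, 2)*a**2*y**2 + b
p, q = sp.diff(z, x), sp.diff(z, y)

print(sp.simplify(p**2*y*(1 + x**2) - q*x**2))  # 0
```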

d) Clairaut Equations. A first

order partial differential equation

is said to be of Clairaut type if it

can be written in the form

z = px + qy + f(p,q) (9)
The corresponding Charpit

equations are
dx/(x + f_p) = dy/(y + f_q) = dz/(px + qy + pf_p + qf_q) = dp/0 = dq/0
so that we may take p = a, q = b.

If we substitute these values in (9),

we get the complete integral

z = ax + by + f(a, b) (10)
as is readily verified by direct

differentiation.

Example.

Find a complete integral of the

equation

(p + q)(z - xp - yq) = 1

Writing this equation in the form

z = xp + yq + 1/(p + q)

we see that a complete integral is

z = ax + by + 1/(a + b)
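For Clairaut-type equations the verification is immediate, since p = a and q = b on the complete integral. A sketch using the sympy library:

```python
# Sketch: verify z = ax + by + 1/(a + b) solves (p + q)(z - xp - yq) = 1.
import sympy as sp

x, y, a, b = sp.symbols('x y a b')
z = a*x + b*y + 1/(a + b)
p, q = sp.diff(z, x), sp.diff(z, y)   # p = a, q = b

print(sp.simplify((p + q)*(z - x*p - y*q) - 1))  # 0
```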
CYP QUESTIONS

Find complete integrals of the

equations:

1. p + q = pq

2. z = p² + q²

3. zpq = p + q

4. pq(x² + y²) = p² + q²

5. p²q² + x²y² = x²q²(x² + y²)

6. pqz = p²(xq + p²) + q²(yp + q²)
