Lecture 1. Content: Solution of first order ordinary differential equations (1 hour)
Lecture 1
Numerical solution of first order ordinary differential equations
Keywords: Initial Value Problem, Approximate solution, Picard method, Taylor series
A first order ordinary differential equation is an equation relating t, y(t) and its first order derivative. The most general form is:

F(t, y(t), y′(t)) = 0    (1.1)
A function y(t) is a solution of (1.1) if:
i) y(t) is differentiable
ii) y(t) satisfies the equation (1.1) identically
The first of these equations, y′(t) = −2y(t), represents the exponential decay of radioactive material, where y represents the amount of material at any given time and k = 2 is the rate of decay. It may be noted that y(t) = c exp(−2t) is a solution of the differential equation, as it identically satisfies the given differential equation for an arbitrarily chosen constant c. This
means that the differential equation has infinitely many solutions for different choices of
c. In other words, the real world problem has infinitely many solutions which we know is
not true. In fact, an initial condition should be specified for finding the unique solution of
the problem:
y(0) = A
That is, the amount of radioactive material present at time t=0 is A. When this initial
condition is imposed on the solution, the constant c is evaluated as A and the solution
y(t) = A exp(−2t) is now unique. The expression can now be used for computing the amount of material at any time t. In general, the first order initial value problem (IVP) is written as

y′(t) = f(t, y(t)), t0 ≤ t ≤ b, with y(t0) = y0    (1.2)
There exist several methods for finding solutions of differential equations. However, all
differential equations are not solvable. The following well known theorem from theory of
differential equations establishes the existence and uniqueness of solution of the IVP:
Let f(t, y) be continuous in a domain D = {(t, y): t0 ≤ t ≤ b, c ≤ y ≤ d} ⊂ R². If f satisfies a Lipschitz condition in the variable y and (t0, y0) is in D, then the IVP has a unique solution y = y(t) on some interval t0 ≤ t ≤ t0 + δ.
{The function f satisfies a Lipschitz condition means that there exists a positive constant L such that |f(t, y) − f(t, w)| ≤ L |y − w|}
The theorem gives conditions on function f(t, y) for existence and uniqueness of the
solution. But the solution has to be obtained by available methods. It may not be
possible to obtain analytical solution (in closed form) of a given first order differential
equation by known methods even when the above theorem guarantees its existence.
Sometimes it is very difficult to obtain the solution. In such cases, the approximate
solution of given differential equation can be obtained.
Approximate Solution
The classical methods for approximate solution of an IVP are:
i) Picard's method of successive approximations
ii) Taylor series method
In Picard's method, starting with the initial approximation y^(0)(t) = y0, successive approximations are computed as

y^(k+1)(t) = y0 + ∫ from t0 to t of f(s, y^(k)(s)) ds, k = 0, 1, 2, …    (1.3)
From the theory of differential equations, it can be proved that the above sequence of
approximations converges to the exact solution of IVP.
Example 1.1: Obtain the approximate solution of the IVP y′ = 1 + t y, y(0) = 1 using the Picard method. Obtain its first few successive approximations.
Solution: With y^(0)(t) = 1,
y^(1)(t) = 1 + ∫ from 0 to t of (1 + s) ds = 1 + t + t²/2
y^(2)(t) = 1 + ∫ from 0 to t of (1 + s(1 + s + s²/2)) ds = 1 + t + t²/2 + t³/3 + t⁴/8
and so on. The next iteration gives
y^(3)(t) = 1 + t + t²/2 + t³/3 + t⁴/8 + t⁵/15 + t⁶/48
The differential equation in example 1.1 is a linear first order equation. Its exact solution can be obtained as

y(t) = exp(t²/2) [1 + ∫ from 0 to t of exp(−s²/2) ds]
The closed form solution of differential equation in this case is possible. But the
expression involving an integral is difficult to analyze. The sequence of polynomials as
obtained by Picard method gives only approximate solution, but for many practical
problems this form of solution is preferred.
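The successive approximations can be generated mechanically. The sketch below (assuming the IVP of Example 1.1 is y′ = 1 + ty, y(0) = 1, consistent with the series shown above) represents each iterate as a list of exact rational polynomial coefficients and applies one Picard step at a time:

```python
from fractions import Fraction

def integrate_poly(p):
    """Integrate polynomial p (p[i] = coefficient of t^i) from 0 to t."""
    return [Fraction(0)] + [c / (i + 1) for i, c in enumerate(p)]

def picard_step(y):
    """One Picard iteration for y' = 1 + t*y, y(0) = 1:
    y_{k+1}(t) = 1 + integral_0^t (1 + s*y_k(s)) ds."""
    integrand = [Fraction(1)] + y   # 1 + t*y(t): multiplying by t shifts coefficients
    integral = integrate_poly(integrand)
    integral[0] += 1                # add the initial value y(0) = 1
    return integral

y = [Fraction(1)]                   # y^(0)(t) = 1
for _ in range(3):
    y = picard_step(y)
print(y)  # coefficients 1, 1, 1/2, 1/3, 1/8, 1/15, 1/48
```

Three iterations reproduce the polynomial 1 + t + t²/2 + t³/3 + t⁴/8 + t⁵/15 + t⁶/48 obtained above.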
Taylor Series Method
The IVP gives the solution y0 at the initial point t = t0. For a given step size h, the solution at t1 = t0 + h can be computed from the Taylor series as

y(t1) = y(t0 + h) = y(t0) + h y′(t0) + (h²/2) y″(t0) + (h³/6) y‴(t0) + …    (1.4)
The derivatives in (1.4) are obtained from the differential equation y′ = f(t, y) by the chain rule:

y″(t0) = [∂f/∂t + f ∂f/∂y] evaluated at t = t0

y‴(t0) = [∂²f/∂t² + 2f ∂²f/∂t∂y + f² ∂²f/∂y² + (∂f/∂y)(∂f/∂t + f ∂f/∂y)] evaluated at t = t0, and so on.
Substituting these derivatives and truncating the series (1.4) gives the approximate
solution at t1.
Example 1.2: Obtain the approximate solution y(t) of the IVP y′ = 1 + t y, y(0) = 1 using the Taylor series method.
Solution: Successive differentiation of the equation gives
y″ = y + t y′
y‴ = 2y′ + t y″
y⁗ = 3y″ + t y‴, and so on.
At t = 0: y(0) = 1, y′(0) = 1, y″(0) = 1, y‴(0) = 2, y⁗(0) = 3. Substituting in (1.4) with h = 0.1 gives
y(0.1) = 1 + 0.1 + (0.01/2)(1) + (0.001/6)(2) + (0.0001/24)(3) + …
It may be noted that the fifth and subsequent terms are smaller than the accuracy requirement, so the Taylor series can be truncated beyond the fourth order term. Accordingly y(0.1) = 1.1053.
Observe that the Picard method involves integration while Taylor series method
involves differentiation of the function f. Depending on the ease of operation, one can
select the appropriate method for finding the approximate solution. The number of
iterations in Picard method depends upon the accuracy requirement. The step size h
can be chosen sufficiently small to meet the accuracy requirement in case of Taylor
series method. For fixed h, more terms have to be included in the solution when more
accuracy is desired.
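As a sketch, one Taylor series step can be coded directly, with the derivative expressions hard-wired for the equation of Example 1.2 (read here as y′ = 1 + ty, which is consistent with the derivative chain and the value y(0.1) = 1.1053):

```python
def taylor_step(t0, y0, h):
    """One Taylor-series step for y' = 1 + t*y, keeping terms up to h^4.
    The derivatives follow from differentiating the equation repeatedly:
    y'' = y + t y', y''' = 2y' + t y'', y'''' = 3y'' + t y'''."""
    d1 = 1 + t0 * y0
    d2 = y0 + t0 * d1
    d3 = 2 * d1 + t0 * d2
    d4 = 3 * d2 + t0 * d3
    return y0 + h * d1 + h**2 / 2 * d2 + h**3 / 6 * d3 + h**4 / 24 * d4

print(taylor_step(0.0, 1.0, 0.1))  # ≈ 1.1053458
```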
In the category of methods that include Picard method and Taylor series method, the
approximate solution is given in the form of a mathematical expression.
Lecture 2
Numerical Methods: Euler method
Keywords: Numerical solution, grid points, local truncation error, rounding error
Numerical Solution
Numerical methods for solving ordinary differential equations are more popular due to
several reasons:
i) More computational effort is involved in the Picard and Taylor series methods for complex real life applications
ii) Easy availability of computers
iii) The numerical methods can still be applied in cases where a closed form expression for the function is not available but the values of the function f are known at finitely many discrete points; the analytical methods are then not applicable.
For example, the velocity of a particle is measured at given points and one is interested
to predict the position of particle at some times in future. In such cases the analytical
methods cannot be applied and one has to obtain solution by numerical methods.
In this lecture a very basic method known as Euler method is being discussed. The
method is illustrated with an example.
Euler method:
When the initial value problem (1.2) is solved numerically, the numeric values of the solution y = y(t) are obtained at finitely many (say n) discrete points in the interval of interest. Let these n points be equi-spaced in the interval [t0, b] as t1, t2, …, tn such that tk = t0 + kh, k = 1, 2, …, n. These points are known as grid points. Here the step size h is computed as h = (b − t0)/n. The numeric value of the solution is known at t = t0. The approximate numeric value yk of the solution at the kth grid point t = tk is an approximation to the exact solution y(tk) of the IVP. The Euler method specifies the formula for computing the solution:

y_{k+1} = y_k + h f(t_k, y_k); k = 0, 1, 2, …, n−1    (1.5)
[Fig 1.1: The Euler approximations y0, y1, y2, … at the grid points t1, t2, …, tn together with the exact solution y(t)]

Algorithm (Euler method):
Step 1: compute y_{k+1} = y_k + h f(t_k, y_k)
Step 2: set t_{k+1} = t_k + h, k = k + 1
Step 3: if k < n go to step 1, else stop
Example 1.3: Solve the initial value problem 3y′ = t − y; y(0) = 1 using the above algorithm.
The IVP is solved first for step size h = 1. The solution is obtained at t = 1, 2 and 3.
The computations are performed using MS-Excel diff-euler1.xls [See columns B and C. The column D gives the truncation error.] Note that the equation can be solved exactly. Its exact solution is y(t) = 4 exp(−t/3) + t − 3, so y(3.0) = 1.471518.
Next, the same problem is solved with step size h=0.5 up to t=3. The solution is
obtained successively at t=0.5, 1.0, 1.5, 2.0, 2.5, 3.0. [See columns E and F of the same
MS-Excel sheet. The column G gives the error.]
Comparing y computed at t=3.0 by two different step sizes, it is observed that solution
with smaller step size is closer to exact solution.
The computations are repeated with step size h=0.25 and 0.125 also. [See the excel
sheet columns H to M. The column O of the sheet gives exact solution at grid points
with h=0.125].
The table 1.1 shows the application of the Euler method for h = 0.125. The attached graph shows the difference between the exact solution and the solution obtained by the Euler method with h = 0.125. The following conclusions can be drawn:
i) The error accumulates as the solution advances to more grid points
ii) The accuracy of the approximate solution increases with decreasing step size
Table 1.1: Solution of Example 1.3 by Euler method with h = 0.125

tk    | yk        | exact sol | error
0.125 | 0.9583333 | 0.961758  | -0.00342
0.25  | 0.9236111 | 0.930178  | -0.00657
0.375 | 0.8955440 | 0.904988  | -0.00944
0.5   | 0.8738546 | 0.885927  | -0.01207
0.625 | 0.8582774 | 0.872745  | -0.01447
0.75  | 0.8485575 | 0.865203  | -0.01665
0.875 | 0.8444509 | 0.863070  | -0.01862
1.0   | 0.8457238 | 0.866125  | -0.02040
1.125 | 0.8521520 | 0.874157  | -0.02201
1.25  | 0.8635206 | 0.886963  | -0.02344
1.375 | 0.8796239 | 0.904347  | -0.02472
1.5   | 0.9002646 | 0.926123  | -0.02586
1.625 | 0.9252536 | 0.952111  | -0.02686
1.75  | 0.9544097 | 0.982141  | -0.02773
1.875 | 0.9875593 | 1.016046  | -0.02849
2.0   | 1.0245360 | 1.053668  | -0.02913
2.125 | 1.0651803 | 1.094857  | -0.02968
2.25  | 1.1093395 | 1.139466  | -0.03013
2.375 | 1.1568670 | 1.187356  | -0.03049
2.5   | 1.2076225 | 1.238393  | -0.03077
2.625 | 1.2614716 | 1.292448  | -0.03098
2.75  | 1.3182853 | 1.349399  | -0.03111
2.875 | 1.3779401 | 1.409126  | -0.03119
3.0   | 1.4403176 | 1.471518  | -0.03120
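These computations can be reproduced with a few lines of code. This is a sketch for the IVP of Example 1.3 (3y′ = t − y, y(0) = 1, consistent with the tabulated values); the function names are illustrative:

```python
import math

def euler(f, t0, y0, b, n):
    """Euler's method y_{k+1} = y_k + h f(t_k, y_k) on [t0, b] with n steps."""
    h = (b - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

f = lambda t, y: (t - y) / 3.0           # 3y' = t - y
y_approx = euler(f, 0.0, 1.0, 3.0, 24)   # h = 0.125
y_exact = 4 * math.exp(-1.0)             # y(3) = 4 exp(-3/3) + 3 - 3
print(y_approx, y_exact)  # ≈ 1.44032 vs 1.471518
```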
The Euler formula can also be obtained by replacing the derivative in the differential equation by the forward difference approximation

dy/dt at t = tk ≈ (y_{k+1} − y_k)/h
When k=0, the right side of the formula can be computed from known initial value y0.
Once y1 is computed, other yk, k=2, 3, 4, can be computed successively in the similar
manner.
Analysis:
Taylor's theorem gives

y(t_k + h) = y(t_k) + h y′(t_k) + (h²/2) y″(ξ), ξ ∈ (t_k, t_k + h)

Substituting the derivative from the differential equation and neglecting the second order term of the Taylor theorem gives the Euler formula, which is an approximation of the solution at the next grid point.
Starting from the initial condition at t0, the approximate solution y1 at t1 computed by the Euler method has error due to the following reasons:
i) the truncation error arising from neglecting the higher order terms of the Taylor series
ii) the rounding errors in the computations
The local truncation error is

T_{k+1} = y(t_k + h) − y_{k+1}
= y(t_k) + h y′(t_k) + (h²/2) y″(ξ) − [y_k + h f(t_k, y_k)]
= [y(t_k) − y_k] + h [f(t_k, y(t_k)) − f(t_k, y_k)] + (h²/2) y″(ξ)

|T_{k+1}| ≤ (h²/2) M, where M is a bound on |y″| in (t_k, t_k + h)
For y1, the initial condition y0 is assumed to be the correct solution; hence the local
truncation error is of order h2.
However, the solution at t2 has one more source of error and that is approximate value
of the solution y1 at t1 as computed in earlier step. This error is further accumulated as
solution is advanced to more grid points tk, k = 3, 4, …. The accumulation of error is evident from the fig. 1.1 also. The Final Global Error (F.G.E.) in computing the final value at the end of the interval (a, b); a = t0, b = t0 + Mh, is the accumulated error in M steps and is of order h. This means that the error E(y(b), h) in computing y(b) using step size h is approximated as

E(y(b), h) ≈ C h

Accordingly, E(y(b), h/2) ≈ C h/2.

Therefore, halving the step size will halve the FGE. The FGE gives an estimate of the computational effort required to obtain an approximation to the exact solution.
Repeated application gives

|T_{k+1}| ≤ (Mh/2L) [(1 + hL)^{k+1} − 1]

Using 0 ≤ (1 + x)^m ≤ e^{mx}; x ≥ −1 gives overall truncation error of order h:

|T_{k+1}| ≤ (Mh/2L) [e^{(k+1)hL} − 1]
Apart from the Euler method, there are numerous numerical methods for solving IVP. In
the next couple of lectures more methods are discussed. These are not exhaustive. The
selection of methods is based on the fact that these are generally used by scientists and
engineers in solving IVP because of their simplicity. Also, more complex techniques are
the combination of one or more of these and their development is on similar lines.
Lecture 3
Modified Euler Method
Modified Euler Method: A better estimate for the solution than the Euler method is expected if the average slope over the interval (t0, t1) is used instead of the slope at a point. This is what is done in the modified Euler method. The solution is approximated as a straight line in the interval (t0, t1) with slope the arithmetic average of the slopes at the beginning and end points of the interval.
[Fig 1.3: The predicted value y1p, the corrected value y1c and the slopes at the ends of the interval (t0, t1)]

y(t1) ≈ y1 = y0 + h (y′0 + y′1)/2 = y0 + h [f(t0, y(t0)) + f(t1, y(t1))]/2    (1.6)
However the value of y( t1) appearing on the RHS is not known. To handle this, the
value of y1p is first predicted by Euler method and then the predicted value is used in
(1.6) to compute y1 from which a better approximation y1c to y1 can be obtained:
y1p = y0 + h f(t0, y0)
y1c = y0 + h [f(t0, y0) + f(t1, y1p)]/2

In general,
y_{k+1} = y_k + h [f(t_k, y_k) + f(t_{k+1}, y_{k+1,p})]/2
In the fig (1.3), observe that the black dotted line indicates the slope f(t0, y(t0)) of the solution curve at t0, and the red line indicates the slope f(t1, y(t1)) at the end point t1. Since the solution y(t1) at the end point is not known at the moment, its approximation y1p as obtained from the Euler method is used. The blue line indicates the average slope. Accordingly, y1c is a better estimate than y1p. The method is also known as Heun's Method.
Algorithm: at each step compute y_{k+1,p} and y_{k+1,c}; set t_{k+1} = t_k + h, k = k + 1; if k < n go to step 1, else stop.
Example 1.4: Solve the IVP dy/dt = y − 2t² + 1; y(0) = 0.5 in the interval (0.0, 2.0) using the modified Euler method with step size h = 0.2.
Analysis: From Taylor's theorem,

y(t_k + h) = y(t_k) + h y′(t_k) + (h²/2) y″(t_k) + (h³/6) y‴(ξ), ξ ∈ (t_k, t_k + h)

Replacing y″(t_k) by its forward difference approximation (y′(t_{k+1}) − y′(t_k))/h gives

y(t_k + h) = y(t_k) + h y′(t_k) + (h²/2) (y′(t_{k+1}) − y′(t_k))/h + O(h³)

Further simplification gives the local truncation error of the modified Euler formula as O(h³):

y(t_k + h) = y(t_k) + (h/2) (y′(t_k) + y′(t_{k+1})) + (h³/6) y‴(ξ), ξ ∈ (t_k, t_k + h)
The FGE in this method is of order h². This means that halving the step size will reduce the error to about 25% of its previous value.
Solution of Example 1.4 by modified Euler method with h = 0.2:

tk  | yk       | f(tk,yk) | tk+1 | y(k+1)p  | f(tk+1, y(k+1)p) | y(k+1)c
0.0 | 0.5      | 1.5      | 0.2  | 0.8      | 1.72     | 0.822
0.2 | 0.822    | 1.742    | 0.4  | 1.1704   | 1.8504   | 1.18124
0.4 | 1.18124  | 1.86124  | 0.6  | 1.553488 | 1.833488 | 1.550713
0.6 | 1.550713 | 1.830713 | 0.8  | 1.916855 | 1.636855 | 1.89747
0.8 | 1.89747  | 1.61747  | 1.0  | 2.220964 | 1.220964 | 2.181313
1.0 | 2.181313 | 1.181313 | 1.2  | 2.417576 | 0.537576 | 2.353202
1.2 | 2.353202 | 0.473202 | 1.4  | 2.447842 | -0.47216 | 2.353306
1.4 | 2.353306 | -0.56669 | 1.6  | 2.239967 | -1.88003 | 2.108634
1.6 | 2.108634 | -2.01137 | 1.8  | 1.70636  | -3.77364 | 1.530133
1.8 | 1.530133 | -3.94987 | 2.0  | 0.740159 | -6.25984 | 0.509162
2.0 | 0.509162 | -6.49084 | 2.2  | -0.78901 | -9.46901 | -1.08682
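The predictor-corrector steps above can be sketched in code. The IVP is read as dy/dt = y − 2t² + 1, y(0) = 0.5, which is consistent with the tabulated slopes:

```python
def modified_euler(f, t0, y0, t_end, h):
    """Heun's (modified Euler) method: predictor y_p = y + h f(t, y),
    then corrector y = y + (h/2) [f(t, y) + f(t + h, y_p)]."""
    t, y = t0, y0
    while t < t_end - 1e-12:
        k1 = f(t, y)
        y_pred = y + h * k1
        y = y + h / 2 * (k1 + f(t + h, y_pred))
        t += h
    return y

f = lambda t, y: y - 2 * t**2 + 1        # Example 1.4
y_end = modified_euler(f, 0.0, 0.5, 0.4, 0.2)
print(y_end)  # ≈ 1.18124, matching the table after two steps
```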
The Euler method results from the forward difference approximation

dy/dt at t = tk = (y_{k+1} − y_k)/h + O(h)

The backward and central difference approximations can also be used to give single step methods:

dy/dt at t = tk = (y_k − y_{k−1})/h + O(h)

or

dy/dt at t = tk = (y_{k+1} − y_{k−1})/(2h) + O(h²)
Module 1
Numerical Solution of Ordinary Differential Equations
Lecture 4
Runge Kutta Method
The accuracy of the Taylor series method improves on adding more and more terms of the series till the desired accuracy is achieved. This requires the expressions for several higher order derivatives and their evaluation, which poses practical difficulties in the application of the Taylor series method:
Higher order derivatives may not be easily obtained
Even if the expressions for derivatives are obtained, lot of computational effort may
still be required in their numerical evaluation
It is possible to develop one step algorithms which require evaluation of only the first derivative, as in the Euler method, but yield accuracy of higher order, as in the Taylor series. These methods require evaluations of f(t, y(t)) at more than one point on the interval [tk, tk+1]. This category of methods is known as Runge-Kutta methods of order 2, 3 and more, depending upon the order of accuracy. A general Runge-Kutta algorithm is given as

y_{k+1} = y_k + h Φ(t_k, y_k, h)    (1.7)
The function Φ is termed the increment function. The mth order Runge-Kutta method gives accuracy of order h^m. The function Φ is chosen in such a way that, when expanded, the right hand side of (1.7) matches with the Taylor series up to the desired order. This means that for a second order Runge-Kutta method the right side of (1.7) matches up to the second order terms of the Taylor series.
Second Order Runge Kutta Methods
The second order Runge-Kutta methods are known as RK2 methods. For the derivation of second order Runge-Kutta methods, it is assumed that Φ is the weighted average of two functional evaluations at suitable points in the interval [tk, tk+1]:

Φ(t_k, y_k, h) = w1 K1 + w2 K2
K1 = f(t_k, y_k)    (1.8)
K2 = f(t_k + ph, y_k + qhK1); 0 ≤ p, q ≤ 1
Here, four constants w1, w2, p and q are introduced. These are to be chosen in such a
way that the expansion matches with the Taylor series up to second order terms.
For this
K2 = f(t_k + ph, y_k + qhK1)
= f(t_k, y_k) + ph f_t(t_k, y_k) + qhK1 f_y(t_k, y_k) + O(h²)    (1.9)

Substituting in (1.7),

y_{k+1} = y_k + h [w1 f(t_k, y_k) + w2 {f(t_k, y_k) + ph f_t(t_k, y_k) + qh f(t_k, y_k) f_y(t_k, y_k) + O(h²)}]

or

y_{k+1} = y_k + h (w1 + w2) f(t_k, y_k) + h² w2 [p f_t(t_k, y_k) + q f(t_k, y_k) f_y(t_k, y_k)] + O(h³)    (1.10)

The Taylor series expansion of the exact solution is

y(t_{k+1}) = y(t_k) + h f(t_k, y(t_k)) + (h²/2) y″(t_k) + (h³/6) y‴(ξ); t_k ≤ ξ ≤ t_{k+1}
= y(t_k) + h f(t_k, y(t_k)) + (h²/2) [f_t(t_k, y(t_k)) + f(t_k, y(t_k)) f_y(t_k, y(t_k))] + O(h³)    (1.11)

Comparing (1.10) with (1.11) gives

w1 + w2 = 1, w2 p = 1/2, w2 q = 1/2    (1.12)
Observe that four unknowns are to be evaluated from three equations. Accordingly many solutions are possible for (1.12). Let us choose the arbitrary value q = 1; then

w1 = w2 = 1/2, p = 1 and q = 1    (1.13)

This is the same as the modified Euler method. It may be noted that the method reduces to a quadrature formula [Trapezoidal rule] when f(t, y) is independent of y:

y_{k+1} = y_k + (h/2) [f(t_k) + f(t_k + h)]
For convenience, q can be chosen between 0 and 1 such that one of the weights w in the method is zero. For example, choosing q = 1/2 makes w1 = 0 and (1.12) yields

w1 = 0, w2 = 1, p = q = 1/2

The resulting method is the midpoint method:

y_{k+1} = y_k + h f(t_k + h/2, ŷ_k), where ŷ_k = y_k + (h/2) f(t_k, y_k)    (1.14)
Choosing instead w1 = 1/4, w2 = 3/4, p = q = 2/3 gives another second order Runge-Kutta method known as the optimal RK2 method:

y_{k+1} = y_k + (h/4) f(t_k, y_k) + (3h/4) f(t_k + 2h/3, ŷ_k), where ŷ_k = y_k + (2h/3) f(t_k, y_k)    (1.15)
Example 1.5: Solve the IVP y′ = y/t − (y/t)²; y(1) = 1 in 1 < t < 2 with h = 0.1 using the optimal Runge-Kutta method (1.15).
tk  | yk       | f(tk,yk) | yk0      | tk+2h/3  | f(tk+2h/3, yk0) | yk+1
1.0 | 1        | 0        | 1        | 1.066667 | 0.05859375 | 1.004395
1.1 | 1.004395 | 0.07936  | 1.020267 | 1.166667 | 0.119744   | 1.015359
1.2 | 1.015359 | 0.130192 | 1.041398 | 1.266667 | 0.159038   | 1.030542
1.3 | 1.030542 | 0.164312 | 1.063404 | 1.366667 | 0.185456   | 1.048559
1.4 | 1.048559 | 0.188014 | 1.086162 | 1.466667 | 0.203807   | 1.068545
1.5 | 1.068545 | 0.204902 | 1.109525 | 1.566667 | 0.216858   | 1.089932
1.6 | 1.089932 | 0.217164 | 1.133364 | 1.666667 | 0.226297   | 1.112333
1.7 | 1.112333 | 0.226187 | 1.157571 | 1.766667 | 0.233198   | 1.135478
1.8 | 1.135478 | 0.232886 | 1.182055 | 1.866667 | 0.238273   | 1.15917
1.9 | 1.15917  | 0.23788  | 1.206746 | 1.966667 | 0.242006   | 1.183268
2.0 | 1.183268 | 0.241603 | 1.231588 | 2.066667 | 0.244737   | 1.207663
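A sketch of one step of the optimal RK2 formula (1.15) for the IVP of Example 1.5 follows; later digits of the tabulated values may differ slightly depending on how the spreadsheet formed the intermediate value:

```python
def rk2_optimal_step(f, t, y, h):
    """One step of the optimal RK2 method (1.15):
    y_{k+1} = y_k + (h/4) K1 + (3h/4) K2, K2 evaluated at t + 2h/3."""
    k1 = f(t, y)
    y_bar = y + 2 * h / 3 * k1
    k2 = f(t + 2 * h / 3, y_bar)
    return y + h / 4 * k1 + 3 * h / 4 * k2

f = lambda t, y: y / t - (y / t) ** 2    # Example 1.5
y1 = rk2_optimal_step(f, 1.0, 1.0, 0.1)
print(y1)  # ≈ 1.0043945, the first tabulated step
```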
Lecture 5
Fourth Order Runge Kutta Methods
The general form of a fourth order Runge-Kutta method is

y_{k+1} = y_k + h Φ(t_k, y_k, h)
Φ = w1 K1 + w2 K2 + w3 K3 + w4 K4
K1 = f(t_k, y_k)
K2 = f(t_k + p1 h, y_k + a21 h K1), and similarly for K3 and K4.

Matching the expansion with the Taylor series up to fourth order terms gives the classical fourth order Runge-Kutta (RK4) method:

y_{k+1} = y_k + (h/6) [K1 + 2K2 + 2K3 + K4]
K1 = f(t_k, y_k)
K2 = f(t_k + h/2, y_k + (h/2) K1)
K3 = f(t_k + h/2, y_k + (h/2) K2)
K4 = f(t_k + h, y_k + h K3)    (1.16)
It may be observed that RK4 uses four functional evaluations in the interval [ t0, t1].
These points are shown as p0, p1, p2 and p3 in the following figure
Example 1.6: Find the solution of the IVP dy/dt = 1/2 + y/t; y(1) = 0 using the classical fourth order Runge-Kutta method with h = 1.
Solution: The solution of the IVP by the RK4 classical method is shown in the following table:

h | k  | tk | yk      | k1      | k2      | k3      | k4       | yk+1     | exact sol | abs error
1 | 0  | 1  | 0       | 0.5     | 0.66667 | 0.72222 | 0.861111 | 0.689815 | 0.693147  | 0.00333
1 | 1  | 2  | 0.68981 | 0.84491 | 0.94491 | 0.96491 | 1.051574 | 1.6425   | 1.647918  | 0.00542
1 | 2  | 3  | 1.6425  | 1.0475  | 1.11893 | 1.12913 | 1.192908 | 2.765255 | 2.772589  | 0.00733
1 | 3  | 4  | 2.76526 | 1.19131 | 1.24687 | 1.25304 | 1.303659 | 4.014388 | 4.023595  | 0.00921
1 | 4  | 5  | 4.01439 | 1.30288 | 1.34833 | 1.35246 | 1.394475 | 5.364212 | 5.375278  | 0.01107
1 | 5  | 6  | 5.36421 | 1.39404 | 1.4325  | 1.43546 | 1.471381 | 6.797766 | 6.810686  | 0.01292
1 | 6  | 7  | 6.79777 | 1.47111 | 1.50444 | 1.50666 | 1.538054 | 8.302995 | 8.317766  | 0.01477
1 | 7  | 8  | 8.303   | 1.53787 | 1.56729 | 1.56902 | 1.59689  | 9.87089  | 9.887511  | 0.01662
1 | 8  | 9  | 9.87089 | 1.59677 | 1.62308 | 1.62447 | 1.649536 | 11.49446 | 11.51293  | 0.01847
1 | 9  | 10 | 11.4945 | 1.64945 | 1.67326 | 1.67439 | 1.697168 | 13.16811 | 13.18842  | 0.02032
1 | 10 | 11 | 13.1681 | 1.6971  | 1.71884 | 1.71978 | 1.740658 | 14.88727 | 14.90944  | 0.02217
Various entries of a row of the table are computed by the RK4 classical formula (1.16). For the computations in the table the user is referred to the excel sheet R4_CLASSICAL.xlsx/sheet1. The solutions are compared with the exact solution and the absolute error is given in the last column.
Other fourth order formulae are also in use. Kutta's 3/8 rule is

y_{k+1} = y_k + (h/8) [K1 + 3K2 + 3K3 + K4]
K1 = f(t_k, y_k)
K2 = f(t_k + h/3, y_k + (h/3) K1)
K3 = f(t_k + 2h/3, y_k − (h/3) K1 + h K2)
K4 = f(t_k + h, y_k + h K1 − h K2 + h K3)    (1.17)

and Gill's method is

y_{k+1} = y_k + (h/6) [K1 + 2(1 − 1/√2) K2 + 2(1 + 1/√2) K3 + K4]
K1 = f(t_k, y_k)
K2 = f(t_k + h/2, y_k + (h/2) K1)
K3 = f(t_k + h/2, y_k + (1/√2 − 1/2) h K1 + (1 − 1/√2) h K2)
K4 = f(t_k + h, y_k − (1/√2) h K2 + (1 + 1/√2) h K3)    (1.18)
Example 1.7: Find the solution of the IVP of example 1.6 using the fourth order Runge-Kutta formula (1.17).
Solution: In the following table the computations are shown to solve the IVP using (1.17). Although both methods are of the same order, (1.17) gives a more accurate result as compared to the classical method (1.16):
h | k  | tk | yk      | k1      | k2      | k3      | k4       | yk+1     | exact sol | abs error
1 | 0  | 1  | 0       | 0.5     | 0.625   | 0.775   | 0.825    | 0.690625 | 0.693147  | 0.00252
1 | 1  | 2  | 0.69063 | 0.84531 | 0.91674 | 0.9971  | 1.038765 | 1.643824 | 1.647918  | 0.00409
1 | 2  | 3  | 1.64382 | 1.04794 | 1.09794 | 1.15249 | 1.186578 | 2.76705  | 2.772589  | 0.00554
1 | 3  | 4  | 2.76705 | 1.19176 | 1.23022 | 1.27143 | 1.300004 | 4.016642 | 4.023595  | 0.00695
1 | 4  | 5  | 4.01664 | 1.30333 | 1.33458 | 1.36767 | 1.392176 | 5.366922 | 5.375278  | 0.00836
1 | 5  | 6  | 5.36692 | 1.39449 | 1.4208  | 1.44843 | 1.469863 | 6.80093  | 6.810686  | 0.00976
1 | 6  | 7  | 6.80093 | 1.47156 | 1.49429 | 1.518   | 1.537026 | 8.306613 | 8.317766  | 0.01115
1 | 7  | 8  | 8.30661 | 1.53833 | 1.55833 | 1.5791  | 1.59619  | 9.874961 | 9.887511  | 0.01255
1 | 8  | 9  | 9.87496 | 1.59722 | 1.61508 | 1.63355 | 1.649065 | 11.49898 | 11.51293  | 0.01395
1 | 9  | 10 | 11.499  | 1.6499  | 1.66603 | 1.68266 | 1.696865 | 13.17308 | 13.18842  | 0.01534
1 | 10 | 11 | 13.1731 | 1.69755 | 1.71226 | 1.72738 | 1.74048  | 14.8927  | 14.90944  | 0.01674
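One step of the 3/8-type formula (1.17) for the same IVP (read as dy/dt = 1/2 + y/t, y(1) = 0) can be sketched as follows; it reproduces the first row of the table above:

```python
def rk4_38_step(f, t, y, h):
    """One step of Kutta's 3/8 rule, formula (1.17)."""
    k1 = f(t, y)
    k2 = f(t + h / 3, y + h / 3 * k1)
    k3 = f(t + 2 * h / 3, y - h / 3 * k1 + h * k2)
    k4 = f(t + h, y + h * (k1 - k2 + k3))
    return y + h / 8 * (k1 + 3 * k2 + 3 * k3 + k4)

f = lambda t, y: 0.5 + y / t             # same IVP as Example 1.6
y1 = rk4_38_step(f, 1.0, 0.0, 1.0)
print(y1)  # ≈ 0.690625, matching the first row of the table
```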
Consistency, convergence and stability
Let T_{k+1}(h) be the local truncation error at the (k+1)th step of a one step method with step size h, assuming that no error was made in the previous step. It is obtained as

T_{k+1}(h) = y(t_{k+1}) − y(t_k) − h Φ(t_k, y(t_k), h)

A one step method is said to be consistent if

lim as h → 0 of T_{k+1}(h)/h = 0

It is now easy to verify that the Euler, modified Euler and Runge-Kutta methods are consistent.
A one step method is convergent when the difference between the exact solution and the solution of the difference equation satisfies

lim as h → 0 of max over 1 ≤ k ≤ N of |y(t_k) − y_k| = 0

Using the bound obtained above for T_k = y(t_k) − y_k proves the convergence of the Euler method.
Stability of a numerical method ensures that small changes in the initial conditions do not lead to large changes in the solution. This is particularly important as the initial conditions may not be given exactly. The approximate solution computed with errors in the initial condition is further used as the initial condition for computing the solution at the next grid point. This can produce large deviations in a solution started with small initial errors. Round off errors in computations may also affect the accuracy of the solution at a grid point. The Euler method is found to be stable.
According to the Lax equivalence theorem: given a properly posed initial value problem and a finite-difference approximation to it that satisfies the consistency condition, stability is the necessary and sufficient condition for convergence.
Lecture 6
Higher order Runge Kutta Methods
The Runge-Kutta-Fehlberg method uses six functional evaluations per step:

K1 = f(t_k, y_k)
K2 = f(t_k + h/4, y_k + (h/4) K1)
K3 = f(t_k + 3h/8, y_k + (3h/32) K1 + (9h/32) K2)
K4 = f(t_k + 12h/13, y_k + (1932h/2197) K1 − (7200h/2197) K2 + (7296h/2197) K3)
K5 = f(t_k + h, y_k + (439h/216) K1 − 8h K2 + (3680h/513) K3 − (845h/4104) K4)
K6 = f(t_k + h/2, y_k − (8h/27) K1 + 2h K2 − (3544h/2565) K3 + (1859h/4104) K4 − (11h/40) K5)

A fourth order estimate is

y_{k+1} = y_k + h [(25/216) K1 + (1408/2565) K3 + (2197/4104) K4 − (1/5) K5]

and a fifth order estimate is

ŷ_{k+1} = y_k + h [(16/135) K1 + (6656/12825) K3 + (28561/56430) K4 − (9/50) K5 + (2/55) K6]

The difference of the two estimates gives the local error estimate:

Error = ŷ_{k+1} − y_{k+1} = h [(1/360) K1 − (128/4275) K3 − (2197/75240) K4 + (1/50) K5 + (2/55) K6]    (1.19)
Since the lower order method is of order four, the step size adjustment factor s can be computed as

s = 0.84 (ε h / |T_{i+1}|)^(1/4)

Here, ε is the accuracy requirement and T_{i+1} = |ŷ_{i+1} − y_{i+1}| is the truncation error estimate.
If the desired accuracy is not achieved the solution is iterated taking new value of h.
Depending upon error requirement the step size h can be increased or decreased. The
solution yk+1 of desired accuracy is obtained at tk+1=tk+sh. The method is known as
RKF45. To implement the method, the user specifies the allowable smallest step size
hmin , largest step size hmax and the maximum allowable local truncation error . The
following algorithm is used to solve IVP using RKF45 formulae with self adjusting
variable step sizes:
Algorithm RKF45 (a typical implementation):
[Step 1] set k = 0, t = a = t0, y = y0, h = hmax, flag = 1
[Step 2] while (flag == 1) repeat steps 3-7
[Step 3] compute y_{k+1}, ŷ_{k+1} and R = |ŷ_{k+1} − y_{k+1}| / h
[Step 4] compute s = 0.84 (ε/R)^(1/4)
[Step 5] if (R ≤ ε) accept the step: set t = t + h, y = y_{k+1}, k = k + 1; if t ≥ b set flag = 0
[Step 6] update the step size: h = s h, restricted to hmin ≤ h ≤ hmax; if t + h > b set h = b − t
[Step 7] go to step 2
t   | y
0.1 | 1.005
0.2 | 1.02
0.3 | 1.047
0.4 | 1.088
0.5 | 1.145
0.6 | 1.225
0.7 | 1.337
0.8 | 1.497
0.9 | 1.741
1.0 | 2.146
Example 1.9: Find the solution of the IVP dy/dt = y − t² + 1; y(0) = 0.5 using the higher order Runge-Kutta method RKF45 given in (1.19) in the interval (0, 2), taking hmax = 0.25, hmin = 0.01 and accuracy ε = 0.000001.
Solution: The detailed solution of the problem is worked out in the excel sheet rkf45.xls
In the table, y5 denotes the fifth order estimate ŷ at tk+1; rows with a repeated t correspond to steps recomputed with a reduced step size h.

h        | t        | y        | k1       | k2       | k3       | k4       | k5       | k6       | tk+1     | y5         | exact
0.25     | 0        | 0.5      | 1.5      | 1.589844 | 1.638153 | 1.833988 | 1.863381 | 1.681916 | 0.25     | 0.92048705 | 0.920487292
0.188324 | 0        | 0.5      | 1.5      | 1.568405 | 1.604568 | 1.753716 | 1.775389 | 1.638313 | 0.188324 | 0.80850123 | 0.808501278
0.25     | 0.188324 | 0.808501 | 1.773035 | 1.856403 | 1.901019 | 2.079976 | 2.10662  | 1.94126  | 0.438324 | 1.2937217  | 1.293721978
0.192067 | 0.188324 | 0.808501 | 1.773035 | 1.83778  | 1.87192  | 2.011491 | 2.031676 | 1.903627 | 0.380391 | 1.17405097 | 1.174051068
0.20392  | 0.380391 | 1.174051 | 2.029354 | 2.091427 | 2.124074 | 2.255372 | 2.274263 | 2.154073 | 0.584311 | 1.61316482 | 1.613164996
0.206565 | 0.584311 | 1.613165 | 2.271745 | 2.326045 | 2.354349 | 2.465632 | 2.481366 | 2.380102 | 0.790876 | 2.10457288 | 2.104573148
0.214212 | 0.790876 | 2.104573 | 2.479088 | 2.524275 | 2.54744  | 2.634325 | 2.646171 | 2.568076 | 1.005088 | 2.6543036  | 2.654303965
0.225464 | 1.005088 | 2.654304 | 2.644102 | 2.676656 | 2.692615 | 2.74499  | 2.751256 | 2.706084 | 1.230552 | 3.26380128 | 3.263801766
0.24762  | 1.230552 | 3.263801 | 2.749544 | 2.763568 | 2.768682 | 2.767586 | 2.764947 | 2.771299 | 1.478171 | 3.94887292 | 3.948873499
0.25     | 1.478171 | 3.948873 | 2.763882 | 2.747947 | 2.735929 | 2.656062 | 2.640557 | 2.722156 | 1.728171 | 4.62774423 | 4.62774482
0.25     | 1.728171 | 4.627744 | 2.641168 | 2.586313 | 2.552099 | 2.370792 | 2.338767 | 2.517156 | 1.978171 | 5.25474897 | 5.254749428
0.206501 | 1.728171 | 4.627744 | 2.641168 | 2.596419 | 2.569447 | 2.430547 | 2.406828 | 2.541928 | 1.934673 | 5.15141426 | 5.151414894
0.215663 | 1.934673 | 5.151414 | 2.408456 | 2.326784 | 2.278814 | 2.042876 | 2.003464 | 2.231007 | 2.150336 | 5.63074392 | 5.630744518
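A single Runge-Kutta-Fehlberg step with the embedded error estimate can be sketched as follows for Example 1.9 (step size control is omitted; the exact solution used for checking is y = (t + 1)² − 0.5 e^t, consistent with the exact column of the table):

```python
import math

def rkf45_step(f, t, y, h):
    """One Runge-Kutta-Fehlberg step: returns the 4th and 5th order
    estimates; their difference estimates the local truncation error."""
    k1 = f(t, y)
    k2 = f(t + h / 4, y + h * k1 / 4)
    k3 = f(t + 3 * h / 8, y + h * (3 * k1 + 9 * k2) / 32)
    k4 = f(t + 12 * h / 13, y + h * (1932 * k1 - 7200 * k2 + 7296 * k3) / 2197)
    k5 = f(t + h, y + h * (439 * k1 / 216 - 8 * k2 + 3680 * k3 / 513 - 845 * k4 / 4104))
    k6 = f(t + h / 2, y + h * (-8 * k1 / 27 + 2 * k2 - 3544 * k3 / 2565
                               + 1859 * k4 / 4104 - 11 * k5 / 40))
    y4 = y + h * (25 * k1 / 216 + 1408 * k3 / 2565 + 2197 * k4 / 4104 - k5 / 5)
    y5 = y + h * (16 * k1 / 135 + 6656 * k3 / 12825 + 28561 * k4 / 56430
                  - 9 * k5 / 50 + 2 * k6 / 55)
    return y4, y5

f = lambda t, y: y - t ** 2 + 1          # Example 1.9
y4, y5 = rkf45_step(f, 0.0, 0.5, 0.25)
exact = (0.25 + 1) ** 2 - 0.5 * math.exp(0.25)
print(y4, y5, exact)  # both estimates ≈ 0.92049
```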
Exercise 1
1.1 Apply Euler's method to solve the initial value problems in the interval (0, 1]:
(a) y′ = 2y + 3t, y(0) = 1
(b) y′ = 2ty², y(0) = 1
(c) y′ = x/y, y(0) = 1
1.2 Apply the modified Euler method to the IVPs in Ex. 1.1 with h = 0.1. Compare the effort and accuracy achieved in the two methods.
1.3 Can Euler's method be applied to solve the following IVP in the interval (0, 2)?
y′(t) = 4 + y², y(0) = 0
1.4 Show that Euler's method and the modified Euler method fail to approximate the solution y(t) = 8t^(3/2) of the IVP
y′(t) = 6y^(1/3), y(0) = 0
1.5 Solve the following IVP using Runge-Kutta methods of order two and four:
y′(t) = t² + y², y(1) = 0 at t = 2 using h = 0.5
Content: Adams-Moulton method.
Module 2
Lecture 1
Consider the IVP

y′ = f(t, y); t0 ≤ t ≤ b with y(t0) = y0    (2.1)
One step methods for solving IVP (2.1) are those methods in which the solution yj+1 at
the j+1th grid point involves only one previous grid point where the solution is already
known. Accordingly, a general one step method may be written as
y_{j+1} = y_j + h Φ(t_j, y_j, h)
The increment function Φ depends on the solution yj at the previous grid point tj and the step size h. If y_{j+1} can be determined simply by evaluating the right hand side, then the method is an explicit method. The methods developed in module 1 are one step methods. These methods might use additional functional evaluations at a number of points between tj and t_{j+1}. These functional evaluations are not used in further computations at advanced grid points. In these methods the step size can be changed according to the requirement.
It may be reasonable to develop methods that use more information about the solution
(functional values and derivatives) at previously known values while computing solution
at the next grid point. Such methods using information at more than one previous grid
points are known as multi-step methods and are expected to give better results than
one step methods.
To determine the solution y_{j+1}, a multi-step or k-step method uses the values of y(t) and f(t, y(t)) at the k previous grid points t_{j−i}, i = 0, 1, 2, …, k−1. Here yj is called the initial point while the earlier values are the starting points. The starting points are computed using some suitable one step method. Thus multi-step methods are not self starting methods.
Integrating (2.1) over an interval (t_{j−k}, t_{j+1}) yields

y_{j+1} = y_{j−k} + ∫ from t_{j−k} to t_{j+1} of f(t, y(t)) dt ≈ y_{j−k} + ∫ from t_{j−k} to t_{j+1} of Σ over i = 0 to r of a_i t^i dt    (2.2)

where f(t, y(t)) is replaced by an interpolating polynomial of degree r; the choice of the interpolation points and of the integration formula determines the value of y_{j+1}. The predictor-corrector methods form a large class of general methods for
numerical integration of ordinary differential equations. A popular predictor-corrector
scheme is known as the Milne-Simpson method.
Milne-Simpson method
Its predictor is based on integration of f(t, y(t)) over the interval [t_{j−3}, t_{j+1}], with k = 3 and r = 3. The interpolating polynomial is considered to match the function at the three points t_{j−2}, t_{j−1} and t_j, and the function is extrapolated at both ends, in the intervals [t_{j−3}, t_{j−2}] and [t_j, t_{j+1}], as shown in Fig 2.2(a). Since the end points are not used, an open integration formula is used for the integral in (2.2):

p_{j+1} = y_{j+1} = y_{j−3} + (4h/3) [2f(t_j, y_j) − f(t_{j−1}, y_{j−1}) + 2f(t_{j−2}, y_{j−2})] + (14/45) h⁵ f⁽⁴⁾(ξ), ξ in (t_{j−3}, t_{j+1})    (2.3)
The explicit predictor formula is of O(h4) and requires starting values. These starting
values should also be of same order of accuracy. Accordingly, if the initial point is y0
then the starting values y1, y2 and y3 are computed by fourth order Runge kutta method.
Then predictor formula (2.3) predicts the approximate solution y4 as p4 at next grid point.
The predictor formula (2.3) is found to be unstable (proof not included) and the solution
so obtained may grow exponentially.
The predicted value is then improved using a corrector formula. The corrector formula is
developed similarly. For this, a second polynomial for f (t, y(t)) is constructed, which is
based on the points (tj1, fj1), (tj, fj) and the predicted point (tj+1, fj+1). The closed
integration of the interpolating polynomial over the interval [tj, tj+1] is carried out [See Fig
2.2 (b)]. The result is the familiar Simpson's rule:

y_{j+1} = y_{j−1} + (h/3) [f(t_{j+1}, y_{j+1}) + 4f(t_j, y_j) + f(t_{j−1}, y_{j−1})] − (1/90) h⁵ f⁽⁴⁾(ξ), ξ in (t_{j−1}, t_{j+1})    (2.4)
Fig 2.2 (a) Open Scheme for Predictor (b) Closed integration for Corrector
In the corrector formula fj+1 is computed from the predicted value pj+1 as obtained from
(2.3).
Denoting f_j = f(t_j, y_j), the equations (2.3) and (2.4) give the following predictor and corrector formulae, respectively, for solving the IVP (2.1) at equi-spaced discrete points t4, t5, …:

p_{j+1} = y_{j+1} = y_{j−3} + (4h/3) [2f_j − f_{j−1} + 2f_{j−2}]    (2.5)

y_{j+1} = y_{j−1} + (h/3) [f_{j+1} + 4f_j + f_{j−1}]
The solution at initial point t0 is given in the initial condition and t1, t2 and t3 are the
starting points where solution is to be computed using some other suitable method such
as Runge Kutta method. This is illustrated in the example 2.1
Example 2.1: Solve the IVP y′ = y + 3t − t² with y(0) = 1 using Milne's predictor-corrector method; take h = 0.1.
Solution: The following table 2.1 computes Starting values using fourth order Runge
Kutta method.
k | t   | y         | k1       | t+h/2 | y+(h/2)k1 | k2     | y+(h/2)k2 | k3      | t+h | y+h*k3 | k4      | y+h(k1+2k2+2k3+k4)/6
0 | 0   | 1         | 1        | 0.05  | 1.05      | 1.1975 | 1.059875  | 1.2074  | 0.1 | 1.1207 | 1.41074 | 1.1203415
1 | 0.1 | 1.1203415 | 1.410341 | 0.15  | 1.190859  | 0.2227 | 1.13147   | 1.5597  | 0.2 | 1.2762 | 1.83624 | 1.2338409
2 | 0.2 | 1.2338409 | 1.793841 | 0.25  | 1.323533  | 0.3218 | 1.24993   | 1.93741 | 0.3 | 1.4276 | 2.23758 | 1.3763387
3 | 0.3 | 1.3763387
Using initial value and starting values at t=0, 0.1, 0.2 and 0.3, the predictor formula
predicts the solution at t=0.4 as 1.7199359. It is used in corrector formula to give the
corrected value. The solution is continued at advanced grid points [see table 2.2].
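A sketch of the scheme in code follows (RK4 starting values, then the predictor and corrector of (2.5)); the results are checked against the exact solution y = 2e^t + t² − t − 1 of this IVP rather than the spreadsheet excerpts, so later digits may differ from the tables:

```python
import math

def f(t, y):                              # Example 2.1: y' = y + 3t - t^2
    return y + 3 * t - t ** 2

def rk4_step(t, y, h):
    """Classical RK4 step used only to generate the starting values."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

h = 0.1
t = [0.0, 0.1, 0.2, 0.3]
y = [1.0]
for j in range(3):                        # starting values y1, y2, y3 by RK4
    y.append(rk4_step(t[j], y[j], h))

fj = [f(tj, yj) for tj, yj in zip(t, y)]
p4 = y[0] + 4 * h / 3 * (2 * fj[3] - fj[2] + 2 * fj[1])   # Milne predictor (2.5)
c4 = y[2] + h / 3 * (f(0.4, p4) + 4 * fj[3] + fj[2])      # Simpson corrector

exact = 2 * math.exp(0.4) + 0.4 ** 2 - 0.4 - 1
print(p4, c4, exact)  # c4 ≈ 1.74365, close to the exact value
```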
Milne predictor-corrector (for k ≥ 4 the y column holds the predicted value and f(t, y) is evaluated at it):

k | t   | y         | f(t, y)  | corrector   | f(t, corrector)
0 | 0   | 1         | 1        |             |
1 | 0.1 | 1.1203415 | 1.410341 |             | (starting points by RK4)
2 | 0.2 | 1.2338409 | 1.793841 |             |
3 | 0.3 | 1.3763387 | 2.186339 |             |
4 | 0.4 | 1.7199359 | 2.759936 | 1.67714525  | 2.717145
5 | 0.5 | 2.0317593 | 3.281759 | 1.920894708 | 3.170895
Table 2.2: Example 2.1 using predictor corrector method with h=0.1
The exact solution is possible in this example; however it may not be possible for other
equations. Table 2.3 compares the solution with the exact solution of given equation.
Clearly the accuracy is better in predictor corrector method than the Runge-Kutta
method.
t   | rk4      | Milne pc  | exact
0.1 | 1.120341 |           | 1.120342
0.2 | 1.233841 |           | 1.282806
0.3 | 1.376339 |           | 1.489718
0.4 | 1.5452   | 1.6771453 | 1.743649
0.5 | 1.7369   | 1.9208947 | 2.047443
Table 2.3: Comparison of solution of example 2.1
Milne predictor-corrector with h = 0.05:

t    | predictor | f(t, p)  | corrector | f(t, y)
0.05 | 1.055042  | 1.202542 | (starting point 1)
0.1  | 1.120342  | 1.410342 | (starting point 2)
0.15 | 1.196169  | 1.623669 | (starting point 3)
0.2  | 1.282805  | 1.842805 | 1.282805  | 1.842805
0.25 | 1.380551  | 2.068051 | 1.380551  | 2.068051
Table 2.4: Example 2.1 using predictor corrector method with h=0.05
Example 2.1 is repeated with h = 0.05 in table 2.4. Table 2.5 clearly indicates that better accuracy is achieved with h = 0.05.
    t      Milne PC   exact
    0.05   1.055042   1.055042
    0.10   1.120342   1.120342
    0.15   1.196169   1.196168
    0.20   1.282805   1.282806
    0.25   1.380551   1.380551

Table 2.5: Comparison with the exact solution for h = 0.05
Predictor-corrector methods are preferred over Runge-Kutta methods because they require only two function evaluations per integration step, while the corresponding fourth order Runge-Kutta method requires four evaluations. The need for starting points is the weakness of predictor-corrector methods; in Runge-Kutta methods, by contrast, the step size can be changed easily.
Module 2
Module 2
Lecture 2
if this error estimate does not exceed a specified maximum value. Otherwise, the
corrected values are rejected and the interval of integration is reduced starting from the
last accepted point. Likewise, if the error estimate becomes unnecessarily small, the
interval of integration may be increased. The predictor formula is more influential in the
stability properties of the predictor-corrector algorithm.
In another more commonly used approach, a predictor formula is used to get a first
estimate of the solution at next grid point and then the corrector formula is applied
iteratively until convergence is obtained. This is an iterative approach and corrector
formula is used iteratively. The number of derivative evaluations required is one greater
than the number of iterations of the corrector and it is clear that this number may in fact
exceed the number required by a Runge-Kutta algorithm. In this case, the stability
properties of the algorithm are completely determined by the corrector equation alone
and the predictor equation only influences the number of iterations required. The step
size is chosen sufficiently small to converge to the solution in one or two iterations. The
step size can be estimated from the error term in (2.4).
Example 2.2: Apply the iterative method to solve the IVP y' = y + 3t - t^2, y(0) = 1, with h = 0.1.
Solution: With h=0.1 the computations are arranged in the table 2.6
Note that the corrector formula converges fast, but it does not converge to the exact solution of the differential equation; it converges to the fixed point of the difference scheme given by the corrector formula. With h = 0.05 the iteration converges in just two applications of the corrector.
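The iteration described above can be sketched as follows, using the starting data of example 2.2 (h = 0.1). The tolerance 1e-7 and the cap of 20 iterations are illustrative choices, not part of the lecture.

```python
# Iterating Milne's corrector at a single grid point until successive
# iterates agree -- a sketch. The iterates converge to the fixed point
# of the corrector scheme, not to the exact solution of the ODE.

f = lambda t, y: y + 3*t - t*t          # the IVP of example 2.2
h = 0.1
y0, y1, y2, y3 = 1.0, 1.120341, 1.233841, 1.376339   # starting data
f1, f2, f3 = f(0.1, y1), f(0.2, y2), f(0.3, y3)

p = y0 + 4*h/3*(2*f3 - f2 + 2*f1)       # predictor at t = 0.4

iterates = [p]
y = p
for _ in range(20):
    # corrector: y <- y_2 + h/3 (f(0.4, y) + 4 f_3 + f_2)
    y_new = y2 + h/3*(f(0.4, y) + 4*f3 + f2)
    iterates.append(y_new)
    if abs(y_new - y) < 1e-7:           # stop when iterates stop changing
        break
    y = y_new
```

The successive iterates reproduce the corrector-1 through corrector-5 rows of table 2.6a, settling on the fixed point 1.67567.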
    h     t     y          f(t,y)     role
    0.1   0.1   1.120341   1.410341   starting point 1
    0.1   0.2   1.233841   1.793841   starting point 2
    0.1   0.3   1.376339   2.186339   starting point 3
    0.1   0.4   1.719936   2.759936   predictor
    0.1   0.4   1.677145   2.717145   corrector 1
    0.1   0.4   1.675719   2.715719   corrector 2
    0.1   0.4   1.675671   2.715671   corrector 3
    0.1   0.4   1.67567    2.71567    corrector 4
    0.1   0.4   1.67567    2.71567    corrector 5 (converged; used as starting value)
    0.1   0.5   1.840277   3.090277   predictor
    0.1   0.5   1.914315   3.164315   corrector 1
    0.1   0.5   1.916783   3.166783   corrector 2
    0.1   0.5   1.916865   3.166865   corrector 3
    0.1   0.5   1.916868   3.166868   corrector 4

Table 2.6a: Iterative Milne predictor-corrector method, example 2.2 with h = 0.1
Several applications of the corrector formula are needed to obtain the desired accuracy. Decreasing the value of h reduces the number of applications of the corrector formula. This is evident from table 2.6b.
    h      k   t      y          f(t,y)     role
    0.05   2   0.10   1.120342   1.410342   starting value
    0.05   3   0.15   1.196169   1.623669   starting value
    0.05   4   0.20   1.282805   1.842805   predictor
    0.05   5   0.20   1.282805   1.842805   corrector 1
    0.05   6   0.20   1.282805   1.842805   corrector 2

Table 2.6b: Iterative Milne predictor-corrector method, example 2.2 with h = 0.05
A modified method, or modified predictor-corrector method, refers to the use of the
predictor equation and one subsequent application of the corrector equation with
incorporation of the error estimates as discussed below
Error estimates
Local truncation errors in the predictor and corrector formulae are given as

    y(t_{j+1}) - p_{j+1} = (28/90) h^5 f^(4)(ξ),   ξ in (t_{j-3}, t_{j+1})
    y(t_{j+1}) - y_{j+1} = -(1/90) h^5 f^(4)(η),   η in (t_{j-1}, t_{j+1})
It is assumed that the derivative f^(4) is constant over the interval [t_{j-3}, t_{j+1}]. Then simplification yields the error estimate based on the predicted and corrected values:

    y(t_{j+1}) - p_{j+1} ≈ (28/29) [y_{j+1} - p_{j+1}]                        (2.6)
Further, assume that the difference between the predicted and corrected values changes slowly from step to step. Accordingly, p_j and y_j can be substituted for p_{j+1} and y_{j+1} in (2.6), giving a modifier q_{j+1}. The modified method then reads

    p_{j+1} = y_{j-3} + (4h/3) (2f_j - f_{j-1} + 2f_{j-2})
    q_{j+1} = p_{j+1} + (28/29) [y_j - p_j];   f_{j+1} = f(t_{j+1}, q_{j+1})      (2.7)
    y_{j+1} = y_{j-1} + (h/3) (f_{j+1} + 4f_j + f_{j-1})
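The modified scheme can be sketched as a single-step routine. This is an illustrative implementation under the stated assumptions: the function name `modified_milne_step` is hypothetical, starting values are supplied from the exact solution purely for demonstration, and on the first Milne step (where no previous predictor exists) the modifier is skipped.

```python
# Modified Milne method (2.7): the modifier q uses the predictor-corrector
# difference from the PREVIOUS step to improve the predicted value before
# a single corrector application.
import math

def modified_milne_step(f, t, y, p_prev, j, h):
    """t, y: grid points/values up to index j; p_prev: predictor value at
    step j (None on the first Milne step). Returns (y_{j+1}, p_{j+1})."""
    fj   = f(t[j],   y[j])
    fjm1 = f(t[j-1], y[j-1])
    fjm2 = f(t[j-2], y[j-2])
    p = y[j-3] + 4*h/3*(2*fj - fjm1 + 2*fjm2)                # predictor
    q = p if p_prev is None else p + 28/29*(y[j] - p_prev)   # modifier
    f_new = f(t[j] + h, q)
    y_new = y[j-1] + h/3*(f_new + 4*fj + fjm1)               # one corrector pass
    return y_new, p

f = lambda t, y: y + 3*t - t*t
exact = lambda s: 2*math.exp(s) + s*s - s - 1

h = 0.1
t = [j*h for j in range(6)]
y = [exact(tj) for tj in t[:4]]     # starting values (exact, for illustration)
p_prev = None
for j in range(3, 5):
    y_next, p_prev = modified_milne_step(f, t, y, p_prev, j, h)
    y.append(y_next)
```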
Another problem associated with Milne's predictor-corrector method is instability in certain cases. This means that the error does not tend to zero as h tends to zero. This is illustrated analytically for the simple IVP

    y' = Ay,   y(0) = y_0

Its solution at t = t_n is y_n = y_0 exp(A(t_n - t_0)). Substituting y' = Ay in the corrector formula gives the difference equation
    y_{j+1} = y_{j-1} + (h/3) (A y_{j+1} + 4A y_j + A y_{j-1})

or

    (1 - hA/3) y_{j+1} - (4hA/3) y_j - (1 + hA/3) y_{j-1} = 0                 (2.8)

The corresponding characteristic equation and its roots are

    (1 - r) Z^2 - 4r Z - (1 + r) = 0,   with r = hA/3

so that

    y_j = C1 Z1^j + C2 Z2^j,   Z_{1,2} = (2r ± sqrt(3r^2 + 1)) / (1 - r)

    Z1 = (2r + sqrt(3r^2 + 1))/(1 - r) = 1 + 3r + O(r^2) = 1 + Ah + O(h^2)
    Z2 = (2r - sqrt(3r^2 + 1))/(1 - r) = -1 + r + O(r^2) = -(1 - Ah/3) + O(h^2)
Hence the solution of the given IVP by the predictor-corrector method is represented as

    y_j ≈ C1 exp(A(t_j - t_0)) + C2 (-1)^j exp(-A(t_j - t_0)/3)

When A > 0 the second term dies out, while the first term, which corresponds to the true solution, grows as j increases. However, when A < 0 the first term dies out and the second, parasitic term grows exponentially, irrespective of h. This establishes the instability of the solution.
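The instability can be observed numerically. The sketch below applies the corrector recurrence (2.8) exactly for y' = Ay with A = -10 (the choice A = -10, h = 0.1 and 60 steps is illustrative): even when started from exact values, the parasitic root eventually produces a growing, sign-alternating iterate, while the true solution exp(At) decays.

```python
# Numerical illustration of the weak instability of Milne's corrector
# applied to y' = Ay with A < 0.
import math

A, h = -10.0, 0.1
r = h*A/3
y = [1.0, math.exp(A*h)]            # exact values at t0 = 0 and t1 = h
for j in range(1, 60):
    # recurrence (2.8): (1 - r) y_{j+1} = 4r y_j + (1 + r) y_{j-1}
    y.append((4*r*y[j] + (1 + r)*y[j-1]) / (1 - r))
# late iterates alternate in sign and grow, unlike exp(At) -> 0
```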
Module 2
Lecture 3
    y_{j+1} = a_{k-1} y_j + a_{k-2} y_{j-1} + ... + a_0 y_{j+1-k}
              + h [ b_k f(t_{j+1}, y_{j+1}) + b_{k-1} f(t_j, y_j) + ... + b_0 f(t_{j+1-k}, y_{j+1-k}) ]      (2.9)

When b_k = 0, the method is explicit and y_{j+1} is explicitly determined from the initial value y_0 and the starting values y_i, i = 1, 2, ..., k-1. When b_k is nonzero, the method is implicit.
The Milne predictor and corrector formulae are special cases of (2.9):
Predictor formula : k = 4; a_3 = a_2 = a_1 = 0, a_0 = 1; b_4 = b_0 = 0, b_3 = 8/3, b_2 = -4/3, b_1 = 8/3
Corrector formula : k = 4; a_3 = a_1 = a_0 = 0, a_2 = 1; b_4 = 1/3, b_3 = 4/3, b_2 = 1/3, b_1 = b_0 = 0
Another category of multistep methods, known as Adams methods, is obtained from

    y_{j+1} = y_j + ∫_{t_j}^{t_{j+1}} f(t, y(t)) dt ≈ y_j + ∫_{t_j}^{t_{j+1}} p(t) dt

where p(t) is the polynomial interpolating f at equi-spaced points. Here the integration is carried out only on the last panel, while many function values at equi-spaced points are considered for the interpolating polynomial. Both open and closed integration are considered, giving two types of formulas. The integration scheme is shown in fig. 2.3.
[Fig. 2.3: Schematic diagram for open and closed Adams integration formulas. Open: interpolation points t_{j-3}, ..., t_j; closed: interpolation points t_{j-3}, ..., t_{j+1}; in both cases the integration is over the last panel (t_j, t_{j+1}).]
The open integration of the Adams formula gives the Adams-Bashforth formula, while the closed integration gives the Adams-Moulton formula. Different degrees of the interpolating polynomial, depending upon the number r of interpolating points, give rise to formulae of different orders. Although these formulae can be derived in many different ways, here a backward Taylor series expansion is used for the derivation of the second order Adams-Bashforth open formula.
Second Order Adams Bashforth open formula
For the second order formula, the interpolating polynomial through the points t_j and t_{j-1} is used in the following.
    y_{j+1} = y_j + ∫_{t_j}^{t_{j+1}} f(t, y(t)) dt

Expanding the integrand in a Taylor series about t_j and integrating term by term gives

    y_{j+1} = y_j + h [ f_j + f'_j h/2 + f''_j h^2/6 + ... ]

Also,

    f'_j = (f_j - f_{j-1})/h + (f''_j/2) h + O(h^2)                           (2.10)

Substituting (2.10) in the expansion above yields the second order Adams-Bashforth formula

    y_{j+1} = y_j + h ( (3/2) f_j - (1/2) f_{j-1} ) + (5/12) h^3 f''(ξ)       (2.11)
A fourth order Adams-Bashforth formula can be derived along similar lines; it is written as

    y_{j+1} = y_j + h ( (55/24) f_j - (59/24) f_{j-1} + (37/24) f_{j-2} - (9/24) f_{j-3} ) + (251/720) h^5 f^(iv)(ξ)      (2.12)
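Formula (2.12) can be sketched directly in code. In this illustration the four starting values are taken from the exact solution of the example IVP purely for convenience; in practice RK4 would supply them, and the function name `adams_bashforth4` is an assumption of this sketch.

```python
# Fourth order Adams-Bashforth (2.12) -- an explicit multistep sketch.
import math

def adams_bashforth4(f, t, y, h, n):
    """t, y: lists holding at least 4 starting points; extend to n+1 points."""
    for j in range(3, n):
        fj = [f(t[j-k], y[j-k]) for k in range(4)]  # f_j, f_{j-1}, f_{j-2}, f_{j-3}
        y.append(y[j] + h*(55*fj[0] - 59*fj[1] + 37*fj[2] - 9*fj[3])/24)
        t.append(t[j] + h)
    return t, y

f = lambda t, y: y + 3*t - t*t                 # the IVP of examples 2.1-2.3
exact = lambda s: 2*math.exp(s) + s*s - s - 1
h = 0.05
t = [j*h for j in range(4)]
y = [exact(tj) for tj in t]                    # starting values (for illustration)
t, y = adams_bashforth4(f, t, y, h, 5)         # advance to t = 0.25
```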
Example 2.3: Solve the IVP of example 2.1 by the Adams-Bashforth method, taking h = 0.05.
Solution: The computations are arranged in table 2.7.
    h      k   t      y (AB)      f(t,y)     exact
    0.05   1   0.05   1.0550422   1.202542   (starting value)
    0.05   2   0.10   1.1203418   1.410342   (starting value)
    0.05   3   0.15   1.1961685   1.623669   (starting value)
    0.05   4   0.20   1.2828053   1.842805   1.2828055
    0.05   5   0.25   1.3805503   2.068050   1.3805508

Table 2.7: Adams-Bashforth method applied to example 2.3
Module 2
Lecture 4
A backward Taylor series is used for the integrand in the closed integration formula:

    y_{j+1} = y_j + ∫_{t_j}^{t_{j+1}} f(t, y(t)) dt                           (2.13)

Expanding about t_{j+1},

    f'_{j+1} = (f_{j+1} - f_j)/h + (f''_{j+1}/2) h + O(h^2)

which gives the second order Adams-Moulton (closed) formula

    y_{j+1} = y_j + h ( (1/2) f_{j+1} + (1/2) f_j ) - (1/12) h^3 f''(ξ)       (2.14)
Similarly, the fourth order Adams-Moulton formula is

    y_{j+1} = y_j + h ( (9/24) f_{j+1} + (19/24) f_j - (5/24) f_{j-1} + (1/24) f_{j-2} ) - (19/720) h^5 f^(iv)(ξ)      (2.15)

Used together, the fourth order Adams-Bashforth formula serves as the predictor and the Adams-Moulton formula as the corrector:

    Predictor:  y_{j+1} = y_j + h ( (55/24) f_j - (59/24) f_{j-1} + (37/24) f_{j-2} - (9/24) f_{j-3} )
    Corrector:  y_{j+1} = y_j + h ( (9/24) f_{j+1} + (19/24) f_j - (5/24) f_{j-1} + (1/24) f_{j-2} )
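The predictor-corrector pair above (one corrector pass per step) can be sketched as follows. The function name `adams_pc_step` is illustrative, and exact starting values are used purely for demonstration.

```python
# Fourth order Adams predictor-corrector: Adams-Bashforth predictor,
# one Adams-Moulton corrector application -- a minimal sketch.
import math

def adams_pc_step(f, t, y, j, h):
    fj = [f(t[j-k], y[j-k]) for k in range(3)]       # f_j, f_{j-1}, f_{j-2}
    fjm3 = f(t[j-3], y[j-3])
    p = y[j] + h*(55*fj[0] - 59*fj[1] + 37*fj[2] - 9*fjm3)/24       # predictor
    return y[j] + h*(9*f(t[j] + h, p) + 19*fj[0] - 5*fj[1] + fj[2])/24  # corrector

f = lambda t, y: y + 3*t - t*t
exact = lambda s: 2*math.exp(s) + s*s - s - 1
h = 0.05
t = [j*h for j in range(6)]
y = [exact(tj) for tj in t[:4]]      # starting values (exact, for illustration)
for j in range(3, 5):
    y.append(adams_pc_step(f, t, y, j, h))
```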
Example 2.4: Solve the IVP of example 2.3 by the Adams predictor-corrector method.
Solution: The computations at t = 0.20 and t = 0.25 are arranged as follows (starting values by RK4 with h = 0.05):

    h      k   t      y           f(t,y)     role
    0.05   1   0.05   1.0550422   1.202542   starting value
    0.05   2   0.10   1.1203418   1.410342   starting value
    0.05   3   0.15   1.1961685   1.623669   starting value
    0.05   4   0.20   1.2828056   1.842806   accepted solution
    0.05   5   0.25   1.3805506   2.068051   predictor
    0.05   6   0.25   1.3805509   2.068051   corrector
    0.05   7   0.25   1.3805509   2.068051   corrector

Table 2.8: Solution of the IVP of example 2.3 by the Adams predictor-corrector method
Both the predictor and corrector formulae have local truncation errors of order O(h^5):

    y(t_{k+1}) - p_{k+1} = (251/720) y^(5)(ξ1) h^5
    y(t_{k+1}) - y_{k+1} = -(19/720) y^(5)(ξ2) h^5

If the fifth derivative is nearly constant and h is small, then an error estimate can be obtained by eliminating the derivative and simplifying:

    y(t_{k+1}) - y_{k+1} ≈ -(19/270) [y_{k+1} - p_{k+1}]

Exercise 2
2.1 Compute the solution at times t = 0.05, 0.1 and 0.15 using RK4, taking h = 0.05. Use these values to compute the solution at t = 0.2, 0.25 and 0.3 using the Milne-Simpson method. Compare the solution with the exact solution y = (t + 1) e^t.
2.2 Compute the solution at times t = 0.2, 0.4 and 0.6 using RK4, taking h = 0.2. Apply the Adams-Bashforth method to compute the solution at t = 0.8 and 1.0.
2.3 Solve the IVP of exercise 2.2 by the fourth order Adams predictor-corrector method.
Module 3
Systems of equations and higher order equations
their analysis. However when it comes to numerical techniques for solving them, there is not
much difference between the system of differential equations and single differential equations.
The most general form of a system of m differential equations can be written as

    y_i' = f_i(t, y_1, y_2, ..., y_m),   y_i(t_0) = y_{0i},   i = 1, 2, ..., m      (3.1)

In (3.1), t is the independent variable and the m dependent variables are y_1, y_2, ..., y_m. Introducing the column vectors Y = (y_1, y_2, ..., y_m)^T, F = (f_1, f_2, ..., f_m)^T and Y_0 = (y_{01}, y_{02}, ..., y_{0m})^T, the system (3.1) in matrix form is written as

    Y' = F(t, Y),   Y(t_0) = Y_0                                              (3.2)
The form (3.2) is similar to the IVP (1.2) with scalars being replaced by vectors.
Let the interval (a, b) be divided into N subintervals of width h = (b-a)/N such that the grid points are t_j = a + jh and t_{j+1} = t_j + h. Let y_{i,j}, i = 1, 2, ..., m denote the approximation to the i-th component y_i(t_j). The Euler method for the system reads

    y_{i,j+1} = y_{i,j} + h f_i(t_j, y_{1,j}, ..., y_{m,j}),   i = 1, 2, ..., m      (3.3)

and the fourth order Runge-Kutta method reads

    K_{1,i} = f_i(t_j, y_{1,j}, ..., y_{m,j})
    K_{2,i} = f_i(t_j + h/2, y_{1,j} + (h/2) K_{1,1}, ..., y_{m,j} + (h/2) K_{1,m})
    K_{3,i} = f_i(t_j + h/2, y_{1,j} + (h/2) K_{2,1}, ..., y_{m,j} + (h/2) K_{2,m})
    K_{4,i} = f_i(t_j + h, y_{1,j} + h K_{3,1}, ..., y_{m,j} + h K_{3,m})

    y_{i,j+1} = y_{i,j} + (h/6) (K_{1,i} + 2 K_{2,i} + 2 K_{3,i} + K_{4,i})   (3.4)
Example 3.1: Solve the system of differential equations with the given initial conditions:

    x' = x + 2y,    x(0) = 6
    y' = 3x + 2y,   y(0) = 4

Solution: The Euler method given in (3.3) is used for solving the given system of two equations (m = 2), with f1(x, y) = x + 2y and f2(x, y) = 3x + 2y, taking h = 0.02.
    t      x_j           y_j          f1           f2
    0.00   6             4            14           26
    0.02   6.28          4.52         15.32        27.88
    0.04   6.5864        5.0776       16.7416      29.9144
    0.06   6.921232      5.675888     18.273008    32.115472
    0.08   7.28669216    6.31819744   19.923087    34.49647136
    0.10   7.685153901   7.00812687   21.7014076   37.07171544
    0.12   8.119182054   7.74956118   23.6183044   39.85666851
    0.14   8.591548142   8.54669455   25.6849372   42.86803352
    0.16   9.105246886   9.40405522   27.9133573   46.12385109
    0.18   9.663514033   10.3265322   30.3165785   49.64360657
    0.20   10.2698456    11.3194044   32.9086543   53.44834555
    0.22   10.92801869   12.3883713   35.7047613   57.56079863

Table 3.1: Euler solution of example 3.1 with h = 0.02
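The table above can be reproduced with a few lines of code. This is a sketch of the Euler update (3.3) for the example system; note the simultaneous update of x and y, which uses the old values of both components.

```python
# Euler's method (3.3) for the system of example 3.1.

f1 = lambda x, y: x + 2*y
f2 = lambda x, y: 3*x + 2*y

h, x, y = 0.02, 6.0, 4.0
rows = [(0.0, x, y, f1(x, y), f2(x, y))]
for j in range(1, 12):
    x, y = x + h*f1(x, y), y + h*f2(x, y)   # simultaneous (old-value) update
    rows.append((round(j*h, 2), x, y, f1(x, y), f2(x, y)))
```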
Example 3.2: Solve the following system of differential equations with the given initial conditions using the fourth order Runge-Kutta method in the interval (0, 2), taking h = 0.5:

    x' = x - xy,    x(0) = 4
    y' = -y + xy,   y(0) = 1

so that f1(x, y) = x - xy and f2(x, y) = -y + xy.
Solution: The Runge-Kutta formulae given in (3.4) are used to compute the solution. The computations are shown in table 3.2 [Ref. system_of_equations.xls].
The first Runge-Kutta step (t = 0 to t = 0.5) proceeds as follows:

    stage       t      x            y            k1 = f1(x,y)   k2 = f2(x,y)
    (t, x, y)   0.00   4            1            0              3
    t + h/2     0.25   4            1.75         -3             5.25
    t + h/2     0.25   3.25         2.3125       -4.265625      5.203125
    t + h       0.50   1.8671875    3.6015625    -4.85760498    3.12322998

    phi1 = -1.61573792,  phi2 = 2.25245667
    x(0.5) = 4 + phi1 = 2.384262085,  y(0.5) = 1 + phi2 = 3.252456665

The remaining steps are carried out in the same manner, giving

    t      x             y
    0.0    4             1
    0.5    2.384262085   3.252456665
    1.0    0.605523256   3.818119233
    1.5    0.202022627   2.728153949
    2.0    0.108971514   1.778293677

Table 3.2: RK4 solution of example 3.2 with h = 0.5
Accordingly, the first part of the table gives x(0.5) = 2.384262085, y(0.5) = 3.252456665. The solutions at t = 1.0, 1.5 and 2.0 are computed in the subsequent parts of the table.
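The tabulated computation can be condensed into a short routine. This is a sketch of (3.4) for m = 2 components (the helper name `rk4_system` is an assumption of this sketch); since f1 and f2 do not depend on t here, the time argument is omitted.

```python
# RK4 (3.4) for the two-component system of example 3.2.

def rk4_system(f1, f2, x, y, h):
    k11, k12 = f1(x, y), f2(x, y)
    k21, k22 = f1(x + h/2*k11, y + h/2*k12), f2(x + h/2*k11, y + h/2*k12)
    k31, k32 = f1(x + h/2*k21, y + h/2*k22), f2(x + h/2*k21, y + h/2*k22)
    k41, k42 = f1(x + h*k31, y + h*k32),     f2(x + h*k31, y + h*k32)
    return (x + h/6*(k11 + 2*k21 + 2*k31 + k41),
            y + h/6*(k12 + 2*k22 + 2*k32 + k42))

f1 = lambda x, y: x - x*y
f2 = lambda x, y: -y + x*y

h, x, y = 0.5, 4.0, 1.0
sol = [(0.0, x, y)]
for j in range(1, 5):
    x, y = rk4_system(f1, f2, x, y, h)
    sol.append((j*h, x, y))
```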
Module 3
Lecture 2
Numerical Solution of Higher Order Ordinary Differential Equations
    F(y^(n), y^(n-1), ..., y', y, t) = 0                                      (3.5)

It is convenient to convert a higher order differential equation to a system of first order differential equations. This is illustrated for the following second order differential equation (n = 2):

    F(y'', y', y, t) = 0                                                      (3.6)

Writing

    z = y'                                                                    (3.7)

the equation becomes F(z', z, y, t) = 0, or

    y' = z                                                                    (3.8)
    z' = f(z, y, t)                                                           (3.9)

The second order differential equation is thus reduced to a system of two first order differential equations in the two unknowns y and y'. Similarly, an nth order differential equation can be reduced to a system of n first order differential equations in the n unknowns y, y', ..., y^(n-1). Once the higher order differential equation is converted into a system of first order differential equations, one of the methods already discussed can be applied.
Example 3.3: Solve the following initial value problem using the fourth order Runge-Kutta method in the interval (1, 2), taking h = 0.2:

    2x (d²x/dt²) + (dx/dt)² + 1 = 0,   x(1) = 1,   (dx/dt)(1) = 0

Solution: Writing y = dx/dt, the given differential equation becomes the first order system

    dx/dt = y = f1(x, y)
    dy/dt = -(1 + y²)/(2x) = f2(x, y)

subject to the initial conditions x(1) = 1, y(1) = 0. The following table shows the detailed computational steps for the solution at t = 1.2 with h = 0.2.
    stage       t      x          y (= dx/dt)   k1 = f1     k2 = f2
    (t, x, y)   1.0    1          0             0           -0.5
    t + h/2     1.1    1.0        -0.05         -0.05       -0.50125
    t + h/2     1.1    0.995      -0.050125     -0.050125   -0.50378
    t + h       1.2    0.989975   -0.100755     -0.10076    -0.51019

    phi1 = -0.01003,  phi2 = -0.10067,  so  x(1.2) = 0.989966,  y(1.2) = -0.10067

Continuing in the same way gives

    t      x(t)
    1.2    0.989966
    1.4    0.959451
    1.6    0.907106
    1.8    0.830285
    2.0    0.724106
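Example 3.3 can be sketched end to end: reduce the second order equation to the system above and advance it with RK4. The helper `rk4_step` is illustrative.

```python
# Example 3.3 as a first order system solved with RK4:
# x' = y,  y' = -(1 + y^2)/(2x),  x(1) = 1, y(1) = 0.

f1 = lambda x, y: y
f2 = lambda x, y: -(1 + y*y)/(2*x)

def rk4_step(x, y, h):
    k11, k12 = f1(x, y), f2(x, y)
    k21, k22 = f1(x + h/2*k11, y + h/2*k12), f2(x + h/2*k11, y + h/2*k12)
    k31, k32 = f1(x + h/2*k21, y + h/2*k22), f2(x + h/2*k21, y + h/2*k22)
    k41, k42 = f1(x + h*k31, y + h*k32), f2(x + h*k31, y + h*k32)
    return (x + h/6*(k11 + 2*k21 + 2*k31 + k41),
            y + h/6*(k12 + 2*k22 + 2*k32 + k42))

h, x, y = 0.2, 1.0, 0.0
xs = [x]
for _ in range(5):                  # advance from t = 1 to t = 2
    x, y = rk4_step(x, y, h)
    xs.append(x)
```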
Module 3
Lecture 3
Stiff Differential Equations
keywords: Euler method, explicit method, implicit method, stiff equations, stability
higher derivative is bounded, then the method gives a predictable error bound. In some cases the solution as well as its derivative(s) grow with the number of steps, and the relative error can still be under control. However, there are IVPs for which the error grows so large that the solution is not acceptable. Such equations are termed stiff equations. These are very common in physical and biological systems; one typical example is a mass-spring system with a large spring constant (stiffness).
Stiff equations are characterized by a term of the form exp(-ct), with large c, in the exact solution. Typically this leads to a fast decaying transient solution, but the growing truncation error (involving the nth derivative, of the form c^n exp(-ct)) destabilizes the approximate solution. In such cases the step size h must be drastically decreased to maintain stability. The following stiff differential equation involves a large negative exponent in the solution:

    x' = -cx,   x(0) = x_0,   c large and positive                            (3.10)
Example 3.4: Solve the IVP x' = -15x, x(0) = 1 numerically in the interval (0, 2).
Solution: Solving the differential equation numerically in the interval (0, 2) for c = 15 by Euler's method with h = 0.5, it is observed that the solution explodes very soon [see the computations in the Excel sheet Stiff equ.xls, sheets 1 and 2]. When h = 0.25 the solution is oscillatory, in contrast to the exact solution. The step size h has to be decreased drastically to obtain a solution close to the exact one; however, this increases the computational effort.
    t      h=0.5        h=0.25     h=0.1       h=0.05     h=0.01     h=0.005    exact
    0.5    42.25        7.5625     -0.3125     9.54E-07   0.000296   0.000411   0.00053
    1.0    -274.625     57.19141   0.000977    9.09E-13   -          -          3.06E-07
    1.5    1785.0625    432.51     -3.10E-05   -          -          -          1.69E-10
    2.0    -11602.91    3270.857   9.50E-07    -          -          -          9.36E-14
To solve the stiff equation (3.10), it is desirable to use implicit methods. The explicit Euler method is modified for this equation as follows:

    (x(t + Δt) - x(t)) / Δt = -c x(t + Δt)

or

    x(t + Δt) = x(t) / (1 + cΔt)                                              (3.11)

The application of this implicit Euler scheme to equation (3.10) will not misbehave [see the computations in the Excel sheet Stiff equ.xls, sheet 3].
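The contrast between the two schemes can be seen in a few lines. This sketch uses the equation of example 3.4 (c = 15) with the deliberately large step h = 0.5; each explicit step multiplies by (1 - ch) = -6.5, each implicit step by 1/(1 + ch) = 1/8.5.

```python
# Explicit vs implicit Euler for the stiff equation x' = -cx (3.10)/(3.11).
import math

c, h, n = 15.0, 0.5, 4
x_exp = [1.0]
x_imp = [1.0]
for k in range(n):
    x_exp.append(x_exp[-1]*(1 - c*h))    # explicit: factor -6.5, explodes
    x_imp.append(x_imp[-1]/(1 + c*h))    # implicit (3.11): factor 1/8.5, decays

exact = math.exp(-c*n*h)                 # exact solution at t = 2
```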
In the case of the IVP x' = f(x, t), x(0) = x_0, the implicit Euler scheme is written as

    x_{k+1} = x_k + h f(x_{k+1}, t_{k+1})                                     (3.12)

Another implicit scheme which can be used to solve stiff equations is the trapezoidal scheme, or two stage Adams-Moulton scheme:

    x_{k+1} = x_k + h [ f(x_k, t_k) + f(x_{k+1}, t_{k+1}) ] / 2               (3.13)
Stiff differential equations generally result from phenomena that involve widely differing time scales. Consider the differential equation

    x'' = 100x

Its most general solution is

    x(t) = A e^{10t} + B e^{-10t}

There are two different time scales in the solution. In another example,

    x' = -1000x - e^{-t},   x(0) = 1

the exact solution also involves two time scales, 1 versus 1/1000:

    x(t) = (1000 e^{-1000t} - e^{-t}) / 999

In general, the presence of vastly different evolutionary time scales gives stiff differential equations. In such cases, care should be taken in solving them.
Example 3.5: Using the implicit Euler scheme, solve the IVP x' = -1000x - e^{-t}, x(0) = 1, over the interval (0, 3.0); take h = 0.05. The scheme (3.12) gives

    x_{k+1} = (x_k - h exp(-t_{k+1})) / (1 + 1000h)

The numerical details are worked out in the Excel file Stiff equ.xls, sheet 4.
Explicit Euler method (h = 0.05) — the solution explodes:

    t_k     x_k            f(x_k, t_k)    x_{k+1}        exact x(t_{k+1})
    0.00    1              -1001          -49.05         -0.000952182
    0.05    -49.05         49049.0488     2403.40244     -0.000905743
    0.10    2403.40244     -2403403.3     -117766.765    -0.00086157
    0.15    -117766.765    117766764      5770571.43     -0.00081955
    0.20    5770571.43     -5.771E+09     -282758000     -0.00077958
    0.25    -282758000     2.8276E+11     1.3855E+10     -0.00074156
    0.30    1.3855E+10     -1.386E+13     -6.789E+11     -0.000705393

Implicit Euler method (h = 0.05) — the solution stays close to the exact solution:

    t_k     x_k        t_{k+1}   exp(-t_{k+1})   x_{k+1}      exact x(t_{k+1})
    0.00    1          0.05      0.951229        0.0186753    -0.000952182
    0.05    0.018675   0.10      0.904837        -0.0005209   -0.000905743
    0.10    -0.00052   0.15      0.860708        -0.000854    -0.00086157
    0.15    -0.00085   0.20      0.818731        -0.0008194   -0.00081955
    0.20    -0.00082   0.25      0.778801        -0.0007796   -0.00077958
    0.25    -0.00078   0.30      0.740818        -0.0007416   -0.00074156
    0.30    -0.00074   0.35      0.704688        -0.0007054   -0.000705393
Exercise 3
3.1 Solve the following systems of differential equations:
    (a) x' = xy + t, x(0) = 1;  y' = ty + x, y(0) = 1;  taking h = 0.1
    (b) x' = x + 4y, x(0) = 2;  y' = y + x, y(0) = 3;  taking h = 0.05
    (c) x' = y² + x², x(0) = 2;  y' = 2xy, y(0) = 0.1;  taking h = 0.05
    (d) taking h = 0.1
3.2 Solve the following higher order differential equations using the Runge-Kutta method of order 4, taking h = 0.1:
    (a) t² x'' - 2t x' + 2x = t³ ln t,  1 ≤ t ≤ 2;  x(1) = 1, x'(1) = 0
3.3
Module 4
Linear Boundary Value Problems
Lecture 1
Content: Shooting Method (1 hour)
Module 4, Lecture 1
    y'' = f(y', y, x),   a ≤ x ≤ b
    α₁(x) y(x) + β₁(x) y'(x) = γ₁  at  x = a                                  (4.1)
    α₂(x) y(x) + β₂(x) y'(x) = γ₂  at  x = b

The boundary value problem is called a two-point linear boundary value problem when the function f is given as a linear combination of the dependent variable and its derivative:

    f(y', y, x) = c(x) y'(x) + d(x) y(x) + e(x)                               (4.2)

The boundary conditions are specified at the boundary points x = a and x = b as linear combinations of y and its derivative y'. These are Robin (mixed) boundary conditions. In particular, when β_i = 0, α_i ≠ 0, i = 1, 2, the boundary conditions are known as Dirichlet boundary conditions. The conditions are Neumann boundary conditions when α_i = 0, β_i ≠ 0, i = 1, 2.
Finite difference method for the two point linear boundary value problem with Dirichlet type conditions:

    y'' = c(x) y' + d(x) y + e(x),   y(a) = γ₁,   y(b) = γ₂                   (4.3)

To apply the finite difference method, first discretize the domain a ≤ x ≤ b into N-1 computational grid points x_i, i = 1, 2, ..., N-1 and two boundary points x₀ and x_N as

    a = x₀ < x₁ < x₂ < ... < x_{N-1} < x_N = b,   h = (b - a)/N

The step size h is a critical parameter for the stability and convergence of the numerical scheme. The differential equation is now written at each internal grid point x_i, i = 1, 2, ..., N-1. For this, the derivatives are replaced by the corresponding finite differences:
    y''(x_i) = (y_{i+1} - 2y_i + y_{i-1})/h² + O(h²)
    y'(x_i) = (y_{i+1} - y_{i-1})/(2h) + O(h²)

That is,

    (y_{i+1} - 2y_i + y_{i-1})/h² = c(x_i) (y_{i+1} - y_{i-1})/(2h) + d(x_i) y_i + e(x_i)

or

    (1 + (h/2) c_i) y_{i-1} - (2 + h² d_i) y_i + (1 - (h/2) c_i) y_{i+1} = h² e_i,   i = 1, 2, ..., N-1      (4.4)
The unknowns y_i are on the left side and the known quantities on the right side of the equation for i = 2, 3, ..., N-2. Using the boundary conditions, the equations for i = 1 and i = N-1 become

    -(2 + h² d₁) y₁ + (1 - (h/2) c₁) y₂ = h² e₁ - (1 + (h/2) c₁) γ₁           (4.6a)
    (1 + (h/2) c_{N-1}) y_{N-2} - (2 + h² d_{N-1}) y_{N-1} = h² e_{N-1} - (1 - (h/2) c_{N-1}) γ₂      (4.6b)

This reduces the boundary value problem to a linear system of N-1 algebraic equations, which can be written in the matrix form AX = B, where
    A = | δ₁   u₁                                 |
        | ℓ₂   δ₂   u₂                            |
        |      .    .    .                        |
        |           ℓ_{N-2}   δ_{N-2}   u_{N-2}   |
        |                     ℓ_{N-1}   δ_{N-1}   |    ((N-1) × (N-1))

    X = (y₁, y₂, ..., y_{N-1})^T

with

    δ_i = -(2 + h² d_i),   ℓ_i = 1 + (h/2) c_i,   u_i = 1 - (h/2) c_i

and

    B = ( h² e₁ - (1 + (h/2) c₁) γ₁,  h² e₂,  ...,  h² e_{N-2},  h² e_{N-1} - (1 - (h/2) c_{N-1}) γ₂ )^T      (4.7)
The system of equations must admit a unique solution, for which a sufficient condition is the diagonal dominance of the matrix A. Suppose d(x) has positive values in the domain and c(x) is continuous. Let L be an upper bound of |c(x)| over the domain; then a step size h smaller than 2/L guarantees the uniqueness of the solution.
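The assembly and solution of the tridiagonal system (4.4)/(4.7) can be sketched as follows. The solver name `solve_bvp` is illustrative, the tridiagonal system is solved by the Thomas algorithm (an assumption of this sketch; any linear solver would do), and it is checked on the test problem y'' = y with y(0) = 0, y(1) = sinh(1), whose exact solution is y = sinh(x).

```python
# Finite difference solution of y'' = c(x) y' + d(x) y + e(x) with
# Dirichlet conditions y(a) = g1, y(b) = g2.
import math

def solve_bvp(c, d, e, a, b, g1, g2, N):
    h = (b - a)/N
    x = [a + i*h for i in range(N + 1)]
    # coefficients of (4.4): lo*y_{i-1} + di*y_i + up*y_{i+1} = rhs
    lo = [1 + h/2*c(x[i]) for i in range(1, N)]
    di = [-(2 + h*h*d(x[i])) for i in range(1, N)]
    up = [1 - h/2*c(x[i]) for i in range(1, N)]
    rhs = [h*h*e(x[i]) for i in range(1, N)]
    rhs[0] -= lo[0]*g1          # fold boundary values into the RHS (4.6a/b)
    rhs[-1] -= up[-1]*g2
    # Thomas algorithm: forward elimination, then back substitution
    for i in range(1, N - 1):
        m = lo[i]/di[i-1]
        di[i] -= m*up[i-1]
        rhs[i] -= m*rhs[i-1]
    y = [0.0]*(N - 1)
    y[-1] = rhs[-1]/di[-1]
    for i in range(N - 3, -1, -1):
        y[i] = (rhs[i] - up[i]*y[i+1])/di[i]
    return x, [g1] + y + [g2]

# test problem: y'' = y  =>  c = 0, d = 1, e = 0; exact y = sinh(x)
x, y = solve_bvp(lambda x: 0.0, lambda x: 1.0, lambda x: 0.0,
                 0.0, 1.0, 0.0, math.sinh(1.0), 50)
```

The observed error is O(h²), consistent with the central differences used for both derivatives.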
Example 4.1: Solve the boundary value problem using N = 4:

    y'' = 12y - 16,   y(0) = y(2) = 5

Solution: Here c(x) = 0, d(x) = 12 > 0, e(x) = -16. For N = 4 the BVP reduces to a system of three algebraic equations with step size h = 2/4 = 0.5. With y₀ = y₄ = 5, the equivalent system is

    -5y₁ + y₂ = -9
    y₁ - 5y₂ + y₃ = -4
    y₂ - 5y₃ = -9

Since c(x) = 0, the system of algebraic equations corresponding to the BVP has a unique solution irrespective of the step size h. The solution of the system is

    y₁ = y₃ = 49/23 ≈ 2.1304,   y₂ = 38/23 ≈ 1.6522
Solution: For the given boundary value problem, a = 1, b = 3, N = 4, h = 0.5 and

    c(x) = (2x + 1)/x,   d(x) = (x + 1)/x,   e(x) = 0

It may be noted that d(x) is positive and c(x) is a decreasing function in (1, 3). Therefore L = c(1) = 3. Accordingly, the condition for a unique solution (h < 2/L) is satisfied for h = 0.5.
The grid points are x₀ = 1.0, x₁ = 1.5, x₂ = 2.0, x₃ = 2.5, x₄ = 3.0.
The coefficients of the matrix are computed from the expressions

    δ_i = -(2 + h² (x_i + 1)/x_i),   ℓ_i = 1 + (h/2)(2x_i + 1)/x_i,   u_i = 1 - (h/2)(2x_i + 1)/x_i
=9.060939