Martin Stmpfle
January 30, 2013
Contents
1 Numerical Analysis
2 Ordinary Differential Equations
2.1 Introduction
2.1.1 Theoretical Basics
2.1.2 Numerical Basics
2.2 Euler's Method and Variants
2.2.1 Euler's Method
2.2.2 Implicit Euler's Method
2.2.3 Improved Euler's Method
2.2.4 Heun's Method
2.3 Runge-Kutta Methods
2.3.1 Runge-Kutta Methods
2.3.2 Step Size Control
2.3.3 The One-Step Approach
2.4 Multistep Methods
2.4.1 Linear Multistep Methods
2.4.2 Adams-Bashforth Methods
2.4.3 Adams-Moulton Methods
2.4.4 Predictor-Corrector Methods
2.4.5 The General Multistep Approach
2.5 Stability and Stiffness
2.5.1 Stability of One-Step Methods
2.5.2 Stability of Multistep Methods
2.5.3 Stiffness
2.6 Initial Value Problems
2.6.1 Summary
2.6.2 Matlab Functions
2.7 Boundary Value Problems
2.7.1 Shooting Methods
2.7.2 Finite Difference Methods
2.8 Applications
2.8.1 Biology
2.8.2 Mechanics
2.8.3 Engineering
2.8.4 Vehicle Dynamics
2.9 Exercises
3 Partial Differential Equations
3.1 Introduction
3.1.1 Theoretical Basics
3.1.2 Numerical Basics
3.2 Finite Difference Methods
3.2.1 Finite Differences
3.2.2 Difference Methods for 2D Elliptic Equations
3.2.3 Difference Methods for 1D Parabolic Equations
3.2.4 Difference Methods for 2D Parabolic Equations
3.2.5 Difference Methods for 1D Hyperbolic Equations
3.3 Finite Element Methods
3.3.1 Meshes, Partitions, and Triangulations
3.3.2 Variational Problems
3.3.3 Function Spaces
3.3.4 Piecewise Linear Finite Elements
3.3.5 Galerkin's Method
3.3.6 Rayleigh-Ritz's Method
3.4 Applications
3.4.1 Heat Distribution
3.5 Exercises
A Appendix
A.1 Greek Letters
Bibliography
Index
2 Ordinary Differential Equations
Ordinary differential equations are equations where the unknown is a function in one variable. Many models from the engineering sciences include ordinary differential equations. The most popular class is multibody systems, where several components are linked together by rods, springs, dampers, or elastomers. Depending on the constraints, we distinguish between initial and boundary value problems.
2.1 Introduction
We briefly discuss the basics of ordinary differential equations. Besides theoretical aspects
we introduce numerical definitions as well.
Example 2.1 (Ordinary differential equations)
a) The equation $y' = y$ is an explicit, first-order, linear equation. The general solution reads $y(t) = c\,e^t$ with $c \in \mathbb{R}$.
b) The equation $y'' = 2t^2 y' + 4y$ is an explicit, second-order, linear equation.
c) The equation $y'' = -y$ is an explicit, second-order, linear equation with the general solution $y(t) = c_1 \sin t + c_2 \cos t$.

The general solution of an ordinary differential equation is typically an infinite set of functions. With appropriate additional initial conditions the solution becomes unique. An initial value problem of order $n$ prescribes the solution and its first $n - 1$ derivatives at one point $t_0$:
$$y(t_0) = y_0,\quad y'(t_0) = y_1,\quad \dots,\quad y^{(n-1)}(t_0) = y_{n-1}.$$

Example 2.2 (Initial value problems)
a) The initial value problem $y' = y$, $y(0) = 2$ has the unique solution $y(t) = 2\,e^t$.
b) Using the set of all solutions from Example 2.1, the initial value problem
$$y'' = -y,\qquad y(0) = 1,\quad y'(0) = 0$$
has the unique solution $y(t) = \cos t$.
For boundary conditions the situation is different. The existence and uniqueness of solutions is not closely related to the number of boundary values.
A boundary value problem prescribes conditions at two points $t_0$ and $t_1$; for example, for $n = 4$:
$$y(t_0) = y_0,\quad y(t_1) = y_1,\quad y'(t_0) = y_2,\quad y'(t_1) = y_3.$$
In general, derivatives $y^{(m)}(t_k)$ up to order $n - 1$ may be prescribed at either boundary point.

Example 2.3 (Boundary value problems)
a) Using the general solution $y(t) = c_1 \sin t + c_2 \cos t$ from Example 2.1, the boundary value problem
$$y'' = -y,\qquad y(0) = 1,\quad y(\tfrac{\pi}{2}) = 1$$
has the unique solution $y(t) = \sin t + \cos t$.
b) Using the general solution $y(t) = c_1 \sin t + c_2 \cos t$ from Example 2.1, the following boundary value problem has no solution:
$$y'' = -y,\qquad y(0) = 1,\quad y(\pi) = 0
\quad\Longrightarrow\quad c_2 = 1 \text{ and } -c_2 = 0.$$
c) Again using the general solution $y(t) = c_1 \sin t + c_2 \cos t$ from Example 2.1, the following boundary value problem has infinitely many solutions:
$$y'' = -y,\qquad y(0) = 1,\quad y(\pi) = -1
\quad\Longrightarrow\quad c_2 = 1,\ c_1 \in \mathbb{R} \text{ arbitrary}.$$
Example 2.4 (Vector fields)
The vector field of a first-order differential equation $y' = f(t, y)$ attaches the slope $f(t, y)$ to every point of the $(t, y)$-plane; solution curves are tangent to the field everywhere. (Figure: vector field with solution curves in the $(t, y)$-plane.)
A first-order system of ordinary differential equations has the form
$$y_1' = f_1(t, y_1, \dots, y_n),\quad \dots,\quad y_n' = f_n(t, y_1, \dots, y_n),$$
or in vector notation $y' = f(t, y)$. If the right-hand side does not depend explicitly on $t$, the system
$$y_1' = f_1(y_1, \dots, y_n),\quad \dots,\quad y_n' = f_n(y_1, \dots, y_n),$$
in short $y' = f(y)$, is called autonomous.
By introducing new states, ordinary differential equations of higher order can be transformed into first-order systems. This is important because many numerical methods only deal with first-order equations.
Transformation into a system
An ordinary differential equation of order $n$,
$$x^{(n)} = f(t, x, x', \dots, x^{(n-1)}),$$
can be transformed into a system of $n$ first-order differential equations. With the new states
$$y_1 = x,\quad y_2 = x',\quad \dots,\quad y_{n-1} = x^{(n-2)},\quad y_n = x^{(n-1)},$$
the system reads
$$y_1' = y_2,\quad y_2' = y_3,\quad \dots,\quad y_{n-1}' = y_n,\quad y_n' = f(t, y_1, \dots, y_n).$$
Example 2.5 (Systems of differential equations)
a) The third-order differential equation
$$x''' - 3t\,x' + 5x = \sin t$$
can be transformed into a system using $y_1 = x$, $y_2 = x'$, $y_3 = x''$:
$$y_1' = y_2,\quad y_2' = y_3,\quad y_3' = -5y_1 + 3t\,y_2 + \sin t.$$
Since the differential equation is linear, the system can also be written using matrices:
$$\begin{pmatrix} y_1' \\ y_2' \\ y_3' \end{pmatrix}
= \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -5 & 3t & 0 \end{pmatrix}
\begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix}
+ \begin{pmatrix} 0 \\ 0 \\ \sin t \end{pmatrix}.$$
b) The second-order initial value problem
$$x'' = \cos(t\,x) + 4t^2 + 6x,\qquad x(0) = 2,\quad x'(0) = 1$$
transforms with $y_1 = x$, $y_2 = x'$ into
$$y_1' = y_2,\quad y_2' = \cos(t\,y_1) + 4t^2 + 6y_1,\qquad y_1(0) = 2,\quad y_2(0) = 1.$$
c) The coupled second-order system
$$x'' = 6x\,x'z' + t\,e^t,\qquad z'' = t + x + z + x' + z'$$
with initial conditions $x(1) = 0$, $z(1) = 1$, $x'(1) = 4$, $z'(1) = 2$ transforms with $y_1 = x$, $y_2 = z$, $y_3 = x'$, $y_4 = z'$ into the first-order system
$$y_1' = y_3,\quad y_2' = y_4,\quad y_3' = 6y_1 y_3 y_4 + t\,e^t,\quad y_4' = t + y_1 + y_2 + y_3 + y_4$$
with $y_1(1) = 0$, $y_2(1) = 1$, $y_3(1) = 4$, $y_4(1) = 2$.
Example 2.6 (Vibration equation)
The vibration equation is defined as
$$\ddot{x} + 2\delta\,\dot{x} + \omega_0^2\,x = 0,$$
where $\delta \ge 0$ is the damping constant and $\omega_0 > 0$ is the angular frequency. Using the new states
$$y_1 = x,\qquad y_2 = \dot{x},$$
the second-order differential equation can be transformed into a linear first-order system:
$$\begin{pmatrix} \dot{y}_1 \\ \dot{y}_2 \end{pmatrix}
= \begin{pmatrix} 0 & 1 \\ -\omega_0^2 & -2\delta \end{pmatrix}
\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}.$$
A nonautonomous system
$$y_i' = f_i(t, y_1, \dots, y_n),\qquad i = 1, \dots, n,$$
can be made autonomous by introducing the additional state $y_{n+1} = t$:
$$y_i' = f_i(y_{n+1}, y_1, \dots, y_n),\qquad i = 1, \dots, n,\qquad y_{n+1}' = 1.$$
For instance, the equation $x''' - 3t\,x' + 5x = \sin t$ from Example 2.5 with the states $y_1 = x$, $y_2 = x'$, $y_3 = x''$, $y_4 = t$ becomes
$$y_1' = y_2,\quad y_2' = y_3,\quad y_3' = -5y_1 + 3y_2 y_4 + \sin y_4,\quad y_4' = 1.$$
Note that the original differential equation is linear, whereas the autonomous system has lost the property of linearity.

There are different types of solutions. A solution that does not change its state at all is called an equilibrium solution. Other special solutions are periodic solutions; for these, the system states repeat again and again.
Example 2.9 (Phase plane)
a) The undamped oscillator
$$x'' + \tfrac{4}{9}\,x = 0,\qquad x(0) = 3,\quad x'(0) = 0$$
has the solution
$$y_1(t) = 3\cos(\tfrac{2}{3}t),\qquad y_2(t) = -2\sin(\tfrac{2}{3}t),$$
so the phase curves $(y_1, y_2)$ are ellipses with semi-axes 3 and 2. (Figure: elliptic phase curves around the origin.)
b) The damped oscillator
$$x'' + \tfrac{52}{255}\,x' + \tfrac{4}{9}\,x = 0,\qquad x(0) = 3,\quad x'(0) = 0$$
has the solution
$$y_1(t) = 3\,e^{-\frac{26}{255}t}\Big(\cos(\tfrac{168}{255}t) + \tfrac{26}{168}\sin(\tfrac{168}{255}t)\Big),\qquad
y_2(t) = -\tfrac{85}{42}\,e^{-\frac{26}{255}t}\,\sin(\tfrac{168}{255}t),$$
so the phase curves spiral into the origin. (Figure: spiraling phase curve.)
Finally, we introduce the definition of stability. Stability of a solution in general means that
nearby solutions always converge towards the stable solution. This principle of stability is
especially important for equilibrium solutions.
The restriction of the exact solution $y$ to discrete points in time $t_k$,
$$y_k = y(t_k),$$
is called a discretization. If the points in time $t_k$ are equidistant with step size $h > 0$, then $t_k = kh$. Numerical approximations of the exact values $y_k$ are denoted by $w_k$, so $w_k \approx y_k = y(t_k)$. (Figure: exact solution $y(t)$ with discrete approximations $w_k$.)
The error of numerical solutions has two main sources. Firstly, any numerical algorithm is an approximation scheme. For instance, derivatives can be substituted by difference formulas. Such discretizations cause truncation errors. Secondly, on computers, numbers are stored with a fixed number of digits. Any floating-point operation thus causes roundoff errors. As a rule of thumb, the optimal step size is $h_0 = 10^{-d/2}$, where $d$ is the number of digits. On general-purpose personal computers, $d = 16$ and thus $h_0 = 10^{-8}$. For many real-world applications, step sizes between $h = 10^{-5}$ and $h = 10^{-3}$ yield solutions that are sufficiently accurate. Both errors depend on the step size $h$ and contribute to the total error.
The approximation quality of numerical solutions is a central issue in the context of initial
value problems. The most important error is the global error that describes the difference
between the exact and the numerical solution. In most cases, this global error is very
hard to calculate or estimate. As a substitute, the local error is defined as the difference
between numerical and exact solution in one single step. This local error is much easier
to compute.
The derivative $y'(t)$ is replaced by the forward difference quotient
$$y'(t) \approx \frac{y(t+h) - y(t)}{h}.$$
Let $w_k$ denote the numerical solution at $t_k = kh$ with step size $h$. Then, inserting the forward difference formula into the differential equation yields
$$\frac{w_{k+1} - w_k}{h} = f(t_k, w_k).$$
Resolving this equation with respect to the next iterate $w_{k+1}$ is called Euler's method:
$$w_{k+1} = w_k + h\,f(t_k, w_k).$$
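The iteration above can be sketched in a few lines; Python is used here for illustration (the course's own codes are in Matlab), and the function names are our own:

```python
import math

def euler(f, t0, y0, h, n):
    """Explicit Euler's method: w_{k+1} = w_k + h * f(t_k, w_k)."""
    t, w = t0, y0
    ws = [w]
    for k in range(n):
        w = w + h * f(t, w)          # one Euler step
        t = t0 + (k + 1) * h
        ws.append(w)
    return ws

# Test equation y' = y, y(0) = 1 with exact solution y(t) = e^t.
w_end = euler(lambda t, y: y, 0.0, 1.0, 0.001, 1000)[-1]
```

With 1000 steps of size $h = 0.001$ the value `w_end` approximates $e \approx 2.71828$ with an error of order $h$, consistent with the order-1 consistency discussed next.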
To investigate the order of Euler's approach, we discuss the local truncation error. We insert the exact solution $y$ into Euler's formula and get
$$E(t, h) = \underbrace{y(t) + h\,f(t, y(t))}_{\text{Euler}} - \underbrace{y(t+h)}_{\text{exact}}.$$
Now we use Taylor series expansion and obtain
$$E(t, h) = \big(y(t) + h\,f(t, y(t))\big) - \big(y(t) + h\,y'(t) + O(h^2)\big) = O(h^2).$$
Since $y'(t) = f(t, y)$, the leading expressions cancel.
The implicit Euler method reads
$$w_{k+1} = w_k + h\,f(t_{k+1}, w_{k+1})$$
for $k \in \mathbb{N}_0$. The increment function is the slope at the right point of each interval.
We discuss the order of the implicit Euler method: Inserting the exact solution $y$ into the formula for the local truncation error yields
$$E(t, h) = \underbrace{y(t) + h\,f(t+h, y(t+h))}_{\text{implicit Euler}} - \underbrace{y(t+h)}_{\text{exact}}.$$
Now, we use Taylor series expansion and obtain
$$E(t, h) = \big(y(t) + h\,(f(t, y(t)) + O(h))\big) - \big(y(t) + h\,y'(t) + O(h^2)\big) = O(h^2).$$
Since $y'(t) = f(t, y)$, the leading expressions cancel.
Theorem 2.3 (Implicit Euler method)
The implicit Euler method is consistent and of order 1. Each iteration step requires the solution of a nonlinear equation. In contrast to explicit methods, this method has a large region of stability.
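The nonlinear equation of each step can be solved in different ways; the following Python sketch uses a plain fixed-point iteration (our own choice here, adequate when $h$ times the Lipschitz constant of $f$ is small; a Newton solver would be the more robust choice for stiff problems):

```python
def implicit_euler(f, t0, y0, h, n, iters=50):
    """Implicit Euler's method: w_{k+1} = w_k + h * f(t_{k+1}, w_{k+1}).
    The implicit equation is solved by fixed-point iteration."""
    t, w = t0, y0
    for k in range(n):
        t_next = t0 + (k + 1) * h
        w_next = w                       # start the iteration at the old value
        for _ in range(iters):
            w_next = w + h * f(t_next, w_next)
        w = w_next
    return w

# y' = -y, y(0) = 1 with exact solution e^{-t}, integrated to t = 1.
w_end = implicit_euler(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
```

For this decaying test problem the method is unconditionally stable; the result approximates $e^{-1} \approx 0.3679$ with an error of order $h$.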
The improved Euler method reads
$$K_1 = f(t_k, w_k),\qquad K_2 = f\big(t_k + \tfrac12 h,\; w_k + \tfrac12 h\,K_1\big),\qquad w_{k+1} = w_k + h\,K_2$$
for $k \in \mathbb{N}_0$. The increment function is the slope at the midpoint of each interval.
To discuss the order of the improved Euler method, we insert the exact solution $y$ into the formula of the local truncation error:
$$E(t, h) = \underbrace{y(t) + h\,f\big(t + \tfrac12 h,\; y(t) + \tfrac12 h\,f(t, y(t))\big)}_{\text{improved Euler}} - \underbrace{y(t+h)}_{\text{exact}}.$$
For simplicity, we now omit the argument $(t)$ for the function $y$ and the arguments $(t, y(t))$ for the function $f$ and its derivatives. Again, using Taylor series expansion yields
$$E(t, h) = \Big(y + h\big(f + \tfrac12 h\,(f_t + f_y(f + O(h))) + O(h^2)\big)\Big) - \Big(y + h\,y' + \tfrac12 h^2 y'' + O(h^3)\Big) = O(h^3).$$
The derivative of $y'(t) = f(t, y(t))$ with respect to $t$ is obtained by the general chain rule. Since $y' = f$ and $y'' = f_t + f_y f$, the leading expressions cancel.
Theorem 2.4 (Improved Euler method)
The improved Euler method is consistent and of order 2. Each iteration step requires 2 function evaluations.
Heun's method reads
$$K_1 = f(t_k, w_k),\qquad K_2 = f(t_k + h,\; w_k + h\,K_1),\qquad w_{k+1} = w_k + h\big(\tfrac12 K_1 + \tfrac12 K_2\big)$$
for $k \in \mathbb{N}_0$. The increment function is the average of the slopes at the left and right point of each interval.
The formula of the local truncation error has to be used to prove the order of Heun's method:
$$E(t, h) = \underbrace{y(t) + h\Big(\tfrac12 f(t, y(t)) + \tfrac12 f\big(t+h,\; y(t) + h\,f(t, y(t))\big)\Big)}_{\text{Heun}} - \underbrace{y(t+h)}_{\text{exact}}.$$
Again, for simplicity, we omit the arguments $(t)$ and $(t, y(t))$. Using Taylor series expansion yields
$$E(t, h) = \Big(y + \tfrac12 h\,f + \tfrac12 h\big(f + h\,(f_t + f_y(f + O(h))) + O(h^2)\big)\Big) - \Big(y + h\,y' + \tfrac12 h^2 y'' + O(h^3)\Big) = O(h^3).$$
The derivative of $y'(t) = f(t, y(t))$ with respect to $t$ is obtained by the general chain rule. Since $y' = f$ and $y'' = f_t + f_y f$, the leading expressions cancel.
Theorem 2.5 (Heun's method)
Heun's method is consistent and of order 2. Each iteration step requires 2 function evaluations.
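The two-stage structure is easy to see in code; here is a minimal Python sketch (names are our own), verified on the test problem $y' = y$:

```python
def heun(f, t0, y0, h, n):
    """Heun's method: average of the slopes at the left and right point."""
    t, w = t0, y0
    for k in range(n):
        K1 = f(t, w)
        K2 = f(t + h, w + h * K1)
        w = w + h * (0.5 * K1 + 0.5 * K2)
        t = t0 + (k + 1) * h
    return w

# y' = y, y(0) = 1 integrated to t = 1; the error now shrinks like h^2.
w_end = heun(lambda t, y: y, 0.0, 1.0, 0.01, 100)
```

Compared with Euler's method at the same step size, the error drops from order $h$ to order $h^2$.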
The classical Runge-Kutta method reads
$$K_1 = f(t_k, w_k),$$
$$K_2 = f\big(t_k + \tfrac12 h,\; w_k + \tfrac12 h\,K_1\big),$$
$$K_3 = f\big(t_k + \tfrac12 h,\; w_k + \tfrac12 h\,K_2\big),$$
$$K_4 = f(t_k + h,\; w_k + h\,K_3),$$
$$w_{k+1} = w_k + h\big(\tfrac16 K_1 + \tfrac26 K_2 + \tfrac26 K_3 + \tfrac16 K_4\big)$$
for $k \in \mathbb{N}_0$. The increment function is an average of two slopes at the left and right point and two slopes at the midpoint of each interval.
Theorem 2.6 (Classical Runge-Kutta method)
The classical Runge-Kutta method is consistent and of order 4. Each iteration step
requires 4 function evaluations.
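The four stages translate directly into code; the following Python sketch (our own naming) reproduces the fourth-order accuracy even with a coarse step size:

```python
def rk4(f, t0, y0, h, n):
    """Classical Runge-Kutta method of order 4."""
    t, w = t0, y0
    for k in range(n):
        K1 = f(t, w)
        K2 = f(t + 0.5 * h, w + 0.5 * h * K1)
        K3 = f(t + 0.5 * h, w + 0.5 * h * K2)
        K4 = f(t + h, w + h * K3)
        w = w + h * (K1 + 2 * K2 + 2 * K3 + K4) / 6
        t = t0 + (k + 1) * h
    return w

# y' = y, y(0) = 1 integrated to t = 1 with only 10 steps of size 0.1.
w_end = rk4(lambda t, y: y, 0.0, 1.0, 0.1, 10)
```

Even with $h = 0.1$ the error against $e$ is of magnitude $10^{-6}$, illustrating the $O(h^4)$ convergence.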
A general explicit Runge-Kutta method with $r$ stages has the form
$$K_i = f\Big(t_k + a_i h,\; w_k + h \sum_{j=1}^{i-1} b_{i,j} K_j\Big),\qquad i = 1, \dots, r,$$
$$w_{k+1} = w_k + h \sum_{i=1}^{r} c_i K_i.$$
The coefficients are collected in a Butcher tableau with the nodes $a_1, \dots, a_r$, the strictly lower triangular matrix $b_{i,j}$, and the weights $c_1, \dots, c_r$.

Special Runge-Kutta methods of order 2
The coefficient tables of special Runge-Kutta methods of order 2 are:
Improved Euler method:  a = (0, 1/2),  b_{2,1} = 1/2,  c = (0, 1).
Heun's method:          a = (0, 1),    b_{2,1} = 1,    c = (1/2, 1/2).
Special Runge-Kutta methods of order 3
The coefficient table of a special Runge-Kutta method of order 3 is, e.g., Kutta's method:
a = (0, 1/2, 1),  b_{2,1} = 1/2,  b_{3,1} = -1,  b_{3,2} = 2,  c = (1/6, 4/6, 1/6).

Special Runge-Kutta methods of order 4
The coefficient tables of special Runge-Kutta methods of order 4 are the classical Runge-Kutta method
a = (0, 1/2, 1/2, 1),  b_{2,1} = 1/2,  b_{3,2} = 1/2,  b_{4,3} = 1,  c = (1/6, 2/6, 2/6, 1/6)
and the 3/8 method
a = (0, 1/3, 2/3, 1),  b_{2,1} = 1/3,  b_{3,1} = -1/3,  b_{3,2} = 1,  b_{4,1} = 1,  b_{4,2} = -1,  b_{4,3} = 1,  c = (1/8, 3/8, 3/8, 1/8).
2.3.2 Step Size Control
A simple strategy for step size control is step doubling: each step is computed once with step size $h$ and once with two consecutive steps of size $\tfrac12 h$. If the difference of the two results is below a prescribed tolerance, accept $w_{k+1}$ and set the step size to $2h$ for the next step; otherwise reject the step and repeat it with step size $\tfrac12 h$.
An alternative are embedded Runge-Kutta pairs. The Runge-Kutta-Fehlberg method combines a fourth-order and a fifth-order formula that share the same function evaluations; the difference of the two results serves as an error estimate. Its coefficients are
a = (0, 1/4, 3/8, 12/13, 1, 1/2),
b_{2,1} = 1/4,
b_{3,1} = 3/32,       b_{3,2} = 9/32,
b_{4,1} = 1932/2197,  b_{4,2} = -7200/2197,  b_{4,3} = 7296/2197,
b_{5,1} = 439/216,    b_{5,2} = -8,          b_{5,3} = 3680/513,    b_{5,4} = -845/4104,
b_{6,1} = -8/27,      b_{6,2} = 2,           b_{6,3} = -3544/2565,  b_{6,4} = 1859/4104,  b_{6,5} = -11/40,
c = (25/216, 0, 1408/2565, 2197/4104, -1/5, 0)          (order 4),
ĉ = (16/135, 0, 6656/12825, 28561/56430, -9/50, 2/55)   (order 5).
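The step-doubling strategy described above can be sketched as follows; this Python version uses Euler steps as the underlying method purely for brevity (an assumption of this sketch — in practice one would control a higher-order method):

```python
def euler_step(f, t, w, h):
    return w + h * f(t, w)

def adaptive_euler(f, t0, y0, t_end, h, tol):
    """Step-doubling control: compare one h-step with two h/2-steps.
    Accept and enlarge h if the difference is below tol, else halve h."""
    t, w = t0, y0
    while t < t_end:
        h = min(h, t_end - t)                      # do not overshoot t_end
        big = euler_step(f, t, w, h)               # one full step
        half = euler_step(f, t, w, 0.5 * h)        # two half steps
        small = euler_step(f, t + 0.5 * h, half, 0.5 * h)
        if abs(big - small) <= tol:
            t, w = t + h, small                    # accept the better value
            h = 2 * h                              # try a larger step next
        else:
            h = 0.5 * h                            # reject, retry smaller
    return w

w_end = adaptive_euler(lambda t, y: -y, 0.0, 1.0, 1.0, 0.5, 1e-4)
```

The step size automatically shrinks where the local error estimate is large and grows again where the solution is smooth.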
A linear multistep method with $m$ steps has the form
$$\sum_{i=0}^{m} a_i\,w_{k+i} = h \sum_{i=0}^{m} b_i\,f(t_{k+i}, w_{k+i})$$
for $k \in \mathbb{N}_0$. The coefficients with $a_0^2 + b_0^2 \neq 0$ and $a_m^2 + b_m^2 \neq 0$ can be listed in a table:
a_0, a_1, ..., a_m
b_0, b_1, ..., b_m
Linear multistep methods
Linear multistep methods have some characteristics in common:
The general advantages are high order and high accuracy with little computational effort.
The general disadvantages are the limitation to a constant step size and the necessity of seed computations for the first iterates.
Adams-Bashforth methods are explicit linear multistep methods with the fixed coefficients
$$a_{m-1} = -1,\qquad a_m = 1,\qquad b_m = 0.$$
For $m = 2$, inserting the exact solution and expanding in a Taylor series around $t = t_{k+1}$ gives
$$E(t, h) = \big(y + h\,b_0 (y' - h\,y'' + O(h^2)) + h\,b_1 y'\big) - \big(y + h\,y' + \tfrac12 h^2 y'' + O(h^3)\big).$$
Here, the arguments $(t)$ and $(t, y(t))$ are omitted. Comparing all expressions with $y' = f$ and $y'' = f_t + f_y f$, respectively, the following equations result:
$$h\,b_0 + h\,b_1 - h = 0,\qquad -h^2 b_0 - \tfrac12 h^2 = 0.$$
Dividing by $h$ and $h^2$ and rearranging the expressions yields
$$b_0 + b_1 = 1,\qquad b_0 = -\tfrac12,$$
and hence $b_1 = \tfrac32$.
Adams-Bashforth methods
The coefficients of $m$-step Adams-Bashforth methods together with the order $p$ are:

m   b_0        b_1          b_2         b_3         b_4        p
1   1                                                          1
2   -1/2       3/2                                             2
3   5/12       -16/12       23/12                              3
4   -9/24      37/24        -59/24      55/24                  4
5   251/720    -1274/720    2616/720    -2774/720   1901/720   5
Adams-Moulton methods are implicit linear multistep methods with the fixed coefficients
$$a_{m-1} = -1,\qquad a_m = 1,\qquad b_m \neq 0.$$
To get appropriate coefficients $b_k$, the local truncation error is stated with the objective to cancel out as many leading expressions as possible. As in the explicit case, the achievable order depends on the number of included old values. The implicit Euler method is a special Adams-Moulton method.
Adams-Moulton methods
The coefficients of $m$-step Adams-Moulton methods together with the order $p$ are:

m   b_0        b_1        b_2        b_3       b_4       p
1   0          1                                         1
1   1/2        1/2                                       2
2   -1/12      8/12       5/12                           3
3   1/24       -5/24      19/24      9/24                4
4   -19/720    106/720    -264/720   646/720   251/720   5

The special case $p = 1$ is the implicit Euler method. The special case $p = 2$ is the trapezoidal method.
Example 2.11 (Adams-Moulton methods)
a) For $m = 1$, we obtain the implicit Euler method of order 1:
$$w_{k+1} = w_k + h\,f(t_{k+1}, w_{k+1}).$$
b) Again for $m = 1$, there is another Adams-Moulton method. This method is called the trapezoidal method and has order 2:
$$w_{k+1} = w_k + h\big(\tfrac12 f(t_k, w_k) + \tfrac12 f(t_{k+1}, w_{k+1})\big).$$
A predictor-corrector method combines an explicit and an implicit scheme:
(1) Compute the predictor $\tilde{w}_{k+m}$ with an explicit Adams-Bashforth $m$-step method.
(2) Compute the corrector $w_{k+m}$ with the implicit Adams-Moulton $(m-1)$-step method, using the predictor $\tilde{w}_{k+m}$ in the right-hand side of the Adams-Moulton scheme.

Predictor-corrector methods
The advantage of predictor-corrector methods is that there is no need for root-finding subroutines. Using the predictor in the right-hand side of the scheme replaces the implicit, generally nonlinear equation by an explicit evaluation.
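The two-step recipe above can be sketched with the AB2 predictor and the trapezoidal (AM, $m = 1$) corrector; the Heun seed step for the first iterate is our own choice in this Python illustration:

```python
def pece(f, t0, y0, h, n):
    """Adams-Bashforth 2-step predictor, trapezoidal corrector (PECE).
    The first step is seeded with Heun's method."""
    ts = [t0, t0 + h]
    K1 = f(t0, y0)
    w1 = y0 + 0.5 * h * (K1 + f(t0 + h, y0 + h * K1))     # Heun seed
    ws = [y0, w1]
    for k in range(1, n):
        fk, fkm1 = f(ts[k], ws[k]), f(ts[k - 1], ws[k - 1])
        pred = ws[k] + h * (1.5 * fk - 0.5 * fkm1)           # AB2 predictor
        corr = ws[k] + 0.5 * h * (fk + f(ts[k] + h, pred))   # AM corrector
        ws.append(corr)
        ts.append(ts[k] + h)
    return ws[-1]

# y' = -y, y(0) = 1 integrated to t = 1; second-order accurate.
w_end = pece(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
```

Note that the corrector formula is evaluated explicitly because the predictor `pred` stands in for the unknown $w_{k+1}$ on the right-hand side.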
Consider the test differential equation $y' = \lambda y$ with $\lambda \in \mathbb{C}$. Since most real-world models embody friction or damping, the interesting case is $\mathrm{Re}(\lambda) < 0$. Applying a one-step method with step size $h$ to this equation generates a recursive formula of the form
$$w_{k+1} = g(\lambda h)\,w_k$$
for the numerical solution $w_k$. Here, $g(z)$ denotes some complex function with a complex argument $z = \lambda h \in \mathbb{C}$. If $\mathrm{Re}(\lambda) < 0$, the exact solution $y(t)$ tends to zero. The numerical solution $w_k$ has the same behavior if $|g(z)| < 1$.
Definition 2.30 (Region of stability for one-step methods)
Consider the test differential equation $y' = \lambda y$. For a one-step method with a numerical solution $(w_k)$ and a step size $h$ of the form
$$w_{k+1} = g(\lambda h)\,w_k,$$
the region of stability is defined as that part of the complex plane where $g$ is contracting:
$$G = \{\,z \in \mathbb{C} : |g(z)| < 1\,\}.$$
Within $G$ and the left half-plane the numerical solution has the same qualitative behavior as the exact solution.
Boundary of region of stability for one-step methods
The boundary of the region of stability consists of all solutions $z$ of the equations
$$g(z) = e^{i\varphi},\qquad 0 \le \varphi < 2\pi,$$
where $g$ results from the one-step method applied to the test differential equation.
Example 2.12 (Stability of Euler's method)
Applying Euler's method
$$w_{k+1} = w_k + h\,f(t_k, w_k)$$
to the test differential equation $y' = \lambda y$ yields
$$w_{k+1} = w_k + h\lambda\,w_k = (1 + \lambda h)\,w_k.$$
Using $z = \lambda h$, the stability function reads
$$g(z) = 1 + z.$$
The boundary of the region of stability is defined by $|g(z)| = 1$. Since all complex points on the unit circle are of the form $e^{i\varphi}$, we get
$$1 + z = e^{i\varphi}\quad\Longrightarrow\quad z = -1 + e^{i\varphi}.$$
With $\varphi \in [0, 2\pi)$ the boundary of the region of stability describes a circle with center $-1$ and radius 1. The region of stability lies within the circle.
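The shape of this region is easy to verify numerically; the following Python snippet probes the stability function $g(z) = 1 + z$ at a few sample points:

```python
# Stability function of Euler's method for y' = lambda*y: g(z) = 1 + z.
# The region |g(z)| < 1 is the disk with center -1 and radius 1.
def g(z):
    return 1 + z

center_ok = abs(g(complex(-1.0, 0.0))) < 1   # disk center: stable
far_left = abs(g(complex(-3.0, 0.0))) < 1    # left of the disk: unstable
imag_axis = abs(g(complex(0.0, 1.0))) < 1    # |1 + i| = sqrt(2): unstable
```

In particular, points far in the left half-plane lie outside the disk, which is exactly the limitation of explicit methods for stiff problems discussed later.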
Example 2.13 (Stability of improved Euler's and Heun's method)
a) Applying the improved Euler method
$$w_{k+1} = w_k + h\,f\big(t_k + \tfrac12 h,\; w_k + \tfrac12 h\,f(t_k, w_k)\big)$$
to the test differential equation $y' = \lambda y$ yields
$$w_{k+1} = w_k + h\lambda\big(w_k + \tfrac12 h\lambda\,w_k\big) = \big(1 + \lambda h + \tfrac12 \lambda^2 h^2\big)\,w_k.$$
Using $z = \lambda h$, the stability function reads
$$g(z) = 1 + z + \tfrac12 z^2.$$
The boundary of the region of stability is defined by $|g(z)| = 1$. Since all complex points on the unit circle are of the form $e^{i\varphi}$, we get
$$1 + z + \tfrac12 z^2 = e^{i\varphi}\quad\Longrightarrow\quad z_{1,2} = -1 \pm \sqrt{2 e^{i\varphi} - 1}.$$
With $\varphi \in [0, 2\pi)$ the boundary of the region of stability describes a closed curve which looks like an oval. For special values we obtain
$$\varphi = 0:\; z = -2,\, 0,\qquad \varphi = \pi:\; z = -1 \pm \sqrt{3}\,i.$$
b) Applying Heun's method
$$w_{k+1} = w_k + h\big(\tfrac12 f(t_k, w_k) + \tfrac12 f(t_k + h,\; w_k + h\,f(t_k, w_k))\big)$$
to the test differential equation $y' = \lambda y$ yields
$$w_{k+1} = w_k + h\lambda\big(\tfrac12 w_k + \tfrac12 (w_k + h\lambda\,w_k)\big) = \big(1 + \lambda h + \tfrac12 \lambda^2 h^2\big)\,w_k.$$
Hence, the stability function of Heun's method is the same:
$$g(z) = 1 + z + \tfrac12 z^2.$$
Example 2.14 (Stability of Kutta's method)
Applying Kutta's method to the test differential equation $y' = \lambda y$ yields
$$K_1 = f(t_k, w_k) = \lambda\,w_k,$$
$$K_2 = f\big(t_k + \tfrac12 h,\; w_k + \tfrac12 h\,K_1\big) = \lambda\big(1 + \tfrac12 \lambda h\big)\,w_k,$$
$$K_3 = f\big(t_k + h,\; w_k - h\,K_1 + 2h\,K_2\big) = \lambda\big(1 + \lambda h + \lambda^2 h^2\big)\,w_k,$$
$$w_{k+1} = w_k + h\big(\tfrac16 K_1 + \tfrac46 K_2 + \tfrac16 K_3\big) = \big(1 + \lambda h + \tfrac12 \lambda^2 h^2 + \tfrac16 \lambda^3 h^3\big)\,w_k.$$
Using $z = \lambda h$, the stability function reads
$$g(z) = 1 + z + \tfrac12 z^2 + \tfrac16 z^3.$$
The boundary of the region of stability is defined by $|g(z)| = 1$. Since all complex points on the unit circle are of the form $e^{i\varphi}$, we get
$$1 + z + \tfrac12 z^2 + \tfrac16 z^3 = e^{i\varphi}.$$
This equation can be solved either numerically or with a formula for the roots of cubic polynomials. With $\varphi \in [0, 2\pi)$ the boundary of the region of stability describes a closed curve in the complex plane.

Example 2.15 (Stability of the classical Runge-Kutta method)
Applying the classical Runge-Kutta method to the test differential equation analogously yields the stability function
$$g(z) = 1 + z + \tfrac12 z^2 + \tfrac16 z^3 + \tfrac{1}{24} z^4.$$
The boundary of the region of stability is defined by $|g(z)| = 1$. Since all complex points on the unit circle are of the form $e^{i\varphi}$, we get
$$1 + z + \tfrac12 z^2 + \tfrac16 z^3 + \tfrac{1}{24} z^4 = e^{i\varphi}.$$
This equation can be solved either numerically or with a formula for the roots of quartic polynomials. With $\varphi \in [0, 2\pi)$ the boundary of the region of stability describes a closed curve in the complex plane.
Stability of Runge-Kutta methods
Runge-Kutta methods are explicit methods. The regions of stability are bounded and cover only small parts of the left half-plane. (Figure: stability regions of RK1 (Euler), RK2, RK3 (Kutta), and RK4 (classical Runge-Kutta) in the complex plane; the regions grow with the order.)

In terms of stability, good numerical methods cover a wide range of the left half-plane. There are a couple of methods that cover the complete left half-plane. Such methods are called A-stable. They are optimal with respect to stability.
Example 2.16 (Stability of the implicit Euler method)
Applying the implicit Euler method to the test differential equation $y' = \lambda y$ yields
$$w_{k+1} = w_k + h\lambda\,w_{k+1}\quad\Longrightarrow\quad w_{k+1} = \frac{1}{1 - \lambda h}\,w_k.$$
Using $z = \lambda h$, the stability function reads
$$g(z) = \frac{1}{1 - z}.$$
The boundary of the region of stability is defined by $|g(z)| = 1$. Since all complex points on the unit circle are of the form $e^{i\varphi}$, we get
$$\frac{1}{1 - z} = e^{i\varphi}\quad\Longrightarrow\quad z = 1 - e^{-i\varphi}.$$
With $\varphi \in [0, 2\pi)$ the boundary of the region of stability describes a circle with center 1 and radius 1. The region of stability lies outside the circle and thus covers the whole left half-plane. The implicit Euler method is A-stable. This is a big advantage of this approach.
Example 2.17 (Stability of trapezoidal method with one-step definition)
Applying the trapezoidal method
$$w_{k+1} = w_k + h\big(\tfrac12 f(t_k, w_k) + \tfrac12 f(t_{k+1}, w_{k+1})\big)$$
to the test differential equation $y' = \lambda y$ yields
$$w_{k+1} = w_k + h\lambda\big(\tfrac12 w_k + \tfrac12 w_{k+1}\big)\quad\Longrightarrow\quad w_{k+1} = \frac{2 + \lambda h}{2 - \lambda h}\,w_k.$$
Using $z = \lambda h$, the stability function reads
$$g(z) = \frac{2 + z}{2 - z}.$$
The boundary of the region of stability is defined by $|g(z)| = 1$. Since all complex points on the unit circle are of the form $e^{i\varphi}$, we get
$$\frac{2 + z}{2 - z} = e^{i\varphi}\quad\Longrightarrow\quad z = 2\,\frac{e^{i\varphi} - 1}{e^{i\varphi} + 1} = \frac{2\sin\varphi}{1 + \cos\varphi}\,i.$$
The last equation results from expanding the fraction with the complex conjugate of the denominator, $e^{-i\varphi} + 1$, and using the relations
$$\sin\varphi = \frac{1}{2i}\big(e^{i\varphi} - e^{-i\varphi}\big),\qquad \cos\varphi = \frac{1}{2}\big(e^{i\varphi} + e^{-i\varphi}\big).$$
With $\varphi \in [0, 2\pi)$ the boundary of the region of stability is the imaginary axis. The region of stability lies left of the imaginary axis and thus covers the whole left half-plane. The trapezoidal method is A-stable.
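A-stability of the trapezoidal method can also be probed numerically; $|g(z)| < 1$ should hold for every sampled point with negative real part and fail for every point with positive real part:

```python
# Stability function of the trapezoidal method: g(z) = (2 + z)/(2 - z).
# |2 + z|^2 - |2 - z|^2 = 8 Re(z), so |g(z)| < 1 exactly for Re(z) < 0.
def g(z):
    return (2 + z) / (2 - z)

left_ok = all(abs(g(complex(x, y))) < 1
              for x in (-0.1, -1.0, -10.0, -100.0)
              for y in (-5.0, 0.0, 5.0))
right_bad = all(abs(g(complex(x, y))) > 1
                for x in (0.1, 1.0, 10.0)
                for y in (-5.0, 0.0, 5.0))
```

The identity in the comment shows why the check succeeds for arbitrarily large $|z|$ in the left half-plane, which is precisely the A-stability property.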
The polynomials
$$\rho(\xi) = \sum_{i=0}^{m} a_i\,\xi^i,\qquad \sigma(\xi) = \sum_{i=0}^{m} b_i\,\xi^i$$
are called the first characteristic polynomial and the second characteristic polynomial of the given multistep method.
Example 2.18 (Characteristic polynomials of Adams methods)
a) The characteristic polynomials of the Adams-Bashforth method with $m = 1$ are
$$\rho(\xi) = -1 + \xi,\qquad \sigma(\xi) = 1.$$
b) The characteristic polynomials of the Adams-Bashforth method with $m = 2$ are
$$\rho(\xi) = -\xi + \xi^2,\qquad \sigma(\xi) = -\tfrac12 + \tfrac32\,\xi.$$
c) The characteristic polynomials of the Adams-Bashforth method with $m = 3$ are
$$\rho(\xi) = -\xi^2 + \xi^3,\qquad \sigma(\xi) = \tfrac{5}{12} - \tfrac{16}{12}\,\xi + \tfrac{23}{12}\,\xi^2.$$
d) The characteristic polynomials of the Adams-Moulton (trapezoidal) method with $m = 1$ are
$$\rho(\xi) = -1 + \xi,\qquad \sigma(\xi) = \tfrac12 + \tfrac12\,\xi.$$
e) The characteristic polynomials of the Adams-Moulton method with $m = 2$ are
$$\rho(\xi) = -\xi + \xi^2,\qquad \sigma(\xi) = -\tfrac{1}{12} + \tfrac{8}{12}\,\xi + \tfrac{5}{12}\,\xi^2.$$
Consider again the test differential equation $y' = \lambda y$ with $\mathrm{Re}(\lambda) < 0$. With this equation, a linear multistep method with step size $h$ reads
$$\sum_{i=0}^{m} a_i\,w_{k+i} = h\lambda \sum_{i=0}^{m} b_i\,w_{k+i},$$
where $z = \lambda h$. This is a linear difference equation for the sequence $(w_k)$. The general solution of that difference equation is based on the powers $\xi^k$, where $\xi$ are the zeros of the stability polynomial
$$p(\xi) = \sum_{i=0}^{m} (a_i - z\,b_i)\,\xi^i = \rho(\xi) - z\,\sigma(\xi).$$

Definition (Region of stability for linear multistep methods)
For a linear multistep method applied to the test differential equation,
$$\sum_{i=0}^{m} a_i\,w_{k+i} = h\lambda \sum_{i=0}^{m} b_i\,w_{k+i},$$
the region of stability is defined as that part of the complex plane where the stability polynomial $p$ has only zeros within the unit circle:
$$G = \{\,z \in \mathbb{C} : \text{all zeros } \xi \text{ of } p(\xi) = \rho(\xi) - z\,\sigma(\xi) \text{ satisfy } |\xi| < 1\,\}.$$
Within $G$ and the left half-plane the numerical solution has the same qualitative behavior as the exact solution.

The boundary of the region of stability is reached if one zero $\xi$ crosses the unit circle:
$$|\xi| = 1\quad\Longleftrightarrow\quad \xi = e^{i\varphi},\qquad \varphi \in [0, 2\pi).$$
Inserting this value into the stability polynomial yields the boundary points:
$$\sum_{i=0}^{m} (a_i - z\,b_i)\,(e^{i\varphi})^i = 0\quad\Longrightarrow\quad z(\varphi) = \frac{\rho(e^{i\varphi})}{\sigma(e^{i\varphi})}.$$
Hence, all boundary points $z(\varphi)$ are obtained by varying $\varphi$ in the interval $[0, 2\pi)$.

Boundary of region of stability for linear multistep methods
Using the characteristic polynomials $\rho$ and $\sigma$ of a multistep method, the boundary of the region of stability consists of all points
$$z = \frac{\rho(e^{i\varphi})}{\sigma(e^{i\varphi})},\qquad 0 \le \varphi < 2\pi.$$

For the Adams-Bashforth method with $m = 1$ (Euler's method) the boundary reads
$$z = \frac{-1 + e^{i\varphi}}{1} = -1 + e^{i\varphi}.$$
This expression is the same as in Example 2.12. Again, all points $z$ lie on a circle with center $-1$ and radius 1.
Stability of Adams-Bashforth methods
Adams-Bashforth methods are explicit methods. AB1 is Euler's method. The regions of stability are bounded and cover very small parts of the left half-plane. With increasing order the regions are getting even smaller. (Figure: stability regions of AB1 to AB4 in the complex plane; points belonging to any stability region are plotted in light red.)
For the trapezoidal method (Adams-Moulton method with $m = 1$) the boundary reads
$$z = \frac{-1 + e^{i\varphi}}{\tfrac12 + \tfrac12 e^{i\varphi}} = 2\,\frac{e^{i\varphi} - 1}{e^{i\varphi} + 1}.$$
This expression is the same as in Example 2.17. Again, all points $z$ lie on the imaginary axis. Thus, the trapezoidal method is A-stable. (Figure: stability regions of the Adams-Moulton methods AM1 to AM4 in the complex plane.)
2.5.3 Stiffness
Stiffness describes the relation between the necessary step size and the smoothness of the solution. If a solution behaves continuously and smoothly without sudden changes of slope on the one hand, and yet requires quite a small step size on the other hand, such a paradoxical situation is called stiff. Stiffness is a matter of performance, not of precision. Stiffness is also a local matter: a differential equation can behave normally in some region and stiff in another.
Example 2.21 (Stiff linear system)
We consider the linear system
$$y' = A\,y\qquad\text{with}\qquad A = \begin{pmatrix} 0 & 1 \\ -1 & -10.1 \end{pmatrix}.$$
The characteristic polynomial
$$\det(A - \lambda I) = \lambda^2 + 10.1\,\lambda + 1$$
has the zeros $\lambda_1 = -0.1$ and $\lambda_2 = -10$, leading to the fundamental solutions
$$z_1(t) = e^{-0.1\,t},\qquad z_2(t) = e^{-10\,t}.$$
The eigenvector equations
$$(A - \lambda_1 I)\,v_1 = \begin{pmatrix} 0.1 & 1 \\ -1 & -10 \end{pmatrix} v_1 = 0,\qquad
(A - \lambda_2 I)\,v_2 = \begin{pmatrix} 10 & 1 \\ -1 & -0.1 \end{pmatrix} v_2 = 0$$
yield the eigenvectors
$$v_1 = \begin{pmatrix} 10 \\ -1 \end{pmatrix},\qquad v_2 = \begin{pmatrix} -0.1 \\ 1 \end{pmatrix}.$$
(Figure: the fast component $z_2$ decays almost immediately, while the slow component $z_1$ decays gently.)
Example 2.22 (Vibration equation solved with Euler)
We consider again the vibration equation of Example 2.6 with the parameters of Example 2.21. For the initial value
$$y(0) = \begin{pmatrix} 1 \\ 0 \end{pmatrix},$$
the exact solution is
$$y(t) = C_1\,e^{-0.1\,t}\,v_1 + C_2\,e^{-10\,t}\,v_2,\qquad C_1 = \frac{10}{99},\quad C_2 = \frac{10}{99}.$$
Euler's method applied to the mode $y' = \lambda y$ is stable for $|1 + \lambda h| < 1$, i.e. for $h < 2/|\lambda|$. The two eigenvalues yield the limit step sizes
$$\bar{h}_1 = \frac{2}{|\lambda_1|} = \frac{2}{0.1} = 20,\qquad \bar{h}_2 = \frac{2}{|\lambda_2|} = \frac{2}{10} = 0.2,\qquad \bar{h} = \min\{\bar{h}_1, \bar{h}_2\} = \bar{h}_2 = 0.2.$$
The absolute value of eigenvalue $\lambda_1$ is much smaller than that of eigenvalue $\lambda_2$. So, the limit step size $\bar{h}_1$ derived from $\lambda_1$ is much larger than the limit step size $\bar{h}_2$ derived from $\lambda_2$. Although the solution part $z_1$ is dominant, the solution part $z_2$ limits the step size. With $h = 0.17 < \bar{h}$ the solution behavior is reproduced correctly. In contrast, with $h = 0.23 > \bar{h}$, strong oscillations occur in the numerical solution. This effect is called stiffness.
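The two step sizes of this example can be compared directly in a short Python simulation of the system $y' = A y$ with $A = \begin{pmatrix} 0 & 1 \\ -1 & -10.1 \end{pmatrix}$:

```python
# Euler's method on the stiff system of Example 2.21 with y(0) = (1, 0).
def euler_max_amplitude(h, n):
    y1, y2 = 1.0, 0.0
    amp = abs(y1)
    for _ in range(n):
        # simultaneous update: right-hand sides use the old values
        y1, y2 = y1 + h * y2, y2 + h * (-y1 - 10.1 * y2)
        amp = max(amp, abs(y1))
    return amp

amp_stable = euler_max_amplitude(0.17, 300)    # h < 0.2: solution decays
amp_unstable = euler_max_amplitude(0.23, 300)  # h > 0.2: oscillations grow
```

For $h = 0.17$ the eigenvalue $\lambda_2 = -10$ is mapped to $|1 - 1.7| = 0.7 < 1$ and the iteration decays; for $h = 0.23$ it is mapped to $|1 - 2.3| = 1.3 > 1$ and the fast mode is amplified in every step.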
2.6.1 Summary
Initial value problems can be solved numerically in many different ways. Most popular
methods integrate from the initial point in time to the end point step by step. The
resulting solution is a sequence of approximation points.
2.6.2 Matlab Functions

solver    problem type     type of algorithm
ode23     nonstiff         explicit Runge-Kutta pair, orders 2 and 3
ode45     nonstiff         explicit Runge-Kutta pair, orders 4 and 5 (workhorse!)
ode113    nonstiff         explicit linear multistep, orders 1 to 13
ode15s    stiff            implicit linear multistep, orders 1 to 5
ode15i    fully implicit   implicit linear multistep, orders 1 to 5
ode23s    stiff            modified Rosenbrock pair (one-step), orders 2 and 3
ode23t    mildly stiff     trapezoidal rule (implicit), orders 2 and 3
ode23tb   stiff            implicit Runge-Kutta-type algorithm, orders 2 and 3
2.7 Boundary Value Problems

2.7.1 Shooting Methods
A two-point boundary value problem with the conditions
$$y(a) = \alpha,\qquad y(b) = \beta$$
is converted into a family of initial value problems with
$$y(a) = \alpha,\qquad y'(a) = s.$$
The slope $s$ is adjusted, e.g. by a root-finding method, until the resulting solution satisfies the second boundary condition $y(b) = \beta$.
2.7.2 Finite Difference Methods
For the boundary value problem with $y(a) = \alpha$ and $y(b) = \beta$, the derivatives are replaced by central difference quotients on a grid $t_k = a + kh$:
$$y'(t_k) = \frac{y(t_{k+1}) - y(t_{k-1})}{2h} + O(h^2),\qquad
y''(t_k) = \frac{y(t_{k+1}) - 2y(t_k) + y(t_{k-1})}{h^2} + O(h^2).$$
This turns the differential equation into a system of algebraic equations for the inner grid values.
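As a sketch of the resulting linear system, consider the model problem $y'' = -y$ with $y(0) = 1$, $y(\pi/2) = 1$ (our own example choice); the central difference discretization gives a tridiagonal system that a Thomas-algorithm sweep solves in $O(n)$ operations:

```python
import math

def fd_bvp(n):
    """Finite differences for y'' = -y, y(0) = 1, y(pi/2) = 1.
    Inner equation: (y_{k+1} - 2 y_k + y_{k-1})/h^2 + y_k = 0."""
    a, b = 0.0, math.pi / 2
    h = (b - a) / (n + 1)
    diag = [-2.0 + h * h] * n       # main diagonal, off-diagonals are 1
    rhs = [0.0] * n
    rhs[0] -= 1.0                   # boundary value y(0) = 1
    rhs[-1] -= 1.0                  # boundary value y(pi/2) = 1
    # Thomas algorithm (forward elimination, back substitution)
    c = [0.0] * n
    d = [0.0] * n
    c[0] = 1.0 / diag[0]
    d[0] = rhs[0] / diag[0]
    for k in range(1, n):
        m = diag[k] - c[k - 1]
        c[k] = 1.0 / m
        d[k] = (rhs[k] - d[k - 1]) / m
    y = [0.0] * n
    y[-1] = d[-1]
    for k in range(n - 2, -1, -1):
        y[k] = d[k] - c[k] * y[k + 1]
    return y

y_mid = fd_bvp(99)[49]                                   # approximates y(pi/4)
exact_mid = math.sin(math.pi / 4) + math.cos(math.pi / 4)  # exact: sqrt(2)
```

The exact solution of this model problem is $y(t) = \sin t + \cos t$ (Example 2.3 a), so the midpoint value should approach $\sqrt{2}$ with an $O(h^2)$ error.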
2.8 Applications
2.8.1 Biology
Example 2.23 (Predator-prey model)
The Lotka-Volterra predator-prey model describes the relation between two populations. One population behaves as predator, the other as prey. The model consists of a two-dimensional system of first-order differential equations. The 4 parameters are $a, b, c, d > 0$:
$$\dot{x} = a\,x - b\,x\,y,\qquad \dot{y} = -c\,y + d\,x\,y.$$
Example 2.24 (Competing population model)
The competing population model describes two populations that compete for their living environment. The model consists of a two-dimensional system of first-order differential equations. The parameter $r > 0$ describes the effect of the growth of one species on the growth of the other species:
$$\dot{x} = x - x^2 - r\,x\,y,\qquad \dot{y} = y - y^2 - r\,x\,y.$$
First, we discuss the case $r = 0.5$:
x-nullclines: $x = 0$ and $y = -2x + 2$
y-nullclines: $y = 0$ and $y = -\tfrac12 x + 1$
Equilibrium solutions: $(0, 0)$, $(1, 0)$, $(0, 1)$, and $(\tfrac23, \tfrac23)$ (stable)
(Figure: phase plane with nullclines and flow directions.)
Next, we discuss the case $r = 2$:
x-nullclines: $x = 0$ and $y = -\tfrac12 x + \tfrac12$
y-nullclines: $y = 0$ and $y = -2x + 1$
Equilibrium solutions: $(0, 0)$, $(1, 0)$, $(0, 1)$, and $(\tfrac13, \tfrac13)$ (unstable)
(Figure: phase plane with nullclines and flow directions.)
2.8.2 Mechanics
Example 2.25 (Spring-damper oscillator)
Let $x$ denote the deflection of a ball with mass $m$, and let $c$ and $k$ denote the spring and damper coefficient, respectively. Let $F(t)$ be an external force. Newton's law on the balance of forces states:
$$m\,\ddot{x} + k\,\dot{x} + c\,x = F(t).$$
Using the new states $y_1 = x$, $y_2 = \dot{x}$, the second-order differential equation can be transformed into a linear first-order system:
$$\begin{pmatrix} \dot{y}_1 \\ \dot{y}_2 \end{pmatrix}
= \begin{pmatrix} 0 & 1 \\ -\tfrac{c}{m} & -\tfrac{k}{m} \end{pmatrix}
\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}
+ \begin{pmatrix} 0 \\ \tfrac{1}{m} F(t) \end{pmatrix}.$$
Example 2.26 (Oscillator with two masses)
For two coupled masses $m_1$ and $m_2$ linked by springs with coefficients $c_1$, $c_2$, $c_3$, the equations of motion form a second-order system. Using the new states
$$y_1 = x_1,\quad y_2 = x_2,\quad y_3 = \dot{x}_1,\quad y_4 = \dot{x}_2,$$
this second-order system can be written as a linear first-order system with constant coefficients:
$$\dot{y} = \begin{pmatrix}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
-\tfrac{c_1 + c_2}{m_1} & \tfrac{c_2}{m_1} & 0 & 0 \\
\tfrac{c_2}{m_2} & -\tfrac{c_2 + c_3}{m_2} & 0 & 0
\end{pmatrix} y.$$
2.8.3 Engineering
Example 2.27 (Liquid tank)
The model describes the height of a liquid in a spherical tank. The model consists of a first-order differential equation. The parameter $r > 0$ describes the tank radius. The parameter $c_d$ denotes a liquid-dependent parameter (water: 0.6). The parameter $A$ describes the hole area:
$$\dot{h} = -\frac{c_d\,A\,\sqrt{2\,g\,h}}{\pi\,(2\,r\,h - h^2)}.$$
For example, we can set $r = 1.5\,\mathrm{m}$, $c_d = 0.6$, and $A = 0.0152\,\mathrm{m}^2$.
(Figure: the height $h$ decreases from 3 m to 0 within roughly 2500 s.)
Example 2.28 (Quarter car model)
The model describes a simple quarter car (Source: Mitschke). It consists of a second-order linear differential equation. The state variables are
$$y = \begin{pmatrix} z \\ \dot{z} \end{pmatrix},$$
and with road height $h$ the system reads
$$\dot{y} = \begin{pmatrix} 0 & 1 \\ -\tfrac{c}{m} & -\tfrac{k}{m} \end{pmatrix} y
+ \begin{pmatrix} 0 \\ \tfrac{c}{m}\,h + \tfrac{k}{m}\,\dot{h} \end{pmatrix}.$$
Example 2.29 (Quarter car model with three masses)
The model describes a simple quarter car with three coupled masses (Source: Mitschke). The model consists of a system of three second-order linear differential equations. The differential equations are
$$m_1\,\ddot{z}_1 - k_2(\dot{z}_2 - \dot{z}_1) - c_2(z_2 - z_1) + c_1 z_1 = c_1 h,$$
$$m_2\,\ddot{z}_2 - k_3(\dot{z}_3 - \dot{z}_2) - c_3(z_3 - z_2) + k_2(\dot{z}_2 - \dot{z}_1) + c_2(z_2 - z_1) = 0,$$
$$m_3\,\ddot{z}_3 + k_3(\dot{z}_3 - \dot{z}_2) + c_3(z_3 - z_2) = 0,$$
with the state vectors
$$z = \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix},\qquad
y = \begin{pmatrix} z \\ \dot{z} \end{pmatrix}.$$
The mass matrix, the damper matrix, the spring matrix, the right-hand side matrix, and the amplification by the road read
$$M = \begin{pmatrix} m_1 & 0 & 0 \\ 0 & m_2 & 0 \\ 0 & 0 & m_3 \end{pmatrix},\qquad
K = \begin{pmatrix} k_2 & -k_2 & 0 \\ -k_2 & k_2 + k_3 & -k_3 \\ 0 & -k_3 & k_3 \end{pmatrix},$$
$$C = \begin{pmatrix} c_1 + c_2 & -c_2 & 0 \\ -c_2 & c_2 + c_3 & -c_3 \\ 0 & -c_3 & c_3 \end{pmatrix},\qquad
R = \begin{pmatrix} c_1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},\qquad
h = \begin{pmatrix} h_1 \\ 0 \\ 0 \end{pmatrix}.$$
In explicit first-order form the system reads
$$\dot{y} = \begin{pmatrix} 0 & E \\ -M^{-1} C & -M^{-1} K \end{pmatrix} y
+ \begin{pmatrix} 0 \\ M^{-1} R\,h \end{pmatrix}.$$
2.9 Exercises
Theoretical Exercises
Exercise 2.1
For each ODE that is given, determine its order, check if it is linear, and check if the given functions are solutions.
a) $y' = 2t$ with $y(t) = t^2 + c$.
b) $y' = 2ty + 1$ with $y(t) = e^{t^2} \int_0^t e^{-x^2}\,dx + e^{t^2}$.
Exercise 2.5
Compute the region of stability for the trapezoidal method
$$w_{k+1} = w_k + h\big(\tfrac12 f(t_k, w_k) + \tfrac12 f(t_{k+1}, w_{k+1})\big).$$

Computational Exercises
Exercise 2.6
Consider the IVP
$$y' = y,\qquad y(0) = 1.$$
a) Plot the solution of the IVP for $t \in [0, 5]$ using the function quad with stepsize 0.01.
b) Which element of the discrete solution vector y(k) gives $y(3)$?
c) What is the value of $y(3)$?
Exercise 2.9
Consider the simple IVP
$$y' = y,\qquad y(0) = 1,$$
as well as the variant with $y(0) = 5$ and $t \in [0, 50]$.
Application-based Exercises
Exercise 2.11
Population model with two competing species: Consider the system
$$\dot{x} = x - x^2 - \tfrac12\,x\,y,\qquad \dot{y} = y - y^2 - \tfrac12\,x\,y.$$
Draw a phase-plane diagram for $x \ge 0$, $y \ge 0$ which includes all equilibrium solutions, x- and y-nullclines along with the exact flow directions on the nullclines and approximate flow directions in the regions between nullclines.
Exercise 2.12
After a skydiver jumps from an airplane and until the parachute opens, the air resistance is
proportional to v^1.5, and the maximum speed that the skydiver can reach is 130 km/h.
a) Set up the differential equation for the vertical velocity. What is the initial condition?
b) MATLAB: Implement the Runge-Kutta method and integrate the system with h = 0.01 for
t ∈ [0, 10]. Plot the solution curve and include the curve for the case without air resistance.
c) MATLAB: After how many seconds does the skydiver exceed a falling speed of
100 km/h?
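The model behind this exercise can be sketched as follows. The snippet below is an illustrative Python version (the exercise itself asks for MATLAB); the values g = 9.81 m/s² and the drag constant k are assumptions chosen so that the acceleration vanishes exactly at the stated maximum speed of 130 km/h.

```python
g = 9.81                 # gravitational acceleration in m/s^2 (assumed)
v_max = 130 / 3.6        # terminal speed: 130 km/h converted to m/s
k = g / v_max**1.5       # drag constant chosen so that v' = 0 at v = v_max

def f(v):
    # air resistance proportional to v^1.5, balancing gravity at v_max
    return g - k * v**1.5

def rk4(f, v0, h, t_end):
    # classical fourth-order Runge-Kutta method for the autonomous ODE v' = f(v)
    v, t = v0, 0.0
    while t < t_end - 1e-12:
        k1 = f(v)
        k2 = f(v + h / 2 * k1)
        k3 = f(v + h / 2 * k2)
        k4 = f(v + h * k3)
        v += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return v

v10 = rk4(f, 0.0, 0.01, 10.0)
print(v10 * 3.6)  # speed after 10 s in km/h, approaching but below 130
```

The computed speed approaches the terminal value from below, in contrast to the unbounded growth v(t) = gt without air resistance.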
Exercise 2.13
The vertical dynamical behavior of a vehicle can be studied using a quarter car with three masses.
This simplified model consists of a wheel with mass m1, the body with mass m2, and the driver
with mass m3. The masses are linked with springs and dampers.
a) Set up three scalar differential equations of order 2 describing the vertical dynamics using the
positions z1, z2, and z3 as state variables. Let h denote the road height.
b) Transform the differential equations into a second-order system with mass matrix M, damper
matrix K, and spring matrix C. Use z = (z1, z2, z3)T as state variable.
c) Transform the system into an explicit first-order system. Use y = (z, z')T as state variable.
d) MATLAB: Implement the right-hand side function of the ODE system as set up in c).
e) MATLAB: Implement the road height function. Let x denote the road position. Then,

h(x) = 0 if x ≤ 4 ,   h(x) = 0.2 if 4 < x ≤ 12 ,   h(x) = 0 if 12 < x .
Partial differential equations are used as mathematical models in a vast range of technical
applications. Heat distributions, diffusion processes and wave dynamics can be described
with partial differential equations. Any kind of fluid dynamics is mathematically expressed
by the famous Navier-Stokes equations.
3.1 Introduction
First of all, we briefly discuss the most important basics of partial differential equations.
The independent variables typically describe time and space. It is common to denote
problems in one spatial variable as 1D problems. Likewise, problems in two or three
spatial variables are called 2D and 3D problems.
Dimension
Partial differential equations are called
1D problems if only one spatial variable is involved,
2D problems if two spatial variables are involved,
3D problems if three spatial variables are involved.
This classification holds no matter whether time is a further variable or not.
Example 3.2 (Problem dimension)
a) The equation ux + uy = 1 is a 2D problem.
b) The equation ux = t u is a 1D partial differential equation although x and time t are both
independent variables.
c) The equation uxx + uyy + uzz = t u² is a 3D problem.

The 2D Laplace equation reads

Δu = uxx + uyy = 0 ,

and the 2D heat equation reads

ut = α (uxx + uyy) = α Δu .

Solutions of the Laplace equation are called harmonic. Conversely, any function with
Δu = 0 solves the Laplace equation.
Definition 3.2 (Solution)
A function u is called a solution of a partial differential equation if the equation is
satisfied at all points of the domain when inserting u and its partial derivatives.
Example 3.7 (Solutions of the heat equation)
a) The function u(x, t) = ax + b solves the heat equation since ut = uxx = 0.
b) The function u(x, t) = e^(−λ²t) ( c1 cos(λx) + c2 sin(λx) ) also solves the heat equation
for any coefficients c1 and c2 since

ut(x, t) = −λ² e^(−λ²t) ( c1 cos(λx) + c2 sin(λx) ) ,
uxx(x, t) = −λ² e^(−λ²t) ( c1 cos(λx) + c2 sin(λx) ) .

Further candidate solutions of the Laplace equation are checked by inserting the partial
derivatives:
a) For u(x, y) = x² + y² we get uxx + uyy = 2 + 2 = 4 ≠ 0, so u is not harmonic.
b) For u(x, y) = x² − y² we get uxx + uyy = 2 − 2 = 0, so u is harmonic.
c) For u(x, y) = x²y − y⁴/6 we get uxx = 2y, uy = x² − (2/3) y³, uyy = −2y², hence
uxx + uyy = 2y − 2y² ≠ 0, so u is not harmonic.
d) For u(x, y) = ln(x² + y²) we get

ux = 2x / (x² + y²) ,   uy = 2y / (x² + y²) ,
uxx = (−2x² + 2y²) / (x² + y²)² ,   uyy = (2x² − 2y²) / (x² + y²)² ,

hence uxx + uyy = 0, so u is harmonic.
Many important partial differential equations are linear and of second order:

a uxx + 2b uxy + c uyy + 2 d1 ux + 2 d2 uy + d3 u = f(x, y) .

In general, the coefficients a, b, c and d1, d2, d3 depend on (x, y). The factors 2 simplify
some equations related to this quadratic form. Using differentials we can rewrite the
equation as

( ∂/∂x , ∂/∂y ) [ a  b ; b  c ] ( ∂/∂x ; ∂/∂y ) u + 2 ( d1 , d2 ) ( ∂/∂x ; ∂/∂y ) u + d3 u = f ,

with the coefficient matrix A = [ a  b ; b  c ].
Finding the zeros of the characteristic polynomial of matrix A

det(A − λI) = λ² − (a + c) λ + (ac − b²) = 0

yields the eigenvalues of matrix A. The equation is called

elliptic if b² − ac < 0,
parabolic if b² − ac = 0,
hyperbolic if b² − ac > 0.

In general, the coefficients a, b, c and d1, d2, d3 depend on (x, y). Hence, the type of the
equation depends on (x, y).
Example 3.11 (Laplace, heat, and wave equation)
a) For the 2D Laplace and Poisson equation we have

uxx + uyy = 0 with a = 1, b = 0, c = 1

and with that b² − ac = −1 < 0. Hence, the 2D Laplace equation is of second order, linear,
and elliptic.
b) In terms of classification, the 1D heat equation gives

ut = α uxx with a = α, b = 0, c = 0

which yields b² − ac = 0. Hence, the 1D heat equation is of second order, linear, and parabolic.
c) For the 1D wave equation we find

utt = α² uxx with a = α², b = 0, c = −1

and with that b² − ac = α² > 0. Hence, the 1D wave equation is of second order, linear, and
hyperbolic.
d) For the Tricomi equation y uxx + uyy = 0 the coefficients

a(x, y) = y ,   b(x, y) = 0 ,   c(x, y) = 1

depend on the position. This gives b² − ac = −y. Hence, the Tricomi equation is elliptic if y > 0, parabolic if y = 0, and
hyperbolic if y < 0.
b) Consider the 2D Laplace equation on the unit disk. The boundary values

u(x, y) = arg(x, y) for x² + y² = 1

define a unique solution. Here, arg(x, y) denotes the angle between the vector (x, y)T and
the positive x-axis.
c) Consider the 2D Laplace equation on the triangle D = {(x, y) | x, y ≥ 0, x + y ≤ 1}. With

u(x, 1 − x) = 1 ,   ux(0, y) = uy(x, 0) = 0

the boundary data are prescribed.
For the 1D heat equation on an interval [a, b], typical boundary conditions prescribe the
values at both ends for all times, for example

u(a, t) = c1 ,   u(b, t) = c2 for t > 0 ,

or the fluxes, for example ux(0, t) = ux(1, t).
For hyperbolic problems, the solution at a point P(x0, t0) ∈ D depends on that part of the
boundary ∂D that lies behind the propagating line of P.
Example 3.15 (Boundary conditions for the wave equation)
Consider the 1D wave equation on the domain D = [0, π] × R0+. The initial and boundary
values read

u(x, 0) = cos x ,   u(0, t) = 1 ,   u(π, t) = −1 .
Discrete values of the solution u and their numerical approximations w are denoted by

ui = u(xi) ≈ wi ,
ui,j = u(xi, yj) ≈ wi,j ,
ui,k = u(xi, tk) ≈ wi,k ,
ui,j,k = u(xi, yj, tk) ≈ wi,j,k .

For the continuous solution x ∈ [a, b], y ∈ [c, d], and t ≥ 0. The discretized solution is
defined for i = 0, 1, . . . , n, j = 0, 1, . . . , m, and k ∈ N0. If the points are equidistant with
h > 0 and Δt > 0 then xi = a + hi, yj = c + hj, and tk = Δt k. Numerical approximations
of u are denoted by w.
If the problem is 2D or higher, all grid nodes can be ordered linearly in a vector. One of
the possible variants to do this is to use the reading or alphabetical order. All grid nodes
are ordered with ascending index values. The first index has highest and the last index
lowest priority.
The grid consists of 21 × 11 = 231 nodes with 19 × 9 = 171 inner nodes and 20 + 10 + 20 + 10 = 60
boundary nodes. If the solution function is, for example, w(x, y) = xy², the vector

w = ( 0, 0, 0, . . . , 0.9, 0.961, 1.024, . . . , 1.8, 1.922, 2.048, . . . , . . . , 32 )

consists of all grid values in reading order; the first block belongs to x = 0, the second to
x = 0.1, the third to x = 0.2, and so on.
The forward difference formula reads

f'(t) = ( f(t + h) − f(t) ) / h + O(h) ,

and the backward difference formula reads

f'(t) = ( f(t) − f(t − h) ) / h + O(h) .

Both approximations are of first order O(h). Expanding into Taylor polynomials gives

f(t + h) = f(t) + h f'(t) + (h²/2) f''(t) + (h³/6) f'''(t) + O(h⁴) ,
f(t − h) = f(t) − h f'(t) + (h²/2) f''(t) − (h³/6) f'''(t) + O(h⁴) .

Subtracting the second from the first expansion yields the central difference formula

f'(t) = ( f(t + h) − f(t − h) ) / (2h) + O(h²) ,

which is of second order. Adding both expansions yields a second-order approximation of
the second derivative. In summary, the derivatives can be approximated by

f'(t) ≈ ( f(t + h) − f(t) ) / h ,   (forward difference)
f'(t) ≈ ( f(t) − f(t − h) ) / h ,   (backward difference)
f'(t) ≈ ( f(t + h) − f(t − h) ) / (2h) ,   (central difference)
f''(t) ≈ ( f(t + h) − 2 f(t) + f(t − h) ) / h² .   (central second difference)
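The stated orders can be checked numerically: halving h should roughly halve the forward-difference error and quarter the central-difference error. A small Python sketch using f(t) = sin(t), whose derivatives are known exactly:

```python
import math

def forward(f, t, h):  return (f(t + h) - f(t)) / h
def central(f, t, h):  return (f(t + h) - f(t - h)) / (2 * h)
def second(f, t, h):   return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

f, t = math.sin, 1.0
exact1, exact2 = math.cos(t), -math.sin(t)

# errors for two step sizes h and h/2
e_fwd = [abs(forward(f, t, h) - exact1) for h in (1e-2, 5e-3)]
e_cen = [abs(central(f, t, h) - exact1) for h in (1e-2, 5e-3)]
e_sec = abs(second(f, t, 1e-2) - exact2)

print(e_fwd[0] / e_fwd[1], e_cen[0] / e_cen[1])  # ratios ≈ 2 (order 1) and ≈ 4 (order 2)
```

The error ratios confirm first order for the one-sided formulas and second order for the central formulas.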
These finite differences in one dimension can now be applied to a function u depending on
x and y. Using these differences, approximations w of u are involved only at grid points.
In this context, we assume a regular grid with equidistant points.
Finite differences
The second-order partial derivatives of a function u(x, y) at a point (xi, yj) can be approximated using h > 0 by the numerical values

uxx(xi, yj) ≈ (wxx)i,j = ( wi+1,j − 2 wi,j + wi−1,j ) / h² ,
uyy(xi, yj) ≈ (wyy)i,j = ( wi,j+1 − 2 wi,j + wi,j−1 ) / h² .
Adding both difference formulas yields the 5-point-star for the Laplace operator: the
central node carries the weight −4, its four neighbors in x- and y-direction the weight 1,
scaled by 1/h².
The equations for all numerical values wi,j can be collected in a linear system Aw = b.
If the domain is rectangular, the system matrix A has a special structure: it consists of
5 diagonals. Using a grid with n + 1 nodes in both directions, the vector w consists of
(n + 1)² elements in reading order and the matrix A has (n + 1)⁴ elements!
Linear system for 5-point-star
Using the 5-point-star to solve the 2D Laplace equation results in the linear system
Aw = b with
Each row of A contains the value 4 on the diagonal and the value −1 in the four columns
belonging to the neighboring nodes; the right-hand side b collects the contributions of the
known boundary values.
This linear system is valid for all inner nodes wi,j. Boundary conditions for boundary
nodes have to be added separately. Using the reading order in a rectangular grid with
n + 1 nodes in x-direction and m + 1 nodes in y-direction, the values −1 are separated
by m − 1 zero elements.
Since all finite differences used in the numerical solution scheme are of order 2, the
5-point-star is of second order. This is sufficient and gives reasonable precision for normal,
well-conditioned problems.
Theorem 3.1 (Order of the 5-point-star)
The numerical method using the 5-point-star with grid size h to solve the 2D Laplace
equation is of order O(h2 ).
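A quick way to see the 5-point-star at work is to exploit that second differences are exact for quadratic polynomials: with Dirichlet data taken from the harmonic polynomial u(x, y) = x² − y², the discrete solution must agree with u at every node. The sketch below assembles the system in the reading order described above (a dense matrix for simplicity; in practice a sparse format would be used):

```python
import numpy as np

n = 10                      # n + 1 nodes per direction on the unit square
h = 1.0 / n
u = lambda x, y: x**2 - y**2   # harmonic: uxx + uyy = 2 - 2 = 0

N = (n + 1)**2              # reading order: node (i, j) has index i*(n+1) + j
A = np.zeros((N, N))
b = np.zeros(N)
for i in range(n + 1):
    for j in range(n + 1):
        k = i * (n + 1) + j
        if i in (0, n) or j in (0, n):     # boundary node: trivial equation
            A[k, k] = 1.0
            b[k] = u(i * h, j * h)
        else:                               # inner node: 5-point-star
            A[k, k] = 4.0
            for dk in (-1, 1, -(n + 1), n + 1):
                A[k, k + dk] = -1.0

w = np.linalg.solve(A, b)
exact = np.array([u(i * h, j * h) for i in range(n + 1) for j in range(n + 1)])
print(np.max(np.abs(w - exact)))  # at machine precision: the star is exact for quadratics
```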
Until now, the linear system Aw = b is valid only for inner nodes. Nodes on the boundary
typically have only 2 or 3 neighbors within the domain. Thus, the 5-point-star is not
applicable.
Ghost nodes
Applying the 5-point-star to boundary nodes generates ghost nodes outside the domain.
These ghost nodes are eliminated again with modified difference formulas. Thus, for
boundary nodes the 5-point-star changes into formulas involving fewer than 5 nodes.
To include Dirichlet boundary conditions there are two alternatives. The first alternative is
to keep the boundary nodes in the linear system and replace the corresponding 5-point-star
equations by trivial equations.
Boundary nodes with Dirichlet condition
To add Dirichlet boundary conditions, the linear system Aw = b is modified. Any row
k of A that refers to a boundary node wk with Dirichlet conditions is replaced by zeros
and the value 1 on the diagonal. The corresponding element bk of the right-hand side
vector is replaced by the given Dirichlet value gk :
ak,ℓ = 0 for ℓ ≠ k ,   ak,k = 1 ,   bk = gk .
Consider the 2D Laplace problem

Δu = 0 on D = {(x, y) | 0 ≤ x, y ≤ 2} ,
u(x, y) = 10 if y = 2 ,   u(x, y) = 0 otherwise on ∂D .

Using the grid size h = 1, the nine nodes w0,0, . . . , w2,2
are introduced.
According to the reading order, we define the vector
w = (w0,0 , w0,1 , w0,2 , w1,0 , w1,1 , w1,2 , w2,0 , w2,1 , w2,2 )T .
Using the 5-point-star we obtain the linear system Aw = b with
A =
1  0  0  0  0  0  0  0  0
0  1  0  0  0  0  0  0  0
0  0  1  0  0  0  0  0  0
0  0  0  1  0  0  0  0  0
0 −1  0 −1  4 −1  0 −1  0
0  0  0  0  0  1  0  0  0
0  0  0  0  0  0  1  0  0
0  0  0  0  0  0  0  1  0
0  0  0  0  0  0  0  0  1

and b = ( 0, 0, 10, 0, 0, 10, 0, 0, 10 )T.

Only the fifth row belongs to the single inner node w1,1; all other rows are trivial
equations for the boundary nodes. Solving the system yields w1,1 = 10/4 = 2.5.
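The small system above can be solved directly; the sketch below rebuilds it in Python (the example itself would use MATLAB) and confirms the value of the single inner node:

```python
import numpy as np

# reading order: w = (w00, w01, w02, w10, w11, w12, w20, w21, w22)
A = np.eye(9)
A[4] = [0, -1, 0, -1, 4, -1, 0, -1, 0]   # 5-point-star for the inner node w11
b = np.array([0, 0, 10, 0, 0, 10, 0, 0, 10], float)  # u = 10 where y = 2

w = np.linalg.solve(A, b)
print(w[4])  # 4*w11 = 0 + 0 + 10 + 0, hence w11 = 2.5
```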
The second alternative is to delete all boundary node equations from the linear system. As
an advantage, this implies a smaller matrix A. However, matrix A is then less structured.
This may be regarded as a disadvantage.
Example 3.18 (Laplace equation on an L-shaped domain)
Consider the 2D Laplace problem

Δu = 0 on D ,
u(x, y) = 5 if y = 0 ,   u(x, y) = 0 otherwise on ∂D ,

on an L-shaped domain. Using the grid size h = 1, the domain contains the five inner nodes
w1,1, w2,1, w3,1, w3,2, and w3,3.
Since the values of the boundary nodes are known through the Dirichlet conditions, we set up a
linear system resulting from the 5-point-star including only the inner nodes:
4w11 − w21 = 5
−w11 + 4w21 − w31 = 5
−w21 + 4w31 − w32 = 5
−w31 + 4w32 − w33 = 0
−w32 + 4w33 = 0
In matrix form this reads

 4 −1  0  0  0
−1  4 −1  0  0
 0 −1  4 −1  0     w = ( 5, 5, 5, 0, 0 )T .
 0  0 −1  4 −1
 0  0  0 −1  4

Solving the linear system yields

w1,1 = 70/39 ≈ 1.7949 ,   w2,1 = 85/39 ≈ 2.1795 ,   w3,1 = 75/39 ≈ 1.9231 ,
w3,2 = 20/39 ≈ 0.5128 ,   w3,3 = 5/39 ≈ 0.1282 .
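The tridiagonal system of the L-shaped example can be verified with a few lines of Python (shown here instead of MATLAB for illustration):

```python
import numpy as np

# inner nodes in the order (w11, w21, w31, w32, w33)
A = np.array([[ 4, -1,  0,  0,  0],
              [-1,  4, -1,  0,  0],
              [ 0, -1,  4, -1,  0],
              [ 0,  0, -1,  4, -1],
              [ 0,  0,  0, -1,  4]], float)
b = np.array([5, 5, 5, 0, 0], float)

w = np.linalg.solve(A, b)
print(np.round(w, 4))  # [1.7949 2.1795 1.9231 0.5128 0.1282]
```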
To include Neumann boundary conditions there are at least two approaches. In a first
approach we could use forward or backward difference formulas for the boundary nodes.
The advantage is that these formulas do not include nodes outside the domain. The big
disadvantage, however, is that their order is only O(h). This order does not match the
second-order differences O(h²) of the 5-point-star for the inner nodes. A second
approach makes use of temporary ghost nodes outside the domain. If wn,j is a boundary
node on the right part of a boundary, the ghost node wn+1,j lies outside the domain. Let

∂u/∂n (xn, yj) = gn,j

be the value of the normal derivative at the boundary node. Then, using the central
difference formula in x-direction for the normal derivative gives the approximation

( wn+1,j − wn−1,j ) / (2h) = gn,j .

Inserting the ghost node wn+1,j = wn−1,j + 2h gn,j into

4wn,j − wn−1,j − wn+1,j − wn,j−1 − wn,j+1 = 0

finally yields the modified boundary equation

4wn,j − 2wn−1,j − wn,j−1 − wn,j+1 = 2h gn,j

in which the ghost node is eliminated again.
Boundary nodes with Neumann condition
To add Neumann boundary conditions the linear system Aw = b is modified. In any
row k of A that refers to a boundary node wk with Neumann conditions, the element
ak,ℓ that corresponds with the node wℓ opposite the ghost node outside the domain is
replaced by the value −2. The corresponding element bk of the right-hand side vector
is replaced by a value built from the given Neumann value gk and the grid size h:

ak,ℓ = −2 ,   bk = 2h gk .
The coefficients of the difference formula are chosen such that the resulting local
discretisation error is as small as possible. One possible choice defines the 9-point-star,
which weights the central node with −20, its four edge neighbors with 4, and its four
diagonal neighbors with 1, scaled by 1/(6h²).
It can be shown that the 9-point-star is of higher order than the 5-point-star. To be more
precise, the order is O(h⁴) in comparison with O(h²). However, 9 neighboring points are
involved in each equation. The corresponding matrix has more non-zero elements than
the 5-point-star matrix.
Theorem 3.2 (Order of the 9-point-star)
The numerical method using the 9-point-star with grid size h to solve the 2D Laplace
equation is of order O(h4 ).
The forward difference in time and the central second difference in space give the explicit
approximation scheme

( wi,k+1 − wi,k ) / Δt = α ( wi+1,k − 2 wi,k + wi−1,k ) / h² .

Linear system for explicit 4-point-star
Using the explicit 4-point-star to solve the 1D heat equation ut = α uxx results in the
time iteration wk+1 = A wk, where A is tridiagonal with 1 − 2λ on the diagonal, λ on
both off-diagonals, and

λ = α Δt / h² .

This linear system is valid for all inner nodes wi,k+1 with 1 ≤ i ≤ n − 1 and k ≥ 0.
Boundary conditions for boundary nodes have to be added separately.
Example 3.19 (Heat equation on a band domain)
Consider the 1D heat problem

ut = uxx on D = {(x, t) | 0 ≤ x ≤ 4, 0 ≤ t} ,
u(0, t) = 0 ,   u(4, t) = 10 for 0 ≤ t ,
u(x, 0) = 0 .

Using the grid sizes h = 1 and Δt = 1/2 gives λ = 1/2. In each iteration step

wk+1 = A wk with A =
 1    0    0    0    0
1/2   0   1/2   0    0
 0   1/2   0   1/2   0
 0    0   1/2   0   1/2
 0    0    0    0    1

has to be computed. The first and last row in A are due to the Dirichlet boundary conditions.
Starting with the initial condition, the first iterates and the limit iterate read

w0 = (0, 0, 0, 0, 10)T ,   w1 = (0, 0, 0, 5, 10)T ,   w2 = (0, 0, 2.5, 5, 10)T ,
w3 = (0, 1.25, 2.5, 6.25, 10)T , . . . ,   w∞ = (0, 2.5, 5, 7.5, 10)T .
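The iteration of this example is a plain matrix-vector loop; a Python sketch (illustrative, the text's computations would be done in MATLAB) reproduces the iterates:

```python
import numpy as np

lam = 0.5                      # λ = α Δt / h², here exactly at the stability limit
A = np.eye(5)
for i in (1, 2, 3):            # inner rows: (λ, 1 − 2λ, λ)
    A[i, i - 1], A[i, i], A[i, i + 1] = lam, 1 - 2 * lam, lam

w = np.array([0, 0, 0, 0, 10.0])   # initial condition including Dirichlet values
for _ in range(3):
    w = A @ w
print(w)  # third iterate: [0, 1.25, 2.5, 6.25, 10]
```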
The simple and straightforward explicit method is of order O(h²) but only of order O(Δt).
Moreover, a stability condition can be derived. This condition is obtained by investigating
the eigenvalues of matrix A.
Theorem 3.3 (Order and stability of the explicit 4-point-star)
The numerical method using the explicit 4-point-star with grid sizes h and Δt to solve
the 1D heat equation is of order O(h²) and O(Δt). The method is stable if

λ = α Δt / h² ≤ 1/2 .
Example 3.20 (Heat equation on a band domain with unstable solution)
Consider again the 1D heat problem

ut = uxx on D = {(x, t) | 0 ≤ x ≤ 4, 0 ≤ t} ,
u(0, t) = 0 ,   u(4, t) = 10 for 0 ≤ t ,
u(x, 0) = 0 .

Using the grid sizes h = 1 and Δt = 1 gives

λ = α Δt / h² = 1 > 1/2 ,

so stability is not guaranteed. Using the explicit method, in each iteration step

wk+1 = A wk with A =
 1  0  0  0  0
 1 −1  1  0  0
 0  1 −1  1  0
 0  0  1 −1  1
 0  0  0  0  1

has to be computed. The first and last row in A are due to the Dirichlet boundary conditions.
Starting with the initial condition, the first iterates read

w0 = (0, 0, 0, 0, 10)T ,   w1 = (0, 0, 0, 10, 10)T ,   w2 = (0, 0, 10, 0, 10)T ,
w3 = (0, 10, −10, 20, 10)T ,   w4 = (0, −20, 40, −20, 10)T , . . .

This numerical solution shows increasing oscillations and thus does not behave similarly to the
real solution.
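The blow-up is easy to reproduce; the following Python sketch iterates the same matrix and shows the growing oscillation:

```python
import numpy as np

lam = 1.0                      # λ = 1 violates the stability condition λ ≤ 1/2
A = np.eye(5)
for i in (1, 2, 3):            # inner rows: (λ, 1 − 2λ, λ) = (1, −1, 1)
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -1.0, 1.0

w = np.array([0, 0, 0, 0, 10.0])
for _ in range(4):
    w = A @ w
print(w)  # fourth iterate: [0, -20, 40, -20, 10] — oscillating with growing amplitude
```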
Instead of using the forward difference formula for the time discretisation, the backward
one can be used as well. The 4-point-star is then established in such a way that just 1
node lies on the old time level and 3 nodes lie on the new one.
In this implicit approach the matrix-vector multiplication from the explicit method
changes into the solution of a linear system. This implies that the computational effort
increases for the implicit variant.
Linear system for implicit 4-point-star
Using the implicit 4-point-star to solve the 1D heat equation ut = α uxx results in the
linear system B wk+1 = wk, where B is tridiagonal with 1 + 2λ on the diagonal, −λ on
both off-diagonals, and

λ = α Δt / h² .

This linear system is valid for all inner nodes wi,k+1 with 1 ≤ i ≤ n − 1 and k ≥ 0.
Boundary conditions for boundary nodes have to be added separately.
The implicit method is still quite simple and also of order O(h²) but only of order O(Δt).
But in contrast to the explicit approach, the implicit variant is unconditionally stable. This
is a major advantage.
Theorem 3.4 (Order and stability of the implicit 4-point-star)
The numerical method using the implicit 4-point-star with grid sizes h and Δt to solve
the 1D heat equation is of order O(h²) and O(Δt). The method is unconditionally
stable.
Example 3.21 (Heat equation on a band domain with implicit solution)
Consider again the 1D heat problem

ut = uxx on D = {(x, t) | 0 ≤ x ≤ 4, 0 ≤ t} ,
u(0, t) = 0 ,   u(4, t) = 10 for 0 ≤ t ,
u(x, 0) = 0 .

Using the grid sizes h = 1 and Δt = 1 gives λ = 1. In each iteration step the linear system

B wk+1 = wk with B =
 1  0  0  0  0
−1  3 −1  0  0
 0 −1  3 −1  0
 0  0 −1  3 −1
 0  0  0  0  1

has to be solved. The first and last row in B are due to the Dirichlet boundary conditions.
Starting with the initial condition, the iterates read

w0 = (0, 0, 0, 0, 10)T ,   w1 ≈ (0, 0.476, 1.428, 3.809, 10)T ,
w2 ≈ (0, 1.043, 2.653, 5.487, 10)T , . . . ,   w∞ = (0, 2.5, 5, 7.5, 10)T .

Likewise, with Δt = 1/2 and hence λ = 1/2, the linear system B wk+1 = wk with

B =
  1    0    0    0    0
−1/2   2  −1/2   0    0
  0  −1/2   2  −1/2   0
  0    0  −1/2   2  −1/2
  0    0    0    0    1

has to be solved. Starting with the initial condition, the iterates read

w0 = (0, 0, 0, 0, 10)T ,   w1 ≈ (0, 0.178, 0.714, 2.678, 10)T ,
w2 ≈ (0, 0.471, 1.530, 4.221, 10)T , . . . ,   w∞ = (0, 2.5, 5, 7.5, 10)T .

When making a comparison between the results obtained with the two values of Δt we have to
be careful. Only every second iterate obtained with Δt = 1/2 corresponds to an iterate obtained
with Δt = 1.
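A Python sketch of one implicit step (Δt = 1, λ = 1) confirms the first iterate:

```python
import numpy as np

B = np.eye(5)
for i in (1, 2, 3):            # inner rows: (−λ, 1 + 2λ, −λ) = (−1, 3, −1)
    B[i, i - 1], B[i, i], B[i, i + 1] = -1.0, 3.0, -1.0

w = np.array([0, 0, 0, 0, 10.0])
w1 = np.linalg.solve(B, w)     # one implicit step requires a linear solve
print(w1)  # ≈ (0, 0.476, 1.429, 3.810, 10)
```

Note that, unlike the explicit method, each step costs a linear solve, but the iterates stay bounded for any Δt.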
Now, we combine the explicit and the implicit approach. The two 4-point-stars merge into one
6-point-star called the Crank-Nicolson method. This method was published by Crank and
Nicolson in a paper in 1947.
The scheme averages the explicit and the implicit space differences:

( wi,k+1 − wi,k ) / Δt
= α ( wi+1,k − 2 wi,k + wi−1,k + wi+1,k+1 − 2 wi,k+1 + wi−1,k+1 ) / (2h²) .
Like in the implicit case the Crank-Nicolson method requires the solution of a linear system
in each iteration step. Note that for 1D parabolic equations the size of the resulting linear
system is comparable to that of 1D elliptic equations and thus in general much smaller
than the size for 2D elliptic equations.
Linear system for 6-point-star Crank-Nicolson method
Using the 6-point-star Crank-Nicolson method to solve the 1D heat equation ut = α uxx
results in the linear system B wk+1 = A wk with λ = α Δt / h², where A is tridiagonal
with 2 − 2λ on the diagonal and λ on both off-diagonals, and B is tridiagonal with
2 + 2λ on the diagonal and −λ on both off-diagonals.
This linear system is valid for all inner nodes wi,k+1 with 1 ≤ i ≤ n − 1 and k ≥ 0.
Boundary conditions for boundary nodes have to be added separately.
The Crank-Nicolson method is in several ways satisfying. Firstly, the method is of second
order for h as well as for Δt. Secondly, the method is unconditionally stable. There is no
restriction regarding the time step size Δt.
Theorem 3.5 (Order and stability of the 6-point-star Crank-Nicolson method)
The numerical method using the 6-point-star Crank-Nicolson method with grid sizes h
and Δt to solve the 1D heat equation is of order O(h²) and O(Δt²). The method is
unconditionally stable.
Example 3.22 (Heat equation on a band domain with Crank-Nicolson)
Consider again the 1D heat problem from example 3.19 with a grid using h = 1. Applying the
Crank-Nicolson method, the linear system B wk+1 = A wk has to be solved in each iteration
step. With Δt = 1 and hence λ = 1 the matrices read

A =
 1  0  0  0  0
 1  0  1  0  0
 0  1  0  1  0
 0  0  1  0  1
 0  0  0  0  1

B =
 1  0  0  0  0
−1  4 −1  0  0
 0 −1  4 −1  0
 0  0 −1  4 −1
 0  0  0  0  1

The first and last row in A and B are due to the Dirichlet boundary conditions. Starting with
the initial condition, the iterates read

w0 = (0, 0, 0, 0, 10)T ,   w1 ≈ (0, 0.357, 1.428, 5.357, 10)T ,
w2 ≈ (0, 1.173, 3.265, 6.173, 10)T , . . . ,   w∞ = (0, 2.5, 5, 7.5, 10)T .

Likewise, with Δt = 1/2 and hence λ = 1/2 the matrices read

A =
  1    0    0    0    0
 1/2   1   1/2   0    0
  0   1/2   1   1/2   0
  0    0   1/2   1   1/2
  0    0    0    0    1

B =
  1    0    0    0    0
−1/2   3  −1/2   0    0
  0  −1/2   3  −1/2   0
  0    0  −1/2   3  −1/2
  0    0    0    0    1

and the iterates read

w0 = (0, 0, 0, 0, 10)T ,   w1 ≈ (0, 0.098, 0.588, 3.431, 10)T ,
w2 ≈ (0, 0.407, 1.660, 4.852, 10)T , . . . ,   w∞ = (0, 2.5, 5, 7.5, 10)T .

When making a comparison between the results obtained with the two values of Δt we have to
be careful. Only every second iterate obtained with Δt = 1/2 corresponds to an iterate obtained
with Δt = 1.
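One Crank-Nicolson step combines a matrix multiplication with a linear solve. A Python sketch for Δt = 1 (λ = 1) reproduces the first iterate above:

```python
import numpy as np

lam = 1.0
A = np.eye(5)
B = np.eye(5)
for i in (1, 2, 3):
    A[i, i - 1], A[i, i], A[i, i + 1] = lam, 2 - 2 * lam, lam        # explicit half
    B[i, i - 1], B[i, i], B[i, i + 1] = -lam, 2 + 2 * lam, -lam      # implicit half

w = np.array([0, 0, 0, 0, 10.0])
w1 = np.linalg.solve(B, A @ w)   # one Crank-Nicolson step
print(w1)  # ≈ (0, 0.357, 1.429, 5.357, 10)
```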
Linear system for explicit 6-point-star
Using the explicit 6-point-star to solve the 2D heat equation ut = α Δu results in the
time iteration wk+1 = A wk, where each inner row of A contains 1 − 4λ on the diagonal
and λ in the four columns of the neighboring nodes, with

λ = α Δt / h² .

This linear system is valid for all inner nodes wi,j,k+1 with 1 ≤ i ≤ n − 1 and 1 ≤ j ≤ m − 1
and k ≥ 0. Boundary conditions for boundary nodes have to be added separately.
The order of the explicit approach is the same in 1D and 2D. To ensure stability of the
numerical solution, the coefficient λ has to stay below the value 1/4. This value in 2D is
different from the value 1/2 in 1D:

λ = α Δt / h² ≤ 1/4 .
Swapping the difference formulas of the time layers k and k + 1 yields the implicit variant.
Here, the backward difference formula with respect to time is applied. This results again
in a 6-point-star, now with the central weight 1 + 4λ.
Linear system for implicit 6-point-star
Using the implicit 6-point-star to solve the 2D heat equation ut = α Δu results in the
linear system B wk+1 = wk, where each inner row of B contains 1 + 4λ on the diagonal
and −λ in the four columns of the neighboring nodes, with

λ = α Δt / h² .
This linear system is valid for all inner nodes wi,j,k+1 with 1 i n1 and 1 j m1
and k 0. Boundary conditions for boundary nodes have to be added separately.
The order of the simple explicit and implicit methods is O(h²) and O(Δt). Thus, the implicit
variant has no advantage in terms of precision. But, concerning the overall behavior of
the numerical solution, there is no limit on Δt. Even with large Δt, the numerical solution
behaves in the same way as the exact solution.
Theorem 3.7 (Order and stability of the implicit 6-point-star)
The numerical method using the implicit 6-point-star with grid sizes h and Δt to solve
the 2D heat equation is of order O(h²) and O(Δt). The method is unconditionally
stable.
A further improvement is obtained using the average of the explicit and the implicit
method. In this approach, 10 points in total are included in the method's molecule. 5
points refer to time k and another 5 points refer to time k + 1.
Definition 3.20 (10-point-star Crank-Nicolson method)
A numerical solution wi,j,k of the 2D heat equation ut = α Δu can be obtained using the
grid sizes h and Δt. The coefficient is λ = α Δt / h² and the approximation scheme reads

( wi,j,k+1 − wi,j,k ) / Δt
= α ( wi+1,j,k + wi−1,j,k + wi,j+1,k + wi,j−1,k − 4 wi,j,k
    + wi+1,j,k+1 + wi−1,j,k+1 + wi,j+1,k+1 + wi,j−1,k+1 − 4 wi,j,k+1 ) / (2h²) .
The time update involves two matrices. One matrix is multiplied with wk+1 , the other
one with wk . Hence, a matrix multiplication and the solution of a linear system have to
be calculated.
Linear system for 10-point-star Crank-Nicolson method
Using the 10-point-star Crank-Nicolson method to solve the 2D heat equation ut = α Δu
results in the linear system B wk+1 = A wk with λ = α Δt / h², where each inner row of A
contains 2 − 4λ on the diagonal and λ in the four columns of the neighboring nodes, and
each inner row of B contains 2 + 4λ on the diagonal and −λ in the four columns of the
neighboring nodes.
This linear system is valid for all inner nodes wi,j,k+1 with 1 ≤ i ≤ n − 1 and 1 ≤ j ≤ m − 1
and k ≥ 0. Boundary conditions for boundary nodes have to be added separately. Using
the reading order in a rectangular grid with n + 1 nodes in x-direction and m + 1 nodes
in y-direction, the values λ and −λ are separated by m − 1 zero elements.
It can be shown that the order of the Crank-Nicolson approach is 2 for h as well as for
Δt. Moreover, the advantage of unconditional stability of the implicit method is inherited
by the Crank-Nicolson method. These advantages make the Crank-Nicolson approach
a commonly used solver for parabolic problems.
Theorem 3.8 (Order and stability of the 10-point-star Crank-Nicolson method)
The numerical method using the 10-point-star Crank-Nicolson method with grid sizes
h and Δt to solve the 2D heat equation is of order O(h²) and O(Δt²). The method
is unconditionally stable.
The 1D wave equation utt = α² uxx is discretized with central differences in space and time:

( wi,k+1 − 2 wi,k + wi,k−1 ) / Δt² = α² ( wi+1,k − 2 wi,k + wi−1,k ) / h² .

In general, the discretization of space and time is different. This implies a coefficient σ
that also depends on the factor α of the differential equation. Moreover, three time layers
are included in the hyperbolic 5-point-star.
Linear system for hyperbolic 5-point-star
Using the hyperbolic 5-point-star to solve the 1D wave equation utt = α² uxx results in
the second-order linear iteration wk+1 = A wk − wk−1, where A is tridiagonal with 2 − 2σ
on the diagonal and σ on both off-diagonals, and

σ = α² Δt² / h² .

This iteration is valid for all inner nodes wi,k+1 with 1 ≤ i ≤ n − 1 and k ≥ 1.
Boundary conditions for boundary nodes have to be added separately. In addition, a
special initial step is necessary for k = 0.
Since the central difference formula is used for time discretization in the hyperbolic
5-point-star, this approach is of second order with respect to space and time. The
Courant-Friedrichs-Lewy condition describes the limit of stability.
Theorem 3.9 (Order and stability of the hyperbolic 5-point-star)
The numerical method using the hyperbolic 5-point-star with grid sizes h and Δt to
solve the 1D wave equation utt = α² uxx is of order O(h²) and O(Δt²). The method
is stable if the Courant-Friedrichs-Lewy condition holds:

σ = α² Δt² / h² ≤ 1 .
For the initial velocity condition

ut(x, 0) = g(x) ,

ghost nodes for the seed computation in the first step are used. The central difference
formula in t-direction gives

( wi,1 − wi,−1 ) / (2Δt) = gi ,   hence   wi,−1 = wi,1 − 2Δt gi .

Inserting the ghost nodes wi,−1 into the scheme at k = 0 with initial values wi,0 = fi
yields the seed formula

wi,1 = (1 − σ) fi + (σ/2) ( fi−1 + fi+1 ) + Δt gi ,   1 ≤ i ≤ n − 1 .
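A useful check of the scheme: at the stability limit σ = 1, the hyperbolic 5-point-star reproduces the exact d'Alembert solution at the grid nodes. The Python sketch below (an illustration, with α = 1, u(x, 0) = sin(πx), ut(x, 0) = 0, and zero boundary values as assumed test data) marches to t = 1 and compares with the exact solution sin(πx) cos(πt):

```python
import numpy as np

n = 20
h = dt = 1.0 / n                 # sigma = dt^2 / h^2 = 1
x = np.linspace(0, 1, n + 1)
f = np.sin(np.pi * x)            # initial displacement, g = 0

w_old = f.copy()                 # time level 0
w = np.zeros(n + 1)              # time level 1 from the seed formula (sigma = 1)
w[1:-1] = 0.5 * (f[:-2] + f[2:])

for k in range(2, n + 1):        # march from time level 2 up to t = 1
    w_new = np.zeros(n + 1)      # boundary values stay 0
    w_new[1:-1] = w[:-2] + w[2:] - w_old[1:-1]   # w_{k+1} = A w_k - w_{k-1}
    w_old, w = w, w_new

exact = np.sin(np.pi * x) * np.cos(np.pi * 1.0)
print(np.max(np.abs(w - exact)))  # near machine precision for sigma = 1
```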
In 1D, a mesh is a decomposition of the interval [a, b] into subintervals. The nodes need
not be equidistant. For example, the interval [2, 6] can be decomposed using the nodes

x0 = a = 2 ,   x1 = 2.5 ,   x2 = 3.5 ,   x3 = 5 ,   x4 = b = 6 .

For an equidistant mesh with n subintervals, the grid size and the nodes read

h = (b − a) / n ,   xk = a + kh ,   k = 0, 1, 2, . . . , n .

With n = 40, for example, the interval [2, 6] is decomposed by the nodes

x0 = 2 ,   x1 = 2.1 ,   x2 = 2.2 ,   . . . ,   x40 = 6 .
For 2D problems meshes can be triangulations. The common way of representing a 2D
triangulation is to use two vectors x and y for the mesh nodes and a matrix T for the
triangles. In T only the indices of the nodes are stored. Thus, the elements of T are
natural numbers. The order of the triangle vertices can be either ascending or according
to the orientation. The second variant starts with the vertex with the smallest index and
then continues counter-clockwise.
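This storage scheme can be sketched in a few lines of Python (the text's examples use MATLAB-style arrays; the unit-square mesh below is an assumed toy example). The signed area of each triangle tests the counterclockwise orientation:

```python
import numpy as np

# unit square split into two triangles; T stores 1-based node indices,
# each row starting with the smallest index and continuing counterclockwise
x = np.array([0.0, 1.0, 1.0, 0.0])
y = np.array([0.0, 0.0, 1.0, 1.0])
T = np.array([[1, 2, 3],
              [1, 3, 4]])

def signed_area(tri):
    # positive signed area <=> counterclockwise vertex order
    i, j, k = tri - 1            # convert to 0-based indices
    return 0.5 * ((x[j] - x[i]) * (y[k] - y[i]) - (x[k] - x[i]) * (y[j] - y[i]))

areas = [signed_area(t) for t in T]
print(areas)  # [0.5, 0.5] — both triangles are stored counterclockwise
```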
For example, a polygonal domain with 5 nodes n1, . . . , n5 can be triangulated by 4
triangles T1, . . . , T4; adding a further node n6 extends the triangulation by a fifth
triangle T5. In each row of T the node indices are arranged starting with the smallest
index and then continuing counterclockwise.
Example 3.26 (Triangulation of a house shaped domain)
We consider a house shaped domain with 1 inner and 6 boundary nodes n1, . . . , n7. The
triangulation consists of 6 triangles T1, . . . , T6. In each row of T the node indices are
arranged starting with the smallest index and then continuing counterclockwise.
The process of decomposing a 2D domain is quite different from that for a 1D interval.
Finding good triangulations is an art in some sense. The Delaunay triangulation is such
that the smallest angle in any triangle is as large as possible. This ensures numerical
stability during the calculation of the finite element solution.
Example 3.27 (Delaunay triangulation of a quarter circle)
We consider a quarter circle shaped domain with 1 inner and 7 boundary nodes n1, . . . , n8.
Hence, 8 x-coordinates and 8 y-coordinates have to be stored. The Delaunay triangulation
consists of 7 triangles T1, . . . , T7. In each row of T the node indices are arranged starting
with the smallest index and then continuing counterclockwise. Finite elements are well
suited for curved shaped domains. Hence, for a quarter circle domain, the finite element
method is advantageous in comparison with the finite difference method.
Example 3.28 (Poisson equation with differential operator)
a) The 1D Poisson equation −u'' = f(x) can be reformulated with a differential operator:

Lu = f ,   L = −d²/dx² .

b) In 2D, the Poisson equation uxx + uyy = f(x, y) can be written with an operator:

Lu = −f ,   L = −Δ = −∇² = −( ∂²/∂x² + ∂²/∂y² ) .
An inner product ⟨u, v⟩ is
symmetric: ⟨u, v⟩ = ⟨v, u⟩ ,
positive definite: ⟨u, u⟩ > 0 for u ≠ 0 ,
linear: ⟨α1 u1 + α2 u2, v⟩ = α1 ⟨u1, v⟩ + α2 ⟨u2, v⟩ .
Example 3.29 (Integral expressions as inner products)
a) The integral of the product of two functions u and v on a 1D interval [a, b]

⟨u, v⟩ = ∫_a^b u(x) v(x) dx

defines an inner product. Symmetry, linearity, and positive definiteness result directly from
the corresponding properties of the integral.
b) Likewise, the integral of the product of two functions u and v on a 2D domain D

⟨u, v⟩ = ∫∫_D u(x, y) v(x, y) dx dy

is an inner product.
c) For functions that vanish at the interval bounds, the integral of the product of the derivatives

⟨u, v⟩ = ∫_a^b u'(x) v'(x) dx

is an inner product.
d) In the same way, using the gradients of two functions u and v on a 2D domain D

⟨u, v⟩ = ∫∫_D ∇u(x, y) · ∇v(x, y) dx dy

defines an inner product. Note that the function within the integral is real-valued.
For the negative Laplace operator Lu = −Δu the inner product with a function v is defined
as

⟨Lu, v⟩ = ⟨−Δu, v⟩ = −∫∫_D Δu v dx dy .
A differential operator L is called
symmetric if ⟨Lu, v⟩ = ⟨u, Lv⟩ ,
positive definite if ⟨Lu, u⟩ > 0 for u ≠ 0 .
Example 3.30 (Properties of the negative Laplace operator)
a) We consider the negative 1D Laplace operator Lu = −u'' on [a, b] together with the integral
inner product for functions that are zero on the interval bounds a and b. First, we show the
symmetry of the Laplace operator: Using integration by parts we get

⟨Lu, v⟩ = −∫_a^b u''(x) v(x) dx = [−u'(x) v(x)]_a^b + ∫_a^b u'(x) v'(x) dx .

The first expression vanishes for v(a) = v(b) = 0. In the same way,

⟨u, Lv⟩ = −∫_a^b u(x) v''(x) dx = [−u(x) v'(x)]_a^b + ∫_a^b u'(x) v'(x) dx .

Here again, the first expression vanishes. Second, we prove positive definiteness:

⟨Lu, u⟩ = ∫_a^b (u'(x))² dx > 0

for any function u ≠ 0 with u(a) = u(b) = 0. From this, we can understand that for positive
definiteness it is necessary to use the negative of the original Laplace operator.
b) In 2D, the negative Laplace operator is also symmetric and positive definite for functions
that are zero on the boundary ∂D. Starting with

⟨Lu, v⟩ = −∫∫_D Δu(x, y) v(x, y) dx dy = −∫_∂D (∂u/∂n)(s) v(s) ds + ∫∫_D ∇u(x, y) · ∇v(x, y) dx dy .

Here, ∂u/∂n denotes the derivative with respect to the normal direction n on the boundary of
the domain. With v = 0 on ∂D again the first expression vanishes. Likewise, starting with

⟨u, Lv⟩ = −∫∫_D u(x, y) Δv(x, y) dx dy = −∫_∂D u(s) (∂v/∂n)(s) ds + ∫∫_D ∇u(x, y) · ∇v(x, y) dx dy .

With u = 0 on ∂D we finally get symmetry. Positive definiteness can be seen starting with

⟨Lu, u⟩ = −∫∫_D Δu(x, y) u(x, y) dx dy = −∫_∂D (∂u/∂n)(s) u(s) ds + ∫∫_D (∇u(x, y))² dx dy > 0 .
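The 1D identities of part a) can be checked numerically. The sketch below uses the assumed test functions u(x) = x(1 − x) and v(x) = sin(πx), which both vanish at the boundary, codes their derivatives by hand, and compares the three expressions ⟨Lu, v⟩, ⟨u, Lv⟩, and ∫ u'v' dx with the trapezoidal rule:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 20001)
dx = x[1] - x[0]

def trap(vals):
    # composite trapezoidal rule on the uniform grid x
    return (0.5 * vals[0] + vals[1:-1].sum() + 0.5 * vals[-1]) * dx

u, du, ddu = x * (1 - x), 1 - 2 * x, -2 * np.ones_like(x)
v = np.sin(np.pi * x)
dv = np.pi * np.cos(np.pi * x)
ddv = -np.pi**2 * np.sin(np.pi * x)

Lu_v = trap(-ddu * v)    # <Lu, v> with Lu = -u''
u_Lv = trap(-ddv * u)    # <u, Lv> with Lv = -v''
grad = trap(du * dv)     # integral of u'v'

print(Lu_v, u_Lv, grad)  # all three agree (exact value 4/pi ≈ 1.2732)
```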
One of the key ideas of the finite element approach is to rewrite the original partial
differential equation as a variational or an optimization problem. The strong formulation
then changes into a weak formulation.
Theorem 3.10 (Differential equation and variational problems)
For a symmetric, positive definite, and linear differential operator L the following problems are equivalent:
(A) The function u solves the partial differential equation

Lu = f .

(B) The function u satisfies the principle of virtual work

⟨Lu, v⟩ = ⟨f, v⟩ for all functions v .

(C) The function u minimizes the potential energy functional

F(v) = (1/2) ⟨Lv, v⟩ − ⟨f, v⟩ .
The theorem is fundamental. Hence, it is worth proving the equivalence of the three
formulations.
a) (A) ⇒ (B)
Take the inner product on both sides of the equation Lu = f with an arbitrary
function v.
b) (B) ⇒ (A)
The opposite direction follows immediately with the fundamental lemma of the calculus of variations: If ⟨Lu − f, v⟩ = 0 for any function v then Lu − f has to be the
zero function.
c) (B) ⇒ (C)
Let v be an arbitrary function. Then, with w = v − u we get

F(v) = F(u + w) = (1/2) ⟨L(u + w), u + w⟩ − ⟨f, u + w⟩ .

Applying the linearity of L and the inner product, the functional F(v) reads

F(v) = (1/2) ( ⟨Lu, u⟩ + ⟨Lu, w⟩ + ⟨Lw, u⟩ + ⟨Lw, w⟩ ) − ( ⟨f, u⟩ + ⟨f, w⟩ ) .

Since L is assumed to be symmetric, the second and third inner product are the
same:

F(v) = (1/2) ⟨Lu, u⟩ − ⟨f, u⟩ + ⟨Lu, w⟩ − ⟨f, w⟩ + (1/2) ⟨Lw, w⟩ .

The first two terms equal F(u), and the difference ⟨Lu, w⟩ − ⟨f, w⟩ vanishes by (B).
The last expression is positive, since L is assumed to be positive definite. The
inequality F(v) ≥ F(u) shows the minimum of F for the function u.
d) (C) ⇒ (B)
Again, let v be an arbitrary function. Then, we consider the one-dimensional function
φ(λ) = F(u + λv) which can be expanded to

φ(λ) = (1/2) ( ⟨Lu, u⟩ + λ ⟨Lu, v⟩ + λ ⟨Lv, u⟩ + λ² ⟨Lv, v⟩ ) − ( ⟨f, u⟩ + λ ⟨f, v⟩ ) .

Using the symmetry of L we can again combine the second and third inner product.
Hence, the derivative with respect to λ is

φ'(λ) = ⟨Lu, v⟩ + λ ⟨Lv, v⟩ − ⟨f, v⟩ .

F has a minimum at λ = 0. This yields

0 = φ'(0) = ⟨Lu, v⟩ − ⟨f, v⟩ .
Hence, ⟨Lu, v⟩ = ⟨f, v⟩.
Using again the negative Laplace operator Lu = −Δu in combination with the right-hand
side −f, the main theorem specializes to the following:
(A) The original partial differential equation yields the Poisson equation:

Lu = −f   ⟺   −Δu = −f   ⟺   Δu = f .

(B) The original principle of virtual work can be written as the following equation:

⟨Lu, v⟩ = ⟨−f, v⟩   ⟺   ⟨−Δu, v⟩ = −⟨f, v⟩   ⟺   ⟨∇u, ∇v⟩ = −⟨f, v⟩ .

(C) The original potential energy functional specializes to

F(v) = (1/2) ⟨Lv, v⟩ − ⟨−f, v⟩ = (1/2) ⟨∇v, ∇v⟩ + ⟨f, v⟩ .
Hence, for the Laplace or Poisson equation the principle of virtual work and the potential
energy functional can be formulated with gradients.
In 1D, the hat function φj is the piecewise linear function with

φj(x) = 1 if x = xj ,   φj(x) = 0 if x = xi, i ≠ j ,   and linear in between.

Similar to 1D, in 2D hat functions also have a very small support. They look like a
carnival hat or a pyramid:

φj(x, y) = 1 if (x, y) = (xj, yj) ,   φj(x, y) = 0 at all other nodes ,
and linear on each triangle.

In the Galerkin approach, the ansatz

w = Σ_{j=1}^{n} cj φj

is used for the unknown function u. Likewise, the arbitrary function v is one after the
other replaced by the hat functions φ1, . . . , φn. In this way, n equations are obtained.
Depending on the dimension, the ansatz reads

w(x) = Σ_{j=1}^{n} cj φj(x)   or   w(x, y) = Σ_{j=1}^{n} cj φj(x, y) .

In the Rayleigh-Ritz approach, the same ansatz

w = Σ_{j=1}^{n} cj φj

is used for the function v in the potential energy functional

F(v) = (1/2) ⟨Lv, v⟩ − ⟨f, v⟩ .

As a necessary condition the gradient of the functional is set to zero. Like in the Galerkin
approach, here also n equations are obtained.
For brevity, we omit the functions' arguments. Inserting the ansatz

w = Σ_{j=1}^{n} c_j φ_j

into the potential energy functional turns F into a function of the coefficients:

F(c_1, …, c_n) = ½ ⟨ Σ_{j=1}^{n} c_j ∇φ_j , Σ_{j=1}^{n} c_j ∇φ_j ⟩ − ⟨ f , Σ_{j=1}^{n} c_j φ_j ⟩
               = ½ Σ_{j=1}^{n} Σ_{k=1}^{n} c_j c_k ⟨∇φ_j, ∇φ_k⟩ − Σ_{j=1}^{n} c_j ⟨f, φ_j⟩ .

The partial derivative with respect to c_i is

∂F/∂c_i = ½ ( 2 Σ_{j≠i} c_j ⟨∇φ_i, ∇φ_j⟩ + 2 c_i ⟨∇φ_i, ∇φ_i⟩ ) − ⟨f, φ_i⟩ .

The first two expressions can be collected in one sum, and then the partial derivative is set to zero:

∂F/∂c_i = Σ_{j=1}^{n} c_j ⟨∇φ_i, ∇φ_j⟩ − ⟨f, φ_i⟩ = 0 ,   i = 1, …, n .

This defines the entries of the stiffness matrix and the load vector:

a_{i,j} = ⟨∇φ_i, ∇φ_j⟩ ,   b_i = ⟨f, φ_i⟩ ,   i, j = 1, …, n .
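In 1D the inner products ⟨∇φ_i, ∇φ_j⟩ reduce to integrals of the constant slopes ±1/h, so the stiffness matrix can be written down in closed form. A short sketch, assuming hat functions on a uniform grid over [0, 1] (interval and step size are assumptions for the illustration): the integrals give 2/h on the diagonal and −1/h for neighboring indices.

```python
def stiffness_1d(n):
    """a_ij = <phi_i', phi_j'> for hat functions on the n interior nodes
    of a uniform grid on [0, 1] with step h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 2.0 / h           # phi_i' is +1/h and -1/h on two cells
        if i + 1 < n:
            A[i][i + 1] = -1.0 / h  # overlapping slopes 1/h and -1/h
            A[i + 1][i] = -1.0 / h
    return A

A = stiffness_1d(3)  # h = 0.25
```

The tridiagonal structure mirrors the small support of the hat functions: a_{i,j} = 0 whenever |i − j| > 1.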
The functions φ_1, …, φ_n are the basis functions, and f is the right-hand side of the Poisson equation on the domain D.
Using the stiffness matrix and the load vector, the Rayleigh-Ritz method for the Poisson equation can finally be written as a linear system Ac = b.
Theorem 3.12 (Linear system for Poisson equation)
The values of the Rayleigh-Ritz coefficients c_1, …, c_n can be obtained by solving the linear system Ac = b, where A is the stiffness matrix and b is the load vector.

The next step addresses the question how the stiffness matrix can be computed. Calculating the elements a_{i,j} one after the other in two nested loops is straightforward but not efficient, because the same integrals are computed several times.
Assembly
The stiffness matrix A and the load vector b can be assembled

nodewise: Each value a_{i,j} and b_i is computed separately. This approach is time-consuming since the same parts of the integrals are computed several times.

elementwise: The integrals are computed locally on each mesh element T_k. This approach generates element stiffness matrices A_k and element load vectors b_k.
The element-wise assembly calculates the integrals separately on each triangle element. Let T_k be one triangle with nodes n_{k1}, n_{k2}, n_{k3}. The integrand ∇φ_i · ∇φ_j vanishes on T_k unless i, j ∈ {k1, k2, k3}. This means that for each triangle, 9 integrals contributing to the stiffness matrix have to be computed. These 9 values define the element stiffness matrix

(A_k)_{i,j} = ∫_{T_k} ∇φ_{k_i} · ∇φ_{k_j} dx dy ,   i, j = 1, …, 3 .

The functions φ_{k1}, φ_{k2}, and φ_{k3} are the basis functions corresponding to the nodes k1, k2, and k3 of triangle T_k.

In the same way, only if i ∈ {k1, k2, k3} do we obtain a non-zero entry in the load vector. Thus, 3 integrals contribute to the element load vector

(b_k)_i = ∫_{T_k} f φ_{k_i} dx dy ,   i = 1, …, 3 .

The function f is the right-hand side of the Poisson equation.
Now we address the computation of the element stiffness matrix. We calculate all 9 entries at once. Let

n_{k1} = (x_{k1}, y_{k1}) ,   n_{k2} = (x_{k2}, y_{k2}) ,   n_{k3} = (x_{k3}, y_{k3})

be the coordinates of the three nodes of triangle T_k. On one hand, the numerical solution w on T_k is a linear combination of the three relevant hat functions:

w(x, y) = c_{k1} φ_{k1}(x, y) + c_{k2} φ_{k2}(x, y) + c_{k3} φ_{k3}(x, y) .

The transposed gradient of w can be written as a matrix-vector product:

∇ᵀw(x, y) = 1/d_k ( y_{k2}−y_{k3}   y_{k3}−y_{k1}   y_{k1}−y_{k2}
                    x_{k3}−x_{k2}   x_{k1}−x_{k3}   x_{k2}−x_{k1} ) (c_{k1}, c_{k2}, c_{k3})ᵀ
          = B_k (c_{k1}, c_{k2}, c_{k3})ᵀ .

Now we can compare the two representations of ∇ᵀw. Since c_{k1}, c_{k2}, and c_{k3} are arbitrary numbers, we have equality of the two matrices:

( ∇ᵀφ_{k1}(x, y), ∇ᵀφ_{k2}(x, y), ∇ᵀφ_{k3}(x, y) ) = B_k .

The gradients of the hat functions are row vectors with constant entries. This implies

A_k = ∫_{T_k} ( ∇φ_{k1} ; ∇φ_{k2} ; ∇φ_{k3} ) ( ∇ᵀφ_{k1}, ∇ᵀφ_{k2}, ∇ᵀφ_{k3} ) dx dy = ½ d_k B_kᵀ B_k

using the area ½ d_k of triangle T_k.
Element stiffness matrix for 2D Poisson equation with hat functions
The element stiffness matrix A_k ∈ R^{3×3} on a triangle T_k with vertices
n_{k1} = (x_{k1}, y_{k1}), n_{k2} = (x_{k2}, y_{k2}), n_{k3} = (x_{k3}, y_{k3}) is

A_k = ½ d_k B_kᵀ B_k ,   d_k = | 1  x_{k1}  y_{k1} |
                               | 1  x_{k2}  y_{k2} | ,
                               | 1  x_{k3}  y_{k3} |

B_k = 1/d_k ( y_{k2}−y_{k3}   y_{k3}−y_{k1}   y_{k1}−y_{k2}
              x_{k3}−x_{k2}   x_{k1}−x_{k3}   x_{k2}−x_{k1} ) .
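The boxed formulas can be implemented in a few lines. A sketch in plain Python; the test triangle is the first triangle of example 3.31, whose vertex coordinates (center node (1,1), corners (2,2) and (0,2)) are an assumption taken from that example:

```python
def element_stiffness(p1, p2, p3):
    """A_k = 1/2 * d_k * B_k^T B_k for a triangle with vertices p1, p2, p3."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # d_k = det[[1, x1, y1], [1, x2, y2], [1, x3, y3]]; positive for
    # counterclockwise vertex order (d_k is twice the triangle area).
    d = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    B = [[(y2 - y3) / d, (y3 - y1) / d, (y1 - y2) / d],
         [(x3 - x2) / d, (x1 - x3) / d, (x2 - x1) / d]]
    return [[0.5 * d * (B[0][i] * B[0][j] + B[1][i] * B[1][j])
             for j in range(3)] for i in range(3)]

A1 = element_stiffness((1.0, 1.0), (2.0, 2.0), (0.0, 2.0))
```

The rows of A1 sum to zero, reflecting the special structure of element stiffness matrices noted in the examples.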
Example 3.31 (Element stiffness matrices for a square domain)
We again consider example 3.24. The determinants for T1 and T2 are

d_1 = | 1  1  1 |          d_2 = | 1  1  1 |
      | 1  2  2 |  = 2 ,         | 1  0  2 |  = 2 ,
      | 1  0  2 |                | 1  0  0 |

and in the same way we obtain d_3 = 2 and d_4 = 2. Since all triangle nodes are in counterclockwise order, all values d_k are positive. The matrix consisting of the gradients of triangle T1 reads

B_1 = ½ ( 2−2  2−1  1−2 )  =  ½ (  0   1  −1 ) .
        ( 0−2  1−0  2−1 )       ( −2   1   1 )

Likewise,

B_2 = ½ (  2  −1  −1 ) ,   B_3 = ½ (  0  −1   1 ) ,   B_4 = ½ ( −2   1   1 ) .
        (  0   1  −1 )            (  2  −1  −1 )            (  0  −1   1 )

When calculating B_4 we have to be careful about the non-ascending order of the node indices: 1, 5, 2. This means k1 = 1, k2 = 5, and k3 = 2. Using the formula A_k = ½ d_k B_kᵀ B_k, in this example we obtain 4 equal element stiffness matrices:

A_1 = A_2 = A_3 = A_4 = ½ (  2  −1  −1 )
                          ( −1   1   0 ) .
                          ( −1   0   1 )

Note the special structure of the element stiffness matrices. The diagonal elements are positive. The sum in each row and column is zero.
Example 3.32 (Element stiffness matrices)
We again consider example 3.25. The determinants for T1 and T2 are d_1 = 2.5 and d_2 = 2, and in the same way we obtain d_3 = 2.5, d_4 = 3, and d_5 = 3. Since all triangle nodes are ordered counterclockwise, all values d_k are positive. The matrices consisting of the gradients read

B_1 = ( −1   0.4    0.6  ) ,   B_2 = (  1  −0.5  −0.5 ) ,   B_3 = (  0  −0.4   0.4 ) ,
      (  0   0.4   −0.4  )           (  0   0.5  −0.5 )           ( −1   0.6   0.4 )

B_4 = ( −0.667   0.333   0.333 ) ,   B_5 = ( −0.5   0.5      0     ) .
      (  0      −0.5     0.5   )           (  0    0.667  −0.667 )

Using A_k = ½ d_k B_kᵀ B_k, the element stiffness matrices are

A_1 = (  1.25   −0.5    −0.75 )      A_2 = (  1     −0.5   −0.5 )
      ( −0.5     0.4     0.1  ) ,          ( −0.5    0.5    0   ) ,
      ( −0.75    0.1     0.65 )            ( −0.5    0      0.5 )

A_3 = (  1.25   −0.75   −0.5  )      A_4 = (  0.667  −0.333  −0.333 )
      ( −0.75    0.65    0.1  ) ,          ( −0.333   0.542  −0.208 ) ,
      ( −0.5     0.1     0.4  )            ( −0.333  −0.208   0.542 )

A_5 = (  0.375  −0.375   0     )
      ( −0.375   1.042  −0.667 ) .
      (  0      −0.667   0.667 )

In the matrices B_k and A_k, some numbers are rounded to 3 digits after the decimal point.
Example 3.33 (Element stiffness matrices for a house shaped domain)
We again consider example 3.26. The determinants for the 6 triangles all have absolute value 1, i.e. d_k = ±1. Since the orientation of the triangle nodes is not the same, we obtain positive and negative values. The absolute value is constant because the triangulation is regular and all triangles have the same area. The matrices B_k read

B_1 = B_4 = B_6 = (  1   0  −1 ) ,   B_2 = ( −1   1   0 ) ,   B_3 = B_5 = (  1  −1   0 ) .
                  (  0   1  −1 )           ( −1   0   1 )                 (  0   1  −1 )

The equality of some of the matrices B_k is reflected in the element stiffness matrices. The first two different matrices are

A_1 = A_4 = A_6 = ½ (  1   0  −1 )       A_2 = ½ (  2  −1  −1 )
                    (  0   1  −1 ) ,             ( −1   1   0 ) ,
                    ( −1  −1   2 )               ( −1   0   1 )

and the third one is

A_3 = A_5 = ½ (  1  −1   0 )
              ( −1   2  −1 ) .
              (  0  −1   1 )
Example 3.34 (Element stiffness matrices for a quarter circle domain)
We again consider example 3.27. The determinants for T1 and T2 are d_1 = 1 and d_2 = √3 − 1, and in the same way we obtain

d_3 = 2(2 − √3) ,   d_4 = 1 ,   d_5 = 1 ,   d_6 = √3 − 1 ,   d_7 = 1 .

Since all triangle nodes are in counterclockwise order, all values d_k are positive. The matrix consisting of the gradients of triangle T1 reads

B_1 = (  1   1−√3   √3−2 ) .
      (  0   1      −1   )

Likewise, the matrices consisting of the gradients of triangles T2 and T3 are

B_2 = 1/(√3−1) ( −1      1   0    ) ,   B_3 = 1/(2(2−√3)) ( 1−√3   √3−1   0    ) .
               (  √3−1   0   1−√3 )                       ( 1−√3   0      √3−1 )

The next two matrices read

B_4 = B_5 = ( −1   1   0 ) ,
            ( −1   0   1 )

and the last two matrices are similar to the first two matrices:

B_6 = 1/(√3−1) (  0        1      −1 ) ,   B_7 = (  1      −1      0 ) .
               ( −(√3−1)   √3−1    0 )           (  2−√3   √3−1   −1 )

With A_k = ½ d_k B_kᵀ B_k we obtain (rounded to 3 digits)

A_1 = (  0.5    −0.366  −0.134 )      A_2 = (  1.049  −0.683  −0.366 )
      ( −0.366   0.768  −0.402 ) ,          ( −0.683   0.683   0     ) ,
      ( −0.134  −0.402   0.536 )            ( −0.366   0       0.366 )

A_3 = A_4 = A_5 = (  1    −0.5  −0.5 )      A_6 = (  0.366  −0.366   0     )
                  ( −0.5   0.5   0   ) ,          ( −0.366   1.049  −0.683 ) ,
                  ( −0.5   0     0.5 )            (  0      −0.683   0.683 )

A_7 = (  0.536  −0.402  −0.134 )
      ( −0.402   0.768  −0.366 ) .
      ( −0.134  −0.366   0.5   )
In the same way the element load vector can be calculated. Since the function f of the Poisson equation is in general nonlinear, the resulting integrals have to be computed using numerical integration methods. We do not discuss this issue here.

Elementwise assembly for triangulations
If the mesh is a triangulation with m triangles, the elementwise assembly can be performed as follows:
(1) Initialize the stiffness matrix A = 0 and the load vector b = 0.
(2) For k = 1, …, m: Add the element matrix A_k ∈ R^{3×3} and the element vector b_k ∈ R³ of triangle T_k with nodes k1, k2, and k3 to A and b.
Example 3.35 (Assembly for a square domain)
We consider the mesh from example 3.24 with the element stiffness matrices from example 3.31. The triangles have the node indices (k1, k2, k3): T1 = (1, 2, 3), T2 = (1, 3, 4), T3 = (1, 4, 5), T4 = (1, 5, 2). The assembly starts with A = 0. Adding A_1 at the indices (1, 2, 3) yields

A = ½ (  2  −1  −1   0   0 )
      ( −1   1   0   0   0 )
      ( −1   0   1   0   0 ) .
      (  0   0   0   0   0 )
      (  0   0   0   0   0 )

After adding A_2 at the indices (1, 3, 4) we obtain

A = ½ (  4  −1  −2  −1   0 )
      ( −1   1   0   0   0 )
      ( −2   0   2   0   0 ) .
      ( −1   0   0   1   0 )
      (  0   0   0   0   0 )

Next, matrix A_3 is added. Due to the non-ascending node order we have to add A_4 carefully: row and column i of A_4 belong to the global node k_i, i.e. to the nodes 1, 5, 2. The two steps give

A = ½ (  6  −1  −2  −2  −1 )          A = ½ (  8  −2  −2  −2  −2 )
      ( −1   1   0   0   0 )                ( −2   2   0   0   0 )
      ( −2   0   2   0   0 )   and          ( −2   0   2   0   0 ) ,
      ( −2   0   0   2   0 )                ( −2   0   0   2   0 )
      ( −1   0   0   0   1 )                ( −2   0   0   0   2 )

so the assembled stiffness matrix is

A = (  4  −1  −1  −1  −1 )
    ( −1   1   0   0   0 )
    ( −1   0   1   0   0 ) .
    ( −1   0   0   1   0 )
    ( −1   0   0   0   1 )
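The assembly steps (1) and (2) can be sketched as a loop over the triangles. The node coordinates and the connectivity below are assumptions reconstructed from the square-domain example (center node 1 at (1,1), corner nodes 2 to 5, last triangle in the order 1, 5, 2):

```python
def element_stiffness(p1, p2, p3):
    # A_k = 1/2 * d_k * B_k^T B_k (see the element stiffness matrix box)
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    B = [[(y2 - y3) / d, (y3 - y1) / d, (y1 - y2) / d],
         [(x3 - x2) / d, (x1 - x3) / d, (x2 - x1) / d]]
    return [[0.5 * d * (B[0][i] * B[0][j] + B[1][i] * B[1][j])
             for j in range(3)] for i in range(3)]

# Assumed square mesh: inner node 1, corners 2..5, T4 in the order 1, 5, 2
coords = {1: (1.0, 1.0), 2: (2.0, 2.0), 3: (0.0, 2.0),
          4: (0.0, 0.0), 5: (2.0, 0.0)}
triangles = [(1, 2, 3), (1, 3, 4), (1, 4, 5), (1, 5, 2)]

A = [[0.0] * 5 for _ in range(5)]
for tri in triangles:
    Ak = element_stiffness(*(coords[v] for v in tri))
    for i, gi in enumerate(tri):        # local row i belongs to global node gi
        for j, gj in enumerate(tri):
            A[gi - 1][gj - 1] += Ak[i][j]
```

The local-to-global index mapping (gi, gj) is exactly the "careful adding" required for the non-ascending node order of the last triangle.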
Example 3.36 (Assembly)
We once again consider example 3.25 with the element stiffness matrices from example 3.32. The assembly starts with A = 0. After adding 5 matrices we obtain the stiffness matrix

A = (  4.167  −0.833  −1.25   −1.25   −0.833   0     )
    ( −0.833   1.317   0.1     0      −0.583   0     )
    ( −1.25    0.1     1.15    0       0       0     ) .
    ( −1.25    0       0       1.15    0.1     0     )
    ( −0.833  −0.583   0       0.1     1.983  −0.667 )
    (  0       0       0       0      −0.667   0.667 )
Example 3.37 (Assembly for a house shaped domain)
We once again consider example 3.26 with the element stiffness matrices from example 3.33. The assembly starts with A = 0. In the first two steps, matrices A_1 and A_2 are added to A; the matrices A_3 to A_6 then follow in the same way, where each entry (i, j) of A_k is added at the global position (k_i, k_j) of the nodes of triangle T_k.
Example 3.38 (Assembly for a quarter circle domain)
We once again consider example 3.27 with the element stiffness matrices from example 3.34. The assembly starts with A = 0. After adding 7 matrices we obtain the stiffness matrix

A = (  4.098   0      −0.866  −1.183   0      −1.183   0      −0.866 )
    (  0       1      −0.5     0       0       0       0      −0.5   )
    ( −0.866  −0.5     1.902  −0.134  −0.402   0       0       0     )
    ( −1.183   0      −0.134   1.683  −0.366   0       0       0     )
    (  0       0      −0.402  −0.366   0.768   0       0       0     ) .
    ( −1.183   0       0       0       0       1.683  −0.366  −0.134 )
    (  0       0       0       0       0      −0.366   0.768  −0.402 )
    ( −0.866  −0.5     0       0       0      −0.134  −0.402   1.902 )
For the Laplace equation, the load vector b up to now is just a zero vector. A nonzero numerical solution is obtained if boundary conditions are added. As in the finite difference approach, here we also impress the given boundary conditions on the linear system Ac = b.

Boundary nodes with Dirichlet condition using hat functions
To add Dirichlet boundary conditions the linear system Ac = b is modified. Any row k of A that refers to a boundary node with Dirichlet conditions is replaced by zeros and the value 1 on the diagonal. The corresponding element b_k of the right-hand side vector is replaced by a given Dirichlet value g_k:

a_{k,ℓ} = 0 for ℓ ≠ k ,   a_{k,k} = 1 ,   b_k = g_k .
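The modification can be sketched as follows; the 5×5 matrix and the boundary values below are assumptions chosen to match the square-mesh example (inner node 1, boundary nodes 2 to 5):

```python
def impress_dirichlet(A, b, bc):
    """Replace row k of A by the k-th unit row and b[k] by the Dirichlet
    value g_k; bc maps 0-based boundary node indices to values g_k."""
    n = len(A)
    for k, g in bc.items():
        A[k] = [1.0 if l == k else 0.0 for l in range(n)]
        b[k] = g

A = [[4.0, -1.0, -1.0, -1.0, -1.0],
     [-1.0, 1.0, 0.0, 0.0, 0.0],
     [-1.0, 0.0, 1.0, 0.0, 0.0],
     [-1.0, 0.0, 0.0, 1.0, 0.0],
     [-1.0, 0.0, 0.0, 0.0, 1.0]]
b = [0.0] * 5
impress_dirichlet(A, b, {1: 10.0, 2: 20.0, 3: 30.0, 4: 40.0})
# First equation: 4*c1 - (g2 + g3 + g4 + g5) = 0, so c1 is the average:
c1 = sum(b[1:]) / 4.0
print(c1)  # 25.0
```

Only the rows of boundary nodes are overwritten; the rows of inner nodes keep their assembled entries.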
Example 3.39 (Boundary conditions for a square domain)
We consider the result of example 3.35. Impressing Dirichlet boundary conditions g_2, …, g_5 on the boundary nodes n_2, …, n_5 yields the linear system

(  4  −1  −1  −1  −1 ) ( c_1 )   ( 0   )
(  0   1   0   0   0 ) ( c_2 )   ( g_2 )
(  0   0   1   0   0 ) ( c_3 ) = ( g_3 ) .
(  0   0   0   1   0 ) ( c_4 )   ( g_4 )
(  0   0   0   0   1 ) ( c_5 )   ( g_5 )

Hence, the value of the unknown inner node w_1 is just the average of the boundary nodes:

w_1 = ¼ (g_2 + g_3 + g_4 + g_5) .
Example 3.40 (Boundary conditions for a domain)
We consider the result of example 3.36. Impressing Dirichlet boundary conditions g_2, …, g_6 on the boundary nodes n_2, …, n_6 yields the linear system

(  4.167  −0.833  −1.25  −1.25  −0.833   0 ) ( c_1 )   ( 0   )
(  0       1       0      0      0       0 ) ( c_2 )   ( g_2 )
(  0       0       1      0      0       0 ) ( c_3 )   ( g_3 )
(  0       0       0      1      0       0 ) ( c_4 ) = ( g_4 ) .
(  0       0       0      0      1       0 ) ( c_5 )   ( g_5 )
(  0       0       0      0      0       1 ) ( c_6 )   ( g_6 )

Hence, the value of the unknown inner node w_1 is a weighted average of the 4 adjacent boundary nodes. Boundary value g_6 is not relevant for w_1.
Now we have discussed the complete procedure of deriving the overall linear system that results from the finite element approach using the Rayleigh-Ritz method with hat functions for the Laplace equation.

Ritz's method with hat functions for Laplace equation with Dirichlet boundary
The finite element solution using the Rayleigh-Ritz method with hat functions φ_k on a mesh with n nodes and m elements for the 2D Laplace equation with Dirichlet boundary can be computed as follows:
(1) Initialize the stiffness matrix A ∈ R^{n×n} and the load vector b ∈ R^n to 0.
(2) Compute the element stiffness matrices A_k and add them to A for k = 1, …, m.
(3) Impress Dirichlet boundary conditions on A and b.
(4) Solve the linear system Ac = b. The numerical solution is

w(x, y) = Σ_{j=1}^{n} c_j φ_j(x, y) .
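Steps (1) to (4) can be sketched end to end. The assembled 5×5 matrix below is taken from the square-mesh assembly example; the boundary values g_2, …, g_5 and the plain Gaussian solver are assumptions of this sketch:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (plain Python)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Steps (1)-(2): assembled stiffness matrix of the square mesh
A = [[4.0, -1.0, -1.0, -1.0, -1.0],
     [-1.0, 1.0, 0.0, 0.0, 0.0],
     [-1.0, 0.0, 1.0, 0.0, 0.0],
     [-1.0, 0.0, 0.0, 1.0, 0.0],
     [-1.0, 0.0, 0.0, 0.0, 1.0]]
b = [0.0] * 5
# Step (3): impress Dirichlet values g2..g5 on the boundary nodes
g = [1.0, 2.0, 3.0, 4.0]
for k in range(1, 5):
    A[k] = [1.0 if l == k else 0.0 for l in range(5)]
    b[k] = g[k - 1]
# Step (4): solve Ac = b; c1 is the average of the boundary values
c = solve(A, b)
print(c[0])  # 2.5
```

In practice one would use a sparse solver, since A has at most a few nonzeros per row.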
3.4 Applications
3.4.1 Heat Distribution
Example 3.41 (Steady-state heat distribution on a square metal plate)
Consider the 2D Laplace equation u_xx + u_yy = 0 on the given domain D with Dirichlet boundary conditions on the outer boundary

u(x, 10) = 100 ,   u(x, 0) = u(0, y) = u(10, y) = 0 ,

and Dirichlet boundary conditions on the inner boundary (hole). The function u(x, y) describes the steady-state temperature distribution on the metal plate. The solution of the elliptic problem can be computed with finite differences using the 5-point-star.
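The 5-point-star solution mentioned above can be sketched with a simple iterative solver. Grid size, iteration count, and the omission of the hole are assumptions of this sketch:

```python
# Gauss-Seidel sweeps for the 5-point-star on [0,10]x[0,10] with h = 1;
# top edge u(x,10) = 100, the other edges 0 (the hole is omitted here).
N = 11
u = [[0.0] * N for _ in range(N)]
u[N - 1] = [100.0] * N               # row index = y: u(x, 10) = 100

for _ in range(2000):                # iterate the 5-point-star to convergence
    for r in range(1, N - 1):
        for c in range(1, N - 1):
            u[r][c] = 0.25 * (u[r - 1][c] + u[r + 1][c]
                              + u[r][c - 1] + u[r][c + 1])

print(round(u[5][5], 6))  # 25.0 at the plate center, by symmetry
```

Each interior update enforces that a node value is the average of its four neighbors, which is exactly the discrete Laplace equation of the 5-point-star.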
Example 3.42 (Time-dependent heat distribution)
Consider the corresponding time-dependent heat problem with the Dirichlet boundary conditions

u(x, 7, t) = 100 ,   u(x, 3, t) = u(3, y, t) = u(7, y, t) = 0 ,

and the initial condition

u(x, y, 0) = 0 ,   (x, y) ∈ D ∖ ∂D .
Example 3.43 (Steady-state heat distribution on a round metal plate)
Consider the 2D Laplace equation u_xx + u_yy = 0 on the given domain D with Dirichlet boundary conditions

u = 2 on (x − 5)² + (y − 5)² = 2² ,   u = cos(2φ) on (x − 5)² + (y − 5)² = 5² ,

where φ denotes the polar angle. The function u(x, y) describes the steady-state temperature distribution on the metal plate. The solution of the elliptic problem can be computed with finite elements using Ritz's method with hat functions.
3.5 Exercises

Theoretical Exercises

Exercise 3.1
Show that the function

u(x, y) = e^{x+y} arccos(x − y)

solves the partial differential equation

u_x + u_y − 2u = 0 .

Exercise 3.2
Consider Laplace's differential equation

u_xx + u_yy = 0 .

a) For which constant c is the function u = x³ + c x y² a solution of the differential equation?
Computational Exercises

Exercise 3.5
Consider the 2D Laplace equation

u_xx + u_yy = 0 ,   0 ≤ x, y ≤ 1

with the boundary conditions

u(0, y) = 10 ,   u_y(x, 1) = 3 ,   u_x(1, y) = 0 .

a) Set up a grid over the domain using the step size h = 0.25. Introduce the nodes w_{i,j} ≈ u(x_i, y_j) for 0 ≤ i, j ≤ 4 (including boundary nodes).
b) Discretize the problem using the 5-point stencil for the PDE and the central difference formula for the Neumann boundaries.
c) Transform the 2D nodes w_{0,0} to w_{4,4} into a 1D node vector w using the reading order. Set up the matrix A ∈ R^{25×25} and the vector b ∈ R^{25} for the linear system Aw = b.

Exercise 3.6
Consider the 2D Laplace equation

u_xx + u_yy = 0 ,   0 ≤ x, y ≤ 3 .

a) Discretize the problem using the 5-point-star with the grid size h = 1.
b) Set up a linear system Aw = b with A ∈ R^{4×4} and b ∈ R⁴ for the 4 inner nodes.
c) Solve the linear system Aw = b.
d) Triangulate the squares in the grid by lines from the lower left to the upper right.
e) Solve the problem with the finite element approach. Use Ritz's method with 2D hat functions.
f) Compare the finite difference solution with the finite element solution.
Exercise 3.7
Consider the 2D Laplace equation

u_xx + u_yy = 0 ,   0 ≤ x, y ≤ 2 .

a) Set up the stiffness matrix and the load vector of the linear system Ac = b using Ritz's method with 2D hat functions.
b) Solve the linear system Ac = b.

Exercise 3.8
Consider the 2D Laplace equation

u_xx + u_yy = 0 .

a) Set up the stiffness matrix and the load vector of the linear system Ac = b using Ritz's method with 2D hat functions.
b) Solve the linear system Ac = b.
Application-based Exercises

Exercise 3.9
Consider the 2D Laplace equation Δu = u_xx + u_yy = 0 on the following domain with Dirichlet boundary conditions on the outer boundary.
a) Set up the finite difference method for the steady-state heat problem for the Laplace equation with Dirichlet boundary conditions using a common step size h = 1.
b) MATLAB: Solve the resulting linear system and give a surface plot of the solution.
c) MATLAB: Give a corresponding isotherm plot.
d) Repeat parts a) through c) using the step size h = 0.5.

Exercise 3.10
Consider the 2D Laplace equation Δu = u_xx + u_yy = 0 on the following domain D with the Dirichlet boundary condition

u(x, 8) = 100

on the upper boundary and given Dirichlet values on the remaining boundary (x, y) ∈ ∂D, y < 8.
a) Set up the finite difference method for the steady-state heat problem for the Laplace equation with Dirichlet boundary conditions using the common step sizes h = 1 and h = 0.5.
b) MATLAB: Solve the resulting linear system and give a surface plot of the solution.
Exercise 3.11
Consider the 2D Laplace equation Δu = u_xx + u_yy = 0 on the following domain D with Dirichlet boundary conditions on the outer boundary.
a) Set up the 2D Crank-Nicolson method for the time-dependent heat problem. Use h = 0.5 and ∆t = 0.1 for t ∈ [0, 5].
b) MATLAB: Solve the resulting linear system and give a surface plot animation of the solution.
c) Set up the explicit difference method for the above problem and compare the resulting linear system of equations with the Crank-Nicolson scheme.
d) MATLAB: Implement the explicit difference method and compute the solution for ∆t = 0.05 and ∆t = 1.5. Compare the solutions.

Exercise 3.12
Consider the 2D Laplace problem Δu = u_xx + u_yy = 0 with Dirichlet boundary conditions on the circles

(x − 5)² + (y − 5)² = 2²   and   (x − 5)² + (y − 5)² = 5² .

a) MATLAB: Generate approximately n equally spaced nodes within the domain. Choose a method that respects the symmetry of the domain, and therefore use nodes on circles with radii between the inner and the outer boundary circle.
b) MATLAB: Generate a triangulation of the domain. Introduce a temporary ghost node to delete triangles outside the domain.
c) MATLAB: Set up the stiffness matrix and the load vector for the given equation. Impress the boundary conditions in a final step.
d) MATLAB: Solve the problem and plot a graph of the solution.
e) MATLAB: Vary the number of nodes n.
A Appendix

A.1 Greek Letters

Capital Letter   Small Letter   Name
Α                α              Alpha
Β                β              Beta
Γ                γ              Gamma
Δ                δ              Delta
Ε                ε, ϵ           Epsilon
Ζ                ζ              Zeta
Η                η              Eta
Θ                θ              Theta
Ι                ι              Iota
Κ                κ              Kappa
Λ                λ              Lambda
Μ                μ              Mu
Ν                ν              Nu
Ξ                ξ              Xi
Ο                ο              Omicron
Π                π              Pi
Ρ                ρ, ϱ           Rho
Σ                σ              Sigma
Τ                τ              Tau
Υ                υ              Upsilon
Φ                φ              Phi
Χ                χ              Chi
Ψ                ψ              Psi
Ω                ω              Omega
Index

Symbols
A-stable 23
m-step method 20
4-point-star,
  explicit 46
  implicit 47
5-point-star 45
  hyperbolic 52
6-point-star 48
  explicit 49
  implicit 50
9-point-star 46
10-point-star 51

A
Adams-Bashforth methods 21
Adams-Bashforth-Moulton methods 22
Adams-Moulton methods 21
autonomous 9

C
central difference formula 44
classical Runge-Kutta method 16
consistent 12
constant boundary condition 42
convergent 12

D
Delaunay triangulation 54
differential operator 55
  positive definite 55
  symmetric 55
Dirichlet boundary condition 41
discretization 11, 43

G
Galerkin's method 57
general Runge-Kutta method 16
global truncation error 12

H
hat function 57
heat equation 41
Heun's method 15
hyperbolic 40
hyperbolic 5-point-star 52

I
implicit 4-point-star 47
implicit 6-point-star 50
implicit Euler's method 13
implicit form 8
improved Euler's method 14
increment function 14, 20
initial boundary condition 42
initial condition 8
initial value problem 8
inner product 55
  linear 55
  positive definite 55
  symmetric 55

L
Laplace equation 41
line element 9
linear 8, 40, 55
linear m-step method 20
linear multistep method 20
load vector 59
local truncation error 12

M
matrix,
  element stiffness 60
  stiffness 59
mesh 54
multistep method 20

N
natural boundary condition 41 f.
Neumann boundary condition 41
node 54
nullcline 11

O
one-step method 14
order 12
ordinary differential equation 8

P
parabolic 40
partial differential equation 40
partition 54
periodic boundary condition 42
periodic solution 10
phase portrait 10
phase-plane 10
phase-space 10
Poisson equation 41
positive definite 55
positive definite differential operator 55
predictor-corrector methods 22

R
Rayleigh-Ritz's method 58
reading order 43
region of stability 23
roundoff error 11
Runge-Kutta-Fehlberg method 19

S
shooting method 26
stable 11
step size control 18
stiff 25
stiffness matrix 59
symmetric 55
symmetric differential operator 55
system of ordinary differential equations 9

T
triangulation 54
  Delaunay 54
truncation error 11

U
unstable 11

V
vector,
  element load 60
  load 59
vector field 9

W
wave equation 42