
3MA010 Computational and Mathematical Physics

Lectures 1 to 6
2015/16
V2

Contents

1 Introduction to modeling techniques
  1.1 Models classification
  1.2 Examples of derivation of few classical PDEs
  1.3 Partial differential equations: nomenclature
  1.4 Analytical solution methods for PDEs
  1.5 Numerical solution methods for PDEs
  1.6 Classical PDE, generalities and basic solution methods
  1.7 Example of solution
  1.8 General and particular solution
  1.9 The wave equation
  1.10 The diffusion equation

2 First order equations
  2.1 First order equation
  2.2 Method of Characteristics
  2.3 Cauchy data
  2.4 Solutions with discontinuous derivatives

3 Finite difference approximation of PDEs
  3.1 Finite difference approximation of derivatives
  3.2 Finite difference discretization of advection-diffusion equation
  3.3 Consistency of a numerical scheme
  3.4 Semi-discretization and method of lines
  3.5 Stability
  3.6 Stability analysis: von Neumann method
  3.7 Well posed problem
  3.8 Convergence of the finite-difference equation
  3.9 The concept of modified equation

4 Finite difference methods
  4.1 Finite-difference approximation of elliptic equations
  4.2 Finite-difference approximation of hyperbolic equations
  4.3 Finite-difference approximation of parabolic equations

5 Second order equations

6 Solution methods for PDE


Important disclaimer
These notes are meant to be a reference for students following the course 3MA010 and cover only lectures 1 to 4. The notes are written in a concise way and are not meant to be a replacement for following the classes or for studying the material in the suggested books. The notes, as far as possible, reproduce material from the original sources (quoted where appropriate), so that students can easily go back to the original, and more complete, study material.

List of sources
NB: Lectures are based on material from a variety of books and web sources. These notes collect most of this material, from the different sources, together with some examples. Whenever applicable, a detailed reference to the original source of exercises, paragraphs or chapters is provided. Below is a short list of the relevant sources:

List of books
Essential Mathematical Methods for the Physical Sciences
by K. F. Riley and M. P. Hobson
University of Cambridge 2011
ISBN: 9780521761147
Several lectures follow closely Chapters 10 and 11 of the book.
Partial Differential Equations
by S.W. Rienstra, J.H.M. ten Thije Boonkkamp, R.M.M. Mattheij
Society for Industrial & Applied Mathematics, U.S. 1987
ISBN-10: 0898715946
ISBN-13: 978-0898715941
More advanced book for both analytical and numerical methods for solution of PDEs.
Finite Difference and Spectral Methods for Ordinary and Partial Differential Equations
Lloyd N. Trefethen
Available online: http://people.maths.ox.ac.uk/trefethen/pdetext.html
Finite Difference Schemes and Partial Differential Equations
John Strikwerda
Society for Industrial and Applied Mathematics 2007
ISBN-10: 089871639X
ISBN-13: 978-0898716399
Good reference on numerical analysis and numerical methods to solve PDEs.
Methods of Mathematical Physics
R. Courant, D. Hilbert
Wiley-VCH 1989
ISBN-10: 0471504475
Introduction to Partial Differential Equations with MATLAB
J.M. Cooper
Birkhauser Basel 1998
ISBN: 978-0-8176-3967-9


List of web-based material


Computational Fluid Mechanics - Lectures notes
G.Tryggvason
http://www3.nd.edu/~gtryggva/CFD-Course/2013-lectures.html
Applied Partial Differential Equations - Lectures notes
J. Norbury
https://www0.maths.ox.ac.uk/courses/course/26301/

Additional relevant material


Students may find the following books also relevant for the course:
Introduction to Partial Differential Equations
Series: Undergraduate Texts in Mathematics
Olver, Peter
2014, XXV, 635 p. 143 illus.
ISBN 978-3-319-02099-0
Numerical Methods for Conservation Laws
by Randall J. LeVeque
Lectures in Mathematics, ETH-Zurich
Birkhauser-Verlag, Basel, 1990
ISBN 3-7643-2464-3


Lecture 1

Introduction to modeling techniques


Contents
  1.1 Models classification
      1.1.1 Systematic models
      1.1.2 Constructing models
      1.1.3 Canonical models
  1.2 Examples of derivation of few classical PDEs
  1.3 Partial differential equations: nomenclature
  1.4 Analytical solution methods for PDEs
  1.5 Numerical solution methods for PDEs
  1.6 Classical PDE, generalities and basic solution methods
  1.7 Example of solution
  1.8 General and particular solution
      1.8.1 First order equation
      1.8.2 First order inhomogeneous equations
      1.8.3 Second order equation
  1.9 The wave equation
  1.10 The diffusion equation

1.1 Models classification

This section closely follows: RMM Mattheij et al., Section 7.2.

1.1.1 Systematic models

Systematic models are also referred to as asymptotic or reducing models. The starting point is an over-complete model that is considered appropriate to describe the problem, but that may include effects which are negligibly small or uninteresting, making the mathematical problem unnecessarily complex. By using knowledge about the system, one can simplify the complete model into a model that is easier to handle and to interpret and that retains the interesting effects.
Example 1.1: From a real to an ideal pendulum
A classical example of a systematic model is the motion of an ideal pendulum compared with the motion of a real one. A pendulum with a point mass suspended by a weightless cord of length L, see Figure 1.1, is described by the following non-linear equation of motion:

    d^2\theta/dt^2 = -(g/L)\,\sin\theta    (1.1)

with initial conditions \theta(0) = \theta_0 and \dot\theta(0) = 0, where g is the acceleration of gravity and \theta(t) is the angle that the cord makes with the vertical at time t.
Under the condition of small oscillations one can approximate \sin\theta \approx \theta, and the equation simplifies to the form:

    d^2\theta/dt^2 = -(g/L)\,\theta    (1.2)

which has the solution \theta(t) = \theta_0 \cos(\omega t), where \omega = \sqrt{g/L}.

Figure 1.1: Schematic of a pendulum.
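To make the reduction concrete, the short sketch below (an illustration added to these notes, not part of the original source; it assumes NumPy and SciPy are available and uses arbitrary parameter values) integrates the full non-linear equation (1.1) numerically and compares it with the small-angle solution (1.2).

```python
import numpy as np
from scipy.integrate import solve_ivp

g, L = 9.81, 1.0          # gravity [m/s^2], cord length [m]
theta0 = 0.2              # initial angle [rad], small enough for sin(theta) ~ theta

def pendulum(t, y):
    """Full non-linear pendulum (1.1): y = (theta, dtheta/dt)."""
    theta, omega = y
    return [omega, -(g / L) * np.sin(theta)]

t = np.linspace(0.0, 10.0, 500)
sol = solve_ivp(pendulum, (t[0], t[-1]), [theta0, 0.0], t_eval=t, rtol=1e-8)

# Small-angle (ideal pendulum) solution (1.2): theta(t) = theta0 * cos(omega0 * t)
omega0 = np.sqrt(g / L)
theta_lin = theta0 * np.cos(omega0 * t)

print("max |nonlinear - linear| =", np.abs(sol.y[0] - theta_lin).max())
```

For theta0 = 0.2 rad the two curves differ by less than one percent of the amplitude; increasing theta0 shows where the systematic reduction stops being appropriate.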

1.1.2 Constructing models

Also referred to as building or lumped-parameter models. The problem description is built step by step from the bottom up, from simpler to more complex, by adding effects and elements until the required description or accuracy is achieved. Examples of such models are given in Section 1.2.

1.1.3 Canonical models

Also known as characteristic or quintessential models. An existing model is reduced in order to describe only a certain aspect of the problem, a typical case being the Burgers equation, which is a reduced version of the full Navier-Stokes equations.
Example 1.2: Burgers equation
This section closely follows: RMM Mattheij et al., page 135, Example 7.5.
The Navier-Stokes equations for incompressible viscous flow are given by

    \partial v/\partial t + v \cdot \nabla v = -(1/\rho)\,\nabla p + \nu\,\nabla^2 v

along with the continuity equation

    \nabla \cdot v = 0.

These equations are very complex due to the coupling between the nonlinear and viscous terms.
Burgers proposed to consider the following very simplified version of the equations, where the pressure gradient has been neglected and only one spatial dimension is maintained:

    \partial u/\partial t + u\,\partial u/\partial x = \nu\,\partial^2 u/\partial x^2

Cole (1951) and Hopf (1950) independently found that, by making the substitution

    u = -2\nu\,\phi_x/\phi,

the Burgers equation is reduced to a linear equation related to the heat equation:

    \partial \phi/\partial t - \nu\,\partial^2 \phi/\partial x^2 = C(t)\,\phi

This latter kind of equation is well understood and allows many exact solutions.
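The Cole-Hopf reduction can be checked symbolically; the sketch below (an illustration added here, not from the original source, assuming SymPy is available) takes a positive solution phi of the heat equation, builds u = -2 nu phi_x / phi, and verifies that the Burgers residual vanishes.

```python
import sympy as sp

x, t, nu = sp.symbols('x t nu', positive=True)

# A positive solution of the heat equation phi_t = nu * phi_xx (a heat kernel).
phi = sp.exp(-x**2 / (4 * nu * t)) / sp.sqrt(t)
assert sp.simplify(sp.diff(phi, t) - nu * sp.diff(phi, x, 2)) == 0

# Cole-Hopf substitution: u = -2 nu phi_x / phi
u = -2 * nu * sp.diff(phi, x) / phi

# Burgers residual u_t + u u_x - nu u_xx should vanish identically.
residual = sp.diff(u, t) + u * sp.diff(u, x) - nu * sp.diff(u, x, 2)
print(sp.simplify(residual))   # -> 0
```

For this particular phi the transformation produces u = x/t, one of the simplest exact Burgers solutions; any other positive solution of the heat-type equation would work in the same way.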

1.2 Examples of derivation of few classical PDEs

Example 1.3: Derivation of the diffusion equation

This section closely follows: Riley & Hobson, page 390.
Here we derive the three-dimensional heat diffusion equation by applying Gauss' theorem.
Consider a volume V bounded by a surface S. At any point inside the solid, the rate of heat flow per unit area in any direction \hat{r} is proportional to the temperature gradient along that direction, q = -(k\nabla T)\cdot\hat{r}. The total flux of heat leaving the volume per unit time is therefore

    -dQ/dt = \oint_S (-k\nabla T)\cdot\hat{n}\, dS = -\int_V \nabla\cdot(k\nabla T)\, dV

where \hat{n} is the outward-pointing unit normal to S. The passage from an integral over a surface to an integral over a volume is based on Gauss' divergence theorem. The rate of change of Q can also be expressed as

    dQ/dt = \int_V c_p\,\rho\,(\partial T/\partial t)\, dV

where we used the Leibniz rule and the symbols c_p and \rho represent the specific heat and the density, respectively.
Equating the above results and assuming a material with constant properties, we obtain the three-dimensional heat diffusion equation

    \kappa\,\nabla^2 T = \partial T/\partial t    (1.3)

where \kappa = k/(c_p\rho) is the thermal diffusivity.

Example 1.4: Derivation of the wave equation

This section closely follows: RMM Mattheij et al., Example 1.3.
Here we consider a long chain of point masses m connected by springs. The mass of each spring can be neglected with respect to m, and the springs are characterized by an elastic (spring) constant k and an equilibrium length \Delta x, see Figure 1.2.
We label the individual masses as V_i and indicate their positions as u_1, u_2, \dots. Assuming linear springs, the force needed to increase the original spacing between the elements V_i and V_{i-1} by \delta_i = u_i - u_{i-1} - \Delta x is equal to F_i = k\,\delta_i. The equation of motion for the mass element V_i is therefore:

    m\, d^2 u_i/dt^2 = F_{i+1} - F_i = k\,(u_{i+1} - 2u_i + u_{i-1}),    i = 1, 2, \dots    (1.4)

In the limit of many small masses, and being interested in the macroscopic behaviour of the system, we can describe it as a continuum rather than as a discrete set of elements. We thus introduce the density \rho such that m = \rho\,\Delta x and a stiffness constant \kappa = k\,\Delta x. Taking the limit \Delta x \to 0 we obtain:

    \rho\,\partial^2 u/\partial t^2 = \kappa\,\partial^2 u/\partial x^2    (1.5)

Figure 1.2: Chain of coupled spring elements.

The typical solutions of this equation are waves propagating with a wave velocity \sqrt{\kappa/\rho}.
More precisely, this equation describes longitudinal waves, i.e. waves oscillating in the direction of propagation. This is similar to what one obtains for sound waves propagating in a compressible fluid like air; in that case the air stiffness corresponds to \kappa = \gamma p, where \gamma \approx 1.4 is the adiabatic index of air and p the atmospheric pressure.
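The continuum limit can also be observed directly by integrating the discrete system (1.4) for many masses; the sketch below (an added illustration, not part of the original notes; parameter values are arbitrary and SciPy is assumed) shows an initial localized displacement splitting into two pulses travelling at speed \sqrt{\kappa/\rho}.

```python
import numpy as np
from scipy.integrate import solve_ivp

N, dx = 400, 0.01            # number of masses and spacing
rho, kappa = 1.0, 4.0        # continuum density and stiffness: c = sqrt(kappa/rho) = 2
m, k = rho * dx, kappa / dx  # mass and spring constant of the discrete chain

x = np.arange(N) * dx
u0 = np.exp(-((x - x.mean()) / 0.1) ** 2)   # initial Gaussian displacement, zero velocity

def rhs(t, y):
    """Newton's law (1.4) for the chain, with fixed ends."""
    u, v = y[:N], y[N:]
    a = np.zeros(N)
    a[1:-1] = (k / m) * (u[2:] - 2 * u[1:-1] + u[:-2])
    return np.concatenate([v, a])

sol = solve_ivp(rhs, (0.0, 0.5), np.concatenate([u0, np.zeros(N)]), t_eval=[0.5])
u_final = sol.y[:N, -1]

mid = N // 2
print("pulse centres :", x[np.argmax(u_final[:mid])], x[mid + np.argmax(u_final[mid:])])
print("expected x0 -/+ c*t:", x.mean() - 2 * 0.5, x.mean() + 2 * 0.5)
```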


Example 1.5: Derivation of the wave equation (2)

This section closely follows: Riley & Hobson, page 388.
The wave equation has the following expression:

    \nabla^2 u = (1/c^2)\,\partial^2 u/\partial t^2    (1.6)

Here we derive the equation satisfied by small transverse displacements of a string, which turns out to be the one-dimensional wave equation.
Consider a uniform string of density \rho (mass per unit length) that is displaced from the horizontal position by a quantity u(x, t) while being held under a uniform tension T, see Figure 1.3.
The vertical force acting on a small portion \Delta s of the string, of mass \rho\,\Delta s, is

    F = T\sin\theta_2 - T\sin\theta_1    (1.7)

Assuming that both angles \theta_1 and \theta_2 are small, one can make the approximation \sin\theta \approx \tan\theta. Since at any position the slope is such that \tan\theta = \partial u/\partial x, one can write

    F = T\left[(\partial u/\partial x)_{x+\Delta x} - (\partial u/\partial x)_x\right] \approx T\,\partial^2 u(x,t)/\partial x^2\,\Delta x    (1.8)

This upward force must be equated, according to Newton's law, to the product of the vertical acceleration and the mass of the small string element:

    \rho\,\Delta x\,\partial^2 u(x,t)/\partial t^2 = T\,\partial^2 u(x,t)/\partial x^2\,\Delta x    (1.9)

and thus

    \partial^2 u(x,t)/\partial t^2 = c^2\,\partial^2 u(x,t)/\partial x^2    (1.10)

where c^2 = T/\rho is the squared speed of propagation of the waves.

Figure 1.3: Tensions acting on an elastic string.


Example 1.6: Derivation of a simple version of the voter model

This section closely follows: RMM Mattheij et al., Example 1.5.
We consider a population of individuals placed on a two-dimensional Cartesian domain, with their positions labelled as (i, j). For simplicity we can think of a square domain of size L x L, and thus with N = L/h individuals per direction, where h is the spacing between them. We assume that each person has an opinion indicated by a scalar number p_{i,j} and that each individual can only communicate with its first neighbours. Moreover, each person tries to minimize the conflict of opinion with its neighbours and thus immediately takes the opinion that is the average of the opinions heard from the neighbours, see Fig. 1.4:

    p_{i,j} = (1/4)\,(p_{i+1,j} + p_{i-1,j} + p_{i,j+1} + p_{i,j-1})    (1.11)

Here we assume that only the individuals on the domain boundaries are provided with enough information to have an unchangeable opinion, and thus their opinion remains fixed.

Assuming that the number of individuals is very large, we can take the limit N \to \infty, or equivalently h \to 0, so that p becomes a continuous function of space:

    p(x, y) = (1/4)\,[p(x+h, y) + p(x-h, y) + p(x, y+h) + p(x, y-h)]    (1.12)

which can be rewritten in the following form:

    [p(x+h, y) - 2p(x, y) + p(x-h, y)] + [p(x, y-h) - 2p(x, y) + p(x, y+h)] = 0    (1.13)

Dividing by h^2 and taking the limit:

    \partial^2 p/\partial x^2 + \partial^2 p/\partial y^2 = 0    (1.14)

This is the celebrated Laplace equation, which typically emerges to describe equilibrium phenomena where information is exchanged in all directions and discontinuities and gradients get smoothed out.
A very famous case where the Laplace equation emerges is the distribution of temperature in heat-conduction problems once a stationary equilibrium is reached.

Figure 1.4: An array of individuals, schematic for the voter model.
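The averaging rule (1.11) can also be iterated directly on a grid; the relaxation sketch below (an illustration added to these notes, assuming NumPy and arbitrary boundary opinions) fixes the opinions on the boundary and sweeps the update until it converges, which is just the Jacobi iteration for the discrete Laplace equation (1.14).

```python
import numpy as np

N = 50
p = np.zeros((N, N))
# Fixed (unchangeable) opinions on the boundary of the domain.
p[0, :], p[-1, :], p[:, 0], p[:, -1] = 1.0, 0.0, 0.5, 0.5

for it in range(20000):
    p_new = p.copy()
    # Each interior individual takes the average opinion of its four neighbours, eq. (1.11).
    p_new[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:] + p[1:-1, :-2])
    if np.abs(p_new - p).max() < 1e-6:
        break
    p = p_new

# At convergence p satisfies the discrete Laplace equation (1.14) in the interior.
lap = p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:] + p[1:-1, :-2] - 4 * p[1:-1, 1:-1]
print("iterations:", it, "  max |discrete Laplacian|:", np.abs(lap).max())
```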

1.3 Partial differential equations: nomenclature

The difference between partial differential equations (PDEs) and ordinary differential equations (ODEs) is that in a PDE more than one independent variable is present, while in an ODE only one independent variable (and derivatives with respect to it) enters the equation (e.g. time).
While ODEs are special cases of PDEs, their general behaviour is quite different. Here some basic concepts related to PDEs are presented.
Order: the order of a PDE corresponds to its highest derivative; if the highest derivative is of order n, then the PDE is said to be of order n.
Linear: the unknown function and its derivatives appear linearly in the equation.
Semi-linear: the coefficients of the highest-order derivatives are functions of the independent variables only.
Quasi-linear: linear in its highest derivatives; when the equation is a linear combination of derivatives but the coefficients of the highest derivative, say of order n, depend at most on derivatives of order (n-1), then the equation is called quasi-linear.
Non-linear: neither linear nor semi-linear nor quasi-linear.

    Linear ⊂ semi-linear ⊂ quasi-linear ⊂ fully non-linear

Homogeneous: the equation is furthermore called homogeneous if every term involves the unknown function or its partial derivatives, and inhomogeneous if it does not.
BVP: Boundary Value Problem; the boundary conditions are specified at the extremes of the independent variables.
IVP: Initial Value Problem; all conditions are specified at the same value of the independent variables (that value is at the lower boundary of the domain).
Dirichlet BC: the value of the function u is specified at each point of the boundary.
Neumann BC: the value of \partial u/\partial n, i.e. the normal derivative of u, is specified at each point of the boundary.
Cauchy BC: both u and \partial u/\partial n are specified at each point of the boundary.
Robin BC: a linear combination of the value of the function u and of its normal derivative \partial u/\partial n is specified at each point of the boundary.
Mixed BC: one type of boundary data is given on one part of the boundary, while another type is given on a different part of the boundary.
Periodic BC: boundary conditions used to approximate a large (infinite) system by a small part called a unit cell, e.g. u(L, t) = u(0, t) if L is the size of the domain.
Domain of influence: the region of points influenced by a given initial datum.
Domain of dependence: the region of points that influences the solution at a given position.
Evolutionary problems: time is also involved as one of the variables and the modeling is based on causality.
Stationary problems: steady-state problems; time is not involved.



Example 1.7: The case of first order equations

Fully non-linear: F(x, y, u, u_x, u_y) = 0
Quasi-linear: a(x, y, u)\,u_x + b(x, y, u)\,u_y = c(x, y, u)
Semi-linear: a(x, y)\,u_x + b(x, y)\,u_y = c(x, y, u)
Linear: a(x, y)\,u_x + b(x, y)\,u_y = c_0(x, y)\,u + c_1(x, y)
Homogeneous: when L[u] = f, with L a linear operator and f a function of the independent variables only; if f = 0 the equation is said to be homogeneous.

Example 1.8: The order of a PDE

Consider the equations

1.
    \partial u/\partial t + (1/2)\,\partial(u^2)/\partial x = 0
It is the celebrated inviscid Burgers equation in its conservation form, which is a 1st order quasi-linear equation since it can also be written in the advection form

    \partial u/\partial t + u\,\partial u/\partial x = 0

where clearly the coefficient b(x, y, u) = u.

2.
    |\nabla u| = c
It is the well-known eikonal equation, which is a non-linear 1st order equation. Its non-linearity can be clearly deduced if it is written in the equivalent form:

    (\partial u/\partial x)^2 + (\partial u/\partial y)^2 + (\partial u/\partial z)^2 = c^2

3.
    u_t = \Delta(u^3 - u - \nabla^2 u)
where, here, \Delta = \nabla^2 is the Laplacian operator. It is the Cahn-Hilliard equation, which is a 4th order non-linear equation (it is linear in the highest-derivative term, but not in the lower-order terms).


Example 1.9: BVP with different boundary conditions

This section closely follows: Introduction to Partial Differential Equations with MATLAB, J.M. Cooper, Section 8.2.
We illustrate several BVPs for the heat equation in two-dimensional space.
The boundary value problems are of the form:

    u_t = \nabla^2 u    in G x [0, \infty)

with initial condition:

    u(x, y, 0) = f(x, y)

where G is the problem domain and \partial G its boundary. Different boundary conditions (BC) can be applied, e.g.

1. Dirichlet BC:    u = 0 on \partial G x [0, \infty)    (1.15)
2. Neumann BC:    \partial u/\partial n = 0 on \partial G x [0, \infty)    (1.16)
3. Robin BC:    \partial u/\partial n + h\,u = 0 on \partial G x [0, \infty)    (1.17)

Example 1.10: IVP for the wave equation

This section closely follows: Introduction to Partial Differential Equations with MATLAB, J.M. Cooper, Section 5.3.
Consider the one-dimensional wave equation in an infinite domain (no boundaries):

    u_{tt} = c^2 u_{xx},    x \in \mathbb{R}, t > 0    (1.18)

We define the pure IVP by specifying the initial displacement f(x) and the initial velocity g(x) at the initial instant of time t = 0:

    u(x, 0) = f(x),    u_t(x, 0) = g(x)    for x \in \mathbb{R}

1.4 Analytical solution methods for PDEs

Separation of variables: method in which the final solution is found as a product of separate functions of the independent variables.
Method of characteristics: a method that in some special cases allows one to find characteristic curves on which the PDE reduces to a system of ODEs.
Integral transform: integral transformations can turn an equation into a simpler one, for example into a separable PDE. A classical example is the use of Fourier analysis, which diagonalizes the heat equation using the eigenfunctions represented by sinusoidal waves.

Change of variables: sometimes PDEs can be transformed into simpler ones by an appropriate change of variables (e.g. the Cole-Hopf transformation that transforms the Burgers equation into the heat equation).
Fundamental solution: non-homogeneous equations can be solved by first finding the fundamental solution (the solution for a point source, or Dirac delta) and then taking the convolution with the boundary condition to get the solution.
Superposition principle: particularly useful in the case of linear equations; it makes use of the fact that the sum of two solutions is still a solution.
Methods for non-linear equations: no general solution methods exist for non-linear equations. Numerical approximation plays an important role here. Many different methods can be helpful in specific cases.

1.5 Numerical solution methods for PDEs

Finite element method: will not be discussed.
http://www.scholarpedia.org/article/Finite_element_method
Finite difference method: the equation is discretized by approximating the derivatives by means of finite-difference formulas (see the short sketch after this list).
Spectral methods: will not be discussed.
Finite volume method: the equation is discretized on a finite mesh. Using the divergence theorem, surface integrals of terms involving divergences are converted to volume integrals. The scheme evaluates the fluxes through the surfaces of each volume and, since these fluxes are identical for neighbouring volumes, such schemes are conservative. Will not be discussed.
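As a minimal preview of what the later lectures develop (an illustration added here, not from the original notes; it only assumes NumPy), a centred difference approximates a first derivative with second-order accuracy, which can be checked by halving the grid spacing.

```python
import numpy as np

def centred_diff(f, x, h):
    """Centred finite-difference approximation of f'(x): (f(x+h) - f(x-h)) / (2h)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

x0 = 1.0
exact = np.cos(x0)                     # derivative of sin at x0
for h in (0.1, 0.05, 0.025):
    err = abs(centred_diff(np.sin, x0, h) - exact)
    print(f"h = {h:6.3f}   error = {err:.3e}")   # error drops by ~4 when h is halved: O(h^2)
```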

1.6 Classical PDE, generalities and basic solution methods

This section closely follows: Riley & Hobson, Chapter 10.


Our study of PDEs will mostly focus on first- and second-order equations. An important remark is that higher-order PDEs can be rewritten as systems of first-order PDEs.

1.7 Example of solution

This section closely follows: Courant & Hilbert, Chapter 1, pages 2-5.


Here we underline one main difference between ODEs and PDEs. The ODE

    df(x)/dx = 0    (1.19)

has as general solution

    f(x) = const    (1.20)

and thus infinitely many possible values. The PDE for, say, f(x, y),

    \partial f(x, y)/\partial x = 0    (1.21)

has as general solution

    f = f(y)    (1.22)

thus infinitely many possible functions.


Consider now the equation

    u_{xy} = 0    (1.23)

which has the general solution

    u = w(x) + v(y)    (1.24)

Similarly, the solution of the nonhomogeneous differential equation

    u_{xy} = f(x, y)    (1.25)

is obtained upon integration:

    u(x, y) = \int_{x_0}^{x}\int_{y_0}^{y} f(\xi, \eta)\, d\xi\, d\eta + w(x) + v(y)    (1.26)

with arbitrary functions w and v and fixed values x_0, y_0.


Another example is the following:

    u_x = u_y    (1.27)

Introducing the transformation

    x + y = \xi,    x - y = \eta    (1.28)

    u(x, y) = \phi(\xi, \eta)    (1.29)

the equation becomes

    2\,\partial\phi/\partial\eta = 0    (1.30)

which has as general solution

    \phi = w(\xi)    (1.31)

i.e.

    u = w(x + y)    (1.32)

Similarly, if \alpha and \beta are constants, the general solution of the differential equation

    \alpha\,u_x + \beta\,u_y = 0    (1.33)

is of the form

    u = w(\beta x - \alpha y)    (1.34)

According to elementary theorems of differential calculus, the PDE

    u_x g_y - u_y g_x = 0    (1.35)

where g(x, y) is any given function of x and y, states that the Jacobian \partial(u, g)/\partial(x, y) of u and g with respect to x and y vanishes. This means that u depends on g, i.e. that

    u = w[g(x, y)],

where w is an arbitrary function of the quantity g.

1.8 General and particular solution

This section closely follows: Riley & Hobson, Section 10.3.

1.8.1 First order equation

The most general first-order linear PDE in two independent variables is:

    A(x, y)\,\partial u/\partial x + B(x, y)\,\partial u/\partial y + C(x, y)\,u = R(x, y)    (1.36)

where A, B, C and R are given functions. For A = 0 or B = 0 the equation is just an ODE, in which the arbitrary integration constant is now a function of the remaining independent variable, either x or y.
Example 1.11: First order PDE: general solution
This section closely follows: Riley & Hobson, page 394.
As an example, consider the first-order equation for u(x, y)

    x\,\partial u/\partial x + 3u = x^2

equivalent to

    \partial u/\partial x + 3u/x = x.

Multiplying through by the integrating factor x^3 we find

    \partial(x^3 u)/\partial x = x^4

which can be straightforwardly integrated with respect to x,

    x^3 u = x^5/5 + f(y)

and finally

    u = x^2/5 + f(y)/x^3

where now the integration constant is, actually, an arbitrary function of y.

Let us assume for the time being that C = R = 0 and let us search for a solution of the form u(x, y) = f(p), where p is an (as yet unknown) function of x and y:

    \partial u/\partial x = (df(p)/dp)\,\partial p/\partial x    (1.37)
    \partial u/\partial y = (df(p)/dp)\,\partial p/\partial y    (1.38)

Substituting into the PDE one obtains:

    \left[A(x, y)\,\partial p/\partial x + B(x, y)\,\partial p/\partial y\right] df(p)/dp = 0    (1.39)

which is satisfied if

    A(x, y)\,\partial p/\partial x + B(x, y)\,\partial p/\partial y = 0    (1.40)

We now look for the conditions under which f(p) remains constant as x and y vary; this is equivalent to requiring that p itself remains constant. For this condition to be fulfilled, x and y must vary in such a way that

    dp = (\partial p/\partial x)\,dx + (\partial p/\partial y)\,dy = 0    (1.41)

The forms of (1.40) and (1.41) become exactly the same if we require that

    dx/A(x, y) = dy/B(x, y)    (1.42)

Upon integration of this expression the form of p can be found.


Example 1.12: Solution of homogeneous equation
This section closely follows: Riley & Hobson, page 395.
For

    x\,\partial u/\partial x - 2y\,\partial u/\partial y = 0

find (i) the solution that takes the value 2y + 1 on the line x = 1 and, then, (ii) a solution that has the value 4 at the point (1, 1).
We seek a solution of the form u(x, y) = f(p); it will be constant along lines in the (x, y) plane that satisfy the relation

    dx/x = dy/(-2y),

see Eq. (1.42), where for this equation A = x and B = -2y.
Integrating this relation gives

    \log x = c - (1/2)\log y

which, finally, gives x = c' y^{-1/2}. If we identify the constant of integration c' with p^{1/2}, we have p = x^2 y. The general solution of the PDE is therefore

    u(x, y) = f(x^2 y),

with f an arbitrary function.
For the boundary condition (i), the particular solution required is given by

    u(x, y) = 2(x^2 y) + 1

while for the boundary condition (ii) some acceptable solutions are

    u(x, y) = x^2 y + 3,    u(x, y) = 4x^2 y,    u(x, y) = 4.

All these three solutions are particular examples of the general solution

    u(x, y) = x^2 y + 3 + g(x^2 y)

where g(p) is an arbitrary function subject to the only condition that g(1) = 0.
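A quick symbolic check of this example (an added illustration, not in the original text, assuming SymPy): any differentiable f applied to p = x^2 y satisfies the homogeneous equation, and the particular solution 2x^2 y + 1 reproduces the boundary data on x = 1.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')

u = f(x**2 * y)                       # general solution u = f(p) with p = x^2 y
pde = x * sp.diff(u, x) - 2 * y * sp.diff(u, y)
print(sp.simplify(pde))               # -> 0 for arbitrary f

u1 = 2 * x**2 * y + 1                 # particular solution for boundary condition (i)
print(sp.simplify(x * sp.diff(u1, x) - 2 * y * sp.diff(u1, y)))  # -> 0
print(u1.subs(x, 1))                  # -> 2*y + 1, the prescribed value on x = 1
```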

So far we have considered the case where the PDE contains no term proportional to u. If instead C(x, y) \neq 0, the procedure needs to be adapted and one seeks a solution of the form u(x, y) = h(x, y)\,f(p).
Example 1.13: Solution of homogeneous equation (2)
This section closely follows: Riley & Hobson, page 396.
Consider the equation

    x\,\partial u/\partial x + 2\,\partial u/\partial y - 2u = 0.    (1.43)

To start with, we look for solutions of the form u(x, y) = h(x, y)\,f(p). We can write the following equations:

    \partial u/\partial x = (\partial h/\partial x)\,f(p) + h\,(df(p)/dp)\,\partial p/\partial x    (1.44)
    \partial u/\partial y = (\partial h/\partial y)\,f(p) + h\,(df(p)/dp)\,\partial p/\partial y    (1.45)

Plugging these results into the equation, we get:

    f(p)\left\{x\,\partial h/\partial x + 2\,\partial h/\partial y - 2h\right\} + h\left\{x\,\partial p/\partial x + 2\,\partial p/\partial y\right\}\frac{df}{dp} = 0    (1.46)

From inspection of Eq. (1.46), we see that the first term in curly brackets vanishes for any solution h(x, y) of the original PDE (it has the same expression as the original PDE), while the second term in curly brackets can be treated as previously done:

    x\,\partial p/\partial x + 2\,\partial p/\partial y = 0  \Rightarrow  x = c\,e^{y/2}  \Rightarrow  p = x\,e^{-y/2}    (1.47)

Therefore the solution is given as:

    u(x, y) = h(x, y)\,f(x\,e^{-y/2})    (1.48)

where f(p) is an arbitrary function and h(x, y) is any solution of Eq. (1.43).

1.8.2 First order inhomogeneous equations

An equation is said to be homogeneous if, given a solution u(x, y), then \lambda u(x, y) is also a solution for any \lambda. The problem is said to be homogeneous if the boundary conditions satisfied by u(x, y) are also satisfied by \lambda u(x, y).


The interest in homogeneity vs. inhomogeneity of PDEs is that, similarly to ODEs, the general solution of the inhomogeneous problem can be written as the sum of any particular solution of the problem and the general solution of the corresponding homogeneous problem.
As an example, consider the equation

    \partial u/\partial x - x\,\partial u/\partial y + u = f(x, y)    (1.49)

subject to, e.g., the boundary condition u(0, y) = g(y). Its general solution is given by

    u(x, y) = v(x, y) + w(x, y)    (1.50)

where v(x, y) is any solution of the inhomogeneous equation such that v(0, y) = g(y) and w(x, y) is the general solution of the homogeneous equation

    \partial w/\partial x - x\,\partial w/\partial y + w = 0    (1.51)

with the homogeneous boundary condition w(0, y) = 0.


Example 1.14: Solution of inhomogeneous equation
This section closely follows: Riley & Hobson, page 398.
Consider the inhomogeneous equation

    y\,\partial u/\partial x - x\,\partial u/\partial y = 3x    (1.52)

To start with, we look for solutions of the corresponding homogeneous equation in the form u(x, y) = f(p). Following the procedure exposed above, we see that u(x, y) will be constant along lines satisfying the relation

    dx/y = -dy/x

and after integration we get

    x^2/2 + y^2/2 = c

from which, imposing c = p/2, we obtain the general solution of the homogeneous equation in the form u(x, y) = f(x^2 + y^2), where f is an arbitrary function which will be determined once appropriate boundary conditions are imposed.
Proceeding further, we seek a particular integral of eq. (1.52). For this simple case, we note that a particular integral is u(x, y) = -3y, and so the general solution of (1.52) will be the sum of two contributions:

    u(x, y) = f(x^2 + y^2) - 3y

To determine the arbitrary function f, we apply the boundary condition u(x, 0) = x^2, which requires that u(x, 0) = f(x^2) = x^2, that is f(z) = z, and so the particular solution in this case is

    u(x, y) = x^2 + y^2 - 3y

If the boundary condition is a one-point boundary condition, say u(1, 0) = f(1) = 2, one possibility is f(z) = 2z, and so we obtain

    u(x, y) = 2x^2 + 2y^2 - 3y + g(x^2 + y^2)

where g is any arbitrary function for which g(1) = 0. Alternatively, a simpler choice is f(z) = 2, which leads to

    u(x, y) = 2 - 3y + h(x^2 + y^2)

where, again, h(1) = 0.

1.8.3 Second order equation

This section closely follows: Riley & Hobson, Section 10.3.3.

Second-order equations are extremely important as models for many physical systems. For simplicity we will consider in the discussion equations that are functions of only two independent variables; the extension to the most generic case is rather obvious.
The most general second-order linear PDE with two independent variables has the form:

    A\,\partial^2 u/\partial x^2 + B\,\partial^2 u/\partial x\partial y + C\,\partial^2 u/\partial y^2 + D\,\partial u/\partial x + E\,\partial u/\partial y + F\,u = R(x, y)    (1.53)

where A, B, \dots, F and R(x, y) are given functions of x and y.


These PDEs are classified into three categories due to the different nature of their solutions, as:
hyperbolic
parabolic
elliptic

B 2 > 4AC
B 2 = 4AC
B 2 < 4AC

Clearly if A, B and C are functions of x and y, instead of being constants, then the nature of the PDE may
be different in different parts of the domain.
Here we illustrate the nature of the PDE by considering a special case, namely homogeneous equations,
R(x, y) = 0, for which the coefficients A, . . . , F are not function of space but are constant.
We focus in particular on the special case D = E = F = 0 so that only second-order derivatives remain.
This is for example the case of the one-dimensional wave equation (1.18), of the two-dimensional Laplace
equation (1.14) but not of the diffusion equation (1.3) as the latter contains first order derivatives.
We seek a solution of the form

    u(x, y) = f(p)    (1.54)

where f(p) is a function chosen so that we can hope to obtain a common factor d^2 f(p)/dp^2. Indeed,

    \partial u/\partial x = (df(p)/dp)\,\partial p/\partial x    (1.55)

Clearly one will not obtain a single factor involving a second-order derivative of f(p) unless p is a linear function of x and y. Assuming a solution of the form u(x, y) = f(ax + by) and evaluating the partial derivatives:

    \partial u/\partial x = a\, df(p)/dp    (1.56)
    \partial u/\partial y = b\, df(p)/dp    (1.57)
    \partial^2 u/\partial x^2 = a^2\, d^2 f(p)/dp^2    (1.58)
    \partial^2 u/\partial x\partial y = ab\, d^2 f(p)/dp^2    (1.59)
    \partial^2 u/\partial y^2 = b^2\, d^2 f(p)/dp^2    (1.60)

and substituting these into the PDE gives

    \left(A a^2 + B a b + C b^2\right) d^2 f(p)/dp^2 = 0    (1.61)

This is precisely the form we were looking for. If the term inside the parentheses is zero, then the equation is satisfied for any function f(p). The condition is thus:

    A a^2 + B a b + C b^2 = 0    (1.62)

and from this second-order equation one obtains the following two solutions for the ratio b/a:

    b/a = \left[-B \pm (B^2 - 4AC)^{1/2}\right] / (2C)    (1.63)

If we call \lambda_1 and \lambda_2 the two ratios b/a that solve this second-order equation, then any function of the two variables p_1 = x + \lambda_1 y and p_2 = x + \lambda_2 y is a solution of the original PDE. Thus, in general, the solution may be written as:

    u(x, y) = f(x + \lambda_1 y) + g(x + \lambda_2 y)    (1.64)

where f and g are arbitrary functions.
The solution of the equation d^2 f(p)/dp^2 = 0 provides only the trivial solution u(x, y) = kx + ly + m, for which all second derivatives are identically zero.
Example 1.15: General solution of the one-dimensional wave equation
This section closely follows: Riley & Hobson, page 401.
As an example we look at the solution of the following wave equation:

    \partial^2 u/\partial x^2 - (1/c^2)\,\partial^2 u/\partial t^2 = 0    (1.65)

This equation is of the form (1.53) with A = 1, B = 0 and C = -1/c^2. Consequently \lambda_{1,2} are solutions of

    1 - \lambda_{1,2}^2/c^2 = 0    (1.66)

namely \lambda_1 = c and \lambda_2 = -c. This means that the generic solution can be expressed as

    u(x, t) = f(x - ct) + g(x + ct)    (1.67)

where f and g are two arbitrary functions, corresponding to travelling solutions in the positive and negative x direction with speed c.

Example 1.16: General solution of the two-dimensional Laplace equation

This section closely follows: Riley & Hobson, page 402.
We consider the two-dimensional Laplace equation:

    \partial^2 u/\partial x^2 + \partial^2 u/\partial y^2 = 0    (1.68)

and again we look for a solution in the form of a function f(p), where p = x + \lambda y and \lambda satisfies:

    1 + \lambda^2 = 0    (1.69)

This condition requires that \lambda = \pm i, thus p = x \pm iy, and the general solution is:

    u(x, y) = f(x + iy) + g(x - iy)    (1.70)

From the two examples above it is clear that the nature of the solution, i.e. the appropriate combination of x and y, depends upon whether B^2 > 4AC or B^2 < 4AC. This is the criterion that distinguishes whether the PDE is hyperbolic or elliptic.
As a general result, hyperbolic and elliptic equations, given that the constants A, B and C are real, have solutions with arguments respectively of the form x + \alpha y or x + i\beta y, where \alpha and \beta are real constants.
The case of parabolic equations, i.e. B^2 = 4AC, is special because \lambda_1 = \lambda_2 and thus only one appropriate combination of x and y is available:

    u(x, y) = f(x - (B/2C)\,y)    (1.71)

In order to find the second part of the general solution one may try a solution of the form:

    u(x, y) = h(x, y)\,g(x - (B/2C)\,y)    (1.72)

Substituting this into the equation and using the fact that A = B^2/4C gives

    \left[A\,\partial^2 h/\partial x^2 + B\,\partial^2 h/\partial x\partial y + C\,\partial^2 h/\partial y^2\right] g = 0    (1.73)

Thus it is required that h be any solution of the original PDE. As any solution will do, one can take the simplest one, h(x, y) = x; this allows us to construct the general solution of the parabolic PDE as:

    u(x, y) = f(x - (B/2C)\,y) + x\,g(x - (B/2C)\,y)    (1.74)

Of course one could have taken h = y as well.

Example 1.17: Second order equation

This section closely follows: Riley & Hobson, page 403.
As an example, solve the following equation:

    \partial^2 u/\partial x^2 + 2\,\partial^2 u/\partial x\partial y + \partial^2 u/\partial y^2 = 0    (1.75)

with the boundary conditions u(0, y) = 0 and u(x, 1) = x^2.

From the general result previously obtained, any function of p = x + \lambda y will be a solution provided that

    1 + 2\lambda + \lambda^2 = 0    (1.76)

thus the (degenerate) solution is \lambda = -1 and the equation is parabolic. The general solution is thus

    u(x, y) = f(x - y) + x\,g(x - y)    (1.77)

Now the boundary condition u(0, y) = 0 implies f(p) = 0, and the other boundary condition, u(x, 1) = x^2, gives:

    x\,g(x - 1) = x^2    (1.78)

i.e. g(p) = p + 1, so that the particular solution is given by

    u(x, y) = x\,(p + 1) = x\,(x - y + 1)    (1.79)

As the boundary conditions are prescribed along two boundaries, x = 0 and y = 1, the solution is completely determined and it contains no arbitrary function.
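The particular solution can be verified symbolically (an added illustration, not part of the source, assuming SymPy): u = x(x - y + 1) satisfies both the parabolic PDE and the two boundary conditions.

```python
import sympy as sp

x, y = sp.symbols('x y')
u = x * (x - y + 1)

pde = sp.diff(u, x, 2) + 2 * sp.diff(u, x, y) + sp.diff(u, y, 2)
print(sp.simplify(pde))         # -> 0, so u solves u_xx + 2 u_xy + u_yy = 0
print(u.subs(x, 0))             # -> 0, boundary condition u(0, y) = 0
print(sp.expand(u.subs(y, 1)))  # -> x**2, boundary condition u(x, 1) = x^2
```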

Here we provide an alternative derivation of the general solutions (1.64) and (1.74) by changing variables in the original PDE before solving it. At that point the solution becomes very easy, but of course this is only possible thanks to the insight that we already have about the expected solutions.
Starting from eqn. (1.53) we change to the new variables:

    \zeta = x + \lambda_1 y    (1.80)
    \eta = x + \lambda_2 y    (1.81)

With this change of variables and the chain rule,

    \partial/\partial x = \partial/\partial\zeta + \partial/\partial\eta    (1.82)
    \partial/\partial y = \lambda_1\,\partial/\partial\zeta + \lambda_2\,\partial/\partial\eta    (1.83)

together with A + B\lambda_i + C\lambda_i^2 = 0 for i = 1, 2, the equation

    A\,\partial^2 u/\partial x^2 + B\,\partial^2 u/\partial x\partial y + C\,\partial^2 u/\partial y^2 = 0    (1.85)

becomes

    \left[2A + B(\lambda_1 + \lambda_2) + 2C\lambda_1\lambda_2\right] \partial^2 u/\partial\zeta\partial\eta = 0    (1.86)

Provided the term in brackets is non-zero (i.e. B^2 \neq 4AC) we have

    \partial^2 u/\partial\zeta\partial\eta = 0    (1.87)

which has the integrals

    \partial u/\partial\eta = F(\eta),    u(\zeta, \eta) = f(\zeta) + g(\eta)    (1.88)

whose solution is the same as what was obtained before, i.e.:

    u(x, y) = f(x + \lambda_2 y) + g(x + \lambda_1 y)    (1.89)

If the equation is parabolic (i.e. B^2 = 4AC) we use the alternative set of variables:

    \zeta = x + \lambda y,    \eta = x    (1.90)

and, because \lambda = -B/(2C), one can deduce:

    A\,\partial^2 u/\partial\eta^2 = 0    (1.91)

By means of integration one obtains the solution

    u(\zeta, \eta) = \eta\,g(\zeta) + f(\zeta)    (1.92)

that in terms of x and y is:

    u(x, y) = x\,g(x + \lambda y) + f(x + \lambda y)    (1.93)

1.9 The wave equation

This section closely follows: Riley & Hobson, Section 10.4.


The wave equation has the general solution

    u(x, t) = f(x - ct) + g(x + ct)    (1.94)

where f and g are arbitrary functions that represent propagation in the positive and negative directions.
In the case where f(p) = g(p) this may result in a wave that does not progress, i.e. a standing wave. Supposing

    f(p) = g(p) = A\cos(kp + \alpha)    (1.95)

then the solution can be written as:

    u(x, t) = A\left[\cos(kx - kct + \alpha) + \cos(kx + kct + \alpha)\right] = 2A\cos(kct)\cos(kx + \alpha)    (1.96)

and thus the shape of the wave does not move, but its amplitude oscillates in time with frequency kc. At any point that satisfies \cos(kx + \alpha) = 0 there is no displacement, and such points are called nodes.
So far the discussion has considered the wave equation without any boundary condition.
How can a boundary condition be imposed on the wave equation? This problem is usually treated by means of the method of separation of variables (see Chapter 5). Here we consider d'Alembert's solution u(x, t) of the wave equation with the following initial conditions:

initial displacement:

    u(x, 0) = \phi(x)    (1.97)

initial velocity:

    \partial u(x, 0)/\partial t = \psi(x)    (1.98)

We need to find the functions f and g that are consistent with the assigned values at t = 0. This implies that

    \phi(x) = u(x, 0) = f(x) + g(x)    (1.99)
    \psi(x) = \partial u(x, 0)/\partial t = -c\,f'(x) + c\,g'(x)    (1.100)

Integrating the last equation:

    (1/c)\int_{p_0}^{p} \psi(q)\, dq + K = -f(p) + g(p)    (1.102)

for some lower integration limit p_0 and a corresponding constant K (depending on p_0). Putting this together with \phi = f + g gives:

    f(p) = \phi(p)/2 - (1/2c)\int_{p_0}^{p} \psi(q)\, dq - K/2    (1.103)
    g(p) = \phi(p)/2 + (1/2c)\int_{p_0}^{p} \psi(q)\, dq + K/2    (1.104)

Adding these two last expressions, the first evaluated at p = x - ct and the second at p = x + ct, we obtain the solution to the problem in the form:

    u(x, t) = (1/2)\left[\phi(x - ct) + \phi(x + ct)\right] + (1/2c)\int_{x - ct}^{x + ct} \psi(q)\, dq    (1.105)

What is the physical interpretation of what we found? The solution is composed of three terms: the first two represent the influence of the initial displacement that started at the positions x - ct and x + ct and travelled rightwards and leftwards, respectively, arriving at x at time t. The third term can be seen as the accumulated displacement at position x due to all the parts of the initial condition that could reach x within a time t, travelling either backward or forward.
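Formula (1.105) can be evaluated directly; the sketch below (an illustration added to these notes, assuming NumPy/SciPy; the Gaussian initial data are arbitrary choices) implements d'Alembert's solution and checks that it reproduces the prescribed initial displacement and velocity.

```python
import numpy as np
from scipy.integrate import quad

c = 1.0
phi = lambda x: np.exp(-x**2)              # initial displacement phi(x), illustrative choice
psi = lambda x: np.exp(-(x - 1.0)**2)      # initial velocity psi(x), illustrative choice

def u(x, t):
    """d'Alembert's solution (1.105)."""
    integral, _ = quad(psi, x - c * t, x + c * t)
    return 0.5 * (phi(x - c * t) + phi(x + c * t)) + integral / (2.0 * c)

# The solution reproduces the initial displacement at t = 0 ...
print(u(0.3, 0.0), phi(0.3))
# ... and its time derivative at t = 0 approximately equals the initial velocity psi(x).
h = 1e-4
print((u(0.3, h) - u(0.3, -h)) / (2 * h), psi(0.3))
```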
Extension to 3D. The extension of a solution similar to the one just discussed to the 3D wave equation is rather straightforward.
The 3D wave equation reads:

    \partial^2 u/\partial x^2 + \partial^2 u/\partial y^2 + \partial^2 u/\partial z^2 - (1/c^2)\,\partial^2 u/\partial t^2 = 0    (1.106)

and, similarly to the 1D case, we can search for solutions that are functions of a linear combination of all four variables:

    p = lx + my + nz + \mu t    (1.107)

A solution will be acceptable under the condition:

    \left(l^2 + m^2 + n^2 - \mu^2/c^2\right) d^2 f(p)/dp^2 = 0    (1.108)

and thus the condition is satisfied for any arbitrary f if

    l^2 + m^2 + n^2 = \mu^2/c^2    (1.109)

Taking \mu = -c we then have

    l^2 + m^2 + n^2 = 1    (1.110)

This condition is equivalent to saying that (l, m, n) are the Cartesian components of a unit vector \hat{n} pointing along the direction of propagation of the wave. The argument p can be written as p = \hat{n}\cdot r - ct, and the general solution of the wave equation in 3D is:

    u(x, y, z, t) = u(r, t) = f(\hat{n}\cdot r - ct) + g(\hat{n}\cdot r + ct)    (1.111)

1.10 The diffusion equation

This section closely follows: Riley & Hobson, Section 10.5.


A very well known and important class of second-order PDEs, which we will consider in detail in this course, is the one where the second derivative with respect to one of the variables appears together with only the first-order derivative with respect to another variable (usually the time).
A good example of this class of PDEs is the following one-dimensional diffusion equation:

    \kappa\,\partial^2 u(x, t)/\partial x^2 = \partial u/\partial t    (1.112)

where the constant \kappa has dimensions of [length]^2 [time]^{-1}. The actual value of the constant is a property of the material and of the nature of the process (e.g. diffusion of the concentration of a solute, heat flux, etc.).
The methods seen so far cannot be applied to this equation, as it is differentiated a different number of times with respect to x and to t. It is obvious that any attempt to search for a solution of the form u(x, t) = f(p) with p = ax + bt will not lead to a form where the function f can be cancelled out.
A simple way of solving this equation is to set both members of the equation equal to a constant, \alpha:

    \kappa\,\partial^2 u/\partial x^2 = \alpha    (1.113)

and

    \partial u/\partial t = \alpha    (1.114)

These have the general solutions

    u(x, t) = (\alpha/2\kappa)\,x^2 + x\,g(t) + h(t)    and    u(x, t) = \alpha t + m(x)

respectively. These solutions are compatible if g(t) = g is a constant, h(t) = \alpha t and m(x) = (\alpha/2\kappa)\,x^2 + gx. An acceptable solution is thus:

    u(x, t) = (\alpha/2\kappa)\,x^2 + gx + \alpha t + const    (1.115)

We remark that a solution that is a function of a linear combination of x and t cannot work, and so we seek solutions by combining the independent variables in particular ways. Due to the physical dimensions of \kappa, the following combination of variables is dimensionless:

    \eta = x^2/(\kappa t)    (1.116)

By substitution we check whether we can find solutions of the form u(x, t) = f(\eta). Evaluating the derivatives:

    \partial u/\partial x = (df(\eta)/d\eta)\,\partial\eta/\partial x = (2x/\kappa t)\, df(\eta)/d\eta    (1.117)
    \partial^2 u/\partial x^2 = (2/\kappa t)\, df(\eta)/d\eta + (2x/\kappa t)^2\, d^2 f(\eta)/d\eta^2    (1.118)
    \partial u/\partial t = -(x^2/\kappa t^2)\, df(\eta)/d\eta    (1.119)

and substituting into the diffusion equation one obtains that it can indeed be written solely in terms of \eta:

    4\eta\, d^2 f(\eta)/d\eta^2 + (2 + \eta)\, df(\eta)/d\eta = 0    (1.120)

This is a simple ODE that can be solved in the following way. Using the notation f'(\eta) = df(\eta)/d\eta we have:

    f''/f' = -1/(2\eta) - 1/4    (1.121)

Integrating, we have

    \ln f'(\eta) = -\ln\eta^{1/2} - \eta/4 + c

from which

    \ln\left[\eta^{1/2} f'(\eta)\right] = -\eta/4 + c    (1.122)

    f'(\eta) = A\,\eta^{-1/2}\exp(-\eta/4)    (1.123)

    f(\eta) = A\int_0^{\eta} \zeta^{-1/2}\exp(-\zeta/4)\, d\zeta    (1.124)

Rewriting this in terms of the variable

    \mu = \eta^{1/2}/2 = x / \left(2(\kappa t)^{1/2}\right)    (1.125)

so that d\mu = (1/4)\,\eta^{-1/2}\,d\eta, the solution is given by:

    u(x, t) = f(\eta) = B\int_{\mu_0}^{\mu} \exp(-\nu^2)\, d\nu    (1.126)

In the expression above B is a constant, and x and t appear only in the upper integration limit \mu, i.e. only in the combination x\,t^{-1/2}. If \mu_0 = 0 then u(x, t) is, up to a constant, the error function erf[x/(2(\kappa t)^{1/2})]. Only non-negative values of x and t are considered, so that \mu \geq 0.
To understand the physical meaning of the solution, we may think of u as representing a temperature field, and ask which temperature distribution it represents, e.g. in the case \mu_0 = 0.
Because the solution is a function of x\,t^{-1/2}, it is clear that all points x at times t such that x\,t^{-1/2} has the same value will have the same temperature. In other words, at any time t, the region with a given value of the temperature has moved along the positive x-axis by a distance proportional to t^{1/2}. This is a typical feature of diffusion processes. We notice that at t = 0 the variable \mu \to \infty and u becomes independent of x (except at x = 0), while at x = 0 one has that u is identically zero for all t.
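The self-similar character of the solution is easy to see numerically; the short sketch below (added as an illustration, assuming SciPy's erf and an arbitrary \kappa) evaluates u(x, t) = erf[x/(2\sqrt{\kappa t})] at several times and confirms that the profile depends on x and t only through x t^{-1/2}.

```python
import numpy as np
from scipy.special import erf

kappa = 1.0
u = lambda x, t: erf(x / (2.0 * np.sqrt(kappa * t)))   # similarity solution, mu0 = 0, B chosen so u -> 1

x = np.linspace(0.0, 5.0, 6)
for t in (0.25, 1.0, 4.0):
    print(f"t = {t:4.2f}:", np.round(u(x, t), 3))

# Self-similarity: rescaling x by sqrt(t) collapses all profiles onto a single curve.
print(u(1.0, 0.25), u(2.0, 1.0), u(4.0, 4.0))   # identical values
```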


Example 1.18: Laser pulse

This section closely follows: Riley & Hobson, page 410.
A laser delivers a pulse of heat of energy E at a point P on a large insulated sheet of thickness b, thermal conductivity k, specific heat s and density \rho. The sheet is initially at a uniform temperature. Indicating with u(r, t) the excess temperature at a later time t at a point a distance r from P, show that an appropriate solution for u is:

    u(r, t) = (\alpha/t)\,\exp\left(-r^2/(2\beta t)\right)    (1.127)

where \alpha and \beta are constants. Show that (i) \beta = 2k/(s\rho); (ii) the excess heat energy is independent of t, and evaluate \alpha; (iii) the total heat flow through any circle of radius r is E.
Solution. The equation relevant for this problem is the equation for heat diffusion:

    k\,\nabla^2 u(r, t) = s\rho\,\partial u(r, t)/\partial t    (1.128)

We look for the solution for r \gg b and treat the problem as one with circular symmetry, so that u depends only on the radial distance r and the equation becomes:

    (k/r)\,\partial/\partial r\left(r\,\partial u/\partial r\right) = s\rho\,\partial u/\partial t    (1.129)

(i) By substituting the proposed solution into the equation, both sides turn out to be proportional to

    (\alpha/t^2)\left(r^2/(2\beta t) - 1\right)\exp\left(-r^2/(2\beta t)\right)    (1.130)

with a factor 2k/\beta on the left-hand side and s\rho on the right-hand side, from which we can see that \beta = 2k/(s\rho).
(ii) The excess heat in the sheet at any time t is

    b s\rho\int_0^{\infty} u(r, t)\, 2\pi r\, dr = 2\pi b s\rho\int_0^{\infty} \frac{\alpha r}{t}\exp\left(-r^2/(2\beta t)\right) dr = 2\pi b s\rho\,\alpha\beta    (1.131)

The excess heat is thus independent of t and must be equal to the heat input E. As a consequence:

    \alpha = \frac{E}{2\pi b s\rho\beta} = \frac{E}{4\pi b k}    (1.132)

(iii) The total heat flow through a circle of radius r, integrated over all time, is

    \int_0^{\infty}\left(-2\pi r b k\,\frac{\partial u(r, t)}{\partial r}\right) dt = \int_0^{\infty} \frac{E r^2}{2\beta t^2}\exp\left(-r^2/(2\beta t)\right) dt = E\left[\exp\left(-r^2/(2\beta t)\right)\right]_{t=0}^{t=\infty} = E    (1.133)


Lecture 2

First order equations


Contents
  2.1 First order equation
  2.2 Method of Characteristics
  2.3 Cauchy data
      2.3.1 Cauchy-Kowalevski theorem
      2.3.2 Domain of definition
  2.4 Solutions with discontinuous derivatives

2.1 First order equation

This section closely follows: Applied Partial Differential Equations, J. Norbury, lecture notes, University of Oxford.
We start by considering the following first-order, quasi-linear PDE:

    a(x, y, u)\,\partial u/\partial x + b(x, y, u)\,\partial u/\partial y = c(x, y, u)    (2.1)

where a, b and c are assumed to be smooth (continuously differentiable) functions of the independent variables x and y and of the solution u(x, y).
Using this simple equation, which is relevant to many applications, we will illustrate the concepts of Cauchy data, characteristics and weak solutions.
Special cases:
If a and b do not depend on u, then the equation is called semi-linear:

    a(x, y)\,\partial u/\partial x + b(x, y)\,\partial u/\partial y = c(x, y, u)    (2.2)

If, additionally, c is a linear function of u, then the equation is called linear:

    a(x, y)\,\partial u/\partial x + b(x, y)\,\partial u/\partial y = c_1(x, y)\,u + c_2(x, y)    (2.3)

2.2 Method of Characteristics

The solution u(x, y) can be thought of as a surface z = u(x, y) in three dimensions, and it is possible to compute the normal to this surface by taking the gradient:

    n = \nabla\left(u(x, y) - z\right) = \left(\partial u/\partial x,\ \partial u/\partial y,\ -1\right)^T    (2.4)

The PDE (2.1) can then be rewritten as:

    (a, b, c)\cdot n = 0    (2.5)

which means that the vector (a, b, c)^T is always tangent to the surface representing the solution of the PDE. Geometrically, the above condition allows us to define the characteristic curves (x(\tau), y(\tau), z(\tau)), parameterized by \tau, that are everywhere tangent to the solution surface and that satisfy

    dx/d\tau = a(x, y, u)    (2.6)
    dy/d\tau = b(x, y, u)    (2.7)
    du/d\tau = c(x, y, u)    (2.8)

The projections of the characteristics onto the x-y plane, (x(\tau), y(\tau)), are the characteristic projections.

Figure 2.1: Schematic showing the characteristics, parameterized by \tau and pointing in the direction (a, b, c)^T, emerging from the initial curve, in turn parametrized by s. The curve \Gamma and the characteristic projections are the projections onto the (x, y) plane of the initial curve and of the characteristic curves, respectively.

The formulation of the PDE in terms of characteristics allows us to define a solution of the PDE in parametric form by solving a system of ODEs.
Here we assume that the boundary data, i.e. the Cauchy data, are given on a curve (x_0(s), y_0(s), u_0(s)), whose projection on the (x, y) plane is called \Gamma and is parameterized by another parameter s, see Figure 2.1.
For any value of s we can find a characteristic that passes through the corresponding point, with the initial conditions:

    x = x_0(s)    (2.9)
    y = y_0(s)    (2.10)
    u = u_0(s)    (2.11)

The characteristic itself is parameterized by \tau; solving eqns. (2.6)-(2.8), the parametric solution is given by:

    x = x(s, \tau)    (2.12)
    y = y(s, \tau)    (2.13)
    u = u(s, \tau)    (2.14)


Example 2.1: Burgers equation

Solve

    \partial u/\partial t + u\,\partial u/\partial x = 1    (2.15)

for u(x, t) in t > 0, given the initial condition u = x at t = 0.
The equations for the characteristics are, according to (2.6)-(2.8):

    dt/d\tau = 1    (2.16)
    dx/d\tau = u    (2.17)
    du/d\tau = 1    (2.18)

with the initial conditions t = 0, x = s, u = s at \tau = 0. The equation for t gives t = \tau, and then u = s + t. Now the equation for x can be solved:

    dx/dt = s + t    (2.19)

with x = s at t = 0, giving the solution x = s + st + t^2/2. From this, s can be expressed in terms of x and t, and finally one obtains

    u = \frac{x + t + t^2/2}{1 + t}    (2.20)
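The parametric construction can be checked numerically (an added illustration, not in the source, assuming SciPy): integrating the characteristic ODEs (2.16)-(2.18) from a few starting points s reproduces the closed-form solution (2.20).

```python
import numpy as np
from scipy.integrate import solve_ivp

def characteristics(tau, w):
    """Characteristic ODEs for u_t + u u_x = 1: w = (t, x, u)."""
    t, x, u = w          # t and x are carried along; only u enters the right-hand side
    return [1.0, u, 1.0]

u_exact = lambda x, t: (x + t + 0.5 * t**2) / (1.0 + t)   # formula (2.20)

tau_end = 2.0
for s in (-1.0, 0.0, 0.5, 2.0):
    sol = solve_ivp(characteristics, (0.0, tau_end), [0.0, s, s], rtol=1e-10, atol=1e-12)
    t, x, u = sol.y[:, -1]
    print(f"s = {s:+.1f}:  u(numerical) = {u:.6f}   u(formula) = {u_exact(x, t):.6f}")
```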

2.3 Cauchy data

Cauchy data are boundary conditions for a PDE that, at least locally, determine the solution.
In the case of a first-order quasilinear PDE, the Cauchy data correspond to the prescription of the values of u along a curve \Gamma in the x-y plane, i.e. u = u_0(s) where x = x_0(s) and y = y_0(s), with s parameterizing the curve \Gamma.
The curve \Gamma along which the Cauchy data are given should be nowhere tangent to (a, b)^T; if this happens, the characteristic points along the initial curve, instead of departing from it, and in general the initial data will not agree with the ODE satisfied by u along the characteristic.
Example 2.2: Consistency of initial data
The equation

    \partial u/\partial x + \partial u/\partial y = 1    (2.21)

has the general solution

    u = (x + y)/2 + F(x - y)    (2.22)

where F is an arbitrary function.

We consider the three following possible sets of initial data:
u = 0 on x + y = 0: this implies F \equiv 0, so the solution is u = (x + y)/2.
u = 0 on x = y: this initial data is given along the characteristic projection x = y and is inconsistent (along x = y the general solution gives u = x + F(0), which cannot vanish for all x).
u = x on x = y: this initial data is given along the characteristic projection x = y and is consistent with the evolution of the ODE for u along the characteristic; this implies that any F such that F(0) = 0 is acceptable.
F (0) = 0 is correct.

Thus, as the example above shows, in general there are three possibilities:

If \Gamma is nowhere tangent to a characteristic projection, then there is a unique solution (at least locally).
If \Gamma is tangent to a characteristic projection, then in general there is no solution.
If, quite exceptionally, the data for u prescribed along \Gamma agree with the ODE along the characteristic projection, then there is a non-unique solution.

2.3.1 Cauchy-Kowalevski theorem

We want to formalize the condition for the existence of a solution. A necessary condition for the existence of a unique solution of the PDE (2.23) in a neighbourhood of \Gamma is the existence of the first derivatives of u on \Gamma.
    a(x, y, u)\,\partial u/\partial x + b(x, y, u)\,\partial u/\partial y = c(x, y, u)    (2.23)

The derivative of u_0 along \Gamma is given, by the chain rule, by:

    du_0/ds = (\partial u/\partial x)\, dx_0/ds + (\partial u/\partial y)\, dy_0/ds    (2.24)

Equations (2.23) and (2.24) provide a pair of equations for \partial u/\partial x and \partial u/\partial y on \Gamma.
The derivatives can thus be obtained from this system provided that the determinant of the system is non-zero:

    a\, dy_0/ds - b\, dx_0/ds \neq 0    (2.25)

If this condition is satisfied then both u and its first derivatives are defined on \Gamma. The condition is equivalent to the requirement, discussed previously, that \Gamma not be tangent to a characteristic projection.
When the determinant is zero, there is either no solution or an infinity of solutions. The two cases correspond, respectively, to the following conditions:

    (1/a)\, dx_0/ds = (1/b)\, dy_0/ds \neq (1/c)\, du_0/ds    no solutions    (2.26)

    (1/a)\, dx_0/ds = (1/b)\, dy_0/ds = (1/c)\, du_0/ds    infinitely many solutions    (2.27)
The same argument used for the existence of the first derivatives of u_0 on \Gamma can be repeated to investigate the existence of the second derivatives:

    \frac{d}{ds}\left(\partial u/\partial x\right) = (\partial^2 u/\partial x^2)\, dx_0/ds + (\partial^2 u/\partial x\partial y)\, dy_0/ds    (2.28)

and, by differentiating the PDE with respect to x:

    a\,\partial^2 u/\partial x^2 + b\,\partial^2 u/\partial x\partial y + (\partial a/\partial x)(\partial u/\partial x) + (\partial b/\partial x)(\partial u/\partial y) + (\partial a/\partial u)(\partial u/\partial x)^2 + (\partial b/\partial u)(\partial u/\partial x)(\partial u/\partial y) = \partial c/\partial x + (\partial c/\partial u)(\partial u/\partial x)    (2.29)

These are two equations for \partial^2 u/\partial x^2 and \partial^2 u/\partial x\partial y, and the condition for this system to have a unique solution is the same as condition (2.25).
Conclusion: if a, b and c are analytic, then the same argument can be continued, giving the same condition at every order of the derivatives of u on \Gamma. A Taylor series can thus be constructed for u(x, y) on the curve \Gamma. This is a hint of the Cauchy-Kowalevski theorem, which states that the PDE (2.23) has a unique solution in some neighbourhood of \Gamma if a, b and c are analytic functions and satisfy the condition (2.25).

2.3.2 Domain of definition

When the initial data is not given on an infinite curve but, e.g., only on a finite interval, then the solution is defined only in the region that is reached by the characteristics originating from the points of the initial interval. This region is called the domain of definition.
Example 2.3: Domain of definition

    \partial u/\partial x + \partial u/\partial y = u^3    (2.30)

with initial conditions u = y on x = 0, 0 < y < 3. The equations for the characteristics are:

    dx/d\tau = 1    (2.31)
    dy/d\tau = 1    (2.32)
    du/d\tau = u^3    (2.33)

with initial data x = 0, y = s, u = s at \tau = 0, 0 < s < 3, with solution

    x = \tau    (2.34)
    y = s + \tau    (2.35)
    u = s / (1 - 2s^2\tau)^{1/2}    (2.36)

and in explicit form:

    u = (y - x) / \left[1 - 2x(y - x)^2\right]^{1/2}    (2.37)

This solution blows up when 2s^2\tau = 1, which is equivalent to:

    y = x + 1/(2x)^{1/2}    (2.38)

as represented in Figure 2.2, where the domain of definition of the solution is shown as bounded by the curve given by (2.38) and by the characteristic projections y = x and y = x + 3.

2.5

Version of September 1, 2015

Cauchy data

2.6

Figure 2.2: Domain of definition for equation (2.30)


Blow-up
The domain of definition of the PDE may be further reduced by the existence of singularities.
Example 2.4: Domain of definition and singularities
As an example we want to determine the domain of definition of the following PDE:
x

u
u
+y
=0
x
y

(2.39)

with conditions u = y on x = 1, 0 < y < 1.


The characteristic equations given by
dx
=x
d
dy
=y
d
du
=0
d

(2.40)
(2.41)
(2.42)

with initial data x = 1, y = s, u = s for = 0 and 0 < s < 1 give:


x = e

(2.43)

(2.44)

u=s

(2.45)

y = se

for 0 < s < 1. Eliminating s and we obtan the solution u = y/x for 0 < y/x < 1. The solution is
not uniquely defined at the origin thus the domain of definition is 0 < y/x < 1 and x > 0, as shown
in Figure 2.3 where characteristic projections are drawn.

2.6

Version of September 1, 2015

Cauchy data

2.7

Figure 2.3: Characteristic projections for equation (2.39) and with the domain of definition bounded by the
thick black curve. The domain of definition does not contain the origin.

Example 2.5: The case of Burgers equation


We look for the region of definition of the following Burgers equation
u
u
+u
=0
t
x

(2.46)

for t > 0 and with initial data u = sin (x) for 0 x 2 at t = 0. The solution in parametric form
is u = sin(s), t = , x = s + sin(s) or
u = sin(x tu)

(2.47)

in implicit form. The characteristic lines are represented in Figure 2.4. As it can be seen the
characteristics cross at some finite distance. The Jacobian is (x, y)/(s, t) = 1 + t cos s and J = 0 on
the curve x = s tan s, t = sec(s), drawn in Figure 2.4 as a thick red curve. The solution, instead,
is represented in Figure 2.5.

Non-uniqueness
The domain of definition may be further restricted by u not being a unique function of x and y anymore. A
unique mapping between (, s) and (x, y) requires that a unique characteristic passes through each point in
the (x, y) plane. If the following determinant is zero this means that characteristics start to intersect:
x x


y
x
s

(2.48)
y y = a s b s

s
and thus the method of characteristics fails to provide a valid solution.

2.7

Version of September 1, 2015

Cauchy data

2.8

5
4.5
4
3.5

J=0

2.5
2

Domain of
definition

1.5
1
0.5
0
0

Figure 2.4: Characteristic projections for Burgers equation with initial conditions given by u = sin (x), as
from Example 2.5, and domain of definition bounded by the thick red curve, where the Jacobian J = 0.

Figure 2.5: Solution of Eq. (2.47) plotted versus x for t=0.5, 1.0, 1.5, 2.0. The initial condition u = sin (x)
is shown in blue. Note that at some instant of time the solution becomes multivalued and so the solution
obtained from method of characteristics is not valid anymore, see Fig. 2.4.

2.8

Version of September 1, 2015

Solutions with discontinuos derivatives

2.4

2.9

Solutions with discontinuos derivatives

We can extend the solution of the equation to the case where u is smooth everywhere but C 0 on a line C on
which the first derivative of u is discontinuos. If the curve C is parameterized by x = x() and y = y() then
we can use the superscript to indicate the solution on one or the other side of C.
u+ dx u+ dy
du+
=
+
d
x d
y d
du
u dx u dy
=
+
d
x d
y d

(2.49)
(2.50)

while the function u itself is continuous across C and u+ = u . From this it follows that:
du
du+
=
d
d
and thus
dx
d

u+
u

x
x

dy
+
d

(2.51)

u+
u

y
y


=0

(2.52)

and finally
 +
 +
dy u
dx u
+
=0
d x d y

(2.53)

where [u]+
= u u is the jump across C. Both u and u are classical solutions of the PDE, in the sense
1
that they are C and thus:
u
u
a
+b
=c
(2.54)
x
y

and subtracting the equation one from the other:



a

u
x

+


+b

u
y

+
=0

(2.55)

The two equations (2.53) and (2.55) form a system for the discontinuity in the derivatives on C and the
system can be solved if the determinant is non zero. This is equivalent to the condition:
b

dx
dy
a
=0
d
d

(2.56)

that is equivalent to the equation for the characteristics projection.


Conclusions: the derivative of u may only be discontinuos across a characteristic projection.

2.9

Version of September 1, 2015

Solutions with discontinuos derivatives

2.10

Example 2.6: Discontinuous first derivative of solution function


We consider
ux + uy = 1

(2.57)

with boundary conditions



u(x, 0) =

0
x

x<0
x0

(2.58)

The characteristic equations are dx = dy = du with general solution u = x + f (x y). The boundary
condition gives

s s < 0
f (s) =
(2.59)
0
s0
and the solution, therefore, is

u=

y
x

x<y
xy

(2.60)

Thus the function u is continuos across the characteristic y = x but its derivative not.

2.10

Version of September 1, 2015

Lecture 3

Finite difference approximation of


PDEs
Contents
3.1
3.2
3.3
3.4
3.5
3.6
3.7
3.8
3.9

Finite difference approximation of derivatives . . . .


Finite difference discretization of advection-diffusion
3.2.1 Analysis of truncation error . . . . . . . . . . . . . . .
Consistency of a numerical scheme . . . . . . . . . . .
Semi-discretization and method of lines . . . . . . . .
Stability . . . . . . . . . . . . . . . . . . . . . . . . . . .
Stability analysis: von Neumann method . . . . . . .
Well posed problem . . . . . . . . . . . . . . . . . . . .
Convergence of the finite-difference equation . . . . .
3.8.1 Dissipation and dispersion of a numerical scheme . . .
The concept of modified equation . . . . . . . . . . . .
3.9.1 Artificial viscosity . . . . . . . . . . . . . . . . . . . .

3.0

. . . . . . . . . . . . .
equation . . . . . . .
. . . . . . . . . . . . . .
. . . . . . . . . . . . .
. . . . . . . . . . . . .
. . . . . . . . . . . . .
. . . . . . . . . . . . .
. . . . . . . . . . . . .
. . . . . . . . . . . . .
. . . . . . . . . . . . . .
. . . . . . . . . . . . .
. . . . . . . . . . . . . .

.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.

3.1
3.2
3.3
3.6
3.7
3.8
3.10
3.12
3.12
3.12
3.16
3.18

Finite difference approximation of derivatives

3.1

In this chapter we will discuss fundamental aspects related to the issue of the numerical approximation of
PDEs by means of finite difference. The fundamental concepts that will be introduced and discussed are:
finite difference approximation of differential operators (time and space derivatives), well posed
problems, truncation error, scheme consistency, scheme dissipative or dispersive properties,
scheme stability (von Neumann stability analysis), scheme convergence.
We will recall how to derive finite difference approximations of derivatives in order to define a numerical
scheme for solving partial differential equations.
We will derive the truncation error of a scheme so to be able to determine the accuracy of the computed
solution. We will also define very important properties of a numerical scheme: consistency, stability,
well-posedness and their relation with the convergence of the numerical solution towards the exact solution of a partial differential equation.

3.1

Finite difference approximation of derivatives

This section closely follows: G. Tryggvason - CFD course - Web Lecture Notes
Consider a partial differential equation for u(x, t) defined in a rectangular domain D = {(x, t)|a < x <
b, t0 < t < tmax }. In order to numerically solve PDEs one can approximate the derivatives to evaluate them
numerically. We discretize the function over a mesh that is equispaced by a factor h in space and t in time.
This means that xj = j h and that tn = n t.
The time derivative
The Taylor expansion of the function u(x, t) gives:
u(x, t + t) = u(x, t) +

1 2 u(x, t) 2
u(x, t)
t +
t + O(t3 )
t
2 t2

(3.1)

and this gives:

and thus:

u(x, t + t) u(x, t)
u(x, t) t 2 u(x, t)
=
+
+ O(t2 )
t
t
2
t2

(3.2)

u(x, t)
u(x, t + t) u(x, t) t 2 u(x, t)
=

+ O(t2 )
t
t
2
t2

(3.3)

The spatial first derivative


u(x + h, t) = u(x, t) +

2 u(x, t) h2
3 u(x, t) h3
4 u(x, t) h4
u(x, t)
h+
+
+
+ O(h5 )
x
x2
2
x3
6
x4 24

(3.4)

u(x h, t) = u(x, t)

u(x, t)
2 u(x, t) h2
3 u(x, t) h3
4 u(x, t) h4
h+

+
+ O(h5 )
2
3
x
x
2
x
6
x4 24

(3.5)

and subtracting
u(x + h, t) u(x h, t) = 2
and thus

u(x, t)
3 u(x, t) h3
h+2
+ O(h5 )
x
x3
6

u(x, t)
u(x + h, t) u(x h, t) 3 u(x, t) h2
=

+ O(h4 )
x
2h
x3
6

3.1

(3.6)

(3.7)

Version of September 1, 2015

Finite difference discretization of advection-diffusion equation

3.2

(n + 2)t
(n + 1)t
nt

unj1 unj

unj+1

(n 1)t
(n 2)t
jh
(j 2)h
(j + 2)h
(j 1)h
(j + 1)h
Figure 3.1: Stencil for first order spatial derivative.
The spatial second derivative
u(x + h, t) = u(x, t) +

u(x, t)
2 u(x, t) h2
3 u(x, t) h3
4 u(x, t) h4
h+
+
+
+ ...
x
x2
2
x3
6
x4 24

(3.8)

u(x h, t) = u(x, t)

u(x, t)
2 u(x, t) h2
3 u(x, t) h3
4 u(x, t) h4
h+

+
+ ...
2
3
x
x
2
x
6
x4 24

(3.9)

and summing
u(x + h, t) + u(x h, t) = 2u(x, t) + 2
and

3.2

2 u(x, t) h2
4 u(x, t) h4
+2
+ ...
2
x
2
x4 24

u(x + h, t) 2u(x, t) + u(x h, t) 4 u(x, t) h2


2 u(x, t)
=

+ ...
2
x
h2
x4 12

(3.10)

(3.11)

Finite difference discretization of advection-diffusion equation

This section closely follows: G. Tryggvason - CFD course - Web Lecture Notes
Based on the expressions for the discretization of the differential operators we can now write the discretized
form for the advection-diffusion equation:
u
u
2u
+v
=D 2
(3.12)
t
x
x
where for simplicity we consider the case where the advection velocity v = const. At t = tn = n t and
x = xj = j h we denote
unj = u(xj , tn ) = u(j h, n t)
n


u
=
t (xj ,tn )
j

 n
u
u
=
x j
x (xj ,tn )

u
t

3.2

Version of September 1, 2015

Finite difference discretization of advection-diffusion equation


and


2u
x2

n
j

3.3


2 u
=
x2 (xj ,tn )

At (xj , t ), Eq. (3.12) satisfies




u
t

n


+v

u
x

n


=D

2u
x2

n
(3.13)
j

We use the approximations obtained earlier




n

u
t

=
j

un+1
unj
j
+ O(t)
t

(3.14)

n
unj+1 unj1
u
=
+ O(h2 )
x j
2h
 2 n
unj+1 2unj + unj1
u
=
+ O(h2 )
x2 j
h2


(3.15)

(3.16)

and rearranging terms we have


un+1
= unj
j

 Dt

vt n
uj+1 unj1 + 2 unj+1 2unj + unj1
2h
h

(3.17)

(n + 2)t
un+1
j

(n + 1)t
nt

unj1 unj

unj+1

(n 1)t
(n 2)t
jh
(j 2)h
(j + 2)h
(j 1)h
(j + 1)h
Figure 3.2: A graphical representation corresponding to the stencil used to approximate the advectiondiffusion equation via forward Euler in time and centered difference in space (FTCS).

3.2.1

Analysis of truncation error

For a single variable function u(x) denote ui = u(xi ). The Taylor expansion gives
 




u
h2 2 u
h3 3 u
ui+1 = ui + h
+
+
+ O(h4 )
x i
2 x2 i
6 x3 i

ui1 = ui h

u
x


+
i

h2
2

2u
x2

3.3

h3
6

3u
x3

+ O(h4 )

(3.18)

(3.19)

Version of September 1, 2015

Finite difference discretization of advection-diffusion equation

3.4

(n + 2)t
(n + 1)t

un+1
j

nt

unj

(n 1)t
(n 2)t
jh
(j 2)h
(j + 2)h
(j 1)h
(j + 1)h
Figure 3.3: A graphical representation corresponding to the stencil used to approximate the temporal derivative as forward Euler.
(n + 2)t
(n + 1)t
nt

unj1 unj

unj+1

(n 1)t
(n 2)t
jh
(j 2)h
(j + 2)h
(j 1)h
(j + 1)h
Figure 3.4: A graphical representation corresponding to the stencil used to approximate the spatial derivative
as centered difference.
The forward difference:

D+ =

u
x

u
x

=
i

ui+1 ui
h

h
2

ui ui1
h
+
h
2

2u
x2

2u
x2

h2
6

h2
6

3u
x3

3u
x3

+ O(h3 )

(3.20)

+ O(h3 )

(3.21)

The backward difference:



D =

=
i

and the central difference for first order derivative has an error of O(h2 ):
 


1
u
ui+1 ui1
h2 3 u
(D+ + D ) =
=

+ O(h3 )
2
x i
2h
6 x3 i

(3.22)

Second order derivative:


1
(D+ D ) =
h

2u
x2


=
i

ui+1 2ui + ui1


+ O(h2 )
h2

3.4

(3.23)

Version of September 1, 2015

Finite difference discretization of advection-diffusion equation

3.5

But what happens at the boundaries of the domain?


One sided differences (useful for boundary conditions treatment):
 
ui + ui+1 + ui+2
u

x
h

(3.24)

and using





h3 3 u
h2 2 u
u
+
+ O(h4 )
+
ui+1 = ui + h
x i
2 x2 i
6 x3 i
 




u
(2h)3 3 u
(2h)2 2 u
ui+2 = ui + 2h
+
+ O(h4 )
+
x i
2
x2 i
6
x3 i


(3.25)

(3.26)

thus:
ui + ui+1 + ui+2
++
=
ui + ( + 2)
h
h

u
x

h
+ ( + 4)
2
i

2u
x2

+ O(h2 )

(3.27)

that is second order accurate if


++ =0

(3.28)

+ 2 = 1

(3.29)

+ 4 = 0

(3.30)

that gives = 32 , = 2 and = 12


 
3ui + 4ui+1 ui+2
u
=
+ O(h2 )
x i
2h

(3.31)

and similarly for the second order derivative:


 2 
u
ui + ui+1 + ui+2

x2 i
h2
thus:
ui + ui+1 + ui+2
++
+ 2
=
ui +
h2
h2
h

u
x

+ 4
+
2
i

(3.32)

2u
x2


+ O(h)

(3.33)

This is first order accurate if

that gives = 1, = 2 and = 1




2u
x2

++ =0

(3.34)

+ 2 = 0

(3.35)

+ 4 = 2

(3.36)


=
i

ui 2ui+1 + ui+2
+ O(h)
h2

(3.37)

Higher order approximations

3.5

Version of September 1, 2015

Consistency of a numerical scheme

3.6


u
2ui+1 + 3ui 6ui1 + ui2
=
+ O(h3 )
x i
6h
 
u
ui+2 + 6ui+1 3ui 2ui1
=
+ O(h3 )
x i
6h
 
ui+2 + 8ui+1 8ui1 + ui2
u
=
+ O(h4 )
x i
12h


3.3

2u
x2


=
i

ui+2 + 16ui+1 30ui 16ui1 + ui2


+ O(h4 )
12h2

(3.38)
(3.39)
(3.40)
(3.41)

Consistency of a numerical scheme

This section closely follows: Strikwerda Finite Difference Schemes and Partial Differential Equations Section
1.4
An important concept characterizing a finite difference scheme is the so-called consistency.
We say that, given a partial differential equation, P u = f , and a finite difference scheme, Pt,h = f , the
finite difference scheme is consistent with the partial differential equation if for any smooth function (t, x)
n

P |j Pt,h 0, as t, h 0,
the convergence being point-wise convergence at each point (t, x).
Note that for some schemes, we may have to restrict the manner in which t and h tend to zero in order
for it to be consistent. We demonstrate this definition with the following examples, where the consistency
of two numerical schemes, which will be later described in details, are introduced.
Example 3.1: Consistency of the Forward-Time Forward-Space Scheme for LAE
For the linear advection equation
u
u
+a
=0
t
x
the operator P is
is given by

+ a x
. The difference operator Pt,h for the forward-time forward-space scheme

n+1
nj
nj+1 nj
j
+a
,
t
h
where nj = (nt, jh). We begin with the Taylor series of the function in t and x about the point
(nt, jh). We have that
Pt,h =

1
n+1
= nj + tt + t2 tt + O(t3 ),
j
2
1
nj+1 = nj + hx + h2 xx + O(h3 ),
2

(3.42)
(3.43)

where the derivatives on the RHS are evaluated at (nt, jh), and so
1
1
Pt,h = t + ax + ttt + ahxx + O(t2 ) + O(h2 ).
2
2

3.6

Version of September 1, 2015

Semi-discretization and method of lines

3.7

Thus
1
1
n
P |j Pt,h = ttt ahxx + O(t2 ) + O(h2 )
2
2
0 as (t, h) 0
and therefore, according to the definition, this scheme is consistent with the given partial differential
equation.

Example 3.2: The Lax-Friedrichs scheme for LAE


The difference operator for Lax-Friedrichs scheme is given by
Pt,h =

n+1
12 (nj+1 + nj1 )
nj+1 nj1
j
+a
t
2h

Using the Taylor series, we have


1
1
nj1 = nj hx + h2 xx h3 xxx + O(h4 ),
2
6
where, again, the derivatives are evaluated at (nt, jh), and therefore we have
1
1 n
(
+ nj1 ) = nj + h2 xx + O(h4 )
2 j+1
2
and

nj+1 nj1
1
= x + h2 xxx + O(h4 ).
2h
6
Substituting these expressions and Eq. (3.42) in the scheme, we obtain
1
1
1
Pt,h = t + ax + ttt t1 h2 xx + ah2 xxx + O(h4 + t1 h4 + t2 ).
2
2
6
n

So Pt,h P |j 0 as h, t 0. Note, however, that it is consistent as long as t1 h2 also


tends to 0, so we say that this scheme is conditionally consistent.

Note also that consistency is a necessary condition for a scheme to be convergent, but it is not
a sufficient condition. We will later determine under which conditions a scheme will be convergent.

3.4

Semi-discretization and method of lines

This section closely follows: RMM Mattheij et al. Section 5.5.1


A PDE can be converted in a system of ODEs by discretizing all directions except one.
We provide an example based on the diffusion equation:

2 u(x, t)
u(x, t) =
t
x2

3.7

(3.44)

Version of September 1, 2015

Stability

3.8

By finite difference approximation of the 1d Laplacian and writing uj (t) for u(xj , t)
duj (t)
uj+1 (t) 2uj (t) + uj1 (t)
=
dt
h2
which is a system of ODEs evolved along the lines of discretization of the equation.
This system can be written in matrix form as follow:
du
= Au
dt
where

A=

1
h2

(3.45)

(3.46)

1
...
1

1
...
1

(3.47)

Based on this discretization the ODEs can be approximated numerically based on the standard methods for
ODEs.

3.5

Stability

This section closely follows: RMM Mattheij et al. Section 5.5.2


Clearly one would like to employ the largest possible time step within the requirements of stability and
accuracy.
Stability for ODEs

Figure 3.5: The stability of the ODE: for time step t 2 102 , the forward Euler method of integration
becomes unstable. Made with MATLAB script ODE_stability.m

3.8

Version of September 1, 2015

Stability

3.9

We consider one of the simplest example of numerical integration of the following ODE with initial condition
f (0) = 1:
df
= f
(3.48)
dt
that has the exact exponential solution f (t) = e(t) which is monotone decreasing. By using forward Euler
discretization scheme for the time derivative:
f n+1 = f n f n t = (1 t)f n .

(3.49)

n+1

f


fn 1

(3.50)

From the above relation we see that:

only if one chooses t 2.


We notice that the discretized equation has an exact solution:
f n+1 = (1 t)f n = (1 t)n f 1

(3.51)

from which we see that f n oscillates unless t 1.


We consider now the case of backward Euler discretization for the derivative:

from which

f n+1 = f n f n+1 t

(3.52)

f n+1
1
=
fn
(1 + t)

(3.53)

n+1
f



fn 1

(3.54)

and thus the condition

is always satisfied for any t.


Example 3.3: Occurrence of instability for an ODE
Consider the initial value problem


1
du
= 100 u cos t +
sin t
dt
100

(3.55)

u(0) = 1.

(3.56)



1
un+1 = un 100t un cos tn +
sin tn
100

(3.57)

with initial condition given by


Using the forward Euler method, one finds

In Figure 3.5, the numerical solution is implemented with different time steps t. It can be shown
that the stability requirement is given by
|1 100t| 1
and, therefore, for t > 2.00 102 , instability occurs.

3.9

Version of September 1, 2015

Stability analysis: von Neumann method

3.6

3.10

Stability analysis: von Neumann method

This section closely follows: G. Tryggvason - CFD course - Web Lecture Notes - Lecture 3
Here we illustrate one of the possible methods to investigate the stability of a scheme for solving PDEs. The
basic idea is to check that a small perturbation around the solution will not grow in time.
Thus, in order to study the stability of the numerical method we study the evolution of a small perturbation
around the solution unj :
unj unj + unj

(3.58)

and in particular we focus on the evolution of the amplitude of the Fourier modes associated to the perturbation. In order to do that we can simply Fourier transform the perturbation:

unj = u(xj )n =

dn eikxj
u
k

(3.59)

k=

For the sake of simplicity we will consider here the evolution of a monochromatic perturbation, i.e. the
evolution of a wave with a single wavenumber:
dn eikxj
unj = u
k

(3.60)

and we use the same expression to evaluate the perturbation at the nearby points involved in the stencil:
dn eikxj+1 = u
dn eikxj eikh
unj+1 = u
k
k

(3.61)

dn eikxj1 = u
dn eikxj eikh
unj1 = u
k
k

(3.62)

and substituting into the FTCS difference equation of perturbations


un+1
unj
unj+1 unj1
unj+1 2unj + unj1
j
+v
=D
t
2h
h2

(3.63)

The equation for the error, in Fourier amplitudes, reduces to:


n+1
\
dn
dn
dn


u
u
u
u
k
k
+ v k eihk eihk = D 2k eihk 2 + eihk
t
2h
h

(3.64)

and the ratio of the amplitudes at two successive times is:


n+1
\
 Dt

u
vt ikh
k
=1
e eikh + 2 eikh 2 + eikh =
dn
2h
h
u

(3.65)

vt
Dt
i sin kh + 2 2 (cos kh 1) =
h
h
Dt
kh
vt
1 4 2 sin2
i
sin kh
h
2
h

and stability requires that the amplification factor G




\

un+1

G = k 1
dn
u

(3.66)
(3.67)

(3.68)

Here we can consider separately two cases:


3.10

Version of September 1, 2015

Stability analysis: von Neumann method

3.11

Diffusion equation This corresponds to the case where v = 0. The stability condition simplifies to:



\


un+1
Dt
kh
G = k = 1 4 2 sin2
dn
h
2
u

(3.69)

The amplification of the error is 1 if the following condition is met:


1 1 4

Dt
1
h2

(3.70)

that is
Dt
1

2
h
2

(3.71)

Advection equation This corresponds to the case where D = 0. The stability condition is:



\


un+1

vt
k



sin kh
G=
= 1 i

dn
h
u

(3.72)

and the absolute value of the complex number is always larger than unity impliying that the method
is unconditionally unstable for the integration of the advection equation.
Advection-diffusion equation For the general case we report only the result of the stability condition
that requires extra analysis.
Dt
1

(3.73)
2
h
2
and, simultaneously,
v 2 t
2
(3.74)
D
In the case of two- or three-dimensional problems similar type of analysis can be conducted where the
amplitude of the perturbation is now expressed in terms of two- or three-dimensional Fourier amplitudes:
i(kj xj +kl xl )
n
\
unj,l = u
kj ,kl e

(3.75)

The stability condition for the advection-diffusion equation solved via FTCS scheme reads:
In 2d 1
Dt

h2
4
In 3d Dt
1

2
h
6

(|vx | + |vy |)2 t


4
D

(3.76)

(|vx | + |vy | + |vz |)2 t


8
D

(3.77)

and

and

3.11

Version of September 1, 2015

Well posed problem

3.7

3.12

Well posed problem

This section closely follows: RMM Mattheij et al. Section 2.6


Hadamard definition of well posed problem includes the following three conditions to be satisifed:
A solution exists
The solution is unique
The solution depends continuosly on the initial conditions
Problems not well posed in the sense of Hadamard are called ill-posed.

3.8

Convergence of the finite-difference equation

A very important problem, besides the stability of the method, is the issue of the convergence of the finitedifference approximation to the solution of the continuum PDE.
Lax equivalence theorem - It states that given a well posed inital value problem and a finite difference
approximation that satisfies the consistency condition then stability is the necessary and sufficient
condition for convergence to the solution of the problem.

3.8.1

Dissipation and dispersion of a numerical scheme

This section closely follows: Trefethen 1994 Finite difference and Spectral methods for ODE and PDE
Dispersion relation
Any time-dependent scalar and linear PDE with constant coefficients on an unbounded domain admits plane
wave solutions
u(x, t) = ei(kx+t)
(3.78)
where k is the wave number and the frequency. The PDE imposes a relation between the possible
values of k and :
= (k)
(3.79)
This relation is known as dispersion relation (NB: in general for a given k more than one frequency
may be admissible. The numbers of possible values of depend upon the order of the PDE, thus we talk of
a dispersion relation and not of a function). Considering k real, may be real or complex, according to the
PDE.
Here below a short list of dispersion relation for common and basic PDEs:
ut = ux

=k

(3.80)

utt = uxx

= k = k

(3.81)

ut = uxx

i = k 2

(3.82)

ut = iuxx

(3.83)

= k

All these relations are plotted in Figures 3.6 (left panel).


The discrete, finite difference approximation of the PDEs admits also plane wave solutions, with their own
dispersion relations.

3.12

Version of September 1, 2015

Convergence of the finite-difference equation

3.13

ut=0u

ut=ux

2
2

0
k
utt=uxx

2
3

0
k
ut=uxx

0
k
utt=xu

0
k
ut=xu

0
k
ut=ixu

0
k

2
0
2

0
2
4

8
2

0
k
ut=iuxx

2
0
2

0
2
4

0
k

Figure 3.6: (Left panel) Dispersion relations for the four PDEs (3.80) (3.83); (Right panel) Dispersion
relation for the finite difference approximation of the PDEs, as from (3.84) (3.87). The dotted black line
corresponds to the dispersion relation for the original PDEs. Note that with the symbols 0 and xx we
identify the finite difference representation of the first derivative with respect to x and the second derivative
with respect to x, respectively. Made with MATLAB script dispersion_relation.m

3.13

Version of September 1, 2015

Convergence of the finite-difference equation

3.14
u =u

Continuum
nd

2
2

order

th

4 order

6th order
0

2
3

0
k
ut=iuxx

3
Continuum
2nd order

th

4 order

6th order

2
4
6
8
3

0
k

Figure 3.7: Dispersion relation for the finite difference approximation of the PDEs at increasing the order of
the discretization of the space derivatives. Higher order discretizations match more closely the dispersion relation for the original PDEs at small wavenumbers k. Made with MATLAB script dispersion_relation_2.m.

Discretizing, for simplicity, only the spatial derivative:


uj+1 uj1
2h
uj+1 2uj + uj1
utt =
h2
uj+1 2uj + uj1
ut =
h2
uj+1 2uj + uj1
ut = i
h2
ut =

1
h

sin kh

(3.84)

2 =

4
h2

sin2

(3.85)

kh
2

i = h42 sin2

kh
2

(3.86)

= h42 sin2

kh
2

(3.87)

All these formulas are readily obtained by substituting the plane wave (3.78) into the finite difference
discretization with x = xj . The behaviour of the dispersion relations for the finite difference case is plotted
in figures 3.6 (Right panel) and, as it can be seen, deviate from the ones obtained for the PDEs. A general
remark that can be made is that the dispersion relation for the discretized PDE approximate well the one
of the PDEs for small k while deviations are always visible for larger wavenumbers k.
Unless other requirements impose differently, one would clearly like to have a discretization that approximates
the dispersion relation of the continuum PDE to the higher degree as possible. In Figure 3.7 it is shown that
increasing the order of the discretization of the space derivatives allows to obtain dispersion relations that
better match with the dispersion relation of the PDEs.

3.14

Version of September 1, 2015

Convergence of the finite-difference equation

3.15

Here we give a few examples of dispersion relations for the PDE ut = ux with = t/h = 0.5.
BTCS
Crank-Nicolson
Leap Frog
Lax-Wendroff

i(1 eit ) = sin kh


t
2 tan
= sin kh
2
sin t = sin kh
i(eit 1) = sin kh + 2i2 sin2

kh
2

For the discussion we consider three operators:


u
u
+U
t
x
u
2u
u
+U
D 2
L2 u =
t
x
x
u
u
3u
L3 u =
+U
3
t
x
x
L1 u =

(3.88)
(3.89)
(3.90)

These linear operators generates, respectively, the following three PDEs:


L1 u = 0

(3.91)

L2 u = 0

(3.92)

L3 u = 0

(3.93)

and we try a solution in the form


u(x, y) = a exp (i(kx t))

(3.94)

It can be seen that this wavelike solution is an eigenfunction of the three linear PDEs:
L1 u(x, y) = (i + ikU )u = 1 u

(3.95)

L2 u(x, y) = (i + ikU D(ik)2 )u = 2 u

(3.96)

L3 u(x, y) = (i + ikU (ik) )u = 3 u

(3.97)

From these relations it follows that the eigenfunction is a solution of the PDEs Lu = 0 if and k are such
that they satisfy (, k) = 0.
The equation (, k) = 0 is called the dispersion relation. For the case of the three PDEs above this
corresponds to:
(k) = U k

(3.98)

(3.99)
(3.100)

(k) = U k iDk
(k) = U k + k 3

(3.101)
As said (k), in general, has the form
(k) = (k) + i(k)

(3.102)

where (k) = Re((k)) and (k) = Im((k)).


The solution in the form (3.78) is then:
u(x, t) =a exp (i(kx (k)t i(k)t) =
a exp ((k)t) exp i(kx (k)t)
From the expression before one can clearly see that the (k) determines the speed of the wave while (k)
influences the behaviour of the amplitude of the wave.
3.15

Version of September 1, 2015

The concept of modified equation

3.16

The phase velocity of the wave is


uph =

Re((k))
(k)
=
k
k

(3.103)

We consider now three operators given by:


u
u
+U
t
x
u
u
2u
L2 u =
+U
D 2
t
x
x
u
u
3u
L3 u =
+U
3
t
x
x
L1 u =

(3.104)
(3.105)
(3.106)

These linear operators generates, respectively, the following three PDEs:


L1 u = 0

(3.107)

L2 u = 0

(3.108)

L3 u = 0

(3.109)

For these operators, the phase velocity is


Uk
=U
k
L2 : k 1 Re(U k iDk) = U
L1 :

L3 : k

(U k + k ) = U + k

(3.110)
(3.111)

(3.112)

And thus in the case L3 the solution presents dispersion as waves with different wavelength travel with
different velocity.
For what concerns the amplitude the situation is as follow:
L1 : (k) = 0

(3.113)

L2 : (k) = Dk 2

(3.114)

L3 : (k) = 0

(3.115)

and thus in the case L2 the amplitude present dissipation and it decays as exp(Dk 2 t).
Summarizing:
Given a linear homogeneous PDE operator we call it dissipative if and only if Im((k)) < 0 and dispersive
if and only if Re((k)) is not linear in k.

3.9

The concept of modified equation

This section closely follows: G. Tryggvason - CFD course - Web Lecture Notes
The numerical approximation of the derivatives introduces error terms in the equation. However, the numerical scheme can be interpreted as an exact discretization not of the original equation but of another
modified equation different from the original equation that we want to solve. The structure of the terms
in the difference between the modified and the original equation can teach us much about the properties of
the numerical approximation.

3.16

Version of September 1, 2015

The concept of modified equation

3.17

The starting point is again the Taylor expansion:


u
2 u t2
3 u t3
t + 2
+ 3
+ O(t4 )
t
t 2
t 6
2 u h2
3 u h3
u
h+
+
+ O(h4 )
= unj +
x
x2 2
x3 6
2 u h2
3 u h3
u
h+

+ O(h4 )
= unj
2
x
x 2
x3 6

un+1
= unj +
j
unj+1
unj1

(3.116)
(3.117)
(3.118)

Example 3.4: Modified equation for upwind scheme


As an example we look at the modified equation corresponding to the upwinding of the first order
wave equation:
u
u
+U
=0
t
x

(3.119)


un+1
unj
U n
j
+
uj unj1 = 0
(3.120)
t
h
Using Taylor expansions given by (3.116) (3.118) and inserting them into (3.120), we obtain
u
t
Uh
t2
U h2
u
+U
= utt +
uxx
uttt
uxxx + O(h3 ) + O(t3 )
t
x
2
2
6
6
and use the PDE to write all time derivatives in terms of space derivatives
t
Uh
t2
U h2
uttt +
uxxt
utttt
uxxxt + . . .
2
2
6
6
U t
U 2h
U t2
U 2 h2
=
uttx
uxxx +
utttx +
uxxxx + . . .
2
2
6
6

utt + U uxt =
U uxt U 2 uxx

(3.121)

(3.122)
(3.123)

Summing up, one gets:





uttt
U
U
U2
utt = U uxx + t
+ uttx + O(t) + h
uxxt
uxxx + O(h)
2
2
2
2
2

uttt = U 3 uxxx + O(t, h)

(3.124)
(3.125)

uttx = U uxxx + O(t, h)

(3.126)

uxxt = U uxxx + O(t, h)

(3.127)

And the final modified equation is:



u
u
Uh
U h2
+U
=
(1 )uxx
22 3 + 1 uxxx + O(h3 , h2 t, ht2 , t3 )
(3.128)
t
x
2
6
and the sign of the second order term should satisfy the CFL condition (to be discussed later):
=

U t
<1
h

(3.129)

For the Lax-Wendroff scheme:




u
u
U h2
U h3
+U
=
1 2 uxxx
1 2 uxxxx
t
x
2
8
Note that the modified equation can be also called the equivalent equation.

3.17

(3.130)

Version of September 1, 2015

The concept of modified equation

3.18

The dissipative and dispersive nature of the first-order and second-order methods, respectively, are visible in
Figure 3.8. In fact, for the upwind method in the final modified equation, a term proportional to the second
derivative, responsible for the dissipation of the scheme, appears, while for the Lax-Wendroff scheme, a term
proportional to the third derivative, responsible for the dispersivity of the scheme, is present.

3.9.1

Artificial viscosity

The wiggling oscillation can be damped out by the addition of an artificial viscosity.
Here we consider the equation in conservative form
u F
+
=0
t
x

(3.131)

where F = U u. The artificial viscosity can be added by modifying the flux as:
F0 = F

u
x

(3.132)

where = Dh2 | u
x | thus
u F

+
=
t
x
x







u

2 u u

=
Dh
x
x
x x

(3.133)

The artificial viscosity produces effects similar to physical viscosity on a scale of the order of the grid scale.
It particularly acts across discontituities and is negligible elsewhere. The h2 ensures that the viscous term
remains higher order.
This is visible looking at Figure 3.9 where the upwind scheme is compared with the simple Lax-Wendroff
scheme (D = 0) and with Lax-Wendroff scheme where the artificial diffusive term is added (D = 0.5). Note
that the effect of this additional term is concentrated in proximity of the discontinuity.

Figure 3.8: Comparison between analytical, dissipative (Upwind) and dispersive (Lax-Wendroff) numerical scheme for h = 0.1, t = 0.25 h, n = 81 and at t = 4.00. Made with MATLAB script
shock_triangular_modified_equation.m

3.18

Version of September 1, 2015

The concept of modified equation

3.19

Figure 3.9: Comparison between analytical, upwind and (Lax-Wendroff) numerical scheme for D = 0 (classical Lax-Wendroff scheme) and with the adding of artificial viscosity D = 0.5 with h = 0.0125, t = 0.25 h,
n = 321 and at t = 2.75. Made with MATLAB script artificial_viscosity.m

3.19

Version of September 1, 2015

Lecture 4

Finite difference methods


Contents
4.1
4.2

Finite-difference approximation of elliptic equations . . .


Finite-difference approximation of hyperbolic equations .
4.2.1 The Explicit FTCS or Forward Euler method . . . . . . .
4.2.2 The Upwind scheme . . . . . . . . . . . . . . . . . . . . . .
4.2.3 Generalities over the CFL condition . . . . . . . . . . . . .
4.2.4 The Implicit BTCS or Backward Euler method . . . . . .
4.2.5 The Lax-Friedrichs method . . . . . . . . . . . . . . . . . .
4.2.6 The Leap-Frog second order method . . . . . . . . . . . . .
4.2.7 The Lax-Wendroff method (I) . . . . . . . . . . . . . . . . .
4.2.8 The Lax-Wendroff method (II) . . . . . . . . . . . . . . . .
4.2.9 The MacCormack method . . . . . . . . . . . . . . . . . . .
4.2.10 The Second order upwind scheme: Beam-Warming method
4.2.11 Discontinuos solutions and shocks Optional . . . . . . .
4.2.12 Advection of a shock with different schemes . . . . . . . . .
4.3 Finite-difference approximation of parabolic equations . .
4.3.1 The Explicit FTCS Euler method . . . . . . . . . . . . .
4.3.2 The Implicit BTCS Backward Euler method . . . . . . .
4.3.3 The Crank-Nicolson method . . . . . . . . . . . . . . . . . .
4.3.4 The method . . . . . . . . . . . . . . . . . . . . . . . . . .
4.3.5 The Richardson method . . . . . . . . . . . . . . . . . . . .
4.3.6 The DuFort-Frankel method . . . . . . . . . . . . . . . . . .

4.0

. . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .
. . . . . . . . . . .

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

4.1
4.4
4.5
4.6
4.7
4.9
4.10
4.11
4.12
4.13
4.14
4.15
4.17
4.18
4.23
4.23
4.24
4.25
4.25
4.26
4.26

Finite-difference approximation of elliptic equations

4.1

4.1

Finite-difference approximation of elliptic equations

This section closely follows: G. Tryggvason - CFD course - Web Lecture Notes
We start considering the case of 2D Poisson equation:
2u 2u
+ 2 =S
x2
y

(4.1)

on a two dimensional mesh with spacing h. Applying central difference for the discretization:
ui+1,j 2ui,j + ui1,j
ui,j+1 2ui,j + ui,j1
+
= Si,j
h2
h2

(4.2)

This equation can be rewritten in terms of ui,j as a function of neighbouring values:


ui,j =


1
ui+1,j + ui1,j + ui,j+1 + ui,j1 h2 Si,j
4

(4.3)

This expression can be taken as a starting point for iterative (relaxation) solution methods (note that here
n + 1 denotes new values and n denotes old values):
Jacobi
un+1
i,j =


1 n
ui+1,j + uni1,j + uni,j+1 + uni,j1 h2 Si,j
4

(4.4)

To accelerate the convergence of the method,


Gauss-Seidel


1 n
n+1
n
2
ui+1,j + un+1
(4.5)
i1,j + ui,j+1 + ui,j1 h Si,j
4
Note that in Gauss-Seidel method, new values are used as soon as they become available, see terms
n+1
un+1
i1,j and ui,j1 in RHS of Eq.(4.5).
Notice also that the iterative procedure is even simpler than Jacobi.
un+1
i,j =

Successive Over-Relaxation (SOR)


un+1
i,j =


 n
n+1
n
2
n
ui+1,j + un+1
i1,j + ui,j+1 + ui,j1 h Si,j + (1 )ui,j
4

(4.6)

where 1 < < 2.


The iterative procedure must be continued until sufficient convergence is achieved. A typical quantity that
can be employed to monitor the degree of convergence is the residual:
ui+1,j + ui1,j + ui,j1 + ui,j+1 4ui,j
Sij
h2
that at steady state should be zero.
The residual can be seen as a measure of degree of mismatch of discretized version of Eq. (4.1)
In Figure 4.1, the application of iterative procedure is demonstrated when solving Laplace equation
Ri,j =

(4.7)

2u 2u
+ 2 =0
x2
y

(4.8)

u(x = 0, 0.5 y 1.5) = 1 and u(x, y) = 0 everywhere else

(4.9)

subject to Dirichlet boundary conditions

4.1

Version of September 1, 2015

Finite-difference approximation of elliptic equations

4.2

Figure 4.1: Solution via SOR iterative procedure of Laplace equation with boundary conditions given by
(4.9) for = 1.5. Made with MATLAB script SOR_Laplace2D.m

Method
Gauss-Seidel
SOR
SOR
SOR
SOR
SOR

1.0
1.2
1.5
1.7
1.9
1.95

Iterations for
1584
1051
518
265
99
202

i,j

Ri,j /(M N ) 103

Table 4.1: Evaluation of number of iterations needed to reach required average residual for Gauss-Seidel and
SOR methods for the solution of eq.(4.8) under boundary conditions given by eq.(4.9). Note that M and N
are the numbers of grid nodes along x and y, respectively. In this case M = N = 50.

4.2

Version of September 1, 2015

Finite-difference approximation of elliptic equations

4.3

and thus with source term S = 0.


In Table (4.1), different methods for numerical solution of elliptic equation (4.8) for Dirichlet boundary
condition given by (4.9)
It is interesting to observe that the Jacobi iterative method can be obtained via the discretization of an
evolutionary parabolic problem:
2u 2u
u
+ 2
=
t
x2
y

(4.10)

n
un+1
uni+1,j + uni1,j + uni,j+1 + uni,j1 4uni,j
i,j ui,j
=
t
h2

(4.11)

and discretizing with finite difference:

that is equivalent to:


un+1
i,j =


14

t
h2

uni,j +

t
h2

(uni+1,j + uni1,j + uni,j+1 + uni,j1 )

(4.12)

and by selecting the maximum time step:


1
t
=
h2
4

(4.13)

one obtains the Jacobi scheme:


un+1
i,j =


1 n
ui+1,j + uni1,j + uni,j+1 + uni,j1 h2 Si,j
4

4.3

(4.14)

Version of September 1, 2015

Finite-difference approximation of hyperbolic equations

4.2

4.4

Finite-difference approximation of hyperbolic equations

This section closely follows: G. Tryggvason - CFD course - Web Lecture Notes
In this section, numerical schemes for hyperbolic equations, along with their accuracy and stability properties, are presented.
Application of some of them to numerical solution of both linear (Linear Advection Equation) and non-linear
(inviscid Burgers equation) first-order hyperbolic equations are illustrated.
In particular, dissipative (for first-order accurate in space methods) or dispersive (for second-order accurate
in space methods) nature of different schemes is demonstrated through advection of a discontinuos function,
which is a solution of the linear advection equation.
Finally, a numerical scheme for second-order linear hyperbolic equations is presented and applied to the
classical wave equation.
A typical example of hyperbolic equation is the second-order wave equation:
2u
2u
c2 2 = 0
(4.15)
2
t
x
This equation can be rewritten as a system of two first-oder PDEs. Thus, for the sake of simplicity we will
focus firstly on the first order equation:
u
u
+U
=0
t
x
where the analytic solution is easily obtained by solving the equations for the characteristics:
dx
dt
du
dt

(4.16)

=U

(4.17)

=0

(4.18)

4.4

Version of September 1, 2015

Finite-difference approximation of hyperbolic equations

4.2.1

4.5

The Explicit FTCS or Forward Euler method

The equation is discretized with forward in time and central difference scheme in space:
un+1
= unj
j

U t n
(uj+1 unj1 )
2h

(4.19)

the scheme is O(t, h2 ) accurate but from von Neumann stability:


G=1i

U t
sin kh
2h

(4.20)

and thus it is unconditionally unstable. Here and in the following we will indicate with =
Courant number .

U t
h

the

(n + 2)t
(n + 1)t
nt

un+1
j
unj1 unj

unj+1

(n 1)t
(n 2)t
jh
(j 2)h
(j + 2)h
(j 1)h
(j + 1)h
Figure 4.2: A graphical representation corresponding to the stencil used to approximate the temporal
derivative as forward Euler and the spatial derivative as centered difference: FTCS scheme.

4.5

Version of September 1, 2015

Finite-difference approximation of hyperbolic equations

4.2.2

4.6

The Upwind scheme

The upwind scheme is similar to the previous scheme but uses directional (forward) derivatives in space.
Time is discretized in the same forward difference manner.
t
U (unj unj1 )
(4.21)
h
It must be noticed that the upwind or downwind nature of the scheme depends on the local sign of the
advecting velocity U and thus one may refer to the following generalized upwind scheme that reads:
un+1
= unj
j

t
U (unj unj1 )
h
t
= unj
U (unj+1 unj )
h

un+1
= unj
j

if U > 0

(4.22)

un+1
j

if U < 0

(4.23)

By defining
U+ =

1
(U + |U |)
2

(4.24)

U =

1
(U |U |)
2

(4.25)

and

the two different cases can be written as:


un+1
= unj
j


t  + n
U (uj unj1 ) + U (unj+1 unj )
h

(4.26)

Substituting U :

t n
|U |t n
(uj+1 unj1 ) +
uj+1 2unj + unj1
(4.27)
2h
2h
that is displaying a clear structure with a central difference term and a numerical viscosity term proportional to the numerical viscosity
un+1
= unj U
j

|U |h
2
The equation for the growth of a small perturbation reads:
Dnum =

un+1
unj
U
j
+ (unj unj1 ) = 0
t
h
and testing the stability of a wave perturbation with frequency k, i.e. writing
unj = u
n eikxj

(4.28)

(4.29)

(4.30)

the amplification factor for the error becomes


G=

u
n+1
= 1 (1 eikh )
u
n

(4.31)

with = Uht .
For the stability we are interested in knowing when |G| < 1 (see the section on the stability analysis via the
von Neumann method).

4.6

Version of September 1, 2015

Finite-difference approximation of hyperbolic equations

4.7

The stability conditon can be assessed graphically by the construction in Figure 4.5 and it corresponds to
the conditions < 1 or equivalently
U t
1
(4.32)
h
This condition is called CFL condition and it was first introduced by Courant, Friedrichs and Lewy.
Another way to derive the stability condition is via direct algebra.
G = 1 + eikh = 1 + cos kh i sin kh

(4.33)

|G|2 = (1 + cos kh)2 + 2 sin2 kh =

(4.34)

and thus

(1 ) + 2(1 ) cos kh + cos kh + sin kh =


2

(4.35)

(1 ) + 2(1 ) cos kh + =

(4.36)

1 2 + 22 + 2(1 ) cos kh

(4.37)

and
|G|2 1

if 1

(4.38)

(n + 2)t
(n + 1)t
nt

un+1
j
unj1 unj

(n 1)t
(n 2)t
jh
(j 2)h
(j + 2)h
(j 1)h
(j + 1)h
Figure 4.3: A graphical representation corresponding to the stencil used in the upwind scheme.
Remarks: The upwind is exceptionally robust but low accuracy in space and time thus unsuitable for most
applications.

4.2.3

Generalities over the CFL condition

The CFL condition is a necessary condition for stability of explicit hyperbolic schemes and it has a rather
simple physical interpretation. It amounts to the requirement that the numerical domain of dependence
contains the physical domain of dependence. In other words the numerical speed of propagation must
be greater or equal to the speed of propagation of the PDE. This condition can be interpreted as providing
a limitation on the time step.
A consequence of this is that: There are no explicit, unconditionally stable and consistent finite
difference schemes for hyperbolic systems.

4.7

Version of September 1, 2015

Finite-difference approximation of hyperbolic equations

4.8

37~MQ7fl/ ~1~VMC77Y
J-QFI
Figure 4.4: The geometrical interpretation of the CFL condition for the upwind scheme. The characteristics
of the equation must travel slower than one grid spacing per time step.

(-9) ~
Figure 4.5: A geometrical interpretation of the stability condition for the upwind scheme. The red circle
bounds values of amplification factor G for which the scheme is stable.

4.8

Version of September 1, 2015

Finite-difference approximation of hyperbolic equations

4.2.4

4.9

The Implicit BTCS or Backward Euler method

The discretized equation is:


un+1
unj
U n+1
j
+
(u
un+1
j1 ) = 0
t
2h j+1

(4.39)

This method is:


Unconditionally stable
1st order accurate in time, 2nd order in space
Form a tri-diagonal matrix
U n+1
1 n+1
U n+1
1 n
u
+
u

u
=
u
2h j+1
t j
2h j1
t j

(4.40)

n+1
aj uj+1
+ bj un+1
+ cj un+1
j
j1 = dj

(4.41)

or equivalently

(n + 2)t
(n + 1)t
nt

unj1 unj

(n 1)t

unj+1

un1
j

(n 2)t
jh
(j 2)h
(j + 2)h
(j 1)h
(j + 1)h
Figure 4.6: A graphical representation corresponding to the stencil used to approximate the derivatives of
the backward Euler central space scheme.

4.9

Version of September 1, 2015

Finite-difference approximation of hyperbolic equations

4.2.5

4.10

The Lax-Friedrichs method

The Lax scheme consists in splitting


unj

1 n
(u
+ unj1 )
2 j+1

(4.42)

and in the FTCS scheme this produces:


t n
1
t n
(uj+1 unj1 ) un+1
= (unj+1 + unj1 )
(u
unj1 )
j
2h
2
2h j+1
with the following characteristics:
un+1
= unj
j

(4.43)

stable for < 1


1st order accurate in time, 2nd order in space
Conditionally consistent: it is consistent only if h/t = const
FTCS (forward in time, centered in space) scheme with an artificial viscosity term of 1/2
The Lax-Friedrichs method can be seen an alternative to Godunovs scheme that avoids solving a
Riemann problem at the expense of adding artificial viscosity
The lowest order error terms for the scheme are:


U h2
Uh 1
uxx +
(1 2 )uxxx
2

(4.44)

(n + 2)t
(n + 1)t
nt

un+1
j
unj1 unj

unj+1

(n 1)t
(n 2)t
jh
(j 2)h
(j + 2)h
(j 1)h
(j + 1)h
Figure 4.7: A graphical representation corresponding to the stencil used in Lax-Friedrichs scheme.

4.10

Version of September 1, 2015

Finite-difference approximation of hyperbolic equations

4.2.6

4.11

The Leap-Frog second order method

One of the simplest second-order scheme (in time) is the following Leap-Frog method:
un+1
un1
u
j
j
=
+ O(t2 )
t
2t

U t n
uj+1 unj1
h
that corresponds to the following modified equation:

(4.45)

un+1
= un1

j
j

(4.46)

u
u
U h2 2
+U
=
( 1)uxxx + . . .
t
x
6

(4.47)

The scheme is:


Stable for || < 1
Dispersive, (no dissipation)
Initial conditions at two time levels
Oscillatory solution in time (alternating)
(n + 2)t
(n + 1)t
nt

un+1
j
unj1 unj

(n 1)t

unj+1

un1
j

(n 2)t
jh
(j 2)h
(j + 2)h
(j 1)h
(j + 1)h
Figure 4.8: Leap-Frog.

4.11

Version of September 1, 2015

Finite-difference approximation of hyperbolic equations

4.2.7

4.12

The Lax-Wendroff method (I)

By expanding the solution in time


u(x, t + t) = u(x, t) +

2 u t2
3 u t3
u
t + 2
+ 3
+ ...
t
t 2
t 6

(4.48)

and using
u
u
= U
t
x
one obtains

2
=
t2
t

u
t

=
t

(4.49)
u
2u
= U2 2
x t
x

(4.50)

u
2 u t2
t + U 2 2
+ O(t3 )
x
x 2

(4.51)

u
U
x


= U

and substituting in the time expansion of the solution:


u(x, t + t) = u(x, t) U

and using central difference approximation for the spatial derivatives:


un+1
= unj
j

 U 2 t2 n

U t n
uj+1 unj1 +
uj+1 2unj + unj1
2
2h
2h

(4.52)

This scheme is second order accurate in space and in time and stable for
U t
<1
h

(4.53)

(n + 2)t
(n + 1)t
nt

un+1
j
unj1 unj

unj+1

(n 1)t
(n 2)t
jh
(j 2)h
(j + 2)h
(j 1)h
(j + 1)h
Figure 4.9: A graphical representation corresponding to the stencil used to approximate the derivative in
the Lax-Wendroff (I) scheme.

4.12

Version of September 1, 2015

Finite-difference approximation of hyperbolic equations

4.2.8

4.13

The Lax-Wendroff method (II)

This is the two step Lax-Wendroff scheme and it is split in two steps:
Step 1: Lax
n+1/2

uj+1/2 (unj+1 + unj )/2


t/2

+U

unj+1 unj
=0
h

(4.54)

Step 2: Leapfrog
n+1/2

n+1/2

uj+1/2 uj1/2
un+1
unj
j
+U
=0
t
h

(4.55)

It is stable for

U t
<1
h
and second order accurate in time and space. For linear equations the LW-II is identical to LW-I.

(4.56)

(n + 2)t
(n + 1)t
nt

n+1/2
uj1/2

un+1
j
n+1/2
uj+1/2

unj1 unj

unj+1

(n 1)t
(n 2)t
jh
(j 2)h
(j + 2)h
(j 1)h
(j + 1)h
Figure 4.10: A graphical representation corresponding to the stencil used for the Lax-Wendroff (II) scheme.

4.13

Version of September 1, 2015

Finite-difference approximation of hyperbolic equations

4.2.9

4.14

The MacCormack method

This scheme is similar to the LW-II without the j + 1/2 and j 1/2.
The predictor step provides a provisional value for u based on forward difference:

t n
uj+1 unj
h

= unj U
un+1
j

(4.57)

Corrector, the predicted value is corrected by using backward difference (time step t/2):
n+1/2

un+1
= uj
j
n+1/2

and by replacing the uj


t  n+1
uj un+1
j1
2h

(4.58)

unj + un+1
j
2

(4.59)

term by its average:


n+1/2

uj

one obtains the corrector steps as:


un+1
=
j



1 n
t n+1
n+1
uj + un+1
(u

u
)
j
j1
2
h j

(4.60)

A fractional step method


Predictor: forward differencing
Corrector: backward differencing
For linear problems accuracy and stability are identical to LW-I
Well suited for non-linear equations (inviscid Burgers, Euler etc.)
Unlike upwind scheme it does not introduce diffusive errors. However introduces dispersive errors
(Gibbs phenomenon) in high-gradient regions.

(n + 2)t
un+1
j

(n + 1)t un
j1
nt

unj1

unj+1

(n 1)t
(n 2)t
jh
(j 2)h
(j + 2)h
(j 1)h
(j + 1)h
Figure 4.11: A graphical representation corresponding to the stencil used for the MacCormack scheme.

4.14

Version of September 1, 2015

Finite-difference approximation of hyperbolic equations

4.2.10

4.15

The Second order upwind scheme: Beam-Warming method

Warming and Beam (1975)


Predictor:
uj = unj U
Corrector:
un+1
=
j


t n
uj unj1
h

(4.61)





1 n
t
t n
uj + uj U
uj uj1 U
uj 2unj1 + unj2
2
h
h

(4.62)

and combining

 1
un+1
= unj unj unj1 + ( 1) unj 2unj1 + unj2
j
2

(4.63)

where = U t
h
The scheme is stable for 0 2
Second order accurate in time and space
(n + 2)t
(n + 1)t
nt

un+1
j
unj2 unj1 unj

(n 1)t
(n 2)t
jh
(j 2)h
(j + 2)h
(j 1)h
(j + 1)h
Figure 4.12: A graphical representation corresponding to the stencil used for the second order upwind method
(Beam-Warming)

(n + 2)t
(n + 1)t
nt

un+1
j
unj1 unj

(n 1)t
(n 2)t
jh
(j 2)h
(j + 2)h
(j 1)h
(j + 1)h
Figure 4.13: A graphical representation corresponding to the stencil used to approximate the temporal
derivative as forward euler.

4.15

Version of September 1, 2015

Finite-difference approximation of hyperbolic equations


ut + U ux = 0
FTCS

4.16

Numerical scheme
t
un+1
= unj U2h
(unj+1 unj1 )
j

Leading Error term


2
2
t U2 uxx U6h (1 + 22 )uxxx

Stability
Unconditionally
unstable

n
n
un+1
= unj U t
j
h (uj+1 uj )

Uh
2 (1
1)uxxx

)uxx + U6h (22 3 +

U 2 t
2 uxx

un+1
j
unj1 unj

unj+1

Upwind
un+1
j

Stable for 1

unj1 unj
BTCS
unj1

(Implicit)
unj

un+1
un
j
j
t

+U

n+1
un+1
j+1 uj1
2h

=0

1

2
6Uh


+ 13 U 3 t2 uxxx

Unconditionally
stable

U h2
3 (1

Conditionally
consistent,
stable for 1

unj+1

un1
j
n
(un
un+1
j+1 +uj1 )/2
j
+
t n
(un
j+1 uj1 )
+U
=0
2h

Lax-Friedrichs
un+1
j

Uh
2


uxx +

2 )uxxx

unj1unj unj+1
un1
un+1
j
j
2t

Leapfrog

+U

n
un
j+1 uj1
2h

U h2
2
6 (

=0

1)uxxx

Stable for 1

un+1
j
unj1unj unj+1
un1
j
Lax-Wendroff

(I)

un+1
j
unj1 unj

un
un+1
(un un )
j
j
+
U j+12h j1 +
t
(un 2un +un )
U 2 t2 j+1 2hj2 j1 = 0

U6h (12 )uxxx U8h (12 )uxxxx

Stable for 1

= 0

As LW-1

Stable for 1


unj U t
unj+1 unj
hh
i
n+1
1
n
=
+
2 uj + uj

As LW-1

Stable for 1

unj+1
n+1/2

Lax-Wendroff
n+1/2

uj1/2

un+1
j n+1/2
uj+1/2

unj1 unj
MacCormack
unj1

(II)

n
n
uj+1/2 (un
un
j+1 +uj )/2
j+1 uj
+
U
t/2
h
n+1/2
n+1/2
uj+1/2 uj1/2
un+1
un
j
j
+U
=0
t
h

unj+1
un+1
j

un+1
j

un+1
j
unj+1

Beam-Warming
un+1
j

U2

n+1
t
h (uj

un+1
un
j
j
t
U 2 t
n
2h2 (uj

un+1
j1 )
(3un 4un

+un

j1
j2
+ U j
2h
n
n
2uj1 + uj2 ) = 0

U h2
6 (1
2

)(2 )uxxx
) (2 )uxxxx

U h3
8 (1

Stable for
02

unj2 unj1 unj


4.16

Version of September 1, 2015

Finite-difference approximation of hyperbolic equations

4.2.11

4.17

Discontinuos solutions and shocks Optional

Here we consider the linear advection equation


u
u
+U
=0
t
x
whose solution can be easily obtained by the method of characteristics:

(4.64)

dx
=U
dt
du
=0
dt

(4.65)
(4.66)

Consider the inviscid Burgers equation


u
u
+u
=0
t
x

(4.67)

with Riemann condition



u(x, 0) =

uL
uR

x < x0 ,
x > x0 ,

(4.68)

where uL > uR . The characteristics are:


dx
=u
dt
du
=0
dt

(4.69)
(4.70)

For uL > uR , this produces a shock wave.


The following initial condition better highlights the formation of shocks:

x < a
uL

1
x
(u
+
u
)

(u

u
)
a
<x<a
u(x, 0) =
L
R
L
R a
2
uR
x>a

(4.71)

which is represented in Figure 4.14.


In order to determine the shock speed one can integrate the equation, written in conservative form across
the shock:
u F (u)
+
=0
(4.72)
t
x
and in characteristics
u F u
+
=0
t
u x

(4.73)

where
dt
=1
ds
dx
F
=
ds
u

(4.74)
(4.75)

or

dx
F
=
= F 0 (u)
(4.76)
dt
u
The Entropy condition corresponds to the requirement that the characteristics enter in the discontinuity and
thus the shock speed must be between
F 0 (uL ) > C > F 0 (uR )
4.17

(4.77)
Version of September 1, 2015

Finite-difference approximation of hyperbolic equations

4.18

-Yr

~0-~
Figure 4.14: Initial condition for eq.(4.67) given by (4.71).

4.2.12

Advection of a shock with different schemes

Consider again the linear advection equation


u
u
+U
=0
t
x

(4.78)

with boundary conditions



u(x, 0) =

1
0

x<0
x>0

(4.79)

In Figures from 4.15 to 4.18, the numerical solutions of eq. (4.78) with initial conditions (4.79) for different
numerical schemes are plotted for h = 0.10, t = 0.05, n = 41 and at t = 2.75, while for Figures from 4.19
to 4.22, numerical solutions for the same equation and initial conditions for h = 0.05, t = 0.025, n = 81
and at t = 2.75 are plotted. Note that the second-order methods tend to better capture the solution shape
(higher accuracy) but they show oscillating behaviour in proximity of the shock, while first-order methods
(e.g. upwind) are dissipative and less accurate but the solution preserves monotonicity (it does not oscillate).

4.18

Version of September 1, 2015

Finite-difference approximation of hyperbolic equations

4.19

Figure 4.15: Shock-like numerical solution of linear advection equation using the upwind scheme with h = 0.1,
dt = 0.05, n = 41 and at t = 2.75. Made with MATLAB script shock_advection.m.

Figure 4.16: Shock-like numerical solution of linear advection equation using the Lax-Wendroff (I) scheme
with h = 0.1, t = 0.05, n = 41 and at t = 2.75. Made with MATLAB script shock_advection.m.

4.19

Version of September 1, 2015

Finite-difference approximation of hyperbolic equations

4.20

Figure 4.17: Shock-like numerical solution of linear advection equation using Leapfrog scheme with h = 0.1,
t = 0.05, n = 41 and at t = 2.75. Made with MATLAB script shock_advection.m.

Figure 4.18: Shock-like numerical solution of linear advection equation using McCormack scheme with
h = 0.1, t = 0.05, n = 41 and at t = 2.75. Made with MATLAB script shock_advection.m.

4.20

Version of September 1, 2015

Finite-difference approximation of hyperbolic equations

4.21

Figure 4.19: Shock-like numerical solution of linear advection equation using upwind scheme with h = 0.05,
t = 0.025, n = 81 and at t = 2.75. Made with MATLAB script shock_advection.m.

Figure 4.20: Shock-like numerical solution of linear advection equation using Lax-Wendroff (I) scheme with
h = 0.05, t = 0.025, n = 81 and at t = 2.75. Made with MATLAB script shock_advection.m.

4.21

Version of September 1, 2015

Finite-difference approximation of hyperbolic equations

4.22

Figure 4.21: Shock-like numerical solution of linear advection equation using Leapfrog scheme with h = 0.05,
t = 0.025, n = 81 and at t = 2.75. Made with MATLAB script shock_advection.m.

Figure 4.22: Shock-like numerical solution of linear advection equation using McCormack scheme with
h = 0.05, t = 0.025, n = 81 and at t = 2.75. Made with MATLAB script shock_advection.m.

4.22

Version of September 1, 2015

Finite-difference approximation of parabolic equations

4.3

4.23

Finite-difference approximation of parabolic equations

This section closely follows: G. Tryggvason - CFD course - Web Lecture Notes
As a model equation we consider the diffusion equation:
u
2u
= 2
t
x
for t > 0 and in the interval a < x < b. This equation requires initial conditions:

(4.80)

u(x, 0) = u0 (x)

(4.81)

and boundary conditions, e.g. Dirichlet


u(a, t) = a (t)

(4.82)

u(b, t) = b (t)

(4.83)

or Neumann
u
(a, t) = a (t)
x
u
(b, t) = b (t)
x

(4.84)
(4.85)

or a combination of both.
Notice that parabolic equations can be viewed as the limit of hyperbolic equations with two families of
characteristics (actually one) when the propagation speed goes to infinity.

4.3.1   The Explicit FTCS Euler method

    \left( \frac{\partial u}{\partial t} \right)_j^n = \alpha \left( \frac{\partial^2 u}{\partial x^2} \right)_j^n    (4.86)

where

    \left( \frac{\partial u}{\partial t} \right)_j^n \simeq \frac{u_j^{n+1} - u_j^n}{\Delta t}    (4.87)

and

    \left( \frac{\partial^2 u}{\partial x^2} \right)_j^n \simeq \frac{u_{j+1}^n - 2u_j^n + u_{j-1}^n}{h^2}    (4.88)

thus

    \frac{u_j^{n+1} - u_j^n}{\Delta t} = \alpha \frac{u_{j+1}^n - 2u_j^n + u_{j-1}^n}{h^2}    (4.89)

or equivalently:

    u_j^{n+1} = u_j^n + \frac{\alpha \Delta t}{h^2} \left( u_{j+1}^n - 2u_j^n + u_{j-1}^n \right)    (4.90)

The scheme corresponds to the following (modified) equation:

    \frac{\partial u}{\partial t} - \alpha \frac{\partial^2 u}{\partial x^2} = \alpha \frac{h^2}{12} (1 - 6r) \frac{\partial^4 u}{\partial x^4} + O(\Delta t^2, h^2 \Delta t, h^4) \frac{\partial^6 u}{\partial x^6}    (4.91)

with

    r = \frac{\alpha \Delta t}{h^2}    (4.92)


Figure 4.23: A graphical representation corresponding to the stencil used to approximate the temporal derivative as forward Euler and the spatial derivatives as centered in space.
The scheme:
has accuracy O(Δt, h²);
has no odd derivatives in the modified equation, thus it is dissipative;
stability (von Neumann):

    \frac{u^{n+1}}{u^n} = 1 - 4 \frac{\alpha \Delta t}{h^2} \sin^2\left( \frac{kh}{2} \right)    (4.93)

and stability requires

    -1 < 1 - 4 \frac{\alpha \Delta t}{h^2} \sin^2\left( \frac{kh}{2} \right) < 1    (4.94)

which, since 0 \le \sin^2(kh/2) \le 1 for every mode, gives

    \frac{\alpha \Delta t}{h^2} < \frac{1}{2}    (4.95)

Domain of dependence
As can be seen from the discretization, at each time step the updated value at a node depends at most on its first neighbours at the previous time step. This means that a node in the centre of the domain may be influenced solely by the initial data for a long time, without feeling the boundary conditions. This may result in an unphysical behaviour of the solution.
It can be easily verified that for Δt → 0 and h → 0 the computational domain contains the physical domain of dependence. We have seen that von Neumann stability requires precisely Δt ∼ h² (namely αΔt/h² < 1/2).
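A minimal MATLAB sketch of the explicit update (4.90) is given below; it is not part of the course material, and the diffusivity, domain, boundary data and initial profile are illustrative assumptions. Setting r slightly above 1/2 shows the predicted instability.

% Minimal FTCS sketch for u_t = alpha u_xx with homogeneous Dirichlet data;
% alpha, domain and initial profile are assumed for illustration only.
alpha = 1;  a = 0;  b = 1;
h = 0.05;  x = (a:h:b)';  n = numel(x);
r = 0.45;                      % try r = 0.55 to observe the instability
dt = r*h^2/alpha;
u = sin(pi*x);                 % assumed initial condition u0(x)
nsteps = round(0.1/dt);  T = nsteps*dt;
for k = 1:nsteps
    u(2:n-1) = u(2:n-1) + r*(u(3:n) - 2*u(2:n-1) + u(1:n-2));   % eq. (4.90)
end                            % u(1) and u(n) stay at their Dirichlet value 0
plot(x, u, 'o-', x, sin(pi*x)*exp(-pi^2*alpha*T), '-');          % vs exact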

4.3.2   The Implicit BTCS Backward Euler method


    \frac{u_j^{n+1} - u_j^n}{\Delta t} = \alpha \frac{u_{j+1}^{n+1} - 2u_j^{n+1} + u_{j-1}^{n+1}}{h^2}    (4.96)

and

    -r u_{j+1}^{n+1} + (2r + 1) u_j^{n+1} - r u_{j-1}^{n+1} = u_j^n    (4.97)

where r = αΔt/h².
This is a tri-diagonal linear system that must be solved with a matrix method at each time step.
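For instance (a sketch on assumed data, not part of the notes), the system (4.97) can be assembled once as a sparse tridiagonal matrix and solved with MATLAB's backslash operator at every step:

% Sketch of one possible matrix method for the BTCS system (4.97);
% diffusivity, grid and initial data are assumed for illustration.
alpha = 1;  h = 0.05;  dt = 0.01;  r = alpha*dt/h^2;
x = (0:h:1)';  n = numel(x);
u = sin(pi*x);                                   % assumed initial condition
e = ones(n-2,1);                                 % interior unknowns only
A = spdiags([-r*e, (1+2*r)*e, -r*e], -1:1, n-2, n-2);
for k = 1:20
    rhs = u(2:n-1);                              % right-hand side u_j^n
    u(2:n-1) = A \ rhs;                          % tridiagonal solve
end                                              % homogeneous Dirichlet BCs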
This method corresponds to the following modified equation:

    \frac{\partial u}{\partial t} - \alpha \frac{\partial^2 u}{\partial x^2} = \alpha \frac{h^2}{12} (1 + 6r) \frac{\partial^4 u}{\partial x^4} + O(\Delta t^2, h^2 \Delta t, h^4) \frac{\partial^6 u}{\partial x^6}    (4.98)


with

    r = \frac{\alpha \Delta t}{h^2}    (4.99)

and the amplification factor from a von Neumann type analysis is

    G = \left[ 1 + 2r(1 - \cos\theta) \right]^{-1}    (4.100)

which shows that the scheme is unconditionally stable.


Figure 4.24: A graphical representation corresponding to the stencil used to approximate the temporal derivative as backward Euler.

4.3.3   The Crank-Nicolson method


    \frac{u_j^{n+1} - u_j^n}{\Delta t} = \frac{\alpha}{2} \left[ \frac{u_{j+1}^{n+1} - 2u_j^{n+1} + u_{j-1}^{n+1}}{h^2} + \frac{u_{j+1}^n - 2u_j^n + u_{j-1}^n}{h^2} \right]    (4.101)

    -r u_{j+1}^{n+1} + 2(r + 1) u_j^{n+1} - r u_{j-1}^{n+1} = r u_{j+1}^n - 2(r - 1) u_j^n + r u_{j-1}^n    (4.102)

The modified equation to which this method corresponds is:

    \frac{\partial u}{\partial t} - \alpha \frac{\partial^2 u}{\partial x^2} = \alpha \frac{h^2}{12} \frac{\partial^4 u}{\partial x^4} + \left( \frac{\alpha^3 \Delta t^2}{12} + \frac{\alpha h^4}{360} \right) \frac{\partial^6 u}{\partial x^6} + \dots    (4.103)

with second order accuracy O(Δt², h²).


The amplification of the error is:

    G = \frac{1 - r(1 - \cos\theta)}{1 + r(1 - \cos\theta)}    (4.104)

and thus the scheme is unconditionally stable.
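As a quick numerical check of (4.104) (a throwaway sketch, with the values of r chosen arbitrarily), |G| stays at or below 1 for any r ≥ 0 and any phase angle:

% Check of eq. (4.104): |G| <= 1 for any r >= 0 and any phase angle theta,
% illustrating the unconditional stability of Crank-Nicolson.
theta = linspace(0, pi, 200);
for r = [0.1 1 10 100]                 % arbitrary sample values of r
    G = (1 - r*(1 - cos(theta))) ./ (1 + r*(1 - cos(theta)));
    fprintf('r = %6.1f   max|G| = %.3f\n', r, max(abs(G)));
end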

4.3.4   The θ method

This method can interpolate, varying the parameter θ, between the implicit and the explicit scheme:

    \frac{u_j^{n+1} - u_j^n}{\Delta t} = \alpha \left[ \theta \frac{u_{j+1}^{n+1} - 2u_j^{n+1} + u_{j-1}^{n+1}}{h^2} + (1 - \theta) \frac{u_{j+1}^n - 2u_j^n + u_{j-1}^n}{h^2} \right]    (4.105)

where 0 ≤ θ ≤ 1 and:


Figure 4.25: A graphical representation corresponding to the stencil used in the Crank-Nicolson method.
Figure 4.26: A graphical representation corresponding to the stencil used in the θ method.
θ = 0 gives the explicit FTCS scheme;
θ = 1 gives the implicit BTCS scheme;
θ = 1/2 gives the Crank-Nicolson scheme (a minimal implementation sketch covering all three cases is given below).
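The sketch below implements (4.105) on assumed data (unit diffusivity, homogeneous Dirichlet boundaries, an illustrative initial profile); it is not taken from the course scripts.

% Sketch of the theta method (4.105): theta = 0 gives FTCS, theta = 1 gives
% BTCS, theta = 1/2 gives Crank-Nicolson.  Data are assumed for illustration.
alpha = 1;  h = 0.05;  dt = 0.01;  r = alpha*dt/h^2;  theta = 0.5;
x = (0:h:1)';  n = numel(x);  u = sin(pi*x);          % assumed initial data
e = ones(n-2,1);
D2 = spdiags([e, -2*e, e], -1:1, n-2, n-2);           % second-difference matrix
I  = speye(n-2);
A  = I - theta*r*D2;                                  % implicit part
B  = I + (1-theta)*r*D2;                              % explicit part
for k = 1:20
    u(2:n-1) = A \ (B*u(2:n-1));                      % homogeneous Dirichlet BCs
end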

4.3.5   The Richardson method

This scheme is similar to the leapfrog scheme:

    \frac{u_j^{n+1} - u_j^{n-1}}{2\Delta t} = \alpha \frac{u_{j+1}^n - 2u_j^n + u_{j-1}^n}{h^2}    (4.106)

It is O(Δt², h²) and unconditionally unstable.
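The instability is easy to observe numerically; the following throwaway sketch (assumed data, r well below 1/2) lets round-off errors grow without bound within a few thousand steps:

% Illustration of the unconditional instability of the Richardson scheme
% (4.106): even for a very small r, round-off errors grow without bound.
alpha = 1;  h = 0.05;  dt = 1e-4;  r = alpha*dt/h^2;   % r = 0.04 << 1/2
x = (0:h:1)';  n = numel(x);
uold = sin(pi*x);                                      % assumed level n-1
u = sin(pi*x)*exp(-pi^2*alpha*dt);                     % exact start for level n
for k = 1:2000
    unew = uold;
    unew(2:n-1) = uold(2:n-1) + 2*r*(u(3:n) - 2*u(2:n-1) + u(1:n-2));
    uold = u;  u = unew;                               % shift time levels
end
fprintf('max|u| after 2000 steps: %g\n', max(abs(u))); % grows enormously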

4.3.6   The DuFort-Frankel method

The Richardson scheme can be made stable by the following splitting:

    u_j^n \;\rightarrow\; \frac{u_j^{n+1} + u_j^{n-1}}{2}    (4.107)

Figure 4.27: A graphical representation corresponding to the stencil used in the Richardson scheme.

    \frac{u_j^{n+1} - u_j^{n-1}}{2\Delta t} = \alpha \frac{u_{j+1}^n - u_j^{n+1} - u_j^{n-1} + u_{j-1}^n}{h^2}    (4.108)

    u_j^{n+1} (1 + 2r) = u_j^{n-1} + 2r \left( u_{j+1}^n - u_j^{n-1} + u_{j-1}^n \right)    (4.109)

The resulting modified equation is:

    u_t - \alpha u_{xx} = \left( \frac{\alpha h^2}{12} - \frac{\alpha^3 \Delta t^2}{h^2} \right) u_{xxxx} + \left( \frac{\alpha h^4}{360} - \frac{\alpha^3 \Delta t^2}{3} + \frac{2 \alpha^5 \Delta t^4}{h^4} \right) u_{xxxxxx} + \dots    (4.110)

The amplification factor is

    G = \frac{2r \cos\theta \pm \sqrt{1 - 4r^2 \sin^2\theta}}{1 + 2r}    (4.111)

and the scheme is unconditionally stable. Note, however, that the \alpha^3 \Delta t^2 / h^2 term in (4.110) does not vanish unless Δt/h → 0: the scheme is unconditionally stable but only conditionally consistent.

Figure 4.28: A graphical representation corresponding to the stencil used in the DuFort-Frankel method.
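A minimal sketch of the update (4.109) is given below, again on assumed data; since the scheme needs two starting time levels, the second level is taken here from the exact solution of the assumed test problem.

% DuFort-Frankel sketch of update (4.109); alpha, domain and initial profile
% are assumed.  Note that r > 1/2 is allowed, but accuracy degrades as dt/h
% grows (conditional consistency).
alpha = 1;  h = 0.05;  dt = 0.01;  r = alpha*dt/h^2;   % r = 4
x = (0:h:1)';  n = numel(x);
uold = sin(pi*x);                                      % level n-1 (assumed u0)
u    = sin(pi*x)*exp(-pi^2*alpha*dt);                  % level n (exact start)
for k = 2:10
    unew = u;
    unew(2:n-1) = (uold(2:n-1) ...
                + 2*r*(u(3:n) - uold(2:n-1) + u(1:n-2))) / (1 + 2*r);
    uold = u;  u = unew;                               % shift time levels
end
plot(x, u, 'o-', x, sin(pi*x)*exp(-pi^2*alpha*10*dt), '-');   % vs exact at t = 0.1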



Summary of the finite-difference schemes for the model equation u_t = \alpha u_{xx}:

FTCS
  Numerical scheme:    \frac{u_j^{n+1} - u_j^n}{\Delta t} = \alpha \frac{u_{j+1}^n - 2u_j^n + u_{j-1}^n}{h^2}
  Leading error term:  \alpha \frac{h^2}{12} (1 - 6r) u_{xxxx}
  Stability:           stable for r ≤ 1/2

BTCS
  Numerical scheme:    \frac{u_j^{n+1} - u_j^n}{\Delta t} = \alpha \frac{u_{j+1}^{n+1} - 2u_j^{n+1} + u_{j-1}^{n+1}}{h^2}
  Leading error term:  \alpha \frac{h^2}{12} (1 + 6r) u_{xxxx}
  Stability:           unconditionally stable

Crank-Nicolson
  Numerical scheme:    \frac{u_j^{n+1} - u_j^n}{\Delta t} = \frac{\alpha}{2h^2} \left[ u_{j+1}^{n+1} - 2u_j^{n+1} + u_{j-1}^{n+1} + u_{j+1}^n - 2u_j^n + u_{j-1}^n \right]
  Leading error term:  \alpha \frac{h^2}{12} u_{xxxx} + \frac{\alpha^3 \Delta t^2}{12} u_{xxxxxx}
  Stability:           unconditionally stable

Richardson
  Numerical scheme:    \frac{u_j^{n+1} - u_j^{n-1}}{2\Delta t} = \alpha \frac{u_{j+1}^n - 2u_j^n + u_{j-1}^n}{h^2}
  Accuracy:            O(Δt², h²)
  Stability:           unconditionally unstable

DuFort-Frankel
  Numerical scheme:    \frac{u_j^{n+1} - u_j^{n-1}}{2\Delta t} = \alpha \frac{u_{j+1}^n - u_j^{n+1} - u_j^{n-1} + u_{j-1}^n}{h^2}
  Leading error term:  \alpha \frac{h^2}{12} (1 - 12r^2) u_{xxxx}
  Stability:           unconditionally stable, conditionally consistent

3-levels implicit
  Numerical scheme:    \frac{3u_j^{n+1} - 4u_j^n + u_j^{n-1}}{2\Delta t} = \alpha \frac{u_{j+1}^{n+1} - 2u_j^{n+1} + u_{j-1}^{n+1}}{h^2}
  Leading error term:  \alpha \frac{h^2}{12} u_{xxxx}
  Stability:           unconditionally stable


Lecture 5

Second order equations


This section closely follows: Riley & Hobson Chapter 11 - Sections 11.1-11.3


Lecture 6

Solution methods for PDE


This section closely follows: Riley & Hobson Chapter 11 - Sections 11.4-11.5

