
Lecture Notes on

Partial Differential Equations


Université Pierre et Marie Curie (Paris 6)
Nicolas Lerner
February 24, 2011
Contents

1 Introduction
  1.1 Examples
  1.2 Comments
  1.3 Quotations

2 Vector Fields
  2.1 Ordinary Differential Equations
    2.1.1 The Cauchy-Lipschitz result
    2.1.2 Maximal and Global Solutions
    2.1.3 Continuous dependence
  2.2 Vector Fields, Flow, First Integrals
    2.2.1 Definition, examples
    2.2.2 Local Straightening of a non-singular vector field
    2.2.3 2D examples of singular vector fields
  2.3 Transport equations
    2.3.1 The linear case
    2.3.2 The quasi-linear case
    2.3.3 Classical solutions of Burgers equation
  2.4 One-dimensional conservation laws
    2.4.1 Rankine-Hugoniot condition and singular solutions
    2.4.2 The Riemann problem for Burgers equation

3 Five classical equations
  3.1 The Laplace and Cauchy-Riemann equations
    3.1.1 Fundamental solutions
    3.1.2 Hypoellipticity
    3.1.3 Polar and spherical coordinates
  3.2 The heat equation
  3.3 The Schrödinger equation
  3.4 The Wave Equation
    3.4.1 Presentation
    3.4.2 The wave equation in one space dimension
    3.4.3 The wave equation in two space dimensions
    3.4.4 The wave equation in three space dimensions

4 Analytic PDE
  4.1 The Cauchy-Kovalevskaya theorem

5 Elliptic Equations
  5.1 Some simple facts on the Laplace operator
    5.1.1 The mean-value theorem
    5.1.2 The maximum principle
    5.1.3 Analyticity of harmonic functions
    5.1.4 Green's function

6 Hyperbolic Equations
  6.1 Energy identities for the wave equation
    6.1.1 A basic identity
    6.1.2 Domain of dependence for the wave equation

7 Appendix
  7.1 Fourier transform
  7.2 Spaces of functions
    7.2.1 On the Faà de Bruno formula
    7.2.2 Analytic functions
  7.3 Some computations
    7.3.1 On multi-indices
    7.3.2 Stirling's formula
    7.3.3 On the Poisson kernel for a half-space
Chapter 1
Introduction
1.1 Examples
What is a partial differential equation? Although the question may look too general, it is certainly a natural one for the reader opening these notes with the expectation of learning things about PDE, the acronym of Partial Differential Equations. Loosely speaking it is a relation involving a function $u$ of several real variables $x_1,\dots,x_n$ with its partial derivatives
$$\frac{\partial u}{\partial x_j},\quad \frac{\partial^2 u}{\partial x_j\partial x_k},\quad \frac{\partial^3 u}{\partial x_j\partial x_k\partial x_l},\ \dots$$
Maybe a simple example would be a better starting point than a general (and vague) definition: let us consider a $C^1$ function $u$ defined on $\mathbb{R}^2$ and let $c>0$ be given. The PDE
$$\partial_t u + c\,\partial_x u = 0 \qquad\text{(Transport Equation)}\tag{1.1.1}$$
is describing a propagation phenomenon at speed $c$, and a solution is given by
$$u(t,x)=\varphi(x-ct),\qquad \varphi\in C^1(\mathbb{R}).\tag{1.1.2}$$
We have indeed $\partial_t u + c\,\partial_x u = \varphi'(x-ct)\,(-c+c)=0$. Note also that if $u$ has the dimension of a length $L$ and $c$ of a speed $LT^{-1}$, then $\partial_t u$ and $c\,\partial_x u$ have respectively the dimensions $LT^{-1}$ and $LT^{-1}\,L\,L^{-1}$, i.e. (fortunately) both $LT^{-1}$. At time $t=0$, we have $u(0,x)=\varphi(x)$ and at time $t=1$, we have $u(1,x)=\varphi(x-c)$, so that $\varphi$ is translated (at speed $c$) to the right when time increases.
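This translation property is easy to confirm numerically. The sketch below is an added illustration (not part of the notes); the profile $\varphi$ and the speed $c$ are arbitrary choices, and the derivatives are approximated by centered finite differences.

```python
import numpy as np

# u(t, x) = phi(x - c*t) should satisfy  du/dt + c du/dx = 0.
c = 2.0
phi = lambda s: np.exp(-s**2)          # an arbitrary C^1 profile
u = lambda t, x: phi(x - c * t)

t, x, h = 0.7, 0.3, 1e-5
ut = (u(t + h, x) - u(t - h, x)) / (2 * h)   # approximate d/dt u
ux = (u(t, x + h) - u(t, x - h)) / (2 * h)   # approximate d/dx u
print(ut + c * ux)   # close to 0, up to O(h^2) discretization error
```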
The equation (1.1.1) is a linear PDE, namely, if $u_1,u_2$ are solutions, then $u_1+u_2$ is also a solution, as well as any linear combination $c_1u_1+c_2u_2$ with constants $c_1,c_2$. Looking at (1.1.1) as an evolution equation with respect to the time variable $t$, we may already ask the following question: knowing $u$ at time $0$, say $u(0,x)=\varphi(x)$, is it true that (1.1.2) is the unique solution? In other words, we can set the so-called Cauchy problem,¹
$$\begin{cases}\partial_t u + c\,\partial_x u = 0,\\ u(0)=\varphi,\end{cases}\tag{1.1.3}$$
¹Augustin L. Cauchy (1789–1857) is a French mathematician, a prominent scientific figure of the nineteenth century, who laid many foundational concepts of infinitesimal calculus; more is available on the website [17].
and ask the question of determinism: do the law of evolution (i.e. the transport equation) and the initial state of the system (that is $\varphi$) determine uniquely the solution $u$? We shall see that the answer is yes. Another interesting and natural question about (1.1.1) concerns the regularity of $u$: of course a classical solution should be differentiable, just for the equation to make sense, but, somehow, this is a pity since we would like to accept as a solution $u(t,x)=|x-ct|$ and in fact all functions $\varphi(x-ct)$. We shall see that Distribution theory will provide a very complete answer to this type of questions for linear equations.

Let us consider now, for $u$ of class $C^1$ on $\mathbb{R}^2$,
$$\partial_t u + u\,\partial_x u = 0.\qquad\text{(Burgers Equation)}\tag{1.1.4}$$
That equation² is not linear, but one may look at a linear companion equation in three independent variables $(t,x,y)$ given by $\partial_t U + y\,\partial_x U = 0$. It is easy to see that $U(t,x,y)=x-y(t-T)$ is a solution of the latter equation (here $T$ is a constant). Let us now take a function $u(t,x)$ such that $x-u(t,x)(t-T)=x_0$, where $x_0$ is a constant, i.e.
$$u(t,x)=\frac{x-x_0}{t-T}.$$
Now, we can verify that for $t\neq T$, the function $u$ is a solution of (1.1.4): we check
$$\partial_t u + u\,\partial_x u = -\frac{x-x_0}{(t-T)^2}+\frac{x-x_0}{t-T}\cdot\frac{1}{t-T}=0.$$
We shall go back to this type of equation later on, but we can notice already an interesting phenomenon for this solution: assume $T>0$, $x_0=0$; then the solution at $t=0$ is $-x/T$ (perfectly smooth and decreasing) and it blows up at $t=T$. If on the contrary we assume $T<0$, $x_0=0$, the solution at $t=0$ is $-x/T$ (perfectly smooth and increasing), remains smooth for all times larger than $T$, but blows up in the past at time $t=T$.
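The algebra above can be checked symbolically; the sketch below is an added illustration and simply confirms that $(x-x_0)/(t-T)$ solves Burgers' equation away from $t=T$, and displays the slope that degenerates at $t=T$.

```python
import sympy as sp

t, x, x0, T = sp.symbols('t x x0 T')
u = (x - x0) / (t - T)                     # candidate solution, t != T
burgers = sp.diff(u, t) + u * sp.diff(u, x)
print(sp.simplify(burgers))                # 0: it solves (1.1.4)

# With x0 = 0 the spatial slope is 1/(t - T): for T > 0 it tends to
# -infinity as t -> T^-, the blow-up described in the text.
print(sp.diff(u.subs(x0, 0), x))           # 1/(t - T)
```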
The Laplace equation³ is the second-order PDE, $\Delta u=0$, with
$$\Delta u=\sum_{1\le j\le n}\frac{\partial^2u}{\partial x_j^2}.\tag{1.1.5}$$
This is a linear equation and it is called second-order because it involves partial derivatives of order at most 2. The solutions of the Laplace equation are called harmonic functions. Let us determine all the harmonic polynomials in two dimensions. Denoting the variables $(x,y)\in\mathbb{R}^2$, the equation can be written as
$$(\partial_x+i\partial_y)(\partial_x-i\partial_y)u=0.$$
Since $u$ is assumed to be a polynomial, we can write
$$u(x,y)=\sum_{(k,l)\in\mathbb{N}^2}u_{k,l}\,(x+iy)^k(x-iy)^l,\qquad u_{k,l}\in\mathbb{C},\ \text{all }0\text{ but a finite number}.$$
²Jan M. Burgers (1895–1981) is a Dutch physicist.
³Pierre-Simon Laplace (1749–1827) is a French mathematician, see [17].
Now we note that $(\partial_x+i\partial_y)(x+iy)^l=l(x+iy)^{l-1}(1+i^2)=0$ and $(\partial_x-i\partial_y)(x-iy)^l=0$. As a result $u$ is a harmonic polynomial if $u_{k,l}=0$ when $kl\neq0$. Conversely, noting that $(\partial_x+i\partial_y)(x-iy)^l=l(x-iy)^{l-1}\,2$ and $(\partial_x-i\partial_y)(x+iy)^k=k(x+iy)^{k-1}\,2$, we have (the finite sum)
$$\Delta u=\sum_{(k,l)\in(\mathbb{N}^*)^2}u_{k,l}\,4kl\,(x+iy)^{k-1}(x-iy)^{l-1},$$
and thus for $kl\neq0$, $u_{k,l}=0$, from the following remark: if the polynomial $P=\sum_{p,q\in\mathbb{N}}a_{p,q}z^p\bar z^q$ vanishes identically for $z\in\mathbb{C}$, then all $a_{p,q}=0$. To prove this remark, we note, with $z=x+iy$,
$$\partial_{\bar z}=\tfrac12\bigl(\partial_x+i\partial_y\bigr),\qquad\partial_z=\tfrac12\bigl(\partial_x-i\partial_y\bigr),\qquad\text{so that}\qquad\partial_z z=1,\ \partial_z\bar z=0,\ \partial_{\bar z}z=0,\ \partial_{\bar z}\bar z=1,$$
$$0=\frac{1}{p!q!}\bigl(\partial_z^p\partial_{\bar z}^qP\bigr)(0)=a_{p,q}.$$
Finally the harmonic polynomials in two dimensions are
$$u(x,y)=f(x+iy)+g(x-iy),\qquad f,g\ \text{polynomials in }\mathbb{C}[X].\tag{1.1.6}$$
Requiring moreover that they should be real-valued leads to, using the standard notation $x+iy=re^{i\theta}$, $r\ge0$, $\theta\in\mathbb{R}$,
$$u(x,y)=\sum_{k\in\mathbb{N}}\operatorname{Re}\bigl((a_k-ib_k)(x+iy)^k\bigr)
=\sum_{k\in\mathbb{N}}r^k\operatorname{Re}\bigl((a_k-ib_k)e^{ik\theta}\bigr)
=\sum_{k\in\mathbb{N}}\bigl(a_k\cos(k\theta)+b_k\sin(k\theta)\bigr)r^k,$$
with $a_k,b_k\in\mathbb{R}$, all $0$ but a finite number.
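The harmonicity of the building blocks $\operatorname{Re}(x+iy)^k$ and $\operatorname{Im}(x+iy)^k$ can be verified for a few values of $k$ with sympy; this short check is an added illustration, not a proof.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
lap = lambda u: sp.diff(u, x, 2) + sp.diff(u, y, 2)

for k in range(6):
    u = sp.re(sp.expand((x + sp.I * y) ** k))   # r^k cos(k*theta)
    v = sp.im(sp.expand((x + sp.I * y) ** k))   # r^k sin(k*theta)
    assert sp.simplify(lap(u)) == 0 and sp.simplify(lap(v)) == 0
print("Re (x+iy)^k and Im (x+iy)^k are harmonic for k = 0,...,5")
```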
We see also that for a sequence $(c_k)_{k\in\mathbb{Z}}\in\ell^1$,
$$v(x,y)=c_0+\sum_{k\in\mathbb{N}^*}\bigl(c_kz^k+c_{-k}\bar z^{\,k}\bigr)\tag{1.1.7}$$
is a harmonic function in the unit disk $D_1=\{z\in\mathbb{C},\ |z|<1\}$ such that
$$v_{|\partial D_1}(e^{i\theta})=\sum_{k\in\mathbb{Z}}c_ke^{ik\theta}.$$
As a consequence the function (1.1.7) is solving the Dirichlet problem⁴ for the Laplace operator in the unit disk $D_1$ with
$$\begin{cases}\Delta v=0&\text{on }D_1,\\ v=\omega&\text{on }\partial D_1,\end{cases}\tag{1.1.8}$$
where $\omega$ is given by its Fourier series expansion $\omega(e^{i\theta})=\sum_{k\in\mathbb{Z}}c_ke^{ik\theta}$. The boundary condition $v=\omega$ on $\partial D_1$ is called a Dirichlet boundary condition. The Laplace equation is a stationary equation, i.e. there is no time variable, and that boundary condition should not be confused with an initial condition occurring for the Cauchy problem (1.1.3).

⁴Johann P. Dirichlet (1805–1859) is a German mathematician, see [17].
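Formula (1.1.7) gives a practical way to solve the Dirichlet problem numerically. The sketch below is only an illustration: the boundary datum and the truncation order are arbitrary choices, and the Fourier coefficients are computed by a discrete FFT rather than exactly.

```python
import numpy as np

# Dirichlet problem in the unit disk via (1.1.7):
# v(r e^{i th}) = sum_k c_k r^{|k|} e^{ik th}, c_k = Fourier coefficients of omega.
N = 64                                    # truncation order (illustrative)
theta = 2 * np.pi * np.arange(2 * N) / (2 * N)
omega = np.sign(np.cos(theta))            # arbitrary boundary datum
c = np.fft.fft(omega) / (2 * N)           # discrete Fourier coefficients

def v(r, th):
    k = np.fft.fftfreq(2 * N, d=1.0 / (2 * N))      # integer frequencies
    return np.real(np.sum(c * r ** np.abs(k) * np.exp(1j * k * th)))

# Mean-value property at the center: v(0) equals the mean of the boundary datum.
print(v(0.0, 0.0), np.mean(omega))
```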
The eikonal equation is a non-linear equation
$$|\nabla\varphi|=1,\qquad\text{i.e.}\quad\sum_{1\le j\le n}\Bigl|\frac{\partial\varphi}{\partial x_j}\Bigr|^2=1.\tag{1.1.9}$$
Note that for $\xi\in\mathbb{R}^n$ with Euclidean norm equal to $1$, $\varphi(x)=\xi\cdot x$ is a solution of (1.1.9). The notation $\nabla$ (nabla) stands for the vector
$$\nabla=\Bigl(\frac{\partial}{\partial x_1},\dots,\frac{\partial}{\partial x_n}\Bigr).\tag{1.1.10}$$
We shall study as well the Hamilton-Jacobi equation⁵
$$\partial_t u+H(x,\nabla u)=0,\tag{1.1.11}$$
which is a non-linear evolution equation.
The Helmholtz⁶ equation $-\Delta u=\lambda u$ is a linear equation closely related to the Laplace equation and to the wave equation, also linear second order,
$$\frac{1}{c^2}\frac{\partial^2u}{\partial t^2}-\Delta_xu=0,\qquad t\in\mathbb{R},\ x\in\mathbb{R}^n,\ c>0\ \text{is the speed of propagation}.\tag{1.1.12}$$
Note that if $u$ has the dimension of a length $L$, then $c^{-2}\partial_t^2u$ has the dimension $L^{-2}T^2\,T^{-2}L=L^{-1}$, as well as $\Delta_xu$, which has dimension $L^{-2}L=L^{-1}$. It is interesting to note that for any $\xi\in\mathbb{R}^n$ with $\sum_j\xi_j^2=1$, and $\varphi$ of class $C^2$ on $\mathbb{R}$,
$$u(t,x)=\varphi(\xi\cdot x-ct)$$
is a solution of (1.1.12) since $c^{-2}\varphi''(\xi\cdot x-ct)\,c^2-\sum_{1\le j\le n}\varphi''(\xi\cdot x-ct)\,\xi_j^2=0$.
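The plane-wave computation is easy to confirm symbolically; the sketch below is an added illustration in dimension $n=3$, with an arbitrarily chosen unit vector $\xi$ and an abstract profile $\varphi$.

```python
import sympy as sp

t, x1, x2, x3, c = sp.symbols('t x1 x2 x3 c', real=True, positive=True)
xi = (sp.Rational(2, 3), sp.Rational(2, 3), sp.Rational(1, 3))   # |xi| = 1
phi = sp.Function('phi')

u = phi(xi[0] * x1 + xi[1] * x2 + xi[2] * x3 - c * t)
wave = sp.diff(u, t, 2) / c**2 - (sp.diff(u, x1, 2) + sp.diff(u, x2, 2) + sp.diff(u, x3, 2))
print(sp.simplify(wave))   # 0: phi(xi.x - ct) solves the wave equation (1.1.12)
```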
We shall study in the sequel many other linear equations, such as the heat equation,
$$\frac{\partial u}{\partial t}-\Delta_xu,\qquad t\in\mathbb{R}_+,\ x\in\mathbb{R}^n,$$
and the Schrödinger equation,
$$\frac{1}{i}\frac{\partial u}{\partial t}-\Delta_xu,\qquad t\in\mathbb{R},\ x\in\mathbb{R}^n.$$
Although the two previous equations look similar, they are indeed very different. The Schrödinger⁷ equation is a propagation equation which is time-reversible: assume that $u(t,x)$ solves $i\partial_tu+\Delta u=0$ on $\mathbb{R}\times\mathbb{R}^n$; then $v(t,x)=\bar u(-t,x)$ will satisfy

⁵Sir William Hamilton (1805–1865) is an Irish mathematician, physicist and astronomer. Carl Gustav Jacobi (1804–1851) is a Prussian mathematician.
⁶Hermann von Helmholtz (1821–1894) is a German mathematician.
⁷Erwin Schrödinger (1887–1961) is an Austrian physicist, author of fundamental contributions to quantum mechanics.
$i\partial_tv+\Delta v=0$ on $\mathbb{R}\times\mathbb{R}^n$. The term propagation equation is due to the fact that for $\xi\in\mathbb{R}^n$ and
$$u(t,x)=e^{i(x\cdot\xi-t|\xi|^2)},$$
we have
$$\frac1i\partial_tu-\Delta u=-|\xi|^2e^{i(x\cdot\xi-t|\xi|^2)}-\sum_ji^2\xi_j^2\,e^{i(x\cdot\xi-t|\xi|^2)}=0,$$
so that, comparing to the transport equation (1.1.1), the Schrödinger equation behaves like a propagation equation where the speed of propagation depends on the frequency of the initial wave $\varphi(x)=e^{ix\cdot\xi}$. On the other hand the heat equation is a diffusion equation, modelling the evolution of the temperature distribution: this equation is time-irreversible. First of all, if $u(t,x)$ solves $\partial_tu-\Delta u=0$ on $\mathbb{R}_+\times\mathbb{R}^n$, then $v(t,x)=u(-t,x)$ solves $\partial_tv+\Delta v=0$ on the different domain $\mathbb{R}_-\times\mathbb{R}^n$; moreover, for $\xi\in\mathbb{R}^n$ the function
$$v(t,x)=e^{ix\cdot\xi}e^{-t|\xi|^2}$$
satisfies
$$\partial_tv-\Delta v=-|\xi|^2v(t,x)-\sum_ji^2\xi_j^2\,v=0,\qquad\text{with }v(0,x)=e^{ix\cdot\xi}.$$
In particular $v(t=0)$ is a bounded function in $\mathbb{R}^n$ and $v(t)$ remains bounded for $t>0$, whereas it is exponentially increasing for $t<0$. It is not difficult to prove that there is no bounded solution $v(t,x)$ of the heat equation on the whole real line satisfying $v(0,x)=e^{ix\cdot\xi}$ ($\xi\neq0$).
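The contrast between the two equations is already visible on a single Fourier mode; the following sketch is an added illustration, with an arbitrary choice of $|\xi|^2$.

```python
import numpy as np

# For the mode e^{i x.xi}:
#   heat:        amplitude e^{-t |xi|^2}   (decays for t > 0, explodes for t < 0),
#   Schrodinger: amplitude |e^{-i t |xi|^2}| = 1 for all t (pure propagation).
xi2 = 4.0                                        # |xi|^2, arbitrary
for t in (-1.0, 0.0, 1.0):
    heat_amp = np.exp(-t * xi2)                  # modulus of the heat mode
    schrod_amp = abs(np.exp(-1j * t * xi2))      # modulus of the Schrodinger mode
    print(f"t = {t:+.1f}   |heat mode| = {heat_amp:9.3f}   |Schrodinger mode| = {schrod_amp:.3f}")
```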
So far, we have seen only scalar PDE, i.e. equations involving the derivatives of a single scalar-valued function $\mathbb{R}^n\ni x\mapsto u(x)\in\mathbb{R},\mathbb{C}$. Many very important equations of mathematical physics are in fact systems of PDE, dealing with the partial derivatives of vector-valued functions $\mathbb{R}^n\ni x\mapsto u(x)\in\mathbb{R}^N$. A typical example is Maxwell's equations⁸, displayed below in vacuum. For $(t,x)\in\mathbb{R}\times\mathbb{R}^3$, the electric field $E(t,x)$ belongs to $\mathbb{R}^3$ and the magnetic field $B(t,x)$ belongs to $\mathbb{R}^3$ with
$$\begin{cases}
\partial_tE=\operatorname{curl}B=\begin{pmatrix}\partial_{x_1}\\ \partial_{x_2}\\ \partial_{x_3}\end{pmatrix}\times\begin{pmatrix}B_1\\ B_2\\ B_3\end{pmatrix}=\begin{pmatrix}\partial_2B_3-\partial_3B_2\\ \partial_3B_1-\partial_1B_3\\ \partial_1B_2-\partial_2B_1\end{pmatrix},\\[2ex]
\partial_tB=-\operatorname{curl}E=\begin{pmatrix}\partial_3E_2-\partial_2E_3\\ \partial_1E_3-\partial_3E_1\\ \partial_2E_1-\partial_1E_2\end{pmatrix},\\[2ex]
\operatorname{div}E=\operatorname{div}B=0,
\end{cases}\tag{1.1.13}$$
with $\operatorname{div}E=\partial_1E_1+\partial_2E_2+\partial_3E_3$.

⁸James C. Maxwell (1831–1879) is a Scottish theoretical physicist and mathematician.
⁹Leonhard Euler (1707–1783) is a mathematician and physicist, born in Switzerland, who worked mostly in Germany and Russia.

The previous system is a linear one, whereas the following, Euler's system for incompressible fluids⁹, is non-linear: the velocity
field $v(t,x)=(v_1,v_2,v_3)$ and the pressure (a scalar) $p(t,x)$ should satisfy
$$\begin{cases}\partial_tv+(v\cdot\nabla)v=-\nabla(p/\rho),\\ \operatorname{div}v=0,\\ v_{|t=0}=w,\end{cases}\tag{1.1.14}$$
where $v\cdot\nabla=v_1\partial_1+v_2\partial_2+v_3\partial_3$ and $\rho$ is the mass density, so that the system is
$$\begin{pmatrix}\partial_tv_1+\sum_jv_j\partial_jv_1+\partial_1(p/\rho)\\ \partial_tv_2+\sum_jv_j\partial_jv_2+\partial_2(p/\rho)\\ \partial_tv_3+\sum_jv_j\partial_jv_3+\partial_3(p/\rho)\end{pmatrix}=\begin{pmatrix}0\\0\\0\end{pmatrix},\qquad\sum_j\partial_jv_j=0.$$
Note that $v$ has dimension $LT^{-1}$, so that $\partial_tv$ has dimension $LT^{-2}$ (acceleration) and $v\cdot\nabla v$ has dimension $LT^{-1}\,L^{-1}\,LT^{-1}=LT^{-2}$, as well as $\nabla(p/\rho)$, which has dimension
$$\underbrace{L^{-1}}_{\nabla}\ \underbrace{MLT^{-2}}_{\text{force}}\ \underbrace{L^{-2}}_{\text{area}^{-1}}\ \underbrace{M^{-1}L^3}_{\text{density}^{-1}}=LT^{-2},$$
where $M$ stands for the mass unit.
The Navier-Stokes system for incompressible fluids¹⁰ reads
$$\begin{cases}\partial_tv+(v\cdot\nabla)v-\nu\Delta v=-\nabla(p/\rho),\\ \operatorname{div}v=0,\\ v_{|t=0}=w,\end{cases}\tag{1.1.15}$$
where $\nu$ is the kinematic viscosity, expressed in Stokes ($L^2T^{-1}$), so that the dimension of $\nu\Delta v$ is also
$$\underbrace{L^2T^{-1}}_{\nu}\ \underbrace{L^{-2}}_{\Delta}\ \underbrace{LT^{-1}}_{v}=LT^{-2}.$$
We note that $\operatorname{curl}\operatorname{grad}=0$ since
$$\begin{pmatrix}\partial_{x_1}\\ \partial_{x_2}\\ \partial_{x_3}\end{pmatrix}\times\begin{pmatrix}\partial_{x_1}f\\ \partial_{x_2}f\\ \partial_{x_3}f\end{pmatrix}=0,$$
and this implies that, taking the curl of the first line of (1.1.14), we get, with the vorticity
$$\omega=\operatorname{curl}v,\tag{1.1.16}$$
$$\partial_t\omega+\operatorname{curl}\bigl((v\cdot\nabla)v\bigr)=0.$$
Let us compute, using Einstein's convention¹¹ on repeated indices (this means that $\partial_jv_j$ stands for $\sum_{1\le j\le3}\partial_jv_j$),
$$\operatorname{curl}\bigl((v\cdot\nabla)v\bigr)=\begin{pmatrix}\partial_{x_1}\\ \partial_{x_2}\\ \partial_{x_3}\end{pmatrix}\times\begin{pmatrix}(v\cdot\nabla)v_1\\ (v\cdot\nabla)v_2\\ (v\cdot\nabla)v_3\end{pmatrix}=(v\cdot\nabla)\operatorname{curl}v+\begin{pmatrix}\partial_2v_j\,\partial_jv_3-\partial_3v_j\,\partial_jv_2\\ \partial_3v_j\,\partial_jv_1-\partial_1v_j\,\partial_jv_3\\ \partial_1v_j\,\partial_jv_2-\partial_2v_j\,\partial_jv_1\end{pmatrix},$$

¹⁰Claude Navier (1785–1836) is a French engineer and scientist. George Stokes (1819–1903) is a British mathematician and physicist.
¹¹Albert Einstein (1879–1955) is one of the greatest scientists of all time and, needless to say, his contributions to Quantum Mechanics, Brownian Motion and Relativity Theory are far more important than this convention, which is however a handy notational tool.
and since $\partial_jv_j=0$ and
$$\omega=\begin{pmatrix}\partial_2v_3-\partial_3v_2\\ \partial_3v_1-\partial_1v_3\\ \partial_1v_2-\partial_2v_1\end{pmatrix},$$
we get for the first component of the last term
$$\begin{aligned}
\partial_2v_j\,\partial_jv_3-\partial_3v_j\,\partial_jv_2
&=\partial_2v_1\,\partial_1v_3-\partial_3v_1\,\partial_1v_2+(\partial_2v_3-\partial_3v_2)(\partial_2v_2+\partial_3v_3)\\
&=\partial_2v_1\,\partial_1v_3-\partial_3v_1\,\partial_1v_2-\omega_1\partial_1v_1\qquad(\text{using }\partial_2v_2+\partial_3v_3=-\partial_1v_1)\\
&=-\omega_2\,\partial_2v_1-\omega_3\,\partial_3v_1-\omega_1\,\partial_1v_1=-\omega_j\,\partial_jv_1,
\end{aligned}$$
since $-\omega_2\partial_2v_1-\omega_3\partial_3v_1=-(\partial_3v_1-\partial_1v_3)\partial_2v_1-(\partial_1v_2-\partial_2v_1)\partial_3v_1=\partial_1v_3\,\partial_2v_1-\partial_1v_2\,\partial_3v_1$, so that, using a circular permutation, we get
$$\operatorname{curl}\bigl((v\cdot\nabla)v\bigr)=(v\cdot\nabla)\omega-(\omega\cdot\nabla)v\tag{1.1.17}$$
and (1.1.14) becomes
$$\partial_t\omega+(v\cdot\nabla)\omega-(\omega\cdot\nabla)v=0,\qquad\operatorname{div}v=0,\qquad\omega_{|t=0}=(\operatorname{curl}v)_{|t=0}=\operatorname{curl}w.$$
1.2 Comments

Although the above list of examples is very limited, it is quite obvious that partial differential equations occur in many different domains of science: Electromagnetism with the Maxwell equations, Wave Propagation with the transport, wave and Burgers equations, Quantum Mechanics with the Schrödinger equation, Diffusion Theory with the heat equation, Fluid Dynamics with the Euler and Navier-Stokes systems. We could have mentioned Einstein's equation of General Relativity and many other examples. As a matter of fact, the laws of Physics are essentially all expressed as PDE, so the domain is so vast that it is pointless to expect a useful classification of PDE, at least in an introductory chapter of a textbook on PDE.

We have already mentioned various types of questions, such as the Cauchy problem for evolution equations: for that type of Initial Value Problem, we are given an equation of evolution $\partial_tu=F(x,u,\nabla_xu,\dots)$ and the initial value $u(0)$. The first natural questions are about the existence of a solution and its uniqueness, but also about the continuous dependence of the solution with respect to the data: the French mathematician Jacques Hadamard (1865–1963)¹² introduced the notion of well-posedness as one of the most important properties of a PDE. After all, the data (initial or Cauchy data, various quantities occurring in the equation) in a Physics problem are known only approximately, and even if the solution existed and were proven unique, this would be useless for actual computation or applications if minute changes of the data triggered huge changes in the solution. In fact, one should try to establish some inequalities controlling the size of the norms or semi-norms of the solution $u$ in some functional space. The lack of well-posedness is linked to instability and is also a very interesting phenomenon to study. We can quote at this point Lars Gårding's survey article¹³ [10]: "When a problem about partial differential operators has been fitted into the abstract theory, all that remains is usually to prove a suitable inequality and much of our knowledge is, in fact, essentially contained in such inequalities."

On the other hand, we have seen that the solution can be subjected to boundary conditions, such as the Dirichlet boundary condition, and we shall study other types of boundary conditions, such as the Neumann boundary condition¹⁴, where the normal derivative at the boundary is given.

The questions of smoothness and regularity of the solutions are also very important: where are the singularities of the solutions located, do they propagate? Is it possible to consider weak solutions, whose regularity is too limited for the equation to make classical sense (see our discussion above on the transport equation)?

Obviously non-linear PDE are more difficult to handle than linear ones, in particular because some singularities of the solution may occur although the initial datum is perfectly smooth (see our discussion above on the Burgers equation). The study of systems of PDE plays a key role in Fluid Mechanics and the intricacies of the algebraic properties of these systems deserve a detailed examination (a simple example of calculation was given with the formula (1.1.17)).

¹²See [17].
¹³Lars Gårding (born 1919) is a Swedish mathematician.
1.3 Quotations

Let us end this introduction with a couple of quotations. First of all, we cannot avoid quoting Galileo Galilei (1564–1642), an Italian physicist, mathematician, astronomer and philosopher, with his famous apology of Mathematics: "Nature is written in that great book which ever lies before our eyes - I mean the universe - but we cannot understand it if we do not first learn the language and grasp the symbols in which it is written. This book is written in the mathematical language, and the symbols are triangles, circles and other geometrical figures, without whose help it is impossible to comprehend a single word of it; without which one wanders in vain through a dark labyrinth", see the translation of [4].

Our next quotation is by the physicist Eugene P. Wigner (1902–1995, 1963 Physics Nobel Prize) who, in his celebrated 1960 article The Unreasonable Effectiveness of Mathematics in the Natural Sciences [24], is unraveling part of the complex relationship between Mathematics and Physics: "The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve. We should be grateful for it and hope that it will remain valid in future research and that it will extend, for better or for worse, to our pleasure, even though perhaps also to our bafflement, to wide branches of learning." It is interesting to complement that quotation by the 2009 appreciation of James Glimm¹⁵ in [11]: "In simple terms, mathematics works. It is effective. It is essential. It is practical. Its force cannot be avoided, and the future belongs to societies that embrace its power. Its force is derived from its essential role within science, and from the role of science in technology. Wigner's observations concerning The Unreasonable Effectiveness of Mathematics are truer today than when they were first written in 1960."

The British physicist and mathematician Roger Penrose (born 1931), acclaimed author of popular books such as The Emperor's New Mind and The Road to Reality,¹⁶ a complete guide to the laws of the universe [18], should have a say with the following remarkable excerpts of the preface of [18]: "To mathematicians . . . mathematics is not just a cultural activity that we have ourselves created, but it has a life of its own, and much of it finds an amazing harmony with the physical universe. We cannot get a deep understanding of the laws that govern the physical world without entering the world of mathematics. . . In modern physics, one cannot avoid facing up to the subtleties of much sophisticated mathematics."

Then we listen to John A. Wheeler (1911–2008), an outstanding theoretical physicist (author with Kip S. Thorne and Charles W. Misner of the landmark book Gravitation [16]), who deals with the aesthetics of scientific truth: "It is my opinion that everything must be based on a simple idea. And it is my opinion that this idea, once we have finally discovered it, will be so compelling, so beautiful, that we will say to one another, yes, how could it have been any different."

¹⁴Carl Gottfried Neumann (1832–1925) is a German mathematician.
¹⁵James Glimm (born 1934) is an American mathematician.
¹⁶As a matter of fact, that extra-ordinary one-thousand-page book could not really be qualified as popular, except for the fact that it is indeed available in general bookstores.
Chapter 2
Vector Fields
We start by recalling a few basic facts on Ordinary Differential Equations.

2.1 Ordinary Differential Equations
2.1.1 The Cauchy-Lipschitz result¹

Let $I$ be an interval of $\mathbb{R}$ and $\Omega$ be an open set of $\mathbb{R}^n$. We consider a continuous function $F:I\times\Omega\to\mathbb{R}^n$ such that for all $(t_0,x_0)\in I\times\Omega$, there exists a neighborhood $V_0$ of $(t_0,x_0)$ in $I\times\Omega$ and a positive constant $L_0$ such that for $(t,x_1),(t,x_2)\in V_0$,
$$|F(t,x_1)-F(t,x_2)|\le L_0|x_1-x_2|,\tag{2.1.1}$$
where $|\cdot|$ stands for a norm in $\mathbb{R}^n$. We shall say that $F$ satisfies a local Lipschitz condition. Note that these assumptions are satisfied whenever $F\in C^1(I\times\Omega)$ and even if $\partial_xF(t,x)\in C^0(I\times\Omega)$.

Theorem 2.1.1 (Cauchy-Lipschitz). Let $F$ be as above. Then for all $(t_0,x_0)\in I\times\Omega$, there exists a neighborhood $J$ of $t_0$ in $I$ such that the initial-value problem
$$\begin{cases}\dot x(t)=F\bigl(t,x(t)\bigr),\\ x(t_0)=x_0,\end{cases}\tag{2.1.2}$$
has a unique solution defined in $J$.

N.B. A solution of (2.1.2) is a differentiable function on $J$, valued in $\Omega$, and since $F$ and $x$ are continuous, the equation itself implies that $x$ is $C^1$. One may as well consider continuous solutions of
$$x(t)=x_0+\int_{t_0}^tF(s,x(s))\,ds.\tag{2.1.3}$$
From this equation, the solution $t\mapsto x(t)$ is $C^1$ and satisfies (2.1.2).

¹See the footnote (1) for A. L. Cauchy. Rudolph Lipschitz (1832–1903) is a German mathematician.
Proof. We shall use directly the Picard approximation scheme², which goes as follows. We want to define, for $k\in\mathbb{N}$ and $t$ in a neighborhood of $t_0$,
$$x_0(t)=x_0,\qquad x_{k+1}(t)=x_0+\int_{t_0}^tF(s,x_k(s))\,ds.\tag{2.1.4}$$
We need to prove that this makes sense, which is not obvious since $F$ is only defined on $I\times\Omega$. Let us assume that for $t\in J_0=\{t\in I,\ |t-t_0|\le T_0\}$, $(x_l(t))_{0\le l\le k}$ is such that
$$x_l(t)\in\bar B(x_0,R_0)\subset\Omega,\ \text{where $T_0$ and $R_0$ are positive},\qquad\text{(2.1.1) holds with }V_0=J_0\times\bar B(x_0,R_0),\tag{2.1.5}$$
and such that
$$e^{L_0T_0}\int_{|t-t_0|\le T_0}|F(s,x_0)|\,ds\le R_0.\tag{2.1.6}$$
The relevance of the latter condition will be clarified by the computation below, but we may note at once that, given $R_0>0$, there exists $T_0>0$ such that (2.1.6) is satisfied since the lhs goes to zero with $T_0$. Property (2.1.5) is true for $k=0$; let us assume $k\ge1$. Then we can define $x_{k+1}(t)$ as above for $t\in J_0$ and we have, with $(x_l)_{0\le l\le k}$ satisfying (2.1.5),
$$|x_{k+1}(t)-x_k(t)|\le\Bigl|\int_{t_0}^tL_0|x_k(s)-x_{k-1}(s)|\,ds\Bigr|,\tag{2.1.7}$$
and inductively
$$|x_{k+1}(t)-x_k(t)|\le L_0^k\Bigl|\int_{t_0}^t|F(s,x_0)|\,ds\Bigr|\frac{|t-t_0|^k}{k!},\tag{2.1.8}$$
since (we may assume without loss of generality that $t\ge t_0$) that estimate holds true trivially for $k=0$ and if $k\ge1$, we have, using (2.1.7) and the induction hypothesis (2.1.8) for $k-1$,
$$|x_{k+1}(t)-x_k(t)|\le L_0\int_{t_0}^tL_0^{k-1}\int_{t_0}^s|F(\sigma,x_0)|\,d\sigma\,\frac{(s-t_0)^{k-1}}{(k-1)!}\,ds
\le\frac{L_0^k}{(k-1)!}\int_{t_0}^t|F(\sigma,x_0)|\,d\sigma\int_{t_0}^t(s-t_0)^{k-1}\,ds
=\frac{L_0^k(t-t_0)^k}{k!}\int_{t_0}^t|F(\sigma,x_0)|\,d\sigma.$$
As a consequence, we have for $t\in J_0$,
$$|x_{k+1}(t)-x_0|\le\sum_{0\le l\le k}|x_{l+1}(t)-x_l(t)|\le\Bigl|\int_{t_0}^t|F(\sigma,x_0)|\,d\sigma\Bigr|\sum_{0\le l\le k}\frac{L_0^l|t-t_0|^l}{l!}\le\underbrace{e^{L_0|t-t_0|}\Bigl|\int_{t_0}^t|F(\sigma,x_0)|\,d\sigma\Bigr|}_{\le R_0\ \text{from (2.1.6)}}.$$

²Émile Picard (1856–1941) is a French mathematician.
We have thus proven that, provided (2.1.6) holds true, then for all $k\in\mathbb{N}$ and all $t\in J_0$, $x_k(t)$ makes sense and belongs to $\bar B(x_0,R_0)$. Thus we have constructed a sequence $(x_k)_{k\ge0}$ of continuous functions of $C^0(J_0;\mathbb{R}^n)$ such that, defining
$$\rho(T_0)=\int_{J_0}|F(s,x_0)|\,ds,\qquad J_0=\{t\in I,\ |t-t_0|\le T_0\},\tag{2.1.9}$$
and assuming as in (2.1.6) that $\rho(T_0)e^{L_0T_0}\le R_0$, we have
$$\sup_{t\in J_0}\|x_{k+1}(t)-x_k(t)\|\le\frac{L_0^kT_0^k}{k!}\rho(T_0).\tag{2.1.10}$$

Lemma 2.1.2. Let $J$ be a compact interval of $\mathbb{R}$ and $E$ be a Banach³ space. Then $\mathcal C=C^0(J;E)$, equipped with the norm $\|u\|_{\mathcal C}=\sup_{t\in J}|u(t)|_E$, is a Banach space.

Proof of the lemma. Note that the continuous image $u(J)$ is a compact subset of $E$, thus is bounded, so that the expression of $\|u\|_{\mathcal C}$ makes sense and is obviously a norm; let us consider now a Cauchy sequence $(u_k)_{k\ge1}$ in $\mathcal C$. Then for all $t\in J$, $(u_k(t))_{k\ge1}$ is a Cauchy sequence in $E$, thus converges: let us set $v(t)=\lim_ku_k(t)$, for $t\in J$. The convergence is uniform with respect to $t$ since
$$|u_k(t)-v(t)|_E=\lim_l|u_k(t)-u_{k+l}(t)|_E\le\limsup_l\|u_k-u_{k+l}\|_{\mathcal C}=\epsilon(k)\underset{k\to+\infty}{\longrightarrow}0.$$
The continuity of the limit follows by the classical argument: for $t,t+h\in J$, we have for all $k$
$$|v(t+h)-v(t)|_E\le|v(t+h)-u_k(t+h)|_E+|u_k(t+h)-u_k(t)|_E+|u_k(t)-v(t)|_E\le2\|v-u_k\|_{\mathcal C}+|u_k(t+h)-u_k(t)|_E,$$
and thus by continuity of $u_k$, $\limsup_{h\to0}|v(t+h)-v(t)|_E\le2\|v-u_k\|_{\mathcal C}$, so that
$$\limsup_{h\to0}|v(t+h)-v(t)|_E\le2\inf_k\|v-u_k\|_{\mathcal C}=0.$$

Applying this lemma, we see that the sequence of continuous functions $(x_k)$ is a Cauchy sequence in the Banach space $C^0(J_0;\mathbb{R}^n)$ since (2.1.10) gives
$$\sum_{k\ge0}\|x_{k+1}-x_k\|_{C^0(J_0;\mathbb{R}^n)}\le\rho(T_0)e^{L_0T_0}<+\infty.$$
Let $u=\lim_kx_k$ in the Banach space $C^0(J_0;\mathbb{R}^n)$; since $x_k(J_0)\subset\bar B(x_0,R_0)$, we have also $u(J_0)\subset\bar B(x_0,R_0)$ and from the equation (2.1.4), we get for $t\in J_0$
$$u(t)=x_0+\int_{t_0}^tF(s,u(s))\,ds,$$

³Stefan Banach (1892–1945) is a Polish mathematician. A Banach space is a complete normed vector space.
since $u(t)=\lim_kx_{k+1}(t)$, $x_{k+1}(t)=x_0+\int_{t_0}^tF(s,x_k(s))\,ds$, and the difference
$$\int_{t_0}^t\bigl(F(s,x_k(s))-F(s,u(s))\bigr)\,ds$$
satisfies
$$\Bigl|\int_{t_0}^t\bigl(F(s,u(s))-F(s,x_k(s))\bigr)\,ds\Bigr|\le\Bigl|\int_{t_0}^tL_0|u(s)-x_k(s)|\,ds\Bigr|\le L_0T_0\|x_k-u\|_{C^0(J_0;\mathbb{R}^n)},$$
providing the local existence part of Theorem 2.1.1. Let us prove uniqueness (and even more). Let $u,v$ be solutions of
$$u(t)=x_0+\int_{t_0}^tF(s,u(s))\,ds,\qquad v(t)=y_0+\int_{t_0}^tF(s,v(s))\,ds,\qquad\text{for }0\le t-t_0\le T_0.\tag{2.1.11}$$
We define $\rho(t)=|u(t)-v(t)|$ and we have
$$\rho(t)\le|u(t_0)-v(t_0)|+\int_{t_0}^tL_0|u(s)-v(s)|\,ds=R(t),$$
so that $\dot R(t)=L_0|u(t)-v(t)|=L_0\rho(t)\le L_0R(t)$.

Lemma 2.1.3 (Gronwall⁴). Let $t_0<t_1$ be real numbers and $R:[t_0,t_1]\to\mathbb{R}$ be a differentiable function such that $\dot R(t)\le LR(t)$ for $t\in[t_0,t_1]$, where $L\in\mathbb{R}$. Then for $t\in[t_0,t_1]$, $R(t)\le e^{L(t-t_0)}R(t_0)$. More generally, if $\dot R(t)\le L(t)R(t)+f(t)$ for $t\in[t_0,t_1]$ with $L,f\in L^1([t_0,t_1])$, we have for $t\in[t_0,t_1]$
$$R(t)\le e^{\int_{t_0}^tL(s)\,ds}R(t_0)+\int_{t_0}^te^{\int_s^tL(\sigma)\,d\sigma}f(s)\,ds.$$

Remark 2.1.4. In other words a solution of the differential inequality $\dot R\le LR+f$, $R(t_0)=R_0$, is smaller than the solution of the equality $\dot R=LR+f$, $R(t_0)=R_0$.

Proof of the lemma. We calculate
$$\frac{d}{dt}\Bigl(R(t)e^{-\int_{t_0}^tL(s)\,ds}\Bigr)=\bigl(\dot R(t)-L(t)R(t)\bigr)e^{-\int_{t_0}^tL(s)\,ds}\le f(t)e^{-\int_{t_0}^tL(s)\,ds},$$
entailing for $t\in[t_0,t_1]$, $R(t)e^{-\int_{t_0}^tL(s)\,ds}\le R(t_0)+\int_{t_0}^tf(s)e^{-\int_{t_0}^sL(\sigma)\,d\sigma}\,ds$.

Applying this lemma, we get for $0\le t-t_0\le T_0$ that
$$|u(t)-v(t)|=\rho(t)\le R(t)\le e^{L_0(t-t_0)}R(t_0),$$
proving uniqueness and a much better result, akin to continuous dependence on the data, summarized in the following lemma.

⁴Thomas Grönwall (1877–1932) is a Swedish-born American mathematician.
Lemma 2.1.5 (Gronwall lemma on ODE). Let $F$ be as in Theorem 2.1.1 with $|F(t,x_1)-F(t,x_2)|\le L|x_1-x_2|$ for $t\in I$, $x_1,x_2\in\Omega$. Let $u,v$ be as in (2.1.11). Then for $t_0\le t\in I$, $|u(t)-v(t)|\le e^{L(t-t_0)}|u(t_0)-v(t_0)|$.

The proof of Theorem 2.1.1 is complete.
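The Picard scheme (2.1.4) is also a practical algorithm. The sketch below is an added illustration (not part of the proof): it applies the scheme to $\dot x=x$, $x(0)=1$, whose iterates are the Taylor partial sums of $e^t$, using a simple trapezoidal quadrature for the integral.

```python
import numpy as np

# Picard iteration x_{k+1}(t) = x0 + int_0^t F(s, x_k(s)) ds  with  F(t, x) = x, x0 = 1.
t = np.linspace(0.0, 1.0, 201)
F = lambda s, x: x
x0 = 1.0

xk = np.full_like(t, x0)
for k in range(10):
    integrand = F(t, xk)
    # cumulative trapezoidal rule: approximate integral from 0 to each t[i]
    integral = np.concatenate(([0.0],
        np.cumsum((integrand[1:] + integrand[:-1]) / 2 * np.diff(t))))
    xk = x0 + integral

print(np.max(np.abs(xk - np.exp(t))))   # small: the iterates converge to e^t
```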
Remark 2.1.6. We have proven a quantitatively more precise result: let $F$ be as in Theorem 2.1.1, $(t_0,x_0)\in I\times\Omega$, $R_0>0$ such that $\bar B(x_0,R_0)\subset\Omega$ and $T_0>0$ such that (2.1.6) holds. Then with $J_0=\{t\in I,\ |t-t_0|\le T_0\}$, there exists a unique solution $x$ of (2.1.2) such that $x\in C^1(J_0;\bar B(x_0,R_0))$. Let $K$ be a compact subset of $\Omega$ and $J$ be a compact nonempty subinterval of $I$: then
$$\sup_{t\in J,\ x_j\in K,\ j=1,2,\ x_1\neq x_2}\frac{|F(t,x_1)-F(t,x_2)|}{|x_1-x_2|}<+\infty.\tag{2.1.12}$$
If it were not the case, we would find sequences $(t_k)$ in $J$, $(x_{1,k})$, $(x_{2,k})$ in $K$, so that
$$|F(t_k,x_{1,k})-F(t_k,x_{2,k})|>k|x_{1,k}-x_{2,k}|.$$
Extracting subsequences and using the compactness assumption, we may assume that the three sequences are converging to $(t,x_1,x_2)\in J\times K^2$; moreover the continuity hypothesis on $F$ gives the convergence of the lhs to $|F(t,x_1)-F(t,x_2)|$ and the inequality gives $x_1=x_2$ and $x_{1,k}\neq x_{2,k}$. We infer from the assumption (2.1.1) at $(t,x_1)$ that for $k\ge k_0$,
$$k<\frac{|F(t_k,x_{1,k})-F(t_k,x_{2,k})|}{|x_{1,k}-x_{2,k}|}\le L_0,$$
which is impossible. We have proven that for $K$ compact subset of $\Omega$ and $J$ compact subinterval of $I$, (2.1.12) holds. Let $R>0$ be such that $\bigcup_{x\in K}\bar B(x,R)=K_R\subset\Omega$. Now if $t_0$ is given in $I$, $L_0$ stands for the lhs of (2.1.12), and $T_0$ is small enough so that
$$e^{L_0T_0}\int_{t\in I,\ |t-t_0|\le T_0}\sup_{y\in K}|F(s,y)|\,ds\le R,$$
we know that, for all $y\in K$, there exists a unique solution of (2.1.2) defined on $J_0=\{t\in I,\ |t-t_0|\le T_0\}$ such that $x\in C^1(J_0;\bar B(y,R))$, $x(t_0)=y$. In particular, if the initial data $y$ belong to a compact subset of $\Omega$ and $s$ belongs to a compact subset of $I$, the time of existence of the solution of (2.1.2) is bounded from below by a fixed constant (provided $F$ satisfies (2.1.1)).
If we consider $F$ as in Theorem 2.1.1, we know that, for any $(s,y)\in I\times\Omega$, the initial-value problem $\dot x(t)=F(t,x(t))$, $x(s)=y$ has a unique solution, which is defined and $C^1$ on a neighborhood of $s$ in $I$. We may denote that solution by $x(t,s,y)$, which is characterized by
$$(\partial_tx)(t,s,y)=F\bigl(t,x(t,s,y)\bigr),\qquad x(s,s,y)=y.$$
We may consider $y_1,y_2\in\Omega$, $s\in I$ and the solutions $x(t,s,y_2)$, $x(t,s,y_1)$, both defined on a neighborhood $J$ of $s$ in $I$ (the intersection of the neighborhoods on which $t\mapsto x(t,s,y_j)$ are defined). We have
$$x(t,s,y_2)-x(t,s,y_1)=y_2-y_1+\int_s^t\Bigl(F\bigl(\sigma,x(\sigma,s,y_2)\bigr)-F\bigl(\sigma,x(\sigma,s,y_1)\bigr)\Bigr)\,d\sigma,$$
and assuming $J$ compact, we get that $\bigcup_{j=1,2}x(\cdot,s,y_j)(J)$ is a compact subset of $\Omega$, so that there exists $L\ge0$ with
$$|x(t,s,y_2)-x(t,s,y_1)|\le|y_2-y_1|+\Bigl|\int_s^tL|x(\sigma,s,y_2)-x(\sigma,s,y_1)|\,d\sigma\Bigr|,$$
and the previous lemma implies that
$$|x(t,s,y_2)-x(t,s,y_1)|\le e^{L|t-s|}|y_2-y_1|.\tag{2.1.13}$$
The mapping $y\mapsto x(t,s,y)$, defined for any $s\in I$ and $t$ in a neighborhood of $s$, is thus Lipschitz continuous. We have also proven the following
Proposition 2.1.7. Let $F:I\times\Omega\to\mathbb{R}^n$ be as in Theorem 2.1.1 with $0\in I$. We define the flow of the ODE $\dot X(t)=F(t,X(t))$ as the unique solution $\psi$ of
$$\partial_t\psi(t,x)=F(t,\psi(t,x)),\qquad\psi(0,x)=x.\tag{2.1.14}$$
The $C^1$ mapping $t\mapsto\psi(t,x)$ is defined on a neighborhood of $0$ in $I$ which may depend on $x$. However if $x$ belongs to a compact subset $K$ of $\Omega$, there exists $T_0>0$ such that $\psi$ is defined on $\{t\in I,\ |t|\le T_0\}\times K$ and $\psi(t,\cdot)$ is Lipschitz-continuous.

Remark 2.1.8. There is essentially nothing to change in the statements and in the proofs if we wish to replace $\mathbb{R}^n$ by a Banach space (possibly infinite dimensional).
Remark 2.1.9. The local Lipschitz regularity can be replaced by a much weaker assumption related to an Osgood⁵ modulus of continuity: let $\omega:\,]0,+\infty)\to\,]0,+\infty)$ be a continuous and non-decreasing function, such that $\omega(0^+)=0$ and
$$\forall r_0>0,\qquad\int_0^{r_0}\frac{dr}{\omega(r)}=+\infty.\tag{2.1.15}$$
Let $I$ be an interval of $\mathbb{R}$, $\Omega$ be an open subset of a Banach space $E$ and $F:I\times\Omega\to E$ such that there exists $\gamma\in L^1_{loc}(I)$ so that for all $(t,x_1,x_2)\in I\times\Omega^2$,
$$|F(t,x_1)-F(t,x_2)|_E\le\gamma(t)\,\omega\bigl(|x_1-x_2|_E\bigr).\tag{2.1.16}$$
Then Theorem 2.1.1 holds (see e.g. [5]). Some continuous dependence can also be proven, in general weaker than (2.1.13). Note that the Lipschitz regularity corresponds to $\omega(r)=r$ and that the integral condition above allows more general moduli of continuity such as
$$\omega(r)=r|\ln r|\qquad\text{or}\qquad r|\ln r|\ln(|\ln r|).$$

⁵William F. Osgood (1864–1943) is an American mathematician.
Naturally Hölder's regularity $\omega(r)=r^\alpha$ with $\alpha\in[0,1[$ is excluded by (2.1.15): as a matter of fact, in that case some classical counterexamples to uniqueness are known, such as the one-dimensional
$$\dot x=|x|^\alpha,\qquad x(0)=0,\qquad(\alpha\in[0,1[),$$
which has the solution $0$ and $x(t)=\bigl((1-\alpha)t\bigr)^{\frac{1}{1-\alpha}}$ for $t>0$, $0$ for $t\le0$.
Remark 2.1.10. Going back to the finite-dimensional case, a theorem by Peano⁶ provides an existence result (without uniqueness) for the ODE (2.1.2) under a mere continuity assumption on $F$. That type of result is not true in the infinite-dimensional case, as the reader may check for instance in Exercise 18, page IV.41, of Bourbaki's volume [2].
2.1.2 Maximal and Global Solutions
Let $I$ be an interval of $\mathbb{R}$ and $\Omega$ be an open set of $\mathbb{R}^n$. We consider a continuous function $F:I\times\Omega\to\mathbb{R}^n$. Let $I_1\subset I_2\subset I$ be subintervals of $I$. Let $x_j:I_j\to\Omega$ ($j=1,2$) be such that $\dot x_j=F(t,x_j)$. If $x_1=x_{2|I_1}$ we shall say that $x_2$ is a continuation of $x_1$.

Definition 2.1.11. We consider the ODE $\dot x=F(t,x)$. A maximal solution $x$ of this ODE is a solution such that there is no continuation of $x$, except $x$ itself. A global solution of this ODE is a solution defined on $I$.
Note that a global solution is a maximal solution, but the converse is not true in general. Taking $I=\mathbb{R}$, $\Omega=\mathbb{R}$, the equation $\dot x=x^2$ has the maximal solutions $t\mapsto(T_0-t)^{-1}$ ($T_0$ is a real parameter) defined on the intervals $(-\infty,T_0)$, $(T_0,+\infty)$. None of these maximal solutions can be extended globally since $|x(t)|$ goes to $+\infty$ when $t$ approaches $T_0$:
$$\text{for }t<1,\quad x(t)=\frac{1}{1-t},\quad x(0)=1,\quad\text{blow-up time }t=1,$$
$$\text{for }t\in\mathbb{R},\quad x(t)=0,\quad\text{the only solution not blowing up},$$
$$\text{for }t>1,\quad x(t)=\frac{1}{1-t},\quad x(2)=-1,\quad\text{blow-up time }t=1.$$
Note that if $x(t_0)$ is positive, then $x(t)$ is positive and blows up in the future, and if $x(t_0)$ is negative, then $x(t)$ is negative and blows up in the past. Moreover $x(0)=T_0^{-1}$, so that the larger positive $x(0)$ is, the sooner the blow-up occurs.
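A numerical integration displays the same picture. The sketch below is only an illustration with arbitrarily chosen positive initial data; an event stops the solver when the solution exceeds a large threshold, which happens just before the theoretical blow-up time $T_0=1/x(0)$.

```python
from scipy.integrate import solve_ivp

def rhs(t, x):
    return [x[0] ** 2]                  # dx/dt = x^2

def too_big(t, x):                      # stop the integration near the blow-up
    return x[0] - 1e6
too_big.terminal = True

for x0 in (0.5, 1.0, 2.0):
    sol = solve_ivp(rhs, (0.0, 10.0), [x0], events=too_big, rtol=1e-8)
    print(f"x(0) = {x0}: stopped at t = {sol.t[-1]:.4f}, theoretical blow-up T0 = {1/x0:.4f}")
```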
Theorem 2.1.12. Let $F:I\times\Omega\to\mathbb{R}^n$ be as in Theorem 2.1.1 and let $(t_0,x_0)\in I\times\Omega$. Then there exists a unique maximal solution $x:J\to\mathbb{R}^n$ of the initial-value problem $\dot x=F(t,x)$, $x(t_0)=x_0$, where $J$ is a subinterval of $I$ containing $t_0$.
Proof. Let us consider all the solutions $x_\alpha:J_\alpha\to\mathbb{R}^n$ of the initial-value problem $\dot x_\alpha=F(t,x_\alpha)$, $x_\alpha(t_0)=x_0$, where $J_\alpha$ is a subinterval of $I$ containing $t_0$. From the existence theorem, that family is not empty. If $\tau\in J_\alpha\cap J_\beta$, then $x_\alpha(\tau)=x_\beta(\tau)$ from the uniqueness theorem on $[t_0,\tau]$ (or $[\tau,t_0]$), so that we may define, for $t\in\bigcup_\alpha J_\alpha=J$ ($J$ is an interval since $t_0$ belongs to all $J_\alpha$), $x(t)=x_\alpha(t)$.

[Figure 2.1: Three solutions of the ODE $\dot x=x^2$, with $x(0)=1$, $x\equiv0$ and $x(2)=-1$.]

⁶Giuseppe Peano (1858–1932) is an Italian mathematician.
Moreover, the function $x$ is continuous on $J$: take $\tau\in J$, say with $\tau>t_0$, $\tau\in J_\alpha$: the function $x$ coincides with $x_\alpha$ on $[t_0,\tau]$, thus is left-continuous at $\tau$. If $\tau=\sup J$, this is enough to prove continuity at $\tau$. Now if $\tau<\sup J$, there exists $\tilde t\in J$, $\tilde t>\tau$: $\tilde t\in J_\beta$ for some $\beta$ and, as above, the function $x$ coincides with $x_\beta$ on $[t_0,\tilde t\,]$, which proves continuity as well. Note that $x$ is continuous at $t_0$ since it coincides with $x_\alpha$ on a neighborhood of $t_0$ in $I$ for all $\alpha$.

For $t\in J$, we have $t\in J_\alpha$ for some $\alpha$ and since $[t_0,t]\subset J_\alpha$, we get
$$\int_{t_0}^tF(s,x(s))\,ds=\int_{t_0}^tF(s,x_\alpha(s))\,ds=x_\alpha(t)-x_0=x(t)-x_0,$$
so that $x$ is a solution of the initial-value problem $\dot x=F(t,x)$, $x(t_0)=x_0$ on $J$. By construction, it is a maximal solution.
Theorem 2.1.13. Let $F:[0,+\infty)\times\Omega\to\mathbb{R}^n$ be as in Theorem 2.1.1, $x_0\in\Omega$. The maximal solution of $\dot x=F(t,x)$, $x(0)=x_0$ is defined on some interval $[0,T_0[$ and if $T_0<+\infty$ then
$$\sup_{0\le t<T_0}|x(t)|=+\infty\qquad\text{or}\qquad x([0,T_0[)\ \text{is not a compact subset of }\Omega.$$

Proof. If the maximal solution were defined on some interval $[0,T_0]$, $T_0>0$, then $(T_0,x(T_0))\in[0,+\infty)\times\Omega$ and the local existence theorem would imply the existence of a solution of $\dot y=F(t,y)$, $y(T_0)=x(T_0)$ on some neighborhood of $T_0$: by the uniqueness theorem, that solution should coincide with $x$ for $t\le T_0$ and provide a continuation of $x$, contradicting its maximality.

Let us assume that $x$ is defined on $[0,T_0[$ with $0<T_0<+\infty$ and
$$\sup_{0\le t<T_0}|x(t)|\le M<+\infty,\qquad\text{as well as}\qquad x([0,T_0[)=K\ \text{compact subset of }\Omega.$$
We consider a sequence $(t_k)_{k\ge1}$ with $0<t_k<T_0$, $\lim_kt_k=T_0$. The sequence $(x(t_k))_{k\ge1}$ belongs to $K$ and thus has a convergent subsequence, that we shall call again $(x(t_k))_{k\ge1}$, so that
$$\lim_kx(t_k)=\xi,\qquad\xi\in K.$$
The equation $\dot y=F(t,y)$, $y(T_0)=\xi$ has a unique solution defined in $[T_0-\epsilon_0,T_0+\epsilon_0]$ with $\epsilon_0>0$. For $t\in[T_0-\epsilon_0,T_0[$, we have $x(t)\in K$, which is a compact subset of $\Omega$, and $y(t)$ in a neighborhood of $\xi$, so that (see Remark 2.1.6 for the uniformity of the constant $L$)
$$|x(t)-y(t)|\le|x(t_k)-y(t_k)|+\Bigl|\int_{t_k}^tL|x(s)-y(s)|\,ds\Bigr|,$$
implying
$$\sup_{T_0-\epsilon_0\le t<T_0}|x(t)-y(t)|\le|x(t_k)-y(t_k)|+L|t-t_k|\sup_{T_0-\epsilon_0\le t<T_0}|x(t)-y(t)|,$$
and thus, since $\sup_{0\le t<T_0}|x(t)|\le M<+\infty$, we have $\sup_{T_0-\epsilon_0\le t<T_0}|x(t)-y(t)|=0$, i.e. $x(t)=y(t)$ on $[T_0-\epsilon_0,T_0[$. Considering the continuous function $X(t)=x(t)$ for $0\le t<T_0$, $X(t)=y(t)$ for $T_0-\epsilon_0\le t\le T_0+\epsilon_0$, we see that for $T_0-\epsilon_0\le t\le T_0+\epsilon_0$,
$$\int_0^tF(s,X(s))\,ds=\int_0^{T_0-\epsilon_0}F(s,x(s))\,ds+\int_{T_0-\epsilon_0}^tF(s,y(s))\,ds
=x(T_0-\epsilon_0)-x_0+y(t)-y(T_0-\epsilon_0)=X(t)-x_0,$$
so that $X$ is a continuation of $x$, contradicting the maximality of the latter.
The previous theorems have the following consequences.

Corollary 2.1.14. We consider a continuous function $F:\mathbb{R}\times\mathbb{R}^n\to\mathbb{R}^n$ which satisfies the Lipschitz condition (2.1.1). Then for all $(t_0,x_0)\in\mathbb{R}\times\mathbb{R}^n$ the initial value problem
$$\begin{cases}\dot x(t)=F(t,x(t)),\\ x(t_0)=x_0,\end{cases}$$
has a unique maximal solution (defined on a non-empty interval $J$). Then
$$\text{if }\sup J<+\infty,\ \text{one has}\ \limsup_{t\to(\sup J)_-}|x(t)|=+\infty,\tag{2.1.17}$$
$$\text{if }\inf J>-\infty,\ \text{one has}\ \limsup_{t\to(\inf J)_+}|x(t)|=+\infty.\tag{2.1.18}$$
This follows immediately from Theorem 2.1.13. In other words, maximal solutions always exist (under mild regularity assumptions on $F$) and the only possible obstruction for a maximal solution to be global is that $|x(t)|$ gets unbounded, or if $\Omega\neq\mathbb{R}^n$, that $x(t)$ gets close to the boundary $\partial\Omega$.
Corollary 2.1.15. Let $I$ be an interval of $\mathbb{R}$. We consider a continuous function $F:I\times\mathbb{R}^n\to\mathbb{R}^n$ such that (2.1.1) holds and there exists a continuous function $\beta:I\to\mathbb{R}_+$ so that
$$\forall t\in I,\ \forall x\in\mathbb{R}^n,\qquad|F(t,x)|\le\beta(t)\bigl(1+|x|\bigr).\tag{2.1.19}$$
Then all maximal solutions of the ODE $\dot x=F(t,x)$ are global. In particular, the solutions of linear equations with $C^0$ coefficients are globally defined.

The motto for this result should be: solutions of nonlinear equations may blow up in finite time, whereas solutions of linear equations do exist globally.
Proof. We assume that $0\in I\subset[0,+\infty)$ and we consider a maximal solution of the ODE: we note that for $I\ni t\ge0$,
$$|x(t)|\le|x(0)|+\int_0^t\beta(s)\bigl(1+|x(s)|\bigr)\,ds=R(t),$$
so that $\dot R=\beta(1+|x|)\le\beta+\beta R$, $R(0)=|x(0)|$, and Gronwall's inequality gives
$$|x(t)|\le R(t)\le e^{\int_0^t\beta(s)\,ds}|x(0)|+\int_0^t\beta(s)e^{\int_s^t\beta(\sigma)\,d\sigma}\,ds<+\infty,\qquad\text{for all }I\ni t\ge0,$$
implying global existence. In particular a linear equation with $C^0$ coefficients would be $\dot x=A(t)x(t)+b(t)$, with $A$ an $n\times n$ continuous matrix and $t\mapsto b(t)\in\mathbb{R}^n$ continuous, so that (2.1.1) holds trivially and
$$|F(t,x)|=|A(t)x+b(t)|\le\|A(t)\|\,|x|+|b(t)|,$$
satisfying the assumption of the corollary.
We can check the example $\dot x=x(x^2-1)$:
if $|x(0)|>1$, the solutions blow up in finite time;
if $|x(0)|\in\{0,1\}$, they are stationary solutions;
if $|x(0)|\le1$, they are global solutions.
When $0<x(0)<1$, $x(t)\in\,]0,1[$ for all $t\in\mathbb{R}$ (and thus $x$ is decreasing); otherwise at some $t_0$ we would have by continuity $x(t_0)\in\{0,1\}$ and thus by uniqueness it would be a stationary solution $0$ or $1$, contradicting $0<x(0)<1$. The lines $x=0,\pm1$ separate the solutions.
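The three regimes can be observed numerically; the sketch below is an added illustration with a few sample initial data, using the same blow-up-detecting event as before.

```python
from scipy.integrate import solve_ivp

rhs = lambda t, x: [x[0] * (x[0] ** 2 - 1.0)]   # dx/dt = x (x^2 - 1)

def too_big(t, x):
    return abs(x[0]) - 1e6                      # detects blow-up
too_big.terminal = True

for x0 in (0.5, 1.0, 1.5):
    sol = solve_ivp(rhs, (0.0, 20.0), [x0], events=too_big, max_step=0.01)
    fate = "blow-up in finite time" if sol.t[-1] < 20.0 else "global (here up to t = 20)"
    print(f"x(0) = {x0}: {fate}, last computed value {sol.y[0, -1]:.4f}")
```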
[Figure 2.2: Solutions of the ODE $\dot x=x(x^2-1)$ for $x(0)\in\{0,\pm\tfrac12,\pm1,\pm\tfrac32\}$: blow-up in finite time for $|x(0)|>1$, global solutions for $|x(0)|\le1$.]
2.1.3 Continuous dependence

Theorem 2.1.16. Let $I$ be an interval of $\mathbb{R}$, $\Omega$ be an open set of $\mathbb{R}^n$ and $U$ be an open set of $\mathbb{R}^m$. We consider a continuous function $F:I_t\times\Omega_x\times U_\lambda\to\mathbb{R}^n$ such that the partial derivatives $\partial F/\partial x_j$, $\partial F/\partial\lambda_k$ exist and are continuous. Assuming $0\in I$, $y\in\Omega$, we define $x(t,y,\lambda)$ as the unique solution of the initial value problem
$$\frac{\partial x}{\partial t}(t,y,\lambda)=F\bigl(t,x(t,y,\lambda),\lambda\bigr),\qquad x(0,y,\lambda)=y.$$
Then the function $x$ is a $C^1$ function defined on a neighborhood of $(0,y,\lambda)$ in $I\times\Omega\times U$.
Proof. We consider first the flow of the ODE defined by
$$\partial_t\psi(t,x)=F(t,\psi(t,x)),\qquad\psi(0,x)=x,$$
and we recall that $\psi(t,\cdot)$ is Lipschitz-continuous from (2.1.13). According to Proposition 2.1.7, we may assume that $\psi$ is defined on $[0,T_0]\times K_0$ with $T_0>0$ and $K_0$ a compact subset of $\Omega$ with
$$|\psi(t,x_1)-\psi(t,x_2)|\le e^{tL_0}|x_1-x_2|.\tag{2.1.20}$$
For $x$ given in $\Omega$, we consider the linear ODE ($D$, $\partial_2F$ are $n\times n$ matrices)
$$\dot D(t,x)=(\partial_2F)\bigl(t,\psi(t,x)\bigr)D(t,x),\qquad D(0,x)=\mathrm{Id},\tag{2.1.21}$$
and we claim that $\partial_x\psi(t,x)=D(t,x)$. In fact we have, since $\partial_2F$ is continuous and $\psi(t,\cdot)$ is Lipschitz-continuous,
$$\psi(t,x+h)-\psi(t,x)=h+\int_0^t\Bigl(F\bigl(s,\psi(s,x+h)\bigr)-F\bigl(s,\psi(s,x)\bigr)\Bigr)ds
=h+\int_0^t\int_0^1(\partial_2F)\Bigl(s,\psi(s,x)+\theta\bigl(\psi(s,x+h)-\psi(s,x)\bigr)\Bigr)d\theta\,\bigl(\psi(s,x+h)-\psi(s,x)\bigr)ds.$$
As a result, with $\omega(t,x,h)=\psi(t,x+h)-\psi(t,x)$, we have
$$\dot\omega(t,x,h)=\int_0^1(\partial_2F)\bigl(t,\psi(t,x)+\theta\,\omega(t,x,h)\bigr)d\theta\ \omega(t,x,h),\qquad\omega(0,x,h)=h.$$
We obtain
$$\dot\omega(t,x,h)=(\partial_2F)(t,\psi(t,x))\,\omega(t,x,h)+\varepsilon\bigl(t,x,\omega(t,x,h)\bigr)\omega(t,x,h),\qquad
\varepsilon(t,x,\sigma)=\int_0^1\Bigl((\partial_2F)(t,\psi(t,x)+\theta\sigma)-(\partial_2F)(t,\psi(t,x))\Bigr)d\theta.$$
Using (2.1.21), (2.1.20) we have
$$\omega(0,x,h)-D(0,x)h=0\qquad\text{and}\qquad|\omega(t,x,h)|\le e^{tL_0}|h|,$$
so that
$$\frac{d}{dt}\bigl(\omega(t,x,h)-D(t,x)h\bigr)=(\partial_2F)(t,\psi(t,x))\bigl(\omega(t,x,h)-D(t,x)h\bigr)+\varepsilon\bigl(t,x,\omega(t,x,h)\bigr)\omega(t,x,h),$$
and as a consequence, with $r(t)=|\omega(t,x,h)-D(t,x)h|$ for $t\ge0$,
$$r(t)\le\int_0^t\|(\partial_2F)(s,\psi(s,x))\|\,r(s)\,ds+t\,\eta(h)|h|\le\int_0^tC_1r(s)\,ds+t\,\eta(h)|h|=R(t),$$
with $\lim_{h\to0}\eta(h)=0$. This gives
$$\dot R(t)=C_1r(t)+\eta(h)|h|\le C_1R(t)+\eta(h)|h|,\qquad R(0)=0,$$
and by Gronwall's inequality $R(t)\le\int_0^te^{C_1(t-s)}\,ds\,\eta(h)|h|=o(h)$, which gives
$$r(t)=o(h),\qquad\omega(t,x,h)=D(t,x)h+o(h),$$
so that $\partial_x\psi(t,x)=D(t,x)$. We note also that (2.1.21) and (2.1.13) imply that $D(t,x)$ is the solution of the linear equation
$$\dot D(t,x)=\Gamma(t,x)D(t,x),\qquad D(0,x)=\mathrm{Id},\tag{2.1.22}$$
with $\Gamma$ continuous.
Lemma 2.1.17. Let $N\in\mathbb{N}$, $I$ be an interval of $\mathbb{R}$ and $U$ be an open subset of $\mathbb{R}^n$. Let $\Gamma$ be a continuous function on $I\times U$ valued in $\mathcal M_N(\mathbb{R})$, the $N\times N$ matrices with real entries. Let $A(x)$ be a continuous mapping from $\mathbb{R}^n$ into $\mathcal M_N(\mathbb{R})$. The unique solution of the linear ODE
$$\dot D(t,x)=\Gamma(t,x)D(t,x),\qquad D(0,x)=A(x),$$
is a continuous function of its arguments.

Proof. From Theorem 2.1.1, we know that there exists a unique global solution for every $x\in U$, so that $I\ni t\mapsto D(t,x)$ is $C^1$ for each $x\in U$. We may assume $[0,T_0]\subset I$ with some positive $T_0$, and for $t\in[0,T_0]$, $x,x+h\in K_0$, where $K_0$ is a compact neighborhood of $x$ in $U$, we calculate
$$D(t,x+h)-D(t,x)=A(x+h)-A(x)+\int_0^t\bigl(\Gamma(s,x+h)D(s,x+h)-\Gamma(s,x)D(s,x)\bigr)ds,$$
entailing
$$|D(t,x+h)-D(t,x)|\le|A(x+h)-A(x)|+\int_0^t|\Gamma(s,x+h)-\Gamma(s,x)|\,|D(s,x+h)|\,ds+\int_0^t|\Gamma(s,x)|\,|D(s,x+h)-D(s,x)|\,ds.$$
We note also that
$$|D(t,x)|\le|A(x)|+\int_0^t|\Gamma(s,x)|\,|D(s,x)|\,ds\le\sup_{x\in K_0}|A|+\int_0^t|D(s,x)|\,ds\ \sup_{[0,T_0]\times K_0}|\Gamma|=R(t),$$
so that $\dot R(t)=\|\Gamma\|_{[0,T_0]\times K_0}|D(t,x)|\le\|\Gamma\|_{[0,T_0]\times K_0}R(t)$ and Gronwall's inequality implies
$$|D(t,x)|\le R(t)\le\|A\|_{K_0}\exp\bigl(t\|\Gamma\|_{[0,T_0]\times K_0}\bigr)\le C_0,\qquad\text{for }t\in[0,T_0],\ x\in K_0.$$
With $\rho(t,x,h)=|D(t,x+h)-D(t,x)|$, we get thus, with $C_1=\|\Gamma\|_{[0,T_0]\times K_0}$ and $\lim_{h\to0}\eta(h)=0$,
$$\rho(t,x,h)\le|A(x+h)-A(x)|+C_0T_0\eta(h)+C_1\int_0^t\rho(s,x,h)\,ds=R_1(t).$$
We obtain $\dot R_1(t)=C_1\rho(t,x,h)\le C_1R_1(t)$ and Gronwall's inequality gives
$$\rho(t,x,h)\le R_1(t)\le\bigl(|A(x+h)-A(x)|+C_0T_0\eta(h)\bigr)\exp(T_0C_1).$$
For $t\in[0,T_0]$, $x,x+h\in K_0$ we get
$$|D(t,x+h)-D(0,x)|\le\bigl(|A(x+h)-A(x)|+C_0T_0\eta(h)\bigr)\exp(T_0C_1)+\int_0^t|\Gamma(s,x)|\,|D(s,x)|\,ds
\le\bigl(|A(x+h)-A(x)|+C_0T_0\eta(h)\bigr)\exp(T_0C_1)+tC_1C_0,$$
which proves the continuity of $D$ at $(0,x)$, ending the proof of the Lemma.
We may apply this lemma to get the fact that the flow is $C^1$, under the assumptions of Theorem 2.1.16. To handle the question with an additional parameter $\lambda$, we use the previous results, remarking that the equation
$$\dot\psi(t,y,z)=F(t,\psi(t,y,z),z),\qquad\psi(0,y,z)=y,$$
can be written as
$$\dot\Psi(t,y,z)=\tilde F(t,\Psi(t,y,z)),\qquad\Psi(0,y,z)=(y,z),$$
with $\Psi(t,y,z)=(\psi(t,y,z),z)$ and $\tilde F(t,(x,z))=(F(t,x,z),0)$. The proof of Theorem 2.1.16 is complete.
Corollary 2.1.18. Let $I$ be an interval of $\mathbb{R}$ containing $0$, $\Omega$ be an open set of $\mathbb{R}^n$, $k\in\mathbb{N}^*$ and $F:I\times\Omega\to\mathbb{R}^n$ be a continuous function such that $(\partial_x^\alpha F)_{|\alpha|\le k}$ exist and are continuous on $I\times\Omega$. We denote⁷ by $J\times\Omega\ni(t,x)\mapsto\psi(t,x)\in\mathbb{R}^n$ the maximal solution of the ODE
$$\partial_t\psi(t,x)=F\bigl(t,\psi(t,x)\bigr),\qquad\psi(0,x)=x.$$
Then the function $\psi$ is a $C^1$ function such that $(\partial_x^\alpha\psi,\ \partial_t\partial_x^\alpha\psi)_{|\alpha|\le k}$ are continuous.
Proof. For $k=1$, $x_0\in\Omega$, we get from the previous theorem that $\psi$ is a $C^1$ function defined in a neighborhood of $(0,x_0)$ in $I\times\Omega$. Moreover, the proof of that theorem and (2.1.21) give
$$\partial_t\partial_x\psi(t,x)=(\partial_2F)\bigl(t,\psi(t,x)\bigr)\,\partial_x\psi(t,x),\qquad\partial_x\psi(0,x)=\mathrm{Id},\tag{2.1.23}$$
with $\partial_x\psi$ continuous, according to Lemma 2.1.17, entailing from the above equation the continuity of $\partial^2_{tx}\psi$. We want now to prove the theorem by induction on $k$, with the additional statement that for any multi-index $\alpha$ with $|\alpha|\le k$,
$$\partial_t\partial_x^\alpha\psi=\sum_{\substack{|\beta|\ge1\\ \alpha_1+\dots+\alpha_{|\beta|}=\alpha,\ |\alpha_j|\ge1}}c(\alpha_1,\dots,\alpha_{|\beta|},\beta)\,(\partial_2^\beta F)(t,\psi)\,\partial_x^{\alpha_1}\psi\cdots\partial_x^{\alpha_{|\beta|}}\psi,\tag{2.1.24}$$
where the $c(\alpha_1,\dots,\alpha_{|\beta|},\beta)$ are positive constants and $(\partial_x^\alpha\psi)(0,x)$ is a $C^1$ function. The formula (2.1.23) gives precisely the case $k=1$ (note that $\psi(0,x)$, $\partial_x\psi(0,x)$ are both $C^1$). Let us now assume that $k\ge1$ and that the assumptions of the Theorem are fulfilled for $k+1$. For $|\alpha|=k$, (2.1.24) implies that $\partial_x^\alpha\psi$ satisfies
$$\partial_t\partial_x^\alpha\psi=(\partial_2F)(t,\psi)\,\partial_x^\alpha\psi+G\bigl(t,\psi,(\partial_x^{\alpha'}\psi)_{\alpha'<\alpha}\bigr),$$
where $G$ is a linear combination of products $(\partial_2^\beta F)(t,\psi)\,\partial_x^{\alpha_1}\psi\cdots\partial_x^{\alpha_{|\beta|}}\psi$, with $|\beta|\le k$, $1\le|\alpha_j|<k$. As a result the function $G$ is a $C^1$ function of $t,x$. Since $(t,x)\mapsto\underbrace{(\partial_2F)}_{C^k}(t,\underbrace{\psi(t,x)}_{C^1})$ is also $C^1$ since $k\ge1$, $Y_\alpha=\partial_x^\alpha\psi$ is the solution of a linear ODE
$$\partial_tY_\alpha=a(t,x)Y_\alpha+f(t,x),\qquad a,f\in C^1.$$

⁷According to Remark 2.1.6, the time of existence of the solutions is bounded below by a positive constant $T_0$, provided the initial data belong to a compact subset.
A direct integration of that ODE gives
$$Y_\alpha(t,x)=e^{\int_0^ta(\sigma,x)\,d\sigma}\underbrace{Y_\alpha(0,x)}_{C^1}+\int_0^te^{\int_s^ta(\sigma,x)\,d\sigma}f(s,x)\,ds,$$
so that $\partial_x^\alpha\psi$ is also $C^1$, as well as $\partial_t\partial_x^\alpha\psi$ from the equation. Taking the derivative with respect to $x_1$ of (2.1.24), using the fact that $(\partial_x^\alpha F)_{|\alpha|\le k+1}$ exist, we get a linear combination of terms
$$(\partial_2^\beta F)(t,\psi)\,\partial_x^{\alpha_1}\partial_{x_1}\psi\cdots\partial_x^{\alpha_{|\beta|}}\psi,\qquad
(\partial_2^{\beta'}F)(t,\psi)\,\partial_{x_1}\psi\,\partial_x^{\alpha_1}\psi\cdots\partial_x^{\alpha_{|\beta|}}\psi\ \ \text{with }|\beta'|=|\beta|+1,$$
entailing the formula (2.1.24) for $k+1$; note also that for $|\alpha|=1+k\ge2$, $(\partial_x^\alpha\psi)(0,x)=0$. The proof of the induction is complete, as well as the proof of the theorem.
Corollary 2.1.19. Let $\Omega$ be an open set of $\mathbb{R}^n$, $1\le k\in\mathbb{N}$ and $F:\Omega\to\mathbb{R}^n$ be a $C^k$ function. Then the flow $\psi$ of the autonomous ODE $\dot\psi=F(\psi)$ is of class $C^k$ (in both variables $t,x$).

Proof. For $k=1$, it follows from Corollary 2.1.18. Assume inductively that $k\ge1$ and $F\in C^{k+1}$: from Corollary 2.1.18, we know that $\partial_t\partial_x^\alpha\psi$, $\partial_x^\alpha\psi$ are continuous functions for $|\alpha|\le k+1$. Moreover we know from (2.1.24) an explicit expression for $\partial_t\partial_x^\alpha\psi$ for $|\alpha|\le k+1$ and in particular for $|\alpha|=k$; since in (2.1.24), $|\beta|\le|\alpha|=k$, we can compute $\partial_t^2\partial_x^\alpha\psi$, which is a linear combination of
$$\underbrace{(\partial^{\beta'}F)(\psi)}_{|\beta'|=1+|\beta|\le k+1}\underbrace{F(\psi)}_{\partial_t\psi}\,\partial_x^{\alpha_1}\psi\cdots\partial_x^{\alpha_{|\beta|}}\psi,\qquad
(\partial^\beta F)(\psi)\,\partial_x^{\alpha_1}\underbrace{\partial_t\psi}_{F(\psi)}\cdots\partial_x^{\alpha_{|\beta|}}\psi,$$
which is a continuous function. More generally, $\partial_t^l\partial_x^\alpha\psi$ for $l+|\alpha|\le k+1$ is a polynomial in $\partial_x^\beta\psi$, $(\partial^\gamma F)(\psi)$, with $|\beta|\le k+1$, $|\gamma|\le k+1$, thus a continuous function. All the partial derivatives of $\psi$ of order $\le k+1$ are continuous, completing the induction and the proof.
2.2 Vector Fields, Flow, First Integrals

2.2.1 Definition, examples

Definition 2.2.1. Let $\Omega$ be an open set of $\mathbb{R}^n$. A vector field $X$ on $\Omega$ is a mapping from $\Omega$ into $\mathbb{R}^n$. The differential system associated to $X$ is
$$\frac{dx}{dt}=X(x).\tag{2.2.1}$$
An integral curve of $X$ is a solution of the previous system and the flow of the vector field is the flow of that system of ODE. A singular point of $X$ is a point $x_0\in\Omega$ such that $X(x_0)=0$. When, for $x\in\Omega$, $X(x)=(X_j(x))_{1\le j\le n}$, the vector field $X$ is denoted by
$$\sum_{1\le j\le n}X_j(x)\frac{\partial}{\partial x_j}.$$
N.B. We have introduced the notion of flow of an ODE in Proposition 2.1.7. In the above definition, we deal with a so-called autonomous flow since $X$ depends only on the variable $x$ and not on $t$.

Remark 2.2.2. Let $X$ be a $C^1$ vector field on some open subset $\Omega$ of $\mathbb{R}^n$. The flow of the vector field $X$, denoted by $\psi_X^t(x)$, is the maximal solution of the ODE
$$\frac{d}{dt}\psi_X^t(x)=X\bigl(\psi_X^t(x)\bigr),\qquad\psi_X^0(x)=x.\tag{2.2.2}$$
The mapping $t\mapsto\psi_X^t(x)$ is defined in a neighborhood of $0$ which may depend on $x$; however, thanks to Proposition 2.1.7 and Corollary 2.1.18, for each compact subset $K_0$ of $\Omega$, there exists $T_0>0$ such that $(t,x)\mapsto\psi_X^t(x)$ is defined and $C^1$ on $[-T_0,T_0]\times K_0$. We have, for $x\in\Omega$ and $t,s$ in a neighborhood of $0$,
$$\frac{d}{dt}\bigl(\psi_X^{t+s}(x)\bigr)=X\bigl(\psi_X^{t+s}(x)\bigr),\qquad\psi_X^{t+s}(x)_{|t=0}=\psi_X^s(x),$$
$$\frac{d}{dt}\Bigl(\psi_X^t\bigl(\psi_X^s(x)\bigr)\Bigr)=X\Bigl(\psi_X^t\bigl(\psi_X^s(x)\bigr)\Bigr),\qquad\psi_X^t\bigl(\psi_X^s(x)\bigr)_{|t=0}=\psi_X^s(x),$$
so that the uniqueness theorem 2.1.1 forces
$$\psi_X^{t+s}=\psi_X^t\circ\psi_X^s.\tag{2.2.3}$$
In particular the flow $\psi_X^t$ is a local $C^1$ diffeomorphism with inverse $\psi_X^{-t}$, since $\psi_X^0$ is the identity.
Let us give a couple of examples. The radial vector field in $\mathbb{R}^2$ is $x_1\partial_{x_1}+x_2\partial_{x_2}$, namely the mapping $\mathbb{R}^2\ni(x_1,x_2)\mapsto(x_1,x_2)\in\mathbb{R}^2$. The differential system associated to this vector field is
$$\begin{cases}\dot x_1=x_1,\\ \dot x_2=x_2,\end{cases}\qquad\text{i.e.}\qquad\begin{cases}x_1=y_1e^t,&x_1(0)=y_1,\\ x_2=y_2e^t,&x_2(0)=y_2,\end{cases}$$
so that the integral curves are straight lines through the origin. The flow $\psi(t,x)$ of the radial vector field, defined on $\mathbb{R}\times\mathbb{R}^2$, is thus
$$\psi(t,x)=e^tx.$$
We can note that $\psi(t,0_{\mathbb{R}^2})=0_{\mathbb{R}^2}$ for all $t$, expressing the fact that $0_{\mathbb{R}^2}$ is a singular point of $x_1\partial_{x_1}+x_2\partial_{x_2}$, namely a point where the vector field is vanishing.

We consider now the angular vector field in $\mathbb{R}^2$ given by $x_1\partial_{x_2}-x_2\partial_{x_1}$ (the mapping $\mathbb{R}^2\ni(x_1,x_2)\mapsto(-x_2,x_1)\in\mathbb{R}^2$). The differential system associated to this vector field is
$$\begin{cases}\dot x_1=-x_2,\\ \dot x_2=x_1,\end{cases}\qquad\text{i.e.}\qquad\frac{d}{dt}(x_1+ix_2)=i(x_1+ix_2),\qquad(x_1+ix_2)(t)=e^{it}\bigl(x_1(0)+ix_2(0)\bigr),$$
so that the integral curves are circles centered at the origin. The flow $\psi(t,x)$ of the angular vector field, defined on $\mathbb{R}\times\mathbb{R}^2$, is thus
$$\psi(t,x)=R(t)x,\qquad R(t)=\begin{pmatrix}\cos t&-\sin t\\ \sin t&\cos t\end{pmatrix}\quad\text{(rotation with angle $t$)}.$$
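Both closed-form flows can be compared with a numerical integration of the corresponding systems; the sketch below is an added illustration with an arbitrary initial point and time.

```python
import numpy as np
from scipy.integrate import solve_ivp

radial = lambda t, x: [x[0], x[1]]            # x1 d/dx1 + x2 d/dx2
angular = lambda t, x: [-x[1], x[0]]          # x1 d/dx2 - x2 d/dx1

x0, t1 = np.array([1.0, 2.0]), 0.8
R = np.array([[np.cos(t1), -np.sin(t1)], [np.sin(t1), np.cos(t1)]])

num_rad = solve_ivp(radial, (0.0, t1), x0, rtol=1e-10).y[:, -1]
num_ang = solve_ivp(angular, (0.0, t1), x0, rtol=1e-10).y[:, -1]
print(np.allclose(num_rad, np.exp(t1) * x0))  # True: the flow is e^t x
print(np.allclose(num_ang, R @ x0))           # True: the flow is the rotation R(t)
```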
[Figure 2.3: The radial vector field $x_1\partial_{x_1}+x_2\partial_{x_2}$.]
Remark 2.2.3. We note that if a vector field $X$ is locally Lipschitz-continuous on $\Omega$, then for all $x_0\in\Omega$ there exists a unique maximal solution $t\mapsto x(t)=\psi(t,x_0)$ of the ODE (2.2.1) defined on $[0,T_0[$ with some positive $T_0$. From Theorem 2.1.13, if $x([0,T_0[)$ is contained in a compact subset of $\Omega$, we have $T_0=+\infty$ (otherwise $T_0<+\infty$ and $\sup_{0\le t<T_0}|x(t)|=+\infty$).
Definition 2.2.4. Let $X$ be a vector field on $\Omega$, open subset of $\mathbb{R}^n$. A first integral of $X$ is a differentiable mapping $f:\Omega\to\mathbb{R}$ such that
$$\forall x\in\Omega,\qquad(Xf)(x)=\sum_{1\le j\le n}X_j(x)\frac{\partial f}{\partial x_j}(x)=0.$$
In other words, $\langle df,X\rangle=0$, where the bracket stands for a bracket of duality between the one-form $df=\sum_{1\le j\le n}\frac{\partial f}{\partial x_j}dx_j$ and the vector field $X=\sum_{1\le j\le n}X_j\frac{\partial}{\partial x_j}$.

If $f$ is of class $C^1$ with $df\neq0$, the set (a level surface of $f$)
$$\Sigma=\{x\in\Omega,\ f(x)=f(x_0)\}$$
is a $C^1$ hypersurface to which the vector field $X$ is tangent since it is "orthogonal" to the gradient of $f$ given by $\nabla f=(\partial_{x_j}f)_{1\le j\le n}$, which is the "normal" vector to $\Sigma$. The quotation marks here are important in the sense that orthogonality here must be understood in the sense of duality. The tangent bundle to the open set $\mathcal U$ is simply the product $\mathcal U\times\mathbb{R}^n$ and a vector field on $\mathcal U$ is a section of that bundle, i.e. a mapping $\mathcal U\ni x\mapsto(x,X(x))\in\mathcal U\times\mathbb{R}^n$. Now if $\mathcal V$ is an open set of $\mathbb{R}^n$ and $\kappa:\mathcal V\to\mathcal U$ is a $C^1$-diffeomorphism, we can define the pull-back of the vector field $X$ by $\kappa$ as the vector field $Y$ on $\mathcal V$ such that for $f\in C^1(\mathcal U)$,
$$\langle d(f\circ\kappa),Y\rangle=\langle df,X\rangle\circ\kappa,\tag{2.2.4}$$
[Figure 2.4: The angular vector field $x_1\partial_{x_2}-x_2\partial_{x_1}$.]
i.e. for $X=\sum_{1\le j\le n}a_j(x)\partial_{x_j}$, we define $Y=\sum_{1\le k\le n}b_k(y)\partial_{y_k}$ so that
$$\sum_{1\le k\le n}b_k(y)\frac{\partial(f\circ\kappa)}{\partial y_k}(y)=\sum_{1\le j\le n}a_j(\kappa(y))\frac{\partial f}{\partial x_j}(\kappa(y)),$$
which means
$$\sum_{1\le k\le n}b_k(y)\frac{\partial(f\circ\kappa)}{\partial y_k}(y)=\sum_{1\le k,j\le n}b_k(y)\frac{\partial f}{\partial x_j}(\kappa(y))\frac{\partial\kappa_j}{\partial y_k}(y)=\sum_{1\le j\le n}a_j(\kappa(y))\frac{\partial f}{\partial x_j}(\kappa(y)),$$
and this gives
$$a_j(\kappa(y))=\sum_{1\le k\le n}b_k(y)\frac{\partial\kappa_j}{\partial y_k}(y).$$
Abusing the notation, these relationships are usually written in a more convenient way as
$$\sum_kb_k\frac{\partial}{\partial y_k}=\sum_{j,k}b_k\frac{\partial x_j}{\partial y_k}\frac{\partial}{\partial x_j}=\sum_j\underbrace{\Bigl(\sum_kb_k\frac{\partial x_j}{\partial y_k}\Bigr)}_{a_j}\frac{\partial}{\partial x_j}.$$
Anyhow, we get immediately from these expressions that if $X(f)=0$, i.e. $f$ is a first integral of $X$, then the function $f\circ\kappa$ is a first integral of $Y$, meaning that the notion of first integral is invariant by diffeomorphism, as well as the notion of a vector field tangent to the level surfaces of a function $f$. Similarly, the one-form $df$ has an invariant meaning as a conormal vector to $\Sigma$: the reader may remember from that discussion that no Euclidean nor Riemannian structure was involved in the definitions above.
Let us go back to our examples above: the angular vector field $x_1\partial_2-x_2\partial_1$ has obviously the first integral $x_1^2+x_2^2$ and we see that this vector field is tangential to the circles centered at $0$. The radial vector field $x_1\partial_1+x_2\partial_2$ has the first integral $x_2/x_1$ on the open set $\{x_1\neq0\}$ and is indeed tangential to all straight lines through the origin. We see as well that
$$\frac{x_1}{\sqrt{x_1^2+x_2^2}},\qquad\frac{x_2}{\sqrt{x_1^2+x_2^2}}$$
are first integrals of the radial vector field, as well as all homogeneous functions of degree $0$. Considering the diffeomorphism⁸
$$\mathbb{R}^*_+\times\,]-\pi,\pi[\;\longrightarrow\;\mathbb{C}\setminus\mathbb{R}_-,\qquad(r,\theta)\mapsto re^{i\theta},$$
we see as well that
$$r\frac{\partial}{\partial r}=x_1\partial_1+x_2\partial_2,\qquad\frac{\partial}{\partial\theta}=x_1\partial_2-x_2\partial_1.\tag{2.2.5}$$
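The identities (2.2.5) are just the chain rule applied to $f(r\cos\theta,r\sin\theta)$; the sympy check below is an added illustration on a couple of sample test functions (it is of course not a proof of the general identity).

```python
import sympy as sp

r, th, X1, X2 = sp.symbols('r theta X1 X2', real=True)
x1, x2 = r * sp.cos(th), r * sp.sin(th)          # polar coordinates

for f in (X1**3 * X2, sp.exp(X1) * sp.sin(X2)):  # sample test functions
    radial_f = (X1 * sp.diff(f, X1) + X2 * sp.diff(f, X2)).subs({X1: x1, X2: x2})
    angular_f = (X1 * sp.diff(f, X2) - X2 * sp.diff(f, X1)).subs({X1: x1, X2: x2})
    g = f.subs({X1: x1, X2: x2})                 # f in polar coordinates
    print(sp.simplify(r * sp.diff(g, r) - radial_f),   # 0: r d/dr = x1 d1 + x2 d2
          sp.simplify(sp.diff(g, th) - angular_f))     # 0: d/dtheta = x1 d2 - x2 d1
```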
For the so-called spherical coordinates in $\mathbb{R}^3$, we have
$$\begin{cases}x_1=r\cos\theta\sin\phi,\\ x_2=r\sin\theta\sin\phi,\\ x_3=r\cos\phi,\end{cases}\tag{2.2.6}$$
($\theta$ is the longitude, $\phi$ the colatitude⁹) and the diffeomorphism
$$\Phi:\ ]0,+\infty[\times]0,\pi[\times]-\pi,\pi[\ \longrightarrow\ \mathbb{R}^3\setminus\{(x_1,x_2,x_3),\ x_1\le0,\ x_2=0\}=T,\qquad(r,\phi,\theta)\mapsto(r\cos\theta\sin\phi,\ r\sin\theta\sin\phi,\ r\cos\phi).$$
We have
$$\frac{\partial}{\partial r}=\cos\theta\sin\phi\,\frac{\partial}{\partial x_1}+\sin\theta\sin\phi\,\frac{\partial}{\partial x_2}+\cos\phi\,\frac{\partial}{\partial x_3},$$
$$\frac{\partial}{\partial\phi}=r\cos\theta\cos\phi\,\frac{\partial}{\partial x_1}+r\sin\theta\cos\phi\,\frac{\partial}{\partial x_2}-r\sin\phi\,\frac{\partial}{\partial x_3},$$
$$\frac{\partial}{\partial\theta}=-r\sin\theta\sin\phi\,\frac{\partial}{\partial x_1}+r\cos\theta\sin\phi\,\frac{\partial}{\partial x_2}.$$

⁸For $z\in\mathbb{C}\setminus\mathbb{R}_-$, we define $\operatorname{Log}z=\int_{[1,z]}\frac{d\xi}{\xi}$ and we get by analytic continuation that $e^{\operatorname{Log}z}=z$; we define $\arg z=\operatorname{Im}(\operatorname{Log}z)$, so that $z=e^{i\arg z}e^{\operatorname{Re}\operatorname{Log}z}=e^{i\arg z}e^{\ln|z|}$.
⁹The latitude is $\frac\pi2-\phi$, equal to $\pi/2$ at the north pole $(0,0,1)$ and to $-\pi/2$ at the south pole $(0,0,-1)$.
Since we have an analytic determination of the argument (see footnote 8) on $\mathbb{C}\setminus\mathbb{R}_-$, we define on $T$
$$\begin{cases}r=\sqrt{x_1^2+x_2^2+x_3^2},\\ \phi=\arg\bigl(x_3+i(x_1^2+x_2^2)^{1/2}\bigr),\\ \theta=\arg(x_1+ix_2).\end{cases}$$
The radial vector field $x\cdot\partial_x=x_1\partial_{x_1}+x_2\partial_{x_2}+x_3\partial_{x_3}=r\partial_r$ is transverse to the spheres with center $0$ since $\partial_r r=1\neq0$, whereas the vector fields $\partial_\theta$, $\partial_\phi$ are tangential to the spheres with center $0$ since it is easy to verify that $\partial_\theta r=\partial_\phi r=0$. In fact the three vector fields $x_2\partial_{x_1}-x_1\partial_{x_2}$, $x_3\partial_{x_2}-x_2\partial_{x_3}$, $x_1\partial_{x_3}-x_3\partial_{x_1}$ are tangential to the spheres with center $0$ and
$$\frac{\partial}{\partial\theta}=x_1\partial_{x_2}-x_2\partial_{x_1},\tag{2.2.7}$$
$$\frac{\partial}{\partial\phi}=r\cos\theta\cos\phi\,\partial_{x_1}+r\sin\theta\cos\phi\,\partial_{x_2}-r\sin\phi\,\partial_{x_3},$$
so that
$$r\sin\phi\,\frac{\partial}{\partial\phi}=x_3\,r\frac{\partial}{\partial r}-r^2\frac{\partial}{\partial x_3}
=x_1x_3\,\partial_{x_1}+x_2x_3\,\partial_{x_2}-(x_1^2+x_2^2)\,\partial_{x_3}
=x_1\bigl(x_3\partial_{x_1}-x_1\partial_{x_3}\bigr)+x_2\bigl(x_3\partial_{x_2}-x_2\partial_{x_3}\bigr),$$
so that
$$\frac{\partial}{\partial\phi}=\frac{x_1}{\sqrt{x_1^2+x_2^2}}\bigl(x_3\partial_{x_1}-x_1\partial_{x_3}\bigr)+\frac{x_2}{\sqrt{x_1^2+x_2^2}}\bigl(x_3\partial_{x_2}-x_2\partial_{x_3}\bigr).\tag{2.2.8}$$
The integral curves of $\partial/\partial\theta$ are the so-called parallels, which are horizontal circles with center on the $x_3$-axis (e.g. the Equator, the Arctic circle), whereas the integral curves of $\partial/\partial\phi$ are the meridians, which are circles (or half-circles) with diameter $NS$, where $N=(0,0,1)$ is the north pole and $S=(0,0,-1)$ is the south pole.
2.2.2 Local Straightening of a non-singular vector eld
Theorem 2.2.5. Let k N

, an open set of R
n
, x
0
and let X be C
k
-vector
eld on such that X(x
0
) ,= 0 (X is non-singular at x
0
). Then there exists a C
k
dieomorphism : V U, where U is an open neighborhood of x
0
and V is an
open neighborhood of 0
R
n such that

_
X
[U
_
=

y
1
.
2.2. VECTOR FIELDS, FLOW, FIRST INTEGRALS 35
Proof. Assuming as we may x
0
= 0, with X =

1jn
a
j
(x)
x
j
, we may assume
that a
1
(0) ,= 0. The ow of the vector eld X satises

(t; x) = X
_
(t; x)
_
, (0; x) = x (2.2.9)
and thus the mapping (y
1
, y
2
, . . . , y
n
) (y
1
, y
2
, . . . , y
n
) = (y
1
; 0, y
2
, . . . , y
n
) is of
class C
k
in a neighborhood of 0 (see Corollary 2.1.18) with a Jacobian determinant
at 0 equal to

t
(0)

y
2
(0)

y
n
(0) = X(0) e
2
e
n
= a
1
(0) ,= 0.
As a result, from the local inversion Theorem, is a C
k
dieomorphism between V
and U, neighborhoods of 0 in R
n
. From the identity

t
_
u((t, y))
_
=

1jn
u
x
j
_
(t, y)
_

j
t
(t, y) =

1jn
a
j
_
(t, y)
_
u
x
j
_
(t, y)
_
,
we get

y
1
_
(u )(y)
_
=

y
1
_
u
_
(y
1
; 0, y
2
, . . . , y
n
)
__
=

1jn
a
j
((y))
u
x
j
((y)),
so that
d(u ),

y
1
) = du, X)
and the identity (2.2.4) gives the result. We have with (b
1
, . . . , b
n
) = (1, 0 . . . , 0)

_
X
[U
_
=

1kn
b
k
(y)

y
k
, a
j
((y)) =

1kn

j
y
k
(y)b
k
(y) = a
j
((y))b
1
(y).
The previous method also gives a way to actually solve a rst-order linear PDE:
let us consider the following equation on some open set of R
n

1jn
a
j
(x)
u
x
j
(x) = a
0
(x)u(x) +f(x) (2.2.10)
where the a
j

1jn
C
1
(; R), a
0
, f C
1
(; R). We are seeking some C
1
solution
u. That equation can be written as Xu = a
0
u +f and if is the ow of the vector
eld X, i.e., satises (2.2.9) and u is a C
1
function solving (2.2.10), we get
d
dt
_
u((t, x))
_
= du, X)((t, x)) = a
0
((t, x))u((t, x)) +f((t, x)).
The function t u((t, x)) satises an ODE that we can solve explicitely: with
a(t, x) =
_
t
0
a
0
((s, x))ds
u((t, x)) = e
a(t,x)
u(x) +
_
t
0
e
a(t,x)a(s,x)
f((s, x))ds. (2.2.11)
36 CHAPTER 2. VECTOR FIELDS
In particular, if in the formula above x belongs to some C
1
hypersurface so that
X is tranverse to , the proof of the previous theorem shows that the mapping
R (t, x) (t, x) is a local C
1
-dieomorphism. As a result the datum
u
[
determines uniquely the C
1
solution of the equation (2.2.10).
Corollary 2.2.6. Let be an open set of R
n
, a C
1
hypersurface
10
of and X a
C
1
vector eld on such that X is transverse to . Let a
0
, f C
0
(), x
0
and
g C
0
(). There exists a neighborhood | of x
0
such that the Cauchy problem
_
Xu = a
0
u +f on |,
u
[
= g on ,
has a unique continuous solution.
Proof. Using Theorem 2.2.5, we may assume that X =

x
n
. Since X is transverse
to the hypersurface with equation (x) = 0, the implicit function theorem gives
that on a possibly smaller neighborhood of x
0
(we take x
0
= 0),
= (x
t
, x
n
) R
n1
R
n
, x
n
= (x
t
)
where is a C
1
function. We get
u
x
n
= a
0
(x
t
, x
n
)u(x
t
, x
n
) +f(x
t
, x
n
), u(x
t
, (x
t
)) = g(x
t
).
Using the notation
a(x
t
, x
n
) =
_
x
n
(x

)
a
0
(x
t
, t)dt
The unique solution of that ODE with respect to the variable x
n
with parameters
x
t
is given by
u(x
t
, x
n
) = e
a(x

,x
n
)
g(x
t
) +
_
x
n
(x

)
e
a(x

,x
n
)a(x

,t)
f(x
t
, t)dt.
Denition 2.2.7. Let be an open set of R
n
, X a Lipschitz-continuous vector eld
on . The divergence of X is dened as div X =

1jn

x
j
(a
j
).
Denition 2.2.8. Let be an open set of R
n
: will be said to have a C
1
boundary
if for all x
0
, there exists a neighborhood U
0
of x
0
in R
n
and a C
1
function

0
C
1
(U
0
; R) such that d
0
does not vanish and U
0
= x U
0
,
0
(x) < 0.
Note that U
0
= x U
0
,
0
(x) = 0 since the implicit function the-
orem shows that, if (
0
/x
n
)(x
0
) ,= 0 for some x
0
, the mapping x
(x
1
, . . . , x
n1
,
0
(x)) is a local C
1
-dieomorphism.
10
We shall dene a C
1
hypersurface of as the set = x , (x) = 0 where the function
C
1
(; R) such that d ,= 0 at . The transversality of the vector eld X means here that
X ,= 0 at .
2.2. VECTOR FIELDS, FLOW, FIRST INTEGRALS 37
Theorem 2.2.9 (Gauss-Green formula). Let be an open set of R
n
with a C
1
boundary, X a Lipschitz-continuous vector eld on . Then we have, if X is com-
pactly supported or is bounded,
_

(div X)dx =
_

X, )d, (2.2.12)
where is the exterior unit normal and d is the Euclidean measure on .
Proof. We may assume that = x R
n
, (x) < 0, where : R
n
R is C
1
and such that d ,= 0 at . The exterior normal to the open set is dened on (a
neighborhood of) as = |d|
1
d. We can reformulate the theorem as
_

divX dx =
_
X, )((x))|d(x)| = lim
0
+
_
X, d(x))((x)/)dx/
where C
c
(R) has integral 1. Since it is linear in X, it is enough to prove it for
a(x)
x
1
, with a C
1
c
. We have, with = 1 on (1, +), = 0 on (, 0),
_

divX dx =
_
(x)<0
a
x
1
(x)dx = lim
0
+
_
a
x
1
(x)((x)/)dx
= lim
0
+
_
a(x)
t
((x)/)
1

x
1
(x)dx = lim
0
+
_
a(x)
x
1
, d)
t
((x)/)
1
dx
= lim
0
+
_
X, d)((x)/)
1
dx,
with (t) =
t
(t),
_
+

(t)dt =
_
+


t
(t)dt =
_
+


t
(t)dt = 1,
In two dimensions, we get the Green-Riemann formula
__

_
P
x
+
Q
y
_
dxdy =
_

Pdy Qdx, (2.2.13)


since with X = P
x
+Q
y
, (x, y) < 0, the lhs of (2.2.13) and (2.2.12) are the
same, whereas the rhs of (2.2.12) is, if (x, y) = f(x) y on the support of X,
__
X, d)()dxdy = lim
0
+
__
_
P(x, y)f
t
(x) Q(x, y)
_
((f(x) y)/)dxdy/
=
_
_
P(x, f(x))f
t
(x) Q(x, f(x))
_
dx =
_

Pdy Qdx.
Corollary 2.2.10. Let be an open subset of R
n
with a C
1
boundary, u, v C
2
(

).
Then
_

(u)(x)v(x)dx =
_

u(x)(v)(x)dx +
_

_
v
u

u
v

_
d, (2.2.14)
_

u vdx =
_

uvdx +
_

u
v

d. (2.2.15)
where =

1jn

2
x
j
is the Laplace operator and
u

= u where is the
exterior normal.
Proof. We have vu = div
_
vu
_
u v so that vuuv = div(vuuv)
providing the rst formula from Greens formula (2.2.12). The same formula written
as u v = uv + div
_
uv
_
entails the second formula.
38 CHAPTER 2. VECTOR FIELDS
2.2.3 2D examples of singular vector elds
Theorem 2.2.5 shows that, locally, a C
1
non-vanishing (i.e. non-singular) vector eld
is equivalent to /x
1
. The general question of classication of vector elds with
singularities is a dicult one, but we can give a pretty complete discussion for planar
vector elds with non-degenerate dierential: it amounts to look at the system of
ODE
_
x
1
x
2
_
= A
_
x
1
x
2
_
, where A is a 2 2 constant real matrix, det A ,= 0. (2.2.16)
The characteristic polynomial of A is
p
A
(X) = X
2
X trace A + det A,
A
= (trace A)
2
4 det A.
Case
A
> 0: two distinct real roots
1
,
2
, R
2
= E

1
E

2
In a basis of eigenvectors the system is
_
y
1
=
1
y
1
y
2
=
2
y
2
i.e.
_
y
1
= e
t
1
y
10
y
2
= e
t
2
y
20
det A > 0, trace A > 0: 0 <
1
<
2
y
2
y
20
=
_
y
1
y
10
_

2
/
1
repulsive node.
det A > 0, trace A < 0:
2
<
1
< 0, attractive node, reverse the arrows in the
previous picture,
y
2
y
20
=
_
y
1
y
10
_

2
/
1
.
det A < 0:
1
< 0 <
2
, saddle point
y
2
y
20
_
y
1
y
10
_

2
/(
1
)
= 1
Case
A
= 0: a double real root,
1
=
2
=
1
2
trace A.
dimE

= 2 attractive node if trace A < 0, repulsive node if trace A > 0,


dimE

= 1 attractive node if trace A < 0, repulsive node if trace A > 0.


Case
A
< 0: two distinct conjugate non-real roots,
1
= +i,
2
= i,
> 0
= 0 center,
> 0 expanding spiral point,
< 0 shrinking spiral point.
Exercise: draw a picture of the integral curves for each case above.
2.3. TRANSPORT EQUATIONS 39
2.3 Transport equations
2.3.1 The linear case
We shall deal rst with the linear equation
_

_
u
t
+

1jd
a
j
(t, x)
u
x
j
= a
0
(t, x)u +f(t, x) on t > 0, x R
d
,
u
[t=0
= u
0
, for x R
d
,
(2.3.1)
where a
j
, f are functions of class C
1
on R
d+1
. We claim that solving that rst-order
scalar linear PDE amounts to solve a non-linear system of ODE. We check with
x(t) =
_
x
j
(t)
_
1jd
R
d
x
j
(t) = a
j
_
t, x(t)
_
, 1 j d, x
j
(0) = y
j
, (2.3.2)
and we note that if u is a C
1
solution of (2.3.1), we have
d
dt
_
u(t, x(t))
_
= (
t
u)(t, x(t)) +

j
(
x
j
u)(t, x(t))a
j
(t, x(t))
= a
0
(t, x(t))u(t, x(t)) +f(t, x(t)),
so that u(t, x(t)) = u
0
(y)e
R
t
0
a
0
(s,x(s))ds
+
_
t
0
e
R
t
s
a
0
(,x())d
f(s, x(s))ds. As a result, the
value of the solution u along the characteristic curves t x(t), which satises a
linear ODE, is completely determined by u
0
, a
0
and the source term f. We can write
as well x(t) = (t, y) and notice that is a C
1
function: following Theorem 2.1.16,
is the ow of the non-autonomous ODE (2.3.2) and we shall say as well that is
the ow of the non-autonomous vector eld

t
+

1jn
a
j
(t, x)

x
j
. (2.3.3)
We have

j
ty
k
(t, y) =

1ld
a
j
x
l
(t, (t, y))

l
y
k
(t, y), i.e.
d
dt

y
=
a
x

y
,
so that with D(t, y) = det
_

j
y
k
_
1j,kd
, we have from (0, y) = y,
D(t, y) = exp
_
t
0
(trace
a
x
)(s, (s, y))ds = exp
_
t
0
(div a)(s, (s, y))ds. (2.3.4)
As a result, from the implicit function theorem, the equation x = (t, y) is locally
equivalent to y = (t, x) with a C
1
function so that
(t, (t, x)) = x.
40 CHAPTER 2. VECTOR FIELDS
We obtain from
u(t, (t, y)) = u
0
(y)e
R
t
0
a
0
(s,(s,y))ds
+
_
t
0
e
R
t
s
a
0
(,(,y))d
f(s, (s, y))ds
that
u(t, x) = u
0
((t, x))e
R
t
0
a
0
(s,(s,(t,x)))ds
+
_
t
0
e
R
t
s
a
0
(,(,(t,x)))d
f
_
s, (s, (t, x))
_
ds.
(2.3.5)
We need to introduce a slightly more general version of the non-autonomous ow
in order to compare it with the actual ow of the vector eld
t
+ a(t, x)
x
in
particular when a satises the assumption (2.1.19), ensuring global existence for the
characteristic curves.
Denition 2.3.1. Let a : RR
d
R
d
be a C
1
function such that (2.1.19) holds.
The non-autonomous ow of the vector eld
t
+a(t, x)
x
is dened as the mapping
R R R
d
(t, s, y) (t, s, y) R
d
such that
_

t
_
(t, s, y) = a
_
t, (t, s, y)
_
, (s, s, y) = y. (2.3.6)
Lemma 2.3.2. With a as in the previous denition, we have (t, 0, y) = (t, y),
where is dened by (2.3.2). The ow

of the vector eld


t
+a(t, x)
x
in R
1+d
satises

(s, y) =
_
s +, (s +, s, y)
_
. (2.3.7)
Moreover for s,
1
,
2
R, y R
d
, we have
(s +
1
+
2
, s, y) =
_
s +
1
+
2
, s +
1
, (s +
1
, s, y)
_
. (2.3.8)
For all t R, y (t, y) is a C
1
dieomorphism of R
d
.
Proof. Formula (2.3.7) follows immediately from (2.3.6). Considering the lhs (resp.
rhs) u(
2
) (resp. v(
2
)) of (2.3.8), we have
u(
2
) = a(s +
1
+
2
, u(
2
)), v(
2
) = a(s +
1
+
2
, v(
2
)),
and u(0) = (s +
1
, s, y), v(0) =
_
s +
1
, s +
1
, (s +
1
, s, y)
_
= (s +
1
, s, y),
so that (2.3.8) follows. Note also that (2.3.8) is equivalent to the fact that

1
+
2
=

2
proven in (2.2.3). In particular, we have
y = (s, s, y) =
_
s, s +, (s +, s, y)
_
so that y = (s, t, (t, s, y)) = (0, t, (t, y)) = (s, 0, (0, s, y)) = (s, (0, s, y))
and for all t R, y (t, y) = (t, 0, y) is a C
1
global dieomorphism of R
d
with inverse dieomorphism x (t, x) = (0, t, x): both mappings are C
1
from
Theorem 2.1.16, and with
t
(x) = (t, x),
t
(y) = (t, y) we have
(
t

t
)(y) =
t
((t, y)) = (0, t, (t, 0, y)) = y,
(
t

t
)(x) =
t
((t, x)) = (t, 0, (0, t, x)) = x.
2.3. TRANSPORT EQUATIONS 41
Remark 2.3.3. The group property of the non-autonomous ow is expressed by
(2.3.8) and in general, (t + s, y) ,= (t, (s, y)): the simplest example is given by
the vector eld
t
+ 2t
x
in R
2
t,x
for which
(t, y) = t
2
+y =
_
(t +s, y) = (t +s)
2
+y
(t, (s, y)) = t
2
+s
2
+y.
Note also that here (t, s, y) = t
2
s
2
+y and (2.3.8) reads
(s +
1
+
2
)
2
s
2
+y = (s +
1
+
2
)
2
(s +
1
)
2
+ (s +
1
)
2
s
2
+y.
Of course when a does not depend on t, the ow of L =
t
+a(x)
x
. .
X
is given by

L
(s, y) = (s +,

X
(y)), (t, s, y) =
ts
X
(y), (t, y) =
t
X
(y)
and in that very particular case, (t +s, y) = (t, (s, y)).
We have proven the following
Theorem 2.3.4. Let a : R
t
R
d
x
R
d
be a continuous function which satises
(2.1.19) and such that
x
a is continuous. Let a
0
, f : R
t
R
d
x
R be continuous
functions and u
0
: R
d
R be a C
1
function. The Initial-Value-Problem (2.3.1) has
a unique C
1
solution given by (2.3.5).
Note that, thanks to the hypothesis (2.1.19) and
x
a continuous, the ow x
(t, x) and y (t, y) dened above are global C
1
dieomorphisms of R
d
.
Remark 2.3.5. Let us assume that a
0
and f are both identically vanishing: then
we have from (2.3.5)
u(t, x) = u
0
((t, x))
and since (t, y) = y +
_
t
0
a
_
s, (s, y)
_
ds we get
x = (t, (t, x)) = (t, x) +
_
t
0
a
_
s, (s, (t, x))
_
ds
so that [(t, x) x[ [t[|a|
L
and the solution u(t, x) depends only on u
0
on the
ball B(x, [t[|a|
L
), that is a nite-speed-of-propagation property. Moreover the
range of u(t, ) is included in the range of u
0
: in particular if u
0
is valued in [m, M],
so is the solution u. If the vector eld is autonomous, we have seen that
(t, x) =
t
X
(x), X =

1jd
a
j
(x)
x
j
,
so that u(t, x) = u
0
_

t
X
(x)
_
, u(t +s, x) = u
0
_

ts
X
(x)
_
= u(t,
s
X
(x)
_
.
42 CHAPTER 2. VECTOR FIELDS
2.3.2 The quasi-linear case
A linear companion equation
We want now to investigate a more involved case
_

_
u
t
+

1jd
a
j
_
t, x, u(t, x)
_
u
x
j
= b
_
t, x, u(t, x)
_
on 0 < t < T, x R
d
,
u
[t=0
= u
0
, for x R
d
.
(2.3.9)
That equation is a general quasi-linear scalar rst-order equation. We know from
the introduction and the discussion around Burgers equation (1.1.4) that we should
not expect global existence in general for such an equation. We assume that the
functions a
j
, b : R
t
R
d
x
R
v
R are of class C
1
and we shall introduce a companion
linear homogeneous equation
F
t
+

1jd
a
j
(t, x, v)
F
x
j
+b(t, x, v)
F
v
= 0, F(0, x, v) = v u
0
(x). (2.3.10)
From the discussion in the previous section, we know that, provided u
0
is continuous,
there exists a unique C
1
local solution of the initial value problem (2.3.10), near the
point (t, x, v) = (0, x
0
, v
0
= u
0
(x
0
)). We claim now that, at this point, F/v ,= 0:
in fact this follows obviously from the identitv F(0, x, v) = v u
0
(x) which implies
that (F/v)(0, x, v) = 1. We can now apply the Implicit Function Theorem, which
implies that the equation F(t, x, v) = 0 is equivalent to v = u(t, x) with a C
1
function
u dened in a neighborhood of (t = 0, x = x
0
) with u(0, x
0
) = u
0
(x
0
). We have
F(t, x, u(t, x)) 0 (2.3.11)
and
u
t
+

1jd
a
j
_
t, x, u(t, x)
_
u
x
j
=
F/t
F/v
(t, x, u)

1jd
a
j
(t, x, u)
F/x
j
F/v
(t, x, u) = b(t, x, u),
so that we have found a local solution for our quasi-linear PDE. Moreover, since
F(0, x, v) = v u
0
(x), we get from (2.3.11) u(0, x) u
0
(x) = 0, so that the initial
condition is also fullled. We shall develop later on this discussion on the rst-order
scalar quasi-linear case, but it is interesting to note that nding a local solution for
such an equation is not more dicult than getting a solution for a linear equation.
Moreover, we shall be able to track the solution by a suitable method of characteris-
tics adapted to this quasi-linear case, in fact following the method of characteristics
for the companion linear equation (2.3.10).
The previous discussion shows that a local solution of (2.3.9) does exist. A direct
method of characteristics can be devised, following the discussion above: we assume
that u is a C
1
solution of (2.3.9) and we consider the ODE
x(t) = a(t, x, u(t, x(t))), v(t) = b(t, x(t), u(t, x(t))), x(0) = x
0
, v(0) = u
0
(x
0
).
(2.3.12)
2.3. TRANSPORT EQUATIONS 43
We calculate
d
dt
_
u(t, x(t)) v(t)
_
= (
t
u)(t, x(t)) + (
x
u)(t, x(t)) a(t, x(t), u(t, x(t))) b(t, x(t), u(t, x(t))) = 0
so that
u(t, x(t)) = v(t). (2.3.13)
2.3.3 Classical solutions of Burgers equation
We have already encountered Burgers equation in (1.1.4). According to the previous
discussion, the linear companion equation is
t
F+v
x
F = 0 whose ow is (t, x, v) =
(x+tv, v) since x = v, v = 0; we have F(t, x+tv, v) = vu
0
(x) and thus the identity
F(t, x +tu
0
(x), u
0
(x)) = 0. Since a C
1
solution u(t, x) of

t
u +u
x
u = 0, u(0, x) = u
0
(x), (2.3.14)
satises the identity (2.3.11) we have
u(t, x +tu
0
(x)) = u
0
(x). (2.3.15)
If u
0
C
1
with u
0
, u
t
0
bounded the mapping x x + tu
0
(x) = f
t
(x) for t 0 is a
C
1
dieomorphism provided 1 +tu
t
0
(x) > 0 which is satised whenever
0 t < T
0
=
1
sup(u
t
0
)
+
since the inequality u
t
0
M with M 0 implies
1 +tu
t
0
(x) 1 tM > 1 T
0
M = 0.
Moreover f
t
and g
t
= f
1
t
are C
1
functions of t. As a result, we have for 0 t < T
0
u(t, x) = u
0
(g
t
(x)), so that sup
x
[u(t, x)[ = sup
x
[u
0
(x)[.
Note that
(
x
u)(t, f
t
(x))f
t
t
(x) = u
t
0
(x) = (
x
u)(t, x +tu
0
(x)) =
u
t
0
(x)
1 +tu
t
0
(x)
so that this quantity is unbounded when t (T
0
)

if u
t
0
reaches a positive max-
imum at x, but nevertheless
_
[(
x
u)(t, x)[dx =
_
[u
t
0
(g
t
(x))[g
t
t
(x)dx =
_
[u
t
0
(x)[dx.
Let us check some simple examples.
When u
0
(x) = x, with 0 we do have a global solution for t 0 given by the
identity
u(t, x) = u
0
(x tu(t, x)) = x tu(t, x) = u(t, x) = x/(1 +t).
When u
0
(x) = x, with > 0 the solution blows-up at time T = 1/,
u(t, x) = u
0
(x tu(t, x)) = x +tu(t, x) = u(t, x) = x/(t 1).
44 CHAPTER 2. VECTOR FIELDS
When u
0
(x) = (1 x)H(x) + (1 + x)H(x), with H = 1
R
+
, we nd, using the
method of characteristics that
u(t, x) = H(x t)
1 x
1 t
+H(t x)
x + 1
t + 1
, 0 t < 1, x R. (2.3.16)
The function u is only Lipschitz continuous, but we may compute its distribution
derivative and we get on the open set 1 < t < 1

t
u = (x t)
1 x
1 t
+(t x)
x + 1
t + 1
+H(x t)
1 x
(1 t)
2
H(t x)
x + 1
(t + 1)
2
= H(x t)
1 x
(1 t)
2
H(t x)
x + 1
(t + 1)
2
L

loc

x
u = (x t)
1 x
1 t
(t x)
x + 1
t + 1
H(x t)
1
(1 t)
+H(t x)
1
(t + 1)
= H(x t)
1
(1 t)
+H(t x)
1
(t + 1)
L

loc
, the product u
x
u makes sense
u
x
u = H(x t)
1 x
(1 t)
2
+H(t x)
x + 1
(t + 1)
2
=
t
u,
so that Burgers equation holds for u. The following picture is helpful. In fact, the
t=0
t=1
Blow-up at time t=1
for x0, the characteristics meet at (1,1)
for x<0, the characteristics do not meet
Figure 2.5: The characteristic curves with
u
0
(x) = (1 x)H(x) + (1 +x)H(x).
function u
0
is equal to 1 + x for x 0 and to 1 x for x 0 and we have from
(2.3.15) u(t, x+tu
0
(x)) = u
0
(x), so that u is constant along the characteristic curves
t (x
0
+ tu
0
(x
0
), t) R
2
x,t
: these curves are straight lines starting at (x
0
, t = 0)
with slope 1/u
0
(x
0
). In the case under scope, we have
_
x(t, x
0
) = x
0
+t(1 +x
0
) if x
0
0,
x(t, x
0
) = x
0
+t(1 x
0
) if x
0
0.
2.4. ONE-DIMENSIONAL CONSERVATION LAWS 45
We have indeed for x
0
,= x
1
both 0, 0 t < 1
x(1, x
0
) = 1, x(t, x
0
) x(t, x
1
) = (x
0
x
1
)(1 t) ,= 0.
On the other hand for x
0
,= x
1
both 0, t 0, we have
x(t, x
0
) x(t, x
1
) = (x
0
x
1
)(1 +t) ,= 0.
2.4 One-dimensional conservation laws
2.4.1 Rankine-Hugoniot condition and singular solutions
We shall consider here the following type of non-linear equation

t
u +
x
_
f(u)
_
= 0, (2.4.1)
where (t, x) are two real variables, and u is a real-valued function, whereas f is a
smooth given function, called the ux. We shall consider some singular solutions of
this equation, with a discontinuity across a C
1
curve with equation x = (t). We
dene
u(t, x) = H(x (t))u
r
(t, x) +H((t) x)u
l
(t, x) (2.4.2)
where H is the Heaviside function and we shall assume that u
r
and u
l
are C
1
functions respectively on the closure of the open subsets

r
= (x, t), x > (t) and
l
= (x, t), x < (t)
and that u
r
solves the equation (2.4.1) on the open set
r
(resp. u
l
solves the
equation (2.4.1) on the open set
l
). We shall assume also the so-called Rankine-
Hugoniot condition,
at x = (t), f(u
r
) f(u
l
) =
t
(t)(u
r
u
l
). (2.4.3)
Theorem 2.4.1. Let be a C
1
function,
r,l
dened as above, let u
r
, u
l
be C
1
solutions of (2.4.1) respectively on
r
,
l
. Then u given by (2.4.2) is a distribution
solution of (2.4.1) if and only if the Rankine-Hugoniot condition (2.4.3) is fullled.
Proof. We have
f(u(t, x)) = H(x (t))f(u
r
(t, x)) +H((t) x)f(u
l
(t, x))
and thus the distribution derivative with respect to x of f(u) is equal to

x
(f(u)) = H(x (t))f
t
(u
r
(t, x))(
x
u
r
)(t, x) +H((t) x)f
t
(u
l
(t, x))(
x
u
l
)(t, x)
+(x (t))
_
f(u
r
(t, x)) f(u
l
(t, x))
_
.
On the other hand, we have
t
u = (x (t))
_
u
l
(t, x) u
r
(t, x)
_

t
(t). Since u
r
and
u
l
are C
1
solutions respectively on
r
,
l
, we get

t
u +
x
_
f(u)
_
= (x (t))
_
f(u
r
) f(u
l
)
t
(t)
_
u
r
u
l
_
_
and the results follows.
46 CHAPTER 2. VECTOR FIELDS
2.4.2 The Riemann problem for Burgers equation
We consider Burgers equation and a L

loc
solution u of
_

t
u +
x
(u
2
/2) = 0, on t > 0, x R,
u(0, x) = u
0
(x), x R.
(2.4.4)
It means that for all C

c
(R
2
),
_
R
u
0
(x)(0, x)dx +
__
H(t)
t
(t, x)u(t, x)dtdx
+
1
2
__
H(t)u(t, x)
2
(
x
)(t, x)dtdx = 0, (2.4.5)
or for u L

loc
(R
2
),

t
(Hu) +
x
((Hu)
2
/2) = (t) u
0
(x), H = H(t). (2.4.6)
Indeed (2.4.5) means that

t
(Hu) +
x
(Hu)
2
/2, )
D

,D
= (t) u
0
(x), )
D

,D
,
which is (2.4.6).
Non-Physical Shock and Rarefaction Wave
Let us rst assume u
0
(x) = H(x), i.e. u
l
(0, x) = 0, u
r
(0, x) = 1. Following the
method of characteristics (2.3.15), we should have u(t, x +tu
0
(x)) = u
0
(x), i.e.
_
u(t, x) = 0, for x < 0,
u(t, x +t) = 1 for x > 0.
that is
_
u(t, x) = 0, for x < 0,
u(t, x) = 1 for x > t,
so we get no information in the region 0 < x < t. We could use our knowledge
on the construction of singular solutions to create a somewhat arbitrary shock at
x = t/2 (non-physical shock)
u(t, x) =
_
u(t, x) = 0, for x < t/2,
u(t, x) = 1 for x > t/2.
(2.4.7)
The Rankine-Hugoniot condition (2.4.3) is satised since (t) = t/2, u
2
l
= u
l
, u
2
r
=
u
r
, so that
1
2
(u
2
r
u
2
l
) =
t
(t)(u
r
u
l
).
2.4. ONE-DIMENSIONAL CONSERVATION LAWS 47
u=0
u=1
A non-physical shock along the line x=t/2
However, we can also devise another solution v by dening (rarefaction wave)
v(t, x) =
_

_
v(t, x) = 0, for x < 0,
v(t, x) = x/t for 0 < x < t,
v(t, x) = 1 for x > t.
(2.4.8)
We can indeed calculate

t
_
H(t)H(x)
_
H(t x)x/t+H(x t)
_
_
+
1
2

x
_
H(t)H(x)
_
H(t x)x/t +H(x t)
_
_
2
and refer the reader to the proof of Theorem 2.4.2 to show that (2.4.8) is actually a
solution.
u=0
u=1
A rarefaction wave: u=x/t in the region 0<x<t, u=0 on x<0, u=1 on x>t
We have thus two dierent weak solutions of (2.4.4) with the same inital datum!
This very unnatural situation has to be modied and we have to nd a criterion to
select the correct solution.
48 CHAPTER 2. VECTOR FIELDS
Entropy condition
For a general one-dimensional conservation law
t
u +
x
_
f(u)
_
= 0 with a strictly
convex ux f (assume f C
2
(R) with inf f
tt
> 0), suppose that we have a curve of
discontinuity x = (t) with distinct left and right limits u
l
, u
r
. Then nonetheless
the Rankine-Hugoniot (2.4.3) should be satised across , but also u
l
> u
r
along
: this eliminates in particular the solution (2.4.7). As a geometric explanation, we
may say that singularities are due to the crossing of characteristics, but we want
to avoid that by moving backwards along a characteristic, we encounter a singular
curve.
Theorem 2.4.2. We consider the initial-value problem
_

t
u +
x
_
u
2
2
_
= 0, t > 0,
u
0
(x) = H(x)u
l
+H(x)u
r
.
(2.4.9)
where u
l
, u
r
are distinct constants and we dene
=
1
2
u
2
r
u
2
l
u
r
u
l
(2.4.10)
(1) If u
l
> u
r
, the unique entropy solution is given by
u(t, x) = H(t x)u
l
+H(x t)u
r
. (2.4.11)
This is a shock wave with speed satisfying the Rankine-Hugoniot condition (2.4.3)
at the discontinuity curve x = t.
(2) If u
l
< u
r
, the unique entropy solution is given by
u(t, x) = H(tu
l
x)u
l
+
x
t
1
[tu
l
,tu
r
]
(x) +H(x tu
r
)u
r
. (2.4.12)
The states u
l
, u
r
are separated by a rarefaction wave.
Proof. In the case u
l
> u
r
, we have a singular solution according to Theorem 2.4.1
satisfying our entropy condition u
l
> u
r
. In the other case, we must avoid a shock
curve and we check directly, with u given by (2.4.12),

t
u = (tu
l
x)u
2
l

x
t
2
1
[tu
l
,tu
r
]
(x) +
x
t
_
(x tu
l
)u
l
+(tu
r
x)u
r
_
(x tu
r
)u
2
r
,
and

x
(u
2
) =
x
_
H(tu
l
x)u
2
l
+
x
2
t
2
1
[tu
l
,tu
r
]
(x) +H(x tu
r
)u
2
r
_
= u
2
l
(tu
l
x) +
2x
t
2
1
[tu
l
,tu
r
]
(x) +
x
2
t
2
_
(x tu
l
) (x tu
r
)
_
+(x tu
r
)u
2
r
2.4. ONE-DIMENSIONAL CONSERVATION LAWS 49
so that

t
u +
x
(u
2
/2) = (tu
l
x)
=u
2
l
(
1
2
1+
1
2
)=0
..
_
u
2
l
/2 xu
l
/t +x
2
/2t
2
_
+(x tu
r
)
_
u
2
r
/2 +xu
r
/t x
2
/2t
2
_
. .
=u
2
r
(
1
2
+1
1
2
)=0
= 0.
50 CHAPTER 2. VECTOR FIELDS
Chapter 3
Five classical equations
3.1 The Laplace and Cauchy-Riemann equations
3.1.1 Fundamental solutions
We dene the Laplace operator in R
n
as
=

1jn

2
x
j
. (3.1.1)
In one dimension, we have that
d
2
dt
2
(t
+
) =
0
and for n 2 the following result
describes the fundamental solutions of the Laplace operator. In R
2
x,y
, we dene the
operator

(a.k.a. the Cauchy-Riemann operator) by

=
1
2
(
x
+i
y
). (3.1.2)
Theorem 3.1.1. We have E =
0
with | | standing for the Euclidean norm,
E(x) =
1
2
ln |x|, for n = 2, (3.1.3)
E(x) = |x|
2n
1
(2 n)[S
n1
[
, for n 3, with [S
n1
[ =
2
n/2
(n/2)
, (3.1.4)

_
1
z
_
=
0
, with z = x +iy (equality in D
t
(R
2
x,y
)). (3.1.5)
Proof. We start with n 3, noting that the function |x|
2n
is L
1
loc
and homogeneous
with degree 2n, so that |x|
2n
is homogeneous with degree n (see section 3.4.3
in [15]). Moreover, the function |x|
2n
= f(r
2
), r
2
= |x|
2
, f(t) = t
1
n
2
+
is smooth
outside 0 and we can compute there
(f(r
2
)) =

j
(f
t
(r
2
)2x
j
) =

j
f
tt
(r
2
)4x
2
j
+ 2nf
t
(r
2
) = 4r
2
f
tt
(r
2
) + 2nf
t
(r
2
),
so that with t = r
2
,
(f(r
2
)) = 4t(1
n
2
)(
n
2
)t

n
2
1
+ 2n(1
n
2
)t

n
2
= t

n
2
(1
n
2
)(2n + 2n) = 0.
51
52 CHAPTER 3. FIVE CLASSICAL EQUATIONS
As a result, |x|
2n
is homogeneous with degree n and supported in 0. From
Theorem 3.3.4 in [15], we obtain that
|x|
2n
= c
0
. .
homogeneous
degree n
+

1jm

[[=j
c
j,

()
0
. .
homogeneous
degree n j
.
Lemma 3.4.8 in [15] implies that for 1 j m, 0 =

[[=j
c
j,

()
0
and |x|
2n
=
c
0
. It remains to determine the constant c. We calculate, using the previous
formulas for the computation of (f(r
2
)), here with f(t) = e
t
,
c = |x|
2n
, e
|x|
2
) =
_
|x|
2n
e
|x|
2
_
4|x|
2

2
2n
_
dx
= [S
n1
[
_
+
0
r
2n+n1
e
r
2
(4
2
r
2
2n)dr
= [S
n1
[
_
1
2
[e
r
2
(4
2
r
2
2n)]
0
+
+
1
2
_
+
0
e
r
2
8
2
rdr
_
= [S
n1
[(n + 2),
giving (3.1.4). For the convenience of the reader, we calculate explicitely [S
n1
[. We
have indeed
1 =
_
R
n
e
|x|
2
dx = [S
n1
[
_
+
0
r
n1
e
r
2
dr
=
..
r=t
1/2

1/2
[S
n1
[
(1n)/2
_
+
0
t
n1
2
e
t
1
2
t
1/2
dt
1/2
= [S
n1
[
n/2
2
1
(n/2).
Turning now our attention to the Cauchy-Riemann equation, we see that 1/z is also
L
1
loc
(R
2
), homogeneous of degree 1, and satises

(z
1
) = 0 on the complement of
0, so that the same reasoning as above shows that

(
1
z
1
) = c
0
.
To check the value of c, we write c =

(
1
z
1
), e
z z
) =
_
R
2
e
z z

1
z
1
zdxdy =
1, which gives (3.1.5). We are left with the Laplace equation in two dimensions and
we note that with

z
=
1
2
(
x
i
y
),

z
=
1
2
(
x
+i
y
), we have in two dimensions
= 4

z

z
= 4

z

z
. (3.1.6)
Solving the equation 4
E
z
=
1
z
leads us to try E =
1
2
ln [z[ and we check directly
1
that

z
_
ln(z z)
_
= z
1
(
1
2
ln [z[) =
1
2
2
4

z

z
_
ln(z z)
_
=
1

z
_
z
1
_
=
0
.
1
Noting that ln(x
2
+ y
2
) and its rst derivatives are L
1
loc
(R
2
), we have for C

c
(R
2
),


z
_
ln[z[
2
_
, ) =
1
2
__
R
2
(
x
+i
y
) ln(x
2
+y
2
)dxdy =
__
(x, y)(xr
2
iyr
2
)dxdy =
__
(xiy)
1
(x, y)dxdy.
3.1. THE LAPLACE AND CAUCHY-RIEMANN EQUATIONS 53
3.1.2 Hypoellipticity
Denition 3.1.2. We consider a constant coecients dierential operator
P = P(D) =

[[m
a

x
, where a

C, D

x
=
1
(2i)
[[

x
. (3.1.7)
A distribution E D
t
(R
n
) is called a fundamental solution of P when PE =
0
.
Denition 3.1.3. Let P be a linear operator of type (3.1.7). We shall say that P
is hypoelliptic when for all open subsets of R
n
and all u D
t
(), we have
singsupp u = singsupp Pu. (3.1.8)
We note that if f E
t
(R
n
) and E is a fundamental solution of P, we have from
(3.5.13), (3.5.14) in [15],
P(E f) = PE f =
0
f = f,
which allows to nd a solution of the Partial Dierential Equation P(D)u = f, at
least when f is a compactly supported distribution.
Examples. We have on the real line already proven (see (3.2.2) in [15]) that
dH
dt
=
0
,
so that the Heaviside function is a fundamental solution of d/dt (note that from
Lemma 3.2.4 in [15], the other fundamental solutions are C + H(t)). This also
implies that

x
1
_
H(x
1
)
0
(x
2
)
0
(x
n
)
_
=
0
(x), (the Dirac mass at 0 in R
n
).
Let N N. With x

+
dened in (3.4.8) of [15], we get, since
N+1
x
1
(x
N+1
1,+
) =
H(x
1
)(N + 1)!, that
(
x
1
. . .
x
n
)
N+2
_

1jn
_
x
N+1
j,+
(N + 1)!
_
=
0
(x).
It is obvious that singsupp Pu singsupp u, so the hypoellipticity means that
singsupp u singsupp Pu, which is a very interesting piece of information since we
can then determine the singularities of our (unknown) solution u, which are located
at the same place as the singularities of the source f, which is known when we try
to solve the equation Pu = f.
Theorem 3.1.4. Let P be a linear operator of type (3.1.7) such that P has a fun-
damental solution E satisfying
singsupp E = 0. (3.1.9)
Then P is hypoelliptic. In particular the Laplace and the Cauchy-Riemann operators
are hypoelliptic.
54 CHAPTER 3. FIVE CLASSICAL EQUATIONS
N.B. The condition (3.1.9) appears as an i condition for the hypoellipticity of the
operator P since it is also a consequence of the hypoellipticity property.
Proof. Assume that (3.1.9) holds, let be an open subset of R
n
and u D
t
(). We
consider f = Pu D
t
(), x
0
/ singsupp f,
0
C

c
(),
0
= 1 near x
0
. We have
from Proposition 3.5.7 in [15] that
u = u PE = (Pu) E = ([P, ]u) E +
C

c
(R
n
)
..
(f) E
. .
C

(R
n
)
and thus, using Proposition 3.5.7 in [15] for singular supports, we get
singsupp(u) singsupp([P, ]u) + singsupp E = singsupp([P, ]u) supp(u),
and since is identically 1 near x
0
, we get that x
0
/ supp(u), implying x
0
/
singsupp(u), proving that x
0
/ singsupp u and the result.
A few words on the Gamma function
The gamma function is a meromorphic function on C given for Re z > 0 by the
formula
(z) =
_
+
0
e
t
t
z1
dt. (3.1.10)
For n N, we have (n + 1) = n!; another interesting value is (1/2) =

. The
functional equation
(z + 1) = z(z) (3.1.11)
is easy to prove for Re z > 0 and can be used to extend the function into a mero-
morphic function with simple poles at N and Res(, k) =
(1)
k
k!
. For instance, for
1 < Re z 0 with z ,= 0 we dene
(z) =
(z + 1)
z
, where we can use (3.1.10) to dene (z + 1).
More generally for k N, 1 k < Re z k, z ,= k, we can dene
(z) =
(z +k + 1)
z(z + 1) . . . (z +k)
.
There are manifold references on the Gamma function. One of the most compre-
hensive is certainly the chapter VII of the Bourbaki volume Fonctions de variable
reelle [2].
3.1.3 Polar and spherical coordinates
The polar coordinates in R
2
are ]0, +)] , [ (r, ) (r cos , r sin ) = (x, y)
which is a C
1
dieomorphism from ]0, +)], [ onto R
2
(R

0) with inverse
mapping given by
r = (x
2
+y
2
)
1/2
, = Im
_
Log(x +iy)
_
, where for z CR

, Log z =
_
[1,z]
d

.
3.2. THE HEAT EQUATION 55
We have in two dimensions
r
2
= (r
r
)
2
+
2

, (3.1.12)
since
(x
x
+y
y
)
2
+ (x
y
y
x
)
2
= x
2

2
x
+y
2

2
y
+ 2xy
2
xy
+x
x
+y
y
+x
2

2
y
+y
2

2
x
2xy
2
xy
x
x
y
y
= (x
2
+y
2
)(
2
x
+
2
y
).
In three dimensions, the spherical coordinates are given by
_

_
x = r cos sin
y = r sin sin
z = r cos
,
_

_
r = (x
2
+y
2
+z
2
)
1/2
= Im
_
Log(x +iy)
_
= Im
_
Log(z +i(x
2
+y
2
)
1/2
)
_
(3.1.13)
dening a C
1
dieomorphism
]0, +)] , []0, [ (r, , ) (x, y, z) R
3

_
R

0 R
_
.
The expression of the Laplace operator in spherical coordinates is
r
2
= (r
r
)
2
+r
r
+
2

+
1
sin
2

+
1
tan

. (3.1.14)
To prove the above formula, we use (3.1.12), with
z = r cos , = r sin , r
2
(
2
z
+
2

) = (r
r
)
2
+
2

,
(

)
2
+
2

=
2
(
2
x
+
2
y
),
so that (r
r
)
2
+
2

= r
2

2
z
+r
2

2
_

2
(
2
x
+
2
y
)

_
and thus
r
2
= (r
r
)
2
+
2

+
1
sin
2

+r
2

. (3.1.15)
We have also, using the change of variables (r, ) (z, )
r
2

= r
2

1
_
r cos
z
2
+
2

+r
1

r
_
=
1
tan

+r
r
and with (3.1.15), this provides the sought formula (3.1.14).
3.2 The heat equation
The heat operator is the following constant coecient dierential operator on R
t
R
n
x

x
, (3.2.1)
where the Laplace operator
x
on R
n
is dened by (3.1.1).
56 CHAPTER 3. FIVE CLASSICAL EQUATIONS
Theorem 3.2.1. We dene on R
t
R
n
x
the L
1
loc
function
E(t, x) = (4t)
n/2
H(t)e

|x|
2
4t
. (3.2.2)
The function E is C

on the complement of (0, 0) in RR


n
. The function E is
a fundamental solution of the heat equation, i.e.
t
E
x
E =
0
(t)
0
(x).
Proof. To prove that E L
1
loc
(R
n+1
), we calculate for T 0,
_
T
0
_
+
0
t
n/2
r
n1
e

r
2
4t
dtdr =
..
r=2t
1/2

_
T
0
_
+
0
t
n/2
2
n1
t
(n1)/2

n1
e

2
2t
1/2
dtd
= 2
n
T
_
+
0

n1
e

2
d < +.
Moreover, the function E is obviously analytic on the open subset of R
1+n
(t, x)
R R
n
, t ,= 0. Let us prove that E is C

on R (R
n
0). With
0
dened in
(3.1.1) of [15], the function
1
dened by
1
(t) = H(t)t
n/2

0
(t) is also C

on R
and
E(t, x) = H(
[x[
2
4t
)
_
[x[
2
4t
_
n/2
e

|x|
2
4t
[x[
n

n/2
= [x[
n

n/2

1
_
4t
[x[
2
_
,
which is indeed smooth on R
t
(R
n
x
0). We want to solve the equation
t
u
x
u =

0
(t)
0
(x). If u belongs to S
t
(R
n+1
), we can consider its Fourier transform v with
respect to x (well-dened by transposition as the Fourier transform in (4.1.10) of
[15], and we end-up with the simple ODE with parameters on v,

t
v + 4
2
[[
2
v =
0
(t). (3.2.3)
It remains to determine a fundamental solution of that ODE: we have
d
dt
+ = e
t
d
dt
e
t
,
_
d
dt
+
_
(e
t
H(t)) =
_
e
t
d
dt
e
t
_
(e
t
H(t)) =
0
(t), (3.2.4)
so that we can take v = H(t)e
4
2
t[[
2
, which belongs to S
t
(R
t
R
n

). Taking
the inverse Fourier transform with respect to of both sides of (3.2.3) gives
2
with
u S
t
(R
t
R
n

t
u
x
u =
0
(t)
0
(x). (3.2.5)
To compute u, we check with D(R), D(R
n
),
u,

) = v
x
, ) = v,

) =
_
+
0
_
R
n
(t)

()e
4
2
t[[
2
dtd.
We can use the Fubini theorem in that absolutely converging integral and use (4.1.2)
in [15] to get
u,

) =
_
+
0
(t)
__
R
n
(4t)
n/2
e

|x|
2
4t
(x)dx
_
dt = E,

),
where the last equality is due to the Fubini theorem and the local integrability of
E. We have thus E = u and E satises (3.2.5). The proof is complete.
2
The Fourier transformation obviously respects the tensor products.
3.3. THE SCHR

ODINGER EQUATION 57
Corollary 3.2.2. The heat equation is C

hypoelliptic (see the denition 3.1.3) ,


in particular for w D
t
(R
1+n
),
singsupp w singsupp(
t
w
x
w),
where singsupp stands for the C

singular support as dened by (3.1.9) in [15].


Proof. It is an immediate consequence of the theorem 3.1.4, since E is C

outside
zero from the previous theorem.
Remark 3.2.3. It is also possible to dene the analytic singular support of a dis-
tribution T in an open subset of R
n
: we dene
singsupp
,
T = x , Uopen V
x
, T
[U
/ /(U), (3.2.6)
where /(U) stands for the analytic
3
functions on the open set U. It is a consequence
4
of the proof of theorem 3.2.1 that
singsupp
,
E = 0 R
n
x
. (3.2.7)
In particular this implies that the heat equation is not analytic-hypoelliptic since
0 R
n
x
= singsupp
,
E , singsupp
,
(
t
E
x
E) = singsupp
,

0
= 0
R
1+n.
3.3 The Schr odinger equation
We move forward now with the Schrodinger equation,
1
i

t

x
(3.3.1)
which looks similar to the heat equation, but which is in fact drastically dierent.
Lemma 3.3.1.
D(R
n+1
)
_
+
0
e
i(n2)

4
(4t)
n/2
__
R
n
(t, x)e
i
|x|
2
4t
dx
_
dt = E, ) (3.3.2)
is a distribution in R
n+1
of order n + 2.
3
A function f is said to be analytic on an open subset U of R
n
if it is C

(U), and for each


x
0
U there exists r
0
> 0 such that

B(x
0
, r
0
) U and
x

B(x
0
, r
0
), f(x) =

N
n
1
!

x
f(x
0
)(x x
0
)

.
4
In fact, in the theorem, we have noted the obvious inclusion singsupp
A
E 0 R
n
x
, but
since E is C

in t ,= 0, vanishes identically on t < 0, is positive ( it means > 0) on t > 0, it cannot


be analytic near any point of 0 R
n
x
.
58 CHAPTER 3. FIVE CLASSICAL EQUATIONS
Proof. Let D(R R
n
); for t > 0 we have, using (4.6.7) iin [15],
e
i(n2)

4
(4t)
n/2
_
R
n
(t, x)e
i
|x|
2
4t
dx = i
_
R
n

x
(t, )e
4i
2
t[[
2
d,
so that with N n even > n, using (4.1.7) and (4.1.14) in [15],
sup
t>0

e
i(n2)

4
(4t)
n/2
_
R
n
(t, x)e
i
|x|
2
4t
dx

sup
t>0
_
R
n
[

x
(t, )[d
sup
t>0
_
(1 +[[
2
)
n/2
[ (1 +[[
2
)
n/2
. .
polynomial

(t, )[d C
n
max
[[ n
|

x
|
L

(R
n+1
)
.
As a result the mapping
D(R
n+1
)
_
+
0
e
i(n2)

4
(4t)
n/2
__
R
n
(t, x)e
i
|x|
2
4t
dx
_
dt = E, )
is a distribution of order n + 2.
Theorem 3.3.2. The distribution E given by (3.3.2) is a fundamental solution of
the Schrodinger equation, i.e.
1
i

t
E
x
E =
0
(t)
0
(x). Moreover, E is smooth
on the open set t ,= 0 and equal there to
e
i(n2)

4
H(t)(4t)
n/2
e
i
|x|
2
4t
. (3.3.3)
The distribution E is the partial Fourier transform with respect to the variable x of
the L

(R
n+1
) function

E(t, ) = iH(t)e
4i
2
t[[
2
. (3.3.4)
Proof. We want to solve the equation i
t
u
x
u =
0
(t)
0
(x). If u belongs to
S
t
(R
n+1
), we can consider its Fourier transform v with respect to x (well-dened
by transposition as the Fourier transform in (4.1.10) of [15]), and we end-up with
the simple ODE with parameters on v,

t
v +i4
2
[[
2
v = i
0
(t). (3.3.5)
Using the identity (3.2.4), we see that we can take v = iH(t)e
i4
2
t[[
2
, which belongs
to S
t
(R
t
R
n

). Taking the inverse Fourier transform with respect to of both sides


of (3.3.5) gives with u S
t
(R
t
R
n

t
u i
x
u = i
0
(t)
0
(x) i.e.
1
i

t
u
x
u =
0
(t)
0
(x). (3.3.6)
To compute u, we check with D(R), D(R
n
),
u, ) = v
x
,

) = v,

) = i
_
+
0
(t)
__
R
n

()e
i(4t)[[
2
d
_
dt.
(3.3.7)
3.4. THE WAVE EQUATION 59
We note now that, using (4.6.7) and (4.1.10) in [15], for t > 0,
i
_
R
n

()e
i(4t)[[
2
d = i
_
R
n
(x)(4t)
n/2
e
i
|x|
2
4t
dxe
n
i
4
= e
i(n2)

4
(4t)
n/2
_
R
n
e
i
|x|
2
4t
(x)dx.
As a result, u is a distribution on R
n+1
dened by
u, ) = e
i(n2)

4
(4)
n/2
_
+
0
t
n/2
__
R
n
(t, x)e
i
|x|
2
4t
dx
_
dt
and coincides with E, so that E satises (3.3.6). The identity (3.3.7) is proving
(3.3.4). The proof of the theorem is complete.
Remark 3.3.3. The fundamental solution of the Schrodinger equation is unbounded
near t = 0 and, since E is smooth on t ,= 0, its C

singular support is equal to


0 R
n
x
. In particular, the Schrodinger equation is not hypoelliptic. We shall see
that it looks like a propagation equation with an innite speed, or more precisely
with a speed depending on the frequency of the wave.
3.4 The Wave Equation
3.4.1 Presentation
The wave equation in d dimensions with speed of propagation c > 0, is given by the
operator on R
t
R
d
x

c
= c
2

2
t

x
. (3.4.1)
We want to solve the equation c
2

2
t
u
x
u =
0
(t)
0
(x). If u belongs to S
t
(R
d+1
),
we can consider its Fourier transform v with respect to x, and we end-up with the
ODE with parameters on v,
c
2

2
t
v + 4
2
[[
2
v =
0
(t),
2
t
v + 4
2
c
2
[[
2
v = c
2

0
(t). (3.4.2)
Lemma 3.4.1. Let , C. A fundamental solution of P
,
= (
d
dt
)(
d
dt
) (on
the real line) is
_
_
_
_
e
t
e
t

_
H(t) for ,= ,
te
t
H(t) for = .
(3.4.3)
Proof. If ,= , to solve (
d
dt
)(
d
dt
) =
0
(t), the method of variation of
parameters gives a solution a(t)e
t
+b(t)e
t
with
_
e
t
e
t
e
t
e
t
__
a

b
_
=
_
0

_
=
_
a

b
_
=
1

_

_
= (3.4.3) for ,= ,
which gives also the result for = by dierentiation with respect to of the
identity P
,
_
e
t
e
t
_
= ( ).
60 CHAPTER 3. FIVE CLASSICAL EQUATIONS
Going back to the wave equation, we can take v as the temperate distribution
5
given by
v(t, ) = c
2
H(t)
e
2ict[[
e
2ict[[
4ic[[
= c
2
H(t)
sin
_
2ct[[
_
2c[[
. (3.4.4)
Taking the inverse Fourier transform with respect to of both sides of (3.4.2) gives
with u S
t
(R
t
R
d

)
c
2

2
t
u
x
u =
0
(t)
0
(x). (3.4.5)
To compute u, we check with D(R
1+d
),
u, ) = v
x
(t, ), (t, )) =
_
+
0
_
R
n

x
(t, )c
sin
_
2ct[[
_
2[[
ddt. (3.4.6)
We have found an expression for a fundamental solution of the wave equation in d
space dimensions and proven the following proposition.
Proposition 3.4.2. Let E
+
be the temperate distribution on R
d+1
such that

E
+
x
(t, ) = cH(t)
sin
_
2ct[[
_
2[[
. (3.4.7)
Then E
+
is a fundamental solution of the wave equation (3.4.1), i.e. satises

c
E
+
=
0
(t)
0
(x).
Remark 3.4.3. Dening the forward-light-cone
+,c
as

+,c
= (t, x) R R
d
, ct [x[, (3.4.8)
one can prove more precisely that E
+
is the only fundamental solution with support
in t 0 and that
supp E
+
=
+
, when d = 1 and d 2 is even, (3.4.9)
supp E
+
=
+
, when d 3 is odd, (3.4.10)
singsupp E
+
=
+
, in any dimension. (3.4.11)
Lemma 3.4.4. Let E
1
, E
2
be fundamental solutions of the wave equation such that
supp E
1

+,c
, supp E
2
t 0. Then E
1
= E
2
.
Proof. Dening u = E
1
E
2
, we have supp u t 0 and the mapping
t 0
+,c

_
(t, x), (s, y)
_
(t +s, x +y) R
d+1
is proper since
t, s 0, cs [y[, [t +s[ T, [x +y[ R = t, s [0, T], [x[ R +cT, [y[ cT,
so that Section 3.5.3 in [15] allows to perform the following calculations
u = u
0
= u
c
E
1
=
c
u E
1
= 0.
5
The function R s
sin s
s
=

k0
(1)
k s
2k
(2k+1)!
= S(s
2
) is a smooth bounded function of
s
2
, so that v(t, ) = c
2
H(t)tS(4
2
c
2
t
2
[[
2
) is continuous and such that [v(t, )[ CtH(t), thus a
tempered distribution.
3.4. THE WAVE EQUATION 61
3.4.2 The wave equation in one space dimension
Theorem 3.4.5. On R
t
R
x
, the only fundamental solution of the wave equation
supported in
+,c
is
E
+
(t, x) =
c
2
H(ct [x[). (3.4.12)
where E
+
is dened in (3.4.7). That fundamental solution is bounded and the prop-
erties (3.4.9), (3.4.11) are satised.
Proof. We have c
2

2
t

2
x
= (c
1

t

x
)(c
1

t
+
x
) and changing (linearly) the
variables with x
1
= ct + x, x
2
= ct x, we have t =
1
2c
(x
1
+ x
2
), x =
1
2
(x
1
x
2
),
using the notation
(x
1
, x
2
) (t, x) u(t, x) = v(x
1
, x
2
),
u
t
=
v
x
1
c +
v
x
2
c,
u
x
=
v
x
1

v
x
2
, c
1

x
= 2
x
2
, c
1

t
+
x
= 2
x
1
,
and thus
c
= 4

2
x
1
x
2
, so that a fundamental solution is v =
1
4
H(x
1
)H(x
2
). We
have now to pull-back this distribution by the linear mapping (t, x) (x
1
, x
2
): we
have the formula
(0, 0) = 4

2
v
x
1
x
2
(x
1
, x
2
), (x
1
, x
2
)) = (
c
u)(t, x), (ct +x, ct x))2c
which gives the fundamental solution
2c
4
H(ct+x)H(ctx) =
c
2
H(ct[x[). Moreover
that fundamental solution is supported in
+,c
and since E
+
is supported in t 0,
we can apply the lemma 3.4.4 to get their equality.
3.4.3 The wave equation in two space dimensions
We consider (3.4.1) with d = 2, i.e.
c
= c
2

2
t

2
x
1

2
x
2
.
Theorem 3.4.6. On R
t
R
2
x
, the only fundamental solution of the wave equation
supported in
+,c
is
E
+
(t, x) =
c
2
H(ct [x[)(c
2
t
2
[x[
2
)
1/2
, (3.4.13)
where E
+
is dened in (3.4.7). That fundamental solution is L
1
loc
and the properties
(3.4.9), (3.4.11) are satised.
Proof. From the lemma 3.4.4, it is enough to prove that the rhs of (3.4.13) is indeed
a fundamental solution. The function E(t, x) =
c
2
H(ct [x[)(c
2
t
2
[x[
2
)
1/2
is
locally integrable in R R
2
since
_
T
0
_
ct
0
(c
2
t
2
r
2
)
1/2
rdrdt =
_
T
0
[(c
2
t
2
r
2
)
1/2
]
r=0
r=ct
dt = cT
2
/2 < +.
Moreover E is homogeneous of degree 1, so that
c
E is homogeneous with degree
3 and supported in
+,c
. We use now the independently proven three-dimensional
62 CHAPTER 3. FIVE CLASSICAL EQUATIONS
case (Theorem 3.4.7). We dene with E
+,3
given by (3.4.15), D(R
3
t,x
1
,x
2
),
D(R) with (0) = 1,
u, )
D

(R
3
),D(R
3
)
= lim
0
E
+,3
, (t, x
1
, x
2
) (x
3
))
D

(R
4
),D(R
4
)
= lim
0
1
4
___
R
3
(c
1
_
x
2
1
+x
2
2
+x
2
3
, x
1
, x
2
)
_
x
2
1
+x
2
2
+x
2
3
(x
3
)dx
1
dx
2
dx
3
=
1
4
2
___
R
2
x
1
,x
2
x
3
0
(c
1
_
x
2
1
+x
2
2
+x
2
3
, x
1
, x
2
)
_
x
2
1
+x
2
2
+x
2
3
dx
1
dx
2
dx
3
(t = c
1
_
x
2
1
+x
2
2
+x
2
3
)
=
1
2
___
R
2
x
1
,x
2
ct

x
2
1
+x
2
2

(t, x
1
, x
2
)
ct
1
2
(c
2
t
2
x
2
1
x
2
2
)
1/2
2c
2
tdx
1
dx
2
dt
=
c
2
___
R
2
x
1
,x
2
ct

x
2
1
+x
2
2

(t, x
1
, x
2
)(c
2
t
2
x
2
1
x
2
2
)
1/2
dx
1
dx
2
dt
= E, )
D

(R
3
),D(R
3
)
, so that E
+
= u.
With
c,d
standing for the wave operator in d dimensions with speed c, we have,
since

c,3
_
(t, x
1
, x
2
) (x
3
)
_
=
c,2
_
(t, x
1
, x
2
)
_
(x
3
) (t, x
1
, x
2
)
2

tt
(x
3
)

c,2
u, ) = lim
0
E
+,3
, (
c,2
)(t, x
1
, x
2
) (x
3
))
= lim
0
_
E
+,3
,
c,3
_
(t, x
1
, x
2
) (x
3
)
_
)) +E
+,3
, (t, x
1
, x
2
)
2

tt
(x
3
))
_
= (0, 0, 0),
which gives
c,2
E =
c,2
u =
0,R
3 and the result.
3.4.4 The wave equation in three space dimensions
We consider (3.4.1) with d = 3, i.e.
c
= c
2

2
t

2
x
1

2
x
2

2
x
3
.
Theorem 3.4.7. On R
t
R
3
x
, the only fundamental solution of the wave equation
supported in
+,c
is
E
+
(t, x) =
1
4[x[

0,R
(t c
1
[x[), (3.4.14)
i.e. for D(R
t
R
3
x
), E
+
, ) =
_
R
3
1
4[x[
(c
1
[x[, x)dx. (3.4.15)
where E
+
is dened in (3.4.7). The properties (3.4.10), (3.4.11) are satised.
Proof. The formula (3.4.15) is dening a Radon measure E with support
+,c
,
so that the last statements of the lemmas are clear. From the lemma 3.4.4, it is
3.4. THE WAVE EQUATION 63
enough to prove that (3.4.15) denes indeed a fundamental solution. We check for
D(R), D(R
3
)

c
E, (t) (x)) = E,
c
( ))
=
1
4
_
R
3
[x[
1
_
c
2

tt
(c
1
[x[)(x) (c
1
[x[)()(x)
_
dx.
If we assume that supp R

+
, we get
_
R
3
[x[
1
(c
1
[x[)()(x)dx =
_
R
3

_
[x[
1
(c
1
[x[)
_
(x)dx
=
_
R
3
_
_
r
1
(c
1
r)
_
tt
+ 2r
1
_
r
1
(c
1
r)
_
t
_
(x)dx (r = [x[)
=
_
(x)
_
r
1

tt
(c
1
r)c
2
+ 2(r
2
)
t
(c
1
r)c
1
+ 2r
3
(c
1
r)
+ 2r
1
r
1

t
(c
1
r)c
1
+ 2r
1
(r
2
)(c
1
r)
_
dx,
which gives
c
E, (t) (x)) = 0. As a result,
supp(
c
E)
+,c
t 0 = (0
R
, 0
R
3),
and since E is homogeneous with degree 2, the distribution
c
E is homogeneous
with degree 4 with support at the origin of R
4
: Lemma 3.4.8 and Theorem 3.3.4
in [15] imply that
c
E =
0,R
4. To check that = 1, we calculate for D(R)
(noting that [t[ C and [x[ c[t[ + 1 implies [x[ cC + 1)

c
E, (t) 1) =
1
4
_
+
0
r
1
c
2

tt
(c
1
r)r
2
dr4 =
_
+
0

tt
(r)rdr
= [
t
(r)r]
+
0

_
+
0

t
(r)dr = (0),
so that = 1 and the theorem is proven.
64 CHAPTER 3. FIVE CLASSICAL EQUATIONS
Chapter 4
Analytic PDE
4.1 The Cauchy-Kovalevskaya theorem
Let m N

. We consider the Cauchy problem in R


t
R
d
x
_

m
t
u = F
_
t, x, (
k
t

x
u)
[[+km,k<m
_
,
(
j
t
u)(0, x) = v
j
(x), 0 j < m.
(4.1.1)
where F, v
j
are all analytic of their arguments.
Theorem 4.1.1. Let F be an analytic function in a neighborhood of (0, x
0
, y
0
)
R
t
R
d
x
R
N
with y
0
=
_
(

x
v
k
)(x
0
)
_
[[+km,k<m
, N = C
d+1
d+m+1
1 (see (7.3.3) in
the appendix) and let (v
j
)
0j<m
be analytic functions in a neighborhood of x
0
. Then
there exists a neighborhood of (0, x
0
) on which the Cauchy problem (4.1.1) has a
unique analytic solution.
Proof. The uniqueness part is a consequence of the following lemma.
Lemma 4.1.2. Let m, m
t
N

. We consider the Cauchy problem in R


t
R
d
x
_

m
t
u = G
_
t, x, (
k
t

x
u)
[[m

,k<m
_
,
(
j
t
u)(0, x) = v
j
(x), 0 j < m.
(4.1.2)
where G, v
j
are all analytic of their arguments. The problem (4.1.2) has a unique
analytic solution.
Proof of the lemma. Let u be an analytic solution of (4.1.2): we prove by induction
on l that
l N, m
t
l
N,
m+l
t
u = G
l
(t, x, (
k
t

x
u)
[[m

l
,k<m
), (4.1.3)
where G
l
depends on a nite number of derivatives of G. It is true for l = 0 and if
true for some l 0, we get

m+l+1
t
u =
G
l
t
+

k,
[[m

,k<m
G
l
w
k

k+1
t

x
u
. .
expected term if k < m 1
.
65
66 CHAPTER 4. ANALYTIC PDE
If k = m1 in the sum above, we have
k+1
t

x
u =

x
_
G(t, x, (
k
t

x
u)
[[m

,k<m
)
_
,
and this concludes the induction proof. As a result, we get that
l N,
m+l
t
u(0, x) = G
l
(0, x, (

x
v
k
)
[[m

l
,k<m
) (and
j
t
u(0, x) = v
j
(x), 0 j < m).
This implies that for all k, , (
k
t

x
u)(0, x) are determined by the equation (4.1.2)
and by analyticity of u, gives the uniqueness result. The proof of the lemma is
complete.
Let us now prove the existence part of Theorem 4.1.1. Introducing U(t, x) = u(t, x)

0j<m
v
j
(x)
t
j
j!
we see that (4.1.1) is equivalent to
_
_
_

m
t
U = F
_
t, x, (
k
t

x
(U +

0j<m
v
j
(x)
t
j
j!
))
[[+km
k<m
_
= G
_
t, x, (
k
t

x
U)
[[+km
k<m
_
,
(
j
t
U)(0, x) = 0, 0 j < m.
(4.1.4)
with G analytic. To prove the theorem, we may thus assume that the v
j
in (4.1.1)
are all identically 0. Let us notice that if u is a smooth function satisfying (4.1.1),
then for k +[[ m, k < m, we have with w
k,
=
k
t

x
u,
if k + 1 +[[ m and k + 1 < m,
t
w
k,
= w
k+1,
,
if k = m1, [[ = 0,
t
w
k,
=
m
t
u = F(t, x, (w
l
)
l+[[m,l<m
),
if k = m1, [[ = 1, = e
j
,
t
w
k,
=
x
j

m
t
u =
F
x
j
+

l+[[m
l<m
F
w
l
w
l
x
j
,
if k < m1, k + 1 +[[ 1 +m = k +[[ = m, k m2, [[ 2,
j with
j
1,
t
w
k,
=
k+1
t

x
u =
x
j

k+1
t

e
j
x
u =
x
j
w
k+1,e
j
,
with k + 1 < m, k + 1 +[ e
j
[ = k +[[ = m,
w
k
(0, x) =

x
v
k
(x) 0.
Conversely, if the functions (w
k,
)
k+[[m,k<m
satisfy
if k + 1 +[[ m and k + 1 < m,
t
w
k,
= w
k+1,
, (4.1.5)
if k = m1, [[ = 0,
t
w
k,
= F(t, x, (w
l
)
l+[[m,l<m
), (4.1.6)
if k = m1, [[ = 1, = e
j
,
t
w
k,
=
F
x
j
+

l+[[m
l<m
F
w
l
w
l
x
j
, (4.1.7)
if k +[[ = m, k m2, [[ 2,
t
w
k,
=
x
j
w
k+1,e
j
, (4.1.8)
where j is the smallest integer in [1, d] such that
j
1,
w
k
(0, x) 0, (4.1.9)
we have
w
k,
=
k
t

x
w
00
, k +[[ m, k < m. (4.1.10)
In fact, if [[ = 0, we have for k < m from (4.1.5)

t
w
00
= w
10
, . . . ,
t
w
m2,0
= w
m1,0
=
k
t
w
00
= w
k0
for 0 k < m,
4.1. THE CAUCHY-KOVALEVSKAYA THEOREM 67
and the property (4.1.10) for [[ = 0. We perform now an induction on [[. If
k +[[ = m, k m2, (4.1.8), (4.1.9) imply

t
w
k,
=
x
j
w
k+1,e
j
= w
k,
=
_
t
0

x
j
w
k+1,e
j
ds =
..
induction
since| e
j
| = || 1
_
t
0

x
j

e
j
x

k+1
t
w
00
ds =
k
t

x
w
00
.
(4.1.11)
Moreover, if k = m1, [[ = 1, = e
j
, (4.1.7), (4.1.9) imply
w
m1,e
j
=
_
t
0
_
F
x
j
+

l+[[m
l<m
F
w
l
w
l
x
j
_
ds (4.1.12)
whereas from (4.1.6), (4.1.9),
w
m1,0
=
_
t
0
F(s, x, (w
l
)
l+[[m,l<m
)ds
and thus (4.1.12) gives
x
j
w
m1,0
= w
m1,e
j
so that
w
m1,e
j
=
m1
t

x
j
w
00
, (4.1.13)
from the case [[ = 0. Assume now that k +1 +[[ m, k +1 < m, i.e. k m2,
k +[[ m1.
If k = m2, [[ = 1, = e
j
, we have from (4.1.5), (4.1.9) and (4.1.13)
w
m2,
=
_
t
0
w
m1,e
j
ds =
_
t
0

x
j
w
m1,0
ds =
x
j
_
t
0

t
w
m2,0
ds
=
x
j
w
m2,0
=
m2
t

x
w
00
.
If k = m3, [[ 1,
j
1, we have from (4.1.5), (4.1.9) and the case k = m2,
w
m3,
=
_
t
0
w
m2,
ds =
_
t
0

x
j
w
m2,e
j
ds =
x
j
_
t
0

t
w
m3,e
j
ds
=
x
j
w
m3,e
j
=
x
j

m3
t

e
j
x
w
00
=
m3
t

x
w
00
.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . .
If k = m l, l 2, [[ 1,
j
1, we have from (4.1.5), (4.1.9) and the case
k = ml + 1,
w
ml,
=
_
t
0
w
ml+1,
ds =
_
t
0

x
j
w
ml+1,e
j
ds =
x
j
_
t
0

t
w
ml,e
j
ds
=
x
j
w
ml,e
j
=
x
j

ml
t

e
j
x
w
00
=
ml
t

x
w
00
,
proving (4.1.10). Property (4.1.10) and (4.1.6) give

m
t
w
00
=
t
w
m1,0
= F(t, x, (w
l
)
l+[[m,l<m
) = F(t, x, (
l
t

x
w
00
)
l+[[m,l<m
)
68 CHAPTER 4. ANALYTIC PDE
so that u = w
00
satises the equation (4.1.1) with
j
t
u(0, x) = 0 for 0 j < m. As
a result, considering the vector-valued function Y = (w
k,
)
k+[[m,k<m
, we have
_

t
Y =

1jd
A
j
(t, x, Y )
x
j
Y +T(t, x, Y ),
Y (0, x) = 0,
(4.1.14)
where each A
j
is a N N matrix and T belongs to R
N
. Moreover if F is analytic,
it is also the case of the functions A
j
, T. Adding a dimension to the vector Y , we
may take t as a rst component and deal nally with the existence of an analytic
solution for the quasi-linear system
_

t
Y =

1jd
A
j
(x, Y )
x
j
Y +T(x, Y ),
Y (0, x) = 0,
(4.1.15)
where each A
j
is a N
t
N
t
matrix and T belongs to R
N

(this new N
t
= N + 1,
where N is given in (4.1.14): nevertheless, we shall call it N in the sequel). If
Y =

j,
x

t
j
Y
,j
is an analytic solution of (4.1.15), we have

t
Y =

j1,
jt
j1
x

Y
,j
=

j0,
(j + 1)t
j
x

Y
,j+1
and since A(x, Y )
x
Y + T(x, Y ) =

j,
T
,j
_
(Y
,l
)
lj
, coe.A, T
_
x

t
j
, where T
,j
is a polynomial with non-negative coecients, we get
Y
,j+1
=
1
j + 1
T
,j
_
(Y
,l
)
lj
, coe.A, T
_
.
As a result, the knowledge of (Y
,l
)
lj
provides the knowledge of (Y
,l
)
lj+1
, and since
we know (Y
,0
) from the initial condition in (4.1.15), the above formula determines
the power series coecients of Y and we have
Y
,j
= Q
,j
_
coe.A, T
_
,
where Q
,j
is a polynomial with non-negative coecients. We consider the Cauchy
problem

t
Z =

j
B
j
(x, Z)
x
j
Z +S(x, Z), Z(x, 0) = 0
with analytic functions B
j
, S, majorizing A
j
, T: if we nd an analytic solution Z,
then its Taylor coecients Z
,j
will satisfy
Z
,j
= Q
,j
_
coe.B, S
_
and since Q
,j
is a polynomial with non-negative coecients, we get
[Y
,j
[ = [Q
,j
_
coe.A, T
_
[ Q
,j
_
coe.B, S
_
= Z
,j
Using (7.2.7), we see that the power series of A, T are majorized by those of
Mr
r

1jd
x
j

1lN
y
l
,
4.1. THE CAUCHY-KOVALEVSKAYA THEOREM 69
provided, M is large enough and r > 0 is small enough. We consider now the Cauchy
problem
_

t
z
m
=
Mr
r

1jd
x
j

1lN
z
l
_

j,l

x
j
z
l
+ 1
_
z
m
(0, x) = 0.
, 1 m N. (4.1.16)
It is enough to prove the existence of an analytic solution for this problem. If we
consider the scalar equation (t, s) R
2
u(t, s) R,

t
u =
Mr
r s Nu
_
Nd
s
u + 1
_
, u(0, s) = 0, (4.1.17)
and if we dene
z
m
(t, x) = u(t, x
1
+ +x
d
), 1 m N,
we get a solution of (4.1.16). The remaining task is to solve (4.1.17). To simplify
the algebra we solve

t
u =
1
1 s u
(
s
u + 1), u(0, s) = 0.
Using the method of characteristics, we obtain
_

t = 1 s u, s = 1, u = 1,
t(0) = 0, s(0) = , u(0) = 0
so that u = , s = , t = and thus = s +u, = u, t = u(1 s u),
that is u =
1s

(1s)
2
4t
2
. To satisfy the initial condition u(0, s) = 0, we nd
u =
1 s
_
(1 s)
2
4t
2
, (4.1.18)
which is indeed analytic near the origin. The proof of Theorem 4.1.1 is complete.
70 CHAPTER 4. ANALYTIC PDE
Chapter 5
Elliptic Equations
5.1 Some simple facts on the Laplace operator
5.1.1 The mean-value theorem
Denition 5.1.1. Let be an open subset of R
n
and u D
t
(). We shall say that
u is an harmonic function on if u = 0 on .
Remark 5.1.2. Note that from Theorem 3.1.4, an harmonic function is C

and we
shall see below that it is even analytic.
Proposition 5.1.3. Let be an open subset of R
n
and u be an harmonic function
on . Then u is a smooth function and for all x and all r > 0 such that

B(x, r) , we have
u(x) =
1
[B(x, r)[
_
B(x,r)
u(y)dy =
1
[B(x, r)[
_
B(x,r)
u(y)d(y). (5.1.1)
We shall use the notation
_
A
f(y)dy =
1
[A[
_
A
f(y)dy.
Proof. For x, r as in the statement above, we dene
(r) =
_
B(x,r)
u(y)d(y) =
_
B(0,1)
u(x +r)d()
and we have
t
(r) =
_
S
n1
u
t
(x+r)d() so that with X(y) =

j
(
j
u)(x+ry)
y
j

t
(r)[S
n1
[ =
_
S
n1

1jn
(
j
u)(x +r)
j
d() =
_
S
n1
X, )d
( is the exterior normal to S
n1
) =
_
B
n
div Xdy =
_
B
n
(u)(x +ry)rdy = 0,
and is constant so that
_
B(x,r)
u(y)d(y) = lim
r0
+

_
B(x,r)
u(y)d(y) = u(x). On
the other hand we have
_
B(x,r)
u(y)dy =
_
r
0

n1
_
S
n1
u(x +)d()d
=
_
r
0

n1
[S
n1
[du(x) = [B(x, r)[u(x),
71
72 CHAPTER 5. ELLIPTIC EQUATIONS
concluding the proof.
Remark 5.1.4. Note that, dening a subharmonic function u as a C
2
function such
that u 0, we get, using the same proof, that a subharmonic function u on an
open subset of R
n
satises
x with

B(x, r) , u(x)
_
B(x,r)
u(y)d(y). (5.1.2)
In fact the function above is proven non-decreasing, and thus such that u(x) =
lim
r0
(r)
_
B(x,r)
u(y)d(y).
Remark 5.1.5. If

B(x, r) , we have dened for u C
2
(),
(r) =
_
B(x,r)
u(y)d(y)
and we have seen that
(r)[S
n1
[ =
_
S
n1
u(x +r)d(), so that

t
(r)[S
n1
[ =
_
B
n
(u)(x +ry)rdy =
_
B(x,r)
(u)(z)dzr
1n
=
_
B(x,r)
(u)(z)dzr
1n
r
n
n
[S
n1
[ and thus

t
(r) =
_
B(x,r)
(u)(z)dz
r
n
. (5.1.3)
Theorem 5.1.6. Let be an open subset of R
n
and u C
2
(). The function u is
harmonic in if and only if for all x and all r > 0 such that

B(x, r) , we
have
u(x) =
_
B(x,r)
u(y)d(y),
that is u satises the mean value property.
Proof. We have seen in Proposition 5.1.3 the only if part. On the other hand, if
u satises the mean-value property and x
0
with (u)(x
0
) > 0, as above, we
get
0 =
t
(r) =
_
B(x
0
,r)
(u)(y)d(y)
r
n
> 0
with B(x
0
, r
0
) , r
0
r > 0, u > 0 continuous on B(x, r
0
), which is impossible;
the same occurs with a negative sign for u.
5.1. SOME SIMPLE FACTS ON THE LAPLACE OPERATOR 73
5.1.2 The maximum principle
Theorem 5.1.7. Let be an open bounded subset of R
n
, u an harmonic function
on continuous on . Then max

u = max

u, and if is connected and x


0

with u(x
0
) = max

u, then u is constant on .
Proof. Note that u is continuous on the compact set . Let us assume x
0
with
u(x
0
) = max

u = M. Then if 0 < r < d(x


0
, ),
M = u(x
0
) =
_
B(x
0
,r)
u(y)d(y) M,
and this implies that u = M on B(x
0
, r), so that the set / = x , u(x) = M
is closed and open in . If is connected, we get the sought result. In the general
case, we get that / contains the closure of the connected component of x
0
.
Remark 5.1.8. If is a connected open subset of R
n
, u is a continuous function
on , such that
_
u = 0 in ,
u = g on ,
with g 0, then u(x) > 0 for all x if there exists x
0
such that g(x
0
) > 0.
In fact, the function g is valued in [m, M] R
+
with M > 0, m 0. From the
previous result, the function u is also valued in [m, M]. If m > 0, we are done and
if m = 0, we dene
B = x , u(x) = 0
we get that it is closed and open and cannot be all since u must be positive near
the point x
0
. As a result, B = , proving the result.
Theorem 5.1.9. Let be an open subset of R
n
, f be a continuous function on and
g a continuous function on . There exists at most one solution u C() C
2
()
to the Dirichlet problem
_
u = f in ,
u = g on .
Proof. If u
1
, u
2
are two solutions, the function u
1
u
2
is harmonic on with bound-
ary value 0 and the maximum principle entails u
1
u
2
= 0.
Theorem 5.1.10 (Harnacks inequality). Let U be open subsets of R
n
, with U
connected (U means U compact ). There exists C > 0 such that for any u
nonnegative harmonic function on ,
sup
U
u C inf
U
u. (5.1.4)
This implies that for all x, y U, C
1
u(y) u(x) Cu(y).
74 CHAPTER 5. ELLIPTIC EQUATIONS
Proof. Since

U is a compact subset of , dist (

U, ) > 0, and with x


1
, x
2
U, [x
1

x
2
[ r with r = dist (

U, )/4, we have

B(x
2
, r)

B(x
1
, 2r) , and
u(x
1
) =
_
B(x
1
,2r)
u(y)dy
..
u0
2
n

_
B(x
2
,r)
u(y)dy = u(x
2
),
implying that for x
1
, x
2
U, [x
1
x
2
[ r, we have 2
n
u(x
2
) u(x
1
) 2
n
u(x
2
).
Using the compactness of

U, and

U
y

U
B(y, r/2), we can nd a nite number
N of balls such that

U
1jN
B(y
j
, r/2).
Lemma 5.1.11. Let U be an open connected subset of R
n
such that U
1jN
B
j
where the B
j
are open balls. Then for x
0
, x
1
U, there exists a continuous curve
: [0, 1] U such that (0) = x
0
, (1) = x
1
, and there exists 0 T
0
T
1

T
1
1 with N and
([0, T
1
)) B
j
1
, ([T
1
, T
2
) B
j
2
, . . . , ([T
1
, 1] B
j

.
Proof of the lemma. Note rst that since U is an open connected subset of R
n
, it is
also pathwise connected and we can nd a continuous curve in U joining x
0
to x
1
.
If x
0
, x
1
belong to the same ball B
j
, there is nothing to prove. If x
0
B
j
1
, x
1
/ B
j
1
,
we dene
T
1
= supt [0, 1], (t) B
j
1
.
We get that T
1
(0, 1) and (T
1
) B
j
1
. We dene on [0, T
1
] as the segment
[(0), (T
1
)]. We know now that (t) / B
j
1
for all t T
1
. Since (T
1
) B
j
2
we can
now dene
T
2
= supt [T
1
, 1], (t) B
j
2
.
We get that T
2
(T
1
, 1] and (T
2
) B
j
2
. We dene on [T
1
, T
2
] as the segment
[(T
1
), (T
2
)]. And so on.
This implies that for x, y U, u(x) 2
nN
u(y) and the result.
5.1.3 Analyticity of harmonic functions
We have seen in Theorem 3.1.1 that the fundamental solution of the Laplace operator
is nonetheless C

outside of the origin, but also analytic outside of the origin. We


could use that result to prove directly the analytic-hypoellipticity of the Laplace
equation, that is the property u analytic on the open set implies u analytic
on . However, we have chosen a more direct approach, relying on the maximum
principle.
Proposition 5.1.12. Let be an open set of R
n
and u be an harmonic function on
. Then u C

() and for x
0
with

B(x
0
, r) ,

n
[

x
u(x
0
)[
C
k
r
n+k
|u|
L
1
(B(x
0
,r))
, [[ = k, C
0
= 1, C
k
= (2
n+1
nk)
k
for k 1,
where

n
= [S
n1
[/n =
2
n/2
n(n/2)
=

n/2
(1 +
n
2
)
(5.1.5)
is the volume of the unit ball in R
n
.
5.1. SOME SIMPLE FACTS ON THE LAPLACE OPERATOR 75
Proof. We have from the mean-value property if

B(x, ) ,
[u(x)[
n

n
|u|
L
1
(B(x,))
(5.1.6)
and in particular the estimate is true for k = 0. From the mean-value property, we
have from the harmonicity of each
x
j
u that, if

B(x, ) ,

x
j
u(x) =
_
B(x,)

x
j
u(y)dy =
n

n
[S
n1
[
_
B(x,)
div (u
x
j
)dy
=
n

n
[S
n1
[
_
B(x,)
u
j
d,
so that
[
x
j
u(x)[
n

|u|
L

(B(x,))
. (5.1.7)
As a result, we have, using (5.1.6)-(5.1.7) with = r/2,

n
[
x
j
u(x
0
)[
2n
r
(r/2)
n
|u|
L
1
(B(x
0
,r))
=
|u|
L
1
(B(x
0
,r))
r
n+1
n2
n+1
and the property is true for k = 1. Let us consider now a multi-index with
[[ = k 1 and from the harmonicity of

x
u and (5.1.7)
[
x
j

x
u(x
0
)[
n(k + 1)
r
|

x
u|
L

(B(x
0
,r/(k+1)))
so that, inductively,

n
[
x
j

x
u(x
0
)[
n(k + 1)
r
|u|
L
1
(B(x
0
,r))
(
rk
k+1
)
n+k
(n2
n+1
k)
k
.
We check now
(n2
n+1
k)
k
n(k + 1)(
k + 1
k
)
n+k
= (k + 1)
k+1
(k + 1)
n
k
n
n
k+1
(2
n+1
)
k
=
_
n2
n+1
(k + 1)
_
k+1
2
n1
(1 +
1
k
)
n

_
n2
n+1
(k + 1)
_
k+1
,
completing the proof of the proposition.
Theorem 5.1.13 (Liouville theorem). Let u be a bounded harmonic function on
R
n
. Then u is a constant.
Proof. From the previous proposition, we have for all x R
n
, r > 0
[u(x)[
2
n+1
n

n
r
n+1
|u|
L
1
(B(x,r))

2
n+1
n
r
|u|
L

(R
n
)
,
implying u 0 so that u is constant.
Corollary 5.1.14. Let f L

comp
(R
n
) with n 3. The bounded solutions of u = f
on R
n
are E f + constant, where E is the fundamental solution of the Laplace
operator given by Theorem 3.1.1
76 CHAPTER 5. ELLIPTIC EQUATIONS
Proof. If u is a bounded solution of u = f, the distribution u E f makes sense
and is harmonic on R
n
. It is also bounded since u is bounded and
[(E f)(x)[ c
n
_
[f(y)[
[x y[
n2
dy c
n
|f|
L

(R
n
)
_
[y[R
0
[x y[
2n
dy.
If [x[ R
0
+ 1, we have
_
[y[R
0
[x y[
2n
dy
_
[y[R
0
([x[ R
0
)
2n
dy =
R
n
0

n
([x[ R
0
)
n2
R
n
0

n
,
and if [x[ R
0
+ 1, we have
_
[y[R
0
[x y[
2n
dy
_
[zx[R
0
[z[
2n
dz
_
[z[R
0
+[x[2R
0
+1
[z[
2n
dz
entailing that E f is bounded. By Liouville Theorem, u E f is constant.
Conversely E f +C is indeed a bounded solution of u = f.
Note that in two dimensions, the fundamental solution of the Laplace operator
is unbounded; in particular a solution of u = f with f C

c
(B(0, 1)), f , 0 is
given by
1
2
_
[yx[1
f(x y) ln [y[dy. Since [y x[

[y[ [x[

, [y x[ 1 implies
[x[ 1 [y[ [x[ + 1 and if [x[ > 2,
[y x[ 1 = 0 < ln([x[ 1) ln [y[ ln(1 +[x[)
so that if f 0, for [x[ > 2,
_
[yx[1
f(x y) ln [y[dy ln([x[ 1)
_
f(z)dz
which is unbounded.
Theorem 5.1.15 (Analytic-hypoellipticity of the Laplace operator). Let u be an
harmonic function on some open subset of R
n
. Then u is an analytic function on
.
Proof. Let x
0
be a point of and r
0
> 0 with

B(x
0
, 4r
0
) . We have proven in
Proposition 5.1.12 that u is C

and such that

n
|

x
u|
L

(B(x
0
,r
0
))
r
nk
0
k
k
(2
n+1
n)
k
|u|
L
1
(B(x
0
,2r
0
))
, [[ = k.
Furthermore Stirlings formula (7.3.5) gives for k k
0
, k
k
k!2(2k)
1/2
e
k
and
since n
k
=

N
n
,[[=k
k!
!
which implies k! !n
k
, we obtain for k k
0
,

n
|

x
u|
L

(B(x
0
,r
0
))
r
nk
0
(2
n+1
n)
k
|u|
L
1
(B(x
0
,2r
0
))
k!2(2k)
1/2
e
k
C
0

[[
0
!
yielding analyticity from Theorem 7.2.4.
5.1. SOME SIMPLE FACTS ON THE LAPLACE OPERATOR 77
5.1.4 Greens function
Lemma 5.1.16. Let be a bounded open set of R
n
with a C
1
boundary, u C
2
(

).
Then we have for all x ,
u(x) =
_

E(x y)(u)(y)dy +
_

u(y)
E

(y x)d(y)
_

(y)E(y x)d(y)
(5.1.8)
where E is the fundamental solution of the Laplace operator (see Theorem 3.1.1).
Since E L
1
loc
(R
n
) C

(R
n
0), the formula above makes sense.
Proof. We consider u C

c
(R
n
) and we write u = u = u E = u E so that
u(x) =
_
u(y)E(x y)dy =
_

u(y)E(y x)dy +
_

c
div
y
_
E(y x)u(y)
_
dy

c
(E)(y x) u(y)dy,
entailing with Greens formula for x ,
u(x) =
_

u(y)E(x y)dy
_

E(y x)
u

(y)d(y)
+
_

(y x)u(y)d(y) +
_

c
u(y) (E)(y x)
. .
=0
dy,
which is the result.
Remark 5.1.17. Let be a bounded set of R
n
with a C
1
boundary. Assume that
for each x , we are able to nd a function y
x
(y) such that
_

x
= 0 in ,

x
(y) = E(x y) on ,
(5.1.9)
As a consequence, we have
_

x
(y)(u)(y)dy =
_

x
u

_
d(y). We dene
then the Green function for the open set as
G(x, y) = E(y x) +
x
(y). (5.1.10)
Using formula (5.1.8), we get for x ,
u(x) =
_

E(x y)(u)(y)dy +
_

u(y)
E

(y x)d(y) +
_

(y)
x
(y)d(y)
=
_

G(x, y)(u)(y)dy +
_

u(y)
_

+
E

(y x)
_
d(y)
=
_

G(x, y)(u)(y)dy +
_

u(y)
G

y
(x, y)d(y). (5.1.11)
78 CHAPTER 5. ELLIPTIC EQUATIONS
Note that we may symbolically write that for x ,
_
(
y
G)(x, y) = (x y), y
G(x, y) = 0, y .
As a matter of fact, we have
_

G(x, y)(u)(y)dy
=
_

u(y)(
y
G)(x, y)dy +
_

_
G(x, y)
u

(y)
G

y
(x, y)u(y)
_
d(y)
= u(x)
_

y
(x, y)u(y)d(y),
which is (5.1.11).
An immediate consequence of Lemma 5.1.16 and Formula (5.1.10) is the following
theorem.
Theorem 5.1.18. Let be a bounded open set of R
n
with a C
1
boundary. If
u C
2
(

) is such that
_
u = f in ,
u = g on ,
(5.1.12)
then for all x ,
u(x) =
_

G(x, y)f(y)dy +
_

y
(x, y)g(y)d(y) (5.1.13)
where G is the Green function given by (5.1.10).
Greens function for a half-space
We consider rst the following simple problem on R
n
+
= R
n1
R

+
. Let g
S(R
n1
): we are looking for u dened on R
n
+
such that
_
u = 0 in x
n
> 0,
u(, 0) = g on R
n1
.
Dening v(
t
, x
n
) as the Fourier transform of u with respect to x
t
, we get the ODE

2
x
n
v(
t
, x
n
) 4
2
[
t
[
2
v(
t
, x
n
) = 0, v(
t
, 0) = g(
t
),
that we solve readily, obtaining v(
t
, x
n
) = e
2x
n
[

[
g(
t
), so that, at least formally,
u(x
t
, x
n
) =
__
e
2i(x

e
2x
n
[

[
g(y
t
)dy
t
d
t
.
5.1. SOME SIMPLE FACTS ON THE LAPLACE OPERATOR 79
Using Formula (7.1.1), we get for x
n
> 0,
u(x
t
, x
n
) =
(n/2)

n/2
_
(1 +[x
t
y
t
[
2
x
2
n
)
n/2
g(y
t
)dy
t
x
1n
n
=
2x
n
[S
n1
[
_
R
n1
g(y
t
)dy
t
_
x
2
n
+[x
t
y
t
[
2
_
n/2
=
2x
n
n
n
_
R
n
+
g(y)dy
[x y[
n
. (5.1.14)
We dene the Poisson kernel for R
n
+
as
k(x, y) =
2x
n
n
n
[x y[
n
, x R
n
+
, y R
n
+
, (5.1.15)
and we note right away that
x R
n
+
,
_
R
n
+
2x
n
n
n
[x y[
n
dy = 1 (for a proof see Section 7.3.3). (5.1.16)
Theorem 5.1.19. Let g C
0
(R
n1
) L

(R
n1
) and u dened on R
n
+
by (5.1.14).
Then the function u C

(R
n
+
) L

(R
n
+
), is harmonic on R
n
+
and such that for
each x
0
R
n
+
lim
xx
0
xR
n
+
u(x) = g(x
0
). (5.1.17)
The function u is thus continuous up to the boundary.
Proof. Formula (5.1.14) is well-dened for x
n
> 0, g L

(R
n1
), denes a smooth
function which satises as well [u(x)[ |g|
L

(R
n1
)
since the Poisson kernel k given
by (5.1.15) is non-negative with integral 1 from (5.1.16). On the other hand, u is
harmonic on R
n
+
since with
y
(x
t
, x
n
) = ([x
t
y
t
[
2
+x
2
n
)
n/2
x
n
(
n
) + 2
x
n
(
n
)
= x
n
_
(n)(n 1)
n2
+ (n 1)
1
(n)
n1

+ 2(n)
n1
x
n

1
= x
n

n2
_
n(n + 1) n(n 1) 2n
_
= 0.
We now consider for x
n
> 0, u(x
t
, x
n
) g(x
t
) =
2x
n
n
n
_
R
n1
(g(y

)g(x

))dy

(x
2
n
+[x

[
2
)
n/2
and we
obtain with r > 0
n
n
[u(x
t
, x
n
) g(x
t
)[ 2x
n
sup
y

B(x

,r)
[g(y
t
) g(x
t
)[
_
B(x

,r)
dy
t
_
x
2
n
+[x
t
y
t
[
2
_
n/2
+ 4x
n
|g|
L

(R
n1
)
_
[x

[r
dy
t
(x
2
n
+[x
t
y
t
[
2
)
n/2
so that from (5.1.16) and k 0,
[u(x
t
, x
n
) g(x
t
)[ sup
y

B(x

,r)
[g(y
t
) g(x
t
)[ +
4x
n
|g|
L

(R
n1
)
n
n
_
+
r

n2n
d[S
n2
[
sup
y

B(x

,r)
[g(y
t
) g(x
t
)[ +
4x
n
|g|
L

(R
n1
)
[S
n2
[
n
n
r
.
80 CHAPTER 5. ELLIPTIC EQUATIONS
As a result, we get
limsup
x
n
0
+
[u(x
t
, x
n
) g(x
t
)[ inf
r>0
_
sup
y

B(x

,r)
[g(y
t
) g(x
t
)[
_
= 0.
Since
[u(x
t
, x
n
) g(z
t
)[ [u(x
t
, x
n
) g(x
t
)[ +[g(x
t
) g(z
t
)[
sup
y

B(x

,r)
[g(y
t
) g(x
t
)[ +
4x
n
|g|
L

(R
n1
)
[S
n2
[
n
n
r
+[g(x
t
) g(z
t
)[.
we get
limsup
(x

,x
n
)(z

,0)
x
n
>0
[u(x
t
, x
n
) g(z
t
)[ inf
r>0
_
limsup
x

_
sup
y

B(x

,r)
[g(y
t
) g(x
t
)[
__
= 0
from the continuity of g.
Proposition 5.1.20. The Green function for the half-space x R
n
, x
n
> 0 is
G(x, y) = E(y x) E(y x), (5.1.18)
where E is the fundamental solution of the Laplace operator given by (3.1.4) and for
x = (x
t
, x
n
) R
n1
R, we have dened x = (x
t
, x
n
).
Proof. According to (5.1.10), we have to verify for x R
n
+
that
x
(y) = E(y x)
does satisfy (
x
)(y) = 0 in and
x
(y) = E(xy) for y R
n
+
. Both points are
obvious. Note also that Formula (5.1.13) gives for u satisfying (5.1.12) with f = 0
and x R
n
+
,
u(x) =
_
R
n1
_


y
n
_
_
E(y x) E(y x)
_
g(y)dy
which gives for n 3, u(x) =
1
(2 n)n
n
_
g(y)
_
[y x[
1n
(2 n)
x
n
y
n
[y x[
+[y x[
1n
(2 n)
x
n
+y
n
[y x[
_
dy
so that we recover Formula (5.1.14) for the Poisson kernel of the half-space.
Greens function for a ball
We want now to solve
_
u = 0 in [x[ < 1,
u = g on S
n1
.
The Green function for the ball is G(x, y) = E(y x) +
x
(y) and with x = x/[x[
2
we dene

x
(y) = E
_
(y x)[x[
_
.
5.1. SOME SIMPLE FACTS ON THE LAPLACE OPERATOR 81
We note that for [x[ < 1, the function y
x
(y) is harmonic and for y S
n1
, we
have

x
(y) =
[y x[
2n
[x[
2n
(2 n)n
n
= E(y x)
since [x[
2
[y x[
2
= [x[
2
2y x + 1 = [x y[
2
. We calculate now for [y[ = 1
(2 n)n
n
G

y
(x, y) = [y x[
n
(2 n)(y x) y [y x[
n
(2 n)[x[
2n
(y x) y,
so that
G

y
(x, y) =
[y x[
n
n
n
_
1 x y [x[
2
(1 x y)
_
=
1 [x[
2
n
n
[y x[
n
.
As a result the Poisson kernel for the ball B
R
= B(0, R) is
k(x, y) =
R
2
[x[
2
n
n
R[x y[
n
.
Theorem 5.1.21. Let g C
0
(B
R
) and u dened on B
R
by
u(x) =
R
2
[x[
2
n
n
R
_
B
R
g(y)
[x y[
n
d(y) (5.1.19)
Then the function u C

(B
R
), is harmonic on B
R
and such that for each x
0
B
R
limxx
0
xB
R
u(x) = g(x
0
). The function u is thus continuous up to the boundary.
Proof. The proof is similar to that for Theorem 5.1.19. We may also compare this
formula to (1.1.7) in the introduction: we have
u(z, z) = c
0
+

k1
(c
k
z
k
+c
k
z
k
), g(e
i
) =

kZ
c
k
e
ik
,
so that u(z, z) =
1
2
_
2
0
g(e
i
)d +

k1
1
2
_
2
0
(z
k
e
ik
+ z
k
e
ik
)d. Since we have
also for [[ = 1 > [z[,
1 +

k1
(z

)
k
+ ( z)
k
= 1 +
z

1 z

+
z
1 z
= 1 + 2 Re
z
1
1 z
1
= Re
_
1 +
2z
z
_
= Re
_
+z
z
_
=
Re( +z)(

z)
[ z[
2
=
1 [z[
2
[ z[
2
,
we obtain
u(z, z) =
1 [z[
2
2
_
2
0
g(e
i
)
[z e
i
[
2
d =
1 [z[
2
2
1
_
S
1
g(y)
[z y[
2
d(y),
which is indeed the 2D case of (5.1.19).
82 CHAPTER 5. ELLIPTIC EQUATIONS
Chapter 6
Hyperbolic Equations
6.1 Energy identities for the wave equation
6.1.1 A basic identity
In Section 3.4, we have found the fundamental solution of the wave equation and
provided explicit formulas when the space dimension is less than 3. Here, we want
to consider a bounded open subset of R
d
, with a smooth boundary. For T > 0,
we dene the cylinder
T
= (0, T] and noting that

T
= [0, T] =
_
(0, T]
_

_
0
_

_
[0, T]
_
we see that

T
=
T

T
=
_
0
_

_
[0, T]
_
.
With c > 0, we dene the wave operator with speed c by the formula (3.4.1)

c
= c
2

2
t

x
.
We consider the problem
_

c
u = f, on
T
= (0, T] ,
u = g, on
T
=
_
0
_

_
[0, T]
_
,

t
u = h, on 0 ,
(6.1.1)
and we want to start by proving that if a C
2
solution exists, it is unique. We
calculate

c
u,
t
u)
L
2
()
=
1
2
d
dt
c
2
|
t
u|
2
L
2
()
u,
t
u)
L
2
()
=
1
2
d
dt
c
2
|
t
u|
2
L
2
()

t
u
u

d +
t
(u), u)
2
L
2
()
=
1
2
d
dt
_
c
2
|
t
u|
2
L
2
()
+|u|
2
L
2
()
_

t
u
u

d.
Dening the energy of u on at time t as
E(t) =
1
2
_
c
2
|
t
u|
2
L
2
()
+|u|
2
L
2
()
_
, (6.1.2)
83
84 CHAPTER 6. HYPERBOLIC EQUATIONS
we see that

E(t) =
c
u,
t
u)
L
2
()
+
_

t
u
u

d. (6.1.3)
As a rst result, if u
1
, u
2
are two solutions of (6.1.1), the function u = u
2
u
1
satises (6.1.1) with f, g, h all 0 and consequently

E = 0 for that u. Since E(0) = 0
as well, we get that E(t) = 0 for all times, and u is 0.
6.1.2 Domain of dependence for the wave equation
We consider now a C
2
solution u of the wave equation on R
+
R
d
, c
2

2
t
u
x
u = 0,
and we introduce for t
0
t 0 the following energy
F(t) =
1
2
_
B(x
0
,c(t
0
t))
_
c
2
[
t
u[
2
+[u[
2
_
dx
and we calculate its derivative, using the identity of the previous subsection,

F(t) =
_
[xx
0
[=c(t
0
t)

t
u
u

d
1
2
_
[xx
0
[=c(t
0
t)
_
c
1
[
t
u[
2
+c[u[
2
_
d.
We note that 2
t
u
u

c
1
[
t
u[
2
+c[u[
2
so that

F 0. As a result, for 0 t t
0
,
we have
_
B(x
0
,c(t
0
t))
_
c
2
[
t
u[
2
+[u[
2
_
dx
_
B(x
0
,ct
0
)
_
c
2
[
t
u[
2
+[u[
2
_
dx. (6.1.4)
In particular, if u and
t
u both vanish at time 0 on the ball B(x
0
, ct
0
) then u vanishes
on the cone
C
t
0
,x
0
= (t, x) [0, t
0
] R
d
, [x x
0
[ c(t
0
t).
Rephrasing that, we can say that, if both u(t = 0) and
t
u(t = 0) are supported in

B(X
0
, R
0
), then for t
0
0,
supp u(t
0
, )

B(X
0
, R
0
+ct
0
). (6.1.5)
In fact, if [x
0
X
0
[ > R
0
+ct
0
, we have C
t
0
,x
0
t = 0 = B(x
0
, ct
0
) and

B(x
0
, ct
0
)

B(X
0
, R
0
)
c
since [y x
0
[ ct
0
= [y X
0
[ [x
0
X
0
[ [y x
0
[ > R
0
.
As a consequence, both u(t = 0) and
t
u(t = 0) vanish on B(x
0
, ct
0
) so that
u(t
0
, x
0
) = 0 and the result (6.1.5). In other words, the value u(T, X) for some
positive T depends only on the values of u(t = 0),
t
u(t = 0) on the backward light-
cone with vertex (T, X) intersected with t = 0, i.e. C
T,X
t = 0 = B(X, cT).
The cone C
T,X
is the cone of dependence at (T, X). If both u(t = 0),
t
u(t = 0) are
supported in the ball

B(X, R), then
supp u(T, )

B(X, R +cT).
These properties bear the name of nite propagation speed.
Chapter 7
Appendix
7.1 Fourier transform
Lemma 7.1.1. Let n N

and R
n
x u(x) = exp 2[x[, where [x[ stands
for the Euclidean norm of x. The function u belongs to L
1
(R
n
) and its Fourier
transform is
u() =
(
n+1
2
)

_
n + 1
2
__
1 +[[
2
_
(
n+1
2
)
. (7.1.1)
Proof. We note rst that in one dimension
_
R
e
2ix
e
2[x[
dx = 2 Re
_
+
0
e
2x(1+i)
dx =
1
(1 +
2
)
corroborating the above formula in 1D. We want to take advantage of this to write
e
2[x[
as a superposition of Gaussian functions; doing this will be very helpful since
it is easy to calculate the Fourier transform of Gaussian functions (this quite natural
idea seems to be used only in the wonderful textbook by Robert Strichartz [22] and
we follow his method). For t R
+
, we have
e
2t
=
_
R
e
2it
d
(1 +
2
)
=
__
R
2
e
2it
e
s(1+
2
)
H(s)dsd =
_
R
+
e
s
s
1/2
e

s
t
2
ds
so that for x R
n
, e
2[x[
=
_
R
+
e
s
s
1/2
e

s
[x[
2
ds and thus
u() =
__
R
n
R
+
e
2ix
e
s
s
1/2
e

s
[x[
2
dxds =
_
R
+
e
s
s
1/2
e
s[[
2
s
n/2
ds
so that u() =
_
+
0
e
s
s
(n1)/2
_
(1 +[[
2
)
_
(n+1)/2
ds, which is the sought result.
7.2 Spaces of functions
7.2.1 On the Fa`a de Bruno formula
The following useful formula is known as Fa`a de Brunos
1
, dealing with the iterated
chain rule. We write here all the coecients explicitly.
1
One could nd a version of theorem 7.2.1 on pages 69-70 of the thesis of Chevalier Francois
FA
`
A DE BRUNO, Capitaine honoraire d

Etat-Major dans larmee Sarde. This thesis was


85
86 CHAPTER 7. APPENDIX
Theorem 7.2.1. Let k 1 be an integer and U, V , W open sets in Banach spaces.
Let a and b be k times dierentiable fonctions b : U V and a : V W. Then
the k- multilinear symmetric mapping (a b)
(k)
is given by (N

= N0)
(a b)
(k)
k!
=

1jk
(k
1
,...,k
j
)N
j
k
1
++k
j
=k
a
(j)
b
j!
b
(k
1
)
k
1
!
. . .
b
(k
j
)
k
j
!
. (7.2.1)
Remark 7.2.2. One can note that a symmetric kmultilinear mapping L is de-
termined by its value on diagonal k-vectors (T, . . . , T), so that formula (7.2.1)
means that (a b)
(k)
is the only symmetric k-multilinear mapping such that, if T is
a tangent vector to U, and x a point in U
1
k!
(a b)
(k)
(x)
k times
..
(T, . . . , T) =

1jk
(k
1
,...,k
j
)N
j
k
1
++k
j
=k
a
(j)
_
b(x)

j!
_
b
(k
1
)
(x)T
k
1
k
1
!
, . . . ,
b
(k
j
)
(x)T
k
j
k
j
!
_
,
where b
(l)
(x)T
l
stands for the tangent vector to V given by b
(l)
(x)
l times
..
(T, . . . , T). Since
a
(j)
_
b(x)

is a j-multilinear mapping from the product of j copies of the tangent


space to V into the tangent space to W, the formula makes sense with both sides
tangent vectors to W. Note also that the sum in (7.2.1) is extended to all the
multi-indices (k
1
, . . . , k
j
) N
j
such that k
1
+ +k
j
= k.
Proof. Lets now prove the theorem. Using the same notations as in the remark
above, we see, that for t R, x U and h a tangent vector to U,
c
(k)
(x)h
k
=
_
d
dt
_
k
c(x +th)
[t=0
,
so that it is enough to prove the theorem for U neighborhood of 0, U R and
b(0) = 0 . Moreover, one can assume by regularization that b C

c
(R). Taylor-
Youngs formula gives then with a continuous with (0) = 0
(a b)(t) =

0jk
a
(j)
(0)
j!
b(t)
j
+t
k
(t) (7.2.2)
and thus
[
k
t
(a b)](0) =

0jk
a
(j)
(0)
j!

k
t
[b(t)
j
]
[t=0
.
Since for tensor products we have (the inverse Fourier formula comes from the usual
one for , b(t)), where is a linear form)
b(t)
j
=
_
R
j
e
2i t(
1
++
j
)

b(
1
) . . .

b(
j
)d
1
. . . d
j
,
defended in 1856, in the Faculte des Sciences de Paris in front of the following jury : Cauchy
(chair) , Lame and Delaunay.
7.2. SPACES OF FUNCTIONS 87
we obtain

k
t
[b(t)
j
]
[t=0
=
_
R
j
(2i)
k
(
1
+ +
j
)
k

b(
1
) . . .

b(
j
)d
1
. . . d
j
=

(k
1
,...,k
j
)N
j
k
1
++k
j
=k
k!
k
1
! . . . k
j
!
b
(k
1
)
(0) . . . b
(k
j
)
(0) ,
which gives the result of the theorem, since b(0) = 0 so that all the k
1
, . . . , k
j
above
should be larger than 1.
It is now easy to derive the following
Corollary 7.2.3. Let a and b be functions satisfying the assumptions of Theorem
7.2.1 so that U R
m
x
, V R
n
y
, W R and is a multi-index N
m
. Then
(using the standard notation for a multi-index N
n
, a
()
=

y
a and if N
l
,
! =
1
! . . .
l
!) we get for [[ 1,

x
(a b)
!
=

1j[[
(
(1)
,...,
(j)
)N
m
N
m
=N
mj

(1)
++
(j)
=
min
1rj
[
(j)
[1
a
(j)
b
j!
b
(
(1)
)
. . . b
(
(j)
)

(1)
! . . .
(j)
!
(7.2.3)
We can remark that, although corollary 7.2.3 follows actually from Theorem
7.2.1, it is easier to prove it directly, along the lines of the proof above using the
Fourier inversion formula to compute

x
[b(x)]
j
.
7.2.2 Analytic functions
Let be an open subset of R
n
and f : R be a C

function. The function f is


said to be analytic
2
on if for all x
0
, there exists R
0
> 0 such that
x B(x
0
, R
0
), f(x) =

k0
f
(k)
(x
0
)
k!
(x x
0
)
k
. (7.2.4)
Note that when n > 1, f
(k)
(x
0
) is a k-th multilinear symmetric form and that
3
f
(k)
(x)
k!
h
k
=

[[=k
(

x
f)(x)
!
h

, ! =
1
! . . .
n
!, h

= h

1
. . . h

n
, (7.2.5)
so that formula (7.2.4) can be written
f(x) =

N
n
f
()
(x
0
)
!
(x x
0
)

. (7.2.6)
2
Real-analytic would be more appropriate.
3
The summation is taking place on multi-indices N
n
such that [[ =

1jn

j
= k.
88 CHAPTER 7. APPENDIX
There are plenty of examples of C

functions which are not analytic such as (3.1.1-


2) in [15]. One should also keep in mind that the convergence of the Taylor series

k0
f
(k)
(0)
k!
h
k
is not enough to ensure analyticity as shown by the example on the
real line
f(t) = e
1/t
2
for t ,= 0, f(0) = 0,
which is easily seen to be C

and is at at the origin, i.e. for all k N, f


(k)
(0) = 0.
That function is not analytic near 0 (otherwise it would be 0 near 0, which is not
the case), but the Taylor series at 0 does converge.
On the other hand, there is no diculty to extend Formula (7.2.4) to a ball
with same radius in C
n
. In particular the restriction to R
n
of an entire function
(holomorphic function on the whole C
n
) is indeed analytic. However, all analytic
functions on R
n
are not restrictions of entire function: an example is given by
R t 1/(1 + t
2
) which is analytic on R but is not the restriction of an entire
function to R (exercise: if it were the restriction of an entire function, that function
would coincide with 1/(1 + z
2
) which has poles at i). This type of example is a
good reason to use the terminology real-analytic for analytic functions on an open
subset of R
n
.
Going back to (7.2.4), we dene the k-th multi-linear symmetric form a
k
=
f
(k)
(x
0
)
k!
and
1
R
= limsup
k
|a
k
|
1/k
, with |a
k
| = sup
[T[=1
[a
k
T
k
[.
Assuming R > 0, we have for [h[ R
2
< R
1
< R, provided that for k > k
0
,
|a
k
|
1/k
1/R
1
,

k0
sup
[h[R
2
[a
k
h
k
[ sup
0kk
0
|a
k
|

0kk
0
R
k
2
+

k
0
<k
sup
[h[R
2
(|a
k
|
1/k
[h[)
k
sup
0kk
0
|a
k
|

0kk
0
R
k
2
+

k
0
<k
(R
1
1
R
2
)
k
< +,
so that the series

k0
a
k
h
k
converges normally on each compact subset of B(0, R).
As a consequence the convergence is uniform on each compact subset of B(0, R) and
the series can be dierentiated termwise.
If n = 1 and [h[ R
2
> R
1
> R, we have, extracting a subsequence, [a
k
j
[
1/k
j
R
1

R
1
/R
2
, j j
0
. As a result, for j j
0
,
[a
k
j
h
k
j
[ =
_
[a
k
j
[
1/k
j
R
1
[h[R
1
2
_
k
j
(R
2
/R
1
)
k
j

_
R
1
R
2
[h[R
1
2
_
k
j
(R
2
/R
1
)
k
j
= ([h[/R
2
)
k
j
1
and the series

a
k
h
k
cannot converge. This proves also in n dimensions that, if
[h[ >
1
sup
[T[=1
_
limsup
k
[a
k
T
k
[
1/k
_
the series

a
k
h
k
cannot converge everywhere. Note that
1/

R = sup
[T[=1
_
limsup
k
[a
k
T
k
[
1/k
_
limsup
k
|a
k
|
1/k
= 1/R
7.2. SPACES OF FUNCTIONS 89
and the series

a
k
h
k
does not converge on the whole [h[ >

R and converges when
[h[ < R (note that we have indeed R

R and R =

R in one dimension).
The following example is a good illustration of what may happen with the domain
of convergence of multiple power series: we consider

k0
x
k
1
x
k
2
, which is convergent on [x
1
x
2
[ 1.
We have |a
2k
| = sup
x
2
1
+x
2
2
=1
[x
k
1
x
k
2
[ = sup
R
[ cos sin [
k
= 2
k
, so that R =

2,
which is indeed the largest circle to t between the hyperbolas x
1
x
2
= 1. On the
other hand, with T
0
= (cos
0
, sin
0
),
[a
2k
T
2k
0
[
1/2k
= (cos
0
sin
0
)
1/2
= 2
1/2
_
sin(2
0
) =

R =

2.
The radius

2 is indeed the largest possible ball in which convergence takes place.


However the region of convergence is unbounded. The picture below may help the
reader to understand the various regions.
4
0
2
We have the following characterization of analytic functions.
4
For multiple power series, it would be more natural, but also more complicated, to introduce
the notion of polydisc: a polydisc with center and (positive) radii r
1
, . . . , r
n
in C
n
is the set
D(, r
1
, . . . , r
n
) = z C
n
, j, [z
j

j
[ < r
j
.
The interior of the set where absolute convergence of a multiple power series takes place is called
the domain of convergence T. With r = (r
1
, . . . , r
n
), the polydisc D(, r) is called the polydisc of
convergence at C
n
if D(, r) T and D(, ) , T if max(
j
r
j
) > 0. We have then the
90 CHAPTER 7. APPENDIX
Theorem 7.2.4. Let be an open subset of R
n
and f C

(; R). The function f


is analytic on if and only for all compact subsets K of , there exists C 0, > 0
such that
N
n
, |

x
f|
L

(K)
C
[[
!
We leave the proof as an exercise for the reader.
Remark 7.2.5. A consequence of Corollary 7.2.3 and is that the composition of an-
alytic functions is analytic and that the power series coecients of ab are universal
polynomials with positive rational coecients of the power series coecients of a, b:
in fact we have the explicit formula

x
(a b)
!
=

1j[[
(
(1)
,...,
(j)
)N
m
N
m
=N
mj

(1)
++
(j)
=
min
1rj
[
(j)
[1
a
(j)
b
j!
b
(
(1)
)
. . . b
(
(j)
)

(1)
! . . .
(j)
!
Denition 7.2.6. Let A =

N
n
a

be a power series with non-negative coe-


cients and B =

N
n
b

be a power series with complex coecients. The power


series A is said to majorize B if for all N
n
, [b

[ a

. We shall write B A.
In particular, if A converges absolutely, then B converges absolutely.
To provide simple examples, we start noting that, for R > 0, the function R
d

x
1
R
P
1jd
x
j
is analytic on x,

1jd
[x
j
[ < R since it is equal to
R
1

k0
_
R
1

1jd
x
j
_
k
=

R
1[[
[[!
!
x

,
and with [x[
1
=

1jn
[x
j
[, we have from the multinomial formula
5

R
1[[
[[!
!
[x

[ = R
1

k0
_
R
1

1jd
[x
j
[
_
k
= R
1
1
1 ([x[
1
/R)
=
1
R [x[
1
.
We remark now that if the power series C =

converges on
[x[

= max
1jn
[x
j
[ R for some positive R,
then c

(R, . . . , R)

= c

R
[[
must be bounded, i.e. [c

[ MR
[[
MR
[[
[[!
!
, so
that
C
MR
R

1jn
x
j
. (7.2.7)
relation
limsup
|k|+
_
[a
k
r
k
1
1
. . . r
k
n
n
[
_
1/|k|
= 1,
analogous to the Cauchy-Hadamard relation for the radius of convergence in one dimension.
5
The multinomial formula is (t
1
+ +t
n
)
k
=

N
n
k!
!
t

.
7.3. SOME COMPUTATIONS 91
7.3 Some computations
7.3.1 On multi-indices
Let n N

, m N. We have
Card (
1
, . . . ,
n
) N
n
,
1
+ +
n
= m
. .
E
n,m
= C
n1
m+n1
=
(m +n 1)!
(n 1)!m!
. (7.3.1)
In fact, dening

1
=
1
+ 1,
2
=
1
+
2
+ 2, . . . ,
n1
=
1
+ +
n1
+n 1,
we have 1
1
<
2
< <
n1
m + n 1, and we nd a bijection between
the set E
n,m
and the set of strictly increasing mappings from 1, . . . , n 1 to
1, . . . , m+n1; the latter set has cardinality C
n1
m+n1
since it amounts to choosing
a subset with n1 elements among a set of m+n1 elements. On the other hand,
dening q
n,m
= Card E
n,m
, we have obviously
q
n+1,m
=
m

j=0
q
n,j
and we can check that C
n
m+n
=

m
j=0
C
n1
n+j1
: it is true for m = 0 and if veried for
m 0, we get indeed
C
n
m+1+n
= C
n
m+n
+C
n1
m+n
=
m

j=0
C
n1
n+j1
+C
n1
m+n
=
m+1

j=0
C
n1
n+j1
.
We have also obviously from the above discussion
Card (
1
, . . . ,
n
) N
n
,
1
+ +
n
m
. .
F
n,m
= q
n+1,m
= C
n
n+m
, (7.3.2)
as well as
Card(
1
, . . . ,
n+1
) N
n+1
,
1
+ +
n+1
m,
n+1
< m = C
n+1
n+m+1
1. (7.3.3)
7.3.2 Stirlings formula
Let k N. We have
k! =
_
k
e
_
k

2k
_
1 +
1
12k
+O(k
2
)
_
, k +. (7.3.4)
and in particular
k!
_
k
e
_
k

2k k +. (7.3.5)
92 CHAPTER 7. APPENDIX
7.3.3 On the Poisson kernel for a half-space
We consider for x
n
> 0, x
t
R
n1
, n 1, n
n
=
2
n/2
(n/2)
= [S
n1
[,
I =
2x
n
n
n
_
R
n1
dy
t
_
[x
t
y
t
[
2
+x
2
n
_
n/2
=
x
n
(n/2)

n/2
_
R
n1
dy
t
(x
2
n
+[y
t
[
2
)
n/2
=
x
n
(n/2)

n/2
_
+
0

n2
d
(x
2
n
+
2
)
n/2
[S
n2
[ =
(n/2)

n/2
2
(n1)/2
((n 1)/2)
_
+
0

n2
(1 +
2
)
n/2
d.
We have
_
+
0

n2
(1 +
2
)
n/2
d =
_
/2
0
(tan )
n2
(cos )
n2
d =
_
/2
0
(sin )
n2
d = W
n2
,
the so-called Wallis integrals. It is easy and classical to get for k N,
W
2k
=
(2k)!
(k!)
2
2
2k+1
, W
2k+1
=
(k!)
2
2
2k
(2k + 1)!
. (7.3.6)
As a result, for n = 2 + 2k,
(n/2)

n/2
2
(n1)/2
((n 1)/2)
_
+
0

n2
(1 +
2
)
n/2
d =
k!

1/2
2
(k +
1
2
)
(2k)!
(k!)
2
2
2k+1
=

1
2
(2k)!
2
2k
k!(k +
1
2
)
and for n = 3 + 2k,
(n/2)

n/2
2
(n1)/2
((n 1)/2)
_
+
0

n2
(1 +
2
)
n/2
d
=
(k +
1
2
)(k +
1
2
)

1/2
2
k!
(k!)
2
2
2k
(2k + 1)!
=
2
2k
k!(k +
1
2
)

1
2
(2k)!
.
on the other hand we have
(k +
1
2
) = (k
1
2
)(k
3
2
) . . . (k
(2k 1)
2
)(1/2) =
1/2
(2k)!
2
2k
k!
,
entailing

1
2 (2k)!
2
2k
k!(k+
1
2
)
= 1 =
2
2k
k!(k+
1
2
)

1
2 (2k)!
. We have thus proven
(x
t
, x
n
) R
n1
R

+
,
2x
n
n
n
_
R
n1
dy
t
_
[x
t
y
t
[
2
+x
2
n
_
n/2
= 1. (7.3.7)
Bibliography
[1] N. Bourbaki,

Elements de mathematique. Topologie generale. Chapitres 1 `a 4,
Hermann, Paris, 1971. MR MR0358652 (50 #11111)
[2] ,

Elements de mathematique, Hermann, Paris, 1976, Fonctions dune
variable reelle, Theorie elementaire, Nouvelle edition. MR MR0580296 (58
#28327)
[3] H. Brezis, Analyse fonctionnelle, Masson, 1983.
[4] Edwin A. Burtt, The metaphysical foundations of modern science, Dover Pub-
lications Inc., 2003.
[5] J.-Y. Chemin and N. Lerner, Flot de champs de vecteurs non lipschitziens et
equations de Navier-Stokes, J. Dierential Equations 121 (1995), no. 2, 314
328. MR MR1354312 (96h:35153)
[6] Gustave Choquet, Topology, Translated from the French by Amiel Feinstein.
Pure and Applied Mathematics, Vol. XIX, Academic Press, New York, 1966.
MR MR0193605 (33 #1823)
[7] John B. Conway, A course in functional analysis, second ed., Graduate Texts
in Mathematics, vol. 96, Springer-Verlag, New York, 1990. MR MR1070713
(91e:46001)
[8] Gerald B. Folland, Introduction to partial dierential equations, second ed.,
Princeton University Press, Princeton, NJ, 1995. MR 1357411 (96h:35001)
[9] F.G. Friedlander, Introduction to the theory of distributions, Cambridge Uni-
versity Press, 1982.
[10] Lars G arding, Some trends and problems in linear partial dierential equations,
Proc. Internat. Congress Math. 1958, Cambridge Univ. Press, New York, 1960,
pp. 87102. MR MR0117434 (22 #8213)
[11] James Glimm, Mathematical perspectives, Bull. Amer. Math. Soc. (N.S.) 47
(2009), no. 1, 127136.
[12] G.H. Hardy, A Mathematicians Apology, Cambridge University Press, 1940.
[13] L. Hormander, The analysis of linear partial dierential operators, Springer-
Verlag, 1983.
93
94 BIBLIOGRAPHY
[14] N. Lerner, Integration, http://www.math.jussieu.fr/lerner/int.nte.html, 2000.
[15] , Lecture notes on real analysis, http://people.math.jussieu.fr/lerner
/realanalysis.lerner.pdf, 2008.
[16] Charles W. Misner, Kip S. Thorne, and John Archibald Wheeler, Gravita-
tion, W. H. Freeman and Co., San Francisco, Calif., 1973. MR MR0418833 (54
#6869)
[17] University of St Andrews, History of mathematics, http://www-history.mcs.st-
and.ac.uk/history/, .
[18] Roger Penrose, The Road to Reality, Vintage Books, London, 2005.
[19] Michael Reed and Barry Simon, Methods of modern mathematical physics. I,
second ed., Academic Press Inc. [Harcourt Brace Jovanovich Publishers], New
York, 1980, Functional analysis. MR MR751959 (85e:46002)
[20] W. Rudin, Real and complex analysis, McGraw-hill, 1966.
[21] , Functional analysis, McGraw-hill, 1973.
[22] Robert S. Strichartz, A guide to distribution theory and Fourier transforms,
World Scientic Publishing Co. Inc., River Edge, NJ, 2003, Reprint of the 1994
original [CRC, Boca Raton; MR1276724 (95f:42001)]. MR 2000535
[23] F. Treves, Topological vector spaces, distributions and kernels, Academic Press,
1967.
[24] Eugene P. Wigner, The unreasonable eectiveness of mathematics in the natural
sciences [Comm. Pure Appl. Math. 13 (1960), 114; Zbl 102, 7], Mathematical
analysis of physical systems, Van Nostrand Reinhold, New York, 1985, pp. 114.
MR MR824292
Index
Notations
, Laplace operator, 51
, gamma function, 54

+,c
, forward light-cone, 60

, 51

n
, volume of the unit ball of R
n
, 74
curl, 9
div, 9
, 8
attractive node, 38
autonomous ow, 29
Burgers equation, 6, 43
Cauchy problem, 5
Cauchy-Lipschitz Theorem, 15
characteristic curves, 39
Dirichlet, 7
Eulers system, 9
rst integral, 29
ow of a vector eld, 29
ow of an ODE, 20
fundamental solution, 51
gamma function, 54
Gauss-Green formula, 37
global solution of an ODE, 21
Green-Riemann formula, 37
Gronwall, 18
heat equation, 8, 55
hypoellipticity, 53
Laplace, 6
maximal solution of an ODE, 21
Navier-Stokes system, 10
Osgood, 20
polar coordinates in R
2
, 54
quasi-linear equation, 42
repulsive node, 38
saddle point, 38
Schrodinger equation, 8, 57
singular point of a vector eld, 29
spherical coordinates, 55
spherical coordinates in R
3
, 33
transport equation, 5
vector eld, 29
volume of the unit ball, 74
vorticity, 10
wave, 8
wave equation, 59
95

Você também pode gostar