
Notes from Math 651

Richard O. Moore
August 27, 2014
Contents
1 Ordinary differential equations
  1.1 Definitions, existence and uniqueness
  1.2 Linear homogeneous ODEs
    1.2.1 Constant coefficient ODEs
    1.2.2 Equidimensional equations
  1.3 Linear inhomogeneous ODEs
    1.3.1 Variation of parameters
    1.3.2 Undetermined coefficients
  1.4 Singular points and power series solutions
  1.5 Boundary and eigenvalue problems
  1.6 Inhomogeneous BVPs and Green's functions
  1.7 Phase plane and nonlinear ODEs
    1.7.1 Hamiltonian systems
2 Partial differential equations
  2.1 Characteristics and classification
  2.2 The wave equation and d'Alembert's solution
  2.3 The heat equation
  2.4 Separation of variables
  2.5 Inhomogeneities and eigenfunction expansions
  2.6 Laplace's equation
    2.6.1 Laplace's equation in cylindrical coordinates
  2.7 Vibrating membranes (wave equation in higher dimension)
  2.8 Transform methods
    2.8.1 The Fourier transform
  2.9 The Laplace transform
  2.10 Green's functions for PDEs
    2.10.1 Free-space Green's functions and the Method of Images
Introduce syllabus, texts, grade breakdown, office hours. Any non-math majors should see the professor.
The objective of this course is to provide everybody, particularly first-year graduate students, with a common background in methods used to solve differential equations. We will also touch upon material related to dynamical systems and vector calculus.
1 Ordinary differential equations
cf. Bender & Orszag; Boyce & DiPrima.
1.1 Definitions, existence and uniqueness
Consider the generally written $n$th-order ODE
\[ G(x, y(x), y'(x), \ldots, y^{(n)}(x)) = 0, \]
where
\[ y^{(n)} := \frac{d^n y}{dx^n}. \]
Suppose $G$ is differentiable in all arguments, with $\partial G/\partial y^{(n)} \neq 0$. Then at least locally (by the implicit function theorem) we can express the ODE as
\[ y^{(n)} = F(x, y(x), \ldots, y^{(n-1)}(x)). \tag{1.1} \]
Definition: Recall that a function $F(x)$ is said to be linear if $F(ax + by) = aF(x) + bF(y)$ for all constants $a$ and $b$ and all arguments $x$ and $y$.
Definition: ODE (1.1) is said to be linear if the multivariate function $F(x, y, y', \ldots, y^{(n-1)})$ is linear in each of the arguments $y, y', \ldots, y^{(n-1)}$. An ODE that is not linear is referred to as nonlinear.
Example: $y'' = (2x + y)y'$ is nonlinear. $xy'' = 2y' - (\tan x)y$ is linear.
Note: $y'' = 2y' - (\tan x)y + 1$ is not technically linear by the above definition, but we refer to it as linear nonetheless, since $F$ can be expressed as a linear function added to a function of $x$ only.
Under the assumption of linearity, ODE (1.1) can be written in the form
\[ Ly = f(x) \quad\text{with}\quad L := p_0(x) + p_1(x)\frac{d}{dx} + \cdots + p_{n-1}(x)\frac{d^{n-1}}{dx^{n-1}} + \frac{d^n}{dx^n}. \]
Definition: ODE (1.1) is said to be homogeneous if $f(x) \equiv 0$. A linear ODE that is not homogeneous is said to be inhomogeneous or nonhomogeneous. Note that it doesn't strictly make sense to refer to a nonlinear ODE as either homogeneous or inhomogeneous. Why?
Linear, homogeneous ODEs enjoy the principle of superposition, which states that any two functions that satisfy the ODE can be added to produce a third function that also satisfies the ODE. This is not generally true for nonlinear ODEs. (It isn't strictly true for linear, nonhomogeneous ODEs either, although we'll see that the general method for solving problems involving linear, nonhomogeneous ODEs depends critically on the principle of superposition.)
Example: $m\ddot{x} + c\dot{x} + kx = f(t)$.
When solving an ODE, the constants of integration that arise in the general solution are fixed by applying initial conditions or boundary conditions.
Definition: An equation of the form (1.1) is called an initial value problem (IVP) if it is accompanied by initial conditions (ICs), i.e., $y, y', \ldots, y^{(n-1)}$ given at a single point $x_0$:
\[ y(x_0) = a_0, \quad y'(x_0) = a_1, \quad \ldots, \quad y^{(n-1)}(x_0) = a_{n-1}. \tag{1} \]
Definition: (1.1) is called a boundary value problem (BVP) if it comes with boundary conditions (BCs), i.e., $n$ quantities given at two or more points $x_0, x_1, \ldots, x_k$.
Example: $y(x_0) = a_0$, $y(x_1) = a_1$, $y'(x_1) = a_2$, $y^2(x_0) + 2y'(x_3) = a_3$.
Just as linear ODEs are simpler than nonlinear ODEs in that we can make more general statements concerning existence and uniqueness, IVPs are simpler than BVPs for the same reason.
Theorem: If $F(x, y, y', \ldots, y^{(n-1)})$ from (1.1) is continuous and bounded in all arguments within some neighbourhood of $x_0, a_0, a_1, \ldots, a_{n-1}$, then the IVP with ICs given by (1) has a solution (i.e., a solution exists).
Theorem: If, in addition to the above, $F$ has continuous and bounded partial derivatives with respect to $y, y', \ldots, y^{(n-1)}$, then the solution found above is unique.
Proof: By Picard iteration, cf. Courant, Diff. & Int. Calculus; see Problem Set #1.
Example: Consider $y' = F(x, y)$ with $y(x_0) = a_0$. Its formal solution is
\[ y(x) = a_0 + \int_{x_0}^{x} F(t, y(t))\,dt. \]
We can construct a sequence of functions $y_0(x), y_1(x), \ldots$ satisfying
\[ y_{j+1}(x) = a_0 + \int_{x_0}^{x} F(t, y_j(t))\,dt, \tag{2} \]
then ask: Does this sequence converge? You'll see this in an exercise.
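The iteration (2) can also be carried out numerically. A minimal sketch, using the test problem $y' = y$, $y(0) = 1$ (an illustrative choice, not from the notes), whose Picard iterates are the partial sums of the series for $e^x$:

```python
import math

def picard(F, x0, a0, x_end, n_iter, n_pts=2001):
    """Approximate the Picard iterates y_{j+1}(x) = a0 + int_{x0}^x F(t, y_j(t)) dt
    on a uniform grid, using the trapezoidal rule for the integral."""
    h = (x_end - x0) / (n_pts - 1)
    xs = [x0 + i * h for i in range(n_pts)]
    y = [a0] * n_pts                      # y_0(x) = a0
    for _ in range(n_iter):
        f = [F(x, yx) for x, yx in zip(xs, y)]
        y_next = [a0]
        acc = 0.0
        for i in range(1, n_pts):         # cumulative trapezoid
            acc += 0.5 * h * (f[i - 1] + f[i])
            y_next.append(a0 + acc)
        y = y_next
    return xs, y

xs, y = picard(lambda x, y: y, 0.0, 1.0, 1.0, n_iter=12)
print(y[-1], math.e)   # the iterates approach y(1) = e
```

After a dozen iterations the grid value at $x = 1$ agrees with $e$ to within the quadrature error.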
Example: $y' = y^{1/3}$ with $y(0) = 0$. Here, $F(x, y)$ is continuous and bounded on any interval, so we expect existence. However, $\partial F/\partial y$ is neither continuous nor bounded on any interval containing the origin, so we don't expect uniqueness. Solving gives us three possible solutions: $y = \pm(2x/3)^{3/2}$ or $y \equiv 0$.
Example: $y' = x^{-1/2}$ with $y(0) = 1$. Here, $F$ is neither continuous nor bounded on intervals containing $x = 0$, so we don't expect either existence or uniqueness. Nevertheless, there is a unique solution $y = 2x^{1/2} + 1$. Moral: the above theorems give sufficiency conditions for existence/uniqueness, not necessary ones!
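The non-uniqueness in the first example can be checked directly: both $y = (2x/3)^{3/2}$ and $y \equiv 0$ satisfy $y' = y^{1/3}$ with $y(0) = 0$, since the hand-computed derivative of the first branch is $(2x/3)^{1/2} = y^{1/3}$. A quick numerical confirmation:

```python
def y1(x):
    # nontrivial branch y = (2x/3)^(3/2)
    return (2 * x / 3) ** 1.5

def dy1(x):
    # its derivative, worked out by hand: (2x/3)^(1/2)
    return (2 * x / 3) ** 0.5

for x in [0.0, 0.5, 1.0, 2.0]:
    # both branches make the residual y' - y^(1/3) vanish
    assert abs(dy1(x) - y1(x) ** (1 / 3)) < 1e-12
    assert abs(0.0 - 0.0 ** (1 / 3)) < 1e-12   # trivial solution y = 0
print("two distinct solutions satisfy the same IVP")
```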
For the linear ODE (1.1), our criteria for existence and uniqueness become conditions on the coefficients $p_j(x)$: that they be continuous and bounded on a given interval. Now let's turn to the harder case of BVPs. Rather than focusing on a small interval around $x_0$ as for the IVPs, we now have to worry about an entire interval between boundary points $x_1$ and $x_2$, for example.
Example: $y'' + y = 0$. General solution $y = A\cos x + B\sin x$.
Case I: $y(0) = 0$, $y'(\pi/2) = 1$. Then $A = 0$, $B = ?$. No solution!
Case II: $y(0) = 0$, $y(\pi/2) = 3$. Then $A = 0$, $B = 3$. Unique solution!
Case III: $y(0) = 0$, $y(\pi) = 0$. Then $A = 0$, $B = B$. Infinitely many solutions!
Thus, even with a well-behaved ODE, BVPs can yield zero, one or multiple solutions, and we can make no general statements. How does this relate to what you know about solutions to $AX = B$ in linear algebra?
1.2 Linear homogeneous ODEs
Consider
\[ Ly = 0 \quad\text{with}\quad L := p_0(x) + p_1(x)\frac{d}{dx} + \cdots + p_{n-1}(x)\frac{d^{n-1}}{dx^{n-1}} + \frac{d^n}{dx^n}. \tag{3} \]
Clearly, any two solutions of (3) can be added in linear combination and the result will be another solution. Moreover, the general solution to this equation takes the form
\[ y = \sum_{j=1}^{J} c_j y_j(x) \]
where $\{y_j(x)\}$ is a linearly independent set of solutions to (3). Any solution to (3) can then be expressed in this form through a unique choice of constants $c_j$, $j = 1, \ldots, J$.
Definition: A set of functions $\{y_j(x)\}$ is said to be linearly dependent if there exists a nontrivial solution (i.e., a solution with at least one nonzero constant) to
\[ \sum_{j=1}^{J} c_j y_j(x) = 0. \]
A set of functions that is not linearly dependent is said to be linearly independent.
Example: $L = 1 + \frac{d^2}{dx^2}$, i.e., $y'' + y = 0$. Possible general solutions:
\[ y = c_1 \sin x + c_2 \cos x \tag{4} \]
\[ y = c_1 e^{ix} + c_2 e^{-ix} \tag{5} \]
\[ y = c_1 \sin x + c_2 \sin(x + \pi/3). \tag{6} \]
Coefficients $c_1, c_2$ are then determined by ICs (or BCs, if possible).
Definition: If $n$ functions $y_1(x), \ldots, y_n(x)$ each have $n-1$ derivatives, then their Wronskian is given by
\[ W(x) = W[y_1(x), \ldots, y_n(x)] := \det \begin{pmatrix} y_1(x) & \ldots & y_n(x) \\ y_1'(x) & \ldots & y_n'(x) \\ \vdots & & \vdots \\ y_1^{(n-1)}(x) & \ldots & y_n^{(n-1)}(x) \end{pmatrix}. \]
Theorem: If $n$ functions $y_1(x), \ldots, y_n(x)$ have $n-1$ derivatives and are linearly dependent on an interval $I$, then their Wronskian is identically zero on $I$.
Proof: Suppose $y_1, \ldots, y_n$ are linearly dependent. Then
\[ c_1 y_1 + c_2 y_2 + \cdots + c_n y_n = 0 \]
has a nontrivial solution, and so therefore do
\[ c_1 y_1' + c_2 y_2' + \cdots + c_n y_n' = 0, \;\ldots \tag{7} \]
\[ c_1 y_1^{(n-1)} + c_2 y_2^{(n-1)} + \cdots + c_n y_n^{(n-1)} = 0, \tag{8} \]
which is only possible if $W = 0$.
Example: Calculate the Wronskian for the above solutions to $y'' + y = 0$.
Theorem: If $n$ functions $y_1, \ldots, y_n$ all satisfy the same $n$th-order homogeneous linear ODE on an interval $I$, then if $W \equiv 0$ on $I$, the functions are linearly dependent.
Theorem (Abel's Theorem): Suppose $y_1, \ldots, y_n$ satisfy (3). Then
\[ \frac{dW}{dx} = -p_{n-1}(x) W. \]
Proof: Insert $y_j^{(n)} = -p_0 y_j - p_1 y_j' - \cdots - p_{n-1} y_j^{(n-1)}$ into $dW/dx$. Thus,
\[ W = W_0 \exp\left( -\int_{x_0}^{x} p_{n-1}(s)\,ds \right). \]
This is Abel's formula, and it demonstrates that if all coefficients $p_j(x)$ are continuous, then $W$ is either never zero (LI) or always zero (LD).
Example: $y'' - [(1+x)/x]y' + (1/x)y = 0$, with solutions $y_1 = 1 + x$, $y_2 = e^x$. $W = xe^x$, both by direct calculation and by Abel's formula. We'll see later that $x = 0$ is a regular singular point.
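A numerical sketch of this example, checking $W = xe^x$ both directly and via Abel's formula. The reference point $x_0 = 1$ and the hand-computed antiderivative of $-p_1 = (1+s)/s$ are choices made here for illustration:

```python
import math

def W_direct(x):
    # W = y1*y2' - y1'*y2 with y1 = 1 + x, y2 = exp(x)
    return (1 + x) * math.exp(x) - 1 * math.exp(x)

def W_abel(x, x0=1.0):
    # Abel: W(x) = W(x0) * exp(-int_{x0}^x p1(s) ds) with p1(s) = -(1+s)/s,
    # so -int p1 = ln(x/x0) + (x - x0)   (antiderivative worked by hand)
    return W_direct(x0) * math.exp(math.log(x / x0) + (x - x0))

for x in [0.5, 1.0, 2.0, 3.0]:
    assert abs(W_direct(x) - x * math.exp(x)) < 1e-9
    assert abs(W_abel(x) - W_direct(x)) < 1e-9
print("W = x e^x, consistent with Abel's formula")
```

Note that $W$ vanishes only at the singular point $x = 0$ of the coefficients, consistent with the never-zero/always-zero dichotomy on intervals avoiding the singularity.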
Theorem: Equation (3) has exactly $n$ linearly independent solutions $y_1(x), \ldots, y_n(x)$ in any interval where $p_0(x), \ldots, p_{n-1}(x)$ are continuous.
Proof: Let $y_j(x)$ satisfy (3) with the ICs $y_j^{(j-1)}(x_0) = 1$ and $y_j^{(i)}(x_0) = 0$ for all other $i$, $0 \le i \le n-1$. By the existence theorem, each of these IVPs has a unique solution. But $W[y_1, \ldots, y_n](x_0) = \det I = 1$, so their Wronskian is never zero and they are therefore LI. Now suppose (3) has an additional solution $y^*$. Clearly, the resulting $(n+1) \times (n+1)$ Wronskian evaluated at $x_0$ must vanish, so $y^*$ is a linear combination of $y_1, \ldots, y_n$.
1.2.1 Constant coefficient ODEs
Suppose the coefficients $p_j(x)$ in (3) are constant. The ODE is then referred to as constant-coefficient, and the solutions are found by substituting $y = e^{rx}$ into the differential operator $L[y]$, yielding a characteristic polynomial $P(r)$. The roots of $P(r)$ correspond to solutions of the ODE. We consider two cases:
Case I: All roots $r_1, \ldots, r_n$ of $P(r)$ are distinct. In this case, we immediately have $n$ linearly independent solutions, so that the general solution is
\[ y = c_1 e^{r_1 x} + c_2 e^{r_2 x} + \cdots + c_n e^{r_n x}. \]
If the ODE has real coefficients $p_j(x)$, then any complex roots of $P(r)$ will occur in complex conjugate pairs $r_1, r_2 = a + ib, a - ib$, in which case we can write the corresponding pair of solutions in the form
\[ y = e^{ax}(d_1 \cos(bx) + d_2 \sin(bx)). \]
Case II: $P(r)$ has repeated roots. Suppose $r_1$ is a root of order $m$. Then $P(r) = (r - r_1)^m Q(r)$ where $Q(r_1) \neq 0$ and
\[ L(e^{rx}) = (r - r_1)^m Q(r) e^{rx}. \]
Clearly, $y = e^{r_1 x}$ satisfies $Ly = 0$, but we need to find another $m - 1$ solutions to form a complete set of solutions. Note that
\[ L(x e^{rx}) = m(r - r_1)^{m-1} Q(r) e^{rx} + (r - r_1)^m Q'(r) e^{rx} + (r - r_1)^m Q(r) x e^{rx}, \]
so $L(x e^{r_1 x}) = 0$ if $m > 1$. We can build all $m$ solutions by multiplying by successive degrees of monomial to give $e^{r_1 x}, x e^{r_1 x}, \ldots, x^{m-1} e^{r_1 x}$.
Example: Find the general solution of $y^{(iv)} + 2y'' + y = 0$.
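Here the characteristic polynomial is $(r^2 + 1)^2$, with double roots $\pm i$, so the recipe above gives $\cos x$, $\sin x$, $x\cos x$, $x\sin x$. A quick check (the derivatives of $x\cos x$ worked out by hand) that $x\cos x$ really does solve the equation:

```python
import math

def f(x):      # candidate solution y = x cos x
    return x * math.cos(x)

def d2f(x):    # second derivative, by hand: -2 sin x - x cos x
    return -2 * math.sin(x) - x * math.cos(x)

def d4f(x):    # fourth derivative, by hand: 4 sin x + x cos x
    return 4 * math.sin(x) + x * math.cos(x)

for x in [-2.0, -0.3, 0.0, 1.0, 5.0]:
    residual = d4f(x) + 2 * d2f(x) + f(x)   # y'''' + 2y'' + y
    assert abs(residual) < 1e-12
print("x cos x solves y'''' + 2y'' + y = 0")
```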
1.2.2 Equidimensional equations
Definition: The simplest non-constant coefficient ODEs are equidimensional or Euler equations, satisfying
\[ Ly := x^n \frac{d^n y}{dx^n} + \alpha_{n-1} x^{n-1} \frac{d^{n-1} y}{dx^{n-1}} + \cdots + \alpha_1 x \frac{dy}{dx} + \alpha_0 y = 0. \]
The name equidimensional comes from the fact that the ODE is invariant under the scaling transformation $x \to ax$ for any $a \neq 0$.
Just as exponential functions reduced constant-coefficient ODEs to algebraic characteristic equations, monomials reduce equidimensional ODEs to algebraic indicial equations: $L(x^s) = P(s)x^s$ where
\[ P(s) = s(s-1)\ldots(s-(n-1)) + \alpha_{n-1}\, s(s-1)\ldots(s-(n-2)) + \cdots + \alpha_1 s + \alpha_0. \]
Again, we have the cases of distinct and repeated roots to consider.
Case I: Distinct roots. The solution is simply
\[ y = c_1 x^{s_1} + c_2 x^{s_2} + \cdots + c_n x^{s_n}. \]
Complex roots again appear in conjugate pairs $s_{1,2} = a \pm ib$, yielding solutions
\[ c_1 x^{a+ib} + c_2 x^{a-ib} = x^a \left( d_1 \cos(b \ln x) + d_2 \sin(b \ln x) \right). \]
Case II: Repeated roots. If $s_1$ is a root of order $m$ of the indicial equation $P(s) = 0$, then $L[x^s] = (s - s_1)^m Q(s) x^s$, so
\[ L[(\ln x) x^s] = L\left[ \frac{d}{ds} x^s \right] = m(s - s_1)^{m-1} Q(s) x^s + (s - s_1)^m Q'(s) x^s + (s - s_1)^m Q(s) (\ln x) x^s, \]
so again, $L[(\ln x) x^{s_1}] = 0$ provided $m > 1$. We therefore have $m$ solutions $x^{s_1}, (\ln x)x^{s_1}, \ldots, (\ln x)^{m-1} x^{s_1}$.
Example: $2x^2 y'' - 4xy' + 6y = 0$, $\quad x^2 y'' - 5xy' + 9y = 0$.
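For the second equation the indicial polynomial is $s(s-1) - 5s + 9 = (s-3)^2$, a double root, so the solutions are $x^3$ and $(\ln x)x^3$. A check with hand-computed derivatives:

```python
import math

def residual(y, dy, d2y, x):
    # residual of x^2 y'' - 5x y' + 9y for the second example equation
    return x * x * d2y(x) - 5 * x * dy(x) + 9 * y(x)

# first solution y = x^3
r1 = lambda x: residual(lambda t: t**3, lambda t: 3 * t**2, lambda t: 6 * t, x)
# second solution y = (ln x) x^3; derivatives worked out by hand:
#   y' = 3x^2 ln x + x^2,   y'' = 6x ln x + 5x
r2 = lambda x: residual(lambda t: math.log(t) * t**3,
                        lambda t: 3 * t**2 * math.log(t) + t**2,
                        lambda t: 6 * t * math.log(t) + 5 * t, x)

for x in [0.5, 1.0, 2.0, 10.0]:
    assert abs(r1(x)) < 1e-9 and abs(r2(x)) < 1e-9
print("x^3 and (ln x) x^3 both solve x^2 y'' - 5x y' + 9y = 0")
```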
Note: If we divide the equidimensional equation by $x^n$ to put it in the form (3), we see that $p_0 = \alpha_0/x^n$, $p_1 = \alpha_1/x^{n-1}$, etc., so the coefficients are singular on intervals containing $x = 0$, suggesting that Abel's formula breaks down.
If we happen to know some but not all of the $n$ linearly independent solutions to an $n$th-order linear homogeneous ODE, we can derive the others using a technique known as reduction of order. Letting $y(x) = u(x)y_1(x)$, where $y_1(x)$ is the known solution, results in an $(n-1)$st-order ODE for the unknown $u'(x)$.
Example: $Ly := y'' + p(x)y' + q(x)y = 0$. Find a second solution using reduction of order.
Answer (not particularly useful as a formula):
\[ y_2(x) = y_1(x) \int^x \frac{1}{y_1^2(s)}\, e^{-\int^s p(r)\,dr}\,ds. \]
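A sketch applying this formula to the earlier example $y'' - [(1+x)/x]y' + (1/x)y = 0$ with known solution $y_1 = 1 + x$. Here $e^{-\int^s p\,dr} = s\,e^s$ (antiderivative worked by hand), and with lower limit $x_0 = 1$ (an arbitrary choice) the formula yields $e^x$ minus a multiple of $y_1$, i.e., the second solution $e^x$ up to homogeneous terms:

```python
import math

def integrand(s):
    # (1/y1^2) exp(-int p) = s e^s / (1+s)^2 for y1 = 1 + s, p = -(1+s)/s
    return s * math.exp(s) / (1 + s) ** 2

def y2(x, x0=1.0, n=2000):
    # outer integral by the composite trapezoidal rule
    h = (x - x0) / n
    acc = 0.5 * (integrand(x0) + integrand(x))
    for i in range(1, n):
        acc += integrand(x0 + i * h)
    return (1 + x) * acc * h

# the integrand's antiderivative is e^s/(1+s) (by hand), so exactly
#   y2(x) = (1+x)[e^x/(1+x) - e/2] = e^x - (e/2)(1+x):  e^x plus a multiple of y1
for x in [1.5, 2.0, 3.0]:
    exact = math.exp(x) - (math.e / 2) * (1 + x)
    assert abs(y2(x) - exact) < 1e-5
print("reduction of order recovers y2 = e^x (mod multiples of y1)")
```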
1.3 Linear inhomogeneous ODEs
Consider $Ly = f(x)$, where $L$ is the $n$th-order linear differential operator given in (3). Clearly, any two solutions to this ODE will have a difference that satisfies $Ly = 0$, which we already know has a complete set $y_1(x), \ldots, y_n(x)$ of $n$ linearly independent solutions. Thus, all solutions to $Ly = f$ must have the form
\[ y(x) = y_p(x) + c_1 y_1(x) + \cdots + c_n y_n(x) \]
where $y_p(x)$ is a (any) particular solution to the inhomogeneous ODE $Ly_p = f$.
A first-order linear ODE is always solvable by finding a suitable integrating factor, i.e., a multiplicative factor that allows the ODE to be integrated directly. Consider
\[ y' + p(x)y = f(x). \]
Multiplying both sides by $\exp(\int^x p(s)\,ds)$ results in
\[ \left( e^{\int^x p(s)\,ds}\, y \right)' = e^{\int^x p(s)\,ds} f(x), \]
which is integrated to give
\[ y(x) = \int^x e^{-\int_s^x p(r)\,dr} f(s)\,ds. \]
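A minimal sketch of this formula for the hypothetical example $y' + 2y = e^{-x}$ (not from the notes), where the inner integral is $2(x-s)$ in closed form; with lower limit $x_0 = 0$ the exact answer is $e^{-x} - e^{-2x}$:

```python
import math

def y_formula(x, x0=0.0, n=4000):
    # y(x) = int_{x0}^x exp(-int_s^x p(r) dr) f(s) ds  with p = 2, f = e^{-s};
    # the inner integral is 2(x - s); the outer one uses the trapezoidal rule
    h = (x - x0) / n
    g = lambda s: math.exp(-2 * (x - s)) * math.exp(-s)
    acc = 0.5 * (g(x0) + g(x))
    for i in range(1, n):
        acc += g(x0 + i * h)
    return acc * h

# closed form of the same integral: e^{-2x}(e^x - 1) = e^{-x} - e^{-2x}
for x in [0.5, 1.0, 2.0]:
    exact = math.exp(-x) - math.exp(-2 * x)
    assert abs(y_formula(x) - exact) < 1e-6
print("integrating-factor formula reproduces the exact solution")
```

The lower limit of the outer integral fixes the constant of integration; here it enforces $y(0) = 0$.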
Example: $y' = y/(x + y)$. Not linear in $y$! Solve (treating $x$ as a function of $y$) to get $x = y \ln y + cy$. Can we invert? Depends on what $c$ is.
1.3.1 Variation of parameters
More generally, inhomogeneous ODEs are solved using variation of parameters, which relies on the solutions of the homogeneous ODE to produce a first-order system of inhomogeneous ODEs that we can solve by direct integration. Consider
\[ Ly := y'' + p_1(x)y' + p_0(x)y = f(x) \]
and suppose that $Ly_1 = Ly_2 = 0$ (i.e., we already have both linearly independent solutions to the homogeneous problem). To find a particular solution (to the inhomogeneous problem), we let
\[ y_p(x) = u_1(x)y_1(x) + u_2(x)y_2(x), \]
where $u_1$ and $u_2$ are unknown functions to be determined. Differentiating, we get
\[ y_p' = u_1' y_1 + u_1 y_1' + u_2' y_2 + u_2 y_2'. \]
Before taking another derivative, we set the terms involving derivatives of $u_1$ and $u_2$ to zero:
\[ u_1' y_1 + u_2' y_2 = 0. \]
We'll see why we can do this in a second. Taking another derivative then gives
\[ y_p'' = u_1' y_1' + u_1 y_1'' + u_2' y_2' + u_2 y_2''. \]
Recalling that $Ly_1 = Ly_2 = 0$, we arrive at
\[ Ly_p = u_1' y_1' + u_2' y_2' = f. \]
We can rewrite the two first-order ODEs that we ended up with as a system:
\[ \begin{pmatrix} y_1 & y_2 \\ y_1' & y_2' \end{pmatrix} \begin{pmatrix} u_1' \\ u_2' \end{pmatrix} = \begin{pmatrix} 0 \\ f \end{pmatrix}. \]
Note that the determinant of the matrix on the left is the Wronskian $W(x)$ of $y_1$ and $y_2$, and is nonzero by the assumption that these functions are linearly independent. Solving, either by inverting this matrix or by elimination, gives
\[ u_1'(x) = -\frac{y_2(x)f(x)}{W(x)}, \qquad u_2'(x) = \frac{y_1(x)f(x)}{W(x)}, \]
leading to a particular solution of the form
\[ y_p = -y_1(x) \int^x \frac{y_2(s)f(s)}{W(s)}\,ds + y_2(x) \int^x \frac{y_1(s)f(s)}{W(s)}\,ds. \]
Clearly, $y_p$ is only defined up to the addition of an arbitrary linear combination of $y_1$ and $y_2$, reflected by the constants of integration in the two integrals.
Note: You'll note the similarity of spirit between variation of parameters and the method of reduction of order.
Example: $y'' - 2y = x/\ln x$.
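To see the mechanics without the awkward integrals of that example, here is a sketch on the simpler constant-coefficient problem $y'' - y = e^x$ (an illustrative choice, not the example above), with $y_1 = e^x$, $y_2 = e^{-x}$, $W = -2$. The antiderivatives of $u_1'$ and $u_2'$ are worked out by hand, giving $y_p = (x/2 - 1/4)e^x$:

```python
import math

# y'' - y = e^x with y1 = e^x, y2 = e^{-x}; W = y1 y2' - y1' y2 = -2.
# u1' = -y2 f / W = 1/2        ->  u1 = x/2
# u2' =  y1 f / W = -e^{2x}/2  ->  u2 = -e^{2x}/4
def yp(x):
    return (x / 2 - 0.25) * math.exp(x)

def d2yp(x):
    # second derivative, by hand: (x/2 + 3/4) e^x
    return (x / 2 + 0.75) * math.exp(x)

for x in [-1.0, 0.0, 1.0, 2.0]:
    assert abs(d2yp(x) - yp(x) - math.exp(x)) < 1e-9
print("variation of parameters gives y_p = (x/2 - 1/4) e^x")
```

The $-e^x/4$ piece is a homogeneous solution, illustrating that $y_p$ is only defined up to multiples of $y_1$ and $y_2$.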
1.3.2 Undetermined coefficients
Variation of parameters is somewhat cumbersome to implement for higher-order differential equations. A simpler method involving educated guessing to determine a particular solution to $Ly = f$ is often applied when the following conditions are met:
1. the differential operator $L$ has constant coefficients, and
2. the inhomogeneous term $f(x)$ is a linear combination of simple exponentials and polynomials. (More broadly, when $f$ consists of functions that repeat or terminate upon repeated differentiation.)
The basic rules for forming a trial form of particular solution are as follows:
1. If $f(x)$ and its derivatives do not contain any solutions to $Ly = 0$, then set $y_p$ equal to a linear combination of the linearly independent terms contained in $f$ and its first $n$ derivatives.
2. If $f(x)$ or its derivatives contain terms that satisfy $Ly = 0$, then multiply those terms in the trial solution by $x^s$, where $s$ is the smallest positive integer such that the trial solution does not include solutions to $Ly = 0$.
Example: $y'' - 3y' + 2y = 3e^{-x} - 10\cos 3x$, $\;y(0) = 1$, $y'(0) = 2$.
Example: $y'' + 4y = x\sin 2x$.
Note: The method of undetermined coefficients is often simplified by using complex exponentials. (Redo last example.)
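For the $e^{-x}$ part of the first example, rule 1 applies ($e^{-x}$ does not solve the homogeneous equation, whose solutions are $e^x$ and $e^{2x}$): the trial term $Ae^{-x}$ gives $A(1 + 3 + 2) = 3$, so $A = 1/2$. A quick check of that arithmetic:

```python
import math

# For y'' - 3y' + 2y = 3 e^{-x}, try y_p = A e^{-x}:
# substitution gives A(1 + 3 + 2) e^{-x} = 3 e^{-x}, so A = 1/2.
A = 3 / (1 + 3 + 2)

for x in [-1.0, 0.0, 0.7, 2.0]:
    yp = A * math.exp(-x)
    # derivatives of e^{-x} just alternate sign: y_p' = -y_p, y_p'' = y_p
    residual = yp - 3 * (-yp) + 2 * yp - 3 * math.exp(-x)
    assert abs(residual) < 1e-12
print("A = 1/2; the cos 3x forcing gets its own trial terms B cos 3x + C sin 3x")
```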
Example: Simple harmonic oscillator: $m\ddot{x} + c\dot{x} + kx = F_0 \cos(\omega t)$ with ICs $x(0) = x_0$, $\dot{x}(0) = v_0$. Cases:
1. undamped, unforced: natural frequency
2. damped, unforced: underdamped, overdamped, critically damped
3. undamped, non-resonant forcing
4. damped, resonant forcing
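A sketch of the damped, unforced case with illustrative values $m=1$, $c=0.5$, $k=4$ (underdamped; these numbers are assumptions for the demo), comparing a hand-rolled RK4 integration against the exact solution $x(t) = e^{-\gamma t}\left( x_0 \cos\omega_d t + \frac{v_0 + \gamma x_0}{\omega_d}\sin\omega_d t \right)$ with $\gamma = c/2m$, $\omega_d = \sqrt{k/m - \gamma^2}$:

```python
import math

m, c, k = 1.0, 0.5, 4.0          # illustrative parameters (underdamped)
x0, v0 = 1.0, 0.0
gam = c / (2 * m)
wd = math.sqrt(k / m - gam ** 2)

def exact(t):
    return math.exp(-gam * t) * (x0 * math.cos(wd * t)
                                 + (v0 + gam * x0) / wd * math.sin(wd * t))

def rk4(t_end, n=20000):
    # integrate x' = v, v' = -(c v + k x)/m with classical RK4
    h = t_end / n
    x, v = x0, v0
    f = lambda x, v: (v, -(c * v + k * x) / m)
    for _ in range(n):
        k1 = f(x, v)
        k2 = f(x + h / 2 * k1[0], v + h / 2 * k1[1])
        k3 = f(x + h / 2 * k2[0], v + h / 2 * k2[1])
        k4 = f(x + h * k3[0], v + h * k3[1])
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x

t = 5.0
assert abs(rk4(t) - exact(t)) < 1e-8
print("RK4 matches the exact underdamped solution")
```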
1.4 Singular points and power series solutions
Recall our general formulation for an $n$th-order homogeneous linear ODE (3):
\[ Ly = 0 \quad\text{with}\quad L := \frac{d^n}{dx^n} + p_{n-1}(x)\frac{d^{n-1}}{dx^{n-1}} + \cdots + p_1(x)\frac{d}{dx} + p_0(x). \]
We saw that if the $p_j(x)$ are continuous (and therefore nonsingular), then (3) has $n$ linearly independent solutions $y_j(x)$ and the general solution is $y = \sum_{j=1}^{n} c_j y_j$. We now consider what happens when one or more of the $p_j$ has a singularity.
Definition: If, at $x = x_0 \in \mathbb{C}$, the coefficients $p_0(x), \ldots, p_{n-1}(x)$ are analytic (i.e., differentiable) in a neighborhood of $x_0$, then $x_0$ is said to be an ordinary point of (3).
Note: If $x_0$ is an ordinary point of (3), then all solutions of (3) can be expanded in a Taylor series about $x_0$. The radius of convergence of this Taylor series will be at least as large as the distance (in $\mathbb{C}$) to the nearest singularity of the coefficients $p_j(x)$.
Example: $(x^2 + 1)y' + 2xy = 0$. Simple poles at $x = \pm i$. The Maclaurin series ($x = 0$ is an ordinary point) of $y = C/(x^2 + 1)$ converges for $|x| < 1$.
Definition: A point $x_0$ that is not an ordinary point is said to be a singular point. These can be regular or irregular.
Definition: A singular point $x_0$ is said to be regular if $(x - x_0)^j p_{n-j}(x)$ is analytic in a neighborhood of $x = x_0$ for each $j = 1, \ldots, n$. In other words, $p_{n-j}(x)$ can be no more singular than $1/(x - x_0)^j$.
Example: Bessel's equation of order $\nu$:
\[ x^2 y'' + xy' + (x^2 - \nu^2)y = 0. \]
Then $p_0 = 1 - \nu^2/x^2$ and $p_1 = 1/x$, so all points are ordinary except $x_0 = 0$, which is a regular singular point for all $\nu$.
Note:
1. If $x_0$ is a RSP, then solutions of (3) may be analytic. If a solution is not analytic, then its singularity is either a pole or a branch point (which can be algebraic or logarithmic).
2. If $x_0$ is a RSP, then (3) always has at least one solution of the form $y_1 = (x - x_0)^\alpha A(x)$, where $\alpha$ (typically real) is referred to as the indicial exponent and $A$ is analytic at $x_0$ (with radius of convergence at least as large as the distance to the nearest singularity of the coefficients $p_j$ in $\mathbb{C}$). Clearly, this solution is
(a) analytic at $x_0$ if $\alpha$ is a nonnegative integer;
(b) singular with a pole if $\alpha$ is a negative integer; or
(c) singular with an algebraic branch point if $\alpha \notin \mathbb{Z}$.
3. If (3) is at least 2nd-order, then there is a second solution that has the form either of
\[ y_2 = (x - x_0)^\beta B(x) \]
or
\[ y_2 = (x - x_0)^\alpha A(x) \ln(x - x_0) + (x - x_0)^\beta C(x), \]
where $\alpha$ and $A$ are the same as above, and $B$ and $C$ are analytic at $x_0$. This can be generalized to continue to the $n$th solution of an $n$th-order ODE, of the form
\[ y_n = (x - x_0)^\alpha \left[ A_0(x) + A_1(x)\ln(x - x_0) + A_2(x)\left( \ln(x - x_0) \right)^2 + \cdots + A_{n-1}(x)\left( \ln(x - x_0) \right)^{n-1} \right]. \]
Definition: A singular point that is not regular is referred to as an irregular singular point.
Note: Not much can be said about solutions near ISPs, except that at least one solution does not have one of the forms above. Solutions typically have an essential singularity, e.g., $y = e^{1/x}$.
Finding series solutions to linear ODEs near ordinary points is straightforward. Simply match terms in a Taylor expansion:
\[ y = \sum_{n=0}^{\infty} a_n (x - x_0)^n. \]
Example: Legendre's equation:
\[ (1 - x^2)y'' - 2xy' + \lambda(\lambda + 1)y = 0. \tag{9} \]
Show that this has regular singular points (simple poles) at $x = \pm 1$ and ordinary points everywhere else. Expand about $x_0 = 0$:
\[ \sum_{n=0}^{\infty} (n+2)(n+1)a_{n+2}x^n - \sum_{n=2}^{\infty} n(n-1)a_n x^n - 2\sum_{n=1}^{\infty} n a_n x^n + \lambda(\lambda+1)\sum_{n=0}^{\infty} a_n x^n = 0. \]
Recursion relation:
\[ a_{n+2} = -\frac{(\lambda - n)(\lambda + n + 1)}{(n+2)(n+1)}\, a_n. \]
From $n$ even:
\[ y_1(x) = a_0 \sum_{n=0}^{\infty} (-1)^n b_n x^{2n}, \qquad b_n = \frac{\lambda(\lambda-2)\ldots(\lambda-2n+2)\,(\lambda+1)(\lambda+3)\ldots(\lambda+2n-1)}{(2n)!}. \]
From $n$ odd:
\[ y_2(x) = a_1 \sum_{n=0}^{\infty} (-1)^n c_n x^{2n+1}, \qquad c_n = \frac{(\lambda-1)(\lambda-3)\ldots(\lambda-2n+1)\,(\lambda+2)(\lambda+4)\ldots(\lambda+2n)}{(2n+1)!}. \]
Of course, $a_0$ and $a_1$ are arbitrary since the ODE is linear; we can therefore arbitrarily set them equal to 1.
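When $\lambda$ is a nonnegative integer, one of the two series terminates in a polynomial. A sketch with the illustrative value $\lambda = 4$: the recursion kills $a_6$ (and everything after), leaving $1 - 10x^2 + \tfrac{35}{3}x^4$, which is proportional to the Legendre polynomial $P_4(x) = (35x^4 - 30x^2 + 3)/8$:

```python
from fractions import Fraction

lam = 4                      # integer lambda: the even series terminates
a = {0: Fraction(1), 1: Fraction(1)}
for n in range(0, 10):
    # a_{n+2} = -(lam - n)(lam + n + 1) / ((n+2)(n+1)) * a_n
    a[n + 2] = -Fraction((lam - n) * (lam + n + 1), (n + 2) * (n + 1)) * a[n]

# even-index coefficients: 1, -10, 35/3, then identically zero
assert a[2] == -10 and a[4] == Fraction(35, 3) and a[6] == 0 and a[8] == 0

def poly(x):
    return sum(float(a[2 * k]) * x ** (2 * k) for k in range(3))

def P4(x):
    return (35 * x ** 4 - 30 * x ** 2 + 3) / 8

for x in [-1.0, 0.0, 0.5, 1.0]:
    assert abs(P4(x) - 0.375 * poly(x)) < 1e-12   # P_4 = (3/8) * series
print("lambda = 4 truncates the even series to a multiple of P_4")
```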
To demonstrate how to find a series solution about a regular singular point, we focus on the 2nd-order case, i.e.,
\[ y'' + \frac{p(x)}{x - x_0}\, y' + \frac{q(x)}{(x - x_0)^2}\, y = 0 \]
with $p$ and $q$ analytic at $x_0$, i.e.,
\[ p(x) = \sum_{n=0}^{\infty} p_n (x - x_0)^n, \qquad q(x) = \sum_{n=0}^{\infty} q_n (x - x_0)^n. \]
We pose a solution of the form (referred to as Frobenius form):
\[ y_1(x) = (x - x_0)^\alpha A(x) = (x - x_0)^\alpha \sum_{n=0}^{\infty} a_n (x - x_0)^n, \]
with $A$ analytic at $x = x_0$. Substituting, we get
\[ \sum_{n=0}^{\infty} (\alpha+n)(\alpha+n-1)a_n(x-x_0)^{\alpha+n-2} + \sum_{m,n=0}^{\infty} (\alpha+n)a_n p_m (x-x_0)^{\alpha+m+n-2} + \sum_{m,n=0}^{\infty} a_n q_m (x-x_0)^{\alpha+m+n-2} = 0. \]
Equating coefficients of powers of $x - x_0$, we get at leading order
\[ P(\alpha)a_0 = 0 \quad\text{with}\quad P(\alpha) := \alpha^2 + (p_0 - 1)\alpha + q_0. \]
As we saw for equidimensional equations, the roots of the indicial polynomial $P$ determine $\alpha$ (up to two possibilities, since $P$ is quadratic). The next orders provide the recurrence relation
\[ P(\alpha + n)a_n = -\sum_{m=0}^{n-1} \left( (\alpha + m)p_{n-m} + q_{n-m} \right) a_m. \tag{10} \]
Recall that $P$ has two roots, giving rise to the possibility that the left-hand side of (10) will vanish, indicating that no solution of this form is possible unless the right-hand side vanishes simultaneously. Assuming $\alpha_1 \geq \alpha_2$, we have cases:
Case I: $\alpha_1 - \alpha_2 \neq 0, 1, 2, 3, \ldots$. Then (10) provides a well-defined recurrence relation and two solutions of Frobenius form exist.
Case II: $\alpha_1 = \alpha_2$. One solution of Frobenius form exists, but the other does not. We know from experience with equidimensional equations that the second solution will contain $\ln(x - x_0)$.
Case III: $\alpha_1 - \alpha_2 = N \geq 1$. This case breaks into two subcases:
IIIa: The RHS of (10) is nonzero for $n = N$. Then only one solution can be of Frobenius form, and the other has a $\ln(x - x_0)$ term.
IIIb: The RHS of (10) vanishes for $n = N$. Then the first (Frobenius) solution truncates, and a second solution is found in Frobenius form involving the arbitrary coefficient $a_N$.
Example: Modified Bessel equation of order $\nu$:
\[ x^2 y'' + xy' - (x^2 + \nu^2)y = 0. \]
Regular singular point at $x_0 = 0$, with indicial polynomial
\[ P(\alpha) = \alpha^2 - \nu^2 \]
and recurrence relation
\[ P(n + \alpha)a_n = a_{n-2}. \]
These give at least one solution of Frobenius form,
\[ I_\nu(x) = \sum_{n=0}^{\infty} \frac{(x/2)^{2n+\nu}}{n!\,\Gamma(n + \nu + 1)}. \]
(Recall that $\Gamma(z) = \int_0^\infty e^{-t}t^{z-1}\,dt$.) In this example, a second solution in Frobenius form certainly exists if $\nu \neq N/2$ for some integer $N$ (for other ODEs, it is sometimes possible to have a second Frobenius solution even when the two roots of the indicial equation differ by an integer). To find the second solution in these cases, we need a logarithmic term similar to that found in solutions to equidimensional equations whose indicial polynomial has repeated roots.
Consider $\nu = 0$ (the repeated-root case). First (Frobenius) solution:
\[ I_0(x) = \sum_{n=0}^{\infty} \frac{(x/2)^{2n}}{(n!)^2}. \]
To find the second solution, consider how we found the first solution:
\[ y(x; \alpha) = \sum_{n=0}^{\infty} a_n(\alpha) x^{n+\alpha}. \tag{11} \]
Letting
\[ L = \frac{d^2}{dx^2} + \frac{1}{x}\frac{d}{dx} - 1, \]
suppose we define the functions $a_n(\alpha)$ so that they satisfy the recurrence relation, thereby killing off all but the leading-order term in
\[ Ly(x; \alpha) = a_0 x^{\alpha-2} P(\alpha). \tag{12} \]
Then we find the first solution to $Ly = 0$ by simply choosing $\alpha = \alpha_1 = 0$ in (11). Now suppose we differentiate (12) with respect to $\alpha$ to get
\[ L\frac{dy}{d\alpha} = \frac{da_0}{d\alpha}\,x^{\alpha-2}P(\alpha) + a_0 \ln(x)\,x^{\alpha-2}P(\alpha) + a_0 x^{\alpha-2}P'(\alpha). \]
If $\alpha_1$ is a multiplicity-2 root of $P$, then $P(\alpha_1) = P'(\alpha_1) = 0$ and we have our second solution
\[ y_2(x) = \left.\frac{dy}{d\alpha}\right|_{\alpha = \alpha_1}. \]
Turning the crank as before, we eventually find
\[ K_0(x) = -\left( \gamma + \ln(x/2) \right) I_0(x) + \sum_{n=1}^{\infty} \frac{(x/2)^{2n}}{(n!)^2} \sum_{j=1}^{n} \frac{1}{j}. \]
Other cases in which the second solution is not of Frobenius form (e.g., where the indicial roots are separated by a nonzero integer) can be found using similar methods.
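A sketch confirming that the truncated Frobenius series for $I_0$ satisfies the modified Bessel equation, with the derivatives taken term-by-term (truncation at 25 terms is ample for $x$ of order one):

```python
import math

N = 25   # truncation order

def coeff(n):
    return 1.0 / (math.factorial(n) ** 2 * 4 ** n)   # (1/2)^{2n} / (n!)^2

def I0(x):
    return sum(coeff(n) * x ** (2 * n) for n in range(N))

def dI0(x):
    return sum(coeff(n) * 2 * n * x ** (2 * n - 1) for n in range(1, N))

def d2I0(x):
    return sum(coeff(n) * 2 * n * (2 * n - 1) * x ** (2 * n - 2) for n in range(1, N))

for x in [0.5, 1.0, 2.0]:
    residual = x * x * d2I0(x) + x * dI0(x) - x * x * I0(x)
    assert abs(residual) < 1e-9
print("series I_0 satisfies x^2 y'' + x y' - x^2 y = 0 (to truncation error)")
```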
1.5 Boundary and eigenvalue problems
So far, we have considered mostly initial value problems (IVPs), i.e., ODEs with initial conditions (ICs). Now we will consider what happens when the conditions are imposed at different values of $x$, giving boundary conditions (BCs).
Definition: A BVP $Lu = f$ is referred to as linear and homogeneous if
1. $L$ is a linear differential operator,
2. $f = 0$, i.e., the ODE is homogeneous, and
3. the BCs are linear and homogeneous (i.e., any two functions satisfying the BCs can be added to form a new function that also satisfies the BCs).
Example:
$u'' + u = 0$ with BCs $u(0) = u(1) = 0$ is a linear homogeneous BVP.
$u'' + u = 0$ with BCs $u(0) = 0$, $u(1) = 1$ is not a homogeneous BVP.
$u'' + u = 0$ with BCs $u^2(0) = 0$, $\cos(u(1)) = 0$ is not a linear BVP.
As we discussed earlier, BVPs can have no solution, a single solution, or an infinite number of solutions in general. For linear homogeneous BVPs, we are guaranteed to have the trivial solution, so these problems either have a single trivial solution or an infinite family of solutions.
Example: Consider $u'' + \lambda u = 0$ with $u(0) = u(\pi) = 0$. This only has the trivial solution unless $\lambda = n^2$, $n = 1, 2, \ldots$. Also consider the more complicated BCs $u(0) = 0$, $u'(\pi) + hu(\pi) = 0$.
Definition: Values of the parameter $\lambda$ for which linear homogeneous BVPs similar to the example above have nontrivial solutions are called eigenvalues. The nontrivial solutions associated with each of the eigenvalues are called eigenfunctions.
Much of the motivation for studying eigenvalue problems stems from their usefulness in solving partial differential equations. Consider a PDE that we'll see later:
\[ \rho(x)c(x)\frac{\partial u}{\partial t} = \frac{\partial}{\partial x}\left( K_0(x)\frac{\partial u}{\partial x} \right) + \alpha(x)u \]
with boundary conditions $u(0, t) = u(L, t) = 0$. We could try looking for solutions that separate variables, i.e.,
\[ u(x, t) = \phi(x)h(t), \]
and we'd arrive at
\[ \frac{\dot{h}}{h} = \frac{\frac{d}{dx}\left( K_0 \frac{d\phi}{dx} \right) + \alpha(x)\phi}{\rho(x)c(x)\phi} = -\lambda \]
where $\lambda$ is an arbitrary constant (i.e., independent of both $x$ and $t$). We therefore want to find all possible values of $\lambda$ that satisfy the BVP
\[ \frac{d}{dx}\left( K_0 \frac{d\phi}{dx} \right) + (\alpha + \lambda\rho c)\phi = 0 \]
with BCs $\phi(0) = \phi(L) = 0$. This is an eigenvalue problem.
We will focus on the eigenvalues of a particularly useful form of operator in this regard: Sturm-Liouville operators.
Definition: A Sturm-Liouville eigenvalue problem has the form
\[ Lu + \lambda\sigma u = 0, \tag{13} \]
for $a \leq x \leq b$, where $L$ is the differential operator of the form
\[ L := \frac{d}{dx}\left( p(x)\frac{d}{dx} \right) + q(x) \]
and (13) is accompanied by linear homogeneous BCs at $x = a$ and $x = b$. The Sturm-Liouville problem is referred to as regular if the following hold:
1. $p$, $q$, $\sigma$ are real and continuous on $[a, b]$,
2. $p > 0$, $\sigma > 0$ on $[a, b]$, and
3. the BCs are not mixed (i.e., each BC applies only to $x = a$ or $x = b$).
Note: The following are examples of regular BCs:
1. $u(a) = 0$ is a Dirichlet (1st-kind) BC
2. $u'(a) = 0$ is a Neumann (2nd-kind) BC
3. $u'(a) + hu(a) = 0$ is a Robin (3rd-kind) BC
The following are examples of irregular BCs:
1. $u(a) + u(b) = 0$ is a mixed BC
2. $u(a) = u(b)$, $u'(a) = u'(b)$ are periodic BCs (for a 2nd-order eigenvalue problem)
3. $u$ bounded as $x \to a^+$ is a typical singularity BC, applied when either $p$ or $\sigma$ vanishes at $x = a$
Definition: The inner product of two functions $f(x)$ and $g(x)$ on $[a, b]$, defined with respect to a positive-definite weight function $\sigma(x) > 0$, is given by
\[ (f, g) := \int_a^b \sigma(x)f(x)g(x)\,dx. \]
Functions $f$ and $g$ are said to be orthogonal if $(f, g) = 0$. The standard $L^2$ inner product has weighting function $\sigma(x) \equiv 1$. If we wish to extend this definition to complex-valued functions, the standard (weighted) inner product with real, positive $\sigma(x)$ becomes
\[ (f, g) := \int_a^b \sigma(x)f(x)\overline{g(x)}\,dx. \]
Definition: The formal adjoint of a linear operator $L$ is the operator $L^*$ satisfying
\[ (Lu, v) = (u, L^* v) + \text{boundary terms (B.T.)} \]
If $u$ and $v$ satisfy BCs that cause the boundary terms above to vanish, then $Lu = f$ and $L^* v = g$ are said to satisfy adjoint BVPs. Operator $L$ is said to be self-adjoint if $L^* = L$.
Theorem: Consider the eigenvalue problem given by the Sturm-Liouville operator $L$ in (13), accompanied by regular BCs. Then
1. the eigenvalues are real and form an ordered, countably infinite sequence $\lambda_1 < \lambda_2 < \ldots$;
2. the corresponding eigenfunctions $\phi_n(x)$ form a complete and orthogonal system, i.e., any continuous function $f(x)$ with piecewise 1st and 2nd derivatives that satisfies the BCs can be expanded in an absolutely and uniformly convergent series
\[ f(x) = \sum_{n=1}^{\infty} c_n \phi_n(x) \quad\text{where}\quad c_n = \int_a^b \sigma(x)f(x)\phi_n(x)\,dx; \]
3. the $n$th eigenfunction $\phi_n(x)$ has $n - 1$ nodes, i.e., it divides the domain into $n$ subintervals; and
4. each eigenvalue can be computed directly from its eigenfunction using the Rayleigh quotient:
\[ \lambda_n = \frac{-p\,\phi_n\phi_n'\big|_a^b + \int_a^b \left[ p(\phi_n')^2 - q\phi_n^2 \right] dx}{\int_a^b \phi_n^2\,\sigma\,dx}. \]
Example: Show the above properties for $u'' + \lambda u = 0$, $u(0) = u(\pi) = 0$.
Example: Show how the above properties change for periodic boundary conditions, e.g., $u'' + \lambda u = 0$, $u(0) = u(\pi)$, $u'(0) = u'(\pi)$.
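For the Robin variant mentioned earlier ($u(0) = 0$, $u'(\pi) + hu(\pi) = 0$), the eigenfunctions are $\sin(sx)$ with $s = \sqrt{\lambda}$ satisfying the transcendental condition $s\cos(s\pi) + h\sin(s\pi) = 0$. A sketch finding the first eigenvalue by bisection, with the illustrative value $h = 1$:

```python
import math

h = 1.0   # illustrative Robin coefficient

def g(s):
    # BC residual for u = sin(s x):  u'(pi) + h u(pi)
    return s * math.cos(s * math.pi) + h * math.sin(s * math.pi)

# g(0.5) = 1 > 0 and g(1) = -1 < 0, so the first root lies in (0.5, 1)
lo, hi = 0.5, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
s1 = 0.5 * (lo + hi)
lam1 = s1 ** 2
assert 0.25 < lam1 < 1.0 and abs(g(s1)) < 1e-9
print("first Robin eigenvalue lambda_1 =", lam1)
```

Note that $\lambda_1 < 1$: the Robin condition lowers the first eigenvalue relative to the Dirichlet value $\lambda_1 = 1$.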
A variety of eigenvalue problems arise in mathematical physics, resulting in useful expansions in terms of Legendre, Hermite, Jacobi, Laguerre, and Chebyshev polynomials. Let us look in more detail at another singular Sturm-Liouville problem, Bessel's equation of order $n = 0$:
\[ x^2 u'' + xu' + \lambda x^2 u = 0, \quad u \text{ bounded as } x \to 0, \quad u(a) = 0. \tag{14} \]
Here the coefficients are $p(x) = x$, $q(x) = 0$, $\sigma(x) = x$, so the singularity in the coefficients occurs at $x = 0$, and if we take as our domain $0 \leq x \leq a$, this implies that we need only
require boundedness as $x \to 0$ instead of the usual BC at $x = 0$. Let us consider a Dirichlet condition at $x = a$, i.e., $u(a) = 0$. From the Rayleigh quotient we see that all eigenvalues are nonnegative, and in fact they must be positive since trivial eigenfunctions are not permitted.
Note then that if we perform a change of variables
\[ z = \sqrt{\lambda}\,x, \qquad z\frac{d}{dz} = x\frac{d}{dx}, \]
with $u(x) = v(z)$, we arrive at
\[ z^2 v'' + zv' + z^2 v = 0. \]
We must now understand the solutions to this equation.
From local analysis near $z = 0$, we know that of the two linearly independent solutions to this equation, one approaches a constant as $z \to 0$ and one has a logarithmic singularity. We denote the first solution by $J_0(z)$ and the second one by $Y_0(z)$. Note also that, for $z \gg 1$ and assuming $v$ and $v'$ remain bounded, the dominant terms give
\[ v'' + v \approx 0, \]
so that, as $z \to \infty$, we expect sinusoidal oscillations in $v$. This picture is enough information to understand the eigenfunctions we are looking for, as we let $z_1 < z_2 < z_3 < \ldots$ denote the roots (zeros) of $J_0(z)$. These zeros are tabulated in standard texts (and online): $z_1 = 2.40$, $z_2 = 5.52$, $z_3 = 8.65$, etc. Note that $z_3 - z_2 = 3.13 \approx \pi$.
Returning to (14), application of the second BC gives

u(a) = v(√λ a) = J_0(√λ a) = 0,

so that √λ a = z_n for some n ≥ 1. The eigenvalues are therefore countably infinite, with

λ_n = (z_n/a)².

The associated eigenfunctions are simply stretched copies of J_0(z), i.e.,

φ_n(x) = J_0(√λ_n x) = J_0((x/a) z_n).
Each of these eigenfunctions has n − 1 interior zeros, and is related to its eigenvalue through the Rayleigh quotient. Any piecewise-smooth function f(x) on [0, a] can be expressed as an eigenfunction expansion

f(x) = Σ_{n=1}^∞ c_n φ_n(x),

where the coefficients c_n can be determined using the orthogonality relation

(φ_m, φ_n) = ∫_0^a σ(x) φ_m(x) φ_n(x) dx = ∫_0^a x J_0(√λ_m x) J_0(√λ_n x) dx = δ_mn ∫_0^a x J_0²(√λ_n x) dx,

where the Kronecker delta δ_mn = 1 if m = n and zero otherwise. Thus,

c_n = (f, φ_n)/(φ_n, φ_n) = ∫_0^a x f(x) J_0((x/a) z_n) dx / ∫_0^a x J_0²((x/a) z_n) dx.
Example: Expand f(x) = x in terms of these eigenfunctions.
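A numerical sketch of this expansion, assuming a = 1 and using only the first three modes: J_0 is summed from its power series (adequate for moderate z) and its zeros are bracketed using the tabulated values 2.40, 5.52, 8.65. This is illustrative, not a production Bessel implementation.

```python
import math

# Fourier-Bessel expansion of f(x) = x on [0, a] with a = 1 (illustrative).

def j0(z):
    # J_0(z) = Σ_{k≥0} (-1)^k (z/2)^{2k} / (k!)²
    term, s = 1.0, 1.0
    for k in range(1, 60):
        term *= -(z / 2.0) ** 2 / k ** 2
        s += term
    return s

def bisect(g, lo, hi, tol=1e-12):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# first three zeros z_1, z_2, z_3 of J_0
zeros = [bisect(j0, 2.0, 3.0), bisect(j0, 5.0, 6.0), bisect(j0, 8.0, 9.0)]

def quad(g, lo, hi, m=4000):
    h = (hi - lo) / m
    return h * (0.5 * (g(lo) + g(hi)) + sum(g(lo + i * h) for i in range(1, m)))

a = 1.0
# orthogonality with weight σ(x) = x
ortho = quad(lambda x: x * j0(zeros[0] * x / a) * j0(zeros[1] * x / a), 0.0, a)
# expansion coefficients c_n of f(x) = x
c = [quad(lambda x: x * x * j0(zn * x / a), 0.0, a)
     / quad(lambda x: x * j0(zn * x / a) ** 2, 0.0, a) for zn in zeros]
print(zeros, ortho, c)
```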
1.6 Inhomogeneous BVPs and Green's functions
We have already seen two methods for solving inhomogeneous ODEs Ly = f: the method of variation of parameters and the method of undetermined coefficients. These methods are equally applicable to IVPs or BVPs, and are very dependent on the functional form of f(x). Many physics and engineering problems take the form of BVPs that require the response of a system to a variety of different functions f(x), where the BCs and the linear operator L are the same throughout. For these problems, an alternative form of solution is appropriate, known as the method of Green's functions.

Definition: The Dirac delta δ(x) is a unit impulse function satisfying
1. δ(x) = 0 for x ≠ 0, and
2. ∫_{−∞}^{∞} δ(x) dx = 1.
From these properties it can immediately be shown that

∫_{−∞}^{∞} δ(x − a) f(x) dx = ∫_{a−ε}^{a+ε} δ(x − a) f(x) dx = f(a)

and that

δ(x) = (d/dx) h(x),

where h(x) is the Heaviside function defined by

h(x) = 0 if x < 0,   1/2 if x = 0,   1 if x > 0.

The Dirac delta is not technically a function but can be defined in many ways as a limit of a sequence of functions, for example

δ(x) = lim_{ε→0} [h(x + ε) − h(x − ε)] / (2ε).
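The limit above can be illustrated numerically: the box function (h(x + ε) − h(x − ε))/(2ε) equals 1/(2ε) on |x| < ε, and integrating it against a test function (here f = cos, chosen for illustration) reproduces the sifting property as ε → 0:

```python
import math

# The box approximation δ_ε(x) = (h(x+ε) - h(x-ε))/(2ε) satisfies
# ∫ δ_ε(x - a) f(x) dx → f(a) as ε → 0.

def sift(f, a, eps, m=2000):
    # ∫_{a-ε}^{a+ε} f(x)/(2ε) dx by the midpoint rule
    h = 2 * eps / m
    return sum(f(a - eps + (i + 0.5) * h) for i in range(m)) * h / (2 * eps)

f = math.cos
errs = [abs(sift(f, 1.0, eps) - f(1.0)) for eps in (0.1, 0.01, 0.001)]
print(errs)  # errors shrink roughly like ε²
```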
Definition: The Green's function for the boundary value problem Lu = f with linear homogeneous BCs B_1[u] = 0 and B_2[u] = 0 is the solution to the associated BVP:

LG(x, ξ) = δ(x − ξ),   a < x, ξ < b,

with B_1[G] = B_2[G] = 0.
Theorem: Let G(x, ξ) be the Green's function for the boundary value problem Lu = f with homogeneous BCs B_1[u] = B_2[u] = 0. Then the BVP solution is given by

u(x) = ∫_a^b G(x, ξ) f(ξ) dξ.

Example: Solve y'' = f(x) with BCs y(0) = 0, y'(1) = 1.
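A numerical sketch of this example: for L = d²/dx² with the homogeneous BCs y(0) = 0, y'(1) = 0, the Green's function works out to G(x, ξ) = −min(x, ξ), and the inhomogeneous BC y'(1) = 1 can then be restored by superposing y_h = x (which satisfies y'' = 0, y(0) = 0, y'(1) = 1). The test problem f(x) = 1 and its exact solution are assumptions chosen for illustration.

```python
# Green's function solution of y'' = f, y(0) = 0, y'(1) = 1.

def G(x, xi):
    # Green's function for y'' = δ(x - ξ), y(0) = 0, y'(1) = 0
    return -min(x, xi)

def solve(f, x, m=4000):
    # y(x) = ∫_0^1 G(x, ξ) f(ξ) dξ + x (midpoint rule)
    h = 1.0 / m
    integral = sum(G(x, (i + 0.5) * h) * f((i + 0.5) * h) for i in range(m)) * h
    return integral + x

# test problem f(x) = 1, whose exact solution is y(x) = x²/2
y_half = solve(lambda xi: 1.0, 0.5)
print(y_half)  # ≈ 0.125
```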
If the BVP for u involves inhomogeneous BCs, it is more useful to consider the adjoint Green's function.

Definition: Let u satisfy Lu = f with boundary conditions B_1[u] = c_1, B_2[u] = c_2. The adjoint Green's function H(x; ξ) satisfies

L*H(x; ξ) = δ(x − ξ)

with boundary conditions B*_1[H] = B*_2[H] = 0, where L* is the formal adjoint of L and B*_1 and B*_2 are the adjoint boundary operators, chosen to force any unknown boundary terms in (1.5) to vanish.
Theorem: Let H be the adjoint Green's function to the BVP given above. Then the solution of the BVP is given by

u(ξ) = (f, H(·; ξ)) := ∫_a^b f(x) H(x; ξ) dx + boundary terms.

Proof: Note that

(f, H(·; ξ)) = (Lu, H(·; ξ)) = (u, L*H(·; ξ)) + B.T. = ∫_a^b u(x) δ(x − ξ) dx + B.T. = u(ξ) + B.T.
Theorem: The Green's function G(x, ξ) and adjoint Green's function H(x, ξ) satisfy

H(x, ξ) = G(ξ, x).

Proof: Note that

(G(·; ξ), L*H(·; ξ')) = (G(·; ξ), δ(· − ξ')) = G(ξ', ξ)
= (LG(·; ξ), H(·; ξ')) = (δ(· − ξ), H(·; ξ')) = H(ξ, ξ'),

where the boundary terms have all dropped out due to the homogeneous forward and adjoint conditions on G and H.

This means that the original Green's function can still be used to find the solution to Lu = f with inhomogeneous boundary conditions, by noting that

u(ξ) = (f, H(·; ξ)) = ∫_a^b f(x) G(ξ; x) dx + B.T. (15)
Theorem: If the boundary value problem Lu = f with boundary conditions B_1[u] = c_1 and B_2[u] = c_2 is self-adjoint, including both L and the BCs, then H(x; ξ) = G(x; ξ), i.e.,

u(ξ) = ∫_a^b f(x) G(x; ξ) dx + B.T. (16)

Example: y'' + y = x, y(0) = 1, y'(1) = 0.
1.7 Phase plane and nonlinear ODEs
cf. Strogatz, Ch. 6
The solution methods discussed above are for linear ODEs, for which we can guarantee the existence and uniqueness of solutions (for IVPs, at least) under fairly broad conditions. We now briefly consider techniques for understanding the solutions of nonlinear ODEs.

We start by considering the trajectories traced by solutions of 2nd-order IVPs on the phase plane, plotting curves of y = ẋ versus x that are parametrized by time t.

Example: Consider mẍ + cẋ + kx = 0 with ICs x(0) = 1 and ẋ(0) = 0. Letting u = (x, y)^T := (x, ẋ)^T we obtain the 1st-order system

du/dt = Au := [0, 1; −k/m, −c/m] u

with u(0) = (1, 0)^T. The behaviour of solutions will depend on the eigenvalues and eigenvectors of A:

Au_j = λ_j u_j   ⟹   du_j/dt = λ_j u_j   ⟹   u_j(t) = u_j(0) e^{λ_j t}.
Recall that the eigenvalues of A are

λ_{1,2} = −c/(2m) ± √( (c/(2m))² − k/m ) = −c/(2m) ± √( (c/(2m))² − ω_0² ), (17)

where ω_0² = k/m, from which we can immediately see the categories of behaviour we saw earlier:
1. undamped: c = 0. All solutions revolve clockwise around the origin x = y = 0.
2. overdamped: c/2m > ω_0. All solutions decay to the origin without oscillations.
3. underdamped: c/2m < ω_0. All solutions spiral (clockwise) into the origin.
4. critically damped: c/2m = ω_0. Repeated roots. Initial algebraic growth in one direction but ultimate decay to the origin.
More generally, as we saw earlier, we can express any nth-order ODE as an n-dimensional, 1st-order ODE

dx/dt = F(x, t). (18)

We would like to understand how the state x(t) evolves in time t, i.e., its trajectory in the phase plane (of which the generalization to several dimensions is often called phase space).

Definition: ODE (18) is referred to as autonomous if ∂F/∂t ≡ 0. Otherwise, it is referred to as nonautonomous. A planar autonomous system can be written

ẋ = f(x, y),   ẏ = g(x, y). (19)

Theorem: Trajectories of an autonomous system of ODEs never cross.
Proof: Suppose two trajectories cross at (x_0, y_0) at times t_1 and t_2, respectively. Then letting τ_1 = t_1 − t and τ_2 = t_2 − t defines the same IVP in backward times τ_1 and τ_2 with IC (x_0, y_0). Thus the two trajectories give different solutions for the same IVP, which must have a unique solution — a contradiction. Note that if the system is not autonomous, the backward ODEs are not identical and can therefore have different solutions.
Definition: A fixed point or equilibrium point (x_0, y_0) of (19) satisfies

f(x_0, y_0) = 0,   g(x_0, y_0) = 0. (20)

The linearization of (19) about the fixed point (x_0, y_0) is the linear system

d/dt (ξ, η)^T = A (ξ, η)^T := [f_x(x_0, y_0), f_y(x_0, y_0); g_x(x_0, y_0), g_y(x_0, y_0)] (ξ, η)^T, (21)

where A is the Jacobian of the velocity field, i.e., of the transformation from (x, y) to (ẋ, ẏ). The linearization (21) is obtained by letting

x = x_0 + ξ,   y = y_0 + η

and expanding f and g in Taylor series, keeping only 1st-order terms in ξ and η. For small values of ξ and η, i.e., near the fixed point, the system behaves nearly linearly and is therefore dictated by the eigenvalues (or, for a 2-d system, the trace and determinant) of A. Recall that λ_1 λ_2 = det A and λ_1 + λ_2 = tr A, so that

λ_{1,2} = (1/2) tr A ± √( ((1/2) tr A)² − det A ). (22)
Generic cases:
1. λ_1 > λ_2 > 0 (tr A > 0, (tr A)² > 4 det A). Unstable node.
2. 0 > λ_1 > λ_2 (tr A < 0, (tr A)² > 4 det A). Stable node.
3. λ_1 > 0 > λ_2 (det A < 0). Saddle.
4. λ_1 = λ̄_2 complex, Re(λ_1) > 0 (tr A > 0, (tr A)² < 4 det A). Unstable spiral.
5. λ_1 = λ̄_2 complex, Re(λ_1) < 0 (tr A < 0, (tr A)² < 4 det A). Stable spiral.
Degenerate cases:
1. λ_1 = λ̄_2, Re(λ_1) = 0 (tr A = 0, (tr A)² < 4 det A). Center.
2. λ_1 = λ_2 < 0. Degenerate node or star node, depending on the geometric multiplicity of λ (i.e., how many eigenvectors are associated with λ).

Example: Find and classify the fixed points for the damped pendulum equation

θ̈ + cθ̇ + sin θ = 0.
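A sketch of the classification for this example, writing the pendulum as θ̇ = y, ẏ = −sin θ − cy, so that the Jacobian at a fixed point (θ_0, 0) is A = [0, 1; −cos θ_0, −c] with tr A = −c and det A = cos θ_0 (the damping value c = 0.5 is an illustrative assumption):

```python
import math

# Trace/determinant classification of the fixed points of
# θ'' + cθ' + sin θ = 0, with Jacobian A = [[0, 1], [-cos θ0, -c]].

def classify(theta0, c):
    tr = -c
    det = math.cos(theta0)          # det A = cos θ0
    disc = tr ** 2 - 4 * det        # (tr A)² - 4 det A
    if det < 0:
        return "saddle"
    if disc < 0:
        return "center" if tr == 0 else ("stable spiral" if tr < 0 else "unstable spiral")
    return "stable node" if tr < 0 else "unstable node"

print(classify(0.0, 0.5), classify(math.pi, 0.5))
```

The downward equilibrium θ = 0 is a stable spiral for light damping (0 < c < 2) and the inverted equilibrium θ = π is always a saddle.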
1.7.1 Hamiltonian systems
The linearization gives a local picture of what happens near the fixed points. Global information requires the existence of additional structure such as conserved quantities or integrals of motion. An example of this is ODEs for which a conserved quantity known as the Hamiltonian can be formulated.

Definition: Autonomous system (19) is said to be Hamiltonian if there exists a function H(x, y) (called the Hamiltonian) such that

f(x, y) = ∂H/∂y,   g(x, y) = −∂H/∂x. (23)

Such a function can be found if

∂f/∂x + ∂g/∂y = 0.

If (19) is Hamiltonian, then on every trajectory (x(t), y(t)) we have

dH/dt = (∂H/∂x)ẋ + (∂H/∂y)ẏ = −gf + fg = 0.

Trajectories of (19) are then level curves of H(x, y).

Definition: A separatrix is a trajectory that separates qualitatively different regions in phase space. Two common types of separatrix are:
1. homoclinic orbits connecting a fixed point to itself, and
2. heteroclinic orbits connecting two different fixed points.

Example: Show that the undamped (nonlinear) pendulum equation is Hamiltonian and find the separatrices. Show what happens when damping is added.
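A numerical sketch for the undamped case: with θ̇ = y, ẏ = −sin θ we have ∂f/∂θ + ∂g/∂y = 0, and H(θ, y) = y²/2 − cos θ is conserved. An RK4 integration (step size and initial condition are illustrative choices) keeps H nearly constant along a trajectory inside the homoclinic loop:

```python
import math

# The undamped pendulum θ' = y, y' = -sin θ is Hamiltonian with
# H(θ, y) = y²/2 - cos θ; check that H is conserved under RK4.

def rhs(th, y):
    return y, -math.sin(th)

def rk4_step(th, y, dt):
    k1 = rhs(th, y)
    k2 = rhs(th + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1])
    k3 = rhs(th + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1])
    k4 = rhs(th + dt * k3[0], y + dt * k3[1])
    th += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
    y += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return th, y

def H(th, y):
    return 0.5 * y * y - math.cos(th)

th, y = 2.0, 0.0               # start inside the homoclinic loop (H < 1)
H0, dt = H(th, y), 1e-3
drift = 0.0
for _ in range(20000):         # integrate to t = 20
    th, y = rk4_step(th, y, dt)
    drift = max(drift, abs(H(th, y) - H0))
print(drift)
```

With damping added, the same computation would show H decreasing monotonically along trajectories.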
2 Partial differential equations
Most physically realistic models require the study of quantities that depend on several variables (e.g., space and time). A partial differential equation (PDE) expresses a relationship between the quantity of interest and its instantaneous rates of change with respect to the variables upon which it depends, as specified by a model for its behavior. PDEs can obviously manifest more complex behavior than ODEs, but the two are intimately related in various ways:
• many (but not all) solution methods for PDEs involve breaking the problem into a set of ODEs that one can solve using the methods we saw earlier;
• a PDE can often be regarded as an "infinite-dimensional ODE" in a similar sense to how we equated an nth-order ODE in one dimension to a first-order ODE in n dimensions.

2.1 Characteristics and classification
We begin by solving a simple PDE using the ODEs satisfied along special trajectories known as characteristics (or characteristic curves) of the PDE.
Example: ∂u/∂t + c ∂u/∂x = 0 for x ∈ ℝ, t > 0, with initial condition u(x, 0) = f(x).

Example: ∂u/∂t + 3t² ∂u/∂x = 2tu with u(x, 0) = f(x).

Definition: The method of characteristics solves the ODEs that result from considering the evolution of u along characteristic curves (x_c(t), y_c(t)). Parametrization of the Cauchy data at t = 0 gives u(s, t) in characteristic coordinates, which must then be inverted, if possible, to find u(x, y).

Example: x ∂u/∂x + y ∂u/∂y = (x + y)u with u = 1 on x = 1, 1 < y < 2.
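The second example above can be checked directly: along dx/dt = 3t² the PDE reduces to du/dt = 2tu, giving u(x, t) = f(x − t³) e^{t²}. The sketch below verifies the PDE residual numerically for the assumed initial profile f(x) = exp(−x²):

```python
import math

# Characteristics solution of u_t + 3t² u_x = 2tu: u(x,t) = f(x - t³) e^{t²}.

def f(x):
    return math.exp(-x * x)

def u(x, t):
    return f(x - t ** 3) * math.exp(t * t)

def residual(x, t, h=1e-5):
    # central-difference check of u_t + 3t² u_x - 2tu
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_x = (u(x + h, t) - u(x - h, t)) / (2 * h)
    return u_t + 3 * t * t * u_x - 2 * t * u(x, t)

res = max(abs(residual(x, t)) for x in (-1.0, 0.0, 0.5) for t in (0.3, 0.7, 1.0))
print(res)
```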
More generally, we can consider

a(x, y) ∂u/∂x + b(x, y) ∂u/∂y = c(x, y, u)

with Cauchy data specified along a curve Γ in the xy-plane. Parametrizing characteristics with t, we have

du/dt = (∂u/∂x) ẋ + (∂u/∂y) ẏ,

giving the ODE system

d/dt (x, y, u)^T = (a(x, y, u), b(x, y, u), c(x, y, u))^T.

Picard's theorem guarantees that if a, b, c are Lipschitz, then this system has a unique solution locally (i.e., near a given point). If the curve Γ on which Cauchy data are given is parametrized by s, then for values of (x, y) near Γ, the method of characteristics provides a solution u(s, t) along characteristics x(s, t), y(s, t).
Definition: The Jacobian (or Jacobian determinant) of a transformation x = x(s, t), y = y(s, t) is given by

J = det [x_s, x_t; y_s, y_t]. (24)

Theorem: The inverse function theorem states that a continuously differentiable transformation x = x(s, t), y = y(s, t) is invertible near P ∈ ℝ² if the Jacobian is nonzero at P.

Along characteristics satisfying ẋ = a and ẏ = b, we must therefore have that

det [x_s, a; y_s, b] ≠ 0. (25)

One way this determinant can vanish is for the characteristics to be tangential to the curve Γ. When this occurs, the partial derivatives of u are no longer uniquely specified and the Cauchy problem is no longer well-posed.
Consider now the 2nd-order PDE

a ∂²u/∂x² + 2b ∂²u/∂x∂y + c ∂²u/∂y² = f(x, y, u, ∂u/∂x, ∂u/∂y). (26)

The Cauchy data for this problem are given by specifying on a curve Γ ⊂ ℝ² that x = x_0(s), y = y_0(s), u = u_0(s) and ∂u/∂n = v_0(s), where Γ is parametrized by s_1 ≤ s ≤ s_2 and ∂u/∂n is the normal derivative to the curve Γ. The second-order PDE (26) with Cauchy data as prescribed has characteristics defined by

a ẏ² − 2b ẋẏ + c ẋ² = 0,

where (x(t), y(t)) is a characteristic parametrized by t. The discriminant Δ = b² − ac determines how many characteristics the PDE has, which determines its classification. A PDE is classified as:
1. hyperbolic if it has two real characteristics. This occurs when b² > ac, e.g., u_xx − u_yy = 0.
2. parabolic if it has a single degenerate characteristic. This occurs when b² = ac, e.g., u_y − u_xx = 0.
3. elliptic if it has no real characteristics. This occurs when b² < ac, e.g., u_xx + u_yy = 0.
It is helpful to bear in mind as we explore the solutions of different PDEs that information propagates along characteristics. Thus, hyperbolic equations often exhibit wave-like behavior, where the solution at (x, t_1) depends on information at an earlier time t_0 < t_1 defined on the set of points bounded by the two characteristics passing through (x, t_1). Parabolic equations, with a single degenerate real characteristic, retain some aspect of information traveling in a given direction, but the information spreads, or diffuses, with infinite speed as it propagates. Finally, solutions of elliptic equations, with no real characteristics, do not exhibit wave-like behavior at all but rather form surfaces that are inherently global in that they are determined largely by boundary conditions.
2.2 The wave equation and d'Alembert's solution
The use of characteristics as a method to solve PDEs is typically restricted to hyperbolic equations such as the wave equation,

∂²u/∂t² − v² ∂²u/∂x² = 0 with x ∈ ℝ, t > 0. (27)

The characteristics are defined by

dx/dt = ±v,

and one can formally express the PDE in characteristic coordinates given by the solutions to these equations, x_±(t), respectively satisfying x_+(0) = ξ and x_−(0) = η, i.e.,

x = vt + ξ,   x = −vt + η. (28)

Inverting, we have ξ = x − vt and η = x + vt, which we use to obtain

∂²u/∂ξ∂η = 0. (29)

We can integrate this directly to obtain

u = F(η) + G(ξ) = F(x + vt) + G(x − vt). (30)

Applying the Cauchy data

u(x, 0) = φ(x), (31)
∂u/∂t(x, 0) = ψ(x), (32)

gives d'Alembert's solution,

u(x, t) = (1/2)[φ(x + vt) + φ(x − vt)] + (1/2v) ∫_{x−vt}^{x+vt} ψ(s) ds. (33)

Example: Consider d'Alembert's solution with ICs u(x, 0) = h(1 − |x|), ∂u/∂t(x, 0) = 0. Consider the solution when these ICs are swapped. Consider d'Alembert on the half-plane x ≥ 0, t ≥ 0 with a Neumann BC at x = 0.
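A sketch of formula (33) with smooth data φ(x) = exp(−x²), ψ = 0 and v = 1 (illustrative choices), checking the PDE residual u_tt − v²u_xx by finite differences:

```python
import math

# d'Alembert's solution (33) with φ(x) = exp(-x²), ψ = 0, v = 1.

v = 1.0

def phi(x):
    return math.exp(-x * x)

def u(x, t):
    # ψ = 0, so the integral term in (33) drops out
    return 0.5 * (phi(x + v * t) + phi(x - v * t))

def residual(x, t, h=1e-4):
    u_tt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h ** 2
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2
    return u_tt - v * v * u_xx

res = max(abs(residual(x, t)) for x in (-0.5, 0.0, 1.0) for t in (0.2, 0.8))
print(res)
```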
2.3 The heat equation
The simplest parabolic PDE is the diffusion, or heat, equation. This models the evolution of heat, for instance along a one-dimensional rod:

cρ ∂T/∂t = ∂/∂x (K_0 ∂T/∂x) + Q for 0 < x < L, t > 0, (34)

where ρ(x) is the density, c(x) is the heat capacity, K_0(x) is the thermal conductivity, and Q(x, t) is a source term, such as external heating, applied throughout the rod. This PDE requires initial and boundary conditions, for example,

T(0, t) = 0,   T(L, t) = 0,   T(x, 0) = f(x). (35)

Note that this PDE is linear and (when Q = 0) homogeneous, so that any two solutions can be added to form a third solution, by the principle of superposition. This motivates the approach we will be using for the next few classes.
2.4 Separation of variables
Definition: The method of separation of variables is a technique for solving linear differential equations that looks for solutions of separable form, e.g., u(x, t) = φ(x)h(t), in order to identify a family of solutions that provides a complete set of basis functions that can be used to represent all relevant initial conditions. This typically involves identifying the eigenfunctions of a convenient linear operator, often of Sturm-Liouville form.

Example: Solve the above Dirichlet problem for square-integrable f(x).

Definition: Many of these problems require the eigenvalues and eigenfunctions of the operator L := −d²/dx² (or, in higher dimensions, the Laplacian operator). On the finite domain [0, L], these eigenfunctions are referred to as Fourier modes, and the resulting series for f(x) a Fourier series. Depending on the boundary conditions, one might obtain a Fourier sine series or a Fourier cosine series.

Example: Find the Fourier modes for u'' + λu = 0 with Dirichlet, Neumann, and periodic boundary conditions. Use them to solve the heat equation in each case.
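A sketch of the Dirichlet case: u_t = k u_xx on (0, π) with u(0, t) = u(π, t) = 0 is solved by u(x, t) = Σ b_n e^{−kn²t} sin(nx), with b_n the Fourier sine coefficients of the initial condition. The initial profile f(x) = x(π − x) and k = 1 are illustrative assumptions.

```python
import math

# Fourier sine series solution of u_t = k u_xx, u(0,t) = u(π,t) = 0.

k = 1.0
N = 40

def f(x):
    return x * (math.pi - x)

def quad(g, a, b, m=2000):
    h = (b - a) / m
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, m)))

# b_n = (2/π) ∫_0^π f(x) sin(nx) dx
b = [2.0 / math.pi * quad(lambda x: f(x) * math.sin(n * x), 0.0, math.pi)
     for n in range(1, N + 1)]

def u(x, t):
    return sum(bn * math.exp(-k * n * n * t) * math.sin(n * x)
               for n, bn in enumerate(b, start=1))

err0 = abs(u(1.0, 0.0) - f(1.0))   # series reproduces the IC
print(err0, u(1.0, 2.0))           # the solution decays in time
```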
2.5 Inhomogeneities and eigenfunction expansions
Inhomogeneities in the heat equation can often be handled by subtracting off the steady-state solution and expanding everything that remains in the appropriate eigenfunctions.

Example: Solve

∂u/∂t = ∂²u/∂x² + e^{−t} sin(3x)

with u(0, t) = 0, u(π, t) = 1, u(x, 0) = f(x).

A more general and elegant approach, however, is to derive ODEs for the Fourier coefficients using their definitions as projections of the solution u(x, t). This is referred to as a Galerkin method. In the example above with Dirichlet BCs, for instance, the appropriate eigenfunction expansion for u would be

u(x, t) = Σ_{n=1}^∞ a_n(t) sin(nx).

Rather than inserting this directly into the PDE as we did for the method of separation of variables, we now recall that the coefficients are defined by

a_n(t) = (2/π) ∫_0^π u(x, t) sin(nx) dx.

Differentiating with respect to t,

ȧ_n(t) = (2/π) ∫_0^π u_t sin(nx) dx.

Substituting for u_t using the heat equation and integrating by parts gives ODEs for the coefficients a_n(t), whose initial conditions are simply given by the Fourier sine series of f(x), i.e.,

a_n(0) = (2/π) ∫_0^π f(x) sin(nx) dx.
2.6 Laplace's equation
In higher dimensions, the heat equation becomes

∂u/∂t = ∇²u,

and one is often interested in knowing what happens to the solution as t → ∞, i.e., does it approach a steady state and, if so, what is it? The answer to this question involves solving the most commonly encountered elliptic PDE, Laplace's equation. Note that in steady state, the solution is entirely dictated by boundary conditions; initial conditions do not play a role. We will consider solutions to Laplace's equation for a few different geometries.

Example: Consider the heat equation ∂u/∂t = k∇²u on the rectangle 0 < x < L, 0 < y < H with initial condition u(x, y, 0) = f(x, y) and boundary conditions u(0, y, t) = u(L, y, t) = u(x, 0, t) = 0 and u(x, H, t) = g(x). Identify the BVP satisfied by the steady state and solve it.

Note that, in the above problem, we chose the eigenfunctions corresponding to the variable (x, in this case) satisfying homogeneous boundary conditions. More complicated boundary data can be handled by appealing to the principle of superposition, as shown by the example below.

Example: Solve ∇²u = 0 on the same rectangular box as above, but with BCs u(x, 0) = f_1(x), u(x, H) = f_2(x), u(0, y) = g_1(y), u(L, y) = g_2(y).
2.6.1 Laplace's equation in cylindrical coordinates
In some cases, the boundary conditions are imposed by geometry rather than by physical considerations. Consider Laplace's equation on a circular disk of radius a, where the only obvious boundary to consider is that imposed at r = a. It is obviously convenient in this geometry to use polar coordinates r = √(x² + y²) and θ = tan⁻¹(y/x) to look for a solution u(r, θ). The obvious BC is imposed at r = a, but since this is a second-order PDE in two dimensions, we expect the equivalent of four BCs, two on r and two on θ, to have a well-posed boundary value problem. It will turn out that the second condition on r is simply boundedness of u as r → 0, as we saw for Bessel's equation, while the natural BCs on θ are that u must be periodic in θ, i.e., u(r, θ + 2π) = u(r, θ) for all r and θ. The example below makes these ideas more concrete.

Example: Find the solution u(r, θ) to Laplace's equation on a circular disk, with BC u(a, θ) = f(θ).

Note from the solution that

u|_{r=0} = (1/2π) ∫_{−π}^{π} f(θ) dθ,

i.e., the solution at the center of the disk is equal to the average of the boundary data on the circle. This averaging principle applies to all circles inside domains on which u satisfies Laplace's equation, precluding the existence of local extrema of u interior to the domain. It also implies uniqueness of solutions to Laplace's equation (or Poisson's equation, the inhomogeneous version of Laplace's equation) with a Dirichlet boundary condition, as you will show in your assignment.
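The averaging principle can be illustrated numerically. The harmonic function u = x³ − 3xy² (the real part of (x + iy)³, chosen for illustration) is averaged over a circle about an arbitrary point; the average matches the center value to roundoff:

```python
import math

# Mean-value property of the harmonic function u(x,y) = x³ - 3xy².

def u(x, y):
    return x ** 3 - 3 * x * y ** 2

def circle_average(x0, y0, rho, m=1000):
    # equally spaced samples on the circle of radius ρ about (x0, y0)
    return sum(u(x0 + rho * math.cos(2 * math.pi * i / m),
                 y0 + rho * math.sin(2 * math.pi * i / m)) for i in range(m)) / m

err = abs(circle_average(0.3, -0.2, 0.5) - u(0.3, -0.2))
print(err)
```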
Note that the eigenfunctions are defined by the homogeneous boundary conditions, i.e.,

u(r, θ) = φ(r)ψ(θ)

gives the eigenvalue problem

ψ''(θ) + λψ = 0

with BCs ψ(−π) = ψ(π) and ψ'(−π) = ψ'(π), with eigenvalues λ_n = n² and eigenfunctions ψ_0 = 1, ψ_{n1} = cos(nθ) and ψ_{n2} = sin(nθ). Unlike regular Sturm-Liouville eigenvalue problems, here we have two linearly independent eigenfunctions for each nonzero eigenvalue λ_n.
Definition: Solutions to Laplace's equation satisfy the averaging principle, i.e., at any point (r, θ), the value of u at that point is equal to the average value of u on any circle centered at (r, θ). In particular, this implies that u can have no interior maxima or minima, i.e., that all extrema of u must occur on the boundary. It also implies uniqueness of solutions to Laplace's (and Poisson's) equation with Dirichlet boundary conditions.

Example: Find the solution u(r, θ, z) to Laplace's equation on a cylinder, with BCs u(a, θ, z) = 0, u(r, θ, 0) = f(r, θ), u(r, θ, H) = 0.
In this example, it is convenient to identify 2-dimensional eigenfunctions, with a separation of variables according to

u(r, θ, z) = φ(r, θ)h(z),

giving an eigenvalue problem of

∇₂²φ + λφ = 0

with homogeneous Dirichlet BC φ(a, θ) = 0 (in addition to the usual natural BCs of boundedness as r → 0 and periodicity in θ). For convenience, we have written the 2-dimensional Laplacian as ∇₂², so that ∇² = ∇₂² + ∂²/∂z². This allows us to write the formal solution immediately and save finding the eigenfunctions for later, i.e.,

u(r, θ, z) = Σ_n c_n φ_n(r, θ) sinh(√λ_n (H − z)).
Now let us turn back to the eigenvalue problem. Recall some properties of the divergence and Laplacian operators over a domain D ⊂ ℝ²:

∬_D (u∇₂²v − v∇₂²u) dA = ∮_{∂D} (u ∂v/∂n − v ∂u/∂n) ds,

∬_D u∇₂²u dA = ∮_{∂D} u ∂u/∂n ds − ∬_D |∇u|² dA,

where the outward normal derivative on the boundary of domain D is defined as

∂u/∂n = ∇u · n̂,

with n̂ the unit outward normal to ∂D. The multi-dimensional equivalent of the Rayleigh quotient relating an eigenvalue to its eigenfunction therefore states

λ_n = ( −∮_{∂D} φ_n ∂φ_n/∂n ds + ∬_D |∇φ_n|² dA ) / ∬_D φ_n² dA,

from which we can make a priori statements regarding the eigenvalues corresponding to particular boundary conditions. For example, homogeneous Dirichlet or Neumann conditions on φ would cause the first term in the numerator to disappear, allowing us to conclude immediately that all eigenvalues are nonnegative. In this case, we also have an orthogonality condition as we did for Sturm-Liouville eigenvalue problems, with inner product

(u, v) := ∬_D uv dA.

Recall that, for polar coordinates, dA = r dr dθ, so the weight function for Bessel's equation correctly appears in the inner product.
Returning to the example, we need only consider nonnegative eigenvalues, so let λ = μ² with μ ≥ 0. The given boundary condition is homogeneous Dirichlet data on the circle of radius a, and the natural boundary conditions are boundedness as r → 0 and periodicity in θ. Some analysis shows that the eigenvalues are most conveniently indexed by two indices, n = 0, 1, 2, . . . and m = 1, 2, . . ., with

λ_nm = (z_nm/a)²,

where z_nm is the m-th zero of the n-th Bessel function of the first kind, J_n(z). The eigenfunctions are

φ_0m = J_0(√λ_0m r)

and

φ_nm1 = J_n(√λ_nm r) cos(nθ),   φ_nm2 = J_n(√λ_nm r) sin(nθ).
The solution to Laplace's equation on a cylinder with homogeneous BCs everywhere but on the base is thus

u(r, θ, z) = Σ_{m=1}^∞ c_0m J_0(√λ_0m r) sinh(√λ_0m (H − z))
  + Σ_{n=1}^∞ Σ_{m=1}^∞ J_n(√λ_nm r) (c_nm cos nθ + d_nm sin nθ) sinh(√λ_nm (H − z)).
Applying the condition at z = 0 and using orthogonality of the eigenfunctions (and noting that each coefficient picks up a factor sinh(√λ H) from evaluating the expansion at z = 0), we have

c_0m sinh(√λ_0m H) = (φ_0m, f)/‖φ_0m‖²
  = ∫_{−π}^{π} ∫_0^a J_0(√λ_0m r) f(r, θ) r dr dθ / ∫_{−π}^{π} ∫_0^a J_0²(√λ_0m r) r dr dθ
  = ∫_{−π}^{π} ∫_0^a J_0(√λ_0m r) f(r, θ) r dr dθ / [2π ∫_0^a J_0²(√λ_0m r) r dr],

c_nm sinh(√λ_nm H) = (φ_nm1, f)/‖φ_nm1‖²
  = ∫_{−π}^{π} ∫_0^a J_n(√λ_nm r) cos(nθ) f(r, θ) r dr dθ / ∫_{−π}^{π} ∫_0^a J_n²(√λ_nm r) cos²(nθ) r dr dθ
  = ∫_{−π}^{π} ∫_0^a J_n(√λ_nm r) cos(nθ) f(r, θ) r dr dθ / [π ∫_0^a J_n²(√λ_nm r) r dr],

d_nm sinh(√λ_nm H) = (φ_nm2, f)/‖φ_nm2‖²
  = ∫_{−π}^{π} ∫_0^a J_n(√λ_nm r) sin(nθ) f(r, θ) r dr dθ / [π ∫_0^a J_n²(√λ_nm r) r dr].

Simplifications of the integrals involving Bessel functions can often be made using properties found in reference texts such as Abramowitz & Stegun or Gradshteyn & Ryzhik, but we will forgo such simplifications for the purpose of these notes.
2.7 Vibrating membranes (wave equation in higher dimension)
The two-dimensional eigenvalue problem defined above prepares us for a return to the wave equation, now over two-dimensional membranes. As we will see, the musical notes that such a membrane produces are completely defined by the geometry and the boundary conditions imposed at the membrane's boundary.

Example: Solve the wave equation,

∂²u/∂t² = c²∇²u,

on the rectangular domain 0 ≤ x ≤ L, 0 ≤ y ≤ H with homogeneous Dirichlet BCs and initial conditions u(x, y, 0) = f(x, y), ∂u/∂t(x, y, 0) = g(x, y).

Example: Find the natural frequencies of a snare drum (i.e., the wave equation on a circular domain with homogeneous Dirichlet BCs).
2.8 Transform methods
The method of separation of variables is predicated on the fact that geometry and boundary conditions identify a suitable set of eigenfunctions in which we can expand the solution. If the domain is infinite or semi-infinite, then we no longer expect a countably infinite set of functions that form a complete set. Rather, we expect a family of functions that depend continuously on a parameter, as illustrated by the following sections.

2.8.1 The Fourier transform
WARNING: In this section, we adopt the physics convention of assuming limits of ±∞ when no limits appear on an integral. Thus,

∫ f(x) dx := ∫_{−∞}^{∞} f(x) dx.

Example: Consider the 1-d heat equation with Dirichlet BCs on domain (−L, L) and let L → ∞. Use IC u(x, 0) = f(x). Note that we can define

c(ω_n) = c_n = (1/2L) ∫_{−L}^{L} f(x) e^{−iω_n x} dx,   where ω_n = nπ/L.
Definition: We say f(x) is absolutely integrable if ∫ |f(x)| dx < ∞.

Definition: We say f(x) is piecewise continuous if on any finite interval [a, b], f has at most a finite number of discontinuities, i.e., points x such that

lim_{y→x} f(y) ≠ f(x).

Definition: Suppose f is absolutely integrable and piecewise continuous. Then the Fourier transform of f is given by

F(ω) := ℱ[f] = ∫ f(x) e^{−iωx} dx

and satisfies
1. |F(ω)| < ∞ for all ω ∈ ℝ,
2. F(ω) continuous on ℝ,
3. F(ω) → 0 as ω → ±∞.

The inverse Fourier transform is then given by

ℱ⁻¹[F] = (1/2π) ∫ F(ω) e^{iωx} dω,

where ℱ⁻¹[F] = f(x) where f is continuous, and (1/2)(lim_{y→x⁻} f(y) + lim_{y→x⁺} f(y)) where f has a discontinuity.
Note: The Fourier transform maps derivatives to multipliers. Suppose f → 0 as x → ±∞. Then by integrating by parts, we have that

ℱ[f'] = ∫ f'(x) e^{−iωx} dx = iω ∫ f(x) e^{−iωx} dx = iωF(ω).

Moreover,

ℱ[f^(n)] = (iω)ⁿ F(ω).

Thus, with its power to map derivatives to multipliers, the Fourier transform can change PDEs to ODEs, and ODEs to algebraic equations.
Example: Solve the heat equation on the line, i.e., ∂u/∂t = k ∂²u/∂x², with u(x, 0) = f(x).

Definition: The convolution of functions f(x) and g(x) is

(f ∗ g)(x) = ∫ f(x − ξ) g(ξ) dξ.

Note: The Fourier transform maps convolutions to products, i.e., if F(ω) := ℱ[f] and G(ω) := ℱ[g], then

ℱ[f ∗ g] = F(ω)G(ω).

Thus, we can immediately write the solution to the heat equation for t > 0 in terms of a convolution between the initial condition f(x) and the heat kernel

h(x, t) = ℱ⁻¹[e^{−kω²t}] = (1/√(4πkt)) e^{−x²/4kt}.
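A numerical sketch of the heat-kernel solution: for a Gaussian initial condition with variance σ² (the values k = 1, σ² = 0.5 are illustrative assumptions), the convolution with h(x, t) is again a Gaussian, with variance σ² + 2kt:

```python
import math

# u(x,t) = (f * h(·,t))(x) for a Gaussian initial condition; the result
# is a Gaussian whose variance grows as σ² + 2kt.

k, sigma2 = 1.0, 0.5

def f(x):
    return math.exp(-x * x / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

def heat_kernel(x, t):
    return math.exp(-x * x / (4 * k * t)) / math.sqrt(4 * math.pi * k * t)

def u(x, t, lo=-15.0, hi=15.0, m=3000):
    # convolution by quadrature on a truncated domain
    h = (hi - lo) / m
    return h * sum(f(lo + i * h) * heat_kernel(x - lo - i * h, t)
                   for i in range(m + 1))

t = 0.7
var = sigma2 + 2 * k * t
exact = math.exp(-1.0 / (2 * var)) / math.sqrt(2 * math.pi * var)
err = abs(u(1.0, t) - exact)
print(err)
```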
As mentioned, Fourier transforms are also useful for solving PDEs on infinite domains.

Example: Solve Laplace's equation on the upper half-plane −∞ < x < ∞, y ≥ 0, with u bounded and u(x, 0) = f(x).
2.9 The Laplace transform
A related technique that is more suited to semi-infinite domains is the Laplace transform. We jump straight to its definition.

Definition: A function f(t) is said to be of exponential order if |f(t)| ≤ Me^{bt} for some constants M and b, for all t ≥ 0.

Definition: Suppose f(t) is of exponential order. Then the Laplace transform of f(t) is

F(s) := ℒ[f] := ∫_0^∞ f(t) e^{−st} dt.

It satisfies the following properties:
1. |F(s)| ≤ M/(s − b) for s ∈ ℝ, i.e., F(s) exists if s > b,
2. F(s) → 0 as s → ∞,
3. ℒ[f'] = sF(s) − f(0),
4. ℒ[f^(n)] = sⁿF(s) − s^{n−1}f(0) − s^{n−2}f'(0) − · · · − f^{(n−1)}(0),
5. ℒ[tf] = −F'(s).
Note: Laplace transforms have a similar convolution theorem to Fourier transforms, but with the convolution defined a little differently:

(f ∗ g)(t) := ∫_0^t f(τ) g(t − τ) dτ.

Then, if F(s) = ℒ[f] and G(s) = ℒ[g], we have that

ℒ[f ∗ g] = F(s)G(s).

Whereas for Fourier transforms the inverse transform is entirely similar to the forward transform, the inverse Laplace transform generally requires performing a contour integral in the complex plane, which is beyond the scope of this course. We will simply identify some useful inverse transforms as a lookup table to use in conjunction with the convolution theorem. Assume for the following that ℒ[f] = F(s).

ℒ[e^{at} f(t)] = F(s − a)
ℒ[f(t − b) h(t − b)] = e^{−sb} F(s)   (b ≥ 0)
ℒ[e^{at}] = 1/(s − a)   (s > a)
ℒ[δ(t − t_0)] = e^{−st_0}   (t_0 > 0)
ℒ[tⁿ] = n!/s^{n+1}   (n = 1, 2, 3, . . .)
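Two entries of the table can be spot-checked by direct quadrature of the defining integral, truncated at a finite upper limit (T = 60 is an illustrative choice that makes the truncation error negligible for s = 2):

```python
import math

# Numerical check of L[e^{at}] = 1/(s-a) and L[t^n] = n!/s^{n+1}
# by quadrature of ∫_0^∞ f(t) e^{-st} dt, truncated at T.

def laplace(f, s, T=60.0, m=60000):
    h = T / m
    return h * (0.5 * (f(0.0) + f(T) * math.exp(-s * T))
                + sum(f(i * h) * math.exp(-s * i * h) for i in range(1, m)))

s = 2.0
err1 = abs(laplace(lambda t: math.exp(0.5 * t), s) - 1.0 / (s - 0.5))
err2 = abs(laplace(lambda t: t ** 3, s) - math.factorial(3) / s ** 4)
print(err1, err2)
```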
Example: Use Laplace transforms to solve the signalling problem, i.e., the wave equation

∂²u/∂t² = c² ∂²u/∂x² on x > 0, t > 0,

with BC u(0, t) = f(t) and ICs u(x, 0) = ∂u/∂t(x, 0) = 0. Now add an inhomogeneity (i.e., a forcing term) to find the dynamics of a string falling under its own weight:

∂²u/∂t² = c² ∂²u/∂x² − g on x > 0, t > 0,

with BCs u(0, t) = lim_{x→∞} ∂u/∂x = 0 and ICs u(x, 0) = ∂u/∂t(x, 0) = 0.

In engineering applications, the Laplace transform is most useful for characterizing the impact on a system of different forcing terms turning on at different times. Consider the following ODE example:

Example: Find the solution to

y'' − 7y' + 6y = e^t + δ(t − 2) + δ(t − 4),   y(0) = 0, y'(0) = 0.

Compare to the solution using Green's functions.
2.10 Green's functions for PDEs
One of the most convenient features of solving inhomogeneous differential equations with transform techniques is the ability to express the answer as a convolution. This is reminiscent of solving inhomogeneous ODEs using Green's functions, and for good reason, since we can apply the same method to the solution of inhomogeneous PDEs.

Suppose, for example, we wish to solve Poisson's equation on some bounded domain with Dirichlet BCs, i.e.,

∇²u = f(x) in D,

with Dirichlet boundary data given on ∂D. It would be highly desirable to be able to express the solution as a simple expression involving the boundary data.

Definition: The Dirac delta in 2d (i.e., for x, ξ ∈ ℝ²) is the function δ(x) satisfying

δ(x − ξ) = 0 for x ≠ ξ,   ∬_{ℝ²} f(x) δ(x − ξ) dA = f(ξ)

for all continuous functions f. Note that in rectilinear coordinates in 2d, δ(x − ξ) = δ(x − x_0)δ(y − y_0).
Definition: The Green's function for Poisson's equation (i.e., for the Laplacian operator) in 2-d is the function G(x, ξ), with x, ξ ∈ ℝ², satisfying

∇²G = δ(x − ξ) in D

and satisfying BC G(x, ξ) = 0 on the boundary ∂D.

Theorem: The boundary value problem given by

∇²u(x) = f(x) in D

with u = h(x) on the boundary ∂D can be expressed in the form

u(x) = ∬_D f(ξ_0) G(x, ξ_0) dA_{ξ_0} + ∮_{∂D} h(ξ_0) ∇_{ξ_0}G(x, ξ_0) · n̂ ds_{ξ_0}.

Proof: Recall Green's formula in 2d:

∬_D (u∇²G − G∇²u) dA = ∮_{∂D} (u∇G − G∇u) · n̂ ds.

Example: Solve Poisson's equation on the rectangle, i.e.,

∇²u = f(x, y) on 0 < x < L, 0 < y < H,

first with homogeneous Dirichlet boundary data, then with inhomogeneous Dirichlet boundary data. Use eigenfunction expansions either in 1d or 2d to find the Green's function.
2.10.1 Free-space Green's functions and the method of images
We have thus far considered Green's functions designed (i.e., solved) for specific partial differential operators with specific boundary conditions on specific geometries. A widely used alternative to the above approach is to identify free-space Green's functions that are defined over all space (e.g., ℝ³) with simple decay conditions as ‖x‖ → ∞. Boundary conditions on bounded geometric regions are then specified by strategically positioning image points in the region outside the domain of interest.

Definition: The free-space Green's function for the Laplacian operator in ℝⁿ satisfies

∇²G = δ(x − ξ)

with no boundary condition.

Example: Consider n = 2. Find the free-space Green's function for the Laplacian operator on the plane. Start by finding G(x, 0) and using the divergence theorem to solve for the constant.
Example: Consider n = 3. Find the free-space Green's function for the Laplacian in 3-dimensional space. (Follow the same procedure as above.)

Theorem: Let u satisfy Poisson's equation in ℝⁿ, i.e.,

∇²u = f(x),

and let G(x, ξ) be the corresponding free-space Green's function. Then

u(x) = ∬ f(ξ) G(x, ξ) dA_ξ if n = 2 and u(x) = ∭ f(ξ) G(x, ξ) dV_ξ if n = 3.

Interestingly, the real power of the free-space Green's function lies in its ability to implement boundary conditions through image points.

Example: Consider Poisson's equation on the semi-infinite domain y > 0 with BC u(x, 0) = h(x). (pp. 320-321 in Haberman)

Example: Consider Poisson's equation on the rectangular box 0 < x < L, 0 < y < H with Dirichlet BCs. Set up the image points necessary to find the Green's function (but don't solve).
Example: Find the Green's function for the Laplacian with Dirichlet BCs on a circular disk of radius a. Show that

G(x, ξ) = (1/4π) ln[ a²(r² + r_0² − 2rr_0 cos θ) / (r²r_0² + a⁴ − 2rr_0a² cos θ) ],

where r = |x|, r_0 = |ξ|, and θ is the angle between x and ξ.
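Two consequences of this formula can be checked numerically (with a = 1 and a source location chosen for illustration): G vanishes on the boundary r = a, and away from the source point it is harmonic, which we test with the polar-coordinate Laplacian G_rr + G_r/r + G_θθ/r² by finite differences:

```python
import math

# Checks on the disk Green's function with a = 1.

a = 1.0

def G(r, r0, th):
    num = a * a * (r * r + r0 * r0 - 2 * r * r0 * math.cos(th))
    den = r * r * r0 * r0 + a ** 4 - 2 * r * r0 * a * a * math.cos(th)
    return math.log(num / den) / (4 * math.pi)

# boundary values (r = a) for a source at r0 = 0.4, several angles
bdry = max(abs(G(a, 0.4, th)) for th in (0.3, 1.0, 2.5))

# polar Laplacian at a point away from the source, by finite differences
def lap(r, r0, th, h=1e-4):
    G_rr = (G(r + h, r0, th) - 2 * G(r, r0, th) + G(r - h, r0, th)) / h ** 2
    G_r = (G(r + h, r0, th) - G(r - h, r0, th)) / (2 * h)
    G_tt = (G(r, r0, th + h) - 2 * G(r, r0, th) + G(r, r0, th - h)) / h ** 2
    return G_rr + G_r / r + G_tt / r ** 2

res = abs(lap(0.7, 0.4, 1.5))
print(bdry, res)
```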