
Partial differential equations

Swapneel Mahajan

www.math.iitb.ac.in/swapneel/207

Power series

For a real number x_0 and a sequence (a_n) of real numbers, consider the expression

  ∑_{n=0}^{∞} a_n (x − x_0)^n = a_0 + a_1 (x − x_0) + a_2 (x − x_0)^2 + ⋯.

This is called a power series in the real variable x. The number a_n is called the n-th coefficient of the series and x_0 is called its center.

For instance,

  ∑_{n=0}^{∞} (1/(n+1)) (x − 1)^n = 1 + (1/2)(x − 1) + (1/3)(x − 1)^2 + ⋯

is a power series in x centered at 1 and with n-th coefficient equal to 1/(n+1).

What can we do with a power series? Note that by substituting a value for x in a power series, we get a series of real numbers. We say that a power series converges (absolutely) at x_1 if substituting x_1 for x yields an (absolutely) convergent series. A power series always converges absolutely at its center x_0.

We would like to know the set of values of x where a power series converges.
Lemma. Suppose the power series converges for some real number x_1 ≠ x_0. Let |x_1 − x_0| = r. Then the power series is (absolutely) convergent for all x such that |x − x_0| < r, that is, in the open interval of radius r centered at x_0.

The radius of convergence of the power series is the largest number R, including ∞, such that the power series converges in the open interval {|x − x_0| < R}. The latter is called the interval of convergence of the power series.

A power series determines a function in its interval of convergence. Denoting this function by f, we may write

  f(x) = ∑_{n=0}^{∞} a_n (x − x_0)^n,   |x − x_0| < R.

Let us assume R > 0. It turns out that f is infinitely differentiable on the interval of convergence, and the successive derivatives of f can be computed by differentiating the power series on the right termwise. From here one can deduce that

  a_n = f^(n)(x_0) / n!.

Thus, two power series both centered at x_0 take the same values in some open interval around x_0 iff all the corresponding coefficients of the two power series are equal.
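The coefficient formula can be checked numerically. Here is a minimal sketch (not part of the slides; names are mine), using f(x) = e^x, for which every derivative at x_0 equals e^{x_0}, so a_n = e^{x_0}/n!:

```python
import math

def exp_partial_sum(x, x0=0.0, terms=20):
    """Evaluate the power series sum of a_n (x - x0)^n for f = exp,
    where a_n = f^(n)(x0)/n! = e^{x0}/n! by the coefficient formula."""
    return sum(math.exp(x0) / math.factorial(n) * (x - x0) ** n
               for n in range(terms))
```

With 20 terms the partial sum already agrees with exp to high accuracy well inside the interval of convergence (which here is all of R).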

Real analytic functions

Let R denote the set of real numbers. A subset U of R is said to be open if for each x_0 ∈ U, there is an r > 0 such that the open interval |x − x_0| < r is contained in U.

Let f : U → R be a real-valued function on an open set U. We say f is real analytic at a point x_0 ∈ U if

  f(x) = ∑_{n=0}^{∞} a_n (x − x_0)^n

holds in some open interval around x_0. We say f is real analytic on U if it is real analytic at all points of U. In general, we can always consider the set of all points in the domain where f is real analytic. This is called the domain of analyticity of f.

Just like continuity or differentiability, real analyticity is a local property.

Suppose f is real analytic on U. Then f is infinitely differentiable on U, and its power series representation around x_0 is necessarily the Taylor series of f around x_0, that is, the coefficients a_n are given by

  a_n = f^(n)(x_0) / n!.

If f and g are real analytic on U, then so are cf, f + g, fg, and f/g (provided g ≠ 0 on U).

A power series is real analytic in its interval of convergence. Thus, if a function is real analytic at a point x_0, then it is real analytic in some open interval around x_0. Thus, the domain of analyticity of a function is an open set.

Solving a linear ODE by the power series method

Consider the initial value problem

  p(x) y″ + q(x) y′ + r(x) y = g(x),   y(a) = y_0,  y′(a) = y_1,

where p, q, r and g are real analytic functions in an interval containing the point a, and y_0 and y_1 are fixed.

Theorem. Let r > 0 be less than the minimum of the radii of convergence of the functions p, q, r and g expanded in power series around a. Assume that p(x) ≠ 0 for all x ∈ (a − r, a + r). Then there is a unique solution to the initial value problem in the interval (a − r, a + r), and moreover it can be represented by a power series

  y(x) = ∑_{n=0}^{∞} a_n (x − a)^n

whose radius of convergence is at least r.

There is an algorithm to compute the power series representation of y: plug a general power series into the ODE, take derivatives of the power series formally, and equate the coefficients of (x − a)^n for each n to obtain a recursive definition of the coefficients a_n. The a_n's are uniquely determined and we obtain the desired power series solution.

In most of our examples, a = 0 and the functions p, q, r and g will be polynomials of small degree.

This result generalizes to any n-th order linear ODE with the first n − 1 derivatives at a specified. For the first order linear ODE, the initial condition is simply the value at a. Suppose the first order ODE is

  p(x) y′ + q(x) y = 0.

Then, by separation of variables, the general solution is

  c e^{−∫ q(x)/p(x) dx},

provided p(x) ≠ 0; otherwise the integral may not be well-defined. This is an indicator of why such a condition is required in the hypothesis of the theorem.
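As a small illustration of the algorithm (a sketch; the function names are mine, not from the slides), consider the first order IVP y′ = y, y(0) = 1. Plugging y = ∑ a_n x^n into the ODE and equating coefficients of x^n gives (n+1) a_{n+1} = a_n, so a_n = 1/n! and the solution is e^x:

```python
import math

def series_solution_coeffs(n_terms):
    """Coefficients for y' = y, y(0) = 1 by the power series method:
    equating coefficients of x^n gives (n+1) a_{n+1} = a_n, a_0 = 1."""
    a = [1.0]
    for n in range(n_terms - 1):
        a.append(a[n] / (n + 1))
    return a

def evaluate(coeffs, x):
    """Evaluate the truncated power series at x."""
    return sum(c * x ** n for n, c in enumerate(coeffs))
```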

Legendre equation

Consider the second order linear ODE

  (1 − x^2) y″ − 2x y′ + p(p+1) y = 0.

This is known as the Legendre equation. Here p denotes a fixed real number. The equation can also be written in the form

  ((1 − x^2) y′)′ + p(p+1) y = 0.

This ODE is defined for all real numbers.

General solution

The coefficients (1 − x^2), −2x and p(p+1) are polynomials (and in particular real analytic). However 1 − x^2 = 0 for x = ±1. These are the singular points of the ODE. Our theorem guarantees a power series solution around x = 0 in the interval (−1, 1). It is given by

  y(x) = a_0 ( 1 − (p(p+1)/2!) x^2 + (p(p−2)(p+1)(p+3)/4!) x^4 − ⋯ )
       + a_1 ( x − ((p−1)(p+2)/3!) x^3 + ((p−1)(p−3)(p+2)(p+4)/5!) x^5 − ⋯ ),

where a_0 = y(0) and a_1 = y′(0). It is called the Legendre function. The first series is an even function while the second series is an odd function.

Legendre polynomials

Now suppose the parameter p in the Legendre equation is a nonnegative integer. Then one of the two series in the general solution terminates, and is a polynomial. Thus we obtain a sequence of polynomials P_m(x) (up to multiplication by a constant) for each nonnegative integer m. These are called the Legendre polynomials. The m-th Legendre polynomial P_m(x) solves the Legendre equation for p = m and for all x, not just for x ∈ (−1, 1). It is traditional to normalize the constants so that P_m(1) = 1.

The first few values are as follows.

  n   P_n(x)
  0   1
  1   x
  2   (3x^2 − 1)/2
  3   (5x^3 − 3x)/2
  4   (35x^4 − 30x^2 + 3)/8
  5   (63x^5 − 70x^3 + 15x)/8

Their graphs in the interval (−1, 1) are given below.
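The table can be reproduced numerically. A sketch (assuming the standard Bonnet three-term recursion for Legendre polynomials, which is not derived in these slides):

```python
def legendre(n, x):
    """P_n(x) via the Bonnet recursion
    (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}, with P_0 = 1, P_1 = x."""
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p
```

In particular, the normalization P_n(1) = 1 falls out of the recursion.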

Second solution to the Legendre equation when p is an integer

Now let us consider the second independent solution. It is an honest power series. For p = 0, it is

  x + x^3/3 + x^5/5 + ⋯ = (1/2) log((1 + x)/(1 − x)),

while for p = 1, it is

  1 − x^2/1 − x^4/3 − x^6/5 − ⋯ = 1 − (x/2) log((1 + x)/(1 − x)).

These nonpolynomial solutions always have a log factor of the above kind and hence are unbounded at both +1 and −1. (Since they are either even or odd, the behavior at x = 1 is reflected at x = −1.) They are called the Legendre functions of the second kind (with the Legendre polynomials being those of the first kind).

The vector space of polynomials

The space of polynomials in one variable is a vector space with basis {1, x, x^2, …}. Further, the space of polynomials carries an inner product defined by

  ⟨f, g⟩ := ∫_{−1}^{1} f(x) g(x) dx.

Note that we are integrating only between −1 and 1. This ensures that the integral is always finite. The norm of a polynomial is defined by

  ‖f‖ := ( ∫_{−1}^{1} f(x) f(x) dx )^{1/2}.

Note this simple consequence of integration by parts: for differentiable functions f and g, if (fg)(b) = (fg)(a), then

  ∫_a^b f′ g dx = − ∫_a^b f g′ dx.

(This process transfers the derivative from g to f.)

Orthogonality of the Legendre polynomials

Since P_m(x) is a polynomial of degree m, it follows that {P_0(x), P_1(x), P_2(x), …} is a basis of the vector space of polynomials. Further,

  ∫_{−1}^{1} P_m(x) P_n(x) dx = 0 if m ≠ n,  and  2/(2n+1) if m = n.

Thus, the Legendre polynomials form an orthogonal basis. The norms are not 1, hence the basis is not orthonormal.

Orthogonality can be established using the technique of derivative-transfer.
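The orthogonality relations can also be checked numerically. A sketch using midpoint-rule quadrature (function names are mine):

```python
def legendre(n, x):
    """P_n(x) via the Bonnet recursion (a standard identity)."""
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def inner(m, n, steps=4000):
    """Approximate the integral of P_m P_n over [-1, 1] by the midpoint rule."""
    h = 2.0 / steps
    return sum(legendre(m, -1 + (i + 0.5) * h) *
               legendre(n, -1 + (i + 0.5) * h) for i in range(steps)) * h
```

For m ≠ n the integral vanishes, and on the diagonal it matches 2/(2n+1).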

Rodrigues formula

Consider the sequence of polynomials

  q_n(x) := (d/dx)^n (x^2 − 1)^n,   n ≥ 0.

Observe that q_n(x) has degree n. The first few polynomials are 1, 2x, 4(3x^2 − 1), …. [Note the similarity with the Legendre polynomials.]

From the product rule for the derivative, one can deduce that q_n(1) = 2^n n!. Further,

  ∫_{−1}^{1} q_m(x) q_n(x) dx = 0 if m ≠ n,  and  2 (2^n n!)^2 / (2n+1) if m = n.

This can be established using the technique of derivative-transfer.

Lemma. In any inner product space, suppose {u_1, u_2, …, u_n} and {v_1, v_2, …, v_n} are two orthogonal systems of vectors such that for each 1 ≤ k ≤ n, the span of u_1, …, u_k equals the span of v_1, …, v_k. Then u_i = c_i v_i for each i for certain nonzero scalars c_i.

By convention P_n(1) = 1 while q_n(1) = 2^n n!, so we deduce

  P_n(x) = (1 / (2^n n!)) (d/dx)^n (x^2 − 1)^n.

This is known as the Rodrigues formula.
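The Rodrigues formula can be checked by direct polynomial manipulation. A sketch on coefficient arrays (lowest degree first; helper names are mine):

```python
import math

def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_diff(a):
    """Differentiate a polynomial given as a coefficient list."""
    return [i * a[i] for i in range(1, len(a))]

def rodrigues(n):
    """Coefficients of P_n via the Rodrigues formula:
    P_n = (1 / (2^n n!)) d^n/dx^n (x^2 - 1)^n."""
    p = [1.0]
    for _ in range(n):
        p = poly_mul(p, [-1.0, 0.0, 1.0])   # multiply by (x^2 - 1)
    for _ in range(n):
        p = poly_diff(p)
    c = 2 ** n * math.factorial(n)
    return [coef / c for coef in p]
```

The outputs match the table: rodrigues(2) gives (3x^2 − 1)/2 and rodrigues(3) gives (5x^3 − 3x)/2.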

Fourier-Legendre series

A function f(x) on [−1, 1] is square-integrable if

  ∫_{−1}^{1} f(x) f(x) dx < ∞.

For instance, polynomials, continuous functions, and piecewise continuous functions are square-integrable. The set of all square-integrable functions on [−1, 1] is a vector space. The inner product on polynomials extends to square-integrable functions. The Legendre polynomials no longer form a basis for this larger space, but they form a maximal orthogonal set. This allows us to expand any square-integrable function f(x) on [−1, 1] in a series of Legendre polynomials

  f(x) ~ ∑_{n≥0} c_n P_n(x),

where

  c_n = ((2n+1)/2) ∫_{−1}^{1} f(x) P_n(x) dx.

This is called the Fourier-Legendre series (or simply the Legendre series) of f(x). This series converges in norm to f(x), that is,

  ‖ f(x) − ∑_{n=0}^{m} c_n P_n(x) ‖ → 0 as m → ∞.

Pointwise convergence is more delicate. There are two issues here: does the series converge at x? If yes, then does it converge to f(x)? A useful result in this direction is the Legendre expansion theorem:

Theorem. If both f(x) and f′(x) have at most a finite number of jump discontinuities in the interval [−1, 1], then the Legendre series converges to

  (1/2) (f(x^−) + f(x^+))

for −1 < x < 1, to f(−1^+) at x = −1, and to f(1^−) at x = 1. In particular, the series converges to f(x) at every point of continuity.
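As a sketch of the coefficient formula (names mine), expand f(x) = x^2, whose Legendre series is exactly (1/3) P_0 + (2/3) P_2:

```python
def legendre(n, x):
    """P_n(x) via the Bonnet recursion (a standard identity)."""
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def legendre_coeff(f, n, steps=4000):
    """c_n = (2n+1)/2 * integral of f P_n over [-1, 1], midpoint rule."""
    h = 2.0 / steps
    s = sum(f(-1 + (i + 0.5) * h) * legendre(n, -1 + (i + 0.5) * h)
            for i in range(steps))
    return (2 * n + 1) / 2 * s * h
```

Since x^2 is a polynomial of degree 2, all coefficients beyond c_2 vanish and the series is finite.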

Ordinary and singular points

Consider the second-order linear ODE

  p(x) y″ + q(x) y′ + r(x) y = 0,

where p, q, and r are real analytic with no common zeroes.

A point x_0 is an ordinary point if p(x_0) ≠ 0, and a singular point if p(x_0) = 0. A singular point x_0 is regular if the ODE can be written in the form

  y″ + (b(x)/(x − x_0)) y′ + (c(x)/(x − x_0)^2) y = 0,

where b(x) and c(x) are real analytic around x_0. A singular point which is not regular is called irregular.

Cauchy-Euler equation

Consider the Cauchy-Euler equation

  x^2 y″ + b_0 x y′ + c_0 y = 0,

where b_0 and c_0 are constants with c_0 ≠ 0. Note that x = 0 is a regular singular point; the rest are all ordinary points.

Assume x > 0. Substitute y = x^r, and compute the roots r_1 and r_2 of the indicial equation

  r^2 + (b_0 − 1) r + c_0 = 0.

- If the roots are real and unequal, then x^{r_1} and x^{r_2} are two independent solutions.
- If the roots are complex (written as a ± ib), then x^a cos(b log x) and x^a sin(b log x) are two independent solutions.
- If the roots are real and equal, then x^{r_1} and (log x) x^{r_1} are two independent solutions.
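A minimal sketch for the indicial equation (the helper name is mine). For instance, x^2 y″ + 2x y′ − 6y = 0 has b_0 = 2, c_0 = −6, indicial equation r^2 + r − 6 = 0, and hence solutions x^2 and x^{−3}:

```python
import cmath

def cauchy_euler_roots(b0, c0):
    """Roots of the indicial equation r^2 + (b0 - 1) r + c0 = 0.
    Complex roots signal the x^a cos(b log x), x^a sin(b log x) case."""
    disc = cmath.sqrt((b0 - 1) ** 2 - 4 * c0)
    return ((1 - b0 + disc) / 2, (1 - b0 - disc) / 2)
```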

Fuchs-Frobenius theory

Consider the ODE

  x^2 y″ + x b(x) y′ + c(x) y = 0,

where

  b(x) = ∑_{n≥0} b_n x^n  and  c(x) = ∑_{n≥0} c_n x^n

are real analytic in a neighborhood of the origin. Restrict to x > 0. Assume a solution of the form

  y(x) = x^r ∑_{n≥0} a_n x^n,   a_0 ≠ 0,

with r fixed. Substitute in the ODE and equate the coefficient of x^r to obtain the indicial equation:

  r(r−1) + b_0 r + c_0 = 0,  or  r^2 + (b_0 − 1) r + c_0 = 0.

Let us denote the quadratic by I(r). More generally, equating the coefficient of x^{r+n}, we obtain the recursion

  I(r+n) a_n + ∑_{j=0}^{n−1} a_j ((r+j) b_{n−j} + c_{n−j}) = 0,   n ≥ 1.

Let r_1 and r_2 be the roots of I(r) = 0 with r_1 ≥ r_2.

Theorem. The ODE has as a solution for x > 0

  y_1(x) = x^{r_1} (1 + ∑_{n≥1} a_n x^n),

where the a_n's solve the recursion for r = r_1 and a_0 = 1. If r_1 − r_2 is not an integer, then a second independent solution for x > 0 is given by

  y_2(x) = x^{r_2} (1 + ∑_{n≥1} A_n x^n),

where the A_n's solve the recursion for r = r_2 and a_0 = 1.

The power series in these solutions converge in the interval in which both b(x) and c(x) converge. The term fractional power series solutions is used for these solutions.

Theorem. If the indicial equation has repeated roots, then there is a second solution of the form

  y_2(x) = y_1(x) log x + x^{r_1} ∑_{n≥1} A_n x^n,

with y_1(x) as before. The power series converges in the interval in which both b(x) and c(x) converge.

Treating r as a variable, one can uniquely solve

  I(r+n) a_n + ∑_{j=0}^{n−1} a_j ((r+j) b_{n−j} + c_{n−j}) = 0,   n ≥ 1,

starting with a_0 = 1. Since the a_n depend on r, let us write a_n(r). Now consider

  φ(r, x) := x^r ( 1 + ∑_{n≥1} a_n(r) x^n ).

Note that the first solution is φ(r_1, x). The second solution is obtained by taking the partial derivative of φ(r, x) wrt r, and then putting r = r_1. This also shows that A_n = a_n′(r_1).

Theorem. If r_1 − r_2 is a positive integer, say N, then there is a second solution of the form

  y_2(x) = K y_1(x) log x + x^{r_2} (1 + ∑_{n≥1} A_n x^n),

with

  A_n = (d/dr) ((r − r_2) a_n(r)) |_{r=r_2},   n ≥ 1,

and

  K = lim_{r→r_2} (r − r_2) a_N(r).

The power series converges in the interval in which both b(x) and c(x) converge.

It is possible that K = 0.

Gamma function

Define for all p > 0,

  Γ(p) := ∫_0^∞ t^{p−1} e^{−t} dt.

(There is a problem at p = 0 since 1/t is not integrable in an interval containing 0. The same problem persists for p < 0. For large values of p, there is no problem because e^{−t} is rapidly decreasing.)

  Γ(1) = ∫_0^∞ e^{−t} dt = 1.

For any integer n ≥ 1, using derivative transfer,

  Γ(n+1) = lim_{x→∞} ∫_0^x t^n e^{−t} dt = n lim_{x→∞} ∫_0^x t^{n−1} e^{−t} dt = n Γ(n).

This yields

  Γ(n) = (n−1)!.

Thus the gamma function extends the factorial function to all positive real numbers. The above calculation is valid for any real p > 0, so

  Γ(p+1) = p Γ(p).

We use this identity to extend the gamma function to all real numbers except 0 and the negative integers: first extend it to the interval (−1, 0), then to (−2, −1), and so on. The graph is shown below.

Though the gamma function is now defined for all real numbers (except the nonpositive integers), remember

that the integral representation is valid only for p > 0.

It is useful to rewrite

  1/Γ(p) = p/Γ(p+1).

This holds for all p if we impose the natural condition that the reciprocal of Γ evaluated at a nonpositive integer is 0.

A well-known value of the gamma function at a non-integer point is

  Γ(1/2) = ∫_0^∞ t^{−1/2} e^{−t} dt = 2 ∫_0^∞ e^{−s^2} ds = √π.

(We used the substitution t = s^2.) By translating,

  Γ(3/2) = √π/2 ≈ 0.886
  Γ(5/2) = 3√π/4 ≈ 1.329
  Γ(7/2) = 15√π/8 ≈ 3.323
  Γ(−1/2) = −2√π ≈ −3.545
  Γ(−3/2) = 4√π/3 ≈ 2.363
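These values can be checked against Python's standard-library implementation of Γ, math.gamma:

```python
import math

def gamma_table():
    """Values from the text, via math.gamma (which satisfies
    gamma(p+1) = p * gamma(p) and gamma(n) = (n-1)!)."""
    return {
        "gamma(1/2)": math.gamma(0.5),    # sqrt(pi) ~ 1.772
        "gamma(3/2)": math.gamma(1.5),    # sqrt(pi)/2 ~ 0.886
        "gamma(5/2)": math.gamma(2.5),    # 3 sqrt(pi)/4 ~ 1.329
        "gamma(7/2)": math.gamma(3.5),    # 15 sqrt(pi)/8 ~ 3.323
        "gamma(-1/2)": math.gamma(-0.5),  # -2 sqrt(pi) ~ -3.545
    }
```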

Bessel functions

Consider the second-order linear ODE

  x^2 y″ + x y′ + (x^2 − p^2) y = 0.

This is known as the Bessel equation. Here p denotes a fixed real number. We may assume p ≥ 0. There is a regular singularity at x = 0. All other points are ordinary.

We apply the Frobenius method to find the solutions. In previous notation, b(x) = 1 and c(x) = x^2 − p^2. The indicial equation is

  r^2 − p^2 = 0.

The roots are r_1 = p and r_2 = −p. The recursion is

  (r + n + p)(r + n − p) a_n + a_{n−2} = 0,   n ≥ 2,

and a_1 = 0. So all odd terms are 0. Let us solve this recursion with r as a variable. The even terms are

  a_{2n}(r) = ((−1)^n / (((r+2)^2 − p^2)((r+4)^2 − p^2) ⋯ ((r+2n)^2 − p^2))) a_0.

The fractional power series solution for the larger root r_1 = p obtained by setting a_0 = 1 and r = p is

  y_1(x) = x^p ∑_{n≥0} ((−1)^n / (((p+2)^2 − p^2) ⋯ ((p+2n)^2 − p^2))) x^{2n}
         = x^p ∑_{n≥0} ((−1)^n / (2^{2n} n! (1+p) ⋯ (n+p))) x^{2n}.

The power series converges everywhere. Multiplying by 1/(2^p Γ(1+p)), we define

  J_p(x) := (x/2)^p ∑_{n≥0} ((−1)^n / (n! Γ(n+p+1))) (x/2)^{2n},   x > 0.

This is called the Bessel function of the first kind of order p. It is a solution of the Bessel equation. Explicitly, the Bessel functions of order 0 and 1 are

  J_0(x) = 1 − x^2/2^2 + x^4/(2^2 4^2) − x^6/(2^2 4^2 6^2) + ⋯

  J_1(x) = x/2 − (1/(1!2!)) (x/2)^3 + (1/(2!3!)) (x/2)^5 − ⋯.

Both J_0(x) and J_1(x) have a damped oscillatory behavior with an infinite number of zeroes, and these zeroes occur alternately, much like the functions cos x and sin x. Further, they also satisfy similar derivative identities:

  J_0′(x) = −J_1(x)  and  [x J_1(x)]′ = x J_0(x).

These functions are real analytic, representable by a single power series centered at 0.
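The series defining J_p and the identity J_0′ = −J_1 can be checked numerically. A sketch truncating the series (function names are mine):

```python
import math

def bessel_j(p, x, terms=30):
    """J_p(x) as the truncated power series
    (x/2)^p * sum_{n} (-1)^n / (n! * gamma(n+p+1)) * (x/2)^{2n}."""
    s = 0.0
    for n in range(terms):
        s += (-1) ** n / (math.factorial(n) * math.gamma(n + p + 1)) \
             * (x / 2) ** (2 * n)
    return (x / 2) ** p * s
```

A central finite difference of J_0 then agrees with −J_1, and the first zero of J_0 shows up near x ≈ 2.4048.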

Second independent solution

Observe that r_1 − r_2 = 2p. The analysis to get the second independent solution of the Bessel equation splits into four cases.

- 2p is not an integer: we get a second fractional power series solution.
- p = 0: we get a logarithmic singularity.
- p is a positive half-integer such as 1/2, 3/2, 5/2: we get a second fractional power series solution.
- p is a positive integer: we get a logarithmic singularity.

Suppose 2p is not an integer. Solving the recursion uniquely for a_0 = 1 and r = −p, we obtain

  y_2(x) = x^{−p} ∑_{n≥0} ((−1)^n / (2^{2n} n! (1−p) ⋯ (n−p))) x^{2n}.

Normalizing by 1/(2^{−p} Γ(1−p)), define

  J_{−p}(x) := (x/2)^{−p} ∑_{n≥0} ((−1)^n / (n! Γ(n−p+1))) (x/2)^{2n},   x > 0.

This is a second solution of the Bessel equation linearly independent of J_p(x). It is clearly unbounded at x = 0, behaving like x^{−p} as x approaches 0.

The same solution works in the half-integer case because N = 2p is an odd integer, and the recursion only involves the even coefficients a_0, a_2, a_4, etc.

Remark: J_{−p}(x) makes sense for all p. However, when p is an integer, J_{−p}(x) = (−1)^p J_p(x).

Suppose p = 0. In this case,

  a_{2n}(r) = (−1)^n / ((r+2)^2 (r+4)^2 ⋯ (r+2n)^2),

and a_n(r) = 0 for n odd. The first solution is a power series with coefficients a_{2n}(0):

  y_1(x) = J_0(x) = ∑_{n≥0} ((−1)^n / (2^{2n} (n!)^2)) x^{2n},   x > 0.

Differentiating the a_{2n}(r) wrt r, we get

  a_{2n}′(r) = −2 a_{2n}(r) ( 1/(r+2) + 1/(r+4) + ⋯ + 1/(r+2n) ).

Now setting r = 0, we obtain

  a_{2n}′(0) = (−1)^{n−1} H_n / (2^{2n} (n!)^2),   H_n = 1 + 1/2 + ⋯ + 1/n.

Thus, the second solution is

  y_2(x) = J_0(x) log x − ∑_{n≥1} ((−1)^n H_n / (2^{2n} (n!)^2)) x^{2n},   x > 0.

Summary of p = 0 and p = 1/2

For p = 0, there are two independent solutions. The first function J_0(x) is a real analytic function on all of R, while the second function has a logarithmic singularity at 0.

For p = 1/2, two independent solutions are J_{1/2}(x) and J_{−1/2}(x). These can be expressed in terms of the trigonometric functions:

  J_{1/2}(x) = √(2/(πx)) sin x  and  J_{−1/2}(x) = √(2/(πx)) cos x.

Both exhibit singular behavior at 0. The first function is bounded at 0 but does not have a derivative at 0, while the second function is not even bounded.

Bessel identities

For any real number p,

  (d/dx) [x^p J_p(x)] = x^p J_{p−1}(x)
  (d/dx) [x^{−p} J_p(x)] = −x^{−p} J_{p+1}(x)
  J_{p−1}(x) + J_{p+1}(x) = (2p/x) J_p(x)
  J_{p−1}(x) − J_{p+1}(x) = 2 J_p′(x)

The first two identities can be directly established by manipulating the respective power series. These can then be used to prove the next two identities.

These can be used to give formulas for J_p(x) for half-integer values of p. For instance:

  J_{3/2}(x) = (1/x) J_{1/2}(x) − J_{−1/2}(x) = √(2/(πx)) ( sin x / x − cos x ).

Zeroes of the Bessel function

Fix p ≥ 0. Let Z^(p) denote the set of zeroes of J_p(x). The set of zeroes is a sequence increasing to infinity. Let x_1 and x_2 be successive positive zeroes of J_p(x).

- If 0 ≤ p < 1/2, then x_2 − x_1 is less than π and approaches π as x_1 → ∞.
- If p = 1/2, then x_2 − x_1 = π.
- If p > 1/2, then x_2 − x_1 is greater than π and approaches π as x_1 → ∞.

The first few zeroes are tabulated below.

  k   J_0(x)    J_1(x)    J_2(x)    J_3(x)    J_4(x)
  1    2.4048    3.8317    5.1356    6.3802    7.5883
  2    5.5201    7.0156    8.4172    9.7610   11.0647
  3    8.6537   10.1735   11.6198   13.0152   14.3725
  4   11.7915   13.3237   14.7960   16.2235   17.6160
  5   14.9309   16.4706   17.9598   19.4094   20.8269

Orthogonality

Define an inner product on square-integrable functions on [0, 1] by

  ⟨f, g⟩ := ∫_0^1 x f(x) g(x) dx.

This is similar to the previous inner product except that f(x) g(x) is now multiplied by x, and the interval of integration is from 0 to 1. The multiplying factor x is called a weight function.

Fix p ≥ 0. The set of scaled functions

  { J_p(zx) | z ∈ Z^(p) }

indexed by the zero set Z^(p) forms an orthogonal family:

Proposition. If k and ℓ are any two positive zeroes of the Bessel function J_p(x), then

  ∫_0^1 x J_p(kx) J_p(ℓx) dx = (1/2) [J_p′(k)]^2 if k = ℓ,  and  0 if k ≠ ℓ.
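A numerical sketch of the proposition (names mine), using the first two zeroes of J_0 from the table, here to more digits (2.404825558 and 5.520078110, standard values), and midpoint quadrature:

```python
import math

def bessel_j(p, x, terms=30):
    """J_p(x) via the truncated power series from the text."""
    s = 0.0
    for n in range(terms):
        s += (-1) ** n / (math.factorial(n) * math.gamma(n + p + 1)) \
             * (x / 2) ** (2 * n)
    return (x / 2) ** p * s

def weighted_inner(k, l, p=0, steps=2000):
    """Integral of x J_p(kx) J_p(lx) over [0, 1] by the midpoint rule."""
    h = 1.0 / steps
    return sum((i + 0.5) * h * bessel_j(p, k * (i + 0.5) * h)
               * bessel_j(p, l * (i + 0.5) * h) for i in range(steps)) * h
```

For p = 0 the diagonal value (1/2)[J_0′(k)]^2 equals (1/2)[J_1(k)]^2, by the identity J_0′ = −J_1.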

Fourier-Bessel series

Fix p ≥ 0. Any square-integrable function f(x) on [0, 1] can be expanded in a series of scaled Bessel functions J_p(zx):

  f(x) ~ ∑_{z ∈ Z^(p)} c_z J_p(zx),

where

  c_z = (2 / [J_p′(z)]^2) ∫_0^1 x f(x) J_p(zx) dx.

This is called the Fourier-Bessel series of f(x). The series converges to f(x) in norm. For pointwise convergence, we have the Bessel expansion theorem:

Theorem. If both f(x) and f′(x) have at most a finite number of jump discontinuities in the interval [0, 1], then the Bessel series converges to

  (1/2) (f(x^−) + f(x^+))

for 0 < x < 1.

Orthogonality of the trigonometric family

Consider the space of square-integrable functions on [−π, π]. Define an inner product by

  ⟨f, g⟩ := (1/(2π)) ∫_{−π}^{π} f(x) g(x) dx.

The norm of a function is then given by

  ‖f‖ := ( (1/(2π)) ∫_{−π}^{π} f(x) f(x) dx )^{1/2}.

Proposition. The set {1, cos nx, sin nx}_{n≥1} is a maximal orthogonal family wrt this inner product.

  ⟨1, 1⟩ = 1.
  ⟨cos mx, cos nx⟩ = 0 if m ≠ n,  and  1/2 if m = n.
  ⟨sin mx, sin nx⟩ = 0 if m ≠ n,  and  1/2 if m = n.
  ⟨sin mx, cos nx⟩ = ⟨1, cos nx⟩ = ⟨1, sin mx⟩ = 0.

Fourier series

Any square-integrable function f(x) on [−π, π] can be expanded in a series of the trigonometric functions

  f(x) ~ a_0 + ∑_{n≥1} (a_n cos nx + b_n sin nx),

where the a_n and b_n are given by

  a_0 = (1/(2π)) ∫_{−π}^{π} f(x) dx,   a_n = (1/π) ∫_{−π}^{π} f(x) cos nx dx,
  b_n = (1/π) ∫_{−π}^{π} f(x) sin nx dx,   n ≥ 1.

This is called the Fourier series of f(x), and the a_n and b_n are called the Fourier coefficients. The above formulas are sometimes called the Euler formulas. The Fourier series of f(x) converges to f(x) in norm.

Pythagoras theorem or Parseval's identity

Suppose V is a finite-dimensional inner product space and {v_1, …, v_k} is an orthogonal basis. If v = ∑_{i=1}^{k} a_i v_i, then

  ‖v‖^2 = ∑_{i=1}^{k} a_i^2 ‖v_i‖^2.

This is the Pythagoras theorem. There is an infinite-dimensional analogue which says that the square of the norm of f is the sum of the squares of the norms of its components wrt any maximal orthogonal set. Thus, we have

  ‖f‖^2 = a_0^2 + (1/2) ∑_{n≥1} (a_n^2 + b_n^2).

This is known as Parseval's identity.

Pointwise convergence

A function f : R → R is periodic (of period 2π) if

  f(x + 2π) = f(x)  for all x.

Theorem. Let f(x) be a periodic function of period 2π which is integrable on [−π, π]. Then at a point x, if the left and right derivatives exist, the Fourier series of f converges to

  (1/2) [f(x^+) + f(x^−)].

Example. Consider the function

  f(x) = 1 if 0 < x < π,  and  −1 if −π < x < 0.

The value at 0, π and −π is left unspecified. Its periodic extension is the square wave. The Fourier series of f(x) is

  (4/π) ( sin x + (sin 3x)/3 + (sin 5x)/5 + ⋯ ).

This series converges to f(x) at all points except integer multiples of π, where it converges to 0. The partial sums of the Fourier series wiggle around the square wave.

In particular, evaluating at x = π/2,

  f(π/2) = 1 = (4/π) ( 1 − 1/3 + 1/5 − 1/7 + ⋯ ).

Rewriting,

  1 − 1/3 + 1/5 − 1/7 + ⋯ = π/4.
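The convergence of this alternating series is quite slow; a quick sketch (the function name is mine):

```python
def leibniz_partial(m):
    """Partial sum 1 - 1/3 + 1/5 - ... with m terms, obtained from the
    square-wave Fourier series evaluated at x = pi/2."""
    return sum((-1) ** k / (2 * k + 1) for k in range(m))
```

The error after m terms is bounded by the first omitted term, 1/(2m+1), so about 100000 terms are needed for four or five correct digits of π/4.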

Fourier sine and cosine series

If f is an odd (even) function, then its Fourier series has only sine (cosine) terms. This allows us to do something interesting. Suppose f is defined on the interval (0, π). Then we can extend it as an odd function on (−π, π) and expand it in a Fourier sine series, or extend it as an even function on (−π, π) and expand it in a Fourier cosine series. For instance, consider the function

  f(x) = x,   0 < x < π.

The Fourier sine series of f(x) is

  2 ( sin x − (sin 2x)/2 + (sin 3x)/3 − ⋯ ),

while the Fourier cosine series of f(x) is

  π/2 − (4/π) ( (cos x)/1^2 + (cos 3x)/3^2 + ⋯ ).

The two series are equal on 0 < x < π (but different on −π < x < 0).

Fourier series for arbitrary periodic functions

One can also consider Fourier series for functions of any period, not necessarily 2π. Suppose the period is 2ℓ. Then the Fourier series is of the form

  a_0 + ∑_{n=1}^{∞} ( a_n cos(nπx/ℓ) + b_n sin(nπx/ℓ) ).

The Fourier coefficients are given by

  a_0 = (1/(2ℓ)) ∫_{−ℓ}^{ℓ} f(x) dx,   a_n = (1/ℓ) ∫_{−ℓ}^{ℓ} f(x) cos(nπx/ℓ) dx,
  b_n = (1/ℓ) ∫_{−ℓ}^{ℓ} f(x) sin(nπx/ℓ) dx,   n ≥ 1.

By scaling the independent variable, one can transform the given periodic function to a 2π-periodic function, and then apply the standard theory.

One-dimensional heat equation

We now solve the one-dimensional heat equation

  u_t = k u_xx,   0 < x < ℓ,  t > 0.

This describes the temperature evolution of a thin rod of length ℓ. The temperature at t = 0 is specified. This is the initial condition. We write it as

  u(x, 0) = u_0(x).

In addition to the initial condition, there are conditions specified at the two endpoints of the rod. These are the boundary conditions. We consider four different kinds of boundary conditions one by one. In each case, we employ the method of separation of variables: suppose u(x, t) = X(x) T(t). Substituting this in the PDE separates the variables:

  X″(x)/X(x) = T′(t)/(k T(t)) = λ (say).

Dirichlet boundary conditions

  u(0, t) = u(ℓ, t) = 0.

In other words, the endpoints of the rod are maintained at temperature 0 at all times t. (The rod is isolated from the surroundings except at the endpoints, from where heat will be lost to the surroundings.)

The solution is

  u(x, t) = ∑_{n=1}^{∞} b_n e^{−n^2 (π/ℓ)^2 k t} sin(nπx/ℓ),

where

  b_n = (2/ℓ) ∫_0^ℓ u_0(x) sin(nπx/ℓ) dx.

As t increases, the temperature of the rod rapidly approaches 0 everywhere.
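A sketch of this solution formula (names mine), computing the b_n by quadrature. For u_0(x) = sin(πx) with ℓ = k = 1, only b_1 is nonzero and the exact solution is e^{−π²t} sin(πx):

```python
import math

def heat_dirichlet(u0, x, t, k=1.0, ell=1.0, n_terms=50, steps=2000):
    """Evaluate the separation-of-variables solution
    u(x,t) = sum_n b_n exp(-n^2 (pi/ell)^2 k t) sin(n pi x / ell),
    with b_n computed by the midpoint rule."""
    h = ell / steps
    u = 0.0
    for n in range(1, n_terms + 1):
        bn = (2.0 / ell) * sum(
            u0((i + 0.5) * h) * math.sin(n * math.pi * (i + 0.5) * h / ell)
            for i in range(steps)) * h
        u += bn * math.exp(-(n * math.pi / ell) ** 2 * k * t) \
             * math.sin(n * math.pi * x / ell)
    return u
```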

Neumann boundary conditions

  u_x(0, t) = 0 = u_x(ℓ, t).

In other words, there is no heat loss at the endpoints. Thus the rod is completely isolated from the surroundings.

The solution is

  u(x, t) = a_0 + ∑_{n=1}^{∞} a_n e^{−n^2 (π/ℓ)^2 k t} cos(nπx/ℓ),

where

  a_0 = (1/ℓ) ∫_0^ℓ u_0(x) dx,   a_n = (2/ℓ) ∫_0^ℓ u_0(x) cos(nπx/ℓ) dx.

All terms except for the first one tend rapidly to zero as t → ∞. So one is left with a_0, which is the mean or average value of u_0. Physically, this means that an isolated rod will eventually assume a constant temperature, which is the mean of the initial temperature distribution.

Mixed boundary conditions

  u(0, t) = 0 = u_x(ℓ, t).

Thus, the left endpoint is maintained at temperature 0 (so there will be heat loss from that end), while there is no heat loss at the right endpoint.

The solution is

  u(x, t) = ∑_{n≥0} b_n e^{−(n+1/2)^2 (π/ℓ)^2 k t} sin((n+1/2)πx/ℓ),

where

  b_n = (2/ℓ) ∫_0^ℓ u_0(x) sin((n+1/2)πx/ℓ) dx.

For the mixed boundary conditions

  u_x(0, t) = 0 = u(ℓ, t),

the solution is similar, with a cosine series instead.

Periodic boundary conditions

  u(0, t) = u(ℓ, t),   u_x(0, t) = u_x(ℓ, t).

The solution is

  u(x, t) = a_0 + ∑_{n≥1} e^{−4n^2 (π/ℓ)^2 k t} ( a_n cos(2nπx/ℓ) + b_n sin(2nπx/ℓ) ),

where

  a_0 = (1/ℓ) ∫_0^ℓ u_0(x) dx,   a_n = (2/ℓ) ∫_0^ℓ u_0(x) cos(2nπx/ℓ) dx,

and

  b_n = (2/ℓ) ∫_0^ℓ u_0(x) sin(2nπx/ℓ) dx.

Dirichlet boundary conditions with heat source

We now solve

  u_t − k u_xx = f(x, t),   0 < x < ℓ,  t > 0,

with u(x, 0) = u_0 specified, and with Dirichlet boundary conditions. For simplicity, we assume ℓ = 1. Expand everything in a Fourier sine series over (0, 1):

  u(x, t) = ∑_{n≥1} Y_n(t) sin nπx,
  f(x, t) = ∑_{n≥1} B_n(t) sin nπx,   u_0(x) = ∑_{n≥1} b_n sin nπx,

where the B_n(t) and the b_n are known while the Y_n(t) are to be determined. Substituting, we obtain

  ∑_{n≥1} [Y_n′(t) + k n^2 π^2 Y_n(t)] sin nπx = ∑_{n≥1} B_n(t) sin nπx.

This implies that Y_n(t) solves the following IVP:

  Y_n′(t) + k n^2 π^2 Y_n(t) = B_n(t),   Y_n(0) = b_n.

One-dimensional wave equation

We now solve the one-dimensional wave equation

  u_tt − c^2 u_xx = 0,   0 < x < ℓ,  t > 0.

This describes the vibrations of a string of length ℓ. The initial position and velocity are specified. We write these initial conditions as

  u(x, 0) = u_0(x)  and  u_t(x, 0) = u_1(x).

In addition to the initial conditions, there are conditions specified at the two endpoints of the string. These are the boundary conditions.

Adopting the method of separation of variables, let u(x, t) = X(x) T(t). Substituting this in the PDE does indeed separate the variables:

  X″(x)/X(x) = T″(t)/(c^2 T(t)) = λ (say).

Dirichlet boundary conditions

  u(0, t) = u(ℓ, t) = 0.

The solution is

  u(x, t) = ∑_{n≥1} [ C_n cos(cnπt/ℓ) + D_n sin(cnπt/ℓ) ] sin(nπx/ℓ),

where

  C_n = (2/ℓ) ∫_0^ℓ u_0(x) sin(nπx/ℓ) dx

and

  D_n = (2/(cnπ)) ∫_0^ℓ u_1(x) sin(nπx/ℓ) dx.

Neumann boundary conditions

  u_x(0, t) = 0 = u_x(ℓ, t).

They describe a shaft vibrating torsionally in which the ends are held in place by frictionless bearings so that rotation at the ends is permitted but all other motion is prevented, OR a vibrating string free to slide vertically at both ends, OR longitudinal waves in an air column open at both ends.

The solution is

  u(x, t) = C_0 + D_0 t + ∑_{n≥1} [ C_n cos(cnπt/ℓ) + D_n sin(cnπt/ℓ) ] cos(nπx/ℓ),

where

  C_0 = (1/ℓ) ∫_0^ℓ u_0(x) dx,   C_n = (2/ℓ) ∫_0^ℓ u_0(x) cos(nπx/ℓ) dx,

and

  D_0 = (1/ℓ) ∫_0^ℓ u_1(x) dx,   D_n = (2/(cnπ)) ∫_0^ℓ u_1(x) cos(nπx/ℓ) dx.

Mixed boundary conditions

  u(0, t) = 0 = u_x(ℓ, t).

The solution is

  u(x, t) = ∑_{n≥0} [ C_n cos(c(n+1/2)πt/ℓ) + D_n sin(c(n+1/2)πt/ℓ) ] sin((n+1/2)πx/ℓ),

where

  C_n = (2/ℓ) ∫_0^ℓ u_0(x) sin((n+1/2)πx/ℓ) dx

and

  D_n = (2/(c(n+1/2)π)) ∫_0^ℓ u_1(x) sin((n+1/2)πx/ℓ) dx.

Periodic boundary conditions

  u(0, t) = u(ℓ, t)  and  u_x(0, t) = u_x(ℓ, t).

The solution is

  u(x, t) = C_0 + D_0 t + ∑_{n≥1} [ C_n cos(2cnπt/ℓ) + D_n sin(2cnπt/ℓ) ] [ A_n cos(2nπx/ℓ) + B_n sin(2nπx/ℓ) ],

where

  C_0 = (1/ℓ) ∫_0^ℓ u_0(x) dx,
  C_n A_n = (2/ℓ) ∫_0^ℓ u_0(x) cos(2nπx/ℓ) dx,   C_n B_n = (2/ℓ) ∫_0^ℓ u_0(x) sin(2nπx/ℓ) dx,

and also

  D_0 = (1/ℓ) ∫_0^ℓ u_1(x) dx,
  D_n A_n = (1/(cnπ)) ∫_0^ℓ u_1(x) cos(2nπx/ℓ) dx,   D_n B_n = (1/(cnπ)) ∫_0^ℓ u_1(x) sin(2nπx/ℓ) dx.

Note that A_n, B_n, C_n and D_n are not uniquely defined, but the above products are.

Nonhomogeneous case

Consider the nonhomogeneous wave equation

  u_tt − u_xx = f(x, t),   0 < x < 1,  t > 0,

with Dirichlet boundary conditions. Expand everything in a Fourier sine series on (0, 1):

  f(x, t) = ∑_{n≥1} B_n(t) sin nπx  and  u(x, t) = ∑_{n≥1} Y_n(t) sin nπx.

Then the functions Y_n(t) must satisfy

  Y_n″(t) + n^2 π^2 Y_n(t) = B_n(t),   n = 1, 2, 3, ….

Also let

  u_0(x) = ∑_{n≥1} b_n sin nπx  and  u_1(x) = ∑_{n≥1} b_n^1 sin nπx.

These lead to the initial conditions

  Y_n(0) = b_n  and  Y_n′(0) = b_n^1.

They determine the Y_n(t) uniquely.

Vibrations of a circular membrane

The two-dimensional wave equation is given by

  u_tt = c^2 (u_xx + u_yy).

For a circular membrane of radius R, the wave equation written in polar coordinates is

  u_tt = c^2 (u_rr + r^{−1} u_r + r^{−2} u_θθ).

The initial conditions are

  u(r, θ, 0) = f(r, θ)  and  u_t(r, θ, 0) = g(r, θ).

We assume Dirichlet boundary conditions

  u(R, θ, t) = 0.

Physically u represents the displacement of the point (x, y) at time t in the z-direction. These are transverse vibrations.

Radially symmetric solutions

We first find solutions which are radially symmetric, that is, independent of θ. Thus u = u(r, t) with initial and boundary conditions

  u(r, 0) = f(r),   u_t(r, 0) = g(r)  and  u(R, t) = 0.

Substituting u(r, t) = X(r) Z(t) in the simplified wave equation

  u_tt = c^2 (u_rr + r^{−1} u_r)

and separating variables, we obtain

  Z″(t)/(c^2 Z(t)) = (X″(r) + r^{−1} X′(r))/X(r) = −λ^2.

The equation in the r variable,

  r^2 X″(r) + r X′(r) + λ^2 r^2 X(r) = 0,

is the scaled Bessel equation of order 0.

The elementary solutions are

  u_n(r, t) = (A_n cos cλ_n t + B_n sin cλ_n t) J_0(λ_n r),

where λ_n R is the n-th positive zero of J_0. The final solution is given by

  u(r, t) = ∑_{n≥1} (A_n cos cλ_n t + B_n sin cλ_n t) J_0(λ_n r),

where

  A_n = (2 / (R^2 J_1^2(λ_n R))) ∫_0^R r f(r) J_0(λ_n r) dr

and

  B_n = (2 / (cλ_n R^2 J_1^2(λ_n R))) ∫_0^R r g(r) J_0(λ_n r) dr.

General solution

We solve for u(r, θ, t) = X(r) Y(θ) Z(t). The method of separation of variables now proceeds in two steps: we have a total of 3 variables and we need to separate one variable at a time. Separating θ,

  −Y″(θ)/Y(θ) = (r^2 X″(r) + r X′(r))/X(r) + r^2 Z″(t)/(c^2 Z(t)) = n^2

for n = 0, 1, 2, 3, …. Separating again,

  Z″(t)/(c^2 Z(t)) = (X″(r) + r^{−1} X′(r))/X(r) − n^2/r^2 = −λ^2.

The equation in the r variable,

  r^2 X″(r) + r X′(r) + (λ^2 r^2 − n^2) X(r) = 0,

is the scaled Bessel equation of order n.

The elementary solutions are

  (A cos nθ + B sin nθ)(C cos cλt + D sin cλt) J_n(λr),

where λR is a zero of J_n.

The Laplacian operator

The Laplacian on the line is Δ_R(u) = u″.

The Laplacian on the plane is

  Δ_{R^2}(u) = u_xx + u_yy = u_rr + r^{−1} u_r + r^{−2} u_θθ.

The Laplacian in three-dimensional space is

  Δ_{R^3}(u) = u_xx + u_yy + u_zz
             = (u_rr + r^{−1} u_r + r^{−2} u_θθ) + u_zz
             = u_rr + (2/r) u_r + (1/r^2) ( u_φφ + cot φ · u_φ + (1/sin^2 φ) u_θθ ).

These are standard coordinates, cylindrical coordinates, and spherical polar coordinates respectively.

The Laplacian on the sphere of radius R is

  Δ_{S^2}(u) = (1/R^2) ( u_φφ + cot φ · u_φ + (1/sin^2 φ) u_θθ ).

These are the spherical polar coordinates, where θ is the azimuthal angle and φ is the polar angle.

General heat and wave equation

The Laplacian can be used to define the heat equation in general:

  u_t = Δ(u),

where Δ is the Laplacian of the domain under consideration.

Similarly, the general wave equation can be written as

  u_tt = Δ(u).

Eigenfunctions for Laplacian on unit sphere

Consider the eigenvalue problem on the unit sphere:

  Δ_{S^2}(u) = λu.

Any solution u is called an eigenfunction and the corresponding λ is called its eigenvalue.

Put u(φ, θ) = X(θ) Y(φ). Separating variables,

  sin^2 φ [ Y″(φ)/Y(φ) + cot φ · Y′(φ)/Y(φ) − λ ] = −X″(θ)/X(θ).

Setting this constant to be m^2, the equation in φ is

  Y″(φ) + cot φ · Y′(φ) − ( λ + m^2/sin^2 φ ) Y(φ) = 0.

The change of variable x = cos φ yields

  (1 − x^2) y″ − 2x y′ − ( λ + m^2/(1 − x^2) ) y = 0.

This is the associated Legendre equation. Putting m = 0 recovers the Legendre equation.

The associated Legendre equation has a bounded solution iff λ = −n(n+1) for some nonnegative integer n. The bounded solution is given by

  y(x) = (1 − x^2)^{m/2} P_n^(m)(x).

Going back to the φ coordinate,

  Y(φ) = sin^m φ · P_n^(m)(cos φ).

Thus

  cos mθ · sin^m φ · P_n^(m)(cos φ),   m = 0, 1, 2, …, n,
  sin mθ · sin^m φ · P_n^(m)(cos φ),   m = 1, 2, …, n,

are eigenfunctions for the Laplacian operator on the unit sphere with eigenvalue −n(n+1). In particular, cos φ, sin θ sin φ and cos θ sin φ are eigenfunctions with eigenvalue −2. (These are nothing but z, y and x in spherical polar coordinates.)

Vibrations of a spherical membrane

The wave equation on the unit sphere is

  u_tt = Δ_{S^2} u.

Putting u(φ, θ; t) = X(θ) Y(φ) Z(t), we obtain

  Z″(t)/Z(t) = Δ_{S^2}(X(θ) Y(φ)) / (X(θ) Y(φ)) = λ.

What can λ be? λ = −n(n+1).

The elementary solutions or the pure harmonics are

  (A cos mθ + B sin mθ) sin^m φ · P_n^(m)(cos φ) ( C cos(√(n(n+1)) t) + D sin(√(n(n+1)) t) )

for nonnegative integers 0 ≤ m ≤ n and n ≠ 0.

What happens for n = 0? Z(t) = A + Bt.

Laplace equation in three space

Recall

  Δ_{R^3} = ∂^2/∂r^2 + (2/r) ∂/∂r + (1/r^2) ( ∂^2/∂φ^2 + cot φ · ∂/∂φ + (1/sin^2 φ) ∂^2/∂θ^2 ).

Consider the Laplace equation in three space:

  Δ_{R^3}(u) = 0.

Putting u(r, φ, θ) = Z(r) X(θ) Y(φ), we obtain

  (r^2 Z″(r) + 2r Z′(r)) / Z(r) = −Δ_{S^2}(X(θ) Y(φ)) / (X(θ) Y(φ)) = n(n+1)

for some nonnegative integer n. The equation in r,

  r^2 Z″(r) + 2r Z′(r) − n(n+1) Z(r) = 0,

is a Cauchy-Euler equation with solutions r^n and 1/r^{n+1}.

The elementary solutions are

  ( C r^n + D/r^{n+1} ) (A cos mθ + B sin mθ) sin^m φ · P_n^(m)(cos φ),

where 0 ≤ m ≤ n are nonnegative integers.

Laplace equation on a rectangular plate

Consider the Laplace equation u_xx + u_yy = 0 on the unit square [0, 1] × [0, 1] with Dirichlet boundary conditions

  u(x, 0) = 0 = u(x, 1)  and  u(0, y) = sin πy = u(1, y).

Let u(x, y) = X(x) Y(y), and separate variables:

  X″(x)/X(x) = −Y″(y)/Y(y) = λ.

Should λ be positive or negative? λ = n^2 π^2 and Y(y) = sin nπy. Put

  u(x, y) = ∑_{n≥1} X_n(x) sin nπy

and solve for X_n(x) using the other boundary condition.
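For this particular boundary data the sum collapses to a single term. A hedged sketch (the closed form below is my own completion of the exercise, not stated in the slides): X″ = π^2 X with X(0) = X(1) = 1 gives X(x) = cosh(π(x − 1/2))/cosh(π/2), which can be checked against a finite-difference Laplacian:

```python
import math

def u(x, y):
    """Candidate solution (an assumption completing the exercise):
    u(x, y) = cosh(pi (x - 1/2)) / cosh(pi / 2) * sin(pi y)."""
    return math.cosh(math.pi * (x - 0.5)) / math.cosh(math.pi / 2) \
           * math.sin(math.pi * y)

def laplacian(x, y, h=1e-4):
    """Five-point finite-difference approximation of u_xx + u_yy."""
    return (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h)
            - 4 * u(x, y)) / h ** 2
```

The boundary values and the vanishing Laplacian in the interior confirm the candidate.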
