Section 3
Integral Equations
Integral Operators and Linear Integral Equations
As we saw in Section 1 on operator notation, we work with functions defined in some suitable function space. For example, f(x), g(x) may live in the space of continuous real-valued functions on [a, b], i.e. C[a, b]. We also saw that it is possible to define integral as well as differential operators acting on functions. Theorem 2.8 is an example of an integral operator:
\[
u(x) = \int_a^b G(x, y) f(y) \, dy,
\]
where G(x, y) is a Green's function.
Definition 3.1: An integral operator, K, is an integral of the form
\[
\int_a^b K(x, y) f(y) \, dy,
\]
where K(x, y) is a given (real-valued) function of two variables, called the kernel.
The equation
\[
Kf = g
\]
expresses that the operator K maps a function f to a new function g on the interval [a, b], e.g.
\[
K : C[a, b] \to C[a, b], \qquad K : f \mapsto g.
\]
Theorem 3.2: The integral operator K is linear:
\[
K(\alpha f + \beta g) = \alpha K f + \beta K g.
\]
Example 1: The Laplace transform is an integral operator,
\[
\mathcal{L} f = \int_0^\infty \exp(-s x) f(x) \, dx = \bar{f}(s),
\]
with kernel K(x, s) = K(s, x) = exp(-s x). Let us recap its important properties.
Theorem 3.3:
\[
\text{(i)} \quad \mathcal{L}\left[\frac{df}{dx}\right] = s \bar{f}(s) - f(0),
\]
\[
\text{(ii)} \quad \mathcal{L}\left[\frac{d^2 f}{dx^2}\right] = s^2 \bar{f}(s) - s f(0) - f'(0),
\]
\[
\text{(iii)} \quad \mathcal{L}\left[\int_0^x f(y) \, dy\right] = \frac{1}{s} \bar{f}(s).
\]
(iv) Convolution: let
\[
f(x) * g(x) = \int_0^x f(y) g(x - y) \, dy,
\]
then
\[
\mathcal{L}\left(f(x) * g(x)\right) = \bar{f}(s) \, \bar{g}(s).
\]
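The convolution property (iv) is easy to check symbolically. A minimal sketch, assuming the sample functions f(x) = e^{-x} and g(x) = x (any transformable pair would do):

```python
import sympy as sp

x, y, s = sp.symbols('x y s', positive=True)

f = sp.exp(-x)   # sample f(x), chosen for illustration
g = x            # sample g(x), chosen for illustration

# Laplace transforms f-bar(s) and g-bar(s)
F = sp.laplace_transform(f, x, s, noconds=True)
G = sp.laplace_transform(g, x, s, noconds=True)

# Convolution (f * g)(x) = \int_0^x f(y) g(x - y) dy
conv = sp.integrate(f.subs(x, y) * g.subs(x, x - y), (y, 0, x))

# Theorem 3.3(iv): L(f * g) = f-bar(s) g-bar(s)
lhs = sp.laplace_transform(conv, x, s, noconds=True)
assert sp.simplify(lhs - F * G) == 0
```

Here L f = 1/(s+1) and L g = 1/s², and the transform of the convolution indeed factorises as their product.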
MATH34032: Green’s Functions, Integral Equations and the Calculus of Variations 2
Definition 3.5: (a) A linear Fredholm integral equation of the first kind has the form
\[
Kf = g, \qquad \int_a^b K(x, y) f(y) \, dy = g(x);
\]
(b) a linear Fredholm integral equation of the second kind has the form
\[
f - \lambda K f = g, \qquad f(x) - \lambda \int_a^b K(x, y) f(y) \, dy = g(x),
\]
where the kernel K(x, y) and the forcing (or inhomogeneous) term g(x) are known functions and f(x) is the unknown function. Also, λ (∈ ℝ or ℂ) is a parameter.
Definition 3.6: If g(x) ≡ 0 the integral equation is called homogeneous, and a value of λ for which
\[
\lambda K \varphi = \varphi
\]
possesses a non-trivial solution φ is called an eigenvalue of K corresponding to the eigenfunction φ.
Definition 3.7: Volterra integral equations of the first and second kind take the forms
(a)
\[
\int_a^x K(x, y) f(y) \, dy = g(x),
\]
(b)
\[
f(x) - \lambda \int_a^x K(x, y) f(y) \, dy = g(x),
\]
respectively.
Note: Volterra equations may be considered a special case of Fredholm equations. Suppose a ≤ y ≤ b and put
\[
K_1(x, y) =
\begin{cases}
K(x, y) & \text{for } a \le y \le x, \\
0 & \text{for } x < y \le b,
\end{cases}
\]
then
\[
\int_a^b K_1(x, y) f(y) \, dy = \left( \int_a^x + \int_x^b \right) K_1(x, y) f(y) \, dy = \int_a^x K(x, y) f(y) \, dy + 0.
\]
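Numerically, this embedding simply zeroes the kernel above the diagonal: on a grid, a Volterra kernel becomes a lower-triangular matrix. A small sketch, with the sample kernel K(x, y) = e^{xy} chosen purely for illustration:

```python
import numpy as np

a, b, n = 0.0, 1.0, 5
xs = np.linspace(a, b, n)

K = lambda x, y: np.exp(x * y)   # sample kernel (an assumption)

# K1(x, y) = K(x, y) for a <= y <= x, and 0 for x < y <= b
X, Y = np.meshgrid(xs, xs, indexing='ij')
K1 = np.where(Y <= X, K(X, Y), 0.0)

# The truncated kernel is lower triangular, mirroring the piecewise definition
assert np.allclose(K1, np.tril(K1))
```

Any quadrature rule applied to the Fredholm form with kernel K1 then automatically respects the Volterra upper limit x.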
The latter is a Fredholm integral equation of the second kind with kernel G(x, y) ρ(y) and forcing term
\[
h(x) = \int_a^b G(x, y) g(y) \, dy.
\]
The ODE and integral equation are equivalent: solving the ODE subject to the BCs is
equivalent to solving the integral equation.
Example 3: Initial value problems
Let
\[
u''(x) + a(x) u'(x) + b(x) u(x) = g(x),
\]
where u(0) = α, u'(0) = β, and a(x), b(x) and g(x) are known functions.
Change the dummy variable from x to y and then integrate with respect to y from 0 to z:
\[
\int_0^z u''(y) \, dy + \int_0^z a(y) u'(y) \, dy + \int_0^z b(y) u(y) \, dy = \int_0^z g(y) \, dy,
\]
and using integration by parts, with u = a, v' = u', on the second term on the left-hand side,
\[
\left[ u'(y) \right]_0^z + \left[ a(y) u(y) \right]_0^z - \int_0^z a'(y) u(y) \, dy + \int_0^z b(y) u(y) \, dy = \int_0^z g(y) \, dy,
\]
yields
\[
u'(z) - \beta + a(z) u(z) - a(0) \alpha - \int_0^z \left[ a'(y) - b(y) \right] u(y) \, dy = \int_0^z g(y) \, dy.
\]
Theorem 3.8:
\[
\int_0^x \int_0^z f(y) \, dy \, dz = \int_0^x (x - y) f(y) \, dy.
\]
Proof: Interchanging the order of integration over the triangular domain in the yz-plane reveals that the integral equals
\[
\int_0^x \int_0^z f(y) \, dy \, dz = \int_0^x \int_y^x f(y) \, dz \, dy = \int_0^x (x - y) f(y) \, dy.
\]
Integrating the previous equation once more with respect to z from 0 to x, and using this theorem to reduce the double integrals, we thus obtain
\[
u(x) + \int_0^x \left\{ a(y) - (x - y) \left[ a'(y) - b(y) \right] \right\} u(y) \, dy = \int_0^x (x - y) g(y) \, dy + \left[ \beta + a(0) \alpha \right] x + \alpha.
\]
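The interchange-of-order identity can be verified symbolically. A quick sketch for the sample choice f(y) = y² (an assumption; any integrable f works):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)

f = y**2   # sample integrand (an assumption)

# \int_0^x \int_0^z f(y) dy dz reduces to \int_0^x (x - y) f(y) dy
lhs = sp.integrate(sp.integrate(f, (y, 0, z)), (z, 0, x))
rhs = sp.integrate((x - y) * f, (y, 0, x))
assert sp.simplify(lhs - rhs) == 0
```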
Example 4:
Write the initial value problem
\[
u''(x) + x u'(x) + 2 u(x) = 0, \qquad u(0) = \alpha, \quad u'(0) = \beta,
\]
as an integral equation. Proceeding as in Example 3, change the dummy variable in the first integral term to y and use Theorem 3.8 on the last term to get
\[
u(x) - \alpha - \beta x + \int_0^x y \, u(y) \, dy + \int_0^x (x - y) u(y) \, dy = 0.
\]
Simplification gives
\[
u(x) + x \int_0^x u(y) \, dy = \alpha + \beta x.
\]
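As a consistency check of this Volterra equation, one can verify it against a sample solution. Assuming the underlying IVP is u'' + x u' + 2u = 0 (an inference from the integral equation above), the function u(x) = x e^{-x²/2} satisfies it with α = u(0) = 0 and β = u'(0) = 1:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Sample solution u(x) = x*exp(-x**2/2), with u(0) = 0, u'(0) = 1,
# i.e. alpha = 0 and beta = 1 (an assumed instance for checking)
u = x * sp.exp(-x**2 / 2)
assert sp.simplify(sp.diff(u, x, 2) + x * sp.diff(u, x) + 2 * u) == 0

# The integral equation u(x) + x*Int_0^x u(y) dy = alpha + beta*x reduces to x
lhs = u + x * sp.integrate(u.subs(x, y), (y, 0, x))
assert sp.simplify(lhs - x) == 0
```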
where the latter notation is the inner product for finite vector spaces (i.e. the dot product).
Equation (3.2) may be solved by reduction to a set of simultaneous linear algebraic equations, as we shall now show. Substituting (3.3) into (3.2) gives
\[
f(x) = \lambda \int_a^b \left[ \sum_{j=1}^n u_j(x) v_j(y) \right] f(y) \, dy + g(x) = \lambda \sum_{j=1}^n u_j(x) \int_a^b v_j(y) f(y) \, dy + g(x),
\]
and letting
\[
c_j = \int_a^b v_j(y) f(y) \, dy = \langle v_j, f \rangle, \tag{3.4}
\]
then
\[
f(x) = \lambda \sum_{j=1}^n c_j u_j(x) + g(x). \tag{3.5}
\]
For this class of kernel, it is sufficient to find the c_j in order to obtain the solution to the integral equation. Eliminating f between equations (3.4) and (3.5) (i.e. taking the inner product of both sides with v_i) gives
\[
c_i = \int_a^b v_i(y) \left[ \lambda \sum_{j=1}^n c_j u_j(y) + g(y) \right] dy.
\]
Writing
\[
a_{ij} = \int_a^b v_i(y) u_j(y) \, dy = \langle v_i, u_j \rangle, \tag{3.7}
\]
and
\[
g_i = \int_a^b v_i(y) g(y) \, dy = \langle v_i, g \rangle, \tag{3.8}
\]
we obtain
\[
\mathbf{c} = \lambda A \mathbf{c} + \mathbf{g},
\]
i.e.
\[
(I - \lambda A) \mathbf{c} = \mathbf{g}, \tag{3.10}
\]
where I is the identity. This is just a simple linear system of equations for c. We therefore need to understand how we solve the canonical system A\mathbf{x} = \mathbf{b}, where A is a given matrix, b is the given forcing vector and x is the vector to be determined. Let us state an important theorem from Linear Algebra:
Consider the linear system
\[
A \mathbf{x} = \mathbf{b}, \tag{3.11}
\]
together with the homogeneous adjoint (transpose) system
\[
A^T \hat{\mathbf{x}} = \mathbf{0}, \tag{3.12}
\]
whose non-trivial linearly independent solutions (if any) we denote \hat{\mathbf{x}}_1, \ldots, \hat{\mathbf{x}}_p.
[Reminder: rank(A) is the number of linearly independent rows (or columns) of the matrix A.]
Then the following alternatives hold:
either
(i) det A ≠ 0, so that there exists a unique solution to (3.11), given by \mathbf{x} = A^{-1} \mathbf{b}, for each given b (and \mathbf{b} = \mathbf{0} ⇒ \mathbf{x} = \mathbf{0});
or
(ii) det A = 0, and then
(a) if b is such that \langle \mathbf{b}, \hat{\mathbf{x}}_j \rangle = 0 for all j, then there are infinitely many solutions to equation (3.11);
(b) if b is such that \langle \mathbf{b}, \hat{\mathbf{x}}_j \rangle \ne 0 for some j, then there is no solution to equation (3.11).
In the case of (ii)(a) there are infinitely many solutions because the theorem states that we can find a particular solution \mathbf{x}_{PS}, and furthermore the homogeneous system
\[
A \mathbf{x} = \mathbf{0} \tag{3.13}
\]
has p = n - rank(A) > 0 non-trivial linearly independent solutions
\[
\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_p,
\]
so that we can write
\[
\mathbf{x} = \mathbf{x}_{PS} + \sum_{j=1}^p \alpha_j \mathbf{x}_j,
\]
where the \alpha_j are arbitrary constants (and hence there are infinitely many solutions).
No proof of this theorem is given.
To illustrate this theorem consider the following simple 2 × 2 matrix example:
Example 5:
Determine the solution structure of the linear system A\mathbf{x} = \mathbf{b} when
\[
\text{(I)} \quad A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}, \qquad \text{(II)} \quad A = \begin{pmatrix} 1 & 1 \\ 2 & 2 \end{pmatrix}, \tag{3.14}
\]
and in the case of (II) when
\[
\mathbf{b} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \qquad \mathbf{b} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}. \tag{3.15}
\]
(I) Since det(A) = 1 ≠ 0, the solution exists for any b, given by \mathbf{x} = A^{-1} \mathbf{b}.
(II) Here det(A) = 0, so we have to consider solutions to the adjoint homogeneous system, i.e.
\[
A^T \hat{\mathbf{x}} = \mathbf{0}, \tag{3.16}
\]
i.e.
\[
\begin{pmatrix} 1 & 2 \\ 1 & 2 \end{pmatrix} \hat{\mathbf{x}} = \mathbf{0}. \tag{3.17}
\]
This has the one non-trivial linearly independent solution \hat{\mathbf{x}}_1 = (2, -1)^T. It is clear that there should be one such solution, since p = n - rank(A) = 2 - 1 = 1.
Note also that the homogeneous system
\[
A \mathbf{x} = \mathbf{0}, \tag{3.18}
\]
i.e.
\[
\begin{pmatrix} 1 & 1 \\ 2 & 2 \end{pmatrix} \mathbf{x} = \mathbf{0}, \tag{3.19}
\]
has the one non-trivial linearly independent solution \mathbf{x}_1 = (1, -1)^T. If solutions do exist they will therefore have the form \mathbf{x} = \mathbf{x}_{PS} + \alpha_1 \mathbf{x}_1.
A solution to the problem A\mathbf{x} = \mathbf{b} will exist if \hat{\mathbf{x}}_1 \cdot \mathbf{b} = 0. This condition does hold for \mathbf{b} = (1, 2)^T, and so the theorem predicts that a solution will exist. Indeed it does: note that \mathbf{x}_{PS} = (1/2, 1/2)^T, and so \mathbf{x} = \mathbf{x}_{PS} + \alpha_1 \mathbf{x}_1 is the infinite set of solutions.
The orthogonality condition does not hold for \mathbf{b} = (1, 1)^T, and so the theorem predicts that a solution will not exist. This is clear from looking at the system.
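The claims of Example 5, case (II), can be verified directly in a few lines (a numerical sketch using numpy):

```python
import numpy as np

# Case (II): A is singular, so solvability of Ax = b depends on b
A = np.array([[1.0, 1.0],
              [2.0, 2.0]])
assert np.isclose(np.linalg.det(A), 0.0)

# Non-trivial solution of the adjoint homogeneous system A^T xhat = 0
xhat = np.array([2.0, -1.0])
assert np.allclose(A.T @ xhat, 0.0)

# b = (1, 2)^T is orthogonal to xhat, so solutions exist ...
b_good = np.array([1.0, 2.0])
assert np.isclose(xhat @ b_good, 0.0)
x_ps = np.array([0.5, 0.5])   # the particular solution quoted above
assert np.allclose(A @ x_ps, b_good)

# ... while b = (1, 1)^T is not orthogonal to xhat, so no solution exists
b_bad = np.array([1.0, 1.0])
assert not np.isclose(xhat @ b_bad, 0.0)
```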
Now let us apply the Fredholm Alternative theorem to equation (3.10) in order to solve
the problem of degenerate kernels in general.
Case (i): if
\[
\det(I - \lambda A) \ne 0, \tag{3.20}
\]
then the Fredholm Alternative theorem tells us that (3.10) has a unique solution for c:
\[
\mathbf{c} = (I - \lambda A)^{-1} \mathbf{g}. \tag{3.21}
\]
Hence (3.2), with degenerate kernel (3.3), has the solution (3.5):
\[
f(x) = \lambda \sum_{i=1}^n c_i u_i(x) + g(x) = \lambda (\mathbf{u}(x))^T \mathbf{c} + g(x),
\]
or, from (3.21),
\[
f(x) = \lambda (\mathbf{u}(x))^T (I - \lambda A)^{-1} \mathbf{g} + g(x),
\]
which may be expressed, from (3.8), as
\[
f(x) = \lambda \int_a^b (\mathbf{u}(x))^T (I - \lambda A)^{-1} \mathbf{v}(y) \, g(y) \, dy + g(x).
\]
Definition 3.11: The resolvent kernel R(λ, x, y) is such that the integral representation for the solution
\[
f(x) = \lambda \int_a^b R(\lambda, x, y) \, g(y) \, dy + g(x)
\]
holds.
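The whole degenerate-kernel procedure can be sketched symbolically in the simplest case n = 1. All concrete choices below are assumptions for illustration: K(x, y) = x y on [0, 1], g(x) = 1, and λ = 1:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Degenerate kernel with n = 1: K(x, y) = u1(x) v1(y); sample choices (assumptions)
u1, v1, g, lam = x, y, sp.Integer(1), 1

# a11 = <v1, u1> and g1 = <v1, g> as in (3.7), (3.8), on [a, b] = [0, 1]
a11 = sp.integrate(v1 * u1.subs(x, y), (y, 0, 1))
g1 = sp.integrate(v1 * g, (y, 0, 1))

# Solve the 1x1 system (1 - lam*a11) c = g1, then build f via (3.5)
c = g1 / (1 - lam * a11)
f = lam * c * u1 + g

# Verify f solves f(x) - lam * Int_0^1 K(x, y) f(y) dy = g(x)
residual = f - lam * sp.integrate(u1 * v1 * f.subs(x, y), (y, 0, 1)) - g
assert sp.simplify(residual) == 0
```

Here a₁₁ = 1/3, g₁ = 1/2, so c = 3/4 and f(x) = 3x/4 + 1, which the final check confirms.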
Case (i) covered the simple case when there is a unique solution. Let us now concern ourselves with the case when the determinant of the matrix on the left-hand side of the linear system is zero.
Case (ii): suppose
\[
\det(I - \lambda A) = 0, \tag{3.22}
\]
and that the homogeneous equation
\[
(I - \lambda A) \mathbf{c} = \mathbf{0} \tag{3.23}
\]
has p non-trivial linearly independent solutions
\[
\mathbf{c}_1, \mathbf{c}_2, \ldots, \mathbf{c}_p,
\]
each of which yields (via (3.5) with g ≡ 0) a non-trivial solution f^{(j)}(x) of the homogeneous integral equation, with j = 1, 2, ..., p.
Turning to the inhomogeneous equation, (3.10), it has a solution if and only if the forcing term g is orthogonal to every solution of
\[
(I - \lambda A)^T \mathbf{h} = \mathbf{0}, \quad \text{i.e.} \quad \mathbf{h}^T \mathbf{g} = 0, \tag{3.26}
\]
or
\[
\sum_{i=1}^n h_i g_i = 0,
\]
which is equivalent to
\[
\int_a^b \left( \sum_{i=1}^n h_i v_i(y) \right) g(y) \, dy = 0.
\]
Thus, writing
\[
h(y) = \sum_{i=1}^n h_i v_i(y), \tag{3.27}
\]
then
\[
\int_a^b h(y) g(y) \, dy = 0,
\]
which means that g(x) must be orthogonal to h(x) on [a, b].
Let us explore the function h(x) a little; we start by expressing (3.26) as
\[
h_i - \lambda \sum_{j=1}^n a_{ji} h_j = 0.
\]
Without loss of generality assume that all the v_i(x) in (3.3) are linearly independent (since if one is dependent on the others, eliminate it and obtain a separable kernel with n replaced by n - 1). Multiply the i-th equation in (3.26) by v_i(x) and sum over all i from 1 to n:
\[
\sum_{i=1}^n h_i v_i(x) - \lambda \sum_{i=1}^n \sum_{j=1}^n a_{ji} h_j v_i(x) = 0.
\]
Using (3.27) and (3.3) we see that this reduces to the integral equation
\[
h(x) - \lambda \int_a^b K(y, x) h(y) \, dy = 0. \tag{3.28}
\]
Note that this is the homogeneous form of the transpose of the Fredholm integral equation
(3.2), i.e. it has no forcing term on the right hand side and the kernel is written K(y, x)
rather than K(x, y).
In conclusion, for Case (ii) the integral equation (3.2) with a separable kernel of the form (3.3) has a solution if and only if g(x) is orthogonal to every solution h(x) of the homogeneous equation (3.28). The general solution is then
\[
f(x) = f^{(0)}(x) + \sum_{j=1}^p \alpha_j f^{(j)}(x), \tag{3.29}
\]
where f^{(0)}(x) is a particular solution of (3.2), (3.3) and the \alpha_j are arbitrary constants.
Example 6:
Consider the integral equation
\[
f(x) = \lambda \int_0^\pi \sin(x - y) f(y) \, dy + g(x). \tag{3.30}
\]
Find
(i) the values of λ for which it has a unique solution,
(ii) the solution in this case,
(iii) the resolvent kernel.
For those values of λ for which the solution is not unique, find
(iv) a condition which g(x) must satisfy in order for a solution to exist,
(v) the general solution in this case.
The solution proceeds as follows. Expand the kernel:
\[
f(x) = \lambda \int_0^\pi \sin(x - y) f(y) \, dy + g(x) = \lambda \int_0^\pi (\sin x \cos y - \cos x \sin y) f(y) \, dy + g(x),
\]
and so write
\[
c_1 = \int_0^\pi f(y) \cos y \, dy, \tag{3.31}
\]
\[
c_2 = \int_0^\pi f(y) \sin y \, dy, \tag{3.32}
\]
which gives
\[
f(x) = \lambda \left[ c_1 \sin x - c_2 \cos x \right] + g(x). \tag{3.33}
\]
Substituting this value of f(x) into (3.31) gives
\[
c_1 = \int_0^\pi \left\{ \lambda \left[ c_1 \sin y - c_2 \cos y \right] + g(y) \right\} \cos y \, dy = \lambda c_1 \int_0^\pi \sin y \cos y \, dy - \lambda c_2 \int_0^\pi \cos^2 y \, dy + \int_0^\pi g(y) \cos y \, dy.
\]
Defining
\[
g_1 = \int_0^\pi g(y) \cos y \, dy,
\]
and noting the values of the integrals
\[
\int_0^\pi \sin y \cos y \, dy = \frac{1}{2} \int_0^\pi \sin 2y \, dy = \left[ -\frac{1}{4} \cos 2y \right]_0^\pi = 0,
\]
\[
\int_0^\pi \cos^2 y \, dy = \frac{1}{2} \int_0^\pi (1 + \cos 2y) \, dy = \frac{1}{2} \left[ y + \frac{1}{2} \sin 2y \right]_0^\pi = \frac{1}{2} \pi,
\]
yields
\[
c_1 = -\frac{1}{2} \pi \lambda c_2 + g_1.
\]
Repeating this procedure, putting f(x) from (3.33) into (3.32), gives
\[
c_2 = \int_0^\pi \left\{ \lambda \left[ c_1 \sin y - c_2 \cos y \right] + g(y) \right\} \sin y \, dy = \lambda c_1 \int_0^\pi \sin^2 y \, dy - \lambda c_2 \int_0^\pi \cos y \sin y \, dy + \int_0^\pi g(y) \sin y \, dy.
\]
Observing that
\[
\int_0^\pi \sin^2 y \, dy = \frac{1}{2} \int_0^\pi (1 - \cos 2y) \, dy = \frac{1}{2} \left[ y - \frac{1}{2} \sin 2y \right]_0^\pi = \frac{1}{2} \pi,
\]
and writing
\[
g_2 = \int_0^\pi g(y) \sin y \, dy,
\]
then we obtain
\[
c_2 = \frac{1}{2} \pi \lambda c_1 + g_2.
\]
Thus, there is a pair of simultaneous equations for c_1, c_2:
\[
c_1 + \tfrac{1}{2} \pi \lambda c_2 = g_1, \qquad -\tfrac{1}{2} \pi \lambda c_1 + c_2 = g_2,
\]
or in matrix notation
\[
\begin{pmatrix} 1 & \tfrac{1}{2} \pi \lambda \\ -\tfrac{1}{2} \pi \lambda & 1 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} g_1 \\ g_2 \end{pmatrix}. \tag{3.34}
\]
Case (i): These equations have a unique solution provided
\[
\det \begin{pmatrix} 1 & \tfrac{1}{2} \pi \lambda \\ -\tfrac{1}{2} \pi \lambda & 1 \end{pmatrix} \ne 0,
\]
i.e.
\[
1 + \frac{1}{4} \pi^2 \lambda^2 \ne 0 \quad \text{or} \quad \lambda \ne \pm \frac{2i}{\pi}.
\]
In this case, solving (3.34) for c_1 and c_2 and substituting into (3.33) yields
\[
f(x) = \frac{\lambda}{1 + \frac{1}{4} \pi^2 \lambda^2} \int_0^\pi \left[ \sin(x - y) - \frac{1}{2} \pi \lambda \cos(x - y) \right] g(y) \, dy + g(x).
\]
This is the required solution, and we can observe that the resolvent kernel is
\[
R(x, y, \lambda) = \frac{\sin(x - y) - \frac{1}{2} \pi \lambda \cos(x - y)}{1 + \frac{1}{4} \pi^2 \lambda^2}.
\]
Case (ii): If
\[
\det \begin{pmatrix} 1 & \tfrac{1}{2} \pi \lambda \\ -\tfrac{1}{2} \pi \lambda & 1 \end{pmatrix} = 0,
\]
i.e.
\[
\lambda = -\frac{2i}{\pi} \quad \text{or} \quad \lambda = +\frac{2i}{\pi},
\]
then there is either no solution or infinitely many solutions. With this, (3.34) becomes
\[
\begin{pmatrix} 1 & \pm i \\ \mp i & 1 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} g_1 \\ g_2 \end{pmatrix}.
\]
The two rows of this system are linearly dependent (the second row is ∓i times the first), so it is consistent only if
\[
g_2 = \mp i \, g_1. \tag{3.35}
\]
This is the condition that g(x) must satisfy for the integral equation to be soluble, i.e. if g(x) does not satisfy this, then (3.30) does not have a solution. Suppose this condition holds; then we can set c_2 to take any arbitrary constant value, c_2 = α, say. Thus,
\[
c_1 = \mp i \alpha + g_1,
\]
and hence from (3.33), the solution of (3.30) is, when λ = ±2i/π,
\[
f(x) = \pm \frac{2i}{\pi} \left[ (\mp i \alpha + g_1) \sin x - \alpha \cos x \right] + g(x) = \frac{2\alpha}{\pi} (\sin x \mp i \cos x) \pm \frac{2i}{\pi} g_1 \sin x + g(x),
\]
for arbitrary α, with constraint g_2 = ∓i g_1, i.e. (3.35).
We arrived at the above conclusions via simple row operations. The Fredholm Alternative theorem would have told us the same information regarding the constraints on g_1 and g_2.
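Finally, the resolvent-kernel solution of Example 6 can be checked symbolically for a sample λ and forcing term (λ = 1/3 and g(x) = x here, both assumptions chosen for illustration, with λ away from ±2i/π):

```python
import sympy as sp

x, y = sp.symbols('x y')
lam = sp.Rational(1, 3)   # sample lambda, away from ±2i/pi (an assumption)
g = x                     # sample forcing term (an assumption)

# Resolvent kernel R(x, y, lambda) from Example 6, case (i)
R = (sp.sin(x - y) - sp.pi * lam / 2 * sp.cos(x - y)) / (1 + sp.pi**2 * lam**2 / 4)
f = lam * sp.integrate(R * g.subs(x, y), (y, 0, sp.pi)) + g

# f should satisfy f(x) = lam * Int_0^pi sin(x - y) f(y) dy + g(x), i.e. (3.30)
residual = f - lam * sp.integrate(sp.sin(x - y) * f.subs(x, y), (y, 0, sp.pi)) - g
for v in (0.3, 1.1, 2.5):
    assert abs(float(residual.subs(x, v).evalf())) < 1e-8
```

The residual vanishes at the sample points, confirming that the resolvent representation reproduces a solution of (3.30) for this choice of λ and g.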