
Applications of Wavelets in Numerical Mathematics
Kees Verhoeven
1. Brief summary
2. Data compression
3. Denoising
4. Preconditioning
5. Adaptive grids
6. Integral equations
1. Brief Summary
$\varphi(t)$: scaling function. For $\varphi$ the following 2-scale relation holds:

$$\varphi(t) = \sum_{k=-\infty}^{\infty} p_k\,\varphi(2t - k), \qquad t \in \mathbb{R}.$$

$\psi(t)$: mother wavelet. For $\psi$ the following 2-scale relation holds:

$$\psi(t) = \sum_{k=-\infty}^{\infty} q_k\,\varphi(2t - k), \qquad t \in \mathbb{R}.$$

The decomposition relation for $\varphi$ reads:

$$\varphi(2t - k) = \sum_{m=-\infty}^{\infty} \bigl[ h_{2m-k}\,\varphi(t - m) + g_{2m-k}\,\psi(t - m) \bigr], \qquad t \in \mathbb{R}.$$
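As an illustration (an added sketch, not part of the original slides), the Haar system satisfies these relations with $p_0 = p_1 = 1$ and $q_0 = 1$, $q_1 = -1$, all other coefficients vanishing. A minimal numerical check of the 2-scale relation for $\varphi$, with this normalization assumed:

```python
import numpy as np

# Haar scaling function on [0, 1) and mother wavelet (assumed normalization).
def phi(t):
    return np.where((t >= 0) & (t < 1), 1.0, 0.0)

def psi(t):
    return phi(2 * t) - phi(2 * t - 1)          # q_0 = 1, q_1 = -1

t = np.linspace(-0.5, 1.5, 1001)

# 2-scale relation for phi with p_0 = p_1 = 1:
lhs = phi(t)
rhs = phi(2 * t) + phi(2 * t - 1)
print(np.max(np.abs(lhs - rhs)))                # prints 0.0
```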
2. Data Compression
We consider a function $f : [0,1] \to \mathbb{R}$. We want to approximate this function by a function $v$ defined by

$$v = \sum_k c_k\,\varphi_k,$$

where $\{\varphi_k \mid k = 1, \dots, N\}$ is a basis for the linear function space $V$. The quality of the approximation can be expressed in terms of a norm $\|f - v\|$.
An alternative is to expand $f$ periodically. We therefore look at the Fourier series of $f$,

$$f(x) = \sum_{m=-\infty}^{\infty} c_m\,e^{2\pi i m x},$$

and approximate this by

$$v(x) = \sum_{m=-M}^{M} c_m\,e^{2\pi i m x}.$$
So

$$V = \operatorname{span}\bigl\{ e^{2\pi i m x} \mid m = -M, \dots, M \bigr\},$$

with dimension $N = 2M + 1$. The basis functions form an orthonormal system. Therefore

$$c_m = (\varphi_m, f) = \int_0^1 f(x)\,e^{-2\pi i m x}\,dx.$$
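In practice these coefficients can be computed numerically; a small sketch (not from the slides, with an assumed smooth periodic test function) compares direct quadrature of the integral above with the FFT:

```python
import numpy as np

# Fourier coefficients c_m = int_0^1 f(x) exp(-2 pi i m x) dx, approximated by
# the rectangle rule, which converges very fast for smooth periodic f.
f = lambda x: np.exp(np.sin(2 * np.pi * x))     # assumed test function
n = 4096
x = np.arange(n) / n
samples = f(x)

def c(m):
    return np.mean(samples * np.exp(-2j * np.pi * m * x))

fft_coeffs = np.fft.fft(samples) / n            # FFT returns sums, hence / n
for m in range(4):
    print(m, c(m), fft_coeffs[m])               # the two columns agree
```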
Error estimates

Given $f, g : [0,1] \to \mathbb{R}$ with the Fourier expansions

$$f = \sum_{m=-\infty}^{\infty} c_m\,e^{2\pi i m x}, \qquad g = \sum_{m=-\infty}^{\infty} d_m\,e^{2\pi i m x}.$$

Then

$$\int_0^1 f(x)\,\overline{g(x)}\,dx = \sum_{m=-\infty}^{\infty} c_m\,\overline{d_m}.$$

So

$$\int_0^1 |f(x)|^2\,dx = \sum_{m=-\infty}^{\infty} |c_m|^2.$$

The error then reads

$$\varepsilon_M^2 := \|f - v\|^2 = \sum_{|m| > M} |c_m|^2.$$

In many cases properties of $f$ lead to an error estimate of the type

$$\varepsilon_M \leq C M^{-\alpha}, \qquad C, \alpha > 0.$$
Example

Given

$$f(x) = x\left(x - \tfrac{1}{2}\right)(x - 1), \qquad x \in [0, 1].$$

We can derive

$$c_m = \frac{3}{4\pi^3 i\,m^3}.$$

The error $\varepsilon_M$ can therefore be estimated via

$$\varepsilon_M^2 = \frac{9}{8\pi^6} \sum_{m=M+1}^{\infty} m^{-6} \leq \frac{9}{8\pi^6} \int_M^{\infty} y^{-6}\,dy = \frac{9}{40\pi^6 M^5}.$$

M      L^2-error         sqrt(9/(40 pi^6 M^5))
10     0.426 · 10^-4     0.484 · 10^-4
20     0.803 · 10^-5     0.855 · 10^-5
40     0.147 · 10^-5     0.151 · 10^-5
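A short numerical check of this table (an added sketch, not from the slides): with the closed form for $c_m$, the tail sum and the integral bound can be evaluated directly.

```python
import numpy as np

# |c_m|^2 = 9 / (16 pi^6 m^6); sum the tail |m| > M and compare with the bound.
def truncation_error(M, tail=200000):
    m = np.arange(M + 1, M + 1 + tail, dtype=float)
    return np.sqrt(2.0 * np.sum(9.0 / (16.0 * np.pi**6 * m**6)))

def bound(M):
    return np.sqrt(9.0 / (40.0 * np.pi**6 * M**5))

for M in (10, 20, 40):
    print(M, truncation_error(M), bound(M))
# reproduces the table: about 0.426e-4 vs 0.484e-4 for M = 10, and so on
```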
How about localized functions?
It seems sensible to approximate a localized function with
basis functions which also have compact support.
Step functions:

$$\varphi_k(x) = \begin{cases} h^{-1/2}, & x \in [(k-1)h,\, kh), \\ 0, & \text{else.} \end{cases}$$

Note that:
- $h = N^{-1}$ and $\|\varphi_k\| = 1$
- more efficient for function evaluations
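A minimal sketch (not in the slides) of approximation with this step-function basis; the test function from the Fourier example above is reused as an assumption.

```python
import numpy as np

# c_k is h^{-1/2} times the integral of f over cell k; v is piecewise constant.
f = lambda x: x * (x - 0.5) * (x - 1.0)         # assumed test function

def l2_error(N, samples_per_cell=200):
    h = 1.0 / N
    err2 = 0.0
    for k in range(N):
        # midpoints of samples_per_cell subcells of cell k
        x = (k + (np.arange(samples_per_cell) + 0.5) / samples_per_cell) * h
        c_k = np.sqrt(h) * np.mean(f(x))        # h^{-1/2} * integral over the cell
        v = c_k / np.sqrt(h)                    # value of v = c_k * phi_k on the cell
        err2 += np.mean((f(x) - v) ** 2) * h    # cell contribution to ||f - v||^2
    return np.sqrt(err2)

for N in (16, 64, 256):
    print(N, l2_error(N))                       # the error decreases like O(1/N)
```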
Comparison
We consider the following function.
The error of the approximation using the Fourier series with M = 64 is approximately 0.001. The error of the approximation using first-order splines is approximately 0.01, with N = 2M + 1 = 129.
We would like to use localized basis functions only where
the function to be approximated behaves like a localized
function.
Therefore, we would like basis functions which all have the same shape but different scales. Then, if we have

$$v = \sum_{k=1}^{N} c_k\,\varphi_k$$

and $|c_j| \leq \varepsilon$, we also have

$$\Bigl\| f - \sum_{k} c_k\,\varphi_k \Bigr\| \leq \Bigl\| f - \sum_{k \neq j} c_k\,\varphi_k \Bigr\| + \|c_j\,\varphi_j\|.$$

This leads to data compression: dropping a term with a small coefficient changes the approximation error by at most $\varepsilon$.
Example
We denote by $V_J$ the space of piecewise constant basis functions on $[0,1]$ with width $h = 2^{-J}$ and dimension $N = 2^J$.

The space $V_0$ has one basis function: the constant function $\varphi_1 = 1$.

For $V_1$ the usual basis functions are depicted here.

The coefficients $c_k$ behave like

$$c_k = h^{-1/2} \int_{a-h/2}^{a+h/2} f(x)\,dx \approx h^{1/2} f(a).$$

Can we choose a more appropriate basis?
First approach: construct a basis for $V_J$ by expanding the basis of $V_{J-1}$.

Figure 1: An alternative basis for $V_1$.

Figure 2: Together with the functions above, an alternative basis for $V_2$.

But: the new basis is no longer orthogonal.

For the test function $f(x) = x$, the drop in the coefficients seems to be like $h^{3/2}$, i.e. a factor of roughly $2^{-3/2}$ per generation:

generation    |c_k|
0             0.354
1             0.125
2             0.044
3             0.016
A better alternative basis for $V_J$

Construct an orthogonal basis $\{\varphi_i\}$ for $V_J$. This leads to

Figure 3: A better alternative basis for $V_1$.

Figure 4: Together with the two functions above, a better alternative basis for $V_2$.

Note that these are the Haar wavelets!

$$\psi(x) = \psi_{0,0}(x) = \varphi_2(x), \qquad \psi_{0,1}(x) = \varphi_3(x), \qquad \psi_{1,1}(x) = \varphi_4(x).$$
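A small check (an added sketch, with the discrete representation on four subintervals assumed) that this alternative basis for $V_2$ is indeed orthonormal in $L^2[0,1]$:

```python
import numpy as np

# The four basis functions of V_2, represented by their values on the four
# subintervals of width h = 1/4 (constant function, coarse Haar wavelet,
# and the two fine Haar wavelets).
h = 0.25
phi1  = np.array([1.0,  1.0,  1.0,  1.0])
psi00 = np.array([1.0,  1.0, -1.0, -1.0])
psi01 = np.array([1.0, -1.0,  0.0,  0.0]) * np.sqrt(2)
psi11 = np.array([0.0,  0.0,  1.0, -1.0]) * np.sqrt(2)

B = np.stack([phi1, psi00, psi01, psi11])
# Gram matrix under the L^2 inner product (cell measure h): the identity.
print(B @ B.T * h)
```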
Now:

$$c_k = h^{-1/2} \left( \int_{a-h/2}^{a} f(x)\,dx - \int_{a}^{a+h/2} f(x)\,dx \right)$$

$$= h^{-1/2} \left( \int_{a-h/2}^{a} \bigl(f(a) + (x - a)f'(a) + \dots\bigr)\,dx - \int_{a}^{a+h/2} \bigl(f(a) + (x - a)f'(a) + \dots\bigr)\,dx \right)$$

$$\approx -\tfrac{1}{4}\, f'(a)\, h^{3/2}.$$
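A numerical check of this estimate (an added sketch; the smooth test function $f(x) = \sin 3x$ is an assumption, and its antiderivative is used so that the integrals are exact):

```python
import numpy as np

# Haar detail coefficient around a point a versus -(1/4) f'(a) h^{3/2}.
F  = lambda x: -np.cos(3 * x) / 3          # antiderivative of f(x) = sin(3x)
df = lambda x: 3 * np.cos(3 * x)           # f'(x)
a  = 0.4

for h in (0.1, 0.05, 0.025):
    left  = F(a) - F(a - h / 2)            # integral over [a - h/2, a]
    right = F(a + h / 2) - F(a)            # integral over [a, a + h/2]
    c = (left - right) / np.sqrt(h)        # c_k = h^{-1/2} (left - right)
    print(h, c, -0.25 * df(a) * h ** 1.5)  # the two columns agree for small h
```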
Comparison of this basis and the homogeneous one:
Figure 5: Approximation of f (top left) with 32, 10, 9 basis functions,
respectively.
Figure 6: Approximation of f with 22, 10 homogeneous basis functions,
respectively.
If the wavelets used as basis functions have several moments equal to zero, then the reduction will be better.
Because, if

$$v = \sum_{l \in \mathbb{Z}} c_{l,I}\,\varphi_{l,I} + \sum_{k=I}^{J-1} \sum_{l \in \mathbb{Z}} d_{l,k}\,\psi_{l,k},$$

then

$$|d_{l,k}| \leq C\, h^{d + 3/2},$$

where $d$ is the number of moments equal to zero.
Example
M      L^2-error for spline d = 1    L^2-error for spline d = 3
256    1.149 · 10^-3                 2.829 · 10^-4
128    1.149 · 10^-3                 2.829 · 10^-4
64     1.356 · 10^-3                 2.829 · 10^-4
32     2.623 · 10^-3                 1.143 · 10^-3
3. Denoising
Suppose we have a signal with some noise. We can make a
wavelet decomposition up to a certain depth.
If the coefficients of the wavelets remain relatively large (say $|d_{j,k}| > \varepsilon$), then we have some localized noise. So we cancel these contributions by forcing $d_{j,k} = 0$ at the $L$ deepest levels.

Then, we can reconstruct the filtered signal.
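A sketch of this procedure with Haar wavelets (the signal, the spike positions and the choices of $L$ and $\varepsilon$ are all assumptions here): decompose, set the large detail coefficients on the $L$ finest levels to zero, and reconstruct.

```python
import numpy as np

def haar_forward(x, levels):
    x = x.astype(float).copy()
    details = []
    for _ in range(levels):
        details.append((x[0::2] - x[1::2]) / np.sqrt(2))   # finest level first
        x = (x[0::2] + x[1::2]) / np.sqrt(2)
    return x, details

def haar_inverse(approx, details):
    x = approx.copy()
    for d in reversed(details):
        up = np.empty(2 * x.size)
        up[0::2] = (x + d) / np.sqrt(2)
        up[1::2] = (x - d) / np.sqrt(2)
        x = up
    return x

# Smooth signal contaminated by a few localized spikes.
t = np.linspace(0, 1, 256, endpoint=False)
clean = np.sin(2 * np.pi * t)
signal = clean.copy()
signal[[40, 120, 200]] += 1.0

L, eps = 3, 0.2
approx, details = haar_forward(signal, levels=8)
for j in range(L):                        # only the L deepest (finest) levels
    d = details[j]
    d[np.abs(d) > eps] = 0.0              # cancel the large, localized contributions
filtered = haar_inverse(approx, details)

print(np.max(np.abs(signal - clean)), np.max(np.abs(filtered - clean)))
# the spikes are strongly reduced by the filtering
```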
The role of L can be seen as follows:
Figure 7: Filtering of f with L = 1, 3, 5 levels (ε = 0.1).
The role of ε is shown here:

Figure 8: Filtering of f (top) with L = 1, 2 levels (ε = 0.01).
But: if the original signal is not periodic, we encounter
problems at the boundaries.
Figure 9: Filtering of non-periodic f with 3 levels (ε = 0.01).
4. Preconditioning
Consider

$$T(u) = f$$

on a domain $\Omega$, with the differential operator $T$ elliptic. If $T$ is linear, this would lead to a linear system

$$Dc = r,$$

with

$$D_{j,k} = (\varphi_j, T\varphi_k), \qquad r_j = (\varphi_j, f).$$

The matrix $D$ is called the stiffness matrix. If we use an iterative method to solve this system, the speed of convergence strongly depends on the condition number of the matrix $D$, $\kappa(D) = \|D\|\,\|D^{-1}\|$, with

$$\|D\| = \sup_{\|c\| = 1} c^T D c, \qquad \|D^{-1}\|^{-1} = \inf_{\|c\| = 1} c^T D c.$$

For symmetric matrices, we can express the condition number in terms of the eigenvalues:

$$\kappa(D) = \frac{\lambda_{\max}}{\lambda_{\min}}.$$
Example
We consider

$$T(u) = -\frac{d^2 u}{dx^2} + u = f, \qquad x \in (0, 1),$$

with periodic boundary conditions.

We assume that the numerical solution $v$ can be written as a linear combination of certain scaling functions which span $V_J$.

Galerkin's method and integration by parts gives us

$$Dc = r,$$

with

$$D_{i,j} = \left( \frac{d\varphi_{i,J}}{dx}, \frac{d\varphi_{j,J}}{dx} \right) + (\varphi_{i,J}, \varphi_{j,J}),$$

and, as before,

$$r_j = (\varphi_j, f).$$

For linear B-splines:

$$\left( \frac{d\varphi_{i,J}}{dx}, \frac{d\varphi_{j,J}}{dx} \right) = \int 2^{J/2}\,\frac{d}{dx}\varphi(2^J x - i)\; 2^{J/2}\,\frac{d}{dx}\varphi(2^J x - j)\,dx.$$

The derivatives are piecewise constant, and therefore we derive

$$D_{i,j} = \begin{cases} 2N^2 + \tfrac{2}{3}, & i = j, \\ -N^2 + \tfrac{1}{6}, & i = (j \pm 1) \bmod N, \\ 0, & \text{else.} \end{cases}$$

This is a circulant matrix, that is $D_{i,j} = d(i - j)$.
We define the symbol of a circulant matrix as

$$D(z) = \sum_j D_{i,j}\, z^{i-j}.$$

For $D$ it now follows that

$$\lambda_{\max} = \max_{z^N = 1} |D(z)|, \qquad \lambda_{\min} = \min_{z^N = 1} |D(z)|.$$

For the differential equation we have

$$D(z) = \sum_j \left[ \left( \frac{d\varphi_{i,J}}{dx}, \frac{d\varphi_{j,J}}{dx} \right) + (\varphi_{i,J}, \varphi_{j,J}) \right] z^{i-j}.$$

We write

$$D(z) = D_1(z) + D_2(z),$$

with

$$D_1(z) = \sum_j \left( \frac{d\varphi_{i,J}}{dx}, \frac{d\varphi_{j,J}}{dx} \right) z^{i-j}, \qquad D_2(z) = \sum_j (\varphi_{i,J}, \varphi_{j,J})\, z^{i-j}.$$

The second term can be recognized as $D_2(z) = R_{\varphi}(z)$. In the same manner we can derive $D_1(z) = 2^{2J} R_{\varphi'}(z)$.

Using this and the 2-scale relations, we can derive

$$D(z) = N^2 (2 - z - z^{-1}) + \tfrac{1}{6}(4 + z + z^{-1}).$$

Calculating the biggest and smallest eigenvalue, we see that

$$\kappa_1 = 4N^2 + \tfrac{1}{3}.$$

If $N \to \infty$, then $\kappa_1 \to \infty$.
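A quick numerical confirmation (an added sketch, not from the slides): assembling the circulant matrix $D$ with the entries above and computing its eigenvalues reproduces $\kappa_1 = 4N^2 + 1/3$.

```python
import numpy as np

# Circulant stiffness matrix: diagonal 2N^2 + 2/3, off-diagonals -N^2 + 1/6.
def kappa_scaling(N):
    D = np.zeros((N, N))
    for i in range(N):
        D[i, i] = 2 * N**2 + 2.0 / 3.0
        D[i, (i + 1) % N] = -N**2 + 1.0 / 6.0
        D[i, (i - 1) % N] = -N**2 + 1.0 / 6.0
    lam = np.linalg.eigvalsh(D)
    return lam.max() / lam.min()

for N in (16, 32, 64, 128, 256):
    print(N, kappa_scaling(N), 4 * N**2 + 1.0 / 3.0)
# reproduces the kappa_1 column of the comparison table (1024.3, 4096.3, ...)
```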
We now use wavelets:

$$v = \sum_{k,j} d_{k,j}\, \psi_{k,j}.$$

The stiffness matrix $D$ then looks like

$$D_{l,m,j,k} = \left( \frac{d\psi_{m,l}}{dx}, \frac{d\psi_{k,j}}{dx} \right) + (\psi_{m,l}, \psi_{k,j}).$$

After some algebra using Riesz functions and 2-scale relations, we can show that

$$\kappa_2(D) \leq C \qquad \text{for all } J.$$

Comparison

2^J    kappa_1      kappa_2
16     1024.3       45.4
32     4096.3       49.7
64     16384.3      52.9
128    65536.3      55.4
256    262144.3     57.3

Again: this strongly depends on the periodicity of the boundary conditions.
5. Adaptive grids
We consider a hyperbolic PDE

$$\frac{\partial u(x, t)}{\partial t} = T\!\left(u, \frac{\partial u}{\partial x}, \dots\right),$$

together with an initial condition and periodic boundary conditions.

The approximation of the initial condition $u(x, 0)$ is done by

$$v(x, 0) = \sum_{i=0}^{N-1} c_{i,I}(0)\,\varphi_{i,I}(x) + \sum_{j=I}^{J-1} \sum_{i \in I_j} d_{i,j}(0)\,\psi_{i,j}(x).$$

Here $N = 2^I$ is the number of intervals on the coarsest grid $I$; the set $I_j$ is a subset of all possible wavelets on the grids $j = I, \dots, J-1$. These sets $I_j$ are found by making a wavelet decomposition and leaving out all wavelet coefficients below a certain threshold $\varepsilon$.

But: this would mean that we have to approximate the function on the finest grid first!
We ignore all contributions from wavelets for which $|d_{k,l}| \leq \varepsilon$ (filled circles mean $|d_{k,l}| > \varepsilon$, open circles mean $|d_{k,l}| \leq \varepsilon$).

Adaptivity means that wavelets left out in previous time steps can occur again.
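A sketch of how the sets $I_j$ could be selected by thresholding a Haar decomposition of the initial condition (the initial profile, the grid size and the threshold are assumptions):

```python
import numpy as np

def haar_details(x):
    # Returns the detail coefficients per level, coarsest level first.
    x = x.astype(float).copy()
    details = []
    while x.size > 1:
        details.append((x[0::2] - x[1::2]) / np.sqrt(2))
        x = (x[0::2] + x[1::2]) / np.sqrt(2)
    return details[::-1]

x = np.linspace(0, 1, 512, endpoint=False)
u0 = np.tanh((x - 0.5) / 0.02)                 # steep front, smooth elsewhere
eps = 1e-3

for level, d in enumerate(haar_details(u0)):
    active = np.nonzero(np.abs(d) > eps)[0]    # the index set I_j on this level
    print(level, d.size, active.size)
# on the fine levels only the wavelets near the front at x = 0.5 stay active
```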
Example: Burgers equation
Figure 10: Approximation at t = 0, 1/12, 2/12, 3/12, 4/12, 5/12 for Burgers equation. The number of basis functions with coefficient above threshold is 32, 56, 122, 114, 114 and 114, respectively.
Example: wave equation
Figure 11: Approximation at t = 0, 0.3 and 0.5, respectively, for the wave equation. The number of basis functions with coefficient above threshold is approximately 60.
6. Integral Equations
We consider

$$u(x) = \int_{\Omega} K(x; t)\, u(t)\, dt + f(x).$$

We take

$$v(x) = \sum_j c_j\, \varphi_j(x).$$

Galerkin's method would leave us with

$$Ac = r,$$

with

$$A_{j,k} = (\varphi_j, \varphi_k) - \int\!\!\int \varphi_j(x)\, K(x; t)\, \varphi_k(t)\, dx\, dt, \qquad r_j = (\varphi_j, f).$$

Often this $A$ is well conditioned, but full.

Using wavelets reduces the number of nonzero elements. We represent

$$v(x) = \sum_{l \in \mathbb{Z}} c_{l,I}\,\varphi_{l,I} + \sum_{k=I}^{J-1} \sum_{l \in \mathbb{Z}} d_{l,k}\,\psi_{l,k}.$$

Look at the second term of $A$:

$$K_{l,m,j,k} = \int\!\!\int \psi_{m,l}(x)\, K(x; t)\, \psi_{k,j}(t)\, dx\, dt.$$

We make the following assumption on $K(x; t)$:

$$\left| \frac{\partial^d}{\partial x^d} K(x; t) \right| + \left| \frac{\partial^d}{\partial t^d} K(x; t) \right| \leq C_d\, \frac{1}{|x - t|^{d+1}},$$

for a certain $d > 0$.
Making use of the Taylor series of $K(x; t)$ and taking a wavelet with $d$ zero moments, we can derive

$$|K_{l,m,j,k}| \leq C\, \frac{1}{|x_0 - t_0|^{d+1}},$$

with $x_0$ and $t_0$ in the supports of the two wavelets. Using this we can bring down the number of nonzero elements (or better: the number of elements with value above a certain threshold $\varepsilon$).
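An illustrative sketch of this compression (the kernel, the sizes and the thresholds below are assumptions and differ from those in the tables): sample a kernel with a smoothed diagonal singularity, express the resulting matrix in the discrete Haar basis, and count the entries above a threshold.

```python
import numpy as np

def haar_matrix(N):
    # Orthonormal discrete Haar transform matrix (N a power of two).
    H = np.array([[1.0]])
    while H.shape[0] < N:
        n = H.shape[0]
        top = np.kron(H, [1.0, 1.0])
        bot = np.kron(np.eye(n), [1.0, -1.0])
        H = np.vstack([top, bot]) / np.sqrt(2)
    return H

for N in (64, 128, 256):
    x = (np.arange(N) + 0.5) / N
    K = np.log(np.abs(x[:, None] - x[None, :]) + 1.0 / N)   # assumed kernel
    W = haar_matrix(N) @ K @ haar_matrix(N).T                # Haar-basis representation
    for eps in (1e-4, 1e-6):
        print(N, eps, f"{100 * np.mean(np.abs(W) > eps):.1f}%")
# the fraction of entries above the threshold shrinks as N grows
```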
Example

N      ε = 10^-6    ε = 10^-9    ε = 10^-12
24     74%          92%          92%
48     19%          85%          96%
96     5.1%         54%          78%
192    1.1%         16%          50%
384    0.34%        3.5%         25%

Table 1: The fraction of elements above threshold ε, with Daubechies wavelets with K = 2.

N      ε = 10^-6    ε = 10^-9    ε = 10^-12
24     66%          92%          92%
48     12%          93%          96%
96     3.1%         47%          90%
192    0.85%        12%          56%
384    0.32%        2.4%         21%

Table 2: The fraction of elements above threshold ε, with Daubechies wavelets with K = 5.