where A is a linear operator representing the highest order differential terms and f
is a lower order nonlinear operator. The solution to the initial value problem (1.1) is
then given by the variation of constants formula and most of the exponential methods
considered in the literature are constructed from this representation of the solution.
Let us consider for instance the family of multistep exponential methods developed in [2]. Given a stepsize h > 0, n ≥ 0, and approximations u_{n+j} ≈ u(t_{n+j}), t_{n+j} = (n+j)h, 0 ≤ j ≤ k−1, the k-step method approximates the solution u of (1.1) at t_{n+k} = (n+k)h by

u_{n+k} = φ_0(k, hA) u_n + h Σ_{j=0}^{k−1} φ_{j+1}(k, hA) Δ^j f_n,    (1.2)
where f_n = f(t_n, u_n), Δ denotes the standard forward difference operator, and, for λ ∈ ℂ and k ≥ 1,

φ_0(k, λ) = e^{kλ},   φ_j(k, λ) = ∫_0^k e^{(k−τ)λ} \binom{τ}{j−1} dτ,   1 ≤ j ≤ k.    (1.3)
As we can see in (1.2), these methods require the evaluation of φ_j(k, hA), 0 ≤ j ≤ k, for φ_j(k, λ) defined in (1.3). This is in fact the main difficulty in the implementation of the methods in (1.2) and, in general, of exponential methods, since they typically require the evaluation of vector-valued mappings φ(hA), with h the time step in the discretization and either

φ(λ) = e^{mλ},   λ ∈ ℂ,    (1.4)

or

φ(λ) = ∫_0^m e^{(m−τ)λ} p(τ) dτ,    (1.5)

with m an integer and p(τ) a polynomial. The values of m and p in (1.5) depend on the method. For instance, in the case of the methods in (1.2), it is clear from (1.3) that m = k, the number of steps of the method, and

p(τ) = \binom{τ}{j−1} = τ(τ−1)⋯(τ−j+2)/(j−1)!,   1 ≤ j ≤ k.

If we consider instead the explicit exponential Runge–Kutta methods in [11], then m = 1 and

p(τ) = τ^{k−1}/(k−1)!,   k ≥ 1,

as we show in Section 2.1.
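Before turning to the inversion approach, the mappings (1.5) can be sanity-checked for scalar arguments by direct numerical quadrature. The following Python sketch (the function name `phi_rk` is ours, not from any package discussed here) evaluates (1.5) for the Runge–Kutta family, with m = 1 and p(τ) = τ^{k−1}/(k−1)!:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import factorial

def phi_rk(k, lam):
    """Evaluate (1.5) with m = 1 and p(tau) = tau^(k-1)/(k-1)!  (Runge-Kutta family)."""
    val, _ = quad(lambda t: np.exp((1.0 - t) * lam) * t ** (k - 1) / factorial(k - 1),
                  0.0, 1.0)
    return val

# k = 1 gives phi(lam) = (e^lam - 1)/lam; k = 2 gives (e^lam - 1 - lam)/lam^2
print(phi_rk(1, -2.0), phi_rk(2, 0.5))
```

For k = 1 this reduces to the mapping (1.13) discussed below, which gives a convenient check of the integral representation.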
In the present paper we propose a way to evaluate the operators φ(hA) in (1.4) and (1.5) when A in (1.1) is the infinitesimal generator of an analytic semigroup in a Banach space X. Thus, we will assume that A : D(A) ⊂ X → X is sectorial, i.e., A is a densely defined and closed linear operator on X and there exist constants M > 0, ω ∈ ℝ, and an angle 0 < δ < π/2, such that the resolvent fulfills

‖(zI − A)^{−1}‖ ≤ M/|z − ω|,   for |arg(z − ω)| < π − δ.    (1.6)
A quadrature method for evaluating exponential-type functions
In all our examples, A will in practice be a discrete version of a sectorial operator, i.e., a matrix arising after the spatial semidiscretization of (1.1) by some numerical method.
The approach we follow in the present paper is based on a suitable contour integral representation of the mappings φ(hA) and is related to the method in [12]. More precisely, the contour integral representation in [12] is given by the Cauchy integral formula. The goal is the evaluation of the mappings φ(hλ) at scalar level, assuming that a diagonalization of A is available, but also at operator/matrix level for non-diagonal matrices, working with the full matrix A. However, despite the good computational results reported in [12], the way of selecting the parameters involved in the quadrature formulas is not very clear, and they depend strongly on the equation considered and on the spatial discretization parameters. Alternatively, the algorithm we propose is derived by using Laplace transformation formulas. Our method is finally based on another contour integral representation of the φ(hA). However, our quadrature formulas borrow their parameters from the inversion method of sectorial Laplace transforms developed in [16], where a rigorous analysis of the error is performed together with an optimization process to choose the different parameters involved in the approximation. Thus, for sectorial operators A our approach seems to provide a more natural selection of the quadrature parameters, it is more accurate, and it is much less problem-dependent, since the quadrature parameters depend only on the sectorial width δ in (1.6). For a comparison with the approach in [12] we refer to Section 6.1.
In order to apply the quadrature formulas developed in [16], we derive a representation of the operators in (1.4) and (1.5) as the inverses of suitable Laplace transforms Φ(z, hA) at certain values of the original variable. These Laplace transforms all have the form

Φ(z, hA) = R(z)(zI − hA)^{−1},    (1.7)

with R(z) a scalar rational mapping of z ∈ ℂ. Due to (1.6), the mappings Φ(z, hA) turn out to be sectorial in the variable z, i.e., there exist constants σ ∈ ℝ and M > 0, possibly different from the constants in (1.6), such that

Φ(z, hA) is analytic for z in the sector |arg(z − σ)| < π − δ and there
‖Φ(z, hA)‖ ≤ M/|z − σ|^ν,  for some ν ≥ 1.    (1.8)
In this way, we reduce the problem of computing φ(hA) to the inversion of a sectorial Laplace transform Φ(z, hA) of the form (1.7). We then use the inversion method in [16], which consists of a special quadrature to discretize the inversion formula for the Laplace transform. Thus, we finally approximate

φ(hA) ≈ Σ_{ℓ=−K}^{K} w_ℓ e^{m z_ℓ} Φ(z_ℓ, hA) = Σ_{ℓ=−K}^{K} w_ℓ e^{m z_ℓ} R(z_ℓ)(z_ℓ I − hA)^{−1},    (1.9)

with quadrature weights w_ℓ and nodes z_ℓ, −K ≤ ℓ ≤ K, specified in Section 2.2. At scalar level, (1.9) reads

φ(λ) ≈ Σ_{ℓ=−K}^{K} w_ℓ e^{m z_ℓ} Φ(z_ℓ, λ) = Σ_{ℓ=−K}^{K} w_ℓ e^{m z_ℓ} R(z_ℓ)/(z_ℓ − λ).    (1.10)
The contour integral representation of the mappings φ given by the inversion formula of the Laplace transform (see (2.5)) can be used at different levels:

(i) At operator/matrix level it allows precomputing all the required φ(hA) by an exponential method before the time-stepping begins. Thus, only the matrix–vector products need to be carried out at every time step. Moreover, this algorithm allows parallelism in the computation of the inverses (z_ℓ I − hA)^{−1} for different ℓ. Even when these inverses (z_ℓ I − hA)^{−1} can be efficiently computed, the storage of the resulting full matrices φ(hA) and the subsequent matrix–vector products can become prohibitive for large problems. For this reason, we consider that the representation at this level makes sense only for moderate-size problems, specially if they are described by a full matrix A. This is the case in the example given in Section 6.1, where a Chebyshev collocation method is applied for the spatial discretization of the 1D Allen–Cahn equation. It is also the case in the 2D examples of Section 6.2, where spatial discretizations by the finite element method are considered. Let us also notice that so far we have no knowledge of previous applications of exponential methods to problems with a mass matrix.
(ii) At vector level, our approach is useful to compute products of the form φ(hA)v, for a given vector v. For problems with a large sparse matrix, it can be used in combination with Krylov subspace approximations. In this situation

φ(A)v ≈ V_m φ(H_m) e_1 ‖v‖_2,    (1.11)

where V_m contains the Arnoldi or Lanczos basis of the m-th Krylov subspace with respect to A and v, H_m is a much smaller matrix than A, and e_1 is the first m-dimensional unit vector. We can apply (1.9) to evaluate the products φ(H_m)e_1. Then, we do not compute the full inverses (z_ℓ I − H_m)^{−1} for every ℓ, but solve the linear systems

(z_ℓ I − H_m)x = e_1.    (1.12)

In Section 7 we provide an example of this application inside the MATLAB package EXP4, which implements a variable step size exponential Runge–Kutta method by using Krylov techniques [9]. In this case, our approach is to be compared with Padé approximants, also used for instance in [1]. The work-efficiency diagrams in this context are very much the same with both techniques, but our approach is easier to program. Moreover, we think that it can be more easily extended to the implementation of other exponential solvers, since it uses no recurrences, neither to evaluate different mappings nor to evaluate φ(jhA), j = 1, . . ., from φ(hA).
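As a concrete sketch of this vector-level usage (in Python; `arnoldi` and `phi1` are our own illustrative helpers, not EXP4 routines), the following builds the Arnoldi basis behind (1.11) and evaluates φ(λ) = (e^λ − 1)/λ on the small matrix H_m; taking m equal to the full dimension makes the approximation (1.11) exact, which gives a simple correctness check:

```python
import numpy as np
from scipy.linalg import expm, solve

def phi1(M):
    # phi(M) = M^{-1}(e^M - I), cf. (1.13); valid here since M is invertible
    return solve(M, expm(M) - np.eye(M.shape[0]))

def arnoldi(A, v, m):
    """Arnoldi basis V_m and Hessenberg matrix H_m of the m-th Krylov subspace."""
    n = v.size
    V = np.zeros((n, m))
    H = np.zeros((m, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        if j + 1 < m:
            H[j + 1, j] = np.linalg.norm(w)
            V[:, j + 1] = w / H[j + 1, j]
    return V, H

# 1D Laplacian-like matrix and a fixed vector
n = 8
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
rng = np.random.default_rng(0)
v = rng.standard_normal(n)

V, H = arnoldi(A, v, m=n)            # m = n: the approximation (1.11) is exact
e1 = np.zeros(n); e1[0] = 1.0
approx = (V @ (phi1(H) @ e1)) * np.linalg.norm(v)
exact = phi1(A) @ v
print(np.linalg.norm(approx - exact))
```

In practice one takes m much smaller than n and, as described above, replaces the direct evaluation of φ(H_m)e_1 by the linear solves (1.12).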
(iii) At scalar level, we can apply (1.9) to compute φ(λ), λ ∈ ℂ. The evaluation of the scalars φ(λ) is itself a well-known problem in numerical analysis, as exemplified in [12] with the mapping

φ(λ) = (e^λ − 1)/λ.    (1.13)

On the one hand, the evaluation of φ for small λ by using formula (1.13) suffers from cancellation error. On the other hand, the use of a truncated Taylor expansion only works well for small enough λ. Thus, there are some intermediate values of λ for which the choice of the proper formula is not clear, leading to a loss of accuracy. For λ inside a sector around the negative real axis, |arg(λ)| ≥ π − δ, these difficulties can be overcome by writing φ in the format (1.5), with m = 1 and p(τ) = 1. By doing so, we will be able to evaluate φ(λ), independently of the size of λ, by using essentially the same technique developed in principle to evaluate the vector-valued mapping φ(hA). Actually, in Section 4 we derive a formula to evaluate φ(−λ), too.
Let us notice that for problems with a large sparse matrix, the use of (1.9) could be combined with a data-sparse procedure to approximate the resolvent operators, as proposed in [4,5]. We did not test this approach in the present paper, but follow the Krylov subspace approximation mentioned in paragraph (ii).

By using (1.9) we are in fact computing an approximation to the numerical solution of (1.1) provided by an exponential method. Thus, the global error after applying (1.9) to the time integration of (1.1) can be split into the error in the time integration by the pure exponential method and the deviation from the numerical solution introduced by the approximation (1.9) of the operators φ(hA). The error in the time integration for the exponential integrators considered in the present paper is analyzed in [2] and [11] (see Section 2.1), and the quadrature error is analyzed in [16] (see Section 2.2, Theorem 2.1). In order to visualize the effect of this approximation, we show in Section 5 the performance of our implementation for several problems with known exact solution and moderate size after the spatial discretization. In the error plots provided we can observe that the error coincides with the expected error for the exact exponential integrators up to high accuracy for quite moderate values of K in (1.9), i.e., the error induced by the quadrature (1.9) is negligible compared with the error in the time integration.
Finally, we notice that the matrix exponential e^{tA}, and also certain rational approximations to it originating from Runge–Kutta schemes, have already been successfully approximated by using this approach [13,15,16].
The paper is organized as follows. In Section 2.1 we briefly review the class of multistep exponential methods proposed in [2] and the exponential Runge–Kutta methods in [11]. Section 2.2 is devoted to the inversion method of sectorial Laplace transforms in [16]. In Section 3 we deduce a representation of the operators required in the implementation of these exponential integrators in terms of suitable Laplace transforms. We consider with some detail in Section 4 the evaluation of the associated mappings at scalar level and present some numerical results. We test our algorithm at matrix level with several academic examples present in the related literature in Section 5. In Section 6 we apply our method, also at matrix level, to examples governed by a full matrix. Finally, in Section 7 we test our approach at vector level inside the MATLAB package EXP4.
María López-Fernández

The exponential Runge–Kutta methods of [11] applied to (1.1) take the form

U_{ni} = e^{c_i hA} u_n + h Σ_{j=1}^{i−1} a_{ij}(hA) f(t_n + c_j h, U_{nj}),   1 ≤ i ≤ s,
u_{n+1} = e^{hA} u_n + h Σ_{i=1}^{s} b_i(hA) f(t_n + c_i h, U_{ni}),    (2.1)
with c_1 = 0 (U_{n1} = u_n). In (2.1), the coefficients b_i(θ) and a_{ij}(θ) are linear combinations of φ_k(θ) and φ_k(c_l θ), with

φ_k(θ) = ∫_0^1 e^{(1−τ)θ} τ^{k−1}/(k−1)! dτ,   θ ∈ ℂ, k ≥ 1.    (2.2)

Setting φ_0(θ) = e^θ, all the required mappings are of the form (1.4) or (1.5) with m = 1.

Fig. 2.1 Action of T in (2.6) on the real axis
For a locally integrable mapping f : (0, ∞) → X, bounded by

‖f(t)‖ ≤ C t^{ν−1} e^{σt},   for some σ ∈ ℝ, ν > 0,    (2.3)

we denote its Laplace transform by

F(z) = L[f](z) = ∫_0^∞ e^{−tz} f(t) dt,   Re z > σ.    (2.4)
When F satisfies (1.8), the method in [16] allows one to approximate the values of f from a few evaluations of F. This is achieved by means of a suitable quadrature rule to discretize the inversion formula

f(t) = (1/2πi) ∫_Γ e^{zt} F(z) dz,    (2.5)

where Γ is a contour in the complex plane, running from −i∞ to +i∞ and lying in the analyticity region of F. Due to (1.8), Γ can be taken so that it begins and ends in the half-plane Re z < 0. Following [16], in (2.5) we choose Γ as the left branch of a hyperbola parameterized by

x ∈ ℝ :  x ↦ T(x) = μ(1 − sin(α + ix)) + σ,    (2.6)

where μ > 0 is a scale parameter, σ is the shift in (1.8), and 0 < α < π/2 − δ. Thus, Γ is the left branch of the hyperbola with center at (μ + σ, 0), foci at (σ, 0) and (2μ + σ, 0), and with asymptotes forming angles ±(π/2 + α) with the real axis, so that Γ remains in the sector of analyticity of F, |arg(z − σ)| < π − δ. In Figure 2.1 we show the action of the conformal mapping T on the real axis.
After parameterizing (2.5), the function f is approximated by applying the truncated trapezoidal rule to the resulting integral along the real axis, i.e.,

f(t) = (1/2πi) ∫_Γ e^{tz} F(z) dz ≈ Σ_{ℓ=−K}^{K} w_ℓ e^{t z_ℓ} F(z_ℓ),    (2.7)
with weights w_ℓ and nodes z_ℓ given by

w_ℓ = −(τ/2πi) T′(ℓτ),   z_ℓ = T(ℓτ),   −K ≤ ℓ ≤ K,    (2.8)

and τ > 0 a suitable step length parameter. We notice that the minus sign in the formula for the weights comes from setting the proper orientation in the parametrization of Γ. In case of symmetry, the sum in (2.7) can be halved to

f(t) ≈ Re [ Σ_{ℓ=0}^{K} w̃_ℓ e^{t z_ℓ} F(z_ℓ) ],    (2.9)

with w̃_0 = w_0 and w̃_ℓ = 2w_ℓ, 1 ≤ ℓ ≤ K. Following [16] (Theorem 2.1), for t ∈ [t_0, Λ t_0], Λ ≥ 1, the parameters τ and μ are chosen as

τ = a(θ)/K,   μ = 2πdK(1 − θ)/(t_0 Λ a(θ)),    (2.11)
where, for θ ∈ (0, 1), a(θ) is the mapping

a(θ) = arccosh( Λ/((1 − θ) sin α) ),    (2.12)
and θ is chosen as the minimizer of

ε e^{2πdK(1−θ)/a(θ)} + e^{−2πdK/a(θ)},   θ ∈ (0, 1),    (2.13)
for ε the precision in the evaluations of the Laplace transform F and in the elementary operations in (2.7).
Then, there exist positive constants C and c such that the error E_K(t) in the approximation (2.7) to f(t), with quadrature weights and nodes as in (2.8), is bounded by

E_K(t) ≤ C t^{ν−1} ( ε + e^{−cK} ),    (2.14)

uniformly for t ∈ [t_0, Λ t_0], where ν is the exponent in (1.8).
In case we do not have any reliable information about the errors in the computation of the matrices (zI − hA)^{−1}, we cannot use formula (2.13). In this situation, setting θ = 1 − 1/K instead of the minimizer of (2.13) is a reasonable practical choice.
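The whole inversion procedure can be sketched in a few lines of Python (names are ours; we use the parameter choices (2.11)–(2.12) with the heuristic θ = 1 − 1/K and the values α = 0.7, d = 0.6 suggested later in Section 3):

```python
import numpy as np

def hyperbola_weights_nodes(K, t0=1.0, Lam=1.0, alpha=0.7, d=0.6, sigma=0.0):
    """Weights and nodes (2.8) on the left branch of the hyperbola (2.6).

    Uses (2.11)-(2.12) with the heuristic theta = 1 - 1/K mentioned in the text."""
    theta = 1.0 - 1.0 / K
    a = np.arccosh(Lam / ((1.0 - theta) * np.sin(alpha)))         # (2.12)
    tau = a / K                                                   # step length
    mu = 2.0 * np.pi * d * K * (1.0 - theta) / (t0 * Lam * a)     # scale in (2.6)
    x = np.arange(-K, K + 1) * tau
    z = mu * (1.0 - np.sin(alpha + 1j * x)) + sigma               # nodes z_l = T(l tau)
    dT = -1j * mu * np.cos(alpha + 1j * x)                        # T'(l tau)
    w = -tau * dT / (2.0j * np.pi)                                # minus sign as in (2.8)
    return w, z

def invert_laplace(F, t, K=25):
    """Truncated trapezoidal rule (2.7) for f(t) = L^{-1}[F](t)."""
    w, z = hyperbola_weights_nodes(K, t0=t)
    return np.sum(w * np.exp(t * z) * F(z)).real

# F(z) = 1/(z + 1) is sectorial with sigma = 0 and inverse f(t) = exp(-t)
val = invert_laplace(lambda z: 1.0 / (z + 1.0), t=1.0, K=25)
print(abs(val - np.exp(-1.0)))
```

With K = 25 the error is already far below single precision for this smooth transform, in line with the bound (2.14).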
3.1 Evaluation of the mappings required by the multistep methods

For φ_j in (1.3) and j ≥ 1, we can write

φ_j(k, λ) = ∫_0^k e^{(k−τ)λ} \binom{τ}{j−1} dτ = L^{−1}[ L[f_0(·, λ)] · L[f_j] ](k),

where, for τ > 0,

f_0(τ, λ) = e^{τλ}   and   f_j(τ) = \binom{τ}{j−1}.    (3.1)
For every j ≥ 1 and z ∈ ℂ, we define

Φ_j(z, λ) = L[f_0(·, λ)](z) · L[f_j](z) = (1/(z − λ)) L[f_j](z).    (3.2)

Then, for every λ ∈ ℂ and j ≥ 1,

φ_j(k, λ) = L^{−1}[Φ_j(·, λ)](k).    (3.3)

For j = 0,

φ_0(k, λ) = e^{kλ} = L^{−1}[ 1/(· − λ) ](k),

and accordingly we set

Φ_0(z, λ) = 1/(z − λ).    (3.4)
For λ scalar, the mappings Φ_j(z, λ), 1 ≤ j ≤ 4, are given by

Φ_1(z, λ) = 1/(z(z − λ)),          Φ_2(z, λ) = 1/(z²(z − λ)),
Φ_3(z, λ) = (2 − z)/(2z³(z − λ)),   Φ_4(z, λ) = (3 − 3z + z²)/(3z⁴(z − λ)).    (3.5)
In order to evaluate φ_j(k, hA), 0 ≤ j ≤ 4, we propose to use the formulas in (3.5) with hA instead of λ and perform the inversion of the Laplace transform to approximate the original mappings at t = k. In this way, the Laplace transforms we need to invert are:

Φ_0(z, hA) = (zI − hA)^{−1},
Φ_1(z, hA) = (1/z)(zI − hA)^{−1},
Φ_2(z, hA) = (1/z²)(zI − hA)^{−1},
Φ_3(z, hA) = ((2 − z)/(2z³))(zI − hA)^{−1},
Φ_4(z, hA) = ((3 − 3z + z²)/(3z⁴))(zI − hA)^{−1}.    (3.6)
Although the formulas in (3.6) are derived just formally, we notice that they can be justified by combining the Cauchy integral formula with the inversion formula for the Laplace transform. More precisely, for suitable contours Γ_1 and Γ_2 in the complex plane, both lying in the resolvent set of A, it holds

φ_j(k, hA) = (1/2πi) ∫_{Γ_1} φ_j(k, λ) (λI − hA)^{−1} dλ
           = (1/2πi) ∫_{Γ_1} [ (1/2πi) ∫_{Γ_2} e^{kz} Φ_j(z, λ) dz ] (λI − hA)^{−1} dλ
           = (1/2πi) ∫_{Γ_2} e^{kz} [ (1/2πi) ∫_{Γ_1} Φ_j(z, λ) (λI − hA)^{−1} dλ ] dz
           = (1/2πi) ∫_{Γ_2} e^{kz} Φ_j(z, hA) dz = L^{−1}[Φ_j(·, hA)](k).    (3.7)
Due to (1.6), all the Laplace transforms Φ_j(z, hA) in (3.6) are sectorial, since they satisfy (1.8) with σ = max{0, hω}, and the same applies to the scalar mappings Φ_j(z, λ) in (3.2). We notice that the inverse Laplace transforms need to be approximated only at the fixed value t = k, which is specially favorable for the application of the inversion method (see the bound in Theorem 2.1). Then, we set Λ = 1, t_0 = k, and select the parameters μ and τ following Theorem 2.1. The selection of α and d is more heuristic, and a good choice is α ≈ (1/2)(π/2 − δ) and d slightly smaller than α.
For example, if δ = 0 in (1.8), good values are around α = 0.7 and d = 0.6. Next, we compute the quadrature weights w_ℓ and nodes z_ℓ as in (2.8) and approximate

φ_j(k, hA) ≈ Σ_{ℓ=−K}^{K} w_ℓ e^{k z_ℓ} Φ_j(z_ℓ, hA).    (3.8)

The sum in (3.8) can be halved in case of symmetry, as in (2.9).
As we already mentioned in the Introduction, the computation of all the required operators φ_j(k, hA), 0 ≤ j ≤ k, can be carried out before the time stepping of the exponential method begins. Thus, if we use the method of lines and apply the exponential method to some spatial discretization of (1.1), only the matrix–vector products in (1.2) need to be computed at every time step.
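At matrix level, the quadrature (3.8) is only a short loop over the nodes. A minimal Python sketch (our names; Λ = 1, t_0 = k, and the heuristic θ = 1 − 1/K) evaluates φ_0(k, hA) and φ_1(k, hA) for a small symmetric negative definite matrix, for which both mappings have closed forms to compare against:

```python
import numpy as np
from scipy.linalg import expm, solve

def weights_nodes(K, t0, alpha=0.7, d=0.6):
    """Weights and nodes (2.8) on the hyperbola (2.6); Lambda = 1, theta = 1 - 1/K."""
    theta = 1.0 - 1.0 / K
    a = np.arccosh(1.0 / ((1.0 - theta) * np.sin(alpha)))
    tau = a / K
    mu = 2.0 * np.pi * d * K * (1.0 - theta) / (t0 * a)
    x = np.arange(-K, K + 1) * tau
    z = mu * (1.0 - np.sin(alpha + 1j * x))
    w = -tau * (-1j * mu * np.cos(alpha + 1j * x)) / (2.0j * np.pi)
    return w, z

def phi_multistep(j, k, hA, K=25):
    """phi_j(k, hA) via (3.8), with the rational factors of (3.6) for j = 0, 1."""
    n = hA.shape[0]
    R = {0: lambda z: 1.0, 1: lambda z: 1.0 / z}[j]
    w, z = weights_nodes(K, t0=float(k))
    out = np.zeros((n, n), dtype=complex)
    for wl, zl in zip(w, z):
        out += wl * np.exp(k * zl) * R(zl) * np.linalg.inv(zl * np.eye(n) - hA)
    return out.real

hA = np.array([[-2.0, 1.0], [1.0, -3.0]])    # symmetric negative definite, hence sectorial
P0 = phi_multistep(0, 1, hA)                  # phi_0(1, hA) = expm(hA)
P1 = phi_multistep(1, 1, hA)                  # phi_1(1, hA) = (hA)^{-1}(expm(hA) - I)
print(np.linalg.norm(P0 - expm(hA)))
```

In a real implementation the resolvents at the nodes would be obtained by LU factorizations (or the solves (1.12)) rather than explicit inverses; the loop above is only meant to mirror formula (3.8) literally.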
3.2 Evaluation of the mappings required by the Runge–Kutta methods
For φ_j in (2.2) and j ≥ 1, we have

φ_j(θ) = ∫_0^1 e^{(1−τ)θ} τ^{j−1}/(j−1)! dτ = L^{−1}[ L[g_0(·, θ)] · L[g_j] ](1),

where, for τ > 0 and θ ∈ ℂ,

g_0(τ, θ) = e^{τθ}   and   g_j(τ) = τ^{j−1}/(j−1)!.    (3.9)

For every j ≥ 1 and z ∈ ℂ, we define

Ψ_j(z, θ) = L[g_0(·, θ)](z) · L[g_j](z) = 1/(z^j (z − θ))    (3.10)

and Ψ_0(z, θ) = (z − θ)^{−1}. Then, for every θ ∈ ℂ and j ≥ 0,

φ_j(θ) = L^{−1}[Ψ_j(·, θ)](1).    (3.11)
The same argument as in (3.7) justifies the computation of the operators φ_j(hA) and φ_j(c_l hA), j ≥ 0, 2 ≤ l ≤ s, by performing the inversion of the Laplace transforms

Ψ_j(z, γhA) = (1/z^j)(zI − γhA)^{−1},   j ≥ 0, γ = 1, c_l,    (3.12)

to approximate the original mappings at t = 1. If (1.6) holds, the Laplace transforms in (3.12) are also sectorial in the sense of (1.8), and we can use the inversion method of [16].
As in the case of the methods in (1.2), the computation of all the required operators φ_j(hA) and φ_j(c_l hA), j ≥ 0, can be carried out before the time stepping of the exponential method begins.
Remark 3.1 In general, we can always evaluate a mapping φ(hA) of the form (1.5) by using the numerical inversion of the Laplace transform, just by noticing that φ(hA) is the inverse Laplace transform at t = m of a mapping Φ(z, hA) as in (1.7),

Φ(z, hA) = R(z)(zI − hA)^{−1},

with R(z) = L[p](z) a scalar rational function of z.

The above remark implies that our algorithm can be used to implement other kinds of exponential methods, different from those in [2,11], as long as they require the evaluation of mappings of the form (1.4) and (1.5).
4 Evaluation of the scalar mappings

As we already mentioned in the Introduction, we can also apply the inversion of the Laplace transform to evaluate with accuracy the scalar mappings φ(λ) in (1.5). In this section we consider with some detail the evaluation of the mappings

g_j(m, λ) = ∫_0^m e^{(m−τ)λ} τ^{j−1} dτ,   j ≥ 1, m ∈ ℕ,    (4.1)

by means of the quadrature formula (1.10). The result provided by (1.10) does not depend on the size of λ, but the formula is only useful in principle for values of λ inside a sector around the negative real axis, |arg(λ)| ≥ π − δ. However, using that

e^{mλ} g_1(m, −λ) = g_1(m, λ),   λ ∈ ℂ,    (4.2)

and
g_{j+1}(m, λ) = ( j g_j(m, λ) − m^j )/λ,   j ≥ 1, m ∈ ℕ,    (4.3)

it is easy to see by induction that, for m ∈ ℕ and λ ∈ ℂ,

e^{mλ} g_j(m, −λ) = Σ_{ℓ=1}^{j} \binom{j−1}{ℓ−1} (−1)^{ℓ−1} m^{j−ℓ} g_ℓ(m, λ),   j ≥ 1.    (4.4)
Thus, we can compute

g_j(m, λ) = e^{mλ} L^{−1}[ G_j(·, −λ) ](m),    (4.5)

with

G_j(z, λ) = (1/(z^j(z − λ))) Σ_{ℓ=1}^{j} \binom{j−1}{ℓ−1} (ℓ−1)! (−1)^{ℓ−1} (mz)^{j−ℓ},   j ≥ 1,    (4.6)

which provides a stable formula to approximate g_j(m, λ) for λ inside a proper sector around the positive real axis and of moderate size.
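The recurrence (4.3) and the reflection identity (4.4) are easy to check numerically; a Python sketch (the helper `g` is ours) compares both sides using direct quadrature of (4.1):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import comb

def g(j, m, lam):
    """g_j(m, lam) of (4.1) by direct numerical quadrature."""
    val, _ = quad(lambda t: np.exp((m - t) * lam) * t ** (j - 1), 0.0, m)
    return val

m, lam = 2, 0.7
# recurrence (4.3): g_{j+1}(m, lam) = (j g_j(m, lam) - m^j)/lam
print(g(2, m, lam), (1 * g(1, m, lam) - m ** 1) / lam)
# reflection (4.4) for j = 3
j = 3
lhs = np.exp(m * lam) * g(j, m, -lam)
rhs = sum(comb(j - 1, l - 1) * (-1) ** (l - 1) * m ** (j - l) * g(l, m, lam)
          for l in range(1, j + 1))
print(lhs, rhs)
```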
Table 4.1 Computation of φ(λ) in (1.13) for λ ∈ [−1, 1] by using formulas (2.7) and (4.2). We show the absolute error obtained in MATLAB with K = 15 and K = 25.

λ < 0      K = 15       K = 25       |  λ > 0     K = 15       K = 25
−1         1.5050e−12   1.3323e−15   |  1         1.5050e−12   3.3307e−15
−1e−1      1.5227e−12   3.2196e−15   |  1e−1      1.5227e−12   3.5527e−15
−1e−2      1.4243e−12   4.4409e−15   |  1e−2      1.4243e−12   4.6629e−15
−1e−3      1.3750e−12   1.3323e−15   |  1e−3      1.3750e−12   1.3323e−15
−1e−4      1.3738e−12   1.7764e−15   |  1e−4      1.3738e−12   1.7764e−15
−1e−5      1.3747e−12   3.6637e−15   |  1e−5      1.3747e−12   3.7748e−15
−1e−6      1.3748e−12   3.6637e−15   |  1e−6      1.3748e−12   3.7748e−15
−1e−7      1.3695e−12   1.9984e−15   |  1e−7      1.3695e−12   1.9984e−15
−1e−8      1.3717e−12   1.1102e−16   |  1e−8      1.3717e−12   2.2204e−16
−1e−9      1.3715e−12   1.1102e−16   |  1e−9      1.3715e−12   0
−1e−10     1.3711e−12   0            |  1e−10     1.3711e−12   0
−1e−11     1.3711e−12   0            |  1e−11     1.3711e−12   0
−1e−12     1.3715e−12   1.1102e−16   |  1e−12     1.3715e−12   0
−1e−13     1.3712e−12   0            |  1e−13     1.3712e−12   2.2204e−16
In Table 4.1 we show the error obtained in the evaluation of φ(λ) = g_1(1, λ) in (1.13) for different values of λ in the interval [−1, 1]. For λ < 0, we applied the inversion formula (2.7) with t = 1 and

F(z) = G_1(z, λ) = 1/(z(z − λ)),    (4.7)

which, for these values of λ, fulfills (1.8) with σ = 0, δ = 0, and ν = 2. We assumed that the evaluations of G_1 can be carried out in MATLAB up to machine accuracy, and thus we set ε = 2.2204 · 10^{−16}. Then, we computed the quadrature weights and nodes in (2.8) following (2.11)–(2.13) with Λ = 1. Setting α = 0.7 and d = 0.6, we obtained θ = 0.693 for K = 15 and θ = 0.793 for K = 25. In Table 4.1 we can see that K = 25 is enough to attain almost the machine accuracy of MATLAB in the evaluations of φ(λ). For positive values of λ, we used (4.2) with m = 1.
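A Python sketch of this computation (our names; we fix θ by the heuristic 1 − 1/K instead of the optimization (2.13) used for the table) reproduces the qualitative behavior of Table 4.1, evaluating φ(λ) stably for λ of either sign:

```python
import numpy as np

def weights_nodes(K, alpha=0.7, d=0.6):
    theta = 1.0 - 1.0 / K
    a = np.arccosh(1.0 / ((1.0 - theta) * np.sin(alpha)))   # (2.12), Lambda = 1
    tau = a / K
    mu = 2.0 * np.pi * d * K * (1.0 - theta) / a             # (2.11), t0 = 1
    x = np.arange(-K, K + 1) * tau
    z = mu * (1.0 - np.sin(alpha + 1j * x))
    w = -tau * (-1j * mu * np.cos(alpha + 1j * x)) / (2.0j * np.pi)
    return w, z

def phi(lam, K=25):
    """phi(lam) = (e^lam - 1)/lam via (2.7) with F = G_1 of (4.7); (4.2) for Re(lam) > 0."""
    if lam.real > 0:                 # reflect to the stable side, cf. (4.2) with m = 1
        return np.exp(lam) * phi(-lam, K)
    w, z = weights_nodes(K)
    return np.sum(w * np.exp(z) / (z * (z - lam))).real

for lam in (-1.0, -1e-8, 1e-8, 1.0):
    print(lam, phi(lam), np.expm1(lam) / lam)
```

Note that for tiny λ the quadrature has no cancellation problem, since the nodes z_ℓ stay at distance O(1) from both singularities of (4.7).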
5 Test with some academic examples

In this section we test our quadratures at matrix level with some of the examples presented in [2] and [11]. For each of these examples the exact solution is known and the convergence of the exponential methods considered is well understood.
5.1 Test for the multistep exponential methods

Our first example is the problem considered in [2],

u_t(x, t) = u_{xx}(x, t) + ( ∫_0^1 u(s, t) ds ) u_x(x, t) + g(x, t),    (5.1)

for x ∈ [0, 1] and t ∈ [0, 1], subject to homogeneous Dirichlet boundary conditions and with g(x, t) such that the exact solution to (5.1) is u(x, t) = x(1 − x)e^t.
Fig. 5.1 Error of the exponential multistep methods (1.2) applied to (5.1), for k = 1, 2, 3, and 4. Left: with K = 15 quadrature nodes on the hyperbolas. Right: with K = 25.
The spatial discretization of (5.1) is carried out by using standard finite differences with J = 512 spatial nodes, centered for the approximation of u_x. The nonlocal term is approximated by means of the composite Simpson formula.

To integrate in time the semidiscrete problem we use (1.2) with k = 1, 2, 3, and 4, so that A is the (J − 1) × (J − 1) matrix

A = J² tridiag(1, −2, 1).
We approximate the matrices φ_j(k, hA), 0 ≤ j ≤ k, required in (1.2) by applying the quadrature rule (3.8). To avoid an extra source of error, the initial values u_1, . . . , u_{k−1} are computed from the exact solution. In a less academic example, these values can be computed by means of a one-step method of sufficiently high order or by performing the fixed point iteration proposed in [2].
In Figure 5.1 we show the error versus the stepsize at t = 1, measured in a discrete version of the norm ‖·‖_{1/2}, for K = 15 and K = 25 in (3.8). We see that for K = 25 full precision is achieved for all the methods implemented; cf. [2, Section 6]. In Figure 5.1 we also show lines of slope 1, 2, 3, and 4, to visualize the order of convergence.
5.2 Test for the exponential Runge–Kutta methods

Secondly, we consider the example from [11],

u_t(x, t) = u_{xx}(x, t) + 1/(1 + u(x, t)²) + g(x, t),    (5.2)

for x ∈ [0, 1] and t ∈ [0, 1], subject to homogeneous Dirichlet boundary conditions and with g(x, t) such that the exact solution to (5.2) is again u(x, t) = x(1 − x)e^t.
We discretize (5.2) in space by standard finite differences with J = 200 grid points and apply for the time integration some of the methods proposed in [11]. More precisely, following the notation

φ_i = φ_i(hA),   and   φ_{i,j} = φ_{i,j}(hA) = φ_i(c_j hA),   2 ≤ j ≤ s,    (5.3)
we implemented (2.1) with s = 1, the second-order method

  0   |
  1/2 | (1/2)φ_{1,2}
      |  0         φ_1                                           (5.4)

the third-order method

  0   |
  1/3 | (1/3)φ_{1,2}
  2/3 | (2/3)φ_{1,3} − (4/3)φ_{2,3}    (4/3)φ_{2,3}
      |  φ_1 − (3/2)φ_2    0    (3/2)φ_2                         (5.5)

and the fourth-order one

  0   |
  1/2 | (1/2)φ_{1,2}
  1/2 | (1/2)φ_{1,3} − φ_{2,3}          φ_{2,3}
  1   | φ_{1,4} − 2φ_{2,4}              φ_{2,4}    φ_{2,4}
  1/2 | (1/2)φ_{1,5} − 2a_{5,2} − a_{5,4}    a_{5,2}    a_{5,2}    a_{5,4}
      | φ_1 − 3φ_2 + 4φ_3    0    0    −φ_2 + 4φ_3    4φ_2 − 8φ_3    (5.6)

with

a_{5,2} = (1/2)φ_{2,5} − φ_{3,4} + (1/4)φ_{2,4} − (1/2)φ_{3,5}   and   a_{5,4} = (1/4)φ_{2,5} − a_{5,2}.
For the implementation of (5.4) we need to invert four different Laplace transforms of the form (3.12), to approximate

φ_0((h/2)A),   φ_0(hA),   φ_1((h/2)A),   and   φ_1(hA).

The implementation of both (5.5) and (5.6) requires the inversion of eight Laplace transforms.
In Figure 5.2 we show the error at t = 1 versus the stepsize, measured in the maximum norm. The expected order of convergence for this example is k for the k-th order method. In order to check our algorithm, we added lines with the corresponding slopes in Figure 5.2. We can see that also for this kind of methods we attain full precision for K = 25 in (3.8).

Let us notice that in the two examples presented so far the matrix operator A can be easily diagonalized by means of fast Fourier techniques. It also turns out that all the eigenvalues are well separated from 0, so that a direct evaluation of the mappings φ at scalar level is not a problem in these examples either. Our aim so far was to test the performance of the quadratures explained in Section 2.2 in the context of exponential integrators. In the following sections we address more challenging problems.
Fig. 5.2 Error of the Runge–Kutta methods (2.1) with s = 1, (5.4), (5.5), and (5.6), applied to (5.2). Left: with K = 15 quadrature nodes in (3.8). Right: with K = 25.
6 Examples with a full matrix

6.1 The 1D Allen–Cahn equation

The purpose of this section is to compare the performance of the quadratures proposed in the present paper with the similar approach presented in [12]. Thus, we consider the Allen–Cahn equation in 1D,

u_t = ε u_{xx} + u − u³,   x ∈ [−1, 1], t > 0,
u(x, 0) = 0.53x + 0.47 sin(−1.5πx),   u(−1, t) = −1,   u(1, t) = 1.    (6.1)
For ε = 0.01 and t ∈ [0, 70], we apply the fourth-order method (5.6) to a spatial semidiscretization of (6.1) by an 80-point Chebyshev spectral method. We compare the results with those obtained by using the code provided in [12] for this problem, which also implements an exponential time-differencing Runge–Kutta method of order four, called ETDRK4 [3]. We compute the error against a reference solution obtained with 45 quadrature nodes and time step 3 · 10^{−5}, half the smallest time step used to produce Figures 6.1 and 6.2.
As we can see in Figures 6.1 and 6.2, the quadratures used in [12] do not converge for fewer than 20 quadrature nodes and do not provide results similar to those obtained with our approach with fewer than 30 quadrature nodes. We consider that these results are useful to compare the performance of the different quadratures used to evaluate the exponential-type mappings. They also show that method (5.6) with our implementation performs better for (6.1) than ETDRK4 with the implementation of [12].

In order to have a reference, we also added in Figure 6.2 the results obtained with the MATLAB solver ode15s, a variable step size integrator for stiff problems. It turns out that for the highest tolerance requirements, our implementation of (5.6) performs similarly to ode15s, even though it is an explicit and fixed step size method.
Fig. 6.1 Relative error versus step size for (6.1) after applying our implementation of (5.6) (ExpRK) and the code of [12] (ETDKT), with the same number of quadrature nodes. In clockwise sense: K = 15, K = 20, K = 25, and K = 35.
6.2 Examples with a Mass Matrix in 2D

Our approach allows one to apply exponential methods to problems with a mass matrix arising after a spatial discretization of a parabolic PDE by the finite element method. In this situation, the semidiscrete version of (1.1) is a system of ODEs

M U′ = S U + F(U),   with M the mass matrix and S the stiffness matrix.

Reformulating it as

U′ = M^{−1}S U + M^{−1}F(U),

we can use formula (1.9) for A = M^{−1}S. Then, we can evaluate

φ_0(hA) = exp(hA), φ_1(hA), . . .

by linear combinations of

(z_ℓ I − hA)^{−1} = (z_ℓ M − hS)^{−1} M,

for z_ℓ on a hyperbola Γ, ℓ = −K, . . . , K.
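The identity (z_ℓ M − hS)^{−1} M = (z_ℓ I − hA)^{−1}, which is what lets us avoid forming M^{−1} explicitly, can be checked on a small made-up mass/stiffness pair (Python sketch; the matrices are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
n, h, z = 5, 0.1, 2.0 + 1.0j                 # a sample node z_l on the contour
B = rng.standard_normal((n, n))
M = B @ B.T + n * np.eye(n)                  # SPD mass-like matrix
S = -(B.T @ B) - np.eye(n)                   # stiffness-like matrix (negative definite)

A = np.linalg.solve(M, S)                    # A = M^{-1} S, formed only for the check
lhs = np.linalg.inv(z * np.eye(n) - h * A)
rhs = np.linalg.solve(z * M - h * S, M)      # (zM - hS)^{-1} M, no M^{-1} needed
print(np.linalg.norm(lhs - rhs))
```

In the finite element setting, each node thus costs one solve with the sparse matrix z_ℓ M − hS, exactly as for an implicit time step.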
Fig. 6.2 Relative error versus CPU time for (6.1) after applying our implementation of (5.6) (ExpRK) and the code of [12] (ETDKT), with different numbers of quadrature nodes. We also show the results with ode15s. In clockwise sense: K = 15, K = 20, K = 25, and K = 35.
As a test of the computational efficiency of this approach, let us consider in the first place the inhomogeneous heat equation on the unit square Ω = (0, 1)²,

u_t(t, x) = Δu(t, x) + f(x),   for x ∈ Ω, t ≥ 0,
Fig. 6.4 Relative error versus CPU time for (6.3) after applying our implementation of the exponential Runge–Kutta methods described in Section 2.1 with K = 20, for parameter values 0.1 (left) and 1 (right). We also show the results with ode15s.
7 Inside the package EXP4

In this section we apply our quadratures at vector level (see the Introduction), as a tool inside the MATLAB package EXP4. EXP4 is an implementation of the adaptive exponential integrator developed in [9]. This method is an embedded method based on an exponential Runge–Kutta method of 7 stages, defined for general IVPs of the form

y′ = f(y),   y(0) = y_0.    (7.1)

For A the Jacobian of f, EXP4 requires, at every time step, the evaluation of φ((1/3)hA)v, φ((2/3)hA)v, and φ(hA)v, where φ is the mapping in (1.13) (φ_1 in the notation of Section 2.1), h is the step size, and v a given vector, both varying along the time stepping process.
For problems with a large sparse matrix A, EXP4 computes the products φ(A)v by means of a Krylov subspace approximation (1.11). In this way, the problem is reduced to computing φ_1(H_m), for H_m a much smaller matrix than A. This last computation is performed by a sophisticated routine called phim. There are several internal functions inside phim, where several cases for H_m are distinguished, taking into account its size and whether it can easily be reduced to a diagonal matrix. phim also uses a recurrence relation developed in [9] to efficiently evaluate φ_1(jH_m), j ∈ ℕ, from φ_1(H_m). It finally evaluates φ_1(H_m) by using a Padé approximant at matrix and vector level (internal functions phim pde and phis pade) or at scalar level (internal function phis), in case H_m is diagonalized. This routine phim is highly optimized for the implementation of this particular exponential method.
Since H_m and A have the same spectral properties, in case A fulfills (1.6) so does H_m. Thus, for this type of problems we can evaluate the products φ_1(jH_m)v by using the approach of the present paper, i.e., by linear combinations of the solutions to the linear systems (1.12). To do this, we replaced the routine phim in EXP4 by a new routine phim cont. Apart from the simplicity in the programming (see Figure 7.2), the recurrences to compute φ_1(jH_m)v from φ_1(H_m)v are no longer needed since, by Theorem 2.1, we can use the same solutions to (1.12) to approximate φ_1(jH_m)v
for different j ∈ [1, 3]. This is due to the fact that, for φ_k in (2.2), we have

φ_k(jθ) = (1/j^k) ∫_0^j e^{(j−s)θ} s^{k−1}/(k−1)! ds = (1/j^k) L^{−1}[ 1/(z^k(z − θ)) ](j).
Thus, we think that phim can be easily extended to the implementation of other
adaptive exponential methods that use Krylov techniques and require the evaluation
of other exponential type mappings, different from
1
. As we can see in the following
test problem (Figure 7.1), the computational cost does not grow at all with respect to
the original implementation by means of the routine phim. For the rest of the package,
we only need to set in the initializing routine, exp4 initialize, the number of
quadrature nodes in terms of the tolerance options. In this way, we set
    tol > 10^{-5}:   K_1 = 8,   K_3 = 12,
    tol > 10^{-7}:   K_1 = 12,  K_3 = 16,
    tol > 10^{-9}:   K_1 = 14,  K_3 = 18,
    tol > 10^{-11}:  K_1 = 18,  K_3 = 22,
    tol > 10^{-13}:  K_1 = 20,  K_3 = 24,
    else:            K_1 = 24,  K_3 = 28,
where $K_1$ is the number of nodes used to compute $\varphi(H_m)$ at a single value of
the argument and $K_3$ to compute $\varphi(jH_m)$ for all $j \in [1, 3]$. We also compute
the quadrature nodes and weights following (2.13) before the time stepping begins,
since they depend neither on $H_m$ nor on $v$.
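At scalar level, the kind of quadrature being precomputed can be sketched as follows. This is only an illustration with a hand-picked parabolic contour in the spirit of [24], not the optimized nodes of (2.13); the parameters mu, hstep and K are ad hoc choices:

```python
import numpy as np

def phi1_contour(lam, K=40, mu=4.0, hstep=0.075):
    """Approximate phi_1(lam) = (e^lam - 1)/lam for Re(lam) <= 0 by the
    trapezoidal rule applied to the inverse Laplace transform
        phi_1(lam) = 1/(2 pi i) * int_Gamma e^z / (z (z - lam)) dz
    on the left-winding parabola z(x) = mu (1 + i x)^2, x = k*hstep."""
    x = hstep * np.arange(-K, K + 1)
    z = mu * (1.0 + 1j * x)**2          # contour nodes
    dz = 2j * mu * (1.0 + 1j * x)       # z'(x)
    vals = np.exp(z) / (z * (z - lam)) * dz
    return float((hstep / (2j * np.pi) * vals.sum()).real)
```

Each node contributes one resolvent evaluation $1/(z_k - \lambda)$, the scalar analogue of the linear systems (1.12).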
In order to test the computational effect of replacing phim by phim_cont, we
consider the homogeneous heat equation on the unit square $\Omega = (0, 1)^2$, which is one
of the examples provided by the original package EXP4,
\[
u_t(t, x, y) = \Delta u(t, x, y), \qquad (x, y) \in \Omega,\; t \ge 0,
\]
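For reference, a standard 5-point finite-difference discretization of this example can be sketched as follows (the grid resolution here is an arbitrary choice; the discretization actually shipped with EXP4 may differ):

```python
import numpy as np
import scipy.sparse as sp

def laplacian_2d(n):
    """Sparse 5-point Laplacian on (0,1)^2 with homogeneous Dirichlet
    boundary conditions and n interior grid points per direction."""
    h = 1.0 / (n + 1)
    T = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
    I = sp.identity(n)
    return (sp.kron(I, T) + sp.kron(T, I)) / h**2
```

The resulting matrix is symmetric with negative eigenvalues, hence sectorial, so it fits the assumption (1.6).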
[Figure 7.1: two panels plotting relative error versus CPU time on logarithmic axes, comparing exp4 and exp4cont.]
Fig. 7.1 Relative error versus CPU time for EXP4 after applying our evaluation of the mappings
(exp4cont) and with the original Padé-based routine (exp4), for = 0.2 (left) and = 0.02 (right).
and with the evaluation of the prototypical mapping in (1.13). We also applied
the algorithm to more challenging examples governed by a full matrix operator, including
finite element semidiscretizations of parabolic PDEs. In these examples our
approach is shown to be better than the one in [12]. Finally, we successfully applied
our method at the vector level, as an auxiliary tool inside the MATLAB package EXP4
[9].
Acknowledgements The author is grateful to Christian Lubich, Achim Schädle, César Palencia, and Enrique Zuazua for helpful discussions during the preparation of the manuscript.
References
1. H. Berland, B. Skaflestad, W. M. Wright, EXPINT – A MATLAB package for exponential integrators, ACM Transactions on Mathematical Software, 33, Article 4 (2007).
2. M. P. Calvo, C. Palencia, A class of explicit multistep exponential integrators for semilinear problems, Numer. Math., 102, 367–381 (2006).
3. S. M. Cox, P. C. Matthews, Exponential time differencing for stiff systems, J. Comput. Phys., 176, 430–455 (2002).
4. I. P. Gavrilyuk, W. Hackbusch, B. N. Khoromskij, Data-sparse approximation to the operator-valued functions of elliptic operators, Math. Comp., 73, 1297–1324 (2004).
5. I. P. Gavrilyuk, W. Hackbusch, B. N. Khoromskij, Data-sparse approximation to a class of operator-valued functions, Math. Comp., 74, 681–708 (2005).
6. D. Henry, Geometric theory of semilinear parabolic equations. Lecture Notes in Mathematics 840, Springer, Berlin (1981).
7. M. Hochbruck, C. Lubich, On Krylov subspace approximations to the matrix exponential operator, SIAM J. Numer. Anal., 34, 1911–1925 (1997).
8. M. Hochbruck, C. Lubich, Error analysis of Krylov methods in a nutshell, SIAM J. Sci. Comput., 19, 695–701 (1998).
9. M. Hochbruck, C. Lubich, H. Selhofer, Exponential integrators for large systems of differential equations, SIAM J. Sci. Comput., 19, 1552–1574 (1998).
10. M. Hochbruck, A. Ostermann, Exponential Runge–Kutta methods for parabolic problems, Appl. Numer. Math., 53, 323–339 (2005).
11. M. Hochbruck, A. Ostermann, Explicit exponential Runge–Kutta methods for semilinear parabolic problems, SIAM J. Numer. Anal., 43, 1069–1090 (2005).
12. A. K. Kassam, L. N. Trefethen, Fourth-order time-stepping for stiff PDEs, SIAM J. Sci. Comput., 26, 1214–1233 (2005).
function out = phim_cont(A,h,fac_t,v)
%
% function out = phim_cont(A,h,fac_t,v)
%
% Matrix evaluation of the function
%    phi(z) = (exp(z)-1)/z
% for A sectorial. It uses the inversion of the Laplace transform.
%
% Called with arguments:      Return:
%   A                         the matrix phi(A)
%   A, h                      the matrix phi(h*A)
%   A, h, fac_t               the matrix out(:,j) = phi(j*h*A)*e1, j = 1..fac_t
%   A, h, fac_t, v            the matrix out(:,j) = phi(j*h*A)*v,  j = 1..fac_t
%
% The global variable PARACONT is a structure set in the initializing routine.
if nargin <= 2
  if nargin == 1
    h = 1;
  end
  out = phim_full_cont(A,h,1); % compute phi(h*A) and return the full matrix
else
  if nargin == 3 % compute phi(j*h*A)*e1 for j = 1..fac_t <= 3
    nz = size(A,1)-1;
    e1 = [1 ; zeros(nz,1)];
    out = phivec(A,h,fac_t,e1);
  else
    out = phivec(A,h,fac_t,v);
  end
end
%
% Internal functions
%
function y = phivec(A,h,fac_t,v)
%
% Compute the matrix y(:,j) = phi(j*h*A)*v, j = 1,...,fac_t.
%
global PARACONT
dim = size(A,1); I = speye(dim);
if fac_t == 1
  K = PARACONT.K;  nodes = PARACONT.nodes;  weights = PARACONT.weights;
else
  K = PARACONT.Kr; nodes = PARACONT.nodesr; weights = PARACONT.weightsr;
end
y = zeros(dim,fac_t);
for j = 1:K+1
  z = nodes(j);
  LT = (z*I - h*A)\v; % one resolvent solve, reused for every value of fac_t
  for ll = 1:fac_t
    y(:,ll) = y(:,ll) + weights(j)*exp(z*ll)*LT/(ll*z);
  end
end
y = real(y);

function Phi = phim_full_cont(A,h,k)
%
% Compute the full matrix phi_k(h*A).
% For k = 0 it is the matrix exponential.
%
global PARACONT
dim = size(A,1); I = speye(dim);
K = PARACONT.K; nodes = PARACONT.nodes; weights = PARACONT.weights;
Phi = zeros(size(A));
for j = 1:K+1
  z = nodes(j);
  zIA = (z*I - h*A)\I;
  Phi = Phi + weights(j)*exp(z)*zIA/z^k;
end
Phi = real(Phi);
Fig. 7.2 MATLAB code to evaluate $\varphi_1(jhA)v$ for $j = 1, \ldots, fac\_t$.
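A compact Python analogue of the inner routine phivec above, showing how one set of resolvent solves is shared by all values of $j$ (the parabolic nodes and weights here are illustrative stand-ins for those of (2.13)):

```python
import numpy as np

def parabola_rule(K=40, mu=4.0, hstep=0.075):
    """Illustrative nodes z_k and weights w_k on the left-winding parabola
    z(x) = mu (1 + i x)^2, discretized by the trapezoidal rule."""
    x = hstep * np.arange(-K, K + 1)
    z = mu * (1.0 + 1j * x)**2
    w = hstep / (2j * np.pi) * 2j * mu * (1.0 + 1j * x)
    return z, w

def phi1_multi(A, h, v, fac_t=3):
    """y[:, j-1] ~= phi_1(j*h*A) @ v for j = 1..fac_t; each node costs one
    linear solve (z I - h A) x = v, reused for all j, as in phivec."""
    nodes, weights = parabola_rule()
    n = A.shape[0]
    I = np.eye(n)
    y = np.zeros((n, fac_t), dtype=complex)
    for z, w in zip(nodes, weights):
        lt = np.linalg.solve(z * I - h * A, v.astype(complex))  # one solve
        for j in range(1, fac_t + 1):
            y[:, j - 1] += w * np.exp(z * j) * lt / (j * z)
    return y.real
```

For a diagonal test matrix this reproduces $\varphi_1(jh\lambda) = (e^{jh\lambda}-1)/(jh\lambda)$ entrywise.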
13. M. López-Fernández, C. Lubich, C. Palencia, A. Schädle, Fast Runge–Kutta approximation of inhomogeneous parabolic equations, Numer. Math., 102, 277–291 (2005).
14. M. López-Fernández, C. Lubich, A. Schädle, Adaptive, fast and oblivious convolution in evolution equations with memory, SIAM J. Sci. Comput., 30, 1015–1037 (2008).
15. M. López-Fernández, C. Palencia, On the numerical inversion of the Laplace transform of certain holomorphic mappings, Appl. Numer. Math., 51, 289–303 (2004).
16. M. López-Fernández, C. Palencia, A. Schädle, A spectral order method for inverting sectorial Laplace transforms, SIAM J. Numer. Anal., 44, 1332–1350 (2006).
17. W. McLean, V. Thomée, Time discretization of an evolution equation via Laplace transforms, IMA J. Numer. Anal., 24, 439–463 (2004).
18. A. Schädle, M. López-Fernández, C. Lubich, Fast and oblivious convolution quadrature, SIAM J. Sci. Comput., 28, 421–438 (2006).
19. D. Sheen, I. H. Sloan, V. Thomée, A parallel method for time discretization of parabolic equations based on Laplace transformation and quadrature, Math. Comp., 69, 177–195 (2000).
20. F. Stenger, Approximations via Whittaker's Cardinal Function, J. Approx. Theory, 17, 222–240 (1976).
21. F. Stenger, Numerical methods based on Whittaker Cardinal, or sinc Functions, SIAM Rev., 23, 165–224 (1981).
22. A. Talbot, The accurate numerical inversion of Laplace transforms, J. Inst. Math. Appl., 23, 97–120 (1979).
23. J. A. C. Weideman, Optimizing Talbot's contours for the inversion of the Laplace transform, SIAM J. Numer. Anal., 44, 2342–2362 (2006).
24. J. A. C. Weideman, L. N. Trefethen, Parabolic and hyperbolic contours for computing the Bromwich integral, Math. Comp., 76, 1341–1356 (2007).