
GLOBALLY CONVERGENT ALGORITHM FOR AN INVERSE PROBLEM

IN ELLIPTIC PDE MODELS


Mauricio Carrillo Oliveros
Juan Alfredo Gómez Fernández
Universidad de la Frontera, Temuco, Chile
Andrés Fraguela Collar
Universidad Autónoma de Puebla, México
Abstract
In this paper we study the convergence properties of an algorithm for the estimation of the coefficient $\sigma(x,y)$ in an elliptic partial differential equation, defined in a circular domain with Neumann-type boundary conditions. The equation describes the behavior of the electrostatic potential in a medium with conductivity given by the function $\sigma(x,y)$. We suppose that each time a current $\psi_i$ is applied to the boundary of the circle (Neumann data), it is possible to measure the corresponding potential $\phi_i$ (Dirichlet data). The inverse problem to solve is to determine the function $\sigma(x,y)$ from a finite number of Cauchy pair measurements $(\phi_i, \psi_i)$, $i = 1, \dots, N$. The problem is formulated as a least squares problem, and the proposed solution method attempts to solve the continuous problem using descent iterations in its corresponding finite element approximations. Wolfe's conditions are used to ensure the global convergence of the optimization algorithm for the continuous problem. Although exact data are assumed, errors in data and regularization methods shall be considered in a future work.
keywords: Inverse problem, elliptic equations, least squares method, finite element discretization, global convergence algorithm.
1. INTRODUCTION
Suppose pairs of functions $(\phi_i, \psi_i)$ are given, representing measurements of potentials and currents on the boundary $\partial\Omega_0$ of a circular domain $\Omega_0 : x^2 + y^2 < R^2$, where $\phi_i \in H^{1/2}(\partial\Omega_0)$ and $\psi_i \in L^2(\partial\Omega_0)$.
The potential functions $u_i(x,y)$ are supposed to satisfy the following elliptic problem with Neumann boundary conditions:

$$\operatorname{div}(\sigma \nabla u_i) = 0 \ \text{ on } \Omega_0, \qquad \sigma\,\frac{\partial u_i}{\partial n}\Big|_{\partial\Omega_0} = \psi_i, \tag{1.1}$$

where $i = 1, \dots, N$.
We assume that $\sigma \in L^\infty(\Omega_0)$ satisfies the inequality

$$\sigma(x,y) \ge K > 0, \ \text{ a.e. on } \Omega_0, \tag{1.2}$$

that $\psi_i \in L^2(\partial\Omega_0)$, and that the compatibility condition

$$\int_{\partial\Omega_0} \psi_i \, ds = 0 \tag{1.2a}$$

holds.
The variational formulation of problem (1.1) follows:

Find $u_i \in V$, such that: (1.3a)

$$a(u_i, v) = \int_{\Omega_0} \sigma \nabla u_i \cdot \nabla v \, dx = F_i(v) = \int_{\partial\Omega_0} \psi_i v \, ds, \quad \forall v \in V, \tag{1.3b}$$

$i = 1, \dots, N$, where $V$ is defined by the subspace:

$$V := \Big\{ u \in H^1(\Omega_0) : \int_{\partial\Omega_0} u \, ds = 0 \Big\}, \tag{1.4}$$

provided with the same inner product as $H^1(\Omega_0)$.
The integral condition (1.4) is needed to ensure the existence and uniqueness of the solution of (1.3a)-(1.3b). In fact, it can be shown that the bilinear form $a(\cdot,\cdot)$ is coercive on $V \times V$ (see Folland [6]) and, from the Lax-Milgram lemma, problem (1.3a)-(1.3b) has a unique solution $u_i \in V$. Furthermore, the inequality

$$\|u_i\|_{H^1(\Omega_0)} \le \frac{1}{\lambda} \|\psi_i\|_{L^2(\partial\Omega_0)}$$

holds, where $\lambda > 0$ is a constant depending on the coercivity constant of $a$ and the norm of the trace operator.
ADDITIONAL HYPOTHESES
As usual in finite element discretization, we assume that $\Omega_0$ is replaced by a sufficiently close convex polygonal domain $\Omega$, which shall be considered fixed in what follows. It is known (see [...]) that if the boundary of $\Omega_0$ is smooth enough, then the solution of the variational problem on $\Omega$ is a good approximation of the solution on $\Omega_0$. In our case, we have $\partial\Omega_0 \in C^\infty$.
We denote by $\Gamma := \partial\Omega$ the boundary of $\Omega$, and we assume $\phi_i, \psi_i$ to be continuous and piecewise linear functions on $\Gamma$.
2. LEAST SQUARES PROBLEM AND DISCRETIZATION
For given Cauchy pairs $(\phi_i, \psi_i)$, $i = 1, \dots, N$, consider the following least squares problem:

$$\min_{\sigma \in L^\infty(\Omega)} J(\sigma) = \frac{1}{2} \sum_{i=1}^{N} \int_{\Gamma} |u_i - \phi_i|^2 \, ds, \tag{2.1a}$$

$$\text{s.t.}\quad a(u_i, v) = \int_{\Omega} \sigma \nabla u_i \cdot \nabla v \, dx = F_i(v) = \int_{\Gamma} \psi_i v \, ds, \quad \forall v \in V, \ i = 1, \dots, N, \tag{2.1b}$$

with $u_i \in V$.
We try to determine $\sigma \in L^\infty(\Omega)$ by minimizing the residual between the potential data $\phi_i$ on $\Gamma$ and the potentials $u_i$ given by the model. Actually, the minimization should be done over the closed convex set:

$$M := \{ \sigma \in L^\infty(\Omega) : \sigma \ge K > 0, \ \text{a.e.} \} \cap Q,$$

where $Q$ is some uniqueness class of the inverse problem, which is also assumed to be closed and convex with nonempty interior in $L^\infty(\Omega)$. Our aim is to use first order optimality conditions and, although in a first approach we consider problem (2.1a)-(2.1b) without constraints on $\sigma$, we shall later return to this restriction.
Discretization of problem (2.1a)-(2.1b) is done by the finite element method. We denote by $\tau_h$ the family of regular triangulations (see [...]) of the domain $\Omega$, where $h > 0$ is the diameter of $\tau_h$. We have:

$$\bar{\Omega} = \bigcup_{k=1}^{N_{T_h}} T_k,$$

for any triangulation $\tau_h$, where $N_{T_h}$ is the number of elements and $T_k$ is the $k$-th triangle of the mesh.
We define the subspace

$$V_h = \Big\{ v_h \in C(\bar{\Omega}) \cap V : v_h|_T \in P_1(T), \ \forall T \in \tau_h \Big\} \tag{2.2}$$

as the space of solutions of the discretized problem, where $V$ is given by (1.4) and $P_1(T)$ is the set of polynomials of degree less than or equal to one defined on $T$.
We also denote by:
$N_h$ : number of nodes of $\tau_h$,
$N_h^0$ : number of nodes of $\tau_h$ in $\Omega$,
$N_h^\Gamma$ : number of nodes of $\tau_h$ on $\Gamma$,
therefore:

$$N_h = N_h^0 + N_h^\Gamma.$$
We are also given a basis $B_h = \{\varphi_1, \dots, \varphi_{N_h}\}$ of $V_h$ (see [18]), satisfying:

$$\varphi_i(x_j) = \delta_{ij} = \begin{cases} 1, & i = j, \\ 0, & i \ne j, \end{cases}$$

where $x_j$ is the $j$-th node of $\bar{\Omega}$, $1 \le j \le N_h$. Then, the discrete solution $u_h^i \in V_h$ can be written as follows:

$$u_h^i(x) = \sum_{j=1}^{N_h} u_h^i(x_j)\,\varphi_j(x), \quad x \in \Omega. \tag{2.3}$$
The symbol $\mathbb{R}^{N_{T_h}}$ denotes the space of discretized coefficients, i.e. $\sigma \in L^\infty(\Omega)$ shall be discretized by an $N_{T_h}$-vector $\sigma_h = (\sigma_1, \sigma_2, \dots, \sigma_{N_{T_h}})$, and $\sigma$ is considered constant $= \sigma_h^j$ on each triangle $T_j$ of $\tau_h$.
Alternatively, we also consider the vector $\sigma_h$ as a function of $L^\infty(\Omega)$ through its canonical extension:

$$\sigma_h(x,y) = \sigma_h^j, \quad \text{if } (x,y) \in \operatorname{int} T_j, \ j = 1, \dots, N_{T_h},$$

defined almost everywhere (a.e.) on $\Omega$. We denote by $\sigma_h(.)$ the canonical extension of $\sigma_h$ to $L^\infty(\Omega)$. Moreover, we denote by $\mathbb{R}^{N_{T_h}}(.)$ the set of canonical extensions to $L^\infty(\Omega)$ of all possible $N_{T_h}$-vectors $\sigma_h$ defined on a triangulation $\tau_h$. Hence, $\mathbb{R}^{N_{T_h}}(.)$ becomes a subspace of $L^\infty(\Omega)$ and its elements $\sigma_h(.) \in \mathbb{R}^{N_{T_h}}(.)$ shall be called $\tau_h$-piecewise functions.
The double discretization of the variational problem (1.3a)-(1.3b), in the equation and in the parameter, produces the discretized problem:

Find $u_h^i \in V_h$, such that: (2.4a)

$$a(u_h^i, v_h) = \int_{\Omega} \sigma_h \nabla u_h^i \cdot \nabla v_h \, dx = F_i(v_h) = \int_{\Gamma} \psi_i v_h \, ds, \quad \forall v_h \in V_h, \tag{2.4b}$$

for $1 \le i \le N$, with $V_h$ defined in (2.2).
This gives an $N_h \times N_h$ system of linear equations:

$$K^{(h)} \alpha^{(i)} = b^{(i)}, \tag{2.4c}$$

for each $i = 1, 2, \dots, N$, where:

$$\alpha_k^{(i)} = u_h^i(x_k), \qquad K_{jk}^{(h)} = \int_{\Omega} \sigma_h \nabla \varphi_j \cdot \nabla \varphi_k \, dx, \qquad b_j^{(i)} = \int_{\Gamma} \psi_i \varphi_j \, ds, \tag{2.4d}$$

and $1 \le j, k \le N_h$.
The discretized version of (2.1a)-(2.1b) is given by:

$$\min_{\sigma_h \in \mathbb{R}^{N_{T_h}}} J_h(\sigma_h) = \frac{1}{2} \sum_{i=1}^{N} \int_{\Gamma} |u_h^i - \phi_i|^2 \, ds, \tag{2.5a}$$

$$\text{s.t.}\quad a(u_h^i, v_h) = \int_{\Omega} \sigma_h \nabla u_h^i \cdot \nabla v_h \, dx = F_i(v_h) = \int_{\Gamma} \psi_i v_h \, ds, \quad \forall v_h \in V_h. \tag{2.5b}$$
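As an illustration, the boundary misfit in (2.5a) can be evaluated exactly for piecewise linear traces using the same P1 edge mass matrix as in the sketch above; the array names are again assumptions.

```python
import numpy as np

def misfit(nodes, boundary_edges, u_list, phi_list):
    """Evaluate J_h = 1/2 * sum_i int_Gamma |u_h^i - phi_i|^2 ds, where u_list[i]
    and phi_list[i] hold nodal values (both piecewise linear on Gamma)."""
    J = 0.0
    for u, phi in zip(u_list, phi_list):
        r = u - phi                                              # nodal residual
        for e in boundary_edges:
            L = np.linalg.norm(nodes[e[1]] - nodes[e[0]])
            Me = (L / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])  # P1 edge mass matrix
            J += 0.5 * r[e] @ Me @ r[e]                          # exact for P1 traces
    return J
```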
From finite element theory, it is well known (see Raviart [4]) that we have convergence conditions, i.e. if $\sigma \in L^\infty(\Omega)$, $\sigma \ge K$, and if $u_h^{*i}$ is the solution of (2.4a)-(2.4b) for fixed $\sigma_h = \sigma$, $\forall h > 0$, then:

$$\lim_{h \to 0} \big\| u_i - u_h^{*i} \big\|_{H^1(\Omega)} = 0, \tag{2.5}$$

for all $1 \le i \le N$, where $u_i \in V$ is the solution of (1.3a)-(1.3b) corresponding to $\sigma$. Moreover, in section 5 we shall prove that:

$$\lim_{h \to 0} \|\sigma_h - \sigma\|_{L^\infty(\Omega)} = 0 \implies \lim_{h \to 0} \|u_i - u_h^i\|_{H^1(\Omega)} = 0,$$

where $u_h^i$ is the solution of (2.4a)-(2.4b) corresponding to $\sigma_h$.
In addition, if $u_i \in H^2(\Omega)$, then:

$$\big\| u_i - u_h^{*i} \big\|_{H^1(\Omega)} \le C\,h\,\|u_i\|_{H^2(\Omega)}. \tag{2.6}$$
3. DESCENT ALGORITHM AND GLOBAL CONVERGENCE
Let $X$ be a normed space and $J : X \to \mathbb{R}$. Denote by $X'$ the topological dual space of $X$. We recall that a general descent algorithm searching for local minima of a functional $J \in C^1$ is given by the following steps:
Descent Algorithm:
1) Choose $\sigma^0 \in X$ and set $k = 0$.
2) Stopping test: If $\|\nabla J(\sigma^k)\| = 0$, take $\sigma^k$ as a solution.
3) Descent direction search: Find $d^k \in X$, such that $\langle \nabla J(\sigma^k), d^k \rangle_{X',X} < 0$.
4) Step length search: Find $\lambda_k > 0$ such that $J(\sigma^k + \lambda_k d^k) < J(\sigma^k)$.
5) Iteration: Take $\sigma^{k+1} = \sigma^k + \lambda_k d^k$, $k = k + 1$ and go to step 2).
For each $\sigma^0$ this algorithm generates a finite or infinite sequence $\{\sigma^k\}$, and we say it is globally convergent if, for all $\sigma^0 \in X$, the sequence is finite and stops in 2), or is infinite and any accumulation point $\bar\sigma$ of $\{\sigma^k\}$ satisfies $\nabla J(\bar\sigma) = 0$.
When $\dim X = n < \infty$, step 3) is implemented by looking for a direction within the non-orthogonal directions set:

$$G_\mu(\sigma^k) = \Big\{ d : \langle \nabla J(\sigma^k), d \rangle \le -\mu \, \big\|\nabla J(\sigma^k)\big\| \, \|d\| \Big\}, \quad \text{where } \mu \in (0,1). \tag{3.1}$$
To this set belong all the descent directions used in common algorithms, for example in Quasi-Newton versions (see [...]). In step 4) an inexact line search is done, i.e. for fixed $\alpha \in (0,1)$ and $\beta \in (\alpha, 1)$, a real $\lambda_k > 0$ is found such that:

w1) $J(\sigma^k + \lambda_k d^k) \le J(\sigma^k) + \alpha \lambda_k \langle \nabla J(\sigma^k), d^k \rangle$,
w2) $\langle \nabla J(\sigma^k + \lambda_k d^k), d^k \rangle \ge \beta \langle \nabla J(\sigma^k), d^k \rangle$.
(3.2)

w1) and w2) are called the $\alpha$- and $\beta$-Wolfe conditions. According to classical results (see Luenberger [...]), the resulting algorithm using (3.1) and (3.2) is globally convergent to the set:

$$\Theta = \{ x \in X : \nabla J(x) = 0 \}.$$
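The sketch below shows one standard way of finding a step length satisfying w1)-w2) by bracketing and bisection; it is a generic routine for finite-dimensional $\sigma$ with callable J and grad (assumed names), not the specific search used by the authors.

```python
import numpy as np

def wolfe_line_search(J, grad, sigma, d, alpha=1e-4, beta=0.9, lam0=1.0, max_iter=50):
    """Find lam > 0 satisfying
       w1) J(sigma + lam*d) <= J(sigma) + alpha*lam*<grad(sigma), d>
       w2) <grad(sigma + lam*d), d> >= beta*<grad(sigma), d>
    for a descent direction d, with 0 < alpha < beta < 1 as in (3.2)."""
    J0 = J(sigma)
    g0 = grad(sigma) @ d        # directional derivative, negative for a descent direction
    lo, hi = 0.0, np.inf
    lam = lam0
    for _ in range(max_iter):
        if J(sigma + lam * d) > J0 + alpha * lam * g0:
            hi = lam                                  # w1 violated: step too long
        elif grad(sigma + lam * d) @ d < beta * g0:
            lo = lam                                  # w2 violated: step too short
        else:
            return lam                                # both Wolfe conditions hold
        lam = 0.5 * (lo + hi) if np.isfinite(hi) else 2.0 * lam
    return lam
```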
The search criteria (3.1) and (3.2) can be extended to a finite number of directions. Given $\mu, \alpha, \beta \in (0,1)$ with $\alpha < \beta$, and an integer $p \ge 1$, in steps 3) and 4) we look for $p$ directions $D^k = (d_1^k, \dots, d_p^k) \in X^p$ and $p$ step lengths $\lambda^k = (\lambda_{k1}, \dots, \lambda_{kp}) \in \mathbb{R}_+^p$, such that:

$$\big\langle \nabla J(\sigma^k), \lambda^k D^k \big\rangle \le -\mu \, \big\|\nabla J(\sigma^k)\big\|_{X'} \, \|\lambda^k D^k\|_X, \tag{3.1p}$$

and

w1) $J(\sigma^{k+1}) \le J(\sigma^k) + \alpha \langle \nabla J(\sigma^k), \lambda^k D^k \rangle_{X',X}$,
w2) $\langle \nabla J(\sigma^{k+1}), \lambda^k D^k \rangle \ge \beta \langle \nabla J(\sigma^k), \lambda^k D^k \rangle_{X',X}$,
(3.2p)

where:

$$\sigma^{k+1} = \sigma^k + \lambda^k D^k, \qquad \lambda^k D^k = \sum_{j=1}^{p} \lambda_{kj}\, d_j^k.$$
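A small helper like the following (illustrative names only, Euclidean norms assumed for the finite-dimensional case) makes explicit that (3.1p)-(3.2p) are tested on the aggregated step $\lambda^k D^k = \sum_j \lambda_{kj} d_j^k$ rather than on each intermediate direction.

```python
import numpy as np

def aggregated_conditions(J, grad, sigma_k, steps, mu=0.1, alpha=1e-4, beta=0.9):
    """steps: list of pairs (lam_j, d_j). Check (3.1p) and (3.2p) on the overall
    step s = sum_j lam_j * d_j."""
    s = sum(lam * d for lam, d in steps)
    g_k = grad(sigma_k)
    g_dot_s = g_k @ s
    non_orthogonal = g_dot_s <= -mu * np.linalg.norm(g_k) * np.linalg.norm(s)   # (3.1p)
    sigma_next = sigma_k + s
    w1 = J(sigma_next) <= J(sigma_k) + alpha * g_dot_s                          # (3.2p), w1
    w2 = grad(sigma_next) @ s >= beta * g_dot_s                                 # (3.2p), w2
    return non_orthogonal and w1 and w2
```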
For $p > 1$ a multi-directional and multi-step search algorithm is obtained. Note that this approach allows the case when one or several directions in $D^k$ do not belong to $G_\mu(\sigma^k)$ and/or the intermediate points

$$\sigma^{k+1}_r := \sigma^k + \sum_{j=1}^{r} \lambda_{kj}\, d_j^k, \quad r < p,$$

do not satisfy w1) and w2); but the overall step $\lambda^k D^k$ does. Furthermore, the number $p$ of directions can be different at each iteration $k$, and the algorithm becomes one with variable multi-directional search.

Theorem 1. If $J \in C^1$ and is bounded below, then the variable multi-directional and multi-step search (VMdS) algorithm is globally convergent to the set $\Theta = \{ x \in X : \nabla J(x) = 0 \}$.

Proof. See Gómez [7].

The (VMdS) algorithm can be applied to the continuous problem (2.1a)-(2.1b) through its discrete approximations (2.5a)-(2.5b) as follows:
At the beginning of the $(k+1)$-th step, a current point $\sigma_h^k \in \mathbb{R}^{N_{T_h}}$ is given for the discrete problem, and its corresponding canonical extension $\sigma_h^k(.) \in L^\infty(\Omega)$ is implicitly given for the continuous problem. Working in the discrete problem, a descent direction $\delta\sigma_h^k \in \mathbb{R}^{N_{T_h}}$ is sought and an inexact line search is performed to find a step length $\lambda_k$ and a new point $\sigma_h^{k+1} = \sigma_h^k + \lambda_k \delta\sigma_h^k \in \mathbb{R}^{N_{T_h}}$ satisfying the $\alpha,\beta$-Wolfe conditions w1), w2) for $J_h$. At the same time, we verify whether the corresponding canonical extensions $\delta\sigma_h^k(.), \sigma_h^{k+1}(.) \in L^\infty(\Omega)$ satisfy Wolfe's conditions for $J$. If not, new iterations are done in the discrete problem (say $p_k$ iterations) until $\nabla J_h = 0$ or Wolfe's conditions are fulfilled in the continuous one, i.e. $\lambda^k D^k(.)$ and $\sigma_h^{k+1}(.)$ satisfy (3.1p) and (3.2p). Afterwards, the mesh size $h$ is decreased (for example to $h' = h/2$) and a new discrete problem is defined with a given initial solution $\sigma_{h'}^{k+1}$.
Actually, this process is a variable multi-directional and multi-step algorithm for problem (2.1a)-(2.1b). The number $p_k$ of iterations performed in the $k$-th discrete problem corresponds to the number of directions taken in $D^k$ for a single iteration of the multi-directional algorithm in the continuous problem. The number $p_k$ can be different at each iteration, but remains bounded. This approach was proposed by Gómez [7] in the context of nonlinear optimal control algorithms.
In section 6 we shall see when the fulfillment of the non-orthogonality and Wolfe conditions in the discrete problem implies the same properties in the continuous problem. This depends on the relations between the $\mu$, $\alpha$ and $\beta$ conditions in both problems and, therefore, on $J(\sigma)$, $J_h(\sigma_h)$ and on the continuous $\nabla J(\sigma)$ and discrete $\nabla J_h(\sigma_h)$ gradients of the objective functions.
Obviously, we cannot expect both to be satisfied for the same parameters $\mu$, $\alpha$ and $\beta$, but some criteria for their appropriate selection shall also be given in section 6. These results are similar to those given in Gómez [7]; the only difference shall be the specific form adopted for the gradients in each respective problem, and their convergence property when $h \to 0$.
In section 4 we calculate the continuous and discrete gradients. In section 5 we prove the convergence property between these gradients. In section 6 the relations between the $\mu$, $\alpha$ and $\beta$ conditions of both problems are examined. In section 7 the algorithm is described and in section 8 some conclusions and future work are commented on.
4. CALCULATION OF GRADIENTS
Functions $J$ and $J_h$ implicitly depend on $\sigma$, $\sigma_h$; hence we shall use the Lagrangian method (see [...]) in order to calculate the respective gradients $\nabla J(\sigma)$, $\nabla J_h(\sigma_h)$.

Continuous Gradient:
Define $J_i(\sigma) = \frac{1}{2} \|u_i - \phi_i\|^2_{L^2(\Gamma)}$; then $\nabla J(\sigma) = \sum_{i=1}^{N} \nabla J_i(\sigma)$.
We define the Lagrangian $\mathcal{L}_i : L^\infty(\Omega) \times V \times V \to \mathbb{R}$ as follows:

$$\mathcal{L}_i(\sigma, u_i, p_i) = \frac{1}{2} \|u_i - \phi_i\|^2_{L^2(\Gamma)} + \langle e_i(\sigma, u_i), p_i \rangle_{V',V}, \tag{4.1}$$

where $e_i(\sigma, u_i) \in V'$ is the linear operator defined by:

$$\langle e_i(\sigma, u_i), v \rangle_{V',V} = \int_{\Omega} \sigma \nabla u_i \cdot \nabla v \, dx - \int_{\Gamma} \psi_i v \, ds, \quad \forall v \in V.$$

If $u_i = u_i(\sigma) \in V$ is the solution of (2.1b) corresponding to $\sigma$, then:

$$\mathcal{L}_i(\sigma, u_i, p_i) = J_i(\sigma),$$

because $e_i(\sigma, u_i) = 0$. As a consequence:
$$\nabla J_i(\sigma)\delta\sigma = \partial_\sigma \mathcal{L}_i(\sigma, u_i, p_i)\delta\sigma + \partial_{u_i} \mathcal{L}_i(\sigma, u_i, p_i)\, \partial_\sigma u_i(\sigma)\delta\sigma. \tag{4.2}$$

Since $\delta u_i = \partial_\sigma u_i(\sigma)\delta\sigma \in V$ is arbitrary, we choose $p_i \in V$ such that:

$$\partial_{u_i} \mathcal{L}_i(\sigma, u_i, p_i)\,\delta u_i = 0, \quad \forall \delta u_i \in V.$$
Differentiation in (4.1) with respect to $u_i$ gives:

$$\partial_{u_i} \mathcal{L}_i(\sigma, u_i, p_i)\,\delta u_i = \langle u_i - \phi_i, \delta u_i \rangle_{L^2(\Gamma)} + \langle \partial_{u_i} e_i(\sigma, u_i)\,\delta u_i, p_i \rangle = 0, \quad \forall \delta u_i \in V$$

$$\implies \int_{\Gamma} (u_i - \phi_i)\,\delta u_i \, ds + \int_{\Omega} \sigma \nabla \delta u_i \cdot \nabla p_i \, dx = 0, \quad \forall \delta u_i \in V.$$
Then, the $i$-th adjoint equation for $p_i \in V$ is the following:

$$\int_{\Omega} \sigma \nabla p_i \cdot \nabla \delta u_i \, dx = \int_{\Gamma} (\phi_i - u_i)\,\delta u_i \, ds, \quad \forall \delta u_i \in V, \tag{4.3a}$$

which corresponds to a variational formulation of the elliptic problem:

$$\operatorname{div}(\sigma \nabla p_i) = 0 \ \text{ in } \Omega, \qquad \sigma\,\frac{\partial p_i}{\partial n}\Big|_{\Gamma} = \phi_i - u_i,$$

where $p_i \in V$, $i = 1, 2, \dots, N$, and the compatibility condition:

$$\int_{\Gamma} (\phi_i - u_i)\, ds = 0 \iff \int_{\Gamma} \phi_i \, ds = 0. \tag{4.3b}$$
Problem (4.3a)-(4.3b) has a unique solution for the same reasons given above for problem (1.3a)-(1.3b). From (4.1) and (4.2) we have:

$$\nabla J_i(\sigma)\delta\sigma = \partial_\sigma \mathcal{L}_i(\sigma, u_i, p_i)\delta\sigma = \langle \partial_\sigma e_i(\sigma, u_i)\delta\sigma, p_i \rangle = \int_{\Omega} \delta\sigma\, \nabla u_i \cdot \nabla p_i \, dx, \quad i = 1, 2, \dots, N,$$

hence:

$$\nabla J(\sigma)\delta\sigma = \sum_{i=1}^{N} \int_{\Omega} \delta\sigma\, \nabla u_i \cdot \nabla p_i \, dx, \quad \forall \delta\sigma \in L^\infty(\Omega). \tag{4.3c}$$

We denote by:

$$\nabla J(\sigma) = \sum_{i=1}^{N} \nabla u_i \cdot \nabla p_i \tag{4.4}$$

the function of $L^1(\Omega)$ representing the integral linear functional (4.3c).
Discrete Gradient:
Define $J_h^i(\sigma_h) : \mathbb{R}^{N_{T_h}} \to \mathbb{R}$ such that:

$$J_h^i(\sigma_h) = \frac{1}{2} \|u_h^i - \phi_i\|^2_{L^2(\Gamma)},$$

where:

$$\langle e_i(\sigma_h, u_h^i), v_h \rangle_{V_h',V_h} = \int_{\Omega} \sigma_h(.) \nabla u_h^i \cdot \nabla v_h \, dx - \int_{\Gamma} \psi_i v_h \, ds = 0, \quad \forall v_h \in V_h, \ 1 \le i \le N, \tag{4.5}$$

then:

$$\nabla J_h = \sum_{i=1}^{N} \nabla J_h^i.$$
Taking into account that $\sigma_h = (\sigma_h^k)_{1 \le k \le N_{T_h}} \in \mathbb{R}^{N_{T_h}}$, integration in (4.5) over each mesh triangle gives:

$$\langle e_i(\sigma_h, u_h^i), v_h \rangle_{V_h',V_h} = \sum_{k=1}^{N_{T_h}} \sigma_h^k \int_{T_k} \nabla u_h^i \cdot \nabla v_h \, dx - \int_{\Gamma} \psi_i v_h \, ds = \sum_{k=1}^{N_{T_h}} \sigma_h^k\, |T_k|\, (\nabla u_h^i \cdot \nabla v_h)\big|_{T_k} - \int_{\Gamma} \psi_i v_h \, ds, \tag{4.6}$$

where in the last equality we use that the gradients are constant over each triangle, since the functions $u_h^i$ and $v_h$ are piecewise linear on $\tau_h$.
We define the Lagrangian $\mathcal{L} : \mathbb{R}^{N_{T_h}} \times V_h \times V_h \to \mathbb{R}$ by:

$$\mathcal{L}(\sigma_h, u_h^i, p_h^i) = \frac{1}{2} \|u_h^i - \phi_i\|^2_{L^2(\Gamma)} + \langle e_i(\sigma_h, u_h^i), p_h^i \rangle_{V_h',V_h}. \tag{4.7}$$

If $u_h^i = u_h^i(\sigma_h)$ satisfies equation (4.5) corresponding to $\sigma_h$, then:

$$\mathcal{L}(\sigma_h, u_h^i, p_h^i) = J_h^i(\sigma_h), \quad \forall p_h^i \in V_h. \tag{4.8}$$
Differentiation with respect to $\sigma_h$ in (4.8) gives:

$$\nabla J_h^i(\sigma_h)\delta\sigma_h = \partial_{\sigma_h} \mathcal{L}(\sigma_h, u_h^i, p_h^i)\delta\sigma_h + \partial_{u_h^i} \mathcal{L}(\sigma_h, u_h^i, p_h^i)\, \partial_{\sigma_h} u_h^i\, \delta\sigma_h, \quad \forall \delta\sigma_h \in \mathbb{R}^{N_{T_h}}. \tag{4.9}$$
Define $\delta u_h^i = \partial_{\sigma_h} u_h^i\, \delta\sigma_h \in V_h$; then the operator $\partial_{u_h^i} \mathcal{L}(\sigma_h, u_h^i, p_h^i) \in V_h'$ and we choose $p_h^i \in V_h$ such that:

$$\partial_{u_h^i} \mathcal{L}(\sigma_h, u_h^i, p_h^i)\,\delta u_h^i = 0, \quad \forall \delta u_h^i \in V_h. \tag{4.10}$$
Computing the differential $\partial_{u_h^i}$ in (4.7), we obtain:

$$\partial_{u_h^i} \mathcal{L}(\sigma_h, u_h^i, p_h^i)\,\delta u_h^i = \langle u_h^i - \phi_i, \delta u_h^i \rangle_{L^2(\Gamma)} + \big\langle \partial_{u_h^i} e_i(\sigma_h, u_h^i)\,\delta u_h^i, p_h^i \big\rangle_{V_h',V_h} = 0,$$

or equivalently:

$$\int_{\Gamma} (u_h^i - \phi_i)\,\delta u_h^i \, ds + \int_{\Omega} \sigma_h(.) \nabla \delta u_h^i \cdot \nabla p_h^i \, dx = 0.$$
Then, $p_h^i$ should satisfy the equation:

$$\int_{\Omega} \sigma_h(.) \nabla p_h^i \cdot \nabla \delta u_h^i \, dx = \int_{\Gamma} (\phi_i - u_h^i)\,\delta u_h^i \, ds, \quad \forall \delta u_h^i \in V_h, \tag{4.11}$$

for $1 \le i \le N$ and $p_h^i \in V_h$. This is the adjoint equation for $p_h^i$, which gives the $N_h \times N_h$ linear system:

$$K^{(h)} \beta^{(i)} = c^{(i)},$$

for each $i = 1, 2, \dots, N$, where:

$$\beta_k^{(i)} = p_h^i(x_k), \qquad K_{jk}^{(h)} = \int_{\Omega} \sigma_h \nabla \varphi_j \cdot \nabla \varphi_k \, dx, \qquad c_j^{(i)} = \int_{\Gamma} (\phi_i - u_h^i)\,\varphi_j \, ds,$$

and $1 \le j, k \le N_h$.
From (4.9), (4.10) and (4.6) we have:

$$\nabla J_h^i(\sigma_h)\delta\sigma_h = \partial_{\sigma_h} \mathcal{L}(\sigma_h, u_h^i, p_h^i)\delta\sigma_h = \sum_{k=1}^{N_{T_h}} \delta\sigma_h^k\, |T_k|\, (\nabla u_h^i \cdot \nabla p_h^i)\big|_{T_k}, \tag{4.12}$$

for all $\delta\sigma_h = (\delta\sigma_h^k)_{1 \le k \le N_{T_h}} \in \mathbb{R}^{N_{T_h}}$ and $1 \le i \le N$.
Finally:

$$\nabla J_h(\sigma_h)\delta\sigma_h = \sum_{i=1}^{N} \sum_{k=1}^{N_{T_h}} \delta\sigma_h^k\, |T_k|\, (\nabla u_h^i \cdot \nabla p_h^i)\big|_{T_k} = \sum_{k=1}^{N_{T_h}} \delta\sigma_h^k \sum_{i=1}^{N} |T_k|\, (\nabla u_h^i \cdot \nabla p_h^i)\big|_{T_k}. \tag{4.13}$$
Therefore, the partial derivatives of $J_h$ are given by:

$$\frac{\partial J_h(\sigma_h)}{\partial \sigma_h^k} = \sum_{i=1}^{N} |T_k|\, (\nabla u_h^i \cdot \nabla p_h^i)\big|_{T_k}, \tag{4.14}$$

for $k = 1, 2, \dots, N_{T_h}$. We use the notation $\nabla J_h(\sigma_h)(.) \in \mathbb{R}^{N_{T_h}}(.)$ for the canonical extension of the gradient vector $\nabla J_h(\sigma_h) = (\partial J_h/\partial \sigma_h^k)_{1 \le k \le N_{T_h}} \in \mathbb{R}^{N_{T_h}}$ to $L^\infty(\Omega)$, which is equal to the constant (4.14) on each open triangle $\operatorname{int} T_k$ of $\tau_h$.
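For illustration, once the forward solutions $u_h^i$ and adjoint solutions $p_h^i$ are available as nodal vectors, the vector (4.14) can be accumulated element by element; as before, the mesh arrays and helper names in this sketch are assumptions.

```python
import numpy as np

def discrete_gradient(nodes, triangles, u_list, p_list):
    """Accumulate dJ_h/dsigma_k = sum_i |T_k| * (grad u_h^i . grad p_h^i)|_{T_k},
    i.e. formula (4.14); u_list[i] and p_list[i] are nodal value vectors of the
    forward and adjoint solutions."""
    grad_J = np.zeros(len(triangles))
    for k, tri in enumerate(triangles):
        x, y = nodes[tri, 0], nodes[tri, 1]
        area = 0.5 * abs((x[1]-x[0])*(y[2]-y[0]) - (x[2]-x[0])*(y[1]-y[0]))
        b = np.array([y[1]-y[2], y[2]-y[0], y[0]-y[1]]) / (2.0*area)
        c = np.array([x[2]-x[1], x[0]-x[2], x[1]-x[0]]) / (2.0*area)
        for u, p in zip(u_list, p_list):
            grad_u = np.array([b @ u[tri], c @ u[tri]])   # constant on T_k
            grad_p = np.array([b @ p[tri], c @ p[tri]])
            grad_J[k] += area * (grad_u @ grad_p)
    return grad_J
```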
Note that (4.13) can be written as:

$$\nabla J_h(\sigma_h)\delta\sigma_h = \langle \nabla J_h(\sigma_h)(.), \delta\sigma_h(.) \rangle = \sum_{i=1}^{N} \int_{\Omega} \delta\sigma_h(.)\, \nabla u_h^i \cdot \nabla p_h^i \, dx, \tag{4.15}$$

for all $\delta\sigma_h \in \mathbb{R}^{N_{T_h}}$. Therefore, we can consider the gradient $\nabla J_h(\sigma_h)$ as an integral linear functional defined on the subspace $\mathbb{R}^{N_{T_h}}(.) \subset L^\infty(\Omega)$, which can be extended to $L^\infty(\Omega)$ with the same integral formula:

$$\nabla J_h(\sigma_h)[\delta\sigma] = \sum_{i=1}^{N} \int_{\Omega} \delta\sigma\, \nabla u_h^i \cdot \nabla p_h^i \, dx, \quad \forall \delta\sigma \in L^\infty(\Omega). \tag{4.16}$$
We denote by:

$$\nabla J_h(\sigma_h)[.] = \sum_{i=1}^{N} \nabla u_h^i \cdot \nabla p_h^i \in L^1(\Omega) \tag{4.17}$$

the function representing this integral linear functional over $L^\infty(\Omega)$.
In what follows, we shall use the notations $\nabla J_h(\sigma_h)$, $\nabla J_h(\sigma_h)(.)$ or $\nabla J_h(\sigma_h)[.]$ according to our needs. In summary:
$\nabla J_h(\sigma_h)$ is a vector with $N_{T_h}$ components,
$\nabla J_h(\sigma_h)(.)$ is a $\tau_h$-piecewise function on $\Omega$, which is constant $= \partial J_h(\sigma_h)/\partial \sigma_h^k$ over each open triangle $\operatorname{int} T_k$ of $\tau_h$,
$\nabla J_h(\sigma_h)[.]$ is the same function as $\nabla J_h(\sigma_h)(.)$, but now considered as the representation in $L^1(\Omega)$ of the integral linear functional (4.16).
5. CONVERGENCE OF GRADIENTS
Lemma 1: Let $\sigma_h \in \mathbb{R}^{N_{T_h}}$ be such that $\sigma_h(.) \in \mathbb{R}^{N_{T_h}}(.)$ satisfies:

$$\sigma_h(x,y) \ge K > 0, \ \text{ a.e. on } \Omega,$$

and suppose:

$$\sigma_h(.) \to \sigma \ \text{ in } L^\infty(\Omega), \ \text{ when } h \to 0.$$

Then:

$$\nabla J_h(\sigma_h)[.] \to \nabla J(\sigma) \ \text{ in } L^1(\Omega), \ \text{ when } h \to 0.$$
REMARK: From Lemma 1 we conclude that if $\sigma_{h_k}$, $k = 1, 2, \dots$, is a sequence of stationary solutions of the discrete problems (2.5a)-(2.5b), corresponding to triangulations $\tau_{h_k}$ with $h_k \to 0$, i.e.:

$$\nabla J_{h_k}(\sigma_{h_k})[.] = 0, \quad k = 1, 2, \dots,$$

and if $\sigma$ is an accumulation point of $\{\sigma_{h_k}\}$, then:

$$\nabla J(\sigma) = 0,$$

and $\sigma$ is a stationary solution of the continuous problem.
Moreover, if we now add the convex constraint:

$$\sigma \in M = \{ \sigma \in L^\infty(\Omega) : \sigma \ge K > 0, \ \text{a.e.} \} \cap Q, \tag{4.18}$$

to both the continuous (2.1a)-(2.1b) and discrete (2.5a)-(2.5b) problems, then by Lemma 1 and convexity we deduce from the necessary optimality conditions of the discrete problems:

$$\nabla J_{h_k}(\sigma_{h_k})[\eta - \sigma_{h_k}] \ge 0, \quad \forall \eta \in M \cap Q,$$

(taking $k \to +\infty$) the fulfillment of the necessary optimality condition for the continuous problem:

$$\nabla J(\sigma)(\eta - \sigma) \ge 0, \quad \forall \eta \in M \cap Q.$$

A more delicate situation appears if we consider a discretization $M_h$ of the constraint set $M$ in (2.5a)-(2.5b). For example:

$$\sigma_h(.) \in M_h = \Big\{ \sigma(.) \in \mathbb{R}^{N_{T_h}}(.) : \sigma|_{T_k} = \sigma_h^k \ge K, \ k = 1, \dots, N_{T_h} \Big\} \cap Q. \tag{4.19}$$

Evidently $M_h \subset M$ and, since $M$ and $Q$ are closed, any accumulation point of a sequence $\sigma_{h_k}(.) \in M_{h_k} \cap Q$ belongs to $M \cap Q$. But the existence of an optimal solution for the continuous problem with constraint (4.18) does not imply the existence of an optimal solution for the discrete problem with constraint (4.19). This situation can be avoided with a smart selection of the set $Q$, ensuring existence and uniqueness of the solution for the continuous inverse problem, with $Q$ composed of piecewise constant functions in $L^\infty(\Omega)$ together with their limits in the sup norm.
Proof: We use the $L^1(\Omega)$ representations (4.4) and (4.17):

$$\nabla J(\sigma) = \sum_{i=1}^{N} \nabla u_i \cdot \nabla p_i, \qquad \nabla J_h(\sigma_h)[.] = \sum_{i=1}^{N} \nabla u_h^i \cdot \nabla p_h^i,$$

where:

1. $u_i \in V$ is the solution of:
$$\int_{\Omega} \sigma \nabla u_i \cdot \nabla v \, dx = \int_{\Gamma} \psi_i v \, ds, \quad \forall v \in V;$$

2. $p_i \in V$ is the solution of:
$$\int_{\Omega} \sigma \nabla p_i \cdot \nabla v \, dx = \int_{\Gamma} (\phi_i - u_i)\, v \, ds, \quad \forall v \in V;$$

3. $u_h^i \in V_h$ is the solution of:
$$\int_{\Omega} \sigma_h(.) \nabla u_h^i \cdot \nabla v_h \, dx = \int_{\Gamma} \psi_i v_h \, ds, \quad \forall v_h \in V_h;$$

4. $p_h^i \in V_h$ is the solution of:
$$\int_{\Omega} \sigma_h(.) \nabla p_h^i \cdot \nabla v_h \, dx = \int_{\Gamma} (\phi_i - u_h^i)\, v_h \, ds, \quad \forall v_h \in V_h.$$

We also define:

5. $u_h^{*i} \in V_h$ as the solution of:
$$\int_{\Omega} \sigma \nabla u_h^{*i} \cdot \nabla v_h \, dx = \int_{\Gamma} \psi_i v_h \, ds, \quad \forall v_h \in V_h.$$

From (2.5) we know that $u_h^{*i} \to u_i$ in $H^1(\Omega)$ when $h \to 0$.
Define $\tilde{u}_h^i := u_h^i - u_h^{*i} \in V_h$. From definitions 3. and 5. above we have:

$$\int_{\Omega} \sigma \nabla u_h^{*i} \cdot \nabla v_h \, dx = \int_{\Omega} \sigma_h \nabla u_h^i \cdot \nabla v_h \, dx, \quad \forall v_h \in V_h,$$

and, replacing $u_h^i = \tilde{u}_h^i + u_h^{*i}$, we obtain that $\tilde{u}_h^i \in V_h$ is the solution of:

$$\int_{\Omega} \sigma_h \nabla \tilde{u}_h^i \cdot \nabla v_h \, dx = \int_{\Omega} (\sigma - \sigma_h) \nabla u_h^{*i} \cdot \nabla v_h \, dx, \quad \forall v_h \in V_h,$$

with the continuous and coercive bilinear form:

$$a(\tilde{u}_h^i, v_h) = \int_{\Omega} \sigma_h \nabla \tilde{u}_h^i \cdot \nabla v_h \, dx.$$
Furthermore, $F_h : V_h \to \mathbb{R}$ defined by:

$$F_h(v_h) = \int_{\Omega} (\sigma - \sigma_h) \nabla u_h^{*i} \cdot \nabla v_h \, dx,$$

is linear and satisfies:

$$|F_h(v_h)| \le \|\sigma - \sigma_h\|_{L^\infty(\Omega)} \big\| u_h^{*i} \big\|_{H^1(\Omega)} \|v_h\|_{H^1(\Omega)}, \quad \forall v_h \in V_h;$$

hence, $F_h$ is continuous and:

$$\|F_h\|_{(H^1(\Omega))'} \le \|\sigma - \sigma_h\|_{L^\infty(\Omega)} \big\| u_h^{*i} \big\|_{H^1(\Omega)}, \quad \forall h > 0,$$

where $\big\| u_h^{*i} \big\|_{H^1(\Omega)}$ is uniformly bounded because $u_h^{*i}$ converges to $u_i$ in $H^1(\Omega)$.
Then:

$$\|F_h\|_{(H^1(\Omega))'} \to 0, \ \text{ if } h \to 0.$$
From the Lax-Milgram lemma:

$$\|\tilde{u}_h^i\|_{H^1(\Omega)} \le \frac{1}{\lambda} \|F_h\|_{(H^1(\Omega))'} \to 0, \ \text{ if } h \to 0,$$

where $\lambda > 0$ is the coercivity constant of $a$, which is independent of $h$.
We have proved that:

$$\big\| u_h^i - u_h^{*i} \big\|_{H^1(\Omega)} \to 0, \ \text{ if } h \to 0,$$

and as a consequence:

$$\|u_i - u_h^i\|_{H^1(\Omega)} \le \big\| u_i - u_h^{*i} \big\|_{H^1(\Omega)} + \big\| u_h^i - u_h^{*i} \big\|_{H^1(\Omega)} \to 0, \ \text{ if } h \to 0.$$
Analogously, we define $p_h^{*i} \in V_h$ satisfying:

$$\int_{\Omega} \sigma \nabla p_h^{*i} \cdot \nabla v_h \, dx = \int_{\Gamma} (\phi_i - u_i)\, v_h \, ds, \quad \forall v_h \in V_h,$$

and $z_h^i = p_h^i - p_h^{*i} \in V_h$, as the solution of:

$$\int_{\Omega} \sigma_h \nabla z_h^i \cdot \nabla v_h \, dx = \int_{\Omega} (\sigma - \sigma_h) \nabla p_h^{*i} \cdot \nabla v_h \, dx, \quad \forall v_h \in V_h.$$

With the same arguments as before, we obtain:

$$\big\| p_h^i - p_h^{*i} \big\|_{H^1(\Omega)} = \|z_h^i\|_{H^1(\Omega)} \to 0, \ \text{ if } h \to 0,$$

then:

$$\|p_i - p_h^i\|_{H^1(\Omega)} \le \big\| p_i - p_h^{*i} \big\|_{H^1(\Omega)} + \big\| p_h^i - p_h^{*i} \big\|_{H^1(\Omega)} \to 0, \ \text{ if } h \to 0.$$
Therefore, we have:

$$\nabla u_h^i \to \nabla u_i \ \text{ in } L^2(\Omega), \qquad \nabla p_h^i \to \nabla p_i \ \text{ in } L^2(\Omega),$$

and, using the triangle inequality, we obtain:

$$\|\nabla u_h^i \cdot \nabla p_h^i - \nabla u_i \cdot \nabla p_i\|_{L^1(\Omega)} \le \|\nabla u_h^i \cdot \nabla p_h^i - \nabla u_h^i \cdot \nabla p_i\|_{L^1(\Omega)} + \|\nabla u_h^i \cdot \nabla p_i - \nabla u_i \cdot \nabla p_i\|_{L^1(\Omega)}$$
$$\le \|\nabla u_h^i\|_{L^2(\Omega)} \|\nabla p_h^i - \nabla p_i\|_{L^2(\Omega)} + \|\nabla p_i\|_{L^2(\Omega)} \|\nabla u_h^i - \nabla u_i\|_{L^2(\Omega)} \to 0, \ \text{ if } h \to 0,$$

and then:

$$\nabla u_h^i \cdot \nabla p_h^i \to \nabla u_i \cdot \nabla p_i \ \text{ in } L^1(\Omega),$$

for all $i = 1, 2, \dots, N$. Finally:

$$\nabla J_h(\sigma_h)[.] = \sum_{i=1}^{N} \nabla u_h^i \cdot \nabla p_h^i \ \to \ \nabla J(\sigma) = \sum_{i=1}^{N} \nabla u_i \cdot \nabla p_i \ \text{ in } L^1(\Omega) \ \text{ when } h \to 0.$$
Definition: Let $\tau_{h_1}$ be a regular triangulation of $\Omega$, with diameter $h_1 > 0$, and let $\sigma_{h_1}(.) \in \mathbb{R}^{N_{T_{h_1}}}(.)$ be a $\tau_{h_1}$-piecewise function on $\Omega$. We say that $\sigma_h(.) \in \mathbb{R}^{N_{T_h}}(.)$ is a reduction of $\sigma_{h_1}(.)$ to $\tau_h$ if and only if $\tau_h$ is a regular triangulation of $\Omega$ which is finer than $\tau_{h_1}$, with diameter $h > 0$, $h \le h_1$, and $\sigma_h(.)$ is a $\tau_h$-piecewise function on $\Omega$ satisfying:

$$\sigma_h(\bar{x}) = \sigma_{h_1}(\bar{x}), \quad \forall \bar{x} \in \Omega.$$
Lemma 2: Let $h_1 > 0$, let $\sigma_{h_1}(.) \in \mathbb{R}^{N_{T_{h_1}}}(.)$ be a $\tau_{h_1}$-piecewise function defined on $\Omega$, and suppose $\sigma_h(.) \in \mathbb{R}^{N_{T_h}}(.)$ is a reduction of $\sigma_{h_1}(.)$ to $\tau_h$, where $\tau_h$ is any regular triangulation finer than $\tau_{h_1}$. Then:

$$\|\nabla J_h(\sigma_h)[.] - \nabla J(\sigma_h(.))\|_{L^1(\Omega)} \to 0, \ \text{ if } h \to 0.$$

Proof: Clearly we can write:

$$\sigma_h(.) \to \sigma_{h_1}(.) \ \text{ in } L^\infty(\Omega), \ \text{ if } h \to 0.$$

Then, by Lemma 1:

$$\nabla J_h(\sigma_h)[.] \to \nabla J(\sigma_{h_1}(.)) \ \text{ in } L^1(\Omega), \ \text{ if } h \to 0. \tag{5.1}$$

In addition:

$$\nabla J(\sigma_{h_1}(.)) = \nabla J(\sigma_h(.)), \tag{5.2}$$

for all $h \le h_1$, since $\nabla J$ is continuous and $\sigma_h(.)$ and $\sigma_{h_1}(.)$ are the same function, piecewise defined on different triangulations. The result follows from (5.1) and (5.2).
6. RELATIONS BETWEEN WOLFE'S CONDITIONS
In what follows, we rewrite in our notation the general results given in Gómez [7].
We consider the discrete problem (2.4a)-(2.4b) defined in the space $\mathbb{R}^{N_{T_h}}_\infty$, i.e. the set $\mathbb{R}^{N_{T_h}}$ provided with the norm:

$$\|\eta\|_{\mathbb{R}^{N_{T_h}}_\infty} = \max_{1 \le j \le N_{T_h}} |\eta_j|.$$

It is known that the topological dual of $\mathbb{R}^{N_{T_h}}_\infty$ is $\mathbb{R}^{N_{T_h}}_1$, i.e. $\mathbb{R}^{N_{T_h}}$ provided with the norm:

$$\|\eta\|_{\mathbb{R}^{N_{T_h}}_1} = \sum_{j=1}^{N_{T_h}} |\eta_j|.$$

If $\sigma_h(.) \in L^\infty(\Omega)$ is the canonical extension of $\sigma_h \in \mathbb{R}^{N_{T_h}}_\infty$, it is easy to see that:

$$\|\sigma_h\|_{\mathbb{R}^{N_{T_h}}_\infty} = \|\sigma_h(.)\|_{L^\infty(\Omega)}.$$
We use the two representations of the discrete gradient obtained in section 4:
- the vectorial form:

$$\nabla J_h(\sigma_h) = \Big( \sum_{i=1}^{N} |T_k|\, (\nabla u_h^i \cdot \nabla p_h^i)\big|_{T_k} \Big)_{1 \le k \le N_{T_h}} \in \mathbb{R}^{N_{T_h}}, \quad \text{where } \bar{\Omega} = \bigcup_{k=1}^{N_{T_h}} T_k,$$

- and the functional form:

$$\nabla J_h(\sigma_h)[.] = \sum_{i=1}^{N} \nabla u_h^i \cdot \nabla p_h^i \in L^1(\Omega).$$
Then, we have:

$$\|\nabla J_h(\sigma_h)[.]\|_{L^1(\Omega)} = \int_{\Omega} \Big| \sum_{i=1}^{N} \nabla u_h^i \cdot \nabla p_h^i \Big| \, dx = \sum_{k=1}^{N_{T_h}} \Big| \sum_{i=1}^{N} |T_k|\, (\nabla u_h^i \cdot \nabla p_h^i)\big|_{T_k} \Big| = \|\nabla J_h(\sigma_h)\|_{\mathbb{R}^{N_{T_h}}_1},$$

that is:

$$\|\nabla J_h(\sigma_h)[.]\|_{L^1(\Omega)} = \|\nabla J_h(\sigma_h)\|_{\mathbb{R}^{N_{T_h}}_1}. \tag{6.1}$$
Analogously, the following equality can be deduced:

$$\big\langle \nabla J_h(\sigma_h)[.], \sigma_h(.) \big\rangle_{L^1, L^\infty} = \big\langle \nabla J_h(\sigma_h), \sigma_h \big\rangle_{\mathbb{R}^{N_{T_h}}}. \tag{6.2}$$
Definition: For the continuous and discrete functionals $J$ and $J_h$ introduced in section 2, we define the following error functions:

$$J(\sigma_1, \sigma_2) := J(\sigma_1) - J(\sigma_2),$$
$$J_h(\sigma_{1h}, \sigma_{2h}) := J_h(\sigma_{1h}) - J_h(\sigma_{2h}),$$
$$\rho_h = \rho_h(\sigma_{1h}, \sigma_{2h}) := J(\sigma_{1h}(.), \sigma_{2h}(.)) - J_h(\sigma_{1h}, \sigma_{2h}),$$
$$\varepsilon_h = \varepsilon_h(\sigma_h) := \|\nabla J(\sigma_h(.)) - \nabla J_h(\sigma_h)[.]\|_{L^1(\Omega)}, \tag{6.3}$$

where $\sigma_1, \sigma_2 \in L^\infty(\Omega)$ and $\sigma_{1h}, \sigma_{2h} \in \mathbb{R}^{N_{T_h}}_\infty$. Note that $\varepsilon_h \to 0$ when $h \to 0$ by Lemma 2.
Theorem: Let $\sigma_{h_1} \in \mathbb{R}^{N_{T_{h_1}}}_\infty$ and $\mu, \alpha, \beta \in (0,1)$ with $\alpha < \beta$. Choose $\theta \in (0,1)$ satisfying:

$$\theta < \min\Big\{ \frac{\mu}{2+\mu}, \ \frac{\mu(1-\beta)}{2+\mu(1-\beta)}, \ \frac{\mu\alpha}{1+\alpha(2+\mu)} \Big\}.$$

Let $h \le h_1$, let $\tau_h$ be a regular triangulation, and suppose that the reduction $\sigma_h(.)$ of $\sigma_{h_1}(.)$ to $\tau_h$ is such that:

$$\varepsilon_h(\sigma_h) = \|\nabla J(\sigma_h(.)) - \nabla J_h(\sigma_h)(.)\|_{L^1(\Omega)} \le \theta\, \|\nabla J(\sigma_h(.))\|_{L^1(\Omega)}.$$
Let $\sigma_h^+ = \sigma_h + \delta\sigma_h$ with $\delta\sigma_h \in \mathbb{R}^{N_{T_h}}_\infty$, for which the following inequalities hold:

$$\varepsilon_h(\sigma_h^+) = \|\nabla J(\sigma_h^+(.)) - \nabla J_h(\sigma_h^+)(.)\|_{L^1(\Omega)} \le \theta\, \|\nabla J(\sigma_h(.))\|_{L^1(\Omega)},$$

$$\rho_h(\sigma_h^+, \sigma_h) = \big| J(\sigma_h^+(.), \sigma_h(.)) - J_h(\sigma_h^+, \sigma_h) \big| \le \theta\, \|\nabla J(\sigma_h(.))\|_{L^1(\Omega)}\, \|\delta\sigma_h(.)\|_{L^\infty(\Omega)}.$$
If the vector $\sigma_h^+ \in \mathbb{R}^{N_{T_h}}$ satisfies the $\mu, \alpha, \beta$-conditions for the discrete problem, then there exist numbers $\mu_1, \alpha_1, \beta_1$ with $\alpha_1 < \beta_1$ such that its canonical extension $\sigma_h^+(.) \in L^\infty(\Omega)$ satisfies the $\mu_1, \alpha_1, \beta_1$-conditions for the continuous problem. Furthermore, $\mu_1, \alpha_1, \beta_1$ can be chosen in the following intervals:

$$\mu_1 \in \big( \bar{\mu}_1, \ \mu - t(1+\mu) \big),$$
$$\beta_1 \in \big( \beta + t(1+\beta), \ 1 \big),$$
$$\alpha_1 \in \big( 0, \ \alpha - t(1+\alpha) \big),$$

where:

$$\bar{\mu}_1 = \max\Big\{ \frac{\mu(1+\beta)}{2+\mu(1-\beta)}, \ \frac{\mu(1+\alpha)}{1+\alpha(2+\mu)} \Big\}, \qquad t = \frac{1}{\bar{\mu}_1}\Big( \frac{\mu - \bar{\mu}_1}{1+\mu} \Big).$$
Proof: See Gómez [7].
REMARK: Note that the given intervals do not depend on $h$; they only depend on the parameters $\mu, \alpha, \beta$.
7. ALGORITHM
1. Choose $h_{\mathrm{init}} > 0$ and $\sigma_{\mathrm{init}} \in \mathbb{R}^{N_{T_{h_{\mathrm{init}}}}}$.
2. Choose $\mu, \alpha, \beta, \theta, \rho \in (0,1)$, $\alpha < \beta$, satisfying:

$$\theta < \min\Big\{ \frac{\mu}{2+\mu}, \ \frac{\mu(1-\beta)}{2+\mu(1-\beta)}, \ \frac{\mu\alpha}{1+\alpha(2+\mu)} \Big\}, \qquad \rho \ge \max\Big\{ \frac{\mu(1+\beta)}{2+\mu(1-\beta)}, \ \frac{\mu(1+\alpha)}{1+\alpha(2+\mu)} \Big\}.$$

3. Choose $\theta_0 \in (0, \theta)$, $\mu_1 \in (\rho, \mu - t(1+\mu))$, $\beta_1 \in (\beta + t(1+\beta), 1)$, $\alpha_1 \in (0, \alpha - t(1+\alpha))$, with $t = \frac{1}{\rho}\Big(\frac{\mu - \rho}{1+\mu}\Big)$.
4. Set $h_0 = h_{\mathrm{init}}$, $\sigma^0(.) = \sigma_{h_0}(.) \in L^\infty(\Omega)$, $l = 0$, $k = 0$.
5. If $\|\nabla J(\sigma^l(.))\|_{L^1(\Omega)} = 0$, stop: the function $\sigma^l(.)$ is a local minimizer for the continuous problem.
If $\|\nabla J(\sigma^l(.))\|_{L^1(\Omega)} \ne 0$, go to step 6.
6. Set $k = k + 1$, $\theta_k = \theta_{k-1}/2$, $h_k = h_{k-1}/2$.
7. Define $\sigma_{h_k} = \sigma^l$ on the new triangulation $\tau_{h_k}$.
8. Verify whether

$$\|\nabla J(\sigma_{h_k}(.)) - \nabla J_{h_k}(\sigma_{h_k})(.)\|_{L^1(\Omega)} \le \theta_k\, \|\nabla J(\sigma_{h_k}(.))\|_{L^1(\Omega)}.$$

If it holds, go to step 9.
If it does not hold, take $h_k = h_k/2$ and go to step 7.
9. Choose $\delta\sigma_{h_k} \in \mathbb{R}^{N_{T_{h_k}}}$ and $\lambda_k > 0$ satisfying the $\mu$, $\alpha$ and $\beta$-conditions for the discrete problem.
10. Define $\sigma^+_{h_k} = \sigma_{h_k} + \lambda_k\, \delta\sigma_{h_k}$ and verify:

$$\big\| \nabla J(\sigma^+_{h_k}(.)) - \nabla J_{h_k}(\sigma^+_{h_k})[.] \big\|_{L^1(\Omega)} \le \theta_k\, \|\nabla J(\sigma_{h_k}(.))\|_{L^1(\Omega)},$$

$$\big| J(\sigma^+_{h_k}(.), \sigma_{h_k}(.)) - J_{h_k}(\sigma^+_{h_k}, \sigma_{h_k}) \big| \le \theta_k\, \|\nabla J(\sigma_{h_k}(.))\|_{L^1(\Omega)}\, \|\delta\sigma_{h_k}\|_{L^\infty(\Omega)}.$$

If these inequalities hold, or if $\big\|\nabla J_{h_k}(\sigma^+_{h_k})\big\| = 0$, take $\sigma^{l+1}(.) = \sigma^+_{h_k}(.)$, $l = l + 1$, and go to step 5.
If they do not hold and $\big\|\nabla J_{h_k}(\sigma^+_{h_k})\big\| \ne 0$, take $\sigma_{h_k} = \sigma^+_{h_k}$ and go to step 9.
REMARKS:
- Quasi-Newton methods with inexact line search or trust region routines can be used in step 9.
- Estimates of the continuous gradients $\nabla J(\sigma_h(.))$, $\nabla J(\sigma_h^+(.))$ and of the continuous increment $J(\sigma_h^+(.), \sigma_h(.))$ can be obtained using smaller mesh sizes in the discrete problems.
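One simple way to realize this remark in practice (a sketch only: discrete_gradient is the routine above, while refine_mesh and prolong are hypothetical helpers assumed to be provided by the surrounding FEM code) is to evaluate the discrete gradient on a refined mesh and use it as a surrogate for the continuous gradient in the tests of steps 8 and 10.

```python
import numpy as np

def gradient_gap_test(sigma_h, mesh, solve_and_grad, refine_mesh, prolong, theta_k):
    """Approximate the step-8 test by replacing grad J(sigma_h(.)) with the
    discrete gradient computed on a finer mesh.
    solve_and_grad(mesh, sigma) returns (dJ/dsigma per triangle, triangle areas);
    prolong(values, coarse, fine) copies per-triangle values to child triangles."""
    g_h, areas_h = solve_and_grad(mesh, sigma_h)

    fine_mesh = refine_mesh(mesh)                      # e.g. halve the diameter h
    sigma_fine = prolong(sigma_h, mesh, fine_mesh)
    g_fine, areas_fine = solve_and_grad(fine_mesh, sigma_fine)

    # Compare the gradients as L^1(Omega) functions, i.e. via their densities
    # dJ/dsigma_k / |T_k| on each triangle (the [.]-representation of section 4).
    g_h_density_on_fine = prolong(g_h / areas_h, mesh, fine_mesh)
    g_fine_density = g_fine / areas_fine
    gap = np.sum(areas_fine * np.abs(g_fine_density - g_h_density_on_fine))
    ref = np.sum(areas_fine * np.abs(g_fine_density))
    return gap <= theta_k * ref
```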
8. CONCLUSIONS
REFERENCES
1. Borcea, Liliana. Electrical impedance tomography. Topical Review. Inverse Problems 18 (2002), R99-R136.
2. Brezis, H. Analyse Fonctionnelle. Théorie et Applications, Masson, 1987.
3. Dalmasso, R. An Inverse Problem for an Elliptic Equation, Publ. RIMS, Kyoto Univ., 40, 91-123, 2004.
4. Dennis, J.E.; Schnabel, R.B. Numerical Methods for Unconstrained Optimization, Prentice-Hall, 1983.
5. Fletcher, R. Practical Methods of Optimization, J. Wiley and Sons, Chichester, England, 1987.
6. Folland, G. Introduction to Partial Differential Equations, Princeton, NJ: Princeton University Press, 1995.
7. Gómez, J.A.; Romero, M. Global convergence of a multidirectional algorithm for unconstrained optimal control problems. Numerical Functional Analysis and Optimization, Vol. 19, No. 9-10, 1998.
8. Gómez, J.A.; Marrero, A. Computing gradients of inverse problems in ODE models, Revista Investigación Operativa, Brasil, Vol. 9, No. 2, pp. 207-224, 2000.
9. Gómez, J.A.; Marrero, A. Convergence of discrete approximations of inverse problems in ODE models, Revista Investigación Operativa, Brasil, Vol. 9, No. 2, pp. 207-224, 2000.
10. Isakov, V. Inverse Problems for Partial Differential Equations, Vol. 127 of Applied Mathematical Sciences, Springer, New York, USA, 2nd edition, 2006.
11. Kirsch, A. An Introduction to the Mathematical Theory of Inverse Problems, Springer Verlag, New York, 1996.
12. Knabner, P.; Angermann, L. Numerical Methods for Elliptic and Parabolic Partial Differential Equations, Springer, New York, 2003.
13. Kohn, R.; Vogelius, M. Determining Conductivity by Boundary Measurements, Commun. Pure Appl. Math. 37, 289-298, 1984.
14. Larrauturou, B. Modélisation mathématique et numérique pour les sciences de l'ingénieur, École Polytechnique, Tome I, 1996.
15. Luenberger, D. Introduction to Linear and Nonlinear Programming, Addison-Wesley, Massachusetts, 1984.
16. MacMillan, H.R.; Manteuffel, T.A.; McCormick, S.F. First-Order System Least Squares and Electrical Impedance Tomography, SIAM Journal on Numerical Analysis, Volume 42, pp. 461-483, 2004.
17. Nocedal, J.; Wright, S.J. Numerical Optimization, Springer Series in Operations Research, Springer Verlag, 1999.
18. Polak, E. Computational Methods in Optimization: A Unified Approach, Academic Press, New York, 1971.
19. Raviart, P.A.; Thomas, J.M. Introduction à l'analyse numérique des équations aux dérivées partielles. Mathématiques Appliquées pour la Maîtrise. Dunod, 1998.
20. Tarzia, D. Análisis numérico de un problema de control óptimo elíptico distribuido, Mecánica Computacional, Vol. XXVIII, pp. 1149-1160, 2009.
21. Vogel, C. Computational Methods for Inverse Problems, SIAM, 2002.
