Chapter 4. Hydrodynamic Scaling

4.1. From Classical Mechanics to Euler Equations
Here, $q_i \in \mathbb{R}^3$ is the position of the $i$-th particle, $p_i \in \mathbb{R}^3$ is its velocity, and $k = 1, 2, 3$ refers to the three components of position or velocity. The repulsive potential $V \ge 0$ is even and has compact support in $\mathbb{R}^3$; in particular, the interaction is short range. The equations of motion are
$$\frac{dq_i^k}{dt} = \frac{\partial H}{\partial p_i^k}(p,q) = p_i^k, \qquad \frac{dp_i^k}{dt} = -\frac{\partial H}{\partial q_i^k}(p,q) = -\sum_{j=1}^{N} V_k(q_i - q_j),$$
where
$$V_k(q) = \frac{\partial V(q)}{\partial q^k}$$
is the gradient of $V$. The dynamical system has five conserved quantities: the total number $N$ of particles, the total momenta $\sum_{i=1}^{N} p_i^k$ for $k = 1, 2, 3$, and the total energy $H(p,q)$ given by (1.1). The hydrodynamic scaling in this context consists of rescaling space and time by a factor of $\varepsilon$. The rescaled space is the unit torus $T^3$ in 3 dimensions. The macroscopic quantities to be studied correspond to conserved quantities. The first of these is the density, measured by a function $\rho(t,x)$ of $t$ and $x$. For each $\varepsilon > 0$ it is approximated by $\rho^\varepsilon(t,x)$, defined by
$$\int_{T^3} J(x)\,\rho^\varepsilon(t,x)\,dx = \varepsilon^3 \sum_{i=1}^{N} J\bigl(\varepsilon\, q_i(\varepsilon^{-1}t)\bigr)$$
for smooth test functions $J$.
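The five conservation laws are exact features of the dynamics (1.2); in particular, total momentum is conserved because the pairwise forces come in equal and opposite pairs. A minimal numerical sketch of this (with a hypothetical compact-support potential $V(r) = (1-r^2)^2$ and velocity-Verlet integration; all parameters are illustrative, not from the text):

```python
import random

def forces(q):
    # Pairwise repulsive force for V(r) = (1 - r^2)^2 on r < 1 (even, compact support).
    # Force on i from j: -V'(r) (q_i - q_j)/r = 4 (1 - r^2) (q_i - q_j).
    n = len(q)
    F = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = [q[i][k] - q[j][k] for k in range(3)]
            r2 = sum(c * c for c in d)
            if r2 < 1.0:
                c = 4.0 * (1.0 - r2)
                for k in range(3):
                    F[i][k] += c * d[k]   # action ...
                    F[j][k] -= c * d[k]   # ... equals reaction
    return F

def verlet_step(q, p, dt):
    # One velocity-Verlet step for dq/dt = p, dp/dt = -grad V (unit masses).
    F = forces(q)
    for i in range(len(q)):
        for k in range(3):
            p[i][k] += 0.5 * dt * F[i][k]
            q[i][k] += dt * p[i][k]
    F = forces(q)
    for i in range(len(q)):
        for k in range(3):
            p[i][k] += 0.5 * dt * F[i][k]

random.seed(0)
q = [[2.0 * random.random() for _ in range(3)] for _ in range(5)]
p = [[random.random() - 0.5 for _ in range(3)] for _ in range(5)]
p_before = [sum(pi[k] for pi in p) for k in range(3)]
for _ in range(500):
    verlet_step(q, p, 0.002)
p_after = [sum(pi[k] for pi in p) for k in range(3)]
momentum_drift = max(abs(a - b) for a, b in zip(p_before, p_after))
```

Because the force increments cancel pairwise, the total momentum drift is at the level of floating-point rounding.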
Differentiating in $t$ and using the equations of motion (1.2),
$$\frac{d}{dt}\int_{T^3} J(x)\,\rho^\varepsilon(t,x)\,dx = \varepsilon^3 \sum_{i=1}^{N} \bigl\langle (\nabla J)\bigl(\varepsilon q_i(\varepsilon^{-1}t)\bigr),\; p_i(\varepsilon^{-1}t)\bigr\rangle \simeq \int_{T^3} \bigl\langle (\nabla J)(x),\; \rho(t,x)\,u(t,x)\bigr\rangle\,dx,$$
where $u = \{u^k\}$, $k = 1, 2, 3$, is the average velocity of the fluid. This introduces three other macroscopic variables, which represent the three coordinates of the momentum that are conserved. We can now write down the first of our five equations,
$$\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho u) = 0. \qquad (1.3)$$
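The passage from the last display to (1.3) is a one-line integration by parts on the torus; as a sketch, for any smooth test function $J$,

```latex
\frac{d}{dt}\int_{T^3} J\,\rho\,dx
  \;=\; \int_{T^3}\langle \nabla J,\; \rho u\rangle\,dx
  \;=\; -\int_{T^3} J\,\nabla\cdot(\rho u)\,dx ,
```

and since $J$ is arbitrary, $\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho u) = 0$, which is (1.3).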
To derive the next three equations we start with test functions $J$ and differentiate, again using (1.2), for $k = 1, 2, 3$:
$$\frac{d}{dt}\,\varepsilon^3\sum_{i=1}^{N} J\bigl(\varepsilon q_i(\varepsilon^{-1}t)\bigr)\,p_i^k(\varepsilon^{-1}t) = \varepsilon^3\sum_{i=1}^{N} p_i^k(\varepsilon^{-1}t)\,\bigl\langle(\nabla J)\bigl(\varepsilon q_i(\varepsilon^{-1}t)\bigr),\,p_i(\varepsilon^{-1}t)\bigr\rangle \qquad (4.1)$$
$$\qquad -\,\varepsilon^2\sum_{i=1}^{N}\sum_{j=1}^{N} J\bigl(\varepsilon q_i(\varepsilon^{-1}t)\bigr)\,V_k\bigl(q_i(\varepsilon^{-1}t)-q_j(\varepsilon^{-1}t)\bigr). \qquad (4.2)$$
The next step is rather mysterious and requires considerable explanation. Quantities
$$\sum_{i,j=1}^{N} V_k\bigl(q_i(t) - q_j(t)\bigr) \qquad\text{and}\qquad \sum_{i=1}^{N} p_i^k\, p_i^r$$
are not conserved. They depend on spacings between particles or velocities of individual particles that change on the microscopic time scale and hence do so rapidly on the macroscopic time scale. They can therefore be replaced by their space-time averages. By appealing to an ergodic theorem they can be replaced by ensemble averages with respect to their equilibrium distributions. The ensemble consists of an infinite collection of points $\{p_\alpha, q_\alpha\}$ in the phase space $\mathbb{R}^3\times\mathbb{R}^3$. There is a natural five-parameter family of measures $\mu_{\rho,u,T}$ that are invariant under spatial translation as well as under the Hamiltonian dynamics. The points $\{q_\alpha\}$ are distributed according to a Gibbs distribution with density $\rho$ and formal interaction
$$\frac{1}{2T}\sum_{\alpha\neq\beta} V(q_\alpha - q_\beta).$$
In other words, $\{q_\alpha\}$ is a point process obtained by taking the infinite-volume limit of $N = \varepsilon^{-3}$ particles distributed in a cube of side $\varepsilon^{-1}$ according to the joint density
$$\frac{1}{Z}\exp\Bigl[-\frac{1}{2T}\sum_{i,j=1}^{N} V(q_i - q_j)\Bigr].$$
Although we will not derive it, there is a similar equation for $e(t,x)$ that is obtained by differentiating
$$\frac{d}{dt}\,\frac{\varepsilon^3}{2}\sum_{i=1}^{N} J\bigl(\varepsilon q_i(\varepsilon^{-1}t)\bigr)\Bigl[\,|p_i(\varepsilon^{-1}t)|^2 + \sum_{j} V\bigl(q_i(\varepsilon^{-1}t)-q_j(\varepsilon^{-1}t)\bigr)\Bigr].$$
the first two moments given the initial configuration $(x_1, \ldots, x_{k_N})$:
$$E(U) = \frac{1}{N^3}\sum_i \int_{T^3} f(y)\,p(t, x_i, y)\,dy,$$
and an elementary calculation reveals that the expectation converges to the constant
$$\int_{T^3}\int_{T^3} f(y)\,p(t, x, y)\,\rho_0(x)\,dy\,dx = \int_{T^3} f(y)\,\rho(t, y)\,dy.$$
The independence clearly provides a uniform upper bound of order $N^{-3}$ for the variance, which therefore goes to 0. Of course on $T^3$ we could have had a process obtained by rescaling a random walk on a large torus of size $N$. Then the hydrodynamic scaling limit would be a consequence of the central limit theorem for the scaling limit of a single particle and of the law of large numbers resulting from averaging over a large number of independently moving particles. The situation could be different if the particles interacted with each other.
The next class of examples is called simple exclusion processes. They make sense on any finite or countable set $X$; for us $X$ will be either the integer lattice $\mathbb{Z}^d$ in $d$ dimensions or $\mathbb{Z}^d_N$, obtained from it as a quotient by considering each coordinate modulo $N$. At any given time a subset of the lattice sites is occupied by particles, with at most one particle at each site; in other words, some sites are empty while others are occupied by exactly one particle. The particles move randomly. Each particle waits for an exponential random time and then tries to jump from its current site $x$ to a new site $y$. The new site $y$ is picked randomly according to a probability distribution $p(x,y)$; in particular $\sum_y p(x,y) = 1$ for every $x$. Of course a jump to $y$ is not always possible. If the site is empty the jump is possible and is carried out. If the site already has a particle the jump cannot be carried out; the particle forgets about it and waits for another chance, i.e. for a new exponential waiting time.
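The dynamics just described can be sketched as a direct simulation. Everything below (a one-dimensional ring, nearest-neighbor $p(x,y)$, the seed) is an illustrative choice rather than anything fixed by the text:

```python
import random

def simulate_sep(N=20, n_particles=8, T=5.0, seed=1):
    # Simple exclusion on the ring Z_N: each particle carries a rate-1
    # exponential clock; when it rings, the particle proposes a jump to
    # y ~ p(x, .) (here nearest neighbor with probability 1/2 each) and
    # moves only if y is empty -- otherwise the attempt is discarded.
    rng = random.Random(seed)
    eta = [1] * n_particles + [0] * (N - n_particles)
    rng.shuffle(eta)
    occupied = [x for x in range(N) if eta[x]]
    t = 0.0
    while True:
        t += rng.expovariate(len(occupied))  # next ring among n rate-1 clocks
        if t > T:
            break
        x = rng.choice(occupied)             # whose clock rang (uniform)
        y = (x + rng.choice((-1, 1))) % N    # proposed site
        if eta[y] == 0:                      # exclusion rule
            eta[x], eta[y] = 0, 1
            occupied.remove(x)
            occupied.append(y)
    return eta

eta = simulate_sep()
```

The particle number is conserved and no site ever holds more than one particle, exactly as in the verbal description.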
If we normalize so that all waiting times have mean 1, the generator of the process can be written as
$$(\mathcal{A} f)(\eta) = \sum_{x,y} \eta(x)\,\bigl(1-\eta(y)\bigr)\,p(x,y)\,\bigl[f(\eta^{x,y}) - f(\eta)\bigr],$$
where $\eta$ represents the configuration, with $\eta(x) = 1$ if there is a particle at $x$ and $\eta(x) = 0$ otherwise. For each configuration $\eta$ and pair of sites $x, y$ the new configuration $\eta^{x,y}$ is defined by
$$\eta^{x,y}(z) = \begin{cases}\eta(y) & \text{if } z = x,\\ \eta(x) & \text{if } z = y,\\ \eta(z) & \text{if } z \neq x,\, y.\end{cases}$$
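A quick consistency check: applying this generator to the total particle number $f(\eta) = \sum_x \eta(x)$ gives exactly zero, since each exchange $\eta \to \eta^{x,y}$ preserves the number of occupied sites. A small sketch (a ring of 10 sites and nearest-neighbor $p$, both illustrative choices):

```python
import random

def swap(eta, x, y):
    # the configuration eta^{x,y}: occupations at x and y exchanged
    z = list(eta)
    z[x], z[y] = z[y], z[x]
    return z

def generator(f, eta, p):
    # (A f)(eta) = sum_{x,y} eta(x) (1 - eta(y)) p(x,y) [f(eta^{x,y}) - f(eta)]
    N = len(eta)
    return sum(eta[x] * (1 - eta[y]) * p(x, y) * (f(swap(eta, x, y)) - f(eta))
               for x in range(N) for y in range(N))

def p_nn(x, y, N=10):
    # nearest-neighbor walk on the ring Z_10
    return 0.5 if (y - x) % N in (1, N - 1) else 0.0

random.seed(2)
eta = [random.randint(0, 1) for _ in range(10)]
conserved = generator(sum, eta, p_nn)   # f = total particle number
```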
We will be concerned mainly with the situation where the set $X$ is $\mathbb{Z}^d$ or $\mathbb{Z}^d_N$, viewed naturally as an Abelian group, with $p(x,y)$ translation invariant and given by $p(x,y) = \pi(y-x)$ for some probability distribution $\pi(\cdot)$. It is convenient to assume that $\pi$ has finite support. There are various possibilities: $\pi(\cdot)$ is symmetric, i.e. $\pi(z) = \pi(-z)$; or, more generally, $\pi(\cdot)$ has mean zero, i.e. $\sum_z z\,\pi(z) = 0$; and finally $\sum_z z\,\pi(z) = m \neq 0$.
We shall first concentrate on the symmetric case. Let us look at the function
$$V_J(\eta) = \sum_x J(x)\,\eta(x)$$
and compute
$$(\mathcal{A}V_J)(\eta) = \sum_{x,y} \eta(x)\,(1-\eta(y))\,\pi(y-x)\,\bigl(J(y)-J(x)\bigr) = \sum_{x,y} \eta(x)\,\pi(y-x)\,\bigl(J(y)-J(x)\bigr) = \sum_{x} \eta(x)\,\bigl[(P-I)J\bigr](x) = V_{(P-I)J}(\eta).$$
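The middle equality, where the $\eta(x)\eta(y)$ terms drop out, uses the symmetry of $\pi$. The identity $(\mathcal{A}V_J)(\eta) = V_{(P-I)J}(\eta)$ can be checked numerically on a ring (nearest-neighbor symmetric $\pi$, random $\eta$ and $J$, all illustrative choices):

```python
import random

N = 12
pi = {1: 0.5, -1: 0.5}                    # symmetric: pi(z) = pi(-z)
random.seed(3)
eta = [random.randint(0, 1) for _ in range(N)]
J = [random.random() for _ in range(N)]

# (A V_J)(eta) = sum_{x,z} eta(x) (1 - eta(x+z)) pi(z) (J(x+z) - J(x))
lhs = sum(eta[x] * (1 - eta[(x + z) % N]) * w * (J[(x + z) % N] - J[x])
          for x in range(N) for z, w in pi.items())

# V_{(P-I)J}(eta) = sum_x eta(x) ((PJ)(x) - J(x)), (PJ)(x) = sum_z pi(z) J(x+z)
rhs = sum(eta[x] * (sum(w * J[(x + z) % N] for z, w in pi.items()) - J[x])
          for x in range(N))
```

For asymmetric $\pi$ the two sides would differ: the cancellation of the exclusion factor genuinely needs $\pi(z) = \pi(-z)$.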
The space of linear functionals is left invariant by the generator. It is not difficult to see that
$$E^{\eta}\bigl[V_J(\eta(t))\bigr] = V_{J(t)}(\eta),$$
where
$$J(t) = \exp\bigl[t(P-I)\bigr]\,J$$
is the solution of
$$\frac{d}{dt}J(t,x) = \bigl[(P-I)J\bigr](t,x).$$
4.2. Simple Exclusion Processes
It is almost as if the interaction has no effect, and in the calculation of expectations of one-particle functions it clearly does not. Let us start with a configuration on $\mathbb{Z}^d_N$ and scale space by $N$ and time by $N^2$. The generator becomes $N^2\mathcal{A}$, and the particles can be visualized as moving in a lattice embedded in the unit torus $T^d$, with spacing $\frac{1}{N}$, becoming dense as $N \to \infty$.
Let $J$ be a smooth function on $T^d$. We consider the functional
$$\frac{1}{N^d}\sum_x J\Bigl(\frac{x}{N}\Bigr)\,\eta(t,x),$$
essentially completing the proof in this case. Technically the empirical distribution $\nu_N(t)$ is viewed as a measure on $T^d$ and $\nu_N(\cdot)$ is viewed as a stochastic process with values in the space $\mathcal{M}(T^d)$ of nonnegative measures on $T^d$. In the limit it lives on the set of weak solutions of the heat equation
$$\frac{\partial\rho}{\partial t} = \frac{1}{2}\,\nabla\cdot C\nabla\rho,$$
and the uniqueness of such weak solutions for a given initial density establishes the validity of the hydrodynamic limit.
Let us make the problem slightly more complicated by adding a small bias. Let $q(z)$ be an odd function with $q(-z) = -q(z)$; we modify the problem by making $\pi$ depend on $N$, in the form
$$\pi_N(z) = \pi(z) + \frac{1}{N}\,q(z).$$
Assuming that $q$ is nonzero only when $\pi$ is, $\pi_N$ will be an admissible transition probability for large enough $N$. A calculation yields that, in the slightly modified model referred to as the weakly asymmetric simple exclusion model, $V_N$ is given by
$$V_N(\eta) \simeq V_{J_N}(\eta) + \frac{1}{N^d}\sum_x \eta(x)\,(1-\eta(x))\,\Bigl\langle m,\;(\nabla J)\Bigl(\frac{x}{N}\Bigr)\Bigr\rangle$$
with
$$m = \sum_z z\,q(z).$$
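As a sanity check, with nearest-neighbor $\pi$ and an odd $q$ (illustrative choices, not from the text), $\pi_N$ remains a probability distribution and acquires mean $m/N$:

```python
pi = {1: 0.5, -1: 0.5}
q = {1: 0.25, -1: -0.25}       # odd: q(-z) = -q(z)
N = 50

pi_N = {z: pi[z] + q[z] / N for z in pi}    # pi_N(z) = pi(z) + q(z)/N
total = sum(pi_N.values())                  # still sums to 1 (q is odd)
mean = sum(z * w for z, w in pi_N.items())  # bias of order 1/N
m = sum(z * qz for z, qz in q.items())      # m = sum_z z q(z) = 1/2
```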
If one thinks of $\rho(t,u)$ as the density of particles at the (macroscopic) time $t$ and space $u$, the first term clearly wants to have the limit
$$\int_{T^d} \frac{1}{2}\,(\nabla\cdot C\nabla J)(u)\,\rho(t,u)\,du.$$
It is not so clear what to do with the second term. The invariant measures in this model are the Bernoulli measures with various densities, and the averaged version of the second term should be
$$\int_{T^d} \bigl\langle m,\,(\nabla J)(u)\bigr\rangle\,\rho(t,u)\,\bigl(1-\rho(t,u)\bigr)\,du.$$
This amounts to replacing the linear heat equation by the nonlinear equation
$$\frac{\partial\rho}{\partial t} = \frac{1}{2}\,\nabla\cdot C\nabla\rho - \bigl\langle m,\,\nabla\bigl(\rho(1-\rho)\bigr)\bigr\rangle.$$
This requires justification, which will be the content of our next lecture.
Let us now turn to the case where $\pi$ has mean zero but is not symmetric. In this case
$$V_N(\eta) = N^{2-d}\sum_{x,y}\eta(x)\,(1-\eta(y))\,\pi(y-x)\,\Bigl[J\Bigl(\frac{y}{N}\Bigr)-J\Bigl(\frac{x}{N}\Bigr)\Bigr],$$
where
$$\xi_x = \eta(x)\sum_z \bigl(1-\eta(x+z)\bigr)\,z\,\pi(z) + \bigl(1-\eta(x)\bigr)\sum_z \eta(x-z)\,z\,\pi(z)$$
$$= -\eta(x)\sum_z \eta(x+z)\,z\,\pi(z) + \bigl(1-\eta(x)\bigr)\sum_z \eta(x-z)\,z\,\pi(z)$$
$$= \sum_z \eta(x-z)\,z\,\pi(z) - \eta(x)\sum_z \bigl(\eta(x+z)+\eta(x-z)\bigr)\,z\,\pi(z)$$
$$= \tau_x\,\xi_0,$$
with $\tau_x$ being the shift by $x$. In the symmetric case the second sum is zero, and $\xi_0$ can then be written as a gradient
$$\xi_0 = \sum_j \bigl(\tau_{e_j} - I\bigr)\,\zeta_j,$$
where the $e_j$ are shifts in the coordinate directions. This allows us to do a summation by parts and gain a factor of $N^{-1}$. When this is not the case, we have a nongradient model and the hydrodynamic limit can no longer be established by simple averaging.
and is positive definite. The simple exclusion process is defined by the generator
$$(\mathcal{L}f)(\eta) = \sum_{x,y}\pi(y-x)\,\eta(x)\,(1-\eta(y))\,\bigl[f(\eta^{x,y})-f(\eta)\bigr].$$
Here the state space consists of configurations $\eta : \mathbb{Z}^d_N \to \{0,1\}$; $\eta$ represents a configuration of particles with at most one particle per site. Let us recall the convention that $\eta(x) = 1$ if there is a particle at $x$ and $0$ otherwise, and that $\eta^{x,y}$ represents the configuration obtained by exchanging the occupations at sites $x$ and $y$:
$$\eta^{x,y}(z) = \begin{cases}\eta(y) & \text{if } z = x,\\ \eta(x) & \text{if } z = y,\\ \eta(z) & \text{otherwise.}\end{cases}$$
The initial configuration is a state $\eta_0 = \{\eta(0,x)\}$, and we assume that for some density $\rho_0(u)$ on the torus $T^d$, $0 \le \rho_0(u) \le 1$, we have
$$\lim_{N\to\infty}\frac{1}{N^d}\sum_{x\in\mathbb{Z}^d_N} J\Bigl(\frac{x}{N}\Bigr)\,\eta(0,x) = \int_{T^d} J(u)\,\rho_0(u)\,du.$$
We can compute, for $F_J(\eta) = \frac{1}{N^d}\sum_x J\bigl(\frac{x}{N}\bigr)\eta(x)$,
$$(N^2 \mathcal{L}_N F_J)(\eta) = N^{2-d}\sum_{x,y}\Bigl[J\Bigl(\frac{y}{N}\Bigr)-J\Bigl(\frac{x}{N}\Bigr)\Bigr]\,\eta(x)\,(1-\eta(y))\,\pi(y-x)$$
$$= \frac{N^{2-d}}{2}\sum_{x,y}\bigl[\eta(x)(1-\eta(y))-\eta(y)(1-\eta(x))\bigr]\Bigl[J\Bigl(\frac{y}{N}\Bigr)-J\Bigl(\frac{x}{N}\Bigr)\Bigr]\pi(y-x)$$
$$= \frac{N^{2-d}}{2}\sum_{x,y}\bigl[\eta(x)-\eta(y)\bigr]\Bigl[J\Bigl(\frac{y}{N}\Bigr)-J\Bigl(\frac{x}{N}\Bigr)\Bigr]\pi(y-x)$$
$$= \frac{N^{2-d}}{2}\sum_{x,z}\bigl[\eta(x)-\eta(x+z)\bigr]\Bigl[J\Bigl(\frac{x+z}{N}\Bigr)-J\Bigl(\frac{x}{N}\Bigr)\Bigr]\pi(z)$$
$$= \frac{N^{2-d}}{2}\sum_{x,z}\eta(x)\Bigl[J\Bigl(\frac{x+z}{N}\Bigr)-2J\Bigl(\frac{x}{N}\Bigr)+J\Bigl(\frac{x-z}{N}\Bigr)\Bigr]\pi(z)$$
$$\simeq \frac{1}{2N^d}\sum_x \eta(x)\,(\nabla\cdot C\nabla J)\Bigl(\frac{x}{N}\Bigr) = \frac{1}{2}\int_{T^d} (\nabla\cdot C\nabla J)(u)\,\nu_N(du).$$
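The final approximation above rests on the second difference $N^2[J(\frac{x+z}{N}) - 2J(\frac{x}{N}) + J(\frac{x-z}{N})]$ converging to the second derivative. A one-dimensional numerical sketch ($J = \sin 2\pi u$, nearest-neighbor $\pi$ so that $C = 1$; all choices illustrative):

```python
import math

N = 200
pi = {1: 0.5, -1: 0.5}             # C = sum_z z^2 pi(z) = 1

def J(u):
    return math.sin(2 * math.pi * u)

def Jpp(u):                         # exact second derivative of J
    return -(2 * math.pi) ** 2 * math.sin(2 * math.pi * u)

def second_difference(x):
    # N^2 sum_z pi(z) [J((x+z)/N) - 2 J(x/N) + J((x-z)/N)]
    return N * N * sum(w * (J((x + z) / N) - 2 * J(x / N) + J((x - z) / N))
                       for z, w in pi.items())

err = max(abs(second_difference(x) - Jpp(x / N)) for x in range(N))
```

The discrepancy is of order $N^{-2}$, consistent with the Taylor expansion used in the derivation.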
4.3. Symmetric Simple Exclusion
From our computation of $(N^2\mathcal{L}_N F_J)(\eta)$ it follows that if $J$ has two bounded derivatives, then
$$\sup_N\,\sup_\eta\,\bigl|(N^2\mathcal{L}_N F_J)(\eta)\bigr| \le C(J)$$
and
$$M_{J,N}(t) = F_J(\eta(t)) - F_J(\eta(0)) - \int_0^t (N^2\mathcal{L}_N F_J)(\eta(s))\,ds$$
is a martingale. Since $N^2\,|(\mathcal{L}_N F_J)(\eta)| \le C(J)$, the quadratic variation of the martingale $M_{J,N}(t)$ is easily estimated. The jumps are of size $\frac{C(J)}{N^{d+1}}$. The total jump rate is at most …
This is sufficient to prove the compactness of $Q_N$, and that any limit point $Q$ will have the property that, for any $J$,
$$\int_{T^d}J(u)\,\rho(t,u)\,du - \int_{T^d}J(u)\,\rho_0(u)\,du - \frac{1}{2}\int_0^t\!\!\int_{T^d}(\nabla\cdot C\nabla J)(u)\,\nu(s,du)\,ds = 0$$
a.e. $Q$.
Remark 4.3.2. The symmetry of $\pi(\cdot)$ played a crucial part: $\eta(x)(1-\eta(y)) - \eta(y)(1-\eta(x)) = \eta(x)-\eta(y)$ cancelled the nonlinearity. If we assumed only $\sum_z z\,\pi(z) = 0$, we would not have the second difference, and $N^2(\mathcal{L}_N F_J)(\eta)$ would be of size $N$.
It is not clear what to do with the second term. In the limit, when $\nu(du)$ becomes $\rho(u)\,du$, we would like this term to be
$$\int_{T^d}\rho(u)\,\bigl(1-\rho(u)\bigr)\,\bigl\langle m(s,u),\,(\nabla J)(u)\bigr\rangle\,du,$$
where
$$m(s,u) = \sum_z z\,q(s,u,z),$$
because we expect the statistics locally to reflect the Bernoulli distribution with the correct density. If we can accomplish this, we will establish the following theorem:
$$\frac{\partial\rho(t,u)}{\partial t} = \frac{1}{2}\,\nabla\cdot C\nabla\rho(t,u) - \nabla\cdot\bigl(\rho(t,u)\,(1-\rho(t,u))\,m(t,u)\bigr)$$
with $\rho(0,u) = \rho_0(u)$.
4.4. Large Deviations and Weak Asymmetry
But this requires justification, and the route is complicated. Various approximations are needed, and there are several measures: $P_N$, $Q_N$ and the perturbed ones $P_{N,q}$ and $Q_{N,q}$. Computations with $P_N$ are easier because it is a reversible Markov process; direct computations with $P_{N,q}$ are harder. Even while examining $P_N$ it is easier if we start in equilibrium, i.e. with a reversible invariant measure, the uniform distribution of $\rho N^d$ particles among the $N^d$ sites. One can use the Feynman-Kac formula and some variational methods to obtain estimates. We can then transfer the estimates from $P_N^{eq}$ to $P_N$ and $P_{N,q}$ using the entropy inequality.
We will deal with averages of all kinds, so let us set up some notation. If $f = f(\eta)$ is a local function on the configuration space $\Omega$ or $\Omega_N$, its averages will be denoted by
$$\bar f_{\ell,x} = \frac{1}{(2\ell+1)^d}\sum_{y:|y-x|\le\ell} f(\tau_y\eta).$$
Here $\eta(x)$ can be $\eta(s,x)$, representing the configuration at some time $s$, and we denote by $\tau_x\eta$ the configuration $(\tau_x\eta)(y) = \eta(x+y)$. The object we need to estimate is
$$\int_0^T e_N(\varepsilon, s)\,ds,$$
where
$$e_N(\varepsilon, s) = E^{P_{N,q}}\Bigl[\Bigl|\frac{1}{N^d}\sum_x J\Bigl(\frac{x}{N}\Bigr)\bigl[f_x(\eta(s)) - \hat f\bigl(\bar\eta_{N\varepsilon}(s,x)\bigr)\bigr]\Bigr|\Bigr]$$
and
$$\hat f(\rho) = E^{\mu_\rho}\bigl[f(\eta)\bigr]$$
is the expectation of the local function $f$ with respect to the product Bernoulli measure with density $\rho$.
Let $\mu_N(s, d\eta)$ be the marginal distribution on $\Omega_N$ of the configuration $\eta(s,\cdot)$ at time $s$, and $\bar\mu_N$ its space and time average. More precisely,
$$\int f(\eta)\,d\bar\mu_N(\eta) = \frac{1}{T}\int_0^T\Bigl[\frac{1}{N^d}\sum_x\int f(\tau_x\eta)\,\mu_N(s, d\eta)\Bigr]\,ds.$$
Lemma 4.4.3.
$$\lim_{k\to\infty}\limsup_{N\to\infty} E^{\bar\mu_N}\bigl[\,\bigl|\bar f_k(\eta) - \hat f(\bar\eta_k)\bigr|\,\bigr] = 0.$$
Lemma 4.4.4.
$$\lim_{k\to\infty}\limsup_{\varepsilon\to 0}\limsup_{N\to\infty} E^{\bar\mu_N}\bigl[\,\bigl|\bar\eta_{N\varepsilon} - \bar\eta_k\bigr|\,\bigr] = 0.$$
Since $J$ is smooth, the functional $\frac{1}{N^d}\sum_x J\bigl(\frac{x}{N}\bigr)\,f(\tau_x\eta)$ differs negligibly from
$$\frac{1}{N^d}\sum_x f(\tau_x\eta)\,\frac{1}{(2k+1)^d}\sum_{y:|y-x|\le k} J\Bigl(\frac{y}{N}\Bigr)$$
or
$$\frac{1}{N^d}\sum_x J\Bigl(\frac{x}{N}\Bigr)\,\bar f_{k,x}.$$
Lemma 4.4.4 then allows us to replace $\bar\eta_{k,x}$ with large $k$ by $\bar\eta_{N\varepsilon,x}$ with a small $\varepsilon$.
We now concentrate on the proof of the two lemmas. They depend on the following observations.
The function $f(a,b) = (\sqrt a - \sqrt b)^2$ is a convex function of $(a,b)\in\mathbb{R}^2_+$. This is checked by computing the Hessian
$$\frac{1}{2}\begin{pmatrix} b^{\frac12}a^{-\frac32} & -(ab)^{-\frac12}\\ -(ab)^{-\frac12} & a^{\frac12}b^{-\frac32}\end{pmatrix},$$
which is nonnegative definite.
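Convexity can also be probed numerically via the midpoint inequality $f\bigl(\frac{P+Q}{2}\bigr) \le \frac{f(P)+f(Q)}{2}$ on random pairs of points (a quick sketch, not a proof):

```python
import math
import random

def f(a, b):
    # the function whose Hessian was just computed
    return (math.sqrt(a) - math.sqrt(b)) ** 2

random.seed(4)
violations = 0
for _ in range(1000):
    P = (random.uniform(0.01, 10.0), random.uniform(0.01, 10.0))
    Q = (random.uniform(0.01, 10.0), random.uniform(0.01, 10.0))
    mid = ((P[0] + Q[0]) / 2.0, (P[1] + Q[1]) / 2.0)
    if f(*mid) > (f(*P) + f(*Q)) / 2.0 + 1e-12:
        violations += 1
```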
The invariant distributions for $\mathcal{A}$ are all reversible and form a convex set with extreme points $w_{N,k}$, which are the uniform distributions over the $\binom{N^d}{k}$ configurations in $\Omega_N$ with a given number $k = \sum_{x\in\mathbb{Z}^d_N}\eta(x)$ of occupied sites.
Let us denote by $\lambda_N(k) = \mu_N\bigl\{\sum_{x\in\mathbb{Z}^d_N}\eta(x) = k\bigr\}$. Then the invariant distribution $w_N$ that corresponds to $\mu_N$ is the convex combination $w_N = \sum_k \lambda_N(k)\,w_{N,k}$. If we write $\mu_N(\eta) = f(\eta)\,w_N(\eta)$, then $\sqrt f$ is in $L_2(w_N)$ with $\|\sqrt f\|_{L_2(w_N)} = 1$ and $D(\sqrt f) = D(\sqrt{\mu_N})$.
An estimate of the form $D(\sqrt{\mu_N})\le CN^{d-2}$ has many consequences. If the $\mu_N(s)$ are such that $\int_0^T D\bigl(\sqrt{\mu_N(s)}\bigr)\,ds\le CN^{d-2}$, then the average $\bar\mu_N$ over space and time, defined by
$$\bar\mu_N = \frac{1}{T}\,\frac{1}{N^d}\int_0^T\sum_x \tau_x\,\mu_N(s)\,ds,$$
satisfies
$$D(\sqrt{\bar\mu_N}) \le \frac{C}{T}\,N^{d-2}.$$
Since there is spatial homogeneity,
$$\sup_x\;\sum_{y:|x-y|=1}\sum_\eta\Bigl(\sqrt{\bar\mu_N(\eta^{x,y})}-\sqrt{\bar\mu_N(\eta)}\Bigr)^2 \le \frac{C}{T}\,N^{-2},$$
and the elementary inequality
$$(a_1+\cdots+a_\ell)^2 \le \ell\sum_{j=1}^{\ell} a_j^2$$
allows us to estimate
$$\sum_\eta\Bigl(\sqrt{\bar\mu_N(\eta^{x,x+z})}-\sqrt{\bar\mu_N(\eta)}\Bigr)^2 \le \frac{C}{T}\,|z|^2\,N^{-2}.$$
This shows that if two microscopically large blocks are macroscopically close, then their empirical densities are close with probability nearly 1. This implies Lemma 4.4.4.
All we need to do now is control the Dirichlet form. Our generator is of the form
$$\mathcal{A}_{N,q} = \mathcal{A} + \frac{1}{N}\,\mathcal{S}_{N,q},$$
the sum of the symmetric part $\mathcal{A}$ and the weak skew-symmetric perturbation $\mathcal{S}_{N,q}$. The entropy is defined by
$$H(\mu) = \sum_\eta \mu(\eta)\,\log\frac{\mu(\eta)}{\alpha(\eta)},$$
where $\alpha(\eta) = c = c(N,\rho) = \binom{N^d}{k_N}^{-1}$ is the uniform distribution of $k_N = \rho N^d$ particles on $\mathbb{Z}^d_N$. It is invariant for the evolution under $\mathcal{A}$.
Let us note that $\mu_N^t$ evolves according to the forward equation $\frac{d\mu_N^t}{dt} = N^2\mathcal{A}^*_{N,q}\,\mu_N^t$. Therefore, for $H_N(t) = H(\mu_N^t)$ we have
$$\frac{dH_N(t)}{dt} = N^2\sum_\eta \log\frac{\mu_N^t(\eta)}{c}\,\bigl(\mathcal{A}^*_{N,q}\mu_N^t\bigr)(\eta) + \sum_\eta \frac{d}{dt}\mu_N^t(\eta)$$
(the second term vanishes since the total mass is one)
$$= N^2\sum_\eta \bigl(\mathcal{A}_{N,q}\log\mu_N^t(\eta)\bigr)\,\mu_N^t(\eta)$$
$$= N^2\sum_\eta\sum_{x,y}\Bigl[\pi(y-x)+\frac{1}{N}\,q\Bigl(t,\frac{x}{N},y-x\Bigr)\Bigr]\,\log\frac{\mu_N^t(\eta^{x,y})}{\mu_N^t(\eta)}\,\eta(x)\,(1-\eta(y))\,\mu_N^t(\eta)$$
$$= N^2\sum_\eta\sum_{x,y}\pi(y-x)\Bigl[1+\frac{q(t,\frac{x}{N},y-x)}{N\,\pi(y-x)}\Bigr]\,\log\frac{\mu_N^t(\eta^{x,y})}{\mu_N^t(\eta)}\,\eta(x)\,(1-\eta(y))\,\mu_N^t(\eta).$$
Denoting $c_N = c_N(t,u,y-x) = 1 + \frac{q(t,u,y-x)}{N\,\pi(y-x)}$ and using the inequality
$$x\log y \le 2\,\bigl[x\log x - x + 1\bigr] + 2\,\bigl[\sqrt y - 1\bigr]$$
(verify that $\sup_x\,\bigl[x\log y - 2(x\log x - x + 1)\bigr] = 2(\sqrt y - 1)$, attained at $x = \sqrt y$),
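The inequality, and the fact that it is tight at $x = \sqrt y$, are easy to confirm numerically on a grid (a quick spot-check, not a proof):

```python
import math

def gap(x, y):
    # RHS - LHS of  x log y <= 2 (x log x - x + 1) + 2 (sqrt(y) - 1)
    return 2 * (x * math.log(x) - x + 1) + 2 * (math.sqrt(y) - 1) - x * math.log(y)

# grid over x, y in (0, 5]; the gap should never be (meaningfully) negative
worst = min(gap(0.01 * i, 0.01 * j) for i in range(1, 501) for j in range(1, 501))
at_optimum = gap(math.sqrt(2.0), 2.0)   # at x = sqrt(y) the inequality is tight
```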
we obtain
$$\frac{dH_N(t)}{dt} \le 2N^2\sum_\eta\sum_{x,y}\pi(y-x)\,\bigl[c_N\log c_N - c_N + 1\bigr]\,\mu_N^t(\eta) + 2N^2\sum_\eta\sum_{x,y}\pi(y-x)\Bigl[\sqrt{\mu_N^t(\eta^{x,y})}-\sqrt{\mu_N^t(\eta)}\Bigr]\sqrt{\mu_N^t(\eta)}.$$
Since $c_N - 1$ is of order $N^{-1}$,
$$2N^2\,\bigl(c_N\log c_N - c_N + 1\bigr) \simeq \Bigl[\frac{q(t,\frac{x}{N},y-x)}{\pi(y-x)}\Bigr]^2.$$
We can also change $\eta \to \eta^{x,y}$ in the summation; it just maps $\Omega_N$ onto itself. Assuming $|q(t,u,z)| \le C\,\pi(z)$, we end up with
$$\frac{dH_N(t)}{dt} \le CN^d - N^2\sum_\eta\sum_{x,y}\pi(y-x)\Bigl(\sqrt{\mu_N^t(\eta^{x,y})}-\sqrt{\mu_N^t(\eta)}\Bigr)^2.$$
Then
$$\lim_{k\to\infty}\limsup_{\varepsilon\to 0}\limsup_{N\to\infty}\;\sup_{\mu_N\in M_{N,C}} E^{\bar\mu_N}\bigl[d_{N,k,\varepsilon}(\eta)\bigr] = 0.$$
Let $\{P_x\}$, $x \in X$, be a Markov family on the finite state space $X$, i.e. measures on $D[[0,\infty); X]$ corresponding to a generator
$$(\mathcal{A}f)(x) = \sum_y \bigl[f(y)-f(x)\bigr]\,q(x,y).$$
Assume that $\alpha(x)$ is an invariant probability distribution for the Markov process, and that the process is reversible with respect to $\alpha$, i.e. $\mathcal{A}$ is self-adjoint in $L_2(\alpha)$. This is the same as $q(x,y)\,\alpha(x) = q(y,x)\,\alpha(y)$. The Dirichlet form is given by
$$-\langle \mathcal{A}f, f\rangle = -\sum_{x,y}\bigl[f(y)-f(x)\bigr]\,f(x)\,q(x,y)\,\alpha(x) = \frac{1}{2}\sum_{x,y}\bigl[f(y)-f(x)\bigr]^2\,q(x,y)\,\alpha(x).$$
4.5. Super-Exponential Estimates
is the solution of
$$\frac{du(t,x)}{dt} = (\mathcal{A}u)(t,x) + V(x)\,u(t,x); \qquad u(0,x) = f(x).$$
It follows that
$$\frac{d\|u(t,\cdot)\|_2^2}{dt} = \bigl\langle(\mathcal{A}+\mathcal{A}^*)u(t) + 2V u(t),\,u(t)\bigr\rangle = 2\langle V u(t), u(t)\rangle - 2D(u(t)) \le 2\lambda_{\mathcal{A}}(V)\,\|u(t)\|_2^2,$$
providing the estimate
$$\|u(t,\cdot)\|_2^2 \le \exp\bigl[2t\,\lambda_{\mathcal{A}}(V)\bigr]\,\|f\|_2^2,$$
where
$$\lambda_{\mathcal{A}}(V) = \sup_{u:\|u\|_2=1}\bigl[\langle V u, u\rangle - D(u)\bigr].$$
Take $g(x) = k\,1_A(x)$ with $k = \log\frac{1}{P(A)}$. Then
$$Q(A) \le \frac{1}{k}\Bigl[H(Q;P) + \log\int e^{k\,1_A(x)}\,dP\Bigr] = \frac{1}{k}\Bigl[H(Q;P) + \log\bigl[e^k P(A) + P(A^c)\bigr]\Bigr] \le \frac{1}{k}\bigl[H(Q;P) + \log 2\bigr] \le \frac{H(Q;P)+1}{\log\frac{1}{P(A)}}.$$
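Both the underlying variational inequality $E^Q[g] \le H(Q;P) + \log E^P[e^g]$ and the resulting bound $Q(A) \le \bigl(H(Q;P) + \log 2\bigr)/\log\frac{1}{P(A)}$ can be spot-checked on a small finite space (random $P$, $Q$, $g$; all data illustrative):

```python
import math
import random

def rel_entropy(Q, P):
    # H(Q; P) = sum_x Q(x) log(Q(x)/P(x))
    return sum(q * math.log(q / p) for q, p in zip(Q, P) if q > 0)

random.seed(5)
n = 6
w = [random.random() + 0.1 for _ in range(n)]
P = [x / sum(w) for x in w]
v = [random.random() + 0.1 for _ in range(n)]
Q = [x / sum(v) for x in v]
H = rel_entropy(Q, P)

# variational inequality: E^Q[g] <= H(Q;P) + log E^P[e^g]
g = [random.uniform(-2.0, 2.0) for _ in range(n)]
left = sum(q * gi for q, gi in zip(Q, g))
right = H + math.log(sum(p * math.exp(gi) for p, gi in zip(P, g)))

# consequence with g = k 1_A and k = log(1/P(A)), here A = {0}
QA, PA = Q[0], P[0]
bound = (H + math.log(2.0)) / math.log(1.0 / PA)
```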
then we can get estimates under $\bar P$. But each point carries a mass of $2^{-N^d}$. If we have super-exponential estimates under $\bar P$, we have them uniformly under $P^\eta$ for $\eta \in \Omega_N$. We can take $V(\eta) = e_{N,k}(f,\eta)$ or $V(\eta) = d_{N,k,\varepsilon}(\eta)$ and use Tchebychev's inequality on
$$E^{\bar P}\Bigl[\exp\Bigl[N^d\int_0^T V(\eta(s))\,ds\Bigr]\Bigr]$$
to get
$$\limsup_{N\to\infty}\frac{1}{N^d}\log \bar P\Bigl[\int_0^T V(\eta(s))\,ds \ge \ell\Bigr] \le -\bigl[\ell - \lambda(V)\bigr].$$
N N 0
1
= C b(t, u)(t, u)(1 (t, u)) = 0; (0, u) = 0 (u) (4.5)
t 2
with a bounded continuous b(t, u). Then for initial condition 0 compatible with 0 and for
any neighborhood G of () in C[[0, T ]; M(T d )]
& T &
1 1
lim inf d log PN0 [() G] -C 1 b, b.(t, u)(1 (t, u))dtdu
N N 2 0 Td
while minimizing
$$\sum_z\frac{[q(t,u,z)]^2}{\pi(z)}.$$
The choice is
$$q(t,u,z) = \bigl\langle C^{-1}b(t,u),\,z\bigr\rangle\,\pi(z),$$
and the minimum is $\langle C^{-1}b, b\rangle$. Here $C = \{C_{i,j}\}$ is the covariance matrix of $\pi(\cdot)$. We next need to prove compactness in $C[[0,T]; \mathcal{M}(T^d)]$ for the perturbed process. We will use repeatedly inequality (4.4); in particular, if $\log\frac{1}{P(A)} \gg N^d$ and $H(Q,P) = O(N^d)$, then $Q(A) \ll 1$. To prove compactness we will show that a super-exponential tightness estimate holds in equilibrium. Then any limit will be concentrated on the weak solutions of (4.5), and we need only prove their uniqueness. The entropy will then have the correct limit.
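The variational choice of $q$ above can be checked directly in one dimension: among perturbations $\delta$ with $\sum_z z\,\delta(z) = 0$, the candidate $q(z) = \langle C^{-1}b, z\rangle\,\pi(z)$ meets the constraint, attains the value $\langle C^{-1}b, b\rangle$, and is never beaten ($\pi$ below is an illustrative finite-support distribution):

```python
import random

pi = {-2: 0.2, -1: 0.3, 1: 0.3, 2: 0.2}
C = sum(z * z * w for z, w in pi.items())        # scalar covariance, here 2.2
b = 0.7

q_star = {z: (b / C) * z * w for z, w in pi.items()}  # q(z) = <C^{-1} b, z> pi(z)

def cost(q):
    return sum(q[z] ** 2 / pi[z] for z in pi)

constraint = sum(z * q_star[z] for z in pi)      # should equal b
minimum = cost(q_star)                            # should equal b^2 / C

# perturb in feasible (zero z-mean, odd) directions: the cost never decreases
random.seed(6)
never_beaten = True
for _ in range(200):
    c = random.uniform(-0.5, 0.5)
    delta = {1: 2 * c, 2: -c, -1: -2 * c, -2: c}  # sum_z z delta(z) = 0
    q_pert = {z: q_star[z] + delta[z] for z in pi}
    if cost(q_pert) < minimum - 1e-12:
        never_beaten = False
```

The cross term in the expansion of the cost vanishes on feasible directions, which is exactly why the Lagrange-multiplier form $\langle C^{-1}b, z\rangle\,\pi(z)$ is optimal.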
where
$$A_{N,J}(s) = N^2\sum_{x,y}\pi(y-x)\,\Bigl[e^{J(\frac{y}{N})-J(\frac{x}{N})}-1\Bigr]\,\eta(s,x)\,\bigl(1-\eta(s,y)\bigr)$$
$$= \frac{N^2}{2}\sum_{x,y}\pi(y-x)\Bigl[\bigl(e^{J(\frac{y}{N})-J(\frac{x}{N})}-1\bigr)\,\eta(s,x)\,(1-\eta(s,y)) + \bigl(e^{J(\frac{x}{N})-J(\frac{y}{N})}-1\bigr)\,\eta(s,y)\,(1-\eta(s,x))\Bigr]$$
$$\simeq \frac{N^2}{2}\sum_{x,y}\pi(y-x)\,\Bigl(J\Bigl(\frac{y}{N}\Bigr)-J\Bigl(\frac{x}{N}\Bigr)\Bigr)\bigl(\eta(s,x)-\eta(s,y)\bigr)$$
$$\quad + \frac{N^2}{4}\sum_{x,y}\pi(y-x)\,\Bigl(J\Bigl(\frac{y}{N}\Bigr)-J\Bigl(\frac{x}{N}\Bigr)\Bigr)^2\bigl[\eta(x)(1-\eta(y))+\eta(y)(1-\eta(x))\bigr]$$
$$\simeq \frac{1}{N^d}\sum_{x\in\mathbb{Z}^d_N}\Bigl[\frac{1}{2}\,(\nabla\cdot C\nabla J)\Bigl(\frac{x}{N}\Bigr)\,\eta(s,x) + \frac{1}{4}\sum_{i,j=1}^{d} D_iJ\Bigl(\frac{x}{N}\Bigr)\,D_jJ\Bigl(\frac{x}{N}\Bigr)\,f^{i,j}_x(\eta)\Bigr],$$
where
$$f^{i,j}_x(\eta) = \sum_z \pi(z)\,z_i z_j\,\bigl[\eta(x)\,(1-\eta(x+z)) + \eta(x+z)\,(1-\eta(x))\bigr].$$
The exponential martingale has many uses. The first one is super-exponential tightness.
Lemma 4.6.1. Let $J(u)$ be a smooth function on $T^d$. Let $P^\eta_N$ be the simple exclusion process with initial state $\eta$. Then for any $\delta > 0$,
$$\limsup_{t\to 0}\,\limsup_{N\to\infty}\,\sup_\eta\;\frac{1}{N^d}\log P^\eta_N\Bigl[\sup_{0\le s\le t}\Bigl|\frac{1}{N^d}\sum_{x\in\mathbb{Z}^d_N}J\Bigl(\frac{x}{N}\Bigr)\bigl[\eta(s,x)-\eta(0,x)\bigr]\Bigr|\ge\delta\Bigr] = -\infty.$$
Proof.
$$M_{N,J}(t) = \frac{1}{N^d}\sum_{x\in\mathbb{Z}^d_N}J\Bigl(\frac{x}{N}\Bigr)\bigl[\eta(t,x)-\eta(0,x)\bigr] - \int_0^t A_{N,J}(s)\,ds$$
is a martingale, where
$$A_{N,J}(s) = \frac{N^2}{2}\,\frac{1}{N^d}\sum_{x,z}\pi(z)\Bigl[J\Bigl(\frac{x+z}{N}\Bigr)-2J\Bigl(\frac{x}{N}\Bigr)+J\Bigl(\frac{x-z}{N}\Bigr)\Bigr]\eta(s,x),$$
where
$$B_{N,J}(s) = N^2\sum_{x,y}\Bigl[e^{J(\frac{y}{N})-J(\frac{x}{N})}-1-\Bigl(J\Bigl(\frac{y}{N}\Bigr)-J\Bigl(\frac{x}{N}\Bigr)\Bigr)\Bigr]\pi(y-x)\,\eta(s,x)\,\bigl(1-\eta(s,y)\bigr) \le C(J)\,N^d.$$
Hence
$$\limsup_{N\to\infty}\frac{1}{N^d}\log P_N\Bigl[\sup_{0\le s\le t}M_{N,J}(s)\ge\delta\Bigr] \le -\delta + C(J)\,t$$
and
$$\limsup_{t\to0}\,\limsup_{N\to\infty}\,\sup_\eta\;\frac{1}{N^d}\log P_N\Bigl[\sup_{0\le s\le t}M_{N,J}(s)\ge\delta\Bigr] \le -\delta,$$
and
$$P\bigl[d\bigl(x(t-0),\,x(t+0)\bigr)\ge\delta\bigr] = 0.$$
Then for any integer $k$ and $\lambda > 0$,
$$P\Bigl[\sup_{\substack{|s-t|\le\theta\\ 0\le s,t\le T}} d\bigl(X(t),X(s)\bigr)\ge 4\varepsilon+\delta\Bigr] \le k\,P(\theta,\varepsilon) + e^{\lambda T}\,E\bigl[e^{-\lambda\tau_k}\bigr],$$
$$\limsup_{\theta\to0} P(\theta,\varepsilon) = 0,$$
$$P[\tau_k < T] \le e^{\lambda T}\,E\bigl[e^{-\lambda\tau_k}\bigr] \le e^{\lambda T}\,\kappa^k,$$
where $\kappa$ is an upper bound on
$$\frac{1+e^{-\lambda\theta_0}}{2}$$
for some positive $\theta_0$.
Lemma 4.6.3. Given any $\ell < \infty$ there exists a compact set $K_\ell \subset D[[0,T]; \mathcal{M}(X)]$ such that
$$\limsup_{N\to\infty}\,\sup_\eta\;\frac{1}{N^d}\log P^\eta_N\bigl[K_\ell^c\bigr] \le -\ell.$$
The following lemma allows us to use super-exponential estimates in evaluating integrals.
Lemma 4.6.4. Suppose for a sequence of distributions $P_N$ on some Polish space $X$ we have the estimate
$$\limsup_{N\to\infty}\frac{1}{N^d}\log\int\exp\bigl[N^d F_N(x)\bigr]\,dP_N \le C,$$
where $\{F_N\}$ is a sequence of continuous functions bounded by $C$. Suppose for each $\varepsilon > 0$, $\{G_{N,\varepsilon}\}$ is another sequence of continuous functions on $X$, again uniformly bounded by $C$, such that for any $\delta > 0$
$$\limsup_{\varepsilon\to0}\,\limsup_{N\to\infty}\frac{1}{N^d}\log P_N\bigl[\,|F_N - G_{N,\varepsilon}|\ge\delta\,\bigr] = -\infty.$$
If
$$\limsup_{N\to\infty}\frac{1}{N^d}\,H(Q_N;P_N) \le H,$$
then for any weak limit $Q$ of $\{Q_N\}$ and corresponding limit $G$ of $\{G_{N,\varepsilon}\}$,
$$E^Q\bigl[G(x)\bigr] \le C + H.$$
Proof.
$$E^{Q_N}\bigl[G_{N,\varepsilon}(x)\bigr] \le \frac{1}{N^d}\Bigl(H(Q_N;P_N) + \log E^{P_N}\bigl[\exp[N^d\,G_{N,\varepsilon}(x)]\bigr]\Bigr).$$
On the other hand, writing $G_{N,\varepsilon} = F_N + (G_{N,\varepsilon}-F_N)$ and using H\"older's inequality, for $0<\gamma<1$,
$$\log E^{P_N}\bigl[\exp[N^d G_{N,\varepsilon}(x)]\bigr] \le \gamma\,\log E^{P_N}\Bigl[\exp\Bigl[\frac{N^d}{\gamma}F_N(x)\Bigr]\Bigr] + (1-\gamma)\,\log E^{P_N}\Bigl[\exp\Bigl[\frac{N^d}{1-\gamma}\,|G_{N,\varepsilon}-F_N|\Bigr]\Bigr].$$
The super-exponential estimate implies that for any $0<\gamma<1$,
$$\limsup_{N\to\infty}\frac{1}{N^d}\log E^{P_N}\Bigl[\exp\Bigl[\frac{N^d}{1-\gamma}\,|G_{N,\varepsilon}-F_N|\Bigr]\Bigr] = 0.$$
Therefore
$$E^Q[G] \le \limsup_{N\to\infty}E^{Q_N}\bigl[G_{N,\varepsilon}\bigr] \le \frac{1}{\gamma}\,[H+C].$$
We can let $\varepsilon \to 0$ and $\gamma \to 1$ to complete the proof.
$$F_J = \sum_{x\in\mathbb{Z}^d_N}\Bigl[J\Bigl(T,\frac{x}{N}\Bigr)\eta(T,x) - J\Bigl(0,\frac{x}{N}\Bigr)\eta(0,x) - \int_0^T J_t\Bigl(t,\frac{x}{N}\Bigr)\eta(t,x)\,dt\Bigr]$$
$$\qquad - \int_0^T N^2\sum_{x,y\in\mathbb{Z}^d_N}\Bigl[e^{J(t,\frac{y}{N})-J(t,\frac{x}{N})}-1\Bigr]\,\eta(t,x)\,\bigl(1-\eta(t,y)\bigr)\,\pi(y-x)\,dt,$$
$$E^{P_N}\bigl[\exp[F_J(\eta(\cdot,\cdot))]\bigr] = 1.$$
This implies, with the help of (4.3), that
$$E^{Q_N}\Bigl[\frac{1}{N^d}\,F_J\Bigr] \le \frac{1}{N^d}\,H(Q_N;P_N).$$
Letting $N\to\infty$, with $H = \lim_{N\to\infty}\frac{H(Q_N;P_N)}{N^d}$,
$$E^{Q}\Bigl[\int_{T^d}J(T,u)\,\rho(T,u)\,du - \int_{T^d}J(0,u)\,\rho(0,u)\,du - \int_0^T\!\!\int_{T^d}J_t(t,u)\,\rho(t,u)\,dt\,du$$
$$\quad - \frac{1}{2}\int_0^T\!\!\int_{T^d}(\nabla\cdot C\nabla J)(t,u)\,\rho(t,u)\,dt\,du - \frac{1}{2}\int_0^T\!\!\int_{T^d}\langle C\nabla J, \nabla J\rangle\,\rho(t,u)\,(1-\rho(t,u))\,dt\,du\Bigr] \le H.$$
One of the things we should remember while carrying out large deviation estimates is that if
$$\limsup\frac{1}{N^d}\log E^{P}\bigl[e^{N^d f_i}\bigr] \le 0, \quad i = 1, 2,$$
then
$$\limsup\frac{1}{N^d}\log E^{P}\bigl[e^{N^d \max(f_1,f_2)}\bigr] \le 0.$$
Just notice that $e^{\max(f_1,f_2)} \le e^{f_1}+e^{f_2}$, and the sum of two exponentials that do not grow does not grow either. This is easily extended to a finite sum. It is now easy to see that if we have a family $\{f_\alpha\}$ of continuous functions and two sequences of probability measures $P_N$ and $Q_N$ such that $Q_N \to Q$ weakly,
$$\limsup\frac{1}{N^d}\,H(Q_N;P_N) \le H$$
and
$$\limsup\frac{1}{N^d}\log E^{P_N}\bigl[e^{N^d f_\alpha}\bigr] \le 0$$
for every $\alpha \in A$, then of course
$$\sup_{\alpha\in A} E^Q[f_\alpha] \le H,$$
and this can be improved to
$$E^Q\bigl[\sup_{\alpha\in A} f_\alpha\bigr] \le H.$$
It follows from our previous discussion that the above inequality is valid if we replace $A$ by any finite subset of $A$; but then by monotone convergence it is true for $A$. The rest is routine. If $Q$ is any limit point of $\{Q_N\}$ with $\frac{H(Q_N;P_N)}{N^d} \to H$, we have
Lemma 4.6.5.
$$E^{Q}\Bigl[\sup_J\Bigl(\int_{T^d}J(T,u)\,\rho(T,u)\,du - \int_{T^d}J(0,u)\,\rho(0,u)\,du - \int_0^T\!\!\int_{T^d}J_t(t,u)\,\rho(t,u)\,dt\,du$$
$$\quad - \frac{1}{2}\int_0^T\!\!\int_{T^d}(\nabla\cdot C\nabla J)(t,u)\,\rho(t,u)\,dt\,du - \frac{1}{2}\int_0^T\!\!\int_{T^d}\langle C\nabla J, \nabla J\rangle\,\rho(t,u)\,(1-\rho(t,u))\,dt\,du\Bigr)\Bigr] \le H. \qquad (4.6)$$
In particular,
$$\frac{\partial\rho}{\partial t} \in L_2\bigl[[0,T]; H_{-1}(T^d)\bigr] \qquad (4.7)$$
and
$$\rho \in L_2\bigl[[0,T]; H_{1}(T^d)\bigr]. \qquad (4.8)$$
Proof. We just have to make sure that estimate (4.6) is enough to provide (4.7) and (4.8). Since $\rho(1-\rho) \le 1$, (4.6) implies a bound on
$$E^Q\Bigl[\frac{1}{2}\int_0^T\Bigl\|\rho_t - \frac{1}{2}\nabla\cdot C\nabla\rho\Bigr\|_{-1}^2\,dt\Bigr]$$
$$= E^Q\Bigl[\sup_J\Bigl(\int_{T^d}J(T,u)\,\rho(T,u)\,du - \int_{T^d}J(0,u)\,\rho(0,u)\,du - \int_0^T\!\!\int_{T^d}J_t(t,u)\,\rho(t,u)\,dt\,du$$
$$\quad - \frac{1}{2}\int_0^T\!\!\int_{T^d}(\nabla\cdot C\nabla J)(t,u)\,\rho(t,u)\,dt\,du - \frac{1}{2}\int_0^T\!\!\int_{T^d}\langle C\nabla J, \nabla J\rangle\,dt\,du\Bigr)\Bigr] \le H,$$
where
$$G_J = \int_{T^d}\bigl[J(T,u)\,\rho(T,u) - J(0,u)\,\rho(0,u)\bigr]\,du - \int_0^T\!\!\int_{T^d}J_t(t,u)\,\rho(t,u)\,dt\,du$$
$$\quad - \frac{1}{2}\int_0^T\!\!\int_{T^d}(\nabla\cdot C\nabla J)(t,u)\,\rho(t,u)\,dt\,du - \frac{1}{2}\int_0^T\!\!\int_{T^d}\langle C\nabla J, \nabla J\rangle\,\rho(t,u)\,(1-\rho(t,u))\,dt\,du$$
and where
$$\|f\|^2_{-1,C\rho(1-\rho)} = \sup_J\Bigl[2\int_{T^d}J(u)\,f(u)\,du - \int_{T^d}\bigl\langle C(\nabla J)(u),\,(\nabla J)(u)\bigr\rangle\,\rho(u)\,(1-\rho(u))\,du\Bigr]. \qquad (4.9)$$
Finally we have to match the upper bound with the lower bound.
Lemma 4.7.1.
$$I(\rho(\cdot,\cdot)) = \inf_b\;\frac{1}{2}\int_0^T\!\!\int_{T^d}\bigl\langle C^{-1}b(t,u),\,b(t,u)\bigr\rangle\,\rho(t,u)\,(1-\rho(t,u))\,dt\,du = \frac{1}{2}\int_0^T\Bigl\|\rho_t - \frac{1}{2}\nabla\cdot C\nabla\rho\Bigr\|^2_{-1,C\rho(1-\rho)}\,dt.$$
Proof. First let us assume that $\rho(t,u)$ is smooth and satisfies $0 < c_1 \le \rho \le c_2 < 1$. We can do the variational problem in (4.9), with $\rho_t - \frac{1}{2}\nabla\cdot C\nabla\rho = f(t,u)$ a smooth function. It results in solving
$$\nabla\cdot\bigl(\rho(t,u)\,(1-\rho(t,u))\,C\nabla J\bigr) = f(t,u),$$
and the rate function equals
$$\frac{1}{2}\int_0^T\!\!\int_{T^d}\langle C\nabla J, \nabla J\rangle\,\rho(t,u)\,(1-\rho(t,u))\,dt\,du.$$
4.7. Upper Bound
We have finally established the large deviation principle for the processes $P_N^{\eta_N}$ on $D[[0,T]; \mathcal{M}(T^d)]$, under the assumption that $\frac{1}{N^d}\sum_{x\in\mathbb{Z}^d_N} \eta_N(x)\,\delta_{\frac{x}{N}} \to \rho_0(u)\,du$ weakly in $\mathcal{M}(T^d)$.
Theorem 4.7.2. The large deviation principle holds for $P_N^{\eta_N}$ with the rate function
$$I(\rho(\cdot,\cdot)) = \frac{1}{2}\int_0^T\Bigl\|\rho_t - \frac{1}{2}\nabla\cdot C\nabla\rho\Bigr\|^2_{-1,C\rho(1-\rho)}\,dt.$$