Chapter 4

Hydrodynamic Scaling

4.1 From Classical Mechanics to Euler Equations.

The basic example of hydrodynamic scaling is naturally hydrodynamics itself. Let us start with a collection of $N \simeq \ell^3$ classical particles in a large periodic cube $\Lambda$ of side $\ell$ in $\mathbf{R}^3$. The motion of the particles is governed by the equations of motion of a classical Hamiltonian dynamical system with energy given by
\[
H(p,q) = \frac{1}{2}\sum_{i=1}^{N}\|p_i\|^2 + \frac{1}{2}\sum_{i\neq j}V(q_i-q_j) \tag{1.1}
\]
Here $q_i \in \Lambda$ is the position of the $i$-th particle, $p_i \in \mathbf{R}^3$ is its velocity, and $k = 1,2,3$ refers to the three components of position or velocity. The repulsive potential $V \geq 0$ is even and has compact support in $\mathbf{R}^3$; in particular the interaction is short range. The equations of motion are
\[
\frac{dq_i^k}{dt} = \frac{\partial H}{\partial p_i^k} = p_i^k,\qquad
\frac{dp_i^k}{dt} = -\frac{\partial H}{\partial q_i^k} = -\sum_{j\neq i}V_k(q_i-q_j) \tag{1.2}
\]
where
\[
V_k(q) = \frac{\partial V(q)}{\partial q^k}
\]
is the $k$-th component of the gradient of $V$. The dynamical system has five conserved quantities: the total number $N$ of particles, the total momenta $\sum_{i=1}^{N}p_i^k$ for $k=1,2,3$, and the total energy $H(p,q)$ given by (1.1). The hydrodynamic scaling in this context consists of rescaling space and time by a factor of $\ell$. The rescaled space is the unit torus $\mathbf{T}^3$ in three dimensions. The macroscopic quantities to be studied correspond to conserved quantities. The first of these is the density; it is measured by a function $\rho(t,x)$ of $t$ and $x$, and for each $\ell < \infty$ it is approximated by $\rho_\ell(t,x)$, defined by
\[
\int_{\mathbf{T}^3}J(x)\,\rho_\ell(t,x)\,dx = \frac{1}{\ell^3}\sum_{i=1}^{N}J\Big(\frac{q_i(\ell t)}{\ell}\Big)
\]

A straightforward differentiation using (1.2) yields
\[
\frac{d}{dt}\int_{\mathbf{T}^3}J(x)\,\rho_\ell(t,x)\,dx
= \frac{d}{dt}\,\frac{1}{\ell^3}\sum_{i=1}^{N}J\Big(\frac{q_i(\ell t)}{\ell}\Big)
= \frac{1}{\ell^3}\sum_{i=1}^{N}(\nabla J)\Big(\frac{q_i(\ell t)}{\ell}\Big)\cdot p_i(\ell t)
\simeq \int_{\mathbf{T}^3}(\nabla J)(x)\cdot\rho_\ell(t,x)\,u(t,x)\,dx
\]

where $u = \{u^k\},\ k = 1,2,3$, is the average velocity of the fluid. This introduces three more macroscopic variables, representing the three coordinates of the momenta, which are conserved. We can now write down the first of our five equations:
\[
\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\,u) = 0 \tag{1.3}
\]
To derive the next three equations we start with test functions $J$ and differentiate, again using (1.2), for $k = 1,2,3$:
\[
\frac{d}{dt}\,\frac{1}{\ell^3}\sum_{i=1}^{N}J\Big(\frac{q_i(\ell t)}{\ell}\Big)p_i^k(\ell t)
= \frac{1}{\ell^3}\sum_{i=1}^{N}p_i^k(\ell t)\,(\nabla J)\Big(\frac{q_i(\ell t)}{\ell}\Big)\cdot p_i(\ell t)
- \frac{1}{\ell^2}\sum_{i=1}^{N}\sum_{j=1}^{N}J\Big(\frac{q_i(\ell t)}{\ell}\Big)V_k\big(q_i(\ell t)-q_j(\ell t)\big) \tag{1.4}
\]

The next step is rather mysterious and requires considerable explanation. Quantities such as
\[
\sum_{i,j=1}^{N}V_k\big(q_i(t)-q_j(t)\big)
\qquad\text{and}\qquad
\sum_{i=1}^{N}p_i^k\,p_i^r
\]
are not conserved. They depend on spacings between particles or velocities of individual particles that change on the microscopic time scale and hence do so rapidly on the macroscopic time scale. They can therefore be replaced by their space-time averages, and by appealing to an ergodic theorem these can in turn be replaced by ensemble averages with respect to their equilibrium distributions. The ensemble consists of an infinite collection of points $\{p_\alpha, q_\alpha\}$ in the phase space $\mathbf{R}^3\times\mathbf{R}^3$. There is a natural five-parameter family of measures $\mu_{\rho,u,T}$ that are invariant under spatial translation as well as under the Hamiltonian dynamics. The points $\{q_\alpha\}$ are distributed according to a Gibbs distribution with density $\rho$ and formal interaction
\[
\frac{1}{2T}\sum_{\alpha,\beta}V(q_\alpha - q_\beta)
\]
In other words $\{q_\alpha\}$ is a point process obtained by taking the infinite volume limit of $N = \rho\ell^3$ particles distributed in a cube of side $\ell$ according to the joint density
\[
\frac{1}{Z}\exp\Big[-\frac{1}{2T}\sum_{i,j=1}^{N}V(q_i-q_j)\Big]
\]
where $Z$ is the normalization constant. The velocities $\{p_\alpha\}$ are distributed independently of each other, as well as of $\{q_\alpha\}$, with a common 3-dimensional Gaussian distribution with mean $u$ and covariance $T\,I$. Assuming that the infinite volume limit exists in a reasonable sense, it will be a point process defined as an infinite volume Gibbs measure $\mu_{\rho,T}$. The velocities $\{p_\alpha\}$ will be an independent Gaussian ensemble $\lambda_{u,T}$.
The first term in (1.4) involves sums of $p_i^k p_i^r$ that are replaced by their expectations $u^k u^r + T\,\delta_{k,r}$ in the Gaussian ensemble.
If we now use the skew-symmetry of $V_k = \frac{\partial V}{\partial q^k}$, we can rewrite the second term of (1.4) as
\[
-\frac{1}{2\ell^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\Big(J\big(\tfrac{q_i(\ell t)}{\ell}\big)-J\big(\tfrac{q_j(\ell t)}{\ell}\big)\Big)V_k\big(q_i(\ell t)-q_j(\ell t)\big)
\]
\[
\simeq -\frac{1}{2\ell^3}\sum_{i=1}^{N}\sum_{j=1}^{N}\sum_r J_r\big(\tfrac{q_i(\ell t)}{\ell}\big)\big(q_i^r(\ell t)-q_j^r(\ell t)\big)V_k\big(q_i(\ell t)-q_j(\ell t)\big)
\]
\[
= -\frac{1}{\ell^3}\sum_{i=1}^{N}\sum_{j=1}^{N}\sum_r J_r\big(\tfrac{q_i(\ell t)}{\ell}\big)\,\Phi_k^r\big(q_i(\ell t)-q_j(\ell t)\big)
\simeq -\int\sum_r\frac{\partial J}{\partial x_r}(x)\,P_k^r\big(\rho(t,x),T(t,x)\big)\,dx
\]
where $\Phi_k^r(q) = \frac{1}{2}q^r\,V_k(q)$ and $P_k^r(\rho,T)$ is the pressure per unit volume in the Gibbs ensemble
\[
P_k^r(\rho,T) = \lim_{\ell\to\infty}E^{\mu_{\rho,T}}\Big[\frac{1}{\ell^3}\sum_{\substack{|q_\alpha|\leq\ell\\|q_\beta|\leq\ell}}\Phi_k^r(q_\alpha-q_\beta)\Big]
\]

We now integrate by parts, remove the test function $J$, and obtain
\[
\frac{\partial}{\partial t}(\rho u) + \nabla\cdot\big(\rho\,u\otimes u + \rho T\,I + P(\rho,T)\big) = 0 \tag{1.5}
\]
There is an equation of state that expresses the total energy per unit volume $e$ as
\[
e(\rho,u,T) = \frac{1}{2}\rho\big(|u|^2 + 3T\big) + f(\rho,T) \tag{1.6}
\]
where $f(\rho,T)$, the potential energy per unit volume, is given by
\[
f(\rho,T) = \lim_{\ell\to\infty}E^{\mu_{\rho,T}}\Big[\frac{1}{2\ell^3}\sum_{\substack{|q_\alpha|\leq\ell\\|q_\beta|\leq\ell}}V(q_\alpha-q_\beta)\Big]
\]

Although we will not derive it, there is a similar equation for $e(t,x)$ that is obtained by differentiating
\[
\frac{d}{dt}\,\frac{1}{2\ell^3}\sum_{i=1}^{N}J\Big(\frac{q_i(\ell t)}{\ell}\Big)\Big[|p_i(\ell t)|^2 + \sum_{j=1}^{N}V\big(q_i(\ell t)-q_j(\ell t)\big)\Big]
\]
and proceeding in a similar fashion. It looks like
\[
\frac{\partial e}{\partial t} + \nabla\cdot\big((e+\rho T)\,u + P(\rho,T)\,u\big) = 0 \tag{1.7}
\]
The five equations ((1.3), (1.5) and (1.7)) in five variables (actually in six variables with one relation (1.6)) form a symmetrizable first order system of non-linear hyperbolic conservation laws. Given smooth initial data they have local solutions.
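As a concrete illustration of the conserved quantities, the equations of motion (1.2) can be integrated numerically with a velocity Verlet scheme. The particular potential $V(r) = (1-|r|)^2$ for $|r|<1$ (even, repulsive, compactly supported), the system size, and the step size below are illustrative choices, not from the text; the point is that the pairwise action-reaction structure conserves total momentum exactly.

```python
import numpy as np

def forces(q, L, rc=1.0):
    # Pairwise forces -grad V with V(r) = (1 - r)^2 for r < rc = 1,
    # on a periodic cube of side L (minimum-image convention).
    F = np.zeros_like(q)
    for i in range(q.shape[0]):
        d = q - q[i]                       # d[j] = q_j - q_i
        d -= L * np.round(d / L)           # periodic boundary
        r = np.linalg.norm(d, axis=1)
        m = (r > 0) & (r < rc)
        # force magnitude -dV/dr = 2(1 - r), directed away from j
        F[i] = -np.sum(2 * (1 - r[m])[:, None] * d[m] / r[m][:, None], axis=0)
    return F

def verlet(q, p, L, dt, steps):
    # Velocity Verlet integration of dq/dt = p, dp/dt = -grad V
    F = forces(q, L)
    for _ in range(steps):
        p_half = p + 0.5 * dt * F
        q = (q + dt * p_half) % L
        F = forces(q, L)
        p = p_half + 0.5 * dt * F
    return q, p

rng = np.random.default_rng(0)
L, N = 5.0, 20
q = rng.random((N, 3)) * L
p = rng.normal(size=(N, 3))
P0 = p.sum(axis=0)
q, p = verlet(q, p, L, dt=1e-3, steps=200)
# Total momentum is conserved exactly by the antisymmetric pair forces.
print(np.allclose(p.sum(axis=0), P0))
```

The total energy is conserved only approximately (to order $dt^2$) by the integrator, which is why the exactly conserved momenta make the cleaner check.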
A rigorous derivation of these equations does not exist. The required ergodicity is definitely plausible, but hard to establish. The reason is that we have essentially an infinite system of ordinary differential equations representing a classical deterministic dynamical system, and the ergodicity properties are nearly impossible to establish in any general context. However, if we had some noise in the system, i.e. stochastic dynamics instead of deterministic dynamics, then we would be concerned with the ergodic theory of Markov processes of some kind, which is far more accessible. This will be the focus of our future lectures.

4.2 Simple Exclusion Processes


The simplest example is a system of noninteracting particles undergoing independent motions. For instance we could have, on $\mathbf{T}^3$, $k_N \simeq N^3$ particles behaving collectively like independent Brownian particles. If the initial configuration of the $k_N$ particles is such that the empirical distribution
\[
\nu_0(dx) = \frac{1}{N^3}\sum_i \delta_{x_i}
\]

has a deterministic limit $\rho_0(x)\,dx$, then the empirical distribution
\[
\nu_t(dx) = \frac{1}{N^3}\sum_i \delta_{x_i(t)}
\]
of the configuration at time $t$ has a deterministic limit $\rho(t,x)\,dx$ as $N\to\infty$, and $\rho(t,x)$ can be obtained from $\rho_0(x)$ by solving the heat equation
\[
\frac{\partial\rho}{\partial t} = \frac{1}{2}\Delta\rho
\]
with the initial condition $\rho(0,x) = \rho_0(x)$. The proof is an elementary law of large numbers argument involving a calculation of two moments. Let $f(x)$ be a continuous function on $\mathbf{T}^3$ and let us calculate, for
\[
U = \frac{1}{N^3}\sum_i f(x_i(t)),
\]
the first two moments given the initial configuration $(x_1,\dots,x_{k_N})$:
\[
E(U) = \frac{1}{N^3}\sum_i\int_{\mathbf{T}^3}f(y)\,p(t,x_i,y)\,dy
\]
and an elementary calculation reveals that the expectation converges to the constant
\[
\int_{\mathbf{T}^3}\int_{\mathbf{T}^3}f(y)\,p(t,x,y)\,\rho_0(x)\,dy\,dx = \int_{\mathbf{T}^3}f(y)\,\rho(t,y)\,dy
\]

The independence provides a uniform upper bound of order $N^{-3}$ for the variance, which clearly goes to $0$. Of course on $\mathbf{T}^3$ we could instead have a process obtained by rescaling a random walk on a large torus of size $N$. Then the hydrodynamic scaling limit would be a consequence of the central limit theorem for the scaling limit of a single particle and the law of large numbers resulting from the averaging over a large number of independently moving particles. The situation could be different if the particles interacted with each other.
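The two-moment argument can be seen numerically: the empirical average of a test function over many independent walkers has fluctuations that shrink as the number of particles grows. The lattice size, time horizon, and test function below are arbitrary illustrative choices.

```python
import numpy as np

def empirical_average(M, steps, N, rng):
    # M independent simple random walkers on the discrete torus Z_N,
    # started uniformly; steps ~ N^2 corresponds to macroscopic time 1.
    x = rng.integers(0, N, size=M)
    jumps = rng.choice([-1, 1], size=(steps, M))
    x = (x + jumps.sum(axis=0)) % N
    return np.cos(2 * np.pi * x / N).mean()   # test function J on the torus

rng = np.random.default_rng(0)
N, steps = 50, 2500
small = [empirical_average(10, steps, N, rng) for _ in range(200)]
large = [empirical_average(1000, steps, N, rng) for _ in range(200)]
# Averaging over more particles shrinks the fluctuations (law of large numbers);
# the variance of the mean is of order 1/M.
print(np.var(small) > np.var(large))
```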
The next class of examples are called simple exclusion processes. They make sense on any finite or countable set $X$, and for us $X$ will be either the integer lattice $\mathbf{Z}^d$ in $d$ dimensions or $\mathbf{Z}_N^d$ obtained from it as a quotient by considering each coordinate modulo $N$. At any given time a subset of these lattice sites is occupied by particles, with at most one particle at each site. In other words some sites are empty while others are occupied by one particle. The particles move randomly. Each particle waits for an exponential random time and then tries to jump from the current site $x$ to a new site $y$. The new site $y$ is picked randomly according to a probability distribution $p(x,y)$; in particular $\sum_y p(x,y) = 1$ for every $x$. Of course a jump to $y$ is not always possible. If the site is empty the jump is possible and is carried out. If the site already has a particle the jump cannot be carried out and the particle forgets about it and waits for another chance, i.e. waits for a new exponential waiting time.
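The jump-and-exclusion rule can be sketched directly. The discrete-time sweep below is a crude stand-in for the exponential clocks, and the lattice size and kernel are illustrative; the sketch shows the two structural facts used later, conservation of particle number and the hard-core constraint.

```python
import numpy as np

def step_exclusion(eta, shifts, probs, rng):
    # One sweep: each currently occupied site, in random order, attempts one
    # jump drawn from the kernel p(x, y) = probs over y - x = shifts.
    N = len(eta)
    for x in rng.permutation(np.flatnonzero(eta)):
        z = rng.choice(shifts, p=probs)
        y = (x + z) % N
        if eta[x] == 1 and eta[y] == 0:       # exclusion rule
            eta[x], eta[y] = 0, 1             # carry out the jump
    return eta

rng = np.random.default_rng(1)
N = 20
eta = np.zeros(N, dtype=int)
eta[:10] = 1                                  # ten particles on Z_20
rng.shuffle(eta)
n0 = eta.sum()
for _ in range(100):
    eta = step_exclusion(eta, np.array([-1, 1]), np.array([0.5, 0.5]), rng)
# Particle number is conserved and no site ever holds two particles.
print(eta.sum() == n0, eta.max() <= 1)
```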

If we normalize so that all waiting times have mean $1$, the generator of the process can be written down as
\[
(\mathcal{A}f)(\eta) = \sum_{x,y}\eta(x)\big(1-\eta(y)\big)\,p(x,y)\,\big[f(\eta^{x,y}) - f(\eta)\big]
\]
where $\eta$ represents the configuration, with $\eta(x) = 1$ if there is a particle at $x$ and $\eta(x) = 0$ otherwise. For each configuration $\eta$ and pair of sites $x,y$, the new configuration $\eta^{x,y}$ is defined by
\[
\eta^{x,y}(z) =
\begin{cases}
\eta(y) & \text{if } z = x\\
\eta(x) & \text{if } z = y\\
\eta(z) & \text{if } z \neq x,y
\end{cases}
\]

We will be concerned mainly with the situation where the set $X$ is $\mathbf{Z}^d$ or $\mathbf{Z}_N^d$, viewed naturally as an Abelian group, with $p(x,y)$ translation invariant and given by $p(x,y) = \pi(y-x)$ for some probability distribution $\pi(\cdot)$. It is convenient to assume that $\pi$ has finite support. There are various possibilities: $\pi(\cdot)$ is symmetric, i.e. $\pi(z) = \pi(-z)$; or, more generally, $\pi(\cdot)$ has mean zero, i.e. $\sum_z z\,\pi(z) = 0$; and finally $\sum_z z\,\pi(z) = m \neq 0$.
We shall first concentrate on the symmetric case. Let us look at the function
\[
V_J(\eta) = \sum_x J(x)\,\eta(x)
\]
and compute
\[
(\mathcal{A}V_J)(\eta) = \sum_{x,y}\eta(x)\big(1-\eta(y)\big)\,\pi(y-x)\,\big(J(y)-J(x)\big)
= \sum_{x,y}\eta(x)\,\pi(y-x)\,\big(J(y)-J(x)\big)
= \sum_x \eta(x)\,[(P-I)J](x)
= V_{(P-I)J}(\eta)
\]
where in the second step the term quadratic in $\eta$ drops out by the symmetry of $\pi$. The space of linear functionals is thus left invariant by the generator. It is not difficult to see that
\[
E^{\eta}\big[V_J(\eta(t))\big] = V_{J(t)}(\eta)
\]
where
\[
J(t) = \exp[t(P-I)]\,J
\]
is the solution of
\[
\frac{d}{dt}J(t,x) = (P-I)J(t,x)
\]

It is almost as if the interaction has no effect, and in the calculation of expectations of one-particle functions it clearly does not. Let us start with a configuration on $\mathbf{Z}_N^d$ and scale space by $N$ and time by $N^2$. The generator becomes $N^2\mathcal{A}$ and the particles can be visualized as moving in a lattice embedded in the unit torus $\mathbf{T}^d$, with spacing $\frac{1}{N}$, becoming dense as $N\to\infty$.
Let $J$ be a smooth function on $\mathbf{T}^d$. We consider the functional
\[
F_J(\eta) = \frac{1}{N^d}\sum_x J\Big(\frac{x}{N}\Big)\,\eta(x)
\]
and, setting $\xi(t) = F_J(\eta(t))$, we can write
\[
\xi(t) - \xi(0) = \int_0^t V_N(\eta(s))\,ds + M_N(t)
\]
where
\[
V_N(\eta) = N^2(\mathcal{A}F_J)(\eta) = F_{J_N}(\eta)
\]
with
\[
(J_N)(u) = N^2\sum_z\Big[J\Big(u+\frac{z}{N}\Big) - J(u)\Big]\,\pi(z) \simeq \frac{1}{2}(\Delta_C J)(u)
\]
for $u\in\mathbf{T}^d$. Here $\Delta_C$ refers to the Laplacian
\[
\Delta_C = \sum_{i,j}C_{i,j}\,\frac{\partial^2}{\partial x_i\,\partial x_j}
\]
with the covariance matrix $C$ given by
\[
C_{i,j} = \sum_z z_i z_j\,\pi(z)
\]

$M_N(t)$ is a martingale and a very elementary calculation yields
\[
E\big[(M_N(t))^2\big] \leq C\,t\,N^{-d}
\]
essentially completing the proof in this case. Technically, the empirical distribution is viewed as a measure on $\mathbf{T}^d$ and $\nu_N(\cdot)$ is viewed as a stochastic process with values in the space $\mathcal{M}(\mathbf{T}^d)$ of nonnegative measures on $\mathbf{T}^d$. In the limit it lives on the set of weak solutions of the heat equation
\[
\frac{\partial\rho}{\partial t} = \frac{1}{2}\Delta_C\,\rho
\]
and the uniqueness of such weak solutions for a given initial density establishes the validity of the hydrodynamic limit.

Let us make the problem slightly more complicated by adding a small bias. Let $q(z)$ be an odd function, $q(-z) = -q(z)$, and let us modify the problem by making $\pi$ depend on $N$ in the form
\[
\pi_N(z) = \pi(z) + \frac{1}{N}q(z)
\]
Assuming that $q$ is nonzero only where $\pi$ is, $\pi_N$ will be an admissible transition probability for large enough $N$. A calculation yields that in this slightly modified model, referred to as the weakly asymmetric simple exclusion model, $V_N$ is given by
\[
V_N(\eta) \simeq F_{J_N}(\eta) + \frac{1}{N^d}\sum_x\sum_z q(z)\,\eta(x)\big(1-\eta(x+z)\big)\Big\langle z,\,(\nabla J)\Big(\frac{x}{N}\Big)\Big\rangle
\]
with
\[
m = \sum_z z\,q(z)
\]
If one thinks of $\rho(t,u)$ as the density of particles at the (macroscopic) time $t$ and space $u$, the first term clearly wants to have the limit
\[
\int_{\mathbf{T}^d}\frac{1}{2}(\Delta_C J)(u)\,\rho(t,u)\,du
\]
It is not so clear what to do with the second term. The invariant measures in this model are the Bernoulli measures with various densities, and the averaged version of the second term should be
\[
\int_{\mathbf{T}^d}\big\langle m,(\nabla J)(u)\big\rangle\,\rho(t,u)\big(1-\rho(t,u)\big)\,du
\]
replacing the linear heat equation by the nonlinear equation
\[
\frac{\partial\rho}{\partial t} = \frac{1}{2}\Delta_C\,\rho - \nabla\cdot\big(m\,\rho(1-\rho)\big)
\]
This requires justification, which will be the content of our next lecture.
Let us now turn to the case where $\pi$ has mean zero but is not symmetric. In this case
\[
V_N(\eta) = N^{2-d}\sum_{x,y}\eta(x)\big(1-\eta(y)\big)\,\pi(y-x)\,\Big[J\Big(\frac{y}{N}\Big) - J\Big(\frac{x}{N}\Big)\Big]
\]
and we get stuck at this point. If $\pi$ is symmetric, as we saw, we gain a factor of $N^{-2}$. Otherwise the gain is only a factor of $N^{-1}$, which is not enough. We seem to end up with
\[
\frac{1}{N^d}\sum_x\sum_y \eta(x)\,\Big\langle\frac{1}{2}\Big[(\nabla J)\Big(\frac{x}{N}\Big) + (\nabla J)\Big(\frac{y}{N}\Big)\Big],\; N\big(1-\eta(y)\big)(y-x)\,\pi(y-x)\Big\rangle
\simeq \frac{1}{2N^d}\sum_x\Big\langle(\nabla J)\Big(\frac{x}{N}\Big),\,N\,\omega_x\Big\rangle
\]

where
! !
x = [(x) (1 (x + z))z p(z) + (1 (x)) (x z)z p(z)]
z z
! !
= [(x) (x + z)z p(z) + (1 (x)) (x z)z p(z)]
z z
! !
=[ (x z)z p(z) (x) ((x + z) + (x z))z p(z)]
z z
= x 0

with x being the shift by x. In the symmetric case, the second sum is zero and 0 can
then be written as a gradient !
0 = ej j j
j

where ej are shifts in the coordinate directions. This allows us to do summation by parts
and gain a factor of N 1 . When this is not the case, we have a nongradient model and
the hydrodynamic limit can no longer be established by simple averaging.

4.3 Symmetric Simple Exclusion.


We will now begin the analysis of the symmetric simple exclusion process. We have a probability distribution $\pi(z)$ on $\mathbf{Z}^d$ that is symmetric and compactly supported. We will also assume that its support generates the entire $\mathbf{Z}^d$. The covariance matrix $C$ is defined by
\[
\langle C\lambda,\lambda\rangle = \sum_z\langle z,\lambda\rangle^2\,\pi(z)
\]
and is positive definite. The simple exclusion process is defined by the generator
\[
(\mathcal{L}f)(\eta) = \sum_{x,y}\pi(y-x)\,\eta(x)\big(1-\eta(y)\big)\big[f(\eta^{x,y}) - f(\eta)\big]
\]
Here the state space $\Omega_N$ consists of configurations $\eta:\mathbf{Z}_N^d\to\{0,1\}$; $\eta$ represents a configuration of particles with at most one particle per site. Let us recall that the convention is that $\eta(x) = 1$ if there is a particle at $x$ and $0$ otherwise, and that $\eta^{x,y}$ represents the configuration obtained by exchanging the situation at sites $x$ and $y$:
\[
\eta^{x,y}(z) =
\begin{cases}
\eta(y) & \text{if } z = x\\
\eta(x) & \text{if } z = y\\
\eta(z) & \text{otherwise}
\end{cases}
\]

The initial configuration is a state $\eta_0 = \{\eta(0,x)\}$ and we assume that for some density $\rho_0(u)$ on the torus $\mathbf{T}^d$, $0 \leq \rho_0(u) \leq 1$, we have
\[
\lim_{N\to\infty}\frac{1}{N^d}\sum_{x\in\mathbf{Z}_N^d}J\Big(\frac{x}{N}\Big)\,\eta(0,x) = \int_{\mathbf{T}^d}J(u)\,\rho_0(u)\,du
\]
for every continuous function $J:\mathbf{T}^d\to\mathbf{R}$. $\rho_0(\cdot)$ should be thought of as the macroscopic density profile. We speed time up by a factor of $N^2$ and let $P_N = P_{N,\eta_0}$ be the probability measure on the space $D[[0,T];\Omega_N]$. We can map the configuration $\eta$ to the measure $\nu_N(du) = \frac{1}{N^d}\sum_x\eta(x)\,\delta_{\frac{x}{N}}$ on $\mathbf{T}^d$. This induces a measure $Q_N$ on the space $D[[0,T];\mathcal{M}(\mathbf{T}^d)]$. We will prove the following theorem, which is quite elementary.
Theorem 4.3.1. As $N\to\infty$, the distributions $Q_N$ converge weakly to the degenerate distribution concentrated on the trajectory $\rho(t,u)$ which is the unique solution of the heat equation
\[
\frac{\partial\rho}{\partial t} = \frac{1}{2}\sum_{i,j=1}^{d}C_{i,j}\,\frac{\partial^2\rho}{\partial u_i\,\partial u_j}
\]
with the initial condition $\rho(0,u) = \rho_0(u)$.


Proof. Consider the function
\[
F_J(\eta) = \frac{1}{N^d}\sum_{x\in\mathbf{Z}_N^d}J\Big(\frac{x}{N}\Big)\,\eta(x)
\]
We can compute
\[
(N^2\mathcal{L}_N F_J)(\eta) = N^{2-d}\sum_{x,y}\Big[J\Big(\frac{y}{N}\Big) - J\Big(\frac{x}{N}\Big)\Big]\eta(x)\big(1-\eta(y)\big)\,\pi(y-x)
\]
\[
= \frac{N^{2-d}}{2}\sum_{x,y}\big[\eta(x)(1-\eta(y)) - \eta(y)(1-\eta(x))\big]\Big[J\Big(\frac{y}{N}\Big) - J\Big(\frac{x}{N}\Big)\Big]\pi(y-x)
\]
\[
= \frac{N^{2-d}}{2}\sum_{x,y}\big[\eta(x) - \eta(y)\big]\Big[J\Big(\frac{y}{N}\Big) - J\Big(\frac{x}{N}\Big)\Big]\pi(y-x)
\]
\[
= \frac{N^{2-d}}{2}\sum_{x,z}\big[\eta(x) - \eta(x+z)\big]\Big[J\Big(\frac{x+z}{N}\Big) - J\Big(\frac{x}{N}\Big)\Big]\pi(z)
\]
\[
= \frac{N^{2-d}}{2}\sum_{x,z}\eta(x)\Big[J\Big(\frac{x+z}{N}\Big) - 2J\Big(\frac{x}{N}\Big) + J\Big(\frac{x-z}{N}\Big)\Big]\pi(z)
\]
\[
\simeq \frac{1}{2N^d}\sum_x\eta(x)\,(\Delta_C J)\Big(\frac{x}{N}\Big)
= \frac{1}{2}\int(\Delta_C J)(u)\,\nu_N(du)
\]

We now establish two things: the sequence $Q_N$ is compact as probability distributions on $D[[0,T];\mathcal{M}(\mathbf{T}^d)]$, and any limit $Q$ is concentrated on the set of weak solutions of the heat equation with the correct initial condition. Since there is only one solution, we have weak convergence to the degenerate distribution concentrated at that solution, as claimed.
We will show compactness in $D[[0,T];\mathcal{M}(\mathbf{T}^d)]$. The topology on $\mathcal{M}(\mathbf{T}^d)$ is that of weak convergence, and the space is compact. If we use a continuous test function $J$, the jump sizes are at most $N^{-d}\sup_{|x-y|\leq\frac{C}{N}}|J(x)-J(y)|$, where $C$ is the size of the support of $\pi(\cdot)$. It is therefore sufficient to get a uniform estimate on
\[
\delta_{N,J}(\sigma,\epsilon) = \sup_\eta\,P_{N,\eta}\Big[\sup_{0\leq t\leq\sigma}\big|F_J(\eta(t)) - F_J(\eta(0))\big| \geq \epsilon\Big]
\]
and show that for any smooth $J$ and $\epsilon > 0$,
\[
\lim_{\sigma\to 0}\;\limsup_{N\to\infty}\;\delta_{N,J}(\sigma,\epsilon) = 0
\]

From our computation of $(N^2\mathcal{L}_N F_J)(\eta)$ it follows that if $J$ has two bounded derivatives, then
\[
\sup_N\,\sup_\eta\,N^2\big|(\mathcal{L}_N F_J)(\eta)\big| \leq C(J)
\]
From the generator $N^2\mathcal{L}_N$ we conclude that
\[
M_{J,N}(t) = F_J(\eta(t)) - F_J(\eta(0)) - \int_0^t N^2(\mathcal{L}_N F_J)(\eta(s))\,ds
\]
is a martingale. Since $N^2|(\mathcal{L}_N F_J)(\eta)| \leq C(J)$, the quadratic variation of the martingale $M_{J,N}(t)$ is easily estimated: the jumps are of size $\frac{C(J)}{N^{d+1}}$ and the total jump rate is at most $N^{d+2}$, so the quadratic variation is bounded by
\[
[C(J)]^2\,N^{-2d-2}\,N^{d+2} = [C(J)]^2\,N^{-d}
\]
This is sufficient to prove the compactness of $Q_N$ and that any limit point $Q$ will have the property that, for any $J$,
\[
\int_{\mathbf{T}^d}J(u)\,\rho(t,u)\,du - \int_{\mathbf{T}^d}J(u)\,\rho_0(u)\,du - \frac{1}{2}\int_0^t\!\!\int_{\mathbf{T}^d}(\Delta_C J)(u)\,\rho(s,u)\,du\,ds \equiv 0
\]
a.e. $Q$.

Remark 4.3.2. The symmetry of $\pi(\cdot)$ played a crucial part: $\eta(x)(1-\eta(y)) - \eta(y)(1-\eta(x)) = \eta(x) - \eta(y)$ cancelled the nonlinearity. If we assumed only $\sum_z z\,\pi(z) = 0$, we would not have the second difference, and $N^2(\mathcal{L}_N F_J)(\eta)$ would be of size $N$.
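The hydrodynamic statement can be probed by simulation. The sketch below is a hypothetical nearest-neighbour example with $d = 1$ and $\pi(\pm 1) = \frac{1}{2}$, in which the exclusion dynamics is realized by exchanging the contents of each bond at rate $\frac{1}{2}$; for symmetric exclusion the expected occupation numbers then evolve exactly by the discrete heat semigroup $e^{t(P-I)}$, which we compute by FFT. All sizes are illustrative.

```python
import numpy as np

def ssep_average(eta0, t, trials, rng):
    # Gillespie simulation of nearest-neighbour symmetric exclusion on Z_N:
    # every bond (x, x+1) exchanges its contents at rate 1/2, so the total
    # number of exchange events in [0, t] is Poisson(N * t / 2).
    N = len(eta0)
    out = np.zeros(N)
    for _ in range(trials):
        eta = eta0.copy()
        for x in rng.integers(0, N, size=rng.poisson(0.5 * N * t)):
            y = (x + 1) % N
            eta[x], eta[y] = eta[y], eta[x]
        out += eta
    return out / trials

N, t = 16, 2.0
rng = np.random.default_rng(2)
eta0 = np.zeros(N, dtype=int)
eta0[:8] = 1                                      # step initial profile
# E[eta(t, x)] solves dJ/dt = (P - I)J; diagonalize the circulant by FFT.
lam = np.cos(2 * np.pi * np.arange(N) / N) - 1.0  # eigenvalues of P - I
exact = np.fft.ifft(np.fft.fft(eta0) * np.exp(t * lam)).real
mc = ssep_average(eta0, t, trials=4000, rng=rng)
print(np.max(np.abs(mc - exact)) < 0.05)
```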

4.4 Large deviations and weak asymmetry.


Now that we have established the law of large numbers, let us investigate the large deviation properties. We want to obtain a rate function $I(\rho(\cdot,\cdot))$ on $D[[0,T];\mathcal{M}(\mathbf{T}^d)]$ and prove a large deviation result for $Q_N$ with this rate function. The lower bound requires a tilting argument and the upper bound involves estimates via some exponential martingales. First we need to consider a class of tilts and then optimize over them. The tilts depend on the choice of a function $q(z)$ that is odd, i.e. $q(-z) = -q(z)$, with $q(z) = 0$ unless $\pi(z) > 0$. Then we perturb $\pi(z)$ to $\pi(z) + \frac{1}{N}q(z)$, which introduces a weak asymmetry. The choice of $q(\cdot) = q(s,u,\cdot)$ can depend on $u$ and $s$. The generator is modified accordingly:
\[
(N^2\mathcal{L}_{N,q}F_J)(\eta) = N^{2-d}\sum_{x,y}\Big[J\Big(\frac{y}{N}\Big) - J\Big(\frac{x}{N}\Big)\Big]\eta(x)\big(1-\eta(y)\big)\Big[\pi(y-x) + \frac{1}{N}q\Big(s,\frac{x}{N},y-x\Big)\Big]
\]
\[
\simeq \frac{1}{2N^d}\sum_x\eta(x)\sum_{i,j}C_{i,j}\,(D_iD_jJ)\Big(\frac{x}{N}\Big)
+ \frac{1}{N^d}\sum_x\sum_z q\Big(s,\frac{x}{N},z\Big)\,\eta(x)\big(1-\eta(x+z)\big)\Big\langle z,\,(\nabla J)\Big(\frac{x}{N}\Big)\Big\rangle
\]
where the first term can be replaced as before by
\[
\frac{1}{2}\int_{\mathbf{T}^d}(\Delta_C J)(u)\,\nu(du)
\]
It is not clear what to do with the second term. In the limit, when $\nu(du)$ becomes $\rho(u)\,du$, we would like this term to be
\[
\int_{\mathbf{T}^d}\rho(u)\big(1-\rho(u)\big)\big\langle m(s,u),(\nabla J)(u)\big\rangle\,du
\]
where
\[
m(s,u) = \sum_z z\,q(s,u,z)
\]

because we expect the local statistics to reflect the Bernoulli distribution with the correct density. If we can accomplish this, we will establish the following theorem.

Theorem 4.4.1. The sequence of measures $Q_{N,q}$ converges as $N\to\infty$ to the distribution concentrated on the single trajectory that is the weak solution of
\[
\frac{\partial\rho(t,u)}{\partial t} = \frac{1}{2}\Delta_C\,\rho(t,u) - \nabla\cdot\Big(\rho(t,u)\big(1-\rho(t,u)\big)\,m(t,u)\Big)
\]
with $\rho(0,u) = \rho_0(u)$.

But this requires justification, and the route is complicated. There are various approximations that are needed, and several measures: $P_N$, $Q_N$ and the perturbed ones $P_{N,q}$ and $Q_{N,q}$. Computations with $P_N$ are easier because it is a reversible Markov process; direct computations with $P_{N,q}$ are harder. Even while examining $P_N$, it is easier if we start in equilibrium, i.e. with a reversible invariant measure, the uniform distribution of $\rho N^d$ particles among the $N^d$ sites. One can use the Feynman-Kac formula and some variational methods to obtain estimates. We can then transfer the estimates from $P_N^{eq}$ to $P_N$ and $P_{N,q}$ using the entropy inequality.
We will deal with averages of all kinds. Let us set up some notation. If $f = f(\eta)$ is a local function on the configuration space $\Omega$ or $\Omega_N$, its averages will be denoted by
\[
\bar f_{\ell,x} = \frac{1}{(2\ell+1)^d}\sum_{y:|y-x|\leq\ell}f(\tau_y\eta)
\]
A special case is $f(\eta) = \eta(0)$, in which case we denote
\[
\bar\eta_{\ell,x} = \frac{1}{(2\ell+1)^d}\sum_{y:|y-x|\leq\ell}\eta(y)
\]

$\eta(x)$ can be $\eta(s,x)$, representing the configuration at some time $s$. We denote by $\tau_x\eta$ the shifted configuration $(\tau_x\eta)(y) = \eta(x+y)$. The object we need to estimate is
\[
\int_0^T e_N(\epsilon,s)\,ds
\]
where
\[
e_N(\epsilon,s) = E^{P_{N,q}}\Big[\Big|\frac{1}{N^d}\sum_x J\Big(\frac{x}{N}\Big)\Big[f(\tau_x\eta(s)) - \hat f\big(\bar\eta_{N\epsilon,x}(s)\big)\Big]\Big|\Big]
\]
and
\[
\hat f(\rho) = E^{\mu_\rho}[f(\eta)]
\]
is the expectation of the local function $f$ with respect to the product Bernoulli measure with density $\rho$.
and
N its space and time average. More precisely
& & & + ,
1 T 1 !
f ()d
N () = f (x ) N (s, d)ds
T 0 N N d x

We need the following theorem.

Theorem 4.4.2.
\[
\lim_{\epsilon\to 0}\;\limsup_{N\to\infty}\;\int_0^T e_N(\epsilon,s)\,ds = 0
\]

This is proved in two steps.

Lemma 4.4.3.
\[
\lim_{k\to\infty}\;\limsup_{N\to\infty}\;E^{\bar\mu_N}\Big[\big|\bar f_k(\eta) - \hat f(\bar\eta_k)\big|\Big] = 0
\]

Lemma 4.4.4.
\[
\lim_{\epsilon\to 0}\;\limsup_{k\to\infty}\;\limsup_{N\to\infty}\;E^{\bar\mu_N}\Big[\big|\bar\eta_{N\epsilon} - \bar\eta_k\big|\Big] = 0
\]

We note the following. We can always replace, for fixed $k$ and large $N$, the sum
\[
\frac{1}{N^d}\sum_x J\Big(\frac{x}{N}\Big)\,f(\tau_x\eta)
\]
by
\[
\frac{1}{N^d}\sum_x f(\tau_x\eta)\,\frac{1}{(2k+1)^d}\sum_{y:|y-x|\leq k}J\Big(\frac{y}{N}\Big)
\]
or
\[
\frac{1}{N^d}\sum_x J\Big(\frac{x}{N}\Big)\,\bar f_{k,x}
\]
Lemma 4.4.3 allows us to replace this by
\[
\frac{1}{N^d}\sum_x J\Big(\frac{x}{N}\Big)\,\hat f\big(\bar\eta_{k,x}\big)
\]
Lemma 4.4.4 allows us to replace $\bar\eta_{k,x}$ with large $k$ by $\bar\eta_{N\epsilon,x}$ with a small $\epsilon$. In particular this allows us to replace $\eta(x)\big(1-\eta(x+z)\big)$ by $\bar\eta_{N\epsilon,x}\big(1-\bar\eta_{N\epsilon,x}\big)$, which is sufficient to prove Theorem 4.4.2.

We now concentrate on the proof of the two lemmas. They depend on the following observations.

The function $f(a,b) = (\sqrt a - \sqrt b)^2$ is a convex function of $(a,b)\in\mathbf{R}_+^2$. This is checked by computing the Hessian
\[
\frac{1}{2}
\begin{pmatrix}
b^{\frac12}a^{-\frac32} & -(ab)^{-\frac12}\\[2pt]
-(ab)^{-\frac12} & a^{\frac12}b^{-\frac32}
\end{pmatrix}
\]
of $f$, which is seen to be positive semidefinite.
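The positive semidefiniteness of this Hessian (its determinant is exactly zero and its trace is positive) is easy to confirm numerically; the sample grid of points below is an arbitrary choice.

```python
import numpy as np

# Check that f(a,b) = (sqrt(a) - sqrt(b))^2 has a positive semidefinite
# Hessian on the positive quadrant.
def hessian(a, b):
    return 0.5 * np.array([[b**0.5 * a**-1.5, -(a * b)**-0.5],
                           [-(a * b)**-0.5,   a**0.5 * b**-1.5]])

ok = True
for a in [0.1, 1.0, 3.7]:
    for b in [0.2, 1.0, 5.0]:
        w = np.linalg.eigvalsh(hessian(a, b))
        ok = ok and w.min() > -1e-12   # eigenvalues are {0, trace}
print(ok)
```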



The invariant distributions for $\mathcal{A}$ are all reversible and form a convex set with extreme points $w_{N,k}$, which are the uniform distributions over the $\binom{N^d}{k}$ configurations in $\Omega_N$ with a given number $k = \sum_{x\in\mathbf{Z}_N^d}\eta(x)$ of occupied sites.

The Dirichlet form corresponding to the operator $\mathcal{A}$ is given by
\[
\frac{1}{2}\sum_{x,y}\sum_\eta\big[f(\eta^{x,y}) - f(\eta)\big]^2\,\pi(y-x)\,w_{N,k}(\eta)
\]
where $w_{N,k}(\eta)$ is any reversible invariant distribution.

If $\pi(\cdot)$ is irreducible, then they are all equivalent to
\[
\sum_{\substack{x,y\\|x-y|=1}}\sum_\eta\big[f(\eta^{x,y}) - f(\eta)\big]^2\,w_{N,k}(\eta)
\]

If $\mu_N = \{\mu_N(\eta)\}$ is a probability distribution on $\Omega_N$, its Dirichlet form is defined by
\[
\mathcal{D}(\mu_N) = \frac{1}{2}\sum_{|x-y|=1}\sum_\eta\Big(\sqrt{\mu_N(\eta^{x,y})} - \sqrt{\mu_N(\eta)}\Big)^2
\]

Let us denote $\lambda_N(k) = \mu_N\{\sum_{x\in\mathbf{Z}_N^d}\eta(x) = k\}$. Then the invariant distribution that corresponds to $\mu_N$ is the convex combination $\bar w_N = \sum_k\lambda_N(k)\,w_{N,k}$. If we write $\mu_N(\eta) = f(\eta)\,\bar w_N(\eta)$, then $\sqrt f$ is in $L^2(\bar w_N)$ with $\|\sqrt f\|_{L^2(\bar w_N)} = 1$, and $\mathcal{D}(\mu_N)$ coincides with the Dirichlet form of $\sqrt f$ in $L^2(\bar w_N)$.

An estimate of the form $\mathcal{D}(\mu_N) \leq CN^{d-2}$ has many consequences. If $\mu_N(s)$ are such that $\int_0^T\mathcal{D}(\mu_N(s))\,ds \leq CN^{d-2}$, then the average $\bar\mu_N$ over space and time, defined by
\[
\bar\mu_N = \frac{1}{T}\,\frac{1}{N^d}\int_0^T\sum_x\tau_x\,\mu_N(s)\,ds
\]
satisfies
\[
\mathcal{D}(\bar\mu_N) \leq \frac{C}{T}\,N^{d-2}
\]
Since there is spatial homogeneity, for every $x$
\[
\sum_{y:|x-y|=1}\sum_\eta\Big(\sqrt{\bar\mu_N(\eta^{x,y})} - \sqrt{\bar\mu_N(\eta)}\Big)^2 \leq \frac{C}{T}\,N^{-2}
\]

If we restrict the distribution $\bar\mu_N$ to a block $B_k$ of size $k$, the marginal distribution has a Dirichlet form, coming from the bonds $(x,y):|x-y|=1$ internal to $B_k$, which is at most $(2k)^d\,\frac{C}{T}\,N^{-2}$. As $N\to\infty$ the Dirichlet form goes to $0$. Any limiting distribution is therefore permutation invariant, hence uniform over all possible choices of locations for the total number of particles in the block. This shows that
\[
\lim_{k\to\infty}\;\limsup_{N\to\infty}\;E^{\bar\mu_N}\Big[\big|\bar f_k - \hat f(\bar\eta_k)\big|\Big] = 0
\]
which is precisely Lemma 4.4.3.


We want to estimate $\sum_\eta\big(\sqrt{\bar\mu_N(\eta^{x,y})} - \sqrt{\bar\mu_N(\eta)}\big)^2$ when $|x-y| = \ell$ instead of $|x-y| = 1$. Any interchange of $x,y$ at a distance $\ell$ can be achieved by $2\ell$ successive interchanges of nearest neighbors, and the simple inequality
\[
(a_1 + \cdots + a_\ell)^2 \leq \ell\sum_{j=1}^{\ell}a_j^2
\]
allows us to estimate
\[
\sum_\eta\Big(\sqrt{\bar\mu_N(\eta^{x,x+z})} - \sqrt{\bar\mu_N(\eta)}\Big)^2 \leq \frac{C}{T}\,|z|^2\,N^{-2}
\]
In particular if $|z| \leq N\epsilon$ then
\[
\sum_\eta\Big(\sqrt{\bar\mu_N(\eta^{x,x+z})} - \sqrt{\bar\mu_N(\eta)}\Big)^2 \leq \frac{C}{T}\,\epsilon^2
\]
This shows that if two microscopically large blocks are macroscopically close, then their empirical densities are close with probability nearly $1$. This implies Lemma 4.4.4.

All we need to do now is control the Dirichlet form. Our generator is of the form
\[
\mathcal{A}_{N,q} = \mathcal{A} + \frac{1}{N}\,\mathcal{S}_{N,q}
\]
the sum of the symmetric part $\mathcal{A}$ and the weak skew-symmetric perturbation $\mathcal{S}_{N,q}$. The entropy is defined by
\[
H(\mu) = \sum_\eta\mu(\eta)\,\log\frac{\mu(\eta)}{\alpha(\eta)}
\]
where $\alpha(\eta) \equiv c = c(N,\rho) = \binom{N^d}{k_N}^{-1}$ is the uniform distribution of $k_N = \rho N^d$ particles on $\mathbf{Z}_N^d$. It is invariant for the evolution under $\mathcal{A}$.

Let us note that $\mu_N^t$ evolves according to the forward equation $\frac{d\mu_N^t}{dt} = N^2\mathcal{A}_{N,q}^*\,\mu_N^t$. Therefore, for $H_N(t) = H(\mu_N^t)$ we have
\[
\frac{dH_N(t)}{dt} = \frac{d}{dt}\sum_\eta\mu_N^t(\eta)\,\log\frac{\mu_N^t(\eta)}{c}
= N^2\sum_\eta\log\frac{\mu_N^t(\eta)}{c}\,\big(\mathcal{A}_{N,q}^*\mu_N^t\big)(\eta) + \sum_\eta\frac{d}{dt}\mu_N^t(\eta)
\]
\[
= N^2\sum_\eta\big(\mathcal{A}_{N,q}\log\mu_N^t\big)(\eta)\,\mu_N^t(\eta)
\]
(the last sum vanishes since the total mass is one)
\[
= N^2\sum_{x,y}\sum_\eta\Big[\pi(y-x) + \frac{1}{N}q\Big(t,\frac{x}{N},y-x\Big)\Big]\,\log\frac{\mu_N^t(\eta^{x,y})}{\mu_N^t(\eta)}\,\eta(x)\big(1-\eta(y)\big)\,\mu_N^t(\eta)
\]
\[
= N^2\sum_{x,y}\sum_\eta\pi(y-x)\Big[1 + \frac{q(t,\frac{x}{N},y-x)}{N\,\pi(y-x)}\Big]\,\log\frac{\mu_N^t(\eta^{x,y})}{\mu_N^t(\eta)}\,\eta(x)\big(1-\eta(y)\big)\,\mu_N^t(\eta)
\]

Denoting $c_N = c_N(t,u,y-x) = 1 + \frac{q(t,u,y-x)}{N\,\pi(y-x)}$ and using the inequality
\[
x\log y \leq 2\big[x\log x - x + 1\big] + 2\big[\sqrt y - 1\big]
\]
(verify that $\sup_x\,[x\log y - 2(x\log x - x + 1)] = 2(\sqrt y - 1)$), we obtain
\[
\frac{dH_N(t)}{dt} \leq 2N^2\sum_{x,y}\sum_\eta\pi(y-x)\big[c_N\log c_N - c_N + 1\big]\,\mu_N^t(\eta)
+ 2N^2\sum_{x,y}\sum_\eta\pi(y-x)\Big[\sqrt{\mu_N^t(\eta^{x,y})\,\mu_N^t(\eta)} - \mu_N^t(\eta)\Big]
\]
Moreover
\[
2N^2\big(c_N\log c_N - c_N + 1\big) \leq \bigg[\frac{q(t,\frac{x}{N},y-x)}{\pi(y-x)}\bigg]^2
\]
We can also change $\eta\to\eta^{x,y}$ in the summation; it just maps $\Omega_N$ onto itself. Assuming $|q(t,u,z)| \leq C\,\pi(z)$, we end up with
\[
\frac{dH_N(t)}{dt} \leq CN^d - N^2\sum_{x,y}\sum_\eta\pi(y-x)\Big(\sqrt{\mu_N^t(\eta^{x,y})} - \sqrt{\mu_N^t(\eta)}\Big)^2
\]
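The elementary inequality invoked above, together with the maximizer $x = \sqrt y$ computed in the parenthetical remark, can be checked numerically over a grid (the grid itself is arbitrary):

```python
import numpy as np

# Check x*log(y) <= 2*(x*log(x) - x + 1) + 2*(sqrt(y) - 1) for x, y > 0,
# with equality at the maximizing point x = sqrt(y).
xs = np.linspace(0.01, 20, 400)
ys = np.linspace(0.01, 20, 400)
X, Y = np.meshgrid(xs, ys)
gap = 2 * (X * np.log(X) - X + 1) + 2 * (np.sqrt(Y) - 1) - X * np.log(Y)
print(gap.min() >= -1e-12)

y = 7.0
x = np.sqrt(y)                      # the maximizer from the text
print(abs(x * np.log(y) - 2 * (x * np.log(x) - x + 1) - 2 * (np.sqrt(y) - 1)) < 1e-12)
```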

If the support of $\pi(\cdot)$ generates $\mathbf{Z}^d$, it is routine to estimate
\[
\mathcal{D}(f) = \sum_{|x-y|=1}\sum_\eta\big[f(\eta^{x,y}) - f(\eta)\big]^2 \leq C\sum_{x,y}\sum_\eta\pi(y-x)\big[f(\eta^{x,y}) - f(\eta)\big]^2
\]
This leads to the estimate
\[
H_N(T) - H_N(0) \leq CTN^d - \frac{N^2}{C}\int_0^T\mathcal{D}\big(\sqrt{\mu_N^t}\big)\,dt
\]
Since $H_N(T) \geq 0$ and $H_N(0) \leq CN^d$, it follows that
\[
\int_0^T\mathcal{D}\big(\sqrt{\mu_N^t}\big)\,dt \leq C(T)\,N^{d-2}
\]

4.5 Super-exponential estimates.


A consequence of the estimates of the previous section is the following theorem. Let
\[
\mathcal{M}_{N,C} = \Big\{\mu_N(\cdot) : \mathcal{D}(\mu_N) \leq CN^{d-2}\Big\}
\]
\[
e_{N,k}(f,\eta) = \frac{1}{N^d}\sum_{x\in\mathbf{Z}_N^d}\Big|\frac{1}{(2k+1)^d}\sum_{|y-x|\leq k}f(\tau_y\eta) - \hat f\big(\bar\eta_{x,k}\big)\Big|
\]
\[
d_{N,k,\epsilon}(\eta) = \frac{1}{N^d}\sum_{x\in\mathbf{Z}_N^d}\big|\bar\eta_{x,k} - \bar\eta_{x,N\epsilon}\big|
\]
Then:

Theorem 4.5.1. For any $C < \infty$ and local function $f$,
\[
\lim_{k\to\infty}\;\limsup_{N\to\infty}\;\sup_{\mu_N\in\mathcal{M}_{N,C}}E^{\mu_N}\big[e_{N,k}(f,\eta)\big] = 0
\]
\[
\lim_{\epsilon\to 0}\;\limsup_{k\to\infty}\;\limsup_{N\to\infty}\;\sup_{\mu_N\in\mathcal{M}_{N,C}}E^{\mu_N}\big[d_{N,k,\epsilon}(\eta)\big] = 0
\]

Let $\{P_x\},\ x\in X$, be a Markov family on the finite state space $X$, i.e. measures on $D[[0,\infty);X]$ corresponding to a generator
\[
(\mathcal{A}f)(x) = \sum_y\big[f(y) - f(x)\big]\,q(x,y)
\]
Assume that $\phi(x)$ is an invariant probability distribution for the Markov process, and that the process is reversible with respect to $\phi$, i.e. $\mathcal{A}$ is self-adjoint in $L^2(\phi)$. This is the same as $q(x,y)\phi(x) = q(y,x)\phi(y)$. The Dirichlet form is given by
\[
-\langle\mathcal{A}f,f\rangle = -\sum_{x,y}\big[f(y)-f(x)\big]f(x)\,q(x,y)\,\phi(x) = \frac{1}{2}\sum_{x,y}\big[f(y)-f(x)\big]^2\,q(x,y)\,\phi(x)
\]
If the process is not reversible, then the Dirichlet form corresponds to $\frac{1}{2}(\mathcal{A}+\mathcal{A}^*)$.


The Feynman-Kac formula says that for any $V:X\to\mathbf{R}$,
\[
u(t,x) = E^{P_x}\Big[\exp\Big[\int_0^t V(x(s))\,ds\Big]\,f(x(t))\Big]
\]
is the solution of
\[
\frac{du(t,x)}{dt} = (\mathcal{A}u)(t,x) + V(x)\,u(t,x);\qquad u(0,x) = f(x)
\]
It follows that
\[
\frac{d\|u(t,\cdot)\|_2^2}{dt} = \big\langle(\mathcal{A}+\mathcal{A}^*)u(t) + 2Vu(t),\,u(t)\big\rangle = 2\langle Vu(t),u(t)\rangle - 2\mathcal{D}(u(t)) \leq 2\lambda_{\mathcal{A}}(V)\,\|u(t)\|_2^2
\]
providing the estimate
\[
\|u(t,\cdot)\|_2^2 \leq \exp\big[2t\,\lambda_{\mathcal{A}}(V)\big]\,\|f\|_2^2
\]
where
\[
\lambda_{\mathcal{A}}(V) = \sup_{\|u\|_2=1}\big[\langle Vu,u\rangle - \mathcal{D}(u)\big]
\]
Taking $f \equiv 1$ and using Schwarz's inequality,
\[
E^{P_\phi}\Big[\exp\Big[\int_0^t V(x(s))\,ds\Big]\Big] = \|u(t)\|_1 \leq \|u(t)\|_2 \leq \exp\big[t\,\lambda_{\mathcal{A}}(V)\big]
\]
(There are no constants!)
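The constant-free bound can be verified on a small example. The sketch below uses a two-state reversible chain with an arbitrary potential $V$ (all rates and values are illustrative): it solves the Feynman-Kac equation $u' = (\mathcal{A}+V)u$, $u(0)\equiv 1$, by matrix exponential, and compares $E^{P_\phi}[\exp\int_0^t V\,ds] = \langle u(t),1\rangle_\phi$ with $e^{t\lambda_{\mathcal{A}}(V)}$, where $\lambda_{\mathcal{A}}(V)$ is the top eigenvalue of the operator symmetrized in $L^2(\phi)$.

```python
import numpy as np

# Two-state reversible chain: rates q(0,1) = a, q(1,0) = b; phi ~ (b, a).
a, b = 2.0, 1.0
Q = np.array([[-a, a], [b, -b]])
phi = np.array([b, a]) / (a + b)
V = np.array([0.7, -0.4])                  # illustrative potential
M = Q + np.diag(V)

t = 1.5
# u(t) = exp(tM) 1 solves du/dt = (Q + V)u with u(0) = 1 (Feynman-Kac).
w, R = np.linalg.eig(M)
u = (R @ np.diag(np.exp(t * w)) @ np.linalg.inv(R) @ np.ones(2)).real
lhs = phi @ u                              # E^{P_phi}[ exp int_0^t V ds ]

# Reversibility makes S symmetric, so its top eigenvalue is lambda_A(V).
S = np.diag(np.sqrt(phi)) @ M @ np.diag(1 / np.sqrt(phi))
lam = np.linalg.eigvalsh(S).max()
print(lhs <= np.exp(t * lam) + 1e-12)      # the bound, with no constants
```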


The following lemma is useful; it replaces the $L^p$, $L^q$ duality of Hölder's inequality with the entropy duality between $x\log x$ and $e^x$.

Lemma 4.5.2. Let $Q \ll P$ with $H(Q;P) = \int\frac{dQ}{dP}\log\frac{dQ}{dP}\,dP < \infty$. Then for any bounded measurable function $g(x)$,
\[
E^Q[g(x)] \leq H(Q;P) + \log\int e^{g(x)}\,dP \tag{4.3}
\]
For any set $A$,
\[
Q(A) \leq \frac{H(Q;P)+1}{\log\frac{1}{P(A)}} \tag{4.4}
\]

Proof. The inequality
\[
ab \leq b\log b - b + e^a
\]
implies, with $f(x) = \frac{dQ}{dP}$, for any $c > 0$ (replacing $b$ by $cb$ and dividing by $c$),
\[
f(x)\,g(x) \leq f(x)\log[cf(x)] - f(x) + \frac{1}{c}\,e^{g(x)}
\]
Integrating with respect to $P$ we obtain
\[
\int g\,dQ \leq \log c + H(Q;P) - 1 + \frac{1}{c}\int e^{g(x)}\,dP
\]
Pick $c = \int e^{g(x)}\,dP$. We then get
\[
\int g\,dQ \leq H(Q;P) + \log\int e^{g(x)}\,dP
\]
Take $g(x) = k\,\mathbf{1}_A(x)$ and $k = \log\frac{1}{P(A)}$. Then
\[
Q(A) \leq \frac{1}{k}\Big[H(Q;P) + \log\int e^{k\mathbf{1}_A(x)}\,dP\Big]
= \frac{1}{k}\Big[H(Q;P) + \log\big[e^kP(A) + P(A^c)\big]\Big]
\leq \frac{1}{k}\big[H(Q;P) + \log 2\big] \leq \frac{H(Q;P)+1}{\log\frac{1}{P(A)}}
\]
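Inequality (4.3) is easy to sanity-check on random discrete distributions (the sizes and number of draws below are arbitrary):

```python
import numpy as np

# Check E_Q[g] <= H(Q;P) + log E_P[e^g] on random discrete distributions.
rng = np.random.default_rng(3)
ok = True
for _ in range(100):
    n = 6
    P = rng.random(n); P /= P.sum()
    Q = rng.random(n); Q /= Q.sum()
    g = rng.normal(size=n)
    H = np.sum(Q * np.log(Q / P))           # relative entropy H(Q;P)
    ok = ok and (Q @ g <= H + np.log(P @ np.exp(g)) + 1e-12)
print(ok)
```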

The following theorem is now an easy consequence of Theorem 4.5.1.

Theorem 4.5.3. Let the initial configurations be arbitrary. Then for any $\delta > 0$ and local function $f$,
\[
\lim_{k\to\infty}\;\limsup_{N\to\infty}\;\frac{1}{N^d}\,\sup_\eta\,\log P_\eta\Big[\int_0^T e_{N,k}(f,\eta(t))\,dt \geq \delta\Big] = -\infty
\]
\[
\lim_{\epsilon\to 0}\;\limsup_{k\to\infty}\;\limsup_{N\to\infty}\;\frac{1}{N^d}\,\sup_\eta\,\log P_\eta\Big[\int_0^T d_{N,k,\epsilon}(\eta(t))\,dt \geq \delta\Big] = -\infty
\]

Proof. We start by estimating
\[
\limsup_{N\to\infty}\frac{1}{N^d}\log E^{P}\Big[\exp\Big[N^d\int_0^T V(\eta(s))\,ds\Big]\Big]
\leq \limsup_{N\to\infty}\frac{T}{N^d}\sup_{\|u\|_2=1}\big[N^d\langle Vu,u\rangle - N^2\mathcal{D}(u)\big]
= T\,\limsup_{N\to\infty}\sup_{\|u\|_2=1}\big[\langle Vu,u\rangle - N^{2-d}\mathcal{D}(u)\big]
\]
If $V \geq 0$ is bounded by $C$, then the supremum can be limited to $u$ with $\mathcal{D}(u) \leq CN^{d-2}$. Taking $\mu_N(\eta) = 2^{-N^d}u^2(\eta)$, if for some $V$ we can control
\[
\lambda(V) = \sup_{\|u\|_2=1}\big[\langle Vu,u\rangle - N^{2-d}\mathcal{D}(u)\big]
\]
then we can get estimates under $P$. But each point carries a mass of $2^{-N^d}$: if we have super-exponential estimates under $P$, we have them uniformly under $P_\eta$ for $\eta\in\Omega_N$. We can take $V(\eta) = e_{N,k}(f,\eta)$ or $V(\eta) = d_{N,k,\epsilon}(\eta)$ and use Tchebychev's inequality on
\[
E^{P}\Big[\exp\Big[N^d\int_0^T V(\eta(s))\,ds\Big]\Big]
\]
to get
\[
\limsup_{N\to\infty}\frac{1}{N^d}\log P\Big[\int_0^T V(\eta(s))\,ds \geq \delta\Big] \leq -\delta + T\,\lambda(V)
\]
Since $V$ can be replaced by $\gamma V$ for arbitrary $\gamma > 0$, which yields the bound $-\gamma\delta + T\lambda(\gamma V)$, if $\lambda(\gamma V)\to 0$ as $k\to\infty$ (and $\epsilon\to 0$) for every $\gamma$, letting $\gamma\to\infty$ gives our super-exponential estimates.

Weak asymmetry introduces a perturbation $\pi(y-x)\to\pi(y-x) + \frac{1}{N}q(t,\frac{x}{N},y-x)$. Assuming $\frac{q(t,u,z)}{\pi(z)}$ is bounded, the relative entropy of this perturbation is easily estimated to be
\[
\int_0^T N^2\sum_{\eta,x,y}\big(c_N\log c_N - c_N + 1\big)\,\pi(y-x)\,\eta(x)\big(1-\eta(y)\big)\,\mu_N(t,\eta)\,dt \leq CN^d
\]
Therefore for the perturbation $Q_N$, uniformly over all initial configurations,
\[
\lim_{k\to\infty}\;\limsup_{N\to\infty}\;\sup_\eta\;Q_N\Big[\int_0^T e_{N,k}(f,\eta(t))\,dt \geq \delta\Big] = 0
\]
\[
\lim_{\epsilon\to 0}\;\limsup_{k\to\infty}\;\limsup_{N\to\infty}\;\sup_\eta\;Q_N\Big[\int_0^T d_{N,k,\epsilon}(\eta(t))\,dt \geq \delta\Big] = 0
\]

Now we are ready to establish the large deviation lower bound.

Theorem 4.5.4. Let $\rho(t,u)$ be a weak solution of
\[
\frac{\partial\rho}{\partial t} - \frac{1}{2}\Delta_C\,\rho + \nabla\cdot\big(b(t,u)\,\rho(t,u)(1-\rho(t,u))\big) = 0;\qquad \rho(0,u) = \rho_0(u) \tag{4.5}
\]
with a bounded continuous $b(t,u)$. Then for an initial condition $\eta_0$ compatible with $\rho_0$ and for any neighborhood $G$ of $\rho(\cdot,\cdot)$ in $C[[0,T];\mathcal{M}(\mathbf{T}^d)]$,
\[
\liminf_{N\to\infty}\frac{1}{N^d}\log P_N^{\eta_0}\big[\nu(\cdot)\in G\big] \geq -\frac{1}{2}\int_0^T\!\!\int_{\mathbf{T}^d}\langle C^{-1}b,b\rangle\,\rho(t,u)\big(1-\rho(t,u)\big)\,dt\,du
\]
In particular the lower bound for the large deviation rate can be taken to be
\[
-\inf_{b\in B(\rho)}\;\frac{1}{2}\int_0^T\!\!\int_{\mathbf{T}^d}\langle C^{-1}b,b\rangle\,\rho(t,u)\big(1-\rho(t,u)\big)\,dt\,du
\]

where $B(\rho)$ consists of the $b$ that satisfy (4.5). The infimum is calculated to be
\[
\frac{1}{2}\int_0^T\Big\|\rho_t - \frac{1}{2}\sum_{i,j}C_{i,j}D_iD_j\rho\Big\|^2_{-1,C\rho(1-\rho)}\,dt
\]
\[
= \sup_{J(\cdot,\cdot)}\bigg[\int_{\mathbf{T}^d}J(T,u)\,\rho(T,u)\,du - \int_{\mathbf{T}^d}J(0,u)\,\rho(0,u)\,du
- \int_0^T\!\!\int_{\mathbf{T}^d}\Big[J_t + \frac{1}{2}\sum_{i,j}C_{i,j}D_iD_jJ\Big]\rho\,dt\,du
- \frac{1}{2}\int_0^T\!\!\int_{\mathbf{T}^d}\langle C\nabla J,\nabla J\rangle\,\rho(1-\rho)\,dt\,du\bigg]
\]

Proof. We begin by choosing a perturbation $q(t,u,z)$. We need to choose $q$ so that
\[
b(t,u) = \sum_z z\,q(t,u,z)
\]
while minimizing
\[
\sum_z\frac{[q(t,u,z)]^2}{\pi(z)}
\]
The choice is
\[
q(t,u,z) = \langle C^{-1}b(t,u),z\rangle\,\pi(z)
\]
and the minimum is $\langle C^{-1}b,b\rangle$. Here $C = \{C_{i,j}\}$ is the covariance matrix of $\pi(\cdot)$. We next need to prove compactness in $C[[0,T];\mathcal{M}(\mathbf{T}^d)]$ for the perturbed process. We will use repeatedly inequality (4.4): in particular, if $-\log P(A) \gg N^d$ and $H(Q;P) = O(N^d)$, then $Q(A) \ll 1$. To prove compactness we will show that a super-exponential tightness estimate holds in equilibrium. Then any limit will be concentrated on the weak solutions of (4.5), and we need only prove uniqueness of them. The entropy will then have the correct limit.

Actually there is a converse as well.

Theorem 4.5.5. If $P_N$ is a sequence of probability distributions on $X$ such that every sequence $Q_N$ with $H(Q_N;P_N) \leq CN$ is uniformly tight, then $P_N$ is super-exponentially tight.

Proof. Consider the set $B_\ell = \cup_N\{Q : H(Q;P_N) \leq \ell N\}$. $B_\ell$ is tight: there is a compact set $K_{\ell,\epsilon}$ such that $Q[K_{\ell,\epsilon}^c] \leq \epsilon$ for every $Q\in B_\ell$. Take $\epsilon = \frac{1}{2}$. There is then a compact $K_\ell$ such that $Q(K_\ell^c) \geq \frac{1}{2}$ implies $H(Q;P_N) \geq \ell N$. Take $Q$ to be the normalized restriction of $P_N$ to $K_\ell^c$; then $Q(K_\ell^c) = 1 \geq \frac{1}{2}$. The relative entropy of $Q$ with respect to $P_N$ is easily calculated to be $-\log P_N(K_\ell^c)$, proving $P_N(K_\ell^c) \leq \exp[-\ell N]$.

4.6 Exponential martingales.


For any Markov process with generator $\mathcal{A}$ there are exponential martingales associated with it. They are of the form
\[
\exp\Big[f(x(t)) - f(x(0)) - \int_0^t\big(e^{-f}\mathcal{A}e^{f}\big)(x(s))\,ds\Big]
\]
and are quite useful for estimates.
In the case of the simple exclusion process, with
\[
f(\eta) = \sum_x J\Big(\frac{x}{N}\Big)\,\eta(x)
\]
we obtain the martingales
\[
M_{N,J}(t) = \exp\Big[\sum_{x\in\mathbf{Z}_N^d}J\Big(\frac{x}{N}\Big)\big[\eta(t,x) - \eta(0,x)\big] - \int_0^t A_{N,J}(s)\,ds\Big]
\]

where
$$
A_{N,J}(s) = N^2 \sum_{x,y} \pi(y-x)\,\big[e^{J(\frac{y}{N})-J(\frac{x}{N})} - 1\big]\,\eta(s,x)(1-\eta(s,y))
$$
$$
= \frac{N^2}{2} \sum_{x,y} \pi(y-x)\Big[\big(e^{J(\frac{y}{N})-J(\frac{x}{N})} - 1\big)\eta(s,x)(1-\eta(s,y))
+ \big(e^{J(\frac{x}{N})-J(\frac{y}{N})} - 1\big)\eta(s,y)(1-\eta(s,x))\Big]
$$
$$
\simeq \frac{N^2}{2} \sum_{x,y} \pi(y-x)\big(J(\tfrac{y}{N}) - J(\tfrac{x}{N})\big)\big(\eta(s,x) - \eta(s,y)\big)
+ \frac{N^2}{4} \sum_{x,y} \pi(y-x)\big(J(\tfrac{y}{N}) - J(\tfrac{x}{N})\big)^2\big[\eta(x)(1-\eta(y)) + \eta(y)(1-\eta(x))\big]
$$
$$
\simeq \sum_{x\in Z_N^d} \Big[\frac{1}{2}(\nabla\cdot C\nabla J)\big(\tfrac{x}{N}\big)\,\eta(x)
+ \frac{1}{4}\sum_{i,j=1}^{d} D_i J\big(\tfrac{x}{N}\big)\, D_j J\big(\tfrac{x}{N}\big)\, f_x^{i,j}(\eta)\Big]
$$
where
$$
f_x^{i,j}(\eta) = \sum_z \pi(z)\, z_i z_j\,\big[\eta(x)(1-\eta(x+z)) + \eta(x+z)(1-\eta(x))\big]
$$
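The two "$\simeq$" steps rest on a second-order Taylor expansion of the exponential; since $\pi$ has compact support, $|y-x|$ stays bounded, and, assuming $J$ smooth,

```latex
e^{J(\frac{y}{N})-J(\frac{x}{N})} - 1
  = \big(J(\tfrac{y}{N}) - J(\tfrac{x}{N})\big)
  + \tfrac{1}{2}\big(J(\tfrac{y}{N}) - J(\tfrac{x}{N})\big)^2 + O(N^{-3}),
\qquad
J(\tfrac{y}{N}) - J(\tfrac{x}{N})
  = \frac{\langle \nabla J(\frac{x}{N}),\, y-x\rangle}{N} + O(N^{-2}).
```

After multiplication by the speeded-up rate $N^2$, the linear term sums by parts to the diffusion term $\frac12(\nabla\cdot C\nabla J)\eta$ (odd moments of $\pi$ vanish since $\pi$ is even), the square produces the quadratic form in $\nabla J$, and the $O(N^{-3})$ remainder disappears in the limit.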
The exponential martingale has many uses. The first one is super-exponential tightness.

Lemma 4.6.1. Let $J(u)$ be a smooth function on $T^d$. Let $P_N^\eta$ be the simple exclusion process with initial state $\eta$. Then for any $\epsilon > 0$,
$$
\lim_{\delta\to 0}\ \limsup_{N\to\infty}\ \sup_\eta\ \frac{1}{N^d}\log P_N^\eta\Big[\sup_{0\le t\le \delta}\Big|\frac{1}{N^d}\sum_{x\in Z_N^d} J\big(\tfrac{x}{N}\big)[\eta(t,x) - \eta(0,x)]\Big| \ge \epsilon\Big] = -\infty
$$

Proof.
$$
M_{N,J}(t) = \frac{1}{N^d}\sum_{x\in Z_N^d} J\big(\tfrac{x}{N}\big)[\eta(t,x) - \eta(0,x)] - \int_0^t A_{N,J}(s)\,ds
$$
is a martingale, where
$$
A_{N,J}(s) = \frac{N^2}{2N^d}\sum_{x,z} \pi(z)\Big[J\big(\tfrac{x+z}{N}\big) - 2J\big(\tfrac{x}{N}\big) + J\big(\tfrac{x-z}{N}\big)\Big]\,\eta(s,x)
$$
and $|A_{N,J}| \le C(J)$.

It is therefore sufficient to estimate
$$
P_N\Big[\sup_{0\le s\le t} M_{N,J}(s) \ge \ell\Big]
$$
We have the exponential martingales
$$
\exp\Big[\lambda N^d M_{N,J}(t) - \int_0^t B_{N,J}(s)\,ds\Big]
$$
where
$$
B_{N,J}(s) = N^2 \sum_{x,y}\Big[e^{\lambda(J(\frac{y}{N})-J(\frac{x}{N}))} - 1 - \lambda\big(J(\tfrac{y}{N}) - J(\tfrac{x}{N})\big)\Big]\,\pi(y-x)\,\eta(s,x)(1-\eta(s,y)) \le C(\lambda,J)\,N^d
$$
The usual bound (Doob's inequality) for martingales yields
$$
P_N\Big[\sup_{0\le t\le \delta} M_{N,J}(t) \ge \ell\Big]
\le P_N\Big[\sup_{0\le t\le \delta}\exp\Big(\lambda N^d M_{N,J}(t) - \int_0^t B_{N,J}(s)\,ds\Big) \ge e^{\lambda N^d \ell - C(\lambda,J)N^d\delta}\Big]
\le e^{-\lambda N^d \ell + C(\lambda,J)N^d\delta}
$$
Hence
$$
\limsup_{N\to\infty}\ \frac{1}{N^d}\log P_N\Big[\sup_{0\le s\le \delta} M_{N,J}(s) \ge \ell\Big] \le -\lambda\ell + C(\lambda,J)\,\delta
$$
and
$$
\lim_{\delta\to 0}\ \limsup_{N\to\infty}\ \sup_\eta\ \frac{1}{N^d}\log P_N\Big[\sup_{0\le s\le \delta} M_{N,J}(s) \ge \ell\Big] \le -\lambda\ell
$$
Since $\ell > 0$ is fixed and $\lambda > 0$ is arbitrary, the right-hand side can be made as negative as we please; applying the same bound with $-J$ in place of $J$ controls the absolute value, and we are done.
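The mechanism of the proof, an exponential martingale combined with Doob's maximal inequality to give a tail bound of the form $e^{-\lambda\ell + C(\lambda)\delta}$, can be seen exactly on a short symmetric random walk, where all $2^n$ paths can be enumerated:

```python
import itertools
import numpy as np

# For the symmetric random walk S_k, M_k = exp(lam*S_k - k*log cosh(lam)) is a
# positive mean-one martingale, and Doob's maximal inequality gives
#   P[ max_k S_k >= a ] <= exp(-lam*a + n*log cosh(lam)).
# For small n the left side can be computed exactly by enumerating all paths.
n, a, lam = 12, 4, 0.5

hits = sum(1 for steps in itertools.product([-1, 1], repeat=n)
           if max(np.cumsum(steps)) >= a)
prob = hits / 2 ** n

bound = np.exp(-lam * a + n * np.log(np.cosh(lam)))
assert prob <= bound
```

Optimizing the free parameter $\lambda$ tightens the bound, just as letting $\lambda \to \infty$ after $\delta \to 0$ drives the rate to $-\infty$ in the lemma.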

This is actually all we need to prove super-exponential tightness.



Lemma 4.6.2. Let $P$ be a probability distribution on $D[[0,T]; X]$ where $X$ is a Polish space. Suppose that
$$
P\Big[\sup_{t\le s\le t+\delta} d(x(s), x(t)) \ge \epsilon\,\Big|\,\mathcal{F}_t\Big] \le \bar{P}(\delta,\epsilon)
$$
and that, for every $t$,
$$
P\big[d(x(t-0),\, x(t+0)) \ge \epsilon\big] = 0
$$
Then for any integer $k$ and $\lambda > 0$,
$$
P\Big[\sup_{|s-t|\le \delta,\ 0\le s,t\le T} d(x(s), x(t)) \ge 4\epsilon\Big]
\le k\,\bar{P}(\delta,\epsilon) + e^{\lambda T}\big[\bar{P}(\delta,\epsilon)(1 - e^{-\lambda\delta}) + e^{-\lambda\delta}\big]^k
$$
In particular, if $X$ is compact and $\{P_\alpha\}$ is a family such that
$$
\lim_{\delta\to 0}\ \sup_\alpha\ \bar{P}_\alpha(\delta,\epsilon) = 0
$$
for every $\epsilon > 0$, then $\{P_\alpha\}$ is a uniformly tight family.


Proof. Define successive stopping times, $\tau_0 = 0$,
$$
\tau_{j+1} = \inf\{t : t \ge \tau_j,\ d(X(t), X(\tau_j)) \ge \epsilon\}
$$
Let $k_* = \inf\{j : \tau_{j+1} > T\}$ and $\delta_* = \inf_{1\le j\le k_*}(\tau_j - \tau_{j-1})$. Then if $0\le s,t\le T$ and $|t-s| \le \delta_*$, there can be at most one $\tau_j$ between them, and hence $d(X(s), X(t)) \le 4\epsilon$. So we need to estimate
$$
P[\delta_* \le \delta] \le \sum_{j=1}^{k} P[\tau_j - \tau_{j-1} \le \delta] + P[\tau_k < T]
$$
We have control on the first term:
$$
\sum_{j=1}^{k} P[\tau_j - \tau_{j-1} \le \delta] \le k\,\bar{P}(\delta,\epsilon)
$$
As for the second term, we can estimate it by
$$
P[\tau_k < T] \le e^{\lambda T}\, E[e^{-\lambda\tau_k}] \le e^{\lambda T}\gamma^k
$$
where $\gamma$ is an upper bound on
$$
E^{P}\big[e^{-\lambda(\tau_{j+1}-\tau_j)}\,\big|\,\mathcal{F}_{\tau_j}\big] \le \bar{P}(\delta,\epsilon)(1 - e^{-\lambda\delta}) + e^{-\lambda\delta}
$$
If we pick $\delta_0$ such that $\bar{P}(\delta_0,\epsilon) \le \frac{1}{2}$, then
$$
\gamma \le \frac{1 + e^{-\lambda\delta_0}}{2} \le e^{-\lambda_0}
$$
for some positive $\lambda_0$.

Lemma 4.6.3. Given any $\ell < \infty$ there exists a compact set $K_\ell \subset D[[0,T]; \mathcal{M}(T^d)]$ such that
$$
\limsup_{N\to\infty}\ \frac{1}{N^d}\log P_N[K_\ell^c] \le -\ell
$$

Proof. We will use Theorem 4.5.5. It is sufficient to prove that if $H(Q_N; P_N) \le C N^d$, then $Q_N$ is tight. Since we are dealing with the weak topology on $\mathcal{M}(T^d)$, this is the same as tightness, under $Q_N$, of the processes $X_J(t) = \langle J, \mu_N(t)\rangle$ for smooth $J$, where $\mu_N(t)$ is the empirical measure. From Lemma 4.6.1 we have the estimates
$$
\bar{P}(\delta,\epsilon) \le e^{N^d(-\lambda\epsilon + C(\lambda,J)\delta)}
$$
that are super-exponential. Therefore for any sequence $Q_N$ with $H(Q_N; P_N) \le C N^d$ we will have uniform estimates on $\bar{Q}(\delta,\epsilon)$, and this implies the uniform tightness of $Q_N$.

The following lemma allows us to use super-exponential estimates in evaluating integrals.

Lemma 4.6.4. Suppose for a sequence of distributions $P_N$ on some Polish space $X$ we have the estimates
$$
\limsup_{N\to\infty}\ \frac{1}{N}\log \int \exp[N F_N(x)]\,dP_N \le C
$$
where $F_N$ is a sequence of functions bounded by $C$. Suppose for each $\epsilon > 0$, $G_{N,\epsilon}$ is another sequence of continuous functions on $X$, all uniformly bounded by $C$, such that for any $\sigma > 0$,
$$
\limsup_{\epsilon\to 0}\ \limsup_{N\to\infty}\ \frac{1}{N}\log P_N\big[|F_N - G_{N,\epsilon}| \ge \sigma\big] = -\infty
$$
Assume further that as $N\to\infty$, $G_{N,\epsilon} \to G_\epsilon$ uniformly on compact sets, and $G_\epsilon \to G$ uniformly on compacts as $\epsilon\to 0$. Then if $Q_N$ is such that
$$
\limsup_{N\to\infty}\ \frac{1}{N}\, H(Q_N; P_N) \le H
$$
and $Q_N \to Q$ weakly in $\mathcal{M}(X)$, then
$$
E^{Q}[G(x)] \le C + H
$$

Proof. The entropy inequality gives
$$
E^{Q_N}[G_{N,\epsilon}(x)] \le \frac{1}{N}\Big(H(Q_N; P_N) + \log E^{P_N}\big[\exp[N\,G_{N,\epsilon}(x)]\big]\Big)
$$
On the other hand, writing
$$
\exp[N G_{N,\epsilon}] \le \exp[N F_N]\,\exp\big[N|G_{N,\epsilon} - F_N|\big]
$$
and using Hölder's inequality, for $0 < \theta < 1$,
$$
\log E^{P_N}\big[\exp[N G_{N,\epsilon}]\big]
\le \theta \log E^{P_N}\big[\exp[\tfrac{N}{\theta} F_N]\big]
+ (1-\theta)\log E^{P_N}\big[\exp[\tfrac{N}{1-\theta}|G_{N,\epsilon} - F_N|]\big]
$$
The super-exponential estimate, together with the uniform bounds on $F_N$ and $G_{N,\epsilon}$, implies that for any $0 < \theta < 1$,
$$
\limsup_{N\to\infty}\ \frac{1}{N}\log E^{P_N}\big[\exp[\tfrac{N}{1-\theta}|G_{N,\epsilon} - F_N|]\big] = 0
$$
Therefore
$$
E^{Q}[G_\epsilon] \le \limsup_{N\to\infty}\, E^{Q_N}[G_{N,\epsilon}] \le \frac{1}{\theta}\,[H + C]
$$
We can let $\epsilon\to 0$ and $\theta\to 1$ to complete the proof.
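The first line of the proof is the entropy inequality $E^Q[g] \le H(Q;P) + \log E^P[e^g]$, here applied with $g = N G_{N,\epsilon}$. On a finite space it can be checked directly, including the equality case for the tilted measure:

```python
import numpy as np

# The entropy inequality E^Q[g] <= H(Q; P) + log E^P[e^g] on a finite space,
# with equality when Q is the tilted measure dQ = e^g dP / E^P[e^g].
rng = np.random.default_rng(1)
p = rng.random(6); p /= p.sum()       # an arbitrary P
q = rng.random(6); q /= q.sum()       # an arbitrary Q
g = rng.standard_normal(6)

H = np.sum(q * np.log(q / p))          # relative entropy H(Q; P)
assert q @ g <= H + np.log(p @ np.exp(g)) + 1e-12

q_tilt = p * np.exp(g)
q_tilt /= q_tilt.sum()                 # the tilted measure
H_tilt = np.sum(q_tilt * np.log(q_tilt / p))
assert np.isclose(q_tilt @ g, H_tilt + np.log(p @ np.exp(g)))
```

The equality case is what makes the inequality sharp enough to extract the rate function later: the tilted (perturbed) process saturates it.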

Now we need to establish the uniqueness of weak solutions to the problem
$$
\frac{\partial \rho}{\partial t} = \frac{1}{2}\nabla\cdot C\nabla \rho - \nabla\cdot\big(b(t,u)\,\rho(t,u)(1-\rho(t,u))\big)
$$
The difference $r = \rho - \rho'$ of two solutions satisfies
$$
\frac{\partial r}{\partial t} = \frac{1}{2}\nabla\cdot C\nabla r - \nabla\cdot\big(b(t,u)\,r(t,u)\,[1 - \rho(t,u) - \rho'(t,u)]\big)
= \frac{1}{2}\nabla\cdot C\nabla r - \nabla\cdot\big(c(t,u)\,r(t,u)\big)
$$
with $c = b\,(1-\rho-\rho')$, and therefore
$$
\frac{d}{dt}\|r(t)\|_2^2 = -\int_{T^d} \langle C\nabla r, \nabla r\rangle\,du + 2\int_{T^d} r\,\langle c, \nabla r\rangle\,du \le K\|r\|_2^2
$$
If we can establish some a priori regularity on the solutions, then Gronwall's inequality will establish uniqueness.
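Explicitly, with $\varphi(t) = \|r(t)\|_2^2$ the energy estimate reads $\varphi' \le K\varphi$ with $\varphi(0) = 0$, and Gronwall's inequality closes the argument:

```latex
\varphi'(t) \le K\varphi(t),\quad \varphi(0) = 0
\;\Longrightarrow\;
\frac{d}{dt}\Big(e^{-Kt}\varphi(t)\Big) \le 0
\;\Longrightarrow\;
\varphi(t) \le e^{Kt}\varphi(0) = 0,
```

so $r \equiv 0$ and the two weak solutions coincide.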
We have the exponential martingales for $P_N$. With
$$
F_J = \sum_{x\in Z_N^d}\Big[J\big(T, \tfrac{x}{N}\big)\eta(T,x) - J\big(0, \tfrac{x}{N}\big)\eta(0,x) - \int_0^T J_t\big(t, \tfrac{x}{N}\big)\eta(t,x)\,dt\Big]
- N^2 \sum_{x,y\in Z_N^d}\int_0^T \Big[e^{J(t,\frac{y}{N})-J(t,\frac{x}{N})} - 1\Big]\,\eta(t,x)(1-\eta(t,y))\,\pi(y-x)\,dt
$$
we have
$$
E^{P_N}\big[\exp[F_J(\eta(\cdot,\cdot))]\big] = 1
$$
This implies, with the help of (4.3), that
$$
E^{Q_N}\Big[\frac{1}{N^d}\,F_J\Big] \le \frac{1}{N^d}\,H(Q_N; P_N)
$$
Letting $N\to\infty$ with $H = \lim_{N\to\infty}\frac{H(Q_N; P_N)}{N^d}$,
$$
E^{Q}\Big[\int_{T^d} J(T,u)\rho(T,u)\,du - \int_{T^d} J(0,u)\rho(0,u)\,du - \int_0^T\!\!\int_{T^d} J_t(t,u)\,\rho(t,u)\,dt\,du
- \frac{1}{2}\int_0^T\!\!\int_{T^d} (\nabla\cdot C\nabla J)(t,u)\,\rho(t,u)\,dt\,du
- \frac{1}{2}\int_0^T\!\!\int_{T^d} \langle C\nabla J, \nabla J\rangle\,\rho(t,u)(1-\rho(t,u))\,dt\,du\Big] \le H
$$

One of the things we should remember while carrying out large deviation estimates is that if
$$
\limsup_{n\to\infty}\ \frac{1}{n}\log E^{P_n}[e^{f_i}] \le 0
$$
for $i = 1,2$, then it follows that
$$
\limsup_{n\to\infty}\ \frac{1}{n}\log E^{P_n}[e^{\max(f_1,f_2)}] \le 0
$$
Just notice that $e^{\max(f_1,f_2)} \le e^{f_1} + e^{f_2}$, and the sum of two exponentials that do not grow does not grow either. This is easily extended to a finite sum. It is now easy to see that if we have a family $\{f_\alpha : \alpha \in A\}$ of continuous functions and two sequences of probability measures $P_n$ and $Q_n$ such that $Q_n \to Q$ weakly,
$$
\limsup_{n\to\infty}\ \frac{1}{n}\,H(Q_n; P_n) \le H
$$
and
$$
\limsup_{n\to\infty}\ \frac{1}{n}\log E^{P_n}[e^{f_\alpha}] \le 0
$$
for every $\alpha \in A$, then of course
$$
\sup_{\alpha\in A}\, E^{Q}[f_\alpha] \le H
$$
But actually we get to move the supremum inside for free:
$$
E^{Q}\big[\sup_{\alpha\in A} f_\alpha\big] \le H
$$
It follows from our previous discussion that the above inequality is valid if we replace $A$ by any finite subset of $A$; by monotone convergence it is then true for $A$ itself. The rest is routine. If $Q$ is any limit point of $\{Q_N\}$ with $\frac{H(Q_N; P_N)}{N^d} \to H$, we have

Lemma 4.6.5.
$$
E^{Q}\Big[\sup_J\Big\{\int_{T^d} J(T,u)\rho(T,u)\,du - \int_{T^d} J(0,u)\rho(0,u)\,du - \int_0^T\!\!\int_{T^d} J_t(t,u)\,\rho(t,u)\,dt\,du
- \frac{1}{2}\int_0^T\!\!\int_{T^d} (\nabla\cdot C\nabla J)\,\rho(t,u)\,dt\,du
- \frac{1}{2}\int_0^T\!\!\int_{T^d} \langle C\nabla J, \nabla J\rangle\,\rho(1-\rho)\,dt\,du\Big\}\Big] \le H \tag{4.6}
$$
In particular, $Q$ almost surely,
$$
\rho_t \in L^2[[0,T]; H_{-1}(T^d)] \tag{4.7}
$$
and
$$
\rho \in L^2[[0,T]; H_{1}(T^d)] \tag{4.8}
$$

Proof. We just have to make sure that estimate (4.6) is enough to provide (4.7) and (4.8). Since $\rho(1-\rho) \le 1$, (4.6) implies a bound on
$$
E^{Q}\Big[\frac{1}{2}\int_0^T \big\|\rho_t - \tfrac{1}{2}\nabla\cdot C\nabla\rho\big\|_{-1,C}^2\,dt\Big]
= E^{Q}\Big[\sup_J\Big\{\int_{T^d} J(T,u)\rho(T,u)\,du - \int_{T^d} J(0,u)\rho(0,u)\,du - \int_0^T\!\!\int_{T^d} J_t\,\rho\,dt\,du
- \frac{1}{2}\int_0^T\!\!\int_{T^d} (\nabla\cdot C\nabla J)\,\rho\,dt\,du - \frac{1}{2}\int_0^T\!\!\int_{T^d} \langle C\nabla J, \nabla J\rangle\,dt\,du\Big\}\Big] \le H
$$
We can do a convolution in space and time and approximate $\rho$ by a smooth $\rho^\epsilon$. We will continue to have
$$
E^{Q}\Big[\frac{1}{2}\int_0^T \big\|\rho^\epsilon_t - \tfrac{1}{2}\nabla\cdot C\nabla\rho^\epsilon\big\|_{-1,C}^2\,dt\Big] \le H
$$
The cross term satisfies
$$
\int_0^T \big\langle \rho^\epsilon_t,\ \nabla\cdot C\nabla\rho^\epsilon\big\rangle_{-1,C}\,dt
= -\int_0^T\!\!\int_{T^d} \rho^\epsilon_t\,\rho^\epsilon\,du\,dt
= \frac{1}{2}\int_{T^d}\rho^\epsilon(0,u)^2\,du - \frac{1}{2}\int_{T^d}\rho^\epsilon(T,u)^2\,du
$$
which is bounded since $0 \le \rho \le 1$. Expanding the square, we get a uniform bound
$$
E^{Q}\Big[\int_0^T \|\rho^\epsilon_t\|_{-1}^2\,dt + \frac{1}{4}\int_0^T\!\!\int_{T^d}\langle C\nabla\rho^\epsilon, \nabla\rho^\epsilon\rangle\,dt\,du\Big] \le 2H + 1
$$
We can now let $\epsilon\to 0$, which yields (4.7) and (4.8).



4.7 Upper Bound


The proof of the upper bound is almost done. We have already shown super exponential
tightness. From the exponential martingales we have (??). Therefore the local rate function
is
sup GJ ((, ))
J

where
& & T&
' (
GJ = J(T, u)(T, u) J(0, u)(0, u) du Jt (t, u)(t, u)dtdu
Td 0 Td
& &
1 T
C J(t, u)(t, u)dtdu
2 0 Td
& &
1 T
-CJ, j.(t, u)(1 (t, u))dtdu
2 0 Td

The supremum is calculated as


& T
1 1
"t C "21,C,(1) dt
2 0 2

where
+ & & ,
"f "21,C,(1) = sup 2 J(u)f (u)du -C(J)(u), (J)(u).(u)(1(u))du (4.9)
J Td Td
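The variational norm (4.9) can be made concrete in one dimension. Assume, for this sketch only, $d = 1$, $C = 1$ and a constant weight $\chi$ standing in for $\rho(1-\rho)$; the supremum is then attained at $J$ solving $-\chi J'' = f$ and can be evaluated mode by mode in Fourier:

```python
import numpy as np

# ||f||^2 = sup_J [ 2 int J f du - chi int (J')^2 du ] on the unit circle,
# attained at J with -chi J'' = f; for a two-mode f it is explicit.
chi = 0.21
n = 4096
u = np.arange(n) / n
f = np.cos(2 * np.pi * u) + 0.5 * np.sin(4 * np.pi * u)

def functional(J):
    Jp = np.gradient(J, 1.0 / n)
    return 2 * np.mean(J * f) - chi * np.mean(Jp ** 2)

# Optimal J: divide each Fourier mode of f by chi * (2 pi k)^2.
J_opt = (np.cos(2 * np.pi * u) / (chi * (2 * np.pi) ** 2)
         + 0.5 * np.sin(4 * np.pi * u) / (chi * (4 * np.pi) ** 2))

closed_form = 0.5 / (chi * (2 * np.pi) ** 2) + 0.125 / (chi * (4 * np.pi) ** 2)
assert np.isclose(functional(J_opt), closed_form, rtol=1e-3)

# Any other test function does no better.
assert functional(np.sin(2 * np.pi * u)) <= functional(J_opt)
```

With a non-constant weight $\rho(1-\rho)$ the Fourier modes no longer decouple, which is why the text solves the Euler–Lagrange equation $-\nabla\cdot(\rho(1-\rho)C\nabla J) = f$ instead.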

Finally we have to match the upper bound with the lower bound.

Lemma 4.7.1.
$$
I(\rho(\cdot,\cdot)) = \inf_b\ \frac{1}{2}\int_0^T\!\!\int_{T^d} \langle C^{-1}b(t,u),\, b(t,u)\rangle\,\rho(t,u)(1-\rho(t,u))\,dt\,du
= \frac{1}{2}\int_0^T \Big\|\rho_t - \frac{1}{2}\nabla\cdot C\nabla\rho\Big\|_{-1,C,\rho(1-\rho)}^2\,dt
$$

Proof. First let us assume that $\rho(t,u)$ is smooth and satisfies $0 < c_1 \le \rho \le c_2 < 1$. We can then do the variational problem in (4.9), with $\rho_t - \frac{1}{2}\nabla\cdot C\nabla\rho = f(t,u)$ a smooth function. It results in solving
$$
-\nabla\cdot\big(\rho(t,u)(1-\rho(t,u))\,C\nabla J\big) = f(t,u)
$$
and the rate function equals
$$
\frac{1}{2}\int_0^T\!\!\int_{T^d} \langle C\nabla J, \nabla J\rangle\,\rho(t,u)(1-\rho(t,u))\,dt\,du
$$
Then $b(t,u) = C\nabla J$ belongs to the admissible class $\mathcal{B}(\rho)$, and
$$
I(\rho(\cdot,\cdot)) = \frac{1}{2}\int_0^T\!\!\int_{T^d} \langle C^{-1}b, b\rangle\,\rho(1-\rho)\,dt\,du
$$
matching the upper and lower bounds.


Finally we need to approximate an arbitrary $\rho$ with $I(\rho(\cdot,\cdot)) < \infty$ by nice $\rho_n$ such that $I(\rho_n(\cdot,\cdot)) \to I(\rho(\cdot,\cdot))$. We note that the rate function $I(\rho(\cdot,\cdot))$ is convex, lower semicontinuous and translation invariant in space and time, so smoothing by convolution will provide the needed approximation.

We have finally established the Large Deviation Principle for the processes $P_N^{\eta_N}$ on $D[[0,T]; \mathcal{M}(T^d)]$, under the assumption that $\frac{1}{N^d}\sum_{x\in Z_N^d} \delta_{\frac{x}{N}}\,\eta_N(x) \to \rho_0(u)\,du$ weakly in $\mathcal{M}(T^d)$.

Theorem 4.7.2. The large deviation principle holds for $P_N^{\eta_N}$ with the rate function
$$
I(\rho(\cdot,\cdot)) = \frac{1}{2}\int_0^T \Big\|\rho_t - \frac{1}{2}\nabla\cdot C\nabla\rho\Big\|_{-1,C,\rho(1-\rho)}^2\,dt
$$
provided $\rho(0,u) = \rho_0(u)$, and $I = +\infty$ otherwise.
