Hydrodynamic limit in a particle system with topological
interactions
G. Carinci, A. De Masi, C. Giardinà, E. Presutti
July 1, 2013
Abstract
We study a system of particles in the interval [0, ε⁻¹] ∩ Z, ε⁻¹ a positive integer. The
particles move as symmetric independent random walks (with reflections at the endpoints);
simultaneously new particles are injected at rate j, j > 0, and removed at the same rate: the
particles are injected at 0 and removed from the rightmost occupied site. The removal
mechanism is therefore of topological rather than metric nature. The determination of
the rightmost occupied site requires knowledge of the entire configuration and prevents
the use of correlation function techniques.
We prove, using stochastic inequalities, that the system has a hydrodynamic limit,
namely that under suitable assumptions on the initial configurations, the law of the density
fields ε Σ_x φ(εx) ξ_{ε⁻²t}(x) (φ a test function, ξ_t(x) the number of particles at x at time t)
concentrates in the limit ε → 0 on the deterministic value ∫ φ ρ_t, ρ_t interpreted as the
limit density at time t. We characterize the limit ρ_t as a weak solution, in terms of barriers,
of a limit free boundary problem.
1 Introduction and model definition
This paper is inspired by the analysis in [11] and we are indebted to Pablo Ferrari for discussions and in particular for suggesting the inequalities in Section 6. This is the first in a series
of three papers where we study a particle system whose hydrodynamic limit is described by
a free boundary problem.
Our system is made of particles confined to the lattice [0, ε⁻¹] ∩ Z; for brevity in the sequel
we shall just write [0, ε⁻¹]. In this notation ε⁻¹ is a positive integer denoting the system size
and we will eventually be interested in the asymptotics as ε → 0. The evolution is a Markov
process on the space of particle configurations ξ = (ξ(x))_{x∈[0,ε⁻¹]}, the component ξ(x) ∈ N being
interpreted as the number of particles at site x. The generator is denoted by

    L = L₀ + L_b + L_a                                                        (1.1)

(dependence on ε is not made explicit). L₀ is the generator of the independent random walks
process; it is defined on the core of local functions by

    L₀ f(ξ) = (1/2) Σ_{x=0}^{ε⁻¹−1} L⁰_{x,x+1} f(ξ)                            (1.2)

    L⁰_{x,x+1} f(ξ) = ξ(x)[f(ξ^{x,x+1}) − f(ξ)] + ξ(x+1)[f(ξ^{x+1,x}) − f(ξ)]    (1.3)
where ξ^{x,y} denotes the configuration obtained from ξ by removing one particle from site x
and putting it at site y, i.e.

    ξ^{x,y}(z) = ξ(z)        if z ≠ x, y,
    ξ^{x,y}(z) = ξ(z) − 1    if z = x,
    ξ^{x,y}(z) = ξ(z) + 1    if z = y.

Namely L₀ describes independent symmetric random walks which jump with equal probability
after an exponential time of mean 1 to the nearest neighbor sites, the jumps leading outside
[0, ε⁻¹] being suppressed (reflecting boundary conditions).
The term L_b in (1.1) is

    L_b f(ξ) = j [f(ξ⁺) − f(ξ)],     ξ⁺(x) = ξ(x) + 1_{x=0}                     (1.4)

It describes the action of throwing new particles into the system at rate j, j > 0, which then
land at 0; instead L_a removes particles:

    L_a f(ξ) = j [f(ξ⁻) − f(ξ)],     ξ⁻(x) = ξ(x) − 1_{x=R(ξ)}                   (1.5)

namely a particle is taken out from the edge R(ξ) of the configuration ξ:

    R(ξ) is such that:   ξ(y) > 0 for y = R(ξ),   ξ(y) = 0 for y > R(ξ)          (1.6)

L_a f(ξ) = 0 if R(ξ) does not exist, i.e. if ξ ≡ 0.
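For readers who prefer an algorithmic description of the dynamics generated by (1.1), the following is a minimal Monte Carlo sketch (ours, not from the paper); the function name and the Gillespie-type event loop are our own choices, and N stands for the system size ε⁻¹.

```python
import random

def simulate(N, xi, j, T, rng=random.Random(0)):
    """Gillespie simulation of L = L0 + Lb + La on [0, N] (a sketch, assuming rates as in (1.2)-(1.5)).

    xi  -- list of occupation numbers, xi[x] = number of particles at site x
    j   -- injection/removal rate of the "current reservoirs"
    T   -- time horizon
    """
    xi = list(xi)
    t = 0.0
    while True:
        n = sum(xi)                      # total number of particles
        total_rate = n + 2 * j           # each particle jumps at rate 1; birth and death each at rate j
        t += rng.expovariate(total_rate)
        if t > T:
            return xi
        u = rng.uniform(0.0, total_rate)
        if u < j:                        # Lb: a new particle lands at the origin
            xi[0] += 1
        elif u < 2 * j:                  # La: remove one particle from the rightmost occupied site R(ξ)
            R = max((x for x in range(N + 1) if xi[x] > 0), default=None)
            if R is not None:            # La f = 0 when ξ ≡ 0
                xi[R] -= 1
        else:                            # L0: a uniformly chosen particle attempts a symmetric jump
            k = rng.randrange(n)
            x = 0
            while k >= xi[x]:
                k -= xi[x]
                x += 1
            y = x + rng.choice((-1, 1))
            if 0 <= y <= N:              # jumps leading outside [0, N] are suppressed (reflection)
                xi[x] -= 1
                xi[y] += 1
```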
We interpret L as the generator of a system of independent walkers with “current reservoirs” which impose a positive current j at 0 and at the edge of the configuration. See [8, 9]
for a comparison with the density reservoirs used in the analysis of the Fourier law. Here is
a list of the main issues which are studied in this and in the other papers in this series.
• The interaction described by La is highly non local as R(ξ) depends on the positions of
all the particles. This spoils any attempt to use the BBGKY hierarchical equations for
the correlation functions, as customary in perturbations of the independent system,
see for instance [7].
• The La interaction is “topological rather than metric”, as the influence on a particle
i of a particle j only depends on whether j is to the right or left of i and not on
their distance. Topological interactions seem to appear often in natural sciences as in
population dynamics, in particular the motion of crowds of people [15], or of animals
[1]. This shows that there are natural examples in physical systems as well. The relative
simplicity of our model allows for a rigorous analysis of such an interaction.
• To the left of R(ξ) the particles do not feel the La interaction and move freely, but R(ξ)
depends on the configuration of particles and hence on time as well. Ours therefore
is a microscopic model for a free boundary problem and one may thus conjecture that
the hydrodynamic limit is also a free boundary problem. In such a case hydrodynamics
would be ruled by the linear heat equation in an open, time dependent space interval
with suitable boundary conditions complemented by a law for the speed of the right
boundary.
• The action of L_b and L_a is to add particles from the left and, respectively, remove them from the
right at rate j. They act therefore as "current reservoirs" because they are
imposing a current j (recall that for density reservoirs the particle current scales by
ε). Supposing the validity of Fick's law the stationary macroscopic profiles are then
linear functions with slope −2j: there are therefore infinitely many such profiles (as
here the boundary densities are not fixed). Two scenarios are then possible: either
there is a preferential profile or there is a second time scale beyond the hydrodynamical
one, where we see that such profiles are not stationary.
We shall give answers to most of the above issues, our main results being stated in the next
section.
2 Main results
Hydrodynamic limit.
We denote by ρinit the initial macroscopic profile. We suppose:
Definition 2.1 (Assumptions on the initial macroscopic density profile). We suppose ρ_init ∈
C([0, 1], R₊) and, if it exists, we call R₀ = min{R : ρ_init(r) = 0 for all r ∈ [R, 1]} the "edge" of ρ_init.

For each ε we start the process from a fixed initial configuration ξ, for instance a typical
configuration of the product of Poisson measures over x ∈ [0, ε⁻¹] with averages ρ_init(εx). The
precise statement is given in Definition 2.3, where we use the notion of empirical averages:
Definition 2.2 (Empirical averages). For any integer ℓ and x ∈ [0, ε⁻¹ − ℓ + 1] we define the
empirical average in a configuration ξ as

    A_ℓ(x, ξ) := (1/ℓ) Σ_{y=x}^{x+ℓ−1} ξ(y)                                       (2.1)

We also define, for any ρ ∈ L∞([0, 1], R₊),

    A⁰_ℓ(x, ρ) := (1/(εℓ)) ∫_{εx}^{ε(x+ℓ)} ρ(r) dr                                 (2.2)
Definition 2.3 (Assumptions on the initial particle configuration). We fix b < 1 suitably
close to 1 and a > 0 suitably small; for the sake of definiteness we set b = 9/10 and a = 1/20.
We then denote by ℓ the integer part of ε^{−b} and suppose that for any ε the initial configuration
ξ verifies

    max_{x∈[0,ε⁻¹]} |A_ℓ(x, ξ) − A⁰_ℓ(x, ρ_init)| ≤ ε^a                            (2.3)

and moreover that, if ρ_init has an edge R₀ (see Definition 2.1),

    |εR(ξ) − R₀| ≤ ε^a                                                            (2.4)

with R(ξ) defined in (1.6). We shall denote by P_ξ^{(ε)} the law of the process with generator L
on [0, ε⁻¹] supported at time 0 by a configuration ξ as above.
Under such assumptions the initial configuration ξ converges to ρ_init as ε → 0 in the sense
of (2.3); convergence extends to all times but in a weaker sense. Writing

    F(r; u) = ∫_r^1 u(r′) dr′,       F(x; ξ) = Σ_{y≥x} ξ(y)                        (2.5)

we have:
Theorem 2.1 (Existence of hydrodynamic limit). Let ρ_init be as in Definition 2.1 and P_ξ^{(ε)} as
in Definition 2.3. Then there exists a function ρ_t(r) ≥ 0, t ≥ 0, r ∈ [0, 1], equal to ρ_init at
time t = 0, continuous in (r, t) and such that for all T > 0, ζ > 0 and t ∈ [0, T ]

    lim_{ε→0} P_ξ^{(ε)} [ max_{x∈[0,ε⁻¹]} |εF(x; ξ_{ε⁻²t}) − F(εx; ρ_t)| ≤ ζ ] = 1          (2.6)

The above convergence implies weak convergence of the density fields against smooth test
functions φ:

    lim_{ε→0} P [ |ε Σ_x ξ_{ε⁻²t}(x) φ(εx) − ∫_0^1 φ(r) ρ_t(r) dr| ≤ ζ ] = 1,   for all ζ > 0
The free boundary problem.
A natural question which arises after Theorem 2.1 is: what kind of free boundary problem is
defined by the particle system in its hydrodynamic limit? We are here evidently supposing
that ρ_init has an edge R₀ < 1. A heuristic argument suggests that if u(r, t) := ρ_t(r) is
sufficiently smooth and it has a differentiable edge R_t, then

    ∂u/∂t = (1/2) ∂²u/∂r²,   r ∈ (0, R_t),      ∂u/∂r|_{r=0,R_t} = −2j,   u(R_t, t) = 0      (2.7)

Indeed the current flowing into the system is j and this must equal the current inside the
system at 0, which is −(1/2) ∂u/∂r|_{r=0⁺}; analogously the current flowing out of the system at R_t is
−(1/2) ∂u/∂r|_{r=R_t⁻}, which again must equal the mass per unit time extracted from the system, which
is again j. Finally, by the definition of the edge, u(R_t, t) = 0. In the open interval (0, R_t)
the flow is just the linear heat equation (which is the hydrodynamic limit of the system of
independent random walks without external sources), hence (2.7).
(2.7) is related to the Stefan problem:

    ∂v/∂t = (1/2) ∂²v/∂r²,   r ∈ (0, R_t),     v(0, t) = v(R_t, t) = j,     dR_t/dt = −(1/(2j)) ∂v/∂r|_{R_t}      (2.8)

which is well known to have smooth solutions at least when the initial datum v(r, 0) ≥ j.
Indeed one can easily check that if v is a smooth solution of (2.8) then

    u(r, t) := 2 ∫_r^{R_t} v(r′, t) dr′                                              (2.9)

solves (2.7).
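The paper states that this can easily be checked; the following short computation is our sketch of that check, under the smoothness assumptions already made on v and R_t.

```latex
% u(r,t) := 2\int_r^{R_t} v(r',t)\,dr' with v solving (2.8)
\partial_r u = -2v \;\Rightarrow\;
  \partial_r u\big|_{r=0} = -2v(0,t) = -2j,\quad
  \partial_r u\big|_{r=R_t} = -2v(R_t,t) = -2j,\quad
  u(R_t,t)=0 .
% For the equation itself:
\partial_t u = 2\dot R_t\, v(R_t,t) + 2\int_r^{R_t}\partial_t v
            = 2j\dot R_t + \int_r^{R_t}\partial_r^2 v
            = 2j\dot R_t + \partial_r v\big|_{R_t} - \partial_r v(r,t) .
% The Stefan condition gives \partial_r v|_{R_t} = -2j\,\dot R_t, hence
\partial_t u = -\partial_r v(r,t) = \tfrac12\,\partial_r^2 u ,
% which is (2.7).
```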
In [2] we shall prove that any classical solution u(r, t) of (2.7) coincides with the hydrodynamic
limit ρ_t of Theorem 2.1 (for the appropriate initial condition). However, we know
that the Stefan problem (2.8) may develop instabilities if the initial datum v(r, 0) is not always
≥ j, see for instance [5], [14]. Moreover it is not at all clear that in general (2.7) and
(2.8) are equivalent, nor that (2.7) is the correct hydrodynamic equation for our model: take
for instance ρ_init which does not satisfy the boundary conditions on its derivatives. In the
absence of smoothness we need a weak version of the hydrodynamic equations which generalizes
(2.7); that such a formulation exists is clearly indicated by our Theorem 2.1, which states that
in general a limit evolution ρ_t is well defined.
A weak formulation can be easily written by discretizing time and injecting and removing
mass at such times, while in between the state is evolved according to the linear heat equation.
The remarkable point is that we can define such approximated evolutions as “barriers”, indeed
we shall see that there are two families of “barriers” which bound from above and below ρt ,
and ρt is the unique element which separates the two families. The inequalities are in the
sense of mass transport:
Definition 2.4 (Partial order). Let U = {u = cD₀ + ρ, c ≥ 0, ρ ∈ L∞([0, 1], R₊)}, where D₀
is the Dirac delta at 0. The definition (2.5) extends naturally to u ∈ U and for any u, v ∈ U
we set

    u ≤ v   iff   F(r; u) ≤ F(r; v) for all r ∈ [0, 1].                               (2.10)

Thus F(r; u) is a non increasing function of r which starts at r = 0 from F(0; u) = ∫_0^1 u, the
total mass of u, and ends at r = 1 where it vanishes. The graph of F(r; u) is "the interface
of u" and u ≤ v means that the interface of v is not below the interface of u, to be compared
with the analogous notion in [11].
Definition 2.5 (The cut and paste operator). We also define for any δ > 0 the subset
U_δ ⊂ U, see Definition 2.4, as

    U_δ := {u = cD₀ + ρ : ∫ u > jδ, c ≥ 0, ρ ∈ L∞([0, 1], R₊)}                          (2.11)

and the cut-and-paste operator K^{(δ)} : U_δ → U_δ,

    K^{(δ)} u = jδ D₀ + 1_{r∈[0,R_δ(u)]} u,        R_δ(u) :  F(R_δ(u); u) = jδ           (2.12)
We also need, in the following definition of barriers, the Green function G_δ^{neum}(r, r′) of the heat
equation in [0, 1] with Neumann boundary conditions:

    G_t^{neum}(r, r′) = Σ_k G_t(r, r′_k),        G_t(r, r′) = e^{−(r−r′)²/2t} / √(2πt)     (2.13)

r′_k being the images of r′ under repeated reflections of the interval [0, 1] to its right and left
(see for instance [13] pag. 97 for details).
Definition 2.6 (Barriers). Let u ∈ U be such that ∫ u > 0. Then for all δ small enough
u ∈ U_δ, and for such δ we define the "barriers" S_{nδ}^{(δ,±)}(u) ∈ U_δ, n ∈ N, as follows: we set
S_0^{(δ,±)}(u) = u and, for n ≥ 1,

    S_{nδ}^{(δ,−)}(u) = K^{(δ)} G_δ^{neum} ∗ S_{(n−1)δ}^{(δ,−)}(u)
                                                                                        (2.14)
    S_{nδ}^{(δ,+)}(u) = G_δ^{neum} ∗ K^{(δ)} S_{(n−1)δ}^{(δ,+)}(u)

The functions S_{nδ}^{(δ,±)} are obtained by alternating the map G_δ^{neum} ∗ (i.e. a diffusion) and
the cut and paste map K^{(δ)} (which takes out a mass jδ from the right and puts it back at
the origin, the macroscopic counterpart of L_b and L_a). They are therefore the macroscopic
counterpart of the particle process ξ_t. It can easily be seen that, unlike ξ_t, the evolutions S_{nδ}^{(δ,±)}
conserve the total mass, and that S_{nδ}^{(δ,+)} maps U_δ into L∞ while S_{nδ}^{(δ,−)} has a singular component
(jδD₀) plus an L∞ component.
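As an illustration of Definition 2.6, here is a minimal numerical sketch (ours, not from the paper) of the lower-barrier iteration S^{(δ,−)}: the profile is stored on a grid together with the mass c sitting at the origin, the Neumann kernel is built from the reflection formula (2.13) truncated to a few images, and K^{(δ)} moves a mass jδ from the right edge back to 0. The grid size M and the truncation are our own illustrative choices.

```python
import numpy as np

def neumann_kernel(t, M, n_images=20):
    """Heat kernel on [0,1] with Neumann b.c., from reflected Gaussian images, on an M-point grid."""
    r = (np.arange(M) + 0.5) / M                 # cell centres
    G = np.zeros((M, M))
    for k in range(-n_images, n_images + 1):
        rk = k + r if k % 2 == 0 else k + 1 - r  # images of r' under repeated reflections of [0,1]
        G += np.exp(-(r[:, None] - rk[None, :]) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)
    return G / M                                 # columns sum to ≈ 1: mass is (approximately) conserved

def lower_barrier_step(c, rho, j, delta, G):
    """One step of S^(δ,−): diffuse (c·D0 + rho), then apply the cut-and-paste operator K^(δ)."""
    M = rho.size
    rho = G @ rho + c * G[:, 0] * M              # diffuse the L∞ part and the Dirac mass at 0
    to_remove = j * delta                        # K^(δ): cut a mass jδ from the right ...
    for x in range(M - 1, -1, -1):
        cell_mass = rho[x] / M
        if cell_mass >= to_remove:
            rho[x] -= to_remove * M
            break
        to_remove -= cell_mass
        rho[x] = 0.0
    return j * delta, rho                        # ... and paste it back as the Dirac mass jδ at the origin
```

Iterating `lower_barrier_step` n times from (c, ρ) = (0, ρ_init) gives a grid approximation of S_{nδ}^{(δ,−)}(ρ_init); exchanging the two operations inside the step gives the upper barrier S^{(δ,+)}.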
The S_{nδ}^{(δ,±)} have been called "barriers" because they form separated classes:

Proposition 2.2 (Separated classes). Let u ∈ U, then

    S_t^{(δ,−)}(u) ≤ S_t^{(δ′,+)}(u)      for all δ, δ′, t such that t = kδ = k′δ′.       (2.15)
It thus looks natural to define the free boundary problem as the problem of finding those
functions which are separated by the barriers:

Definition 2.7 (Separating elements). (ρ_t)_{t≥0} is a separating element between the barriers
{S_{nδ}^{(δ,−)}(u)} and {S_{nδ}^{(δ,+)}(u)} if

    S_t^{(δ,−)}(u) ≤ ρ_t ≤ S_t^{(δ′,+)}(u)      for all δ, δ′, t such that t = kδ = k′δ′, k, k′ ∈ N       (2.16)

Observe that if ρ_t is a separating element then ρ₀ = u.
Theorem 2.3 (Characterization of ρ_t). If u satisfies the conditions of Definition 2.1 then
there is a unique element separating the barriers S_{nδ}^{(δ,−)}(u), S_{nδ}^{(δ,+)}(u). Such an element is equal
to the hydrodynamic limit ρ_t of Theorem 2.1. Moreover, for any t > 0, any r ∈ [0, 1] and any
n ∈ N:

    F(r; S_t^{(t2⁻ⁿ,−)}(ρ_init)) ≤ F(r; ρ_t) ≤ F(r; S_t^{(t2⁻ⁿ,+)}(ρ_init)).                 (2.17)

The lower bound is a non decreasing function of n, the upper bound a non increasing function
of n, and

    lim_{n→∞} sup_{r∈[0,1]} |F(r; S_t^{(t2⁻ⁿ,±)}(ρ_init)) − F(r; ρ_t)| = 0                    (2.18)
Strategy of proof.
The key observation is that if we anticipate/postpone the addition and removal of the
particles which occur in the true process in a given time interval, then we stochastically
increase/decrease the final configuration (in the sense of mass transport to the right, i.e. the
microscopic version of (2.10)).

To implement this we introduce the processes ξ_{kε⁻²δ}^{(δ,±)}, k ∈ N. If for the true process the
number of added and removed particles in the time interval [kε⁻²δ, (k + 1)ε⁻²δ] is equal to
N_{k;±}, then ξ_{(k+1)ε⁻²δ}^{(δ,−)} is obtained from ξ_{kε⁻²δ}^{(δ,−)} by letting it evolve with generator L₀ and at
the end adding N_{k;+} particles at 0 and then removing the rightmost N_{k;−} particles. ξ_{(k+1)ε⁻²δ}^{(δ,+)} is
obtained by reversing the order of the operations: first the addition/removal and then the
free evolution. We have for all δ > 0 and all k ∈ N

    ξ_{kε⁻²δ}^{(δ,−)} ≤ ξ_{kε⁻²δ} ≤ ξ_{kε⁻²δ}^{(δ,+)}     stochastically                      (2.19)

(see Section 6 for details).
The probabilistic part of the paper is essentially concentrated in the analysis of the hydrodynamic
limit of the processes ξ_{kε⁻²δ}^{(δ,±)}: in Section 4 we prove that they converge to S_{kδ}^{(δ,±)}(u) (if
the initial ξ "approximates" u), where convergence is in the sense of (2.6). This is important
because it implies that the inequalities are preserved in the limit.

The hydrodynamic limit for the independent random walks process is easy and well known
in the literature, but in our case there is an extra difficulty related to a macroscopic occupation
at the origin, ξ(0) ≈ ε⁻¹, due to the cut and paste operations. This severely limits the choice
of the parameters (b close to 1, a close to 0, which in normal situations have a much larger
range of values) but luckily some room is left. Instead the convergence of the microscopic cut
and paste to its macroscopic counterpart is easy, as the variables N_{k;±} are, modulo negligible
deviations, independent Poisson variables with mean jε⁻¹δ.

Once we have convergence to S_{kδ}^{(δ,±)}(u) we are left with the analytic problem of studying
the limits of the latter as δ → 0. We first prove some regularity properties uniform in δ, see
Section 7, and then complete the proof of all the theorems.
In [3] we shall study the stationary solutions of (2.7); they are linear functions with slope
−2j. We shall prove that any weak solution (in the sense of barriers) converges as t → ∞
to a linear profile, the one with the same total mass as the initial state. We shall also prove
that at super-hydrodynamic times, i.e. times of order ε⁻³, the particle process is "close" to
the manifold of linear profiles, performing a brownian motion on such a set.

We conclude the list of results in this paper with a last theorem where we identify the limit
equation for ρ_t when ρ_init has no edge:

Theorem 2.4 (Hydrodynamic limit in the absence of an edge). Let ρ_t be as in Theorem 2.1 and
suppose that there is T > 0 such that ρ_t(1) > 0 for t ∈ [0, T ]. Then ρ_t satisfies the equation

    ρ_t(r) = G_t^{neum} ∗ ρ_init (r) + j ∫_0^t { G_s^{neum}(r, 0) − G_s^{neum}(r, 1) } ds,    t ∈ [0, T ]      (2.20)

G_t^{neum}(r, r′) being the Green function of the heat equation in [0, 1] with Neumann conditions,
see (2.13).
Sections content.
In Section 3 we prove that the law of the total particle number process |ξ_t| is that of a symmetric
random walk on N with reflection at the origin (a result which follows directly from the
definition of the process ξ_t). We then state some consequences of such a result which will be
used in the sequel and introduce δ-approximated particle processes which are analogous to
the barriers S_{kδ}^{(δ,±)}.

In Section 4 we prove that if the initial configuration ξ approximates a profile u ∈ U then
ξ_{ε⁻²kδ}^{(δ,±)} converges in law to S_{kδ}^{(δ,±)}(u) as ε → 0. The proof exploits duality for the independent
process but is not a consequence of well known results on the hydrodynamic limit for independent
particles, because we need to take into account the case when there is a macroscopic
occupation number at the origin. As a consequence the bounds are not as strong as those
which appear in the literature.

In Section 5 we introduce a probability space (Ω, P) where we can realize simultaneously
all the processes ξ_t and ξ_{ε⁻²kδ}^{(δ,±)} for all ε.

In Section 6 we relate the true process ξ_{ε⁻²kδ} and the auxiliary ones, ξ_{ε⁻²kδ}^{(δ,±)}, by stochastic
inequalities, in the sense of mass transport theory, exploiting the realization of the processes
of Section 5. By using the convergence proved in Section 4 the inequalities extend to the flows
S_{kδ}^{(δ,±)}, thus proving Proposition 2.2.

In Section 7 we prove regularity properties of the flows S_{kδ}^{(δ,±)} which are uniform in δ.

In Section 8 we prove Theorems 2.1, 2.4, Proposition 2.2 and Theorem 2.3.
3 The total particle number process
Despite the complexity of the full process ξ_t, its marginal |ξ_t| (i.e. the particle number
process) is very simple:

Theorem 3.1 (Distribution of the particle number). |ξ_t| has the law of a random walk
(n_t)_{t≥0} on N which jumps with equal probability by ±1 after an exponential time of parameter
2j, the jumps leading to −1 being suppressed. As a consequence, if ξ verifies the conditions
in Definition 2.3, then the process (M_t^{(ε)})_{t≥0} defined by M_t^{(ε)} := ε|ξ_{ε⁻³t}| converges in law as
ε → 0 to (b_{jt})_{t≥0}, where (b_t)_{t≥0} is the brownian motion on R₊ with reflections at the origin
which starts from b₀ = lim_{ε→0} ε|ξ| = ∫ ρ_init.

Proof. For any bounded function f on N we have

    Lf(|ξ|) = j{ [f(|ξ| + 1) − f(|ξ|)] + 1_{|ξ|>0}[f(|ξ| − 1) − f(|ξ|)] }                 (3.1)

which is indeed the action of the generator of the random walk (n_t)_{t≥0} on the function f(n).
This proves that the law of |ξ_t| is the same as that of the random walk. Classical estimates
on the central limit theorem prove convergence to the brownian motion.
Theorem 3.1 is one of the ingredients in the proof of Theorem ?? (in particular it proves
the last statement in the theorem). We shall often use in the sequel the following explicit
realization of the process (nt )t≥0 .
Definition 3.1 (The probability space (Ω₀, P₀)). We set Ω₀ = {ω₀ = (t₀, σ₀)}, where t₀ = (t_{1;0}, t_{2;0}, ...)
and σ₀ = (σ_{1;0}, σ_{2;0}, ...) are infinite sequences of increasing positive "times" t_{k;0} and of symmetric
"jumps" σ_{k;0} = ±1. (Ω₀, P₀) is the product of a Poisson process of intensity 2j for the time
sequence t₀ and of a Bernoulli process with parameter 1/2 for the jump sequence σ₀.

Given n₀ ∈ N and ω₀ ∈ Ω₀ we define (n_t)_{t≥0} iteratively: we set n_t = n_{t_{k;0}} in the time
interval [t_{k;0}, t_{k+1;0}), k ≥ 0, (t_{0;0} ≡ 0) and define

    n_{t_{k+1;0}} = n_{t_{k;0}} + σ_{k+1;0}     if n_{t_{k;0}} + σ_{k+1;0} ≥ 0
    n_{t_{k+1;0}} = 0                          if n_{t_{k;0}} + σ_{k+1;0} < 0

It is readily seen that the law of (n_t)_{t≥0} as a process on (Ω₀, P₀) (for a given initial value n₀)
is the same as that of the Markov process of Theorem 3.1 and hence of the particle number |ξ_t| in
our original process once n₀ = |ξ₀|.
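A minimal sketch (ours) of this realization: sample the marks of ω₀ on a finite horizon and run the reflected walk, also returning the up/down jump counts that will be named B_{t,t′} and A_{t,t′} just below.

```python
import random

def sample_omega0(T, j, rng=random.Random(1)):
    """Sample ω0 = (t0, σ0) on [0, T]: Poisson(2j) event times with symmetric ±1 marks."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(2 * j)
        if t > T:
            break
        times.append((t, rng.choice((-1, +1))))
    return times

def run_reflected_walk(n0, omega0):
    """Realize (n_t) from ω0 as in Definition 3.1; count the upwards (B) and downwards (A) jumps."""
    n, B, A = n0, 0, 0
    for _, sigma in omega0:
        if n + sigma >= 0:
            n += sigma
            if sigma > 0:
                B += 1          # upwards jump
            else:
                A += 1          # downwards jump (not counted when suppressed at 0)
        # if n + sigma < 0 the walk stays at 0: a σ = −1 mark produces no jump
    return n, B, A
```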
We give special names to the increments of the process (n_t)_{t≥0}: for 0 ≤ t < t′,

    B_{t,t′} = number of upwards jumps of n_s for s ∈ [t, t′]                              (3.2)

    A_{t,t′} = number of downwards jumps of n_s for s ∈ [t, t′]                             (3.3)

When realized on (Ω₀, P₀), B_{t,t′}(ω₀, n₀) (n₀ the initial particle number) is the number of
times t_{k;0} ∈ [t, t′] where σ_{k;0} = 1 (which does not depend on n₀), while the number of times
t_{k;0} ∈ [t, t′] where σ_{k;0} = −1 is an upper bound for A_{t,t′}(ω₀, n₀), as the values σ_{k;0} = −1 do
not produce a jump if n_{t_{k;0}} = 0 (hence the dependence on n₀).

The increments B_{t,t′} and A_{t,t′} are used below in the definition of the auxiliary processes
ξ_t^{(δ,±)}, which are the particle analogue of the evolutions defined in Definition 2.6. While the
analysis of the true process (ξ_t)_{t≥0} is rather complex due to the non local nature of L_a, the
study of the hydrodynamic limit for ξ_t^{(δ,±)} is much simpler because the number of rightmost
particles to delete is macroscopic and becomes deterministic; the analysis will be carried out
in the next section.
Definition 3.2 (The δ-approximated processes). The processes ξ_t^{(δ,±)} are defined successively
in the time intervals [kε⁻²δ, (k + 1)ε⁻²δ], k ≥ 0. We first distribute the variables
B_{[kε⁻²δ,(k+1)ε⁻²δ]} and A_{[kε⁻²δ,(k+1)ε⁻²δ]} as the increments of the Markov process (n_t)_{t≥0} starting
from |ξ₀^{(δ,±)}|. Given such variables we use an induction procedure and suppose ξ_{kε⁻²δ}^{(δ,−)} = ξ
given. Then ξ_t^{(δ,−)}, t ∈ [kε⁻²δ, (k + 1)ε⁻²δ), has the law of the process ξ_t⁰ with generator L₀
starting from ξ at time kε⁻²δ (ξ_t⁰ is the independent random walks process). ξ_{(k+1)ε⁻²δ}^{(δ,−)} is
then obtained from ξ⁰_{(k+1)ε⁻²δ} by adding B_{[kε⁻²δ,(k+1)ε⁻²δ]} particles, all at the origin, and then
removing the A_{[kε⁻²δ,(k+1)ε⁻²δ]} rightmost particles.

ξ_t^{(δ,+)}, t ∈ (kε⁻²δ, (k + 1)ε⁻²δ], is defined as the independent random walk evolution
starting at time kε⁻²δ from ξ′: ξ′ is obtained from ξ = ξ_{kε⁻²δ}^{(δ,+)} by adding B_{[kε⁻²δ,(k+1)ε⁻²δ]}
particles, all at the origin, and then removing the A_{[kε⁻²δ,(k+1)ε⁻²δ]} rightmost particles.

Thus in the ξ_t^{(δ,±)} processes births and deaths are concentrated at the times kε⁻²δ; in
between such times the particles are independent random walks. Under the assumptions on
the initial datum ξ, see Definition 2.3, the process of adding and removing particles becomes
quite simple. For 0 ≤ t < t′ define on Ω₀

    B⁰_{t,t′}(ω₀) = Σ_k 1_{t_{k;0}∈[t,t′]} 1_{σ_{k;0}=+1}                                  (3.4)

    A⁰_{t,t′}(ω₀) = Σ_k 1_{t_{k;0}∈[t,t′]} 1_{σ_{k;0}=−1}                                  (3.5)

B⁰_{t,t′} and A⁰_{t,t′} are independent Poisson distributed variables with average j(t′ − t); they are
also independent of the variables defined in [s, s′] if [s, s′] ∩ [t, t′] = ∅.
Definition 3.3 (Good sets). Given T > 0 and γ > 0 we define, for any δ and ε positive,

    G = { ω₀ ∈ Ω₀ : |A⁰_{t_k,t_{k+1}}(ω₀) − ε⁻¹jδ| ≤ ε^{−1/2−γ},  |B⁰_{t_k,t_{k+1}}(ω₀) − ε⁻¹jδ| ≤ ε^{−1/2−γ},  for all k : t_k ≤ ε⁻²T }      (3.6)

where t_k = kε⁻²δ.

Theorem 3.2 (Reduction to Poisson variables). Given ξ as in Definition 2.3, T > 0 and
γ > 0 there is δ* > 0 so that for any δ < δ* and any ε > 0 small enough the following holds.
For any ω₀ ∈ G (see (3.6)) and any t_k = kε⁻²δ ≤ ε⁻²T,

    A_{t_k,t_{k+1}}(ω₀, |ξ|) = A⁰_{t_k,t_{k+1}}(ω₀),      B_{t_k,t_{k+1}}(ω₀, |ξ|) = B⁰_{t_k,t_{k+1}}(ω₀)        (3.7)

where A_{t_k,t_{k+1}}(ω₀, |ξ|) and B_{t_k,t_{k+1}}(ω₀, |ξ|) denote the variables when realized on Ω₀.
Finally, for any n there is c_n so that

    P₀[G] ≥ 1 − c_n ε^n                                                                     (3.8)
Proof. By Definition 2.3 the initial number of particles |ξ| is bounded from below by
ε⁻¹ ∫ρ_init − ε⁻¹⁺ᵃ ≥ ε⁻¹C, C > 0. We choose δ* := C/(2j) and shall prove by induction that
for any δ < δ* and all ε small enough we have in G

    n_{t_k} ≥ ε⁻¹C − 2k ε^{−1/2−γ},        k ≤ T/δ

Suppose that the inequality holds for k and let us prove it for k + 1. Since A_{t_k,t_{k+1}}(ω₀, |ξ|) ≤
A⁰_{t_k,t_{k+1}}(ω₀),

    n_t ≥ n_{t_k} − ε⁻¹jδ − ε^{−1/2−γ} ≥ ε⁻¹(C − jδ*) − (2k + 1)ε^{−1/2−γ},        t ∈ [t_k, t_{k+1}]

which is strictly positive for any k ≤ T/δ if ε is small enough. Thus (3.7) holds and

    n_{t_{k+1}} ≥ n_{t_k} − A⁰_{t_k,t_{k+1}}(ω₀) + B⁰_{t_k,t_{k+1}}(ω₀) ≥ n_{t_k} − 2ε^{−1/2−γ}

because ω₀ ∈ G. This proves the induction hypothesis and, by what we have just seen, (3.7)
holds as well.

The variables A⁰_{t_k,t_{k+1}}(ω₀) and B⁰_{t_k,t_{k+1}}(ω₀), k ≤ T/δ, are independent Poisson variables
with mean ε⁻¹jδ, hence (3.8).
Once restricted to G the processes ξ_t^{(δ,±)}, 0 ≤ t ≤ ε⁻²T, become quite simple. The particles
move as independent random walks in the finitely many intervals [kε⁻²δ, (k + 1)ε⁻²δ], while
births and deaths at the times kε⁻²δ are "essentially deterministic", like in the δ-approximated
evolutions S_t^{(δ,±)} of Definition 2.6. Such considerations are made precise in Section 4, where
we prove convergence of ξ_t^{(δ,±)} to S_t^{(δ,±)}(ρ_init) in the hydrodynamic limit. There we exploit
duality to prove convergence, in a very strong form, of the independent system to the heat
equation.
4 Hydrodynamic limit for the approximating processes
The main result in this section is Theorem 4.1 below. It states that the δ-approximate
processes ξ_t^{(δ,±)} of Definition 3.2 converge in the hydrodynamic limit to the evolutions S_t^{(δ,±)}(·)
of Definition 2.6. For any fixed δ and T > 0, the processes ξ_t^{(δ,±)}, t ≤ ε⁻²T, are obtained by
alternating independent random walk evolutions with cut and paste operations. The latter
involve macroscopic quantities and can be controlled by the law of large numbers once we
have the hydrodynamic limit for the independent process. This is well studied and very
detailed estimates are available, but in the present case we have the extra difficulty that the
initial configurations may have a macroscopic occupation number at the origin, ξ(0) ≈ ε⁻¹.
This is because in the cut and paste we actually paste ≈ jδε⁻¹ particles at the origin.
This is not a case studied in the literature (as far as we know) and indeed it greatly affects
the decay of correlations in the hydrodynamic limit.

As in our iterative procedure we have initial data with macroscopic occupation at the
origin, we may as well take more general initial conditions (than those in Definition 2.3) with
macroscopic occupation at the origin; this will actually be useful in the sequel. Thus the
"macroscopic initial profile v₀" is here taken in U, namely it is the sum of a non negative
L∞ function plus cD₀, with c either equal to 0 or to jδ; we suppose that ∫ v₀ = F(0; v₀) > 0.
Analogously to (2.3), for any ε > 0 we choose the initial configuration ξ₀ so that

    max_{x∈[0,ε⁻¹−ℓ+1]} |A_ℓ(x, ξ₀) − A⁰_ℓ(x, v₀)| ≤ ε^a                                  (4.1)
Theorem 4.1. Given any T > 0, for any δ > 0 small enough, any k : kδ ≤ T and any ζ > 0

    lim_{ε→0} P_{ξ₀}^{(ε)} [ max_{x∈[0,ε⁻¹]} |εF(x; ξ_{kε⁻²δ}^{(δ,±)}) − F(εx; S_{kδ}^{(δ,±)}(v₀))| ≤ ζ ] = 1        (4.2)

where v₀ and ξ₀ are as above; P_{ξ₀}^{(ε)} is as in Definition 2.3; F as in (2.5).

The theorem is proved at the end of the section; as we shall see, stronger results actually
hold, but what is stated is what is needed for Theorem 2.1. In the course of the proof we shall
introduce several positive parameters, b, a, a*, γ: b should be close to 1 and the others close
to 0; for the sake of definiteness we take:

    a = γ = 1/20,       b = 9/10,       a* = 1/100                                          (4.3)
We prove the theorem only for the process ξ_t^{(δ,−)}; the analysis of ξ_t^{(δ,+)} is similar and omitted.
The first step is a spatial discretization of the flow S_{kδ}^{(δ,−)}:
Definition 4.1 (The discrete evolution). Denote by p_t⁰(x, y), t ≥ 0, x, y ∈ [0, ε⁻¹], the
transition probability of a continuous time, simple symmetric random walk with reflections
at 0 and ε⁻¹ (i.e. the random walker jumps by ±1 with equal probability after an exponential
time of mean 1, the jumps which would lead outside [0, ε⁻¹] being suppressed). For δ small
enough we define functions u_k(x), x ∈ [0, ε⁻¹] ∩ Z, with the property that mass is conserved:
F(0; u_k) = F(0; u₀) for all k. The definition is iterative: we set u₀(x) := v₀(εx); then,
supposing that u_{k−1} has already been defined and that F(0; u_{k−1}) = F(0; u₀), we define u_k
as follows. We first call

    u′_k(x) = Σ_y p(x, y) u_{k−1}(y),        p(x, y) := p⁰_{ε⁻²δ}(x, y)                      (4.4)

u_k is then obtained from u′_k by adding particles at 0 and removing particles on the right. To
make this precise let R_k be an integer such that F(R_k; u′_k) ≥ ε⁻¹jδ while F(R_k + 1; u′_k) <
ε⁻¹jδ; existence of R_k for δ small enough follows from the assumption F(0; u₀) ≥ cε⁻¹,
c > 0, observing that F(0; u′_k) = F(0; u_{k−1}) = F(0; u₀) by the inductive assumption and
F(0; u₀) = ε⁻¹F(0; v₀). We then set v_k(x) = u′_k(x) for x < R_k, v_k(x) = 0 for x > R_k and

    v_k(R_k) := F(R_k; u′_k) − ε⁻¹jδ

We can then finally define u_k as

    u_k := v_k + ε⁻¹jδ 1₀                                                                    (4.5)

where 1₀ is the Kronecker delta at 0. To complete the induction we observe that F(0; u_k) =
ε⁻¹jδ + F(0; v_k), F(0; v_k) = F(0; u′_k) − ε⁻¹jδ, so that F(0; u_k) = F(0; u′_k) = F(0; u_{k−1}).
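A grid implementation of this discrete evolution is straightforward; the sketch below (ours, not from the paper) builds the reflected random-walk kernel p⁰_{ε⁻²δ} by exponentiating the generator and then performs the cut/paste step of Definition 4.1. The parameter `mass_to_move` plays the role of ε⁻¹jδ.

```python
import numpy as np
from scipy.linalg import expm

def reflected_rw_kernel(N, t):
    """p_t^0(x, y) for the continuous-time simple symmetric random walk on [0, N] with reflecting ends."""
    Q = np.zeros((N + 1, N + 1))
    for x in range(N + 1):
        for y in (x - 1, x + 1):
            if 0 <= y <= N:
                Q[x, y] = 0.5          # total jump rate 1, split between the two neighbours
        Q[x, x] = -Q[x].sum()          # jumps leading outside [0, N] are suppressed
    return expm(t * Q)

def discrete_evolution_step(u, p, mass_to_move):
    """One step u_{k-1} -> u_k of Definition 4.1: diffuse, cut mass ε⁻¹jδ from the right, paste it at 0."""
    u_prime = p @ u                    # u'_k(x) = Σ_y p(x, y) u_{k-1}(y)
    v = u_prime.copy()
    left = mass_to_move
    for x in range(len(v) - 1, -1, -1):
        if v[x] >= left:
            v[x] -= left               # leftover at the cut site is F(R_k; u'_k) − ε⁻¹jδ
            break
        left -= v[x]
        v[x] = 0.0
    v[0] += mass_to_move               # u_k = v_k + ε⁻¹jδ 1_0
    return v
```

Iterating `discrete_evolution_step` from u₀(x) = v₀(εx) produces the sequence u_k; total mass is conserved at every step, as required by the definition.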
In the next proposition we show that in (4.2) we can replace S_{kδ}^{(δ,−)}(v₀) by the sequence
u_k with a negligible error:

Proposition 4.2. In the same context as in Theorem 4.1,

    lim_{ε→0} max_{x∈[0,ε⁻¹]} |εF(x; u_k) − F(εx; S_{kδ}^{(δ,−)}(v₀))| = 0                    (4.6)
Proof. In this proof we write g(r, r′) as a shorthand for the Green function G_δ^{neum}(r, r′), r, r′ ∈ [0, 1],
defined in (2.13), and also write for brevity p(x, y) := p⁰_{ε⁻²δ}(x, y), as in Definition 4.1. Let u_k,
u′_k and R_k be as in Definition 4.1. We define, for any real r between 0 and ε⁻¹,

    ψ_k(r) := [S_{kδ}^{(δ,−)}(v₀) − jδD₀](εr)

Analogously to (1.6), we denote by R′_k the real number in [0, ε⁻¹] such that ψ_k(r) > 0 for
r < R′_k and ψ_k(r) = 0 for r > R′_k. We also call

    ψ′_k(r) = jδ g(εr, 0) + ∫_0^1 dr′ g(εr, r′) ψ_{k−1}(ε⁻¹r′)

so that

    ψ′_k(r) = ψ_k(r) for r < R′_k;          ∫_{R′_k}^{ε⁻¹} ψ′_k(r) dr = ε⁻¹jδ
Claim. There are strictly positive constants C_± which depend on δ so that for all k,

    C_− ≤ ψ′_k ≤ C_+,       C_− ≤ u′_k ≤ C_+;       |d ψ′_k/dr| ≤ C_+ ε

    | Σ_{x∈Z: x∈[R′_k, ε⁻¹]} ψ′_k(x) − ε⁻¹jδ | ≤ C_+,       | F(x; ψ′_k) − ∫_x^{ε⁻¹} ψ′_k | ≤ C_+          (4.7)

The proof of the claim follows from classical estimates on random walks and Green functions:

    c₁/√δ ≤ g(r, r′) ≤ c₂/√δ;          c₁ ε/√δ ≤ p(x, y) ≤ c₂ ε/√δ
                                                                                                            (4.8)
    |d g(r, r′)/dr| ≤ c₃/δ;            |p(x, y) − p(x, y + 1)| ≤ c₃ (ε/√δ)²

together with the mass conservation  jδ + ε ∫_0^{ε⁻¹} ψ_k = F(0; v₀).

The crucial step in the proof of the proposition is the following statement:

    there are α > β > 1 so that   |u′_k(x) − ψ′_k(x)| ≤ ε α^k/√δ,        |R_k − R′_k| ≤ β α^k              (4.9)
We prove (4.9) by induction; we thus suppose that it holds for k − 1. Calling R*_{k−1} the largest
integer smaller than or equal to both R_{k−1} and R′_{k−1},

    |u′_k(x) − ψ′_k(x)| ≤ jδ |ε⁻¹p(x, 0) − g(εx, 0)|
        + | Σ_{y≤R*_{k−1}} [ p(x, y) u_{k−1}(y) − ∫_{εy}^{ε(y+1)} g(εx, r) ψ′_{k−1}(ε⁻¹r) dr ] |
        + Σ_{y=R*_{k−1}+1}^{R_{k−1}} p(x, y) u′_{k−1}(y)
        + ∫_{εR*_{k−1}}^{εR′_{k−1}} g(εx, r) ψ′_{k−1}(ε⁻¹r) dr
We use the local central limit theorem to bound

    |p(x, y) − ε g(εx, εy)| ≤ c₅ ε²/δ

Thus

    |u′_k(x) − ψ′_k(x)| ≤ jδ c₅ ε/δ + max_x |u′_{k−1}(x) − ψ′_{k−1}(x)|
        + | Σ_{y≤R*_{k−1}} [ p(x, y) ψ_{k−1}(y) − ∫_{εy}^{ε(y+1)} g(εx, r) ψ_{k−1}(ε⁻¹r) dr ] |
        + 2 (c₂ε/√δ) |R_{k−1} − R′_{k−1}| C_+                                                              (4.10)

We write

    | ∫_{εy}^{ε(y+1)} g(εx, r) ψ′_{k−1}(ε⁻¹r) dr − ε g(εx, εy) ψ′_{k−1}(y) | ≤ c₆ ε²

and get, using the induction assumption,

    |u′_k(x) − ψ′_k(x)| ≤ j c₅ ε + (ε/√δ) α^{k−1} + 2 (c₂ε/√δ) β α^{k−1} C_+ + ε⁻¹ { c₆ ε² + C_+ c₅ ε²/δ }

Choosing α ≥ 1 + j√δ c₅ + 2c₂C_+β + √δ { c₆ + C_+ c₅/δ }, we have

    |u′_k(x) − ψ′_k(x)| ≤ (ε/√δ) α^{k−1} [ j√δ c₅ + 1 + 2c₂C_+β + √δ { c₆ + C_+ c₅/δ } ] ≤ (ε/√δ) α^k
As a consequence:

    |F(x; u′_k) − F(x; ψ′_k)| ≤ (ε⁻¹ − x + 1) (ε/√δ) α^k                                                    (4.11)

Recalling that ψ′_k(x) ≥ C_− and u′_k(x) ≥ C_−, we get

    |F(R′_k; u′_k) − jδε⁻¹| ≤ C_+ + (ε⁻¹ − R′_k + 1)(ε/√δ) α^k
    |F(R′_k; u′_k) − F(R_k; u′_k)| ≤ 2C_+ + α^k/√δ
    C_− |R′_k − R_k| ≤ |F(R′_k; u′_k) − F(R_k; u′_k)| ≤ 2C_+ + α^k/√δ

so that |R′_k − R_k| ≤ β α^k provided β ≥ C_−⁻¹(2C_+ + δ^{−1/2}), thus completing the proof of (4.9).
Using (4.11) we then conclude the proof of the proposition; details are omitted.
The proof of Theorem 4.1 is thus reduced to showing that

    lim_{ε→0} P_ξ^{(ε)} [ ε |F(x; ξ_{nε⁻²δ}^{(δ,−)}) − F(x; u_n)| ≤ ζ  for all x ∈ [0, ε⁻¹] ] = 1            (4.12)

which will be done in the sequel. Both sequences {ξ_{nε⁻²δ}^{(δ,−)}} and {u_n} are determined by
alternating free evolution and a cut and paste procedure. We first study the free evolution
part, proving that the independent random walk configuration ξ⁰_{ε⁻²δ} is well approximated by
its average. Call P_ξ and E_ξ the law and expectation of the independent process starting from ξ,
and define for x ∈ [0, ε⁻¹]

    w(x|ξ) := E_ξ[ ξ⁰_{ε⁻²δ}(x) ] = Σ_{y=0}^{ε⁻¹} p(x, y) ξ(y),        p(x, y) := p⁰_{ε⁻²δ}(x, y)            (4.13)

with p_t⁰ the transition probability used in Definition 4.1.
Proposition 4.3. Let c* and a* be strictly positive and

    X_{c*,a*} := { ξ : |ξ| ≤ c*ε⁻¹,   max_{x≠0} ξ(x) ≤ ε^{−a*} }                                             (4.14)

Then for any ξ ∈ X_{c*,a*}

    max_{x∈[0,ε⁻¹]} w(x|ξ) ≤ c₂ c*/√δ                                                                        (4.15)

(c₂ as in (4.8)). Moreover let c*, a* and b be strictly positive and such that

    a* < b/2,        b + a* < 1                                                                              (4.16)

(a condition which is satisfied by the choice (4.3)). Let ℓ be the integer part of ε^{−b} and A_ℓ be
as in (2.1); then for any integer n there is c′_n so that

    P_ξ[ ξ⁰_{ε⁻²δ} ∈ X_{c*,a*} ] ≥ 1 − c′_n ε^n                                                              (4.17)

Finally there is a constant c so that

    sup_{x≤ε⁻¹−ℓ+1} E_ξ[ ( A_ℓ(x, ξ⁰_{ε⁻²δ}) − A_ℓ(x, w(·|ξ)) )⁴ ] ≤ c ε^{2b}                                (4.18)
Proof. For brevity, in this proof we shall write w(x) instead of w(x|ξ). Recalling that p(x, y)
is defined in (4.4) and bounded in (4.8), we have for any ξ ∈ X_{c*,a*}

    w(x) = Σ_y p(x, y) ξ(y) ≤ (c₂ε/√δ) Σ_y ξ(y) ≤ (c₂/√δ) c*                                                 (4.19)

hence (4.15). The proof of (4.17) and (4.18) uses duality in a crucial way:

Duality. Given ξ ∈ N^{[0,ε⁻¹]} and a labeled configuration x = (x₁, .., x_n), n ≥ 1, x_i ∈ [0, ε⁻¹],
we define

    D(ξ, x) = Π_x d_{x(x)}(ξ(x)),      d_k(m) = m(m − 1) ⋯ (m − k + 1),  d₀(m) = 1,      x(x) = Σ_{i=1}^n 1_{x_i=x}      (4.20)

The d_k(m) are called Poisson polynomials. We then have:

    E_ξ[ D(ξ_t⁰, x) ] = E_x[ D(ξ, x_t⁰) ]                                                                    (4.21)

where x_t⁰ is the independent random walks evolution.
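The duality relation (4.21) is easy to test numerically; the following sketch (ours, with an illustrative small system and sample size) estimates both sides of (4.21) by Monte Carlo for independent reflected walks.

```python
import random

def rw_step_config(xi, dt_total, rng):
    """Evolve an occupation-number configuration xi of independent reflected walks for time dt_total."""
    N = len(xi) - 1
    pos = [x for x in range(N + 1) for _ in range(xi[x])]   # particle positions
    t = 0.0
    while pos:
        t += rng.expovariate(len(pos))                      # each particle jumps at rate 1
        if t > dt_total:
            break
        i = rng.randrange(len(pos))
        y = pos[i] + rng.choice((-1, 1))
        if 0 <= y <= N:                                     # reflection: jumps outside [0, N] suppressed
            pos[i] = y
    out = [0] * (N + 1)
    for x in pos:
        out[x] += 1
    return out

def D(xi, labels):
    """Duality function D(ξ, x) = Π_x d_{x(x)}(ξ(x)), a product of falling factorials."""
    val = 1
    for site in set(labels):
        k, m = labels.count(site), xi[site]
        for i in range(k):
            val *= (m - i)
    return val

def check_duality(xi0, x0, t, n_samples=20000, seed=0):
    rng = random.Random(seed)
    lhs = sum(D(rw_step_config(xi0, t, rng), x0) for _ in range(n_samples)) / n_samples
    rhs = 0.0
    for _ in range(n_samples):                              # the dual particles x0 perform the same walks
        xt = rw_step_config([x0.count(s) for s in range(len(xi0))], t, rng)
        labels = [s for s in range(len(xi0)) for _ in range(xt[s])]
        rhs += D(xi0, labels)
    return lhs, rhs / n_samples   # the two estimates agree within Monte Carlo error
```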
• Proof of (4.17). Call x = (x₁, .., x_{2k}) with x_i = x for all i = 1, .., 2k. Then by (4.21) and
(4.19)

    E_ξ[ d_{2k}(ξ⁰_{ε⁻²δ}(x)) ] = E_x[ Π_x d_{x⁰_{ε⁻²δ}(x)}(ξ(x)) ] ≤ E_x[ Π_x ξ(x)^{x⁰_{ε⁻²δ}(x)} ]
        = [ Σ_y p(x, y) ξ(y) ]^{2k} ≤ ( c₂ ε|ξ| /√δ )^{2k} ≤ ( c₂ c*/√δ )^{2k}                                (4.22)

By (4.22) we have that for any k there is c″_k (independent of ε) so that

    max_{x∈[0,ε⁻¹]} E_ξ[ ξ⁰_{ε⁻²δ}(x)^k ] ≤ c″_k                                                              (4.23)

Moreover, by the Chebishev inequality and (4.22),

    P_ξ[ max_{x∈[0,ε⁻¹]} ξ⁰_{ε⁻²δ}(x) ≤ ε^{−a*} ] ≥ 1 − c′_m ε^m                                              (4.24)

which proves (4.17) because |ξ⁰_{ε⁻²δ}| = |ξ| ≤ ε⁻¹c*.
To prove (4.18) we shall again use duality, but also several perhaps not totally straightforward
algebraic manipulations. We start by expanding the product in the expectation:

    E_ξ[ ( A_ℓ(x, ξ⁰_{ε⁻²δ}) − A_ℓ(x, w) )⁴ ] = (1/ℓ⁴) Σ_{x∈B_ℓ} E_ξ[ Π_{i=1}^4 ( ξ⁰_{ε⁻²δ}(x_i) − w(x_i) ) ]        (4.25)

where x ∈ [0, ε⁻¹ − ℓ + 1] and B_ℓ = {x = (x₁, ..., x₄) : x_i ∈ [x, x + ℓ − 1], i = 1, .., 4}.
Call B_ℓ^{(i)}, i = 1, 2, 3, 4, the set of x ∈ B_ℓ with exactly i mutually distinct sites. We
then have for i ≤ 2:

    (1/ℓ⁴) Σ_{x∈B_ℓ^{(i)}} | E_ξ[ Π_{i=1}^4 ( ξ⁰_{ε⁻²δ}(x_i) − w(x_i) ) ] | ≤ c ℓ^{−2}                               (4.26)

as the expectation of products of ξ⁰_{ε⁻²δ}(·) is bounded, which is proved using (4.23).
We are thus left with the sum over x ∈ B_ℓ^{(i)} with i = 3, 4. When i = 4, x = (x₁, .., x₄)
has mutually distinct entries. Call σ = (σ₁, .., σ₄), σ_i ∈ {−1, 1}, and |σ|_− the number of
−1's in σ; then

    Π_{i=1}^4 ( ξ⁰_{ε⁻²δ}(x_i) − w(x_i) ) = Σ_σ (−1)^{|σ|_−} D( ξ⁰_{ε⁻²δ}; {x_i : σ_i = 1} ) Π_{j:σ_j=−1} w(x_j)        (4.27)

and, using duality,

    E_ξ[ Π_{i=1}^4 ( ξ⁰_{ε⁻²δ}(x_i) − w(x_i) ) ] = Σ_y Π_i p(x_i, y_i) Σ_σ (−1)^{|σ|_−} D( ξ; {y_i : σ_i = 1} ) Π( ξ; {y_j : σ_j = −1} )

    Π( ξ; {y_j : σ_j = −1} ) := Π_{j:σ_j=−1} ξ(y_j)                                                                     (4.28)

Suppose there is a singleton h, namely an index such that y_h ≠ y_j for all j ≠ h; then

    Σ_σ (−1)^{|σ|_−} D( ξ; {y_i : σ_i = 1} ) Π( ξ; {y_j : σ_j = −1} ) = 0                                               (4.29)

Indeed, let σ be a sequence with σ_h = 1 and σ′ the one obtained from σ by changing only σ_h;
then

    (−1)^{|σ|_−} D( ξ; {y_i : σ_i = 1} ) Π( ξ; {y_j : σ_j = −1} )
        = (−1)^{|σ|_−} D( ξ; {y_i : σ_i = 1, i ≠ h} ) Π( ξ; {y_h, y_j : σ_j = −1} )
        = −(−1)^{|σ′|_−} D( ξ; {y_i : σ′_i = 1} ) Π( ξ; {y_j : σ′_j = −1} )

We have thus proved that, calling X_{n.s.} the set of all y with no singletons,

    E_ξ[ Π_{i=1}^4 ( ξ⁰_{ε⁻²δ}(x_i) − w(x_i) ) ] = Φ₄(x)                                                                 (4.30)

    Φ₄(x) = Σ_{y∈X_{n.s.}} Π_i p(x_i, y_i) Σ_σ (−1)^{|σ|_−} D( ξ; {y_i : σ_i = 1} ) Π( ξ; {y_j : σ_j = −1} )
A similar property holds also when x ∈ B_ℓ^{(3,*)}, which is the set of all x such that x₁ = x₂,
x₃ ≠ x₄, x₃ ≠ x₁ and x₄ ≠ x₁ (modulo permutation of labels all x ∈ B_ℓ^{(3)} are in B_ℓ^{(3,*)}). We write

    ( ξ(x) − w(x) )² = { ξ(x)[ξ(x) − 1] − 2w(x)ξ(x) + w(x)² } + { ξ(x) − w(x) } + w(x)

Then, analogously to (4.27) but with x ∈ B_ℓ^{(3,*)},

    Π_{i=1}^4 ( ξ⁰_{ε⁻²δ}(x_i) − w(x_i) ) = Σ_{σ∈{−1,1}⁴} (−1)^{|σ|_−} D( ξ⁰_{ε⁻²δ}; {x_i, σ_i = 1} ) Π_{j:σ_j=−1} w(x_j)
        + Σ_{σ=(σ₂,σ₃,σ₄)} (−1)^{|σ|_−} D( ξ⁰_{ε⁻²δ}; {x_i, σ_i = 1, i ≥ 2} ) Π_{j≥2:σ_j=−1} w(x_j)
        + w(x₁) Σ_{σ=(σ₃,σ₄)} (−1)^{|σ|_−} D( ξ⁰_{ε⁻²δ}; {x_i, σ_i = 1, i ≥ 3} ) Π_{j≥3:σ_j=−1} w(x_j)                   (4.31)

    E_ξ[ Π_{i=1}^4 ( ξ⁰_{ε⁻²δ}(x_i) − w(x_i) ) ] = Φ₄(x) + Φ₃(x₂, x₃, x₄) + w(x₁) Φ₂(x₃, x₄)                             (4.32)

where

    Φ₃(x₂, x₃, x₄) = Σ_{(y₂,y₃,y₄)∈X_{n.s.}} Π_{i=2}^4 p(x_i, y_i) Σ_{σ₂,σ₃,σ₄} (−1)^{|σ|_−} D( ξ; {y_i, σ_i = 1, i ≥ 2} ) Π( ξ; j ≥ 2 : σ_j = −1 )

    Φ₂(x₃, x₄) = Σ_{(y₃,y₄)∈X_{n.s.}} Π_{i=3}^4 p(x_i, y_i) Σ_{σ=(σ₃,σ₄)} (−1)^{|σ|_−} D( ξ; {y_i, σ_i = 1, i ≥ 3} ) Π( ξ; j ≥ 3 : σ_j = −1 )

with Φ₄(x) as in (4.30).
Going back to (4.25), using (4.26) and (4.15),

    E_ξ[ ( A_ℓ(x, ξ⁰_{ε⁻²δ}) − A_ℓ(x, w) )⁴ ] ≤ c/ℓ² + max_{x∈B_ℓ^{(4)}} |Φ₄(x)|
        + (6/ℓ) [ max_{x∈B_ℓ^{(3,*)}} |Φ₄(x)| + max_{(x₂,x₃,x₄) distinct} |Φ₃(x₂, x₃, x₄)|
        + (c₂c*/√δ) max_{(x₃,x₄) distinct} |Φ₂(x₃, x₄)| ]                                                                (4.33)

Let us bound the functions Φ_i one by one, starting from Φ₄. Recalling (4.30), the condition
y ∈ X_{n.s.} is realized (modulo label permutations) in only two cases: (i) y₁ = y₂ ≠ y₃ = y₄;
(ii) y₁ = ... = y₄. In these cases

    Σ_σ (−1)^{|σ|_−} D( ξ; {y_i : σ_i = 1} ) Π( ξ; {y_j : σ_j = −1} ) = ξ(y₁)ξ(y₃)          in case (i)
                                                                      = 3ξ(y₁)² − 6ξ(y₁)    in case (ii)               (4.34)

so that, from (4.8) and since ξ ∈ X_{c*,a*},

    |Φ₄(x)| ≤ (c₂ε/√δ)⁴ [ 6(c*ε⁻¹)² + 3(c*ε⁻¹)² ] ≤ c ε²                                                                (4.35)

The condition (y₂, y₃, y₄) ∈ X_{n.s.} in Φ₃ implies y₂ = y₃ = y₄, and for such a y

    Σ_{σ=(σ₂,σ₃,σ₄)} (−1)^{|σ|_−} D( ξ; {y_i, σ_i = 1, i ≥ 2} ) Π( ξ; j ≥ 2 : σ_j = −1 ) = 2ξ(y₂)                        (4.36)

so that, from (4.8) and since ξ ∈ X_{c*,a*},

    |Φ₃(x₂, x₃, x₄)| ≤ (c₂ε/√δ)³ 2(c*ε⁻¹) ≤ c ε²                                                                        (4.37)

Finally, if (y₃, y₄) ∈ X_{n.s.} then y₃ = y₄ and for such a y

    Σ_{σ=(σ₃,σ₄)} (−1)^{|σ|_−} D( ξ; {y_i, σ_i = 1, i ≥ 3} ) Π( ξ; j ≥ 3 : σ_j = −1 ) = −ξ(y₃) ≤ 0                       (4.38)

Thus (4.18) follows from (4.33) together with the above inequalities.
The cut and paste sequence of operations which appears in the definition of {ξ_{t_k}^{(δ,−)}, k ≤ k*},
k* the largest integer such that δk* ≤ T, t_k = kε⁻²δ, is independent of the motion of the
particles, so that we have a rather explicit expression for the law of the variables {ξ_{t_k}^{(δ,−)},
k ≤ k*}, see (4.42) below. We first write (with ξ₀ below the initial condition in Theorem 4.1)

    p({n_k^±, k = 1, .., k*}) = P_{ξ₀}^{(ε)}[ A_{t_{k−1},t_k} = n_k^−, B_{t_{k−1},t_k} = n_k^+, k ≤ k* ],     t_k := kε⁻²δ       (4.39)

where A_{t,t′}, B_{t,t′} are the increments of the particle number process n_t with n₀ = |ξ₀|; they
are defined in (3.3) and (3.2) and their law depends only on |ξ₀|.
We also write

    π(ξ′|ξ) = P_ξ[ ξ⁰_{ε⁻²δ} = ξ′ ],         |ξ| = |ξ′| a.s.                                                     (4.40)

(ξ_t⁰ the independent random walk process). We finally denote by K^{(n⁻,n⁺)}ξ the configuration
obtained from ξ by adding n⁺ particles at 0 and then removing the n⁻ rightmost particles
(the definition requires that |ξ| + n⁺ − n⁻ ≥ 0, a condition automatically satisfied below as the
variables n^± are the increments of the particle number n_t). Then, writing

    P[ {n_k^±, ξ′_k, k = 1, .., k*} ] = p({n_k^±, k ≤ k*}) Π_{k=1}^{k*} π(ξ′_k | ξ_{k−1}),       ξ_k := K^{(n_k^−, n_k^+)} ξ′_k

with n_0^± := 0, we have

    P_{ξ₀}^{(ε)}[ ξ^{(δ,−)}_{kε⁻²δ} = ξ̄_k, k = 1, .., k* ]                                                       (4.41)

        = Σ_{n_k^±, ξ′_k, k=1,..,k*}  1_{ξ_k = ξ̄_k, k=1,..,k*}  P[ {n_k^±, ξ′_k, k ≤ k*} ]                       (4.42)
By (3.8), for any n there is c_n so that

    Σ_{{n_k^±, k=1,..,k*} ∈ G}  p({n_k^±, k ≤ k*}) ≥ 1 − c_n ε^n
                                                                                                                  (4.43)
    G := { n_k^±, k = 1, .., k* :  |n_k^± − ε⁻¹jδ| ≤ ε^{−1/2−γ} }

The strategy now is to fix {n_k^±, k = 1, .., k*} ∈ G and prove estimates uniform in the choice
of {n_k^±, k = 1, .., k*}, as the contribution to (4.2) of the complement of G has negligible
probability. We have

    max_{k=1,..,k*} |ξ^{(δ,−)}_{kε⁻²δ}| ≤ c̄ ε⁻¹,        for all {n_k^±, k = 1, .., k*} ∈ G                        (4.44)

where c̄ ε⁻¹ ≥ |ξ₀| + 2k* ε^{−1/2−γ}.
Recalling (4.41) for notation and that w is defined in (4.13), having fixed {n_k^±, k =
1, .., k*} ∈ G, see (4.43), with n_0^± ≡ 0, we call

    C = { ξ′_k, k = 1, .., k* :   max_{k=1,..,k*} max_x |A_ℓ(x, ξ′_k) − A_ℓ(x, w(·|ξ_{k−1}))| ≤ ε^a ;
                                   max_{k=1,..,k*} ‖ξ′_k‖_∞ ≤ ε^{−a*} }                                           (4.45)

Then by Proposition 4.3 and (4.43), after using Chebishev with the fourth power,

    P[ {n_k^±, ξ′_k, k = 1, .., k*} ∈ G ∩ C ] ≥ 1 − c ε^{−1−4a+2b} = 1 − c ε^{6/10}                                (4.46)
The proof of (4.12) continues by showing that on the set G ∩ C, ξ_k (as defined in (4.41)) is
"close" to u_k (as in Definition 4.1). More precisely, call X_k and R_k the integers such that

    F(X_k + 1; ξ′_k) < n_k^+ ≤ F(X_k; ξ′_k);         F(R_k + 1; u′_k) < ε⁻¹jδ ≤ F(R_k; u′_k)

(see again Definition 4.1 for notation). Then the analogue of (4.9) holds:

Proposition 4.4. There are α > β > 1 so that if {n_k^±, ξ′_k, k = 1, .., k*} ∈ G ∩ C then for all
k = 1, .., k*

    max_x |A_ℓ(x, ξ′_k) − A_ℓ(x, u′_k)| ≤ α^k ε^a,         |X_k − R_k| ≤ β α^k ε^{−1+a}                             (4.47)
Proof. By (4.45),

    |A_ℓ(x, ξ′_k) − A_ℓ(x, u′_k)| ≤ ε^a + |A_ℓ(x, u′_k) − A_ℓ(x, w(·|ξ_{k−1}))|                                      (4.48)

Supposing for instance that R_{k−1} ≤ X_{k−1}, we get

    |w(x|ξ_{k−1}) − u′_k(x)| = | Σ_y p(x, y)[ξ_{k−1}(y) − u_{k−1}(y)] |
        ≤ p(x, 0)|n_{k−1}^+ − ε⁻¹jδ| + | Σ_{y<R_{k−1}} p(x, y)[ξ′_{k−1}(y) − u′_{k−1}(y)] |
        + p(x, R_{k−1})[ξ′_{k−1}(R_{k−1}) + u′_{k−1}(R_{k−1})] + Σ_{R_{k−1}<y≤X_{k−1}} p(x, y) ξ′_{k−1}(y)            (4.49)
By (4.8),

    p(x, 0)|n_{k−1}^+ − ε⁻¹jδ| ≤ (c₂/√δ) ε^{1/2−γ}

We decompose the interval [1, R_{k−1} − 1] into consecutive intervals [z_i, z′_i] of length ℓ, the
last interval possibly of length < ℓ, and get, using (4.8),

    | Σ_{0<y<R_{k−1}} p(x, y)[ξ′_{k−1}(y) − u′_{k−1}(y)] |
        ≤ Σ_i { p(x, z_i) ℓ α^{k−1}ε^a + Σ_{z_i≤y≤z′_i} |p(x, z_i) − p(x, y)| 2ε^{−a*} } + (c₂ε/√δ) ℓ 2ε^{−a*}
        ≤ (c₂/√δ) α^{k−1}ε^a + c ε^{1−b−a*}

We also have

    p(x, R_{k−1})[ξ′_{k−1}(R_{k−1}) + u′_{k−1}(R_{k−1})] ≤ (c₂ε/√δ) 2ε^{−a*}

By (4.8) and (4.15), and decomposing as before the interval [R_{k−1} + 1, X_{k−1}] into consecutive
intervals of length ℓ,

    Σ_{R_{k−1}<y≤X_{k−1}} p(x, y) ξ′_{k−1}(y) ≤ Σ_{R_{k−1}<y≤X_{k−1}} p(x, y) w(y|ξ_{k−2})
        + | Σ_{R_{k−1}<y≤X_{k−1}} p(x, y)[ξ′_{k−1}(y) − w(y|ξ_{k−2})] |
        ≤ (c₂ε/√δ)(c₂c*/√δ)|X_{k−1} − R_{k−1}| + (c₂ε/√δ)|X_{k−1} − R_{k−1}| α^{k−1}ε^a
          + c₃ (ε/√δ)² 2ε^{−a*} + (c₂/√δ) ε^{2−b−a*}
        ≤ c ε |X_{k−1} − R_{k−1}| + c ε^{1−b−a*}

By collecting the above bounds and using the induction hypothesis:
    |w(x|ξ_{k−1}) − u′_k(x)| ≤ (c₂/√δ) ε^{1/2−γ} + (c₂/√δ) α^{k−1}ε^a + 2c ε^{1−b−a*} + (c₂/√δ) 2ε^{1−a*} + cβ α^{k−1}ε^a
        ≤ α^{k−1}ε^a [ (c₂/√δ) ε^{1/2−γ−a} + { (c₂/√δ) + cβ } ] + 2c ε^{1−b−a*−a} + (c₂/√δ) 2ε^{1−a*−a}
        ≤ α^{k−1}ε^a { (c₂/√δ) + cβ } + C ε^{a′}

where a′ = min{ 1/2 − γ − a, 1 − b − a* − a, 1 − a* − a } > 0. Hence

    |A_ℓ(x, ξ′_k) − A_ℓ(x, u′_k)| ≤ [ 1 + α^{k−1}{ (c₂/√δ) + cβ } + C ε^{a′} ] ε^a

For ε small enough C ε^{a′} ≤ 1, so

    |A_ℓ(x, ξ′_k) − A_ℓ(x, u′_k)| ≤ α^k ε^a,           α = 2 + { (c₂/√δ) + cβ }                                        (4.50)

By (4.50),

    |F(x; ξ′_k) − F(x; u′_k)| ≤ (ε⁻¹ − x) α^k ε^a + 2ε^{−b−a*}                                                         (4.51)

hence, recalling (4.7),

    |F(R_k; u′_k) − jδε⁻¹| ≤ C_+
    |F(X_k; ξ′_k) − jδε⁻¹| ≤ ε^{−a} + ε^{−1/2−γ} ≤ 2ε^{−1/2−γ}
    |F(X_k; u′_k) − jδε⁻¹| ≤ 2ε^{−1/2−γ} + |F(X_k; u′_k) − F(X_k; ξ′_k)| ≤ 2ε^{−1/2−γ} + ε^{−1+a}α^k + 2ε^{−b−a*}
    C_− |R_k − X_k| ≤ |F(R_k; u′_k) − F(X_k; u′_k)| ≤ C_+ + 2ε^{−1/2−γ} + ε^{−1+a}α^k + 2ε^{−b−a*}

which proves (4.47) with β = C_−⁻¹(5 + C_+).
Proof of Theorem 4.1. We need to prove (4.12). By (4.46) we can reduce to configurations
in G ∩ C, and we want to prove that on such a set

    lim_{ε→0} max_{x∈[0,ε⁻¹]} ε |F(x; ξ_k) − F(x; u_k)| = 0                                                            (4.52)

Let us suppose for the sake of definiteness that R_k ≤ X_k. Then for x ≤ R_k

    |F(x; ξ_k) − F(x; u_k)| ≤ | Σ_{y=x}^{R_k−1} (ξ′_k − u′_k) | + Σ_{y=R_k}^{X_k} ξ′_k + u′_k(R_k)

Calling R̄_k ≤ R_k the largest integer such that R̄_k − x is an integer multiple of ℓ, we get from
(4.47):

    | Σ_{y=x}^{R_k−1} (ξ′_k − u′_k) | ≤ (R̄_k − x) α^k ε^a + 2ε^{−a*−b} ≤ α^k ε^{−1+1/20} + 2ε^{−1+1/10−1/100}

Call X̄_k the smallest integer ≥ X_k such that X̄_k − R_k is an integer multiple of ℓ; then

    Σ_{y=R_k}^{X_k} ξ′_k ≤ |X̄_k − R_k| ( c₂c*/√δ + ε^a ) + ℓ ε^{−a*} ≤ c { ε^{−1+a} + ε^{−b−a*} } ≤ 2c ε^{−1+1/20}

Analogous bounds hold for x > R_k, and (4.52) then follows.
5 Realization of the process
Following [12] we introduce a graphical construction of the process. It is also convenient to
enlarge the physical space [0, ε⁻¹] by adding two extra sites {−1, ε⁻¹ + 1}, so that configurations
ξ are functions on [−1, ε⁻¹ + 1]. We also suppose that ξ(−1) = ∞, while ξ(x) is finite for all
x ∈ [0, ε⁻¹ + 1]. The space of such ξ's is called X. Physical configurations are recovered by
restricting ξ to [0, ε⁻¹]. We also need to label the particles.
Definition 5.1 (Ordered configurations in the enlarged space). We denote by X^ord the space
of ordered sequences x = (x₁, x₂, .., x_n, ..), x_i ≥ x_{i+1}, with values in [−1, ε⁻¹ + 1], such that
there are finitely many entries with x_i ≥ 0. To each x we associate the configuration ξ_x ∈ X

    ξ_x(x) = Σ_{i≥1} 1_{x_i=x}  for all x ∈ [0, ε⁻¹ + 1],        ξ_x(−1) = ∞                  (5.1)
Figure 1: An example of two particle configurations (ξ, ξ′) related by the inequality ξ ≤ ξ′ for N = 10.
Note that for the sites x ∈ {1, 2, 8, 9} one has ξ(x) > ξ′(x). However the interface of ξ is below the
interface of ξ′ for all x ∈ [0, 10].
Vice versa, given any ξ ∈ X we define x_ξ by labelling the particles of ξ consecutively starting
from the right.
The physically relevant quantities are the unlabeled configurations and we are therefore free
to choose the labels as we like.
Definition 5.2 (The probability space (Ω, P)). We set

    Ω = Π_{i≥0} Ω_i,        P = Π_{i≥0} P_i

where Ω_i = {ω_i = (t_i, σ_i)}, t_i = (t_{1;i}, t_{2;i}, ...) are infinite sequences of increasing positive
"times" t_{k;i} and σ_i = (σ_{1;i}, σ_{2;i}, ...) infinite sequences of symmetric "jumps", σ_{k;i} = ±1. For
i ≥ 1, P_i is the product probability law of a Poisson process of intensity 1 for the time sequence
t_i and of a Bernoulli process with parameter 1/2 for the jump sequence σ_i. (Ω₀, P₀) is the
probability space introduced in Definition 3.1.
To each label i we attach a time axis R₊, and an element ω_i ∈ Ω_i is graphically represented
as a sequence of arrows on the time axis starting at t_{k;i} and pointing to the right or left if
σ_{k;i} = ±1 respectively. As we shall see, the arrows indicate times and directions of the jumps
of particle i. The elements in ω₀ are represented as + or − crosses on the time axis R₊,
placed at the times t_{k;0}, with ± being the value of σ_{k;0}. At a minus cross a particle
dies (it moves to ε⁻¹ + 1), at a plus cross a new particle is born (it moves from −1 to 0). The
definitions below are tacitly meant to hold with P-probability 1, P the probability introduced
in Definition 5.2.

We are going to define, for almost all ω ∈ Ω, the flows T_t and T_t^{(δ,±)} with values in X^ord,
and then recover the processes ξ_t and ξ_t^{(δ,±)} by restricting the configurations to [0, ε⁻¹]. We
start from the independent flow T_t⁰.
Definition 5.3 (The flow T_t⁰ on X^ord). Given x ∈ X^ord call m and n the minimal labels such
that x_m < ε⁻¹ + 1 and x_{n+1} = −1. Supposing n + 1 > m, we define (for P almost all ω) a
sequence s of times so that if s is an element of such a sequence then the next element t is

    t = min{ t′ ∈ [t_m ∩ (s, +∞)] ∪ ⋯ ∪ [t_n ∩ (s, +∞)] }                                     (5.2)

With P probability 1 the sequence of such times is divergent, so that by iteration it is enough
to define the flow in the time interval (s, t], s and t two consecutive times in the sequence
s. Calling y = T_s⁰(x, ω) we set T_{s′}⁰(x, ω) = y for s′ ∈ [s, t). If t = t_{k;i}, i ∈ {m, .., n}, set
z_i := y_i + σ_{k;i} and define y′_i = z_i if z_i ∈ [0, ε⁻¹] while y′_i = y_i otherwise. Then T_t⁰(x, ω) is
obtained by reordering the sequence y′ = (y′₁, .., y′_{i−1}, y′_i, y′_{i+1}, ...). Observe that any particle
with label j ∉ {m, .., n} never moves. We shall denote by T⁰_{s,t}(x, ω), s < t, the configuration
at time t when at time s the configuration was x.

The independent flow T_t⁰ is a realization of the independent random walk process in the
sense that the law of ξ_{T_t⁰(x,ω)} is the same as the law of the Markov process ξ_t⁰ with generator
L₀ in (1.2) started at time t = 0 from the configuration ξ_x defined in (5.1).
Definition 5.4 (Creation and annihilation operators). The creation and annihilation operators
a^± : X^ord → X^ord are defined as follows. Calling y^± := a^± x and k the smallest integer
such that x_k = −1, then y^+_k = 0 and y^+_i = x_i for i ≠ k. Instead y^−_h = ε⁻¹ + 1 and y^−_i = x_i
for i ≠ h, if h is the smallest integer such that x_h ∈ [0, ε⁻¹]; if h does not exist then a⁻x = x
(and the annihilation fails).

Annihilation means that a particle is moved from a site in [0, ε⁻¹] to ε⁻¹ + 1, creation that
it is moved from −1 to 0; the reason for the terminology is that the physical space is [0, ε⁻¹].
Definition 5.5 (The flow T_t on X^ord). Given t and ω let {t_{k;0}, k ≤ k*} be the times in t₀
which are ≤ t (they are finitely many and distinct with P-probability 1). Then

    T_t(x, ω) = T⁰_{t*,t}(T_{t*}(x, ω), ω),        t* := t_{k*;0}
                                                                                               (5.3)
    T_{t*}(x, ω) = a^{σ_{k*;0}} T⁰_{t_{k*−1},t*}(T_{t_{k*−1}}(x, ω), ω)

which by iteration completely defines T_t(x, ω). For a graphical visualization of the flow T_t see
Fig. 2.
Figure 2: Graphical construction of the flow T_t (left panel) and of the process with generator (1.1)
(right panel) for a system of size N = 6. The legend for the left panel is as follows: continuous vertical
lines denote the clocks of the particles involved in the dynamics; the clock of the boundaries is the one on
the left; the clock that rings first is depicted with a bold line, red if it has associated a jump
+1 and black if it corresponds to a jump −1. After the jumps the particles are re-ordered (if needed). On
the right panel the motion in the physical space [0, 6] is displayed.
Definition 5.6 (The flows T_t^{(δ,±)} on X^ord). The definition is given for t ∈ [0, ε⁻²δ]; it then
extends to all times by iteration. Let {t_{k;0}, k ≤ k*} be the times in t₀ which are ≤ ε⁻²δ (they
are finitely many and distinct with P-probability 1). We then set T_t^{(δ,−)}(x, ω) = T_t⁰(x, ω) for
t ∈ [0, ε⁻²δ) while

    T_{ε⁻²δ}^{(δ,−)}(x, ω) = a^{σ_{k*;0}} ⋯ a^{σ_{1;0}} T⁰_{ε⁻²δ}(x, ω)                            (5.4)

T_t^{(δ,+)}(x, ω) is given by

    T_t^{(δ,+)}(x, ω) = T_t⁰(x′, ω),       x′ := a^{σ_{k*;0}} ⋯ a^{σ_{1;0}} x,       t ∈ (0, ε⁻²δ]   (5.5)
One can easily check that the above is well defined with P-probability 1, that the
law of ξ_{T_t(x,ω)} is that of the Markov process with generator L defined in (1.1), and that the law of
ξ_{T_t^{(δ,±)}(x,ω)} is the same as the law of the processes ξ_t^{(δ,±)} defined in Definition 3.2. The initial
condition of both processes is given by the configuration ξ_x defined in (5.1).
6 Mass transport inequalities
In this section we introduce a partial order among measures based on moving mass to the
right; we are evidently in the context of mass transport theory, from which we are borrowing
the notions used in this section. We work first in the space of particle configurations ξ,
regarding ξ as a distribution of masses, and then in the space U, considering u ∈ U as a mass
density (which may have a Dirac delta at 0); the notions are the same except for a change of
language.

The main goal is to prove inequalities between ξ_t and the auxiliary processes ξ_t^{(δ,±)} (recall
that the hydrodynamic limit of the latter is known since Section 4) and then derive analogous
inequalities for S_t^{(δ,±)}(u) and their limit as δ → 0.

We tacitly suppose in the sequel that the configurations ξ are in X, see the beginning of
Section 5.
Definition 6.1 (Partial order). ξ ≤ ξ′ iff

    F(x; ξ) ≤ F(x; ξ′)   for all x ∈ [0, ε⁻¹ + 1]                                              (6.1)

u ≤ v, u, v ∈ U, iff

    F(r; u) ≤ F(r; v)    for all r ∈ [0, 1]                                                    (6.2)

Observe that ξ ≤ ξ′ does not have the usual meaning, i.e. ξ(x) ≤ ξ′(x) for all x! The notion of
order has rather to be interpreted in the sense of "the interfaces" F(x; ξ) = Σ_{y≥x} ξ(y), see
Definition 2.4 and Fig. 1 for a visual illustration. One can easily check that the above "≤"
relation has indeed all the properties of a partial order. The same considerations apply to the
case of continuous mass distributions as in (6.2), where the notion is well known and much
used in mass transport theory.
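For concreteness, here is a small sketch (ours) of this check: compute the interface F(·; ξ) and compare two configurations in the sense of (6.1). The configurations below are hypothetical illustrative data, in the spirit of Figure 1: the pointwise order fails at some sites, yet the interfaces are ordered.

```python
def interface(xi):
    """F(x; ξ) = Σ_{y≥x} ξ(y), computed for every x (the 'interface' of ξ)."""
    F, total = [], 0
    for v in reversed(xi):
        total += v
        F.append(total)
    return F[::-1]

def leq(xi, xi_prime):
    """ξ ≤ ξ' in the mass-transport sense (6.1): the interface of ξ' is nowhere below that of ξ."""
    return all(a <= b for a, b in zip(interface(xi), interface(xi_prime)))

xi       = [2, 3, 1, 0, 1, 0]     # illustrative configurations on [0, 5] (not the data of Figure 1)
xi_prime = [2, 2, 1, 1, 1, 1]     # note ξ(1) = 3 > ξ'(1) = 2, so the pointwise order fails ...
assert leq(xi, xi_prime)          # ... while ξ ≤ ξ' holds in the sense of the interfaces
```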
The equivalence with the previous statement about moving mass to the right is established
next. We first introduce a partial order in X^ord by saying that x ≤ x′ iff x_i ≤ x′_i for all i.
Since there is a one-to-one correspondence between X and X^ord, this
defines a priori a new order in X, but the two orders are the same, as proved in the following
Proposition.

Proposition 6.1. The following conditions are equivalent: (1) ξ ≤ ξ′; (2) x_ξ ≤ x_{ξ′}; (3)
letting x = (x₁, .., x_m) and x′ = (x′₁, .., x′_n) be the entries of x_ξ and x_{ξ′} in [0, ε⁻¹ + 1], then
n ≥ m and there are permutations y and y′ of x and x′ such that y′_i ≥ y_i for i = 1, .., m.
The implication (2) ⇒ (3) obviously holds taking the identical permutation. Since for any
x ≥ 0,
X
X
X
X
1yi ≥x ; F (x; ξx0 ) =
1yi0 ≥x
(6.3)
F (x; ξx ) =
1xi ≥x =
1x0i ≥x =
i≥1
i≥1
i≥1
i≥1
the implication (3) ⇒ (1) follows once we prove that (2) ⇒ (1), thus it is sufficient to prove
(2) ⇒ (1) and (1) ⇒ (2).
Supposing (2)
X
X
F (x; ξx ) =
1x0i ≥x = F (x; ξx0 ) for all x ≥ 0
1xi ≥x ≤
(6.4)
i≥1
i≥1
hence (2) ⇒ (1).
Supposing (1) with
x = (x1 , .., xm ) and x0 = (x01 , .., x0n ) as in the statement of the Proposition, we have
n ≥ m because otherwise (6.4) would be violated at x = 0. We also have that xi ≤ x0i for
i ≤ m: suppose by contradiction that xk > x0k then ξ(xk ) ≥ k while ξ 0 (xk ) < k, hence the
contradiction. Thus (1) ⇒ (2).
Analogous statements hold for mass distributions in U; they are well known in the literature
and, since we shall not need them, we omit the details. We have defined above a partial
order among configurations; this gives rise to an analogous order between probabilities on
configurations or even configuration-valued processes. We give an explicit definition for the
latter:

Definition 6.2 (Stochastic order). A process (ξ_t)_{t≥0} is stochastically smaller than a process
(ξ′_t)_{t≥0}, written in short ξ_t ≤ ξ′_t (stochastically), if they can both be realized on a same space
where the inequality holds pointwise almost surely.

We shall prove stochastic order by realizing the processes on the same space (Ω, P) of
Definition 5.2:
Theorem 6.2 (Stochastic inequalities). All the maps T_{mε⁻²δ}^{(δ,±)}(x, ω), T⁰_{mε⁻²δ}(x, ω), m ≥ 1,
T_t(x, ω), t > 0, and a^±, acting on x ∈ X^ord, preserve order. Moreover for any x ∈ X^ord, for P
almost all ω,

    T_{mε⁻²δ}^{(δ,−)}(x, ω) ∩ [0, ε⁻¹] ≤ T_{mε⁻²δ}(x, ω) ∩ [0, ε⁻¹] ≤ T_{mε⁻²δ}^{(δ,+)}(x, ω) ∩ [0, ε⁻¹]              (6.5)

Finally, if δ = kδ′, k a positive integer, for P almost all ω

    T_{mε⁻²δ}^{(δ,−)}(x, ω) ∩ [0, ε⁻¹] ≤ T_{mε⁻²δ}^{(δ′,−)}(x, ω) ∩ [0, ε⁻¹],      T_{mε⁻²δ}^{(δ′,+)}(x, ω) ∩ [0, ε⁻¹] ≤ T_{mε⁻²δ}^{(δ,+)}(x, ω) ∩ [0, ε⁻¹]      (6.6)

The theorem will be proved in the remaining part of this section together with its continuum
counterpart:
Theorem 6.3 (Macroscopic inequalities). All the maps K^{(δ)}, G_t^{neum} ∗ and S_t^{(δ,±)} on U_δ, see
(2.11), preserve order. Moreover for any ρ ∈ U_δ

    S_{mδ}^{(δ,−)}(ρ) ≤ S_{mδ}^{(δ,+)}(ρ)                                                                               (6.7)

and if δ = kδ′, k a positive integer,

    S_{mδ}^{(δ,−)}(ρ) ≤ S_{mδ}^{(δ′,−)}(ρ),        S_{mδ}^{(δ′,+)}(ρ) ≤ S_{mδ}^{(δ,+)}(ρ)                                (6.8)

Finally, adding a subscript j to underline the dependence on j, we have

    if j′ ≤ j   then   S_{t;j}^{(δ,±)} ρ ≤ S_{t;j′}^{(δ,±)} ρ                                                            (6.9)
We start by proving that Tt0 preserves order:
Lemma 6.4. If x ≤ x0 then Tt0 (x, ω) ≤ Tt0 (x0 , ω) for any t ≥ 0.
Proof. As in Definition 5.3 we call m and n the minimal labels such that x_m < ε^{-1}+1 and x_{n+1} = −1, and m' and n' those for x', so that m' ≥ m and n' ≥ n. Let s be the sequence of times such that, if s is an element of the sequence, the next element t is

    t = min{ t' ∈ [t_m ∩ (s, ∞)] ∪ · · · ∪ [t_{n'} ∩ (s, ∞)] }    (6.10)

The proof is iterative on the elements of s: we suppose that T^0_s(x, ω) ≤ T^0_s(x', ω), s ∈ s, and want to prove the inequality for the next time t = t_{k;i} in the sequence (recall that the particles do not move in the time interval [s, t)). Call y = T^0_s(x, ω) ≤ y' = T^0_s(x', ω). Suppose first that i > n; then y_i = −1, so that y does not change at time t and the inequality is preserved whatever the displacement of y'_i is (the other y'_j are unchanged). Suppose next that t = t_{k;i}, i ∈ {m, .., n}. Define z by setting z̃_i := y_i + σ_{k;i}, z_i = z̃_i if z̃_i ∈ [0, ε^{-1}] and z_i = y_i otherwise; for j ≠ i we set z_j = y_j. Define also w by setting w̃_i := y'_i + σ_{k;i}, w_i = w̃_i if w̃_i ∈ [0, ε^{-1}] and w_i = y'_i otherwise; as before we set w_j = y'_j for j ≠ i. Then w ≥ z and, since their re-orderings are T^0_t(x, ω) and T^0_t(x', ω) respectively, by Theorem 6.1 we get T^0_t(x, ω) ≤ T^0_t(x', ω).
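The order-preserving mechanism in this coupling is elementary and can be checked directly. The sketch below (an illustration of ours, which ignores the frozen coordinates at −1 and ε^{-1}+1 and keeps only labels with positions in [0, ε^{-1}]) performs one clock ring for a coupled pair y ≤ y' and verifies that the order survives the jump and the re-ordering.

```python
# Sketch (illustrative, not the paper's construction): one ring of the clock
# t_{k;i} in the coupled free evolution of Lemma 6.4.  Both configurations move
# the particle with the same label i by the same sign sigma; jumps leaving
# [0, L] are suppressed; configurations are then re-ordered decreasingly.
import random

def coupled_jump(y, y_p, i, sigma, L):
    z, w = list(y), list(y_p)
    if 0 <= y[i] + sigma <= L:
        z[i] = y[i] + sigma
    if 0 <= y_p[i] + sigma <= L:
        w[i] = y_p[i] + sigma
    return sorted(z, reverse=True), sorted(w, reverse=True)

if __name__ == "__main__":
    random.seed(1)
    L, n = 10, 6
    for _ in range(2000):
        y_p = sorted((random.randint(0, L) for _ in range(n)), reverse=True)
        y = sorted((random.randint(0, v) for v in y_p), reverse=True)   # y <= y_p
        z, w = coupled_jump(y, y_p, random.randrange(n), random.choice((-1, 1)), L)
        assert all(a <= b for a, b in zip(z, w))    # order preserved
    print("order preserved in all samples")
```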
Lemma 6.5. If x ≤ x' then a^± x ≤ a^± x' and

    a^+ x ≤ x',    if x ≤ x' and |x ∩ [0, ε^{-1}+1]| < |x' ∩ [0, ε^{-1}+1]|
    a^− x ≤ x',    if x ≤ x' and |x ∩ {ε^{-1}+1}| < |x' ∩ {ε^{-1}+1}|    (6.11)

Moreover

    |T^{(δ,−)}_{ε^{-2}δ}(x, ω) ∩ {ε^{-1}+1}| = |T^{(δ,+)}_{ε^{-2}δ}(x, ω) ∩ {ε^{-1}+1}| = |T_{ε^{-2}δ}(x, ω) ∩ {ε^{-1}+1}|    (6.12)
    |T^{(δ,−)}_{ε^{-2}δ}(x, ω) ∩ [0, ε^{-1}+1]| = |T^{(δ,+)}_{ε^{-2}δ}(x, ω) ∩ [0, ε^{-1}+1]| = |T_{ε^{-2}δ}(x, ω) ∩ [0, ε^{-1}+1]|
Proof. (6.11) follows directly from the definition. To prove (6.12) call n the cardinality of x ∩ [0, ε^{-1}] and m the cardinality of x ∩ {ε^{-1}+1}. n and m are invariant under the free flow T^0_t, while n → n+1 under a^+ and n → n−1, m → m+1 under a^− (if n > 0). Thus the sequence of values of n and m under T_t(x, ω) is determined by the sequence a^{σ_{k^*;0}} · · · a^{σ_{1;0}}, see (5.4), which is the same sequence appearing in the definition of T^{(δ,±)}; the only difference is the time at which these operations act: they are "diffuse" in T_t and concentrated at time 0 and at time ε^{-2}δ for T^{(δ,+)} and T^{(δ,−)} respectively.
In the following corollary we fix arbitrarily t > 0 and n ≥ 1 and denote by ω and ω' any two elements of Ω with the following properties: ω_0 = (t_0, σ_0), ω'_0 = (t'_0, σ'_0) and

    t_0 ∩ [0, t] = (s_1, .., s_n),    t'_0 ∩ [0, t] = (s'_1, .., s'_n),    s'_i ≤ s_i

Moreover σ_{k;0} = σ'_{k;0} =: σ_k, k ≤ n. Finally, for all i ≥ 1, ω_i = ω'_i; all the times t_{k;i}, i ≥ 1, are mutually distinct and distinct from s_k, s'_k, k = 1, .., n, and from 0 and t.

Corollary 6.6. Let x ≤ x' in X^{ord} with |x ∩ {ε^{-1}+1}| = |x' ∩ {ε^{-1}+1}|, and let ω and ω' be as above. Then

    T_t(x, ω) ≤ T_t(x', ω'),    T_t(x, ω) ∩ [0, ε^{-1}] ≤ T_t(x', ω') ∩ [0, ε^{-1}]    (6.13)

Proof. The proof follows by applying repeatedly Lemma 6.4 and Lemma 6.5.
Proof of Theorem 6.2. In Lemma 6.4 it is proved that T^0_{mε^{-2}δ}(x, ω) preserves order. By (6.13) (with ω = ω') also T_t(x, ω) preserves order; the same argument applies to T^{(δ,±)}_{ε^{-2}δ}(x, ω) by choosing the time sequence s_1, .., s_n with all s_i = 0 or, in the case (δ,−), with all s_i = ε^{-2}δ. By iteration the result extends to all mε^{-2}δ.
(6.5) follows from Corollary 6.6 by analogous arguments, and so does (6.6): they are all based on anticipating or postponing the action of the creation and annihilation operators. Details are omitted.
Proof of Theorem 6.3. We first prove (6.7) and (6.8). By Theorem 4.1 and (6.5)

    lim_{ε→0} P^{(N)}_{ξ_0} [ F(r; S^{(δ,−)}_{mδ}(v_0)) − ζ ≤ F(r; ξ_{ε^{-2}mδ}) ≤ F(r; S^{(δ,+)}_{mδ}(v_0)) + ζ,  for all r ∈ [0, 1] ] = 1    (6.14)

which, by the arbitrariness of ζ, implies F(r; S^{(δ,−)}_{mδ}(v_0)) ≤ F(r; S^{(δ,+)}_{mδ}(v_0)) for all r ∈ [0, 1]; then (6.7) follows. Analogously (6.8) follows from (6.6).
In order to prove the remaining statements we show that

    if j' ≤ j and ρ ≤ ρ̄ then K^{(δ)}_j ρ ≤ K^{(δ)}_{j'} ρ̄    (6.15)

where we have added the index j to K^{(δ)} to keep track of the dependence on the current parameter. We recall that F(R^δ_j(ρ); ρ) = jδ. By hypothesis F(r; ρ) ≤ F(r; ρ̄) for any r ∈ [0, 1], then

    R^δ_j(ρ) ≤ R^δ_j(ρ̄) ≤ R^δ_{j'}(ρ̄)    (6.16)

We recall that K^{(δ)}_j ρ = ρ 1_{r ≤ R^δ_j(ρ)} + jδ D_0; then

    F(r; K^{(δ)}_j ρ) =  F(0; ρ)          for r = 0
                         F(r; ρ) − jδ     for 0 < r < R^δ_j(ρ)    (6.17)
                         0                for r ≥ R^δ_j(ρ)

thus

    F(r; K^{(δ)}_{j'} ρ̄) − F(r; K^{(δ)}_j ρ) =  F(0; ρ̄) − F(0; ρ)               for r = 0
                                                F(r; ρ̄) − F(r; ρ) + (j − j')δ   for 0 < r < R^δ_j(ρ)    (6.18)
                                                F(r; ρ̄) − j'δ                   for R^δ_j(ρ) ≤ r < R^δ_{j'}(ρ̄)
                                                0                               for R^δ_{j'}(ρ̄) ≤ r

which is nonnegative in all cases; thus (6.15) is proved.
Choosing j' = j in (6.15) we deduce, in particular, that K^{(δ)} preserves the order. On the other hand, the property that G^{neum}_t ∗ preserves the order is inherited from the same property of the independent flow T^0_t. As a consequence of the two previous statements, also S^{(δ,±)}_t preserves the order (see the definition in (2.11)).
Finally (6.9) follows from (6.15) with ρ = ρ̄ and from the fact that G^{neum}_t ∗ preserves the order. Thus Theorem 6.3 is proved.
Proof of Proposition 2.2. It follows from the inequalities (6.7) and (6.8).
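To make the barriers of Theorem 6.3 concrete, the following sketch (a crude discretization of ours, not the paper's code) implements one time step of the upper barrier on a uniform grid of [0,1]: first the cut-and-inject map K^{(δ)} (the delta D_0 becomes mass in the first cell), then the Neumann heat semigroup, replaced here by explicit finite-difference sub-steps with no-flux boundaries and unit diffusion constant; performing the heat step first and the cut afterwards gives the lower barrier, consistently with (7.12) and (7.22) below.

```python
# Sketch (not from the paper): one step of the discretized upper barrier S^(delta,+),
# i.e. cut-and-inject followed by a Neumann heat step.  Grid, diffusion constant
# and the finite-difference scheme are our own choices.
import numpy as np

def k_delta(mass, jdelta):
    """K^(delta): remove total mass jdelta from the right edge, inject it at 0."""
    out = mass.copy()
    to_remove = jdelta
    for i in range(len(out) - 1, -1, -1):
        take = min(out[i], to_remove)
        out[i] -= take
        to_remove -= take
        if to_remove <= 0:
            break
    out[0] += jdelta                      # the delta D_0 becomes mass in cell 0
    return out

def heat_neumann(mass, t, dx):
    """No-flux heat evolution over time t by explicit finite differences."""
    nsub = max(1, int(np.ceil(4 * t / dx**2)))    # stability: dt/dx^2 <= 1/4
    dt = t / nsub
    u = mass / dx                                  # density
    for _ in range(nsub):
        up = np.pad(u, 1, mode="edge")             # reflecting (Neumann) boundaries
        u = u + dt / dx**2 * (up[2:] - 2 * u + up[:-2])
    return u * dx

def step_plus(mass, jdelta, delta, dx):
    return heat_neumann(k_delta(mass, jdelta), delta, dx)

if __name__ == "__main__":
    N, j, delta = 200, 1.0, 0.01
    dx = 1.0 / N
    m = np.full(N, dx)                             # initial density 1 on [0,1]
    for _ in range(20):
        m = step_plus(m, j * delta, delta, dx)
    print("total mass after 20 steps:", round(m.sum(), 6))   # conserved, cf. Theorem 7.1
```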
7   Regularity properties of the approximating flows

In this section we shall prove regularity properties of the approximating flows S^{(δ,±)}_t(u) defined in Definition 2.6. We tacitly suppose hereafter that u ∈ U_δ, see (2.11).
Since G^{neum}_{nδ}(r, 0) is a C^∞ function, we get from its very definition that S^{(δ,+)}_{nδ}(u) is C^∞ as well, while S^{(δ,−)}_{nδ}(u) is equal to jδD_0 plus a function which is C^∞ in its support. Such a smoothness however, being inherited from G^{neum}_δ, depends on δ, while we want properties which hold uniformly as δ → 0. We shall frequently use in the sequel the following bounds (c a constant)

    G^{neum}_t(r, 0) ≤ c(1+√t)/√t,    |d/dr G^{neum}_t(r, 0)| ≤ c(1+√t)/t,    t = nδ, n ≥ 1    (7.1)

which make explicit the dependence on δ. Recall also that

    ∫ dr G_t(r, r') = ∫ dr' G_t(r, r') = 1    (7.2)
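The shape of the bounds (7.1)-(7.2) is easy to visualize by constructing G^{neum}_t explicitly with the method of images. The sketch below is our own construction (with a centered Gaussian of variance t; the diffusion constant only affects the constants c) and checks numerically that sup_r G^{neum}_t(r, 0) grows like t^{-1/2} as t → 0 while the kernel keeps unit mass.

```python
# Sketch (our construction, not from the paper): Neumann heat kernel on [0,1]
# by the method of images, and a numerical check of the behaviour in (7.1)-(7.2).
import numpy as np

def gauss(x, t):
    return np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

def G_neum(r, rp, t, kmax=20):
    r = np.asarray(r, dtype=float)
    s = np.zeros_like(r)
    for k in range(-kmax, kmax + 1):          # reflected images of the source at rp
        s += gauss(r - rp - 2 * k, t) + gauss(r + rp - 2 * k, t)
    return s

if __name__ == "__main__":
    r = np.linspace(0.0, 1.0, 2001)
    for t in (0.1, 0.01, 0.001):
        ker = G_neum(r, 0.0, t)
        mass = ker.sum() * (r[1] - r[0])      # approximately 1, cf. (7.2)
        print(f"t={t:6.3f}  sup G_t(.,0)={ker.max():8.2f}  "
              f"sqrt(t)*sup={np.sqrt(t) * ker.max():5.3f}  mass={mass:5.3f}")
    # sqrt(t)*sup stays bounded, in agreement with the first bound in (7.1)
```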
We shall use the following notation:

    ρ^{(δ,−)}_t := S^{(δ,−)}_t(u) − jδD_0,    ρ^{(δ,+)}_t := S^{(δ,+)}_t(u),    t ∈ δN    (7.3)

By what was said above, ρ^{(δ,±)}_t are "smooth"; their dependence on u ≡ ρ_0 is not made explicit.
The main result in this section is:

Theorem 7.1 (Space and time equicontinuity). The total mass is conserved, i.e. F(0; ρ^{(δ,±)}_t) = F(0; ρ_0) for all t ∈ δN, and there is a constant c so that

    ‖ρ^{(δ,±)}_t‖_∞ ≤ c (j + ‖ρ_0‖_∞)    for all t ∈ δN, t ≤ 1
    ‖ρ^{(δ,±)}_t‖_∞ ≤ c (j + ‖ρ_0‖_1)    for all t ∈ δN, t > 1    (7.4)

ρ^{(δ,+)}_t(r) is space-time equicontinuous, namely for any ζ > 0 there are τ_ζ > 0 and d_ζ > 0 so that for any δ > 0 and t, t' ∈ δN:

    sup_{t, |r−r'| ≤ d_ζ} |ρ^{(δ,+)}_t(r) − ρ^{(δ,+)}_t(r')| < ζ,    sup_{r, |t−t'| ≤ τ_ζ} |ρ^{(δ,+)}_{t'}(r) − ρ^{(δ,+)}_t(r)| < ζ    (7.5)

Finally

    ‖S^{(δ,+)}_t(u) − S^{(δ,−)}_t(u)‖_1 ≤ 4jδ,    for all t > 0 in δN.    (7.6)
Proof. The total mass is conserved because both G^{neum}_δ and K^{(δ)} (see (2.12)) preserve mass. We shall next prove (7.4) for (δ,+); the proof for (δ,−) is similar and omitted. Let t = nδ, n a positive integer; then

    ρ^{(δ,+)}_t(r) ≤ ∫ dr' G^{neum}_δ(r, r') ρ^{(δ,+)}_{t−δ}(r') + jδ G^{neum}_δ(r, 0)

The inequality holds because we are not taking into account the "loss part" in the action of K^{(δ)}. Iterating we get, for s = mδ, m < n a non-negative integer,

    ρ^{(δ,+)}_t(r) ≤ ∫ dr' G^{neum}_{t−s}(r, r') ρ^{(δ,+)}_s(r') + jδ Σ_{k=1}^{n−m} G^{neum}_{kδ}(r, 0)    (7.7)

Let n_δ be the smallest integer such that δ n_δ ≥ 1 and suppose that in (7.7) n < n_δ and m = 0. By (7.2) the integral in (7.7) is bounded by ‖ρ_0‖_∞, whereas by (7.1) the sum is bounded by c''j √(nδ) ≤ c''j. Thus (7.4) is proved for t ≤ 1.
Let us now choose n = n_δ, m = 0; then using (7.1) we bound the integral in (7.7) by c'‖ρ_0‖_1 (δ n_δ)^{−1/2} ≤ c'‖ρ_0‖_1. As before the last term in (7.7) is bounded by c''j √(δ n_δ) ≤ 2c''j, so that (we may suppose c' < 2c'')

    ‖ρ^{(δ,+)}_{n_δ δ}‖_∞ ≤ 2c''(‖ρ_0‖_1 + j)

By the same argument, for any integer k ≥ 1,

    ‖ρ^{(δ,+)}_{k n_δ δ}‖_∞ ≤ 2c''(‖ρ^{(δ,+)}_{(k−1) n_δ δ}‖_1 + j) = 2c''(‖ρ_0‖_1 + j)    (7.8)

the last equality because we have already proved that mass is conserved. Thus (7.4) is proved for t ∈ (δ n_δ)N. Let now m = k n_δ and k n_δ < n ≤ (k+1) n_δ, k a positive integer. The last term in (7.7) is bounded again by 2c''j, whereas the integral is smaller than ‖ρ^{(δ,+)}_{k n_δ δ}‖_∞. Thus (7.4) for t ≥ 1 follows from (7.8). The proof of (7.5) and (7.6) will be continued after the following lemma.
Lemma 7.2. There is a constant c so that the following holds. For any 0 < s < t ∈ δN, t − s ≤ 1, we can write ρ^{(δ,±)}_t = w^±_{s,t} + v^±_{s,t}, with w^±_{s,t} and v^±_{s,t} non-negative and such that

    sup_{r,r' ∈ [0,1]} |w^±_{s,t}(r) − w^±_{s,t}(r')| ≤ c ‖ρ_0‖_∞ |r − r'| / (t − s)    (7.9)

    ‖v^±_{s,t}‖_1 ≤ 2j(t − s)    (7.10)

    ‖v^−_{s,t}‖_∞ ≤ c(‖ρ_0‖_∞ + j√(t − s)),    ‖v^+_{s,t}‖_∞ ≤ cj√(t − s)    (7.11)
Proof. Let t = nδ, s = mδ, m < n positive integers such that (n − m)δ ≤ 1. Recalling the definition (7.3), we have

    ρ^{(δ,−)}_{nδ} = K^{(δ)}[G^{neum}_δ ∗ S^{(δ,−)}_{(n−1)δ}(ρ_0)](r) − jδD_0 = 1_{r≤R} G^{neum}_δ ∗ [ρ^{(δ,−)}_{(n−1)δ} + jδD_0](r)
                  = jδ G^{neum}_δ(r, 0) + 1_{r≤R} G^{neum}_δ ∗ ρ^{(δ,−)}_{(n−1)δ}(r)
                  ≤ jδ G^{neum}_δ(r, 0) + G^{neum}_δ ∗ ρ^{(δ,−)}_{(n−1)δ}(r)    (7.12)

having neglected the "loss terms" (as in (7.7)). After iteration (7.12) yields

    ρ^{(δ,−)}_t(r) ≤ jδ Σ_{k=1}^{n−m} G^{neum}_{kδ}(r, 0) + w^−_{s,t}(r),    w^−_{s,t} := G^{neum}_{t−s} ∗ ρ^{(δ,−)}_s    (7.13)

By (7.4) and the second inequality in (7.1) we get

    |w^−_{s,t}(r) − w^−_{s,t}(r')| ≤ ‖ρ^{(δ,−)}_s‖_∞ ∫ |G^{neum}_{t−s}(r, z) − G^{neum}_{t−s}(r', z)| dz ≤ c ‖ρ_0‖_∞ |r − r'| / (t − s)

which proves (7.9) for w^−_{s,t}. We next define

    v^−_{s,t} := ρ^{(δ,−)}_t − w^−_{s,t}    (7.14)

To estimate v^−_{s,t} we prove below a lower bound for ρ^{(δ,−)}_t, obtained iteratively by neglecting the gain terms. We first define, for all τ ∈ δN,

    v^{(δ,−)}_τ(r) := 1_{r>R} G^{neum}_δ ∗ [jδD_0 + ρ^{(δ,−)}_{τ−δ}](r),    ∫ v^{(δ,−)}_τ = jδ    (7.15)

By the second equality in (7.12), ρ^{(δ,−)}_t = G^{neum}_δ ∗ [ρ^{(δ,−)}_{t−δ} + jδD_0] − v^{(δ,−)}_t, hence

    ρ^{(δ,−)}_t ≥ G^{neum}_δ ∗ ρ^{(δ,−)}_{t−δ} − v^{(δ,−)}_t

and by iteration:

    ρ^{(δ,−)}_t ≥ G^{neum}_{t−s} ∗ ρ^{(δ,−)}_s − Σ_{k=m+1}^{n} G^{neum}_{(n−k)δ} ∗ v^{(δ,−)}_{kδ} = w^−_{s,t} − Σ_{k=m+1}^{n} G^{neum}_{(n−k)δ} ∗ v^{(δ,−)}_{kδ}    (7.16)

By (7.14) and (7.13), v^−_{s,t}(r) ≤ jδ Σ_{k=1}^{n−m} G^{neum}_{kδ}(r, 0), so that using (7.16) we get

    |v^−_{s,t}| ≤ Σ_{k=m+1}^{n} G^{neum}_{(n−k)δ} ∗ v^{(δ,−)}_{kδ} + jδ Σ_{k=1}^{n−m} G^{neum}_{kδ}(r, 0)

From (7.1), (7.2) and (7.4) we get

    ‖ Σ_{k=m+1}^{n} G^{neum}_{(n−k)δ} ∗ v^{(δ,−)}_{kδ} ‖_∞ ≤ cj √δ √(n − m) + c‖ρ_0‖_∞ ≤ c(‖ρ_0‖_∞ + j√(t − s))    (7.17)

From (7.15) and (7.2) we get

    ‖v^−_{s,t}‖_1 ≤ j(t − s) + ‖ Σ_{k=m+1}^{n} G^{neum}_{(n−k)δ} ∗ v^{(δ,−)}_{kδ} ‖_1 ≤ 2j(t − s)

which concludes the proof of (7.10) and (7.11).
In an analogous way we have

    ρ^{(δ,+)}_t(r) ≤ jδ Σ_{k=1}^{n−m} G^{neum}_{kδ}(r, 0) + [G^{neum}_{t−s} ∗ ρ^{(δ,+)}_s](r)

For the lower bound we first define

    v^{(δ,+)}_τ(r) = 1_{r≥R} ρ^{(δ,+)}_τ(r),    τ ∈ δN

where R is such that ∫ v^{(δ,+)}_τ(r) dr = jδ and ‖v^{(δ,+)}_τ‖_∞ ≤ C, C = c(j + ‖ρ_0‖_∞). We then have:

    ρ^{(δ,+)}_t ≥ G^{neum}_δ ∗ [ρ^{(δ,+)}_{t−δ} − v^{(δ,+)}_{t−δ}]

and by iteration:

    ρ^{(δ,+)}_t ≥ G^{neum}_{t−s} ∗ ρ^{(δ,+)}_s − Σ_{k=m}^{n−1} G^{neum}_{(n−k)δ} ∗ v^{(δ,+)}_{kδ}

Combining the two inequalities we get

    |ρ^{(δ,+)}_t − G^{neum}_{t−s} ∗ ρ^{(δ,+)}_s| ≤ Σ_{k=m}^{n−1} G^{neum}_{(n−k)δ} ∗ v^{(δ,+)}_{kδ} + jδ Σ_{k=1}^{n−m} G^{neum}_{kδ}(r, 0)    (7.18)

The analysis is then the same as for ρ^{(δ,−)}_t; the improvement in (7.11) is due to the absence of the term with v^{(δ,+)}_t.
We resume the proof of Theorem 7.1 by proving:

Proof of space equicontinuity. Given any ζ > 0 we must find d_ζ > 0 so that

    sup_{|r−r'| < d_ζ} |ρ^{(δ,+)}_t(r) − ρ^{(δ,+)}_t(r')| < ζ    (7.19)

By (7.9) and (7.11), and calling c_0 := c(j + ‖ρ_0‖_∞),

    |ρ^{(δ,+)}_t(r) − ρ^{(δ,+)}_t(r')| ≤ c_0 |r − r'| / (t − s) + c_0 √(t − s)    (7.20)

We shall prove (7.19) with

    d_ζ < ζ^3 min{ 1 / (4c_0 (2c_0)^2) ;  1 / (c'' (2c_0)^2) }    (7.21)

where c'' is a constant which will be specified later.
We first consider the case when (2c_0)^2 δ < ζ^2. We then choose s < t as the largest time such that 2c_0 √(t − s) < ζ. Since t − s = kδ, for s to exist it must be that (2c_0)^2 δ < ζ^2, which is indeed the case we are studying now. By the maximality of s, 2c_0 √(t − s + δ) ≥ ζ, so that

    2(t − s) ≥ t − s + δ ≥ ζ^2 / (2c_0)^2

By choosing d_ζ as in (7.21), the first term on the right hand side of (7.20) is bounded by

    c_0 (2(2c_0)^2 / ζ^2) d_ζ < ζ/2

while the second term is smaller than ζ/2 by the choice of s; hence |ρ^{(δ,+)}_t(r) − ρ^{(δ,+)}_t(r')| < ζ.
It remains to consider the case when (2c_0)^2 δ ≥ ζ^2. Observe that

    ρ^{(δ,+)}_t = G^{neum}_δ ∗ (jδD_0 + v),    v = 1_{r≤R} ρ^{(δ,+)}_{t−δ}    (7.22)

hence its space derivative is bounded by c(jδ + ‖v‖_∞)δ^{−1} =: c''δ^{−1} (c a constant). By (7.21) we then get

    |ρ^{(δ,+)}_t(r) − ρ^{(δ,+)}_t(r')| ≤ c''δ^{−1} |r − r'| ≤ c'' ((2c_0)^2 / ζ^2) |r − r'| < ζ    (7.23)
Proof of time equicontinuity. Supposing t' > t, t' − t ≤ 1, by (7.18) with t → t' and s → t,

    |ρ^{(δ,+)}_{t'} − G^{neum}_{t'−t} ∗ ρ^{(δ,+)}_t| ≤ Σ_{k=m}^{n−1} G^{neum}_{(n−k)δ} ∗ v^{(δ,+)}_{kδ} + jδ Σ_{k=1}^{n−m} G^{neum}_{kδ}(r, 0) ≤ cj √(t' − t)

Hence, calling ζ' = ζ/4 and with C ≥ ‖ρ^{(δ,+)}_t‖_∞,

    |ρ^{(δ,+)}_{t'}(r) − ρ^{(δ,+)}_t(r)| ≤ ∫_{r': |r−r'| ≥ d_{ζ'}} C G^{neum}_{t'−t}(r, r') dr' + ζ' + cj √(t' − t)

We then choose t' − t so small that: (i) cj √(t' − t) < ζ' and (ii) the integral on the right hand side is < ζ' as well.
In the proof of (7.6) we shall use the following lemma:

Lemma 7.3. Let u and v be both in U_δ, see (2.11); then

    ‖K^{(δ)}u − K^{(δ)}v‖_1 ≤ ‖u − v‖_1,    ‖K^{(δ)}u − u‖_1 ≤ 2jδ    (7.24)

Proof. Supposing R_δ(u) > R_δ(v), see (2.12),

    ‖K^{(δ)}u − K^{(δ)}v‖_1 = ∫_0^{R_δ(v)} |u − v| + ∫_{R_δ(v)}^{R_δ(u)} u
                            = ‖u − v‖_1 + ∫_{R_δ(v)}^{R_δ(u)} (u − |u − v|) − ∫_{R_δ(u)}^{∞} |u − v|

We have

    ∫_{R_δ(u)}^{∞} |u − v| ≥ | ∫_{R_δ(u)}^{∞} (u − v) | = jδ − ∫_{R_δ(u)}^{∞} v = ∫_{R_δ(v)}^{R_δ(u)} v

so that

    ‖K^{(δ)}u − K^{(δ)}v‖_1 ≤ ‖u − v‖_1 − ∫_{R_δ(v)}^{R_δ(u)} (v − u + |u − v|) ≤ ‖u − v‖_1

The second inequality in (7.24) follows because

    K^{(δ)}u − u = jδD_0 − 1_{r>R_δ(u)} u
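Both inequalities in (7.24) are easy to verify numerically on piecewise constant profiles. The sketch below (our own discretization of K^{(δ)}, with the delta D_0 represented as mass in the first cell) checks them on random data.

```python
# Sketch (our discretization, not from the paper): numerical check of the two
# L1 inequalities (7.24) for the cut-and-inject map K^(delta).
import numpy as np

def K_delta(mass, jdelta):
    """Remove the rightmost jdelta of mass, inject jdelta at the origin cell."""
    tail = np.cumsum(mass[::-1])[::-1]                 # mass in cells >= i
    new_tail = np.maximum(tail - jdelta, 0.0)          # tail after the cut
    out = new_tail - np.append(new_tail[1:], 0.0)      # back to per-cell masses
    out[0] += jdelta
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, jdelta = 100, 0.05
    for _ in range(1000):
        u = rng.random(N); u *= 2.0 / u.sum()          # total mass 2 > jdelta
        v = rng.random(N); v *= 2.0 / v.sum()
        assert np.abs(K_delta(u, jdelta) - K_delta(v, jdelta)).sum() \
               <= np.abs(u - v).sum() + 1e-12               # first inequality in (7.24)
        assert np.abs(K_delta(u, jdelta) - u).sum() <= 2 * jdelta + 1e-12   # second one
    print("both inequalities in (7.24) hold on all samples")
```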
Proof of (7.6). The proof is actually a corollary of Lemma 7.3 and of the maximum principle

    ‖G^{neum}_t ∗ u − G^{neum}_t ∗ v‖_1 ≤ ‖u − v‖_1

Shorthand G for the operator G^{neum}_δ ∗ and

    φ := K^{(δ)} G · · · K^{(δ)} G u,    ψ := G K^{(δ)} · · · G K^{(δ)} u

so that we need to bound the total variation of φ − ψ. Call

    v = K^{(δ)} u,    v_n = G K^{(δ)} · · · G v,    u_n = G K^{(δ)} · · · G u

Thus u_n and v_n are obtained by applying G(K^{(δ)}G)^{n−1} to u and to v respectively. Since G(K^{(δ)}G)^{n−1} is a contraction we get, using (7.24),

    ‖ψ − φ‖_1 ≤ ‖K^{(δ)}u_n − v_n‖_1 ≤ ‖K^{(δ)}u_n − u_n‖_1 + ‖v_n − u_n‖_1
              ≤ 2jδ + ‖v_n − u_n‖_1 ≤ 2jδ + ‖u − v‖_1 ≤ 4jδ

The proof of Theorem 7.1 is concluded.

In the proof of Theorem 2.4 we shall use the following lemma, where we derive an explicit bound for the difference |ρ^{(δ,+)}_t(r) − ρ^{(δ,+)}_t(r')|.
Lemma 7.4. There is c > 0 such that, for any δ and for any t ∈ δN,

    |ρ^{(δ,+)}_t(r) − ρ^{(δ,+)}_t(r')| ≤ c max{ |r − r'|^{1/3}, √δ }    (7.25)

Proof. By (7.9) and (7.11) there is c' > 0 such that

    |ρ^{(δ,+)}_t(r) − ρ^{(δ,+)}_t(r')| ≤ c' |r − r'| / (t − s) + c' √(t − s)    (7.26)

for any s < t with t − s ∈ δN. When |r − r'| ≥ δ^{3/2} we choose s such that t − s = k*δ with k* = ⌈ |r − r'|^{2/3} / δ ⌉, so that

    |r − r'|^{2/3} / δ ≤ k* ≤ |r − r'|^{2/3} / δ + 1    (7.27)

and then, from (7.26),

    |ρ^{(δ,+)}_t(r) − ρ^{(δ,+)}_t(r')| ≤ c' |r − r'|^{1/3} + c' √( |r − r'|^{2/3} + δ ) ≤ c'' |r − r'|^{1/3}    (7.28)

Suppose now |r − r'| ≤ δ^{3/2}, and choose s = t − δ; then, from (7.26),

    |ρ^{(δ,+)}_t(r) − ρ^{(δ,+)}_t(r')| ≤ 2c' √δ    (7.29)

thus the result follows from (7.28) and (7.29).
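The exponent 1/3 comes from optimizing (7.26) over the free parameter t − s; the balance is easy to check numerically. The short sketch below (purely illustrative, constants dropped) minimizes d/(t−s) + √(t−s) over a grid of values of t − s and compares the minimum with d^{1/3}.

```python
# Sketch: the optimization behind Lemma 7.4.  For h(tau) = d/tau + sqrt(tau),
# the minimum over tau is of order d^(1/3), attained at tau of order d^(2/3).
import numpy as np

for d in (1e-2, 1e-4, 1e-6):
    tau = np.logspace(-8, 0, 2000)          # admissible values of t - s
    h = d / tau + np.sqrt(tau)
    print(f"d={d:.0e}   min h = {h.min():.3e}   d^(1/3) = {d ** (1/3):.3e}")
# the two columns stay within a constant factor of each other
```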
8   Hydrodynamic limit

Convergence in the hydrodynamic limit will be proved as a consequence of (i) Theorem 4.1, which proves the hydrodynamic limit for the approximate evolutions; (ii) Theorem 7.1, which establishes regularity properties of the macroscopic approximate evolutions; and (iii) the stochastic inequalities proved in Theorem 6.2.

Definition 8.1. For any τ > 0 we call, for n ∈ N,

    T^{(τ)}_n = {2^{−n} τ N},    T^{(τ)} := ∪_{n∈N} T^{(τ)}_n    (8.1)

We shall first restrict δ to {τ 2^{−n}, n ∈ N} and prove convergence for t ∈ T^{(τ)}.

• By equicontinuity and a diagonalization procedure, S^{(τ2^{−n},+)}_t(ρ_init) converges by subsequences as n → ∞, in sup norm on the compacts, to a continuous bounded function ρ_t(r), r ∈ [0, 1], t ∈ T^{(τ)}.

• By (6.8), for any r ∈ [0, 1] and t ∈ T^{(τ)},

    lim_{n→∞} F(r; S^{(τ2^{−n},+)}_t(ρ_init)) = F(r; ρ_t)

which proves that the previous convergence to ρ_t(r) does not require subsequences.

• By (7.6), for any r ∈ [0, 1] and t ∈ T^{(τ)},

    lim_{n→∞} F(r; S^{(τ2^{−n},−)}_t(ρ_init)) = F(r; ρ_t)

and by (6.14)

    lim_{ε→0} P^{(N)}_{ξ_0} [ F(r; ρ_t) − ζ ≤ F(r; ξ_{ε^{-2}t}) ≤ F(r; ρ_t) + ζ,  for all r ∈ [0, 1] ] = 1
We are now ready to deduce the proofs of Theorem 2.1 and Theorem 2.3.

Proof of Theorem 2.3. It follows from the reasoning above and the use of Theorem 6.3 with the choice m = 2^n and δ = t 2^{−n}.

Proof of Theorem 2.1. We have already proved Theorem 2.1, but only for t ∈ T^{(τ)}. To underline this let us rename ρ_t as ρ^{(τ)}_t, noticing that by continuity it extends uniquely to a function, still denoted by ρ^{(τ)}_t, with t ∈ R_+. To prove independence of τ we use the following lemma:
Lemma 8.1. There is c so that for any 0 < δ < δ', u ∈ U_δ and n ≥ 1,

    ‖S^{(δ,−)}_{nδ}(u) − S^{(δ',−)}_{nδ'}(u)‖_1 ≤ c ‖u‖_1 n (δ' − δ) / δ^{3/2}    (8.2)

Proof. In order to compare S^{(δ,−)}_δ and S^{(δ',−)}_{δ'} we shall use the following bounds:

    ‖K^{(δ)}(w) − K^{(δ')}(w)‖_1 ≤ 2j(δ' − δ),    ‖G^{neum}_δ ∗ w − G^{neum}_{δ'} ∗ w‖_1 ≤ c (δ' − δ) / δ^{3/2} ‖w‖_1    (8.3)

together with ‖K^{(δ)}(w) − K^{(δ)}(v)‖_1 ≤ ‖v − w‖_1, see (7.24). Indeed we can bound ‖S^{(δ,−)}_δ(w) − S^{(δ',−)}_{δ'}(v)‖_1 by

    ≤ ‖K^{(δ)}(G^{neum}_δ ∗ w) − K^{(δ)}(G^{neum}_{δ'} ∗ v)‖_1 + ‖(K^{(δ')} − K^{(δ)}) G^{neum}_{δ'} ∗ v‖_1
    ≤ ‖G^{neum}_δ ∗ w − G^{neum}_{δ'} ∗ v‖_1 + 2j(δ' − δ)
    ≤ ‖G^{neum}_δ ∗ w − G^{neum}_δ ∗ v‖_1 + ‖G^{neum}_δ ∗ v − G^{neum}_{δ'} ∗ v‖_1 + 2j(δ' − δ)

getting

    ‖S^{(δ,−)}_δ(w) − S^{(δ',−)}_{δ'}(v)‖_1 ≤ ‖w − v‖_1 + c (δ' − δ) / δ^{3/2} ‖v‖_1 + 2j(δ' − δ)    (8.4)

By using (8.4) with w = S^{(δ,−)}_{(n−1)δ}(u) and v = S^{(δ',−)}_{(n−1)δ'}(u) we then get (8.2) by iteration.
Theorem 8.2. ρ^{(τ)}_t is independent of τ (and it will thus be called ρ_t in the sequel).

Proof. Let τ' ≠ τ. If τ' ∈ T^{(τ)} then obviously ρ^{(τ')}_t = ρ^{(τ)}_t. Suppose therefore that τ' ∉ T^{(τ)}. Let t' ∈ T^{(τ')}, t' = nδ', δ' = τ' 2^{−m}. Let δ ∈ T^{(τ)}, δ < δ'. Then by the lemma, for all r ∈ [0, 1],

    F(r; S^{(δ',−)}_{t'}(ρ_init)) ≤ F(r; S^{(δ,−)}_{nδ}(ρ_init)) + c ‖ρ_init‖_1 n (δ' − δ) / δ^{3/2}    (8.5)

For any p ∈ N large enough there is k_p ∈ N so that δ = k_p τ 2^{−p}. Then by (6.8)

    F(r; S^{(δ,−)}_{nδ}(ρ_init)) ≤ F(r; S^{(τ2^{−p},−)}_{nδ}(ρ_init))    (8.6)

By taking p → ∞:

    F(r; S^{(δ',−)}_{t'}(ρ_init)) ≤ F(r; ρ^{(τ)}_{nδ}) + c ‖ρ_init‖_1 n (δ' − δ) / δ^{3/2}    (8.7)

We then let δ → δ' on T^{(τ)}, getting

    F(r; S^{(δ',−)}_{t'}(ρ_init)) ≤ F(r; ρ^{(τ)}_{t'})

We next take m → ∞, recall δ' = τ' 2^{−m}, and get

    F(r; ρ^{(τ')}_{t'}) ≤ F(r; ρ^{(τ)}_{t'}),    for any t' ∈ T^{(τ')}

In an analogous fashion we get

    F(r; ρ^{(τ)}_t) ≤ F(r; ρ^{(τ')}_t),    for any t ∈ T^{(τ)}

The theorem then follows from the continuity of ρ^{(τ)}_t and ρ^{(τ')}_t.

This concludes the proof of Theorem 2.1.
We now start the proof of Theorem 2.4; thus we fix ρ_init such that ρ_init(1) > 0 and we call ρ_t the function of Theorem 2.1.
For any a > 0 arbitrarily small we define

    T_a = sup{ t > 0 : ρ_t(1) ≥ a }

Lemma 8.3. For any a > 0 there exists 0 < a' < a such that

    S^{(δ,+)}_{nδ} ρ_init (1) ≥ a'    for any n such that δn < T_a    (8.8)
Proof. Let t ∈ δN with t < T_a; then ρ_t(1) ≥ a. From (6.14) and Theorem 2.1, for any r ∈ [0, 1] and t ∈ δN we have

    F(r; S^{(δ,−)}_t(ρ_0)) ≤ F(r; ρ_t) ≤ F(r; S^{(δ,+)}_t(ρ_0))    (8.9)

On the other hand, from (7.6), for any r ∈ [0, 1], t ≥ 0,

    F(r; S^{(δ,−)}_t(ρ_0)) − F(r; S^{(δ,+)}_t(ρ_0)) ≤ 4jδ    (8.10)

then, choosing r = 1 − √δ, we have

    ∫_{1−√δ}^1 ρ^{(δ,+)}_t(r) dr ≥ ∫_{1−√δ}^1 ρ_t(r) dr − 4jδ    (8.11)

From Lemma 7.4, for r ∈ [1 − √δ, 1],

    |ρ^{(δ,+)}_t(r) − ρ^{(δ,+)}_t(1)| ≤ c max{ |1 − r|^{1/3}, √δ } ≤ c δ^{1/6}

hence

    ∫_{1−√δ}^1 ρ^{(δ,+)}_t(r) dr ≤ (ρ^{(δ,+)}_t(1) + c δ^{1/6}) √δ    (8.12)

Then, combining (8.11) and (8.12),

    ρ^{(δ,+)}_t(1) + c δ^{1/6} ≥ (1/√δ) ∫_{1−√δ}^1 ρ^{(δ,+)}_t(r) dr ≥ (1/√δ) ∫_{1−√δ}^1 ρ_t(r) dr − 4j √δ    (8.13)

thus

    ρ^{(δ,+)}_t(1) ≥ (1/√δ) ∫_{1−√δ}^1 ρ_t(r) dr − c' δ^{1/6}    (8.14)

Then, from the space continuity of ρ_t obtained in Theorem 2.1, for any a > 0 there exists δ > 0 small enough such that, for |r − 1| ≤ √δ,

    ρ_t(r) ≥ ρ_t(1) − a/2 ≥ a/2

where the last inequality holds for all t < T_a. Then the statement of the Lemma follows from (8.14) with a' = a/2 − c' δ^{1/6}, which is positive for δ small enough.
Lemma 8.4. For any a > 0 there is C_a > 0 such that for any t = nδ ∈ δN, t < T_a,

    R_δ(S^{(δ,+)}_{nδ}(ρ_init)) ≥ 1 − C_a δ    (8.15)

Proof. Fix C > 0; then from Lemma 7.4 we know that there is c > 0 so that for any r ∈ [1 − Cδ, 1], t ∈ δN,

    ρ^{(δ,+)}_t(r) ≥ ρ^{(δ,+)}_t(1) − c δ^{1/3}    (8.16)

then, from Lemma 8.3, for any a > 0 there is 0 < a' < a such that

    ∫_{1−Cδ}^1 ρ^{(δ,+)}_t(r) dr ≥ Cδ (a' − c δ^{1/3})    ∀ t < T_a    (8.17)

now it is sufficient to choose C = C_a > j / (a' − c δ^{1/3}), with δ small enough, to get

    ∫_{1−C_a δ}^1 ρ^{(δ,+)}_t(r) dr > jδ    ∀ t < T_a    (8.18)

which gives (8.15).
Proof of Theorem 2.4. We define the dynamics

    Ŝ^{(δ,+)}_{nδ}(u) := (G^{neum}_δ ∗ Q^δ) · · · (G^{neum}_δ ∗ Q^δ) u   (n factors)   = G^{neum}_δ ∗ Q^δ Ŝ^{(δ,+)}_{(n−1)δ}(u)    (8.19)

with

    Q^δ u = u + jδ D_0 − jδ D_1    (8.20)

then

    Ŝ^{(δ,+)}_{nδ}(u) = G^{neum}_{nδ} ∗ u + jδ Σ_{k=0}^{n−1} G_{kδ} ∗ D_0 − jδ Σ_{k=0}^{n−1} G_{kδ} ∗ D_1    (8.21)
hence Ŝ^{(δ,+)}_t(u) converges as δ → 0 to the dynamics defined by (2.20). It remains to prove that S^{(δ,+)}_{nδ}(u) − Ŝ^{(δ,+)}_{nδ}(u) converges weakly to zero for nδ < sup_a T_a.
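Before estimating this difference it may help to see concretely how the two removal mechanisms differ: Q^δ removes mass jδ exactly at the point 1, while K^δ removes the rightmost jδ of mass wherever it sits. The sketch below (our own discretization; the deltas D_0, D_1 become masses in the first and last cells) shows that on a profile of order one near r = 1 the mass removed by K^δ lives in a window of width of order jδ at the right endpoint, in the spirit of (8.15), and that the difference of the two maps has L1 norm at most 2jδ, as in (8.23) below.

```python
# Sketch (our discretization, not the paper's): the maps K^delta and Q^delta
# acting on a mass vector on a uniform grid of [0,1]; D_0, D_1 become masses in
# the first and last cells.
import numpy as np

def K_step(mass, jdelta):
    out = mass.copy()
    to_remove, i, leftmost = jdelta, len(out) - 1, len(out) - 1
    while to_remove > 1e-15 and i >= 0:
        take = min(out[i], to_remove)
        out[i] -= take
        to_remove -= take
        if take > 0:
            leftmost = i
        i -= 1
    out[0] += jdelta
    return out, leftmost

def Q_step(mass, jdelta):
    out = mass.copy()
    out[0] += jdelta        # + jdelta D_0
    out[-1] -= jdelta       # - jdelta D_1 (a signed measure if little mass is left there)
    return out

if __name__ == "__main__":
    N, j, delta = 1000, 1.0, 0.01
    dx = 1.0 / N
    rho = np.full(N, dx)                                  # density 1 on [0,1]
    Krho, leftmost = K_step(rho, j * delta)
    print("K removes mass on [%.3f, 1], a window of width ~ j*delta = %.3f"
          % (leftmost * dx, j * delta))
    print("||K rho - Q rho||_1 = %.4f  (bounded by 2*j*delta = %.3f)"
          % (np.abs(Krho - Q_step(rho, j * delta)).sum(), 2 * j * delta))
```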
From (8.19) and (2.14) we can write

    S^{(δ,+)}_{nδ}(u) − Ŝ^{(δ,+)}_{nδ}(u)
      = G^{neum}_δ ∗ [ K^δ S^{(δ,+)}_{(n−1)δ}(u) − Q^δ Ŝ^{(δ,+)}_{(n−1)δ}(u) ]
      = G^{neum}_δ ∗ (K^δ − Q^δ) S^{(δ,+)}_{(n−1)δ}(u) + G^{neum}_δ ∗ Q^δ [ S^{(δ,+)}_{(n−1)δ}(u) − Ŝ^{(δ,+)}_{(n−1)δ}(u) ]
      = G^{neum}_δ ∗ (K^δ − Q^δ) S^{(δ,+)}_{(n−1)δ}(u) + G^{neum}_δ ∗ Q^δ G^{neum}_δ ∗ [ K^δ S^{(δ,+)}_{(n−2)δ}(u) − Q^δ Ŝ^{(δ,+)}_{(n−2)δ}(u) ]
      = Σ_{k=1}^{n} G^{neum}_δ ∗ Q^δ G^{neum}_δ ∗ · · · ∗ Q^δ G^{neum}_δ ∗ (K^δ − Q^δ) S^{(δ,+)}_{(n−k)δ}(u)    (by iteration)    (8.22)

where G^{neum}_δ appears k times in the k-th term of the sum and

    (K^δ − Q^δ) v := jδ D_1 − 1_{[R_δ(v),1]} v    (8.23)
Then, in order to prove the convergence of (8.22) to 0, we prove that each term in the sum converges in distribution to 0 as δ → 0. This is true since, for any n such that nδ < sup_a T_a,

    (K^δ − Q^δ) S^{(δ,+)}_{nδ} u → 0    weakly as δ → 0    (8.24)

Let us see why. We first fix a > 0 arbitrarily small; then, from (8.15), there exists C_a > 0 so that

    | supp( 1_{[R_δ(S^{(δ,+)}_{nδ}u), 1]} S^{(δ,+)}_{nδ} u ) | ≤ C_a δ,    for any n : nδ ≤ T_a    (8.25)

Then for any test function φ and any n with nδ ≤ T_a,

    | (1/(jδ)) ∫_{R_δ(S^{(δ,+)}_{nδ}u)}^1 S^{(δ,+)}_{nδ} u(r) φ(r) dr − φ(1) |
      = | (1/(jδ)) ∫_{R_δ(S^{(δ,+)}_{nδ}u)}^1 S^{(δ,+)}_{nδ} u(r) (φ(r) − φ(1)) dr |
      ≤ sup_{r ∈ [R_δ(S^{(δ,+)}_{nδ}u), 1]} |φ(r) − φ(1)| ≤ sup_{|r−1| ≤ C_a δ} |φ(r) − φ(1)|    (8.26)

which vanishes as δ → 0 since φ is continuous. Hence, for any a > 0,

    lim_{δ→0} | (1/(jδ)) ∫_{R_δ(S^{(δ,+)}_{nδ}u)}^1 S^{(δ,+)}_{nδ} u(r) φ(r) dr − φ(1) | = 0    for nδ ≤ T_a    (8.27)

then (8.27) is certainly true as long as nδ ≤ sup_a T_a; this yields the convergence in distribution to equation (2.20) for any time t such that ρ_t(1) > 0. We know that the convergence of S^{(t2^{−n},+)}_t(ρ_init) to ρ_t as n → ∞ in the sense of the interfaces (see Theorem 2.3) implies the convergence in distribution to the same limit. This and the uniqueness of the weak limit uniquely characterize ρ_t as the function given by (2.20) for t such that ρ_t(1) > 0. The Theorem is thus proved.
Acknowledgments. We thank Pablo Ferrari and John Ockendon for many useful comments and discussions. The research has been partially supported by PRIN 2009 (prot. 2009TA2595-002). A. De Masi and E. Presutti acknowledge the kind hospitality of the Dipartimento di Matematica dell'Università di Modena.
References

[1] M. Ballerini, N. Cabibbo, R. Candelier, A. Cavagna, E. Cisbani, I. Giardina, V. Lecomte, A. Orlandi, G. Parisi, A. Procaccini, M. Viale, V. Zdravkovic, Interaction ruling animal collective behavior depends on topological rather than metric distance: Evidence from a field study. Proceedings of the National Academy of Sciences, 105(4), 1232–1237 (2008).

[2] G. Carinci, C. Giardinà, A. De Masi, E. Presutti, in preparation.

[3] G. Carinci, C. Giardinà, A. De Masi, E. Presutti, in preparation.

[4] G. Carinci, C. Giardinà, C. Giberti, F. Redig, Duality for stochastic models of transport. http://arxiv.org/abs/1212.3154 (2012).

[5] J. Crank, R.S. Gupta, A moving boundary problem arising from the diffusion of oxygen in absorbing tissue.

[6] A. De Masi, P.A. Ferrari, A remark on the hydrodynamics of the zero range process. Journal of Statistical Physics, 36, 81–87 (1984).

[7] A. De Masi, E. Presutti, Mathematical methods for hydrodynamic limits. Lecture Notes in Mathematics, 1501, Springer-Verlag (1991).

[8] A. De Masi, E. Presutti, D. Tsagkarogiannis, M.E. Vares, Current reservoirs in the simple exclusion process. Journal of Statistical Physics, 144, 1151–1170 (2011).

[9] A. De Masi, E. Presutti, D. Tsagkarogiannis, M.E. Vares, Truncated correlations in the stirring process with births and deaths. Electronic Journal of Probability, 17, 1–35 (2012).

[10] A. De Masi, E. Presutti, D. Tsagkarogiannis, Fourier law, phase transitions and the stationary Stefan problem. Archive for Rational Mechanics and Analysis, 201, 681–725 (2011).

[11] A. De Masi, P.A. Ferrari, E. Presutti, Symmetric simple exclusion process with free boundaries. http://arxiv.org/abs/1304.0701 (2013).

[12] P.A. Ferrari, A. Galves

[13] I. Karatzas, S.E. Shreve, Brownian motion and stochastic calculus (Vol. 113), Springer-Verlag (1991).

[14] J.R. Ockendon, The role of the Crank-Gupta model in the theory of free and moving boundary problems. Advances in Computational Mathematics, 6, 281–293 (1996).

[15] Piccoli