AN ISOMORPHISM THEOREM FOR RANDOM INTERLACEMENTS
Alain-Sol Sznitman∗
Abstract
We consider continuous-time random interlacements on a transient weighted
graph. We prove an identity in law relating the field of occupation times of random
interlacements at level u to the Gaussian free field on the weighted graph. This
identity is closely linked to the generalized second Ray-Knight theorem of [2], [4],
and uniquely determines the law of occupation times of random interlacements at
level u.
Departement Mathematik
ETH-Zentrum
CH-8092 Zürich
Switzerland
∗ This research was supported in part by the grant ERC-2009-AdG 245728-RWPERCRI
0 Introduction
In this note we consider continuous-time random interlacements on a transient weighted
graph E. We prove an identity in law, which relates the field of occupation times of
random interlacements at level u to the Gaussian free field on E. The identity can be
viewed as a kind of generalized second Ray-Knight theorem, see [2], [4], and characterizes
the law of the field of occupation times of random interlacements at level u.
We now describe our results and refer to Section 1 for details. We consider a countable,
locally finite, connected graph, with vertex set E, endowed with non-negative symmetric
weights cx,y = cy,x , x, y ∈ E, which are positive exactly when x, y are distinct and {x, y}
is an edge of the graph. We assume that the induced discrete-time random walk on E is
transient. Its transition probability is defined by
(0.1)   p_{x,y} = c_{x,y}/λ_x, where λ_x = Σ_{z∈E} c_{x,z}, for x, y ∈ E.
In essence, continuous-time random interlacements consist of a Poisson point process on
a certain space of doubly infinite E-valued trajectories marked by their duration at each
step, modulo time-shift. A non-negative parameter u plays the role of a multiplicative
factor of the intensity of this Poisson point process, which is defined on a suitable canonical
space (Ω, A, P). The field of occupation times of random interlacements at level u is then
defined for x ∈ E, u ≥ 0, ω ∈ Ω, by (see (1.8) for the precise expression)
(0.2)   L_{x,u}(ω) = λ_x^{−1} × the total duration spent at x by the trajectories modulo time-shift with label at most u in the cloud ω

(informally, the durations of the successive steps of a trajectory are described by independent exponential variables of parameter 1, but occupation times at x get rescaled by a factor λ_x^{−1}).
The Gaussian free field on E is the other ingredient of our isomorphism theorem. Its canonical law P^G on R^E is such that

(0.3)   under P^G, the canonical field ϕ_x, x ∈ E, is a centered Gaussian field with covariance E^{P^G}[ϕ_x ϕ_y] = g(x, y), for x, y ∈ E,
where g(·, ·) stands for the Green function attached to the walk on E, see (1.3). The main
result of this note is the next theorem:
Theorem 0.1. For each u ≥ 0,

(0.4)   (L_{x,u} + ½ ϕ_x²)_{x∈E} under P ⊗ P^G, has the same law as (½ (ϕ_x + √(2u))²)_{x∈E} under P^G.
This theorem provides for each u an identity in law very much in the spirit of the
so-called generalized second Ray-Knight theorems, see Theorem 1.1 of [2] or Theorem
8.2.2 of [4]. Remarkably, although we are in a transient set-up, (0.4) corresponds to the
recurrent case in the context of generalized Ray-Knight theorems. Let us underline that
(0.4) uniquely determines the law of (Lx,u )x∈E under P, as the consideration of Laplace
transforms readily shows. We also refer to Remark 3.1 for a variation of (0.4).
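As a quick consistency check on (0.4) (not part of the argument below), matching first moments on the two sides already pins down the mean occupation time: since E^{P^G}[ϕ_x²] = g(x, x) by (0.3), the identity forces

```latex
\mathbb{E}\big[L_{x,u}\big] + \tfrac{1}{2}\,g(x,x)
  \;=\; \mathbb{E}^{P^{G}}\Big[\tfrac{1}{2}\big(\varphi_x + \sqrt{2u}\,\big)^{2}\Big]
  \;=\; \tfrac{1}{2}\,g(x,x) + u ,
\qquad \text{hence} \qquad
\mathbb{E}\big[L_{x,u}\big] \;=\; u , \quad \text{for all } x \in E ,
```

in agreement with the first-order expansion in V of the Laplace transform (1.11) below.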
The proof of Theorem 0.1 involves an approximation argument of the law of (Lx,u )x∈E
stated in Theorem 2.1, which is of independent interest. This approximation has a similar
flavor to what appears at the end of Section 4.5 of [7], when giving a precise interpretation
of random interlacements as “loops going through infinity”, see also [3], p. 85. The
combination of Theorem 2.1 and the generalized second Ray-Knight theorem readily yields
Theorem 0.1. As an application of Theorem 0.1 we give a new proof of Theorem 5.1 of
[6] concerning the large u behavior of (Lx,u )x∈E , see Theorem 4.1.
We now explain how this note is organized.
In Section 1, we provide precise definitions and recall useful facts. Section 2 develops
the approximation procedure for (Lx,u )x∈E . We give two proofs of the main Theorem
2.1, and an extension appears in Remark 2.2. The short Section 3 contains the proof
of Theorem 0.1, and a variation of (0.4) in Remark 3.1. In Section 4, we present an
application to the study of the large u behavior of (Lx,u )x∈E , see Theorem 4.1.
1 Notation and useful results
In this section we provide additional notation and recall some definitions and useful facts
related to random walks, potential theory, and continuous-time interlacements.
We consider the spaces Ŵ_+ and Ŵ of infinite, and doubly infinite, E × (0, ∞)-valued sequences, such that the E-valued sequences form an infinite, respectively doubly-infinite, nearest-neighbor trajectory spending finite time in any finite subset of E, and such that the (0, ∞)-valued components have an infinite sum in the case of Ŵ_+, and infinite "forward" and "backward" sums, when restricted to positive and negative indices, in the case of Ŵ.
We write Z_n, σ_n, with n ≥ 0, or n ∈ Z, for the respective E- and (0, ∞)-valued coordinates on Ŵ_+ and Ŵ. We denote by P_x, x ∈ E, the law on Ŵ_+, endowed with its canonical σ-algebra, under which Z_n, n ≥ 0, is distributed as simple random walk starting at x, and σ_n, n ≥ 0, are i.i.d. exponential variables with parameter 1, independent from the Z_n, n ≥ 0. We denote by E_x the corresponding expectation. Further, when ρ is a measure on E, we write P_ρ for the measure Σ_{x∈E} ρ(x) P_x, and E_ρ for the corresponding expectation.
We denote by X_t, t ≥ 0, the continuous-time random walk on E, with constant jump rate 1, defined for t ≥ 0, ŵ ∈ Ŵ_+, by

(1.1)   X_t(ŵ) = Z_k(ŵ), when σ_0(ŵ) + · · · + σ_{k−1}(ŵ) ≤ t < σ_0(ŵ) + · · · + σ_k(ŵ)

(by convention the term bounding t from below vanishes when k = 0).
Given U ⊆ E, we write H_U = inf{t ≥ 0; X_t ∈ U}, H̃_U = inf{t > 0; X_t ∈ U, and for some s ∈ (0, t), X_s ≠ X_0}, and T_U = inf{t ≥ 0; X_t ∉ U}, for the entrance time in U, the hitting time of U, and the exit time from U. We denote by g_U(·, ·) the Green function of the walk killed when exiting U:
(1.2)   g_U(x, y) = (1/λ_y) E_x[ ∫_0^{T_U} 1{X_s = y} ds ], for x, y ∈ E.
The function gU (·, ·) is known to be symmetric and finite (due to the transience assumption
we have made). When U = E, no killing takes place (i.e. TU = ∞), and we simply write
(1.3)   g(x, y) = g_{U=E}(x, y), for x, y ∈ E,
for the Green function.
Given a finite subset K of U, the equilibrium measure and capacity of K relative to
U are defined by
(1.4)   e_{K,U}(x) = P_x[H̃_K > T_U] λ_x 1_K(x), for x ∈ E,

(1.5)   cap_U(K) = Σ_{x∈E} e_{K,U}(x).
When U = E, we simply drop U from the notation, and refer to eK and cap(K), as the
equilibrium measure and the capacity of K. Further, the probability to enter K before
exiting U can be expressed as
(1.6)   P_x[H_K < T_U] = Σ_{y∈E} g_U(x, y) e_{K,U}(y), for x ∈ E.
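Since (1.2), (1.4) and (1.6) only involve the walk killed when exiting U, they can be checked by plain linear algebra on a finite window; transience of the ambient graph plays no role in the verification. The following sketch is a hypothetical toy example (unit weights on the segment {0, ..., 10}, U = {1, ..., 9}, K = {4, 5}), not taken from the text:

```python
import numpy as np

# Toy check of (1.2), (1.4), (1.6): unit weights on {0,...,10}, U = {1,...,9},
# K = {4,5}.  All quantities involve the walk killed outside U only, so the
# finite window suffices.  (Hypothetical example, not from the text.)
U = list(range(1, 10))
K = [4, 5]
lam = 2.0                               # lambda_x = c_{x,x-1} + c_{x,x+1} = 2
idx = {x: i for i, x in enumerate(U)}
n = len(U)

# transition matrix restricted to U (jumps leaving U are killed)
P = np.zeros((n, n))
for x in U:
    for y in (x - 1, x + 1):
        if y in idx:
            P[idx[x], idx[y]] = 0.5

# (1.2): g_U(x,y) = E_x[# discrete visits to y before T_U] / lambda_y
gU = np.linalg.inv(np.eye(n) - P) / lam
assert np.allclose(gU, gU.T)            # symmetry of the Green function

# q(y) = P_y[T_U < H_K]: q = 0 on K, q = 1 outside U, harmonic on U \ K
M, rhs = np.eye(n), np.zeros(n)
for x in U:
    if x not in K:
        i = idx[x]
        M[i] = np.eye(n)[i] - P[i]
        rhs[i] = 0.5 * sum(1 for y in (x - 1, x + 1) if y not in idx)
q = np.linalg.solve(M, rhs)

# (1.4): e_{K,U}(x) = P_x[H-tilde_K > T_U] * lambda_x, supported on K
e = np.zeros(n)
for x in K:
    e[idx[x]] = lam * 0.5 * sum(q[idx[y]] for y in (x - 1, x + 1))

# f(x) = P_x[H_K < T_U]: f = 1 on K, f = 0 outside U, harmonic on U \ K
Mf, rf = np.eye(n), np.zeros(n)
for x in U:
    i = idx[x]
    if x in K:
        rf[i] = 1.0
    else:
        Mf[i] = np.eye(n)[i] - P[i]
f = np.linalg.solve(Mf, rf)

assert np.allclose(f, gU @ e)           # identity (1.6)
print("identity (1.6) verified; cap_U(K) =", e.sum())
```

Here the expected number of discrete-time visits to y before T_U is given by (I − P)^{−1}, and one unit-mean exponential holding time is spent per visit, which is exactly (1.2).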
We now turn to the description of continuous-time random interlacements on the transient weighted graph E. We write Ŵ* for the space Ŵ (introduced at the beginning of this section), modulo time-shift, i.e. Ŵ* = Ŵ/∼, where for ŵ, ŵ′ ∈ Ŵ, ŵ ∼ ŵ′ means that ŵ(·) = ŵ′(· + k) for some k ∈ Z. We denote by π*: Ŵ → Ŵ* the canonical map, and endow Ŵ* with the σ-algebra consisting of sets with inverse image under π* belonging to the canonical σ-algebra of Ŵ.
The continuous-time interlacement point process is a Poisson point process on the space Ŵ* × R_+. Its intensity measure has the form ν̂(dŵ*)du, where ν̂ is the σ-finite measure on Ŵ* such that for any finite subset K of E, the restriction of ν̂ to the subset of Ŵ* consisting of those ŵ* for which the E-valued trajectory modulo time-shift enters K, is equal to π* ◦ Q̂_K, the image of Q̂_K under π*, where Q̂_K is the finite measure on Ŵ specified by

(1.7)   i) Q̂_K(Z_0 = x) = e_K(x), for x ∈ E,

ii) when e_K(x) > 0, conditionally on Z_0 = x, (Z_n)_{n≥0}, (Z_{−n})_{n≥0}, (σ_n)_{n∈Z} are independent, respectively distributed as simple random walk starting at x, as simple random walk starting at x conditioned never to return to K, and as a doubly infinite sequence of i.i.d. exponential variables with parameter 1.
As in [6], the canonical continuous-time random interlacement point process is then constructed similarly to (1.16) of [5], or (2.10) of [8], on a space (Ω, A, P), with ω = Σ_{i≥0} δ_{(ŵ_i*, u_i)} denoting a generic element of Ω. A central object of interest in this note is the random field of occupation times of random interlacements at level u ≥ 0:

(1.8)   L_{x,u}(ω) = (1/λ_x) Σ_{i≥0} Σ_{n∈Z} σ_n(ŵ_i) 1{Z_n(ŵ_i) = x, u_i ≤ u}, for x ∈ E, ω ∈ Ω,

where ω = Σ_{i≥0} δ_{(ŵ_i*, u_i)} and π*(ŵ_i) = ŵ_i*, for each i ≥ 0.
The Laplace transform of (L_{x,u})_{x∈E} has been computed in [6]. More precisely, given a function f: E → R, such that Σ_{y∈E} g(x, y)|f(y)| < ∞, for x ∈ E, one sets

(1.9)   Gf(x) = Σ_{y∈E} g(x, y) f(y), for x ∈ E.

One knows from Theorem 2.1 and Remark 2.4 4) of [6], that when V: E → R_+ has finite support and

(1.10)   sup_{x∈E} GV(x) < 1,

one has the identity

(1.11)   E[exp{− Σ_{x∈E} V(x) L_{x,u}}] = exp{−u ⟨V, (I + GV)^{−1} 1_E⟩}, for u ≥ 0,

where the notation ⟨f, g⟩ stands for Σ_{x∈E} f(x) g(x), when f, g are functions on E such that the previous sum converges absolutely, and 1_E denotes the constant function identically equal to 1 on E.
2 An approximation scheme for random interlacements
In this section we develop an approximation scheme for (Lx,u )x∈E in terms of the fields of
local times of certain finite state space Markov chains. The main result is Theorem 2.1,
but Remark 2.2 states a by-product of the approximation scheme concerning the random
interlacement at level u. This has a similar flavor to Theorem 4.17 of [7], where one gives
one of several possible meanings to random interlacements viewed as “Markovian loops
going through infinity”, see also Le Jan [3], p. 85.
We consider a non-decreasing sequence U_n, n ≥ 1, of finite connected subsets of E, increasing to E, as well as some fixed point x_* not belonging to E. We introduce the sets E_n = U_n ∪ {x_*}, for n ≥ 1, and endow E_n with the weights c^n_{x,y}, x, y ∈ E_n, obtained by "collapsing U_n^c on x_*", that is, for any n ≥ 1, and x, y ∈ U_n, we set

(2.1)   c^n_{x,y} = c_{x,y},   c^n_{x_*,y} = c^n_{y,x_*} = Σ_{z∈E\U_n} c_{z,y},

and otherwise set c^n_{x,y} = 0 (i.e. c^n_{x_*,x_*} = 0). We also write

(2.2)   λ^n_x = Σ_{y∈E_n} c^n_{x,y}, for x ∈ E_n (in particular λ^n_x = λ_x, when x ∈ U_n).
We tacitly view Un as a subset of both E and En . We consider the canonical simple
random walk in continuous time on En , attached to the weights cnx,y , x, y ∈ En , with jump
rate equal to 1. We write Xtn , t ≥ 0, for its canonical process, Pxn for its canonical law
starting from x ∈ En , and Exn for the corresponding expectation.
The local time of this Markov chain is defined by

(2.3)   ℓ^{n,x}_t = (1/λ^n_x) ∫_0^t 1{X^n_s = x} ds, for x ∈ E_n and t ≥ 0.
The function t ≥ 0 → ℓ^{n,x}_t ≥ 0 is continuous, non-decreasing, starts at 0, and P^n_y-a.s. tends to infinity, as t goes to infinity (the walk on E_n is irreducible and recurrent). By convention, when x ∈ E\U_n, we set ℓ^{n,x}_t = 0, for all t ≥ 0. We introduce the right-continuous inverse of ℓ^{n,x_*}_·:

(2.4)   τ^n_u = inf{t ≥ 0; ℓ^{n,x_*}_t > u}, for any u ≥ 0.
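To make the collapsing construction (2.1)-(2.4) concrete, here is a small Monte Carlo sketch; it is not from the text, and the ambient graph E = Z with unit weights and U_n = {0, ..., 6} is a hypothetical choice (at fixed n none of the finite-volume objects require transience). The rate-1 chain is run from x_* until τ^n_u, accumulating the normalized local times ℓ^{n,x}:

```python
import numpy as np

# Monte Carlo sketch of (2.1)-(2.4) on the hypothetical example E = Z with
# unit weights and U_n = {0,...,6}; the complement of U_n is collapsed onto
# x_*, which receives weight 1 towards each boundary site 0 and 6.
rng = np.random.default_rng(0)
Un = list(range(7))
star = "*"
nbrs = {x: [] for x in Un + [star]}
for x in range(6):
    nbrs[x].append(x + 1)
    nbrs[x + 1].append(x)
nbrs[star] = [0, 6]
nbrs[0].append(star)
nbrs[6].append(star)
lam = {x: float(len(nbrs[x])) for x in nbrs}   # lambda^n_x (all weights = 1)

def local_times(u):
    """Run the rate-1 chain from x_* until tau^n_u; return ell^{n,.}."""
    ell = {x: 0.0 for x in nbrs}
    x = star
    while True:
        dt = rng.exponential(1.0)              # Exp(1) holding time
        if x == star and ell[star] + dt / lam[star] > u:
            ell[star] = u                      # tau^n_u falls in this stay
            return ell
        ell[x] += dt / lam[x]
        x = nbrs[x][rng.integers(len(nbrs[x]))]  # uniform jump: unit weights

u, trials = 1.0, 4000
mean3 = np.mean([local_times(u)[3] for _ in range(trials)])
print(f"empirical mean of ell^(n,3) at tau^n_u: {mean3:.3f} (exact value: {u})")
```

By the finite-volume Ray-Knight identity (3.1) of Section 3, E^n_{x_*}[ℓ^{n,x}_{τ^n_u}] = u exactly for every x ∈ U_n, so the empirical mean should be close to u = 1.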
We are now ready for the main result of this section. We tacitly endow R^E with the product topology, and convergence in distribution, as stated below (and in the sequel), corresponds to convergence in law of all finite dimensional marginals.

Theorem 2.1. (u ≥ 0)

(2.5)   (ℓ^{n,x}_{τ^n_u})_{x∈E} under P^n_{x_*} converges in distribution to (L_{x,u})_{x∈E} under P.
Proof. We give two proofs.

First proof: We denote by T the set of piecewise-constant, right-continuous, E ∪ {x_*}-valued trajectories, which at a finite time reach x_*, and from that time onwards remain equal to x_*. We endow T with its canonical σ-algebra.
Under P^n_{x_*}, one has almost surely two infinite sequences R_ℓ, ℓ ≥ 1, and D_ℓ, ℓ ≥ 1,

(2.6)   R_1 = 0 < D_1 < R_2 < · · · < R_ℓ < D_ℓ < · · ·

of successive returns R_ℓ of X^n_· to x_*, and departures D_ℓ from x_*, which tend to infinity. One introduces the random point measure on T

(2.7)   Γ^n_u = Σ_{ℓ≥1} 1{D_ℓ < τ^n_u} δ_{(X^n_{D_ℓ+·})_{0≤·≤R_{ℓ+1}−D_ℓ}}, u ≥ 0,
which collects the successive excursions of X^n_· (out of x_* until first return to x_*) that start before τ^n_u. By classical Markov chain excursion theory we know that

(2.8)   Γ^n_u is a Poisson point measure on T with intensity measure γ^n_u(·) = u P^n_{κ_n}[(X^n_{s∧T_{U_n}})_{s≥0} ∈ ·] on T,
where T_{U_n} stands for the exit time of X^n_· from U_n and κ_n for the measure on U_n

(2.9)   κ_n(y) = λ^n_{x_*} · c^n_{x_*,y}/λ^n_{x_*} = c^n_{x_*,y} = Σ_{x∈E\U_n} c_{x,y} (by (2.1)), for y ∈ U_n.
When starting in U_n, the Markov chains X on E, and X^n on E_n, have the same evolution strictly before the exit time of U_n. Denoting by (X_·)_{0≤·<T_{U_n}} the random element of T, which equals X_s, for 0 ≤ s < T_{U_n}, and x_* for s ≥ T_{U_n}, we see that

(2.10)   γ^n_u(·) = u P_{κ_n}[(X_·)_{0≤·<T_{U_n}} ∈ ·], for all n ≥ 1, u ≥ 0.
Let K be a finite subset of E, and assume n large enough so that K ⊆ U_n. We introduce the point measure on T obtained by selecting the excursions in the support of Γ^n_u that enter K, and only keeping track of their trajectory after they enter K, that is

(2.11)   μ^n_{K,u} = θ_{H_K} ◦ (1{H_K < ∞} Γ^n_u),

where θ_t, t ≥ 0, stands for the canonical shift on T, and we use similar notation on T as below (1.1). By (2.8), (2.10) it follows that
(2.12)   μ^n_{K,u} is a Poisson point measure on T with intensity measure γ^n_{K,u}(·) = u P_{ρ^n_K}[(X_·)_{0≤·<T_{U_n}} ∈ ·] on T,

where ρ^n_K is the measure supported by K such that

(2.13)   ρ^n_K(x) = P_{κ_n}[H_K < T_{U_n}, X_{H_K} = x] = e_{K,U_n}(x), for x ∈ K,

where the last equality follows from (1.60) in Proposition 1.8 of [7]. Note that e_{K,U_n} and e_K are concentrated on K, and for x ∈ K,

(2.14)   e_{K,U_n}(x) = P_x[H̃_K > T_{U_n}] λ_x (by (1.4)) → P_x[H̃_K = ∞] λ_x = e_K(x), as n → ∞.
Consider V: E → R_+ supported in K, and Φ: T → R_+, the map

Φ(w) = Σ_{x∈E} V(x) (1/λ_x) ∫_0^∞ 1{w(s) = x} ds, for w ∈ T.
The measure μ^n_{K,u} contains in its support the pieces of the trajectory X^n_· up to time τ^n_u, where X^n_· visits K, see (2.11), and we have

(2.15)   E^n_{x_*}[exp{− Σ_{x∈E} V(x) ℓ^{n,x}_{τ^n_u}}] = E^n_{x_*}[exp{− ⟨μ^n_{K,u}, Φ⟩}]
         = exp{ ∫_T (e^{−Φ} − 1) dγ^n_{K,u} }   (by (2.12))
         = exp{ u E_{e_{K,U_n}}[ e^{− ∫_0^{T_{U_n}} (V/λ)(X_s) ds} − 1 ] }   (by (2.12), (2.13))
         → exp{ u E_{e_K}[ e^{− ∫_0^∞ (V/λ)(X_s) ds} − 1 ] } = E[exp{− Σ_{x∈E} V(x) L_{x,u}}], as n → ∞,

where we used (2.14) and the fact that T_{U_n} ↑ ∞, P_x-a.s., for x in E, for the limit in the last line, and a similar calculation as in (2.5) of [6] for the last equality. Since K and the function V: E → R_+, supported in K, are arbitrary, the claim (2.5) follows.
Second proof: We will now make direct use of (1.11). The argument is more computational, but also of interest. We consider K and V as above, as well as a positive number λ. We assume n large enough so that K ⊆ U_n. We further make a smallness assumption on the non-negative function V (supported in K):

(2.16)   sup_{x∈E} (GV)(x) + λ^{−1} Σ_{x∈K} V(x) < 1.
We define the operator G_n on R^{E_n} attached to the kernel g_n(·, ·) in a similar fashion to (1.9), where we use the notation

(2.17)   g_n(x, y) = g_{U_n}(x, y) + λ^{−1}, for x, y ∈ E_n,

and we have set g_{U_n}(x_*, ·) = g_{U_n}(·, x_*) = 0, by convention, to define g_{U_n}(·, ·) on E_n × E_n. Since g_{U_n}(·, ·) ≤ g(·, ·) on E × E, it follows from (2.16) that sup_{x∈E_n} (G_n V)(x) < 1, where we have set V(x_*) = 0, by convention, so that the operator I + G_n V is invertible.
We introduce the positive number

(2.18)   a_n = ∫_0^∞ λ e^{−λu} E^n_{x_*}[e^{− Σ_{x∈E} V(x) ℓ^{n,x}_{τ^n_u}}] du,

where we recall that ℓ^{n,x}_t = 0, when x ∈ E\U_n. Using (2.93), (2.41), (2.71) of [7], or by (8.44) and Remark 3.10.3 of Marcus-Rosen [4], we know that

(2.19)   a_n = (I + G_n V)^{−1} 1_{E_n}(x_*).
We then define the function h_n on E_n and the real number b_n:

(2.20)   h_n = (I + G_n V)^{−1} 1_{E_n} and b_n = Σ_{x∈K} V(x) h_n(x).

We let G*_{U_n} be the operator on R^{E_n} attached to the kernel g_{U_n}(·, ·) (on E_n × E_n), in a similar fashion to (1.9). By (2.17) and (2.20), we have

(2.21)   h_n + G*_{U_n} V h_n + λ^{−1} b_n 1_{E_n} = 1_{E_n}, so that h_n = (1 − b_n/λ)(I + G*_{U_n} V)^{−1} 1_{E_n},
noting that the above inverse is well defined by the same argument used below (2.17). By the second equality in (2.20) it follows that

(2.22)   b_n = (1 − b_n/λ) Σ_{x∈K} V(x)(I + G*_{U_n} V)^{−1} 1_{E_n}(x) = (1 − b_n/λ) ⟨V, (I + G_{U_n} V)^{−1} 1_E⟩,

where we refer to below (1.11) for notation, G_{U_n} is the operator on R^E attached to the kernel g_{U_n}(·, ·) on E × E, and the last equality follows by writing the Neumann series for (I + G*_{U_n} V)^{−1} and (I + G_{U_n} V)^{−1} (note that V ≥ 0 and (2.16) straightforwardly imply the convergence of these series in the respective operator norms induced by L^∞(E_n) and L^∞(E)).
We can now solve for b_n. Noting that a_n = h_n(x_*) = 1 − b_n/λ, by (2.21), we find

(2.23)   a_n = (1 + λ^{−1} ⟨V, (I + G_{U_n} V)^{−1} 1_E⟩)^{−1}.

Using the Neumann series for (I + G_{U_n} V)^{−1}, and applying dominated convergence together with the fact that g_{U_n}(·, ·) ↑ g(·, ·) on E × E, we see that

(2.24)   a_n → (1 + λ^{−1} ⟨V, (I + GV)^{−1} 1_E⟩)^{−1}, as n → ∞.
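The chain of identities (2.20)-(2.23) is exact linear algebra, independent of the probabilistic interpretation, so it can be checked numerically. The sketch below uses a hypothetical 6-point state space (x_* plus five points of U_n), an arbitrary symmetric nonnegative kernel for g_{U_n} with vanishing x_* row and column, and a small V ≥ 0:

```python
import numpy as np

# Numerical check of the algebra (2.17)-(2.23): x_* is index 0, U_n the
# remaining five indices.  g_{U_n} is an arbitrary symmetric nonnegative
# kernel with zero x_* row/column, V >= 0 is small with V(x_*) = 0.
rng = np.random.default_rng(1)
m, lam = 6, 3.0
A = rng.random((m, m))
gUn = (A + A.T) / 4.0
gUn[0, :] = 0.0
gUn[:, 0] = 0.0                          # convention g_{U_n}(x_*, .) = 0
V = np.zeros(m)
V[1:] = 0.1 * rng.random(m - 1)          # small V, supported in U_n

gn = gUn + 1.0 / lam                     # kernel g_n of (2.17)
one = np.ones(m)
# (G_n V) f(x) = sum_y g_n(x,y) V(y) f(y), i.e. the matrix gn * diag(V)
hn = np.linalg.solve(np.eye(m) + gn * V, one)   # h_n of (2.20)
an, bn = hn[0], float(V @ hn)            # a_n = h_n(x_*), b_n = <V, h_n>

c = float(V @ np.linalg.solve(np.eye(m) + gUn * V, one))
assert np.isclose(an, 1.0 - bn / lam)    # a_n = 1 - b_n/lambda, cf. (2.21)
assert np.isclose(an, 1.0 / (1.0 + c / lam))    # identity (2.23)
print("identity (2.23) verified")
```

Since the x_* row of g_n is identically λ^{−1}, the x_*-coordinate of the resolvent equation gives a_n = 1 − b_n/λ directly, and solving b_n = (1 − b_n/λ)⟨V, (I + G_{U_n}V)^{−1}1_E⟩ for b_n yields (2.23).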
Taking the identity (1.11) into account, we have shown that under (2.16),

(2.25)   lim_n ∫_0^∞ λ e^{−λu} E^n_{x_*}[e^{− Σ_{x∈E} V(x) ℓ^{n,x}_{τ^n_u}}] du = ∫_0^∞ λ e^{−λu} E[e^{− Σ_{x∈E} V(x) L_{x,u}}] du.
Note that when V: E → R_+ is supported in K and sup_{x∈E} GV(x) < 1, then (2.16) holds for λ large (depending on V). The expectation under the integral in the left-hand side of (2.25) is non-increasing in u, whereas the expectation under the integral in the right-hand side of (2.25) is continuous in u by (1.11). It then follows from [1], p. 193-194, that for V as above,

(2.26)   lim_n E^n_{x_*}[e^{− Σ_{x∈E} V(x) ℓ^{n,x}_{τ^n_u}}] = E[e^{− Σ_{x∈E} V(x) L_{x,u}}], for u ≥ 0.

This readily implies the tightness of the laws of (ℓ^{n,x}_{τ^n_u})_{x∈K} under P^n_{x_*}, and uniquely determines the Laplace transform of their possible limit points, see Theorem 6.6.5 of [1]. Letting K vary, the claim (2.5) follows.
Remark 2.2. The approximation scheme introduced in this section can also be used to approximate the random interlacement at level u, as we now explain. We let I^n_u stand for the trace left on U_n by the walk on E_n up to time τ^n_u:

(2.27)   I^n_u = {x ∈ U_n; ℓ^{n,x}_{τ^n_u} > 0}.

By (2.12), (2.14), it follows that for any finite subset K of E and u ≥ 0,

(2.28)   P^n_{x_*}[I^n_u ∩ K = ∅] = P^n_{x_*}[μ^n_{K,u} = 0] = e^{−u cap_{U_n}(K)} → e^{−u cap(K)} = P[I^u ∩ K = ∅], as n → ∞, by (1.4), (1.5),

where I^u stands for the random interlacement at level u, that is, the trace on E of doubly infinite trajectories modulo time-shift in the Poisson cloud ω with label at most u. By an inclusion-exclusion argument, see for instance Remark 4.15 of [7] or Remark 2.2 of [5], it follows that, as n → ∞,

(2.29)   I^n_u under P^n_{x_*} converges in distribution to I^u under P, for any u ≥ 0,

where the above distributions are viewed as laws on {0, 1}^E endowed with the product topology.
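The first two equalities in (2.28) hold exactly at each fixed n, which a simulation can illustrate. In the hypothetical example E = Z with unit weights, U_n = {0, ..., 6} collapsed onto x_*, and K = {3}, a gambler's-ruin computation gives e_{K,U_n}(3) = λ_3 P_3[H̃_K > T_{U_n}] = 2 · (1/4) = 1/2, hence cap_{U_n}(K) = 1/2:

```python
import numpy as np

# Monte Carlo illustration of P^n_{x_*}[I^n_u intersect K = empty]
# = exp(-u cap_{U_n}(K)), cf. (2.28), on the hypothetical example E = Z,
# U_n = {0,...,6} collapsed onto x_*, K = {3}, where cap_{U_n}(K) = 1/2.
rng = np.random.default_rng(2)
K = {3}
nbrs = {x: [] for x in list(range(7)) + ["*"]}
for x in range(6):
    nbrs[x].append(x + 1)
    nbrs[x + 1].append(x)
nbrs["*"] = [0, 6]
nbrs[0].append("*")
nbrs[6].append("*")
lam = {x: float(len(nbrs[x])) for x in nbrs}

def avoids_K(u):
    """Does the chain started at x_* avoid K strictly before tau^n_u?"""
    ell_star, x = 0.0, "*"
    while True:
        if x in K:
            return False                 # K entered before tau^n_u
        dt = rng.exponential(1.0)
        if x == "*":
            ell_star += dt / lam["*"]
            if ell_star > u:
                return True              # tau^n_u reached without touching K
        x = nbrs[x][rng.integers(len(nbrs[x]))]

u, trials = 1.0, 4000
p_hat = np.mean([avoids_K(u) for _ in range(trials)])
print(f"empirical: {p_hat:.3f}, exp(-u cap) = {np.exp(-0.5 * u):.3f}")
```

The empirical avoidance frequency should match e^{−u/2} ≈ 0.607 for u = 1 up to Monte Carlo error.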
3 Proof of the isomorphism theorem
In this short section we combine Theorem 2.1 and the generalized second Ray-Knight
theorem of [2] to prove Theorem 0.1. We also state a variation of (0.4) in Remark 3.1.
Proof of Theorem 0.1: For U ⊆ E we denote by P^{G,U} the law on R^E of the centered Gaussian field with covariance E^{G,U}[ϕ_x ϕ_y] = g_U(x, y), x, y ∈ E (in particular ϕ_x = 0, P^{G,U}-a.s., when x ∈ E\U). It follows from the generalized second Ray-Knight theorem, see Theorem 8.2.2 of [4], or Theorem 2.17 of [7], that for n ≥ 1, u ≥ 0, in the notation of Section 2,

(3.1)   (ℓ^{n,x}_{τ^n_u} + ½ ϕ_x²)_{x∈U_n} under P^n_{x_*} ⊗ P^{G,U_n}, has the same law as (½ (ϕ_x + √(2u))²)_{x∈U_n} under P^{G,U_n}.

Since g_{U_n}(·, ·) ↑ g(·, ·), we see that P^{G,U_n} converges weakly to P^G (looking for instance at characteristic functions of finite dimensional marginals). Taking Theorem 2.1 into account we thus see letting n tend to infinity that

(3.2)   (L_{x,u} + ½ ϕ_x²)_{x∈E} under P ⊗ P^G, has the same law as (½ (ϕ_x + √(2u))²)_{x∈E} under P^G,

and Theorem 0.1 is proved.
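The finite-volume identity (3.1) can also be tested numerically at fixed n, by sampling the Gaussian field with covariance g_{U_n} (computed by matrix inversion) and the local times by running the chain, and comparing Laplace functionals of the two sides. The example below is hypothetical (E = Z with unit weights, U_n = {0, ..., 6} collapsed onto x_*, as in the sketches of Section 2):

```python
import numpy as np

# Monte Carlo comparison of the two sides of (3.1) through the Laplace
# functional E[exp(-sum_x V(x) F_x)], on the hypothetical example E = Z,
# unit weights, U_n = {0,...,6} collapsed onto x_*.
rng = np.random.default_rng(3)
Un = list(range(7))
nbrs = {x: [] for x in Un + ["*"]}
for x in range(6):
    nbrs[x].append(x + 1)
    nbrs[x + 1].append(x)
nbrs["*"] = [0, 6]
nbrs[0].append("*")
nbrs[6].append("*")
lam = {x: float(len(nbrs[x])) for x in nbrs}

# Green function g_{U_n} of the walk killed at x_*, via matrix inversion
P = np.zeros((7, 7))
for x in Un:
    for y in nbrs[x]:
        if y != "*":
            P[x, y] = 1.0 / lam[x]
gUn = np.linalg.inv(np.eye(7) - P) / 2.0        # lambda_y = 2 on U_n
chol = np.linalg.cholesky(gUn)                  # to sample N(0, g_{U_n})

def local_times(u):
    """(ell^{n,x}_{tau^n_u})_{x in U_n} for the rate-1 chain from x_*."""
    ell = {x: 0.0 for x in nbrs}
    x = "*"
    while True:
        dt = rng.exponential(1.0)
        if x == "*" and ell["*"] + dt / lam["*"] > u:
            return np.array([ell[y] for y in Un])
        ell[x] += dt / lam[x]
        x = nbrs[x][rng.integers(len(nbrs[x]))]

u, trials = 1.0, 4000
V = np.zeros(7)
V[2] = V[3] = 0.2                               # small test function
lhs = np.mean([np.exp(-V @ (local_times(u)
                            + 0.5 * (chol @ rng.standard_normal(7)) ** 2))
               for _ in range(trials)])
rhs = np.mean([np.exp(-V @ (0.5 * (chol @ rng.standard_normal(7)
                                   + np.sqrt(2 * u)) ** 2))
               for _ in range(trials)])
print(f"LHS of (3.1): {lhs:.3f}, RHS of (3.1): {rhs:.3f}")
```

The two empirical averages estimate the same quantity by (3.1), so they should agree within Monte Carlo error.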
Remark 3.1. Let us mention a variation on (0.4) of Theorem 0.1. By Theorem 1.1 of [2], one knows that for u ≥ 0, a ∈ R, n ≥ 1,

(3.3)   (ℓ^{n,x}_{τ^n_u} + ½ (ϕ_x + a)²)_{x∈U_n} under P^n_{x_*} ⊗ P^{G,U_n}, has the same law as (½ (ϕ_x + √(2u + a²))²)_{x∈U_n} under P^{G,U_n}.

Letting n tend to infinity, the same argument as above shows that for u ≥ 0, and a ∈ R,

(3.4)   (L_{x,u} + ½ (ϕ_x + a)²)_{x∈E} under P ⊗ P^G, has the same law as (½ (ϕ_x + √(2u + a²))²)_{x∈E} under P^G.
4 An application
We illustrate the use of Theorem 0.1 and show how one can study the large u asymptotics
of (Lx,u )x∈E and in particular recover Theorem 5.1 of [6], see also Remark 5.2 of [6]. We
denote by x0 some fixed point of E.
Theorem 4.1. As u → ∞,

(4.1)   (L_{x,u}/u)_{x∈E} converges in distribution to the constant field equal to 1,

(4.2)   ((L_{x,u} − u)/√(2u))_{x∈E} converges in distribution to (ϕ_x)_{x∈E} under P^G.

In particular, as u → ∞,

(4.3)   ((L_{x,u} − L_{x_0,u})/√(2u))_{x∈E} converges in distribution to (ϕ_x − ϕ_{x_0})_{x∈E} under P^G.
Proof. We first prove (4.1). To this end we note that P^G-a.s., for x ∈ E,

(4.4)   (1/(2u)) ϕ_x² → 0 and (1/(2u))(ϕ_x + √(2u))² → 1, as u → ∞.

Thus Theorem 0.1 implies that L_{x,u}/u converges in distribution to the constant 1 as u tends to infinity, and (4.1) follows.
We then observe that (4.3) is a direct consequence of (4.2), and turn to the proof of (4.2). Note that by Theorem 0.1

(4.5)   ((L_{x,u} − u)/√(2u) + (1/(2√(2u))) ϕ_x²)_{x∈E} under P ⊗ P^G, has the same law as ((1/(2√(2u)))[(ϕ_x + √(2u))² − 2u])_{x∈E} under P^G.

Note also that for each x ∈ E, P^G-a.s., as u → ∞,

(4.6)   (1/(2√(2u))) ϕ_x² → 0, and

(4.7)   (1/(2√(2u)))[(ϕ_x + √(2u))² − 2u] = (1/(2√(2u))) ϕ_x² + ϕ_x → ϕ_x.

Looking at the characteristic function of finite dimensional marginals of the fields in the first and second line of (4.5), we readily obtain (4.2).
Remark 4.2. In view of the above illustration of the use of Theorem 0.1, one can naturally wonder about the nature of its scope as a transfer mechanism between random
interlacements and the Gaussian free field.
References
[1] K.L. Chung. A course in probability theory. Second edition. Academic Press, San
Diego, 1974.
[2] N. Eisenbaum, H. Kaspi, M.B. Marcus, J. Rosen and Z. Shi. A Ray-Knight theorem
for symmetric Markov processes. Ann. Probab., 28(4):1781–1796, 2000.
[3] Y. Le Jan. Markov paths, loops and fields, volume 2026 of Lecture Notes in Math.
Ecole d’Eté de Probabilités de St. Flour, Springer, Berlin, 2011.
[4] M.B. Marcus and J. Rosen. Markov processes, Gaussian processes, and local times.
Cambridge University Press, 2006.
[5] A.S. Sznitman. Vacant set of random interlacements and percolation. Ann. Math.,
171:2039–2087, 2010.
[6] A.S. Sznitman. Random interlacements and the Gaussian free field. To appear in
Ann. Probab., also available at arXiv:1102.2077.
[7] A.S. Sznitman. Topics in occupation times and Gaussian free fields. Notes of the course "Special topics in probability", Spring 2011, to appear in Zurich Lectures in Advanced Mathematics, EMS, Zurich, also available at http://www.math.ethz.ch/u/sznitman/preprints.
[8] A. Teixeira. Interlacement percolation on transient weighted graphs. Electron. J.
Probab., 14:1604–1627, 2009.