
Some sufficient condition for the ergodicity of the Lévy transform
Vilmos Prokaj
Eötvös Loránd University, Hungary
Probability, Control and Finance
A Conference in Honor of Ioannis Karatzas
2012, New York
Lévy transformation of the path space

- β is a Brownian motion,
  Tβ = ∫₀^· sign(β_s) dβ_s = |β| − L⁰(β).
- T is a transformation of the path space.
- T preserves the Wiener measure.
- Is T ergodic?
- A deep result of Marc Malric claims that the Lévy transform is topologically recurrent, i.e., on an almost sure event
  {T^n β : n ≥ 0} ∩ G ≠ ∅ for all nonempty open G ⊂ C[0, ∞).
- We use only a weaker form, also due to Marc Malric, the density of zeros of iterated paths, i.e.:
  ⋃_{n=0}^∞ {t > 0 : (T^n β)_t = 0} is dense in [0, ∞).
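The transform can be played with on a discrete grid (our own minimal illustration, not part of the talk: β is replaced by a scaled simple random walk, and the sign is taken at the left endpoint of each step, with the convention sign(0) = 0, so steps starting from zero are simply dropped):

```python
import numpy as np

def levy_transform(path):
    """Discrete analogue of (T path)_t = int_0^t sign(path_s) d path_s.

    Signs are taken at the left endpoint of each increment.
    """
    signs = np.sign(path[:-1])          # sign of the path before each step
    increments = np.diff(path)
    return np.concatenate([[0.0], np.cumsum(signs * increments)])

rng = np.random.default_rng(0)
n = 10_000
beta = np.concatenate([[0.0], np.cumsum(rng.choice([-1.0, 1.0], size=n))]) / np.sqrt(n)

t_beta = levy_transform(beta)
# The transform starts at 0 and never enlarges an increment
# (with sign(0) = 0 some increments are dropped, hence "<=" and not "=").
assert t_beta[0] == 0.0
assert np.all(np.abs(np.diff(t_beta)) <= np.abs(np.diff(beta)) + 1e-12)
```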
Ergodicity and strong mixing (reminder)

T : Ω → Ω, P ∘ T⁻¹ = P.

- T is ergodic, if
  (1/N) Σ_{n=0}^{N−1} P(A ∩ T^{−n}B) → P(A) P(B),  for all A, B;
- or, (1/N) Σ_{n=0}^{N−1} X ∘ T^n → E(X) in L¹, for each r.v. X ∈ L¹;
- or, the invariant σ-field is trivial.
- T is strongly mixing if P(A ∩ T^{−n}B) → P(A) P(B), for all A, B.
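The gap between the two notions can be seen on a toy example (our own illustration, not from the talk): an irrational rotation is ergodic but not strongly mixing, so the Cesàro averages of P(A ∩ T^{−n}B) converge while the individual terms keep oscillating.

```python
import numpy as np

# Irrational rotation T x = x + alpha (mod 1) on [0,1) with Lebesgue measure:
# ergodic but NOT strongly mixing.  Take A = B = [0, 1/2).
alpha = (np.sqrt(5) - 1) / 2
N = 100_000
n = np.arange(N)
c = (-n * alpha) % 1.0                      # T^{-n}B = [c, c + 1/2) mod 1
overlap = 0.5 - np.minimum(c, 1.0 - c)      # Lebesgue measure of A ∩ T^{-n}B

cesaro = overlap.mean()                     # Cesàro average of P(A ∩ T^{-n}B)
assert abs(cesaro - 0.25) < 0.01            # -> P(A)P(B): ergodicity
assert np.abs(overlap[-1000:] - 0.25).max() > 0.2   # terms still oscillate: no mixing
```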
Ergodicity and weak convergence

In our case Ω = C[0, ∞) is a Polish space (complete, separable, metric space).

Theorem. Let Ω be Polish and T a measure preserving transformation of (Ω, B(Ω), P). Then
- T is ergodic iff (1/n) Σ_{k=0}^{n−1} P ∘ (T⁰, T^k)⁻¹ → P ⊗ P weakly as n → ∞;
- T is strongly mixing iff P ∘ (T⁰, T^n)⁻¹ → P ⊗ P weakly as n → ∞.

Note that both families of measures are tight:
  { (1/n) Σ_{k=0}^{n−1} P ∘ (T⁰, T^k)⁻¹ : n ≥ 0 }  and  { P ∘ (T⁰, T^n)⁻¹ : n ≥ 0 }.
If C ⊂ Ω is compact with P(Ω ∖ C) < ε, then
  P((T⁰, T^k) ∉ C × C) ≤ P(T⁰ ∉ C) + P(T^k ∉ C) < 2ε.
Convergence of finite dimensional marginals (f.d. convergence) is enough

Some notations:
- β is the canonical process on Ω = C[0, ∞),
- h : [0, ∞) × C[0, ∞) → {−1, +1} is progressive, |h| = 1 dt ⊗ dP-a.e.
- T : Ω → Ω, Tβ = ∫₀^· h(s, β) dβ_s  (e.g. h(s, β) = sign(β_s)).
- β^(n) = T^n β is the n-th iterated path.
- h^(0) = 1, h^(n)_s = ∏_{k=0}^{n−1} h(s, β^(k)) for n > 0, so β^(n)_t = ∫₀^t h^(n)_s dβ_s.

Then
- The distribution of (β, β^(n)) is P ∘ (T⁰, T^n)⁻¹.
- Let κ_n be uniform on {0, 1, …, n − 1} and independent of β. The law of (β, β^(κ_n)) is (1/n) Σ_{k=0}^{n−1} P ∘ (T⁰, T^k)⁻¹.
- T is strongly mixing iff (β, β^(n)) →^{f.d.} BM-2 (planar Brownian motion).
- Similarly, T is ergodic iff (β, β^(κ_n)) →^{f.d.} BM-2.
- Reason: tightness + f.d. convergence = convergence in law.
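The product representation h^(n)_s = ∏_{k<n} h(s, β^(k)) can be checked exactly on a discrete path (a sketch with h(s, β) = sign(β_s) taken at left endpoints; sign(0) = 0 is our own convention, so this only approximates the continuous setting):

```python
import numpy as np

def levy_step(path):
    """One application of T: cumulative sum of sign(path) times increments."""
    d = np.sign(path[:-1]) * np.diff(path)
    return np.concatenate([[0.0], np.cumsum(d)])

rng = np.random.default_rng(1)
m = 5_000
beta = np.concatenate([[0.0], np.cumsum(rng.standard_normal(m))]) / np.sqrt(m)

# Direct iteration: beta^(n) = T^n beta.
iterates = [beta]
for _ in range(4):
    iterates.append(levy_step(iterates[-1]))

# Product representation: h^(n)_s = prod_{k<n} sign(beta^(k)_s).
h = np.ones(m)
for k in range(4):
    h *= np.sign(iterates[k][:-1])
via_product = np.concatenate([[0.0], np.cumsum(h * np.diff(beta))])

# beta^(4)_t = int_0^t h^(4)_s d beta_s holds exactly in the discrete picture.
assert np.allclose(via_product, iterates[4])
```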
Characteristic function

- Fix t₁, …, t_r ≥ 0 and α = (a₁, …, a_r, b₁, …, b_r) ∈ R^{2r}.
- The characteristic function of (β_{t₁}, …, β_{t_r}, β^(n)_{t₁}, …, β^(n)_{t_r}) at α is
  φ_n = E exp( i (∫ f(s) dβ_s + ∫ g(s) dβ^(n)_s) ) = E exp( i ∫ (f(s) + g(s) h^(n)_s) dβ_s ),
  where f = Σ_{j=1}^r a_j 1_{[0,t_j]},  g = Σ_{j=1}^r b_j 1_{[0,t_j]}.
- The finite dimensional marginals have the right limit, if for all choices r ≥ 1, α ∈ R^{2r}, t₁, …, t_r ≥ 0:
  φ_n → exp( −½ ∫ (f² + g²) )  for strong mixing,
  (1/n) Σ_{k=0}^{n−1} φ_k → exp( −½ ∫ (f² + g²) )  for ergodicity.
Estimate for |φ_n − φ|, where φ = e^{−½ ∫ (f² + g²)}

- M_t = ∫₀^t (f(s) + h^(n)_s g(s)) dβ_s.
- M is a closed martingale and so is Z = exp(iM + ½⟨M⟩).
- Z₀ = 1 ⟹ E(Z_∞) = 1.
- ⟨M⟩_∞ = ∫₀^∞ (f²(s) + g²(s)) ds + 2 ∫₀^∞ h^(n)_s f(s) g(s) ds.
- φ = φ E(Z_∞) = E exp( i ∫₀^∞ (f dβ + g dβ^(n)) + ∫₀^∞ f g h^(n) ).
- Recall that f g = Σ_j a_j b_j 1_{[0,t_j]}. Then with X_n(t) = ∫₀^t h^(n)_s ds,
  |φ_n − φ| ≤ E| 1 − e^{∫₀^∞ f g h^(n)} | ≤ e^{∫ |f g|} E| ∫₀^∞ f g h^(n) | ≤ C Σ_{j=1}^r E|X_n(t_j)|,
  where C = C(f, g) = C(α, t₁, …, t_r) does not depend on n.
X_n(t) = ∫₀^t h^(n)_s ds → 0 in probability for all t ≥ 0 would be enough

Theorem.
1. If X_n(t) → 0 in probability for all t ≥ 0, then T is strongly mixing.
2. T is ergodic if and only if (1/n) Σ_{k=0}^{n−1} X_k²(t) → 0 in probability for all t ≥ 0.

Strong mixing:
- The only missing part is the convergence of finite dimensional marginals.
- If X_n(t) → 0 in probability then E|X_n(t)| → 0, since |X_n(t)| ≤ t.
- Then |φ_n − φ| ≤ C Σ_j E|X_n(t_j)| → 0 ⟹ (β, β^(n)) →^{f.d.} BM-2.

Remember that:
- (β, β^(n)) →^{f.d.} BM-2 + tightness gives: (β, β^(n)) →^D BM-2;
- (β, β^(n)) →^D BM-2 ⇔ T strongly mixing.
X_n(t) = ∫₀^t h^(n)_s ds → 0 in probability for all t ≥ 0 would be enough

Ergodicity, ⇐:
- By Cauchy–Schwarz and |X_k(t)| ≤ t,
  E( (1/n) Σ_{k=0}^{n−1} |X_k(t)| ) ≤ E^{1/2}( (1/n) Σ_{k=0}^{n−1} X_k²(t) ) → 0.
- Then (1/n) Σ_{k=0}^{n−1} |φ_k − φ| ≤ C Σ_j E( (1/n) Σ_{k=0}^{n−1} |X_k(t_j)| ) → 0
  ⟹ (β, β^(κ_n)) →^{f.d.} BM-2.

Remember that:
- κ_n is uniform on {0, …, n − 1} and independent of β;
- (β, β^(κ_n)) →^{f.d.} BM-2 + tightness gives: (β, β^(κ_n)) →^D BM-2;
- (β, β^(κ_n)) →^D BM-2 ⇔ T ergodic.
X_n(t) = ∫₀^t h^(n)_s ds → 0 in probability for all t ≥ 0 would be enough

Ergodicity, ⇒ (outline of the proof):
- Fix 0 < s < t. Then the following limits exist a.s. and in L²:
  Z_u = lim_{n→∞} (1/n) Σ_{k=0}^{n−1} h^(k)_s h^(k)_u  for s ≤ u ≤ t,
  Z = lim_{n→∞} (1/n) Σ_{k=0}^{n−1} h^(k)_s (β^(k)_t − β^(k)_s),
  moreover |Z| and |Z_u| are invariant for T, hence they are non-random.
- Then h^(k)_s (β^(k)_t − β^(k)_s) = ∫_s^t h^(k)_s h^(k)_u dβ_u and
  Z = ∫_s^t Z_u dβ_u = ∫_s^t |Z_u| dβ̃_u,  where β̃ = ∫₀^· sign(Z_u) dβ_u.
- Z ~ N(0, σ²) since |Z_u| is non-random. But |Z| is also non-random.
  ⟹ Z = 0 ⟹ Z_u = 0 ⟹ (1/n) Σ_{k=0}^{n−1} X_k²(t) → 0 in probability.
A variant of the mean ergodic theorem

- T is a measure preserving transformation of Ω,
- ε₀ is an r.v. taking values in {−1, +1}, ε_k = ε₀ ∘ T^k.
- For ξ ∈ L²(Ω), Uξ = (ξ ∘ T) ε₀ is an isometry.
- von Neumann's mean ergodic theorem says that
  (1/n) Σ_{k=0}^{n−1} U^k ξ → Pξ  in L²,
  where P is the projection onto {X ∈ L² : (X ∘ T) ε₀ = UX = X}.
- |Pξ| is invariant under T.
- What is U^k ξ?
  Uξ = (ξ ∘ T) ε₀,  U²ξ = (ξ ∘ T²) ε₁ ε₀,  …,  U^k ξ = (ξ ∘ T^k) ∏_{j=0}^{k−1} ε_j.
- Almost sure convergence also holds by the subadditive ergodic theorem.
Lévy transformation

- The Lévy transformation T is scaling invariant, that is, if for x > 0, Θ_x : C[0, ∞) → C[0, ∞) denotes Θ_x(w)(t) = x w(t/x²), then Θ_x T = T Θ_x.
- As before, β^(n) = T^n β,  h^(n)_t = ∏_{k=0}^{n−1} sign(β^(k)_t),  X_n(t) = ∫₀^t h^(n)_s ds.

By scaling we get:

Theorem.
1. If X_n(1) → 0 in probability as n → ∞, then T is strongly mixing.
2. T is ergodic if and only if (1/n) Σ_{k=0}^{n−1} X_k²(1) → 0 in probability as n → ∞.
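A crude Monte Carlo estimate of E X_n²(1) along these lines (our own sketch, much smaller than the 10⁶-step experiment reported on the next slide; Gaussian increments, signs at left endpoints, sign(0) = 0):

```python
import numpy as np

def x_values(path, n_iter):
    """X_0(1), ..., X_{n_iter-1}(1) for one discretized path on [0, 1]."""
    h = np.ones(len(path) - 1)          # h^(0) = 1
    cur = path
    out = []
    for _ in range(n_iter):
        out.append(h.mean())            # X_n(1) = int_0^1 h^(n)_s ds
        sgn = np.sign(cur[:-1])         # sign of beta^(n) at left endpoints
        h = h * sgn                     # h^(n+1) = h^(n) * sign(beta^(n))
        cur = np.concatenate([[0.0], np.cumsum(sgn * np.diff(cur))])
    return np.array(out)

rng = np.random.default_rng(2)
m, n_iter, reps = 2_000, 20, 200
acc = np.zeros(n_iter)
for _ in range(reps):
    w = np.concatenate([[0.0], np.cumsum(rng.standard_normal(m))]) / np.sqrt(m)
    acc += x_values(w, n_iter) ** 2
sd2 = acc / reps                        # Monte Carlo estimate of E X_n^2(1)

assert sd2[0] == 1.0                    # X_0(1) = 1 always
assert np.all(sd2 <= 1.0 + 1e-12)       # |X_n(1)| <= 1
assert sd2[10] < sd2[1]                 # decay, consistent with the O(1/n^2) picture
```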
Behaviour of X_n(1)

(Figure: three panels against n — sd[n], 1/sd[n], and sd[n]·n.)

Figure: sd²[n] = E X_n²(1); sample size: 2000, number of steps of SRW: 10⁶. Pink crosses denote n × E^{1/2} X̃_n², where X̃_n = ∫₀¹ ∏_{k=0}^{n−1} sign(β̃^(k)_s) ds with independent BMs (β̃^(k))_{k≥0}. We have E^{1/2} X̃_n² ≈ π/(n√2) ≈ 2.22/n.

Conjecture. E X_n²(1) = O(1/n²). This would give: X_n(1) = ∫₀¹ h^(n)_s ds → 0 almost surely.
Simplification

- Goal: X_n(1) = ∫₀¹ h^(n)_s ds → 0 in probability.
- Enough: X_n(1) → 0 in L².
- E X_n²(1) = 2 ∫_{0<u<v<1} E( h^(n)_u h^(n)_v ) du dv = 2 ∫_{0<u<v<1} cov( h^(n)_u, h^(n)_v ) du dv.
- Enough: E( h^(n)_s h^(n)_1 ) → 0 for 0 < s < 1, by boundedness and scaling.
- New goal: fixing s ∈ (0, 1),
  P( h^(n)_s h^(n)_1 = 1 ) ≈ P( h^(n)_s h^(n)_1 = −1 )  for n large.
  Idea: coupling.
Coupling I.

- Assume that S : C[0, ∞) → C[0, ∞) preserves P.
- Denote by β̃^(n) the shadow path β^(n) ∘ S.
- Assume also that there is an event A such that on A the sequences
  ( sign(β^(n)_s) sign(β^(n)_1) )_{n≥0}  and  ( sign(β̃^(n)_s) sign(β̃^(n)_1) )_{n≥0}
  differ at exactly one index, denoted by ν. Then
  lim sup_{n→∞} E( h^(n)_s h^(n)_1 ) ≤ P(Aᶜ).
- Reason:
  E( h^(n)_s h^(n)_1 ) = E( (h^(n)_s h^(n)_1 + h̃^(n)_s h̃^(n)_1)/2 ) ≤ P(Aᶜ) + P(n < ν).
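The simplest instance (ν = 0, reflecting at a zero of β itself) can be played out on a discrete path; a sketch of our own, under the convention sign(0) = 0, which treats y and −y symmetrically:

```python
import numpy as np

def levy_step(path):
    """Discrete T: cumulative sum of sign(path) at left endpoints times increments."""
    d = np.sign(path[:-1]) * np.diff(path)
    return np.concatenate([[0.0], np.cumsum(d)])

def reflect_after(path, j):
    """S: keep the path up to index j, negate the increments afterwards."""
    out = path.copy()
    out[j:] = 2.0 * path[j] - path[j:]
    return out

rng = np.random.default_rng(3)
steps = rng.permutation(np.repeat([1.0, -1.0], 500))   # walk ending at 0: zeros exist
w = np.concatenate([[0.0], np.cumsum(steps)])

zeros = np.flatnonzero(w[1:] == 0.0) + 1               # zeros of the walk
j = zeros[len(zeros) // 2]                             # reflect at a zero: w[j] = 0
w_tilde = reflect_after(w, j)

# Reflecting at a zero just flips the sign of the tail ...
assert np.array_equal(w_tilde[j:], -w[j:])
# ... so |w_tilde| = |w| and the transformed (shadow) paths agree exactly,
# i.e. the sign sequences differ only at level nu = 0:
assert np.array_equal(levy_step(w_tilde), levy_step(w))
```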
Coupling.

Proposition. If there is a stopping time τ such that
- s < τ < 1,
- there exists ν < ∞ s.t. β^(ν)_τ = 0,
- min_{0≤k<ν} |β^(k)_τ| > C √(1 − τ),
then
  lim sup_n E( h^(n)_s h^(n)_1 ) ≤ P( max_{s∈[0,1]} |β_s| > C ).

- S reflects β after τ:
  (Sβ)_t = β̃_t = β_{t∧τ} − (β_t − β_{t∧τ}).
- A = { max_{t∈[τ,1]} |β^(0)_t − β^(0)_τ| ≤ C √(1 − τ) }.
- Then P(Aᶜ) = P( max_{s∈[0,1]} |β_s| > C ) by the strong Markov property and scaling.
- We need that sign(β^(n)_s β^(n)_1) differs from sign(β̃^(n)_s β̃^(n)_1) at exactly one place, when n = ν.
- Recall that Tβ = |β| − L.

(Figure, one frame per level: the paths β^(k) and their shadows β̃^(k), k = 0, 1, 2, 3, on [s, 1], staying within a band of width 2C√(1 − τ) after the reflection time τ; at level ν = 4 the two paths coincide: β^(4) = β̃^(4).)
Simplification

Proposition. If there is a random time τ such that
1. s < τ < 1,
2. there exists ν < ∞ s.t. β^(ν)_τ = 0,
3. min_{0≤k<ν} |β^(k)_τ| > C √(1 − τ),
then there is also a stopping time with similar properties (replacing C by C/2 in 3.).

- τ_n = inf{ t ≥ s : β^(n)_t = 0, min_{0≤k<n} |β^(k)_t| ≥ C √((1 − t) ∨ 0) },  τ̃ = inf_{n≥0} τ_n.
- τ_n, τ̃ are stopping times.
- By the condition, s ≤ τ̃ < 1.
- If for some ω ∈ Ω, τ̃(ω) < τ_n(ω) for all n, then by continuity
  inf_n |β^(n)_τ̃| ≥ C √(1 − τ̃) > 0 at ω.
- This can only happen with probability zero due to Malric's density theorem!!
Good time points

Definition. For s ∈ (0, 1), C > 0,
  A(C, s) = { t ≥ 0 : ∃γ, n, s·t < γ < t, β^(n)_γ = 0, min_{0≤k<n} |β^(k)_γ| > C √(t − γ) }
is the set of good time points.

That is, t is good if some iterated path has a zero close to t and the previous iterates are sufficiently large there.

Goal:
  P(1 ∈ A(C, s)) = 1,  for all C > 0, s ∈ (0, 1).
Set of good points II.

A(C, s) = { t ≥ 0 : ∃γ, n, s·t < γ < t, β^(n)_γ = 0, min_{0≤k<n} |β^(k)_γ| > C √(t − γ) }

- P(t ∈ A(C, s)) does not depend on t.
- P(1 ∈ A(C, s)) = 1 ⇔ A(C, s) has full Lebesgue measure almost surely.
  Proof: let Z be exponential, independent of β^(0). Then
  1 = P(Z ∈ A(C, s)) = ∫₀^∞ P(t ∈ A(C, s)) e^{−t} dt.

New goal: the random set of good time points A(C, s) is of full Lebesgue measure almost surely.
Good time points, a picture (s = .9 and C = 2)

(Figure: the iterated paths β^(0), β^(1), β^(2), β^(3) on [0, 1] and, in a second panel, the good sets contributed by each level together with their union.)

If γ is a zero of β^(n) and min_{0≤k<n} |β^(k)_γ| = ξ > 0, then
  I = (γ, γ + L) ⊂ A(C, s),  where  L = (ξ²/C²) ∧ ((1 − s)γ/s).

A(C, s) is a dense open set! It may still have small Lebesgue measure.
Porous sets

Definition. The set H ⊂ R is porous at x if
  lim sup_{r→0} (length of the largest subinterval of (x − r, x + r) ∖ H) / (2r) > 0.

- H is porous at x ⟹ the Lebesgue density of H at x cannot be 1.
- H is Borel and porous at Lebesgue almost every point of R ⟹ H is of Lebesgue measure zero.
- For H = [0, ∞) ∖ A(C, s), the set of bad time points:
  P(H is porous at 1) = 1
  ⟹ ∀t > 0, P(H is porous at t) = 1
  ⟹ P(H is porous at a.e. t > 0) = 1
  ⟹ P(λ(H) = 0) = 1.

New goal: the set of good time points contains sufficiently large intervals near 1.
lim sup_{n→∞} min_{0≤k<n} |β^(k)_{γ*_n}| / √(1 − γ*_n) > 0 a.s. is enough for strong mixing

- Here γ_n = sup{ t ≤ 1 : β^(n)_t = 0 },  γ*_n = max_{0≤k≤n} γ_k.
- By Malric's density theorem γ*_n → 1.

(Figure: at the last zero γ_n = γ*_n the values |β^(0)_{γ_n}|, |β^(1)_{γ_n}|, …, |β^(n)_{γ_n}|, and an interval I to the right of γ_n contained in
A(C, s) = { t > 0 : ∃n, ∃γ ∈ (st, t), β^(n)_γ = 0, min_{k<n} |β^(k)_γ| > C √(t − γ) },
of length |I| = ((ξ² ∧ C²)/C²)(1 − γ_n), with ξ = ½ lim sup_{n→∞} min_{0≤k<n} |β^(k)_{γ*_n}| / √(1 − γ*_n).)

- I ⊂ A(C, s), and the length of I is proportional to (1 − γ_n).
- A(C, s) is of full Lebesgue measure for all C, s, etc.
lim inf_{n→∞} Z_{n+1}/Z_n < 1 a.s. also guarantees strong mixing

- Here Z_n = min_{0≤k<n} |β^(k)_1|.
- This condition is obtained similarly, by considering the right neighborhood of 1.

(Figure: the paths h^(k)_1 β^(k) near t = 1 for k = 0, 1, …, n, with x = h^(n)_1 β^(n)_1; to the right of 1 an interval I ⊂ A(C, r) of length |I| ≥ ((ξ² ∧ C²)/C²) x² (1 − r)/(2r).)
Remark on X = lim inf Z_{n+1}/Z_n and Y = lim sup min_{0≤k<n} |β^(k)_{γ*_n}| / √(1 − γ*_n)

Here Z_n = min_{0≤k<n} |β^(k)_1|,  γ*_n = max_{0≤k≤n} γ_k,  γ_k = sup{ t ≤ 1 : β^(k)_t = 0 }.

Working a bit harder, one can obtain that both X and Y are invariant, and
- either Y = 0 a.s.,
- or 0 < P(Y = 0) < 1 and T is not ergodic,
- or Y > 0 a.s., and then Y = ∞ and T is strongly mixing.

Also
- either X = 1 a.s.,
- or 0 < P(X = 1) < 1 and T is not ergodic,
- or X < 1 a.s., and then X = 0 and Y = ∞ and T is strongly mixing.

Remark: there is a hope that P(X = 1) = 1 is impossible. Then
- P(X = 1) > 0 ⟹ X is not constant, hence X is a nontrivial invariant variable.
- Both X and Y characterize ergodicity: X < 1 ⇔ Y > 0 ⇔ T is strongly mixing.
lim inf_{x↘0} |β^(ν(x))_1| / x < 1 ⇔ X = lim inf_{n→∞} Z_{n+1}/Z_n < 1

- Here ν(x) = inf{ n ≥ 0 : |β^(n)_1| < x } and Z_n = min_{0≤k<n} |β^(k)_1|.
- X = lim inf_{n→∞} Z_{n+1}/Z_n = lim inf_{x↘0} |β^(ν(x))_1| / x.
  (Picture: a number line with … < Z_{n+2} < Z_{n+1} < x ≤ Z_n < Z_{n−1} < …; for such x we have ν(x) = n and |β^(ν(x))_1| = Z_{n+1}.)
- Claim. {x ν(x) : x ∈ (0, 1)} is tight ⟹ X = lim inf_{x↘0} |β^(ν(x))_1| / x < 1 a.s. ⟹ T is strongly mixing.
- Proof: 1_{(X > 1−δ)} ≤ lim inf 1_{(|β^(ν(x))_1|/x > 1−δ)}. By Fatou's lemma,
  P(X > 1 − δ) ≤ lim inf_{x→0+} P( |β^(ν(x))_1| > (1 − δ)x )
    ≤ lim inf_{x→0+} [ P(x ν(x) > K) + (1 + K/x) P( 1 − δ < |β_1|/x < 1 ) ]
    ≤ sup_{x∈(0,1)} P(x ν(x) > K) + (1 + K)δ,
  so that
  P(X = 1) ≤ inf_K inf_δ [ sup_{x∈(0,1)} P(x ν(x) > K) + (1 + K)δ ].
Is {x ν(x) : x > 0} tight?

Recall that sup_{x∈(0,1)} E(x ν(x)) < ∞ ⟹ {x ν(x) : x ∈ (0, 1)} is tight (by the Markov inequality) ⟹ T is strongly mixing.

(Figure: two panels, E(ν(x)) and p(x)·E(ν(x)).)

Figure: E(ν*(x)) estimated from long runs of a SRW (number of iterations: 10⁵, number of steps of SRW: 10⁹). On the x-axis the probability p(x) = P(|β^(0)_1| < x) is given.
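The rescaled first-passage quantity x ν(x) can be estimated from a discretized path (our own toy version of this experiment, with a much shorter walk and sign(0) = 0, so the numbers are only indicative):

```python
import numpy as np

def abs_beta1(path, n_iter):
    """|beta^(0)_1|, ..., |beta^(n_iter-1)_1| for one discretized path on [0, 1]."""
    vals = np.empty(n_iter)
    cur = path
    for n in range(n_iter):
        vals[n] = abs(cur[-1])
        cur = np.concatenate([[0.0], np.cumsum(np.sign(cur[:-1]) * np.diff(cur))])
    return vals

def nu_of_x(vals, x):
    """First n with |beta^(n)_1| < x (capped at len(vals))."""
    hits = np.flatnonzero(vals < x)
    return int(hits[0]) if hits.size else len(vals)

rng = np.random.default_rng(4)
m, n_iter, reps = 2_000, 200, 50
xs = [0.5, 0.25, 0.1]
rescaled = {x: [] for x in xs}
for _ in range(reps):
    w = np.concatenate([[0.0], np.cumsum(rng.standard_normal(m))]) / np.sqrt(m)
    vals = abs_beta1(w, n_iter)
    for x in xs:
        rescaled[x].append(x * nu_of_x(vals, x))

# Tightness of {x nu(x) : x > 0} would mean these samples stay stochastically
# bounded as x decreases; here we only collect a few sample means.
means = {x: float(np.mean(v)) for x, v in rescaled.items()}
assert all(len(v) == reps for v in rescaled.values())
assert all(val >= 0.0 for v in rescaled.values() for val in v)
```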
Density of (1/x)|β^(ν(x))_1|

- Consider the natural extension of (Ω, B, P, T). Then T is an invertible measure preserving transformation on the extension. That is,
  - Ω = C[0, ∞)^Z,
  - for ω = (ω_n)_{n∈Z}, (Tω)_n = ω_{n+1} and β^(n)(ω) = ω_n,
  - P is such that (β^(k), β^(k+1), …) has the same joint law as (β, Tβ, …) for all k ∈ Z.
- Put ν*(x) = inf{ n ≥ 1 : |β^(−n)_1| < x }, the return time for the inverse Lévy transform.
- Then by the tower decomposition of Ω, for A ⊂ C[0, ∞) and Ã = {β^(0) ∈ A},
  P( β^(ν(x)) ∈ A ) = E( ν*(x) 1_{{|β^(0)_1| < x}} 1_{{β^(0) ∈ A}} ).

(Picture: the tower over the base {ν* = 0} ∩ Ã, with levels {ν* = 1}, {ν* = 2}, {ν* = 3}, {ν* = 4}, …, and T mapping each level to the next and back to the base.)

- The density f_x of (1/x)|β^(ν(x))_1| is obtained by conditioning:
  f_x(y) = 2 φ(yx) E( x ν*(x) | |β^(0)_1| = yx ),  for y ∈ (0, 1),
  where φ is the standard normal density.
The density E( x ν*(x) | |β^(0)_1| = yx )?

(Figure: four scatter plots of the rescaled return time against the rescaled value, for thresholds U < 5e−04, U < 0.001, U.small < 1e−04 and U.small < 5e−05.)

Joint behaviour of |β^(0)_1| and ν*(x) given |β^(0)_1| < x; both are rescaled to uniform variables. (1/x)|β^(0)_1| seems to be conditionally independent of x ν*(x). (From one long random walk: number of steps 10¹³, number of iterated paths 10⁶.)

lim_{x→0+} E( x ν*(x) | |β^(0)_1| = yx ) = ?
- Conjecture: (1/x)|β^(ν(x))_1| converges in distribution to a uniform variable.
- Actually, the density seems to go to 1 as x → 0+.
Playing with two types of expected return times, one can show that
  lim inf_{x→0+} P( |β^(ν(x))_1| < x/2 ) > 0.

- This is enough:
  lim inf_{x→0+} |β^(ν(x))_1| / x < 1  with positive probability.
- Recall that then both
  X = lim inf_{n→∞} Z_{n+1}/Z_n  and  Y = lim sup_{n→∞} min_{0≤k<n} |β^(k)_{γ*_n}| / √(1 − γ*_n)
  characterize ergodicity: X < 1 ⇔ Y > 0 ⇔ T is strongly mixing ⇔ T is ergodic.
Conclusion

- Marc Malric has proved that the orbit of a typical sample path meets every open set.
- To prove strong mixing, only certain open sets have to be considered.
- For these open sets
  • tightness of the family of rescaled hitting times would be enough,
  • or a quantitative result is needed: the expected hitting times do not grow faster than the inverse of the size of these open sets.

Thank you for your attention!
Happy birthday!