The Prokofiev-Svistunov-Ising process is rapidly mixing

Tim Garoni
School of Mathematical Sciences
Monash University
Collaborators

◮ Andrea Collevecchio (Monash University)
◮ Tim Hyndman (Monash University)
◮ Daniel Tokarev (Monash University)
Ising model - Motivation

◮ Introduced in 1920 as a model for ferromagnetism
◮ Hope was to explain the Curie transition:
    ◮ Place iron in a magnetic field
    ◮ Increase the field to a high value
    ◮ Slowly reduce the field to zero
    ◮ There exists a critical temperature Tc below which the iron remains magnetized
◮ During the mid 1930s it was realized that the Ising model also describes phase transitions in other physical systems
    ◮ Gas-liquid critical phenomena (lattice gas)
    ◮ Binary alloys
◮ Now a paradigm for order/disorder transitions, with applications to
    ◮ Economics (opinion formation)
    ◮ Finance (stock price dynamics)
    ◮ Biology (hemoglobin, DNA, . . . )
    ◮ Image processing (archetypal Markov random field)
◮ Continues to play a fundamental role in theoretical/mathematical studies of phase transitions and critical phenomena
The Ising model

◮ Finite graph G = (V, E)
◮ Configuration space ΣG = {−1, +1}^V
◮ Measure

    P(σ) = (1/Z) exp( β Σ_{ij∈E} σi σj + h Σ_{i∈V} σi )

    ◮ Inverse temperature β
    ◮ External field h
    ◮ Z is the partition function
◮ Main physical interest is in certain expectations, such as:
    ◮ Two-point correlation cov(σu, σv) = E(σu σv) − E(σu) E(σv)
    ◮ Susceptibility χ = (1/|V|) Σ_{u,v∈V} cov(σu, σv)
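To make these definitions concrete, here is a small brute-force Python sketch (not part of the talk); the 4-cycle graph and the values of β and h are arbitrary illustrative choices, and the enumeration is only feasible for tiny |V|.

import itertools, math

# A tiny, hypothetical example graph (a 4-cycle) and arbitrary parameters.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
beta, h = 0.4, 0.0

def weight(sigma):
    # unnormalized Boltzmann weight exp(beta * sum_{ij in E} s_i s_j + h * sum_i s_i)
    return math.exp(beta * sum(sigma[i] * sigma[j] for i, j in E) + h * sum(sigma))

configs = list(itertools.product([-1, +1], repeat=len(V)))
Z = sum(weight(s) for s in configs)                     # partition function

def expect(f):
    return sum(f(s) * weight(s) for s in configs) / Z

def cov(u, v):
    return expect(lambda s: s[u] * s[v]) - expect(lambda s: s[u]) * expect(lambda s: s[v])

chi = sum(cov(u, v) for u in V for v in V) / len(V)     # susceptibility
print("Z =", Z, " chi =", chi)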
Phase transition

Let Λn = {−n, . . . , n}^d ⊂ Z^d, and consider

    Σ+_Λn = { σ ∈ {−1, 1}^{Z^d} : σi = +1 for all i ∉ Λn }

The sequence of Gibbs measures on Λn converges: P+_{Λn,β,h} ⇒ P+_{β,h}

Analogous construction for “minus” boundary conditions

Theorem (Aizenman, Duminil-Copin, Sidoravicius (2014))
1. If d = 1, then for any (β, h) ∈ [0, ∞) × R there is a unique infinite-volume Gibbs measure
2. If d ≥ 2 and h ≠ 0, then for any β ∈ [0, ∞) there is a unique infinite-volume Gibbs measure
3. If d ≥ 2 and h = 0, there exists βc(d) ∈ (0, ∞) such that:
   a. If β ≤ βc, there is a unique infinite-volume Gibbs measure
   b. If β > βc, then P+_{β,0} ≠ P−_{β,0}
Exact solutions

◮ Gn = Z_n solved by Ising (1925)
◮ Gn = K_n solved by Husimi (1953) and Temperley (1954)
◮ Gn = Z_n^2 solved by Peierls (1936), Kramers & Wannier (1941), Onsager (1944), Yang (1952), Kasteleyn (1963), Fisher (1966)
    ◮ Smirnov (2006) proved critical interfaces between +/− components have a conformally invariant limit
        ◮ SLE(3) - Schramm-Löwner Evolution
    ◮ Kasteleyn and Fisher (K-F) related the Ising partition function to perfect matchings
        ◮ Elegant solution on planar graphs in terms of Pfaffians
        ◮ Only tractable on graphs of bounded (small) genus
        ◮ Method not tractable for Gn = Z_n^d with d ≥ 3
Computational Complexity

◮ PARTITION:
    ◮ Input: Finite graph G = (V, E), and parameters β, h
    ◮ Output: Ising partition function
◮ CORRELATION:
    ◮ Input: Finite graph G = (V, E), a pair u, v ∈ V, and parameters β, h
    ◮ Output: Ising two-point correlation function
◮ SUSCEPTIBILITY:
    ◮ Input: Finite graph G = (V, E), and parameters β, h
    ◮ Output: Ising susceptibility

Proposition (Jerrum-Sinclair 1993; Sinclair-Srivastava 2014)
PARTITION, SUSCEPTIBILITY and CORRELATION are #P-hard.
Markov-chain Monte Carlo

◮ Construct a transition matrix P on Ω which:
    ◮ Is ergodic
    ◮ Has stationary distribution π(·)
◮ Generate random samples with (approximate) distribution π
◮ Estimate π-expectations using sample means

    d(t) := max_{x∈Ω} ‖P^t(x, ·) − π(·)‖ ≤ C α^t,   for some α ∈ (0, 1)

◮ The mixing time quantifies the rate of convergence

    tmix(δ) := min { t : d(t) ≤ δ }

◮ How does tmix depend on the size of Ω?
    ◮ For Ising, the size of a problem instance is n = |V|
    ◮ If tmix = O(poly(n)) we have rapid mixing
    ◮ |Ω| = 2^n, so rapid mixing means only logarithmically-many (in |Ω|) states need be visited to reach approximate stationarity
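As a concrete illustration of d(t) and tmix(δ) (not from the talk), the following Python snippet computes them exactly for a small example chain, a lazy random walk on a 6-cycle; the chain is an arbitrary choice made purely for illustration.

import numpy as np

# Lazy random walk on a 6-cycle: an arbitrary small example chain.
n = 6
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] = 0.25
    P[i, (i + 1) % n] = 0.25
pi = np.full(n, 1.0 / n)                  # its stationary distribution is uniform

def d(t):
    # worst-case total variation distance to stationarity after t steps
    Pt = np.linalg.matrix_power(P, t)
    return 0.5 * np.abs(Pt - pi).sum(axis=1).max()

def t_mix(delta):
    t = 1
    while d(t) > delta:
        t += 1
    return t

print(t_mix(0.25))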
Markov chains for the Ising model

◮ Glauber process (arbitrary field), 1963
    ◮ Lubetzky & Sly (2012): Rapidly mixing on boxes in Z^2 at h = 0 iff β ≤ βc
    ◮ Levin, Luczak & Peres (2010): Precise asymptotics on Kn for h = 0
    ◮ Not used by computational physicists
◮ Swendsen-Wang process (zero field), 1987
    ◮ Simulates a coupling of the Ising and Fortuin-Kasteleyn models
    ◮ Long, Nachmias, Ding & Peres (2014): Precise asymptotics on Kn
    ◮ Ullrich (2014): Rapidly mixing on boxes in Z^2 at all β ≠ βc
    ◮ Empirically fast. State of the art from 1987 until recently
◮ Jerrum-Sinclair process (positive field), 1993
    ◮ Simulates high-temperature graphs for h > 0
    ◮ Proved rapidly mixing on all graphs at all temperatures for all h > 0
    ◮ No empirical results - not used by computational physicists
◮ Prokofiev-Svistunov worm process (zero field), 2001
    ◮ No rigorous results currently known
    ◮ Empirically, the best method known for susceptibility (Deng, G., Sokal)
    ◮ Widely used by computational physicists
Mixing time bound for the PS process

Theorem (Collevecchio, G., Hyndman, Tokarev 2014+)
For any temperature, the mixing time of the PS process on a graph G = (V, E) satisfies

    tmix(δ) = O(∆(G) m^2 n^5)

with n = |V|, m = |E| and ∆(G) the maximum degree.

This is the only Markov chain for the Ising model currently known to be rapidly mixing at the critical point for boxes in Z^d.
High-temperature expansions and the PS measure

◮ Let Ck = {A ⊆ E : (V, A) has k odd vertices}
◮ Let CW = {A ⊆ E : W is the set of odd vertices in (V, A)}
◮ The PS measure is defined on the configuration space C0 ∪ C2:

    π(A) ∝ x^|A| · (n if A ∈ C0,  2 if A ∈ C2)

◮ If x = tanh β then:
    ◮ Ising susceptibility χ = 1 / π(C0)
    ◮ Ising two-point correlation function E(σu σv) = (n/2) · π(Cuv) / π(C0)
◮ The PS measure is the stationary distribution of the PS process
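As a sanity check (not part of the talk), the following Python enumeration verifies the two identities above against brute-force Ising expectations on a small graph; the graph and the value of β are arbitrary choices.

import itertools, math

V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n, beta = len(V), 0.35
x = math.tanh(beta)

def odd_vertices(A):
    deg = {v: 0 for v in V}
    for (u, v) in A:
        deg[u] += 1
        deg[v] += 1
    return frozenset(v for v in V if deg[v] % 2 == 1)

# unnormalized PS weights: x^|A| * (n on C0, 2 on C2)
Lam = {}
for r in range(len(E) + 1):
    for A in itertools.combinations(E, r):
        W = odd_vertices(A)
        if len(W) == 0:
            Lam[A] = n * x ** len(A)
        elif len(W) == 2:
            Lam[A] = 2 * x ** len(A)
total = sum(Lam.values())
pi_C0 = sum(w for A, w in Lam.items() if len(odd_vertices(A)) == 0) / total

def pi_Cuv(u, v):
    return sum(w for A, w in Lam.items() if odd_vertices(A) == frozenset({u, v})) / total

# compare with brute-force Ising expectations at h = 0
configs = list(itertools.product([-1, 1], repeat=n))
Z = sum(math.exp(beta * sum(s[i] * s[j] for i, j in E)) for s in configs)
def corr(u, v):
    return sum(s[u] * s[v] * math.exp(beta * sum(s[i] * s[j] for i, j in E))
               for s in configs) / Z

chi_ising = sum(corr(u, v) for u in V for v in V) / n
print(chi_ising, 1.0 / pi_C0)                        # should agree
print(corr(0, 2), (n / 2) * pi_Cuv(0, 2) / pi_C0)    # should agree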
Fully-polynomial randomized approximation schemes

Definition
An fpras for an Ising property f is a randomized algorithm such that for given G, T, and ξ, η ∈ (0, 1) the output Y satisfies

    P[(1 − ξ) f ≤ Y ≤ (1 + ξ) f] ≥ 1 − η

and the running time is bounded by a polynomial in n, ξ^{-1}, η^{-1}.

Combine rapid mixing of the PS process with the general fpras construction of Jerrum-Sinclair (1993):

◮ Let A ⊆ C0 ∪ C2 with π(A) ≥ 1/S(n) for some polynomial S(n)
◮ The following defines an fpras for π(A) (sketched in code below):
    ◮ Let R(G) be our upper bound for tmix(δ) with δ = ξ/[8 S(n)]
    ◮ Let Y = 1_A
    ◮ Run the PS process T = R(G) time steps and record Y_T
    ◮ Independently generate 72 ξ^{-2} S(n) such samples and take the sample mean
    ◮ Repeat 6 lg⌈1/η⌉ + 1 such experiments and take the median
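The recipe above can be written as a short routine. The Python sketch below is an illustration only: run_ps_process, in_A, R_G and S_n are placeholder inputs standing for the PS simulator, the indicator of A, the mixing-time bound R(G) and the polynomial S(n); the constants follow the recipe as stated, and the exact placement of the ceiling in the number of experiments is an assumption.

import math, random, statistics

def fpras_estimate(run_ps_process, in_A, R_G, S_n, xi, eta):
    """Median-of-means estimate of pi(A), following the recipe above.
    run_ps_process(T): runs the PS chain for T steps and returns the final state.
    in_A(state): indicator of the event A.
    R_G: the mixing-time upper bound R(G); S_n: the polynomial bound S(n)."""
    samples = math.ceil(72 * S_n / xi ** 2)                 # samples per experiment
    experiments = 6 * math.ceil(math.log2(1.0 / eta)) + 1   # number of experiments
    means = []
    for _ in range(experiments):
        hits = sum(in_A(run_ps_process(R_G)) for _ in range(samples))
        means.append(hits / samples)
    return statistics.median(means)                          # median of sample means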
Prokofiev-Svistunov process

PS proposals:
◮ If A ∈ C0:
    ◮ Pick a uniformly random u ∈ V
    ◮ Pick a uniformly random v ∼ u
    ◮ Propose A → A△uv
◮ If A ∈ C2:
    ◮ Pick a random odd u ∈ V
    ◮ Pick a uniformly random v ∼ u
    ◮ Propose A → A△uv

Metropolize the proposals with respect to the PS measure π(·); a sketch of one such step follows.
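Below is a minimal Python sketch of one such step, targeting π(A) ∝ x^|A| · (n on C0, 2 on C2). It is an illustration, not the talk's implementation; since the proposal is asymmetric between C0 and C2, the acceptance ratio includes the usual Hastings correction, a detail the slides leave implicit.

import random

def ps_step(A, adj, x):
    """One Metropolis-Hastings worm step.
    A:   current configuration, a set of frozenset({u, v}) edges
    adj: dict vertex -> list of neighbours (the graph G)
    Target measure: pi(A) proportional to x**|A| * (n if A in C0 else 2)."""
    n = len(adj)

    def odd_set(state):
        return {v for v in adj
                if sum(frozenset((v, w)) in state for w in adj[v]) % 2 == 1}

    def q(state, e):
        # total probability that the PS proposal from `state` flips edge e
        odd = odd_set(state)
        pivots = list(adj) if not odd else sorted(odd)
        return sum((1.0 / len(pivots)) * (1.0 / len(adj[p])) for p in e if p in pivots)

    def weight(state):
        return (x ** len(state)) * (n if not odd_set(state) else 2)

    # proposal: pivot u is uniform on V in C0, uniform on the odd vertices in C2;
    # v is a uniform neighbour of u; the edge uv is flipped
    odd = odd_set(A)
    u = random.choice(list(adj)) if not odd else random.choice(sorted(odd))
    v = random.choice(adj[u])
    e = frozenset((u, v))
    B = (A - {e}) if e in A else (A | {e})

    # Metropolis-Hastings acceptance, with the correction for the asymmetric proposal
    accept = min(1.0, (weight(B) * q(B, e)) / (weight(A) * q(A, e)))
    return B if random.random() < accept else A

Iterating ps_step from A = ∅ and recording the fraction of time spent in C0 estimates π(C0) = 1/χ, as discussed on the previous slide.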
Proof of rapid mixing

◮ We use the path method
◮ Consider the transition graph G = (V, E) of the PS process
    ◮ V = C0 ∪ C2
    ◮ E = {AA′ : P(A, A′) > 0}
◮ Specify paths γ_{I,F} in G between pairs of states I, F
◮ If no transition is used too often, the process is rapidly mixing
◮ In the most general case, one specifies paths between each pair I, F ∈ C0 ∪ C2
◮ For the PS process, it is convenient to only specify paths from C2 to C0

[Figure: the state space drawn as two regions C2 and C0, with a canonical path running from an initial state I ∈ C2 to a final state F ∈ C0]
Lemma (Jerrum-Sinclair-Vigoda (2004))
Consider a Markov chain with state space Ω and stationary distribution π.
Let S ⊂ Ω, and specify paths Γ = {γ_{I,F} : I ∈ S, F ∈ S^c}. Then

    tmix(δ) ≤ [ 2 + 4 ( π(S)/π(S^c) + π(S^c)/π(S) ) ] · ϕ(Γ) · log( 1 / (δ min_{A∈Ω} π(A)) )

where

    ϕ(Γ) := max_{(I,F)∈S×S^c} |γ_{I,F}| · max_{AA′∈E} [ 1/(π(A) P(A, A′)) · Σ_{(I,F)∈S×S^c : γ_{I,F} ∋ AA′} π(I) π(F) ]

◮ We choose S = C2. Elementary to show:

    (2/n) · mx/(mx + 1) ≤ π(C2)/π(C0) ≤ n − 1,   and   π(A) ≥ x^m/8 for all A

◮ Therefore:

    tmix(δ) ≤ ( log(8/x^m) − log δ ) ( 3 + 1/(mx) ) · 2 m n ϕ(Γ)
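As a toy illustration of the congestion quantity ϕ(Γ) (not from the talk), the Python snippet below evaluates it by brute force for a lazy random walk on the path {0, 1, 2, 3}, taking S = {0, 1} and the obvious monotone canonical paths; every choice here is purely illustrative.

import numpy as np

# Lazy random walk on the path {0,1,2,3}; S = {0,1}; canonical paths move right.
P = np.array([[0.50, 0.50, 0.00, 0.00],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.00, 0.00, 0.50, 0.50]])
pi = np.array([1.0, 2.0, 2.0, 1.0]) / 6.0          # stationary distribution

S, Sc = {0, 1}, {2, 3}
paths = {(I, F): [(k, k + 1) for k in range(I, F)] for I in S for F in Sc}

max_len = max(len(p) for p in paths.values())
edges = {e for p in paths.values() for e in p}
congestion = max(sum(pi[I] * pi[F] for (I, F), p in paths.items() if e in p)
                 / (pi[e[0]] * P[e[0], e[1]])
                 for e in edges)
phi = max_len * congestion
print("phi(Gamma) =", phi)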
Choice of Canonical Paths

◮ To transition from I to F:
    ◮ Flip each e ∈ I△F
◮ If (I, F) ∈ C2 × C0 then I△F ∈ C2
◮ I△F = A0 ∪ ⋃_{i≥1} Ai
    ◮ A0 is a path
    ◮ Ai are disjoint cycles
◮ γ_{I,F} is defined by:
    ◮ Traverse A0
    ◮ . . . then A1
    ◮ . . . then A2
    ◮ . . .
◮ Γ = {γ_{I,F}}
◮ One can show that ϕ(Γ) ≤ ∆(G) n^4 m / 2

[Figure sequence: a configuration I ∈ C2 and a target F ∈ C0; the symmetric difference I△F is decomposed into a path A0 and disjoint cycles A1, A2, . . ., which the canonical path flips in turn until F is reached]
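The decomposition of I△F can be made concrete with a short routine. The Python sketch below is an assumption-laden illustration, not the paper's exact construction: it greedily splits I△F into an open trail A0 joining the two odd vertices of I and edge-disjoint closed trails, the trail analogue of the path-plus-cycles decomposition used above.

from collections import defaultdict

def decompose(D, odd_pair):
    """Split D = I symmetric-difference F into pieces [A0, A1, A2, ...]:
    A0 is an open trail joining the two odd vertices of D, and the remaining
    pieces are edge-disjoint closed trails. Edges are frozenset({u, v})."""
    adj = defaultdict(set)
    for e in D:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)
    remaining = set(D)

    def walk(start, stop_at=None):
        # greedily trace a trail from `start`, deleting edges as they are used
        trail, u = [], start
        while adj[u]:
            v = next(iter(adj[u]))
            adj[u].discard(v)
            adj[v].discard(u)
            e = frozenset((u, v))
            remaining.discard(e)
            trail.append(e)
            u = v
            if stop_at is not None and u == stop_at:
                break
        return trail

    a, b = odd_pair
    pieces = [walk(a)]                       # A0: necessarily ends at the other odd vertex b
    while remaining:                         # A1, A2, ...: closed trails on the leftover edges
        start = next(iter(next(iter(remaining))))
        pieces.append(walk(start, stop_at=start))
    return pieces

# Example: D is a path 0-1-2-3 plus a triangle 4-5-6; the odd vertices are 0 and 3.
D = {frozenset(p) for p in [(0, 1), (1, 2), (2, 3), (4, 5), (5, 6), (6, 4)]}
print([sorted(map(sorted, piece)) for piece in decompose(D, (0, 3))])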
Bounding the congestion

◮ Define the measure Λ:

    Λ(A) = x^|A| · (n if A ∈ C0,  2 if A ∈ C2,  1 if A ∈ C4),   so that   π(A) = Λ(A) / Λ(C0 ∪ C2)

◮ Define, for each transition T = AA′ ∈ E:
    ◮ η_T : C2 × C0 → C0 ∪ C2 ∪ C4
    ◮ η_T(I, F) = I△F△(A ∪ A′)
    ◮ Λ(I)Λ(F) / (Λ(A) P(A, A′)) ≤ 4 ∆(G) n Λ(η_T(I, F))
    ◮ η_T is an injection
◮ If T = AA′ is a maximally congested transition, and P(T) denotes the set of pairs (I, F) ∈ C2 × C0 whose canonical path uses T, then

    ϕ(Γ) = [ Σ_{(I,F)∈P(T)} π(I)π(F) / (π(A) P(A, A′)) ] · max_{(I,F)∈S×S^c} |γ_{I,F}|
         ≤ m · Σ_{(I,F)∈P(T)} π(I)π(F) / (π(A) P(A, A′))
         = [ m / Λ(C0 ∪ C2) ] · Σ_{(I,F)∈P(T)} Λ(I)Λ(F) / (Λ(A) P(A, A′))
         ≤ 4 ∆(G) n m · [ 1 / Λ(C0 ∪ C2) ] · Σ_{(I,F)∈P(T)} Λ(η_T(I, F))
         ≤ 4 ∆(G) n m · Λ(C0 ∪ C2 ∪ C4) / Λ(C0 ∪ C2)
         = 4 ∆(G) n m · ( 1 + Λ(C4) / (Λ(C0) + Λ(C2)) )
Bounding the congestion cont. . .

The final step is to note that the high-temperature expansion implies

    Λ(C4) / Λ(C0) ≤ (1/n) · (n choose 4)

so that

    ϕ(Γ) ≤ 4 ∆(G) n m ( 1 + Λ(C4)/Λ(C0) )
         ≤ 4 ∆(G) n m ( 1 + (1/n) (n choose 4) )
         ≤ 4 ∆(G) n m · n^3/8
         = ∆(G) n^4 m / 2
Discussion

◮ Can we obtain sharper results if we focus on special families of graphs, such as G = Z^d_L?
◮ The PS process is closely related to a modification of the “lamplighter walk” in which lamps are always switched when visited. Can we use this similarity to say something more precise when G = Z^d_L?
◮ Study related spin models using similar methods?
Mixing time bound for the PS process

Theorem (Collevecchio, G., Hyndman, Tokarev 2014+)
The mixing time of the PS process on a graph G = (V, E) with parameter x ∈ (0, 1) satisfies

    tmix(δ) ≤ ( log(8/x^m) − log δ ) ( 3 + 1/(mx) ) ∆(G) m^2 n^5,

with n = |V|, m = |E| and ∆(G) the maximum degree.
High-temperature expansions and the PS measure

◮ Let ∂A = {v ∈ V : v has odd degree in (V, A)}
◮ Let CW := {A ⊆ E : ∂A = W} for W ⊆ V
◮ Let Ck := ⋃_{W⊆V, |W|=k} CW for integer 1 ≤ k ≤ |V|
◮ λ(·) defined by λ(S) = Σ_{A∈S} x^|A| for S ⊆ {A ⊆ E}
◮ If x = tanh β then

    E^{(Ising)}_{G,β}( Π_{v∈W} σv ) = λ(CW) / λ(C0)

◮ The PS measure is defined on the configuration space C0 ∪ C2:

    π(A) = [ x^|A| / (n λ(C0) + 2 λ(C2)) ] · (n if A ∈ C0,  2 if A ∈ C2)

◮ Ising susceptibility χ = 1 / π(C0)
◮ Ising two-point correlation function E(σu σv) = (n/2) · π(Cuv) / π(C0)
◮ The PS measure is the stationary distribution of the PS process
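To complement the single-step sketch given with the PS proposals, the following Python check (same assumptions as there: a Metropolis-Hastings implementation of the proposals, an arbitrary 4-cycle graph and value of x; not from the talk) builds the exact transition matrix and verifies numerically that the measure π defined above is stationary.

import itertools, math
import numpy as np

V = [0, 1, 2, 3]
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}              # a 4-cycle
E = [frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]]
n, x = len(V), math.tanh(0.4)

def odd(A):
    return frozenset(v for v in V if sum(frozenset((v, w)) in A for w in adj[v]) % 2)

states = [frozenset(A) for r in range(len(E) + 1)
          for A in itertools.combinations(E, r) if len(odd(frozenset(A))) in (0, 2)]
weight = {A: x ** len(A) * (n if not odd(A) else 2) for A in states}
pi = np.array([weight[A] for A in states])
pi /= pi.sum()

def q(A, e):
    # probability that the PS proposal from A flips edge e
    pivots = V if not odd(A) else sorted(odd(A))
    return sum(1.0 / len(pivots) / len(adj[p]) for p in e if p in pivots)

index = {A: i for i, A in enumerate(states)}
P = np.zeros((len(states), len(states)))
for A in states:
    for e in E:
        if q(A, e) == 0:
            continue
        B = A ^ {e}
        acc = min(1.0, weight[B] * q(B, e) / (weight[A] * q(A, e)))
        P[index[A], index[B]] += q(A, e) * acc
    P[index[A], index[A]] += 1.0 - P[index[A]].sum()            # rejected proposals

print(np.allclose(pi @ P, pi))                                  # True: pi is stationary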