TOTAL VARIATION CUTOFF IN BIRTH-AND-DEATH CHAINS
JIAN DING, EYAL LUBETZKY AND YUVAL PERES
Abstract. The cutoff phenomenon describes a case where a Markov
chain exhibits a sharp transition in its convergence to stationarity. In
1996, Diaconis surveyed this phenomenon, and asked how one could
recognize its occurrence in families of finite ergodic Markov chains. In
2004, the third author noted that a necessary condition for cutoff in a
family of reversible chains is that the product of the mixing-time and
spectral-gap tends to infinity, and conjectured that in many settings, this
condition should also be sufficient. Diaconis and Saloff-Coste (2006)
verified this conjecture for continuous-time birth-and-death chains, started at an endpoint, with convergence measured in separation. It is natural to ask whether the conjecture holds for these chains in the more
widely used total-variation distance.
In this work, we confirm the above conjecture for all continuous-time
or lazy discrete-time birth-and-death chains, with convergence measured
via total-variation distance. Namely, if the product of the mixing-time
and spectral-gap tends to infinity, the chains exhibit cutoff at the maximal
hitting time of the stationary distribution median, with a window of at
most the geometric mean between the relaxation-time and mixing-time.
In addition, we show that for any lazy (or continuous-time) birth-and-death chain with stationary distribution π, the separation 1 − p^t(x, y)/π(y)
is maximized when x, y are the endpoints. Together with the above results, this implies that total-variation cutoff is equivalent to separation
cutoff in any family of such chains.
1. Introduction
The cutoff phenomenon arises when a finite Markov chain converges
abruptly to equilibrium. Roughly, this is the case where, over a negligible period of time known as the cutoff window, the distance of the chain
from the stationary measure drops from near its maximum to near 0.
Let (Xt ) denote an aperiodic irreducible Markov chain on a finite state
space Ω with transition kernel P(x, y), and let π denote its stationary distribution. For any two distributions µ, ν on Ω, their total-variation distance is
defined to be
  ‖µ − ν‖_TV := sup_{A⊆Ω} |µ(A) − ν(A)| = (1/2) Σ_{x∈Ω} |µ(x) − ν(x)| .
(Research of J. Ding and Y. Peres was supported in part by NSF grant DMS-0605166.)
Consider the worst-case total-variation distance to stationarity at time t,
  d(t) := max_{x∈Ω} ‖P_x(X_t ∈ ·) − π‖_TV ,
where P_x denotes the probability given X_0 = x. The total-variation mixing-time of (X_t), denoted by t_mix(ε) for 0 < ε < 1, is defined to be
  t_mix(ε) := min{ t : d(t) ≤ ε } .
Next, consider a family of such chains, (X_t^(n)), each with its corresponding worst-case distance from stationarity d_n(t), its mixing-times t_mix^(n), etc. We say that this family of chains exhibits cutoff iff the following sharp transition in its convergence to stationarity occurs:
  lim_{n→∞} t_mix^(n)(ε) / t_mix^(n)(1 − ε) = 1   for any 0 < ε < 1 .   (1.1)
Our main result is an essentially tight bound on the difference between
t_mix(ε) and t_mix(1 − ε) for general birth-and-death chains; a birth-and-death
chain has the state space {0, . . . , n} for some integer n, and always moves
from one state to a state adjacent to it (or stays in place).
We first state a quantitative bound for a single chain, then deduce a cutoff
criterion. Let gap be the spectral-gap of the chain (that is, gap := 1 − λ
where λ is the largest absolute-value of all nontrivial eigenvalues of the
transition kernel P), and let t_rel := gap^{−1} denote the relaxation-time of the chain. A chain is called lazy if P(x, x) ≥ 1/2 for all x ∈ Ω.
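To make the quantities just defined concrete, here is a minimal numerical sketch (ours, not part of the paper): it builds the lazy simple random walk on {0, …, n}, one particular lazy birth-and-death chain, and computes d(t), t_mix(1/4), gap and t_rel with numpy. The kernel choice and all function names are illustrative assumptions.

```python
import numpy as np

def lazy_srw_kernel(n):
    """Lazy simple random walk on {0, ..., n}: stay put with probability 1/2,
    otherwise step to a uniformly chosen neighbour (reflecting at 0 and n)."""
    P = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        P[i, i] += 0.5
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j <= n]
        for j in nbrs:
            P[i, j] += 0.5 / len(nbrs)
    return P

def stationary(P):
    """Stationary distribution: the left eigenvector of P for eigenvalue 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    return pi / pi.sum()

def worst_tv(Pt, pi):
    """d(t) = max_x || P^t(x, .) - pi ||_TV for a precomputed power Pt."""
    return 0.5 * np.max(np.abs(Pt - pi).sum(axis=1))

def t_mix(P, pi, eps=0.25):
    """t_mix(eps) = min { t : d(t) <= eps }, found by iterating powers of P."""
    Pt, t = P.copy(), 1
    while worst_tv(Pt, pi) > eps:
        Pt, t = Pt @ P, t + 1
    return t

n = 40
P = lazy_srw_kernel(n)
pi = stationary(P)
lam = np.sort(np.abs(np.linalg.eigvals(P)))[-2]   # largest nontrivial |eigenvalue|
gap = 1.0 - lam
print("t_mix(1/4) =", t_mix(P, pi), " gap =", round(gap, 5), " t_rel =", round(1 / gap, 1))
```

For this particular family both t_rel and t_mix(1/4) grow like n², so the product gap · t_mix(1/4) stays bounded; by the criterion discussed below, such a family does not exhibit cutoff.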
Theorem 1. For any 0 < ε < 1/2 there exists an explicit c_ε > 0 such that every lazy irreducible birth-and-death chain (X_t) satisfies
  t_mix(ε) − t_mix(1 − ε) ≤ c_ε √( t_rel · t_mix(1/4) ) .   (1.2)
As we later show, the above theorem extends to continuous-time chains,
as well as to δ-lazy chains, which satisfy P(x, x) ≥ δ for all x ∈ Ω.
The notion of a cutoff window relates Theorem 1 to the cutoff phenomenon. A sequence w_n is called a cutoff window for a family of chains (X_t^(n)) if the following holds: w_n = o( t_mix^(n)(1/4) ), and for any ε > 0 there exists some c_ε > 0 such that, for all n,
  t_mix^(n)(ε) − t_mix^(n)(1 − ε) ≤ c_ε w_n .   (1.3)
Equivalently, if t_n and w_n are two sequences such that w_n = o(t_n), one may define that a sequence of chains exhibits cutoff at t_n with window w_n iff
  lim_{λ→∞} lim inf_{n→∞} d_n(t_n − λ w_n) = 1 ,
  lim_{λ→∞} lim sup_{n→∞} d_n(t_n + λ w_n) = 0 .
To go from the first definition to the second, take t_n = t_mix^(n)(1/4).
Once we compare the forms of (1.2) and (1.3), it becomes clear that
Theorem 1 implies a bound on the cutoff window for any general family of
birth-and-death chains, provided that t_rel^(n) = o( t_mix^(n)(1/4) ).
Theorem 1 will be the key to establishing the criterion for total-variation
cutoff in a general family of birth-and-death chains.
1.1. Background. The cutoff phenomenon was first identified for the case
of random transpositions on the symmetric group in [11], and for the case
of random walks on the hypercube in [1]. It was given its name by Aldous
and Diaconis in their famous paper [3] from 1985, where they showed that
the top-in-at-random card shuffling process (repeatedly removing the top
card and reinserting it to the deck at a random position) has such a behavior.
Saloff-Coste [25] surveys the cutoff phenomenon for random walks on finite
groups.
Though many families of chains are believed to exhibit cutoff, proving
the occurrence of this phenomenon is often an extremely challenging task,
hence there are relatively few examples for which cutoff has been rigorously
shown. In 1996, Diaconis [7] surveyed the cutoff phenomenon, and asked if
one could determine whether or not it occurs in a given family of aperiodic
and irreducible finite Markov chains.
In 2004, the third author [24] observed that a necessary condition for cutoff in a family of reversible chains is that the product t_mix^(n)(1/4) · gap^(n) tends to infinity with n, or equivalently, t_rel^(n) = o( t_mix^(n)(1/4) ); see Lemma 2.1. The third author also conjectured that, in many natural classes of chains,
  Cutoff occurs if and only if t_rel^(n) = o( t_mix^(n)(1/4) ) .   (1.4)
In the general case, this condition does not always imply cutoff: Aldous
[2] and Pak (private communication via P. Diaconis) have constructed relevant examples (see also [5],[6] and [21]). This left open the question of
characterizing the classes of chains for which (1.4) holds.
One important class is the family of birth-and-death chains; see [10] for
many natural examples of such chains. They also occur as the magnetization chain of the mean-field Ising Model (see [12],[20]).
In 2006, Diaconis and Saloff-Coste [10] verified a variant of the conjecture (1.4) for birth-and-death chains, when the convergence to stationarity is
measured in separation, that is, according to the decay of sep(P0 (Xt ∈ ·), π),
where sep(µ, ν) = sup_{x∈Ω} ( 1 − µ(x)/ν(x) ). Note that, although sep(µ, ν) assumes
values in [0, 1], it is in fact not a metric (it is not even symmetric). See, e.g.,
[4, Chapter 4] for the connections between mixing-times in total-variation
and in separation.
More precisely, it was shown in [10] that any family of continuous-time
birth-and-death chains, started at 0, exhibits cutoff in separation if and only
if t_rel^(n) = o( t_sep^(n)(1/4; 0) ), where t_sep(ε; s) = min{ t : sep(P_s(X_t ∈ ·), π) < ε }. The
proof used a spectral representation of passage times [16, 17] and duality of
strong stationary times. Whether (1.4) holds with respect to the important
and widely used total-variation distance, remained unsettled.
1.2. Total-variation cutoff. In this work, we verify the conjecture (1.4) for
arbitrary birth-and-death chains, with the convergence to stationarity measured in total-variation distance. Our first result, which is a direct corollary
of Theorem 1, establishes this for lazy discrete-time irreducible birth-and-death chains. We then derive versions of this result for continuous-time irreducible birth-and-death chains, as well as for δ-lazy discrete chains (where
P(x, x) ≥ δ for all x ∈ Ω). In what follows, we omit the dependence on n
wherever it is clear from the context.
Corollary 2. Let (Xt(n) ) be a sequence of lazy irreducible birth-and-death
chains. Then it exhibits cutoff in total-variation distance iff t_mix^(n) · gap^(n)
tends to infinity with n. Furthermore, the cutoff window size is at most the
geometric mean between the mixing-time and relaxation time.
As we will later explain, the given bound √( t_rel · t_mix ) for the cutoff window is essentially tight, in the following sense. Suppose that the functions t_M(n) and t_R(n) ≥ 2 denote the mixing-time and relaxation-time of (X_t^(n)), a family of irreducible lazy birth-and-death chains. Then there exists a family (Y_t^(n)) of such chains with the parameters t_mix^(n) = (1 + o(1)) t_M(n) and t_rel^(n) = (1 + o(1)) t_R(n) that has a cutoff window of ( t_mix^(n) · t_rel^(n) )^{1/2}. In other words, no better bound on the cutoff window can be given without exploiting additional information on the chains.
Indeed, there are examples where additional attributes of the chain imply a cutoff window of order smaller than √( t_rel · t_mix ). For instance, the cutoff window has size t_rel for the Ehrenfest urn (see, e.g., [9]) and for the magnetization chain in the mean-field Ising Model at high temperature (see [12]).
Theorem 3.1, given in Section 3, extends Corollary 2 to the case of δ-lazy
discrete-time chains. We note that this is in fact the setting that corresponds
to the magnetization chain in the mean-field Ising Model (see, e.g., [20]).
Following is the continuous-time version of Corollary 2.
Theorem 3. Let (X_t^(n)) be a sequence of continuous-time birth-and-death chains. Then (X_t^(n)) exhibits cutoff in total-variation iff t_rel^(n) = o( t_mix^(n) ), and the cutoff window size is at most √( t_mix^(n)(1/4) · t_rel^(n) ).
By combining our results with those of [10] (while bearing in mind the
relation between the mixing-times in total-variation and in separation), one
can relate worst-case total-variation cutoff in any continuous-time family
of irreducible birth-and-death chains, to cutoff in separation started from 0.
This suggests that total-variation cutoff should be equivalent to separation
cutoff in such chains under the original definition of the worst starting point
(as opposed to fixing the starting point at one of the endpoints). Indeed,
it turns out that for any lazy or continuous-time birth-and-death chain, the
separation is always attained by the two endpoints, as formulated by the
next proposition.
Proposition 4. Let (Xt ) be a lazy (or continuous-time) birth-and-death chain
with stationary distribution π. Then for every integer (resp. real) t > 0, the
separation 1 − P_x(X_t = y)/π(y) is maximized when x, y are the endpoints.
That is, for such chains, the maximal separation from π at time t is
simply 1 − P^t(0, n)/π(n) (for the lazy chain with transition kernel P) or 1 − H_t(0, n)/π(n) (for the continuous-time chain with heat kernel H_t). As
we later show, this implies the following corollary:
Corollary 5. For any continuous-time family of irreducible birth-and-death
chains, cutoff in worst-case total-variation distance is equivalent to cutoff
in worst-case separation.
Note that, clearly, the above equivalence is in the sense that one cutoff
implies the other, yet the cutoff locations need not be equal (and sometimes
indeed are not equal, e.g., the Bernoulli-Laplace models, surveyed in [10,
Section 7]).
The rest of this paper is organized as follows. The proofs of Theorem 1
and Corollary 2 appear in Section 2. Section 3 contains the proofs of the
variants of Theorem 1 for the continuous-case (Theorem 3) and the δ-lazy
case. In Section 4, we discuss separation in general birth-and-death chains,
and provide the proofs of Proposition 4 and Corollary 5. The final section,
Section 5, is devoted to concluding remarks and open problems.
2. Cutoff in lazy birth-and-death chains
In this section we prove the main result, which shows that the condition
gap · t_mix → ∞ is necessary and sufficient for total-variation cutoff in lazy
birth-and-death chains.
2.1. Proof of Corollary 2. The fact that any family of lazy irreducible
birth-and-death chains satisfying t_mix · gap → ∞ exhibits cutoff follows by definition from Theorem 1, as does the bound √( t_rel · t_mix ) on the cutoff window size.
It remains to show that this condition is necessary for cutoff; this is known
to hold for any family of reversible Markov chains, using a straightforward
and well-known lower bound on t_mix in terms of t_rel (cf., e.g., [21]). We
include its proof for the sake of completeness.
Lemma 2.1. Let (X_t) denote a reversible Markov chain, and suppose that t_rel ≥ 1 + θ t_mix(1/4) for some fixed θ > 0. Then for any 0 < ε < 1,
  t_mix(ε) ≥ t_mix(1/4) · θ log(1/2ε) .   (2.1)
In particular, t_mix(ε)/t_mix(1/4) ≥ K for all K > 0 and ε < (1/2) exp(−K/θ).
Proof. Let P denote the transition kernel of X, and recall that the fact that X is reversible implies that P is a symmetric operator with respect to ⟨·, ·⟩_π and 1 is an eigenfunction corresponding to the trivial eigenvalue 1.
Let λ denote the largest absolute value of all nontrivial eigenvalues of P, and let f be the corresponding eigenfunction, P f = ±λ f, normalized to have ‖f‖_∞ = 1. Finally, let r be the state attaining |f(r)| = 1. Since f is orthogonal to 1, it follows that for any t,
  λ^t = | (P^t f)(r) − ⟨f, 1⟩_π | ≤ max_{x∈Ω} | Σ_{y∈Ω} ( P^t(x, y) f(y) − π(y) f(y) ) |
      ≤ ‖f‖_∞ max_{x∈Ω} ‖P^t(x, ·) − π‖_1 = 2 max_{x∈Ω} ‖P^t(x, ·) − π‖_TV .
Therefore, for any 0 < ε < 1 we have
  t_mix(ε) ≥ log_{1/λ}(1/2ε) ≥ log(1/2ε) / (λ^{−1} − 1) = (t_rel − 1) log(1/2ε) ,   (2.2)
and (2.1) immediately follows.
This completes the proof of Corollary 2.
2.2. Proof of Theorem 1. The essence of proving the theorem lies in the treatment of the regime where t_rel is much smaller than t_mix(1/4).
Theorem 2.2. Let (X_t) denote a lazy irreducible birth-and-death chain, and suppose that t_rel < ε^5 · t_mix(1/4) for some 0 < ε < 1/16. Then
  t_mix(4ε) − t_mix(1 − 2ε) ≤ (6/ε) √( t_rel · t_mix(1/4) ) .
Proof of Theorem 1. To prove Theorem 1 from Theorem 2.2, let ε > 0, and suppose first that t_rel < ε^5 · t_mix(1/4). If ε < 1/64, then the above theorem clearly implies that (1.2) holds for c_ε = 24/ε. Since the left-hand side of (1.2) is monotone decreasing in ε, this result extends to any value of ε < 1/2 by choosing
  c_1 = c_1(ε) = 24 max{ 1/ε, 64 } .
It remains to treat the case where t_rel ≥ ε^5 · t_mix(1/4). In this case, the sub-multiplicativity of the mixing-time (see, e.g., [4, Chapter 2]) gives
  t_mix(ε) ≤ t_mix(1/4) ⌈ (1/2) log_2(1/ε) ⌉   for any 0 < ε < 1/4 .   (2.3)
In particular, for ε < 1/4 our assumption on t_rel gives
  t_mix(ε) − t_mix(1 − ε) ≤ t_mix(ε) ≤ ε^{−5/2} log_2(1/ε) √( t_rel · t_mix(1/4) ) .
Therefore, a choice of
  c_2 = c_2(ε) = max{ log_2(1/ε)/ε^{5/2}, 64 }
gives (1.2) for any ε < 1/2 (the case ε > 1/4 again follows from monotonicity). Altogether, a choice of c_ε = max{c_1, c_2} completes the proof.
In the remainder of this section, we provide the proof of Theorem 2.2. To
this end, we must first establish several lemmas.
Let X = X(t) be the given (lazy irreducible) birth-and-death chain, and
from now on, let Ωn = {0, . . . , n} denote its state space. Let P denote the
transition kernel of X, and let π denote its stationary distribution. Our first
argument relates the mixing-time of the chain, starting from various starting
positions, with its hitting time from 0 to certain quantile states, defined next.
  Q(ε) := min{ k : Σ_{j=0}^{k} π(j) ≥ ε } ,   where 0 < ε < 1 .   (2.4)
Similarly, one may define the hitting times from n as follows:
  Q̃(ε) := max{ k : Σ_{j=k}^{n} π(j) ≥ ε } ,   where 0 < ε < 1 .   (2.5)
Remark. Throughout the proof, we will occasionally need to shift from Q(ε)
to Q̃(1 − ε), and vice versa. Though the proof can be written in terms of
Q, Q̃, for the sake of simplicity it will be easier to have the symmetry
Q(ε) = Q̃(1 − ε) for almost any ε > 0 .
(2.6)
This is easily achieved by noticing that at most n values of ε do not satisfy
(2.6) for a given chain X(t) on n states. Hence, for any given countable
family of chains, we can eliminate a countable set of all such problematic
values of ε and obtain the above mentioned symmetry.
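The quantile states in (2.4) and (2.5) are just cumulative-sum thresholds of π; the following small sketch (ours, continuing the informal numpy style above) computes them and illustrates the symmetry (2.6) on a toy distribution.

```python
import numpy as np

def Q(pi, eps):
    """Q(eps) = min{ k : pi(0) + ... + pi(k) >= eps }, as in (2.4)."""
    return int(np.searchsorted(np.cumsum(pi), eps))

def Q_tilde(pi, eps):
    """Qtilde(eps) = max{ k : pi(k) + ... + pi(n) >= eps }, as in (2.5)."""
    tail = np.cumsum(pi[::-1])[::-1]          # tail[k] = pi(k) + ... + pi(n)
    return int(np.max(np.nonzero(tail >= eps)[0]))

pi = np.ones(11) / 11                          # toy stationary distribution
eps = 0.3                                      # a "generic" eps in the sense of (2.6)
print(Q(pi, eps), Q_tilde(pi, 1 - eps))        # expected to coincide: 3 3
```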
Recalling that we defined P_k to be the probability given that the starting position is k, we define E_k and Var_k analogously. Finally, here and in what follows, let τ_k denote the hitting-time of the state k, that is, τ_k := min{ t : X(t) = k }.
Lemma 2.3. For any fixed 0 < ε < 1 and lazy irreducible birth-and-death chain X, the following holds for any t:
  ‖P^t(0, ·) − π‖_TV ≤ P_0( τ_Q(1−ε) > t ) + ε ,   (2.7)
and for all k ∈ Ω,
  ‖P^t(k, ·) − π‖_TV ≤ P_k( max{ τ_Q(ε), τ_Q(1−ε) } > t ) + 2ε .   (2.8)
Proof. Let X denote an instance of the lazy birth-and-death chain starting
from a given state k, and let X̃ denote another instance of the lazy chain
starting from the stationary distribution. Consider the following no-crossing
coupling of these two chains: at each step, a fair coin toss decides which
of the two chains moves according to its original (non-lazy) rule. Clearly,
this coupling does not allow the two chains to cross one another without
sharing the same state first (hence the name for the coupling). Furthermore,
notice that by definition, each of the two chains, given the number of coin
tosses that went its way, is independent of the other chain. Finally, for any
t, X̃(t), given the number of coin tosses that went its way until time t, has
the stationary distribution.
In order to deduce the mixing-times bounds, we show an upper bound on
the time it takes X and X̃ to coalesce. Consider the hitting time of X from
0 to Q(1 − ε), denoted by τQ(1−ε) . By the above argument, X̃(τQ(1−ε) ) enjoys
the stationary distribution, hence by the definition of Q(1 − ε),
  P( X̃(τ_Q(1−ε)) ≤ X(τ_Q(1−ε)) ) ≥ 1 − ε .
Therefore, by the property of the no-crossing coupling, X and X̃ must have
coalesced by time τQ(1−ε) with probability at least 1 − ε. This implies (2.7),
and it remains to prove (2.8). Notice that the above argument involving the
no-crossing coupling, this time with X starting from k, gives
  P( X̃(τ_Q(ε)) ≥ X(τ_Q(ε)) ) ≥ 1 − ε ,
and similarly,
  P( X̃(τ_Q(1−ε)) ≤ X(τ_Q(1−ε)) ) ≥ 1 − ε .
Therefore, the probability that X and X̃ coalesce between the times τ_Q(ε) and τ_Q(1−ε) is at least 1 − 2ε, completing the proof.
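The no-crossing coupling used in this proof is easy to simulate: a fair coin decides which of the two copies takes a (non-lazy) step, so the trajectories cannot swap sides without first meeting. The sketch below (ours) runs this coupling for two lazy simple random walks started at the two endpoints; the kernel and parameters are illustrative assumptions, not the general chain of the lemma.

```python
import numpy as np

rng = np.random.default_rng(0)

def srw_step(x, n):
    """One non-lazy step of the simple random walk on {0, ..., n} (reflecting)."""
    if x == 0:
        return 1
    if x == n:
        return n - 1
    return x + rng.choice((-1, 1))

def no_crossing_coupling(x0, y0, n, max_steps=10**6):
    """At each step a fair coin picks which copy makes the non-lazy move, so
    each copy is a lazy walk and the two cannot cross without coalescing."""
    x, y = x0, y0
    for t in range(1, max_steps + 1):
        if rng.random() < 0.5:
            x = srw_step(x, n)
        else:
            y = srw_step(y, n)
        if x == y:
            return t                        # coalescence time
    return max_steps

n = 30
times = [no_crossing_coupling(0, n, n) for _ in range(200)]
print("average coalescence time:", np.mean(times))
```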
Corollary 2.4. Let X(t) be a lazy irreducible birth-and-death chain on Ω_n. The following holds for any 0 < ε < 1/16:
  t_mix(1/4) ≤ 16 max{ E_0 τ_Q(1−ε) , E_n τ_Q(ε) } .   (2.9)
Proof. Clearly, for any source and target states x, y ∈ Ω, at least one of the endpoints s ∈ {0, n} satisfies E_s τ_y ≥ E_x τ_y (by the definition of the birth-and-death chain). Therefore, if T denotes the right-hand side of (2.9), then
  P_x( max{ τ_Q(ε), τ_Q(1−ε) } ≥ T ) ≤ P_x( τ_Q(ε) ≥ T ) + P_x( τ_Q(1−ε) ≥ T ) ≤ 1/8 ,
where the last transition is by Markov's inequality. The proof now follows directly from (2.8).
Remark. The above corollary shows that the order of the mixing time is at
most max{E0 τQ(1−ε) , En τQ(ε) }. It is in fact already possible (and not difficult)
to show that the mixing time has this order precisely. However, our proof
only uses the order of the mixing-time as an upper-bound, in order to finally
deduce a stronger result: this mixing-time is asymptotically equal to the
above maximum of the expected hitting times.
Having established that the order of the mixing-time is at most the expected hitting time of Q(1 − ε) and Q(ε) from the two endpoints of Ω,
assume here and in what follows, without loss of generality, that E0 τQ(1−ε)
is at least En τQ(ε) . Thus, (2.9) gives
  t_mix(1/4) ≤ 16 · E_0 τ_Q(1−ε)   for any 0 < ε < 1/16 .   (2.10)
A key element in our estimation is a result of Karlin and McGregor
[16, Equation (45)], reproved by Keilson [17], which represents hitting-times for birth-and-death chains in continuous-time as a sum of independent exponential variables (see [13], [9], [14] for more on this result). The
discrete-time version of this result was given by Fill [13, Theorem 1.2].
Theorem 2.5 ([13]). Consider a discrete-time birth-and-death chain with
transition kernel P on the state space {0, . . . , d} started at 0. Suppose that
d is an absorbing state, and suppose that the other birth probabilities pi ,
0 ≤ i ≤ d − 1, and death probabilities qi , 1 ≤ i ≤ d − 1, are positive. Then
the absorption time in state d has probability generating function
u 7→
d−1 h
Y
(1 − θ j )u i
j=0
1 − θ ju
,
(2.11)
where −1 ≤ θ j < 1 are the d non-unit eigenvalues of P. Furthermore, if
P has nonnegative eigenvalues then the absorption time in state d is distributed as the sum of d independent geometric random variables whose
failure probabilities are the non-unit eigenvalues of P.
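In the nonnegative-eigenvalue case, Theorem 2.5 lets one sample the absorption time directly as a sum of independent geometric variables whose failure probabilities are the non-unit eigenvalues. The sketch below (ours) does this for a hypothetical list of eigenvalues and compares the empirical mean with Σ_j 1/(1 − θ_j), the formula that reappears in (2.12) below.

```python
import numpy as np

rng = np.random.default_rng(1)

def absorption_time_samples(thetas, size):
    """Sample the absorption time of Theorem 2.5 as a sum of independent
    geometrics; numpy's geometric(p) counts trials until the first success,
    so p = 1 - theta_j gives a summand with mean 1/(1 - theta_j)."""
    return sum(rng.geometric(1.0 - th, size=size) for th in thetas)

thetas = [0.9, 0.8, 0.5, 0.2, 0.0]          # hypothetical nonnegative eigenvalues
tau = absorption_time_samples(thetas, 100_000)
print("empirical mean:", tau.mean())
print("sum of 1/(1 - theta_j):", sum(1.0 / (1.0 - th) for th in thetas))   # 19.25
```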
The above theorem provides means of establishing the concentration of
the passage time from left to right of a chain, where the target (right end)
state is turned into an absorbing state. Since we are interested in the hitting
time from one end to a given state (namely, from 0 to Q(1 − ε)), it is clearly
equivalent to consider the chain where the target state is absorbing. We
thus turn to handle the hitting time of an absorbing end of a chain starting
from the other end. The following lemma will infer its concentration from
Theorem 2.5.
Lemma 2.6. Let X(t) be a lazy irreducible birth-and-death chain on the
state space {0, . . . , d}, where d is an absorbing state, and let gap denote its
spectral gap. Then Var_0 τ_d ≤ (E_0 τ_d)/gap.
Proof. Let θ_0 ≥ · · · ≥ θ_{d−1} denote the d non-unit eigenvalues of the transition kernel of X. Recalling that X is a lazy irreducible birth-and-death chain, θ_i ≥ 0 for all i, hence the second part of Theorem 2.5 implies that τ_d ∼ Σ_{i=0}^{d−1} Y_i, where the Y_i-s are independent geometric random variables with means 1/(1 − θ_i). Therefore,
  E_0 τ_d = Σ_{i=0}^{d−1} 1/(1 − θ_i) ,   Var_0 τ_d = Σ_{i=0}^{d−1} θ_i/(1 − θ_i)² ,   (2.12)
which, using the fact that θ_0 ≥ θ_i for all i, gives
  Var_0 τ_d ≤ ( 1/(1 − θ_0) ) Σ_{i=0}^{d−1} 1/(1 − θ_i) = E_0 τ_d / gap ,
as required.
As we stated before, the hitting time of a state in our original chain has
the same distribution as the hitting time in the modified chain (where this
state is set to be an absorbing state). In order to derive concentration from
the above lemma, all that remains is to relate the spectral gaps of these two
chains. This is achieved by the next lemma.
Lemma 2.7. Let X(t) be a lazy irreducible birth-and-death chain, and gap be its spectral gap. Set 0 < ε < 1, and let ℓ = Q(1 − ε). Consider the modified chain Y(t), where ℓ is turned into an absorbing state, and let gap|[0,ℓ] denote its spectral gap. Then gap|[0,ℓ] ≥ ε · gap.
Proof. By [4, Chapter 3, Section 6], we have
  gap = min_{ f : E_π f = 0, f ≢ 0 } ⟨(I − P) f, f⟩_π / ⟨f, f⟩_π
      = min_{ f : E_π f = 0, f ≢ 0 } [ (1/2) Σ_{i,j} (f(i) − f(j))² P(i, j) π(i) ] / [ Σ_i f(i)² π(i) ] .   (2.13)
Observe that gap|[0,ℓ] is precisely 1 − λ, where λ is the largest eigenvalue of P|_ℓ, the principal sub-matrix on the first ℓ rows and columns, indexed by {0, . . . , ℓ − 1} (notice that this sub-matrix is strictly sub-stochastic, as X is irreducible). Being a birth-and-death chain, X is reversible, that is,
  P_{ij} π(i) = P_{ji} π(j)   for any i, j .
Therefore, it is simple to verify that P|_ℓ is a symmetric operator on R^ℓ with respect to the inner-product ⟨·, ·⟩_π; that is, ⟨P|_ℓ x, y⟩_π = ⟨x, P|_ℓ y⟩_π for every x, y ∈ R^ℓ, and hence the Rayleigh-Ritz formula holds (cf., e.g., [15]), giving
  λ = max_{ x ∈ R^ℓ, x ≠ 0 } ⟨P|_ℓ x, x⟩_π / ⟨x, x⟩_π .
It follows that
  gap|[0,ℓ] = 1 − λ = min_{ f : f ≢ 0, f(k)=0 ∀k≥ℓ } [ Σ_{i=0}^{n} ( f(i) − Σ_{j=0}^{n} P(i, j) f(j) ) f(i) π(i) ] / [ Σ_{i=0}^{n} f(i)² π(i) ]
              = min_{ f : f ≢ 0, f(k)=0 ∀k≥ℓ } [ (1/2) Σ_{0≤i,j≤n} (f(i) − f(j))² P(i, j) π(i) ] / [ Σ_{i=0}^{n} f(i)² π(i) ] ,   (2.14)
where the last equality is by the fact that P is stochastic.
Observe that (2.13) and (2.14) have similar forms, and for any f (which can also be treated as a random variable) we can write f̃ = f − E_π f so that E_π f̃ = 0. Clearly,
  (f(i) − f(j))² P(i, j) π(i) = (f̃(i) − f̃(j))² P(i, j) π(i) ,
hence in order to compare gap and gap|[0,ℓ], it will suffice to compare the denominators of (2.13) and (2.14). Noticing that
  Var_π(f) = Σ_i f̃(i)² π(i) ,   and   E_π f² = Σ_i f(i)² π(i) ,
we wish to bound the ratio between the above two terms. Without loss of generality, assume that E_π f = 1. Then every f with f(k) = 0 for all k ≥ ℓ satisfies
  E_π f² / π(f ≠ 0) = E_π[ f² | f ≠ 0 ] ≥ ( E_π[ f | f ≠ 0 ] )² = ( π(f ≠ 0) )^{−2} ,
and hence
  1/E_π f² ≤ π(f ≠ 0) ≤ 1 − ε ,   (2.15)
where the last inequality is by the definition of ℓ as Q(1 − ε). Once again, using the fact that E_π f = 1, we deduce that
  Var_π f / E_π f² = 1 − 1/E_π f² ≥ ε .   (2.16)
Altogether, by the above discussion on the comparison between (2.13) and (2.14), we conclude that gap|[0,ℓ] ≥ ε · gap.
Combining Lemma 2.6 and Lemma 2.7 yields the following corollary:
Corollary 2.8. Let X(t) be a lazy irreducible birth-and-death chain on Ωn ,
let gap denote its spectral-gap, and 0 < ε < 1. The following holds:
  Var_0 τ_Q(1−ε) ≤ E_0 τ_Q(1−ε) / (ε · gap) .   (2.17)
Remark. The above corollary implies the following statement: whenever
the product gap · E0 τQ(1−ε) tends to infinity with n, the hitting-time τQ(1−ε) is
concentrated. This is essentially the case under the assumptions of Theorem
2.2 (which include a lower bound on gap · t_mix(1/4) in terms of ε), as we already established in (2.10) that E_0 τ_Q(1−ε) ≥ (1/16) t_mix(1/4).
Recalling the definition of cutoff and the relation between the mixing
time and hitting times of the quantile states, we expect that the behaviors
of τQ(ε) and τQ(1−ε) would be roughly the same; this is formulated in the
following lemma.
Lemma 2.9. Let X(t) be a lazy irreducible birth-and-death chain on Ω_n, and suppose that for some 0 < ε < 1/16 we have t_rel < ε^4 · E_0 τ_Q(1−ε). Then for any fixed ε ≤ α < β ≤ 1 − ε:
  E_Q(α) τ_Q(β) ≤ (3/(2ε)) √( t_rel · E_0 τ_Q(1/2) ) .   (2.18)
Proof. Since by definition, EQ(ε) τQ(1−ε) ≥ EQ(α) τQ(β) (the left-hand-side can
be written as a sum of three independent hitting times, one of which being
the right-hand-side), it suffices to show (2.18) holds for α = ε and β = 1 − ε.
Consider the random variable ν, distributed according to the restriction
of the stationary distribution π to [Q(ε)] := {0, . . . , Q(ε)}, that is:
  ν(k) := ( π(k) / π([Q(ε)]) ) · 1_{[Q(ε)]}(k) ,   (2.19)
and let w ∈ R^Ω denote the vector w := 1_{[Q(ε)]} / π([Q(ε)]). As X is reversible, the following holds for any k:
  P^t(ν, k) = Σ_i P^t(i, k) π(i) w(i) = (P^t w)(k) · π(k) .
Thus, by the definition of the total-variation distance (for a finite space):
  ‖P^t(ν, ·) − π(·)‖_TV = (1/2) Σ_{k=0}^{n} π(k) | (P^t w)(k) − 1 | = (1/2) ‖P^t(w − 1)‖_{L¹(π)} ≤ (1/2) ‖P^t(w − 1)‖_{L²(π)} ,
where the last inequality follows from the Cauchy-Schwarz inequality. As w − 1 is orthogonal to 1 in the inner-product space ⟨·, ·⟩_{L²(π)}, we deduce that
  ‖P^t(w − 1)‖_{L²(π)} ≤ λ_2^t ‖w − 1‖_{L²(π)} ,
where λ_2 is the second largest eigenvalue of P. Therefore,
  ‖P^t(ν, ·) − π(·)‖_TV ≤ (1/2) λ_2^t ‖w − 1‖_{L²(π)} = (1/2) λ_2^t √( 1/π([Q(ε)]) − 1 ) ≤ λ_2^t / (2√ε) ,
where the last inequality is by the fact that π([Q(ε)]) ≥ ε (by definition).
Recalling that t_rel = gap^{−1} = 1/(1 − λ_2), define
  t_ε := 2 log(1/ε) t_rel .
Since log(1/x) ≥ 1 − x for all x ∈ (0, 1], it follows that λ_2^{t_ε} ≤ ε², thus (with room to spare)
  ‖P^{t_ε}(ν, ·) − π(·)‖_TV ≤ ε/2 .   (2.20)
We will next use a second moment argument to obtain an upper bound on the expected commute time. By (2.20) and the definition of the total-variation distance,
  P_ν( τ_Q(1−ε) ≤ t_ε ) ≥ ε − ‖P^{t_ε}(ν, ·) − π(·)‖_TV ≥ ε/2 ,
whereas the definition of ν as being supported by the range [Q(ε)] gives
  P_ν( τ_Q(1−ε) ≤ t_ε ) ≤ P_Q(ε)( τ_Q(1−ε) ≤ t_ε ) ≤ Var_Q(ε) τ_Q(1−ε) / ( E_Q(ε) τ_Q(1−ε) − t_ε )² .
Combining the two,
  E_Q(ε) τ_Q(1−ε) ≤ t_ε + √( (2/ε) Var_Q(ε) τ_Q(1−ε) ) .   (2.21)
Recall that starting from 0, the hitting time to point Q(1 − ε) is exactly the
sum of the hitting time from 0 to Q(ε) and the hitting time from Q(ε) to
Q(1 − ε), where both these hitting times are independent. Therefore,
  Var_Q(ε) τ_Q(1−ε) ≤ Var_0 τ_Q(1−ε) .   (2.22)
By (2.21) and (2.22) we get
  E_Q(ε) τ_Q(1−ε) ≤ t_ε + √( (2/ε) Var_0 τ_Q(1−ε) ) ≤ 2 log(1/ε) t_rel + (1/ε) √( 2 t_rel · E_0 τ_Q(1−ε) ) ,   (2.23)
where the last inequality is by Corollary 2.8.
We now wish to rewrite the bound (2.23) in terms of t_rel · E_0 τ_Q(1/2) using our assumptions on t_rel and E_0 τ_Q(1/2). First, twice plugging in the fact that t_rel < ε^4 · E_0 τ_Q(1−ε) yields
  E_Q(ε) τ_Q(1−ε) ≤ ( 2ε³ log(1/ε) + √2 · ε ) E_0 τ_Q(1−ε) ≤ (3/2) ε · E_0 τ_Q(1−ε) ,   (2.24)
where in the last inequality we used the fact that ε < 1/16. In particular,
  E_0 τ_Q(1−ε) ≤ E_0 τ_Q(1/2) + E_Q(ε) τ_Q(1−ε) ≤ E_0 τ_Q(1/2) + (3/2) ε · E_0 τ_Q(1−ε) ,
and after rearranging,
  E_0 τ_Q(1−ε) ≤ E_0 τ_Q(1/2) / ( 1 − (3/2) ε ) .   (2.25)
Plugging this result back in (2.23), we deduce that
  E_Q(ε) τ_Q(1−ε) ≤ 2 log(1/ε) · t_rel + (1/ε) √( 2 t_rel · E_0 τ_Q(1/2) / ( 1 − (3/2) ε ) ) .
A final application of the fact t_rel < ε^4 · E_0 τ_Q(1−ε), together with (2.25) and the fact that ε < 1/16, gives
  E_Q(ε) τ_Q(1−ε) ≤ [ ( 2ε² log(1/ε) + √2/ε ) / √( 1 − (3/2) ε ) ] √( t_rel · E_0 τ_Q(1/2) )
                  ≤ (3/(2ε)) √( t_rel · E_0 τ_Q(1/2) ) ,   (2.26)
as required.
We are now ready to prove the main theorem.
Proof of Theorem 2.2. Recall our assumption (without loss of generality)
  E_0 τ_Q(1−ε) ≥ E_n τ_Q(ε) ,   (2.27)
and define what would be two ends of the cutoff window:
  t⁻ = t⁻(γ) := E_0 τ_Q(1/2) − γ √( t_rel · E_0 τ_Q(1/2) ) ,
  t⁺ = t⁺(γ) := E_0 τ_Q(1/2) + γ √( t_rel · E_0 τ_Q(1/2) ) .
For the lower bound, let 0 < ε < 1/16; combining (2.10) with the assumption that t_rel ≤ ε^5 · t_mix(1/4) gives
  t_rel ≤ 16 ε^5 · E_0 τ_Q(1−ε) < ε^4 · E_0 τ_Q(1−ε) .   (2.28)
Thus, we may apply Lemma 2.9 to get
  E_0 τ_Q(ε) ≥ E_0 τ_Q(1/2) − E_Q(ε) τ_Q(1−ε) ≥ E_0 τ_Q(1/2) − (3/(2ε)) √( t_rel · E_0 τ_Q(1/2) ) .
Furthermore, recalling Corollary 2.8, we also have
  Var_0 τ_Q(ε) ≤ ( 1/(1 − ε) ) t_rel · E_0 τ_Q(ε) ≤ 2 t_rel · E_0 τ_Q(1/2) .
Therefore, by Chebyshev's inequality, the following holds for any γ > 3/(2ε):
  ‖P^{t⁻}(0, ·) − π‖_TV ≥ 1 − ε − P_0( τ_Q(ε) ≤ t⁻ ) ≥ 1 − ε − 2 ( γ − 3/(2ε) )^{−2} ,
and a choice of γ = 2/ε implies that (with room to spare, as ε < 1/16)
  t_mix(1 − 2ε) ≥ E_0 τ_Q(1/2) − (2/ε) √( t_rel · E_0 τ_Q(1/2) ) .   (2.29)
The upper bound will follow from a similar argument. Take 0 < ε < 1/16 and recall that t_rel < ε^4 · E_0 τ_Q(1−ε). Applying Corollary 2.8 and Lemma 2.9 once more (with (2.25) as well as (2.27) in mind) yields:
  E_n τ_Q(ε) ≤ E_0 τ_Q(1−ε) ≤ E_0 τ_Q(1/2) + (3/(2ε)) √( t_rel · E_0 τ_Q(1/2) ) ,
  Var_0 τ_Q(1−ε) ≤ (1/ε) t_rel · E_0 τ_Q(1−ε) ≤ t_rel · E_0 τ_Q(1/2) / ( ε (1 − (3/2) ε) ) ≤ (2/ε) t_rel · E_0 τ_Q(1/2) ,
  Var_n τ_Q(ε) ≤ (1/ε) t_rel · E_n τ_Q(ε) ≤ (2/ε) t_rel · E_0 τ_Q(1/2) .
Hence, combining Chebyshev's inequality with (2.8) implies that for all k and γ > 3/(2ε),
  ‖P^{t⁺}(k, ·) − π‖_TV ≤ 2ε + P_0( τ_Q(1−ε) > t⁺ ) + P_n( τ_Q(ε) > t⁺ ) ≤ 2ε + (4/ε) ( γ − 3/(2ε) )^{−2} .
Again choosing γ = 3/ε we conclude that (with room to spare)
  t_mix(4ε) ≤ E_0 τ_Q(1/2) + (3/ε) √( t_rel · E_0 τ_Q(1/2) ) .   (2.30)
We have thus established the cutoff window in terms of t_rel and E_0 τ_Q(1/2), and it remains to write it in terms of t_rel and t_mix. To this end, recall that (2.25) implies that
  t_rel < ε^4 · E_0 τ_Q(1−ε) ≤ ( ε^4 / ( 1 − (3/2) ε ) ) E_0 τ_Q(1/2) ,
hence (2.29) gives the following for any ε < 1/16:
  t_mix(1/4) ≥ t_mix(1 − 2ε) ≥ ( 1 − 2ε / √( 1 − (3/2) ε ) ) · E_0 τ_Q(1/2) ≥ (5/6) E_0 τ_Q(1/2) .   (2.31)
Altogether, (2.29), (2.30) and (2.31) give
  t_mix(4ε) − t_mix(1 − 2ε) ≤ (5/ε) √( t_rel · E_0 τ_Q(1/2) ) ≤ (6/ε) √( t_rel · t_mix(1/4) ) ,
completing the proof of the theorem.
2.3. Tightness of the bound on the cutoff window. The bound √( t_rel · t_mix ) on the size of the cutoff window, given in Corollary 2, is essentially tight in the following sense. Suppose that t_M(n) and t_R(n) ≥ 2 are the mixing-time t_mix(1/4) and relaxation-time t_rel of a family (X_t^(n)) of lazy irreducible birth-and-death chains that exhibits cutoff. For any fixed ε > 0, we construct a family (Y_t^(n)) of such chains satisfying
  (1 − ε) t_M ≤ t_mix^(n)(1/4) ≤ (1 + ε) t_M ,
  | t_rel^(n) − t_R | ≤ ε ,   (2.32)
and in addition, having a cutoff window of size ( t_mix^(n) · t_rel^(n) )^{1/2}.
Our construction is as follows: we first choose n reals in [0, 1), which
would serve as the nontrivial eigenvalues of our chain: any such sequence
can be realized as the nontrivial eigenvalues of a birth-and-death chain with
death probabilities all zero, and an absorbing state at n. Our choice of eigenvalues will be such that t_mix^(n) = (1/2 + o(1)) t_M, t_rel^(n) = (1/2) t_R, and the chain will exhibit cutoff with a window of √( t_M · t_R ). Finally, we perturb the chain to
make it irreducible, and consider its lazy version to obtain (2.32).
First, notice that tR = o(t M ) (a necessary condition for the cutoff, as
given by Corollary 2). Second, if a family of chains has mixing-time and
relaxation-time t M and tR respectively, then the cutoff point is without loss
of generality the expected hitting time from 0 to some state m (namely, for
m = Q(1/2)); let h_m denote this expected hitting time. Theorem 2.5, combined with Lemma 2.7, asserts that h_m ≤ m · t_R.
Setting ε > 0, we may assume that t_R ≥ 2(1 + ε) (since t_R ≥ 2, and a small additive error is permitted in (2.32)). Set K = (1/2) h_m / t_R, and define the following sequence of eigenvalues {λ_i}: the first ⌊K⌋ eigenvalues will be equal to λ := 1 − 2/t_R, and the remaining eigenvalues will all have the value λ′, such that the sum Σ_{i=1}^{n} 1/(1 − λ_i) equals (1/2) h_m (our choice of K and the fact that h_m ≤ n t_R ensures that λ′ ≤ λ). By Theorem 2.5, the birth-and-death
chain with absorbing state at n which realizes these eigenvalues satisfies:
  t_mix^(n) = (1 + o(1)) E_0 τ_n = (1/2 + o(1)) t_M ,
  t_rel^(n) = (1/2) t_R ,
  Var_0 τ_n ≥ ⌊K⌋ λ / (1 − λ)² ≥ ( (ε + o(1)) / (8(1 + ε)) ) t_M · t_R ,
where in the last inequality we merely considered the contribution of the first K geometric random variables to the variance. Continuing to focus on the sum of these K i.i.d. random variables, and recalling that K → ∞ with n (by the assumption t_R = o(t_M)), the Central Limit Theorem implies that
  P_0( τ_n − E_0 τ_n > γ √( t_M · t_R ) ) ≥ c(γ, ε) > 0   for any γ > 0 .
Hence, the cutoff window of this chain has order at least √( t_M · t_R ).
Clearly, perturbing the transition kernel to have all death-probabilities equal some ε′ (giving an irreducible chain) shifts every eigenvalue by at most ε′ (note that τ_n from 0 has the same distribution if n is an absorbing state). Finally, the lazy version of this chain has twice the values of E_0 τ_n and t_rel, giving the required result (2.32).
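The construction above can be mimicked numerically. The sketch below (ours, with hypothetical values of t_M and t_R and ad hoc variable names) builds the eigenvalue sequence, samples the absorption time via Theorem 2.5, and checks that its fluctuations are indeed of order √(t_M · t_R) up to constants.

```python
import numpy as np

rng = np.random.default_rng(2)

def tightness_eigenvalues(n, t_M, t_R):
    """floor(K) eigenvalues equal to lam = 1 - 2/t_R, the remaining n - K equal
    to a common lam2 chosen so that sum 1/(1 - lam_i) = t_M / 2."""
    K = int(t_M / (2 * t_R))
    lam = 1.0 - 2.0 / t_R
    remaining = t_M / 2 - K / (1.0 - lam)      # mass left for the other n - K terms
    lam2 = 1.0 - (n - K) / remaining           # so that (n - K)/(1 - lam2) = remaining
    return [lam] * K + [lam2] * (n - K)

n, t_M, t_R = 1000, 4000.0, 16.0               # hypothetical parameters, t_R = o(t_M)
lams = tightness_eigenvalues(n, t_M, t_R)
tau = sum(rng.geometric(1.0 - l, size=10_000) for l in lams)   # Theorem 2.5 sampling
print("mean:", tau.mean(), " (target t_M/2 =", t_M / 2, ")")
print("std :", tau.std(), " vs sqrt(t_M * t_R) =", round(np.sqrt(t_M * t_R), 1))
```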
3. Continuous-time and δ-lazy discrete-time chains
In this section, we discuss the versions of Corollary 2 (and Theorem
2.2) for the cases of either continuous-time chains (Theorem 3), or δ-lazy
discrete-time chains (Theorem 3.1). Since the proofs of these versions follow the original arguments almost entirely, we describe only the modifications required in the new settings.
3.1. Continuous-time birth-and-death chains. In order to prove Theorem 3, recall the definition of the heat-kernel of a continuous-time chain
as Ht (x, y) := P x (Xt = y), rewritten in matrix-representation as Ht = et(P−I)
(where P is the transition kernel of the chain).
It is well known (and easy) that if H_t, H̃_t are the heat-kernels corresponding to the continuous-time chain and the lazy continuous-time chain, then H_t = H̃_{2t} for any t. This follows immediately from the next simple and well-known matrix-exponentiation argument:
  H_t = e^{t(P−I)} = e^{2t( (P+I)/2 − I )} = H̃_{2t} .   (3.1)
Hence, it suffices to show cutoff for the lazy continuous-time chains. We
therefore need to simply adjust the original proof dealing with lazy irreducible chains, from the discrete-time case to the continuous-time case.
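The identity (3.1) is also easy to verify numerically; the following sketch (ours) compares the two heat kernels for an arbitrary small birth-and-death kernel using scipy's matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

# A small birth-and-death transition kernel P (rows sum to 1); any such kernel works.
P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.6, 0.4]])
I = np.eye(3)
P_lazy = 0.5 * (P + I)                     # kernel of the lazy chain

t = 2.7
H_t = expm(t * (P - I))                    # heat kernel of the chain at time t
H_lazy_2t = expm(2 * t * (P_lazy - I))     # heat kernel of the lazy chain at time 2t
print(np.allclose(H_t, H_lazy_2t))         # True, illustrating (3.1)
```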
The first modification is in the proof of Lemma 2.3, where a no-crossing
coupling was constructed for the discrete-time chain. Clearly, no such coupling is required for the continuous case, as the event that the two chains
cross one another at precisely the same time now has probability 0.
To complete the proof, one must show that the statement of Corollary
2.8 still holds; indeed, this follows from the fact that the hitting time τQ(1−ε)
of the discrete-time chain is concentrated, combined with the concentration
of the sum of the exponential variables that determine the timescale of the
continuous-time chain.
3.2. Discrete-time δ-lazy birth-and-death chains.
Theorem 3.1. Let (X_t^(n)) be a family of discrete-time δ-lazy birth-and-death chains, for some fixed δ > 0. Then (X_t^(n)) exhibits cutoff in total-variation iff t_rel^(n) = o( t_mix^(n) ), and the cutoff window size is at most √( t_mix^(n)(1/4) · t_rel^(n) ).
Proof. In order to extend Theorem 2.2 and Corollary 2 to δ-lazy chains, notice that there are precisely two locations where their proofs rely on the fact that the chain is lazy. The first location is the construction of the no-crossing coupling in the proof of Lemma 2.3. The second location is the fact that all eigenvalues are non-negative in the application of Theorem 2.5.
Though we can no longer construct a no-crossing coupling, Lemma 2.3 can be mended as follows: recalling that P(x, x) ≥ δ for all x ∈ Ω, define P′ = (1/(1 − δ))(P − δI), and notice that P′ and P share the same stationary distribution (and hence define the same quantile states Q(ε) and Q(1 − ε) on Ω). Let X′ denote a chain which has the transition kernel P′, and X denote its coupled appropriate lazy version: the number of steps it takes X to perform the corresponding move of X′ is an independent geometric random variable with mean 1/(1 − δ).
Set p = 1 − δ(1 − 2ε), and condition on the path of the chain X′, from the starting point and until this chain completes T = log_p ε rounds from Q(ε) to Q(1 − ε), back and forth. As argued before, as X follows this path, upon completion of each commute time from Q(ε) to Q(1 − ε) and back, it has probability 1 − 2ε to cross X̃. Hence, by definition, in each such trip there is a probability of at least δ(1 − 2ε) that X and X̃ coalesce. Crucially, these events are independent, since we pre-conditioned on the trajectory of X′. Thus, after T such trips, X and X̃ have a probability of at least 1 − ε to meet, as required.
It remains to argue that the expressions for the expectation and variance
of the hitting-times, which were derived from Theorem 2.5, remain unchanged when moving from the 1/2-lazy setting to δ-lazy chains. Indeed, this follows directly from the expression for the probability generating function, as given in (2.11).
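The decomposition used in this argument can be checked mechanically: writing P = δI + (1 − δ)P′ with P′ = (P − δI)/(1 − δ), the kernel P′ is again stochastic and has the same stationary distribution. A minimal sketch (ours, with an arbitrary 3-state δ-lazy kernel as a stand-in) follows.

```python
import numpy as np

delta = 0.3
# Any delta-lazy birth-and-death kernel, i.e. P(x, x) >= delta for all x.
P = np.array([[0.7, 0.3, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
I = np.eye(3)

P_prime = (P - delta * I) / (1 - delta)                   # the "de-lazified" kernel
assert np.allclose(P, delta * I + (1 - delta) * P_prime)  # P = delta*I + (1-delta)*P'
assert np.all(P_prime >= -1e-12) and np.allclose(P_prime.sum(axis=1), 1.0)

# Same stationary distribution: pi P = pi holds iff pi P' = pi.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()
print(np.allclose(pi @ P_prime, pi))                      # True
```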
4. Separation in birth-and-death chains
In this section, we provide the proofs for Proposition 4 and Corollary 5.
Let (Xt ) be an ergodic birth-and-death chain on Ω = {0, . . . , n}, with a
transition kernel P and stationary distribution π. Let dsep (t; x) denote the
separation of X, started from x, from π, that is
  d_sep(t; x) := max_{y∈Ω} ( 1 − P^t(x, y)/π(y) ) .
According to this notation, d_sep(t) := max_{x∈Ω} d_sep(t; x) measures separation
from the worst starting position.
The chain X is called monotone iff Pi,i+1 + Pi+1,i ≤ 1 for all i < n. It is well
known (and easy to show) that if X is monotone, then the likelihood ratio
Pt (0, k)/π(k) is monotone decreasing in k (see, e.g., [8]). An immediate
corollary of this fact is that the separation of such a chain from the
stationary distribution is the same for the two starting points {0, n}. We
provide the proof of this simple fact for completeness.
Lemma 4.1. Let P be the transition kernel of a monotone birth-and-death
chain on Ω = {0, . . . , n}. If f : Ω → R is a monotone increasing (decreasing) function, so is P f . In particular,
  P^t(k, 0) ≥ P^t(k + 1, 0)   for any t ≥ 0 and 0 ≤ k < n .   (4.1)
Proof. Let {pi }, {qi } and {ri } denote the birth, death and holding probabilities
of the chain respectively, and for convenience, let f (x) be 0 for any x < Ω.
Assume without loss of generality that f is increasing (otherwise, one may
consider − f ). In this case, the following holds for every 0 ≤ x < n:
  P f(x) = q_x f(x − 1) + r_x f(x) + p_x f(x + 1) ≤ (1 − p_x) f(x) + p_x f(x + 1) ,
and
  P f(x + 1) = q_{x+1} f(x) + r_{x+1} f(x + 1) + p_{x+1} f(x + 2) ≥ q_{x+1} f(x) + (1 − q_{x+1}) f(x + 1) .
Therefore, by the monotonicity of f and the fact that p x +q x+1 ≤ 1 we obtain
that P f (x) ≤ P f (x + 1), as required.
Finally, the monotonicity of the chain implies that Pt (·, 0) is monotone
decreasing for t = 1, hence the above argument immediately implies that
this is the case for any integer t ≥ 1.
By reversibility, the following holds for any monotone birth-and-death
chain with transition kernel P and stationary distribution π:
  P^t(0, k)/π(k) ≥ P^t(0, k + 1)/π(k + 1)   for any t ≥ 0 and 0 ≤ k < n .   (4.2)
[Figure 1. A monotone irreducible birth-and-death chain on {0, 1, 2, 3} where worst separation may not involve the endpoints. Edge weights denote the conductances (see Example 4.3).]
In particular, the maximum of 1 − Pt (0, j)/π( j) is attained at j = n, and the
separation is precisely the same when starting at either of the two endpoints:
Corollary 4.2. Let (Xt ) be a monotone irreducible birth-and-death chain on
Ω = {0, . . . , n} with transition kernel P and stationary distribution π. Then
for any integer t,
  sep( P^t(0, ·), π ) = 1 − P^t(0, n)/π(n) = 1 − P^t(n, 0)/π(0) = sep( P^t(n, ·), π ) .
Since lazy chains are a special case of monotone chains, the relation (3.1)
between lazy and non-lazy continuous-time chains gives an analogous statement for continuous-time irreducible birth-and-death chains. That is, for
any real t > 0,
  sep( H_t(0, ·), π ) = 1 − H_t(0, n)/π(n) = 1 − H_t(n, 0)/π(0) = sep( H_t(n, ·), π ) ,
where Ht is the heat-kernel of the chain, and π is its stationary distribution.
Unfortunately, when attempting to generalize Lemma 4.1 (and Corollary
4.2) to an arbitrary starting point, one finds that it is no longer the case
that the worst separation involves one of the endpoints, even if the chain is
monotone and irreducible. This is demonstrated next.
Example 4.3. Let P and π denote the transition kernel and stationary distribution of the birth-and-death chain on the state space Ω = {0, 1, 2, 3}, given
in Figure 1. It is easy to verify that this chain is monotone and irreducible,
and furthermore, that the following holds:
  min_{y∈Ω} P²(1, y)/π(y)   is attained solely at y = 2,
  min_{x,y∈Ω} P³(x, y)/π(y)   is attained solely at x = y = 1.
Thus, when starting from an interior point, the worst separation might not
be attained by an endpoint, and in addition, the overall worst separation may
not involve the endpoints at all.
However, as we next show, once we replace the monotonicity requirement with the stricter assumption that the chain is lazy, it turns out that the
above phenomenon can no longer occur.
The approach that led to the following result relied on maximal couplings
(see, e.g., [19], [22] and [18], and also [23, Chapter III.3]). We provide a
straightforward proof for it, based on an inductive argument.
Lemma 4.4. Let P be the transition kernel of a lazy birth-and-death chain.
Then for any unimodal non-negative f : Ω → R+ , the function P f is also
unimodal. In particular, for any integer t, all columns of Pt are unimodal.
Proof. Let {pi }, {qi } and {ri } be the birth, death and holding probabilities of
the chain respectively, and for convenience, define f (i) to be 0 for i ∈ N \ Ω.
Let m ∈ Ω be a state achieving the global maximum of f , and set g = P f .
For every 0 < x < m, the unimodality of f implies that
g(x) = q x f (x − 1) + r x f (x) + p x f (x + 1)
≥ q x f (x − 1) + (1 − q x ) f (x) ,
and similarly,
g(x − 1) = q x−1 f (x − 2) + r x−1 f (x − 1) + p x−1 f (x)
≤ (1 − p x−1 ) f (x − 1) + p x−1 f (x) .
Therefore, by the monotonicity of the chain, we deduce that g(x) ≥ g(x −1).
The same argument shows that for every m < y < n we have g(y) ≥ g(y + 1).
As g is increasing on {0, . . . , m − 1} and decreasing on {m + 1, . . . , n},
unimodality will follow from showing that g(m) ≥ min {g(m − 1), g(m + 1)}
(the global maximum of g would then be attained at m0 ∈ {m − 1, m, m + 1}).
To this end, assume without loss of generality that f (m − 1) ≥ f (m + 1). The
following holds:
g(m) = qm f (m − 1) + rm f (m) + pm f (m + 1)
≥ rm f (m) + (1 − rm ) f (m + 1) ,
and
g(m + 1) = qm+1 f (m) + rm+1 f (m + 1) + pm+1 f (m + 2)
≤ qm+1 f (m) + (1 − qm+1 ) f (m + 1) .
Thus, the laziness of the chain implies that g(m) ≥ g(m+1), as required. By reversibility, Lemma 4.4 has the following corollary:
Corollary 4.5. Let (Xt ) be a lazy and irreducible birth-and-death chain
on the state space Ω = {0, . . . , n}, with transition kernel P and stationary
distribution π. Then for any s ∈ Ω and any integer t ≥ 0, the function
f (x) := Pt (s, x)/π(x) is unimodal.
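Lemma 4.4 and Corollary 4.5 can be probed numerically: for a randomly generated lazy birth-and-death kernel, every column of P^t should be unimodal (and hence, by reversibility, so is x ↦ P^t(s, x)/π(x)). The sketch below (ours; the generator and parameters are arbitrary) performs this check.

```python
import numpy as np

rng = np.random.default_rng(3)

def random_lazy_bd_kernel(n):
    """Random lazy birth-and-death kernel on {0, ..., n} with P(i, i) > 1/2."""
    P = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        up = rng.uniform(0, 0.5) if i < n else 0.0
        down = rng.uniform(0, 0.5 - up) if i > 0 else 0.0
        P[i, i] = 1.0 - up - down
        if i < n:
            P[i, i + 1] = up
        if i > 0:
            P[i, i - 1] = down
    return P

def is_unimodal(col, tol=1e-12):
    """Unimodal: the vector never increases again once it has strictly decreased."""
    went_down = False
    for d in np.diff(col):
        if d < -tol:
            went_down = True
        elif d > tol and went_down:
            return False
    return True

n, t = 15, 7
Pt = np.linalg.matrix_power(random_lazy_bd_kernel(n), t)
print(all(is_unimodal(Pt[:, j]) for j in range(n + 1)))   # expected: True
```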
Remark. The maximum of the unimodal function f (x) in Corollary 4.5 need
not be located at x = s, the starting point of the chain. This can be demonstrated, e.g., by the biased random walk.
Proposition 4 will immediately follow from the above results.
Proof of Proposition 4. We begin with the case where (X_t) is a lazy birth-and-death chain, with transition kernel P. Let s ∈ Ω be a starting position
which maximizes dsep (t). Then by Corollary 4.5, dsep (t) is either equal to
1 − Pt (s, 0)/π(0) or to 1 − Pt (s, n)/π(n). Consider the first case (the second
case is treated by the exact same argument); by reversibility,
  d_sep(t) = 1 − P^t(0, s)/π(s) ≤ 1 − P^t(0, n)/π(n) ,
where the last inequality is by Lemma 4.1. Therefore, the endpoints of X
assume the worst separation distance at every time t.
To show that dsep (t) = 1−Ht (0, n)/π(n) in the continuous-time case, recall
that
  H_t(x, y) = P_x(X_t = y) = E[ P^{N_t}(x, y) ] = Σ_k P^k(x, y) P(N_t = k) ,
where P is the transition kernel of the corresponding discrete-time chain,
and Nt is a Poisson random variable with mean t. Though Pk has unimodal
columns for any integer k, a linear combination of the matrices Pk does not
necessarily maintain this property. We therefore consider a variant of the
process, where Nt is approximated by an appropriate binomial variable.
Fix t > 0, and for any integer m ≥ 2t let N′_t(m) be a binomial random variable with parameters Bin(m, t/m). Since N′_t(m) converges in distribution to N_t, it follows that H′_t(m) := E[ P^{N′_t(m)} ] converges to H_t as m → ∞. Writing N′_t(m) as a sum of independent indicators {B_i : i = 1, . . . , m} with success probabilities t/m, and letting Q := (1 − t/m) I + (t/m) P, we have
  H′_t(m) = E[ P^{Σ_{i=1}^{m} B_i} ] = Q^m .
Note that for every m ≥ 2t, the transition kernel Q corresponds to a lazy birth-and-death chain, thus Lemma 4.4 ensures that H′_t(m) has unimodal columns for every such m. In particular, H_t = lim_{m→∞} H′_t(m) has unimodal columns. This completes the proof.
Proof of Corollary 5. By Theorem 3, total-variation cutoff (from the worst starting position) occurs iff t_rel = o( t_mix(1/4) ). Combining Proposition 4 with [10, Theorem 5.1] we deduce that separation cutoff (from the worst starting point) occurs if and only if t_rel = o( t_sep(1/4) ) (where t_sep(ε) = max_x t_sep(ε; x) is the minimum t such that max_x sep(H_t(x, ·), π) ≤ ε).
Therefore, the proof will follow from the well-known fact that t_sep(1/4) and t_mix(1/4) have the same order. One can obtain this fact, for instance, from Lemma 7 of [4, Chapter 4], which states that (as the chain is reversible)
  d̄(t) ≤ d_sep(t) ,   and   d_sep(2t) ≤ 1 − ( 1 − d̄(t) )² ,
where d̄(t) := max_{x,y∈Ω} ‖P_x(X_t ∈ ·) − P_y(X_t ∈ ·)‖_TV. Combining this with the sub-multiplicativity of d̄(t), and the fact that d(t) ≤ d̄(t) ≤ 2d(t) (see Definition 3.1 in [4, Chapter 4]), we obtain that for any t,
  d(t) ≤ d_sep(t) ,   and   d_sep(8t) ≤ 2 d̄(4t) ≤ 32 (d(t))⁴ .
This in turn implies that (1/8) t_sep(1/4) ≤ t_mix(1/4) ≤ t_sep(1/4), as required.
5. Concluding remarks and open problems
• As stated in Corollary 5, our results on continuous-time birth-and-death chains, combined with those of [10], imply that cutoff in total-variation distance is equivalent to separation cutoff for such chains.
This raises the following question:
Question 5.1. Let (Xt(n) ) denote a family of irreducible reversible
Markov chains, either in continuous-time or in lazy discrete-time.
Is it true that there is cutoff in separation iff there is cutoff in total-variation distance (where the distance in both cases is measured
from the worst starting position)?
• One might assume that the cutoff-criterion (1.4) also holds for close
variants of birth-and-death chains. For that matter, we note that
Aldous’s example of a family of reversible Markov chains, which
satisfies t_rel^(n) = o( t_mix^(n)(1/4) ) and yet does not exhibit cutoff, can be written so that each of its chains is a biased random walk on a cycle. In
other words, it suffices that a family of birth-and-death chains permits the one extra transition between states 0 and n, and already the
cutoff criterion (1.4) ceases to hold.
• Finally, it would be interesting to characterize the cutoff criterion in
additional natural families of ergodic Markov chains.
Question 5.2. Does (1.4) hold for the family of lazy simple random
walks on vertex transitive bounded-degree graphs?
Acknowledgments
We thank Persi Diaconis, Jim Fill, Jim Pitman and Laurent Saloff-Coste
for useful comments on an early draft.
References
[1] D. Aldous, Random walks on finite groups and rapidly mixing Markov chains, Seminar on probability, XVII, 1983, pp. 243–297.
[2] D. Aldous, American Institute of Mathematics (AIM) research workshop "Sharp Thresholds for Mixing Times" (Palo Alto, December 2004). Summary available at http://www.aimath.org/WWN/mixingtimes.
[3] D. Aldous and P. Diaconis, Shuffling cards and stopping times, Amer. Math. Monthly 93 (1986), 333–348.
[4] D. Aldous and J. A. Fill, Reversible Markov Chains and Random Walks on Graphs. In preparation, http://www.stat.berkeley.edu/~aldous/RWG/book.html.
[5] G.-Y. Chen, The cut-off phenomenon for finite Markov chains, Ph.D. dissertation, Cornell University (2006).
[6] G.-Y. Chen and L. Saloff-Coste, The cutoff phenomenon for ergodic Markov processes, Electronic Journal of Probability 13 (2008), 26–78.
[7] P. Diaconis, The cutoff phenomenon in finite Markov chains, Proc. Nat. Acad. Sci. U.S.A. 93 (1996), no. 4, 1659–1664.
[8] P. Diaconis and J. A. Fill, Strong stationary times via a new form of duality, Ann. Probab. 18 (1990), no. 4, 1483–1522.
[9] P. Diaconis and L. Miclo, On times to quasi-stationarity for birth and death processes. preprint.
[10] P. Diaconis and L. Saloff-Coste, Separation cut-offs for birth and death chains, Ann. Appl. Probab. 16 (2006), no. 4, 2098–2122.
[11] P. Diaconis and M. Shahshahani, Generating a random permutation with random transpositions, Z. Wahrsch. Verw. Gebiete 57 (1981), no. 2, 159–179.
[12] J. Ding, E. Lubetzky, and Y. Peres, The mixing time evolution of Glauber dynamics for the Mean-field Ising Model. preprint.
[13] J. A. Fill, The passage time distribution for a birth-and-death chain: Strong stationary duality gives a first stochastic proof. preprint.
[14] J. A. Fill, On hitting times and fastest strong stationary times for skip-free chains. preprint.
[15] P. R. Halmos, Finite-dimensional vector spaces, Springer-Verlag, New York, 1974.
[16] S. Karlin and J. McGregor, Coincidence properties of birth and death processes, Pacific J. Math. 9 (1959), 1109–1140.
[17] J. Keilson, Markov chain models – rarity and exponentiality, Applied Mathematical Sciences, vol. 28, Springer-Verlag, New York, 1979.
[18] S. Goldstein, Maximal coupling, Z. Wahrsch. Verw. Gebiete 46 (1978/79), no. 2, 193–204.
[19] D. Griffeath, A maximal coupling for Markov chains, Z. Wahrsch. Verw. Gebiete 31 (1975), 95–106.
[20] D. A. Levin, M. Luczak, and Y. Peres, Glauber dynamics for the Mean-field Ising Model: cut-off, critical power law, and metastability. to appear.
[21] D. A. Levin, Y. Peres, and E. Wilmer, Markov Chains and Mixing Times, 2007. In preparation.
[22] J. W. Pitman, On coupling of Markov chains, Z. Wahrsch. Verw. Gebiete 35 (1976), no. 4, 315–322.
[23] T. Lindvall, Lectures on the coupling method, Wiley Series in Probability and Mathematical Statistics, John Wiley & Sons Inc., New York, 1992. A Wiley-Interscience Publication.
[24] Y. Peres, American Institute of Mathematics (AIM) research workshop “Sharp
Thresholds for Mixing Times” (Palo Alto, December 2004). Summary available at
http://www.aimath.org/WWN/mixingtimes.
[25] L. Saloff-Coste, Random walks on finite groups, Probability on discrete structures,
2004, pp. 263–346.
Jian Ding
Department of Statistics, UC Berkeley, Berkeley, CA 94720, USA.
E-mail address: [email protected]
Eyal Lubetzky
Microsoft Research, One Microsoft Way, Redmond, WA 98052-6399, USA.
E-mail address: [email protected]
Yuval Peres
Microsoft Research, One Microsoft Way, Redmond, WA 98052-6399, USA.
E-mail address: [email protected]