QUASIRANDOM RUMOUR SPREADING ON THE COMPLETE GRAPH
IS AS FAST AS RANDOMIZED RUMOUR SPREADING
NIKOLAOS FOUNTOULAKIS AND ANNA HUBER
Abstract. In this paper, we provide a detailed comparison between a fully randomized
protocol for rumour spreading on a complete graph and a quasirandom protocol introduced
by Doerr, Friedrich and Sauerwald (2008). In the former, initially there is one vertex which
holds a piece of information and during each round every one of the informed vertices
chooses one of its neighbours uniformly at random and independently and informs it. In
the quasirandom version of this method (cf. Doerr et al. 2008) each vertex has a cyclic list
of its neighbours. Once a vertex has been informed, it chooses uniformly at random only
one neighbour. In the following round, it informs this neighbour and at each subsequent
round it picks the next neighbour from its list and informs it. We give a precise analysis
of the evolution of the quasirandom protocol on the complete graph with n vertices and
show that it evolves essentially in the same way as the randomized protocol. In particular,
if S(n) denotes the number of rounds that are needed until all vertices are informed, we
show that for any slowly growing function ω(n)
log2 n + ln n − 4 ln ln n ≤ S(n) ≤ log2 n + ln n + ω(n),
with probability 1 − o(1).
1. Introduction
The study of the dissemination of information within networks has become a topic of
intense research in recent years, mainly due to the development and the widespread applications of networks such as the Internet or various types of distributed networks. In the
latter, for example, there might be a given server/node who wants to pass a certain message
to every other node so that the whole network is informed about a certain situation. The
main issue one is trying to address is given a network and a piece of information that is
currently held by one (or possibly very few) of the nodes of the network, what is an efficient
method to spread this all over the network as quickly and as reliably as possible. However,
in this paper, we will not concern ourselves with reliability issues and we will assume that
information is transmitted from node to node without errors.
A simple but nonetheless non-trivial method for the spread of information within a connected network is the so-called randomized broadcasting. This method proceeds in stages,
assuming that initially only one node of the network possesses a piece of information. During
the first stage, this vertex informs one uniformly chosen neighbour. Now, if I(t) denotes the
set of informed vertices after the first t stages, then during the t + 1th stage every node in
I(t) chooses one of its neighbours uniformly at random, independently of every other vertex
in I(t), and informs it.
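The randomized protocol just described is straightforward to simulate on the complete graph. The sketch below is our own illustration (the function name `randomized_push` and the choice to allow self-contacts, which merely simplifies the simulation, are not from the paper):

```python
import math
import random

def randomized_push(n, seed=0):
    """Simulate randomized broadcasting on the complete graph with n vertices.

    In every stage, each informed vertex contacts a vertex chosen uniformly
    at random (self-contacts allowed, as a harmless simplification).
    Returns a sample of S(n): the number of stages until all are informed.
    """
    rng = random.Random(seed)
    informed = {0}                 # initially a single informed vertex
    stages = 0
    while len(informed) < n:
        contacts = [rng.randrange(n) for _ in informed]
        informed.update(contacts)  # every contacted vertex becomes informed
        stages += 1
    return stages

n = 10_000
s = randomized_push(n)
print(s)  # typically close to log2(n) + ln(n), about 22.5 for this n
```

Since the number of informed vertices can at most double in a stage, every run satisfies S(n) ≥ ⌈log₂ n⌉.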
This model of dissemination was initially studied by Frieze and Grimmett [6] on the
complete graph with n vertices, where they proved that all nodes are informed in (1 +
o_p(1))(log₂ n + ln n) stages, where o_p(1) denotes a random variable X_n which converges to
2000 Mathematics Subject Classification. 60C05,60G35.
AN EXTENDED ABSTRACT OF THIS PAPER WILL APPEAR IN THE PROCEEDINGS OF
EUROCOMB 2009.
0 in probability, as n → ∞ (that is, for every ε > 0 we have P(|Xn | > ε) = o(1)). Later,
Pittel [9] improved on this, showing that in fact the randomized broadcasting protocol
informs all vertices within log₂ n + ln n + O_p(1) stages (where O_p(1) denotes a random variable X_n such that |X_n| ≤ ω(n) with probability 1 − o(1), for every ω with ω(n) → ∞ as n → ∞). Upper bounds for arbitrary connected graphs were given by Feige, Peleg,
Raghavan and Upfal [5]. Also, they determined the correct order of magnitude in the case
of hypercubes as well as G(n, p) random graphs, where the edge probability p exceeds the
connectivity threshold (see [3]).
The main drawback of the randomized broadcasting method is the amount of randomness
used, as each vertex must make a random choice at each stage. Doerr, Friedrich and Sauerwald [4] introduced a quasirandom analogue of this method, in order to reduce significantly
the amount of random bits that every node uses. This was inspired by the rotor-router model
introduced by Priezzhev, Dhar, Dhar, and Krishnamurthy [10] as a deterministic simulation
of simple random walks on graphs. This was later popularized by Propp [7] and became
known as the Propp machine. We will describe the quasirandom broadcasting method for
the complete graph.
Let n be a positive integer and let Vn = {1, . . . , n}. We associate with each vertex v ∈ Vn a
cyclic permutation of Vn , which we denote by `(v). (For the simplicity of our calculations, we
assume that any given vertex can also contact itself.) This determines the order according to
which a vertex informs the other vertices. We assume that each vertex has a pointer which
always points at the vertex which is to be informed at the next stage. We let Ln = {`(v)}v∈Vn
and we refer to the pair Qn := (Vn , Ln ) as a quasirandom rumour spreading configuration.
For arbitrary graphs, each list contains only the neighbouring vertices of the corresponding
vertex.
Quasirandom rumour spreading also proceeds in stages or steps. We assume that in
the beginning only one vertex is aware of the information and, without loss of generality,
we assume that this vertex is 1. Initially, it selects a position in `(1) uniformly at random.
During the first stage, vertex 1 informs the vertex which is at this position and, subsequently,
moves its pointer to the next position (that is, if the pointer was in position i it moves to
position i + 1 mod n). Also the newly informed vertex chooses at random a position in its
own cyclic ordering. More generally, assume that after t ≥ 1 stages there are I(t) informed
vertices and let I(t) denote their set (also I(0) = {1}). At the t + 1th stage, each v ∈ I(t)
informs the vertex that is indicated by its pointer (if the latter is uninformed) and, then,
it moves the pointer to the next position. Let N (t + 1) denote the set of newly informed
vertices and let N (t + 1) = |N (t + 1)| - thus I(t + 1) = I(t) ∪ N (t + 1). At the end of the
t + 1th stage, each v ∈ N (t + 1) selects uniformly at random a position in `(v) and places
its pointer there.
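The stages just described can be made concrete in code. The sketch below is our own illustration (not from the paper): it fixes every cyclic list ℓ(v) to be the natural order 0, 1, …, n−1, which is one admissible configuration, and allows self-contacts as in the text.

```python
import math
import random

def quasirandom_push(n, seed=0):
    """Simulate the quasirandom protocol on the complete graph, with every
    cyclic list taken to be 0, 1, ..., n-1 (one admissible configuration).

    pointer[v] holds the vertex that v will contact next; the only random
    choice of a vertex is its initial pointer position, made in the stage
    in which it is informed.  Returns the number of stages until all n
    vertices are informed.
    """
    rng = random.Random(seed)
    pointer = {0: rng.randrange(n)}    # vertex 0 plays the role of vertex 1
    stages = 0
    while len(pointer) < n:
        stages += 1
        newly = []
        for v, p in list(pointer.items()):
            if p not in pointer and p not in newly:
                newly.append(p)        # p hears the rumour in this stage
            pointer[v] = (p + 1) % n   # move the pointer to the next position
        for v in newly:                # a newly informed vertex picks its
            pointer[v] = rng.randrange(n)  # starting position at random
    return stages

n = 10_000
s = quasirandom_push(n)
print(s)  # typically close to log2(n) + ln(n) for this configuration
```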
Now each vertex makes only one random choice, namely during the stage at which it is
informed. Doerr, Friedrich and Sauerwald [4] analyzed this model for arbitrary graphs, k-ary
trees, hypercubes as well as for G(n, p) random graphs with p exceeding the connectivity
threshold. In fact, they proved that in the case of k-ary trees and G(n, p) random graphs,
where p is close to the connectivity threshold, the quasirandom method performs better than
the randomized method.
Let us focus on the complete graph with n vertices and let
S(n) := min{t ≥ 0 : I(t) = n},
which is the number of steps needed until all vertices have been informed. Note that always
log2 n ≤ S(n) ≤ n. Doerr, Friedrich and Sauerwald [4] also proved that S(n) ≤ C ln n
with probability 1 − o(1), for some constant C > 0. Recently, Angelopoulos, Bläser,
Doerr, Fouz, Huber and Panagiotou [1] proved the analogue of the result which Frieze
and Grimmett [6] proved for the randomized model on the complete graph. That is,
S(n) = (1 + o_p(1))(log₂ n + ln n). In this paper, we present a tight analysis of the rumour spreading under the quasirandom model and show that its evolution is very close to
the evolution of the randomized model. Since such a tight analysis has only been made for
complete graphs in the random model, we consider complete graphs throughout the paper.
We will prove an (almost) analogue of the bound that Pittel [9] gave for the randomized
model. The main theorem of this paper is as follows:
Theorem 1. Given a quasirandom rumour spreading configuration Qn and any function
ω(n) : Z+ → R that tends to infinity as n → ∞, with probability 1 − o(1)
log2 n + ln n − 4 ln ln n ≤ S(n) ≤ log2 n + ln n + ω(n).
An interesting question would be whether a corresponding lower bound also holds, that
is, whether one can replace the 4 ln ln n term by ω(n).
Angelopoulos and Panagiotou [2] showed that an error term of the same order of magnitude, as far as the lower bound is concerned, can be obtained by a refinement of the proof
in [1].
1.1. Sketch of proof of Theorem 1. The proof of the theorem is based on splitting the
set of stages into consecutive phases. We show that during the first three phases the number
of informed vertices at each stage nearly doubles. These phases last until the number of
informed vertices becomes very close to n (but still o(n)) and yield the log2 n term in the
above theorem. Thereafter, there is an intermediate phase where the vast majority of the
vertices are informed, leaving no more than ne^{−ω(n)/2} uninformed vertices with probability 1 − o(1). During the subsequent phase, which lasts for approximately ½ ln n stages, we show that at each stage the number of uninformed vertices decreases approximately by a factor of e^{−1}. We deduce the upper bound in Theorem 1 by looking at the contents of the lists of the informed vertices within length ½ ln n after the current position of their pointer, and proving that these cover all the uninformed vertices with high probability. In other words, if we let the system run for another ½ ln n steps, then all vertices will have been informed with
probability 1 − o(1). As far as the lower bound is concerned, we condition on the number of
uninformed vertices just after the third phase (recall that this number is still almost equal to
n) and then we couple the process with a process in which all the uninformed vertices make
their random choices simultaneously and we look at the segments of length (approximately)
ln n − 4 ln ln n after the positions of the pointers. The number of vertices which do not belong to any of these segments is stochastically smaller than the number of uninformed vertices after log₂ n + ln n − 4 ln ln n steps in the original process. We use Chebyshev's inequality to show
that the former is positive with probability 1 − o(1) and the lower bound in Theorem 1
follows. In summary, we show that the quasirandom broadcasting method has essentially
the same evolution as the randomized method, as presented in detail by Pittel in [9].
1.2. Organization of the paper. In the next section, we analyze the first three phases
of the process and show that during each of these the number of informed vertices almost
doubles. In Section 3, we present the basic tools with the use of which we analyze the fourth
and the fifth phase. In Subsection 3.1, we present the proof of the upper bound of Theorem 1
and we conclude with the proof of the lower bound in Subsection 3.2.
1.3. A sharp concentration bound. A tool that we shall apply several times is the
inequality by Azuma and Hoeffding, which provides strong bounds for the probability that
a function defined on a set of independent random variables deviates significantly from its
expectation, when the value of the function is not affected much by small changes in each
one of its arguments.
Theorem 2 (Hoeffding-Azuma Inequality; see for example [8, Corollary 2.27, p. 38]). Let Z₁,…,Z_N be independent random variables taking values in the sets Λ₁,…,Λ_N respectively. Let Λ = Λ₁ × ··· × Λ_N. Let f : Λ → R be a function and set X = f(Z₁,…,Z_N). Assume that there are quantities c_k, k = 1,…,N, satisfying the following:
a. If z, z′ ∈ Λ differ only in the kth coordinate, then |f(z) − f(z′)| ≤ c_k.
Then, for every x ≥ 0 we have that

(1)   P(|X − E(X)| ≥ x) ≤ 2 exp(− x² / (2 Σ_{i=1}^{N} c_i²)).
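To illustrate how Theorem 2 is used, here is a small self-contained numerical check of our own (not from the paper): for X = z₁ + ··· + z_N with independent uniform ±1 signs, changing one coordinate changes X by at most c_k = 2, so (1) gives P(|X| ≥ x) ≤ 2 exp(−x²/(8N)), and the empirical tail frequency stays below this bound.

```python
import math
import random

# X = z_1 + ... + z_N for independent uniform +/-1 signs; each c_k = 2,
# so inequality (1) bounds the tail by 2 * exp(-x^2 / (2 * N * 2^2)).
rng = random.Random(42)
N, x, trials = 400, 60, 5_000
bound = 2 * math.exp(-x ** 2 / (8 * N))
hits = sum(
    abs(sum(rng.choice((-1, 1)) for _ in range(N))) >= x
    for _ in range(trials)
)
print(hits / trials, "<=", bound)
```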
2. Nearly doubling of I(t) during the early stages
2.1. Phase 1: t < ½ log₂(n/ln³n). In this subsection we analyze the evolution of the rumour spreading process for t < ½ log₂(n/ln³n). In particular, we show that during each of these steps the number of informed vertices actually doubles with probability 1 − o(1). Therefore, setting t1 := ⌊½ log₂(n/ln³n)⌋, at the end of this phase the number of informed vertices is 2^{t1}.
Firstly, note that t1 < ½ log₂ n < ln n. With L := ⌊3 ln n⌋, let ℓ_L(v) denote the segment of length L in ℓ(v) that starts at the position which v selects randomly. That is, if v selects position i, then ℓ_L(v) consists of the vertices in ℓ(v) which are located at positions {i, i+1 mod n, …, i+L−1 mod n}.
Observe that the number of informed vertices during each of the first t1 steps doubles, if
1. for all distinct v, v 0 ∈ I(t1 ) we have `L (v) ∩ `L (v 0 ) = ∅ and
2. for all v, v 0 ∈ I(t1 ) we have v 0 6∈ `L (v).
We will show inductively that this event occurs with probability 1 − o(1). For this purpose,
we let Et be the event that is defined by these two conditions taken up to stage t instead of
t1 .
Note that, as far as the first stage is concerned, it suffices to show that the second part of
the event occurs with high probability, as there is only one vertex initially, namely vertex 1,
which is aware of the rumour. So
(2)   P(E_0) = P(1 ∉ ℓ_L(1)) ≥ 1 − 3 ln n / n.
Let us assume now that for 1 ≤ t ≤ t1 the event E_{t−1} is realized. Then N(t) = I(t−1) = 2^{t−1}. Let us fix an ordering on N(t) according to which we expose the random choices of the vertices in N(t). If v_i ∈ N(t) is the ith vertex according to this ordering, then

(3)   P(∃ v′ ∈ I(t−1) ∪ {v_1,…,v_i} : v′ ∈ ℓ_L(v_i)) ≤ 2·2^{t−1}·3 ln n / n.

Also

(4)   P(∃ v′ ∈ I(t−1) ∪ {v_1,…,v_{i−1}} : ℓ_L(v′) ∩ ℓ_L(v_i) ≠ ∅) ≤ 2·2^{t−1}·9 ln²n / n.
Since the random choices of the vertices in N(t) are independent, we have for n large enough

(5)   P(E_t | E_{t−1}) ≥ (1 − 2^t·3 ln n/n − 2^t·9 ln²n/n)^{N(t)} ≥ (1 − 2^t·10 ln²n/n)^{N(t)} = (1 − 2^t·10 ln²n/n)^{2^{t−1}} ≥ 1 − 4^t·10 ln²n/n,

where the first inequality follows from (3) and (4), and the equality uses N(t) = 2^{t−1}.
Thus for n large enough

(6)   P(E_{t1}) = P(E_0) ∏_{t=1}^{t1} P(E_t | E_{t−1}) ≥ (1 − 3 ln n/n) ∏_{t=1}^{t1} (1 − 4^t·10 ln²n/n)
      ≥ (1 − 3 ln n/n) exp(− Σ_{t=1}^{t1} 4^t·20 ln²n/n) ≥ (1 − 3 ln n/n) exp(− (4·4^{t1}/3)·20 ln²n/n)
      ≥ (1 − 3 ln n/n) exp(−27/ln n) ≥ 1 − 28/ln n,

where we used (2) and (5), the fact that 4^{t1}·ln²n/n = o(1), the bound Σ_{t=0}^{t1} 4^t ≤ 4·4^{t1}/3, and 4^{t1} ≤ n/ln³n.
Note that on E_{t1} we have ½·√(n/ln³n) ≤ I(t1) ≤ √(n/ln³n).
2.2. Phase 2: ½ log₂(n/ln³n) ≤ t ≤ log₂(n/ln⁶n). We set t2 := ⌈log₂(n/ln⁶n)⌉. We will approximate I(t) from below by a suitable subset of informed vertices. Firstly, let us set I′(t1) := I(t1) and I′(t1) := |I′(t1)|. Note that conditional on E_{t1}, N(t1+1) = I′(t1). Assume that we have defined I′(t−1) ⊆ I(t−1), which is such that for every v′, v ∈ I′(t−1) we have ℓ_L(v) ∩ ℓ_L(v′) = ∅ (if v ≠ v′) and v′ ∉ ℓ_L(v). Let N′(t) be the set of vertices which are contacted by the vertices in I′(t−1) during the tth stage and let N′(t) := |N′(t)|. Note that N′(t) = I′(t−1) := |I′(t−1)|. Assume that the vertices of N′(t) make their random choices according to a particular ordering which we fix. Let B_t be the set of vertices in N′(t) defined as follows. If v_i is the ith vertex in the ordering, then v_i ∈ B_t if either there exists v′ ∈ I′(t−1) ∪ {v_1,…,v_{i−1}} such that the segment ℓ_L(v_i) overlaps with ℓ_L(v′), or ℓ_L(v_i) contains some v′ ∈ I′(t−1) ∪ {v_1,…,v_i}. We set I′(t) := I′(t−1) ∪ (N′(t) \ B_t).

Now, for t1 < t ≤ t2, we let E_t be the event that for all t1 ≤ s ≤ t we have I′(s) ≥ 2I′(s−1) − I′(s−1)/ln²n, and also E_{t1} is realized. We will show that

Lemma 3. For all t1 < t ≤ t2, P(E_t | E_{t−1}) ≥ 1 − 24/ln²n.

Proof We will show that I′(t) ≥ 2I′(t−1) − I′(t−1)/ln²n with probability at least 1 − 24/ln²n. In other words, it suffices to show that conditional on E_{t−1}, we have B_t := |B_t| ≤ I′(t−1)/ln²n with this probability. If v_i ∈ N′(t) denotes the ith vertex in the ordering of N′(t), then the probability that ℓ_L(v_i) contains any one of the vertices in I′(t−1) or a vertex in {v_1,…,v_i} is at most (I′(t−1)+i)·3 ln n/n. Also, the probability that ℓ_L(v_i) is not disjoint from ℓ_L(v′) for some v′ ∈ I′(t−1) ∪ {v_1,…,v_{i−1}} is at most (I′(t−1)+i−1)·9 ln²n/n. Thus the union of these events occurs with probability at most (I′(t−1)+i)(3 ln n + 9 ln²n)/n ≤ (I′(t−1)+i)·12 ln²n/n, for n ≥ 3.
So if we condition on specific realizations of I′(t−1) and N′(t),

(7)   E(B_t | I′(t−1), N′(t), E_{t−1}) ≤ (12 ln²n/n) Σ_{i=1}^{N′(t)} (I′(t−1)+i) ≤ (12 ln²n/n)(I′(t−1)N′(t) + N′(t)²) = (12 ln²n/n)·2I′(t−1)² = (24 ln²n/n)·I′(t−1)²,

where we used that N′(t) = I′(t−1). Therefore by Markov's inequality

(8)   P(B_t ≥ I′(t−1)/ln²n | I′(t−1), N′(t), E_{t−1}) ≤ (24 ln²n·I′(t−1)²/n) / (I′(t−1)/ln²n) = 24 ln⁴n·I′(t−1)/n ≤ (24 ln⁴n/n)·(n/ln⁶n) = 24/ln²n,

where in the last inequality we used that I′(t−1) ≤ 2^{t−1} ≤ n/ln⁶n. Averaging over all realizations of I′(t−1) and N′(t) such that E_{t−1} is realized, we deduce the lemma.
Therefore

(9)   P(E_{t2}) = P(E_{t1}) ∏_{t=t1+1}^{t2} P(E_t | E_{t−1}) ≥ (1 − 28/ln n)(1 − 24/ln²n)^{t2−t1} ≥ (1 − 28/ln n)(1 − 24/ln²n)^{2 ln n} ≥ (1 − 28/ln n)(1 − 48/ln n) ≥ 1 − 76/ln n,

where we used (6) and Lemma 3, together with the fact that t2 ≤ 2 ln n.
Also, on E_{t2} we have

I′(t2) ≥ I(t1)·2^{t2−t1}·(1 − 1/(2 ln²n))^{t2−t1} = 2^{t2}·(1 − 1/(2 ln²n))^{t2−t1} ≥ 2^{t2}·(1 − (t2−t1)/(2 ln²n)) ≥ 2^{t2}·(1 − 1/ln n) ≥ (n/ln⁶n)·(1 − 1/ln n),

using that I(t1) = 2^{t1} and t2 − t1 ≤ 2 ln n. Since I(t) ≥ I′(t) we deduce that on E_{t2}

(10)   I(t2) ≥ (n/ln⁶n)·(1 − 1/ln n).
ln n
2.3. Phase 3: t2 < t ≤ t2 + ⌈log₂(ln⁶n/ω(n))⌉ + 1. We set t3 := t2 + ⌈log₂(ln⁶n/ω(n))⌉ + 1. Here
ω(n) is a real-valued function on the set of positive integers that tends to infinity as n grows,
slowly enough for our calculations to work. For convenience we will drop the argument and
we will be writing ω.
The analysis in this phase refines the idea that was used in the analysis in the previous
phase.
Quite informally, we will work with a subset Ĩ(t) of the set of informed vertices I(t),
which has the property that any two vertices v and v 0 belonging to this set are such that
1. if v and v 0 are distinct, the segments of length t3 − t in `(v) and `(v 0 ) which start at
the positions of the pointers of v and v 0 after stage t are disjoint;
2. this segment in `(v) does not contain v 0 .
We denote the segment of length t3 − t in `(v) starting at the position of the pointer of v
after stage t by `(v; t + 1, t3 ). We will give an inductive definition of the set Ĩ(t). Note first
that the vertices which belong to the set I 0 (t2 ) satisfy the above conditions. Thus, we may
set Ĩ(t2 ) := I 0 (t2 ). Assume now that we have defined the set Ĩ(t − 1), for t > t2 . Note
that the above properties imply that if Ñ (t) denotes the set of vertices contacted at stage
t by the vertices of Ĩ(t−1), then Ñ(t) := |Ñ(t)| = |Ĩ(t−1)| =: Ĩ(t−1). The set Ĩ(t) is
defined as the union of the sets Ĩ1 (t) ⊆ Ĩ(t − 1) and Ĩ2 (t) ⊆ Ñ (t) which in turn are defined
as follows:
1. the set Ĩ1 (t) consists of those vertices v ∈ Ĩ(t − 1) such that for all v 0 ∈ Ñ (t) we
have `(v; t + 1, t3 ) ∩ `(v 0 ; t + 1, t3 ) = ∅ and also v 6∈ `(v 0 ; t + 1, t3 );
2. the set Ĩ2 (t) consists of those vertices v ∈ Ñ (t) such that for all v 0 ∈ Ñ (t) we have
`(v; t + 1, t3 ) ∩ `(v 0 ; t + 1, t3 ) = ∅, if v 6= v 0 , and also v 6∈ `(v 0 ; t + 1, t3 ).
The main lemma of this subsection concerns the rate of growth of Ĩ(t) within this phase:
Lemma 4. Conditional on E_{t2}, with probability 1 − o(1), for all t with t2 ≤ t < t3 − 1 we have

Ĩ(t+1) ≥ 2Ĩ(t)·(1 − Ĩ(t)(t3−t)²/n).
Proof We will show the lemma by induction on t. For t2 < t ≤ t3 we define E_t to be the event that for all s such that t2 ≤ s ≤ t

Ĩ(s) ≥ 2Ĩ(s−1)·(1 − Ĩ(s−1)(t3−s+1)²/n),

and also E_{t2} is realized. Note that the event of the lemma is E_{t3−1}. We will show that

(11)   P(E_{t+1} | E_t) ≥ 1 − 2e^{−n^{1/4}}.
Since the events {Et }t2 ≤t≤t3 −1 form a decreasing family, we then have
(12)   P(E_{t3−1} | E_{t2}) = ∏_{t=t2}^{t3−2} P(E_{t+1} | E_t) ≥ (1 − 2e^{−n^{1/4}})^{t3−t2} ≥ (1 − 2e^{−n^{1/4}})^{ln n} ≥ 1 − e^{−n^{1/5}},

where we used that t3 − t2 ≤ ln n,
if n is large enough. Now let us fix some t which satisfies t2 ≤ t < t3 − 1 and let us condition
on Et . To estimate P(Et+1 | Et ), we only need to estimate the (conditional) probability that
Ĩ(t+1) ≥ 2Ĩ(t)·(1 − Ĩ(t)(t3−t)²/n).
Let I˜i (t + 1) := |Ĩi (t + 1)|, for i = 1, 2. As Ĩ(t + 1) is the disjoint union of the sets Ĩ1 (t + 1)
and Ĩ2 (t + 1), it suffices to show that each of the following events occurs with sufficiently
high (conditional) probability:
(13)   Ĩ₁(t+1) ≥ Ĩ(t)·(1 − Ĩ(t)(t3−t)²/n)

and

(14)   Ĩ₂(t+1) ≥ Ĩ(t)·(1 − Ĩ(t)(t3−t)²/n).
Proof of (13). Note that the size of Ĩ1 (t + 1) is a function of the independent random
choices of the vertices in Ñ (t + 1). We will bound the probability of (13) from below
using standard concentration inequalities. Here all probabilities and expected values are
conditional on Et . But first we shall give a lower bound on the (conditional) expected value
of I˜1 (t + 1). Recall that a vertex v ∈ Ĩ(t) belongs to Ĩ1 (t + 1) if for all v 0 ∈ Ñ (t + 1) we
have `(v; t + 2, t3 ) ∩ `(v 0 ; t + 2, t3 ) = ∅ and also v 6∈ `(v 0 ; t + 2, t3 ). The former fails with
probability at most Ñ(t+1)·(t3−(t+2)+1)²/n = Ĩ(t)(t3−t−1)²/n. The latter fails with probability at most Ñ(t+1)·(t3−t−1)/n = Ĩ(t)(t3−t−1)/n. Thus, for a vertex v ∈ Ĩ(t),

P(v ∈ Ĩ₁(t+1)) ≥ 1 − Ĩ(t)(t3−t−1)²/n − Ĩ(t)(t3−t−1)/n ≥ 1 − Ĩ(t)(t3−t−½)²/n.

In turn, the expected value of Ĩ₁(t+1) conditional on E_t is

(15)   E(Ĩ₁(t+1)) ≥ Ĩ(t)·(1 − Ĩ(t)(t3−t−½)²/n).
To show (13) it suffices to bound the probability of the event Ĩ₁(t+1) < E(Ĩ₁(t+1)) − Ĩ^{2/3}(t). Indeed, by (15),

E(Ĩ₁(t+1)) − Ĩ^{2/3}(t) ≥ Ĩ(t) − Ĩ²(t)(t3−t−½)²/n − Ĩ^{2/3}(t).
We have

Claim 5.   Ĩ²(t)(t3−t−½)²/n + Ĩ^{2/3}(t) ≤ Ĩ²(t)(t3−t)²/n.

Proof of Claim The inequality is equivalent to

(t3−t−½)² + n·Ĩ^{2/3}(t)/Ĩ²(t) ≤ (t3−t)².

Note that (t3−t)² − (t3−t−½)² = t3−t−¼ ≥ 1. But the ratio n·Ĩ^{2/3}(t)/Ĩ²(t) = n/Ĩ^{4/3}(t) is o(1). Indeed, on E_t, Ĩ(s) ≥ Ĩ(s−1) for all s with t2+1 ≤ s ≤ t, as 1 − Ĩ(s−1)(t3−s+1)²/n ≥ 1 − 2^{s−1}(t3−s+1)²/n ≥ 1/2, which can be shown applying elementary methods. Thus, by (10),

(16)   Ĩ(t) ≥ Ĩ(t2) ≥ n/(2 ln⁶n),

for n sufficiently large. Therefore n/Ĩ^{4/3}(t) ≤ 2^{4/3} ln⁸n / n^{1/3} = o(1) and this concludes the proof of the Claim. Therefore

E(Ĩ₁(t+1)) − Ĩ^{2/3}(t) ≥ Ĩ(t)·(1 − Ĩ(t)(t3−t)²/n).
We will bound P(Ĩ₁(t+1) < E(Ĩ₁(t+1)) − Ĩ^{2/3}(t)) using the Hoeffding-Azuma inequality. Note that if we change the choice of one vertex in Ñ(t+1), then Ĩ₁(t+1) can change by at most t3 − t. Also recall that Ñ(t+1) = Ĩ(t). Therefore the Hoeffding-Azuma inequality yields for n sufficiently large:

(17)   P(Ĩ₁(t+1) < E(Ĩ₁(t+1)) − Ĩ^{2/3}(t)) ≤ exp(− Ĩ^{4/3}(t)/(2Ĩ(t)(t3−t)²)) ≤ exp(− Ĩ^{1/3}(t)/(2 ln²n)) ≤ exp(− n^{1/3}/(2^{4/3} ln⁴n)) ≤ exp(−n^{1/4}),

where we used t3 − t ≤ ln n and (16). So (13) holds with probability at least 1 − exp(−n^{1/4}).
Proof of (14) The proof of (14) is also based on the application of the Hoeffding-Azuma
inequality. We begin with a lower bound on the expected value of I˜2 (t + 1) conditional on
Et . Again all probabilities and expected values are conditional on Et . We will give a lower
bound on the probability that a given vertex v ∈ Ñ (t + 1) belongs to Ĩ2 (t + 1). Firstly,
we will expose the random choice of v and we will condition on the event that there is no
v 0 ∈ Ñ (t + 1) which belongs to `(v; t + 2, t3 ). The probability that such a vertex exists is
at most Ñ(t+1)·(t3−t−1)/n = Ĩ(t)(t3−t−1)/n. Having fixed the choice of v, the probability that for a given vertex v′ ∈ Ñ(t+1) that is different from v the segments ℓ(v; t+2, t3) and ℓ(v′; t+2, t3) are disjoint is at least 1 − (t3−t−1)²/n. Since the random choices of the vertices in Ñ(t+1) are independent, we obtain:
P(v ∈ Ĩ₂(t+1)) ≥ (1 − Ĩ(t)(t3−t−1)/n)·(1 − (t3−t−1)²/n)^{Ñ(t+1)−1}
≥ (1 − Ĩ(t)(t3−t−1)/n)·(1 − (t3−t−1)²/n)^{Ĩ(t)}
≥ (1 − Ĩ(t)(t3−t−1)/n)·(1 − Ĩ(t)(t3−t−1)²/n)
≥ 1 − (Ĩ(t)/n)·((t3−t−1) + (t3−t−1)²) ≥ 1 − Ĩ(t)(t3−t−½)²/n,

where we used Ñ(t+1) = Ĩ(t).
Therefore,

(18)   E(Ĩ₂(t+1)) ≥ Ĩ(t)·(1 − Ĩ(t)(t3−t−½)²/n).
As in the proof of (13), we will use the Hoeffding-Azuma inequality to bound the probability
that I˜2 (t + 1) < E(I˜2 (t + 1)) − I˜2/3 (t). This is indeed sufficient to show that inequality (14)
holds with high probability. Claim 5 yields
E(Ĩ₂(t+1)) − Ĩ^{2/3}(t) ≥ Ĩ(t)·(1 − Ĩ(t)(t3−t)²/n).
Let us now see what happens to I˜2 (t + 1) when we change the choice of one vertex in
Ñ (t+1). Observe first that for any v, v 0 ∈ Ĩ2 (t+1) we have `(v; t+2, t3 )∩`(v 0 ; t+2, t3 ) = ∅, if
v 6= v 0 , and v 0 6∈ `(v; t + 2, t3 ). Therefore, if we change the choice of one vertex in Ñ (t + 1) we
may “destroy” and also “create” at most t3 −(t+2)+1+1 of them (the last term comes from
the fact that the vertex whose choice we change might also be “destroyed” or “created”). So
Ĩ₂(t+1) changes by at most t3 − t. Therefore, applying the Hoeffding-Azuma inequality as in (17) we deduce that for n large enough

P(Ĩ₂(t+1) < E(Ĩ₂(t+1)) − Ĩ^{2/3}(t)) ≤ exp(−n^{1/4}).

In turn, this implies that (14) holds with probability at least 1 − exp(−n^{1/4}).
The recursive relation in the above lemma implies the following bound on Ĩ(t):

Lemma 6. If E_{t3−1} is realized, then for all t ∈ {t2,…,t3−1}, we have

Ĩ(t) ≥ 2^{t−t2}·Ĩ(t2) − 2·2^{2(t−t2)}·Ĩ²(t2)(t3−t+2)²/n.
Proof We will show this by induction on t. Clearly, for t = t2 this holds. Assume now that it also holds for some t with t2 ≤ t ≤ t3 − 2. As E_{t3−1} holds,

(19)   Ĩ(t+1) ≥ 2Ĩ(t) − 2Ĩ²(t)(t3−t)²/n.

We will use the induction hypothesis to bound Ĩ(t) from below, as well as the trivial upper bound Ĩ(t) ≤ 2^{t−t2}·Ĩ(t2). Thus (19) becomes:

Ĩ(t+1) ≥ 2^{t+1−t2}·Ĩ(t2) − 2·2^{2(t−t2)+1}·Ĩ²(t2)(t3−t+2)²/n − 2·2^{2(t−t2)}·Ĩ²(t2)(t3−t)²/n
= 2^{t+1−t2}·Ĩ(t2) − 2·(2^{2(t−t2)+2}·Ĩ²(t2)/n)·(½(t3−t+2)² + ¼(t3−t)²)
≥ 2^{t+1−t2}·Ĩ(t2) − 2·2^{2(t−t2)+2}·Ĩ²(t2)(t3−t+1)²/n,

which concludes the proof of the lemma.
This lemma implies that on E_{t3−1}

Ĩ(t3−1) ≥ 2^{t3−t2−1}·Ĩ(t2) − 18·2^{2(t3−t2−1)}·Ĩ²(t2)/n.

And so, as ln⁶n/ω ≤ 2^{t3−t2−1} ≤ 2·ln⁶n/ω and (n/ln⁶n)·(1 − 1/ln n) ≤ Ĩ(t2) ≤ 2^{t2} ≤ 2·n/ln⁶n, on E_{t3−1} we have

Ĩ(t3−1) ≥ (ln⁶n/ω)·(n/ln⁶n)·(1 − 1/ln n) − 18·4·(ln⁶n/ω)²·4·(n/ln⁶n)²/n
= (n/ω)·(1 − 1/ln n) − 288·n/ω²
= (n/ω)·(1 − 1/ln n − 288/ω)
= (n/ω)·(1 − o(1)).

The definition of Ĩ(t3−1) implies for n large enough:

(20)   I(t3) ≥ Ĩ(t3−1) + Ñ(t3) = 2Ĩ(t3−1) ≥ n/ω.
3. The final stages
Let U(t) denote the set of uninformed vertices after the first t stages, that is, U(t) :=
Vn \I(t), and let U (t) denote its size. For t ≥ 0 and s > t we set Ut (s) := U(t)\∪v∈I(t) `(v; t+
1, s − 1) (where we assume that `(v; t + 1, t) = ∅) and let Ut (s) := |Ut (s)|.
These quantities are fairly important as they help us to describe the evolution of the
process after time t3 . In particular, observe that Ut (t + 1) \ Ut (t + 2) = N (t + 1). Thus if we
have estimates for Ut (t + 1) and Ut (t + 2) we will be able to estimate N (t + 1) as well. Let
T := t3 + ω² + ⌈ln n⌉ + ω, and without loss of generality assume that ω takes integral values. Throughout this section we will be using the fact that T < 3 ln n, for any n sufficiently large. Also we set T′ := t3 + ω² + ⌈½ ln n⌉. Our aim is to keep track of U_t(s) for all t = 0,…,T′ and all t < s ≤ T. Eventually, we will show that the expected value of U_{T′}(T) is o(1), which implies that U_{T′}(T) is zero with probability 1 − o(1). In turn, the definition of the set U_{T′}(T) implies that if the process continues after stage T′ until stage T, then all vertices will be informed, and this will conclude the proof of the upper bound on S(n).
For the lower bound, we will set T− := t3 + ω² + ln n − 4 ln ln n and we will use a second moment argument to show that U_{T′}(T−) is non-empty with probability 1 − o(1). Of course this does not imply that if we let the process continue until time T−, there will still be uninformed vertices. However, we will also show that with probability 1 − o(1) none of the vertices in U_{T′}(T−) is informed between stages T′ and T−, which will conclude the proof of the lower bound on S(n).
We begin with an estimate of a random variable, which, as we will see later, approximates
the conditional expectation of Ut (s) given the history of the process until stage t. For
0 ≤ t < s, let

(21)   Ū_t(s) := (n−1) ∏_{r=0}^{t} (1 − (s−r−1)/n)^{N(r)},
where N (0) = 1. We will show the following
Lemma 7. If n is large enough, then for all 0 < t < s ≤ T

(22)   Ū_t(s) ≤ (n−1)·(1 − 1/n)^{(s−t−1)I(t) + Σ_{r=0}^{t−1} I(r)}

and

(23)   Ū_t(s) ≥ (n−1)·(1 − 1/n)^{(s−t−1)I(t) + Σ_{r=0}^{t−1} I(r)}·(1 − 10 ln²n/n²)^{I(t)}.
Proof For the upper bound observe first that

Ū_t(s) = (n−1) ∏_{r=0}^{t} (1 − (s−r−1)/n)^{N(r)} ≤ (n−1)·(1 − 1/n)^{Σ_{r=0}^{t} (s−r−1)N(r)}.
Now, we write

Σ_{r=0}^{t} (s−r−1)N(r) = (s−1) Σ_{r=0}^{t} N(r) − Σ_{r=1}^{t} rN(r) = (s−1)I(t) − Σ_{r=1}^{t} rN(r).

But note that

Σ_{r=1}^{t} rN(r) = Σ_{r=0}^{t−1} (I(t) − I(r)) = tI(t) − Σ_{r=0}^{t−1} I(r).

Therefore

(24)   Σ_{r=0}^{t} (s−r−1)N(r) = (s−t−1)I(t) + Σ_{r=0}^{t−1} I(r)
and (22) follows.
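The upper bound just derived can also be checked numerically. The following sketch is our own illustration (the doubling sequence N(0) = 1, N(r) = 2^{r−1} is an illustrative choice, not from the paper): it evaluates the product in (21) directly and compares it with the closed form of (22), using (24) for the exponent.

```python
# Evaluate the product defining U_bar_t(s) in (21) for a doubling sequence
# N(0) = 1, N(r) = 2^(r-1), and compare with the closed-form bound of (22).
n = 100_000
N = [1] + [2 ** (r - 1) for r in range(1, 11)]   # N(0), ..., N(10)
I = [sum(N[: r + 1]) for r in range(len(N))]     # I(r) = N(0) + ... + N(r)

t, s = 10, 30
U_bar = n - 1
for r in range(t + 1):
    U_bar *= (1 - (s - r - 1) / n) ** N[r]       # the product in (21)

# exponent (s - t - 1) * I(t) + I(0) + ... + I(t-1), as given by (24)
exponent = (s - t - 1) * I[t] + sum(I[:t])
upper = (n - 1) * (1 - 1 / n) ** exponent
print(U_bar, "<=", upper)
```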
To show (23) observe first that

1 − (s−r−1)/n ≥ (1 − 1/n)^{s−r−1} − (s−r−1)²/n²,

which follows, for example, from Bonferroni inequalities (see [3, Theorem 1.10, p. 17]). Thus for n sufficiently large

1 − (s−r−1)/n ≥ (1 − 1/n)^{s−r−1}·(1 − s²/(n²(1 − 1/n)^{s})) ≥ (1 − 1/n)^{s−r−1}·(1 − 10 ln²n/n²),

where the last inequality holds as s < T < 3 ln n. Therefore,

Ū_t(s) ≥ (n−1) ∏_{r=0}^{t} (1 − 1/n)^{N(r)(s−r−1)}·(1 − 10 ln²n/n²)^{N(r)} = (n−1)·(1 − 1/n)^{(s−t−1)I(t) + Σ_{r=0}^{t−1} I(r)}·(1 − 10 ln²n/n²)^{I(t)},

where the last equality follows from (24), and the lower bound follows.
Let D_t be the event that for all 0 ≤ r ≤ t and for all r < s ≤ T′+1 we have

(25)   U_r(s) ∈ Ū_r(s)·(1 ± 1/ln³n)^{r+1}.

We now define

(26)   A_t := D_t, if t < t3, and A_t := D_t ∩ E_{t3}, if t ≥ t3.

Note that A_0 occurs with probability 1, as U_0(s) ∈ {n−s, n−s+1} ⊆ Ū_0(s)·(1 ± 1/ln³n). We will show by induction on t that A_{T′} occurs with probability 1 − o(1). To this aim we need the following lemma.
Lemma 8. For all 0 ≤ t < T′ we have

P(D̄_{t+1} | A_t) ≤ e^{−n^{1/(5ω)}}.

Proof We will show that D_{t+1} occurs with probability at least 1 − e^{−n^{1/(5ω)}}, that is, we will show that with probability at least 1 − e^{−n^{1/(5ω)}} we have for every s with t+1 < s ≤ T′+1

(27)   U_{t+1}(s) ∈ Ū_{t+1}(s)·(1 ± 1/ln³n)^{t+2}.
In fact, we will show that Ut+1 (s) is concentrated around its expected value conditional on
its history up to stage t.
Let a_t be a realization of the process up to stage t fulfilling A_t. Note that

(28)   E(U_{t+1}(s) | a_t) = U_t(s)·(1 − (s−t−2)/n)^{N(t+1)} ∈ Ū_t(s)·(1 ± 1/ln³n)^{t+1}·(1 − (s−t−2)/n)^{N(t+1)} = Ū_{t+1}(s)·(1 ± 1/ln³n)^{t+1},
where N(t+1) is determined by a_t. Thus, it suffices to show that with conditional probability at least 1 − e^{−n^{1/(5ω)}} for all s with t+1 < s ≤ T′+1

(29)   |U_{t+1}(s) − E(U_{t+1}(s) | a_t)| ≤ E(U_{t+1}(s) | a_t)/ln³n.

Note that U_{t+1}(s) = U_t(s) − L_{t+1}(s), where L_{t+1}(s) is a non-negative random variable that is equal to the number of vertices which are removed from U_t(s) during the (t+1)th stage. Thus

|U_{t+1}(s) − E(U_{t+1}(s) | a_t)| = |L_{t+1}(s) − E(L_{t+1}(s) | a_t)|.

So it suffices to show that with probability at least 1 − e^{−n^{1/(5ω)}} for all s with t+1 < s ≤ T′+1

(30)   |L_{t+1}(s) − E(L_{t+1}(s) | a_t)| ≤ E(U_{t+1}(s) | a_t)/ln³n.

In the present situation, we will use Talagrand's inequality. In particular, this is useful when the number of informed vertices has become linear. There, the number of vertices which make a random choice is quite large compared to the conditional expectation of L_{t+1}(s) and, for example, the Hoeffding-Azuma inequality would give trivial bounds. Thus, we need to apply a stronger tool such as Talagrand's inequality (see [11] or Theorem 2.29 in [8]):
Theorem 9 (Theorem 2.29 in [8]). Let $Z_1, \dots, Z_N$ be independent random variables taking values in the sets $\Lambda_1, \dots, \Lambda_N$, respectively. Let $f : \Lambda_1 \times \cdots \times \Lambda_N \to \mathbb R$ be a function and let $X = f(Z_1, \dots, Z_N)$. Assume that there are constants $c_k$, $k = 1, \dots, N$, and some increasing function $\psi$ satisfying the following two conditions:
a. If $z, z' \in \Lambda_1 \times \cdots \times \Lambda_N$ differ only in the $k$th coordinate, then $|f(z) - f(z')| \le c_k$;
b. if $z \in \Lambda_1 \times \cdots \times \Lambda_N$ and $r \in \mathbb R$ with $f(z) \ge r$, then there exists a set $J \subseteq \{1, \dots, N\}$ with $\sum_{i \in J} c_i^2 \le \psi(r)$, such that for all $y \in \Lambda_1 \times \cdots \times \Lambda_N$ with $y_i = z_i$ when $i \in J$ we have $f(y) \ge r$.
Then if $m$ is the median of $X$, for every $x \ge 0$ we have
$$\mathbb P(|X - m| \ge x) \le 4\exp\left(-\frac{x^2}{4\psi(m+x)}\right). \tag{31}$$
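Theorem 9 can be illustrated on a toy example that is not part of the proof: the number of distinct bins hit by $N$ independent uniform throws. Changing a single throw moves the count by at most 1 (condition a. with $c_k = 1$), and the event "at least $r$ bins are hit" is certified by $r$ of the throws (condition b. with $\psi(r) = r$), so the count concentrates in a window of order $\sqrt{\psi(m)} = \sqrt m$ around its median. A minimal simulation sketch; the function name and parameters are ours, not the paper's:

```python
import random
import statistics

def distinct_bins(n_throws: int, n_bins: int, rng: random.Random) -> int:
    """Number of distinct bins hit by n_throws independent uniform throws.

    Changing one throw changes the value by at most 1, and 'at least r
    bins are hit' is certified by r coordinates, so Theorem 9 applies
    with c_k = 1 and psi(r) = r.
    """
    return len({rng.randrange(n_bins) for _ in range(n_throws)})

rng = random.Random(0)
n = 10_000
samples = [distinct_bins(n, n, rng) for _ in range(200)]
m = statistics.median(samples)
spread = max(samples) - min(samples)
# Talagrand predicts fluctuations of order sqrt(m) here, far smaller than
# the median itself, which is close to n(1 - 1/e).
print(m, spread)
```

Empirically the spread over many runs is a small multiple of $\sqrt m$, exactly the window the inequality predicts.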
We will apply this inequality to Lt+1 (s). Note that Lt+1 (s) is a function of the independent
random choices of the vertices in N (t + 1). Note also that if we change only one of these
random choices, then Lt+1 (s) can change by at most s − 1 − (t + 2) + 1 < s ≤ T < 3 ln n.
Therefore, we may take $c_k := 3\ln n$ for all $k$. Regarding condition b. in the above theorem and the definition of $\psi$, observe that if $L_{t+1}(s) \ge r$, then there must be some vertices in $N(t+1)$ which force $L_{t+1}(s)$ to be at least $r$, and there need be no more than $r$ of them. In other words, if a vertex $v$ is removed from $U_t(s)$, then there must be some vertex $v' \in N(t+1)$ for which $v \in \ell(v'; t+2, s-1)$. So if $r$ vertices are removed from $U_t(s)$ after the exposure of the random choices of the vertices in $N(t+1)$, then at most $r$ of these choices certify this. So we may take $\psi(r) = 9r\ln^2 n$.
However, we would like to show the concentration of $L_{t+1}(s)$ around its expected value. If $m$ now denotes the (conditional) median of $L_{t+1}(s)$, then the triangle inequality implies that
$$|L_{t+1}(s) - \mathbb E(L_{t+1}(s) \mid a_t)| \le |L_{t+1}(s) - m| + |m - \mathbb E(L_{t+1}(s) \mid a_t)|.$$
But $|m - \mathbb E(L_{t+1}(s) \mid a_t)|$ is not very large. In fact, the argument in Example 2.33 on page 41 in [8] implies that
$$|m - \mathbb E(L_{t+1}(s) \mid a_t)| = O\left(\ln n \sqrt{\mathbb E(L_{t+1}(s) \mid a_t)}\right).$$
We will show that
$$\ln n \sqrt{\mathbb E(L_{t+1}(s) \mid a_t)} = o\left(\frac{\mathbb E(U_{t+1}(s) \mid a_t)}{\ln^3 n}\right). \tag{32}$$
In turn, this will imply that for $n$ sufficiently large
$$\text{if } |L_{t+1}(s) - \mathbb E(L_{t+1}(s) \mid a_t)| > \frac{\mathbb E(U_{t+1}(s) \mid a_t)}{\ln^3 n}, \text{ then } |L_{t+1}(s) - m| > \frac{\mathbb E(U_{t+1}(s) \mid a_t)}{2\ln^3 n}. \tag{33}$$
To show (32) it suffices to show that
$$\frac{\sqrt{\mathbb E(L_{t+1}(s) \mid a_t)}}{\mathbb E(U_{t+1}(s) \mid a_t)} = o\left(\frac{1}{\ln^4 n}\right), \quad \text{or equivalently} \quad \frac{\mathbb E(L_{t+1}(s) \mid a_t)}{\mathbb E^2(U_{t+1}(s) \mid a_t)} = o\left(\frac{1}{\ln^8 n}\right).$$
As $\mathbb E(L_{t+1}(s) \mid a_t) \le U_t(s)$, this will follow if we show that

Claim 10.
$$\frac{U_t(s)}{\mathbb E^2(U_{t+1}(s) \mid a_t)} \le \frac{1}{n^{1/(3\omega)}\ln^8 n} = o\left(\frac{1}{\ln^8 n}\right).$$
Proof. As $A_t$ holds, we obtain for $n$ sufficiently large
$$U_t(s) \le \bar U_t(s)\left(1+\frac{1}{\ln^3 n}\right)^{t+1} \overset{(22),\, t \le T' < 3\ln n}{\le} 2n\left(1-\frac{1}{n}\right)^{(s-t-1)I(t)+\sum_{r=0}^{t-1}I(r)}. \tag{34}$$
By (28) and for $n$ sufficiently large
$$\begin{aligned}
\mathbb E(U_{t+1}(s) \mid a_t) &\ge \bar U_{t+1}(s)\left(1-\frac{1}{\ln^3 n}\right)^{t+1}\\
&\overset{(23)}{\ge} (n-1)\left(1-\frac{1}{n}\right)^{(s-t-2)I(t+1)+\sum_{r=0}^{t}I(r)}\left(1-\frac{10\ln^2 n}{n^2}\right)^{t+1}\left(1-\frac{1}{\ln^3 n}\right)^{t+1} \tag{35}\\
&\overset{t<s<3\ln n,\, I(t)\le n}{\ge} (n-1)\left(1-\frac{10\ln^2 n}{n^2}\right)^{3\ln n}\left(1-\frac{1}{\ln^3 n}\right)^{3\ln n}\left(1-\frac{1}{n}\right)^{(s-t-2)I(t+1)+\sum_{r=0}^{t}I(r)}\\
&\ge \frac{n}{2}\left(1-\frac{1}{n}\right)^{(s-t-2)I(t+1)+\sum_{r=0}^{t}I(r)}.
\end{aligned}$$
In the case where $t \ge t_3$, we take a further lower bound on $\mathbb E(U_{t+1}(s) \mid a_t)$ by giving an upper bound on $(s-t-2)I(t+1)+\sum_{r=0}^{t}I(r)$. We first split the sum into two parts and write $\sum_{r=0}^{t}I(r) = \sum_{r=0}^{t_3-1}I(r) + \sum_{r=t_3}^{t}I(r)$. For the first part, we use the bound $I(r) \le 2^r$.
This yields $\sum_{r=0}^{t_3-1}I(r) \le 2^{t_3} \le 8n/\omega$. For the second sum we use the bound $I(r) \le n$. This implies that
$$(s-t-2)I(t+1) + \sum_{r=t_3}^{t}I(r) \le (s-t-2+t-t_3+1)\,n \le (s-t_3)\,n.$$
Thus by (35) and for $n$ large enough
$$\mathbb E(U_{t+1}(s) \mid a_t) \ge \frac{n}{2}\left(1-\frac{1}{n}\right)^{(s-t_3)n+\frac{8n}{\omega}} \ge \frac{n}{4}\left(1-\frac{1}{n}\right)^{(s-t_3)n}. \tag{36}$$
We will consider three different cases.

$t < t_3$: In this case, we write
$$\begin{aligned}
\frac{U_t(s)}{\mathbb E^2(U_{t+1}(s) \mid a_t)} &\overset{(34),(35)}{\le} \frac{8}{n}\left(1-\frac{1}{n}\right)^{(s-t-1)I(t)+\sum_{r=0}^{t-1}I(r)-2(s-t-2)I(t+1)-2\sum_{r=0}^{t}I(r)}\\
&\le \frac{8}{n}\left(1-\frac{1}{n}\right)^{-2(s-t-2)I(t+1)-2\sum_{r=0}^{t}I(r)} \overset{I(r)\le 2^r,\ t<t_3}{\le} \frac{8}{n}\left(1-\frac{1}{n}\right)^{-2(s-t-2)2^{t_3}-2\cdot 2^{t_3}} \tag{37}\\
&\le \frac{8}{n}\left(1-\frac{1}{n}\right)^{-16(s-t-1)n/\omega} \le \frac{8}{n}\,e^{16(s-t-1)/\omega} \overset{s-t-1\le T'<3\ln n}{\le} \frac{8}{n}\,e^{48\ln n/\omega} \le \frac{1}{n^{1/(3\omega)}\ln^8 n}.
\end{aligned}$$
$t_3 \le t \le t_4 := t_3+\omega^2$: By (34) and since $E_{t_3}$ is realized,
$$U_t(s) \le 2n\left(1-\frac{1}{n}\right)^{(s-t-1)I(t)} \le 2n\left(1-\frac{1}{n}\right)^{(s-t-1)I(t_3)} \overset{I(t_3)\ge n/\omega}{\le} 2n\left(1-\frac{1}{n}\right)^{(s-t-1)n/\omega}.$$
Combining this with (36) we obtain for $n$ large enough
$$\begin{aligned}
\frac{U_t(s)}{\mathbb E^2(U_{t+1}(s) \mid a_t)} &\le \frac{32}{n}\left(1-\frac{1}{n}\right)^{(s-t-1)n/\omega-2(s-t_3)n} \le \frac{32}{n}\,e^{-(s-t-1)/\omega+2(s-t_3)}\\
&\le \frac{32}{n}\,e^{-(T'-t-1)/\omega+2(T'-t_3)} \le \frac{32}{n}\,e^{-(T'-(t_3+\omega^2)-1)/\omega+2(T'-t_3)} \tag{38}\\
&\le \frac{32}{n}\,e^{-(\frac{1}{2}\ln n-1)/\omega+2(\frac{1}{2}\ln n+\omega^2)} \le \frac{33\,e^{2\omega^2}}{n^{1/(2\omega)}} \le \frac{1}{n^{1/(3\omega)}\ln^8 n}.
\end{aligned}$$
$t_4 < t \le T'-1$: Here we first bound the following ratio separately:
$$\frac{U_t(s)}{\mathbb E(U_{t+1}(s) \mid a_t)} \overset{(34),(35)}{\le} 4\left(1-\frac{1}{n}\right)^{-(s-t-2)(I(t+1)-I(t))} \le 4\left(1-\frac{1}{n}\right)^{-(s-t-2)U(t)} \le 4\left(1-\frac{1}{n}\right)^{-(s-t-2)U(t_4)}. \tag{39}$$
We will first give an upper bound on $U(t_4)$. Recall that $U(t) = U_t(t+1)$, so we can bound $U(t_4)$ by bounding $U_{t_4}(t_4+1)$ from above. On $A_t$, we have for $n$ sufficiently large
$$\begin{aligned}
U(t_4) = U_{t_4}(t_4+1) &\le n\left(1-\frac{1}{n}\right)^{\sum_{r=0}^{t_4-1}I(r)}\left(1+\frac{1}{\ln^3 n}\right)^{t_4+1} \overset{A_t,(22)}{\le} 2n\left(1-\frac{1}{n}\right)^{\sum_{r=t_3}^{t_4-1}I(r)} \tag{40}\\
&\le 2n\left(1-\frac{1}{n}\right)^{\omega^2 I(t_3)} \overset{E_{t_3},(20)}{\le} 2n\left(1-\frac{1}{n}\right)^{\omega^2 n/\omega} \le 2ne^{-\omega} \le ne^{-\omega/2}.
\end{aligned}$$
Substituting this into (39) we obtain
$$\frac{U_t(s)}{\mathbb E(U_{t+1}(s) \mid a_t)} \le 4\left(1-\frac{1}{n}\right)^{-(s-t-2)ne^{-\omega/2}} \le 4e^{(s-t)e^{-\omega/2}} \overset{s-t\le\frac{1}{2}\ln n}{\le} 4e^{\frac{e^{-\omega/2}}{2}\ln n} = 4n^{\frac{1}{2e^{\omega/2}}}. \tag{41}$$
By (36)
$$\mathbb E(U_{t+1}(s) \mid a_t) \ge \frac{n}{4}\left(1-\frac{1}{n}\right)^{(s-t_3)n} \overset{s-t_3\le\frac{1}{2}\ln n+\omega^2}{\ge} \frac{n}{4}\left(1-\frac{1}{n}\right)^{(\frac{1}{2}\ln n+\omega^2)n} \ge \frac{\sqrt n\,e^{-\omega^2}}{8}. \tag{42}$$
So (41) and (42) yield
$$\frac{U_t(s)}{\mathbb E^2(U_{t+1}(s) \mid a_t)} \le \frac{32\,e^{\omega^2}}{n^{\frac{1}{2}-\frac{1}{2e^{\omega/2}}}} \le \frac{1}{n^{1/(3\omega)}\ln^8 n}, \tag{43}$$
and this concludes the proof of the claim. □
We will apply Talagrand’s inequality (31) with $x = \frac{\mathbb E(U_{t+1}(s) \mid a_t)}{2\ln^3 n}$. We use the observation (see for example p. 42 in [8]) that $m \le 2\mathbb E(L_{t+1}(s) \mid a_t) \le 2U_t(s)$, and also $x \le \mathbb E(U_{t+1}(s) \mid a_t) \le U_t(s)$. Thus $\psi(m+x) \le 27\,U_t(s)\ln^2 n$. Therefore by (31) and (33) and for $n$ sufficiently large
$$\mathbb P\left(|L_{t+1}(s) - \mathbb E(L_{t+1}(s) \mid a_t)| > \frac{\mathbb E(U_{t+1}(s) \mid a_t)}{\ln^3 n} \,\Big|\, a_t\right) \le 4\exp\left(-\frac{\mathbb E^2(U_{t+1}(s) \mid a_t)}{16\cdot 27\,U_t(s)\ln^8 n}\right) \overset{\text{Claim 10}}{\le} e^{-n^{1/(4\omega)}}. \tag{44}$$
Thus for $n$ sufficiently large
$$\mathbb P\big(\overline{D_{t+1}} \mid a_t\big) \le \mathbb P\left(\exists s:\ t+1 < s \le T'+1,\ |L_{t+1}(s) - \mathbb E(L_{t+1}(s) \mid a_t)| > \frac{\mathbb E(U_{t+1}(s) \mid a_t)}{\ln^3 n} \,\Big|\, a_t\right) \overset{(44)}{\le} T'e^{-n^{1/(4\omega)}} < 3\ln n\, e^{-n^{1/(4\omega)}} \le e^{-n^{1/(5\omega)}}. \tag{45}$$
Averaging over all $a_t$ such that $A_t$ holds, we deduce $\mathbb P\big(\overline{D_{t+1}} \mid A_t\big) \le e^{-n^{1/(5\omega)}}$. □
By Lemma 8, for $t < t_3-1$
$$\mathbb P(A_{t+1} \mid A_t) \ge 1-e^{-n^{1/(5\omega)}}, \tag{46}$$
and therefore
$$\mathbb P(A_{t_3-1}) = \mathbb P(A_0)\prod_{t=0}^{t_3-2}\mathbb P(A_{t+1} \mid A_t) \ge \left(1-e^{-n^{1/(5\omega)}}\right)^{t_3} \overset{t_3\le\log_2 n}{\ge} \left(1-e^{-n^{1/(5\omega)}}\right)^{\log_2 n} = 1-o(1). \tag{47}$$
So
$$\mathbb P\big(\overline{A_{t_3}}\big) = \mathbb P\big(\overline{D_{t_3}\cap E_{t_3}}\big) \le \mathbb P\big(\overline{D_{t_3}}\big) + \mathbb P\big(\overline{E_{t_3}}\big) \overset{(9),(12)}{\le} \mathbb P\big(\overline{D_{t_3}}\big) + o(1) \le \mathbb P\big(\overline{D_{t_3}} \mid A_{t_3-1}\big) + \mathbb P\big(\overline{A_{t_3-1}}\big) + o(1) \overset{(47)}{=} \mathbb P\big(\overline{D_{t_3}} \mid A_{t_3-1}\big) + o(1) \overset{\text{Lemma 8}}{=} o(1). \tag{48}$$
Now note that for any $t \ge t_3$
$$\mathbb P\big(\overline{A_{t+1}} \mid A_t\big) = \mathbb P\big(\overline{D_{t+1}} \mid A_t\big) \le e^{-n^{1/(5\omega)}}.$$
So using (48) and the above inequality
$$\mathbb P(A_{T'}) = \mathbb P(A_{t_3})\prod_{t=t_3}^{T'-1}\mathbb P(A_{t+1} \mid A_t) \ge (1-o(1))\left(1-e^{-n^{1/(5\omega)}}\right)^{T'} \ge (1-o(1))\left(1-e^{-n^{1/(5\omega)}}\right)^{3\ln n} = 1-o(1). \tag{49}$$
Now we will study the evolution of $I(t)$ on the event $A_{T'}$. Recall that $t_4 = t_3+\omega^2$. We first prove the following lemma.

Lemma 11. On $A_{T'}$, if $n$ is sufficiently large, then for any $t$ such that $t_4 \le t \le T'-1$ we have
$$\frac{U(t+1)}{U(t)} \le \frac{2}{e}.$$
Proof. As we mentioned above, $U(t) = U_t(t+1)$. But also observe that $U(t+1) = U_t(t+2)$. Therefore
$$\frac{U(t+1)}{U(t)} = \frac{U_t(t+2)}{U_t(t+1)}.$$
Thus we may use the estimates on $U_t(t+2)$ and $U_t(t+1)$ on the event $A_{T'}$. So we have
$$\frac{U_t(t+2)}{U_t(t+1)} \overset{A_{T'},(25)}{\le} \frac{\bar U_t(t+2)}{\bar U_t(t+1)}\left(1+\frac{1}{\ln^3 n}\right)^{t+1}\left(1-\frac{1}{\ln^3 n}\right)^{-t-1}. \tag{50}$$
By Lemma 7 ((22) and (23)), if $n$ is large enough, then
$$\frac{\bar U_t(t+2)}{\bar U_t(t+1)} \le \frac{(1-1/n)^{\sum_{r=0}^{t}I(r)}}{(1-1/n)^{\sum_{r=0}^{t-1}I(r)}}\left(1-\frac{10\ln^2 n}{n^2}\right)^{-n} \le \left(1-\frac{1}{n}\right)^{I(t)}\left(1-\frac{10\ln^2 n}{n^2}\right)^{-n} \overset{I(t)\ge I(t_4)}{\le} \left(1-\frac{1}{n}\right)^{I(t_4)}\left(1-\frac{10\ln^2 n}{n^2}\right)^{-n}. \tag{51}$$
But by (40), $I(t_4) \ge n-ne^{-\omega/2}$. Thus
$$\left(1-\frac{1}{n}\right)^{I(t_4)} \le \left(1-\frac{1}{n}\right)^{n-ne^{-\omega/2}} \le e^{-1}e^{e^{-\omega/2}}. \tag{52}$$
Now we combine (50), (51) and (52) and we deduce that for $n$ sufficiently large
$$\frac{U_t(t+2)}{U_t(t+1)} \le e^{-1}e^{e^{-\omega/2}}\left(1+\frac{1}{\ln^3 n}\right)^{t+1}\left(1-\frac{1}{\ln^3 n}\right)^{-t-1}\left(1-\frac{10\ln^2 n}{n^2}\right)^{-n} \overset{t<T'<3\ln n}{\le} \frac{2}{e}. \qquad\square$$
3.1. Proof of Theorem 1: the upper bound. We are now ready to conclude the proof of the upper bound in Theorem 1. We will condition on $A_{T'}$ and calculate $\mathbb E(U_{T'}(T) \mid A_{T'})$, proving that it is $o(1)$. The upper bound in Theorem 1 will then follow from Markov’s inequality.
Let $v \in V_n\setminus\{1\}$ and let $(a_0,\dots,a_{T'})$ be a realization of the process up to time $T'$ fulfilling the events “$v \in U_{T'}(T)$” and $A_{T'}$. More precisely, $a_i$ is an ordered set of $i+1$ sets, the $j$th of which contains ordered pairs whose first element is a vertex informed at stage $j$ and whose second element is its random choice. Moreover, these $i+1$ sets describe a feasible realization of the process until the $i$th stage with the constraint that the events “$v \in U_{T'}(T)$” and $A_{T'}$ are not violated.
Then
$$\mathbb P(a_0) = \mathbb P(a_0,\ v \in U_0(T)) = \mathbb P(v \in U_0(T))\,\mathbb P(a_0 \mid v \in U_0(T)), \tag{53}$$
and, since $a_{t+1}$ satisfies the event “$v \in U_{t+1}(T)$”, one has
$$\mathbb P(a_{t+1} \mid a_t) = \mathbb P(a_{t+1},\ v \in U_{t+1}(T) \mid a_t) = \mathbb P(v \in U_{t+1}(T) \mid a_t)\,\mathbb P(a_{t+1} \mid v \in U_{t+1}(T), a_t). \tag{54}$$
So summing over all realizations $(a_0,\dots,a_{T'})$ of the process up to time $T'$ fulfilling the events “$v \in U_{T'}(T)$” and $A_{T'}$, we write
$$\begin{aligned}
\mathbb P(v \in U_{T'}(T),\ A_{T'}) &= \sum_{(a_0,\dots,a_{T'})}\mathbb P(a_0,\dots,a_{T'}) = \sum_{(a_0,\dots,a_{T'})}\mathbb P(a_0)\prod_{t=0}^{T'-1}\mathbb P(a_{t+1} \mid a_t)\\
&\overset{(53),(54)}{=} \sum_{(a_0,\dots,a_{T'})}\mathbb P(v \in U_0(T))\,\mathbb P(a_0 \mid v \in U_0(T))\prod_{t=0}^{T'-1}\mathbb P(v \in U_{t+1}(T) \mid a_t)\,\mathbb P(a_{t+1} \mid v \in U_{t+1}(T), a_t). \tag{55}
\end{aligned}$$
Firstly, we will obtain a uniform bound on $\mathbb P(v \in U_0(T))\prod_{t=0}^{T'-1}\mathbb P(v \in U_{t+1}(T) \mid a_t)$.

Claim 12. If $n$ is large enough, then for all realizations $(a_0,\dots,a_{T'})$ of the process up to stage $T'$ fulfilling the events “$v \in U_{T'}(T)$” and $A_{T'}$ we have
$$\mathbb P(v \in U_0(T))\prod_{t=0}^{T'-1}\mathbb P(v \in U_{t+1}(T) \mid a_t) \le \frac{2e^{-\omega}}{n}.$$
Proof. Let $N(r, a_{r-1})$ denote the number of newly informed vertices at stage $r$ given the history of the process up to stage $r-1$; note that the latter determines this set. But as $(a_0,\dots,a_{T'})$ fulfills $A_{T'}$,
$$\mathbb P(v \in U_0(T))\prod_{t=0}^{T'-1}\mathbb P(v \in U_{t+1}(T) \mid a_t) = \left(1-\frac{T-1}{n}\right)^{N(0)}\prod_{r=1}^{T'}\left(1-\frac{T-r-1}{n}\right)^{N(r,a_{r-1})} \le \left(1-\frac{1}{n}\right)^{(T-1)N(0)+\sum_{r=1}^{T'}(T-r-1)N(r,a_{r-1})}. \tag{56}$$
To bound this from above, we will give a lower bound on $(T-1)N(0)+\sum_{r=1}^{T'}(T-r-1)N(r,a_{r-1})$:
$$\begin{aligned}
(T-1)N(0)+\sum_{r=1}^{T'}(T-r-1)N(r,a_{r-1}) &\overset{(24)}{\ge} (T-T'-1)I(T')+\sum_{r=0}^{T'-1}I(r)\\
&\ge (T-T'-1)I(T')+\sum_{r=t_4}^{T'-1}I(r) = (T-T'-1)(n-U(T'))+\sum_{r=t_4}^{T'-1}(n-U(r)) \tag{57}\\
&= (T-t_4-1)\,n-(T-T'-1)\,U(T')-\sum_{r=t_4}^{T'-1}U(r).
\end{aligned}$$
Therefore (56) yields
$$\begin{aligned}
\mathbb P(v \in U_0(T))\prod_{t=0}^{T'-1}\mathbb P(v \in U_{t+1}(T) \mid a_t) &\le \left(1-\frac{1}{n}\right)^{(T-t_4-1)n-(T-T'-1)U(T')-\sum_{r=t_4}^{T'-1}U(r)}\\
&\le e^{-\ln n-\omega}\,e^{\frac{(T-T'-1)U(T')+\sum_{r=t_4}^{T'-1}U(r)}{n}} \le \frac{e^{-\omega}}{n}\,e^{\frac{(T-T')U(T')+\sum_{r=t_4}^{T'-1}U(r)}{n}}. \tag{58}
\end{aligned}$$
By Lemma 11, for $n$ sufficiently large
$$U(T') \le \left(\frac{2}{e}\right)^{T'-t_4}U(t_4) \le \left(\frac{2}{e}\right)^{\ln n/2}U(t_4) \overset{(40)}{\le} \left(\frac{2}{e}\right)^{\ln n/2}ne^{-\omega/2}, \tag{59}$$
and also
$$\sum_{r=t_4}^{T'-1}U(r) \le \sum_{r=t_4}^{T'-1}\left(\frac{2}{e}\right)^{r-t_4}U(t_4) \le \frac{e}{e-2}\,U(t_4) \overset{(40)}{\le} n\,\frac{e}{e-2}\,e^{-\omega/2}. \tag{60}$$
So (59) and (60) imply that
$$\frac{(T-T')U(T')+\sum_{r=t_4}^{T'-1}U(r)}{n} \le 3\ln n\left(\frac{2}{e}\right)^{\ln n/2}e^{-\omega/2}+\frac{e}{e-2}\,e^{-\omega/2} = o(1).$$
Substituting this bound into (58) we obtain
$$\mathbb P(v \in U_0(T))\prod_{t=0}^{T'-1}\mathbb P(v \in U_{t+1}(T) \mid a_t) \le \frac{2e^{-\omega}}{n}. \qquad\square$$
So (55) becomes
$$\mathbb P(v \in U_{T'}(T),\ A_{T'}) \le \frac{2e^{-\omega}}{n}\sum_{(a_0,\dots,a_{T'})}\mathbb P(a_0 \mid v \in U_0(T))\prod_{t=0}^{T'-1}\mathbb P(a_{t+1} \mid v \in U_{t+1}(T), a_t). \tag{61}$$
But for the sum on the right-hand side we have the following:

Claim 13. Summing over all realizations $(a_0,\dots,a_{T'})$ of the process up to time $T'$ fulfilling the events “$v \in U_{T'}(T)$” and $A_{T'}$, we have
$$\sum_{(a_0,\dots,a_{T'})}\mathbb P(a_0 \mid v \in U_0(T))\prod_{t=0}^{T'-1}\mathbb P(a_{t+1} \mid v \in U_{t+1}(T), a_t) \le 1.$$
Proof. Consider the following probability space: we run the process from 0 to $T'$ as before, except that in each step $t$ each vertex $u \in N_t$ selects its starting position uniformly at random from those positions such that $v \notin \ell(u; t+1, T-1)$. In other words, $u$ selects a position in its list such that $v$ is avoided in the segment of length $T-t-1$ starting at this position. We will denote the probabilities in this probability space by $\tilde{\mathbb P}$. All realizations $(a_0,\dots,a_{T'})$ of the process up to time $T'$ fulfilling the events “$v \in U_{T'}(T)$” and $A_{T'}$ lie in this modified probability space, and we have
$$\mathbb P(a_0 \mid v \in U_0(T)) = \tilde{\mathbb P}(a_0)$$
and in general, for all $0 \le t \le T'-1$,
$$\mathbb P(a_{t+1} \mid v \in U_{t+1}(T), a_t) = \tilde{\mathbb P}(a_{t+1} \mid a_t).$$
So $\mathbb P(a_0 \mid v \in U_0(T))\prod_{t=0}^{T'-1}\mathbb P(a_{t+1} \mid v \in U_{t+1}(T), a_t) = \tilde{\mathbb P}(a_0)\prod_{t=0}^{T'-1}\tilde{\mathbb P}(a_{t+1} \mid a_t) = \tilde{\mathbb P}(a_0,\dots,a_{T'})$ and
$$\sum_{(a_0,\dots,a_{T'})}\mathbb P(a_0 \mid v \in U_0(T))\prod_{t=0}^{T'-1}\mathbb P(a_{t+1} \mid v \in U_{t+1}(T), a_t) = \sum_{(a_0,\dots,a_{T'})}\tilde{\mathbb P}(a_0,\dots,a_{T'}) \le 1. \qquad\square$$
So (61) becomes
$$\mathbb P(v \in U_{T'}(T),\ A_{T'}) \le \frac{2e^{-\omega}}{n}. \tag{62}$$
Bayes’ rule now yields for $n$ sufficiently large
$$\mathbb P(v \in U_{T'}(T) \mid A_{T'}) = \frac{\mathbb P(v \in U_{T'}(T),\ A_{T'})}{\mathbb P(A_{T'})} \overset{(49),(62)}{\le} \frac{3e^{-\omega}}{n},$$
and therefore¹
$$\mathbb E(U_{T'}(T) \mid A_{T'}) = \sum_{v \in V_n\setminus\{1\}}\mathbb P(v \in U_{T'}(T) \mid A_{T'}) \le 3e^{-\omega} = o(1). \tag{63}$$
We now have
$$\mathbb P(S(n) > T) \le \mathbb P(S(n) > T \mid A_{T'}) + \mathbb P\big(\overline{A_{T'}}\big) \overset{(49)}{\le} \mathbb P(U_{T'}(T) > 0 \mid A_{T'}) + o(1) \le \mathbb E(U_{T'}(T) \mid A_{T'}) + o(1) \overset{(63)}{=} o(1), \tag{64}$$
and this concludes the proof of the upper bound in Theorem 1.
3.2. Proof of Theorem 1: the lower bound. Here we set $T_- := t_3+\omega^2+\ln n-4\ln\ln n$. Firstly, we will show that with probability $1-o(1)$ none of the vertices in $U_{T'}(T_-)$ will be informed until step $T_-$. We will then conclude the proof of the lower bound on $S(n)$ by proving that $U_{T'}(T_-) > 0$ with probability $1-o(1)$. We will show the latter by means of a second moment argument. In both steps, we will work conditionally on the event $A_{T'}$.
Let us begin with the first step. Recall that by (59) on $A_{T'}$ we have
$$U(T') \le \left(\frac{2}{e}\right)^{\ln n/2}ne^{-\omega/2} \le \left(\frac{2}{e}\right)^{\ln n/2}n.$$
¹Presumably we need to extend the definition of $U_t(s)$ for all $t$.
The probability that a given vertex in $U_{T'}(T_-)$ is informed during one of the subsequent steps, that is, from step $T'+1$ up to $T_-$, is no more than $1-\left(1-\frac{U(T')}{n}\right)^{T_--T'}$. But as $T_--T' \le \frac{1}{2}\ln n$, the latter probability is no more than
$$1-\left(1-\frac{U(T')}{n}\right)^{T_--T'} \le \frac{(T_--T')\,U(T')}{n} \le \frac{\ln n}{2n}\left(\frac{2}{e}\right)^{\ln n/2}n = \frac{\ln n}{2}\left(\frac{2}{e}\right)^{\ln n/2}.$$
Also, we repeat the calculations which led to (63), replacing $T$ by $T_-$ (note that $T_--t_4 = \ln n-4\ln\ln n$). We obtain
$$\mathbb E(U_{T'}(T_-) \mid A_{T'}) \le 3e^{4\ln\ln n}.$$
So conditional on $A_{T'}$, the expected number of vertices in $U_{T'}(T_-)$ which are informed between steps $T'$ and (including) $T_-$ is at most $3e^{4\ln\ln n}\,\frac{\ln n}{2}\left(\frac{2}{e}\right)^{\ln n/2} \le 3\ln^5 n\left(\frac{2}{e}\right)^{\ln n/2} = o(1)$. In other words, with (conditional) probability $1-o(1)$, no vertex in $U_{T'}(T_-)$ is informed during these steps.
Now we conclude the proof of the lower bound, showing that conditional on $A_{T'}$ with probability $1-o(1)$ we have $U_{T'}(T_-) > 0$. Since $A_{T'}$ itself occurs with probability $1-o(1)$, this implies that with probability $1-o(1)$, running the process for $T_-$ steps is not enough to inform all vertices.
Let $D'_{t_3}$ be the event that for all $0 \le r \le t_3$ and for all $r < s \le T_-$ we have
$$U_r(s) \in \bar U_r(s)\left(1 \pm \frac{1}{\ln^3 n}\right)^{r+1}. \tag{65}$$
We now define
$$A'_{t_3} := D'_{t_3}\cap E_{t_3}. \tag{66}$$
We can prove the following:

Claim 14. $\mathbb P(A'_{t_3}) = 1-o(1)$.

Indeed, the proof goes exactly as the proof of the fact that $\mathbb P(A_{t_3}) = 1-o(1)$. We can apply Talagrand’s inequality (Eq. (31)), since Claim 10 holds in this case. In particular, (37) still holds for $t \le t_3$ if we consider $s \le T_-$ (instead of $s \le T'$), and this is sufficient for Claim 10. We omit the details.
Now, on $A'_{t_3}$ we have
$$\begin{aligned}
U_{t_3}(T_-) &\ge \bar U_{t_3}(T_-)\left(1-\frac{1}{\ln^3 n}\right)^{t_3+1}\\
&\overset{(23)}{\ge} (n-1)\left(1-\frac{1}{n}\right)^{(T_--t_3-1)I(t_3)+\sum_{r=0}^{t_3-1}I(r)}\left(1-\frac{10\ln^2 n}{n^2}\right)^{t_3}\left(1-\frac{1}{\ln^3 n}\right)^{t_3+1}\\
&= n\left(1-\frac{1}{n}\right)^{(T_--t_3-1)I(t_3)+\sum_{r=0}^{t_3-1}I(r)}(1-o(1)).
\end{aligned}$$
But $\sum_{r=0}^{t_3-1}I(r) \le \sum_{r=0}^{t_3-1}2^r \le \frac{8n}{\omega}$ and $(1-1/n)^{8n/\omega} = 1-o(1)$. Therefore
$$U_{t_3}(T_-) \ge n\left(1-\frac{1}{n}\right)^{(T_--t_3)I(t_3)}(1-o(1)) = ne^{-(T_--t_3)\frac{I(t_3)}{n}}(1-o(1)). \tag{67}$$
Now, let us fix a certain realization of $U_{t_3}(T_-)$ such that $A'_{t_3}$ is fulfilled and consider the following “imaginary” setting: after step $t_3$, every uninformed vertex (there are $U(t_3)$ of them) chooses a random segment of length $T_--t_3$ of its list. Let $\tilde{\mathcal U}_{T'}(T_-)$ be the set of vertices from $U_{t_3}(T_-)$ which are in none of these segments, and let $\tilde U_{T'}(T_-) := |\tilde{\mathcal U}_{T'}(T_-)|$.

The random variable $\tilde U_{T'}(T_-)$ is stochastically smaller than $U_{T'}(T_-)$, because in the original setting $U_{T'}(T_-)$ is determined by the vertices informed during steps $t_3+1$ up to $T'$ (there are at most $U(t_3)$ of them), and for each of them only a random segment of length less than $T_--t_3$ is taken into account. So it suffices to show that $\tilde U_{T'}(T_-) > 0$ with probability $1-o(1)$.
Now we calculate $\mathbb E(\tilde U_{T'}(T_-))$:
$$\mathbb E(\tilde U_{T'}(T_-)) = U_{t_3}(T_-)\left(1-\frac{T_--t_3}{n}\right)^{U(t_3)} = U_{t_3}(T_-)\,e^{-(T_--t_3)\frac{U(t_3)}{n}}(1-o(1)). \tag{68}$$
It is easy to see that $\mathbb E(\tilde U_{T'}(T_-)) \to \infty$ as $n \to \infty$. In particular, we have
$$\begin{aligned}
\mathbb E(\tilde U_{T'}(T_-)) &\overset{(68)}{=} U_{t_3}(T_-)\,e^{-(T_--t_3)\frac{U(t_3)}{n}}(1-o(1))\\
&\overset{(67)}{\ge} ne^{-(T_--t_3)\frac{I(t_3)}{n}}e^{-(T_--t_3)\frac{U(t_3)}{n}}(1-o(1)) = ne^{-(T_--t_3)}(1-o(1)) \tag{69}\\
&= ne^{-\omega^2-\ln n+4\ln\ln n}(1-o(1)) = e^{-\omega^2}\ln^4 n\,(1-o(1)).
\end{aligned}$$
We will show the following:

Lemma 15. With probability $1-o(1)$, $\tilde U_{T'}(T_-) > 0$.

Therefore we obtain:

Corollary 16. $\mathbb P(U_{T'}(T_-) > 0 \mid A'_{t_3}) = 1-o(1)$.

Since $A'_{t_3}$ and $A_{T'}$ both occur with probability $1-o(1)$, repeated application of Bayes’ rule shows that also
$$\mathbb P(U_{T'}(T_-) > 0 \mid A_{T'}) = 1-o(1).$$
This concludes the proof of the lower bound. So the only work left now is the proof of Lemma 15.
Proof of Lemma 15. Chebyshev’s inequality yields
$$\mathbb P(\tilde U_{T'}(T_-) = 0) \le \mathbb P\left(\big|\tilde U_{T'}(T_-)-\mathbb E(\tilde U_{T'}(T_-))\big| \ge \mathbb E(\tilde U_{T'}(T_-))\right) \le \frac{\mathrm{Var}(\tilde U_{T'}(T_-))}{\mathbb E^2(\tilde U_{T'}(T_-))}.$$
To show that this ratio is $o(1)$, it suffices to show that
$$\mathbb E\big(\tilde U^2_{T'}(T_-)\big) \le (1+o(1))\,\mathbb E^2\big(\tilde U_{T'}(T_-)\big). \tag{70}$$
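The second moment method behind (70) can be illustrated on a self-contained toy example, not the process studied here: the number $X$ of empty bins after $n$ uniform throws into $n$ bins has $\mathbb E X \to \infty$ and $\mathrm{Var}(X) = o(\mathbb E^2 X)$, so Chebyshev's inequality forces $\mathbb P(X = 0) \to 0$. A hedged sketch with function names of our own choosing:

```python
import random
import statistics

def empty_bins(n_balls: int, n_bins: int, rng: random.Random) -> int:
    """Number of bins left empty after n_balls uniform throws."""
    hit = {rng.randrange(n_bins) for _ in range(n_balls)}
    return n_bins - len(hit)

rng = random.Random(1)
n = 2000
samples = [empty_bins(n, n, rng) for _ in range(300)]
mean = statistics.fmean(samples)
ratio = statistics.variance(samples) / mean ** 2
# Chebyshev: P(X = 0) <= P(|X - E X| >= E X) <= Var(X) / E^2(X).
print(mean, ratio, min(samples))
```

The empirical ratio $\mathrm{Var}/\mathbb E^2$ is tiny, and indeed no sample ever hits 0; establishing the analogous variance bound for $\tilde U_{T'}(T_-)$ is exactly what the remainder of the proof does.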
As
$$\mathbb E\big(\tilde U^2_{T'}(T_-)\big) = \sum_{v,u \in U_{t_3}(T_-)}\mathbb P(v, u \in \tilde{\mathcal U}_{T'}(T_-)), \tag{71}$$
we will estimate $\mathbb E\big(\tilde U^2_{T'}(T_-)\big)$ by first estimating $\mathbb P(v, u \in \tilde{\mathcal U}_{T'}(T_-))$ for any given $u, v \in U_{t_3}(T_-)$.

If the distance of $v$ and $u$ in $\ell(x)$ were larger than $T_--t_3$ for all $x$, then $\mathbb P(v, u \in \tilde{\mathcal U}_{T'}(T_-))$ would be bounded from above by the square of an expression similar to that in (56), yielding (70). However, this is not the case for all pairs $u, v$. Nonetheless, we can show that it is true for most of the pairs:
Claim 17. Let $B$ be the set of distinct unordered pairs $\{v, u\} \in \binom{U_{t_3}(T_-)}{2}$ for which there are no more than $U(t_3)-U(t_3)/\ln^2 n$ vertices $x \in U(t_3)$ such that $v$ and $u$ are at distance at least $T_-$ in $\ell(x)$. Then
$$|B| \le \frac{2T_-\ln^2 n}{U_{t_3}(T_-)}\binom{U_{t_3}(T_-)}{2}.$$
−)
Proof of Claim Let us write |B| = λ Ut3 (T
and let us count the pairs (x, {v, u}) ∈ U(t3 )×
2
Ut3 (T− )
which are such that v and u are at distance at least T− in `(x). Then the number
2
of such pairs is at least U (t3 )Ut3 (T− )(Ut3 (T− ) − 2T− + 2)/2. Let us also express this number
−)
−)
U (t3 ).
(U (t3 ) − U (t3 )/ ln2 n) + (1 − λ) Ut3 (T
in terms of |B|. Then this is at most λ Ut3 (T
2
2
In other words,
Ut3 (T− ) − 2T− + 2
Ut3 (T− )
1
Ut3 (T− )
U (t3 )Ut3 (T− )
≤λ
U (t3 ) 1 − 2
+(1−λ)
U (t3 ),
2
2
2
ln n
or
2T−
Ut3 (T− )2
Ut3 (T− )
1
Ut3 (T− )
1−
U (t3 )
≤λ
U (t3 ) 1 − 2
+ (1 − λ)
U (t3 ).
2
Ut3 (T− )
2
2
ln n
−)
we obtain:
Dividing both sides by Ut3 (T
2
Ut3 (T− )
2T−
1
+ (1 − λ)U (t3 )
U (t3 )
1−
≤ λU (t3 ) 1 − 2
Ut3 (T− ) − 1
Ut3 (T− )
ln n
and so
2T−
1
U (t3 ) 1 −
+ (1 − λ)U (t3 ).
≤ λU (t3 ) 1 − 2
Ut3 (T− )
ln n
Now we divide both sides by U (t3 ) and we obtain
2T−
λ
1
1−
≤λ 1− 2
+ (1 − λ) = 1 − 2 ,
Ut3 (T− )
ln n
ln n
which yields
2T− ln2 n
λ≤
.
Ut3 (T− )
Now we write the sum in (71) as follows:
$$\mathbb E\big(\tilde U^2_{T'}(T_-)\big) = 2\sum_{\{v,u\}\in B}\mathbb P(v, u \in \tilde{\mathcal U}_{T'}(T_-)) + 2\sum_{\{v,u\}\in \overline B}\mathbb P(v, u \in \tilde{\mathcal U}_{T'}(T_-)) + \sum_{v \in U_{t_3}(T_-)}\mathbb P(v \in \tilde{\mathcal U}_{T'}(T_-)). \tag{72}$$
We will treat the three sums separately. The third sum is
$$\sum_{v \in U_{t_3}(T_-)}\mathbb P(v \in \tilde{\mathcal U}_{T'}(T_-)) = \mathbb E(\tilde U_{T'}(T_-)) = o\big(\mathbb E^2(\tilde U_{T'}(T_-))\big), \tag{73}$$
since $\mathbb E(\tilde U_{T'}(T_-)) \to \infty$. In the first sum, we bound each summand crudely:
$$\mathbb P(v, u \in \tilde{\mathcal U}_{T'}(T_-)) \le \left(1-\frac{T_--t_3}{n}\right)^{U(t_3)}.$$
So we have
$$\begin{aligned}
\sum_{\{v,u\}\in B}\mathbb P(v, u \in \tilde{\mathcal U}_{T'}(T_-)) &\le |B|\left(1-\frac{T_--t_3}{n}\right)^{U(t_3)} \overset{\text{Claim 17}}{\le} \frac{2T_-\ln^2 n}{U_{t_3}(T_-)}\binom{U_{t_3}(T_-)}{2}\left(1-\frac{T_--t_3}{n}\right)^{U(t_3)}\\
&\le \frac{T_-\ln^2 n}{U_{t_3}(T_-)}\,U^2_{t_3}(T_-)\left(1-\frac{T_--t_3}{n}\right)^{U(t_3)} \overset{(68)}{=} \frac{T_-\ln^2 n}{U_{t_3}(T_-)}\left(1-\frac{T_--t_3}{n}\right)^{-U(t_3)}\mathbb E^2(\tilde U_{T'}(T_-)).
\end{aligned}$$
Now, we use (67) to bound $U_{t_3}(T_-)$ from below, thus obtaining for $n$ large enough:
$$\begin{aligned}
\sum_{\{v,u\}\in B}\mathbb P(v, u \in \tilde{\mathcal U}_{T'}(T_-)) &\overset{(67)}{\le} \frac{T_-\ln^2 n}{n}\,e^{(T_--t_3)\frac{I(t_3)}{n}}\left(1-\frac{T_--t_3}{n}\right)^{-U(t_3)}(1+o(1))\,\mathbb E^2(\tilde U_{T'}(T_-))\\
&\overset{U(t_3)=n-I(t_3)}{=} \frac{T_-\ln^2 n}{n}\,e^{(T_--t_3)\frac{I(t_3)}{n}+(T_--t_3)\frac{U(t_3)}{n}}(1+o(1))\,\mathbb E^2(\tilde U_{T'}(T_-))\\
&= \frac{T_-\ln^2 n}{n}\,e^{T_--t_3}(1+o(1))\,\mathbb E^2(\tilde U_{T'}(T_-)) \tag{74}\\
&\overset{T_-<3\ln n}{\le} \frac{3\ln^3 n}{n}\,e^{\omega^2+\ln n-4\ln\ln n}(1+o(1))\,\mathbb E^2(\tilde U_{T'}(T_-))\\
&\le \frac{4e^{\omega^2}\ln^3 n}{\ln^4 n}\,\mathbb E^2(\tilde U_{T'}(T_-)) = o\big(\mathbb E^2(\tilde U_{T'}(T_-))\big).
\end{aligned}$$
Now, each summand in the second sum is
$$\mathbb P(v, u \in \tilde{\mathcal U}_{T'}(T_-)) = \left(1-\frac{2(T_--t_3)}{n}\right)^{U(t_3)}.$$
So we obtain:
$$2\sum_{\{v,u\}\in \overline B}\mathbb P(v, u \in \tilde{\mathcal U}_{T'}(T_-)) = 2\binom{U_{t_3}(T_-)}{2}\left(1-\frac{2(T_--t_3)}{n}\right)^{U(t_3)} \le U^2_{t_3}(T_-)\left(1-\frac{T_--t_3}{n}\right)^{2U(t_3)} = \mathbb E^2(\tilde U_{T'}(T_-)). \tag{75}$$
We combine (74), (73) and (75) to bound $\mathbb E\big(\tilde U^2_{T'}(T_-)\big)$ as in (72), thus deducing (70). □
4. Discussion

In this paper, we presented a precise analysis of quasirandom rumour spreading on the complete graph and showed that it evolves essentially as randomized rumour spreading. Although in terms of applications the upper bound is the more important one, it would be interesting to show that a matching lower bound holds, that is, $S(n) \ge \log_2 n + \ln n - \omega(n)$ with probability $1-o(1)$. We believe that such a bound holds, but we have been unable to prove it.
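The quasirandom protocol itself is straightforward to simulate on $K_n$. The sketch below is our own illustration (with helper names of our choosing, not from the paper); it uses an independent random cyclic list per vertex and a uniformly random starting position, and typically reports a number of rounds close to $\log_2 n + \ln n$:

```python
import math
import random

def quasirandom_rounds(n: int, rng: random.Random) -> int:
    """Rounds until all n vertices of K_n are informed under the
    quasirandom protocol: once informed, a vertex picks a uniformly
    random starting position in its cyclic list of neighbours (here a
    random permutation) and informs successive list entries, one per
    round, starting with the round after it was informed."""
    # vertex -> (cyclic list of neighbours, index of next entry to inform)
    state = {0: (list(range(1, n)), rng.randrange(n - 1))}
    informed = {0}
    rounds = 0
    while len(informed) < n:
        rounds += 1
        newly = set()
        for u, (perm, idx) in state.items():
            target = perm[idx]
            state[u] = (perm, (idx + 1) % (n - 1))
            if target not in informed:
                newly.add(target)
        for v in newly:  # newly informed vertices act from the next round on
            informed.add(v)
            perm = [w for w in range(n) if w != v]
            rng.shuffle(perm)
            state[v] = (perm, rng.randrange(n - 1))
    return rounds

rng = random.Random(2)
n = 1024
runs = [quasirandom_rounds(n, rng) for _ in range(5)]
print(runs, round(math.log2(n) + math.log(n), 1))  # theory: about 16.9
```

Replacing the random permutations by arbitrary fixed cyclic lists reproduces the adversarial-list setting of Doerr et al. [4]; the theorem says the round count stays essentially the same.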
References
[1] S. Angelopoulos, B. Doerr, A. Huber, and K. Panagiotou. Tight bounds for quasirandom rumor spreading. Submitted.
[2] S. Angelopoulos and K. Panagiotou. Private communication.
[3] B. Bollobás. Random Graphs. Cambridge University Press, 2nd. edition, 2001.
[4] B. Doerr, T. Friedrich, and T. Sauerwald. Quasirandom broadcasting. In Proceedings of the 19th Annual
ACM-SIAM Symp. on Disc. Alg. (SODA), pages 773–781, 2008.
[5] U. Feige, D. Peleg, P. Raghavan, and E. Upfal. Randomized broadcast in networks. Random Structures
and Algorithms, 1(4):447–460, 1990.
[6] A. Frieze and G. Grimmett. The shortest-path problem for graphs with random arc-lengths. Discr. Appl.
Math., 10:57–77, 1985.
[7] A. E. Holroyd, L. Levine, K. Mészáros, Y. Peres, J. Propp, and D. B. Wilson. Chip-firing and rotor-routing on directed graphs. To appear in “In and Out of Equilibrium II,” Eds. V. Sidoravicius and M. E. Vares, Birkhäuser.
[8] S. Janson, T. Luczak, and A. Ruciński. Random Graphs. Wiley Interscience, 2000.
[9] B. Pittel. On spreading a rumor. SIAM J. on Appl. Math., 47(1):213–223, 1987.
[10] V. B. Priezzhev, D. Dhar, A. Dhar, and S. Krishnamurthy. Eulerian walkers as a model of self-organized
criticality. Phys. Rev. Lett., 77(25):5079–5082, 1996.
[11] M. Talagrand. Concentration of measure and isoperimetric inequalities in product spaces. Inst. Hautes
Études Sci. Publ. Math., 81:73–205, 1995.
Nikolaos Fountoulakis & Anna Huber
Max-Planck-Institut für Informatik
Campus E1 4
D-66123, Saarbrücken
Germany
E-mail addresses: {fountoul, ahuber}@mpi-inf.mpg.de