Optimal, Distributed Decision-Making: The Case of No Communication
Stavros Georgiades
University of Cyprus, Nicosia, Cyprus
Marios Mavronicolas
University of Cyprus, Nicosia, Cyprus
Paul Spirakis
University of Patras & Computer Technology Institute, Patras, Greece
(April 24, 2000)
We are honored to dedicate this work to the memory of Professor Gian-Carlo Rota (1932–1999).
A preliminary version of this work appears in the Proceedings of the 12th International Symposium on Fundamentals of Computation Theory, G. Ciobanu and G. Paun eds., pp. 293–303, Vol. 1684, Lecture Notes in Computer Science, Springer-Verlag, Iasi, Romania, August/September 1999.
This work has been supported in part by the European Union's ESPRIT Long Term Research Projects ALCOM-IT (contract # 20244) and GEPPCOM (contract # 9072), and by the Greek Ministry of Education.
Part of the work of M. Mavronicolas was done while at the Department of Computer Science and Engineering, University of Connecticut, Storrs, CT, and while at AT&T Labs – Research, Florham Park, NJ, as a visitor to the DIMACS Special Year on Networks, organized by the DIMACS Center for Discrete Mathematics and Theoretical Computer Science, Piscataway, NJ.
Authors' addresses: S. Georgiades, Department of Computer Science, University of Cyprus, P. O. Box 1537, Nicosia, CY-1678, Cyprus; M. Mavronicolas, Department of Computer Science, University of Cyprus, P. O. Box 1537, Nicosia, CY-1678, Cyprus; P. Spirakis, Department of Computer Engineering and Informatics, University of Patras, Rion, 265 00 Patras, Greece, & Computer Technology Institute, P. O. Box 1122, 261 10 Patras, Greece.
Abstract. We present a combinatorial framework for the study of a natural class of distributed optimization problems that involve decision-making by a collection of n distributed agents in the presence of incomplete information; such problems were originally considered in a load balancing setting by Papadimitriou and Yannakakis (Proceedings of the 10th Annual ACM Symposium on Principles of Distributed Computing, pp. 61–64, August 1991). Within our framework, we are able to settle completely the case where no communication is allowed among the agents. For that case, for any given decision protocol, our framework allows us to obtain a combinatorial inclusion-exclusion expression for the probability that no "overflow" occurs, called the winning probability, in terms of the volume of some simple combinatorial polytope. Within our general framework, we offer a complete resolution of the special cases of oblivious algorithms, for which agents do not "look at" their inputs, and non-oblivious algorithms, for which they do, of the general optimization problem. In either case, we derive optimality conditions in the form of combinatorial polynomial equations. For oblivious algorithms, we explicitly solve these equations to show that the optimal algorithm is simple and uniform, in the sense that agents need not "know" n. Most interestingly, we show that optimal non-oblivious algorithms must be non-uniform: we demonstrate that the optimality conditions admit different solutions for particular, different "small" values of n; however, these solutions improve in terms of the winning probability over the optimal, oblivious algorithm. Our results demonstrate an interesting trade-off between the amount of knowledge used by agents and uniformity for optimal, distributed decision-making with no communication.
Categories and Subject Descriptors: C.2.1 [Computer-Communication Networks]: Network Architecture and Design – distributed networks; C.2.3 [Computer-Communication Networks]: Network Operations – network management; C.2.4 [Computer-Communication Networks]: Distributed Systems – distributed applications; C.4 [Performance of Systems]: performance attributes; F.1.1 [Computation by Abstract Devices]: Models of Computation – relations among models; F.1.2 [Computation by Abstract Devices]: Modes of Computation – probabilistic computation; F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems – computations on discrete structures, sequencing and scheduling; G.2.1 [Discrete Mathematics]: Combinatorics – combinatorial algorithms, counting problems

General Terms: Algorithms, Performance, Theory

Additional Key Words and Phrases: distributed computing, decision-making, oblivious and non-oblivious protocols, uniform and non-uniform protocols, optimality, principle of inclusion-exclusion.
Contents

1 Introduction
2 Combinatorial Framework
  2.1 Geometrical Preliminaries
  2.2 Probabilistic Tools
  2.3 Notation
3 Model
  3.1 Distributed Decision-Making
  3.2 The No Communication Case
4 Oblivious Algorithms
  4.1 The Winning Probability and Optimality Conditions
  4.2 Uniformity
5 Non-Oblivious Algorithms
  5.1 The Winning Probability and Optimality Conditions
  5.2 Non-Uniformity
    5.2.1 The Case n = 3 and β = 1
    5.2.2 The Case n = 4 and β = 4/3
6 Discussion
1 Introduction
In a distributed optimization problem, each of n distributed agents receives a private input, communicates possibly with other agents to learn about their inputs, and decides, based on this possibly partial knowledge, on an output; the task is to maximize a common objective function. Such problems were originally introduced by Papadimitriou and Yannakakis [11], in an effort to understand the crucial economic value of information [1] as a computational resource in a distributed system (see, also, [2, 5, 10, 12]). Intuitively, the more information available to agents, the better decisions they make, but naturally the more expensive the solution becomes due to the need for increased communication. Such natural trade-offs between communication cost and the quality of decision-making have been studied in the contexts of communication complexity [9] and concurrency control [8] as well.
Papadimitriou and Yannakakis [11] examined the special case of such distributed optimization problems where there are just three agents. More specifically, they focused on a natural load balancing problem (see, e.g., [4, 7, 13]), where each agent is presented with an input, and must decide on a binary output, representing one of two available "bins," each of capacity one; the input is assumed to be distributed uniformly in the unit interval [0, 1]. The load balancing property is modeled by requiring that no "overflow" occurs, namely that the inputs dropped into each "bin" do not together exceed its capacity. Papadimitriou and Yannakakis [11] pursued a comprehensive study of how the best possible probability, over the distribution of inputs, of "no overflow" depends on the amount of communication available to the agents. For each possible communication pattern, they discovered the corresponding optimal decision protocol to be unexpectedly sophisticated. The proof techniques of Papadimitriou and Yannakakis [11] were surprisingly complex, even for this seemingly simplest case, combining tools from nonlinear optimization with geometric and combinatorial arguments; these techniques have not been hoped to be conveniently extendible to instances of even this particular load balancing problem whose size exceeds three.
In this work, we introduce a novel combinatorial framework in order to enhance the study of general instances of distributed optimization problems of the kind considered by Papadimitriou and Yannakakis [11]. More specifically, we proceed to the general case of n agents, with each still receiving an input uniformly distributed over [0, 1] and having to choose one out of two "bins"; however, in order to render the problem interesting, we make the technical assumption that the capacity of each "bin" is equal to β, for some real number β possibly greater than one, so as to compensate for the increase in the number of players. Papadimitriou and Yannakakis [11] focused on a specific kind of decision protocols by which each agent chooses a "bin" by comparing a "weighted average" of the inputs it "sees" against some "threshold" value; in contrast, our framework allows for the consideration of general decision protocols by which each agent decides by using any (computable) function of the inputs it "sees".

Our starting point is a combinatorial result that provides an explicit inclusion-exclusion formula [17, Section 2.1] for calculating the volume of any particular geometric polytope, in any given dimension, of some specific form (Proposition 2.2). Roughly speaking, such polytopes are the intersection of a simplex in the positive quadrant with an orthogonal parallelepiped. An immediate implication of this result is a pair of inclusion-exclusion formulas for calculating the (conditional) probability of "no overflow" for a single "bin," as a function of the capacity and the number of inputs that are dropped into the "bin" (Lemmas 2.4 and 2.7).
In this work, we focus on the case where there is no communication among the agents, which we completely settle for general n. Since communication comes at a cost, which it would be desirable to avoid, it is both natural and interesting to choose the case of no communication as an initial "testbed". We consider both oblivious algorithms, where players do not "look" at their inputs, and non-oblivious algorithms, where they do. For each case, we are interested in optimal algorithms.
We first consider oblivious algorithms. Our first major result is a combinatorial expression in the form of an inclusion-exclusion formula for the probability that "no overflow" occurs for either of the "bins" (Theorem 4.1). This formula incorporates a suitable inclusion-exclusion summation, over all possible input vectors, of the probabilities, induced by any particular decision algorithm, on the space of all possible decision vectors, as a function of the corresponding input vector. The coefficients of these probabilities in the summation are independent of any specific parameters of the algorithm, while they do depend on the input vector. A first implication of this expression is the reduction of the general problem of computing the probability that "no overflow" occurs to the problem of computing, given a particular decision algorithm, the probability distribution of the binary output vectors it yields. Most significantly, this expression contributes a methodology for the design of optimal decision algorithms "compatible" with any specific pattern of communication, and not just for the case of no communication that we particularly examine: one simply leaves free those parameters of the decision algorithm that correspond to the possible communications, and computes values for these parameters that maximize the combinatorial expression as a function of these parameters. This is done by solving a certain system of optimality conditions (Corollary 4.2).
We demonstrate that our methodology for designing optimal algorithms for distributed decision-making is both effective and useful by applying it to the special case of no communication that we consider. We manage to settle this case completely for oblivious algorithms. We exploit the underlying "symmetry" with respect to different agents in order to simplify the optimality conditions (by observing that all parameters satisfying them must be equal). This simplification reveals a beautiful combinatorial structure; more specifically, we discover that each optimality condition eventually amounts to zeroing a particular "symmetric" polynomial of a single variable. In turn, we explicitly solve these conditions to show that the best possible oblivious algorithm for the case of no communication is the very simple one by which each agent uses 1/2 as its "threshold" value; given that the optimal (non-oblivious) algorithms presented by Papadimitriou and Yannakakis for the special case where n = 3 are somehow unexpectedly sophisticated, it is perhaps surprising that such a simple oblivious algorithm is indeed optimal for all values of n.
We next turn to non-oblivious algorithms, still for the case of no communication. In that case, we demonstrate that the optimality conditions do not admit a "constant" solution. Through a more sophisticated analysis, we are able to compute more complex expressions for the optimality conditions, which still allow exploitation of "symmetry". We consider the particular instances of the optimality conditions where n = 3 and β = 1 (considered by Papadimitriou and Yannakakis [11]), and n = 4 and β = 4/3. We discover that the optimal algorithms are different in each of these cases. However, they achieve larger winning probabilities than their oblivious counterparts. This shows that the improved performance of non-oblivious algorithms comes at the cost of sacrificing uniformity.
We believe that our work opens up the way for the design and analysis of algorithms for
general instances of the problem of distributed decision-making in the presence of incomplete
information. We envision that algorithms that are more complex, general communication
patterns, and more realistic assumptions on the distribution of inputs, can all be treated in
our combinatorial framework to yield optimal algorithms for distributed decision-making for
these cases as well.
The rest of this paper is organized as follows. Section 2 presents a combinatorial framework for distributed decision-making. Formal definitions for the model and the problem are included in Section 3. Oblivious and non-oblivious algorithms are treated in Sections 4 and 5, respectively. We conclude, in Section 6, with a discussion of our results and some open problems.
2 Combinatorial Framework
In this section, we introduce a combinatorial framework for our study of distributed decision-making. Section 2.1 offers some geometrical preliminaries; built on top of them are two probabilistic lemmas presented in Section 2.2. Finally, Section 2.3 summarizes some notation.
2.1 Geometrical Preliminaries

Throughout, we use $\Re_+$ to denote the set of non-negative real numbers. A polyhedron is the solution set of a finite system of linear inequalities (cf. [3, Section 6.2]). A polyhedron is bounded if it contains no infinite half-line. A bounded polyhedron is called a polytope. We will be interested in computing volumes of polytopes that have some specific form. For any polytope $\Lambda$, denote $\mathrm{Vol}(\Lambda)$ the volume of $\Lambda$.

Fix any integer $m \ge 2$. Consider any pair of (real) vectors $\alpha = \langle \alpha_1, \alpha_2, \ldots, \alpha_m \rangle^T$ and $\delta = \langle \delta_1, \delta_2, \ldots, \delta_m \rangle^T$, where for any $l$, $1 \le l \le m$, $0 < \delta_l \le \alpha_l < \infty$. These define the $m$-dimensional orthogonal simplex
$$\Delta^{(m)}(\alpha) = \Big\{ \langle x_1, x_2, \ldots, x_m \rangle^T \in \Re_+^m \;\Big|\; \sum_{l=1}^{m} \frac{x_l}{\alpha_l} \le 1 \Big\}$$
with orthogonal sides $\alpha_1, \alpha_2, \ldots, \alpha_m$, and the $m$-dimensional orthogonal parallelepiped
$$\Pi^{(m)}(\delta) = [0, \delta_1] \times [0, \delta_2] \times \cdots \times [0, \delta_m]$$
with orthogonal sides $\delta_1, \delta_2, \ldots, \delta_m$, respectively.

Both $\Delta^{(m)}(\alpha)$ and $\Pi^{(m)}(\delta)$ are polytopes. The following lemma recalls simple expressions for the volumes of $\Delta^{(m)}(\alpha)$ and $\Pi^{(m)}(\delta)$.

Lemma 2.1 (Volumes of $\Delta^{(m)}(\alpha)$ and $\Pi^{(m)}(\delta)$)
1. $\mathrm{Vol}(\Delta^{(m)}(\alpha)) = \frac{1}{m!} \prod_{l=1}^{m} \alpha_l$.
2. $\mathrm{Vol}(\Pi^{(m)}(\delta)) = \prod_{l=1}^{m} \delta_l$.

A cornerstone for our analysis is a particular polytope that has some specific form. Define the $m$-dimensional polytope
$$\Gamma^{(m)}(\alpha, \delta) = \Delta^{(m)}(\alpha) \cap \Pi^{(m)}(\delta);$$
thus, $\Gamma^{(m)}(\alpha, \delta)$ is the intersection of the $m$-dimensional orthogonal simplex $\Delta^{(m)}(\alpha)$ and the $m$-dimensional orthogonal parallelepiped $\Pi^{(m)}(\delta)$, so that
$$\Gamma^{(m)}(\alpha, \delta) = \Big\{ \langle x_1, x_2, \ldots, x_m \rangle^T \in [0, \delta_1] \times [0, \delta_2] \times \cdots \times [0, \delta_m] \;\Big|\; \sum_{l=1}^{m} \frac{x_l}{\alpha_l} \le 1 \Big\}.$$

We provide an explicit inclusion-exclusion formula (see, e.g., [14], or [15, Chapter III] for a textbook discussion) for the volume of $\Gamma^{(m)}(\alpha, \delta)$.

Proposition 2.2 (Volume of $\Gamma^{(m)}(\alpha, \delta)$)
$$\mathrm{Vol}(\Gamma^{(m)}(\alpha, \delta)) = \frac{1}{m!} \prod_{l=1}^{m} \alpha_l \sum_{i=0}^{m} (-1)^i \sum_{\substack{I \subseteq \{1, 2, \ldots, m\} \\ |I| = i \\ \sum_{l \in I} \delta_l / \alpha_l < 1}} \Big( 1 - \sum_{l \in I} \frac{\delta_l}{\alpha_l} \Big)^m.$$

Proof: Clearly,
$$\mathrm{Vol}(\Gamma^{(m)}(\alpha, \delta)) = \mathrm{Vol}\Big(\Big\{ \langle x_1, \ldots, x_m \rangle^T \in [0, \delta_1] \times \cdots \times [0, \delta_m] \;\Big|\; \sum_{l=1}^{m} \frac{x_l}{\alpha_l} \le 1 \Big\}\Big)$$
(by definition of $\Gamma^{(m)}(\alpha, \delta)$)
$$= \mathrm{Vol}\Big(\Big\{ x \in \Re_+^m \;\Big|\; \sum_{l=1}^{m} \frac{x_l}{\alpha_l} \le 1 \Big\}\Big) - \sum_{i=1}^{m} \mathrm{Vol}\Big(\Big\{ x \in \Re_+^m \;\Big|\; \sum_{l=1}^{m} \frac{x_l}{\alpha_l} \le 1 \text{ and } x_i \notin [0, \delta_i] \Big\}\Big) + \sum_{1 \le i < j \le m} \mathrm{Vol}\Big(\Big\{ x \in \Re_+^m \;\Big|\; \sum_{l=1}^{m} \frac{x_l}{\alpha_l} \le 1 \text{ and } \bigwedge_{l \in \{i, j\}} x_l \notin [0, \delta_l] \Big\}\Big) - \cdots + (-1)^m \mathrm{Vol}\Big(\Big\{ x \in \Re_+^m \;\Big|\; \sum_{l=1}^{m} \frac{x_l}{\alpha_l} \le 1 \text{ and } \bigwedge_{1 \le l \le m} x_l \notin [0, \delta_l] \Big\}\Big)$$
(by the principle of inclusion-exclusion)
$$= \frac{1}{m!} \prod_{l=1}^{m} \alpha_l + \sum_{i=1}^{m} (-1)^i \sum_{\substack{I \subseteq \{1, 2, \ldots, m\} \\ |I| = i}} \mathrm{Vol}\Big(\Big\{ x \in \Re_+^m \;\Big|\; \sum_{l=1}^{m} \frac{x_l}{\alpha_l} \le 1 \text{ and } \bigwedge_{l \in I} x_l \notin [0, \delta_l] \Big\}\Big)$$
(by Lemma 2.1(1)).

We continue to calculate the polytope volumes involved in the last summation.

Lemma 2.3 For any non-empty set $I \subseteq \{1, 2, \ldots, m\}$,
$$\mathrm{Vol}\Big(\Big\{ x \in \Re_+^m \;\Big|\; \sum_{l=1}^{m} \frac{x_l}{\alpha_l} \le 1 \text{ and } \bigwedge_{l \in I} x_l \notin [0, \delta_l] \Big\}\Big) = \frac{1}{m!} \prod_{l=1}^{m} \alpha_l \Big( 1 - \sum_{l \in I} \frac{\delta_l}{\alpha_l} \Big)^m$$
if $1 > \sum_{l \in I} \delta_l / \alpha_l$, and $0$ otherwise.

Proof: Clearly, the hyperplane $\sum_{l=1}^{m} x_l / \alpha_l = 1$ and the hyperplanes $x_l = \delta_l$, $l \in I$, intersect within $\Re_+^m$ if and only if $\sum_{l \in I} \delta_l / \alpha_l \le 1$. Since the polytope
$$\Big\{ x \in \Re_+^m \;\Big|\; \sum_{l=1}^{m} \frac{x_l}{\alpha_l} \le 1 \text{ and } \bigwedge_{l \in I} x_l \notin [0, \delta_l] \Big\}$$
is the intersection of the half-spaces $\sum_{l=1}^{m} x_l / \alpha_l \le 1$, $x_l \ge \delta_l$ for $l \in I$, and $x_l \ge 0$ for $1 \le l \le m$, it follows that this polytope is non-null if and only if $\sum_{l \in I} \delta_l / \alpha_l < 1$.

In that case, this polytope is an orthogonal simplex: its orthogonal faces lie on the hyperplanes $x_l = \delta_l$ for $l \in I$ and $x_l = 0$ for $l \notin I$, while its non-orthogonal face lies on the hyperplane $\sum_{l=1}^{m} x_l / \alpha_l = 1$. Thus, this polytope is similar to the original orthogonal simplex $\Delta^{(m)}(\alpha)$, and the similarity ratio is equal to $1 - \sum_{l \in I} \delta_l \alpha_l^{-1}$, so that volumes scale by the $m$-th power of this ratio. Thus,
$$\mathrm{Vol}\Big(\Big\{ x \in \Re_+^m \;\Big|\; \sum_{l=1}^{m} \frac{x_l}{\alpha_l} \le 1 \text{ and } \bigwedge_{l \in I} x_l \notin [0, \delta_l] \Big\}\Big) = \Big( 1 - \sum_{l \in I} \frac{\delta_l}{\alpha_l} \Big)^m \mathrm{Vol}(\Delta^{(m)}(\alpha)) = \frac{1}{m!} \prod_{l=1}^{m} \alpha_l \Big( 1 - \sum_{l \in I} \frac{\delta_l}{\alpha_l} \Big)^m$$
(by Lemma 2.1(1)), as needed.

Thus, by Lemma 2.3,
$$\mathrm{Vol}(\Gamma^{(m)}(\alpha, \delta)) = \frac{1}{m!} \prod_{l=1}^{m} \alpha_l + \sum_{i=1}^{m} (-1)^i \sum_{\substack{I \subseteq \{1, 2, \ldots, m\} \\ |I| = i \\ \sum_{l \in I} \delta_l / \alpha_l < 1}} \frac{1}{m!} \prod_{l=1}^{m} \alpha_l \Big( 1 - \sum_{l \in I} \frac{\delta_l}{\alpha_l} \Big)^m = \frac{1}{m!} \prod_{l=1}^{m} \alpha_l \sum_{i=0}^{m} (-1)^i \sum_{\substack{I \subseteq \{1, 2, \ldots, m\} \\ |I| = i \\ \sum_{l \in I} \delta_l / \alpha_l < 1}} \Big( 1 - \sum_{l \in I} \frac{\delta_l}{\alpha_l} \Big)^m,$$
as needed.
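To make Proposition 2.2 concrete, the following minimal sketch (ours, not part of the paper; all identifiers are chosen for illustration) evaluates the inclusion-exclusion volume formula and cross-checks it against a Monte Carlo estimate of the same volume.

```python
# Numerical check of Proposition 2.2: the volume of the polytope
# {x : 0 <= x_l <= delta_l, sum_l x_l/alpha_l <= 1}.
import itertools
import math
import random

def vol_formula(alpha, delta):
    """Proposition 2.2: inclusion-exclusion expression for the volume."""
    m = len(alpha)
    total = 0.0
    for I in itertools.chain.from_iterable(
            itertools.combinations(range(m), i) for i in range(m + 1)):
        s = sum(delta[l] / alpha[l] for l in I)
        if s < 1.0:                      # only subsets with sum < 1 contribute
            total += (-1) ** len(I) * (1.0 - s) ** m
    return math.prod(alpha) / math.factorial(m) * total

def vol_monte_carlo(alpha, delta, trials=200_000):
    """Estimate the same volume by sampling the box [0,delta_1] x ... x [0,delta_m]."""
    m, hits = len(alpha), 0
    for _ in range(trials):
        x = [random.uniform(0.0, d) for d in delta]
        if sum(xl / al for xl, al in zip(x, alpha)) <= 1.0:
            hits += 1
    return math.prod(delta) * hits / trials

alpha, delta = [1.0, 1.5, 2.0], [0.6, 0.7, 0.9]
print(vol_formula(alpha, delta), vol_monte_carlo(alpha, delta))  # should agree
```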
2.2 Probabilistic Tools

In this section, we present two consequences of Proposition 2.2. Each of these two claims determines the probability that a certain sum of independent, uniformly distributed random variables does not exceed a given threshold value; the probability is a function of the threshold value and the distribution intervals of the random variables. The proofs of these claims exploit the reduction of probability to a volume ratio that is possible for uniform random variables, and appeal to Proposition 2.2. (Both of these claims will be used in our later proofs.)

We first recall some basic notions from probability theory (see, e.g., [15, 16]). For a (continuous) random variable $x$, denote $F_x(t)$ the cumulative distribution function of $x$; that is, $F_x(t) = \mathsf{P}(x \le t)$ is the probability of the event $x \le t$. The density function of the random variable $x$ is given by $f_x(t) = dF_x(t)/dt$.

Lemma 2.4 Assume that for each $i$, $1 \le i \le m$, $x_i$ is uniformly distributed over $[0, \delta_i]$. Then, for any parameter $t > 0$,
$$F_{\sum_{i=1}^{m} x_i}(t) = \frac{1}{m! \prod_{l=1}^{m} \delta_l} \sum_{i=0}^{m} (-1)^i \sum_{\substack{I \subseteq \{1, 2, \ldots, m\} \\ |I| = i \\ \sum_{l \in I} \delta_l < t}} \Big( t - \sum_{l \in I} \delta_l \Big)^m.$$

Proof: Since each random variable $x_i$, $1 \le i \le m$, is uniformly distributed over $[0, \delta_i]$, the probability $\mathsf{P}(\sum_{i=1}^{m} x_i \le t)$ is the ratio of the volume of the polytope corresponding to the inequality $\sum_{i=1}^{m} x_i \le t$ (actually, its portion falling in the product domain) to the volume of the product domain $[0, \delta_1] \times [0, \delta_2] \times \cdots \times [0, \delta_m]$ of the variables $x_1, x_2, \ldots, x_m$. Thus, writing $t\mathbf{1} = \langle t, t, \ldots, t \rangle^T$,
$$F_{\sum_{l=1}^{m} x_l}(t) = \mathsf{P}\Big( \sum_{l=1}^{m} x_l \le t \Big) = \frac{\mathrm{Vol}\big(\big\{ \langle x_1, \ldots, x_m \rangle^T \in [0, \delta_1] \times \cdots \times [0, \delta_m] \mid \sum_{l=1}^{m} x_l \le t \big\}\big)}{\mathrm{Vol}\big([0, \delta_1] \times \cdots \times [0, \delta_m]\big)} = \frac{\mathrm{Vol}(\Gamma^{(m)}(t\mathbf{1}, \delta))}{\mathrm{Vol}(\Pi^{(m)}(\delta))}$$
(by the definitions of $\Gamma^{(m)}(\alpha, \delta)$ and $\Pi^{(m)}(\delta)$)
$$= \frac{1}{\prod_{l=1}^{m} \delta_l} \cdot \frac{1}{m!}\, t^m \sum_{i=0}^{m} (-1)^i \sum_{\substack{I \subseteq \{1, 2, \ldots, m\} \\ |I| = i \\ \sum_{l \in I} \delta_l / t < 1}} \Big( 1 - \sum_{l \in I} \frac{\delta_l}{t} \Big)^m$$
(by Lemma 2.1(2) and Proposition 2.2)
$$= \frac{1}{m! \prod_{l=1}^{m} \delta_l} \sum_{i=0}^{m} (-1)^i \sum_{\substack{I \subseteq \{1, 2, \ldots, m\} \\ |I| = i \\ \sum_{l \in I} \delta_l < t}} \Big( t - \sum_{l \in I} \delta_l \Big)^m,$$
as needed.

Lemma 2.4 allows for the computation of the density function of the sum of $m$ independent, uniformly distributed random variables.

Lemma 2.5 Assume that for each $i$, $1 \le i \le m$, $x_i$ is uniformly distributed over $[0, \delta_i]$. Then, for any parameter $t > 0$,
$$f_{\sum_{i=1}^{m} x_i}(t) = \frac{1}{(m-1)! \prod_{l=1}^{m} \delta_l} \sum_{i=0}^{m} (-1)^i \sum_{\substack{I \subseteq \{1, 2, \ldots, m\} \\ |I| = i \\ t > \sum_{l \in I} \delta_l}} \Big( t - \sum_{l \in I} \delta_l \Big)^{m-1}.$$

Proof: Clearly,
$$f_{\sum_{i=1}^{m} x_i}(t) = \frac{dF_{\sum_{i=1}^{m} x_i}(t)}{dt} = \frac{d}{dt} \Bigg[ \frac{1}{m! \prod_{l=1}^{m} \delta_l} \sum_{i=0}^{m} (-1)^i \sum_{\substack{I \subseteq \{1, 2, \ldots, m\} \\ |I| = i \\ t > \sum_{l \in I} \delta_l}} \Big( t - \sum_{l \in I} \delta_l \Big)^m \Bigg] = \frac{1}{m! \prod_{l=1}^{m} \delta_l} \sum_{i=0}^{m} (-1)^i \sum_{\substack{I \subseteq \{1, 2, \ldots, m\} \\ |I| = i \\ t > \sum_{l \in I} \delta_l}} m \Big( t - \sum_{l \in I} \delta_l \Big)^{m-1} = \frac{1}{(m-1)! \prod_{l=1}^{m} \delta_l} \sum_{i=0}^{m} (-1)^i \sum_{\substack{I \subseteq \{1, 2, \ldots, m\} \\ |I| = i \\ t > \sum_{l \in I} \delta_l}} \Big( t - \sum_{l \in I} \delta_l \Big)^{m-1},$$
as needed.

Lemma 2.5 is of independent interest, since it provides an answer to a Research Problem of Rota [16, Research Problem 10, p. xviii & 4/15/98.14, p. 314]:

"Find a nice formula for the density of n independent, uniformly distributed random variables."

We continue with an immediate implication of Lemma 2.4 that concerns the special case where for each $i$, $1 \le i \le m$, $\delta_i = 1$.

Corollary 2.6 Assume that for each $i$, $1 \le i \le m$, $x_i$ is uniformly distributed over $[0, 1]$. Then, for any parameter $t > 0$,
$$F_{\sum_{i=1}^{m} x_i}(t) = \frac{1}{m!} \sum_{\substack{0 \le i \le m \\ i < t}} (-1)^i \binom{m}{i} (t - i)^m.$$

Proof: By Lemma 2.4,
$$F_{\sum_{i=1}^{m} x_i}(t) = \frac{1}{m! \prod_{l=1}^{m} 1} \sum_{i=0}^{m} (-1)^i \sum_{\substack{I \subseteq \{1, 2, \ldots, m\} \\ |I| = i \\ \sum_{l \in I} 1 < t}} \Big( t - \sum_{l \in I} 1 \Big)^m = \frac{1}{m!} \sum_{i=0}^{m} (-1)^i \sum_{\substack{I \subseteq \{1, 2, \ldots, m\} \\ |I| = i \\ |I| < t}} (t - |I|)^m = \frac{1}{m!} \sum_{\substack{0 \le i \le m \\ i < t}} (-1)^i \binom{m}{i} (t - i)^m,$$
as needed.
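The following small sketch (ours, not from the paper; the function names are our own) evaluates the CDF of Lemma 2.4 and its special case, Corollary 2.6, and checks them against simulation. For $\delta_i = 1$ the formula is the classical Irwin–Hall distribution.

```python
# Check of Lemma 2.4 / Corollary 2.6 against a direct simulation.
import itertools, math, random

def cdf_sum_uniform(delta, t):
    """P(sum x_i <= t) for independent x_i ~ U[0, delta_i]  (Lemma 2.4)."""
    m = len(delta)
    total = 0.0
    for I in itertools.chain.from_iterable(
            itertools.combinations(range(m), i) for i in range(m + 1)):
        s = sum(delta[l] for l in I)
        if s < t:
            total += (-1) ** len(I) * (t - s) ** m
    return total / (math.factorial(m) * math.prod(delta))

def cdf_irwin_hall(m, t):
    """Corollary 2.6: the special case delta_i = 1 for all i."""
    return sum((-1) ** i * math.comb(m, i) * (t - i) ** m
               for i in range(m + 1) if i < t) / math.factorial(m)

delta, t = [1.0, 1.0, 1.0], 1.0
emp = sum(sum(random.uniform(0, d) for d in delta) <= t
          for _ in range(200_000)) / 200_000
print(cdf_sum_uniform(delta, t), cdf_irwin_hall(3, t), emp)   # all ~ 1/6
```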
We continue to show:

Lemma 2.7 Assume that for each $i$, $1 \le i \le m$, $x_i$ is uniformly distributed over $[\delta_i, 1]$. Then, for any parameter $t > 0$,
$$F_{\sum_{i=1}^{m} x_i}(t) = 1 - \frac{1}{m! \prod_{l=1}^{m} (1 - \delta_l)} \sum_{i=0}^{m} (-1)^i \sum_{\substack{I \subseteq \{1, 2, \ldots, m\} \\ |I| = i \\ |I| < m - t + \sum_{l \in I} \delta_l}} \Big( m - t - |I| + \sum_{l \in I} \delta_l \Big)^m.$$

Proof: Clearly, for each $i$, $1 \le i \le m$, the random variable $x'_i = 1 - x_i$ is uniformly distributed over $[0, 1 - \delta_i]$. Thus,
$$F_{\sum_{i=1}^{m} x_i}(t) = \mathsf{P}\Big( \sum_{i=1}^{m} x_i \le t \Big) = \mathsf{P}\Big( -\sum_{i=1}^{m} x_i \ge -t \Big) = \mathsf{P}\Big( m - \sum_{i=1}^{m} x_i \ge m - t \Big) = \mathsf{P}\Big( \sum_{i=1}^{m} (1 - x_i) \ge m - t \Big) = 1 - \mathsf{P}\Big( \sum_{i=1}^{m} (1 - x_i) \le m - t \Big) = 1 - F_{\sum_{i=1}^{m} (1 - x_i)}(m - t)$$
$$= 1 - \frac{1}{m! \prod_{l=1}^{m} (1 - \delta_l)} \sum_{i=0}^{m} (-1)^i \sum_{\substack{I \subseteq \{1, 2, \ldots, m\} \\ |I| = i \\ \sum_{l \in I} (1 - \delta_l) < m - t}} \Big( m - t - \sum_{l \in I} (1 - \delta_l) \Big)^m$$
(by Lemma 2.4, applied to the variables $x'_i$ with intervals $[0, 1 - \delta_i]$)
$$= 1 - \frac{1}{m! \prod_{l=1}^{m} (1 - \delta_l)} \sum_{i=0}^{m} (-1)^i \sum_{\substack{I \subseteq \{1, 2, \ldots, m\} \\ |I| = i \\ |I| < m - t + \sum_{l \in I} \delta_l}} \Big( m - t - |I| + \sum_{l \in I} \delta_l \Big)^m,$$
as needed.
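A quick numerical sanity check of Lemma 2.7 (our sketch, with identifiers chosen for illustration), comparing the reflected inclusion-exclusion formula against a direct simulation:

```python
# Check of Lemma 2.7: CDF of a sum of independent x_i ~ U[delta_i, 1].
import itertools, math, random

def cdf_sum_uniform_upper(delta, t):
    """P(sum x_i <= t) for independent x_i ~ U[delta_i, 1]  (Lemma 2.7)."""
    m = len(delta)
    total = 0.0
    for I in itertools.chain.from_iterable(
            itertools.combinations(range(m), i) for i in range(m + 1)):
        s = sum(delta[l] for l in I)
        if len(I) < m - t + s:           # condition |I| < m - t + sum_{l in I} delta_l
            total += (-1) ** len(I) * (m - t - len(I) + s) ** m
    denom = math.factorial(m) * math.prod(1.0 - d for d in delta)
    return 1.0 - total / denom

delta, t = [0.2, 0.4, 0.5], 1.8
emp = sum(sum(random.uniform(d, 1.0) for d in delta) <= t
          for _ in range(200_000)) / 200_000
print(cdf_sum_uniform_upper(delta, t), emp)   # both ~ 0.232
```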
2.3 Notation

Throughout, for any bit $b \in \{0, 1\}$ and real number $\pi \in [0, 1]$, denote $\bar b$ the complement of $b$, and $\pi(b)$ to be $\pi$ if $b = 0$, and $1 - \pi$ if $b = 1$. For any binary vector $b$, denote $|b|$ the number of entries of $b$ that are 1.
3 Model

Our model is based on the one of Papadimitriou and Yannakakis [11].

3.1 Distributed Decision-Making

We consider a collection of $n$ distributed entities $P_1, P_2, \ldots, P_n$, called players, where $n \ge 2$; $n$ is the size of the distributed system. Each player $P_i$ receives an input $x_i$, which is the value of a random variable distributed uniformly over $[0, 1]$; denote $x = \langle x_1, x_2, \ldots, x_n \rangle^T$ the input vector. Associated with each player $P_i$ is a (local) decision-making algorithm $A_i$, that may be either deterministic or randomized, and "maps" the input $x_i$ of player $P_i$ and the inputs of other players that are "known" to player $P_i$ to $P_i$'s output $y_i$. (A player $P_j$'s input $x_j$, $j \ne i$, is not "known" to player $P_i$ if $A_i$ is independent of $x_j$.) A distributed decision-making algorithm is a collection $A = \langle A_1, A_2, \ldots, A_n \rangle$ of (local) decision algorithms, one for each player.

Formally, a deterministic decision-making algorithm is a function $A_i : [0, 1]^n \to \{0, 1\}$ that maps the input vector $x$ to $P_i$'s (boolean) output $y_i = A_i(x)$; denote
$$y_A(x) = \langle A_1(x), A_2(x), \ldots, A_n(x) \rangle^T$$
the output vector of $A$ on input vector $x$. A randomized decision-making algorithm is a function $A_i$ which assigns, for each input vector $x$, a probability distribution on $\{0, 1\}$. We consider that $A_i(x)$ is the probability that player $P_i$ decides on 0.

For each $b \in \{0, 1\}$, define
$$\#_b = \sum_{i : A_i(x) = b} x_i;$$
thus, $\#_b$ is the sum of the inputs of the players that decide on $b$, a random variable induced by the distribution of the inputs (and the coin tosses of the algorithm, if the algorithm is randomized). For each parameter $t > 0$, we are interested in the event that neither $\#_0$ nor $\#_1$ exceeds $t$; denote
$$P_A(t) = \mathsf{P}(\#_0 \le t \text{ and } \#_1 \le t)$$
the probability of this event, taken over all input vectors $x$ (and coin tosses of the algorithm $A$, in case $A$ is randomized). Call $P_A(t)$ the winning probability of algorithm $A$. We wish to maximize $P_A(t)$ over all algorithms $A$; any maximizing algorithm will be called an optimal algorithm.

A set of algorithms is optimally uniform, or uniform for short, if it includes a particular algorithm that is optimal for all values of the size $n$.

3.2 The No Communication Case

We focus on the case where there is no communication among the players. We model this by assuming that for each $i$, $1 \le i \le n$, $A_i = A_i(x_i)$; thus, $A_i$ does not depend on the input of any player other than $P_i$. From this point on, all of our discussion will refer to the no communication case.

We distinguish between oblivious and non-oblivious distributed decision-making algorithms. A distributed decision-making algorithm is oblivious if for each player $P_i$, $A_i$ does not depend on $P_i$'s input $x_i$. Thus, an oblivious algorithm is a collection $\langle A_1, A_2, \ldots, A_n \rangle$ of probability distributions on $\{0, 1\}$. We identify an oblivious algorithm $A$ with a probability vector $\pi$ such that for each $i$, $\pi_i = \mathsf{P}(y_i = 0)$. Thus, for any vector $b \in \{0, 1\}^n$, $\mathsf{P}(y_A = b) = \prod_{i=1}^{n} \pi_i(b_i)$.

A distributed decision-making algorithm is non-oblivious if it is not oblivious. A (deterministic) non-oblivious algorithm is single-threshold if for each player $P_i$, $A_i$ is a single-threshold function; that is,
$$A_i(x_i) = \begin{cases} 0, & x_i \le a_i, \\ 1, & x_i > a_i, \end{cases}$$
where $0 \le a_i < 1$.
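As a concrete illustration of the model (our sketch, not part of the paper), the following routine simulates the no-communication game for a single-threshold algorithm and estimates its winning probability empirically; the example threshold 0.622 anticipates the value derived later in Section 5.2.1.

```python
# Monte Carlo estimate of the winning probability P_A(t) for a
# single-threshold algorithm with per-player thresholds a_i.
import random

def estimate_winning_probability(thresholds, t, trials=200_000):
    wins = 0
    for _ in range(trials):
        sum0 = sum1 = 0.0
        for a in thresholds:
            x = random.random()          # player's private input ~ U[0,1]
            if x <= a:                   # decision y_i = 0: drop x into bin 0
                sum0 += x
            else:                        # decision y_i = 1: drop x into bin 1
                sum1 += x
        wins += (sum0 <= t and sum1 <= t)
    return wins / trials

# Example: three players, capacity beta = 1, common threshold 0.622.
print(estimate_winning_probability([0.622] * 3, 1.0))   # roughly 0.545
```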
4 Oblivious Algorithms

In this section, we present our results for oblivious algorithms. A combinatorial expression for the winning probability of any oblivious algorithm and corresponding optimality conditions are provided in Section 4.1; Section 4.2 uses these conditions to derive the optimal oblivious algorithm.
4.1 The Winning Probability and Optimality Conditions

We show:

Theorem 4.1 Assume that $A$ is any oblivious algorithm. Then,
$$P_A(t) = \sum_{b \in \{0,1\}^n} \Bigg[ \frac{1}{|b|!} \sum_{\substack{0 \le i \le |b| \\ i < t}} (-1)^i \binom{|b|}{i} (t - i)^{|b|} \Bigg] \Bigg[ \frac{1}{(n - |b|)!} \sum_{\substack{0 \le i \le n - |b| \\ i < t}} (-1)^i \binom{n - |b|}{i} (t - i)^{n - |b|} \Bigg] \prod_{i=1}^{n} \pi_i(b_i).$$

Proof: Clearly,
$$P_A(t) = \mathsf{P}(\#_0 \le t \text{ and } \#_1 \le t) = \sum_{b \in \{0,1\}^n} \mathsf{P}(\#_0 \le t \text{ and } \#_1 \le t \mid y_A(x) = b)\, \mathsf{P}_A(y_A(x) = b)$$
(by the Conditional Law of Alternatives). Since $A$ is a randomized oblivious algorithm, $y_A(x)$ does not depend on $x$, so that $\mathsf{P}_A(y_A(x) = b) = \mathsf{P}_A(y_A = b) = \prod_{i=1}^{n} \pi_i(b_i)$. Thus,
$$P_A(t) = \sum_{b \in \{0,1\}^n} \mathsf{P}(\#_0 \le t \text{ and } \#_1 \le t \mid y_A = b)\, \mathsf{P}_A(y_A = b) = \sum_{b \in \{0,1\}^n} \mathsf{P}\Big( \sum_{i=1}^{n} \bar b_i x_i \le t \text{ and } \sum_{i=1}^{n} b_i x_i \le t \Big)\, \mathsf{P}_A(y_A = b).$$
Notice that the sums $\sum_{i=1}^{n} \bar b_i x_i$ and $\sum_{i=1}^{n} b_i x_i$ are independent random variables, since the random variables $x_i$, $1 \le i \le n$, are independent and none of them appears in both $\sum_{i=1}^{n} \bar b_i x_i$ and $\sum_{i=1}^{n} b_i x_i$. Thus,
$$\mathsf{P}\Big( \sum_{i=1}^{n} \bar b_i x_i \le t \text{ and } \sum_{i=1}^{n} b_i x_i \le t \Big) = \mathsf{P}\Big( \sum_{i=1}^{n} \bar b_i x_i \le t \Big)\, \mathsf{P}\Big( \sum_{i=1}^{n} b_i x_i \le t \Big),$$
so that
$$P_A(t) = \sum_{b \in \{0,1\}^n} \mathsf{P}\Big( \sum_{i=1}^{n} \bar b_i x_i \le t \Big)\, \mathsf{P}\Big( \sum_{i=1}^{n} b_i x_i \le t \Big)\, \mathsf{P}_A(y_A = b).$$
Since all variables $x_i$, $1 \le i \le n$, are identically distributed, for any vector $b$, the sums $\sum_{i=1}^{n} b_i x_i$ and $\sum_{i=1}^{n} \bar b_i x_i$ are identically distributed with the sums $\sum_{i=1}^{|b|} x_i$ and $\sum_{i=1}^{n - |b|} x_i$, respectively. It follows that
$$P_A(t) = \sum_{b \in \{0,1\}^n} \mathsf{P}\Big( \sum_{i=1}^{|b|} x_i \le t \Big)\, \mathsf{P}\Big( \sum_{i=1}^{n - |b|} x_i \le t \Big)\, \mathsf{P}_A(y_A = b) = \sum_{b \in \{0,1\}^n} \Bigg[ \frac{1}{|b|!} \sum_{\substack{0 \le i \le |b| \\ i < t}} (-1)^i \binom{|b|}{i} (t - i)^{|b|} \Bigg] \Bigg[ \frac{1}{(n - |b|)!} \sum_{\substack{0 \le i \le n - |b| \\ i < t}} (-1)^i \binom{n - |b|}{i} (t - i)^{n - |b|} \Bigg] \prod_{i=1}^{n} \pi_i(b_i)$$
(by Corollary 2.6), as needed.
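The closed form of Theorem 4.1 is easy to evaluate directly. The following sketch (ours; all names are illustrative) computes it for a given probability vector $\pi$ and compares it with a simulation of the random decisions.

```python
# Theorem 4.1 evaluated exactly, versus a direct simulation.
import itertools, math, random

def irwin_hall_cdf(m, t):
    """P(sum of m independent U[0,1] variables <= t)  (Corollary 2.6)."""
    return sum((-1) ** i * math.comb(m, i) * (t - i) ** m
               for i in range(m + 1) if i < t) / math.factorial(m)

def winning_probability_oblivious(pi, t):
    """Sum over decision vectors b of F_{#ones}(t) * F_{#zeros}(t),
    weighted by the probability that the algorithm outputs b."""
    n = len(pi)
    total = 0.0
    for b in itertools.product((0, 1), repeat=n):
        k = sum(b)                              # |b| = number of ones
        weight = math.prod(p if bit == 0 else 1.0 - p for p, bit in zip(pi, b))
        total += irwin_hall_cdf(k, t) * irwin_hall_cdf(n - k, t) * weight
    return total

def simulate(pi, t, trials=200_000):
    wins = 0
    for _ in range(trials):
        s = [0.0, 0.0]
        for p in pi:
            s[random.random() >= p] += random.random()  # bin 0 w.p. p
        wins += (s[0] <= t and s[1] <= t)
    return wins / trials

pi, t = [0.5, 0.5, 0.5], 1.0
print(winning_probability_oblivious(pi, t), simulate(pi, t))   # ~ 5/12
```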
Notice that the winning probability is a function of the probability vector $\pi$ of an algorithm $A$. Thus, an optimal algorithm corresponds to a probability vector that maximizes the winning probability. Clearly, all partial derivatives with respect to the vector's entries must vanish at an extreme point (maximum or minimum) of the winning probability. Hence, Theorem 4.1 immediately implies necessary conditions for any optimal protocol.

Corollary 4.2 (Optimality conditions for oblivious algorithms) Assume that $A$ is an optimal, randomized oblivious algorithm. Then, for any index $k$, $1 \le k \le n$,
$$0 = \sum_{b \in \{0,1\}^n} \Bigg[ \frac{1}{|b|!} \sum_{\substack{0 \le i \le |b| \\ i < t}} (-1)^i \binom{|b|}{i} (t - i)^{|b|} \Bigg] \Bigg[ \frac{1}{(n - |b|)!} \sum_{\substack{0 \le i \le n - |b| \\ i < t}} (-1)^i \binom{n - |b|}{i} (t - i)^{n - |b|} \Bigg] \frac{\partial \pi_k(b_k)}{\partial \pi_k} \prod_{\substack{i = 1 \\ i \ne k}}^{n} \pi_i(b_i).$$

We remark that the (necessary) conditions for optimal oblivious algorithms determined in Corollary 4.2 amount to a system of $n$ multilinear equations in the probability vector $\pi$. In Section 4.2, we will explicitly solve this system and show that the solution indeed determines an optimal algorithm.
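The vanishing of the partial derivatives can be checked numerically; the following self-contained sketch (ours, for illustration) approximates each derivative at $\pi = \langle 1/2, \ldots, 1/2 \rangle$ by a central finite difference.

```python
# Corollary 4.2, numerically: all partial derivatives of the winning
# probability vanish at pi = (1/2, ..., 1/2).
import itertools, math

def F(m, t):   # Corollary 2.6: CDF of a sum of m U[0,1] variables
    return sum((-1) ** i * math.comb(m, i) * (t - i) ** m
               for i in range(m + 1) if i < t) / math.factorial(m)

def P(pi, t):  # Theorem 4.1
    n = len(pi)
    return sum(F(sum(b), t) * F(n - sum(b), t)
               * math.prod(p if bit == 0 else 1 - p for p, bit in zip(pi, b))
               for b in itertools.product((0, 1), repeat=n))

n, t, h = 3, 1.0, 1e-5
for k in range(n):
    up = [0.5] * n; up[k] += h
    dn = [0.5] * n; dn[k] -= h
    print((P(up, t) - P(dn, t)) / (2 * h))   # each ~ 0, as Corollary 4.2 requires
```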
4.2 Uniformity

We show that the optimal winning probability is achieved by the very simple uniform algorithm by which each player preassigns equal probability (1/2) to each of its choices.

Theorem 4.3 Assume that $A$ is an optimal oblivious algorithm. Then $\pi = \langle 1/2, 1/2, \ldots, 1/2 \rangle^T$ and
$$P_A(t) = \frac{1}{2^n} \sum_{b \in \{0,1\}^n} \frac{1}{|b|!} \sum_{\substack{0 \le i \le |b| \\ i < t}} (-1)^i \binom{|b|}{i} (t - i)^{|b|} \cdot \frac{1}{(n - |b|)!} \sum_{\substack{0 \le i \le n - |b| \\ i < t}} (-1)^i \binom{n - |b|}{i} (t - i)^{n - |b|}.$$

Proof: Take any optimal algorithm $A$. Fix any index $j$, $1 \le j \le n$. By Corollary 4.2,
$$0 = \sum_{b \in \{0,1\}^n} \lambda_t(|b|)\, \frac{\partial \pi_j(b_j)}{\partial \pi_j} \prod_{\substack{1 \le i \le n \\ i \ne j}} \pi_i(b_i),$$
where for any vector $b \in \{0,1\}^n$ and (real) parameter $t > 0$,
$$\lambda_t(|b|) = \frac{1}{|b|!} \sum_{\substack{0 \le i \le |b| \\ i < t}} (-1)^i \binom{|b|}{i} (t - i)^{|b|} \cdot \frac{1}{(n - |b|)!} \sum_{\substack{0 \le i \le n - |b| \\ i < t}} (-1)^i \binom{n - |b|}{i} (t - i)^{n - |b|}.$$
Clearly, $\partial \pi_j(b_j) / \partial \pi_j = 1$ if $b_j = 0$ and $-1$ if $b_j = 1$. It follows that
$$\sum_{\substack{b \in \{0,1\}^n \\ b_j = 0}} \lambda_t(|b|) \prod_{\substack{1 \le i \le n \\ i \ne j}} \pi_i(b_i) \; - \sum_{\substack{b \in \{0,1\}^n \\ b_j = 1}} \lambda_t(|b|) \prod_{\substack{1 \le i \le n \\ i \ne j}} \pi_i(b_i) = 0.$$

By definition of $\lambda_t(|b|)$, it immediately follows:

Lemma 4.4 For any vector $b \in \{0,1\}^n$ and (real) parameter $t > 0$, $\lambda_t(|b|) = \lambda_t(n - |b|)$.

We continue to show:

Lemma 4.5 $\pi_1 = \pi_2 = \ldots = \pi_n$.

Proof: Take any arbitrary indices $j$ and $k$, $1 \le j, k \le n$ and $j \ne k$. Clearly,
$$\sum_{\substack{b : b_j = 0}} \lambda_t(|b|) \prod_{i \ne j} \pi_i(b_i) - \sum_{\substack{b : b_j = 1}} \lambda_t(|b|) \prod_{i \ne j} \pi_i(b_i) = 0 \qquad \text{and} \qquad \sum_{\substack{b : b_k = 0}} \lambda_t(|b|) \prod_{i \ne k} \pi_i(b_i) - \sum_{\substack{b : b_k = 1}} \lambda_t(|b|) \prod_{i \ne k} \pi_i(b_i) = 0.$$
Splitting each sum of the first equation according to the value of $b_k$, and extracting the factor $\pi_k(b_k)$, the first equation implies that
$$\pi_k \Bigg( \sum_{\substack{b : b_j = 0 \\ b_k = 0}} \lambda_t(|b|) \prod_{i \ne j, k} \pi_i(b_i) - \sum_{\substack{b : b_j = 1 \\ b_k = 0}} \lambda_t(|b|) \prod_{i \ne j, k} \pi_i(b_i) \Bigg) + (1 - \pi_k) \Bigg( \sum_{\substack{b : b_j = 0 \\ b_k = 1}} \lambda_t(|b|) \prod_{i \ne j, k} \pi_i(b_i) - \sum_{\substack{b : b_j = 1 \\ b_k = 1}} \lambda_t(|b|) \prod_{i \ne j, k} \pi_i(b_i) \Bigg) = 0.$$
Similarly, the second equation implies that
$$\pi_j \Bigg( \sum_{\substack{b : b_k = 0 \\ b_j = 0}} \lambda_t(|b|) \prod_{i \ne j, k} \pi_i(b_i) - \sum_{\substack{b : b_k = 1 \\ b_j = 0}} \lambda_t(|b|) \prod_{i \ne j, k} \pi_i(b_i) \Bigg) + (1 - \pi_j) \Bigg( \sum_{\substack{b : b_k = 0 \\ b_j = 1}} \lambda_t(|b|) \prod_{i \ne j, k} \pi_i(b_i) - \sum_{\substack{b : b_k = 1 \\ b_j = 1}} \lambda_t(|b|) \prod_{i \ne j, k} \pi_i(b_i) \Bigg) = 0.$$
There is a one-to-one correspondence between vectors $b \in \{0,1\}^n$ with $b_j = 0$ and $b_k = 1$ and vectors $b \in \{0,1\}^n$ with $b_k = 0$ and $b_j = 1$: two such vectors $b$ and $b'$ correspond to each other if they have all other entries identical. Then, clearly, $|b| = |b'|$, and for each index $i$, $i \ne j, k$, $\pi_i(b_i) = \pi_i(b'_i)$. This implies that
$$\sum_{\substack{b : b_j = 0 \\ b_k = 1}} \lambda_t(|b|) \prod_{i \ne j, k} \pi_i(b_i) = \sum_{\substack{b : b_k = 0 \\ b_j = 1}} \lambda_t(|b|) \prod_{i \ne j, k} \pi_i(b_i).$$
It follows that $\pi_k$ and $\pi_j$ satisfy identical linear equations (each involving the remaining variables $\pi_i$, $i \ne j, k$, as parameters) and, therefore, they are equal. Since the indices $j$ and $k$ were chosen arbitrarily, it follows that $\pi_1 = \pi_2 = \ldots = \pi_n$, as needed.

Denote $\pi$ the common value of the variables $\pi_1, \pi_2, \ldots, \pi_n$. For any vector $b$ with $b_j = 0$, there are $|b|$ indices $i$ with $b_i = 1$ and $n - |b| - 1$ indices $i$, $i \ne j$, with $b_i = 0$; similarly, for any vector $b$ with $b_j = 1$, there are $|b| - 1$ indices $i$, $i \ne j$, with $b_i = 1$ and $n - |b|$ indices $i$ with $b_i = 0$. Thus, the optimality condition becomes
$$\sum_{\substack{b : b_j = 0}} \lambda_t(|b|)\, \pi^{n - |b| - 1} (1 - \pi)^{|b|} - \sum_{\substack{b : b_j = 1}} \lambda_t(|b|)\, \pi^{n - |b|} (1 - \pi)^{|b| - 1} = 0.$$
Dividing through by $\pi^{n - 1}$ yields
$$\sum_{\substack{b : b_j = 0}} \lambda_t(|b|) \Big( \frac{1 - \pi}{\pi} \Big)^{|b|} - \sum_{\substack{b : b_j = 1}} \lambda_t(|b|) \Big( \frac{1 - \pi}{\pi} \Big)^{|b| - 1} = 0.$$
There are $\binom{n-1}{|b|}$ vectors $b \in \{0,1\}^n$ with $b_j = 0$ for any given value of $|b|$, $0 \le |b| \le n - 1$; similarly, there are $\binom{n-1}{|b|-1}$ vectors $b \in \{0,1\}^n$ with $b_j = 1$ for any given value of $|b|$, $1 \le |b| \le n$. It follows that
$$\sum_{|b| = 0}^{n - 1} \binom{n-1}{|b|} \lambda_t(|b|) \Big( \frac{1 - \pi}{\pi} \Big)^{|b|} - \sum_{|b| = 1}^{n} \binom{n-1}{|b| - 1} \lambda_t(|b|) \Big( \frac{1 - \pi}{\pi} \Big)^{|b| - 1} = 0.$$

We next show:

Lemma 4.6 $\pi = 1/2$.

Proof: Assume, by way of contradiction, that $\pi \ne 1/2$, and write $\varrho = (1 - \pi)/\pi$, so that $\varrho \ne 1$. We have established that $\varrho$ satisfies a polynomial equation of degree $n - 1$. Consider any exponent $r$, $0 \le r \le n - 1$. The coefficient of $\varrho^r$ is
$$\binom{n-1}{r} \lambda_t(r) - \binom{n-1}{r} \lambda_t(r + 1) = \binom{n-1}{r} \big( \lambda_t(r) - \lambda_t(r + 1) \big),$$
while the coefficient of $\varrho^{n - 1 - r}$ is
$$\binom{n-1}{n - 1 - r} \big( \lambda_t(n - 1 - r) - \lambda_t(n - r) \big) = \binom{n-1}{r} \big( \lambda_t(n - 1 - r) - \lambda_t(n - r) \big).$$
By Lemma 4.4, $\lambda_t(n - r) = \lambda_t(r)$ and $\lambda_t(n - 1 - r) = \lambda_t(r + 1)$. It follows that the coefficient of $\varrho^r$ is equal to the negative of the coefficient of $\varrho^{n - 1 - r}$. In particular, when $n$ is odd, the coefficient of $\varrho^{(n-1)/2}$ is equal to its own negative, hence zero. It follows that the polynomial equation may be written as
$$\sum_{0 \le r < \frac{n-1}{2}} \binom{n-1}{r} \big( \lambda_t(r + 1) - \lambda_t(r) \big) \big( \varrho^{n - 1 - r} - \varrho^r \big) = 0.$$
We proceed by case analysis. (Note that we may assume $t < n$, since for $t \ge n$ the winning probability is 1 for every algorithm.) Assume first that $\varrho > 1$. Then, for any integer $r$ such that $0 \le r < (n - 1)/2$, $r < n - 1 - r$, so that
$$\varrho^r < \varrho^{n - 1 - r}.$$
Since
$$\binom{n-1}{r} \big( \lambda_t(r + 1) - \lambda_t(r) \big) > 0,$$
it follows that
$$\sum_{0 \le r < \frac{n-1}{2}} \binom{n-1}{r} \big( \lambda_t(r + 1) - \lambda_t(r) \big) \big( \varrho^{n - 1 - r} - \varrho^r \big) > 0,$$
a contradiction.

Assume now that $\varrho < 1$. Then, for any integer $r$ such that $0 \le r < (n - 1)/2$, $r < n - 1 - r$, so that
$$\varrho^r > \varrho^{n - 1 - r}.$$
Since
$$\binom{n-1}{r} \big( \lambda_t(r + 1) - \lambda_t(r) \big) > 0,$$
it follows that
$$\sum_{0 \le r < \frac{n-1}{2}} \binom{n-1}{r} \big( \lambda_t(r + 1) - \lambda_t(r) \big) \big( \varrho^{n - 1 - r} - \varrho^r \big) < 0,$$
a contradiction. This completes our proof.
By Lemmas 4.5 and 4.6, Theorem 4.1 implies that
$$P_A(t) = \frac{1}{2^n} \sum_{b \in \{0,1\}^n} \frac{1}{|b|!} \sum_{\substack{0 \le i \le |b| \\ i < t}} (-1)^i \binom{|b|}{i} (t - i)^{|b|} \cdot \frac{1}{(n - |b|)!} \sum_{\substack{0 \le i \le n - |b| \\ i < t}} (-1)^i \binom{n - |b|}{i} (t - i)^{n - |b|},$$
as needed.

Theorem 4.3 implies that the optimal winning probability of an oblivious algorithm can be computed exactly in exponential time.
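The following sketch (ours, for illustration) evaluates the optimal oblivious winning probability of Theorem 4.3 and checks the symmetry of Lemma 4.4 and the coefficient antisymmetry used in Lemma 4.6. Since the summand depends on $b$ only through $|b|$, the $2^n$-term sum collapses to $n + 1$ terms.

```python
# Theorem 4.3 (optimal oblivious winning probability), grouped by |b|,
# plus numerical checks of Lemma 4.4 and the antisymmetry in Lemma 4.6.
import math

def irwin_hall_cdf(m, t):
    return sum((-1) ** i * math.comb(m, i) * (t - i) ** m
               for i in range(m + 1) if i < t) / math.factorial(m)

def optimal_oblivious(n, t):
    """P_A(t) for pi = (1/2, ..., 1/2), grouped by k = |b|."""
    return sum(math.comb(n, k) * irwin_hall_cdf(k, t) * irwin_hall_cdf(n - k, t)
               for k in range(n + 1)) / 2 ** n

print(optimal_oblivious(3, 1.0))      # = 5/12 ~ 0.4167

def lam(n, k, t):                     # lambda_t(k) = F_k(t) * F_{n-k}(t)
    return irwin_hall_cdf(k, t) * irwin_hall_cdf(n - k, t)

n, t = 5, 5 / 3
print(all(math.isclose(lam(n, k, t), lam(n, n - k, t))
          for k in range(n + 1)))     # Lemma 4.4
coeff = [math.comb(n - 1, r) * (lam(n, r + 1, t) - lam(n, r, t))
         for r in range(n)]
print(all(math.isclose(coeff[r], -coeff[n - 1 - r])
          for r in range(n)))         # coefficient antisymmetry (Lemma 4.6)
```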
5 Non-Oblivious Algorithms

In this section, we present our results for non-oblivious, single-threshold algorithms. A combinatorial expression for the winning probability of any non-oblivious, single-threshold algorithm, and corresponding optimality conditions, are provided in Section 5.1; Section 5.2 uses the optimality conditions to demonstrate non-uniformity for the optimal non-oblivious, single-threshold algorithm.

5.1 The Winning Probability and Optimality Conditions

We show:
Theorem 5.1 Assume that $A$ is any non-oblivious, single-threshold algorithm. Then,
$$P_A(t) = \sum_{b \in \{0,1\}^n} \Bigg[ \frac{1}{(n - |b|)!} \sum_{0 \le i \le n - |b|} (-1)^i \sum_{\substack{I \subseteq \{i : b_i = 0\} \\ |I| = i \\ \sum_{l \in I} a_l < t}} \Big( t - \sum_{l \in I} a_l \Big)^{n - |b|} \Bigg] \Bigg[ \prod_{l : b_l = 1} (1 - a_l) - \frac{1}{|b|!} \sum_{0 \le i \le |b|} (-1)^i \sum_{\substack{I \subseteq \{i : b_i = 1\} \\ |I| = i \\ |I| < |b| - t + \sum_{l \in I} a_l}} \Big( |b| - t - |I| + \sum_{l \in I} a_l \Big)^{|b|} \Bigg].$$

Proof: Clearly,
$$P_A(t) = \mathsf{P}(\#_0 \le t \text{ and } \#_1 \le t) = \sum_{b \in \{0,1\}^n} \mathsf{P}(\#_0 \le t \text{ and } \#_1 \le t \mid y_A(x) = b)\, \mathsf{P}_A(y_A(x) = b)$$
(by the Conditional Law of Alternatives)
$$= \sum_{b \in \{0,1\}^n} \mathsf{P}\Big( \sum_{i : b_i = 0} x_i \le t \text{ and } \sum_{i : b_i = 1} x_i \le t \;\Big|\; y_A(x) = b \Big)\, \mathsf{P}_A(y_A(x) = b)$$
(by definition of $\#_0$ and $\#_1$)
$$= \sum_{b \in \{0,1\}^n} \mathsf{P}\Big( \sum_{i : b_i = 0} x_i \le t \text{ and } \sum_{i : b_i = 1} x_i \le t \;\Big|\; \bigwedge_{i : b_i = 0} x_i \in [0, a_i] \text{ and } \bigwedge_{i : b_i = 1} x_i \in (a_i, 1] \Big)\, \mathsf{P}_A(y_A(x) = b)$$
(since $A$ is a single-threshold algorithm)
$$= \sum_{b \in \{0,1\}^n} \mathsf{P}\Big( \sum_{i : b_i = 0} x_i \le t \;\Big|\; \bigwedge_{i : b_i = 0} x_i \in [0, a_i] \Big)\, \mathsf{P}\Big( \sum_{i : b_i = 1} x_i \le t \;\Big|\; \bigwedge_{i : b_i = 1} x_i \in (a_i, 1] \Big)\, \mathsf{P}_A(y_A(x) = b)$$
(by definition of conditional probability, and since all variables $x_i$ are independent).

Conditioned on $x_i \in [0, a_i]$, the random variable $x_i$ is uniformly distributed over $[0, a_i]$; conditioned on $x_i \in (a_i, 1]$, it is uniformly distributed over $[a_i, 1]$. Hence, by Lemmas 2.4 and 2.7,
$$P_A(t) = \sum_{b \in \{0,1\}^n} \Bigg[ \frac{1}{(n - |b|)! \prod_{l : b_l = 0} a_l} \sum_{0 \le i \le n - |b|} (-1)^i \sum_{\substack{I \subseteq \{i : b_i = 0\} \\ |I| = i \\ \sum_{l \in I} a_l < t}} \Big( t - \sum_{l \in I} a_l \Big)^{n - |b|} \Bigg] \Bigg[ 1 - \frac{1}{|b|! \prod_{l : b_l = 1} (1 - a_l)} \sum_{0 \le i \le |b|} (-1)^i \sum_{\substack{I \subseteq \{i : b_i = 1\} \\ |I| = i \\ |I| < |b| - t + \sum_{l \in I} a_l}} \Big( |b| - t - |I| + \sum_{l \in I} a_l \Big)^{|b|} \Bigg] \mathsf{P}_A(y_A(x) = b).$$
Since $A$ is a single-threshold algorithm,
$$\mathsf{P}_A(y_A(x) = b) = \prod_{l : b_l = 0} a_l \prod_{l : b_l = 1} (1 - a_l).$$
Substituting and cancelling the products $\prod_{l : b_l = 0} a_l$ and $\prod_{l : b_l = 1} (1 - a_l)$ against the corresponding denominators yields the claimed expression, as needed.
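The closed form of Theorem 5.1 can be evaluated directly; the following sketch (ours, with illustrative names) does so, and its output can be cross-checked against the simulation routine given in Section 3.

```python
# Theorem 5.1 evaluated exactly for thresholds a_1, ..., a_n.
import itertools, math

def subsets(idx):
    return itertools.chain.from_iterable(
        itertools.combinations(idx, i) for i in range(len(idx) + 1))

def winning_probability_threshold(a, t):
    n = len(a)
    total = 0.0
    for b in itertools.product((0, 1), repeat=n):
        zeros = [i for i in range(n) if b[i] == 0]
        ones = [i for i in range(n) if b[i] == 1]
        k = len(ones)
        # First bracket: inclusion-exclusion over subsets of the zeros (Lemma 2.4).
        f0 = sum((-1) ** len(I) * (t - sum(a[l] for l in I)) ** (n - k)
                 for I in subsets(zeros) if sum(a[l] for l in I) < t)
        f0 /= math.factorial(n - k)
        # Second bracket: product term minus the sum over subsets of the ones (Lemma 2.7).
        f1 = math.prod(1.0 - a[l] for l in ones)
        f1 -= sum((-1) ** len(I) * (k - t - len(I) + sum(a[l] for l in I)) ** k
                  for I in subsets(ones)
                  if len(I) < k - t + sum(a[l] for l in I)) / math.factorial(k)
        total += f0 * f1
    return total

print(winning_probability_threshold([0.622] * 3, 1.0))   # ~ 0.545
```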
Figures 1 and 2 depict the winning probabilities for some simple cases. It is already evident that the optimal non-oblivious algorithm is not uniform.

[Figure 1: The winning probabilities for n = 3, n = 4 and n = 5.]

[Figure 2: The winning probabilities for n = 3, n = 4 and n = 5.]

For non-oblivious algorithms, the analysis is more involved, since it must take into account the conditional probabilities "created" by the agents' knowledge of their inputs. We show:
Theorem 5.2 (Optimality conditions for non-oblivious algorithms) Assume that $A$ is an optimal non-oblivious, single-threshold algorithm. Then all of its thresholds are equal, say $a_1 = a_2 = \ldots = a_n = \alpha$. Moreover, write
$$G_m(\alpha) = \frac{1}{m!} \sum_{\substack{0 \le l \le m \\ \beta - l\alpha > 0}} (-1)^l \binom{m}{l} (\beta - l\alpha)^m, \qquad H_k(\alpha) = (1 - \alpha)^k - \frac{1}{k!} \sum_{\substack{0 \le l \le k \\ k - \beta - l + l\alpha > 0}} (-1)^l \binom{k}{l} (k - \beta - l + l\alpha)^k$$
for the two bracketed factors of Theorem 5.1 under a common threshold $\alpha$ and capacity $t = \beta$, so that $P_A(\beta) = \sum_{k=0}^{n} \binom{n}{k} G_{n-k}(\alpha)\, H_k(\alpha)$. Then $\alpha$ satisfies the stationarity condition
$$\sum_{k=0}^{n} \binom{n}{k} \Big( G'_{n-k}(\alpha)\, H_k(\alpha) + G_{n-k}(\alpha)\, H'_k(\alpha) \Big) = 0,$$
where (with empty sums equal to zero)
$$G'_m(\alpha) = -\frac{1}{(m-1)!} \sum_{\substack{1 \le l \le m \\ \beta - l\alpha > 0}} (-1)^l\, l \binom{m}{l} (\beta - l\alpha)^{m-1}, \qquad H'_k(\alpha) = -k (1 - \alpha)^{k-1} - \frac{1}{(k-1)!} \sum_{\substack{1 \le l \le k \\ k - \beta - l + l\alpha > 0}} (-1)^l\, l \binom{k}{l} (k - \beta - l + l\alpha)^{k-1}.$$
(The condition expresses that the partial derivatives $\partial P_A(\beta) / \partial a_k$, which all coincide at a symmetric point, vanish there.)
Unfortunately, the conditions in Theorem 5.2 do not admit a uniform solution (independent of $n$). We discover that the solutions for $n = 3$ and $n = 4$ are different. The solution for $n = 3$ and $\beta = 1$ satisfies the polynomial equation $\alpha^2 - 2\alpha + 6/7 = 0$; the solution is calculated to be $1 - \sqrt{1/7} \approx 0.622$, which is the threshold value conjectured by Papadimitriou and Yannakakis in [11] to imply optimality for the same case. (See Section 5.2.1 for a complete derivation.) On the other hand, the solution for $n = 4$ and $\beta = 4/3$ satisfies the polynomial equation $-(26/3)\alpha^3 + (98/3)\alpha^2 - (368/9)\alpha + 416/27 = 0$; the solution is calculated to be equal to approximately $0.678$.
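Both polynomial conditions are easy to solve numerically; the following sketch (ours, using numpy for illustration) recovers the two roots quoted above.

```python
# Numerical solution of the two optimality conditions.
import numpy as np

roots3 = np.roots([1.0, -2.0, 6.0 / 7.0])
print(roots3)                          # 1 +/- sqrt(1/7); only ~0.622 lies in (0, 1)

roots4 = np.roots([-26.0 / 3.0, 98.0 / 3.0, -368.0 / 9.0, 416.0 / 27.0])
print(roots4[np.isreal(roots4)].real)  # the unique real root is ~ 0.678
```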
5.2 Non-Uniformity

In this section, we derive the optimal algorithms for the special cases where (i) $n = 3$ and $\beta = 1$, and (ii) $n = 4$ and $\beta = 4/3$.

5.2.1 The Case n = 3 and β = 1

We proceed by case analysis on the interval in which $\alpha$ lies. In each case, we first use Theorem 5.1 to derive an expression for the winning probability of any symmetric protocol as a function of the common threshold $\alpha$. (Theorem 5.2 establishes that an optimal protocol is symmetric.) We then use this expression to compute the optimal $\alpha$.

Specializing Theorem 5.1 to a common threshold $a_i = \alpha$ and capacity $t = \beta$, and grouping the decision vectors $b$ by $|b|$, the winning probability becomes
$$P_A(\beta) = \sum_{|b| = 0}^{n} \binom{n}{|b|} \Bigg[ \frac{1}{(n - |b|)!} \sum_{\substack{0 \le l \le n - |b| \\ \beta - l\alpha > 0}} (-1)^l \binom{n - |b|}{l} (\beta - l\alpha)^{n - |b|} \Bigg] \Bigg[ (1 - \alpha)^{|b|} - \frac{1}{|b|!} \sum_{\substack{0 \le l \le |b| \\ |b| - \beta - l + l\alpha > 0}} (-1)^l \binom{|b|}{l} (|b| - \beta - l + l\alpha)^{|b|} \Bigg],$$
which we now evaluate for $n = 3$ and $\beta = 1$ on each interval.

$\alpha \in [0, 1/3]$: Here all of the conditions $1 - l\alpha > 0$, $0 \le l \le 3$, hold. Expanding the inclusion-exclusion sums and collecting terms yields
$$P_A(1) = \frac{1}{6} + \frac{3}{2}\,\alpha^2 - \frac{1}{2}\,\alpha^3.$$
The optimality condition is derived by differentiating with respect to $\alpha$. We obtain $3\alpha - \frac{3}{2}\alpha^2 = 0$, which implies that $3\alpha(1 - \alpha/2) = 0$; either $\alpha = 0$ or $\alpha = 2$. Neither of these values is both acceptable and maximizing: the first is acceptable but minimizes the winning probability, while the second is not acceptable, since it is greater than 1/3, the upper boundary of the interval under consideration.

$\alpha \in (1/3, 1/2]$: Here the active terms in the inclusion-exclusion sums change according to the sign conditions, but expanding and collecting terms nevertheless yields the same polynomial,
$$P_A(1) = \frac{1}{6} + \frac{3}{2}\,\alpha^2 - \frac{1}{2}\,\alpha^3.$$
The optimality condition is again $3\alpha(1 - \alpha/2) = 0$, so that either $\alpha = 0$ or $\alpha = 2$. Neither of these values is acceptable (both are outside the interval $(1/3, 1/2]$ under consideration).

$\alpha \in (1/2, 1]$: Expanding and collecting terms now yields
$$P_A(1) = -\frac{11}{6} + 9\,\alpha - \frac{21}{2}\,\alpha^2 + \frac{7}{2}\,\alpha^3.$$
The optimality condition is derived by differentiating with respect to $\alpha$. We obtain $\frac{21}{2}\alpha^2 - 21\alpha + 9 = 0$, or equivalently $\alpha^2 - 2\alpha + 6/7 = 0$, which implies that either $\alpha = 1 + \sqrt{1/7}$ or $\alpha = 1 - \sqrt{1/7} \approx 0.622$. The first is not acceptable because it is greater than 1. The second indeed maximizes the winning probability (the second derivative at $\alpha = 1 - \sqrt{1/7}$ is negative). The corresponding optimal (maximum) winning probability is approximately 0.545. This settles a conjecture of Papadimitriou and Yannakakis [11] for the case $n = 3$ and $\beta = 1$.
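A quick verification of this case analysis (our sketch, not part of the paper): the two piecewise polynomials, their continuity at the breakpoint, and the maximizer.

```python
# Verification of the n = 3, beta = 1 case analysis.
import math

def p_low(a):                   # alpha in [0, 1/2]
    return 1/6 + 1.5 * a**2 - 0.5 * a**3

def p_high(a):                  # alpha in (1/2, 1]
    return -11/6 + 9 * a - 10.5 * a**2 + 3.5 * a**3

a_star = 1 - math.sqrt(1 / 7)
print(a_star, p_high(a_star))   # ~ 0.622, ~ 0.545
# Continuity at 1/2, and the first piece never exceeds the optimum:
print(p_low(0.5), p_high(0.5), max(p_low(a / 1000) for a in range(501)))
```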
5.2.2 The Case n = 4 and β = 4/3

We proceed as in Section 5.2.1, specializing Theorem 5.1 to $n = 4$, $\beta = 4/3$ and a common threshold $\alpha$, and carrying out a case analysis over the subintervals of $[0, 1]$ determined by the sign conditions in the inclusion-exclusion sums (whose breakpoints include 1/3, 4/9 and 2/3). On each subinterval, expanding and collecting terms yields a polynomial of degree four in $\alpha$; examining the stationary points of these polynomials shows that the maximum winning probability is attained on the subinterval $(2/3, 1]$, where collecting terms yields
$$P_A(4/3) = -\frac{32}{9} + \frac{416}{27}\,\alpha - \frac{184}{9}\,\alpha^2 + \frac{98}{9}\,\alpha^3 - \frac{13}{6}\,\alpha^4.$$
The optimality condition is derived by differentiating with respect to $\alpha$. We obtain
$$-\frac{26}{3}\,\alpha^3 + \frac{98}{3}\,\alpha^2 - \frac{368}{9}\,\alpha + \frac{416}{27} = 0,$$
whose unique real root is $\alpha \approx 0.678$; the second derivative of $P_A(4/3)$ is negative there, so $\alpha \approx 0.678$ indeed maximizes the winning probability.
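A small verification sketch (ours, using numpy for illustration) for this case: the quartic on $(2/3, 1]$, its derivative, and the optimal threshold.

```python
# Verification for n = 4, beta = 4/3 on the subinterval (2/3, 1].
import numpy as np

# P_A(4/3) as derived above, coefficients in decreasing degree.
quartic = np.poly1d([-13/6, 98/9, -184/9, 416/27, -32/9])
cubic = quartic.deriv()        # -26/3 a^3 + 98/3 a^2 - 368/9 a + 416/27
real_roots = cubic.r[np.isreal(cubic.r)].real
print(real_roots)              # ~ [0.678]
print(quartic(real_roots))     # value of the quartic at the stationary point
```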
6 Discussion

We have presented a simple yet elegant combinatorial framework for the design and analysis of distributed decision-making protocols in the presence of incomplete information. Within this framework, we have completely settled the case where no communication is allowed among the agents. Our techniques and arguments have been purely combinatorial; as such, they are of independent interest. We feel that our work makes a significant advancement in the field of distributed optimization problems by providing a mathematical framework in which further research can be carried out.
References

[1] K. Arrow, The Economics of Information, Harvard University Press, 1984.

[2] G. Brightwell, T. J. Ott and P. Winkler, "Target Shooting with Programmed Random Variables," Proceedings of the 24th Annual ACM Symposium on Theory of Computing, pp. 691–698, May 1992.

[3] W. J. Cook, W. H. Cunningham, W. R. Pulleyblank and A. Schrijver, Combinatorial Optimization, Wiley-Interscience Series in Discrete Mathematics and Optimization, 1998.

[4] P. Fizzano, D. Karger, C. Stein and J. Wein, "Job Scheduling in Rings," Proceedings of the 6th Annual ACM Symposium on Parallel Algorithms and Architectures, pp. 210–219, June 1994.

[5] X. Deng and C. H. Papadimitriou, "Competitive Distributed Decision-Making," Algorithmica, Vol. 16, No. 2, pp. 133–150, 1996.

[6] P. Fatourou, M. Mavronicolas and P. Spirakis, "Efficiency of Oblivious versus Non-Oblivious Schedulers for Optimistic, Rate-Based Flow Control," Proceedings of the 16th Annual ACM Symposium on Principles of Distributed Computing, pp. 139–148, August 1997.

[7] J. M. Hayman, A. A. Lazar and G. Pacifici, "Joint Scheduling and Admission Control for ATM-Based Switching Nodes," Proceedings of the ACM SIGCOMM, pp. 223–234, 1992.

[8] P. C. Kanellakis and C. H. Papadimitriou, "The Complexity of Distributed Concurrency Control," SIAM Journal on Computing, Vol. 14, No. 1, pp. 52–75, February 1985.

[9] E. Kushilevitz and N. Nisan, Communication Complexity, Cambridge University Press, Cambridge, 1996.

[10] C. H. Papadimitriou, "Computational Aspects of Organization Theory," Proceedings of the 4th Annual European Symposium on Algorithms, September 1996.

[11] C. H. Papadimitriou and M. Yannakakis, "On the Value of Information in Distributed Decision-Making," Proceedings of the 10th Annual ACM Symposium on Principles of Distributed Computing, pp. 61–64, August 1991.

[12] C. H. Papadimitriou and M. Yannakakis, "Linear Programming Without the Matrix," Proceedings of the 25th Annual ACM Symposium on Theory of Computing, pp. 121–129, May 1993.

[13] C. Polychronopoulos and D. Kuck, "Guided Self-Scheduling: A Practical Scheduling Scheme for Parallel Computers," IEEE Transactions on Computers, Vol. 12, pp. 1425–1439, 1987.

[14] G. C. Rota, "On the Foundations of Combinatorial Theory I: Theory of Möbius Functions," Z. Wahrscheinlichkeitstheorie, Vol. 2, pp. 340–368, 1964.

[15] G. C. Rota, Introduction to Probability Theory, Third Preliminary Edition, Birkhäuser, 1999.

[16] G. C. Rota, Probability, Lecture Notes – Spring 1998, MIT Course 18.313.

[17] R. P. Stanley, Enumerative Combinatorics: Volume 1, The Wadsworth & Brooks/Cole Mathematics Series, 1986.