Lower Bounds on the Communication
of Distributed Graph Algorithms:
Progress and Obstacles
Rotem Oshman
ADGA 2013
Overview: Network Models
LOCAL
CONGESTED CLIQUE
CONGEST / general network
ASYNC MESSAGE-PASSING
Talk Overview
I. Lower bound techniques
a. CONGEST general networks: reductions from 2-party communication complexity
b. Asynchronous message passing: reductions from
multi-party communication complexity
II. Obstacles to proving lower bounds for the congested clique
Communication Complexity
[Figure: Alice holds input X, Bob holds input Y; they communicate to compute f(X, Y) = ?]
Example: DISJOINTNESS
DISJ_n:
Alice holds X ⊆ {1, …, n}, Bob holds Y ⊆ {1, …, n}.
Is X ∩ Y = ∅?
Ω(n) bits needed
[Kalyanasundaram and Schnitger, Razborov ’92]
Applying 2-Party Communication
Complexity Lower Bounds
Textbook reduction:
Given an algorithm A for solving task T…
[Figure: Alice and Bob jointly simulate A, Alice basing her part of the simulation on X and Bob basing his on Y, exchanging bits only with each other.]
Solution for T ⇒ answer for DISJOINTNESS
Example: Spanning Trees
• Setting: directed, strongly-connected network
• Communication by local broadcast with bandwidth B
• UIDs 1, …, n
• Diameter 2
• Question: how many rounds to find a rooted spanning tree?
New Problem: PARTITION
• Inputs: X, Y ⊆ {1, …, n}, with the promise that X ∪ Y = {1, …, n}
• Goal: Alice outputs A ⊆ X, Bob outputs B ⊆ Y such that A, B partition {1, …, n}.
The PARTITION Problem
• Trivial algorithm (sketched below):
  – Alice sends her input to Bob
  – Alice outputs all tasks in her input
  – Bob outputs all remaining tasks
• Communication complexity: n bits
• Lower bound?
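A minimal sketch of the trivial protocol above; the function name and the way the communication is counted (n bits for Alice's characteristic vector) are illustrative assumptions, not part of the slides.

```python
# Trivial PARTITION protocol: Alice sends her whole set X to Bob (n bits as a
# characteristic vector), claims every element of X, and Bob claims the rest.

def trivial_partition(n, X, Y):
    """Return (A, B, bits_sent) for inputs X, Y with X | Y == {1, ..., n}."""
    assert X | Y == set(range(1, n + 1)), "promise: X and Y cover {1, ..., n}"
    message = [1 if i in X else 0 for i in range(1, n + 1)]  # n bits, Alice -> Bob
    A = set(X)                                  # Alice outputs her whole input
    B = set(range(1, n + 1)) - A                # Bob outputs the remaining elements
    return A, B, len(message)

if __name__ == "__main__":
    print(trivial_partition(6, {1, 2, 3}, {2, 4, 5, 6}))  # ({1,2,3}, {4,5,6}, 6)
```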
Reduction from DISJ to PARTITION
• Given input X, Y for DISJ:
  – Notice: X ∩ Y = ∅ iff X̄ ∪ Ȳ = {1, …, n} (where X̄ = {1, …, n} ∖ X)
  – To test whether X̄ ∪ Ȳ = {1, …, n} (see the sketch below):
    • Try to solve PARTITION on X̄, Ȳ → A, B
    • Ensure A ⊆ X̄, B ⊆ Ȳ
    • Check whether A, B is a partition of {1, …, n}: Alice sends Bob hash(A), Bob compares it to hash({1, …, n} ∖ B)
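A sketch of this reduction. Here `partition_protocol` is a hypothetical black box standing for any PARTITION protocol; for clarity the final test compares the sets exactly, whereas the actual protocol exchanges short hashes (fingerprints) as in the last bullet.

```python
# Reduction from DISJ to PARTITION: run PARTITION on the complements and check
# whether the outputs really partition the universe.

def solve_disj_via_partition(n, X, Y, partition_protocol):
    universe = set(range(1, n + 1))
    Xc, Yc = universe - X, universe - Y        # X ∩ Y = ∅  iff  Xc ∪ Yc = universe
    A, B = partition_protocol(n, Xc, Yc)       # run PARTITION on the complements
    A, B = A & Xc, B & Yc                      # ensure A ⊆ Xc and B ⊆ Yc
    return A == universe - B                   # do A, B partition {1, ..., n}?

if __name__ == "__main__":
    def trivial(n, X, Y):                      # the trivial protocol from above
        return X, set(range(1, n + 1)) - X
    print(solve_disj_via_partition(6, {1, 2}, {3, 4}, trivial))   # True: disjoint
    print(solve_disj_via_partition(6, {1, 2}, {2, 4}, trivial))   # False: intersect
```

If X and Y intersect, then X̄ ∪ Ȳ misses the common element, so no pair A ⊆ X̄, B ⊆ Ȳ can cover the universe and the check fails, no matter how the PARTITION protocol behaves outside its promise.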
From PARTITION to Spanning Tree
Given a spanning tree algorithm A…
[Figure: a diameter-2 network on nodes 1, …, 6 plus Alice's node a and Bob's node b; Alice's input is X = {1, 2, 3}, Bob's is Y = {2, 4, 5, 6}.]
From PARTITION to Spanning Tree
Simulating one round of A:
[Figure: Alice and Bob each simulate their own side of the network (X = {1, 2, 3}, Y = {2, 4, 5, 6}), exchanging node a's and node b's messages each round.]
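A generic sketch of the simulation technique the figures illustrate. The node split, the message format, and the `alg.step` interface are illustrative assumptions, not the slides' construction; the point is that only messages crossing between the two sides are charged as communication between Alice and Bob.

```python
# Alice simulates the nodes on her side, Bob those on his side; per round they
# forward to each other only the messages that cross the Alice/Bob cut.

def simulate_and_count(alg, alice_nodes, bob_nodes, rounds):
    """alg.step(node, inbox) returns {neighbor: bit-string message}."""
    nodes = alice_nodes | bob_nodes
    side = {v: "A" for v in alice_nodes} | {v: "B" for v in bob_nodes}
    inbox = {v: {} for v in nodes}
    bits_exchanged = 0
    for _ in range(rounds):
        outgoing = {v: alg.step(v, inbox[v]) for v in nodes}
        inbox = {v: {} for v in nodes}
        for u, msgs in outgoing.items():
            for v, m in msgs.items():
                inbox[v][u] = m
                if side[u] != side[v]:          # message crosses the cut
                    bits_exchanged += len(m)    # only these bits are communicated
    return bits_exchanged
```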
From PARTITION to Spanning Tree
When A outputs a spanning tree:
[Figure: the spanning tree that A outputs on the same network; X = {1, 2, 3}, Y = {2, 4, 5, 6}.]
From PARTITION to Spanning Tree
• If A runs for t rounds, we use 2Bt bits
  – ⇒ t = Ω(n/B) (worked out below)
• One detail: randomness
  – Solution: Alice and Bob use public randomness
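Spelling out the arithmetic behind the round bound, using the Ω(n) bound for PARTITION obtained via the DISJ reduction above:

```latex
% The simulation uses at most 2Bt bits, while PARTITION needs \Omega(n) bits:
\Omega(n) \;\le\; \mathrm{CC}(\mathrm{PARTITION}_n) \;\le\; 2Bt
\quad\Longrightarrow\quad
t \;=\; \Omega\!\left(\tfrac{n}{B}\right).
```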
When Two Players Just Aren’t Enough
• No bottlenecks in the network
When Two Players Just Aren’t Enough
• Too much information revealed
Multi-Player Communication
Complexity
• Communication by shared blackboard??
• Number-on-forehead
• Number-in-hand
The Message-Passing Model
• k players
• Private channels
• Private n-bit inputs X_1, …, X_k
• Private randomness
• Goal: compute f(X_1, …, X_k)
• Cost: total communication
The Coordinator Model
• k players, one coordinator
• The coordinator has no input
Message-Passing vs. Coordinator
[Figure: the message-passing and coordinator models side by side.]
Prior Work on Message-Passing
• For k players with n-bit inputs…
• Phillips, Verbin, Zhang ’12:
  – Ω(nk) for bitwise problems (AND/OR, MAJ, …)
• Woodruff, Zhang ’12, ’13:
  – Ω(nk) for threshold and graph problems
• Braverman, Ellen, O., Pitassi, Vaikuntanathan ’13: Ω(nk) for DISJ_{n,k}
Set Disjointness
[Figure: players holding sets X_1, …, X_5 ⊆ {1, …, n}; do the sets share a common element?]
DISJ_{n,k}(X_1, …, X_k) = ⋁_{i=1}^{n} ⋀_{j=1}^{k} X_j[i]
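A direct evaluation of the formula above, under the convention assumed in the reconstruction (DISJ_{n,k} = 1 exactly when some element lies in every player's set; some papers use the negation).

```python
# DISJ_{n,k} = OR over elements i of (AND over players j of "i in X_j").

def disj(n, sets):
    """sets = [X_1, ..., X_k], each a subset of {1, ..., n}."""
    return int(any(all(i in X_j for X_j in sets) for i in range(1, n + 1)))

if __name__ == "__main__":
    print(disj(6, [{1, 2}, {2, 4, 5}, {2, 6}]))   # 1: element 2 is in every set
    print(disj(6, [{1, 2}, {3, 4}, {5, 6}]))      # 0: no common element
```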
Notation
• Π: randomized protocol
  – Also, the protocol’s transcript
  – Π_i: player i’s view of the transcript
• CC(Π) = worst-case communication of Π
• CC_ε(f) = min over protocols Π that compute f with error ε of CC(Π), in the worst case
Entropy and Mutual Information
• Entropy:
  H(X) = −Σ_x Pr[X = x] · log Pr[X = x]
• A lossless encoding of X requires H(X) bits
• Conditional entropy:
  H(X | Y) = E_y[H(X | Y = y)] ≤ H(X)
Entropy and Mutual Information
• Mutual information:
  I(X; Y) = H(X) − H(X | Y)
• Conditional mutual information:
  I(X; Y | Z) = E_z[I(X; Y | Z = z)] = H(X | Z) − H(X | Y, Z)
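A small numerical illustration of the definitions above on a two-bit joint distribution (the distribution is made up purely for illustration).

```python
# Compute H(X), H(X|Y) and I(X;Y) = H(X) - H(X|Y) for a small joint p(x, y).
from collections import defaultdict
from math import log2

p = {("0", "0"): 0.4, ("0", "1"): 0.1, ("1", "0"): 0.1, ("1", "1"): 0.4}

def entropy(dist):
    return -sum(q * log2(q) for q in dist.values() if q > 0)

def marginal(joint, index):
    m = defaultdict(float)
    for xy, q in joint.items():
        m[xy[index]] += q
    return dict(m)

pX, pY = marginal(p, 0), marginal(p, 1)
H_X = entropy(pX)
H_X_given_Y = entropy(p) - entropy(pY)    # chain rule: H(X|Y) = H(X,Y) - H(Y)
I_XY = H_X - H_X_given_Y                  # I(X;Y) = H(X) - H(X|Y)

print(f"H(X) = {H_X:.3f}, H(X|Y) = {H_X_given_Y:.3f}, I(X;Y) = {I_XY:.3f}")
```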
Information Cost for Two Players
[Chakrabarti, Shi, Wirth, Yao ’01], [Bar-Yossef, Jayram, Kumar, Sivakumar ’04], [Braverman, Rao ’10], …
Fix a distribution μ.
• External information cost:
  I_{X,Y∼μ}(XY; Π) = H(Π) − H(Π | XY) ≤ |Π|
• Internal information cost:
  I_{X,Y∼μ}(X; Π | Y) + I_{X,Y∼μ}(Y; Π | X)
Extension to the coordinator model:
Σ_{i=1}^{k} [ I(X_i; Π_i | X_{−i}, R) + I(X_{−i}; Π_i | X_i, R) ]
Why is Info Complexity Nice?
• Formalizes a natural notion
  – Analogous to causality/knowledge
• Admits a direct sum theorem:
  “The cost of solving n independent copies of problem P is n times the cost of solving P”
Example
DISJ_{n,k}(X_1, …, X_k) = ⋁_{i=1}^{n} ⋀_{j=1}^{k} X_j[i]
Example (Work in Progress)
• Triangle detection in general congested graphs
• “Is there a triangle?” = ⋁_{v_1, v_2, v_3} “is {v_1, v_2, v_3} a triangle?”
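The decomposition above, stated as a brute-force OR over all triples. This is purely illustrative and centralized; the slides are about the communication cost of this OR in a distributed network.

```python
# "Is there a triangle" = OR over triples {v1, v2, v3} of "is {v1, v2, v3} a triangle".
from itertools import combinations

def has_triangle(vertices, edges):
    E = {frozenset(e) for e in edges}
    return any(
        frozenset((a, b)) in E and frozenset((b, c)) in E and frozenset((a, c)) in E
        for a, b, c in combinations(vertices, 3)
    )

if __name__ == "__main__":
    print(has_triangle(range(4), [(0, 1), (1, 2), (0, 2), (2, 3)]))  # True
    print(has_triangle(range(4), [(0, 1), (1, 2), (2, 3)]))          # False
```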
Application of DISJ Lower Bound
• Open problem from Woodruff & Zhang ’13:
  – Hardness of computing the diameter of a graph
• We can show: Ω(nk) bits to distinguish diameter ≤ 3 from diameter ∞
• Reduction from DISJ: given X_1, …, X_k,
  – Notice: X_1, …, X_k are disjoint iff X̄_1 ∪ ⋯ ∪ X̄_k = {1, …, n}
Application of DISJ Lower Bound
[Figure: the reduction graph built from X_1, …, X_4 on element-vertices 1, …, 6.]
• X̄_1 ∪ ⋯ ∪ X̄_k = {1, …, n} ⇒ diameter ≤ 3
• X̄_1 ∪ ⋯ ∪ X̄_k ≠ {1, …, n} ⇒ diameter = ∞
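One plausible way to realize the reduction pictured above (an assumption: the exact gadget in the slides may differ): player vertex p_j is joined to every element outside X_j, and the player vertices form a clique. If the sets have no common element, every element vertex touches some player and the diameter is at most 3; otherwise a common element is isolated and the diameter is infinite.

```python
# Build the (assumed) reduction graph: element vertices e1..en, player vertices
# p1..pk; p_j is adjacent to e_i exactly when i is outside X_j; players form a clique.
import itertools

def diameter_gadget(n, sets):
    """sets = [X_1, ..., X_k]; returns (vertices, edges)."""
    k = len(sets)
    elements = [f"e{i}" for i in range(1, n + 1)]
    players = [f"p{j}" for j in range(1, k + 1)]
    edges = set(itertools.combinations(players, 2))          # player clique
    for j, X_j in enumerate(sets, start=1):
        for i in range(1, n + 1):
            if i not in X_j:                                  # i is in the complement of X_j
                edges.add((f"p{j}", f"e{i}"))
    return elements + players, edges

if __name__ == "__main__":
    V, E = diameter_gadget(6, [{1, 2}, {2, 4, 5}, {2, 6}])    # common element 2
    print("e2 isolated:", not any("e2" in e for e in E))      # True -> diameter ∞
```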
Part II: The Power of the Congested
Clique
CONGESTED CLIQUE
Conversion from Boolean Circuit
• Suppose we have a Boolean circuit C
  – Any type of gate, n² inputs
  – Fan-in ≤ 2
  – Depth = polylog(n), #gates and wires = n² polylog(n)
• Step 1: reduce the fan-out to ≤ 2
  – Convert large fan-out gates to a “copying tree”
  – Blowup: O(log n) depth, O(1) size
• Step 2: convert to a layered circuit
Conversion from Boolean Circuit
• Now we have a layered circuit C′ of depth polylog(n) and size n² polylog(n)
  – With fan-in and fan-out ≤ 2
• Design a CONGEST protocol:
  – Fix a partition of the inputs into X_1, …, X_n of size n each
  – Assign each gate to a random CONGEST node
  – Simulate the circuit layer-by-layer (a sketch follows below)
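A schematic of the simulation described above. The circuit representation, the random gate assignment, and the load bookkeeping are illustrative assumptions, not the slides' exact construction.

```python
# Each gate is owned by a uniformly random node; layer by layer, every owner
# evaluates its gates and "sends" each output to the owners of the gates that
# read it on the next layer. `load` counts how many bits cross each clique edge.
import random

def simulate_layered_circuit(layers, nodes, inputs):
    """layers: list of layers, each a list of gates (gate_id, func, [pred ids]);
    layers[0] is the input layer and `inputs` maps its gate ids to bits;
    nodes: list of congested-clique node ids."""
    owner = {g[0]: random.choice(nodes) for layer in layers for g in layer}
    values = dict(inputs)                      # gate_id -> bit
    load = {}                                  # (sender, receiver) -> #bits sent
    for layer in layers[1:]:
        for gate_id, func, preds in layer:
            values[gate_id] = func(*(values[p] for p in preds))
            for p in preds:                    # the wire p -> gate_id
                edge = (owner[p], owner[gate_id])
                if edge[0] != edge[1]:         # crosses between two clique nodes
                    load[edge] = load.get(edge, 0) + 1
    return values, load
```

With n² polylog(n) wires per layer and each wire landing on a given pair of owners with probability 1/n², the expected load per clique edge is polylog(n); this is exactly the calculation on the next slide.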
Simulating a Layer
• If node u “owns” gate G on layer i, it sends G’s output to the nodes that need it on layer i + 1
• Size of layer i + 1 ≤ 2 · size of layer i
• What is the load on edge (u, v)?
  – For each wire from layer i to layer i + 1, Pr[wire is assigned to (u, v)] = 1/n²
  – At most n² polylog(n) wires in total
  – By Chernoff, w.h.p. the load is polylog(n)
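A back-of-the-envelope version of the Chernoff step above (treating the wire-to-edge assignments as concentrated enough for a Chernoff-type bound, as the slide does); W denotes the number of wires between two consecutive layers.

```latex
% Expected load on a fixed edge (u,v), with W <= n^2 polylog(n) wires per layer:
\mathbb{E}\bigl[\mathrm{load}(u,v)\bigr] \;=\; \frac{W}{n^{2}} \;\le\; \mathrm{polylog}(n),
\qquad
\Pr\bigl[\mathrm{load}(u,v) > c\cdot\mathrm{polylog}(n)\bigr] \;\le\; n^{-\Omega(c)}.
```

A union bound over the n² edges and the polylog(n) layers then gives the w.h.p. statement; this is the union-bound step on the next slide.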
Conversion from Boolean Circuit
• A union bound finishes the proof
• Corollary: explicit lower bounds in the congested clique imply explicit lower bounds on Boolean circuits with polylogarithmic depth and nearly-linear size.
• Even worse:
  – There are reasons to believe that even an Ω(log log n) lower bound would be hard to prove
Conclusion
LOCAL
CONGESTED CLIQUE
CONGEST / general network
ASYNC MESSAGE-PASSING