Algorithms, Graph Theory, and Linear Equations in Laplacians
Daniel A. Spielman, Yale University
Shang-Hua Teng 滕尚華

Carter Fussell (TPS), David Harbater (UPenn), Todd Knoblock, Pavel Curtis (CTY), Serge Lang (1927-2005), Gian-Carlo Rota (1932-1999)

Solving Linear Equations Ax = b, Quickly

Goal: in time linear in the number of non-zero entries of A.
Special case: A is the Laplacian matrix of a graph.

1. Why
2. Classic approaches
3. Recent developments
4. Connections

Graphs

A graph consists of a set of vertices V and a set of edges E of pairs {u,v} ⊆ V.
(Figure: an example graph on ten vertices, with Erdős, Alon, Lovász, and Karp among its vertex labels.)

Laplacian Quadratic Form of G = (V,E)

For x : V → ℝ,

    x^T L_G x = Σ_{(u,v) ∈ E} (x(u) − x(v))²

Example: for an x assigning the values −3, −1, 0, 1, 2, 3 shown on the vertices, summing (x(u) − x(v))² over the edges gives x^T L_G x = 15.
Laplacian Quadratic Form for a Weighted Graph

G = (V, E, w), where w : E → ℝ⁺ assigns a positive weight to every edge:

    x^T L_G x = Σ_{(u,v) ∈ E} w(u,v) (x(u) − x(v))²
The matrix L_G is positive semi-definite; if G is connected, its nullspace is spanned by the constant vector.

Laplacian Matrix of a Weighted Graph

    L_G(u,v) = −w(u,v)   if (u,v) ∈ E
               d(u)      if u = v
               0         otherwise

where d(u) = Σ_{(v,u) ∈ E} w(u,v) is the weighted degree of u
(the combinatorial degree is the number of attached edges).

Example: a graph on 5 vertices with edge weights 1, 1, 2, 3, 1 has Laplacian

    [  4  -1   0  -1  -2 ]
    [ -1   4  -3   0   0 ]
    [  0  -3   4  -1   0 ]
    [ -1   0  -1   2   0 ]
    [ -2   0   0   0   2 ]
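The definition above is easy to exercise numerically. A minimal sketch (assuming NumPy is available; the helper and edge list are illustrative, not from the talk) that builds the 5-vertex example and checks the quadratic-form identity:

```python
import numpy as np

def laplacian(n, edges):
    """Weighted Laplacian from a list of (u, v, weight), 0-indexed."""
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w          # weighted degrees on the diagonal
        L[v, v] += w
        L[u, v] -= w          # -w(u,v) off the diagonal
        L[v, u] -= w
    return L

# the 5-vertex example above: weights 1, 1, 2, 3, 1
edges = [(0, 1, 1), (0, 3, 1), (0, 4, 2), (1, 2, 3), (2, 3, 1)]
L = laplacian(5, edges)

# x^T L x equals the sum over edges of w(u,v) (x(u) - x(v))^2
x = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
quad = sum(w * (x[u] - x[v]) ** 2 for u, v, w in edges)
assert np.isclose(x @ L @ x, quad)

# positive semi-definite, nullspace spanned by the constant vector
assert np.allclose(L @ np.ones(5), 0)
```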
Networks of Resistors

Ohm's law gives i = v/r. In general, i = L_G v, with w(u,v) = 1/r(u,v).

The voltages minimize the dissipated energy v^T L_G v subject to the fixed boundary voltages. Example: fixing one vertex at 1V and another at 0V, solving the Laplacian gives the remaining voltages (0.5V, 0.5V, 0.375V, 0.625V in the figure).

Learning on Graphs

Infer the values of a function at all vertices from known values at a few vertices:

    Minimize x^T L_G x = Σ_{(u,v) ∈ E} w(u,v) (x(u) − x(v))²
    subject to the known values.

Example: with known values 1 and 0, solving the Laplacian fills in values such as 0.5, 0.5, 0.375, and 0.625.
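Both the resistor network and the learning problem reduce to the same computation: minimize x^T L_G x with some values fixed. A sketch (assuming NumPy; the path example is illustrative) — the minimizer solves the Laplacian subsystem L[I,I] x_I = −L[I,B] x_B:

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

# a path 0-1-2-3 of unit resistors; fix v(0) = 1 and v(3) = 0
edges = [(0, 1, 1), (1, 2, 1), (2, 3, 1)]
L = laplacian(4, edges)
boundary, interior = [0, 3], [1, 2]
xb = np.array([1.0, 0.0])

# minimize x^T L x over the interior: solve L[I,I] x_I = -L[I,B] x_B
xi = np.linalg.solve(L[np.ix_(interior, interior)],
                     -L[np.ix_(interior, boundary)] @ xb)

# voltages fall off linearly along the path
assert np.allclose(xi, [2/3, 1/3])
```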
Spectral Graph Theory

Combinatorial properties of G are revealed by the eigenvalues and eigenvectors of L_G. Compute the most important ones by solving equations in the Laplacian.

Solving Linear Programs in Optimization

Interior point methods for linear programming reduce network flow problems to Laplacian systems.

Numerical Solution of Elliptic PDEs

The finite element method produces Laplacian systems.

How to Solve Linear Equations Quickly

Fast when the graph is simple, by elimination.
Fast when the graph is complicated*, by Conjugate Gradient (Hestenes '51, Stiefel '52).

Cholesky Factorization of Laplacians

When we eliminate a vertex, we connect its neighbors. Also known as Y-Δ.

Example: the unweighted graph on 5 vertices with Laplacian

    [  3  -1   0  -1  -1 ]
    [ -1   2  -1   0   0 ]
    [  0  -1   2  -1   0 ]
    [ -1   0  -1   2   0 ]
    [ -1   0   0   0   1 ]

Eliminating vertex 1 adds edges of weight 1/3 ≈ 0.33 among its three neighbors:

    [ 3    0     0     0     0    ]
    [ 0  1.67 -1.00 -0.33 -0.33 ]
    [ 0 -1.00  2.00 -1.00   0   ]
    [ 0 -0.33 -1.00  1.67 -0.33 ]
    [ 0 -0.33   0   -0.33  0.67 ]

The remaining block is again the Laplacian of a graph. Eliminating vertex 2 next gives

    [ 3    0    0     0     0   ]
    [ 0  1.67  0     0     0   ]
    [ 0    0   1.4  -1.2  -0.2 ]
    [ 0    0  -1.2   1.6  -0.4 ]
    [ 0    0  -0.2  -0.4   0.6 ]
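One elimination step is a Schur complement, and it stays a Laplacian. A sketch (assuming NumPy) reproducing the matrices above:

```python
import numpy as np

L = np.array([[ 3., -1.,  0., -1., -1.],
              [-1.,  2., -1.,  0.,  0.],
              [ 0., -1.,  2., -1.,  0.],
              [-1.,  0., -1.,  2.,  0.],
              [-1.,  0.,  0.,  0.,  1.]])

def eliminate(L, k):
    """Schur complement of L after eliminating row/column k (the Y-Δ step)."""
    rest = [i for i in range(L.shape[0]) if i != k]
    A = L[np.ix_(rest, rest)]
    b = L[np.ix_(rest, [k])]
    return A - b @ b.T / L[k, k]

L1 = eliminate(L, 0)

# the reduced matrix is again a Laplacian: symmetric, rows sum to zero
assert np.allclose(L1, L1.T)
assert np.allclose(L1 @ np.ones(4), 0)
# and matches the 1.67 / -0.33 entries shown above
assert np.isclose(L1[0, 0], 5/3)
assert np.isclose(L1[0, 2], -1/3)
```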
The order matters

Example: a graph on 5 vertices with Laplacian

    [  1  -1   0   0   0 ]
    [ -1   3  -1   0  -1 ]
    [  0  -1   2  -1   0 ]
    [  0   0  -1   2  -1 ]
    [  0  -1   0  -1   2 ]

Eliminating vertex 1 (degree 1) creates no new edges:

    [ 1   0   0   0   0 ]
    [ 0   2  -1   0  -1 ]
    [ 0  -1   2  -1   0 ]
    [ 0   0  -1   2  -1 ]
    [ 0  -1   0  -1   2 ]

and eliminating vertex 2 (now degree 2) adds a single edge of weight 0.5:

    [ 1   0    0     0     0   ]
    [ 0   2    0     0     0   ]
    [ 0   0   1.5  -1.0  -0.5 ]
    [ 0   0  -1.0   2    -1.0 ]
    [ 0   0  -0.5  -1.0   1.5 ]

Eliminating low-degree vertices first keeps the fill-in small.
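The cost of elimination is governed by #ops ~ Σ_v (degree of v when eliminated)². A small sketch comparing two orders on a star (the counting helper is illustrative, not from the talk):

```python
def elimination_cost(n, edges, order):
    """#ops ~ sum over v of (degree of v when eliminated)^2."""
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    cost = 0
    for v in order:
        nbrs = adj.pop(v)
        cost += len(nbrs) ** 2
        for a in nbrs:                 # Y-Δ: clique the neighbors
            adj[a].discard(v)
            for b in nbrs:
                if a != b:
                    adj[a].add(b)
    return cost

# star with center 0: eliminating leaves first is cheap,
# eliminating the center first turns the leaves into a clique
star = [(0, i) for i in range(1, 6)]
cheap = elimination_cost(6, star, [1, 2, 3, 4, 5, 0])
dear = elimination_cost(6, star, [0, 1, 2, 3, 4, 5])
assert cheap < dear
```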
Complexity of Cholesky Factorization

#ops ~ Σ_v (degree of v when eliminated)²

Tree (connected, no cycles):  #ops ~ O(|V|)
Planar:  #ops ~ O(|V|^{3/2})  (Lipton-Rose-Tarjan '79)
Expander (like random, but O(|V|) edges):  #ops ≳ Ω(|V|³)

Conductance and Cholesky Factorization

Cholesky is slow when the conductance is high, and fast when it is low for G and all its subgraphs.

For S ⊂ V,

    Φ(S) = (# edges leaving S) / (sum of degrees on the smaller side, S or V − S)

    Φ_G = min_{S ⊂ V} Φ(S)

Example: Φ(S) = 3/5 for the set S shown.
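A small sketch of the definition (the helpers are illustrative, not from the slides): compute Φ(S) for an unweighted graph, and Φ_G by brute force over all cuts.

```python
from itertools import combinations

def conductance(n, edges, S):
    """Phi(S) = boundary edges / volume of the smaller side."""
    S = set(S)
    boundary = sum(1 for u, v in edges if (u in S) != (v in S))
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1; deg[v] += 1
    volS = sum(deg[u] for u in S)
    return boundary / min(volS, sum(deg) - volS)

def phi_G(n, edges):
    return min(conductance(n, edges, S)
               for k in range(1, n)
               for S in combinations(range(n), k))

# a 4-cycle: a single vertex gives Phi(S) = 2/2 = 1,
# and the best cut splits it into two adjacent pairs: Phi_G = 2/4 = 1/2
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert conductance(4, edges, [0]) == 1.0
assert phi_G(4, edges) == 0.5
```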
Cheeger's Inequality and the Conjugate Gradient

Cheeger's inequality (degree-d unweighted case):

    λ₂ / (2d) ≤ Φ_G ≤ √(2 λ₂ / d)

where λ₂ is the second-smallest eigenvalue of L_G, roughly d divided by the mixing time of the random walk.

Conjugate Gradient finds an ε-approximate solution to L_G x = b in O(√(d/λ₂) log ε⁻¹) multiplications by L_G, that is, in O(|E| √(d/λ₂) log ε⁻¹) operations.

Fast solution of linear equations

CG is fast when the conductance is high. Elimination is fast when it is low for G and all subgraphs. Planar graphs sit in the middle: we want the speed of the extremes there too. Not all graphs fit into these categories!

Preconditioned Conjugate Gradient

Solve L_G x = b by approximating L_G by L_H (the preconditioner). Each iteration solves a system in L_H and multiplies a vector by L_G, giving an ε-approximate solution after

    O(√(κ(L_G, L_H)) log ε⁻¹) iterations,

where κ(L_G, L_H) is the relative condition number.

Inequalities and Approximation

L_H ≼ L_G if L_G − L_H is positive semi-definite, i.e., for all x,

    x^T L_H x ≤ x^T L_G x

Example: this holds if H is a subgraph of G, since dropping edges only removes terms from x^T L_G x = Σ_{(u,v) ∈ E} w(u,v) (x(u) − x(v))².

κ(L_G, L_H) ≤ t if L_H ≼ L_G ≼ t L_H, and more generally iff c L_H ≼ L_G ≼ ct L_H for some c.

Call H a t-approximation of G if κ(L_G, L_H) ≤ t.
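The relative condition number can be checked numerically on a tiny example: a 4-cycle G preconditioned by a spanning path H. A sketch (assuming NumPy; the computation via the pseudo-inverse is one of the equivalent definitions):

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

LG = laplacian(4, [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1)])  # 4-cycle
LH = laplacian(4, [(0, 1, 1), (1, 2, 1), (2, 3, 1)])             # spanning path

# generalized eigenvalues of (L_G, L_H) on the space orthogonal to 1:
# eigenvalues of pinv(L_H) L_G, discarding the zero mode
vals = np.sort(np.real(np.linalg.eigvals(np.linalg.pinv(LH) @ LG)))[1:]
kappa = vals[-1] / vals[0]

assert vals[0] >= 1 - 1e-9    # L_H ≼ L_G since H is a subgraph of G
assert np.isclose(kappa, 4.0)
```

Here the largest generalized eigenvalue is 1 plus the effective resistance (3) across the missing edge, so κ = 4, comfortably below the stretch bound of 6.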
Other definitions of the condition number (Goldstine, von Neumann '47):

    κ(L_G, L_H) = λ_max(L_G L_H⁺) / λ_min(L_G L_H⁺)
                = ( max_{x ∈ Span(L_H)} x^T L_G x / x^T L_H x ) · ( max_{x ∈ Span(L_G)} x^T L_H x / x^T L_G x )

where L_H⁺ is the pseudo-inverse and λ_min is the smallest non-zero eigenvalue.

Vaidya's Subgraph Preconditioners

Precondition G by a subgraph H. Then L_H ≼ L_G, so we just need a t for which L_G ≼ t L_H. It is easy to bound t if H is a spanning tree, and easy to solve equations in L_H by elimination.

The Stretch of Spanning Trees

Boman-Hendrickson '01:  L_G ≼ st_T(G) · L_T, where

    st_T(G) = Σ_{(u,v) ∈ E} path-length_T(u, v)

Example: non-tree edges whose endpoints are at tree distance 3, 5, and 1 contribute those amounts to the sum. In the weighted case, measure the resistances of the tree paths.
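Computing st_T(G) for a small unweighted graph is straightforward. A sketch (the helpers are illustrative, not from the talk): sum, over every edge of G, the length of the tree path between its endpoints.

```python
from collections import deque

def tree_path_length(tree_adj, u, v):
    """BFS from u to v in the tree; returns the path length."""
    prev = {u: None}
    q = deque([u])
    while q:
        x = q.popleft()
        if x == v:
            break
        for y in tree_adj[x]:
            if y not in prev:
                prev[y] = x
                q.append(y)
    length, x = 0, v
    while prev[x] is not None:
        x = prev[x]
        length += 1
    return length

def stretch(n, G_edges, T_edges):
    adj = {i: [] for i in range(n)}
    for u, v in T_edges:
        adj[u].append(v); adj[v].append(u)
    return sum(tree_path_length(adj, u, v) for u, v in G_edges)

# 4-cycle with a spanning path: the three tree edges stretch 1 each,
# the non-tree edge (3,0) stretches 3, so st_T(G) = 6
G_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
T_edges = [(0, 1), (1, 2), (2, 3)]
assert stretch(4, G_edges, T_edges) == 6
```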
Low-Stretch Spanning Trees

For every G there is a T with st_T(G) ≤ m^{1+o(1)}, where m = |E| (Alon-Karp-Peleg-West '91), and even with st_T(G) ≤ O(m log m log log m) (Elkin-Emek-Spielman-Teng '04, Abraham-Bartal-Neiman '08). These trees let us solve linear systems in time O(m^{3/2} log m).

If G is an expander, st_T(G) ≥ Ω(m log m).

Expander Graphs

An infinite family of d-regular graphs (all degrees d) satisfying λ₂ ≥ const > 0. The spectrally best are Ramanujan graphs (Margulis '88, Lubotzky-Phillips-Sarnak '88): all non-zero eigenvalues inside d ± 2√(d−1). Fundamental examples with amazing properties.
Expanders Approximate Complete Graphs

Let G be the complete graph on n vertices (having all possible edges). All non-zero eigenvalues of L_G are n:

    x^T L_G x = n  for all x ⊥ 1, ‖x‖ = 1

Let H be a d-regular Ramanujan expander, so that for such x

    d − 2√(d−1) ≤ x^T L_H x ≤ d + 2√(d−1)

Then (after the rescaling allowed in the definition of κ)

    κ(L_G, L_H) ≤ (d + 2√(d−1)) / (d − 2√(d−1)) → 1 as d grows.
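The claim about the complete graph is quick to verify: L of K_n equals n·I − J, so its non-zero eigenvalues are all n. A sketch (assuming NumPy):

```python
import numpy as np

n = 6
L = n * np.eye(n) - np.ones((n, n))   # Laplacian of the complete graph K_n
eigs = np.sort(np.linalg.eigvalsh(L))

assert np.isclose(eigs[0], 0)         # nullspace: the constant vector
assert np.allclose(eigs[1:], n)       # every other eigenvalue equals n

# hence x^T L x = n for any unit x orthogonal to the all-ones vector
x = np.ones(n); x[0] = -(n - 1)       # orthogonal to 1
x = x / np.linalg.norm(x)
assert np.isclose(x @ L @ x, n)
```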
Sparsification

Goal: find a sparse approximation of every G.

S-Teng '04: for every G there is an H with O(n log⁷ n / ε²) edges and κ(L_G, L_H) ≤ 1 + ε.

If the conductance is high, λ₂ is high (Cheeger) and a random sample is good (Füredi-Komlós '81). If the conductance is not high, we can split the graph while removing few edges.

Fast Graph Decomposition by Local Graph Clustering

Given a vertex of interest, find a nearby cluster S with small Φ(S), in time depending on the size of the cluster rather than of the whole graph.
S-Teng '04: Lovász-Simonovits. Andersen-Chung-Lang '06: PageRank. Andersen-Peres '09: evolving-set Markov chain.

S-Srivastava '08: O(n log n / ε²) edges. Proof by modern random matrix theory: Rudelson's concentration for random sums.

Batson-S-Srivastava '09: dn edges and

    κ(L_G, L_H) ≤ (d + 1 + 2√d) / (d + 1 − 2√d)
Sparsification by Linear Algebra

Edges become vectors: given v₁, ..., v_m ∈ ℝⁿ such that Σ_e v_e v_eᵀ = I, find a small subset S ⊂ {1, ..., m} and coefficients c_e such that

    ‖ Σ_{e ∈ S} c_e v_e v_eᵀ − I ‖ ≤ ε

Rudelson '99 says we can find |S| ≤ O(n log n / ε²) if we choose S at random with Pr[e] ∼ ‖v_e‖². In graphs, these probabilities are effective resistances.

Batson-S-Srivastava: can find |S| ≤ 4n/ε² by a greedy algorithm with a Stieltjes potential function.
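The random-sampling route can be sketched numerically (assuming NumPy; the whitening construction and sample size are illustrative choices, not the talk's): build vectors with Σ v_e v_eᵀ = I, sample with probability proportional to ‖v_e‖², reweight, and check the sampled sum is close to I in spectral norm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 10, 2000

# build v_1..v_m with sum of v_e v_e^T = I by whitening random vectors
V = rng.standard_normal((m, n))
C = np.linalg.cholesky(V.T @ V)       # V^T V = C C^T
V = V @ np.linalg.inv(C).T            # now V^T V = I
assert np.allclose(V.T @ V, np.eye(n))

# sample with Pr[e] proportional to ||v_e||^2 (effective resistances in graphs)
p = np.einsum('ij,ij->i', V, V) / n   # probabilities sum to 1
k = 1000                              # number of samples (illustrative)
idx = rng.choice(m, size=k, p=p)

# reweight by 1/(k p_e) so the expectation of the sum is exactly I
approx = sum(np.outer(V[e], V[e]) / (k * p[e]) for e in idx)
err = np.linalg.norm(approx - np.eye(n), 2)
assert err < 0.6                      # close to I in spectral norm
```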
Relation to Kadison-Singer, Paving Conjecture

Would be implied by the following strong version of Weaver's conjecture KS'₂: there exists a constant α such that for all v₁, ..., v_m ∈ ℝⁿ with

    Σ_e v_e v_eᵀ = I   and   ‖v_e‖² = n/m ≤ α,

there exists S ⊂ {1, ..., m} with |S| = m/2 and

    ‖ Σ_{e ∈ S} v_e v_eᵀ − (1/2) I ‖ ≤ 1/4

Rudelson '99: true for α = const/(log n).
Batson-S-Srivastava '09: true, but with coefficients in the sum.
S-Srivastava '10: Bourgain-Tzafriri Restricted Invertibility.
Sparsification and Solving Linear Equations

Sparsification can reduce any Laplacian to one with O(|V|) non-zero entries/edges.

S-Teng '04: combine low-stretch trees with sparsification to solve Laplacian systems in time O(m log^c n log ε⁻¹), where m = |E| and n = |V|.
Koutis-Miller-Peng '10: time O(m log² n log ε⁻¹).
Kolla-Makarychev-Saberi-Teng '09: O(m log n log ε⁻¹) after preprocessing.

What's Next

Other families of linear equations: from directed graphs, from physical problems, from optimization problems.
Solving linear equations as a primitive.
Decompositions of the identity: understanding the vectors we get from graphs, the Ramanujan bound, Kadison-Singer?

Conclusion

It is all connected.