Two-player games on Galton-Watson trees
James Martin
University of Oxford
Joint work with Alexander Holroyd
Paris-Bath meeting on branching structures,
Paris, 16th September 2011
Example: branching process extinction probability
Galton-Watson branching process, offspring distribution
(pk, k = 0, 1, 2, . . .), generating function G(x) = Σk pk x^k.
Write η = probability of survival of the process
η̄ = 1 − η = probability of extinction
ηn = probability of survival to generation n
η̄n = 1 − ηn = probability of extinction by generation n
We have η̄0 = 0 and for n ≥ 0,
η̄n+1 = Σk pk η̄n^k = G(η̄n).
η̄n ↑ η̄, and since G is continuous and increasing, it follows that
η̄ is the smallest solution in [0, 1] of x = G (x).
Extinction probability η̄ is the smallest solution of G (x) − x = 0.
At x = 1, the function G (x) − x has value 0 and slope µ − 1.
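As a quick numerical sketch (not part of the original slides): for Poisson(λ) offspring, G(x) = exp(λ(x − 1)), and since G is increasing, iterating x ← G(x) from 0 climbs monotonically to the smallest fixed point, i.e. the extinction probability.

```python
import math

def extinction_prob(G, iters=10000):
    """Smallest fixed point of G on [0, 1]: iterate eta_bar_{n+1} = G(eta_bar_n) from 0."""
    x = 0.0
    for _ in range(iters):
        x = G(x)
    return x

# Poisson(lam) offspring: G(x) = exp(lam * (x - 1))
poisson_G = lambda lam: (lambda x: math.exp(lam * (x - 1.0)))

print(extinction_prob(poisson_G(0.9)))  # subcritical: extinction probability 1
print(extinction_prob(poisson_G(2.0)))  # supercritical: approximately 0.2032
```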
[Plot: G(x) − x on [0, 1]. Subcritical: µ < 1]
[Plot: G(x) − x on [0, 1]. Supercritical: µ > 1]
Extinction probability is continuous at the critical point (except in the
degenerate case p0 = 1).
Example: complete binary tree in a branching process
Let β = 1 − β̄ be the probability that the process contains an
infinite binary tree rooted at the first individual.
Let βn = 1 − β̄n be the probability that the process contains an
n-level binary tree rooted at the first individual.
We have β̄0 = 0 and for n ≥ 0,
β̄n+1 = Σk pk [β̄n^k + k(1 − β̄n)β̄n^(k−1)]
      = G(β̄n) + (1 − β̄n)G′(β̄n).
Then β̄n ↑ β̄ and
β̄ is the smallest solution of G(x) + (1 − x)G′(x) = x,
i.e. β̄ is the location of the smallest zero of the function
G(x) + (1 − x)G′(x) − x.
[Plot: G(x) + (1 − x)G′(x) − x. Poisson(3.3): subcritical]
[Plot: G(x) + (1 − x)G′(x) − x. Poisson(3.4): supercritical]
The probability of containing a binary tree is discontinuous at the critical
point. E.g. for Poisson(λ) offspring distribution, there is λc ≈ 3.35
such that β(λ) = 0 if λ < λc, while β(λc) ≈ 0.535.
(cf. Dekking and Pakes (1991))
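The discontinuity is easy to see numerically (a sketch, not from the slides; the threshold λc ≈ 3.35 is the one quoted above). Since the map x ↦ G(x) + (1 − x)G′(x) is increasing on [0, 1], iterating it from 0 converges up to the smallest fixed point β̄.

```python
import math

def binary_tree_prob(lam, iters=20000):
    """P(GW tree with Poisson(lam) offspring contains an infinite binary tree at the root).
    Iterates beta_bar_{n+1} = G(beta_bar_n) + (1 - beta_bar_n) * G'(beta_bar_n) from 0."""
    G = lambda x: math.exp(lam * (x - 1.0))
    Gp = lambda x: lam * math.exp(lam * (x - 1.0))  # G'(x)
    x = 0.0
    for _ in range(iters):
        x = G(x) + (1.0 - x) * Gp(x)
    return 1.0 - x  # beta = 1 - beta_bar

print(binary_tree_prob(3.3))  # below lambda_c: beta = 0
print(binary_tree_prob(3.4))  # above lambda_c: beta has jumped above 0.5
```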
“Normal game” on a graph
Given a graph G and a starting vertex v , consider the following
two-player game.
A token starts at v , and the players take turns to move the token
from its current vertex to a neighbouring vertex, but never
returning to a previously visited vertex.
If a player is unable to move, he loses the game.
On a finite graph G , with best play the normal game is either a
win for the first player or a win for the second player. If G is
infinite, the game may also be a draw.
Theorem (Wästlund)
The normal game on a finite graph G , started from v , is a
first-player win iff v is in every matching on G of maximum size.
Simple proof by induction on the size of the graph:
v is in every max matching of G
iff
not all neighbours of v are in every max matching of G \ v .
So if v is in every max matching, we can move from v to a vertex
w that is not in every max matching of G \ v . But if v is not in
some max matching, then either we are already stuck, or every
move takes us to a vertex that is in every max matching of G \ v .
If G is a tree, the following is also true: the first player has a win
from v iff there is an independent set of maximum size that does
not contain v .
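On a finite rooted tree, play started at the root can never return towards the root, so each move descends to a child, and the win/loss recursion is immediate to implement. A minimal sketch (my own illustration, with trees encoded as nested lists of children):

```python
def normal_value(tree):
    """Normal play on a rooted tree: token at the root, each move descends to a child.
    Returns 'W' if the player to move wins, 'L' if they lose.
    A leaf is 'L': the player to move there cannot move."""
    return 'W' if any(normal_value(c) == 'L' for c in tree) else 'L'

def path(n_edges):
    """A path of n_edges edges hanging from the root."""
    t = []
    for _ in range(n_edges):
        t = [t]
    return t

# On a path the game lasts exactly n_edges moves,
# so the first player wins iff n_edges is odd.
print([normal_value(path(n)) for n in range(5)])  # ['L', 'W', 'L', 'W', 'L']

# A star: root with three leaf children; each leaf is 'L', so the root is 'W'.
print(normal_value([[], [], []]))  # 'W'
```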
“Misère game” and “Escaper/stopper game”
In the normal game, a player who is unable to move loses.
In the misère game, a player who is unable to move wins.
In the escaper/stopper game, played on an infinite graph, one
player is the escaper and the other is the stopper. If either player
becomes unable to move, the stopper wins. Otherwise (i.e. if the
game goes on for ever) the escaper wins.
We look at playing these games on Galton-Watson trees.
Given an offspring distribution, write
w = P(first player wins normal game)
ℓ = P(first player loses normal game)
d = 1 − w − ℓ = P(draw in normal game).
Write w̃, ℓ̃, d̃ for the corresponding values in the misère case.
Finally for the escaper/stopper game,
s (1) = P(stopper wins, when playing first)
s (2) = P(stopper wins, when playing second).
Particular questions: when is d > 0 or d̃ > 0? When are s (1) and
s (2) < 1?
On any tree, we can express the outcome of the game played from
the root recursively in terms of the outcomes of the games played
on the subtrees rooted at each child.
Normal game:
O is a win if at least one child is a loss
O is a loss if all children are wins
O is a draw if no child is a loss and at least one child is a draw
This gives the following recursion for the Galton-Watson case:
w = 1 − Σk pk (1 − ℓ)^k
  = 1 − G(1 − ℓ)
  = 1 − G(1 − Σk pk w^k)
  = 1 − G(1 − G(w))
  = F(F(w)),
where F(x) = 1 − G(x).
F is strictly decreasing so F (F (.)) is strictly increasing. By a
similar argument to the survival/extinction case,
w is the smallest fixed point of the function F(F(.)),
1 − ℓ is the largest fixed point of the function F(F(.)).
Note that d = 0 iff w = 1 − ℓ, which occurs iff F(F(x)) − x = 0
has a unique solution.
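Since F(F(.)) is increasing, both extreme fixed points can be found by monotone iteration: from 0 we converge up to w, from 1 down to 1 − ℓ. A numerical sketch for Poisson offspring (the threshold λd = e used in the comments is the one quoted later in the talk):

```python
import math

def normal_game_probs(lam, iters=5000):
    """w, 1 - l, and d for the normal game on a Poisson(lam) Galton-Watson tree.
    F(x) = 1 - G(x); w and 1 - l are the extreme fixed points of F(F(.))."""
    F = lambda x: 1.0 - math.exp(lam * (x - 1.0))
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        lo = F(F(lo))  # increases to the smallest fixed point, w
        hi = F(F(hi))  # decreases to the largest fixed point, 1 - l
    return lo, hi, hi - lo  # d = (1 - l) - w

w1, u1, d1 = normal_game_probs(2.5)   # lambda < e: no draws
print(round(d1, 6))
w2, u2, d2 = normal_game_probs(2.95)  # lambda > e: draws occur
print(round(d2, 3))
```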
[Plot: F(F(x)) − x. Poisson(2.5) offspring: d = 0.]
[Plot: F(F(x)) − x. Poisson(2.95) offspring: d > 0.]
Recursions for the misère game work similarly; the only difference is at
nodes with no children:
O is a win if at least one child is a loss, or if O has no children
O is a loss if all children are wins, unless O has no children
O is a draw if no child is a loss and at least one child is a draw
w̃ = p0 + 1 − G(1 − ℓ̃)
   = p0 + 1 − G(p0 + 1 − G(w̃))
   = H(H(w̃)),
where H(x) = p0 + 1 − G(x). Then
w̃ is the smallest fixed point of the function H(H(.)),
1 − ℓ̃ is the largest fixed point of the function H(H(.)).
Again d̃ = 0 iff H(H(x)) − x = 0 has only one solution in [0, 1].
Recursions for the escaper/stopper game are a mixture of the
previous two cases (vertices without children are a win for one
player but not the other). We obtain for example:
s (1) is the smallest fixed point in [0, 1] of the function H(F (.)).
The pictures in this case look a bit different. The equation
H(F(x)) − x = 0 always has the solution x = 1. If this is the only
solution, then no escape is possible; if lower solutions exist, then the
escaper wins with positive probability (s (1) < 1).
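Again H(F(.)) is increasing, so iterating it upwards from 0 finds the smallest fixed point s (1). A sketch for Poisson offspring (the two values of λ are the ones in the plots below):

```python
import math

def stopper_win_prob(lam, iters=2000):
    """s(1): smallest fixed point of H(F(.)) in [0, 1] for Poisson(lam) offspring,
    where F(x) = 1 - G(x) and H(x) = p0 + 1 - G(x)."""
    G = lambda x: math.exp(lam * (x - 1.0))
    p0 = G(0.0)
    x = 0.0
    for _ in range(iters):
        x = p0 + 1.0 - G(1.0 - G(x))  # H(F(x))
    return x

print(stopper_win_prob(3.2))  # only fixed point is 1: no escape
print(stopper_win_prob(3.4))  # a lower fixed point exists: escaper can win
```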
[Plot: H(F(x)) − x. Poisson(3.2): no escape!]
[Plot: H(F(x)) − x. Poisson(3.4): escaper can win!]
Leaf removal
These recursions are closely related to leaf removal algorithms.
For example, in the normal game, any leaf of the tree is a loss so
its parent is a win. We remove them both from the tree (maybe
this creates new leaves). We keep on removing leaves in this way.
If this procedure eventually leads to the root becoming a leaf or
the parent of a leaf, then the root is a win or a loss. If not, the
root is a draw.
This is essentially the same as the leaf-removal used by the
Karp-Sipser algorithm for finding maximum independent sets in a
graph.
The misère and stopper/escaper games have analogous (but
different) leaf-removal procedures.
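For reference, the leaf-removal phase of Karp-Sipser for matchings takes only a few lines. This sketch (my own illustration) greedily matches a degree-1 vertex to its neighbour and removes both, falling back to an arbitrary edge when no leaf exists; on a tree the leaf step is always available while edges remain, and matching a leaf to its neighbour is always safe.

```python
def karp_sipser_matching(edges):
    """Greedy matching via leaf removal (Karp-Sipser style).
    edges: list of (u, v) pairs. Returns a list of matched pairs."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    def remove(x):
        for y in adj.pop(x, set()):
            adj[y].discard(x)

    matching = []
    while any(adj.values()):
        # prefer a leaf (degree-1 vertex); otherwise pick an arbitrary edge
        leaf = next((x for x, nbrs in adj.items() if len(nbrs) == 1), None)
        if leaf is not None:
            u, v = leaf, next(iter(adj[leaf]))
        else:
            u = next(x for x, nbrs in adj.items() if nbrs)
            v = next(iter(adj[u]))
        matching.append((u, v))
        remove(u)
        remove(v)
    return matching

print(len(karp_sipser_matching([(0, 1), (1, 2), (2, 3)])))  # path on 4 vertices: 2
print(len(karp_sipser_matching([(0, 1), (0, 2), (0, 3)])))  # star: 1
```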
Inequalities between different probabilities:
(i) w ≤ s (1)
(ii) w̃ ≤ s (1)
(iii) ℓ ≤ s (2)
(iv) ℓ̃ ≤ s (2)
(v) s (2) ≤ s (1)
(vi) ℓ̃ ≤ w̃
(vii) ℓ ≤ ℓ̃
(viii) ℓ̃ ≤ w
(ix) d ≤ d̃
(i)–(iv) are easy from the definitions (and hold pathwise);
(v)–(vi) follow from “strategy stealing” and the branching property;
(vii)–(ix) are less obvious.
Examples and phase transitions:
Binary branching
Let p ∈ (0, 1) and take p0 = 1 − p, p2 = p.
• Survival: continuous phase transition at pc = 1/2.
• Misère game: continuous phase transition at pm = 3/4.
  d̃(p) = 0 for p ≤ 3/4 and d̃(p) > 0 for p > 3/4.
• Normal game: continuous phase transition at pd = √3/2 = 0.866....
  d(p) = 0 for p ≤ pd and d(p) > 0 for p > pd.
• Escape game: discontinuous phase transition at pe = 3/(2^(5/3)) = 0.945....
  s (1)(p) = 1 for p < pe, but s (1)(pe) = 1 − 2^(4/3)/3 = 0.1601....
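These binary-branching draw thresholds are easy to probe numerically (a sketch, not from the slides): here G(x) = 1 − p + px², so F(x) = p(1 − x²) and H(x) = 1 − px², and the draw probability is the distance between the extreme fixed points of the doubled map.

```python
def draw_gap(T, iters=20000):
    """Distance between the extreme fixed points of T(T(.)) on [0, 1],
    i.e. the draw probability when T is the relevant one-step map."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        lo, hi = T(T(lo)), T(T(hi))
    return hi - lo

F = lambda p: (lambda x: p * (1.0 - x * x))  # normal game map, p0 = 1-p, p2 = p
H = lambda p: (lambda x: 1.0 - p * x * x)    # misere game map

print(draw_gap(F(0.85)))  # p < sqrt(3)/2 ~ 0.866: d = 0
print(draw_gap(F(0.90)))  # p > sqrt(3)/2: d > 0
print(draw_gap(H(0.70)))  # p < 3/4: d-tilde = 0
print(draw_gap(H(0.80)))  # p > 3/4: d-tilde > 0
```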
Poisson
• Survival: λc = 1 (continuous)
• Misère game: λm = 2.1026... (continuous)
• Normal game: λd = e = 2.718... (continuous)
• Escape game: λe = 3.3185... (discontinuous)
The appearance of draws in the normal game at λ = e is related to the
phase transition in the behaviour of the Karp-Sipser
“leaf-removal” algorithm for finding matchings and independent
sets, applied to an Erdős-Rényi random graph.
Geometric
Let α ∈ (0, 1) and pk = (1 − α)αk , k ≥ 0.
For any α, all of the functions
F (F (x)) − x
H(H(x)) − x
H(F (x)) − x
are strictly decreasing on [0, 1], and so have only a single root.
Hence no draws or escapes for any of the games for any geometric
distribution.
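This monotonicity can be checked numerically on a grid (and, for F(F(.)), even in closed form: a short computation gives F(F(x)) = α/(1 + α − αx), whose derivative α²/(1 + α − αx)² is below 1 on [0, 1]). A sketch, with G(x) = (1 − α)/(1 − αx):

```python
def geometric_checks(alpha, n=1000):
    """Check that F(F(x)) - x, H(H(x)) - x, H(F(x)) - x are strictly decreasing
    on a grid, for Geometric(alpha) offspring: G(x) = (1 - alpha)/(1 - alpha*x)."""
    G = lambda x: (1.0 - alpha) / (1.0 - alpha * x)
    F = lambda x: 1.0 - G(x)
    H = lambda x: G(0.0) + 1.0 - G(x)  # p0 = G(0)
    xs = [i / n for i in range(n + 1)]
    results = []
    for T in (lambda x: F(F(x)), lambda x: H(H(x)), lambda x: H(F(x))):
        vals = [T(x) - x for x in xs]
        results.append(all(a > b for a, b in zip(vals, vals[1:])))
    return results

for alpha in (0.2, 0.5, 0.8):
    print(alpha, geometric_checks(alpha))  # expect [True, True, True] each time
```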
Example: discontinuous phase transition in the normal game
Let a ∈ (0, 1), and put p0 = 1 − a, p2 = a/2, p12 = a/2.
For the normal game, this family has a discontinuous phase
transition at the point ad ≈ 0.9858:
d(a) = 0 for a < ad
d(ad ) ≈ 0.5561.
[Plot: F(F(x)) − x. a = 0.98; unique fixed point of F(F(.)).]
[Plot: F(F(x)) − x. a = 0.9858; three fixed points of F(F(.)).]
[Plot: F(F(x)) − x. a = 0.988; five fixed points of F(F(.)).]
Phase transitions in the escape game
Theorem
If p1 µ > 1 then the escaper has positive probability of winning.
Idea: this gives positive probability of existence of a path from the
origin in which all vertices in odd positions of the path have degree
1. Then the escaper can guide the game down this path on which
the stopper never has any choice.
The phase transition for the existence of such a structure is
continuous (it’s essentially the survival of a branching process). So
although the escape game often has discontinuous phase
transitions, it can also have a continuous one at a point where
p1 µ = 1.
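The threshold p1µ = 1 also matches the fixed-point picture (a side calculation, not from the slides): since F(1) = 0 and H(0) = p0 + 1 − G(0) = 1, the point 1 is always fixed for H(F(.)), and differentiating there gives

```latex
\frac{d}{dx}\, H(F(x)) \Big|_{x=1} \;=\; G'(F(1))\, G'(1) \;=\; G'(0)\, G'(1) \;=\; p_1 \mu .
```

So when p1µ > 1 the map H(F(.)) lies below the diagonal just to the left of 1, and since H(F(0)) ≥ 0, continuity forces a fixed point strictly below 1, i.e. s (1) < 1.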
e.g. take a ∈ (0, 1) and p0 = (1 − a)/10, p1 = a/3, p4 = 2a/3,
p30 = 9(1 − a)/10.
Recall that s (1), the probability of a win by the stopper, is the
smallest fixed point of H(F(.)). The point 1 is always a fixed
point. Graph of H(F(x)) − x:
[Plot: H(F(x)) − x. a = 0.1.]
[Plot: H(F(x)) − x. a = 0.15.]
[Plot: H(F(x)) − x. a = 0.2.]
[Plot: H(F(x)) − x. a = 0.4.]
[Plot: H(F(x)) − x. a = 0.8.]
[Plot: H(F(x)) − x. a = 0.9.]
[Plot: H(F(x)) − x. a = 0.927.]
[Plot: H(F(x)) − x. a = 0.94.]
Endogeny and sensitivity to boundary conditions
The equation F (x) = x always has exactly one solution.
To this solution corresponds a “stationary” recursive tree process
(RTP) under which each node of the tree has value “W” or “L”
and the values obey the recurrence:
v has value W iff at least one child of v has value L.
Under this RTP the probability that the root has value W is x.
One way to construct the law of this process: truncate the tree at
level n, and put boundary conditions under which each node at
level n has value W w.p. x and L w.p. 1 − x, independently. This
gives a family of distributions which are consistent; to get the law
on the whole tree, take n to ∞.
This RTP is endogenous (that is, the value at the root can be
written down as a function of the structure of the tree) iff the
fixed point of F is stable.
Looked at another way: we can consider the martingale Mn which
is the conditional probability that the root has value W, given the
top n levels of the tree. This converges a.s. as n → ∞. The
process is endogenous iff the limit is always in {0, 1}.
Note that the fixed point of F may be stable even if the function
F (F (.)) has multiple fixed points and so the game has draws. In
that case, the value at the root feels the effect of “extreme”
boundary conditions: if the game is a draw, then the difference
between “all W” and “all L” boundary conditions at level n is
significant at the root for all n. However, if we take two
independent realisations of i.i.d. Bernoulli(x) boundary conditions,
then with high probability as n → ∞, the value at the root is the
same for each of them.
Large random graphs
Suppose G is a large (but finite) random graph. If the graph
ensemble has a G-W process as its local weak limit (e.g. Poisson
G-W for Erdős-Rényi random graphs), then the game played on
the Galton-Watson tree may be expected to approximate the game
played on the random graph.
In many cases we expect to see replica symmetry breaking. For
example, in the E-R case, if the average degree is above e, then a
“core” of linear size is left after the leaf-removal. If this core has
even size, then w.h.p. it will have a perfect matching. If it has odd
size, then there are “almost perfect matchings” with any given
vertex omitted. So vertices that are “locally draws” become wins
or losses in some non-local and highly correlated way.
The related question of the expected size of the maximum matching
was elegantly resolved recently by Bordenave, Lelarge and Salez.