Best-Response in Pure Public Good Games on Networks
Sebastian Bervoets* and Mathieu Faure†
November 21, 2016
Abstract
We study the convergence of the best-response dynamics in the local public
good game played on a network, introduced in Bramoullé and Kranton (2007).
This game has linear best responses and strategic substitutes. The structure of
Nash equilibria is very complex, due to high multiplicity (continua of equilibria often appear). Still, we prove that the best-response dynamics always converges to
a Nash equilibrium. Next, we turn to stability. The stability of the best-response
dynamics in these games has been studied in the particular case where equilibria
are isolated (Bramoullé and Kranton (2007) and Bramoullé et al. (2014)). Here,
we consider the general problem and treat the case where continua of equilibria
appear. We provide necessary and sufficient conditions for a component of Nash
equilibria to be stable vis-à-vis this dynamics. Interestingly, even though these
conditions relate to the structure of the network, they are simple to verify. We
believe these two findings strongly support the concept of Nash equilibrium in
games played on networks.
Keywords: Best-Response Dynamics; Public Goods; Convergence; Stability.
JEL Codes: C62, C73, D83, H41
1  Introduction
The recent literature on games played on social networks has highlighted the relationships between Nash equilibria and the structure of the network (see for instance Jackson et al. (2016)).

* Corresponding author. Aix-Marseille University (Aix-Marseille School of Economics), CNRS and EHESS. Email address: [email protected]. Postal address: GREQAM-AMSE - Centre de la Vieille Charité - 2, rue de la Charité - 13002 Marseille - FRANCE.
† Aix-Marseille University (Aix-Marseille School of Economics), CNRS and EHESS. Email address: [email protected].

These findings can, of course, clarify how networks affect individuals’ outcomes, provided we assume that individuals are actually playing some
Nash equilibrium. However, the complexity of these relationships between equilibria
and networks naturally gives rise to concern about the appropriateness of the concept
of Nash equilibrium in network games.¹
The literature on learning in games addresses this concern, by analyzing whether
individuals that interact repeatedly and follow simple rules can converge to some Nash
equilibrium (see for instance Fudenberg and Levine (1998)). The general message coming out of this literature is that convergence is difficult to guarantee. To put it in
Fudenberg and Levine’s words, “game theory lacks a general and convincing argument that a Nash outcome will occur”.
Our aim is to apply a natural dynamics, the continuous-time best-response dynamics
(BRD), to the game of public goods in networks first studied in Bramoullé and Kranton
(2007). This game is simple: it has linear best responses and strategic substitutes, and
individuals’ payoffs depend on the sum of neighbors’ actions. Yet, it possesses a rich
structure of Nash equilibria. In particular, many networks have continua of equilibria.
As we show, the set of Nash equilibria is a finite union of connected components, which
can be complex high-dimensional objects. This game is thus plagued by multiplicity,
and the structure of the set of equilibria critically depends on the structure of the
network. All these properties make it a very interesting case for the study of convergence
to Nash.
We focus on the best-response dynamics because it has long been analyzed in economics and constitutes a very natural first step into the overall issue of learning in
network games. Moreover, it also sheds light on many other more sophisticated learning procedures.² Also, the problem is challenging since there are no general results
on best-response dynamics, which can converge or not, depending on the game that
is played. In discrete-time versions and/or in games with discrete action space, the
dynamics is known to possibly diverge; even in very structured games such as potential games, additional conditions are needed to guarantee convergence (see Kukushkin
¹ For instance, in the game of strategic complementarities, Ballester et al. (2006) show that the Nash equilibrium is unique and related to the Bonacich centrality of agents. This centrality is a very difficult object to compute, as it considers an infinite sum of paths, discounted by their lengths.
² For instance, Hofbauer and Sorin (2006) use it to prove results on fictitious play, and Bravo and Faure (2015) study reinforcement learning based on the BRD. Other examples can be found in the learning processes of Bervoets et al. (2016) or Leslie and Collins (2006). In fact, any learning process where players update their actions using payoff-based rules is likely to be approximated by a best-response dynamics.
(2015)). Little is known about continuous-time dynamics in games with continuous
action space.
We ask two general questions: first, given this structural complexity of the set of
Nash equilibria (high level of multiplicity and dependence on the network’s structure),
will individuals converge to some Nash equilibrium? And second, if they do, can we
distinguish between stable and unstable equilibria in such a complex environment?
We find positive answers to both. First, we show that convergence to a Nash
equilibrium is guaranteed for every initial condition. We exploit the fact that the game
is an ordinal potential game. In dynamical systems terminology, this directly implies
the existence of a Lyapunov function with respect to the set of Nash equilibria and
thereby guarantees convergence to a component of Nash equilibria. This is not enough,
however. Due to the existence of continua, the system could get closer and closer to a
component and still move around that component. We rely on the work of Bhat and
Bernstein (2003), which is one of the only papers analyzing convergence in systems
that have continua of equilibria,³ and show that the BRD does converge to a Nash
equilibrium for every initial condition. This result strongly supports the appropriateness of
the Nash equilibrium concept even in games that are plagued by multiplicity, and where
the individuals’ equilibrium action depends on the very complex pattern of pairwise
interactions. Simple learning dynamics can converge to Nash, even in non-trivial cases.
We next turn to the issue of stability. It should be noted that this issue has been
analyzed in Bramoullé and Kranton (2007) for the discrete-time version of the BRD
and in Bramoullé et al. (2014) for the continuous-time version, by using the concept of
asymptotic stability. However, this concept is not appropriate in this game, because of
the presence of continua of equilibria. As stated in Bhat and Bernstein (2003), “since
every neighborhood of a non-isolated equilibrium contains another equilibrium, a non-isolated equilibrium cannot be asymptotically stable. Thus asymptotic stability is not
the appropriate notion of stability for systems having a continuum of equilibria”.
We use the concept of stability that is standard in the dynamical systems literature,
i.e. the concept of attractor. An attractor is a set that captures any trajectory that
enters its neighborhood. This implies that if a point in the attractor is perturbed so
as to be ejected from the attractor, the dynamics will drive us back to it. We wish to
emphasize here that although the dynamics might then lead to a point that is different
from the initial point that was perturbed, these two points still belong to the same set.
³ Apart from this paper, the only other study we found is by Bhat and Bernstein (2010). However, the conditions they require do not apply to our dynamics.
As mentioned earlier, the set of Nash equilibria is a union of connected components.
This may include the case of isolated equilibria, which can be seen as degenerate components (i.e. consisting of a singleton). For these isolated equilibria, the concepts of
asymptotic stability and attractors coincide. However, in non-degenerate components,
no point can be asymptotically stable, and yet the entire component might be an attractor. Our paper thus advances the analysis in Bramoullé and Kranton (2007) by
providing a full characterization of stable equilibrium sets.
Stable sets are necessarily components of Nash equilibria, because of our first result on
convergence. Furthermore, we exploit the existence of an ordinal potential function to
show that attractors (i.e. stable sets) correspond to strict local maxima of the potential.
We then obtain necessary and sufficient conditions for a set to be an attractor. Our
theorem can be decomposed into three steps.
The first step establishes that every attractor must contain some specialized Nash
equilibria, i.e. equilibria in which individuals are either active and exert the autarkic
effort, or inactive (i.e. they exert no effort and free-ride). These specialized equilibria
are in some sense the “extreme” points of the sets.⁴ Thus Nash equilibria that are
not connected to a specialized equilibrium belong to a component that is unstable. In
particular, isolated interior equilibria are necessarily unstable.
The second step states that a component of Nash equilibria is an attractor if and
only if the specialized equilibria that it contains are themselves local maxima of the
potential. Thus, however complex the components of Nash may be, one simply needs
to focus on their “extreme points”.
The third step establishes the necessary and sufficient conditions for any specialized
equilibrium to be a local maximum of the potential. These conditions are related to the
intimate structure of the network: consider a specialized equilibrium and the subgraphs
induced by the set of inactive players that are linked to only one active individual. Then
the specialized equilibrium is a local maximum of the potential if and only if all these
subgraphs are complete networks.
Our theorem is a very sharp result because it relates the set of attractors of the
dynamics to the topology of the network, and it does so in a way that is very simple to
apply. Indeed, it also provides an algorithmic method for finding attractors.
⁴ They are easy to find, as they correspond to maximal independent sets of the network (see Theorem 1 in Bramoullé and Kranton (2007)).
One corollary is that at least one stable component exists for every network: any component containing a maximum independent set⁵ of the network is stable. This is an
important difference with Bramoullé and Kranton (2007), where stable equilibria often
fail to exist, because they focus only on isolated equilibria. In their paper, they obtain
that an equilibrium is stable if and only if it is specialized and the set of active players
forms a maximal independent set of order at least two. We obtain this as a second
corollary, because with isolated specialized equilibria, the subgraphs induced by the set
of inactive players that are linked to only one active individual are complete networks
if and only if the set of active players forms a maximal independent set of order at least
two.
Additionally, our results significantly reduce the multiplicity problem by ruling out
entire components of Nash equilibria (those that are not attractors). This leads us to a
better understanding of where individuals might converge to in such games. Moreover
even though Nash equilibria are complex objects that depend on the details of the
network, our findings relate the structure of the network to stable sets in a very tractable
way.
In the next section, we detail the learning process and the game, and spend some
time illustrating the high multiplicity problem. Then in Section 3 we turn to the results
on convergence. Finally, we analyze stability in Section 4 and conclude in Section 5.
All proofs are relegated to Section 6.
2  The local public good game and structure of Nash equilibria
2.1  The local public good game
Consider a game G = (N , X, u), where N = {1, . . . , N } is the set of players, Xi ⊂ R is
a continuous action space and X = ×i=1,...,N Xi . An action xi ∈ Xi can be thought of,
for instance, as an effort level chosen by individuals, a price set by a firm, a monetary
contribution to a public good. Because in all these examples actions take positive
values, we will assume that Xi = [0, +∞[. As usual, X−i denotes the set of action
profiles of every agent except agent i: X−i = ×_{j≠i} Xj. Finally, u = (ui)i=1,...,N is the
vector of payoff functions. We assume that the payoffs have a network structure in the
following sense: there exists an undirected graph G whose vertices are the agents and
whose edges represent interactions. Let Ni(G) be the set of neighbors of agent i. We consider games with payoff functions u that are differentiable, strictly concave, and have unique best responses of the following form:

    ∀i ∈ N,   Bri(x−i) = max{ 1 − Σ_{j∈Ni(G)} xj , 0 }.        (1)

⁵ The maximum independent sets of a network are the largest maximal independent sets.
These games exhibit strategic substitutes, as ∂Bri(x−i)/∂x−i ≤ 0, and best responses depend only on the sum of neighbors’ actions. The set of games that have best responses of the form (1) is denoted GL. A prominent member of this class is the game of local public goods introduced in Bramoullé and Kranton (2007). It is the game that we will use in this paper. The payoff function is
    ui(x) = b( xi + Σ_{j∈Ni(G)} xj ) − c·xi        (2)
where c > 0 is the marginal cost of effort and b(·) is a differentiable, strictly increasing, concave function such that b′(1) = c.
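To fix ideas, the class GL can be sketched in a few lines of code. This is our own illustrative sketch, not the authors' code: in particular, the benefit function b(y) = 2c·ln(1 + y) below is one arbitrary choice of a differentiable, strictly increasing, concave function with b′(1) = c; the paper only requires those properties, not this specific form.

```python
import math

c = 1.0  # marginal cost of effort (any c > 0 works)

def b(y):
    # Illustrative benefit function (our choice): b(y) = 2c*ln(1+y),
    # so b'(y) = 2c/(1+y) and b'(1) = c, as the model requires.
    return 2 * c * math.log(1 + y)

def payoff(i, x, neighbors):
    """u_i(x) = b(x_i + sum of neighbours' efforts) - c*x_i, as in (2)."""
    return b(x[i] + sum(x[j] for j in neighbors[i])) - c * x[i]

def best_response(i, x, neighbors):
    """Br_i(x_-i) = max(1 - sum of neighbours' efforts, 0), as in (1)."""
    return max(1.0 - sum(x[j] for j in neighbors[i]), 0.0)

# The pair network: two agents linked to each other.
neighbors = {0: [1], 1: [0]}
x = [0.25, 0.75]  # a point on the continuum (alpha, 1 - alpha)
br = [best_response(i, x, neighbors) for i in range(2)]
```

At a profile (α, 1 − α) on the pair, each agent's best response reproduces his own action, which is exactly the equilibrium condition discussed below.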
Further examples of games in GL are discussed in Bramoullé et al. (2014). The
authors show that a specific game in the class GL is a potential game, where the potential
function P was identified in Monderer and Shapley (1996):
    P(x) = ⟨x, 1⟩ − (1/2)‖x‖² − (1/2)⟨x, Gx⟩        (3)
In fact, one can check that all games in GL , and in particular the local public good
game given by (2), are ordinal potential games, i.e.:
    ∂ui/∂xi(x) > 0 ⇔ ∂P/∂xi(x) > 0   and   ∂ui/∂xi(x) < 0 ⇔ ∂P/∂xi(x) < 0.        (4)

Note that this potential is bounded above.⁶
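As a quick sanity check (our own sketch, using the pair network and the illustrative benefit b(y) = 2c·ln(1 + y), which is an assumption, not the paper's), one can verify numerically that the signs of ∂ui/∂xi and ∂P/∂xi agree as in (4), and that P is constant, equal to 1/2, on the component {(α, 1 − α)} of the pair.

```python
import numpy as np

c = 1.0
G = np.array([[0., 1.], [1., 0.]])  # adjacency matrix of the pair network

def P(x):
    """Potential (3): <x,1> - 0.5*||x||^2 - 0.5*<x,Gx>."""
    return x.sum() - 0.5 * (x @ x) - 0.5 * (x @ G @ x)

def du_i(x, i):
    # d u_i / d x_i with the illustrative b(y) = 2c*ln(1+y): b'(y) = 2c/(1+y).
    y = x[i] + G[i] @ x
    return 2 * c / (1 + y) - c

def dP_i(x, i):
    # d P / d x_i = 1 - x_i - (Gx)_i.
    return 1 - x[i] - G[i] @ x

rng = np.random.default_rng(0)
signs_agree = all(
    np.sign(du_i(x, i)) == np.sign(dP_i(x, i))
    for x in rng.uniform(0, 2, size=(100, 2))
    for i in range(2)
)
P_on_component = [P(np.array([a, 1 - a])) for a in np.linspace(0, 1, 11)]
```

The constancy of P along the component is what Definition 3 below will require of a candidate attractor.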
2.2  Nash equilibria
For games in GL , the set of Nash equilibria (NE) is easy to describe:
    x* ∈ NE ⇔ ∀i, x*i = max{ 1 − Σ_{j∈Ni(G)} x*j , 0 }.
⁶ Regardless of the structure of the network, it cannot be greater than N/2, which occurs for empty graphs.
Thus, either the neighbors of i provide less than 1 and i fills the gap, or else they
provide more than 1 and i enjoys the benefits without exerting any effort. Although
it is easy to describe, the set of NE can take various forms. Here we describe through
examples the different kinds of equilibria that exist in this class of games.
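The characterization above translates directly into a mechanical check (our own sketch) that a given profile is an NE: every action must equal the clipped best response. We test it on the pair and on the 4-cycle (the “square” of Figure 3).

```python
import numpy as np

def is_nash(G, x, tol=1e-9):
    """x is an NE iff x_i = max(1 - sum of neighbours' actions, 0) for all i."""
    x = np.asarray(x, dtype=float)
    return bool(np.all(np.abs(x - np.maximum(1.0 - G @ x, 0.0)) < tol))

pair = np.array([[0., 1.],
                 [1., 0.]])
square = np.array([[0., 1., 0., 1.],   # the 4-cycle: 1-2-3-4-1
                   [1., 0., 1., 0.],
                   [0., 1., 0., 1.],
                   [1., 0., 1., 0.]])
```

For instance, any (α, 1 − α) passes the check on the pair, while (1, 0, 1, 0) and (1/3, 1/3, 1/3, 1/3) pass it on the square.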
We start with the simplest possible network, the pair (Figure 1). Here the profile
(1, 0) where x1 = 1 and x2 = 0 is an NE. The profile (0, 1) is also an NE, together with
(1/3, 2/3). In fact, any profile of the form (α, 1 − α), with 0 ≤ α ≤ 1, is an NE.
Figure 1: The pair. Left panel: profile (1, 0); middle panel: profile (0, 1); right panel: profile
(α, 1 − α).
We say that there is a continuum of equilibria, represented by the connected component Λ = {(α, 1 − α) : α ∈ [0, 1]}.
Definition 1 Λ is a connected component of NE if and only if:
1 - Λ is connected,
2 - Λ ⊂ N E and
3 - there exists an open neighborhood U of Λ such that U ∩ N E = Λ.
Now, consider the kite network of Figure 2.
Here, the profiles (1, 0, 1, 0) and (1, 0, 0, 1) are NE, as well as any profile of the form
(1, 0, β, 1 − β). Thus the connected component Λ1 = {(1, 0, β, 1 − β) : β ∈ [0, 1]} is
also a continuum of equilibria. However, there is another equilibrium, given by the
profile (0, 1, 0, 0). This equilibrium does not belong to a continuum, so its component
is degenerate: Λ2 = {(0, 1, 0, 0)}. In this case, we say that the equilibrium is isolated.
Next, consider the square in Figure 3.
Here, there is no continuum of equilibria. However, there are three isolated equilibria: (1, 0, 1, 0), (0, 1, 0, 1) and (1/3, 1/3, 1/3, 1/3). These equilibria are qualitatively different: in
the first two, only a subset of agents exert an effort, and they are linked to free-riders.
In the third, every agent exerts a positive effort. We say that the first two isolated
equilibria are specialized, while the third is isolated and interior.
Figure 2: The kite. Upper panels: a continuum of equilibria; lower panel: an isolated equilibrium.
Last, consider the 5-agent network in Figure 4. There are three connected components: Λ1 = {(0, 1, 1, 0, 0)}, Λ2 = {(1, 0, 0, α, 1 − α) : α ∈ [0, 1]} and Λ3 = {(1/3, 1/3, 1/3, 1/3 − ε, ε) : ε ∈ [0, 1/3]}. The first is degenerate, an isolated specialized equilibrium; the second is a continuum containing two specialized equilibria (when α ∈ {0, 1}); the third is also a continuum, but it does not contain any specialized equilibrium: every point is either interior (if ε ∉ {0, 1/3}) or neither interior nor specialized (when ε ∈ {0, 1/3}).
The examples presented above describe all the types of equilibria in these games.
Proposition 1 The set of Nash equilibria can be described as

    NE = Λ1 ∪ · · · ∪ ΛL

where every Λi is a connected component.
It should be emphasized here that the existence of continua of equilibria seems to be
the rule rather than the exception.⁷ In the examples we have provided, the continuum
is one-dimensional because it involves only two agents. However, in general, continua
are much more complex objects. For instance, the following arbitrary network with 9
players exhibits a four-dimensional continuum (Figure 5).
This complexity in the structure of Nash equilibria makes the game a rich source of
insights into learning dynamics: they are very simple to write down, but it is difficult to
predict how individuals will behave. In order to proceed, we next present the learning
⁷ This is based on our empirical observations. We have spent some time trying to prove theoretical results supporting this statement, without success.
Figure 3: The square. Upper panels: two isolated specialized equilibria; lower panel: an
isolated interior equilibrium.
process and define notions that are suitable for environments with components that are
not degenerate.
3  Best-Response Dynamics - Convergence
The continuous-time best-response dynamics is a well-known learning process where all
players simultaneously revise their strategies in the direction of their best response to
the current action profile. As seen in (1), best responses for games in GL are unique. Let
Br : X → X, x ↦ Br(x) := (Br1(x−1), . . . , BrN(x−N)). The Best-Response Dynamics (BRD) is given by the following dynamical system:

    ẋ(t) = −x(t) + Br(x(t))        (5)
The map Br(·) being Lipschitz, the ordinary differential equation (5) has a unique solution curve starting from every initial condition in R^N. Because we restrict attention to X = R^N_+ instead of R^N and to positive times (t ≥ 0), we consider the semi-flow

    ϕ : (x, t) ∈ X × R_+ → ϕ(x, t) ∈ X,        (6)

where, for t ≥ 0, ϕ(x, t) is the unique solution of (5) at time t, with initial condition x ∈ R^N_+.
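The dynamics (5) is easy to simulate. The sketch below (our own code, not the authors') applies an explicit-Euler discretization with an arbitrary step size h = 0.05 to the kite network of Figure 2, which we read as a triangle on agents 2, 3, 4 with agent 1 attached to agent 2; that reading of the figure is an assumption, though it is consistent with the equilibria listed in the text.

```python
import numpy as np

# Adjacency matrix of the kite (0-based: agent 1 -> index 0, etc.):
# agent 2 is linked to 1, 3 and 4, and agents 3 and 4 are linked.
G = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

def best_response(x):
    """Br_i(x_-i) = max(1 - sum of neighbours' actions, 0), as in (1)."""
    return np.maximum(1.0 - G @ x, 0.0)

def brd(x0, h=0.05, steps=20000):
    """Explicit-Euler discretisation of xdot = -x + Br(x), equation (5)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += h * (best_response(x) - x)
    return x

x_star = brd([0.9, 0.2, 0.4, 0.7])
residual = float(np.max(np.abs(x_star - best_response(x_star))))
# Theorem 1: the trajectory settles on a Nash equilibrium, so the
# fixed-point residual |x - Br(x)| should vanish in the limit.
```

Which equilibrium is reached depends on the initial condition; the point of the simulation is only that the residual goes to zero, in line with Theorem 1.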
Figure 4: A 5-agent network. Left panel: Λ2, a continuum of equilibria containing two
specialized equilibria; middle panel: Λ1, an isolated equilibrium; right panel: Λ3,
a continuum without any specialized equilibria.
We will say that the BRD, starting from a point x ∈ X, converges to a set S ⊂ X if
    lim_{t→+∞} d(ϕ(x, t), S) = 0.
Equivalently, given x ∈ X, we call y an omega limit point of x for the BRD if there exists a
non-negative sequence tn ↑ +∞ such that ϕ(x, tn) → y. The set of omega limit points
of x is called the omega limit set of x and denoted ω(x). Clearly the BRD, starting
from a point x ∈ X, converges to a set S if and only if ω(x) ⊂ S.
In the proof section, we prove that for all x ∈ X, ω(x) ⊂ N E. Thus the set of limit
points of the BRD is included in the set of Nash equilibria. The proof is simple: we
show that the potential function (3) is a Lyapunov function for the BRD with respect
to the set of NE. This guarantees that the omega limit-set of the BRD is included in
the set of points that maximize the Lyapunov function, i.e. the set NE.
We actually prove a much stronger result:
Theorem 1 For any game in GL , the BRD converges to a Nash equilibrium for any
initial condition:
∀x0 ∈ X, ∃x∗ ∈ N E such that ω(x0 ) = {x∗ }.
In other words, for any initial condition x0, the solution curve starting from x0 converges to a Nash equilibrium as t increases to infinity. While it was straightforward to
show the convergence of BRD to the set of Nash equilibria, establishing this stronger
result requires much more work. We sketch out the proof of this result.
Let us denote by f the vector field of the BRD: f(x) := −x + Br(x). We know that for any initial condition x0, we have ω(x0) ⊂ Λ, where Λ is a connected component of NE.

Figure 5: An example with a four-dimensional continuum: α ≤ 1/2, β ≤ α, γ ≤ α, δ ≤ α.

Assume x is one point in the continuum of equilibria, and x ∈ ω(x0). Then we
want to show that ω(x0 ) = {x}. One way of doing that is to look at how the vector
field f behaves when the system approaches x.⁸ The way it behaves is captured by
the possible directions the system can take when it approaches x. This is called the
direction cone at point x, i.e. the convex cone generated by f (U ) for arbitrarily small
open neighborhoods U of x.
Now, we need to compare this set of possible directions with the set of directions
that the system should take in order to stay in component Λ, when close to x. These
directions are given by the tangent cone at x. If the tangent cone and the direction cone
have a non-empty intersection, then the system could keep moving along Λ, thus satisfying ω(x0 ) ⊂ Λ, without ever converging. If, on the contrary, the intersection between
the tangent cone and the direction cone is the null vector, then we are guaranteed that
the vector field f is non-tangent to Λ at point x. Thus, the system must converge to x.
We illustrate how the proof works on a very simple case: consider the pair. In
this case, as already stated above, there is one connected component of Nash equilibria
Λ = {(α, 1 − α), α ∈ [0, 1]}. Assume that we start at x0 and that there is an interior x
⁸ By definition of the omega limit set, we know that at some point the system will get arbitrarily close to x.
such that x ∈ ω(x0 ). Then at some point the system will enter a small neighborhood of
x, and in that neighborhood we have ẋ1 = 1 − x1 − x2 and ẋ2 = 1 − x1 − x2. Hence
ẋ1 = ẋ2, and the direction cone is given by all vectors that are proportional to (1, 1).
Now, the tangent cone is defined by all the directions compatible with staying in Λ. So
if x is interior, the admissible deviations are of the form (+α, −α). The tangent cone is
thus given by all vectors that are proportional to (1, −1). Obviously, the tangent cone
and the direction cone are orthogonal, so their intersection is only the null vector. As
a consequence, the vector field is transverse to Λ at x, so x ∈ ω(x0) implies ω(x0) = {x}.
This example is relatively simple to deal with, in particular because we assumed
that x is interior. However, ω(x0 ) will usually contain agents playing 0, in which case
the direction cone is harder to determine. Accordingly, the set of equilibria usually
possesses corners, and finding the tangent cone is not as easy. However, the main
intuition of what is going on is contained in the above example.
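For the pair, the orthogonality of the two cones can be checked directly. The sketch below (our own numerical check, not part of the proof) samples points near the interior equilibrium (0.5, 0.5) and verifies that the vector field is always proportional to (1, 1), hence orthogonal to the tangent direction (1, −1) of Λ.

```python
import numpy as np

def f(x):
    """Vector field of the BRD on the pair: f(x) = -x + Br(x)."""
    br = np.maximum(1.0 - np.array([x[1], x[0]]), 0.0)
    return br - np.asarray(x, dtype=float)

tangent = np.array([1.0, -1.0])  # tangent direction of Lambda at an interior point
rng = np.random.default_rng(1)
# Sample points in a small neighbourhood of the interior equilibrium (0.5, 0.5);
# there both best responses are interior, so f(x) = (1 - x1 - x2)*(1, 1).
points = 0.5 + rng.uniform(-0.1, 0.1, size=(200, 2))
dots = [float(f(p) @ tangent) for p in points]
```

Every sampled dot product is (numerically) zero, which is the transversality used in the proof.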
We believe Theorem 1 conveys good news. In particular, it strongly supports applying the Nash concept to games plagued by multiplicity, and where the individuals’
equilibrium action depends on the very complex pattern of pairwise interactions. Of
course, this result does not tell us which Nash equilibrium players will converge to, as
that depends on the initial conditions. However, equilibria that are stable should be
observed as outcomes of the game more readily than equilibria that are unstable. We
thus turn to the analysis of stability.
4  Best-Response Dynamics - Stability

4.1  Attractors and Ordinal Potential Games
The stability concept we will use can be illustrated by the example of the pair network.
Assume that players have settled at a specific equilibrium x∗ = (α, 1 − α), which is
stationary for the BRD. Assume that their actions are slightly perturbed and become
(α + ε1, 1 − α + ε2), for small values of ε1 and ε2 such that |ε1| ≠ |ε2|. Thus, the players
are no longer in a Nash equilibrium. As a consequence, the BRD makes them move
again, until they converge to another equilibrium (this is guaranteed by Theorem 1),
say y∗ = (β, 1 − β). Because x∗ ≠ y∗, the equilibrium x∗ is not asymptotically stable. In
fact, whenever we perturb equilibrium actions, players end up in a different equilibrium.
So no equilibrium is asymptotically stable.
However, what is important here is that both equilibria x∗ and y ∗ belong to the
same connected component Λ. And in the pair network, no matter what equilibrium in
component Λ the players have converged to, when perturbed they will converge back
to an equilibrium that is also in Λ. So no equilibrium is stable, but component Λ is.
We say that the component is an attractor for the BRD.
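This behaviour can be reproduced numerically (our own sketch, using an explicit-Euler discretization of (5) with an arbitrary step size): perturb an equilibrium of the pair, run the BRD, and the state returns to the component x1 + x2 = 1, generally at a different point of it.

```python
import numpy as np

G = np.array([[0., 1.], [1., 0.]])  # the pair network

def brd(x0, h=0.05, steps=2000):
    """Explicit-Euler discretisation of xdot = -x + Br(x)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += h * (np.maximum(1.0 - G @ x, 0.0) - x)
    return x

x_star = np.array([0.5, 0.5])                 # an equilibrium in Lambda
perturbed = x_star + np.array([0.1, -0.05])   # kicked off the component
y_star = brd(perturbed)
# y_star lies back on Lambda (y1 + y2 = 1) but differs from x_star:
# the perturbation x1 - x2 = 0.15 is preserved by the dynamics, so the
# trajectory settles at (0.575, 0.425).
```

No individual equilibrium of the pair is asymptotically stable, yet the component as a whole recaptures the trajectory, which is exactly the attractor property defined next.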
Definition 2 (Attractor) A ⊂ X is an attractor for ẋ = −x + Br(x) if and only if
(i) A is compact and invariant,⁹
(ii) there exists an open neighborhood U of A with the following property:
∀ε > 0, ∃T > 0 such that ∀x ∈ U, ∀t ≥ T, d(ϕ(x, t), A) < ε.
An attractor for a dynamical system is a set with strong properties. In particular,
it uniformly attracts a neighborhood of itself. Indeed, if the dynamics starts inside the
attractor, it will stay there. Furthermore, if we perturb the state not too far away
from the attractor, the dynamics will return to it. The notion of attractor extends to
sets the concept of asymptotic stability that applies to points.¹⁰
Lemma 1 If a set Λ is an attractor for ẋ = −x+Br(x), then Λ is a union of connected
components of NE.
This lemma is important because it tells us that attractors are unions of connected
components, thereby eliminating the possibility that only some points or some pieces
of a continuum are stable, while the others are not. Thus, when there is a continuum
of equilibria, either it is entirely stable, or it is unstable.
⁹ Let S be a subset of R^N. Then S is invariant for the flow ϕ if a) ∀x ∈ S, ∀t ∈ R, ϕ(x, t) ∈ S, and b) ∀y ∈ S, ∀t ∈ R, there exists x ∈ S such that ϕ(x, t) = y.
¹⁰ In Bramoullé and Kranton (2007) and Bramoullé et al. (2014), the authors consider asymptotic stability of isolated equilibria with respect to the BRD, which is precisely the definition of an attractor restricted to degenerate components. To better understand the analogy, note that an isolated equilibrium is asymptotically stable for the BRD if and only if it is linearly stable for the BRD (because best responses are linear). As a consequence, an isolated equilibrium is asymptotically stable for the BRD if and only if all the eigenvalues of the Jacobian matrix of the dynamical system are strictly negative when evaluated at this point. However, when an equilibrium is not isolated (i.e. in a non-degenerate component), there is always at least one eigenvalue equal to 0. So in fact, no equilibrium in a continuum is asymptotically stable; only isolated equilibria are candidates for this kind of stability. This explains, for instance, why Bramoullé and Kranton (2007) find that the pair has no stable equilibrium: there are no isolated equilibria in the pair. As we will see, with the notion of attractor, it appears that the entire component is stable.
One interesting feature of ordinal potential games is that the potential P is strictly
increasing along solution trajectories of the BRD that lie outside the set of Nash equilibria. Now consider an isolated Nash equilibrium x∗ . If we perturb the actions slightly
and get out of the Nash equilibrium, the solution will follow a trajectory that increases
the potential. So either one of these trajectories takes the system away from x∗ , and
the Nash equilibrium x∗ is (asymptotically) unstable, or all trajectories that increase
P go towards x∗ , and the Nash equilibrium x∗ is (asymptotically) stable. If all trajectories that increase P go towards x∗ , x∗ must be a strict local maximum of P . Thus,
an isolated equilibrium is an attractor for the best-response dynamics if and only if
it is a strict local maximum of the potential. Because we are interested in non-trivial
components of Nash equilibria, we need a more general concept of local maximum.
Definition 3 Let Λ ⊂ X be compact. We say that Λ is a strict local maximum of P if
P is constant on Λ and there exists an open neighborhood U of Λ such that P (x) > P (y)
for any x ∈ Λ, y ∈ U \ Λ.
We show (see Lemma 4) that a connected component Λ of NE is a strict local maximum
of P if and only if every x ∈ Λ is a local maximum of P.
Proposition 2 Let Λ be a connected component of Nash equilibria. Then Λ is an
attractor for ẋ = −x + Br(x) if and only if Λ is a strict local maximum of P .
We know by Lemma 1 that attractors are unions of connected components of NE. However,
not all connected components of NE are attractors. Proposition 2 gives us a way
to differentiate between stable and unstable ones. In what follows, we
characterize the set of strict local maxima of P .
4.2  A complete characterization of attractors
In what follows, we will distinguish between three types of individuals. First, given
x ∈ X, we denote by A(x) the set of players that are active in x, i.e. A(x) := {i :
xi > 0}. Next, we denote by I(x) the set of inactive agents, that is the set of agents
i such that xi = 0 and (Gx)i = 1. These agents are free riders, but they have exactly
1 around them. Finally, we denote by SI(x) the set of strictly inactive agents, that
is the set of agents i such that xi = 0 and (Gx)i > 1. When useful, we will denote
Ã(x) = A(x) ∪ I(x).
We illustrate this on the 4-line network. There is a two-dimensional continuum of
Nash equilibria that is given by Λ = {(1 − β, β, α, 1 − α); αβ = 0, α ∈ [0, 1], β ∈ [0, 1]}.
As we move along this component, we find four types of equilibria. At profile (1, 0, 1, 0),
A(x) = {1, 3}, I(x) = {4} and SI(x) = {2}. At profile (1, 0, α, 1 − α), A(x) = {1, 3, 4},
I(x) = ∅ and SI(x) = {2}. At profile (1, 0, 0, 1), A(x) = {1, 4}, I(x) = {2, 3} and
SI(x) = ∅. Finally, at profile (1−β, β, 0, 1), A(x) = {1, 2, 4}, I(x) = ∅ and SI(x) = {3}.
Before stating our main theorem about stability we introduce two notions.
Definition 4 Given an undirected graph G, a subset of nodes M is a maximal independent set if
(i) ∀i ∈ M, Ni(G) ∩ M = ∅;
(ii) ∀j ∉ M, Nj(G) ∩ M ≠ ∅.
A maximal independent set is of order k ∈ N* if, ∀j ∉ M, we have Card(Nj(G) ∩ M) ≥ k.
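Definition 4 is easy to operationalise; the sketch below (our own code) checks both conditions and computes the order, using the star of Figure 6 as the example graph.

```python
def is_maximal_independent(neighbors, M):
    """(i) no two members of M are linked; (ii) every non-member has a neighbour in M."""
    M = set(M)
    nodes = set(neighbors)
    if any(neighbors[i] & M for i in M):   # condition (i) violated
        return False
    return all(neighbors[j] & M for j in nodes - M)  # condition (ii)

def order(neighbors, M):
    """Order k of a maximal independent set: min over j not in M of |N_j(G) ∩ M|."""
    M = set(M)
    outside = set(neighbors) - M
    return min(len(neighbors[j] & M) for j in outside) if outside else None

# Star with 4 agents (Figure 6): centre 1, leaves 2, 3, 4.
star = {1: {2, 3, 4}, 2: {1}, 3: {1}, 4: {1}}
```

On the star, {1} is a maximal independent set of order 1, while {2, 3, 4} is one of order 3.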
The set of specialized Nash equilibria is denoted by SPNE.
Remark 1 (Bramoullé and Kranton (2007)) Let M be a maximal independent set
of the graph G. Then the effort profile in which xi = 1 if i ∈ M and xi = 0 if i ∉ M is
an SPNE. Conversely, the set of active players of any SPNE is a maximal independent set of G.
In the sequel, we will say that an SPNE x and the maximal independent set formed
of the active individuals in x are associated. We denote the maximal independent set
associated to x by M (x). Note that for x ∈ SP N E, M (x) coincides with A(x).
Definition 5 (Influence set) Consider an SPNE x and its associated maximal independent set M (x). Given i ∈ M (x), the set
C(i, M (x)) = {j ∈ N : Nj (G) ∩ M (x) = {i}}
will be called the influence set of agent i in M (x).
The influence set of agent i in M (x) is the set of neighbors of i that have no other
neighbor in M (x). For instance, in the star in Figure 6, there are two SPNE that
are associated with two different maximal independent sets, M (x1 ) and M (x2 ) where
x1 = (1, 0, 0, 0) and x2 = (0, 1, 1, 1). In the first, only individual 1 has an influence set:
Figure 6: The star network with 4 agents. On the left, M(x1) = {1}. On the right, M(x2) = {2, 3, 4}.
C(1, M (x1 )) = {2, . . . , n}. In the second, C(i, M (x2 )) = ∅ for all i ∈ M (x2 ). Note that
in any network, if M (x) is of order at least 2, then C(i, M (x)) = ∅ for all i ∈ M (x).
For any i ∈ M (x) we can then define G[i, M (x)] as the subgraph of G restricted to
the agents {i} ∪ C(i, M (x)).
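The influence sets of Figure 6 can be recomputed mechanically. A sketch in Python (encoding and helper names are ours) for the 4-agent star, with agent 1 as the center:

```python
# Influence sets C(i, M(x)) and the induced subgraphs G[i, M(x)] on the 4-agent star.
nbrs = {1: {2, 3, 4}, 2: {1}, 3: {1}, 4: {1}}

def influence_set(i, M, neighbors):
    """Agents whose only neighbor in M is i itself (Definition 5)."""
    M = set(M)
    return {j for j in neighbors if j not in M and neighbors[j] & M == {i}}

def induced_is_complete(nodes, neighbors):
    """Is the subgraph induced by `nodes` complete?"""
    nodes = list(nodes)
    return all(b in neighbors[a] for a in nodes for b in nodes if a != b)

C1 = influence_set(1, {1}, nbrs)                  # for M(x1) = {1}
complete1 = induced_is_complete({1} | C1, nbrs)   # G[1, M(x1)]: the whole star
C2 = {i: influence_set(i, {2, 3, 4}, nbrs) for i in (2, 3, 4)}  # for M(x2)
```

As in the text, C(1, M(x1)) = {2, 3, 4} (and G[1, M(x1)], the whole star, is not complete), while every influence set of M(x2) is empty.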
Theorem 2 A set Λ is an attractor for the BRD if and only if:
• Λ is a connected component of NE such that Λ ∩ SP N E 6= ∅ and
• for all x ∈ Λ ∩ SP N E, G[i, M (x)] is complete ∀i ∈ M (x).
The proof of this theorem appears in section 6. As it is long and involves many arguments that we think are interesting, we provide detailed intuition together with several illustrative examples in what follows. The reader who is not interested in these details can go directly to subsection 4.3. The theorem can be divided into three steps.
First, we show that if Λ is an attractor for the BRD, then Λ contains at least one
SPNE. Thus, if a component does not contain any SPNE, it can be discarded. This
means that, for some initial conditions, the individuals might converge to a point in
that component, but that once their actions are slightly perturbed, they will move away
and not come back.¹¹
We illustrate this on the network in Figure 4. There are three connected components: Λ1 = {(0, 1, 1, 0, 0)}, Λ2 = {(1, 0, 0, α, 1 − α) : α ∈ [0, 1]} and Λ3 = {(1/3, 1/3, 1/3, 1/3 − ε, ε) : ε ∈ [0, 1/3]}. According to Theorem 2, Λ1 is a candidate for being an attractor. Λ2 is also a candidate because it contains two SPNE, obtained when α ∈ {0, 1}. Finally, Λ3 cannot be an attractor as it does not contain any SPNE.
¹¹ This implies that isolated interior equilibria cannot be stable.

The proof of this first part involves several arguments. First, we show that a necessary condition for Λ to be an attractor is that, for any equilibrium x that it contains, the set of active agents A(x) must form a union of complete networks (Lemma 5). There is
a spectral argument behind this. In order to check whether x is a local maximum of P, we compare P(x) with P(x + u), where u is any vector such that x + u remains in R^N_+. Such vectors represent all the feasible deviations from equilibrium. The deviations likely to most strongly increase P are those that follow the eigenvector associated with the negative eigenvalue of the graph of active agents that has the greatest modulus. This eigenvalue is λmin, the smallest eigenvalue of that graph. As we show, for any graph of active agents such that λmin < −1, there is at least one deviation that strictly increases the potential. Thus x cannot be a local maximum. The only case where the potential remains constant while following this deviation is when λmin = −1, and the only graphs that have this property are complete networks (or unions of complete networks).
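These spectral facts are easy to verify numerically. The sketch below (our own example; only NumPy is assumed) checks that a complete graph has λmin = −1, that an incomplete one has λmin < −1, and that at the interior equilibrium (1/3, ..., 1/3) of a 4-cycle, whose active graph has λmin = −2, the eigenvector deviation raises the potential by ε²/2:

```python
import numpy as np

def lambda_min(A):
    """Smallest eigenvalue of a symmetric adjacency matrix."""
    return float(np.linalg.eigvalsh(A).min())

K3 = np.ones((3, 3)) - np.eye(3)                    # complete graph: lambda_min = -1
P3 = np.array([[0., 1, 0], [1, 0, 1], [0, 1, 0]])   # path on 3 nodes: lambda_min = -sqrt(2)

def potential(x, G):
    x = np.asarray(x, dtype=float)
    return x.sum() - 0.5 * x @ x - 0.5 * x @ G @ x

# 4-cycle: its interior equilibrium has a non-complete active graph, lambda_min = -2.
C4 = np.array([[0., 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
x = np.full(4, 1 / 3)
u = np.linalg.eigh(C4)[1][:, 0]      # unit eigenvector for lambda_min(C4) = -2
eps = 1e-2
gain = potential(x + eps * u, C4) - potential(x, C4)  # = -eps^2 (1 + lambda_min)/2
```

Since λmin = −2 here, the gain equals +ε²/2: the deviation strictly increases the potential, so the interior equilibrium of the cycle is not a local maximum.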
Second, we observe that in complete networks, efforts can be "transferred" continuously from any active individual to any other while remaining in the set of Nash equilibria. Thus we can transfer efforts from one active player to another until the first player no longer exerts any effort. The subgraph of active players is then reduced by one individual, and we can start again, until only one individual of the initial complete subgraph exerts all the effort, while all the others are inactive.
To summarize, if Λ is an attractor, the different subgraphs of active agents are
complete, and are connected to an equilibrium in which some agents exert an effort of
1 while all others are either inactive or strictly inactive. This is precisely the definition
of an SPNE. Thus Λ contains an SPNE.
Let us illustrate this on the network of Figure 7. As one can observe, the initial
profile is a Nash equilibrium, provided α + β ≥ 1. As efforts are transferred from one
player in the triangle to another, the profile is still a Nash equilibrium. The agent to
which efforts are transferred becomes a specialized agent. If we replaced the triangle of
this example by a square, as in Figure 8, we would also start at an equilibrium, but as
can be verified, it would be impossible to continuously transfer efforts from one agent
in the square to another without moving out of the set of Nash equilibria.
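The contrast between complete and incomplete components can be reproduced numerically on a standalone triangle and a square (our own stand-ins for the networks of Figures 7 and 8):

```python
import numpy as np

def is_ne(x, G, tol=1e-9):
    """x is a Nash equilibrium iff it is a fixed point of the best response."""
    return bool(np.all(np.abs(x - np.maximum(0.0, 1.0 - G @ x)) <= tol))

K3 = np.ones((3, 3)) - np.eye(3)                                          # triangle
C4 = np.array([[0., 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])  # square

# On the triangle, any transfer of effort keeping the total at 1 stays in NE:
start, target = np.full(3, 1 / 3), np.array([1.0, 0.0, 0.0])
on_triangle = all(is_ne((1 - t) * start + t * target, K3) for t in np.linspace(0, 1, 11))

# On the square, transferring effort between two agents exits the NE set at once:
x4 = np.full(4, 1 / 3)                        # interior equilibrium of the square
on_square = is_ne(x4 + np.array([0.1, -0.1, 0.0, 0.0]), C4)
```

The whole segment of transfers stays in the Nash set on the triangle, while the perturbed profile on the square is no longer an equilibrium.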
In the second step of the proof, we show that SPNE in non-degenerate components play a special role. In particular, a connected component of NE containing an SPNE is an attractor if and only if every SPNE it contains is a local maximum of P. This result is a serious step toward a characterization of attractors, helping to hugely simplify the problem. Indeed, however complex a component might be (recall for instance the four-dimensional continuum in Figure 5), it is only necessary to identify the set of SPNE
that it contains. These are the only points that matter. Furthermore, being maximal independent sets of the network, SPNE are easy to find. There are many algorithms that, although time-consuming, will find them all.

Figure 7: An example with 6 players. When α + β ≥ 1, the initial action profile (upper left panel) is a Nash equilibrium. In the second and third action profiles (upper right and lower left panels), efforts are "transferred" from one agent to another in the triangle and yet, the profiles remain in the Nash component. In the last profile (lower right panel), efforts are transferred from one agent in the pair to the other. The final profile is an SPNE.
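For the small networks considered here, maximal independent sets can indeed be enumerated by brute force; the following sketch (exponential in the number of agents, and our own code rather than the paper's) recovers the two maximal independent sets of the 4-agent star:

```python
# Brute-force enumeration of all maximal independent sets of a small graph.
from itertools import combinations

def maximal_independent_sets(neighbors):
    out = []
    nodes = list(neighbors)
    for r in range(1, len(nodes) + 1):
        for cand in combinations(nodes, r):
            S = set(cand)
            independent = all(neighbors[i].isdisjoint(S) for i in S)
            maximal = all(neighbors[j] & S for j in nodes if j not in S)
            if independent and maximal:
                out.append(frozenset(S))
    return out

star = {1: {2, 3, 4}, 2: {1}, 3: {1}, 4: {1}}
mis = maximal_independent_sets(star)
```

Dedicated algorithms (e.g. Bron-Kerbosch on the complement graph) scale much better, but the brute-force version suffices for the examples of this paper.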
The necessary condition in this second step is immediate. The intuition for the
sufficient condition is the following: we start by showing that if a connected component
intersects the set of SPNE, and each SPNE is a local maximum of P , then any point
x in the component is also a local maximum if and only if the set A(x) forms a union
of complete networks. Thus what we need to prove is that A(x) is a union of complete
networks. We assume by contradiction that this is not the case.
We start by using the fact that Λ is connected, so that any point can be reached
from any other point in Λ by continuously moving individuals’ actions. In particular, it
is possible to start from equilibrium x and reach an SPNE of the component. Formally,
we build a continuous map x(t) for t ∈ [0, 1] such that x(t) ∈ Λ for all t, and x(0) = x
while x(1) is an SPNE of Λ.
Figure 8: An example with 7 players. When α + 1/3 ≥ 1, this action profile is a Nash equilibrium. However, effort profiles exit the Nash component as soon as efforts are transferred from one agent to another in the square.

Obviously, in any SPNE, the set of active agents is a union of complete networks (singletons). And because A(x) is not a union of complete networks, there is necessarily a first moment t∗ on the path from x to the SPNE such that A(x(t∗)) becomes a union of complete networks precisely at time t∗, although it was not just before.
We identify the agents that have become inactive at t∗ , and we use them to build our
contradiction by examining their situations at time t < t∗ , i.e. when they are still active.
There are two cases. In the first, there is only one such individual. Then we show that
equilibrium x must be connected to an SPNE in Λ that is not a local maximum of the
potential, a contradiction with the assumptions. In the second case, there are several
such individuals. Then, there are at least two of them who are linked and do not share
the same neighborhood. We thus show that the aggregate effort around them is strictly
greater than 1. This is inconsistent with the fact that these agents are active and that
x(t) is a Nash equilibrium (being a point of Λ).
The first two steps of Theorem 2 indicate that all that is needed to identify a component of Nash equilibria as an attractor is a look at its specialized equilibria. In the third and
last step, we state an extremely simple criterion that establishes when a given specialized
Nash equilibrium is a local maximum of the potential: for each SPNE, construct the
influence set of every active individual. Check whether the graphs induced by these
influence sets are complete or not. If they are, the component is stable; otherwise it is
unstable.
The proof of this third step is direct. We simply compute the value of the potential at an SPNE, and compare it with the value of the potential after any admissible deviation by the players. We show that the deviation can never increase the potential when the graphs G[i, M(x)] are complete.
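The criterion of this third step takes only a few lines of code. A sketch (in our own encoding) applied to the 4-agent star, where the SPNE associated with {2, 3, 4} passes the test while the one associated with {1} fails:

```python
# Third-step criterion: an SPNE with maximal independent set M is a local maximum
# of the potential iff G[i, M] is complete for every i in M.
def influence_set(i, M, neighbors):
    M = set(M)
    return {j for j in neighbors if j not in M and neighbors[j] & M == {i}}

def spne_is_local_max(M, neighbors):
    for i in M:
        nodes = {i} | influence_set(i, M, neighbors)
        if not all(b in neighbors[a] for a in nodes for b in nodes if a != b):
            return False
    return True

star = {1: {2, 3, 4}, 2: {1}, 3: {1}, 4: {1}}
```

Combined with an enumeration of maximal independent sets, this yields the full algorithm for finding attractors illustrated in the next subsection.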
4.3 Illustration and implications
Theorem 2 provides an algorithm for finding the attractors. We illustrate how to implement it on an example. Consider the following arbitrary 8-agent graph.

Figure 9: An arbitrary 8-agent graph
The maximal independent sets are the following:
M1 = {3, 4, 6, 7}, M2 = {1, 2, 3}, M3 = {1, 2, 8}, M4 = {6, 7, 8}, M5 = {5, 6, 8}.
Hence there are five specialized Nash equilibria that we denote by xMi, i = 1, ..., 5. There are three connected components of Nash equilibria containing at least one SPNE:

• a singleton Λ1 = {xM1};

• a continuum Λ2 = {(1, 1, 1 − α, 0, 0, 0, 0, α) : α ∈ [0, 1]}, which contains xM2 and xM3;

• and another continuum Λ3 = {(0, 0, 0, 0, α, 1, 1 − α, 1) : α ∈ [0, 1]}, which contains xM4 and xM5.
There is no point looking for other components of Nash equilibria as they will not
be attractors. The set M1 is a maximal independent set of order 2. Therefore, Λ1 is an
attractor (the influence set of every agent i ∈ M1 is empty).
A priori, nothing differentiates Λ2 from Λ3 : the potential is equal to 3/2 along
both components and they have similar shapes. However let’s take a closer look at the
specialized Nash equilibria in both components.
For component Λ2 we have C(1, M2 ) = C(2, M2 ) = ∅ and C(3, M2 ) = {8}. Thus
{1} ∪ C(1, M2 ) is a singleton, making the induced subgraph a complete network; {2} ∪
C(2, M2) is also a singleton; finally {3} ∪ C(3, M2) = {3, 8} and the induced subgraph is a pair and thus a complete network. This means that xM2 is a local maximum of the potential. Similarly, C(1, M3) = C(2, M3) = ∅ and C(8, M3) = {3}. Hence xM3 is also a local maximum of the potential. Thus all the specialized Nash equilibria in component Λ2 are local maxima of the potential, and Λ2 is an attractor for the best-response dynamics.

Figure 10: The three components of Nash equilibria (Λ1, Λ2 and Λ3)
For component Λ3 , we have C(6, M5 ) = C(8, M5 ) = ∅ and C(5, M5 ) = {7}, which
makes xM5 a local maximum of the potential. Also C(6, M4 ) = ∅ and C(7, M4 ) = {5}.
However C(8, M4 ) = {3, 4}, which does not correspond to a complete graph (3 and 4 are
not linked). As a result, our main theorem claims that xM4 is not a local maximum of
the potential. Indeed, for xε = (0, 0, ε, ε, 0, 1, 1, 1 − 2ε) we have P(xε) = 3/2 + ε² > P(xM4).
Thus the BRD would lead individuals out of the component, following a direction that
increases P .
We conclude that there are two attractors, Λ1 and Λ2 .
We now illustrate how our results extend the results in Bramoullé and Kranton
(2007).
Corollary 1 (Bramoullé and Kranton (2007), Theorem 2) Let x be an isolated
Nash equilibrium. Then if x is an attractor, it has to be an SPNE. Further, x has to be
associated with a maximal independent set of order 2.
The proof of this corollary relies on the first step of Theorem 2, which excludes isolated interior equilibria from being attractors, and on the fact that for an isolated equilibrium the conditions of Theorem 2 can only be satisfied trivially: the influence set of every individual in M must be empty, thus complete.
Notice, however, that while the statement was an "if and only if" in Bramoullé and Kranton (2007), it is now just a sufficient condition. We illustrate the difference with the following corollary.
Corollary 2 (Existence) Every network has at least one attractor.
This corollary relies on the following:
Definition 6 A maximum independent set of an undirected graph G is a maximal
independent set of largest cardinality.
Because the proof is short and easy, we give it here. Let x be an SPNE associated with a maximum independent set M(x); then G[i, M(x)] is complete ∀i ∈ M(x). Indeed, assume this is not the case. This implies the existence of an individual i in M(x) and of at least two individuals j1 and j2 in C(i, M(x)) that are not linked. Furthermore, by definition, they are not linked to any other individual in M(x). Thus, we can construct another independent set M′ = (M(x) \ {i}) ∪ {j1, j2}, which can be extended if necessary to a maximal independent set of cardinality strictly greater than |M(x)|, a contradiction.
An alternative proof follows from the observation that the potential P necessarily has a global maximum.

Corollary 2 departs from the findings of Bramoullé and Kranton (2007), for exactly the reasons explained in footnote 10. For instance, in any complete network, such as the pair, the maximum independent set is of order 1. While no equilibrium is asymptotically stable, the connected component of NE is still an attractor.
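The argument behind Corollary 2 can also be checked empirically: on small random graphs, a maximum independent set always passes the completeness test of Theorem 2. A brute-force sketch (our own code; graph size, edge probability and seeds are arbitrary choices):

```python
import random
from itertools import combinations

def maximum_independent_set_passes(n, p, seed):
    """Draw a G(n, p) graph, extract a maximum independent set M by brute force,
    and check that every G[i, M] (i together with its influence set) is complete."""
    rng = random.Random(seed)
    nodes = list(range(1, n + 1))
    nbrs = {i: set() for i in nodes}
    for a, b in combinations(nodes, 2):
        if rng.random() < p:
            nbrs[a].add(b)
            nbrs[b].add(a)
    best = set()
    for r in range(n, 0, -1):                 # try the largest cardinalities first
        for cand in combinations(nodes, r):
            S = set(cand)
            if all(nbrs[i].isdisjoint(S) for i in S):
                best = S
                break
        if best:
            break
    for i in best:
        C = {j for j in nodes if j not in best and nbrs[j] & best == {i}}
        sub = {i} | C
        if not all(b in nbrs[a] for a in sub for b in sub if a != b):
            return False
    return True
```

As in the proof above, a failure of the test would let us swap one specialist for two non-linked members of its influence set, contradicting maximality of the cardinality.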
5 Conclusion
We examined a very standard learning dynamics in a game played on networks. This
game was chosen because, while it is simple, it is nevertheless plagued by a multiplicity of equilibria that can take very complex shapes. Two important findings emerge.
First, the best-response dynamics always converges to an equilibrium. This is far from
trivial, since the literature contains no general results on systems that have continua
of equilibria. Second, we give necessary and sufficient conditions to find attractors, i.e.
stable components of equilibria. These conditions are closely linked to the structure of
the network, but they are very simple to verify.
This work takes a step toward answering a natural question that has not yet been explored. When games are played on networks, Nash equilibria become so complex that their appropriateness as a concept can be questioned. However, we have shown that the concept has some support.
Acknowledgments
We wish to thank Y. Bramoullé and N. Allouch for useful conversations, as well as
participants of the several conferences where this work was presented. The authors
thank the French National Research Agency (ANR) for their financial support through
the program ANR 13 JSH1 0009 01.
6 Proof of the main results
Proof of Proposition 1. We show that there exists a finite family of compact convex sets Λ1, ..., ΛL such that

NE = ∪_{l=1}^{L} Λl.

Let A be a subset of agents and NE^A be the set

{x ∈ NE : xi = 0 ∀i ∉ A, xi + (Gx)i = 1 ∀i ∈ A}.

It is not hard to see that NE^A is closed. Now let x, y ∈ NE^A, λ ∈ [0, 1] and z = λx + (1 − λ)y. If i ∉ A, then xi = yi = 0, hence zi = 0. If i ∈ A,

zi + (Gz)i = λ(xi + (Gx)i) + (1 − λ)(yi + (Gy)i) = 1.

Each NE^A is therefore compact and convex, hence connected, which proves the result. 

6.1 Proof of convergence of BRD
The purpose of this section is to prove Theorem 1. Recall that (x, t) ↦ ϕ(x, t) is the flow associated with the best-response dynamics ẋ = −x + Br(x).
Proposition 3 The potential P defined in (3) is a strict Lyapunov function for ẋ = −x + Br(x), that is:

• for x ∈ NE, the map t ↦ P(ϕ(x, t)) is constant;

• for x ∉ NE, the map t ↦ P(ϕ(x, t)) is strictly increasing.
Proof. We have, for all x and all i,

∂ui/∂xi(x) > 0 ⇒ ∂P/∂xi(x) > 0   and   ∂ui/∂xi(x) < 0 ⇒ ∂P/∂xi(x) < 0.

Also

⟨DP(x), −x + Br(x)⟩ = Σ_i ∂P/∂xi(x) (−xi + Bri(x−i)).

We need to check that, if x ∉ NE, then this quantity is positive. Let i be such that xi ≠ Bri(x−i), say xi < Bri(x−i). Then by strict concavity of ui we have ∂ui/∂xi(x) > 0. Thus ∂P/∂xi(x) > 0 and ⟨DP(x), −x + Br(x)⟩ > 0. 

Note that, for any x0 ∈ X and any r > 1, ϕ(x0, t) ∈ [0, r]^N for t large enough. Indeed, if xi ≥ r then ẋi ≤ 1 − r < 0. That means that, for any x0 ∈ X, ω(x0) is nonempty and contained in [0, 1]^N. For any x ∈ ω(x0), we have P(x) = lim_t P(ϕ(x0, t)). Now, ω(x0) being invariant directly implies that P(ϕ(x, t)) = P(x) for any t, which means that x ∈ NE by Proposition 3. Thus ω(x0) ⊂ NE. We can actually say more: since ω(x0) is connected, it must necessarily be contained in a connected component of NE, i.e. ω(x0) ⊂ Λl for some Λl in the decomposition of Proposition 1. This is a severe restriction on the possible omega limit sets of the dynamics; however, we wish to prove more, namely that |ω(x0)| = 1 for any x0 ∈ X.
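The Lyapunov property can be observed numerically. Below is a minimal Euler discretization of the BRD on the 4-line network (our own example; the step size and starting profile are arbitrary choices): for this quadratic potential and a small step, the potential is nondecreasing along the discrete path as well, and the profile approaches a Nash equilibrium.

```python
import numpy as np

# Discrete-time sketch of xdot = -x + Br(x), with Br_i(x) = max(0, 1 - (Gx)_i).
G = np.array([[0., 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])          # the 4-line network

def br(x):
    return np.maximum(0.0, 1.0 - G @ x)

def potential(x):
    return x.sum() - 0.5 * x @ x - 0.5 * x @ G @ x

h = 0.05                              # Euler step (small, arbitrary)
x = np.array([0.4, 0.9, 0.2, 0.7])    # arbitrary starting profile
values = [potential(x)]
for _ in range(5000):
    x = x + h * (br(x) - x)
    values.append(potential(x))

residual = float(np.max(np.abs(x - br(x))))   # 0 exactly at a Nash equilibrium
```

The limit point depends on the starting profile, but it always lies in the (here unique) connected component of Nash equilibria of the 4-line network.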
Let A ⊂ X and x ∈ A. The tangent cone to A at x is the set of directions v ∈ R^N such that there exists a sequence (xn)n in A converging to x and hn ↓n 0 with the following property:

v = lim_{n→+∞} (xn − x)/hn.

We denote this set Tx A. Let x̂ be a Nash equilibrium and Λ be the connected component containing x̂. Then v ∈ Tx̂ Λ if and only if

vi ≥ 0 ∀i ∈ I(x̂),
vi = 0 ∀i ∈ SI(x̂),
((I + G)v)i = 0 ∀i ∈ A(x̂),
((I + G)v)i = 0 ∀i ∈ I(x̂) s.t. vi > 0,
((I + G)v)i ≥ 0 ∀i ∈ I(x̂).
Given x ∈ X, the direction cone Fx of the vector field at x is defined by

Fx := ∩_{ε>0} coco(f(B(x, ε)) \ {0}),

where coco(A) is the cone generated by the convex hull of A.
Proposition 4 (Bhat and Bernstein, 2004) Let x0 ∈ X and x̂ ∈ ω(x0). Then if Tx̂ ω(x0) ∩ Fx̂ ⊂ {0}, we have ω(x0) = {x̂}.

Proposition 5 Let x0 ∈ X. There exists x̂ ∈ ω(x0) and ε > 0 such that

∀y ∈ B(x̂, ε) ∩ ω(x0), I(y) = I(x̂).    (7)

As a consequence, for any v ∈ Tx̂ ω(x0), vi = 0 and ((I + G)v)i = 0 ∀i ∈ I(x̂).
Proof. Pick any x ∈ ω(x0). Note that there exists an open neighborhood U of x such that I(y) ⊂ I(x) ∀y ∈ U. Suppose that, for any ε > 0, (7) does not hold at x. Then there exists in particular y ∈ U ∩ ω(x0) such that I(y) is strictly contained in I(x). Consequently |I(y)| < |I(x)|. By repeating this reasoning (at most |I(x)| times) the result follows.
Now given v ∈ Tx̂ ω(x0 ) and i ∈ I(x̂), we necessarily have vi = 0, by definition of the
tangent cone, as otherwise there would exist a sequence of points in ω(x0 ) for which i is
active, and that converges to x̂. Also we have ((I + G)v)i = 0, as otherwise there would
exist a sequence of points in ω(x0 ) for which i is strictly inactive, and that converges
to x̂. 

Proof of Theorem 1. We must prove that |ω(x0)| = 1 for any x0 ∈ X. Let x0 ∈ X and x̂ ∈ ω(x0) be such that (7) holds. Pick v ∈ Tx̂ ω(x0) ∩ Fx̂. We must prove that v = 0. By the last point of the previous proposition, we have vi = 0 ∀i ∈ I(x̂), and ((I + G)v)i = 0 ∀i ∈ I(x̂).
Now let i ∈ A(x̂). There exists ε > 0 such that fi(x) = 1 − ((I + G)x)i = −((I + G)(x − x̂))i for any x ∈ U := B(x̂, ε). Also fi(x) = −xi for i ∈ SI(x̂). As a consequence f(U) ⊂ C(U), where

C(U) := {w ∈ R^N : ∃x ∈ U s.t. wi = ((I + G)(x̂ − x))i ∀i ∈ A(x̂), wi = −xi ∀i ∈ SI(x̂)},

which is convex. By definition of co(f(U)), we therefore have co(f(U)) ⊂ C(U). Now since v ∈ coco(f(U)), we have v = λw, with λ ≥ 0 and w ∈ C(U) associated to some x ∈ U. As v ∈ Tx̂ ω(x0) we have wi = 0 for i ∈ SI(x̂), and therefore xi = 0 for i ∈ SI(x̂).
Let us prove that v = 0. For i ∈ A(x̂) we have vi = λ((I + G)(x̂ − x))i. Thus

⟨v, v⟩ = λ Σ_{i∈A(x̂)} vi × ((I + G)(x̂ − x))i    (8)
= λ Σ_{i∈A(x̂)} Σ_{j∈N} vi (I + G)_{i,j} (x̂j − xj)    (9)
= λ Σ_{j∈N} (x̂j − xj) Σ_{i∈A(x̂)} (I + G)_{j,i} vi    (10)
= λ Σ_{j∉SI(x̂)} (x̂j − xj) ((I + G)v)j    (11)
= 0,    (12)

where we used the fact that vi = 0 for i ∉ A(x̂), that I + G is symmetric, that xj = 0 for j ∈ SI(x̂) and that ((I + G)v)j = 0 for any j ∉ SI(x̂). The proof is complete. 

Proof of Lemma 1: Let Λ be connected and an attractor for the BRD. Then Λ ⊂ NE,
as for any x ∉ NE we have P(ϕ(x, t)) > P(x) for any t > 0. Suppose now that Λ is not isolated in NE. Then there would exist (xn)n ∈ NE \ Λ such that d(xn, Λ) →n 0, and Λ would not be an attractor. 

6.2 Intermediate results

Here, in addition to Lemma 4 and Proposition 2 stated in the main text, we prove a series of other results that will be useful in what follows.
Lemma 2 Let x be a Nash equilibrium. Then

(i) P(x) = (1/2) Σ_i xi.

(ii) Let u be such that x + u ∈ R^N_+. Then

P(x + u) = P(x) + Σ_{i∈SI(x)} ui (1 − (Gx)i) − (1/2)(‖u‖² + ⟨u, Gu⟩).
Proof. First let us prove (i). Let x be an interior Nash equilibrium. Then (I + G)x = 1 and thus

P(x) = Σ_i xi − (1/2)⟨x, x⟩ − (1/2)⟨x, Gx⟩ = Σ_i xi − (1/2)⟨x, 1⟩ = (1/2) Σ_i xi.

Now if x is a Nash equilibrium (not necessarily interior), then this reasoning holds on the "connected components where x has positive values", and the result also holds.
For point (ii), we have

P(x + u) = P(x) + ⟨u, 1⟩ − ⟨u, x⟩ − (1/2)‖u‖² − (1/2)(⟨u, Gx⟩ + ⟨x, Gu⟩ + ⟨u, Gu⟩)
= P(x) + ⟨u, 1 − x − Gx⟩ − (1/2)(‖u‖² + ⟨u, Gu⟩)
= P(x) + Σ_{i∈SI(x)} ui (1 − (Gx)i) − (1/2)(‖u‖² + ⟨u, Gu⟩). 
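Both identities of Lemma 2 are easy to verify numerically; a sketch (our own code) on the 4-line network, at the equilibrium (1, 0, α, 1 − α) with α = 0.3, where SI(x) = {2}:

```python
import numpy as np

G = np.array([[0., 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])           # the 4-line network

def potential(x):
    x = np.asarray(x, dtype=float)
    return x.sum() - 0.5 * x @ x - 0.5 * x @ G @ x

x = np.array([1.0, 0.0, 0.3, 0.7])     # a Nash equilibrium; SI(x) = {2}
u = np.array([0.1, 0.2, -0.05, 0.05])  # a feasible deviation: x + u stays nonnegative

# Point (ii): only the strictly inactive agent 2 (index 1) contributes a linear term.
rhs = potential(x) + u[1] * (1 - (G @ x)[1]) - 0.5 * (u @ u + u @ G @ u)
```

The identity is exact (not an approximation): the potential is quadratic, so the expansion has no higher-order remainder.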
Lemma 3 Let Λ be any connected component of NE. Then the potential P is constant on Λ.

Proof. As we have shown, there exists a finite family of compact convex sets Λ1, ..., ΛL such that

NE = ∪_{l=1}^{L} Λl.

Remember that for a subset A of agents,

NE^A := {x ∈ NE : xi = 0 ∀i ∉ A, xi + (Gx)i = 1 ∀i ∈ A}.

We now prove that P is constant on NE^A. Let P^A : R^A → R be defined as

P^A(z) = Σ_i zi − (1/2)‖z‖² − (1/2)⟨z, G_A z⟩.

For x ∈ NE^A, let x_A = (xi)_{i∈A}. We then have P^A(x_A) = P(x). Moreover, for i ∈ A,

∂P^A/∂zi = 1 − zi − (G_A z)i.

By definition of NE^A, we then have

{x_A : x ∈ NE^A} ⊂ {z ∈ R^A : ∇_z P^A = 0}.

Now P^A is C∞ and, by Sard's Theorem, P^A({x_A : x ∈ NE^A}) has Lebesgue measure zero in R. Since NE^A is convex, hence connected, its image under the continuous map P^A is an interval of measure zero, that is, a single point. Thus P^A is constant on {x_A : x ∈ NE^A}, which directly implies that P is constant on NE^A. Since NE is a finite union of sets on which P is constant, P must remain constant on any connected component of NE. 
P is constant, P must remain constant on any connected component of N E. Lemma 4 Let Λ be a connected component of NE and assume that for any x ∈ Λ, x
is a local maximum of P . Then Λ is a strict local maximum of P .
Proof. By Lemma 3, P is constant on Λ. Let v := P(Λ). Assume by contradiction that there exists a sequence (xn)n ∈ X \ Λ such that limn d(xn, Λ) = 0 and P(xn) ≥ v. By compactness of Λ, we can assume without loss of generality that limn xn = x ∈ Λ. Let U be an open neighborhood of x such that P(x) ≥ P(y) ∀y ∈ U. We have P(ϕ(xn, t)) > v for t > 0. For n large enough and t small enough, ϕ(xn, t) ∈ U, a contradiction. 

Proof of Proposition 2: Let Λ be a connected component of Nash equilibria. Let us
first prove the direct implication. By Lemma 3, P is constant on Λ. Let v := P(Λ). If Λ were not a strict local maximum of P then there would exist a sequence xn such that d(xn, Λ) →n 0 and P(xn) > v. Since Λ is isolated in NE we have xn ∈ X \ NE and P(ϕ(xn, t)) > P(xn) > v for any t > 0, hence d(ϕ(xn, t), Λ) ↛ 0 and Λ is not an attractor.

We now prove the reverse implication, namely that if Λ is a local maximum of P then it is an attractor for the best-response dynamics. Since P is a strict Lyapunov function for the best-response dynamics, the statement we want to prove is then a consequence of Proposition 3.25 in Benaïm et al. (2005). We adapt the proof to our context for convenience. Let Vr := {x ∈ U : P(x) > v − r}. Clearly ∩r Vr = Λ. Also ϕ(Vr, t) ⊂ Vr for t > 0 and r small enough¹². This implies that Λ = ∩_{r>0} Vr contains an attractor A (see Conley (1978)). The potential being constant on Λ, A cannot be strictly contained in Λ and therefore Λ is an attractor. 

¹² We need to make sure that r is small enough that Vr = P⁻¹([v − r, v]) ⊂ U.

Given some action profile x, we call Λ(x) the connected component of NE that contains x, and GA(x) the (possibly unconnected) graph obtained from G by removing every agent that does not belong to A(x). Note that there exists K ≥ 1 such that

GA(x) = C1 ∪ C2 ∪ ... ∪ CK,    (13)

where C1, ..., CK are connected components.
Proposition 6 Let x ∈ Λ, where Λ is a component of Nash equilibria, and assume that in the decomposition (13) Ck is complete for every k. Suppose also that SPNE ∩ Λ ⊂ Maxloc. Then x is a local maximum of P. Moreover there exists a map x : t ∈ [0, 1] ↦ x(t) ∈ Λ such that

(i) x(0) = x and x(1) ∈ SPNE;

(ii) A(x(t′)) ⊂ A(x(t)), ∀t ≤ t′.
Proof. For the first statement, assume by contradiction that there exists a sequence xn such that P(xn) > P(x) = P(Λ(x)) and limn xn = x. For k = 1, ..., K let

i_k^n ∈ Argmin_{i∈Ck} Σ_{j∈Ni\Ck} x_j^n.

Define x̂n by x̂n_{i_k^n} = Σ_{i∈Ck} x_i^n and x̂n_i = 0 for i ∈ Ck \ {i_k^n}, k = 1, ..., K. Then P(x̂n) ≥ P(xn). By construction of x̂n, given any ε > 0, there exists n0 ∈ N such that d(x̂n, SPNE ∩ Λ(x)) < ε for any n ≥ n0. Thus P(xn) ≤ P(x̂n) ≤ P(Λ(x)) for large enough n, a contradiction.
For the second point, assume that x is not specialized. By recursion, it is sufficient to prove that there exists a path x in Λ such that x(0) = x, A(x(t)) ⊂ A(x) and |A(x(1))| < |A(x)|. We have λmin(GA(x)) = −1. Let u ∈ R^{|A(x)|} be a unit eigenvector associated with −1 and v ∈ R^N be defined by vi = ui ∀i ∈ A(x) and vj = 0 ∀j ∉ A(x). We then have P(x + λv) = P(x) by Lemma 2. Thus, as long as x + λv ∈ R^N_+, it is a Nash equilibrium. Indeed, assume that this is not the case; there would then exist 1 > λ0 ≥ 0 such that x + λ0 v ∈ NE and x + (λ0 + 1/n)v ∉ NE for n large enough. However P(x + (λ0 + 1/n)v) = P(x + λ0 v), and consequently P(ϕ(x + (λ0 + 1/n)v, 1/n)) > P(x + λ0 v). This is a contradiction with the fact that x + λ0 v is a local maximum of P, as GA(x+λ0 v) = GA(x) consists in a disjoint union of complete graphs.

Now let λ̄ := max{λ > 0 : x + λv ∈ R^N_+} and let x(t) = x + tλ̄v. Necessarily there exists i ∈ A(x) such that xi(1) = 0, which means that |A(x(1))| < |A(x)|. 

Lemma 5 Let x ∈ NE. Assume that one of the components of GA(x) is not complete.
Proof. One of the components of GA(x) is not complete if and only if λmin(GA(x)) < −1. Let xA be the restriction of x to A(x) and uA ∈ R^{|A(x)|} be an eigenvector associated to λmin(GA):

GA uA = λmin(GA) uA.

By Lemma 2, taking u = (uA, 0),

P(x + u) − P(x) = −(1/2)(‖u‖² + ⟨u, Gu⟩)
= −(1/2)(‖uA‖² + ⟨uA, GA uA⟩)
= −(1/2)‖uA‖² (1 + λmin(GA)).

Now since λmin(GA) < −1, we have P(x + u) > P(x), and x + u ∈ R^N_+ provided u is small enough. Thus x is not a local maximum of P. 

6.3 Proof of the stability results

We prove Theorem 2 by proving the three following claims:
Claim 1 Let Λ be an attractor for the BRD. Then Λ ∩ SPNE ≠ ∅.

Claim 2 Let Λ be a connected component of NE such that Λ ∩ SPNE ≠ ∅. Then Λ is an attractor if and only if, for all x ∈ Λ ∩ SPNE, x is a local maximum of P.

Claim 3 Let x ∈ SPNE and M(x) be the associated maximal independent set. Then x is a local maximum of P if and only if G[i, M(x)] is complete ∀i ∈ M(x).
Proof of Claim 1. Let Λ be an attractor, which we assume to be connected without loss of generality. We need to prove that it intersects the set of specialized Nash equilibria. Let x ∈ Λ and assume without loss of generality that x ∉ SPNE (otherwise, there is nothing to prove). Let GA(x) = C1 ∪ C2 ∪ ... ∪ CK be the decomposition (13) according to the active agents in x. Since Λ is an attractor, GA(x) is a union of complete graphs by Lemma 5, and at least one of them contains at least 2 agents. We can almost use the same reasoning as in the proof of Proposition 6: λmin(GA(x)) = −1. Let u ∈ R^{|A(x)|} be a unit eigenvector associated with −1 and v ∈ R^N be defined by vi = ui ∀i ∈ A(x) and vj = 0 ∀j ∉ A(x). We then have P(x + λv) = P(x) by Lemma 2. Thus, as long as x + λv ∈ R^N_+, it is a Nash equilibrium. Indeed, assume that this is not the case. Then there would exist λ0 > 0 such that x + λ0 v ∈ NE and x + λv ∉ NE for λ > λ0 close enough to λ0. By definition of λ0, we also would have x + λ0 v ∈ Λ. This is a contradiction with the fact that x + λ0 v is a local maximum of P and therefore in contradiction with the fact that Λ is an attractor.

Now let λ̄ := max{λ > 0 : x + λv ∈ R^N_+}. Necessarily there exists i ∈ A(x) such that xi + λ̄vi = 0. Clearly x + λ̄v ∈ Λ(x) and |A(x + λ̄v)| < |A(x)|. By reiterating this reasoning, there exists x̂ ∈ Λ(x) ∩ SPNE. 

To prove the next result, we recall some notation from section 4.2. Given an SPNE
x and i ∈ M (x),
C(i, M (x)) = {j ∈ N : Nj (G) ∩ M (x) = {i}}
is the influence set of agent i in M (x). We then have the following:
Lemma 6 Let x ∈ SP N E. Then if there exists i ∈ M (x) such that C(i, M (x)) is not
complete, x is not a local maximum of P .
Proof. Assume that there exists i ∈ M(x) such that G[i, M(x)] is not complete. Then there exist j, k ∈ C(i, M(x)) such that jk ∉ G. Now let xε be defined by xε_j = xε_k = ε, xε_i = 1 − 2ε, and xε_l = xl otherwise. Straightforward computations give

P(xε) = P(x) + ε²

and x is not a local maximum of P. 

Proof of Claim 2. Let Λ be a connected component of NE that intersects the set
of specialized NE and suppose that every specialized Nash equilibrium in Λ is a local
maximum of P . Pick x ∈ Λ. We must prove that x is necessarily a local maximum
of P . By Proposition 6, we just need to prove that, in the decomposition GA(x) =
C1 ∪ C2 ∪ ... ∪ CK , Ck is complete. Assume by contradiction that this is not the case.
By assumption there exists a maximal independent set M such that xM (the specialized
Nash equilibrium associated to M ) belongs to Λ. In other terms we can construct a
continuous map
t ∈ [0, 1] ↦ x(t) ∈ Λ

such that x(0) = x and x(1) = xM. Denote by (C_k^t)_{k=1,...,K(t)} the decomposition (13) associated to x(t). We can then define t∗ as the first instant at which this decomposition only contains complete subgraphs, i.e.

t∗ := inf{t > 0 : ∀k ∈ {1, ..., K(t)}, C_k^t is complete}.
Figure 11: An incomplete component. If, for instance, agents i1, i2 and i3 become inactive at time t∗, we obtain two complete components.
This quantity is well defined as C_k^1 is simply a singleton for any k, and t∗ ∈ ]0, 1]. By continuity, two facts hold: first, C_k^{t∗} is complete for all k; second, there exists τ > 0 such that (C_k^t)_k remains constant for t ∈ ]t∗ − τ, t∗[ (i.e. the set of agents in (C_k^t)_k is the same), and by definition of t∗ there exists k such that C_k^t is not complete ∀t ∈ ]t∗ − τ, t∗[.

Note that the only way for an incomplete component to become complete at time t∗ is when some agents are active right before t∗ and become inactive at t∗. This is illustrated in Figure 11. Thus there exist i1, i2, ..., iP such that

x_{ip}(t) > 0 and x_{ip}(t∗) = 0, ∀p = 1, ..., P, ∀t ∈ ]t∗ − τ, t∗[.

By continuity of t ↦ x(t) and the fact that x_{ip}(t∗) = 0, we necessarily have Σ_{j∈N_{ip}} xj(t∗) = 1. By definition of t∗,

Card({k = 1, ..., K(t∗) : N_{ip} ∩ C_k^{t∗} ≠ ∅}) ≥ 1.

We now distinguish two cases:

a) First suppose that, for p = 1, ..., P,

Card({k = 1, ..., K(t∗) : N_{ip} ∩ C_k^{t∗} ≠ ∅}) = 1.
Then there exists a unique vector (k(p))_{p=1,...,P} such that

N_{ip} ∩ C_{k(p)}^{t∗} ≠ ∅, ∀p = 1, ..., P,

and ∀j ∈ N_{ip} \ C_{k(p)}^{t∗}, xj(t∗) = 0. Since Σ_{j∈N_{ip}} xj(t∗) = 1, we therefore have C_{k(p)}^{t∗} ⊂ N_{ip} ∀p. Assume without loss of generality that, for t ∈ ]t∗ − τ, t∗[, the component that contains i1 is not complete and call it C_{k(1)}^t (keep in mind that this component remains constant on ]t∗ − τ, t∗[). Clearly C_{k(1)}^{t∗} is a subgraph of C_{k(1)}^t. Since C_{k(1)}^t is not complete and C_{k(1)}^{t∗} ⊂ N_{i1}, C_{k(1)}^t necessarily contains other vertices i2, ..., in (n ≤ P). Without loss of generality, for p = 1, ..., m (m ≤ n) we suppose that C_{k(p)}^{t∗} = C_{k(1)}^{t∗}. We already established that C_{k(1)}^{t∗} ⊂ N_{ip} for p ≤ m. If m = 1 then C_{k(1)}^{t∗} ∪ {i1} is complete. If m ≥ 2, suppose that there exist p, q ≤ m such that ip ∉ N_{iq}. Now by Proposition 6 there exists a specialized Nash equilibrium x̂ in Λ such that M(x̂) ⊂ A(x(t∗)), and in particular there exists j ∈ C_{k(1)}^{t∗} such that x̂j = 1. However we just proved that G[j, M(x̂)] is not complete, as it contains ip and iq, which is a contradiction by Lemma 6. As a consequence

C_{k(1)}^{t∗} ∪ {i1, ..., im}

must form a complete graph. Also n > m (because otherwise C_{k(1)}^t = C_{k(1)}^{t∗} ∪ {i1, ..., im} would be complete). Let r ≥ m + 1 and q ≤ m be such that ir ∈ N_{iq} and C_{k(r)}^{t∗} ≠ C_{k(1)}^{t∗} (by definition of m). But this brings a contradiction, as at time t ∈ ]t∗ − τ, t∗[ we would have, for any j ∈ C_{k(1)}^{t∗},

x_{iq}(t) + Σ_{l∈N_{iq}} xl(t) ≥ x_{ir}(t) + xj(t) + Σ_{l∈N_j} xl(t) ≥ x_{ir}(t) + 1 > 1.
b) Now suppose that there exists $p$ such that
\[
\{k = 1, \ldots, K(t^*) : N_{i_p} \cap C_k^{t^*} \neq \emptyset\} = \{1, \ldots, l\}
\]
with $l \geq 2$. Since $\sum_{l \in N_{i_p}} x_l(t^*) = 1$, we have $C_k^{t^*} \not\subset N_{i_p}$ for $k = 1, \ldots, l$ (remember that $\forall j \in C_k^{t^*}$, $x_j(t^*) > 0$, by definition). Again by Proposition 6, there exists $\hat{x} \in \Lambda \cap SPNE$ such that $M(\hat{x}) \subset A(x(t^*))$; necessarily $M(\hat{x}) = \{j_1, \ldots, j_{K(t^*)}\}$, with $j_k \in C_k^{t^*}$. Since $G[j_k, M(\hat{x})]$ is complete by assumption (and Lemma 6), $i_p \notin G[j_k, M(\hat{x})]$ for $k = 1, \ldots, l$; in other words, agent $i_p$ is strictly inactive in $\hat{x}$. Assume without loss of generality that $N_{i_p} \cap M(\hat{x}) = \{j_1, j_2\}$ and consider $M' := (M(\hat{x}) \cup \{j_2'\}) \setminus \{j_2\}$, with $j_2' \in C_2^{t^*} \setminus N_{i_p}$. Then $M'$ is also a maximal independent set and the corresponding SPNE belongs to $\Lambda$ (this is due to the fact that any inactive agent in $\hat{x}$ is either strictly inactive, or linked to some $j_k$, $k \neq 2$, or linked to both $j_2$ and $j_2'$; hence the path going from $M(\hat{x})$ to $M'$ is contained in $\Lambda$). However, $G[j_1, M']$ contains $i_p$ and is therefore not complete, a contradiction.

Figure 12: Case a) with $n = 3$, $m = 2$, $q = 2$, $r = 3$.

Proof of Claim 3. By Lemma 6, we just need to prove the reverse implication: assume
that $G[i, M(x)]$ is complete for any $i \in M(x)$. Denote $M(x) = \{1, \ldots, m\}$. Let $u$ be an admissible vector, that is, $u_j \geq 0$ for $j \notin M$. By Lemma 2,
\[
P(x + u) - P(x) = \sum_{i \in SI(x)} u_i \big(1 - (Gx)_i\big) - \frac{1}{2}\big(\|u\|^2 + \langle u, Gu \rangle\big).
\]
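This expansion can be verified numerically. The sketch below assumes the potential has the quadratic form $P(y) = \sum_i y_i - \frac{1}{2}(\|y\|^2 + \langle y, Gy \rangle)$, which is consistent with the expansion above (the exact statement of Lemma 2 lies outside this excerpt), and checks the identity at a specialized profile on a 5-node path whose strictly inactive agents are $SI(x) = \{1, 3\}$:

```python
import numpy as np

# Path graph on 5 nodes: 0-1-2-3-4. Specialized profile: the maximal
# independent set {0, 2, 4} plays 1, agents 1 and 3 play 0.
# Agents 1 and 3 each have two specialist neighbours, so (Gx)_i = 2 > 1:
# they are strictly inactive, SI(x) = {1, 3}.
G = np.zeros((5, 5))
for a, b in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    G[a, b] = G[b, a] = 1.0

x = np.array([1.0, 0.0, 1.0, 0.0, 1.0])

def P(y, G=G):
    # Assumed quadratic form of the potential (consistent with the expansion).
    return y.sum() - 0.5 * (y @ y + y @ G @ y)

rng = np.random.default_rng(0)
Gx = G @ x
for _ in range(100):
    u = rng.normal(scale=0.05, size=5)
    u[[1, 3]] = np.abs(u[[1, 3]])   # admissible: u_j >= 0 for j not in M(x)
    lhs = P(x + u) - P(x)
    rhs = sum(u[i] * (1 - Gx[i]) for i in [1, 3]) - 0.5 * (u @ u + u @ G @ u)
    assert np.isclose(lhs, rhs)
print("expansion verified")
```

Agents 0, 2 and 4 do not contribute to the first-order term because $1 - x_i - (Gx)_i = 0$ for them at this profile; agents 1 and 3 enter with coefficient $1 - (Gx)_i = -1$.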
Write $u_{\tilde{A}} = (u_1, u^1, u_2, u^2, \ldots, u_m, u^m)$, where $u^i$ is the restriction of $u$ to $C(i, M(x))$. Then, since $(Gx)_i > 1$ by definition of $SI(x)$ and $u_i \geq 0$ for $i \in SI(x)$, we have
\[
P(x + u) - P(x) \leq -\frac{1}{2}\big(\|u_{\tilde{A}}\|^2 + \langle u_{\tilde{A}}, G_{\tilde{A}} u_{\tilde{A}} \rangle\big)
\]
provided $\|u\|$ is small enough. Also let $\bar{u}^i = (u_i, u^i)$. Note that the $u^i$ are non-negative vectors (as these individuals are playing action 0), whereas $u_1, \ldots, u_m$ can be anything.
We will simply write $G[i]$ instead of $G[i, M(x)]$, and $G[i, j]$ for the $|C(i, M(x))| \times |C(j, M(x))|$ matrix corresponding to the edges between $C(i, M(x))$ and $C(j, M(x))$.
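The computation that follows ends by invoking $\lambda_{\min}(G[i]) = -1$ when $G[i]$ is the adjacency matrix of a complete graph, so that $\langle v, G[i]\, v \rangle \geq -\|v\|^2$ for every vector $v$. A quick numerical sanity check of this spectral fact (an illustrative sketch with numpy, not part of the original argument):

```python
import numpy as np

# Adjacency matrix of the complete graph K_n: all ones off the diagonal.
# Since A = J - I (J the all-ones matrix), its spectrum is {n-1, -1, ..., -1},
# so lambda_min(A) = -1 and <v, A v> >= -||v||^2 for every v.
for n in range(2, 8):
    A = np.ones((n, n)) - np.eye(n)
    eigs = np.linalg.eigvalsh(A)
    assert np.isclose(eigs.min(), -1.0)
    assert np.isclose(eigs.max(), n - 1.0)
    # Rayleigh-quotient consequence used in the proof:
    v = np.random.default_rng(n).normal(size=n)
    assert v @ A @ v >= -(v @ v) - 1e-9
print("lambda_min(K_n) = -1 verified for n = 2..7")
```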
Then
\[
\begin{aligned}
\langle u_{\tilde{A}}, G_{\tilde{A}} u_{\tilde{A}} \rangle
&= \sum_{i=1}^m \langle \bar{u}^i, G[i] \cdot \bar{u}^i \rangle + 2 \sum_{i<j} \langle u^i, G[i, j]\, u^j \rangle + \sum_{i=1}^m u_i \sum_{j \neq i} \sum_{k \in G[j]} G_{i,k}\, u_k \\
&= \sum_{i=1}^m \langle \bar{u}^i, G[i] \cdot \bar{u}^i \rangle + 2 \sum_{i<j} \langle u^i, G[i, j]\, u^j \rangle \\
&\geq \sum_{i=1}^m \langle (u_i, u^i), G[i] \cdot (u_i, u^i) \rangle \\
&\geq -\|u_{\tilde{A}}\|^2,
\end{aligned}
\]
where the last inequality comes from the fact that $G[i]$ is a complete graph, which implies $\lambda_{\min}(G[i]) = -1$. The theorem is proved.

References
Ballester, C., Calvó-Armengol, A., and Zenou, Y. (2006). Who's who in networks. Wanted: the key player. Econometrica, 74(5):1403–1417.
Benaïm, M., Hofbauer, J., and Sorin, S. (2005). Stochastic approximations and differential inclusions. SIAM Journal on Control and Optimization, 44:328–348.
Bervoets, S., Bravo, M., and Faure, M. (2016). Learning and convergence to Nash in continuous action games. Mimeo.
Bhat, S. P. and Bernstein, D. S. (2003). Nontangency-based Lyapunov tests for convergence and stability in systems having a continuum of equilibria. SIAM Journal on Control and Optimization, 42(5):1745–1775.
Bhat, S. P. and Bernstein, D. S. (2010). Arc-length-based Lyapunov tests for convergence and stability with applications to systems having a continuum of equilibria. Mathematics of Control, Signals, and Systems, 22(2):155–184.
Bramoullé, Y. and Kranton, R. (2007). Public goods in networks. Journal of Economic
Theory, 135(1):478–494.
Bramoullé, Y., Kranton, R., and D'Amours, M. (2014). Strategic interaction and networks. The American Economic Review, 104(3):898–930.
Bravo, M. and Faure, M. (2015). Reinforcement learning with restrictions on the action
set. SIAM Journal on Control and Optimization, 53(1):287–312.
Conley, C. (1978). Isolated Invariant Sets and the Morse Index. American Mathematical
Society.
Fudenberg, D. and Levine, D. (1998). The Theory of Learning in Games. MIT Press.
Hofbauer, J. and Sorin, S. (2006). Best response dynamics for continuous zero-sum games. Discrete and Continuous Dynamical Systems - Series B, 6(1):215–224.
Jackson, M. O., Rogers, B. W., and Zenou, Y. (2016). The economic consequences of
social network structure. Available at SSRN 2467812.
Kukushkin, N. S. (2015). Cournot tatonnement and potentials. Journal of Mathematical
Economics, 59:117–127.
Leslie, D. and Collins, E. (2006). Generalised weakened fictitious play. Games and
Economic Behavior, 56(2):285–298.
Monderer, D. and Shapley, L. S. (1996). Potential games. Games and Economic Behavior, 14(1):124–143.
Figure 13: Case b) with $l = 2$.