Collaboration in networks with randomly chosen agents∗
Zhiwei Cui†, Rui Wang‡
Abstract
The present paper considers a finite population of agents located in an arbitrary, fixed network. In each period, a fixed number of agents are randomly chosen to play a minimum effort game. They learn from their neighbors' experience through imitation of the most successful strategies, though they occasionally make mistakes. We show that in the long run all agents will choose the highest effort level and socially optimal collaboration will be guaranteed, provided that each agent's neighborhood is sufficiently large.
JEL: C72; D83
Keywords: Minimum effort game; Local observation; Random sampling; Imitation
∗ We are indebted to the editor and two anonymous referees for suggesting ways to improve the substance and exposition of this paper. We are also grateful to Zhigang Cao, Chih Chang, Hsiao-Chi Chen, Jonathan Newton and Chun-Lei Yang for helpful comments and suggestions. This paper was presented at the 2014 Beijing Economic Theory Workshop at the University of International Business and Economics and various other seminars; we gratefully acknowledge participants for their comments. This work was financially supported by the National Science Foundation of China (No. 61202425) and the Fundamental Research Funds for the Central Universities (YWF-13-D2-JC-11, YWF-14JGXY-016).
† Corresponding author. School of Economics and Management, Beihang University, Beijing 100191, PR China. Email: [email protected]
‡ School of Computer Science and Engineering, Beihang University, Beijing 100191, PR China. Email: [email protected]
1 Introduction
In economics and sociology, the study of interactions in networks is an active field of research;
for an overview, see Goyal (2011) and Özgür (2011). The literature has tended to focus on local
interaction (Weidenholzer, 2010 and Young, 1998). That is, individuals only interact with their
direct neighbors. Nevertheless, in a wide range of economic and social situations, it is possible
that individuals interact with strangers: their opponents vary over time, and interactions between the same group of individuals occur only with low frequency, as in peer-to-peer file sharing and anonymous online financial transactions. Another distinguishing and important feature
is that in each period only a small percentage of agents take part in the interaction. In order to
capture these phenomena, the present paper assumes that in each period, a fixed number of agents
are randomly chosen from a large population to participate in interactions and that the selected
agents make their choices based on the past experiences of their neighbors.
Formally, we consider a finite but large population of agents located in an arbitrary, fixed network. In each period, a fixed number of individuals from the whole population are randomly
chosen to play a minimum effort game, in which players choose among different effort levels and their payoffs depend on the minimum of all players' choices. In other
words, the minimum effort game models situations where the performance of a group is largely determined by the effort exerted by the least hardworking member.1 Each selected agent observes the effort levels and performances of his neighbors in the most recent interaction and imitates the most successful choice. Occasionally, mistakes occur in the revision of strategy.
We find that if each agent has a large number of neighbors, then no matter whether or not the network is connected, in the long run all agents will choose the highest effort level and socially optimal collaboration will be guaranteed, provided that the probability of random noise is sufficiently small.2 In the language of economic and social networks, the main results show that highly cohesive networks lead to the most efficient collaboration when the minimum effort game is repeatedly played by randomly chosen agents.3

1 The minimum effort game can be used to model collaboration among individuals (Alós-Ferrer and Weidenholzer, 2014 and Van Huyck, Battalio, and Beil, 1990).
2 By conducting laboratory experiments with a minimum effort game, Weber (2006) shows that slow growth and exposure of entrants to previous norms can alleviate the problem of large-group coordination failure.
3 Given that social and economic networks are precisely described as undirected graphs, Seidman (1983) proposes that the minimum degree over all nodes can be used as a measure of network cohesion.
Our work contributes to research on local interaction. Starting with the seminal papers of
Blume (1993, 1995) and Ellison (1993), much of the literature has studied local interaction games.
It is assumed that agents are located in a fixed network and interact with their direct neighbors
(Alós-Ferrer and Weidenholzer, 2007, 2008, 2014; Cui, 2014; Ellison, 2000; Eshel, Samuelson
and Shaked, 1998 and Morris, 2000). Two prominent dynamic adjustment rules are used in these models: myopic best reply and imitation. The present paper considers the possibility that each individual is unable to identify who his opponents are. Given this uncertainty, it is reasonable to assume that individuals learn from their neighbors' experiences and employ an imitation rule to update their choices.
Our paper also relates to research that explores the effects of random matching on the long-run outcomes of learning dynamics. Building on Kandori, Mailath and Rob (1993), Robson and Vega-Redondo (1996) show that for coordination games, a genuinely random pairing-up mechanism can lead to the emergence of the Pareto efficient equilibrium rather than the risk dominant one. Given that agents are located in regular networks, Anderlini and Ianni (1996) introduce the concept of a 1-factor that couples each agent with one of his neighbors and show that in the long run, different actions of a coordination game may survive at different locations. Khan (2014) develops a model where in each period individuals are randomly partitioned into pairs to play a coordination game and update their choices by imitating the most successful neighbors, and he offers sufficient conditions for the emergence of the Pareto efficient convention. Although opponents change over time, all of these papers assume that in each period, (1) all members take part in interactions and (2) each agent plays a 2 × 2 coordination game with another agent. The present paper instead considers the minimum effort game and assumes that in each period, only a small portion of a large population plays the game.
The most closely related paper is Alós-Ferrer and Weidenholzer (2014), where each individual plays the minimum effort game with all of his neighbors and employs an imitation rule.4 They show that the possibility of observing agents who are not direct neighbors might help to overcome the coordination problem when interactions are not too global. By contrast, we assume that the minimum effort game is played by randomly chosen agents who, when given the opportunity to participate in the game, can only observe the experience of their direct neighbors and modify their effort levels by imitation. We find that if each agent has a large number of neighbors, in the long run the socially optimal coordination will be attained. In the language of social and economic networks, to guarantee socially optimal collaboration, sparse networks are required in Alós-Ferrer and Weidenholzer (2014) while our paper requires a highly cohesive network. This difference may be due to the fact that in our paper, in each period, only a small proportion of the members of a large population are randomly chosen to participate in the minimum effort game.

4 Angus and Masson (2010) assume that information exchanges and interactions are specified by two distinct networks and use simulations to show the effect of this distinction on contagion processes.
The rest of the present paper is organized as follows. Section 2 describes the basic building
blocks of our model, including the minimum effort game, observation network, imitation and
dynamics. Section 3 investigates the convergence of unperturbed dynamics and stochastic stability
of randomly perturbed dynamics. Section 4 illustrates the main results using a stag-hunt game.
Finally, Section 5 concludes.
2 The model

2.1 Minimum effort game
The base game is an n-person minimum effort game with n ≥ 2. Intuitively, each player's payoff depends on the minimum among the player's own and all other players' effort levels.
Formally, let N = {1, · · · , n}, n ≥ 2, be the set of players. Each player i chooses a level of personal effort, ei, from the same set E = {e1, e2, · · · , em}, where m > 1 and 0 < e1 < e2 < · · · < em. Let (e1, e2, · · · , en) denote a strategy profile of all n players' effort levels, and let →e represent the strategy profile in which all players choose the same effort level e, for any e ∈ E. Let (e1, · · · , ei−1, ei+1, · · · , en) denote the effort level choices of all players except player i.
For the choice of effort level ei, player i bears a cost δ × ei, where 0 < δ < 1. So, given a strategy profile (e1, e2, · · · , en), player i's payoff is specified by

πi(e1, e2, · · · , en) = min_{1≤j≤n} ej − δ × ei.

Hence the minimum effort game is defined as Γ = (N, E^n, {πi}_{i∈N}). Note that for each player, the highest payoff brought about by effort level e, e ∈ E, is (1 − δ)e, which is attained when the other players' effort levels are no less than e. Thus, the highest possible payoff is (1 − δ)em. Let Π = {πi(e) : i ∈ N and e ∈ E^n} denote the set of each player's possible payoffs. Without loss of generality, it is assumed that 0 ∉ Π. Let Π̂ be the union of Π and {0}.
Given (e1, · · · , ei−1, ei+1, · · · , en), player i's best reply is to choose effort level min_{j∈N, j≠i} ej. Thus, for the minimum effort game Γ, the set of Nash equilibria coincides with {→e : e ∈ E}, and each →e is a strict Nash equilibrium.
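As a concrete illustration (our own sketch, not part of the formal model), the following Python snippet encodes πi for illustrative values of δ and E and checks that every uniform profile is a strict Nash equilibrium:

```python
# Minimal sketch of the minimum effort game; delta and the effort grid
# are illustrative choices, not values fixed by the model.
DELTA = 0.5
EFFORTS = [1, 2, 3]  # e1 < e2 < e3

def payoff(i, profile, delta=DELTA):
    """pi_i(e_1,...,e_n) = min_j e_j - delta * e_i."""
    return min(profile) - delta * profile[i]

def is_strict_nash(profile, delta=DELTA):
    """Check that every unilateral deviation strictly lowers the deviator's payoff."""
    for i, e_i in enumerate(profile):
        base = payoff(i, profile, delta)
        for e in EFFORTS:
            if e == e_i:
                continue
            deviation = list(profile)
            deviation[i] = e
            if payoff(i, deviation, delta) >= base:
                return False
    return True

# Every uniform profile is a strict Nash equilibrium.
assert all(is_strict_nash([e] * 3) for e in EFFORTS)
```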
2.2 Network and local observation
An observation system consists of a finite population of agents such that when an agent is selected to participate in the minimum effort game Γ, he can only observe and make use of the information of members from a subpopulation.
Formally, let I = {1, · · · , I} be a set of agents with I ≥ n. For any two agents i, j ∈ I, the pairwise relationship between them is captured by a binary variable gij ∈ {0, 1}: gij = 1 if agents i and j can observe each other's effort level and performance in the most recent interaction, and gij = 0 otherwise; conventionally, it is assumed that gii = 1. In other words, there is a dichotomy of observation: mutual observation or no observation. g = {(gij)i,j∈I} is referred to as an observation network over the set I, and for any observation network g, the pair (I, g) defines an undirected graph, referred to as an observation system.
For any agent i ∈ I, let the information neighborhood M(i) = {j ∈ I : gij = 1} be the set of agents whose effort levels and performances in the most recent interaction can be observed by agent i. Without loss of generality, assume that each agent i's information neighborhood M(i) contains two or more elements; that is, besides himself, i is able to observe at least one other agent. It is straightforward to see that for any two distinct agents i and j, j ∈ M(i) implies i ∈ M(j), and j ∉ M(i) is equivalent to i ∉ M(j).
For any subset I′ of the set I, the subgraph (I′, (gij)i,j∈I′) is a component of the graph g if (1) for any two distinct agents i and j from I′, there exists {i1, · · · , iL} ⊆ I′ satisfying L ≥ 2, i1 = i, iL = j and g_{i_l i_{l+1}} = 1 for any l, 1 ≤ l ≤ (L − 1), and (2) there is no strict superset I″ of I′ for which (1) also holds. Let c denote a component. For any component c = (I′, (gij)i,j∈I′), I′ will be referred to as the set of members of c, and let |c| denote the number of agents belonging to I′.5 Without confusion, we use i ∈ c (i ∉ c) to represent i ∈ I′ (i ∉ I′). Let C be the set of all components, and let η(g) be the number of components. An observation network g is connected if the graph (I, g) has a unique component.

5 For any finite set X, the cardinality |X| is the number of elements belonging to X.
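The objects just defined are straightforward to compute. The sketch below (the adjacency representation and the example network are our own illustration; the network is the two-component example used in Section 3.1) recovers the components of an observation network by breadth-first search, together with the cohesion measure min_i |M(i)|:

```python
from collections import deque

def components(neighbors):
    """neighbors[i] = M(i), the information neighborhood of i (with i in it).
    Returns the list of components of the observation network as sets."""
    unseen = set(neighbors)
    comps = []
    while unseen:
        root = unseen.pop()
        comp, frontier = {root}, deque([root])
        while frontier:  # breadth-first search within one component
            j = frontier.popleft()
            for k in neighbors[j]:
                if k in unseen:
                    unseen.discard(k)
                    comp.add(k)
                    frontier.append(k)
        comps.append(comp)
    return comps

# Two components: c1 = {1,2,3} (a triangle) and c2 = {4,5,6,7} (a cycle).
M = {1: {1, 2, 3}, 2: {1, 2, 3}, 3: {1, 2, 3},
     4: {4, 5, 7}, 5: {4, 5, 6}, 6: {5, 6, 7}, 7: {4, 6, 7}}
assert sorted(map(sorted, components(M))) == [[1, 2, 3], [4, 5, 6, 7]]
min_degree = min(len(M[i]) for i in M)  # network cohesion, min_i |M(i)|
```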
2.3 Imitation and dynamics
In each period, among the whole population I = {1, · · · , I}, only n distinct agents are randomly
chosen to play the minimum effort game Γ. In our analysis, n stays fixed throughout. That is, the
number of selected agents does not change as time goes on. Each selected agent observes effort
levels and performances of members belonging to his own information neighborhood in the most
recent interaction and makes his choice following an imitation rule.
For the whole population I, let I_n = {I′ : I′ ⊆ I and |I′| = n} be the set of all subsets of I of size n. Assume that the random sampling mechanism ν is a probability distribution on the set I_n satisfying ν(I′) > 0 for any I′ ∈ I_n.
Time is modeled discretely: t = 0, 1, 2, · · · . In each period t, a group I′(t) of n distinct agents, I′(t) ∈ I_n, is randomly chosen with strictly positive probability ν(I′(t)) > 0 to play the minimum effort game Γ.
Each agent i's state in period t, t = 0, 1, 2, · · · , is given by si(t) = (ei(t), ui(t)). Now we specify how the state (ei(t), ui(t)) is determined. First, consider agent i, i ∈ I′(t). If t = 0 or sj(t − 1) = (em, 0) for all j ∈ M(i), agent i randomly chooses one effort level from E = {e1, e2, · · · , em}; otherwise, agent i observes his neighbors' (including his own) choices of effort levels and performances in the most recent interaction and imitates the most successful alternative; formally, effort level ei(t) is determined by

ei(t) ∈ {ej(t − 1) : j ∈ M(i) and uj(t − 1) ≥ uj′(t − 1) for all j′ ∈ M(i)}.   (1)
If the set on the right-hand side of Equation (1) is not a singleton, every element is chosen with strictly positive probability. Then, the minimum effort game Γ is played by the members of I′(t) with {ej(t)}_{j∈I′(t)}. As a result, agent i, i ∈ I′(t), receives the payoff

ui(t) = min_{j∈I′(t)} ej(t) − δ × ei(t).

In addition, for any agent i, i ∉ I′(t), si(t) = (em, 0) if t = 0 and si(t) = si(t − 1) if t ≥ 1.
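To make the timing concrete, here is a stylized Python rendering of one period of the unperturbed dynamics; the uniform sampling mechanism ν and the uniform tie-breaking rule are illustrative choices consistent with, but not dictated by, the assumptions above:

```python
import random

def step(state, neighbors, n, efforts, delta):
    """One period of the unperturbed dynamics.
    state[i] = (e_i, u_i); only the n selected agents revise and play."""
    selected = random.sample(list(state), n)    # nu: uniform over size-n groups
    e_max = efforts[-1]
    new_efforts = {}
    for i in selected:                          # revise from period-(t-1) states
        obs = [state[j] for j in neighbors[i]]
        if all(s == (e_max, 0) for s in obs):   # no usable experience observed
            new_efforts[i] = random.choice(efforts)
        else:
            u_best = max(u for _, u in obs)     # imitate the most successful
            new_efforts[i] = random.choice([e for e, u in obs if u == u_best])
    e_min = min(new_efforts.values())           # play the minimum effort game
    for i in selected:
        state[i] = (new_efforts[i], e_min - delta * new_efforts[i])
    return state
```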
The strategy revision process defines a Markov chain over the state space, which we identify with the set Ω = E^I × Π̂^I. This process will be denoted by {S(t)}_{t∈N} and referred to as the unperturbed dynamics. An absorbing set is a minimal subset of Ω with the property that, under the unperturbed dynamics, the probability of exit from it is zero. So the possible states the unperturbed dynamics converges to are the states contained in absorbing sets. An absorbing state is an element that forms a singleton absorbing set. For any absorbing set Ω̄, the basin of attraction D(Ω̄) is the set of states from which the unperturbed dynamics converges to Ω̄ with probability one.
There exists a multiplicity of absorbing sets. For instance, for every effort level e, e ∈ E, (e1, (1 − δ)e1) is an absorbing state, where 1 is the I-dimensional vector with each element equal to 1. Indeed, assume that all agents have participated in the minimum effort game Γ during the first t periods and that ej(t) = e and uj(t) = (1 − δ)e for every j ∈ I. When agent i, i ∈ I, is randomly chosen to participate in the minimum effort game Γ in period t + 1, he observes that all neighbors undertake the same effort level e and receive the same payoff (1 − δ)e. Agent i will therefore also choose effort level e and receive the payoff (1 − δ)e.
To select among all possible absorbing sets, we employ the technique of introducing random noise and identifying the most likely long-run equilibria (Ellison, 1993; Kandori, Mailath and Rob, 1993 and Young, 1993). Suppose that, if selected to participate in the minimum effort game Γ, each agent makes mistakes with probability ε, ε ∈ (0, 1); in this case, the agent randomly chooses his effort level. The most common specification of mistakes in the economics literature is for random noise to be independent across agents and periods. The perturbed dynamics ({S^ε(t)}_{t∈N}, Ω) are thereby defined. For each ε, ε ∈ (0, 1), {S^ε(t)}_{t∈N} is an irreducible Markov chain, so it has a unique invariant distribution µ^ε. A state s is stochastically stable if µ∗(s) > 0, where µ∗ = lim_{ε→0} µ^ε.
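Stochastic stability concerns the limit ε → 0, but for intuition one can simulate the perturbed dynamics at a small fixed ε and record how often the population is monomorphic at each effort level. A rough sketch, parallel to the unperturbed `step` above (the mistake specification and run length are illustrative):

```python
import random

def noisy_step(state, neighbors, n, efforts, delta, eps):
    """Perturbed analogue of step(): each selected agent trembles with
    probability eps; then the chosen group plays the minimum effort game."""
    selected = random.sample(list(state), n)
    e_max = efforts[-1]
    new_efforts = {}
    for i in selected:
        if random.random() < eps:               # mistake: random effort level
            new_efforts[i] = random.choice(efforts)
            continue
        obs = [state[j] for j in neighbors[i]]
        if all(s == (e_max, 0) for s in obs):
            new_efforts[i] = random.choice(efforts)
        else:
            u_best = max(u for _, u in obs)
            new_efforts[i] = random.choice([e for e, u in obs if u == u_best])
    e_min = min(new_efforts.values())
    for i in selected:
        state[i] = (new_efforts[i], e_min - delta * new_efforts[i])
    return state

def monomorphic_frequencies(neighbors, n, efforts, delta, eps, horizon=200_000):
    """Crude estimate: count periods in which all agents hold the same effort."""
    state = {i: (efforts[-1], 0) for i in neighbors}   # initial condition
    counts = {e: 0 for e in efforts}
    for _ in range(horizon):
        state = noisy_step(state, neighbors, n, efforts, delta, eps)
        levels = {e for e, _ in state.values()}
        if len(levels) == 1:
            counts[levels.pop()] += 1
    return counts
```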
3 Main results
Having spelled out the basic building blocks of the model, this section examines the likelihood of
emergence of the most efficient collaboration. Intuitively, we want to identify sufficient conditions
under which, in the long run, every selected agent will choose the highest effort level em , and
from which the socially optimal collaboration is guaranteed. With this aim, we investigate the
convergence of unperturbed dynamics and stochastic stability of randomly perturbed dynamics.
3.1 Convergence of the unperturbed dynamics
In the following theorem, we characterize the absorbing sets of the unperturbed dynamics {S(t)}_{t∈N}.

Theorem 1 Consider the unperturbed dynamics {S(t)}_{t∈N}. Given any two states

s = ((e1, · · · , eI), (u1, · · · , uI)) and s′ = ((e′1, · · · , e′I), (u′1, · · · , u′I))

from the same absorbing set Ω̄, ei = ej = e′i = e′j holds for any two distinct members i and j of a component c.

The proof is provided in the Appendix.
Theorem 1 shows that in the long run, all members belonging to a component choose the same effort level. The intuition behind this result is as follows. Consider a component c = (I′, (gij)i,j∈I′). Without loss of generality, assume that |I′| ≤ n. Suppose that all agents from I′ have been selected to participate in the game Γ in period t0, and that the imitation rule prescribes that agent i0, i0 ∈ I′, will choose effort level e1 if randomly chosen in period t0 + 1. In the following two periods t0 + 1 and t0 + 2, all elements of I′ are selected to play Γ; that is, I′ ⊆ I′(t0 + 1) and I′ ⊆ I′(t0 + 2). Owing to the imitation rule, all elements of M(i0) choose effort level e1 in period t0 + 2. Following the same logic, with strictly positive probability, effort level e1 spreads from i0 to i0's neighbors, and finally to all agents belonging to the component c, provided that members of I′ are consecutively selected to play the minimum effort game Γ.
For an unconnected observation network g, absorbing sets can be divided into two types. When all individuals from the whole population I choose the same effort level e, e ∈ E, the absorbing set is the monomorphic absorbing state (e1, (1 − δ)e1). However, when there exist two distinct components whose members choose different effort levels, the absorbing set contains multiple elements. We employ an example to illustrate this result.
Consider a society I = {1, · · · , 7} whose members are randomly chosen to play a 2-person minimum effort game. The observation system (I, g) consists of two components:

c1 = ({1, 2, 3}, g12 = g23 = g13 = 1),
c2 = ({4, 5, 6, 7}, g45 = g56 = g67 = g74 = 1).

Figure 1: An illustration of an observation system consisting of two components.

Assume that in some period t0, (1) all agents have been selected to participate in the 2-person minimum effort game and (2) all elements of c1 (c2) have the same effort level e1 (e2). Following the imitation rule specified by Equation (1), all agents belonging to c1 (c2) will choose effort level e1 (e2) forever. However, in the following periods, each selected agent's payoff will depend on whether or not the other player comes from the same component as himself, and thus will not be constant. Therefore, the unperturbed dynamics {S(t)}_{t∈N} converges to an absorbing set consisting of multiple states with probability one.
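To see why such an absorbing set contains multiple states, note that with n = 2 the realized payoff of a c2 member depends on the opponent's component. A tiny numerical check (the effort levels and δ are illustrative):

```python
delta, e_low, e_high = 0.4, 1.0, 2.0                   # illustrative values
effort = {1: e_low, 2: e_low, 3: e_low,                # component c1 plays e1
          4: e_high, 5: e_high, 6: e_high, 7: e_high}  # component c2 plays e2

def pair_payoffs(i, j):
    """Payoffs of a 2-person minimum effort game between agents i and j."""
    m = min(effort[i], effort[j])
    return m - delta * effort[i], m - delta * effort[j]

print(pair_payoffs(4, 5))  # both from c2: (1.2, 1.2), i.e. (1-delta)*e_high each
print(pair_payoffs(4, 1))  # across components: (0.2, 0.6); agent 4 receives
                           # e_low - delta*e_high, so his payoff is not constant
```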
As an immediate consequence, Theorem 1 yields the following proposition for connected observation networks.

Proposition 1 Assume that the observation network g is connected. Then the unperturbed dynamics {S(t)}_{t∈N} converges to one of the monomorphic limit states with probability one.

Proposition 1 says that when the observation network g is connected, in the long run all individuals in population I choose the same effort level e and, as a consequence, obtain the same payoff (1 − δ)e whenever selected to play the minimum effort game Γ. In other words, connectedness of the observation network leads to the disappearance of heterogeneity in agents' choices of effort levels, and each absorbing set turns out to be a singleton.
3.2 Stochastic stability of the perturbed dynamics

In this subsection, we first explore stochastic stability of the randomly perturbed dynamics for general observation networks. Then, we restrict our attention to connected observation networks.

3.2.1 General observation networks
Before proceeding to the results concerning stochastic stability of the randomly perturbed dynamics {S^ε(t)}_{t∈N}, ε ∈ (0, 1), we provide two lemmas that characterize the transitions across the different absorbing sets.
Consider an absorbing set Ω̄. Owing to Theorem 1, for any e ∈ E, all elements of Ω̄ have the same number of agents choosing effort level e. Without confusion, let ρ(Ω̄) be the number of agents choosing the highest effort level em, and let ς(Ω̄) denote the number of agents choosing the lowest effort level e1.
The following lemma explores the transition from any absorbing set to (em 1, (1 − δ)em 1) and shows how it is possible to leave the basin of attraction D((e1 1, (1 − δ)e1 1)).

Lemma 1 Consider the randomly perturbed dynamics ({S^ε(t)}_{t∈N}, Ω), ε ∈ (0, 1). For any absorbing set Ω̄, Ω̄ ≠ {(em 1, (1 − δ)em 1)}, there exists a sequence of distinct absorbing sets (Ω̄0, Ω̄1, · · · , Ω̄l) such that
• Ω̄0 = Ω̄ and Ω̄l = {(em 1, (1 − δ)em 1)};
• ρ(Ω̄1) ≥ n and, compared with Ω̄l′, for Ω̄l′+1 one more component satisfies that all of its members choose em, for any 1 ≤ l′ ≤ (l − 1); and
• to transit from Ω̄0 to Ω̄1, max{n − ρ(Ω̄0), 1} mistakes are required and sufficient, and one single mutation can induce the transition from Ω̄l′ to Ω̄l′+1 for any 1 ≤ l′ ≤ (l − 1).
Moreover, to exit from D((e1 1, (1 − δ)e1 1)), n mutations are needed.

The proof is provided in the Appendix.
Owing to the definition of the minimum effort game Γ, to make the choice em optimal, n agents must take em and participate in Γ in the same period. Through this interaction, these agents receive the maximum possible payoff (1 − δ)em, and they can serve as good examples. As a result, the highest effort level em can spread to all other agents who are in the same components as these n agents. The intuition underlying Lemma 1 is as follows. Without loss of generality, assume that η(g) > n. For any absorbing set Ω̄′ with ρ(Ω̄′) < n, when all individuals choosing em and n − ρ(Ω̄′) other agents are selected to play Γ, then owing to n − ρ(Ω̄′) mistakes, it is possible that they all take em. Then, {S^ε(t)}_{t∈N} can converge to another absorbing set where the number of agents playing the highest effort level em is no less than n. For any absorbing set Ω̄ with ρ(Ω̄) ≥ n and (em 1, (1 − δ)em 1) ∉ Ω̄, assume that all members of the component c1 do not take em. When (n − 1) individuals who choose em and one member of c1 are selected to play Γ, then due to one single mutation, it is likely that they all choose em. Then, ({S^ε(t)}_{t∈N}, Ω) can converge to another absorbing set where, compared with Ω̄, there is one more component c1 such that all of its members adopt em.
The following lemma examines the transition from any other absorbing set to (e1 1, (1 − δ)e1 1) and shows how it is possible to exit from the basin of attraction D((em 1, (1 − δ)em 1)).

Lemma 2 Consider the randomly perturbed dynamics ({S^ε(t)}_{t∈N}, Ω), ε ∈ (0, 1). For any absorbing set Ω̄, Ω̄ ≠ {(e1 1, (1 − δ)e1 1)}, there is a sequence of distinct absorbing sets (Ω̄0, Ω̄1, · · · , Ω̄k) such that
• Ω̄0 = Ω̄ and Ω̄k = {(e1 1, (1 − δ)e1 1)};
• compared with Ω̄k′, for the absorbing set Ω̄k′+1 one more component satisfies that all of its members choose e1, for any 0 ≤ k′ ≤ (k − 1); and
• to transit from Ω̄0 to Ω̄1, if ς(Ω̄) = 0, min_{i∈I} ⌈(|M(i)| − 1)/(n − 1)⌉ mistakes are needed and sufficient; otherwise, one single mistake can lead to the transition; and one single mutation can induce the transition from Ω̄k′ to Ω̄k′+1 for any 1 ≤ k′ ≤ (k − 1).
Moreover, to leave D((em 1, (1 − δ)em 1)), min_{i∈I} ⌈(|M(i)| − 1)/(n − 1)⌉ mutations are required.

The proof is provided in the Appendix.
The intuition underlying Lemma 2 is as follows. Without loss of generality, we focus on the exit from D((em 1, (1 − δ)em 1)). If one element of the set M(i) has chosen effort level em and received a payoff (1 − δ)em in the most recent interaction, then in light of the deterministic imitation rule given by Equation (1), agent i will also choose effort level em if selected to play the minimum effort game Γ. On the other hand, consider agent i0 where i0 ∈ arg min_{i∈I} |M(i)| and i0 is a member of the component c1 = (I′1, (gij)i,j∈I′1). Provided that in the most recent interaction, (1) agent i0 has chosen effort level e1 and obtained a payoff (1 − δ)e1 and (2) each agent j, j ∈ M(i0) \ {i0}, has chosen effort level em and obtained a payoff e1 − δ × em, agent i0 will choose effort level e1 with certainty. The most likely way to bring about a state with this property is for agent i0 and his neighbors to be consecutively selected to play the game Γ while agent i0 takes e1 by mistake each time; ⌈(|M(i0)| − 1)/(n − 1)⌉ mutations are required and sufficient for this. Then, following the unperturbed dynamics, with strictly positive probability, {S^ε(t)}_{t∈N} converges to an absorbing set where all members of c1 select effort level e1.
With the help of Lemma 1 and Lemma 2, we are able to present the following result concerning stochastic stability of the randomly perturbed dynamics.

Theorem 2 Consider the randomly perturbed dynamics ({S^ε(t)}_{t∈N}, Ω), ε ∈ (0, 1). Then,
• only (em 1, (1 − δ)em 1) is stochastically stable provided that min_{i∈I} ⌈(|M(i)| − 1)/(n − 1)⌉ > n;
• the set of stochastically stable equilibria is given by {(e1, (1 − δ)e1) : e ∈ E} when min_{i∈I} ⌈(|M(i)| − 1)/(n − 1)⌉ = n; and
• if min_{i∈I} ⌈(|M(i)| − 1)/(n − 1)⌉ < n, (e1 1, (1 − δ)e1 1) is the unique stochastically stable equilibrium.

The proof is provided in the Appendix.
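Since the condition in Theorem 2 is a simple arithmetic comparison, any observation system can be classified directly. A small helper function (our own illustration, with math.ceil implementing ⌈·⌉):

```python
import math

def theorem2_case(neighbors, n):
    """Compare min_i ceil((|M(i)|-1)/(n-1)) with n, as in Theorem 2."""
    threshold = min(math.ceil((len(M_i) - 1) / (n - 1))
                    for M_i in neighbors.values())
    if threshold > n:
        return "only the all-em state is stochastically stable"
    if threshold == n:
        return "every monomorphic state is stochastically stable"
    return "only the all-e1 state is stochastically stable"
```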
When each agent is able to acquire abundant information about other agents' choices of effort levels and their performances in the most recent interaction, and {S^ε(t)}_{t∈N} starts from (em 1, (1 − δ)em 1), it is very likely that each agent observes some neighbor whose state is (em, (1 − δ)em) when selected to participate in the minimum effort game Γ. Then, in the long run, following the randomly perturbed imitation rule, each selected agent will choose effort level em and receive a payoff (1 − δ)em, provided that the probability of random noise goes to zero. Therefore, the socially optimal outcome can be attained.
Consider instead the case where some agent i0 does not have enough neighbors. To leave the basin of attraction D((e1 1, (1 − δ)e1 1)), n mutations are required. And when the initial state is (em 1, (1 − δ)em 1), it is possible that agent i0 and his neighbors are consecutively selected to play the game Γ and each time agent i0 adopts effort level e1 by mistake, after which e1 can spread to all other agents in the same component as i0. Moreover, one single mutation can induce all members of one more component to switch from em to e1. That is, to exit from D((e1 1, (1 − δ)e1 1)) is more difficult than to enter it when stochastic mutations vanish, and (e1 1, (1 − δ)e1 1) is the unique long-run equilibrium.
Now we present an example to explore what happens when neither of the strict inequalities in Theorem 2 holds. Assume that agents are randomly chosen to play an n-person minimum effort game Γ with n > 2 and E = {e1, e2}, and that the observation system (I, g) consists of two components c1 and c2 where min_{i∈c1} ⌈(|M(i)| − 1)/(n − 1)⌉ = min_{i∈I} ⌈(|M(i)| − 1)/(n − 1)⌉ = n. Theorem 1 shows that there are four absorbing sets,

(e1c1, e1c2), (e1c1, e2c2), (e2c1, e1c2) and (e2c1, e2c2),

where for any e, e′ ∈ E, (ec1, e′c2) represents the absorbing set in which all members of c1 choose effort level e and all members of c2 choose effort level e′. Following Lemma 1, to exit from D((e1c1, e1c2)), n mutations are sufficient and can induce the transition to an absorbing set in which all members of one component choose e2; one further mutation then leads to (e2c1, e2c2). By Lemma 2, we have that the same numbers of mutations suffice for the reverse transitions toward (e1c1, e1c2). Hence the minimal mutation costs in the two directions coincide and, in line with the second case of Theorem 2, both monomorphic absorbing states are stochastically stable.
3.2.2 Connected observation networks
Proposition 1 shows that for connected observation networks, the unperturbed dynamics converge to one of the absorbing states with probability one. Now we check the robustness of the various absorbing states to random noise. As in the preceding subsubsection, we first present two lemmas that investigate how the perturbed dynamics travel across the absorbing states.
The following lemma shows how, starting from an absorbing state, it is possible to complete the transition to another absorbing state where all agents undertake a strictly higher effort level.
Lemma 3 Assume that the observation network g is connected. Consider the randomly perturbed dynamics ({S^ε(t)}_{t∈N}, Ω), ε ∈ (0, 1). For any two effort levels e and e′, e < e′, to induce the transition from the absorbing state (e1, (1 − δ)e1) to the absorbing state (e′1, (1 − δ)e′1), n mutations are both necessary and sufficient.

The proof is the same as the proof of Lemma 1 and is omitted here.

The following lemma examines the possibility of a transition from an absorbing state to another limit state where all agents choose a strictly lower effort level.
Lemma 4 Assume that the observation network g is connected. Consider the randomly perturbed dynamics ({S^ε(t)}_{t∈N}, Ω), ε ∈ (0, 1). For any two effort levels e and e′, e < e′, to induce the transition from the absorbing state (e′1, (1 − δ)e′1) to the absorbing state (e1, (1 − δ)e1), min_{i∈I} ⌈(|M(i)| − 1)/(n − 1)⌉ mutations are both necessary and sufficient.

The proof resembles the proof of Lemma 2 and is omitted here.
Combining Lemma 3 with Lemma 4, we can now present the following theorem about stochastic stability when the observation network g is connected.

Theorem 3 Assume that the observation network g is connected. Consider the randomly perturbed dynamics ({S^ε(t)}_{t∈N}, Ω), ε ∈ (0, 1). Then,
• if min_{i∈I} ⌈(|M(i)| − 1)/(n − 1)⌉ > n, the absorbing state (em 1, (1 − δ)em 1) is the unique stochastically stable equilibrium;
• if min_{i∈I} ⌈(|M(i)| − 1)/(n − 1)⌉ = n, all absorbing states are stochastically stable; and
• if min_{i∈I} ⌈(|M(i)| − 1)/(n − 1)⌉ < n, only the absorbing state (e1 1, (1 − δ)e1 1) turns out to be stochastically stable.

The proof is provided in the Appendix.
Theorem 3 shows that for a connected observation network, if each agent is able to acquire information about n² or more other agents' choices of effort levels and payoffs in the most recent interaction, then in the long run all randomly chosen agents choose the highest effort level em and obtain the maximum possible payoff (1 − δ)em, provided that stochastic mutations vanish. That is, the socially optimal collaboration is achieved. However, when there exists an agent i0 who observes the information of only (n − 1)² or fewer other agents, the basin of attraction of (e1 1, (1 − δ)e1 1) is more resistant against random noise than those of the other absorbing states as the probability of random noise goes to zero. Therefore there may exist a tension between individual optimality and social efficiency.
As an immediate consequence, Theorem 3 yields the following corollary.

Corollary 1 Assume that (1) the whole population has no more than (n − 1)² members, that is, I ≤ (n − 1)², and (2) the observation network g is connected. Consider the randomly perturbed dynamics ({S^ε(t)}_{t∈N}, Ω), ε ∈ (0, 1). Then, only the absorbing state (e1 1, (1 − δ)e1 1) is stochastically stable.

Corollary 1 shows that if in each period a relatively high proportion of the members of I are randomly chosen to play the minimum effort game Γ, then in the long term only the absorbing state (e1 1, (1 − δ)e1 1) will be observed with strictly positive probability. That is, the society is trapped in the worst outcome. The intuition behind this result is as follows: in a small population, it is very easy for effort level e1 to become the "best" choice of some specific agent and then spread to the whole population.
4 Stag-hunt dilemma in networks with randomly chosen agents

In this section, we consider a special case of the minimum effort game where n equals 2 and E contains only two distinct elements e1 and e2. Formally, this 2 × 2 version of the minimum effort game Γ can be represented by the following table:

              e1                           e2
 e1    (1 − δ)e1, (1 − δ)e1      (1 − δ)e1, e1 − δe2
 e2    e1 − δe2, (1 − δ)e1       (1 − δ)e2, (1 − δ)e2

where, as before, it is assumed that 0 < δ < 1 and e1 < e2. For clarity, let Γ2,2 denote this 2 × 2 example of the minimum effort game Γ.
The 2 × 2 game Γ2,2 has two strict Nash equilibria, (e1, e1) and (e2, e2). Further, by a simple calculation, the game Γ2,2 can be transformed into

          e1        e2
 e1     δ, δ      δ, 0
 e2     0, δ      1, 1

It is straightforward to check that the game Γ2,2 can be regarded as a version of the stag-hunt dilemma where the effort levels e1 and e2 correspond to the strategies "Hare" and "Stag", respectively.
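One way to obtain the second table is the positive affine rescaling u ↦ (u − (e1 − δe2)) / ((1 − δ)e2 − (e1 − δe2)), which maps e1 − δe2 to 0, (1 − δ)e2 to 1, and (1 − δ)e1 to δ while preserving best responses. A quick numerical check with illustrative parameter values:

```python
# Check that the affine rescaling u -> (u - lo) / (hi - lo) with
# lo = e1 - delta*e2 and hi = (1 - delta)*e2 reproduces the delta/0/1 table.
e1, e2, delta = 1.0, 2.0, 0.4   # illustrative values, 0 < delta < 1, e1 < e2
lo, hi = e1 - delta * e2, (1 - delta) * e2

def rescale(u):
    return (u - lo) / (hi - lo)

assert abs(rescale((1 - delta) * e1) - delta) < 1e-12  # (e1, e1) cell -> delta
assert abs(rescale(e1 - delta * e2) - 0.0) < 1e-12     # miscoordination -> 0
assert abs(rescale((1 - delta) * e2) - 1.0) < 1e-12    # (e2, e2) cell -> 1
```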
Theorem 2 then immediately yields the following proposition concerning the long-run outcome of the randomly perturbed dynamics when the stag-hunt game is repeatedly played by two randomly chosen agents.

Proposition 2 Consider the randomly perturbed dynamics ({S^ε(t)}_{t∈N}, Ω), ε ∈ (0, 1), for the stag-hunt game Γ2,2. Then, the absorbing state (e2 1, (1 − δ)e2 1) is the unique stochastically stable equilibrium if min_{i∈I} |M(i, g)| − 1 > max{2, η(g)}.
When the observation network g is connected, Proposition 1 tells us that for the unperturbed dynamics there are only two possible limit states, (e1 1, (1 − δ)e1 1) and (e2 1, (1 − δ)e2 1), provided that the stag-hunt game is repeatedly played by a pair of randomly chosen agents. Furthermore, the following proposition concerns stochastic stability of the randomly perturbed dynamics.

Proposition 3 Assume that the observation network g is connected. Consider the randomly perturbed dynamics ({S^ε(t)}_{t∈N}, Ω), ε ∈ (0, 1), for the stag-hunt game Γ2,2. Then,
• only the limit state (e2 1, (1 − δ)e2 1) is stochastically stable if min_{i∈I} |M(i, g)| > 3;
• when (1) min_{i∈I} |M(i, g)| ≥ 3 and (2) there exists one agent i0 such that |M(i0, g)| = 3, both (e1 1, (1 − δ)e1 1) and (e2 1, (1 − δ)e2 1) are stochastically stable; and
• only the limit state (e1 1, (1 − δ)e1 1) is stochastically stable if there exists one agent i0 such that |M(i0, g)| = 2.

The first result is a straightforward application of Theorem 3; the second follows from the fact that two mutations are both sufficient and necessary to induce the transition from one limit state to the other when each agent has two or more "neighbors" and some agent has exactly two "neighbors"; and the third follows from the fact that one single mutation can lead to the transition from (e2 1, (1 − δ)e2 1) to (e1 1, (1 − δ)e1 1) if some agent has only one "neighbor".
5 Concluding remarks

The present paper has developed a model where the minimum effort game is repeatedly played by a fixed number of randomly chosen agents. Each agent, if selected, observes his neighbors' choices of effort levels and their payoffs from the most recent interaction. Agents imitate the most successful alternatives and occasionally make mistakes. This paper has shown that in the long run all agents will choose the highest effort level and socially optimal collaboration will be guaranteed, provided that each agent's neighborhood is sufficiently large.
The above condition requires that each agent is able to acquire abundant information about other agents' experiences in the most recent interaction when selected to participate in the minimum effort game. In other words, observation networks with high cohesion are preferred both by individual agents and by society as a whole. Therefore, it is tempting to conclude that with the declining costs of information acquisition brought about by progress in information and communication technology, more efficient collaboration can reasonably be expected.
There are several natural extensions to the research presented here. First, in practice it is very often the case that in each period, whether or not to participate in a collaboration is up to each agent to decide, rather than being determined by random sampling. To take this into account, the minimum effort game should be extended. Second, it would be desirable to consider the case where each agent's payoff increases with the average of all agents' effort levels and decreases with his own effort level; for example, one could consider the average opinion games in Van Huyck, Battalio, and Beil (1991). Third, it would be interesting to check whether the main results presented here are supported by laboratory experiments.
A Appendix

A.1 Radius-coradius theorem
Here we present the modified radius-coradius theorem of Ellison (2000). Some preliminary definitions are necessary. Consider the perturbed dynamics ({X^ε(t)}_{t∈N}, S), ε ∈ (0, 1). For any ε, ε ∈ (0, 1), {X^ε(t)}_{t∈N} is an irreducible Markov chain. For any two states s and s′, the resistance is defined as

r(s, s′) = lim_{ε→0} log Prob{X^ε(t + 1) = s′ | X^ε(t) = s} / log ε.

For any two subsets Ω̄ and Ω̄′ of Ω with Ω̄ ∩ Ω̄′ = ∅, a path from Ω̄ to Ω̄′ is a sequence of distinct states (s1, · · · , sm) with s1 ∈ Ω̄, sl ∉ Ω̄′ for 2 ≤ l ≤ m − 1 and sm ∈ Ω̄′. Define the resistance of the path (s1, · · · , sm) as

r(s1, · · · , sm) = Σ_{1≤l≤(m−1)} r(sl, sl+1).

Let P(Ω̄, Ω̄′) be the set of all paths from Ω̄ to Ω̄′.
For any absorbing set Ω̄, the radius R(Ω̄) is

R(Ω̄) = min_{(s1,··· ,sm)∈P(Ω̄, Ω\D(Ω̄))} r(s1, · · · , sm).

If a path (s1, · · · , sm) passes through absorbing sets Ω̄1, · · · , Ω̄k, the modified resistance is

r∗(s1, · · · , sm) = r(s1, · · · , sm) − Σ_{2≤l≤(k−1)} R(Ω̄l).

The notion of modified resistance can be extended to a point-set concept by setting

r∗(s, Ω̄) = min_{(s1,··· ,sm)∈P({s}, Ω̄)} r∗(s1, · · · , sm).

And, for any absorbing set Ω̄, the modified coradius CR∗(Ω̄) is defined by

CR∗(Ω̄) = max_{s∉D(Ω̄)} r∗(s, Ω̄).

Theorem 2 in Ellison (2000, p. 24) shows that for any union of absorbing sets Ω̄, if R(Ω̄) > CR∗(Ω̄), then Ω̄ contains all stochastically stable equilibria. Intuitively, if it is less resistant to enter the basin of attraction D(Ω̄) than to leave it, Ω̄ is relatively stable against the perturbation of random noise and will be observed most of the time as random noise vanishes.
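For intuition, the radius-coradius comparison can be mimicked on a toy transition graph whose edges carry mutation costs; the three limit sets and their costs below are invented purely to illustrate the definitions, not taken from the model:

```python
import itertools

# Toy "absorbing-set graph": limit sets A, B, C; COST[(x, y)] stands for the
# minimal number of mutations for a direct transition x -> y (invented numbers).
COST = {("A", "B"): 2, ("B", "A"): 1, ("B", "C"): 2, ("C", "B"): 1,
        ("A", "C"): 4, ("C", "A"): 3}

def path_cost(path):
    return sum(COST[(a, b)] for a, b in zip(path, path[1:]))

def radius(x, states):
    """Cheapest direct exit from x (a coarse stand-in for R(x))."""
    return min(COST[(x, y)] for y in states if y != x)

def modified_coradius(x, states):
    """Worst case over starting sets s != x of the cheapest path cost into x,
    net of the radii of intermediate limit sets (Ellison, 2000)."""
    worst = 0
    others = [t for t in states if t != x]
    for s in others:
        best = float("inf")
        mids = [t for t in others if t != s]
        for k in range(len(mids) + 1):
            for mid in itertools.permutations(mids, k):
                path = (s,) + mid + (x,)
                cost = path_cost(path) - sum(radius(m, states) for m in mid)
                best = min(best, cost)
        worst = max(worst, best)
    return worst

states = ("A", "B", "C")
print(radius("A", states), modified_coradius("A", states))  # prints: 2 1
```

Since R(A) = 2 > CR∗(A) = 1 in this toy example, the modified radius-coradius theorem would select A as the set containing all stochastically stable states.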
A.2 Proofs
Proof of Theorem 1. Without loss of generality, assume that there exists a period t0 such that all agents have played the minimum effort game Γ by period t0. It suffices to prove the conclusion for one component c = (I′, (gij)i,j∈I′). According to the value of |c|, the rest of the proof is divided into two cases.

Case 1: |c| ≤ n.
In period t0 + 1, the elements of the set I′(t0 + 1), with I′ ⊆ I′(t0 + 1) and |I′(t0 + 1)| = n, are randomly chosen to participate in the minimum effort game Γ with probability ν(I′(t0 + 1)) > 0. Each agent i, i ∈ I′, chooses effort level ei(t0 + 1) following Equation (1), and

ui(t0 + 1) = min_{j∈I′(t0+1)} ej(t0 + 1) − δ × ei(t0 + 1).

Therefore, for any two distinct agents i and i′ from I′, if ei(t0 + 1) ≥ ei′(t0 + 1), then ui(t0 + 1) ≤ ui′(t0 + 1).
In period t0 + 2, the agents from the set I′(t0 + 2), with I′ ⊆ I′(t0 + 2) and |I′(t0 + 2)| = n, are selected to participate in the game Γ with probability ν(I′(t0 + 2)) > 0. According to the deterministic rule, each agent i, i ∈ I′, chooses effort level ei(t0 + 2) = min_{j∈M(i)} ej(t0 + 1). This strategy revision follows from the fact that among the agents belonging to M(i), the agent exerting the least effort has the best performance in the previous period t0 + 1.
Following the same logic, after finitely many periods, all agents belonging to I′ choose the same effort level min{ej(t0 + 1) : j ∈ I′} with strictly positive probability.
Case 2: |c| > n.
Without loss of generality, assume that for the component c = (I′, (gij)i,j∈I′), n < |I′| < (2n − 1).
For any agent i, i ∈ I′, let e′i(t0 + 1) be determined by

e′i(t0 + 1) = min{ej(t0) : j ∈ M(i) and uj(t0) ≥ uj′(t0) for all j′ ∈ M(i)}.

That is, if agent i is selected in period t0 + 1, then following the deterministic imitation rule given by Equation (1), he chooses effort level e′i(t0 + 1) with strictly positive probability. And let i0 = arg min_{i∈I′} e′i(t0 + 1).
In period t0 + 1, the elements of the set I′(t0 + 1) are randomly chosen to participate in the game Γ with probability ν(I′(t0 + 1)) > 0, where i0 ∈ I′(t0 + 1), I′(t0 + 1) ⊂ I′ and I′(t0 + 1) ∈ I_n. Suppose that each agent i, i ∈ I′(t0 + 1), chooses effort level e′i(t0 + 1); that is, ei(t0 + 1) = e′i(t0 + 1). Thus, for any i ∈ I′(t0 + 1),

ui(t0 + 1) = min_{j∈I′(t0+1)} e′j(t0 + 1) − δ × e′i(t0 + 1) = e′i0(t0 + 1) − δ × e′i(t0 + 1) ≤ (1 − δ) × e′i0(t0 + 1).

In particular, ui0(t0 + 1) = (1 − δ) × e′i0(t0 + 1).
In period t0 + 2, the agents belonging to the set I′(t0 + 2) are selected to participate in the game Γ with probability ν(I′(t0 + 2)) > 0, where {i0} ∪ (I′ \ I′(t0 + 1)) ⊂ I′(t0 + 2) and |I′(t0 + 2)| = n. Now consider the agents' strategy revision. On the one hand, agent i0 chooses effort level e′i0(t0 + 1) with strictly positive probability. For any agent j ∈ M(i0): (1) if j ∈ I′(t0 + 1), then uj(t0 + 1) ≤ ui0(t0 + 1), with equality only when ej(t0 + 1) = e′j(t0 + 1) = e′i0(t0 + 1); (2) if j ∉ I′(t0 + 1), then uj(t0 + 1) = uj(t0) ≤ max_{j′∈M(i0)} uj′(t0) ≤ (1 − δ) × e′i0(t0 + 1) = ui0(t0 + 1), where the first inequality follows from the determination of e′i0(t0 + 1) and the second from the fact that the highest payoff brought by effort level e′i0(t0 + 1) is (1 − δ) × e′i0(t0 + 1). Suppose that ei0(t0 + 2) = e′i0(t0 + 1). On the other hand, for any agent i, i ∈ I′(t0 + 2) \ {i0}, i's effort level ei(t0 + 2) satisfies

ei(t0 + 2) ∈ {ek(t0 + 1) : k ∈ M(i)} ⊆ ∪_{k∈M(i)} {ej(t0) : j ∈ M(k) and uj(t0) ≥ uj′(t0) for all j′ ∈ M(k)}.

According to the determination of e′i0(t0 + 1), ei(t0 + 2) ≥ e′i0(t0 + 1) = ei0(t0 + 2). Therefore the unperturbed dynamics {S(t)}_{t∈N} arrives at a state s(t0 + 2) where each agent i, i ∈ I′, received the payoff e′i0(t0 + 1) − δ × ei(t0 + 2) in his most recent interaction, with ei(t0 + 2) ≥ e′i0(t0 + 1).
In period t0 + 3, the agents belonging to the set I′(t0 + 3) are randomly chosen to participate in the game Γ with probability ν(I′(t0 + 3)) > 0, where (1) |I′(t0 + 3)| = n and (2) I′(t0 + 3) ⊂ M(i0) if |M(i0)| ≥ n, and otherwise M(i0) ⊂ I′(t0 + 3). According to the deterministic rule, each agent i, i ∈ I′(t0 + 3), chooses effort level e′i0(t0 + 1) with probability 1. This strategy revision follows from the fact that for any i ∈ I′, ui(t0 + 2) ≤ (1 − δ) × e′i0(t0 + 1) = ui0(t0 + 2), with equality only when ei(t0 + 2) = e′i0(t0 + 1).
Following the same logic, after finitely many periods, all agents belonging to I′ choose the same effort level e′i0(t0 + 1) with strictly positive probability.
For each period t, t ∈ N, let Î(t) = {i ∈ I : (ei(t), ui(t)) = (em, (1 − δ)em)} denote the set of agents who have chosen effort level em and received the maximum possible payoff (1 − δ)em in their most recent interaction up to period t. The following lemma is essential for the proof of Lemma 1.

Lemma 5 Consider the unperturbed dynamics {S(t)}_{t∈N}. Assume that in some period t0, the set Î(t0) satisfies |Î(t0)| ≥ n. Then {S(t)}_{t∈N} can converge to an absorbing set where, for any component c = (I′, (gij)i,j∈I′) with I′ ∩ Î(t0) ≠ ∅, all members of c choose effort level em.
Proof. Consider a component c1 = (I′1, (gij)i,j∈I′1) with I′1 ∩ Î(t0) ≠ ∅. Without loss of generality, assume that I′1 \ Î(t0) ≠ ∅.
In period t0 + 1, the agents belonging to the set I′(t0 + 1), I′(t0 + 1) ∈ I_n, are selected to participate in the game Γ with probability ν(I′(t0 + 1)) > 0, where both (I′(t0 + 1) ∩ I′1) \ Î(t0) ≠ ∅ and I′(t0 + 1) ⊂ ∪_{j∈Î(t0)} M(j) hold. The existence of such a group I′(t0 + 1) follows from the facts that I′1 \ Î(t0) ≠ ∅ and I′1 ∩ Î(t0) ≠ ∅. According to the deterministic rule given by Equation (1), all members of I′(t0 + 1) choose the same effort level em. This strategy revision follows from the fact that for each agent the maximum possible payoff is (1 − δ)em, which can be attained only by choosing the highest effort level em. Thus, for any i ∈ I′(t0 + 1), ei(t0 + 1) = em and ui(t0 + 1) = (1 − δ)em.
Following the same logic, after a finite number of periods, the unperturbed dynamics {S(t)}_{t∈N} arrives at a state where all members of c1 choose effort level em. Therefore, {S(t)}_{t∈N} can converge to an absorbing set where, for any component c = (I′, (gij)i,j∈I′) with I′ ∩ Î(t0) ≠ ∅, all members of c choose effort level em.
Proof of Lemma 1. Without loss of generality, it suffices to consider the case η(g) = n + 1; in other words, the observation system (I, g) consists of n + 1 components. Assume that in period t0, the perturbed dynamics {S^ε(t)}_{t∈N}, ε ∈ (0, 1), arrives at (e1 1, (1 − δ)e1 1), which constitutes the absorbing set Ω̄.
In period t0 + 1, the members of the group I′(t0 + 1), I′(t0 + 1) ∈ I_n, are randomly chosen to participate in the minimum effort game Γ with probability ν(I′(t0 + 1)) > 0, where no two distinct elements of I′(t0 + 1) belong to the same component. Due to random noise, each agent from I′(t0 + 1) chooses the highest effort level em and receives the maximum possible payoff (1 − δ)em. Owing to the definition of the game Γ, n mistakes are necessary for this.
The group I′(t0 + 1) is able to play the role of the set Î(t0) in Lemma 5. As an application of Lemma 5, following the unperturbed dynamics, in some period t1 the process {S^ε(t)}_{t∈N} can arrive at a state where there exists one and only one component c = (I′, (gij)i,j∈I′) such that all members of c take effort level e1 while all other agents choose effort level em.
In period t1 + 1, a group I′(t1 + 1) of n agents is selected to play the game Γ with probability ν(I′(t1 + 1)) > 0, where I′(t1 + 1) ∩ I′ is a singleton. Assume that I′(t1 + 1) ∩ I′ consists of agent i0. According to the deterministic rule of imitation specified by Equation (1), each agent j ∈ I′(t1 + 1) \ {i0} chooses effort level em because all elements of M(j) took em in the previous period. And due to one single mutation, agent i0 switches from e1 to em. Thus, for any agent i, i ∈ I′(t1 + 1),

ei(t1 + 1) = em and ui(t1 + 1) = (1 − δ)em.

The set I′(t1 + 1) can play the role of Î(t0) in Lemma 5. By applying Lemma 5 again and following the unperturbed dynamics, after finitely many periods the dynamics arrive at a state where all individuals choose the highest effort level em, which belongs to the basin of attraction D((em 1, (1 − δ)em 1)).
The above argument also implies that to induce the exit from D({(e1 1, (1 − δ)e1 1)}), n mutations are required; otherwise, effort level e1 remains the most successful choice.
Proof of Lemma 2. It suffices to consider the case η(g) = 2; that is, the observation system (I, g) consists of two components c1 = (I′1, (gij)i,j∈I′1) and c2 = (I′2, (gij)i,j∈I′2). Let i0 be an agent with the least number of neighbors; formally, |M(i0)| ≤ |M(i)| for any i ∈ I. Without loss of generality, assume that i0 ∈ I′1 and (n − 1) < |M(i0)| − 1 ≤ (2n − 2). And assume that in period t0, the perturbed dynamics {S^ε(t)}_{t∈N}, ε ∈ (0, 1), arrives at (em 1, (1 − δ)em 1), which constitutes the absorbing set Ω̄.

Part 1: Exit from D((em 1, (1 − δ)em 1)).
In period t0 + 1, the agents belonging to the set I′(t0 + 1), I′(t0 + 1) ∈ I_n, are selected to participate in the minimum effort game Γ with probability ν(I′(t0 + 1)) > 0, where i0 ∈ I′(t0 + 1) and I′(t0 + 1) ⊂ M(i0). The deterministic rule in Section 2.3 stipulates the choice of effort level em for each agent i, i ∈ I′(t0 + 1). Owing to one single mutation, agent i0 plays effort level e1, while each agent j, j ∈ I′(t0 + 1) \ {i0} = I′(t0 + 1) ∩ (M(i0) \ {i0}), chooses effort level em. As a result,

ei0(t0 + 1) = e1, ui0(t0 + 1) = (1 − δ)e1,

and for any j ∈ I′(t0 + 1) ∩ (M(i0) \ {i0}),

ej(t0 + 1) = em, uj(t0 + 1) = e1 − δem < ui0(t0 + 1).

In period t0 + 2, the members of the group I′(t0 + 2), I′(t0 + 2) ∈ I_n, are selected to play the game Γ with probability ν(I′(t0 + 2)) > 0, where i0 ∈ I′(t0 + 2) and M(i0) \ I′(t0 + 1) ⊂ I′(t0 + 2). The existence of I′(t0 + 2) follows from the facts that (n − 1) < |M(i0)| − 1 ≤ (2n − 2) and I′(t0 + 1) ⊂ M(i0). Owing to the deterministic imitation rule, each agent j, j ∈ M(i0) \ I′(t0 + 1), chooses effort level em. In fact, for any j ∈ M(i0) \ I′(t0 + 1), (1) M(j) \ I′(t0 + 1) ≠ ∅ holds, since otherwise |M(j)| ≤ n < |M(i0)|, which contradicts the choice of i0; and (2) for any k ∈ M(j) \ I′(t0 + 1), we have ek(t0 + 1) = em and uk(t0 + 1) = (1 − δ)em. And by a mistake, agent i0 takes effort level e1, although there exists a neighbor j ∈ M(i0) such that ej(t0 + 1) = em and uj(t0 + 1) = em − δem.
Therefore, after two mutations, the perturbed dynamics {S^ε(t)}_{t∈N}, ε ∈ (0, 1), can arrive at a state where

ei0(t0 + 2) = e1, ui0(t0 + 2) = (1 − δ)e1,

and for any j ∈ M(i0) \ {i0},

ej(t0 + 2) = em, uj(t0 + 2) = e1 − δem < ui0(t0 + 2).

Due to the deterministic imitation rule in Section 2.3, agent i0 will choose effort level e1 if selected in the next period.
Following the same logic as in the proof of Theorem 1, even without any further random noise, after a finite number of periods {S^ε(t)}_{t∈N}, ε ∈ (0, 1), can reach an element of the absorbing set where all members of c1 choose effort level e1 while all members of c2 choose effort level em.
Part 2: Transition to (e1 1, (1 − δ)e1 1).
Following the unperturbed dynamics, in some period t1, t1 > t0 + 2, {S^ε(t)}_{t∈N} can arrive at a state where si(t1) = (e1, (1 − δ)e1) for any i ∈ I′1 and sj(t1) = (em, e1 − δem) for any j ∈ I′2.
In period t1 + 1, the agents belonging to the set I′(t1 + 1), I′(t1 + 1) ∈ I_n, are selected to participate in the game Γ with probability ν(I′(t1 + 1)) > 0, where I′(t1 + 1) ∩ I′1 ≠ ∅ and I′(t1 + 1) ∩ I′2 ≠ ∅. Owing to the deterministic rule, each agent i, i ∈ I′(t1 + 1) ∩ I′1, should exert effort level e1 and each agent j, j ∈ I′(t1 + 1) ∩ I′2, should choose effort level em. However, one agent j0, j0 ∈ I′(t1 + 1) ∩ I′2, makes a mistake and takes effort level e1. Then sj0(t1 + 1) = (e1, (1 − δ)e1) and sj(t1 + 1) = (em, e1 − δem) for any j ∈ I′2 \ {j0}. It is straightforward to see that agent j0 will choose effort level e1 if selected in the next period.
Following the same logic as in the proof of Theorem 1, even without any further stochastic mutation, after a finite number of periods {S^ε(t)}_{t∈N}, ε ∈ (0, 1), can arrive at the absorbing state (e1 1, (1 − δ)e1 1).
The above argument also implies that to induce the exit from the basin of attraction D({(em 1, (1 − δ)em 1)}), min_{i∈I} ⌈(|M(i)| − 1)/(n − 1)⌉ mutations are necessary; otherwise, following the deterministic imitation rule, each agent i, i ∈ I, will choose effort level em if selected to participate in the minimum effort game Γ.
Proof of Theorem 2. The proof uses the modified radius-coradius theorem developed by Ellison (2000). Here we restrict our attention to the case min_{i∈I} ⌈(|M(i)| − 1)/(n − 1)⌉ > n; the other cases can be proved in a similar way.
First of all, by Lemma 2, we have

R({(em 1, (1 − δ)em 1)}) = min_{i∈I} ⌈(|M(i)| − 1)/(n − 1)⌉.

To calculate CR∗({(em 1, (1 − δ)em 1)}), it suffices to focus on the transition from the state (e1 1, (1 − δ)e1 1) to the state (em 1, (1 − δ)em 1). Therefore,

CR∗({(em 1, (1 − δ)em 1)}) = max_{s ∉ D({(em 1, (1−δ)em 1)})} r∗(s, (em 1, (1 − δ)em 1))
= r∗((e1 1, (1 − δ)e1 1), (em 1, (1 − δ)em 1))
= [n + (max{n, η(g)} − n)] − (max{n, η(g)} − n)
= n,

where the third equality follows from the fact that, by Lemma 1, there exists a sequence of distinct absorbing sets (Ω̄0, Ω̄1, · · · , Ω̄l) such that (1) Ω̄0 = {(e1 1, (1 − δ)e1 1)} and Ω̄l = {(em 1, (1 − δ)em 1)}; (2) for Ω̄1, min{n, η(g)} components have all of their members choosing effort level em and, compared with Ω̄l′, for Ω̄l′+1 there is one more component whose members all take em, for l′ = 1, · · · , (l − 1); (3) to transit from (e1 1, (1 − δ)e1 1) to Ω̄1, n mistakes are required and sufficient, and one single mutation induces the transition from Ω̄l′ to Ω̄l′+1 for any 1 ≤ l′ ≤ (l − 1); and (4) R(Ω̄l′) = 1 for any 1 ≤ l′ ≤ (l − 1).
Consequently, if min_{i∈I} ⌈(|M(i)| − 1)/(n − 1)⌉ > n, then R({(em 1, (1 − δ)em 1)}) > CR∗({(em 1, (1 − δ)em 1)}) holds. As an application of the modified radius-coradius theorem in Ellison (2000), only the absorbing state (em 1, (1 − δ)em 1) is stochastically stable.
Proof of Theorem 3. From Lemma 3 and Lemma 4, we have

R({(e1 1, (1 − δ)e1 1)}) = n,
R({(e1, (1 − δ)e1)}) = min{n, min_{i∈I} ⌈(|M(i)| − 1)/(n − 1)⌉} for any e ∈ E \ {e1, em},
R({(em 1, (1 − δ)em 1)}) = min_{i∈I} ⌈(|M(i)| − 1)/(n − 1)⌉,

and furthermore,

CR({(e1 1, (1 − δ)e1 1)}) = min_{i∈I} ⌈(|M(i)| − 1)/(n − 1)⌉,
CR({(e1, (1 − δ)e1)}) = max{n, min_{i∈I} ⌈(|M(i)| − 1)/(n − 1)⌉} for any e ∈ E \ {e1, em},
CR({(em 1, (1 − δ)em 1)}) = n.

As a result, if min_{i∈I} ⌈(|M(i)| − 1)/(n − 1)⌉ < n, then R({(e1 1, (1 − δ)e1 1)}) > CR({(e1 1, (1 − δ)e1 1)}) holds. As an application of the radius-coradius theorem in Ellison (2000), we find that only the absorbing state (e1 1, (1 − δ)e1 1) is stochastically stable. Following the same logic, we are able to show that (em 1, (1 − δ)em 1) is the unique stochastically stable equilibrium when min_{i∈I} ⌈(|M(i)| − 1)/(n − 1)⌉ > n.
References

[1] Alós-Ferrer, C., 2004. "Cournot versus Walras in dynamic oligopolies with memory." International Journal of Industrial Organization 22, 193–217.
[2] Alós-Ferrer, C., Weidenholzer, S., 2007. "Partial bandwagon effects and local interactions." Games and Economic Behavior 61, 179–197.
[3] Alós-Ferrer, C., Weidenholzer, S., 2008. "Contagion and efficiency." Journal of Economic Theory 143, 251–274.
[4] Alós-Ferrer, C., Weidenholzer, S., 2014. "Imitation and the role of information in overcoming coordination failures." Games and Economic Behavior 87, 397–411.
[5] Anderlini, L., Ianni, A., 1996. "Path dependence and learning from neighbors." Games and Economic Behavior 13, 141–177.
[6] Angus, S., Masson, V., 2010. "The effects of information and interactions on contagion processes." Working Paper.
[7] Bergin, J., Bernhardt, D., 2009. "Cooperation through imitation." Games and Economic Behavior 67, 376–388.
[8] Blume, L.E., 1993. "The statistical mechanics of strategic interaction." Games and Economic Behavior 5, 387–424.
[9] Blume, L.E., 1995. "The statistical mechanics of best-response strategy revision." Games and Economic Behavior 11, 111–145.
[10] Cui, Z., 2014. "More neighbors, more efficiency." Journal of Economic Dynamics & Control 40, 103–115.
[11] Ellison, G., 1993. "Learning, local interaction, and coordination." Econometrica 61, 1047–1071.
[12] Ellison, G., 2000. "Basins of attraction, long-run stochastic stability, and the speed of step-by-step evolution." Review of Economic Studies 67, 17–45.
[13] Eshel, I., Samuelson, L., Shaked, A., 1998. "Altruists, egoists, and hooligans in a local interaction model." American Economic Review 88, 157–179.
[14] Goyal, S., 2011. "Learning in networks." In: Benhabib, J., Bisin, A., Jackson, M. (Eds.), Handbook of Social Economics, Volume 1, North Holland.
[15] Kandori, M., Mailath, G., Rob, R., 1993. "Learning, mutation, and long run equilibria in games." Econometrica 61, 29–56.
[16] Khan, A., 2014. "Coordination under global random interaction and local imitation." International Journal of Game Theory 43, 721–745.
[17] Morris, S., 2000. "Contagion." Review of Economic Studies 67, 57–78.
[18] Özgür, O., 2011. "Local interactions." In: Benhabib, J., Bisin, A., Jackson, M. (Eds.), Handbook of Social Economics, Volume 1, North Holland.
[19] Robson, A.J., Vega-Redondo, F., 1996. "Efficient equilibrium selection in evolutionary games with random matching." Journal of Economic Theory 70, 65–92.
[20] Seidman, S.B., 1983. "Network structure and minimum degree." Social Networks 5, 269–287.
[21] Van Huyck, J., Battalio, R., Beil, R., 1990. "Tacit coordination games, strategic uncertainty, and coordination failure." American Economic Review 80, 234–248.
[22] Van Huyck, J., Battalio, R., Beil, R., 1991. "Strategic uncertainty, equilibrium selection, and coordination failure in average opinion games." Quarterly Journal of Economics 106, 885–910.
[23] Weber, R., 2006. "Managing growth to achieve efficient coordination in large groups." American Economic Review 96, 114–126.
[24] Weidenholzer, S., 2010. "Coordination games and local interactions: A survey of the game theoretic literature." Games 1, 885–910.
[25] Young, H.P., 1993. "The evolution of conventions." Econometrica 61, 57–84.
[26] Young, H.P., 1998. Individual Strategy and Social Structure: An Evolutionary Theory of Institutions. Princeton University Press.