Collaboration in networks with randomly chosen agents*

Zhiwei Cui†, Rui Wang‡

August, 2014

Abstract

The present paper considers a finite population of agents located in an arbitrary, fixed network. In each period, a fixed number of agents are randomly chosen to play a minimum effort game. They learn from their neighbors' experience by imitating the most successful strategies, and occasionally they make mistakes. We show that in the long run all agents will choose the highest effort level, so that socially optimal collaboration is guaranteed, provided that each agent's neighborhood is sufficiently large.

JEL: C72; D83

Keywords: Minimum effort game; Local observation; Random sampling; Imitation

* We thank Zhigang Cao, Chih Chang, Jonathan Newton and Chun-Lei Yang for helpful comments and suggestions. This paper was presented at the 2014 Beijing Economic Theory Workshop at the University of International Business and Economics and at various other seminars; we gratefully acknowledge the participants for their comments. This work was financially supported by the National Science Foundation of China (No. 61202425) and the Fundamental Research Funds for the Central Universities (YWF-13-D2-JC-11, YWF-14-JGXY-016).
† Corresponding author. School of Economics and Management, Beihang University, Beijing 100191, PR China. Email: [email protected]
‡ School of Computer Science and Engineering, Beihang University, Beijing 100191, PR China. Email: [email protected]

1 Introduction

In economics and sociology, the study of interactions in networks is an active field of research; for an overview, see Goyal (2011) and Özgür (2011). The literature has tended to focus on local interaction (Weidenholzer, 2010; Young, 1998); that is, individuals interact only with their direct neighbors. Nevertheless, in a wide range of economic and social situations individuals interact with strangers: their opponents vary over time, and interactions between the same group of individuals occur only with low frequency, as in peer-to-peer file sharing or anonymous online financial transactions. Another distinguishing and important feature of such situations is that in each period only a small fraction of agents take part in the interaction. To capture these phenomena, the present paper assumes that in each period a fixed number of agents are randomly chosen from a large population to participate in the interaction, and that the selected agents make their choices based on the experiences of their neighbors.

Formally, we consider a finite but large population of agents located in an arbitrary, fixed network. In each period, a fixed number of individuals from the whole population are randomly chosen to play a minimum effort game. In the minimum effort game, players choose among different effort levels and their payoffs depend on the minimum of all players' choices. In other words, the minimum effort game models situations in which, to a large extent, the performance of a group is determined by the effort exerted by its least hardworking member.[1] Each selected agent is able to observe the effort levels and performances of his neighbors in their most recent interactions, and he then imitates the most successful choice. Occasionally, mistakes occur in the revision of strategies.

[1] The minimum effort game can be used to model collaboration among individuals (Alós-Ferrer and Weidenholzer, 2014; Van Huyck, Battalio, and Beil, 1990).
We find that if each agent has a large number of neighbors then, whether or not the network is connected, in the long run all agents will choose the salient effort level and socially optimal collaboration will be guaranteed, provided that the probability of random noise is sufficiently small.[2] In the language of economic and social networks, the main results show that highly cohesive networks lead to the most efficient collaboration when the minimum effort game is repeatedly played by randomly chosen agents.[3]

[2] By conducting laboratory experiments with a minimum effort game, Weber (2006) shows that slow growth and exposure of entrants to previous norms can alleviate the problem of coordination failure in large groups.
[3] Given that social and economic networks are precisely described by undirected graphs, Seidman (1983) proposes that the minimum degree over all nodes can be used as a measure of network cohesion.

Our work contributes to research on local interaction. Starting with the seminal papers of Blume (1993, 1995) and Ellison (1993), much of the literature has studied local interaction games. It is assumed that agents are located in a fixed network and interact only with their direct neighbors (Alós-Ferrer and Weidenholzer, 2007, 2008, 2014; Ellison, 2000; Eshel, Samuelson and Shaked, 1998; Morris, 2000). Two prominent dynamic adjustment rules are used in these models: myopic best reply and imitation of better or best actions. The present paper, however, focuses on the possibility that each individual is unable to identify who his opponents are. Owing to this uncertainty, it is reasonable to assume that individuals learn from their neighbors' experiences and employ an imitation rule to update their choices.

Our paper also relates to research that explores the effects of random matching on the long-run outcomes of learning dynamics. Building on Kandori, Mailath and Rob (1993), Robson and Vega-Redondo (1996) show that, for coordination games, a genuinely random pairing-up mechanism can lead to the emergence of the Pareto efficient equilibrium rather than the risk dominant one. Given that agents are located in regular networks, Anderlini and Ianni (1996) introduce the concept of a 1-factor, which couples each agent with one of his neighbors, to show that in the long run different actions of a coordination game may survive at different locations. Khan (2014) develops a model in which, in each period, individuals are partitioned into pairs to play a coordination game and update their choices by imitating the most successful neighbors, and he offers sufficient conditions for the emergence of the Pareto efficient convention. Although opponents change over time, these papers assume that in each period (1) all members of the population take part in the interaction and (2) each agent plays a 2 × 2 coordination game with one other agent. The present paper, by contrast, considers the minimum effort game and assumes that in each period only a small portion of agents from a large population play the game.

The most closely related paper is Alós-Ferrer and Weidenholzer (2014), in which each individual plays the minimum effort game with all of his neighbors and employs an imitation rule.
They show that the possibility of observing agents who are not direct neighbors may help to overcome the coordination problem when interactions are not too global. By contrast, we assume that the minimum effort game is played by randomly chosen agents who, when given the opportunity to participate in the game and to revise their effort levels by imitation, can only observe the experience of their direct neighbors. We find that if each agent has a large number of neighbors, socially optimal coordination will be attained in the long run. In the language of social and economic networks, sparse networks are required to guarantee socially optimal collaboration in Alós-Ferrer and Weidenholzer (2014), whereas our paper requires a highly cohesive network. This difference is likely due to the fact that in our paper the players of the minimum effort game are randomly chosen from a large population.

The rest of the present paper is organized as follows. Section 2 describes the basic building blocks of our model: the minimum effort game, the observation network, imitation, and the dynamics. Section 3 investigates the convergence of the unperturbed dynamics and the stochastic stability of the randomly perturbed dynamics. Section 4 illustrates the main results using a stag-hunt game. Finally, Section 5 concludes.

2 The model

2.1 Minimum effort game

The base game is an n-person minimum effort game with n ≥ 2. Intuitively, each player's payoff depends on the minimum of the player's own and all other players' effort levels.

Formally, let N = {1, ..., n}, n ≥ 2, be the set of players. Each player i chooses a level of personal effort, e_i, from the common set E = {e^1, e^2, ..., e^m}, where m > 1 and 0 < e^1 < e^2 < ... < e^m. Let e = (e_1, e_2, ..., e_n) be a strategy profile of all n players' effort levels. Given the strategy profile e, let e_{-i} = (e_1, ..., e_{i-1}, e_{i+1}, ..., e_n) denote the effort levels of all players except player i. For the choice of effort level e_i, player i bears a cost δ·e_i, where 0 < δ < 1. Then, for the strategy profile e = (e_1, e_2, ..., e_n), player i's payoff is

π_i(e_1, e_2, ..., e_n) = min_{1≤j≤n} e_j − δ·e_i.

Hence the minimum effort game Γ = (N, {E_i}_{i∈N}, {π_i}_{i∈N}), where E_i = E for every i ∈ N, is defined. Let Π = {π_i(e) : i ∈ N and e ∈ E^n} be the set of possible payoffs. Note that for each player the highest payoff attainable with effort level e ∈ E is (1 − δ)·e, which is reached when all other players' effort levels are no less than e. Thus the maximum possible payoff is (1 − δ)·e^m. Given e_{-i}, player i's best reply is to choose effort level min_{j∈N, j≠i} e_j. Consequently, the set of Nash equilibria of the minimum effort game Γ coincides with the set of profiles in which all players choose the same effort level e ∈ E, and each such profile is a strict Nash equilibrium.
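To make the payoff specification concrete, the following short Python sketch (ours, not part of the paper; the function name and the numerical values are illustrative choices) computes π_i for an arbitrary effort profile.

```python
def min_effort_payoffs(efforts, delta):
    """Payoffs of the n-person minimum effort game.

    efforts: the chosen effort profile (e_1, ..., e_n)
    delta:   marginal cost of effort, with 0 < delta < 1
    Returns the list of payoffs pi_i = min_j e_j - delta * e_i.
    """
    lowest = min(efforts)
    return [lowest - delta * e_i for e_i in efforts]

# Illustrative values with E = {1, 2, 3} and delta = 0.5:
print(min_effort_payoffs([3, 1, 2], 0.5))   # [-0.5, 0.5, 0.0]
print(min_effort_payoffs([3, 3, 3], 0.5))   # [1.5, 1.5, 1.5]
```

In the first profile the least hardworking player earns the most, which is the force that pulls imitation toward low effort; the second profile is the strict Nash equilibrium in which everyone chooses e^m = 3 and earns (1 − δ)·e^m.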
2.2 Network and local observation

An observation system consists of a finite population of agents such that, when an agent is selected to participate in the minimum effort game Γ, he is only able to observe and make use of the information of the members of a subpopulation.

Formally, let I = {1, ..., I} be the set of agents, with I ≥ n. For any two agents i, j ∈ I, the pairwise relationship between them is captured by a binary variable g_ij ∈ {0, 1}: g_ij = 1 if agents i and j are able to observe each other's effort level and performance in the most recent interaction, and g_ij = 0 otherwise; by convention, g_ii = 1. That is, observation is dichotomous: either mutual observation or no observation. The collection g = (g_ij)_{i,j∈I} is referred to as an observation network over the set I. For any observation network g, the pair (I, g) defines an undirected graph and is referred to as an observation system.

For any agent i ∈ I, let the information neighborhood M(i, g) = {j ∈ I : g_ij = 1} be the set of agents whose effort levels and performances in the most recent interaction are observed by agent i, given the observation network g. Without loss of generality, assume that each agent i's information neighborhood M(i, g) contains at least two elements; that is, apart from himself, agent i is able to observe at least one other agent. It is straightforward to see that, for any two distinct agents i and j, j ∈ M(i, g) implies i ∈ M(j, g), and j ∉ M(i, g) is equivalent to i ∉ M(j, g).

For any subset I′ of the set I, the subgraph (I′, (g_ij)_{i,j∈I′}) is a component of the graph g if (1) for any two distinct agents i and j from I′, there exists {i_1, ..., i_L} ⊆ I′ with L ≥ 2, i_1 = i, i_L = j and g_{i_l i_{l+1}} = 1 for every l, 1 ≤ l ≤ L − 1, and (2) there is no strict superset I″ of I′ for which (1) also holds. Let c denote a component. For simplicity, for any component c = (I′, (g_ij)_{i,j∈I′}), I′ will be referred to as the set of members of c, and |c| denotes the number of agents belonging to I′. An observation network g is connected if the graph (I, g) has a unique component. Let η(g) denote the number of components of the observation system (I, g).
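The two network statistics that drive the results below are η(g), the number of components, and min_{i∈I} |M(i, g)|, the size of the smallest information neighborhood. As a purely illustrative aid (ours, not part of the paper), the following Python sketch computes both for the seven-agent, two-component network that Section 3.1 uses as an example; representing the network as an edge list is our own choice.

```python
from collections import deque

# Two-component observation network: a triangle {1, 2, 3} and a 4-cycle {4, 5, 6, 7}.
edges = [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (6, 7), (7, 4)]
agents = range(1, 8)

# Information neighborhoods M(i, g): an agent's neighbors plus himself, since g_ii = 1.
M = {i: {i} for i in agents}
for i, j in edges:
    M[i].add(j)
    M[j].add(i)

def components(agents, M):
    """Components of the observation system, found by breadth-first search."""
    seen, comps = set(), []
    for start in agents:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            i = queue.popleft()
            if i in comp:
                continue
            comp.add(i)
            queue.extend(M[i] - comp)
        seen |= comp
        comps.append(comp)
    return comps

eta = len(components(agents, M))            # eta(g): number of components
min_nbhd = min(len(M[i]) for i in agents)   # min_i |M(i, g)|
print(eta, min_nbhd)                        # prints: 2 3
```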
2.3 Imitation and dynamics

In each period, among the whole population I = {1, ..., I}, only n distinct agents are randomly chosen to play the minimum effort game Γ. Each selected agent observes the effort levels and performances of the members of his own information neighborhood in their most recent interactions and makes his choice following an imitation rule. We also consider a randomly perturbed version of the dynamics in which agents are allowed to make mistakes.

Let I^n = {I′ : I′ ⊆ I and |I′| = n} be the collection of subsets of I with cardinality n.[4] Assume that the sampling mechanism ν is a probability distribution on the set I^n satisfying ν(I′) > 0 for every I′ ∈ I^n.

[4] For any finite set X, the cardinality |X| is the number of elements belonging to X.

Time is discrete, t = 0, 1, 2, .... In each period t, a group I′(t) of n agents, I′(t) ∈ I^n, is randomly selected with strictly positive probability ν(I′(t)) > 0 to play the minimum effort game Γ. Each agent i's state in period t is given by s_i(t) = (ι_i(t), (e_i(t), u_i(t))). In particular, the participation index ι_i(t) ∈ {0, 1} indicates whether or not agent i has participated in the minimum effort game Γ up to period t; formally,

ι_i(t) = 1 if i ∈ I′(t′) for some t′ with 0 ≤ t′ ≤ t, and ι_i(t) = 0 otherwise.

In other words, in any period t ≥ 1, ι_i(t) = 1 if i ∈ I′(t), and ι_i(t) = ι_i(t − 1) otherwise. The pair (e_i(t), u_i(t)) records agent i's choice of effort level and his performance in the most recent interaction up to period t.

We now specify how the second component of each agent i's state, (e_i(t), u_i(t)), is determined. First, consider an agent i ∈ I′(t). If either t = 0 or ι_j(t − 1) = 0 holds for every j ∈ M(i, g), agent i randomly chooses one of the effort levels from the set E = {e^1, e^2, ..., e^m}; otherwise, agent i observes the choices of effort levels and the performances of his neighbors (including himself) in their most recent interactions and imitates the most successful alternative. Formally, the effort level e_i(t) is determined by

e_i(t) ∈ {e_j(t − 1) : j ∈ M(i, g), ι_j(t − 1) = 1 and u_j(t − 1) ≥ u_{j′}(t − 1) for all j′ ∈ M(i, g)}.   (1)

If the set on the right-hand side of Equation (1) is not a singleton, each of its elements is chosen with strictly positive probability. The minimum effort game Γ is then played by the members of I′(t) with the effort levels {e_j(t)}_{j∈I′(t)}, and each agent i ∈ I′(t) receives the payoff

u_i(t) = min_{j∈I′(t)} e_j(t) − δ·e_i(t).

In addition, for any agent i ∉ I′(t), (e_i(t), u_i(t)) = (e^m, e^1 − δ·e^m) if t = 0, and (e_i(t), u_i(t)) = (e_i(t − 1), u_i(t − 1)) if t ≥ 1.

The strategy revision process defines a Markov chain over the state space, which we may identify with the set Ω = {0, 1}^I × E^I × Π^I. This process will be denoted by {S(t)}_{t∈N} and referred to as the unperturbed dynamics. An absorbing set is a minimal subset of Ω with the property that, under the unperturbed dynamics, the probability of exiting from it is zero; hence the unperturbed dynamics can only converge to states contained in absorbing sets. An absorbing state is an element that forms a singleton absorbing set. For any absorbing set Ω̄, the basin of attraction D(Ω̄) is the set of states from which the unperturbed dynamics converge to Ω̄ with probability one.

There is, of course, a multiplicity of absorbing sets. For instance, for any effort level e ∈ E, (1, e·1, (1 − δ)e·1) is an absorbing state, where 1 is the I-dimensional vector with every element equal to 1. Indeed, assume that all agents have participated in the minimum effort game Γ up to period t and that e_j(t) = e and u_j(t) = (1 − δ)e for every j ∈ I. When an agent i ∈ I is randomly chosen to participate in the minimum effort game Γ in period t + 1, he observes that all his neighbors have chosen the same effort level e and received the same payoff (1 − δ)e. Agent i will therefore also choose effort level e and receive the payoff (1 − δ)e.

To select among the possible absorbing sets, we employ the technique of introducing random noise and identifying the most likely long-run equilibria (Ellison, 1993; Kandori, Mailath and Rob, 1993; Young, 1993). Suppose that, if selected to participate in the minimum effort game Γ, each agent makes a mistake with probability ε ∈ (0, 1); in that case, he chooses his effort level at random. As is conventional, mistakes are assumed to be independent across agents and periods. This defines the perturbed dynamics ({S_ε(t)}_{t∈N}, Ω). For each ε, {S_ε(t)}_{t∈N} is an irreducible Markov chain and therefore has a unique invariant distribution μ_ε. A state s is stochastically stable if μ*(s) > 0, where μ* = lim_{ε→0} μ_ε.
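As a rough numerical companion to the model just described (not part of the paper's formal apparatus), the following Python sketch implements one period of the perturbed dynamics. It assumes a uniform sampling mechanism ν, which is only one admissible choice since the model merely requires ν to put strictly positive probability on every n-agent group; all names are our own.

```python
import random

def one_period(state, M, E, n, delta, eps, rng=random):
    """One period of the (perturbed) imitation dynamics; illustrative sketch only.

    state: dict i -> (iota_i, e_i, u_i): participation index, last effort, last payoff
    M:     dict i -> information neighborhood M(i, g), including i itself
    E:     list of effort levels e^1 < ... < e^m
    n:     number of agents drawn each period
    delta: marginal cost of effort
    eps:   mistake probability epsilon
    """
    group = rng.sample(sorted(state), n)      # uniform sampling mechanism nu
    new_state = dict(state)
    efforts = {}
    for i in group:
        informed = [j for j in M[i] if state[j][0] == 1]
        if rng.random() < eps or not informed:
            efforts[i] = rng.choice(E)        # mistake, or no observable experience yet
        else:
            # Imitate the most successful observed choice, as in Equation (1).
            # Restricting the comparison to informed neighbors is equivalent here,
            # because non-participants carry the lowest possible recorded payoff.
            best = max(state[j][2] for j in informed)
            efforts[i] = rng.choice([state[j][1] for j in informed if state[j][2] >= best])
    lowest = min(efforts.values())
    for i in group:                            # play the game and record the outcome
        new_state[i] = (1, efforts[i], lowest - delta * efforts[i])
    return new_state
```

Starting from the initial condition in which no agent has yet participated, that is, state[i] = (0, e^m, e^1 − δ·e^m) for every i, and iterating this map with a small eps gives a quick numerical feel for the results in Section 3; none of this replaces the formal arguments there.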
3 Main results

Having spelled out the basic building blocks of the model, this section examines the likelihood of the emergence of the most efficient collaboration. Intuitively, we want to identify sufficient conditions under which, in the long run, every selected agent chooses the salient effort level e^m, so that socially optimal collaboration is guaranteed. With this aim, we examine the convergence of the unperturbed dynamics and the stochastic stability of the randomly perturbed dynamics.

3.1 Convergence of the unperturbed dynamics

To begin with, we introduce a lemma concerning the long-run behavior of the participation index ι_i.

Lemma 1 Consider an absorbing set Ω̄. For any state s = ((ι_1, ..., ι_I), (e_1, ..., e_I), (u_1, ..., u_I)) ∈ Ω̄, ι_i = 1 holds for every i ∈ I.

The proof is trivial and is omitted here. By the specification of the sampling mechanism ν, in each period every agent is chosen with strictly positive probability to play the minimum effort game Γ. Lemma 1 therefore tells us that in the long run, with probability one, every agent will have participated in the minimum effort game Γ. In the following theorem, we characterize the absorbing sets of the unperturbed dynamics {S(t)}_{t∈N}.

Theorem 1 Consider the unperturbed dynamics {S(t)}_{t∈N}. Given any two states

s = (1, (e_1, ..., e_I), (u_1, ..., u_I)) and s′ = (1, (e′_1, ..., e′_I), (u′_1, ..., u′_I))

from an absorbing set Ω̄, e_i = e_j = e′_i = e′_j holds for any two distinct members i and j of a component c.

The proof is provided in the Appendix. Theorem 1 shows that, in the long run and in the absence of random noise, all members of a component choose the same effort level. The intuition behind this result is as follows. Consider a component c = (I′, (g_ij)_{i,j∈I′}). Suppose that all agents in I′ have been selected to participate in the game Γ by period t_0, and that the imitation rule prescribes that agent i_0 ∈ I′ would choose some effort level e′ if randomly chosen in period t_0 + 1. Then it is possible that the effort level e′ spreads from i_0 to i_0's neighbors, and finally to all agents of the component c, provided that the members of I′ are successively selected to play the minimum effort game Γ.

[Figure 1: An illustration of an observation system consisting of two components.]

For an unconnected observation network g, the absorbing sets can be divided into two types. When all individuals of the whole population I choose the same effort level e ∈ E, the absorbing set is the absorbing state (1, e·1, (1 − δ)e·1). When agents from distinct components choose different effort levels, however, the absorbing set contains multiple elements. We employ an example to illustrate this. Consider a society I = {1, ..., 7} whose members are randomly chosen to play a 2-person minimum effort game. The observation system (I, g) consists of two components,

c_1 = ({1, 2, 3}, g_12 = g_23 = g_13 = 1) and c_2 = ({4, 5, 6, 7}, g_45 = g_56 = g_67 = g_74 = 1);

see Figure 1 for an illustration. Assume that in some period t_0 the state s(t_0) is characterized by (1) ι_i(t_0) = 1 for every i ∈ I and (2) all members of c_1 (respectively c_2) choosing the same effort level e^1 (respectively e^2). Following the imitation rule specified by Equation (1), all agents belonging to c_1 (c_2) will choose effort level e^1 (e^2) forever. However, in subsequent periods a selected agent's payoff depends in general on whether or not the other player comes from his own component, and is therefore not a constant amount. Consequently, the unperturbed dynamics {S(t)}_{t∈N} converge with probability one to an absorbing set consisting of multiple states.

As an immediate consequence, Theorem 1 yields the following proposition for connected observation networks.

Proposition 1 Assume that the observation network g is connected. Then the unperturbed dynamics {S(t)}_{t∈N} converge with probability one to one of the limit states in the set {(1, e·1, (1 − δ)e·1) : e ∈ E}.
Proposition 1 says that when the observation network g is connected, in the long run all individuals of the population I choose the same effort level e and, as a consequence, obtain the payoff (1 − δ)e whenever selected to play the minimum effort game Γ. In other words, connectedness of the observation network eliminates heterogeneity in agents' choices of effort levels, and every absorbing set is a singleton.

3.2 Stochastic stability of the perturbed dynamics

In this subsection, we first explore the stochastic stability of the randomly perturbed dynamics for general observation networks. We then restrict attention to connected observation networks.

3.2.1 General observation networks

Before proceeding to the results on the stochastic stability of the randomly perturbed dynamics {S_ε(t)}_{t∈N}, ε ∈ (0, 1), we provide two lemmas that characterize the transitions between different absorbing sets. The first lemma shows how, starting from any other absorbing set, it is possible to enter the basin of attraction of the absorbing state (1, e^m·1, (1 − δ)e^m·1).

Lemma 2 Consider the randomly perturbed dynamics ({S_ε(t)}_{t∈N}, Ω), ε ∈ (0, 1). For any absorbing set Ω̄ with Ω̄ ≠ {(1, e^m·1, (1 − δ)e^m·1)}, max{n, η(g)} mutations are sufficient to induce the transition from Ω̄ to {(1, e^m·1, (1 − δ)e^m·1)}.

The proof is provided in the Appendix. The intuition underlying Lemma 2 is as follows. When the n members of a group I′(t_0) are selected to play the minimum effort game Γ, it is possible that, owing to random noise, each of them chooses effort level e^m regardless of the choice specified by the deterministic imitation rule, and thereby receives the maximum possible payoff (1 − δ)e^m. This interaction serves as a good example, and the salient effort level e^m then spreads, with strictly positive probability, to all other agents belonging to the same components as the members of I′(t_0). In addition, to induce entrance into the basin of attraction of the absorbing state (1, e^m·1, (1 − δ)e^m·1), each component must contain an agent who has chosen effort level e^m and received the payoff (1 − δ)e^m in his most recent interaction.

The next lemma describes how it is possible to leave the basin of attraction of the absorbing state (1, e^m·1, (1 − δ)e^m·1).

Lemma 3 Consider the randomly perturbed dynamics ({S_ε(t)}_{t∈N}, Ω), ε ∈ (0, 1). To exit from the basin of attraction D((1, e^m·1, (1 − δ)e^m·1)), min_{i∈I} |M(i, g)| mutations are both necessary and sufficient.

The proof is provided in the Appendix. If some element of the set M(i, g) has chosen effort level e^m and received payoff (1 − δ)e^m in the most recent interaction, then, under the unperturbed imitation rule, agent i will also choose effort level e^m if selected to play the minimum effort game Γ. On the other hand, consider an agent i_0 ∈ arg min_{i∈I} |M(i, g)|. Provided that all agents belonging to M(i_0, g) choose effort level e^1 by mistake when selected to play the game Γ, it is possible that, in the absence of further random noise, {S_ε(t)}_{t∈N} converges to an absorbing set in which all members of the component c = (I′, (g_ij)_{i,j∈I′}) with i_0 ∈ I′ choose the same effort level e^1.

With the help of Lemmas 2 and 3, we can present the following result on the stochastic stability of the randomly perturbed dynamics.

Theorem 2 Consider the randomly perturbed dynamics ({S_ε(t)}_{t∈N}, Ω), ε ∈ (0, 1). Then only the absorbing state (1, e^m·1, (1 − δ)e^m·1) is stochastically stable, provided that

min_{i∈I} |M(i, g)| > max{n, η(g)}.

The proof is provided in the Appendix. Given that each agent's information neighborhood is sufficiently large, exiting the basin of attraction D((1, e^m·1, (1 − δ)e^m·1)) is more difficult than entering it as the probability of random noise tends to zero; that is, the absorbing state (1, e^m·1, (1 − δ)e^m·1) is the unique stochastically stable equilibrium. The reason is as follows. Suppose that each agent, when selected to participate in the minimum effort game Γ, is able to acquire abundant information about other agents' choices of effort levels and their performances in the most recent interaction. Then it is very likely that he observes some neighbor who has chosen effort level e^m and received payoff (1 − δ)e^m. In the long run, following the randomly perturbed imitation rule, all selected agents will choose the salient effort level e^m and receive the maximum possible payoff (1 − δ)e^m most of the time, provided that the probability of random noise goes to zero. Therefore, the socially optimal outcome is attained.
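As a concrete check of the condition in Theorem 2 (ours, purely illustrative), consider again the seven-agent network with a triangle component {1, 2, 3} and a 4-cycle component {4, 5, 6, 7} from Section 3.1, with groups of n = 2 agents drawn each period.

```python
# Neighborhood sizes |M(i, g)| for the two-component example; each counts the agent
# himself, since g_ii = 1, plus his two direct neighbors.
neighborhood_sizes = {1: 3, 2: 3, 3: 3, 4: 3, 5: 3, 6: 3, 7: 3}
n, eta = 2, 2                                            # group size and eta(g)
assert min(neighborhood_sizes.values()) > max(n, eta)    # 3 > 2: Theorem 2 applies
```

Here min_{i∈I} |M(i, g)| = 3 > 2 = max{n, η(g)}, so in this example only the absorbing state (1, e^m·1, (1 − δ)e^m·1) is stochastically stable.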
3.2.2 Connected observation networks

Proposition 1 shows that, for connected observation networks, the unperturbed dynamics converge to one of the absorbing states with probability one. We now examine the robustness of the various absorbing states to random noise. As in the previous subsubsection, we first offer two lemmas that investigate how the perturbed dynamics move between absorbing states. The first lemma shows how, starting from an absorbing state, it is possible to complete the transition to another absorbing state in which all agents undertake a strictly higher effort level.

Lemma 4 Assume that the observation network g is connected. Consider the randomly perturbed dynamics ({S_ε(t)}_{t∈N}, Ω), ε ∈ (0, 1). For any two effort levels e and e′ with e < e′, n mutations are both necessary and sufficient to induce the transition from the absorbing state (1, e·1, (1 − δ)e·1) to (1, e′·1, (1 − δ)e′·1).

The proof is the same as that of Lemma 2 and is omitted here. The next lemma examines the possibility of a transition from an absorbing state to another limit state in which all agents choose a strictly lower effort level.

Lemma 5 Assume that the observation network g is connected. Consider the randomly perturbed dynamics ({S_ε(t)}_{t∈N}, Ω), ε ∈ (0, 1). For any two effort levels e and e′ with e < e′, min_{i∈I} |M(i, g)| mutations are both necessary and sufficient to induce the transition from the absorbing state (1, e′·1, (1 − δ)e′·1) to (1, e·1, (1 − δ)e·1).

The proof resembles that of Lemma 3 and is omitted here. Combining Lemma 4 with Lemma 5, we can now present the following theorem on stochastic stability when the observation network g is connected.

Theorem 3 Assume that the observation network g is connected. Consider the randomly perturbed dynamics ({S_ε(t)}_{t∈N}, Ω), ε ∈ (0, 1). Then,

• the absorbing state (1, e^m·1, (1 − δ)e^m·1) is the unique stochastically stable equilibrium given that |M(i, g)| ≥ n + 1 for every i ∈ I; and

• when min_{i∈I} |M(i, g)| < n, only the absorbing state (1, e^1·1, (1 − δ)e^1·1) is stochastically stable.

The proof is provided in the Appendix.
Theorem 3 shows that, for a connected observation network, if each agent is able to acquire information about the effort choices and payoffs of n or more other agents in their most recent interactions, then in the long run all randomly chosen agents will choose the salient effort level e^m and obtain the maximum possible payoff (1 − δ)e^m most of the time, provided that the probability of mutations vanishes. That is, socially optimal collaboration is achieved. However, when there exists an agent i_0 who observes only n − 2 or fewer other agents, the basin of attraction of (1, e^1·1, (1 − δ)e^1·1) is more resistant to random noise than that of any other absorbing state as the probability of random noise goes to zero. There may therefore exist a tension between individual optimality and social efficiency.

4 Stag-hunt dilemma in networks with randomly chosen agents

In this section, we consider a special case of the minimum effort game in which n equals 2 and E contains only two distinct elements, e^1 and e^2. Formally, this 2 × 2 version of the minimum effort game Γ can be represented by the following table, where the row player's payoff is listed first:

            e^1                            e^2
  e^1   (1 − δ)e^1, (1 − δ)e^1       (1 − δ)e^1, e^1 − δe^2
  e^2   e^1 − δe^2, (1 − δ)e^1       (1 − δ)e^2, (1 − δ)e^2

As before, it is assumed that 0 < δ < 1 and e^1 < e^2. For clarity, let Γ_{2,2} denote this 2 × 2 example of the minimum effort game Γ.

The 2 × 2 game Γ_{2,2} has two strict Nash equilibria, (e^1, e^1) and (e^2, e^2). Further, subtracting the lowest payoff e^1 − δe^2 from every entry and dividing by e^2 − e^1 transforms the game Γ_{2,2} into

            e^1       e^2
  e^1      δ, δ      δ, 0
  e^2      0, δ      1, 1

It is straightforward to check that the game Γ_{2,2} can be regarded as a version of the stag-hunt dilemma, where the effort levels e^1 and e^2 correspond to the strategies "Hare" and "Stag", respectively.

Theorem 2 immediately yields the following proposition concerning the long-run outcome of the randomly perturbed dynamics when the stag-hunt game is repeatedly played by two randomly chosen agents.

Proposition 2 Consider the randomly perturbed dynamics ({S_ε(t)}_{t∈N}, Ω), ε ∈ (0, 1), for the stag-hunt game Γ_{2,2}. Then the absorbing state (1, e^2·1, (1 − δ)e^2·1) is the unique stochastically stable equilibrium if |M(i, g)| ≥ max{3, η(g) + 1} for every i ∈ I.

When the observation network g is connected, Proposition 1 tells us that the unperturbed dynamics have only two possible limit states, (1, e^1·1, (1 − δ)e^1·1) and (1, e^2·1, (1 − δ)e^2·1), when the stag-hunt game is repeatedly played by a pair of randomly chosen agents. The following proposition concerns the stochastic stability of the randomly perturbed dynamics.

Proposition 3 Assume that the observation network g is connected. Consider the randomly perturbed dynamics ({S_ε(t)}_{t∈N}, Ω), ε ∈ (0, 1), for the stag-hunt game Γ_{2,2}. Then,

• only the limit state (1, e^2·1, (1 − δ)e^2·1) is stochastically stable if |M(i, g)| ≥ 3 for every i ∈ I; and

• when there exists an agent i_0 such that |M(i_0, g)| = 2, both (1, e^1·1, (1 − δ)e^1·1) and (1, e^2·1, (1 − δ)e^2·1) are stochastically stable.

The first result is a straightforward application of Theorem 3, and the second follows from the fact that two mutations are both necessary and sufficient to induce the transition from one limit state to the other (see Lemmas 4 and 5).
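As a quick arithmetic check of the normalization above (ours, with illustrative parameter values e^1 = 10, e^2 = 20 and δ = 0.4), the following snippet reproduces the δ/0/1 payoff table.

```python
# Verify the affine normalization of Gamma_{2,2}: subtract the lowest payoff
# e^1 - delta*e^2 from every entry and divide by e^2 - e^1.
e1, e2, delta = 10.0, 20.0, 0.4

def payoff(own, other):
    return min(own, other) - delta * own

def normalized(own, other):
    return (payoff(own, other) - (e1 - delta * e2)) / (e2 - e1)

for own in (e1, e2):
    for other in (e1, e2):
        print(own, other, normalized(own, other))
# (e1, e1) -> 0.4 = delta, (e1, e2) -> 0.4, (e2, e1) -> 0.0, (e2, e2) -> 1.0
```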
5 Concluding remark

The present paper has developed a model in which the minimum effort game is repeatedly played by a fixed number of randomly chosen agents. Each selected agent observes his neighbors' choices of effort levels and their payoffs from their most recent interactions, imitates the most successful alternatives, and occasionally makes mistakes. This paper has shown that in the long run all agents will choose the salient effort level, and socially optimal collaboration will be guaranteed, provided that each agent's neighborhood is sufficiently large. This condition requires that each agent be able to acquire abundant information about other agents' experiences in the most recent interaction when selected to participate in the minimum effort game. In other words, observation networks with high cohesion are preferred both by individual agents and by society as a whole.[5] It is therefore tempting to conclude that, with the declining costs of information acquisition brought about by progress in information and communication technology, more efficient collaboration can reasonably be expected.

[5] Seidman (1983) proposes that the minimum degree over all nodes can be used as a measure of social and economic network cohesion.

There are several natural extensions of the research presented here. First, in practice it is very often the case that whether or not to participate in a collaboration in each period is up to the agent to decide, rather than being determined by random sampling. To take this into account, the minimum effort game model should be extended. Second, it would be desirable to consider the case in which each agent's payoff increases with the average of all agents' effort levels and decreases with his own effort level; for example, one may consider the average opinion games of Van Huyck, Battalio, and Beil (1991). Third, it would be interesting to examine whether the main results presented here are supported by laboratory experiments.

Appendix A

A.1 Radius-coradius theorem

Here we present the radius-coradius theorem of Ellison (2000). Some preliminary definitions are necessary. Consider a perturbed dynamics ({X_ε(t)}_{t∈N}, S), ε ∈ (0, 1), such that for every ε ∈ (0, 1), {X_ε(t)}_{t∈N} is an irreducible Markov chain. For any two states s and s′, the resistance is defined by

r(s, s′) = lim_{ε→0} [log Prob{X_ε(1) = s′ | X_ε(0) = s} / log ε].

For any two subsets S̄ and S̄′ of S with S̄ ∩ S̄′ = ∅, a path from S̄ to S̄′ is a sequence of distinct states (s_1, ..., s_m) with s_1 ∈ S̄, s_l ∉ S̄′ for 2 ≤ l ≤ m − 1, and s_m ∈ S̄′. Define the resistance of the path (s_1, ..., s_m) to be

r(s_1, ..., s_m) = Σ_{1≤l≤m−1} r(s_l, s_{l+1}).

Let P(S̄, S̄′) denote the set of all paths from S̄ to S̄′. For any absorbing set S̄, the radius R(S̄) is defined by

R(S̄) = min_{(s_1,...,s_m) ∈ P(S̄, S\D(S̄))} r(s_1, ..., s_m).

The notion of resistance can be extended to a point-to-set concept by setting

r(s, S̄) = min_{(s_1,...,s_m) ∈ P({s}, S̄)} r(s_1, ..., s_m).

For any absorbing set S̄, the coradius CR(S̄) is defined by

CR(S̄) = max_{s ∉ D(S̄)} r(s, S̄).

Theorem 1 in Ellison (2000) shows that, for any absorbing set S̄, if R(S̄) > CR(S̄), then the set of stochastically stable states is contained in S̄. Intuitively, if entering the basin of attraction D(S̄) requires less resistance than leaving it, then S̄ is relatively stable against random noise and will be observed most of the time as the noise vanishes.

A.2 Proofs

Proof of Theorem 1. Consider the unperturbed dynamics {S(t)}_{t∈N}.
By the specification of the sampling mechanism ν, the sequence of random variables {(ι_1(t), ..., ι_I(t))}_{t=1}^{∞} converges to (1, ..., 1) with probability 1. Without loss of generality, assume that there exists a period t_0 such that ι_i(t_0) = 1 for every i ∈ I. It is therefore sufficient to prove the conclusion for one component c = (I′, (g_ij)_{i,j∈I′}). According to the value of |c|, the rest of the proof is divided into two cases.

Case 1: |c| ≤ n.

In period t_0 + 1, the elements of a set I′_{t_0+1} with I′ ⊆ I′_{t_0+1} and |I′_{t_0+1}| = n are randomly chosen to participate in the minimum effort game Γ with probability ν(I′_{t_0+1}) > 0. Each agent i ∈ I′ chooses an effort level e_i(t_0 + 1) following Equation (1), and

u_i(t_0 + 1) = min_{j∈I′_{t_0+1}} e_j(t_0 + 1) − δ·e_i(t_0 + 1).

Therefore, for any two distinct agents i, i′ ∈ I′, if e_i(t_0 + 1) ≥ e_{i′}(t_0 + 1), then u_i(t_0 + 1) ≤ u_{i′}(t_0 + 1).

In period t_0 + 2, the agents of a set I′_{t_0+2} with I′ ⊆ I′_{t_0+2} and |I′_{t_0+2}| = n are selected to participate in the game Γ with probability ν(I′_{t_0+2}) > 0. According to the deterministic rule, each agent i ∈ I′ chooses effort level e_i(t_0 + 2) = min_{j∈M(i,g)} e_j(t_0 + 1). This strategy revision follows from the fact that, among the agents belonging to M(i, g), the agent exerting the lowest effort level had the best performance in the previous period t_0 + 1.

Following the same logic, after finitely many periods all agents belonging to I′ choose the same effort level min{e_j(t_0 + 1) : j ∈ I′} with strictly positive probability.

Case 2: |c| > n.

Without loss of generality, assume that the component c = (I′, (g_ij)_{i,j∈I′}) satisfies n < |I′| < 2n − 1.

For any agent i ∈ I′, let e′_i(t_0 + 1) be defined by

e′_i(t_0 + 1) = min{e_j(t_0) : j ∈ M(i, g) and u_j(t_0) ≥ u_{j′}(t_0) for all j′ ∈ M(i, g)}.

That is, if agent i is selected in period t_0 + 1, then, following the deterministic imitation rule given by Equation (1), he chooses effort level e′_i(t_0 + 1) with strictly positive probability. Let i_0 ∈ arg min_{i∈I′} e′_i(t_0 + 1).

In period t_0 + 1, the elements of a set I′_{t_0+1} with i_0 ∈ I′_{t_0+1}, I′_{t_0+1} ⊂ I′ and I′_{t_0+1} ∈ I^n are randomly chosen to participate in the game Γ with probability ν(I′_{t_0+1}) > 0. Suppose that each agent i ∈ I′_{t_0+1} chooses effort level e′_i(t_0 + 1), that is, e_i(t_0 + 1) = e′_i(t_0 + 1). Thus, for any i ∈ I′_{t_0+1},

u_i(t_0 + 1) = min_{j∈I′_{t_0+1}} e_j(t_0 + 1) − δ·e_i(t_0 + 1) = e′_{i_0}(t_0 + 1) − δ·e_i(t_0 + 1) ≤ (1 − δ)·e′_{i_0}(t_0 + 1).

In particular, u_{i_0}(t_0 + 1) = (1 − δ)·e′_{i_0}(t_0 + 1).

In period t_0 + 2, the agents of a set I′_{t_0+2} with {i_0} ∪ (I′ \ I′_{t_0+1}) ⊆ I′_{t_0+2} and |I′_{t_0+2}| = n are selected to participate in the game Γ with probability ν(I′_{t_0+2}) > 0. Now consider the agents' strategy revisions. On the one hand, agent i_0 chooses effort level e′_{i_0}(t_0 + 1) with strictly positive probability. Indeed, for any agent j ∈ M(i_0, g): (1) if j ∈ I′_{t_0+1}, then u_j(t_0 + 1) ≤ u_{i_0}(t_0 + 1), with equality only if e_j(t_0 + 1) = e′_j(t_0 + 1) = e′_{i_0}(t_0 + 1); (2) if j ∉ I′_{t_0+1}, then u_j(t_0 + 1) = u_j(t_0) ≤ max_{j′∈M(i_0,g)} u_{j′}(t_0) ≤ (1 − δ)·e′_{i_0}(t_0 + 1) = u_{i_0}(t_0 + 1), where the last inequality follows from the definition of e′_{i_0}(t_0 + 1) together with the fact that the highest payoff attainable with effort level e′_{i_0}(t_0 + 1) is (1 − δ)·e′_{i_0}(t_0 + 1). Suppose that e_{i_0}(t_0 + 2) = e′_{i_0}(t_0 + 1).
On the other hand, for any agent i ∈ I′_{t_0+2} \ {i_0}, i's effort level e_i(t_0 + 2) satisfies

e_i(t_0 + 2) ∈ {e_k(t_0 + 1) : k ∈ M(i, g)} ⊆ ∪_{k∈M(i,g)} {e_j(t_0) : j ∈ M(k, g) and u_j(t_0) ≥ u_{j′}(t_0) for all j′ ∈ M(k, g)}.

By the definition of e′_{i_0}(t_0 + 1), e_i(t_0 + 2) ≥ e′_{i_0}(t_0 + 1) = e_{i_0}(t_0 + 2). Therefore the unperturbed dynamics {S(t)}_{t∈N} reach a state s(t_0 + 2) in which, for each agent i ∈ I′, i's payoff in the most recent interaction is e′_{i_0}(t_0 + 1) − δ·e_i(t_0 + 2), with e_i(t_0 + 2) ≥ e′_{i_0}(t_0 + 1).

In period t_0 + 3, the agents of a set I′_{t_0+3} are randomly chosen to participate in the game Γ with probability ν(I′_{t_0+3}) > 0, where (1) |I′_{t_0+3}| = n and (2) I′_{t_0+3} ⊆ M(i_0, g) if |M(i_0, g)| ≥ n, and M(i_0, g) ⊆ I′_{t_0+3} otherwise. According to the deterministic rule, each agent i ∈ I′_{t_0+3} chooses effort level e′_{i_0}(t_0 + 1) with probability 1. This strategy revision is due to the fact that, for every i ∈ I′, u_i(t_0 + 2) ≤ (1 − δ)·e′_{i_0}(t_0 + 1) = u_{i_0}(t_0 + 2), with equality only when e_i(t_0 + 2) = e′_{i_0}(t_0 + 1).

Following the same logic, after finitely many periods all agents belonging to I′ choose the same effort level e′_{i_0}(t_0 + 1) with strictly positive probability.

For each period t ∈ N, let Î(t) = {i ∈ I : ι_i(t) = 1 and (e_i(t), u_i(t)) = (e^m, (1 − δ)e^m)} be the set of agents who have chosen effort level e^m and received payoff (1 − δ)e^m in their most recent interaction up to period t. The following lemma is essential for the proof of Lemma 2.

Lemma 6 Consider the unperturbed dynamics {S(t)}_{t∈N}. Assume that in some period t_0 the set Î(t_0) satisfies (1) Î(t_0) ∩ I′ ≠ ∅ for every component c = (I′, (g_ij)_{i,j∈I′}) and (2) |Î(t_0)| ≥ n − 1. Then {S(t)}_{t∈N} converges to the absorbing state (1, e^m·1, (1 − δ)e^m·1) with strictly positive probability.

Proof. Without loss of generality, assume that I \ Î(t_0) ≠ ∅. To prove Lemma 6, it suffices to show that, with strictly positive probability, the size of Î(t) strictly increases as t increases.

In period t_0 + 1, the agents of a set I′_{t_0+1} ∈ I^n are selected to participate in the game Γ with probability ν(I′_{t_0+1}) > 0, where I′_{t_0+1} \ Î(t_0) ≠ ∅ and every element i of I′_{t_0+1} satisfies either i ∈ Î(t_0) or i ∈ ∪_{j∈Î(t_0)} (M(j, g) \ {j}). The existence of such a set I′_{t_0+1} is guaranteed by the assumption I \ Î(t_0) ≠ ∅ and the conditions of Lemma 6. According to the deterministic rule given by Equation (1), all members of I′_{t_0+1} choose the same effort level e^m: for each agent the maximum possible payoff is (1 − δ)e^m, and it can only be attained by choosing effort level e^m. Thus, for any i ∈ I′_{t_0+1}, e_i(t_0 + 1) = e^m and u_i(t_0 + 1) = (1 − δ)e^m, and hence |Î(t_0 + 1)| ≥ |Î(t_0)| + 1.

Following the same logic, after finitely many periods the unperturbed dynamics {S(t)}_{t∈N} arrive at the state (1, e^m·1, (1 − δ)e^m·1) with strictly positive probability.

Proof of Lemma 2. It is sufficient to consider the case η(g) ≤ n; in other words, the observation system (I, g) has at most n components. Assume that in period t_0 the perturbed dynamics ({S_ε(t)}_{t∈N}, Ω), ε ∈ (0, 1), are at a state belonging to the absorbing set Ω̄.
In period t_0 + 1, the elements of a set I′_{t_0+1} ∈ I^n are randomly chosen to participate in the minimum effort game Γ with probability ν(I′_{t_0+1}) > 0, where I′_{t_0+1} ∩ I′ ≠ ∅ holds for every component c = (I′, (g_ij)_{i,j∈I′}). Each member of I′_{t_0+1} chooses effort level e^m, whether or not this is the choice specified by the deterministic imitation rule given by Equation (1). To complete these strategy revisions, at most n mistakes by the members of I′_{t_0+1} are required, and when no agent in I chooses effort level e^m in period t_0, exactly n mutations are required. Thus, for every agent i ∈ I′_{t_0+1}, ι_i(t_0 + 1) = 1, e_i(t_0 + 1) = e^m and u_i(t_0 + 1) = (1 − δ)e^m.

As a result, the set I′_{t_0+1} can play the role of the set Î(t_0) in Lemma 6, and by Lemma 6 it follows that, following the unperturbed dynamics in subsequent periods, {S_ε(t)}_{t∈N} converges to the absorbing state (1, e^m·1, (1 − δ)e^m·1) with strictly positive probability.

Proof of Lemma 3. Consider an agent i ∈ I. For any agent j ∈ M(i, g), if ι_j(t) = 1, e_j(t) = e^m and u_j(t) = (1 − δ)e^m, then both agent i and agent j will choose effort level e^m with probability 1 when given the opportunity to participate in the minimum effort game Γ in period t + 1. Thus, to induce an exit from the basin of attraction of the absorbing state (1, e^m·1, (1 − δ)e^m·1), min_{i∈I} |M(i, g)| mutations are necessary.

We now show that min_{i∈I} |M(i, g)| mutations are also sufficient to leave the basin of attraction of (1, e^m·1, (1 − δ)e^m·1). Suppose that in some period t_0 the perturbed dynamics {S_ε(t)}_{t∈N}, ε ∈ (0, 1), are at the state (1, e^m·1, (1 − δ)e^m·1), and let i_0 be an agent with the smallest number of neighbors; formally, |M(i_0, g)| ≤ |M(i, g)| for every i ∈ I. Without loss of generality, assume that n < |M(i_0, g)| ≤ 2n.

In period t_0 + 1, the agents of a set I′_{t_0+1} ∈ I^n are selected to participate in the minimum effort game Γ with probability ν(I′_{t_0+1}) > 0, where I′_{t_0+1} ⊂ M(i_0, g). Although the deterministic rule stipulates the choice of e^m, each agent i ∈ I′_{t_0+1} chooses effort level e^1 by mistake. These strategy revisions therefore require n mutations.

In period t_0 + 2, the members of a set I′_{t_0+2} ∈ I^n are selected to play the minimum effort game Γ with probability ν(I′_{t_0+2}) > 0, where M(i_0, g) \ I′_{t_0+1} ⊆ I′_{t_0+2}. Each agent i ∈ M(i_0, g) \ I′_{t_0+1} chooses effort level e^1 by mistake. Indeed, for any i ∈ M(i_0, g), M(i, g) \ I′_{t_0+1} ≠ ∅; otherwise |M(i, g)| ≤ n < |M(i_0, g)|, contradicting the choice of i_0. Moreover, for any j ∈ M(i, g) \ I′_{t_0+1}, we have ι_j(t_0 + 1) = 1, e_j(t_0 + 1) = e^m and u_j(t_0 + 1) = (1 − δ)e^m, so the choice of effort level e^1 by any agent i ∈ M(i_0, g) \ I′_{t_0+1} requires a mistake. In total, |M(i_0, g)| − n mutations are responsible for these strategy revisions.

As a consequence, after min_{i∈I} |M(i, g)| mutations, {S_ε(t)}_{t∈N}, ε ∈ (0, 1), arrives at a state in which, for every i ∈ M(i_0, g), ι_i(t_0 + 2) = 1, e_i(t_0 + 2) = e^1 and u_i(t_0 + 2) = (1 − δ)e^1.
As in the proof of Theorem 1, one can show that after finitely many periods, following the unperturbed dynamics, {S_ε(t)}_{t∈N}, ε ∈ (0, 1), reaches, with strictly positive probability, the absorbing set in which all members of the component c = (I′, (g_ij)_{i,j∈I′}) with i_0 ∈ I′ choose the same effort level e^1 while all other agents choose the same effort level e^m.

Proof of Theorem 2. By Lemma 3,

R({(1, e^m·1, (1 − δ)e^m·1)}) = min_{i∈I} |M(i, g)|.

On the other hand, by Lemma 2,

CR({(1, e^m·1, (1 − δ)e^m·1)}) = max{n, η(g)}.

Consequently, R({(1, e^m·1, (1 − δ)e^m·1)}) > CR({(1, e^m·1, (1 − δ)e^m·1)}) holds. As an application of the radius-coradius theorem of Ellison (2000), we conclude that only the absorbing state (1, e^m·1, (1 − δ)e^m·1) is stochastically stable.

Proof of Theorem 3. By Lemma 4 and Lemma 5,

R({(1, e^1·1, (1 − δ)e^1·1)}) = n,
R({(1, e·1, (1 − δ)e·1)}) = min{n, min_{i∈I} |M(i, g)|} for any e ∈ E \ {e^1, e^m},
R({(1, e^m·1, (1 − δ)e^m·1)}) = min_{i∈I} |M(i, g)|,

and furthermore

CR({(1, e^1·1, (1 − δ)e^1·1)}) = min_{i∈I} |M(i, g)|,
CR({(1, e·1, (1 − δ)e·1)}) = max{n, min_{i∈I} |M(i, g)|} for any e ∈ E \ {e^1, e^m},
CR({(1, e^m·1, (1 − δ)e^m·1)}) = n.

As a result, if min_{i∈I} |M(i, g)| < n, then R({(1, e^1·1, (1 − δ)e^1·1)}) > CR({(1, e^1·1, (1 − δ)e^1·1)}) holds. As an application of the radius-coradius theorem of Ellison (2000), only the absorbing state (1, e^1·1, (1 − δ)e^1·1) is stochastically stable. Following the same logic, we can show that (1, e^m·1, (1 − δ)e^m·1) is the unique stochastically stable equilibrium when min_{i∈I} |M(i, g)| > n.

References

[1] Anderlini, L., Ianni, A., 1996. "Path dependence and learning from neighbors." Games and Economic Behavior 13, 141–177.

[2] Alós-Ferrer, C., Weidenholzer, S., 2007. "Partial bandwagon effects and local interactions." Games and Economic Behavior 61, 179–197.

[3] Alós-Ferrer, C., Weidenholzer, S., 2008. "Contagion and efficiency." Journal of Economic Theory 143, 251–274.

[4] Alós-Ferrer, C., Weidenholzer, S., 2014. "Imitation and the role of information in overcoming coordination failures." Games and Economic Behavior 87, 397–411.

[5] Blume, L.E., 1993. "The statistical mechanics of strategic interaction." Games and Economic Behavior 5, 387–424.

[6] Blume, L.E., 1995. "The statistical mechanics of best-response strategy revision." Games and Economic Behavior 11, 111–145.

[7] Ellison, G., 1993. "Learning, local interaction, and coordination." Econometrica 61, 1047–1071.

[8] Ellison, G., 2000. "Basins of attraction, long-run stochastic stability, and the speed of step-by-step evolution." Review of Economic Studies 67, 17–45.

[9] Eshel, I., Samuelson, L., Shaked, A., 1998. "Altruists, egoists, and hooligans in a local interaction model." American Economic Review 88, 157–179.

[10] Goyal, S., 2011. "Learning in networks." In: Benhabib, J., Bisin, A., Jackson, M. (Eds.), Handbook of Social Economics, Volume 1, North-Holland.

[11] Khan, A., 2014. "Coordination under global random interaction and local imitation." International Journal of Game Theory, forthcoming.

[12] Kandori, M., Mailath, G., Rob, R., 1993. "Learning, mutation, and long run equilibria in games." Econometrica 61, 29–56.

[13] Özgür, O., 2011. "Local interactions." In: Benhabib, J., Bisin, A., Jackson, M. (Eds.), Handbook of Social Economics, Volume 1, North-Holland.

[14] Morris, S., 2000. "Contagion." Review of Economic Studies 67, 57–78.
[15] Robson, A.J., Vega-Redondo, F., 1996. "Efficient equilibrium selection in evolutionary games with random matching." Journal of Economic Theory 70, 65–92.

[16] Seidman, S.B., 1983. "Network structure and minimum degree." Social Networks 5, 269–287.

[17] Van Huyck, J., Battalio, R., Beil, R., 1990. "Tacit coordination games, strategic uncertainty, and coordination failure." American Economic Review 80, 234–248.

[18] Van Huyck, J., Battalio, R., Beil, R., 1991. "Strategic uncertainty, equilibrium selection, and coordination failure in average opinion games." Quarterly Journal of Economics 106, 885–910.

[19] Weber, R., 2006. "Managing growth to achieve efficient coordination in large groups." American Economic Review 96, 114–126.

[20] Weidenholzer, S., 2010. "Coordination games and local interactions: A survey of the game theoretic literature." Games 1, 551–585.

[21] Young, H.P., 1993. "The evolution of conventions." Econometrica 61, 57–84.

[22] Young, H.P., 1998. Individual Strategy and Social Structure: An Evolutionary Theory of Institutions. Princeton University Press.