A Folk Theorem for Contract Games with Multiple Principals and Agents

Siyang Xiong†

March 27, 2013

Abstract

We fully characterize the set of deterministic equilibrium allocations in a competing contract game with multiple principals and agents. Compared to similar folk theorems in the literature, our main theorem relaxes an indispensable assumption on equilibrium existence in Yamashita [8], and does not require a particular communication protocol as in Peters and Troncoso-Valverde [6].

I thank Eddie Dekel, Songying Fang, Mike Peters, Balázs Szentes, Stephen Wolff, Takuro Yamashita and anonymous referees for helpful comments. I thank the National Science Foundation (grant SES-1227620) for financial support. All remaining errors are my own.

† Department of Economics, Rice University, [email protected]

1 Introduction

In the presence of asymmetric information, economic players may write contracts to elicit other players' information in order to make optimal decisions. Most of the large literature on contract theory focuses on the classical principal-agent (i.e., one principal and one or multiple agents) model and the common-agency (i.e., one agent and multiple principals) model.^1 However, many economic situations involve both multiple principals and multiple agents. For example, there are many competing insurance companies that offer different policies, and there are many consumers trying to find the best insurance plans. To understand such situations, we study a two-stage multi-principal, multi-agent game in this paper.

^1 See Bolton and Dewatripont [1] and Martimort [4] for surveys on the principal-agent model and the common-agency model, respectively.

In this contract game, the payoffs of the players^2 depend on both the states and the actions. States are privately observed by the agents, while actions are chosen by the principals. In stage 1, the principals offer contracts, which specify their committed actions contingent on the messages reported by the agents. In stage 2, the agents report messages and the contracts are executed.

^2 Throughout the paper, we use "she" and "he" to refer to a principal and an agent, respectively. We use "players" to refer to both principals and agents.

Our objective is to fully characterize the set of deterministic equilibrium allocations^3 in this contract game. Our main result (Theorem 1) says that the equilibrium allocations are fully characterized by two incentive compatibility (IC) conditions, one for the agents and one for the principals:

(I) agents: truthful reporting by the agents forms a Bayes Nash equilibrium in the direct mechanisms defined by the allocation;

(II) principals: every principal is endowed with a min-max-min value (as defined in (5)); under the allocation, every principal achieves utility larger than her min-max-min value.

^3 We discuss stochastic equilibrium allocations in Section 5.2.

Condition (I) must hold in every equilibrium; the intuition is the same as the revelation principle in the classical principal-agent model. However, the "revelation principle" ignores IC for the principals, so we need condition (II). The min-max-min value in condition (II) plays the same role as the min-max value in a usual 2-player normal-form game: given the other principals' and the agents' strategies, principal j can always find a contract that guarantees her min-max-min value. As a result, in any equilibrium, all principals must achieve utility larger than their min-max-min values, i.e., condition (II) must hold in every equilibrium.
Conversely, for any allocation satisfying conditions (I) and (II), we construct an equilibrium that implements the allocation. Specifically, for any principal $k$, principals $-k$ can find punishing contracts $c_{-k}^{k}$ under which principal $k$ achieves utility less than her min-max-min value.^4 The intuition of the equilibrium construction is then clear: in stage 1, each principal offers a set of contracts, which includes the equilibrium-allocation contract and the punishing contracts; in stage 2, the agents choose the equilibrium-allocation contract on the equilibrium path, and they choose the punishing contracts $c_{-k}^{k}$ whenever principal $k$ deviates from the equilibrium path.

^4 We adopt the standard notation of letting $-k$ denote all members of a group other than individual $k$.

Yamashita [8]'s theorem shares a similar intuition. However, his proof requires that the agents play pure strategies in the subgames in stage 2, which poses a technical problem: a pure-strategy equilibrium does not exist in many games. To eliminate this problem, Yamashita assumes equilibrium existence for all possible subgames in stage 2. Equivalently, any contracts that induce no pure-strategy Nash equilibrium in stage 2 are automatically excluded from the principals' choice sets in stage 1. This limits the scope of Yamashita's theorem, as illustrated by Example 1 in Section 2. In this example, market information is observed only by two agents who have diametrically opposed preferences. Thus, any contract offered by the principal defines a zero-sum game for the agents. Since pure-strategy Nash equilibria exist only in degenerate zero-sum games, no valid contract can extract the market information. As a result, the unique equilibrium allocation is inefficient.

In this paper, we allow for mixed strategies in stage 2. Under some usual technical conditions (i.e., compact message spaces and continuous contracts), a mixed-strategy Nash equilibrium always exists in stage 2 (Glicksberg's Theorem, see [2]). Hence, we do not need the equilibrium existence assumption in [8]. Equivalently, we do not place any restrictions on the set of contracts offered by the principals in stage 1. As a result, our folk theorem expands the scope of Yamashita's theorem. For instance, if we allow for mixed-strategy equilibria in Example 1, the principal can find a contract to elicit the market information. Consequently, an efficient equilibrium allocation exists.

It is worth noting that each principal in our contract game is allowed to offer only a single contract, whereas the equilibrium construction above requires each principal to offer a set of contracts. One methodological contribution of this paper is a way to embed such a set of contracts into one single valid contract.

One caveat of our equilibrium characterization is that it is complicated, because the min-max-min values are hard to compute. Peters and Troncoso-Valverde [6] provide a much simpler characterization, but they require a particular, complicated protocol for communication among the players.^5 We discuss these issues in Sections 5.4 and 5.5.

^5 The complicated communication protocol is rarely observed in most economic situations. In such cases, our folk theorem applies.

The remainder of the paper is organized as follows. An illustrating example is provided in Section 2. The model is defined in Section 3. We present the main result in Section 4. Section 5 concludes with discussions. Technical proofs can be found in Appendix A.
2 Pure-strategy implementation versus mixed-strategy implementation: an example

We use the following simple example to illustrate the major difference between our setup and that in Yamashita [8]: mixed strategies are allowed in our setup, while only pure strategies are valid in [8].

Example 1 There are two agents $i_1, i_2$, and one principal $j$.^6 The state is $\theta \in \Theta \equiv \{0, 1\}$, which is uniformly distributed. Suppose $i_1$ and $i_2$ observe $\theta$, but $j$ does not. Principal $j$ has to choose an action $a \in A \equiv \{-1, 0, 1\}$. The players are expected-utility maximizers, with Bernoulli utility functions defined as follows:

$$ u_j(a, \theta) = \begin{cases} 0, & \text{if } a = 0; \\ 2, & \text{if } |a| = 1 \text{ and } \theta = 1; \\ -3, & \text{if } |a| = 1 \text{ and } \theta = 0; \end{cases} \qquad u_{i_1}(a, \theta) = a; \qquad u_{i_2}(a, \theta) = -a. $$

^6 For simplicity, we consider only two agents and one principal. The example remains valid if we add additional payoff-irrelevant agents and principals (i.e., these agents do not observe the states, these principals do not take payoff-relevant actions, and their payoffs are always constant).

Following Yamashita [8], a valid contract for principal $j$ is a function $c_j : M_{i_1} \times M_{i_2} \rightarrow A$, where $M_{i_1}$ and $M_{i_2}$ are some exogenously given message spaces for $i_1$ and $i_2$, respectively. Upon receiving the contract, the agents send their messages and then the contract is executed.

Pure-strategy implementation. As in [8], the agents are required to play pure strategies, and we assume the existence of pure-strategy equilibria in the subgames defined by all the valid contracts of the principal. Equivalently, any contract that induces no pure-strategy Nash equilibrium in stage 2 is not available to $j$. For every $a \in A$, let $z_a$ denote the constant allocation in which $a$ is taken at both states by principal $j$. It is easy to draw the following observations.

(1) Only constant allocations can be implemented by pure-strategy equilibria.^7

(2) Principal $j$ strictly prefers $z_0$ to $z_1$ or $z_{-1}$.

By (1) and (2), $z_0$ is the unique equilibrium allocation, under which the players achieve a payoff of 0. We show below that $z_0$ is not ex-ante Pareto efficient.

^7 Suppose otherwise, i.e., an allocation with $a$ taken at $\theta = 0$ and $a' \neq a$ taken at $\theta = 1$ is implemented by some contract $c_j$ and pure-strategy Nash equilibria $(m_1, m_2)$ at $\theta = 0$ and $(m_1', m_2')$ at $\theta = 1$. Suppose $c_j(m_1, m_2') = a''$. Without loss of generality, suppose $a \succ_{i_1} a'$ and $a' \succ_{i_2} a$. By the IC of $i_2$ at state $\theta = 0$, we have $a \succsim_{i_2} a''$. Hence, we have $a' \succ_{i_2} a \succsim_{i_2} a''$. Furthermore, by the IC of $i_1$ at state $\theta = 1$, we have $a' \succsim_{i_1} a''$, which implies $a'' \succsim_{i_2} a'$, contradicting $a' \succ_{i_2} a \succsim_{i_2} a''$.

Mixed-strategy implementation. Suppose we allow the agents to play mixed strategies. Consider the contract $\zeta_j$ for principal $j$ defined as follows:^8

$$ \zeta_j: \quad \begin{array}{c|ccc} & m_2 & m_2' & m_2'' \\ \hline m_1 & 0 & 1 & -1 \\ m_1' & 0 & -1 & 1 \end{array} $$

^8 The contract $\zeta_j$ is not valid for principal $j$ in [8], because no pure-strategy Nash equilibrium exists in the game defined by $\zeta_j$.

The following strategy profile of the agents forms a mixed-strategy Nash equilibrium in the game defined by $\zeta_j$:

agent $i_1$: at both states, report $m_1$ and $m_1'$ with equal probability (i.e., $\tfrac{1}{2}$);

agent $i_2$: at state $\theta = 0$, report $m_2$ with probability 1; at state $\theta = 1$, report $m_2'$ and $m_2''$ with equal probability (i.e., $\tfrac{1}{2}$).

Under this equilibrium, the agents fully reveal the true states, and principal $j$ achieves the maximal ex-ante payoff $\tfrac{1}{2} \cdot 0 + \tfrac{1}{2} \cdot 2 = 1$, while both agents get an ex-ante payoff of 0. Therefore, this equilibrium allocation is Pareto efficient, and it Pareto dominates $z_0$.
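The indifference conditions behind this mixed-strategy equilibrium are easy to check numerically. The following Python sketch does so for the payoff specification and the contract matrix given above; it is only an illustration (verifying that no deviation to a pure message is profitable, which suffices in a finite game), not part of the formal argument.

```python
from itertools import product

# Example 1: actions a in {-1, 0, 1}; states theta in {0, 1}, each with probability 1/2.
def u_j(a, theta):                 # principal's payoff
    return 0 if a == 0 else (2 if theta == 1 else -3)

u_i1 = lambda a, theta: a          # agent i1's payoff
u_i2 = lambda a, theta: -a         # agent i2 has the opposite preference

# The contract zeta_j: (i1's message, i2's message) -> action
zeta = {("m1", "m2"): 0,  ("m1", "m2p"): 1,   ("m1", "m2pp"): -1,
        ("m1p", "m2"): 0, ("m1p", "m2p"): -1, ("m1p", "m2pp"): 1}

# Candidate equilibrium strategies (message -> probability), state by state.
sigma1 = {0: {"m1": 0.5, "m1p": 0.5}, 1: {"m1": 0.5, "m1p": 0.5}}
sigma2 = {0: {"m2": 1.0},             1: {"m2p": 0.5, "m2pp": 0.5}}

def expected(u, theta, s1, s2):
    return sum(p1 * p2 * u(zeta[(m1, m2)], theta)
               for (m1, p1), (m2, p2) in product(s1.items(), s2.items()))

for theta in (0, 1):               # no profitable unilateral deviation to any pure message
    eq1 = expected(u_i1, theta, sigma1[theta], sigma2[theta])
    eq2 = expected(u_i2, theta, sigma1[theta], sigma2[theta])
    assert all(expected(u_i1, theta, {m: 1.0}, sigma2[theta]) <= eq1 + 1e-9
               for m in ("m1", "m1p"))
    assert all(expected(u_i2, theta, sigma1[theta], {m: 1.0}) <= eq2 + 1e-9
               for m in ("m2", "m2p", "m2pp"))

# Principal's ex-ante payoff equals the full-information maximum 1/2*0 + 1/2*2 = 1.
print(0.5 * expected(u_j, 0, sigma1[0], sigma2[0]) +
      0.5 * expected(u_j, 1, sigma1[1], sigma2[1]))    # prints 1.0
```

Both agents are exactly indifferent over all of their messages at both states, which is why the randomization is a best response even though the underlying game between them is zero-sum.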
3 Setup

3.1 The contract game

There are two sets of players: the principals $\mathcal{J} = \{1, \ldots, J\}$ and the agents $\mathcal{I} = \{1, \ldots, I\}$. Each agent $i$ privately observes his type $\theta_i \in \Theta_i$, and each principal $j$ has to take an action $y_j \in Y_j$. Let $\Theta \equiv \prod_{i \in \mathcal{I}} \Theta_i$ and $Y \equiv \prod_{j \in \mathcal{J}} Y_j$. The Bernoulli utility functions of principal $j$ and agent $i$ are denoted by $v_j : Y \times \Theta \rightarrow \mathbb{R}$ and $u_i : Y \times \Theta \rightarrow \mathbb{R}$, respectively. We assume $J \geq 1$, $I \geq 2$, $|\Theta| < \infty$, $|Y| < \infty$, and $\theta \in \Theta$ is distributed according to a common prior $p \in \Delta(\Theta)$. Let $M_{ji}$ denote the set of messages that agent $i$ can send to principal $j$. Throughout this paper, we assume $M_{ji} = \mathbb{M} \equiv [0,1] \subseteq \mathbb{R}$ for every $(j, i) \in \mathcal{J} \times \mathcal{I}$.

A strategy of each principal $j$ is a valid contract, i.e., a continuous function $c_j : \prod_{i \in \mathcal{I}} M_{ji} \rightarrow \Delta(Y_j)$.^9 Let $\mathcal{C}_j$ denote $j$'s strategy set and $\mathcal{C} \equiv \prod_{j \in \mathcal{J}} \mathcal{C}_j$. A strategy of each agent $i$ is a (type, contract)-contingent signalling scheme, i.e., a function $S_i : \Theta_i \times \mathcal{C} \rightarrow \Delta(\prod_{j \in \mathcal{J}} M_{ji})$.^10 Let $\mathcal{S}_i$ denote $i$'s strategy set and $\mathcal{S} \equiv \prod_{i \in \mathcal{I}} \mathcal{S}_i$. That is, in this contract game, the principals first choose contracts $c = (c_j)_{j \in \mathcal{J}}$ in stage 1. Upon observing $c$ and $\theta_i$, each agent $i$ sends (possibly stochastic) messages to the principals in stage 2, and then the contracts are implemented. I.e., this is a 2-stage dynamic game.

^9 We require the codomain of $c_j$ to be $\Delta(Y_j)$ rather than $Y_j$, so that $c_j$ can be made continuous and Glicksberg's Theorem can be applied later.

^10 We implicitly embed mixed strategies of the agents into our model, i.e., a pure strategy for agent $i$ is a function from $\Theta_i \times \mathcal{C}$ to $\prod_{j \in \mathcal{J}} M_{ji}$, and a mixed strategy is a function from $\Theta_i \times \mathcal{C}$ to $\Delta(\prod_{j \in \mathcal{J}} M_{ji})$.

Each $c \in \mathcal{C}$ chosen by the principals in stage 1 defines a subgame for the agents in stage 2. In every such subgame, each agent $i$ chooses a behavior strategy $s_i : \Theta_i \rightarrow \Delta(\prod_{j \in \mathcal{J}} M_{ji})$ in the set $\mathcal{S}_i^{sub} \equiv \big[\Delta(\prod_{j \in \mathcal{J}} M_{ji})\big]^{\Theta_i}$. Define $\mathcal{S}^{sub} \equiv \prod_{i \in \mathcal{I}} \mathcal{S}_i^{sub}$. We use $S_i$ and $s_i$ to denote generic elements of $\mathcal{S}_i$ and $\mathcal{S}_i^{sub}$, respectively. Given $(c, S) \in \mathcal{C} \times \mathcal{S}$, we use $S_i(c) \in \mathcal{S}_i^{sub}$ to denote the behavior strategy induced by $S_i$ and $c$, i.e., $[S_i(c)](\theta_i) \equiv S_i(\theta_i, c)$.

Let $\phi(s, \theta) \in \Delta\big(\prod_{(j,i) \in \mathcal{J} \times \mathcal{I}} M_{ji}\big)$ denote the independent^11 distribution over message profiles induced by any $(s, \theta) \in \mathcal{S}^{sub} \times \Theta$, i.e.,

$$ \phi(s, \theta)\Big(\prod_{i \in \mathcal{I}} E_i\Big) \equiv \prod_{i \in \mathcal{I}} [s_i(\theta_i)](E_i), \qquad \text{for all measurable } E_i \subseteq \prod_{j \in \mathcal{J}} M_{ji}. $$

^11 Given each $\theta \in \Theta$, every agent $i$ takes his strategy $s_i(\theta_i) \in \Delta(\prod_{j \in \mathcal{J}} M_{ji})$ independently. As a result, the distribution $\phi(s, \theta)$ induced by $(s, \theta)$ is a distribution over $\prod_{(j,i) \in \mathcal{J} \times \mathcal{I}} M_{ji}$ which is independent across $i \in \mathcal{I}$.

Let $\psi(c, m)$ denote the independent distribution over action profiles induced by any $(c, m) = \big((c_j)_{j \in \mathcal{J}}, (m_{ji})_{(j,i) \in \mathcal{J} \times \mathcal{I}}\big)$, i.e.,

$$ \psi(c, m)\big[(y_j)_{j \in \mathcal{J}}\big] \equiv \prod_{j \in \mathcal{J}} c_j\big((m_{ji})_{i \in \mathcal{I}}\big)(y_j), \qquad \forall (y_j)_{j \in \mathcal{J}} \in Y. $$

Given $(c, s, \theta)$, the function $\Gamma(c, s, \theta, y)$ defined below calculates the probability that the action profile $y$ is taken by the principals:

$$ \Gamma(c, s, \theta, y) \equiv \int_{m \in \prod_{(j,i) \in \mathcal{J} \times \mathcal{I}} M_{ji}} \psi(c, m)(y) \, d\phi(s, \theta). $$

Define $V_j : \mathcal{C} \times \mathcal{S}^{sub} \rightarrow \mathbb{R}$ and $U_i : \mathcal{C} \times \mathcal{S}^{sub} \times \Theta_i \rightarrow \mathbb{R}$ as follows, where, given $(c, s)$, $V_j(c, s)$ and $U_i(c, s, \theta_i)$ denote the expected payoffs of principal $j$ and of agent $i$ who observes $\theta_i$, respectively:

$$ V_j(c, s) \equiv \sum_{\theta \in \Theta} \sum_{y \in Y} v_j(y, \theta)\, \Gamma(c, s, \theta, y)\, p(\theta); $$

$$ U_i(c, s, \theta_i) \equiv \sum_{\theta_{-i} \in \Theta_{-i}} \sum_{y \in Y} u_i\big(y, (\theta_i, \theta_{-i})\big)\, \Gamma\big(c, s, (\theta_i, \theta_{-i}), y\big)\, \frac{p(\theta_i, \theta_{-i})}{\sum_{\theta_{-i}' \in \Theta_{-i}} p(\theta_i, \theta_{-i}')}. $$
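For concreteness, the following Python sketch mirrors the order of integration in these definitions on a finite grid of messages (the model itself works with $M_{ji} = [0,1]$, so the integral over message profiles becomes a finite sum here). The container layouts for `c`, `s` and `p` are hypothetical conventions chosen for this sketch, not objects specified by the paper.

```python
from itertools import product

# Hypothetical data layout for this sketch:
#   c[j][m_profile] : dict over principal j's actions (a lottery on Y_j), where
#                     m_profile is the tuple of messages j receives, one per agent
#   s[i][theta_i]   : dict over agent i's message vectors (one message per principal),
#                     i.e. a lottery on prod_j M_ji
#   p[theta]        : common prior over type profiles theta (tuples indexed by agent)

def gamma(c, s, theta, y, agents, principals):
    """Gamma(c, s, theta, y): probability that action profile y is taken."""
    total = 0.0
    for combo in product(*[list(s[i][theta[i]].items()) for i in agents]):
        prob_m = 1.0
        for _, q in combo:                      # agents randomize independently (phi)
            prob_m *= q
        prob_y = 1.0
        for jx, j in enumerate(principals):     # principals' lotteries are independent (psi)
            m_j = tuple(mvec[jx] for mvec, _ in combo)
            prob_y *= c[j][m_j].get(y[jx], 0.0)
        total += prob_m * prob_y
    return total

def V(j, v_j, c, s, p, action_profiles, agents, principals):
    """V_j(c, s): principal j's ex-ante expected payoff for Bernoulli utility v_j(y, theta)."""
    return sum(p[theta] * v_j(y, theta) * gamma(c, s, theta, y, agents, principals)
               for theta in p for y in action_profiles)

# U_i(c, s, theta_i) is computed analogously, summing over theta_{-i} only and
# replacing p with the conditional prior p(. | theta_i).
```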
In the subgame defined by any $c \in \mathcal{C}$, the continuity of $(c_j)_{j \in \mathcal{J}}$ and the compactness of $\prod_{(j,i)} M_{ji}$ ensure that a (mixed-strategy) Nash equilibrium (NE) always exists, by Glicksberg's Theorem (see [2]). Let $NE(c) \subseteq \mathcal{S}^{sub}$ denote the set of Nash equilibria in the subgame defined by $c$, i.e.,

$$ NE(c) \equiv \Big\{ s \in \mathcal{S}^{sub} : U_i\big(c, (s_i, s_{-i}), \theta_i\big) \geq U_i\big(c, (s_i', s_{-i}), \theta_i\big), \; \forall i \in \mathcal{I}, \forall \theta_i \in \Theta_i, \forall s_i' \in \mathcal{S}_i^{sub} \Big\}. $$

Hence, $NE(c) \neq \emptyset$ for any $c \in \mathcal{C}$. Throughout the paper, for every $c \in \mathcal{C}$, fix some

$$ e(c) = \big(e_i(c)\big)_{i \in \mathcal{I}} \in NE(c). \quad (1) $$

We adopt the solution concept of subgame perfect Nash equilibrium (SPNE)^12 for this 2-stage contract game. Specifically, it is defined as follows.

^12 Throughout the paper, we use "SPNE" and "NE" to refer to equilibrium in the 2-stage contract game and in the stage-2 subgames, respectively.

Definition 1 $(c, S) = \big((c_j)_{j \in \mathcal{J}}, (S_i)_{i \in \mathcal{I}}\big) \in \mathcal{C} \times \mathcal{S}$ is a SPNE if

$$ V_j\big((c_j, c_{-j}), S(c_j, c_{-j})\big) \geq V_j\big((c_j', c_{-j}), S(c_j', c_{-j})\big), \qquad \forall j \in \mathcal{J}, \forall c_j' \in \mathcal{C}_j, \quad (2) $$

and $\big(S_i(c')\big)_{i \in \mathcal{I}} \in NE(c')$ for every $c' \in \mathcal{C}$.

3.2 Transformation of a mixed strategy through a homeomorphism

Let $A$ and $B$ denote two subsets of the message spaces such that $A$ is homeomorphic to $B$. Fix any homeomorphism $\xi : A \rightarrow B$. In our proofs below, we need to transform a mixed strategy on $A$ (i.e., $\alpha \in \Delta(A)$) into a mixed strategy on $B$ (i.e., $\beta \in \Delta(B)$) via the homeomorphism $\xi$. Specifically, for any $\alpha \in \Delta(A)$, define $\xi(\alpha) \in \Delta(B)$ as follows:

$$ [\xi(\alpha)](E) \equiv \alpha\big(\{a \in A : \xi(a) \in E\}\big), \qquad \text{for every measurable } E \subseteq B. $$

That is, $\xi$ renames the elements of $A$ as the homeomorphic elements of $B$, and $\xi(\alpha)$ is just $\alpha$, respecting this renaming.

4 Main result

4.1 Equilibrium allocations

A (deterministic) allocation is a function $z : \Theta \rightarrow Y$; let $z_j : \Theta \rightarrow Y_j$ denote the $j$th projection of $z$. As in [8], we say $z$ is incentive compatible iff

$$ \sum_{\theta_{-i} \in \Theta_{-i}} u_i\Big(\big(z_j(\theta_i, \theta_{-i})\big)_{j \in \mathcal{J}}, (\theta_i, \theta_{-i})\Big) p(\theta_i, \theta_{-i}) \;\geq\; \sum_{\theta_{-i} \in \Theta_{-i}} u_i\Big(\big(z_j(\theta_{ji}', \theta_{-i})\big)_{j \in \mathcal{J}}, (\theta_i, \theta_{-i})\Big) p(\theta_i, \theta_{-i}), \quad (3) $$

for every $i \in \mathcal{I}$, $\theta_i \in \Theta_i$, and $(\theta_{ji}')_{j \in \mathcal{J}} \in (\Theta_i)^{J}$.

We say an allocation $z$ is induced by $(c, s) \in \mathcal{C} \times \mathcal{S}^{sub}$ iff $\Gamma(c, s, \theta, z(\theta)) = 1$ for every $\theta \in \Theta$. Note that if $z$ is induced by $(c, s)$, then

$$ V_j(c, s) = \sum_{\theta \in \Theta} v_j\big(z(\theta), \theta\big) p(\theta), \qquad \forall j \in \mathcal{J}. \quad (4) $$

Let $Z^{IC}$ denote the set of incentive compatible allocations. Let $Z^{*}$ denote the set of allocations induced by SPNEs.^13 It is easy to show $Z^{*} \subseteq Z^{IC}$.^14 Define

$$ \underline{v}_j \equiv \inf_{c_{-j} \in \mathcal{C}_{-j}} \; \sup_{c_j \in \mathcal{C}_j} \; \min_{s \in NE(c_j, c_{-j})} V_j\big((c_j, c_{-j}), s\big), \qquad \forall j \in \mathcal{J}; \quad (5) $$

note that $\underline{v}_j$ is well-defined by Lemma 3 in Appendix A.1. However,

$$ \min_{c_{-j} \in \mathcal{C}_{-j}} \; \sup_{c_j \in \mathcal{C}_j} \; \min_{s \in NE(c_j, c_{-j})} V_j\big((c_j, c_{-j}), s\big) $$

may or may not exist. Define

$$ \overline{\mathcal{J}} \equiv \Big\{ j \in \mathcal{J} : \min_{c_{-j} \in \mathcal{C}_{-j}} \sup_{c_j \in \mathcal{C}_j} \min_{s \in NE(c_j, c_{-j})} V_j\big((c_j, c_{-j}), s\big) \text{ is well defined} \Big\}. $$

The following is our main result.

Theorem 1

$$ Z^{*} = \Big\{ z \in Z^{IC} : \sum_{\theta \in \Theta} v_j\big(z(\theta), \theta\big) p(\theta) \geq \underline{v}_j \text{ for every } j \in \overline{\mathcal{J}}; \; \sum_{\theta \in \Theta} v_j\big(z(\theta), \theta\big) p(\theta) > \underline{v}_j \text{ for every } j \notin \overline{\mathcal{J}} \Big\}. $$

^13 I.e., $Z^{*} \equiv \{\text{allocation } z : \text{there exists a SPNE } (c, S) \text{ such that } z \text{ is induced by } (c, S(c))\}$.

^14 The intuition is just the revelation principle. See also the proof of Lemma 1 in Yamashita [8, pp. 795-796].

Consider the following thought experiment: we ask each agent $i$ to report his type $\theta_i$, and require the principals to carry out their actions according to $z\big((\theta_i)_{i \in \mathcal{I}}\big)$, i.e., $j$ takes the action $z_j(\theta)$. Theorem 1 says that we can implement $z$ by a SPNE in the contract game defined in Section 3.1 iff, in the thought experiment, no agent finds it profitable to deviate from the truthful report and every principal $j$ achieves at least an expected payoff of $\underline{v}_j$.^15

^15 Different from [8], no principal in $\mathcal{J} \setminus \overline{\mathcal{J}}$ can achieve exactly the payoff $\underline{v}_j$ in any SPNE, due to the technical difficulty associated with the infinite-message mechanisms that we consider.

The "only if" (i.e., "$\subseteq$") direction of the proof of Theorem 1, which is similar to that in Yamashita [8], can be found in Appendix A.2. We focus on the "if" (i.e., "$\supseteq$") direction in Section 4.2.
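Before turning to the proof, the following Python sketch spells out the order of operations in definition (5) for a finite toy grid of contracts. The oracle `nash_equilibria` and the function `payoff` are hypothetical placeholders standing in for $NE(\cdot)$ and $V_j$; on a finite grid the outer infimum is attained, so the distinction between $\overline{\mathcal{J}}$ and its complement does not arise.

```python
# Nested structure of the min-max-min value in (5), on a finite toy grid.
# `nash_equilibria(c)` and `payoff(j, c, s)` are hypothetical stand-ins for
# NE(c) and V_j(c, s); neither is supplied by the paper.
def min_max_min_value(j, C_j, C_minus_j, nash_equilibria, payoff):
    return min(                                    # inf over the other principals' contracts
        max(                                       # sup over principal j's own contract
            min(payoff(j, (c_j, c_minus_j), s)     # min over NE of the induced subgame
                for s in nash_equilibria((c_j, c_minus_j)))
            for c_j in C_j)
        for c_minus_j in C_minus_j)
```

In the continuum model the inner minimum is well defined by Lemma 3, while whether the outer infimum is attained is exactly what determines membership in $\overline{\mathcal{J}}$.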
4.2 The proof of Theorem 1: the "if" direction

Throughout this subsection, we fix any $z^{*} \in Z^{IC}$ such that $\sum_{\theta \in \Theta} v_j(z^{*}(\theta), \theta)\, p(\theta) \geq \underline{v}_j$ for every $j \in \overline{\mathcal{J}}$ and $\sum_{\theta \in \Theta} v_j(z^{*}(\theta), \theta)\, p(\theta) > \underline{v}_j$ for every $j \notin \overline{\mathcal{J}}$. We will construct a SPNE $(c^{*}, S^{*})$ such that $z^{*}$ is induced by $(c^{*}, S^{*}(c^{*}))$. The following two lemmas will be used later; their proofs can be found in Appendices A.3 and A.4.

Lemma 1 There exists $(\tilde{c}, \tilde{s}) \in \mathcal{C} \times \mathcal{S}^{sub}$ such that $\tilde{s} \in NE(\tilde{c})$ and $z^{*}$ is induced by $(\tilde{c}, \tilde{s})$.

Lemma 2 For any $k \in \mathcal{J}$, there exists $c_{-k}^{k} = (c_j^{k})_{j \neq k} \in \mathcal{C}_{-k}$ such that for any $c_k \in \mathcal{C}_k$, we can find $s(c_k, c_{-k}^{k}) \in NE(c_k, c_{-k}^{k})$ satisfying

$$ V_k\big((c_k, c_{-k}^{k}), s(c_k, c_{-k}^{k})\big) \leq \sum_{\theta \in \Theta} v_k\big(z^{*}(\theta), \theta\big) p(\theta). \quad (6) $$

Lemma 1 says that $z^{*}$ is induced by $(\tilde{c}, \tilde{s})$ and that $\tilde{s}$ is a NE in the subgame defined by $\tilde{c}$. Lemma 2 says that, for any principal $k$, principals $-k$ have the punishing contracts $c_{-k}^{k}$ and the agents have the punishing NE $s(c_k, c_{-k}^{k})$ to deter principal $k$ from deviating from the allocation $z^{*}$. It is worth noting that the principals' punishing contracts $c_{-k}^{k}$ do not depend on the deviating strategy (i.e., $c_k \neq \tilde{c}_k$) chosen by principal $k$, while the agents' punishing NE $s(c_k, c_{-k}^{k})$ does.

4.2.1 A sketch of the proof

This contract game has two stages. In stage 1, the principals choose $c$, and in stage 2, the agents choose $s$. Ignoring the IC of the principals, Lemma 1 says that $z^{*}$ can be implemented by $(\tilde{c}, \tilde{s})$, and the IC of the agents is satisfied, i.e., $\tilde{s} \in NE(\tilde{c})$. To ensure the IC of the principals, we hypothetically modify the rules of the game. Suppose there is an extra stage between the two stages, which we call stage 1.5. In stage 1.5, every principal $j$ observes all the contracts chosen in stage 1, and then she revises her contract according to the following protocol^16 (a schematic rendering of this rule is given after the case list below): revise to $c_j^{k}$ if principal $k \neq j$ is the unique principal who deviates from the strategy profile $\tilde{c}$ defined in Lemma 1; do not revise otherwise. Lastly, based on the final contracts offered in stage 1.5, the agents choose $s$ in stage 2.

^16 In stage 1.5, the principals follow the protocol mechanically, i.e., they do not behave strategically.

Given the extra stage and the described protocol, the following strategy profile forms a SPNE which implements $z^{*}$.

A) Equilibrium path: in stage 1, the principals offer $\tilde{c}$; in stage 1.5, no revision occurs; in stage 2, the agents choose $\tilde{s}$.

B) Off-equilibrium path, case i): in stage 1, the principals offer $(c_k, \tilde{c}_{-k}) \neq \tilde{c}$; in stage 1.5, principals $-k$ revise their contracts to $c_{-k}^{k}$ and principal $k$ does not revise her contract; in stage 2, the agents choose $s(c_k, c_{-k}^{k})$.

C) Off-equilibrium path, other cases: in stage 1, the principals offer $c \neq \tilde{c}$ and two or more principals deviate from $\tilde{c}$; in stage 1.5, no revision occurs; in stage 2, the agents choose $e(c) \in NE(c)$ fixed in (1).
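The stage-1.5 revision rule described above is mechanical, so it can be written out directly. The following Python sketch is only an illustration of that rule; the dictionary layout (with `punishing[k][j]` standing for $c_j^{k}$) is a hypothetical convention of the sketch.

```python
def revise(j, offered, c_tilde, punishing):
    """Stage-1.5 revision rule for principal j (followed mechanically, not strategically).
    offered, c_tilde: dicts principal -> contract; punishing[k][j] stands for c^k_j."""
    deviators = [k for k in offered if offered[k] != c_tilde[k]]
    if len(deviators) == 1 and deviators[0] != j:
        return punishing[deviators[0]][j]   # unique deviator k != j: switch to c^k_j
    return offered[j]                       # otherwise keep the stage-1 contract
```

When principal $j$ herself is the unique deviator, the rule leaves her (deviating) stage-1 contract in place, exactly as in case B.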
Lemma 1 implies that $z^{*}$ is implemented on the equilibrium path. Furthermore, the IC of the principals is satisfied: if principal $k$ deviates unilaterally from $\tilde{c}$ in stage 1, principals $-k$ would choose the punishing contracts $c_{-k}^{k}$ in stage 1.5 and the agents would play the punishing NE $s(c_k, c_{-k}^{k})$ in stage 2. By (6) in Lemma 2, principal $k$ does not find it profitable to deviate from the equilibrium path in stage 1.

The problem with the scheme above is that stage 1.5 does not exist in our original contract game. In particular, the principals cannot observe who deviates in stage 1, even though they have the punishing contracts to deter deviation. Instead, it is the agents (in stage 2) who can observe the deviating principal. Furthermore, even if the agents inform the principals in stage 2 about who deviated in stage 1, the principals do not have a chance to revise their contracts after stage 1. Hence, it should be the agents who revise the contracts for the principals in stage 2. To achieve this without stage 1.5, roughly, in stage 1 we let each principal $j$ offer a set of contracts including $\tilde{c}_j$ and $c_j^{k}$ for every $k \in \mathcal{J} \setminus \{j\}$, and delegate the choice of contracts to the agents in stage 2: on the equilibrium path, in which no principal deviates, the agents choose $(\tilde{c}_j)_{j \in \mathcal{J}}$ for the principals and play $\tilde{s}$ to implement $z^{*}$; on the off-equilibrium path in which principal $k$ deviates unilaterally to $c_k$, the agents choose $c_{-k}^{k}$ for principals $-k$ and play $s(c_k, c_{-k}^{k})$ to punish principal $k$.

Still, we face one problem: the "set of contracts" described above is not a "valid" contract in the contract game defined in Section 3.1. We embed the set of contracts into one single valid contract in the next subsection.

4.2.2 The embedding and the extension

To define each $c_j^{*} : \prod_{i \in \mathcal{I}} M_{ji} \rightarrow \Delta(Y_j)$, we first break each $\prod_{i \in \mathcal{I}} M_{ji}$ into several parts, and then define $c_j^{*}$ on these parts separately. For any $E \subseteq \prod_{i \in \mathcal{I}} M_{ji}$, we use $c_j^{*}|_E$ to denote $c_j^{*}$ with the restricted domain $E$.^17

^17 I.e., $c_j^{*}|_E : E \rightarrow \Delta(Y_j)$ satisfies $c_j^{*}|_E(m) = c_j^{*}(m)$ for all $m \in E$.

Fix any $(a_0, a_1, \ldots, a_{2J}, a_{2J+1})$ such that $0 \leq a_0 < a_1 < \cdots < a_{2J} < a_{2J+1} \leq 1$. Recall $M_{ji} = \mathbb{M} = [0,1]$. Define $M^{k} = M_{ji}^{k} \equiv [a_{2k}, a_{2k+1}]$ for every $(j, i) \in \mathcal{J} \times \mathcal{I}$ and every $k \in \{0, 1, 2, \ldots, J\}$. That is, we fix $J+1$ disjoint closed intervals in $\mathbb{M}$, and the agents use messages in these disjoint intervals to tell the principals who deviates from the equilibrium path in stage 1: $m_{ji} \in M_{ji}^{0}$ means that no one deviates in stage 1, and $m_{ji} \in M_{ji}^{k}$ (with $k \neq 0$) means that principal $k$ deviates in stage 1.

For every $k \in \{0, 1, 2, \ldots, J\}$, $M^{k}$ is homeomorphic to $\mathbb{M}$; let $h^{k} : M^{k} \rightarrow \mathbb{M}$ be a homeomorphism. We use $(h^{k})^{-1}$ to denote the inverse of $h^{k}$. Furthermore, let $ID^{k} : M^{k} \rightarrow M^{k}$ be the identity function, i.e., $ID^{k}(m) = m$ for every $m \in M^{k}$. Since $M^{k}$ is closed, by the Tietze Extension Theorem (see [5, p. 219]), there exists a continuous function $g^{k} : \mathbb{M} \rightarrow M^{k}$ such that $g^{k}(m) = ID^{k}(m) = m$ for every $m \in M^{k}$. That is, $g^{k}$, while preserving the definition of $ID^{k}$ on $M^{k}$, extends the domain continuously to $\mathbb{M}$.

We describe the embedding in three cases.

A) The equilibrium path: the principals offer $c^{*}$ in stage 1. Recall $\tilde{s} \in NE(\tilde{c})$ from Lemma 1, i.e., for every $i \in \mathcal{I}$ and $\theta \in \Theta$, we have

$$ U_i\big(\tilde{c}, (\tilde{s}_i, \tilde{s}_{-i}), \theta_i\big) \geq U_i\big(\tilde{c}, (s_i', \tilde{s}_{-i}), \theta_i\big), \qquad \forall s_i' \in \Big[\Delta\Big(\prod_{j \in \mathcal{J}} M_{ji}\Big)\Big]^{\Theta_i}. \quad (7) $$

For every $j \in \mathcal{J}$, define $c_j^{*}|_{\prod_{i \in \mathcal{I}} M_{ji}^{0}} : \prod_{i \in \mathcal{I}} M_{ji}^{0} \rightarrow \Delta(Y_j)$ as follows:

$$ c_j^{*}|_{\prod_{i \in \mathcal{I}} M_{ji}^{0}}\big((m_{ji})_{i \in \mathcal{I}}\big) \equiv \tilde{c}_j\big((h^{0}(m_{ji}))_{i \in \mathcal{I}}\big), \qquad \forall (m_{ji})_{i \in \mathcal{I}} \in \prod_{i \in \mathcal{I}} M_{ji}^{0}. $$

I.e., on the equilibrium path, the agents report messages in $M_{ji}^{0}$. Upon receiving the messages, the principals translate them into messages in $M_{ji}$ via the homeomorphism $h^{0}$, and then implement $\tilde{c}$.

For every $i \in \mathcal{I}$, consider the homeomorphism $H^{0} : \prod_{j \in \mathcal{J}} M_{ji} \rightarrow \prod_{j \in \mathcal{J}} M_{ji}^{0}$ defined as follows:

$$ H^{0}\big((m_{ji})_{j \in \mathcal{J}}\big) \equiv \big((h^{0})^{-1}(m_{ji})\big)_{j \in \mathcal{J}}, \qquad \forall (m_{ji})_{j \in \mathcal{J}} \in \prod_{j \in \mathcal{J}} M_{ji}. $$

For every $\theta_i \in \Theta_i$, note that $\tilde{s}_i(\theta_i) \in \Delta(\prod_{j \in \mathcal{J}} M_{ji})$ and, via the transformation in Section 3.2, $H^{0}(\tilde{s}_i(\theta_i)) \in \Delta(\prod_{j \in \mathcal{J}} M_{ji}^{0})$. That is, each agent $i$ encodes his messages via $(h^{0})^{-1}$ and transforms the strategy $\tilde{s}_i$ into $H^{0}(\tilde{s}_i)$, while each principal $j$ decodes messages via $h^{0}$ and transforms the strategy $H^{0}(\tilde{s}_i)$ back into $\tilde{s}_i$ before executing the contract $\tilde{c}_j$. As implied by (7), for every $i \in \mathcal{I}$ and $\theta \in \Theta$, we have

$$ U_i\big(c^{*}, (H^{0}(\tilde{s}_i), H^{0}(\tilde{s}_{-i})), \theta_i\big) \geq U_i\big(c^{*}, (s_i', H^{0}(\tilde{s}_{-i})), \theta_i\big), \qquad \forall s_i' \in \Big[\Delta\Big(\prod_{j \in \mathcal{J}} M_{ji}^{0}\Big)\Big]^{\Theta_i}. \quad (8) $$

On the equilibrium path, if we hypothetically required the agents to report messages in $M_{ji}^{0}$, (8) would imply that $H^{0}(\tilde{s})$ is a NE in the subgame defined by $c^{*}$. However, agent $i$ may deviate and report messages outside $M_{ji}^{0}$. To make $H^{0}(\tilde{s})$ remain a NE without this hypothetical requirement, we extend $c_j^{*}|_{\prod_{i \in \mathcal{I}} M_{ji}^{0}}$ to $c_j^{*}|_{M_{ji} \times \prod_{i' \neq i} M_{ji'}^{0}}$ as follows:

$$ c_j^{*}|_{M_{ji} \times \prod_{i' \neq i} M_{ji'}^{0}}\big(m_{ji}, (m_{ji'})_{i' \neq i}\big) \equiv c_j^{*}|_{\prod_{i \in \mathcal{I}} M_{ji}^{0}}\big(g^{0}(m_{ji}), (m_{ji'})_{i' \neq i}\big), \qquad \forall \big(m_{ji}, (m_{ji'})_{i' \neq i}\big) \in M_{ji} \times \prod_{i' \neq i} M_{ji'}^{0}. $$

I.e., any unilateral deviation to a message outside $M_{ji}^{0}$ is first translated into a message inside $M_{ji}^{0}$ via $g^{0}$. As implied by (8), $H^{0}(\tilde{s})$ remains a NE in the subgame defined by $c^{*}$.
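In schematic form, the on-path piece of $c_j^{*}$ constructed in case A works as in the following Python sketch; cases B and C below follow the same pattern with $M^{k}$, $h^{k}$, $g^{k}$ and the Tietze extension. The functions `h0`, `g0` and `c_tilde_j` are placeholders for the objects defined above, and their concrete implementations are not specified here.

```python
def c_star_j_on_path(messages, h0, g0, c_tilde_j):
    """Restriction of c*_j used on the equilibrium path.
    messages: dict agent -> message in [0, 1], with at most one agent off-block;
    h0: homeomorphism from M^0 onto M; g0: continuous retraction of M onto M^0."""
    clipped = {i: g0(m) for i, m in messages.items()}   # pull a stray message back into M^0
    decoded = {i: h0(m) for i, m in clipped.items()}    # translate M^0-messages back into M
    return c_tilde_j(decoded)                           # then run the equilibrium contract
```

Because $g^{0}$ is the identity on $M^{0}$, applying it to every coordinate coincides, on the relevant domain where at most one agent deviates outside $M^{0}$, with the extension defined above.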
B) The off-equilibrium path with a unilateral deviation: in stage 1, suppose principal $k$ offers $c_k \neq c_k^{*}$ and principals $-k$ offer $c_{-k}^{*}$. By Lemma 2, $s \equiv s(c_k, c_{-k}^{k}) \in NE(c_k, c_{-k}^{k})$, i.e., for every $i \in \mathcal{I}$ and $\theta \in \Theta$, we have

$$ U_i\big((c_k, c_{-k}^{k}), (s_i, s_{-i}), \theta_i\big) \geq U_i\big((c_k, c_{-k}^{k}), (s_i', s_{-i}), \theta_i\big), \qquad \forall s_i' \in \Big[\Delta\Big(\prod_{j \in \mathcal{J}} M_{ji}\Big)\Big]^{\Theta_i}. \quad (9) $$

For every $j \in \mathcal{J} \setminus \{k\}$, define $c_j^{*}|_{\prod_{i \in \mathcal{I}} M_{ji}^{k}} : \prod_{i \in \mathcal{I}} M_{ji}^{k} \rightarrow \Delta(Y_j)$ as follows:

$$ c_j^{*}|_{\prod_{i \in \mathcal{I}} M_{ji}^{k}}\big((m_{ji})_{i \in \mathcal{I}}\big) \equiv c_j^{k}\big((h^{k}(m_{ji}))_{i \in \mathcal{I}}\big), \qquad \forall (m_{ji})_{i \in \mathcal{I}} \in \prod_{i \in \mathcal{I}} M_{ji}^{k}. $$

I.e., the agents report messages in $M_{ji}^{k}$, and principals $-k$ translate them into messages in $M_{ji}$ via $h^{k}$ before implementing $c_{-k}^{k}$.

For every $i \in \mathcal{I}$, consider the homeomorphism $H^{k} : M_{ki} \times \prod_{j \neq k} M_{ji} \rightarrow M_{ki} \times \prod_{j \neq k} M_{ji}^{k}$ defined as follows:

$$ H^{k}\big(m_{ki}, (m_{ji})_{j \neq k}\big) \equiv \big(m_{ki}, ((h^{k})^{-1}(m_{ji}))_{j \neq k}\big), \qquad \forall \big(m_{ki}, (m_{ji})_{j \neq k}\big) \in M_{ki} \times \prod_{j \neq k} M_{ji}. $$

For every $\theta_i \in \Theta_i$, note that $s_i(\theta_i) \in \Delta(\prod_{j \in \mathcal{J}} M_{ji})$ and $H^{k}(s_i(\theta_i)) \in \Delta(M_{ki} \times \prod_{j \neq k} M_{ji}^{k})$. That is, each agent $i$ encodes the messages to principals $-k$ via $(h^{k})^{-1}$ and transforms the strategy $s_i$ into $H^{k}(s_i)$, while principals $-k$ decode messages via $h^{k}$ and transform the strategy $H^{k}(s_i)$ back into $s_i$ before executing the contracts $c_{-k}^{k}$. As implied by (9), for every $i \in \mathcal{I}$ and $\theta \in \Theta$, we have

$$ U_i\big((c_k, c_{-k}^{*}), (H^{k}(s_i), H^{k}(s_{-i})), \theta_i\big) \geq U_i\big((c_k, c_{-k}^{*}), (s_i', H^{k}(s_{-i})), \theta_i\big), \qquad \forall s_i' \in \Big[\Delta\Big(M_{ki} \times \prod_{j \neq k} M_{ji}^{k}\Big)\Big]^{\Theta_i}. \quad (10) $$

If we hypothetically required the agents to report messages in $M_{ji}^{k}$ to each principal $j \neq k$, (10) would imply that $H^{k}(s)$ is a NE in the subgame defined by $(c_k, c_{-k}^{*})$. However, agent $i$ may deviate and report messages outside $M_{ji}^{k}$. To make $H^{k}(s)$ remain a NE without this hypothetical requirement, we extend $c_j^{*}|_{\prod_{i \in \mathcal{I}} M_{ji}^{k}}$ to $c_j^{*}|_{M_{ji} \times \prod_{i' \neq i} M_{ji'}^{k}}$ as follows:

$$ c_j^{*}|_{M_{ji} \times \prod_{i' \neq i} M_{ji'}^{k}}\big(m_{ji}, (m_{ji'})_{i' \neq i}\big) \equiv c_j^{*}|_{\prod_{i \in \mathcal{I}} M_{ji}^{k}}\big(g^{k}(m_{ji}), (m_{ji'})_{i' \neq i}\big), \qquad \forall \big(m_{ji}, (m_{ji'})_{i' \neq i}\big) \in M_{ji} \times \prod_{i' \neq i} M_{ji'}^{k}. $$

I.e., any unilateral deviation to a message outside $M_{ji}^{k}$ is first translated into a message inside $M_{ji}^{k}$ via $g^{k}$. As implied by (10), $H^{k}(s)$ remains a NE in the subgame defined by $(c_k, c_{-k}^{*})$.

C) The other off-equilibrium paths: in the above, we have defined each $c_j^{*}$ on the restricted domain $\bigcup_{k \in \{0, 1, \ldots, J\}} \bigcup_{i \in \mathcal{I}} \big(M_{ji} \times \prod_{i' \neq i} M_{ji'}^{k}\big)$. Since this restricted domain is closed and $c_j^{*}$ is continuous on it, by the Tietze Extension Theorem there exists a continuous function $c_j^{*} : \prod_{i \in \mathcal{I}} M_{ji} \rightarrow \Delta(Y_j)$ such that

$$ c_j^{*}(m) = c_j^{*}|_{\bigcup_{k} \bigcup_{i} (M_{ji} \times \prod_{i' \neq i} M_{ji'}^{k})}(m), \qquad \forall m \in \bigcup_{k} \bigcup_{i} \Big(M_{ji} \times \prod_{i' \neq i} M_{ji'}^{k}\Big), $$

which completes the definition of $c_j^{*}$.

4.2.3 The definition of $S^{*}$

Finally, we define $S^{*}$ as follows:

$$ S^{*}(c^{*}) \equiv H^{0}(\tilde{s}); \qquad S^{*}\big((c_k, c_{-k}^{*})\big) \equiv H^{k}\big(s(c_k, c_{-k}^{k})\big), \; \forall k \in \mathcal{J}, \forall c_k \in \mathcal{C}_k \setminus \{c_k^{*}\}; \qquad S^{*}(c) \equiv e(c) \text{ otherwise}; $$

where $e(c) \in NE(c)$ is fixed for every $c \in \mathcal{C}$ in (1). This completes the definition of $(c^{*}, S^{*})$. Clearly, $(c^{*}, S^{*})$ is a SPNE as argued above, and $z^{*}$ is induced by $(c^{*}, S^{*}(c^{*}))$.

5 Discussions

We conclude the paper with several discussions.

5.1 Finite messages versus infinite messages

Unlike Yamashita [8], we have to consider infinite message spaces. In our proof of the "if" direction of Theorem 1, a message $m_{ji}$ sent from agent $i$ to principal $j$ plays two roles: i) it recommends a contract $c_j$ for principal $j$, and ii) it reports $i$'s "message" when $c_j$ is adopted. That is, we have to establish a surjective function from the message set $M_{ji}$ onto $L \times M_{ji}$, where $L$ denotes the set of potential contracts recommended by $i$. Clearly, such a surjective function exists only if $M_{ji}$ is an infinite set.

Nevertheless, it is without loss of generality to focus on infinite-message mechanisms. Let $\mathcal{Q}_j$ denote the set of all finite-message mechanisms for principal $j$, i.e., $\mathcal{Q}_j \equiv \{ q_j : \prod_{i \in \mathcal{I}} W_{ji} \rightarrow \Delta(Y_j) : |W_{ji}| < \infty, \forall i \in \mathcal{I} \}$. Although $\mathcal{Q}_j \not\subseteq \mathcal{C}_j$, every element of $\mathcal{Q}_j$ can be represented by some element of $\mathcal{C}_j$: for any allocation $z$ that is induced by some $q \in \prod_{j \in \mathcal{J}} \mathcal{Q}_j$ and $s \in NE(q)$, there exist $c \in \prod_{j \in \mathcal{J}} \mathcal{C}_j$ and $s' \in NE(c)$ such that $z$ is induced by $(c, s')$.^18 Thus, any allocation which is implementable via finite-message mechanisms can be implemented via infinite-message mechanisms.

^18 The proof is similar to the proof of Lemma 1 and is omitted.

5.2 Stochastic equilibrium allocations

For notational ease, we focus on deterministic equilibrium allocations throughout the paper. With some modification, our result also applies to stochastic equilibrium allocations. A stochastic allocation is a function $\beta : \Theta \rightarrow \Delta(Y)$. We say a stochastic allocation $\beta$ is independent if $\beta(\theta)$ is an independent distribution over $Y$ for every $\theta$; otherwise, we say that $\beta$ is correlated. Recall that we use a "direct mechanism" on the equilibrium path to implement a potential equilibrium allocation.
However, an implicit assumption of our model is that the principals take actions independently for each profile of messages received. Consequently, a stochastic allocation can be implemented on the equilibrium path if and only if it is independent. Hence, Theorem 1 remains true for independent stochastic equilibrium allocations. For correlated stochastic equilibrium allocations, our characterization remains true if we endow the principals with the correlation-generating device used in Kalai, Kalai, Lehrer and Samet [3] and Peters and Troncoso-Valverde [6], so that, for each profile of messages received, the principals are induced by the correlation device to take correlated actions.

5.3 Szentes' critique

Following the setup of Yamashita [8], Szentes [7] constructs a complete-information example in which a principal cannot even achieve her min-max utility. The problem, according to [7], is that the principals delegate their actions to the agents in [8]. Szentes then modifies Yamashita's model to a no-delegation model, and proves a folk theorem under the complete-information setup.^19 In Yamashita's setup, the principals are restricted to choosing deterministic contracts. If we allow for random contracts, every principal in Szentes' example is able to achieve her min-max value. Hence, our model, which allows for random contracts, is immune to the critique raised by Szentes [7].

^19 It remains an open question to extend Szentes' folk theorem to incomplete-information setups.

5.4 A caveat and a re-interpretation

A caveat of our theorem is that $\underline{v}_j$ defined in (5) is difficult to compute, which seems to make Theorem 1 hard to use. However, the following result shows that we can apply Theorem 1 without knowing $\underline{v}_j$.

Theorem 2 For any $z \in Z^{*}$, we have

$$ \Big\{ z' \in Z^{IC} : \sum_{\theta \in \Theta} v_j\big(z'(\theta), \theta\big) p(\theta) \geq \sum_{\theta \in \Theta} v_j\big(z(\theta), \theta\big) p(\theta), \; \forall j \in \mathcal{J} \Big\} \subseteq Z^{*}. $$

Theorem 2 says that any IC allocation which dominates another SPNE allocation in terms of the principals' payoffs can be implemented by a SPNE. Though Theorem 2 is immediately implied by Theorem 1, the arguments in Section 4.2 provide a constructive way to prove Theorem 2 directly. Suppose $z \in Z^{*}$ is implemented by a SPNE $(c, S)$. Then, $c_{-k}$ can be used as punishing contracts for principals $-k$ to deter principal $k$ from deviating from the equilibrium path for $z'$, i.e.,

$$ \sum_{\theta \in \Theta} v_k\big(z'(\theta), \theta\big) p(\theta) \geq \sum_{\theta \in \Theta} v_k\big(z(\theta), \theta\big) p(\theta) = V_k\big(c, S(c)\big) \geq V_k\big((c_k', c_{-k}), S(c_k', c_{-k})\big), \qquad \forall c_k' \in \mathcal{C}_k, $$

where the last inequality follows from the fact that $(c, S)$ is a SPNE. Therefore, with $c_{-k}^{k}$ replaced by $c_{-k}$, the argument in Section 4.2 proves Theorem 2. We use the following example to illustrate the idea.

Example 2 (A Public-Good Project) Multiple principals decide whether to participate in a public-good project and how much to invest if they participate. The project can be completed if and only if every principal participates. Furthermore, multiple agents privately observe the state, which determines the payoffs of the project to all the players. Clearly, it is a SPNE that no principal participates, and all players get 0 in this equilibrium. Furthermore, by Theorem 2, any IC allocation in which the principals get non-negative payoffs can be implemented by a SPNE.^20

^20 To implement such an allocation, the principals use the direct mechanism on the equilibrium path; if any principal unilaterally deviates, the agents recommend non-participation for the other principals.

5.5 Comparison to Peters and Troncoso-Valverde [6]

We differ from Yamashita [8] only in the strategy spaces: we allow for mixed strategies, but [8] does not. We differ from Peters and Troncoso-Valverde [6] in three respects.
First, as in Yamashita [8], we follow the classical setup in contract theory, in which minimal communication is required: the principals first offer the contracts, i.e., functions from (exogenously given) message spaces to action spaces; the agents then send their messages, and the contracts are executed. However, Peters and Troncoso-Valverde [6] require a particular procedure of two-stage communication among all players. Due to this difference, our equilibrium characterization is more complicated than that in [6]. Hence, an alternative way to interpret [6] is that it finds a communication protocol which induces a simple and elegant characterization of equilibrium allocations.

Second, we adopt the solution concept of SPNE, while Bayesian Nash equilibrium is adopted in [6]. That is, different from [6], we require that the agents' equilibrium strategy profile remain a continuation equilibrium in the subgame defined by any contract profile offered by the principals.^21 Due to this difference, we define $\underline{v}_j$ as a min-max-min value, while [6] defines $\underline{v}_j$ as a min-max value, where the extra "min" takes care of all potential equilibria in the subgames.

^21 See more discussion in Peters and Troncoso-Valverde [6, Section 7].

Finally, in Peters and Troncoso-Valverde [6], every player is both a principal and an agent. Through the two-stage communication procedure, all players observe anyone who deviates from the equilibrium path, and they can revise their "equilibrium contracts" to "punishing contracts" so as to punish the deviator. However, in Yamashita [8] and this paper, though the principals are endowed with "punishing contracts," they have to offer their contracts before observing the deviating principal, and they cannot revise their contracts later. Furthermore, the agents, who observe the deviating principal, are not endowed with "punishing contracts." Hence, it is the agents who tell the principals how to revise their "equilibrium contracts" to "punishing contracts." Thus, different from [6], we overcome a conceptual and technical obstacle: the transmission of information regarding the deviating principal from the agents (who observe the deviator but are not endowed with "punishing contracts") to the principals (who do not observe the deviator but are endowed with "punishing contracts").

A Appendix

A.1 Lemma 3

Lemma 3 $\min_{s \in NE(c)} V_j(c, s)$ is well defined for every $(j, c) \in \mathcal{J} \times \mathcal{C}$.

Proof. Fix any $(j, c) \in \mathcal{J} \times \mathcal{C}$. Suppose $\inf_{s \in NE(c)} V_j(c, s) = \alpha$. Then, for any positive integer $n$, there exists $s^{n} \in NE(c)$ such that $V_j(c, s^{n}) \leq \alpha + \tfrac{1}{n}$. Since $\mathcal{S}^{sub}$ is compact, $(s^{n})$ has a convergent subsequence. With abuse of notation, let $(s^{n})$ denote this convergent subsequence, i.e., $s^{n} \rightarrow \hat{s}$ for some $\hat{s} \in \mathcal{S}^{sub}$.
Note that $\Delta(\prod_{j \in \mathcal{J}} M_{ji})$ is endowed with the weak topology and $\mathcal{S}_i^{sub} = [\Delta(\prod_{j \in \mathcal{J}} M_{ji})]^{\Theta_i}$ is endowed with the product topology. Furthermore, for any $(j, i, \theta_i)$, both $V_j(c, \cdot)$ and $U_i(c, \cdot, \theta_i)$ are continuous on $\mathcal{S}^{sub}$.^22 Consequently, the Nash-equilibrium correspondence is upper hemi-continuous, i.e., $s^{n} \in NE(c)$ for all $n$ and $s^{n} \rightarrow \hat{s}$ imply $\hat{s} \in NE(c)$. Hence,

$$ V_j(c, \hat{s}) \geq \inf_{s \in NE(c)} V_j(c, s) = \alpha. \quad (11) $$

Furthermore, $V_j(c, s^{n}) \leq \alpha + \tfrac{1}{n}$ for all $n$ and $s^{n} \rightarrow \hat{s}$ imply

$$ V_j(c, \hat{s}) = \lim_{n \rightarrow \infty} V_j(c, s^{n}) \leq \lim_{n \rightarrow \infty} \Big(\alpha + \frac{1}{n}\Big) = \alpha. \quad (12) $$

(11) and (12) imply $V_j(c, \hat{s}) = \alpha = \inf_{s \in NE(c)} V_j(c, s)$. Finally, $\hat{s} \in NE(c)$ implies that $\min_{s \in NE(c)} V_j(c, s)$ is well defined.

^22 This is immediately implied by the definition of the weak topology and the fact that $c$ is continuous.

A.2 Proof of Theorem 1: the "only if" direction

Fix any allocation $z \in Z^{*}$, i.e., there exists a SPNE $(c, S)$ such that $z$ is induced by $(c, S(c))$. For notational ease, let $s$ denote $S(c)$. Hence, by (4), we have

$$ V_j(c, s) = \sum_{\theta \in \Theta} v_j\big(z(\theta), \theta\big) p(\theta), \qquad \forall j \in \mathcal{J}. $$

We now show that

$$ \sum_{\theta \in \Theta} v_j\big(z(\theta), \theta\big) p(\theta) \geq \underline{v}_j \text{ if } j \in \overline{\mathcal{J}}; \qquad \sum_{\theta \in \Theta} v_j\big(z(\theta), \theta\big) p(\theta) > \underline{v}_j \text{ if } j \notin \overline{\mathcal{J}}. $$

Suppose otherwise, i.e., there are two cases: 1) there exists $j \in \overline{\mathcal{J}}$ such that $\sum_{\theta \in \Theta} v_j(z(\theta), \theta)\, p(\theta) < \underline{v}_j$; 2) there exists $j \notin \overline{\mathcal{J}}$ such that $\sum_{\theta \in \Theta} v_j(z(\theta), \theta)\, p(\theta) \leq \underline{v}_j$.

In case 1),

$$ V_j(c, s) = \sum_{\theta \in \Theta} v_j\big(z(\theta), \theta\big) p(\theta) < \underline{v}_j = \inf_{c_{-j}' \in \mathcal{C}_{-j}} \sup_{c_j' \in \mathcal{C}_j} \min_{s' \in NE(c_j', c_{-j}')} V_j\big((c_j', c_{-j}'), s'\big) \leq \sup_{c_j' \in \mathcal{C}_j} \min_{s' \in NE(c_j', c_{-j})} V_j\big((c_j', c_{-j}), s'\big). $$

In case 2),

$$ V_j(c, s) = \sum_{\theta \in \Theta} v_j\big(z(\theta), \theta\big) p(\theta) \leq \underline{v}_j = \inf_{c_{-j}' \in \mathcal{C}_{-j}} \sup_{c_j' \in \mathcal{C}_j} \min_{s' \in NE(c_j', c_{-j}')} V_j\big((c_j', c_{-j}'), s'\big) < \sup_{c_j' \in \mathcal{C}_j} \min_{s' \in NE(c_j', c_{-j})} V_j\big((c_j', c_{-j}), s'\big), $$

where the last inequality is strict because $j \notin \overline{\mathcal{J}}$, i.e., $\arg\min_{c_{-j}' \in \mathcal{C}_{-j}} \sup_{c_j' \in \mathcal{C}_j} \min_{s' \in NE(c_j', c_{-j}')} V_j((c_j', c_{-j}'), s')$ does not exist, so the infimum is not attained at $c_{-j}$.

Therefore, in both cases, we have

$$ V_j(c, s) < \sup_{c_j' \in \mathcal{C}_j} \min_{s' \in NE(c_j', c_{-j})} V_j\big((c_j', c_{-j}), s'\big), $$

which implies that there exists $\hat{c}_j \in \mathcal{C}_j$ such that

$$ V_j(c, s) < \min_{s' \in NE(\hat{c}_j, c_{-j})} V_j\big((\hat{c}_j, c_{-j}), s'\big) \leq V_j\big((\hat{c}_j, c_{-j}), S(\hat{c}_j, c_{-j})\big). $$

I.e., $V_j\big((\hat{c}_j, c_{-j}), S(\hat{c}_j, c_{-j})\big) > V_j\big((c_j, c_{-j}), S(c_j, c_{-j})\big)$, contradicting the fact that $(c, S)$ is an equilibrium (i.e., (2)).

A.3 Proof of Lemma 1

Lemma 1 There exists $(\tilde{c}, \tilde{s}) \in \mathcal{C} \times \mathcal{S}^{sub}$ such that $\tilde{s} \in NE(\tilde{c})$ and $z^{*}$ is induced by $(\tilde{c}, \tilde{s})$.

Proof. Recall $z^{*} \in Z^{IC}$. Suppose every principal $j$ offers the direct mechanism $z_j^{*}$ (where $z^{*} = (z_j^{*})_{j \in \mathcal{J}}$). Since $z^{*}$ is incentive compatible (see (3)), truthful reporting by the agents is a NE and $z^{*}$ is implemented. That is, (3) is equivalent to

$$ \sum_{\theta_{-i} \in \Theta_{-i}} u_i\Big(\big(z_j^{*}(\theta_i, \theta_{-i})\big)_{j \in \mathcal{J}}, (\theta_i, \theta_{-i})\Big) p(\theta_i, \theta_{-i}) \;\geq\; \int \Bigg[ \sum_{\theta_{-i} \in \Theta_{-i}} u_i\Big(\big(z_j^{*}(\theta_{ji}', \theta_{-i})\big)_{j \in \mathcal{J}}, (\theta_i, \theta_{-i})\Big) p(\theta_i, \theta_{-i}) \Bigg] d\rho\big((\theta_{ji}')_{j \in \mathcal{J}}\big), \quad (13) $$

for every $i \in \mathcal{I}$, $\theta_i \in \Theta_i$, and $\rho \in \Delta\big((\Theta_i)^{J}\big)$.

However, $z_j^{*}$ is not a "valid" contract, i.e., it is not a continuous function from $\prod_{i \in \mathcal{I}} M_{ji}$ to $\Delta(Y_j)$. We thus need to transform every $z_j^{*}$ into a valid contract.

First, based on $z_j^{*} : \prod_{i \in \mathcal{I}} \Theta_i \rightarrow Y_j$, consider the continuous function $\Gamma_{z_j^{*}} : \prod_{i \in \mathcal{I}} \Delta(\Theta_i) \rightarrow \Delta(Y_j)$ defined as follows:

$$ \Gamma_{z_j^{*}}\big((\mu_i)_{i \in \mathcal{I}}\big)(y_j) \equiv \sum_{\theta \in \Theta : \, z_j^{*}(\theta) = y_j} \; \prod_{i \in \mathcal{I}} \mu_i(\theta_i), \qquad \forall (\mu_i)_{i \in \mathcal{I}} \in \prod_{i \in \mathcal{I}} \Delta(\Theta_i), \; \forall y_j \in Y_j. $$

That is, under the direct mechanism $z_j^{*}$, the agents report deterministic types, while $\Gamma_{z_j^{*}}$ is the stochastic direct mechanism induced by $z_j^{*}$, in which the agents report stochastic types. Clearly, truthful reporting remains a NE under $\Gamma_{z_j^{*}}$.

Second, for any $a$, we use $\delta_a$ to denote the Dirac measure on $\{a\}$. For every $i \in \mathcal{I}$, fix any injective function $\iota_i : \Theta_i \rightarrow M_{ji}$. We use $\iota_i(\theta_i)$ to represent $\theta_i$, i.e., by reporting $\iota_i(\theta_i) \in M_{ji}$, agent $i$ informs principal $j$ of $i$'s type $\theta_i$. Consider the continuous function $\Lambda_j : \{(\iota_i(\theta_i))_{i \in \mathcal{I}} : \theta \in \Theta\} \rightarrow \prod_{i \in \mathcal{I}} \Delta(\Theta_i)$,

$$ \Lambda_j\big((\iota_i(\theta_i))_{i \in \mathcal{I}}\big) \equiv (\delta_{\theta_i})_{i \in \mathcal{I}}, \qquad \forall \theta \in \Theta. $$

Since $\{(\iota_i(\theta_i))_{i \in \mathcal{I}} : \theta \in \Theta\}$ is a closed subset of $\prod_{i \in \mathcal{I}} M_{ji}$, by the Tietze Extension Theorem, there exists a continuous function $\overline{\Lambda}_j : \prod_{i \in \mathcal{I}} M_{ji} \rightarrow \prod_{i \in \mathcal{I}} \Delta(\Theta_i)$ such that

$$ \overline{\Lambda}_j\big((\iota_i(\theta_i))_{i \in \mathcal{I}}\big) = \Lambda_j\big((\iota_i(\theta_i))_{i \in \mathcal{I}}\big), \qquad \forall \theta \in \Theta. $$

That is, $\iota_i(\theta_i) \in M_{ji}$ is used by agent $i$ to inform principal $j$ about $i$'s type $\theta_i$, and any other message in $M_{ji}$ reported by $i$ would suggest a stochastic type in $\Delta(\Theta_i)$.

Third, define $(\tilde{c}, \tilde{s})$ as follows:

$$ \tilde{c}_j \equiv \Gamma_{z_j^{*}} \circ \overline{\Lambda}_j, \; \forall j \in \mathcal{J}; \qquad \tilde{s}_i(\theta_i) \equiv \delta_{(\iota_i(\theta_i))_{j \in \mathcal{J}}}, \; \forall i \in \mathcal{I}, \theta \in \Theta. $$

I.e., $\tilde{c}_j$ is the generalized stochastic "direct mechanism," and $\tilde{s}_i$ is the "truthful reporting" strategy. By (13), $\tilde{s} \in NE(\tilde{c})$ and $z^{*}$ is induced by $(\tilde{c}, \tilde{s})$.

A.4 Proof of Lemma 2

Lemma 2 For any $k \in \mathcal{J}$, there exists $c_{-k}^{k} = (c_j^{k})_{j \neq k} \in \mathcal{C}_{-k}$ such that for any $c_k \in \mathcal{C}_k$, we can find $s(c_k, c_{-k}^{k}) \in NE(c_k, c_{-k}^{k})$ satisfying

$$ V_k\big((c_k, c_{-k}^{k}), s(c_k, c_{-k}^{k})\big) \leq \sum_{\theta \in \Theta} v_k\big(z^{*}(\theta), \theta\big) p(\theta). \quad (14) $$

Proof. First, for any $k \in \mathcal{J}$, we will show below that there exists $c_{-k}^{k} \in \mathcal{C}_{-k}$ such that

$$ \sup_{c_k \in \mathcal{C}_k} \; \min_{s \in NE(c_k, c_{-k}^{k})} V_k\big((c_k, c_{-k}^{k}), s\big) \leq \sum_{\theta \in \Theta} v_k\big(z^{*}(\theta), \theta\big) p(\theta). \quad (15) $$

Second, fix any $k \in \mathcal{J}$ and the $c_{-k}^{k}$ described above. For any $c_k \in \mathcal{C}_k$, pick some

$$ s(c_k, c_{-k}^{k}) \in \arg\min_{s \in NE(c_k, c_{-k}^{k})} V_k\big((c_k, c_{-k}^{k}), s\big). $$
Hence,

$$ V_k\big((c_k, c_{-k}^{k}), s(c_k, c_{-k}^{k})\big) = \min_{s \in NE(c_k, c_{-k}^{k})} V_k\big((c_k, c_{-k}^{k}), s\big) \leq \sup_{c_k' \in \mathcal{C}_k} \; \min_{s \in NE(c_k', c_{-k}^{k})} V_k\big((c_k', c_{-k}^{k}), s\big). \quad (16) $$

Therefore, (14) is implied by (16) and (15).

Finally, we prove (15) by considering two cases: $k \in \overline{\mathcal{J}}$ and $k \notin \overline{\mathcal{J}}$. For any $k \in \overline{\mathcal{J}}$, pick

$$ c_{-k}^{k} \in \arg\min_{c_{-k} \in \mathcal{C}_{-k}} \; \sup_{c_k \in \mathcal{C}_k} \; \min_{s \in NE(c_k, c_{-k})} V_k\big((c_k, c_{-k}), s\big). $$

Then,

$$ \sup_{c_k \in \mathcal{C}_k} \min_{s \in NE(c_k, c_{-k}^{k})} V_k\big((c_k, c_{-k}^{k}), s\big) = \min_{c_{-k} \in \mathcal{C}_{-k}} \sup_{c_k \in \mathcal{C}_k} \min_{s \in NE(c_k, c_{-k})} V_k\big((c_k, c_{-k}), s\big) = \underline{v}_k \leq \sum_{\theta \in \Theta} v_k\big(z^{*}(\theta), \theta\big) p(\theta), $$

i.e., (15) holds. For any $k \notin \overline{\mathcal{J}}$, since

$$ \inf_{c_{-k} \in \mathcal{C}_{-k}} \sup_{c_k \in \mathcal{C}_k} \min_{s \in NE(c_k, c_{-k})} V_k\big((c_k, c_{-k}), s\big) = \underline{v}_k < \sum_{\theta \in \Theta} v_k\big(z^{*}(\theta), \theta\big) p(\theta), $$

by the definition of "inf," there exists $c_{-k}^{k} \in \mathcal{C}_{-k}$ such that

$$ \sup_{c_k \in \mathcal{C}_k} \min_{s \in NE(c_k, c_{-k}^{k})} V_k\big((c_k, c_{-k}^{k}), s\big) \leq \sum_{\theta \in \Theta} v_k\big(z^{*}(\theta), \theta\big) p(\theta), $$

i.e., (15) holds.

References

[1] Bolton, P. and M. Dewatripont (2004): Contract Theory. Cambridge, MA: The MIT Press.

[2] Fudenberg, D. and J. Tirole (1991): Game Theory. Cambridge, MA: The MIT Press.

[3] Kalai, A. T., E. Kalai, E. Lehrer and D. Samet (2010): "A Commitment Folk Theorem," Games and Economic Behavior 69, 127-137.

[4] Martimort, D. (2006): "Multi-Contracting Mechanism Design," in Advances in Economics and Econometrics: Theory and Applications, Ninth World Congress, Vol. I, ed. by R. Blundell, W. K. Newey, and T. Persson. Cambridge: Cambridge University Press, 57-101.

[5] Munkres, J. (2000): Topology. Upper Saddle River, NJ: Prentice-Hall.

[6] Peters, M. and C. Troncoso-Valverde (2013): "A Folk Theorem for Competing Mechanisms," Journal of Economic Theory, forthcoming.

[7] Szentes, B. (2009): "A Note on 'Mechanism Games with Multiple Principals and Three or More Agents' by T. Yamashita," manuscript, University of Chicago.

[8] Yamashita, T. (2010): "Mechanism Games with Multiple Principals and Three or More Agents," Econometrica 78, 791-801.