
Decision Rules Revealing Commonly Known Events∗
Tymofiy Mylovanov
Andriy Zapechelnyuk
University of Pennsylvania†
Queen Mary, University of London‡
June 3, 2012
Abstract
We provide a sufficient condition under which an uninformed principal can infer
any information that is common knowledge among two experts, regardless of the
structure of the parties’ beliefs. The condition requires that the bias of each expert is
less than the radius of the smallest ball containing the action space. If the principal
does not know the structure of the experts' biases and is concerned with the worst-case scenario, this condition is also necessary. If this condition is satisfied, there
exists an ex-post incentive compatible decision rule that implements an optimal
action for the principal conditional on the smallest event commonly known by the
experts. This rule is simple: it is fully determined by the action space and is
independent of other details of the environment.
JEL classification: D74, D82
Keywords: Communication, multidimensional cheap talk, multi-sender cheap talk,
principal-expert model, common knowledge, robustness.
∗ The authors would like to express their gratitude to Attila Ambrus for very helpful comments.
† Email: [email protected]
‡ Corresponding author: School of Economics and Finance, Queen Mary, University of London, Mile End Road, London E1 4NS, UK. E-mail: [email protected].
1 Introduction
In this note, we ask under what conditions an uninformed principal can infer any information that is common knowledge among two experts. We allow for general decision
rules that select an action contingent on the agents’ reports and show that there exists
a decision rule which makes it optimal for the experts to reveal the commonly known
information if the supremum of each agent’s bias relative to the principal’s optimal action
is less than the radius of the smallest ball that contains the action space. This condition
becomes necessary if the principal does not know the structure of the experts’ biases and
is concerned with the worst-case scenario.1
The rule constructed in this paper is simple and depends only on the boundaries of the
action space, but is independent of other details of the environment, such as beliefs, the
direction of biases, and the relationship between the biases and the agents’ information.
It implements an optimal action for the decision maker given the event commonly known by the experts if their reports about this event agree and implements a constant stochastic
action that is independent of the nature of disagreement otherwise. If the experts commonly know the optimal action for the principal, this rule is first best and implements
the optimal action in each state.
This note is related to the cheap-talk literature with two experts who commonly know the principal's optimal action, which has focused on establishing conditions under which the
decision maker can achieve the first best (Krishna and Morgan (2001b), Battaglini (2002),
Ambrus and Takahashi (2008)).2 In cheap talk, the decision maker has no commitment
power and must take an action that is sequentially rational given her beliefs. In this
paper, there is no such restriction. Thus, our full commitment environment serves as a
natural benchmark for the various modifications of environments studied in the existing
literature.
Naturally, our condition for the first best under full commitment is related to but
weaker than those in cheap talk environments (Krishna and Morgan (2001a), Battaglini
(2002), and Ambrus and Takahashi (2008)).3 The key difference is that our condition
bounds each agent's bias independently, whereas this is not so in cheap talk. For instance, if the state space is unidimensional, Proposition 1 in Battaglini (2002) establishes that a necessary and sufficient condition for a fully revealing cheap talk equilibrium is that the sum of the absolute values of the agents' biases is less than half of the measure of the action space. Under full commitment, the first best is implementable if and only if each of the agents' biases, rather than their sum, is bounded by that value.

1 Frankel (2011) studies an optimal robust decision rule in environments with one expert, where the principal has limited information about the agent's preferences.
2 Crawford and Sobel (1982) is the seminal reference on cheap talk communication with one expert. For models of cheap talk communication with two experts see also Gilligan and Krehbiel (1989), Krishna and Morgan (2001a), Battaglini (2004), Levy and Razin (2007), Ambrus and Lu (2010), and Li (2008, 2010). See also Li and Suen (2009) for a survey of work on decision making in committees.
3 In a seminal paper, Battaglini (2002) demonstrates that in generic cheap talk environments with unbounded multidimensional action spaces and common knowledge of the state among the agents, there exists a fully revealing equilibrium that implements the first best. Clearly, the first best is also implementable if the decision maker has more commitment power. In this note, it is assumed that the action space is bounded; our condition is trivially satisfied for unbounded action spaces.
In cheap talk, the inability of the decision maker to commit to a decision rule causes
punishments in fully revealing equilibria to depend non-trivially on the experts’ reports
(Krishna and Morgan 2001a, Krishna and Morgan 2001b, Battaglini 2002, Ambrus and
Takahashi 2008).4 In the decision rule in this paper, the punishment for disagreement
is independent of the agents’ reports and is fully determined by the boundaries of the
action space available to the decision maker. In that sense, the optimal rule is simple and
constant on a large set of environments.
The rule in this paper is ex-post incentive compatible and provides a lower bound
on the decision maker’s payoff in an optimal decision rule. Full investigation of optimal
decision rules in Bayesian environments with noise is not the focus of this note.5 However, the construction presented here can be made robust to small mistakes in experts’
observations, in the sense of satisfying the property of “diagonal continuity” of Ambrus
and Takahashi (2008).
This paper is a natural continuation of the work on optimal delegation to one agent that was initiated by Holmström (1977, 1984) for unidimensional action spaces6 and that was recently studied in multidimensional action spaces by Koessler and Martimort (2012) and Frankel (2011, 2012). The problem of optimal decision rules for two experts has also been studied in Martimort and Semenov (2008). They focus on experts who are biased in the same direction and consider dominant strategy implementation. By contrast with our results, Martimort and Semenov (2008) demonstrate the impossibility of the first-best outcome and show that a sufficiently high bias renders the experts not valuable for the decision maker.

4 Furthermore, as pointed out by Battaglini (2002), these cheap talk equilibria contain implausible out-of-equilibrium beliefs. By contrast, there is no issue of out-of-equilibrium beliefs in our model because the decision maker can commit to her actions.
5 Environments with noise have been considered in Austen-Smith (1993), Wolinsky (2002), Battaglini (2004), Levy and Razin (2007), and Ambrus and Lu (2010). In particular, in a model with multiple agents, a multidimensional environment, and noisy signals, Battaglini (2004) shows that minimal commitment power is sufficient for the first best outcome to become feasible in the limit as the number of agents increases. Ambrus and Lu (2010) construct fully revealing equilibria that are robust to a small amount of noise in environments in which the state space is sufficiently large relative to the size of the agents' biases.
6 See Holmström (1977, 1984), Melumad and Shibano (1991), Dessein (2002), Martimort and Semenov (2006), Alonso and Matouschek (2008), Martimort and Semenov (2008), Goltsman, Hörner, Pavlov and Squintani (2009), Kovac and Mylovanov (2009), and Amador and Bagwell (2010). See also Armstrong and Vickers (2010), Koessler and Martimort (2012), Li and Li (2009), and Lim (2009), who study optimal decision rules in environments which are related, but not identical, to the model of Holmström (1977, 1984). Optimal decision rules for environments in which a decision maker can commit to monetary payments are characterized in Baron (2000), Krishna and Morgan (2008), Bester and Krähmer (2008), Raith (2008), and Ambrus and Egorov (2009).
Optimal delegation rules that are robust to the details of the environment are studied
in Frankel (2011); the principal is assumed to know the distribution of states but has
limited information about the agent’s preferences.
In this paper, incentives for the experts to report their commonly known information truthfully are provided by punishing disagreement with a stochastic action. This
feature of design, together with the assumption of common knowledge of the principal’s
optimal action by the experts, also appears in Mylovanov and Zapechelnyuk (2012) and
Zapechelnyuk (2012). Mylovanov and Zapechelnyuk's (2012) model is limited to a unidimensional action space and deals with the case of large and opposing biases, where first-best implementation is generally impossible. The model of Zapechelnyuk (2012) allows multiple experts to collude in their information disclosure strategies, which leads to
a very different result that the first best is achievable if and only if the principal’s optimal
action is Pareto undominated for the experts. This condition is generally violated in the
environment of the current paper, with two experts and multidimensional action space.
2 The Model
There are two informed experts, i = 1, 2, and an uninformed principal. The set of actions
available to the principal is Y , a compact subset of Rd , d ≥ 1. The principal’s optimal
action x, called the state, belongs to a set of states X ⊂ Y. Each expert i = 1, 2 is endowed
with an information partition Pi of X and knows that x belongs to Xi ∈ Pi . Denote by
X̂ the smallest subset of X that is common knowledge for the experts.7
Let || · || denote a norm on Rd . Let rY be the radius of the smallest ball that contains
Y and without loss of generality set the center of this ball to be at the origin.
Expert i’s most preferred action at state x is given by x+bi (x), where bi (x) is i’s bias.8
Each expert minimizes his loss function given by a convex transformation of the squared
distance between action y ∈ Y and i's most preferred action:

Li(x, y) = hi(||y − (x + bi(x))||²), i = 1, 2,

where hi : R+ → R+ is a weakly convex function with hi(0) = 0.

7 That is, X̂ is the element of the meet P1 ∧ P2 that contains x.
8 We do not assume that x + bi(x) ∈ Y.
Let ∆(Y) denote the set of probability measures on Y (stochastic actions). Conventionally, we extend the definition of Li to X × ∆(Y) by Li(x, λ) = ∫Y Li(x, y) λ(dy), x ∈ X, λ ∈ ∆(Y). A decision rule is a measurable function

µ : 𝒳² → ∆(Y), (X̂1, X̂2) ↦ µ(X̂1, X̂2),

where 𝒳 = P1 ∧ P2 is the set of subsets of X that can be commonly known by the experts and µ(X̂1, X̂2) is a stochastic action that is contingent on the experts' reports (X̂1, X̂2) ∈ 𝒳². A decision rule induces a game, in which the experts simultaneously make reports X̂1, X̂2 ∈ 𝒳 and the outcome µ(X̂1, X̂2) is implemented.
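To fix ideas, here is a minimal numerical sketch (in Python) of the primitives just introduced: the loss Li, its extension to stochastic actions, and a decision rule viewed as a mapping from a pair of reported events to a lottery over actions. The one-dimensional action set Y = [−1, 1], the particular hi, the constant bias, and all helper names are illustrative assumptions, not part of the model; the punishment lottery used on disagreement anticipates the construction of λ∗ in (1) below.

import numpy as np

# Illustrative primitives (all specific numbers are assumptions for this sketch):
# a one-dimensional action space Y = [-1, 1], a weakly convex h_i with h_i(0) = 0,
# a constant bias b_i, and the loss L_i(x, y) = h_i(||y - (x + b_i)||^2).

def loss(x, y, bias, h=lambda t: t):
    """Expert's loss at state x from a deterministic action y."""
    return h(float(np.sum((np.atleast_1d(y) - (np.atleast_1d(x) + np.atleast_1d(bias))) ** 2)))

def expected_loss(x, lottery, bias, h=lambda t: t):
    """Expert's loss from a stochastic action, given as (probability, action) pairs."""
    return sum(p * loss(x, y, bias, h) for p, y in lottery)

def decision_rule(report1, report2, on_agreement, punishment):
    """A decision rule mu: a lottery contingent on the two reported events."""
    if report1 == report2:
        return on_agreement(report1)   # some lottery supported on the reported event, as in (C1)
    return punishment                  # a fixed lottery, independent of the reports

# Example: on agreement, implement the midpoint of the reported interval; on
# disagreement, implement the mean-zero lottery on the endpoints of Y = [-1, 1].
punish = [(0.5, -1.0), (0.5, 1.0)]
mu = lambda r1, r2: decision_rule(r1, r2, lambda r: [(1.0, 0.5 * (r[0] + r[1]))], punish)
print(expected_loss(x=0.3, lottery=mu((0.0, 1.0), (0.0, 1.0)), bias=0.2))    # agreement: 0.0
print(expected_loss(x=0.3, lottery=mu((0.0, 1.0), (-1.0, 0.0)), bias=0.2))   # disagreement: 1.25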
Decision rule µ is common-knowledge-revealing if for all X̂ ∈ 𝒳,

(C1) µ(X̂, X̂) is a stochastic action with support on X̂, and

(C2) Li(x, y) ≤ Li(x, µ(X̂1, X̂2)) for all x, y ∈ X̂ and all X̂1, X̂2 ∈ 𝒳 with X̂1 ≠ X̂2.
Condition (C1) requires that if the experts' reports agree, the decision rule implements an action in the reported set, while condition (C2) requires that each expert prefers any action in the commonly known set to any action that can possibly be implemented if the experts' reports disagree. These conditions imply that truthtelling is an ex-post equilibrium in the common-knowledge-revealing rule if the agents know the rule. Furthermore,
truthtelling is optimal even if the agents do not know the rule but know that it will not
implement actions outside of the commonly known set.
Our objective is to construct a common-knowledge-revealing rule. As an example, consider the action set Y illustrated in Fig. 1, with the grid representing the experts' common knowledge partition 𝒳 (we assume X = Y). If state x is as shown in Fig. 1, then
it is common knowledge for the experts that x is in the shaded rectangle, X̂. In this
environment, how can the principal motivate the experts to reveal their common knowledge
at every possible state?
Consider decision rule µ∗ that satisfies (C1) for all X̂ ∈ 𝒳 and µ∗(X̂1, X̂2) = λ∗ whenever X̂1 ≠ X̂2, where stochastic action λ∗ is a solution of:

max_{λ ∈ ∆(Y)} ∫Y ||y||² λ(dy)   s.t.   ∫Y y λ(dy) = 0.     (1)
[Fig. 1: The action set Y (a polygon with vertices A, B, C, D) partitioned by a grid into commonly knowable events; the state x lies in the shaded rectangle X̂; arrows indicate the experts' biases b1(x) and b2(x).]
That is, any disagreement between the experts results in a stochastic action that has the maximum variance among all random variables on Y with zero expectation.
To construct λ∗ geometrically, one draws the smallest ball that contains Y and assigns positive weights to extreme actions in Y that lie on the boundary of this ball, such that the weighted average of those actions is at the center of the ball. So, for the example in Fig. 1, one assigns positive weights to vertices A, B, and C (note that vertex D is strictly inside the smallest ball, so it is assigned zero weight).
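As a concrete illustration of this geometric construction, the following sketch computes λ∗ for a hypothetical quadrilateral Y whose vertices loosely mimic Fig. 1; the coordinates, and the assumption that the smallest ball containing Y is the unit ball centered at the origin, are ours. For a polytope Y it is without loss to restrict the support of λ to the vertices (since ||y||² is convex, spreading any mass point onto the vertices preserves the mean and weakly raises the objective), so problem (1) reduces to a small linear program.

import numpy as np
from scipy.optimize import linprog

# Hypothetical vertices of Y (chosen so that the smallest enclosing ball is the
# unit ball centered at the origin): A, B, C lie on the unit circle, D is inside.
V = np.array([
    [1.0, 0.0],                     # A
    [-0.5, np.sqrt(3) / 2],         # B
    [-0.5, -np.sqrt(3) / 2],        # C
    [0.3, 0.2],                     # D (strictly inside the ball)
])

# Problem (1) over vertex weights p: maximize sum_j p_j ||v_j||^2
# subject to sum_j p_j v_j = 0 and sum_j p_j = 1, p >= 0.
c = -np.sum(V ** 2, axis=1)                   # maximize => minimize the negative
A_eq = np.vstack([V.T, np.ones(len(V))])      # mean-zero and probability constraints
b_eq = np.array([0.0, 0.0, 1.0])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * len(V))

print("lambda* weights on A, B, C, D:", np.round(res.x, 3))   # ~[0.333 0.333 0.333 0.]
print("second moment of lambda*:", round(-res.fun, 3))        # 1.0 = r_Y^2 here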
Theorem 1 Decision rule µ∗ is common-knowledge-revealing if ||bi (x)|| ≤ rY for all x ∈
X and i = 1, 2.
Proof. We need to prove (C2) for all X̂ ∈ 𝒳. For all x ∈ X, all y ∈ Y, and each i = 1, 2, by monotonicity9 of hi we have

Li(x, y) = hi(||y − (x + bi(x))||²) ≤ hi(||y||² + ||x + bi(x)||²).
Next, by assumption, the smallest ball that contains Y is centered at the origin and has radius rY. Consequently, Li(x, y) ≤ hi(rY² + ||x + bi(x)||²). Also, ∫Y ||z||² λ∗(dz) = rY², since λ∗ that solves (1) must assign positive mass only on subsets of Y that have distance rY from the origin (in the example depicted in Fig. 1, λ∗ will assign positive mass only on vertices A, B, and C). Also, by (1), ∫Y z λ∗(dz) = 0. Using the above and Jensen's inequality, we obtain for all x ∈ X, all y ∈ Y, and each i = 1, 2
Li(x, λ∗) = ∫Y hi(||z − (x + bi(x))||²) λ∗(dz)
          ≥ hi( ∫Y ||z − (x + bi(x))||² λ∗(dz) )
          = hi( ∫Y {||z||² − 2 z · (x + bi(x)) + ||x + bi(x)||²} λ∗(dz) )
          = hi( ∫Y ||z||² λ∗(dz) − 2 (x + bi(x)) · ∫Y z λ∗(dz) + ||x + bi(x)||² )
          = hi( rY² + ||x + bi(x)||² )
          ≥ Li(x, y),

and (C2) follows immediately.

9 The assumptions on hi imply that it is increasing.
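The following sketch checks numerically the two ingredients of the displayed chain, with purely illustrative numbers: a λ∗ that puts equal weight on three points of the unit circle (so rY = 1), an arbitrary ideal point w = x + bi(x), and a convex hi. It verifies the mean-zero variance decomposition ∫Y ||z − w||² λ∗(dz) = rY² + ||w||² and the Jensen step.

import numpy as np

# Assumed data for the check: lambda* uniform on three points of the unit circle
# (mean zero, second moment r_Y^2 = 1), an arbitrary w = x + b_i(x), a convex h_i.
support = np.array([[1.0, 0.0],
                    [-0.5, np.sqrt(3) / 2],
                    [-0.5, -np.sqrt(3) / 2]])
weights = np.array([1 / 3, 1 / 3, 1 / 3])
w = np.array([0.4, -0.2])            # hypothetical x + b_i(x)
h = lambda t: t ** 2                 # a weakly convex h_i with h_i(0) = 0

sq_dist = np.sum((support - w) ** 2, axis=1)

# Variance decomposition used in the proof:
# int ||z - w||^2 dlambda* = int ||z||^2 dlambda* - 2 w . int z dlambda* + ||w||^2
#                          = r_Y^2 + ||w||^2, since lambda* has mean zero.
print(np.isclose(weights @ sq_dist, 1.0 + np.sum(w ** 2)))      # True

# Jensen's inequality for convex h_i: E[h(||z - w||^2)] >= h(E[||z - w||^2]).
print(weights @ h(sq_dist) >= h(weights @ sq_dist))             # True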
Note that the common-knowledge-revealing rule µ∗ is robust to the details of the preferences and the information structure of the experts. The only information relevant for the construction of µ∗ is the principal's set of actions Y and the upper bounds on the experts' biases. The stochastic punishment action λ∗ is constant with respect to the state space X ⊂ Y, the information structures P1 and P2 of the experts, and the parameters of their preferences, hi and bi, i = 1, 2; to verify the sufficient condition in Theorem 1 one needs to know only the radius of Y and the upper bounds on the experts' biases.
If the principal does not know the biases of the agents, but nevertheless would like
to ensure that a common-knowledge-revealing rule exists, the condition in Theorem 1
becomes necessary.
Theorem 2 Assume that the agents’ biases are constant. Then, there exists a pair of
directions of biases such that a common-knowledge-revealing rule exists if and only if
||bi || ≤ rY for each i = 1, 2.
Proof. We show that for any action set Y, we can find directions of the biases such that if ||bi|| > rY for some i = 1, 2, then a common-knowledge-revealing rule does not exist. As illustrated by Fig. 2, consider action x̂ in the support of stochastic action λ∗ that satisfies (1) and assume that b1 satisfies b1/||b1|| = −x̂/||x̂|| and ||b1|| > rY. Observe that, since ||b1|| > rY, the set of actions that expert 1 prefers to action y = x̂ is inside the larger dashed circle, so she strictly prefers every action in Y\{x̂} to x̂. Hence, to provide the incentive for expert 1 to truthfully report the element X̂ of 𝒳 that is commonly known by the agents at x̂, one must choose µ(X̂1, X̂) = x̂ for all X̂1 ∈ 𝒳. If X̂ is not a singleton, (C2) is violated for any x ∈ X̂ with x ≠ x̂. Now let X̂ = {x̂}. Then, however, in any state expert 2 can report X̂2 = X̂ and obtain action y = x̂. Provided the directions of the biases are
different enough, there exists x′ ≠ x̂ such that expert 2 (whose set of actions preferred to x′ is depicted by the smaller dashed circle) strictly prefers x̂ to x′. Consequently, (C2) is violated for expert 2 at x′.

[Fig. 2: The action x̂ on the boundary of the smallest ball containing Y, another action x′, the bias directions b1 and b2, and the dashed circles bounding the actions that expert 1 prefers to x̂ and that expert 2 prefers to x′.]
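A small numerical illustration of the geometric step in this proof, under assumed coordinates: Y is taken to be the unit disc (rY = 1), x̂ = (1, 0) lies on the boundary of the smallest ball containing Y, and expert 1's bias points from x̂ toward the center with ||b1|| = 1.2 > rY. Sampling actions in the disc confirms that every action other than x̂ is strictly closer to expert 1's ideal point than x̂ is, i.e., strictly preferred to x̂.

import numpy as np

# Assumed geometry for the illustration (not the general argument): Y = unit disc,
# x_hat on its boundary, b_1 pointing from x_hat toward the center, ||b_1|| > r_Y.
rng = np.random.default_rng(0)
x_hat = np.array([1.0, 0.0])
b1 = -1.2 * x_hat                                  # direction -x_hat/||x_hat||, length 1.2
ideal = x_hat + b1                                 # expert 1's ideal point, (-0.2, 0)

# Sample actions y in the unit disc and compare distances to the ideal point.
theta = rng.uniform(0.0, 2 * np.pi, 10000)
radius = np.sqrt(rng.uniform(0.0, 1.0, 10000))
Y_sample = np.column_stack([radius * np.cos(theta), radius * np.sin(theta)])

dist_y = np.linalg.norm(Y_sample - ideal, axis=1)
dist_x_hat = np.linalg.norm(x_hat - ideal)         # equals ||b_1|| = 1.2

# Every sampled y (almost surely != x_hat) is strictly closer to the ideal point,
# so expert 1 strictly prefers it to x_hat.
print(np.all(dist_y < dist_x_hat))                 # True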
An implication of this theorem is that in some natural environments
in which the experts commonly know the optimal action for the principal, the bound on
the experts’ biases becomes a necessary and sufficient condition for existence of a decision
rule that implements the principal’s optimal action in each state. In particular, assume
that the set of actions Y is the ball with radius rY , X = Y , and the experts commonly
know the optimal action for the principal, that is, Pi = X, i = 1, 2. If biases are
state-independent and not collinear, i.e., bi(x) ≡ bi and b1/||b1|| ≠ b2/||b2||, then a common-knowledge-revealing rule exists if and only if ||bi|| ≤ rY for each i = 1, 2. (The proof of
this result is essentially identical to that of Theorem 2.)10
If the experts’ information partitions are not identical, there may exist decision rules
that extract more information from the experts than their common knowledge. Hence,
given the principal’s preferences and beliefs about x, the common-knowledge-revealing
rule provides a lower bound on the expected payoff that the principal can possibly achieve.

10 Mylovanov and Zapechelnyuk (2010) prove this result for a unidimensional action space.
The revelation principle justifies our focus on truthtelling equilibria. Nevertheless,
common-knowledge-revealing rules typically permit multiple equilibria, providing the experts with an incentive to collude on other strategies. Zapechelnyuk (2012) addresses
the question of collusive behavior of the experts, who commonly know the state and are
able to make binding agreements, and shows that the principal can implement her most
preferred action if and only if the principal’s optimal action is Pareto undominated for
the experts. In this paper, this condition holds only if the experts' biases are exactly opposing and is generically violated in multidimensional action spaces.
References
Alonso, Ricardo and Niko Matouschek, “Optimal Delegation,” Review of Economic
Studies, 2008, 75 (1), 259–293.
Amador, Manuel and Kyle Bagwell, “On the Optimality of Tariff Caps,” Mimeo
2010.
Ambrus, Attila and Georgy Egorov, “Delegation and Nonmonetary Incentives,”
Mimeo 2009.
Ambrus, Attila and Shih-En Lu, “Robust fully revealing equilibria in multi-sender cheap talk,”
mimeo, 2010.
Ambrus, Attila and Satoru Takahashi, “Multi-sender cheap talk with restricted
state spaces,” Theoretical Economics, 2008, 3, 1–27.
Armstrong, Mark and John Vickers, “A model of delegated project choice,” Econometrica, 2010, 78, 213–244.
Austen-Smith, David, “Interested Experts and Policy Advice: Multiple Referrals under
Open Rule,” Games and Economic Behavior, January 1993, 5 (1), 3–43.
Baron, David P., “Legislative Organization with Informational Committees,” American
Journal of Political Science, 2000, 44 (3), 485–505.
Battaglini, Marco, “Multiple Referrals and Multidimensional Cheap Talk,” Econometrica, 2002, 70, 1379–1401.
Battaglini, Marco, “Policy Advice with Imperfectly Informed Experts,” Advances in Theoretical Economics, 2004, 4, Article 1.
Bester, Helmut and Daniel Krähmer, “Delegation and incentives,” RAND Journal
of Economics, 2008, 39 (3), 664–682.
Crawford, Vincent P. and Joel Sobel, “Strategic Information Transmission,” Econometrica, 1982, 50 (6), 1431–51.
Dessein, Wouter, “Authority and Communication in Organizations,” Review of Economic Studies, October 2002, 69 (4), 811–38.
Frankel, Alexander, “Aligned delegation,” mimeo, 2011.
Frankel, Alexander, “Delegating multiple decisions,” mimeo, 2012.
Gilligan, T. and K. Krehbiel, “Asymmetric information and legislative rules with a
heterogeneous committee,” American Journal of Political Science, 1989, 33, 459–
490.
Goltsman, Maria, Johannes Hörner, Gregory Pavlov, and Francesco Squintani,
“Mediation, Arbitration, and Renegotiation,” Journal of Economic Theory, 2009,
144, 1397–1420.
Holmström, Bengt, “On Incentives and Control in Organizations,” PhD thesis, Stanford University 1977.
Holmström, Bengt, “On the Theory of Delegation,” in M. Boyer and R. E. Kihlstrom, eds., Bayesian
Models in Economic Theory, North-Holland, 1984, pp. 115–41.
Koessler, Frédéric and David Martimort, “Optimal Delegation with multidimensional decisions,” mimeo, 2012.
Kovac, Eugen and Tymofiy Mylovanov, “Stochastic mechanisms in settings without
monetary transfers: The regular case,” Journal of Economic Theory, 2009, 144,
1373–1395.
Krishna, Vijay and John Morgan, “Asymmetric Information and Legislative Rules:
Some Amendments,” American Political Science Review, 2001, 95 (2), 435–52.
Krishna, Vijay and John Morgan, “A Model of Expertise,” Quarterly Journal of Economics, 2001, 116 (2), 747–75.
Krishna, Vijay and John Morgan, “Contracting for information under imperfect commitment,” RAND Journal of Economics, 2008, 39 (4), 905–925.
Levy, Gilat and Ronny Razin, “On the limits of communication in multidimensional
cheap talk: a comment,” Econometrica, 2007, 75, 885–893.
Li, Hao and Wei Li, “Optimal Delegation with Limited Commitment,” mimeo, 2009.
Li, Hao and Wing Suen, “Viewpoint: Decision-Making in Committees,” Canadian Journal
of Economics, 2009, 42, 359–392.
Li, Ming, “Two (talking) heads are not better than one,” Economics Bulletin, 2008, 3
(63), 1–8.
Li, Ming, “Advice from Multiple Experts: A Comparison of Simultaneous, Sequential, and
Hierarchical Communication,” The B.E. Journal of Theoretical Economics (Topics),
2010, 10 (1), Article 18, 22 pages.
Lim, Wooyoung, “Selling Authority,” mimeo, 2009.
Martimort, David and Aggey Semenov, “Continuity in Mechanism Design without
Transfers,” Economics Letters, 2006, 93 (2), 182–189.
Martimort, David and Aggey Semenov, “The informational effects of competition and collusion in legislative politics,” Journal of Public Economics, 2008, 92 (7), 1541–1563.
Melumad, Nahum D. and Toshiyuki Shibano, “Communication in Settings with
No Transfers,” The RAND Journal of Economics, 1991, 22 (2), 173–98.
Mylovanov, Tymofiy and Andriy Zapechelnyuk, “Decision rules for experts with
opposing interests,” mimeo, 2010.
Mylovanov, Tymofiy and Andriy Zapechelnyuk, “Optimal arbitration,” mimeo, 2012.
Raith, Michael, “Specific knowledge and performance measurement,” RAND Journal
of Economics, 2008, 39 (4), 1059–1079.
Wolinsky, Asher, “Eliciting information from multiple experts,” Games and Economic
Behavior, 2002, 41 (1), 141–160.
Zapechelnyuk, Andriy, “Eliciting information from a committee,” mimeo, 2012.