
SCREENING THROUGH COORDINATION
NEMANJA ANTIĆ* AND KAI STEVERSON†
Abstract. A principal decides whether or not to take an action on each of a number
of agents without the use of transfers. Each agent desires the action to be taken and
has private information (his type) about the principal’s payoff from taking the action on
him. The principal’s payoff exhibits complementarities across agents, which the principal
leverages by coordinating the agents. Coordination involves taking the action on agents
with high (low) types more often when the other agents also have high (low) types. Due to
complementarities, coordination optimally accepts failure when the consequences are small in
order to ensure success when the gain is high. We also find that, in a variety of environments,
the action is taken more often for an ex-ante inferior agent; a phenomenon we call favoritism.
One possible application of our setting is a firm downsizing a division and deciding which
employees to keep.
Date: February 12, 2016.
Key words and phrases. Mechanism Design, Screening, Coordination.
* Nemanja Antic: Kellogg School of Management, MEDS Department, Northwestern University,
Evanston, IL 60208. Email: [email protected]
† Kai Steverson: New York University, New York, NY 10003. Email: [email protected]
Thanks to Eddie Dekel, Faruk Gul, Peter Klibanoff, Stephen Morris, Mallesh Pai, Wolfgang Pesendorfer,
Nicola Persico, Doron Ravid and Asher Wolinsky for helpful discussions.
1. Introduction
Consider a principal deciding whether or not to take an action on each of a number of
agents without the use of transfers. The agents always desire the action specific to them and
care about nothing else. Each agent has private and independent information (his type) that
determines an agent-specific outcome resulting from the action; not taking the action always
yields the same outcome. We will say an agent has a type of high (low) quality if the action
produces a higher (lower) agent-specific outcome than not taking the action. The principal’s
payoff is an increasing function of the agents’ outcomes and exhibits complementarities across
agents.
A number of economic environments correspond to our setting, a few of which we list here.
A firm downsizing a division and deciding which employees to keep. A CEO deciding whether
to include updated components, developed by different divisions, or use components from
the previous version of a product. Choosing a team from within an organization to work on
a new prestigious project. Deciding whether to implement separate recommendations from
experts working on retainer. A grant agency choosing whether to fund separate research
teams that collaborate.
In a typical screening problem, the principal induces truthful revelation by tailoring the
reward for reporting a type to the preferences of the agent associated with that type. In our
setting, the principal faces the difficulty that agents’ preferences are independent of their
type; agents will always make whatever report maximizes the probability of the action being
taken on them. Moreover, the principal is unable to verify the agents’ reports either directly
as in Ben-Porath et al. (2014) or indirectly using correlation among the agents’ types.¹ It
is perhaps surprising that screening is helpful at all. Indeed, when faced with a single agent,
the principal can do no better than always taking or not taking the action.
However, a principal who faces multiple agents can leverage the complementarities in her
own payoff function by coordinating the agents. Coordination involves taking the action on
agents with high (low) quality types more often when the other agents also have high (low)
quality types. Taking the action on low quality types in some states of the world is necessary
to incentivize truthful reporting. Under complementarities, the principal’s payoff has greater

¹In our setting, the agents have private types and hence draw their types independently.
potential when there are more high quality agents. Hence coordination allows the principal
to accept the failure necessary for incentive compatibility when the consequences are small in
order to ensure success when the potential gain is high. In Theorem 1, we show there always
exists an optimal mechanism that coordinates the agents.
We also find that, in a variety of environments, the action is taken more often for an ex-ante inferior agent; a phenomenon we call favoritism. Such behavior is sometimes attributed
to biases of the principal such as nepotism; a key insight of our model is that favoritism can
arise purely as a consequence of optimal design. Under complementarities, agent i having a
higher quality type increases the gain (loss) from taking the action on agent j when he is a
high (low) quality type. In environments where the increase in gain outweighs the increase in
loss, agent i having a higher quality type makes taking the action on agent j more attractive
to the principal. In other words, high quality of agent i creates a positive spillover that
benefits agent j. In some environments, this spillover is strong enough that agent j benefits
from an increase in agent i’s type more than agent i does, which leads to favoritism.
To get a closer look at coordination, we consider a case that allows for a complete characterization of the optimal mechanism. We suppose the principal’s payoff function treats
the agents symmetrically and each agent has two types, a high quality and a low quality
type, that occur with equal probability. In this setting, coordination takes the stark form of
spectacular successes and spectacular failures. A spectacular success occurs when the principal takes the action on every high quality agent and avoids the action on every low quality
agent; a spectacular failure does the opposite. The optimal mechanism will perform a spectacular success (failure) action when the number of high (low) quality agents is above a fixed
threshold. When neither the number of high nor low quality agents surpasses the threshold,
the optimal mechanism either takes the action on everyone or on no one.
In some applications the agents may not always desire the action. For instance, in the
case of experts giving recommendations, the expert may only wish sufficiently high quality
recommendations to be taken up by the principal; naturally the expert should desire the
principal to take up any recommendation that benefits the principal. We capture this idea
in an extension that maintains the core ideas of our baseline setting. We suppose each agent
only desires the action if their type is above a fixed threshold, where at the threshold the
agents have a low quality type. Agents with types below the threshold will willingly reveal
they are unsuited for the action and the optimal mechanism will never take the action on
them. Agents with types above the threshold all want the action to be taken, and the design
problem on these agents resembles our baseline setting and all the results discussed above
carry through.
Lastly, we consider an extension that allows the agents to collude against the principal;
for example, by agreeing to all report the highest possible type. Given the way that
coordination works, it is clear how such collusion could benefit the agents. However, we show
that the principal can make the optimal mechanism coalition-proof while achieving virtually
the same payoff. Additionally, the coalition-proof mechanism will be arbitrarily close
to the optimal mechanism, characterized previously, in terms of the actions taken on the
agents.
1.1. Literature Review
There is a large literature on screening with agents who have type-dependent preferences;
mostly this literature assumes the availability of transfers. Broadly speaking, this literature
uses differences in agents’ preferences to screen, and optimal screening is based on balancing
the rewards from separating agents with the cost of doing so (this cost could be a transfer,
if available, or an inefficiency in allocation).
Our paper belongs to the literature on screening without transfers where agents have type-independent preferences. While the literature considers manifold settings, the environments
are typically such that they allow the principal to "check" the agent’s report (perhaps imperfectly) and, naturally, the optimal mechanisms exploit this. For example, the closely-related
work by Ben-Porath et al. (2014) directly assumes that the principal can (at a cost) check
the agents’ reports and use this (or the threat of it) to discipline agents. Another class of
papers, e.g., Jackson & Sonnenschein (2007), have repeated interactions and the principal
can hold the agent accountable if the agent’s reports fail to satisfy the probability law known
to the principal. Finally, if types are correlated across agents², the principal may check one
agent’s reports against others’ reports. In our setting none of this is possible: the principal
²While we are unaware of this type of mechanism in the screening literature without transfers and agents
who have type-independent preferences, the principle is applied in Cremer-McLean (1988) mechanisms, as
well as in the classic implementation literature.
cannot verify reports explicitly, probabilistic knowledge is not going to help in a one-shot
game, and agents’ types are independent.
Related work by Guo & Hörner (2015) and Lipnowski & Ramos (2015), as well as the
more applied work of Li et al. (2015), have repeated interactions between a principal and
one agent and no transfers. While the agent always wants the principal to take the action on
them (e.g., allocate a good to them), these papers make use of type-dependent intensity of
agent preferences. In particular, these papers assume that agents get more utility from the
action when the principal also gets more utility from taking the action on them. Without
this alignment of preferences between the principal and agent the types of mechanisms they
propose would be ineffective. Our mechanism does not require preferences to be aligned
in any way. The misalignment of preferences between principal and agent is not only of
theoretical interest: it is easy to motivate. An oft-quoted class of examples in this literature
is the problem of a college dean and a department making hiring decisions. If there are two
types of agents, one who is a better researcher and another who is a better teacher, it may
be reasonable to assume that the dean wants only to hire someone who is a good "teacher",
while the department prefers the "researcher".
2. Simple Example
To introduce our key ideas of coordination and favoritism we examine the simplest possible
example. A principal is planning a project and consults two expert agents $i \in \{1, 2\}$ who
each have an idea for the project. The agents have private information about the quality
of their idea ($\theta_i$), which can be either low ($\theta_i = 0$) or high ($\theta_i = 1$). The principal’s action
is whether to include each agent’s idea in the project. The principal’s action is denoted by
$(a_1, a_2)$, where $a_i = 1$ or $a_i = 0$ means including or not including the idea respectively. Both
agents always want their ideas included in the project regardless of quality, while the
principal cares about the overall quality of the project. Project quality is zero if one
or more low quality ideas are included or if no ideas are included. If one high quality idea
is included the project has value $1/4$, and if two high quality ideas are included it has value $1$.
Summarizing, the principal’s utility function $V$ is given by:
$$V(\theta_1, \theta_2, a_1, a_2) = \begin{cases} 1 & \text{if } \theta_1 = \theta_2 = 1 \text{ and } a_1 = a_2 = 1, \\ \frac{1}{4} & \text{if } \theta_i = 1 \text{ and } a_i = 1,\ a_{-i} = 0, \\ 0 & \text{otherwise.} \end{cases}$$
It is common knowledge that agent $i$ has chance $\mu_i \in [\frac{1}{2}, 1)$ of having a high quality idea and
the agents draw their idea quality independently.
If the principal were facing only a single agent, the problem would be straightforward and only
trivial screening would occur: the principal should always include the agent’s idea.³ What makes
this problem interesting is, therefore, the interaction between the agents’ reports.
Before talking to the agents the principal can commit to a mechanism specifying a contingent plan of which ideas to include based on what the agents tell her. The revelation
principle applies in this setting, reducing the principal’s problem to finding an optimal direct
mechanism. A direct mechanism $g$ specifies a (possibly mixed) action for each possible pair
of agent types. Concretely, $g(\theta_1, \theta_2)[a_1, a_2]$ specifies the probability the principal employs
action $(a_1, a_2)$ after receiving a report that agent quality is $(\theta_1, \theta_2)$. We will typically drop
the commas when writing the action and quality pairs.
Incentive compatibility of the mechanism requires that reporting type $\theta_i$ must be optimal
for an agent of type $\theta_i$. However, the agent’s preferences do not depend on their type,
which makes incentive compatibility possible only when the agent is indifferent among all
possible reports. Creating this indifference requires the mechanism to "pay for" including
high quality ideas by occasionally including low quality ideas. When the mechanism includes
a high quality idea it pushes the agents towards preferring to report they are high quality,
which must be balanced by including low quality ideas in some states of the world.
Using the idea of paying for high quality ideas with low quality ideas, we can derive some
properties of the optimal mechanism. When both ideas are low quality the project value
will always be zero, hence the optimal mechanism might as well include both ideas to help
pay for including high quality ideas at other states of the world. When one idea is high
quality and the other low quality, the optimal mechanism should never include both ideas
or exclude both ideas. Including one low quality idea assures a project value of zero, hence
taking the high quality idea is wasteful. Similarly, failing to include the high quality idea
ensures a payoff of zero, so the principal may as well include the bad idea. Lastly, notice that
with one or fewer high quality ideas the maximum payoff is $1/4$, while the mechanism that
always includes both ideas regardless of quality gives payoff $\mu_1 \mu_2 \geq 1/4$. Hence, in order to
at least match the payoff of the always-include-both-ideas strategy, the mechanism must take
advantage of including both ideas when both are high quality. Putting together the above
reasoning gives the optimal mechanism $g$ the following structure:

³With other assumptions, one can have the principal never including the idea of a single agent or being
indifferent. In any case, screening never helps the principal.
$$\begin{aligned} g(11)[11] &= 1,\\ g(10)[10] + g(10)[01] &= 1,\\ g(01)[01] + g(01)[10] &= 1,\\ g(00)[11] &= 1. \end{aligned} \tag{1}$$
By examining the above equations we can see the optimal mechanism includes a high (low)
quality idea more often when the other agent’s idea is also high (low) quality; we call this
property coordination and it is the first key idea to come out of our model. The optimality
of coordination arises from the complementarity of idea quality in determining project value.
When complementarities exist, coordination ensures high quality agents are included when
they have the most value, i.e., when the overall quality of agents is high. Conversely, low
quality agents are included when they can do the least damage, i.e., when the overall quality
of agents is low. In Theorem 1, we show that coordination is not special to this example and
is in fact a general feature of an optimal mechanism even with very weak complementarities.
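As an illustration outside the formal analysis, the payoff comparison behind structure (1) can be checked numerically. The sketch below (Python; the helper names and the priors $\mu = (0.6, 0.5)$ are our hypothetical choices, not part of the model) evaluates the principal's expected payoff of any mechanism with structure (1) against the always-include benchmark:

```python
from itertools import product

# Value function from the example: 1 if both included ideas are high,
# 1/4 if exactly one high idea is included alone, 0 otherwise.
def V(theta, a):
    included = [t for t, ai in zip(theta, a) if ai == 1]
    if not included or 0 in included:
        return 0.0
    return 1.0 if len(included) == 2 else 0.25

def prior(theta, mu):
    p = 1.0
    for t, m in zip(theta, mu):
        p *= m if t == 1 else 1 - m
    return p

# Expected payoff of a direct mechanism: g[theta][a] is the probability
# of taking action a after report theta.
def payoff(g, mu):
    return sum(prior(theta, mu) * prob * V(theta, a)
               for theta in g for a, prob in g[theta].items())

# Any mechanism with the structure in (1), parametrized by
# x = g(10)[10] and y = g(01)[01].
def structure_1(x, y):
    return {(1, 1): {(1, 1): 1.0},
            (1, 0): {(1, 0): x, (0, 1): 1 - x},
            (0, 1): {(0, 1): y, (1, 0): 1 - y},
            (0, 0): {(1, 1): 1.0}}

mu = (0.6, 0.5)  # hypothetical priors with mu_i >= 1/2
g_all = {theta: {(1, 1): 1.0} for theta in product([0, 1], repeat=2)}

# Always-include earns mu_1 * mu_2; a structure-(1) mechanism adds
# x/4 * P(10) + y/4 * P(01) on top of that.
print(payoff(g_all, mu))                  # mu_1 * mu_2
print(payoff(structure_1(0.5, 1.0), mu))
```

Any positive weight on correctly sorting the mixed states therefore strictly improves on the always-include benchmark, consistent with the reasoning above.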
To examine our next key idea of favoritism, consider the case where $\mu_1 > \mu_2 = \frac{1}{2}$. Using
the structure of $g$ derived above, we can express the incentive constraints of the two agents
as follows:
$$\mu_2 + (1 - \mu_2)\, g(10)[10] = 1 - \mu_2 + \mu_2\, g(01)[10],$$
$$\mu_1 + (1 - \mu_1)\, g(01)[01] = 1 - \mu_1 + \mu_1\, g(10)[01].$$
The left-hand side of the first equation gives the probability that agent 1’s idea is included
conditional on him reporting to be type 1. The right-hand side gives the probability of
including agent 1’s idea conditional on him reporting to be type 0. Incentive compatibility
requires that these probabilities be equal, making the agent indifferent between the two
reports. The second equation is the corresponding incentive compatibility condition for
agent 2. Using that $\mu_2 = \frac{1}{2}$, the first equation immediately yields $g(10)[10] = g(01)[10]$.
From the structure on $g$ we derived earlier, this implies $g(01)[01] = g(10)[01]$, and plugging
that into agent 2’s incentive compatibility constraint gives $(1 - g(01)[01])(2\mu_1 - 1) = 0$;
since $\mu_1 > \frac{1}{2}$, this yields $g(01)[01] = g(10)[01] = 1$.
Therefore, the optimal mechanism can be fully described as:
$$g(11)[11] = g(01)[01] = g(10)[01] = g(00)[11] = 1.$$
Despite the fact that agent 1 is more likely to have a high quality idea, the optimal mechanism
includes agent 2’s idea with a higher probability (indeed with probability 1). In this case,
we say the optimal mechanism displays favoritism by more often including an agent who is
ex-ante inferior for the principal’s own aims. In Theorem 2 we will show that in a large set
of environments every optimal mechanism displays favoritism.
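A quick numerical check of this derivation (Python; the prior $\mu_1 = 0.8$ and the helper names are our illustrative choices) confirms both incentive compatibility, in the sense of report-independent inclusion probabilities, and the favoritism conclusion:

```python
# The derived optimal mechanism with mu_1 > mu_2 = 1/2.
mu = (0.8, 0.5)  # hypothetical priors
g = {(1, 1): {(1, 1): 1.0},
     (1, 0): {(0, 1): 1.0},
     (0, 1): {(0, 1): 1.0},
     (0, 0): {(1, 1): 1.0}}

def prior(theta):
    return ((mu[0] if theta[0] else 1 - mu[0]) *
            (mu[1] if theta[1] else 1 - mu[1]))

# Probability agent i's idea is included given a report of type t;
# incentive compatibility requires this not to depend on t.
def p_report(i, t):
    num = den = 0.0
    for theta, actions in g.items():
        if theta[i] == t:
            den += prior(theta)
            num += prior(theta) * sum(q for a, q in actions.items() if a[i] == 1)
    return num / den

print([(p_report(i, 0), p_report(i, 1)) for i in range(2)])
# Agent 1 is included with the same probability regardless of his report,
# agent 2 with probability 1: the ex-ante inferior agent is favored.
```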
3. Model
A single principal faces $n$ agents. Each agent $i$ randomly draws a privately known type
from the finite set $\Theta_i \subset \mathbb{R}$ according to distribution $\pi_i \in \Delta\Theta_i$. Draws are assumed to be
independent across agents. The total state space is $\Theta$ where:
$$\Theta = \prod_{i=1}^{n} \Theta_i.$$
The principal and agents share a common prior $\pi \in \Delta\Theta$ defined by:
$$\pi(\theta) = \prod_{i=1}^{n} \pi_i(\theta_i).$$
Without loss of generality we can suppose that $\pi$ has full support, since each $\Theta_i$ can be edited
to remove types that never occur.
The principal decides whether or not to take an action on each of the $n$ agents. Hence the
principal’s action space $A$ is given by:
$$A = \{0, 1\}^n,$$
and a generic action is denoted $a \in A$, where the $i$th component of $a$, $a_i = 1$, is interpreted
as taking the action on agent $i$ and $a_i = 0$ is interpreted as not taking the action on agent $i$.
Each agent always desires the action specific to them and cares about nothing else. Therefore, agent $i$ seeks to maximize the probability that $a_i = 1$. Our model would be equivalent
if the agents always preferred to not have the action taken, or even if some agents preferred
the action and others did not. We restrict ourselves to the case that all agents desire the
action only for ease of exposition.
The principal cares about the output of each agent, which depends on the agent’s type and
whether the action is taken. Agents who are not acted on always provide output $t_p \in \mathbb{R}$,
while agents who are acted on provide output equal to their type. In other words, agent $i$
has output function $X_i : \Theta_i \times \{0, 1\} \to \mathbb{R}$ where
$$X_i(t, 1) = t \quad \text{and} \quad X_i(t, 0) = t_p.$$
We refer to types above $t_p$ as "high quality types", and types below $t_p$ as "low quality types".
Taking the action on a high (low) quality type increases (decreases) the principal’s payoff
relative to not taking the action on that agent.
The principal’s payoff is a function of the agent-specific outputs that is increasing and
exhibits complementarities across agents. Formally, we define $V : \Theta \times A \to \mathbb{R}$, where $V(\theta, a)$
gives the principal’s payoff for taking action $a$ in state $\theta$. Moreover, for some strictly increasing
and strictly supermodular⁴ function $W : \prod_{i=1}^{n} (\Theta_i \cup \{t_p\}) \to \mathbb{R}$ we have:
$$V(\theta, a) = W(X_1(\theta_1, a_1), \dots, X_n(\theta_n, a_n)).$$
We normalize the principal’s payoff to always be non-negative.
To avoid trivial cases, we assume that for each agent $i$ there exist $t, t' \in \Theta_i$ such that
$t < t_p < t'$. If an agent fails this condition, then the principal should either always or never
take the action on that agent, and the problem can be rewritten with the remaining $n - 1$
agents.
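To make the primitives concrete, here is a minimal sketch (Python) of a two-agent specification satisfying the assumptions above. The particular choices, $t_p = 1$, the type set $\{0.5, 2.0\}$ for each agent, and $W(x_1, x_2) = x_1 x_2$ on positive outputs, are our hypothetical example, not part of the model:

```python
from itertools import product

# Hypothetical primitives for n = 2: the product function W is strictly
# increasing and strictly supermodular on positive outputs.
t_p = 1.0                # output of an agent who is not acted on
Theta = [0.5, 2.0]       # one low type (< t_p) and one high type (> t_p)

def X(t, a):             # agent output: own type if acted on, t_p if not
    return t if a == 1 else t_p

def W(x1, x2):
    return x1 * x2

def V(theta, a):
    return W(X(theta[0], a[0]), X(theta[1], a[1]))

# Spot-check the supermodularity inequality on the induced output grid.
outputs = sorted(set(Theta) | {t_p})
for x, xp in product(product(outputs, repeat=2), repeat=2):
    join = tuple(map(max, x, xp))
    meet = tuple(map(min, x, xp))
    assert W(*join) + W(*meet) >= W(*x) + W(*xp)

print(V((2.0, 2.0), (1, 1)))   # both high types acted on -> output 4.0
```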
3.1. The Design Problem
Since the revelation principle applies, the principal commits to a (direct) mechanism which
assigns a lottery over $A$ for every possible profile of types in $\Theta$. In other words, a mechanism
is a function $g : \Theta \to \Delta A$, and we let $g(\theta)[a]$ denote the probability the mechanism assigns
to action $a$ at state $\theta$. Let $G$ be the space of all mechanisms. For any $g \in G$ we let $g_i(\theta)$
denote the marginal probability that the action is taken on agent $i$ in state $\theta$. Formally:
$$g_i(\theta) = \sum_{a \in A \,:\, a_i = 1} g(\theta)[a].$$
Given mechanism $g$, let $p_i^g : \Theta_i \to [0, 1]$ define the probability that the action is taken on
agent $i$ conditional on his report. In other words:
$$p_i^g(t) = \sum_{\theta \in \Theta \,:\, \theta_i = t} g_i(\theta)\, \frac{\pi(\theta)}{\pi_i(t)}.$$

⁴Strict supermodularity can be defined as follows: for any $X, X' \in \prod_{i=1}^{n} (\Theta_i \cup \{t_p\})$,
$W(X \vee X') + W(X \wedge X') \geq W(X) + W(X')$, with the inequality strict whenever $X \not\leq X'$ and $X' \not\leq X$.
We now provide a useful and tight characterization of incentive compatibility.

Lemma 1. A mechanism $g$ is incentive compatible if and only if for every agent $i$ and all
$t, t' \in \Theta_i$ we have $p_i^g(t') = p_i^g(t)$.

Every agent wants to maximize the probability the action is taken on them. Hence an
agent will only report type $t$ if it maximizes $p_i^g(t)$. Therefore, in order for truthful reporting
to be incentive compatible it must be that $p_i^g(t)$ gives the same value for every $t \in \Theta_i$.
Conversely, if $p_i^g(t)$ gives the same value for every $t$, then the agent is indifferent between all
reports, which makes truthful reporting incentive compatible.
Since there are no transfers, the worst case for each agent is never receiving the action.
Therefore, individual rationality constraints do not bind and can be safely disregarded. We
can now write the principal’s design problem as follows:
$$\max_{g \in G} \sum_{\theta, a} g(\theta)[a]\, \pi(\theta)\, V(\theta, a),$$
such that for all $i = 1, \dots, n$ and for all $t, t' \in \Theta_i$:
$$p_i^g(t) = p_i^g(t').$$
4. Coordination
In this section we discuss how and why an optimal mechanism coordinates the agents. We
introduce two notions of coordination: across states and within states. Coordinating the
agents across states means taking the action more often on an agent of high (low) quality
when the other agents are also high (low) quality. Put another way, coordination across
states desires taking the action on agents of similar quality together. Coordination within
states requires that at a given state the mechanism correlates its "successes" together, where
success means taking the action on high quality agents and not taking the action on low
quality agents. Across state coordination gives restrictions on the marginal probability of
the action being taken on an agent ($g_i(\theta)$), while within state coordination characterizes the
mechanism given the values of $g_i(\theta)$. Theorem 1 asserts there always exists an optimal
mechanism that exhibits both types of coordination.
We start by defining, for each agent, a partial order on the states that will be used in the
definition of coordination across states.
Definition 1. For any $i \in \{1, \dots, n\}$ let $\succeq_i$ be a binary relation on $\Theta$ such that $\theta' \succeq_i \theta$
whenever $\theta'_i = \theta_i$ and for all $j \neq i$ one of two things is true: (i) $\theta'_j = \theta_j$ or (ii) $\theta_j < t_p < \theta'_j$.
Moreover, we will write $\theta' \succ_i \theta$ to mean $\theta' \succeq_i \theta$ and $\theta' \neq \theta$.

In words, $\theta' \succeq_i \theta$ says $\theta$ and $\theta'$ differ only in that some low quality agents in state $\theta$ become
high quality agents in state $\theta'$, and agent $i$ was not one of the changed agents. Stated another
way, $\theta' \succeq_i \theta$ requires that agent $i$ is paired with more high quality agents in $\theta'$ and all else
is equal.
Definition 2. We say a mechanism $g$ coordinates the agents across states if for any agent
$i$ and $\theta, \theta' \in \Theta$ with $g_i(\theta) > 0$:
(1) $\theta_i < t_p$ and $\theta \succ_i \theta'$ implies $g_i(\theta') = 1$;
(2) $\theta_i > t_p$ and $\theta' \succ_i \theta$ implies $g_i(\theta') = 1$.

Coordination across states says if a high quality agent is acted on at state $\theta$, then he
must be acted on with probability 1 for all states $\theta' \succ_i \theta$. Similarly, if a low quality agent
is acted on at state $\theta$, then he must be acted on with probability 1 for all states $\theta'$ with $\theta \succ_i \theta'$.
Hence coordination across states captures the idea that high (low) quality agents should be
acted on more often when paired with more high (low) quality agents, all else being equal.
Additionally, coordinating the agents across states implies that $g_i(\theta)$ is $\succeq_i$-decreasing among
states where $\theta_i$ is low quality and $\succeq_i$-increasing among states where $\theta_i$ is high quality.

We now formalize our notion of coordination within states.
Definition 3. Given a mechanism $g$, define $S_i^g : \Theta \to [0, 1]$ for all $i = 1, \dots, n$ as
$$S_i^g(\theta) = g_i(\theta) \ \text{if } \theta_i > t_p, \qquad S_i^g(\theta) = 1 - g_i(\theta) \ \text{if } \theta_i < t_p,$$
and by setting $S_0^g(\theta) = 0$, $S_{n+1}^g(\theta) = 1$. We say $g$ coordinates the agents within states if,
given any $\theta \in \Theta$, after reordering the agents so that $S_1^g(\theta) \leq \dots \leq S_n^g(\theta)$, we have for every
$i = 0, \dots, n$:
$$g(\theta)\left[\mathbf{1}_{\theta_1 < t_p}, \dots, \mathbf{1}_{\theta_i < t_p}, \mathbf{1}_{\theta_{i+1} > t_p}, \dots, \mathbf{1}_{\theta_n > t_p}\right] = S_{i+1}^g(\theta) - S_i^g(\theta),$$
where $\mathbf{1}_{\theta_i < t_p}$ is an indicator function that takes value 1 if $\theta_i < t_p$ and value 0 otherwise;
similarly for $\mathbf{1}_{\theta_{i+1} > t_p}$.
Within state coordination fixes a state as well as the probability of correctly responding to
each agent’s type, and then, as much as possible, groups the correct responses together and
groups the incorrect responses together. Correctly responding means taking the action on
high quality agents and not taking the action on low quality agents. The function $S_i^g(\theta)$ gives
the probability mechanism $g$ correctly responds to agent $i$’s type at state $\theta$. After reordering
the agents so that $S_1^g(\theta) \leq \dots \leq S_n^g(\theta)$, the maximum probability mechanism $g$ can correctly
respond to every agent is given by $S_1^g(\theta)$. Hence coordination within states requires correctly
responding to every agent with probability $S_1^g(\theta)$, which makes the maximum probability
of correctly responding to every agent except agent 1 equal to $S_2^g(\theta) - S_1^g(\theta)$. Iterating this
reasoning yields the above definition.
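The chaining construction just described can be sketched in code (Python; the state, the correct-response marginals $S_i$, and the helper name are hypothetical illustrations, and the agents are assumed already ordered so that $S_1 \leq \dots \leq S_n$):

```python
# Within-state coordination at a fixed state theta (Definition 3): given
# each agent's probability S_i of a "correct" response, build the joint
# action distribution that chains the correct responses together.
t_p = 0.0  # neutral output level; types above t_p are high quality

def within_state(theta, S):
    """theta: tuple of types; S: correct-response probabilities,
    assumed sorted so that S[0] <= ... <= S[-1]."""
    S_ext = [0.0] + list(S) + [1.0]
    n = len(theta)
    dist = {}
    for i in range(n + 1):
        # First i agents get the incorrect response, the rest correct.
        a = tuple((1 if theta[j] < t_p else 0) if j < i
                  else (1 if theta[j] > t_p else 0) for j in range(n))
        prob = S_ext[i + 1] - S_ext[i]
        if prob > 0:
            dist[a] = dist.get(a, 0.0) + prob
    return dist

theta = (1.0, -1.0, 1.0)   # high, low, high (hypothetical state)
S = (0.2, 0.5, 0.9)        # hypothetical correct-response marginals
dist = within_state(theta, S)
print(dist)

# The construction reproduces each agent's marginal S_i:
for j in range(3):
    correct = sum(p for a, p in dist.items()
                  if a[j] == (1 if theta[j] > t_p else 0))
    print(j, correct)
```

The resulting lottery puts weight only on "nested" actions: with probability $S_1$ every agent is treated correctly, and each increment $S_{i+1} - S_i$ peels off one more incorrect response, exactly as in the iteration above.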
For fixed values of $g_i(\theta)$, within state coordination fully characterizes the mechanism.
Since the values of $g_i(\theta)$ are all that matter for incentive compatibility, within state coordination works orthogonally to any incentive concerns and represents an unconstrained
expression of the complementarities in the principal’s payoff function. Across state coordination, on the other hand, imposes restrictions on the values of $g_i(\theta)$ and must take incentive
concerns into account.
Theorem 1. There always exists an optimal mechanism which coordinates the agents across
states and all optimal mechanisms coordinate agents within states.
The optimality of coordination flows from the complementarities of the principal’s payoff
function. Complementarities ensure that the value of correctly responding to an agent’s type
is higher when the overall quality of the agents is higher. Similarly, the cost of incorrectly
responding to an agent’s type is lowest when the overall quality is low. In order to satisfy
incentive compatibility, a mechanism must make both incorrect and correct responses. Hence,
the optimal mechanism prioritizes correctly responding to the agents’ types when overall
quality is high and does the opposite when the overall quality is low, which is captured in the
definition of coordination across states.
Theorem 1 only ensures that coordination across states occurs in some optimal mechanism
and not in every optimal mechanism. Optimal mechanisms that fail to coordinate across
states exist because the principal is indifferent about an agent’s type when not taking
the action on that agent. If we perturbed $X_i(t, 0)$ to be strictly increasing in $t$, then every
optimal mechanism would be a coordination mechanism; indeed, proving this fact using a
similar perturbation is the main step in the proof of Theorem 1.
We now turn to the proof of Theorem 1. We focus here on proving coordination across
states; the proof for within state coordination is straightforward and can be found in the
appendix.
Proof of Theorem 1. We start by perturbing the principal’s payoff by changing the agent-specific production from $\{X_i\}_{i=1}^n$ to $\{\tilde{X}_i\}_{i=1}^n$ where, for $\varepsilon > 0$,
$$\tilde{X}_i(t, 0) = t_p + \varepsilon \ \text{ if } t > t_p, \qquad \tilde{X}_i(t, a_i) = X_i(t, a_i) \ \text{ otherwise.}$$
The idea here is that high quality types are always better for the principal than low quality
types. We extend $W$ to $\tilde{W} : \prod_{i=1}^n (\Theta_i \cup \{t_p, t_p + \varepsilon\}) \to \mathbb{R}$ as follows. Let $f : \prod_{i=1}^n (\Theta_i \cup \{t_p, t_p + \varepsilon\}) \to \prod_{i=1}^n (\Theta_i \cup \{t_p\})$ be defined such that $f(\tilde{X}) = X$ if and only if (1) $\tilde{X}_i \neq t_p + \varepsilon$ implies
$\tilde{X}_i = X_i$ and (2) $\tilde{X}_i = t_p + \varepsilon$ implies $X_i = t_p$. Now let $h : \prod_{i=1}^n (\Theta_i \cup \{t_p, t_p + \varepsilon\}) \to \{0, 1, \dots, n\}$ be such that $h(\tilde{X})$ is the number of coordinates of $\tilde{X}$ which equal $t_p + \varepsilon$. For any
$\tilde{X} \in \prod_{i=1}^n (\Theta_i \cup \{t_p, t_p + \varepsilon\})$ we set
$$\tilde{W}(\tilde{X}) = W(f(\tilde{X})) + \varepsilon\, h(\tilde{X})^2.$$
We choose $\varepsilon$ small enough so that $t > t_p$ implies $t > t_p + \varepsilon$ and to ensure that $\tilde{W}$ is strictly
increasing. It is easy to check that $\tilde{W}$ is strictly supermodular. And we define an associated
$\tilde{V} : \Theta \times A \to \mathbb{R}$, as
$$\tilde{V}(\theta, a) = \tilde{W}(\tilde{X}_1(\theta_1, a_1), \dots, \tilde{X}_n(\theta_n, a_n)).$$
The perturbed optimal design problem can be written as
$$\max_{g \in G} \sum_{\theta, a} \pi(\theta)\, g(\theta)[a]\, \tilde{V}(\theta, a)$$
such that for every $i$ and any $t, t' \in \Theta_i$:
$$p_i^g(t) = p_i^g(t').$$
The constraint set of this problem does not depend on $\varepsilon$ and it is easy to see that the
objective function is jointly continuous in both $\varepsilon$ and $g$. Hence the theorem of the
maximum applies, the optimal solution is upper hemicontinuous in $\varepsilon$, and $\varepsilon = 0$ recovers
the original unperturbed problem. Next we will show that for any $\varepsilon > 0$, in the perturbed
problem every optimal mechanism coordinates the agents across states, which combined with
the upper hemicontinuity will finish the proof.
Suppose for contradiction that $g$ is an optimal mechanism of the $\varepsilon$-perturbed problem
that does not coordinate the agents across states. There must exist an $i$ and $\theta, \theta' \in \Theta$ such
that $g_i(\theta) > 0$, $g_i(\theta') \neq 1$ and either (i) $\theta_i < t_p$, $\theta \succ_i \theta'$ or (ii) $\theta_i > t_p$, $\theta' \succ_i \theta$. We will
treat only case (i); the proof for case (ii) is similar. Case (i) implies there exist $a, a'$ with
$g(\theta)[a] > 0$ and $g(\theta')[a'] > 0$ as well as $a_i = 1$ and $a'_i = 0$. Now define $\hat{a}, \hat{a}' \in A$ by setting,
for every $j = 1, \dots, n$:
$$\hat{a}_j = a'_j \text{ and } \hat{a}'_j = a_j, \quad \text{if } \theta_j = \theta'_j \text{ and } \tilde{X}_j(\theta_j, a_j) < \tilde{X}_j(\theta'_j, a'_j),$$
$$\hat{a}_j = a_j \text{ and } \hat{a}'_j = a'_j, \quad \text{otherwise.}$$
Since $\theta \succ_i \theta'$, the first option activates at least at $j = i$ and the second option will be
active whenever $\theta_j \neq \theta'_j$. Let $\tilde{X}(\theta, a)$ stand for the vector $(\tilde{X}_1(\theta_1, a_1), \dots, \tilde{X}_n(\theta_n, a_n))$.

Claim 2. $\tilde{X}(\theta, a) \vee \tilde{X}(\theta', a') = \tilde{X}(\theta, \hat{a})$, $\tilde{X}(\theta, a) \wedge \tilde{X}(\theta', a') = \tilde{X}(\theta', \hat{a}')$, and
$\tilde{X}(\theta', a') \not\leq \tilde{X}(\theta, a)$, $\tilde{X}(\theta, a) \not\leq \tilde{X}(\theta', a')$.

The proof of this claim can be found in the appendix, but notice that showing $\tilde{X}(\theta', a') \not\leq \tilde{X}(\theta, a)$ uses the $\varepsilon$ perturbation, and this is the only place in this proof where that perturbation
is necessary. Now choose any $\delta > 0$ such that
$$\delta < \min\{g(\theta)[a]\, \pi(\theta),\ g(\theta')[a']\, \pi(\theta')\}.$$
We construct an alternative mechanism $g'$ from $g$ in the following way:
$$g'(\tilde{\theta})[\tilde{a}] = g(\tilde{\theta})[\tilde{a}] \quad \text{when } \tilde{\theta} \notin \{\theta, \theta'\} \text{ or } \tilde{a} \notin \{a, a', \hat{a}, \hat{a}'\},$$
$$g'(\theta)[a] = g(\theta)[a] - \frac{\delta}{\pi(\theta)} \quad \text{and} \quad g'(\theta)[\hat{a}] = g(\theta)[\hat{a}] + \frac{\delta}{\pi(\theta)},$$
$$g'(\theta')[a'] = g(\theta')[a'] - \frac{\delta}{\pi(\theta')} \quad \text{and} \quad g'(\theta')[\hat{a}'] = g(\theta')[\hat{a}'] + \frac{\delta}{\pi(\theta')}.$$
By Lemma 1, and using the fact that $g$ is incentive compatible, incentive compatibility of $g'$
will follow if for all $i = 1, \dots, n$ and for all $t \in \Theta_i$ we have $p_i^{g'}(t) = p_i^g(t)$. By construction this
equality holds immediately for any agent $j$ and type $t \notin \{\theta_j, \theta'_j\}$. Similarly, if $t \in \{\theta_j, \theta'_j\}$ and
$\theta_j \neq \theta'_j$, then by construction $\hat{a}'_j = a'_j$ and $\hat{a}_j = a_j$, which makes $p_j^{g'}(t) = p_j^g(t)$ immediate.
Now suppose that $t = \theta_j = \theta'_j$ and $\hat{a}'_j = a_j = 1$ and $\hat{a}_j = a'_j = 0$:
$$p_j^{g'}(t) - p_j^g(t) = \frac{\delta}{\pi(\theta')}\, \frac{\pi(\theta')}{\pi_j(\theta_j)} - \frac{\delta}{\pi(\theta)}\, \frac{\pi(\theta)}{\pi_j(\theta_j)} = 0,$$
as desired. The remaining case where $t = \theta_j = \theta'_j$ and $\hat{a}'_j = a_j = 0$ and $\hat{a}_j = a'_j = 1$ works
similarly.
Now we show that $g'$ delivers a strictly higher payoff than $g$. The difference in the principal’s payoff between $g'$ and $g$ is given by:
$$\frac{\delta}{\pi(\theta)}\, \pi(\theta) \left[\tilde{W}(\tilde{X}(\theta, \hat{a})) - \tilde{W}(\tilde{X}(\theta, a))\right] + \frac{\delta}{\pi(\theta')}\, \pi(\theta') \left[\tilde{W}(\tilde{X}(\theta', \hat{a}')) - \tilde{W}(\tilde{X}(\theta', a'))\right]$$
$$= \delta \left[\tilde{W}(\tilde{X}(\theta, \hat{a})) - \tilde{W}(\tilde{X}(\theta, a)) + \tilde{W}(\tilde{X}(\theta', \hat{a}')) - \tilde{W}(\tilde{X}(\theta', a'))\right].$$
It suffices to show that
$$\tilde{W}(\tilde{X}(\theta, a)) + \tilde{W}(\tilde{X}(\theta', a')) < \tilde{W}(\tilde{X}(\theta, \hat{a})) + \tilde{W}(\tilde{X}(\theta', \hat{a}')),$$
and the above inequality is a direct consequence of combining Claim 2 with the strict supermodularity of $\tilde{W}$. This contradicts the optimality of $g$.
5. Favoritism

In a robust variety of environments, the optimal mechanism will appear to display favoritism among the agents. Our notion of favoritism occurs when the optimal mechanism acts on an ex-ante inferior agent more often than an ex-ante superior agent. We say an agent is ex-ante inferior if he has an unambiguously worse distribution of type quality, and the principal's payoff function treats the agents symmetrically.[5] A mechanism which exhibits favoritism takes the action more often on an agent who, by any objective measure, is inferior for the principal's own aims. In practice, such behavior is sometimes attributed to biases of the principal such as nepotism; a key insight of our model is that favoritism can arise purely as a consequence of optimal design. Theorem 3 establishes that favoritism is a robust concept and occurs in a wide variety of environments.
We first formally define what it means for the principal's payoff function to be symmetric.

Definition 4. We say that the principal's payoff function is symmetric if for all i, j: Θ_i = Θ_j, X_i = X_j, and W(x) = W(x′) for all x, x′ ∈ ∏_i (Θ_i ∪ {t_p}) where x is a permutation of x′.
We now define a restriction on the principal's utility function required for the theorem; the role of this mild assumption is explained below.

Definition 5. We say a symmetric principal payoff function obeys condition F if there exists θ ∈ Θ such that θ_1 is a high type, θ_2 is a low type, and, setting θ′ = (θ_2, θ_1, θ_{−12}), we have

    (1/2)·V(θ, a) + (1/2)·V(θ′, a) > (1/2)·V(θ, â) + (1/2)·V(θ′, â)

where a = (1, 0, 0, ..., 0) and â = (1, 1, 0, ..., 0).
Since we focus on a symmetric principal payoff function, the use of the first two agents, as opposed to two arbitrary agents, in the above definition is purely for convenience. To understand condition F, suppose the principal has already decided not to act on agents 3, ..., n, and the principal knows exactly one of agents 1 and 2 is a high type, while putting equal chance on each agent being the high type. Condition F says that, faced with this scenario, the principal prefers taking the action on exactly one of the first two agents over taking the action on both. Acting on both agents ensures acting on the low type agent, which, under complementarities, damages the benefit to the principal of taking the action on a high type agent. When complementarities are sufficiently high, as measured by condition F, this damage ensures that taking the action on both agents is the inferior action.

[5] To get a clean comparison, it is important to keep some features symmetric. In this comparative static, therefore, we keep principal utility symmetric and consider what happens when agents have different ex-ante probability distributions over types.
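Condition F can be checked concretely. The sketch below uses a toy specification of our own, not taken from the paper: two agents, t_p = 1, outcome equal to the type when the action is taken and t_p otherwise, and the multiplicative (hence strictly supermodular on positive outcomes) aggregator W(x_1, x_2) = x_1·x_2.

```python
# Toy check of condition F (hypothetical specification, not from the paper):
# two agents, t_p = 1, high type 2.0, low type 0.5,
# W(x1, x2) = x1 * x2 as a strictly supermodular aggregator.
T_P = 1.0

def outcome(theta_i, a_i):
    # Acting yields the agent's type as the outcome; not acting yields t_p.
    return theta_i if a_i == 1 else T_P

def V(theta, a):
    x1, x2 = (outcome(t, ai) for t, ai in zip(theta, a))
    return x1 * x2

theta = (2.0, 0.5)       # agent 1 high, agent 2 low
theta_sw = (0.5, 2.0)    # the swapped state theta' of definition 5
a, a_hat = (1, 0), (1, 1)

lhs = 0.5 * V(theta, a) + 0.5 * V(theta_sw, a)
rhs = 0.5 * V(theta, a_hat) + 0.5 * V(theta_sw, a_hat)
print(lhs, rhs)   # 1.25 1.0 -> condition F holds for this specification
```

Acting on exactly one agent wins here because acting on the low type drags the product down; this is the complementarity damage described above.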
To define our notion of ex-ante superiority, consider that when μ_i first order stochastically dominates μ_j, agent i has a superior distribution of type quality. With a symmetric principal payoff function, the distribution of type quality unambiguously measures which agent is superior for the principal's aims in an ex-ante sense. Using this unambiguous measure and condition F we can now state the main theorem of this section.
Theorem 3. Choose any symmetric principal payoff function that satisfies condition F. Then there exists an open set of priors Ω ⊂ ∏_i Δ(Θ_i) and two agents i, j ∈ {1, ..., n} such that for all μ ∈ Ω, μ_i first order stochastically dominates μ_j, but every optimal mechanism g takes the action on agent j strictly more often than agent i, i.e.

    Σ_{θ∈Θ} g_j(θ)·μ(θ) > Σ_{θ∈Θ} g_i(θ)·μ(θ).
Proof. See Appendix.
To understand how favoritism works, consider the case with two agents who can each be either a high type (1) or a low type (0), so that Θ = {0,1} × {0,1}. Suppose we could choose the open set of priors Ω specified in theorem 3 to ensure every optimal mechanism uses a complete form of coordination, always either succeeding or failing on both agents. Moreover, in line with coordination across states, succeeding (failing) on both agents occurs with probability one when both agents are high (low) types. Recall that succeeding on an agent means taking the action on a high type and not taking the action on a low type; failing means the opposite. The existence of an open set of priors with these properties is assisted by sufficiently high complementarities, specifically when condition F holds.

For a prior μ, we will abuse notation and let μ_i indicate the probability that agent i is type 1. Using the complete coordination properties ensured by our choice of Ω, we can rewrite the incentive compatibility condition for each agent as follows:

    μ_2 + g(10)[10]·(1 − μ_2) = (1 − μ_2) + (1 − g(01)[01])·μ_2
    μ_1 + g(01)[01]·(1 − μ_1) = (1 − μ_1) + (1 − g(10)[10])·μ_1.
The left and right hand sides of the first equation give p_1^g(1) and p_1^g(0) respectively. Equating p_1^g(1) and p_1^g(0) characterizes incentive compatibility for agent 1, as established by lemma 1. The second equation similarly characterizes incentive compatibility for agent 2. We now have two equations and two unknowns, which we solve to get

    g(10)[10] = (1 − μ_1)(2μ_2 − 1) / (μ_1 + μ_2 − 1)
    g(01)[01] = (2μ_1 − 1)(1 − μ_2) / (μ_1 + μ_2 − 1).

The set of priors Ω can be chosen so that μ_i > 1/2 for both agents, which ensures that both
expressions above are positive. Using the properties of complete coordination, we know
agent 1 is chosen more often than agent 2 if and only if

    g(10)[10] + g(01)[10] ≥ g(01)[01] + g(10)[01],

which, using g(01)[01] + g(01)[10] = 1 and g(10)[10] + g(10)[01] = 1, reduces to

    g(10)[10] ≥ g(01)[01].

And that inequality holds if and only if μ_2 ≥ μ_1. Hence, agent 1 is chosen more often exactly when he is less likely to be a high type. Therefore the optimal mechanism displays favoritism.
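The closed-form solutions above can be checked mechanically. The following sketch plugs an illustrative prior of our own choosing into the two incentive constraints and confirms both the solution and the favoritism comparison:

```python
# Verify the solved mixing probabilities against the two incentive
# constraints, for an illustrative prior (our numbers) with mu_1, mu_2 > 1/2.
mu1, mu2 = 0.8, 0.6   # mu_i = probability agent i is the high type

g10 = (1 - mu1) * (2 * mu2 - 1) / (mu1 + mu2 - 1)   # g(10)[10]
g01 = (2 * mu1 - 1) * (1 - mu2) / (mu1 + mu2 - 1)   # g(01)[01]

# Agent 1: probability of being acted on when reporting high vs. low
assert abs((mu2 + g10 * (1 - mu2)) - ((1 - mu2) + (1 - g01) * mu2)) < 1e-12
# Agent 2: likewise
assert abs((mu1 + g01 * (1 - mu1)) - ((1 - mu1) + (1 - g10) * mu1)) < 1e-12

# Favoritism: mu2 < mu1, and indeed g(01)[01] > g(10)[10], so the ex-ante
# inferior agent 2 is chosen more often.
print(round(g10, 3), round(g01, 3))   # 0.1 0.6
```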
The first step of the full proof of theorem 3 establishes favoritism in the two agent, two type case along the lines discussed above. The full argument is slightly more subtle in that the set of priors Ω ensures the optimal mechanism belongs to a handful of cases, of which the complete coordination described above is only one. Every one of these cases has the property that the ex-ante inferior agent is acted on more often. In the general case with n agents and any number of types, we focus on priors in which all but two agents are a low type with probability close to one, and the remaining two agents put almost probability 1 on being one of two types. These priors are very close to the two agent, two type case, which allows us to finish the proof by establishing that the set of optimal mechanisms moves upper hemi-continuously and applying a continuity argument.

Theorem 3 makes no claim on the degree of superiority of the better agent. In the example in section 2, favoritism could occur even when the superior agent had as much as a fifty percent higher chance of being a high quality type. By strengthening condition F, we can generalize this extreme form of favoritism to a variety of environments.
Definition 6. For any M > 0, we say a symmetric principal payoff function satisfies condition F*(M) if there exists θ ∈ Θ such that θ_1 is a high type, θ_2 is a low type, and for any a_1, a_2 ∈ {0,1} such that (a_1, a_2) ≠ (1, 0) we have

    V(θ, (1, 0, 0, ..., 0)) > M·V(θ, (a_1, a_2, 0, ..., 0)).
Clearly condition F*(M) for any M > 2 implies condition F. And for high enough M, we can show favoritism occurs even when the superior agent has as much as a fifty percent higher chance of being a high quality type, as captured in the following theorem.

Theorem 4. For any γ ∈ (0, 1/2), there exists an M > 0 such that for all symmetric principal payoff functions that satisfy condition F*(M), there exists an open set Ω ⊂ ∏_i Δ(Θ_i) and two agents i, j ∈ {1, ..., n} such that for every μ ∈ Ω, μ_i first order stochastically dominates μ_j, and agent i has a γ higher probability of being a high type than agent j, that is

    Σ_{t>t_p} μ_i(t) ≥ (1 + γ)·Σ_{t>t_p} μ_j(t).

However, every optimal mechanism g takes the action on agent j strictly more often than agent i, in other words:

    Σ_{θ∈Θ} g_j(θ)·μ(θ) > Σ_{θ∈Θ} g_i(θ)·μ(θ).

Proof. See Appendix.
6. The Two Type Symmetric Case

To get a closer look at coordination, we specialize to a case that allows for a complete characterization of the optimal mechanism. We suppose the principal's payoff function is symmetric and each agent has two types, a high quality and a low quality type, that occur with equal probability. In this setting, coordination takes the stark form of spectacular successes and failures. A spectacular success occurs when the principal takes the action on every high quality agent and avoids the action on every low quality agent; a spectacular failure does the opposite. The optimal mechanism will perform a spectacular success (failure) action when the number of high (low) quality agents is above a fixed threshold. When neither the number of high nor low quality agents surpasses the threshold, the mechanism either takes the action on everyone or on no one.

We now formally specify the setting. The state space and prior are given by:

    Θ = {0, 1}^n and μ(θ) = 1/2^n for all θ ∈ Θ.
Each agent can be type 0 or type 1, which correspond to a low quality type and a high quality type respectively. We also assume a symmetric principal payoff function, which we defined in section 5.

Given the strong symmetry of the setting, we will be able to restrict attention to anonymous mechanisms, in which permuting the roles of the agents leads to the same mechanism. A permutation of the agents is any bijection σ : {1, ..., n} → {1, ..., n}, and for any θ ∈ Θ, a ∈ A we will abuse notation and use σ(θ), σ(a) to mean:

    σ(θ) = (θ_{σ(1)}, θ_{σ(2)}, ..., θ_{σ(n)}) ∈ Θ
    σ(a) = (a_{σ(1)}, a_{σ(2)}, ..., a_{σ(n)}) ∈ A.

Definition 7. We say a mechanism g is anonymous if for any permutation of the agents σ, and for all θ ∈ Θ, a ∈ A:

    g(θ)[a] = g(σ(θ))[σ(a)].
Proposition 1. There exists an optimal mechanism that is anonymous and coordinates
across and within the states.
Proof. See appendix.
The proof of this proposition begins by perturbing the principal's payoff function precisely as in the proof of theorem 1. Using the symmetry in the setting, an even mixture of all possible permutations of any optimal mechanism of the perturbed setting will result in a mechanism that is both optimal and anonymous. From the proof of theorem 1, we know all optimal mechanisms in the perturbed setting coordinate both across and within states, and in particular the anonymous optimal mechanism we just constructed must do so as well. We can then take the perturbations to zero while constructing a sequence of such mechanisms, and by the upper hemi-continuity ensured by the theorem of the maximum, the limit of these mechanisms must be optimal. And that limit mechanism will be anonymous and coordinate across and within states, which finishes the proof.

Proposition 1 allows us to restrict attention to anonymous coordination mechanisms. In an anonymous mechanism, the definition of coordination within states can be characterized as follows.
Remark 1. An anonymous mechanism coordinates within states if and only if θ_i = θ_j and a_i ≠ a_j implies g(θ)[a] = 0.

Any anonymous mechanism that coordinates within states can be described as a function f : {0, 1, ..., n} → Δ({0,1} × {0,1}), where f(m)[a_0, a_1] gives the probability of taking action a_0 on all type 0 agents and action a_1 on all type 1 agents in every state with m high types. At state m, such a mechanism has only four possible actions: (1) spectacular failure (f(m)[1,0]), (2) spectacular success (f(m)[0,1]), (3) take the action on everyone (f(m)[1,1]), and (4) take the action on no one (f(m)[0,0]). We can similarly characterize coordination across states.
Remark 2. An anonymous mechanism that coordinates within states, characterized by f : {0, 1, ..., n} → Δ({0,1} × {0,1}), coordinates across states if and only if

(1) f(m)[1, a_1] > 0 implies f(m′)[1, 0] + f(m′)[1, 1] = 1 for all m′ < m.
(2) f(m)[a_0, 1] > 0 implies f(m′)[0, 1] + f(m′)[1, 1] = 1 for all m′ > m.

From this remark we can see coordination across states requires that if a spectacular success (failure) occurs with positive probability at m, then it must occur with probability 1 for all m′ > (<) m. In other words, there will be two cutoffs m_H ≥ m_L: above m_H spectacular success occurs and below m_L spectacular failure occurs. Coordination across states also rules out that a mechanism picks everyone at one state and picks no one at another state; therefore for every state between m_L and m_H the mechanism does one of picking everyone or picking no one. In the appendix we show that the incentive constraints require m_L = n − m_H. Therefore, we can work with a single cutoff m* ≥ ⌈n/2⌉ by setting m_H = m* and m_L = n − m*. Moreover, the incentive constraints also require that if m* = n/2, then f(m*)[1, 0] = f(m*)[0, 1] = 1/2.
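Remark 2's two conditions are easy to check mechanically. The sketch below encodes them for a mechanism represented as a map from m to a distribution over the four actions; the representation and the example values are our own:

```python
# Encode Remark 2's conditions for an anonymous mechanism that coordinates
# within states: f[m][(a0, a1)] is the probability, at any state with m high
# types, of action a0 on all low types and a1 on all high types.
def action_dist(a0, a1):
    d = {(0, 0): 0.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.0}
    d[(a0, a1)] = 1.0
    return d

def coordinates_across_states(f, n):
    for m in range(n + 1):
        for mp in range(n + 1):
            # (1): acting on low types at m forces it at every m' < m
            if f[m][(1, 0)] + f[m][(1, 1)] > 0 and mp < m:
                if f[mp][(1, 0)] + f[mp][(1, 1)] != 1:
                    return False
            # (2): acting on high types at m forces it at every m' > m
            if f[m][(0, 1)] + f[m][(1, 1)] > 0 and mp > m:
                if f[mp][(0, 1)] + f[mp][(1, 1)] != 1:
                    return False
    return True

# A cutoff mechanism: success at m = 3, failure at m = 0, no one in between.
f = {3: action_dist(0, 1), 2: action_dist(0, 0),
     1: action_dist(0, 0), 0: action_dist(1, 0)}
print(coordinates_across_states(f, 3))   # True

# A success at m = 2 without one at m = 3 violates condition (2).
f_bad = dict(f)
f_bad[2], f_bad[3] = action_dist(0, 1), action_dist(0, 0)
print(coordinates_across_states(f_bad, 3))   # False
```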
We now turn to deriving the value of m*, for which we must first introduce some notation. Let 0⃗ and 1⃗ be vectors of n 0's and 1's respectively, both of which can be interpreted either as an action profile or a type profile. Any type profile θ can be interpreted as an action profile, and action θ is a spectacular success at state θ. Similarly, action 1⃗ − θ is a spectacular failure at state θ.
We now define two conditions that will characterize m*. We say condition 1 holds at m ≥ ⌈n/2⌉ if for every θ with exactly m high types:

    (1/2)·V(θ, θ) + (1/2)·V(1⃗ − θ, θ) ≥ (1/2)·V(θ, 0⃗) + (1/2)·V(1⃗ − θ, 0⃗).    (C1)

We say condition 2 holds at m ≥ ⌈n/2⌉ if for every θ with exactly m high types:

    (1/2)·V(θ, θ) + (1/2)·V(1⃗ − θ, θ) ≥ (1/2)·V(θ, 1⃗) + (1/2)·V(1⃗ − θ, 1⃗).    (C2)

Condition 1 says that a spectacular success at state θ and a spectacular failure at state 1⃗ − θ yield a higher payoff than taking no one in both those states. In any optimal mechanism where the middle states use the take-no-one action, condition 1 holding at m* and failing at m* − 1 is a necessary condition. Similarly, if the middle states use the take-everyone action, condition 2 holding at m* and failing at m* − 1 is a necessary condition. And it turns out these necessary conditions are sufficient to uniquely pin down the optimal m*. Using the supermodularity of W, one can show that if either condition holds at m it holds for all m′ ≥ m. Moreover, one can also show that if condition 1 ever fails condition 2 never does, and vice versa. These properties imply there is at most one m* and one choice of action for the middle states that satisfy the necessary conditions for optimality.
The above discussion glosses over two possible edge cases: conditions 1 and 2 both always hold, or one of the conditions always fails. If conditions 1 and 2 always hold, then the optimal mechanism has no middle region and only spectacular successes and failures are used; in other words m* = ⌈n/2⌉. If one of the conditions always fails, the optimal mechanism everywhere uses the take-everyone or take-no-one action, depending on which condition fails; we can think of this case as m* = n + 1.

Summarizing our discussion above, the following proposition provides a complete description of the optimal mechanism.
Proposition 2. Let m* ≥ ⌈n/2⌉ be the lowest value for which both C1 and C2 hold. If C1 and C2 both never hold, set m* = n + 1. Let a* = (0, 0) if C1 fails at m* − 1 and a* = (1, 1) otherwise. The following anonymous mechanism f* is optimal:

(1) If m ≥ m*, m ≠ n/2: f*(m)[0, 1] = 1.
(2) For all m ≤ n − m*, m ≠ n/2: f*(m)[1, 0] = 1.
(3) For all n − m* < m < m*: f*(m)[a*] = 1.
(4) If m* = n/2, then f*(m*)[0, 1] = f*(m*)[1, 0] = 1/2.
As a concrete demonstration of proposition 2, consider the case with three agents where conditions 1 and 2 hold at states with three high types (m = 3) and condition 1 fails at states with two high types (m = 2). Using the proposition, the optimal mechanism is described by the following table.

    m   List of States                  Action
    3   (1,1,1)                         Spectacular Success
    2   (1,1,0), (1,0,1), (0,1,1)       Take action on no one
    1   (1,0,0), (0,1,0), (0,0,1)       Take action on no one
    0   (0,0,0)                         Spectacular Failure
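The case analysis of proposition 2 can be written out directly. The sketch below builds f* from n, m* and a*, taking m* and a* as given (in the text they come from checking conditions C1 and C2, which this sketch does not model), and reproduces the three-agent example:

```python
# Build the anonymous mechanism f* of proposition 2 from n, m_star, a_star.
# f*(m) is the action pair (a0, a1) taken with probability 1, except in the
# knife-edge case m_star == n/2, where a fair coin is used.
def f_star(n, m_star, a_star):
    f = {}
    for m in range(n + 1):
        if m >= m_star and m != n / 2:
            f[m] = (0, 1)                        # spectacular success
        elif m <= n - m_star and m != n / 2:
            f[m] = (1, 0)                        # spectacular failure
        elif n - m_star < m < m_star:
            f[m] = a_star                        # middle action
        else:                                    # m == m_star == n/2
            f[m] = {(0, 1): 0.5, (1, 0): 0.5}    # fair coin
    return f

# The three-agent example: m_star = 3 and a_star = (0, 0).
table = f_star(3, 3, (0, 0))
print(table)   # {0: (1, 0), 1: (0, 0), 2: (0, 0), 3: (0, 1)}
```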
7. Extensions

We now consider two extensions. In section 7.1 we generalize the agents' preferences to increase the fit of the examples while maintaining the core ideas of the baseline setting. In section 7.2 we show how the principal can make the optimal mechanism robust to collusion by the agents while achieving virtually the same payoff.

7.1. Threshold Agent Preferences

To improve the fit of the examples, we consider an extension that allows the agents' preferences to vary with their type while maintaining the core ideas of the baseline setting. We suppose each agent desires the action if and only if their type is above a fixed threshold, where the threshold is below t_p. Agents with types below the threshold will willingly reveal they are unsuited for the action, and the optimal mechanism will never act on them. Agents with types above the threshold desire the action, causing the design problem for these agents to resemble our baseline setting, and the main results of favoritism and coordination will hold with slight modifications.
The altered version of the agents' preferences can naturally arise in several of our examples. For instance, in the case of experts giving recommendations of uncertain quality, the expert may wish a sufficiently poor recommendation to not be taken up by the principal. In the case of assembling a team, an agent may not wish to join a team for which they are sufficiently unsuited.

Formally, we modify our baseline setting by supposing there exists t_a ∈ R with t_a < t_p such that agents want to be acted on if their type is weakly above t_a and do not want to be acted on if their type is below t_a. For simplicity we have used the same cutoff t_a for each agent, but the analysis would proceed identically if each agent had a different cutoff. An agent with type t < t_a will be called passive, agents with types t_a ≤ t < t_p will be called low types, and agents with types t ≥ t_p will be called high types.

As a non-triviality condition we assume each agent has a positive probability of being a low type and a high type. The optimal mechanism will never act on an agent who is never a high type, and an agent who is never a low type has no conflict of interest with the principal. Either way, the mechanism for such an agent can be trivially solved and the problem reformulated with the remaining n − 1 agents.
The following lemma characterizes incentive compatibility in the modified setting.

Lemma 2. In any optimal mechanism g,

(1) θ_i < t_a implies g_i(θ) = 0.
(2) If t, t′ ∈ Θ_i with t, t′ ≥ t_a, then p_i^g(t) = p_i^g(t′).

Moreover, any mechanism that obeys (1) and (2) is incentive compatible.

Proof. See Appendix.

Part (1) of the lemma follows since passive agents will willingly reveal they are unsuited for the action. Moreover, never acting on passive agents does not interfere with the incentives of low or high type agents, since those agents want to be acted on. Part (2) of the lemma characterizes incentive compatibility for any mechanism that obeys (1), using the same logic as in lemma 1.
The favoritism result (theorem 3) holds in the modified setting as written. The only caveat is that the meaning of low/high quality, as used in definition 5, must be modified to the meaning given in this section. The favoritism result only asserts the existence of an open set of priors where favoritism occurs, and this open set can put an arbitrarily small probability on any agent being a passive type. Hence a continuity argument can be used to extend the proof of theorem 3 to the modified setting in this section. The continuity argument involves showing that the optimal mechanism moves upper hemi-continuously, which can be done using arguments similar to those found in appendix B.1.

To extend the coordination result (theorem 1), we need only modify the definition of the partial order ⪰*_i as follows.
Definition 8. For any i ∈ {1, ..., n}, let ⪰*_i be a binary relation on Θ such that θ ⪰*_i θ′ whenever θ_i = θ′_i and for all j ≠ i one of three things is true: (i) θ_j = θ′_j, (ii) θ′_j < t_p ≤ θ_j, or (iii) θ_j < t_a ≤ θ′_j < t_p. We say θ ≻*_i θ′ if θ ⪰*_i θ′ and θ ≠ θ′.
With the modified partial order, the definition of coordination and the statement of theorem 1 go through as written in section 4. As before, the logic behind coordination is that states higher in the ⪰*_i ordering have higher outcomes for agents other than i and, due to complementarities, are more suitable for correctly responding to agent i's type. High types have an outcome of at least t_p, passive types always have an outcome of exactly t_p (see lemma 2), and low types have an outcome of at most t_p. Hence, changing an agent other than i into a high type, or changing from a low type into a passive type, should result in a state higher in the ⪰*_i ordering, which is precisely the definition given above.
7.2. Coalition-Proof Mechanisms

One concern with the optimal direct mechanism is that agents may be able to collude to improve their outcomes.[6] In the example in section 2, the agents could achieve the best possible outcome if both agents report they are the high type, as the mechanism will select both of them with probability one. Surprisingly, it turns out that we can amend the optimal direct mechanism to make it immune to collusion, in other words coalition proof, with virtually the same payoff to the principal.

[6] This is not unusual and is a concern in related work, e.g., Jackson and Sonnenschein (2007).
The coalition proof mechanism we propose is inspired by techniques found in the virtual implementation literature (e.g., Abreu & Matsushima (1992)), where with arbitrarily high probability the optimal direct mechanism is implemented. However, given our assumption that the agents' preferences are independent of their type, our environment fails to satisfy Abreu-Matsushima measurability,[7] and the results in that work cannot be directly applied to our setting. Interestingly, we are able to add ideas from the classic implementation literature, notably integer games (Maskin (1999)), to get around this problem. Moreover, our concept of a coalition proof mechanism requires immunity from coalition deviations, a requirement not found in the aforementioned implementation literatures.

Our coalition proof mechanism asks agents to report not only their own type, but also a guess for each other agent's report and a non-negative integer. With probability 1 − ε the optimal direct mechanism is implemented, which only relies on each agent's report about their own type. With probability ε the mechanism plays a betting game that rewards agents for correctly guessing the other agents' reports; rewarding means taking the action on that agent with some probability, where a higher probability corresponds to a higher reward. However, only the agent who reports the highest integer will have their bet count; all other agents will receive no reward. Agents can opt out of the betting by reporting an integer of 0, in which case they receive a fixed reward. The bets will be calibrated so that the agents break even on every possible bet when everyone is reporting truthfully. When collusion occurs, the agents can make a strict gain from betting, and hence all agents will desire to report the highest integer, which destroys any possible equilibrium.
More formally, for any ε > 0 we will define a coalition proof mechanism g^ε : Θ^n × Z^n_+ → Δ(A), where each agent reports a guess on the total type profile as well as a non-negative integer. Given report (T, z) ∈ Θ^n × Z^n_+, T is the combined report of each agent about the total type profile and z is a vector of integers. We let T_i ∈ Θ and T_ij ∈ Θ_j denote agent i's report on the total type profile and on agent j's type respectively. We also let z_i denote the integer reported by agent i. Let diag(T) = (T_11, T_22, ..., T_nn) be the diagonal vector of each agent's report about their own type. We shall write g^ε(T, z)[a] to indicate the probability the mechanism takes action a after receiving report (T, z). And g^ε_i(T, z) will be the marginal probability the action is taken on agent i following report (T, z).

[7] Since each type of every player assesses all Anscombe-Aumann acts in the same manner, the limit partition of the set of types for each player is the entire set of types (i.e., in the notation of Abreu and Matsushima (1992), Θ⁰_i = Θ^∞_i = Θ_i). This implies that only constant social choice functions are Abreu-Matsushima measurable, and so the designer can simply choose all players with some exogenous probability.
To define g^ε we suppose that with probability 1 − ε the optimal direct mechanism is played using diag(T). With probability ε a betting mechanism g^b : Θ^n × Z^n_+ → Δ(A) is used, where the values of g^b_i(T, z) are given by

    g^b_i(T, z) =  min_{θ_{−i} ∈ Θ_{−i}} μ_{−i}(θ_{−i})                                   if z_i = 0,
                   (1/μ_{−i}(T_{i,−i})) · min_{θ_{−i} ∈ Θ_{−i}} μ_{−i}(θ_{−i})            if z_i > max_{j≠i} z_j and diag(T) = T_i,
                   0                                                                      otherwise.
The above definition only pins down the marginal probability of acting on each agent. To fully define g^b we suppose the probability each agent is acted on is independent across agents, so that:

    g^b(T, z)[a] = ( ∏_{i : a_i = 1} g^b_i(T, z) ) · ( ∏_{i : a_i = 0} (1 − g^b_i(T, z)) ).
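The break-even calibration of the betting rewards can be sanity-checked numerically. The sketch below uses a toy two-type opponent marginal of our own choosing: under truthful play, the expected reward of every bet equals the opt-out reward.

```python
# Break-even check for the betting rewards, using a toy two-agent,
# two-type setting with a single opponent marginal (our own numbers).
THETAS = [0, 1]
mu = {0: 0.3, 1: 0.7}          # mu_{-i}: marginal over the opponent's type

def reward_if_win(guess):
    # Winning with a given guess pays (min_theta mu(theta)) / mu(guess),
    # which is always a valid probability (<= 1).
    return min(mu.values()) / mu[guess]

opt_out = min(mu.values())      # fixed reward for opting out with z_i = 0
for guess in THETAS:
    # Against a truthful opponent, the guess is correct with probability
    # mu[guess], so every bet breaks even in expectation.
    expected = mu[guess] * reward_if_win(guess)
    assert abs(expected - opt_out) < 1e-12
print("every truthful bet breaks even at reward", opt_out)
```

When some colluding opponent misreports, the realized distribution of reports no longer matches μ, so some bet gains strictly in expectation; that is the wedge the integer game exploits.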
As " goes to zero mechanism, g " implements the optimal direct mechanism with probability
approaching 1, and we will see in theorem 5 that g " is be immune to coalitions for any
" 2 (0; 1). And since the payo¤ to the principal is bounded, the payo¤ g " converges to the
optimal payo¤ as " ! 0. In this sense, the principal can achieve virtually the optimal payo¤
in a coalition proof mechanism.
A strategy of agent i is a function σ_i : Θ_i → Δ(Θ × Z), where σ_i(θ_i)[t, z_i] gives the probability that an agent i of type θ_i will jointly report profile t ∈ Θ and integer z_i ∈ Z. We will find it useful to formally define what it means for an agent to use a truthful reporting strategy.

Definition 9. For each agent i we say σ_i : Θ_i → Δ(Θ × Z) is a truthful reporting strategy if it has the property that

    Σ_{t ∈ Θ : t_i = θ_i} σ_i(θ_i)[t, 0] = 1 for all θ_i ∈ Θ_i.

Truthful reporting requires the agent to make no bets (z_i = 0) and to report their own type truthfully. Truthful reporting places no restriction on how an agent reports the other agents' types; however, when z_i = 0 such reports are irrelevant.
Our notion of collusion is a coalition of agents agreeing to change their strategies in a way that strictly benefits everyone in the group. We require that the coalition not be vulnerable to internal defections; in other words, coalition members must be best responding to the deviation the coalition is performing. Members outside the coalition are unaware of the deviation and report truthfully.

Definition 10. We say a mechanism is coalition proof if there does not exist a pair (I, {σ_i}^n_{i=1}), consisting of a subset of the agents I ⊆ {1, ..., n} and a strategy σ_i for each agent, such that the following three conditions hold:

(1) Every agent j ∉ I follows a truthful reporting strategy.
(2) For all i ∈ I, σ_i is a best response to {σ_i}^n_{i=1}.
(3) For all i ∈ I, agent i receives a strictly higher payoff than if all agents truthfully reported.

Notice that any coalition proof mechanism necessarily has truthful reporting by all agents as an equilibrium. Hence coalition proofness can be seen as a strengthening of the incentive compatibility requirement.
Theorem 5. For any ε ∈ (0, 1), mechanism g^ε is coalition proof, and for any δ > 0 there exists an ε small enough such that the principal's payoff under g^ε is within δ of the optimal direct mechanism.

Proof. See Appendix.
References

Abreu, Dilip, & Matsushima, Hitoshi. 1992. Virtual Implementation in Iteratively Undominated Strategies: Complete Information. Econometrica, 60(5), 993-1008.

Ben-Porath, Elchanan, Dekel, Eddie, & Lipman, Barton L. 2014. Optimal Allocation with Costly Verification. American Economic Review, 104(12), 3779-3813.

Guo, Yingni, & Hörner, Johannes. 2015. Dynamic Mechanisms without Money. Working Paper.

Jackson, Matthew O., & Sonnenschein, Hugo F. 2007. Overcoming Incentive Constraints by Linking Decisions. Econometrica, 75(1), 241-257.

Li, Jin, Matouschek, Niko, & Powell, Michael. 2015. The Burden of Past Promises. Working Paper.

Lipnowski, Elliot, & Ramos, Joao. 2015. Repeated Delegation. Working Paper.

Maskin, Eric. 1999. Nash Equilibrium and Welfare Optimality. Review of Economic Studies, 66(1), 23-38.
Appendix A. Proof of Theorem 1

In this section we complete the proof of theorem 1 by proving coordination within states and showing claim 2.
A.1. Coordination within states

Proof. Choose an optimal mechanism g and fix any θ ∈ Θ. Relative to θ, define binary operators ◯, △ on A, where a◯a′, a△a′ ∈ A and

    (a◯a′)_i = max{a_i, a′_i} if θ_i > t_p and (a◯a′)_i = min{a_i, a′_i} if θ_i < t_p,
    (a△a′)_i = min{a_i, a′_i} if θ_i > t_p and (a△a′)_i = max{a_i, a′_i} if θ_i < t_p.

Now suppose that g(θ)[a], g(θ)[a′] > 0. Then it must be that either X(θ, a) ≥ X(θ, a′) or X(θ, a) ≤ X(θ, a′). If not, we can modify the mechanism to take actions a, a′ with ε less probability and take actions a◯a′, a△a′ with ε higher probability. By construction we would have

    X(θ, a◯a′) = X(θ, a) ∨ X(θ, a′)
    X(θ, a△a′) = X(θ, a) ∧ X(θ, a′)

and by definition of strict supermodularity the proposed modified mechanism would deliver a strictly higher payoff, and would be incentive compatible since it leaves the values of g_i(θ) unchanged.
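The two displayed identities, together with the preservation of the marginals g_i(θ), can be verified exhaustively in a small example. The sketch below uses an illustrative outcome map of our own (outcome equal to the type when acted on, t_p otherwise) with three agents:

```python
import itertools

# Check that the type-dependent operators deliver the coordinatewise
# max/min of outcomes, for the toy outcome map X_i(theta_i, 1) = theta_i,
# X_i(theta_i, 0) = t_p (our illustrative specification).
T_P = 1.0
THETA = (2.0, 0.4, 1.7)   # arbitrary state with no type equal to t_p

def X(theta, a):
    return tuple(t if ai == 1 else T_P for t, ai in zip(theta, a))

def join(theta, a, b):     # (a O a')_i in the text
    return tuple(max(x, y) if t > T_P else min(x, y)
                 for t, x, y in zip(theta, a, b))

def meet(theta, a, b):     # (a Δ a')_i in the text
    return tuple(min(x, y) if t > T_P else max(x, y)
                 for t, x, y in zip(theta, a, b))

for a in itertools.product((0, 1), repeat=3):
    for b in itertools.product((0, 1), repeat=3):
        assert X(THETA, join(THETA, a, b)) == tuple(
            max(x, y) for x, y in zip(X(THETA, a), X(THETA, b)))
        assert X(THETA, meet(THETA, a, b)) == tuple(
            min(x, y) for x, y in zip(X(THETA, a), X(THETA, b)))
        # Marginals are preserved: a_i + a'_i == join_i + meet_i
        assert all(x + y == j + m for x, y, j, m in
                   zip(a, b, join(THETA, a, b), meet(THETA, a, b)))
print("join/meet identities verified")
```

The last assertion is what keeps the modified mechanism incentive compatible: shifting probability from {a, a′} to {a◯a′, a△a′} leaves every agent's action probability at θ unchanged.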
Define S_i^g as in the definition of coordination within states. Reorder the agents so that S_1^g(θ) ≤ ... ≤ S_n^g(θ). Take any a with g(θ)[a] > 0 and a_1 = 1_{θ_1>t_p}, and suppose for contradiction that a_i ≠ 1_{θ_i>t_p} for some i. Since S_i^g(θ) ≥ S_1^g(θ), there must exist a′ ∈ A such that g(θ)[a′] > 0, a′_1 ≠ 1_{θ_1>t_p} and a′_i = 1_{θ_i>t_p}, but this creates a contradiction since it will imply that X(θ, a) ≰ X(θ, a′) and X(θ, a′) ≰ X(θ, a). Hence any a with g(θ)[a] > 0 and a_1 = 1_{θ_1>t_p} has a_i = 1_{θ_i>t_p} for all i, which will imply that

    g(θ)[(1_{θ_1>t_p}, ..., 1_{θ_n>t_p})] = S_1^g(θ).

Now take any a with g(θ)[a] > 0, a_1 ≠ 1_{θ_1>t_p} and a_2 = 1_{θ_2>t_p}. By similar reasoning to the above we can show that for all i > 2 it must be that a_i = 1_{θ_i>t_p}, which implies that

    g(θ)[(1_{θ_1<t_p}, 1_{θ_2>t_p}, ..., 1_{θ_n>t_p})] = S_2^g(θ) − S_1^g(θ).

And we continue on like this for i = 1, ..., n and establish that

    g(θ)[(1_{θ_1<t_p}, ..., 1_{θ_i<t_p}, 1_{θ_{i+1}>t_p}, ..., 1_{θ_n>t_p})] = S_{i+1}^g(θ) − S_i^g(θ)

as desired.
A.2. Proof of Claim 2

Proof. We start with θ, θ′ such that θ ≻*_i θ′ and X̃_i(θ_i, 1) = X̃_i(θ′_i, 1) < t_p. We also know there exist a, a′ with g(θ)[a] > 0 and g(θ′)[a′] > 0, as well as a_i = 1 and a′_i = 0. Now we construct â, â′ as follows:

    â′_j = a_j and â_j = a′_j    if θ_j = θ′_j and X̃_j(θ_j, a_j) < X̃_j(θ′_j, a′_j),
    â′_j = a′_j and â_j = a_j    otherwise.

We want to show that X̃(θ, a) ∨ X̃(θ′, a′) = X̃(θ, â), X̃(θ, a) ∧ X̃(θ′, a′) = X̃(θ′, â′), and X̃(θ′, a′) ≰ X̃(θ, a), X̃(θ, a) ≰ X̃(θ′, a′). Notice that the definition of â, â′ has the consequence that for all j,

    X̃_j(θ_j, â_j) = max{X̃_j(θ_j, a_j), X̃_j(θ′_j, a′_j)}
    X̃_j(θ′_j, â′_j) = min{X̃_j(θ_j, a_j), X̃_j(θ′_j, a′_j)}

from which X̃(θ, a) ∨ X̃(θ′, a′) = X̃(θ, â) and X̃(θ, a) ∧ X̃(θ′, a′) = X̃(θ′, â′) follow. Now all we have to show is that X̃(θ′, a′) ≰ X̃(θ, a) and X̃(θ, a) ≰ X̃(θ′, a′). Notice that at i we have:

    X̃_i(θ_i, a_i) = X̃_i(θ_i, 1) < t_p and X̃_i(θ′_i, a′_i) = X̃_i(θ′_i, 0) = t_p,

hence X̃(θ′, a′) ≰ X̃(θ, a) must hold. And by definition, θ ≻*_i θ′ means there exists some j such that θ_j is a "good" type and θ′_j is a "bad" type. Using how we defined X̃ we can get that:

    X̃_j(θ_j, a_j) ≥ t_p + ε > t_p ≥ X̃_j(θ′_j, a′_j),

hence it must follow that X̃(θ, a) ≰ X̃(θ′, a′) as desired.
Appendix B. Proof of Theorem 3

Let the overall probability that agent $i$ is acted on by mechanism $g$ be given by $p_i^g$, where
\[
p_i^g = \sum_{\theta} \mu(\theta)\, g_i(\theta).
\]
Let $\mu_i \gg \mu_j$ mean that $\mu_i$ strictly first-order stochastically dominates $\mu_j$. We start by stating a useful lemma.

Lemma 3. Let $\Theta = \{0,1\} \times \{0,1\}$ and let $(V, W)$ be a symmetric principal payoff function such that $0 < t_p < 1$ and
\[
\tfrac{1}{2} V\big((10), (10)\big) + \tfrac{1}{2} V\big((01), (10)\big) > \tfrac{1}{2} V\big((10), (11)\big) + \tfrac{1}{2} V\big((01), (11)\big).
\]
Then there exists $\mu \in \Delta\{0,1\} \times \Delta\{0,1\}$ with $\mu_1(1) > \mu_2(1) > 0$ such that every optimal mechanism $g$ has the property that $p_2^g > p_1^g$.

Proof. A sketch of the proof of this lemma was presented in the text of Section 5 above; the full proof can be found in Section B.2 below. $\qed$
To prove Theorem 3, we start with a symmetric principal payoff function $(\Theta, V, W)$ with $n$ agents which satisfies Condition F. Suppose for contradiction that no open set as in the statement of Theorem 3 exists. Let $\hat\theta, \hat\theta'$ be the states mentioned in Condition F, which implies $\hat\theta_1 > t_p > \hat\theta_2$. By symmetry of the principal's payoff function we know $\Theta_i = \Theta_j$ for all $i, j$. Hence we can define $U \subseteq \prod_i \Delta\Theta_i$ such that $\mu \in U$ if and only if $i > 2$ implies $\mu_i(\hat\theta_2) = 1$ and $i = 1, 2$ implies $\mu_i(\hat\theta_1) + \mu_i(\hat\theta_2) = 1$. Now choose any $\hat\mu \in U$ such that $\hat\mu_1(\hat\theta_1) > \hat\mu_2(\hat\theta_1)$. Using how $U$ is constructed, it is easy to see that if $E$ is an open set containing $\hat\mu$, then there exists $E' \subseteq E$ such that every $\mu \in E'$ has full support and $\mu_1 \gg \mu_2$. By assumption, for any $\mu \in E'$ there must exist an optimal mechanism $g$ with $p_1^g \geq p_2^g$. Therefore, we can construct a sequence $\mu^m \in \prod_i \Delta\Theta_i$ such that $\mu_1^m \gg \mu_2^m$, $\mu^m \to \hat\mu$, and at every point along the sequence there exists an optimal mechanism $g^m$ with $p_1^{g^m} \geq p_2^{g^m}$. Since the set of mechanisms is a compact space, the sequence $g^m$ has a convergent subsequence; let its limit be $g$. Since $p_i^g$ is a continuous function of $\mu$ and $g$, it follows that $p_1^g \geq p_2^g$. We show in Proposition 3 below that the set of optimal mechanisms is upper hemicontinuous in the domain $\prod_i \Delta\Theta_i$, implying that $g$ is optimal at $\hat\mu$. We have shown that for every $\hat\mu \in U$ with $\hat\mu_1(\hat\theta_1) > \hat\mu_2(\hat\theta_1)$ there exists an optimal mechanism with the property that $p_1^g \geq p_2^g$.
We now show how to reduce environments with priors in $U$ to a space with two agents and two types in order to obtain a contradiction of Lemma 3. Define $\tilde\Theta$ as
\[
\tilde\Theta = \{\theta_1^L, \theta_1^H\} \times \{\theta_2^L, \theta_2^H\}
\]
and define $\tilde V : \tilde\Theta \times \{0,1\}^2 \to \mathbb{R}$ as
\[
\tilde V(\theta, a) = V\big((\theta, \theta_3^L, \ldots, \theta_n^L),\, (a, 0, 0, \ldots, 0)\big).
\]
For any mechanism $g : \Theta \to \Delta\{0,1\}^n$ define the reduction of $g$ to agents 1, 2 as $\tilde g : \tilde\Theta \to \Delta\{0,1\}^2$, where
\[
\tilde g(\theta)[a] = g\big(\theta, \theta_3^L, \ldots, \theta_n^L\big)\big[a, 0, 0, \ldots, 0\big].
\]
It is clear that for any $\mu \in U$ every optimal mechanism in the original $n$-agent space never acts on any agent $i$ for $i > 2$. Hence, for every $\tilde\mu \in \Delta\tilde\Theta_1 \times \Delta\tilde\Theta_2$ choose a $\mu \in U$ such that $\mu_1 = \tilde\mu_1$ and $\mu_2 = \tilde\mu_2$; then for every mechanism $g$ that is optimal at $(\mu, V, \Theta)$ the reduction of $g$ to agents 1, 2 is optimal at $(\tilde\mu, \tilde V, \tilde\Theta)$. It follows that for every $\tilde\mu$, if $\tilde\mu_1(\theta^H) > \tilde\mu_2(\theta^H)$, then there exists an optimal mechanism at $(\tilde\mu, \tilde\Theta, \tilde V)$ with $p_1^g \geq p_2^g$. However, $(\tilde\Theta, \tilde V)$ obeys all the conditions in the statement of Lemma 3 if we remap $\theta_i^H = 1$ and $\theta_i^L = 0$; the inequality stated in Lemma 3 holds as a consequence of Condition F holding on $V$ and the way we chose $\hat\theta_1, \hat\theta_2$. Hence it must be that there exists $\tilde\mu$ with $\tilde\mu_1(1) > \tilde\mu_2(1)$ and $p_2^g > p_1^g$ for every optimal mechanism, which is a contradiction.
B.1. Proof that the optimal mechanism is upper hemicontinuous in the domain $\prod_i \Delta\Theta_i$.

Proposition 3. Fix $(\Theta, V, W)$. Let $\Gamma : \prod_{i=1}^n \Delta\Theta_i \rightrightarrows G$ be the correspondence where $\Gamma(\mu)$ gives the set of optimal mechanisms at prior $\mu$. Then $\Gamma$ is non-empty and compact-valued as well as upper hemicontinuous.

To prove this proposition we only need to prove that the set of incentive compatible mechanisms moves continuously in the domain $\prod_{i=1}^n \Delta\Theta_i$; the result will then follow by the theorem of the maximum. The set of equalities that characterizes incentive compatibility moves continuously as $\mu$ changes continuously, and it is well known that the set of solutions to a continuously changing system of equalities and inequalities moves upper hemicontinuously as long as the solution set is always non-empty. Hence the set of incentive compatible mechanisms moves upper hemicontinuously. It remains to prove lower hemicontinuity, which is done in the following lemma.
Lemma 4. Fix $\Theta$. Let $\mu^m, \mu \in \prod_{i=1}^n \Delta\Theta_i$ with $\mu^m \to \mu$. Then for every mechanism $g$ that is incentive compatible at $(\Theta, \mu)$, there exists a sequence of mechanisms $g^m$ such that $g^m \to g$ and $g^m$ is incentive compatible at $(\Theta, \mu^m)$ for every $m$.

Proof. For $i = 0, \ldots, n$ we will recursively define $\{g^{m,i}\}_{m=1}^{\infty}$ so that for each $i$, $g^{m,i}$ is incentive compatible for agents $1, \ldots, i$ and $g^{m,i} \to g$ as $m \to \infty$. Hence $g^{m,n}$ will produce the sequence demanded by the statement of the lemma. Without loss of generality we will assume there is no agent $i$ on which $g$ always takes the action. If any such agent existed, we could simply set our sequences $\{g^{m,i}\}_{m=1}^{\infty}$ to always take the action on that agent and reformulate the problem on the remaining $n-1$ agents. This reformulation works because incentive compatibility for agent $i$ only depends on the action specific to agent $i$.

As an initial step set $g^{m,0} = g$ for all $m$. Now assume $\{g^{m,i-1}\}_{m=1}^{\infty}$ has been appropriately defined; we will show how to define $\{g^{m,i}\}_{m=1}^{\infty}$. For each $t \in \Theta_i$ pick $(\theta^t, a^t)$ such that $\mu(\theta^t) > 0$, $g(\theta^t)[a^t] > 0$, $a_i^t = 0$ and $\theta_i^t = t$. If no such $(\theta^t, a^t)$ exists, that must mean that $p_i^g(t) = 1$, and since $g$ is incentive compatible it must be that mechanism $g$ always takes the action on agent $i$, which we ruled out earlier. Hence such a $(\theta^t, a^t)$ must always exist.

For any natural number $m$, define $\varepsilon^m : \Theta_i \to \mathbb{R}$ as:
\[
\varepsilon^m(t) = \max_{t' \in \Theta_i} \left\{ \sum_{\theta_{-i} \in \Theta_{-i}} g_i^{m,i-1}(\theta_{-i}, t')\, \mu_{-i}^m(\theta_{-i}) \right\} - \sum_{\theta_{-i} \in \Theta_{-i}} g_i^{m,i-1}(\theta_{-i}, t)\, \mu_{-i}^m(\theta_{-i}).
\]
The value $\varepsilon^m$ measures how far $g^{m,i-1}$ is from being incentive compatible for agent $i$ at prior $\mu^m$. In the limit $g^{m,i-1} \to g$, which is fully incentive compatible at $\mu$, hence $\varepsilon^m(t) \to 0$ as $m \to \infty$ for fixed $t$. Since $g^{m,i-1} \to g$, $g(\theta^t)[a^t] > 0$ and $\varepsilon^m(t) \to 0$, we can choose $M$ large enough so that for all $m \geq M$ we have
\[
\frac{\varepsilon^m(t)}{\mu_{-i}^m(\theta_{-i}^t)} < g^{m,i-1}(\theta^t)[a^t] \quad \text{for all } t \in \Theta_i.
\]
For each $m \geq M$ define $g^{m,i}$ as follows:
\[
g^{m,i}(\theta^t)[a^t] = g^{m,i-1}(\theta^t)[a^t] - \frac{\varepsilon^m(t)}{\mu_{-i}^m(\theta_{-i}^t)} \quad \text{for all } t \in \Theta_i,
\]
\[
g^{m,i}(\theta^t)[a_{-i}^t, 1] = g^{m,i-1}(\theta^t)[a_{-i}^t, 1] + \frac{\varepsilon^m(t)}{\mu_{-i}^m(\theta_{-i}^t)} \quad \text{for all } t \in \Theta_i,
\]
\[
g^{m,i}(\theta)[a] = g^{m,i-1}(\theta)[a] \quad \text{elsewhere.}
\]
It is clear that $g^{m,i}$ and $g^{m,i-1}$ converge together as $m \to \infty$, hence $g^{m,i} \to g$ must also hold. All that remains to be shown is that $g^{m,i}$ is incentive compatible at $(\Theta, \mu^m)$. For each $m$, $i$ and $t \in \Theta_i$ we have
\[
p_i^{g^{m,i}}(t) = \sum_{\theta_{-i} \in \Theta_{-i}} g_i^{m,i}(\theta_{-i}, t)\, \mu_{-i}^m(\theta_{-i})
= \sum_{\theta_{-i} \in \Theta_{-i}} g_i^{m,i-1}(\theta_{-i}, t)\, \mu_{-i}^m(\theta_{-i}) + \varepsilon^m(t)
= \max_{t' \in \Theta_i} \left\{ \sum_{\theta_{-i} \in \Theta_{-i}} g_i^{m,i-1}(\theta_{-i}, t')\, \mu_{-i}^m(\theta_{-i}) \right\},
\]
which implies that for a fixed $m$ and $t, t' \in \Theta_i$ we have $p_i^{g^{m,i}}(t) = p_i^{g^{m,i}}(t')$, and hence $g^{m,i}$ is incentive compatible at $(\Theta, \mu^m)$ for all $m \geq M$. We have constructed a sequence $g^{m,i}$ from $M$ to $\infty$ with the desired properties, and now we simply renumber the sequence to go from $1$ to $\infty$ and we are done. $\qed$
B.2. Proof of Lemma 3.

We start with two agents, each with two types, and a state space of
\[
\Theta = \{0,1\} \times \{0,1\}.
\]
Without loss of generality we set $t_p = \frac{1}{2}$. We have a symmetric principal payoff function $(W, V)$ that has the property that
\[
\tfrac{1}{2} V\big((10), (10)\big) + \tfrac{1}{2} V\big((01), (10)\big) > \tfrac{1}{2} V\big((10), (11)\big) + \tfrac{1}{2} V\big((01), (11)\big).
\]
We will write $W(x_1 x_2)$ to mean $W((x_1, x_2))$; so, for example, if the mechanism does not act on either agent, the payoff to the principal is $W(\frac{1}{2}\frac{1}{2})$. The above inequality can be rewritten as
\[
\tfrac{1}{2}\, W(1\tfrac{1}{2}) + \tfrac{1}{2}\, W(\tfrac{1}{2}0) > W(10).
\]
The symmetry of the principal payoff function and $t_p = \frac{1}{2}$ will be assumed throughout this section. In order to prove Lemma 3 we will first prove a number of useful lemmas.
Lemma 5. Suppose the prior $\mu$ is such that $\mu_1, \mu_2 > \frac{1}{2}$ and suppose that for $i = 1, 2$
\[
\frac{W(1\frac12) - W(10)}{W(1\frac12) - W(\frac12\frac12)} < \frac{\mu_i}{1 - \mu_i};
\]
then in any optimal mechanism the following three statements hold:

(1) $g_1(11) = g_2(11)$

(2) $g(11)[11] = 1$

(3) $g(00)[11] = 1$

Proof. A consequence of the inequality assumed in the statement of the lemma is that for each $i \in \{1, 2\}$
\[
\frac{W(1\frac12) - W(10)}{W(11) - W(1\frac12)} < \frac{\mu_i}{1 - \mu_i}. \tag{2}
\]
To see why, note that by supermodularity and symmetry
\[
W(11) - W(1\tfrac12) \geq W(\tfrac12 1) - W(\tfrac12\tfrac12) = W(1\tfrac12) - W(\tfrac12\tfrac12),
\]
and hence
\[
\frac{W(1\frac12) - W(10)}{W(11) - W(1\frac12)} \leq \frac{W(1\frac12) - W(10)}{W(1\frac12) - W(\frac12\frac12)}.
\]
Part (1): Let $g$ be an optimal mechanism and suppose for contradiction that $g_1(11) > g_2(11)$. Since $p_2^g < 1$, incentive compatibility will require there to exist $(\theta^1, a^1)$ with $\theta^1_2 = a^1_2 = 0$ and $g(\theta^1)[a^1] > 0$. Define $\tilde a^1 = (a^1_1, 1)$. Also, $g_1(11) > g_2(11)$ will imply that $g(11)[10] > 0$. Now consider the following alternate mechanism $\tilde g$:
\[
\tilde g(11)[11] = g(11)[11] + \frac{\varepsilon}{\mu_1(1)}, \qquad
\tilde g(11)[10] = g(11)[10] - \frac{\varepsilon}{\mu_1(1)},
\]
\[
\tilde g(\theta^1)[\tilde a^1] = g(\theta^1)[\tilde a^1] + \frac{\varepsilon}{\mu_1(\theta^1_1)}, \qquad
\tilde g(\theta^1)[a^1] = g(\theta^1)[a^1] - \frac{\varepsilon}{\mu_1(\theta^1_1)},
\]
\[
\tilde g(\theta)[a] = g(\theta)[a] \quad \text{otherwise.}
\]
When agent 1 is acted on has not changed, hence $\tilde g$ is incentive compatible for agent 1. To check incentive compatibility for agent 2, notice that
\[
p_2^{\tilde g}(1) = p_2^g(1) + \varepsilon \qquad\text{and}\qquad p_2^{\tilde g}(0) = p_2^g(0) + \varepsilon.
\]
Since $g$ is incentive compatible, $p_2^g(1) = p_2^g(0)$, and hence $\tilde g$ is incentive compatible. Now the difference in payoff between $\tilde g$ and $g$ is given by:
\[
V(\tilde g) - V(g) = \frac{\varepsilon}{\mu_1(1)}\,\mu(11)\big[W(11) - W(1\tfrac12)\big] - \frac{\varepsilon}{\mu_1(\theta^1_1)}\,\mu(\theta^1)\big[V(\theta^1, a^1) - V(\theta^1, \tilde a^1)\big]
\]
\[
= \varepsilon\,\mu_2\big[W(11) - W(1\tfrac12)\big] - \varepsilon\,(1 - \mu_2)\big[V(\theta^1, a^1) - V(\theta^1, \tilde a^1)\big]
\]
\[
\geq \varepsilon\,\mu_2\big[W(11) - W(1\tfrac12)\big] - \varepsilon\,(1 - \mu_2)\big[W(1\tfrac12) - W(10)\big] > 0;
\]
the first inequality holds because of supermodularity (the change in agent 2's outcome at $\theta^1$ costs at most $W(1\frac12) - W(10)$), and the second inequality holds because of inequality (2) above. And we have violated the optimality of mechanism $g$. A similar argument will rule out $g_2(11) > g_1(11)$.
Part (2): Suppose that $g_1(11) = g_2(11) < 1$. Since $p_1, p_2 < 1$, incentive compatibility requires that for $i = 1, 2$ there exists $(\theta^i, a^i)$ with $\theta^i_i = a^i_i = 0$ and $g(\theta^i)[a^i] > 0$. Define $\tilde a^1 = (1, a^1_2)$ and $\tilde a^2 = (a^2_1, 1)$. Now consider the following alternate mechanism $\tilde g$:
\[
\tilde g(11)[11] = g(11)[11] + \varepsilon, \qquad \tilde g(11)[00] = g(11)[00] - \varepsilon,
\]
\[
\tilde g(\theta^1)[\tilde a^1] = g(\theta^1)[\tilde a^1] + \varepsilon\,\frac{\mu_2(1)}{\mu_2(\theta^1_2)}, \qquad
\tilde g(\theta^1)[a^1] = g(\theta^1)[a^1] - \varepsilon\,\frac{\mu_2(1)}{\mu_2(\theta^1_2)},
\]
\[
\tilde g(\theta^2)[\tilde a^2] = g(\theta^2)[\tilde a^2] + \varepsilon\,\frac{\mu_1(1)}{\mu_1(\theta^2_1)}, \qquad
\tilde g(\theta^2)[a^2] = g(\theta^2)[a^2] - \varepsilon\,\frac{\mu_1(1)}{\mu_1(\theta^2_1)},
\]
\[
\tilde g(\theta)[a] = g(\theta)[a] \quad \text{elsewhere.}
\]
To check that $\tilde g$ is incentive compatible for agent 1, notice that
\[
p_1^{\tilde g}(1) = p_1^g(1) + \varepsilon\,\mu_2(1) \qquad\text{and}\qquad p_1^{\tilde g}(0) = p_1^g(0) + \varepsilon\,\mu_2(1),
\]
and hence $p_1^{\tilde g}(1) = p_1^{\tilde g}(0)$ since $g$ is incentive compatible. A similar argument can be made for agent 2. The payoff difference between $g$ and $\tilde g$ is given by:
\[
V(\tilde g) - V(g) = \varepsilon\,\mu(11)\big[W(11) - W(\tfrac12\tfrac12)\big] - \varepsilon\,\mu(01)\big[V(\theta^1, a^1) - V(\theta^1, \tilde a^1)\big] - \varepsilon\,\mu(10)\big[V(\theta^2, a^2) - V(\theta^2, \tilde a^2)\big]
\]
\[
\geq \varepsilon\,\mu(11)\big[W(11) - W(\tfrac12\tfrac12)\big] - \varepsilon\,\big(\mu(10) + \mu(01)\big)\big[W(1\tfrac12) - W(10)\big]
\]
\[
\geq 2\,\varepsilon\,\mu(11)\big[W(1\tfrac12) - W(\tfrac12\tfrac12)\big] - \varepsilon\,\big(\mu(10) + \mu(01)\big)\big[W(1\tfrac12) - W(10)\big]
\]
\[
= \varepsilon\,\mu_1\mu_2 \left( 2\big[W(1\tfrac12) - W(\tfrac12\tfrac12)\big] - \left[\frac{1 - \mu_1}{\mu_1} + \frac{1 - \mu_2}{\mu_2}\right]\big[W(1\tfrac12) - W(10)\big] \right) > 0.
\]
The first inequality uses supermodularity and symmetry of $W$ to bound each $V$-difference by $W(1\frac12) - W(10)$. The second inequality uses
\[
W(11) - W(\tfrac12\tfrac12) = \big[W(11) - W(1\tfrac12)\big] + \big[W(1\tfrac12) - W(\tfrac12\tfrac12)\big] \geq 2\big[W(1\tfrac12) - W(\tfrac12\tfrac12)\big].
\]
The equality uses $\mu(10) + \mu(01) = \mu_1\mu_2\big[\frac{1-\mu_1}{\mu_1} + \frac{1-\mu_2}{\mu_2}\big]$, and the final inequality follows from the inequality assumed in the statement of the lemma applied to $i = 1, 2$ (here $\mu_1, \mu_2 > \frac12$). Hence we have $V(\tilde g) - V(g) > 0$, which contradicts the optimality of $g$, and we have proved that $g(11)[11] = 1$.

Part (3): We know from Part (2) that $g(11)[11] = 1$. Since $\mu_1, \mu_2 > \frac12$, incentive compatibility will require $g_1(01), g_2(10) > 0$, which means for $i = 1, 2$ there exist $a^i$ with $a^i_i = 1$ and $g(01)[a^1] > 0$, $g(10)[a^2] > 0$. Define $\tilde a^1 = (0, a^1_2)$ and $\tilde a^2 = (a^2_1, 0)$. We will first show that $g(00)[00] = 0$.
Suppose not, for contradiction. Construct an alternate mechanism $\tilde g$ as follows:
\[
\tilde g(00)[11] = g(00)[11] + \varepsilon, \qquad \tilde g(00)[00] = g(00)[00] - \varepsilon,
\]
\[
\tilde g(01)[\tilde a^1] = g(01)[\tilde a^1] + \varepsilon\,\frac{1 - \mu_2}{\mu_2}, \qquad
\tilde g(01)[a^1] = g(01)[a^1] - \varepsilon\,\frac{1 - \mu_2}{\mu_2},
\]
\[
\tilde g(10)[\tilde a^2] = g(10)[\tilde a^2] + \varepsilon\,\frac{1 - \mu_1}{\mu_1}, \qquad
\tilde g(10)[a^2] = g(10)[a^2] - \varepsilon\,\frac{1 - \mu_1}{\mu_1},
\]
\[
\tilde g(\theta)[a] = g(\theta)[a] \quad \text{elsewhere.}
\]
Using similar methods to the above it can be checked that $\tilde g$ is incentive compatible. The change in payoff between mechanism $g$ and $\tilde g$ is given by:
\[
V(\tilde g) - V(g) = \varepsilon\,\mu(00)\Big( \big[W(00) - W(\tfrac12\tfrac12)\big] + \big[V((01), \tilde a^1) - V((01), a^1)\big] + \big[V((10), \tilde a^2) - V((10), a^2)\big] \Big)
\]
\[
> \varepsilon\,\mu(00)\Big( W(00) - W(\tfrac12\tfrac12) + 2\big[W(\tfrac12\tfrac12) - W(\tfrac12 0)\big] \Big)
= \varepsilon\,\mu(00)\Big( \big[W(\tfrac12\tfrac12) - W(\tfrac12 0)\big] - \big[W(\tfrac12 0) - W(00)\big] \Big) \geq 0,
\]
where the strict inequality bounds the two $V$-differences using supermodularity (each gain from no longer acting on a low type is at least $W(\frac12\frac12) - W(\frac12 0)$) and the final inequality is again supermodularity. And we have contradicted the optimality of $g$, and hence $g(00)[00] = 0$. A similar proof will rule out $g(00)[01] > 0$ and $g(00)[10] > 0$; for each of those cases construct $\tilde g$ using $(a^1, \tilde a^1)$ or $(a^2, \tilde a^2)$ respectively. $\qed$
Now define $U_1, \ldots, U_5$ as functions of $\mu$ and $W$ as follows: for $i = 1, \ldots, 5$, let $U_i$ be the value of the objective function of the maximization problem derived in the proof of Lemma 6 below, evaluated at the $i$-th of the extreme points enumerated there. Each $U_i$ is an explicit linear combination of $W(1\frac12)$, $W(\frac12\frac12)$, $W(10)$ and $W(\frac12 0)$ whose coefficients are rational functions of $\mu_1$ and $\mu_2$; for example,
\[
U_3 = \big[\mu(10) + \mu(01)\big]\, W(10).
\]
Lemma 6. Let $\mu_1 > \mu_2 > \frac{1}{2}$ and $\max\{U_1, U_2, U_3\} < \max\{U_4, U_5\}$. Then if an optimal mechanism has $g(11)[11] = g(00)[11] = 1$, it must be that $p_1^g < p_2^g$.
Proof. Let $g$ be an optimal mechanism with the property that
\[
g(11)[11] = g(00)[11] = 1.
\]
Using coordination within states (see Theorem 1) we know that
\[
g(10)[10] = \min\{g_1(10),\, 1 - g_2(10)\}, \qquad g(10)[11] = \max\{0,\, g_1(10) + g_2(10) - 1\},
\]
\[
g(10)[00] = \max\{1 - g_1(10) - g_2(10),\, 0\}, \qquad g(10)[01] = \min\{1 - g_1(10),\, g_2(10)\},
\]
\[
g(01)[01] = \min\{g_2(01),\, 1 - g_1(01)\}, \qquad g(01)[11] = \max\{0,\, g_1(01) + g_2(01) - 1\},
\]
\[
g(01)[00] = \max\{1 - g_1(01) - g_2(01),\, 0\}, \qquad g(01)[10] = \min\{g_1(01),\, 1 - g_2(01)\}.
\]
The incentive constraint for each agent can be written as:
\[
p_1 = \mu_2 + g_1(10)\,(1 - \mu_2) = 1 - \mu_2 + g_1(01)\,\mu_2,
\]
\[
p_2 = \mu_1 + g_2(01)\,(1 - \mu_1) = 1 - \mu_1 + g_2(10)\,\mu_1.
\]
From these incentive constraints it is easy to see that $p_1 \in [\mu_2, 1]$ and $p_2 \in [\mu_1, 1]$. We can also fully characterize the mechanism in terms of $p_1, p_2$:
\[
g_1(10) = \frac{p_1 - \mu_2}{1 - \mu_2}, \qquad g_1(01) = 1 - \frac{1 - p_1}{\mu_2},
\]
\[
g_2(01) = \frac{p_2 - \mu_1}{1 - \mu_1}, \qquad g_2(10) = 1 - \frac{1 - p_2}{\mu_1}.
\]
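As a quick numerical sanity check (this snippet is our own illustration, not part of the paper), the closed forms above can be verified to reproduce the incentive constraints for arbitrary feasible values of $p_1, p_2$; the prior and targets below are arbitrary illustrative choices.

```python
# With g(11)[11] = g(00)[11] = 1, the closed forms for g_i derived above
# should make each agent's action probability type-independent.
mu1, mu2 = 0.7, 0.6
p1, p2 = 0.8, 0.85          # must satisfy p1 in [mu2, 1], p2 in [mu1, 1]

g1_10 = (p1 - mu2) / (1 - mu2)
g1_01 = 1 - (1 - p1) / mu2
g2_01 = (p2 - mu1) / (1 - mu1)
g2_10 = 1 - (1 - p2) / mu1

# Agent 1: probability of being acted on, conditional on own type.
p1_high = mu2 * 1.0 + (1 - mu2) * g1_10    # states (11) and (10)
p1_low  = mu2 * g1_01 + (1 - mu2) * 1.0    # states (01) and (00)
# Agent 2 analogously.
p2_high = mu1 * 1.0 + (1 - mu1) * g2_01    # states (11) and (01)
p2_low  = mu1 * g2_10 + (1 - mu1) * 1.0    # states (10) and (00)

print(round(p1_high, 10), round(p1_low, 10))   # both equal p1
print(round(p2_high, 10), round(p2_low, 10))   # both equal p2
```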
Hence the optimal mechanism $g$ has to set $p_1, p_2$ to solve the following maximization problem:
\[
\max_{p_1, p_2}\;
\mu(10)\Big[
W(1\tfrac12)\min\Big\{\tfrac{p_1 - \mu_2}{1 - \mu_2},\, \tfrac{1 - p_2}{\mu_1}\Big\}
+ W(10)\max\Big\{0,\, \tfrac{p_1 - \mu_2}{1 - \mu_2} - \tfrac{1 - p_2}{\mu_1}\Big\}
+ W(\tfrac12\tfrac12)\max\Big\{0,\, \tfrac{1 - p_2}{\mu_1} - \tfrac{p_1 - \mu_2}{1 - \mu_2}\Big\}
+ W(\tfrac12 0)\min\Big\{\tfrac{1 - p_1}{1 - \mu_2},\, 1 - \tfrac{1 - p_2}{\mu_1}\Big\}
\Big]
\]
\[
+\; \mu(01)\Big[
W(1\tfrac12)\min\Big\{\tfrac{p_2 - \mu_1}{1 - \mu_1},\, \tfrac{1 - p_1}{\mu_2}\Big\}
+ W(10)\max\Big\{0,\, \tfrac{p_2 - \mu_1}{1 - \mu_1} - \tfrac{1 - p_1}{\mu_2}\Big\}
+ W(\tfrac12\tfrac12)\max\Big\{0,\, \tfrac{1 - p_1}{\mu_2} - \tfrac{p_2 - \mu_1}{1 - \mu_1}\Big\}
+ W(\tfrac12 0)\min\Big\{\tfrac{1 - p_2}{1 - \mu_1},\, 1 - \tfrac{1 - p_1}{\mu_2}\Big\}
\Big]
\]
such that
\[
p_1 \in [\mu_2, 1] \qquad\text{and}\qquad p_2 \in [\mu_1, 1].
\]
(The states $(11)$ and $(00)$ contribute the constants $\mu(11)W(11)$ and $\mu(00)W(00)$, which are omitted.)
The objective function is piecewise linear with two kinks. We say a point $(p_1, p_2)$ is at a kink if either
\[
\frac{p_1 - \mu_2}{1 - \mu_2} = \frac{1 - p_2}{\mu_1}
\qquad\text{or}\qquad
\frac{p_2 - \mu_1}{1 - \mu_1} = \frac{1 - p_1}{\mu_2}.
\]
Define an extreme point of the above maximization problem to be any $(p_1, p_2)$ such that 1) $(p_1, p_2)$ is feasible, and 2) for each $i \in \{1, 2\}$ and for any $\varepsilon > 0$ there exists $\hat p_i \in B_\varepsilon(p_i)$ such that $(\hat p_i, p_{-i})$ is either not feasible or is at one of the kinks. Since the maximization problem is piecewise linear, we know there must exist an optimal extreme point, and all optimal points have to be convex combinations of optimal extreme points. Hence it will suffice to show that there is no optimal extreme point with the property that $p_2 \leq p_1$. There are seven possible extreme points, which we now enumerate.
(1) $\frac{p_1 - \mu_2}{1 - \mu_2} = \frac{1 - p_2}{\mu_1}$, $p_2 = \mu_1$

(2) $p_1 = 1$, $p_2 = \mu_1$

(3) $p_1 = 1$, $p_2 = 1$

(4) $\frac{p_1 - \mu_2}{1 - \mu_2} = \frac{1 - p_2}{\mu_1}$ and $\frac{p_2 - \mu_1}{1 - \mu_1} = \frac{1 - p_1}{\mu_2}$

(5) $p_1 = \mu_2$, $\frac{p_2 - \mu_1}{1 - \mu_1} = \frac{1 - p_1}{\mu_2}$

(6) $p_1 = \mu_2$, $p_2 = 1$

(7) $p_1 = \mu_2$, $p_2 = \mu_1$

Some potential extreme points are removed because they are either redundant or infeasible. We now list those:

(1) $\frac{p_2 - \mu_1}{1 - \mu_1} = \frac{1 - p_1}{\mu_2}$, $p_2 = \mu_1$ is the same as (2) above

(2) $p_1 = \mu_2$, $\frac{p_1 - \mu_2}{1 - \mu_2} = \frac{1 - p_2}{\mu_1}$ is the same as (6) above

(3) $p_1 = 1$, $\frac{p_1 - \mu_2}{1 - \mu_2} = \frac{1 - p_2}{\mu_1}$ requires $p_2 = 1 - \mu_1 < \mu_1$, which is infeasible

(4) $p_1 = 1$, $\frac{p_2 - \mu_1}{1 - \mu_1} = \frac{1 - p_1}{\mu_2}$ is the same as (2) above

(5) $\frac{p_2 - \mu_1}{1 - \mu_1} = \frac{1 - p_1}{\mu_2}$, $p_2 = 1$ requires $p_1 = 1 - \mu_2 < \mu_2$, which is infeasible

(6) $\frac{p_1 - \mu_2}{1 - \mu_2} = \frac{1 - p_2}{\mu_1}$, $p_2 = 1$ is the same as (6) above.

So we have seven possible extreme points. It is easy to verify that only extreme points 1, 2, 3 have the property that $p_2 \leq p_1$, and $U_i$ for $i = 1, \ldots, 5$ was defined as the value of the above objective function at extreme point $i$. Hence if $\max\{U_1, U_2, U_3\} < \max\{U_4, U_5\}$, then there are no optimal extreme points with $p_2 \leq p_1$, and therefore there can be no optimal mechanism with $p_2 \leq p_1$, as desired. $\qed$
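To illustrate Lemma 6 concretely (a toy example of our own, not from the paper), one can maximize the objective above on a grid for the supermodular, symmetric payoff $W(x_1, x_2) = x_1 x_2$ and a prior $\mu_1 = 0.8 > \mu_2 = 0.75 > \frac12$, which satisfies the lemma's hypotheses, and check that the optimum favors the ex-ante weaker agent 2:

```python
# Grid-search the (p1, p2) program characterized above for a sample
# supermodular symmetric W and a prior with mu1 > mu2 > 1/2, and verify
# favoritism at the optimum: p2 > p1.
def W(x1, x2):
    return x1 * x2

mu1, mu2 = 0.8, 0.75  # mu1 > mu2 > 1/2

def objective(p1, p2):
    # Closed forms for g_i in terms of (p1, p2).
    g1_10 = (p1 - mu2) / (1 - mu2)
    g1_01 = 1 - (1 - p1) / mu2
    g2_01 = (p2 - mu1) / (1 - mu1)
    g2_10 = 1 - (1 - p2) / mu1
    v10 = (W(1, .5) * min(g1_10, 1 - g2_10)
           + W(1, 0) * max(0.0, g1_10 + g2_10 - 1)
           + W(.5, .5) * max(0.0, 1 - g1_10 - g2_10)
           + W(.5, 0) * min(1 - g1_10, g2_10))
    v01 = (W(.5, 1) * min(g2_01, 1 - g1_01)
           + W(0, 1) * max(0.0, g1_01 + g2_01 - 1)
           + W(.5, .5) * max(0.0, 1 - g1_01 - g2_01)
           + W(0, .5) * min(g1_01, 1 - g2_01))
    # States (11) and (00) contribute constants and are omitted.
    return mu1 * (1 - mu2) * v10 + (1 - mu1) * mu2 * v01

grid = [i / 200 for i in range(201)]
p1_opt, p2_opt = max(
    ((p1, p2) for p1 in grid for p2 in grid if p1 >= mu2 and p2 >= mu1),
    key=lambda q: objective(*q),
)
print(p2_opt > p1_opt)  # the ex-ante weaker agent 2 is acted on more often
```

The grid optimum lands near the intersection of the two kinks (extreme point 4), where both within-state coordination constraints bind.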
Lemma 7. Take any $\mu$ with $\mu_1 = \mu_2 > \frac12$ and such that
\[
(1 - \mu_1)\, W(1\tfrac12) + \mu_1\, W(\tfrac12 0) > W(10);
\]
then $\max\{U_2, U_3\} < U_4$.

Proof. Using $\mu_1 = \mu_2$ (so that $\mu(10) = \mu(01)$) we can compute $U_2$, $U_3$ and $U_4$ explicitly; in particular
\[
U_3 = 2\, W(10)\, \mu(10).
\]
We can calculate that $U_4 - U_2$ is a positive multiple (with a factor involving $2\mu_1 - 1 > 0$) of $(1 - \mu_1) W(1\frac12) + \mu_1 W(\frac12 0) - W(10)$, hence
\[
U_4 - U_2 > 0
\]
by the inequality in the statement of the lemma. Next we calculate that
\[
U_4 - U_3 = 2\,\mu(10)\Big[(1 - \alpha)\, W(1\tfrac12) + \alpha\, W(\tfrac12 0) - W(10)\Big]
\qquad\text{with } \alpha = \frac{2\mu_1 - 1}{\mu_1},
\]
which is strictly positive: since $W(1\frac12) > W(\frac12 0)$, the bracketed expression is decreasing in the weight placed on $W(\frac12 0)$, and $\alpha < \mu_1$ (indeed $\mu_1 - \alpha = (1 - \mu_1)^2/\mu_1 > 0$), so the bracket is at least $(1 - \mu_1) W(1\frac12) + \mu_1 W(\frac12 0) - W(10) > 0$, as desired. $\qed$
Lemma 8. Take any $\mu$ with $\mu_1 = \mu_2 > \frac12$ and such that
\[
\mu_1\, W(1\tfrac12) + (1 - \mu_1)\, W(\tfrac12 0) \neq W(\tfrac12\tfrac12);
\]
then $U_1 < \max\{U_4, U_5\}$.

Proof. Suppose for contradiction that
\[
U_1 \geq \max\{U_4, U_5\}.
\]
Using $\mu_1 = \mu_2$ we can compute $U_1$, $U_4$ and $U_5$ explicitly as linear combinations of $W(1\frac12)$, $W(\frac12\frac12)$, $W(10)$ and $W(\frac12 0)$ with coefficients polynomial in $\mu_1$. The computation shows that $U_1 - U_4 \geq 0$ holds if and only if
\[
\mu_1\, W(1\tfrac12) + (1 - \mu_1)\, W(\tfrac12 0) \leq W(\tfrac12\tfrac12),
\]
while $U_1 - U_5 \geq 0$ holds if and only if
\[
\mu_1\, W(1\tfrac12) + (1 - \mu_1)\, W(\tfrac12 0) \geq W(\tfrac12\tfrac12).
\]
Combining the two gives
\[
\mu_1\, W(1\tfrac12) + (1 - \mu_1)\, W(\tfrac12 0) = W(\tfrac12\tfrac12),
\]
which contradicts the non-equality assumed in the statement of the lemma. $\qed$
Lemma 9. For any $\gamma > \frac12$ such that
\[
\frac{W(1\frac12) - W(10)}{W(10) - W(\frac12 0)} > \frac{\gamma}{1 - \gamma}
\qquad\text{and}\qquad
\gamma\, W(1\tfrac12) + (1 - \gamma)\, W(\tfrac12 0) \neq W(\tfrac12\tfrac12),
\]
there exists an $\varepsilon > 0$ such that for any prior $\mu$ with $\mu_1 = \gamma$ and $\mu_2 \in (\gamma - \varepsilon, \gamma)$ we have
\[
\max\{U_1, U_2, U_3\} < \max\{U_4, U_5\}.
\]

Proof. For this proof we will treat each $U_i$ as a function of $\mu$ using the expressions given above. Suppose for contradiction that no such $\varepsilon$ exists. Then we can take a sequence $\varepsilon^n > 0$ with $\varepsilon^n \to 0$ and define $\mu^n$ as $\mu_1^n = \gamma$, $\mu_2^n = \gamma - \varepsilon^n$, and at each $n$ we would have
\[
\max\{U_1(\mu^n), U_2(\mu^n), U_3(\mu^n)\} \geq \max\{U_4(\mu^n), U_5(\mu^n)\}.
\]
Clearly $\mu^n \to \mu$ where $\mu_1 = \mu_2 = \gamma$. Using the continuity of each $U_i(\cdot)$ it must be that
\[
\max\{U_1(\mu), U_2(\mu), U_3(\mu)\} \geq \max\{U_4(\mu), U_5(\mu)\}.
\]
But $\mu$ obeys the assumptions in the statements of Lemmas 7 and 8, from which we can see that
\[
\max\{U_1(\mu), U_2(\mu), U_3(\mu)\} < \max\{U_4(\mu), U_5(\mu)\},
\]
which gives us a contradiction. $\qed$
We now complete the proof of Lemma 3 by constructing the appropriate $\mu$. By supermodularity it is clear that
\[
\frac{W(1\frac12) - W(10)}{W(1\frac12) - W(\frac12\frac12)} < \frac{W(1\frac12) - W(10)}{W(10) - W(\frac12 0)},
\]
and the inequality assumed in the statement of Lemma 3 guarantees that
\[
\frac{W(1\frac12) - W(10)}{W(10) - W(\frac12 0)} > 1.
\]
Hence we can find $\frac12 < \gamma^L < \gamma^H < 1$ such that
\[
\frac{W(1\frac12) - W(10)}{W(1\frac12) - W(\frac12\frac12)} < \frac{\gamma}{1 - \gamma} < \frac{W(1\frac12) - W(10)}{W(10) - W(\frac12 0)}
\qquad\text{for all } \gamma \in (\gamma^L, \gamma^H).
\]
Choose any $\gamma \in (\gamma^L, \gamma^H)$ such that $\gamma\, W(1\frac12) + (1 - \gamma)\, W(\frac12 0) \neq W(\frac12\frac12)$ (this rules out at most one value of $\gamma$, since the left-hand side is strictly increasing in $\gamma$), and choose an associated $\varepsilon$ as mentioned in the statement of Lemma 9. Define a prior $\mu$ as
\[
\mu_1 = \gamma
\qquad\text{and}\qquad
\mu_2 = \gamma - \tfrac12 \min\{\varepsilon,\, \gamma - \gamma^L\},
\]
and it is clear that $(W, \mu)$ obeys the conditions in both Lemma 9 and Lemma 5. Combining those lemmas together yields that in every optimal mechanism $g(11)[11] = g(00)[11] = 1$ as well as that
\[
\max\{U_1, U_2, U_3\} < \max\{U_4, U_5\}.
\]
We can apply Lemma 6 to conclude $p_1^g < p_2^g$ for any optimal mechanism $g$. And we chose the prior so that $\mu_1 > \mu_2$; hence we have proved Lemma 3. $\qed$
Appendix C. Proof of Theorem 4

We start by proving the following lemma.

Lemma 10. Fix any $\Theta$. Let $a, b, c, d \in \mathbb{R}$ with $a > 0$. Then there exists an $M$ such that for all symmetric principal payoff functions that are monotone of degree at least $M$ we have
\[
a\, W(1\tfrac12) + b\, W(\tfrac12\tfrac12) + c\, W(10) + d\, W(\tfrac12 0) > 0.
\]

Proof. Set $M$ so that
\[
\frac{|b| + |c| + |d|}{M} < a;
\]
then we have that for any symmetric principal payoff function that is monotone of degree at least $M$,
\[
a\, W(1\tfrac12) + b\, W(\tfrac12\tfrac12) + c\, W(10) + d\, W(\tfrac12 0)
\geq a\, W(1\tfrac12) - |b|\, W(\tfrac12\tfrac12) - |c|\, W(10) - |d|\, W(\tfrac12 0)
\]
\[
= W(1\tfrac12)\left( a - |b|\,\frac{W(\frac12\frac12)}{W(1\frac12)} - |c|\,\frac{W(10)}{W(1\frac12)} - |d|\,\frac{W(\frac12 0)}{W(1\frac12)} \right)
\geq W(1\tfrac12)\left( a - \frac{|b| + |c| + |d|}{M} \right) > 0,
\]
as desired. $\qed$
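A quick numerical check of the lemma's bound (our own illustration; here "monotone of degree $M$" is read as saying that $W(1\frac12)$ is at least $M$ times each of $W(\frac12\frac12)$, $W(10)$ and $W(\frac12 0)$, with all values positive, which is the only property of the definition the proof uses):

```python
# Illustrative check of Lemma 10 under the assumed reading of
# "monotone of degree M": W(1,1/2) >= M * max{W(1/2,1/2), W(1,0), W(1/2,0)}.
a, b, c, d = 1.0, -3.0, 2.5, -4.0            # arbitrary coefficients, a > 0
M = (abs(b) + abs(c) + abs(d)) / a + 1.0     # ensures (|b|+|c|+|d|)/M < a

W_1h = 100.0                                  # W(1, 1/2)
W_hh = W_1h / M * 0.9                         # W(1/2, 1/2) <= W_1h / M
W_10 = W_1h / M * 0.8                         # W(1, 0)     <= W_1h / M
W_h0 = W_1h / M * 0.5                         # W(1/2, 0)   <= W_1h / M

value = a * W_1h + b * W_hh + c * W_10 + d * W_h0
print(value > 0)  # positive, as Lemma 10 guarantees
```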
Corollary 1. For any fixed $\mu$ with $\mu_1, \mu_2 > \frac12$ and $i = 1, \ldots, 3$ there exists $M^\star$ such that if $W$ is a symmetric principal payoff function that is monotone of degree at least $M^\star$, then $U_i < U_4$.

Proof. First take $i = 1$: a direct computation shows that the coefficient on $W(1\frac12)$ in $U_4 - U_1$ is strictly positive, using $\mu_1, \mu_2 > \frac12$. Next, the coefficient on $W(1\frac12)$ in $U_4 - U_2$ is likewise strictly positive, again using $\mu_1, \mu_2 > \frac12$. And it is trivial to check that the coefficient on $W(1\frac12)$ in $U_4 - U_3$ is strictly positive. In each case, applying Lemma 10 with $a$ equal to the (positive) coefficient on $W(1\frac12)$ and $b, c, d$ equal to the coefficients on $W(\frac12\frac12)$, $W(10)$ and $W(\frac12 0)$ in $U_4 - U_i$ yields the desired $M^\star$. $\qed$
Now set the prior $\mu$ as
\[
\mu_1 = \frac{3 + 2\beta}{4}
\qquad\text{and}\qquad
\mu_2 = \frac{3 - 2\beta}{4},
\]
where $\beta \in (0, \frac14)$ is a fixed constant. It is easy to check that $\mu_1, \mu_2 > \frac12$. Using the above corollary, we can find $M_1, \ldots, M_3$ such that for all $W$ that are monotone of degree at least $\max\{M_1, \ldots, M_3\}$ we have
\[
\max\{U_4, U_5\} \geq U_4 > \max\{U_1, U_2, U_3\}.
\]
Now choose any $M$ such that
\[
M > \max\left\{ M_1, M_2, M_3,\, \frac{3 - 2\beta}{2 - 4\beta} \right\}.
\]
Using that $M > \frac{3 - 2\beta}{2 - 4\beta}$ we can show that
\[
\frac{1}{1 - \frac{1}{M}} < \frac{3 - 2\beta}{1 + 2\beta}.
\]
Now for any symmetric principal payoff function that is monotone of degree at least $M$ we have
\[
\frac{W(1\frac12) - W(10)}{W(1\frac12) - W(\frac12\frac12)}
= \frac{1 - \frac{W(10)}{W(1\frac12)}}{1 - \frac{W(\frac12\frac12)}{W(1\frac12)}}
< \frac{1}{1 - \frac{1}{M}}
< \frac{3 - 2\beta}{1 + 2\beta}
= \frac{\mu_2}{1 - \mu_2}
< \frac{\mu_1}{1 - \mu_1}.
\]
Hence we can apply Lemma 5 to conclude that $g(11)[11] = g(00)[11] = 1$. And applying Lemma 6 we know that $p_2 > p_1$ in all optimal mechanisms whenever
\[
\max\{U_4, U_5\} > \max\{U_1, U_2, U_3\},
\]
which we know is true for every symmetric principal payoff function that is monotone of degree at least $M$. And that finishes the proof. $\qed$
Appendix D. Proofs from Section 6

D.1. Proof of Proposition 1.

We start by perturbing the principal payoff function exactly as done in the proof of Theorem 1. Let $g$ be any optimal mechanism in this perturbed environment and let $P$ be the set of all bijections from $\{1, \ldots, n\}$ to $\{1, \ldots, n\}$. We will think of elements of $P$ as permutations, and for any $\theta \in \Theta$, $a \in A$, $\pi \in P$ we will abuse notation by setting:
\[
\pi(\theta) = \big(\theta_{\pi(1)}, \theta_{\pi(2)}, \ldots, \theta_{\pi(n)}\big) \in \Theta,
\qquad
\pi(a) = \big(a_{\pi(1)}, a_{\pi(2)}, \ldots, a_{\pi(n)}\big) \in A.
\]
Now for every $\pi \in P$ define mechanism $g^\pi$ as
\[
g^\pi(\theta)[a] = g\big(\pi(\theta)\big)\big[\pi(a)\big].
\]
Since $V$ is symmetric and $\mu_i = \mu_j$ for all $i, j$, we have that $g$ and $g^\pi$ give the same payoff. And $g^\pi$ is incentive compatible since every agent has the same prior distribution on types. Hence $g^\pi$ is optimal. And a mixture of any number of optimal mechanisms is also optimal, hence we can define an optimal mechanism $\tilde g$ as
\[
\tilde g(\theta)[a] = \sum_{\pi \in P} \frac{1}{|P|}\, g^\pi(\theta)[a].
\]
And for any permutation $\sigma \in P$ we have that
\[
\tilde g(\theta)[a] = \sum_{\pi \in P} \frac{1}{|P|}\, g^\pi(\theta)[a] = \tilde g\big(\sigma(\theta)\big)\big[\sigma(a)\big].
\]
The second equality holds because the set of all permutations of $(\theta, a)$ is equivalent to the set of all permutations of $(\sigma(\theta), \sigma(a))$.

We have shown there is an optimal mechanism that is anonymous in the perturbed environment. The proof of Theorem 1 ensures that in this environment every optimal mechanism must coordinate across states; hence, the optimal mechanism we found must also coordinate across states. Now, just as in the proof of Theorem 1, take the perturbations to zero, and the upper hemicontinuity guaranteed by the theorem of the maximum means there must be an optimal mechanism that is anonymous and coordinates across states in the original setting. From the statement of Theorem 1 we know this mechanism must also coordinate within states, and we are done.
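The symmetrization step can be illustrated numerically (a sketch of our own, not code from the paper): averaging any mechanism over all permutations of the agents produces an anonymous mechanism, i.e. one satisfying $\tilde g(\theta)[a] = \tilde g(\sigma(\theta))[\sigma(a)]$ for every permutation $\sigma$.

```python
from itertools import permutations, product

n = 3
states = list(product([0, 1], repeat=n))   # Theta = {0,1}^n
actions = list(product([0, 1], repeat=n))  # A = {0,1}^n

def permute(vec, pi):
    # pi(theta) = (theta_pi(1), ..., theta_pi(n))
    return tuple(vec[pi[i]] for i in range(n))

# An arbitrary deterministic, non-anonymous mechanism: act on agent i
# iff theta_i = 1 and i < 2.  Stored as g[theta][a] = probability.
g = {th: {a: 1.0 if a == tuple(th[i] if i < 2 else 0 for i in range(n)) else 0.0
          for a in actions} for th in states}

# Average over all permutations, as in the proof of Proposition 1.
perms = list(permutations(range(n)))
g_sym = {th: {a: sum(g[permute(th, pi)][permute(a, pi)] for pi in perms) / len(perms)
              for a in actions} for th in states}

# Verify anonymity of the symmetrized mechanism.
anonymous = all(
    abs(g_sym[th][a] - g_sym[permute(th, pi)][permute(a, pi)]) < 1e-12
    for th in states for a in actions for pi in perms
)
print(anonymous)  # True
```

(The payoff- and incentive-preservation steps additionally rely on $V$ being symmetric and the agents sharing a common prior, as in the proof.)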
D.2. Proof that $m_L = n - m_H$.

We have an anonymous coordination mechanism described by $f : \{0, 1, \ldots, n\} \to \Delta(\{0,1\} \times \{0,1\})$. It is easy to show that if the spectacular-success action is never used then the spectacular-failure action is also never used, and vice versa; we think of this case as $m_H = n + 1$ and $m_L = -1$. Now suppose that spectacular success and failure are employed, and let $m_H$ be the lowest value for which $f(m_H)[0,1] > 0$ and $m_L$ the highest value for which $f(m_L)[1,0] > 0$. Now incentive compatibility requires that for all $i$ we have:
\[
p_i^g(1) = p_i^g(0).
\]
Let $\#\theta$ be the number of high-quality types at state $\theta$, so that $\#\theta = \sum_i \theta_i$. Also define $f_1(m) = f(m)[0,1] + f(m)[1,1]$ and $f_0(m) = f(m)[1,0] + f(m)[1,1]$. Choose $a^\star \in \{0,1\}$ to be $0$ if the middle region uses the choose-no-one action and $1$ if the middle region uses the choose-everyone action. Using the properties of $f$ discussed in Section 6 we can rewrite $p_i^g(1)$ as:
\[
p_i^g(1) = \sum_{\theta:\, \#\theta > m_H,\, \theta_i = 1} \mu(\theta)
+ \sum_{\theta:\, \#\theta = m_H,\, \theta_i = 1} \mu(\theta)\, f_1(m_H)
+ a^\star \sum_{\theta:\, m_L < \#\theta < m_H,\, \theta_i = 1} \mu(\theta)
\]
\[
= \frac{1}{2^n} \left( \sum_{n \geq m > m_H} \binom{n-1}{m-1}
+ \binom{n-1}{m_H - 1} f_1(m_H)
+ a^\star \sum_{m_L < m < m_H} \binom{n-1}{m-1} \right).
\]
We can also derive
\[
p_i^g(0) = \frac{1}{2^n} \left( \sum_{0 \leq m < m_L} \binom{n-1}{m}
+ \binom{n-1}{m_L} f_0(m_L)
+ a^\star \sum_{m_L < m < m_H} \binom{n-1}{m} \right)
\]
\[
= \frac{1}{2^n} \left( \sum_{n \geq m > n - m_L} \binom{n-1}{m-1}
+ \binom{n-1}{n - m_L - 1} f_0(m_L)
+ a^\star \sum_{m_L < m < m_H} \binom{n-1}{m} \right).
\]
The second equality uses that in general $\binom{n}{k} = \binom{n}{n-k}$. Now suppose for contradiction that $n - m_L > m_H$; then we would have
\[
p_i^g(1) - p_i^g(0) = \frac{1}{2^n} \left( \sum_{n - m_L \geq m > m_H} \binom{n-1}{m-1}
+ \binom{n-1}{m_H - 1} f_1(m_H)
- \binom{n-1}{n - m_L - 1} f_0(m_L) \right)
\geq \frac{1}{2^n} \binom{n-1}{m_H - 1} f_1(m_H) > 0,
\]
where the first inequality holds because the sum includes the term $m = n - m_L$, which is at least $\binom{n-1}{n - m_L - 1} f_0(m_L)$. This contradicts incentive compatibility. And a similar argument will rule out $n - m_L < m_H$, and we are done.
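The balancing condition can be illustrated with a boundary case (our own toy example, not from the paper): when the success region acts exactly on the high types for $m \geq m_H$, the failure region acts exactly on the low types for $m \leq m_L$, and no one is chosen in between, incentive compatibility holds precisely at $m_L = n - m_H$.

```python
from math import comb

# Types are i.i.d. uniform on {0,1}.  The mechanism acts on a high-type
# agent iff the count of high types m >= m_H, and on a low-type agent iff
# m <= m_L (here f_1(m_H) = f_0(m_L) = 1 and a* = 0).
def action_probs(n, m_H, m_L):
    # Given theta_i = 1, the count is m = 1 + Binomial(n-1, 1/2);
    # the agent is acted on iff m >= m_H.
    p_high = sum(comb(n - 1, k) for k in range(m_H - 1, n)) / 2 ** (n - 1)
    # Given theta_i = 0, the count is m = Binomial(n-1, 1/2);
    # the agent is acted on iff m <= m_L.
    p_low = sum(comb(n - 1, k) for k in range(0, m_L + 1)) / 2 ** (n - 1)
    return p_high, p_low

n, m_H = 5, 4
print(action_probs(n, m_H, n - m_H))      # equal: IC holds at m_L = n - m_H
print(action_probs(n, m_H, n - m_H - 1))  # unequal: IC fails otherwise
```

The equality at $m_L = n - m_H$ is exactly the binomial symmetry $\binom{n-1}{k} = \binom{n-1}{n-1-k}$ used in the proof above.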