Communication Strategies in a Matching Game

Dmitri Kuksov
Washington University in St. Louis
May 2007
Abstract
This paper explores the communication and choice strategies of agents in a matching game in which agents are uncertain about their payoffs, and the payoff of each agent depends on, and is partly known to, the potential partner. Business examples of such decisions include mergers, acquisitions, distribution channel partnerships, and manufacturing and brand alliances. When communication is informative, the communication strategy as a function of the expected payoff of the partnership involves pretending fit when the expected payoff is high, pretending misfit when the expected payoff is low, and telling the truth in an intermediate range. The condition for informativeness of communication turns out to be that the distribution of payoffs has sufficiently thin tails.
Keywords: Matching, Communication, Search, Cheap Talk.
1 Introduction
Information exchange and communication are important in many social and economic activities. For example, one may argue that the purpose of a job interview or of the dating process is information exchange aimed at figuring out both the objective quality of and the fit between potential partners. In each meeting, agents face the problem of whether to be truthful and of what to say about their own characteristics or any other information they have that may be relevant to the payoffs of the partnership.
Consider a matching game where the only decisions made by each player are whether to accept each potential partner, and the partnership agreement is signed only when both potential partners accept. Under some conditions, the communication decision is theoretically easy: if one party knows precisely the value of the partnership to her and is interested in the match, her statements are optimally chosen so as to maximize the other party's expectation of the payoff, and thus to achieve the highest possible probability of acceptance by the other party.
However, the party selecting a statement is often uncertain about the value and desirability of the match. In such a case, communication can have another goal: to dissuade the other party from accepting the partnership when its value to the first party, as privately known to the other party, is such that the first party, if she knew it, would not accept.
In other words, the goal of an agent's statement is twofold. The first goal is to increase the probability of acceptance by the other party. The second goal is to maximize the expected value of the partnership conditional on the other party accepting. Ideally, a statement would influence the other party's accept decision so as to make it optimal for the first party, i.e., it would increase the probability of the other party's acceptance if and only if the second party knows such acceptance to be beneficial to the first.
The above two goals do not always go hand in hand. When they do, the communication strategy is easier, but communication may also be less informative or not informative at all. This paper shows how the conflict between the above two goals of communication is essential and results in potentially informative and valuable communication in a matching game, and it analyzes agents' communication strategies.
The communication problem and the matching game setting described above apply to examples in very different settings. Job and job-candidate selection, as well as dating and marriage-partner selection, provide examples of the above problem: a person may want to increase the interest of the other party but, at the same time, may be afraid of attracting the wrong one. Although dating and marriage markets may seem to be the eye-catching interpretation of the matching model, the same considerations apply to business management: communication to establish brand and manufacturing alliances, distribution channels, mergers, etc.
The model, fully described in Section 2, involves two agents considering a partnership.1 The payoffs of the partnership, if signed, depend on the fit between the two agents' types and have both common and private value components, with the common value component only partially known to each agent. Specifically, each agent knows her own type and partially observes the type of the other agent. Then, the agent endowed with the ability to talk makes a statement describing her type (or, rather, the part of her type that has not been observed by the other agent), and then both agents make simultaneous accept/reject decisions.2 The partnership is formed if and only if both agents decide to accept.

Footnote 1: Section 5 extends this model to a model of two populations of agents, with agents in each population searching to match with agents of the other population through costly search: agents have to pay s per meeting, thus endogenizing the value of the outside option.

Footnote 2: The assumption that only one side is able to talk is needed for the tractability of the communication strategies.
We find that the optimal communication strategy, if communication is informative, is the following. When an agent expects a high payoff, she pretends to fit the other agent (i.e., she states that she has information that the part of the payoff common to both of them is high). On the other hand, if the agent expects a payoff low enough that she is close to indifferent between accepting and not, she pretends that she does not fit and, at the same time, decides to accept. In between these two cases, it is optimal for the agent to tell the truth. The existence of a range of expected payoffs in which agents tell the truth makes conversation potentially informative. Section 3.3 explicitly derives the condition under which communication is informative; this condition can be interpreted as requiring the tails of the payoff distribution to be sufficiently thin. The probability of the truth-telling range translates into a measure of the informativeness of conversation and depends on the correlation and the probability distribution of the payoffs.
The literature on asymmetric information and information transmission has mainly considered the possibility of signalling through costly statements (e.g., Spence (1973), Milgrom and Roberts (1986)), where the cost may come from reputation concerns or from the possibility of verification by the other party. In the context of social interaction, Pesendorfer (1995) considers a population of agents who differ in a vertical dimension: everybody prefers to match with the "high" types, and therefore statements must be costly to be believable. In contrast, the current paper considers horizontally differentiated agents: the question is one of fit rather than of how good (objectively) an agent is for a match. This allows us to consider costless statements that are not verifiable before the payoffs are received in a non-repeated game.
Costless communication ("cheap talk") may have value. Crawford and Sobel (1982) analyze the optimal amount of information transmission from a sender to a receiver when the sender is more informed but the receiver decides on the action affecting both.3 Forges (1990), Aumann and Hart (2003), and Krishna and Morgan (2004) consider how different possibilities of communication may lead to different equilibrium payoffs. The matching game setup of this paper differs from the setups in the above literature in that both sides make a decision that affects the other side (the decision to accept). Also, unlike the above literature, which considers a communicating agent who has more information than the other agent (i.e., the information set of the other agent is a subset of the information set of the communicating agent), in the matching game of this paper, if one side had more information than the other, communication would never be informative. Therefore, in a matching game, it is essential to consider both sides being uncertain and holding different information.

Footnote 3: See Farrell and Rabin (1996) for a survey of the cheap talk literature.
The rest of the paper is structured as follows. Section 2 formally introduces the model. Section 3 derives the equilibrium conditions and the communication strategy discussed above in the full model (with communication), providing conditions for when communication may be informative. Section 4 illustrates the solution for the case of a uniform distribution of the private value part of the payoffs. Section 5 applies the model to a costly search for a match across an infinite population of agents. Section 6 analyzes what changes if agents have the possibility of revealing their types instead of just making statements that are not restricted to be true. In contrast to Milgrom (1981), in the equilibrium of the current model, when agents are allowed to reveal their types, the sender's not revealing information does not make the receiver believe that the information is necessarily the worst possible. We also see that the ability to reveal one's own type may make the agent, in expectation, worse off. Section 6 concludes with a discussion of the key assumptions and the intuition behind the results, and of how they contrast and relate to the results in the existing literature.
2 The Model
Two agents, named 1 and 2, are considering whether to enter into a partnership with one another. Each agent has two characteristics that we explicitly track: x and y. Thus, we will call the pair (x, y) the type of the agent. Assume agents know their own types. For simplicity, both characteristics take values only in {−1, 1}, and the a priori probability of each type (x, y) is 1/4 for any x, y ∈ {−1, 1}.
If the agents end up in a partnership, their payoffs, respectively, are

V_1 = A_1 + m(x_1, x_2) + a\, m(y_1, y_2),
V_2 = A_2 + a\, m(x_1, x_2) + m(y_1, y_2),    (1)

where A_1 and A_2 are i.i.d. random variables with cumulative distribution function F_A(·), m(x, y) = xy (i.e., equal to 1 if the characteristics match and −1 otherwise), and a is a parameter.
The three parts of the payoff may be interpreted as follows: m(x1, x2) and a m(y1, y2) are how much agent 1 values the fit with the other agent on the x-characteristic and the y-characteristic, respectively. Conversely, agent 2 values the fit on these characteristics as a m(x1, x2) and m(y1, y2), respectively. In other words, while both agents value fit with each other, they may differ (if a ≠ 1) with respect to which component of the fit they consider more important. Finally, Ak is the idiosyncratic appeal to agent k of the partnership with agent j (k ∈ {1, 2}, j = 3 − k), and could also come from fit on other characteristics that we do not explicitly model.4

Footnote 4: We make no assumptions on a, and in fact one could assume a = 1 to reduce the number of parameters in the model. However, this parameter will help us see which part of the payoff affects which part of the strategies, and will therefore provide greater intuition for the results. We will later discuss some possible interpretations of different values of a. Separating the Ak part of the payoff, and considering correlation and communication only in the rest of the payoff, which has a discrete distribution, makes the model tractable.

As a shorthand, let us write m1 ≡ m(x1, x2) and m2 ≡ m(y1, y2), indicating, respectively, whether the agents fit each other on the first and the second characteristic.
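To fix notation, the payoff structure in equation (1) can be sketched in a few lines of code. This is a minimal illustration; the specific type values and the value of a below are our own examples, not taken from the paper.

```python
# A minimal sketch of the payoff structure in equation (1).
# The specific type values and the parameter a below are illustrative.

def m(u, v):
    """Fit indicator m(u, v) = u*v: 1 if the characteristics match, -1 otherwise."""
    return u * v

def payoffs(A1, A2, x1, y1, x2, y2, a=1.0):
    """Partnership payoffs (V1, V2) from equation (1)."""
    m1, m2 = m(x1, x2), m(y1, y2)
    return A1 + m1 + a * m2, A2 + a * m1 + m2

# Example: the agents fit on the x-characteristic but not on y.
print(payoffs(A1=5.0, A2=4.0, x1=1, y1=1, x2=1, y2=-1, a=0.5))  # (5.5, 3.5)
```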
The timing, actions, and information structure of the game are as follows. In the first stage, agent k, k ∈ {1, 2}, learns her own idiosyncratic payoff component Ak and one characteristic of the other agent (agent 1 observes x2 and agent 2 observes y1), so that she also knows mk but not mj (j = 3 − k). This means that agent 1 knows the A1 + m1 part of her payoff but remains uncertain about the a m2 part. Thus, at the beginning of the game, since each agent knows a different part of the common component of the payoff, information exchange is potentially valuable to both agents. In the second stage, agent 1 sends a message about m1 to the other agent.5 Specifically, she sends one of two messages: either a message indicating that she sees fit (i.e., m1 = 1) or one indicating misfit (i.e., m1 = −1), but the message does not have to be truthful. For succinctness, let us denote the first message by 1 ("I have private information that our payoffs will be higher than what I expect you to know") and the second by 0 ("I have private information that our payoffs will be lower than what I expect you to know").6 In the third stage, the agents simultaneously decide whether to accept.

If both accept, the game ends with a partnership with the above stated payoffs. If either does not accept, the meeting ends with no partnership, with agents 1 and 2 receiving their "outside option" payoffs R1 and R2, respectively. We look for a perfect Bayesian equilibrium of the above game.

Footnote 5: To be consistent with the literature, we will sometimes refer to agent 1 as the sender and to agent 2 as the receiver.

Footnote 6: Section 6 discusses the possibility of agent 2 also being able to send a message.
To better understand the structure of the payoff of a match and the nature of the information structure, consider the payoffs when a = 1. In this case, the payoff of a match can be thought of as consisting of the private value component Ak (k ∈ {1, 2}) and the common value component CV = m1 + m2. Each agent knows her private value component but only partially knows the common value component. Specifically, in our case, CV can a priori take three values: −2, 0, or 2, with probabilities 1/4, 1/2, and 1/4, respectively. At the beginning of the game, each agent k ∈ {1, 2} can be thought of as receiving a signal sk about the common value component, equal to one part of it and either 1 (good) or −1 (bad), such that the distribution of CV conditional on sk is B(0, 2) if sk = 1 and B(−2, 0) if sk = −1, where B(a, b) is the Bernoulli random variable taking values a and b with equal probability. These assumptions on the common value and information (signal) structure are intuitive given the interpretation of the common value as coming from two components, one known to one agent and the other known to the other agent.7

Footnote 7: The intuition for the results would not change if we did not model where the signals and the common value component come from and instead simply assumed that, for example, the common value can take one of two values and agents receive i.i.d. noisy signals about it.
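This signal structure is easy to verify numerically. The following Monte Carlo sketch is our own illustration (the sample size is arbitrary): it checks that, conditional on agent 1's signal m1 = 1, the common value averages 1, the mean of B(0, 2).

```python
# A Monte Carlo sketch of the a = 1 information structure: agent 1's signal
# is m1 itself, so conditional on m1 = 1 the common value CV = m1 + m2
# should be distributed B(0, 2), with mean 1.  Sample size is arbitrary.
import random

random.seed(0)
draws = [(random.choice([-1, 1]), random.choice([-1, 1])) for _ in range(100_000)]
cv_given_good = [m1 + m2 for m1, m2 in draws if m1 == 1]
print(sum(cv_given_good) / len(cv_given_good))  # close to 1, the mean of B(0, 2)
```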
3 Solution

3.1 Some Preliminaries
In a game with cheap talk such as this one, there is always an equilibrium (or rather a set of equilibria) with uninformative communication, in which one side sends messages independent of the true type and the other side ignores any message. However, in our model, if an equilibrium with informative communication exists, equilibria with uninformative communication are not stable in the following sense. Consider a small deviation in language use, so that a particular message is used on average more frequently to indicate fit rather than misfit. Then agents who want to communicate fit would start to use that message, and agents who want to communicate misfit would stop using it. It turns out that if the equilibrium with informative communication exists and is unique, this use of the message would converge to the equilibrium with informative communication. However, we abstract from the problem of how agents coordinate on a language and look for an equilibrium with informative communication when one exists.
Another issue with communication is that, as with any language, there is more than one way of setting up the correspondence between messages and meanings that is equivalent from the point of view of the information transferred. For this reason, we introduce the following assumption, without loss of generality. For an equilibrium, denote by qℓ the probability that m1 = 1 given that agent 1 accepts and sends message ℓ ∈ {0, 1}. We restrict the language to be such that

q_1 \ge q_0,    (2)

i.e., we assume that the chance of fit given a match is at least as high when agent 1 claimed fit as when agent 1 claimed the absence of fit. In other words, a statement about a characteristic is non-negatively correlated with the true characteristic. This is without loss of generality, because if the restriction is not satisfied, messages 0 and 1 can be renamed to 1 and 0 so that it is. The problem of multiple languages can be resolved in our model in such a simple way because we consider a language with only two possible statements. The possibility that the above inequality is strict will be referred to as communication being informative.
It will also be of interest to consider the game defined above without the possibility of communication, i.e., the game where meetings do not have the second stage. We will call such a game the "matching game without communication," whereas the original game will be referred to as the game with (one-sided) communication. In particular, when communication is completely uninformative (conditions for which we will identify), the game with communication is outcome-equivalent to the game without communication.

We first analyze the model for general FA(·), asking, in particular: under which conditions on FA(·) may communication be informative? Then we illustrate the solution with the example of a uniform distribution A ∼ U(0, B) (in which case communication is informative).
3.2 Solution in the Case of No Communication
As noted before, the game with communication always has an equilibrium with uninformative communication, and we will show that under some conditions an equilibrium with informative communication does not exist. When communication is not informative, the outcome of the game is the same as if there were no communication. At the same time, when an informative communication equilibrium exists, the game without communication serves as a benchmark for deriving the value of communication to either agent and for considering the effects of communication in general.

Considering uninformative conversation greatly simplifies the strategy set, since we need to consider neither what an agent says about her attributes nor acceptance decisions that depend on the statement. Let us consider the decision problem of agent 1, keeping in mind that agent 2 faces the same problem, so everything that holds for agent 1 holds for agent 2 with indices 1 and 2 swapped.
Since agent 1's decision to accept affects the outcome only if the other agent accepts, agent 1 decides on acceptance conditional on agent 2 accepting, i.e., as if she knew that agent 2 has accepted. Agent 1 should optimally accept if and only if she expects her payoff from the partnership to be at least as high as her outside option:

E_1(A_1 + m_1 + a m_2) \equiv A_1 + m_1 + a E_1 m_2 \ge R_1,    (3)

where the index 1 on the expectation denotes that the expectation is taken over the information set of agent 1. Therefore, agent 1 accepts if and only if A1 ≥ R1 − m1 − a E1 m2.
The unconditional expected value of m2 is 1 · 1/2 + (−1) · 1/2 = 0. However, as noted above, the relevant expectation is the expectation of m2 conditional on agent 2 accepting. Denote this conditional expectation E1 m2 = E1(m2 | agent 2 accepts) by z1 and E2 m1 = E2(m1 | agent 1 accepts) by z2. Then

z_k = \frac{F_A(R_j + 1 - a z_j) - F_A(R_j - 1 - a z_j)}{2 - F_A(R_j + 1 - a z_j) - F_A(R_j - 1 - a z_j)},    (4)

where k ∈ {1, 2} and j = 3 − k. Equation (4) for k = 1, 2 forms a system of two equations in the two unknowns zk. Solving it completes the derivation of the acceptance rule of agent k, which is to accept if and only if Ak + mk ≥ Rk − a zk.
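For a concrete sense of this fixed point, the system (4) can be solved by simple iteration. The sketch below does so assuming the uniform case FA(x) = x/B; the parameter values are illustrative (they match the example used later in Section 4).

```python
# A sketch of solving the fixed-point system (4) by iteration, assuming the
# uniform case F_A(x) = x/B.  Parameter values are illustrative.

B, a, R1, R2 = 10.0, 1.0, 6.0, 4.0
FA = lambda x: min(max(x / B, 0.0), 1.0)

def z_rhs(Rj, zj):
    """Right-hand side of equation (4) as a function of the other agent's (Rj, zj)."""
    hi, lo = FA(Rj + 1 - a * zj), FA(Rj - 1 - a * zj)
    return (hi - lo) / (2 - hi - lo)

z1 = z2 = 0.0
for _ in range(100):                      # converges quickly for these values
    z1, z2 = z_rhs(R2, z2), z_rhs(R1, z1)
print(round(z1, 4), round(z2, 4))         # approximately 0.1602 and 0.2404
```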
3.3 Solution of the Game with Communication
Note that agent 1 does not receive any information between sending the message and deciding
whether to accept. On the other side, agent 2 does not care what decision to make if agent
1 does not accept. Therefore, we can assume with no change in the outcome of the matching
process that agent 1 makes the accept decision before sending the message, and if the decision
is not to accept, the meeting ends with no match before the second stage. In other words, the
game is outcome-equivalent to the game where agent 1 makes all decisions before agent 2.
Define z1ℓ and z2ℓ, where ℓ ∈ {0, 1}, as

z_{1\ell} = E_1(m_2 \mid \text{agent 2 accepts, agent 1 sent message } \ell),    (5)
z_{2\ell} = E_2(m_1 \mid \text{agent 1 accepts and sends message } \ell),    (6)
where the expectations are taken by agents 1 and 2 given their information sets after the message has been sent and received. The variables z1ℓ and z2ℓ are similar to z1 and z2 from the previous section in that they indicate the expected value of the unobserved fit given the information at the time of the accept decisions. Whereas without communication we had only one such variable per agent, the expectations now also carry the subscript of the message sent or received in the second stage.
To gain some intuition about z1ℓ and z2ℓ, consider the following. If agent 2 believes that agent 1 always tells the truth, then z20 = −1 and z21 = 1. If agent 2 doubts agent 1's message but thinks that it is indicative of (correlated with) the truth, then z21 > z20. Finally, if agent 2 does not believe agent 1's message has anything to do with the truth, z21 = z20. Agent k (k ∈ {1, 2}) only considers expected values conditional on the other agent accepting, because if the other agent j = 3 − k does not accept, agent k's payoff does not depend on her decision, and hence that decision is irrelevant.
The effect of agent 1's truthfulness on z1ℓ is less straightforward. Of course, if agent 2 does not find agent 1's message informative, then agent 2's decision is unrelated to the message, and so z10 = z11. However, if agent 1 convinces agent 2 that there is at least some truth in what she says, i.e., that z21 > z20, it could still be that z10 > z11 or that z10 < z11. In fact, as we show below, depending on the distribution of A, either is possible (but only the former may happen in an equilibrium with informative communication; the latter makes it impossible for agent 1 to convince agent 2 that her statement has anything to do with the truth).
According to (2), as a convention, we assumed that z21 ≥ z20 , i.e., that if agent 1 claimed
fit, the probability of fit is no less than if agent 1 were to claim misfit. Since we are looking for
an equilibrium with informative conversation, we will assume first that the inequality is strict,
i.e., z21 > z20 . If we find a contradiction, it would mean that an equilibrium with informative
communication does not exist.
With respect to the relation between z10 and z11, there are a priori two possibilities. One is that z10 ≤ z11, i.e., agent 1's reporting misfit does not increase the probability that, conditional on agent 2 accepting, the agents fit on the other attribute. We will show that in this case agent 1's message is not informative. The other possibility is that z10 > z11. If this holds, we will show that agent 1's message is at least partially informative, i.e., in equilibrium, z21 > z20. The conditions on FA (the CDF of the distribution of Ak) that lead to the second and the first case are, respectively, the conditions for the possibility and the impossibility of informative communication.
Proposition 1. If z10 ≤ z11, communication is not informative. Moreover, if agent 2's beliefs about m1 could be influenced by agent 1's statement, agent 1 would make the statement that increases agent 2's expectation of m1 no matter what the true m1 is.
Proof. The first claim follows from the second: if communication were informative, agent 1 would always report fit. Indeed, if agent 1 were to report misfit while communication was informative, the probability of acceptance by agent 2 would decrease (since z21 > z20, agent 2 would expect a lower payoff), and the expected value (to agent 1) of the match given agent 2's acceptance would decrease by a(z11 − z10) ≥ 0, i.e., would not increase. Hence, agent 1 always prefers to report fit, and thus such communication cannot be informative of the true fit.
Note that the only thing that prevents us from claiming that agent 1 always reports fit (i.e., including when z10 ≤ z11) is that when communication is not informative, no restriction can be placed on the optimal communication strategy besides that it is not informative to agent 2, i.e., that it does not depend on the variables (m1) unknown to agent 2.
The argument in the proof of the above proposition also provides intuition for when communication can be informative: the only potential benefit to agent 1 from reducing agent 2's belief in the fit is that such a statement could dissuade an agent 2 with a lower fit (in the part observed only by agent 2) from accepting more strongly than it would dissuade an agent 2 with a higher value of the fit. This benefit exists (is positive) if and only if z10 > z11.
Since we have proved that no informative communication may exist when z10 ≤ z11, we now consider the case z10 > z11 (the conditions for it are derived later). In this case, if agent 1 reports misfit, her expected payoff given agent 2's acceptance increases by Δz1 = z10 − z11 > 0. However, the probability of acceptance by agent 2 decreases by Δq2 = q21 − q20 ≥ 0, where

q_{k\ell} = \mathrm{Prob}(A_k + m_k \ge R_k - a z_{k\ell})    (7)

is the probability of acceptance by agent k given that the message was ℓ.
Therefore, it is optimal for agent 1 to report misfit if and only if she expects a low enough payoff and does not care too much about the probability of acceptance, but rather worries about the expected payoff given acceptance. This leads to the following proposition about the communication strategy.
Proposition 2. When communication is informative and agent 1 entertains the possibility of acceptance, agent 1 reports fit regardless of actual fit when A1 is high enough, reports misfit regardless of actual fit when A1 is low enough, and truthfully reports fit or misfit for an intermediate range of A1.
Proof. First, note that, as shown before, the "communication is informative" condition implies z21 > z20 (and q21 > q20) by definition, and implies z10 > z11 by Proposition 1. Comparing the expected payoffs to agent 1 from reporting fit and misfit while accepting (i.e., conditional on her expected payoff exceeding her reservation value R1), one obtains that it is optimal for agent 1 to report misfit (rather than fit) if and only if

(A_1 + m_1 + a z_{10} - R_1)\, q_{20} > (A_1 + m_1 + a z_{11} - R_1)\, q_{21}.    (8)

Rearranging the terms in (8), we obtain the equivalent condition

A_1 + m_1 < R_1 - a z_{10} + \frac{a\, q_{21} \Delta z_1}{\Delta q_2},    (9)

where the last term is positive (recall that Δz1 ≡ z10 − z11). In other words, agent 1 should report fit when she expects a high enough payoff. When agent 1 expects a lower A1 + m1, but still such that A1 + m1 ≥ R1 − a z10, it is optimal for her to accept but report misfit. When A1 + m1 is lower still, agent 1 should reject, and then it does not matter what she reports.
Let us restate the optimal statement by agent 1 derived in the above proof. Denote

R_a \equiv R_1 - a z_{10} \quad \text{and} \quad R_f \equiv R_1 - a z_{10} + \frac{a\, q_{21} \Delta z_1}{\Delta q_2}.    (10)

Agent 1 accepts if and only if

A_1 + m_1 \ge R_a.    (11)

Furthermore, when accepting, the communication strategy of agent 1 as a function of A1 and m1 is

\text{message} = \begin{cases} \text{"I see fit" } (\ell = 1) & \text{if } A_1 \ge R_f + 1, \\ \text{the truth } (\ell = 1 \text{ if and only if } m_1 = 1) & \text{if } R_f - 1 < A_1 < R_f + 1, \\ \text{"I see misfit" } (\ell = 0) & \text{if } A_1 \le R_f - 1. \end{cases}    (12)

Note that Ra < Rf, i.e., there is a non-empty interval of outcomes of A1 for which it is optimal for agent 1 to accept but report misfit.
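A direct transcription of the accept rule (11) and the message rule (12) may be useful. In this sketch, Ra and Rf are taken as given, assumed to have been computed from equation (10).

```python
# A sketch of agent 1's behavior per (11) and (12); Ra and Rf are taken as
# given, assumed to be computed from equation (10).

def agent1_action(A1, m1, Ra, Rf):
    """Return (accept, message); message 1 means "I see fit", 0 "I see misfit"."""
    accept = A1 + m1 >= Ra                 # accept rule (11)
    if A1 >= Rf + 1:
        message = 1                        # pretend fit regardless of m1
    elif A1 <= Rf - 1:
        message = 0                        # pretend misfit regardless of m1
    else:
        message = 1 if m1 == 1 else 0      # truth-telling in the middle range
    return accept, message
```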
We now return to the question of formulating the conditions for informativeness of communication (i.e., the conditions that yield z21 > z20 and z10 > z11) in terms of the distribution of A.

Proposition 3. Let fA(·) be the probability density function of A1. If fA(X)/(1 − FA(X)) is increasing in X, communication is informative. On the other hand, if it is non-increasing (i.e., weakly decreasing), communication is not informative.
Proof. See Appendix.
To see intuitively why communication can be informative when the sender does not have complete information, consider the case in which the sender is indifferent between accepting and not. In this case, the sender does not particularly care whether the receiver accepts; what she does care about is increasing the payoff in the event that the receiver accepts. In other words, the sender cares about dissuading the receiver from accepting when the receiver has unfavorable private information about the potential match. Under certain conditions on the distribution of the private component of the payoff (the ones in Proposition 3), increasing the receiver's reservation value dissuades the receiver from accepting with higher probability when the receiver has unfavorable information about the common value. Hence, the sender wants to convey misfit when she expects a low payoff. The sender's statement that she observes fit therefore correlates with actual fit and is informative.
To better understand the condition for informativeness of communication (Proposition 3), consider the following interpretation. What we needed to show is that when agent 1's message of misfit lowers agent 2's expectation of the unknown component of her payoff, so that agent 2 requires a higher value of A2 + m2 to accept, the fact that agent 2 accepts indicates that the m2 term is higher.

Given this intuitive interpretation, one could be tempted to believe that condition (28) always holds. However, this is not the case: although it may seem intuitive, it is not true that if a sum of random variables is higher, each component must be higher in expectation. The function fA(x)/(1 − FA(x)), where fA(·) and FA(·) are the probability density and cumulative distribution functions of the random variable A, is known in statistics as the hazard rate of the distribution of A. The condition that it is increasing can be interpreted as the distribution of A having thin tails. To see this, write the CDF in the form

F_A(x) = 1 - C e^{-\int_0^x c(t)\, dt}.

Then fA(x)/(1 − FA(x)) is increasing if c(x) is increasing and decreasing if c(x) is decreasing.8

Footnote 8: It is also easy to construct a two-point distribution of A for which condition (28) does not hold.
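The thin-tail condition is easy to check for standard distributions. The sketch below uses our own examples: the hazard rate is increasing for a uniform distribution, constant for an exponential (the knife-edge case), and decreasing for a Pareto tail.

```python
# A sketch comparing hazard rates fA/(1 - FA) for three illustrative
# distributions: increasing (uniform), constant (exponential), decreasing (Pareto).

def hazard_uniform(x, B=10.0):
    return (1.0 / B) / (1.0 - x / B)   # = 1/(B - x): increasing on (0, B)

def hazard_exponential(x, lam=1.0):
    return lam                          # constant in x: the knife-edge case

def hazard_pareto(x, alpha=2.0):
    return alpha / x                    # decreasing for x above the scale parameter

for x in (2.0, 4.0, 8.0):
    print(x, round(hazard_uniform(x), 3), hazard_exponential(x), round(hazard_pareto(x), 3))
```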
Coming back to the communication strategy (Proposition 2), one can observe that whenever communication is informative, there is also a range of observed payoff values in which an agent prefers to pretend misfit. One could think of a situation in which one party tries to upset the other on purpose as a representation of this outcome ("let's see if you still want to be with me even if I am nasty to you"). This potential outcome implies that communication cannot be 100% informative.
3.4 Value of the Game and Value of Communication
Let us consider how the possibility of communication by agent 1 benefits (or hurts) each agent. To find this, we derive the value of the game without communication and the value of the game (to each agent) with communication.
In the absence of communication, the value of the game (i.e., the increase in expected utility over the value of the outside option) for agent k ∈ {1, 2} is

G_k^{\mathrm{nocomm}} = \big(1 - H(R_j - a z_j)\big) \int_{x > R_k - a z_k} (x + a z_k - R_k)\, dH(x),    (13)

where j = 3 − k is the other agent and H(·) is the CDF of the distribution of Ak + mk, i.e., H(x) = (FA(x − 1) + FA(x + 1))/2. The value in the first parenthesis is the probability that the other agent (agent j) accepts, and the integral is the expected payoff from the partnership (given that both agents accept) times the probability that agent k accepts.
With communication, the derivation is similar, except that one needs to add separately the potential gains from the partnership when agent 1 reports fit and when she reports misfit. For agent 1, the value of the game becomes

G_1^{\mathrm{comm}} = q_{20} \int_{R_a}^{R_f} (x - R_a)\, dH(x) + q_{21} \int_{x > R_f} (x + a z_{11} - R_1)\, dH(x),    (14)

where q2ℓ = 1 − H(R2 − a z2ℓ) is the probability that agent 2 accepts given that agent 1 sends message ℓ. For agent 2, the value of the game becomes

G_2^{\mathrm{comm}} = \big(1 - H(R_f)\big) \int_{u \ge R_2 - a z_{21}} (u + a z_{21} - R_2)\, dH(u) + \big(H(R_f) - H(R_a)\big) \int_{u \ge R_2 - a z_{20}} (u + a z_{20} - R_2)\, dH(u).    (15)

The value of communication by agent 1 to agent k is G_k^{comm} − G_k^{nocomm}.
4 An Example: Uniform Distribution of A
To illustrate the equilibrium conditions and the possible relationships between the values of the game to each agent with and without the possibility of communication, let us consider the game with A distributed uniformly on [0, B]. To avoid considering multiple cases in the CDF of A, let us restrict attention to parameter values such that RK ∈ (1 + a, B − 1 − a) for K = 1, 2 (i.e., the outside option value is well within the bounds of the support of the distribution). With this assumption, we obtain9

z_{21} = \frac{1}{B - R_f}, \quad z_{20} = 0, \quad z_{11} = \frac{B - R_f}{(B - R_2)(B - R_f) + a}, \quad \text{and} \quad z_{10} = \frac{1}{B - R_2}.    (16)

Footnote 9: This follows immediately from equations (22), (23), and (27) in the appendix.
It is straightforward to check that the conditions z21 > z20 and z11 < z10, necessary and sufficient for communication to be informative, are satisfied. We also have q2ℓ = (B − R2 + a z2ℓ)/B. Substituting the above values into the defining equation of Rf, we obtain Rf = R1.10 By the definition of Ra, we obtain Ra = R1 − a/(B − R2). In other words, agent 1 decides to accept and state that she sees fit if the known component of her payoff is at least the value of her outside option. If she sees a value lower than her outside option by as much as (but no more than) a/(B − R2), she accepts but claims that she sees misfit. Again, this strategy does not mean that her message is uninformative (in fact, it is informative), because the decision of when to state fit correlates with the value of the fit she observes.

Footnote 10: One could expect that the right-hand side of the definition of Rf, after substitution, would depend on Rf, since the substituted functions depend on it. This would give a recursive equation that could be solved for Rf. In our case, however, after simplification, Rf completely cancels from the right-hand side.
Let us now calculate the value of the game and the value of communication. The explicit solution for the values of the game and the values of communication is too cumbersome to report. Some of its properties are, however, curious. First, it turns out, unsurprisingly, that when the value of the outside option is the same across agents, the possibility of communication (vs. the game with no communication) increases the expected payoffs of both agents. Somewhat more surprising is that, in this case, communication by agent 1 increases the expected payoffs of the two agents by exactly equal amounts. Moreover, it turns out that the payoffs from the possibility of communication do not depend on which agent is able to communicate, even when the two agents have different values of the outside option.
Note that the game with communication we considered is inherently asymmetric between the agents, since only agent 1 can talk. Therefore, there is no reason to expect that the agents would not care who is able to communicate (as long as one of them can). One could, for example, have expected the sender to be in a more advantageous position than the receiver, since she controls the flow of information. However, while the sender controls some flow of information, the receiver ends up with more information (since the receiver finds the sender's message informative). In the particular example of this section, the above two effects turn out to exactly balance, so that the expected benefit of communication is the same no matter which agent is able to communicate. This seems to be a consequence of the uniform distribution of the private value component Ak.
Another interesting implication of the solution is that the possibility to communicate can actually hurt an agent (and, by the symmetry above, it can hurt either one). In particular, it hurts the agent whose outside option is sufficiently above the outside option of the other agent. For example, if B = 10, a = 1, R1 = 6, and R2 = 4, then the values of the game without and with communication are, respectively, 0.5846 and 0.57208 for agent 1, and 0.82578 and 0.83208 for agent 2. In other words, the possibility to communicate hurts agent 1 by 0.0125 and helps agent 2 by 0.0063, thus reducing the social surplus of the game but making welfare more equal between the two agents (since communication increases the expected utility of the agent with the lower outside option and lowers the expected utility of the agent with the higher outside option). It is also possible (consider, for example, R1 = 7 and R2 = 3 in the above example) that communication hurts both agents.
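The with-communication values can be checked numerically. The following sketch is our own verification code (the helper names integral, H, and segs are ours): it evaluates equations (14) and (15) for the uniform case using the closed forms (16), and reproduces the with-communication values 0.57208 and 0.83208 reported above.

```python
# A numeric check of the uniform example: evaluates equations (14)-(15)
# using the closed forms (16); helper names (integral, H, segs) are ours.

B, a, R1, R2 = 10.0, 1.0, 6.0, 4.0

Rf = R1                           # Rf = R1 after simplification
z21, z20 = 1.0 / (B - Rf), 0.0    # from equation (16)
z10 = 1.0 / (B - R2)
z11 = (B - Rf) / ((B - R2) * (B - Rf) + a)
Ra = R1 - a * z10                 # Ra = R1 - a/(B - R2)

# Density of A_k + m_k: 1/(2B) on [-1, 1] and [B-1, B+1], 1/B on [1, B-1].
segs = [(-1.0, 1.0, 0.5 / B), (1.0, B - 1.0, 1.0 / B), (B - 1.0, B + 1.0, 0.5 / B)]

def integral(lo, hi, t):
    """Exact integral of (x - t) dH(x) over [lo, hi] for the piecewise-uniform H."""
    out = 0.0
    for s_lo, s_hi, dens in segs:
        l, h = max(lo, s_lo), min(hi, s_hi)
        if l < h:
            out += dens * ((h - t) ** 2 - (l - t) ** 2) / 2.0
    return out

def H(x):
    """CDF of A_k + m_k: (F_A(x - 1) + F_A(x + 1)) / 2."""
    FA = lambda u: min(max(u / B, 0.0), 1.0)
    return 0.5 * (FA(x - 1.0) + FA(x + 1.0))

q21 = (B - R2 + a * z21) / B      # agent 2's acceptance probability after message 1
q20 = (B - R2 + a * z20) / B      # ... after message 0

G1 = q20 * integral(Ra, Rf, Ra) + q21 * integral(Rf, B + 1.0, R1 - a * z11)   # eq. (14)
G2 = ((1.0 - H(Rf)) * integral(R2 - a * z21, B + 1.0, R2 - a * z21)
      + (H(Rf) - H(Ra)) * integral(R2 - a * z20, B + 1.0, R2 - a * z20))      # eq. (15)

print(round(G1, 5), round(G2, 5))  # 0.57208 and 0.83208, matching the text
```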
5 A Matching Game Between Two Populations

5.1 Model Reformulation
Instead of just two agents 1 and 2, assume that there are two infinite populations I and J, agents of which are interested in matching with agents of the opposite population. For a partnership to be possible, an agent must go to a meeting with an agent from the other population. Random meetings are set up at a cost s to each agent in the meeting.11 At a meeting, first, each agent k learns her Ak and mk and remains uncertain about mj. Second, the agent from population I makes a statement about her mi. Then, both agents decide whether to accept or reject the partnership. If either one rejects, the partnership is not formed and they are free to search again. If both accept, the partnership is signed, the partnership payoffs realize, and the game ends. Assume that all Ak and mk are i.i.d. across meetings. Note that this is consistent with agents having characteristics that do not change from meeting to meeting, because mk depends on the fit between characteristics rather than on the value of an agent's own characteristic. Normalize the value of the outside option of not searching for a partner to 0 for agents of either population.

Footnote 11: Alternatively, we could assume that agents engage in a stream of costless meetings with exogenous timing but discount the future. This is similar, because the expected value lost due to time discounting corresponds to a search cost per meeting.
5.2 Solution
As in any setup with sequential search and incomplete information in which agents' possibilities and information sets do not change with the number of searches, the decision to accept follows a "reservation rule" strategy (Stigler, 1961). That is, the optimal strategy for an agent k ∈ K, where K = I or J, is to set up meetings until the expected payoff from the match at hand is at or above the reservation value RK. This reservation value is also the value of the game to an agent k ∈ K, because, in equilibrium, the expected value from playing the game for agent k ∈ K is RK. Given such a strategy, we can directly apply our analysis of the game between two agents in the base model. However, after solving for the communication and acceptance strategies given the reservation values RI and RJ (which play the roles of the outside option values R1 and R2 used before), we need to derive the equilibrium equations for RI and RJ. These equations come from the condition that the incremental cost of search s equals the incremental benefit of search, which is the analogue of the value of the game derived in Section 3.4 with RK in place of the outside option value. This process is elaborated in the following subsections.
5.2.1 Without Communication
The equilibrium condition on RI is that the search cost s of the next meeting equals the expected benefit, which is the probability that agent j accepts and agent i expects a payoff above RI, times the expected excess over RI of the value expected by agent i in that event.12

Footnote 12: It is easy to see that this is the condition when agent i has the ability to return to the previous meeting and reconsider her acceptance decision. Since nothing changes between meetings (neither the distribution of agents nor agent i's beliefs about her chances in subsequent meetings), agent i will never want to reconsider. Therefore, the condition still holds when return is not possible, as assumed in this model.
Formally, the reservation value RI of an agent i ∈ I satisfies

s = \mathrm{Prob}(j \text{ accepts}) \int_{x > R_I} (x - R_I)\, dF_{E_i(V_i \mid j \text{ accepts})}(x),    (17)

where F_{E_i(V_i \mid j \text{ accepts})}(·) is the cumulative distribution function of the value of Vi expected by agent i at the time she makes her acceptance decision (i.e., at stage 3), conditional on agent j deciding to accept.
The CDF of Ai + mi (as known to agent i before the first stage of a meeting) is

H(u) = \big(F_A(u + 1) + F_A(u - 1)\big)/2.    (18)

Let R_K^0 be the reservation value of Ak + mk at which agent k ∈ K (K = I or J) accepts, given her expected value zK of mk′ conditional on the other agent k′ accepting (we have R_K^0 = R_K − a z_K). Then we have the following equations for the equilibrium reservation values R_I^0 and R_J^0 for agents in I and J, respectively:
s = \big(1 - H(R_J^0)\big) \int_{x > R_I^0} (x - R_I^0)\, dH(x), \qquad s = \big(1 - H(R_I^0)\big) \int_{x > R_J^0} (x - R_J^0)\, dH(x),    (19)

where 1 − H(R_J^0) is the probability that agent j accepts agent i in a meeting. Note that zI and zJ do not enter the above equations.
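The system (19) is easy to solve numerically. The sketch below assumes the uniform case A ∼ U(0, B), restricts attention to a symmetric equilibrium R_I^0 = R_J^0 (our simplification), and uses an illustrative search cost s.

```python
# A sketch of the symmetric no-communication search equilibrium (19), assuming
# A ~ U(0, B) and equal reservation values across the two populations (our
# simplification); the search cost s below is illustrative.

B, s = 10.0, 0.5

def H(x):
    FA = lambda u: min(max(u / B, 0.0), 1.0)
    return 0.5 * (FA(x - 1.0) + FA(x + 1.0))

def gain(R0):
    """Right-hand side of (19) at a symmetric R_I0 = R_J0 = R0, for R0 in (1, B-1)."""
    # dH has density 1/B on [R0, B-1] and 1/(2B) on [B-1, B+1].
    tail = (B - 1 - R0) ** 2 / (2 * B) + ((B + 1 - R0) ** 2 - (B - 1 - R0) ** 2) / (4 * B)
    return (1.0 - H(R0)) * tail

lo, hi = 1.0, B - 1.0
for _ in range(60):                   # bisection; gain(R0) is decreasing in R0
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if gain(mid) > s else (lo, mid)
print(round(0.5 * (lo + hi), 4))      # equilibrium threshold R0 = R_K - a z_K
```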
5.2.2 With Communication
For agent i ∈ I, the equilibrium condition on RI is that s equals the sum of: (a) the probability of sending the message of fit and accepting, multiplied by the probability of j accepting, multiplied by the expected excess over RI of the expected value of the match for i given that j accepted and i accepted and sent the message of fit; and (b) the analogous expression for the message of misfit. Formally, this equation is

s = \big(1 - H(R_J - a z_{J1})\big) \int_{u \ge R_f} (u + a z_{I1} - R_I)\, dH(u) + \big(1 - H(R_J - a z_{J0})\big) \int_{R_a}^{R_f} (u - R_a)\, dH(u),    (20)

where zKℓ for K = I, J and ℓ = 0, 1 are defined as before, with I corresponding to agent 1 and J to agent 2. Similarly, the equation for RJ is that s equals the sum of: (a) the probability that i sends the message of fit and accepts, multiplied by the probability that j accepts, multiplied by the expected excess over RJ of the value of the match for j given that j accepts; and (b) the analogous expression for the message of misfit. Formally:

s = \big(1 - H(R_f)\big) \int_{u \ge R_J - a z_{J1}} (u + a z_{J1} - R_J)\, dH(u) + \big(H(R_f) - H(R_a)\big) \int_{u \ge R_J - a z_{J0}} (u + a z_{J0} - R_J)\, dH(u).    (21)
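In the uniform case, conditions (20)-(21) can be solved numerically by reusing the Section 4 closed forms (Rf = RI, Ra = RI − a/(B − RJ), and so on). The sketch below is ours: the damped iteration and the value of s are illustrative devices, not the paper's, and the step size may need tuning.

```python
# A sketch of solving the with-communication equilibrium conditions (20)-(21)
# in the uniform case, reusing the Section 4 closed forms; the damped
# iteration and the search cost s are our illustrative devices.

B, a, s = 10.0, 1.0, 0.5

def H(x):
    FA = lambda u: min(max(u / B, 0.0), 1.0)
    return 0.5 * (FA(x - 1.0) + FA(x + 1.0))

segs = [(-1.0, 1.0, 0.5 / B), (1.0, B - 1.0, 1.0 / B), (B - 1.0, B + 1.0, 0.5 / B)]

def integral(lo, hi, t):
    """Integral of (u - t) dH(u) over [lo, hi] for the piecewise-uniform H."""
    out = 0.0
    for s_lo, s_hi, dens in segs:
        l, h = max(lo, s_lo), min(hi, s_hi)
        if l < h:
            out += dens * ((h - t) ** 2 - (l - t) ** 2) / 2.0
    return out

def benefits(RI, RJ):
    """Right-hand sides of (20) and (21), valid while RI, RJ stay interior."""
    Rf, Ra = RI, RI - a / (B - RJ)
    zJ1, zJ0 = 1.0 / (B - Rf), 0.0
    zI1 = (B - Rf) / ((B - RJ) * (B - Rf) + a)
    bI = ((1 - H(RJ - a * zJ1)) * integral(Rf, B + 1, RI - a * zI1)
          + (1 - H(RJ - a * zJ0)) * integral(Ra, Rf, Ra))                     # (20)
    bJ = ((1 - H(Rf)) * integral(RJ - a * zJ1, B + 1, RJ - a * zJ1)
          + (H(Rf) - H(Ra)) * integral(RJ - a * zJ0, B + 1, RJ - a * zJ0))    # (21)
    return bI, bJ

RI = RJ = 5.0
for _ in range(500):               # damped iteration toward benefit = s on both sides
    bI, bJ = benefits(RI, RJ)
    RI += 0.2 * (bI - s)           # raise R_I while the search benefit exceeds s
    RJ += 0.2 * (bJ - s)
print(round(RI, 3), round(RJ, 3))
```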
6 Discussion and Conclusion
So far, we have considered messages that do not have to be truthful. While this is the case in many situations, it is also possible that agents can sometimes offer proof of their statements, or, in other words, show (or "reveal") the data rather than make a statement about it. In such an environment, there are two decisions to make: which statement to make, and whether to offer the proof. A statement with proof offered is, practically, a revelation of one's own type. Again, we consider the possibility of costless revelation of type by agents from population I. Consider the model of Section 2 with the message restricted to be either empty or truthful.

Similarly to the communication strategy derived before, it is easy to see that the revelation strategy is the following: reveal fit for large values of A, reveal misfit for small values of A, but do not reveal if there is misfit and A is large, or if there is fit and A is small. Hence, when the type is not revealed, agent 2 assumes that agent 1 most likely does not fit, but may fit. This situation is interesting to compare to the outcome of the game of persuasion with costless messages in Milgrom (1981). In that game, the sender (e.g., a salesman), who has more information than the receiver (e.g., the buyer), reveals in equilibrium all but the worst of the possible information if the receiver knows that the sender has the information, lest she be assumed to have the worst possible knowledge. In the current model, however, when the sender is close to indifferent between accepting and not, she has an incentive to make the receiver believe that the information is bad rather than good; therefore, when the sender refuses to reveal information, the receiver allows for the possibility that the sender only pretends to have bad news. It may be interesting to note that the condition on the distribution of A under which the revelation strategy is not one of full disclosure (i.e., under which the incentive to pretend misfit exists) is similar to the one that makes the communication strategy of the model in Section 2 partially informative.
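A minimal sketch of the revelation rule described at the start of this paragraph follows; the thresholds t_lo < t_hi are hypothetical stand-ins for the equilibrium cutoffs, which are not derived in closed form here.

```python
# A sketch of the revelation rule described above; t_lo < t_hi are
# hypothetical threshold values standing in for the equilibrium cutoffs.

def reveal_type(A1, m1, t_lo, t_hi):
    """Return True if agent 1 shows proof of m1, per the strategy in the text."""
    if m1 == 1:
        return A1 >= t_hi   # reveal fit only when A1 is large
    else:
        return A1 <= t_lo   # reveal misfit only when A1 is small
```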
Let us note two key differences between our model and what has usually been considered in the literature on cheap talk (e.g., Crawford and Sobel, 1982; Forges, 1990; Aumann and Hart, 2003; Krishna and Morgan, 2004). The first key assumption is that both parties are able to take actions, and the only actions are to accept or not. This assumption is key to understanding the potential for conversation to be uninformative. There are two barriers to the informativeness of conversation in our model. First, the sender does not have to worry as much about preventing any action by the receiver, since she can herself decide not to accept. Second, the sender's action (accept or not accept) already conveys some information to the receiver, so less uncertainty remains to be conveyed. The second point may require some clarification. In our model setup, we have assumed that the accept decisions are simultaneous. However, since one party's accept decision is irrelevant if the other party decides not to accept, the accept decisions at a given meeting can be made as if it were known that the other party accepted in this particular meeting. This means that the outcome would be unchanged if the decisions were made sequentially (regardless of who decides first).
It is this simple and two-sided structure of the agents' accept decisions (the first assumption) that makes conversation uninformative if the receiver does not have any private information about the part of the sender's payoff unknown to the sender. Therefore, allowing the receiver to know something the sender does not (the second assumption) is essential for the informativeness of communication by the sender.13 Hence, the second key assumption is that both parties have private information of interest to the other party, i.e., each party knows a part of the common component of the payoff unknown to the other party. The level of informativeness of communication by an agent in a matching game depends on the uncertainty and increases with how much payoff-relevant information the communicating agent lacks but the other agent has. This result contrasts with the result of Crawford and Sobel (1982) that communication may be informative even if the receiving party has no private information, and is due to the specifics of the matching game (namely, the first key assumption identified above). Indeed, it rests on the fact that the sender makes an accept decision as well, and therefore the sender can herself prevent a match that she does not desire (and she knows this).

Footnote 13: Baliga and Morris (2002) derive a similar result when the communication is about actions and the effect of communication is coordination on equilibrium actions rather than information about player types.
Appendix
Proof of Proposition 3
Agent 1's strategy defined by (11) and (9) leads to the following equations for z2ℓ:

z_{21} = \frac{\mathrm{Prob}(A_1 > R_f - 1) - \mathrm{Prob}(A_1 > R_f + 1)}{\mathrm{Prob}(A_1 > R_f - 1) + \mathrm{Prob}(A_1 > R_f + 1)}    (22)

and

z_{20} = \frac{\mathrm{Prob}(A_1 \in (R_a - 1, R_f - 1)) - \mathrm{Prob}(A_1 \in (R_a + 1, R_f + 1))}{\mathrm{Prob}(A_1 \in (R_a - 1, R_f - 1)) + \mathrm{Prob}(A_1 \in (R_a + 1, R_f + 1))},    (23)

which, in terms of F_A(·), imply

z_{21} = \frac{F_A(R_f + 1) - F_A(R_f - 1)}{2 - F_A(R_f + 1) - F_A(R_f - 1)}    (24)

and

z_{20} = \frac{F_A(R_f - 1) - F_A(R_a - 1) - F_A(R_f + 1) + F_A(R_a + 1)}{F_A(R_f - 1) - F_A(R_a - 1) + F_A(R_f + 1) - F_A(R_a + 1)}.    (25)

Note that Proposition 2 does not yet imply that if Rf > Ra, and hence there is a truth-telling interval, then z21 > z20. However, for the message to be informative, we need this to be satisfied. Simplifying the condition z21 > z20, we obtain the following result:
Lemma 1. Conversation can be informative only when

\frac{1 - F_A(R_f - 1)}{1 - F_A(R_f + 1)} > \frac{F_A(R_f - 1) - F_A(R_a - 1)}{F_A(R_f + 1) - F_A(R_a + 1)}.    (26)

Since agent 2's strategy is to accept if and only if A2 + m2 + a z2ℓ ≥ R2, where ℓ is the message received, we have the following equation for z1ℓ:

z_{1\ell} = \frac{F_A(R_2 - a z_{2\ell} + 1) - F_A(R_2 - a z_{2\ell} - 1)}{2 - F_A(R_2 - a z_{2\ell} + 1) - F_A(R_2 - a z_{2\ell} - 1)}.    (27)
The following lemma derives a sufficient condition for z10 > z11, which, according to Proposition 1, we need for communication to be informative.

Lemma 2. If

\frac{f_A(X)}{1 - F_A(X)} \text{ increases in } X,    (28)

then z10 > z11 whenever z21 > z20.
Proof. Let X_1 = R_2 − a z_{21} and X_2 = R_2 − a z_{20}, and assume z_{21} > z_{20}, implying X_2 > X_1. Then equations (27) imply that for z_{10} > z_{11}, it is sufficient that

\frac{F_A(X + 1) - F_A(X - 1)}{2 - F_A(X + 1) - F_A(X - 1)}

increases in X on [X_1, X_2].

For any two functions g(x) and h(x), we have that h(x)/g(x) is decreasing if and only if (h(x) − g(x))/g(x) ≡ h(x)/g(x) − 1 is decreasing. Therefore, g(x)/h(x) is increasing if and only if g(x)/(h(x) − g(x)) is increasing.

Applying this to the fraction in question, we obtain that the above sufficient condition simplifies to the requirement that

\frac{F_A(X + 1) - F_A(X - 1)}{1 - F_A(X + 1)}

be increasing on [X_1, X_2]. A function is increasing if its derivative is positive. The derivative of the above fraction with respect to X equals

\frac{(f_A(X + 1) - f_A(X - 1))(1 - F_A(X + 1)) + (F_A(X + 1) - F_A(X - 1)) f_A(X + 1)}{(1 - F_A(X + 1))^2} = \frac{f_A(X + 1)(1 - F_A(X - 1)) - f_A(X - 1)(1 - F_A(X + 1))}{(1 - F_A(X + 1))^2},

which is positive if and only if

\frac{f_A(X + 1)}{1 - F_A(X + 1)} > \frac{f_A(X - 1)}{1 - F_A(X - 1)},

i.e., it is positive when f_A(X)/(1 − F_A(X)) is increasing.
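As a numeric sanity check of Lemma 2 (our own illustration, for the uniform case with the Section 4 parameter values):

```python
# A numeric check of Lemma 2 in the uniform case (increasing hazard rate):
# with z21 > z20, equation (27) should give z10 > z11.  Values are illustrative.

B, a, R2 = 10.0, 1.0, 4.0
FA = lambda x: min(max(x / B, 0.0), 1.0)

def z1_of(z2l):
    """Equation (27): z_1-ell as a function of the belief z_2-ell."""
    X = R2 - a * z2l
    return (FA(X + 1) - FA(X - 1)) / (2 - FA(X + 1) - FA(X - 1))

z21, z20 = 0.25, 0.0                  # z21 > z20
print(z1_of(z20) > z1_of(z21))        # True: z10 > z11, as the lemma predicts
```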
We are now ready to prove the proposition. If fA(x)/(1 − FA(x)) is non-increasing, then it is easy to see from the proof of Lemma 2 that z21 > z20 leads to z10 ≤ z11, and therefore it is optimal for agent 1 to always pretend fit; hence z21 = z20, i.e., communication is not informative.

We now show that condition (28) is sufficient for inequality (26) to hold. Indeed, inequality (26) can be rewritten as

\frac{1 - F_A(R_f - 1)}{F_A(R_f - 1) - F_A(R_a - 1)} > \frac{1 - F_A(R_f + 1)}{F_A(R_f + 1) - F_A(R_a + 1)}.    (29)

The above inequality is satisfied if the function (1 − F_A(x))/(F_A(x) − F_A(x − b)) is decreasing, where b = R_f − R_a > 0. Differentiating this function with respect to x, we obtain that it is decreasing if f_A(x − b)/(1 − F_A(x − b)) < f_A(x)/(1 − F_A(x)), which is true if f_A(x)/(1 − F_A(x)) is increasing.

Hence, if fA(x)/(1 − FA(x)) is increasing, z21 > z20 implies z10 > z11, and with the optimal message choice by agent 1, we have z21 > z20 satisfied, i.e., communication is informative.
REFERENCES

Aumann, Robert J., and Sergiu Hart (2003), "Long Cheap Talk," Econometrica, 71(6), 1619-1660.

Baliga, Sandeep, and Stephen Morris (2002), "Co-ordination, Spillovers, and Cheap Talk," Journal of Economic Theory, 105(2), 450-468.

Crawford, Vincent P., and Joel Sobel (1982), "Strategic Information Transmission," Econometrica, 50(6), 1431-1451.

Farrell, Joseph, and Matthew Rabin (1996), "Cheap Talk," Journal of Economic Perspectives, 10(3), 103-118.

Forges, Françoise (1990), "Equilibria with Communication in a Job Market Example," Quarterly Journal of Economics, 105(2), 375-398.

Krishna, Vijay, and John Morgan (2004), "The Art of Conversation: Eliciting Information from Experts Through Multi-Stage Communication," Journal of Economic Theory, 117, 147-179.

Milgrom, Paul (1981), "Good News and Bad News: Representation Theorems and Applications," The Bell Journal of Economics, 12(2), 380-391.

Milgrom, Paul, and John Roberts (1986), "Price and Advertising as Signals of New Product Quality," Journal of Political Economy, 94, 796-821.

Pesendorfer, Wolfgang (1995), "Design Innovation and Fashion Cycles," American Economic Review, 85(4), 771-792.

Spence, Michael (1973), "Job Market Signaling," Quarterly Journal of Economics, 87(3), 355-374.

Stigler, George J. (1961), "The Economics of Information," Journal of Political Economy, 69(3), 213-225.