Screening Ethics when Honest Agents Care about Fairness

Ingela Alger∗ and Régis Renault∗∗

August 2002
Abstract
We explore the potential for discriminating between honest and dishonest agents, when a
principal faces an agent with private information about the circumstances of the exchange
(good or bad). When honest agents reveal circumstances truthfully independently of the
contract offered, the principal leaves a rent only to dishonest agents (even if honest agents
are willing to lie about their ethics); the principal is able to screen between good and bad
circumstances. In contrast, if honest behavior is conditional on the contract being fair,
the principal cannot screen along the ethics dimension. If the probability that the agent is
dishonest is large, the optimal mechanism is as if the agent were dishonest with certainty
(standard second best). Otherwise, it is as if the agent were honest with certainty (first best).
In the latter case, the principal is unable to screen between circumstances if the agent is dishonest.
Keywords: ethics, honesty, adverse selection, screening.
JEL classifications: D82
*Economics Department, Boston College, Chestnut Hill, MA 02467. E-mail: [email protected]
**Université de Cergy-Pontoise, ThEMA. E-mail: [email protected]
Acknowledgement: We are grateful to seminar participants at Université de Caen, Erasmus
University Rotterdam, ESEM 99, Université de Lille 3, Université de Lyon 2, MIT, Université
de Paris X, Université de Perpignan, University of St Andrews, Stockholm School of Economics,
Université des Sciences Sociales de Toulouse, and University of Virginia for useful comments. The
usual disclaimer applies.
1 Introduction
Adam Smith has taught us that we may trade with others without much regard for their ethics: “It is not from the benevolence of the Butcher (...) that we expect our dinner.”1
As long as the invisible hand is at work, ethics is irrelevant. However, in the extensive
research devoted to the shortcomings of the invisible hand, it may no longer be innocuous to postulate opportunistic economic agents, as is typically done. For instance, in the public
goods provision problem, the emphasis has been on inefficiencies resulting from unrestrained
opportunism. Yet there is some evidence of somewhat more scrupulous attitudes regarding
public goods financing. The empirical studies on tax compliance surveyed by Andreoni,
Erard and Feinstein (1998) find that a large number of taxpayers report their income
truthfully, and that those who cheat do so by fairly small amounts. They further conclude
that the IRS audit and penalty rates are too low to justify these findings if all taxpayers act
strategically.2 Similar conclusions have been reached in various experiments on voluntary
public goods financing (see the survey by Dawes and Thaler, 1988, and the references
therein). In one of the experiments, those who gave money “indicated that their motive was
to ‘do the right thing’ irrespective of the financial payoffs.” (p.194) In the context of work
relations, the use of pre-employment integrity tests suggests that employers acknowledge a
potential heterogeneity in ethics and find it useful to discriminate on this basis.3 Our goal
is to investigate the possibility of such screening among agents with different ethics, using
standard tools of economic theory.4
Since the formalization of honest behavior is obviously a difficult problem, we choose
to consider the simplest possible definition and use it in a simple and well-known framework.
1. Wealth of Nations, Book I, Chapter II.

2. See also Roth, Scholz and Witte (1989) for survey evidence.

3. See Ryan et al. (1997) for some references in psychology on the subject.

4. The existing literature allowing for heterogeneity in ethics (see Jaffee and Russell, 1976, Erard and Feinstein, 1994, Kofman and Lawarrée, 1996, and Tirole, 1992) assumes that the principal is unable to screen ethics.
We present an extension of a one-period, adverse selection model with one principal and one
agent, where the agent has private information about the circumstances of the exchange,
which may be good or bad.5 The terms of the exchange are set within a mechanism,
designed by the principal. We formalize the idea of honest behavior by adding a second
piece of private information, namely, the agent’s ethics: for simplicity, the agent is either
honest or opportunistic. Whereas an opportunistic agent is always willing to lie about
circumstances if it increases his surplus in the standard exchange problem, an honest agent
is not necessarily prepared to do this.6 We take the individual’s ethics as given, adopting
a reduced form of a more complex model where the individual’s preferences induce honest
behavior. We restrict attention to direct mechanisms, i.e., where a message must correspond
to a possible “type” (combination of circumstances and ethics). This restriction is due to
our belief that honesty becomes a fuzzy concept if one allows for messages that are not
related to a true state of the world.
Although the most straightforward formulation would be to assume that an honest
agent always reveals his private information truthfully, research by psychologists suggests
that this may not be a widespread attitude. A common theme in psychology, as for instance
in the seminal study by Hartshorne and May (1928), is that honest and dishonest behavior
depends more on the situations involved than on the individual’s particular set of norms. A
second and related observation by psychologists and sociologists is that those who engage in
unethical conduct usually resort to neutralization techniques providing a justification for deviance from the common norm (Ryan et al., 1997). In particular, agents weigh honesty
against other moral values. One standard excuse for lying is that an individual is confronted
with an inequitable situation. For instance, Hollinger (1991), in a study of neutralization
in the workplace, found that a significant predictor of deviant behavior (such as theft and counterproductive behavior) is what he calls “denial of victim,” which is related to the worker’s assessment of the inequity of the formal reward system: “Workers may elect to engage in unauthorized actions to redress the perceived inequities.” (p. 182) We therefore choose to model honest behavior as being conditional on the fairness of the contract.7

5. Below, this will be referred to as the standard second-best problem.

6. A possible extension of the present paper would be to adapt the framework used in the literature on costly state falsification, where misrepresenting circumstances is all the more costly the larger the discrepancy with the truth (see Lacker and Weinberg, 1989, Maggi and Rodriguez-Clare, 1995, and Crocker and Morgan, 1998). In that literature agents are homogeneous regarding falsification costs.
As a benchmark, we first consider the more straightforward unconditional honesty. We distinguish two possible definitions: honesty is of the first order if the honest agent is only restricted in his announcement of circumstances, and of the second order if he is also restricted
in his announcement of ethics. Surprisingly, the implemented allocation is the same whether
honesty is of the first or second order. A rent is awarded only if the agent is dishonest and
circumstances are good. Since the dishonest may always claim to be honest, the classical
rent-efficiency trade-off implies that the decision when circumstances are bad is distorted
downwards even if the agent is honest. However, because a rent is awarded only if the agent
is dishonest, the distortion is smaller than in the standard second-best contract. There is
no distortion at the top (when circumstances are good). However, the message structure
used to implement this allocation depends on the order of honesty. Whereas an incentive
compatible mechanism may be used under second-order honesty, it is necessary to let a
dishonest agent lie under first-order honesty, so that the revelation principle fails in the
latter case.
Formally, and as will be explained in detail in the text, our model is a special case of
Green and Laffont (1986), who show that the revelation principle may fail to apply if the
agent’s message space is restricted.8 Within that context, Deneckere and Severinov (2001)
and Forges and Koessler (2003) derive an extended revelation principle, which implies that
the largest set of allocations that can be implemented may be achieved using “password
mechanisms,” in which the agent would announce his own type together with the set of all types he could conceivably announce. We will argue that their results may be used to show that in our model the principal would not be better off if she could use a password mechanism rather than a direct mechanism, when honesty is unconditional.

7. Relatedly, Mueller and Wynn (2000) found that in the U.S. and in Canada, justice is the third most important workplace value, after the perceived ability to do the job and the respect of the boss; pay came only in ninth place.

8. Green and Laffont did not characterize the optimal mechanism for their general framework.
Under both types of unconditional honesty, the principal openly endorses an unfair
treatment of the honest agent. With second-order honesty, the dishonest agent in good
circumstances gets a larger rent simply by admitting that he is dishonest. With first-order
honesty, when the agent claims to be a dishonest agent in bad circumstances, the principal
knows that it must be an agent who is misrepresenting circumstances in order to capture
a rent.9 We interpret the psychology literature referred to above as implying that even an
agent with a strong taste for honesty would consider behaving dishonestly to rectify such
an unfair treatment, i.e., honest behavior may be conditional on fair treatment.
To model conditional honesty, we impose an “acceptability” condition which is necessary
to rule out such an open endorsement of inequitable treatment: all the allocations associated with a certain set of circumstances should yield the same surplus if circumstances are as
specified. If this condition did not hold, the less attractive allocation would either be chosen
by an honest agent who is being discriminated against (as with second-order honesty), or by
a dishonest agent who is lying about circumstances and thus, as under first-order honesty,
capturing a rent out of reach for an honest agent. We derive the optimal mechanism
assuming that the honest agent gives up honesty altogether if the “acceptability” condition
is not satisfied.
The revelation principle does not apply, implying that there is a fairly large number
of possible equilibrium behaviors for the dishonest agent. It turns out, however, that only
two are relevant: either he reveals circumstances truthfully, or he lies when circumstances
are good. Thus, the trade-off is clear. Either the principal screens circumstances for the
dishonest, at the cost of also leaving a rent to the honest; then it is as if the agent were
dishonest with certainty, and the optimal contract is the standard second-best one. Or, she forgoes the benefit of screening circumstances for the dishonest agent to avoid leaving a rent to the honest; then it is as if circumstances were always bad for the dishonest agent, and as if information were observable (since the honest reveals circumstances); as a result, the traditional rent-efficiency trade-off disappears, and the optimal contract is the first-best one. The latter is optimal if and only if the probability that the agent is honest is large.

9. Apart from volunteers who provide work for free although others are being paid to do the same work, we cannot think of any explicit contracts with such features.
Our results enable us to derive endogenously the exogenous no-screening restriction
imposed in the previous literature (see footnote 4). Our benchmark results, however, indicate that this relies on the assumption that the honest agent has strong equity concerns; under
unconditional honesty, the principal always screens along the ethics dimension.
Another interesting implication of our analysis concerns the rent obtained by the dishonest, and we illustrate this using a workplace example. In order to screen circumstances, the
principal pays a monetary rent, which is just sufficient to deter the agent in good circumstances from producing the smaller quantity associated with bad circumstances. When the
principal does not screen circumstances for the dishonest, the dishonest produces a small
output even if circumstances are good. However, he is paid as if circumstances were bad; an
interpretation is that he is paid for the number of hours needed under bad circumstances to
produce that output, although he would actually have to put in fewer hours. Part of the rent
would then be in kind. This is in line with casual empiricism, which does suggest that some
employees manage to garner in-kind rents by being less conscientious than others: some
take unscheduled pauses and sick leave on a regular basis, whereas others almost never do.
An empirical study of production deviance in the workplace by Parilla, Hollinger and Clark
(1988) indicates that people differ in their propensity to engage in deviant behavior. For
instance, the proportion of people admitting they get paid for more hours than were worked
was 5.8% in the retail sector, 6.1% in the hospital sector, and 9.2% in the manufacturing
sector. The standard model, where the agent is dishonest with certainty, typically predicts
the use of a monetary rent; the sole exception is when “bunching” occurs (this is related to
characteristics of the distribution function of circumstances). Our framework provides an
explanation other than bunching for the widespread tolerance of such practices, by pointing
out the cost of deterring them: providing monetary incentives to deter such behavior might
be too costly, since these monetary incentives would have to be given also to those who
already do their work conscientiously.
An alternative formulation of conditional honesty is explored in Alger and Renault (1998), where we use a two-period version of the current model, and in Alger and Ma (2003), who study optimal health insurance contracts when fraudulent insurance claims may be filed only if the physician is not honest: an agent truthfully reveals circumstances only if he feels committed to doing so (for instance, because he has signed a contract prior to learning circumstances). In both models there are two dates: whereas ethics is known to the agent (or to the physician) from the start, circumstances are only revealed in the second period. An honest agent reveals circumstances truthfully in the second period if a contract specifying allocations as a function of circumstances only is signed in the first period. Some of the results obtained in the commitment framework are similar to those we obtain here by taking into account fairness considerations. In both commitment papers, the standard second-best mechanism is optimal if and only if honesty is sufficiently unlikely; screening along the ethics dimension is feasible only by letting the dishonest agent lie. In our other paper, the rent-efficiency trade-off also disappears if honesty is likely enough,10 which is not the case in Alger and Ma.11
The next section introduces the formal model. Section 3 is devoted to unconditional
honesty and we characterize the optimal contract under conditional honesty in Section 4.
Section 5 concludes.
10. One difference is that ethics screening may occur over an intermediate range of the probability. Both these contracts are second-best and the honest agent obtains an informational rent.

11. This is due to a particular set-up where the decision variable (the medical treatment) does not affect the utility of a healthy patient. In terms of our model below, it would be equivalent to assuming that $v(x, \theta_g)$ is independent of $x$.
2 The Model
Consider the following standard principal-agent setting. Preferences of both the principal (she) and the agent (he) depend on the allocation $\{x, t\}$, where $x \in \mathbb{R}_+$ is some decision variable, and $t \in \mathbb{R}$ is a monetary transfer from the principal to the agent. The agent’s utility also depends on a parameter $\theta \in \mathbb{R}$, with $\theta$ taking on the value $\theta_b$ with probability $\alpha$, and the value $\theta_g$ with probability $(1-\alpha)$.12 The value of $\theta$ is a measure of how much benefit there is in contracting between the two parties. Let $\Pi(x,t) = \pi(x) - t$ be the surplus of the principal, with $\pi$ strictly concave in $x$, and $U(x,t,\theta) = t - v(x,\theta)$ be the surplus of the agent, $v$ being convex in $x$. Depending on the application, $\pi$ and $v$ are either both strictly increasing or both strictly decreasing in $x$. For instance, if the principal is an employer and the agent an employee, they are both increasing (and $t$ is positive). They are on the contrary both decreasing (with $t$ negative) if the principal is an upstream firm signing a contract with a customer, $x$ being the quantity supplied. Further assumptions ensuring existence and uniqueness of interior solutions are as follows (the prime indicates a partial derivative with respect to $x$): for any $\theta$, $\pi(0) = v(0,\theta) = 0$; $\lim_{x\to 0}\pi'(x) = +\infty$ if $\pi' \geq 0$ ($\lim_{x\to 0}\pi'(x) = 0$ if $\pi' \leq 0$); and $\lim_{x\to +\infty}[\pi'(x) - v'(x,\theta)] < 0$. Finally, we assume that, for any $x$, $v'(x,\theta_b) > v'(x,\theta_g)$, so that total surplus is larger if $\theta = \theta_g$ than if $\theta = \theta_b$. We will therefore say that circumstances are good when $\theta = \theta_g$, and bad when $\theta = \theta_b$.
Only the agent knows circumstances. The principal acts as a Stackelberg leader and sets the terms of the transactions in a contract. The agent is free to accept or reject the offer. If there is no transaction, the agent’s surplus is zero. If the principal could observe $\theta$, she would therefore offer to implement the following first-best allocations. Under circumstances $\theta_b$ (respectively $\theta_g$) the first-best allocation is $(x_b^*, t_b^*)$ (respectively $(x_g^*, t_g^*)$), where $\pi'(x_b^*) = v'(x_b^*, \theta_b)$, $\pi'(x_g^*) = v'(x_g^*, \theta_g)$, and $t_b^* = v(x_b^*, \theta_b)$, $t_g^* = v(x_g^*, \theta_g)$.
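To make the first-best benchmark concrete, here is a minimal numerical sketch under one parametric specification satisfying the assumptions above: $\pi(x) = 2\sqrt{x}$ and $v(x,\theta) = \theta x$ with $\theta_b > \theta_g > 0$. The specification and all parameter values are our own illustrative choices, not the paper’s.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative primitives (our choice): pi(x) = 2*sqrt(x) is strictly
# concave with pi'(x) -> +infinity as x -> 0, and v(x, theta) = theta*x
# is convex with v'(x, theta_b) > v'(x, theta_g) whenever theta_b > theta_g.
theta_g, theta_b = 1.0, 2.0

def pi_p(x):    return 1.0 / np.sqrt(x)   # pi'(x)
def v(x, th):   return th * x
def v_p(x, th): return th                 # v'(x, theta)

def first_best(th):
    # The first-best decision solves pi'(x) = v'(x, theta); the transfer
    # extracts the whole surplus: t = v(x, theta).
    x = brentq(lambda y: pi_p(y) - v_p(y, th), 1e-9, 1e6)
    return x, v(x, th)

x_b, t_b = first_best(theta_b)   # bad circumstances: x_b* = 1/theta_b**2
x_g, t_g = first_best(theta_g)   # good circumstances: x_g* = 1/theta_g**2
print(f"bad : x* = {x_b:.4f}, t* = {t_b:.4f}")
print(f"good: x* = {x_g:.4f}, t* = {t_g:.4f}")
```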
We now introduce the novel feature of the model, namely, the agent’s ethics, denoted k:
he is dishonest (k = d) with probability γ, and honest (k = h) with probability (1 − γ).12

12. All probability distributions are common knowledge.

Prior
to further defining ethics, let us be more specific about the type of contract or “mechanism”
that the principal may offer to the agent. Throughout the paper, we restrict attention to
direct mechanisms, where the set of messages available to the agent is the set of types Θ =
{(h, θg ), (h, θb ), (d, θg ), (d, θb )}. This restriction would actually be implied by the assumption
that an agent adopts honest behavior only when confronted with a direct mechanism; at
this point, we just believe that it would be hard to justify the definition of honest behavior
within an indirect mechanism, where the meaning of a message is arbitrary. We have the
following definition:
Definition 1 (Mechanisms) A mechanism is a mapping $m: \Theta \to \mathbb{R}_+ \times \mathbb{R}$, where $\mathbb{R}_+ \times \mathbb{R}$ is the set of allocations.
Below we will distinguish between three alternative definitions of honesty; however, a
common element to those is that an honest agent divulges circumstances even when lying
about them would afford him a larger surplus. Formally, honesty is defined as a restriction
on the message space available to the honest agent. Let $\Theta_{k,\ell}$ denote the message space available to an agent of type $(k, \ell)$, where $k \in \{h, d\}$ and $\ell \in \{g, b\}$.
Assumption 1 (Honest and Dishonest Agents) Let M be the set of mechanisms that
the principal may offer. Then,
1. $\Theta_{d,g}(m) = \Theta_{d,b}(m) = \Theta$ for all $m \in M$.
2. $\Theta_{h,g}(m) \subseteq \{(h, \theta_g), (d, \theta_g)\}$ and $\Theta_{h,b}(m) \subseteq \{(h, \theta_b), (d, \theta_b)\}$ for some $m \in M$.
Item 1 simply says that a dishonest agent is unrestricted in his ability to lie. The important
point in Item 2 is that an honest agent reveals the true circumstances. We will distinguish
the case where an honest behavior is unconditional and the case where it is conditional on
the proposed mechanism. In the former case, Item 2 holds for all mechanisms while in the
latter, Item 2 only holds for some proper subset of M . Precise definitions will be given
below.
A first benchmark is a situation where the principal knows for sure that the agent
behaves honestly (γ = 0),13 so that true circumstances are revealed even if the contract is
not incentive compatible. She then offers the following first-best contract.
Definition 2 (First-Best Contract) In the first-best contract, denoted m∗ , the agent
may announce a message µ ∈ {θg , θb }. If the agent announces µ, the corresponding first-best
allocation is implemented.
In contrast, the standard second-best analysis assumes that the agent’s behavior is dishonest
(γ = 1): he is fully capable and willing to misrepresent circumstances in order to obtain
the largest possible surplus. As is well-known, the revelation principle then applies, and
the optimal contract leaves a rent if and only if circumstances are good and specifies a
suboptimal allocation if and only if circumstances are bad:
Definition 3 (Standard Second-Best Contract) In the standard second-best contract, denoted $m^s$, the agent may announce a message $\mu \in \{\theta_g, \theta_b\}$. The implemented allocations are $(x_b^s, t_b^s)$ if $\mu = \theta_b$, and $(x_g^s, t_g^s)$ if $\mu = \theta_g$, where

$$v'(x_b^s, \theta_b) = \pi'(x_b^s) - \frac{1-\alpha}{\alpha}\,[v'(x_b^s, \theta_b) - v'(x_b^s, \theta_g)], \qquad x_g^s = x_g^*,$$

$$t_b^s = v(x_b^s, \theta_b), \qquad t_g^s = v(x_g^*, \theta_g) + [v(x_b^s, \theta_b) - v(x_b^s, \theta_g)].$$
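Continuing the illustrative parameterization from the sketch above (ours, not the paper’s), the standard second-best contract can be computed as follows.

```python
import numpy as np
from scipy.optimize import brentq

theta_g, theta_b, alpha = 1.0, 2.0, 0.5   # illustrative values only

def v(x, th): return th * x

def standard_second_best(alpha):
    # Distorted decision in bad circumstances:
    # v'(x, th_b) = pi'(x) - ((1-alpha)/alpha)*[v'(x, th_b) - v'(x, th_g)],
    # with pi'(x) = 1/sqrt(x) and v'(x, th) = th for these primitives.
    foc = lambda x: theta_b - 1.0 / np.sqrt(x) \
                    + (1 - alpha) / alpha * (theta_b - theta_g)
    xb_s = brentq(foc, 1e-9, 1e6)
    xg_s = 1.0 / theta_g**2                  # no distortion at the top
    tb_s = v(xb_s, theta_b)                  # no rent in bad circumstances
    tg_s = v(xg_s, theta_g) + v(xb_s, theta_b) - v(xb_s, theta_g)
    return (xb_s, tb_s), (xg_s, tg_s)

(xb_s, tb_s), (xg_s, tg_s) = standard_second_best(alpha)
rent = tg_s - v(xg_s, theta_g)   # informational rent in good circumstances
print(f"x_b^s = {xb_s:.4f} (first best: 0.25), rent = {rent:.4f}")
```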
It is however unlikely that the principal knows the agent’s ethics. We therefore assume that
ethics is private information to the agent, and that γ ∈ (0, 1).
3 Unconditional Honesty
As a benchmark, we first consider an environment where the honest agent reveals circumstances truthfully, independent of the offer the principal makes. We remain agnostic about
an honest agent’s attitude regarding ethics revelation and consider both the case where he reveals it unconditionally and the case where he feels free to misrepresent it, should it be in his best interest. We distinguish two kinds of unconditional honesty. If an honest agent is only restricted in his announcement of circumstances, we speak of first-order honesty, while if his announcements are restricted in both dimensions we speak of second-order honesty.

13. Although honest behavior is conditional on the contract offered under conditional honesty, the benchmark considered here also applies to that situation (the first-best contract is “acceptable”).

Formally, we have:
Definition 4 (First-Order Honesty) Honesty is of the first order if, for all $m \in M$, $\Theta_{h,g} = \{(h, \theta_g), (d, \theta_g)\}$ and $\Theta_{h,b} = \{(h, \theta_b), (d, \theta_b)\}$.
Definition 5 (Second-Order Honesty) Honesty is of the second order if, for all $m \in M$, $\Theta_{h,g} = \{(h, \theta_g)\}$ and $\Theta_{h,b} = \{(h, \theta_b)\}$.
Green and Laffont (1986) have shown that, if the agent is restricted in his announcements
and if the principal is restricted to using direct mechanisms, the revelation principle may
not apply in the sense that there may be allocations that can be implemented with a
direct mechanism but not with an incentive compatible direct mechanism (a mechanism
where true types are announced). They derive a necessary and sufficient condition for
the revelation principle to hold for direct mechanisms. This “Nested Range Condition” is
defined as follows.
Definition 6 (Nested Range Condition) Let $\Theta$ be the set of the agent’s types. The Nested Range Condition (NRC) holds if and only if, for any $A, B, C \in \Theta$ such that $A$ and $B$ are restricted to $\Theta_A$ and $\Theta_B$ respectively, with $B \in \Theta_A$ and $C \in \Theta_B$, we have $C \in \Theta_A$.
It is easy to verify that the NRC is satisfied if honesty is of the second order, while it is not for first-order honesty; the sketch below illustrates the check.
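As a quick illustration (ours, not the paper’s), the two message correspondences can be encoded directly and the NRC checked by brute force:

```python
from itertools import product

# Types: (ethics, circumstances) with ethics in {h, d}, circumstances in {g, b}.
TYPES = [("h", "g"), ("h", "b"), ("d", "g"), ("d", "b")]

def message_spaces(order):
    """Message space of each type under first- or second-order honesty."""
    spaces = {t: list(TYPES) for t in TYPES if t[0] == "d"}  # dishonest: unrestricted
    if order == 1:   # honest agent: truthful about circumstances only
        spaces[("h", "g")] = [("h", "g"), ("d", "g")]
        spaces[("h", "b")] = [("h", "b"), ("d", "b")]
    else:            # honest agent: truthful about both dimensions
        spaces[("h", "g")] = [("h", "g")]
        spaces[("h", "b")] = [("h", "b")]
    return spaces

def nrc_holds(spaces):
    # NRC: B in Theta_A and C in Theta_B imply C in Theta_A.
    return all(C in spaces[A]
               for A, B, C in product(TYPES, repeat=3)
               if B in spaces[A] and C in spaces[B])

print("NRC, first-order honesty :", nrc_holds(message_spaces(1)))   # False
print("NRC, second-order honesty:", nrc_holds(message_spaces(2)))   # True
```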
We begin by analyzing second-order honesty. Standard techniques may then be used to derive the optimal contract $m = \{(x_{d,b}, t_{d,b}), (x_{d,g}, t_{d,g}), (x_{h,b}, t_{h,b}), (x_{h,g}, t_{h,g})\}$; it maximizes

$$\gamma[\alpha(\pi(x_{d,b}) - t_{d,b}) + (1-\alpha)(\pi(x_{d,g}) - t_{d,g})] + (1-\gamma)[\alpha(\pi(x_{h,b}) - t_{h,b}) + (1-\alpha)(\pi(x_{h,g}) - t_{h,g})] \qquad (1)$$

subject to six incentive compatibility constraints,

$$t_{d,g} - v(x_{d,g}, \theta_g) \geq t_{k,\ell} - v(x_{k,\ell}, \theta_g), \qquad k = h, d, \quad \ell = g, b, \quad (k,\ell) \neq (d,g), \qquad (2)$$

$$t_{d,b} - v(x_{d,b}, \theta_b) \geq t_{k,\ell} - v(x_{k,\ell}, \theta_b), \qquad k = h, d, \quad \ell = g, b, \quad (k,\ell) \neq (d,b), \qquad (3)$$

and four participation constraints:

$$t_{k,\ell} - v(x_{k,\ell}, \theta_\ell) \geq 0, \qquad k = h, d, \quad \ell = g, b. \qquad (4)$$
As is usual in this type of principal-agent relationship, the only relevant incentive problem
is to prevent the (dishonest) agent from pretending that circumstances are bad when they
are good. For all types other than the dishonest agent in good circumstances, the individual rationality constraints are binding. Hence:
Proposition 1 Under second-order honesty, the principal offers the following mechanism: $x_{h,g} = x_{d,g} = x_g^*$ and $x_{h,b} = x_{d,b} = x_b$, where $x_b$ is the solution to

$$v'(x_b, \theta_b) = \pi'(x_b) - \frac{\gamma(1-\alpha)}{\alpha}\,[v'(x_b, \theta_b) - v'(x_b, \theta_g)],$$

and transfers are given by

$$t_{h,b} = t_{d,b} = v(x_b, \theta_b), \qquad t_{h,g} = v(x_g^*, \theta_g), \qquad t_{d,g} = v(x_g^*, \theta_g) + v(x_b, \theta_b) - v(x_b, \theta_g).$$
Proof: Standard arguments show that the participation constraints for types (h, θb ), (d, θb ),
and (h, θg ) are binding. Furthermore, one of the incentive compatibility constraints in (2)
must be binding.
Let us define $\underline{x} = \min\{x_{h,b}, x_{d,b}\}$ and $\bar{x} = \max\{x_{h,b}, x_{d,b}\}$, and let $(\underline{IC})$ and $(\overline{IC})$ be the corresponding incentive compatibility constraints for type $(d, \theta_g)$. Since individual rationality constraints are binding for types $(h, \theta_b)$ and $(d, \theta_b)$, the assumption on the cross partials of $v$ implies that $(\overline{IC})$ must be binding, while $(\underline{IC})$ is binding if and only if $\underline{x} = \bar{x}$.
We now show that $\underline{x} = \bar{x}$. First, we must have $\bar{x} \leq x_b^*$. If this were not the case, then the principal’s surplus could be increased by reducing $\bar{x}$ slightly, along with the transfer associated with that type, so that the individual rationality constraint would still bind (total surplus would increase while the agent’s surplus would be unchanged). This would clearly not violate the incentive compatibility constraints. Suppose now that $\underline{x} < \bar{x}$. Then $\underline{x} < x_b^*$. Thus the principal’s surplus could be increased by increasing $\underline{x}$ slightly, along with the transfer associated with that type, so that the individual rationality constraint would still bind. Since $(\underline{IC})$ was not binding initially, it would still hold if the increase were small enough. Therefore we must have $\underline{x} = \bar{x}$.
Hence, both incentive compatibility constraints in (2) are binding. Substituting all the
binding constraints into the principal’s surplus and solving for interior solutions yields the
result. It is easy to verify that the solution satisfies all the remaining incentive compatibility
constraints.
Q.E.D.
Not surprisingly, an honest agent who automatically reveals all his private information receives no informational rent; only the dishonest in good circumstances obtains such a rent. There is therefore a rent-efficiency trade-off, as in the standard model with only dishonest agents. In fact, since a dishonest agent may always claim to be honest, the decision for an honest agent under bad circumstances is distorted downward compared to the first best, and no ethics screening occurs when circumstances are bad. This in turn implies that the distortion is smaller than in the standard model: whereas the expected marginal cost of distorting the decision $x_b$ is as in the standard model ($x_b$ being implemented with probability $\alpha$), the expected marginal benefit is smaller (since the probability that the principal should leave a rent to the agent is only $\gamma(1-\alpha)$, instead of $(1-\alpha)$). The distortion increases with the probability $\gamma$ that the agent is dishonest; it tends to the standard distortion as $\gamma$ tends to 1, and to 0 as $\gamma$ tends to 0.
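Under the same illustrative parameterization as in the earlier sketches (ours, not the paper’s), one can check how the Proposition 1 decision $x_b$ interpolates between the first best and the standard second best as $\gamma$ varies:

```python
import numpy as np
from scipy.optimize import brentq

theta_g, theta_b, alpha = 1.0, 2.0, 0.5   # illustrative values only

def xb_prop1(gamma):
    # v'(x,th_b) = pi'(x) - (gamma*(1-alpha)/alpha) * [v'(x,th_b) - v'(x,th_g)],
    # with pi(x) = 2*sqrt(x) and v(x,th) = th*x, so pi'(x) = 1/sqrt(x).
    foc = lambda x: theta_b - 1.0 / np.sqrt(x) \
                    + gamma * (1 - alpha) / alpha * (theta_b - theta_g)
    return brentq(foc, 1e-9, 1e6)

for gamma in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"gamma = {gamma:4.2f}: x_b = {xb_prop1(gamma):.4f}")
# gamma = 0 reproduces the first-best x_b* = 0.25;
# gamma = 1 reproduces the standard second-best x_b^s = 1/9.
```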
The NRC condition of Green and Laffont (1986) only ensures that the revelation principle holds for direct mechanisms, which leaves open the question of whether the principal
could do better using more general mechanisms. Deneckere and Severinov (2001) point out
that the restriction to mechanisms where the agent sends only one message could be misleading; sending out several messages may provide relevant information for the principal. In
the terminology of Forges and Koessler (2003), sending several messages provides certifiable
information about the agent’s type. Deneckere and Severinov derive an extended revelation principle which shows that the set of allocations that can be implemented through
mechanisms where the agent announces his type along with the set of allowable messages is the largest that can be implemented.14 Incentive compatibility conditions only need to
be imposed on those types of agents who can mimic other types in such a “password”
mechanism.
In our context, in a password mechanism a dishonest agent would send messages about
his type as well as about all possible types, while an honest agent of the second order
would announce his own type along with the set comprised of his own type alone; only the
dishonest can mimic an honest. As a result, no incentive compatibility constraint would be
needed to induce the honest to tell the truth. In contrast, it would be necessary to impose
appropriate incentive compatibility constraints for the dishonest. This is exactly what was
done above; thus in the case of second-order honesty, there is no loss of generality involved
in considering only direct mechanisms.
Consider now first-order honesty: in a direct mechanism, an honest agent may pretend
to be dishonest. By contrast, with a password mechanism the honest agent would not
be able to mimic a dishonest, implying that the set of implementable allocations would
be the same as under second-order honesty.15

14. Forges and Koessler (2003) show that this set of implementable allocations may be achieved through mechanisms where agents only announce their type and some subsets of the set of possible types.

Note, however, that the only reason why an
honest agent is not able to imitate a dishonest one in the password mechanism, is that he
is not willing to announce a set of allowable messages involving false circumstances. Yet,
all that matters to a first-order honest agent is that the principal obtains true information
about circumstances, which he can ensure in a password mechanism even if he lies about
his message set. An honest agent who feels compelled to reveal the true message set is
basically a second-order honest agent.
Restricting the analysis to direct mechanisms means that the principal faces an additional incentive problem when honesty is of the first order. Intuition would suggest that
this should impose some cost on the principal. We now show that this is not the case
because she can actually implement the same allocation as under second-order honesty using a direct mechanism. In this sense, restricting attention to direct mechanisms under
first-order honesty involves no loss of generality.
A simple remark is sufficient to derive the result. Note that the mechanism of Proposition 1 specifies the same allocation for the honest and the dishonest agent under bad
circumstances. Now consider a direct mechanism where the allocation associated with message (d, θb ) is the one which in Proposition 1 is associated with message (d, θg ), while the
first-best allocation is now associated with announcing good circumstances for both honest
and dishonest agents. Leaving the allocation associated with (h, θb ) unchanged, the dishonest will announce (d, θb ) when circumstances are good, and (h, θb ) when circumstances
are bad, while the honest will remain truthful (in particular, announcing (d, θb ) under bad
circumstances would yield a negative surplus). The allocation that is being implemented is
therefore the same as in Proposition 1.
Proposition 2 Under first-order honesty, the principal offers the following mechanism: $x_{h,g} = x_{d,g} = x_{d,b} = x_g^*$ and $x_{h,b}$ is the solution to

$$v'(x_{h,b}, \theta_b) = \pi'(x_{h,b}) - \frac{\gamma(1-\alpha)}{\alpha}\,[v'(x_{h,b}, \theta_b) - v'(x_{h,b}, \theta_g)].$$

Transfers are given by

$$t_{h,b} = v(x_{h,b}, \theta_b), \qquad t_{h,g} = t_{d,g} = v(x_g^*, \theta_g), \qquad t_{d,b} = v(x_g^*, \theta_g) + v(x_{h,b}, \theta_b) - v(x_{h,b}, \theta_g).$$

15. This can also be seen from the result in Forges and Koessler (2003), since the smallest type set that can be certified by each type is the same in both settings.
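A small check (again under our illustrative primitives, not the paper’s) that the message structure claimed above is consistent with the Proposition 2 allocations: announcing $(d, \theta_b)$ is optimal for the dishonest in good circumstances (it ties with $(h, \theta_b)$, since both incentive constraints bind), and yields a negative surplus in bad circumstances.

```python
import numpy as np
from scipy.optimize import brentq

theta_g, theta_b, alpha, gamma = 1.0, 2.0, 0.5, 0.5   # illustrative values

def v(x, th): return th * x

# Proposition 2 allocations under pi(x) = 2*sqrt(x), v(x, theta) = theta*x.
xg = 1.0 / theta_g**2                                   # x_g*
foc = lambda x: theta_b - 1.0 / np.sqrt(x) \
                + gamma * (1 - alpha) / alpha * (theta_b - theta_g)
xhb = brentq(foc, 1e-9, 1e6)                            # x_{h,b}
alloc = {("h", "g"): (xg,  v(xg, theta_g)),
         ("d", "g"): (xg,  v(xg, theta_g)),
         ("h", "b"): (xhb, v(xhb, theta_b)),
         ("d", "b"): (xg,  v(xg, theta_g) + v(xhb, theta_b) - v(xhb, theta_g))}

def surplus(msg, th):
    x, t = alloc[msg]
    return t - v(x, th)

for msg in alloc:   # surpluses when circumstances are truly good
    print(msg, round(surplus(msg, theta_g), 4))
# (d,b) and (h,b) tie at the top, consistent with the equilibrium in
# which the dishonest announces (d,b) in good circumstances.
print("(d,b) under bad circumstances:", round(surplus(("d", "b"), theta_b), 4))
```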
The mechanism appears to be fair to the honest agent: when circumstances are good,
he gets the same utility whether he claims to be honest or dishonest; when circumstances
are bad, he gets a higher utility if he admits his honesty than if he claims to be dishonest.
However, the fair treatment is merely formal. The cheating through which a dishonest agent garners a rent is fully endorsed by the principal, who is aware that the allocation designed for a dishonest agent under bad circumstances may only be chosen if circumstances are good.
An honest agent with a taste for equity would hardly be satisfied with this. Presumably,
the honest agent can understand that the fair treatment is merely formal. In order to
capture an honest agent’s taste for equity, it is not enough to let him lie about ethics if he
is at all sophisticated. In the next section, the honest agent’s behavior is assumed to be
conditional on the proposed mechanism; if the mechanism is not “acceptable,” the honest
agent retaliates by behaving like a dishonest agent.
4 Conditional Honesty
Thus far we have assumed that an honest agent’s behavior is unaffected by the type of
contract offered by the principal. Yet, if this contract specifies an obviously unfair treatment
for those who behave honestly, the agent may feel justified in adopting a deviant conduct.
As Bok (1978, p. 83) points out: “Those who believe they are exploited hold that this
fact by itself justifies dishonesty in rectifying the equilibrium.” This view is also supported
by an experimental study on tax compliance by Spicer and Becker (1980). They find that
people who were told their taxes were higher than others evaded by higher amounts than
those who were told their taxes were lower than others. We now assume that an honest
agent behaves dishonestly if he finds the contract offered by the principal unfair.
With unconditional honesty, an honest agent endures an obviously inequitable treatment, which is openly endorsed by the principal: with second-order honesty, the principal openly discriminates against the honest agent; with first-order honesty, the allocation designed for a dishonest agent under bad circumstances will only be chosen by an agent facing good circumstances, implying that if it is chosen, the principal knows for sure that the dishonest has captured an extra rent. Henceforth, we assume that an honest agent would consider a contract unfair if it specifies allocations that, if they are selected, reveal with probability 1 that the agent is capturing a rent which would be out of reach if he were honest. Obviously, whether such uncertainty remains depends on the equilibrium behavior of the various types of agents. Nevertheless, a contract that would not satisfy the following condition should definitely be ruled out.
Definition 7 (Acceptable Contract) A mechanism $m$ is acceptable if and only if for any $\theta$ we have

$$t(\theta, d) - v(x(\theta, d), \theta) = t(\theta, h) - v(x(\theta, h), \theta). \qquad (5)$$
If the above condition is not satisfied for some circumstances $\theta$, then the allocation that yields the strictly lower surplus in these circumstances would either be chosen by an honest agent who feels obligated to do so (as under second-order honesty) or by a dishonest agent who misrepresents circumstances.
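A minimal sketch (ours) of the check in condition (5), with a mechanism represented as a map from messages to allocations; as noted below, the standard second-best contract, offered regardless of the announced ethics, passes the test.

```python
theta_val = {"g": 1.0, "b": 2.0}           # illustrative theta_g, theta_b

def v(x, th): return theta_val[th] * x     # v(x, theta) = theta * x

def is_acceptable(alloc, tol=1e-9):
    # Condition (5): for each circumstance theta, the allocations attached
    # to (h, theta) and (d, theta) yield the same surplus under that theta.
    return all(abs((alloc[("d", th)][1] - v(alloc[("d", th)][0], th))
                   - (alloc[("h", th)][1] - v(alloc[("h", th)][0], th))) < tol
               for th in ("g", "b"))

# The standard second-best contract (same allocations for both ethics):
xb_s, xg_s = 1.0 / 9.0, 1.0                # values from the earlier sketch
tb_s = v(xb_s, "b")
tg_s = v(xg_s, "g") + v(xb_s, "b") - v(xb_s, "g")
m_s = {("h", "b"): (xb_s, tb_s), ("d", "b"): (xb_s, tb_s),
       ("h", "g"): (xg_s, tg_s), ("d", "g"): (xg_s, tg_s)}
print(is_acceptable(m_s))   # True
```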
The behavior of a conditionally honest agent may be characterized as follows.
Assumption 2 (Conditional Honesty) The honest agent’s message space is restricted if and only if $m$ is acceptable. If this is the case, then $\Theta_{h,g}(m) = \{(h, \theta_g)\}$ and $\Theta_{h,b}(m) = \{(h, \theta_b)\}$.
Conditional on an acceptable mechanism, honesty is of the second order; assuming first-order honesty instead would not affect the results. In contrast with Green and Laffont
(1986), here the honest agent’s message space is restricted only for some mechanisms, so
their result cannot be directly applied. In fact, the following counter-example shows that
the revelation principle does not apply, although the Nested Range Condition is satisfied
for acceptable mechanisms.16 Assume that the principal offers the first-best contract; this is
an acceptable mechanism. Then only the honest announces good circumstances truthfully
and receives no rent; the dishonest under good circumstances derives a rent from choosing
the allocation associated with bad circumstances. Now, if the revelation principle applied,
we could find a direct mechanism such that the agent always reveals his true type in
equilibrium, and that implements the same equilibrium allocations as the said contract.
But clearly, this is impossible here: if the dishonest under good circumstances were given
a rent while revealing his true type, acceptability would imply that the honest would also
have to garner that rent.
Although the revelation principle fails, the analysis turns out to be simple. First, note
that the standard second-best mechanism (which is optimal when the agent is unrestricted)
is an acceptable mechanism; this clearly implies that the optimal mechanism should be
acceptable. Then, the mechanism may or may not be incentive compatible. Before characterizing the optimal mechanism we consider these two possibilities in sequence.
4.1 Two Candidates for the Optimal Mechanism
Assume first that the mechanism is incentive-compatible, so that the agent reveals his true
type in equilibrium. The principal maximizes (1) subject to the incentive compatibility
and participation constraints for the dishonest, (2) to (4), and two additional constraints referring to acceptability:

$$t_{h,\ell} - v(x_{h,\ell}, \theta_\ell) = t_{d,\ell} - v(x_{d,\ell}, \theta_\ell), \qquad \ell = g, b. \qquad (6)$$

16. Deneckere and Severinov (2001) and Forges and Koessler (2003) also do not condition the restrictions on the messages on the mechanism. But clearly, if, as is the case here, the notion of acceptability relies on the equality of surplus for a given announcement of circumstances, then the extended revelation principle of these authors would not apply either (the acceptability restriction may be extended to the framework considered by these authors, since the mechanisms they propose have the agent announce circumstances).
Together with the constraints ensuring that the dishonest agent reveals circumstances, these
constraints imply
$$t_{h,i} - v(x_{h,i}, \theta_i) \geq t_{h,j} - v(x_{h,j}, \theta_i), \qquad i, j = g, b. \qquad (7)$$
Thus the mechanism must satisfy incentive compatibility constraints ensuring truthful
revelation of circumstances for both the honest and the dishonest agent. As a result, the
optimal incentive compatible mechanism is the standard second-best contract regardless of
ethics.
Lemma 1 The optimal incentive-compatible mechanism specifies the second-best allocation regardless of ethics: $(x_{d,b}, t_{d,b}) = (x_{h,b}, t_{h,b}) = (x_b^s, t_b^s)$ and $(x_{d,g}, t_{d,g}) = (x_{h,g}, t_{h,g}) = (x_g^s, t_g^s)$ (see Definition 3).
If the principal wants to make the dishonest agent reveal ethics and circumstances, she cannot exploit the honest agent’s behavior, since she must then give the honest agent the same rent as the dishonest agent.
Lemma 2 If the principal does not offer an incentive compatible mechanism, the optimal mechanism is such that $(d, \theta_g)$ announces $\theta_b$, and it specifies the following allocations:
(i) the first-best allocations $(x_b^*, t_b^*)$ and $(x_g^*, t_g^*)$ associated with messages $(h, \theta_b)$ and $(h, \theta_g)$ respectively;
(ii) the first-best allocation $(x_b^*, t_b^*)$ associated with message $(d, \theta_b)$;
(iii) $(x_{d,g}, t_{d,g})$ satisfying $t_{d,g} = v(x_{d,g}, \theta_g)$ associated with message $(d, \theta_g)$, where $x_{d,g}$ may take on any positive value.
Proof: First note that the argument used to prove Lemma 1 would also show that if
(d, θg ) announces (h, θg ) or (d, θb ) announces (h, θb ), the optimal mechanism would be the
standard second-best.17 Now consider a mechanism where type (d, θb ) would announce θg
along with either d or h. Then, because of individual rationality constraints for (d, θb ) and
(h, θb ) and using acceptability conditions, all four allocations must yield a positive surplus
under bad circumstances. Then, the behavior of the dishonest under good circumstances
becomes irrelevant and the principal offers the first-best allocation in bad circumstances
for any announcement. However, this allocation can be implemented through an incentive
compatible mechanism so that it is dominated by the standard second-best.
From the argument above, in an optimal mechanism, which is not incentive compatible, a
dishonest agent always claims that circumstances are bad. Given such a message structure,
and provided that no incentive compatibility constraint is binding, the allocations specified
in the Lemma are clearly optimal and they are consistent with the message structure.
Q.E.D.
Acceptability implies that the principal compensates the dishonest in good circumstances as if circumstances were bad; as a result, the decision implemented for the dishonest in good circumstances is the first-best decision for bad circumstances, $x_b^*$. Here the principal would not benefit from altering efficiency under bad circumstances to decrease the dishonest’s rent; the traditional rent-efficiency trade-off is absent.
In fact the contract can be implemented by simply offering the first-best mechanism; no ethics screening occurs, and screening among circumstances occurs only if the agent is honest. As with first-order honesty, the principal avoids leaving a rent to the honest agent by letting the dishonest agent lie. However, as noted in the introduction, in the above contract part of the rent is in kind instead of being monetary: the dishonest agent under good circumstances is not paid more than the honest agent for the same output; instead, he produces a smaller output and is paid as if circumstances were bad.

17. Strictly speaking, the allocations associated with $(d, \theta_g)$ or $(d, \theta_b)$ could be any allocation satisfying the acceptability condition, since it is not implemented in equilibrium.
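Under the illustrative primitives used in the earlier sketches (ours, not the paper’s), the in-kind rent of Lemma 2 is easy to quantify: the dishonest agent in good circumstances takes the bad-circumstances first-best allocation, so his rent equals $v(x_b^*, \theta_b) - v(x_b^*, \theta_g)$.

```python
theta_g, theta_b = 1.0, 2.0              # illustrative values only

def v(x, th): return th * x

# First-best bad-circumstances allocation with pi(x) = 2*sqrt(x):
x_b = 1.0 / theta_b**2                   # x_b* = 0.25
t_b = v(x_b, theta_b)                    # t_b* = 0.5

# A dishonest agent in good circumstances produces x_b* and is paid t_b*,
# but his true cost is only v(x_b*, theta_g): the difference is an
# in-kind rent (he is paid "as if circumstances were bad").
in_kind_rent = t_b - v(x_b, theta_g)     # = (theta_b - theta_g) * x_b*
print(f"in-kind rent: {in_kind_rent:.4f}")
```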
4.2 The Optimal Mechanism
The standard second-best contract deals optimally with the rent-efficiency trade-off for the
dishonest at the expense of leaving a rent to the honest and distorting the allocation under bad circumstances even if the agent is honest. The first-best contract, on the other hand, implements the first best with no rent for an honest agent, but acceptability prevents the principal from achieving discrimination among dishonest agents on the basis of circumstances. The next proposition confirms the intuition that the standard second-best contract
should be preferred if and only if the probability that the agent is dishonest is large.
Proposition 3 For any $\alpha \in [0,1)$, there exists $\gamma^* \in [0,1]$ such that the standard second-best contract is optimal for $\gamma \geq \gamma^*$, and the first-best contract is optimal for $\gamma < \gamma^*$. Moreover, $\gamma^*$ is a strictly increasing function of $\alpha$; $\gamma^*(0) = 0$ and

$$\lim_{\alpha \to 1} \gamma^* \equiv \hat{\gamma} = \frac{v(x_b^*, \theta_b) - v(x_b^*, \theta_g)}{[\pi(x_g^*) - v(x_g^*, \theta_g)] - [\pi(x_b^*) - v(x_b^*, \theta_b)]} < 1. \qquad (8)$$

For any $\gamma > \hat{\gamma}$ the second-best contract is optimal regardless of the value of $\alpha$.
Proof: Let $\Pi^s$ and $\Pi^*$ denote the principal’s expected surplus with the incentive-compatible mechanism and with the first-best mechanism; $\Pi^s$ is independent of $\gamma$ for any value of $\alpha$, whereas $\Pi^*$ is decreasing in $\gamma$. For $\gamma = 0$, we have $\Pi^* > \Pi^s$, and for $\gamma = 1$, the reverse is true. Hence, for every $\alpha$ there exists a unique $\gamma^*$ as specified in the proposition. To further characterize the properties of $\gamma^*$ we now show that it is invertible on $[0,1)$, and that the inverse $\alpha^*$ is strictly increasing over its domain $[0, \hat{\gamma})$.

Henceforth we study the functions $\Pi^s(\alpha)$ and $\Pi^*(\alpha)$ (see Figure 1). Ceteris paribus, $\Pi^s$ and $\Pi^*$ are decreasing in $\alpha$:

$$\frac{\partial \Pi^*(\alpha)}{\partial \alpha} = (1-\gamma)[\pi(x_b^*) - v(x_b^*, \theta_b) - \pi(x_g^*) + v(x_g^*, \theta_g)] < 0, \qquad (9)$$

$$\frac{\partial \Pi^s(\alpha)}{\partial \alpha} = \pi(x_b^s) - v(x_b^s, \theta_g) - \pi(x_g^*) + v(x_g^*, \theta_g) < 0, \qquad (10)$$

where the second expression is derived using the envelope theorem. Moreover, $\Pi^*$ is linear, whereas $\Pi^s$ is convex:

$$\frac{\partial^2 \Pi^s(\alpha)}{\partial \alpha^2} = [\pi'(x_b^s) - v'(x_b^s, \theta_b) + v'(x_b^s, \theta_b) - v'(x_b^s, \theta_g)]\,\frac{\partial x_b^s(\alpha)}{\partial \alpha}, \qquad (11)$$

which is positive since both terms are positive ($x_b^s$ being smaller than $x_b^*$). Furthermore, for any $\gamma \in [0,1)$, $\Pi^s(0) \geq \Pi^*(0)$ (with equality occurring only for $\gamma = 0$), whereas $\Pi^s(1) = \Pi^*(1)$. Thus, a necessary and sufficient condition for there to exist a unique $\alpha^*$ is that

$$\lim_{\alpha \to 1} \frac{\partial \Pi^s}{\partial \alpha} > \lim_{\alpha \to 1} \frac{\partial \Pi^*}{\partial \alpha}, \qquad (12)$$

which reduces to $\gamma < \hat{\gamma}$. Finally, since $\Pi^s(\alpha)$ is independent of $\gamma$ for any value of $\alpha$, whereas $\Pi^*(\alpha)$ is decreasing in $\gamma$, $\alpha^*$ is strictly increasing.
Q.E.D.
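Again under the illustrative primitives of the earlier sketches (ours, not the paper’s), the threshold $\gamma^*$ can be located numerically by comparing the two expected surpluses; for these primitives the limit in (8) evaluates to $\hat{\gamma} = \theta_g/\theta_b$.

```python
import numpy as np
from scipy.optimize import brentq

theta_g, theta_b = 1.0, 2.0        # illustrative values only

def pi(x):    return 2.0 * np.sqrt(x)
def v(x, th): return th * x

xg, xb = 1.0 / theta_g**2, 1.0 / theta_b**2     # first-best decisions
Sg = pi(xg) - v(xg, theta_g)                    # first-best surplus, good
Sb = pi(xb) - v(xb, theta_b)                    # first-best surplus, bad

def Pi_star(gamma, alpha):
    # First-best contract: the dishonest always takes the theta_b allocation,
    # so with probability gamma the good-circumstances surplus is Sb, not Sg.
    return alpha * Sb + (1 - alpha) * ((1 - gamma) * Sg + gamma * Sb)

def Pi_s(alpha):
    # Standard second-best contract (closed form for these primitives).
    xbs = 1.0 / (theta_b + (1 - alpha) / alpha * (theta_b - theta_g))**2
    rent = v(xbs, theta_b) - v(xbs, theta_g)
    return alpha * (pi(xbs) - v(xbs, theta_b)) + (1 - alpha) * (Sg - rent)

alpha = 0.5
gamma_star = brentq(lambda g: Pi_star(g, alpha) - Pi_s(alpha), 0.0, 1.0)
print(f"gamma* at alpha = {alpha}: {gamma_star:.4f}")          # 1/3 here
print(f"gamma_hat = theta_g / theta_b = {theta_g / theta_b}")  # limit in (8)
```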
Conditional honesty thus implies that no explicit screening of ethics occurs; other authors, such as Kofman and Lawarrée (1996) and Erard and Feinstein (1994), have imposed
this as an exogenous restriction. The absence of explicit ethics screening suggests that
the traditional approaches, i.e., assuming either that the agent is honest or dishonest with
certainty, are robust to the introduction of ethics heterogeneity. In fact, as Figure 1 shows,
the standard second-best approach appears to be remarkably robust to the introduction of
a non-trivial proportion of conditionally honest agents. The figure shows the principal’s expected profit with the standard second-best contract (Πs ) and the first-best contract (Π∗ ),
as functions of α. Whereas the former does not depend on the probability that the agent is
dishonest, the latter does; in the graph, the line Π∗1 corresponds to a smaller value of γ than
Π∗2 . For the larger value of γ, the standard second-best contract is optimal irrespective of
the probability distribution of circumstances. This corresponds to the case where γ exceeds
γ̂.
We conjecture that the robustness of the standard second-best approach when honesty is
sufficiently unlikely holds for situations with any finite number of potential circumstances.
In particular, the arguments used to prove Lemma 1 easily extend to show that the optimal incentive compatible mechanism is the standard second-best contract. For any pair of potential circumstances, any incentive compatibility constraint imposed for a dishonest agent combined
with acceptability conditions for the two circumstances under consideration implies the
same incentive compatibility constraint for an honest agent. Now if γ is close to 1, it is
intuitive that, in a mechanism where a dishonest lies in some circumstances, the principal
would incur a discrete loss relative to the standard second-best, when confronted with a
dishonest agent,18 while the potential expected benefits with an honest agent are small for
γ close to 1. Thus the standard second-best contract would remain her best option.19
Another implication of the above result is that the agent would rather live in a world with
few others sharing the same ethics: the honest agent receives a rent only if the probability
that the agent is conditionally honest is small; the dishonest agent’s rent is larger in the
first-best mechanism than in the standard second-best contract. Also note that the principal
gains from an increase in the proportion of honest agents only if that proportion is already
sufficiently large for the first-best contract to be optimal.
The honest agent is actually treated unfairly when the first-best contract is offered, in
spite of the acceptability condition: only the dishonest agent garners a rent when circumstances are good. However, there is no longer an open endorsement of that unfairness here,
since an agent announcing bad circumstances may be either lying or telling the truth: the
principal has no way of knowing which. Furthermore, the first-best contract effectively
grants the honest agent a higher expected wage; it is then reasonable to believe that the
honest agent would not feel unfairly treated, even though the dishonest agent garners a
rent in kind.
18. This is due to acceptability and would not be true in the standard problem, where the principal could do as well as in the standard second-best by allowing systematic lies.

19. It is less obvious what would happen with a continuum of types. Some further intuition could be gained on this point by looking at the impact of a change in the values of $\theta$ on the critical $\gamma^*$. It turns out that it can go both ways.
5 Concluding Remarks
Should we try and find out how honest our trade partners are? We have considered a
situation where a principal contracts with an agent of uncertain ethics and uncertain trade
circumstances, and where honest behavior amounts to revealing circumstances truthfully.
We determined the optimal contract when honest behavior is conditional on the contract
being fair.
Our analysis reveals that the condition that the contract be fair implies two major costs
for the principal. First, any monetary rent bestowed on the dishonest agent must also be
given to the honest agent; hence, the principal does not benefit from honest behavior if
she wants to screen among circumstances. Second, and relatedly, if she wants to leave no
monetary rent to the honest agent, she must forgo the benefits of discrimination when the
agent is dishonest. Consequently, the contract involves a monetary rent only if the agent is
likely to be dishonest (and the contract is the standard second-best one). Otherwise, the
principal offers the first-best contract, and the dishonest agent garners a rent in kind.
Our analysis relies on the use of direct mechanisms, where the agent announces his type.
In the workplace example, an interpretation is that the contract specifies the time it takes
to perform a certain task. An implication of our analysis is that it may matter to specify
such non-contractible circumstances in a contract. Doing so prevents honest agents from
self-selecting in a way which would be unfavorable for the principal.
References
Alger, I. and A. C.-t. Ma (2003), “Moral Hazard, Insurance, and Some Collusion,”
Journal of Economic Behavior and Organization, 50:225-247.
Alger, I. and R. Renault (1998), “Screening Ethics when Honest Agents Keep Their
Word,” mimeo, Boston College and Université de Cergy-Pontoise.
Andreoni, J., B. Erard and J. Feinstein (1998), “Tax Compliance,” Journal of Economic Literature, 36:818-860.
Bok, S. (1978), Lying: Moral Choice in Public and Private Life, New York: Pantheon
Press.
Crocker, K.J. and J. Morgan (1998), “Is Honesty the Best Policy? Curtailing Insurance Fraud through Optimal Incentive Contracts,” Journal of Political Economy,
106:355-375.
Dawes, R.M. and R.H. Thaler (1988), “Anomalies: Cooperation,” Journal of Economic
Perspectives, 2:187-197.
Deneckere, R. and S. Severinov (2001), “Mechanism Design with Communication
Costs,” mimeo, University of Wisconsin and Duke University.
Deneckere, R. and S. Severinov (2003), “Does the Monopoly Need to Exclude?,”
mimeo, University of Wisconsin and Duke University.
Erard, B. and J.S. Feinstein (1994), “Honesty and Evasion in the Tax Compliance
Game,” Rand Journal of Economics, 25:1-19.
Forges, F. and F. Koessler (2003), “Communication Equilibria with Partially Verifiable
Types,” mimeo, Université de Cergy-Pontoise.
Green, J.R. and J.-J. Laffont (1986), “Partially Verifiable Information and Mechanism Design,” Review of Economic Studies, 53:447-456.
Hartshorne, H. and M.A. May (1928), Studies in Deceit, New York: Macmillan.
Hollinger, R.C. (1991), “Neutralizing in the Workplace: An Empirical Analysis of Property Theft and Production Deviance,” Deviant Behavior: An Interdisciplinary Journal,
12:169-202.
Jaffee, D.M. and T. Russell (1976), “Imperfect Information, Uncertainty, and Credit
Rationing,” Quarterly Journal of Economics, 90:651-666.
Kofman, F. and J. Lawarrée (1996), “On the Optimality of Allowing Collusion,” Journal of Public Economics, 61:383-407.
Lacker, J.M. and J.A. Weinberg (1989), “Optimal Contracts under Costly State Falsification,” Journal of Political Economy, 97:1345-1363.
Maggi, G. and A. Rodriguez-Clare (1995), “Costly Distortion of Information in Agency Problems,” Rand Journal of Economics, 26:675-689.
Mueller, C.W. and T. Wynn (2000), “The Degree to Which Justice is Valued in the
Workplace,” Social Justice Research, 13:1-24.
Parilla, P.F., R.C. Hollinger and J.P. Clark (1988), “Organizational Control of
Deviant Behavior: The Case of Employee Theft,” Social Science Quarterly, 69:261-280.
Roth, J.A., J.T. Scholz and A.D. Witte, eds. (1989), Taxpayer Compliance: An Agenda for Research, Philadelphia: University of Pennsylvania Press.
Ryan, A.M., M.J. Schmidt, D.L. Daum, S. Brutus, S.A. McCormick and M.H. Brodke (1997), “Workplace Integrity: Differences in Perceptions of Behaviors and Situational Factors,” Journal of Business and Psychology, 12:67-83.
Spicer, M.W. and L.A. Becker (1980), “Fiscal Inequity and Tax Evasion: An Experimental Approach,” National Tax Journal, 33:171-175.
Tirole, J. (1992), “Collusion and the Theory of Organizations,” in Laffont, J.-J., ed., Advances in Economic Theory: Proceedings of the Sixth World Congress of the Econometric Society, Cambridge: Cambridge University Press.
[Figure 1: The principal’s expected surplus as a function of α. The convex curve Πs corresponds to the standard second-best contract; the two downward-sloping lines Π∗1 and Π∗2 correspond to the first-best contract for a smaller and a larger value of γ, respectively. Π∗1 crosses Πs at α∗, while Π∗2 lies below Πs for all α.]