
Non-expected utility and moral hazard: theory and evidence
The aim of the present work is to further analyze situations of moral hazard, focusing on the role
played by players' beliefs. Specifically, I will study how those beliefs shape the optimal contract, and I will look for the conditions under which beliefs converge in cases of repeated interaction.
The analysis will be based on a theoretical model, a laboratory experiment, and a field study.
Moral Hazard
The standard moral hazard model assumes that the probability distribution of the outcome
given effort is exogenously given and common knowledge ([1], [2], [3]). The general result is that if the
agent is risk neutral, a flat contract can be optimal, while if the agent is risk averse, the optimal
contract should be outcome-based and should use all the relevant information available to the
principal. Moreover, the optimal level of effort is increasing in the power of incentives ([4]).
The analysis is motivated by the observation that the role of outcome-based incentives in
aligning the interests of the principal and the agent has been questioned by a series of studies
showing that such incentives may also have the negative effect of crowding out personal motivation, thus
reducing the effort that the agent is willing to exert (see [5] for a review). Further research
is therefore needed to understand the relation between effort and incentives, and the present study will
take some steps in that direction by considering the role that subjective beliefs may have in shaping
the optimal contract and in the choice of effort.
Indeed, knowledge of the relation between effort and outcome is crucial for the principal,
who needs it to give the right incentives to the agent, and also for the agent, who needs it to choose the optimal
level of effort when faced with an outcome-based contract. If this relation is not objectively
defined, the principal and the agent may hold (different) subjective beliefs about it.
The heterogeneity in beliefs may be explained by differences in the information available to the parties,
or by errors in the assessment of probabilities. Indeed, people exhibit biases when dealing with
probabilities, as shown, for example, in [6] and [7], and it may well be that the principal and the
agent hold different information: the principal may (believe he) knows the characteristics
of the task better, while the agent may (believe he) knows his own abilities better.
Thus, the agent and the principal may hold different beliefs regarding the probability distribution of the output given effort, and they may not even be able to assess a single-valued
probability function, i.e. there might be ambiguity. If there is common knowledge of those
subjective beliefs, the principal can modify his maximization problem by including the agent's beliefs
in the relevant constraints. Otherwise, he has to form a second-order belief over the agent's
beliefs, which may be based on the information the principal thinks the agent has, and on
the degree to which he thinks the agent is able to correctly assess his beliefs, given that information.
The situation is even more complex if we allow for belief updating: in a one-shot relation, the
agent may update his beliefs after observing the offered contract, while in a repeated interaction
both the principal and the agent may update their beliefs after observing the realized outcome.
The purpose of the research is to study how the characteristics of the optimal contract change
as we impose different structures on the possible beliefs, both in a one-shot and in a repeated
interaction, and considering both single- and multi-valued probability distributions.
Overconfidence
People are generally found to be overconfident about their own abilities. Among others, [8] found
that people may be led to think they caused an action even when they actually did not; [9] found that
people may learn to become overconfident if they overweight successful outcomes; and [10] found that
people tend to underestimate the likelihood of an unfavorable event happening to them.
The importance of this topic in principal-agent relations is illustrated, for example, by [11],
who found that overconfident workers tend to earn lower wages and work more, allowing the principal to gain more, while [12] was the first to introduce overconfidence into a moral hazard model, finding
conditions under which effort and the power of incentives are increasing in the agent's overconfidence.
He assumes that the principal knows the agent's degree of overconfidence; once this
common-knowledge assumption is relaxed, however, several studies have shown that we have biases when
assessing others' beliefs. For example, an experiment by [13] found that individuals tend to
recognize that people may be overconfident, but they underestimate the magnitude of this bias;
[14] proposed a model and tested it experimentally, showing that, on average, players underestimate others' rationality; finally, [15] studied five boundedly rational types with different models
of others, showing that those models outperform the rational-type model in their experiment.
Thus, it may well be that the principal underestimates the degree of the agent's over- or underconfidence. More importantly, the principal may himself be overconfident when assessing the
agent's beliefs. The implications for the optimal contract are not trivial, since different
combinations of the principal's overconfidence and the principal's belief about the agent's overconfidence may lead
to contracts with different abilities to induce positive effort.
The situation is more complex when considering repeated interactions. The dynamic model
of moral hazard has been widely studied since the seminal paper by [16], which showed that
repeated interaction may alleviate the agency problem if the parties are patient enough.
More recently, [17] studied the case in which the agent has biased beliefs and can choose
between a “known” strategy and an “exploring” one, whose returns are uncertain. He modeled
judgmental overconfidence and optimism, respectively, as a lower variance and a higher mean
of the probability distribution of the (outcome of the) exploring strategy, and he showed that,
while the latter is positively linked with innovation, the former is not.
This result underlines how important it is to model the different forces at work
when the agent decides which action to take. Indeed, the relation between the agent's overconfidence
and his behavior under an outcome-based contract is not clear: on one hand, as in [12], if
the agent is overconfident enough, lower incentives may suffice to make him exert a given
level of effort, but higher incentives would also work, i.e. the agent's effort is increasing in the
power of incentives; on the other hand, [18] found that if the agent is overconfident because
of self-deception (i.e. people with selective memory who may choose to remain overconfident),
extrinsic incentives may be counterproductive, since they may be perceived as a signal of low
trust in the agent's ability, thus undermining his intrinsic motivation by lowering his confidence.1
Those models maintain the assumption of common knowledge of the (subjective) probability
distribution of outcome, while I am particularly interested to relax it, allowing for belief updating.
1 This effect may be alleviated if the relation is repeated, as shown in [19], who also studied a model with
imperfect memory and self-deception, allowing for regret aversion.
Examples of recent models allowing for belief updating are [20], who studied a repeated
reputation game with one-sided moral hazard, finding that false reputations will disappear in
every equilibrium, and [21], who showed that, with dynamic beliefs, the degree of optimism of a
reference-dependent agent with anticipatory utility decreases as the payoff date approaches.
Indeed, the model in [12] can be considered a special case of the model by [22], who
analyzed a repeated principal-agent situation without common knowledge of subjective beliefs,
concluding that in those cases longer relationships tend to increase the agency problem, contrary
to the general result ([16]).
Research Question 1: How do the principal's and the agent's beliefs shape the optimal contract?
To answer this question, I will first extend the model in [22], allowing the principal to be overconfident when assessing his second-order beliefs. I will determine the relation between
beliefs, effort, and the contract's structure. In a one-shot interaction, the expected result is that if
the principal is too overconfident, he may offer sub-optimal contracts that are unable to
induce positive effort.
In a repeated interaction, I will allow for belief updating, and I will try to disentangle the
difference in beliefs due to different information from the difference due to overconfidence. I expect
that, if an objective probability function does exist, the beliefs of both the agent and the
principal will converge to it (if the interaction is long enough), but also that, if the
difference in beliefs is due to overconfidence, convergence will be more difficult.
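As an illustrative sketch of this convergence logic (not the model itself), a simple Beta-Bernoulli updating rule shows how a belief about the success probability converges to the objective value, and how a tight, biased prior (one way to represent overconfidence) slows convergence; all parameter values below are hypothetical:

```python
import random

def update_beliefs(true_p, a0, b0, periods, seed=1):
    """Beta-Bernoulli updating of a belief about the success probability;
    returns the posterior mean after each observed outcome."""
    rng = random.Random(seed)
    a, b = a0, b0          # Beta(a, b) prior over the success probability
    path = []
    for _ in range(periods):
        success = rng.random() < true_p   # outcome drawn from the objective distribution
        a += success
        b += 1 - success
        path.append(a / (a + b))
    return path

# A diffuse prior converges quickly; a tight prior centered far from the
# truth (prior mean 0.9 vs. true 0.6) is still biased after many periods.
diffuse = update_beliefs(0.6, a0=1, b0=1, periods=500)
tight = update_beliefs(0.6, a0=90, b0=10, periods=500)
print(round(diffuse[-1], 2), round(tight[-1], 2))
```

The same machinery could be run for the principal's second-order belief, updated only from realized outcomes rather than from the agent's true belief.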
Finally, I will study an evolutionary model of moral hazard, in which a population of short-lived agents (unable to learn from experience) faces one long-lived principal (able to update his
beliefs). I expect that the equilibrium behavior of the population will crucially depend on the
initial conditions on beliefs.2
Ambiguity
It may be the case that the principal (or the agent, or both) is not able to assign a
single probability to each level of outcome. In this case, the situation is said to be ambiguous,
and beliefs are better represented by sets of probabilities. The seminal paper [24] showed
that, generally, people tend to prefer known over unknown probabilities, i.e. they tend to avoid
ambiguity. Evidence of ambiguity and ambiguity aversion is found in neural studies ([25], [26]),
in laboratory experiments ([27], [28]), and in field studies ([29], [30]).
Nonetheless, [31] showed that, beyond ambiguity aversion, individuals prefer to bet on a
given event before rather than after the event actually happens, and that, if they feel knowledgeable, they prefer to bet on their own confidence rating in an answer rather than on an
equal-chance event (like a coin toss when the confidence in the answer is 50%). In this experiment,
paid subjects answered more questions correctly than unpaid subjects, and they were also found
to be more overconfident.
The fact that people may prefer to bet on their own knowledge (which is more ambiguous
than a chance event) may lead us to think that it is not only the impossibility of computing an
objective probability that makes people in the Ellsberg paradox choose the unambiguous urn.
2 [23] obtained a similar result when studying the possibility that the agent is unaware of certain contingencies.
He found that, depending on the initial degree of awareness of the population, different levels of awareness
would survive in the resulting equilibrium.
Indeed, we may think that more overconfident people also have more confidence in
their own judgment, so that the set of beliefs becomes a singleton, which may carry even more
weight for the agent than an objective probability. Thus, we may expect overconfidence and
ambiguity to be related: more overconfident people may hold less ambiguous beliefs.
A series of models has been proposed to study the role of ambiguity in decisions under
uncertainty (for a review see [32] and [33]).
According to the comparative ambiguity model by [34], for example, ambiguity aversion is
produced by the comparison with less ambiguous events or with more knowledgeable people.
In support of this model, [35] found larger differences between known and unknown probabilities
in comparative evaluations than in separate evaluations.
Ambiguity may be important in moral hazard situations, as shown by [36], who studied the
relation between noisy performance and work motivation, finding that increasing the variance
of performance leads to larger effort. Indeed, several authors have tried to include ambiguity in
moral hazard models, often with different results: [37] studied a risk-neutral agent with
ambiguous knowledge, finding that for high levels of ambiguity flexible contracts may be optimal;
[38] considered an ambiguity-averse principal and, using E-capacities, found conditions
under which complete contracts are optimal; while [39] studied a model where the agent has
multiple beliefs and the principal is ambiguity averse, showing conditions under which the
optimal contract is binary, i.e. a flat payment plus a bonus.3
Finally, [42] considered an ambiguity-averse agent and an ambiguity-neutral principal, with
equal degrees of ambiguity, and showed under which conditions the power of incentives is increasing in the level of ambiguity. He uses the setup from [43], in which ambiguity-averse individuals
evaluate an act by the probability distribution that yields its lowest expected utility.
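This worst-case evaluation can be sketched numerically: an act is valued at its lowest expected utility over a set of candidate priors, which can make a flat wage preferable to a performance-based payment with a higher average value. The payoffs and the probability set below are invented purely for illustration:

```python
def maxmin_eu(act, prior_set):
    """Worst-case expected utility of an act (one payoff per state)
    over a set of candidate probability vectors."""
    return min(sum(p * u for p, u in zip(prior, act)) for prior in prior_set)

# Ambiguous beliefs about P(success): anywhere between 0.4 and 0.7.
priors = [(p, 1 - p) for p in (0.4, 0.5, 0.6, 0.7)]
bonus = [8, 0]   # performance pay: 8 on success, nothing on failure
flat = [4, 4]    # flat wage: 4 in both states
# The bonus contract beats the flat wage under the mid-range priors,
# but its worst case (p = 0.4 gives 3.2) falls below the sure 4.
print(maxmin_eu(bonus, priors), maxmin_eu(flat, priors))
```

An ambiguity-averse agent evaluated this way rejects performance pay that an expected-utility agent with a point belief around 0.55 would accept, which is the channel through which ambiguity pushes toward smaller incentives.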
More recently, ambiguous scenarios have also been analyzed in a dynamic context, for example
in the model by [44], where the ambiguity concerns the drift and volatility of a dynamic process, or
by [45], who studied the optimal dynamic contract when the principal has ambiguous knowledge
of the agent's cost function, and found conditions under which the possibility of firing
may lead to lower effort.
Research Question 2: How do overconfidence and ambiguity interact in shaping the optimal contract?
To answer this question, I will first extend the model in [42] by including different degrees of ambiguity for the principal and the agent, as well as the principal's belief about the agent's degree of ambiguity.
Second, I will consider a repeated version of the model, showing how a partial resolution of
ambiguity may affect the agent's behavior, and considering also the case in which, after the partial
resolution of ambiguity, the agent is overconfident. Finally, I will study the situation in which
one principal faces a population of agents.
Since the presence of overconfidence may call for larger incentives (especially if the agent is
risk neutral), while the presence of ambiguity may call for smaller incentives, the resulting effect
is not trivial.
3 Binary payments were found to be optimal, regardless of the number of possible outcomes, also by [40], studying
loss aversion, and by [41], dealing with relational incentive contracts.
Crowding out
The agent may like exerting effort (up to some point, [46]), and several authors claim this can
be a case in which performance-based payments are detrimental (the crowding-out effect).
The actual choice of effort may depend on many aspects: [47] studied the role of social
comparison, finding that if only one of two workers' wages is cut, effort decreases more
than if both wages are cut, while [48] found that increased power leads to increased self-serving
behavior, which may be balanced by accountability.
Several studies have found evidence of crowding out, i.e. evidence that performance-based
incentives may lower the actual level of effort compared to flat payments, and sometimes compared to no
payment at all. The motives behind this behavior and its importance are still debated.
Crowding out of intrinsic motivation is among the reasons why extrinsic incentives may lower
the agent's provision of effort in [49], together with reciprocity and social approval, and in [50], in
the form of loyalty and trust, which may provide an alternative mechanism to control agents'
behavior. Analyzing data from the public sector, [51] reports evidence supporting the crowding-out
hypothesis in the higher education sector and in the National Health Service: higher extrinsic
rewards tend to reduce the propensity of intrinsically motivated individuals to accept public jobs.
Finally, in a study with a real field task (i.e. data entry for a library and door-to-door fund-raising for a research center), [52] compared the effort exerted depending on whether a monetary
gift was added to the promised payment or not. They found that worker effort in the first few
hours on the job was considerably higher in the “gift” treatment than in the “non-gift” treatment,
although after the initial few hours no difference in outcomes was observed, and overall the gift
treatment yielded inferior aggregate outcomes for the employer. [53] analyzed offers in a proposer-responder game where the responder can punish or reward the proposer. He found a W-shaped
curve: compared with the dictator game, small rewards and small punishments lead to smaller
offers and a higher surplus for the proposer, while with larger incentives the initial amount offered
is higher: with high punishment the proposer's payoff is the smallest, while higher rewards
give the highest payoff to both. Among the causes of this detrimental effect there may be:
information regarding the importance of the task, feeling insulted, and fairness.
This evidence reveals two counter-intuitive behaviors: the first is that an individual may exert positive effort if he is not paid at all, while exerting zero effort if he is
paid too little; the second is that the individual may exert more effort when the contract is flat,
compared to the case in which the payment is performance-based. Both behaviors may be
signals of crowding out, but while the first is probably due to social motives (e.g. the desire to
help) ([53]), the second may be caused by differences in subjective beliefs ([18]).
Thus, it would be interesting to determine the role of subjective beliefs in deciding
whether an incentive is “large enough” to induce positive effort or, equivalently, whether an
incentive is so small (or so large) that it destroys the intrinsic motivation for doing the task.
Several authors have tried to model the crowding out of intrinsic motivation: [54] studied group
identity, finding a hidden cost of incentives because agents do not expect to be monitored; [55]
analyzed a repeated gift-exchange game where only one party cares about reciprocity, showing that
gift giving never arises in equilibrium; [56] found an equilibrium with ethical types, moral vs.
amoral; and finally [57] proposed a model of intrinsic and extrinsic motivation, showing the negative long-run effect of performance-based incentives when they convey certain information to the agent.
Research Question 3: What is the role of overconfidence and ambiguity in determining
the crowding out of intrinsic motivation?
To answer this question, I will use the model developed in the previous section and study
under which initial conditions on subjective beliefs crowding out may arise.
The expected result is that different degrees of overconfidence may lead to different behaviors
under a performance-based contract, including crowding out.
Method
The analysis will be developed in three steps:
1. a theoretical model, where I consider the role of first- and second-order beliefs, taking into
account the possibility that they are biased. I will consider different belief structures
and different modes of belief updating, including the presence of ambiguity in the distribution of the outcome;
2. a laboratory experiment, where I will simulate the relationship between the principal and
the agent to study the emergence of contracts in a case of real task delegation and moral
hazard, trying to modify individual levels of ambiguity to check whether the theoretical
predictions fit the data;
3. a field experiment, to test whether the theoretical predictions and the laboratory findings are robust once we move to the real world. Specifically, I will study the effect of
economic incentives on the effort exerted by workers in two different firms, characterized
by different levels of variability in worker output.
Theoretical Model
As an example, consider a simple binary model in which the agent cannot update his beliefs
after observing the contract, the agent is protected by limited liability, and both the agent and the
principal are risk neutral: the principal delegates a task to the agent; the outcome x of the task
is stochastic and can be a success or a failure, i.e. x ∈ {R, 0}.
The principal proposes a contract (wR, w0) to the agent, which can be based on the final
outcome of the task (wR ≠ w0) or not (wR = w0). If the agent accepts the contract, he chooses
a level of effort e ∈ {0, 1}, which is not observable by the principal and is costly for the agent,
with c(1) = c > 0 = c(0). There is an objective probability distribution of the outcome given effort, unknown
to both, with π(R|e) = πe, π(0|e) = 1 − πe and π1 > π0. The principal and the agent hold
subjective beliefs πeP and πeA. Moreover, they hold second-order beliefs π̂eA and π̂eP, where π̂ej is
the belief held by the other party about j's belief πej.
If the principal can observe agent’s effort, and wants to implement e = 1, he will solve:
max V = π1P (R − wR) + (1 − π1P)(−w0)
s.t.  U = π̂1A wR + (1 − π̂1A) w0 − c ≥ 0    (PC)
      wR ≥ 0,  w0 ≥ 0

The solution of this problem clearly depends on the principal's beliefs: if the principal thinks that
the agent holds the same beliefs as he does, all contracts such that the participation constraint binds
and the limited liability constraints hold are optimal; if he thinks that the agent's belief is
larger than his (π̂1A > π1P), he will set w0 = 0 and wR = c/π̂1A, while if he thinks that it is smaller, he will
set wR = 0 and w0 = c/(1 − π̂1A). The agent will accept the contract, knowing he will have to exert
positive effort, whenever ∆w π1A ≥ c − w0, where ∆w = wR − w0.
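A minimal sketch of this first-best solution, with invented parameter values (here `pi1A_hat` is the principal's second-order belief and `pi1_A` the agent's actual belief):

```python
def first_best_contract(c, pi1_P, pi1A_hat):
    """Contract (w0, wR) implementing e = 1 when effort is observable,
    as a function of the principal's second-order belief pi1A_hat."""
    if pi1A_hat > pi1_P:                  # agent believed more optimistic
        return 0.0, c / pi1A_hat          # pay only on success
    if pi1A_hat < pi1_P:                  # agent believed more pessimistic
        return c / (1 - pi1A_hat), 0.0    # pay only on failure
    return c, c                           # same beliefs: e.g. a flat wage

def accepts(c, pi1_A, w0, wR):
    """Acceptance rule from the text: (wR - w0) * pi1_A >= c - w0."""
    return (wR - w0) * pi1_A >= c - w0

# If the principal overestimates the agent's optimism, the agent rejects:
# with pi1A_hat = 0.9 but a true belief of only 0.6, the bonus is too small.
w0, wR = first_best_contract(c=1.0, pi1_P=0.5, pi1A_hat=0.9)
print(accepts(1.0, 0.6, w0, wR), accepts(1.0, 0.95, w0, wR))
```

The flat contract (c, c) makes the acceptance condition hold trivially (0 ≥ 0), which is the "safe" option discussed below.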
Simple calculations allow us to complete the following table:

Case            ∆w                 accept if
π̂1A < π1P      −c/(1 − π̂1A)      π̂1A ≥ π1A
π̂1A = π1P      (c − w0)/π̂1A      if c > w0: π̂1A ≤ π1A; if c < w0: π̂1A ≥ π1A;
                                   if c = w0 = wR: always accept
π̂1A > π1P      c/π̂1A             π̂1A ≤ π1A
Thus, if the principal does not know the agent's beliefs, and does not know whether his belief
about the agent's belief is correct, he can play it safe and offer a flat contract, which guarantees that
every agent will accept it and gives the principal an expected payoff of π1P R − c.
If the principal does not take into account the possibility that he is wrong about the agent's beliefs,
and sticks with his own second-order belief, he may not be able to make the agent accept the
contract, in case he overestimates the agent's bias. On the other hand, if he underestimates or
correctly guesses the agent's bias, he may ensure a utility larger than the “sure” option, getting
π1P R − c (1 − π1P)/(1 − π̂1A) in case of agent underconfidence, and π1P R − c π1P/π̂1A in case of agent overconfidence.
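This payoff comparison can be made concrete with a small sketch (hypothetical numbers, restricted to the case in which the principal believes the agent is over-optimistic, so w0 = 0 and wR = c/π̂1A):

```python
def tailored_vs_safe(R, c, pi1_P, pi1A_hat, pi1_A):
    """Principal's expected payoff from the belief-based contract
    (w0 = 0, wR = c / pi1A_hat) versus the flat 'sure' contract w = c."""
    wR = c / pi1A_hat
    accepted = wR * pi1_A >= c          # the agent's own belief decides
    tailored = pi1_P * (R - wR) if accepted else 0.0
    safe = pi1_P * R - c                # flat contract, always accepted
    return tailored, safe

# Underestimating the agent's optimism still beats the safe option...
print(tailored_vs_safe(R=4, c=1, pi1_P=0.5, pi1A_hat=0.8, pi1_A=0.9))
# ...but overestimating it leads to rejection and a zero payoff.
print(tailored_vs_safe(R=4, c=1, pi1_P=0.5, pi1A_hat=0.9, pi1_A=0.6))
```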
The “sure” option is no longer available when there is incomplete information also about the
agent's action. In this case the principal must give the agent the necessary incentives, and
solves:
max V = π1P (R − wR) + (1 − π1P)(−w0)
s.t.  U = π̂1A wR + (1 − π̂1A) w0 − c ≥ 0    (PC)
      ∆π̂A ∆w ≥ c − w0    (IC)
      wR ≥ 0,  w0 ≥ 0

where ∆π̂A = π̂1A − π̂0A. The optimal contract in this case is w0 = 0, wR = c/∆π̂A. When offered such a contract, the
agent will accept it if π1A ≥ ∆π̂A, and will exert positive effort if ∆πA ≥ ∆π̂A. Thus, it may be
that the agent accepts the contract and exerts no effort (in case ∆π̂A < π1A < ∆π̂A + π0A).
This set is increasing in π0A, the agent's subjective probability of success given zero
effort. A high value of π0A may be due to the agent's confidence in “good luck”, but also to an
overestimation of the simplicity of the task. On the other hand, high values of π1A may signal
the agent's overestimation of the impact of his effort on the outcome realization, thus increasing
the probability that the agent exerts positive effort.
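The accept-but-shirk region can be checked numerically; a sketch with invented beliefs, where `dpi_hat` stands for ∆π̂A:

```python
def agent_response(c, pi0_A, pi1_A, dpi_hat):
    """Agent's response to the second-best contract w0 = 0, wR = c/dpi_hat:
    does he accept, and does he actually exert effort?"""
    wR = c / dpi_hat
    accept = pi1_A * wR >= c                  # equivalent to pi1_A >= dpi_hat
    work = (pi1_A - pi0_A) * wR >= c          # equivalent to dpi_A >= dpi_hat
    return accept, work

# With pi1_A in (dpi_hat, dpi_hat + pi0_A), the agent accepts but shirks:
# here 0.5 < 0.7 < 0.5 + 0.4, so the contract is signed with no effort.
print(agent_response(c=1, pi0_A=0.4, pi1_A=0.7, dpi_hat=0.5))
```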
It is clear from this very simple example that relaxing the assumption of common knowledge
may have several implications for the equilibrium contract: in the first best, very different
contracts may arise depending on the principal's initial beliefs, while in the second best, a new
possibility for the agent to shirk arises.
Indeed, I will try to enrich the previous model by adding different structures on individuals'
beliefs. Specifically, I will model explicitly the estimation errors of both individuals and study
how they affect the equilibrium behavior. As this little example suggests, I expect errors
due to confidence in luck to be worse for total welfare than errors due to overestimation
of one's own abilities. Moreover, I will study the behavior of beliefs in an evolutionary model, in which
the principal and the agent may or may not be able to correctly update their beliefs. I expect
that when both individuals can update their beliefs, the population's beliefs will converge to a
unique distribution, which may or may not be equal to the true, objective distribution function;
that is, there might be cases in which an equilibrium arises only because the parties involved believe
they are in equilibrium.
Laboratory experiment
Several authors have tried to replicate the principal-agent relationship in the laboratory, trying
to determine which aspects of the relation the agent takes into account when deciding the level
of effort.4 As an example, [62] studied the role of contingent versus non-contingent incentives,
finding that very small contingent payments tend to decrease the response rate, while very small
non-contingent payments may increase it, while [63] studied the effect of financial incentives,
finding that they may improve performance for medium-difficulty tasks, but are ineffective for very easy or
very hard ones. Finally, [64] reports an experiment with students, showing that those
who were simply asked to perform a task exerted more effort than other students to whom
the experimenter promised a monetary payment.
Since the main focus is on the assessment of subjective probabilities, I will ask participants
to perform a real task,5 and I will give them different pieces of information regarding its difficulty (all
true, but highlighting different aspects of the task). I will then ask them, according to the strategy
method, to assess their first- and second-order beliefs.
The general setting is an extension of [65], who examined a principal-agent relationship with
fixed pay and real effort. The task consists in finding hidden differences between two figures,
the precise number being unknown to participants. Thus, when the agent finds 20 differences,
the principal will not be able to assess with certainty the exact level of effort the agent exerted.
Participants will be allowed to surf the Internet instead of finding differences, if they want,
in order to give them an incentive to shirk.
I will give participants different pieces of information regarding the number of hidden differences
(e.g. “there are more than 5” vs. “there are less than 20”), and then ask them the probability with
which they think they will be able to find all differences, the probability with which they think
the other person will be able to find all differences, and the probability with which they think the other
person thinks that they will be able to find all differences, all as functions of the time effectively
spent doing the task (i.e. not surfing the Internet).
Then, they will be randomly assigned to the roles of “principal” and “agent”. The principal
is offered a contract by the experimenter, and he may choose to delegate the task to the agent
4 See, for further examples, the series of experiments by Cappelen: [58], [59], [60], [61].
5 By asking participants to perform a real task I will gain in terms of realism, but a disadvantage is that
participants' cost function will remain unknown, and I won't be able to compute their net payoff.
by offering him another contract, which may be a flat payment, or based on the differences found
and/or on a share of the final output, and he may also indicate a desired number of differences
to be found. The agent can always reject the contract, in which case the game is over in a one-shot
interaction, and moves on to the next period in the repeated one.
As in [66], a preliminary questionnaire will be given to participants before the experiment,
to assess measures such as risk aversion and self-confidence.
The experiment will consist of three parts:
→ In the first part, the aim is to assess the effectiveness of extrinsic incentives. First, all
participants will be asked to find hidden differences between two figures (do the task). There will
be no payment in this first stage. Second, participants will be divided into four groups, and they will
be asked to do the task again, this time with extrinsic incentives. According to the group, there
will be different incentives: flat-small, flat-large, performance-based-small, performance-based-large. The effort exerted in the first step will give a first measure of the intrinsic motivation of
all participants to do the task, while differences in behavior between groups will be driven (also)
by the different extrinsic incentives.
→ In the second part, the aim is to observe how players' beliefs affect their actions.
First, participants will be divided into three groups, and they will be matched to play a revised
form of the principal-agent game. In the first group there are no extrinsic incentives: the principal
is asked to do the task, and, if he wants, he can ask the agent to check some figures for him, as
a favor. In the second group, the principal is offered a monetary reward by the experimenter,
but he can still only ask the agent for a favor. In the third group, the principal is offered a monetary
reward and can also offer a contract to the agent. After this phase, all principals will be asked
to offer a contract to the agent to do the task, without the possibility of doing it themselves.
The effort exerted in the first step, together with the number of figures “exchanged”, will give a
second measure of the intrinsic motivation of participants to do the task, while the difference with
the second group will give a second measure of the effectiveness of the monetary incentive, as well
as of the difference in the agent's behavior when facing a request and when facing a contract.
→ In the third part, the aim is to observe how players' actions affect their beliefs. Participants
will again be divided into three groups, and they will repeat the game in 2.d for 20 periods, knowing
that only one period will be selected at the end by the experimenter for payment. In the first group, a
principal and an agent will be randomly matched and stay together until the end of the game;
in the second group, they will be randomly re-matched in every period; while in the third group
the principal will play repeatedly but each agent will play only once. Since at the beginning of
each period the subjects are asked to answer the belief questionnaire, I will be able to assess if
and how they update their beliefs, and whether there are differences among the three groups.
Specifically, the difference in beliefs (and payoffs) between the first and the second group will give
a measure of the importance of the belief-updating procedure; the difference between the first
and the third, a measure of the importance of information, since the principal, being involved in a
repeated game, knows that the agent is less expert than he is in evaluating the task. Finally, the
difference between the second and the third group will give a measure of the degree to which
the principal may “exploit” the agent by using his private information (i.e. the history of the
game).
For the second and the third part there will be three different treatments, in order to further
study the extent to which monetary incentives may crowd out intrinsic motivation: in the first
treatment, the experimenter will pay the same amount for differences found by the principal and
for those found by the agent; in the second, he will pay more for differences found by the
principal; in the third, he will pay more for differences found by the agent. Within each
treatment there will be several payment schemes, differing in how much each difference
is paid, for example 0, 1 cent, 50 cents, or 1 euro. If monetary incentives do crowd out intrinsic
motivation, I expect effort to be high when the payment is zero, higher still when
it is 50 cents, low when it is 1 euro, and lowest when it is 1 cent. If, on the other hand, the
monetary incentives are effective in inducing effort, I expect effort to be
increasing in the level of payment.
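The resulting design (treatment crossed with per-difference payment) and the two competing predictions can be sketched as follows. The cell enumeration assumes the four example payment levels, and the rank functions merely encode the orderings conjectured above; all names are illustrative:

```python
from itertools import product

# Illustrative design: who is paid more, crossed with per-difference pay.
treatments = ["equal_pay", "principal_paid_more", "agent_paid_more"]
payments_eur = [0.00, 0.01, 0.50, 1.00]  # per difference found

cells = list(product(treatments, payments_eur))
assert len(cells) == 12  # twelve experimental cells in total

def predicted_rank_crowding_out(pay):
    """Effort ordering conjectured under crowding out: 1 cent lowest,
    then 1 euro, then zero pay, with 50 cents highest."""
    return {0.01: 0, 1.00: 1, 0.00: 2, 0.50: 3}[pay]

def predicted_rank_standard(pay):
    """Standard theory: effort increases monotonically in the payment."""
    return sorted(payments_eur).index(pay)

# The two hypotheses disagree on how zero pay and 1 cent compare:
assert predicted_rank_crowding_out(0.00) > predicted_rank_crowding_out(0.01)
assert predicted_rank_standard(0.00) < predicted_rank_standard(0.01)
```

The final pair of assertions pinpoints where the data can discriminate between the hypotheses: the zero-payment and 1-cent cells are ranked in opposite order by the two theories.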
This setting may easily be extended to study other motives for crowding out. To
study the effect of reputation, principals may be told the number of differences detected
by the agent in the first part of the experiment; I expect the power of incentives to be higher
for those agents who performed poorly in the first part. To study the effect of social motives, the
identity of the opponent may be revealed to the participants; I expect the level of effort to be higher
when the agent knows the principal's identity, because a
closer relationship is established. Finally, to study the effect of fairness and social comparison, the agent may
be shown the contracts offered by all the other principals; I expect agents with contracts worse
than the average to exert low effort, even if the contract is incentive compatible.
Field Study
It is crucial to assess whether the results found in the theoretical model and in the laboratory
experiments are indeed a good prediction of what happens in real life.
Thus, I will conclude the analysis with a field study, in which I will use personal, structured
interviews to assess whether principals take into account aspects such as probability or overconfidence
when designing a contract, and whether agents reason in a similar way when deciding
the level of effort. Specifically, I will check whether the contract form and the corresponding beliefs are
consistent with the theory.
To do so, I will consider two different firms, characterized by different degrees of the agent's
responsibility over the task and by a different role of chance in the realization of the task.
In the first firm, workers have to tie the vines to stakes in the summer, prune them
in the winter, and collect the grapes in autumn. I chose this sector because the output of these
kinds of tasks is very easy to evaluate in quantity but not in quality, so different incentive
schemes may induce different behaviors. Also, since the firm is family-owned, a small share of the
workers are family members or friends, and they will therefore face a different set of incentives and a
larger intrinsic motivation to perform their job. The usual payment for external workers is a
fixed wage per day, which could be modified in size or linked to output quantity. Interestingly,
in these situations having a family member work alongside external workers
seems to increase their level of effort, and this may be not only an effect of monitoring but also
of increased intrinsic motivation, since family members may work harder and thus incentivize
the external workers to behave in the same manner.
In the second firm, workers have to go shop-by-shop selling flowers from a van. Thus, the task
is characterized by a large degree of autonomy, and the sole measure the employer has to evaluate
effort is the revenue the workers report having generated. Nonetheless, the contracts in
this firm are strongly performance-based, reducing the workers' incentive to
shirk (almost) to a minimum. Since the firm's owner and (most of) the workers have not changed for almost 30 years, it will
also be possible to check if and how contracts, workers' effort and the firm's profit have changed over time.
I will use interviews to understand what people believe about their work, whether there are
problems linked to ambiguity, and how the characteristics of the task are related to the
contract. More importantly, I hope that through the use of interviews I will be able to understand
whether people actually take aspects like likelihood, or ambiguity, into account when deciding how
to behave.
Extensions and Conclusions
The aim of this study is to relax the hypothesis of common knowledge of beliefs in the classical
moral hazard problem, trying to make it closer to real situations. The development of
behavioral economics has pointed out that several assumptions and results of “traditional”
economics are not confirmed by actual behavior. The assumption of equal beliefs when studying
moral hazard is a rather strong one, since for a once-and-for-all event it is difficult (if not
impossible) to compute an objective probability of success. We should then rely on the notion
of subjective probability, a probability based on personal beliefs. It is quite realistic to
assume that the principal knows the task better, while the agent knows his own
abilities better, so it is natural to assume that they hold different beliefs over the probability of
success of the task, and that each may hold a second-order belief regarding the other party's belief.
What is less obvious is how those beliefs shape the optimal contract, which is precisely
what the present study aims to establish.
Moreover, the relationship between intrinsic motivation and monetary incentives is far from
being well understood. Several studies report different results, but, to the best of my knowledge,
no study tries to explicitly incorporate the principal's beliefs, overconfidence and
ambiguity, and to test them with an experiment that considers intrinsic motivation under moral hazard
and variable incentive schemes.
If the findings support the crowding-out theory, they may be relevant for firms' management,
since there would be evidence that relationships based on fairness and trust are more effective
in inducing high effort than relationships based on monetary incentives.
If, on the other hand, the findings support the standard theory, they may be relevant especially
for the public sector, since most public contracts are framed in terms of fixed payments,
while the evidence would suggest that performance-based incentives lead to higher effort.
References
[1] B. Holmström. “Moral hazard and observability”. In: The Bell Journal of Economics (1979), pp. 74–91.
[2] P. R. Milgrom. “Good news and bad news: Representation theorems and applications”. In: The Bell Journal of Economics (1981), pp. 380–391.
[3] W. P. Rogerson. “The first-order approach to principal-agent problems”. In: Econometrica: Journal of the Econometric Society (1985), pp. 1357–1367.
[4] J.-J. Laffont and D. Martimort. The Theory of Incentives: The Principal-Agent Model. Princeton University Press, 2009.
[5] B. S. Frey and R. Jegen. “Motivation crowding theory”. In: Journal of Economic Surveys 15.5 (2001), pp. 589–611.
[6] A. Tversky and D. Kahneman. “Judgment under uncertainty: Heuristics and biases”. In: Science 185.4157 (1974), pp. 1124–1131.
[7] A. Tversky and D. Kahneman. “Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment”. In: Psychological Review 90.4 (1983), p. 293.
[8] D. M. Wegner and T. Wheatley. “Apparent mental causation: Sources of the experience of will”. In: American Psychologist 54.7 (1999), p. 480.
[9] S. Gervais and T. Odean. “Learning to be overconfident”. In: Review of Financial Studies 14.1 (2001), pp. 1–27.
[10] N. Jacquemet, J.-L. Rulliere, and I. Vialle. “Monitoring optimistic agents”. In: Journal of Economic Psychology 29.5 (2008), pp. 698–714.
[11] A. Sautmann. Contracts for Agents with Biased Beliefs: Some Theory and an Experiment. Tech. rep. 2011.
[12] L. E. De la Rosa. “Overconfidence and moral hazard”. In: Games and Economic Behavior 73.2 (2011), pp. 429–451.
[13] S. Ludwig and J. Nafziger. Do You Know That I Am Biased? An Experiment. Bonn Econ Discussion Papers 11. University of Bonn, Germany, 2007.
[14] G. Weizsäcker. “Ignoring the rationality of others: evidence from experimental normal-form games”. In: Games and Economic Behavior 44.1 (2003), pp. 145–171.
[15] D. O. Stahl and P. W. Wilson. “On Players' Models of Other Players: Theory and Experimental Evidence”. In: Games and Economic Behavior 10.1 (1995), pp. 218–254.
[16] R. Radner. “Repeated principal-agent games with discounting”. In: Econometrica: Journal of the Econometric Society (1985), pp. 1173–1198.
[17] H. Herz, D. Schunk, and C. Zehnder. “How do judgmental overconfidence and overoptimism shape innovative activity?” In: Games and Economic Behavior 83 (2014), pp. 1–23. issn: 0899-8256.
[18] R. Benabou and J. Tirole. “Self-confidence and personal motivation”. In: Quarterly Journal of Economics (2002), pp. 871–915.
[19] D. Gottlieb. “Imperfect memory and choice under risk”. In: Games and Economic Behavior 85 (2014), pp. 127–158. issn: 0899-8256.
[20] A. Ozdogan. “Disappearance of reputations in two-sided incomplete-information games”. In: Games and Economic Behavior 88 (2014), pp. 211–220. issn: 0899-8256.
[21] R. Macera. “Dynamic beliefs”. In: Games and Economic Behavior 87 (2014), pp. 1–18. issn: 0899-8256.
[22] V. Bhaskar. “Dynamic moral hazard, learning and belief manipulation”. In: (2012).
[23] E.-L. Von Thadden and X. Zhao. “Incentives for unaware agents”. In: The Review of Economic Studies (2012), rdr050.
[24] D. Ellsberg. “Risk, ambiguity, and the Savage axioms”. In: The Quarterly Journal of Economics (1961), pp. 643–669.
[25] I. Levy et al. “Neural representation of subjective value under risk and ambiguity”. In: Journal of Neurophysiology 103.2 (2010), pp. 1036–1047.
[26] M. Hsu et al. “Neural systems responding to degrees of uncertainty in human decision-making”. In: Science 310.5754 (2005), pp. 1680–1683.
[27] J. Stecher, T. Shields, and J. Dickhaut. “Generating ambiguity in the laboratory”. In: Management Science 57.4 (2011), pp. 705–712.
[28] A. Kothiyal, V. Spinu, and P. P. Wakker. “An experimental test of prospect theory for predicting choice under ambiguity”. In: Journal of Risk and Uncertainty 48.1 (2014), pp. 1–17.
[29] H. Kunreuther et al. “Ambiguity and underwriter decision processes”. In: Journal of Economic Behavior & Organization 26.3 (1995), pp. 337–352.
[30] L. Cabantous. “Ambiguity aversion in the field of insurance: Insurers' attitude to imprecise and conflicting probability estimates”. In: Theory and Decision 62.3 (2007), pp. 219–240.
[31] C. Heath and A. Tversky. “Preference and belief: Ambiguity and competence in choice under uncertainty”. In: Journal of Risk and Uncertainty 4.1 (1991), pp. 5–28.
[32] M. J. Machina and M. Siniscalchi. “Ambiguity and Ambiguity Aversion”. In: Handbook of the Economics of Uncertainty 1 (2014), pp. 729–807.
[33] J. Etner, M. Jeleva, and J.-M. Tallon. “Decision theory under ambiguity”. In: Journal of Economic Surveys 26.2 (2012), pp. 234–270.
[34] C. R. Fox and A. Tversky. “Ambiguity Aversion and Comparative Ignorance”. In: The Quarterly Journal of Economics 110.3 (1995), pp. 585–603.
[35] C. C. Chow and R. K. Sarin. “Comparative ignorance and the Ellsberg paradox”. In: Journal of Risk and Uncertainty 22.2 (2001), pp. 129–139.
[36] R. Sloof and C. M. Van Praag. “The effect of noise in a performance measure on work motivation: A real effort laboratory experiment”. In: Labour Economics 17.5 (2010), pp. 751–765.
[37] S. Mukerji. “Ambiguity aversion and incompleteness of contractual form”. In: American Economic Review (1998), pp. 1207–1231.
[38] M. Basili and M. Franzini. “Subjective ambiguity and moral hazard in a principal-agent model”. In: (2003).
[39] G. Lopomo, L. Rigotti, and C. Shannon. “Knightian uncertainty and moral hazard”. In: Journal of Economic Theory 146.3 (2011), pp. 1148–1172.
[40] F. Herweg, D. Müller, and P. Weinschenk. “Binary payment schemes: Moral hazard and loss aversion”. In: The American Economic Review 100.5 (2010), pp. 2451–2477.
[41] J. Levin. “Relational incentive contracts”. In: The American Economic Review 93.3 (2003), pp. 835–857.
[42] P. Weinschenk. Moral Hazard and Ambiguity. Tech. rep. Preprints of the Max Planck Institute for Research on Collective Goods, 2010.
[43] I. Gilboa and D. Schmeidler. “Maxmin expected utility with non-unique prior”. In: Journal of Mathematical Economics 18.2 (1989), pp. 141–153.
[44] L. G. Epstein and S. Ji. “Ambiguous volatility, possibility and utility in continuous time”. In: Journal of Mathematical Economics 50 (2014), pp. 269–282.
[45] M. Szydlowski. Ambiguity in Dynamic Contracts. Tech. rep. Discussion Paper, Center for Mathematical Studies in Economics and Management Science, 2012.
[46] D. M. Kreps. “Intrinsic Motivation and Extrinsic Incentives”. In: The American Economic Review 87.2 (1997), pp. 359–364. issn: 0002-8282.
[47] A. Cohn et al. “Social comparison and effort provision: evidence from a field experiment”. In: Journal of the European Economic Association (2014).
[48] M. Pitesa and S. Thau. “Masters of the universe: How power and accountability influence self-serving decisions under moral hazard”. In: Journal of Applied Psychology 98.3 (2013), p. 550.
[49] E. Fehr and A. Falk. “Psychological foundations of incentives”. In: European Economic Review 46 (2002), pp. 687–724.
[50] G. Cuevas-Rodriguez, L. R. Gomez-Mejia, and R. M. Wiseman. “Has Agency Theory Run its Course?: Making the Theory more Flexible to Inform the Management of Reward Systems”. In: Corporate Governance: An International Review 20.6 (2012), pp. 526–546.
[51] Y. Georgellis, E. Iossa, and V. Tabvuma. “Crowding out intrinsic motivation in the public sector”. In: Journal of Public Administration Research and Theory 21.3 (2011), pp. 473–493.
[52] U. Gneezy and J. A. List. “Putting behavioral economics to work: testing for gift exchange in labor markets using field experiments”. In: Econometrica 74.5 (2006).
[53] U. Gneezy. The W Effect. 2003.
[54] P. Masella, S. Meier, and P. Zahn. “Incentives and group identity”. In: Games and Economic Behavior 86 (2014), pp. 12–25. issn: 0899-8256.
[55] N. Netzer and A. Schmutzler. “Explaining gift exchange – The limits of good intentions”. In: Journal of the European Economic Association 12.6 (2014), pp. 1586–1616. issn: 1542-4766.
[56] G. Kanatas and C. Stefanadis. “Ethics, welfare, and capital markets”. In: Games and Economic Behavior 87 (2014), pp. 34–49.
[57] R. Benabou and J. Tirole. “Intrinsic and extrinsic motivation”. In: The Review of Economic Studies 70.3 (2003), pp. 489–520.
[58] A. W. Cappelen et al. Leadership and Incentives. Discussion Paper Series in Economics 2/2014. Department of Economics, Norwegian School of Economics, Feb. 2014.
[59] A. W. Cappelen et al. “Just Luck: An Experimental Study of Risk-Taking and Fairness”. In: American Economic Review 103.4 (June 2013), pp. 1398–1413.
[60] A. W. Cappelen et al. “Needs Versus Entitlements—An International Fairness Experiment”. In: Journal of the European Economic Association 11.3 (June 2013), pp. 574–598.
[61] A. W. Cappelen et al. Give and Take in Dictator Games. Discussion Paper Series in Economics 14/2012. Department of Economics, Norwegian School of Economics, July 2012.
[62] U. Gneezy and P. Rey-Biel. “On the Relative Efficiency of Performance Pay and Noncontingent Incentives”. In: Journal of the European Economic Association 12.1 (2014), pp. 62–72.
[63] C. F. Camerer and R. M. Hogarth. “The effects of financial incentives in experiments: A review and capital-labor-production framework”. In: Journal of Risk and Uncertainty 19.1-3 (1999), pp. 7–42.
[64] D. Ariely. Predictably Irrational. Ed. by H. Collis. 2009.
[65] A. Kirstein. “Bonus and malus in principal-agent relations with fixed pay and real effort”. In: Schmalenbach Business Review (sbr) 60.3 (2008), pp. 280–303.
[66] T. Dohmen and A. Falk. “Performance Pay and Multidimensional Sorting: Productivity, Preferences, and Gender”. In: The American Economic Review 101.2 (2011).