
Journal of Mathematical Economics 45 (2009) 97–112
A reformulation of the maxmin expected utility model with
application to agency theory
Edi Karni
Johns Hopkins University, Baltimore, MD 21218, United States
Article info
Article history:
Received 16 November 2006
Received in revised form 13 July 2008
Accepted 16 July 2008
Available online 26 July 2008
Keywords:
Maxmin expected utility
Principal–agent theory
Moral hazard
Abstract
Invoking the parameterized distribution formulation of agency theory, the paper develops
axiomatic foundations of the principal’s and agent’s choice behaviors that are representable
by the maximization of the minimum expected utility over action-dependent sets of priors. In the context of this model, the paper also discusses some implications of ambiguity
aversion for the design of optimal incentive schemes.
1. Introduction
In recent years, ambiguity aversion, displayed by a pattern of choice first noted by Ellsberg (1961), has become a focal
issue in the theory of decision making under uncertainty. This phenomenon is incompatible with representations of decision makers’ beliefs by additive (subjective) probability measures. Axiomatic models of decision making under uncertainty
designed to accommodate ambiguity aversion include the Choquet expected utility model of Schmeidler (1989), the maxmin
expected utility model of Gilboa and Schmeidler (1989), the invariant biseparable preferences of Ghirardato et al. (2004), the
smooth ambiguity preferences of Klibanoff et al. (2005) and the variational preference model of Maccheroni et al. (2006).
If ambiguity aversion is an important aspect of individual decision-making under uncertainty then, like risk aversion,
it should manifest itself in shaping economic institutions and practices. In this paper, I take a step toward incorporating
ambiguity aversion into the theory of incentive contracts and the study of its implications.
By and large, the analysis of the principal–agent relationship in the presence of moral hazard is based on the assumption
that the two parties are expected-utility maximizers. In particular, the parameterized distribution formulation, pioneered
by Mirrlees (1974, 1976), envisions a principal designing an incentive contract intended to induce an agent to choose an
action that would maximize the principal’s expected utility, over a class of action-dependent probability distributions on
outcomes. Given the contract, the agent is supposed to choose the action that would maximize his own expected utility over
the same class of action-dependent probability distributions on outcomes.
In view of the main concern of this work, I begin by developing axiomatic models of ambiguity-averse principal and agent,
whose preferences are represented by maxmin expected utility over action-dependent subjective sets of priors. These models
constitute a synthesis of the maxmin expected utility model of Gilboa and Schmeidler (1989) and the analytical framework
of Karni (2006). The analytical framework dispenses with the notion of states of nature which, for reasons explained in
Karni (2006), I regard as unsatisfactory, introducing instead a choice space whose elements are action–bet pairs. Actions
correspond to means by which agents may influence the likely realizations of different outcomes; bets represent outcome-contingent payoffs. The resulting representations are subjective versions of the parameterized distribution formulation,
lending themselves naturally to the investigation of incentive contracts in the presence of moral hazard.
Despite obvious similarities, the approach taken here is different from existing models in several important respects. First,
unlike Mirrlees (1974, 1976), who assumes that the family of action-dependent distributions is given, I derive the relevant
family of action-dependent (sets of) distributions from the principal’s and agent’s preferences. Second, the representations
are different from Gilboa and Schmeidler (1989) in that, through their selection of actions, decision makers can choose
among sets of distributions on outcomes. Moreover, the preference relation on the contingent payoff schemes, representing
the contracts, may be outcome dependent.
To render this discussion more precise, I denote the set of actions by A and the set of contracts by W. Elements of W are real-valued functions on a set, Θ, of outcomes. Assume that the principal and the agent have preference relations on the set, A × W, of action–contract pairs, denoted by ≿^P and ≿^A, respectively.

An action–contract pair (a*, w*) is said to be a maximal element of a subset F ⊂ A × W according to ≿ if (a*, w*) ∈ F and (a*, w*) ≿ (a, w) for all (a, w) ∈ F. The principal's problem may be stated as follows: Choose the maximal element according to ≿^P of the "feasible" set F ⊂ A × W, defined by: (a′, w′) ∈ F if and only if

(a′, w′) ≿^A (a, w′)  for all a ∈ A,   (1)

and

(a′, w′) ≿^A (a0, w0),   (2)

where w0 ∈ W denotes the agent's "outside option," and a0 satisfies (a0, w0) ≿^A (a, w0), for all a ∈ A.
If the principal and the agent are maxmin expected utility maximizers, and the agent's utility function is additively separable in actions and payoffs, then the problem may be stated as follows: Choose an action–contract pair (a′, w′),

(a′, w′) ∈ argmax_{A×W} min_{π ∈ C^P(a)} Σ_{θ∈Θ} u^P(w(θ); θ)π(θ),   (3)

subject to the incentive compatibility constraints, for all a ∈ A,

min_{π ∈ C^A(a′)} Σ_{θ∈Θ} u^A(w′(θ), θ)π(θ) + v(a′) ≥ min_{π ∈ C^A(a)} Σ_{θ∈Θ} u^A(w′(θ), θ)π(θ) + v(a),   (4)

and the participation constraint

min_{π ∈ C^A(a′)} Σ_{θ∈Θ} u^A(w′(θ), θ)π(θ) + v(a′) ≥ min_{π ∈ C^A(a0)} Σ_{θ∈Θ} u^A(w0(θ), θ)π(θ) + v(a0),   (5)

where {u^P(·, θ) | θ ∈ Θ} and {u^A(·, θ) | θ ∈ Θ} are outcome-dependent utility functions of the principal and the agent, respectively; {C^P(a) | a ∈ A} and {C^A(a) | a ∈ A} are their respective sets of priors on Θ; v denotes the agent's disutility of actions; and w0 ∈ W is the agent's outside option, where a0 satisfies min_{π ∈ C^A(a0)} Σ_{θ∈Θ} u^A(w0(θ), θ)π(θ) + v(a0) ≥ min_{π ∈ C^A(a)} Σ_{θ∈Θ} u^A(w0(θ), θ)π(θ) + v(a), for all a ∈ A.
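To make the structure of the program (3)–(5) concrete, the following minimal Python sketch enumerates a finite version of it by brute force. All primitives (the two outcomes, the two actions, the sets of priors, the square-root utility, the wage grid, and the outside option) are hypothetical illustrations and are not taken from the paper; the sketch only mirrors the logic of the constraints and the maxmin objective.

```python
from itertools import product

outcomes = [0, 1]                       # theta = failure, success (hypothetical)
actions = ["a0", "a1"]
C_A = {"a0": [(0.7, 0.3), (0.6, 0.4)],  # agent's action-dependent sets of priors
       "a1": [(0.5, 0.5), (0.4, 0.6)]}
C_P = C_A                               # assume the principal shares these priors
x = (0.0, 10.0)                         # principal's gross payoff by outcome
v = {"a0": 0.0, "a1": -1.0}             # agent's (dis)utility of actions
u_A = lambda w, theta: w ** 0.5         # agent's outcome-contingent utility of the wage
u_P = lambda net, theta: net            # risk-neutral principal
w0, a0 = (1.0, 1.0), "a0"               # outside option and its associated action

def mmeu(util_by_outcome, priors):
    """Minimum expected utility over a finite set of priors."""
    return min(sum(u * p for u, p in zip(util_by_outcome, pi)) for pi in priors)

def agent_value(a, w):
    return mmeu([u_A(w[t], t) for t in outcomes], C_A[a]) + v[a]

def principal_value(a, w):
    return mmeu([u_P(x[t] - w[t], t) for t in outcomes], C_P[a])

wages = [(wf, ws) for wf in range(9) for ws in range(9)]
feasible = [(a, w) for a, w in product(actions, wages)
            if all(agent_value(a, w) >= agent_value(b, w) for b in actions)   # (4)
            and agent_value(a, w) >= agent_value(a0, w0)]                     # (5)
best = max(feasible, key=lambda aw: principal_value(*aw))                     # (3)
print(best, principal_value(*best))
```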
Assume that the payoffs of the bets are roulette lotteries (or lotteries, for short) in the sense of Anscombe and Aumann (1963). The key notion invoked in the axiomatization of the principal's preferences is that of constant valuation bets. Loosely speaking, constant valuation bets are defined by the behavioral implication that, given such
a bet, the decision maker is indifferent among all actions. Because the choice of action has a direct effect on the
agent’s well-being, but not necessarily on that of the principal, constant valuation bets mean different things for the
two parties. To handle this difference, the axiomatic structure of the agent’s preferences permits the separation of
the direct effect of the actions from the indirect effect (that is, the impact of the actions on the distributions of outcomes).
The literature dealing with agency theory with ambiguity-averse players is rather meager. Ghirardato (1994) examines the validity of some properties of optimal incentive schemes in the context of Schmeidler's (1989) Choquet expected utility model. While the motivation is similar, the present work differs from that of Ghirardato both in terms of the analytical framework and the representations of the players' preferences. Like Schmeidler's Choquet expected utility model,
on which it is based, Ghirardato’s analysis does not allow for outcome-dependent preferences over monetary payoffs.
This aspect of the preference relations, which is relevant in many applications, is accommodated by the approach taken
here.
Mukerji (2004) studies the effect of ambiguity aversion on the contractual form of procurement contracts. To model ambiguity and the principal's and agent's ambiguity aversion, Mukerji invokes the model of Ghirardato et al. (2004), in which
acts are evaluated by a convex combination of their respective minimum and maximum expected payoff on a set of priors.
Bypassing the effects of attitude towards risk, Mukerji shows that the more ambiguity averse the agent relative to the principal
(that is, the higher the weight placed by the agent on the minimum of the expected payoff, relative to that of the principal)
the smaller is the power of the incentives built into the procurement contract.
Rigotti (2001) analyzes a principal–agent relationship in an environment in which the agent does not have a precise understanding of the relation between his actions and the random outcomes they produce. This idea is formalized by assuming
that the agent’s preferences are incomplete, that is, the agent is incapable of comparing the alternative contracts. The result
is an expected utility representation with a set of priors. Rigotti shows that the resulting incentive contracts are robust, in the sense of being attractive when evaluated by a whole set of priors, and, under some conditions, simple, consisting of a basic payoff that is augmented by a bonus payoff in some states.
Drèze (1961, 1987) was the first to axiomatize subjective expected utility theory with state-dependent preferences and
moral hazard. He argues that the “reversal of order” axiom of Anscombe and Aumann (1963) embodies the decision maker’s
belief that he cannot influence the likely realization of alternative states. In situations in which a decision maker may take
actions that would tilt the probabilities of the states in his favor, he would prefer knowing the outcome of a lottery (and,
consequently, his payoff contingent on the state that obtains) before rather than after the state of nature is revealed. In
Drèze’s theory the decision maker’s actions are tacit.1
Like the maxmin expected utility model, the representation in Drèze’s theory entails the maximization of subjective
expected utility over a convex set of subjective probability measures. Yet, Drèze’s theory is different from the maxmin
expected utility model in its motivation and interpretation. Drèze’s motivation is to model decision making in the presence
of moral hazard, not ambiguity aversion. Consequently, the set of probability measures in his theory represents the leverage
a decision maker believes he has over the likely realization of the states, rather than his ambiguity aversion.
The present work differs from that of Drèze in several respects, the most important of which is the analytical framework.
In this paper, the agent's actions and the relation that the principal and the agent believe exists between these actions and probability distributions of outcomes are explicit aspects of the representation. This difference in the treatment of the actions emanates from distinct methodological outlooks. Drèze is looking at the problem of modeling the agent's behavior
from the perspective of an observer (or the principal), who may “see” only the decision makers’ preferences over acts and over
the timing of resolution of risk. In contrast, here the principal and the agent have preferences over the set of action–contract
pairs that are known to them and, in principle, observable.
Also related to the present work is Karni (in press). In an analytical framework similar to the one used here, I explored the axiomatic foundations of the preferences of a principal and an agent who, unlike in this paper, are expected utility maximizers. The present paper is also different from Karni (in press) in that it explores some implications of a version of the maxmin expected utility theory for the analysis of optimal incentive contracts.
In the following section, I introduce the analytical framework. The principal’s preference relation and its representation
appear in Section 3. The agent's preference relation and its representation are the subject matter of Section 4. In Section 5,
I discuss issues pertaining to the application of the maxmin expected utility model to agency theory and, within a simple
model, examine some welfare implications. Section 6 contains a brief summary and a discussion of related literature. The
proofs appear in Section 7.
2. The analytical framework
2.1. Preliminaries
Let Θ be a finite set of outcomes, and let A be a set whose elements, called actions, describe activities by which a decision maker, the agent in this instance, may influence the likely realization of different outcomes.2 Let I(θ) be an interval in R representing the monetary prizes that are feasible if the outcome θ ∈ Θ obtains. Denote by Δ(I(θ)) the set of all simple probability distributions (that is, distributions that have finite support) on I(θ). Elements of Δ(I(θ)) are referred to as lotteries. A bet b is a function on Θ such that b(θ) ∈ Δ(I(θ)). Denote by B the set of all bets (that is, B := Π_{θ∈Θ} Δ(I(θ))). The choice set is the product set C := A × B, whose generic element, (a, b), is an action–bet pair.

A decision maker is characterized by a preference relation, ≿, on C that has the usual interpretation.3 In other words, decision makers are supposed to be able to choose among, or express preferences over, action–bet pairs, presumably taking into account the influence the choice of action may have on the likely realization of alternative outcomes and, consequently, on the desirability of the corresponding bets. The strict preference relation, ≻, and the indifference relation, ∼, are defined as usual. Note that the symbol ≿ denotes the preference relation of either the principal or the agent.

For all b, b′ ∈ B and α ∈ [0, 1], define αb + (1 − α)b′ ∈ B by (αb + (1 − α)b′)(θ) = αb(θ) + (1 − α)b′(θ), for all θ ∈ Θ. I use the notation b_{−θ}p to denote the bet that is obtained from the bet b by replacing its θ-coordinate with the lottery p.

An outcome θ ∈ Θ is said to be null given the action a and the preference relation ≿ if (a, b_{−θ}p) ∼ (a, b_{−θ}q) for all p, q ∈ Δ(I(θ)) and b ∈ B; otherwise it is nonnull given the action a and the preference relation ≿. In general, given ≿, an outcome may be null under some actions and nonnull under others. However, it is customary in agency theory to suppose that the set of outcomes constituting the support of the distributions is invariant with respect to the actions. Formally,

The full support assumption: The set of outcomes that are nonnull under a is Θ, for all a ∈ A.
1. The same comment applies to the extension of Drèze's work in Drèze and Rustichini (1999).
2. Additional assumptions about the size and structure of the set of actions are introduced below when they become necessary.
3. A preference relation, ≿, is a binary relation on C, and (a, b) ≿ (a′, b′) means that (a, b) is at least as desirable as (a′, b′).
2.2. Constant valuation bets
Given a preference relation , the idea of constant valuation bets presumes the existence of a subset, Â, of actions and a
subset Bcv of bets such that, given b̄ ∈ Bcv , the variations in the decision maker’s well-being that is due to the direct impact
of the actions in  are compensated by variations due to impact of these actions on the likely realization of the different
outcomes.4 A constant valuation bet is manifested if, given this bet, the decision maker is indifferent among all the actions
in Â.
For the constant valuation bets to be well-defined, there must exist sufficiently many distinct actions in  (that is, |Â| ≥ ||)
so as to render the coordinates of each constant valuation bet unique, in the sense of belonging to the same equivalence class of
lotteries. Formally, let I(b; a) = {b ∈ B|(a, b ) ∼ (a, b)} and I(p; , b, a) = {q ∈ (I())|(a, b− q) ∼ (a, b− p)}. I(b; a) denotes the
indifference class of b given a and, similarly, I(p; , b, a) denotes the indifference class of p given , b and a.
Definition 1.
A bet b̄ is a constant valuation bet according to if (a, b̄) ∼ (a , b̄) for all a, a ∈ Â and, for all b ∈ ∩a ∈ Â I(b̄; a),
b() ∈ I(b̄(); , b̄, a) for all ∈ and a ∈ Â.
Let Bcv () denote the set of all constant valuation bets according to . In view of the definition of constant valuation
bets, if b̄ , b̄ ∈ Bcv () I write b̄ b̄ instead of (a, b̄ ) (a , b̄). Also, henceforth, to simplify the notations, I shall suppress the
preference relation symbol and write Bcv instead of Bcv ().
Given p ∈ (I()) denote by b− p the constant valuation bet whose − th coordinate is p. Assume that all these constant
valuation bets are exist and the choice set has maximal and minimal elements whose bet components are constant valuation.
Formally,
(A.0). For all ∈ and p ∈ (I()), there exists b− p ∈ Bcv unique up to an indifference class of lotteries for each ∈ , and
there are b̄∗∗ , b̄∗ ∈ Bcv such that b̄∗∗ (a, b) b̄∗ for all (a, b) ∈ C, and b̄∗∗ b̄∗ .
2.3. Axioms
In general, the principal and the agent are motivated by distinct considerations. In some applications the agent's well-being may not be affected directly by the outcome, and in most applications the principal's well-being is not directly affected
by the agent’s actions. For example, given an employment contract, the employer’s well-being is directly affected by the
firm’s revenues; whereas revenues affect the employee’s well-being only to the extent that they determine his payoff under
the employment contract. Moreover, the employer’s sole concern with the employee’s actions is their effect on the likely
realization of profits, while the employee’s well-being is directly affected by these actions (e.g. effort). Despite their different
concerns, certain principles, expressed by the following axioms, govern the choice behavior of the principal and the agent
alike. The first two axioms are standard and require no further comment.
(A.1) (Weak order). ≿ is complete and transitive.

(A.2) (Archimedean). For all a ∈ A and b, b′, b″ ∈ B such that (a, b) ≻ (a, b′) ≻ (a, b″), there exist α, β ∈ (0, 1) satisfying (a, αb + (1 − α)b″) ≻ (a, b′) ≻ (a, βb + (1 − β)b″).

The third axiom is an adaptation of Gilboa and Schmeidler's (1989) uncertainty aversion axiom and, as in that work, it captures the decision maker's preference for utility hedging. In other words, because the consequences are mixture spaces and the utilities that figure in the eventual representation are affine, ambiguity aversion has the interpretation of a preference for bets involving outcome-wise utility mixtures over their (indifferent) ingredient bets.

(A.3) (Ambiguity aversion). For all a ∈ A, b, b′ ∈ B, and α ∈ (0, 1), if (a, b) ∼ (a, b′) then (a, αb + (1 − α)b′) ≿ (a, b).
3. The principal’s preferences and their representation
3.1. Constant utility bets
By and large, the analysis of principal–agent problems, is based on the assumption that the principal’s concern about the
agent’s choice of action is indirect, and has to do solely with its effect on the likelihoods of the alternative outcomes. In other
words, the agent’s choice of action has no direct implications for the principal’s well-being. Consequently, the principal’s
constant valuation bets are, in fact, tacitly, constant utility bets. The axiomatic structure pertaining to the principal’s choice
behavior below should be interpreted with this understanding of the meaning of the principal’s constant valuation bets.
In view of this interpretation it is natural to define a preference relation, ≿_θ, on Δ(I(θ)) by p ≿_θ q if and only if b̄_{−θ}p ≿ b̄_{−θ}q, p, q ∈ Δ(I(θ)). (The asymmetric and symmetric parts of ≿_θ are denoted by ≻_θ and ∼_θ, respectively.) The interpretation of ≿_θ is based on the understanding that b̄_{−θ}p is the constant utility bet whose outcome-contingent payoff is equivalent to the payoff p when the outcome is θ. Thus p ≿_θ q, that is, in the event θ the payoff p is preferred to q, has a choice-theoretic meaning, namely, that the bet that pays for sure the equivalent of p in the event θ is preferred over the bet that pays for sure the equivalent of q in the event θ.

4. The concept of constant valuation bets first appeared in Karni (2006).
To simplify the exposition of the principal’s preferences, without essential loss of generality, I assume that  = A.5
3.2. Axioms
In addition to axioms (A.1)–(A.3), the principal's preference structure is assumed to satisfy the following additional conditions.
Axiom (A.4) below requires that if outcome-wise the payoff of one bet is preferred over that of another bet, then the first
bet, because for sure it yields a better payoff, must be preferred over the second.
(A.4) (Monotonicity). For all a ∈ A and b, b′ ∈ B, if b(θ) ≿_θ b′(θ) for all θ ∈ Θ then (a, b) ≿ (a, b′).
The following axiom is analogous to the certainty-independence axiom of Gilboa and Schmeidler (1989). It requires that
the independence axiom of expected utility theory applies when mixing bets with constant valuation bets.
(A.5) (Constant-valuation independence). For all a ∈ A, b, b′ ∈ B, b̄ ∈ Bcv, and α ∈ (0, 1), (a, b) ≿ (a, b′) if and only if (a, αb + (1 − α)b̄) ≿ (a, αb′ + (1 − α)b̄).
The next axiom requires the set of constant valuation bets to be convex (that is, b̄, b̄′ ∈ Bcv implies (a, αb̄ + (1 − α)b̄′) ∼ (a′, αb̄ + (1 − α)b̄′) for all a, a′ ∈ A and α ∈ (0, 1)). The intuition underlying this condition is that what the principal ultimately cares about are consequences, not actions. Specifically, he cares about the outcome, θ, that obtains and the associated payoff, b(θ). Thus, facing a choice between (a, αb̄ + (1 − α)b̄′) and (a′, αb̄ + (1 − α)b̄′), the principal recognizes that, for each outcome, he is indifferent between the consequences of the two alternatives. Regardless of the action, he is awarded a composite lottery αb̄ + (1 − α)b̄′, whose consequences are (αb̄(θ) + (1 − α)b̄′(θ), θ), θ ∈ Θ. Moreover, because b̄ and b̄′ are constant valuation bets, which in the case of the principal amount to constant utility bets, these consequences are equally preferred. Formally,

(A.6) (Convexity). For all a, a′ ∈ A, b̄, b̄′ ∈ Bcv, and α ∈ (0, 1), (a, αb̄ + (1 − α)b̄′) ∼ (a′, αb̄ + (1 − α)b̄′).
Because what constitutes a constant utility bet is a personal (subjective) matter, the convexity of the set Bcv is not implied
by the structure of the choice space.6
3.3. Representation
The representation of the principal’s preferences is analogous to the maxmin expected utility representation of Gilboa
and Schmeidler (1989). The statement of the representation theorem below involves the following terminology. A family {f_j}_{j=1}^n of real-valued functions is said to be cardinally measurable and fully comparable if, for the purpose of representation, it is equivalent to another set of functions {g_j}_{j=1}^n, where g_j = a + c f_j, c > 0, for all j = 1, . . . , n.
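To illustrate this uniqueness notion numerically, the following minimal Python sketch (all numbers hypothetical, not taken from the paper) checks that replacing every outcome-dependent utility by a common positive affine transformation g_j = a + c·f_j leaves the maxmin ranking of action–bet pairs unchanged.

```python
# Hypothetical example: maxmin rankings are invariant to a common positive
# affine transformation of the outcome-dependent utilities.
priors = {"a0": [(0.5, 0.5)], "a1": [(0.2, 0.8), (0.4, 0.6)]}   # C(a) per action
U = {("b", 0): 1.0, ("b", 1): 4.0, ("b2", 0): 3.0, ("b2", 1): 2.0}  # u(b(theta), theta)

def mmeu(bet, action, utils):
    return min(sum(utils[(bet, t)] * pi[t] for t in (0, 1)) for pi in priors[action])

def ranking(utils):
    pairs = [(a, b) for a in priors for b in ("b", "b2")]
    return sorted(pairs, key=lambda ab: mmeu(ab[1], ab[0], utils))

a, c = 7.0, 3.0
U2 = {k: a + c * val for k, val in U.items()}
print(ranking(U) == ranking(U2))   # True: the induced ranking is unchanged
```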
Theorem 1. Let ≿ be a binary relation on C. Assume that (A.0) and the full support assumption hold. Then the following conditions are equivalent:

(i) ≿ satisfies (A.1)–(A.6).
(ii) There exist affine functions {u(·, θ) : Δ(I(θ)) → R}_{θ∈Θ} and a family of closed and convex sets, {C(a)}_{a∈A}, of probability measures on Θ such that, for all (a, b), (a′, b′) ∈ C,

(a, b) ≿ (a′, b′) ⇔ min_{π ∈ C(a)} Σ_{θ∈Θ} u(b(θ), θ)π(θ) ≥ min_{π ∈ C(a′)} Σ_{θ∈Θ} u(b′(θ), θ)π(θ).

Moreover, the functions {u(·, θ)}_{θ∈Θ} are cardinally measurable and fully comparable, for every a ∈ A the set C(a) is unique, and each π ∈ C(a) has full support.

The proof of Theorem 1 is an adaptation of the proof of Gilboa and Schmeidler (1989) and appears in Section 7.1. It involves constructing a representation of the preferences over bets for each action (the utility functions on the consequences are defined using constant valuation bets where Gilboa and Schmeidler use constant acts) and using the constant valuation bets to link the representations across actions. Note that the affinity of u implies that u(b(θ), θ) = Σ_{z∈I(θ)} u(z, θ)b(z, θ), where b(z, θ) denotes the probability that the lottery b(θ) assigns to z.
5. Karni (2006) contains a discussion on how to extend the analysis to the case in which Â is a proper subset of A.
6. Gilboa and Schmeidler (1989) assume, implicitly, that constant acts are constant utility acts. Because the set of constant functions is convex, the restriction imposed by Axiom (A.6) has no bite.
It is noteworthy that, while I interpret Theorem 1 as a representation of the principal’s preference relation, there is nothing
in the formalism that prevents its application to the agent's preferences as well. However, the literature dealing with the analysis of the principal–agent relationship supposes that the agent's choice of action has no direct effect on the principal's
utility. Hence, as already mentioned, in so far as the principal’s preference relation is concerned, constant valuation bets
are constant utility bets. It is natural, therefore, to interpret the representation in Theorem 1 as pertaining to the principal’s
preference relation. In so far as the agent’s preference relation is concerned, it is common practice to separate the utility of
payoffs from the direct cost of effort. In the next section, I explore an alternative approach which allows for such separation.
4. Additive separability over actions and bets
4.1. Action–bet separability and constant utility bets
Assume that A is a connected, separable, topological space and let A × B be endowed with the product topology. Let B̃(≿) be a convex subset of B on which the following, well-known, axioms hold. (Anticipating the representation below and its interpretation, henceforth I refer to elements of B̃(≿) as constant-utility bets from the point of view of the agent whose preference relation is ≿.)
The first axiom requires that both actions and constant utility bets are essential in the sense that, for some b̃ ∈ B̃, the agent
is not indifferent among all the actions and that, for some a ∈ A, he is not indifferent among all elements of B̃.
(A.7) (Essentiality). There are a, a′ ∈ A and b̃, b̃′ ∈ B̃ such that (a, b̃) ≻ (a′, b̃) and (a, b̃) ≻ (a, b̃′).
The second axiom requires that the preference ranking of constant utility bets be independent of the actions, and that of actions be independent of constant utility bets.

(A.8) (Coordinate independence). For all a, a′ ∈ A and b̃, b̃′ ∈ B̃, (a, b̃) ≿ (a, b̃′) if and only if (a′, b̃) ≿ (a′, b̃′), and (a, b̃) ≿ (a′, b̃) if and only if (a, b̃′) ≿ (a′, b̃′).
The third axiom is the familiar condition that is necessary for the existence of additively separable representations when there are two factors (actions and bets in this case).7 Using coordinate independence, and with slight abuse of notation, if b̃, b̃′ ∈ B̃ I shall write b̃ ≿ b̃′ instead of (a, b̃) ≿ (a, b̃′).

(A.9) (Hexagon condition). For all a, a′, a″ ∈ A and b̃, b̃′, b̃″ ∈ B̃, if (a, b̃′) ∼ (a′, b̃) and (a, b̃″) ∼ (a′, b̃′) ∼ (a″, b̃) then (a′, b̃″) ∼ (a″, b̃′).
Henceforth, where there is no risk of confusion, I write B̃ instead of B̃(≿). The reader should keep in mind, however, that what the agent regards as constant utility bets is entirely a subjective matter. The motivation for using bets in B̃ is threefold: First, they are used to derive an action-independent utility function and thus to separate utilities from probabilities. Second, these bets are constant utility in so far as the agent is concerned, and can be used to rank the consequences (that is, outcome–lottery pairs). Third, these bets are unaffected by ambiguity.8

The restriction of ≿ to A × B̃ is a continuous weak order satisfying axioms (A.7)–(A.9) if and only if there exist continuous real-valued functions V on B̃ and ς on A such that ≿ on A × B̃ is represented by (a, b̃) → V(b̃) + ς(a).9 Moreover, the functions (V, ς) are jointly cardinal (that is, if (V̂, ς̂) is another additive representation of ≿ on A × B̃ then V̂ = βV + δ₁ and ς̂ = βς + δ₂, β > 0).10
Next I restate some of the axioms of the preceding section, replacing constant valuation bets with elements of B̃. Stating the modified monotonicity axiom requires additional notation and definitions. Given p ∈ Δ(I(θ)), denote by b̃_{−θ}p the bet in B̃ whose θ-th coordinate is p.

(A.4′) (Monotonicity). For all a ∈ A and b, b′ ∈ B, if (a, b̃_{−θ}b(θ)) ≿ (a, b̃_{−θ}b′(θ)) for all θ ∈ Θ then (a, b) ≿ (a, b′).

Constant-valuation independence is replaced by constant-utility independence.

(A.5′) (Constant-utility independence). For all a ∈ A, b, b′ ∈ B, b̃ ∈ B̃, and α ∈ (0, 1), (a, b) ≿ (a, b′) if and only if (a, αb + (1 − α)b̃) ≿ (a, αb′ + (1 − α)b̃).

Analogously to (A.0), assume the following:

(A.0′). For each θ ∈ Θ and p ∈ Δ(I(θ)), b̃_{−θ}p exists and is unique up to an indifference class of lotteries for each θ ∈ Θ, and there are b̃**, b̃* ∈ B̃ such that b̃** ≿ (a, b) ≿ b̃*, for all (a, b) ∈ C, and b̃** ≻ b̃*.
7. See Wakker (1989).
8. I am indebted to the referee for these observations.
9. The relation ≿ is continuous (in the topological sense) if the sets {(a, b) ∈ A × B | (a, b) ≿ (a′, b′)} and {(a, b) ∈ A × B | (a′, b′) ≿ (a, b)} are closed.
10. See Wakker (1989), Theorem III.4.1.
4.2. Representation
The next theorem is a representation of the agent’s choice behavior, separating his attitude towards risk from his actions.
Theorem 2. Let ≿ be a binary relation on C. Assume that (A.0′) and the full support assumption hold. Then the following conditions are equivalent:

(i) ≿ is a continuous weak order satisfying (A.3), (A.4′), (A.5′) and, on some convex subset, B̃, of bets it satisfies (A.7)–(A.9).
(ii) There exist affine functions {v(·, θ) : Δ(I(θ)) → R}_{θ∈Θ}, a function ς : A → R, and a family of closed and convex sets, {C(a)}_{a∈A}, of full-support probability measures on Θ such that, for all (a, b), (a′, b′) ∈ C,

(a, b) ≿ (a′, b′) ⇔ min_{π ∈ C(a)} Σ_{θ∈Θ} v(b(θ), θ)π(θ) + ς(a) ≥ min_{π ∈ C(a′)} Σ_{θ∈Θ} v(b′(θ), θ)π(θ) + ς(a′).

Moreover, the functions {v(·, θ)}_{θ∈Θ} are cardinally measurable and fully comparable, V := Σ_{θ∈Θ} v(b̃(θ), θ) on B̃ and ς are jointly cardinal, and, for each a ∈ A, the set C(a) is unique.
The proof is similar to that of Theorem 1 and is outlined in Section 7.2.
The function ς represents the (dis)utility of the actions. For reasons described earlier, the representation in Theorem 2
pertains to the preference relation of the agent.
4.3. Outcome-independent preferences
In many instances, it is natural to suppose that the agent's attitude towards risk is outcome independent. To formalize the revealed preference implications of outcome-independent risk attitudes, it is necessary to take account of the "rank-dependent" nature of the maxmin preference relation. Without essential loss of generality, assume that I(θ) = [x, x̄] for all θ ∈ Θ and, consequently, Δ(I(θ)) = Δ for all θ ∈ Θ. The following axiom lends a choice-based meaning to the idea of outcome independence.

(A.10) (Outcome independence). For all θ, θ′ ∈ Θ, p, q ∈ Δ, and a ∈ A, (a, b̃**_{−θ}p) ≿ (a, b̃**_{−θ}q) if and only if (a, b̃**_{−θ′}p) ≿ (a, b̃**_{−θ′}q).

Note that the effect of "rank dependence" has been neutralized by the choice of b̃** as the reference bet. To grasp this, observe that b̃** = (δ_x̄, . . . , δ_x̄) and recall that, by definition, b̃** ≿ b̃**_{−θ}p for all p ∈ Δ and θ ∈ Θ. Hence, replacing b̃**(θ) by either p or q does not entail a change in the preference ordering of the coordinates. (If it did, then it would have implied that b̃**_{−θ}p ≻ b̃** for some θ ∈ Θ, leading to a contradiction.)
Adding outcome independence to the axiomatic structure of Theorem 2 yields the following representation.
Theorem 3. Let ≿ be a binary relation on C. Assume that (A.0′) and the full support assumption hold. Then the following conditions are equivalent:

(i) ≿ is a continuous weak order satisfying (A.3), (A.4′), (A.5′), (A.10) and, on some convex subset, B̃, of bets it satisfies (A.7)–(A.9).
(ii) There exist an affine function v : Δ → R, a function ς : A → R, and a family of closed and convex sets, {C(a)}_{a∈A}, of probability measures on Θ such that, for all (a, b), (a′, b′) ∈ C,

(a, b) ≿ (a′, b′) ⇔ min_{π ∈ C(a)} Σ_{θ∈Θ} v(b(θ))π(θ) + ς(a) ≥ min_{π ∈ C(a′)} Σ_{θ∈Θ} v(b′(θ))π(θ) + ς(a′).

Moreover, the functions v and ς are jointly cardinal and, for each a ∈ A, the set C(a) is unique, and each π ∈ C(a) has full support.
The proof appears in Section 7.3.
5. A simple application
To explore some rudimentary agency-theory implications of ambiguity aversion, I consider a principal who engages an
agent to perform a task. The task must be performed in one of two ways, involving distinct actions, a0 and a1 ; either way may
result in success or failure. Assume that the agent’s choice of action is his private information and the outcome is publicly
observable. If the agent performs the task successfully, the principal stands to gain xs dollars; if the agent fails, the principal
stands to gain xf dollars, where xs > xf . Assume that a1 is more costly to the agent and yields a more favorable prospect of
success. Assume also that the principal is risk neutral and the agent is risk averse. To explore the implications of ambiguity
aversion for the design of incentive contracts, I consider next alternative attitudes toward ambiguity.
5.1. The benchmark case: expected utility-maximizing behavior
Consider the standard problem in which both the agent and the principal are ambiguity neutral (that is, they are expected
utility maximizers). As is customary in agency theory, assume that the principal and the agent agree on the likelihood of
success, conditional on the alternative actions. Denote by ps (ai ) the probability of success under ai , i = 0, 1, and assume that
ps (a1 ) > ps (a0 ). Suppose that the principal wants to implement a1 . His problem is to design a contract, w = (ws , wf ) ∈ R2 , so
as to maximize
ps (a1 )(xs − ws ) + (1 − ps (a1 ))(xf − wf )
subject to the incentive compatibility (IC) constraint
(ps (a1 ) − ps (a0 ))(u(ws ) − u(wf )) ≥ v(a1 ) − v(a0 )
and the individual rationality (IR) constraint
ps(a1)u(ws) + (1 − ps(a1))u(wf) − v(a1) ≥ ū,

where u, the agent's von Neumann–Morgenstern utility function, is increasing and strictly concave; v, with v(a1) > v(a0), is the utility cost to the agent of the actions; and ū is the agent's utility of his, uncertainty free, "outside option."

In this case, both constraints are binding and the solution to the problem is given by11

u(ws*) = ū + v(a0) + [(1 − ps(a0))/(ps(a1) − ps(a0))](v(a1) − v(a0)),
u(wf*) = ū + v(a0) − [ps(a0)/(ps(a1) − ps(a0))](v(a1) − v(a0)).   (6)

Clearly, ws* > wf*.
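For a concrete check of Eq. (6), the following Python sketch computes the contract from the two binding constraints under assumed parameters (square-root utility, ū = 1, v(a0) = 0, v(a1) = 0.5, ps(a0) = 0.4, ps(a1) = 0.7); none of these numbers come from the paper.

```python
# A worked numerical check of Eq. (6) under assumed (hypothetical) parameters.
u_inv = lambda y: y ** 2                 # inverse of u(w) = sqrt(w)
u_bar, v0, v1 = 1.0, 0.0, 0.5
ps = {"a0": 0.4, "a1": 0.7}

dp, dv = ps["a1"] - ps["a0"], v1 - v0
u_ws = u_bar + v0 + (1 - ps["a0"]) / dp * dv     # Eq. (6), success payment (in utils)
u_wf = u_bar + v0 - ps["a0"] / dp * dv           # Eq. (6), failure payment (in utils)
ws_star, wf_star = u_inv(u_ws), u_inv(u_wf)
print(ws_star, wf_star)                          # 4.0 and about 0.111: ws* > wf*

# Both constraints bind at this contract:
assert abs(dp * (u_ws - u_wf) - dv) < 1e-12                               # IC
assert abs(ps["a1"] * u_ws + (1 - ps["a1"]) * u_wf - v1 - u_bar) < 1e-12  # IR
```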
5.2. Ambiguity-neutral principal and ambiguity-averse agent
Analogous to the assumption that the principal is risk neutral and the agent is risk averse, I assume next that the principal
is ambiguity neutral and the agent ambiguity averse. The principal’s objective function is as in the previous case. To describe
the constraints he faces, let C(ai) = [αi, βi], i = 0, 1, be the agent's set of probabilities of success under ai.

It is customary to suppose that costlier actions yield a higher probability of success. In the present context, the analogous assumption requires that α1 > α0 and β1 > β0. Furthermore, because the agent's preferences are represented by a maxmin expected utility functional with action-dependent sets of priors, while the principal is an expected utility maximizer, the "common priors" assumption must be modified. Assume that the principal is aware of the agent's ambiguity aversion, that he knows C(ai), i = 0, 1, and that his own conditional probabilities of success satisfy ps(ai) ∈ C(ai), i = 0, 1.
As in the benchmark case, suppose that the principal would like to implement the action a1. Note that under these assumptions, if w = (ws, wf) is the optimal incentive contract, ws > wf. To see this, observe that because the IC constraint is binding, if ws ≤ wf, then the IC constraint implies that (β1 − β0)(u(ws) − u(wf)) = v(a1) − v(a0) > 0. Since β1 > β0, then u(ws) − u(wf) > 0 and, since u is strictly monotonic increasing, ws > wf, a contradiction. If ws ≤ wf and β1 = β0, then the agent cannot be induced to choose a1.
Let ŵ = (ŵs, ŵf) be the optimal incentive contract designed to induce the agent to choose the action a1. By assumption, α1 > α0 (otherwise, it is impossible to implement a1). Because ŵs > ŵf, the IC constraint is

(α1 − α0)(u(ŵs) − u(ŵf)) ≥ v(a1) − v(a0),   (7)

and the corresponding IR constraint is

α1 u(ŵs) + (1 − α1)u(ŵf) − v(a1) ≥ ū.   (8)
It is easy to verify that, since the two constraints are binding, ŵ is given by

u(ŵs) = ū + v(a1) + [(1 − α1)/(α1 − α0)](v(a1) − v(a0)),
u(ŵf) = ū + v(a1) − [α1/(α1 − α0)](v(a1) − v(a0)).   (9)

11. See Salanié (1997).
Comparing this solution to the benchmark case, note that Eq. (6) may equivalently be written as u(ws*) = ū + v(a1) + [(1 − ps(a1))/(ps(a1) − ps(a0))](v(a1) − v(a0)) and u(wf*) = ū + v(a1) − [ps(a1)/(ps(a1) − ps(a0))](v(a1) − v(a0)). If ps(a1) − ps(a0) ≥ α1 − α0 and ps(a1) > α1, then

(1 − ps(a1))/(ps(a1) − ps(a0)) < (1 − α1)/(α1 − α0)   (10)

and

α1/(α1 − α0) ⋛ ps(a1)/(ps(a1) − ps(a0)).   (11)

Thus ŵs > ws*, while ŵf ⋛ wf*.

(a) If ŵf > wf* then the cost, to the principal, of implementing a1 when the agent is ambiguity averse is higher than in the benchmark case.
(b) If ŵf ≤ wf* then implementing a1 requires exposing the agent to greater risk. In addition, since α1 u(ws) + (1 − α1)u(wf) < ps(a1)u(ws) + (1 − ps(a1))u(wf) for all w such that ws > wf, the cost to the principal of satisfying the individual rationality constraint, necessary to implement a1, is higher.

Thus, in either case, compared with the benchmark case, when the principal is ambiguity neutral and the agent is ambiguity averse, the implementation of the costly action, a1, results in a Pareto inferior allocation.
A different question concerns the welfare effects of increasing the agent's ambiguity aversion. One agent is said to display greater ambiguity aversion than another if, for any given action, the set of priors of the former contains that of the latter. Inspection of Eq. (9) reveals that, ceteris paribus, the effect of greater ambiguity aversion depends on the coefficients α1/(α1 − α0) and (1 − α1)/(α1 − α0). Specifically, increasing ambiguity aversion corresponds to a decline, in the weak sense, in the values of αi, i = 0, 1. Hence, if the shift, α1 − α0, due to the change of action is independent of the agent's ambiguity aversion, then greater ambiguity aversion implies a larger spread between the payoffs associated with success and failure. Because the agent is risk averse, the mean payoff required to induce the agent to implement a1 must increase, and the principal is made worse off. Since the utility of the outside option of the agent remains the same, the overall result is Pareto inferior.
5.3. Ambiguity-averse principal and agent
Suppose that the principal and the agent are both ambiguity averse and that C^P(ai) = C^A(ai) := C(ai), i = 0, 1. As in the
preceding case, denote by ŵ = (ŵs , ŵf ) the contract required to implement a1 (the solution to Eq. (9)) and let w̄ = (w̄, w̄) be
the contract required to implement a0 . What are the consequences of the ambiguity aversion of the principal? In particular,
is it possible that, ceteris paribus, an ambiguity-neutral principal would choose to implement an action different from that
of an ambiguity-averse principal?
Recall that ŵs > ŵf and suppose that (xs − ŵs) > (xf − ŵf). If, in addition, ps(a1) − ps(a0) > α1 − α0, it is possible that

ps(a1)(xs − ŵs) + (1 − ps(a1))(xf − ŵf) > ps(a0)xs + (1 − ps(a0))xf − w̄   (12)

and

α0 xs + (1 − α0)xf − w̄ > α1(xs − ŵs) + (1 − α1)(xf − ŵf).   (13)

In other words, it is possible that an ambiguity-neutral principal would induce the agent to implement a1 while an ambiguity-averse principal, in the same situation, would prefer that the agent implement a0.
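A minimal numerical sketch (all numbers hypothetical) in which inequalities (12) and (13) hold simultaneously: under these parameters the ambiguity-neutral principal prefers to implement a1, while an ambiguity-averse principal evaluating profits with the lower success probabilities prefers the flat-wage contract implementing a0.

```python
# A numerical check of the possibility described by (12) and (13), with assumed
# (hypothetical) numbers; the contract for a1 is computed from Eq. (9) and the
# flat wage implementing a0 from the binding participation constraint.
u_inv = lambda y: y ** 2              # inverse of u(w) = sqrt(w)
u_bar, v0, v1 = 1.0, 0.0, 0.1
xs, xf = 12.0, 0.0
ps = {"a0": 0.2, "a1": 0.8}           # principal's (ambiguity-neutral) probabilities
alpha = {"a0": 0.15, "a1": 0.2}       # common lower probabilities of success

dv = v1 - v0
ws_hat = u_inv(u_bar + v1 + (1 - alpha["a1"]) / (alpha["a1"] - alpha["a0"]) * dv)
wf_hat = u_inv(u_bar + v1 - alpha["a1"] / (alpha["a1"] - alpha["a0"]) * dv)
w_bar = u_inv(u_bar + v0)             # flat wage: no risk, no incentive needed

neutral_a1 = ps["a1"] * (xs - ws_hat) + (1 - ps["a1"]) * (xf - wf_hat)
neutral_a0 = ps["a0"] * xs + (1 - ps["a0"]) * xf - w_bar
averse_a1 = alpha["a1"] * (xs - ws_hat) + (1 - alpha["a1"]) * (xf - wf_hat)
averse_a0 = alpha["a0"] * xs + (1 - alpha["a0"]) * xf - w_bar
print(neutral_a1 > neutral_a0)   # True: inequality (12)
print(averse_a0 > averse_a1)     # True: inequality (13)
```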
6. Summary
This paper axiomatizes the parameterized distribution formulation of the agency model in which the principal's and the agent's choice behaviors are representable by maxmin expected utility preferences displaying ambiguity aversion. The two features that distinguish the maxmin expected utility representations in this paper from the original maxmin expected utility representation of Gilboa and Schmeidler (1989) are the action-dependent sets of priors and the outcome-dependent utility functions of money. These features reflect the distinct analytical frameworks more than the characteristics of the individual preferences.
Using a simple set-up, this paper also discusses some considerations that need to be addressed when modeling
principal–agent relations in which either party or both are maxmin expected utility players. Within this framework the
paper examines some implications of ambiguity aversion for the design of incentive contracts, illustrating complications
and welfare implications that arise due to ambiguity aversion.
7. Proofs
7.1. Proof of Theorem 1
That (i) ⇒ (ii) is implied by the following lemmata.
Lemma 4. There exists an affine function Ū : Bcv → R such that, for all b̄, b̄′ ∈ Bcv, b̄ ≿ b̄′ if and only if Ū(b̄) ≥ Ū(b̄′). Moreover, Ū is unique up to positive affine transformation.

Proof of Lemma 4. By Axiom (A.6), Bcv is a convex set. Axiom (A.5) implies that the independence axiom of expected utility theory holds on Bcv. Hence, by the von Neumann–Morgenstern theorem, axioms (A.1), (A.2) and (A.5) imply that, for every a ∈ A, there exists an affine function, U(·, a) : Bcv → R, unique up to positive affine transformation, such that, for all b̄, b̄′ ∈ Bcv,

(a, b̄) ≿ (a, b̄′) ⇔ U(b̄, a) ≥ U(b̄′, a).   (14)

But, by the definition of constant valuation bets, U(b̄, a) is independent of a. Define Ū : Bcv → R by Ū(b̄) = U(b̄, a). Hence, by Eq. (14), b̄ ≿ b̄′ if and only if Ū(b̄) ≥ Ū(b̄′), and Ū is affine. The uniqueness of Ū is immediate. □

Lemma 5. Given a function Ū from Lemma 4, there exists a unique J : C → R such that:

(i) (a, b) ≿ (a′, b′) if and only if J(a, b) ≥ J(a′, b′) for all (a, b), (a′, b′) ∈ C.
(ii) For every b̄ ∈ Bcv, J(a, b̄) = Ū(b̄) for all a ∈ A.

Proof of Lemma 5. On A × Bcv, J is uniquely determined by (ii) and is independent of a. To extend J to C, take (a, b) ∈ C; then b̄** ≿ (a, b) ≿ b̄*. But (A.0), (A.1), (A.2) and (A.4) imply that there exists a unique α(a, b) ∈ [0, 1] such that (a, b) ∼ (a, α(a, b)b̄** + (1 − α(a, b))b̄*).12 By the convexity of Bcv, α(a, b)b̄** + (1 − α(a, b))b̄* ∈ Bcv. Define

J(a, b) = J(a, α(a, b)b̄** + (1 − α(a, b))b̄*) = Ū(α(a, b)b̄** + (1 − α(a, b))b̄*).   (15)

But, by transitivity, (a, b) ≿ (a′, b′) if and only if α(a, b)b̄** + (1 − α(a, b))b̄* ≿ α(a′, b′)b̄** + (1 − α(a′, b′))b̄*. By Lemma 4 and Eq. (15), this holds if and only if

Ū(α(a, b)b̄** + (1 − α(a, b))b̄*) ≥ Ū(α(a′, b′)b̄** + (1 − α(a′, b′))b̄*) ⇔ J(a, b) ≥ J(a′, b′).

Thus (a, b) ≿ (a′, b′) if and only if J(a, b) ≥ J(a′, b′). Hence J satisfies (i) and is also unique. □

Choose a utility function such that the utility of b̄* is smaller than −1 and that of b̄** is greater than 1. (This choice is possible due to (A.0) and the uniqueness of Ū in Lemma 5.) For expository convenience, with slight abuse of notation, I denote this function by Ū.

Let F be the set of all real-valued functions on Θ. Let K = Ū(Bcv) and denote by F(K) the set of real-valued functions on Θ with values in K. Given b ∈ B, let Ū ⊗ b ∈ F(K) be defined by Ū ⊗ b := (Ū(b̄_{−θ}b(θ)))_{θ∈Θ}. Note that, because Ū is affine, for all b, b′ ∈ B and α ∈ [0, 1], Ū ⊗ (αb + (1 − α)b′) = αŪ ⊗ b + (1 − α)Ū ⊗ b′. (To see this, observe that

Ū ⊗ (αb + (1 − α)b′) = (Ū(b̄_{−θ}(αb + (1 − α)b′)(θ)))_{θ∈Θ} = (Ū(αb̄_{−θ}b(θ) + (1 − α)b̄_{−θ}b′(θ)))_{θ∈Θ} = (αŪ(b̄_{−θ}b(θ)) + (1 − α)Ū(b̄_{−θ}b′(θ)))_{θ∈Θ} = αŪ ⊗ b + (1 − α)Ū ⊗ b′,

where use has been made of the convexity of Bcv and Definition 1.)

For every r ∈ R denote by r_c the constant function in F whose value is r. Let b̄1 ∈ Bcv be such that Ū ⊗ b̄1 = (Ū(b̄1), . . . , Ū(b̄1)) = 1_c.13
Lemma 6. There exists a functional, I : A × F → R, such that:

(i) For all (a, b) ∈ C, I(a, Ū ⊗ b) = J(a, b) (hence I(a, Ū ⊗ b̄1) = 1 for all a ∈ A).

For all a ∈ A,

(ii) I(a, ·) is monotonic (that is, for all f, g ∈ F, f ≥ g ⇒ I(a, f) ≥ I(a, g));
(iii) I(a, ·) is superadditive and homogeneous of degree 1;
(iv) I(a, ·) is constant independent (that is, I(a, f + r_c) = I(a, f) + I(a, r_c) for all f ∈ F and r ∈ R).
12. Specifically, α(a, b) = sup{α ∈ [0, 1] | (a, b) ≿ (a, αb̄** + (1 − α)b̄*)}.
13. The fact that Ū composed with the elements of Bcv is the constant function that for every θ equals Ū of that bet follows from the definition of Ū. Specifically, for every b̄ ∈ Bcv and θ ∈ Θ, b̄_{−θ}b̄(θ) = b̄. Hence Ū ⊗ b̄ := (Ū(b̄), . . . , Ū(b̄)).
14. To see this, note that, by Lemma 4, Ū ⊗ b = Ū ⊗ b′ implies that b̄_{−θ}b(θ) ∼ b̄_{−θ}b′(θ) for all θ ∈ Θ. Thus, by the definition of ≿_θ, b(θ) ∼_θ b′(θ) for all θ ∈ Θ. By (A.4), this implies (a, b) ∼ (a, b′). Whence (i) implies I(a, Ū ⊗ b) = J(a, b) = J(a, b′) = I(a, Ū ⊗ b′), and I is well defined.

Proof of Lemma 6. Let I be defined by condition (i); then, by Lemma 5 and (A.4), I restricted to F(K) is well defined.14 Let f, g ∈ F(K) satisfy f = αg for some α ∈ (0, 1]. The proof that I(a, ·) is homogeneous of degree 1 requires showing that I(a, f) = αI(a, g). (This will imply the equality for α > 1.) Let b ∈ B be such that Ū ⊗ b = g and let b̄0 ∈ Bcv satisfy J(a, b̄0) = Ū(b̄0) = 0 (the first equality is implied by Lemma 5). Define b′ = αb + (1 − α)b̄0. By Lemma 4, Ū is affine. Hence Ū ⊗ b′ = αŪ ⊗ b + (1 − α)Ū ⊗ b̄0 = αg = f, where use has been made of the fact that Ū ⊗ b̄0 is a constant function whose value is Ū(b̄0) = 0. By (i), I(a, f) = J(a, b′). Let b̄ ∈ Bcv satisfy (a, b̄) ∼ (a, b) (hence J(a, b̄) = J(a, b) = I(a, g)). Then, by axiom (A.5), (a, αb̄ + (1 − α)b̄0) ∼ (a, αb + (1 − α)b̄0). Invoking the convexity of Bcv and the affinity of Ū, by Lemma 5,

J(a, b′) = J(a, αb̄ + (1 − α)b̄0) = αJ(a, b̄) + (1 − α)J(a, b̄0) = αJ(a, b̄).   (16)

Thus

I(a, f) = J(a, b′) = αJ(a, b̄) = αJ(a, b) = αI(a, g)   (17)

and I(a, ·) is linearly homogeneous.

Using homogeneity, extend I(a, ·) to F. To prove (ii), take f, g ∈ F(K) such that f ≥ g. Take b, b′ ∈ B such that tŪ ⊗ b = f and tŪ ⊗ b′ = g. Then Ū(b̄_{−θ}b(θ)) ≥ Ū(b̄_{−θ}b′(θ)) for all θ ∈ Θ. By Lemma 4 and the definition of ≿_θ, b(θ) ≿_θ b′(θ) for all θ ∈ Θ. Axiom (A.4) implies that (a, b) ≿ (a, b′) for all a ∈ A. By Lemma 5, the definition of I(a, ·), and its homogeneity, this implies I(a, f)/t = J(a, b) ≥ J(a, b′) = I(a, g)/t, and I(a, ·) satisfies (ii).

Next consider condition (iv). Let there be given f ∈ F and r ∈ R. Invoking homogeneity, without loss of generality, assume that 2f, 2r_c ∈ F(K). Define μ = I(a, 2f) = 2I(a, f). Let b ∈ B satisfy Ū ⊗ b = 2f and let p, q ∈ Δ(I(θ)) satisfy Ū(b̄_{−θ}p) = μ and
Ū(b̄_{−θ}q) = 2r. Since, by Lemma 5 and (i), (a, b) ∼ (a, b̄_{−θ}p), axiom (A.5) implies that

(a, (1/2)b + (1/2)b̄_{−θ}q) ∼ (a, (1/2)b̄_{−θ}p + (1/2)b̄_{−θ}q).   (18)

Hence, by Lemma 5, J(a, (1/2)b + (1/2)b̄_{−θ}q) = J(a, (1/2)b̄_{−θ}p + (1/2)b̄_{−θ}q). By (i) this implies

I(a, f + r_c) = I(a, (1/2)Ū ⊗ b + (1/2)Ū ⊗ b̄_{−θ}q) = I(a, (1/2)Ū ⊗ b̄_{−θ}p + (1/2)Ū ⊗ b̄_{−θ}q) = I(a, (1/2)μ_c + r_c) = (1/2)μ + r = I(a, f) + r,   (19)

where use has been made of

I(a, (1/2)Ū ⊗ b̄_{−θ}p + (1/2)Ū ⊗ b̄_{−θ}q) = J(a, (1/2)b̄_{−θ}p + (1/2)b̄_{−θ}q) = Ū((1/2)b̄_{−θ}p + (1/2)b̄_{−θ}q) = (1/2)Ū(b̄_{−θ}p) + (1/2)Ū(b̄_{−θ}q) = (1/2)μ + r.   (20)

Thus I(a, ·) satisfies (iv).
To show that I(a, ·) is superadditive, let f, g ∈ F be given. By homogeneity it suffices to prove that I(a, (1/2)f + (1/2)g) ≥ (1/2)I(a, f) + (1/2)I(a, g). Let b, b′ ∈ B satisfy Ū ⊗ b = f and Ū ⊗ b′ = g. If I(a, f) = I(a, g) then (a, b) ∼ (a, b′) and, by axiom (A.3), (a, (1/2)b + (1/2)b′) ≿ (a, b), which, in turn, implies I(a, (1/2)f + (1/2)g) ≥ I(a, f) = (1/2)I(a, f) + (1/2)I(a, g).

Assume that I(a, f) > I(a, g) and let β = I(a, f) − I(a, g). Set h = g + β_c; then, by (iv), I(a, h) = I(a, g) + β = I(a, f). Hence, by (iv) and superadditivity for the case above,

I(a, (1/2)f + (1/2)g) + (1/2)β = I(a, (1/2)f + (1/2)h) ≥ (1/2)I(a, f) + (1/2)I(a, h) = (1/2)I(a, f) + (1/2)I(a, g) + (1/2)β.   (21)

Thus I(a, ·) is superadditive. □
Lemma 7. Let I be the functional in Lemma 6. Then there exists a family {C(a)}_{a∈A} of closed and convex sets of probability measures on Θ such that, for all (a, f) ∈ A × F,

I(a, f) = min{Σ_{θ∈Θ} f(θ)π(θ) | π ∈ C(a)}.

Proof of Lemma 7. Let (a, g) ∈ A × F satisfy I(a, g) > 0. The next step is to construct a probability measure, h_{(a,g)}, on Θ such that I(a, f) ≤ Σ_{θ∈Θ} f(θ)h_{(a,g)}(θ) for all f ∈ F, and I(a, g) = Σ_{θ∈Θ} g(θ)h_{(a,g)}(θ). Define

D1(a) = {f ∈ F | I(a, f) > 1}

and

D2(a) = conv({f ∈ F | f ≤ 1_c} ∪ {f ∈ F | f ≤ g/I(a, g)}).

The sets D1(a) and D2(a) are disjoint. To see that, let d2 ∈ D2(a) satisfy d2 = αf1 + (1 − α)f2, where f1 ≤ 1_c, f2 ≤ g/I(a, g) and α ∈ [0, 1]. Then, by monotonicity, homogeneity and constant-valuation independence,

I(a, d2) ≤ I(a, α1_c + (1 − α)f2) = α + (1 − α)I(a, f2) ≤ 1.   (22)

Thus d2 ∉ D1(a), whence D1(a) ∩ D2(a) = ∅.

Since both D1(a) and D2(a) are convex sets with nonempty interiors, there exists a hyperplane (h_{(a,g)}, β) separating D1(a) and D2(a). That is, for all d1 ∈ D1(a) and d2 ∈ D2(a),

h_{(a,g)} · d1 ≥ β ≥ h_{(a,g)} · d2.   (23)

Since the unit ball of F is contained in D2(a), β > 0. (Otherwise h_{(a,g)} = 0.) Henceforth assume, without loss of generality, that β = 1.

Since 1_c ∈ D2(a), inequalities (23) imply that h_{(a,g)} · 1_c ≤ 1. Because I(a, 1_c) = 1, 1_c is a limit point of D1(a); thus h_{(a,g)} · 1_c ≥ 1. Hence h_{(a,g)} · 1_c = 1. Let E ⊂ Θ and let 1_E be the indicator function; then h_{(a,g)} · 1_E + h_{(a,g)} · (1_c − 1_E) = h_{(a,g)} · 1_c = 1. But (1_c − 1_E) ∈ D2(a), hence h_{(a,g)} · 1_E ≥ 0 for all E ⊂ Θ. Thus h_{(a,g)} is a probability measure on Θ.

I show next that h_{(a,g)} · f ≥ I(a, f) for all f ∈ F and h_{(a,g)} · g = I(a, g). If I(a, f) > 0 then f/I(a, f) + (1/n)_c ∈ D1(a) for all n = 1, 2, . . .. Hence inequalities (23) imply that h_{(a,g)} · f ≥ I(a, f). If I(a, f) ≤ 0, there is r ∈ R such that I(a, f + r_c) = I(a, f) + r > 0, since I satisfies constant independence. Apply the argument above to I(a, f + r_c) to obtain

h_{(a,g)} · (f + r_c) = h_{(a,g)} · f + r ≥ I(a, f + r_c) = I(a, f) + r.   (24)

Hence h_{(a,g)} · f ≥ I(a, f).

Since g/I(a, g) ∈ D2(a), inequalities (23) imply the converse inequality for g. Hence h_{(a,g)} · g = I(a, g).

Let C(a) be the closure of the convex hull of {h_{(a,g)} | I(a, g) > 0, g ∈ F}. Then

I(a, f) ≤ min{Σ_{θ∈Θ} f(θ)π(θ) | π ∈ C(a)}.

For f such that I(a, f) > 0, it was shown that the converse inequality holds as well. For f such that I(a, f) ≤ 0 the converse inequality follows from constant independence. □

For every θ ∈ Θ define a real-valued function, u(·, θ), on Δ(I(θ)) by u(p, θ) = Ū(b̄_{−θ}p). Then Lemmata 4–7 imply that, for all (a, b), (a′, b′) ∈ C,

(a, b) ≿ (a′, b′) ⇔ min_{π ∈ C(a)} Σ_{θ∈Θ} u(b(θ), θ)π(θ) ≥ min_{π ∈ C(a′)} Σ_{θ∈Θ} u(b′(θ), θ)π(θ).   (25)

Furthermore,

u(αp + (1 − α)q, θ) = Ū(b̄_{−θ}(αp + (1 − α)q)) = αŪ(b̄_{−θ}p) + (1 − α)Ū(b̄_{−θ}q),   (26)

where use is made of the fact that Bcv is convex, Ū is affine, and

Ū(b̄_{−θ}(αp + (1 − α)q)) = Ū(α(b̄_{−θ}p) + (1 − α)(b̄_{−θ}q)).

Hence

u(αp + (1 − α)q, θ) = αu(p, θ) + (1 − α)u(q, θ).   (27)

Thus u(·, θ) is affine. This completes the proof that (i) ⇒ (ii).
(ii) ⇒ (i). Let I be a real-valued function on A × F that, for every given a ∈ A, is defined by I(a, f) = min_{π ∈ C(a)} Σ_{θ∈Θ} f(θ)π(θ), where C(a) is compact and convex. Then I(a, ·) is continuous and satisfies (i)–(iv) of Lemma 6. Let {u(·, θ) : Δ(I(θ)) → R}_{θ∈Θ} be a family of affine functions. Define a function Ū : Bcv → R by Ū(b̄_{−θ}p) := u(p, θ), θ ∈ Θ.

Define a preference relation ≿ on C by (a, b) ≿ (a′, b′) if and only if I(a, (Ū(b̄_{−θ}b(θ)))_{θ∈Θ}) ≥ I(a′, (Ū(b̄_{−θ}b′(θ)))_{θ∈Θ}). Then ≿ satisfies axioms (A.1)–(A.5). To show that (A.6) holds, let b̄, b̄′ ∈ Bcv; we need to show that αb̄ + (1 − α)b̄′ ∈ Bcv. Let Ū ⊗ b̄ = r_c and Ū ⊗ b̄′ = r′_c; then I(a, αr_c + (1 − α)r′_c) = αr + (1 − α)r′ for all a ∈ A. Since the u(·, θ), θ ∈ Θ, are affine functions, Ū is affine. Hence αŪ ⊗ b̄ + (1 − α)Ū ⊗ b̄′ = Ū ⊗ (αb̄ + (1 − α)b̄′). Thus

I(a, Ū ⊗ (αb̄ + (1 − α)b̄′)) = αr + (1 − α)r′ = I(a′, Ū ⊗ (αb̄ + (1 − α)b̄′)),

for all a, a′ ∈ A. Thus, by definition, (a, αb̄ + (1 − α)b̄′) ∼ (a′, αb̄ + (1 − α)b̄′) for all a, a′ ∈ A. Hence αb̄ + (1 − α)b̄′ ∈ Bcv. This completes the proof of the existence of the representation.
The uniqueness of {u(·, θ)}_{θ∈Θ} follows from the uniqueness of Ū in Lemma 4.

By (A.0), ≿ is nondegenerate. Assume that, for some a ∈ A, there are C1(a) ≠ C2(a), both nonempty, closed, convex sets, such that the two functions on B,

H1(b; a) = min_{π ∈ C1(a)} Σ_{θ∈Θ} u(b(θ), θ)π(θ)  and  H2(b; a) = min_{π ∈ C2(a)} Σ_{θ∈Θ} u(b(θ), θ)π(θ),

both represent ≿. Without loss of generality assume that there exists π̂ ∈ C1(a) \ C2(a). Then, by a separating hyperplane theorem, there exists h ∈ R^{|Θ|} such that

Σ_{θ∈Θ} h(θ)π̂(θ) < min_{π ∈ C2(a)} Σ_{θ∈Θ} h(θ)π(θ).

Without loss of generality, assume that h ∈ F(K). Then there exists b ∈ B such that H1(b; a) < H2(b; a). Let b̄ ∈ Bcv satisfy b̄ ∼ (a, b); then H1(b̄; a) = H1(b; a) < H2(b; a) = H2(b̄; a), a contradiction. □
7.2. Proof of Theorem 2
The proof of Theorem 2 is similar to that of Theorem 1 and is only outlined here.
Lemma 8. There exist jointly cardinal functions V : B̃ → R and ς : A → R such that, for all (a, b̃), (a′, b̃′) ∈ A × B̃, (a, b̃) ≿ (a′, b̃′) if and only if V(b̃) + ς(a) ≥ V(b̃′) + ς(a′). Moreover, the function V is affine.

Proof of Lemma 8. The restriction of ≿ to A × B̃ is a continuous weak order satisfying axioms (A.7)–(A.9) if and only if there exist jointly cardinal, continuous, real-valued functions V on B̃ and ς on A such that ≿ on A × B̃ is represented by (a, b̃) → V(b̃) + ς(a) (Wakker (1989), Theorem III.4.1).

Because B̃ is a convex set and ≿ is a continuous weak order, axiom (A.5′) implies that, on B̃, the von Neumann–Morgenstern expected utility theorem applies. Hence V is affine. □

Lemma 9. Given the functions (V, ς) from Lemma 8, there exists a unique function J : A × B → R such that:

(i) (a, b) ≿ (a′, b′) if and only if J(a, b) ≥ J(a′, b′) for all (a, b), (a′, b′) ∈ C.
(ii) For every (a, b̃) ∈ A × B̃, J(a, b̃) = V(b̃) + ς(a).
Proof of Lemma 9. On A × B̃, let J be uniquely determined by (V, ς) according to (ii). By (A.0′), (a, b̃**) ≿ (a, b) ≿ (a, b̃*), for all (a, b) ∈ C. Invoking the uniqueness of V, without loss of generality, normalize V so that V(b̃**) = 1 and V(b̃*) = 0. For every given (a, b) ∈ C, let α(a, b) be defined by (a, b) ∼ (a, α(a, b)b̃** + (1 − α(a, b))b̃*). Because ≿ is continuous and satisfies axiom (A.5′), α(a, b) is well defined. Define

J(a, b) = J(a, α(a, b)b̃** + (1 − α(a, b))b̃*) = V(α(a, b)b̃** + (1 − α(a, b))b̃*) + ς(a),   (28)

where the last equality follows from the convexity of B̃. But V is affine, hence

J(a, b) = α(a, b)V(b̃**) + (1 − α(a, b))V(b̃*) + ς(a) = α(a, b) + ς(a),   (29)

where use was made of the normalization of V. By transitivity and Lemma 8, J satisfies (i). □

Choose a utility function V such that V(b̃*) < −1 and V(b̃**) > 1. Let L = V(B̃) and denote by F(L) the set of real-valued functions on Θ with values in L. Given b ∈ B, let V ⊗ b ∈ F(L) be defined by V ⊗ b := (V(b̃_{−θ}b(θ)))_{θ∈Θ}, where b̃_{−θ}p is the element of B̃ whose θ-coordinate is p. Note that, because V is affine, B̃ is convex, and the elements of B̃ are unique up to indifference, V ⊗ (αb + (1 − α)b′) = αV ⊗ b + (1 − α)V ⊗ b′ for all b, b′ ∈ B and α ∈ [0, 1]. (To see this, note that, by the convexity and uniqueness of B̃, for each θ ∈ Θ, b̃_{−θ}(αp + (1 − α)q) and αb̃_{−θ}p + (1 − α)b̃_{−θ}q have θ-coordinates in the same indifference class I(αp + (1 − α)q; θ, b̃, a). Hence, by Lemma 8,

V(b̃_{−θ}(αp + (1 − α)q)) = V(αb̃_{−θ}p + (1 − α)b̃_{−θ}q) = αV(b̃_{−θ}p) + (1 − α)V(b̃_{−θ}q).

The affinity of V ⊗ follows from its definition.)
The remainder of the proof uses the same arguments as in the proof of Theorem 1, with the obvious changes, that is, replacing Bcv with B̃ and b̄ with b̃. By the same arguments as in the proof of Lemma 6, there is a functional, I : A × F → R, defined by I(a, V ⊗ b) = J(a, b) − ς(a). Then, for all (a, b) ∈ C, I(a, ·) is monotonic, superadditive, homogeneous of degree 1, and satisfies constant independence.15

Let I be the functional in Lemma 10. Then, by the same argument as in the proof of Lemma 7, there exists a family {C(a)}_{a∈A} of closed and convex sets of probability measures on Θ such that, for all (a, f) ∈ A × F,

I(a, f) = min{Σ_{θ∈Θ} f(θ)π(θ) | π ∈ C(a)}.
For every θ ∈ Θ define a real-valued function, v(·, θ), on Δ(I(θ)) by v(b(θ), θ) = V(b̃_{−θ}b(θ)). Then Lemmas 8, 9 and the definition of I imply that, for all (a, b), (a′, b′) ∈ C,

(a, b) ≿ (a′, b′) ⇔ min_{π ∈ C(a)} Σ_{θ∈Θ} v(b(θ), θ)π(θ) + ς(a) ≥ min_{π ∈ C(a′)} Σ_{θ∈Θ} v(b′(θ), θ)π(θ) + ς(a′).   (30)

Furthermore, because B̃ is convex, V is affine and

V(b̃_{−θ}(αp + (1 − α)q)) = V(α(b̃_{−θ}p) + (1 − α)(b̃_{−θ}q)).

15. See Lemma 10 in Appendix A for details.
Because V is affine,
v(˛p + (1 − ˛)q, ) = ˛V (b
− p) + (1 − ˛)V (b− q) = ˛v(p, ) + (1 − ˛)v(q, )
(31)
Thus v(·, ) is affine. This completes the proof that (i) ⇒ (ii).
(ii) ⇒ (i). That (ii) implies (A.7)–(A.9) is an implication of Lemma 8. That is a continuous weak order satisfying (A.3),
(A.4 ), (A.5 ) follows using the
same arguments as in the proof of Theorem 1.
That the functions $V := \sum_{\theta \in \Theta} v(\tilde b(\theta), \theta)$ on $\tilde B$ and $\varsigma$ are jointly cardinal is an implication of Lemma 8. Moreover, for all $\theta \in \Theta$, $v(\cdot, \theta)$ is affine. Hence, if $\{\hat v(\cdot, \theta)\}_{\theta \in \Theta}$ and $\hat\varsigma$ is another set of functions representing $\succcurlyeq$ in the sense of (ii), then there are $\beta > 0$, $\gamma$, and $\delta$ such that $\hat v(\cdot, \theta) = \beta v(\cdot, \theta) + \gamma$ and $\hat\varsigma = \beta\varsigma + \delta$.¹⁶ Thus $\{v(\cdot, \theta)\}_{\theta \in \Theta}$ are cardinally measurable and fully comparable.
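As a simple consistency check (not part of the original argument), such a joint transformation indeed leaves the representation intact: if $\hat v(\cdot, \theta) = \beta v(\cdot, \theta) + \gamma$ and $\hat\varsigma = \beta\varsigma + \delta$ with $\beta > 0$, then, because every $\pi \in C(a)$ satisfies $\sum_{\theta \in \Theta} \pi(\theta) = 1$ and a positive scaling does not alter the minimizer,
\[ \min_{\pi \in C(a)} \sum_{\theta \in \Theta} \hat v(b(\theta), \theta)\pi(\theta) + \hat\varsigma(a) = \beta\Big[\min_{\pi \in C(a)} \sum_{\theta \in \Theta} v(b(\theta), \theta)\pi(\theta) + \varsigma(a)\Big] + \gamma + \delta, \]
a positive affine transformation of the original value, so the ranking in (30) is unchanged.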
The proof that, for each $a \in A$, the set $C(a)$ is unique and that each $\pi \in C(a)$ has full support is by the same argument as in the proof of Theorem 1.
7.3. Proof of Theorem 3
(i) ⇔ (ii). By Theorem 2, for all $(a, b), (a', b') \in C$,
\[ (a, b) \succcurlyeq (a', b') \iff \min_{\pi \in C(a)} \sum_{\theta \in \Theta} v(b(\theta), \theta)\pi(\theta) + \varsigma(a) \;\ge\; \min_{\pi \in C(a')} \sum_{\theta \in \Theta} v(b'(\theta), \theta)\pi(\theta) + \varsigma(a'). \tag{32} \]
For all $a \in A$, $\theta, \theta' \in \Theta$, and lotteries $p, q$, $v(p, \theta) \ge v(q, \theta)$ if and only if
\[ v(p, \theta)\pi(\theta) + \sum_{\hat\theta \in \Theta \setminus \{\theta\}} v(\tilde b^{**}(\hat\theta), \hat\theta)\pi(\hat\theta) \;\ge\; v(q, \theta)\pi(\theta) + \sum_{\hat\theta \in \Theta \setminus \{\theta\}} v(\tilde b^{**}(\hat\theta), \hat\theta)\pi(\hat\theta), \tag{33} \]
where $\pi \in \operatorname{argmin}_{C(a)} \sum_{\hat\theta \in \Theta} v\big((\tilde b^{**}_{-\theta} p)(\hat\theta), \hat\theta\big)\pi(\hat\theta)$. (Note that $\operatorname{argmin}_{C(a)} \sum_{\hat\theta \in \Theta} v\big((\tilde b^{**}_{-\theta} p)(\hat\theta), \hat\theta\big)\pi(\hat\theta) = \operatorname{argmin}_{C(a)} \sum_{\hat\theta \in \Theta} v\big((\tilde b^{**}_{-\theta} q)(\hat\theta), \hat\theta\big)\pi(\hat\theta)$.) Thus, by (32), this is equivalent to $(a, \tilde b^{**}_{-\theta} p) \succcurlyeq (a, \tilde b^{**}_{-\theta} q)$. By axiom (A.10) this is equivalent to $(a, \tilde b^{**}_{-\theta'} p) \succcurlyeq (a, \tilde b^{**}_{-\theta'} q)$. Hence, by (32), this is equivalent to
\[ v(p, \theta')\pi'(\theta') + \sum_{\hat\theta \in \Theta \setminus \{\theta'\}} v(\tilde b^{**}(\hat\theta), \hat\theta)\pi'(\hat\theta) \;\ge\; v(q, \theta')\pi'(\theta') + \sum_{\hat\theta \in \Theta \setminus \{\theta'\}} v(\tilde b^{**}(\hat\theta), \hat\theta)\pi'(\hat\theta), \tag{34} \]
where $\pi' \in \operatorname{argmin}_{C(a)} \sum_{\hat\theta \in \Theta} v\big((\tilde b^{**}_{-\theta'} p)(\hat\theta), \hat\theta\big)\pi'(\hat\theta)$, because $(a, \tilde b^{**}_{-\theta} p) \succcurlyeq (a, \tilde b^{**}_{-\theta} q)$ is equivalent to $(a, \tilde b^{**}_{-\theta'} p) \succcurlyeq (a, \tilde b^{**}_{-\theta'} q)$. But (34) holds if and only if $v(p, \theta') \ge v(q, \theta')$. Thus, by the uniqueness of the von Neumann–Morgenstern utility function, for all $\theta \in \Theta$, $v(\cdot, \theta) = \beta(\theta)v(\cdot) + \gamma(\theta)$, with $\beta(\theta) > 0$.
However, for all $\tilde b \in \tilde B$, $\min_{\pi \in C(a)} \sum_{\theta \in \Theta} [\beta(\theta)v(\tilde b(\theta)) + \gamma(\theta)]\pi(\theta)$ is independent of $a$. By the uniqueness of $\tilde b$,
\[ \beta(\theta)v(\tilde b(\theta)) + \gamma(\theta) = \beta(\theta')v(\tilde b(\theta')) + \gamma(\theta') \quad \text{for all } \theta, \theta' \in \Theta \text{ and } \tilde b \in \tilde B. \tag{35} \]
Hence
\[ [\beta(\theta) - \beta(\theta')]\,[v(\tilde b(\theta)) - v(\tilde b'(\theta'))] = 0 \quad \text{for all } \theta, \theta' \in \Theta \text{ and } \tilde b, \tilde b' \in \tilde B. \tag{36} \]
Thus $\beta(\theta) = \beta$ for all $\theta \in \Theta$. But $v(\tilde b(\theta)) = v(\tilde b(\theta'))$; thus Eq. (35) implies that $\gamma(\theta) = \gamma$ for all $\theta \in \Theta$. By the normalization of $V$ and the definition of $v$, $v(\tilde b^{0}(\theta)) = 0$ and $v(\tilde b^{1}(\theta)) = 1$. Hence $\gamma = 0$ and $\beta = 1$.
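As a purely illustrative instance of this step (the two effects and the numerical values are hypothetical): suppose $\Theta = \{\theta_1, \theta_2\}$ and choose $\tilde b, \tilde b' \in \tilde B$ with $v(\tilde b(\theta)) = 0.3$ and $v(\tilde b'(\theta)) = 0.7$ for every $\theta$. Then (36) reads
\[ [\beta(\theta_1) - \beta(\theta_2)]\,(0.3 - 0.7) = 0, \]
so $\beta(\theta_1) = \beta(\theta_2)$, and (35) then collapses to $\gamma(\theta_1) = \gamma(\theta_2)$.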
The uniqueness follows from Theorem 2.
Acknowledgements
I am grateful to an anonymous referee for many useful comments and suggestions and to the National Science Foundation
for financial support under grant SES-0314249.
¹⁶ Note that $\{v(\cdot, \theta)\}_{\theta \in \Theta}$ are jointly cardinal. That is, if $\{\hat v(\cdot, \theta)\}_{\theta \in \Theta}$ is another set of effect-dependent utility functions that, jointly with $\varsigma$, represent the preference relation, then $\hat v(\cdot, \theta) = \beta v(\cdot, \theta) + \gamma(\theta)$. However, for this representation to hold for every $a$, it is necessary that $\gamma(\theta) = \gamma$ for all $\theta \in \Theta$.
Appendix A

Lemma 10. There exists a functional $I : A \times F \to \mathbb{R}$ such that:
(i) For all $(a, b) \in C$, $I(a, V \otimes b) = J(a, b) - \varsigma(a)$.
(ii) $I(a, \cdot)$ is monotonic (that is, for all $f, g \in F$, $f \ge g$ implies $I(a, f) \ge I(a, g)$).
(iii) $I(a, \cdot)$ is superadditive and homogeneous of degree 1.
(iv) $I(a, \cdot)$ is constant independent (that is, $I(a, f + r_c) = I(a, f) + I(a, r_c)$ for all $f \in F$ and $r \in \mathbb{R}$).
Proof of Lemma 10. Let $I$ be defined by condition (i); then, by Lemma 9, $I$ is well defined. Let $f, g \in F(L)$ satisfy $f = \alpha g$ for some $\alpha \in (0, 1]$. The proof that $I(a, \cdot)$ is linearly homogeneous requires showing that $I(a, f) = \alpha I(a, g)$. (This will imply the equality for $\alpha > 1$.) Let $b \in B$ be such that $V \otimes b = g$, and let $\tilde b^{*} \in \tilde B$ satisfy $J(a, \tilde b^{*}) - \varsigma(a) = V(\tilde b^{*}) = 0$ (the first equality is implied by Lemma 9 and the second by the normalization of $V$). Define $b' = \alpha b + (1 - \alpha)\tilde b^{*}$. By Lemma 8, $V$ is affine; hence $V \otimes b' = \alpha V \otimes b + (1 - \alpha)V \otimes \tilde b^{*} = \alpha g = f$, where use has been made of $V \otimes \tilde b^{*} = (V(\tilde b_{-\theta}\tilde b^{*}(\theta)))_{\theta \in \Theta} = (V(\tilde b^{*}), \ldots, V(\tilde b^{*})) = 0_c$. By (i), $I(a, f) = J(a, b') - \varsigma(a)$. Let $\tilde b \in \tilde B$ satisfy $(a, \tilde b) \sim (a, b)$ (hence $J(a, \tilde b) - \varsigma(a) = J(a, b) - \varsigma(a) = I(a, g)$). But, by axiom (A.5′), $\alpha \tilde b + (1 - \alpha)\tilde b^{0} \sim \alpha b + (1 - \alpha)\tilde b^{*}$. Hence, invoking the convexity of $\tilde B$ and the affinity of $V$, $J(a, \alpha \tilde b + (1 - \alpha)\tilde b^{0}) - \varsigma(a) = \alpha J(a, \tilde b) + (1 - \alpha)J(a, \tilde b^{*}) - \varsigma(a) = \alpha[J(a, \tilde b) - \varsigma(a)]$. Thus
\[ J(a, b') - \varsigma(a) = J(a, \alpha \tilde b + (1 - \alpha)\tilde b^{0}) - \varsigma(a) = \alpha[J(a, \tilde b) - \varsigma(a)] \tag{37} \]
and
\[ I(a, f) = J(a, b') - \varsigma(a) = \alpha[J(a, \tilde b) - \varsigma(a)] = \alpha[J(a, b) - \varsigma(a)] = \alpha I(a, g). \tag{38} \]
Hence $I(a, \cdot)$ is linearly homogeneous.
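Once the representation $I(a, f) = \min_{\pi \in C(a)} \sum_{\theta \in \Theta} f(\theta)\pi(\theta)$ derived in the proof of Theorem 2 is in hand, linear homogeneity can also be verified directly (this is offered only as a consistency check, not the preference-based argument used above): for any $\alpha \ge 0$,
\[ \min_{\pi \in C(a)} \sum_{\theta \in \Theta} \alpha f(\theta)\pi(\theta) = \alpha \min_{\pi \in C(a)} \sum_{\theta \in \Theta} f(\theta)\pi(\theta), \]
since multiplying every expected value by a common nonnegative constant does not change which $\pi \in C(a)$ attains the minimum.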
Using homogeneity, extend $I(a, \cdot)$ to $F$. To prove (ii), let $f, g \in F$ with $f \ge g$, and let $t > 0$ and $b, b' \in B$ be such that $tV \otimes b = f$ and $tV \otimes b' = g$. Then $V(\tilde b_{-\theta} b(\theta)) \ge V(\tilde b_{-\theta} b'(\theta))$ for all $\theta \in \Theta$. Thus, by Lemma 8, $(a, \tilde b_{-\theta} b(\theta)) \succcurlyeq (a, \tilde b_{-\theta} b'(\theta))$ for all $\theta \in \Theta$ and $a \in A$. Hence axiom (A.4′) implies that $(a, b) \succcurlyeq (a, b')$ for all $a \in A$. By Lemma 9, the definition of $I(a, \cdot)$, and its homogeneity, this implies $I(a, f)/t = J(a, b) - \varsigma(a) \ge J(a, b') - \varsigma(a) = I(a, g)/t$, and $I(a, \cdot)$ satisfies (ii).
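Monotonicity, too, has a transparent counterpart under the representation derived in the proof of Theorem 2 (again only a consistency check): if $f \ge g$ pointwise, then $\sum_{\theta \in \Theta} f(\theta)\pi(\theta) \ge \sum_{\theta \in \Theta} g(\theta)\pi(\theta)$ for every $\pi \in C(a)$, and taking minima over the same set $C(a)$ on both sides preserves the inequality, so $I(a, f) \ge I(a, g)$.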
Next consider condition (iv). Let there be given $f \in F$ and $r \in \mathbb{R}$. Invoking homogeneity, without loss of generality, assume that $2f, 2r_c \in F(L)$. Define $\gamma = I(a, 2f) = 2I(a, f)$. Let $b \in B$ satisfy $V \otimes b = 2f$, and let $p, q \in \Delta(I(\theta))$ satisfy $V(\tilde b_{-\theta} p) = \gamma$ and $V(\tilde b_{-\theta} q) = 2r$. Since, by Lemma 9 and (i), $(a, b) \sim (a, \tilde b_{-\theta} p)$, axiom (A.5′) implies that
\[ \Big(a, \tfrac{1}{2}b + \tfrac{1}{2}\tilde b_{-\theta} q\Big) \sim \Big(a, \tfrac{1}{2}\tilde b_{-\theta} p + \tfrac{1}{2}\tilde b_{-\theta} q\Big). \tag{39} \]
Hence, by (i),
\[ I(a, f + r_c) = I\Big(a, \tfrac{1}{2}V \otimes b + \tfrac{1}{2}V \otimes \tilde b_{-\theta} q\Big) = I\Big(a, \tfrac{1}{2}V \otimes \tilde b_{-\theta} p + \tfrac{1}{2}V \otimes \tilde b_{-\theta} q\Big) = I\Big(a, \tfrac{1}{2}\gamma_c + r_c\Big) = \tfrac{1}{2}\gamma + r = I(a, f) + r, \tag{40} \]
where use has been made of
\[ I\Big(a, \tfrac{1}{2}V \otimes \tilde b_{-\theta} p + \tfrac{1}{2}V \otimes \tilde b_{-\theta} q\Big) = J\Big(a, \tfrac{1}{2}\tilde b_{-\theta} p + \tfrac{1}{2}\tilde b_{-\theta} q\Big) - \varsigma(a) = V\Big(\tfrac{1}{2}\tilde b_{-\theta} p + \tfrac{1}{2}\tilde b_{-\theta} q\Big) = \tfrac{1}{2}V(\tilde b_{-\theta} p) + \tfrac{1}{2}V(\tilde b_{-\theta} q) = \tfrac{1}{2}\gamma + r. \tag{41} \]
Thus $I(a, \cdot)$ satisfies (iv).
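Constant-independence likewise has a direct counterpart under the representation of the proof of Theorem 2 (offered only as an illustration): since every $\pi \in C(a)$ satisfies $\sum_{\theta \in \Theta} \pi(\theta) = 1$,
\[ \min_{\pi \in C(a)} \sum_{\theta \in \Theta} \big(f(\theta) + r\big)\pi(\theta) = \min_{\pi \in C(a)} \sum_{\theta \in \Theta} f(\theta)\pi(\theta) + r, \]
so adding the constant function $r_c$ shifts $I(a, \cdot)$ by exactly $r = I(a, r_c)$.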
To show that $I(a, \cdot)$ is superadditive, let $f, g \in F$ be given. By homogeneity it suffices to prove that $I(a, (1/2)f + (1/2)g) \ge (1/2)I(a, f) + (1/2)I(a, g)$. Let $b, b' \in B$ satisfy $V \otimes b = f$ and $V \otimes b' = g$. If $I(a, f) = I(a, g)$, then $(a, b) \sim (a, b')$ and, by uncertainty aversion (axiom (A.3)), $(a, (1/2)b + (1/2)b') \succcurlyeq (a, b)$, which, by Lemma 9, implies $I(a, (1/2)f + (1/2)g) \ge I(a, f) = (1/2)I(a, f) + (1/2)I(a, g)$.
Assume that $I(a, f) > I(a, g)$ and let $\beta = I(a, f) - I(a, g)$. Set $h = g + \beta_c$; then, by (iv), $I(a, h) = I(a, g) + \beta = I(a, f)$. Hence, by (iv) and the superadditivity for the case above,
\[ I\Big(a, \tfrac{1}{2}f + \tfrac{1}{2}g\Big) + \tfrac{1}{2}\beta = I\Big(a, \tfrac{1}{2}f + \tfrac{1}{2}h\Big) \;\ge\; \tfrac{1}{2}I(a, f) + \tfrac{1}{2}I(a, h) = \tfrac{1}{2}I(a, f) + \tfrac{1}{2}I(a, g) + \tfrac{1}{2}\beta. \tag{42} \]
Thus $I(a, \cdot)$ is superadditive.
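Superadditivity is also transparent under the representation derived in the proof of Theorem 2 (a direct check, not the argument given above): for any $f, g \in F$,
\[ \min_{\pi \in C(a)} \sum_{\theta \in \Theta} \big[f(\theta) + g(\theta)\big]\pi(\theta) \;\ge\; \min_{\pi \in C(a)} \sum_{\theta \in \Theta} f(\theta)\pi(\theta) + \min_{\pi \in C(a)} \sum_{\theta \in \Theta} g(\theta)\pi(\theta), \]
because the prior that attains the minimum on the left-hand side is evaluated at both $f$ and $g$, and each of those evaluations is at least the corresponding minimum on the right-hand side.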