Econ 207 Lecture Notes
Ricardo Fernholz, July 2007
1 Subjective Expected Utility: Anscombe-Aumann
Lecture 2, January 24: The goal of this section is to model subjective uncertainty (i.e. “horse
race” lotteries where the probability of each outcome can vary across individuals) and present the
main Anscombe-Aumann result about such uncertainty. We start with a finite set X of outcomes
and a corresponding set ∆(X) of lotteries on X. There is also a finite set of states of nature
S = {1, 2, . . . , S}. An act is defined as a function f : S → ∆(X) specifying for each s ∈ S a
distribution fs = f (s) ∈ ∆(X) obtaining in state s ∈ S. It follows then that fs (x) is the probability
of outcome x ∈ X occurring conditional on state s ∈ S being realized. Next, we define H = ∆(X)S
as the set of all possible acts and the subset Hc of H by
Hc = {f ∈ H : f (s) = f (s′) ∀s, s′ ∈ S}.
This is the set of constant acts; regardless of the state of nature, the same lottery (distribution)
obtains. Note that the set of constant acts Hc can be identified with ∆(X), since for each element of ∆(X) there is a constant act that maps every state to it. Finally, in this model there is of course a decision-maker whose primitive is a preference relation over the set of acts H. This preference relation % is
a binary relation on H that is complete and transitive (a weak order).
We are now ready to prove some results, starting with the classic mixture space theorem. First
we need some definitions.
Definition 1.1. A mixture space is a set Π and a family of functions ma : Π × Π → Π, a ∈ [0, 1], such that for all (π, ρ) ∈ Π × Π and all a, b ∈ [0, 1]:

m1 (π, ρ) = π,
ma (π, ρ) = m1−a (ρ, π),
ma (mb (π, ρ), ρ) = mab (π, ρ).
Some examples of mixture spaces include: any convex subset Π ⊂ R^k with ma (π, ρ) = aπ + (1 − a)ρ for any a ∈ [0, 1]; Π = ∆(X) ⊂ R^n, where ma is simply the convex combination of lotteries; and finally Π = H = ∆(X)^S, which is a convex subset of (R^n)^S, where ma (f, g) = af + (1 − a)g, i.e. (ma (f, g))(s) = af (s) + (1 − a)g(s) for each s ∈ S and any a ∈ [0, 1].
Definition 1.2. Let (Π; ma , a ∈ [0, 1]) be a mixture space and % be a preference relation on Π. Then,
% is independent if for all π, ρ, σ ∈ Π and all a ∈ (0, 1), π % ρ if and only if ma (π, σ) % ma (ρ, σ).
Additionally, % is Archimedean if for all π, ρ, σ ∈ Π such that π ≻ ρ ≻ σ, there exist a, b ∈ (0, 1)
such that ma (π, σ) ≻ ρ ≻ mb (π, σ).
Theorem 1.3. (Mixture Space Theorem) Let (Π; ma , a ∈ [0, 1]) be a mixture space and % be a
preference relation on Π. Then, % is independent and Archimedean if and only if there exists a
function F : Π → R representing % (so π % ρ iff F (π) ≥ F (ρ) for all π, ρ ∈ Π) such that for all
π, ρ ∈ Π and a ∈ [0, 1], F (ma (π, ρ)) = aF (π) + (1 − a)F (ρ). Moreover, F is unique up to positive
affine transformations.
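For concreteness, the linearity-in-mixtures property in the theorem is easy to check in the lottery example above: on Π = ∆(X), expected utility F (p) = ∑_x p(x)u(x) satisfies F (ma (p, q)) = aF (p) + (1 − a)F (q). The short Python sketch below verifies this numerically; the index u and the lotteries p, q are made-up numbers used purely for illustration.

    # Illustrative check that expected utility is mixture-linear on Delta(X).
    # The vNM index u and the lotteries p, q below are hypothetical.
    u = {"x1": 0.0, "x2": 1.0, "x3": 2.5}

    def F(p):
        # expected utility of lottery p
        return sum(p[x] * u[x] for x in u)

    def mix(a, p, q):
        # the mixture m_a(p, q) = a*p + (1-a)*q, taken coordinate by coordinate
        return {x: a * p[x] + (1 - a) * q[x] for x in u}

    p = {"x1": 0.2, "x2": 0.5, "x3": 0.3}
    q = {"x1": 0.6, "x2": 0.1, "x3": 0.3}
    for a in [0.0, 0.25, 0.5, 0.9, 1.0]:
        assert abs(F(mix(a, p, q)) - (a * F(p) + (1 - a) * F(q))) < 1e-12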
The proof of the mixture space theorem is rather tedious and not particularly instructive or
interesting. However, it is essential for the main result in this section, the Anscombe-Aumann
expected utility theorem. Before stating and proving this theorem, we need one more basic definition.
Definition 1.4. A preference relation % on H is monotonic if for all f, g ∈ H, fs % gs for all s ∈ S
implies that f % g. It is nondegenerate if there exist f, g ∈ H such that f ≻ g.
Note that the statement fs % gs refers to preferences over lotteries (elements of ∆(X)). This
makes sense because the set of constant acts Hc , together with the preference relation % over
acts, induces a preference relation over lotteries. In other words, for x, y ∈ ∆(X), x % y if and only
if the constant act that maps all states to x is weakly preferred to the constant act that maps all
states to y. We see then that monotonicity simply says that if the lotteries in every state for some
act are preferred to those for some other act, then that first act must be preferred to the second.
Theorem 1.5. (Anscombe-Aumann Expected Utility Theorem) A preference relation % on H is independent, Archimedean, monotonic, and nondegenerate if and only if there exists a von Neumann-Morgenstern index u : X → R and a unique subjective probability measure µ ∈ ∆(S) such that

U (f ) = ∑_{s∈S} µs [ ∑_{x∈X} fs (x)u(x) ]
represents %. Moreover, u is unique up to positive affine transformations.
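For concreteness, the representation can be evaluated directly once µ, u, and an act f are specified. The Python sketch below does this for made-up numbers (two states, two outcomes); none of the particular values come from the notes.

    # Illustrative evaluation of U(f) = sum_s mu_s * sum_x f_s(x) u(x).
    # All numbers (states, outcomes, mu, u, and the act f) are hypothetical.
    mu = {"s1": 0.25, "s2": 0.75}                 # subjective probabilities over states
    u = {"apple": 1.0, "banana": 0.0}             # von Neumann-Morgenstern index

    # An act assigns a lottery over outcomes to each state.
    f = {"s1": {"apple": 0.5, "banana": 0.5},
         "s2": {"apple": 0.2, "banana": 0.8}}

    def subjective_eu(act, mu, u):
        return sum(mu[s] * sum(p * u[x] for x, p in lottery.items())
                   for s, lottery in act.items())

    print(subjective_eu(f, mu, u))                # 0.25*0.5 + 0.75*0.2 = 0.275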
We only sketch out a proof of the theorem here. Denote by %∆(X) the binary relation on ∆(X)
induced by %. Thus, for p, q ∈ ∆(X), p %∆(X) q if and only if (p, p, . . . , p) % (q, q, . . . , q). %∆(X)
is a preference relation on ∆(X) that is independent and Archimedean, so by the mixture space
theorem 1.3 we know there exists a linear function v : ∆(X) → R representing %∆(X) .
Then, v(∆(X)) is a nonempty interval in R. Without loss of generality, take 0 ∈ Int(v(∆(X))).
Next, construct an order ≥∗ on the set v(∆(X))S (note that this space is a mixture space). Define
≥∗ on v(∆(X))S for w, y ∈ v(∆(X))S ⊂ RS by w ≥∗ y if and only if there exist f, g ∈ H such that
w = v ◦ f , y = v ◦ g, and f % g. Note that v ◦ f = (v(f1 ), v(f2 ), . . . , v(fS )). Then ≥∗ is well-defined.
To see this, let f, f′, g, g′ ∈ H such that w = v ◦ f = v ◦ f′ and y = v ◦ g = v ◦ g′. Then v(fs ) = v(f′s ) and v(gs ) = v(g′s ) for all s ∈ S. But that implies that fs ∼∆(X) f′s and gs ∼∆(X) g′s for all s ∈ S, which by monotonicity implies that f ∼ f′ and g ∼ g′.
It is straightforward to show that ≥∗ is a preference relation on v(∆(X))S that is independent
and Archimedean. Furthermore, ≥∗ is monotonic in the sense that if w, y ∈ v(∆(X))S and w ≥ y (in
the ordinary, pointwise sense in RS ) then w ≥∗ y. Thus, by the mixture space theorem there exists
a linear function representing ≥∗ on v(∆(X))S . By monotonicity, all coefficients of this function are
nonnegative and hence there exists some c ∈ R^S_+, with c ≠ 0 (by nondegeneracy), such that w ≥∗ y
if and only if c · w ≥ c · y. Of course, this is equivalent to
(1/∑_{s∈S} cs ) c · w ≥ (1/∑_{s∈S} cs ) c · y,

which gives us the desired subjective distribution over S:

µ = (1/∑_{s∈S} cs ) c ∈ ∆(S).

Setting u(x) = v(kx ), where kx ∈ ∆(X) is the lottery yielding outcome x ∈ X with certainty, and using the linearity of v then gives the representation in the statement of the theorem.
2 Gilboa-Schmeidler Maxmin Expected Utility
Lecture 3, January 31: The example that most inspired extensions and departures from the
subjective expected utility framework is the Ellsberg paradox. It states that when decision makers
are to draw one ball at random from an urn containing 30 red balls and 60 balls that are either
blue or yellow, they generally prefer to bet on a red ball being drawn rather than a blue (or yellow)
ball being drawn. In addition, decision makers prefer to bet on a blue or yellow ball being drawn
(both at the same time now) rather than a red or blue ball being drawn (same for red and yellow of
course).
Preferences of this kind, which have been confirmed by experiments, violate the axiom of independence that is crucial to the Anscombe-Aumann theory described above. They also rule out the
existence of a consistent probability distribution like the one established in the proof of the expected
utility theorem. More precisely, given the state space S = {r, b, y} and the preference relation %
over acts on this state space, we can define a relation %̇ over events by E %̇ E′ if and only if hE % hE′ , where hE is the act that pays $100 if the event E ⊂ S occurs and $0 otherwise. Ellsberg preferences imply that {r} ≻̇ {b} and {b, y} ≻̇ {r, y} in this case, and this requires that
π(r) > π(b) and π(b, y) > π(r, y) for any probability distribution π ∈ ∆(S) that is to be consistent.
But because π(r) = 1 − π(b, y) and π(b) = 1 − π(r, y), it follows that no such distribution is possible.
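The contradiction can also be seen by brute force. The sketch below (purely illustrative; the grid size is arbitrary) scans candidate distributions on {r, b, y} and finds none satisfying both strict inequalities.

    # No pi in Delta({r, b, y}) satisfies pi(r) > pi(b) and pi({b,y}) > pi({r,y});
    # the second inequality reduces to pi(b) > pi(r), contradicting the first.
    n = 100
    consistent = []
    for i in range(n + 1):
        for j in range(n + 1 - i):
            pr, pb = i / n, j / n
            py = 1.0 - pr - pb
            if pr > pb and (pb + py) > (pr + py):
                consistent.append((pr, pb, py))
    print(consistent)   # [] -- no candidate distribution survives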
It is precisely this issue that Gilboa and Schmeidler (1989) address. They adapt the Anscombe-Aumann setup where X is a finite set of outcomes and S = {1, 2, . . . , S} is the finite state space, but
now the standard independence axiom is replaced by the weaker condition of certainty independence.
Definition 2.1. A preference relation % on H is certainty independent if for all f, g ∈ H, all p ∈ Hc ,
and all α ∈ (0, 1), f % g if and only if αf + (1 − α)p % αg + (1 − α)p. The preference relation is uncertainty averse if for all f, g ∈ H and all α ∈ (0, 1), f ∼ g implies αf + (1 − α)g % f.
Theorem 2.2. (Gilboa-Schmeidler Maxmin Expected Utility Theorem) A preference relation % on
H is nondegenerate, monotonic, Archimedean, certainty independent, and uncertainty averse if and
only if there exists a von Neumann-Morgenstern index u : X → R and a unique closed and convex
set M ⊂ ∆(S), such that
U (f ) = minµ∈M ∑_{s∈S} µs [ ∑_{x∈X} fs (x)u(x) ]
represents %. Moreover, u is unique up to positive affine transformations.
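Computationally, the maxmin criterion just takes the worst-case expected utility over the set of priors; since the objective is linear in µ, it suffices to minimize over the extreme points of M. The sketch below illustrates this with hypothetical numbers (the act, the index u, and the two priors standing in for the extreme points of M are all made up).

    # Maxmin expected utility: U(f) = min over mu in M of sum_s mu_s * sum_x f_s(x) u(x).
    # M is represented by a finite list of priors (e.g. the extreme points of a polytope M).
    u = {"win": 1.0, "lose": 0.0}

    f = {"s1": {"win": 0.6, "lose": 0.4},         # a lottery in each state
         "s2": {"win": 0.3, "lose": 0.7}}

    M = [{"s1": 0.5, "s2": 0.5},
         {"s1": 0.2, "s2": 0.8}]

    def expected_utility(act, mu, u):
        return sum(mu[s] * sum(p * u[x] for x, p in lottery.items())
                   for s, lottery in act.items())

    def maxmin_eu(act, priors, u):
        return min(expected_utility(act, mu, u) for mu in priors)

    print(maxmin_eu(f, M, u))                     # min(0.45, 0.36) = 0.36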
Before proving this theorem, some basic results from convex analysis are necessary.
Definition 2.3. A set K ⊂ Rn is a convex cone if (i) for all x ∈ K and all λ > 0, λx ∈ K, and (ii)
for all x, y ∈ K and all α ∈ (0, 1), αx + (1 − α)y ∈ K.
Definition 2.4. Let K be a nonempty convex cone. The dual cone of K, denoted by K′, is the collection of all normal vectors to K at 0: K′ = {y ∈ R^n : y · x ≥ 0 ∀x ∈ K}.

A basic result of convex analysis says that if K ⊂ R^n is a nonempty, closed, convex cone, then K′ is also a nonempty, closed, convex cone and K″ = K. An implication of this result is that x ∈ K if and only if y · x ≥ 0 for all y ∈ K′. We are now ready to prove the Gilboa-Schmeidler theorem. Throughout the proof, for a ∈ R, let ā = (a, a, . . . , a) ∈ R^S.
As in the proof of the Anscombe-Aumann theorem, define the induced preference relation %∆(X)
on ∆(X) by restricting % to Hc . Then %∆(X) is Archimedean and independent (the latter by certainty independence of %, since the mixing act here is constant). By the mixture space theorem 1.3, there exists a linear function v : ∆(X) → R, unique up to
positive affine transformations, representing %∆(X) . As before, v(∆(X)) is an interval in R and we
can take 0 ∈ Int(v(∆(X))) without loss of generality.
Then we define the binary relation ≥∗ on v(∆(X))S by for w, y ∈ v(∆(X))S , w ≥∗ y if and
only if there exist f, g ∈ H such that w = v ◦ f , y = v ◦ g, and f % g. This is a well-defined
preference relation on v(∆(X))S that is Archimedean and independent. Furthermore, by certainty
independence, for all w, y ∈ v(∆(X))S , all a ∈ v(∆(X)), and all α ∈ (0, 1), w ≥∗ y if and only if
αw + (1 − α)ā ≥∗ αy + (1 − α)ā. By uncertainty aversion, if w =∗ y then αw + (1 − α)y ≥∗ w =∗ y.
We can extend ≥∗ to RS by noting that for all w, y ∈ v(∆(X))S and all λ ∈ (0, 1], w ≥∗ y if and
only if λw ≥∗ λy (just mix with 0̄ and use certainty independence). In particular, given w, y ∈ RS ,
let w ≥∗ y if and only if there exists some λ ∈ (0, 1] such that λw, λy ∈ v(∆(X))S and λw ≥∗ λy.
It follows now that ≥∗ is an Archimedean preference relation on RS such that whenever w ≥ y,
w ≥∗ y as well (by monotonicity). It can be shown (which we did for homework I think) that for
any w ∈ RS , there exists a unique aw ∈ R such that w =∗ aw . We call this the certainty equivalent
of w. It can also be shown (also done in homework) that for all w ∈ RS and all a, b ∈ R, w ≥∗ ā if
and only if w − b̄ ≥∗ ā − b̄. From this it follows that
{z ∈ RS : z ≥∗ w} = {z ∈ RS : z − aw ≥∗ 0̄} = {z ∈ RS : z ≥∗ 0̄} + {aw }.
In other words, all ≥∗ -better-than sets are translates of {z ∈ RS : z ≥∗ 0̄} along some diagonal.
We claim that this set, {z ∈ RS : z ≥∗ 0̄}, is a nonempty, closed, convex cone. First, note that
by monotonicity of ≥∗ this set is nonempty. Next, let w, y ≥∗ 0̄ and α ∈ (0, 1) and choose aw , ay ∈ R such that w − āw =∗ y − āy =∗ 0̄ (note that by monotonicity aw , ay ≥ 0). It follows by uncertainty aversion that αw + (1 − α)y ≥∗ αw + (1 − α)y − [αāw + (1 − α)āy ] = α(w − āw ) + (1 − α)(y − āy ) ≥∗ 0̄, and so we have shown convexity. This set is a cone by construction since y ≥∗ 0̄ if and only if λy ≥∗ λ0̄ = 0̄ for all λ > 0. The set is also closed, completing the proof of our claim. Note that by monotonicity we have R^S_+ ⊂ {z ∈ R^S : z ≥∗ 0̄}.
Lecture 4, February 7: Let D = {z ∈ R^S : z ≥∗ 0̄}′ = {d ∈ R^S : d · z ≥ 0 ∀z ≥∗ 0̄}. It follows that D ⊂ R^S_+. Now, set M = D ∩ ∆(S). Note that for any z ∈ R^S, z ≥∗ 0̄ if and only if d · z ≥ 0 for all d ∈ D, which is true if and only if (1/∑_s ds )(d · z) ≥ 0 for all d ∈ D\{0}, which is equivalent to µ · z ≥ 0 for all µ ∈ M . Given this, it follows that for any w, y ∈ R^S, w ≥∗ y if and only if µ · (w − āy ) ≥ 0 for all µ ∈ M . This is equivalent to µ · w ≥ ay for all µ ∈ M , which is equivalent to minµ∈M (µ · w) ≥ ay .
From this, it is clear that if we can show that ay = minµ∈M µ · y, then we are done. Note that
y − ay =∗ 0̄ and for all b > ay , y − b̄ <∗ 0̄. This implies that for all b > ay , there exists some µ ∈ M
such that µ · (y − b̄) < 0, or equivalently µ · y < b. But this means that minµ∈M µ · y ≤ ay , and since y ≥∗ y we also have minµ∈M µ · y ≥ ay , so that minµ∈M µ · y = ay .
Finally, then, we have shown that for any w, y ∈ RS , w ≥∗ y if and only if minµ∈M µ · w ≥
minµ∈M µ · y. Therefore, for any f, g ∈ H, we have that f % g if and only if minµ∈M µ · (v ◦ f ) ≥
minµ∈M µ · (v ◦ g), which is obviously equivalent to the statement
minµ∈M ∑_{s∈S} µs [ ∑_{x∈X} fs (x)u(x) ] ≥ minµ∈M ∑_{s∈S} µs [ ∑_{x∈X} gs (x)u(x) ],

where u : X → R is a unique (up to positive affine transformations) von Neumann-Morgenstern index given by u(x) = v(kx ), where kx ∈ ∆(X) is the lottery that yields outcome x ∈ X with certainty.
3 Choquet Expected Utility
Another important extension of the Anscombe-Aumann subjective expected utility theory that does
not conflict with the behavior described by the Ellsberg paradox is the capacity-based approach
developed by Schmeidler (1989). Before proving the main result, we need several definitions and
examples. As always, we assume a finite set of outcomes X and a finite set of states S = {1, 2, . . . , S}.
Definition 3.1. Acts f, g ∈ H are comonotonic for % if there are no states s, s′ ∈ S such that fs ≻ fs′ and gs′ ≻ gs . A preference relation % on H is comonotonic independent if for all f, g, h ∈ H
that are pairwise comonotonic and for all α ∈ (0, 1), f % g if and only if αf +(1−α)h % αg+(1−α)h.
Note that fs ≻ fs′ and gs′ ≻ gs refer to preferences over lotteries as defined and explained
in section one above. Clearly, all constant acts are comonotonic with any act h ∈ H. Also, the
preferences described in the Ellsberg paradox are consistent with comonotonic independence.
Definition 3.2. A capacity on S is a function ν : 2^S → [0, 1] such that ν(∅) = 0, ν(S) = 1, and for
any A, B ⊂ S such that A ⊂ B, ν(A) ≤ ν(B). If in addition, ν(A ∪ B) + ν(A ∩ B) = ν(A) + ν(B)
for all A, B ⊂ S, then ν is a probability measure.
For example, if S = {1, 2} then a capacity could be given by ν(∅) = 0, ν(S) = 1, ν(1) = 1/3, and
ν(2) = 1/3, or instead by ν(1) = 2/3 and ν(2) = 4/5. Alternatively, if S = {1, 2, 3} then ν(∅) = 0,
ν(S) = 1, ν(1) = ν(2) = ν(3) = 1/4, and ν(1, 2) = ν(2, 3) = ν(1, 3) = 2/3 defines a capacity.
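Since S is finite, a capacity is just an assignment of numbers to subsets, so it can be checked mechanically. The sketch below encodes the three-state example above and verifies the defining properties; the code and variable names are illustrative, not part of the notes.

    from itertools import combinations

    # The three-state example: S = {1,2,3}, nu of each singleton = 1/4, nu of each pair = 2/3.
    S = frozenset({1, 2, 3})
    nu = {frozenset(): 0.0, S: 1.0}
    for a in S:
        nu[frozenset({a})] = 1/4
    for pair in combinations(S, 2):
        nu[frozenset(pair)] = 2/3

    def is_capacity(nu, S):
        # nu(empty) = 0, nu(S) = 1, and monotonicity with respect to set inclusion
        if nu[frozenset()] != 0.0 or nu[S] != 1.0:
            return False
        return all(nu[A] <= nu[B] for A in nu for B in nu if A <= B)

    print(is_capacity(nu, S))   # True, even though nu is not additive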
We move now to the topic of Choquet expectations. Let w ∈ RS with w1 < w2 < · · · < wS . If
µ ∈ ∆(S), then
Eµ (w) = ∑_s µs ws = w1 + (w2 − w1 ) ∑_{s=2}^{S} µs + (w3 − w2 ) ∑_{s=3}^{S} µs + · · · + (wS − wS−1 )µS
       = w1 + ∑_{s=2}^{S} (ws − ws−1 ) ∑_{t=s}^{S} µt = w1 + ∑_{s=2}^{S} (ws − ws−1 )µ(s, . . . , S).
Alternatively, we can write
Eµ (w) = ∑_s µs ws = ∑_{s=1}^{S−1} [µ(s, . . . , S) − µ(s + 1, . . . , S)]ws + µS wS .
The definition of the Choquet expectation of w is related to these two expressions. In particular,
for a given capacity ν on S, we have
CEν (w) = w1 + ∑_{s=2}^{S} (ws − ws−1 )ν(s, . . . , S) = ∑_{s=1}^{S−1} ws [ν(s, . . . , S) − ν(s + 1, . . . , S)] + wS ν(S).
In the general case where it is not necessarily the case that w1 < w2 < · · · < wS , things are
a little more complicated. Let P(S) be the set of all permutations of S (so bijections from S
to S) and given some ρ ∈ P(S) define Aρt = {ρ(t), ρ(t + 1), . . . , ρ(S)} for each t ∈ S. Also, let
R(w) = {ρ ∈ P(S) : wρ(1) ≤ wρ(2) ≤ · · · ≤ wρ(S) }. Then, for any ρ ∈ R(w) we have

CEν (w) = ∑_{s=1}^{S−1} wρ(s) [ν(Aρs ) − ν(Aρs+1 )] + wρ(S) ν(ρ(S)).
An example is in order. Let S = {1, 2}, let ν : 2^S → [0, 1] be a capacity on S, and let X = R+ ⊂ R be the set of prizes. Also, let u : X → R be a von Neumann-Morgenstern index that is strictly increasing and concave. If x = (x1 , x2 ) and x1 < x2 , then u(x1 ) < u(x2 ) and
CEν (u ◦ x) = u(x1 ) + [u(x2 ) − u(x1 )]ν2 = (1 − ν2 )u(x1 ) + ν2 u(x2 ),
(1)
where ν1 = ν(1) and ν2 = ν(2). Alternatively, if x1 ≥ x2 , then u(x1 ) ≥ u(x2 ) and
CEν (u ◦ x) = u(x2 ) + [u(x1 ) − u(x2 )]ν1 = ν1 u(x1 ) + (1 − ν1 )u(x2 ).
(2)
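The permutation formula is straightforward to implement: sort the coordinates of w and weight each increment by the capacity of the corresponding upper set. The Python sketch below (with a made-up two-state capacity) reproduces equation (1).

    def choquet_expectation(w, nu):
        # Choquet expectation of w = {state: value} with respect to the capacity nu,
        # computed as the lowest value times nu(S) plus increments times nu of upper sets.
        order = sorted(w, key=lambda s: w[s])        # a permutation rho with w nondecreasing
        ce, prev = 0.0, 0.0
        for t, s in enumerate(order):
            upper = frozenset(order[t:])             # A^rho_t = {rho(t), ..., rho(S)}
            ce += (w[s] - prev) * nu[upper]
            prev = w[s]
        return ce

    # Hypothetical two-state capacity with nu({1}) = 0.3 and nu({2}) = 0.25.
    nu = {frozenset(): 0.0, frozenset({1}): 0.3, frozenset({2}): 0.25, frozenset({1, 2}): 1.0}
    uw = {1: 0.4, 2: 0.9}                            # u(x1) < u(x2), as in equation (1)
    print(choquet_expectation(uw, nu))               # 0.525
    print((1 - 0.25) * uw[1] + 0.25 * uw[2])         # (1 - nu2)*u(x1) + nu2*u(x2) = 0.525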
Definition 3.3. A capacity ν on S is convex if ν(A ∪ B) + ν(A ∩ B) ≥ ν(A) + ν(B) for all A, B ⊂ S.
In particular, if ν is convex, then for all A, B such that A ∩ B = ∅, ν(A ∪ B) ≥ ν(A) + ν(B).
Definition 3.4. The core of a capacity ν on S is given by
Core(ν) = {µ ∈ ∆(S) : µ(A) ≥ ν(A)
∀A ⊂ S}.
A couple of things are worth noting here. First, if ν is a convex capacity, then there exists some
µ ∈ ∆(S) such that µ(A) ≥ ν(A) for all A ⊂ S. Second, the core of a capacity ν is a compact,
convex subset of ∆(S) (it can be empty in some cases). For example, letting S = {1, 2}, ν(∅) = 0,
ν(S) = 1, ν(1) = 1/3, and ν(2) = 1/4, we find that
Core(ν) = {µ ∈ ∆(S) : µ(1) ≥ 1/3, µ(2) ≥ 1/4} = {µ ∈ ∆(S) : µ(1) ∈ [1/3, 3/4]}.
To see that the core can be empty, consider S = {1, 2}, ν̃(∅) = 0, ν̃(S) = 1, ν̃(1) = 1/2, and
ν̃(2) = 3/4. In this case, we have
Core(ν̃) = {µ ∈ ∆(S) : µ(1) ≥ 1/2, µ(2) ≥ 3/4} = ∅.
Lecture 5, February 14: Recalling the formula for the Choquet expectation (1), we find that if ν is the capacity from the first example above (so ν(1) = 1/3 and ν(2) = 1/4) and w ∈ R^2 is such that w1 ≤ w2 (this is without loss of generality), then

CEν (w) = w1 + (w2 − w1 )ν2 = w1 + (w2 − w1 )(1/4) = (3/4)w1 + (1/4)w2 .

Of course, this expression is equivalent to Eµ (w), where µ ∈ Core(ν) is such that µ(1) = 3/4 and µ(2) = 1/4. Furthermore, Eµ (w) = minµ̃∈Core(ν) Eµ̃ (w), an observation that leads us to our next
result.
Theorem 3.5. Let S be finite and ν be a convex capacity on S. Then, Core(ν) is a nonempty,
convex, compact subset of ∆(S) and for each w ∈ R^S, CEν (w) = minµ∈Core(ν) Eµ (w).
Let ν be a convex capacity on S. Without loss of generality, fix w ∈ R^S such that w1 ≤ w2 ≤ · · · ≤ wS . Then, letting w0 = 0, we have CEν (w) = ∑_s (ws − ws−1 )ν(s, . . . , S). Define Es = {s, s + 1, . . . , S} with ES+1 = ∅, and ds = ws − ws−1 with d1 = w1 . Then, w = ∑_s ds 1Es and CEν (w) = ∑_s ds ν(Es ).
Now, define µ ∈ ∆(S) by µ(s) = ν(Es ) − ν(Es+1 ) for all s ∈ S. It is easy to see that µ(∅) = 0,
µ(S) = 1, and µ(s) ≥ 0 for all s ∈ S (because ν is nondecreasing). Furthermore, it can easily be
shown that µ(Es ) = ν(Es ) for any s ∈ S, and we already showed in homework that µ ∈ Core(ν). It
follows that for any µ̃ ∈ Core(ν), we have
Eµ̃ (w) = ∑_{s=1}^{S} ds µ̃(Es ) ≥ ∑_{s=1}^{S} ds ν(Es ) = ∑_{s=1}^{S} ds µ(Es ) = Eµ (w) = CEν (w),
which completes the proof that CEν (w) = minµ∈Core(ν) Eµ (w).
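Theorem 3.5 can also be checked numerically: minimizing Eµ (w) over Core(ν) is a linear program, and for a convex capacity its value should coincide with the Choquet expectation. The sketch below uses scipy's linprog for the minimization; the capacity (an ε-contamination, as in the first example below) and the vector w are hypothetical.

    from itertools import combinations
    from scipy.optimize import linprog

    # A convex capacity on S = {1,2,3}: nu(A) = (1 - eps) * mu0(A) for A != S, nu(S) = 1.
    eps = 0.2
    mu0 = {1: 0.5, 2: 0.3, 3: 0.2}
    S = [1, 2, 3]

    def nu(A):
        A = frozenset(A)
        return 1.0 if A == frozenset(S) else (1 - eps) * sum(mu0[s] for s in A)

    def choquet(w, nu):
        # Choquet expectation via the sorted-increments formula.
        order = sorted(w, key=lambda s: w[s])
        ce, prev = 0.0, 0.0
        for t, s in enumerate(order):
            ce += (w[s] - prev) * nu(order[t:])
            prev = w[s]
        return ce

    w = {1: 2.0, 2: 1.0, 3: 3.0}

    # Minimize mu . w over Core(nu): mu(A) >= nu(A) for all A, mu >= 0, sum(mu) = 1.
    proper = [set(c) for r in range(1, len(S)) for c in combinations(S, r)]
    A_ub = [[-1.0 if s in A else 0.0 for s in S] for A in proper]   # -mu(A) <= -nu(A)
    b_ub = [-nu(A) for A in proper]
    res = linprog(c=[w[s] for s in S], A_ub=A_ub, b_ub=b_ub,
                  A_eq=[[1.0] * len(S)], b_eq=[1.0], bounds=[(0, 1)] * len(S))

    print(choquet(w, nu), res.fun)   # both approximately 1.72 for these numbers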
Before moving on to the main theorem of this section, a few more interesting examples of capacities are worth mentioning. The first is called ε-contamination. Given an ε ∈ [0, 1] and µ ∈ ∆(S), the capacity νε is defined so that for any A ⊂ S,

νε (A) = 1 if A = S, and νε (A) = (1 − ε)µ(A) if A ≠ S.

For all ε ∈ [0, 1], νε is a convex capacity and its core is given by

Core(νε ) = {µ̃ ∈ ∆(S) : µ̃ = (1 − ε)µ + εµ′ for some µ′ ∈ ∆(S)}.
The second example is called belief distortion. Given some µ ∈ ∆(S), define ν by ν(A) = ϕ(µ(A))
for all A ⊂ S, where ϕ : [0, 1] → [0, 1] with ϕ(0) = 0, ϕ(1) = 1, and ϕ nondecreasing.
Theorem 3.6. (Choquet Expected Utility Theorem) A preference relation % on H is Archimedean,
monotonic, nondegenerate, and comonotonic independent if and only if there exists a von Neumann-Morgenstern index u : X → R, unique up to positive affine transformations, and a unique capacity ν on S, such that for all f, g ∈ H, f % g if and only if CEν (ū ◦ f ) ≥ CEν (ū ◦ g), where, for h ∈ H,

ū ◦ h = ( ∑_x h1 (x)u(x), ∑_x h2 (x)u(x), . . . , ∑_x hS (x)u(x) ).

If in addition % is uncertainty averse, then ν is a convex capacity and for all f, g ∈ H, f % g if and only if

minµ∈Core(ν) Eµ (ū ◦ f ) ≥ minµ∈Core(ν) Eµ (ū ◦ g).
As in the proofs of theorems 1.5 and 2.2, define the preference relation %∆(X) on ∆(X) by
restricting % to Hc . By comonotonic independence (and because constant acts are comonotonic),
%∆(X) is Archimedean and independent and hence by the mixture space theorem 1.3 there exists
a unique (up to positive affine transformations) linear function v : ∆(X) → R representing %∆(X) .
Also as before, construct the Archimedean preference relation ≥∗ on v(∆(X))S . Recall that v(∆(X))
is an interval in R, and without loss of generality 0 ∈ Int(v(∆(X))).
For each ρ ∈ P(S), set C(ρ) = {w ∈ RS : wρ(1) ≤ wρ(2) ≤ · · · ≤ wρ(S) }. By comonotonic independence, ≥∗ is Archimedean, independent, monotonic, and nondegenerate on C(ρ). By applying
the argument from the proof of theorem 1.5 to each C(ρ), we can construct a unique collection of
measures {µρ ∈ ∆(S) : ρ ∈ P(S)} such that for all ρ ∈ P(S) and all w, y ∈ C(ρ), w ≥∗ y if and only
if µρ · w ≥ µρ · y. We now show that for all w, y ∈ v(∆(X))S , w ≥∗ y if and only if µρ̂ · w ≥ µρ̃ · y for
any ρ̂, ρ̃ ∈ P(S) such that w ∈ C(ρ̂) and y ∈ C(ρ̃). This follows from the fact that there exist unique
aw , ay ∈ R such that w =∗ aw and y =∗ ay . Therefore, for any ρ̂, ρ̃ ∈ P(S) such that w ∈ C(ρ̂) and
y ∈ C(ρ̃), we have µρ̂ · w = µρ̂ · aw = aw and µρ̃ · y = µρ̃ · ay = ay . It follows now that w ≥∗ y if and
only if aw ≥ ay , which is equivalent to µρ̂ · w ≥ µρ̃ · y.
Note that if E ⊂ S and 1E ∈ C(ρ̂) ∩ C(ρ̃) for some ρ̂, ρ̃ ∈ P(S), then µρ̂ · 1E = µρ̃ · 1E and so
µρ̂ (E) = µρ̃ (E). Define the mapping ν : 2S → [0, 1] by ν(E) = µρ (E) for any ρ such that 1E ∈ C(ρ).
Then, ν is well-defined (by the preceding observation) and is a capacity, since ν(∅) = 0, ν(S) = 1, and if E ⊂ F ⊂ S then 1E , 1F are comonotonic, so that there exists ρ̂ ∈ P(S) such that 1E , 1F ∈ C(ρ̂) and hence ν(E) = µρ̂ (E) ≤ µρ̂ (F ) = ν(F ).
Let E ⊂ S and F = E ∪ {s} for s ∉ E. For any ρ such that 1E , 1F ∈ C(ρ), ν(F ) − ν(E) = µρ (F ) − µρ (E) = µρ (s). Fix w ∈ v(∆(X))^S and ρ such that w ∈ C(ρ). Then, we have

µρ · w = ∑_s µρ (s)ws = ∑_s µρ (ρ(s))wρ(s) = ∑_{s=1}^{S−1} wρ(s) [µρ (Aρs ) − µρ (Aρs+1 )] + wρ(S) µρ (AρS )
       = ∑_{s=1}^{S−1} wρ(s) [ν(Aρs ) − ν(Aρs+1 )] + wρ(S) ν(ρ(S)) = CEν (w),
proving that for all w, y ∈ v(∆(X))S , w ≥∗ y if and only if CEν (w) ≥ CEν (y).
Finally, if we assume that % is also uncertainty averse, then for all f, g ∈ H and all α ∈ (0, 1),
f % g implies that αf + (1 − α)g % g. Thus, for all w, y ∈ v(∆(X))S and all α ∈ (0, 1), w ≥∗ y
implies that αw + (1 − α)y ≥∗ y. Now, fix E, F ⊂ S. We want to show that ν is a convex capacity,
so ν(E) + ν(F ) ≤ ν(E ∪ F ) + ν(E ∩ F ). Without loss of generality, suppose that ν(E) ≥ ν(F ) and
choose λ ≥ 1 such that ν(E) = λν(F ). Then, 1E =∗ λ1F and thus 0.5 · 1E + 0.5 · λ1F ≥∗ λ1F =∗ 1E .
It follows that

ν(E) = λν(F ) ≤ CEν (0.5 · 1E + 0.5 · λ1F ) = 0.5 CEν (1E + λ1F ) = 0.5 CEν (1E∪F + (λ − 1)1F + 1E∩F ).
If we choose ρ ∈ P(S) such that 1E∪F , 1F , 1E∩F ∈ C(ρ), then we have
ν(E) + λν(F ) ≤ CEν (1E∪F + (λ − 1)1F + 1E∩F )
= µρ (E ∪ F ) + (λ − 1)µρ (F ) + µρ (E ∩ F )
= ν(E ∪ F ) + (λ − 1)ν(F ) + ν(E ∩ F ),
which implies that ν(E) + ν(F ) ≤ ν(E ∪ F ) + ν(E ∩ F ). Since we have now shown that ν is a convex
capacity in the case of uncertainty aversion, a simple application of theorem 3.5 completes the proof.
Lecture 6, February 21: Notes for this lecture were handed out separately. They cover the material from Machina and Schmeidler (1995).
4 Preference for Flexibility and Temptation and Self-Control
Lecture 7, February 28: The following discussion is from Kreps (1979). The basic setup consists
of a finite set Z of prizes or outcomes and a set of menus M , which is defined as the set of non-empty
subsets of Z, 2^Z\∅. A decision maker faces two stages in this setup: first he chooses a menu A ∈ M and next he chooses an outcome x ∈ A from this menu. We assume that there is a preference relation
% over Z satisfying the standard properties (complete, transitive, etc.), and then note that % can
be extended to M by defining %̇ on M by
A %̇ A′ if and only if ∀x′ ∈ A′, ∃x ∈ A such that x % x′.   (3)

It follows that %̇ is complete and transitive as well, and that it satisfies

A %̇ A′ if and only if A ∼̇ A ∪ A′.   (4)
Proposition 4.1. The binary relation %̇ on M is complete, transitive, and satisfies (4) if and only
if there exists a complete and transitive preference relation % on Z such that (3) holds.
The proof of this proposition is left as an exercise. To understand what is meant by preference
for flexibility versus temptation and self-control, consider an example. Let Z = {s, h, f } so that
M = {{s}, {h}, {f }, {s, h}, {s, f }, {h, f }, {s, h, f }}. Suppose that {s} ≻̇ {h}. Then, preference for flexibility would imply that {s, h} ≻̇ {s}, {h}, while temptation and self-control would imply that {s} ≻̇ {s, h}. More generally, for any two sets A, A′, A %̇ A′ can be consistent with either A ∪ A′ ≻̇ A %̇ A′ or A ≻̇ A ∪ A′.
We now develop an axiomatic approach to this issue. For the following axioms, we assume that
there is a given preference relation %̇ on the set of menus M .
Axiom 4.2. %̇ is a weak order.
Axiom 4.3. For any A, A′ ∈ M , if A′ ⊂ A then A %̇ A′.
Axiom 4.4. For any A, A′ ∈ M , if A ∼̇ A ∪ A′ then for all A″ ∈ M , A ∪ A″ ∼̇ A ∪ A″ ∪ A′.
Theorem 4.5. Let Z be finite. A binary relation %̇ on M = 2^Z\∅ satisfies axioms 4.2-4.4 if and only if there exists a finite set S and a function U : Z × S → R such that

v(A) = ∑_{s∈S} [ maxx∈A U (x, s) ]
represents %̇ on M .
It is important to note that the set S and function U representing the preference relation are not
necessarily unique. Also, the above expression for v(A) can easily be transformed into an expression of the form v̄(A) = ∑_{s∈S} µs [maxx∈A Ū (x, s)], where µ ∈ ∆(S).
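The representation is easy to experiment with numerically. The sketch below computes v(A) = ∑_s maxx∈A U (x, s) for every menu and confirms that adding options never lowers the value, in line with axiom 4.3. The prizes, subjective future states, and utilities are hypothetical (one possible reading of the Z = {s, h, f } example above).

    from itertools import combinations

    # Hypothetical flexibility representation: prizes, subjective future states, utilities.
    Z = ["salad", "hamburger", "fish"]
    S = ["diet", "hungry"]
    U = {("salad", "diet"): 2.0, ("hamburger", "diet"): 0.0, ("fish", "diet"): 1.0,
         ("salad", "hungry"): 0.0, ("hamburger", "hungry"): 2.0, ("fish", "hungry"): 1.0}

    def v(menu):
        # In each state the agent picks the best available prize, then states are summed.
        return sum(max(U[(x, s)] for x in menu) for s in S)

    menus = [frozenset(c) for r in range(1, len(Z) + 1) for c in combinations(Z, r)]
    for A in sorted(menus, key=v, reverse=True):
        print(sorted(A), v(A))

    # Monotonicity (axiom 4.3): adding options can never hurt.
    assert all(v(A) <= v(B) for A in menus for B in menus if A <= B)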
Lecture 8, March 7: If we let %̇ be a weak order on M satisfying axioms 4.3 and 4.4, then we
can define the dominance relation ≥̇ on M by A ≥̇ A′ if and only if A %̇ A ∪ A′, for any A, A′ ∈ M .
With this new definition, we can restate axiom 4.4: if A ≥̇ A′ and A ⊂ A″, then A″ ≥̇ A′.
Proposition 4.6. Let %̇ be a weak order on M satisfying axioms 4.3 and 4.4, and let ≥̇ be the dominance relation %̇ induces on M . Then,
(a) ≥̇ is reflexive and transitive,
(b) if A′ ⊂ A, then A ≥̇ A′,
(c) if A ≥̇ A′ and A″ ⊂ A′, then A ≥̇ A″,
(d) if A1 ≥̇ A2 and A3 ≥̇ A4 , then A1 ∪ A3 ≥̇ A2 ∪ A4 ,
(e) for all A ∈ M , there exists some A′ ∈ M such that A ⊂ A′, A ≥̇ A′, and A ≥̇ A″ if and only if A″ ⊂ A′.
The proof of this proposition was done for homework. As a consequence of part (e), for each A ∈ M there exists a maximal set that A dominates, and by part (c) this set is given by

f (A) = ⋃ {B ∈ M : A ≥̇ B}.

Also by part (e), it follows that

A″ ⊂ f (A) if and only if A ≥̇ A″.   (5)

Proposition 4.7. If %̇ satisfies axioms 4.2-4.4, then
(a) for all A ∈ M , f (f (A)) = f (A),
(b) A ≥̇ A′ if and only if f (A′) ⊂ f (A).
For part (a), note that A ≥̇ f (A) ≥̇ f (f (A)) and hence A ≥̇ f (f (A)). Thus, f (f (A)) ⊂ f (A), proving that f (A) = f (f (A)). For part (b), note first that A ≥̇ A′ if and only if A ≥̇ f (A′). This follows because A ≥̇ A′ ≥̇ f (A′), and in the other direction because A ≥̇ f (A′) and A′ ⊂ f (A′) together imply that A ≥̇ A′. But by (5) we have that A ≥̇ f (A′) if and only if f (A′) ⊂ f (A), completing the proof.
Theorem 4.8. Let Z be finite. A binary relation %̇ on M satisfies axioms 4.2-4.4 if and only if there exists a finite set S, a state-dependent utility U : Z × S → R, and a strictly increasing u : R^S → R such that if w : M → R^S is defined by

w(A)s = maxx∈A U (x, s),

then u ◦ w represents %̇ on M .
If %̇ satisfies axioms 4.2-4.4, then by theorem 4.5 there exists a function v : M → R representing
it. Now, let S = {A ∈ M : A = f (A)}, and for (x, s) ∈ Z × S let
U (x, s) = 1 if x ∉ s, and U (x, s) = 0 if x ∈ s.

Recall that for all A ∈ M , f (A) ∈ S by proposition 4.7. With the above definitions, it follows that

w(A)s = 1 if A ⊄ s, and w(A)s = 0 if A ⊂ s.
We claim that for any A, A′ ∈ M , w(A) ≥ w(A′) if and only if A ≥̇ A′. First, if A ≥̇ A′ fails, then A′ ⊄ f (A), so that w(A′)f (A) = 1 > 0 = w(A)f (A) and hence w(A) ≱ w(A′). Conversely, if A ≥̇ A′, then any s ∈ S such that w(A)s = 0 has A ⊂ s. But this implies that s ≥̇ A so that f (A) ⊂ f (s) = s and also f (A′) ⊂ f (A) ⊂ s. Of course, A′ ⊂ f (A′) and hence we have that w(A′)s = 0, proving that w(A) ≥ w(A′).
Now, we can define u : R^S → R by setting u(w(A)) = v(A) where A ∈ M . To see that this is well-defined, note that if w(A) = w(A′) then A ≥̇ A′ ≥̇ A, so that A ∼̇ A ∪ A′ ∼̇ A′ and hence v(A) = v(A ∪ A′) = v(A′). Also, u is strictly increasing. If w(A) > w(A′), then A ≥̇ A′ while A′ ≥̇ A fails, so that A ∼̇ A ∪ A′ while A′ ∼̇ A ∪ A′ fails. But then we have A ∪ A′ ≻̇ A′, which implies that v(A) = v(A ∪ A′) > v(A′) and thus u(w(A)) > u(w(A′)).
References
Gilboa, I. and D. Schmeidler (1989). Maxmin expected utility with non-unique prior. Journal of
Mathematical Economics 18, 141–153.
Kreps, D. M. (1979). A representation theorem for ‘preference for flexibility’. Econometrica 47 (3),
565–577.
Machina, M. J. and D. Schmeidler (1995, October). Bayes without Bernoulli: Simple conditions
for probabilistically sophisticated choice. Journal of Economic Theory 67, 106–128.
Schmeidler, D. (1989, March). Subjective probability and expected utility without additivity.
Econometrica 57 (3), 571–587.