A defense of the ex-post evaluation of risk∗
Marc Fleurbaey†
January 2010
Abstract
This note proposes three arguments in favor of the ex-post approach to the evaluation of risky
prospects.
1 Introduction
Welfare economics has not yet reached a consensus about how to assess social situations involving risk. A
cornerstone is Harsanyi’s (1955) aggregation theorem showing that the utilitarian criterion has the unique
virtue of being at the same time based on expected social utility — corresponding to rationality at the
social level — and on ex-ante individual preferences — i.e., respecting the Pareto principle.1 A well-known
drawback of the utilitarian criterion, however, is its indifference to ex-ante and ex-post inequalities, while
many authors have argued that ex-ante inequalities matter (Diamond 1967, Keeney 1980, Epstein and
Segal 1992, among others), and ex-post inequality aversion has also been endorsed (Adler and Sanchirico
2006, Meyer and Mookherjee 1987, Harel et al. 2005, Otsuka and Voorhoeve 2007). For those who seek
an inequality-averse criterion, the choice between ex-ante criteria — which focus on individual expected
utilities — and ex-post criteria — which compute the expected value of social welfare — appears to take the
form of a dilemma.
∗ This note is preparatory material for a paper being written with A. Voorhoeve.
† CERSES, CNRS and University Paris Descartes, CORE, Sciences-Po and IDEP. Email: [email protected].
1 Hammond (1983) and Broome (1991) have offered additional arguments in favor of this criterion and carefully scrutinized the scope of this result. See also Weymark (1991, 2005) for a broad examination of Harsanyi’s arguments in favor of utilitarianism.
The problem can be illustrated as follows.2
Consider two individuals, Ann and Bob, and two
equiprobable states of the world, Heads and Tails (as with coin flipping). There are three lotteries to
compare, L1, L2 and L3, which are described in the following tables by their utility consequences for
Ann and Bob. To make things simple, we assume for the moment that utilities are fully measurable and
interpersonally comparable.
Example 1:

          L1                L2                L3
       Ann   Bob         Ann   Bob         Ann   Bob
  H     4     0           4     0           4     4
  T     4     0           0     4           0     0
Harsanyi’s additive criterion is indifferent between the three lotteries. An ex-post approach is indifferent
between L1 and L2 (provided it is impartial between Ann and Bob), because the ex-post distribution is
(0, 4) for sure for both lotteries, while an ex-ante approach is indifferent between L2 and L3, because the
individuals face the same individual prospects (0 or 4 with equal probability) in the two lotteries.
But, many authors say, L1 is worse than L2, which is worse than L3. Indeed, with L1 Ann is advantaged for sure, whereas L2 randomizes over the two individuals, as with a coin toss. This is Diamond’s (1967) observation. With L3, there is the same ex-ante prospect for each individual as with L2, but with less inequality ex post, as observed in Broome (1991).
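To make these comparisons concrete, the following sketch computes the three evaluations of Example 1 in Python. The maximin aggregator used here is only a convenient stand-in for an inequality-averse evaluation; it is an assumption of the illustration, not part of the argument.

```python
# Illustrative sketch: utilitarian, ex-ante and ex-post evaluations of Example 1.
# Each lottery maps a state (probability 0.5 each) to the utility profile (Ann, Bob).
lotteries = {
    "L1": {"H": (4, 0), "T": (4, 0)},
    "L2": {"H": (4, 0), "T": (0, 4)},
    "L3": {"H": (4, 4), "T": (0, 0)},
}
p = {"H": 0.5, "T": 0.5}

def expected_utilities(lottery):
    """Each individual's expected utility (the ex-ante prospects)."""
    return tuple(sum(p[s] * lottery[s][i] for s in p) for i in range(2))

def utilitarian(lottery):
    """Harsanyi's additive criterion: the sum of expected utilities."""
    return sum(expected_utilities(lottery))

def ex_ante_maximin(lottery):
    """An inequality-averse ex-ante criterion: the worst expected utility."""
    return min(expected_utilities(lottery))

def ex_post_maximin(lottery):
    """An inequality-averse ex-post criterion: the expected value of the worst realized utility."""
    return sum(p[s] * min(lottery[s]) for s in p)

for name, lot in lotteries.items():
    print(name, utilitarian(lot), ex_ante_maximin(lot), ex_post_maximin(lot))
```

The utilitarian criterion assigns the value 4 to all three lotteries; the ex-ante maximin assigns 0, 2, 2 (indifferent between L2 and L3); the ex-post maximin assigns 0, 0, 2 (indifferent between L1 and L2). Any other impartial inequality-averse aggregator in place of the minimum yields the same pattern of indifferences.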
Facing this problem, Ben-Porath et al. (1997) suggest taking a weighted sum of an ex-ante and
an ex-post egalitarian criterion.3 Broome (1991) proposes to retain a formal version of the utilitarian
criterion while introducing a dimension of ex-ante fairness and ex-post inequalities in the measurement of
individual utilities. In a similar vein, Hammond (1983) alludes to the possibility of recording intermediate
consequences jointly with terminal consequences. Other authors, such as Epstein and Segal (1992),
simply ignore ex-post inequalities and propose an inequality-averse ex-ante criterion. The applied welfare
economics of risk and insurance is mostly tied to the ex-ante approach.4 All in all, it is safe to say that the
ex-ante approach has a dominant position in the field. Even if ex-post criteria are recurrently discussed
in the literature, there is no systematic argument challenging the ex-ante viewpoint and defending the
ex-post viewpoint as preferable.5
2 This is inspired by Broome (1991) and Ben-Porath et al. (1997). See also Fishburn (1984) for an early axiomatic analysis of these issues.
3 A similar proposal is made in Gajdos and Maurin (2004).
4 See e.g. Viscusi (1992), Pratt and Zeckhauser (1996). (The latter even take a veil-of-ignorance perspective, which introduces artificial risk in order to achieve an impartial judgment.)
5 For instance, Meyer and Mookherjee (1987) and Otsuka and Voorhoeve (2007) simply assume that ex-post inequalities are undesirable, while Harel et al. (2005) only seek to show that ex-post egalitarianism is well in line with certain legal practices.
This note proposes such an argument. It is based on the observation that risky situations are fundamentally situations of imperfect information. In this light the Pareto principle applied to ex-ante prospects appears much less compelling than the standard Pareto principle. Moreover, the ex-post approach turns out to be justified by basic rationality principles that revolve around the idea that a social criterion should exploit the available information about the final distribution of individual situations. The next section develops these arguments. An appendix explains the link between the basic rationality principles discussed here and the sure-thing principle.
2 Risk as incomplete information
One of the greatest achievements of economic theory is the Arrow-Debreu-McKenzie general equilibrium
model, and especially its extension to the case of contingent goods (see e.g. Debreu 1959a), which allows
one to analyze risky situations with the standard analytical concepts of consumer demand. The formal
analogy between consumer choice over, say, apples and oranges and consumer choice over money if it
rains and money if it shines is a very powerful insight which underlies the economics of risk and insurance
and enables us to understand insurance demand and risk-taking behavior in a very convenient way. This
analogy is generally carried over from positive analysis to normative prescriptions. The economics of
insurance typically analyzes the performance of insurance markets in terms of efficiency and equity by
referring to the satisfaction of the agents’ ex-ante preferences, just as one is used to catering to consumer
preferences over apples and oranges. The analysis of public policies reducing risk is also generally made
from an ex-ante perspective.
This extension of the ex-ante viewpoint from positive economics to normative issues is unwarranted.
The principle of consumer sovereignty means that the consumer’s preferences over his personal consumption are authoritative. The principle is valid in this simple form, however, only when the consumer
is correctly informed. Under imperfect information, the true interest of the consumer lies in what his
preferences would command under perfect information. If a consumer prefers apples to oranges solely
because he wants vitamin C and has the wrong belief that apples contain more of it, he would be better
off with oranges, not with apples. This assertion, of course, does not imply that he should be forced
to have oranges against his will. Being constrained may dampen well-being. But the point is that the
interest of the consumer lies in the preferences that would result from the combination of his tastes with correct information about the options.
Now, risky situations are, by definition, situations of imperfect information. As we shall see, it is
not so obvious what the principle of consumer sovereignty requires in this context. Imagine a corn-flakes
producer who discovers that a small number b of boxes have been contaminated with potentially lethal
chemicals. The boxes are now in the market and, for the sake of the example, let us imagine that the
producer has no way of tracking where these boxes are. Suppose that b is so small that, even knowing
about the risk, consumers would still be willing to buy corn-flakes, provided a moderate discount is made
on the price.
From the ex-ante perspective, the contamination is just like a small reduction in the uniform quality
of corn-flakes, and therefore, since there is nothing problematic in letting consumers accept a small
reduction in quality, it is no more problematic to let them accept the risk of dying from a contaminated
box. By taking the ex-ante perspective, one therefore concludes that the reasons for concern are neither
smaller nor greater in either of the two cases, quality reduction or contamination hazard. Such a conclusion,
however, is questionable. There is a key difference between the case of a consumer who buys a box knowing
that the quality is lower, and the case of a consumer who buys a box without knowing that this particular
box is contaminated. There is a sense in which the true interest of the consumer is not in buying a
contaminated box. The risky situation involves a lack of information that is absent from the reduced
quality case. The ex-ante perspective blurs this difference.
In order to analyze this issue more precisely, consider the following example, which is a variant of
the example of the introduction.
Example 2:

          L2                L4
       Ann   Bob         Ann   Bob
  H     4     0           3     1
  T     0     4           1     3
Assuming that individuals evaluate prospects in terms of expected utilities, they are indifferent between
L2 and L4. An ex-ante criterion should then deem these two lotteries to be equivalent, no matter how
inequality averse this criterion is.
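A short check (a sketch under the same conventions as before; the square root is just one illustrative concave transform, not a specific proposal) confirms that the two lotteries generate identical expected utilities, while any inequality-averse ex-post criterion separates them.

```python
from math import sqrt

# Example 2: two equiprobable states; each entry is the utility profile (Ann, Bob).
L2 = {"H": (4, 0), "T": (0, 4)}
L4 = {"H": (3, 1), "T": (1, 3)}
p = {"H": 0.5, "T": 0.5}

def expected_utilities(lottery):
    return tuple(sum(p[s] * lottery[s][i] for s in p) for i in range(2))

def ex_post_welfare(lottery, transform=sqrt):
    """Expected value of an inequality-averse (concave) social welfare of the realized profile."""
    return sum(p[s] * sum(transform(u) for u in lottery[s]) for s in p)

print(expected_utilities(L2), expected_utilities(L4))  # (2.0, 2.0) and (2.0, 2.0): identical ex-ante prospects
print(ex_post_welfare(L2), ex_post_welfare(L4))        # 2.0 versus about 2.73: ex post, L4 is strictly better
```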
We now introduce three closely related arguments, each of which questions the Pareto principle
applied to ex-ante preferences and imposes a basic rationality requirement for the social criterion. As we
shall see, this basic rationality requirement is weaker than Harsanyi’s expected utility assumption for the
social criterion, but is strong enough to rule out inequality averse ex-ante criteria.
Informed consumer sovereignty. The first argument relates the example to the standard theory
of the second-best. Consider a social planner who evaluates a redistribution policy. She knows the
distribution of types in the population but does not observe individual characteristics. Specifically, she
does not know if Ann or Bob is better-off, but she knows that the better-off has utility 4, the worse-off
utility 0, and after redistribution these figures would be brought to 3 and 1, respectively. In other words,
there are two possible states of the world, H in which Ann is better-off and T in which Bob is better-off.
Formulated in this way, her problem can be described by the same matrices L2, L4 as above. The only
difference between the second-best problem and the risk problem is that in the former case the individuals
know their own type (rich or poor), whereas in the risk problem they do not know their type (lucky or
unlucky). How does economic theory, following Mirrlees (1971), proceed in the second-best problem?
The social planner simply has to compare the distributions (4,0) and (3,1). With any non-null degree of
aversion to inequality, the latter is preferred.
If we assumed that the individuals also knew their type in the risk problem, we should therefore
proceed exactly in the same way. Does the fact that the individuals do not know their type in the risk
problem justify adopting another evaluation criterion? The answer depends on whether their informed
or their uninformed preferences should have precedence. We have recalled above that welfare economics
traditionally gives authority to individual informed preferences. Let us call this the principle of informed
consumer sovereignty. This principle implies that it is better, whenever possible, to treat the risk problem
as if the individuals knew their type, which, in this example, requires treating it just like a second-best
problem, justifying a preference for L4 under inequality aversion.6
In summary, this argument rejects the ex-ante Pareto principle because priority should be given to
informed preferences, and it imposes a basic rationality requirement on the social criterion, namely, to
follow the second-best method when the final distribution of individual utilities is known for sure.
6 It is already widely recognized that, when individuals have different beliefs, their unanimity over prospects may be spurious and carry little normative weight (see, e.g., Broome 1999, p. 95, or Mongin 2005). But the point here is that even if all the individuals have the same beliefs about the probabilities of the various possible states of the world, their ex-ante preferences over prospects do not call for the same respect as their ex-post preferences.
Independence of irrelevant risk. The second argument is based on the observation that a social
observer may have, in some sense, more information than individuals even though she has no better
information about the states of the world. A basic rationality requirement for decision under risk is that
the irrelevant part of the risk should be ignored. If someone is indifferent between a blue car and a red
car, the fact that the car seller cannot promise whether the new car will be blue or red does not create
any relevant risk for the buyer. He should then decide as if he knew for sure that the car will be, say,
blue. This can be called the principle of independence of irrelevant risk.
In Example 2, each individual faces relevant risk in L2 as well as in L4 , because their final utility
depends on the state of the world. But the social evaluator does not face relevant risk if she is impartial
between Ann and Bob. In L2 , for instance, she knows for sure that the distribution will be (4,0), and the
risk determines only who is better-off, which is irrelevant. Therefore the social evaluator, even though
she does not know more about the state of the world than the individuals, is in a situation which is
relevantly equivalent to a situation of full information.
By the principle of independence of irrelevant risk, the evaluator can then examine how she would
rank these lotteries if she knew that the true state is, say, H. Her answer in this eventuality is how she
should rank the lotteries when, as is the case, she does not know the state of the world. Any inequality
averse evaluator, knowing the true state of the world, would prefer L4 .
Once again, this argument rejects the ex-ante Pareto principle, because the risk may be irrelevant
to the evaluator even when it is relevant to individuals, and requires using the evaluator’s (hypothetical)
informed preferences when the risk to her is irrelevant. This is equivalent to the injunction obtained from
the first argument: when the final distribution of utilities is known for sure, one should simply evaluate
this distribution as if there was no risk.
Full information principle. The third argument generalizes this rationality injunction slightly. A
very basic principle in decision theory is that, if the set of feasible actions is not affected by the acquisition
of information, it is always good to acquire more information.7 A derivative (weaker) principle is that,
when one can guess how one would decide with more information, one should follow suit. Let us call
it the full information principle.8 This principle is concerned with cases in which information has no
value, and requires acting as if one had this worthless information. This sounds similar to the principle
of irrelevant risk, but it is slightly more powerful because it also bites when there is relevant risk.
7 See Savage (1972), Marschak (1951). Drèze (1987) provides many examples in which the acquisition of information reduces the set of feasible actions, inducing a negative value of information. Most examples involve market or strategic interactions, but there is also the purely individualistic example of the “nervous bowman” whose shooting is less precise when the stakes are high.
8 This can be viewed as a particular component of the principle of “no foreseeable regrets” put forth by Arntzenius (2007).
If you know you go against what you would decide with full information, you are sure you will regret your decision.
This principle is still very weak because there are few situations in which it applies. In fact, there
are exactly two cases in which it applies: 1) when a lottery has a better consequence than another in
every state of the world, implying that one would prefer it with full information; 2) when a lottery has
at least as good a consequence as another in every state of the world, implying that one would weakly
prefer it with full information. In all other cases, one lottery is better in one state of the world and worse
in another, rendering it impossible to guess how one would rank them with better information.
The full information principle therefore implies two basic dominance requirements: 1) Strict dominance: a lottery is better than another if it is better in every state of the world; 2) Weak dominance: a
lottery is at least as good as another if it is at least as good in every state of the world.
Observe that the full information principle does not directly imply Strong dominance, namely, the
requirement that a lottery is better if it is at least as good in every state of the world and better in some
state of the world – because it might be that the true state of the world is one in which the lottery is
just as good as the other. However, there is an additional argument that justifies Strong dominance. If a
lottery is at least as good in every state of the world and better in some state of the world, by adopting
indifference between the two lotteries one may make a wrong choice, selecting the dominated lottery in
the state of the world in which it is strictly worse. In order to avoid this problem, one must strictly prefer
the dominating lottery, even if it may be that in the true state of the world it is just as good as the other.
Strict dominance, combined with inequality aversion, justifies preferring L4 to L2 in Example 2,
thereby rejecting the ex-ante Pareto principle.
Weak dominance implies the rationality requirements obtained with the first two arguments. When
a lottery yields an equivalent consequence in every state of the world, weak dominance requires evaluating
this distribution as if there was no risk. For an impartial criterion, this is what happens with L2 and L4 .
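The following sketch (again with a concave transform standing in for inequality aversion; the choice of the square root is an assumption of the illustration) applies the dominance requirements directly to Example 2: in every state of the world the distribution produced by L4 is socially better than the one produced by L2, so Strict dominance settles the comparison without any information about which state obtains.

```python
from math import sqrt

def social_value(profile, transform=sqrt):
    """Social evaluation of a realized utility profile; the concave transform encodes inequality aversion."""
    return sum(transform(u) for u in profile)

def strictly_dominates(a, b, states=("H", "T")):
    """Strict dominance: socially better in every state of the world."""
    return all(social_value(a[s]) > social_value(b[s]) for s in states)

def weakly_dominates(a, b, states=("H", "T")):
    """Weak dominance: socially at least as good in every state of the world."""
    return all(social_value(a[s]) >= social_value(b[s]) for s in states)

L2 = {"H": (4, 0), "T": (0, 4)}
L4 = {"H": (3, 1), "T": (1, 3)}

print(strictly_dominates(L4, L2))  # True: in both states, (3, 1) is socially better than (4, 0)
print(weakly_dominates(L2, L4))    # False: L2 is never at least as good as L4
```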
References
Adler M.D., C.W. Sanchirico 2006, “Inequality and uncertainty: Theory and legal applications”, University of Pennsylvania Law Review 155: 279-377.
Anscombe F.J., R.J. Aumann 1963, “A definition of subjective probability”, Annals of Mathematical
Statistics 34: 199-205.
Arntzenius F. 2007, “No regrets”, forthcoming in Erkenntnis.
Ben-Porath E., I. Gilboa, D. Schmeidler 1997, “On the measurement of inequality under uncertainty”,
Journal of Economic Theory 75: 194-204.
Broome J. 1984, “Uncertainty and fairness”, Economic Journal 94: 624-632.
Broome J. 1991, Weighing Goods. Equality, Uncertainty and Time, Oxford: Blackwell.
Broome J. 1999, Ethics Out of Economics, Cambridge: Cambridge University Press.
Debreu G. 1959a, Theory of Value, New York: Wiley.
Deschamps R., L. Gevers 1979, “Separability, risk-bearing and social welfare judgments”, in J.J. Laffont
ed., Aggregation and Revelation of Preferences, Amsterdam: North-Holland.
Diamond P.A. 1967, “Cardinal welfare, individualistic ethics, and interpersonal comparisons of utility:
Comment”, Journal of Political Economy 75: 765-766.
Drèze J.H. 1987, Essays on Economic Decisions under Uncertainty, Cambridge: Cambridge University
Press.
Epstein L.G., U. Segal 1992, “Quadratic social welfare functions”, Journal of Political Economy 100:
691-712.
Fishburn P.C. 1978, “On Handa’s "New theory of cardinal utility" and the maximization of expected
return”, Journal of Political Economy, 86: 321-324.
Gajdos T., E. Maurin 2004, “Unequal uncertainties and uncertain inequalities: An axiomatic approach”,
Journal of Economic Theory 116: 93-118.
Grant S. 1995, “Subjective probability without monotonicity: Or how Machina’s Mom may also be
probabilistically sophisticated”, Econometrica 63: 159-189.
Hammond P.J. 1981, “Ex-ante and ex-post welfare optimality under uncertainty”, Economica 48: 235-250.
Hammond P.J. 1983, “Ex post optimality as a dynamically consistent objective for collective choice under
uncertainty”, in P.K. Pattanaik, M. Salles eds., Social Choice and Welfare, Amsterdam: North-Holland.
Hammond P.J. 1988, “Consequentialist foundations for expected utility”, Theory and Decision 25: 25-78.
Hammond P.J. 1989, “Consistent plans, consequentialism, and expected utility”, Econometrica 57: 1445-1449.
Handa J. 1977, “Risk, probabilities, and a new theory of cardinal utility”, Journal of Political Economy 85:
97-122.
Harel A., Z. Safra, U. Segal 2005, “Ex-post egalitarianism and legal justice”, Journal of Law, Economics
and Organization 21: 57-75.
Harsanyi J.C. 1955, “Cardinal welfare, individualistic ethics, and interpersonal comparisons of utility”,
Journal of Political Economy 63: 309-321.
Kahneman D., A. Tversky 1979, “Prospect theory: An analysis of decision under risk”, Econometrica
47: 263-292.
Kolm S.C. 1998, “Chance and justice: Social policies and the Harsanyi-Vickrey-Rawls problem”, European
Economic Review 42: 1393-1416.
Machina M.J. 1982, “Expected utility analysis without the independence axiom”, Econometrica 50: 277-323.
Machina M.J. 1989, “Dynamic consistency and non-expected utility models of choice under uncertainty”,
Journal of Economic Literature 27: 1622-1668.
Meyer M.A., D. Mookherjee 1987, “Incentives, compensation, and social welfare”, Review of Economic
Studies 54: 209-226.
Mongin P. 1995, “Consistent Bayesian aggregation”, Journal of Economic Theory 66: 313-351.
Mongin P. 2005, “Spurious unanimity and the Pareto principle”, LSE-CPNSS Working Paper, vol. 1,
no. 5.
Myerson R.B. 1981, “Utilitarianism, egalitarianism, and the timing effect in social choice problems”,
Econometrica 49: 883-897.
Otsuka M., A. Voorhoeve 2007, “Why it matters that some are worse off than others: An argument
against Parfit’s Priority View”, Philosophy & Public Affairs.
Pratt J.W., R.J. Zeckhauser 1996, “Willingness to pay and the distribution of risk and wealth”, Journal
of Political Economy 104: 747-763.
Quiggin J. 1982, “A theory of anticipated utility”, Journal of Economic Behavior and Organization 3:
323-343.
Quiggin J. 1989, “Stochastic dominance in regret theory”, Review of Economic Studies, 57: 503-511.
Rabinowicz W. 2006, “Prioritarianism and uncertainty: On the interpersonal addition theorem and the
Priority View”, mimeo.
Savage L.J. 1972, The Foundations of Statistics, 2nd revised ed., New York: Dover Publications.
Tversky A., D. Kahneman 1992, “Advances in prospect theory: cumulative representation of uncertainty”,
Journal of Risk and Uncertainty 5: 297-323.
Viscusi W.K. 1992, Fatal Tradeoffs: Private and Public Responsibilities for Risk, Oxford: Oxford University Press.
Wakker P.P. 1993, “Savage’s axioms usually imply violation of strict stochastic dominance”, Review of
Economic Studies, 60: 487-493.
Weymark J.A. 1991, “A reconsideration of the Harsanyi-Sen debate on utilitarianism”, in J. Elster, J.E.
Roemer eds., Interpersonal Comparisons of Well-Being, Cambridge: Cambridge University Press.
Weymark J.A. 2005, “Measurement theory and the foundations of utilitarianism”, Social Choice and
Welfare 27: 527-555.
Appendix: Dominance and the sure-thing principle
In decision theory, Weak Dominance is often called “monotonicity” (Anscombe and Aumann 1963) or
“statewise dominance” (Quiggin 1989). It is worth clarifying the relation between statewise dominance
and the sure-thing principle because the latter is often confused with a separability condition. In Savage’s
theory (Savage 1972), an act f is a function that associates every state of the world s ∈ S with a
consequence f(s). Savage defines the sure-thing principle as follows. Take any arbitrary event E (i.e., a
subset of the set S of states of the world). If the decision-maker weakly prefers act f to act g conditionally
on event E being realized, and also conditionally on event E not being realized, then he must weakly
prefer f to g. If a businessman considers it worthwhile to buy a piece of property on the assumption that the Democrats win the next election, and also considers it worthwhile on the assumption that the Republicans win, then he should buy it. Savage writes that he knows of “no other extralogical principle governing
decisions that finds such ready acceptance” (p. 21).
Statewise dominance is weaker than the sure-thing principle because it applies only in the case when,
for all s ∈ S, the decision-maker weakly prefers f to g conditionally on the singleton event E = {s} being
realized. Here is an example of a rule that is not absurd and satisfies statewise dominance but not the
sure-thing principle. When the probability of death — or, more rigorously, of a lower level of utility
than a certain critical level — exceeds a threshold, the decision-maker seeks to minimize this probability.
Otherwise (and in case of ties for the over-threshold probability of death), he maximizes expected utility.
This rule satisfies statewise dominance because when an act weakly dominates another for all states of the
world, it has either a lower probability of death or an equal probability of death combined with a weakly
greater expected utility. The rule violates the sure-thing principle because knowing that an event E will
occur may raise the (conditional) probability of death above the threshold and reverse the preference
between two acts that coincide on S \ E. That is, this rule may prefer the higher expected-utility act f to
the lower probability-of-death act g even though: i) it prefers g conditionally on E being realized, and ii) it is indifferent between f and g conditionally on E not being realized.
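For concreteness, here is a minimal sketch of such a rule with made-up numbers (three states, a "death" utility of 0 and a probability threshold of 0.3 are assumptions of the illustration, not taken from the text). The rule satisfies statewise dominance, yet conditioning on an event E pushes the probability of death above the threshold and reverses the ranking of two acts that coincide outside E.

```python
# Acts map states to utility levels; utility 0 stands for "death" (the critical level).
CRITICAL, THRESHOLD = 0, 0.3

def death_prob(act, probs):
    return sum(q for s, q in probs.items() if act[s] <= CRITICAL)

def expected_utility(act, probs):
    return sum(q * act[s] for s, q in probs.items())

def prefers(a, b, probs):
    """True if act a is weakly preferred to act b under the threshold rule."""
    da, db = death_prob(a, probs), death_prob(b, probs)
    if max(da, db) > THRESHOLD and da != db:
        return da < db                      # above the threshold, minimize the probability of death
    return expected_utility(a, probs) >= expected_utility(b, probs)

probs = {"s1": 0.5, "s2": 0.25, "s3": 0.25}
f = {"s1": 10, "s2": 8, "s3": 0}
g = {"s1": 10, "s2": 2, "s3": 2}            # f and g coincide outside E = {s2, s3}
cond_E = {"s2": 0.5, "s3": 0.5}             # probabilities conditional on E

print(prefers(f, g, probs), prefers(g, f, probs))    # True, False: unconditionally f is strictly preferred (EU 7 vs 6)
print(prefers(g, f, cond_E), prefers(f, g, cond_E))  # True, False: conditional on E, f's death probability (0.5) exceeds 0.3
# Conditional on not-E the two acts coincide, so the sure-thing principle would require a weak
# preference for g overall; the rule instead strictly prefers f.
```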
Another difference between statewise dominance and Savage’s theory is that, instead of introducing
conditional preferences directly, Savage chooses to formalize the sure-thing principle with his P2 axiom
which incorporates an additional separability requirement in order to simplify the expression of conditional
preferences.
P2: If f is weakly preferred to g, f(s) = f′(s) for all s ∈ E, g(s) = g′(s) for all s ∈ E, f(s) = g(s) for all s ∉ E, f′(s) = g′(s) for all s ∉ E, then f′ is weakly preferred to g′.
This is a separability axiom implying that preference conditional on E being realized does not
depend on what happens if E is not realized. It makes it possible to derive conditional preference on E
unambiguously from preference over acts that coincide on S \ E.
P2 implies the sure-thing principle by an iteration argument.9 The sure-thing principle, however,
does not require separability in itself. The maximin criterion is an example of a (not absurd) rule that
satisfies the sure-thing principle but not P2.10
9 If f is weakly preferred to g conditionally on E and conditionally on not E, under P2 this means that f = (f(s) for s ∈ E, f(s) for s ∉ E) is weakly preferred to (g(s) for s ∈ E, f(s) for s ∉ E), and (g(s) for s ∈ E, f(s) for s ∉ E) is weakly preferred to (g(s) for s ∈ E, g(s) for s ∉ E) = g, implying, by transitivity, that f is weakly preferred to g.
10 The next axiom in Savage’s list relies on the notion of conditional preferences and requires consistency between global and conditional preferences over the subset of constant acts:
P3: If f(s) = a for all s ∈ S, g(s) = b for all s ∈ S, and if preference conditional on E is not universal indifference, then f is weakly preferred to g if and only if f is weakly preferred to g conditionally on E being realized.
An alternative formulation of P3, which does not incorporate P2, can be found in the literature. For instance, in Grant (1995, p. 164) one finds:
P3*: If f(s) = a for all s ∈ E, g(s) = b for all s ∈ E, f(s) = g(s) for all s ∉ E, f′(s) = a for all s ∈ S, g′(s) = b for all s ∈ S, and if preference on acts that coincide on S \ E is not universal indifference, then f is weakly preferred to g if and only if f′ is weakly preferred to g′.
This axiom, in its “only if” part, contains a weaker separability requirement than P2, as analyzed by Grant in terms of independence over lotteries. If one restricts P3* to its “if” part, it implies statewise dominance but not the sure-thing principle.
Statewise dominance is also weaker than first-order stochastic dominance, since the latter additionally incorporates the possibility of comparing acts across different states of the world (e.g., assuming that S = {s, s′} and s and s′ are equiprobable, if f(s) is better than g(s′) while f(s′) is better than g(s), then first-order stochastic dominance applies while statewise dominance may be silent).11
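A two-state numerical sketch (payoffs chosen purely for illustration) makes the gap concrete: with f = (3, 1) and g = (0, 2) across equiprobable states s and s′, f first-order stochastically dominates g, yet f is worse than g in state s′, so statewise dominance is silent.

```python
# Two equiprobable states; an act is written as (payoff in s, payoff in s').
f = (3, 1)
g = (0, 2)

def statewise_weakly_dominates(a, b):
    """At least as good in every state of the world."""
    return all(x >= y for x, y in zip(a, b))

def first_order_stochastic_dominance(a, b):
    """With equal probabilities: the k-th worst outcome of a is at least the k-th worst outcome of b."""
    return all(x >= y for x, y in zip(sorted(a), sorted(b)))

print(statewise_weakly_dominates(f, g))        # False: f is worse than g in state s'
print(first_order_stochastic_dominance(f, g))  # True: sorted outcomes (1, 3) dominate (0, 2)
```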
The non-expected utility theories failing to satisfy statewise dominance or first-order stochastic
dominance have been criticized (see, e.g., Handa 1977 and Fishburn 1978, or Kahneman and Tversky
1979, p. 284), and the more recent theories (e.g., Quiggin 1982, Machina 1982, Tversky and Kahneman
1992) do respect such dominance principles.12 Interestingly, Grant (1995) notes that Diamond’s critique involves a violation of statewise dominance for social evaluation and considers that it can be used to question
statewise dominance in individual decision theory, invoking Machina’s (1989) example of a mother who
must give an indivisible “treat” to her daughter Abigail or her son Benjamin and seeks to be fair by the
use of a lottery. This example is indeed a direct adaptation of Diamond’s argument. We have, however,
already explained above why fairness considerations can justify the use of lotteries but not a violation of the full information principle. There is an obvious difference between “giving the treat to Abigail, being
unfair to Benjie” and “giving it to Abigail after a fair lottery” which can be described in terms of ex-post
consequences and does not require such a strong violation of rationality.
11 For instance, as explained in Quiggin (1989), regret theory violates first-order stochastic dominance but satisfies statewise dominance.
12 Note that, unlike the weak version, the strict version of statewise dominance (i.e., an act is strictly better if it strictly
dominates in all states of nature) is not even satisfied by Savage’s expected utility theory in some special contexts (finitely
additive probabilities — see Wakker 1993, p. 491).