
Choice Behavior, Asset Integration and
Natural Reference Points
by
Steffen Andersen, Glenn W. Harrison and E. Elisabet Rutström †
May 2006
Working Paper 06-07, Department of Economics,
College of Business Administration, University of Central Florida, 2006.
Abstract. Reference points play a major role in differentiating theories of choice under
uncertainty. Under expected utility theory the reference point is implicit in the assumptions
made about asset integration, which is the same thing as assuming different arguments of the
utility function. Under prospect theory the reference point differentiates gains and losses,
and the manner in which prospects are evaluated. We consider the implications for both
models of allowing subjects to have natural reference points, in the sense that they derive
from “homegrown” expectations about earnings in a dynamic task, rather than from a frame
presumed by the observer. We elicit initial beliefs about expected earnings, and implement a
dynamic decision process in which subjects could win or lose money, and even go bankrupt.
In short, we cultivate a fertile and natural breeding ground for the effects of reference points
to emerge. To characterize the latent data-generating process in a flexible statistical manner
we assume that some observations are generated by an expected utility model and that some
observations are generated by a cumulative prospect theory model. This specification leads
to a finite mixture model in which reference points may be different from the frame
presented by the lottery prizes. We report several striking findings. First, expected utility
theory accounts for a large fraction of the observations, despite this setting providing a
seemingly more natural breeding ground for prospect theory. Assuming homogeneous,
representative agents for each model type, expected utility theory accounts for ⅔ of the
observations. With demographic heterogeneity controlled for it still accounts for ½ of the
observations. Second, the expected utility theory subjects appear to have utility functions
defined over their cumulative income over the sequence of tasks, rather than as defined over
the prizes in each individual lottery choice. Finally, we identify demographic characteristics
which differentiate the probability that a decision-maker used expected utility theory. Men
are much more likely to use expected utility theory than women, racial minorities are more
likely than others, those with higher grades are more likely, and those from poorer
households are more likely. Thus our results provide insights into the domain of applicability
of each of the major choice models, rather than claiming one to be the sole, true model.
†
Centre for Economic and Business Research, Copenhagen, Denmark, and Department of
Economics, University of Chicago (Andersen) and Department of Economics, College of Business
Administration, University of Central Florida, USA (Harrison and Rutström). E-mail:
[email protected], [email protected] and [email protected].
Supporting data and statistical code are stored in the ExLab Digital Archive at
http://exlab.bus.ucf.edu.
We consider the behavior of individuals facing a sequence of risky choices that may make
some of them rich and some of them poor. Behavior is characterized in terms of the two major
competing models of choice under uncertainty. The use of sequences of risk choices allows us to
examine many components of the debate over alternative models of choice. What are the arguments
of the utility functions of agents? Is utility defined over the prizes of each risky choice, or over the
outcome of a sequence of risky choices? Or do some subjects behave as if their utility functions are
defined over prizes, and other subjects behave as if their utility functions are defined over income
streams? Do subjects use reference points when evaluating prospects? Are those reference points
determined by the framing of the lottery as a gain or a loss from an endowment, or are there
“homegrown reference points” that subjects bring to a task? Do those reference points evolve over
time? Conditional on the reference point, does the subject exhibit loss aversion? Finally, is behavior
better characterized by one model of choice behavior, or by a mixture of alternative choice models
in which some subjects follow expected utility theory and others follow prospect theory?
We control for many of the factors that might influence choice by using laboratory
experiments. Our experiment allows us to identify what determines the natural evolution of income
in a simple setting with risk. Strategic considerations and fairness play no role. Option values play no role,
since each choice is disconnected from the characteristics of future choice sets. Income evolves
naturally because of the choices that subjects make and, literally, the roll of the die presented by
Nature. In this manner the reference point that subjects use for their decisions evolves naturally,
rather than being imposed by the experimenter’s frame.1
Our design represents a simple modification of the standard environment used to test
expected utility theory. In that standard environment subjects are typically asked to make choices
1
There is considerable experimental evidence that allowing endowments to be “earned” by natural tasks, rather
than received as “house money,” can affect behavior in games of social and strategic interaction. See Rutström and
Williams [2000] for evidence in the context of income distribution tasks, Cherry, Frykblom and Shogren [2002] in the
context of dictator games, and Clark [2002], Cherry, Kroll and Shogren [2005] and Harrison [2006] for mixed evidence in
the context of public good games. Munro and Sugden [2003] and Kőszegi and Rabin [2005][2006] review additional
evidence of “endowment effects” and “status quo biases,” and theoretically consider the effects of allowing for
endogenous reference points.
over a large number of lotteries, and then one is picked to be played out for real. Sometimes the
lotteries are framed as gains, and sometimes as losses from an endowment. But the intent is to
present the individual with a “one shot” decision environment. Useful as that environment has been,
it might blur the true effect of reference points and loss aversion on choice.2
We present our experimental task in section 1, which also serves as a replication “with
incentives” of the tasks used in Tversky and Kahneman [1992]. Section 2 discusses our hypotheses,
section 3 reviews the experimental procedures, section 4 examines the data, section 5 undertakes a
statistical analysis of the data, and section 6 compares results with similar designs in the literature.
We draw three major conclusions. First, expected utility theory accounts for a large fraction
of the observations, despite this setting providing a seemingly more natural breeding ground for
prospect theory. Expected utility theory accounts for between ½ and ⅔ of the observations,
depending on controls for demographic characteristics. Second, subjects behaving in accord with
expected utility theory appear to have utility functions defined over their cumulative income over the
sequence of tasks, rather than defined over the prizes in each individual lottery choice. Finally, we
identify demographic characteristics which differentiate the probability that a decision-maker used
expected utility theory. Men are much more likely to use expected utility theory than women, racial
minorities are more likely than others, those with higher grades are more likely, and those from
poorer households are more likely. Thus our results provide insight into the domain of applicability
of each of the major choice models, rather than claiming one to be the sole, true model.
1. Experimental Task
The objective of our experimental task was to provide subjects with a sequence of choices
over risky outcomes that allowed them to win or lose money. Every subject was given an initial
2
This argument has been made forcefully by Cubitt and Sugden [2001]. It is also present in debates over the
effect of market experience, feedback and repetition on alleged anomalies: for example, see Cox and Grether [1996],
Loomes, Starmer and Sugden [2003] and List [2003]. It is also clearly stated in Thaler and Johnson [1990; p. 643], who
recognize that the issues are “quite general since decisions are rarely made in temporal isolation.”
stake, and kept informed of their current endowment after every choice and realization of chance.
Each subject was given a small overdraft limit, and was allowed to take on gambles that did not
result in them exceeding that limit. Any subject whose earnings fell below zero was unable to make
further choices.
The details of our task are as follows. Our subjects are given a fixed show-up fee, and told
that they cannot lose that amount. They are also told that they might earn some money in the
experiment, depending on their choices and some random outcomes, but that they might also end
up with zero earnings. Each subject is initially endowed with a stake, selected at random from a
uniform distribution between $1 and $6 in discrete $1 intervals. This is provided as compensation
for completing a questionnaire requesting standard demographic information.
The subject then faces 17 lottery choices in a sequence. These lotteries offer the possibility
of some gains, but also offer the possibility of some losses. In fact, most of the lottery choices have
outcomes that are either non-negative or non-positive. Some lotteries mix positive and negative
outcomes. The specific lotteries presented to each subject are selected “more or less” at random
from a fixed set of 60 lotteries. This fixed set includes virtually all of the lotteries used by Tversky
and Kahneman [1992; Table 3], as well as several lotteries designed to have outcomes with different
signs. The selection is non-random only to the extent that we ensure that the first 3 lotteries all offer
non-negative outcomes, to allow subjects to accumulate some earnings before facing the prospect of
losses. After these three, each subject receives a random sequence of draws, without repetition, from
the fixed set.
Each lottery choice is actually a choice between a specific lottery and an ordered list of 10
non-stochastic amounts. The list was ordered in a log-linear manner from the lowest lottery prize to
the highest lottery prize, following Tversky and Kahneman [1992]. For example, consider a lottery
with an outcome of $0 with probability ¼ and an outcome of $40 with probability ¾. The first
choice would be between this lottery and $0 for sure. The second would be between the lottery and
$7 for sure, the third would be between the lottery and $13 for sure, and the tenth would be between
the lottery and $40 for sure. The subject is asked to state a “switch point” in this ordered list.3 In
the display for one of the tasks, for example, the two prizes are $5 and $15, so the ordered list starts
at $5 and increments by $1.75, then by $1.50, then by $1.25, and so on.
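The log-linear spacing described above can be sketched in code. This is a hypothetical reconstruction, not the experiment's actual software: we assume the gap between each certain amount and the highest prize shrinks geometrically, which yields the decreasing increments of the $5/$15 example, and we assume a $5 offset to keep the logarithm defined when the lowest prize is $0.

```python
import math

def price_list(lo, hi, n=10, offset=5.0, step=0.25):
    # Hypothetical reconstruction of the log-linear ordered list: the
    # distance from each certain amount to the highest prize shrinks
    # geometrically, so the increments between successive amounts
    # decrease, as in the $5/$15 example. The $5 offset is an assumption
    # that keeps the logarithm defined when the lowest prize is $0.
    top = math.log(hi + offset - lo)
    bot = math.log(offset)
    vals = []
    for k in range(n):
        gap = math.exp(top + k * (bot - top) / (n - 1))
        v = hi + offset - gap
        vals.append(round(round(v / step) * step, 2))  # nearest 25 cents
    return vals
```

Under these assumptions, `price_list(5, 15)` starts at $5 and its first two increments come out at $1.75 and $1.50, consistent with the example in the text; the published $0–$40 list need not be reproduced exactly.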
After each choice there is a physical draw of a random number between 0 and 100, using a
100-sided die, and this is applied by each subject to their choice. Their choice then determines the
change to their accumulated earnings, and they are informed of that.
Our subjects are told that they will be provided an “overdraft limit” of a certain amount,
selected at random from a uniform distribution between $0 and $9, again in discrete $1 intervals.
3
The complete list of prices in this example would be $0, $7, $13, $18, $22.75, $26.75, $30.50, $34, $37 and
$40, since we rounded to the nearest 25 cents. This “multiple price list” of ordered binary choices is essentially the
experimental method used by Tversky and Kahneman [1992] and Schubert, Brown, Gysler and Brachinger [1999], and
later extended by Holt and Laury [2002] to consider choices between a “risky” and a “safe lottery.” Andersen et al. [2006]
examine various forms of these multiple price lists, and explicitly compare behavior under the version in which the
subject simply states a single switch point and the version in which the subject makes 10 separate binary choices.
This limit allows the subject to choose a lottery that might make their accumulated earnings
negative, but only if it is within their overdraft limit. If a lottery has an outcome that is not within
their overdraft limit, the subject is not allowed to choose it. If the subject’s accumulated earnings fall
to zero or are negative, they are told that they may not make any more bets. One might argue that
such subjects should still be allowed to make choices over lotteries that offer possible gains, but we
view this threshold as akin to bankruptcy, and an enforced restraint on the subject’s ability to make
trades.4
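The choice-set restriction and bankruptcy rule just described can be sketched as two simple checks (a sketch under the stated rules; the function names are ours):

```python
def lottery_allowed(earnings, overdraft_limit, outcomes):
    # A lottery may be chosen only if its worst outcome keeps accumulated
    # earnings within the overdraft limit (i.e., no lower than -limit).
    return all(earnings + x >= -overdraft_limit for x in outcomes)

def is_bankrupt(earnings):
    # Accumulated earnings at or below zero end further betting.
    return earnings <= 0
```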
Before the task began, we asked subjects to tell us what they believed subjects had earned on
average in the previous session. This amount was to include the $5 show-up fee. We
incentivized this question in a simple way: if the answer was correct, to the nearest dollar, the subject
would earn $10, and if it was within $5 the subject would receive $5. This question was asked in
order to provide some evidence on “homegrown reference points” that the subject might have
brought to the lab.
2. Hypotheses
Our design raises many issues, and provides a rich environment to contrast the behavioral
relevance of some major alternative theories of choice under uncertainty.
First, do subjects process each lottery choice in the sequence separately, or do they integrate
the choices? This question focuses on the assumed argument of the utility function or value function
of each agent. Is it just the outcome in a given choice, or is it the outcome over the current choice
and all previous choices, or is it somewhere between? These questions refer to the “asset integration
hypothesis” of standard expected utility theory, identified most clearly by Kahneman and Tversky
4
In the United States there are usually restrictions of some kind on the economic activities of a bankrupt
individual or firm. Although bankruptcy is on a credit report for an individual for between 7 and 10 years, and will
restrict many opportunities, it is generally possible for bankrupt individuals to reestablish limited credit within a year or
two of declaring bankruptcy. It is typically easier to reestablish credit after a discharge order has been entered. However,
to obtain a conventional home loan most debtors must wait 2 years after the discharge order is entered. Debtors are
typically able to obtain car loans immediately after discharge. Debtors are generally able to obtain credit cards, albeit on
terms much less favorable than they might have previously been accustomed to receiving.
[1979; p.264]. Referring to prospects defined over probabilities pi and outcomes xi, they define it as
follows:
Asset Integration: (x1, p1; ...; xn, pn) is acceptable at asset position w iff U(w+x1, p1; ...;
w+xn, pn) > U(w). That is, a prospect is acceptable if the utility resulting from
integrating the prospect with one’s assets exceeds the utility of those assets alone.
Thus, the domain of the utility function is final states (which includes one’s asset
position) rather than gains or losses.
We examine this hypothesis “locally,” in the sense that we examine asset integration between the
prizes of individual prospects and wealth accumulated within an experimental session. Obviously
one could extend the scope of the analysis to consider pre-experimental income or expected post-experimental income.5 Within this examination of “local asset integration,” we seek to identify the
horizon over which subjects appear to optimize, and whether it varies across subjects.
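The Kahneman and Tversky acceptability condition quoted above can be sketched directly. This is a minimal illustration, not the paper's estimation code: the CRRA form and the value r = 0.5 are assumptions made only for the example.

```python
def crra(y, r=0.5):
    # Illustrative CRRA-style utility (r < 1: risk averse); r = 0.5 is an
    # assumption for this sketch, not an estimate from the paper.
    return y ** r if y >= 0 else -((-y) ** r)

def acceptable(prospect, w, u=crra):
    # Asset integration: the prospect (x1,p1;...;xn,pn) is acceptable at
    # asset position w iff U(w+x1,p1;...;w+xn,pn) > U(w).
    return sum(p * u(w + x) for x, p in prospect) > u(w)
```

For instance, at asset position w = 5 a risk-averse agent of this kind accepts a 50/50 prospect over $10 or $0 but rejects a 50/50 prospect over −$4 or $2.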
Second, and related to the first question, what is the reference point for choices? Under
some theories there is no “reference point,” but it plays a central role in other theories.6 In our
design there are many possible reference points: zero is always one, but it is possible that it might
change over time with accumulated earnings. Assume that the subject has earned $190 over 19
choices, and is making the last lottery choice. One might argue that $10 is a possible reference point
for this task, or some linear combination of $0 and $10. Or the subject might have some
homegrown reference point, based on prior experiences as an experimental subject or other beliefs
about the task to come. To allow for this possibility, we asked subjects to tell us what they believed
the average earnings had been in the prior session of experiments in our lab.
Third, what type of subjects drop out of the economy as it evolves? In this design we expect
to see subjects go bankrupt earlier if they follow one model of behavior compared to another. For
5
Binswanger [1981] and Quizon, Machina and Binswanger [1984] examine asset integration in the context of
lottery choices undertaken by poor Indian farmers.
6
Campbell, Lo and MacKinlay [1997; p.333] note that in “applying prospect theory to asset pricing, a key
question is how the benchmark outcome defining gains and losses evolves over time. Benartzi and Thaler [1995] assume
that investors have preferences defined over returns, where a zero return marks the boundary between a gain and a loss.
Returns may be measured over different horizons; a K-month return is relevant if investors update their benchmark
outcomes every K months. Benartzi and Thaler consider values of K ranging from one to 18. They show that loss
aversion combined with a short horizon can rationalize investors’ unwillingness to hold stocks even in the face of a large
equity premium.”
example, if they are extremely averse to losses, use a reference point of zero or some positive
amount, and treat each lottery as a separate choice to be evaluated, they will tend to go bankrupt
earlier. The reason is that they will accept larger non-stochastic losses than subjects that are not loss
averse, and smaller non-stochastic gains, if the lottery includes a negative outcome. Thus they will
accumulate earnings at a slower rate than others, such as risk-neutral subjects following expected
utility models of choice, and face bankruptcy earlier. One might expect to see the mix of “active
subjects” vary over time, as certain subject types drop out of the sample due to bankruptcy.
All of these questions and hypotheses make it essential that our statistical analysis respect the
possibility that some subjects might follow different models of choice at any point in time. We
consider two major competing alternatives: standard expected utility theory (EUT) and cumulative
prospect theory (CPT). Furthermore, we allow for variants within the EUT or CPT specifications.
For example, some EUT subjects might be risk averse and others might be risk loving. More
fundamentally, some EUT subjects might behave as if their utility functions are defined over the
outcomes of a given lottery, and some EUT subjects might have their utility functions defined over
the outcomes of several lotteries. Similarly with respect to the CPT subjects. In that case one must
also allow for different models of the determination of reference points. We allow the data to
identify the fraction of the sample covered by each type of subject, using “finite mixture models.”7
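The logic of a finite mixture model can be sketched as follows: each observation's density is the probability-weighted sum of the two conditional densities (not of their logs). This is a minimal sketch assuming a single mixing probability π for the EUT process; the paper's actual estimation is by maximum likelihood over all parameters jointly.

```python
import math

def mixture_loglik(pi_eut, lik_eut, lik_cpt):
    # Grand log-likelihood of a two-component finite mixture: observation i
    # is generated by EUT with probability pi_eut and by CPT otherwise, so
    # its likelihood is the weighted sum of the conditional likelihoods.
    return sum(math.log(pi_eut * le + (1 - pi_eut) * lc)
               for le, lc in zip(lik_eut, lik_cpt))
```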
Apart from characterizing the cross-sectional fraction of subjects that follow EUT or CPT at
any point in time, we also examine the temporal evolution of that fraction. Previous studies of the
mixture of subjects have been in “static experiments” that confronted a given subject pool with a
fixed number of lotteries, with no sequential resolution of uncertainty and no possibility of
bankruptcy.8 They generally find that the subjects are equally divided between EUT and CPT
representations, in the sense that one-half of the subjects are viewed as being EUT decision-makers
7
Finite mixture models have a long pedigree in statistics, and have been extensively used in economics in recent
years. Harrison and Rutström [2005] appears to be the first application to experimental data and individual choice
behavior under uncertainty.
8
Harrison and Rutström [2005] and Harrison, Humphrey and Verschoor [2005]. This issue has also been
studied in the context of auction behavior in common value settings in which subjects might incur losses: see Hansen
and Lott [1991], Kagel and Levin [1991] and Lind and Plott [1991].
and the other half as CPT decision-makers. If this is true then our design allows a natural follow-on
question: is this true when the economy has time to weed out certain decision-makers by
bankruptcy? If our earlier hypothesis about the greater propensity of loss-averse CPT decision-makers
to go bankrupt is correct, then we hypothesize that the economy might evolve into a self-selected
mix dominated by EUT decision-makers. This result would have dramatic implications for
the interpretation of results from experiments, suggesting that experiments that control for the
sample selection effects of bankruptcy might not reflect the choices in the naturally occurring
economy where those effects have full play (Harrison and List [2004]).
3. Procedures
Ninety subjects were recruited from the student population of the University of Central
Florida in 2005. Each subject signed an informed consent form, and was shown the die that would
generate all random numbers throughout the session. Subjects were assigned a sequential ID
number as they arrived, and then held in one large room until the experiment was ready to begin.
They were then instructed to find the seat in a computerized laboratory that had their ID on it; these
had been randomized beforehand, to reduce the chance that friends who arrived together would sit
together. Each computer had a sunken screen and dividers, to make it harder to see what the
other subjects were doing.
The instructions were provided to subjects on paper, and are reproduced in an appendix.
They were also read aloud to subjects, to ensure that impatient subjects did not just glance at
them. Every subject then participated in a short trainer, in which they made choices in three lotteries
for no reward. Ample opportunity arose for subjects to ask individual questions before proceeding
with the main task.
The main task consisted of an incentivized question about beliefs of average earnings in a
past session (with the correct answer to be provided after the session was complete), a demographic
questionnaire, a random reward for answering the questionnaire, and then the lottery choices. After
each choice the program paused to allow for a public roll of the die, which was effected by randomly
walking around the room and having different subjects roll. One die roll determined the row that was to
be played out, and another roll determined the lottery outcome.9 After subjects entered these values
their screens updated with their cumulative earnings, and they went on to the next task.
All subjects were paid in cash at the end of the session.
4. Data
Figure 1 and Table 1 document the path of cumulative earnings of the sample. Clearly we
have considerable variation in earnings, as expected. The sample drops from 90 in the first four
tasks down to 65 by the end of the session. Since the first three lotteries were all in the gain frame,
by construction, no subject could go bankrupt until the fourth choice. It is interesting that most of
the bankruptcies occurred relatively quickly, between choices 5 and 10.
Average earnings for the entire sample are $50, with a median of $40. However, the
survivors end up with much more than this, as the earnings for task 17 in Table 1 illustrate. The
average payoff for the survivors was $89, with a median of $92. Thus we observe a striking
evolution of the distribution of income over the life of the experiment, as expected. This evolution
is also ideal for our experimental objective of identifying the effect of changes in earnings on
reference points: if everyone had the same income path there would be little variation to test.
Figure 2 displays the distribution of expected earnings elicited from subjects prior to the
choice task. A striking feature of this distribution is that it is entirely positive, reflecting the intuitive
belief that subjects come to the lab expecting to earn more than the show-up fee. No subject stated
expected earnings below $7, and some stated expected earnings as high as $80. Both extremes
had been observed for individuals in previous experimental sessions with this population. Average
9
The instructions informed subjects that we would check that they had entered the correct values at the end of
the session, and that if they had not then we would have to calculate their earnings “by hand” and pay them after
everyone else had been paid. We controlled for such errors by means of a simple check-sum that displayed at the end of
the session for each subject. Only one subject made such an error, and it was a minor one.
expected earnings were $31, with a skewness apparent from Figure 2. The four modes of this
distribution are at $15, $25, $30 and $35, which had 11.1%, 7.8%, 10.0% and 6.7% of the sample
observations. Using a bandwidth of $5 for the kernel density displayed in Figure 2, the modes at $15
and $30 stand out. Thus we have evidence that subjects come to the lab with some initial,
homegrown beliefs that they will earn some positive amount of money. The question then becomes
what role these homegrown beliefs play in their behavior, and the extent to which they are modified
as the task progresses.
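The kernel density mentioned for Figure 2 can be sketched with the stated $5 bandwidth. The Gaussian kernel is an assumption for this illustration, since the text does not name the kernel used:

```python
import math

def kde(points, grid, h=5.0):
    # Kernel density estimate with a Gaussian kernel and fixed bandwidth h
    # (here $5, as stated in the text; the kernel choice is an assumption).
    n = len(points)
    norm = n * h * math.sqrt(2 * math.pi)
    return [sum(math.exp(-0.5 * ((g - p) / h) ** 2) for p in points) / norm
            for g in grid]
```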
5. Analysis
We assume that the observed choices were generated by latent data-generating processes,
each process corresponding to a different theoretical model of choice behavior. We examine the two
most popular models of choice under uncertainty: EUT and CPT. We estimate a finite mixture
model in which both processes are allowed to explain the observed choice behavior, and allow the
data to tell us the relative fraction of the sample that each process accounts for. Our initial analysis
assumes that there is a representative EUT decision-maker and a representative CPT decision-maker,
in the sense that we do not allow any heterogeneity within each type of decision-maker. This
focuses attention on the heterogeneity of the processes themselves.
We then consider three extensions. The first extension is to allow for endogenous reference
points for determining if some outcome is a gain or a loss. The second is to allow for heterogeneity
over time in the mix between EUT and CPT. The third is to further allow for heterogeneity within
each of the decision-making processes, so that we allow some EUT (CPT) decision-makers to have
different preferences from other EUT (CPT) decision-makers.
A. Expected Utility Theory
We assume that utility of income is defined by the Constant Relative Risk Aversion (CRRA)
specification
U(s, x) = (ωs + x)^r for (ωs + x) ≥ 0        (1)
and
U(s, x) = -[-(ωs + x)]^r for (ωs + x) < 0        (2)
where s is the cumulative earnings of the subject at each choice, x is the lottery prize, and ω and r
are parameters to be estimated. The CRRA parameter for this specification is r, with r=1 for risk-neutral
subjects, r<1 for risk-averse subjects, and r>1 for risk-loving subjects. When ω=0 this specification
collapses to assuming that utility is defined solely over the prize for a particular choice; when ω=1 it
collapses to assuming that utility is defined solely over the cumulative income for the sequence of
tasks. Following the general arguments of Cox and Sadiraj [2006] we also allow ω to take on any
value between 0 and 1, so that we can estimate what arguments of the utility function best account
for observed behavior without imposing one or the other assumption a priori.10
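Equations (1) and (2) can be sketched directly, writing the asset-integration weight as ω (a sketch of the specification, not the estimation code):

```python
def utility(s, x, omega, r):
    # Equations (1)-(2): the argument of utility is omega*s + x, where s is
    # cumulative session earnings and x is the lottery prize. omega = 0
    # gives utility over the prize alone; omega = 1 gives full integration
    # of cumulative income. Negative arguments are handled symmetrically.
    y = omega * s + x
    return y ** r if y >= 0 else -((-y) ** r)
```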
Probabilities for each outcome k, p(k), are those that are induced by the experimenter, so
expected utility is simply the probability-weighted utility of each outcome in each lottery. Since there
were up to two implicit outcomes in each lottery i,
EUi = Σk [ p(k) × U(k) ] for k = 1, 2.        (3)
A simple stochastic specification was used to specify likelihoods conditional on the model.
The EU for each lottery pair was calculated for a candidate estimate of r and ω, and the index
∇EU = (EUR − EUL)/μ        (4)
calculated, where EUL is the left lottery in the display, EUR is the right lottery, and μ is a Fechner
noise parameter following Hey and Orme [1994].11 One of these lotteries can be degenerate, in the
sense of representing a non-stochastic amount of money. The index ∇EU is then used to define the
10
Sugden [2003] develops a related theory in which preferences over choices depend both on final outcomes
and on reference points.
11
Harless and Camerer [1994], Hey and Orme [1994] and Loomes and Sugden [1995] provided the first wave
of empirical studies including some formal stochastic specification in the version of EUT tested. There are several
species of “errors” in use, reviewed by Hey [1995][2002], Loomes and Sugden [1995], Ballinger and Wilcox [1997], and
Loomes, Moffatt and Sugden [2002]. Some place the error at the final choice between one lottery or the other after the
subject has decided deterministically which one has the higher expected utility; some place the error earlier, on the
comparison of preferences leading to the choice; and some place the error even earlier, on the determination of the
expected utility of each lottery.
cumulative probability of the observed choice12 using the cumulative standard normal distribution
function:
G(∇EU) = Φ(∇EU).        (5)
Thus the likelihood, conditional on the EUT model being true, depends on the estimates of r and ω
given the above specification and the observed choices. The conditional log-likelihood is
ln LEUT(r, ω; y, X) = Σi liEUT = Σi [ (ln G(∇EU) | yi=1) + (ln (1−G(∇EU)) | yi=0) ]        (6)
where yi = 1(0) denotes the choice of the right (left) lottery in task i, and X is a vector of individual
characteristics.
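The Fechner index in (4) and the probit link in (5)–(6) can be sketched for a single choice. This is an illustrative sketch, not the estimation code archived with the paper; the EU values and μ are taken as inputs.

```python
import math

def std_norm_cdf(z):
    # Standard normal cdf Phi(z), computed via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def choice_loglik(eu_left, eu_right, chose_right, mu):
    # Fechner latent index (EU_R - EU_L)/mu, mapped through Phi to the
    # probability of choosing the right lottery; returns the observation's
    # log-likelihood contribution given the observed choice.
    g = std_norm_cdf((eu_right - eu_left) / mu)
    return math.log(g) if chose_right else math.log(1.0 - g)
```

When the two lotteries have equal expected utility the index is zero, so either choice has probability one-half; a larger μ flattens choice probabilities toward one-half, capturing noisier behavior.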
We allow each parameter to be a linear function of the observed individual characteristics of
the subject. This is the X vector referred to above. We consider nine characteristics: binary variables
to identify those over 22 years in age, Females, subjects that self-reported their ethnicity as Black,
those that reported being Asian, those that reported being Hispanic, those that had a Business
major, those that reported a high cumulative GPA (>3¾), those that reported a low cumulative
GPA (<3¼), and those that reported having parents with income that exceeded $80,000 p.a. The
estimate of each parameter in the above likelihood function actually entails estimation of the
coefficients of a linear function of these characteristics. So the estimate of r, r̂, would actually be
r̂ = r̂0 + (r̂Over22 × Over22) + (r̂FEMALE × FEMALE) + (r̂BLACK × BLACK) +
(r̂ASIAN × ASIAN) + (r̂HISPANIC × HISPANIC) + (r̂BUSINESS × BUSINESS) +
(r̂GPAlow × GPAlow) + (r̂GPAhigh × GPAhigh) + (r̂Pincome × Pincome)
where r̂0 is the estimate of the constant. The same linear function is used for ω. If we collapse this
specification by dropping all individual characteristics, we would simply be estimating the constant
terms for each of r and T. Obviously the X vector could include treatment effects as well as
individual effects, or interaction effects.
The estimates allow for the possibility of correlation between responses by the same subject,
so the standard errors on estimates are corrected for the possibility that the responses are clustered
12
In some specifications, such as the Luce error process used by Holt and Laury [2002], the index is already in
the form of a cumulative density function and this step is not needed.
for the same subject. The use of clustering to allow for “panel effects” from unobserved individual
effects is common in the statistical survey literature.13
Table 2 reports maximum likelihood estimates of the models defined by this EUT
specification. We report results initially with no demographic controls, constraining ω=0 in Panel A
or ω=1 in Panel B, and then allowing ω∈(0,1) in Panel C. We then report results in Panel D when
ω∈(0,1) and demographic controls are added for both ω and r.
In Panel A we find that the CRRA estimates suggest risk aversion (r=0.779<1) when utility
is defined over prizes. The 95% confidence interval for CRRA under this specification is between
0.693 and 0.865. When we examine the same data through the lens of an EUT model that assumes
that the argument of utility is cumulative income, we infer that subjects are risk-neutral: the point
estimate is 1.054, and the 95% confidence interval includes r=1. The estimated noise parameter μ
increases as we move from the EUT specification in Panel A to Panel B, and there is a drop in the
log-likelihood value. These indicators suggest that the initial specification in Panel A was the better
one from an EUT perspective, and this is borne out by the estimates in Panel C in which ω is
allowed to take on any value between 0 and 1. In fact, we estimate ω to be 0.03, implying that
virtually no weight should be placed on cumulative income in the utility function.
Panel D extends this general specification in Panel C to allow for demographic variation in ω
and r.14 There does not appear to be much demographic heterogeneity with respect to the risk
aversion parameter r, but there is with respect to the weighting parameter ω. It is worth noting that
the estimate of the noise parameter μ falls from 1.847 (Panel A) to 1.758 (Panel C) and then to 1.593
(Panel D) as one allows more flexible specifications and individual heterogeneity. Figure 3 displays
the distribution of the estimates of ω across all subjects, based on the model estimated in Panel D of
Table 2. Although there are some subjects with slightly higher values, the clear mass of estimates is
below 0.05.

13 Clustering commonly arises in national field surveys from the fact that physically proximate households are
often sampled to save time and money, but it can also arise from more homely sampling procedures. For example,
Williams [2000; p.645] notes that it could arise from dental studies that “collect data on each tooth surface for each of
several teeth from a set of patients” or “repeated measurements or recurrent events observed on the same person.” The
procedures for allowing for clustering allow heteroskedasticity between and within clusters, as well as autocorrelation
within clusters. They are closely related to the “generalized estimating equations” approach to panel estimation in
epidemiology (see Liang and Zeger [1986]), and generalize the “robust standard errors” approach popular in
econometrics (see Rogers [1993]). Wooldridge [2003] reviews some issues in the use of clustering for panel effects, in
particular noting that significant inferential problems may arise with small numbers of panels.

14 Panel D reports the values for a transform of ω, to ensure that ω lies between 0 and 1. Specifically, we
report estimates for f, where ω = 1/(1+exp(f)). Hence f=0 implies ω=½, f<0 implies ω>½, and f>0 implies ω<½.
Since f ∈ (−∞, +∞), ω ∈ (0,1).
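The logistic transform described in footnote 14 keeps the weight ω inside the unit interval while f is estimated on the whole real line. A minimal sketch:

```python
import math

# Footnote 14: estimate f on the real line and recover omega = 1/(1+exp(f)),
# which guarantees 0 < omega < 1 for any finite f.

def omega_from_f(f):
    return 1.0 / (1.0 + math.exp(f))

print(omega_from_f(0.0))         # f = 0 implies omega = 0.5
print(omega_from_f(3.0) < 0.5)   # f > 0 implies omega < 0.5
print(omega_from_f(-3.0) > 0.5)  # f < 0 implies omega > 0.5
```

This is a standard device for imposing a bound constraint inside an unconstrained maximum likelihood search.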
B. Cumulative Prospect Theory
The Prospect Theory (PT) model is an extension of the EUT model to account for
probability weighting and loss aversion relative to a reference point. We assume that utility is defined
by the CRRA specification

U(x) = x^α for x ≥ P    (7)

and

U(x) = −λ(−x)^β for x < P    (8)

where α is the CRRA parameter in the gain domain, β is the CRRA parameter in the loss domain, λ
is the loss aversion parameter, and P is the reference point used to define if lottery prize x is a gain
or a loss. When P=0 this specification collapses to assuming that utility is defined solely over the
prize for a particular choice, and that the subject determines gain and loss frames directly from the
sign of the prize: that is, the reference point is $0. When P>0 we allow the subject to characterize a
prize x as a loss even if x>0. We discuss the specification of P further below.
In PT the decision maker is assumed to employ weighted probabilities rather than the
probabilities induced by the experimenter, although a special case is where the objective probabilities
are not weighted. There are two variants of PT, depending on the manner in which the probability
weighting function is combined with utilities. The original version proposed by Kahneman and
Tversky [1979] posits some weighting function which is separable in outcomes, and has been
usefully termed Separable Prospect Theory (SPT) by Camerer and Ho [1994; p. 185]. The alternative
version, proposed by Tversky and Kahneman [1992], posits a weighting function defined over the
cumulative probability distributions. In either case, the simple weighting function proposed by
Tversky and Kahneman [1992] has been widely used. It assumes weights

w(p) = p^γ / [ p^γ + (1−p)^γ ]^(1/γ)    (9)

for induced probabilities p and parameter γ. When γ=1 this function collapses to the standard EUT
specification that w(p) = p. When γ<1, the usual case, the decision maker exhibits overweighting of
low probabilities and underweighting of higher probabilities, with fixed point w(p)=p at p=⅓.
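The shape of the weighting function (9) can be checked numerically. A minimal sketch, using the γ point estimate reported later for the simplest CPT model:

```python
# Tversky-Kahneman [1992] probability weighting function, equation (9):
# w(p) = p^gamma / (p^gamma + (1-p)^gamma)^(1/gamma)

def w(p, gamma):
    num = p ** gamma
    den = (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)
    return num / den

gamma = 0.76  # point estimate from Table 3, Panel A
print(w(0.05, gamma) > 0.05)  # low probabilities are overweighted
print(w(0.90, gamma) < 0.90)  # high probabilities are underweighted
print(w(0.50, 1.0) == 0.50)   # gamma = 1 collapses to w(p) = p
```

For γ below 1 the function crosses the 45-degree line once, at roughly p = ⅓, which is the fixed point referred to in the text.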
Assuming that SPT is the true model, prospective utility PU is defined in much the same
manner as when EUT is assumed to be the true model. The PT utility function is used instead of the
EUT utility function, and w(p) is used instead of p, but the steps are otherwise identical. Under
CPT, however, there is an additional step required to transform the w(p) values into decision
weights. The weighting function is given by (9), but the prospective utility of lottery i under CPT is
now defined as
PUi = [ w(p2)U(k2) + (1- w(p2))U(k1)]
(10)
where k2 > k1. The difference in prospective utilities under CPT is then defined in the same way as
(4), using the same error process when subjects form their preferences. The index
LPU = (PUR - PUL)/:
(11)
is calculated, where PU replaces EU in the EUT specification. Again, this index of differences in
prospective utility can be transformed into a cumulative distribution using
G(LPU) = M(LPU).
(12)
The likelihood, conditional on the CPT model being true, depends on the estimates of ", $, 8 and (
given the above specification and the observed choices.15 The conditional log-likelihood is
ln LPT(", $, 8, (; y,X) = 3i l iPT = 3i [ (ln G(LPU) * yi=1) + (ln (1-G(LPU)) * yi=0) ].
Again, each parameter can be estimated as a linear function of a vector of characteristics, X. This
includes the individual demographic characteristics used for the EUT model. In this case we also
include the reported expected earnings in the task, elicited at the outset of the experiment.
15 The fact that the PT specification has more parameters than the EUT specification is meaningless. One
should not pick models on some arbitrary parameter-counting basis.
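As a sketch of the CPT evaluation in (7) through (12) for a two-outcome lottery, assuming purely illustrative parameter values and a normal (probit) link for the cumulative distribution Φ:

```python
import math

# Illustrative CPT evaluation of a two-outcome lottery, following (7)-(12).
# Parameter values below are for illustration only, not the paper's estimates.

alpha, beta, lam, gamma, P, mu = 0.8, 0.7, 1.3, 0.76, 0.0, 1.0

def u(x):
    # Equations (7)-(8): CRRA value function around reference point P
    return (x - P) ** alpha if x >= P else -lam * (P - x) ** beta

def w(p):
    # Equation (9): probability weighting
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def pu(p2, k2, k1):
    # Equation (10): prospective utility, with k2 > k1
    return w(p2) * u(k2) + (1 - w(p2)) * u(k1)

def prob_choose_right(right, left):
    # Equations (11)-(12): Fechner index scaled by mu, mapped through Phi
    index = (pu(*right) - pu(*left)) / mu
    return 0.5 * (1.0 + math.erf(index / math.sqrt(2.0)))

# Right lottery: 50% chance of $10, else $0; left lottery: $4 for sure
p = prob_choose_right((0.5, 10.0, 0.0), (1.0, 4.0, 4.0))
print(0.0 < p < 1.0)
```

The log of this choice probability (or of its complement, depending on the observed choice) is the per-observation contribution liPT summed in the conditional log-likelihood (13).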
PT differs from EUT in many respects, but one that is important for present purposes is the
treatment of reference points. Note from (1) and (2) that the argument of utility for EUT is ωs+x,
where ω is either assumed to be zero or is estimated. Although ωs+x is also used to determine if the
utility function must be transformed by taking two negatives, this is a purely mathematical device to
ensure that utility is increasing when x<0. PT, on the other hand, allows a critical role for the
reference point P in determining the form of the utility function applied (using β instead of α) in (7)
and (8), whether loss aversion plays a role in (8), and whether we are to use one form of the utility
function or another (viz., (7) or (8)). But PT is quite clear that the argument of the utility function,
whether (7) or (8), is to be the prospect over which the choice is being made and not some broader
concept of income or lifetime wealth. Thus, allowing the reference point to be non-zero (P ≥ 0) is
distinct from assuming that the argument of the utility function is something other than the
immediate prospect.16
Kahneman and Tversky [1979] were clear that the reference point need not be zero in
general, even if they later adopted that simplification:
The emphasis on changes as the carriers of value should not be taken to imply that
the value of a particular change is independent of initial position. Strictly speaking,
value should be treated as a function in two arguments: the asset position that serves
as reference point, and the magnitude of the change (positive or negative) from that
reference point. An individual’s attitude to money, say, could be described by a book,
where each page presents the value function for changes at a particular asset
position. Clearly, the value functions described on different pages are not identical;
they are likely to become more linear with increases in assets. However, the
preference order of prospects is not greatly altered by small or even moderate
variations in asset position. (p.277)
This passage nicely anticipates our estimation of homegrown reference points below. Similarly, in a
dynamic context Benartzi and Thaler [1993; p.79] note that since the argument of the utility function
in PT “... is a change it is measured as the difference in wealth with respect to the last time wealth
was measured, so the status quo is moving over time.” So there can be variations in P over time, but
the argument of the utility function is simply x.

16 Thaler and Johnson [1990] hypothesize an effect from prior losses or gains in PT, as part of the “editing
process” by which prospects are evaluated. This process is absent in CPT, but was prominent in SPT. We discuss their
hypothesis further in section 6.
Table 3 reports maximum likelihood estimates of various CPT models assuming
homogeneous agents, and Table 4 reports estimates allowing for full heterogeneity in the core
parameters. Figures 4 and 5 display the distribution of estimated parameters from the model
estimates in Table 4.
The first result is that subjects have coefficients for α and β that are generally consistent
with them being risk averse over gains and slightly risk averse or risk neutral over losses. When we
allow for heterogeneity with respect to demographics (Table 4), we find that there is rough similarity
between the two risk aversion parameters. The left panels of Figure 5 display kernel density
functions of the estimated parameters; the estimates for β tend to be more dispersed than the
estimates for α, but are otherwise comparable. Although there is more evidence for some subjects being
risk neutral or even risk loving in the loss domain (β), the general tendency is for slight risk
aversion.
The second result is that the probability weighting parameter γ is estimated to be 0.76 in the
simplest CPT model (Table 3, Panel A), and this average estimate persists in the other CPT
specifications. Thus we find evidence that subjects probability weight in the usual fashion,
overweighting low probabilities and underweighting high probabilities. When we allow for
demographic heterogeneity, we find varying degrees of probability weighting, illustrated in Figure 5.
These estimates are broadly consistent with the range of estimates found in the comparable
literature.
The third result is that estimates of loss aversion are sensitive to the particular CPT
specification. When one assumes that utility is defined solely over prizes, and the reference point is
$0 (Table 3, Panel A), the loss aversion parameter is estimated to be 1.07, with a standard error large
enough that the 95% confidence interval easily includes 1. This implies no loss aversion in this case.
But when we allow the reference point to be estimated, in Panels B and C, the loss aversion
coefficient drops significantly to around 0.5. Figure 5 shows the distribution of estimates from the
model in Table 4, and indicates striking heterogeneity. One clear mode at λ=1 reflects subjects that
do not exhibit any loss aversion at all, suggesting that there may be some subjects that are making
decisions consistently with EUT. This conjecture is addressed directly with the mixture model
below. The second mode in Figure 5 does indicate significant loss aversion (λ>1), but then there is a
third, smaller mode that reflects loss seeking (λ<1).
The interaction between these three modes of λ and the estimates for α and β is worth
examining more closely. Loss seeking in losses might show up through λ values below 1 or β values
greater than 1, and loss aversion in gains might show up through λ values greater than 1 or α values
less than 1.17 It is noteworthy that many, if not all, of the existing estimates of loss aversion in the
literature assume risk neutrality (α = β = 1). We consider this issue further below as well.
The fourth result is that the reference point is estimated to be positive, but small. From
Panel B of Table 3 we estimate the reference point to be $1.11 on average, with a 95% confidence
interval between $0.43 and $1.80. Panel C allows this to be a linear function of the expected task
earnings, and estimates that subjects start with a homegrown reference point of $0.35 per task plus
3% of their expected earnings.
Table 4 shows that these homegrown reference points vary considerably with demographic
characteristics of the subject. Figure 4 displays the distribution of estimated reference points,
evaluated using the model in Table 4. Two modes are evident, at $0 or some very small amount such
as $1 or $2, and then at $5 or $6.
17
The interaction of the additional parameters of PT with conventional parameters of EUT is an old and
important theme. As noted by Camerer [2005; p.130], one can resolve the St. Petersburg paradox by appealing solely to
loss aversion with risk neutral utility functions: the “... paradox is not necessarily resolved uniquely by locally concave
utility” (emphasis added). Indeed, this was a major theme of earlier work on non-EUT specifications, focusing on
probability weighting. Camerer [2005; p.130] again: “Yaari [1987] forcefully notes the duality of the concavity of money
utility and the nonlinearity of probability weighting as equally plausible explanations of risk aversion. His intuitions that
people do not care that much about the value of consequences if they win a lot (concave [x]) and that people do not
think that they are likely to win a lot (underweighting p[win]) are both plausible explanations. Why should the concave
u(x) explanation be privileged over underweighting of p(win)?”
C. Prospect Theory Assuming Risk Neutrality
An influential subset of the literature on PT focuses on Myopic Loss Aversion (MLA). The
“myopic” part refers to the assumption that the argument of the utility function is just gains or
losses, and the “loss aversion” part refers to the emphasis placed on the loss aversion parameter to
explain observed behavior. MLA justifiably captured the attention of mainstream economists when
Benartzi and Thaler [1995] (BT) used it to offer an explanation of the “equity premium puzzle.”
However, somewhere on the way to BT, two funny things happened to the definition of a
utility function within EUT. First, and all of a sudden, EUT was to be exclusively defined in terms of
utility as a function of terminal wealth. Thus BT (p.74) could write that in prospect theory “...utility
is defined over gains and losses relative to some neutral reference point, such as the status quo, as
opposed to wealth as in expected utility theory.” There is more to the BT explanation of the equity
premium than just this interpretation of the arguments of the utility function, but it glosses major
contributions to EUT in which the argument of utility is income or increments in income, and not
wealth (Cox and Sadiraj [2006]).
The second funny thing that happened is the pervasive use of risk neutrality in the utility
function for gains and losses. For example, Benartzi and Thaler [1995; p.79], Gneezy and Potters
[1997; p.632], Gneezy, Kapteyn and Potters [2003; p. 822] and Haigh and List [2005; p.525] all
assume risk neutrality in their exposition and analysis.18 Kahneman and Lovallo [1993; p. 20] and
Thaler, Tversky, Kahneman and Schwartz [1997; p.650] allow explicitly for CRRA. But the
legitimate expositional device of assuming away risk aversion should not lead one into assuming that in
general. Moreover, we often encounter extraordinarily strong priors about the “correct” values of γ
and λ that are based on intuitive reasoning when α=β=1.
It is therefore useful to ask what happens to our estimates in Table 4 if we add this risk
neutrality constraint. Figure 6 displays the resulting distribution, for comparison with Figure 5. The
18
The formal hypothesis tests of the last three studies, about the fraction of an endowment invested in a risky
asset, also follow if one assumes CRRA utility. But they do not if one assumes more flexible specifications consistent
with EUT that allow for varying RRA.
results for the loss aversion parameter, λ, are strikingly different. There is a wide range of estimates
for loss aversion, with the average estimate of 2.66 being dominated by the mode exhibiting extreme
loss aversion (λ>1). The estimates for the probability weighting parameter, γ, are again consistent
with the expected shape of the weighting function from the previous literature. However, the
variance in weighting behavior is now smaller. Thus we find that loss aversion seems to play a much
more important role, and probability weighting a much less important role, when we impose risk
neutrality.
These results remind us to infer estimates of the parameters of latent choice models jointly.19
One simply cannot reliably elicit estimates of probability weighting functions or loss aversion
independently of estimates of risk aversion.
D. A Mixture Model
If we let πEUT denote the probability that the EUT model is correct, and πPT = (1−πEUT)
denote the probability that the PT model is correct, a grand likelihood can be written as the
probability-weighted average of the conditional likelihoods. The likelihood for the overall model is
then defined by

ln L(r, ω, α, β, λ, γ, P, πEUT; y,X) = Σi ln [ (πEUT × liEUT) + (πPT × liPT) ].    (14)
This log-likelihood can be maximized to find estimates of the parameters.
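As an illustration of how (14) combines the conditional likelihoods, here is a minimal sketch; the per-observation likelihood values are purely hypothetical, and a grid search stands in for a proper maximum likelihood routine:

```python
import math

# Equation (14): grand log-likelihood of the finite mixture model.
# l_eut[i] and l_pt[i] are the conditional likelihood contributions of
# observation i under EUT and PT; pi_eut is the mixing probability.

def mixture_loglik(pi_eut, l_eut, l_pt):
    pi_pt = 1.0 - pi_eut
    return sum(math.log(pi_eut * le + pi_pt * lp)
               for le, lp in zip(l_eut, l_pt))

# Hypothetical conditional likelihoods for three observations
l_eut = [0.9, 0.2, 0.6]
l_pt = [0.3, 0.8, 0.5]

# Crude grid search over pi_eut in place of maximum likelihood estimation
best = max((mixture_loglik(p / 100, l_eut, l_pt), p / 100)
           for p in range(1, 100))
print(round(best[1], 2))
```

In the actual estimation the structural parameters of both models are estimated jointly with πEUT, rather than taking the conditional likelihoods as given.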
This approach literally assumes that any one observation can be generated by both models,
although it admits of extremes in which one or the other model wholly generates the observation.
One could alternatively define a grand likelihood in which observations or subjects are classified as
following one model or the other on the basis of the latent probabilities πEUT and πPT.20 We will
interpret estimates of πEUT and hence πPT as reflecting the fraction of the observations consistent
with each model, and on occasion interpret these probabilities as the fraction of subjects consistent
with each model. These interpretations are perfectly consistent with the statistical specification in
(14), but they are interpretations, and others are possible.

19 Andersen et al. [2005] provide a striking example of this point. They estimate risk attitudes and time
preferences jointly, and show that the implied annual discount rates fall from 25% or more to less than 10% when one
accounts for the concavity of the utility of discounted money.

20 El-Gamal and Grether [1995] illustrate this approach in the context of identifying behavioral strategies in
Bayesian updating experiments.
Table 5 reports the maximum likelihood estimates, assuming that the EUT and PT agents
are each homogeneous. So the only heterogeneity that is allowed in this empirical model is with
respect to EUT or PT. The first result is that the EUT model is estimated to have 67% support,
which is to say that the latent data-generating process assumed by EUT accounts for roughly ⅔ of
the observations. The second result is that EUT subjects tend to be risk-lovers, with a CRRA of
1.23. This is qualitatively different from the estimates shown in Table 2, which assumed that EUT
characterized every observation and not just ⅔ of the observations. The third result is that the EUT
subjects tend to consider the prizes they face in conjunction with cumulative income when
making decisions, again in marked qualitative contrast to the findings when we assumed that EUT
characterized every observation. From Table 2 we have estimates of ω that are extremely low,
around 3% or so, implying that subjects only consider the utility from the prizes in each lottery.
From Table 5 we have an estimate of ω that implies that a weight of 82% is placed on cumulative
income. The logic of this difference is immediate from the finding that ⅔ of the observations are
characterized better by EUT: the analysis that required that 100% of the observations “fit into the
EUT model” effectively required that the parameters of the EUT model account for the ⅓ of the
data that was actually generated by the PT model. In that case the implicit weight on cumulative
income in the utility function is 0, by definition, so the EUT model needed to account for those
observations as well.
Turning to the parameters from the PT data-generating process estimated in Table 5, we
find some robust results with respect to the earlier estimates of the PT model when it was assumed
to characterize 100% of the observations. First, the risk aversion parameters consistently suggest risk
aversion over gains and losses: α = 0.79 and β = 0.71, and in each case the upper bound of the 95%
confidence interval is below the risk-neutral point (α = β = 1). Second, there is evidence of
significant probability weighting, with γ = 0.54, a much lower value than obtained in Table 3 when
PT was assumed to characterize the complete sample. Again, this shift is expected, since γ=1 for
EUT subjects, and PT was required to explain their behavior in Table 3; in Table 5 it is “free” to just
characterize the PT subjects. Third, there is no consistent evidence of loss aversion. Even though
the estimate of λ is 1.31, and this is greater than 1, this parameter is not estimated precisely (it has a
standard error of 0.51 and a 95% confidence interval between 0.31 and 2.32). Thus we find no
evidence of loss aversion in this case, although the imprecision of the estimate of λ may be due to
heterogeneity that is assumed away in this specification. We consider controls for observable
individual characteristics below, and they may help generate more precise estimates.
Table 6 reports maximum likelihood estimates when each of the parameters of EUT and PT
are assumed to be linear functions of the observed demographic characteristics. In addition, we
include binary dummies for each period in the linear function for πEUT, to allow variations over time
in the extent to which subjects apply EUT-consistent or PT-consistent decision rules.

Figure 7 displays the distribution of πEUT pooled over all choices, and Figure 8 displays the
distribution of πEUT for each task in sequence. Figure 7 reveals an intriguing pattern: we classify
observations or subjects as being “clearly PT” or as being “probably EUT.” It is not the case that we
have two sharp peaks at either end of Figure 7, which would be the case if observations or subjects
were “all PT” or “all EUT.” This pattern is the qualitative reverse of the pattern identified using
comparable mixture models by Harrison and Rutström [2005] with static lottery choice data. Figure 8
shows that there is considerable variation in these estimated probabilities over the course of
the experiment, but no clear trend. Thus we do not appear to have an evolution towards EUT or PT
within this experimental session.
Figures 9 and 10 display the distribution of estimates of the parameters of the EUT and PT
models. These distributions reflect the heterogeneity due to demographic characteristics shown in
Table 6. They also reflect the sub-sample of observations for which the probability of being EUT or
PT exceeds ¼, since those observations better reflect the corresponding theory: we should not be
concerned about the estimate of a PT parameter for a subject with a probability of being a PT
decision maker that is close to zero.21

The choices consistent with EUT tend to be risk-loving (r>1), whereas those associated with
PT tend to be risk neutral or risk averse (α ≤ 1 and β<1). We can identify considerable heterogeneity
in the PT estimates: for example, one clear mode of the distribution of estimates of α is risk-neutral,
another is risk averse, and yet another two are risk-loving. Similarly, the modes for β span risk-loving,
slight risk aversion, and severe risk aversion. The choices consistent with PT also tend to
exhibit loss aversion (λ>1) and probability weighting (γ<1), consistent with a priori expectations
from the literature.
Finally, Figure 11 displays the distribution of the ω parameter for EUT, when we estimate it
assuming that every observation is generated by EUT (Table 2, Panel D) and when we assume that
only some of the observations are generated by EUT (Table 6). We observe a striking result. When
EUT is required to characterize all of the data, we estimate ω to be very small, and close to zero,
consistent with the notion that subjects only use the lottery prize as an argument in their EUT utility
function. But the assumption that EUT characterizes all of the data is rejected by the mixture model,
so our earlier estimates suffered from a mis-specification error. In effect, they had to accommodate
the observations that were primarily generated by the latent PT decision-making process, which
presumes that ω=0. When we correct for that mis-specification, and allow for mixtures of latent
data-generating processes, a very different picture emerges. These results indicate that observations
better characterized by EUT should be viewed as being generated by a utility function defined over
cumulative income for the session rather than just the lottery prize of the specific choice facing the
subject. Some EUT-consistent subjects still put more weight on the lottery choice (ω<1), but the
tendency is to integrate lottery prizes with cumulative income.

21 In fact, the distributions look similar if one includes all observations.
6. Comparisons
Thaler and Johnson [1990] focused directly on the question of how risk-taking behavior is
affected by prior gains or losses. They view choices from the perspective of PT, but allow for
interesting variations in the manner in which the “editing phase” of PT is applied. They provide
(p.646) a simple example in which the subject is told that they have just won $30, and must then
choose between (a) no further gain or loss, or (b) a 50-50 chance of winning $9 or losing $9. Three
representations of this problem are suggested:
u($21) + w(½) [ u($39) - u($21) ]
(15a)
u($30) + w(½) u($9) + w(½) u(-$9)
(15b)
u($21) + w(½) u($18)
(15c)
The representation in (15a) assumes that prior outcomes are embedded into the choice problem. In
effect, it adds “memory” to the standard PT representation of the task, and then applies the PT
editing rule that the prospect is broken into the certain part and then the residual uncertain part
(Kahneman and Tversky [1979; p.276]). The representation in (15b) assumes that prior outcomes, in
this case the $30 of cumulative income, have no effect on the framing of the task. This is the standard
PT formulation, which we have assumed in our analysis. The difference between (15a) and (15b) has
something of the flavor of the asset integration parameter ω that we introduced in (1) and (2) for
EUT. But it also has something of the flavor of the endogenous reference point P that we
introduced in (7) and (8) for PT.
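Under illustrative parameter values, the three representations (15a) through (15c) can be computed and compared. A sketch, where the value and weighting functions are assumed forms for exposition, not the paper's estimates:

```python
# Illustrative comparison of the Thaler-Johnson representations (15a)-(15c)
# for a $30 prior gain followed by a 50-50 gamble over winning or losing $9.
# The value and weighting functions below are assumed, not estimated.

def u(x):
    # Simple gain/loss value function with loss aversion lambda = 2
    return x ** 0.88 if x >= 0 else -2.0 * (-x) ** 0.88

def w(p):
    # Identity weighting keeps the example transparent
    return p

v_15a = u(21) + w(0.5) * (u(39) - u(21))        # prior gain embedded, then edited
v_15b = u(30) + w(0.5) * u(9) + w(0.5) * u(-9)  # standard PT: no memory of the $30
v_15c = u(21) + w(0.5) * u(18)                  # hedonic editing of the loss branch
print(v_15b < v_15a)  # loss aversion penalizes the -$9 branch in (15b)
```

With these assumed preferences the hedonic-editing representation (15c) values the gamble most and the memoryless representation (15b) least, which illustrates why the choice of editing rule matters for predicted behavior.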
The representation in (15c) assumes that subjects actively deform the prospect to make it
appear more attractive. Thus the possibility of a $9 loss is integrated into the $30 on hand, to be
evaluated as a certain $21, and the risky part of the gamble is evaluated as a potential gain of $18. Of
course, one problem with this “hedonic editing process,” originally suggested by Thaler [1985], is
that it has to presume something about the preferences of the subject. To use our notation, what
values of α, β, λ and γ does the subject have? These can be expected to vary across subjects, since
preferences are subjective, so one cannot unambiguously define a single best editing rule without
knowing those parameters or making some assumptions about them, and then one cannot (easily)
embed the editing rule so derived into a likelihood function that is to be used to estimate them.22
We ignore the editing process originally proposed as part of PT, since we focus on CPT as
the exemplar of PT. It is striking that the editing processes so important to Kahneman and Tversky
[1979] seem to have been abandoned in the developments of CPT in Tversky and Kahneman [1992].
Starmer [2000; p.354] makes this point, suggesting that some of the procedural aspects of PT and
related psychological theories may have been discarded too quickly in the rush to address some
(serious) technical difficulties with the original PT formulation.23
Cubitt and Sugden [2001] examine behavior in dynamic choice settings that are strikingly
similar to ours in many respects. Subjects participate in a series of rounds, and may accumulate or
lose income as they proceed. They may go bankrupt, in part due to the random flow of choices and
in part due to their current income from past choices and chance. They correctly note that the
standard design of most laboratory tests of EUT has not provided a particularly fertile breeding
ground for alternatives to EUT. They argue (p.103) that
...a sceptic might suspect that laboratory subjects have typically faced risks that are
neutered, in certain senses, relative to risk outside the laboratory and, consequently,
that experiments have tended to attenuate affective experiences of risk-taking, such
as hope, fear, thrill, elation, disappointment, pain of loss, or regret. If these
experiences are important determinants of behavior, this suspicion casts doubt on
existing experiments as guides to behavior in non-laboratory domains in which
affective experiences are more pronounced.
They also clearly differentiate between an EUT model in which the domain of the utility function is
the individual choice the subject makes and an EUT model in which the domain is the final outcome
over all choices in the session. They refer to the latter (p.111) as reduced-form EUT, since it is as if
EUT is applied to the “reduced form” of the sequence of choices and not to the structural
components of the sequence.

22 The numerical problem is that the likelihood would not be differentiable with respect to the parameters,
since there might be discrete changes in the representation and evaluation of the lotteries for small changes in
parameters. More generally, Stevenson, Busemeyer and Naylor [1991] and Starmer [2000; p.354] point out that this
indeterminacy can extend to the order in which multiple editing operations are applied.

23 Fennema and Wakker [1997] provide a careful statement of the differences between the original version of
PT and CPT. Kőszegi and Rabin [2005; 2006] provide an extension of PT that allows for reference-dependent
preferences, but that does not introduce a role for reference points through an editing process. They also recognize the
effects of allowing for subjective reference points that are non-deterministic and that evolve over time.
The particularly innovative aspect of the experimental design proposed by Cubitt and
Sugden [2001; p.113ff.] is that subjects are given an option to play accumulator gambles with 0, 1 or
2 rounds added at the end of any compulsory rounds. In one treatment they must make this choice
at the outset of the sequence, before knowing any outcomes. In another treatment they can make
the choice if and when they get there.24
There is also a long tradition in the judgment and decision-making literature that is
concerned with the procedural effect of experience on the reference point that subjects use in
dynamic decisions. For example, see Barkan and Busemeyer [1999], Busemeyer, Weg, Barkan, Li and
Ma [2000] and Starmer [2000; §4.2.1] and the literature they reference.
Most dynamic environments offer the decision-maker options in the future that depend on
choices made in the present. The clearest examples are financial options, of course. But it is now
widely acknowledged that many “real options” exist more generally when one considers capital
investment decisions over time: see Dixit and Pindyck [1994] and Trigeorgis [1996] for reviews of
the extensive literature. In these settings several behaviorally subtle issues arise.25 First, how far
ahead do subjects look when considering the effect of options on their current decision? Second,
how does this planning horizon interact with the presumed arguments of the utility function?
Another way in which the environment could be complicated is if the subject’s reference
points were endogenously chosen in a structured, strategic setting. Our environment allows for
subjects to come to the experiment with some homegrown reference point, and to possibly modify
it as the consequences of decisions unfold. But there is no strategic component to the game, and no
way in which subjects could gauge their relative fortunes during the session. Falk and Knell [2004]
examine hypotheses that derive from a social comparisons model in which subjects do choose
reference points in this manner.
24 Allowing options in this manner makes these tasks similar to some television game shows, such as Card
Sharks! and Deal Or No Deal.
25 The identification of planning horizons with experiments involves some inferential subtleties, as discussed
by Hey [2005]. Subjects might appear not to plan ahead in some designs but just be averse to pre-commitment, which is
a different thing and suggests that utility be defined over the temporal resolution of uncertainty (per Kreps and Porteus
[1978]). Similarly, some designs require that subjects predict what they would do in some future state of the world, and
their actual choices are then compared with those predictions. Evidence that subjects do not do what they said they
would do might just reflect the fact that the initial statements of plans were not incentivized, and hence cannot be
reliably compared to actual choices. These are not failings of the experimental method per se, since the same issues are
hopelessly confounded in non-experimental settings.
7. Conclusions
Reference points play a major role in differentiating theories of choice under uncertainty.
Under EUT the reference point is implicit in the assumptions made about asset integration, which is
the same thing as assuming different arguments of the utility function. Under PT the reference point
differentiates gains and losses, and the manner in which prospects are evaluated. We consider the
implications for both models of allowing subjects to have natural reference points, in the sense that
they derive from “homegrown” expectations about earnings in a dynamic task, rather than from a
frame presumed by the observer. We elicit initial beliefs about expected earnings, and implement a
dynamic decision process in which subjects could win or lose money, and even go bankrupt. In
short, we cultivate a fertile and natural breeding ground for the effects of reference points to
emerge.
To characterize the latent data-generating process in a flexible statistical manner, we assume
that some observations are generated by an EUT model and that some observations are generated
by a CPT model. This specification leads to a finite mixture model in which reference points may
differ from the frame presented by the lottery prizes.
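The mixture specification can be written compactly. As a sketch in our own notation (not necessarily the paper’s exact formulation), let $\pi_{EUT}$ and $\pi_{PT} = 1 - \pi_{EUT}$ denote the mixing probabilities and $\ell_i^{M}$ the likelihood of observed choice $i$ under model $M$; the grand log-likelihood is then

$$\ln L(\theta) \;=\; \sum_i \ln\!\left[\, \pi_{EUT}\, \ell_i^{EUT}(\theta_{EUT}) \;+\; \pi_{PT}\, \ell_i^{PT}(\theta_{PT}) \,\right], \qquad \pi_{EUT} + \pi_{PT} = 1.$$

Each observation thus contributes a probability-weighted average of the two model likelihoods, and the mixing probability is estimated jointly with the structural parameters of both models.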
We report several striking findings.
First, EUT accounts for a large fraction of the observations, despite this setting providing a
seemingly more natural breeding ground for prospect theory. Assuming homogeneous, representative
agents for each latent model type, EUT accounts for roughly ⅔ of the observations. With demographic
heterogeneity controlled for, it still accounts for ½ of the observations.
Second, the EUT subjects appear to have utility functions defined over their cumulative
income over the sequence of tasks, rather than over the prizes in each individual lottery
choice. Our specification allows considerable flexibility as to the correct argument of the utility
function. Some subjects might have utility functions defined over a specific lottery prize in the
choice before them, others might have utility functions defined over the entire stream of cumulative
income, and yet others may be in between.
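One simple way to nest these cases, consistent with the “weighted average” panels of Table 2, is to let the argument of utility be a convex combination. This is our reading of the panel titles, with $\omega$ the weight on cumulative income $I$ (inclusive of the current prize) and $y$ the prize in the current choice:

$$x \;=\; \omega\, I \;+\; (1 - \omega)\, y, \qquad \omega \in [0, 1],$$

so that $\omega = 0$ corresponds to utility defined over the lottery prizes alone, $\omega = 1$ to utility defined over cumulative income, and intermediate values to partial asset integration.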
Third, we find substantial differences in the estimated choice model parameters as one varies
the maintained assumptions about the latent choice process. If one assumes that all choices are
generated by CPT, probability weighting plays a major role and loss aversion disappears or mutates
into loss loving. If one instead allows for a latent process in which CPT only characterizes some of
the observations, and EUT characterizes the remainder, evidence for loss aversion re-appears for
those observations characterized by CPT.
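For reference, the loss-aversion parameter $\lambda$ discussed here enters a CPT value function of the standard Tversky and Kahneman [1992] form, defined over deviations from a reference point $R$ (standard notation, not a new estimate):

$$v(x) = \begin{cases} (x - R)^{\alpha} & \text{if } x \ge R \\ -\lambda\, (R - x)^{\beta} & \text{if } x < R \end{cases}$$

so estimates of $\lambda$ above one indicate loss aversion, while estimates below one correspond to the “loss loving” described above.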
Finally, we identify demographic characteristics that differentiate the probability that a
decision-maker uses EUT. Men are much more likely to use EUT than women, racial minorities are
more likely than others, those with higher grades are more likely, and those from poorer households
are more likely. Thus our results provide insights into the domain of applicability of each of the
major choice models, rather than claiming one to be the sole, true model.
Figure 1: Cumulative Earnings of Each Subject
[Line plot of cumulative earnings in dollars ($0 to $200) for each subject over choices 1 through 17.]
Figure 2: Expected Earnings Before The Task
Kernel density using Epanechnikov kernel function & bandwidth of $5
[Density plot of expected earnings after the show-up fee, from $0 to $80.]
Table 1: Cumulative Earnings
[For each of the 17 tasks, and in total: sample size, median, mean, standard deviation, minimum, maximum, and the 5th, 10th, 90th and 95th percentiles of cumulative earnings. Individual cell values are not recoverable from this copy; task 1 had 90 subjects with median and mean earnings of $12.]
Table 2: Maximum Likelihood Estimates Assuming EUT

Parameter  Coefficient                    Point Estimate  Std. Error  p-value  95% Confidence Interval

A. Assuming that utility is only defined over prizes: ω = 0
r   Constant                        0.779   0.044   0.000    0.693    0.865
μ   Constant                        1.847   0.202   0.000    1.450    2.243

B. Assuming that utility is defined over cumulative income and prizes: ω = 1
r   Constant                        1.054   0.072   0.000    0.913    1.195
μ   Constant                        4.866   1.474   0.001    1.978    7.755

C. Assuming that utility is defined over a weighted average of cumulative income and prizes: ω ∈ (0,1)
r   Constant                        0.765   0.044   0.000    0.679    0.851
ω   Constant                        0.031   0.000   0.000    0.030    0.032
μ   Constant                        1.758   0.215   0.000    1.336    2.179

D. Assuming that utility is defined over a weighted average of cumulative income and prizes: ω ∈ (0,1)
r   Over 22 years of age            0.015   0.051   0.778   -0.086    0.115
r   Female                         -0.027   0.048   0.573   -0.121    0.067
r   Black                           0.030   0.091   0.739   -0.147    0.208
r   Asian                          -0.032   0.087   0.716   -0.203    0.140
r   Hispanic                       -0.056   0.062   0.365   -0.178    0.065
r   High GPA (greater than 3.75)    0.059   0.067   0.377   -0.072    0.190
r   Low GPA (below 3.24)            0.010   0.058   0.861   -0.104    0.124
r   Parental income above $80,000   0.026   0.056   0.643   -0.084    0.136
r   Constant                        0.730   0.063   0.000    0.606    0.854
ω   Over 22 years of age            0.013   0.002   0.000    0.009    0.016
ω   Female                          0.010   0.001   0.000    0.008    0.012
ω   Black                           0.007   0.001   0.000    0.005    0.010
ω   Asian                          -0.007   0.003   0.023   -0.013   -0.001
ω   Hispanic                       -0.017   0.001   0.000   -0.019   -0.015
ω   High GPA (greater than 3.75)    0.001   0.002   0.676   -0.003    0.005
ω   Low GPA (below 3.24)           -0.014   0.001   0.000   -0.016   -0.013
ω   Parental income above $80,000   0.088   0.003   0.000    0.082    0.094
ω   Constant                        0.024   0.001   0.000    0.021    0.026
μ   Constant                        1.593   0.164   0.000    1.271    1.914
Figure 3: What Are The Arguments of Utility Under EUT?
Estimated Weight on Cumulative Earnings
[Density plot of the estimate of ω, from 0 to .2.]
Table 3: Maximum Likelihood Estimates Assuming CPT And Homogeneous Agents

Parameter  Coefficient         Point Estimate  Std. Error  p-value  95% Confidence Interval

A. Assuming 0 is the reference point
α   Constant             0.763   0.024   0.000    0.716    0.811
β   Constant             0.809   0.065   0.000    0.683    0.936
λ   Constant             1.070   0.220   0.000    0.637    1.502
γ   Constant             0.801   0.038   0.000    0.726    0.876
μ   Constant             1.883   0.248   0.000    1.397    2.369

B. Assuming endogenous reference point
α   Constant             0.767   0.025   0.000    0.718    0.815
β   Constant             0.731   0.071   0.000    0.592    0.870
λ   Constant             0.533   0.155   0.001    0.228    0.837
γ   Constant             0.904   0.054   0.000    0.797    1.010
Re  Constant             1.114   0.349   0.001    0.430    1.798
μ   Constant             1.366   0.222   0.000    0.931    1.801

C. Assuming endogenous and exogenous reference point on expected earnings
α   Constant             0.766   0.025   0.000    0.718    0.815
β   Constant             0.725   0.068   0.000    0.591    0.858
λ   Constant             0.507   0.166   0.002    0.181    0.833
γ   Constant             0.908   0.062   0.000    0.786    1.030
Re  Expected Earnings    0.030   0.002   0.000    0.026    0.034
Re  Constant             0.354   0.094   0.000    0.171    0.537
μ   Constant             1.327   0.214   0.000    0.908    1.745
Table 4: Maximum Likelihood Estimates Assuming CPT And Heterogeneous Agents

Parameter  Coefficient                    Point Estimate  Std. Error  p-value  95% Confidence Interval

α   Over 22 years of age           -0.051   0.056   0.368   -0.161    0.060
α   Female                         -0.110   0.051   0.031   -0.210   -0.010
α   Black                           0.143   0.056   0.011    0.033    0.254
α   Asian                           0.146   0.164   0.373   -0.175    0.466
α   Hispanic                       -0.070   0.068   0.305   -0.203    0.063
α   High GPA (greater than 3.75)   -0.127   0.070   0.072   -0.265    0.011
α   Low GPA (below 3.24)           -0.266   0.069   0.000   -0.401   -0.132
α   Parental income above $80,000  -0.026   0.042   0.535   -0.107    0.056
α   Constant                        0.975   0.081   0.000    0.816    1.134
β   Over 22 years of age            0.173   0.062   0.005    0.052    0.294
β   Female                          0.051   0.051   0.315   -0.049    0.152
β   Black                          -0.163   0.153   0.287   -0.464    0.137
β   Asian                          -0.202   0.091   0.026   -0.379   -0.024
β   Hispanic                       -0.225   0.075   0.003   -0.372   -0.078
β   High GPA (greater than 3.75)    0.127   0.074   0.087   -0.018    0.272
β   Low GPA (below 3.24)            0.199   0.072   0.006    0.057    0.341
β   Parental income above $80,000   0.029   0.064   0.650   -0.096    0.153
β   Constant                        0.744   0.089   0.000    0.570    0.918
λ   Over 22 years of age           -0.635   0.266   0.017   -1.156   -0.114
λ   Female                         -0.699   0.364   0.055   -1.413    0.015
λ   Black                          -0.178   0.274   0.515   -0.714    0.358
λ   Asian                           0.136   0.310   0.660   -0.471    0.743
λ   Hispanic                       -0.150   0.319   0.637   -0.776    0.475
λ   High GPA (greater than 3.75)   -0.112   0.636   0.861   -1.357    1.134
λ   Low GPA (below 3.24)           -0.894   0.426   0.036   -1.728   -0.060
λ   Parental income above $80,000  -0.008   0.286   0.978   -0.568    0.553
λ   Constant                        2.158   0.581   0.000    1.019    3.297
γ   Over 22 years of age            0.139   0.081   0.088   -0.021    0.298
γ   Female                          0.152   0.089   0.089   -0.023    0.326
γ   Black                           0.212   0.125   0.089   -0.032    0.457
γ   Asian                           0.161   0.109   0.139   -0.052    0.374
γ   Hispanic                        0.164   0.108   0.129   -0.048    0.377
γ   High GPA (greater than 3.75)   -0.068   0.127   0.595   -0.318    0.182
γ   Low GPA (below 3.24)            0.096   0.089   0.279   -0.078    0.270
γ   Parental income above $80,000   0.093   0.087   0.281   -0.076    0.263
γ   Constant                        0.615   0.073   0.000    0.473    0.758
Re  Over 22 years of age            0.013   0.013   0.337   -0.013    0.038
Re  Female                          1.210   2.087   0.562   -2.880    5.301
Re  Black                           0.186   0.191   0.331   -0.189    0.561
Re  Asian                         -12.189   0.383   0.000  -12.940  -11.438
Re  Hispanic                       -6.266   1.640   0.000   -9.481   -3.051
Re  High GPA (greater than 3.75)   -6.453   0.118   0.000   -6.684   -6.222
Re  Low GPA (below 3.24)            4.796   0.188   0.000    4.428    5.165
Re  Parental income above $80,000   6.660   0.163   0.000    6.341    6.979
Re  Constant                        0.373   0.269   0.166   -0.155    0.901
μ   Constant                       -0.501   0.518   0.333   -1.517    0.514
Figure 4: Estimated Reference Point for CPT Decision-Makers
Allowing Heterogeneous Preferences
[Kernel density and fitted normal density of the reference point in dollars, from -$15 to $10.]
Figure 5: Estimates of Prospect Theory Parameters
When PT Is Assumed to Characterize Every Observation
Maximum likelihood estimates allowing for demographic heterogeneity
[Four histograms (percent of subjects) of the estimates of λ, α, γ and β.]
Figure 6: Estimates of Prospect Theory Parameters
When PT Is Assumed to Characterize Every Observation
and α and β Are Constrained to Ensure Risk Neutrality
Maximum likelihood estimates allowing for demographic heterogeneity
[Four histograms (percent of subjects) of the constrained parameter estimates.]
Table 5: Maximum Likelihood Estimates of EUT and PT Mixture Model
Assuming Homogeneous Agents of Each Type

Parameter  Coefficient  Point Estimate  Std. Error  p-value  95% Confidence Interval
r       Constant     1.228   0.071   0.000    1.088    1.367
ω       Constant     0.823   0.098   0.000    0.632    1.020
α       Constant     0.544   0.071   0.000    0.405    0.683
β       Constant     0.789   0.092   0.000    0.607    0.970
λ       Constant     1.314   0.512   0.010    0.310    2.318
γ       Constant     0.712   0.105   0.000    0.506    0.919
μ       Constant     3.121   0.637   0.000    1.873    4.369
π_EUT   Constant     0.669   0.091   0.000    0.491    0.847
π_PT    Constant     0.331   0.091   0.000    0.153    0.509

Figure 7: Probability of EUT Model
[Density of the predicted probability that an observation is generated by the EUT model, from 0 to 1.]
Figure 8: Probability of EUT Model Over Time
Point estimate and 95% confidence interval
[Probability (0 to 1) on the vertical axis over choices 1 through 17.]
Figure 9: Expected Utility Theory Parameter
Estimated Within the Mixture Model
Assuming that the probability of the EUT model exceeds 0.25
[Histogram (percent of subjects) of the estimate of r, from .9 to 1.4.]
Figure 10: Prospect Theory Parameters
Estimated Within the Mixture Model
Assuming that the probability of the EUT model exceeds 0.25
[Four histograms (percent of subjects) of the estimates of λ, α, γ and β.]
Figure 11: Asset Integration Parameter for EUT
[Two histograms (percent of subjects) of ω: in the model assuming that EUT explains every observation, and in the mixture model assuming EUT and PT; horizontal axes from 0 to 1.]
References
Andersen, Steffen; Harrison, Glenn W.; Lau, Morten Igel, and Rutström, E. Elisabet, “Eliciting Risk
and Time Preferences,” Working Paper 05-24, Department of Economics, College of Business
Administration, University of Central Florida, 2005.
Andersen, Steffen; Harrison, Glenn W.; Lau, Morten Igel, and Rutström, E. Elisabet, “Elicitation
Using Multiple Price Lists,” Experimental Economics, 2006 forthcoming.
Ballinger, T. Parker, and Wilcox, Nathaniel T., “Decisions, Error and Heterogeneity,” Economic
Journal, 107, July 1997, 1090-1105.
Barkan, Rachel, and Busemeyer, Jerome R., “Changing Plans: Dynamic Inconsistency and the Effect
of Experience on the Reference Point,” Psychonomic Bulletin and Review, 6, 1999, 547-554.
Benartzi, Shlomo, and Thaler, Richard H., “Myopic Loss Aversion and the Equity Premium Puzzle,”
Quarterly Journal of Economics, 110(1), February 1995, 73-92.
Binswanger, Hans P., “Attitudes Toward Risk: Theoretical Implications of an Experiment in Rural
India,” Economic Journal, 91, December 1981, 867-890.
Busemeyer, Jerome R.; Weg, Ethan; Barkan, Rachel; Li, Xuyang, and Ma, Zhengping, “Dynamic and
Consequential Consistency of Choices Between Paths of Decision Trees,” Journal of
Experimental Psychology: General, 129(4), 2000, 530-545.
Camerer, Colin F., “Prospect Theory in the Wild: Evidence from the Field,” in D. Kahneman and
A. Tversky (eds.), Choices, Values and Frames (New York: Cambridge University Press,
2000).
Camerer, Colin F., “Three Cheers – Psychological, Theoretical, Empirical – for Loss Aversion,”
Journal of Marketing Research, XLII, May 2005, 129-133.
Camerer, Colin F., and Ho, Teck-Hua, “Violations of the Betweenness Axiom and Nonlinearity in
Probability,” Journal of Risk & Uncertainty, 8, 1994, 167-196.
Campbell, John Y.; Lo, Andrew W., and MacKinlay, A. Craig, The Econometrics of Financial Markets
(Princeton: Princeton University Press, 1997).
Cherry, Todd L.; Frykblom, Peter, and Shogren, Jason F., “Hardnose the Dictator,” American
Economic Review, 92(4), September 2002, 1218-1221.
Cherry, Todd L.; Kroll, Stephan, and Shogren, Jason F., “The Impact of Endowment Heterogeneity
and Origin on Public Good Contributions: Evidence from the Lab,” Journal of Economic
Behavior & Organization, 57, 2005, 357-365.
Clark, Jeremy, “House Money Effects in Public Good Experiments,” Experimental Economics, 5(3),
December 2002, 223-231.
Cox, James C., and Grether, David M., “The Preference Reversal Phenomenon: Response Mode,
Markets and Incentives,” Economic Theory, 7, 1996, 381-405.
Cox, James C., and Sadiraj, Vjollca, “Small- and Large-Stakes Risk Aversion: Implications of
Concavity Calibration for Decision Theory,” Unpublished Manuscript, Department of
Economics, University of Arizona, May 2005; forthcoming, Games & Economic Behavior.
Cubitt, Robin P., and Sugden, Robert, “Dynamic Decision-Making Under Uncertainty: An
Experimental Investigation of Choices between Accumulator Gambles,” Journal of Risk &
Uncertainty, 22(2), 2001, 103-128.
Dixit, Avinash, and Pindyck, Richard S., Investment Under Uncertainty (Princeton: Princeton University
Press, 1994).
El-Gamal, Mahmoud A., and Grether, David M., “Are People Bayesian? Uncovering Behavioral
Strategies,” Journal of the American Statistical Association, 90, 432, December 1995, 1137-1145.
Falk, Armin, and Knell, Markus, “Choosing the Joneses: Endogenous Goals and Reference
Standards,” Scandinavian Journal of Economics, 106(3), 2004, 417-435.
Fennema, Hein, and Wakker, Peter, “Original and Cumulative Prospect Theory: A Discussion of
Empirical Differences,” Journal of Behavioral Decision Making, 10, 1997, 53-64.
Friedman, Milton, and Savage, Leonard J., “The Utility Analysis of Choices Involving Risk,” Journal
of Political Economy, 46, August 1948, 279-304.
Gneezy, Uri; Kapteyn, Arie, and Potters, Jan, “Evaluation Periods and Asset Prices in a Market
Experiment,” Journal of Finance, 58, 2003, 821-838.
Gneezy, Uri, and Potters, Jan, “An Experiment on Risk Taking and Evaluation Periods,” Quarterly
Journal of Economics, 112, 1997, 631-645.
Gollier, Christian, The Economics of Risk and Time (Cambridge, MA: MIT Press, 2001).
Haigh, Michael S., and List, John A., “Do Professional Traders Exhibit Myopic Loss Aversion? An
Experimental Analysis,” Journal of Finance, 60(1), February 2005, 523-534.
Hansen, Robert G., and Lott, John R., “The Winner’s Curse and Public Information in Common
Value Auctions: Comment,” American Economic Review, 81(1), March 1991, 347-361.
Harless, David W., and Camerer, Colin F., “The Predictive Utility of Generalized Expected Utility
Theories,” Econometrica, 62(6), November 1994, 1251-1289.
Harrison, Glenn W., “House Money Effects in Public Good Experiments: Comment,” Working
Paper 06-05, Department of Economics, College of Business Administration, University of
Central Florida, 2006; forthcoming, Experimental Economics.
Harrison, Glenn W., and List, John A., “Field Experiments,” Journal of Economic Literature, 42(4),
December 2004, 1013-1059.
Harrison, Glenn W., and Rutström, E. Elisabet, “Representative Agents in Lottery Choice
Experiments: One Wedding and A Decent Funeral,” Working Paper 05-18, Department of
Economics, College of Business Administration, University of Central Florida, 2005.
Hey, John D., “Do People (Want To) Plan?” Scottish Journal of Political Economy, 52(1), February 2005,
122-138.
Hey, John D., and Orme, Chris, “Investigating Generalizations of Expected Utility Theory Using
Experimental Data,” Econometrica, 62(6), November 1994, 1291-1326.
Holt, Charles A., and Laury, Susan K., “Risk Aversion and Incentive Effects,” American Economic
Review, 92(5), December 2002, 1644-1655.
Kagel, John H., and Levin, Dan, “The Winner’s Curse and Public Information in Common Value
Auctions: Reply,” American Economic Review, 81(1), March 1991, 362-369.
Kahneman, Daniel, and Lovallo, Dan, “Timid Choices and Bold Forecasts: A Cognitive Perspective
on Risk Taking,” Management Science, 39(1), January 1993, 17-31.
Kahneman, Daniel, and Tversky, Amos, “Prospect Theory: An Analysis of Decision Under Risk,”
Econometrica, 47, 1979, 263-291.
Kőszegi, Botond, and Rabin, Matthew, “Reference-Dependent Risk Attitudes,” Unpublished
Manuscript, Department of Economics, University of California at Berkeley, October 2005.
Kőszegi, Botond, and Rabin, Matthew, “A Model of Reference-Dependent Preferences,” Quarterly
Journal of Economics, 121(4), November 2006, forthcoming.
Kreps, David M., and Porteus, Evan L., “Temporal Resolution of Uncertainty and Dynamic Choice Theory,”
Econometrica, 46, 1978, 185-200.
Liang, K-Y., and Zeger, S.L., “Longitudinal Data Analysis Using Generalized Linear Models,”
Biometrika, 73, 1986, 13-22.
Lind, Barry, and Plott, Charles R., “The Winner’s Curse: Experiments with Buyers and with
Sellers,” American Economic Review, 81(1), March 1991, 335-346.
List, John A., “Does Market Experience Eliminate Market Anomalies?” Quarterly Journal of Economics,
118, February 2003, 41-71.
List, John A., “Neoclassical Theory Versus Prospect Theory: Evidence from the Marketplace,”
Econometrica, 72(2), March 2004, 615-625.
Loomes, Graham; Starmer, Chris, and Sugden, Robert, “Do Anomalies Disappear in Repeated
Markets?” Economic Journal, 113, March 2003, C153-C166.
Munro, Alistair, and Sugden, Robert, “On the Theory of Reference-Dependent Preferences,” Journal
of Economic Behavior & Organization, 50, 2003, 407-428.
Novemsky, Nathan, and Kahneman, Daniel, “The Boundaries of Loss Aversion,” Journal of Marketing
Research, XLII, May 2005a, 119-128.
Quizon, Jaime B.; Binswanger, Hans P., and Machina, Mark J., “Attitudes Toward Risk: Further
Remarks,” Economic Journal, 94, March 1984, 144-148.
Rabin, Matthew, “Risk Aversion and Expected Utility Theory: A Calibration Theorem,”
Econometrica, 68, 2000, 1281-1292.
Rabin, Matthew, and Thaler, Richard, “Anomalies: Risk Aversion,” Journal of Economic Perspectives, 15,
Winter 2001, 219-232.
Rogers, W. H., “Regression standard errors in clustered samples,” Stata Technical Bulletin, 13, 1993,
19-23.
Rubinstein, Ariel, “Comments on the Risk and Time Preferences in Economics,” Unpublished
Manuscript, Department of Economics, Princeton University, 2002.
Rutström, E. Elisabet, and Williams, Melonie B., “Entitlements and Fairness: An Experimental
Study of Distributive Preferences,” Journal of Economic Behavior and Organization, 43, 2000,
75-80.
Schubert, Renate; Brown, Martin; Gysler, Matthias, and Brachinger, Hans Wolfgang, “Financial Decision
Making: Are Women Really More Risk Averse?,” American Economic Review, 89(2), May 1999,
381-385.
Starmer, Chris, “Developments in Non-Expected Utility Theory: The Hunt for a Descriptive Theory
of Choice Under Risk,” Journal of Economic Literature, 38, June 2000, 332-382.
Stevenson, Mary Kay; Busemeyer, Jerome R., and Naylor, James C., “Judgment and Decision-Making
Theory,” in M. Dunnette and L.M. Hough (eds.), New Handbook of Industrial Organizational
Psychology (Palo Alto, CA: Consulting Psychologists Press, 1991).
Sugden, Robert, “Reference-Dependent Subjective Expected Utility,” Journal of Economic Theory, 111,
2003, 172-191.
Thaler, Richard H., “Mental Accounting and Consumer Choice,” Marketing Science, 4(3), Summer
1985, 199-214.
Thaler, Richard H., and Johnson, Eric J., “Gambling With The House Money and Trying to Break
Even: The Effects of Prior Outcomes on Risky Choice,” Management Science, 36(6), June
1990, 643-660.
Thaler, Richard H.; Tversky, Amos; Kahneman, Daniel, and Schwartz, Alan, “The Effect of Myopia
and Loss Aversion on Risk Taking: An Experimental Test,” Quarterly Journal of Economics, 112,
May 1997, 647-661.
Trigeorgis, Lenos, Real Options (Cambridge, MA: MIT Press, 1996).
Tversky, Amos, and Kahneman, Daniel, “Advances in Prospect Theory: Cumulative
Representations of Uncertainty,” Journal of Risk & Uncertainty, 5, 1992, 297-323; references to
reprint in D. Kahneman and A. Tversky (eds.), Choices, Values, and Frames (New York:
Cambridge University Press, 2000).
Williams, Rick L., “A Note on Robust Variance Estimation for Cluster-Correlated Data,” Biometrics,
56, June 2000, 645-646.
Wooldridge, Jeffrey, “Cluster-Sample Methods in Applied Econometrics,” American Economic Review
(Papers & Proceedings), 93, May 2003, 133-138.
Yaari, Menahem E., “The Dual Theory of Choice Under Risk,” Econometrica, 55(1), 1987, 95-115.