Behavioral Economics
NS Grewal, Ipsos Neuroscience & Emotion Center of Excellence, San Francisco, CA, USA
JA Sparks, University of California, Davis, CA, USA
J Reiter, Ipsos Neuroscience & Emotion Center of Excellence, New York, NY, USA
E Moses, Ipsos Neuroscience & Emotion Center of Excellence, Norwalk, CT, USA
© 2016 Elsevier Inc. All rights reserved.
Glossary
Asymmetric paternalism Refers to policies designed to help people who behave irrationally (in the sense that they do not act in their own best interest), while interfering minimally with those who behave rationally.
Availability The heuristic where decision-makers assess
the frequency of a class or the probability of an event by the
ease with which instances or occurrences can be brought to
mind.
Bounded rationality The idea that rationality of
individuals is limited by the information they have, the
cognitive capacity of their minds, and time constraints.
Delay discounting When faced with a future choice, the
phenomenon where one sharply reduces its importance
relative to that of a present alternative.
Endowment effect How the value of a good increases
when it becomes part of a person’s endowment.
Law of small numbers A belief that random samples of a population will resemble each other and the population more closely than statistical sampling theory would predict.
Neuroeconomics An interdisciplinary field that investigates the physiological and neural underpinnings of the topics addressed in the behavioral economics literature, like heuristics and biases, fairness, and trust.
Overconfidence The bias where people are overconfident when assessing the accuracy of their answers.
Prospect theory An alternative to expected utility theory that measures value in terms of gains and losses or deviations from a reference point.
Representativeness The heuristic whereby people often judge probabilities by the degree to which A is representative of B.
Definition and Introduction
Behavioral economics endeavors to provide a descriptively accurate account of human decision making, motivated by the desire to improve the explanatory and predictive power of economic models. Since the days of Adam Smith and Jeremy Bentham, neoclassical economists have traditionally viewed consumers as single-mindedly motivated by the pursuit of self-interest. According to the neoclassical view of rational economic agents (Smith, 1776), people make decisions based on maximization of expected utility. That is, people act as if they are equipped with unlimited knowledge, time, and information-processing abilities. However, this traditional view has been challenged by findings from psychology and neuroscience that suggest emotional and social motives are drivers of economic decision making. Historically, as a subfield of economics, behavioral economics pioneered the revision of neoclassical economic theory using insights from psychology about the mechanisms underlying the decision making process. The field of behavioral economics is grounded in relaxing this most fundamental assumption underlying microeconomic theory – the rationality of economic agents – to study the array of cognitive, social, and emotional factors that influence the economic decisions of individuals and groups, primarily concerned with the bounds of rationality. As the mental health field deals with the emotional and psychological well-being of people, behavioral economics can provide perspective into the utility, health, and well-being of others as well as contribute to the development of strategies to improve health-related decision
making. This article summarizes the key history and research in
the field of behavioral economics, and discusses applications to
the mental health field.
History
Before the development of neoclassical economics, psychology
and economics were closely connected. Adam Smith’s The
Theory of Moral Sentiments proposed psychological and philosophical underpinnings of human behavior. Utilitarian philosopher Jeremy Bentham wrote that people ought to desire
things that maximize their utility (Bentham, 1824/1987).
Then, the development of neoclassical theory reshaped economics into a science designed to deduce behavior from assumptions about economic agents. ‘Homo economicus,’ the
rational man, was the key assumption underlying neoclassical
economic theory. Internal states of economic agents (i.e.,
preferences, beliefs, and emotions) were held constant,
thereby assuming rational, utility maximizers with full access
to information about their choices (Smith, 1776). Objective
actions or behavioral responses of individuals were thought to
be markers for the internal preferences driving decisions. Thus,
neoclassical economics modeled consumer behavior without
explicitly considering the psychological forces driving behavior. A consumer purchase was considered an absolute
marker for preference, and decision theory developed along
with neoclassical economics devoid of any account for subjective states or emotions.
By the 1950s, anomalies in rational judgment and decision
making were captured in experimental studies and challenged
the neoclassical assumptions. Nobelist Maurice Allais designed
the Allais paradox, a decision problem that contradicted the
predictions of expected utility theory (Allais, 1953). Nobelist
Herbert Simon developed the theory of bounded rationality to
explain how people seek satisfaction instead of maximizing
utility as neoclassical economics would predict (Simon, 1957).
By the 1960s, the cognitive revolution had moved psychology toward the study of the brain as an information-processing unit, shifting the focus to modeling decision making and its underlying mechanisms.
In 1979, psychologists Kahneman and Tversky wrote ‘Prospect theory: An analysis of decision under risk,’ a paper that
used psychology to explain the divergences of economic decision making from neoclassical theory. In 2002, Kahneman
won the Nobel Prize in Economics for having “integrated insights from psychological research into economic science, especially concerning human judgment and decision making
under uncertainty” (The Royal Swedish Academy of Sciences)
and Vernon Smith won for his work in experimental economics. The development of behavioral economics brought
more attention to the impact of emotion on decision making
(e.g., see Shiv et al., 2005). In more recent years, economic
decision making has been linked to brain functioning, where
the field of neuroeconomics investigates the neural and
physiological factors that mediate the interaction between
cognition and emotion (e.g., see Glimcher, 2003).
Risk and Uncertainty
The fundamentals of behavioral economics stem from the act of real-life decision making, which regularly occurs under conditions of uncertainty. In many everyday decisions, there is no one-to-one correspondence between the actions one takes and the outcomes of those actions. An agent chooses an action and a random event occurs that determines the result. This describes risk – where the probabilities (p_j) associated with outcomes (x_j) are known, but the end result is not determined. Uncertainty is more extreme and perhaps more realistic in that it describes a situation where we do not even have enough information to determine the probabilities to assign to outcomes. In this section we introduce the fundamentals of neoclassical theory to serve as a foundation for the later sections on behavioral economic theories.
Neoclassical Model of Risk Aversion with Concave Utility Function
Risk aversion describes a situation where an agent faces a choice between a lottery and a sure money payoff (equal to the expected value of the lottery), and they weakly prefer the sure money payoff. In other words, a risk averse person prefers the sure thing to the gamble. A risk averse agent's utility function is depicted in Figure 1.
Figure 1 Concave utility function (wealth in dollars by utility in utils). A risk averse agent has a concave utility function. The expected utility of the lottery is E(U) = 0.5U(1) + 0.5U(5), and the utility of the expected value of the lottery is U(0.5(1) + 0.5(5)) = U(3).
If an agent is risk averse over some region, the chord drawn between any two points on her utility function must lie below the function (Varian, 1992). Mathematically, this is equivalent to concavity of the utility function, as described by Jensen's Inequality,

\[ u\!\left(\sum_{j=1}^{N} p_j x_j\right) \ge \sum_{j=1}^{N} p_j\, u(x_j) \]

In words, the utility of the expected value of the lottery is weakly greater than the expected utility of the lottery. Intuitively, the more concave a utility function, the more risk averse an agent behaves.
Expected Utility Theory
Expected utility theory is used in economics to explain decision making under uncertainty. It was first proposed by von
Neumann and Morgenstern (1947) as a theory describing how
agents ought to make choices under uncertainty if they adhere
to certain axioms of rational choice. The theory states that
agents should choose between risky prospects by comparing
expected utilities, or weighted averages composed of utility
values multiplied by their respective probabilities. They ought
to pick the alternative that maximizes expected utility,
\[ \max \, E(u(x)) = \sum_{i=1}^{N} p_i\, u(x_i) \]
Definition of Rational Choice
Rational choice pertains to a pattern of decisions that meet the
following assumptions (Plous, 1993):
1. Completeness: Rational decision-makers should be able to
rank any two alternatives in the sense that they prefer one
to the other or are indifferent between them.
2. Transitivity: If a rational decision-maker prefers A to B and
B to C, then they should prefer A to C.
3. Independence: Inclusion of irrelevant alternatives to a
choice set must not change one’s preference ranking over
the relevant choices.
Violations
In reality, decision-makers often violate these assumptions and
therefore make choices that expected utility theory would not
predict. For example, violations of the transitivity axiom are
common. If you prefer chocolate to vanilla ice cream, and
vanilla to strawberry ice cream, you should prefer chocolate to
strawberry. If, given the choices, you pick strawberry ice cream
over chocolate, you have violated the transitivity axiom.
Transitivity
Tversky conducted an experiment in 1969 that showed highly
reliable violations of transitivity in participant preferences. He
presented a sample of Harvard undergraduates with five lotteries, where the expected value of each lottery increased with
the probability of winning and decreased with the payoff
amount. The students were randomly presented with a pair of
lotteries and asked to choose which one they preferred. When
two lotteries had very similar probabilities of winning, subjects
chose the option with the higher payoff. However, when the
difference in probabilities was large, students chose the option
with the higher probability of winning. Thus, Lottery A was
preferred to Lottery B, Lottery B to Lottery C, Lottery C to
Lottery D, Lottery D to Lottery E, but Lottery E was preferred to
Lottery A, which demonstrates intransitivity in student preferences (Plous, 1993).
Anomalies in Judgment and Decision Making
Heuristics and Biases
Growing from the work on violations of rational choice assumptions, the field of behavioral economics has largely focused on investigating the wide range of phenomena that lead to decision making deviating from neoclassical predictions. These anomalous decisions often result from the use of heuristics, or rules of thumb. Human decision making relies on heuristic reasoning strategies to save cognitive resources and improve efficiency. However, heuristics can lead to systematic errors in judgment that can be isolated experimentally, called biases. Dozens of heuristics and biases have been studied extensively and verified experimentally. Here, we summarize only a few of the most prominent and well-studied heuristics and biases in the behavioral economics literature (Tversky and Kahneman, 1974).
Representativeness
The representativeness heuristic (Tversky and Kahneman, 1974) is the fallacy in which people judge the probability of an event based on how similar, or representative, it is to a prototype. Representativeness causes people to underestimate the probability of a change in a common trend – for example, the likelihood of rain when it has been sunny for weeks straight. Representativeness predicts many other documented anomalies in decision making, including the belief in the ‘law of small numbers,’ that one's choices are often overly influenced by the outcomes from a small sample.
Availability
Availability is the ease with which a given event or scenario can
be accessed from memory. The availability heuristic (Tversky
and Kahneman, 1974) is used when events are judged as more
probable due to the prevalence of similar salient events, or
events that are easy to recall or imagine. This causes biased
probability judgments when other factors that influence
availability are ignored. The availability heuristic explains why
vivid scenarios such as plane crashes are more available than
objectively more likely problems such as cancer.
Anchoring
Anchoring is based on the observation that people solve
problems by starting from an arbitrary salient starting point
that is then adjusted to generate a final answer. The bias occurs
when the adjustment is insufficient, and the final answer is
anchored to the initial guess (Tversky and Kahneman, 1974).
Anchoring explains several well-documented biases including
‘overconfidence,’ the finding that people’s subjective confidence in their own judgments is invariably greater than their
objective accuracy.
Cognition: Dual-Process Theory
‘Dual-process theories’ postulate that there are two functionally different systems that can process information. In the
behavioral economics and psychology literatures, it is generally accepted that decision making can be characterized by the
distinction of two processing modes: System 1 processes
are fast, effortless, automatic, nonconscious, and impulsive,
whereas System 2 processes are slow, effortful, conscious, rational, and deliberatively controlled (Evans and Over, 1996;
Kahneman, 2003; Sloman, 1996; Stanovich and West, 2002).
With functionally distinct roles, these systems differ according
to the type of information they process and encode, as well as
the end judgment or response they generate.
In his book Thinking, Fast and Slow, Kahneman (2011)
writes that human decision making is a compound system
containing one part intuitive (System 1) and one part rational
(System 2). Accordingly, biases in cognition are attributed to
the fast and effortless processes of System 1, which are heuristic or associative. Logical responses are attributed to the slow
and effortful processes of System 2, which are characterized as
rule based or analytical.
Prospect Theory
Kahneman and Tversky demonstrated in controlled laboratory
experiments that people systematically violate the axioms of
expected utility theory in their decision making (Tversky and
Kahneman, 1974). In response to their findings, the psychologists developed an alternative, descriptive, and empirically
supported theory of choice – prospect theory (Kahneman and
Tversky, 1979). Prospect theory is the behavioral economic
theory that describes the way in which people make real-life
choices between uncertain alternatives. The theory posits that
people make choices based on relative judgments (rather than
absolute), and that they evaluate options using heuristics
(Kahneman and Tversky, 1979). The prospect theory value
function is defined on deviations from a point of reference,
and is concave for gains (implying risk aversion), convex for
losses (risk seeking), and steeper for losses than for gains (loss
aversion) (Figure 2).
The value function predicts that the experience of a loss is
more painful than the experience of a gain is enjoyable such
that given an equal-sized gain or loss, the loss hurts much
more than the gain helps.
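A common way to make this value function concrete is a piecewise power function with a loss-aversion coefficient greater than one. The functional form and the parameter values below are frequently cited estimates from the later literature, used here purely for illustration; they are not given in this article.

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect theory value function sketch: concave for gains, convex for
    losses, and steeper for losses (lam > 1 captures loss aversion).
    Parameter values are often-cited estimates, used only for illustration."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

# Losses loom larger than equal-sized gains:
print(value(100))                      # subjective value of a $100 gain (~57.5)
print(value(-100))                     # subjective value of a $100 loss (~-129.4)
print(abs(value(-100)) > value(100))   # True: the loss hurts more than the gain helps
```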
Framing
Building on their earlier work on heuristics and biases, in
‘Prospect theory: An analysis of decision under risk,’ Kahneman and Tversky (1979) investigate how consumer choice
is shaped by probability inferences. Framing, or the way in
which a problem is presented, is one of the main factors that
influence choice. Kahneman and Tversky found that the
presentation of a problem can drastically change the way it is
viewed, and the solution that is generated. Take, for example,
the following two statements:
1. The surgery has a 95% survival rate.
2. There is a 5% chance of death from the surgery.
If a doctor told you (1), you would likely go through with
the surgery. However, if a doctor told you (2), you might have
second thoughts. The probability of success in both cases is the
same, but the way the problem is framed can influence your
decision of whether or not to pursue the surgery. A large body
of experimental research on framing has demonstrated that
people’s actions depend on the way choices are presented (e.g.,
see Tversky and Kahneman, 1981).
Loss aversion and the endowment effect
Two key principles deriving from Prospect Theory, and used as
evidence for reference-dependent preferences, are loss aversion
and the endowment effect (Kahneman et al., 1991). Loss
aversion reflects a person’s tendency to prefer avoiding losses over acquiring gains. The endowment effect is a manifestation of
loss aversion, wherein people place extra value on goods they
own compared to identical goods they do not own. In other
words, the value of a good increases once a person establishes
his or her property right over it. In the original endowment
effect experiment (Kahneman et al., 1990), students demanded
a higher price for a mug that had been given to them but put a
lower price on a mug they did not yet own – when the actual
price of each mug was identical. The endowment effect has
been described as an anomaly in neoclassical theory, which
predicts that a person’s willingness to pay (WTP) for a good
should be equivalent to their willingness to accept (WTA)
payment to be deprived of the same good. In other words,
valuation should not be affected by ownership. In reality, as
the endowment effect demonstrates, reference points (as
predicted by the Prospect Theory value function) do influence
valuations and decisions and can result in WTA being greater
than WTP.
Figure 2 Prospect theory value function: subjective value plotted against objective gains and losses, concave (risk averse) in the domain of gains and convex (risk seeking) in the domain of losses.
Intertemporal Choice
Decision making frequently involves making tradeoffs between choices at different points in time. Two differently valued outcomes that cannot both be obtained at the same point
in time present an intertemporal choice. Since the future is
uncertain and people are risk averse for gains (as derived from
prospect theory), there tends to be a strong preference for
smaller immediate gains over larger future gains. Colloquially,
‘a bird in the hand is worth two in the bush,’ or having
something for certain now is often preferred over the possibility of getting something better in the future. As a result, the
typical response when evaluating a future choice is to sharply
reduce its importance relative to that of a present alternative,
an effect known as delay discounting (Frederick et al., 2002).
The traditional model from economics, discounted utility
(Samuelson, 1937), assumes time consistency in intertemporal choices. For example, if you choose US$5 today over
US$10 tomorrow, the discounted utility model would predict
you would choose US$5 in 100 days from now over US$10 in
101 days from now. However, experimental studies have
shown that intertemporal choices often result in preference
reversals such that preferences for the long term tend to conflict with behavior in the short term (Loewenstein and Prelec,
1992). You may choose US$10 today over US$20 tomorrow
but over a longer time-horizon, say US$10 in 100 days or US
$20 in 101 days, you would choose the delayed US$20. Time
inconsistency is captured by hyperbolic and quasi-hyperbolic
models of discounting in the behavioral economics literature
(e.g., Ainslie and Haslam, 1992; Laibson, 1997).
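A minimal sketch of how a hyperbolic discounter produces this preference reversal is shown below. The discount rate k is an illustrative assumption chosen so the numbers line up with the US$10 versus US$20 example above; an exponential discounter with a constant per-period rate would never reverse in this way.

```python
def hyperbolic_value(amount, delay_days, k=1.5):
    """Hyperbolic discounting: value = amount / (1 + k * delay).
    The steep daily rate k is an illustrative assumption chosen so the numbers
    match the US$10 / US$20 example in the text."""
    return amount / (1 + k * delay_days)

# Today versus tomorrow: the immediate US$10 is preferred ...
print(hyperbolic_value(10, 0) > hyperbolic_value(20, 1))      # True  (10.00 vs 8.00)

# ... but with the same one-day gap pushed 100 days out, preferences reverse.
print(hyperbolic_value(10, 100) > hyperbolic_value(20, 101))  # False (0.066 vs 0.131)
```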
Social Preferences
The assumption of self-interest from neoclassical economics is
appended in behavioral economics to account for interdependent preferences, or so-called social preferences. In many
ways, people behave as if they value the utility ascribed to
others and therefore do not act purely out of self-interest. Integrating social preferences into the analysis can improve
predictions of important economic phenomena (e.g., Kahneman et al., 1986a). The study of social preferences generally
focuses on two types: distributive preferences (preferences related to equality of outcomes; like altruism or equity) or reciprocal preferences (preferences related to the desire to
reward/punish others based on past actions; like fairness, envy,
and trust) (Croson and Konow, 2009). Behavioral economists
have formulated and tested a variety of two-player ‘games’ or
strategic situations, where the choice of an individual is
dependent upon the choices of another player. These gametheoretic experiments isolate and capture the social nature of
preferences (von Neumann and Morgenstern, 1944). We review three of these games here; for a complete overview see
Camerer (2003).
Trust game
In the trust game (Berg et al., 1995), an endowment (say, US
$10) is given to a player called the proposer. The proposer
then offers an amount of the US$10 to share with another
player, the responder. The amount offered is tripled by a third
party, and then given to the responder. The responder then
must decide how much of the money they were given to return
to the proposer. In the case of an initial US$10, if the proposer
was to offer all the money, and the responder was to return
half of it, both players walk away with US$15. In reality, most
proposers do not offer the full amount, and most responders
do not return even splits. Experimental studies (e.g., Cox,
2004) have found that the more the proposer gives, the more
the responder is likely to return to them, and thus the transfers
in the trust game experimentally isolate and quantify trust and
reciprocity.
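The payoff arithmetic of the trust game can be summarized in a short sketch. The tripling multiplier and the US$10 endowment come from the description above, while the function and variable names are ours.

```python
def trust_game_payoffs(endowment, amount_sent, fraction_returned, multiplier=3):
    """Payoffs in a one-shot trust game: the proposer keeps whatever she does not
    send, the sent amount is tripled, and the responder returns a fraction of it."""
    tripled = amount_sent * multiplier
    returned = fraction_returned * tripled
    proposer = endowment - amount_sent + returned
    responder = tripled - returned
    return proposer, responder

# The example from the text: send the full US$10; half of the tripled US$30 is returned.
print(trust_game_payoffs(10, 10, 0.5))   # (15.0, 15.0) -- both walk away with US$15

# Pure self-interest predicts sending nothing (and returning nothing if anything is sent).
print(trust_game_payoffs(10, 0, 0.0))    # (10, 0.0)
```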
Ultimatum game
In the ultimatum game (Güth et al., 1982), the proposer’s role
is to offer a split of the initial endowment between herself and
the responder. The responder can then either accept the split
and both players walk away with their designated portion, or
refuse the split and both players get US$0. As predicted by
neoclassical theory, rationally the responder should accept
whatever the split is, even if it is just US$1, because it is a gain
(US$1 > US$0). However, in reality, if the responder views the
split from the proposer as being ‘unfair’ (typically, an ‘unfair’
split is 35% and under), they will punish the proposer by
refusing the deal. In this case, both players walk away with
nothing (Camerer, 2003). The act of punishing the proposer
for lack of fairness is in itself more rewarding for the responder
than an objective US$1 gain.
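The responder's punishment logic can be sketched as follows. The 35% rejection threshold comes from the description above, while treating it as a hard cutoff is an illustrative simplification.

```python
def ultimatum_payoffs(endowment, offer, rejection_threshold=0.35):
    """One-shot ultimatum game sketch: the responder accepts any offer she does
    not view as 'unfair'; offers at or below 35% of the pie (the threshold cited
    in the text) are rejected here, leaving both players with nothing."""
    if offer / endowment <= rejection_threshold:
        return 0, 0                      # rejected: both walk away empty-handed
    return endowment - offer, offer      # accepted: split as proposed

print(ultimatum_payoffs(10, 5))   # (5, 5) -- an even split is accepted
print(ultimatum_payoffs(10, 1))   # (0, 0) -- a US$1 offer is punished by rejection,
                                  # even though US$1 > US$0 would be 'rational' to take
```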
Dictator game
In the dictator game (Kahneman et al., 1986b), the proposer
becomes the ‘dictator.’ She decides how much of the initial
endowment she wants to share with the responder, if any at
all, and the game ends. The responder has no decision making
power at all. Yet, even in this game, the proposer on average
shares 20–25% of the initial endowment. According to the
rational actor model of neoclassical economics, she should
keep all the money, but this rarely happens. The dictator game
has been cited as demonstrating that the proposer is fundamentally concerned with fairness (Kahneman et al., 1986b).
Neuroeconomics
The field of neuroeconomics evolved from the study of individual decision making in social contexts, building on the
study of social preferences. Neuroeconomics investigates the
physiological and neural underpinnings of the phenomena
described in the behavioral economics literature (e.g., heuristics and biases, fairness, trust, etc.). The major topics of study
are often the same as in behavioral economics (e.g., risk and
uncertainty, loss aversion, intertemporal choice, and social
preferences), but the methodologies used differ. For example,
evaluating the neural correlates of intertemporal choice (e.g.,
using functional magnetic resonance imaging to measure
neural activity in the ventral striatum and medial prefrontal
cortex while subjects make intertemporal decisions) is an area
of study within the field. Whereas behavioral economics traditionally records people’s choices and generates mathematical
models to make predictions, neuroeconomics adds observations of the central and peripheral nervous system to the
explanatory variables. Thereby, neuroeconomics aims to
determine the physiological basis for the observed anomalies
in the rational actor model of neoclassical economics. This
allows neuroeconomics to pursue the investigation of the
reasons for fallacies, in an effort to improve human decision
making. For a comprehensive review of methodologies and
literature in the neuroeconomics field, see Glimcher et al.
(2009).
Summary of Key Learnings from Behavioral Economics
To derive insights that can be applied to the mental health
field, below is a summary of key points from the preceding
sections.
1. People do not always act ‘rationally’ as neoclassical economic models would predict.
2. People have limited cognitive processing power, time, and
information. We have evolved to rely on heuristics to save
resources. Heuristics usually do us good, but they can lead
to systematic biases in judgments.
3. People make decisions based on changes in states or reference points.
a. Losses hurt more than gains feel good.
b. Possessions are subjectively valued more than equivalent items that are not owned.
4. People are influenced by nuances in the way choices are
presented. Different ways of framing a problem can lead to
changes in preference.
5. People have difficulty making decisions with long-term
payoffs, where gratification is delayed. We tend to prefer
immediate smaller rewards to delayed larger rewards.
6. People take into account, and derive utility from, others’
preferences. We are frequently driven by emotions rather
than self-interest.
Health-Related Decision Making
Behavioral economics can contribute to the development of
effective interventions for the mental health field, most notably for behavior changes that, due to self-control
problems, are particularly difficult to achieve. Mental health
domains where this approach holds promise include, but are
not limited to: obesity and eating disorders, medication adherence, substance abuse and addiction, and antisocial behavior. More generally, mental health conditions that have
effects on willpower, temptation, and self-control (where behaviors may significantly deviate from those predicted from
‘rational’ models) can benefit from the application of behavioral economics.
Health-related decisions involve making choices over a
wide variety of options that shift over time and have uncertain
consequences. These decisions frequently present intertemporal choices, where individuals discount health outcomes
and tend to make short-term choices that are detrimental over
the long term. Two important examples for the mental health
field are the growing obesity rate, and the decline in medication adherence. According to the Centers for Disease Control and Prevention (CDC), in 2010 there were 12 states in the United States with
an obesity prevalence (Body Mass Index ≥ 30) of 30% or
more. Medication adherence statistics provide another alarming example. Although chiefly important for those suffering
from mental health and other health conditions, only about
75% of medical prescriptions are ever filled and medications
are not taken as prescribed approximately 50% of the time.
Lifestyle diseases (diabetes, heart disease, and substance abuse
from alcohol use and drug addiction) are responsible for over
60% of global deaths. These premature deaths and illnesses are
chiefly due to unhealthy habits and poor lifestyle choices –
decisions behavioral economic strategies can improve.
The premise of neoclassical theory assumes that people
know what is best for themselves, and that people are able to
act on that understanding. Studies from behavioral economics,
as well as real-world data like that referenced from the CDC,
have repeatedly shown this not to be true. Even when people
are given information about what is best for them, for example, in calorie menu labeling, this information alone does
not have a significant effect on daily food choice (Downs et al.,
2009; Elbel et al., 2009; Giesen et al., 2011). These results
suggest the problem is not lack of information, but rather lack
of self-control. People are often overwhelmed by information
due to limited cognitive processing capacity, and although
they know what is ‘best’ for them they cannot implement
strategies to achieve that optimal outcome. Further, physiological barriers (e.g., hormone imbalances) may inhibit self-control mechanisms and prevent people from acting on the
optimal outcomes (Madden and Bickel, 2010).
Hence, a common theme in health interventions utilizing
behavioral economics is the exploitation of biases with the
intent to help people rather than hurt them. These strategies
focus on incorporating cognitive biases to promote healthier
choices (e.g., Marsch and Bickel, 2001; Logue, 2000; Simpson
and Vuchinich, 2000) and reducing biased choices with incentives for self-control development (Marsch and Bickel,
2001). For example, offering only healthy options in an office
vending machine will encourage healthier snack choices. Behavioral economics has also been used to create a new approach to health behavior policy, called asymmetric
paternalism (Camerer et al., 2003; Loewenstein et al., 2007;
Sunstein and Thaler, 2003). Asymmetric paternalism uses
biases to help people susceptible to self-harmful choices make
better decisions, without interfering with their freedom to
choose. To the extent that the errors in decision making lead
people to behave in ways contrary to their own best interests,
paternalism may prove valuable. For example, a paternalistic
approach would recommend reorganizing cafeterias such that
healthy options are presented first in an effort to help people
make healthier choices (Madden and Bickel, 2010). Simply
making the easy, default choices the optimal decisions can
capitalize on people’s status quo bias (Samuelson and Zeckhauser, 1988), or preference to stay at the current state.
While these applications to the mental health field all
fundamentally involve discrepancies in willpower and self-control, an important consideration is whether people suffering from mental illness have the ability and free
will to make optimal choices on their own. They may have
functional deficits that affect parts of the brain involved in self-control (e.g., the dorsolateral prefrontal cortex) as well as reward processing mechanisms, inhibiting their ability to make
optimal choices. Mental health sufferers may be particularly
disadvantaged in decision making, and thus benefit disproportionately from behavioral economic strategies that
capitalize on cognitive biases to enforce better decision making. In particular, for areas of medication compliance where
most mental health sufferers would rely on adherence, asymmetric paternalistic strategies may prove to be effective (Loewenstein et al., 2007).
See also: Age and Emotion. Alcohol Use Disorders. Altruism.
Behavioral Addiction. Dieting. Disorders of Impulse Control. Eating
Disorders. Self-Regulation. Empathy. Food, Nutrition, and Mental
Health. Learning Disorders and Dyslexia. Major and Mild
Neurocognitive Disorders. Obesity. Tobacco, Nicotine, Health, and
Mental Health. Substance Abuse: Drugs
References
Ainslie, G., Haslam, N., 1992. Hyperbolic discounting. In: Loewenstein, G., Elster, J.
(Eds.), Choice Over Time. New York, NY: Russell Sage Foundation, pp. 57–92.
Allais, M., 1953. Le comportement de l’homme rationnel devant le risque: Critique
des postulats et axiomes de l’école Américaine. Econometrica 21 (4), 503–546.
Bentham, J., 1824/1987. An introduction to the principles of morals and legislation.
In: Mill, J.S., Bentham, J. (Eds.), Utilitarianism and Other Essays.
Harmondsworth: Penguin, pp. S1–S389.
Berg, J., Dickhaut, J., McCabe, K., 1995. Trust, reciprocity, and social history.
Games and Economic Behavior 10 (1), 122–142.
Camerer, C., 2003. Behavioral Game Theory: Experiments on Strategic Interaction.
Princeton, NJ: Princeton University Press.
Camerer, C., Issacharoff, S., Loewenstein, G., O’Donoghue, T., Rabin, M., 2003.
Regulation for conservatives: Behavioral economics and the case for ‘‘asymmetric
paternalism’’. University of Pennsylvania Law Review 151, 1211–1254.
doi:10.2307/3312889.
Cox, J., 2004. How to identify trust and reciprocity. Games and Economic Behavior
46, 260–281.
Croson, R., Konow, J., 2009. Social preferences and moral biases. Journal of
Economic Behavior and Organization 69 (3), 201–212.
Downs, J.S., Loewenstein, G., Wisdom, J., 2009. Strategies for promoting healthier
food choices. American Economic Review 99 (2), 159–164.
Elbel, B., Kersh, R., Brescoll, V.L., Dixon, L.B., 2009. Calorie labeling and food
choices: A first look at the effects on low‐income people in New York City.
Health Affairs 28 (6), w1110–w1121.
Evans, J. St B.T., Over, D.E., 1996. Rationality and Reasoning. Hove: Psychology
Press.
Frederick, S., Loewenstein, G., O’Donoghue, T., 2002. Time discounting and time
preference: A critical review. Journal of Economic Literature 40, 351–401.
Giesen, J.C., Payne, C.R., Havermans, R.C., Jansen, A., 2011. Exploring how calorie
information and taxes on high‐calorie foods influence lunch decisions. American
Journal of Clinical Nutrition 93, 689–694.
Glimcher, P.W., 2003. The neurobiology of visual-saccadic decision making. Annual
Review of Neuroscience 26, 133–179. doi:10.1146/annurev.neuro.26.010302.
081134.
Glimcher, P.W., Camerer, C.F., Fehr, E., Poldrack, R.A., 2009. Neuroeconomics −
Decision Making and the Brain. London: Elsevier Academic Press. ISBN 978-0-12-374176-9.
Güth, W., Schmittberger, R., Schwarze, B., 1982. An experimental analysis of
ultimatum bargaining. Journal of Economic Behavior and Organization 3 (4),
367–388.
Kahneman, D., 2003. A perspective on judgment and choice: Mapping bounded
rationality. American Psychologist 58, 697–720.
Kahneman, D., 2011. Thinking, Fast and Slow. New York, NY: Farrar, Straus and Giroux.
Kahneman, D., Knetsch, J., Thaler, R., 1990. Experimental tests of the endowment
effect and the Coase Theorem. Journal of Political Economy 98, 1325–1348.
Kahneman, D., Knetsch, J., Thaler, R., 1991. Anomalies: The endowment effect, loss
aversion, and status quo bias. Journal of Economic Perspectives 5 (1), 193–206.
Kahneman, D., Knetsch, J.L., Thaler, R., 1986a. Fairness as a constraint on profit
seeking: Entitlements in the market. American Economic Review 76, 728–741.
Kahneman, D., Knetsch, J.L., Thaler, R., 1986b. Fairness and the assumptions of
economics. Journal of Business 59, S285–S300.
Kahneman, D., Tversky, A., 1979. Prospect theory: An analysis of decision under
risk. Econometrica 47 (2), 263–291.
Laibson, D., 1997. Golden eggs and hyperbolic discounting. Quarterly Journal of
Economics 112, 443–477.
Loewenstein, G., Brennan, T., Volpp, K.G., 2007. Asymmetric paternalism to improve
health behaviors. JAMA 298 (20), 2415–2417.
Loewenstein, G., Prelec, D., 1992. Anomalies in intertemporal choice − Evidence
and an interpretation. Quarterly Journal of Economics 107 (2), 573–597.
Logue, A.W., 2000. Self control and health behavior. In: Bickel, W.K., Vuchinich, R.
E. (Eds.), Reframing Health Behavior Change with Behavioral Economics,
pp. 167–192.
Madden, G.J., Bickel, W.K. (Eds.), 2010. Impulsivity: The Behavioral and
Neurological Science of Discounting. Washington, DC: American Psychological
Association.
Marsch, L.A., Bickel, W.K., 2001. Toward a behavioral economic understanding of
drug dependence: Delay discounting processes. Addiction 96, 73–86.
von Neumann, J., Morgenstern, O., 1944. Theory of Games and Economic
Behavior. Princeton, NJ: Princeton University Press.
von Neumann, J., Morgenstern, O., 1947. Theory of Games and Economic Behavior,
second ed. Princeton: Princeton University Press.
Plous, S., 1993. The Psychology of Judgment and Decision Making. New York, NY:
McGraw-Hill.
Samuelson, P., 1937. A note on measurement of utility. Review of Economic Studies
4 (2), 155–161.
Samuelson, W., Zeckhauser, R., 1988. Status quo bias in decision making. Journal
of Risk and Uncertainty 1, 7–59.
Shiv, B., Loewenstein, G., Bechara, A., 2005. The dark side of emotion in decision-making: When individuals with decreased emotional reactions make more
advantageous decisions. Cognitive Brain Research 23, 85–92.
Simon, H., 1957. A Behavioral Model of Rational Choice. Models of Man, Social
and Rational: Mathematical Essays on Rational Human Behavior in a Social
Setting. New York, NY: Wiley.
Simpson, C.A., Vuchinich, R.E., 2000. Temporal changes in the value of objects of choice: Discounting, behavior patterns, and health behavior. In: Bickel, W.K., Vuchinich, R.E. (Eds.), Reframing Health Behavior Change with Behavioral Economics. Englewood Cliffs, NJ: Prentice-Hall, pp. 193–215.
Sloman, S.A., 1996. The empirical case for two systems of reasoning. Psychological
Bulletin 119, 3–22.
Smith, A., 1776. An Inquiry into the Nature and Causes of the Wealth of Nations,
fifth ed. London: Methuen and Co. Reprinted in 1904 from The Library of
Economics and Liberty.
Stanovich, K.E., West, R.F., 2002. Individual differences in reasoning: Implications
for the rationality debate? In: Gilovich, T., Griffin, D.W., Kahneman, D. (Eds.),
Heuristics and Biases: The Psychology of Intuitive Judgment. New York, NY:
Cambridge University Press, pp. 421–440.
Sunstein, C.R., Thaler, R.H., 2003. Libertarian Paternalism Is Not an Oxymoron.
University of Chicago Law Review 70 (4), 1159–1202.
The Royal Swedish Academy of Sciences, 2002. The Bank of Sweden Prize in
Economic Sciences in Memory of Alfred Nobel 2002 (Press Release). Available
at: http://www.nobelprize.org/nobel_prizes/economic-sciences/laureates/2002/
press.html (accessed 15.12.13).
Tversky, A., Kahneman, D., 1974. Judgment under uncertainty: Heuristics and
biases. Science 185 (4157), 1124–1131. doi:10.1126/science.185.4157.1124.
Tversky, A., Kahneman, D., 1981. The framing of decisions and the psychology of
choice. Science 211 (4481), 453–458.
Varian, H., 1992. Microeconomic Analysis, third ed. New York, NY: Norton.