Soc (2014) 51:393–400
DOI 10.1007/s12115-014-9798-6
SYMPOSIUM: THE ACHIEVEMENT OF AMITAI ETZIONI
Treating Rationality as a Continuous Variable
Amitai Etzioni
Published online: 3 July 2014
© Springer Science+Business Media New York 2014
Abstract Granted, Behavioral Economics has demonstrated that "people" (implying all) are unable to act as strong definitions of rationality assume. Their cognitive limitations are "hard wired." However, Behavioral Economics' own data show that important segments of the population find "the" rational answer to choices posed to them. How do these findings square with the thesis that limitations are hard wired and universal? More attention should be paid to the extent to which various people deviate from the rational choice, and to whether training can improve performance despite the claim that flaws are hard wired.

Keywords Behavioral economics · Rationality · Training · Methodology
This article argues that (a) behavioral economics (BE) has
produced an unusually robust body of evidence about human
deliberations and decision making (in short, choice behavior);
that (b) this body of evidence requires a fundamental change
in the ways one views and studies choice behavior; that (c) the
same body of evidence is open to a major misperception; and
that (d) closing the door that leads to this misperception—by
drawing on BE’s own data—provides for additional, major
steps forward in the study of choice behavior.
Behavioral Economics: Unusually Robust
and Transformative
The key finding of BE—that people have built-in, or hardwired, major cognitive biases that prevent them from making
choices rationally—has been replicated by numerous scholars
from a variety of countries and cultures (Schwartz
2008; Korobkin and Ulen 2000).1 It has furthermore been
supported by experiments conducted in “the field” as well as
under laboratory conditions (Hursh and Roma 2013). It thus
exhibits a level of robustness not often found in the social
sciences—or even in the “hard” sciences. One study of preclinical cancer research found that, of 53 studies that contributed “landmark” advances to the field, only six could be
replicated—a mere 11 % (Begley and Ellis 2012).
Drawing on these findings, BE demonstrated that a key
assumption of neoclassical economics, sometimes referred to
as mainstream economics in the United States, and its allied
fields such as law and economics and public choice theory, is
unsustainable (Mueller 2004).2 BE challenges the prevailing
paradigm of neoclassical economics by showing that there are
“three important ‘bounds’ on human behavior, bounds that
draw into question the central ideas of utility maximization,
stable preferences, rational expectations, and optimal processing of information. People can be said to display bounded
rationality, bounded willpower, and bounded self-interest”
(Jolls et al. 1998:1471). Behavioral economics is one subdiscipline that adheres to the “complexity approach” to economics, a paradigm that is “grounded on sharply different microfoundations and methodology [than the Neoclassical/
Samuelsonian paradigm]” and represents a fundamental “shift
from the neoclassical focus” (Fontana 2010). Davis asserts
that behavioral economics “involve[s] significant departures
from the neoclassical economics as generally understood" (Davis 2006).

Footnote 1: "There is simply too much credible experimental evidence that individuals frequently act in ways that are incompatible with the assumptions of rational choice theory."

Footnote 2: One of the two "relatively recent challenges to the neoclassical orthodoxy" that has "taken hold of a non-negligible minority of the profession" is behavioral economics. In some cases, neoclassical economics has fought back by citing evidence that behavioral economics only provides a good model for inexperienced consumers and investors.

Richard Thaler, Cass Sunstein, and Daniel
Kahneman (2011) use the terms “Econ” and “Human” to
stress this transformative finding. “Econ,” they write, is short
for Homo economicus—“the notion that each of us thinks and
chooses unfailingly well, and thus fits within the textbook
picture of human beings offered by economists.” Human, on
the other hand, is “the name used by the authors for homo
sapiens, or ‘real people’. Real people have trouble with long
division if they don’t have a calculator, sometimes forget their
spouse’s birthday and have a hangover on New Year’s Day.”
We are all “real people,” behavioral economists find, not the
artifacts conjured by economics.
Initially, BE also held that its findings reveal that cognitive biases, rather than emotions, are the main reasons that people make poor choices (some write "are irrational"). Some still hold to this observation (Dictionary.com, 2013). However, Kahneman explains that more recently BE has come to recognize that, in addition to suffering from cognitive limitations, humans are also influenced by their emotions (personal communication, Kahneman 2011). This issue is not further explored in this article.
Cognitive Biases: Predetermined or Predispositional?
The cognitive biases documented by behavioral economics
are variously referred to as “hard-wired” (Powell et al. 2011)3
or “systematic”4 (Pesendorfer 2006). Behavioral economists
do not devote much text to defining or studying these concepts
because they are used in the same way they are employed by other sciences and defined in a standard dictionary. For instance, "hard-wired" is understood to mean "pertaining to
or being an intrinsic and relatively unmodifiable behavior
pattern” (Dictionary.com, 2013), “genetically or innately determined” (Merriam-Webster Online Dictionary, 2013) or “determine[d] or put into effect by physiological or neurological
mechanisms[;] automatic or innate” (The Free Online Dictionary, 2013). Vis (2011) summarizes the state of the field by
writing, “An increasing amount of work drawing on evolutionary biology and neuroeconomics suggests that we, so to
speak, cannot help ourselves [from displaying cognitive
biases], as this behavior is hardwired.” (Seymour and Dolan
(2008) report from a neurobiological perspective that the
behavioral anomalies documented by behavioral economics
can be traced to processes in the amygdala.)
Footnote 3: "Instead of trying to fix the hardwired errors of individual cognition, organizations should focus on managing the psychological architecture of the choice environment."

Footnote 4: Zarri (2010) holds that one of the two major groups of behavioral economists is primarily concerned with the discussion and documentation of "major cognitive limitations and systematic biases in decision making."
One may hold that behavioral economists who use these
phrases mean to suggest that cognitive biases are merely
predispositions rather than determinative attributes, although
one is hard put to find a behavioral economist who explicitly
makes this argument. However, if these are merely predispositions, the next step is clearly to identify under what conditions cognitive biases are activated, remain dormant, or are
overcome, as well as to determine the strength of people’s
predispositions. Can training, education, and self-awareness
keep them at bay? Stanovich and West (1998) addressed this
issue, but the question has since largely been ignored by the
mainstream of BE scholarship, as shall be seen.
The point spills over from BE’s academic publications into
the more popular media. One business journal writes that
“irrational decision-making under risk seems to be hardwired into our brains” (Rockford and Gray 2012). An NPR
article holds that “behavioral economics [argues] that the
human animal is hard-wired to make errors when it comes to
decision-making,” and that “the human brain is hard-wired to
make serious errors" (Spiegel 2009). An article in the
Washington Post states, “The notion that we’re hard-wired to
make poor decisions is a central tenet of investor psychology”
(Frick, 2010).
Nuclear biology distinguishes between genes or constellations of genes that determine individual attributes and those
that merely predispose an individual to a particular attribute.
Behavioral economics seems to imply that the strong and pervasive cognitive biases it has uncovered are determinative. The extent to which they can be corrected, if at all, is less clearly articulated, a point further explored below. However, much of BE leaves itself open to the perception that because biases are hardwired, they are insurmountable.
Universal or Particularistic?
The sciences differ a great deal in terms of the attributes on
which they focus. Some focus on attributes of all the members
of the category the given science studies (Kahneman 2003)5;
other sciences focus on attributes particular to subcategories of
individual members of the species. The differences between
universalistic and particularistic attributes become salient if
one asks—who has these cognitive biases? Behavioral economics answers clearly: humans, “people,” the whole species.
This generalization, as will shortly be seen, is both validated
by the evidence and open to a major misinterpretation.
Footnote 5: "Psychology offers integrative concepts and mid-level generalizations, which gain credibility from their ability to explain ostensibly different phenomena in diverse domains."

A review of ten of the most influential behavioral economics articles, as compiled by Mark Egan, a PhD student in behavioral science, and Liam Delaney, a professor at Stirling University, finds that nine of the articles made generalizations that "people" (Thaler 2008), "consumers" (Schmidt and Fehr 1999), or "the average" individual (Thaler 1980)
display generic or specific cognitive biases. Several scholars
imply that these limitations are universalistic by stating that
people “generally” (Thaler 2008) are subject to these cognitive biases, and pay little mind to exceptions—or at the very
least, fail to acknowledge their importance. Thus, in a very
often cited article, Tversky and Kahneman (1981) write that "people exhibit patterns of preference which appear
incompatible with expected utility theory,” and that “people
generally evaluate…” In another article, they write that “decision makers systematically violate [the] basic tenets [of
rational choice theory],” that “people overweight” certain
probabilities, and that “people employ a variety of heuristic
procedures” (Tversky and Kahneman 1992). Thaler (2008), in
describing his theory of consumer behavior, writes that “people appear to…” and that “people generally will demand…”
Loewenstein et al. (2001) write that “people react…” Sunstein
(2013) writes that “Behavioral economists have emphasized
that in important contexts, people err.” Rarely, terms such as
“often” are used, implying that there are exceptions; however,
when examined in the context of the article, it becomes clear
that the term is used stylistically rather than to characterize,
let alone study, those who do not exhibit such biases.
The same universalistic pattern is evident in a second
sample of articles examined—the ten most downloaded articles (since 2010) using the keyword “behavioral economics”
at the Social Science Research Network. Nine of the ten articles (most of which deal with the subfield of behavioral finance) were kept as part of the sample; the tenth was dropped because
it is a short outline of points rather than a report of findings or
analysis. Of these remaining nine, all used generalizing
phrases about people and cognitive biases. Sunstein (2013)
writes that “people make choices that are not in their interest.”
Ricciardi and Simon (2000) hold that “human beings” and
“people” have a tendency to behave contrary to the tenets of
rational choice. Thaler et al. (2010) report straightforwardly
that “Humans make mistakes.”
The strong tendency to address the species rather than
particularistic differences within it is understandable given
the context in which BE developed. BE is clearly in dialogue
(some may argue that it is more appropriate to say it is in a
clash) with neoclassical economics. Given that economists
deal with “people”—assumed to be rational, able to make
the necessary calculations and choices to maximize the utility
they seek (Fehr and Schmidt 1999)6—it is enough for BE to
show that “people” are unable to meet these requirements, to
demonstrate that the core economic assumption is unsustainable. It does not matter, from this perspective, whether the cognition of all people is flawed in the same measure or whether some are less biased than others. To put it differently, if rationality were a variable whose score ranges from zero to ten, economists would hold that people who score ten are rational. It is enough for behavioral economists to show that people score less than ten to disprove the economists' assumption. It matters not if people score a nine or a two on average, or if some people score a nine while others score a two, or even if a minority of people scores a ten. However, for all purposes other than disproving neoclassical economics, such variations matter a great deal.

Footnote 6: "Almost all economic models assume that all people are exclusively pursuing their material self-interest…"
Behavioral Economics Data Support the Particularistic Distribution Thesis
BE's own data do not support a "universalizing" conclusion, and instead show that important segments of the population are either able to act rationally according to BE's own criteria—or at least, their cognitive limitations are much less limiting than those of others. Indeed, a few scholars have questioned "the magnitude and pervasiveness of many of these deviations" (Robinson and Hammitt 2011). The discussion below turns to asking what determines these differences.
An early study of confirmation bias found that students
confronted with fictional studies showing or disproving that
capital punishment has a deterrent effect on crime were more
likely to find the studies unconvincing if the “evidence”
contradicted their original personal opinion. However, the
same study found that people were still influenced to some
degree “in the direction” of the presented evidence even if
they had initially disagreed (Lord et al. 1979). Confounding
factors that might lead people both to adopt their original
opinion and to agree with evidence in support of that original
position—say, normative commitments to particular values,
or perspectives about what constitutes a strong argument—are
not considered, even though the issue under consideration is
one highly charged with ethical considerations. An experiment by Darley and Latané that proposed the concept of
bystander apathy found that 31 % of individuals who thought
that they were one of six participants did respond to a fellow
participant’s apparent seizure, while 85 % of those who
thought that they were the only one capable of responding
did so. (These figures, however, should not be taken to mean
that all of the participants responded at the same time. Some
responded more rapidly than others.) Nisbett and Borgida
(1975) found in their study that only four of fifteen participants immediately responded to their comrade’s distress. Neither experiment explains the behavior of those who responded
in spite of the chance to act as a bystander—27 % in the latter
study and 31 % in the former. In a study at Harvard, Princeton,
and MIT, behavioral economists asked students to answer the
following simple math question:
Author's personal copy
396
A bat and a ball cost $1.10. The bat costs one dollar
more than the ball. How much does the ball cost?
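For reference, the arithmetic behind the intended answer can be written out; this is a standard algebraic reading of the problem, not part of the original study materials. Letting b denote the cost of the ball in dollars,

b + (b + 1.00) = 1.10  ⟹  2b = 0.10  ⟹  b = 0.05,

so the ball costs $0.05 and the bat $1.05; the intuitively tempting answer of $0.10 would make the total $1.20.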
The study found that "more than 50 %" of students at top-tier universities—and more than 80 % at less prestigious institutions—gave the incorrect answer (Kahneman 2003).7 Neither
the authors nor many who cite their work note that nearly 50 %
of the students at top universities and nearly 20 % of students
elsewhere answered correctly. In Chabris’ and Simons’ famous
“invisible gorilla” experiment, approximately 50 % of people—
with no apparent pattern as to age, gender, or education level—
tasked with counting the number of basketball passes in a video
didn’t remember seeing a woman in a gorilla costume who
appears on screen for 9 seconds (Young 2011). However, this
means that about 50 % did see the “invisible gorilla.” Another
study examining the oft-cited phenomenon of preference reversals found that preference reversal occurred at an overall rate of
46 % (Tversky et al. 1990)—which of course means that over
half of the time, the study participants displayed consistent
preferences and thus did not reveal their cognitive bias. In the
often-cited “Linda” study, the majority—64 %—of study participants correctly concluded that the likelihood of Linda being
a bank teller was greater than the likelihood of Linda being a
bank teller and a feminist (Kahneman 2011). Even more
stunningly, Cosmides and Tooby (1996) found that between
76 and 92 % of study participants displayed proper Bayesian
reasoning (answered “correctly”) when the statistical problem
at hand was framed in terms of frequencies rather than probabilities, suggesting that the “cognitive bias” documented by
behavioral economics may in fact be an artifact of BE’s experimental methods. None of these behavioral economics articles
calls attention to or studies those who get the right answers,
even when they comprise the majority of the participants.
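To illustrate the framing difference Cosmides and Tooby examined, consider a standard false-positive problem of the kind discussed later in this article; the numbers below are commonly used textbook values, given only for illustration. In probability format: a disease affects 1 in 1,000 people, the test has a 5 % false-positive rate and, for simplicity, detects every true case; what is the chance that a person who tests positive actually has the disease? By Bayes' rule,

P(disease | positive) = (1 × 0.001) / (1 × 0.001 + 0.05 × 0.999) ≈ 0.02.

In frequency format the same problem nearly answers itself: of 1,000 people, 1 has the disease and tests positive, while roughly 50 of the remaining 999 test positive falsely, so only about 1 of 51 positives is a true case, again roughly 2 %.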
BE has been criticized on the grounds that many of its
studies rely on experiments that use highly contrived conditions and trivial stimuli, and that draw conclusions from very small
samples. A survey of psychological studies found that psychologists have persistently selected “samples so small that
they exposed themselves to a 50 % risk of failing to confirm
their true hypotheses!” (Kahneman 2011) Kahneman (2011)
notes that many behavioral economists, himself included,
similarly failed to use standard methods of selecting sample
sizes, thus leading to inappropriately small samples. For example, Tversky and Kahneman’s (1992) often-cited study that
proposed a two-phase model of decision making was based on
a survey of 25 students. The finding that memories are shaped
most strongly by the end of the event is based on a study of two colonoscopy patients (Kahneman 2010).
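The "50 % risk" in the quoted survey is a statement about statistical power, and it can be illustrated with a short calculation. The sketch below is offered only as an illustration under assumed values (a "medium" effect size and about thirty participants per group); it does not reproduce any of the studies discussed here.

# A minimal power calculation (Python, statsmodels). The effect size and
# group size are assumed, illustrative values, not figures taken from any
# study cited in this article.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of a two-sample t-test: assumed "medium" effect (Cohen's d = 0.5),
# 32 participants per group, conventional alpha of 0.05.
power = analysis.power(effect_size=0.5, nobs1=32, alpha=0.05)
print(round(power, 2))  # roughly 0.5

# Group size needed per group to reach the conventional 80 % power target.
n_needed = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(round(n_needed))  # roughly 64

With these assumed values the power is close to 0.5, the coin-flip situation the quoted survey describes: even a true hypothesis has only about an even chance of being confirmed.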
Footnote 7: 47 of 93 students in the Princeton sample and 164 of 293 in the University of Michigan sample gave the wrong answer.

Hence the importance of the Investment Company Institute's 2006 study of 401(k) participation, which is based on "real" field conditions and uses a very large population (Holden et al. 2006).
(Sizes of studies in the paper ranged from 1.1 million to 3.5
million.) The ICI study (and similar studies) is often cited to
show that “people” do not act rationally—because they do not
put money into their 401(k) accounts in spite of the fact that they
provide significant tax deductions or deferments. Sunstein and
Thaler (2008), for example, in their acclaimed book, Nudge,
argue that “people” need paternalistic libertarian “nudges” to
make the correct decision to contribute to 401(k) plans. However,
the ICI study by Holden et al. (2006) found that approximately
70 % of eligible employees participated in 401(k) plans in
2003—only 30 % did not. (Nor is it obvious that all these people
should have enrolled given their economic circumstances.) In
another study, cited by Thaler and Sunstein (2008), the authors
stress that almost 50 % of eligible employees fail to participate in
401(k) plans. Note should be taken that over 50 % of eligible
employees did participate.
In short, both experiments and “field” studies show that
important segments of “the people” act rationally even by BE
standards because they got the correct answers on the various
questions or tests administered by behavioral economists or
acted as a rational person is expected to. This contrasts with
the way behavioral economists state their findings, which
leaves them open to the misperception that they mean that
nobody, or almost nobody, is able to think or act rationally
because they are hardwired to make systematic mistakes.
Knowing Should Be Treated as a Continuous Variable
Many of BE's experiments and much of the reporting of its findings are cast in terms that suggest that rationality is a dichotomous variable: either one is rational or not. (What is rational, though, is somewhat pliable, as indicated for instance by the term "bounded rationality.") This further reflects the fact that, for a behavioral economist, showing that people are not rational suffices to falsify the key assumption of neoclassical economics; it matters not how many people err or to what extent—at least as long as the errors are more than trivial, such that neoclassical economists cannot absorb them by referring to "imperfect knowledge" as if they were a rounding error that can be ignored.
In contrast to the assumptions of BE, it should be assumed
that rationality and cognitive biases—and their application in
choice behavior—are continuous variables (Korobkin and
Ulen 2000).8 (Footnote 8: Russell B. Korobkin and Thomas S. Ulen acknowledge the fallaciousness of this dichotomy. "To claim that rational choice theory is an insufficient behavioral model on which to base legal policy is not to argue that individuals behave irrationally (although they certainly do in some circumstances). Rather, it is to assert that legal scholars seeking to understand the incentive effects of law in order to propose efficacious legal policy should not be limited to rational choice theory.") The term "knowing" is next used to refer to the
result of collecting and processing information and subsequently drawing conclusions. The term “knowledge” is used
rather than “information,” because information is the raw
material from which knowledge might be made. However,
one may have the wrong kind or wrong amounts (e.g.
excessive) of information, or may process it poorly; this is
the merit of keeping the two concepts distinct. The merit of
viewing rationality as a continuous variable is demonstrated
by one of Kahneman, Tversky, and Slovic’s own early experiments about preference reversal. In the experiment, people
ranked two options and were later asked to price similar
options without being given specific prices from which to
choose. The study showed that people priced the options in
ways that were inconsistent with the way that they had earlier
ranked their preferences. Because price is a continuous variable in this experiment, it is theoretically possible to show that
some people price the options “worse” (more inconsistently)
than others do and therefore display more severe preference
reversal than others. Heberlein and Bishop conducted an
experiment in which people were asked to give the amount
that they would pay to obtain a hunting permit, and then asked
to state for how much they would sell the same permit assuming they possessed one. They found that on average people
would pay $31 to buy and would sell for $143 (Bishop and
Heberlein 1979). How much, then, is the permit actually
"worth"? Using the continuous variable of price here makes
it possible to see just how “poor” subjects are at making
decisions. Those who would buy at the same price at which
they would sell the permit—say, they would buy it at $31 and
sell it for $31—would score high on the rationality variable.
Those who would buy at $31 and sell at $50 are not rational,
but are better knowers than those who would buy at $31 and
sell for $143. Similarly, Thaler and Sunstein (2008) speak
about the number of people who participated in a 401(k) plan
in a particular year as evidence of cognitive biases. However,
surely there is a difference in the level of rationality displayed
by someone who enrolls in a 401(k) plan immediately upon
becoming eligible—the “perfectly rational” choice, given that
benefits begin accruing immediately, chosen by less than 20 %
of people—and someone who procrastinates up to 36 months
before eventually making the “correct” choice—a path taken
by the remaining 45 % of those who participated in a 401(k)
according to a study by Madrian and Shea (2000). Thus, the
65 % of people who participate in a 401(k) are not all equally
limited by their cognitive biases. Some manage to overcome
their biases in order to make a decision that is close to the one
expected from rational actors—others, meanwhile, are much
worse knowers and score much lower on the rationality
variable.
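To make the idea of a continuous rationality score concrete, the sketch below shows one possible, purely hypothetical way to turn the buying/selling gap and the enrollment delay just discussed into scores between 0 and 1. The scoring formulas are assumptions introduced here for illustration; they are not measures proposed in the studies cited above.

# Hypothetical, illustrative scores that treat rationality as a continuous
# variable (0 = maximally inconsistent, 1 = fully consistent). The formulas
# are assumptions made for illustration, not measures used in the cited studies.

def wtp_wta_score(willingness_to_pay, willingness_to_accept):
    """Consistency of buying and selling prices for the same good."""
    low, high = sorted([willingness_to_pay, willingness_to_accept])
    return low / high if high > 0 else 1.0

def enrollment_delay_score(delay_months, max_delay_months=36.0):
    """Promptness of enrolling in a plan one eventually joins."""
    return max(0.0, 1.0 - delay_months / max_delay_months)

# Figures from the hunting-permit and 401(k) examples discussed above:
print(wtp_wta_score(31, 31))              # 1.0: buys and sells at the same price
print(wtp_wta_score(31, 50))              # 0.62: a moderate buy/sell gap
print(round(wtp_wta_score(31, 143), 2))   # 0.22: the average gap Bishop and Heberlein report
print(enrollment_delay_score(0))          # 1.0: enrolls immediately upon eligibility
print(enrollment_delay_score(36))         # 0.0: procrastinates the full 36 months

Scores of this kind would allow one to ask who falls where in the distribution and why, rather than only whether "people" pass or fail.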
One may accept that generally humans are poor knowers—
that is, many make serious mistakes—and nonetheless recognize that on some subjects important segments of the public do
get it right, and above all that people vary a great deal in the
extent to which they err—or know. This raises two questions that behavioral economists have barely begun to address: given the findings just cited, the biases do not seem to be hardwired. What, then, drives them, and what accounts for the differences? And above all, to what extent can the biases be corrected through education, training, structural arrangements, or some other means?
What is Responsible for the Differences?
What accounts for the differences in cognitive performance is
a key question for those who view limitations as a distribution,
but an irrelevant consideration for those who view them as
universal attributes of all humans. Given that the evidence
mustered by BE and other disciplines strongly indicates that
there are significant differences in the degree to which individuals are “poor knowers,” much more attention must be paid
to the variables that influence cognitive abilities. That people
at “top tier” universities made many fewer errors—a difference of 30 percentage points—while solving a simple math
question than did other students (Kahneman 2003) suggests
that education or selection may be two such variables. Other such variables seem to include emotions (Kahneman 2011).9 Social norms and cultural factors may affect levels of self-control or the ability to defer gratification, which in turn may affect the level of poor knowing. The same may hold for age and structural factors (Forbes 2005)—all deserve more attention within the BE context.

Footnote 9: 47 of 93 students in the Princeton sample and 164 of 293 in the University of Michigan sample gave the wrong answer, with payoffs.
Hard-wired or Therapeutic?
Apart from contributing to a better understanding of the
causes of human cognitive flaws, identification of the said
variables is relevant to determining the extent to which these
flaws can be overcome. True, BE seems correct to conclude
that in spite of training, education, and other provisions, most
people will remain incapable of acting in ways neoclassical
economists would consider “rational”—that is, score the maximum on the knowing variable. However, if it is possible to
improve people's knowing from a very poor level to a significantly higher one, that is surely worth finding out.
Behavioral economists have not dedicated much attention
to this question, which is understandable because if one can
overcome biases they are not hardwired. Indeed, Kahneman
(2011) writes that some of his research “supports the uncomfortable conclusion that teaching psychology is mostly a waste
of time” because knowledge of one’s cognitive biases does
little to ameliorate them. He writes that it may be possible to
“discipline” one’s intuition, but he admits that even he, one of
the foremost scholars of behavioral science, still “finds it
unnatural to [implement them].” Trout (2005) finds that biases
“are virtually as stable, durable, and universal as reflexes” and
are “extremely difficult for individuals to correct, and so are,
for practical purposes, psychologically incorrigible.” He thus
recommends “institutional prosthetics” to mitigate their
effects. Thaler and Sunstein (2008) write in Nudge that although education is the “obvious” answer to the question of
how to correct cognitive bias, “the evidence does not suggest
that education is, in and of itself, an adequate solution.”
Casscells, Schoenberger, and Grayboys found, in a study of
sixty medical students and faculty, that only 18 % answered a
simple statistical problem involving false positive rates correctly (Kaplan and Du 2009)—suggesting that even those who
are scientifically trained are subject to cognitive bias. A study
by British scientists Rabipour and Raz (2012), using the
“largest sample size ever used in cognitive research,” found
“no significant increase in general cognitive performance
following 6 weeks of online training."10 (Footnote 10: The study had several limitations, but the general findings of the report were widely circulated in the British media.) An article by Jolls
and Sunstein (2004) suggests that it is possible to reduce or
even “eliminate” the effects of cognitive bias through the law,
but this approach emphasizes correcting for cognitive biases
rather than training individuals to overcome them. Indeed, the
concept of libertarian paternalism is predicated on the idea that
rather than attempting to teach people to overcome their
biases, one should work around them by changing their choice
architecture and by using other methods that ensure favorable
outcomes without relying upon individuals’ deliberations
(Thaler and Sunstein 2008). However, the more one notes
that BE's own data show significant differences in the poverty of decision making—no one is rich, but some are less poor than others—the more one must wonder what role education and training play in these differences.
BE has established, through a very robust set of data, that a
key assumption of neoclassical economics—namely, that people are rational actors—is untenable. BE did so by showing
that “people” have multiple, systematic, hard-wired major
cognitive biases. However, in the process, BE greatly
overstated its case by a) largely ignoring those significant
parts of the population that by its own data did act “rationally,”
and b) ignoring significant differences in the magnitude of the
participants’ deviations from that which BE considers
“rational.”
The article led to the following exchange with the editor of
the Review of Behavioral Economics, Barkley Rosser, which
helps clarify the issues at hand.
He wrote:
“While it is true that many behavioral economists sloppily
talk about average behaviors, most fully recognize that there
are considerable variations of this, and there is a considerable
literature in which efforts are made to measure these degrees
of rationality or intelligence in connection with various
behaviors.
The other is that I think this matter of what “rationality” is is
more complicated than you have let on and is in fact multidimensional, although it may be continuous on most of these. It
would seem that you focus on basic ability to calculate, obviously highly correlated with math IQ, but there are many other
sources of “irrationality” than just this, although it is an important one and one that can be measured better than others.”
I responded: "Could you share with me who measures degrees of rationality? Of course, only someone completely clueless would treat rationality as equivalent to intelligence."
Barkley Rosser responded:
“The idea that rationality in various forms varies continuously, or at least varies systematically in various ways across
individuals, was present from the very beginning of behavioral economics as a self-conscious approach initiated by my late
friend whom I imagine you also knew, Herb Simon. It is
present even in his 1947 Administrative Behavior. He posed
bounded rationality as reflecting both access to information
and computational ability as determining how far boundedly
rational behavior would deviate from full rationality.
So, in his classic 1955 article, 69(1), “A behavioral model
of rational choice,” he states “Broadly stated, the task is to
replace the global rationality of economic man with a kind of
rational behavior that is compatible with the access to information and the computational capacities that are actually
possessed by organisms, including man, in the kinds of environments these organisms exist." This clearly implies variations of these "computational capacities," with no reason to
assume they are not reasonably continuous (and IQ certainly is
continuous, which is correlated with “computational capacity,” if imperfectly so).
Another area where this comes up is in discussions of
rational expectations. Thus in a classic paper testing rational
expectations that also cites Simon’s Nobel Prize address
(AER, Sept. 1979), Michael C. Lovell finds that the hypothesis fails, but he notes that there are many competing alternatives that people use, with some coming closer to the hypothesis than others, clearly implying variations in the degree of
rationality, “Tests of the Rational Expectations Hypothesis,”
AER, March 1986, 76, 110–124. This is a much cited paper,
with others following it.
Somewhat related to that is a large and ongoing literature
that allows for heterogeneous agents who vary in their willingness to switch strategies in financial markets based on the
relative performance of those strategies, with an instantaneous
such willingness identified with “Chicago-style full rationality,” but with this measure varying continuously and crucial to
the dynamic patterns observed in markets. A classic that
underlay the widely studied Santa Fe stock market model is
William A. Brock and Cars Hommes, “Heterogeneous beliefs
and routes to chaos in a simple asset pricing model,” Journal
of Economic Dynamics and Control, 1998, 22, 1236–1274.
Finally, we have the massive literature on the ultimatum
game, where the discussion becomes entangled with whether
deviations from Nash equilibrium “rational” strategies are
done or not for reasons of a “taste for fairness,” with these
now observed to vary widely across societies. An early study
that criticized the ultimatum game results and argued that
making it into a two-stage game would lead to much more
behavior that was rational a la Nash, “gamesman rational”
rather than “fairnessman irrational,” with this varying across
individuals is Ken Binmore, Avner Shaked, and J. Sutton,
“Testing noncooperative bargaining theory: A preliminary
study,” AER, Dec. 1985, with discussions in terms of degrees
of rationality on p. 1180. The literature on this is also enormous and ongoing, with the more complicated arguments
about “what is rational” involved, quite aside from recognizing that there are degrees of such rationality across individuals, however “rationality” is defined.”
I suggested that:
"The fact that various scholars defined rationality differently, and that some definitions set the bar lower than others, does not address the question of measuring the degree of rationality
of any particular kind. This is like when one suggests that now
that we know that oranges are acidic, one should measure
differences in degree of acidity and what causes them—one
is told that there are differences in acidity among oranges,
tangerines, and grapefruits. Well put, but not to the point.
Thus Herb Simon may talk about bounded rationality as
distinct from optimal rationality—but does not tell us who has
less vs more bounded rationality and so on and on.
Ultimately, the reason to learn why some people are somewhat more rational than others is to help people become less irrational. However, to proceed it is not enough to show that training and education do not come easily; one must develop more effective measures, given that none of them are very potent."
Barkley Rosser should have the last word:
"I completely agree that "rationality" is multi-dimensional,
and from Simon on various people have indeed identified
different forms of rationality. Different aspects can be measured in different ways, and those who have sought to be more
specific about doing so in a continuous manner, as some have,
have had to be specific about the specific instrument used to
make that measurement.”
Further Reading
Begley, C. G. & Ellis, L. M. 2012. Drug development: Raise standards for
preclinical cancer research. Nature, 483, 531–533.
Bishop, R. C. & Heberlein, T. A. 1979. Measuring Values of Extramarket
Goods: Are Indirect Measures Biased? American Journal of
Agricultural Economics, 61(5), 926–930.
Cosmides, L., & Tooby, J. 1996. Are human beings good intuitive
statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty. Cognition, 58(1), 1–73.
Darley, J. M., & Latané, B. 1968. Bystander Intervention in Emergencies:
Diffusion of Responsibility. Journal of Personality and Social
Psychology, 8(4), 377–383.
Davis, J. B. 2006. The turn in economics: neoclassical dominance to
mainstream pluralism? Journal of Institutional Economics, 2(1), 1–
21.
Fehr, E., & Schmidt, K. M. 1999. A Theory of Fairness, Competition, and
Cooperation. Quarterly Journal of Economics, 114(3), 817–868.
Fontana, M. 2010. Can neoclassical economics handle complexity? The
fallacy of the oil spot dynamic. Journal of Economic Behavior &
Organization, 76(3), 584–596.
Forbes, D. P. 2005. Are some entrepreneurs more overconfident than
others? Journal of Business Venturing, 20(5), 623–640.
Frick, B. 2010. How to Outsmart your Biases. Washington Post,
September 25.
Holden, S., Brady, P., & Hadley, M. 2006. 401(k) Plans: A 25-Year
Retrospective. Investment Company Institute Research
Perspective, 12(2), 1–40. Retrieved from http://www.ici.org/pdf/
per12-02.pdf.
Hursh, S. R., & Roma, P. G. 2013. Behavioral Economics and Empirical
Public Policy. Journal of the Experimental Analysis of Behavior,
99(1), 98–124.
Jolls, Christine, & Sunstein, Cass R. 2004. Debiasing Through Law.
Working Paper. Retrieved from http://www.ftc.gov/be/
seminardocs/04sunstein.pdf.
Jolls, C., Sunstein, C., & Thaler, R. 1998. A Behavioral Approach
to Law and Economics. Stanford Law Review, 50(5), 1471–
1548.
Kahneman, D. 2003. Maps of Bounded Rationality: Psychology for
Behavioral Economics. The American Economic Review, 93(5),
1449–1475.
Kahneman, D. 2011. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Kahneman, D. 2011. Personal email to the author.
Kahneman, D. 2010. The riddle of experience vs. memory. Palm Springs, FL. Retrieved from http://www.ted.com/talks/daniel_kahneman_the_riddle_of_experience_vs_memory.html.
Kaplan, J., & Du, J. 2009. Question Format and Representation: Do
Heuristics and Biases Apply to Statistics Students? Statistics
Education Research Journal, 8(2), 56–73.
Korobkin, R. B., & Ulen, T. S. 2000. Law and Behavioral Science:
Removing the Rationality Assumption from Law and Economics.
California Law Review, 88(4), 1051–1144.
List, J. A. 2004. Neoclassical theory versus prospect theory: Evidence
from the marketplace. Econometrica, 72(2), 615–625.
Loewenstein, G. F., Weber, E. U., Hsee, C. K., & Welch, N. 2001. Risk as
Feelings. Psychological Bulletin, 127(2), 267–286.
Lord, C. G., Ross, L., & Lepper, M. R. 1979. Biased Assimilation and
Attitude Polarization: The Effects of Prior Theories on Subsequently
Considered Evidence. Journal of Personality and Social
Psychology, 37(11), 2098–2109.
Madrian, B. C. & Shea, D. F. 2000. The Power of Suggestion: Inertia in
401 (K) Participation and Saving Behavior. National Bureau of
Economic Research.
Mueller, D. C. 2004. Models of man: neoclassical, behavioural,
and evolutionary. Politics, Philosophy and Economics, 3(1),
59–76.
Nisbett, R. E., & Borgida, E. 1975. Attribution and the Psychology of
Prediction. Journal of Personality and Social Psychology, 32(5),
932–943.
Pesendorfer, W. 2006. Behavioral Economics Comes of Age: A Review
Essay on Advances in Behavioral Economics. Journal of Economic
Literature, 44(3), 712–721.
Powell, T. C., Lovallo, D., & Fox, C. R. 2011. Behavioural Strategy.
Strategic Management Journal, 32(13), 1369–1386.
Rabipour, S., & Raz, A. 2012. Training the brain: Fact and fad in
cognitive and behavioural remediation. Brain and Cognition,
79(2), 159–179.
Ricciardi, V., & Simon, H. K. 2000. What is Behavioral Finance?
Business, Education, and Technology Journal, 2(2), 1–9.
Robinson, L. A., & Hammitt, J. K. 2011. Behavioral Economics and
Regulatory Analysis. Risk Analysis, 31(9), 1408–1422.
Rockford, M., & Gray, S. 2012. How could they be so stupid? What
were they thinking? The Business Journals. Retrieved from http://
www.bizjournals.com/bizjournals/how-to/marketing/2012/11/howcould-they-be-so-stupid-what-were.html?page=all.
Schmidt, K. M., & Fehr, E. 1999. A theory of fairness, competition, and
cooperation. The Quarterly Journal of Economics, 114(3), 817–868.
Schwartz, H. 2008. A Guide to Behavioral Economics (pp. 998–999).
Falls Church: Higher Education Publications.
Seymour, B., & Dolan, R. 2008. Emotion, Decision Making, and the
Amygdala. Neuron, 58(5), 662–671.
Spiegel, A. 2009. Using Psychology To Save You From Yourself. NPR.
Retrieved from http://www.npr.org/templates/story/story.php?
storyId=104803094.
Stanovich, K. E., & West, R. F. 1998. Individual Differences in Rational
Thought. Journal of Experimental Psychology, 127(2), 161–188.
Stojanovic, B. 2013. The Riddle of Thinking: Thinking, Fast and Slow.
Panoeconomicus, 2013(4), 569–576.
Sunstein, C. R. 2013. The Storrs Lectures: Behavioral Economics and
Paternalism. Yale Law Journal, 122(7), 1826–1899.
Thaler, R. H. 2008. Mental accounting and consumer choice. Marketing
Science, 27(1), 15–25.
Thaler, R. H. 1980. Toward a positive theory of consumer choice. Journal
of Economic Behavior and Organization, 1(1), 39–60.
Thaler, R. H., & Sunstein, C. R. 2008. Nudge: Improving Decisions
About Health, Wealth, and Happiness (pp. 108–109). New Haven:
Yale University Press.
Thaler, R. H., Sunstein, C. R., & Balz, J. P. 2010. Choice
Architecture. University of Chicago Booth School of Business
Working Paper.
The Economist. 2013. How science goes wrong. Retrieved from http://
www.economist.com/news/leaders/21588069-scientific-research-haschanged-world-now-it-needs-change-itself-how-science-goes-wrong.
Trout, J. D. 2005. Paternalism and cognitive bias. Law and Philosophy,
24(4), 393–434.
Tversky, A., & Kahneman, D. 1981. The Framing of Decisions and the
Psychology of Choice. Science, 211(4481), 453–458.
Tversky, A., & Kahneman, D. 1992. Advances in Prospect Theory:
Cumulative Representation of Uncertainty. Journal of Risk and
Uncertainty, 5(4), 297–323.
Tversky, A., Slovic, P., & Kahneman, D. 1990. The Causes of
Preference Reversal. The American Economic Review, 80(1),
204–217.
Vis, B. 2011. Prospect Theory and Political Decision Making. Political
Studies Review, 9(3), 334–343.
Young, B. 2011. The Invisible Gorilla. Dental Products Report, 45(4).
Zarri, L. 2010. Behavioral economics has two ‘souls’: Do they both
depart from economic rationality? The Journal of Socio-Economics, 39(5), 562–567.
Amitai Etzioni is a University Professor at the George Washington
University and author of The Active Society and New Common Grounds,
among other books.