A neuroscientific perspective on human decision making

J.E. Korteling, A.-M. Brouwer, A. Toet

TNO, Soesterberg, The Netherlands

Abstract
Human reasoning shows systematic and persistent simplifications (‘heuristics’) and
deviations from ‘rational thinking’ that may lead to suboptimal decisional outcomes
(‘cognitive biases’).
There are currently three prevailing theoretical perspectives on the origin of heuristics and
cognitive biases: a cognitive-psychological, an ecological and an evolutionary perspective.
However, none of these viewpoints provides an overall explanatory framework for the
underlying mechanisms of cognitive biases.
Here we propose a neuroscientific framework for cognitive biases, arguing that many
cognitive biases arise from intrinsic brain mechanisms that are fundamental to biological
neural networks and that originally developed to perform basic physical, perceptual, and
motor functions.
Keywords
cognitive biases, heuristics, decision making, neural networks
In daily life, we constantly make judgments and decisions (either conscious or unconscious)
without knowing their outcome. We typically violate rules of logic and probability and resort
to simple and near-optimal heuristic decision rules (‘mental shortcuts’) to optimize the
likelihood of an acceptable outcome, which may be effective in conditions with time constraints, information overload, or when no optimal solution is evident (Gigerenzer &
Gaissmaier, 2010). We are also inclined to use heuristics when problems appear familiar and
when we do not feel the need to gather additional information. Although heuristics can result
in quite acceptable outcomes in everyday situations, and when the time costs of reasoning are
taken into account, people often violate tenets of rationality in inadvisable ways (Shafir &
LeBoeuf, 2002). The resulting suboptimal decisions are known as ‘cognitive biases’ (Tversky
& Kahneman, 1974). Using heuristics, we typically feel quite confident about our decisions
and judgments, even when evidence is scarce and when we are aware of our cognitive
limitations (Risen, 2015). As a result, cognitive biases are quite pervasive and persistent.
Also, they are surprisingly systematic: in a wide range of different conditions, people tend to
use similar heuristics and show the same cognitive biases (Kahneman, 2011; Shafir &
LeBoeuf, 2002).
According to Lieder, Griffiths, Huys, and Goodman (2017), the discovery of cognitive biases and the ensuing doubt about human rationality shake the foundations of economics, the social sciences, and rational models of cognition. If the human mind does not follow the principles
of rationality, then there is little ground for deriving unifying laws of cognition from a set of
basic axioms (Lieder et al., 2017). In our opinion, it is therefore important to gain more
insight into the origins and underlying mechanisms of heuristics and cognitive biases. This
may provide more guidance for how to explain and model cognitive processes in order to
adequately understand and predict human behavior. In this connection, the literature presents
three prevailing viewpoints on the origin of heuristics and cognitive biases: a cognitive-psychological perspective, an ecological perspective, and an evolutionary perspective. These
three perspectives are complementary and emphasize different aspects of heuristic thinking.
However, they provide no general explanation of why and when biases occur, and why they are
so persistent and consistent over different people and conditions. They also do not provide a
unifying framework. These shortcomings have stimulated an ongoing discussion in the
literature on the extent to which human decision making, in general, is either irrational
(biased) or rational and effective (heuristic). However, until now this debate has not resulted
in an overall explanatory model or framework with axioms and principles that may form a
basis for the laws of cognition and human (ir)rationality. In order to provide this guidance,
we will outline a first neuroscientific framework of principles based on intrinsic brain
mechanisms that are fundamental to biological neural networks.
CURRENT PERSPECTIVES ON COGNITIVE BIASES
The cognitive-psychological (or heuristics and biases) perspective (Evans, 2008; Kahneman
& Klein, 2009) attributes cognitive biases to limitations in the human information processing
capacity (Broadbent, 1958; Kahneman, 1973; Norman & Bobrow, 1975; Simon, 1955). In
this view, we tend to use simple heuristics in complex, unfamiliar, uncertain, and/or time-constrained situations because we can only process a limited amount of the available
information (‘limited-’ or ‘bounded rationality’: Gigerenzer & Selten, 2002; Kahneman,
1973; Norman & Bobrow, 1975; Simon, 1955). Decision errors or biases may then occur
when relevant information is ignored or inappropriately weighted and when irrelevant
information interferes (Evans, 2008; Kahneman, 2003). This view has resulted in dual-process heuristic-deliberate theories that postulate a distinction between fast, intuitive,
automatic, heuristic, and emotionally charged (heuristic or ‘Type 1’) processes versus slow,
conscious, controlled, deliberate and analytic (deliberate or ‘Type 2’) processes (Evans,
2008). When fast decisions are required, performance is based on low-effort heuristic
processes. In complex and/or unfamiliar situations, or when sufficient time is available,
deliberate processes may monitor and revise the output of the (default) heuristic system
(Evans, 1984, 1989; Kahneman, 2003; Kahneman, 2011). In this view, biases occur in these
situations when deliberate processing either (1) fails to successfully engage (Kahneman,
2003) or (2) fails to override the biased heuristic response (Evans & Stanovich, 2013). The
slower deliberate processes rely on time- and resource-consuming serial operations and are
constrained by the limited capacity of the central working memory (Baddeley, 1986;
Baddeley & Hitch, 1974; Evans & Stanovich, 2013). Conversely, heuristic processes do not
demand executive working memory resources and operate implicitly, in parallel, and are
highly accessible (De Neys, 2006; Kahneman, 2003).
The ecological perspective points out that heuristics can be quite effective in practical and
natural situations (Gigerenzer, 2000; Gigerenzer & Gaissmaier, 2010; Goldstein &
Gigerenzer, 2002). This perspective attributes cognitive biases to a mismatch between
heuristics and the context or the environment in which they are applied (Klein, 1993; Klein, 1998; Klein, 2008). Biases occur when people apply experience-based heuristics (‘Naturalistic
Decision Making’: Klein, 1993; Klein, 1998; Klein, 2008) in unknown or unfamiliar
conditions that do not match their mental model, as is the case in experiments performed in
most artificial or laboratory settings, or when people lack relevant expertise. In this view,
people use effective heuristics (acquired through learning or experience) that are based on the
spatiotemporal regularities of the context and environment in which they routinely live and
work (‘ecological rationality’). Only through extensive experience and dedicated learning can
difficult cognitive tasks become more intuitive (Klein, 1998; Klein, 2008). A prerequisite for
the development of adequate heuristics (i.e., for effective learning) is that the environment
should be sufficiently stable and predictable, providing adequate feedback and a high number
and variety of experiences (Klein, 1998; Shanteau, 1992). After sufficient training, experts
will usually effectively rely on heuristic processing but may switch to deliberate reasoning
when they notice that they are relatively unfamiliar with a given issue (‘adaptive rationality’).
The evolutionary perspective attributes cognitive biases to a mismatch between evolutionarily
developed heuristics (‘evolutionary rationality’: Haselton et al., 2009) and the current context
or environment (Tooby & Cosmides, 2005). In this view, the same heuristics that optimized
the chances of survival of our ancestors in their (natural) environment can lead to
maladaptive (‘biased’) behavior when they are used in our current (artificial) settings.
Examples of such heuristics are the action bias (a penchant for action even when
there is no rational justification: Patt & Zeckhauser, 2000), loss aversion (the disutility of
giving up an object is greater than the utility associated with acquiring it: Kahneman &
Tversky, 1984) and the scarcity heuristic (a tendency to attribute greater subjective value to
items that are more difficult to acquire or in greater demand: Mittone & Savadori, 2009).
While the evolutionary and ecological perspectives both emphasize that bias only occurs
when there is a mismatch between heuristics and the situations in which they are applied,
they differ in the hypothesized origin of this mismatch: the ecological perspective assumes
that adequate heuristics are lacking and/or inappropriately applied, while the evolutionary
perspective assumes that they are largely genetically determined. In both views, biases may
also represent experimental artifacts that occur when problems (that can in principle be
solved in a rational way) are presented in unusual or unnatural formats, as in many laboratory
studies (Haselton et al., 2009; Haselton, Nettle, & Andrews, 2005).
While all three perspectives on cognitive biases may to a certain extent explain why human
judgment and decision making so consistently show deviations from rational thinking, they
also have their limitations. For instance, limited-capacity as a central explanatory construct
easily leads to tautological explanations. Experimental findings of heuristics and bias are
'explained' by limited capacity in processing resources, while this latter kind of capacity
limitation is inferred from the empirical fact of the former. This is a classical circulus
vitiosus: what has to be explained (limited output capacity) is part of the explanatory
construct (limited input capacity). For this reason, it becomes difficult to predict when a
given decision task sufficiently exceeds our cognitive capacity, or when a problem differs
sufficiently from a given context to induce biased reasoning. More importantly, they do not
explain why biases, such as superstition or overconfidence, persistently occur even when we
are (made) aware of them and when we have ample knowledge and time to do better (Risen,
2015). The psychological and ecological viewpoints also do not explain why heuristics and biases are so consistent and specific across many different people and situations, i.e., why people so systematically show the same typical idiosyncrasies in reasoning and judgment
tasks (e.g., Kahneman, 2011; Shafir & LeBoeuf, 2002). This seems in strong contradiction
with the fact that in other kinds of tasks different people make a broad variety of different
errors when these tasks are difficult or unfamiliar. In addition, most explanations of cognitive
biases are also highly specific and not easily generalized, in some cases even contradicting
their own rationale and empirical facts. For instance, some heuristics and biases typically
seem to include an increase rather than decrease in the amount and complexity of information
that needs to be processed. Examples are the base rate neglect (specific information is
preferred over general information: Tversky & Kahneman, 1982), the representativeness bias
(the probability that an entity belongs to a certain category is assessed by its similarity to the
typical features of that category instead of its simple base rate: Tversky & Kahneman, 1974),
the conjunction fallacy (a combination of conditions is considered more likely than only one
of those conditions: Tversky & Kahneman, 1983) and the story bias (consistent and
believable stories are more easily accepted and remembered than simple facts: Dawes, 2001;
Turner, 1996). Heuristics are used and biases also occur in relatively simple and obvious
decision situations without time pressure, such as loss aversion, the endowment effect
(ascribing more value to items solely because you (almost) own them: Kahneman, Knetsch,
& Thaler, 1990), or the omission bias (deliberate negligence is favored over an equally
wrongful act: Spranca, Minsk, & Baron, 1991). Finally, biases are also seen in the behavior
of higher as well as lower animals and in artificial neural networks. Especially for animals,
the literature shows many examples of bias (e.g., Chen, Lakshminarayanan, & Santos, 2006;
Dawkins & Brockmann, 1980; Lakshminaryanan, Chen, & Santos, 2008). These biases are
usually explained on the basis of evolutionary survival principles. But recently human-like
stereotyped biases have also been observed in the context of artificial neural networks, e.g. in
machine learning (Caliskan, Bryson, & Narayanan, 2017). Apart from the operation of basic
evolutionary mechanisms, it seems highly unlikely that animals show biases by developing
human-like smart and practical (‘shortcut’) solutions in their decision making processes. The
same holds for bias-like phenomena in artificial neural networks. Most of the aforementioned
objections concern the psychological and, to a lesser degree, the ecological explanations.
However, while the evolutionary perspective in its present form may explain many
behavioral biases, including those in animals, it only seems relevant for a limited number of
cognitive biases (Haselton et al., 2009).
A NEUROSCIENTIFIC PERSPECTIVE ON COGNITIVE BIASES
To enhance our understanding of cognitive heuristics and biases, we propose a
neuroscientific perspective that explains why our brain systematically tends to default to
heuristic decision making. In this view, human reasoning is affected by ‘hard-wired’ design
characteristics of neural information processing, which (almost) inevitably induce deviations
from the laws of logic and probability. This perspective complements the earlier three
viewpoints and provides more fundamental and general insights into cognitive heuristics and
biases, building on (and consistent with) basic biological and physiological principles.
The neuroscientific perspective elaborates partly on the evolutionary viewpoint by assuming
that the functioning of our brain is based on universal mechanisms that are typical for all
biological neural networks, such as coincidence detection, facilitation, adaptation, and
reciprocal inhibition. During the evolution of (higher) animals these mechanisms provided us
with complex capabilities (e.g., motor skills, pattern recognition, and associative information processing) that are essential for maintaining our physical integrity (Damasio, 1994) in our (natural) environment (e.g., searching for food, detecting danger, fight or flight). As
Churchland (1987) remarked: “The principal function of nervous systems is to get the body
parts where they should be so that the organism may survive.” These basic physical and
perceptual-motor skills are relatively distinct from the cognitive functions like analytic and
symbolic reasoning that developed much later from these ancient neural mechanisms. As
formulated by Damasio (Damasio, 1994, p. 128): “Nature appears to have built the
apparatus of rationality not just on top of the apparatus of biological regulation, but also
from it and with it.” Despite their primal origin, perceptual-motor processes are highly
complex and depend on the continuous parallel processing of massive incoming sensory data
streams. The computational complexity of these processes becomes clear when we attempt to
simulate them in logical machines (robots, computers). Biological neural networks, on the other hand, continuously and efficiently perform this kind of processing without any
conscious effort. Hence, computational complexity of a task does not necessarily imply
subjective difficulty. However, in contrast to logical machines, our brain is less well equipped
to perform cognitive tasks that require rational thinking (e.g., calculation, statistics, analysis,
reasoning, abstraction, conceptual thinking) and that have only become essential for
‘survival’ in relatively modern civilizations. Our rational or ‘higher-order’ cognitive abilities
are (from an evolutionary viewpoint) very recently acquired capabilities, developed from
neural mechanisms that were originally designed to perform more basic physical, perceptual
and motor functions (Damasio, 1994). Using our brain for rational thinking is thus much like
using tongs to hammer a nail: it is feasible, and you may succeed in driving the nail in, but not
without difficulty since tongs were simply not designed for this purpose. Similarly, our
neuroscientific perspective suggests that biased decision making results from a mismatch
between the original design characteristics of our brain as a neural network for maintaining
physical integrity and the nature of many rational problems. It explains why we can easily
and effortlessly perform computationally complex perceptual-motor tasks (typically
involving massively parallel data streams), whereas we may experience great difficulty in
solving computationally simple arithmetic or logical problems. It also explains why some
heuristics and biases include an increase in the amount and complexity (but not difficulty) of
information processing, rather than a decrease.

The neuroscientific perspective proposed here relates the occurrence of many cognitive
biases to four principles that are characteristic of all biological neural networks, and thus
also for our ‘higher’ cortical functions: (1) Association (Bar, 2007), (2) Compatibility
(Thomson, 2000), (3) Retainment (Kosslyn & Koenig, 1992), and (4) Cognitive Blind Spots.
We will discuss each of these four principles in the next sections. These characteristics of neural information processing will always affect our reasoning in ways that violate
the rules of logical analysis and probability.

The Association Principle: On the basis of correlation and coincidence detection, the brain “searches” associatively for relationships, coherence, and patterns in the available stimulus flux.

The brain (like all neural networks) functions in a highly associative way. Correlation and
coincidence detection are the basic operations of neural functioning, as manifested in e.g.
Hebb’s rule (Hebb, 1949; Shatz, 1992), the ‘Law of Effect’ (Thorndike, 1927, 1933),
Pavlovian conditioning (Pavlov, 2010), or autocorrelation (Reichardt, 1961). As a result, the
brain automatically and subconsciously ‘searches’ for correlation, coherence, and (causal)
connections; it is highly sensitive to consistent and invariant patterns. Association forms the
basis of our unequaled pattern recognition capability, i.e., our ability to perceive coherent
patterns and structures in the abundance of information that we absorb and process. The
patterns we perceive can be based on many relationships, such as covariance, spatiotemporal
coincidences, or similarity in form and content. Because of this associative way of perceiving, understanding, and predicting the world, we tend to arrange our observations into
regular, orderly relationships and patterns and we have difficulty in dealing with randomness,
unpredictability, and chaos. Even when these relations and patterns are accidental, our brain
will be inclined to see them as meaningful, characteristic, and causally connected (Beitman,
2009). In line with this association principle, we tend to classify events, objects or individuals
into coherent categories that are determined by stereotypical combinations of features and
traits. Examples of heuristics and bias resulting from associative information processing are
the control illusion (people tend to overestimate the degree to which they are in control: Langer, 1975), superstition (Risen, 2015), spurious causality (seeing causality in
unconnected correlations), the conjunction fallacy (Tversky & Kahneman, 1983), the
representativeness heuristic (Tversky & Kahneman, 1981), and the previously mentioned
story bias. We will now discuss the superstition bias (as an example of the association
principle) in more detail to show how it can be explained in terms of neural mechanisms.
Many people associate their success in a sports or gambling game with whatever chance
actions they performed immediately before winning. Subsequently they tend to repeat these
same actions every time they play. This may even lead to widely shared rituals. A few
accidental co-occurrences of a certain behavior and a favorable outcome may already suffice
to set up and maintain this kind of behavior in spite of many subsequent contradicting
instances. Performing their rituals tends to give people a feeling of control, even if there is no
relation with the actual outcome of their following actions. According to Risen (2015) this
‘magical thinking’ is surprisingly ordinary. In line with the Hebb doctrine (Hebb, 1949), the
neuroscientific perspective contributes to an explanation of these phenomena by the way (the
weight of) connections between neurons are affected by covarying inputs. Synaptic strengths
are typically altered by either the temporal firing pattern of the presynaptic neuron or by
neuromodulatory neurons (Marder & Thirumalai, 2002). Neurons that repeatedly or
persistently fire together, change each other’s excitability and synaptic connectivity
(Destexhe & Marder, 2004). This basic principle, i.e. “cells that fire together, wire together”
(Shatz, 1992), enables the continuous adaptation and construction of neural connections and
associations based on simultaneous and covarying activations. This serves the formation of
connections between functional subsystems based on covarying environmental inputs. This
forms the basis of human learning processes, such as classical, operant and perceptual-motor
learning: relationships are discovered and captured in the connectionist characteristics of cells
and synapses. This basic principle of neural information processing makes the brain prone to capitalize on correlating and covarying inputs (Gibson, 1966; Gibson, 1979), causing a
strong tendency to associatively perceive coherence, (causal) relationships, and patterns, and
build neural connections on the basis of these, even where there are none.
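To make the Hebbian account above concrete, the following minimal sketch (our own illustration, not taken from the cited studies; the unit names, learning rate, and decay constant are illustrative assumptions) simulates a single Hebbian weight coupling a 'ritual' unit to a 'favorable outcome' unit. A few chance co-activations are enough to install a strong association, and because weakening is only a slow passive decay, the association persists long after the coincidences stop.

```python
w = 0.0        # synaptic weight coupling a "ritual" unit to a "good outcome" unit
eta = 0.5      # Hebbian learning rate (illustrative value)
decay = 0.01   # slow passive decay per trial (illustrative value)

weights = []
for trial in range(100):
    ritual = 1.0
    outcome = 1.0 if trial < 5 else 0.0   # only the first 5 trials co-occur by chance
    w += eta * ritual * outcome           # "cells that fire together, wire together"
    w = min(w, 1.0)                       # keep the weight bounded
    w *= (1.0 - decay)                    # unlearning is only a slow passive decay
    weights.append(w)

print(f"after the 5 chance coincidences: {weights[4]:.2f}")
print(f"after 95 further trials without reinforcement: {weights[-1]:.2f}")
# A handful of coincidental pairings installs a strong association that fades
# only slowly, mirroring the persistence of superstition-like beliefs.
```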

The Compatibility Principle: What is selected and picked up from the environmental inputs is highly determined by the compatibility (match) with the momentary state and connectionist properties of the neural network.

The brain is not like a conventional repository or hard disc that can take up and store any
information that is provided, almost indifferently of its characteristics. Instead, it is an
extremely complex construction that requires new or additional information to be compliant with its existing state (like the game ‘Tetris,’ in which falling blocks must be manipulated to fit into the pre-existing ‘construction’). What is selected, processed, and
associatively integrated is determined by the compatibility (match) with the brain’s
momentary state and connectionist characteristics (next to stimulus characteristics like the
conspicuity of the target in its context). Based on our competences and preconceptions
(resulting from training and experience) we predominantly see what we expect to see. If a
hammer is all you have, everything resembles a nail. This principle of compatibility in neural
information processing implies a compulsion to be consistent with what we already know,
think or have done, resulting in a tendency to ignore or overlook relevant information
because it does not match with our current mindset. The most well-known biases resulting
from this principle are the confirmation bias (a tendency to search for, interpret, focus on and
remember information in a way that confirms one's preconceptions: Nickerson, 1998) and
cognitive dissonance (a state of having inconsistent thoughts and a resulting tendency to
search for and select consistent information: Festinger, 1957). Other examples of this kind of
biased reasoning are the curse of knowledge (difficulty with taking the perspective of less-informed people: Kennedy, 1995) and the sunk-cost fallacy (tendency to consistently
continue a chosen course with negative outcomes rather than alter it: Arkes & Ayton, 1999).
Elaborating on the confirmation bias as an example: When people perceive information, they
tend to selectively notice examples that are consistent with (confirm) their existing
superstitious intuitions. This may be explained by the fact that neural networks are more
easily activated by stimulus patterns that are more congruent with their established
connectionist properties or their current status. An example is priming (exposure to one
stimulus influences the response to another: Bao, Kandel, & Hawkins, 1997; Meyer &
Schvaneveldt, 1971). For example, a word is more quickly and easily recognized after the
presentation of a semantically related word. When a stimulus is experienced, subsequent
experiences of the same stimulus will be processed more quickly by the brain (Forster & Davis, 1984). Here, a situational context activates connected neural (knowledge) structures (characteristics, patterns, stereotypes, similarities, associations) in an automatic and unconscious manner (Bargh, Chen, & Burrows, 1996). In general, neural information
processing is characterized by processes such as potentiation and facilitation (Bao et al.,
1997; Katz & Miledi, 1968). Neural facilitation means that a post-synaptic action potential
evoked by an impulse is increased when that impulse closely follows a prior one. This is
caused by a residue of ‘active calcium’ entering the terminal axon membrane during the nerve impulse, leading to an increased release of neurotransmitter (Katz & Miledi, 1968).
Hence, a cell is activated more easily (its activation threshold is lower) directly after a prior
activation. These short-term synaptic changes support a variety of computations (Abbott &
Regehr, 2004). Potentiation works on similar principles but on longer time scales (tens of
seconds to minutes: Bao et al., 1997). Long-term Hebbian forms of plasticity such as
potentiation make the processing of incoming information more efficient and effective when
this information complies with previous activations (Destexhe & Marder, 2004; Wimmer &
Shohamy, 2012). We suppose that these kinds of processes may form the basis for our
tendency to interpret and focus on information that confirms previously established
perceptions, interpretations or conclusions (i.e., the confirmation bias).
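As a rough illustration of how facilitation-like dynamics could favor compatible inputs, the toy model below (our own sketch; the function, time constant, and threshold values are illustrative assumptions rather than measured parameters) gives a unit a facilitation trace that decays exponentially after each activation, so that a stimulus arriving shortly after a related 'prime' crosses the response threshold more easily than the same stimulus arriving in isolation.

```python
import math

def facilitated_response(base_input, time_since_prime, tau=0.5, boost=0.4, threshold=1.0):
    """Response of a unit whose excitability is transiently raised after a prior activation.

    base_input:       strength of the current stimulus
    time_since_prime: seconds since a related (priming) activation
    tau, boost:       decay time constant and peak size of the facilitation trace (illustrative)
    """
    facilitation = boost * math.exp(-time_since_prime / tau)
    effective_input = base_input + facilitation
    return effective_input >= threshold, round(effective_input, 2)

# The same moderately strong stimulus is recognized when it closely follows a
# related prime, but not when it arrives in isolation, long after any prime.
print(facilitated_response(base_input=0.7, time_since_prime=0.1))   # (True, 1.03)
print(facilitated_response(base_input=0.7, time_since_prime=5.0))   # (False, 0.7)
```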

The Retainment Principle: New information (relevant or irrelevant) entering the brain is integrated and ‘captured’ in existing neural circuits and cannot simply be erased or ignored.

Although the brain associatively ‘searches’ for information that is consistent and compatible with its current state, it cannot completely disregard irrelevant, erroneous, or redundant information. All stimuli entering the nervous system affect its physical-chemical structure and thereby its connectionist properties. Therefore, with regard to neural information processing, a hardware-software distinction does not apply: information is structurally encoded in ‘wetware’ (Kosslyn & Koenig, 1992). Unlike a computer program, once
information has entered the brain, it cannot simply be ignored or put aside. It always has an
effect on the existing neural circuitry related, or relevant, to this input. As a consequence, it is
nearly impossible to carry out an assignment like: “Do not think of a pink elephant”. Once
perceived and processed by the brain, information is captured and retained and cannot easily
be erased or ignored. This means that new perceptions and feedback tend to have persisting
(‘anchoring’) effects, which affect judgment and decision-making. Biased reasoning then
occurs when irrelevant or misleading information associatively interferes with this process.
Examples of this type of biased reasoning are the anchoring bias (decisions are biased toward
previously acquired information or the ‘anchor’: Furnham & Boo, 2011), the endowment
effect, the hindsight bias (the tendency to erroneously perceive events as inevitable or more
likely once they have occurred: Hoffrage, Hertwig, & Gigerenzer, 2000; Roese & Vohs,
2012), and the outcome bias (the perceived quality of a decision is based on its outcome
rather than on the - mostly less obvious - factors that led to the decision: Baron & Hershey,
1988). The Hebbian principles of neural plasticity imply that the accumulation and
processing of information necessarily causes synaptic changes, thereby altering the dynamics
of neural networks (Destexhe & Marder, 2004). When existing circuits are associatively
activated by new related inputs, their processing characteristics and outcomes (estimations,
judgments, decisions) will also be affected. This may form the basis of, for example, the
hindsight and outcome biases. Unlike conventional computer programs, the brain does not
store new information independently and separately from old information. New information is
associatively processed in existing circuits (‘memory’), which are consequently modified
(through Hebbian learning). This neural reconsolidation of memory circuits, integrating new inputs with (related) existing representations, makes the exact representation of the original information principally inaccessible to the brain (Hoffrage et al., 2000). In
behavioral terms: since hindsight or outcome knowledge is intrinsically connected to the
original memories, new information received after the fact influences how the person
remembers the event. Because of this blending of the neural representations of initial
situations and outcomes the original representations have to be reconstructed, which may
cause a bias towards the final state. This may easily result in the tendency to see past events
as having been predictable at the time they occurred, and in the tendency to weigh the
ultimate outcome in judging the quality of a previous course of events (outcome bias).
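A minimal way to picture this blending of old and new traces (again our own toy sketch, with the blending rate and the election scenario as illustrative assumptions) is a stored probability estimate that is partially overwritten each time related outcome information is reconsolidated into the same circuit; afterwards, 'remembering' the original estimate can only return the updated trace, which has drifted toward the known outcome.

```python
def reconsolidate(stored_estimate, outcome_evidence, blend=0.3):
    """Blend a stored memory trace with new, related input (blend rate is illustrative)."""
    return (1.0 - blend) * stored_estimate + blend * outcome_evidence

# Before an election, a person judges candidate A's chance of winning at 0.4.
memory_trace = 0.4

# Candidate A wins; the outcome (1.0) is repeatedly processed and, via
# reconsolidation, blended into the very circuits that hold the old estimate.
for _ in range(5):
    memory_trace = reconsolidate(memory_trace, outcome_evidence=1.0)

print(f"recalled 'original' estimate: {memory_trace:.2f}")   # ~0.90
# The person now 'remembers' having seen the win coming all along (hindsight
# bias), because the original trace can only be reconstructed from its
# updated, outcome-contaminated form.
```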

The Cognitive Blind Spot Principle: “What you see is all there is” (WYSIATI); only what pops up in our thought processes seems to exist; that other possibly relevant information may exist beyond our view or span of knowledge is insufficiently recognized (and included in our decisions).

When making a decision, the brain is not a logical system that systematically takes into
account and weighs all relevant information. Instead, when making decisions, we tend to rely
on conclusions that are based on limited amounts of consistent information rather than on
larger bodies of less consistent data (illusion of validity: Kahneman & Tversky, 1973). This
overall tendency to overvalue a limited amount of readily available information and to ignore
other sources brought Kahneman to the notion of “What you see is all there is” (WYSIATI:
Kahneman, 2011): only what pops up when making a decision ‘exists’; other information that
may be useful but which is not directly available is (largely) ignored or not recognized. This
is much like the blind spot in our eyes: the obscuration of the visual field due to the absence
of light-detecting photoreceptor cells at the location where the optic nerve passes through the
optic disc of the retina. As a result, the corresponding part of the visual field (roughly 7.5° high
and 5.5° wide) is invisible. This local blindness usually remains unnoticed until one is
subjected to specific tests (Wandell, 1995). A major example of this ‘blind spot’ principle is
the availability bias. This is the tendency to judge the frequency, importance or likelihood of
an event by the ease with which relevant instances come to mind, which is highly determined
by their imaginability or retrievability (Tversky & Kahneman, 1973, 1974). The availability
bias causes us to ignore the impact of missing data: “Out of sight, out of mind” (Heuer, 2013).
Also, we have a poor appreciation of the limits of our knowledge, and we usually tend to be
overconfident about how much we already know. As stated by Kahneman (2011), the
capacity of humans to believe the unbelievable on the basis of limited amounts of consistent
information is in itself almost inconceivable (i.e., overconfidence bias). We are overconfident
about the accuracy of our estimations or predictions (prognosis illusion) and we pay limited
attention to factors that hamper the accuracy of our predictions (overconfidence effect: Moore & Healy, 2008). Other examples of WYSIATI-driven biased reasoning are the survivorship bias (a
tendency to focus on the elements that survived a process and to forget about those that were
eliminated: Brown, Goetzmann, Ibbotson, & Ross, 1992), the priority heuristic (decisions are
based on only one dominant piece of information: Brandstätter, Gigerenzer, & Hertwig,
2006), the omission bias, the fundamental attribution error (explaining own behavior by
external factors and other people’s behavior by personal factors: Ross, 1977), regression
fallacy (failing to account for relatively less visible random fluctuation: Friedman, 1992),
and the familiarity heuristic (familiar items are favored over unfamiliar ones: Park & Lessig,
1981). How the blind spot principle in neural processing may explain our tendency to focus on and overvalue readily available information can be demonstrated by the way formerly established preconceptions associatively enter a judgment and deliberation process and thus affect the resulting decision. In general, when people think
about their superstitious intuitions, they are likely to automatically remember examples that
support these. And when a compatible experience has recently reinforced such a superstitious belief, according to the Hebb rule it may be more easily activated again (compatibility).
Moreover, these reconsolidating activations may also enhance inhibitory collateral outputs,
which for example mediate the mechanism of reciprocal inhibition (Cohen, Servan-Schreiber,
& McClelland, 1992). Reciprocal inhibition in neural circuits involves a mutual inhibition of
(competing) neurons proportionally to their activation level. In this way (groups of) neurons
that are slightly more active can quickly become dominant. Reciprocal inhibition amplifies
initially small differences (e.g., Mach Bands) and accelerates discriminatory processes
leading to dominant associations (‘winner takes all’). These dominant associations may
suppress the activation and retrieval of other (possibly contradicting) associations. This
suppression may prevent conflicting data or memories from coming to mind (‘blind spots’). It
may also explain why we are unable to simultaneously perceive contradicting interpretations
in ambiguous stimuli like the Necker Cube. This basic neural mechanism of contrast
enhancement by reciprocal inhibition makes us base our decisions on a limited amount of consistent information while we remain unaware of the fact that we fail to consider additional relevant data or alternative interpretations. In this way, a limited set of relatively strong ideas, habits, or intuitions may easily dominate our decision-making processes by suppressing alternative but weaker ones.
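The winner-take-all effect of reciprocal inhibition can be illustrated with a two-unit toy network (our own sketch; the update rule, gain, and inhibition strength are illustrative assumptions): each unit excites itself and inhibits the other in proportion to its activity, so a marginally stronger association ends up fully dominant while its competitor is suppressed.

```python
def winner_take_all(a, b, gain=1.2, inhibition=0.6, steps=30):
    """Two units that excite themselves and inhibit each other (reciprocal inhibition).

    Constants are illustrative; activities are clipped to the range [0, 1].
    """
    for _ in range(steps):
        new_a = max(0.0, gain * a - inhibition * b)
        new_b = max(0.0, gain * b - inhibition * a)
        a, b = min(new_a, 1.0), min(new_b, 1.0)
    return round(a, 2), round(b, 2)

# Two interpretations (or memories) start with almost identical support.
print(winner_take_all(a=0.52, b=0.48))
# -> (1.0, 0.0): the marginally stronger association becomes fully dominant,
#    while the competing one is suppressed and never 'comes to mind'.
```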
CONCLUSION
Our tendency to use heuristics that seem to violate the tenets of rationality may often produce
fairly acceptable outcomes in everyday situations and/or when decision situations are
complex, new, uncertain, or time-constrained. At the same time, we may rely on heuristic decision strategies that do not match the characteristics of the problem situation, which may produce suboptimal outcomes, or bias. Because of this contradiction, it is difficult to conclude whether human decision making should generally be considered rational or not. Whether a decision is rational depends on the context and
on the standard chosen to make this qualification. Therefore, instead of arguing about the fundamental question of whether human heuristic thinking should be considered rational or
irrational, we have focused on the origins and theoretical explanation of the heuristic
characteristics of human (ir)rationality.
The current three explanatory perspectives attribute biases in human decision making to
cognitive capacity limitations and mismatches between available or used heuristics (that are
optimized for specific conditions) and the conditions in which they are actually applied. As
noted before, this does not explain why human decision making so universally and
systematically violates the rules of logic and probability in relatively simple problem
situations and/or when sufficient time is available, or why some heuristics and biases involve an increase in the amount and/or complexity of information processing (Shafir & LeBoeuf, 2002). This universal and pervasive character of heuristics and biases calls for a more
fundamental and generic explanation. Based on biological and neuroscientific evidence, we
conjectured that cognitive heuristics and bias are inevitable tendencies linked to the intrinsic
design characteristics of our brain, i.e. to those characteristics that are fundamental to the
functioning of biological neural networks that originally developed to perform more basic
physical, perceptual and motor functions. These intrinsic design characteristics form the basis
for our inclinations to associate (unrelated) information, to give priority to compatible
information, to retain irrelevant information, and to ignore information that is not directly
available or recognized.
If heuristics and biases emerge from the basic characteristics of biological neural networks, it is not very surprising that comparable cognitive phenomena are found in animal behavior (e.g., Chen et al., 2006; Dawkins & Brockmann, 1980; Lakshminaryanan et al.,
2008). Also, in the domain of artificial neural networks, it has been found that applying
standard machine learning to textual data results in human-like stereotyped biases that reflect
everyday human culture and traditional semantic associations (Caliskan et al., 2017). Rather than reflecting cognitive strategies for dealing with decisional complexity, these kinds of findings seem far more likely to reflect the conjectured relation between basic principles of neural networks and bias.
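As a deliberately tiny illustration of the kind of association measure used in word-embedding bias studies such as Caliskan et al. (2017) (the three-dimensional vectors and the simplified score below are our own made-up assumptions, not their actual data or test), the sketch compares how strongly two target words lean toward 'pleasant' versus 'unpleasant' directions in an embedding space.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Tiny made-up 3-d "embeddings"; real studies use vectors trained on large text corpora.
vectors = {
    "flower":     np.array([0.9, 0.1, 0.0]),
    "insect":     np.array([0.1, 0.9, 0.0]),
    "pleasant":   np.array([0.8, 0.2, 0.1]),
    "unpleasant": np.array([0.2, 0.8, 0.1]),
}

def association(word):
    """Relative association of a word with the 'pleasant' vs. 'unpleasant' direction."""
    return cosine(vectors[word], vectors["pleasant"]) - cosine(vectors[word], vectors["unpleasant"])

print(f"flower: {association('flower'):+.2f}")   # positive: leans toward 'pleasant'
print(f"insect: {association('insect'):+.2f}")   # negative: leans toward 'unpleasant'
# Because embeddings are built purely from co-occurrence statistics, they absorb
# whatever associations pervade the training text, which is how human-like
# stereotyped biases surface in standard machine-learning pipelines.
```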
The examples presented above should not suggest that there is a one-to-one relationship between specific mechanisms or design principles of the brain and specific cognitive biases (Poldrack, 2010). All four principles may affect decision making, but the degree to which
they do so may vary over different situations. For example, the thoughts that ‘fortuitously’
pop up (or not) in our mind when we make a decision (availability bias) always result from
an Association process, determined by the characteristics of the issue in relation to our
current neural state (Compatibility). The effects of these processes on our judgements or
decisions cannot easily be suppressed or ignored (Retainment), and those thoughts that do
come up seem all there is (Blind spot, WYSIATI), such that they will determine (bias) our
resulting judgements and decisions. By relating ‘higher’ cognitive phenomena to established biological and neurophysiological processes, this perspective may contribute to
cumulative knowledge building, thus helping to bridge gaps between the cognitive sciences and the neurosciences.
Apart from clarifying the systematic and persistent nature of biases, and from resolving some
theoretical contradictions (computational complexity of heuristics) and limitations (ad hoc
character of explanations), our perspective may contribute to explaining why domain-experienced decision makers become capable of solving difficult problems so easily and ‘intuitively’ only after extensive training. These experts do not analyze situations into their
constituent components, nor do they explicitly calculate and weigh effects of different
options (Type 2). Instead, their judgments and decisions seem to be based on the Type 1
quick and automatic recognition of patterns in the available information (Klein, 1993; Klein,
1998).
Finally, in view of the loose and global character of current psychological explanations, in this paper we only provide the basis for a truly detailed (neural) account of heuristics and biases. A more exact and detailed neural explanation, possibly combined with recent insights into the functional differentiation of brain areas, requires further work. So far, a
major conclusion of the present endeavor may be that it can be beneficial to adopt the
conception of the brain as a neural network as the basis (or starting point) of psychological
theory formation, instead of e.g. the omnipresent (and often implicit) computer metaphor. In
this way, our perspective may help to find better ways to predict or prevent the occurrence of biases. The basis for this is a profound awareness of the hard-wired origins of human bias.
It is this intrinsic character that makes our ‘automatic’ and ‘intuitive’ processes feel so
overwhelming, pervasive and real that we often experience (emotional) reluctance to correct
them (‘acquiescence’), irrespective of the awareness of our rational shortcomings (Risen,
2015).
References
Abbott, & Regehr. (2004). Synaptic computation. Nature, 431(7010), 796-803.
doi:10.1038/nature03010
Arkes, & Ayton. (1999). The sunk cost and concorde effects: Are humans less rational than
lower animals? Psychological Bulletin, 125(5), 591.
Baddeley. (1986). Working memory. Oxford: Oxford University Press.
Baddeley, & Hitch. (1974). Working memory. In G. H. Bower (Ed.), Psychology of learning
and motivation (Vol. 8, pp. 47-89). Academic Press.
Bao, Kandel, & Hawkins. (1997). Involvement of pre- and postsynaptic mechanisms in
posttetanic potentiation at aplysia synapses. Science, 275(5302), 969-973.
doi:10.1126/science.275.5302.969
Bar. (2007). The proactive brain: Using analogies and associations to generate predictions.
Trends in Cognitive Sciences, 11(7), 280-289. doi:10.1016/j.tics.2007.05.005
Bargh, Chen, & Burrows. (1996). Automaticity of social behavior: Direct effects of trait
construct and stereotype activation on action. Journal of Personality and Social
Psychology, 71(2), 230-244. doi:10.1037/0022-3514.71.2.230
Baron, & Hershey. (1988). Outcome bias in decision evaluation. Journal of Personality and
Social Psychology, 54(4), 569-579. doi:10.1037/0022-3514.54.4.569
Beitman. (2009). Brains seek patterns in coincidences. Psychiatric Annals, 39(5), 255-264.
doi:10.3928/00485713-20090421-02
Brandstätter, Gigerenzer, & Hertwig. (2006). The priority heuristic: Making choices without
trade-offs. Psychological Review, 113(2), 409-432. doi:10.1037/0033-295X.113.2.409
Broadbent. (1958). Perception and communication. New York: Pergamon Press.
Brown, Goetzmann, Ibbotson, & Ross. (1992). Survivorship bias in performance studies.
Review of Financial Studies, 5(4), 553-580. doi:10.1093/rfs/5.4.553
Caliskan, Bryson, & Narayanan. (2017). Semantics derived automatically from language
corpora contain human-like biases. Science, 356(6334), 183-186.
doi:10.1126/science.aal4230
Chen, Lakshminarayanan, & Santos. (2006). How basic are behavioral biases? Evidence from
capuchin monkey trading behavior. Journal of Political Economy, 114(3), 517-537.
Churchland. (1987). Epistemology in the age of neuroscience. The Journal of Philosophy,
84(10), 544-553. doi:10.5840/jphil1987841026
Cohen, Servan-Schreiber, & McClelland. (1992). A parallel distributed processing approach
to automaticity. The American Journal of Psychology, 105(2), 239-269.
doi:10.2307/1423029
Damasio. (1994). Descartes' error: Emotion, reason and the human brain. New York, NY,
USA: G. P. Putnam's Sons.
Dawes. (2001). Everyday irrationality: How pseudo-scientists, lunatics, and the rest of us
systematically fail to think rationally. Boulder, CO, US: Westview Press.
Dawkins, & Brockmann. (1980). Do digger wasps commit the concorde fallacy? Animal
Behaviour, 28(3), 892-896. doi:10.1016/S0003-3472(80)80149-7
De Neys. (2006). Automatic-heuristic and executive-analytic processing during reasoning:
Chronometric and dual-task considerations. The Quarterly Journal of Experimental
Psychology, 59(6), 1070-1100. doi:10.1080/02724980543000123
Destexhe, & Marder. (2004). Plasticity in single neuron and circuit computations. Nature,
431(7010), 789-795. doi:10.1038/nature03011
Evans. (1984). Heuristic and analytic processes in reasoning. British Journal of Psychology,
75(4), 451-468. doi:10.1111/j.2044-8295.1984.tb01915.x
Evans. (1989). Bias in human reasoning: Causes and consequences. London, UK: Lawrence
Erlbaum Associates, Inc.
Evans. (2008). Dual-processing accounts of reasoning, judgment, and social cognition.
Annual Review of Psychology, 59(1), 255-278.
doi:10.1146/annurev.psych.59.103006.093629
Evans, & Stanovich. (2013). Dual-process theories of higher cognition: Advancing the
debate. Perspectives on Psychological Science, 8(3), 223-241.
doi:10.1177/1745691612460685
Festinger. (1957). A theory of cognitive dissonance. Stanford, CA, USA: Stanford University
Press.
Forster, & Davis. (1984). Repetition priming and frequency attenuation in lexical access.
Journal of Experimental Psychology: Learning, Memory, and Cognition, 10(4), 680-698.
doi:10.1037/0278-7393.10.4.680
Friedman. (1992). Do old fallacies ever die? Journal of Economic Literature, 30(4), 2129-2132.
Furnham, & Boo. (2011). A literature review of the anchoring effect. The Journal of Socio-Economics, 40(1), 35-42. doi:10.1016/j.socec.2010.10.008
Gibson. (1966). The senses considered as perceptual systems. Oxford, England: Houghton
Mifflin.
Gibson. (1979). The ecological approach to visual perception. Boston, MA, USA: Houghton
Mifflin.
Gigerenzer. (2000). Adaptive thinking: Rationality in the real world. New York: Oxford
University Press.
Gigerenzer, & Gaissmaier. (2010). Heuristic decision making. Annual Review of Psychology,
62(1), 451-482. doi:10.1146/annurev-psych-120709-145346
Gigerenzer, & Selten. (2002). Bounded rationality: The adaptive toolbox. Cambridge, MA:
MIT press.
Goldstein, & Gigerenzer. (2002). Models of ecological rationality: The recognition heuristic.
Psychological Review, 109(1), 75-90. doi:10.1037//0033-295X.109.1.75
Haselton, Bryant, Wilke, Frederick, Galperin, Frankenhuis, & Moore. (2009). Adaptive
rationality: An evolutionary perspective on cognitive bias. Social Cognition, 27(5), 733-762. doi:10.1521/soco.2009.27.5.733
Haselton, Nettle, & Andrews. (2005). The evolution of cognitive bias. In D. M. Buss (Ed.),
The handbook of evolutionary psychology (pp. 724-746). Hoboken, NJ, USA: John
Wiley & Sons Inc.
Hebb. (1949). The organization of behavior. New York, USA: Wiley.
Heuer. (2013). Cognitive factors in deception and counter deception. Paper presented at The art and science of military deception.
Hoffrage, Hertwig, & Gigerenzer. (2000). Hindsight bias: A by-product of knowledge
updating? Journal of Experimental Psychology: Learning, Memory, and Cognition,
26(3), 566. doi:10.1037/0278-7393.26.3.566
Kahneman. (1973). Attention and effort. Englewood Cliffs, New Jersey: Prentice-Hall Inc.
Kahneman. (2003). A perspective on judgment and choice: Mapping bounded rationality.
American Psychologist, 58(9), 697-720. doi:10.1037/0003-066X.58.9.697
Kahneman. (2011). Thinking, fast and slow. New York, USA: Farrar, Straus and Giroux.
Kahneman, & Klein. (2009). Conditions for intuitive expertise: A failure to disagree.
American Psychologist, 64(6), 515-526. doi:10.1037/a0016755
Kahneman, Knetsch, & Thaler. (1990). Experimental tests of the endowment effect and the
coase theorem. Journal of Political Economy, 98(6), 1325-1348. doi:10.1086/261737.
JSTOR 2937761
Kahneman, & Tversky. (1973). On the psychology of prediction. Psychological Review,
80(4), 237-251. doi:10.1037/h0034747
Kahneman, & Tversky. (1984). Choices, values, and frames. American Psychologist, 39(4),
341-350. doi:10.1037/0003-066X.39.4.341
Katz, & Miledi. (1968). The role of calcium in neuromuscular facilitation. The Journal of
Physiology, 195(2), 481-492. doi:10.1113/jphysiol.1968.sp008469
Kennedy. (1995). Debiasing the curse of knowledge in audit judgment. The Accounting
Review, 70(2), 249-273.
Klein. (1993). A recognition-primed decision (rpd) model of rapid decision making. In G. A.
Klein, J. Orasanu, R. Calderwood, & C. E. Zsambok (Eds.), Decision making in action:
Models and methods (pp. 138-147). Norwood, NJ: Ablex Publishing Corporation.
Klein. (1998). Sources of power: How people make decisions. Cambridge, MA, USA: The
MIT Press.
Klein. (2008). Naturalistic decision making. Human Factors: The Journal of the Human
Factors and Ergonomics Society, 50(3), 456-460. doi:10.1518/001872008X288385
Kosslyn, & Koenig. (1992). Wet mind: The new cognitive neuroscience. New York, USA:
The Free Press.
Lakshminaryanan, Chen, & Santos. (2008). Endowment effect in capuchin monkeys.
Philosophical Transactions of the Royal Society B: Biological Sciences, 363(1511),
3837-3844. doi:10.1098/rstb.2008.0149
Langer. (1975). The illusion of control. Journal of Personality and Social Psychology, 32(2),
311-328. doi:10.1037/0022-3514.32.2.311
Lieder, Griffiths, Huys, & Goodman. (2017). The anchoring bias reflects rational use of
cognitive resources. PsyArXiv preprint. Retrieved 22 January from osf.io/preprints/psyarxiv/5x2em.
Marder, & Thirumalai. (2002). Cellular, synaptic and network effects of neuromodulation.
Neural Networks, 15(4–6), 479-493. doi:10.1016/S0893-6080(02)00043-6
Meyer, & Schvaneveldt. (1971). Facilitation in recognizing pairs of words: Evidence of a
dependence between retrieval operations. Journal of Experimental Psychology, 90(2),
227-234. doi:10.1037/h0031564
Mittone, & Savadori. (2009). The scarcity bias. Applied Psychology, 58(3), 453-468.
doi:10.1111/j.1464-0597.2009.00401.x
Moore, & Healy. (2008). The trouble with overconfidence. Psychological Review, 115(2),
502-517. doi:10.1037/0033-295X.115.2.502
Nickerson. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of
General Psychology, 2(2), 175-220. doi:10.1037/1089-2680.2.2.175
Norman, & Bobrow. (1975). On data-limited and resource-limited processes. Cognitive
Psychology, 7(1), 44-64. doi:10.1016/0010-0285(75)90004-3
Park, & Lessig. (1981). Familiarity and its impact on consumer decision biases and
heuristics. Journal of Consumer Research, 8(2), 223-230. doi:10.1086/208859
Patt, & Zeckhauser. (2000). Action bias and environmental decisions. Journal of Risk and
Uncertainty, 21(1), 45-72. doi:10.1023/a:1026517309871
Pavlov. (2010). Conditioned reflexes: An investigation of the physiological activity of the
cerebral cortex. Annals of Neurosciences, 17(3), 136-141. doi:10.5214/ans.0972-7531.1017309
Poldrack. (2010). Mapping mental function to brain structure: How can cognitive
neuroimaging succeed? Perspectives on Psychological Science, 5(6), 753-761.
doi:10.1177/1745691610388777
Reichardt. (1961). Autocorrelation, a principle for the evaluation of sensory information by
the central nervous system. Sensory communication, 303-317.
doi:10.7551/mitpress/9780262518420.003.0017
Risen. (2015). Believing what we do not believe: Acquiescence to superstitious beliefs and
other powerful intuitions. Psychological Review, 123(2), 128-207.
doi:10.1037/rev0000017
Roese, & Vohs. (2012). Hindsight bias. Perspectives on Psychological Science, 7(5), 411-426. doi:10.1177/1745691612454303
Ross. (1977). The intuitive psychologist and his shortcomings: Distortions in the attribution
process. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 10, pp. 173-220). Academic Press.
Shafir, & LeBoeuf. (2002). Rationality. Annual Review of Psychology, 53(1), 491-517.
doi:10.1146/annurev.psych.53.100901.135213
Shanteau. (1992). Competence in experts: The role of task characteristics. Organizational
Behavior and Human Decision Processes, 53(2), 252-266. doi:10.1016/0749-5978(92)90064-E
Shatz. (1992). The developing brain. Scientific American, 267(3), 60-67.
Simon. (1955). A behavioral model of rational choice. The Quarterly Journal of Economics,
69(1), 99-118. doi:10.2307/1884852
Spranca, Minsk, & Baron. (1991). Omission and commission in judgment and choice.
Journal of Experimental Social Psychology, 27(1), 76-105. doi:10.1016/0022-1031(91)90011-T
Thomson. (2000). Facilitation, augmentation and potentiation at central synapses. Trends in
Neurosciences, 23(7), 305-312. doi:10.1016/S0166-2236(00)01580-0
Thorndike. (1927). The law of effect. The American Journal of Psychology, 39(1/4), 212-222.
doi:10.2307/1415413
Thorndike. (1933). A proof of the law of effect. Science, 77(1989), 173-175.
doi:10.1126/science.77.1989.173-a
Tooby, & Cosmides. (2005). Conceptual foundations of evolutionary psychology. In D. M.
Buss (Ed.), Handbook of evolutionary psychology (pp. 5-67). Hoboken, New Jersey:
John Wiley & Sons, Inc.
Turner. (1996). The literary mind: The origins of thought and language. New York, Oxford:
Oxford University Press.
Tversky, & Kahneman. (1973). Availability: A heuristic for judging frequency and
probability. Cognitive Psychology, 5(2), 207-232. doi:10.1016/0010-0285(73)90033-9
Tversky, & Kahneman. (1974). Judgment under uncertainty: Heuristics and biases. Science,
185(4157), 1124-1131. doi:10.1126/science.185.4157.1124
Tversky, & Kahneman. (1981). Judgments of and by representativeness. Paper presented at
the Judgment under uncertainty: Heuristics and biases.
Tversky, & Kahneman. (1982). Evidential impact of base rates. In D. Kahneman, P. Slovic,
& A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases. Cambridge,
UK: Cambridge University Press.
Tversky, & Kahneman. (1983). Extensional versus intuitive reasoning: The conjunction
fallacy in probability judgment. Psychological Review, 90(4), 293-315.
doi:10.1037/0033-295X.90.4.293
Wandell. (1995). Foundations of vision. Sunderland, MA, USA: Sinauer Associates.
Wimmer, & Shohamy. (2012). Preference by association: How memory mechanisms in the
hippocampus bias decisions. Science, 338(6104), 270-273. doi:10.1126/science.1223252