Orbitofrontal contributions to value

Ann. N.Y. Acad. Sci. ISSN 0077-8923
Annals of the New York Academy of Sciences
Issue: Critical Contributions of the Orbitofrontal Cortex to Behavior
Orbitofrontal contributions to value-based decision
making: evidence from humans with frontal lobe damage
Lesley K. Fellows
Department of Neurology and Neurosurgery, McGill University, Montreal, Canada
Address for correspondence: Lesley K. Fellows, MD CM, DPhil, Department of Neurology and Neurosurgery, McGill
University, 3801 University St., Montreal, QC Canada. [email protected]
The work described here aims to isolate the component processes of decision making that rely critically on particular
subregions of the human prefrontal cortex, focusing on the orbitofrontal cortex. Here, experiments
isolating specific aspects of decision making, using very simple preference judgment and reinforcement learning
paradigms, were carried out in patients with focal frontal damage. The orbitofrontal cortex and the adjacent ventromedial prefrontal cortex play a critical role in decisions based on subjective value, across many categories of stimuli,
and in learning to choose between stimuli based on value feedback. However, these regions are not required for
learning to choose between actions based on feedback, which instead seems to rely critically on the dorsomedial
prefrontal cortex. These results point to a potentially common role for the orbitofrontal cortex in representing the
context-sensitive, subjective value of stimuli to allow consistent choices between them. They also argue for multiple,
parallel, value-based processes that influence behavior through dissociable mechanisms.
Keywords: executive function; neuropsychology; lesion study; reversal learning; reward
Decision making: a novel window on the
human brain
Life is full of choices, large and small. Making
the best of our environment involves determining
the value of the possibilities that the environment
presents, and acting to obtain those with more value.
Effective decision making will confer an adaptive
advantage, and it is thus likely that the brain is organized to ensure optimal decisions. This may be particularly true for decisions about primary goods like
food, water, or mating opportunities but probably
applies quite generally to the more abstract choices
that are also part of everyday human experience.
Viewing human behavior through the lens of decision making provides an interesting perspective for
cognitive neuroscience that has led to novel insights
about the functional organization of the brain.
This vantage point leads to a series of basic questions: how is value abstracted from environmental
information, how is it represented in the brain, and
how do these representations influence behavior?
Over the last several years, progress in answering
these questions has come from two directions: the
first has been to build on an extensive literature on
reinforcement learning in animals, and the second to
combine analytic models and behavioral paradigms
drawn from economics and decision psychology
with cognitive neuroscience methods. This volume
as a whole showcases these different approaches.
Here, I will review work based on both of these
frameworks applied in human subjects with focal
brain injury. Figure 1 provides a general schematic
of the processes that may be engaged during decision making, showing how the economic and learning frameworks connect, at least on a conceptual
level.1,2
Is economic value represented in the
brain?
The economic framework to date has focused on
the concept of value: this is a subjective property
that can be assigned to stimuli (i.e., goods) and perhaps also to actions. Value is a potentially useful
that can be assigned to stimuli (i.e., goods) and perhaps also to actions. Value is a potentially useful
construct; if it exists as a biologically definable signal, then it provides an elegant way to understand
doi: 10.1111/j.1749-6632.2011.06229.x
© 2011 New York Academy of Sciences.
Ann. N.Y. Acad. Sci. 1239 (2011) 51–58
The human frontal lobes in choice and learning
Fellows
Figure 1. Schematic overview of the steps leading to a decision, that is, any nonarbitrary choice between two or more options.
These putative stages may be highly interlinked and are likely to have bidirectional influences on each other. The last two stages
suppose that the outcome is experienced, and this experience will influence future value and choice events: that is, that learning will
occur.
how the brain supports adaptive decision making.
Experimental work in economics and decision psychology supports the usefulness of subjective value
in analyzing human (and indeed, animal) behavior.3
Is value information encoded in the brain, and if so,
where and for what purpose?
There is ample evidence that signals correlating with subjective value are indeed present in the
brain. This has been shown in a variety of organisms, including humans, with both neurophysiological methods and functional neuroimaging.2,4–6
The presence of value signals in many regions of the
brain, notably including the ventral striatum, and
various regions within prefrontal and parietal cortex, is cause for both celebration and caution. Value
may be a very useful construct for understanding
particular aspects of brain function. On the other
hand, it may be so generic that it correlates with
or subsumes more specific processes important in
motivated behavior, in which case it would be unhelpful or even misleading as a basis for interpreting
brain signals.7 This makes it especially important to
be as clear as possible in defining and operationalizing value-related processes. That said, the concept of
value has added to the study of these brain regions,
both by providing a fresh conceptual perspective,
and by opening the door to a rich array of behavioral tasks and analytic methods that can be useful tools for both probing and understanding brain
function.
Evidence from loss of function studies can address
whether value-related processes are distinct from
other aspects of motivated behavior.8 Manipulations that disrupt the function of a brain region that
carries value signals should have predictable effects
on decision making if those signals reflect a neurobiologically relevant aspect of value. In humans,
there are currently only two loss-of-function methods: transcranial magnetic stimulation and studies
of patients with naturally occurring brain damage,
typically due to a stroke or other local injury. This
review will focus on the latter, which, for practical
reasons, has been the most instructive to date.
There is no doubt that brain damage can affect decision making. Impairments in judgment and
decision making are well recognized in various neurological conditions but have historically been particularly linked to frontal lobe damage. Experimental work aimed at understanding the mechanisms
of these impairments began only relatively recently,
and has largely continued that focus on the frontal
lobes. Influential early studies suggested that damage to orbitofrontal and adjacent ventromedial prefrontal cortex might lead to specific deficits in decision making,9,10 leading to a further focus on these
two regions, which I will refer to here together as
the ventromedial frontal lobe (VMF). Much of the
work to date in patients with VMF damage has
used relatively complex tasks, beginning with the
Iowa Gambling Task. While VMF damage clearly
disrupts task performance, task complexity leaves
room for debate about which processes are, in fact,
disrupted.11,12
Simple preference judgments
Preference judgment is a simpler alternative task
that would seem to focus quite tightly on subjective value. When choosing between fully known options, such as "chocolate or vanilla?", one is, in principle, somehow comparing the anticipated relative value in a task that would seem to require few
other processes. If VMF representations of relative
value are necessary for choice, then damage to the
VMF should disrupt decision making even under
these very simple conditions. The challenge here is
to know how to detect that disruption, since preferences of this kind are subjective and obviously vary
normally across individuals.
One solution is to consider how consistent an
individual is in his or her choices. We took this approach in an initial study, asking subjects to choose
between all possible pairs of stimuli within several
categories, and looking at the consistency of these
choices as the dependent measure. While subjective
values may vary normally even within an individual, they tend not to vary dramatically for a given
set of choices over a short time frame. As an example, if you choose an apple over a banana, and
a banana over an orange, when you are offered the
choice of apple or orange, you should choose the
apple if you are basing the choice on some consistent subjective value metric. Choosing the orange
instead would constitute a violation of transitivity of preferences. We have carried out two studies using this basic approach. The first involved 10
subjects with VMF damage, compared to nine subjects with frontal damage sparing the VMF, and 19
healthy, demographically matched control participants. Participants made pairwise choices between
stimuli printed on cue cards across three categories:
foods, color swatches, and famous people. They were
asked to treat each choice as independent, to pick
the option they “liked best” and were not prompted
to be consistent. The choices were entirely hypothetical. A control task with similar demands asked
them to make perceptual, rather than value-based,
judgments: which of two color swatches (along a
spectrum from red to blue) looked bluer, and which
of two lines was longer.
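The logic of this consistency measure is easy to make concrete. The sketch below (a hypothetical Python illustration, not the study's actual scoring code) counts the triples of options whose pairwise choices form an intransitive cycle, as in the apple/banana/orange example above.

```python
from itertools import combinations

def transitivity_violations(choices):
    """Count intransitive triples in a set of pairwise choices.

    `choices` maps each unordered pair frozenset({a, b}) to the
    option chosen from that pair. A triple violates transitivity
    when a is chosen over b, b over c, but c over a.
    """
    items = sorted({x for pair in choices for x in pair})

    def beats(x, y):
        # True when x was chosen over y in their pairwise choice
        return choices[frozenset({x, y})] == x

    violations = 0
    for a, b, c in combinations(items, 3):
        # check both cyclic orientations of the triple
        if (beats(a, b) and beats(b, c) and beats(c, a)) or \
           (beats(b, a) and beats(c, b) and beats(a, c)):
            violations += 1
    return violations

prefs = {
    frozenset({"apple", "banana"}): "apple",
    frozenset({"banana", "orange"}): "banana",
    frozenset({"apple", "orange"}): "orange",  # closes a cycle
}
print(transitivity_violations(prefs))  # 1 violating triple
```

In the actual studies, the count of such violations, across all triples within a category, served as the dependent measure of choice inconsistency.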
Patients with VMF damage performed as well as
either control group in the perceptual tasks but, as
predicted, showed significantly more inconsistency
in their preference judgments. Patients with frontal
damage sparing the VMF showed no deficit in preference judgments.13
This finding is strong support for the claim that
the (relative) value representations in the VMF, demonstrated in a range of functional magnetic resonance
imaging (fMRI) paradigms, are indeed necessary for
choosing between goods based on subjective value.
There is mixed support for the same claim from lesion studies in nonhuman primates with variably selective lesions to homologous frontal areas.14,15 This
issue is so central to current models of VMF function, and of the neural substrates of decision making, that we pursued it further in two more recent
experiments. One used the identical experimental
structure, but a new version of the preference task
that had more categories of stimuli (to assess more
fully the generality of any effects) and more stimuli
within each category, to guard against the possibility
that controls, but not patients, used explicit strategies to stay consistent. This version of the task was
computerized to allow accurate reaction time (RT)
measurement and to probe whether VMF damage
prolongs decision time as well as disrupts choice
consistency.
In a sample of patients with VMF damage
(n = 15), non-VMF frontal damage (n = 11), and
23 healthy controls, we replicated the findings from
the previous study. Again, subjects with VMF damage showed significantly more inconsistent choices,
collapsed across categories, while those with non-VMF frontal damage were comparable to healthy
subjects. Three of the patients in the VMF group
had also participated in the 2007 study; when their
data are excluded from the analysis, the same pattern
remains, thus constituting an independent replication of preference deficits after VMF damage.16 The
inclusion of a non-VMF control group supports the
claim that impaired preference judgment is the result of VMF damage specifically, and not frontal
damage in general. However, this particular study is
not powered to provide strong evidence about the
potential role of other specific prefrontal regions.
Further work will be needed to establish definitively that dorsomedial or dorsolateral prefrontal cortex (PFC), for example, is not critical for such
decisions.
The five categories examined in this version of
the task were color, fruits, vegetables, landscapes,
and puppies. There were no discernible differences
in the basic pattern of inconsistencies across categories, in keeping with a very general role for the
VMF in preference judgments regardless of the specific class of stimuli. Interestingly, there were no consistent effects of lesion on decision time. Although
VMF damage led to increased inconsistency, this
was not associated with prolonged RT (i.e., equivocation) or with impulsive, overly quick choices.
Healthy controls showed the expected orderly relationship between subjective value distance and RT,
taking longer to decide between options that were
closer together on their subjective value ordering.
This phenomenon has been reported as far back as
the 1930s.17 Frontal lobe damage, whether affecting
the VMF or elsewhere, did not alter this relationship in any substantial way. This observation has
implications for mechanistic models of the role of
the VMF in decision making. Damage seems to affect value-based decision “accuracy” without a substantial effect on speed. This argues that there is
little monitoring of the quality of the putative relative value signal arising from the VMF as it leads
to choice: a degraded or noisy value signal does not
lead to longer processing, at least for these hypothetical choices, made in the absence of substantial time pressure. If decisions follow the roughly
linear sequence suggested by Figure 1, then whatever checks and balances may be built in to value
processing would seem to occur prior to the VMF.
Alternatively, the VMF may play some modulatory
or fine-tuning role in value-based choice that is not
(strongly) monitored.
Value maximizing choice
A third recent study probed subjective value using
a different choice paradigm more amenable to formal analysis. In this study,18 we borrowed a task
from experimental economics to test whether VMF
damage disrupts the ability to consistently choose
between bundles of goods so as to maximize subjective value. Participants chose between different
bundles of chocolate bars and fruit juice (e.g., between two bars of chocolate bundled with two boxes
of juice, or four bars of chocolate with no juice).
Following the same logic as was applied to the preference tasks, above, here expressed as the Generalized Axiom of Revealed Preference (GARP),19 we
evaluated the degree to which bundle choices were
rational, that is, maximized subjective value as indicated by the overall pattern of choices. Patients
with VMF damage (n = 9) had more GARP violations, and these violations were more severe, than
healthy subjects of the same age and education.
These findings again support a necessary role for
the human VMF in value-based decisions, here in a
task without memory requirements, and involving
choices with real outcomes. (Subjects actually received one of their chosen bundles at the end of the
task.)
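The logic of this rationality test can also be sketched in code. Note that this is a deliberate simplification: the published analysis evaluated GARP over priced bundles, whereas the hypothetical sketch below reduces the test to detecting cycles in the strict revealed-preference relation implied by forced choices.

```python
def preference_cycles(choices):
    """Find violations of acyclicity in revealed preferences.

    `choices` is a list of (chosen, rejected) pairs: each choice
    reveals chosen > rejected. Returns pairs (a, b) for which the
    transitive closure contains both a > b and b > a, i.e. choices
    that no single value ordering can rationalize. (A simplified,
    price-free stand-in for the full GARP test.)
    """
    items = sorted({x for pair in choices for x in pair})
    pref = {(a, b): False for a in items for b in items}
    for chosen, rejected in choices:
        pref[(chosen, rejected)] = True
    # Warshall's algorithm: transitive closure of the relation
    for k in items:
        for a in items:
            for b in items:
                if pref[(a, k)] and pref[(k, b)]:
                    pref[(a, b)] = True
    return [(a, b) for a in items for b in items
            if a < b and pref[(a, b)] and pref[(b, a)]]

# invented example bundles forming a preference cycle
bundles = [("2 bars + 2 juices", "4 bars"),
           ("4 bars", "1 bar + 3 juices"),
           ("1 bar + 3 juices", "2 bars + 2 juices")]
print(preference_cycles(bundles))
```

An internally consistent set of choices yields an empty list; a cyclic set, as in the invented example above, flags every pair caught in the cycle.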
Learning about value
The experiments described up to this point have
focused on “one-off” decisions taken without feedback, based on prior knowledge, and without uncertainty, in the formal sense of that term. A quite
different literature that is also relevant to deci-
sion making instead focuses on the accumulation
of value information through repeated choices with
feedback.
Learning the value of stimuli
An extensive animal literature has established the
neural circuits involved in learning by trial and error from reward and from punishment. Much of
this work has been carried out in the rat, and has
focused on striatum and related subcortical nuclei,
and on the role of dopamine. Research building on
this foundation addresses rodent frontal cortex contributions to learning and choice. A related literature
has established that the orbitofrontal cortex (OFC)
in nonhuman primates plays a key role in learning
from feedback, particularly under conditions that
require flexible responding and/or interpretation of
the meaning of feedback within a particular context.20,21 A canonical example of such conditions is
that of reversal learning. In this classic reinforcement
learning task, subjects learn through trial and error
to prefer one (rewarded) stimulus over another (that
leads to nonreward or punishment, depending on
the paradigm). Once a learning criterion has been
met, the contingencies are reversed, and the subject
must flexibly adapt and learn to prefer the previously
punished, now rewarded stimulus. Lesion studies in
nonhuman primates extending back several decades
established that the OFC is necessary for reversal
learning.20 Several studies have confirmed that the
same is true in humans. Damage to the VMF, and
perhaps specifically to the OFC, but not to dorsolateral frontal areas, disrupts stimulus-based reversal
learning.22–24 These studies used fairly simple tasks,
generally with deterministic feedback, leading to the
impression that the reinforcement learning deficits
were present only in reversal, and not in initial learning. This in turn led to the deficits being framed as
“perseverative,” in that subjects tended to continue
to choose the originally rewarded stimulus after
reversal.
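The structure of such a reversal task, together with the prediction-error (delta-rule) account of trial-and-error learning, can be sketched with a simple simulation. This is a generic Q-learning agent with illustrative parameter values, assumed for the sketch rather than drawn from any published study.

```python
import math
import random

def run_reversal_task(alpha=0.3, beta=5.0, p_reward=0.8,
                      criterion=8, n_trials=200, seed=1):
    """Simulate a two-stimulus probabilistic reversal task with a
    Q-learning agent (delta-rule updates, softmax choice).

    The currently 'good' stimulus pays off with probability
    p_reward; after `criterion` consecutive correct choices the
    contingencies flip. Returns the number of reversals achieved.
    """
    rng = random.Random(seed)
    q = [0.0, 0.0]                 # learned values of the two stimuli
    good, streak, reversals = 0, 0, 0
    for _ in range(n_trials):
        # softmax choice based on the value difference
        p0 = 1.0 / (1.0 + math.exp(-beta * (q[0] - q[1])))
        choice = 0 if rng.random() < p0 else 1
        p_win = p_reward if choice == good else 1.0 - p_reward
        reward = 1.0 if rng.random() < p_win else -1.0
        q[choice] += alpha * (reward - q[choice])   # prediction-error update
        streak = streak + 1 if choice == good else 0
        if streak >= criterion:    # learning criterion met:
            good, streak = 1 - good, 0              # reverse contingencies
            reversals += 1
    return reversals

print(run_reversal_task())
```

The number of reversals achieved within a fixed trial budget is one natural performance measure for tasks of this kind; patient analyses discussed below also examine trial-by-trial responses to wins and losses.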
Subsequent work with more challenging reinforcement schedules requires that this interpretation be revised. In a recent study of prefrontal
contributions to stimulus-based reversal learning,
this time with probabilistic feedback, we found
that VMF damage disrupts both initial and reversal learning.25 This study involved 39 subjects
with focal damage affecting the frontal lobes, of
whom 11 had damage to the VMF. The sample size
permitted the use of voxel-based lesion-symptom
mapping (VLSM) to supplement the conventional
region-of-interest analysis. VLSM provides a statistical test of structure–function relationships at the
voxel-by-voxel level, and so can identify the specific areas contributing to a given functional deficit,
within the constraints of the variability and overlap
of lesions in a given sample.26 This analysis identified damage to the bilateral medial OFC (as well
as a region within the left lateral OFC) as being most
strongly related to poor performance on the task as
a whole.
Trial-by-trial analysis sheds further light on the
deficits of these patients in a probabilistic feedback environment. OFC damage seems to disrupt
the ability to interpret a given instance of feedback
within the larger task context. That is, healthy participants typically “stay” after a win, and “shift” after a loss, particularly when that loss is in keeping
with a larger pattern, suggesting that that stimulus is a poor choice. In contrast, those with OFC
damage were more likely to shift after a win. They
also shifted after probabilistic losses from the overall
better choice, suggesting that they were struggling
to interpret the feedback in relation to the stream of
preceding feedback. Interestingly, they showed no
tendency to perseverate in this noisy, rapidly changing feedback environment.
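The stay/shift measures described above can be computed directly from the trial sequence. A minimal sketch, with invented example data (hypothetical code, not the published analysis pipeline):

```python
def stay_rates(choices, outcomes):
    """Probability of repeating the previous choice after a win
    versus after a loss. `choices` is the sequence of chosen
    options; `outcomes` holds +1 (win) or -1 (loss) per trial.
    """
    after = {+1: [0, 0], -1: [0, 0]}   # outcome -> [stays, total]
    for t in range(1, len(choices)):
        prev = outcomes[t - 1]
        after[prev][1] += 1
        if choices[t] == choices[t - 1]:
            after[prev][0] += 1
    return {("win" if k > 0 else "loss"):
            (s / n if n else float("nan"))
            for k, (s, n) in after.items()}

choices = ["A", "A", "B", "B", "B", "A"]   # invented trial data
outcomes = [+1, -1, +1, +1, -1, +1]
print(stay_rates(choices, outcomes))       # win-stay 1.0, lose-stay 0.0
```

Healthy participants typically show a high stay rate after wins; the OFC patients' elevated shift rate after wins corresponds to a reduced "win" value in this measure.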
Learning the value of actions
Models of both decision making and reinforcement
learning make distinctions between value attached
to stimuli (i.e., goods) and the value of an action,
that is, the behavioral event that expresses the choice.
One plausible model has subjective value of stimuli
encoded in VMF, in turn biasing the motor system to
make the choice of the most valuable option. There is
evidence from fMRI and monkey electrophysiology
that action-related value information is encoded in
the dorsal medial frontal lobe (DMF), a region that
might be a candidate for this goods-to-action value
transformation.
This view supposes sequential processing of value
information. In general, the brain tends toward
parallel processing, and there is evidence from lesion studies in macaques that the linking of value
information to stimuli, and to actions, may be
dissociable.27,28
We addressed the same question in human subjects, using very similar methods. Subjects who had
completed the probabilistic stimulus-value reversal
learning task described, and who had focal damage
affecting either VMF (and not DMF; n = 5) or DMF
(sparing VMF; n = 4) participated in a second experiment to assess action-value learning. This task
used exactly the same probabilistic reinforcement
with reversals, but related to two distinct wrist actions, rather than to two stimuli. Subjects held a joystick, and on a generic “go” signal, chose between
supinating or pronating their wrist, twisting the joystick in one of two directions. This was followed by
feedback in the form of a $50 play money win or
loss, exactly as in the stimulus-based task. Also as in
that task, the reinforcement contingencies reversed
once a learning criterion was met.
Compared to healthy controls, patients with OFC
damage were impaired on the stimulus-value learning task. They made significantly fewer reversals,
and also made more errors in the initial learning
phase. In contrast, their performance on the action-value task was comparable to controls. The patients
with DMF damage showed the opposite pattern.
They were impaired on action-value learning, making more errors in the initial phase of the task.
They performed within normal limits on the stimulus-value task. This double dissociation was most starkly
revealed in the trial-by-trial analysis. Patients with
OFC damage were significantly more likely to shift
after a win in the stimulus-value task, which is a
highly maladaptive response in this task, and so
rarely produced by controls. In contrast, they did
not differ from controls in how often they shifted
after a win in the action-value task. Patients with
DMF damage showed the opposite pattern of impairment, shifting more often after a win in the
action-, but not stimulus-value task, compared to
controls.29
This pattern is very similar to the effects of focal
lesions to the dorsal anterior cingulate and OFC in
macaques. The human lesion data rely on too few
subjects to permit confident localization within the
DMF, but the dorsal anterior cingulate cortex (dACC) was the area with the
most shared damage within that patient group. Our
prior, larger study, described earlier, argues that the
medial OFC is the key region responsible for the
stimulus-value learning deficit. This double dissociation argues against a sequential model of value-based decisions, instead supporting multiple routes,
and perhaps multiple timecourses, by which value
may influence choice.
The DMF has been implicated in several other
aspects of response control, including conflict monitoring,30,31 error-related processes,32,33 and self-generated action.34 It remains to be seen whether
and how these relate to the observation that this region is also critical for selecting between actions on
the basis of value. There have been efforts to unite
these possibilities from a modeling point of view;35
our data underline the potential usefulness of reinforcement learning models in understanding the
processes that are supported by this brain region.
Value and learning: two sides of the same coin?

This article summarizes recent studies of the component processes of decision making subserved by specific regions of human prefrontal cortex. I initially showed that the VMF is necessary for even very simple decisions that rely on subjective value comparisons. I then reviewed the evidence that the VMF also plays a critical and specific role in learning to choose between stimuli on the basis of valenced feedback (wins and losses). In the opening paragraphs of this review, I proposed a simple model of decision making that included a learning mechanism to update value estimates based on feedback, when available. A natural question at this point is whether the effects of VMF damage on value-based choice and value-based learning reflect the disruption of a single process, or of two processes that happen to be anatomically proximate. Both decisions and learning rely on anticipating the value of an outcome: imagining the pleasures of chocolate compared to vanilla, or anticipating that the choice of a given deck of cards is more likely to yield a win. It is the difference between anticipated and actual outcomes that drives learning,36 and the same anticipation could inform stimulus-based choices even if no learning subsequently occurs, as in the hypothetical preference tasks described.37

In principle, lesion studies should be able to provide a clear answer to this question, by determining whether the two abilities are dissociable. In practice, this requires that the measures of the two abilities are similarly difficult, among other factors, which is by no means certain in this case. Only a subset of the participants in the studies just reviewed completed both the more sensitive reinforcement learning measures and the more sensitive preference judgment task. In the eight subjects with VMF damage for whom we have both measures, there is an overall trend for performance on the two tasks to be correlated, but this relationship is not obligatory: one patient is an extremely poor performer on the preference task but is near the healthy control average for reversal learning. If this outlier is excluded, the remaining seven VMF subjects show a weak trend toward a relationship between reversal learning and preference impairments (Spearman ρ = 0.54, P = 0.19). Further work will be needed to definitively answer this question.

Conclusions and future directions
The work reviewed here argues that information
about subjective value is not only represented in
specific regions of prefrontal cortex, but that this
information is required for flexibly learning about
value in rapidly changing environments, and for
value-based decision making. The VMF in particular, including the OFC and the adjacent vmPFC, is
critical both for learning about and choosing stimuli based on subjective value. It remains to be seen
whether these two functions rest on a common process. DMF plays a role in choosing actions on the
basis of feedback, and this role does not depend on
an intact VMF, arguing for parallel value influences
on choice between stimuli, and between actions in
prefrontal cortex.
Overall, these results from loss-of-function experiments provide reassuringly convergent evidence
to support current views of the human VMF as playing a critical role in decision making. Importantly,
the findings reviewed here argue that the VMF
is specifically important in representing subjective
value information that is called upon in stimulus-based decision making: damage here disrupts even
very simple forms of decision making, in the absence
of risk, delay, or ambiguity. Also importantly, these
results largely agree with findings using very similar paradigms in macaques with selective prefrontal
lesions, suggesting that the neural circuits supporting value-based learning and choice are conserved
across primate species.
The growing evidence that value-based feedback
can influence choice through dissociable mechanisms, depending on the nature of the task, emphasizes the importance of continuing to draw on
convergent forms of evidence in studying decision
making. Value, even when defined in formal economic terms, remains a broad construct that is likely
to be linked to behavior in many ways, and via multiple neural circuits. As this volume illustrates, there
is still much to understand about these processes.
However, it is clear that even complex human decision making builds on processes conserved across
species, likely developed in the pursuit of primary
rewards, and the avoidance of simple punishments,
meaning that the whole range of systems neuroscience methods can be drawn on to help us develop
that full understanding.
Acknowledgments
Ami Tsuchida, Nathalie Camille, and Joe Kable contributed importantly to the work described here.
Salary support from the FRSQ and the Killam Trust,
operating support from the CIHR (MOP-77583),
and the patient databases of McGill University and
the University of Pennsylvania made this work possible. The participation of patients, their families,
and their physicians in these studies is gratefully
acknowledged.
Conflicts of interest
The author declares no conflicts of interest.
References
1. Fellows, L.K. 2004. The cognitive neuroscience of decision
making: a review and conceptual framework. Behav. Cogn.
Neurosci. Rev. 3: 159–172.
2. Padoa-Schioppa, C. 2011. Neurobiology of economic choice:
a good-based model. Annu. Rev. Neurosci. 34: 333–359.
3. Kahneman, D. & A. Tversky. 1984. Choices, values, and
frames. Am. Psychol. 39: 341–350.
4. Sugrue, L.P., G.S. Corrado & W.T. Newsome. 2004. Matching behavior and the representation of value in the parietal
cortex. Science 304: 1782–1787.
5. Knutson, B., J. Taylor, M. Kaufman, et al. 2005. Distributed
neural representation of expected value. J. Neurosci. 25:
4806–4812.
6. Rushworth, M.F. & T.E. Behrens. 2008. Choice, uncertainty
and value in prefrontal and cingulate cortex. Nat. Neurosci.
11: 389–397.
7. Roesch, M.R. & C.R. Olson. 2004. Neuronal activity related
to reward value and motivation in primate frontal cortex.
Science 304: 307–310.
8. Fellows, L.K., A.S. Heberlein, D.A. Morales, et al. 2005.
Method matters: an empirical study of impact in cognitive
neuroscience. J. Cogn. Neurosci. 17: 850–858.
9. Bechara, A., H. Damasio, D. Tranel & A.R. Damasio. 1997.
Deciding advantageously before knowing the advantageous
strategy. Science 275: 1293–1295.
10. Eslinger, P.J. & A.R. Damasio. 1985. Severe disturbance of
higher cognition after bilateral frontal lobe ablation: patient
EVR. Neurology 35: 1731–1741.
11. Dunn, B.D., T. Dalgleish & A.D. Lawrence. 2006. The somatic
marker hypothesis: a critical evaluation. Neurosci. Biobehav.
Rev. 30: 239–271.
12. Fellows, L.K. 2007. Advances in understanding ventromedial prefrontal function: the accountant joins the executive.
Neurology 68: 991–995.
13. Fellows, L.K. & M.J. Farah. 2007. The role of ventromedial prefrontal cortex in decision making: judgment under
uncertainty or judgment per se? Cereb. Cortex 17: 2669–
2674.
14. Baylis, L.L. & D. Gaffan. 1991. Amygdalectomy and ventromedial prefrontal ablation produce similar deficits in food
choice and in simple object discrimination learning for an
unseen reward. Exp. Brain Res. 86: 617–622.
15. Izquierdo, A., R.K. Suda & E.A. Murray. 2004. Bilateral orbital prefrontal cortex lesions in rhesus monkeys disrupt
choices guided by both reward value and reward contingency. J. Neurosci. 24: 7540–7548.
16. Henri-Bhargava, A., A.S. Simioni & L.K. Fellows. Ventromedial frontal lobe damage disrupts the accuracy, but not the
speed, of value-based preference judgments. Submitted.
17. Dashiell, J.F. 1937. Affective value-distances as a determinant
of esthetic judgment-times. Am. J. Psychol. 50: 57–67.
18. Camille, N., C.A. Griffiths, K. Vo, et al. 2011. Ventromedial
frontal lobe damage disrupts value maximization in humans.
J. Neurosci. 31: 7527–7532.
19. Samuelson, P.A. 1938. A note on the pure theory of consumer’s behavior. Economica 5: 61–71.
20. Murray, E.A., J.P. O’Doherty & G. Schoenbaum. 2007. What
we know and do not know about the functions of the orbitofrontal cortex after 20 years of cross-species studies.
J. Neurosci. 27: 8166–8169.
21. Rushworth, M.F., M.P. Noonan, E.D. Boorman, et al. 2011.
Frontal cortex and reward-guided learning and decision-making. Neuron 70: 1054–1069.
22. Fellows, L.K. & M.J. Farah. 2003. Ventromedial frontal cortex mediates affective shifting in humans: evidence from a
reversal learning paradigm. Brain 126: 1830–1837.
23. Hornak, J., J. O'Doherty, J. Bramham, et al. 2004. Reward-related reversal learning after surgical excisions in orbitofrontal or dorsolateral prefrontal cortex in humans. J. Cogn.
Neurosci. 16: 463–478.
24. Rolls, E.T., J. Hornak, D. Wade & J. McGrath. 1994. Emotion-related learning in patients with social and emotional
changes associated with frontal lobe damage. J. Neurol. Neurosurg. Psychiatry 57: 1518–1524.
25. Tsuchida, A., B.B. Doll & L.K. Fellows. 2010. Beyond reversal: a critical role for human orbitofrontal cortex in flexible
learning from probabilistic feedback. J. Neurosci. 30: 16868–
16875.
26. Rorden, C., J. Fridriksson & H.O. Karnath. 2009. An evaluation of traditional and novel tools for lesion behavior
mapping. Neuroimage 44: 1355–1362.
27. Kennerley, S.W., M.E. Walton, T.E. Behrens, et al. 2006.
Optimal decision making and the anterior cingulate cortex.
Nat. Neurosci. 9: 940–947.
28. Rudebeck, P.H., T.E. Behrens, S.W. Kennerley, et al. 2008.
Frontal cortex subregions play distinct roles in choices
between actions and stimuli. J. Neurosci. 28: 13775–
13785.
29. Camille, N., A. Tsuchida & L.K. Fellows. In press. Double
dissociation of stimulus-value and action-value learning in
humans with orbitofrontal or anterior cingulate cortex damage. J. Neurosci.
30. Botvinick, M., L.E. Nystrom, K. Fissell, et al. 1999. Conflict
monitoring versus selection-for-action in anterior cingulate.
Nature 402: 179–181.
31. Ridderinkhof, K.R., M. Ullsperger, E.A. Crone & S.
Nieuwenhuis. 2004. The role of the medial frontal cortex
in cognitive control. Science 306: 443–447.
32. Modirrousta, M. & L.K. Fellows. 2008. Dorsal medial prefrontal cortex plays a necessary role in rapid error prediction
in humans. J. Neurosci. 28: 14000–14005.
33. Rushworth, M.F., R.B. Mars & C. Summerfield. 2009. General mechanisms for making decisions? Curr. Opin. Neurobiol. 19: 75–83.
34. Nachev, P., C. Kennard & M. Husain. 2008. Functional role of the supplementary and pre-supplementary motor areas. Nat. Rev. Neurosci. 9: 856–869.
35. Botvinick, M.M. 2007. Conflict monitoring and decision making: reconciling two perspectives on anterior cingulate function. Cogn. Affect. Behav. Neurosci. 7: 356–366.
36. Sutton, R.S. & A.G. Barto. 1998. Reinforcement Learning: An Introduction. MIT Press. Cambridge, MA.
37. Kringelbach, M.L. 2005. The human orbitofrontal cortex: linking reward to hedonic experience. Nat. Rev. Neurosci. 6: 691–702.