Social Cognitive and Affective Neuroscience, 2016, 423–432
doi: 10.1093/scan/nsv127
Advance Access Publication Date: 10 October 2015
Original Article
Can beneficial ends justify lying? Neural responses to
the passive reception of lies and truth-telling with
beneficial and harmful monetary outcomes
Lijun Yin1,2 and Bernd Weber1,2,3
1Center for Economics and Neuroscience (CENs), University of Bonn, Nachtigallenweg 86, 2Department of Epileptology, University Hospital Bonn, and 3Life & Brain Center, Department of NeuroCognition, University of Bonn, Sigmund-Freud-Str. 25, 53127 Bonn, Germany
Correspondence should be addressed to Lijun Yin, Center for Economics and Neuroscience, University of Bonn, Nachtigallenweg 86, 53127 Bonn,
Germany. E-mail: [email protected].

Received: 14 April 2015; Revised: 24 July 2015; Accepted: 7 October 2015

© The Author (2015). Published by Oxford University Press. For Permissions, please email: [email protected]
Abstract
Can beneficial ends justify morally questionable means? To investigate how monetary outcomes influence the neural responses to lying, we used a modified cheap-talk sender–receiver game in which participants were the direct recipients of lies and truthful statements resulting in either beneficial or harmful monetary outcomes. Both truth-telling (vs lying) and beneficial (vs harmful) outcomes elicited higher activity in the nucleus accumbens. Lying (vs truth-telling) elicited higher activity in the supplementary motor area, right inferior frontal gyrus, superior temporal sulcus and left anterior insula. Moreover, a significant interaction effect was found in the left amygdala: monetary outcomes modulated left amygdala activity only for truth-telling, not for lying. Our study identified a neural network associated with the reception of lies and truth, including regions linked to reward processing and to the recognition and emotional experience of being treated (dis)honestly.
Key words: lying; truth-telling; fMRI
Introduction
It is extensively debated whether lying can be morally acceptable under specific circumstances, e.g. whether beneficial ends can justify morally questionable means. Kantian moral theory suggests that telling a lie is never morally permissible, regardless of the outcome (Kant, 1797). The utilitarian school of thought, however, implies that lying is morally right if it produces more total welfare than any other act could have (Carson, 2010). From the perspective of decision makers who decide whether to lie and benefit more, accumulating evidence suggests that people incur a utility loss from deception (Ellingsen and Johannesson, 2004), which runs against the assumption of the classic economic approach that people are selfish and that lying carries no cost (Crawford and Howell, 1998). One behavioral study found that a significant fraction of people chose the materially advantageous allocation more often in a dictator game than in a deception game (Gneezy, 2005), supporting the idea of a lying-aversion parameter that significantly influences individual utility. Participants have also been shown to be more likely to avoid settings that enable them to deceive others, compared with settings that do not, suggesting that the temptation to lie triggers anticipated moral emotion (Shalvi et al., 2011). Even in situations where lies benefit both self and others, participants were still reluctant to lie (Erat and Gneezy, 2012). This is again in line with the intuition of an aversion to lying, i.e. a dislike for lies independent of the consequences (López-Pérez and Spiegelman, 2013).
From the perspective of individuals who are the recipients of lies (i.e. being lied to), how do the consequences influence the processing of lies and truth? To investigate this question, one can confront participants with scenarios that contain lying or truth-telling behaviors and then ask them to provide moral judgments. This is the typical paradigm researchers have used to
investigate the neural patterns underlying moral judgment of
deception by using functional magnetic resonance imaging
(fMRI) techniques (Berthoz et al., 2006; Hayashi et al., 2010, 2014;
Wu et al., 2011). However, the ongoing affective state might be altered if participants are interrupted with questions and required to report their feelings or judgments (Knutson et al., 2014). What is more, it has been argued that a third-party judgment might not be emotionally arousing and might instead involve more rational processing (Wu et al., 2011), which could induce greater involvement of prefrontal areas (Abe et al., 2014; Hayashi et al., 2014). According to the social intuitionist model (Haidt, 2001), moral judgment is the result of a quick 'gut' feeling. Moral reasoning helps people to justify their intuitive responses, to share their judgment with others, or is relied upon when no initial or only conflicting intuitions exist (Schnall et al., 2008). Therefore, moral reasoning might also play a role in moral judgment tasks. Finally, neural patterns differ when the affective reaction to being deceived oneself is compared with the affective reaction to watching others being deceived (Grezes et al., 2006). In a study conducted by Grezes et al. (2006), participants watched videos of actors (either themselves or others) lifting a box and then determined whether or not the actors had been misled as to the actual weight of the box. Activation in the amygdala and fusiform gyrus was found only when the participants themselves had been deceived. As moral judgment tasks during the experiment might introduce confounds (i.e. interrupting the mental process) and personal involvement is crucial, in the current study we investigated the neural responses to moral behaviors from the perspective of the recipients, without using an explicit moral judgment task during scanning.
However, little is known about the neural response to the passive reception of lying and truth-telling. To the best of our knowledge, no previous study has investigated how monetary consequences influence the judgment of deception from the perspective of the recipients themselves. In previous studies on neural responses to lies and truth, participants were neither the direct recipients of lying and truth-telling (e.g. they read scenarios) nor did they experience lying and truth-telling with different outcomes. Therefore, our experiment utilized a modified sender–receiver game (Erat and Gneezy, 2012) in which participants were the direct recipients (i.e. message receivers) of truthful or deceptive statements sent by other players (i.e. message senders). The sender chose to send a message indicating the roll of a die, which represented one of two payoff options for both the sender and the receiver. In the original setting, if the receiver chose the actual outcome of the die roll, the corresponding option would be implemented; otherwise, an alternative option would be implemented. In our modified version, if a receiver did not believe the sender's message, both the receiver and the sender received a minimum payoff, which makes believing the sender's message the dominant strategy for the receiver. This setting was applied to avoid confounding factors, such as extra processes of generating strategies or feelings of regret due to bad choices, which might influence the neural response to moral behaviors. When following the sender's information, the receiver faces four possible situations: the 'beneficial lies situation', in which the information is deceptive and both players earn more money than in the alternative option; the 'beneficial truth situation', in which the information is truthful and both players earn more money than in the alternative option; the 'harmful lies situation', in which the information is deceptive, the sender earns more money and the participant earns less money than in the alternative option; and the 'harmful truth situation', in which the information is truthful, the sender earns more money and the participant earns less money compared with the alternative option.
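To make the condition structure concrete, here is a minimal sketch (our own illustration in Python, not part of the study's materials; all names are invented) of how a believed trial maps onto the four conditions:

```python
# Hypothetical illustration: classify a believed trial into the 2 x 2 design.
from dataclasses import dataclass

@dataclass
class Trial:
    message: int                 # die number stated by the sender (1-6)
    die_roll: int                # actual outcome of the die roll (1-6)
    receiver_implemented: float  # receiver's payoff in the implemented option
    receiver_alternative: float  # receiver's payoff in the alternative option

def classify(t: Trial) -> str:
    """Label a trial, assuming the receiver believed the sender's message."""
    means = "truth" if t.message == t.die_roll else "lies"
    ends = "beneficial" if t.receiver_implemented > t.receiver_alternative else "harmful"
    return f"{ends} {means}"

# A lie that nevertheless pays the receiver more than the alternative:
print(classify(Trial(6, 3, 8.0, 2.0)))  # -> 'beneficial lies'
```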
Based on the previous literature, two clusters of brain regions might be critically involved. First, the manipulation of monetary outcomes might modulate reward-associated regions. The nucleus accumbens (NAcc) is a key structure linked to the anticipation of rewards, independent of the kind of incentive, such as monetary benefits (Delgado et al., 2000; Berns et al., 2001; Knutson et al., 2001a,b; O'Doherty, 2004; Bartra et al., 2013) or social reward (Izuma et al., 2008; Spreckelmeyer et al., 2009; Häusler et al., 2015). Second, being the recipient of deception might engage regions that have been consistently related to moral sensitivity [i.e. the detection and interpretation of moral situations or issues (Rest et al., 1999)], including the superior temporal sulcus (STS), insula and limbic regions (especially the amygdala) (Robertson et al., 2007). Intriguingly, two influential moral theories lead to different predictions. Under Kantian moral theory, both types of lies, regardless of their outcomes, should be treated as equally impermissible and as less rewarding than truth-telling, which should lead to lower activity in reward-related regions (especially the NAcc) but higher activity in regions related to moral sensitivity. Alternatively, according to utilitarian theory, we could expect increased NAcc activation in the beneficial lie condition but decreased activity in moral sensitivity-related regions for beneficial lies (vs harmful lies), as beneficial lies would be considered rewarding and morally permissible.
Materials and Methods
Participants
In the experiment, participants were divided into two groups:
senders and receivers. Prior to the fMRI experiment, we invited
86 participants to play the game online as senders. Forty-one
participants were enrolled as receivers in the fMRI part of the
experiment. Data from three subjects were excluded due to excessive head movements (i.e. >3 mm). The remaining 38 participants (22 females) ranged from 19 to 32 years of age
(mean ± s.d. = 24 ± 3.24 years). All participants had normal or
corrected-to-normal vision and no prior history of psychiatric or
neurological disorders. All participants provided their informed
consent and the study was approved by the Ethics Committee
of the University of Bonn.
Tasks
In this experiment, we used a modified sender–receiver game
with real incentives (Erat and Gneezy, 2012) (Figure 1). The sender’s task was to send a message to the receiver and the receiver’s task was to decide whether or not to follow the sender’s
message. For each round, the outcome of the die roll corresponded to one of two payment options. From six possible message choices, the senders sent a message to anonymous
receivers: 'The outcome of the die roll is x' (where x = 1, 2, 3, 4, 5 or 6). Before the fMRI experiment, we collected the decisions from 86 senders in advance (Figure 1A), and they earned €5 for their participation.
In the fMRI experiment, 41 participants were invited to play
the game as receivers. Before starting, the receivers completed a
manipulation check to ensure that they fully understood the experiment. Importantly, the receivers were told that when the
senders made the decisions, they knew all the information
beforehand, unlike the receivers.

Fig. 1. Experimental paradigm of the sender–receiver game. The outcome of the die roll corresponds to the right payment option. A sender (e.g. L.Y.) sends the message 'The outcome of the die roll is 6' to a receiver (A). In the scanner, if the receiver believes the sender's message (B), the other payment option is implemented (the option within the yellow outlined frame); this example belongs to the beneficial lie condition. If the receiver does not believe (C), both the receiver and the sender earn €1. The red and white bars, respectively, represent the payoff for the receiver and the sender.

During scanning (Figure 1B
and C), in each round the receiver first saw the message (e.g.
‘Message from L.Y.: ‘The outcome of the die roll is 6’). This message was sent from one of the 86 senders. On the same screen,
the receiver would also see the phrase ‘Believe?’ and choose to
believe or not believe the sender’s messages by choosing either
the ‘Yes’ or ‘No’ option. The position of ‘Yes’ and ‘No’ option
was counterbalanced on the left and right side of the screen.
After the receiver made a decision, the next screen displayed
two payoff options and the actual outcome of the die roll. The
receiver then discovered whether or not the sender sent a truthful message. If the receiver believed the sender’s message and
the message matched the outcome of the die roll (i.e. a truthful
message), the payment option which the die represents would
be implemented. If the receiver believed the untruthful message
which did not match the number of the die roll, the other payment option would be implemented (Figure 1B). If the receiver
did not believe, both the receiver and the sender earned €1 for
that round (Figure 1C).
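The believe/not-believe contingencies described above amount to a short decision rule; the following sketch (our own paraphrase in Python, not the original Presentation script; names are invented) shows the payoff resolution for a single round:

```python
# Hypothetical illustration of one round's payoff resolution.
# Each option is a (sender_payoff, receiver_payoff) pair in euros.

def resolve_round(believe: bool, message: int, die_roll: int,
                  option_die: tuple[float, float],
                  option_other: tuple[float, float]) -> tuple[float, float]:
    if not believe:
        return (1.0, 1.0)        # distrust: both earn the 1 euro minimum
    if message == die_roll:      # truthful message
        return option_die        # the option the die roll represents
    return option_other          # lie: the alternative option is implemented

# Believed lie (message 6, die shows 3): the 'other' option is implemented.
print(resolve_round(True, 6, 3, option_die=(9.0, 4.0), option_other=(6.0, 8.0)))
```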
In each round, for both the sender and the receiver, the payoff in the implemented option was one of three monetary amounts (€4, €8 or €12). In the other option, the payoff for the receiver was increased or decreased by 25% or 75%, and the payoff for the sender was decreased by 25% or 75%, compared with the payoff in the implemented option (for the payoff structure, see Table 1). We used a 2 (means: lies and truth) × 2 (ends: benefit and harm) within-subject factorial design, yielding four experimental conditions: beneficial lies, beneficial truth, harmful lies and harmful truth. Under lying conditions, the sender sent an incorrect message; under truthful conditions, the sender sent the correct message; under beneficial conditions, the receiver's final payment was higher than the payment in the other option; and under harmful conditions, the receiver's final payment was lower than the payment in the other option. There were 36 rounds for each condition and 144 rounds in total. After the experiment, the computer randomly chose one of the rounds to be implemented as the final payment for the senders and the receivers. The receivers earned an extra €10 for their participation. The entire experiment lasted about 40 min.
To measure the emotional valence and moral acceptance of the four conditions presented in the experiment, after completing the experiment participants rated emotional valence according to Lang's Self-Assessment-Manikin Valence Scale (Lang, 1980) and then rated moral acceptance on a nine-level scale (1: not acceptable at all, 5: neutral, 9: extremely acceptable). The Self-Assessment Manikin nine-level valence scale (1: very unhappy, 5: neutral, 9: very happy) was adapted from PXLab (Irtel, 2008).
Image acquisition and data analysis
All images were acquired on a Siemens Trio 3.0 Tesla scanner
with a standard 32-channel head coil. Structural scans included
T1-weighted images (TR = 1660 ms; TE = 2.75 ms; flip angle = 9°; slice thickness = 0.8 mm). One functional session was run, which started with a localizer scan and was then followed by the paradigm implemented in Presentation (Neurobehavioral Systems; http://www.neurobs.com), during which T2*-weighted echo planar images (EPI) were collected (TR = 2500 ms; TE = 30 ms; flip angle = 90°; 37 slices with 3 mm slice thickness; 96 × 96 acquisition matrix; field of view = 192 mm × 192 mm; voxel size = 2 × 2 × 3 mm³).
SPM8 was used for fMRI data analysis (Wellcome Department of Cognitive Neurology, London, UK; http://www.fil.ion.ucl.ac.uk/spm/). For each subject, EPI images were first realigned and resliced. Data sets that exhibited movement of >3 mm of translation or 3° of rotation were excluded. The anatomical image was then co-registered with the mean EPI image of each participant and further segmented. SPM8's DARTEL tool was used to create a template and normalize functional and anatomical scans to the MNI template. Finally, the normalized functional images were smoothed with an 8 mm full-width-at-half-maximum Gaussian filter.
Each participant’s brain activation was estimated using a
general linear model (GLM). The regressors of interest included
beneficial lies, beneficial truth, harmful lies and harmful truth.
Trials with no response, the decision phase, trials in which participants chose not to believe, and the six motion parameters
were also included in the GLM as regressors of no interest. The
canonical hemodynamic response function implemented in
SPM8 was used to model the fMRI signal, and a high-pass filter
was set at 128 s to reduce low frequency noise. For the second
level analysis, four contrasts (i.e. beneficial lies, beneficial truth,
harmful lies and harmful truth) for each participant were
entered into a flexible factorial model with two within-group
factors [ends (beneficial vs harmful outcome) and means (lying
vs truth-telling)]. Results were height thresholded at the voxel level at P < 0.001 (k = 10) and had to survive cluster-level family-wise error (FWE) correction at P < 0.05.
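As a schematic of the first-level model just described (the study used SPM8 in MATLAB; the following is only a self-contained numpy sketch with invented onsets, not the authors' pipeline), each condition's event train is convolved with a canonical double-gamma HRF, betas are estimated by least squares, and the interaction is a weighted contrast of the four condition betas:

```python
# Schematic GLM: 4 condition regressors + intercept, canonical HRF, contrasts.
import numpy as np
from scipy.stats import gamma

TR, n_scans = 2.5, 200                    # TR matches the reported 2500 ms

def canonical_hrf(tr: float, length_s: float = 32.0) -> np.ndarray:
    """SPM-style double-gamma HRF sampled every TR seconds."""
    t = np.arange(0.0, length_s, tr)
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

def regressor(onset_scans: np.ndarray) -> np.ndarray:
    """Event boxcar convolved with the HRF, truncated to the session length."""
    boxcar = np.zeros(n_scans)
    boxcar[onset_scans] = 1.0
    return np.convolve(boxcar, canonical_hrf(TR))[:n_scans]

conditions = ["beneficial_lies", "beneficial_truth", "harmful_lies", "harmful_truth"]
rng = np.random.default_rng(0)
X = np.column_stack(
    [regressor(np.sort(rng.choice(n_scans, 36, replace=False)))  # 36 trials each
     for _ in conditions] + [np.ones(n_scans)])                  # intercept

y = rng.standard_normal(n_scans)          # stand-in for one voxel's time series
betas = np.linalg.pinv(X) @ y             # least-squares beta estimates

# Interaction (beneficial truth - harmful truth) vs (beneficial lies - harmful lies):
c = np.array([-1, +1, +1, -1, 0])
print("interaction estimate:", c @ betas)
```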
Table 1. Payoff structure. Rows: final payoff S/R; columns: ratio S/R; cells: alternative payoff S/R.

Final payoff S/R | −25%/−25% | −75%/−75% | −75%/−25% | −25%/−75% | −25%/+25% | −75%/+75% | −75%/+25% | −25%/+75%
4/4   | 3/3 | 1/1 | 1/3 | 3/1 | 3/5  | 1/7  | 1/5  | 3/7
8/4   | 6/3 | 2/1 | 2/3 | 6/1 | 6/5  | 2/7  | 2/5  | 6/7
12/4  | 9/3 | 3/1 | 3/3 | 9/1 | 9/5  | 3/7  | 3/5  | 9/7
4/8   | 3/6 | 1/2 | 1/6 | 3/2 | 3/10 | 1/14 | 1/10 | 3/14
8/8   | 6/6 | 2/2 | 2/6 | 6/2 | 6/10 | 2/14 | 2/10 | 6/14
12/8  | 9/6 | 3/2 | 3/6 | 9/2 | 9/10 | 3/14 | 3/10 | 9/14
4/12  | 3/9 | 1/3 | 1/9 | 3/3 | 3/15 | 1/21 | 1/15 | 3/21
8/12  | 6/9 | 2/3 | 2/9 | 6/3 | 6/15 | 2/21 | 2/15 | 6/21
12/12 | 9/9 | 3/3 | 3/9 | 9/3 | 9/15 | 3/21 | 3/15 | 9/21

Notes: S, sender; R, receiver; negative ratio, the alternative payoff was decreased compared with the final payoff (beneficial conditions); positive ratio, the alternative payoff was increased compared with the final payoff (harmful conditions).
Results
Behavioral results
For emotional valence, we performed a 2 (means: lying and truth-telling) × 2 (ends: beneficial and harmful outcome) analysis of variance (ANOVA) (Figure 2A). Significant main effects of means [truth-telling > lying; F(1, 37) = 80.50, P < 0.001] and ends [beneficial outcome > harmful outcome; F(1, 37) = 15.59, P < 0.001] and a significant means × ends interaction [F(1, 37) = 5.66, P = 0.02] were found. Post hoc analysis showed that emotional valence was significantly higher (P = 0.005) for the beneficial truth condition (mean ± s.d.: 7.79 ± 1.43) than for the beneficial lies condition (7.05 ± 1.73) and significantly higher (P < 0.001) for the harmful truth condition (5.55 ± 1.63) than for the harmful lies condition (4.24 ± 1.85). The valence difference between the harmful truth and harmful lies conditions (mean ± s.d.: 1.32 ± 1.97) was significantly larger (P = 0.02) than the difference between the beneficial truth and beneficial lies conditions (mean ± s.d.: 0.74 ± 1.53).
For moral acceptance, we performed the same ANOVA (Figure 2B). The mean ratings (± s.d.) of moral acceptance for beneficial lies, beneficial truth, harmful lies and harmful truth were 6.97 ± 2.03, 8.34 ± 1.12, 4.58 ± 2.16 and 6.61 ± 2.04, respectively. Significant main effects of means [truth-telling > lying; F(1, 37) = 75.39, P < 0.001] and ends [beneficial outcome > harmful outcome; F(1, 37) = 32.34, P < 0.001] were found. The interaction showed only a trend (P = 0.08).

Fig. 2. Emotional valence (A) and moral acceptance ratings (B) for the four conditions (beneficial lies, beneficial truth, harmful lies and harmful truth). Error bars denote standard deviation of the mean (s.d.).
The emotional valence and moral acceptance ratings were significantly correlated (beneficial lies: r = 0.84, P < 0.001; beneficial truth: r = 0.69, P < 0.001; harmful lies: r = 0.69, P < 0.001; harmful truth: r = 0.54, P < 0.001). There were no significant gender differences in the ratings.
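For readers who want to run the same kind of behavioral test on their own data, a 2 × 2 repeated-measures ANOVA can be sketched as follows (simulated ratings seeded with the reported cell means, and statsmodels as an assumed tool; the paper does not state which software was used):

```python
# Illustrative 2 (means) x 2 (ends) repeated-measures ANOVA on valence ratings.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
cell_means = {("truth", "beneficial"): 7.79, ("lies", "beneficial"): 7.05,
              ("truth", "harmful"): 5.55, ("lies", "harmful"): 4.24}

rows = [{"subject": s, "means": m, "ends": e,
         "valence": float(np.clip(cell_means[(m, e)] + rng.normal(0, 1.6), 1, 9))}
        for s in range(38)                 # 38 receivers, one rating per cell
        for m in ("truth", "lies")
        for e in ("beneficial", "harmful")]

res = AnovaRM(pd.DataFrame(rows), depvar="valence",
              subject="subject", within=["means", "ends"]).fit()
print(res.anova_table)   # F and P for both main effects and the interaction
```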
Functional MRI results
The results are listed in Table 2. Beneficial outcomes elicited stronger activation in the bilateral NAcc compared with harmful outcomes (Figure 3A); the opposite comparison yielded no significant results. Truth elicited stronger activation in the left NAcc compared with lies (Figure 3B). Compared with truth, lies were associated with stronger brain activation in the right STS, left supplementary motor area (SMA), left anterior insula (AI) and right inferior frontal gyrus (IFG; Figure 4A). The blood oxygenation level dependent (BOLD) percentage signal change for all four conditions was extracted from the left NAcc (the overlap of the truth vs lies and beneficial vs harmful outcome contrasts; Figure 3C), SMA (Figure 4B), right IFG (Figure 4C), left AI (Figure 4D) and right STS (Figure 4E). No significant
interaction of (beneficial lies–harmful lies) vs (beneficial truth–harmful truth) was found. The other interaction, (beneficial truth–harmful truth) vs (beneficial lies–harmful lies), was significant in the left amygdala (small-volume correction; P < 0.05, FWE correction; Figure 5A). Further analysis of the left amygdala showed significant differences in the contrasts of beneficial truth vs harmful truth (Figure 5B; P = 0.02) and beneficial truth vs beneficial lies (P < 0.001). No significant differences were found in the contrasts of beneficial lies vs harmful lies or harmful lies vs harmful truth (Figure 5B; P's ≥ 0.08). There were no significant gender differences in any of the four conditions.

Table 2. fMRI results (voxel-level height threshold P < 0.001, k = 10; cluster-level P < 0.05, FWE correction)

Brain area | L/R | Cluster | Z-value | MNI x, y, z
Main effect of means, lies > truth:
STS | R | 878 | 4.51 | 58, −48, 12
SMA | L | 372 | 4.09 | −3, 22, 60
AI | L | 475 | 4.08 | −36, 22, 9
IFG | R | 480 | 4.01 | 40, 16, 24
Main effect of means, truth > lies:
Occipital lobe | R | 17 215 | 6.50 | 12, −65, 3
Nucleus accumbens (NAcc) | L | 374 | 4.20 | −12, 4, −10
Main effect of ends, beneficial > harmful outcome:
Nucleus accumbens (NAcc) | R | 725 | 4.64 | 12, 12, −6
Nucleus accumbens (NAcc) | L | 386 | 4.34 | −10, 7, −7
Main effect of ends, harmful > beneficial outcome: none
Interaction, (beneficial lies–harmful lies) vs (beneficial truth–harmful truth): none
Interaction, (beneficial truth–harmful truth) vs (beneficial lies–harmful lies):
Amygdala^a | L | 52 | 3.52 | −17, 0, −18

^a The effect is significant after small-volume correction for multiple comparisons within the amygdala ROI (P < 0.05, FWE correction).

Fig. 3. Main effects of ends (beneficial vs harmful outcome; A) and means (truth vs lies; B). Voxels within the significant clusters were height thresholded at P < 0.001 (k = 10) and survived cluster-level P < 0.05 FWE correction. (C) Percentage signal changes were extracted from the overlap in the left NAcc [yellow; masked with the NAcc anatomical mask from the WFU PickAtlas Tool (Maldjian et al., 2003)]. Error bars denote standard error of the mean (s.e.).

Fig. 4. Main effect of lies vs truth. (A) Voxels within the significant clusters were height thresholded at P < 0.001 (k = 10) and survived cluster-level P < 0.05 FWE correction. Percentage signal changes were extracted from the left SMA (B), right IFG (C), left AI (D) and right STS (E). Error bars denote standard error of the mean (s.e.).

Fig. 5. The interaction (beneficial truth–harmful truth) vs (beneficial lies–harmful lies). (A) The effect in the amygdala is significant after small-volume correction for multiple comparisons (P < 0.05, FWE correction). (B) Percentage signal changes were extracted from the left amygdala. Significant differences were found in the contrasts of beneficial truth (vs harmful truth) and beneficial truth (vs beneficial lies); no significant differences were found in the contrasts of beneficial lies (vs harmful lies) or harmful lies (vs harmful truth). Error bars denote standard error of the mean (s.e.). (*P < 0.05; ***P < 0.001; n.s.: not significant).
Discussion
The goal of this study was to investigate how beneficial or
harmful outcomes influence the recipients' neural responses to lying and truth-telling. Our results showed that the neural responses to lying and truth-telling were modulated by whether the behavior led to beneficial or harmful monetary outcomes. First, both the truth vs lies and the beneficial vs harmful outcome contrasts activated the left NAcc (Figure 3). Second, in contrast to the truth-telling condition, lying was associated with higher activity in the SMA, right IFG, STS and left AI. Third, the left amygdala showed a significant interaction effect [i.e. (beneficial truth–harmful truth) vs (beneficial lies–harmful lies)]. Further analysis revealed that monetary outcomes modulated left amygdala activity only toward truth-telling (i.e. beneficial outcomes increased amygdala activity), without affecting the amygdala's response toward lying.
In our results, we found that both contrasts of truth vs lies
and beneficial vs harmful outcomes showed higher activation
in the NAcc (Figure 3). Furthermore, no positive NAcc activity
linked to beneficial lying was found. The NAcc has been implicated in the processing of monetary rewards (Knutson and
Cooper, 2005; Adcock et al., 2006; Sabatinelli et al., 2007) and social rewards (Izuma et al., 2008; Spreckelmeyer et al., 2009;
Häusler et al., 2015). In our study, enhanced activity in the NAcc was found not only in the comparison of beneficial vs harmful outcomes but also when comparing truth-telling to lying. To our knowledge, however, activation in the NAcc has not been reported in previous studies that investigated the judgment or reception of lying and truth-telling (Grezes et al., 2004a,b, 2006; Berthoz et al., 2006; Hayashi et al., 2010, 2014; Wu et al., 2011). One possible explanation is that in these previous studies, participants served as third-party observers and provided explicit judgments. More rational judgment/reasoning processes and the lack of personal involvement might dampen neural activity in regions associated with reward processing. In the interactive game (prisoner's dilemma
game), the NAcc was activated when participants viewed the faces of intentional cooperators compared with neutral faces or non-intentional cooperators (Singer et al., 2004a). In neuroimaging studies of deceiving others, successful deception (vs truth-telling) elicited stronger activations in the ventral striatum (Sun et al., 2015). The findings in the NAcc are in accordance with previous fMRI findings that monetary incentives and socially favorable behavior are experienced as rewarding. Our results further suggest that, compared with truth-telling, lying is experienced as less rewarding even when it comes with beneficial outcomes.
When comparing lying with truth-telling, significant activation was found in the left AI, SMA, right STS and IFG (Figure 4). The insula has been implicated in multiple domains, such as perceptual decision making (Binder et al., 2004; Thielscher and Pessoa, 2007), deceptive decision making (Christ et al., 2009; Farah et al., 2014; Lisofsky et al., 2014) and empathy (Singer et al., 2004b; Jackson et al., 2005). Owing to its involvement in such a wide range of domains, the AI has been proposed to play a fundamental role in awareness (Craig, 2009) or general arousal (Singer et al., 2009). Note that in our experiment, participants reported significantly less positive valence toward lying. It is possible that this negative affect toward lying caused the higher activity in the AI. Previous studies have revealed the involvement of the insula in processing negative events, such as the experience of monetary loss (O'Doherty et al., 2003), disgust (Calder et al., 2000), aversion (Paulus et al., 2003; Chang et al., 2011) and negative arousal (Paulus and Stein, 2006). For example, activation in the AI was found while participants faced unfair offers and correlated positively with the propensity to reject those offers (Sanfey et al., 2003). Patients with insula lesions showed
impairment for the perception and experience of disgust
(Sprengelmeyer, 2007). The tendency to accept unfair proposals was associated with decreased activity in the
AI, which has been implicated in negative affect (Tabibnia et al., 2008).
Nevertheless, findings associating activations in individual brain regions with multiple emotions do not support simple one-to-one mappings between emotions and brain regions (Hamann, 2012), and activation alone might provide little information about the underlying mental state (Poldrack, 2006;
Glimcher and Fehr, 2013). Besides the AI, we also found that the
right IFG was activated in the contrast of lying vs truth-telling.
The right IFG has typically been related to response inhibition tasks (Aron et al., 2004) and has been proposed as a brake that functions in different modes and in different contexts (Aron et al., 2014).
Previous deception studies where participants were instructed to
lie showed that the IFG and insula are two of the regions which
were consistently more active for deceptive responses compared
with honest responses (Christ et al., 2009; Farah et al., 2014).
Deception-related regions (the IFG and insula) overlapped with more than one of the activation likelihood estimate (ALE) maps for working memory, inhibitory control and task switching, and were therefore thought to be associated with general executive control (Christ et al., 2009). However, instructed paradigms have inherent limitations, such as poor ecological validity (Ganis and Keenan, 2009) or an involvement of emotions and cognitive control different from that in real life (Farah et al., 2014). Thus, the functions of the insula and IFG as inferred from instructed paradigms might differ in paradigms that allow participants to make their own choices. In studies where participants chose to lie on their own initiative (i.e. spontaneous deception), the IFG was activated when making deceptive responses (Sip et al., 2013) and also when deciding how to respond, regardless of the choice made (Sip et al., 2012).
Together with the insula, hyper-activation in the IFG has consistently been found to be associated with anxiety and stress (Etkin and Wager, 2007). In addition, increased activity in the
IFG correlated with higher aversion to risk (Christopoulos et al.,
2009). One potential explanation is that negative emotion or aversion to certain contexts or behaviors elicits higher activation in the IFG and AI. Using resting electroencephalography, researchers found that the higher the baseline activation in the AI, the lower participants' propensity to lie in a trust game (Baumgartner et al., 2013), suggesting that aversion to lying might rely upon the AI and associated networks. In another trust game, participants (in the role of trustees) could first either make a promise about paying back money or had no chance to make any promise; they then waited for an investor's decision about trusting or not, and finally decided to pay back or keep the money (Baumgartner et al., 2009). In dishonest participants, promise giving (vs the no-promise condition) elicited increased right AI and IFG activity during the promise stage, whereas the no-promise condition elicited higher AI and IFG activity during the anticipation stage. The authors speculated that the former was associated with the aversive emotional experience of providing a misleading promise, and the latter with the anticipation of negative and unforeseeable emotional events. Taken together, negative emotion or aversion to lying might be associated with the AI and IFG.
The contrast of lying vs truth-telling also activated the STS,
which is critical in social perception, i.e. the evaluation of the social communicative intentions of others extracted from gaze, facial expression, body gesture and motions (Pelphrey et al., 2004;
Moll et al., 2005). The posterior STS might also be critical to moral sensitivity (Robertson et al., 2007); it has been found to be more active when judging facial trustworthiness (Bzdok et al., 2011), viewing moral pictures (Harenski and Hamann, 2006) and making moral judgments (Moll et al., 2002; Greene et al., 2004; Heekeren et al., 2005; Sevinc and Spreng, 2014). In previous deception studies, activation in the STS was also observed when participants judged behaviors as misleading (Grezes et al., 2004b, 2006). Recognition-related increases for morally transgressive behaviors (e.g. lying) might explain the higher activation in the STS (Robertson et al., 2007).
The interaction results showed that the amygdala activity
was significantly increased by the beneficial outcome under
truthful rather than lying conditions (Figure 5). At least two alternative explanations of the findings in the amygdala are feasible. The first is that the stronger deactivation in the amygdala might be caused by the involvement of emotion regulation. Previous findings have suggested a specific role of the left amygdala in the cognitive and intentional control of mood. Using real-time fMRI neurofeedback from the amygdala, patients with major depressive disorder learned to up-regulate amygdala activation (i.e. a significant BOLD signal increase during training), resulting in increased ratings of happiness and decreased ratings of depression and anxiety (Young et al., 2014). Real-time neurofeedback can also enhance participants' ability to down-regulate amygdala activity (i.e. decrease activity) while being stimulated with negative emotional faces (Brühl et al., 2014); the reduction of the amygdala response was interpreted as one aspect of successful emotion regulation. A meta-analysis of brain regions associated with fear extinction, placebo and cognitive emotion regulation, in which participants successfully diminished negative affect, showed hypo-activation in the left amygdala (Diekhof et al., 2011). Another meta-analysis identified decreased amygdala activity in the presence of reappraisal (Buhle et al., 2014). In an imaging
study applying painful stimuli, deactivation of the amygdala was found while participants experienced pain (Petrovic et al., 2004). This
might reflect a regulation process of subjective distress while
facing an aversive context.
We speculate that the significant decreases in the amygdala when participants faced a harmful outcome and lying behavior might be due to a higher involvement of emotion regulation. First, the regulation of negative emotions toward losses might decrease amygdala activity: a previous study found that successfully regulating loss aversion reduces amygdala responses to loss but not gain outcomes (Sokol-Hessner et al., 2013). Second, moral transgression might also reduce amygdala responses: the amygdala showed significantly less activity compared with baseline in response to unfair offers (Tabibnia et al., 2008). Therefore, decreased amygdala responses might be associated with emotion regulation of the more emotionally aversive and less morally acceptable means and outcomes. Nevertheless, our study does not provide direct experimental evidence for this explanation: participants were neither instructed to modulate their mood nor asked to report the existence of, or strategies for, emotion regulation after the experiment.
Alternatively, the increased activity in the amygdala when encountering beneficial truth-telling might also be explained as a consequence of increased reward. The amygdala is thought to play a major role in the perception and production of negative emotion or affect (Öhman and Mineka, 2001; Phelps et al., 2001; Mason et al., 2006). In previous neuroimaging studies addressing the judgment of lying and truth-telling behaviors, the amygdala was more active when participants thought they themselves were being deceived by the experimenter (Grezes et al., 2006) and toward participants' own moral violations (Berthoz et al., 2006). However, the amygdala is also involved in the processing of stimuli with positive valence and reward (Baxter and Murray, 2002; Murray, 2007; Anders et al., 2008; Ball et al., 2009; Morrison and Salzman, 2010). In animal studies, selective amygdala lesions can impair the ability to associate stimuli with reward value (Gaffan et al., 1988; Cador et al., 1989; Holland and Gallagher, 1999). In human studies, activity in the amygdala and the NAcc increased in both adolescents and adults when they were winning money (as opposed to not winning money) (Ernst et al., 2005). Therefore, beneficial outcomes and morally favorable behavior (i.e. truth-telling) could increase reward, which could in turn lead to higher activity in the amygdala. However, most inferences about amygdala function come from positive activation changes; whether payoff changes also modulate the amygdala's activity in the negative direction remains unclear. Further investigation is required to determine the role of the amygdala in the processing of lying and truth-telling.
Beneficial outcomes did seem to make the dark side of lying less negative, as supported by the smaller valence difference between truth-telling and lying in the beneficial conditions as well as the increased BOLD signal in the NAcc in the contrast of beneficial lies vs harmful lies. This conflicts with Kant's absolute prohibition against lying: lying might sometimes be permissible if the norm of not lying conflicts with a more important, or equally important, goal (e.g. to financially benefit the receivers in our experiment). Nevertheless, our results do not completely favor the utilitarian theory, which readily permits lying. The moral acceptance ratings of lying behaviors were significantly lower than those of truth-telling. What is more, we did not observe a specific rewarding effect of beneficial outcomes on lying behavior, although it is possible that the monetary benefits offered in our experiment were not sufficiently large to make lying rewarding. Moreover, lying might always have indirect bad consequences: in a broader sense, it would weaken one's reputation and finally diminish the short-term benefits gained from lying (Carson, 2010).
In conclusion, beneficial or harmful outcomes modulated a network involving the NAcc, IFG/AI, STS and amygdala, which is linked to reward processing and to the recognition and emotional experience of being treated (dis)honestly. Although the present findings do not settle the long debate over whether the ends can justify the means, they might provide hints about how the consequences of moral behaviors alter our attitudes.
Acknowledgements
We thank Karen Woermann and Yang Hu for their comments. This work was supported by a Heisenberg Grant of
the German Research Council (DFG; We 4427/3-2) to B.W.
Conflict of interest. None declared.
References
Abe, N., Fujii, T., Ito, A., et al. (2014). The neural basis of dishonest
decisions that serve to harm or help the target. Brain and
Cognition, 90, 41–9.
Adcock, R.A., Thangavel, A., Whitfield-Gabrieli, S., Knutson, B.,
Gabrieli, J.D. (2006). Reward-motivated learning: mesolimbic
activation precedes memory formation. Neuron, 50(3), 507–17.
Anders, S., Eippert, F., Weiskopf, N., Veit, R. (2008). The human
amygdala is sensitive to the valence of pictures and sounds irrespective of arousal: an fMRI study. Social Cognitive and
Affective Neuroscience, 3(3), 233–43.
Aron, A.R., Robbins, T.W., Poldrack, R.A. (2004). Inhibition and
the right inferior frontal cortex. Trends in Cognitive Sciences, 8(4),
170–7.
Aron, A.R., Robbins, T.W., Poldrack, R.A. (2014). Inhibition and
the right inferior frontal cortex: one decade on. Trends in
Cognitive Sciences, 18(4), 177–85.
Ball, T., Derix, J., Wentlandt, J., et al. (2009). Anatomical specificity
of functional amygdala imaging of responses to stimuli with
positive and negative emotional valence. Journal of Neuroscience
Methods, 180(1), 57–70.
Bartra, O., McGuire, J.T., Kable, J.W. (2013). The valuation system:
a coordinate-based meta-analysis of BOLD fMRI experiments
examining neural correlates of subjective value. NeuroImage,
76, 412–27.
Baumgartner, T., Fischbacher, U., Feierabend, A., Lutz, K., Fehr, E.
(2009). The neural circuitry of a broken promise. Neuron, 64(5),
756–70.
Baumgartner, T., Gianotti, L.R., Knoch, D. (2013). Who is honest
and why: baseline activation in anterior insula predicts interindividual differences in deceptive behavior. Biological
Psychology, 94(1), 192–7.
Baxter, M.G., Murray, E.A. (2002). The amygdala and reward.
Nature Reviews Neuroscience, 3(7), 563–73.
Berns, G.S., McClure, S.M., Pagnoni, G., Montague, P.R. (2001).
Predictability modulates human brain response to reward. The
Journal of Neuroscience, 21(8), 2793.
Berthoz, S., Grezes, J., Armony, J., Passingham, R., Dolan, R.
(2006). Affective response to one’s own moral violations.
NeuroImage, 31(2), 945–50.
Binder, J.R., Liebenthal, E., Possing, E.T., Medler, D.A., Ward, B.D.
(2004). Neural correlates of sensory and decision processes in
auditory object identification. Nature Neuroscience, 7(3), 295–
301.
Brühl, A.B., Scherpiet, S., Sulzer, J., Stämpfli, P., Seifritz, E.,
Herwig, U. (2014). Real-time neurofeedback using functional
MRI could improve down-regulation of amygdala activity during emotional stimulation: a proof-of-concept study. Brain
Topography, 27(1), 138–48.
Buhle, J.T., Silvers, J.A., Wager, T.D., et al. (2014). Cognitive reappraisal of emotion: a meta-analysis of human neuroimaging
studies. Cerebral Cortex, 24(11), 2981–90.
Bzdok, D., Langner, R., Caspers, S., et al. (2011). ALE meta-analysis
on facial judgments of trustworthiness and attractiveness.
Brain Structure and Function, 215(3-4), 209–23.
Cador, M., Robbins, T., Everitt, B. (1989). Involvement of the
amygdala in stimulus-reward associations: interaction with
the ventral striatum. Neuroscience, 30(1), 77–86.
Calder, A.J., Keane, J., Manes, F., Antoun, N., Young, A.W. (2000).
Impaired recognition and experience of disgust following
brain injury. Nature Neuroscience, 3(11), 1077–8.
Carson, T.L. (2010). Lying and Deception: Theory and Practice.
Oxford: Oxford University Press.
Chang, L.J., Smith, A., Dufwenberg, M., Sanfey, A.G. (2011).
Triangulating the neural, psychological, and economic bases
of guilt aversion. Neuron, 70(3), 560–72.
Christ, S., Van Essen, D., Watson, J., Brubaker, L., McDermott, K.
(2009). The contributions of prefrontal cortex and executive
control to deception: evidence from activation likelihood estimate meta-analyses. Cerebral Cortex, 19(7), 1557–66.
Christopoulos, G.I., Tobler, P.N., Bossaerts, P., Dolan, R.J., Schultz,
W. (2009). Neural correlates of value, risk, and risk aversion
contributing to decision making under risk. The Journal of
Neuroscience, 29(40), 12574.
Craig, A. (2009). How do you feel—now? The anterior insula and
human awareness. Nature Reviews Neuroscience, 10(1), 59.
Crawford, J.R., Howell, D.C. (1998). Comparing an individual’s
test score against norms derived from small samples. The
Clinical Neuropsychologist, 12(4), 482–6.
Delgado, M.R., Nystrom, L.E., Fissell, C., Noll, D., Fiez, J.A. (2000).
Tracking the hemodynamic responses to reward and punishment in the striatum. Journal of Neurophysiology, 84(6), 3072.
Diekhof, E.K., Geier, K., Falkai, P., Gruber, O. (2011). Fear is only as
deep as the mind allows: a coordinate-based meta-analysis of
neuroimaging studies on the regulation of negative affect.
NeuroImage, 58(1), 275–85.
Ellingsen, T., Johannesson, M. (2004). Promises, threats and fairness. The Economic Journal, 114(495), 397–420.
Erat, S., Gneezy, U. (2012). White lies. Management Science, 58(4),
723–33.
Ernst, M., Nelson, E.E., Jazbec, S., et al. (2005). Amygdala and nucleus accumbens in responses to receipt and omission of gains
in adults and adolescents. NeuroImage, 25(4), 1279–91.
Etkin, A., Wager, T.D. (2007). Functional neuroimaging of anxiety: a meta-analysis of emotional processing in PTSD, social
anxiety disorder, and specific phobia. American Journal of
Psychiatry, 164(10), 1476–88.
Farah, M.J., Hutchinson, J.B., Phelps, E.A., Wagner, A.D. (2014).
Functional MRI-based lie detection: scientific and societal
challenges. Nature Reviews Neuroscience, 15(2), 123–31.
Gaffan, E., Gaffan, D., Harrison, S. (1988). Disconnection of the
amygdala from visual association cortex impairs visual reward-association learning in monkeys. The Journal of
Neuroscience, 8(9), 3144–50.
Ganis, G., Keenan, J.P. (2009). The cognitive neuroscience of deception. Social Neuroscience, 4(6), 465–72.
Glimcher, P.W., Fehr, E. (2013). Neuroeconomics: Decision Making
and the Brain. New York, NY: Academic Press.
Gneezy, U. (2005). Deception: the role of consequences. The
American Economic Review, 95(1), 384–94.
Greene, J.D., Nystrom, L.E., Engell, A.D., Darley, J.M., Cohen, J.D.
(2004). The neural bases of cognitive conflict and control in
moral judgment. Neuron, 44(2), 389–400.
Grezes, J., Berthoz, S., Passingham, R. (2006). Amygdala activation when one is the target of deceit: did he lie to you or to
someone else? NeuroImage, 30(2), 601–8.
Grezes, J., Frith, C., Passingham, R. (2004a). Brain mechanisms
for inferring deceit in the actions of others. Journal of
Neuroscience, 24(24), 5500.
Grezes, J., Frith, C., Passingham, R. (2004b). Inferring false beliefs
from the actions of oneself and others: an fMRI study.
NeuroImage, 21(2), 744–50.
Haidt, J. (2001). The emotional dog and its rational tail: a social
intuitionist approach to moral judgment. Psychological Review,
108(4), 814.
Hamann, S. (2012). Mapping discrete and dimensional emotions
onto the brain: controversies and consensus. Trends in
Cognitive Sciences, 16(9), 458–66.
Harenski, C.L., Hamann, S. (2006). Neural correlates of regulating
negative emotions related to moral violations. NeuroImage,
30(1), 313–24.
Häusler, A.N., Becker, B., Bartling, M., Weber, B. (2015). Goal or
gold: overlapping reward processes in soccer players upon
scoring and winning money. PLoS One, 10(4), e0122798.
Hayashi, A., Abe, N., Fujii, T., et al. (2014). Dissociable neural systems for moral judgment of anti-and pro-social lying. Brain
Research, 1556, 46–56.
Hayashi, A., Abe, N., Ueno, A., et al. (2010). Neural correlates of
forgiveness for moral transgressions involving deception.
Brain Research, 1332, 90–9.
Heekeren, H.R., Wartenburger, I., Schmidt, H., Prehn, K.,
Schwintowski, H.P., Villringer, A. (2005). Influence of bodily
harm on neural correlates of semantic and moral decisionmaking. NeuroImage, 24(3), 887–97.
Holland, P.C., Gallagher, M. (1999). Amygdala circuitry in attentional and representational processes. Trends in Cognitive
Sciences, 3(2), 65–73.
Irtel, H. (2008). The PXLab Self-Assessment-Manikin Scales. (http://
irtel.uni-mannheim.de/pxlab/demos/index_SAM.html)
Izuma, K., Saito, D.N., Sadato, N. (2008). Processing of social
and monetary rewards in the human striatum. Neuron, 58(2),
284–94.
Jackson, P.L., Meltzoff, A.N., Decety, J. (2005). How do we perceive
the pain of others? A window into the neural processes
involved in empathy. NeuroImage, 24(3), 771–9.
Kant, I. (1797). On a Supposed Right to Lie from Philanthropy.
Cambridge: Cambridge University Press.
Knutson, B., Adams, C.M., Fong, G.W., Hommer, D. (2001a).
Anticipation of increasing monetary reward selectively recruits nucleus accumbens. Journal of Neuroscience, 21(16), 1–5.
Knutson, B., Cooper, J.C. (2005). Functional magnetic resonance
imaging of reward prediction. Current Opinion in Neurology,
18(4), 411.
Knutson, B., Fong, G.W., Adams, C.M., Varner, J.L., Hommer, D.
(2001b). Dissociation of reward anticipation and outcome with
event-related fMRI. Neuroreport, 12(17), 3683.
Knutson, B., Katovich, K., Suri, G. (2014). Inferring affect from
fMRI data. Trends in Cognitive Sciences, 18(8), 422–8.
Lang, P.J. (1980). Behavioral treatment and bio-behavioral assessment: computer applications. In: Sidowski, J.B., Johnson, J.H., Williams, T.A., editors. Technology in Mental Health Care Delivery Systems. Norwood, NJ: Ablex.
Lisofsky, N., Kazzer, P., Heekeren, H.R., Prehn, K. (2014). Investigating
socio-cognitive processes in deception: a quantitative metaanalysis of neuroimaging studies. Neuropsychologia, 61, 113–22.
López-Pérez, R., Spiegelman, E. (2013). Why do people tell the
truth? Experimental evidence for pure lie aversion.
Experimental Economics, 16(3), 233–247.
Maldjian, J.A., Laurienti, P.J., Kraft, R.A., Burdette, J.H. (2003). An automated method for neuroanatomic and cytoarchitectonic atlasbased interrogation of fMRI data sets. NeuroImage, 19(3), 1233–9.
Mason, W.A., Capitanio, J.P., Machado, C.J., Mendoza, S.P.,
Amaral, D.G. (2006). Amygdalectomy and responsiveness to
novelty in rhesus monkeys (Macaca mulatta): generality and individual consistency of effects. Emotion, 6(1), 73.
Moll, J., de Oliveira-Souza, R., Bramati, I.E., Grafman, J. (2002).
Functional networks in emotional moral and nonmoral social
judgments. NeuroImage, 16(3), 696–703.
Moll, J., Zahn, R., de Oliveira-Souza, R., Krueger, F., Grafman, J.
(2005). The neural basis of human moral cognition. Nature
Reviews Neuroscience, 6(10), 799–809.
Morrison, S.E., Salzman, C.D. (2010). Re-valuing the amygdala.
Current Opinion in Neurobiology, 20(2), 221–30.
Murray, E.A. (2007). The amygdala, reward and emotion. Trends
in Cognitive Sciences, 11(11), 489–97.
O’Doherty, J. (2004). Reward representations and reward-related
learning in the human brain: insights from neuroimaging.
Current Opinion in Neurobiology, 14(6), 769–76.
O’Doherty, J., Critchley, H., Deichmann, R., Dolan, R. (2003).
Dissociating valence of outcome from behavioral control in
human orbital and ventral prefrontal cortices. Journal of
Neuroscience, 23(21), 7931–39.
Öhman, A., Mineka, S. (2001). Fears, phobias, and preparedness:
toward an evolved module of fear and fear learning.
Psychological Review, 108(3), 483.
Paulus, M.P., Rogalsky, C., Simmons, A., Feinstein, J.S., Stein, M.B.
(2003). Increased activation in the right insula during risk-taking decision making is related to harm avoidance and neuroticism. NeuroImage, 19(4), 1439–48.
Paulus, M.P., Stein, M.B. (2006). An insular view of anxiety.
Biological Psychiatry, 60(4), 383–7.
Pelphrey, K., Morris, J., Mccarthy, G. (2004). Grasping the intentions of others: the perceived intentionality of an action influences activity in the superior temporal sulcus during social
perception. Journal of Cognitive Neuroscience, 16(10), 1706–16.
Petrovic, P., Carlsson, K., Petersson, K.M., Hansson, P., Ingvar, M.
(2004). Context-dependent deactivation of the amygdala during pain. Journal of Cognitive Neuroscience, 16(7), 1289–1301.
Phelps, E.A., O’Connor, K.J., Gatenby, J.C., Gore, J.C., Grillon, C.,
Davis, M. (2001). Activation of the left amygdala to a cognitive
representation of fear. Nature Neuroscience, 4(4), 437–41.
Poldrack, R.A. (2006). Can cognitive processes be inferred from
neuroimaging data? Trends in Cognitive Sciences, 10(2), 59–63.
Rest, J.R., Bebeau, M.J., Thoma, S.J. (1999). Postconventional Moral
Thinking: A Neo-Kohlbergian Approach. Mahwah, NJ: Lawrence
Erlbaum.
Robertson, D., Snarey, J., Ousley, O., et al. (2007). The neural processing of moral sensitivity to issues of justice and care.
Neuropsychologia, 45(4), 755–66.
Sabatinelli, D., Bradley, M.M., Lang, P.J., Costa, V.D., Versace, F.
(2007). Pleasure rather than salience activates human nucleus
accumbens and medial prefrontal cortex. Journal of
Neurophysiology, 98(3), 1374.
Sanfey, A.G., Rilling, J.K., Aronson, J.A., Nystrom, L.E., Cohen, J.D.
(2003). The neural basis of economic decision-making in the
ultimatum game. Science, 300(5626), 1755.
Schnall, S., Haidt, J., Clore, G.L., Jordan, A.H. (2008). Disgust as
embodied moral judgment. Personality and Social Psychology
Bulletin, 34(8), 1096–109.
Sevinc, G., Spreng, R.N. (2014). Contextual and perceptual brain
processes underlying moral cognition: a quantitative metaanalysis of moral reasoning and moral emotions. PLoS One,
9(2), e87427.
Shalvi, S., Handgraaf, M.J.J., De Dreu, C.K.W. (2011). People avoid
situations that enable them to deceive others. Journal of
Experimental Social Psychology, 47(6), 1096–106.
Singer, T., Critchley, H.D., Preuschoff, K. (2009). A common role
of insula in feelings, empathy and uncertainty. Trends in
Cognitive Sciences, 13(8), 334–40.
Singer, T., Kiebel, S.J., Winston, J.S., Dolan, R.J., Frith, C.D. (2004a).
Brain responses to the acquired moral status of faces. Neuron,
41(4), 653–62.
Singer, T., Seymour, B., O’Doherty, J., Kaube, H., Dolan, R.J.,
Frith, C.D. (2004b). Empathy for pain involves the affective
but not sensory components of pain. Science, 303(5661),
1157.
Sip, K.E., Carmel, D., Marchant, J.L., et al. (2013). When
Pinocchio’s nose does not grow: belief regarding lie-detectability modulates production of deception. Frontiers in Human
Neuroscience, 16, 1–11.
Sip, K.E., Skewes, J.C., Marchant, J.L., McGregor, W.B., Roepstorff,
A., Frith, C.D. (2012). What if I get busted? Deception, choice,
and decision-making in social interaction. Frontiers in
Neuroscience, 58, 1–10.
Sokol-Hessner, P., Camerer, C.F., Phelps, E.A. (2013). Emotion
regulation reduces loss aversion and decreases amygdala
responses to losses. Social Cognitive and Affective
Neuroscience, 8(3), 341–50.
Spreckelmeyer, K.N., Krach, S., Kohls, G., et al. (2009).
Anticipation of monetary and social reward differently activates mesolimbic brain structures in men and women. Social
Cognitive and Affective Neuroscience, 4(2), 158.
Sprengelmeyer, R. (2007). The neurology of disgust. Brain, 130(7),
1715–7.
Sun, D., Chan, C.C., Hu, Y., Wang, Z., Lee, T.M. (2015). Neural correlates of outcome processing post dishonest choice: an fMRI
and ERP study. Neuropsychologia, 68, 148–57.
Tabibnia, G., Satpute, A.B., Lieberman, M.D. (2008). The sunny
side of fairness preference for fairness activates reward circuitry (and disregarding unfairness activates self-control circuitry). Psychological Science, 19(4), 339–47.
Thielscher, A., Pessoa, L. (2007). Neural correlates of perceptual
choice and decision making during fear–disgust discrimination. The Journal of Neuroscience, 27(11), 2908–17.
Wu, D., Loke, I.C., Xu, F., Lee, K. (2011). Neural correlates of evaluations of lying and truth-telling in different social contexts.
Brain Research, 1389, 115–24.
Young, K.D., Zotev, V., Phillips, R., et al. (2014). Real-time fMRI neurofeedback training of amygdala activity in patients with major depressive disorder. PLoS One, 9(2), e88785.