
Motiv Emot
DOI 10.1007/s11031-008-9107-z
ORIGINAL PAPER
Human content in affect-inducing stimuli: A secondary analysis
of the international affective picture system
Albina Colden · Martin Bruder · Antony S. R. Manstead
© Springer Science+Business Media, LLC 2008
Abstract We report a secondary analysis of the international affective picture system (IAPS), the broadest
available standardized sample of emotional stimuli, which
confirmed our prediction that the distribution of slides
across the valence and arousal dimensions would be related
to human versus inanimate slide content. Pictures depicting
humans are over-represented in the high arousal/positive
and high arousal/negative areas of affective space as
compared to inanimate pictures, which are especially frequent in the low arousal/neutral valence area. Results
pertaining to dominance ratings and gender differences in
valence and arousal ratings further suggest that there are
qualitative differences between emotional reactions to
animal or human slide content and responses to nonsocial
still photos. Researchers need to be mindful of this distinction when selecting affect-inducing stimuli.
Keywords IAPS · Picture processing · Face processing · Emotion induction · Empathy
Albina Colden and Martin Bruder have contributed equally to this
work.
A. Colden
University of Cambridge, Cambridge, UK
Present Address:
A. Colden
Sigmund Freud University, Vienna, Austria
M. Bruder · A. S. R. Manstead (✉)
School of Psychology, Cardiff University, Tower Building,
Park Place, Cardiff CF10 3AT, UK
e-mail: [email protected]
Introduction
Visual images that elicit affective responses are used in
paradigms in social psychology, cognitive psychology, and
social-cognitive neuroscience to examine emotion processes and affective influences on cognitive processes. A
well-established and widely used source of such stimuli is
the international affective picture system (IAPS; Lang et al.
2005). It consists of 942 images evoking a range of
affective responses and includes normative ratings of these
images with respect to valence, arousal, and dominance.
These ratings are reliable (Lang et al. 2005) and have been
corroborated by other self-assessment procedures (Ito et al.
1998), by a range of psychophysiological measures (e.g.,
Smith et al. 2006), and by fMRI (Lang et al. 1998). The IAPS
provides a standardized pool of affect-inducing stimuli and
is a highly useful methodological tool: Stimulus sets are
routinely selected on the basis of the normative ratings.
However, selecting pictures solely on this basis is
potentially problematic. For example, studies in social-cognitive neuroscience are careful to ensure comparability
of experimental stimulus sets with respect to physical
properties such as size, luminance, and spatial frequencies
(Codispoti and De Cesarei 2007; Delplanque et al. 2007;
Sabatinelli et al. 2005). If uncontrolled, the possibly
uneven distribution of these properties across the valence
or arousal dimension might lead to differences in the early
visual processing of pictures that have been selected from
different areas of affective space.
Just as these physical properties might influence the
affective processing of visual stimulus material, there is
reason to believe that aspects of slide content trigger different affect-related processes (e.g., Bernat et al. 2006;
Bradley et al. 2001, 2003). The IAPS lends itself to such
analyses because its pictures are diverse with respect to
content, depicting wildlife, scenery, everyday objects,
abstract patterns, sexual material, food, weapons, sports
activities, expressive faces, and surgical slides, amongst
other things. Although the wide range of thematic contents is intended and useful, in that the IAPS thereby covers a broad range of human experience, it remains
unclear which underlying dimensions produce differential
responses to the various thematic categories. In their study
demonstrating differential emotional effects of picture
content, Bernat and colleagues (2006) conclude that future
research should establish a more ‘‘formal scheme for
classifying pictures of different types into meaningful
content categories’’ (p. 101) in order to more appropriately
control for the possible content specificity of emotional
reactions.
We suggest that one critical and basic underlying content-related distinction is that some images depict human
beings, while others do not.¹ Below we review literature
that demonstrates that human stimuli engage processes and
reactions that differ from those engaged by inanimate
objects.
Person perception is distinct
Several lines of research suggest that the perception of
emotions expressed by another person elicits congruent
emotional reactions in onlookers. It is argued that emotional contagion leads people not only to mimic each
other's ''movements, expressions, postures, and vocalizations,'' but also, through facial and postural feedback, to
‘‘converge emotionally’’ (Hatfield et al. 1992, p. 154).
Chartrand and Bargh (1999) demonstrated that this
synchronization with the interpersonal environment is
functional because it provides a form of ‘‘social glue’’
(p. 897). Evidence of automatic imitation is strongest for
emotional faces. For example, Dimberg et al. (2000)
demonstrated that even subliminal presentation of facial
displays leads to congruent changes in facial muscular
activity, suggesting that mimicry is an automatic process.
A recent study on facial mimicry by Achaibou et al.
(2008) reports increased EMG activity of the corrugator
supercilii (responsible for the knitting of the eyebrows
during a frown) in response to angry expressions and
enhanced EMG activity of the zygomaticus major
(responsible for elevating the lips during a smile) in
response to happy expressions. The authors also found that
the amplitude of an early visual evoked potential was
greater when muscular response to happy and angry faces
was high than when it was low, suggesting that early visual processing of facial expression may determine the magnitude of subsequent facial imitation.

¹ In a recent chapter, Bradley and Lang (2007, p. 34) note that picture content is associated with how people respond to the IAPS slides, and observe that pictures depicting ''human agents, activities, and events'' evoke the most emotion.
Several related ‘‘embodied’’ theories of emotion and
visual representation (see Niedenthal 2007, for a review)
explain how imitation of others’ behavior might form the
basis of the human ability to understand the mental states
of conspecifics. Collingwood’s (1946) philosophy of
‘‘re-enactment’’ stipulates that historical understanding is
inherently different from the processing of abstract information in the natural sciences and mathematics, in that we
re-enact the state of mind of the historical figures, thereby
forming a personal sense of history. The Adaptive Resonance Theory (Carpenter and Grossberg 2003) describes a
complex neuronal system of ‘‘match-based learning,’’
whereby persons enter into ‘‘resonant states’’ of matching
new experiences against an internal database of parallel
personal experiences. The Emulation Theory of Representation (Grush 2004) proposes that the human brain
constructs neural circuits that act as inner models of the
body and the environment.
In the present context, an embodied perspective on
emotion and cognition (Niedenthal et al. 2005; Wilson
2002) suggests that social information processing is distinct
from the processing of nonsocial situations in that it is
grounded in bodily states and simulations of experience in
modality-specific brain systems. Based on this reasoning it
seems likely that emotions elicited by social stimuli (e.g.,
IAPS slides depicting human beings) involve different
neurophysiological responses from those arising in reaction
to nonsocial situations (e.g., IAPS slides depicting inanimate objects).
Focusing on the neuronal level, Simulation Theory (e.g.,
Adolphs 2002; Decety and Grèzes 2006) proposes that
internal simulation of others’ actions forms the very basis
of our ability to recognize and reason about other people’s
mental states. Emotion perception is assigned a prominent
role in this process, in that people ‘‘judge another person’s
emotional state from the facial expression by reconstructing in their own brains a simulation of what the other
person might be feeling’’ (Adolphs 2002, p. 324f).
This theoretical notion is underpinned by the identification of a ‘‘mirror-neuron system’’ (Rizzolatti and
Craighero 2004), which is thought to enable primates to
simulate and thereby understand actions of others (Gallese
et al. 2004). These neurons have been labeled ‘‘mirrors’’
because they fire both when individuals perform physical
tasks and when they see a conspecific perform the same
tasks. These responses are so ubiquitous that inhibitory
processes might be needed to prevent perceivers from
constantly imitating their social environment (Brass and
Heyes 2005). Automatic imitation is likely to be crucial for
humans’ social functioning and might form the neural basis
of empathy (Gallese et al. 2004; Preston and de Waal
2002).
The idea that automatic imitative reactions to social
stimuli are related to empathy is supported by Sonnby-Borgström et al. (2003). They found increased mimicry of
emotional expressions in highly empathic participants at
short exposure times. Relatedly, the common finding of
higher self-reported empathy in women (Eisenberg and
Lennon 1983), their superior ability to decode nonverbal
signals (Hall 1978; Hall et al. 2000), and their increased
tendency to automatically mimic emotional facial expressions (Dimberg and Lundquist 1990) suggest a gender
effect in reactivity to emotional slides containing social
information.
In sum, person perception in general, and the perception
of faces in particular, elicits neurophysiological, behavioral, and experiential reactions that are different from
those aroused by the perception of inanimate stimuli. These
reactions can be conceptualized as empathic processes, and
may therefore be moderated by individual differences in
empathy and gender, which have been documented by a
wide body of research (see Davis and Kraus 1997; Mehrabian and Epstein 1972; Baron-Cohen et al. 2002).
The present study
In the present research, we investigated the impact of
human content on ratings of the affect-inducing visual
stimuli in the IAPS and explored possible qualitative differences between responses to human, animal, and
inanimate slides.
Aims and hypotheses
Our goals were twofold. First, we wanted to establish
whether there are quantitative differences between the
distribution of images depicting human beings and images
depicting inanimate objects in affective space. We chose
the high arousal/negative, high arousal/positive and low
arousal/neutral valence areas as regions of interest, because
it is these pictures that tend to be selected for use in
emotion research (e.g., Codispoti et al. 2007; Ribeiro et al.
2007; Smith et al. 2006). We predicted that—due to the
distinctive and emotionally powerful processes involved in
the perception of humans and human faces—pictures
evoking high arousal and extreme (high or low) valence are
more likely to portray human beings, and that affectively
neutral pictures are more likely to portray inanimate
objects. With the facial mimicry findings in mind, we
explored whether human face and human nonface slides
are differently distributed in the highly arousing/positive
and highly arousing/negative regions of affective space.
Finally, we hypothesized that, across the entire IAPS picture set, high arousal and extreme (high or low) valence
would be associated with human content whereas inanimate slides would elicit relatively lower arousal and less
extreme valence ratings. Again, we explored possible differences between human nonface and human face slides.
Second, we were interested in whether pictures depicting humans elicit emotion in a way that is qualitatively
different from other pictorial stimuli. When the stimulus is
an object, the emotional response is likely determined by
the physical properties of the slide and the viewer’s individual appraisal of the slide content. In contrast, when the
stimulus is a human being, it is possible that the response is
at least partly empathic, resulting from the automatic imitation, simulation, and contagion processes triggered by
person perception.
Although secondary analysis of the normative IAPS
ratings does not provide a complete test of this hypothesis,
it provides a useful starting-point. Dominance ratings
indicate the extent to which viewers feel in control of their
affective states while looking at slides. The instructions for
these ratings read:
At one end of the scale you have feelings characterized as completely controlled, influenced, cared-for,
awed, submissive, guided. […] At the other extreme
of this scale, you felt completely controlling, influential, in control, important, dominant, autonomous.
(Lang et al. 2005, p. 5)
If pictures of humans evoke emotion partly through
automatic empathic processes that are relatively immune to
conscious control, viewers should feel less in control of
their responses to such slides than to slides depicting
inanimate objects. Given that low dominance scores indicate that viewers felt controlled (rather than controlling),
dominance scores for slides with human or facial content
should be lower than those for slides depicting inanimate
objects.
It should be noted that in contrast to valence and arousal,
the concept of dominance remains largely under-investigated in the IAPS literature. A recent study by Fontaine et al.
(2007) addresses the shortcomings of a two-dimensional
valence-arousal focus and proposes that four dimensions
are needed. The authors identify one of these dimensions as
‘‘potency-control,’’ which they describe as being ‘‘characterized by appraisals of control, leading to feelings of
power or weakness; interpersonal dominance or submission, including impulses to act or refrain from action’’ (p.
1051). They also link this dimension to emotions such as
pride, anger, and contempt as opposed to sadness, shame,
and despair.
Gender differences in affective ratings might also reflect
the degree to which empathic processes are involved in
reactions to human versus object slides. If human images
evoke emotion partly or primarily through empathic processes, women should—given their higher nonverbal
sensitivity (Hall 1978; Hall et al. 2000)—be more
responsive to them than men. The difference between
female and male valence, arousal, and dominance scores
should therefore be greater for human slides than for object
slides. An fMRI study by Schulte-Rüther et al. (2008)
investigating gender differences in self-oriented versus
other-oriented emotion attribution tasks found that females
and males rely on different strategies when assessing their
emotions in response to other persons: While performing
both ‘‘self’’ and ‘‘other’’ tasks, females, but not males,
showed increased activation of the right inferior frontal
cortex, suggesting that females recruit areas containing
mirror neurons to a higher degree than males during
empathic face-to-face interactions. The authors propose
that this difference may underlie facilitated emotional
contagion in females.
We also included slides depicting animals in all analyses
in order to explore whether these pictures elicited responses similar to those of pictures with human or inanimate
content.
Method
Slides
The latest version of the IAPS (Lang et al. 2005) consists of
942 digital still photos. These formed the unit of analysis
for the present study. The developers of the IAPS provide
normative ratings on the dimensions of valence, arousal,
and dominance.² For each picture in the IAPS database, the
mean rating on each emotional dimension (and its SD) is
given for (a) male participants, (b) female participants, and
(c) collapsed across male and female participants. We used
the collapsed scores for all analyses apart from the ones
directly pertaining to gender differences. To ease the
interpretation of these latter analyses, we computed difference scores between the scores of male and female
participants.
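For concreteness, the following sketch shows one way such difference scores could be computed. The data frame layout, column names, and values are hypothetical placeholders, not actual IAPS ratings.

```python
import pandas as pd

# Hypothetical layout: one row per slide, with separate normative
# mean ratings for female and male participants (placeholder values).
ratings = pd.DataFrame({
    "slide": ["A", "B", "C"],
    "valence_female": [7.9, 1.4, 5.0],
    "valence_male": [7.5, 1.9, 5.1],
})

# Difference scores are computed as female minus male (cf. Table 1,
# note a), so positive values indicate higher ratings by women.
ratings["valence_diff"] = ratings["valence_female"] - ratings["valence_male"]
print(ratings)
```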
To these normative ratings we added a slide content
variable. Two coders independently viewed all pictures and
categorized them as ‘‘inanimate,’’ ‘‘human nonface,’’
‘‘human face,’’ ‘‘animal nonface,’’ or ‘‘animal face.’’ Slides
with no human and no animal imagery were coded as
inanimate. If a slide depicted a human whose face was not
visible, it was coded as human nonface. Slides in which
human facial information was visible were categorized as
human face.³ All slides depicting animals, but no humans, were coded as animal. We also distinguished between nonface and face animal slides. The slides depicting animals were included in subsequent analyses on an exploratory basis. We had no predictions concerning how animal face or animal nonface slides would influence emotion judgments.
The reliability of the judges' codings was high (Cohen's κ = .96). Where coders agreed, the picture was labeled accordingly. In cases of disagreement (2.9%), pictures were labeled ''ambiguous'' and excluded from all further analyses.⁴ In total, 915 slides were included in the analyses reported below.

² We are grateful to the developers of the IAPS for their permission to use these ratings for the present research.
³ The criterion for coding a picture as containing facial information was that at least one eye or the mouth region needed to be clearly visible. We thank Lena Heuel for her help in coding the pictures.
⁴ A data file with the IAPS labels and our new content variable is available from the third author (see correspondence address).
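As an illustrative sketch of the reliability computation (not the authors' actual procedure), Cohen's kappa for two coders' labels can be obtained with scikit-learn; the labels below are made up.

```python
from sklearn.metrics import cohen_kappa_score

# Made-up category labels from two independent coders for eight slides.
coder_a = ["inanimate", "human face", "human nonface", "animal face",
           "inanimate", "human face", "inanimate", "human nonface"]
coder_b = ["inanimate", "human face", "human nonface", "animal face",
           "inanimate", "human nonface", "inanimate", "human nonface"]

print(f"Cohen's kappa = {cohen_kappa_score(coder_a, coder_b):.2f}")

# Slides on which the coders disagree would be labeled "ambiguous"
# and excluded, mirroring the 2.9% exclusion reported above.
ambiguous = [i for i, (a, b) in enumerate(zip(coder_a, coder_b)) if a != b]
```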
Results
Quantitative differences
Slide distribution across positive, negative,
and emotionally neutral areas
The scatterplot shown in Fig. 1 illustrates the distribution
of IAPS slides in the affective space created by the valence
and arousal dimensions. The resulting distribution pattern
resembles a ‘‘boomerang’’ (Bradley and Lang 2007, p. 32),
with one wing extending toward the low valence/high
arousal area (arousing negative affect), the second wing
extending toward the high valence/high arousal area
(arousing positive affect), and the connecting angle of the
boomerang located in the neutral valence and low arousal
area (affectively neutral). If the distribution of slides across
affective space is unrelated to slide content, there should be
approximately equal proportions of human, animal, and
inanimate slides in these three key areas. However, when
slides are coded according to whether they contain inanimate objects, animals, or humans, the inequality of the
distribution becomes apparent. A dense concentration of
circles, representing inanimate slides, can be observed in
the affectively neutral area (Figs. 1 and 2a), whereas
human face slides (squares) and human nonface slides
(diamonds) are more broadly distributed in IAPS affective
space but are relatively scarce in the low arousal area and
relatively more concentrated at the affectively arousing tips
of the boomerang (Figs. 1 and 2b).
Chi-squared analyses confirmed this observation. We first tested whether slides of different content were equally likely to be present in these three key areas. We defined the regions of interest in terms of percentile scores, using ''liberal'' and ''restricted'' criteria. On the arousal dimension, pictures below the 30th percentile (liberal) or 15th percentile (restricted), or above the 70th percentile (liberal) or 85th percentile (restricted), qualified for inclusion as low arousal or high arousal pictures, respectively. For the neutral pictures, those of the remaining slides that fell into an area of 50 ± 15% (liberal) or 50 ± 7.5% (restricted) were defined as neutral valence. For the high-arousal pictures, negative valence was defined as below the 30th (liberal) or 15th (restricted) percentile of the remaining pictures, and positive valence as above the 70th (liberal) or 85th (restricted) percentile. Figure 1 shows the low arousal/neutral valence, high arousal/positive valence, and high arousal/negative valence zones for the liberal and restricted analyses.
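A minimal sketch of this region-of-interest scheme as we read it (our reconstruction, with hypothetical column names, not the original analysis code):

```python
import pandas as pd

def regions_of_interest(df: pd.DataFrame, lo=0.30, hi=0.70, half_width=0.15):
    """Split slides into low arousal/neutral, high arousal/negative, and
    high arousal/positive regions. The defaults implement the liberal
    criteria; pass lo=.15, hi=.85, half_width=.075 for the restricted ones."""
    a_lo, a_hi = df["arousal"].quantile([lo, hi])
    low_arousal = df[df["arousal"] <= a_lo]
    high_arousal = df[df["arousal"] >= a_hi]

    # Neutral: low-arousal slides within the 50% +/- half_width valence band.
    v_lo, v_hi = low_arousal["valence"].quantile(
        [0.5 - half_width, 0.5 + half_width])
    neutral = low_arousal[low_arousal["valence"].between(v_lo, v_hi)]

    # Negative/positive: extreme valence percentiles among high-arousal slides.
    neg_cut, pos_cut = high_arousal["valence"].quantile([lo, hi])
    negative = high_arousal[high_arousal["valence"] <= neg_cut]
    positive = high_arousal[high_arousal["valence"] >= pos_cut]
    return neutral, negative, positive
```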
For the liberal analysis, a χ²-test revealed that slide content was not randomly distributed across zones, χ²(8, n = 251) = 72.29, p < .001. Pairwise tests showed that both the distribution between the neutral and the positive area, χ²(3, n = 171) = 35.98, p < .001, and between the neutral and the negative area, χ²(4, n = 165) = 54.98, p < .001, differed significantly from chance. For both comparisons, there were disproportionate numbers of human stimuli in the emotional areas (n_positive = 74, n_negative = 76) as compared to the neutral area (n_neutral = 38), and disproportionate numbers of inanimate objects in the neutral area (n_neutral = 46) as compared to the emotional areas (n_positive = 11, n_negative = 3). Only three slides with animal content fell into these areas (explaining the different degrees of freedom for the follow-up χ²-analyses).
An identical pattern emerged for the restricted analysis. Again, the overall χ²-test was highly significant, χ²(4, n = 62) = 34.72, p < .001, with both the comparisons between neutral and positive, χ²(2, n = 44) = 17.39, p < .001, and neutral and negative, χ²(2, n = 40) = 19.68, p < .001, showing distributions significantly different from chance. Again, human content was over-represented in the highly arousing positive and negative areas (n_positive = 20, n_negative = 18, n_neutral = 7), whereas a disproportionate number of inanimate objects were located in the neutral valence/low arousal area (n_positive = 2, n_negative = 0, n_neutral = 15). No animal pictures fell into these extreme areas of affective space.
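These analyses amount to standard chi-squared tests on the content-by-region contingency table. A sketch using the liberal counts reported above (the small animal category is omitted for simplicity, so the statistic will not match the published χ²(8) value exactly):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: human, inanimate; columns: neutral, positive, negative regions.
# Counts are those reported above for the liberal analysis.
table = np.array([[38, 74, 76],
                  [46, 11, 3]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}, n = {table.sum()}) = {chi2:.2f}, p = {p:.5f}")
```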
In an analysis comparing the distribution of human face and human nonface slides in the highly arousing/positive and the highly arousing/negative regions of interest, a difference between the liberal and restricted analyses emerged: whereas human face and human nonface slides were evenly distributed across the extreme positive and negative areas when using the liberal criteria, χ²(1, n = 150) = .10, p = .919, a significant difference was identified in the more restricted analysis, χ²(1, n = 38) = 4.08, p = .043. Compared to human nonface slides, human face slides (73.7% of the pictures in these two areas) were over-represented in the highly arousing and negative area (88.9% of the pictures) and less strongly represented in the highly arousing and positive area (60% of the pictures).

[Fig. 1 Distribution of IAPS pictures in affective space and liberal (light and dark grey) and restricted (dark grey only) regions of interest for the χ²-analyses]

[Fig. 2 (a) Distribution of inanimate pictures in IAPS affective space. (b) Distribution of human nonface and human face pictures in IAPS affective space]
Slide distribution across the entire affective space
To supplement the targeted χ²-analyses of slide distribution across IAPS affective space, we conducted parametric analyses of the full slide set. Using the normative arousal and valence scores as dependent variables, we ran one-way analyses of variance (ANOVAs) to examine whether slide type (inanimate, animal face, human nonface, human face) was significantly associated with these ratings (see Table 1 for means). Due to the small number of animal nonface slides (n = 15), these were excluded from the analysis.
For arousal, the omnibus F-test was significant, F(3, 896) = 58.13, p < .001, ηp² = .163. Tukey's honestly significant difference (HSD) tests revealed that inanimate slides elicited significantly lower arousal ratings than all other categories and that human nonface slides evoked the highest levels of arousal.
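For readers wanting to reproduce this kind of analysis, here is a sketch with statsmodels (we do not know which software the authors used; the file name and column names are hypothetical):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# One row per slide: normative rating means plus a content code.
df = pd.read_csv("iaps_with_content.csv")      # hypothetical file
df = df[df["content"] != "animal nonface"]     # too few slides (n = 15)

# One-way ANOVA of arousal on slide content.
model = ols("arousal ~ C(content)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey's HSD tests for all pairwise content comparisons.
print(pairwise_tukeyhsd(df["arousal"], df["content"]))
```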
We conducted separate ANOVAs on negative and positive slides (i.e., slides falling below and above the valence scale midpoint) because we expected human content to be associated with more positive valence scores for positive slides, but more negative valence scores for negative slides. Consistent with these predictions, Tukey's HSD tests showed that valence ratings were significantly more extreme for negative pictures depicting humans than for pictures depicting objects or animals, F(3, 396) = 26.40, p < .001, ηp² = .167. The overall test was also significant for positive slides, F(3, 496) = 5.70, p = .001, ηp² = .033. Here, animal face slides were associated with the most extreme valence ratings (albeit not significantly different from those of human face and human nonface slides). Again, inanimate objects attracted the least extreme ratings.
Qualitative differences
Dominance ratings
Slide content was significantly related to dominance ratings, F(3, 896) = 19.07, p < .001, ηp² = .060 (see Table 1). As expected, participants felt most in control in response to inanimate slides. Surprisingly, animal face slides were associated with lower perceptions of control than human face slides. This general pattern remained unchanged when valence was introduced as a covariate. However, controlling for arousal eliminated the effect of slide content on dominance, F(3, 895) = 1.07, p = .361, ηp² = .004.
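Continuing the hypothetical sketch above, this covariate analysis can be framed as an ANCOVA in which arousal is entered alongside slide content; the question is whether the content effect on dominance survives.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("iaps_with_content.csv")   # hypothetical file, as above
df = df[df["content"] != "animal nonface"]

# Dominance ~ content, controlling for arousal (ANCOVA).
ancova = ols("dominance ~ C(content) + arousal", data=df).fit()
print(sm.stats.anova_lm(ancova, typ=2))
# In the analysis reported above, the content effect is no longer
# significant once arousal is taken into account.
```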
Gender effects
Differences between male and female arousal and valence ratings varied significantly as a function of image content only for those slides falling below the valence midpoint.

Table 1  Means and standard errors of arousal, valence, and dominance ratings, and gender difference scores, for inanimate object, animal face, human nonface, and human face slides. Cell entries are mean (standard error).

                              Inanimate     Animal face   Human nonface  Human face
                              (N = 244)     (N = 70)      (N = 129)      (N = 457)
Emotion ratings
  Arousal                     4.06a (.07)   5.06b (.13)   5.41c (.10)    4.98b (.05)
  Valence, negative slides    4.09a (.10)   3.78a (.17)   3.22b (.11)    3.08b (.07)
  Valence, positive slides    6.30a (.07)   6.89b (.14)   6.57ab (.11)   6.49a (.05)
  Dominance                   5.57a (.07)   5.05bc (.13)  4.74b (.09)    5.13c (.05)
Gender difference scores^a
  Arousal, negative slides     .19a (.05)    .34ab (.09)   .39ab (.06)    .48b (.04)
  Valence, negative slides    -.30a (.06)   -.89b (.10)   -.74bc (.07)   -.63c (.04)
  Dominance                   -.18a (.04)   -.51b (.07)   -.52b (.05)    -.45b (.03)

Note: Row means that do not share subscripts differ at p < .05, as shown by Tukey's HSD tests. ^a Difference scores were calculated as female minus male ratings.
For positive slides, the omnibus tests for arousal, F(3, 496) = .98, p = .401, ηp² = .006, and valence, F(3, 496) = 2.24, p = .083, ηp² = .013, were not significant.
For negative slides, the gender difference in arousal scores was significantly associated with slide content, F(3, 396) = 7.52, p < .001, ηp² = .054. Whereas the mean arousal difference between women and men in response to inanimate slides was only .19 (indicating that women were slightly more aroused than men), it was significantly higher for human face slides (difference = .48).
The omnibus test investigating the association between gender difference scores in valence ratings and slide content was also significant for negative slides, F(3, 396) = 13.92, p < .001, ηp² = .095. Whereas women rated inanimate objects only slightly more negatively than men (difference = -.30), the corresponding difference was significantly greater for human and, in particular, animal face pictures (difference = -.89).
Finally, gender differences in dominance ratings varied significantly as a function of image content across the entire affective space, F(3, 896) = 14.82, p < .001, ηp² = .047. The dominance difference scores were consistently negative, indicating that women reported feeling less in control in response to the slides than men. This difference was significantly smaller for inanimate slides than for the other categories.
Discussion
These results support our prediction that the distribution of IAPS pictures across the valence and arousal dimensions would be related to slide content. The χ²-analyses demonstrated that pictures with human content are over-represented in the high arousal/positive valence and high arousal/negative valence areas of affective space compared to the low arousal/neutral valence area. Within these high-emotion areas, human face slides are over-represented relative to human nonface slides, and this is particularly true for pictures eliciting extremely high arousal and strong negative valence. All but two of these slides depict mutilated, dead, or blood-covered faces (the remaining two show dead bodies). It is an empirical question whether equally strong negative emotions could be evoked by nonhuman material or by human material not including faces. However, the significant effort that the developers of the IAPS invested in covering ''affective space'' suggests that it would not be easy to elicit high levels of emotion without engaging what we suggest is an empathic route to emotion elicitation. Virtually no pictures of animals fell into the regions of interest defined for the χ²-analyses, indicating that animal pictures usually elicit medium levels of emotion.
On the basis of these findings, researchers using stimulus sets drawn from defined areas of affective space need to be mindful of possible alternative explanations of observed effects of the stimulus material. Instead of being due to differences in valence or arousal, these effects could reflect different proportions of human versus inanimate content. For example, Ribeiro et al. (2007) report that low arousal/positive stimuli elicit stronger EMG responses of the zygomaticus major muscle (activated during smiling) than high arousal/positive stimuli. Our content coding of the IAPS slides used in Ribeiro et al.'s study shows that the proportion of slides depicting inanimate objects was lower for their eight low-arousal slides (12.5%) than for their eight high-arousal slides (37.5%). Conversely, according to our coding, there was one more human-content slide in their low-arousal (75%) than in their high-arousal (62.5%) stimulus set. Although it is possible that pleasant low-arousal stimuli elicit stronger smiling responses than pleasant high-arousal stimuli, there is also reason to believe that the sociality of stimulus content facilitates smiling (Fridlund 1991), which provides a possible alternative explanation for these findings.
The analysis of the entire affective space partly confirmed our prediction that inanimate object slides would
attract the lowest arousal and least extreme valence ratings.
Inanimate slides were rated as significantly less arousing
than all other categories. Also, within the negative valence
area these slides were rated less negatively than human
stimuli. Yet in the positively valenced region inanimate
slides were only rated less positively than animal face
slides. The relatively high valence ratings for positive
animal faces might reflect the large proportion of slides
evoking the Kindchenschema (i.e., showing a ‘‘baby face’’;
Lorenz 1943). A supplementary analysis showed that
whereas 32.4% (or 12 out of 37) of the positive animal face
slides contain the characteristic features of this evolutionarily relevant physiognomy, this is true of only 18.8% (48
out of 256) of the positive human face slides. This may be
why the valence differences between categories of slide
content are less systematic and pronounced for positive
than for negative images.
Our data also offer indirect support for the proposed
qualitative differences between slides with human versus
inanimate content. As expected, viewers felt more in control of their emotions in response to inanimate than animate
slides. Although the fact that this effect disappeared when
controlling for arousal calls for a cautious interpretation,
the pattern is consistent with the idea that reactions to
social stimuli are partly based on automatic empathic
processes and that these empathic responses are less controllable than appraisal-based reactions to inanimate
stimuli. Furthermore, gender differences in response to
negative slides were more pronounced for animate than for
inanimate content, although for arousal only the difference
between inanimate slides and human face slides reached
significance. It is possible that this increased responsiveness of women to slides depicting human (and animal)
content is due to their superior ability in decoding nonverbal cues (Hall 1978; Hall et al. 2000) and their higher
empathic ability (or, at least, motivation to act empathically; Ickes et al. 2000).
These findings are consistent with the notion that different processes might underlie responses to inanimate
slide content and responses to human slide content.
Research in social-cognitive neuroscience and cognitive
psychology has shown that the perception of other people’s
faces, compared to object perception, involves distinct
patterns of brain activation and processes (for reviews see
Calder and Young 2005; Farah et al. 1998). In neuroscience, particular attention has been drawn to the complex
role of the so-called fusiform face area (FFA) in face and
object perception (e.g., Grill-Spector et al. 2006; Kanwisher et al. 1997) and the possible face-specific occurrence
or modulation of ERP components (in particular N170,
e.g., Itier and Taylor 2004). Additionally, the extrastriate
body area (EBA), a functionally specialized region of the
visual cortex exhibiting modulation by body-related stimuli, has been shown to play a role in the perception of
others. Hodzic et al. (in press) report similarities between the patterns of activation in the EBA for the perception of one's own body and the bodies of others: the cortical networks for the extraction of body-related information and for the extraction of self-related body information overlap in the right superior and inferior parietal cortices. Using IAPS slides as stimulus material,
Schupp et al. (2004, Table 2) have shown that neutral faces
differ from neutral objects in that they evoke larger late
positive event-related potentials. Taken together, this evidence suggests that qualitatively different brain systems are
involved in processing human (facial) as compared to
inanimate stimuli.
For fear-relevant stimuli, more direct evidence pertaining to the IAPS slide set and demonstrating differences in
neuronal processes triggered by differing picture content
comes from a study by Hariri et al. (2002). They examined
the strength and specificity of amygdala responses in
reaction to angry and fearful human facial expressions and
fear-relevant nonface pictures, including animal threats and
depictions of ‘‘guns, car accidents, plane crashes, explosions’’ (p. 318). Their analyses revealed a stronger
amygdala response and larger changes in skin conductance
for face as compared to nonface slides. They also report a
lateralized response to facial stimuli that particularly
involved the right amygdala. Again, our findings are
consistent with the notion that different neurophysiological
processes underlie reactions to stimuli that produce similar
self-reported emotional responses.
Animal slides generally elicited medium levels of affect
and the pattern of results for these slides was closer to the
pattern for human than for inanimate stimuli. Empathic
processes may play a role in reaction to animal slides as
well. Future research will need to test more fully the
emotional characteristics of animal slides and establish
whether the observed pattern is specific to animal face
slides or whether a similar pattern would emerge for animal
nonface slides.
We recognize that the present analyses provide only
indirect and preliminary evidence for the argument concerning differences in the way human and object slides are
processed. However, given the broad sampling and careful
selection of the IAPS and the fact that it covers the entire
affective spectrum (Bradley and Lang 2007), we argue that
there is sufficient basis for conducting further research on
the role played by empathic processes in response to
emotion-inducing stimuli more generally. Whatever the
exact mechanisms are, if stimuli in ‘‘affective’’ conditions
of experiments predominantly depict human content,
whereas stimuli in ‘‘neutral’’ conditions have predominantly inanimate content, it is possible that observed
differences in response are due partly to content differences. Our results suggest that plausible reasons for such
differences are empathy-related automatic processes such
as imitation, contagion, and simulation.
References
Achaibou, A., Pourtois, G., Schwartz, S., & Vuilleumier, P. (2008).
Simultaneous recording of EEG and facial muscle reactions
during spontaneous emotional mimicry. Neuropsychologia, 46,
1104–1113. doi:10.1016/j.neuropsychologia.2007.10.019.
Adolphs, R. (2002). Social cognition and the human brain. In J. T.
Cacioppo, G. G. Berntson, R. Adolphs, C. S. Carter, R. J.
Davidson, M. K. McClintock, et al. (Eds.), Foundations in social
neuroscience (pp. 317–329). Cambridge, MA: MIT Press.
Baron-Cohen, S., Wheelwright, S., Griffin, R., Lawson, J., & Hill, J.
(2002). The exact mind: Empathising and systemising in autism
spectrum conditions. In U. Goswami (Ed.), Blackwell handbook
of childhood cognitive development (pp. 491–508). Oxford, UK:
Blackwell.
Bernat, E., Patrick, C. J., Benning, S. D., & Tellegen, A. (2006).
Effects of picture content and intensity on affective physiological
response. Psychophysiology, 43, 93–103. doi:10.1111/j.1469-8986.2006.00380.x.
Bradley, M. M., Codispoti, M., Cuthbert, B. N., & Lang, P. J. (2001).
Emotion and motivation I: Defensive and appetitive reactions in
picture processing. Emotion, 1, 276–298.
Bradley, M. M., & Lang, P. J. (2007). The International Affective
Picture System (IAPS) in the study of emotion and attention. In
J. A. Coan & J. J. B. Allen (Eds.), The handbook of emotion
elicitation and assessment (pp. 29–46). Oxford, UK: Oxford
University Press.
Bradley, M. M., Sabatinelli, D., Lang, P. J., Fitzsimmons, J. R., King,
W. M., & Desai, P. (2003). Activation of the visual cortex in
motivated attention. Behavioral Neuroscience, 117, 369–380.
doi:10.1037/0735-7044.117.2.369.
Brass, M., & Heyes, C. (2005). Imitation: Is cognitive neuroscience
solving the correspondence problem? Trends in Cognitive
Sciences, 9, 489–495. doi:10.1016/j.tics.2005.08.007.
Calder, A. J., & Young, A. W. (2005). Understanding the recognition
of facial identity and facial expression. Nature Reviews.
Neuroscience, 6, 641–651. doi:10.1038/nrn1724.
Carpenter, G. A., & Grossberg, S. (2003). Adaptive resonance theory.
In M. A. Arbib (Ed.), The handbook of brain theory and neural
networks (2nd ed., pp. 87–90). Cambridge, MA: MIT Press.
Chartrand, T. L., & Bargh, J. A. (1999). The chameleon effect: The
perception-behavior link and social interaction. Journal of
Personality and Social Psychology, 76, 893–910. doi:10.1037/
0022-3514.76.6.893.
Codispoti, M., & De Cesarei, A. (2007). Arousal and attention:
Picture size and emotional reactions. Psychophysiology, 44,
680–686. doi:10.1111/j.1469-8986.2007.00545.x.
Codispoti, M., Ferrari, V., & Bradley, M. M. (2007). Repetition and
event-related potentials: Distinguishing early and late processes
in affective picture perception. Journal of Cognitive Neuroscience, 19, 577–586. doi:10.1162/jocn.2007.19.4.577.
Collingwood, R. G. (1946). The idea of history. Oxford, UK: Oxford
University Press.
Davis, M. H., & Kraus, L. A. (1997). Personality and empathic
accuracy. In W. J. Ickes (Ed.), Empathic accuracy (pp. 144–168).
New York, NY: Guilford.
Decety, J., & Grèzes, J. (2006). The power of simulation: Imagining
one’s own and others’ behavior. Brain Research, 1079, 4–14.
doi:10.1016/j.brainres.2005.12.115.
Delplanque, S., N’diaye, K., Scherer, K., & Grandjean, D. (2007).
Spatial frequencies or emotional effects? A systematic measure
of spatial frequencies for IAPS pictures by a discrete wavelet
analysis. Journal of Neuroscience Methods, 165, 144–150. doi:
10.1016/j.jneumeth.2007.05.030.
Dimberg, U., & Lundquist, L.-O. (1990). Gender differences in facial
reactions to facial expressions. Biological Psychology, 30, 151–159.
doi:10.1016/0301-0511(90)90024-Q.
Dimberg, U., Thunberg, M., & Elmehed, K. (2000). Unconscious
facial reactions to emotional facial expressions. Psychological
Science, 11, 86–89. doi:10.1111/1467-9280.00221.
Eisenberg, N., & Lennon, R. (1983). Sex differences in empathy and
related capacities. Psychological Bulletin, 94, 100–131. doi:
10.1037/0033-2909.94.1.100.
Farah, M. J., Wilson, K. D., Drain, M., & Tanaka, J. N. (1998). What
is ‘‘special’’ about face perception? Psychological Review, 105,
482–498. doi:10.1037/0033-295X.105.3.482.
Fontaine, J. R., Scherer, K. R., Roesch, E. B., & Ellsworth, P. (2007). The world of emotions is not two-dimensional. Psychological Science, 18, 1050–1057. doi:10.1111/j.1467-9280.2007.02024.x.
Fridlund, A. J. (1991). Sociality of solitary smiling: Potentiation by an
implicit audience. Journal of Personality and Social Psychology,
60, 229–240. doi:10.1037/0022-3514.60.2.229.
Gallese, V., Keysers, C., & Rizzolatti, G. (2004). A unifying view of
the basis of social cognition. Trends in Cognitive Sciences, 8,
396–403. doi:10.1016/j.tics.2004.07.002.
Grill-Spector, K., Sayres, R., & Ress, D. (2006). High-resolution
imaging reveals highly selective nonface clusters in the fusiform
face area. Nature Neuroscience, 9, 1177–1185. doi:10.1038/
nn1745.
Grush, R. (2004). The emulation theory of representation: Motor
control, imagery, and perception. The Behavioral and Brain
Sciences, 27, 377–396.
Hall, J. A. (1978). Gender effects in decoding nonverbal cues.
Psychological Bulletin, 85, 845–857. doi:10.1037/0033-2909.
85.4.845.
Hall, J. A., Carter, J. D., & Horgan, T. G. (2000). Gender differences
in nonverbal communication of emotion. In A. H. Fischer (Ed.),
Gender and emotion: Social psychological perspectives (pp. 97–
117). New York, NY: Cambridge University Press.
Hariri, A. R., Tessitore, A., Mattay, V. S., Fera, F., & Weinberger,
D. R. (2002). The amygdala response to emotional stimuli: A
comparison of faces and scenes. NeuroImage, 17, 317–323. doi:
10.1006/nimg.2002.1179.
Hatfield, E., Cacioppo, J. T., & Rapson, R. L. (1992). Primitive
emotional contagion. In M. S. Clark (Ed.), Emotion and social
behavior (pp. 151–177). Thousand Oaks, CA: Sage.
Hodzic, A., Muckli, L., Singer, W., & Stirn, A. (in press). Cortical
responses to self and others. Human Brain Mapping. doi:
10.1002/hbm.20558.
Ickes, W., Gesn, P. R., & Graham, T. (2000). Gender differences in
empathic accuracy: Differential ability or differential motivation? Personal Relationships, 7, 95–110. doi:10.1111/j.1475-6811.2000.tb00006.x.
Itier, R. J., & Taylor, M. J. (2004). N170 or N1? Spatiotemporal
differences between object and face processing using ERPs.
Cerebral Cortex, 14, 132–142. doi:10.1093/cercor/bhg111.
Ito, T. A., Cacioppo, J. T., & Lang, P. J. (1998). Eliciting affect using
the International Affective Picture System: Trajectories through
evaluative space. Personality and Social Psychology Bulletin,
24, 855–879. doi:10.1177/0146167298248006.
Kanwisher, N., McDermott, J., & Chun, M. M. (1997). The fusiform
face area: A module in human extrastriate cortex specialized for
face perception. The Journal of Neuroscience, 17, 4302–4311.
Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (2005). International
Affective Picture System (IAPS): Affective ratings of pictures and
instruction manual. Technical Report A-6. University of Florida,
Gainesville, FL.
Lang, P. J., Bradley, M. M., Fitzsimmons, J. R., Cuthbert, B. N.,
Scott, J. D., Bradley, M., et al. (1998). Emotional arousal and
activation of the visual cortex: An fMRI analysis. Psychophysiology, 35, 199–210. doi:10.1017/S0048577298001991.
Lorenz, K. (1943). Die angeborenen Formen möglicher Erfahrung
[The innate forms of possible experience]. Zeitschrift für
Tierpsychologie, 5, 235–409.
Mehrabian, A., & Epstein, N. (1972). A measure of emotional
empathy. Journal of Personality, 40, 525–543. doi:10.1111/
j.1467-6494.1972.tb00078.x.
Niedenthal, P. M. (2007). Embodying emotion. Science, 316, 1002–
1005. doi:10.1126/science.1136930.
Niedenthal, P. M., Barsalou, L. W., Winkielman, P., Krauth-Gruber,
S., & Ric, F. (2005). Embodiment in attitudes, social perception,
and emotion. Personality and Social Psychology Review, 9, 184–
211. doi:10.1207/s15327957pspr0903_1.
Preston, S. D., & de Waal, F. B. M. (2002). Empathy: Its ultimate and
proximate bases. The Behavioral and Brain Sciences, 25, 1–72.
Ribeiro, R. L., Teixeira-Silva, F., Pompeia, S., & Bueno, O. F. A.
(2007). IAPS includes photographs that elicit low-arousal
physiological responses in healthy volunteers. Physiology &
Behavior, 91, 671–675. doi:10.1016/j.physbeh.2007.03.031.
Rizzolatti, G., & Craighero, L. (2004). The mirror-neuron system.
Annual Review of Neuroscience, 27, 169–192. doi:10.1146/
annurev.neuro.27.070203.144230.
Sabatinelli, D., Bradley, M. M., Fitzsimmons, J. R., & Lang, P. J.
(2005). Parallel amygdala and inferotemporal activation reflect
emotional intensity and fear relevance. NeuroImage, 24, 1265–
1270. doi:10.1016/j.neuroimage.2004.12.015.
Schulte-Rüther, M., Markowitsch, H. J., Shah, N. J., Fink, G. R., &
Piefke, M. (2008). Gender differences in brain networks
supporting empathy. NeuroImage, 42, 393–403. doi:10.1016/
j.neuroimage.2008.04.180.
Schupp, H. T., Cuthbert, B. N., Bradley, M. M., Hillman, C. H.,
Hamm, A. O., & Lang, P. J. (2004). Brain processes in emotional
perception: Motivated attention. Cognition and Emotion, 18,
593–611. doi:10.1080/02699930341000239.
Smith, J. C., Löw, A., Bradley, M. M., & Lang, P. J. (2006). Rapid
picture presentation and affective engagement. Emotion, 6, 208–
214. doi:10.1037/1528-3542.6.2.208.
Sonnby-Borgström, M., Jönsson, P., & Svensson, O. (2003). Emotional empathy as related to mimicry reactions at different levels
of information processing. Journal of Nonverbal Behavior, 27,
3–23. doi:10.1023/A:1023608506243.
Wilson, M. (2002). Six views of embodied cognition. Psychonomic
Bulletin & Review, 9, 625–636.