
Differential Processing of Emotional Cues in Inpatients and Outpatients with Serious
Mental Illness
Melissa Tarasenko, Petra Kleinlein, Kee-Hong Choi, Elizabeth Cook, Charlie A. Davidson, and William
D. Spaulding
University of Nebraska-Lincoln
Introduction
It has consistently been shown that people with serious mental illness (SMI) demonstrate impairments in
social cognition, the ability to perceive, interpret, and process social information. In particular, the ability to
perceive facial emotion appears to be deficient in this population. Although the exact nature of this impairment
has not yet been specified, people with schizophrenia appear to have particular difficulty recognizing negative
emotions (Muzekari & Bates, 1977; Murphy & Cutting, 1990; Borod et al., 1993; Edwards et al., 2002;
Bozikas et al., 2006; Penn et al., 2006).
As support grows for the functional implications of social cognition, so does its potential to become a
routine target for assessment and treatment. Previous studies have analyzed the correlation between
emotion perception and global functioning in individuals with schizophrenia (Hooker & Park, 2002; Ihnen et
al., 1998; Kee et al., 2003; Mueser et al., 1996; Penn et al., 1996; Poole et al., 2000). However, few of these
studies examined multiple modalities of emotion perception and their relationship to overall functioning. The
present study analyzes facial and vocal emotion perception as they relate to level of community functioning
in people with SMI.
Method
Participants
Twenty-four outpatients and 26 inpatients with serious mental illness were selected for participation in this
study. Data from the inpatient sample were collected as part of a prior research project. Criteria for inclusion in
the study included a chronic and stable DSM-IV-TR Axis I diagnosis, no concurrent substance abuse or
dependence, no mental retardation, no organic brain injury, and a stable medication regimen.
Neurocognitive Measure
The Benton Test of Facial Recognition (BTFR; Benton et al., 1983): This task uses pictures of faces to assess
visuospatial processing. In the BTFR, each participant is shown a neutral target face along with a set of six
neutral test faces and is instructed to choose the one test face (or, on more difficult items, the three test faces)
that matches the target. The maximum possible score is 54, and a score of 41 or greater is considered normal.
Correctly discriminating between the faces requires attending to the facial characteristics of the person in
each picture.
Affect Perception Measures
Face/Voice Emotion Identification Test (FEIT/VEIT; Kerr & Neale, 1993): The Face Emotion
Identification Test consists of still photographs of faces, adapted from Ekman (1976) and Izard (1971), that
convey happiness, sadness, anger, fear, surprise, and shame. The Voice Emotion Identification Test consists
of 21 recorded neutral statements vocalized to convey the same six emotions. In each test, participants
choose the one of the six emotions they believe the stimulus depicts, and the percentage of correct
responses serves as the performance indicator. The VEIT appears to be a more reliable measure than the
FEIT, with alpha coefficients of .74 and .56, respectively.
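For reference, the alpha coefficients reported here are presumably Cronbach's alpha, the standard index of
internal consistency. For a k-item test with item variances \sigma^2_i and total-score variance \sigma^2_X:

    \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_i}{\sigma^2_X}\right)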
Bell-Lysaker Emotion Recognition Task (BLERT; Bell et al., 1997): This test comprises 21 video
clips that feature a male actor making neutral statements about a job. The participant is instructed to determine
whether the actor is portraying happiness, sadness, anger, fear, surprise, disgust, or no emotion, based on his
facial expression and affective prosody. Prior studies have estimated the test-retest reliability of the BLERT to be
approximately .76.
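To make the scoring rules above concrete, here is a minimal sketch in Python. It is a hypothetical
reconstruction from the descriptions in this section, not the published scoring procedures; the label sets and
input lists are illustrative assumptions.

    # Hypothetical scoring sketch based on the measure descriptions above;
    # not the published scoring code. Labels and inputs are illustrative.
    FEIT_VEIT_LABELS = {"happiness", "sadness", "anger", "fear", "surprise", "shame"}
    BLERT_LABELS = (FEIT_VEIT_LABELS - {"shame"}) | {"disgust", "no emotion"}

    def percent_correct(responses, answer_key):
        # FEIT/VEIT/BLERT performance indicator: percentage of correct responses.
        matches = sum(r == k for r, k in zip(responses, answer_key))
        return 100.0 * matches / len(answer_key)

    def btfr_is_normal(raw_score):
        # BTFR: raw score out of a maximum of 54; 41 or greater is considered normal.
        return 0 <= raw_score <= 54 and raw_score >= 41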
Results
Correlational analyses were conducted to examine the relationship between facial and auditory
emotion perception in the inpatient and outpatient groups; Tables 1 and 2 summarize these results. In
the inpatient sample, performance on the VEIT was significantly correlated with performance on the
BLERT. In the outpatient sample, by contrast, performance on the BLERT was significantly correlated with
performance on both the FEIT and the VEIT. A one-way ANOVA revealed no significant difference in
BLERT performance between the two groups, F(1, 48) = 2.48, p = .122.

Table 1. Intercorrelations Among Affect Perception and Neurocognitive Measures in Inpatients

    Measure     1        2       3
    1. BLERT    --
    2. FEIT     0.21     --
    3. VEIT     0.64**   0.27    --
    4. BTFR     0.22     0.22    0.26

    * p < .05. ** p < .01.

Table 2. Intercorrelations Among Affect Perception and Neurocognitive Measures in Outpatients

    Measure     1        2       3
    1. BLERT    --
    2. FEIT     0.43*    --
    3. VEIT     0.51*    0.33    --
    4. BTFR     0.22     0.55**  0.17

    * p < .05. ** p < .01.
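Analyses of this form could be reproduced along the following lines. This is a hedged sketch, not the
authors' code: the simulated score arrays stand in for the real per-participant data, and only the BLERT-VEIT
correlation is shown for brevity.

    # Illustrative sketch of the correlational and ANOVA analyses above
    # (not the authors' code). Simulated scores stand in for the real data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    measures = ("BLERT", "FEIT", "VEIT", "BTFR")
    inpatients = {m: rng.normal(60, 10, 26) for m in measures}   # n = 26
    outpatients = {m: rng.normal(60, 10, 24) for m in measures}  # n = 24

    # Pearson intercorrelations within each group (Tables 1 and 2)
    for label, group in (("inpatients", inpatients), ("outpatients", outpatients)):
        r, p = stats.pearsonr(group["BLERT"], group["VEIT"])
        print(f"{label}: BLERT-VEIT r = {r:.2f} (p = {p:.3f})")

    # One-way ANOVA comparing BLERT performance across the two groups
    F, p = stats.f_oneway(inpatients["BLERT"], outpatients["BLERT"])
    print(f"Group difference on BLERT: F(1, 48) = {F:.2f}, p = {p:.3f}")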
Multiple regression analyses were also conducted to explore more complex relationships among
the variables. As shown in Table 3 below, the multiple regression model with all four predictors (group, FEIT,
VEIT, and BTFR) produced R² = .39, F(4, 45) = 7.07, p < .001. Both group and VEIT had significant
positive regression weights. These results indicate that, after controlling for visual and auditory affect
perception and for visuospatial processing of faces, inpatients are expected to perform significantly worse
than outpatients on complex tasks that involve the integration of visual and auditory emotion perception.
Additionally, individuals who perform better on tasks involving auditory emotion perception would be
expected to perform better on tasks involving the integration of auditory and visual emotional cues when
facial affect recognition and visuospatial abilities are held constant.
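A regression of this form could be specified as sketched here. This is an illustration under stated
assumptions, not the study's analysis script: the DataFrame is simulated, group membership is assumed to
be dummy-coded 0/1, and all variables are z-scored so that the fitted coefficients are standardized betas
comparable to those in Table 3 below.

    # Illustrative sketch of the Table 3 model (not the authors' code):
    # BLERT regressed on group, FEIT, VEIT, and BTFR with z-scored variables,
    # so the coefficients come out as standardized betas.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "group": np.repeat([0, 1], [26, 24]),  # assumed coding: 1 = outpatient
        "FEIT": rng.normal(size=50),
        "VEIT": rng.normal(size=50),
        "BTFR": rng.normal(size=50),
        "BLERT": rng.normal(size=50),
    })

    z = (df - df.mean()) / df.std(ddof=0)  # standardize every variable
    X = sm.add_constant(z[["group", "FEIT", "VEIT", "BTFR"]])
    fit = sm.OLS(z["BLERT"], X).fit()
    print(fit.summary())  # betas, t and p values, R-squared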
Table 3. Summary of Multiple Regression Analysis for Group, FEIT, VEIT, and BTFR Predicting BLERT
Performance

    Variable    β       t (p)
    Group       .293    2.298 (.026)*
    FEIT        .144    1.099 (.277)
    VEIT        .510    4.085 (.000)**
    BTFR        .052    .384 (.703)

    BLERT: R² = .39, F(4, 45) = 7.07, p < .001
    * p < .05. ** p < .01.

Discussion
Findings from the present study suggest a possible discrepancy in the ways in which inpatients and
outpatients with serious mental illness process emotional cues. We found that inpatients tend to focus
primarily on auditory emotional cues when completing a complex task that requires the integration of
visual and auditory emotional cues. Outpatients, on the other hand, appear to attend to both visual and
auditory cues. However, their multimodal processing of information does not appear to facilitate better
performance on complex emotional tasks, as evidenced by their inability to outperform inpatients on the
BLERT. We hypothesize that this inefficiency in processing emotional information might be due to
an inability to integrate sensory information from multiple modalities. As a result, the cognitive
capacities of these individuals might become overloaded, reducing their ability to effectively process
relatively complex emotional cues.
Further analyses revealed that, after controlling for visual affect perception, auditory affect
perception, and visuospatial processing of faces, outpatients are expected to outperform inpatients on
tasks involving sensory integration of emotional cues. Additionally, individuals who perform better on
tasks of auditory affect perception are also expected to perform better on tasks involving both auditory
and visual emotional cues, suggesting that the ability to recognize auditory emotional cues is an essential
component in the completion of complex emotional tasks.
The present findings hold significant implications for the remediation of social cognitive deficits in
people with SMI. Currently, affect perception training approaches primarily focus on improving facial
affect recognition abilities (Wölwer et al., 2005; Choi & Kwon, 2006; Penn et al., 2007), while auditory
affect recognition is largely ignored. Future interventions would likely benefit from including auditory
emotion recognition and multimodal sensory integration training components, particularly in outpatient
settings.