
What Can Head and Facial Movements Convey
about Positive and Negative Affect?
Zakia Hammal1, Jeffrey F. Cohn1,2, Carrie Heike3, and Matthew L. Speltz4
1 Robotics Institute, Carnegie Mellon University, Pittsburgh, USA
2 Department of Psychology, University of Pittsburgh, Pittsburgh, USA
3 Seattle Children’s Hospital, Seattle, USA
4 University of Washington School of Medicine, Seattle, USA
Abstract—We investigated whether the dynamics of head and
facial movements apart from specific facial expressions communicate affect in infants. Age-appropriate tasks were used to elicit
positive and negative affect in 28 ethnically diverse 12-month-old infants. 3D head and facial movements were tracked from
2D video. Strong effects were found for both head and facial
movements. For head movement, angular velocity and angular
acceleration of pitch, yaw, and roll were higher during negative
relative to positive affect. For facial movement, displacement,
velocity, and acceleration also increased during negative relative
to positive affect. Our results suggest that the dynamics of head and facial movements communicate affect at ages as young as 12 months. These findings deepen our understanding of emotion communication and provide a basis for studying individual differences in emotion in socio-emotional development.
I. INTRODUCTION
The past five years have witnessed dramatic advances in
automatic affect analysis and recognition [31], [23]. The long-term goal of analyzing spontaneous affective behavior in
relatively uncontrolled social interaction has become possible
through major advances in computer vision in the development
of reliable and fully automatic, person-independent methods
for the recognition of facial expression in diverse settings
[5], [27], [28]. However, until recently, most research in
affective computing has emphasized the morphology of facial
expression (e.g., holistic expressions and anatomically-based
action units (AUs)) to the neglect of related nonverbal displays
and their dynamics. When previous research has used temporal
information, it typically is limited to adjacent frames within a
temporal window to improve the automatic recognition of AUs
or holistic facial expressions or the detection of their temporal
segments [24], [16], [26], [29] (for a more complete review
please see [31] and [23]).
This emphasis on holistic expressions and AUs stems in
part from Ekman’s pioneering work on basic emotions [6] and
his Facial Action Coding System (FACS) [7]. FACS provided
guidance on what to decode for emotion recognition, making
possible dramatic advances in automated measurement of facial expression and the emerging field of affective computing.
Emphasis on static holistic expressions and AUs, however,
ignores two critical aspects of affect communication. One,
affect communication is multimodal. People do not communicate solely by the exchange of facial displays. Facial expression is “packaged” within other non-verbal modalities that may powerfully communicate emotion [1]. Two, affect communication is dynamic. Faces and heads move, and the dynamics of that motion matter to the meaning of affective behavior. As but one example, timing systematically varies between spontaneous and posed smiles [?]: spontaneous smiles follow ballistic timing, whereas posed smiles do not.
Most previous research in automatic affect analysis has
considered only adults. With few exceptions [18], [19], affect-related behavior has not been considered in infants. However, both theory and data indicate that affect communication, manifested by nonverbal behaviors (such as those of the head and face), plays a critical role in infants’ social, emotional, and cognitive development [13], [2], [19], [25]. Differences in how infants
respond to positive and negative emotion inductions are predictive of developmental outcomes that include attachment
security [3] and behavioral problems [20].
To understand the beginnings of nonverbal communication,
we investigated the extent to which non-verbal behavior apart
from specific facial expressions communicates emotion in
infants at ages as young as 12 months. Additionally, to inform
subsequent research into how the dynamics of nonverbal
behavior in infants may change with development in normative and high-risk samples, we included both typically and atypically developing infants. We asked to what
extent the temporal dynamics of head and facial movements
communicate information about positive and negative affect.
We hypothesized that infants’ head and facial movements
during positive and negative affect systematically differ.
Automatic analysis of head and facial movement in infants is challenging for several reasons. Compared with adults, infants’ head shape differs markedly, and infants have smoother skin, fewer wrinkles, and fainter eyebrows. Occlusion, due to hands
in mouth and extreme head movement, is another common
problem. To overcome these challenges, we used a newly
developed technique that tracks dense feature points and
performs 3D alignment from 2D video [14]. This allowed us
to ask to what extent head and facial movements communicate
infants’ affect. To elicit positive and negative affect, we used
two age-appropriate emotion inductions, the Bubble Task and the Toy-Removal Task [11]. We then used quantitative measurements
to compare the spatiotemporal dynamics of head and facial
movements during negative and positive affect.
We found strong evidence that head and facial movement
communicate affect. Velocity and acceleration of head and facial movements were greater during negative affect compared
with positive affect. Observers’ time-varying behavioral ratings of infant affect were also related to head and facial dynamics.
II. METHODS
A. Participants
Participants were 28 ethnically diverse 12-month-old infants (M = 13.15 months, SD = 0.52) recruited as part of a multi-site study
involving children’s hospitals in Seattle, Chicago, Los Angeles, and Philadelphia. Participants in the present study were
primarily from Seattle Children’s Hospital. Two infants were
African-American, 1 Asian-American, 5 Hispanic-American,
16 European-American, and 4 Multiracial. Eight were girls.
Twelve infants were mildly affected with craniofacial microsomia (CFM). CFM is a congenital condition associated with
varying degrees of facial asymmetry. Case-control differences
between CFM and unaffected infants will be a focus of future
research as more infants are ascertained. All parents gave their
informed consent to the procedures.
B. Observational Procedures
Infants were seated in a highchair at a table with an
experimenter and their mother across from them. The
experimenter sat to the mother’s left, out of camera view and closer to the table, as shown in Fig. 1. Two emotion inductions were administered, each consisting of multiple trials.
1) Emotion Induction: The “Bubble Task” was intended
to elicit positive affect (surprise, amusement, or interest); the
“Toy-Removal Task” was intended to elicit negative affect
(frustration, anger, or distress).
Bubble Task: Soap bubbles were blown toward the center
of the table and below camera view (see Fig.3, first row).
Before blowing bubbles, the examiner attempted to build
suspense (e.g., counting 1-2-3; “Ooh, look at the pretty
bubbles”; “Can you catch one?”; “Where’s that bubble
going?”) and maintain infants’ engagement (e.g., “allow all
bubbles to pop before continuing again”). This procedure was
repeated three times.
Toy-Removal Task: The examiner showed a car to the
infant and demonstrated how it works by “running” the car
between her own hands to generate interest (e.g., the car is “wound up” by pulling it back a few inches with all four wheels on the table surface) [11]. The toy was then placed within the infant’s
reach, allowing them to play with the toy for 15 seconds (see
Fig. 4, first row). The examiner then gently took back the toy
and placed it inside a clear plastic bin for 30 seconds just out
of the infant’s reach to elicit frustration or anger (see Fig. 4,
rows 2-4). After 30 seconds, the toy was returned to the infant.
This procedure was repeated three times.
Neither task required an active response from the infant.
Qualitative observations suggest that head pose and hand
position appeared to be independent of task. For instance, as
shown in Fig. 3 and 4, frontal and non-frontal pose and hand
placements were observed in both tasks. The infant reached toward the bubbles in Fig. 3 and withdrew their hand from the toy in Fig. 4.
The trials were recorded using a Sony DXC190 compact camera at 60 frames per second. Infants’ face orientation to the camera was approximately 15° from frontal, with considerable head movement.
Fig. 1: Observational Procedure.
2) Manual Continuous Ratings: Despite the standardized
emotion inductions, positive and negative affect could occur
during both tasks. To control for this possibility, manual continuous ratings of valence were obtained from four raters. For
each of 56 video recordings (28 infants × 2 conditions, Bubble and Toy-Removal Tasks), the raters independently rated valence from the video on a continuous scale from −100 (maximum negative affect) to +100 (maximum positive affect) using custom software [10]. This produced a time series rating for
each rater and each video. To achieve high effective reliability,
following [22], the time series were averaged across raters.
Effective reliability was evaluated using intraclass correlation
[17], [22]. The ICC was 0.92, which indicated high reliability.
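As an illustration of this aggregation and reliability step (our own sketch, not the authors’ code), the following Python averages the rater time series and computes a two-way, average-measures intraclass correlation for consistency, ICC(C,k) in McGraw and Wong’s notation [17]; the specific ICC variant, the simulated data, and all names are assumptions.

```python
import numpy as np

def aggregate_and_icc(ratings):
    """ratings: array (n_frames, n_raters) of continuous valence in [-100, 100].

    Returns the rater-averaged time series and a two-way, average-measures
    ICC for consistency (ICC(C,k) in McGraw & Wong's notation).
    """
    X = np.asarray(ratings, dtype=float)
    n, k = X.shape                                         # frames x raters

    consensus = X.mean(axis=1)                             # aggregated rating used in later analyses

    # Two-way ANOVA decomposition (frames x raters).
    grand = X.mean()
    ss_rows = k * np.sum((X.mean(axis=1) - grand) ** 2)    # between-frame
    ss_cols = n * np.sum((X.mean(axis=0) - grand) ** 2)    # between-rater
    ss_error = np.sum((X - grand) ** 2) - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    icc_ck = (ms_rows - ms_error) / ms_rows                # average-measures consistency ICC
    return consensus, icc_ck

# Hypothetical usage: four raters, 1800 frames (30 s at 60 fps).
rng = np.random.default_rng(0)
truth = 50 * np.sin(np.linspace(0, 4 * np.pi, 1800))
raters = truth[:, None] + rng.normal(0, 10, size=(1800, 4))
consensus, icc = aggregate_and_icc(raters)
print(f"ICC(C,k) = {icc:.2f}")
```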
Fig. 2 shows the emotional maps obtained from the aggregated ratings. Continuous ratings of Bubble Tasks are shown
on the right, and continuous ratings of Toy-Removal Tasks
are shown on the left. In most cases, segments of positive and
negative affect (delimited in green and red, respectively in
Fig. 2) alternated throughout the interaction regardless of the
experimental tasks. In a few cases, infants never progressed
beyond negative affect even in the nominally positive Bubble
Task (see rows 4 and 6 from the top in Fig. 2). This pattern
highlights the importance of using independent criteria to
identify episodes of positive and negative affect instead of
relying upon the intended goal of each condition.
Fig. 2: Emotional maps obtained from the aggregated ratings. Each row corresponds to one infant. Columns correspond to the aggregated continuous rating across frames. Green and red delimit positive and negative affect, respectively.
Overall, infants were more positive in the Bubble Task and more negative in the Toy-Removal Task. The ratio of positive to negative affect was significantly higher in the Bubble Task than in the Toy-Removal Task (t = 7.1, df = 27, p ≤ 0.001; see green segments in Fig. 2, left), and the ratio of negative to positive affect was significantly higher in the Toy-Removal Task than in the Bubble Task (t = 7.1, df = 27, p ≤ 0.001; see red segments in Fig. 2, right). Moreover,
the differences between the selected affective segments were
specific to valence (i.e., positive and negative affect) rather
than the intensity. The absolute intensity of behavioral ratings
did not differ between segments of positive and negative affect
(p > 0.1).
C. Automatic Tracking of Facial Landmarks and Head Orientation
A recently developed 3D face tracker was used to track
the registered 3D coordinates from 2D video of 49 facial
landmarks, or fiducial points, and the 3 degrees of freedom of rigid head movement (i.e., pitch, yaw, and roll; see Fig. 3 and Fig. 4)
[14]. The method is generic; person-specific training is not
required. More specifically, the tracker uses a combined 3D
supervised descent method [30], where the shape model is
defined by a 3D mesh and the 3D vertex locations of the
mesh. The tracker registers a dense parameterized shape model
to an image such that its landmarks correspond to consistent
locations on the face (for details, please see [14]).
To evaluate the concurrent validity of the tracker for head
pose tracking, it was compared with the CSIRO cylinder-based 3D head tracker [4] on 16 randomly selected videos
of Bubble and Toy-Removal Tasks from 8 infants. Concurrent
correlations between the two methods for pitch, yaw, and roll
were 0.74, 0.93, and 0.92, respectively. Examples of the tracker
performance are shown in Fig. 3 and Fig. 4, respectively.
III. DATA REDUCTION
In the following we first describe the results of the automatic
tracking and the manual validation of the tracked data. We
then describe how the measures of displacement, velocity, and
acceleration were computed for both head and facial landmark
movements.
A. Tracking Results
For each video frame, the tracker output the 3D coordinates
of 49 fiducial points and 3 degrees of freedom of rigid head
movements (pitch, yaw, and roll) or a failure message when
a frame could not be tracked. To rule out poorly tracked
frames, we visually reviewed tracking results overlaid on the
corresponding video frames. Any frames for which the head
pose or feature tracking was visually flawed were removed
from further analysis (as shown in Fig. 3 and Fig. 4). Frames
that could not be tracked or failed visual review were not
analyzed. Failures were due primarily to self occlusions (e.g.,
hand-in-mouth) and extreme head turn (out of frame). For each
infant, the percentage of well-tracked frames was lower for the Toy-Removal Task than for the Bubble Task (see Table I).
TABLE I: Proportion (%) of Validly Tracked Frames during the Bubble and Toy-Removal Tasks.

            Bubble Task    Toy-Removal Task
Infants          87                72

Note: t = 4.50, df = 27, p ≤ 0.01.
B. Measurement of Head Movement
Head angles in the horizontal, vertical, and lateral directions
were selected to measure head movement. These directions
correspond to head nods (i.e. pitch), head turns (i.e. yaw),
and lateral head inclinations (i.e. roll), respectively (see blue,
green, and red rows in Fig. 3 and Fig. 4). Head angles were
converted into angular displacement, angular velocity, and
angular acceleration. For pitch, yaw, and roll, angular displacement was computed by subtracting the overall mean head angle
from each observed head angle within each valid segment
(i.e., consecutive valid frames). Similarly, angular velocity and
angular acceleration for pitch, yaw, and roll, were computed
Fig. 3: Examples of tracking results (head orientation pitch (green), yaw(blue), and roll (red) and the 49 fiducial points) during Bubble Task.
Fig. 4: Examples of tracking results (head orientation pitch (green), yaw(blue), and roll (red) and the 49 fiducial points) during a Toy-Removal
Task.
as the derivative of angular displacement and angular velocity,
respectively. Angular velocity and angular acceleration allow
measurements of changes in head movement from one frame
to the next.
The root mean square (RMS) was then used to measure the magnitude of variation of the angular displacement, angular velocity, and angular acceleration. For each of pitch, yaw, and roll, the RMS of each quantity was computed as the square root of the mean of its squared values over each consecutive valid segment, separately for each condition and each infant. The RMSs of head movement used in the analyses refer to the mean of the RMSs across consecutive valid segments.
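As a minimal sketch of these computations (not the authors’ implementation), the following Python takes the head angles of one axis over a single consecutive valid segment and returns the RMS angular displacement, velocity, and acceleration; the scaling of the frame-to-frame differences by the 60 fps frame rate, and all names, are assumptions.

```python
import numpy as np

FPS = 60.0  # assumed video frame rate from the recording setup

def head_movement_rms(angles):
    """angles: 1-D array of head angles (degrees) for one axis (pitch, yaw, or
    roll) over one consecutive valid segment."""
    angles = np.asarray(angles, dtype=float)

    # Angular displacement: deviation from the mean head angle of the segment.
    displacement = angles - angles.mean()

    # Frame-to-frame derivatives approximate angular velocity and acceleration.
    velocity = np.diff(displacement) * FPS       # degrees / s
    acceleration = np.diff(velocity) * FPS       # degrees / s^2

    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return rms(displacement), rms(velocity), rms(acceleration)

def mean_rms_over_segments(segments):
    """Mean of the per-segment RMS values, as used in the analyses."""
    values = np.array([head_movement_rms(s) for s in segments])
    return values.mean(axis=0)   # RMS displacement, velocity, acceleration
```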
C. Measurement of Facial Movement
The movement of the 49 detected 3D fiducial points (see
Fig. 3 and Fig. 4) was selected to measure facial movement. The movement of the 49 fiducial points corresponds to the movement (with rigid head motion removed) of the corresponding facial features, such as the eyes, eyebrows, and mouth. First, the mean position in the face of each detected
fiducial point within all valid segments (i.e., consecutive valid
frames) was computed. The displacement of each one of the
49 fiducial points was computed by measuring the Euclidean
distance between its current position in the face and its mean
position in the face. The velocity of movement of the 49
detected fiducial points was computed as the derivative of the
corresponding displacement, measuring the speed of change
of the fiducial points position on the face from one frame to
the next. Similarly, the acceleration of each of the 49 detected fiducial points was computed as the derivative of the corresponding velocity.
The RMS was then used to measure the magnitude of
variation of the fiducial points’ displacement, velocity, and acceleration, respectively. As with head movement, the RMS
values of the fiducial points displacement, the fiducial points
velocity, and the fiducial points acceleration were computed
for each consecutive valid segment for each condition. RMSs
of facial movement as used for analyses refer to the mean
of the RMSs of consecutive valid segments. For simplicity, in
the following sections the fiducial points displacement, fiducial
points velocity, and fiducial points acceleration are referred to
as facial displacement, facial velocity, and facial acceleration,
respectively.
Because the movements of individual points were highly
correlated, principal components analysis (PCA) was used to
reduce the number of parameters (a compressed representation of the 49 RMSs of facial displacement, velocity, and
acceleration, respectively). The first principal components of
displacement, velocity, and acceleration accounted for 60%,
70%, and 70% of the respective variance and were used for
analysis.
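The sketch below (ours, not the authors’ code) illustrates the facial-movement measures and the PCA reduction under stated assumptions: the tracker output is treated as an (n_frames × 49 × 3) array with rigid head motion already removed, scikit-learn provides the PCA, and all names are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

def facial_rms_features(points):
    """points: array (n_frames, 49, 3) of registered 3D fiducial coordinates
    for one consecutive valid segment (rigid head motion removed).

    Returns per-point RMS displacement, velocity, and acceleration (length 49 each).
    """
    points = np.asarray(points, dtype=float)
    mean_pos = points.mean(axis=0)                            # mean position of each point

    # Euclidean distance of each point from its mean position, per frame.
    displacement = np.linalg.norm(points - mean_pos, axis=2)  # (n_frames, 49)
    velocity = np.diff(displacement, axis=0)                  # frame-to-frame change
    acceleration = np.diff(velocity, axis=0)

    rms = lambda x: np.sqrt(np.mean(x ** 2, axis=0))
    return rms(displacement), rms(velocity), rms(acceleration)

def first_component(rms_matrix):
    """rms_matrix: (n_observations, 49) RMS values, e.g., one row per infant and
    condition. Returns the first principal-component score for each observation."""
    return PCA(n_components=1).fit_transform(rms_matrix).ravel()
```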
IV. DATA ANALYSIS AND RESULTS
Because of the repeated-measures nature of the data,
repeated-measures analyses of variance (ANOVAs) were used
to test for differences in head and facial movements between
positive and negative affect. Separate ANOVAs were used for
each measure (e.g., RMS of angular displacement of head
pitch). Student’s paired t-tests were used for post-hoc analyses
following significant ANOVAs.
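As a rough illustration of the post-hoc step (not the authors’ code), the sketch below runs a paired t-test on per-infant mean RMS values for the positive- and negative-affect segments and reports a simple within-subject effect size; the repeated-measures ANOVAs could be run analogously (e.g., with statsmodels’ AnovaRM) and are not shown. Function and variable names are assumptions.

```python
import numpy as np
from scipy import stats

def compare_conditions(rms_positive, rms_negative):
    """Paired comparison of one movement measure (e.g., RMS angular velocity of
    pitch) between positive- and negative-affect segments.

    rms_positive, rms_negative: per-infant mean RMS values, aligned by infant.
    """
    pos = np.asarray(rms_positive, dtype=float)
    neg = np.asarray(rms_negative, dtype=float)

    t, p = stats.ttest_rel(pos, neg)              # paired t-test, df = n - 1
    diff = neg - pos
    cohens_d = diff.mean() / diff.std(ddof=1)     # within-subject effect size
    return t, p, cohens_d
```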
A. Infants’ Head Movement During Positive and Negative
Affect
For RMS angular displacement of pitch, yaw, and roll there was no effect for condition (F1,52 = 0.39, p = 0.53; F1,52 = 1.20, p = 0.27; and F1,52 = 0.01, p = 0.94, respectively; the subscripts of F give the degrees of freedom for the effect and for error).
For RMS angular velocity of pitch, yaw, and roll, there were significant effects for condition (F1,52 = 5.66, p = 0.02; F1,52 = 11.53, p = 0.001; and F1,52 = 4.69, p = 0.03, respectively).
Similarly, for RMS angular acceleration of pitch and yaw,
there were significant effects for condition (F1,52 = 4.30, p =
0.04, and F1,52 = 4.90, p = 0.03, respectively), and a marginal
effect for RMS angular acceleration of roll (F1,52 = 3.32,
p = 0.07). RMS angular velocities and angular accelerations
increased in negative affect compared with positive affect (see
Table II).
B. Infants’ Facial Movement During Positive and Negative
Affect
There was a marginal effect for RMS facial displacement (F1,52 = 3.23, p = 0.07); RMS facial displacement was marginally higher in negative affect compared with positive affect (Table III).
For RMS facial velocity and RMS facial acceleration there
were significant effects for condition (F1,52 = 3.98, p = 0.05
and F1,52 = 4.79, p = 0.03, respectively).
C. Correlations Among Head Movements, Facial Movements, and Continuous Behavioral Ratings
Because most measures of head and facial movement were faster during negative than positive affect, we wished to investigate whether head and facial movements were themselves correlated, and how each was correlated with the behavioral ratings. To reduce the facial movement parameters to a compact representation, PCA was used to reduce the 49 fiducial-point time series to three component time series that together accounted for 70% of the variance (see Comp-1, Comp-2, and Comp-3 in Table IV and Table V). We then computed the correlations between the continuous behavioral ratings and the infants’ time series of angular displacement of pitch, yaw, and roll and of facial displacement (measured using the three compressed time series Comp-1, Comp-2, and Comp-3) during the Bubble and Toy-Removal Tasks. The obtained correlations are reported in Table
IV and Table V. A one-sample t-test tested the null hypothesis
that the mean correlations did not significantly differ from a
population mean of zero. The null hypotheses were all rejected
at p <= 0.01.
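As a hedged sketch of this step (not the authors’ code), per-infant Pearson correlations between the frame-aligned rating and movement time series can be computed and then tested against zero with a one-sample t-test; testing raw r values rather than Fisher-z transforms, and all names, are assumptions.

```python
import numpy as np
from scipy import stats

def per_infant_correlations(rating_series, movement_series):
    """Lists with one frame-aligned 1-D array per infant (e.g., valence ratings
    and pitch angular displacement). Returns one Pearson r per infant."""
    return np.array([np.corrcoef(r, m)[0, 1]
                     for r, m in zip(rating_series, movement_series)])

def test_mean_correlation(correlations):
    """One-sample t-test of the null hypothesis that the mean correlation is zero."""
    t, p = stats.ttest_1samp(correlations, popmean=0.0)
    return correlations.mean(), t, p
```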
Head angular displacement and facial displacement were
all moderately correlated with continuous behavioral ratings
of positive and negative affect (see Table V) and with each
other (see Table IV). To test whether the behavioral ratings
of positive and negative affect were more correlated with
facial displacement or with head angular displacement, we
used a paired t-test to test the null hypothesis that there
were no differences between correlations. For the Bubble
Task, the mean correlations between the behavioral ratings
and facial displacement (using Comp-1) were higher than
between the behavioral ratings and head angular displacement
for pitch, yaw, and roll (t = −2.34, p <= 0.02, t = −2.48,
p <= 0.02, and t = −3.37, p <= 0.002, d f = 27, for pitch,
yaw, and roll, respectively). For the Toy-Removal Task, the
mean correlations between the behavioral ratings and facial
displacement (using Comp-1) were higher than between the
behavioral ratings and head angular displacement for yaw,
and roll (t = −2.60, p <= 0.01 and t = −2.85, p <= 0.008,
d f = 27, for yaw and roll, respectively). Effect sizes of this
magnitude are considered large in behavioral science. No
differences were found for pitch (t = −1.21, p > 0.1).
Overall, both head angular displacement and facial displacement accounted for significant variation in the continuous ratings of infants’ affect, and facial displacement accounted for more of that variance.

TABLE IV: Mean correlations between pitch, yaw, and roll angular displacement and facial displacement (Comp-1, Comp-2, and Comp-3).

Bubble Task:
          Comp-1   Comp-2   Comp-3
Pitch      0.23     0.22     0.14
Yaw        0.20     0.15     0.11
Roll       0.17     0.14     0.12

Toy-Removal Task:
          Comp-1   Comp-2   Comp-3
Pitch      0.21     0.27     0.18
Yaw        0.17     0.17     0.14
Roll       0.11     0.11     0.14

Note: All correlations differed significantly from 0, p ≤ 0.01.

TABLE V: Mean correlations between pitch, yaw, and roll angular displacement and behavioral ratings, and between facial displacement (Comp-1, Comp-2, and Comp-3) and behavioral ratings.

                    Pitch   Yaw    Roll    Comp-1   Comp-2   Comp-3
Bubble Task         0.23    0.21   0.18     0.34     0.21     0.27
Toy-Removal Task    0.23    0.17   0.17     0.29     0.19     0.28

Note: All correlations differed significantly from 0, p ≤ 0.01.
V. DISCUSSION AND CONCLUSION
We tested the hypothesis that head and facial movements in
infants vary between positive and negative affect. We found
striking differences in head and facial dynamics between
positive and negative affect. Velocity and acceleration for both
head and facial movements were greater during negative than
positive affect.
Were these effects specific to the affective differences between conditions? Alternative explanations might be considered. One is whether the differences were specific to intensity
rather than valence. These two dimensions are fundamental
to emotion and affective phenomena according to dimensional
models (e.g., the “circumplex” model of Russell and Bullock).
If the intensity of observer ratings were greater during one
or the other condition, this would suggest that the valence
differences were confounded by intensity. To evaluate this
possibility, we compared observer ratings of intensity between
conditions. They did not differ. This negative finding suggests that intensity of response was not responsible for the systematic variation we observed between conditions; the differences were specific to the valence of affect.
Another possible alternative explanation could be the demand characteristics of the tasks. The Toy-Removal Task elicits reaching for the toy; the Bubble Task elicits head motion toward the bubbles.
TABLE II: Post-hoc paired t-tests (following significant ANOVAs) for pitch, yaw, and roll RMS angular velocities and RMS angular accelerations.

                          Positive      Negative      Differences by paired t-test
                          M (SD)        M (SD)        t        df     p
Angular Velocity
  Pitch                   0.68 (0.13)   0.79 (0.19)   -3.31    25     0.002
  Yaw                     0.60 (0.11)   0.73 (0.16)   -4.29    25     < 0.001
  Roll                    0.33 (0.08)   0.40 (0.14)   -2.61    25     0.014
Angular Acceleration
  Pitch                   0.91 (0.18)   1.03 (0.24)   -2.84    25     0.01
  Yaw                     0.78 (0.13)   0.89 (0.22)   -2.73    25     0.01
  Roll                    0.41 (0.10)   0.48 (0.18)   -1.86    25     0.07

Note: M: Mean of RMSs, t: t-ratio, df: degrees of freedom, p: probability.
TABLE III: Post-hoc paired t-tests (following significant ANOVAs) for RMS facial displacements, RMS facial velocities, and RMS facial accelerations.

                          Positive       Negative       Differences by paired t-test
                          M (SD)         M (SD)         t        df     p
Facial Displacement       -0.94 (3.43)   0.87 (3.98)    -2.05    25     0.05
Facial Velocity           -0.12 (0.35)   0.11 (0.50)    -2.61    25     0.01
Facial Acceleration       -0.17 (0.44)   0.16 (0.65)    -2.61    25     0.01

Note: M: Mean of RMSs, t: t-ratio, df: degrees of freedom, p: probability.
To consider whether these characteristics may have been
a factor, we compared the amount of movement between
conditions. Displacement of the head failed to vary between
tasks. The lack of differences in head displacement suggests
that demand characteristics did not account for the differences
in acceleration and velocity we observed.
The obtained results are consistent with prior work in
adult psychiatric populations that found that head movement
systematically varies between affective states. For instance,
our findings are consistent with a recent study that found
greater velocity of head motion in intimate couples during
episodes of high versus low conflict [12]. They are consistent
as well with the finding that velocity of head movement was
inversely correlated with depression severity. Previous studies
have found that depression is marked by reductions in general
facial expressiveness [21] and head movement [8], [15], [9].
Together, these findings suggest that head and facial motion
are lowest during depressed affect, increase during neutral
to positive affect, and increase yet again during conflict and
negative affect. Future research is needed to investigate more
fully the relation between affect and the dynamics of head and
facial movement.
Finally, some infants had mild CFM. While CFM could
potentially have moderated the effects we observed, too few
infants with CFM have yet been enrolled to evaluate this hypothesis. Also, CFM was relatively mild. For these reasons, we
focus on the relation between head movement and expression
independent of CFM status. Once study enrollment is further
advanced, the possible role of CFM as a moderator can be
examined.
We also examined the relative importance of infants’ head
and facial displacement in the behavioral ratings independently
of specific facial expressions and AUs. The dynamics of both head and facial movement accounted for significant variance in the behavioral ratings of infants’ affect. The next step will be to evaluate how head and facial movement temporally covary with specific facial expressions and AUs.
In summary, we found strong evidence that head and facial
movements vary markedly with affect. Our results suggest
that the dynamics of head and facial movements together
communicate affect independently of static facial expressions.
ACKNOWLEDGMENT
Research reported in this publication was supported in part
by the National Institute of Dental and Craniofacial Research
of the National Institutes of Health under Award Number
DE022438. The content is solely the responsibility of the
authors and does not necessarily represent the official views
of the National Institutes of Health.
REFERENCES
[1] Beebe, B., and Gerstman, L. The “packaging” of maternal stimulation in relation to infant facial-visual engagement: A case study at four months. Merrill-Palmer Quarterly, 26(4), pp. 321-339, 1980.
[2] J.J. Campos, K. Caplowitz, M.E. Lamb, H.H. Goldsmith, C. Stenberg.
Socioemotional development. Haith MM, Campos JJ, eds. Handbook of
child psychology, II(4), New York: Wiley, pp 1373-1396, 1983.
[3] Cohn, J. F., Campbell, S. B., and Ross, S. Infant response in the still-face paradigm at 6 months predicts avoidant and secure attachment at 12 months. Development and Psychopathology, 3, pp. 367-376, 1991.
[4] M. Cox, J. Nuevo-Chiquero, J. M. Saragih, and S. Lucey. CSIRO Face Analysis SDK, Brisbane, Australia. 10th IEEE Int. Conf. Autom. Face and Gesture Recog., Shanghai, China, Apr. 2013.
[5] Cohn, J.F., and De la Torre, F. Automated face analysis for affective
computing. In R. A. Calvo, S. K. D’Mello, J. Gratch and A. Kappas
(Eds.), Handbook of affective computing, pp. 131-150, New York, NY:
Oxford, 2015.
[6] Ekman, P. Universals and Cultural Differences in Facial Expressions of
Emotions. In Cole, J. (Ed.), Nebraska Symposium on Motivation pp.
207-282. Lincoln, NB: University of Nebraska Press, 1972.
[7] Ekman, P., Friesen, W. V., and Hager, J. C. (Eds.). Facial Action Coding System [E-book]. Salt Lake City, UT: Research Nexus, 2002. Ekman, P., Friesen, W. V., and O’Sullivan, M. Smiles when lying. Journal of Personality and Social Psychology, 54, pp. 414-420, 1988.
[8] H.U. Fisch, S. Frey, H.P. Hirsbrunner. Analyzing nonverbal behavior in depression. J. Abnorm. Psychol., 92, pp. 307-318, 1983.
[9] J. Girard, J.F. Cohn, M. H. Mahoor, S.M. Mavadati, Z. Hammal, and D.
Rosenwald. Nonverbal social withdrawal in depression: Evidence from
manual and automatic analyses. Image and Vision Computing Journal,
2014.
[10] J. Girard. CARMA: Software for continuous affect rating and media
annotation. Journal of Open Research Software, 2014.
[11] H.H. Goldsmith, M.K. Rothbart. The laboratory temperament assessment battery. Eugene, OR: University of Oregon, 1999.
[12] Z. Hammal, J.F. Cohn, and D.T. George. Interpersonal Coordination of
Head Motion in Distressed Couples. IEEE Transactions on Affective
Computing, 5(2), pp.155-167, 2014
[13] C. Izard, E. Woodburn, K. Finlon, S. Krauthamer-Ewing, S. Grossman,
A. Seidenfeld. Emotion knowledge, emotion utilization, and emotion
regulation. Emotion Review, 3(44), 2011.
[14] LA Jeni, JF Cohn, T Kanade. Dense 3D Face Alignment from 2D Videos
in Real-Time. Proc. IEEE Int’l Conf. Face and Gesture Recognition,
2015.
[15] J. Joshi, R. Goecke, G. Parker, M. Breakspear. Can body expressions contribute to automatic depression analysis? Proc. IEEE Int’l Conf. Face and Gesture Recognition, Shanghai, China, 2013.
[16] S. Koelstra, M. Pantic, I. Patras. A dynamic texture based approach to recognition of facial actions and their temporal models. IEEE Transactions on PAMI, 32(11), pp. 1940-1954, 2010.
[17] K.O. McGraw, S.P. Wong. Forming inferences about some intraclass
correlation coefficients. Psychological Methods, 1, 30-46, 1996.
[18] D.S. Messinger, M. H. Mahoor, S.M. Chow, and J.F. Cohn. Automated
measurement of facial expression in infant-mother interaction: A pilot
study. Inter. J. of Computer Vision, 14(3), pp. 285-305, 2009.
[19] D.S. Messinger, W. Mattson, M.H. Mohammad, and J.F. Cohn. The
eyes have it: Making positive expressions more positive and negative
expressions more negative, Emotion, 12, pp. 430-436, 2012.
[20] Moore, G. A., Cohn, J. F., and Campbell, S. B. Infant affective responses
to mother’s still-face at 6 months differentially predict externalizing and
internalizing behaviors at 18 months. Developmental Psychology, 37,
pp. 706-714, 2001.
[21] B. Renneberg, K. Heyn, R. Gebhard, S. Bachmann. Facial expression of emotions in borderline personality disorder and depression. J. Behav. Ther. Exp. Psychiatry, 36, pp. 183-196, 2005.
[22] R. Rosenthal. Conducting judgment studies: Some methodological issues. The handbook of methods in nonverbal behavior research, pp. 287-365, 2005.
[23] E. Sariyanidi, H. Gunes, A. Cavallaro, Automatic analysis of facial
affect: A survey of registration, representation and recognition. IEEE
Transactions on PAMI, 37(6), pp.1113-1133, 2015.
[24] Y. Tong, W. Liao, Q. Ji. Facial action unit recognition by exploiting their dynamics and semantic relationships. IEEE Transactions on PAMI, 29, pp. 1683-1699, 2007.
[25] E. Z. Tronick. Emotions and emotional communication in infants. Am
Psychol, 44(2), pp 112-119,1989.
[26] M.F. Valstar, M. Pantic. Fully automatic recognition of the temporal phases of facial actions. IEEE Trans. Syst., Man, Cybernet., Part B, pp. 28-43, 2012.
[27] Michel F. Valstar, Björn Schuller, Kirsty Smith, Florian Eyben, Bihan Jiang, Sanjay Bilakhia, Sebastian Schnieder, Roddy Cowie, and Maja Pantic. AVEC 2013 - The Continuous Audio/Visual Emotion and Depression Recognition Challenge. Proc. 3rd ACM AVEC Challenge, pp. 3-10, 2013.
[28] Michel Valstar, Jeff Girard, Timur Almaev, Gary McKeown, Marc
Mehu, Lijun Yin, Maja Pantic, Jeff Cohn, FERA 2015 - Second Facial
Expression Recognition and Analysis Challenge, Proc. IEEE Int’l Conf.
Face and Gesture Recognition, 2015
[29] Z. Wang, S. Wang, Q. Ji. Capturing complex spatio-temporal relations among facial muscles for facial expression recognition. Proc. IEEE CVPR, pp. 3422-3429, 2013.
[30] X. Xiong, F. de la Torre. Supervised descent method and its applications
to face alignment. Proc. IEEE CVPR, pp. 532-539, June 2013.
[31] Zeng, Z., Pantic, M., Roisman, G.I., and Huang, T.S. A Survey of Affect
Recognition Methods: Audio, Visual, and Spontaneous Expressions,
IEEE Transactions on PAMI, 31(1), pp. 39-58, 2012.