DIST_Paris8 HUMAINE Summer School 2006

Full-body motion analysis for
animating expressive,
socially-attuned agents
Elisabetta Bevacqua Paris8
Ginevra Castellano DIST
Maurizio Mancini Paris8
Chris Peters Paris8
People involved
• DIST
- full-body movement and gesture analysis
• Paris8
- Agent processing and behavior
Overview
• Scenario: agent that senses, interprets and copies
a range of full-body movements from a person in
the real world
• System able to
- acquire input from a video camera
- process information related to the expressivity of
human movement
- generate copying behaviours
• Towards a system that recognizes emotions of
users from human movement and an expressive
agent that shows empathy to them
General framework
• Encompasses the domains of (sketched as a simple pipeline below):
– Sensing
– Interpretation
– Planning
– Generation
E. Bevacqua, A. Raouzaiou, C. Peters, G. Caridakis, K. Karpouzis, C. Pelachaud, M. Mancini,
Multimodal sensing, interpretation and copying of movements by a virtual agent, PIT 2006.
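As an illustration only, the four domains can be viewed as stages of a simple processing pipeline. The Python skeleton below is a minimal sketch under that assumption; the stage interfaces are hypothetical, since the slides name the domains but not their APIs.

# Illustrative skeleton of the four domains (sensing, interpretation, planning,
# generation) composed as a simple processing pipeline. The stage interfaces
# are hypothetical: the slides name the domains but not their APIs.
from typing import Any, Callable

def make_pipeline(sense: Callable[[Any], Any],
                  interpret: Callable[[Any], Any],
                  plan: Callable[[Any], Any],
                  generate: Callable[[Any], Any]) -> Callable[[Any], Any]:
    def run(raw_input: Any) -> Any:
        cues = sense(raw_input)        # e.g. expressive cues such as QoM and CI
        state = interpret(cues)        # e.g. movement quality or affective state
        intention = plan(state)        # e.g. which behaviour the agent should show
        return generate(intention)     # e.g. animation commands for the agent
    return run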
The application
• From human motion to the generation of expressive agent behaviour
• Full-body motion analysis of a dancer
- real and virtual world
• Agent’s response to expressive human motion
descriptors
- quantity of motion
- contraction/expansion
• Copying behaviour
Part 1. Sensing and analysis
• Real world Analysis
– Computer vision
techniques
– Facial analysis
– Gesture analysis
– Full-body analysis
• Ambition: ‘switchable’ sensing
– Real-world and virtual environment
– Bridge gap between ECA and embedded virtual
agents
Full-body analysis
• Expressive cues from human full-body movement
– Real motion
– Virtual motion
• Global indicators
• EyesWeb Expressive Gesture Processing Library*
– MotionAnalysis: motion trackers (e.g., LK), movement
expressive cues (QoM, CI, ...).
– TrajectoryProcessing: processing of 2D (physical or
abstract) trajectories (e.g., kinematics, directness, …)
– SpaceAnalysis
*Camurri, A., Mazzarino, B. and Volpe, G., Analysis of Expressive Gesture: The EyesWeb Expressive
Gesture Processing Library, in A. Camurri, G. Volpe (Eds.), “Gesture-based Communication in Human-Computer Interaction”, LNAI 2915, Springer-Verlag, 2004.
SMI and Quantity of Motion
• Quantity of Motion is an approximation of the amount
of detected movement, based on Silhouette Motion
Images
SMI[t, n] = ( Σ_{i=0..n} Silhouette[t - i] ) - Silhouette[t]
QoM = Area(SMI[t, n])/Area(Silhouette[t])
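A minimal NumPy sketch of this computation, assuming binary silhouette masks have already been extracted for each frame and treating the sum of past silhouettes as a pixel-wise union; the function names are illustrative and do not correspond to the EyesWeb API.

# Minimal NumPy sketch of the SMI / QoM definitions above. Assumes a sequence
# of binary silhouette masks (one per frame, with t >= n); the sum of past
# silhouettes is treated as a pixel-wise union. Illustrative only, not EyesWeb.
import numpy as np

def smi(silhouettes, t, n):
    """SMI[t, n]: pixels covered by the last n+1 silhouettes but not the current one."""
    recent = np.zeros_like(silhouettes[t], dtype=bool)
    for i in range(n + 1):
        recent |= silhouettes[t - i].astype(bool)
    return recent & ~silhouettes[t].astype(bool)

def quantity_of_motion(silhouettes, t, n):
    """QoM = Area(SMI[t, n]) / Area(Silhouette[t])."""
    area = silhouettes[t].astype(bool).sum()
    return smi(silhouettes, t, n).sum() / area if area else 0.0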
Contraction Index
• A measure, ranging from 0 to 1, of how the dancer’s body
uses the space surrounding it
• It can be calculated using a technique related to the
bounding region, i.e., the minimum rectangle surrounding
the dancer’s body: the algorithm compares the area covered
by this rectangle with the area currently covered by the
silhouette
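A minimal sketch of this computation, assuming a binary silhouette mask and taking the ratio of silhouette area to bounding-rectangle area; with this convention, values near 1 correspond to a contracted posture and values near 0 to an expanded one. Illustrative only, not the EyesWeb implementation.

# Minimal sketch of the Contraction Index described above: the ratio between
# the silhouette area and the area of its minimum bounding rectangle.
import numpy as np

def contraction_index(silhouette):
    mask = silhouette.astype(bool)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return 0.0                       # empty silhouette: no body detected
    height = ys.max() - ys.min() + 1     # bounding-rectangle height in pixels
    width = xs.max() - xs.min() + 1      # bounding-rectangle width in pixels
    return float(mask.sum()) / float(height * width)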
Full-body analysis: examples in the
real world and in the virtual
environment (I)
• Analysis of quantity of motion and contraction
index with EyesWeb
(G. Castellano, C. Peters, Full-body analysis of real and virtual human motion for animating expressive
agents, HUMAINE Presentation, Athens 2006)
• Real world and virtual environment
• Switchable sensing: analysis algorithms capable of
- handling input from a real-world video stream and from
virtual data
- providing similar results
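One possible way to structure such switchable sensing is sketched below: both sources are reduced to the same binary-silhouette representation, so the same QoM/CI analysis runs unchanged on either. The class names and the toy background-subtraction step are assumptions for illustration, not part of the actual system.

# Hypothetical sketch of 'switchable' sensing: a camera stream and a virtual
# render are both reduced to binary silhouettes, so the same QoM/CI analysis
# code can be applied to either. Names are illustrative, not from the system.
from abc import ABC, abstractmethod
import numpy as np

class SilhouetteSource(ABC):
    @abstractmethod
    def next_silhouette(self) -> np.ndarray:
        """Return the next binary silhouette mask."""

class CameraSource(SilhouetteSource):
    """Real-world input: colour frames of shape (H, W, 3) plus a background image."""
    def __init__(self, frames, background, threshold=30):
        self.frames = iter(frames)
        self.background = background.astype(int)
        self.threshold = threshold

    def next_silhouette(self):
        # Toy background subtraction standing in for the real vision pipeline.
        frame = next(self.frames).astype(int)
        return (np.abs(frame - self.background) > self.threshold).any(axis=-1)

class VirtualSource(SilhouetteSource):
    """Virtual-environment input: the renderer can output silhouettes directly."""
    def __init__(self, rendered_masks):
        self.masks = iter(rendered_masks)

    def next_silhouette(self):
        return next(self.masks).astype(bool)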
Full-body analysis: examples in the
real world and in the virtual
environment (II)
Comparison of metrics: contraction
index
[Plot: Contraction Index Real Dancer, Contraction Index vs. Frames]
[Plot: Contraction Index Virtual Dancer, Contraction Index vs. Frames]
Comparison of metrics: quantity of
movement
[Plot: Quantity of Motion Real Dancer, Quantity of Motion vs. Frames]
[Plot: Quantity of Motion Virtual Dancer, Quantity of Motion vs. Frames]
Part 2. Interpretation and Behaviour
• Ideal goal: What do we use the expressive cues for?
– Planning how to behave according to the quality of the users' gestures
• In this work: copying the dancer's gesture quality
Analysis of gesture data
• Full-body analysis of a dancer
• Manual segmentation of dancer’s gestures
• Mean quantity of motion and contraction index of the dancer computed for each gesture
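A small sketch of this per-gesture statistics step, assuming the manual segmentation is given as (start, end) frame indices and that per-frame QoM and CI series are available; names are illustrative.

# Sketch of the per-gesture statistics step: given manually segmented gesture
# boundaries and per-frame QoM / CI values, compute the mean of each cue over
# each gesture. Function and field names are illustrative.
import numpy as np

def per_gesture_means(qom, ci, segments):
    """segments: list of (start_frame, end_frame) pairs, end frame exclusive."""
    return [
        {"mean_qom": float(np.mean(qom[start:end])),
         "mean_ci": float(np.mean(ci[start:end]))}
        for start, end in segments
    ]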
CI & QoM Copying
• Greta performs one gesture type (same shape)
but copies the gesture quality of movement of
the dancer
• Greta uses expressivity parameters to modulate
the quality of her gestures
• Mapping expressive cues to expressivity
parameters:
» CI → Spatial extent
» QoM → Temporal extent
Parameter scaling
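A hedged sketch of one possible mapping and scaling: a gesture's mean CI drives Spatial extent and its mean QoM drives Temporal extent, linearly rescaled into an assumed [-1, 1] parameter range. The input ranges, the output range and the direction of each mapping are assumptions for illustration, not the values used in the actual system.

# Hedged sketch of the cue-to-parameter mapping and scaling: mean CI drives
# Spatial extent, mean QoM drives Temporal extent. Ranges and mapping
# directions are assumptions for illustration only.
def rescale(value, in_min, in_max, out_min=-1.0, out_max=1.0):
    value = min(max(value, in_min), in_max)                  # clamp to input range
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

def expressivity_parameters(mean_ci, mean_qom):
    return {
        # CI -> Spatial extent: a contracted posture (high CI) is assumed to
        # give narrow gestures, i.e. a low spatial extent.
        "spatial_extent": rescale(1.0 - mean_ci, 0.0, 1.0),
        # QoM -> Temporal extent: more detected motion is assumed to give
        # faster gesture strokes.
        "temporal_extent": rescale(mean_qom, 0.0, 0.25),
    }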
Copying: an example
Video of the dancer moving and of the virtual agent performing
gestures that copy the quality of the dancer's motion
DEMO!
Facial expressions (1)
• Show emotional facial expressions depending
on the user's movement quality
• Study the relation between quality of movement
and emotion
• Example: Link QoM and CI to threat:
Facial expressions (2)
• Example: Link QoM and CI to empathy:
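Purely as an illustration of the kind of link mentioned in the two examples above, the toy rules below map QoM and CI to a facial display; the thresholds and the direction of the rules are invented here and are not specified in the slides.

# Purely hypothetical rules illustrating a link from QoM and CI to facial
# displays such as threat or empathy. The thresholds and rule directions are
# invented for illustration; the slides do not specify the actual mapping.
def facial_display(mean_qom, mean_ci, qom_high=0.15, ci_high=0.6):
    if mean_qom > qom_high and mean_ci < ci_high:
        return "threat"      # large, expansive movement (hypothetical rule)
    if mean_qom <= qom_high and mean_ci >= ci_high:
        return "empathy"     # small, contracted movement (hypothetical rule)
    return "neutral"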
Future
• Preliminary work
• Validation both for analysis and synthesis
– Perceptual tests to study how users associate an
emotional label with an expressive behaviour
• Towards a virtual agent able to recognize users’
emotions from their movement and to show
empathy
• Real-time system with continuous input