
What the Face Reveals and the Benefits to Facial Rigging
A Thesis Submitted to the Faculty of the Animation Department in
Partial Fulfillment of the Requirements for the
Degree of Master of Fine Arts in Animation
at
Savannah College of Art and Design
Rickey Cloudsdale
Savannah, Georgia
© September 2014
Thesis Chair: Ashwin Inamdar
Committee Member: Brian Schindler
Committee Member: Michael Betancourt
TABLE OF CONTENTS
List of Figures .......... 1
Thesis Abstract .......... 2
Introduction .......... 3
Facial Expressions Universal .......... 3
Communication of Emotions .......... 8
F.A.C.S & Relating to Rigging .......... 11
Designing the Rig .......... 13
Configuring the Auto Rig .......... 20
Conclusion .......... 23
Bibliography .......... 25
LIST OF FIGURES
Fig. 1 Cynopithecus niger, in a Placid Condition and Titillated, Joseph Wolf. .......... 6
Fig. 2 "Duchenne Smile" Compared to Induced Smile, Guillaume Duchenne. .......... 9
Fig. 3 FACS Example, Paul Ekman. .......... 12
Fig. 4 Slider Widget Window, Jason Osipa. .......... 15
Fig. 5 Cloud Facial Auto Rig, Rickey Cloudsdale. .......... 21
Fig. 6 Cloud F.A.R - Blendshapes, Rickey Cloudsdale. .......... 22
Fig. 7 "Duchenne Smile" Compared to Induced Smile, Rickey Cloudsdale. .......... 23
What the Face Reveals and the Benefits to Facial Rigging
Rickey Cloudsdale
September 2014
ABSTRACT
This thesis focuses on research into facial movements and structure. Facial movements reveal a great deal of information when fully understood. Multiple scientists have already led the way in creating tools and catalogs of the various movements involved in creating expressions. The information from these catalogs can be a great resource for riggers, animators, and many others. As the face is commonly left bare in auto rigs, the goal of this thesis is to create a viable auto rig option using the scientists' research.
INTRODUCTION
What does the face reveal, and how can it benefit rigging and animation? There has been a multitude of research on the communication of emotions and on the possibility that such traits are universal. Paul Ekman and Wallace Friesen developed a system for categorizing facial expressions and movements called the Facial Action Coding System. The information provided by F.A.C.S. can be translated to animated characters to allow for more believability in the animation. A problem arises in CGI of creating a rig that allows the character to mimic and create these movements. There are multiple options when designing a facial rig. Each option has its own strengths and weaknesses, and depending on the time allowed for the project the best option can change. When designing a tool to create a facial rig, the viable options narrow considerably. The tool needs to be versatile enough to be used on an assortment of faces, but also simple enough for any level of user to efficiently set up, as well as animate.
There are multiple tools currently available that help with rigging the body, but they often leave the face very basic, allowing only positioning of the head and rotation of the jaw. The face has often been left alone due to the complexity of creating the varying shapes and deformations required for a variety of characters. However, in recent years, with the increase in computer power and the assortment of different rigging techniques coming to light, it could very well be possible. This paper will investigate the findings of Paul Ekman and Wallace Friesen on the Facial Action Coding System and how they can be used to help create a viable tool to quickly rig the face.
FACIAL EXPRESSIONS UNIVERSAL
There have been studies by multiple scientists exploring the idea that facial expressions are universal, and even going beyond that to say they can be traced in animals as well. One of the earliest studies started with Guillaume Duchenne and his experiments with applying electric currents to facial muscles. His interest was originally sparked during an attempt "to treat a patient's facial neuralgia when he noticed that applying an electrical current caused underlying muscle to contract sharply". 1 He later recreated the test on both dead and living subjects. Duchenne noticed distinguishing factors, in the movements of two muscles, that revealed whether a smile was one of enjoyment or produced deliberately. According to Duchenne, "The first [zygomaticus major] obeys the will but the second [orbicularis oculi] is only put in play by the sweet emotions of the soul." 2 For the first time, Duchenne argued that facial expressions are driven biologically and are something much more than behavior a child picks up by observing people. Before Duchenne moved on to living subjects he used the heads of the recently executed, which gave him access to a large variety of people. After his experiments on various human subjects, living and dead, he felt expressions "constitute a universal language...the same in all people, in savages, and civilized nations." 3
Charles Darwin later investigated the idea of facial expression being biologically linked, believing it is tied to evolution and used by primates as a form of communication. Darwin's multiple studies give insight into the idea that expressions could be much more than something taught to children, being instead ingrained throughout our species and others, and they also allowed researchers to view the muscle areas involved in expressions. When his work was originally published in 1872 it was not well received, as many readers believed "emotion to emanate from the soul and its expression a gift from God". 4
1. Conniff, Richard. "What's Behind a Smile?." Smithsonian 38, no. 5 (August 2007): 46-53. Academic Search Premier, EBSCOhost (accessed April 27, 2014).
2. Ekman, Paul. "Facial Expressions of Emotion: New Findings, New Questions." Psychological Science (Wiley-Blackwell) 3, no. 1 (January 1992): 36. Psychology and Behavioral Sciences Collection, EBSCOhost (accessed April 27, 2014).
3. Conniff, Richard. "What's Behind a Smile?." 46-53.
4. Prodger, Phillip. An annotated catalogue of the illustrations of human and animal expression from the collection of Charles Darwin: an early case of the use of photography in scientific research. (Lewiston, N.Y.: Edwin Mellen Press, 1998), 4.
Darwin was always sure to be very thorough with his works, to provide a strong foundation for any arguments that could arise. A goal of this project was to be able to tell "precisely which muscles were involved in the execution of expressions". 5 Darwin started his study by collecting images of expressive behavior. The main problem with trying to study specific emotions is getting the subject to portray the true emotion and capturing it in the short time it appears. An expression of joy or sadness can come across the face and leave within a second. Darwin, although a draftsman, started to use photographs as another medium for capturing expressions, as the accuracy of a photograph will always exceed that of a drawing. Photographs also allow for a way to capture those seconds where the face expresses pure joy. The problem with collecting pictures from the mass media and the general public is that the images may have been staged, so Darwin contracted Oscar Rejlander, a photographer, to create pictures to his specifications. 6
Once the images were collected it was much easier to compare and contrast the still images and the changes in the faces. This allowed for a much more thorough understanding of the muscles involved. Darwin was trying to account for the inexplicable movements of the facial muscles associated with laughing, crying, anger, and sadness. 7 An example that stood out from Darwin's images was at the London Zoo. He was able to hide a turtle under a pile of hay in a monkey's cage to capture the reaction of the monkey as the turtle revealed itself. The artist was able to capture this moment of terror from the monkey in a drawing. This is one of the earliest examples showing the possibility that emotions are biological, since how else could a monkey know what a frightened expression involves?
5. Ibid., 1.
6. Ibid., 6.
7. Ibid., 3.
Figure 1 - Cynopithecus niger, in a Placid Condition and Titillated
Darwin's studies from the 1870s linking primate and human expressions later sparked other scientists' interest, for they believed that facial expressions are biologically based in all people. Paul Ekman was one of these scientists who believed there was much more to understanding the face than was currently known. Even after Darwin's various studies with human and animal expressions, many psychologists from the 1920s through the 1960s still believed facial expressions to be socially learned and to vary culturally. Ekman, along with Wallace Friesen, conducted studies on "literate cultures, working independently but at the same time." 8 In their studies both were able to find evidence of emotions being recognized in Western and non-Western cultures. They both felt it was possible that these cultures could have learned expressions from mass media, so they went to an "isolated preliterate culture in New Guinea." 9
8. Ekman, Paul. "Facial Expressions of Emotion: New Findings, New Questions." 34.
9. Ibid.
Ekman and Friesen replicated the tests and received the same results as the previous tests on literate cultures. These tests further strengthened Darwin's original idea that facial expressions are biological. Still, there were often apparent differences between cultures that had yet to be explained. Ekman and Friesen both thought of the possibility of "display rules," which they presumed were cultural teachings about the management of expression in varying circumstances. 10 The two went about testing this theory by observing the expressions of Caucasians and Japanese separately, with and without an authoritative figure in the room. It became apparent that the Japanese were much more likely to mask negative feelings when an authoritative figure was present. Ekman later found that cultures do not necessarily display different expressions but rather differ in the strength at which they display them. 11
These studies on universal facial expressions by Charles Darwin, Paul Ekman, Wallace Friesen, and many other scientists have helped pave the way for further studies toward understanding the face. Guillaume Duchenne provided the first insight that expressions are much more mechanical than previously believed. Scientists now have a strong foundation for expressions being universal and innate instead of being bestowed by God. 12 This of course opens up new questions and a desire for a greater understanding of how the face works. Understanding that facial expressions are universal and biologically ingrained, animators and riggers both need a thorough perception of how the face moves to create believable actions. The animators want to keep viewers involved with the film through believable motions, and to be able to differentiate expressions to best fit the mood of the character. The riggers want to create a structure similar to the muscle layout that Duchenne, Darwin, and Ekman had begun exploring.
10. Ibid.
11. Ibid., 35.
12. Conniff, Richard. "What's Behind a Smile?." 46-53.
Being able to recreate a rig similar to the structure of the human face will allow for more naturalistic movements from the animators. Through further studies, scientists will be able to investigate exactly how and what the face is saying with its expressions.
COMMUNICATION OF EMOTIONS
Facial expressions can reveal a great deal of information about what a person is currently thinking or feeling, as well as evoke an expression from the viewer. All the movements involved in creating an expression play a role in how it is received. There have been many studies, by scientists such as Paul Ekman, that further investigate facial expressions in communication roles. Facial expressions are "one of a number of emotional responses that are generated centrally when an emotion is called forth." 13 Ekman and Friesen conducted an experiment in which subjects were asked to create an expression by being told how to recreate it section by section across the face. The subjects reported feeling the experience of an emotion but were unsure of the specific one. They would unknowingly begin to relate to the feelings associated with the expression. Other teams of scientists replicated the experiment in other cultures, such as "the Minangkabau of Sumatra, Indonesia, who are fundamentalist Moslem and matrilineal." 14 These findings support Darwin's original theory that expressions are universal, and help strengthen Ekman's more recent study. Ekman and Friesen went even further to test this theory, hooking subjects to EEG machines and recording the changing activity as they recreated the described movements to create an expression. 15 This means that when a subject changes their expression, a deeper internal change can occur as well, making them associate with that expression.
13. Ekman, Paul. "Facial Expressions of Emotion: New Findings, New Questions." 35.
14. Ibid.
15. Ibid.
Since facial expressions can so easily be created by a person, there is the possibility that someone could be misleading, displaying a different emotion than their true feeling. This was seen originally by Duchenne in his electrical current experiments on facial muscles. One of the most used expressions is the smile, which appears in various forms. Duchenne's studies on the face were some of the earliest of their kind, and he was the first to notice the differences between smiling expressions. Ekman and Friesen, at the lead of the current studies, decided to name the smile of enjoyment the "Duchenne Smile" in honor of Guillaume Duchenne. 16 These varying smiles do have visible differences between them, allowing the viewer to distinguish between a joyous smile and a conniving smirk. Animators and riggers both need to understand how the face moves when creating any expression. Animators need to have character rigs capable of making these small differences to help create a variety of believable expressions. The stronger the connection an animator achieves with the audience, the more they are able to draw them into the film. Understanding how the face communicates is incredibly important to animators, since viewers will be looking to the faces of the characters to understand what they are feeling. Adding the extra creases to the eyes when smiling a "Duchenne Smile" communicates clearly that the mood of the character is pure joy.

Figure 2 "Duchenne Smile" Compared to Induced Smile
16. Ibid., 36.

The idea that facial muscle movements can communicate an inner feeling reminiscent of a specific expression has been thoroughly explored through multiple studies, but there is still the external communication to the viewer. Ulf Dimberg found that when subjects viewed imagery of emotional facial expressions, they would spontaneously react with similar expressions. He found that when viewing happy faces the viewer would "spontaneously evoke increased zygomatic major muscle activity", these being the muscles responsible for forming the lips into a smile. 17 Since these reactions are almost instantaneous, they can sometimes occur unnoticed or unconsciously. These basic reactions could almost be considered primal, given how visible the traits are in primates. Marina Ross of the University of Portsmouth found that when one orangutan made an open-mouth expression, the ape equivalent of smiling, another would respond in kind, usually in less than half a second. 18 This mimicking is more than just a random effect; it helps gather an understanding of the other's feelings. During the mimicking of these emotions the face is recreating the expression, unconsciously taking the viewer through the feelings. The movement of the facial muscles can trigger many of the same feelings that go along with the emotion. Bernhard Haslinger, a scientist at the Technical University of Munich, tested this by giving some of the subjects a shot to temporarily paralyze their facial muscles and connecting them all to EEG machines to scan for brain activity. Haslinger found that the subjects who were able to mimic the expressions in the pictures had more brain activity than those with the paralyzed facial muscles. 19 These tests further demonstrate the biological basis of expression and the importance of properly understanding its function in order to connect with another person.
17. Dimberg, Ulf, and Thunberg, Monika. "Unconscious Facial Reactions to Emotional Facial Expressions." Psychological Science (Wiley-Blackwell) 11, no. 1: 86. Psychological and Behavioral Sciences Collection, EBSCOhost (accessed April 28, 2014).
18. Zimmer, Carl. "The Brain." Discover 29, no. 11 (November 2008): 24-27. Academic Search Premier, EBSCOhost (accessed April 28, 2014).
19. Ibid.

When trying to better understand a person, a great deal can be read by fully reading their expression. Expressions have slight changes, and being able to notice them allows a
better understanding of what that person is thinking. When speaking, a person can repeat the same lines, but depending on how they present their facial expressions the spoken lines can come across with a variety of meanings. The same goes for a person who is silent: if just the facial expression changes, the person can give off an entirely different mood. Mimicking the expression allows the viewer to empathize with the person. 20 The idea of the communication of emotion is important for animators, as it gives them the power to control how the audience feels. An animator must "create in the audience a sense of empathy." 21 The audience will pick up on the expression, thus beginning the processing of the emotion. The rigger must understand the process as well, as they need to be able to create similar structures to mimic the facial movements. The more believable the movements, the easier the job for the animator.

20. Ibid.
21. Hooks, Ed. Acting for Animators. 3rd ed. (London: Routledge, 2011), 14.
22. Conniff, Richard. "What's Behind a Smile?." 46-53.
F.A.C.S & RELATING TO RIGGING
The Facial Action Coding System was developed in 1978 by Paul Ekman and Wallace Friesen as they continued their research into facial expressions. F.A.C.S. was created to categorize all the various facial muscle movements and the expressions that can be formed. The catalog provides "an anatomical guide to more than 3,000 meaningful facial expressions, including many different smiles." 22 The guide lays out in detail the muscles involved in any particular expression, such as the Duchenne smile, and describes the motion of the muscles involved in creating it. The two scientists created this tool to try to organize the vast number of movements in the face. All of their previous research on facial muscles and expressions aided in the creation of the tool, but was also a major factor in the need for its creation. This tool is used by CG animators and behavioral scientists alike "to know the exact movements the face can perform, and what muscles produce them." 23
FACS is used to describe the actions of the facial muscles and does not contain any emotion-specific details. "Hypotheses and inferences about the emotional meaning of facial actions is extrinsic to FACS." 24 The importance is in the movement of the muscles in the creation of the expressions. Animators and riggers should use FACS to help with believability in creating expressions. The rigger wants to give an animator the greatest amount of control possible without slowing down the animation process. If a rigger were to create 3,000 facial expressions, the animation process could get quite hectic and confusing for each specific small variance. Instead, riggers often break expressions down into sections, allowing the animator to mix and match and giving access to a greater variety of results.

Figure 3 FACS Example
When originally conceptualizing the Facial Action Coding System, Ekman and Friesen had come up with two ways of studying facial behavior. They knew there were the visible appearance changes made as the face creates the expression, and then the other "task is to make inferences about something underlying the facial behavior -- emotion, mood, traits, attitudes, personality, and the like". 25 Observing the facial movements is a much more measurable option, compared to the judgmental aspect of reading what they mean. FACS consists of the observable changes and goes into detail when describing an expression. For example, a smiling face would be described as a "face as having an upward, oblique movement of the lip corners". 26

23. Ambadar, Zara, Cohn, Jeffrey, and Ekman, Paul. "Observer-based measurement of facial expression with the Facial Action Coding System," The Handbook of Emotion Elicitation and Assessment. (Oxford University Press Series in Affective Science, J. A. Coan & J. B. Allen, ed., 2006), 203.
24. Ibid.
25. Ibid., 204.
The Facial Action Coding System relates closely to rigging. The description of the muscle layout gives a basis for what will need to be animated and how it needs to be able to move. The breakdown of FACS allows the rigger to add multiple options for the animator if the time is available. The rigger needs to be able to recreate the structure and movements in a timely manner to allow projects to be completed on time. Being able to replicate these movements is imperative to drawing the audience deeper into the film. FACS was the basis for Gollum's face in The Lord of the Rings, giving the animators complete control over manually creating expressions. 27 Gollum's rig originally took three years to create, and the facial rig consists of "over 10,000 shapes, or facial poses". 28 The facial rig consists of a blendshape system that is driven by 64 individual controllers, as well as control points that are driven by motion capture. Often the animators would completely delete the motion capture for the face and do only keyframe animation. 29 Gollum is a bit of an extreme example, since not everyone has the time to create the massive library of blendshapes that made up the facial rig, but it goes to show the possibilities of FACS in the CGI world. The basic approach of building up blendshapes to create a facial rig is very effective, even if it is not taken to the same level as Gollum's.

26. Ibid.
27. Hooks, Ed. Acting for Animators. 61.
28. Singer, Gregory. "The Two Towers: Face to Face With Gollum." Animation World Network. http://www.awn.com/animationworld/two-towers-face-face-gollum (accessed July 28, 2014).
29. Ibid.
DESIGNING THE RIG
When designing a rig to mimic the abilities of the face as categorized by the Facial Action Coding System, there are quite a few options available. One of the more common variations would be a blendshape-based rig, which allows for any variety of poses to be designed, as long as the poses are known beforehand. One of the main downfalls of blendshapes is the time required to build up the rig. Another viable option would be using the joint system built into Maya, allowing the animators to move things as they please. The fastest of the options would be using curves as the foundation of the face rig. Each of these rigging choices handles Duchenne and Ekman's findings in a different but effective manner. They are each viable options, but they all come with their own strengths and weaknesses.
Blendshapes, the most common of the three, require the rigger to build up a library of poses for each section of the face, separated into left and right sides. This library of blendshapes is one of the more time-consuming aspects, with some sections being broken down into multiple areas. The eyebrows will need to be broken down into at least two separate areas, allowing them to be posed individually and giving more variety to the poses from the blendshapes. 30 The rigger also has the ability to form the character's face into the exact desired pose, giving a more human feeling. This benefits the creation of the exact poses the face needs to achieve a more realistic animation. In both Duchenne's and Ekman's studies they noted specific changes in the face for pure expressions compared to those that are forced. An animator needs to have the option to control the variables for creating an expression such as the "Duchenne Smile". Blendshapes allow for manipulation of the face that often cannot occur with only joints. The most commonly forgotten area would be the crease from the zygomatic muscles, above the corner of the lips. 31 The library of poses that is built when using blendshapes allows the animator to easily reach desired poses, since they are prebuilt into the rig. The shortcoming of blendshapes appears when the animator wants to go beyond the poses that were built, or wants a slightly different pose than what was built. Gollum, however, showed the benefits of animating with blendshapes in an award-winning performance in The Lord of the Rings, though the riggers did have to create thousands of different poses to allow for that incredible performance.

30. Osipa, Jason. Stop Staring: Facial Modeling and Animation Done Right. 3rd ed. (Indianapolis, Ind.: Wiley Pub., 2010), 226.
31. Ibid., 32.
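As a rough illustration of this workflow, the sketch below (written against Maya's Python commands module; all object names such as head_base and smile_L are hypothetical placeholders) gathers sculpted left- and right-side target heads into a single blendshape node and mixes two of the resulting weights:

# Minimal sketch of building a blendshape library in Maya (maya.cmds).
# All object names (head_base, brow_up_L, smile_L, ...) are hypothetical placeholders.
from maya import cmds

base_mesh = 'head_base'
# Sculpted duplicates of the base head, split into left- and right-side targets.
targets = ['brow_up_L', 'brow_up_R', 'smile_L', 'smile_R', 'sneer_L', 'sneer_R']

# Targets are listed first and the base mesh last; frontOfChain makes the
# blendshape evaluate before any skinCluster so both can deform the head.
shape_node = cmds.blendShape(targets, base_mesh, name='face_shapes', frontOfChain=True)[0]

# Each target becomes a 0-1 weight on the node; mixing a full left smile with a
# partial right smile gives an asymmetrical pose.
cmds.setAttr(shape_node + '.smile_L', 1.0)
cmds.setAttr(shape_node + '.smile_R', 0.4)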
The controller layout for the blendshapes can be quite intrusive for the animator. The most common layout would be Jason Osipa's control style. Instead of moving controllers on the face to achieve poses, the animator manipulates sliders inside of boxes. It is usually a multitude of NURBS squares with smaller NURBS squares inside to pose separate areas of the face. Jason Osipa's control setup allows the use of multiple blendshapes with a single slider controller by manipulating the space inside its box. As the controller moves inside the box it activates the specific blendshape, allowing the animator to easily mix and match various expressions.
Figure 4 Slider Widget Window
Often when animating the face, the camera is close to the character's face, not allowing a full view of the controllers. Riggers will occasionally even create a separate camera to provide a straight-on shot of just the controllers. This allows the animator to better see the face while posing with the controllers in a separate window. The setup involved in this controller layout is also quite extensive. The controllers themselves can quickly be made with scripts from Stop Staring, but the artist must then connect all of the blendshapes to their corresponding controllers. Blendshapes are often controlled through attribute settings on controllers instead of by translating or rotating the controllers themselves. This allows for a cleaner control layout, but too many attributes can be just as difficult as too many controllers.
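The sketch below illustrates one way such a slider could be wired up in Maya's Python commands module; it is not Osipa's actual script, and the controller and target names (mouth_slider, face_shapes.smile_L, face_shapes.frown_L) are hypothetical. The controller is limited to sliding one unit up or down, and driven keys map that travel onto two opposing blendshape weights:

# Sketch of an Osipa-style slider: a small controller limited to sliding inside
# a box, with its translateY driving two opposing blendshape weights through
# set-driven keys. Names are hypothetical placeholders.
from maya import cmds

slider = cmds.circle(name='mouth_slider', normal=(0, 0, 1), radius=0.2)[0]

# Limit the controller so it can only travel one unit up or down inside its box.
cmds.transformLimits(slider, translationY=(-1, 1), enableTranslationY=(True, True))
cmds.transformLimits(slider, translationX=(0, 0), enableTranslationX=(True, True))

# translateY = +1 turns the smile target fully on; translateY = -1 the frown.
for target, direction in (('face_shapes.smile_L', 1), ('face_shapes.frown_L', -1)):
    cmds.setDrivenKeyframe(target, currentDriver=slider + '.translateY',
                           driverValue=0, value=0)
    cmds.setDrivenKeyframe(target, currentDriver=slider + '.translateY',
                           driverValue=direction, value=1)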
Building a joint-based face rig consists of joints, expressions, nodes, and constraints. Before starting to build up the face rig, the character's head should be duplicated to be used as a blendshape for the original head. This will allow for a layering of control on the final rig, giving more deformation to the skin. On the duplicated head the artist should begin to lay out joints in relation to the facial muscle groups to achieve the most believability when animating. When laying out the joints for the face there needs to be a balance between control and setup. If too many joints are added, animating the face could become very hectic, but if there are not enough joints the face may not deform properly to achieve the desired poses. The eyebrows are generally always the same, with three joints being sufficient for most poses. The bottom of the orbicularis oculi muscle, or the top cheek area, can be made up of three to five joints depending on the size of the face. These joints will be used for posing the general top cheek area, such as lifting the joints up and out to create a more uplifting look for a believable smile. The buccinator, or side cheek area, will be made up of a similar number of joints as the orbicularis oculi, depending on the size of the character's head. The zygomatic muscles can be made up of three to five joints as well, and will be used to create the crease in the face as the corner of the mouth raises. The area receiving the most deformation will be the lips, which will also need the greatest number of joints. Both the bottom and top lip will want at least five joints in between the corners of the lips. The top and bottom lips can share the joints in the corners of the lips.
Once the joint layout is done on the duplicated head, it is time to skin the joints to it. This is the main time-consuming part of joint-based rigging. The lips need to have enough overlapping weight to create smooth transitions between the joints, but the geometry must not collapse as two joints move closer together. Once the weights are painted on the duplicated character head, it is time to lay out the joints for the original. This is a much quicker process, since the original head is meant for movements over a larger area. There needs to be a joint for the jaw, as well as a joint for each corner of the mouth and joints for the top and bottom eyelids. The joints in the corners will be used to widen and shrink the mouth to help achieve the "O" shape. Painting skin weights for the original head can be easier but still requires attention. The two corner mouth joints need to be able to come in together with a smooth deformation of the skin.
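A condensed sketch of this joint placement and binding, using Maya's Python commands module, is shown below; the joint names and positions are hypothetical stand-ins for values that would be placed by eye on an actual character:

# Condensed sketch of the joint-based face rig layout and binding (maya.cmds).
# Joint names and positions are hypothetical; on a real character they would be
# placed by eye against the facial muscle groups described above.
from maya import cmds

face_mesh = 'head_duplicate'      # the duplicated head that receives the detail joints
face_joints = []

joint_positions = {
    'brow_inner_L': (1.0, 10.2, 4.5),
    'brow_mid_L':   (2.0, 10.4, 4.3),
    'brow_outer_L': (3.0, 10.2, 4.0),
    'cheek_upper_L': (2.5, 8.8, 4.2),
    'lip_corner_L':  (1.8, 7.5, 4.6),
}

for name, pos in joint_positions.items():
    cmds.select(clear=True)       # keep each joint out of the previously created hierarchy
    face_joints.append(cmds.joint(name=name, position=pos))

# Bind only these joints to the duplicated head; the weights, especially around
# the lips where influences overlap, are then refined by hand.
cmds.skinCluster(face_joints, face_mesh, toSelectedBones=True,
                 maximumInfluences=3, dropoffRate=4.0, name='face_skinCluster')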
When trying to recreate the various movements for expressions, the joint system can allow complete freedom for the animator. The rig should be able to create a multitude of expressions with the joints, leaving it to the animator to create any pose. This is beneficial for mimicking the movements and expressions that Ekman has documented in the Facial Action Coding System. The joint system allows a great deal of freedom, but cannot always give some of the more naturalistic touches such as creases. Characters need to be modeled with the intention that they will later be animated with creases, to help achieve a more believable effect.
During construction of a curve-based rig it is very important that the curves used to deform the character's head follow the facial muscle structure, to get the most believability out of the animation. Before laying out the curves, the original head mesh needs to be duplicated; this will allow for a layering of deformation similar to that of the joint-based face rig. Once the mesh has been duplicated, the curve layout will be similar to the joint placement, but the weight-painting stage is much faster when using curves. Curves should be placed on the eyebrows, the orbicularis oculi, the buccinator, the zygomatic, and the orbicularis oris muscles. 32 The orbicularis oculi curve should run across the top of the cheek starting from the nose. The buccinator curve should run along the side of the cheek, starting just past the zygomatic curve. The zygomatic curve should run along the crease of the mouth and cheek. These curves should be duplicated to allow more control and deformation. The zygomatic curves will be used to create a crease as the mouth comes up. The mouth will be made up of three curves: one for the top lip, one for the bottom lip, and one that circles the entire mouth. The large mouth curve will be used to drive the two lip curves to help shape the mouth as a whole while still allowing tweak control. After all of the curves have been placed they need to have clusters added, and then controllers to manipulate the clusters. The skinning process for curves can be done very quickly. First create a root joint at the base of the head and add a jaw joint; these will be used to start the skinning process on the head. Once the joints have been skinned, the curves can be added as extra influences. This means that ten curves will be added to the face, each with a paint weight value of 1.0 on the vertices closest to the curve and a quick falloff of weight. Once skinning is complete, the artist can begin setting up expressions to have the lip controllers follow the jaw joint so the mouth can open and close. The original character's head will not require much work. It can simply consist of a head joint and joints for the top and bottom eyelids. Curve-based rigging holds up as a viable option for a facial rig and has been present in films such as Delgo. 33

32. Ambadar, Zara, Cohn, Jeffrey, and Ekman, Paul. "Observer-based measurement of facial expression with the Facial Action Coding System," 206.
33. Grub, Warren. "Facial Animation Rig for Delgo." Creative Crash. http://www.creativecrash.com/maya/tutorials/character/c/facial-animation-rig-for-delgo (accessed April 25, 2014).
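As a sketch of how the base joints and curve influences described above might be assembled with Maya's Python commands module (the mesh, joint, and curve names are hypothetical, and the weights would still be painted by hand afterwards):

# Sketch of the curve-based skinning pass (maya.cmds): a root and jaw joint
# start the bind, then facial curves are added as geometry influences with
# locked zero weights so they can be painted in only where needed.
# Mesh, joint, and curve names are hypothetical.
from maya import cmds

face_mesh = 'head_duplicate'

cmds.select(clear=True)
root_jnt = cmds.joint(name='head_root', position=(0, 9, 0))
jaw_jnt = cmds.joint(name='jaw', position=(0, 8, 1))     # created as a child of head_root
skin = cmds.skinCluster(root_jnt, jaw_jnt, face_mesh, toSelectedBones=True,
                        name='face_skinCluster')[0]

facial_curves = ['brow_crv_L', 'orbOculi_crv_L', 'buccinator_crv_L',
                 'zygomatic_crv_L', 'lip_upper_crv']
for crv in facial_curves:
    # useGeometry makes the curve itself act as the influence shape; lockWeights
    # keeps its weights at zero until they are deliberately painted.
    cmds.skinCluster(skin, edit=True, addInfluence=crv,
                     useGeometry=True, weight=0.0, lockWeights=True)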
Each of these rigging options seems plausible for recreating movements similar to the findings of Ekman and Duchenne, but for the best results a combination of the three will be used. These scientists realized that small variances in the face can entirely change how a face is perceived. If an animator wants to truly deceive an audience with a performance, they need to create these small nuances. Although blendshapes would probably produce the best results, the ability to build an auto rig entirely around mesh-based blendshapes would be close to impossible. With a combination of joints and curves, the base of the rig is designed with the intention of creating movements similar to those documented by Paul Ekman in the Facial Action Coding System.
The curves are laid out along the muscles most important for creating expressions: the orbicularis oris, the zygomatics, the buccinators, the base of the orbicularis oculi, and the corrugator supercilii. The curves are used as a base layer of control that has joints and controllers beneath it as children. This allows the curves to be used to create blendshapes of expressions, much faster than the standard method of building them on a polygonal mesh. When creating these blendshapes there are only a few curve vertices that need to be posed to shape the face. The joints are driven by controllers, using offset joints parented to locators that are motion-path constrained to the curve. This layered setup allows for multiple levels of control. The curve's blendshapes move the locators to the desired position; at that point the controllers can be used to move the skin joints and finesse the pose. The offset joints above the controllers allow for any additional control that may be needed. The skin joints are what drive the face, as they offer the most control and versatility when controlling a mesh.
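The sketch below shows one unit of this layering, assembled with Maya's Python commands module: a locator pinned to a facial curve by a motion path, an offset joint and controller parented beneath it, and a skin joint constrained to the controller. The curve, controller, and joint names are hypothetical, and the attachment method (a motion path with its time keys removed) is only one possible approach:

# One unit of the layered control described above (maya.cmds): a locator pinned
# to a facial curve by a motion path, an offset joint and controller beneath it,
# and a skin joint constrained to the controller. All names are hypothetical.
from maya import cmds

def attach_control(curve, fraction, name):
    loc = cmds.spaceLocator(name=name + '_loc')[0]
    mp = cmds.pathAnimation(loc, curve=curve, fractionMode=True, follow=False)
    # pathAnimation keys uValue over the time range by default; clear the keys
    # and pin the locator to a fixed fraction along the curve instead.
    cmds.cutKey(mp, attribute='uValue', clear=True)
    cmds.setAttr(mp + '.uValue', fraction)

    cmds.select(clear=True)
    offset = cmds.joint(name=name + '_offsetJnt')        # extra layer of adjustment
    cmds.parent(offset, loc)
    ctrl = cmds.circle(name=name + '_ctrl', normal=(0, 0, 1), radius=0.2)[0]
    cmds.parent(ctrl, offset)

    cmds.select(clear=True)
    skin_jnt = cmds.joint(name=name + '_skinJnt')
    # The controller carries the skin joint, so the animator can finesse the pose
    # on top of whatever position the curve (and its blendshapes) provides.
    cmds.parentConstraint(ctrl, skin_jnt, maintainOffset=False)
    return loc, offset, ctrl, skin_jnt

# Three controls spaced along a hypothetical left zygomatic curve.
for i, frac in enumerate((0.2, 0.5, 0.8)):
    attach_control('zygomatic_crv_L', frac, 'zygomatic_L_%d' % i)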
Most of the sections of the face use a similar setup and do not require much special attention to group layout. This is different, however, when handling the mouth and the surrounding area, as the animator needs a great amount of flexibility there. The mouth is broken into two curves, one for the top and one for the bottom lip. The curves use the same setup as before, with locators, offset joints, controllers, and skin joints, but there are two additional controllers on the corners of the mouth. These two controllers are used to help pose the lips with group movement, as well as to store the attributes for activating the blendshapes. The corner mouth controls have an exponentially decreasing effect as their influence reaches the center mouth offset joints. This gives the animator the ability to pose the face using the blendshape from the curve to get to an expression, and then move the left or right corner mouth controllers and the individual controllers on the lips to further push the pose or create a variation of the expression. As the points on both ends of the top and bottom lip curves move, a multiply divide node is used to have the offset joints on the zygomatic curves move accordingly, creating the crease commonly seen in the face. The bottom lip locators are parented under the jaw joint to allow for opening and closing of the mouth, while still allowing the other methods of posing as well.
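A simplified sketch of that falloff, using Maya's Python commands module, is shown below. The multiplier values, and the controller and offset joint names, are hypothetical; the point is that each multiplyDivide node scales the corner controller's translation by a smaller factor toward the centre of the lip, with a further fraction passed to a zygomatic offset joint for the crease:

# Sketch of the corner-mouth falloff: the corner controller's translation is fed
# through multiplyDivide nodes with decreasing factors, so its effect fades
# toward the centre of the lip, and a portion is passed on to a zygomatic offset
# joint to form the crease. All node and controller names are hypothetical.
from maya import cmds

corner_ctrl = 'mouth_corner_L_ctrl'
# Lip offset joints ordered from the corner toward the centre of the mouth.
lip_offsets = ['lipUpper_L_3_offsetJnt', 'lipUpper_L_2_offsetJnt', 'lipUpper_L_1_offsetJnt']
zygo_offset = 'zygomatic_L_1_offsetJnt'

falloff = 1.0
for offset in lip_offsets:
    falloff *= 0.5                         # halve the influence at each joint toward the centre
    md = cmds.createNode('multiplyDivide', name=offset + '_falloff_md')
    cmds.setAttr(md + '.input2X', falloff)
    cmds.setAttr(md + '.input2Y', falloff)
    cmds.connectAttr(corner_ctrl + '.translateX', md + '.input1X')
    cmds.connectAttr(corner_ctrl + '.translateY', md + '.input1Y')
    cmds.connectAttr(md + '.outputX', offset + '.translateX')
    cmds.connectAttr(md + '.outputY', offset + '.translateY')

# A fraction of the corner's upward movement also lifts the zygomatic offset
# joint, creating the crease above the corner of the lips as the mouth raises.
crease_md = cmds.createNode('multiplyDivide', name='zygo_L_crease_md')
cmds.setAttr(crease_md + '.input2Y', 0.3)
cmds.connectAttr(corner_ctrl + '.translateY', crease_md + '.input1Y')
cmds.connectAttr(crease_md + '.outputY', zygo_offset + '.translateY')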
The eyelids are the last area to require a unique setup, to allow for a quick and easy open and close method. A joint is placed in the center of the eye with two child joints acting as controllers for rotating the offset groups above the locators associated with the corresponding eyelids. This allows for a quick way to open and close the eye, while still allowing the animator to finesse the pose as with the rest of the facial rig.
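A minimal sketch of that eyelid setup, again using Maya's Python commands module with hypothetical names, could look like the following; it assumes the eyelid locators already sit under offset groups whose pivots are at the centre of the eye:

# Minimal sketch of the eyelid setup (maya.cmds): a joint at the centre of the
# eye with two child joints whose rotation drives the offset groups above the
# eyelid locators. All names and positions are hypothetical.
from maya import cmds

cmds.select(clear=True)
eye_center = cmds.joint(name='eye_center_L', position=(1.5, 9.5, 3.5))
upper_lid = cmds.joint(name='eyelid_upper_L', position=(1.5, 9.5, 3.5))
cmds.select(eye_center)
lower_lid = cmds.joint(name='eyelid_lower_L', position=(1.5, 9.5, 3.5))

# Rotating a lid joint rotates the matching offset group, sweeping that eyelid's
# locators (and therefore its skin joints) around the eyeball.
for lid_jnt, offset_grp in ((upper_lid, 'eyelidUpper_L_loc_offset'),
                            (lower_lid, 'eyelidLower_L_loc_offset')):
    cmds.connectAttr(lid_jnt + '.rotateX', offset_grp + '.rotateX')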
CONFIGURING THE AUTO RIG
It is essential for the auto rig to be easy to set up and to offer a great amount of versatility and efficiency to the animator. Design choices for the rig were made to benefit the creation of the facial auto rig and to aid in recreating the movements documented in Ekman's Facial Action Coding System. The animator needs to communicate with the audience using the differences in expressions, noted as early as Duchenne and Darwin, to best evoke the desired emotions. The option of using blendshapes in the traditional sense was out of the question; although it is probably one of the best methods, it could not be used on a multitude of models. To best maximize the animator's options, a layering of control was designed into the setup, incorporating multiple rigging methods. The rig uses curves as the base, but has joints driving the mesh of the character. This allows predesigned expressions to be built with blendshapes, with final finessing done through the joints. This gives the animator the freedom to pose the face in any position, but also quick predefined poses built into the rig. The use of curves and joints can be scripted, allowing the user to shape the rig to better match the provided model. The UI appears simple, to aid in ease of use, but still holds a great amount of power.
To begin using the auto rig, a window appears with the option to create an asymmetrical or symmetrical proxy rig. The proxy rig is used only to position the curves that will be used as motion paths so that they match the specific character. The two options exist for the ease of the user, as with the symmetrical proxy rig only one side must be positioned. Once the front axis is selected, and the jaw joint, the left and right eye positions, and the head control have been stored, the control rig is ready to be created. The auto rig gives the option of a generic skinning if the user stores the head geometry and sets a skin drop-off, or, if they would prefer to paint the weights manually, the weights can be locked at 0.

Figure 5 Cloud Facial Auto Rig
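The snippet below is a minimal sketch of this kind of interface built with Maya's Python UI commands; it only illustrates the layout, with placeholder functions standing in for the actual proxy and control rig builders, and the window name cloudFAR is hypothetical:

# Minimal sketch of the auto rig window using Maya's Python UI commands.
# build_proxy and build_control_rig are placeholders for the real rig logic,
# and the window name 'cloudFAR' is hypothetical.
from maya import cmds

def build_proxy(symmetrical):
    # Placeholder: would create the positioning curves for the proxy rig.
    print('Building %s proxy rig' % ('symmetrical' if symmetrical else 'asymmetrical'))

def build_control_rig(*args):
    # Placeholder: would read the stored jaw joint, eye positions and head
    # control, build the control rig, and optionally apply a generic skinning.
    print('Building control rig')

if cmds.window('cloudFAR', exists=True):
    cmds.deleteUI('cloudFAR')
win = cmds.window('cloudFAR', title='Cloud Facial Auto Rig', widthHeight=(300, 180))
cmds.columnLayout(adjustableColumn=True, rowSpacing=6)
cmds.button(label='Create Symmetrical Proxy', command=lambda *_: build_proxy(True))
cmds.button(label='Create Asymmetrical Proxy', command=lambda *_: build_proxy(False))
cmds.floatSliderGrp('skinDropOff', label='Skin Drop Off', field=True,
                    minValue=0.1, maxValue=10.0, value=4.0)
cmds.button(label='Build Control Rig', command=build_control_rig)
cmds.showWindow(win)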
To aid in the ability to create blendshapes, an additional tab was created in the UI. This section gives the option to create blendshapes for the left and right sides of the mouth, as well as the eyebrows. First a name is entered into the field and a new curve is created. Large, brightly colored controllers appear to shape the curve into the specific pose, and depending on what the user needs, they can mirror the blendshape or connect it directly to the corner mouth controller. This creates a new attribute on the mouth controller with the ability to turn the blendshape off and on. The option is also left to create each side separately, to create a slight variation in the poses.
Figure 6 Cloud F.A.R - Blendshapes
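Behind such a tab, the blendshape creation and connection might be handled along the lines of the sketch below (Maya's Python commands module; the curve, pose, and controller names are hypothetical, and the target-index handling is simplified):

# Sketch of what the blendshape tab might do behind the scenes (maya.cmds):
# duplicate a facial curve as a named target, append it to a blendShape node on
# the base curve, and expose it as a 0-1 attribute on the corner mouth
# controller. All names are hypothetical and the target indexing is simplified.
from maya import cmds

def add_curve_blendshape(base_curve, pose_name, mouth_ctrl):
    # Duplicate the base curve; the user shapes this copy into the pose with the
    # temporary controllers before it is used as a target.
    target = cmds.duplicate(base_curve, name=pose_name + '_target')[0]

    existing = cmds.ls(base_curve + '_shapes', type='blendShape')
    if existing:
        node = existing[0]
        index = cmds.blendShape(node, query=True, weightCount=True)
        cmds.blendShape(node, edit=True, target=(base_curve, index, target, 1.0))
    else:
        node = cmds.blendShape(target, base_curve, name=base_curve + '_shapes')[0]

    # A 0-1 attribute on the mouth controller switches the pose off and on.
    cmds.addAttr(mouth_ctrl, longName=pose_name, attributeType='double',
                 minValue=0, maxValue=1, defaultValue=0, keyable=True)
    cmds.connectAttr(mouth_ctrl + '.' + pose_name, node + '.' + target)
    return target

# Example: a left-side smile pose stored on the left corner mouth controller.
add_curve_blendshape('lip_upper_crv', 'smile_L', 'mouth_corner_L_ctrl')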
The original design for the rig was greatly influenced by the need for it to be recreated as an auto rig. Duchenne's findings on the differences between expressions helped in recognizing the varying muscles involved in creating expressions. His electrical studies are also a basis for understanding the way the muscles move. Paul Ekman later categorized these movements into the Facial Action Coding System, and the Facial Action Coding System is essentially the basis of the auto rig. The auto rig should be able to replicate movements similar to those of F.A.C.S. and create the expressions as described by Ekman.
Figure 7 "Duchenne Smile" Compared to Induced Smile in CGI
Since this is an auto rig, the rig needs to be scriptable and to easily fit a variety of models. Darwin's studies of emotions in animals help provide evidence that the rig could possibly be used on animals with human-like features. Curves can easily be created and posed to fit a multitude of faces, human or animal, while the joints give a great deal of control to the animator. The UI is fairly simple, but still offers options to speed up the task of creating the rig where possible. Giving the UI the ability to easily create blendshapes takes away the possibility of user error when trying to create and connect them to controllers.
CONCLUSION
Duchenne's original studies with electric currents helped pave the way for facial expression research, resulting in FACS, a catalogue of facial movements that can be translated to rigging to allow for an enhancement in believability in animation. Darwin's push for an understanding of expressions gave a new perspective on their being biologically linked. He believed that these traits could be traced back to primates, and were possibly part of the evolutionary process. This link to animals lets riggers and animators know that a rig similar to that of the human face could easily be used for another animal, while still being able to read and understand its expressions. Darwin's studies led the way for Ekman and Friesen as they began to research the effects of expressions and what it is that allows for the various poses. This knowledge is key to understanding ways of connecting with the audience and keeping them drawn into the film. Ekman's studies further supported the idea of biologically based expressions with the creation of the Facial Action Coding System. A catalog of the facial muscle movements that create specific poses is a great tool when creating a rig. There needs to be an understanding of how the muscles move so the rig can replicate the movements. To create an auto rig with the versatility to be used on multiple models, it needed to be flexible in its layout of control. The answer to this was a combination of rigging techniques involving a layering of setup and control. The face is a wealth of information. The better one can understand the face, the better one can create a connection with another being.
BIBLIOGRAPHY
Ambadar, Zara, Cohn, Jeffrey, and Ekman, Paul. "Observer-based measurement of facial expression with the Facial Action Coding System." The Handbook of Emotion Elicitation and Assessment. Oxford University Press Series in Affective Science, J. A. Coan & J. B. Allen, ed., 2006.

Conniff, Richard. "What's Behind a Smile?." Smithsonian 38, no. 5 (August 2007): 46-53. Academic Search Premier, EBSCOhost (accessed April 27, 2014).

Dimberg, Ulf, and Thunberg, Monika. "Unconscious Facial Reactions to Emotional Facial Expressions." Psychological Science (Wiley-Blackwell) 11, no. 1: 86. Psychological and Behavioral Sciences Collection, EBSCOhost (accessed April 28, 2014).

Ekman, Paul. "Facial Expressions of Emotion: New Findings, New Questions." Psychological Science (Wiley-Blackwell) 3, no. 1 (January 1992): 34-38. Psychology and Behavioral Sciences Collection, EBSCOhost (accessed April 27, 2014).

Grub, Warren. "Facial Animation Rig for Delgo." Creative Crash. http://www.creativecrash.com/maya/tutorials/character/c/facial-animation-rig-for-delgo (accessed April 25, 2014).

Hooks, Ed. Acting for Animators. 3rd ed. London: Routledge, 2011.

Osipa, Jason. Stop Staring: Facial Modeling and Animation Done Right. 3rd ed. Indianapolis, Ind.: Wiley Pub., 2010.

Prodger, Phillip. An Annotated Catalogue of the Illustrations of Human and Animal Expression from the Collection of Charles Darwin: An Early Case of the Use of Photography in Scientific Research. Lewiston, N.Y.: Edwin Mellen Press, 1998.

Singer, Gregory. "The Two Towers: Face to Face With Gollum." Animation World Network. http://www.awn.com/animationworld/two-towers-face-face-gollum (accessed July 28, 2014).

Zimmer, Carl. "The Brain." Discover 29, no. 11 (November 2008): 24-27. Academic Search Premier, EBSCOhost (accessed April 28, 2014).