
HUMAN(OID);
by
Stephanie Sassine
Received and approved:
___________________________________________
Thesis Advisor – Liubo Borissov
Date _______________
___________________________________________
Chair – Peter Patchen
Date ______________
HUMAN(OID);
by
Stephanie Sassine
© 2014 Stephanie Sassine
A thesis
Submitted in partial fulfillment
of the requirements for the degree of
Master of Fine Arts
(Department of Digital Arts)
School of Art and Design
Pratt Institute
May 2014
HUMAN(OID);
by
Stephanie Sassine
ABSTRACT
“Human(oid);” is a subtle interactive installation that generates intellectual
uncertainty in the participants’ minds and engages them to connect with a
digital being on an emotional level, whether positively or negatively.
ACKNOWLEDGEMENTS
I would like first to express my gratitude to my thesis advisor and mentor, Professor
Liubomir Borissov, for his unlimited guidance, knowledge, and resourcefulness, in addition to the
exclusive exposure he gave me to cutting-edge techniques and software. I would also like to thank
both of my thesis proposal advisors, Professor and Assistant Chair Carla Gannis and Professor
Linda Lauro-Lazin, for their enthusiasm, encouragement, and belief in me. Furthermore, many
thanks to Professor Matthias Wolfel for kindly giving me access to an essential tool for my
project, and to Professor Peter Mackey and Chairperson Peter Patchen for always pushing me
forward. Last but not least, many thanks to my friends and loved ones who supported me through
thick and thin, most importantly my parents and my brother, who believed
in me no matter what life threw at us.
TABLE OF CONTENTS
ACKNOWLEDGEMENTS ...................................................................................................... iv
LIST OF ILLUSTRATIONS .................................................................................................... vi
CHAPTER
1. INTRODUCTION ........................................................................................................... 1
1.1 Origins ............................................................................................................. 1
1.2 The Uncanny Valley ........................................................................................ 4
1.3 Human(oid); ..................................................................................................... 6
2. INFLUENCES .......................................................................................................... 7
2.1 Gary Hill .................................................................................................... 7
2.2 Karolina Sobecka ....................................................................................... 8
3. ISSUES OF CONTENT ......................................................................................... 12
3.1 Site-Specific ............................................................................................. 12
3.2 The Experience ........................................................................................ 13
4. ISSUES OF AESTHETICS .................................................................................... 14
4.1 Character’s Design ................................................................................... 14
4.2 Character Animation ................................................................................ 17
5. TECHNICAL ISSUES ........................................................................................... 20
5.1 Platforms .................................................................................................. 20
5.2 Interactions .............................................................................................. 23
5.3 Installation ............................................................................................... 25
6. EXHIBITION ......................................................................................................... 29
CONCLUSION ........................................................................................................................ 31
BIBLIOGRAPHY AND REFERENCES ................................................................................ 32
DVD ……………………………………………………………………..….…. Pocket
LIST OF ILLUSTRATIONS
FIGURE
1. Georges and Stephanie Sassine ............................................................................................ 1
2. Wispy Walker (Left) - Life size Esmeralda (Right) ............................................................ 2
3. Stills from Black Hole Sun by Soundgarden ....................................................... 3
4. Masahiro Mori's Uncanny Valley (Edited by Jonathan Joly) 4 ............................................. 4
5. Gary Hill Tall Ships (1992) .................................................................................................. 8
6. Karolina Sobecka It's You .................................................................................................... 9
7. Karolina Sobecka Sniff ....................................................................................................... 10
8. Left: Project Pinocchio Design - Right: Humanoid ........................................................... 15
9. Left: Texture Map - Right: Close Up ................................................................................. 15
10. Left: Coraline 12 - Right: The Purge 14 ............................................................................. 16
11. Left: Full Body - Right: Close-Up ................................................................................... 17
12. Motion Capture Recording Session ................................................................................. 19
13. Motion Capture Data Analysis ......................................................................................... 19
14. Kinetic Lab View: Recording Gesture ............................................................................. 21
15. Kinetic View: Gesture recognition .................................................................................. 22
16. Unity3D Animator Component: Mecanim ...................................................................... 23
17. Flow Chart........................................................................................................................ 24
18. Hardware System ............................................................................................................. 25
19. DDA Gallery's Floor Plan (Close up) .............................................................................. 26
20. Problem: Bad Projection Transparency and Reflection ................................................... 27
21. Solution: Vellum roll inside the gallery ........................................................................... 28
22. Final Installation .............................................................................................................. 28
23. Character staring at participant ........................................................................................ 29
24. Participant interacting with the humanoid ....................................................................... 29
25. Member of the audience interacting ................................................................................. 30
26. Character following a participant ..................................................................................... 30
CHAPTER 1
INTRODUCTION
While first brainstorming possibilities for a thesis project, all I knew was that I wanted
to create a memorable experience, one that would leave an indelible memory of the art
piece by drawing out an emotion in the participants’ minds. Consequently, I began thinking
about what my own subconscious had retained from the experiences of my childhood, and
from there my journey commenced.
1.1 Origins
It all started with a photograph, and a child’s extensive imagination. I used to wake up
every morning to get ready for school, the dead silence of the morning ringing in my ears.
My “innocent” large-scale portrait (Figure 1) would stand there staring at me, smiling and
toying with my imagination.
Figure 1. Georges and Stephanie Sassine
This might look like a normal portrait of two children, but this particular portrait
of that little girl forced me to look away, especially at that time of day. I used to
watch too many scary movies, and while I continue to love them, I developed a real
discomfort when it came to fixed smiles and gazes on inanimate representations of
human beings. A couple of years later, the newest trend in toys became life-sized
dolls (Figure 2), which were fun to play with during the day; but when night came,
having “someone” sitting at the foot of your bed, staring at you and smiling in the dark, was
certainly a surreal experience.
Figure 2. Wispy Walker (Left) - Life size Esmeralda (Right)
I later forgot about all of it, until I came across a certain music video that terrified me:
“Black Hole Sun” by Soundgarden (Figure 3). I did not understand the reason behind my
discomfort at the time; all I knew was that the characters in the video terrified me. It affected
me so much that I still have a vivid memory of the first time I saw it, even 15 years later.
Figure 3. Stills from Black Hole Sun by Soundgarden
Reminiscing about these feelings drove me to investigate and learn more about what
was causing them. I started looking into the fear of dolls, so-called pediophobia. It turned out
to be a type of automatonophobia, the fear of humanoid figures. Dolls and other
inanimate representations of human beings are created to “mimic people, sometimes
eerily so.” (Latta) 1 Psychoanalyst Sigmund Freud claimed in his 1919 essay The
Uncanny that children fantasize about dolls coming to life, 2 but when that fantasy turns
real, it becomes terrifying. This idea has been exploited abundantly in pop culture, and “a
whole genre of horror movies were created based on this premise.” (Latta) 1 It turned out
that feeling anxious around inanimate figures that appear too humanlike is universal, and
the term coined to describe such figures is “uncanny”.
1.2 The Uncanny Valley
Psychologist Ernst Jentsch theorized that “uncomfortable or uncanny feelings arise when
there is an intellectual uncertainty about whether an object is alive or not.” 3 He defines it
in his 1906 essay On the Psychology of the Uncanny as an emotion that surfaces when
“an object that one knows to be inanimate resembles a living being enough to generate
confusion about its nature.” 3 Freud took Jentsch’s theory further by stating that the
uncanny was not only weird and mysterious, but “strangely familiar.” 2
Japanese roboticist Masahiro Mori was the one who devised the model used to gauge
the uncanny in human representations. He coined the term “uncanny valley”
to describe the region where human representations drop from generating positive to
negative feelings in the viewer (Figure 4).
Figure 4. Masahiro Mori's Uncanny Valley (Edited by Jonathan Joly) 4
The two axes represent familiarity and similarity. As a character approaches human
likeness in aesthetics and motion, it becomes more familiar, and we like it more. But this
trend only holds to a certain point, at which we reach the uncanny valley. There, our
brain gets confused: where we previously saw a human-looking robot or character, we now see a
human with non-human characteristics, which is simply unsettling. In the field of 3D
animation, the uncanny valley is supposed to be avoided if one’s goal is to create a
successful and likeable product for most audiences, which is why Pixar Animation
Studios stays away from photorealistic film productions.
Looking back at history, we notice that we have always tried to create things that
behave or look as human as possible. We seem fascinated with recreating the
human form through art and technology. This obsession with God-defying digital
procreation has been producing results so successful that they fall directly into the pit
of the uncanny valley. This is publicly showcased in humanoid robots, hologram
projections, and the next generation of 3D characters in movies and games. It has been
the subject of many debates over the decades, especially since technology is advancing at
such a speed that the lines between the virtual and the real are being blurred. Movie
director Spike Jonze made a film entitled Her (2013), which touches on many themes,
but the idea that struck me the most was how uncanny it felt.
[The movie is about a man who] decides to purchase the new OS1, which is
advertised as the world’s first artificially intelligent operating system. ‘It's not just
an operating system, it’s a consciousness,’ the ad states. [He] quickly finds
himself drawn in with Samantha, the voice behind his OS1. As they start
spending time together they grow closer and closer and eventually find
themselves in love. 5
Every time I tried to empathize with the characters, I had to adapt my mind to
think of the operating system as a real living being with feelings; just as quickly, my mind
would reject the idea, and I felt uneasy for even having had that thought.
Many examples in the industry of 3D animated movies and games fall into the uncanny
category, notably The Polar Express (2004), Heavy Rain (2010), and L.A. Noire (2011). Today,
the term “next-gen” is common, describing the new generation of hyper-realistic 3D
visuals in terms of experiences, characters, and environments. Konami Digital
Entertainment technology director Julien Merceron declared in his interview with Games
Industry International that technological advancement is helping some games reach the
edge of the uncanny valley, but that is not where they want to settle.
[The uncanny valley] will always be a problem [with developers]. […] You can’t
have these stunning graphics while characters are acting funny on the screen. […]
As soon as we ramp up the quality on graphics, this level of quality on facial
animations won’t be good enough. […] The problem is that as rendering quality
will go up, new problems will surface. The quality of the facial and body
animations and the acting won’t be good enough. So that is why as you evolve,
you have to upgrade [and balance out] your physics, rendering and animations. 6
1.3 Human(oid);
Departing from these notions, I decided to embrace the uncanny valley and slip it into our
daily routine, exposing the audience to it and imposing it on them. What if this experience travelled
from the screens of our electronic devices, which we have control over, into our daily
environment, where we don’t expect it? With these ideas in mind, I created Human(oid);,
a subtle interactive installation that acts like a highly interactive augmented reality piece,
in which a virtual character seemingly invades and roams the gallery space, staring into
the hallway, looking for people to observe, judge, track, and interact with.
CHAPTER 2
INFLUENCES
We are used to seeing human portraits in galleries, but these are usually inanimate
images. Our interaction with them is essentially one-sided; they can’t respond or talk
back. As Kathy Cleland mentions in her publication, “video art and video installations
have already gone a long way in bringing the human figure to life in the gallery,
animating the static portrait and creating new interacting experiences for audiences. Now
this trend is being taken even further by interactive digital technologies, which are
enabling some interesting new possibilities for audience interactions and further
challenging the ontological status of the art object.” 7
2.1 Gary Hill
American artist Gary Hill’s Tall Ships video installation (1992) was a pioneering
effort in reversing the traditional hierarchy between audience and artwork. The
audience enters a dark corridor lined with sixteen evenly spaced projections of pale black-and-white
figures, their bright white faces providing the only source of light. As you
walk, the figure nearest to you leaves the shadows and walks towards you until its
image stands life-size directly in front of you (Figure 5). “The confrontation is mute, but
profoundly affecting. The men, women and children appear to want to communicate, they
hover uncertainly in front of you as if they are about to speak, but then they turn away,
and recede back into the shadows taking their secrets with them.” (Cleland) 7
The feeling this artwork creates and the figures’ behaviors influenced my own
piece, particularly my character’s persona and behaviors. My humanoid is also mute,
mysterious, and lonely, and tries to communicate with the audience, and both pieces are about
an encounter with a stranger. Furthermore, with regard to the audience’s feelings and
reactions to the piece, both works share the same outcome, which Hill articulates: “whether
the viewer finds it dreamlike, playful, or frightening is a reflection of his or her state of
mind and sense of ease or discomfort when encountering others. Notably, the figures in
Tall Ships do not speak; silence heightens our awareness of body language and may evoke
a deeper meaning beyond words.” 8
Figure 5. Gary Hill Tall Ships (1992)
2.2 Karolina Sobecka
Karolina Sobecka is my main artistic influence for this installation. A couple of her works
were particularly influential to my piece.
The first one, an interactive storefront window projection entitled It’s You,
“explores the mechanisms of public behaviors and the line between the real and
constructed social actions.” (Sobecka, It's You) 9 A crowd made of greyish, stylized,
polygonal 3D human figures is rear-projected onto large storefront windows (Figure 6).
[They seem to be gathered around something that is unclear from the
pedestrian’s view.] When the pedestrian enters the interaction area in
front of the windows, the figures turn their heads glancing at him. If the
pedestrian stops […] behind them, as if to look over their shoulders, they
step aside to allow him a view onto what they’re looking at. The
pedestrian can now see part of the unfolding scene, and he obscures the
view for the other pedestrians; he’s become part of the crowd. […] After
the pedestrian has been in the interaction area for a period of time, the
figures will turn their attention to him. The viewer becomes the
performer. If he does something to entertain his viewers, the […] figures
will react by clapping, applauding the performance and clarifying their
role of audience. 9
Figure 6. Karolina Sobecka It's You
Sobecka’s artwork inspired the feeling of being observed, judged, and
stared at in my own piece. When the projected figures glance back at you, it feels a little bizarre. The
moment they turn around completely to watch you perform, they actually look hostile.
Having this big crowd of life-sized human figures staring at you with blank expressions
on their faces is uncanny, until they react positively to your performance. Similarly, in
my piece, I made my life-sized character stare directly at the participant, his head and
eyes following him as he moves. Having the virtual character look directly into your eyes
is indeed an eerie experience. It’s You was created using OpenTSPS, OpenFrameworks,
Blender3d, and the Unity3d Game Engine. The parameters being tracked are the participant’s
presence, location, and time elapsed.
The second notable piece by Sobecka is Sniff, an interactive projection of a greyish,
stylized, polygonal 3D animated dog on a storefront window. The visual elements of
both It’s You and Sniff receive the same aesthetic treatment, which creates a nice series of related
works. The “dog” responds to the participant’s movements and gestures, the ones you would usually
make to interact with a real dog. The dog barks, which adds a great touch of realism, and is
effectively reactive (Figure 7).
Figure 7. Karolina Sobecka Sniff
[When the viewer walks by the windows], his movements and gestures are
tracked by a computer vision system. [The dog] follows the viewer, dynamically
responds to his gestures, and changes his behavior based on the state of
engagement with the viewer. 10
Sniff was made using OpenFrameworks, Blender3d, and the Unity3d Game Engine. The
parameters being tracked are the user’s location and the type of movement, classified as
“friendly” or “aggressive” by a simple algorithm that calculates the speed of the user’s
movements.
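Reading that description literally, a minimal sketch of such a speed-based classification might look like the following Unity C# fragment. This is my own illustration rather than Sobecka’s actual code; the tracked joint, the threshold value, and the class name are assumptions.

using UnityEngine;

// Hypothetical sketch: classify a tracked user's movement as "friendly" or
// "aggressive" from the speed of a tracked joint, as described above.
public class MovementClassifier : MonoBehaviour
{
    public Transform trackedJoint;          // e.g. a hand joint fed by the vision system
    public float aggressiveSpeed = 2.5f;    // metres per second; illustrative threshold

    private Vector3 lastPosition;

    void Start()
    {
        lastPosition = trackedJoint.position;
    }

    void Update()
    {
        // Speed = distance travelled since the last frame divided by the frame time.
        float speed = (trackedJoint.position - lastPosition).magnitude / Time.deltaTime;
        lastPosition = trackedJoint.position;

        bool aggressive = speed > aggressiveSpeed;
        // The character (the dog, in Sniff) would switch behaviours here.
        Debug.Log(aggressive ? "aggressive movement" : "friendly movement");
    }
}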
I was inspired by these two artworks while creating my piece, and this is
reflected in its characteristics: a 3D model projected on treated glass, the
general idea of interacting with participants, and independent behaviors. It is likewise put
together in the Unity3d Game Engine, which renders in real time and allows
behaviors to change dynamically based on incoming data. On a different note, I think of
my piece as a kind of gestural conversational bot, where movements and gestures
replace speech. All of these pieces share this idea at their roots; they are all
extensions of Joseph Weizenbaum’s early Eliza chatbot. 11
CHAPTER 3
ISSUES OF CONTENT
3.1 Site-Specific
During my time at Pratt Institute, I noticed that Pratt students, especially in the
Department of Digital Arts, myself included, were becoming real New Yorkers in terms
of stress, rush, and always being busy. Our gallery space is placed strategically in the
center of the fourth floor of Myrtle Hall, separating two wings full of classes. Usually, all
works and exhibitions are placed inside the gallery space, which gives the audience the
power to choose whether to make time in their busy schedules to enter the
gallery and view the artworks. I noticed that time is a luxury few graduate students
can afford. We don’t spend enough time in front of the artworks in the gallery. We do a
quick tour the first time we see an exhibition, and rarely come back for more. Since my exhibition in
the Department of Digital Arts gallery meant that my audience would be mainly Pratt
DDA students and faculty, I decided to change that power dynamic and break the
physical barrier between the two spaces, liberating my piece from the confines of the
gallery. I wanted to impose my piece on the audience and place it in their daily routines,
by placing the interactive area in the hallway instead of the inner space of the
gallery.
Since the gallery space is separated from the hallway by floor-to-ceiling glass
panels, I decided to use them as a canvas for my piece. I placed my projector inside the
gallery, projecting the image onto the glass panels. By doing so, my character appears
on the glass, visible to and communicating with people passing in the hallway.
If I were to show my piece somewhere else, I would recreate this kind of setting on
street-facing window panels, similar to Karolina Sobecka’s artworks mentioned earlier. These
kinds of settings add a social component to my piece. I essentially want my character to
sneak up on my audience where they don’t expect him to be. He becomes one of them,
shares their social setting, shows up in their routine, and slithers his way into their lives.
3.2 The Experience
The audience is composed of pedestrians or bystanders who happen to be in, or pass through, this
area of the hallway. The unprepared participants are startled by a presence in the
corner of their eye: someone is watching. The human-sized character is in the gallery,
looking out into the hallway, spying on the audience. The character is just there, watching
the scenes in the hallway unfold while looking for people of interest to him. He chooses
to walk towards different participants and tries to get their attention. If no one seems
interesting to him, he will walk out of view, only walking back to a participant
if the latter insists. Once you are selected, the character will let you know that he sees
you. He will turn his gaze towards you, and if you move he will turn his head and eyes
accordingly. From this point on, you are interesting to him, and he will try his best to
communicate with you. He will track your movements, follow you around, and look for
gestures that fall within his understanding so he can respond to you. If you leave before he
decides your time is up, he will be offended and aggressively call you a loser. On the
other hand, if you are too “boring” to him, he will simply ignore you and move on, looking
for someone else. He is constantly on the hunt for someone better, someone more
interesting. During the exhibition, I found that the power dynamic between artwork and
participant completely flipped: the participants were the ones actively trying to please the
character, attract his attention, and communicate with him.
CHAPTER 4
ISSUES OF AESTHETICS
4.1 Character’s Design
My first inspirations for the character’s design came from my research into
the uncanny valley, which turned up mostly uncanny humanoid robots and
characters. This set the tone of the design: the character should look human, and since
everything started with a portrait of myself, I based the character’s face on my own. I
wanted him to have my face and facial structure, but with a bald head and gender-neutral
to masculine bodily features. I wanted to blur his gender, and succeeded in doing so:
during the exhibition, different participants in the audience referred to the character
with different pronouns.
I gave him a rough design base using Autodesk’s Project Pinocchio tool:
a female face structurally close to mine, combined with a male body. I
then roughly textured his face based on pictures of my own face, using
Adobe Photoshop. I used Autodesk Mudbox and ZBrush to sculpt his face to match
my own facial structure (Figure 8) and to finalize his general shape by giving depth to his
body and clothes. Finally, I finished the textures for his face and body,
including the normal map and the light map. The texture map turned out to be incredibly
uncanny, especially the facial part, since it is basically my own face flattened onto a flat
canvas (Figure 9). This whole process was extremely weird and somehow therapeutic as
a personal experience. It forced me to look at my face in detail and recreate it. It
reminded me of artist Amber Hawk Swanson and her RealDoll clone (realdolls.com).
Figure 8. Left: Project Pinocchio Design - Right: Humanoid
Figure 9. Left: Texture Map - Right: Close Up
Subsequently, I took a step back and returned to the origins of my concept and the
reasons behind my childhood fears. The common thread among the figures that scared
me was their fixed stare and smile. I found that characters’ eyes play an important role
in making them feel uncanny. The eyes are the windows to the soul, so having dead or
bizarre eyes makes a character uncanny. Thinking of that reminded me of the stop-motion
movie Coraline (2009) by director Henry Selick 12, in which the plot turns the
characters’ eyes into buttons, giving the movie a strange aspect. On the other hand, a
smile is supposed to be welcoming and friendly, but fixing it in place changes everything.
We depend on people’s facial expressions to know how to respond to a situation, and to
empathize. Just as with clowns, having no expression, or a fixed one, violates the expected
norms of human behavior. 13 In other words, a lack of facial expression dehumanizes the
character and makes it look psychotic. An example is the character in the horror movie
The Purge (2013), directed by James DeMonaco 14 (Figure 10).
Figure 10. Left: Coraline 12 - Right: The Purge 14
Consequently, I decided to give him a wide smile and psychotic eyes as well. These
attributes finally got him to where I wanted him to be. He was my human Chessur (Alice
in Wonderland’s smiling cat) 15, my Pinocchio who wanted to be a real boy 16, or even
my mini-me who wanted to be me (Figure 11). I re-sculpted his teeth as well and gave
them some of the imperfections found in my own set. Since I was not using any other
expressions or animating any muscle in his face in real time, I did not need the blend shapes
and their additional file weight. Therefore, I baked the final expression, removed the blend
shapes, and re-attached the rig and weights. He was finally ready to come alive.
Figure 11. Left: Full Body - Right: Close-Up
4.2 Character Animation
Since my character is silent and communicates only through gestures, I was inspired by
pantomimes, both in their way of communicating and in their quirky personality.
Pantomimes are usually dressed in stripes, which is why I made my character wear a
dark blue sweater with white stripes. I chose to complete his look with blue jeans and
black sneakers so that he would blend into the crowd, dressed like an average
nobody. I tried as much as possible to avoid having him identified with a specific
character stereotype or archetype that wouldn’t align with my intentions.
Since the character is almost an extension of myself, I had to be both in his shoes
and in the audience’s in order to set his behaviors and prepare the animations. I started
my process by thinking as a participant, and tried to set down a general outline of
how a participant might react and behave around the character. These possibilities
became the parameters for interaction, and I animated my character reacting to these
conditions. Subsequently, I put myself in my character’s shoes, got into character, and
acquired his personality. With the audience’s reactions in mind, I physically acted out my
character’s behaviors and captured my body’s motions simultaneously through two
Microsoft Kinect camera sensors placed at a 90-degree angle to each other
(Figure 12). I performed and captured my motion with the iPi Soft Markerless Motion
Capture software, a tool that tracks 3D human body motions and produces 3D animations. I
recorded my skeleton data and motion with the iPi Recorder, then analyzed it and did
rough cleanup passes on that data in iPi Mocap Studio (Figure 13). Afterwards, I
exported this animation data to Autodesk Maya and cleaned up the animations after
transferring the motion to my character. I made animation clips and got all my assets
ready to use in the Unity3d Game Engine.
Figure 12. Motion Capture Recording Session
Figure 13. Motion Capture Data Analysis
CHAPTER 5
TECHNICAL ISSUES
5.1 Platforms
I used the Microsoft Kinect camera as my primary sensor for collecting user data. It is a
motion-sensing controller capable of recognizing users, extracting the main joint data of
their skeletons, and capturing the environment’s depth data. I experimented with
Max/MSP and Jitter, but since I was dealing with a 3D model’s real-time animations and
dynamic transitions, that platform wasn’t ideal for the final output. Therefore, I transitioned
to the Unity3d Game Engine, whose new animation system, Mecanim, was
a great fit for what I was trying to accomplish. It enables you to transition
effectively between animation clips, calculating the necessary joint rotations for a smooth
transition between clips. It also gives you the ability to create conditions and triggers for
certain animations to play or stop. I first wanted to use the Zigfu plug-in to access the Kinect’s
capabilities directly in Unity3D. I experimented with it, and gave the participant the role
of puppet master over my character: the character would follow the participant’s
gestures in real time, embodying the participant’s skeleton. I wanted to have this as part
of the interaction for a certain amount of time, but several reasons made me drop the idea,
the most important being that the character lost his uncanny edge and his independence,
and the general effect did not align with my intentions.
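As a rough illustration of how Mecanim is driven from a script, the sketch below sets a Boolean parameter and fires a trigger on an Animator component. The parameter names are hypothetical and not necessarily the ones used in the final piece.

using UnityEngine;

// Minimal sketch: driving Mecanim transitions from a C# script.
// Parameter names ("UserPresent", "Knock") are illustrative only.
public class AnimatorDriver : MonoBehaviour
{
    private Animator animator;

    void Start()
    {
        animator = GetComponent<Animator>();
    }

    public void SetUserPresent(bool present)
    {
        // A Boolean condition on a transition in the Animator Controller.
        animator.SetBool("UserPresent", present);
    }

    public void Knock()
    {
        // A trigger condition fires a one-off transition to the knocking clip.
        animator.SetTrigger("Knock");
    }
}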
I continued experimenting and researching, and came across a wonderful tool,
Kinetic Space 17, an open-source tool developed by Matthias Wolfel on the Processing
platform, which uses the Kinect sensor to track, record, and recognize gestures. Data is
sent through the OSC protocol to other platforms. Its XML file can be edited to send the
user’s x, y, and z location in real time, as well as user events, among other parameters.
In the interface, you can record up to 50 gestures and edit the OSC messages you want to
send for each gesture (Figure 14). This is a great tool for gesture recognition, and was
exactly what I needed. For my exhibition, I recorded 8 gestures the character would
respond to. I recorded several versions of each gesture (Figure 15), so that the
character would recognize them more reliably, and instructed each version to output an OSC message
composed of an integer representing the user, followed by another integer representing
the pose number. I then sent the information to Unity3D, where I parsed it by including the
scripts needed to receive OSC messages. Since I want my character to interact with
only one person at a time, I specified in his Unity C# script that he should respond only to User 0,
the integer assigned to the first user he chooses.
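The sketch below shows the general shape of that filtering logic, assuming the OSC-receiving scripts hand a callback the two integers from each Kinetic Space message. The callback name, field names, and trigger names are illustrative assumptions, not the production code.

using UnityEngine;

// Sketch of the gesture-message handling described above, assuming some OSC
// receiver script delivers (userId, poseNumber) pairs. Names are illustrative.
public class GestureReceiver : MonoBehaviour
{
    public int trackedUser = 0;   // the character only listens to the first user he chose

    private Animator animator;

    void Start()
    {
        animator = GetComponent<Animator>();
    }

    // Called by the OSC-receiving script whenever a Kinetic Space message arrives.
    public void OnGestureMessage(int userId, int poseNumber)
    {
        if (userId != trackedUser)
            return;   // ignore everyone except the chosen participant

        // Each recognized pose maps to a Mecanim trigger (names are hypothetical).
        animator.SetTrigger("Gesture" + poseNumber);
    }
}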
Figure 14. Kinetic Lab View: Recording Gesture
Figure 15. Kinetic View: Gesture recognition
I organized the events received into three general areas: User Presence, User Location, and
Gestures. I attached a C# script to my character’s game object and sent these OSC events
to the character script for analysis. I also sent a separate copy of the OSC location
messages to another game object representing the user’s location. I then made that game
object physically acquire the user’s position, in order to use it in one part of the
interaction: when my character rotates his eyes and head to stare continuously at the user.
Technically, the character follows that game object’s position, and hence the
position of the user. The issue I was having with this interaction was that the animations
continuously coming from the animator component would override the joint rotations
being applied to the head and eyes. I remedied that issue by choosing to
trigger this interaction only when a participant is first chosen, so that it would be the first
reaction my character has. This turned out even better, since it gave the participant a
direct visual cue as to exactly whom the character was tracking. I force the character to deactivate his
animator component the moment he selects a participant, which clears the way
for the head and eyes to rotate according to the user’s X position. This staring
competition is performed visually by the character freezing on the animation
frame he was on at the moment he saw the participant, while his head and eyes stare at him.
I set this to go on for 30 seconds, and the animator component reactivates either when the
30 seconds are up or when the character recognizes a gesture (Figure 16).
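A condensed sketch of that staring behaviour is given below, under the assumption that a separate game object (here called userProxy) is kept at the user’s reported position by the OSC location messages. The field names, smoothing factor, and exact rotations applied are simplified illustrations rather than the production code.

using UnityEngine;

// Sketch of the staring interaction: when a participant is chosen, the Animator
// is disabled so that head and eye rotations are not overridden, and both
// follow the user's position for up to 30 seconds. Names are illustrative.
public class StareAtUser : MonoBehaviour
{
    public Transform userProxy;     // game object moved by the OSC location messages
    public Transform head;
    public Transform leftEye;
    public Transform rightEye;
    public float stareDuration = 30f;

    private Animator animator;
    private float stareStart;

    void Start()
    {
        animator = GetComponent<Animator>();
    }

    public void OnParticipantChosen()
    {
        animator.enabled = false;          // freeze the body on the current frame
        stareStart = Time.time;
    }

    public void OnGestureRecognized()
    {
        EndStare();
    }

    void LateUpdate()
    {
        if (animator.enabled) return;      // not currently staring

        // Aim the head and eyes at the user's position (driven by the OSC data).
        Quaternion look = Quaternion.LookRotation(userProxy.position - head.position);
        head.rotation = Quaternion.Slerp(head.rotation, look, 5f * Time.deltaTime);
        leftEye.rotation = look;
        rightEye.rotation = look;

        if (Time.time - stareStart > stareDuration)
            EndStare();                    // time is up, resume normal animation
    }

    void EndStare()
    {
        animator.enabled = true;
    }
}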
Figure 16. Unity3D Animator Component: Mecanim
5.2 Interactions
The installation’s storyline is different for each user, depending on the character’s
“interest” in each user and on the participant’s patience and insistence, which makes the
experience much more dynamic. Generally speaking, I will list the stages of interaction
and what happens during each one (Figure 17).
Figure 17. Flow Chart
When no one is there, and as long as the character has not chosen a participant,
the character plays the idle animation, overlaid with six different animation
clips chosen randomly. He puts his hands on the glass and looks around the hallway,
looking to his left, looking to his right, yawning, and staring. He also follows random
users across the two window panels. When he finally chooses a “victim,” he stops his
movements and stares into their soul. The animations are essentially paused, and all he
moves are his eyes and his head, his gaze turning to follow the participant. At this
point, either the participant performs a gesture that the character recognizes, or 30 seconds go
by and the character breaks his stare, waiting for the participant to interest him. He tries
to prompt the participant to do something by knocking on the window. He also
knocks on the window again if another 30 seconds go by and the participant still hasn’t made a
gesture he understands. If he understands a gesture, he responds to it. If the participant
walks, he follows him or her; and if the participant walks away, or the character loses track of
him or her, the character “calls” the participant a loser and moves on (Figure 17). To play
each animation clip, I used triggers to prompt a response to the awaited gesture
(or lack thereof), and Boolean states for the remaining conditions.
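As a rough sketch of how the flow in Figure 17 could map onto code, the fragment below mirrors the main states with an enum and drives the Animator with triggers of the kind described above. The state names, parameter names, and timing logic are illustrative assumptions, not a reproduction of the actual scripts.

using UnityEngine;

// Sketch of the overall interaction flow from Figure 17, with illustrative names.
public class HumanoidBehaviour : MonoBehaviour
{
    enum State { Idle, Staring, WaitingForGesture, Responding }

    public float patience = 30f;          // seconds before knocking (again)

    private Animator animator;
    private State state = State.Idle;
    private float stateEntered;

    void Start()
    {
        animator = GetComponent<Animator>();
    }

    void SetState(State next)
    {
        state = next;
        stateEntered = Time.time;
    }

    public void OnParticipantChosen() { SetState(State.Staring); }

    public void OnParticipantLost()
    {
        animator.SetTrigger("Loser");     // the participant walked away
        SetState(State.Idle);
    }

    public void OnGestureRecognized(int pose)
    {
        animator.SetTrigger("Gesture" + pose);
        SetState(State.Responding);
    }

    void Update()
    {
        // After 30 seconds without a recognized gesture, knock on the window;
        // re-entering WaitingForGesture resets the timer so he knocks again later.
        if ((state == State.Staring || state == State.WaitingForGesture) &&
            Time.time - stateEntered > patience)
        {
            animator.SetTrigger("Knock");
            SetState(State.WaitingForGesture);
        }
    }
}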
5.3 Installation
When the day came to install my piece in the gallery, I ran into many problems that I
could not have foreseen until that day, but I dealt with them successfully.
I ended up using a Mac Mini (with mouse and keyboard), a ViewSonic Pro 9500
projector, an HDMI cable to connect them, two extension power cables, a
Microsoft Kinect camera, and a USB extension cable (Figure 18).
Figure 18. Hardware System
After testing and deciding on the best distance and tilt for the projector, I
formed a dark room with a 10-by-25-foot black velour curtain rented from Rose Brand,
hanging it from the ceiling with nine screw hooks. I also used another curtain to block
out the third window panel on the far left of the gallery. I put up a freestanding wall to
mount a shelf for the projector, hoping to avoid a hot spot, but the wall wasn’t very
stable, and after testing all my materials I found that I did not need to mount the
projector after all, because the hot spot problem never materialized. Therefore, I placed
the Mac Mini and the projector each on a pedestal. I mounted the Kinect camera on
the ceiling in the hallway and ran its cables along the edges of the glass, making sure
the cable could easily be disconnected whenever the gallery’s door needed to be closed (Figure 19).
Figure 19. DDA Gallery's Floor Plan (Close up)
I wanted to turn the gallery’s window panels into one-way mirrors; to do so,
I applied Gila 3-by-15-foot privacy window film to the hallway side of the glass.
When testing the projection on it, I noticed that the projection was barely visible
and was instead reflecting everywhere else (Figure 20).
Figure 20. Problem: Bad Projection Transparency and Reflection
In order to fix the problem, I applied a roll of #90 sheer vellum to the gallery side of the
windows (Figure 21). Once installed in the gallery, the light coming from outside was
still affecting the projection’s transparency, so I covered them neatly with craft paper to
block out the light.
On another note, the only place I could neatly mount the Kinect camera in the
hallway was on the ceiling, which is very high. This posed a problem for effective
gesture recognition, since the Kinect saw the participants’ skeletons from a top
view, which made it hard for it to see their gestures clearly. Nevertheless, the poses that required
the user to extend his or her hands away from the body, like pointing and the T-pose,
worked well. I also had to edit the values for the “Following
User” states, both the head and eye rotations and the threshold values that determine
when the character walks from one panel to the other while following the user. I also had to swap
left and right, since I had not realized until then that they would be flipped
when viewed from the hallway. In the end, the installation was a success (Figure 22).
Figure 21. Solution: Vellum roll inside the gallery
Figure 22. Final Installation
CHAPTER 6
EXHIBITION
Figure 23. Character staring at participant
Figure 24. Participant interacting with the humanoid
Figure 25. Member of the audience interacting
Figure 26. Character following a participant
CONCLUSION
This thesis was challenging to execute, since I learned most of the software and
programming languages I ended up using from scratch, solely for this purpose.
It was a great and rewarding experience in which I learned a lot and got the chance to
use tools and platforms I had always wanted to learn but never had the chance to.
During the exhibition, the audience’s reactions were priceless, and even though my
character isn’t exceptionally photorealistic, he still had the desired effect on them:
they found him uncanny and creepy, and plenty of nervous giggles were heard.
BIBLIOGRAPHY AND REFERENCES
1. Latta, Sara. Scared Stiff: Everything You Need to Know about 50 Famous Phobias. San Francisco: Zest Books, 2013.
2. Freud, Sigmund. The Uncanny. Trans. Laurel Amtower. Vienna, 1919.
3. Jentsch, Ernst. On the Psychology of the Uncanny. Trans. Roy Sellars. 1906.
4. Joly, Jonathan. "Can a Polygon Make You Cry." Bournemouth: The Arts Institute of Bournemouth, 2008.
5. Philpot, Bob. "Her - Plot Summary." 2013. IMDb. April 2014 <http://www.imdb.com/title/tt1798709/plotsummary>.
6. Merceron, Julien. Worldwide Technology Director. Interview with James Brightman. Games Industry International. July 2012.
7. Cleland, Kathy. "Talk to Me: Getting Personal with Interactive Art." Interaction: Systems, Practice and Theory. 2004.
8. Digital Interactive Galleries. Gary Hill. 2009. <http://dig.henryart.org/northwestartists/artist/gary-hill>.
9. Sobecka, Karolina. It's You. April 2014 <http://www.gravitytrap.com/artwork/itsyou>.
10. Sobecka, Karolina. Sniff. April 2014 <http://www.gravitytrap.com/artwork/sniff>.
11. Weizenbaum, Joseph. Eliza. <http://www-ai.ijs.si/eliza/eliza.html>.
12. Coraline. By Henry Selick and Neil Gaiman. Dir. Henry Selick. 2009.
13. Tinwell, Angela, Deborah Abdel Nabi, and John P. Charlton. "Perception of Psychopathy and the Uncanny Valley in Virtual Characters." Computers in Human Behavior 29.4 (2013): 1617–1625.
14. The Purge. By James DeMonaco. Dir. James DeMonaco. 2013.
15. Alice in Wonderland. By Lewis Carroll. Dirs. Clyde Geronimi, Wilfred Jackson, and Hamilton Luske. 1951.
16. Pinocchio. By Carlo Collodi. Dirs. Norman Ferguson, et al. 1940.
17. Wolfel, Matthias. Kinetic Space Source Code. <http://kineticspace.googlecode.com>.