Simulating Emotions of Computer Game Characters

STEFAN DAGNELL

Master of Science Thesis
Stockholm, Sweden 2007
Master’s Thesis in Computer Science (20 credits)
at the School of Computer Science and Engineering
Royal Institute of Technology year 2007
Supervisor at CSC was Henrik Eriksson
Examiner was Stefan Arnborg
TRITA-CSC-E 2007:031
ISRN-KTH/CSC/E--07/031--SE
ISSN-1653-5715
Royal Institute of Technology
School of Computer Science and Communication
KTH CSC
SE-100 44 Stockholm, Sweden
URL: www.csc.kth.se
Abstract
Computer and console games have improved enormously in the last decade, but many game characters still look and act stiff and not very human-like. At the same time, much research has been done to understand and model how the human brain works. Thanks to this research, and to observations by psychologists, models that simulate human emotions have been created.
This report describes the design, implementation and results of using emotion simulation in a modern computer game. It also describes an agent memory that was needed to simulate the emotions. To maximize the usability of the system in future games it is very generic, enabling the user to specify which emotions to use and their characteristics. The results show that the AI can be improved without the performance suffering, provided the user takes some time to tune the parameters.
Referat
Simulation of the emotions of computer-controlled agents

Computer and video games have developed very quickly in recent years. A problem with many of today's games, however, is that the computer-controlled characters act and look very stiff. Thanks to intensive research on the brain, and to observations by psychologists, it has nevertheless been possible to build models of how emotions work in humans. Since much is still unknown, there are many different ideas about how emotions are best simulated.
This report describes the design, implementation and results of an investigation into using emotions for the computer-controlled characters in a modern computer game. During the work it became apparent that the agents also needed a more advanced memory than what was available; because of this, the design of a simple type of memory is described as well. To make the system as flexible and extensible as possible, the user has great freedom in defining which emotions are used and their characteristics. The results show that the AI can be improved with the help of emotions without noticeably affecting performance.
Acknowledgements
This report was produced as a part of a master’s project in Computer Science and
Engineering. It was done at Avalanche Studios as part of the master’s program
at the School of Computer Science and Communication at the Royal Institute of
Technology (KTH) in Stockholm, Sweden.
I would like to thank Avalanche Studios for giving me the opportunity to conduct
the master’s project there. I would also like to thank Henrik Eriksson at KTH for
supervising the project and providing support and comments. At Avalanche Studios
I especially want to thank Gustav Taxèn and Linus Blomberg for their ideas and
support during the whole project.
Stefan Dagnell, January 2007
Contents

1 Introduction
2 Background
  2.1 Finite State Machines
  2.2 Just Cause
3 Memory simulation
  3.1 Knowledge Representation
    3.1.1 Logic
    3.1.2 Network
    3.1.3 Frames
  3.2 Knowledge Representation in Just Cause
  3.3 Memory frames
    3.3.1 Memory types
    3.3.2 Strength
    3.3.3 Frame storage and memory type characteristics
    3.3.4 Pinning
  3.4 Interface
4 Emotion simulation
  4.1 Psychological
    4.1.1 Primary and secondary emotions
    4.1.2 Emotions
  4.2 Emotional state
  4.3 Modelling emotions
  4.4 Emotion updates
  4.5 Emotion decay
5 Relation between memory and emotions
  5.1 Design discussion
  5.2 From memory to emotions
    5.2.1 Simple mode
    5.2.2 Advanced mode
6 Results
  6.1 Using the emotions
  6.2 Results
    6.2.1 Scenario 1
    6.2.2 Scenario 2
    6.2.3 Scenario 3
  6.3 Facial expressions and more
  6.4 Performance evaluation
  6.5 Conclusions
  6.6 Just Cause
Bibliography
Appendices
A XML schema for memory settings XML
B Example of memory settings XML
C Memory interface
D Example of emotion settings XML
E Lua script file used in evaluation
Chapter 1
Introduction
Computer and video games have become a huge industry in the last couple of years, with projects having budgets of tens of millions of euros. The first computer games, released a few decades ago, were developed by one or two people in their spare time. Today more than a hundred people can be involved in the development of a game, with projects spanning several years.
Artificial intelligence in today's games is very much about choosing the best action at a specific time. Therefore many games contain characters that most of the time make the perfect decision based on a set of parameters. A problem with this behavior is that people often consider it more artificial than intelligent. This is because humans very often use their emotions when deciding what to do, and therefore consider that to be intelligent, or at least human-like. This master's project evaluates whether a more human-like behavior can be achieved by simulating the emotions of the agents, and whether it is feasible to do so in a modern computer game.
In this report, I present the design and implementation of a system that simulates the emotions of computer game characters. The system is designed in a generic manner to allow it to be used in a wide variety of applications, although it was designed with shooter games in mind. Because of this, the report mostly discusses example situations taken from this type of game and related problems. I also present the results of an evaluation of the system in the commercially available game Just Cause, developed by the Swedish game studio Avalanche Studios. This sample usage demonstrates the potential of using emotions in computer games and assures the reader that the system indeed works. Because the resulting behavior is difficult to measure, scenarios where the agents are affected by their emotional state were created and studied to verify that the behaviors indeed were improved.
A problem when creating this emotion simulation has been the difficulty of balancing scientifically correct models against what works well in the game. The models presented in this report are therefore a mixture of scientifically proven facts, theories of how memory and emotions work in humans, and ad hoc solutions. The purpose of the system is to create more believable agents in computer games, not the best possible model of how humans function. Because of this, whenever I had to make a choice, I chose the solution that gives the best result in the game rather than the most supported theory.
Chapter 2
Background
I have been working with the game engine used in the computer and video game Just Cause, developed by Avalanche Studios. A screenshot from the game can be seen in figure 2.1. Just Cause is a so-called third-person shooter (TPS) which takes place in a vast tropical environment. A third-person shooter is a game where the player controls a single character and the camera is located a few meters behind the player. The goal of the game is to complete various tasks using the character. In Just Cause these tasks involve rescuing captives, racing cars, liberating cities and more.
Since Just Cause does not have a multiplayer mode (playing against friends over the internet, for example), it is important that the computer-controlled enemies (non-player characters, NPCs) are very good. Just Cause is very action-focused, and therefore the most common interaction with the NPCs is the player shooting at them with various kinds of weapons. Because of this the NPCs have to act very realistically in shootouts, which is why I have mostly considered these situations.
The artificial intelligence (AI) in this game is based on finite state machines (FSMs). This is one of the most common methods for creating artificial intelligence in games today, mostly because FSMs are very simple and yet able to produce very good results. By simple I mean that it is easy to understand the concept and build a simple FSM-based AI; to make a good AI, many advanced senses and actions are needed.
2.1 Finite State Machines
A finite state machine (FSM) is a very powerful, and therefore common, way to make autonomous agents in computer games and other areas. A very large amount of theory has been written about finite state machines, but most of it is not necessary for understanding how they can be used in computer games. Because of this, I present here only the basics of FSMs needed to understand the rest of the report.
Figure 2.1. Screenshot from Just Cause.

An FSM can be represented as a directed graph. Each node in the graph is called a state and the edges are called transitions. States can be things like patrolling, attacking or fleeing, and transitions can be heard explosion, hit by bullet, saw enemy and so on. The FSM starts in a state called the start state and is always in exactly one state at a time. Every state has at least one transition going out from it to another state, unless it is a terminal state, in which case the FSM terminates when it reaches that state. The transitions control when and if the system should change state. They are triggered by different kinds of events (see the examples above). The FSM remains in a state until one of its transition conditions is met; it then changes state to the target of that transition. A simple example of an FSM is shown in figure 2.2.
Very often the transitions are functions in the code that return different values depending on whether the transition conditions are met, i.e. a procedural approach. Each time step the function is called to determine whether the FSM should change its current state. A common problem with this design, and with FSMs in general, is loss of data[7]. If no special solutions are used, the only thing the agents know about is their current state in the FSM, i.e. they have no memory. This can partly be remedied by letting the transition functions output information. The saw enemy function can, for example, output the enemy that was spotted, which can then be stored in the attack state. This way the agent will at least remember whom to attack. It still lacks a way to remember, for example, all the enemies it has seen and whom to attack first.
More details about using FSMs for AI in games can be found in [16, 7].
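The procedural approach described above can be sketched in C++ as a small transition table. This is a minimal illustration using the patrol/attack/flee states and events from this chapter, not the actual Just Cause implementation:

```cpp
#include <cassert>
#include <map>
#include <utility>

// States and events taken from the examples in this chapter.
enum class State { Patrol, Attack, Flee };
enum class Event { SawEnemy, HeardExplosion, EnemyLost };

// A minimal FSM: a transition table mapping (state, event) -> next state.
// Events with no matching transition leave the current state unchanged.
class Fsm {
public:
    Fsm() : current_(State::Patrol) {  // Patrol is the start state.
        table_[{State::Patrol, Event::SawEnemy}]       = State::Attack;
        table_[{State::Patrol, Event::HeardExplosion}] = State::Flee;
        table_[{State::Attack, Event::EnemyLost}]      = State::Patrol;
        table_[{State::Flee,   Event::EnemyLost}]      = State::Patrol;
    }

    // Called each time step when a transition condition has been met.
    void handle(Event e) {
        auto it = table_.find({current_, e});
        if (it != table_.end())
            current_ = it->second;
    }

    State current() const { return current_; }

private:
    State current_;
    std::map<std::pair<State, Event>, State> table_;
};
```

Note that this sketch exhibits exactly the data-loss problem discussed above: after `handle(Event::SawEnemy)` the machine is in the attack state but has no record of which enemy was seen.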
Figure 2.2. Example of a simple FSM that could be used for an agent in a TPS or FPS. It starts in the patrol state, indicated by the bold circle.
2.2 Just Cause
In Just Cause, which uses an FSM-based AI, each agent receives input about the world from different kinds of senses. These senses are the only way agents get information about the world. Examples of senses are:
• VisionInput - detects other characters and vehicles in the world.
• ProximityInput - detects other approaching agents and vehicles.
• HealthInput - triggers when the agent's health drops below a defined threshold.
• GotPunchedInput - triggers when the agent gets punched by someone.
These senses are used as transitions in the FSM. Each sense can take an arbitrary number of input arguments and output one or more variables. When using the vision input, for example, you can specify a number of input arguments: whether you want to see enemies or allies, how far the agent can see, the field of view, etc. The output of the vision sense is the spotted agent, its position and more. A transition can consist of several senses that all need to be triggered at the same time to enable it.
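A sense of this kind can be sketched as a function that takes configuration arguments and, when triggered, produces output variables. The names below (`VisionConfig`, `VisionResult`, the distance-only check) are illustrative assumptions; the real VisionInput also handles field of view, occlusion and more:

```cpp
#include <optional>
#include <string>
#include <vector>

// Simplified world objects; the engine's actual types differ.
struct Agent {
    std::string name;
    float x = 0, y = 0;
    bool enemy = false;
};

struct VisionConfig {        // input arguments to the sense
    bool seeEnemies = true;  // report enemies or allies
    float range = 50.0f;     // how far the agent can see (meters)
};

struct VisionResult {        // output variables: spotted agent and position
    Agent spotted;
    float x, y;
};

// Returns a result only when the transition condition is met,
// mirroring how a sense enables a transition in the FSM.
std::optional<VisionResult> visionInput(const Agent& self,
                                        const std::vector<Agent>& world,
                                        const VisionConfig& cfg) {
    for (const Agent& other : world) {
        if (other.enemy != cfg.seeEnemies) continue;
        float dx = other.x - self.x, dy = other.y - self.y;
        if (dx * dx + dy * dy <= cfg.range * cfg.range)
            return VisionResult{other, other.x, other.y};
    }
    return std::nullopt;  // condition not met: stay in the current state
}
```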
In Just Cause the agents do not have any memory other than the state they are currently in and a few variables that can be set. These variables are mostly used to keep track of which agent to attack, which vehicle the agent is in, information about the patrol route, etc. They are single-value variables (no lists, queues, etc.) of a specific type (float, string, agent, position, etc.).
The senses described above are the input to the system I present in the rest of the report. The system consists of two major parts, the memory simulation and the emotion simulation, described in chapters 3 and 4, respectively. An overview can be seen in figure 2.3. The memory system uses a well-known representation and is designed as an independent part that can be used without the emotion simulation. This decision was made even though there is no clear distinction between memory and emotions (see section 5.1 for details). The output of the system is the emotions, in a format that is easy for others to use. The chosen representation of emotional state is inspired by related work and is presented in section 4.2.
Figure 2.3. An overview of the system described in this report.
Chapter 3
Memory simulation
To be able to simulate emotions in Just Cause, a better memory than the current one is needed. The reasons are twofold. First, the memory acts like a cache, so that every subsystem does not need to query the different inputs every time it needs information about the world. For example, the vision input casts a ray from the agent's head towards every other object to determine whether the object can be seen by the agent or whether something is blocking the line of sight. If every subsystem that needs to detect, for example, enemies had to cast its own rays, the frame rate would suffer more than necessary.
The other reason for using a memory is to get more believable agents. For example, if a civilian sees an armed agent, he might get so afraid that he runs away in panic. If the civilian is walking around in the world a minute later and sees the same agent, but unarmed, the desired behavior is that he remembers having seen the agent armed before and reacts accordingly. Without the memory, using a pure FSM, the civilian would react as if he had never seen the other agent before. With a better type of memory, the system that creates the emotion output can query the memory to see whether the spotted agent has been seen armed before and change the emotions accordingly.
3.1 Knowledge Representation
An overview of knowledge representations can be found in [11]. There are basically three different ways to represent memory: logic, network and frame-based models. They are not radically different, but each has its own benefits and drawbacks.
3.1.1 Logic
Using logic to represent knowledge is popular because it makes inference and reasoning about knowledge very easy. If we know that A is an elephant and that all elephants are gray, we can easily draw the conclusion that A is gray by using an inference engine. The difficulty with this representation is translating the input into rules and facts.
3.1.2 Network
This representation can be seen as a graph. Each node in the graph is a fact and each edge describes how two facts relate. With a logic representation it is very difficult to inspect the final knowledge base; with networks this is very easy because of their graph-like structure. For example, a John node might be connected to the student node with a member-of relation if John is a student. A Hat node can then be connected to John with a part-of relation.
3.1.3 Frames
A frame-based approach stores each memory in a data structure called a frame. J. Mylopoulos writes the following about frames in his overview of knowledge representation[11]:
A frame is a complex data structure for representing a stereotypical situation such as being in a certain kind of living room or going to a child's birthday party.
This data structure can also contain references to other frame instances to create relations among the knowledge. These references are called slots. The drawback of using frames is that they can only be used for storing predefined types of facts. For example, if an agent needs to store memories of all the cars it has seen, we have to define a data structure used to store car memories. This data structure might contain the color of the car, the number of wheels, brand, model, a slot for the driver, etc. See figure 3.1.
Figure 3.1. Frame for car memories. A trailing > indicates a slot.
If the memories that need to be stored are very similar, the same data structure can be used for all of them by adding a type variable to the frame.
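The car-memory frame from figure 3.1 maps directly onto a struct in an object-oriented language. The following sketch uses the fields named in the text; the slot is represented as a pointer to another frame instance:

```cpp
#include <string>

// A frame the driver slot can point to.
struct AgentFrame {
    std::string name;
};

// The car-memory frame from figure 3.1. Field names follow the
// example in the text (color, wheels, brand, model, driver slot).
struct CarFrame {
    std::string color;
    int numWheels = 4;
    std::string brand;
    std::string model;
    const AgentFrame* driver = nullptr;  // slot: reference to another frame
};
```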
3.2 Knowledge Representation in Just Cause
The requirements for the memory in this project are:
1. It must be possible to rate memories, to enable the user to query the memory quickly and easily. For example: "who is the most dangerous enemy that is still alive?" or "what is the best place to take cover?". The user of the system has to calculate this rating based on relevant parameters.
2. The system has to support an arbitrary number of memory types defined by the user.
3. The agent has to be able to forget memories. How fast different types of memories are forgotten has to be customizable. Seeing a dead friend might be remembered longer than seeing a squirrel climbing a tree.
4. The user has to be able to mark certain memories that the agent should never forget.
The memory storage model I have chosen for this problem is a frame-based one. This model was chosen because it is simple and requires neither much memory nor much processing power, which is vital in computer games. Another motivation is that the benefits of the other two models were not needed in this case. The superiority of the logic-based model when it comes to reasoning and drawing conclusions cannot be neglected, but simple reasoning is not impossible in a frame-based model either. Because of the characteristics of the type of memories that need to be stored, advanced reasoning and inference over the memory is a nice feature rather than a requirement. Another reason frames were used is that they have been used successfully in similar projects[6, 13]. Frames also map quite nicely to objects when using an object-oriented programming language.
3.3 Memory frames
Each new piece of input delivered to the system by any of the input producers (VisionInput, ProximityInput, etc.) results in a new frame, referred to as a memory entry. A memory entry is made up of five different attributes, inspired by [6, 13]. An illustration of a memory entry can be found in figure 3.2.
The memory entry frame consists of the following variables:
1. Memory type of the entry (see section 3.3.1).
2. Time when the memory entry was added to the agent's memory. This time stamp is used to purge old memories and can be used in calculations by the subsystems that use the memory.
3. Strength of the memory (see section 3.3.2).
Figure 3.2. A memory entry (frame) and its properties.
4. Position of the memory in the world, when applicable. Explosions might set this parameter to where the explosion occurred, spotted agents set it to where the agent was spotted, etc.
5. An optional object associated with the memory, generally called a slot in frame-based knowledge representation[7, 11]. If the memory is about a seen agent, the object might be the spotted agent. The memory entry of an exploding grenade leaves this unset, but it might be set to the agent that threw the grenade if needed.
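The five attributes above can be collected into a single struct. This is a simplified sketch; the position and object types stand in for the engine's actual handle types:

```cpp
#include <string>

// Simplified world position; the engine's own vector type would be used.
struct Position { float x = 0, y = 0, z = 0; };

// The memory entry frame with its five attributes, as listed above.
struct MemoryEntry {
    std::string type;        // 1. memory type, e.g. "SawEnemy" (section 3.3.1)
    double timestamp = 0;    // 2. time the entry was added, used for purging
    float strength = 1.0f;   // 3. strength in (0, 1] (section 3.3.2)
    Position position;       // 4. world position, when applicable
    const void* object = nullptr;  // 5. optional associated object (the slot)
};
```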
3.3.1 Memory types
The different kinds of memories are defined by a string referred to as the memory type (or type of memory). For best compatibility, 8-bit ANSI strings without blank spaces are recommended. Examples of these strings are SawEnemy, HeardExplosion and HitByBullet. Since these strings are used both in a configuration XML file (section 5.2.1) and a Lua script file (section 5.2.2) as well as in the C++ source code, compatibility issues may arise if the user relies on a specific character set.
3.3.2 Strength
One of the most important pieces of information a memory entry holds is its strength. The strength of a memory entry is similar to the desire attribute that J. Orkin describes in his architecture[13]. The strength is a decimal value in the range (0, 1], where 1 is the strongest value and 0 the weakest. How this value is calculated and interpreted is up to the user of the system. The only thing the memory assumes about the value is that if m1 and m2 are two memory entries and m1.strength > m2.strength, then m2 should be forgotten before m1. That is, m1 is a stronger memory than m2. Strength is used to satisfy requirements 1 and 3 in section 3.2.
The idea is that the strength is a way to prioritize among the memories. The best example to illustrate this idea is when the agent sees an enemy. The strength of this new memory entry might be calculated by the following formula:

s(m) = f(distance) + g(visibility(m.object))    (3.1)

If this formula is used and f() and g() are chosen carefully, agents that are close and fully visible will get the highest strength value and hidden agents far away will get the lowest. When the agent needs to decide what to do next, it can query the memory to determine whether it can recall seeing any enemy at all. If it remembers more than one enemy, it can use the strength value of each memory entry to decide whom to attack, and whether any of them is worth attacking at all, i.e. whether the strength is greater than a threshold value or the distance is less than a predefined value.
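One possible instantiation of formula (3.1) is sketched below. The particular choices of f() and g() (linear distance fall-off, linear visibility weighting, the 100-meter range) are illustrative assumptions only; the thesis deliberately leaves them to the user of the system:

```cpp
#include <algorithm>

// f(): closer enemies contribute more; fades out linearly at maxRange.
float f(float distance) {
    const float maxRange = 100.0f;  // assumed cut-off, not from the thesis
    return 0.5f * std::max(0.0f, 1.0f - distance / maxRange);
}

// g(): visibility is assumed to be in [0, 1], 1 meaning fully visible.
float g(float visibility) {
    return 0.5f * visibility;
}

// s(m) = f(distance) + g(visibility), clamped into (0, 1] since a
// strength of exactly 0 is outside the valid range of section 3.3.2.
float strength(float distance, float visibility) {
    float s = f(distance) + g(visibility);
    return std::clamp(s, 0.01f, 1.0f);
}
```

With these choices a close, fully visible enemy gets strength 1.0 while a distant, hidden one gets the minimum value, matching the intent described above.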
3.3.3 Frame storage and memory type characteristics
The memory can be seen as a set of buckets, one for each memory type. When new data is delivered to the system by a sense, a new memory entry is created and added to the corresponding bucket. Because every memory type might need different characteristics, each bucket can be configured individually. By characteristics I mean how many memories can be stored and when and how the agent forgets memories of that type. The most important parameter that can be set per bucket is the strength decay. The strength decay specifies how much the memory strength decreases each second. The new strength is calculated by:

s_{t+1} = s_t - strength_decay * dt    (3.2)

If memory entry m1 is added at time t with strength s, and memory entry m2 is added at time t + 1 with the same strength s, then m2 should, in most cases, have higher priority, i.e. strength, than m1. The reason behind this is quite intuitive. A good example is an unarmed agent hearing gunfire, where the strength is how much sound reaches the ears. If all entries have the same strength, the desired behavior is to run away from the most recent direction in which gunfire was heard. If this is the case for a memory type, it can be handled automatically by setting the strength decay; if not, the decay can be disabled.
Another way to specify how memory entries are forgotten is for the user to set a maximum age for a memory type. In every update, each bucket is traversed from the oldest memory entry to the newest until an entry newer than the maximum age is reached, deleting every memory entry where:

current_time() - timestamp > maximum_age    (3.3)
The user can also set the maximum number of memory entries a bucket for a certain memory type can contain. If a sense tries to add a memory entry to a full bucket, the strength is used to decide which memory entry to discard: if the new entry has a strength greater than some entry in the bucket, the entry with the lowest strength is replaced with the new one; otherwise the new entry is discarded.
These settings can be used to simulate basic short- and long-term memory. The SawDeadFriend memory type, for example, should most likely not be forgotten for a very long time. By setting the maximum age and size to very large numbers, a basic simulation of a long-term memory can be achieved. Similarly, a small maximum age can be used for short-term memories.
Since all game engine settings in Just Cause are stored in an XML file, the memory configuration is as well. The possible memory types are also defined in the same file. The XSD for this XML can be found in appendix A and an example XML in appendix B.
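The per-bucket behavior described in this section can be sketched as follows. The struct combines strength decay (3.2), the maximum-age purge (3.3), the full-bucket replacement rule, and the pinned flag of section 3.3.4, which exempts an entry from decay and purging. Field names and default values are illustrative, not the engine's:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Entry {
    double timestamp;
    float strength;
    bool pinned = false;   // section 3.3.4: exempt from decay and purging
};

// One bucket, i.e. the storage for a single memory type.
struct Bucket {
    float strengthDecay = 0.05f;   // strength lost per second
    double maxAge = 60.0;          // seconds before an entry is purged
    std::size_t maxSize = 8;
    std::vector<Entry> entries;    // kept ordered oldest first

    // Called every update with the frame time dt and the current time.
    void update(float dt, double now) {
        for (Entry& e : entries)
            if (!e.pinned)
                e.strength -= strengthDecay * dt;          // formula (3.2)
        // Purge entries that decayed away or exceeded maxAge (3.3).
        entries.erase(std::remove_if(entries.begin(), entries.end(),
            [&](const Entry& e) {
                return !e.pinned &&
                       (e.strength <= 0.0f || now - e.timestamp > maxAge);
            }), entries.end());
    }

    // Full bucket: replace the weakest entry if the new one is stronger,
    // otherwise discard the new entry.
    void add(const Entry& e) {
        if (entries.size() < maxSize) { entries.push_back(e); return; }
        auto weakest = std::min_element(entries.begin(), entries.end(),
            [](const Entry& a, const Entry& b) { return a.strength < b.strength; });
        if (e.strength > weakest->strength) *weakest = e;
    }
};
```

Setting `strengthDecay` to zero and `maxAge` to a very large value gives the basic long-term memory described above.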
3.3.4 Pinning
The user also has the ability to add a memory entry that the agent should not forget until explicitly told to. This can, for example, be used to remember important agents or vehicles that should not be forgotten even if thousands of other memories of the same kind enter the system. This is called pinning a memory entry in the rest of the report.
Pinning a memory could be achieved by adding a new memory type and setting its strength decay to zero and its maximum age to infinity. The reason a new memory type would have to be added is that strength decay is set per memory type, not per memory entry. When we see a new enemy that we do not want to forget, we add it as the new type instead of the default one, making the memory stay forever.
The downside of this approach is twofold. The first problem is that many specialized memory types have to be added, which makes the system less generic. The other problem is that even though we add a new type of memory, e.g. SawReallyBadGuy, we often still want to treat him as a normal enemy, only with a higher priority. In that case we have to write special cases in the AI system that uses the memory, to decide whether to retrieve the next ordinary enemy or engage the really bad guy.
Instead, pinning uses a special flag added to the memory entry. If this flag is set, the memory system skips the entry when decreasing the strengths of the memory entries. This enables us to use the same memory type for every enemy we see, including the really bad guy; by pinning that memory entry at, for example, 0.8, we get the desired behavior without writing special cases. Because the bad guy gets a higher priority than most other agents, the agent will go after him most of the time, but if another enemy suddenly appears whose memory gets a strength greater than 0.8, the agent will temporarily attack that enemy instead. When the target is no longer of any value, the memory entry can be un-pinned and forgotten like the rest of the memory entries of that type.
3.4 Interface
To be able to switch back-ends for the memory storage, I have created an interface for accessing the memory. It uses the façade design pattern[2] to provide a simple interface to the possibly complex memory storage. This interface defines functions for adding and removing memory entries as well as querying the contents. The functions it defines are the ones that were identified as useful when using the memory to implement a demo. An example of a query function is HasMemory, which determines whether the memory contains any memory entry of a specific memory type and, optionally, about a specific object. Another example is GetStrongestEntry, which returns the memory entry with the highest strength of a specific memory type. The full interface can be found in appendix C.
The memory storage interface also implements the observer design pattern[4], which subsystems can use to be notified when new memory entries are added to the memory or existing ones are updated. This is used by the emotion system described in the next chapter.
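The façade and observer combination can be sketched as below. HasMemory and GetStrongestEntry are the two query functions named in the text; the listener interface and everything else is a simplified guess, not the actual interface from appendix C:

```cpp
#include <algorithm>
#include <string>
#include <vector>

struct MemoryEntry {
    std::string type;
    float strength = 1.0f;
};

// Observer (design pattern [4]): e.g. the emotion system registers one
// of these to be notified when entries are added.
class MemoryListener {
public:
    virtual ~MemoryListener() = default;
    virtual void onEntryAdded(const MemoryEntry& e) = 0;
};

// Facade (design pattern [2]) hiding the actual bucket storage.
class Memory {
public:
    void AddListener(MemoryListener* l) { listeners_.push_back(l); }

    void AddEntry(const MemoryEntry& e) {
        entries_.push_back(e);
        for (MemoryListener* l : listeners_)  // notify subscribed subsystems
            l->onEntryAdded(e);
    }

    // Does the memory contain any entry of the given type?
    bool HasMemory(const std::string& type) const {
        return std::any_of(entries_.begin(), entries_.end(),
            [&](const MemoryEntry& e) { return e.type == type; });
    }

    // Strongest entry of the given type, or nullptr if none exists.
    const MemoryEntry* GetStrongestEntry(const std::string& type) const {
        const MemoryEntry* best = nullptr;
        for (const MemoryEntry& e : entries_)
            if (e.type == type && (!best || e.strength > best->strength))
                best = &e;
        return best;
    }

private:
    std::vector<MemoryEntry> entries_;
    std::vector<MemoryListener*> listeners_;
};
```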
Chapter 4
Emotion simulation
The second part of the project is to simulate the agent's emotions based on the contents of the memory. The first problem with simulating emotions is that there is no clear definition of what emotions are, even though they are very familiar to most people. Because of this ambiguity, the definition of emotions used in this report might differ slightly from other literature on the subject. Even though no one can define exactly what an emotion is, it is still possible to simulate them. Picard[15] chose to quote John McCarthy when dealing with this matter, and since that quote describes the problem very well I will use it in this report as well:
We can't define Mt. Everest precisely - whether or not a particular rock or piece of ice is or isn't part of it; but it is true, without qualification, that Edmund Hillary and Tenzing Norgay climbed it in 1953. In other words, we can base solid facts and knowledge on structures that are themselves imprecisely defined.
Eugénio Oliveira and Luís Sarmento have studied how emotions can improve artificial intelligence. They write that "... studies reveal an interesting functionality that emotions seem to have: the power to condense complex and uncertain information from several distributed sources into single packages (conscious or not)"[12]. They further conclude that emotions are essential in environments that require rapid responses to events; querying the memory for information in these situations might be too slow to avoid the consequences. This is what I have tried to simulate in computer-controlled characters in a computer game. The input information for a computer game character is almost exactly the same as for a real human being in the same situation. Research has been done on creating a generic memory for AI agents that is as flexible as the one humans possess. I have chosen to mostly ignore this research because it is not suitable for use in a computer game, and when you study the possible inputs from the game world, you realize that only a subset of them is important to consider.
4.1 Psychological
According to Picard[15], emotions can be studied from both a cognitive and a physical point of view. The cognitive part of emotions focuses on why and how different emotions are created, that is, how input to the brain gives rise to different emotional experiences. The physical part of emotions is more focused on how we experience emotions, e.g. an increased heartbeat when a person is afraid, or warmth in the chest and face when we are ashamed[1]. Although the physical side of emotions is an interesting subject, it can be ignored here since simulating it is not a part of this project.
4.1.1 Primary and secondary emotions
Emotions can be divided into two categories, primary and secondary emotions[8, 17].
Some say that two types are not enough and have proposed a third one, tertiary
emotions[18].
Primary emotions are the fast, reactive emotions. These are most likely the
oldest type of emotions, since they are very basic and can be found in many animals
as well. Primary emotions can be seen as a pattern recognition system attached to
the parts of the brain that handle input. These emotions are very fast and trigger
falsely every now and then. An example of a primary emotion is the reaction when
you see something approaching fast in the corner of your eye. The normal reaction
is to be startled and almost ready to run away. The primary emotion system
recognizes this as a possibly dangerous situation and therefore prepares the body
for the worst. To determine whether the situation really is dangerous, more
information is needed. Because of this, the primary emotion system triggers a
reaction to look at the approaching object to see what it really is.
Secondary emotions are most likely unique to humans. They are a much more
complex type of emotion, mostly cognitively generated. They require the brain to
be able to reason about goals, motives, previous experiences etc. An example of
these emotions is shame: to feel shame you have to be able to reason about other
people's thoughts, what you have previously done and the effects of those actions.
Sloman goes even further by dividing the secondary emotions into two categories,
central and peripheral secondary emotions; see [17] for more information.
Tertiary emotions are described by Sloman[18] as classes of emotions that depend
on mechanisms concerned with high-level management of mental processes.
Examples of these emotions are feeling lonely, enthusiasm and satisfaction.
4.1.2 Emotions
There exist many theories about which emotions humans possess, if it is even
possible to list them. Ortony and Turner[14] have created a summary of different
theories, presented in table 4.1.
Reference                        | Fundamental emotion                                                                   | Basis for inclusion
Arnold (1960)                    | Anger, aversion, courage, dejection, desire, despair, fear, hate, hope, love, sadness | Relation to action tendencies
Ekman, Friesen, Ellsworth (1982) | Anger, disgust, fear, joy, sadness, surprise                                          | Universal facial expressions
Frijda                           | Desire, happiness, interest, surprise, wonder, sorrow                                 | Forms of action readiness
Gray (1982)                      | Rage and terror, anxiety, joy                                                         | Hardwired
Izard (1971)                     | Anger, contempt, disgust, distress, fear, guilt, interest, joy, shame, surprise       | Hardwired
James (1884)                     | Fear, grief, love, rage                                                               | Bodily involvement
McDougall (1926)                 | Anger, disgust, elation, fear, subjection, tender-emotion, wonder                     | Relation to instincts
Mowrer (1960)                    | Pain, pleasure                                                                        | Unlearned emotional states
Oatley & Johnson-Laird (1987)    | Anger, disgust, anxiety, happiness, sadness                                           | Do not require propositional content
Panksepp (1982)                  | Expectancy, fear, rage, panic                                                         | Hardwired
Plutchik (1980)                  | Acceptance, anger, anticipation, disgust, joy, fear, sadness, surprise                | Relation to adaptive biological processes
Tomkins (1984)                   | Anger, interest, contempt, disgust, distress, fear, joy, shame, surprise              | Density of neural firing
Watson (1930)                    | Fear, love, rage                                                                      | Hardwired
Weiner & Graham (1984)           | Happiness, sadness                                                                    | Attribution independent

Table 4.1. List of primary emotions[14].
Picard[15, pp.167-168] further concludes that the most common emotions in a
summary of lists of basic (primary) emotions created by Ortony, Clore and Collins
are fear, anger, sadness and joy. Due to the characteristics of the game used in
this project, only fear and anger were identified as important for the agents. Even
though only two emotions were studied, the final system has support for an arbitrary
number of emotions defined by the user since this was a requirement of the project.
4.2 Emotional state
To be able to simulate emotions you need a model of the emotional state and a
way to update it. The update must account for the decay of emotions and novelty,
i.e. habituation to certain inputs. You also need a way to specify how to translate
the new inputs and the memory contents to changes of the emotional state.
Parts of the emotional state representation, and of how its update works, are
inspired by A. Egges[9]. The emotional state of an agent is represented by an
m-dimensional vector, called the emotion vector, where m is the number of emotions
of the agent. Each element in this vector is a value between 0 and 1, where 0
represents an absence of the emotion and 1 its maximum intensity.
4.3 Modelling emotions
Picard suggests that emotions can be modeled as a signal that varies between 0 and
1 over time. In this model, an event that changes an emotion value behaves much
like the amplitude of the sound produced by striking a bell: it creates a pulse whose
height depends on how strong the input is and which slowly decreases to zero.
Figure 4.1 shows how a graph of this function may look.
Figure 4.1. A pulse created when striking a bell, the x-axis is time and y-axis
the sound intensity.
Another similarity between emotions and bell pulses is that multiple hits (events
that change the emotion value) result in a value greater than the individual pulses,
if the time elapsed between them is short enough.
A very important aspect of simulating emotions is novelty, also called habituation.
With novelty, the emotion value updates also use the history of previous events when
calculating the resulting pulse, to simulate that an agent becomes used to certain
inputs. The result is that the emotional influence of similar events entering the
system decreases if the time between them is too small. This is what happens when
you hear a joke multiple times: the first time will, for most jokes at least, be the
funniest. If the person who told the joke tells it one more time, it might be funny,
but not as funny as the first time. If the person continues, you will probably stop
being emotionally affected by it, at least as far as the emotion joy is concerned.
To model habituation, Picard[15] suggests that one can use a sigmoid function.
If an event in the world results in an emotion value e0, a new saturated value e1 is
calculated and used instead of e0, based on the current habituation. e1 is calculated
using:

e1 = f(e0)
(4.1)

where

f(x) = 1 / (1 + e^(5 + 10(s − x)))
(4.2)
and s is a value between 0 and 1 that indicates how habituated the agent is to
the input. A graph of this function with different s values can be seen in figure 4.2.
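As a concrete illustration, eq. 4.2 can be implemented directly; the function name below is mine, not part of the system described here:

```cpp
#include <cmath>

// Saturates a desired emotion change e0 given the habituation level s
// (eq. 4.2): f(x) = 1 / (1 + e^(5 + 10(s - x))).
// s = 0 means no habituation, s = 1 means full habituation.
double HabituationSigmoid(double e0, double s) {
    return 1.0 / (1.0 + std::exp(5.0 + 10.0 * (s - e0)));
}
```

With s = 0 the curve passes through f(0.5) = 0.5 and approaches 1 for large inputs; with s = 1 it stays close to zero, so a fully habituated agent barely reacts.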
To get the habituation value, s, used in the formula, a history of past events
is needed. When a new event enters the emotion system the memory entry is
appended to a history list. To enable the user to customize the habituation for
different memory types two parameters can be set. These parameters are hist_size
which specifies the maximum number of entries in the history and max_age which
specifies the maximum age of entries. The function that calculates the habituated
value s has to fulfill certain criteria:
1. If the history list contains hist_size events of a memory type where every
entry has age = 0, s should be 1, i.e. maximum habituation.
2. If the history list is empty, s should be 0, i.e. no habituation at all.
3. If the history contains one event e, s should decrease as e.age increases.
To calculate the habituation value for a set of events E the following function
is used:

s(E) = Σ_{v∈E} (1 − (v.age/max_age)²) / hist_size
(4.3)
Figure 4.2. Habituation function with different degrees of habituation (s). The
x-axis represents the desired increase of an emotion value and the y-axis the
resulting change.
See figure 4.3 for a plot of eq. 4.3 with different values of hist_size. The term
1 − (v.age/max_age)² ranges from 1 down to 0 as v.age increases; it is the
contribution of each event in the history to the final habituation value. The reason
a second-order function is used instead of v.age/max_age is that the results were
much more convincing: with v.age/max_age the habituation value decreased too
fast in the beginning, while with the square it decreases faster the older an event
is, which proved to give much more believable values. The sum of the events'
contributions is then divided by hist_size, which lets the user specify how much
each event should affect the final value. This is needed to support both events that
happen very seldom and events that occur many times a second.
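As a sketch, eq. 4.3 can be implemented as below; the type and function names are mine, and I assume entries older than max_age have already been removed from the history:

```cpp
#include <cmath>
#include <vector>

struct HistoryEvent { double age; };  // seconds since the event occurred

// Habituation value s for a history E of similar events (eq. 4.3).
// Each event contributes (1 - (age/max_age)^2) / hist_size, so a full
// history of fresh events gives s = 1 and an empty history gives s = 0.
double HabituationValue(const std::vector<HistoryEvent>& E,
                        double hist_size, double max_age) {
    double s = 0.0;
    for (const HistoryEvent& v : E) {
        double r = v.age / max_age;  // 0 for a fresh event, 1 when expiring
        s += (1.0 - r * r) / hist_size;
    }
    return s;
}
```

The three criteria listed earlier are easy to check against this sketch: an empty history yields 0, hist_size fresh events yield 1, and a single event's contribution shrinks as its age grows.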
A simpler function for calculating s was evaluated before settling on eq. 4.3:

s(E) = |E| / hist_size
(4.4)
Because this function does not take the age of each event into account, it is not
suitable for events that happen very seldom. For such events, eq. 4.4 resulted in a
very sudden change of s when the event was removed from the history. If hist_size
was 1, s changed from 1 to 0 instantly, which resulted in very unsatisfying behaviors.
For events that happen very rapidly and where hist_size is very large, the age
of an event in the history is often not very important; the number of events matters
more. Because of this, the results of using eq. 4.4 were almost as good as those of
eq. 4.3. Since eq. 4.3 is O(N), its calculations can be too demanding for some
applications. If this is the case, eq. 4.4, which is O(1), can be used for the types of
events described previously, and eq. 4.3 where needed.
Figure 4.3. Function 4.3 when using different history sizes. A new memory
entry is added at time 0 and removed at time 1, max_age = 1.
4.4 Emotion updates
An emotion update starts with a new event entering the system. This event is
probably triggered by a vision or hearing input etc. The memory system passes a
memory entry to the emotion system containing information about the event. The
memory entry is then used to calculate what A. Egges calls the desired change in
emotional intensity[9], stored in an emotion vector. This vector contains m values
between 0 and 1, where m is the number of emotions; the values represent the
desired change of the emotional intensities, calculated from the memory entry. The
details of this calculation are presented in section 5.2. It is called the desired change
because it does not include any habituation calculations or checks that the final
value stays between 0 and 1. It can be seen as the raw input to the emotion system.
The next step of the emotion update is to apply habituation saturation. A
detailed description of how this is done can be found in the previous section. This
step could also include taking the personality into account. The system developed
in this project does not use personality but more information about that subject
can be found in [9, 20]. If e_t denotes the emotion values at time t, e_{t+1} the new
emotion values, d the desired emotion influence and H the history of previous,
similar events, the update step can be summarized as:

e_{t+1} = e_t + g(d, H)
(4.5)

where g is the previously described function that takes habituation into account.
To make the emotion values decay, eq. 4.5 is extended with a decay term. The
final update function can then be written as:

e_{t+1} = e_t + g(d, H) + u(H)
(4.6)

u(H) is a function that calculates the decay of the emotion values since the last
update; it is described in the next section.
4.5 Emotion decay
If the emotional state of an agent is represented by a single value for each emotion,
representing emotion pulses becomes quite problematic. A possible remedy is to
create a discrete representation of the next t seconds by sampling the future emotion
pulses. In this case the future emotion values for an emotion will be stored in a
vector of t ∗ n values where t is the number of seconds into the future to store and
n is the number of samples per second. The value at position 0 in this vector is the
current emotion value and index x ∗ n contains the value at time x. If an emotion
pulse p(t) should be added to the current emotional state, p(t) will be sampled t ∗ n
times and added to the vector. An example representing two pulses is shown
in figure 4.4. Since the different emotions can have very diverse properties, it is
difficult to find good values of t and n without making the vector too large, i.e. too
memory demanding.
Because of this another solution was used. The emotional state is represented
by three vectors of size m, denoted by ν, π and ω. π represents the current mood.
There is no clear definition of what mood is, but many people represent mood by
a single value that ranges between good (+1) and bad (-1) mood. Inspired by
Picard[15], I have chosen to view mood as an emotion value for each emotion that
reacts more slowly than emotion pulses. In this case the mood vector π is a vector
of size m that contains a mood value for each type of emotion. A predefined part
of the emotion pulse size is then added to the mood value vector. ν is used to store
the current emotion pulse value.
When a new emotion pulse enters the system, the final change in emotional
intensity (see next section) is split up between mood and pulse value as
π_{t+1}(m) = π_t(m) + a ∗ in(m) and ν_{t+1}(m) = ν_t(m) + (1 − a) ∗ in(m), where a is a parameter
set for each emotion. The final emotion value is calculated as e(m) = π(m) + ν(m).
ω is used to store the last peak value when the latest emotion pulse was added.
This peak is used to calculate the decay of the value in ν. The analogy between
striking a bell and emotion pulses is also used when calculating how the emotion
pulses decay.

Figure 4.4. Using sampling to represent two emotion pulses, n = 7.

Figure 4.1 shows the sound amplitude when striking a bell. The
calculations of emotion pulse decay have been inspired by this and by measurements
presented by Picard[15]. A gaussian function was used to get a derivative that is
small in the beginning and end and large in the middle:

f(x) = a ∗ e^(−(x − b)² / c²)
(4.7)

where a is set for each emotion type, b = 0 and c = 2/3.
A graph of this function can be seen in figure 4.5. The decrease of the values
in ν is calculated in each call to the update function, using the following formula:

ν_{t+1}(m) = ν_t(m) − f(1 − ν_t(m)/ω(m))
(4.8)
The disadvantage of this approach is that since only the peak value of the
latest emotion pulse is used, only the current values of past pulses are remembered;
the resulting value of two overlapping pulses will not be the sum of the two functions.
Thanks to the mood vector, which handles emotions over a longer time span, this
is not a big problem.
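The mood/pulse split and the decay of eqs. 4.7 and 4.8 can be sketched for a single emotion as follows; the struct and function names are mine, and clamping the pulse at zero is my assumption (the formulas themselves do not prevent a small overshoot below zero):

```cpp
#include <algorithm>
#include <cmath>

// State of one emotion: mood (pi), pulse value (nu) and the last pulse
// peak (omega), following the three vectors described in section 4.5.
struct EmotionState {
    double mood = 0.0;   // pi(m): slowly changing component
    double pulse = 0.0;  // nu(m): fast-decaying component
    double peak = 0.0;   // omega(m): peak of the latest pulse
};

// Split an incoming change `in` between mood and pulse with factor a.
void AddPulse(EmotionState& e, double in, double a) {
    e.mood += a * in;
    e.pulse += (1.0 - a) * in;
    e.peak = e.pulse;  // remember the peak for the decay calculation
}

// Decay step for the pulse (eqs. 4.7 and 4.8) with b = 0 and c = 2/3.
// decay_scale is the per-emotion parameter a of eq. 4.7.
void DecayPulse(EmotionState& e, double decay_scale) {
    if (e.peak <= 0.0) return;
    const double c = 2.0 / 3.0;
    double x = 1.0 - e.pulse / e.peak;  // distance from the last peak
    double f = decay_scale * std::exp(-(x * x) / (c * c));
    e.pulse = std::max(0.0, e.pulse - f);  // clamping is my addition
}

// Final emotion value: e(m) = pi(m) + nu(m).
double EmotionValue(const EmotionState& e) { return e.mood + e.pulse; }
```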
Figure 4.5. Gaussian used to calculate the decay of an emotion pulse, f in
equation 4.8.
Chapter 5
Relation between memory and emotions
5.1 Design discussion
As described so far, the system consists of two subsystems, memory and emotion.
The emotion system depends on the memory system, but the memory is standalone
and contains some generic functions to query it. A problem with this design is that
there is no distinct difference between memories and emotions. As stated earlier,
emotions are just a condensed version of the memories. This makes it quite difficult
to decide what should be considered a memory and which parameters are just
part of the emotion value calculation. For example: Agent a stands a
few meters away from agent b. Most likely the vision system adds a memory entry
to a’s memory of type SawPerson and b as an object, and vice versa. Suddenly
agent b starts to walk towards agent a. Should the memory system contain a set
of rules that states that if an agent has a velocity greater than v towards an agent,
a memory of type SawApproachingPerson should be added to the memory? Or
should the emotion system use the distance and velocity of the SawPerson memory
to calculate the emotion influence? Both approaches have their pros and cons. With
the former we might end up with very specialized memories, e.g. SawVeryFastApproachingArmedPerson_Near.
The good thing is that calculations are only made once, even though many subsystems
use the same memory. The latter approach results in a much less bloated memory but
is more demanding on the hardware, since very similar calculations might be done more
than once. The solution is to give the user the ability to decide which results are
important enough to be added to the memory.
5.2 From memory to emotions
A goal of the project was to enable the user to easily customize how the agents
are affected by input from the world. Because of this, the first option to control
the emotions consists of setting a few parameters, described in section 5.2.1.
Sometimes this is not enough, and therefore a more flexible way to control it was
created (section 5.2.2). The difference between these two is very similar to that
between fixed and programmable shader pipelines in modern computer graphics
hardware.
5.2.1 Simple mode
This mode is controlled by an XML file and assumes that the desired emotion
influence for a new memory entry is its strength times a constant. In the XML
file the user sets one factor per emotion and memory type with which the strength
will be multiplied. For example, if a new memory entry of type BulletHitNearby
with strength s enters the memory and the XML file specifies that BulletHitNearby
should increase Fear with a factor c, the desired emotion influence for Fear will
be c ∗ s. These values are then passed to the system described in section 4.4 to
calculate the final change in emotions, with respect to habituation etc.
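To make the mapping concrete, a fragment of such a file might look as follows. The element and attribute names are hypothetical illustrations only; the actual schema is the one shown in appendix D.

```xml
<!-- Hypothetical sketch: maps a memory type to per-emotion factors.
     factor is the constant c; mood is the parameter a from section 4.5. -->
<memorytype name="BulletHitNearby">
  <emotion name="Fear" factor="0.8" mood="0.3"/>
  <emotion name="Anger" factor="0.2" mood="0.5"/>
</memorytype>
```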
When a new memory entry enters the memory, the system checks whether an entry
of the same type and object already exists. If this is the case, the old entry is replaced.
Because these updates might influence the emotions of the agent, the results of the
updates are forwarded to the emotion system. An example of this is the memory
type SawPerson. If the strength of this type is based only on the distance between
the agent and the spotted person, an update will be triggered when the person
moves closer to the agent. This is desired, since a person who suddenly starts to
charge the agent will most likely affect its emotions, even though this is not a new
memory entry.
The updates are handled in a similar way as new memory entries. The user sets
a factor that the change in strength will be multiplied with. This factor can be
different for changes that are negative and positive.
For every <emotion, memory type> tuple the user can also specify how much
of the desired emotion influence should be considered mood. This is the parameter
called a in section 4.5 that describes how emotions are divided into mood and
emotion pulses. An example of an XML file that controls an agent's conversion
from memories to emotions can be found in appendix D.
5.2.2 Advanced mode
Sometimes more information than the strength of the memory is needed to calculate
the emotion influence. If using the same example about SawPerson as in last section,
the simple mode will have problems when the agent runs towards the spotted person.
In this case the agent will react the same way as when the person runs towards the
agent, which most likely is wrong. Sometimes you also need to query the agent’s
memory about certain information to determine how it is affected.
The advanced mode gives the user full control over how the emotion influence is
calculated. The user writes a script in the scripting language Lua[5] containing
functions that will be called on different events. Lua was chosen because it is very
lightweight and easy to embed in C++. Two functions are currently supported,
OnNewMemory and OnMemoryUpdate, both of which should return the resulting emotion
influence vector. OnNewMemory gets the newly added memory as argument and
OnMemoryUpdate receives the old and new entry. The scripts have full access to
information about the agent, including its memory, and the objects associated with
the memory entries. The following code is part of the script used in the performance
evaluation in section 6.4.
function OnMemoryUpdate( oldentry, newentry, influence_vector )
local mtype = oldentry.MemoryType
if mtype == "SawEnemy" then
local agent = CAgentObj(newentry)
if SELF.Charging or (
agent.Valid and
SELF:SpeedTowards(agent) > agent:SpeedTowards(SELF) )
then
influence_vector:SetEmotionValue("Fear", 0)
end
end
return true
end
This prevents the agent from becoming afraid when it runs towards the spotted
person. These scripts also enable the user to, for example, make the agent query
its memory when it sees an unarmed person, to determine whether it has seen that
person armed before.
Chapter 6
Results
6.1 Using the emotions
The game engine I have been working with uses an FSM-based AI. To integrate the
emotion values into the current FSM, two transition functions were implemented.
The first one is based on the change of an emotion value: the user of this transition
specifies how much an emotion value must increase during a specific amount of time
for the transition to trigger. This transition is meant to resemble the reactive layer
described by Sloman[18] and can be seen as modelling the fast primary emotions.
The other transition is purely threshold based. The user specifies a threshold
for the different emotion types; when an emotion value exceeds the threshold, or
drops below it if the inverted transition is used, the transition is triggered. The main
reason this one was implemented is to enable the agents to change their gaits and
behaviors when they are, for example, afraid or happy.
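Assuming scalar emotion values in [0, 1], the two transitions can be sketched roughly as follows; the function names and parameters are mine, not the actual engine interface:

```cpp
// Change-based transition (primary-emotion style): triggers when an
// emotion value has increased by at least min_increase between two
// samples taken a fixed time apart.
bool ChangeTransition(double value_now, double value_before,
                      double min_increase) {
    return value_now - value_before >= min_increase;
}

// Threshold-based transition: triggers when the value exceeds the
// threshold, or drops below it when `inverted` is set.
bool ThresholdTransition(double value, double threshold, bool inverted) {
    return inverted ? value < threshold : value > threshold;
}
```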
6.2 Results
The goal of the project was to improve the behavior of computer controlled agents.
Measuring this is rather difficult since the results are very subjective. Another
problem with evaluating the system was that the goal of the project was to improve
an existing AI rather than creating a new one from scratch. To evaluate the system,
three scenarios were created to test and make sure the agents behaved as desired. In
all three scenarios two emotions were used, fear and anger. These were used because
they are two of the most common emotions thought to be primary ones[14]. Due
to limitations in the game engine of Just Cause, advanced usage of the emotions,
such as facial expressions, changing sensitivity of senses etc, was not an option. See
section 6.3 for more details. Because of this more basic reactions were used in the
evaluation which are descibed in each scenario’s section. A total of seven memory
types was identified as useful for the agents. A list of these memory types and how
they affect the emotions are listed in table 6.1.
Fear
HeardExplosion
SawEnemy
SawFriend
SawDeadFriend
HitByBullet
BulletHitNearby
SawDeadEnemy
Anger
+
+
+
+
-
Table 6.1. Different types of memories and how they affect the emotions.
6.2.1 Scenario 1
In this scenario only the memory types HeardExplosion, HitByBullet and BulletHitNearby were used. How they influenced the emotions can be seen in table 6.1. Figure
6.1 shows the finite state machine used by the agent.
Figure 6.1. FSM used in scenario 1.
This agent was used for low-level evaluation of the habituation model, the different
emotion parameters and the transitions. The parameters were tuned so that the
agent got used to being fired at before running in panic. This resulted in an agent
that did not move no matter how much the player fired at him; even a single
explosion did not make him move. To trigger the panic transition, the player had
to fire at and around the agent and throw a grenade close to him in short succession.
Because of limitations in the game engine, the emotional state between being neutral
and running in panic, i.e. being afraid, was not visualized by more than a number
above the agent's head.
The result of this scenario was an agent that was more believable because it
appeared to have a memory of past events.
6.2.2 Scenario 2
This scenario used a civilian agent with the same FSM as the agent in scenario 1,
and only the memory types SawEnemy and SawFriend. The vision input was
modified to take the spotted person's weapon into account when calculating the
desired emotion influence. If the person had a weapon equipped, the new memory
entry was treated as being of type SawEnemy; if no weapon was equipped, it was
interpreted as SawFriend. In the latter case, the memory was also queried to
determine whether the agent had any memory of seeing the other person armed.
If so, the resulting emotion influence became an average of those of SawEnemy
and SawFriend. The reason behind this is quite intuitive: if you are walking around
and see a person you have seen armed before, you would most likely end up more
afraid than when seeing a random person, but not as afraid as when seeing someone
with a gun.
The results of this scenario were very convincing. When the player walked up to
the agent unarmed, the agent just looked at him. When the player equipped a gun,
the agent panicked and ran away. If the player approached him later, he fell on his
knees begging for his life. By using different emotion parameters for different
characters, the reactions could be made more varied: some might turn around and
walk away while others might try to neutralize the player by punching him.
6.2.3 Scenario 3
This scenario used three fully armed enemy agents employing the whole spectrum
of emotions and memory types. These agents used the FSM in figure 6.2.
Figure 6.2. FSM used in scenario 3.
The attack state was also modified to use the emotional state when making
decisions. The original attack action divides the distance to the player into three
sections: close, near and far. If the player is too close to the agent, i.e. in the close
section, the agent starts moving backwards. If the player is in the far section, the
agent moves towards him. In the middle section the agent chooses to move forward
or backward quite randomly. I modified the behavior in the middle section by
removing the random part and using the emotion values instead. If the agent's
emotion value for fear was greater than a function of the distance to the player, he
always chose to move backwards. If the value for anger was greater than a threshold,
he charged the player.
The results of this scenario were better than without emotions. If the player
slowly approached an enemy, the enemy just stood there firing. But if the player
ran towards him firing, he started moving backwards when the player got too close.
If the player threw a grenade at him while firing and charging, he turned around
and started running away. As mentioned in the description of the last scenario, the
emotion parameters can differ between agents, giving the impression that some
agents are more easily scared than others while some are not scared by anything.
This certainly makes the agents seem more human-like than when they all use the
exact same behavior, except for some randomness.
6.3 Facial expressions and more
A problem with basing the evaluation on the scenarios described in this report is
that only extreme emotion values are shown: panic, an agent getting so angry that
he does not care about his own life and attacks the player, etc. This is only a small
part of how the emotion values can be used in a game. Manipulating facial
expressions, voices, gaits etc. to show minor changes in emotions would certainly
make the agents more vivid. That is probably a better way to show emotions, since
it is how humans are used to seeing other people express them. It is not covered in
this project, since doing it well is almost a whole master's project in itself. There
are numerous books on the subject that thoroughly describe how to mediate
emotions[10, 19].
The Just Cause engine was not designed to support different facial expressions
and gaits. Adding support for this, in order to evaluate the visual effects of using
emotions, would therefore have required too much work to be part of this project.
But since the values output by the emotion system seemed believable, there is no
reason to believe the visual results would not be believable too.
One problem with the design proposed in this report is that complementary
emotions are not used. The resulting emotions can therefore be quite ambiguous;
an agent can, for example, have a fear value of 1.0 and at the same time a joy value
of 1.0. In such situations it can be difficult to decide which emotion the agent
should show. How this is solved depends on the system using the emotions, but for
facial expressions, for example, an emotion priority list might provide reasonable
results. The reason complementary emotions are not used is to allow the user of
the system total freedom when specifying emotions.
6.4 Performance evaluation
Since most modern computer and console games use every little clock cycle and
memory available, performance is essential to evaluate when considering a new
system. The first attempt to evaluate the performance of the emotion system was
to measure the FPS (frames per second) when running the game with and without
simulating emotions. The problem with this approach was that the game was not
CPU-limited on the computer used for testing; the FPS was most likely limited by
the graphics adapter or bus bandwidth. Because the emotion simulation only uses
the CPU and RAM, the FPS would not be affected even if the CPU usage increased
by a few percent.
A profiler was used to measure how much time the game spent in the emotion
system. The profiler used was Intel VTune[3], which is widely used by many
companies. The game was profiled for 5 minutes with two different setups: the first
with a total of 6 agents divided into two teams, the other with 14 agents. The
profiler measured how many clock ticks the different parts of the emotion system
used compared to the total amount used by the game. Four different parts of the
system were evaluated:
• Memory - includes creating new memories from sense inputs and simulating
the forgetting of them.
• Emotion - includes calculating the translation between memories and emotion
values and the decay of them.
• Lua - includes translations between C++ and Lua and executing the Lua
code. When profiling the system, the Lua script in appendix E was used.
• Types to id - includes the translation between a type string (described in
section 3.3.1) and an id used in many parts of the system. This can be
optimized heavily and is therefore not very important.
The results are presented in table 6.2. A large part of the clock ticks used by the
memory system was spent in the function that handles the forgetting of memories. This
could be optimized by running the function every other frame, or even less often,
instead of every frame. The same applies to the emotion system. Running these
update functions, for example, 10 times every second or less would dramatically
lower the performance demands while not affecting the results too much. On
the whole, though, the emotion simulation is not very resource demanding.
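The throttling suggested above can be sketched with a small frame-time accumulator that lets the expensive forgetting/decay pass run at a fixed lower rate instead of every frame. This is a hypothetical illustration, not code from the actual system; the class name is my own, and the 8 Hz interval uses power-of-two values only so that the float arithmetic in the demo stays exact:

```cpp
#include <cassert>

// Hypothetical throttle: accumulate elapsed frame time and signal the
// caller to run the expensive update only when the interval has passed.
class ThrottledUpdate {
public:
    explicit ThrottledUpdate(float interval) : interval_(interval) {}

    // Called once per frame with the frame's delta time.
    // Returns true when the expensive update should run this frame.
    bool Tick(float dt) {
        accum_ += dt;
        if (accum_ >= interval_) {
            accum_ -= interval_;    // keep the remainder for the next period
            return true;
        }
        return false;
    }
private:
    float interval_;
    float accum_ = 0.0f;
};
```

The forgetting and decay functions would then be called only on the frames where `Tick` returns true, cutting their share of the clock ticks roughly in proportion to the chosen rate.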
6.5 Conclusions
Picard[15, pp.70-71] states that a computer can be said to have emotions if:
1. System has behavior that appears to arise from emotions.
CHAPTER 6. RESULTS
Subsystem      6 agents   14 agents
Memory         1.16%      1.72%
Emotion        0.29%      0.45%
Lua            0.31%      1.19%
Types to id    0.19%      0.36%

Table 6.2. Results of the performance evaluation for different parts of the system,
in percentage of total clock ticks. Types to id is the translation between the
memory and emotion type strings described in section 3.3.1 and an id. This can
be optimized and is therefore not that important.
2. System has fast “primary” emotion responses to certain inputs.
3. System can cognitively generate emotions, by reasoning about situations, especially as they concern its goals, standards, preferences, and expectations.
4. System can have an emotional experience, specifically: cognitive awareness,
physiological awareness, subjective feelings.
5. The system’s emotions interact with other processes that imitate human cognitive and physical functions, e.g. perception, memory, decision making, planning, learning etc.
See [15] for a more detailed description of each point.
The system described in this report fulfills most of these requirements. Point
1 is satisfied by the primary emotions and letting the emotions influence decision
making. How well this is satisfied depends on the usage of the emotion values.
Point 2 is satisfied by the emotion triggers. The Just Cause example implementation uses these triggers as FSM transitions to simulate the behavior caused by
primary emotions.
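As an illustration of how such a trigger can drive an FSM transition, consider the following sketch. It is not the Just Cause code; the state names and the 0.8/0.3 thresholds are assumed values chosen for the example:

```cpp
#include <cassert>

// Illustrative sketch: an emotion trigger firing when Fear crosses a
// threshold is used as a transition in a simple agent FSM, giving the
// fast "primary emotion" response of Picard's point 2.
enum class State { Patrol, Flee };

struct Agent {
    State state = State::Patrol;
    float fear = 0.0f;

    // Called by the emotion system whenever the Fear value changes.
    void OnEmotionChanged(float newFear) {
        fear = newFear;
        if (fear > 0.8f)
            state = State::Flee;        // trigger fires: primary response
        else if (state == State::Flee && fear < 0.3f)
            state = State::Patrol;      // fear has decayed: resume patrol
    }
};
```

The two thresholds form a hysteresis band, so the agent does not flicker between states while the fear value decays.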
Point 3 is partly satisfied by setting the emotion influence of an event for a
character in the game. Most agents will have survival as a goal. Making an explosion
increase the agent's fear can be seen as the agent identifying the explosion as a
threat to his survival goal. In games where the agents do not live very long, e.g.
Just Cause, this might be enough. Extending the methods proposed in this report
to fully support this point is not an easy task, though.
Point 4 is not satisfied.
Point 5 is fully supported if the user of the emotion system chooses to use the
emotion values to affect the rest of the game. The example implementation in Just
Cause uses the emotion values when making some decisions. These values could also
be used to affect the perceptions, e.g. more sensitive vision and hearing senses when
the agent is afraid.
6.6 Just Cause
Using artificial emotions in Just Cause, and probably many other computer games,
will improve the behavior of the computer-controlled agents if done properly. Tweaking
the memory and emotion parameters requires some work to make the emotions
seem humanlike, but considering the results presented in this report it is worth it.
Some behaviors that have to be hardcoded today will also be handled automatically
when using emotions. Instead of writing special cases to make the agent flee
from explosions and so on, emotions can take care of it. All it takes is to specify the
different possible events and how they affect the agent's emotions. For example, an
explosion increases the fear of agents nearby by X and decreases their joy by Y.
When the agent needs to decide between fight or flight he can base that decision
on the current emotion values. In this case it is not important exactly which events
caused him to become this afraid, although he needs to know where to flee from.
Using the memory to make this decision will in most cases be more time-consuming
than using emotions.
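The fight-or-flight decision described above could look like the following sketch. The function name and the particular emotion values are hypothetical; the point is that the decision reads two already-accumulated floats instead of searching the memory store for the events that produced them:

```cpp
#include <cassert>

// Hypothetical fight-or-flight decision: events have already raised or
// lowered the agent's emotion values, so deciding is a constant-time
// comparison rather than a scan over the agent's memories.
enum class Action { Fight, Flee };

Action DecideFightOrFlight(float fear, float aggression) {
    // The agent does not need to know which events caused the fear,
    // only its current level relative to his aggression.
    return (fear > aggression) ? Action::Flee : Action::Fight;
}
```

With influence values like those in appendix D, a nearby explosion (influence 0.75 on Fear) would typically tip a calm agent toward fleeing, while a recently seen dead friend (influence 0.35 on Aggression) pushes toward fighting.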
Even if the emotions cannot be used to influence the behavior of the agents, they
can still be used for facial expressions and gaits. Because of the small impact the
memory and emotion system has on game performance, I do not see any reason
not to use it in this and other similar games.
Bibliography
[1] Emotion. URL http://en.wikipedia.org/wiki/Emotion.

[2] Facade pattern. URL http://c2.com/cgi/wiki?FacadePattern.

[3] Intel VTune performance analyzers. URL http://www.intel.com/cd/software/products/asmo-na/eng/vtune/239144.htm.

[4] On using the observer design pattern. URL http://www.wohnklo.de/patterns/observer.html.

[5] The programming language Lua. URL http://www.lua.org.

[6] R. Burke, D. Isla, M. Downie, Y. Ivanov, and B. Blumberg. CreatureSmarts: The art and architecture of a virtual brain, 2001.

[7] A. Champandard. AI Game Development. New Riders Publishing, 2003. ISBN 1-5972-3004-3.

[8] A. Damasio. Descartes’ Error: Emotion, Reason, and the Human Brain. Putnam Publishing, 1994. ISBN 0-399-13894-3.

[9] A. Egges, S. Kshirsagar, and N. Magnenat-Thalmann. A model for personality and emotion simulation. In Vasile Palade, Robert J. Howlett, and Lakhmi C. Jain, editors, KES, volume 2773 of Lecture Notes in Computer Science, pages 453–461. Springer, 2003. ISBN 3-540-40803-7.

[10] D. Freeman. Creating Emotion in Games: The Craft and Art of Emotioneering. New Riders, 2003. ISBN 1-59273-007-8.

[11] J. Mylopoulos and H. J. Levesque. An overview of knowledge representation. In M. L. Brodie, J. Mylopoulos, and J. W. Schmidt, editors, On Conceptual Modelling: Perspectives from Artificial Intelligence, Databases, and Programming Languages, pages 3–17. Springer, New York, 1984.

[12] E. Oliveira and L. Sarmento. Emotional advantage for adaptability and autonomy, 2003.

[13] J. Orkin. Agent architecture considerations for real-time planning in games, 2005.

[14] A. Ortony and T. Turner. What’s basic about basic emotions?, 1990.

[15] R. Picard. Affective Computing. MIT Press, 1997. ISBN 0-262-16170-2.

[16] S. Rabin. AI Game Programming Wisdom 2. Charles River Media, 2003. ISBN 1-58450-289-4.

[17] A. Sloman. Review of Affective Computing, 1999.

[18] A. Sloman. Beyond shallow models of emotion, 2001.

[19] M. Smith. Engaging Characters: Fiction, Emotion, and the Cinema. Oxford University Press, 1995. ISBN 0-19-818347-X.

[20] I. Wilson. The artificial emotion engine, driving emotional behavior, 1999.
Appendix A
XML schema for memory settings XML
<?xml version="1.0" encoding="UTF-8" ?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="object">
    <xs:complexType>
      <xs:sequence minOccurs="0" maxOccurs="unbounded">
        <xs:element name="object" type="MemoryEntry" />
      </xs:sequence>
      <xs:attribute name="name" type="xs:string" fixed="memories" />
    </xs:complexType>
  </xs:element>
  <xs:complexType name="MemoryEntry">
    <xs:sequence minOccurs="0" maxOccurs="unbounded">
      <xs:element name="value" type="Value"/>
    </xs:sequence>
    <xs:attribute name="name" type="xs:ID" use="required" />
  </xs:complexType>
  <xs:complexType name="Value" mixed="true">
    <xs:attribute name="name" use="required">
      <xs:simpleType>
        <xs:restriction base="xs:NMTOKEN">
          <xs:enumeration value="max_age" />
          <xs:enumeration value="strength_decay" />
          <xs:enumeration value="strength_threshold" />
        </xs:restriction>
      </xs:simpleType>
    </xs:attribute>
    <xs:attribute name="type" use="required">
      <xs:simpleType>
        <xs:restriction base="xs:NMTOKEN">
          <xs:enumeration value="float" />
          <xs:enumeration value="int" />
          <xs:enumeration value="string" />
        </xs:restriction>
      </xs:simpleType>
    </xs:attribute>
  </xs:complexType>
</xs:schema>
Appendix B
Example of memory settings XML
<object name="memories"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="memoryparams.xsd">
  <object name="HeardExplosion">
    <value name="strength_decay" type="float">0.02</value>
    <value name="strength_threshold" type="float">0.1</value>
  </object>
  <object name="SawEnemy">
    <value name="max_age" type="float">30</value>
  </object>
  <object name="SawFriend">
    <value name="max_age" type="float">5</value>
  </object>
  <object name="SawDeadFriend">
    <value name="max_age" type="float">5</value>
  </object>
  <object name="HitByBullet">
    <value name="strength_decay" type="float">0.5</value>
  </object>
  <object name="BulletHitNearby">
    <value name="strength_decay" type="float">0.5</value>
  </object>
  <object name="SawDeadEnemy">
    <value name="max_age" type="float">3</value>
  </object>
</object>
Appendix C
Memory interface
Figure C.1. Memory storage interface.
Appendix D
Example of emotion settings XML
<object name="memory_history">
  <object name="SawEnemy">
    <value name="size" type="int">5</value>
    <value name="max_age" type="float">5</value>
  </object>
  <object name="HeardExplosion">
    <value name="size" type="int">1</value>
    <value name="max_age" type="float">8</value>
  </object>
  <object name="SawDeadFriend" />
</object>
<object name="emotions">
  <object name="Fear">
    <value name="decay_rate" type="float">0.2</value>
    <value name="mood_decay_rate" type="float">0.01</value>
    <value name="emotion_mood_influence" type="float">0.1</value>
    <object name="memory_influence">
      <object name="SawEnemy">
        <value name="influence" type="float">0.5</value>
        <value name="use_delta" type="string">true</value>
        <value name="negative_influence" type="float">0.15</value>
        <value name="emotion_mood_influence" type="float">0.6</value>
      </object>
      <object name="HeardExplosion">
        <value name="influence" type="float">0.75</value>
        <value name="emotion_mood_influence" type="float">0.1</value>
      </object>
    </object>
  </object>
  <object name="Aggression">
    <value name="decay_rate" type="float">0.2</value>
    <value name="mood_decay_rate" type="float">0.01</value>
    <value name="emotion_mood_influence" type="float">0.1</value>
    <object name="memory_influence">
      <object name="SawDeadFriend">
        <value name="influence" type="float">0.35</value>
        <value name="emotion_mood_influence" type="float">0.8</value>
      </object>
    </object>
  </object>
</object>
Appendix E
Lua script file used in evaluation
CONSOLE:Write("Loaded 'genericsoldier.lua'")

function OnNewMemory( memory_entry, influence_vector )
    local mtype = memory_entry.MemoryType
    if mtype ~= "SawFriend" and mtype ~= "SawEnemy" then
        CONSOLE:Write("New memory ("..mtype..")")
    end
    return true
end

function OnMemoryUpdate( oldentry, newentry, influence_vector )
    local mtype = oldentry.MemoryType
    if mtype ~= "SawFriend" and mtype ~= "SawEnemy" then
        CONSOLE:Write("New memory ("..mtype..")")
    end
    if mtype == "SawEnemy" then
        local agent = CAgentObj(oldentry)
        if SELF.Charging or (
            agent.Valid and
            SELF:SpeedTowards(agent) > agent:SpeedTowards(SELF) )
        then
            influence_vector:SetEmotionValue("Fear", 0)
        end
    end
    return true
end