
Enhancing Believability of Virtual Soccer Players:
Application of a BDI-Model with Emotions and Trust
Tibor Bosse and Daniel Höhle
Vrije Universiteit Amsterdam, Department of Artificial Intelligence
de Boelelaan 1081a, 1081 HV Amsterdam, The Netherlands
[email protected], [email protected]
Abstract. Despite significant progress in the development of virtual soccer
games, the affective behavior of virtual soccer players, both in software and
hardware applications, is still limited. To improve this situation, this paper
presents a generic model for decision making of virtual agents in relation to
emotions and trust. The model takes the BDI framework as point of departure,
and extends this with mechanisms to represent the dynamics of emotions and
trust. After testing the model by means of simulation experiments, it has been
incorporated into the virtual agents within the RoboCup 2D soccer
environment. A preliminary evaluation indicated that the model successfully
enhances the believability of virtual soccer players.
Keywords: soccer simulation, cognitive modeling, affective computing, trust.
1 Introduction
Association football, commonly known as soccer, has been the world’s most popular
sport for many decades. Thanks to the rapid developments in the video game industry
as well as scientific areas such as Computer Science and Artificial Intelligence,
nowadays, soccer games are not only played in the real world, but also in the virtual
world. Overall, the purpose of virtual soccer games (or soccer simulations) is threefold. First and foremost, virtual soccer games are played for entertainment purposes;
in these cases, one or multiple human users control the soccer players in a virtual
environment, with the goal to defeat each other or a computer opponent [12]. Second,
soccer simulations are increasingly being used for the purpose of analysis, for
instance by managers and coaches of real soccer teams; here, the idea is that the
virtual environment provides an easy framework to test various strategies that cannot
easily be tested in the real world. For example, in case a coach intends to investigate
what might happen if his team uses a particular tactical strategy in a particular game,
he could run a simulation of this game, where each of the simulated players behaves
according to this strategy. Although this area has yet to be explored in more
detail, some anecdotal cases have been reported in which soccer simulations were
able to make accurate predictions (e.g., [15, 17]). Third, virtual soccer games have
proved to be an excellent test bed for the application of a variety of AI techniques,
including (real-time) multi-agent planning, multi-agent communication, behavior
modeling, and learning. These challenges have been the main motivation for the
foundation of the RoboCup competition in 1997 [10].
As a result of these developments, the realism of virtual soccer games has increased
rapidly in recent years. Not only has the physical appearance of the players and the
environment become more and more appealing, but the technical movements and
tactical decisions of the players have also come much closer to reality. In contrast,
one particular aspect that has lagged behind is the players’ mental behavior, and
in particular their affective behavior. For instance, in state-of-the-art video games, the
only moments in which players’ emotions are really apparent are right after a goal has
been scored. In such cases, players usually show some built-in emotional expressions
(e.g., cheering, or looking angry). However, complex emotional states that change
dynamically during the game and influence the players’ behavior are usually very
limited. This contrasts with human soccer players, whose behavior has been shown to
be extremely sensitive to the dynamics of their affective states [7]. Consequently,
existing soccer simulations are not as realistic (and appealing) as they could be.
To deal with this problem, the current paper presents a generic model for decision
making in relation to emotions and trust, and applies this to the domain of virtual
soccer. As also put forward in [16], endowing virtual soccer players with emotions
will enhance their believability, thereby making the game more engaging for
spectators. Both emotions and trust are mental states which have been found to have a
serious impact on people’s decisions, in general as well as in sports context [7, 9].
The presented model takes the BDI framework [13] as point of departure, and extends
this with mechanisms to represent the dynamics of emotions and trust.
The remainder of this paper is structured as follows. The generic simulation model
is presented at a conceptual level in Section 2. In Section 3, this model is illustrated
for the context of soccer games, by showing a number of simulations generated in the
LEADSTO simulation environment. Next, Section 4 describes how the model was
incorporated into the virtual agents within the RoboCup 2D soccer environment, and
presents some preliminary results. Section 5 concludes the paper with a discussion.
2 Conceptual Model
In this section, the model for decision making with emotions and trust will be
described at an intuitive, conceptual level, using the agent-based modeling language
LEADSTO [2]. This language allows the modeler to integrate both logical
(qualitative) and numerical (quantitative) aspects. In LEADSTO, direct temporal
dependencies between two state properties in successive states are modeled by
executable dynamic properties. The format is defined as follows: let α and β be state
properties of the form ‘conjunction of ground atoms or negations of ground atoms’. In
LEADSTO the notation α →→e,f,g,h β means:
If state property α holds for a certain time interval with duration g,
then after some delay (between e and f), state property β will hold for a certain time interval of length h.
Here atomic state properties can have a qualitative, logical format, such as an
expression desire(d), expressing that desire d occurs, or a quantitative, numerical
format such as an expression has_value(x, v) which expresses that variable x has value
v. For more details, see [2].
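To make this rule format concrete, the behavior of a single LEADSTO rule over a discrete timeline can be sketched in Python (an illustration only, not the actual LEADSTO tool; for simplicity the delay is fixed at its minimum value e):

```python
def apply_leadsto_rule(trace, alpha, beta, e, f, g, h, horizon):
    """Sketch of LEADSTO semantics on a discrete timeline.

    trace: dict mapping time point -> set of state properties that hold.
    If alpha holds for g consecutive time points ending at t, then beta
    is made to hold for h time points starting after the minimum delay,
    i.e. from t+e onward.
    """
    for t in range(g - 1, horizon):
        # check that alpha held during the whole interval [t-g+1, t]
        if all(alpha in trace.get(t - k, set()) for k in range(g)):
            for u in range(t + e, t + e + h):
                trace.setdefault(u, set()).add(beta)
    return trace

# alpha holds at time points 0 and 1 (duration g=2); with e=1 and h=2,
# beta then holds at time points 2 and 3
trace = {0: {"alpha"}, 1: {"alpha"}}
trace = apply_leadsto_rule(trace, "alpha", "beta",
                           e=1, f=1, g=2, h=2, horizon=10)
```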
Below, the LEADSTO model for decision making with emotions and trust is
introduced (at a conceptual level) in three steps. In Section 2.1, the basic Belief-Desire-Intention (BDI) model is briefly introduced. Next, in Sections 2.2 and 2.3, this
model is extended with mechanisms to represent the dynamics of emotions and trust,
respectively. For the complete LEADSTO specification, see the M.Sc. Thesis in [6].
2.1 BDI-model
The BDI-model is a standard paradigm within the Agent Technology domain, which
describes how agents decide to perform actions based on their beliefs, desires and
intentions (e.g., [13]). This model forms the foundation of the model presented in this
paper. See the middle part of the global overview in Figure 1. Here the box indicates
the borders of the agent, the ovals denote components of the agent model representing
different mental states, and the arrows indicate influences between mental states.
[Figure 1 depicts the components observations, beliefs, desires, intentions, and
actions, with the emotions component (top) and the trust component (bottom)
connected to them.]
Fig. 1. Global overview of the BDI-model with Emotions and Trust
For instance, an action A is executed when the agent has the intention to perform
this action and it has the belief that certain circumstances in the world are fulfilled
such that there is an opportunity to perform the action. Beliefs are created on the basis
of observations. Moreover, the intention to do a specific type of action is created if
there is some desire to reach state S, and the agent believes that performing this action
will fulfill this desire. Such relations within the general BDI-model can be specified
in formal LEADSTO format as follows (where the timing parameters e, f, g, h, as well
as a parameter Ag indicating which agent has the mental states, have been omitted for
simplicity):
∀S:STATE ∀A:ACTION
desire(S) ∧ belief(satisfies(A, S)) →→ intention(A)

∀A:ACTION
intention(A) ∧ belief(opportunity_for(A)) →→ performed(A)
Note that the beliefs used here depend on observations (see Figure 1), or on common
knowledge. In the remainder of this paper, desires and intentions are parameterized
with a real number I (between 0 and 1) to represent the strength of the mental state.
2.2 Emotions
In order to represent emotional states, a main assumption made is that the intensity of
different emotions can be represented via a number between 0 and 1, as often done
within emotion models within Artificial Intelligence (e.g., [5]). In the current paper,
only one type of emotion is modeled, namely happiness (with 0 = very sad and 1 =
very happy), but the model can also be used to represent emotions like anger and fear.
As shown in the upper part of Figure 1, the component to represent emotions
interacts with the basic BDI-model in various manners. First, for generation of
emotions, it is assumed that they are the result of the evaluation of particular events
against the individual’s own goals (or desires), as described by appraisal theory [5].
This process is formalized, among others, by the following LEADSTO rules (stating
that reaching a goal state S leads to happiness with an intensity that is proportional to
the strength of the desire, and inversely for not reaching a goal state):
∀S:STATE ∀I:REAL
desire(S, I) ∧ belief(S) →→ happiness(S, I)
desire(S, I) ∧ belief(not(S)) →→ happiness(S, 1-I)
For example, in case an agent desires to score a goal, but misses, then the agent
becomes sad about this. In addition to these event-specific emotional states, the
agents’ long term mood is modeled. Moods are usually distinguished from emotions
in that they last longer and are less triggered by particular stimuli. Thus, in the model
a global mood is assumed with a value between 0 and 1 (representing its positive
valence), which depends on the values of all specific emotions in the following way:
∀S1, ..., Sn:STATE ∀I1, ..., In, J:REAL
happiness(S1, I1) ∧ ... ∧ happiness(Sn, In) ∧ mood(J) →→
mood(λ*J + (1-λ) * (w1*I1 + ... + wn*In))
Thus, the new mood state is calculated based on the valence J of the old mood state,
combined with the weighted sum of the emotional values Ik for the different aspects
Sk in the domain of application. Here, the wk’s (which sum up to 1) are weight factors
representing the relative importance of the different aspects, and λ (between 0 and
1) is a persistence factor representing the rate at which the mood changes over time.
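Numerically, the mood update is a weighted average of the old mood and the current emotions; a small sketch (the persistence factor and weights below are hypothetical settings, not values from the paper):

```python
def update_mood(old_mood, emotions, weights, lam=0.9):
    """mood_new = λ*mood_old + (1-λ) * (w1*I1 + ... + wn*In),
    with the wk summing to 1 and λ in [0,1] a persistence factor.
    λ=0.9 and the weights used below are hypothetical settings.
    """
    weighted = sum(weights[s] * i for s, i in emotions.items())
    return lam * old_mood + (1.0 - lam) * weighted

# a fulfilled desire (happiness 1.0 for 'score_goal') lifts the mood
m = update_mood(0.5,
                {"score_goal": 1.0, "keep_possession": 0.0},
                {"score_goal": 0.7, "keep_possession": 0.3})
# m ≈ 0.52
```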
Finally, various rules have been formulated to represent the impact of emotions
and mood on beliefs, desires, and intentions. These rules are mostly domain-specific;
for instance, a positive mood increases the desire to cooperate with teammates, and a
negative mood increases the desire to behave aggressively. These rules are described
in detail in [6].
2.3 Trust
According to [4], trust is a (dynamic) mental state representing some kind of
expectation that an agent may have with respect to the behavior of another agent or
entity. The authors propose that a belief in competence of the other entity is an
important contributor to the development of trust (in addition to a second aspect, the
belief in willingness; however this aspect is currently ignored). This mechanism is
formalized in the current paper by assuming that trust in another agent is based on a
weighted sum of the beliefs in the agent’s capabilities (using a similar formula as
above for mood update):
∀A:AGENT ∀X1, ..., Xn:ACTION ∀I1, ..., In, J:REAL
belief(has_capability(A, X1, I1)) ∧ ... ∧ belief(has_capability(A, Xn, In)) ∧ trust(A, J) →→
trust(A, λ*J + (1-λ) * (w1*I1 + w2*I2 + ... + wn*In))
Thus, the new trust state in agent A is calculated based on the level J of the old trust
state, combined with the weighted sum of the beliefs in capabilities Ik for the different
actions Xk in the domain of application. For example, a soccer player trusts his
teammate more if he believes that he is good at attacking as well as defending. Again,
the wk’s are weight factors, and λ is a persistence factor.
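This trust update follows the same weighted-average pattern as the mood update; a brief Python sketch (the persistence factor and weights are hypothetical settings):

```python
def update_trust(old_trust, capability_beliefs, weights, lam=0.9):
    """trust_new(A) = λ*trust_old(A) + (1-λ) * (w1*I1 + ... + wn*In),
    where Ik is the believed capability of agent A for action Xk,
    and λ is a persistence factor. λ=0.9 and the weights below are
    hypothetical choices, not values from the paper.
    """
    weighted = sum(weights[x] * i for x, i in capability_beliefs.items())
    return lam * old_trust + (1.0 - lam) * weighted

# a teammate believed good at attacking and defending gains trust
t = update_trust(0.5,
                 {"attack": 0.8, "defend": 0.6},
                 {"attack": 0.6, "defend": 0.4})
# t ≈ 0.522
```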
A next step is to define how beliefs in capabilities are determined. For this, the
mechanism put forward in [8] is reused, which states that a new trust state in some
entity is partly based on the old trust state, and partly on an experience. This is
modeled via the following LEADSTO rule (where the experiences are observed
actions performed by teammates):
∀A:AGENT ∀X:ACTION ∀I:REAL
belief(has_capability(A, X, I)) ∧ observed(performed(A, X, succeeded)) →→
belief(has_capability(A, X, 1-γ + γ*I))

belief(has_capability(A, X, I)) ∧ observed(performed(A, X, failed)) →→
belief(has_capability(A, X, γ*I))
For instance, in case agent X believes that agent Y’s capability with respect to
tackling is 0.6, and Y performs a successful tackle, then this belief is strengthened
(where γ is an update speed factor). Note that the mechanism to update trust is also
applied to the self, to model some kind of self-confidence.
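The two experience-based update rules can be sketched as a single function (the value of the update speed factor is a hypothetical choice):

```python
def update_capability_belief(i, succeeded, gamma=0.8):
    """Experience-based update of belief(has_capability(A, X, I)):
    success: I_new = 1 - γ + γ*I   (pulls the belief toward 1)
    failure: I_new = γ*I           (pulls the belief toward 0)
    γ is an update speed factor; γ=0.8 is a hypothetical choice.
    """
    return 1.0 - gamma + gamma * i if succeeded else gamma * i

# a successful tackle strengthens a capability belief of 0.6
i_new = update_capability_belief(0.6, succeeded=True)
# i_new ≈ 0.68
```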
Finally, as with emotions, trust states also have an impact on the other states in the
BDI-model. For example, a high trust in a teammate increases the strength of the
intention to pass the ball to this player. Again, these mechanisms are represented
using (mostly domain-specific) rules, which can be found in [6].
3 Simulation Results
To test the basic mechanisms of the model, it has been used to generate a number of
simulation runs within the LEADSTO simulation environment. To this end, various
scenarios have been established in the context of a (simplified) soccer game. The
game has been simplified in the sense that we did not simulate a complete soccer
match (as is the case in the RoboCup environment), including computer-generated
teammates and opponents, and environmental processes (e.g., movements of the ball).
Instead, the tests focused on the behavior of one particular agent (player X). To test
this behavior, the scenarios consisted of a series of predefined events (e.g., ‘teammate
Y passes the ball to teammate Z’, ‘teammate Z shoots on target’), which were
provided as input to player X. Based on these inputs, all actions and emotions derived
by the agent were observed and evaluated, and in case of inappropriate behavior, the
model was improved (manually). Since the main goal of the simulations was to test
the model for decision making in relation to emotions and trust, this simplified setup
was considered a necessary first step. As a next step (see Section 4), the model was
tested in a more complete setting (the RoboCup environment).
To represent the domain-specific aspects of virtual soccer, the different logical
sorts introduced in Section 2 were filled with a number of elements. For example,
some instances of the sort STATE were {win_game, ball_in_possession, ball_nearby_goal,
score_other_goal}, some instances of the sort BELIEF were {is_at_location(AGENT, REAL),
has_capability(AGENT, ACTION, REAL), satisfies(ACTION, STATE), opportunity_for(ACTION)},
and some instances of the sort ACTION were {shoot_at_goal, pass_from_to(AGENT,
AGENT), dribble, run_free}.
To illustrate the behavior of the model, Figures 2 and 3 show some fragments of an
example LEADSTO simulation trace, addressing a simple scenario in which a number
of subsequent (positive and negative) events occur. In both figures, time is on the
horizontal axis. Figure 2 shows a number of actions that are performed during the
scenario by player Z, a teammate of player X. A dark box on top of a line indicates
that an action is being performed at that time point. Figure 3 shows the emotional
state (in this case the amount of happiness, on a [0,1] scale) that player X has with
respect to player Z.
Fig. 2. Example simulation results - Actions performed by player Z
Fig. 3. Example simulation results - Emotional state of player X with respect to player Z
As can be seen from the figures, player X’s emotional state regarding player Z
changes as a consequence of player Z’s actions. Note that it takes about 3 time points
in the simulations to change an emotional state, which explains the short delays. For
instance, at time point 7-9, player Z shoots at the goal. Apparently, this attempt is
appreciated by player X (i.e., it contributes to fulfillment of this agent’s desires), since
at time point 10-12, the agent’s happiness regarding agent Z increases from 0.5 to
0.55. Similarly, player Z’s attempts to pass the ball forward (time point 19-25) also
lead to an increase of agent X’s happiness. Note that not all of player Z’s actions lead
to an increase of happiness. There are two potential causes for this: either the action
fails (not shown in the figure), or the action does not contribute to player X’s desires.
In addition to the scenario shown above, various other scenarios have been
simulated in a systematic manner (varying from scenarios in which only positive or
negative events occur to scenarios in which both positive and negative events occur).
Due to space limitations, the details of these simulations are not shown here. To
improve the model’s accuracy, each simulation run has been evaluated in the
following manner. Based on the literature in decision making, emotion, and trust (see,
e.g., [4, 5] and references in those papers), a number of requirements for the behavior
of the agents have been formulated; some examples are the following:
• ‘events that contribute to an agent’s goals lead to an increase of happiness’
• ‘agents that perform more successful actions are trusted more’
• ‘happy agents perform different actions than sad agents’
These requirements have been tested against all simulated traces. In case a
requirement was not fulfilled, small modifications in the model were made, either by
correcting bugs in the specification or (more often) by adapting the values of the
parameters involved. After a number of iterations, the model was considered ready to
be implemented in a more realistic setting.
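Such requirement checks can be automated; for instance, the first requirement above can be tested against a trace along these lines (the trace encoding is hypothetical):

```python
def goal_events_increase_happiness(trace):
    """trace: list of (contributes_to_goal: bool, happiness: float),
    one pair per time point. Returns True iff every goal-contributing
    event is followed by a non-decreasing happiness value at the next
    time point.
    """
    for t in range(len(trace) - 1):
        contributes, h_now = trace[t]
        _, h_next = trace[t + 1]
        if contributes and h_next < h_now:
            return False
    return True

# a trace in which the goal-contributing event at t=1 raises happiness
ok = goal_events_increase_happiness(
    [(False, 0.5), (True, 0.5), (False, 0.55)])
# ok == True
```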
4 RoboCup Application
After the initial tests mentioned above indicated that the model showed acceptable
behavior, it has been implemented within the RoboCup 2D virtual soccer
environment. RoboCup is an international competition founded in 1997 with the aim
to develop autonomous (hardware and software) agents for various domains, with the
intention to promote research and education in Artificial Intelligence [10]. RoboCup
has four competition domains (Soccer, Rescue, Home, and Junior), each with a
number of leagues. For the current project, the RoboCup Soccer 2D Simulation
League was used, since this league provides good possibilities for rapid prototyping
for games with many inter-player interactions.
In order to implement the model for decision making with emotions and trust in an
efficient manner, part of the existing code of the 2005 Brainstormers team [14] was
reused. This code contains a number of explicit classes for different actions (such as
‘shoot_at_goal’, ‘pass_from_to’, ‘dribble’, ‘tackle’, and ‘run_free’). To create optimal
strategies at the level of the team, the Brainstormers used Machine Learning
techniques to learn which actions were most appropriate in which situations. Instead,
for the current paper, a different approach was taken: for each individual agent, the
original action classes were directly connected to the output (the generated actions) of
the implemented decision making model. Thus, instead of having a team of agents
that act according to a learned collective strategy, each agent in our implementation
acts according to its personal role and its (partly emotion-driven) desires and intentions.1

1 Recall that the aim of the current paper is not to develop optimal strategies, but rather to enhance
the believability of the virtual players.

This approach turned out to be successful in generating (intuitively) realistic soccer
matches, in which the behavior of the players is influenced by their emotion and trust
states. A screenshot of the application is shown in Figure 4. The top of the figure
shows the soccer field, including two teams of 11 agents (the colored circles) and the
ball (the white circle), which is held by the goalkeeper on the right. On top of each
agent, a small number is displayed, which indicates the player’s personal number.
Below, some information is displayed in plain text, to give some insight into the mental
states of the agents. Obviously, there is no space to display all aspects of the agents’
mental states (emotions, moods, trust states, beliefs, desires, and intentions).
Therefore the user can select which information is shown. In the example shown in
Figure 4, five columns of information are shown, indicating, respectively, the current
time, and each agent’s number, mood, emotion regarding itself, and current action.
Fig. 4. Screenshot of the RoboCup application
An example animation illustrating the behavior of the application can be found at
http://www.youtube.com/watch?v=I-ZamC-louo. This movie shows a scenario where
player 6 passes the ball to player 10, who then dribbles with it towards the goal, and
scores. As a result of these successful actions, various aspects of player 10’s mental
state change (see the textual information below the playing field): its emotion with
respect to itself increases (from 0.366 to 0.686), as well as its overall mood (from
0.398 to 0.557), its trust in itself (from 0.522 to 0.552), and its belief about its own
capability to shoot at the goal (from 0.509 to 0.602). Consequently, in the rest of the
scenario the player behaves more confidently, and tries to score more often.
The application has been tested extensively, using a large number of different
parameter settings (e.g. for initial trust, emotion and mood states, trust and emotion
flexibility, and action preferences, see also [6]). In all tests, the behavior of all players
was analyzed in a step-wise manner, and compared with expectations of the modelers
(based on common sense). Various interesting types of behavior were observed,
which were found to be consistent with behavior as observed in real soccer games.
For instance, players that had scored a number of goals were trusted more by their
teammates, and received more passes from them. Also, players with negative
emotions committed more fouls than other agents.
To further evaluate the application, 10 participants (experienced video gamers
between 25 and 32 years old) were asked to judge the behavior of the virtual players,
by answering questions like “do you think the players behave in a realistic manner?”
and “do you think the players show appropriate emotions?”. The initial results of this
evaluation were promising: the participants very much appreciated the players’
abilities to show emotions and trust. Overall, they had the idea that the presented
model made the agents more believable, and that it enhanced the experienced fun
when watching the soccer games. Nevertheless, in a later stage, a more elaborated
evaluation experiment will be performed, in cooperation with colleagues from Social
Sciences. In this experiment, we plan to compare virtual players that use different
variants of the presented model with players that do not have emotions and trust.
5 Discussion
To enhance believability of virtual soccer players’ affective behavior, this paper
presented a generic model for decision making of virtual agents in relation to
emotions and trust. The backbone of the presented model is the BDI framework,
which was extended with mechanisms to represent the dynamics of emotions and
trust. After testing the model by means of simulation experiments, it has been
incorporated into the virtual agents within the RoboCup 2D soccer environment.
Although preliminary, an initial evaluation indicated that the model has the
potential to enhance believability of virtual soccer players.
Related work in the area of believable virtual soccer players is scarce. Since the
initiation of the RoboCup competition in 1997, a number of participants focused on
the cognitive aspects of the players (e.g., [3, 11]), but emotional aspects were mostly
ignored. A welcome exception is presented by Willis [16], who proposes an
architecture to endow the robots in the physical league with emotional intelligence.
Unlike the model put forward in the current paper, this architecture has not been
implemented yet, nor does it address the concept of trust. Finally, some papers (e.g.,
[1]) address implementation of emotions within the spectators and commentators of
virtual soccer games; however, to the best of our knowledge, these approaches have
never been applied to the player agents.
As mentioned in the introduction, endowing virtual soccer players with more
human-like behavior can be useful for several reasons (see [16] for an extensive
overview): for instance, the games become 1) more fun to watch and 2) more faithful
to reality, which makes it possible to use them as an analytical tool for coaches. A
third reason put forward in [16] is that soccer teams with emotional intelligence may
have a competitive advantage over other teams. Although this was not the main focus
of the current paper, future research may investigate how well our agents perform
against similar agents without emotions and trust.
Other future work will include a further elaboration of the model, among others, by
focusing on other emotions such as anger and fear, and the interaction between
multiple emotions. Also, possibilities will be explored to visualize players’ emotional
states in such a way that they are easily observed by spectators. For instance, a
straightforward way to implement this would be using different colors to represent
different emotions. It is expected that these extensions will enhance spectators’
engagement in virtual soccer games even further.
References
1. Binsted, K. and Luke, S. (1999). Character Design for Soccer Commentary. In: Asada, M. and Kitano, H. (eds.), RoboCup-98: Robot Soccer World Cup II. LNAI, vol. 1604, Springer Verlag, pp. 22-33.
2. Bosse, T., Jonker, C.M., Meij, L. van der, and Treur, J. (2007). A Language and Environment for Analysis of Dynamics by SimulaTiOn. International Journal of AI Tools, vol. 16, issue 3, pp. 435-464.
3. da Costa, A.C.P.L. and Bittencourt, G. (1999). UFSC-team: A Cognitive Multi-Agent Approach to the RoboCup'98 Simulator. In: Asada, M. and Kitano, H. (eds.), RoboCup-98: Robot Soccer World Cup II. LNAI, vol. 1604, Springer Verlag, pp. 371-376.
4. Falcone, R. and Castelfranchi, C. (2004). Trust Dynamics: How Trust is Influenced by Direct Experiences and by Trust Itself. In: Proc. of AAMAS 2004, pp. 740-747.
5. Gratch, J. and Marsella, S. (2004). A domain-independent framework for modeling emotion. Journal of Cognitive Systems Research, vol. 5, pp. 269-306.
6. Höhle, D. (2010). A General Agent Model of Emotion and Trust using the BDI Structure. M.Sc. Thesis, Vrije Universiteit Amsterdam. http://hdl.handle.net/1871/16276.
7. Jones, M.V. (2003). Controlling Emotions in Sport. The Sport Psychologist, vol. 17, Human Kinetics Publishers, Inc., pp. 471-486.
8. Jonker, C.M. and Treur, J. (1999). Formal Analysis of Models for the Dynamics of Trust based on Experiences. In: Garijo, F.J. and Boman, M. (eds.), Multi-Agent System Engineering, Proc. of MAAMAW'99. LNAI, vol. 1647, Springer Verlag, Berlin, pp. 221-232.
9. Jowett, S. and Lavallee, D. (eds.) (2007). Social Psychology in Sport. Champaign: Human Kinetics.
10. Kitano, H., Asada, M., Noda, I., and Matsubara, H. (1998). RoboCup: robot world cup. IEEE Robotics and Automation Magazine, vol. 5, issue 3, pp. 30-36.
11. Muñoz-Hernandez, S. and Wiguna, W.S. (2007). Fuzzy Cognitive Layer in RoboCupSoccer. In: Melin, P. et al. (eds.), Proceedings of IFSA 2007. LNAI, vol. 4529, Springer Verlag, pp. 635-645.
12. Poole, S. (2000). Trigger Happy: Videogames and the Entertainment Revolution. Arcade Publishing, New York.
13. Rao, A.S. and Georgeff, M.P. (1991). Modelling Rational Agents within a BDI-architecture. In: Allen, J. et al. (eds.), 2nd International Conference on Principles of Knowledge Representation and Reasoning, KR'91. Morgan Kaufmann, pp. 473-484.
14. Riedmiller, M., Gabel, T., Knabe, J., and Strasdat, H. (2006). Brainstormers 2D - Team Description 2005. In: Bredenfeld, A. et al. (eds.), RoboCup 2005: Robot Soccer World Cup IX. LNCS, vol. 4020, Springer Verlag, pp. 219-229.
15. Silva, C.F., Garcia, E.S., and Saliby, E. (2002). Soccer championship analysis using Monte Carlo simulation. In: Yücesan, E., Chen, C.-H., Snowdon, J.L., and Charnes, J.M. (eds.), Proceedings of the 2002 Winter Simulation Conference, vol. 2, pp. 2011-2016.
16. Willis, M. (2008). RoboCup as a Spectator Sport: Simulating Emotional Response in the Four-Legged League. In: Proceedings of the 5th Australasian Conference on Interactive Entertainment. ACM Press, vol. 391.
17. http://www.ea.com/uk/news/ea-sports-predicts-spain-win-2010-world-cup.