Multi-agent Systems in Massive
Amit Lakhani
MSc Computer Animation
NCCA
Bournemouth University
December 2007
Abstract

This work demonstrates how to create a humanoid agent using Massive software. Artistic aspects are ignored in order to focus on the technical aspects of the recommended Massive pipeline, looking at creating an agent with motion captured actions, agent body, motion tree, inverse kinematics, intelligence and emergent behaviour. The process is illustrated by making agents play musical chairs, each acting differently according to behavioural rules specified in the agent brain. The project is intended to be educational and is accompanied by a tutorial showing techniques to create such a simulation.
Contents

1 Introduction
2 Multi-agent Systems
3 Autonomy of actors in Computer Graphics
  3.1 Particle-based Systems
  3.2 Rule-based behavioural Systems
  3.3 Crowd AI (Artificial Intelligence) Systems
4 Related Work
5 Massive
6 Fuzzy Logic
  6.1 Fuzzy Logic vs Boolean Logic
  6.2 Example of fuzzy logic implementation in Massive
7 Musical Chairs
  7.1 The rules of Musical Chairs
  7.2 Rules of Agents
  7.3 Agents
    7.3.1 Player
    7.3.2 Chair
8 Pipeline
  8.1 Massive Pipeline
9 Pipe implementation
  9.1 Motion tree
  9.2 Motion capture
  9.3 Skeleton design
  9.4 Import skeleton
  9.5 Adjust skeleton
  9.6 Import motion
    9.6.1 Transform and Loop/Trim
    9.6.2 Inverse Kinematics
10 The Brain
  10.1 Available tools
    10.1.1 Sound
    10.1.2 Vision
    10.1.3 Flow field
    10.1.4 Collision
    10.1.5 Ground Colour
  10.2 Proposed implementation
    10.2.1 Music On mode
    10.2.2 Music Off mode
  10.3 Actual implementation
    10.3.1 General rules and variables
    10.3.2 Music modes switch
    10.3.3 Output macros
    10.3.4 Music On mode
    10.3.5 Music Off mode
    10.3.6 The Loser
    10.3.7 Body Movements
    10.3.8 Holding poses
11 Further Work
12 Conclusion

List of Figures

1 Separation: steer to avoid collision with nearby flock mates
2 Alignment: match velocity with flock mates
3 Cohesion: steer to move toward the average position (centre) of flock mates
4 Typical structure of fuzzy logic nodes
5 Fuzz graph and membership curves
6 Massive recommended pre-production pipe
7 Massive recommended production pipe
8 Custom pipe
9 Motion tree
10 Bones and body
11 Degrees of freedom tab
12 Action editor
13 IK hold curves for right toe in the action editor for the walk action
14 Inside 'Turning' macro
15 Inside 'Walk Rate' macro
16 Music ON mode outside of macros
17 Distance range for walkToSit action
18 x angle range for walkToSit action
19 ox angle range for walkToSit action
20 Combined sitting area
21 Music off mode from outside macros
22 Sad player agent

List of Tables

1 Fuzzy logic example
2 Boolean Descriptions
3 Overview of Mode rules
4 Explicit rules of Modes
5 Action Triggers
6 Short Python script
7 Output sounds
8 Input sounds
9 Reception channels for vision
10 Ground colour reception channels
1 Introduction

Multi-agent systems offer a great number of advantages to the industry and have many uses, from pedestrian simulation for planning large public events to crowd simulation to populate films with extras. Massive software has become a recent addition to many production houses' pipelines since it was made commercially available. It aims to provide an all-in-one solution for creating agents that can behave according to rules to fulfil a number of different situations. This project discusses different aspects of Massive, including most of its capabilities and the tools available to create multiple agents with a degree of autonomy. A simulation is implemented to demonstrate how the tools work: a group of people playing musical chairs, with the potential for it to be the biggest game of musical chairs ever (the world record stands at 5,060 participants!). As Massive is fairly new to the industry, a tutorial of the demonstration agents accompanies this document.
2 Multi-agent Systems

Multi-agent systems (MAS) are widely used in many fields, from artificial worlds to program design. The concept is simple: Wooldridge[12] states that multi-agent systems are composed of multiple interacting computing elements, known as agents. There must be some degree of autonomous behaviour that allows the agents to meet their design objective and be capable of interacting with other agents in analogous ways such as cooperation, coordination and negotiation. In addition, an agent should be able to act in an environment, be driven by a rule set and perceive its environment, to name just some of the criteria that Ferber[11] suggests an agent should meet.

From the point of view of computer animation, a MAS can be used to help speed up the production of projects by freeing animators' time. For this reason artificial intelligence is rarely desired, due to the amount of time it takes to evolve an agent.
3 Autonomy of actors in Computer Graphics

The movements and interaction of agents can be achieved in a number of ways; three techniques are discussed in the following subsections, 3.1, 3.2 and 3.3.
3.1 Particle-based Systems

Particles are animated by simulating factors such as wind, gravity, attractions and obstacle collision. Agents made up of pre-animated cycles are then attached to the point particles. This method is available in most 3D software packages, such as Houdini and Maya. Because each particle's movement is based on its reaction to the factors listed above, the simulation is often unrealistic, as it is hard to control individual agents. This kind of method is well suited to situations such as emergency evacuation simulations, where crowd behaviour and interaction with the environment are minimal. In this case, fluid simulation could be used to control the particles as the crowd will be flowing towards the exits.
3.2 Rule-based behavioural Systems

Agents are assigned particular rules on which to base their actions. This approach is a particularly fast method in simulation: once the rules of the crowd are identified and implemented, the agents will move in a naturalistic way. However, getting all the rules in place can be quite time consuming in complex situations. This technique has been adopted by commercial packages such as Character Studio 4.0 and SoftImage/Behaviour. As with particle-based systems, this is more suited to real-time simulation. Off-line productions for film may require some touch-up work on poor behaviour.
3.3 Crowd AI (Artificial Intelligence) Systems

This technique is very realistic but very expensive to program and implement. Agents are given artificial intelligence, which determines their actions based on several factors, including energy levels, hearing, sight and aggressiveness. The agents are also more adaptable to different terrains and situations; for example, if an agent comes across a ladder then it may decide to climb it. The method is often found in present-day computer games. Agents can take a long time to train so that they can evolve their behaviour.
4 Related Work

There are several topics related to this project, some of which are: environment interaction, collision avoidance, rule-based and behavioural systems. Some past work on these topics is presented in this section.

Combinations of the techniques mentioned in section 3 may be used in order to satisfy certain cases, as Reynolds[1] succeeded in doing in 1986. Reynolds used a particle system and a rule-based approach to successfully create a flocking system using 'boids' (bird-oids). They adhere to three simple rules: separation (figure 1), alignment (figure 2) and cohesion (figure 3). This gives the boids realistic behaviour to simulate the movement of flocks and herds of animals, as well as schools of fish. Each boid has access to world information, but only local data of its own flock is required to behave adequately.
Figure 1: Separation: steer to avoid collision with nearby flock mates
Figure 2: Alignment: match velocity with flock mates
Reynolds' flocking system works well when observing behaviour in natural formations by birds, fish, animals, etc. However, pedestrians act in a slightly different way: it is true that they avoid collisions as would, say, fish in a school, but they do so in a more selfish way, and do not adhere to the rules of cohesion and alignment in quite the same manner. A school of fish will have the same objective, whereas pedestrians will rarely have the same primary goal.
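The three steering rules can be illustrated with a minimal 2D sketch. This is a simplified interpretation, not Reynolds' original implementation; the list-based vector maths and the `min_dist` threshold are assumptions for illustration:

```python
# Minimal 2D boids steering sketch: each rule returns a steering vector
# that would be added to the boid's velocity each simulation step.
# Boids are dicts with "pos" and "vel" two-element lists.

def separation(boid, neighbours, min_dist=1.0):
    """Steer away from flock mates that are closer than min_dist."""
    steer = [0.0, 0.0]
    for n in neighbours:
        dx = boid["pos"][0] - n["pos"][0]
        dy = boid["pos"][1] - n["pos"][1]
        if (dx * dx + dy * dy) ** 0.5 < min_dist:
            steer[0] += dx
            steer[1] += dy
    return steer

def alignment(boid, neighbours):
    """Steer towards the average velocity of the flock mates."""
    if not neighbours:
        return [0.0, 0.0]
    avg_vx = sum(n["vel"][0] for n in neighbours) / len(neighbours)
    avg_vy = sum(n["vel"][1] for n in neighbours) / len(neighbours)
    return [avg_vx - boid["vel"][0], avg_vy - boid["vel"][1]]

def cohesion(boid, neighbours):
    """Steer towards the average position (centre) of the flock mates."""
    if not neighbours:
        return [0.0, 0.0]
    cx = sum(n["pos"][0] for n in neighbours) / len(neighbours)
    cy = sum(n["pos"][1] for n in neighbours) / len(neighbours)
    return [cx - boid["pos"][0], cy - boid["pos"][1]]
```

A full boids implementation would weight and sum the three vectors per boid, per frame; only local flock-mate data is needed, matching the observation above.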
Other authors have also used rule-based behavioural systems to describe human behaviours. Farenc et al[5] present a syntactic analyser of natural language to specify behavioural rules to guide crowds, based upon the hierarchical structure defined by Rasmussen's[6] Skill-Rule-Knowledge levels. Farenc et al use several modules synchronised by a controller to handle various information and tasks. A virtual city database holds environment information, a human crowd manager module controls the motions and behaviours of groups of agents, a smart object manager is responsible for handling agent-to-object interactions and an agent manager is a library consisting of agent actions such as behaviours and motions.

Figure 3: Cohesion: steer to move toward the average position (centre) of flock mates
Musse and Thalmann[4] use a hierarchical model to create guided to autonomous crowds with differing levels of control. The system uses information contained in the group of individuals in order to control behaviour based on rules.
When observing people in dense urban environments, as pointed out by Loscos et al[9], it is apparent that people rarely walk alone; it is more common for them to be in pairs. Therefore it is important to include group behaviour to add realism. Musse et al[7] add the fact that each group will often have differing goals, and have thus defined a system where agents from the same group share the same list of goals but are subject to social effects, namely that they may change groups. They used three laws: agents in the same group walk at the same speed, follow the same path and wait for one another. In another paper by Musse et al, agents are given a level of dominance, a level of relationship and an emotional state, and are ruled by seeking and flocking laws. The more sociological approach results in more humanly realistic behaviour.
Collision detection is a very important aspect of crowd simulation and a variety of methods have been successfully attempted. Reynolds implemented collision detection based on exact mathematical computation; Bouvier and Cohen simulated a simple crowd of 45,000 agents using Newtonian mechanics. Feurtey proposed collision detection based upon predicting agent trajectories in an (x, y, t) space. He used a cone to limit the manoeuvrable space by analysing other agents' speeds and trajectories. This method has the advantage of allowing an agent to evaluate the consequence of changing its path and speed with respect to its goal; however, at present it is not scalable to large populations.

Musse et al's[8] proposal is to use an algorithm based on two principal laws: the slower of the two agents stops just before the collision, or both go around each other. The latter is only used when the observer's position is near enough to notice a collision, due to computational expense. This is still to be tested on large populations.
A more common approach, as used by Loscos et al[9], is to use a 2D grid to manage the positions of pedestrians.
5 Massive

Massive is the premier 3D animation system for generating crowd-related visual effects for film and television.[3]

Massive software is for creating behavioural simulations using multi-agent systems and is designed to slot easily into a company's existing pipeline. It allows actors, known as agents, to be created with a set of senses used to react to their surrounding local and world environment. Motion capture or key-framed animations can be used to control the actions of the agents based on their reactions. When the simulation is run, the agents will interpret and act in the environment based on rules defined in the agent's brain; the agents can then be scaled up into hundreds of thousands in order to populate a whole scene. Variation can be added to each agent in order to add some individuality and realism to the simulation.

Massive is rule based and uses logic derived from fuzzy set theory, known as fuzzy logic (explained in section 6), in order to control how an agent's brain reacts to given environment information. A given input is 'fuzzed' and then 'defuzzed' to obtain a truth value that the agent can use to act accordingly.

Massive is made up of four sections: scene page, body page, brain page and motion page. Each fuzzy agent consists of a body and a brain. Agents can transition between actions based on the agent's motion tree. The agents are placed into a scene which can have an optional terrain model.

Scenes can then be rendered out using RenderMan with attached geometry and shaders for agent clothing and exterior.

Table 1: Fuzzy logic example
  Inputs: temperature
  Outputs: fan speed
  Fuzzy input values representing temperature: cold, warm, hot
  Fuzzy output values for fan speed: stop, slow, fast
  Rules: if hot then fast; if warm then slow; if cold then stop
6 Fuzzy Logic

As previously mentioned, Massive makes use of fuzzy logic. Kaehler[10] accurately describes fuzzy logic as ...a simple way to arrive at a definite conclusion based upon vague, ambiguous, imprecise, noisy, or missing input information. This is a good definition of fuzzy logic in an uncomplicated way. The easiest way to explain what it does is with a common example.

In table 1, there is a certain amount of uncertainty present when receiving the input data. It is important to determine the difference between a large and a small error; e.g. an error of 1°C may be considered small enough to ignore, while anything more than 3°C out may be too large. Therefore if the temperature sensor reads less than 3°C below the 'hot' value, then 'hot' equals true.
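The fan-speed example of table 1 can be sketched in a few lines of Python. The triangular membership shapes and the breakpoint temperatures are illustrative assumptions, not values from the table:

```python
# Fuzzy fan-speed sketch for the example in table 1. The membership
# breakpoints (in degrees C) are illustrative assumptions.

def membership(x, left, peak, right):
    """Triangular membership: 0 outside [left, right], 1 at peak."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def fan_speed(temp):
    """Apply the three rules and defuzzify by weighted average."""
    cold = membership(temp, -10.0, 10.0, 20.0)
    warm = membership(temp, 10.0, 25.0, 35.0)
    hot = membership(temp, 25.0, 40.0, 60.0)
    # Rules: if hot then fast (1.0), if warm then slow (0.5), if cold then stop (0.0)
    total = cold + warm + hot
    if total == 0.0:
        return 0.0
    return (hot * 1.0 + warm * 0.5 + cold * 0.0) / total
```

Temperatures between two peaks partially activate both rules, so the fan speed changes gradually rather than jumping between discrete settings; this is the behaviour that makes fuzzy control robust to noisy input.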
6.1 Fuzzy Logic vs Boolean Logic

Fuzzy logic and Boolean logic can both be used to manipulate logical values. However, the conventional Boolean logic used in most computer programming can be too explicit for dealing with the subtleties of everyday logic; this is where fuzzy logic is more appropriate.
Description    Boolean Logic
Close          distance ≤ 1km
Far            1km < distance < 10km
Very Far       distance ≥ 10km
Table 2: Boolean Descriptions
Figure 4: Typical structure of fuzzy logic nodes
An example will help clarify this point. In the case of describing distances using Boolean logic we can differentiate between distances as described in table 2.

According to this logic, if a place is 1km away then it is far, which is fine. However, if a place is 0.9km away, just 100 metres closer, then it is considered to be close. Would this kind of logic really be used when describing the distance of a place to another person? This is where fuzzy logic comes into play: in the same situation the bounding values are not so sharply defined, so when the distance is just 100 metres closer it would still be considered far.
6.2 Example of fuzzy logic implementation in Massive

In Massive the brain is built up of nodes; the typical structure is INPUT > FUZZ > DEFUZZ > OUTPUT, see figure 4. First the agent must get input from one of its senses. The fuzzy value must be specified using a choice of 5 different membership curves to form a different graph (see figure 5); these values are then defuzzed to determine the output value. The defuzzified value can then be put into an output node with a channel specified to adjust.

The fuzz value is 0 when the input value maps to the graph where the curve is at the bottom and 1 when at the top, with curve values in between. These values are passed to the defuzz node, and this 0-to-1 value is proportionally mapped between 0 and the uppermost value specified in the defuzz node. The mapped value is then passed into the output node to control the selected channel or variable.
Figure 5: Fuzz graph and membership curves
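The proportional mapping described above can be sketched as two small functions. The rising-ramp membership stands in for Massive's actual curve types, which is an assumption for illustration:

```python
# Sketch of the INPUT > FUZZ > DEFUZZ > OUTPUT chain described above.
# The linear ramp stands in for one of Massive's membership curves.

def fuzz(value, low, high):
    """Map an input onto a 0-to-1 membership (simple rising ramp)."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

def defuzz(truth, out_max):
    """Proportionally map the 0-to-1 truth value between 0 and out_max."""
    return truth * out_max
```

For example, an input halfway up the ramp gives a fuzz value of 0.5, which a defuzz node with an upper value of 60 would map to 30 before it reaches the output node's channel.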
7 Musical Chairs

7.1 The rules of Musical Chairs

Before attempting to simulate a game of musical chairs we must state the rules of the real-life game. There are many variations of musical chairs, varying from country to country, family to family and amongst age groups. Therefore we must explicitly define the rules that we are going to implement. The rules used have been carefully picked from different variations in order to simplify the implementation.

• There must be at least 2 players and another person to switch the music on/off.
• The number of persons to play is counted.
• Two straight rows of chairs are arranged back to back. The number of chairs should equal one less than the number of players.
• One person, apart from the persons playing the game, must play some music and at some random time abruptly stop the music.
• When the music is playing the players must walk around the chairs, one behind the other.
• The moment the music stops, all players must try to sit on the nearest chair.
• Once a player passes a chair he/she cannot go back to it.
• As there is one chair fewer than the number of players, one player will have no chair; the player left standing is out of the game.
• The music begins again with one more chair removed, then the whole process is repeated.
• The person left sitting at the end of the game is the winner!
7.2 Rules of Agents

Now that the rules of the real-life game have been established, they must be broken down to understand what we need the agents to achieve.

Already we can see that there are two main scenarios in the game: when the music is on and when the music is off. During these two different scenarios the agents must react to different rules, therefore a mode-based approach would be appropriate. The agents must obey the rules as defined in table 3.
The rules in table 3 would be sufficient to explain the rules of the game to a human, as a human does not need to be explicitly told that a rule of the game is that you cannot walk through one another, mainly because it is physically not possible, but also because people already have enough intelligence to know not to walk through one another. However, with our agents we cannot take for granted that they already know information like that, so we must further clarify the rules that they must obey. Table 4 defines the rules that the agents must explicitly follow.

Table 3: Overview of Mode rules
  Music ON: Walk around chairs without overtaking
  Music OFF: Sit on nearest available chair

Table 4: Explicit rules of Modes
  Music ON:
    Avoid collisions with one another
    Walk around the chair arrangement
    Avoid intersecting with chairs
    Do not overtake other players
  Music OFF:
    Avoid collisions with one another
    Walk around the chair arrangement
    Avoid intersecting with chairs
    Walk around chairs until an available chair is found
    Sit on next nearest available chair
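The mode-based structure of table 4 can be sketched as a simple rule dispatch. This is a hypothetical illustration in Python; in Massive these rules become fuzzy-logic nodes in the agent's brain, not code:

```python
# Hypothetical rule dispatch for the two music modes of table 4.
# The dictionary layout is an illustration, not Massive's representation.

RULES = {
    "music_on": [
        "avoid collisions with one another",
        "walk around the chair arrangement",
        "avoid intersecting with chairs",
        "do not overtake other players",
    ],
    "music_off": [
        "avoid collisions with one another",
        "walk around the chair arrangement",
        "avoid intersecting with chairs",
        "walk around chairs until an available chair is found",
        "sit on next nearest available chair",
    ],
}

def active_rules(music_playing):
    """Return the rule set for the current mode."""
    return RULES["music_on" if music_playing else "music_off"]
```

Note that the first three rules are shared between modes, which suggests that in the brain they can live outside the mode switch while only the overtaking and sitting rules are gated on the music state.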
7.3 Agents

It is proposed that 2 different agents must be created in order to give the players enough world information to be able to play the game. These agents are: Player and Chair.

7.3.1 Player

The Player agent is the human agent that will be participating in the game. This agent's brain will hold the bulk of the logic amongst the agents, and will receive information from the other agents to create awareness of what is happening in the game (e.g. is the music playing) and where it is in the scene (e.g. is it near a free chair).

7.3.2 Chair

The Chair agent will be used to emit data to the Player agent to tell it whether the chair is available or not.
Figure 6: Massive recommended pre-production pipe
8 Pipeline

This section discusses the first few stages of creating the Player agent. As the brain is such a large subject it is discussed in the next section.

8.1 Massive Pipeline

Massive has documented its recommended pre-production (figure 6) and production pipe (figure 7). The pre-production pipe concerns designing and making the agents. As this project is focused more on the technical side of creating agents rather than the art side, the last 3 stages will be ignored. Therefore the project will roughly follow the recommended pipe, but it will have to be adapted to match the customised pipe in figure 8.
Figure 7: Massive recommended production pipe
Figure 8: Custom pipe
Figure 9: Motion tree
9 Pipe implementation

9.1 Motion tree

The motion tree is required to determine possible sequences of actions that the agents may perform. Ideally a motion tree should be designed before the motion capture (mocap) stage. However, in the case of this project there will not be time to complete a mocap session and therefore pre-captured data will be used. This does create some limitations but will still achieve the learning objectives of the project.

The game rules suggest that the following actions will be required:

• Walk
• Walk to Sit
• Sit
• Sit to Walk

The tree can be seen in figure 9. The boxes represent the transitions between actions and the oval shapes represent the actions.

Triggers from the brain cause one action to transition to another, or to the same one in the case of repeating actions. The triggers are as depicted in table 5. You will notice from the motion tree that different action nodes are connected to the transition nodes in different ways. If the action is repetitive, as in the case of the walk action, then the walk transition is connected in a loop between the action and transition node. If the transition begins in one pose and ends in another then it is a 'one shot' non-looping action, and it is connected from the beginning action to the end action.
9.2 Motion capture

The motion can be captured using a number of available methods. Once captured, the data must be cleaned up and saved as either Acclaim .amc files or Biovision .bvh files. The mocap used for this project is taken from the archives of AccessMocap, Bournemouth University.
Action       Trigger    Loop Type
walk         walk       looping
walkToSit    sit        one shot
sit          sit        looping
sitToWalk    walk       one shot
Table 5: Action Triggers
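The action/trigger structure of table 5 and figure 9 can be sketched as a small transition lookup. The action names mirror the motion tree; the dictionary layout and the "done" trigger for finished one-shot actions are assumptions for illustration:

```python
# Motion-tree sketch based on table 5: looping actions transition back
# to themselves, one-shot transitions lead on to the next action.

MOTION_TREE = {
    "walk":      {"loop": True,  "transitions": {"sit": "walkToSit", "walk": "walk"}},
    "walkToSit": {"loop": False, "transitions": {"done": "sit"}},
    "sit":       {"loop": True,  "transitions": {"walk": "sitToWalk", "sit": "sit"}},
    "sitToWalk": {"loop": False, "transitions": {"done": "walk"}},
}

def next_action(current, trigger):
    """Follow a trigger from the current action; stay put if none matches."""
    return MOTION_TREE[current]["transitions"].get(trigger, current)
```

So a brain emitting the "sit" trigger while the agent is walking moves it into the walkToSit transition, and the one-shot transition then delivers the agent into the looping sit action.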
9.3 Skeleton design

The skeleton can be designed while the motion tree is designed, but must be completed before the mocap stage. The skeleton can be designed in Massive and exported as an Acclaim .asf file to be used at the mocap studio. Alternatively it can be designed in Maya and saved as a Maya ASCII file, or in any software that exports to .asf files.

The skeleton used is from the mocap studio and has been exported from Motion Builder as an Acclaim .asf file with no root specified.
9.4 Import skeleton

If the skeleton is not already in Massive it can be imported as either a Maya ASCII file or an Acclaim .asf file.

The skeleton is imported using the command line as follows:

massive skeleton.asf

Where 'massive' is the command to launch the program and 'skeleton.asf' is the bones file to be imported. When the program loads it will have created a new agent with the body set up according to the skeleton file.
9.5 Adjust skeleton

When the skeleton is imported the bones will be as expected; however, the segments that are generated may need to be adjusted to match the skeleton. This can be done in the body page by adjusting the shape and rest position of the segments. The adjusted skeleton can be seen in figure 10.

Inverse kinematics (IK) and degrees-of-freedom settings need to be specified for each segment of the skeleton in order for IK to work properly. In the body page each segment has a dof (degrees of freedom) tab (see figure 11) where IK can be switched on/off for each segment, along with the amount of freedom amongst the rx, ry, rz, tx, ty and tz of each segment.
Figure 10: Bones and body
Figure 11: Degrees of freedom tab
9.6 Import motion

The motion is imported as Acclaim .amc files and can be edited using the action editor, see figure 12. The action editor allows each action to be trimmed for smooth loops, inverse kinematics can be added and motion curves can be viewed and edited (editing is better done in appropriate software such as Motion Builder).
9.6.1 Transform and Loop/Trim

Each imported action begins with the T-pose, so each loop must be trimmed at the start so that it begins from the start of the action itself. In order to achieve seamless, smooth blends it is essential that the end pose of one action that transitions to another is the same as the start pose of the following action. For example, the walk cycle begins with the agent starting to move his left leg forwards, therefore the end of the sitToWalk cycle must end with the agent starting to move his left leg forwards.
Figure 12: Action editor

9.6.2 Inverse Kinematics

With this agent, IK has been applied to the feet. The mocap data for the walk cycle was taken with the agent walking in a straight line, therefore when the agent turns, the feet slide on the ground terrain in an unrealistic fashion. IK hold curves can be generated in order to keep the feet still when on the ground, and in order to make sure the connected bones move in the correct way.

The bones that we want to be affected are as in the following hierarchy: Hip > Knee > Toe. However, at the moment the IK will affect the goal bone (toe) and the two bones above it in the hierarchy, namely Knee > Ankle > Toe. To rectify this, IK can be switched off for the ankle so that it is ignored in the hierarchy. It is worth noting that the names of the bones represent the point where they begin, so the 'Toe' bone is actually the foot bone.

As the ankle bone is skipped, its IK may result in the bone rotating at strange angles; therefore rotation constraint curves must be generated for the ankle. These new curves will try to keep the ankle parallel to the ground. In the action editor there is a function that will automatically create these curves. Next, the IK curves need to be generated for the toe bone. Again, using the action editor, IK curves can be generated and so can IK hold curves. With the IK hold curves, a threshold for the speed and height of the bone needs to be specified to generate the appropriate curves, which can also be hand edited. The IK hold curves look as in figure 13, shown as the red line; the yellow line represents the speed and the green line is the height. IK hold curves range between 1 and 0: when at 1 the toe bone will stay fixed to the ground, when at 0 it will move as it would with the original action.
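The speed/height thresholding that generates a hold curve can be sketched as follows. The per-frame data and threshold values are illustrative assumptions; Massive generates these curves internally from the thresholds entered in the action editor:

```python
# Sketch of generating an IK hold curve from per-frame toe speed and
# height, as described above: hold (1) when the toe is both slow and
# low (planted on the ground), release (0) otherwise.

def hold_curve(speeds, heights, speed_threshold=0.05, height_threshold=0.02):
    """Return a 0/1 hold value per frame for the toe bone."""
    return [
        1.0 if s < speed_threshold and h < height_threshold else 0.0
        for s, h in zip(speeds, heights)
    ]
```

For instance, a planted frame (speed and height near zero) gives 1 and a mid-swing frame gives 0, which is exactly the red curve shape visible in figure 13.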
This must be implemented for each action. In order to speed up the process, Massive offers functions that can be used with Python scripting. The script found in table 6 is executed in the text port to generate rotation constraint, IK and hold curves. The hold curves must then be adjusted in the usual way.
Figure 13: IK hold curves for right toe in the action editor for the walk action
agent = get_agents()
a = agent.actions
l_rc = agent.find_segment("LeftAnkle")
l_ik = agent.find_segment("LeftToe")
r_rc = agent.find_segment("RightAnkle")
r_ik = agent.find_segment("RightToe")
while a:
    reset_all()
    print a.name
    a.create_rc_curves(l_rc)
    a.create_rc_curves(r_rc)
    a.create_ik_curves(l_ik)
    a.create_ik_curves(r_ik)
    a.create_hold_curve(l_ik)
    a.create_hold_curve(r_ik)
    a = a.next
Table 6: Short Python script
Channel   Range    Explanation
sound.f   0 to ∞   Specifies the constant frequency of the emitted sound
sound.a   0 to ∞   Specifies the amplitude of the emitted sound

Table 7: Output sounds
Channel    Range         Explanation
sound.f    0 to ∞        Value of sound frequency received
sound.d    0 to 1        Distance of sound source
sound.a    0 to ∞        Amplitude at the receiving agent
sound.x    -180 to 180   Polar coordinate about the Y axis of sound source
sound.y    -90 to 90     Vertical polar coordinate of sound source
sound.ox   -180 to 180   Polar coordinate about Y axis of relative orientation
sound.oy   -90 to 90     Vertical coordinate of relative orientation

Table 8: Input sounds
10 The Brain
By taking each rule in turn we can propose how it can be implemented in Massive using the
information available to the agents.
10.1 Available tools
Massive offers several senses that can be used individually or combined using logic in order to obtain the
desired behaviour. The agents receive information via various channels and can use fuzzy logic to react
accordingly. These are covered in subsections 10.1.1 to 10.1.5.
10.1.1 Sound
Agents can emit spherical sound fields which are then audible to all other agents. The values of the output
channels in table 7 can be specified as explained.
Agents can receive sound information using the reception channels in table 8.
Three different formants of sound can be specified for each agent.
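As an illustration of how these sound channels might drive a decision, the sketch below expresses a hypothetical "the chair is calling" rule in Python, using min as the fuzzy AND. The chair frequency, thresholds and function names are all invented; in Massive the equivalent logic would be built from fuzz nodes rather than written as code:

```python
def fuzzy_near(d, full=0.2, zero=1.0):
    """Membership of 'sound source is near': 1 below `full`, falling
    linearly to 0 at `zero`. sound.d ranges from 0 to 1."""
    if d <= full:
        return 1.0
    if d >= zero:
        return 0.0
    return (zero - d) / (zero - full)

def chair_is_calling(sound_f, sound_d, chair_freq=440.0, tol=5.0):
    """Fuzzy degree to which a heard sound is an available chair:
    the frequency must match AND the source must be near (AND = min).
    Channel names follow tables 7 and 8; all values are illustrative."""
    freq_match = 1.0 if abs(sound_f - chair_freq) <= tol else 0.0
    return min(freq_match, fuzzy_near(sound_d))

print(chair_is_calling(440.0, 0.1))  # 1.0 -- right pitch, very close
print(chair_is_calling(200.0, 0.1))  # 0.0 -- wrong frequency
```

The output degree could then drive a "walk towards the chair" rule with a strength proportional to the membership value.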
10.1.2 Vision
Vision is implemented using a scanline render which occurs once per frame (except the first) for any agents
with access to vision data; this information can then be used by fuzzy rules. Because vision impacts
simulation time quite significantly, it should be used only when other methods are inappropriate.
Channel    Range     Explanation
vision.x   -1 to 1   Horizontal scan position offset to align with agent space +ve z axis
vision.y   -1 to 1   Vertical scan position
vision.z   0 to 1    Distance
vision.h   0 to 1    Hue or colour
vision.i   -1 to 1   Horizontal scan position in image space

Table 9: Reception channels for vision
Agents react when they see other agents or segments according to the agent/segment colours, which are
specified in the body section of Massive.
The reception channels available to the agents are in table 9.
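To illustrate how the vision channels can feed a fuzzy rule such as "slow down if a same-coloured agent is close in front", here is a small Python sketch. The player hue and the fall-off thresholds are invented values, and min stands in for the fuzzy AND of Massive's brain nodes:

```python
def slow_down_degree(vision_x, vision_z, vision_h, own_hue=0.6):
    """Fuzzy degree to which the agent should slow down: it sees a
    segment of the player colour (vision.h), roughly straight ahead
    (vision.x near 0) and close (vision.z small). The hue and
    thresholds are illustrative, not taken from any scene file."""
    same_colour = 1.0 if abs(vision_h - own_hue) < 0.05 else 0.0
    ahead = max(0.0, 1.0 - abs(vision_x) / 0.3)   # fades out past +/-0.3
    close = max(0.0, 1.0 - vision_z / 0.2)        # fades out past z = 0.2
    return min(same_colour, ahead, close)         # fuzzy AND

print(slow_down_degree(0.0, 0.1, 0.6))  # 0.5 -- same colour, dead ahead
print(slow_down_degree(0.0, 0.1, 0.2))  # 0.0 -- different colour, ignore
```

The returned degree could feed the 'Walk Rate' defuzzification described later, reducing speed in proportion to how strongly the rule fires.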
10.1.3 Flow field
The flow field provides agents with directional information used to guide them along a particular path. Flow
fields can be defined by using splines to draw a path, which is then embedded into the terrain map's alpha
channel. The field represents angles about the world Y axis ranging from 0 to 360 degrees. An agent is then
able to reference the value of the flow field relative to its own heading, and using fuzzy logic it can be told
to react to that heading in the desired manner. The channel used is 'ground.flow', with a range of -180 to 180.
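A minimal sketch of how ground.flow could steer an agent, assuming a simple proportional controller with a clamped turn rate (the gain and clamp values are invented; Massive expresses this with fuzzy rules rather than code):

```python
def flow_turn(ground_flow, gain=0.1, max_turn=5.0):
    """Turn rate (degrees per frame) steering the agent towards the
    flow direction. `ground_flow` is the flow angle relative to the
    agent's heading, -180 to 180. Gain and clamp are illustrative."""
    turn = gain * ground_flow
    return max(-max_turn, min(max_turn, turn))

print(flow_turn(20.0))    # 2.0  -- gentle correction
print(flow_turn(-170.0))  # -5.0 -- large error, clamped hard turn
```

The clamp prevents agents from snapping around instantly when they face against the flow.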
10.1.4 Collision
Although collision information is more useful when using dynamics, it can also be used to create invisible
triggers. Invisible agents can be placed in the scene, and when an agent collides with one it receives some
information about the world location of the collision.
10.1.5 Ground Colour
Massive offers a paint tool which allows the terrain map to be painted in different colours and then relay
information based on the terrain colour beneath the agent. Table 10 shows the available channels.
10.2 Proposed implementation
As there are two modes to consider, it is a good idea to address one at a time.
Channels                                 Range     Explanation
ground.r, ground.g, ground.b, ground.a   0 to 1    Colour of terrain beneath agent for red, green, blue and alpha
ground.r.dx, ground.g.dx, ground.b.dx    -∞ to ∞   Gradient of terrain R, G, B with respect to agent's X
ground.r.dz, ground.g.dz, ground.b.dz    -∞ to ∞   Gradient of terrain R, G, B with respect to agent's Z

Table 10: Ground colour reception channels
10.2.1 Music On mode
When the music is on we can satisfy multiple rules using the same mechanisms, as below:
• Walk around the chairs & avoid intersecting with the chairs: Based upon the tools available in Massive,
the obvious way to make agents walk around the chairs is to use a flow field. The flow field path can be
drawn around the chairs, forcing the agents to stay on a strict path and thus preventing them from
intersecting with the chairs.
• Avoid colliding with one another & do not overtake each other: As the agents will be one behind the
other, the agents' vision and some fuzzy logic can be used to achieve these rules. The agents can be told
that if they see another agent and it is too close, then slow down. This will stop them from colliding and
overtaking.
10.2.2 Music Off mode
In this mode some of the rules from the previous mode must still be in place, some with extra criteria to be
met, along with some additional rules.
• Walk around the chairs & avoid intersecting with the chairs: This can be achieved in the same way as
proposed above.
• Avoid collisions with one another: The same method as before can be used but must be extended, as the
agents are now allowed to overtake each other. Using vision, the agents must slow down when too close to
each other, but now they must also be told to turn left or right depending on where the agent in front
is.
• Sit on nearest available chair: Sound or vision can be used to tell an agent where a chair is so that
it can walk towards it and sit. The availability of a chair can be relayed to the agents by changing the
chair's emitted frequency if using sound, or its colour if using vision.
The last two rules must take precedence over the first rule, as the agent should only follow the flow field until
an available chair is found, at which point it should go to the chair and sit. Similarly, when an agent is
overtaking another it may have to leave the path of the flow field. Therefore a priority system must be
implemented so that when either of the two latter rules is trying to control the agent's direction, the first
rule is ignored. This can be done using Boolean logic.
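The priority scheme can be sketched as a simple selector: each rule either proposes a turn or stays silent, and higher-priority rules shadow lower ones. This is an illustration of the Boolean-logic idea, not Massive node syntax:

```python
def choose_turn(sit_turn, avoid_turn, flow_turn):
    """Priority selector for the agent's turn output: the sit rule and
    the overtaking/avoidance rule override the flow field. `None` means
    the rule is not firing. Illustrative sketch only."""
    if sit_turn is not None:        # heading for a free chair wins
        return sit_turn
    if avoid_turn is not None:      # then collision avoidance
        return avoid_turn
    return flow_turn                # otherwise follow the flow field

print(choose_turn(None, None, 1.5))   # 1.5  -- flow field steering
print(choose_turn(12.0, -3.0, 1.5))   # 12.0 -- sit rule overrides all
```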
10.3 Actual implementation
The outline of the actual brain implementation is explained in this section; for a more detailed description,
refer to the accompanying tutorial.
10.3.1 General rules and variables
To begin with, the agent must walk on the ground. To do this the ground channel is used to find the ground
height in relation to the agent, and the agent can then adjust its [ty]:offset to the ground level. Using the
offset moves the whole agent.
Sound is emitted from the agent using the sound.a and sound.f channels, allowing other agents to
gather local information when they are close.
10.3.2 Music modes switch
As mentioned in the planning stage, a mode-based approach has been used for music ON and OFF. All
rules that apply when the music is on stop when the music goes off, and vice versa. A new brain variable
is created and controlled by a timer, setting the music variable to 0 when off and 1 when on.
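The timer-driven music switch can be sketched as follows, assuming an invented period and duty cycle; in Massive this is a brain variable driven by a timer rather than a function:

```python
def music_on(frame, fps=24, period_s=20.0, on_fraction=0.7):
    """Value of the 'music' brain variable at a given frame: 1 while the
    music plays, 0 while it is off. Driven by a repeating timer; the
    period and duty cycle are invented for illustration."""
    t = (frame / fps) % period_s
    return 1 if t < on_fraction * period_s else 0

print(music_on(0))    # 1 -- music playing at the start
print(music_on(400))  # 0 -- 400/24 ~= 16.7 s, past the 14 s cutoff
```

Every music-on rule is ANDed with this value, and every music-off rule with its negation, so the two rule sets never fire together.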
10.3.3 Output macros
Each rule has been divided into a macro (group) containing the input fuzzification stage, and there are two
output macros that handle the defuzzification stage for the agent's turns and speed changes. This keeps the
brain organised and allows easy reusability and value averaging in the output channels.
The 'Turning' macro has two inputs, for left and right (figure 14); one has priority over the other so that
rules don't conflict when being defuzzed. For example, if an agent needs to avoid another agent then this turning
Figure 14: Inside 'Turning' macro
Figure 15: Inside 'Walk Rate' macro
action will take precedence over sticking to the flow field. The output changes the [ry]:offset channel,
which tells the whole agent to turn about the Y axis.
The 'Walk Rate' macro (figure 15) looks a lot more complicated but is actually quite simple. When an agent
is told to slow down, the value of walk rate decreases; otherwise it continues according to the value stated
in the 'normal' defuzz node. When the music goes off the 'normal' defuzz node is no longer used; instead
it is switched with the 'fast' defuzz node and sent to the 'walk:rate' output. The fast defuzz has a higher
value than the normal one. To add realism, each agent has a different walk rate controlled by a random
noise node and clamped so that the values remain reasonable.
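The defuzzification performed by the output macros can be pictured as a weighted average of the defuzz node values, weighted by how strongly each rule fired. This is a generic sketch of value averaging, not Massive's exact method:

```python
def defuzz(pairs):
    """Weighted-average defuzzification: each (activation, value) pair
    represents a defuzz node; the output channel gets the node values
    averaged by how strongly each rule fired. Illustrative only."""
    total = sum(a for a, _ in pairs)
    if total == 0.0:
        return 0.0          # no rule firing: neutral output
    return sum(a * v for a, v in pairs) / total

# 'slow' fires with weight 1 (value 0.5), 'normal' with weight 3
# (value 1.0) -> walk rate lands nearer the normal value.
print(defuzz([(1.0, 0.5), (3.0, 1.0)]))  # 0.875
```

Because the output is an average, a weakly firing "slow down" rule only nudges the walk rate rather than overriding it, which is what gives the smooth speed changes described above.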
10.3.4 Music On mode
When in the music on mode all rules are combined with an input node called 'music', which contains the value
of the 'music' variable we created earlier. The 'music' input is ANDed with all the music-on rules so that
they fire only if the 'music' variable is true.
The first thing to happen when the music is on is that the agents walk, or go from their pose into walking. To
achieve this, the 'music' variable read in by an input node is connected to an output that references the 'walk'
action.
As planned, the navigation around the chairs is controlled by a flow field. The flow field is edited so that
the border is angled at 40 degrees, so that it points in the direction the agents are going while pushing them
back towards the centre of the flow. The edge is also increased in size to stop agents straying from the game. The
Figure 16: Music ON mode outside of macros
ground.flow channel is used to determine where the flow is in relation to the agent and to correct the agent's
direction properly.
To prevent agents colliding with one another and overtaking, a combination of vision channels is used.
If an agent can see another agent of the same colour in front of and near it, then it is told to slow down.
Figure 16 shows the music on mode outside the macros.
10.3.5 Music Off mode
When in music off mode the 'music' input node is combined with all rules using a NOT connector so that
they only occur when the input is not true.
Following around the chairs is done using a copy of the rule used in music on mode but with a different
'music' input connector.
When the music is off, players are allowed to overtake each other but must still avoid collisions. The vision
collision-avoidance rules from the music on mode have been reused and extended: the agent has an extra
two fuzz nodes that let it know when another agent is to its left or right, and it is then told
to turn the other way. The fuzz is connected to the higher-priority part of the 'Turning' macro.
The next rule is a little more complicated: it tells the agent when it is allowed to commit to sitting down.
The motion captured walkToSit action has the agent approach a chair from the left and sit down on it. The
agent must be within a certain area before it can start the walkToSit action, otherwise it will form the sitting
pose but not be on a chair.
Sound channels are used in combination to mark out the region where the agent can start the transition.
Using the sound.d channel we can make sure the agent is the right distance away; with sound.x we can check
that the agent is at the correct angle from the chair; and with sound.ox we can check that the agent is facing
the right direction in relation to the chair. In principle this makes sense and seems simple; however, finding the
correct values requires trial and error to see at what distance, angle and heading the agent can be and still
actually sit on the chair. By using a number of agents and starting the walkToSit motion from different
positions we can find bounding values for the area in which the agent must be to actually sit on the
chair. Figures 17, 18 and 19 visually represent the areas that the individual rules mask off. The centre
dot is the centre of the chair and the location of the sound source; the large circle represents the sound
Figure 17: Distance range for walkToSit action
Figure 18: x angle range for walkToSit action
being emitted. Figure 20 shows the 'ok to sit' region when the channels are combined.
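The combined "ok to sit" test can be sketched as a simple conjunction of range checks on the three sound channels. All bounding values below are placeholders for the trial-and-error values found in the actual scene:

```python
def ok_to_sit(sound_d, sound_x, sound_ox,
              d_range=(0.05, 0.25), x_range=(60.0, 120.0),
              ox_range=(-30.0, 30.0)):
    """Boolean mask for the 'ok to sit' region: the agent must be the
    right distance from the chair (sound.d), at the correct angle around
    it (sound.x) and roughly facing it (sound.ox). The bounds here are
    invented placeholders, not values from the scene."""
    def in_range(v, bounds):
        lo, hi = bounds
        return lo <= v <= hi
    return (in_range(sound_d, d_range)
            and in_range(sound_x, x_range)
            and in_range(sound_ox, ox_range))

print(ok_to_sit(0.1, 90.0, 0.0))    # True  -- inside all three masks
print(ok_to_sit(0.1, 90.0, 45.0))   # False -- not facing the chair
```

Each range check corresponds to one of the masked areas in figures 17 to 19, and the conjunction gives the small intersected region of figure 20.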
The region in which the agent can sit is fairly small compared with the places where the agent might be standing,
so the odds of it being in the right place at the right time with the right heading are low. Therefore a
guide is introduced next to the chair, just before the 'ok to sit' area, and assigned a colour. The agent uses
vision to walk towards the guide, and once past the guide the agent will be roughly in the right position. This
method is a bit of a cheat, but with only one walkToSit motion the way an agent can come in to sit is very
limited.
When an agent is near enough to the chair to assume it is committing to sit, the Chair agent can listen
for and detect the Player agent and switch the colour of the chair and guide to 0, so as not to attract any other
agents.
Because the agents use vision to follow the guide, problems arise when an agent can see guides
on the other side of the game, so a divider has been used to limit the agents' vision.
Figure 21 shows the finished music off mode from the macros level.
Figure 19: ox angle range for walkToSit action
Figure 20: Combined sitting area
Figure 21: Music o mode from outside macros
Figure 22: Sad player agent
10.3.6 The Loser
Each round ends with an agent leaving the game: when the music is off and there are no seats available, the
losing agent must leave. This rule is implemented using vision and a timer. The Chair agents are set to be
green when available and change to grey when not available. The timer starts as soon as the simulation
starts; every time an agent sees a chair that is available (green), the timer resets and starts again. If a chair is
available then it is safe to say that the game is not over. If all chairs are taken then the agent will continue to
look for a chair until the timer reaches one; this is then combined with an input checking whether the agent
is walking or not. If the agent is walking and the timer is up then a 'Leave' variable is triggered.
This tells the agent to use vision to see where the furniture and other agents are and to walk away, leaving
the game. When it sees a blue area on the ground it will sit down.
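The loser-detection logic can be sketched as a resettable timer, with an invented time-out; in Massive it is built from a timer, vision inputs and a 'Leave' brain variable rather than a class:

```python
class LoserDetector:
    """Sketch of the 'no free chair' timer: seeing a green (free) chair
    resets the timer; if it runs out while the agent is still walking,
    the 'leave' variable is triggered. Time-out length is illustrative."""
    def __init__(self, timeout_frames=48):
        self.timeout = timeout_frames
        self.frames_since_green = 0
        self.leave = False

    def tick(self, sees_green_chair, is_walking):
        if sees_green_chair:
            self.frames_since_green = 0   # a chair is free: game not over
        else:
            self.frames_since_green += 1
        if self.frames_since_green >= self.timeout and is_walking:
            self.leave = True             # no seat found: leave the game
        return self.leave

d = LoserDetector(timeout_frames=3)
for _ in range(3):
    d.tick(sees_green_chair=False, is_walking=True)
print(d.leave)  # True
```

Checking `is_walking` mirrors the brain's extra input: an agent that is already sitting must never be flagged as the loser.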
10.3.7 Body Movements
In order to add some degree of variance and realism, the agents have extra head movement implemented.
When an agent is turning, its head, neck and upper torso segments are accessed using output nodes and
rotated by a scaled-down value of [ry]:offset. This causes agents to turn their head and upper body in the
direction they are turning.
Also, when an agent has lost the game it will lower its head, neck and upper torso segments to look sad
and disappointed, as in figure 22.
10.3.8 Holding poses
When running the simulation it becomes apparent that agents in a still pose are being forced
to move in unwanted ways. For example, when an agent is sitting down its offset position should remain
the same; however, the flow field would force it to reorient in relation to the field. To remedy this problem,
outputs from the motion tree indicating that the pose is a still one have been created. These values can be
accessed in the brain and combined with a NOT connector to rules such as 'follow the flow field', so that these
rules are ignored if an agent should be still.
11 Further Work
In terms of the demonstration musical chairs agent, some refinement is necessary to make the behaviour more
realistic. Firstly, more motion capture data with actions appropriate to the game would avoid
monotonous movement when sitting, for example. Along with the locomotive cycles, blends from normal walking to
jogging and walking with smaller strides, plus some strafing data, would increase the realism in movement.
Aside from mocap issues, more emotion and variance needs to be added to the brain. For example,
if a player is old and has been playing for quite some time, then maybe he won't be able to move so quickly
or be motivated to even fight for a seat.
The addition of more motion capture data should also fix problems such as the chairs having to be far apart,
and the particular way the scene has to be set up, with the flow field carefully positioned and guides
used, would also be resolved. With more walk-to-sit actions the areas in which the agent can decide to sit
can be increased, so the agent can start from almost any position.
As an alternative to using a lot more motion capture data, dynamics could be used and rigged to make
agents react to bumping into each other, so agents could fight over seats. This is very time-consuming to set
up and simulate, and difficult to make all joints react in the correct way, which is why it hasn't been implemented.
Also, the version of Massive used tends to crash with too much dynamics happening at any moment.
By implementing so many rules, dynamics, vision and large numbers of agents in one scene, the simulation
can be slow. Massive has the ability to use 'standins' with a lower level of detail to display agents when they are
between particular distances from the camera. Implementing this would decrease render time substantially
without compromising the end result, which makes it a must-add for the simulation.
The version of Massive used (v2.6.3) has since been updated to version 3 with new functionality, which
may offer better ways of implementing the rules. Also, version 2.6.3 is very unstable, so
using the new version with bug fixes would allow more time to be spent working on the simulation rather than
constantly repeating steps as it crashes and corrupts files.
In the case of the tutorial, a video with commentary would be very useful in allowing people to learn
different ways of using Massive compared to the documented examples.
Further investigation into other ways of implementing this same scenario would also be interesting,
as working with Massive can sometimes be limiting; for example, agents cannot share global
variables, and even a simple counter is difficult to implement. The software is also not widely documented.
However, Massive at least provides a complete tool set from start to finish for making simulations and
rendering them out. In behavioural terms, using a high-level programming language may offer more ways of
implementing rules, resulting in better behaviour. This is definitely something to look into
in the future.
12 Conclusion
The project has been a success on a number of levels, but leaves a lot of room for further development
and for less simplification of the musical chairs rules. The objective was to create a complete agent in Massive
from a technical point of view, and to make it capable of carrying out activities when placed in a scene.
In making the demonstration agent, a wide range of Massive has been explored and my knowledge has
grown immensely. As Massive is so new to the industry it is not widely documented; the only help is the
Massive fundamentals pack and manual, which show only very simple examples in separate scenarios. Despite
the problems with the end product, the accompanying tutorial shows many different techniques for satisfying
rules and overcoming problems. This will be informative to anyone using Massive, if only to see the different
ways people use the same tools and how rules can differ depending on which senses are used.
References
[1] Reynolds, C. Flocks, Herds and Schools: A Distributed Behavioral Model. Proc. SIGGRAPH '87,
Computer Graphics. 1987.
[2] Musse, S. et al. Modeling Individual Behaviors in Crowd Simulation. University of Vale do Rio dos
Sinos. 2003.
[3] Massive Website. www.massivesoftware.com
[4] Musse, S., Thalmann, D. Hierarchical Model for Real Time Simulation of Virtual Human Crowds.
IEEE Transactions on Visualization and Computer Graphics. 2001.
[5] Farenc, N. et al. A Paradigm for Controlling Virtual Humans in Urban Environment Simulations.
Applied Artificial Intelligence Journal. 1999.
[6] Rasmussen, J. Information Processing and Human Machine Interaction: An Approach to Cognitive
Engineering. North-Holland. 1986.
[7] Musse, S., Babski, C., Capin, T., Thalmann, D. Crowd Modelling in Collaborative Virtual Environments.
ACM VRST '98, Taiwan. 1998.
[8] Musse, S., Thalmann, D. A Model of Human Crowd Behavior: Group Inter-relationship and Collision
Detection Analysis. Proc. Workshop of Computer Animation and Simulation of Eurographics '97, Budapest,
Hungary. Sept 1997.
[9] Loscos, C. et al. Intuitive Crowd Behaviour in Dense Urban Environments using Local Laws. University
College London. 2003.
[10] Kaehler, S. Fuzzy Logic Tutorial. Encoder, The Newsletter of the Seattle Robotics Society. http:
//www.seattlerobotics.org/encoder/mar98/fuz/flindex.html. 1998.
[11] Ferber, J. Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence. Addison-Wesley.
1999.
[12] Wooldridge, M. An Introduction to MultiAgent Systems. University of Liverpool. John Wiley & Sons.
2004.