
106 CHAPTER 5. CREATING VIRTUAL PEOPLE AND THEIR VIRTUAL ENVIRONMENT
[Figure 5.12 here: occupancy functions plotted against time of day for four groups: Students (f), Unemployed (g), Part-time (h) and Families (i).]
Figure 5.12: Functions to calculate occupancy for different variables depending on time.
5.6 An Overview of the Virtual Burglars
A drawback with traditional modelling approaches, as discussed in Section 3.3, is that they struggle to account for the micro-interactions that lead to an individual crime event occurring and,
from which, city-wide burglary rates ultimately emerge. Important concepts from environmental
criminology – such as individual offender awareness spaces or the convergence in space/time of
offenders and victims – cannot be included directly because the models work at an aggregate level,
rather than at the level of the individual. This research addresses this drawback by incorporating
individual house data where possible (as discussed earlier in this chapter) and by simulating the
behaviour of individual burglars directly. The remainder of this chapter discusses how the burglars can be simulated in a computer model and how the literature and the available data have guided the agent design process.
5.7 Agent Architectures / Cognitive Frameworks
An agent’s architecture determines how the functionality of the agent is organised and how the
agent replicates human or biological traits (Singh, 2005). A number of architectures (or cognitive
frameworks) have been proposed to address how these traits should be mimicked; three relevant
ones are outlined in the following sections.
It is possible to forgo a published architecture altogether and instead program behaviour manually. This is the approach taken by some of the agent-based crime studies outlined
in Section 3.5 (e.g. Groff and Mazerolle, 2008; Hayslett-McCall et al., 2008). However, unless
considerable care is taken over the design of the agents’ behaviour, a manually-defined approach
is unlikely to offer the level of flexibility or the behavioural realism of a published framework.
5.7. AGENT ARCHITECTURES / COGNITIVE FRAMEWORKS
107
For example, none of the research outlined in Section 3.5 allows truly dynamic agent behaviour;
agents often have set routines and cannot change their behaviour as a result of changing internal/external conditions. To avoid these drawbacks and provide a more accurate representation of
human behaviour, this research will build upon a published cognitive architecture. An interesting
experiment, therefore, would be to test the model with and without the presence of the cognitive
framework. However, this would effectively require a complete re-design of the agents’ behaviour
(if the framework were removed from the model there would be nothing to control the agents) so
must be left for future work.
5.7.1 Beliefs Desires Intentions (BDI)
The BDI architecture (Bratman et al., 1988) is perhaps the most popular architecture and is centred
around equipping agents with the mental components of beliefs, desires and intentions. The actor
loop determines how the agent will react to input from the
environment. This approach follows rational choice ideas because no action is performed without
some form of deliberation (Balzer, 2000). The behaviour of a BDI agent is characterised by
“practical reasoning”: goals are decided upon and then a plan is formed in order to satisfy the
goals (Singh, 2005).
Beliefs represent the agent’s internal knowledge of the world. The agent has a “memory” of
past experiences and the state of the environment as it was last seen. Desires are all the goals
which the agent is trying to achieve. These can include short term goals such as “eat food” or
more complex, long term goals such as “raise children”. As some goals might be contradictory,
intentions represent the most important goals which the agent chooses to achieve first. Intentions
are sometimes viewed as a subset of goals, while at other times they are viewed as the set of
plans which will achieve the desired goals (Singh, 2005). Goals (and therefore intentions as well)
will change over time depending on external inputs and the agent’s internal state. A level
of caution can be integrated into a BDI agent by specifying how eager the agent is to change its
intentions.
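The sense–deliberate–act cycle described above can be sketched in a few lines of Python. The class and method names below are invented for this example and are not taken from any particular BDI toolkit; real BDI deliberation is considerably more sophisticated:

```python
class BDIAgent:
    """Minimal BDI-style agent; names are invented for illustration."""

    def __init__(self):
        self.beliefs = {}       # internal model of the world, updated by percepts
        self.desires = set()    # all goals the agent would like to achieve
        self.intentions = []    # goals the agent has committed to pursuing

    def perceive(self, percept):
        """Update beliefs from an input received from the environment."""
        self.beliefs.update(percept)

    def deliberate(self):
        """Commit to the desires judged achievable given current beliefs."""
        self.intentions = [d for d in sorted(self.desires)
                           if self.beliefs.get(d + "_blocked") is not True]

    def step(self, percept):
        """One pass of the actor loop: sense, deliberate, then act."""
        self.perceive(percept)
        self.deliberate()
        return "pursue:" + self.intentions[0] if self.intentions else "idle"

agent = BDIAgent()
agent.desires = {"eat_food"}
print(agent.step({"hungry": True}))  # prints "pursue:eat_food"
```

The caution mentioned above could be represented by making `deliberate` reluctant to drop existing intentions, rather than recomputing them from scratch on every pass.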
Although the BDI architecture has been widely used (Rao and Georgeff, 1995; Müller, 1998;
Taylor et al., 2004; Brantingham et al., 2005b,a), it has also attracted criticism. Fundamentally, the architecture assumes rational decision making, which is difficult to justify because people
rarely meet the requirements of rational choice models (Axelrod, 1997). Also, Balzer (2000)
notes that the core human elements (beliefs, desires and intentions) are difficult to observe directly. Access to them can only be achieved in a laboratory setting which might not relate to
real situations (Balzer, 2000). Some criticise the three attitudes which form the core of the architecture (beliefs, desires and intentions) as too restrictive, while others find them overly complicated (Rao and Georgeff, 1995).
5.7.2 Behaviour Based Artificial Intelligence (BBAI)
BBAI is a modular behaviour architecture developed by Brooks (1986). It will be briefly outlined
here because, as Section 3.5 discussed, it was used by another agent-based crime model (Birks
et al., 2008). The architecture was designed to control autonomous robots, but can equally be
applied to software agents. Its basic structure consists of a number of hierarchical layers of increasing behavioural complexity. All layers act as individual controllers of the agent; they operate independently and simultaneously. It is therefore the purpose of a ‘suppression mechanism’ to determine which layer should have overall control at a particular time. The advantage of this approach is that the agent can work towards different goals simultaneously: no early decision needs to be made about which goal to pursue (Brooks, 1986). Although having separate and autonomous
layers provides robustness (if a high level fails the lower behavioural levels will take over the robot
so that it continues to do something) and efficiency (there is no communication overhead between
layers) a drawback is that a new layer must re-implement basic functionality that would otherwise
be provided by lower layers. Therefore attempts to implement intelligence using BBAI have not
proved as successful as alternative, hand-designed systems (Bryson, 2002).
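The layered control and suppression mechanism can be sketched as follows. This is a simplified Python illustration with invented layer names; in Brooks’s actual architecture the wiring between layers is considerably more fine-grained:

```python
# Layer functions, ordered from lowest (default) to highest priority.

def wander(state):
    """Level 0: default behaviour, always active."""
    return "wander"

def avoid_obstacle(state):
    """Level 1: reflex that activates when an obstacle is sensed."""
    return "turn_away" if state.get("obstacle") else None

def seek_goal(state):
    """Level 2: goal-directed behaviour, active when a goal is visible."""
    return "move_to_goal" if state.get("goal_visible") else None

LAYERS = [wander, avoid_obstacle, seek_goal]  # low -> high

def control(state):
    """Suppression mechanism: the highest active layer wins overall control."""
    action = None
    for layer in LAYERS:        # later (higher) layers suppress earlier output
        proposal = layer(state)
        if proposal is not None:
            action = proposal
    return action

print(control({"obstacle": True}))  # prints "turn_away"
```

The robustness claimed for the architecture is visible here: if the higher layers never activate, the agent still does something, because the lowest layer always proposes an action.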
Although the architecture might be appropriate for implementing simple behaviour in physical
robots, virtual agents do not need such robust behaviour because the model can be fully tested
and the agents’ environment has been wholly specified by the researcher. Therefore an agent
will not encounter an unexpected object in the virtual environment which might cause a fault
in a behavioural routine. The advantages of robustness offered by BBAI are thus outweighed by the difficulty of implementing complex “human-like” behaviour. As a consequence,
architectures such as BDI which are specifically designed to model high levels of agent intelligence
are more appropriate for this research.
5.7.3 PECS
Proposed by Schmidt (2000) and Urban (2000), the PECS architecture states that human behaviour
can be modelled by taking into account a person’s physical conditions, emotional states, cognitive capabilities and social status. Personality is incorporated into the agents by adjusting the
rate that internal state variables change and also how these changes are reflected in agent behaviour (Schmidt, 2002). The framework is modular, so that separate components control each
aspect of the agent’s behaviour (Martinez-Miranda and Aldea, 2005). Proponents argue that PECS improves on the BDI architecture because it does not require rational decision making and is not restricted to the factors of beliefs, desires and intentions (Schmidt, 2000).
Also, the framework’s modularity makes it straightforward to add or remove different types of
behaviour as appropriate. For these reasons the framework appears to be the most appropriate for
this research and the remainder of this section will outline PECS in detail, later illustrating how it
can be used to create realistic burglar behaviour.
To illustrate the PECS features, an example proposed by Urban (2000) will be adapted. Consider a person in a shop who is thinking about purchasing some goods. They might experience
physical needs (such as hunger), emotional states (such as surprise at the available goods), cognition (such as information about current prices) and social status (which will affect how the agent
reacts to the shop assistant). Schmidt and Urban believe that all aspects of human behaviour can be modelled using these components. In order to compare the strength of all the different types of
behaviour which might be acting upon a person simultaneously (such as the shopper), PECS uses
the concept of “motives”. Intensity functions make it possible to compare motives from different
behavioural systems (such as comparing the drive to eat food with the act of will of studying for an
exam). The motive with the highest intensity becomes the “action guiding motive” as depicted in
Figure 5.13. Once the action guiding motive is known the agent can behave accordingly, whether
this is to instinctively react to a stimulus or create a complex action plan to pursue a constructive
goal.
[Figure 5.13 here: from the set of all motives, a need yields a drive intensity T = f(N, E, X), an emotion yields an emotion intensity E = g(I, A, X), and an act of will yields a strength of will W = h(I, D, X); the intensity functions make the motives comparable, the strongest becomes the action-guiding motive, and this selects from the set of all actions (Action 1 … Action n).]
Figure 5.13: Motives and motive selection, adapted from Schmidt and Schneider (2004)
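The motive-selection process depicted in Figure 5.13 can be sketched as follows. The concrete forms of the intensity functions below are assumptions made for illustration; PECS itself only requires that intensities from different behavioural systems be comparable:

```python
# Placeholder intensity functions for the three motive types in Figure 5.13.

def drive_intensity(N, E, X=0.0):
    """T = f(N, E, X): need preference N amplified by environment E."""
    return N * (1.0 + E) + X

def emotion_intensity(I, A, X=0.0):
    """E = g(I, A, X): event importance I weighted by assessment A."""
    return I * A + X

def will_intensity(I, D, X=0.0):
    """W = h(I, D, X): importance I, rising as distance-to-goal D shrinks."""
    return I * (1.0 - D) + X

# The shopper example: hunger, surprise at the goods, and a longer-term goal.
motives = {
    "eat":         drive_intensity(N=0.8, E=0.5),
    "react":       emotion_intensity(I=0.3, A=0.5),
    "finish_plan": will_intensity(I=0.9, D=0.7),
}

# The motive with the highest intensity becomes the action-guiding motive.
action_guiding_motive = max(motives, key=motives.get)
print(action_guiding_motive)  # prints "eat"
```

Here the hunger drive dominates because the need is strong and food is available in the environment; were the shopper close to completing an important plan (small D), the act of will could win instead.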
The PECS framework divides all behaviours into two major categories: reactive and deliberative. Reactive behaviour covers actions which are largely instinctive; it can be modelled
using a set of rules without deliberation on the part of the organism. The organism does not consider why it is behaving the way it is. For example, a reactive being is not aware that looking
for food is a task which ultimately ensures survival. Deliberative behaviour, on the other hand,
involves the conscious pursuit of goals. The organism is able to deliberate over its current goal(s),
form action plans to satisfy a goal and break a larger goal into smaller sub-goals. Table 5.4 summarises the different types of behaviour as stipulated by Schmidt (2005) and also provides their
intensity functions.
Table 5.4: Different types of PECS behaviour.

Reactive Behaviour

Instinctive behaviour: An automatic reaction to a stimulus, such as a parent reacting instinctively to a child’s cry. Instinctive behaviour can be modelled relatively easily using pre-defined rules which are called up in certain circumstances.

Learned behaviour: Similar to instinctive behaviour but with rules that are learnt dynamically. Schmidt (2000) cites the example of a car driver who will instinctively brake if they see a child running across the road.

Drive controlled behaviour: This type of behaviour is directed by internal drives to satisfy needs. These range from basic needs required to preserve life (such as the need for food or safety) to social needs and finally to intellectual needs. Schmidt defines the function, f, to determine drive intensity, T, as

T = f(N, E, X)    (5.4)

where: N is the agent’s personal preference for the need; E represents environmental influences; and X denotes other influences. For example, a drug addict will have a strong drive to take drugs if the need, N, is high because they have gone without drugs for some time. However, the environment, E, must also be taken into account: the drive might be strong if they are surrounded by other addicts who are also using drugs, even if the need, N, is not great.

Emotionally controlled behaviour: Emotions are similar to drives because, if they are strong enough, they will affect the behaviour of the agent. Unlike drives, however, they are stimulated externally, not by internal state changes. Schmidt notes that the intensity of emotions, E, is very hard to model, but defines the following formula:

E = g(I, A, X)    (5.5)

where I represents the importance of the event which has generated the emotion, A is the agent’s personal assessment of the event and X represents other influences.

Deliberative Behaviour

Constructive behaviour: Schmidt (2000) discusses how an organism which is able to perform constructive behaviour can build an internal representation of its environment and can also construct and deliberate over plans of action which should allow it to satisfy goals. Goals assembled in this manner are associated with acts of will: the organism “wants” to achieve the goal (Schmidt, 2000). In a similar fashion to reactive forms of behaviour, which have a “need” associated with them, constructive behaviours have an “importance” attached to them by the agent which will influence their intensity. For example, one agent might attach a higher importance to the pursuit of gaining knowledge than another. In addition, the closer a goal is to completion, the higher the will associated with the goal. Schmidt defines the following function, h, to calculate will intensity, W:

W = h(I, D, X)    (5.6)

where I is the importance of the goal, D is the distance from completing the goal and X denotes other influences.

Reflective behaviour: Representing the highest level of behaviour, the ability to exhibit reflective behaviour is reserved for human beings above all other organisms. Reflective action relates to the ability to monitor and control one’s own thought processes. Also, in addition to a model of their environment, reflective organisms have a model of self, which can lead to the most advanced forms of emotion such as an inferiority complex and jealousy (Schmidt, 2000). To model this type of behaviour Schmidt states that, within its cognitive module, the PECS agent will have another entire PECS model of itself.
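The drug-addict example for Equation 5.4 can be sketched numerically. A linear form is assumed below purely for illustration; Schmidt does not prescribe a concrete f, and the weights are invented:

```python
# Hypothetical linear instance of T = f(N, E, X) (Equation 5.4).

def drive_intensity(N, E, X=0.0, w_need=1.0, w_env=0.6):
    """Drive intensity from need preference N, environment E and other X."""
    return w_need * N + w_env * E + X

# Strong internal need, weak environmental cue (gone without for some time):
print(drive_intensity(N=0.9, E=0.1))  # high drive, driven by the need alone

# Weak need, but surrounded by other users (strong environmental cue):
print(drive_intensity(N=0.2, E=0.9))  # the environment alone raises the drive
```

Both calls produce a substantial drive intensity, illustrating the point in Table 5.4 that either a high need or a strong environmental influence can push the drive above the level at which it guides action.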
At the time of writing, documented use of the PECS framework is limited, especially when
compared to other behavioural models such as BDI. However, the few studies that were found
originate from diverse fields. For example, PECS has been used by Ammar et al. (2006) and Neji
and Ammar (2007) to build emotions into a virtual learning environment. The authors incorporate
non-verbal communication in the form of emotional facial expressions to improve the relationship
between a human learner and a computer-controlled tutor. In the field of health care, Brailsford
and Schmidt (2003) used the framework to improve a simulation of disease screening. The au-