Simulating the Pack Mentality of
Predatory Wolves
Andrew Hayward
BSc (Hons) in Computer Science
Department of Computer Science
University of Bath
2005
Optimis parentibus
Cover illustration by Isabel Levesque (from Mech, Meier, Burch and Adams, 1995)
Abstract
A simulated environment was constructed in which computer representations
of wolves and their prey could be placed, to investigate their behaviour when
interacting with one another as autonomous agents.
The MASON simulation framework was chosen from among the many available to base the model upon, and the animal agents implemented as Java
objects. Agents were given characteristics including energy and speed, as
well as the ability to ‘see’ within a defined field of vision. Wolves were also
given a notional ‘rank’ and ‘personal space’ which determined their ‘following’ behaviour. Both wolves and prey had a state, dependent on their
awareness of a ‘foe’, which determined their subsequent behaviour, based on
simple algorithms.
Even given this limited awareness and the limited behavioural ‘rules’, the
simulation produced recognisable and realistic behaviour patterns both within
and between the two groups of animals. Prey agents displayed typical herding and grazing behaviour, while wolves typically travelled in a column, with
the alpha wolf at its head. When a wolf pack encountered a grazing herd,
the herd would flee from the pack, but several stragglers would tend to get
left behind, one of which would be singled out by the pack and attacked.
Simulating more sophisticated behaviour (including introducing other sensory inputs, modelling more complex cooperation between the wolves when
attacking, and the effects of a more realistic environment) was beyond the
scope of this project. However, although this simulation was successful
within its own limitations, more characteristics must be included and behaviours modelled, to fully represent wolves’ hunting behaviour. To this
end, various suggestions were made as to future directions that this work
might take.
Simulating the Pack Mentality of Predatory Wolves
submitted by Andrew Hayward
COPYRIGHT
Attention is drawn to the fact that copyright of this thesis rests with its
author. The Intellectual Property Rights of the products produced as part
of the project belong to the University of Bath.
See http://www.bath.ac.uk/ordinances/#intelprop.
This copy of the thesis has been supplied on condition that anyone who
consults it is understood to recognise that its copyright rests with its author
and that no quotation from the thesis and no information derived from it
may be published without the prior written consent of the author.
Declaration
This dissertation is submitted to the University of Bath in accordance with
the requirements of the degree of Bachelor of Science in the Department
of Computer Science. No portion of the work in this dissertation has been
submitted in support of an application for any other degree or qualification
of this or any other university or institution of learning. Except where
specifically acknowledged, it is the work of the author.
Signed:
Andrew Hayward
Restrictions
This thesis may be made available for consultation within the University
Library and may be photocopied or lent to other libraries for the purposes
of consultation.
Signed:
Andrew Hayward
Acknowledgements
First and foremost, I would like to thank my project supervisor Dr. Alwyn
Barry, without whom this project would not have happened. The project was his initial idea, and he provided much inspiration and direction throughout.
I would also like to thank those who took the time to read through chapters,
paragraphs & sentences, and generally be available to bounce ideas off –
there are too many people to mention by name, but you know who you are.
There are, however, a few people that deserve personal thanks, for keeping
me (relatively) sane for the duration, and getting me through to the end –
Robin, Lindsey, Nick, Spongebob, and especially my parents.
Contents

List of Figures

1 Introduction

2 Literature Survey
  2.1 Introduction
  2.2 Real Life
    2.2.1 Communication
    2.2.2 The Hunter
    2.2.3 The Hunted
  2.3 Artificial Life
    2.3.1 What is Artificial Life?
    2.3.2 Emergence
  2.4 Conclusions

3 System Design
  3.1 System boundaries
    3.1.1 The wolves
    3.1.2 The prey
  3.2 Development Strategy
  3.3 Choosing a framework
  3.4 Available Options
    3.4.1 Swarm
    3.4.2 RePast
    3.4.3 MASON
    3.4.4 Breve
  3.5 Toolkit of choice
  3.6 Background to MASON

4 Implementing Behaviour
  4.1 Environment
  4.2 Animal
    4.2.1 Characteristics
    4.2.2 Vision
    4.2.3 Motion Behaviours
    4.2.4 Action Selection
    4.2.5 Steering Behaviour
    4.2.6 Neighbourhoods
    4.2.7 Locomotion
  4.3 Wolf
    4.3.1 Characteristics
    4.3.2 Hierarchy
    4.3.3 Personal Space
    4.3.4 Action Selection
    4.3.5 Steering Behaviour
  4.4 Prey
    4.4.1 Characteristics
    4.4.2 Action Selection
    4.4.3 Steering Behaviour
  4.5 Interactions
    4.5.1 Wolf
    4.5.2 Prey
  4.6 Results of simulations

5 Evaluation
  5.1 Agent Behaviour
  5.2 System Design
  5.3 The Future
    5.3.1 Senses
    5.3.2 Environment
    5.3.3 Animal behaviour
  5.4 Conclusions

Bibliography

Appendix A: Class Diagrams
  A.1 Environment Class Structure
  A.2 Object Class Structure

Appendix B: Calculating ‘personal space’

Appendix C: Code
  C.1 Animal.java
  C.2 AnimalPortrayal2D.java
  C.3 Item.java
  C.4 ParameterDatabase.java
  C.5 Prey.java
  C.6 Vision.java
  C.7 Wolf.java
  C.8 WolfPortrayal2D.java
  C.9 World.java
  C.10 WorldWithUI.java

Appendix D: Configuration
  D.1 Example Configuration File
List of Figures

3.1 MASON Architecture
3.2 MASON Checkpointing
4.1 Energy Consumption
4.2 Maximum Speed
4.3 Sight Cones
4.4 Motion Behaviours
4.5 Separation
4.6 Cohesion
4.7 Alignment
4.8 Neighbourhoods
4.9 Wolf State Diagram
4.10 Wolf Following Detail
4.11 Wolf Following Behaviour
4.12 Prey State Diagram
4.13 Herding & Grazing
4.14 Herding Behaviour
4.15 Frames from a typical simulation
5.1 Frames from an atypical simulation
A.1 Environment Class Structure
A.2 Object Class Structure
B.1 Defining the curve
Chapter 1
Introduction
The wolf, properly known as the Grey Wolf (Canis lupus), is a mammal of
the Canidae family and the ancestor of the domestic dog. Wolves function
as social predators and hunt in packs organised according to a strict social
hierarchy and led by an alpha male and alpha female. This social structure
was originally thought to allow the wolf to take prey many times its size,
although new theories are emerging that suggest the pack strategy, instead,
maximizes reproductive success and has less to do with hunting.
This project aims to investigate the hunting process of wolves: to see whether simple rules, rather than a reliance on sophisticated communication and hierarchy, underlie the complex group interactions seen in wolf hunting.
“The idea of playing god on computers took off 30 years ago,
when mathematician John Conway invented the Game of Life,
in which coloured cells in a grid vie for survival. By now, applications of artificial life are becoming commonplace: social scientists use ‘evolutionary’ algorithms to explore social interactions,
for example, while biologists harness the equations for studying
protein folding and lining up DNA sequences.”
Kaiser (1999)
The concept of using computers to investigate the real world is not a new one.
Conway (1970) made “a breakthrough that has had exciting repercussions in
both group theory and number theory” (Gardner, 1970) in the late ’60s; some
20 years later, Reynolds (1987) was making similar breakthroughs simulating
the flocking behaviour of birds, commonly referred to as boids.
We shall be focusing very much on Reynolds’s (1999) work, an in-depth
study of steering behaviour, throughout this project, as we attempt to simulate the real world of wolves. We will also be building on the work of Yeates
(2003), who attempted a similar project to this.
Initially, we shall discuss some of the background to this problem, looking
at the real world of the animals we are attempting to simulate. From this
position, we shall attempt to establish the relevant characteristics that need
to be modelled, and hence select a suitable platform on which to base our
model. We shall continue to look at how we can match the real world
components to those within our simulation and construct suitable algorithms
to enable them to reflect reality. Having constructed our electronic wolves
and their prey¹, we can proceed to place them in a simulated world. We shall
first observe their interactions within their own pack or herd respectively.
We shall then look at the displayed behaviour as wolves approach a herd of
prey.
Although “great care must be taken in using [abstract models] to draw conclusions about behaviour in the real world” (Jakobi, Husbands and Harvey, 1995), whatever results we do obtain from this project will be of some
use. It may only show us that the predatory behaviour of wolves is in fact very complex. Or it could lead us to an entirely new system of non-communicative group dynamics.
¹ Representative objects of this kind are generally referred to as ‘agents’ in this context
Chapter 2
Literature Survey
2.1 Introduction
The primary aim of this project is to “demonstrate that simple rules underlie the complex group interactions seen in wolf hunting”. To that end the
following was hypothesised:
Emergent behaviour (the process of complex pattern formation
from simpler rules) similar to that of hunting wolf packs can be
produced by a set of autonomous agents.
There are two obvious sides to the project: the actual wolves themselves – how they hunt and interact – and the autonomous agents being used to represent them.
An understanding of how wolves hunt is vital to this project, in order to construct an effective simulation. Several questions shall have to be answered
when approaching this subject in order to adequately understand the underlying pack mentality during the hunt. Something must also be known about
the history of computer simulations, and how useful they are in representing
real systems.
2.2 Real Life
For all animals, regardless of whether they are solitary or social, the search
for and retrieval of food items constitute the major part of their time-budget
and energy expenditure (Schatz, Lachaud and Beugnon, 1997). Mammalian
carnivores are, in general, solitary animals, and social interaction is minimal. However, in social mammals, such as wolves, cooperative food source
mastering and/or recovery may greatly enhance the efficiency of foraging
by reducing total energy costs.
2.2.1 Communication
There are many examples of cooperative behaviour in nature, such as pack
hunting by wolves and lions or the use of sentries by geese and prairie dogs.
It is generally assumed that the advantages (increased hunting success or
reduced predation) of these strategies for the pack or flock outweigh the
disadvantages (having to share kills or increased danger for the sentry)
(Soule, 1999).
Pack hunting requires a great deal of co-ordination between group members.
Moukas and Hayes (1996) went so far as to claim that “communication is
a prerequisite for co-operation”. But how much actual communication is
involved in the hunt process? It may be possible for pack animals to hunt
successfully by receiving information only from their environment, relying
only on innate or learnt behaviours to direct the activity. As Joseph Priestley
(1957) once said, “the more elaborate our means of communication, the less
we communicate”.
Communication can be a slippery subject, partly due to the enormous variety of signalling behaviours in nature (Noble and Cliff, 1996). Looking
at intra-species communication alone we find aggregational signals, alarm
signals, food signals, territorial and aggressive signals, appeasement signals,
courtship and mating signals, and signalling between parents and offspring
(Lewis and Gower, 1980). Taking it to another level, communication does
not have to be purely auditory. Other mechanisms, such as vision, smell and touch, also exist, which when hunting probably play as much of a role as auditory communication, if not more.
2.2.2 The Hunter
Pack Hunting
Unfortunately, what research material there is on wolves centres more on their domestic relationships than on their predatory interactions. The
majority of information on communal hunting has been on three other
species, namely lions, hyenas and African wild dogs (Creel and Creel, 1995),
as well as some on cheetahs (Radloff and Du Toit, 2004). However, three obvious methods of predation do emerge from this other research. It is not the purpose of this study to investigate in any depth how these other systems work, but some background information is worth noting, as it could be used as a starting point for any model decided upon for wolves.
Lions encircle their intended target, and if lucky get close enough to bring
down the prey. Usually, however, the group will stampede the prey, forcing it towards other waiting members of the pride. It should be
noted though that “a female might split off specifically to hunt if she
was hungry but her pride-mates were not” (Creel, 1997).
Cheetahs make good use of their high sprinting speed, and rely on getting
close enough to their prey to be able to outrun them, quickly bringing
them down.
African wild dogs and hyenas have neither the size and power of lions, nor
the speed and agility of cheetahs. Instead, they utilise large numbers
and persistence, attacking from multiple directions.
Wolves
There are three primary species of wolf: the grey wolf (Canis lupus), the red
wolf (Canis rufus), and the Ethiopian (or Abyssinian) wolf (Canis simensis)
(International Wolf Center). Grey wolves are by far the most common and
hence the best studied and documented and, for the purpose of this study,
have been selected as the primary target of modelling.
Grey wolves are highly social, pack-living animals. Each pack comprises anything from two to thirty individuals, depending upon habitat and abundance of prey. Most packs are made up of 5 to 9 individuals, though packs as large as thirty have been identified in Alaska and
north-west Canada (International Wolf Center, 2002). Packs are typically
composed of an alpha pair and their offspring, including young of previous
years. Unrelated immigrants may also become members of packs. Packs are
built around a very rigid dominance hierarchy, with this alpha pair at the
very top. It is generally the alpha male that makes the decisions regarding
the movements of the pack, hunting, and marking of the territory (Gévaudan
Wolf Park, 2003).
Despite being the most studied of the wolf species, minimal scientific literature is available on the grey wolf. There are several reasons for this, not
least because of the difficulty in actually surveying them in the wild, without
a great deal of luck or intimate territorial knowledge, due to their secretive
nature. Studies from the Gévaudan Wolf Park would suggest that “one can
live for months in an area where they are still abundant, without seeing a
single one”.
Having concluded that “accurate observations of wolves hunting prey are
scarce, and accurate accounts are even scarcer”, Mech (1970) comments
that most conclusions about the hunting habits of wolves have been based
on hearsay and vague interpretations. It may be, therefore, quite hard
to produce an appropriate simulation that adequately represents the pack
mentality of hunting wolves.
What information we do have on hunting behaviour has largely been collected by those working in the field, and often on reservations, based on their
own observations. Without any technical and scientific data, it may well be
that we have to rely largely on these experiences, initially at least, to build
a working simulation of how we perceive wolves and their hunting practices.
From various sources, it would seem that wolves hunt by “coursing” rather
than stalking, as big cats do. Contrary to popular belief, wolves rarely
attack from behind. Instead, they run around to the front of their prey
and bite into their nose or throat and hang on with powerful canines until
the animal goes down – sometimes this may take as long as five minutes.
However, as most research on the social dynamics of wolf packs has been
conducted on non-natural assortments of captive wolves, Mech (1999) notes
that we should be wary when using this data.
The first stage in the hunting process is that of locating the prey, which can involve travelling up to 30 miles in a day (International Wolf Center, 2002). Mech (1970) detailed three primary methods of locating prey: direct scenting, chance encounter, and tracking, noting that direct scenting was probably the most used method. However, depending on the density of prey, chance encounters can be quite frequent.
Having located the prey, the pack would seem to assemble on each other,
sniffing the air, wagging tails and waiting for a signal, before heading after
the prey. However, Mech (1970) states that each wolf cannot make its own
decision on whether or not to chase the prey; some member of the pack
must show initiative. There is no evidence to suggest a reason for these
gatherings, other than to allow for some organisation. No form of tactical
communication can be assumed from this.
Wolves employ a variety of different communication methods within their
social interactions. Rank is communicated among wolves by body language
and facial expressions, such as crouching, chin touching, and rolling over to
show their stomach. Vocalisations, such as howling, allow pack members to communicate with each other about where they are, when they should assemble for group hunts, and to communicate with other packs about where the boundaries of their territories are. (It should be noted that this howling is a very effective means of communication, with one wolf being able to hear another howl over 10 miles away across open ground.) Scent
marking is ordinarily only done by the alpha male, and is used for communication with other packs (Smith and Dewey, 2002). It has been noted
(Mech, 1970) that it is not entirely clear how the alpha male actually goes
about directing the pack activities. However, observations would indicate
that this leadership is a combination of both autocracy and democracy,
i.e. at times the male leads by example, and at other times the behaviour
of each pack member is noted and the most appropriate action is selected
(Yeates, 2003).
2.2.3 The Hunted
To properly simulate the predatory behaviour of wolves, something must
also be known about the prey they are hunting, and how this prey would
go about avoiding the pack. Grey wolves prey primarily on large, hoofed
mammals such as white-tailed deer, mule deer, moose, elk, caribou, bison,
Dall sheep, musk oxen, and mountain goat. Medium sized mammals, such
as beaver and snowshoe hare, can be an important secondary food source.
Occasionally wolves will prey on birds or small mammals (International Wolf
Center).
There are several techniques which can be employed to increase the prey’s
chances of survival:
• Herd behaviour offers the protection of numbers
• Environmental awareness allows for early predator detection
• Power and size may discourage the predator
• Endurance allows for out-lasting attackers
• Sprinting enables the out-running of attackers
• Defensive capabilities (e.g. antlers) allow the prey to fight back
• Camouflage may well prevent an attack in the first place
This is by no means an exhaustive list, but various different methods may
have to be implemented in the final simulation if an effective predatory
model is to be assumed.
Of these actions, perhaps the most important in this case is the flock/herd-like behaviour. This has been the subject of an entire project in itself (Reynolds, 1987),
with continued interest in the field. It may be that an entire system is built
up using steering behaviour (Reynolds, 1999) (which is covered in more
depth at a later stage), in order to appropriately simulate the actions of the
prey.
The Large…
Most reported wolf encounters with moose suggest that they will either run,
move slowly away, or stand their ground and physically defend themselves
against wolves (Kuzyk, 2001), often opting to move into areas of denser
vegetation, where wolf mobility would be hindered.
Although possibly beyond the scope of this study, the effect of offspring
could also be considered, as it is “the weak, old, and immature” (Smith
and Dewey, 2002) that are often the target of an attack. When offspring
are under intense predation pressure, female moose adopt an antipredator
strategy that is sensitive to variation in calf activity as well as proximity to
protective cover (White and Berger, 2001).
…and the Small
Due to their small nature, marmots do not have the advantage of being
able to turn and face their attacker. Instead, they must use other types
of behaviour. Marmots spend much of their time either ‘foraging’ (feeding
with head held down) or ‘looking’ (watching their environment with head
held high) (Blumstein, Daniel and Bryant, 2001). It is primarily this extra
vigilance that allows them time to escape before being attacked, making use
of extensive burrow networks.
Blumstein (1999) also notes that marmots produce “distinctive loud alarm
vocalizations that [can] be categorized by their relative shape, duration, and
whether calls [are] quickly repeated to create multi-note vocalizations”, suggesting they also use group collaboration to avert danger.
Because of these various techniques employed by the prey, the need for group coordination amongst the predators is all the greater.
Single predators would not be able to bring down large prey on their own,
nor would they be able to charge a sizable group of smaller animals.
2.3 Artificial Life

2.3.1 What is Artificial Life?
The term ‘Artificial Life’ (ALife) was coined by C. G. Langton for his 1989 International Conference on Artificial Life (ALIFE I), but the concept has been around since the end of the 1940s, when W. Grey Walter (1950) constructed a pair of turtles that displayed autonomous behaviour.
In its most basic form, ALife has been described as life constructed by
humans, rather than nature. However, Bedau (1992) more formally defined
it as the attempt to “devise and study computationally implemented models
of the processes characteristic of living things”. He suggests these processes
as being:
• Self-organisation, spontaneous generation of order and co-operation
• Self-reproduction and metabolism
• Learning, adaption, purposiveness, and evolution
What Bedau is saying here is that, hypothetically, “the essential nature
of the fundamental principles of life can be captured in relatively simple
models”. Indeed, Bedau (1992) states that “ALife may help to understand
complex mental processes” in ways that have not been thought of in the
past.
2.3.2 Emergence
It is hoped that a collection of agents will be able to display group behaviour
similar to that of predatory wolves by following a simple set of rules, without
having to implement every required macro action – a process known as
“emergent behaviour”. That is, when a system performs a task it was not
explicitly programmed to perform, the behaviour is said to have emerged
(Howie).
A good and oft-cited example of this is Reynolds’s (1987) ‘boids’ simulation, which led to complex flocking behaviour being displayed by bird agents, having only been given three simple rules to follow — cohesion, alignment and separation. Reynolds (1999) has taken these so-called steering behaviours much further since their initial conception, and there are now several
simple rules that can be applied to autonomous agent systems that give them
“the ability to navigate around their world in a life-like and improvisational
manner”.
Applications
These techniques were used extensively in the production of the recent film trilogy The Lord of the Rings — software was developed that allowed upwards of 20,000 autonomous agents to ‘think’ and fight independently of each other, identifying friend and foe (Koeppel, 2002). In an early
simulation, the directors “watched as several thousand characters fought like
hell while, in the background, a small contingent of combatants seemed to
think better of it and run away. They weren’t programmed to do this. It just
happened”.
Multirobot systems are becoming more and more significant in industrial, commercial and scientific applications, including plant maintenance, warehouse operation, space missions, operations in hazardous environments and military applications. Localised control has advantages over hierarchical control because the robots can be autonomous.
A typical application for distributed artificial intelligence is found in the
control of traffic. The traffic control may concern physical vehicles, or simply the flow of information packets in a network. Air traffic control, in
particular, has been intensively studied, focusing on cooperation among
air traffic controllers themselves, and between the controllers and aircraft.
Smooth cooperation is required in order to achieve a safe, orderly, prompt
and efficient movement of traffic in airspace.
2.4 Conclusions
It can be seen that wolves hunt a whole variety of different prey species, which
vary greatly in size. Despite this, a common pattern in the hunting process
can be identified. To implement a working system, the predominant focus
will be on that of the wolves, and not of the prey. However, the actions of
the prey should not be underestimated, as they play an important role in
the hunting process.
We have identified several key features in the hunting behaviour of wolves.
Several methods of locating prey have been suggested, namely direct scenting, chance encounter and tracking. We should bear these in mind when we
implement our hunting agents. We have also identified several characteristics
associated with the wolves’ prey. In order to build an accurate simulation
involving appropriate prey agents, these should be considered properly.
As regards the simulation itself, we have identified a key area that we will
have to experiment with – emergence. By utilising Reynolds’s (1999) steering behaviours, we will hopefully be able to produce a fairly complex system
from some relatively basic building blocks.
Chapter 3
System Design
It is important to clarify at the outset what is being simulated. Clearly, at
the highest level, we are simulating the predatory behaviour of wolves. But
the wolves have various types of prey to find, and use several methods of
finding it. Similarly, the prey themselves employ a multitude of methods to
ensure their own survival. In addition, effects of the environment could also
be added to the simulation.
3.1 System boundaries
The first thing that we can quickly discount in a simulation is a high level
of graphical realism. We are attempting to simulate behaviours, and as
such we do not require realistic animals to be roaming around in a fully
rendered 3-dimensional environment. Therefore we can be content with
simple, 2-dimensional representations of our ‘agents’; that is, the objects
within the computer model that represent animals. For this study, we shall
also restrict ourselves to the direct interactions between the animals, and
not directly consider environmental effects. We shall further restrict our
agents to simple forwards-facing objects. That is, they can only look in the
direction they are facing.
3.1.1 The wolves
We have already recognised three primary methods which wolves use to hunt
their prey: direct scenting, chance encounter, and tracking. Although direct
scenting is probably the most used of these, to fully implement this would
involve the environment being able to carry scent, and the prey agents to
give off scent, which at this stage is beyond the scope of this project, and
would best be left for further investigation. Likewise, for the wolves to be
able to track, the prey agents would have to leave tracks behind. So for
the purposes of this study, we shall rely on sight to enable the wolves to
locate prey. If we ensure that the environment is sufficiently limited and the
prey sufficiently numerous, then “chance encounter” will suffice for initial
contact.
3.1.2 The prey
Because wolves hunt a variety of animals, we cannot possibly simulate them
all at the same time. We shall therefore assume that our wolf agents are
to hunt a herd of average-sized ungulate mammals¹, which allows plenty of
scope for a variety of experiments.
3.2 Development Strategy
The objective of this project is to investigate wolf behaviour and not to
concentrate on building a generic simulation environment, especially as a
great number of such modelling environments already exist. Developing an
entirely new simulation framework would seem to be a poor use of time.
Indeed, such an endeavour would most likely be suitable for a project in
its own right, amply demonstrated by some of the environments described
below. Precedence was therefore given to the animal simulations themselves,
and a pre-existing modelling framework was used to provide the necessary
infrastructure. The first task was therefore to choose a suitable framework
to adopt, from the many that have been developed.
3.3 Choosing a framework
It is claimed that the world is inherently object orientated. For example, we
are all human, but one human is not the same as another; individuals can
be considered as instances of a generic model. This is the same on every
scale, from the largest bodies in the universe, down to the smallest atom.
In particular, it makes sense in this project to consider the wolves and their
prey in this way and to use animal agents (objects that represent animals)
¹ An ungulate is a hoofed mammal which is usually adapted for running
and an Action Selection Mechanism (a way of determining an animal’s next
action based on behaviour; this will be explained in more detail later). C++
or Java are the more obvious choices of language, though there are others,
and languages such as VB.Net, C# and Python should not necessarily be
ruled out.
3.4 Available Options
To reach a rational decision as to which toolkit to use would require a detailed investigation into those that are available, looking at the advantages,
disadvantages and various facilities that each offered. However, there are
many modelling tools available, and assessing each in detail would be a major undertaking, and use up a large proportion of the time available. But
we need only to find an environment that is suitable for our needs, and that
caters for what we have to do. We can immediately rule out many toolkits,
as they are still rudimentary or have not yet been fully published. Of the
rest, we shall look at a few common toolkits that seem to be most popular
in similar fields of study, and draw a conclusion based on these restricted
findings, and on the experience gained within the Department of Computer
Science at the University of Bath. The development time required, the skills
and past experiences of the developer, and the intended future usage/audience must also be taken into consideration when making the choice of which
development tool to use.
3.4.1 Swarm
Originally created at the Santa Fe Institute, Swarm has been under development in one form or another since late 1994, and is perhaps the oldest
and most widely recognised simulation environment. Indeed, it has received
interest from a large number of researchers who wish to create simulations
to facilitate safe, controllable experimentation to take place in areas which
are often very hard to model.
It is a set of code libraries around which a great variety of agent-based models can be implemented, either in Objective-C or Java. In essence, “Swarm
provides object oriented libraries of reusable components for building models and analyzing, displaying, and controlling experiments on these models”
(Minar, Burkhart, Langton and Askenazi, 1996).
Swarm is designed to be an all-encompassing environment – an original aim of the developers was to widen the usage of simulations by making Swarm a higher-level simulation tool, i.e. making many of the programming-intensive aspects easier through utilising preformed components (Yeates, 2003).
3.4.2 RePast
The REcursive Porous Agent Simulation Toolkit (RePast) was developed at
the University of Chicago’s Social Science Research Computing lab, in partnership with the Argonne National Laboratory (Collier, Howe and North,
2003), specifically for creating agent-based simulations. It is very ‘Swarmlike’, both in philosophy and appearance, and like Swarm it provides a library of code for creating, running, displaying and collecting data from
simulations.
RePast provides a few well-known demonstration simulation models, such as SugarScape and Swarm’s Heatbugs and MouseTrap models. Unfortunately, there are very few other sample models generally available on the internet, so it is rather opaque to the inexperienced user.
RePast does have SimBuilder, an easy-to-use tool (drag-and-drop interface) for developing simple RePast models. Actions of agents are then programmed using NQPY (Not Quite Python), a subset of the Python language,
so that there is no need to learn Java. Users with a general knowledge of
programming can learn this scripting language very easily. However, this
tool does not allow the user access to the full extent of RePast’s library, and
only rather basic models can be constructed in this manner.
3.4.3 MASON
The Multi-Agent Simulator Of Neighborhoods [or Networks… or something – even the creators are not entirely sure what their own acronym means!]
was written relatively recently in 2003, with similar principles to those of
Swarm and RePast, although it is derived from neither. It is designed to be
the foundation for large custom-purpose Java simulations, providing more
than enough functionality for many lightweight simulation needs, containing
an optional suite of visualization tools in 2D and 3D (MASON, 2005).
It is not really designed for inexperienced programmers who just want to
create a simple simulation – rather, the design philosophy was “to build a
fast, orthogonal, minimal model library to which an experienced Java programmer can easily add features” (MASON, 2005). Fortunately, MASON has
an extensive variety of examples, which make getting started a much simpler
process than it might otherwise be.
3.4.4 Breve
Breve is a 3D simulation environment designed for simulation of decentralized systems and artificial life, and is conceptually very similar to existing
packages such as Swarm. Rather than being a collection of libraries, Breve
is an integrated environment which aims to simplify the implementation of
decentralised systems, offering support for 3D worlds and realistic physical
simulations with continuous time and agent states (Klein, 2002).
Simulations in Breve are written in an interpreted procedural object-orientated
language called ‘Steve’, not unlike Objective-C. The plug-in architecture
that Breve works with allows the user to interface with external libraries
and languages, accessing them directly from a Steve script.
3.5 Toolkit of choice
Having looked at these many toolkit options, and reviewing current and
past experiences in the department, it was decided to use MASON as the
environment of choice, for a number of reasons.
The fact that it is written in Java not only takes advantage of the language’s
portability, strict math and type definitions (to guarantee replicable results),
and object serialization (to ‘checkpoint’ simulations) (Luke, Cioffi-Revilla,
Panait and Sullivan, 2004), but it also means that, for this project, no
new languages have to be learnt. For this last reason, at least, Java was
considered most appropriate, meaning that the project could concentrate
on modelling the wolves’ behaviour: “the system is written in pure Java
and is intended for experienced Java coders who want something general
and easily hackable to start with, rather than a domain-specific simulation
environment.”
Although Java has a reputation for being slow compared to compiled languages such as C and C++, this is in fact rather unfair (Marner, 2002),
particularly if care is taken to write efficient code. It is certainly faster than
interpreted languages such as those used by Breve and similar systems.
Given that it is hoped that further work might be derived from this project,
it is important that we choose a flexible system initially, that allows us scope
to do what we need, whilst at the same time not being so vast that it is overcomplicated and we are offered many utilities we do not need. With this in
mind, MASON also has a number of particular advantages:
• Simulations can be serialized to checkpoints and written to disk. These
can be recovered at any time, even to different Java platforms and new
MASON visualization toolkits.
• MASON can be set up to be guaranteed duplicable, meaning that the
same simulation parameters will produce the same results regardless
of platform.
• Libraries are provided for visualizing in 2D and in 3D (using Java3D),
to manipulate the model graphically, to take screenshots, and to generate movies (using the Java Media Framework).
• While the visualization toolkits are fairly large, the core simulation
model is intentionally very small, fast, and easy to understand.
3.6 Background to MASON
The MASON toolkit has a modular, layered architecture, as shown in figure
3.1. The lower layer of this structure comprises a small collection of classes
consisting of a discrete event schedule, a random number generator, and
a variety of fields which hold objects and associate them with locations.
Alongside this is a set of utility data structures which are used for many
purposes within the toolkit. This code alone is sufficient to write basic
simulations running on the command line. However, there is a further layer on top of this. This visualisation layer is separate, and allows fields to be displayed and the user to control the simulation.
Figure 3.1: Basic elements of the MASON model and visualisation layers.
(Luke et al., 2004)
MASON typically uses two separate windows. The Console Window is used
to set up the initial conditions for a simulation; to stop, start and step
through a simulation; to inspect the state of objects during a simulation.
The visual output of a simulation is shown in a separate Display Window.
The Display Window has a number of controls to allow visualisation settings
to be changed and to enable the simulation to be captured as video or
individual frames.
The separation between the model layer and the visualisation layer in MASON allows the user to treat the simulation as a self-contained entity. At any
time the model can be separated from the visualization and ’checkpointed’
(through Java serialization) to disk as indicated in figure 3.1. Checkpointing therefore means that the model can be moved to a different platform
and continued, or attached to an entirely different visualization collection,
or simply captured for future reference (figure 3.2).
Figure 3.2: Checkpointing and recovering a MASON model to be run standalone or under different kinds of visualisation. (Luke et al., 2004)
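As a minimal sketch of what this looks like in code (using the checkpoint methods described in the MASON documentation; the surrounding setup is illustrative, not taken from this project):

import sim.engine.SimState;
import java.io.File;

// Save the running model to disk...
void checkpoint(SimState model) {
    model.writeToCheckpoint(new File("world.checkpoint"));
}

// ...and later recover it, possibly on a different platform or under a
// different visualisation, then carry on stepping it.
SimState recover() {
    return SimState.readFromCheckpoint(new File("world.checkpoint"));
}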
Chapter 4
Implementing Behaviour
We have already seen that MASON has an object orientated view of the
world. Anything within this world is instantiated in MASON as a Java
object. For this model, the basic object used was an <item> class, which
has information such as ID and location. We can extend this class either
to create static objects such as rocks and trees, or animals which make
decisions and perform actions in time. The <animal> class has a number
of parameters which define its characteristics. MASON considers <animal> to be an ‘agent’, in that it implements the <steppable> interface, which has a single step method. This method enables the object to be activated at each step in the simulation, or at any defined point in time. <animal> actually declares step as an abstract method, and leaves the particular actions to be defined in its subclasses. The <animal> class is then extended
to two further classes: <wolf> and <prey>, each with their own particular
characteristics, which attempt to model the behaviour of those particular
animals (see appendix A.2).
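The shape of this hierarchy can be summarised in a skeleton along the following lines (a sketch only – the field names are taken from section 4.2.1, everything else is illustrative; the full listings are in appendix C):

import sim.engine.SimState;
import sim.engine.Steppable;

// Base class for anything placed in the world, animate or not.
abstract class Item {
    int id;                 // unique identifier
    double x, y;            // location within a field
}

// An animal is an Item that is also a MASON agent: the schedule calls
// its step() method at each step of the simulation.
abstract class Animal extends Item implements Steppable {
    boolean dead;
    double orientation, size, curSpeed, curEnergy;
    public abstract void step(SimState world);   // defined by subclasses
}

class Wolf extends Animal {
    double rank;            // hierarchical standing
    int state;              // FOLLOW, CHASE or ATTACK
    public void step(SimState world) { /* wolf behaviour */ }
}

class Prey extends Animal {
    public void step(SimState world) { /* prey behaviour */ }
}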
In the simulations modelled here, we chose to keep the specific values for
these characteristics in a configuration file, so that they could be easily
changed without having to recompile any code. An example of such a file is shown in appendix D.1. These configuration files are loaded simply by passing them to the simulation as a command line argument. Internally, parameters within the configuration file are accessed using the static World.parameters property – an instance of the <ParameterDatabase>
class. This class provides a number of functions for retrieving a variety of
different data types, which could easily be used by those wishing to extend
this simulation.
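For instance, a subclass wanting a new parameter might retrieve it as follows (the accessor name and key are hypothetical – the actual interface is in appendix C.4):

// Hypothetical lookup: fetch a double by key, with a default fallback.
double absMaxSpeed = World.parameters.getDouble("wolf.absMaxSpeed", 10.0);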
We will describe each of the agent classes introduced above in turn, the
characteristics of each, and, for <wolf> and <prey>, how they behave in the
model with other agents of the same type. We will then go on to describe
how, when left to their own devices, agents of these two classes interact
when placed in the same model. But first we must mention the environment
in which these agents exist.
4.1 Environment
Within MASON the world, or environment, in which the animals (agents)
exist is described by fields. For convenience we created three inter-related
fields: wolves, prey and obstacles. This was so that the different types
of agents could be separated for the purposes of modelling. Although the
obstacle field was initially used to contain such things as trees, this was
never fully incorporated into the model, and most simulations were done
without this field being used.
The fields were represented in MASON with the <Continuous2D> field class (displayed via <ContinuousPortrayal2D>), which provides a flat toroidal landscape, i.e. one which wraps top-to-bottom and side-to-side, to provide the agents with an effectively boundless environment. The fields were set to be an arbitrary 250 units in both directions. On this scale animals were typically about 1 unit in size. Because of the nature of this field type, the location of animals was not restricted to discrete cells, but they could be positioned anywhere within the field.
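Creating such a field and placing an agent in it is brief (a sketch under the above assumptions; random is the simulation’s random number generator):

import sim.field.continuous.Continuous2D;
import sim.util.Double2D;

// Arguments: discretisation, width, height - a 250 x 250 toroidal field.
Continuous2D wolves = new Continuous2D(1.0, 250, 250);

// Locations are real-valued, so agents are not confined to grid cells.
wolves.setObjectLocation(wolf,
        new Double2D(random.nextDouble() * 250, random.nextDouble() * 250));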
The full class structure for the environment can be found in appendix A.1.
4.2 Animal

4.2.1 Characteristics
The <animal> class in the model has a number of parameters which define
the basic characteristics of an animal:
• dead – boolean value indicating whether the agent is still active
• orientation – double value indicating the direction of the agent
• size – double value indicating the ‘size’ of the agent
• curSpeed – double value indicating the current speed of the agent
• curEnergy – double value indicating the current energy level of the
agent
• absMaxSpeed – double value indicating the theoretical maximum speed
that this agent can reach
• absMaxSpeedEnergy – double value indicating the energy required to
move at the theoretical maximum speed
• fitnessFactor – double value influencing how fast the agent’s energy
depletes
• restingEnergyConsumption – double value indicating how much energy the agent uses when not doing anything
We can use these characteristics specifically to calculate two other required
parameters:
Energy Consumption

energy consumption = α curSpeed² + β

where:

α = 1 / fitnessFactor
β = restingEnergyConsumption
Although there was no direct experimental evidence to support this equation, it was assumed that energy consumption increases with the square of speed, and is influenced by fitness and the amount of energy used when resting. (See figure 4.1.)
Maximum Speed

maximum speed = β [cos(α curEnergy − π) + 1]

where:

α = π / absMaxSpeedEnergy
β = absMaxSpeed / 2
As above, we have no direct experimental evidence for this relationship, but
can reasonably assume a sigmoidal curve, most easily defined using a cosine
equation. (See figure 4.2.)
Figure 4.1: Energy consumption in relation to speed (arbitrary units)

Figure 4.2: Maximum speed in relation to current energy level (arbitrary units)
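In code the two relationships reduce to a few lines each (a sketch using the parameter names above; the actual implementation is in appendix C.1):

// Energy consumed per step grows with the square of the current speed.
double energyConsumption(double curSpeed) {
    double alpha = 1.0 / fitnessFactor;
    return alpha * curSpeed * curSpeed + restingEnergyConsumption;
}

// Attainable speed rises sigmoidally from zero (no energy) up to
// absMaxSpeed (at absMaxSpeedEnergy); the clamp is our assumption.
double maximumSpeed(double curEnergy) {
    double alpha = Math.PI / absMaxSpeedEnergy;
    double beta = absMaxSpeed / 2.0;
    double e = Math.min(curEnergy, absMaxSpeedEnergy);
    return beta * (Math.cos(alpha * e - Math.PI) + 1.0);
}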
4.2.2 Vision
We have already stated that animals in the model will rely on vision rather
than scent or tracking. We therefore implemented a mechanism to allow the
animals to ‘see’ the environment around them.
It was decided that a basic sight model would be sufficient for the purposes
of this project. It does, however, serve its purpose very well. Two levels of
vision have been used – standard and peripheral. They both work in similar
ways, utilising an arc of a given radius around the animal, and of a given
angle to either side of the axis.
It is a relatively simple solution to what is actually quite a complex problem,
but it could have been taken much further. For example, the only difference
between the standard and peripheral vision is how likely another animal is
to be seen. It is worth noting that even this method falls far short of how
real animals work. In real life, animals do of course have other senses, and
do not work on sight alone. Their sight may also be impaired by various
objects in the environment around them.
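The test behind each cone is simple geometry – a point is visible if it lies within the cone’s radius and within the cone’s half-angle of the animal’s orientation. A sketch of the idea (ours, not the thesis code):

// Is the target at (tx, ty) inside a vision cone anchored at (x, y)?
// orientation and halfAngle are in radians.
boolean inCone(double x, double y, double orientation,
               double tx, double ty, double radius, double halfAngle) {
    double dx = tx - x, dy = ty - y;
    if (dx * dx + dy * dy > radius * radius)
        return false;                                  // out of range
    double bearing = Math.atan2(dy, dx) - orientation;
    bearing = Math.atan2(Math.sin(bearing), Math.cos(bearing)); // wrap to (-pi, pi]
    return Math.abs(bearing) <= halfAngle;             // within the arc
}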
The visualisation of the sight cones (see figure 4.3) can be turned off in the
display window if desired, and this is preferred if a large number of agents
are being used, as drawing several hundred semi-opaque cones can have a
serious effect on performance.
Figure 4.3: Sight Cones: the darker, narrower cone represents the standard
forwards vision, while the lighter cone represents the peripheral vision.
4.2.3 Motion Behaviours
As well as the above characteristics, which describe the current state of an
animal, we must also consider the important question of how the animal is
going to move, and what will motivate it to do so.
Reynolds (1999) suggests that the motion behaviour of an autonomous agent
can be “better understood by dividing it up into several tiers” (see figure 4.4).
Figure 4.4: A hierarchy of motion behaviours (Reynolds, 1999)
We will discuss each of these layers in turn, in relation to this simulation.
4.2.4 Action Selection
Agents continually face the problem of a large number of possible choices
about what to do next: how to deal with arising opportunities and resolve conflicting situations. At any moment in time, autonomous agents must select the most appropriate action amongst all the possible actions that can be executed, to satisfy their own goals (García, Pérez and Martínez, 2000).
Maes (1994) describes these goals as including needs, desires, motivations,
tasks and negatives (i.e. goals which should be avoided).
An Action Selection Mechanism (ASM) is a way of implementing the decision-making process of agents. ASMs are computational procedures which produce specific actions given specific input stimuli (Yeates, 2003). A wide
range of ASMs exist, and there has been much discussion over which are
most suitable for different situations. The main difference between these various ASMs is the way they deal with input stimuli and goals; they are
all fairly similar as regards handling opportunities and contingencies. Maes
(1994) suggests three classes by which these mechanisms can be divided:
hand-built flat networks, compiled flat networks, and hand-built hierarchical networks. Maes notes that the first of these requires “the designer of an agent
to carefully analyze the environment and task at hand and then design a set
of reflex modules”.
We have already seen that wolves face various choices during the hunting
process, as do the prey they are hunting. Due to the restrictions of this
project, these choices are rather limited, and as such we do not require a
complex system in order to decide what action should be taken at any given
time.
4.2.5 Steering Behaviour
As Reynolds (1999) points out, the term behaviour has many meanings. “It
can mean the complex action of a human or other animal based on volition or
instinct. It can mean the largely predictable actions of a simple mechanical
system, or the complex action of a chaotic system. In virtual reality and
multimedia applications, it is sometimes used as a synonym for ‘animation’.”
In this study, we use the term to refer to the improvisational and life-like
actions of our animal agents.
Reynolds defines steering as the middle layer of his three tier behaviour
model (see figure 4.4). Steering behaviours are the key link between locomotion (the control signals required to move an agent in a desired direction
at a certain speed) and action selection (discussed above). “Steering is the
locomotion-independent behaviour required to help achieve an action selection goal.”
In various circumstances, animals can be attracted to or repelled by other
animals. They will also tend to align themselves with others nearby. The
balance between these different ‘forces’ results in a ‘dynamic equilibrium’,
which is ever-changing as the environment changes around the animal.
Separation
Separation behaviour gives an agent the ability to maintain a certain separation distance from others nearby. For each nearby agent, a repulsive force
is computed by subtracting the positions of the agent in question and the
nearby agent, normalizing, and then applying a weighting. (See figure 4.5.)
Cohesion
Cohesion behaviour gives an agent the ability to approach and form a group
with other nearby agents. By computing the average position of nearby
agents, a steering force can be applied to pull the agent in question towards a group, by subtracting the agent position from the average position. (See figure 4.6.)

Figure 4.5: Separation (Reynolds, 1999)

Figure 4.6: Cohesion (Reynolds, 1999)

Figure 4.7: Alignment (Reynolds, 1999)
Alignment
Alignment behaviour gives an agent the ability to align itself with (that is, head in the same direction and/or at the same speed as) other nearby agents. Averaging the forward unit vectors of local agents allows us to calculate a desired ‘velocity’ (that is, a speed and a direction) for the agent in question, which can be compared to the current velocity, and alterations made appropriately. (See figure 4.7.)
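As a concrete illustration, separation and cohesion each reduce to a short vector computation (a sketch after Reynolds (1999), using MASON’s Double2D; the weighting and normalisation details are design choices, not prescriptions):

import sim.util.Double2D;
import java.util.List;

// Separation: sum of unit vectors pointing away from each neighbour.
Double2D separation(Double2D me, List<Double2D> neighbours) {
    double fx = 0, fy = 0;
    for (Double2D n : neighbours) {
        double dx = me.x - n.x, dy = me.y - n.y;
        double d = Math.sqrt(dx * dx + dy * dy);
        if (d > 0) { fx += dx / d; fy += dy / d; }
    }
    return new Double2D(fx, fy);
}

// Cohesion: steer towards the average position of the neighbours.
Double2D cohesion(Double2D me, List<Double2D> neighbours) {
    if (neighbours.isEmpty()) return new Double2D(0, 0);
    double cx = 0, cy = 0;
    for (Double2D n : neighbours) { cx += n.x; cy += n.y; }
    return new Double2D(cx / neighbours.size() - me.x,
                        cy / neighbours.size() - me.y);
}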
4.2.6 Neighbourhoods
All three of these behaviours rely on what is known as a ‘neighbourhood’.
To compute any steering requirements, we have to know what other agents
are doing. This could be an exhaustive search of all agents in the simulated
world, in which there could be several hundred agents at any one time.
This approach would be entirely inappropriate and impractical. Rather,
using some sort of spatial partitioning scheme to limit the search to local
characters means that the simulation not only becomes more efficient, but
also more realistic as it is highly unlikely that an animal in the real world
would know what every other animal was doing.
When Reynolds (1987) first developed his ‘boids’ system, these neighbourhoods were simple circles around the agent, or spheres in the case of his
three-dimensional ‘boids’, outside of which everything was ignored. In his
later work, Reynolds (1999) used a more advanced system for neighbourhoods
(see figure 4.8), which we have extended in this project, introducing sight
cones, as described above. That is, an agent’s neighbourhood is defined by
what it can see, not just by a simple circle.
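In MASON terms this is cheap to express (a sketch; getNeighborsWithinDistance is the partitioned search on a Continuous2D field, and canSee is assumed to implement the cone test of section 4.2.2):

import sim.util.Bag;

// Let the field's spatial partitioning cut the search down to a disc,
// then keep only those agents that fall inside the sight cones.
Bag candidates = wolves.getNeighborsWithinDistance(myLocation, sightRadius);
for (int i = 0; i < candidates.numObjs; i++) {
    Animal other = (Animal) candidates.objs[i];
    if (other != this && canSee(other)) {
        // 'other' belongs to this agent's neighbourhood
    }
}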
4.2.7 Locomotion
Because this model only has a very basic concept of an animal, we are not
concerned about the physical actions involved in motion. Therefore, we have
not modelled aspects of movement beyond calculating the amount of energy
used in the process. If we were to model this scenario in greater depth, we
would have to be more aware of the physical processes involved, and their
energy usage.
Figure 4.8: An agent’s neighbourhood (Reynolds, 1999)
4.3 Wolf

4.3.1 Characteristics
The <wolf> class in the model extends <animal>, and so inherits all the characteristics of its super class. In addition, it has two further characteristics:
• rank – double value indicating the hierarchical standing
• state – integer value indicating the current state, taking one of three values: FOLLOW, CHASE or ATTACK (see the sketch below)
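The state field then drives a simple three-way dispatch inside step (a hypothetical sketch – the constant values and helper names are illustrative):

static final int FOLLOW = 0, CHASE = 1, ATTACK = 2;

public void step(SimState world) {
    switch (state) {
        case FOLLOW: follow(world); break; // trail the next wolf up the hierarchy
        case CHASE:  chase(world);  break; // pursue prey sighted in the distance
        case ATTACK: attack(world); break; // close in on the singled-out target
    }
}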
4.3.2 Hierarchy
Each wolf has its own individual picture of how a pack is structured. This
will be similar, but not necessarily identical, to that of another wolf in the
same pack. A wolf does not have to explicitly ask its neighbours whether
they are above or below it in the hierarchy; it has an implicit knowledge,
gained through previous interactions and experience. A wolf can generally
recognise the other members of the pack, and so associate various facts,
including its social standing, with any given pack member. The wolf does
not of course need to know every social link. For example, it might not
particularly care about the relationship between two yearlings, only that
they are both below it in the hierarchy.
A wolf therefore has knowledge of the relationships between itself and other
wolves, and can apply this information in a particular situation simply by
observing the wolves around it, without having to explicitly communicate
with them.
However, in the implementation of this model, agents do not acquire and
maintain this personal map of relationships. It would be possible to envisage
the model doing this by gradually building up a ‘memory’ of individual
interactions. But this is not directly relevant to the present study, and for
our purposes a similar effect can be achieved more simply, by allowing the
agents to communicate their individual ranks: one wolf calls another's
getRank() method and compares the returned value of rank with its own. Thus, the
relationship map which, in reality, is held by each individual, is, in the model,
distributed across the entire population and portions of it are referred to by
individuals as required.
For the purposes of this model, the agents are arbitrarily assigned an absolute rank at the start of the simulation (it is possible for the user to set
this in configuration). By polling all the others around it, it would therefore
be possible for an agent to produce a complete precedence line, by using
simple numerical sorting. But for the purposes of this experiment, because
an agent is only interested in finding the agent directly above it in rank, and
is not concerned about its absolute ranking, it only requires a simple loop:
    Wolf leader = null;
    for ( int i = 0; i < neighbourhood.numObjs; i++ ) {
        Object o = neighbourhood.objs[i];
        if ( !( o instanceof Wolf ) ) continue;        // only consider wolves
        Wolf wolf = (Wolf) o;
        if ( !wolf.isDead() && wolf.getRank() > this.getRank()
                && ( leader == null || wolf.getRank() < leader.getRank() ) )
            leader = wolf;                 // lowest-ranked superior found so far
    }
The outcome of this is the same as for wolves in a real pack: at any time, an
agent, just like a real wolf, can form a picture of where each neighbouring
wolf stands in relation to the other nearby wolves. If we were to model other
facets of wolf pack interaction, we might have to reconsider how a wolf
generates this information.
4.3.3 Personal Space
Individual wolves possess what we have called ‘personal space’. This is
the area around the wolf from which lower ranking wolves are excluded.
Reynolds (1999) suggested that this should be a rectangular shape extending
forwards from the wolf’s position. We considered that a curve would be more
realistic. The algorithm for calculating this curve is derived in appendix B.
Originally we set this curve to go through the furthest extent of the wolf’s
field of vision, but this proved to be far too large, and so it was reduced.
However, there was still a problem in that the space extended indefinitely
in front of the wolf. It was therefore necessary to limit the forward extent
of the space. In practice it was found that a circle in a similar position
to the curve produced the desired effect and was in fact much simpler to
implement.
The impeding method in the <wolf> class is used to indicate whether a wolf
is in another’s space. If it finds that it is, it will move away.
4.3.4 Action Selection
The wolf’s action selection mechanism is implemented in the agent’s step
method, and the particular action is determined by the current value of its
state parameter. Normally state has a value of FOLLOW. In this situation
the wolf simply follows its leader, as described above. When the wolf sees a
prey within its field of vision its state changes to CHASE. At this point the
wolves stop following each other, and move towards the prey in an attempt
to isolate an individual from the herd and make ground on it. At this stage,
state changes to ATTACK, and the pack of wolves closes in to surround the
individual prey. This is illustrated in figure 4.9. These interactions between
wolves and their prey are discussed later in section 4.5.
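In outline, the wolf's step method therefore behaves as sketched below; the helper methods (seesPrey, preyIsolated and the rest) are stand-ins for the behaviours described in the text, not the model's actual method names.

    // Outline of a wolf's action selection, keyed on its 'state' value.
    public void step( SimState state ) {
        switch ( this.state ) {
            case FOLLOW:
                if ( seesPrey() ) this.state = CHASE;   // prey sighted
                else follow();                          // trail the leader
                break;
            case CHASE:
                if ( preyIsolated() ) this.state = ATTACK;
                else chase();                           // close on the prey
                break;
            case ATTACK:
                attack();   // cooperate to surround the chosen prey
                break;
        }
    }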
Figure 4.9: Action selection for a <wolf> agent.
Figure 4.10: Wolf behaviour, showing three wolves following each other,
travelling to the right with the alpha wolf in front. Personal space is indicated by circles. Normal sight cones in dark grey, peripheral vision in light
grey. Note that while individuals might be within the personal space of a
wolf lower in the hierarchy, they do not encroach into their leader’s space.
Figure 4.11: Wolf behaviour showing three wolves travelling to the right.
The three wolves start approximately together, but become a column with
the alpha wolf, represented by the darker line, in front (see ends of lines to
the right).
4.3.5 Steering Behaviour
As with any animal, wolf agents combine the three behaviours of cohesion,
separation and alignment in order to move as a pack. In addition to this,
as discussed above, wolves know who and where their immediate leader is
within their local neighbourhood, i.e. within their field of vision. They use
this knowledge to follow that leader, whilst avoiding entering its ‘personal
space’.
The agents use simple arrival behaviour (Reynolds, 1999) to achieve this, in
which they head directly towards the current location of their leader. The
result of this is that the pack tends to travel in a column, with the alpha
wolf at its head (see figures 4.10 and 4.11).
4.4 Prey

4.4.1 Characteristics
The <prey> class in the model extends <animal>, and so inherits all the characteristics of its superclass. In addition, it has one further characteristic:
• state – integer value indicating the current state, taking one of three
values: GRAZE, ALERT or FLEE
4.4.2 Action Selection
The prey’s action selection mechanism is implemented in the agent’s step
method, and the particular action is determined by the current value of
its state parameter. Normally state has a value of GRAZE. This normal
grazing behaviour is discussed below. If an animal becomes aware of a wolf,
its state changes to ALERT. Its state will also change if it becomes aware of
another animal nearby whose state is ALERT. When in this state, the animal
will stop grazing for a period of time, and observe the actions of the wolf.
If it perceives that the wolf is coming towards it, its state will change to
FLEE. The actions of prey in this state, and the interactions between wolves
and their prey are discussed later in section 4.5. Fig. 4.12 shows the state
changes for prey.
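The corresponding transitions can be sketched as below; again the helper names and the threshold are illustrative stand-ins, not the model's actual identifiers.

    // Outline of a prey agent's state transitions. Note that ALERT
    // propagates: seeing an already-alert neighbour is enough.
    void updateState() {
        switch ( this.state ) {
            case GRAZE:
                if ( seesWolf() || seesAlertNeighbour() )
                    this.state = ALERT;   // stop grazing and watch
                break;
            case ALERT:
                if ( wolfApproachSpeed() > FLEE_THRESHOLD )
                    this.state = FLEE;    // the wolf is closing: run
                break;
            case FLEE:
                break;                    // evasion handled by steering
        }
    }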
Figure 4.12: Action selection for a <prey> agent.
4.4.3 Steering Behaviour
The type of prey that we have concerned ourselves with in this project
typically travel in large herds. The implication of this is that “the group
tends to act together (for example, all moving in the same direction at a
given time), but this does not occur as a result of planning or co-ordination.
Rather, each individual is choosing behaviour that corresponds to that of the
majority of other members, possibly through imitation or possibly because
all are responding to the same external circumstances” (Wikipedia, 2005).
With this in mind, herding may seem like quite an elaborate system – after
all, each individual animal must pay attention to what is going on around
it, being aware of what other animals are doing. However, this practice can
easily be attained by combining three key behaviours – separation, cohesion
and alignment.
Grazing
In the real world, herds of animals do not just herd, wandering around
aimlessly – rather, they graze. That is, they take the time to stop and eat,
to ensure their energy levels are maintained. Grazing is, however, just an
extension of herding. Whilst grazing, the herd must make certain that they
remain together as a group, all the while ensuring that they do not get in
the way of other herd members.
To produce this behaviour, we introduce a random element to their movement. Rather than applying the herding steering behaviour on every cycle,
we can randomly choose not to move at all. By checking the energy levels
of the creature, we can make sure that the animal stops frequently enough
to 'eat', so that its energy levels are suitably maintained.
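A minimal sketch of that decision follows; the stop probability and energy threshold are illustrative values, not those used in the model, and eat() stands for standing still while regaining energy.

    // Graze-or-move decision for one simulation step.
    Double2D grazeStep( SimState state ) {
        World world = (World) state;
        boolean hungry = getEnergy() < 30.0;                // illustrative
        if ( hungry || world.random.nextDouble() < 0.3 ) {  // sometimes stop
            eat();
            return new Double2D( 0, 0 );                    // no movement
        }
        return herd( state );                               // keep herding
    }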
Figure 4.13: Herding & Grazing: the lighter agents are in the process of
eating. The size of an individual agent reflects the animal’s actual size.
The behaviour produced by the prey agents as a result reflects very well
what can be observed in the real world (see figures 4.13 and 4.14).
4.5 Interactions
When placed together into the same environment, wolves and their prey interact. They each respond to the presence and actions of the other, although
there is no direct communication between the two groups of animals. Their
resultant behaviours are therefore determined solely by their own characteristics and not by any programmatically imposed instruction. We saw above
that on encountering an agent of a different type an animal’s state changes
accordingly. We shall now look at these states in a little more detail for
both types of animal.
4.5.1 Wolf
Figure 4.14: Herding behaviour. Lines represent individual animals moving
together (generally from top-left to bottom-right) into a herd.
We saw above that the normal state of a wolf agent is FOLLOW; the observation of a prey agent changes this to CHASE.
Chase
Because of the behaviour that emerges from a wolf's basic characteristics,
the normal pattern for a pack, as we have seen, is to follow the alpha wolf
in a column. This is what we typically see when their state is FOLLOW. As
soon as a wolf observes a prey agent within its field of vision, the wolf’s state
changes to CHASE.
In this state the agent stops following its leader and instead applies the
same principles to 'follow' the prey, although the wolves no longer concern
themselves with each other's personal space, and head directly towards the
prey. If the wolf can see a whole herd within its field of vision it will follow
the nearest animal. When chasing prey a wolf will travel at the maximum
possible speed consistent with its current energy level. Only if the wolf
starts relatively close is it able to catch the prey before slowing so much
that the prey escapes.
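The speed-energy relationship used in the model (the maxSpeed method in appendix C.1) is

$$v_{max}(E) = \begin{cases} m & \text{if } E > e \\ \dfrac{m}{2}\left(\cos\left(\dfrac{\pi E}{e} - \pi\right) + 1\right) & \text{otherwise} \end{cases}$$

where $E$ is the current energy level, $e$ is the energy required to attain maximum speed, and $m$ is the absolute maximum speed. As the animal tires, its attainable speed falls away smoothly towards zero.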
Attack
Having started chasing and then isolating an individual quarry, a wolf’s
state changes to ATTACK. As soon as one wolf changes its state to ATTACK,
the others follow suit and if they were not already chasing the same prey
will switch their attention to the animal being attacked by the first wolf.
At this stage, there is cooperation between the wolves in the pack as they
attempt to surround the prey before moving in for the kill.
4.5.2 Prey
We saw above that the normal state of a prey agent is GRAZE; the observation
of a wolf agent changes this to ALERT.
Alert
During the normal course of events, an agent checks on neighbouring animals within its field of vision, so as to maintain its position relative to them
within the herd. If it discovers that one of the animals is actually a wolf, its state
changes to ALERT. Similarly, if in the course of checking other prey agents it
discovers that one of them is already alerted, its own state also changes to
ALERT. While in this state it remains stationary and its attention is concentrated on the actions of any wolves it can see. If a wolf starts approaching
it at a speed greater than a certain threshold, its state changes to FLEE.
Flee
As soon as the prey agent’s state changes to FLEE it begins to take evasive
action by running away.
On an individual basis, fleeing is quite a basic concept. The optimal strategy
is to steer the agent so that it points directly away from the target location.
However, when dealing with large numbers of agents, as in a herd situation,
we cannot necessarily assume that the best course of action for the agent is
to run directly away. Attention must be paid to its neighbours, as with general herding, so that where possible the integrity of the herd is maintained.
This can be achieved by simply combining the resultant steering vectors of
separation, cohesion, alignment and the new behaviour, fleeing.
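A sketch of that combination, assuming the steering methods described above and an illustrative weighting that lets the flee vector dominate:

    // Combine fleeing with the usual herding forces by vector addition.
    // 'threat' is the location of the nearest pursuing wolf.
    Double2D fleeStep( Double2D threat, Continuous2D[] domains ) {
        double ax = location.x - threat.x;        // directly away from
        double ay = location.y - threat.y;        // the wolf
        double len = Math.sqrt( ax * ax + ay * ay );
        if ( len > 0 ) { ax /= len; ay /= len; }  // normalise

        Double2D sepa = separation( domains );
        Double2D cohe = cohesion( domains );
        Double2D alig = alignment( domains );

        double dx = 3.0 * ax + sepa.x + cohe.x + alig.x;  // 3.0 is an
        double dy = 3.0 * ay + sepa.y + cohe.y + alig.y;  // illustrative weight
        return new Double2D( dx, dy );
    }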
The agent will move at its maximum possible speed commensurate with its
current energy level. Obviously it will start travelling relatively quickly but
will slow down if it has to keep moving for any extended period. Depending
on the circumstances this might mean that the animal escapes the wolves,
or it might mean that it is attacked and killed.
4.6 Results of simulations
Fig. 4.15 shows the output from MASON’s display window for a typical
simulation involving both wolves and prey. It captures four stages during
the simulation.
Shortly after the beginning (frame a), the prey agents have started to move
together into a herd, and are mostly grazing. The wolves have formed into
a typical column and are moving up the screen to the left of the herd,
following the alpha wolf. The state of the wolves and prey at this stage is
their normal FOLLOW and GRAZE respectively.
As the wolves approach the herd the alpha wolf becomes aware of the nearest
prey animals (frame b), and turns towards them; its state has become
CHASE. The others, still following, likewise turn towards the herd, acquiring
the CHASE state from the alpha wolf. Meanwhile the herd has become aware
of the wolves’ approach, at first becoming ALERT, but then moving away as
the wolves get closer (their state becomes FLEE).
In frame c, most of the herd is well clear of the wolves, although one or two
stragglers have been left behind, because they are small and cannot move
as fast. The wolves have singled out the nearest of these, and have attacked
it.
Finally (frame d), the wolves have surrounded and killed the selected prey.
The rest of the herd, having reached a safe distance from the wolves, have
stopped fleeing, and returned to an ALERT state, as the wolves are still in
the vicinity.
Figure 4.15: Frames captured at four steps during a simulation, illustrating
a pack of five wolves approaching a herd of prey, and then singling out and
attacking an individual. For clarity, the wolves are highlighted with a grey
background. Moving prey agents are shown in black. Others, either grazing
or alert and therefore stationary, are shown in grey.
Chapter 5
Evaluation
As stated in the introduction, the initial aim of this project was to “investigate the hunting process of wolves, to see if simple rules underlie the complex
group interactions seen in wolf hunting, and not a reliance on sophisticated
communication and hierarchy”.
To this end, we created a simulation which included both wolf and prey
agents, and observed their interactions. In general, most of the characteristics we gave to the model agents produced reasonably realistic behaviours.
5.1 Agent Behaviour
The prey agents, with the characteristics they were given, were seen to
realistically graze. They acted together to herd and reacted appropriately
and in unison to an approaching threat.
Although the animals were restricted in the model to using sight only, rather
than a full range of senses, this single sense seemed to work effectively as
modelled. Even so, there is probably scope for refinements, as described
later.
By giving individual animals varying speeds and energy levels, we were
able to observe the typical phenomenon of stragglers being left behind and
“picked off” by attacking wolves. However, we found some problems with
the ALERT state, in that the agents did not realistically return to their normal grazing behaviour after a threat was removed. This deserves further
investigation and probably improvement to the algorithm responsible.
The 'following' behaviour of the wolf agents was as we expected, having given
them a rank and told them to follow their immediate superior. However,
we do not know, and it is not clear from the literature, exactly how this
following behaviour works in reality. It is known that the alpha wolf always
leads the pack, which follows in a column, but it is not clear how the column
is organised. We have assumed that it is by rank, but future research might
indicate otherwise and the model should reflect any such findings.
Although an ATTACK state was included in the model, and it was understood
at a high level what this should achieve (as described previously), there was
not time to properly implement it. The wolves therefore continued to use
their CHASE behaviour, until they had intercepted their prey. With the
ATTACK behaviour properly implemented, we would have expected the pack
to cooperate and concentrate its efforts on a single target. In fact, it was
sometimes observed that the pack would split and attack different individuals,
as illustrated in figure 5.1.
The behaviours observed in the model were of relatively short duration.
With the parameters as set, the wolves usually caught their prey relatively
quickly after an initial contact was made. It was not therefore possible, with
the current model, to assess long-term behaviour, in particular, the effect of
prolonged chasing and fleeing on speed and energy levels.
Figure 5.1: Two frames captured from a simulation illustrating a pack of
five wolves splitting to attack two targets. In (a) the pack has just split. In
(b) both targets have been attacked. Highlighting is as figure 4.15.
5.2 System Design
The class structure we created for the model seems in retrospect to be appropriate, and capable of extension for future work. Some aspects were
included in this structure, such as a field for the environment and the wolf’s
ATTACK state, which were eventually not used, but which could be implemented later.
5.3 The Future
There are many further aspects that can be investigated. Some would be
relatively easy to implement. Others might require major research and effort.
5.3.1 Senses
Firstly, we only took into account vision. We constructed sight cones which
assumed that an animal is always looking straight ahead. However, an
animal can turn its head whilst still moving forwards, thus complicating
the modelling of this sense. It might be that it is sufficient to increase the
angle of the sight cone to simulate this, but further investigation would be
required.
Both wolves and prey obviously have other senses, which they use for detecting both friend and foe. To include the sense of smell, an animal would
need to emit a scent, which would radiate in a diminishing concentration
from the animal, but would also remain for a period of time after the animal
moved on. The concentration of scent at any one location is also affected
by the wind strength and direction, which would have to be modelled too.
Other animals would presumably only be able to detect this scent above a
certain threshold level.
Animals also use sound to detect other animals. In theory this might be
easier to model than scent, as the sound is simply emitted in every direction
from the current location of the animal, and diminishes according to the
Inverse Square law. However its initial volume is probably related to the
activity of the animal and, like scent, its propagation is affected by the wind
and is only detectable above a certain threshold.
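That is, taking $I_0$ as the intensity at the source, the intensity detected at distance $r$ would fall off as $I(r) = I_0 / r^2$, before any wind or threshold effects are applied.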
Another behaviour that could be modelled is to allow the wolves to track
their prey. This would obviously require the simulation to maintain tracks
of where the herd had been. Again, this might be complicated if the tracks
fade with time, rather than persisting forever. A mechanism to allow the wolves
to interpret tracks would also be required.
If all these different senses were to be included together, there would need
to be a suitable algorithm to combine their inputs and enable the wolf to
work out where its prey is, and perhaps anticipate where it is going.
5.3.2 Environment
Although we included a third field explicitly to model aspects of the environment, such as trees and other obstacles, we did not have time to use this
to any extent. Obviously future studies could include more or less realistic environments, which could have interesting effects on behaviour. Trees,
rocks and similar objects could be accommodated in a two-dimensional environment with little trouble. Variations in ground cover, like forested areas,
soft ground and even water, could also be included. But changes in terrain
and various obstacles would mean that the animal agents would have to be
given sufficient intelligence to negotiate them effectively.
It might also be interesting to include hills and valleys, which would then
require a three-dimensional space. These would considerably complicate the
model, as the various senses described above would all be affected in different
ways, and the animals would have to include these variations in altitude as
they made decisions about where to move. This also raises the question of
whether the animals 'learn' their environment, and whether, for example,
the wolves might choose a longer route to their prey over a shorter one if
this was to their advantage.
5.3.3 Animal behaviour
There are a number of observed behaviours in both wolves and their prey
that we did not model, and which could usefully be incorporated. For example, we only modelled a flight response to attacking wolves. For some prey
it might also be appropriate to allow them to stand their ground. Enabling
them to move to wooded cover, if a suitable environment was modelled,
might also be a possibility. Furthermore, it is sometimes the case that the
younger members of the herd are protected within it, with larger animals, particularly mothers, placing themselves between their young and the attackers.
This too could be included in the model.
On the other side of the equation, we were only able to model a very simple
attack method for the wolves. The simple arrival behaviour that wolves use
in the current model to chase their prey could well be made more sophisticated, and it would obviously be beneficial to enable the wolves to cooperate
more in singling out and bringing down their prey.
It was not possible to extensively test the algorithms included in the model
for calculating speed, energy and so on. An extension to this work could
usefully investigate these to see if they faithfully represent reality. This
would probably involve changing the model to force the animals to respond
for longer time periods, and observe them as they become tired. An accurate
model might also show wolves giving up the hunt after a protracted period
of chasing, which we were not able to demonstrate.
5.4 Conclusions
We can conclude that the simulation, as implemented, successfully modelled
those characteristics which were included, and that the relatively simple
algorithms involved produced recognisable hunting behaviour. However, it
is apparent that, in order to fully model the various hunting behaviours of
wolves and the responses of their prey, many more characteristics must be
taken into account. This clearly leaves plenty of scope for future extensions
to this project.
Bibliography
Bedau, M. A. (1992), Philosophical Aspects of Artificial Life, in F. Varela
and P. Bourgine, eds, ‘Towards a Practice of Autonomous Systems’,
MIT Press, pp. 494–503.
Blumstein, D. T. (1999), ‘Alarm Calling in Three Species of Marmots’,
Animal Behaviour 136(6), 731–757.
Blumstein, D. T., Daniel, J. C. and Bryant, A. A. (2001), ‘Anti-Predator
Behavior of Vancouver Island Marmots: Using Congeners to Evaluate
Abilities of a Critically Endangered Mammal’, Ethology 107, 1–14.
Collier, N., Howe, T. and North, M. (2003), Onward and Upward: The Transition to Repast 2.0, in ‘Proceedings of the First Annual North American Association for Computational Social and Organizational Science
Conference’.
Conway, J. (1970), ‘Game of Life’, Scientific American April, 120.
Creel, S. (1997), ‘Cooperative Hunting and Group Size: Assumptions and
Currencies’, Animal Behaviour 54(5), 1147–1154.
Creel, S. and Creel, N. M. (1995), ‘Communal hunting and pack size in
African wild dogs, Lycaon pictus’, Animal Behaviour 50(5), 1325–1339.
García, C. G., Pérez, P. P. G. and Martínez, J. N. (2000), Action Selection
Properties in a Software Simulated Agent, in ‘MICAI 2000: Proceedings
of the Mexican International Conference on Artificial Intelligence’, Vol.
1793, Springer-Verlag, pp. 634–648.
Gardner, M. (1970), ‘Mathematical Games: The fantastic combinations of
John Conway’s new solitaire game “life”’, Scientific American pp. 120–
123.
Gévaudan Wolf Park (2003), WWW.
http://www.loupsdugevaudan.com/defaultgb.asp
Howie, G. (2001), ‘Emergent Behaviour’, WWW.
http://www.iis.ee.ic.ac.uk/~frank/surp99/article2/gh197/
International Wolf Center (2002), WWW.
http://www.wolf.org
Jakobi, N., Husbands, P. and Harvey, I. (1995), ‘Noise and The Reality
Gap: The Use of Simulation in Evolutionary Robotics’, Lecture Notes
in Computer Science 929, 704–720.
Kaiser, J. (1999), ‘Life and Death on a Computer’, Science Magazine
286(5441), 867.
Klein, J. (2002), BREVE: a 3D Environment for the Simulation of Decentralized Systems and Artificial Life, in ‘Artificial Life VIII: The 8th
International Conference on the Simulation and Synthesis of Living
Systems’, MIT Press.
Koeppel, D. (2002), ‘MASSIVE Attack’, Popular Science December
‘02, 38–44.
Kuzyk, G. W. (2001), ‘Female and Calf Moose Remain Stationary and
Non-aggressive when Approached by Wolves in West-central Alberta’,
Alberta Naturalist 31(4), 53–54.
Lewis, D. B. and Gower, D. M. (1980), Biology of Communication, Blackie.
Luke, S., Cioffi-Revilla, C., Panait, L. and Sullivan, K. (2004), MASON:
A New Multi-Agent Simulation Toolkit, in ‘Proceedings of the 2004
SwarmFest Workshop’.
Maes, P. (1994), ‘Modeling Adaptive Autonomous Agents’, Artificial Life
1(1-2), 135–162.
Marner, J. (2002), Evaluating Java for Game Development, Thesis, Department of Computer Science, University of Copenhagen.
MASON (2005), WWW.
http://cs.gmu.edu/~eclab/projects/mason/
Mech, L. D. (1970), The Wolf: The Ecology and Behaviour of an Endangered Species,
University of Minnesota Press.
Mech, L. D. (1999), ‘Alpha Status, Dominance, and Division of Labor in
Wolf Packs’, Canadian Journal of Zoology 77, 1196–1203.
Mech, L. D., Meier, T. J., Burch, J. W. and Adams, L. G. (1995), Patterns of
Prey Selection by Wolves in Denali National Park, Alaska, in L. Carbyn,
S. Fritts and D. Seip, eds, ‘Ecology and Conservation of Wolves in a
Changing World’, Canadian Circumpolar Institute, pp. 231–244.
Minar, N. R., Burkhart, R., Langton, C. and Askenazi, M. (1996), The
SWARM Simulation System: A Toolkit for Building Multi-Agent Simulations, Technical report, Santa Fe Institute.
Moukas, A. and Hayes, G. (1996), Synthetic robotic language acquisition
by observation, in ‘From Animals to Animats 4 – Proceeding on the
Fourth International Conference on Adaptive Behaviour’.
Noble, J. and Cliff, D. (1996), On Simulating the Evolution of Communication, Technical report, School of Cognitive and Computing Sciences,
University of Sussex.
Priestley, J. B. (1957), Thoughts in the Wilderness, Heinemann.
Radloff, F. G. T. and Du Toit, J. T. (2004), ‘Large Predators and Their
Prey in a Southern African Savanna: a Predator’s Size Determines its
Prey Size Range’, Journal of Animal Ecology 73(3), 410–423.
Reynolds, C. W. (1987), ‘Flocks, Herds, and Schools: A Distributed Behavioral Model’, Computer Graphics 21(4), 25–34.
Reynolds, C. W. (1999), Steering Behaviors for Autonomous Characters,
Technical report, Sony Computer Entertainment America.
Schatz, B., Lachaud, J. and Beugnon, G. (1997), ‘Graded Recruitment and
Hunting Strategies Linked to Prey Weight and Size in the Ponerine ant
Ectatomma ruidum’, Behavioral Ecology and Sociobiology 40(6), 337–
349.
Smith, J. and Dewey, T. (2002), Canis lupus. Animal Diversity Web, Online.
http://animaldiversity.ummz.umich.edu/site/accounts/information/Canis_lupus.html
Soule, T. (1999), Voting Teams: A Cooperative Approach to Non-typical
Problems Using Genetic Programming, Technical report, Department
of Computer Science, St. Cloud State University.
Walter, W. G. (1950), ‘An Imitation of Life’, Scientific American 182(5), 42–
45.
White, K. S. and Berger, J. (2001), ‘Antipredator Strategies of Alaskan
Moose: are Maternal Trade-offs Influenced by Offspring Activity?’,
Canadian Journal of Zoology 79, 2055–2062.
Wikipedia (2005), ‘Herd’, WWW.
http://en.wikipedia.org/wiki/Herd
Yeates, I. D. (2003), WolfHunt: A Simulation of Wolf Pack Hunting Behaviour, Master’s thesis, Department of Computer Science, University of
Bath.
Appendix A
Class Diagrams
A.1 Environment Class Structure
Figure A.1: The class structure of the environment in which agents exist.
A.2 Object Class Structure
Figure A.2: The object classes from which the <wolf> and <prey> classes are
derived.
Appendix B
Calculating ‘personal space’
One way of visualising a wolf’s ‘personal space’ is as a curve extending
forward from a point offset slightly behind the animal, as illustrated below.
Figure B.1: Defining the curve of a wolf's 'personal space'
Here $x'$ and $y'$ are the coordinates of the agent, and $\alpha$ is its heading; $\theta$ is the
angle of the sight cone, and $S$ is its extent. $D$ is the optimum separation
distance between wolf agents.
The coordinates of the three points can be defined as follows:

$$i = x' + D\cos(\alpha - \pi) \qquad j = y' + D\sin(\alpha - \pi)$$

$$a = x' + S\cos(\alpha - \theta) \qquad b = y' + S\sin(\alpha - \theta)$$

$$c = x' + S\cos(\alpha + \theta) \qquad d = y' + S\sin(\alpha + \theta)$$

The general formula for a quadratic curve is:

$$y = Ax^2 + Bx + C \quad \text{(B.1)}$$

We know that the three points (a, b), (c, d) and (i, j) all lie on the curve;
indeed, it is defined by these three points. By substituting each of these
points into (B.1), we arrive at the following:

$$b = Aa^2 + Ba + C \quad \text{(B.2)}$$

$$d = Ac^2 + Bc + C \quad \text{(B.3)}$$

$$j = Ai^2 + Bi + C \quad \text{(B.4)}$$

First we subtract (B.4) from each of (B.2) and (B.3):

$$b - j = A(a^2 - i^2) + B(a - i) \quad \text{(B.5)}$$

$$d - j = A(c^2 - i^2) + B(c - i) \quad \text{(B.6)}$$

We then multiply (B.5) by (i - c):

$$(b - j)(i - c) = (i - c)A(a^2 - i^2) + B(a - i)(i - c) \quad \text{(B.7)}$$

and (B.6) by (a - i):

$$(d - j)(a - i) = (a - i)A(c^2 - i^2) + B(c - i)(a - i) \quad \text{(B.8)}$$

Adding (B.7) and (B.8) gives:

$$(b - j)(i - c) + (d - j)(a - i) = (i - c)A(a^2 - i^2) + (a - i)A(c^2 - i^2) + B(a - i)(i - c) + B(c - i)(a - i) \quad \text{(B.9)}$$

The two B terms cancel, since (i - c) = -(c - i), so this reduces to:

$$(b - j)(i - c) + (d - j)(a - i) = A\left[(i - c)(a^2 - i^2) + (a - i)(c^2 - i^2)\right] \quad \text{(B.10)}$$

from which we can establish a formula for A:

$$A = \frac{(b - j)(i - c) + (d - j)(a - i)}{(i - c)(a^2 - i^2) + (a - i)(c^2 - i^2)} \quad \text{(B.11)}$$

Consequently, by substituting (B.11) back into (B.5), and then both results
into (B.4), we can establish formulae for B and C as well:

$$B = \frac{(b - j) - A(a^2 - i^2)}{a - i} \quad \text{(B.12)}$$

$$C = j - Ai^2 - Bi \quad \text{(B.13)}$$

Looking back at (B.1), and substituting in (B.13), we can work out the
exact formula for the boundary curve:

$$y = Ax^2 + Bx + C = Ax^2 + Bx + j - Ai^2 - Bi = A(x^2 - i^2) + B(x - i) + j \quad \text{(B.14)}$$

with A and B given by (B.11) and (B.12). We could at this point plug in
all of the original point equations, taking it right back to the basic $x'$ and
$y'$ coordinates. But to be perfectly honest it is probably better to let the
computer take over!
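In that spirit, a minimal Java sketch of the hand-off: given the three points, the coefficients follow directly from equations (B.11) to (B.13).

    // Coefficients of the quadratic y = Ax^2 + Bx + C through the points
    // (a,b), (c,d) and (i,j), following equations (B.11)-(B.13).
    static double[] curveCoefficients( double a, double b, double c,
                                       double d, double i, double j ) {
        double A = ( (b - j) * (i - c) + (d - j) * (a - i) )
                 / ( (i - c) * (a * a - i * i) + (a - i) * (c * c - i * i) );
        double B = ( (b - j) - A * (a * a - i * i) ) / (a - i);
        double C = j - A * i * i - B * i;
        return new double[] { A, B, C };
    }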
Appendix C
Code
On the following pages can be found the complete code listing for the simulation presented in this document. Only that code written by the author, or
otherwise not freely available in the MASON source, has been included. All
other code is obtainable directly from the MASON source, the latest version
of which can be downloaded from:
http://cs.gmu.edu/~eclab/projects/mason/
C.1 Animal.java
package uk.ac.bath.cs1ah.hunt;

import java.awt.Color;
import java.io.FileWriter;

import ec.util.MersenneTwisterFast;

import sim.engine.*;
import sim.field.continuous.*;
import sim.portrayal.*;
import sim.util.*;

public abstract class Animal extends Item
        implements Steppable, sim.portrayal.Oriented2D {

    protected Vision standard   = new Vision( 0, 0 );
    protected Vision peripheral = new Vision( 0, 0 );
    protected double orientation = 0;
    protected Double2D lastStep = new Double2D( 0, 0 );

    protected boolean dead = false;

    protected double rank; // hierarchical standing (see section 4.3.1)
    protected double size, curSpeed, curEnergy,
                     absMaxSpeed, absMaxSpeedEnergy,
                     fitnessFactor, restingEnergyConsumption;

    /** Class constructor. */
    public Animal( ) {
        this( new Double2D( 0, 0 ) );
    }

    /** Class constructor specifying the location of the animal. */
    public Animal( Double2D location ) {
        this( location, 1.0, 1.0 );
    }

    /** Class constructor specifying the location and domain of the animal. */
    public Animal( Double2D location, Continuous2D domain ) {
        this( location );
    }

    /** Class constructor specifying the size and rank of the animal. */
    public Animal( double size, double rank ) {
        this( new Double2D( 0, 0 ), 0, size, rank );
    }

    /** Class constructor specifying the standard and peripheral vision of
        the animal. */
    public Animal( Vision standard, Vision peripheral ) {
        this( new Double2D( 0, 0 ), standard, peripheral );
    }

    /** Class constructor specifying the location, size and rank of the
        animal. */
    public Animal( Double2D location, double size, double rank ) {
        this( location, 0, size, rank );
    }

    /** Class constructor specifying the location, size, rank and domain of
        the animal. */
    public Animal( Double2D location, double size, double rank,
                   Continuous2D domain ) {
        this( location, 0, size, rank, domain );
    }

    /** Class constructor specifying the location, orientation, size and
        rank of the animal. */
    public Animal( Double2D location, double orientation, double size,
                   double rank ) {
        this( location, orientation, size, rank,
              new Vision( 5, 20 ), new Vision( 1, 100 ), null );
    }

    /** Class constructor specifying the location, orientation, size, rank
        and domain of the animal. */
    public Animal( Double2D location, double orientation, double size,
                   double rank, Continuous2D domain ) {
        this( location, orientation, size, rank,
              new Vision( 5, 20 ), new Vision( 1, 100 ), domain );
    }

    /** Class constructor specifying the location, standard vision and
        peripheral vision of the animal. */
    public Animal( Double2D location, Vision standard, Vision peripheral ) {
        this( location, 0, 1.0, 1.0, standard, peripheral, null );
    }

    /** Class constructor specifying the location, standard vision,
        peripheral vision and domain of the animal. */
    public Animal( Double2D location, Vision standard, Vision peripheral,
                   Continuous2D domain ) {
        this( location, 0, 1.0, 1.0, standard, peripheral, domain );
    }

    /** Class constructor specifying the location, orientation, size, rank,
        standard vision, peripheral vision and domain of the animal. */
    public Animal( Double2D location, double orientation, double size,
                   double rank, Vision standard, Vision peripheral,
                   Continuous2D domain ) {
        super( location, size, domain );
        this.orientation = orientation;
        this.size = size;
        this.rank = rank;
        this.standard = standard;
        this.peripheral = peripheral;
        this.curSpeed = 0;
        this.curEnergy = 50;
        this.absMaxSpeed = 15;
        this.absMaxSpeedEnergy = 50;
        this.fitnessFactor = 1;
        this.restingEnergyConsumption = 0.5;
    }

    /** Gets the current energy available to the animal. */
    public double getEnergy( ) {
        return this.curEnergy;
    }

    public double getMaxSpeed( ) {
        return maxSpeed();
    }

    /** Gets the size of the animal. */
    public double getSize( ) {
        return this.size;
    }

    /** Sets the size of the animal. */
    public void setSize( double size ) {
        this.size = size;
    }

    /** Gets the peripheral vision of the animal. */
    public Vision getPeripheralVision( ) {
        return this.peripheral;
    }

    /** Gets the standard vision of the animal. */
    public Vision getStandardVision( ) {
        return this.standard;
    }

    /** Gets the portrayal renderer for the animal. */
    public Portrayal getPortrayal( ) {
        if ( this.portrayal == null ) {
            this.portrayal = new AnimalPortrayal2D(
                new SimplePortrayal2D(), this.size, this.color );
        }
        return this.portrayal;
    }

    /**
     * Checks to see if the given item is within range of vision.
     * NB: This doesn't check to see if there are obstacles in the way.
     */
    public boolean canSee( Item item ) {
        Double2D iLoc = item.getLocation();
        double dx = iLoc.x - this.location.x;
        double dy = iLoc.y - this.location.y;
        double theta = Math.atan2( dx, dy );
        double d = Math.sqrt( dx * dx + dy * dy );
        double o = this.orientation;
        return (
            ( theta >= o - standard.getAngle() &&
              theta <= o + standard.getAngle() &&
              d <= standard.getDistance() )
            || ( theta >= o - peripheral.getAngle() &&
                 theta <= o + peripheral.getAngle() &&
                 d <= peripheral.getDistance() )
        );
    }

    public boolean isDead( ) {
        return this.dead;
    }

    public void die( ) {
        this.dead = true;
        setColor( Color.gray );
    }

    protected void setColor( Color color ) {
        this.color = color;
        if ( this.portrayal != null )
            ( (AnimalPortrayal2D) this.portrayal ).setColor( color );
    }

    public double orientation2D( ) {
        // if ( orientation.x == 0 && orientation.y == 0 ) return 0;
        // return Math.atan2( orientation.y, orientation.x );
        return this.orientation;
    }

    /** Synonymous with orientation2D, which is required by MASON. */
    public double getOrientation( ) {
        return this.orientation2D();
    }

    /** Called every step in the simulation. */
    public abstract void step( SimState state );

    // Steering behaviour bits

    protected Double2D herd( SimState state ) {
        this.setColor( Color.black );
        World world = (World) state;

        setSpeed( world.random.nextDouble() * 3 );

        Continuous2D[] domains = new Continuous2D[] {
            world.Wolves(), world.Prey(), world.Obstacles()
        };

        Double2D alig = alignment( domains );
        Double2D cohe = cohesion( domains );
        Double2D mome = momentum( );
        Double2D rand = randomness( world.random );
        Double2D sepa = separation( domains );

        double dx = alig.x + cohe.x + mome.x + rand.x + sepa.x;
        double dy = alig.y + cohe.y + mome.y + rand.y + sepa.y;

        double dis = Math.sqrt( dx * dx + dy * dy );
        if ( dis > 0 ) {
            dx = ( dx / dis ) * this.speed();
            dy = ( dy / dis ) * this.speed();
        }

        orientation = Math.atan2( dy, dx );
        return new Double2D( Math.cos( orientation ),
                             Math.sin( orientation ) );
    }

    protected Double2D wander( SimState state ) {
        this.setColor( Color.black );
        World world = (World) state;

        setSpeed( curSpeed + ( getTargetSpeed() - curSpeed ) / 5 +
                  ( -1 + world.random.nextDouble() * 2 ) );

        Double2D mome = momentum( );
        Double2D rand = randomness( world.random );

        double dx = mome.x + rand.x;
        double dy = mome.y + rand.y;

        double dis = Math.sqrt( dx * dx + dy * dy );
        if ( dis > 0 ) {
            dx = ( dx / dis ) * this.speed();
            dy = ( dy / dis ) * this.speed();
        }

        orientation = Math.atan2( dy, dx );
        return new Double2D( Math.cos( orientation ),
                             Math.sin( orientation ) );
    }

    // Produces an alignment factor suitable for reasonable herding
    public double alignmentFactor( Item item ) {
        return 1.0; // 10.0;
    }

    // Produces a cohesion factor suitable for reasonable herding
    public double cohesionFactor( Item item ) {
        return 1.0; // 10.0;
    }

    // Produces a separation factor suitable for reasonable herding
    public double separationFactor( Item item ) {
        return 1.0; // 0.1;
    }

    public Double2D alignment( Continuous2D items ) {
        return alignment( new Continuous2D[] { items } );
    }

    public Double2D cohesion( Continuous2D items ) {
        return cohesion( new Continuous2D[] { items } );
    }

    public Double2D separation( Continuous2D items ) {
        return separation( new Continuous2D[] { items } );
    }

    public Double2D alignment( Continuous2D[] items ) {
        double x = 0;
        double y = 0;

        Bag neighbours = null;
        int count = 0;
        double sd = ( this.standard != null ?
                      this.standard.getDistance() : 0 );
        double pd = ( this.peripheral != null ?
                      this.peripheral.getDistance() : 0 );
        double vd = Math.max( sd, pd );
        double vdSq = vd * vd;

        for ( int i = 0; i < items.length; i++ ) {
            neighbours = items[i].getObjectsWithinDistance(
                this.location, vd / 3, true, true );
            for ( int j = 0; j < neighbours.numObjs; j++ ) {
                count++;
                Item other = (Item) neighbours.objs[j];
                if ( this != other && this.canSee( other ) ) {
                    Double2D otherLoc = other.getLocation();
                    double dx = items[i].tdx( this.location.x, otherLoc.x );
                    double dy = items[i].tdy( this.location.y, otherLoc.y );
                    double lenSq = dx * dx + dy * dy;
                    if ( lenSq <= vdSq ) {
                        Double2D otherMom = other.momentum();
                        x += otherMom.x * alignmentFactor( other );
                        y += otherMom.y * alignmentFactor( other );
                    }
                }
            }
        }

        if ( count > 0 ) {
            x /= count;
            y /= count;
        }
        return new Double2D( x, y );
    }

    public Double2D cohesion( Continuous2D[] items ) {
        double x = 0;
        double y = 0;

        Bag neighbours = null;
        int count = 0;
        double sd = ( this.standard != null ?
                      this.standard.getDistance() : 0 );
        double pd = ( this.peripheral != null ?
                      this.peripheral.getDistance() : 0 );
        double vd = Math.max( sd, pd );
        double vdSq = vd * vd;

        for ( int i = 0; i < items.length; i++ ) {
            neighbours = items[i].getObjectsWithinDistance(
                this.location, vd / 3, true, true );
            for ( int j = 0; j < neighbours.numObjs; j++ ) {
                count++;
                Item other = (Item) neighbours.objs[j];
                if ( this != other && this.canSee( other ) ) {
                    Double2D otherLoc = other.getLocation();
                    double dx = items[i].tdx( this.location.x, otherLoc.x );
                    double dy = items[i].tdy( this.location.y, otherLoc.y );
                    double lenSq = dx * dx + dy * dy;
                    if ( lenSq <= vdSq ) {
                        x += dx * cohesionFactor( other );
                        y += dy * cohesionFactor( other );
                    }
                }
            }
        }

        if ( count > 0 ) {
            x /= count;
            y /= count;
        }
        return new Double2D( -x / 10, -y / 10 );
    }

    public Double2D separation( Continuous2D[] items ) {
        double x = 0;
        double y = 0;

        Bag neighbours = null;
        int count = 0;
        double sd = ( this.standard != null ?
                      this.standard.getDistance() : 0 );
        double pd = 0;
        double vd = Math.max( sd, pd );
        double vdSq = vd * vd;

        for ( int i = 0; i < items.length; i++ ) {
            neighbours = items[i].getObjectsWithinDistance(
                this.location, vd / 3, true, true );
            for ( int j = 0; j < neighbours.numObjs; j++ ) {
                Item other = (Item) neighbours.objs[j];
                if ( this != other && this.canSee( other ) ) {
                    count++;
                    Double2D otherLoc = other.getLocation();
                    double dx = items[i].tdx( this.location.x, otherLoc.x );
                    double dy = items[i].tdy( this.location.y, otherLoc.y );
                    double lenSq = dx * dx + dy * dy;
                    if ( lenSq <= vdSq ) {
                        x += ( dx / ( lenSq * lenSq + 1 ) ) *
                             separationFactor( other );
                        y += ( dy / ( lenSq * lenSq + 1 ) ) *
                             separationFactor( other );
                    }
                }
            }
        }

        if ( count > 0 ) {
            x /= count;
            y /= count;
        }
        return new Double2D( 400 * x, 400 * y );
    }

    public Double2D randomness( MersenneTwisterFast r ) {
        double x = r.nextDouble() * 2 - 1.0;
        double y = r.nextDouble() * 2 - 1.0;
        double l = Math.sqrt( x * x + y * y );
        return new Double2D( 0.05 * x / l, 0.05 * y / l );
    }

    public Double2D momentum( ) {
        return this.lastStep;
    }

    public Double2D seek( Item item ) {
        return seek( item.location );
    }

    public Double2D seek( Double2D location ) {
        return seek( location.x, location.y );
    }

    public Double2D seek( double seekX, double seekY ) {
        // desired_vector = normalise( position - target ) * desired_speed
        // steering = desired_vector - cur_vector
        // cur_vector == momentum == orientation
        // desired_speed == speed()

        double dx, dy, l;

        dx = domain.tdx( seekX, this.location.x );
        dy = domain.tdy( seekY, this.location.y );
        l = Math.sqrt( dx * dx + dy * dy );

        orientation = Math.atan2( dy, dx );
        setSpeed( l );
        return new Double2D( ( curSpeed * dx / l ) - lastStep.x,
                             ( curSpeed * dy / l ) - lastStep.y );
    }

    public double getTargetSpeed( ) {
        return 5;
    }

    public double getSpeed( ) {
        return speed();
    }

    public void setSpeed( double speed ) {
        this.curSpeed = Math.min( speed, maxSpeed() );
    }

    public double speed( ) {
        double ms = maxSpeed();
        if ( curSpeed > ms )
            curSpeed = ms;
        return curSpeed;
    }

    public double maxSpeed( ) {
        /* y = ( m / 2 )( cos( ( PI / e )x - PI ) + 1 )
         *
         * y: <max speed at current energy level>
         * x: <current energy level>
         * e: <energy required to attain max. speed>
         * m: <abs. max. speed>
         *
         * if x > e, return m
         */
        if ( curEnergy > absMaxSpeedEnergy )
            return absMaxSpeed;
        return ( absMaxSpeed / 2 ) * ( Math.cos(
            ( Math.PI / absMaxSpeedEnergy ) * curEnergy - Math.PI ) + 1 );
    }

    protected double energyConsumed( ) {
        /* y = ( 1 / ( 12.5 + 12.5f ) )x^2 + r
         *
         * y: <energy consumption at current speed>
         * x: <current speed>
         * f: <fitness factor> [the higher the value of f, the longer it
         *    takes to get tired]
         * r: <energy consumption at rest>
         *
         * if x == 0 return r
         */
        if ( curSpeed == 0 )
            return restingEnergyConsumption;
        return ( 1 / ( 12.5 + 12.5 * fitnessFactor ) ) *
               ( curSpeed * curSpeed ) + restingEnergyConsumption;
    }

    protected void report( ) {
        long time = world.schedule.getSteps();
        double x = location.x;
        double y = location.y;
        String filename = world.parameters.getLong( "base.seed", 0 ) +
            "." + getClass().getName() + id + ".log";
        try {
            FileWriter log = new FileWriter( "logs/" + filename, true );
            log.write( time + "\t" + x + "\t" + y +
                       System.getProperty( "line.separator" ) );
            log.close();
        } catch ( java.io.IOException e ) {
            System.out.println( "Failed writing to log <logs/" +
                                filename + "> at time " + time );
        }
    }
}
C.2 AnimalPortrayal2D.java
package uk.ac.bath.cs1ah.hunt;

import java.awt.*;
import sim.display.*;
import sim.portrayal.*;

public class AnimalPortrayal2D extends SimplePortrayal2D {

    public static final double DEFAULT_SCALE = 0.5;
    public static final int DEFAULT_EXTENSION = 0;
    public static final Color DEFAULT_COLOR = Color.red;

    protected Color color;
    protected Color peripheralColor = new Color( 102, 153, 102, 50 );
    protected Color standardColor = new Color( 51, 102, 153, 50 );
    protected SimplePortrayal2D child;
    protected double scale;
    protected int extension;

    protected boolean visionVisible = World.parameters.getBoolean(
        "world.sightCones", false );

    protected int[] simplePolygonX = new int[4];
    protected int[] simplePolygonY = new int[4];

    public AnimalPortrayal2D( SimplePortrayal2D child ) {
        this( child, DEFAULT_SCALE, DEFAULT_EXTENSION, DEFAULT_COLOR );
    }

    public AnimalPortrayal2D( SimplePortrayal2D child, Color color ) {
        this( child, DEFAULT_SCALE, DEFAULT_EXTENSION, color );
    }

    public AnimalPortrayal2D( SimplePortrayal2D child, double scale,
                              int extension ) {
        this( child, scale, extension, DEFAULT_COLOR );
    }

    public AnimalPortrayal2D( SimplePortrayal2D child, double scale,
                              Color color ) {
        this( child, scale, DEFAULT_EXTENSION, color );
    }

    public AnimalPortrayal2D( SimplePortrayal2D child, double scale,
                              int extension, Color color ) {
        this.child = child;
        this.scale = scale;
        this.extension = extension;
        this.color = color;
    }

    public void showVision( boolean show ) {
        this.visionVisible = show;
    }

    public void setColor( Color color ) {
        this.color = color;
    }

    public Color getColor( ) {
        return this.color;
    }

    public SimplePortrayal2D getChild( Object object ) {
        if ( child != null ) return child;
        else {
            if ( !( object instanceof SimplePortrayal2D ) )
                throw new RuntimeException( "Object provided to " +
                    "AnimalPortrayal2D is not a SimplePortrayal2D: " +
                    object );
            return (SimplePortrayal2D) object;
        }
    }

    public void draw( Object object, Graphics2D graphics,
                      DrawInfo2D info ) {
        getChild( object ).draw( object, graphics, info );

        if ( object != null && ( object instanceof Animal ) ) {
            Animal animal = (Animal) object;

            final double orientation = animal.orientation2D();
            final int length = (int) ( scale * (
                info.draw.width < info.draw.height ?
                info.draw.width : info.draw.height ) ) + extension;

            final double lenx = Math.cos( orientation ) * length;
            final double leny = Math.sin( orientation ) * length;
            final int x = (int) ( info.draw.x + lenx );
            final int y = (int) ( info.draw.y + leny );

            int vSize, vX, vY, sAngle, aAngle;

            if ( this.visionVisible ) {
                if ( !animal.isDead() ) {
                    // peripheral vision
                    if ( animal.peripheral != null ) {
                        graphics.setPaint( peripheralColor );
                        vSize = (int) ( length *
                            animal.peripheral.getDistance() );
                        vX = (int) ( info.draw.x - vSize );
                        vY = (int) ( info.draw.y - vSize );
                        sAngle = (int) ( ( -orientation -
                            animal.peripheral.getAngle() ) *
                            ( 180 / Math.PI ) );
                        aAngle = (int) ( ( animal.peripheral.getAngle()
                            * 2 ) * ( 180 / Math.PI ) );
                        graphics.fillArc( vX, vY, 2 * vSize, 2 * vSize,
                                          sAngle, aAngle );
                    }

                    // standard vision
                    if ( animal.standard != null ) {
                        graphics.setPaint( standardColor );
                        vSize = (int) ( length *
                            animal.standard.getDistance() );
                        vX = (int) ( info.draw.x - vSize );
                        vY = (int) ( info.draw.y - vSize );
                        sAngle = (int) ( ( -orientation -
                            animal.standard.getAngle() ) *
                            ( 180 / Math.PI ) );
                        aAngle = (int) ( ( animal.standard.getAngle()
                            * 2 ) * ( 180 / Math.PI ) );
                        graphics.fillArc( vX, vY, 2 * vSize, 2 * vSize,
                                          sAngle, aAngle );
                    }
                }
            }

            graphics.setPaint( color );

            simplePolygonX[0] = x;
            simplePolygonY[0] = y;
            simplePolygonX[1] = (int) ( info.draw.x + -leny + -lenx );
            simplePolygonY[1] = (int) ( info.draw.y + lenx + -leny );
            simplePolygonX[2] = (int) ( info.draw.x + -lenx / 2 );
            simplePolygonY[2] = (int) ( info.draw.y + -leny / 2 );
            simplePolygonX[3] = (int) ( info.draw.x + leny + -lenx );
            simplePolygonY[3] = (int) ( info.draw.y + -lenx + -leny );

            graphics.fillPolygon( simplePolygonX, simplePolygonY, 4 );
        }
    }

    public boolean hitObject( Object object, DrawInfo2D range ) {
        return getChild( object ).hitObject( object, range );
    }

    public boolean setSelected( LocationWrapper wrapper,
                                boolean selected ) {
        return getChild( wrapper.getObject() ).setSelected(
            wrapper, selected );
    }

    public Inspector getInspector( LocationWrapper wrapper,
                                   GUIState state ) {
        return getChild( wrapper.getObject() ).getInspector(
            wrapper, state );
    }

    public String getName( LocationWrapper wrapper ) {
        return getChild( wrapper.getObject() ).getName( wrapper );
    }
}
C.3 Item.java
package uk.ac.bath.cs1ah.hunt;

import java.awt.Color;

import sim.field.continuous.Continuous2D;
import sim.portrayal.*;
import sim.util.*;

public abstract class Item {

    protected Double2D location;
    protected Continuous2D domain;
    protected String name;
    protected Portrayal portrayal;
    protected double boundingRadius;
    protected int id;
    protected World world = null;
    protected Color color = Color.black;

    /**
     * Class constructor, specifying the x and y coordinates of the
     * item.
     *
     * @param x the x coordinate of the item
     * @param y the y coordinate of the item
     */
    public Item( double x, double y ) {
        this( new Double2D( x, y ) );
    }

    /**
     * Class constructor, specifying the x and y coordinates and the
     * bounding radius of the item.
     *
     * @param x      the x coordinate of the item
     * @param y      the y coordinate of the item
     * @param radius the bounding radius of the item
     */
    public Item( double x, double y, double radius ) {
        this( new Double2D( x, y ), radius );
    }

    /**
     * Class constructor, specifying the location of the item.
     *
     * @param location the location of the item
     */
    public Item( Double2D location ) {
        this( location, 0, null );
    }

    /**
     * Class constructor, specifying the location and the bounding
     * radius of the item.
     *
     * @param location the location of the item
     * @param radius   the bounding radius of the item
     */
    public Item( Double2D location, double radius ) {
        this( location, radius, null );
    }

    /**
     * Class constructor, specifying the x and y coordinates and the
     * domain of the item.
     *
     * @param x      the x coordinate of the item
     * @param y      the y coordinate of the item
     * @param domain the domain of the item
     */
    public Item( double x, double y, Continuous2D domain ) {
        this( new Double2D( x, y ), 0, domain );
    }

    /**
     * Class constructor, specifying the location and the domain of the
     * item.
     *
     * @param location the location of the item
     * @param domain   the domain of the item
     */
    public Item( Double2D location, Continuous2D domain ) {
        this( location, 0, domain );
    }

    /**
     * Class constructor, specifying the location, bounding radius and
     * domain of the item.
     *
     * @param location the location of the item
     * @param radius   the bounding radius of the item
     * @param domain   the domain of the item
     */
    public Item( Double2D location, double radius,
            Continuous2D domain ) {
        this.domain = domain;
        this.boundingRadius = radius;
        this.setLocation( location );
    }

    /**
     * Sets the bounding radius of the item.
     *
     * @param radius the bounding radius of the item
     */
    public void setBoundingRadius( double radius ) {
        this.boundingRadius = radius;
    }

    /**
     * Gets the bounding radius of the item.
     *
     * @return the bounding radius of the item
     */
    public double getBoundingRadius( ) {
        return this.boundingRadius;
    }

    /**
     * Gets the domain of the item.
     *
     * @return the domain of the item
     */
    public Continuous2D getDomain( ) {
        return this.domain;
    }

    /**
     * Sets the domain of the item.
     *
     * @param domain the domain of the item
     */
    public void setDomain( Continuous2D domain ) {
        this.domain = domain;
        setLocation( this.location );
    }

    /**
     * Gets the world to which the item belongs.
     */
    public World getWorld( ) {
        return this.world;
    }

    /**
     * Sets the world to which the item belongs.
     */
    public void setWorld( World world ) {
        this.world = world;
    }

    /**
     * Gets the location of the item.
     *
     * @return the location of the item
     */
    public Double2D getLocation( ) {
        return this.location;
    }

    /**
     * Sets the location of the item.
     *
     * @param location the location of the item
     */
    public void setLocation( Double2D location ) {
        this.location = location;
        if ( this.domain != null )
            this.domain.setObjectLocation( this, location );
    }

    /**
     * Gets the name of the item.
     *
     * @return the name of the item
     */
    public String getName( ) {
        return this.name;
    }

    /**
     * Sets the name of the item.
     *
     * @param name the name of the item
     */
    public void setName( String name ) {
        this.name = name;
    }

    /**
     * Gets the momentum of the item; the base implementation reports
     * none.
     */
    public Double2D momentum( ) {
        return new Double2D( 0, 0 );
    }

    /**
     * Provides a string readout of the item.
     *
     * @return a string readout of the item
     */
    public String toString( ) {
        return this.getClass().getName() + ": " + this.name + " [" +
            this.location.x + "," + this.location.y + "]";
    }

    /**
     * Gets the portrayal that should be used to render the item.
     *
     * @return the portrayal to render the item
     */
    public abstract Portrayal getPortrayal( );
}
C.4 ParameterDatabase.java
package uk.ac.bath.cs1ah.hunt;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.lang.NumberFormatException;
import java.util.Properties;

public class ParameterDatabase
    extends Properties {

    /**
     * Class constructor.
     */
    public ParameterDatabase( ) {
        super();
    }

    /**
     * Class constructor specifying the location of a configuration
     * file.
     *
     * @param location a string giving the location of the
     *                 configuration file
     * @throws FileNotFoundException if the given file is not found
     * @throws IOException if an input exception occurred
     */
    public ParameterDatabase( String location ) throws
            FileNotFoundException, IOException {
        super();
        File file = new File( location );
        load( new FileInputStream( file.getAbsoluteFile() ) );
    }

    /**
     * Saves the current parameter configuration to a file.
     *
     * @param location a string giving the location of the file to
     *                 save to
     * @throws IOException if an output exception occurred
     */
    public void save( String location ) throws IOException {
        File file = new File( location );
        store( new FileOutputStream( file.getAbsoluteFile() ), null );
    }

    /**
     * Returns a string held within the database. If the given key is
     * not found, the default value is returned.
     *
     * @param key          the name of the parameter to return
     * @param defaultValue a string to return should the parameter
     *                     not be found
     * @return the value of the parameter, if found; the default value
     *         otherwise
     */
    public String getString( String key, String defaultValue ) {
        key = key.trim();
        if ( ! this.containsKey( key ) )
            this.setProperty( key, defaultValue );
            // return defaultValue;
        return getProperty( key, defaultValue );
    }

    /**
     * Sets a string parameter in the database.
     *
     * @param key   the name of the parameter to set
     * @param value the value of the parameter to set
     */
    public void setString( String key, String value ) {
        this.setProperty( key.trim(), value );
    }

    /**
     * Returns an integer held within the database. If the given key is
     * not found, the default value is returned.
     *
     * @param key          the name of the parameter to return
     * @param defaultValue an integer to return should the parameter
     *                     not be found
     * @return the value of the parameter, if found; the default value
     *         otherwise
     */
    public int getInt( String key, int defaultValue ) {
        key = key.trim();
        if ( ! this.containsKey( key ) )
            this.setInt( key, defaultValue );
            // return defaultValue;
        try {
            return Integer.parseInt( this.getProperty( key ) );
        } catch ( NumberFormatException e ) {
            System.err.println( "Unable to cast parameter <" + key +
                "> to int" );
            System.err.println( e.toString() );
            return defaultValue;
        }
    }

    /**
     * Sets an integer parameter in the database.
     *
     * @param key   the name of the parameter to set
     * @param value the value of the parameter to set
     */
    public void setInt( String key, int value ) {
        this.setString( key, "" + value );
    }

    /**
     * Returns a double held within the database. If the given key is
     * not found, the default value is returned.
     *
     * @param key          the name of the parameter to return
     * @param defaultValue a double to return should the parameter
     *                     not be found
     * @return the value of the parameter, if found; the default value
     *         otherwise
     */
    public double getDouble( String key, double defaultValue ) {
        key = key.trim();
        if ( ! this.containsKey( key ) )
            this.setDouble( key, defaultValue );
            // return defaultValue;
        try {
            return Double.parseDouble( this.getProperty( key ) );
        } catch ( NumberFormatException e ) {
            System.err.println( "Unable to cast parameter <" + key +
                "> to double" );
            return defaultValue;
        }
    }

    /**
     * Sets a double parameter in the database.
     *
     * @param key   the name of the parameter to set
     * @param value the value of the parameter to set
     */
    public void setDouble( String key, double value ) {
        this.setString( key, "" + value );
    }

    /**
     * Returns a float held within the database. If the given key is
     * not found, the default value is returned.
     *
     * @param key          the name of the parameter to return
     * @param defaultValue a float to return should the parameter
     *                     not be found
     * @return the value of the parameter, if found; the default value
     *         otherwise
     */
    public float getFloat( String key, float defaultValue ) {
        key = key.trim();
        if ( ! this.containsKey( key ) )
            this.setFloat( key, defaultValue );
            // return defaultValue;
        try {
            return Float.parseFloat( this.getProperty( key ) );
        } catch ( NumberFormatException e ) {
            System.err.println( "Unable to cast parameter <" + key +
                "> to float" );
            return defaultValue;
        }
    }

    /**
     * Sets a float parameter in the database.
     *
     * @param key   the name of the parameter to set
     * @param value the value of the parameter to set
     */
    public void setFloat( String key, float value ) {
        this.setString( key, "" + value );
    }

    /**
     * Returns a long held within the database. If the given key is
     * not found, the default value is returned.
     *
     * @param key          the name of the parameter to return
     * @param defaultValue a long to return should the parameter
     *                     not be found
     * @return the value of the parameter, if found; the default value
     *         otherwise
     */
    public long getLong( String key, long defaultValue ) {
        key = key.trim();
        if ( ! this.containsKey( key ) )
            this.setLong( key, defaultValue );
            // return defaultValue;
        try {
            return Long.parseLong( this.getProperty( key ) );
        } catch ( NumberFormatException e ) {
            System.err.println( "Unable to cast parameter <" + key +
                "> to long" );
            return defaultValue;
        }
    }

    /**
     * Sets a long parameter in the database.
     *
     * @param key   the name of the parameter to set
     * @param value the value of the parameter to set
     */
    public void setLong( String key, long value ) {
        this.setString( key, "" + value );
    }

    /**
     * Returns a boolean held within the database. If the given key is
     * not found, the default value is returned.
     *
     * @param key          the name of the parameter to return
     * @param defaultValue a boolean to return should the parameter
     *                     not be found
     * @return the value of the parameter, if found; the default value
     *         otherwise
     */
    public boolean getBoolean( String key, boolean defaultValue ) {
        key = key.trim();
        if ( ! this.containsKey( key ) )
            this.setBoolean( key, defaultValue );
            // return defaultValue;
        try {
            return Boolean.valueOf( this.getProperty( key )
                ).booleanValue();
        } catch ( NumberFormatException e ) {
            System.err.println( "Unable to cast parameter <" + key +
                "> to boolean" );
            return defaultValue;
        }
    }

    /**
     * Sets a boolean parameter in the database.
     *
     * @param key   the name of the parameter to set
     * @param value the value of the parameter to set
     */
    public void setBoolean( String key, boolean value ) {
        this.setString( key, "" + value );
    }
}
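Note that the getters above share a write-back convention: when a key is absent, the supplied default is first stored in the database, so a later save() records every parameter the run actually used. The following sketch illustrates this; it is not part of the project sources, and the file names "hunt.cfg" and "final.cfg" are hypothetical.

package uk.ac.bath.cs1ah.hunt;

import java.io.IOException;

// Hypothetical usage sketch of ParameterDatabase.
public class ParameterDatabaseDemo {
    public static void main( String[] args ) throws IOException {
        ParameterDatabase params = new ParameterDatabase( "hunt.cfg" );

        // If "wolves.count" is missing, 7 is returned *and* written
        // back into the database, so the saved file records the value
        // actually used by the run.
        int wolves = params.getInt( "wolves.count", 7 );
        double graze = params.getDouble( "prey.grazeProbability", 0.35 );

        System.out.println( wolves + " wolves, graze probability "
            + graze );
        params.save( "final.cfg" );
    }
}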
C.5 Prey.java
package uk.ac.bath.cs1ah.hunt;

import java.awt.Color;

import sim.engine.*;
import sim.field.continuous.Continuous2D;
import sim.util.Bag;
import sim.util.Double2D;

public class Prey
    extends Animal {

    public static final int STATE_GRAZE = 0;
    public static final int STATE_ALERT = 1;
    public static final int STATE_FLEE = 2;

    protected int state = STATE_GRAZE;

    private int threatCheck = -1;

    public Prey( int id, World world, Continuous2D domain ) {
        super();
        this.id = id;
        this.setWorld( world );
        this.domain = domain;
        this.init();
    }

    protected void init( ) {
        this.size = World.parameters.getDouble( "prey[" + id +
            "].size", 0.5 + world.random.nextDouble() * 1 );
        this.size = Math.min( Math.max( this.size, 0.5 ), 1.5 );

        int spread = world.getNumPrey() / Math.max( ( (int)
            world.getNumPrey() / 50 ), 1 );
        double theta = world.random.nextDouble() * Math.PI * 2;
        double distance = world.random.nextDouble() * spread;
        double x = World.parameters.getDouble( "prey[" + id + "].x",
            world.getWidth() / 2 + ( Math.cos( theta ) * distance ) );
        double y = World.parameters.getDouble( "prey[" + id + "].y",
            world.getHeight() / 2 + ( Math.sin( theta ) * distance ) );
        this.setLocation( new Double2D( x, y ) );

        double svd = this.size * World.parameters.getDouble(
            "prey.vision.standard.distance", 30 );
        int sva = World.parameters.getInt(
            "prey.vision.standard.angle", 160 );
        this.standard = new Vision( svd, sva );

        double pvd = this.size * World.parameters.getDouble(
            "prey.vision.peripheral.distance", 50 );
        int pva = World.parameters.getInt(
            "prey.vision.peripheral.angle", 160 );
        this.peripheral = new Vision( pvd, pva );

        this.absMaxSpeedEnergy = ( -1 * ( this.size - 1 ) / 3 + 1 )
            * 50;
        this.curEnergy = this.absMaxSpeedEnergy * 7;
        this.absMaxSpeed = ( ( this.size - 1 ) / 3 + 1 ) *
            World.parameters.getDouble( "prey.maxSpeed",
            this.absMaxSpeed );
        this.fitnessFactor = World.parameters.getDouble( "prey[" + id +
            "].fitness", 1 + world.random.nextDouble() * 2 );
        this.restingEnergyConsumption = ( -1 * ( this.size - 1 ) / 3 +
            1 ) * World.parameters.getDouble(
            "prey.restingEnergyConsumption",
            this.restingEnergyConsumption );

        this.name = World.parameters.getString( "prey[" + id +
            "].name", "Prey #" + id );
    }

    public void setState( int state ) {
        if ( state != STATE_GRAZE && state != STATE_ALERT &&
                state != STATE_FLEE )
            throw new RuntimeException(
                "State specified does not exist" );
        this.state = state;
    }

    public int getState( ) {
        return this.state;
    }

    public void setColor( Color color ) {
        if ( this.portrayal != null )
            ((AnimalPortrayal2D) this.portrayal).setColor( color );
    }

    public Color getColor( ) {
        if ( this.portrayal == null )
            return Color.black;
        return ((AnimalPortrayal2D) this.portrayal).getColor( );
    }

    public void step( SimState state ) {
        this.report();
        if ( this.dead )
            return;

        if ( threatCheck < 0 )
            threatCheck++;

        switch ( this.state ) {
            case STATE_GRAZE:
                this.graze( state );
                break;
            case STATE_ALERT:
                this.alert( state );
                break;
            case STATE_FLEE:
                this.flee( state );
                break;
        }

        this.curEnergy -= energyConsumed();
        if ( this.curEnergy <= 0 )
            this.die();
    }

    protected void graze( SimState state ) {
        double grazeProbability = World.parameters.getDouble(
            "prey.grazeProbability", 0.35 );
        Double2D nextStep = ( world.random.nextDouble() >
            grazeProbability ) ? herd( state ) : eat( state );
        setLocation(
            new Double2D(
                domain.stx(
                    location.x + curSpeed * Math.cos( orientation )
                    / 10
                ),
                domain.sty(
                    location.y + curSpeed * Math.sin( orientation )
                    / 10
                )
            )
        );
        lastStep = nextStep;
    }

    protected Double2D eat( SimState state ) {
        this.setColor( Color.green );
        this.curEnergy += world.random.nextDouble() *
            World.parameters.getDouble( "world.grass.energy", 3 );
        curSpeed = 0;
        orientation = orientation - ( Math.PI / 6 ) +
            world.random.nextDouble() * ( Math.PI / 3 );
        return new Double2D( 0, 0 );
    }

    protected void alert( SimState state ) {
        threatCheck++;
        setColor( Color.orange );
        World world = (World) state;

        orientation += ( Math.PI / 4 ) - ( world.random.nextDouble() *
            Math.PI / 2 );

        double sd = ( this.standard != null ?
            this.standard.getDistance() : 0 );
        double pd = ( this.peripheral != null ?
            this.peripheral.getDistance() : 0 );
        double vd = Math.max( sd, pd );

        Bag threats = world.Wolves().getObjectsWithinDistance(
            this.location, vd / 3, true, true );
        for ( int i = 0; i < threats.numObjs; i++ ) {
            Wolf wolf = (Wolf) threats.objs[i];
            Double2D otherLoc = wolf.getLocation();
            double dx = domain.tdx( this.location.x, otherLoc.x );
            double dy = domain.tdy( this.location.y, otherLoc.y );
            double d = Math.sqrt( dx * dx + dy * dy );
            if ( wolf.speed() < 1 && d > vd / 4 )
                threatCheck++;
            else if ( wolf.speed() > 5 )
                setState( Prey.STATE_FLEE );
        }
        if ( threats.numObjs > 0 )
            threatCheck = 0;

        if ( threatCheck == World.parameters.getInt(
                "prey.threatCheck", 20 ) ) {
            threatCheck = -1;
            setState( Prey.STATE_GRAZE );
        }

        setLocation(
            new Double2D(
                domain.stx(
                    location.x + curSpeed * Math.cos( orientation )
                    / 10
                ),
                domain.sty(
                    location.y + curSpeed * Math.sin( orientation )
                    / 10
                )
            )
        );
    }

    protected void flee( SimState state ) {
        World world = (World) state;

        double sd = ( this.standard != null ?
            this.standard.getDistance() : 0 );
        double pd = ( this.peripheral != null ?
            this.peripheral.getDistance() : 0 );
        double vd = Math.max( sd, pd );

        Wolf nearest = null;
        double nearestDistance = -1;
        Bag threats = world.Wolves().getObjectsWithinDistance(
            this.location, vd / 3, true, true );
        for ( int i = 0; i < threats.numObjs; i++ ) {
            Wolf w = (Wolf) threats.objs[i];
            Double2D otherLoc = w.getLocation();
            double dx = domain.tdx( this.location.x, otherLoc.x );
            double dy = domain.tdy( this.location.y, otherLoc.y );
            double d = Math.sqrt( dx * dx + dy * dy );
            if ( nearest == null || d < nearestDistance ) {
                nearest = w;
                nearestDistance = d;
            }
        }

        if ( nearest == null ) {
            setState( Prey.STATE_ALERT );
        } else {
            setColor( Color.red );
            Double2D wLoc = nearest.getLocation();
            Double2D herdFactor = herd( state );
            double dx = domain.tdx( location.x, wLoc.x ) +
                herdFactor.x;
            double dy = domain.tdy( location.y, wLoc.y ) +
                herdFactor.y;

            double dis = Math.sqrt( dx * dx + dy * dy );
            if ( dis > 0 ) {
                dx = ( dx / dis ) * this.speed();
                dy = ( dy / dis ) * this.speed();
            }

            orientation = Math.atan2( dy, dx );
            double targetSpeed = maxSpeed();
            setSpeed( curSpeed + ( targetSpeed - curSpeed ) / 5 +
                ( -1 + world.random.nextDouble() * 2 ) );

            Double2D nextStep = new Double2D(
                domain.stx(
                    location.x + curSpeed * Math.cos( orientation )
                    / 10
                ),
                domain.sty(
                    location.y + curSpeed * Math.sin( orientation )
                    / 10
                )
            );

            setLocation( nextStep );
            lastStep = nextStep;
        }
    }

    public double alignmentFactor( Item item ) {
        if ( item instanceof Prey ) {
            if ( ((Prey) item).isDead() )
                return 0;

            Double2D d = domain.tv( this.location, item.location );
            double awareness = World.parameters.getDouble(
                "prey.awareness", 7.5 );
            if ( threatCheck >= 0 &&
                    ( Math.sqrt( d.x * d.x + d.y * d.y ) < awareness )
                    && ( world.random.nextDouble() < awareness / 10 ) )
                if ( this.state != Prey.STATE_FLEE ) {
                    int os = ((Prey) item).getState();
                    if ( os == Prey.STATE_ALERT )
                        this.setState( Prey.STATE_ALERT );
                    else if ( os == Prey.STATE_FLEE )
                        this.setState( Prey.STATE_FLEE );
                }
            return 11 - size;
        } else {
            return 0;
        }
    }

    public double cohesionFactor( Item item ) {
        if ( item instanceof Prey ) {
            if ( ((Prey) item).isDead() )
                return 0;

            Double2D d = domain.tv( this.location, item.location );
            double awareness = World.parameters.getDouble(
                "prey.awareness", 7.5 );
            if ( threatCheck >= 0 &&
                    ( Math.sqrt( d.x * d.x + d.y * d.y ) < awareness )
                    && ( world.random.nextDouble() < awareness / 10 ) )
                if ( this.state != Prey.STATE_FLEE ) {
                    int os = ((Prey) item).getState();
                    if ( os == Prey.STATE_ALERT )
                        this.setState( Prey.STATE_ALERT );
                    else if ( os == Prey.STATE_FLEE )
                        this.setState( Prey.STATE_FLEE );
                }
            return 5.5 - size;
        } else if ( item instanceof Wolf ) {
            threatCheck = Math.max( threatCheck, 0 );
            this.setState( Prey.STATE_ALERT );
            return -1.0;
        } else {
            return 0;
        }
    }

    public double separationFactor( Item item ) {
        if ( item instanceof Prey ) {
            if ( ((Prey) item).isDead() )
                return 2;

            Double2D d = domain.tv( this.location, item.location );
            double awareness = World.parameters.getDouble(
                "prey.awareness", 7.5 );
            if ( threatCheck >= 0 &&
                    ( Math.sqrt( d.x * d.x + d.y * d.y ) < awareness )
                    && ( world.random.nextDouble() < awareness / 10 ) )
                if ( this.state != Prey.STATE_FLEE ) {
                    int os = ((Prey) item).getState();
                    if ( os == Prey.STATE_ALERT )
                        this.setState( Prey.STATE_ALERT );
                    else if ( os == Prey.STATE_FLEE )
                        this.setState( Prey.STATE_FLEE );
                }
            return 2 * size;
        } else if ( item instanceof Wolf ) {
            threatCheck = Math.max( threatCheck, 0 );
            this.setState( Prey.STATE_ALERT );
            return 10.0;
        } else {
            return 1.0;
        }
    }
}
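For reference, the displacement applied in graze(), alert() and flee() can be isolated as follows. This is an explanatory sketch only, not part of the project sources: wrap() stands in for the toroidal wrapping that MASON's Continuous2D.stx/sty provide, and all numeric values are hypothetical.

// Explanatory sketch of the per-step movement rule used above: advance
// one tenth of curSpeed along the current orientation, wrapping
// toroidally onto the world.
public class StepRuleSketch {
    static double wrap( double v, double extent ) {
        v %= extent;
        return ( v < 0 ) ? v + extent : v; // mimics Continuous2D.stx/sty
    }
    public static void main( String[] args ) {
        double width = 250, height = 250;  // the World defaults
        double x = 10, y = 10;             // current location
        double curSpeed = 4.0, orientation = Math.PI / 3;
        x = wrap( x + curSpeed * Math.cos( orientation ) / 10, width );
        y = wrap( y + curSpeed * Math.sin( orientation ) / 10, height );
        System.out.println( "next location: " + x + ", " + y );
    }
}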
C.6 Vision.java
package uk.ac.bath.cs1ah.hunt;

public class Vision {

    protected double distance, angle;

    /**
     * Class constructor, taking a distance and an angle in radians (up
     * to pi radians either side).
     *
     * @param distance the maximum distance of the vision
     * @param angle    the angle of the vision, between 0 and pi
     */
    public Vision( double distance, double angle ) {
        this.distance = distance;
        this.angle = Math.min( Math.PI, Math.abs( angle ) );
    }

    /**
     * Class constructor, taking a distance and an angle in degrees (up
     * to 180 degrees either side).
     *
     * @param distance the maximum distance of the vision
     * @param angle    the angle of the vision, between 0 and 180
     */
    public Vision( double distance, int angle ) {
        this( distance, Math.PI * angle / 180 );
    }

    /**
     * Gets the distance of the vision.
     *
     * @return the distance of the vision
     */
    public double getDistance( ) {
        return this.distance;
    }

    /**
     * Gets the angle of the vision, in radians.
     *
     * @return the angle of the vision
     */
    public double getAngle( ) {
        return this.angle;
    }

    /**
     * Gets the angle of the vision, in degrees.
     *
     * @return the angle of the vision
     */
    public double getAngleInDegrees( ) {
        return this.angle * 180 / Math.PI;
    }

    /**
     * Returns a string equivalent of the vision.
     *
     * @return the vision as a string
     */
    public String toString( ) {
        return this.distance + ", " + this.angle;
    }
}
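Since the two constructors are distinguished only by the static type of the angle argument, the following brief check (illustrative values, not part of the project sources) shows that an int angle is treated as degrees while a double angle is treated as radians.

package uk.ac.bath.cs1ah.hunt;

// Illustrative only: both objects describe the same field of vision.
public class VisionDemo {
    public static void main( String[] args ) {
        Vision inRadians = new Vision( 30.0, Math.PI / 2 ); // double: radians
        Vision inDegrees = new Vision( 30.0, 90 );          // int: degrees
        System.out.println( inRadians.getAngleInDegrees() ); // 90.0
        System.out.println( inDegrees.getAngleInDegrees() ); // 90.0
    }
}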
C.7 Wolf.java
package uk.ac.bath.cs1ah.hunt;

import java.awt.Color;
import java.awt.geom.QuadCurve2D;

import sim.engine.*;
import sim.field.continuous.Continuous2D;
import sim.portrayal.*;
import sim.util.Bag;
import sim.util.Double2D;

public class Wolf
    extends Animal {

    public static final int STATE_FOLLOW = 1;
    public static final int STATE_CHASE = 2;
    public static final int STATE_ATTACK = 3;

    protected int state = STATE_FOLLOW;

    protected double rank;

    public Wolf( int id, World world, Continuous2D domain ) {
        super();
        this.id = id;
        this.setWorld( world );
        this.domain = domain;
        this.init();
    }

    protected void init( ) {
        // constrain between 0.5 and 1.5... then fiddle a bit to get
        // something in the range 2/3 - 1
        this.size = World.parameters.getDouble(
            "wolves[" + id + "].size", 0.5 +
            world.random.nextDouble() * 1 );
        double scalar = 1 - 0.5 / 7;
        this.size = ( ( Math.min( Math.max( this.size, 0.5 ), 1.5 )
            - 1 ) / 7 ) + scalar;

        this.rank = World.parameters.getDouble( "wolves[" + id +
            "].rank", this.size );

        int spread = world.getNumWolves() / Math.max( (
            (int) world.getNumWolves() / 50 ), 1 );
        double theta = world.random.nextDouble() * Math.PI * 2;
        double distance = world.random.nextDouble() * spread;
        double x = World.parameters.getDouble( "wolves[" + id + "].x",
            world.getWidth() / 2 + ( Math.cos( theta ) * distance ) );
        double y = World.parameters.getDouble( "wolves[" + id + "].y",
            world.getHeight() / 2 + ( Math.sin( theta ) * distance ) );
        this.setLocation( new Double2D( x, y ) );

        double svd = this.size * World.parameters.getDouble(
            "wolf.vision.standard.distance", 45 );
        int sva = World.parameters.getInt(
            "wolf.vision.standard.angle", 40 );
        this.standard = new Vision( svd, sva );

        double pvd = this.size * World.parameters.getDouble(
            "wolf.vision.peripheral.distance", 35 );
        int pva = World.parameters.getInt(
            "wolf.vision.peripheral.angle", 80 );
        this.peripheral = new Vision( pvd, pva );

        this.absMaxSpeedEnergy = ( -1 * ( this.size - scalar ) /
            3 + scalar ) * 50;
        this.curEnergy = this.absMaxSpeedEnergy * 50;
        this.absMaxSpeed = ( ( this.size - scalar ) /
            3 + scalar ) * World.parameters.getDouble(
            "wolf.maxSpeed", this.absMaxSpeed );
        this.fitnessFactor = World.parameters.getDouble(
            "wolves[" + id + "].fitness", 1 +
            world.random.nextDouble() * 2 );
        this.restingEnergyConsumption = ( -1 * ( this.size - scalar ) /
            3 + scalar ) * World.parameters.getDouble(
            "wolf.restingEnergyConsumption",
            this.restingEnergyConsumption );

        this.name = World.parameters.getString( "wolves[" + id +
            "].name", "Wolf #" + id );
    }

    public void setState( int state ) {
        if ( state != STATE_FOLLOW && state != STATE_CHASE &&
                state != STATE_ATTACK )
            throw new RuntimeException(
                "State specified does not exist" );
        this.state = state;
    }

    public int getState( ) {
        return this.state;
    }

    /**
     * Gets the rank of the animal.
     *
     * @return the rank of the wolf
     */
    public double getRank( ) {
        return this.rank;
    }

    /**
     * Sets the rank of the animal.
     *
     * @param rank the rank of the wolf
     */
    public void setRank( double rank ) {
        this.rank = rank;
    }

    public void step( SimState state ) {
        this.report();
        if ( this.dead )
            return;

        World world = (World) state;

        double sd = ( this.standard != null ?
            this.standard.getDistance() : 0 );
        double pd = ( this.peripheral != null ?
            this.peripheral.getDistance() : 0 );
        double vd = Math.max( sd, pd );

        Bag prey = world.Prey().getObjectsWithinDistance(
            this.location, vd / 3, true, true );
        if ( prey.numObjs > 0 )
            setState( STATE_CHASE );

        switch ( this.state ) {
            case STATE_FOLLOW:
                this.follow( state );
                break;
            case STATE_CHASE:
                this.chase( state );
                break;
            case STATE_ATTACK:
                this.attack( state );
                break;
        }

        this.curEnergy -= energyConsumed();
        if ( this.curEnergy <= 0 )
            this.die();
    }

    protected void follow( SimState state ) {
        double sd = ( this.standard != null ?
            this.standard.getDistance() : 0 );
        double pd = ( this.peripheral != null ?
            this.peripheral.getDistance() : 0 );
        double vd = Math.max( sd, pd );

        Double2D nextStep;
        Bag neighbours = domain.getObjectsWithinDistance(
            this.location, vd / 3, true, true );
        Wolf leader = null;
        for ( int i = 0; i < neighbours.numObjs; i++ ) {
            Wolf w = (Wolf) neighbours.objs[i];
            if ( ! w.isDead() && w.getRank() > rank && ( leader == null
                    || w.getRank() <= leader.getRank() ) )
                leader = w;
        }

        if ( leader == null || leader == this ) {
            nextStep = wander( state );
            setColor( Color.blue );
        } else {
            Double2D herd = herd( state );
            Double2D seek = seek( leader );
            double dx = herd.x + seek.x;
            double dy = herd.y + seek.y;
            double dis = Math.sqrt( dx * dx + dy * dy );
            if ( dis > 0 ) {
                dx = ( dx / dis ) * this.speed();
                dy = ( dy / dis ) * this.speed();
            }
            orientation = Math.atan2( dy, dx );
            nextStep = new Double2D( Math.cos( orientation ),
                Math.sin( orientation ) );
        }

        setLocation(
            new Double2D(
                domain.stx(
                    location.x + curSpeed *
                    Math.cos( orientation ) / 10
                ),
                domain.sty(
                    location.y + curSpeed *
                    Math.sin( orientation ) / 10
                )
            )
        );
        lastStep = nextStep;
    }

    protected void chase( SimState state ) {
        setColor( Color.green );
        World world = (World) state;
        Double2D nextStep;

        double sd = ( this.standard != null ?
            this.standard.getDistance() : 0 );
        double pd = ( this.peripheral != null ?
            this.peripheral.getDistance() : 0 );
        double vd = Math.max( sd, pd );

        Bag prey = world.Prey().getObjectsWithinDistance(
            this.location, vd / 3, true, true );
        Prey nearest = null;
        double distance = 0;
        for ( int i = 0; i < prey.numObjs; i++ ) {
            double d = world.Prey().tds( this.location,
                ((Prey) prey.objs[i] ).getLocation() );
            if ( nearest == null || d <= distance ) {
                nearest = (Prey) prey.objs[i];
                distance = d;
            }
        }
        if ( nearest == null ) {
            setState( STATE_FOLLOW );
            return;
        }

        Double2D herd = herd( state );
        Double2D seek = seek( nearest );

        double dx = herd.x + seek.x;
        double dy = herd.y + seek.y;
        double dis = Math.sqrt( dx * dx + dy * dy );
        if ( dis > 0 ) {
            dx = ( dx / dis ) * this.speed();
            dy = ( dy / dis ) * this.speed();
        }
        orientation = Math.atan2( dy, dx );
        nextStep = new Double2D( Math.cos( orientation ),
            Math.sin( orientation ) );

        setLocation(
            new Double2D(
                domain.stx(
                    location.x + curSpeed *
                    Math.cos( orientation ) / 10
                ),
                domain.sty(
                    location.y + curSpeed *
                    Math.sin( orientation ) / 10
                )
            )
        );
        lastStep = nextStep;
    }

    protected void attack( SimState state ) {
    }

    public double alignmentFactor( Item item ) {
        if ( item instanceof Wolf ) {
            Wolf wolf = (Wolf) item;
            if ( ( wolf.getRank() < this.rank ) ||
                    this.impeding( wolf ) )
                return 5;
            return 10;
        } else {
            return 0;
        }
    }

    public double cohesionFactor( Item item ) {
        if ( item instanceof Wolf ) {
            Wolf wolf = (Wolf) item;
            if ( ( wolf.getRank() < this.rank ) ||
                    this.impeding( wolf ) )
                return 0;
            return 5;
        } else {
            return 0;
        }
    }

    public double separationFactor( Item item ) {
        if ( item instanceof Wolf ) {
            Wolf wolf = (Wolf) item;
            if ( ( wolf.getRank() < this.rank ) ||
                    this.impeding( wolf ) )
                return 2;
            return 0.5;
        } else {
            return 1.0;
        }
    }

    public boolean impeding( Wolf w ) {
        if ( w.getRank() < this.rank )
            return false;

        double r = World.parameters.getDouble( "wolf.influence", 10 ) *
            w.getSize();
        Double2D wLoc = w.getLocation();
        double wOr = w.getOrientation();
        double x = wLoc.x + Math.cos( wOr ) * ( r / 2 );
        double y = wLoc.y + Math.sin( wOr ) * ( r / 2 );

        double dx = domain.tdx( location.x, wLoc.x );
        double dy = domain.tdy( location.y, wLoc.y );
        return ( Math.sqrt( dx * dx + dy * dy ) <= r );
    }

    public Portrayal getPortrayal( ) {
        if ( this.portrayal == null )
            this.portrayal = new WolfPortrayal2D(
                new SimplePortrayal2D(),
                this.size,
                this.color
            );
        return this.portrayal;
    }
}
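The leader selection inside follow() is what produces the column formation noted in the results: each wolf follows the lowest-ranked pack-mate that still outranks it. A self-contained sketch of just that rule (ranks here are hypothetical values, not taken from any run):

// Explanatory sketch of the rank rule in follow(): among visible
// pack-mates, pick the lowest rank that is still above our own.
public class LeaderRuleSketch {
    public static void main( String[] args ) {
        double myRank = 0.8;
        double[] neighbourRanks = { 0.6, 0.95, 1.2, 0.85 };
        double leader = Double.NaN;
        for ( double r : neighbourRanks )
            if ( r > myRank && ( Double.isNaN( leader ) || r <= leader ) )
                leader = r;
        System.out.println( "follow wolf of rank " + leader ); // 0.85
    }
}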
C.8 WolfPortrayal2D.java
package uk.ac.bath.cs1ah.hunt;

import java.awt.*;
import java.awt.geom.QuadCurve2D;

import sim.display.*;
import sim.portrayal.*;

public class WolfPortrayal2D extends AnimalPortrayal2D {

    public WolfPortrayal2D( SimplePortrayal2D child ) {
        super( child, DEFAULT_SCALE, DEFAULT_EXTENSION,
            DEFAULT_COLOR );
    }

    public WolfPortrayal2D( SimplePortrayal2D child, Color color ) {
        super( child, DEFAULT_SCALE, DEFAULT_EXTENSION, color );
    }

    public WolfPortrayal2D( SimplePortrayal2D child, double scale,
            int extension ) {
        super( child, scale, extension, DEFAULT_COLOR );
    }

    public WolfPortrayal2D( SimplePortrayal2D child, double scale,
            Color color ) {
        this( child, scale, DEFAULT_EXTENSION, color );
    }

    public WolfPortrayal2D( SimplePortrayal2D child, double scale,
            int extension, Color color ) {
        super( child, scale, extension, color );
    }

    public void draw( Object object, Graphics2D graphics,
            DrawInfo2D info ) {
        super.draw( object, graphics, info );

        /* Draw the wolf's circle of influence ahead of it. */
        graphics.setPaint( Color.red );
        graphics.setStroke( new BasicStroke( 1.0f ) );

        Wolf wolf = (Wolf) object;
        final double orientation = wolf.orientation2D();
        final int length = (int) ( scale * (
            info.draw.width < info.draw.height ?
            info.draw.width : info.draw.height ) ) + extension;
        final double lenx = Math.cos( orientation ) * length;
        final double leny = Math.sin( orientation ) * length;
        double x = info.draw.x + lenx;
        double y = info.draw.y + leny;

        double r = length * World.parameters.getDouble(
            "wolf.influence", 10 ) * wolf.getSize();

        x += ( r / 2 ) * Math.cos( orientation );
        y += ( r / 2 ) * Math.sin( orientation );

        graphics.drawOval( (int) ( x - r ), (int) ( y - r ),
            (int) ( 2 * r ), (int) ( 2 * r ) );
    }
}
C.9 World.java
package uk.ac.bath.cs1ah.hunt;

import java.io.IOException;

import sim.engine.*;
import sim.field.continuous.*;
import sim.util.*;

public class World
    extends SimState {

    public static ParameterDatabase parameters =
        new ParameterDatabase();

    protected Continuous2D wolves;
    protected Continuous2D prey;
    protected Continuous2D obstacles;

    protected double width = 250;
    protected double height = 250;
    protected boolean sightCones = true;
    protected int numPrey = 100;
    protected int numWolves = 7;

    /**
     * Class constructor.
     */
    public World( ) {
        this( World.parameters.getLong( "base.seed",
            System.currentTimeMillis() ) );
    }

    /**
     * Class constructor specifying the randomization seed.
     *
     * @param seed the randomization seed
     */
    public World( long seed ) {
        super( new ec.util.MersenneTwisterFast( seed ),
            new Schedule(1) );
        World.parameters.setLong( "base.seed", seed );
        this.width = World.parameters.getDouble(
            "world.width", this.width );
        this.height = World.parameters.getDouble(
            "world.height", this.height );
        this.sightCones = World.parameters.getBoolean(
            "world.sightCones", this.sightCones );
        this.numPrey = World.parameters.getInt(
            "world.numPrey", this.numPrey );
        this.numWolves = World.parameters.getInt(
            "world.numWolves", this.numWolves );
    }

    /**
     * Gets the width of the world.
     */
    public double getWidth( ) {
        return this.width;
    }

    /**
     * Gets the height of the world.
     */
    public double getHeight( ) {
        return this.height;
    }

    /**
     * Gets the number of prey in the world.
     */
    public int getNumPrey( ) {
        return this.numPrey;
    }

    /**
     * Gets the number of wolves in the world.
     */
    public int getNumWolves( ) {
        return this.numWolves;
    }

    /**
     * Returns the wolves domain for this world.
     *
     * @return the animal domain
     */
    public Continuous2D Wolves( ) {
        return this.wolves;
    }

    /**
     * Returns the prey domain for this world.
     *
     * @return the animal domain
     */
    public Continuous2D Prey( ) {
        return this.prey;
    }

    /**
     * Returns the obstacles domain for this world.
     *
     * @return the obstacle domain
     */
    public Continuous2D Obstacles( ) {
        return this.obstacles;
    }

    /**
     * Initializes the simulation. Should be called before anything
     * else.
     *
     * @param args command line arguments to initialize with
     */
    public static void initialize( String[] args ) {
        /* Read in the configuration, if specified */
        if ( args.length > 0 ) {
            try {
                parameters = new ParameterDatabase( args[0] );
            } catch ( IOException e ) {
                System.out.println( "Unable to find config file: " +
                    args[0] );
                parameters = new ParameterDatabase();
            }
        } else {
            parameters = new ParameterDatabase();
        }

        /* Print the current configuration to the command line */
        parameters.list( System.out );
    }

    /**
     * Starts the simulation, creating the domains and scheduling the
     * agents.
     */
    public void start( ) {
        super.start();

        this.prey = new Continuous2D( 10, this.width, this.height );
        this.wolves = new Continuous2D( 10, this.width, this.height );
        this.obstacles = new Continuous2D(
            10, this.width, this.height );

        int spread;
        double x, y;
        Double2D location;

        numPrey = World.parameters.getInt( "prey.count", numPrey );
        spread = (int) numPrey / 2;
        for ( int i = 1; i <= numPrey; i++ ) {
            Prey p = new Prey( i, this, prey );
            schedule.scheduleRepeating( p );
        }

        numWolves = World.parameters.getInt(
            "wolves.count", numWolves );
        spread = (int) numWolves / 2;
        for ( int i = 1; i <= numWolves; i++ ) {
            Wolf w = new Wolf( i, this, wolves );
            schedule.scheduleRepeating( w );
        }

        /* if we were concerned with such things as trees and so on,
         * these would be included here, in a similar fashion to the
         * wolves and prey.
         */
    }

    /**
     * Finalizes the simulation. Should be called at the very end of
     * the simulation.
     */
    public void finish( ) {
        finish( new String[] {} );
    }

    /**
     * Finalizes the simulation. Should be called at the very end of
     * the simulation.
     *
     * @param args command line arguments to finalize with
     */
    public void finish( String[] args ) {
        super.finish();

        /* Save the final configuration, so that any mid-session
         * changes are stored
         */
        String finalConfig = "final.cfg";
        if ( args.length > 1 )
            finalConfig = args[1];
        System.out.println( "---" );
        try {
            parameters.save( finalConfig );
            System.out.println(
                "Final configuration written to file: " +
                finalConfig );
        } catch ( IOException e ) {
            System.out.println(
                "Unable to save final configuration to file: " +
                finalConfig );
        }
    }

    /**
     * Allows this class to be run stand-alone from the command line.
     *
     * @param args the command line arguments
     */
    public static void main( String[] args ) {
        World.initialize( args );

        /* Run the simulation */
        World world = new World();
        world.start();
        for ( int x = 0; x < 5000; x++ ) {
            // if ( x % 100 == 0 ) System.out.println( x );
            if ( ! world.schedule.step( world ) ) break;
        }
        world.finish();
    }
}
C.10 WorldWithUI.java
package uk.ac.bath.cs1ah.hunt;

import java.awt.Color;
import javax.swing.*;

import sim.display.*;
import sim.engine.SimState;
import sim.field.continuous.*;
import sim.portrayal.continuous.*;

public class WorldWithUI
    extends GUIState {

    public Display2D display;
    public JFrame displayFrame;

    protected ContinuousPortrayal2D wolfPortrayal =
        new ContinuousPortrayal2D();
    protected ContinuousPortrayal2D preyPortrayal =
        new ContinuousPortrayal2D();
    protected ContinuousPortrayal2D obstaclePortrayal =
        new ContinuousPortrayal2D();

    protected double iDim = 600;

    public WorldWithUI( ) {
        super( new World( ) );
    }

    public WorldWithUI( long seed ) {
        super( new World( seed ) );
    }

    public WorldWithUI( SimState state ) {
        super( state );
    }

    public String getName( ) {
        return "Wolves et al.";
    }

    public Object getSimulationInspectedObject() {
        return state;
    }

    public void start( ) {
        super.start();
        setupPortrayals();
    }

    public void load( SimState state ) {
        super.load( state );
        setupPortrayals();
    }

    public void init( Controller c ) {
        super.init( c );

        display = new Display2D( iDim, iDim, this, 1 );
        display.setBackdrop( Color.white );

        displayFrame = display.createFrame();
        displayFrame.setTitle( this.getName() );
        c.registerFrame( displayFrame );
        displayFrame.setVisible( true );

        display.attach( wolfPortrayal, "Wolves" );
        display.attach( preyPortrayal, "Prey" );
        display.attach( obstaclePortrayal, "Obstacles" );
    }

    private void setupPortrayals( ) {
        World world = (World) state;
        Continuous2D wolves = world.Wolves();
        Continuous2D prey = world.Prey();
        Continuous2D obstacles = world.Obstacles();

        wolfPortrayal.setField( wolves );
        preyPortrayal.setField( prey );
        obstaclePortrayal.setField( obstacles );

        for ( int i = 0; i < wolves.allObjects.numObjs; i++ )
            wolfPortrayal.setPortrayalForObject(
                wolves.allObjects.objs[i],
                ((Item) wolves.allObjects.objs[i]).getPortrayal() );

        for ( int i = 0; i < prey.allObjects.numObjs; i++ )
            preyPortrayal.setPortrayalForObject(
                prey.allObjects.objs[i],
                ((Item) prey.allObjects.objs[i]).getPortrayal() );

        for ( int i = 0; i < obstacles.allObjects.numObjs; i++ )
            obstaclePortrayal.setPortrayalForObject(
                obstacles.allObjects.objs[i],
                ((Item) obstacles.allObjects.objs[i]).getPortrayal() );

        double w = wolves.getWidth();
        double h = wolves.getHeight();
        if ( w == h ) {
            display.insideDisplay.width =
                display.insideDisplay.height = this.iDim;
        } else if ( w > h ) {
            display.insideDisplay.width = this.iDim;
            display.insideDisplay.height = this.iDim * ( h / w );
        } else {
            display.insideDisplay.height = this.iDim;
            display.insideDisplay.width = this.iDim * ( w / h );
        }
    }

    public void quit() {
        super.quit();
        if ( displayFrame != null ) displayFrame.dispose();
        displayFrame = null;
        display = null;
    }

    public static void main( String[] args ) {
        World.initialize( args );
        Console c = new Console( new WorldWithUI() );
        c.setVisible( true );
    }
}
Appendix D
Configuration
This is a sample configuration file, used to set the initial values of the parameters that define the characteristics of the agents.
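To show how such a file is consumed, the following launcher mirrors World.main() from Appendix C: World.initialize() treats its first command-line argument as the configuration path, and any parameter missing from the file falls back to the defaults seen in the listings above. This is a sketch only, and "example.cfg" is a hypothetical file name.

package uk.ac.bath.cs1ah.hunt;

// Sketch of a headless run driven by a configuration file.
public class RunWithConfig {
    public static void main( String[] args ) {
        World.initialize( new String[] { "example.cfg" } );
        World world = new World();
        world.start();
        for ( int step = 0; step < 5000; step++ )
            if ( ! world.schedule.step( world ) ) break;
        world.finish();
    }
}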
D.1 Example Configuration File

## Please note that although the notation may suggest some form of
## structure/member system, it's really just to make life that much
## easier, and they're just random dots in the middle of parameter
## names.
##
## Comments are any lines starting with an octothorpe (a #). Despite the
## fact that all comments actually follow two of these, only one is
## actually required. The author just wanted a quick way to
## differentiate between actual comments and parameters that had just
## been temporarily removed.

## Seeds everything in the simulation - comment it out to use a random
## seed.
base.seed = 555

world.grass.energy = 5
world.sightCones = false
# world.sightCones = true

prey.count = 50
# prey.count = 0
prey.grazeProbability = 0.4
# prey.vision.standard.distance =
# prey.vision.standard.angle =
# prey.vision.peripheral.distance =
# prey.vision.peripheral.angle =

## Individual animals can be controlled like so...
## prey[i].size = <double>
## ... and so on, where 'i' is the number of the beast (so to speak). The
## prey 'array' is one-based - 'prey[1]' is the first animal

# wolves.count = 0
# wolves.count = 9
wolves.count = 5
# wolves.vision.standard.distance =
# wolves.vision.standard.angle =
# wolves.vision.peripheral.distance =
# wolves.vision.peripheral.angle =

## The same control system works for wolves, too...
## wolves[i].size = <double>
## wolves[i].rank = <double>
wolves[1].x = 10
wolves[1].y = 10
wolves[2].x = 11
wolves[2].y = 11
wolves[3].x = 12
wolves[3].y = 12
wolves[4].x = 13
wolves[4].y = 13
wolves[5].x = 14
wolves[5].y = 14
wolves[6].x = 15
wolves[6].y = 15