Topic 3
Reactive animat, search and obstacle
avoidance
• Game’s “Physical” Environment
• Machine Vision
• Representing Space in the Game World
• Movement in the Game World
• Navigation in the Game World
• Obstacle Avoidance
• A Reactive Control System
• Use of Guntactyx
Reading: Champandard Chapters 5,6,8
FEAR documentation
GunTactyx readme and manual
Website links on graph search
Game’s “Physical” Environment
• We may distinguish between two aspects of a game environment:
  – Structure – topography of the environment as it constrains movement (physics and
    layout of walls, paths, obstacles etc.)
  – Detail – graphical appearance of the game world and placement of objects which don’t
    impede movement
• What about players, NPCs and monsters? Really need to consider
moving things as a third category, especially when interactions go
beyond simply destroying everything you see
• Humans perceive the world mostly visually through detail, while AI sees
the world as simplified data structures and must interpret these as well
as it can
• We can try to make an AI interpret the graphical world directly, as if it
was seeing through eyes, but such machine vision has proven to be
very difficult to program and expensive to compute (at least at a human
level of skill)
• It is an important concept of nouvelle game AI that an animat should
only have local, not global, knowledge of the game (like a human)
• Having complete, perfect knowledge of the world is not good for AI or
games
Machine Vision
• Getting a machine to see is a traditional sub-discipline of AI
• A typical system might involve a camera returning a digitised image,
interpretive software and some kind of output arrangement
• Eg handwritten letter recogniser – easy
[Diagram: a digital camera captures a raw image of a handwritten ‘2’; pattern
recognition software turns this data representation into the ASCII code for ‘2’ (00110010)]
• To get more sophisticated output information requires more complex
processing. Eg. scene analysis to aid robot navigation – hard
• The output for that would be information enabling the robot to identify
particular objects, or find their range and bearing, to help navigate
around them
Face recognition from security video is now a mainstream
application of machine vision
Representing Space in the Game World
• How space is represented is important
• 2D vs 3D – how the location of objects is encoded in the structure, not
how the detail makes the world appear
• Discrete vs continuous – meaning whether objects are placed in a grid
or matrix with a finite number of locations or not (eg chess vs marbles)
• Representation of time - also discrete (turn-taking) or continuous
(stream of consciousness)
• Conversions – discrete vs continuous is a relative matter, since a fine
enough unit size (grid or time-steps) may be considered continuous in
practice
• In fact, all representations in computers must ultimately be discrete,
approximating continuous to a greater or lesser degree!
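• As a rough illustration of such a conversion, the snippet below quantises a continuous
2D position into a grid cell (the cell size and names are illustrative, not from any
particular engine):

#include <cmath>

struct GridCell { int x, y; };

// Quantise a continuous world position into a discrete grid cell.
// The finer cellSize is, the more closely the grid approximates continuous space.
GridCell toGridCell(float worldX, float worldY, float cellSize)
{
    return { static_cast<int>(std::floor(worldX / cellSize)),
             static_cast<int>(std::floor(worldY / cellSize)) };
}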
Movement in the Game World
• At present, game engines provide a locomotion layer which abstracts
basic movement actions away from the strategic control of direction
• Physics simulation is required to handle gravity, throwing, fire, water etc.
• Low level motion might now be handled by the AI
[Diagram (image: Chris Bayliss): the locomotion layer within the simulation loop –
control signals (forward/backward, turns) come from the user or as parameters via API
from the decision-making AI; collision detection in the game physics signals a collision,
halting (forward) motion; integration signals from the environment alter behaviour as
appropriate (eg falls); within the simulation loop (walking, running), physics handles
displacement and animation handles limb cycling]
Representing Space in the Game World
• In a classical AI navigation experiment, travel paths in the world
model might be represented at design-time as a graph, with nodes
representing rooms and arcs representing passages or doors
between them
• Finding an optimal path from a current location to a target was then
a matter of search on the graph
• There are well-studied and good search algorithms available
Search – Basics
• Uninformed search algorithms simply follow a pattern to examine all
nodes until one containing the goal is found (aka “brute force”)
• Depth-first search - start at a root and explore as far as possible
along each branch before backtracking until goal is found
• Breadth-first search - start at a root and explore all neighbouring
nodes, then for each neighbour, explore their unexplored
neighbours, and so on until goal is found
On the graph shown on the slide, starting at A, choosing
left nodes before right and remembering
previously visited nodes:
– Depth-first search visits nodes in this order: A, B, D, F, E, C, G
– Breadth-first search visits nodes in this order: A, B, C, E, D, F, G
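• As one possible illustration (a sketch only – the graph representation and names are
assumptions, not taken from the reading), breadth-first search over an adjacency-list
graph can be written as:

#include <queue>
#include <string>
#include <unordered_map>
#include <vector>

// Breadth-first search on an adjacency-list graph: explore all neighbours of a
// node before moving deeper. Returns true if 'goal' is reachable from 'start'.
bool breadthFirstSearch(const std::unordered_map<std::string, std::vector<std::string>>& graph,
                        const std::string& start, const std::string& goal)
{
    std::queue<std::string> frontier;
    std::unordered_map<std::string, bool> visited;    // remember previously visited nodes
    frontier.push(start);
    visited[start] = true;

    while (!frontier.empty())
    {
        std::string node = frontier.front();
        frontier.pop();
        if (node == goal)
            return true;
        auto it = graph.find(node);
        if (it == graph.end())
            continue;                                 // node has no listed neighbours
        for (const std::string& next : it->second)
        {
            if (!visited[next])
            {
                visited[next] = true;
                frontier.push(next);
            }
        }
    }
    return false;                                     // goal was never reached
}

• Replacing the queue with a stack (last-in, first-out) turns the same loop into a
depth-first search.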
Search – Using Domain Information
• Informed search algorithms use some method to choose
intelligently which node to search next
• Best-first search – modifies breadth-first method to order all current
paths by a heuristic which estimates how close the end of the path
is to a goal. Paths that are closest to a goal are extended first.
• A* search - is a best-first method that calculates a cost and an
estimate for the path leading to each node:
• Cost is zero for the first node; for all others, cost is the cumulative
sum associated with all its ancestor nodes plus the cost of the
operation which reached the node (e.g. linear distance)
• An estimate measures how far a given node is thought to be from a
goal state (e.g. intensity on a sensory gradient)
• Cost and estimate are summed to score each path for eligibility for
exploration
• For more detail, view the ‘Graph Search Methods’ link on the
website
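• As a concrete sketch of the scoring idea (node layout, types and costs below are
assumptions for illustration, not taken from the reading):

#include <cmath>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Each node has a 2D position and a list of (neighbour index, edge cost) pairs.
struct Node { float x, y; std::vector<std::pair<int, float>> edges; };

// A*: each frontier entry is scored by (cost so far + estimated distance to goal).
// Returns the cost of the cheapest path from start to goal, or -1 if unreachable.
// Assumes edge costs are at least the straight-line distance, so the estimate
// never overstates the remaining cost.
float aStarPathCost(const std::vector<Node>& nodes, int start, int goal)
{
    auto estimate = [&](int n) {           // straight-line distance to the goal
        float dx = nodes[n].x - nodes[goal].x;
        float dy = nodes[n].y - nodes[goal].y;
        return std::sqrt(dx * dx + dy * dy);
    };

    std::vector<float> cost(nodes.size(), std::numeric_limits<float>::infinity());
    using Scored = std::pair<float, int>;  // (cost + estimate, node index)
    std::priority_queue<Scored, std::vector<Scored>, std::greater<Scored>> open;

    cost[start] = 0.0f;                    // cost is zero for the first node
    open.push({estimate(start), start});

    while (!open.empty())
    {
        int n = open.top().second;
        open.pop();
        if (n == goal)
            return cost[n];
        for (const auto& edge : nodes[n].edges)
        {
            int next = edge.first;
            float newCost = cost[n] + edge.second;   // cumulative ancestor cost + this step
            if (newCost < cost[next])
            {
                cost[next] = newCost;
                open.push({newCost + estimate(next), next});
            }
        }
    }
    return -1.0f;                          // goal unreachable from start
}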
Navigation in the Game World
• There are problems with this kind of classical search however:
– Depends on global information at design-time, so
– Question of realism arises – not comparable with the limited
viewpoint of real biological creatures
– A lot of information may overwhelm decision-making processes
– Information does not update dynamically via sensors, so cannot track changes
(eg moving creatures in the world)
• Nouvelle game AI animats are (virtually) embodied, which implies that
  – They have a (simulated) limited perceptual system which updates the AI
    continuously
  – They need more plausible navigation algorithms which can work on
    limited information and in real time
• For now, we are interested in reactive solutions
• Fortunately, such solutions have been studied for the design of robots
Modelling an Animat’s State in Space
• For many (but not all) AI models, need a description of the animat’s
position and orientation in space, as well as how it will move
[Diagram: in an absolute coordinate system the world origin is (0,0), the animat is at
(3,3) and objects are at (3,2) and (6,1); in a relative coordinate system the origin is
egocentric, with the animat at (0,0); discrete moves and turns use a unit distance and
90-degree angles, while continuous moves and turns allow any distance and any angle]
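• A minimal sketch of such a description, assuming a 2D world and an animat whose
forward axis is its local x-axis (the names and conventions are illustrative):

#include <cmath>

// The animat's state in the absolute (world) coordinate system.
struct AnimatState {
    float x, y;        // position relative to the world origin
    float heading;     // orientation in radians, measured from the world x-axis
};

// Convert an object's absolute position into the animat's egocentric frame:
// translate so the animat sits at the origin, then rotate by -heading.
void toEgocentric(const AnimatState& self, float objX, float objY,
                  float& outX, float& outY)
{
    float dx = objX - self.x;
    float dy = objY - self.y;
    outX =  dx * std::cos(self.heading) + dy * std::sin(self.heading);
    outY = -dx * std::sin(self.heading) + dy * std::cos(self.heading);
}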
Animat brains in FEAR
• In FEAR architectures, modules and interfaces are defined declaratively
and at a very high level in XML
• This is supposed to make building and debugging easy
• Yet for a standard animat in the Quake 2 demonstrator, the most
interesting specification is that of the animat’s brain: a C++ function
called Think, defined in the file Brain.cpp – and that specification is procedural
• The following very basic navigating brain depends on calls to the physics,
vision and motion modules
void Think()
{
    bool col = physics->Collision();                       // check for touching contact
    float front_obstacle = vision->TraceWalk(0.0f, 8.0f);  // any problems ahead?

    // decide what actions to call
    motion->Move(Forward);
    if (col || front_obstacle < 2.0f)
        motion->Turn(orientation);                         // where orientation is some angle
}
Sensing and Acting in the World - FEAR
• Physics module
// returns true if animat has bumped something, false otherwise
physics->Collision(); //Polled
physics->OnCollision(); //Asynchronous callback detecting a collision
• Vision Module
// returns the distance to nearest object at bearing angle
vision->TraceWalk(angle, steps); //simulating walking on that bearing
vision->TraceLine(angle); //as if throwing projectile on that bearing
• Motion module
// continuous - angle is in deg. from current orientation
motion->Turn(angle);
// continuous - move in direction, a literal eg. Motion::Forward
motion->Move(direction);
Obstacle Avoidance – Basic functionality
• Finding one’s way around obstacles is fundamental to
navigation
• Well suited to implementation by a reactive control system
• Begins from general principles (a sketch in code follows below):
  1. When no obstacle is sensed, move ahead normally (straight or wandering
     randomly)
  2. If wall detected on one side, veer away slowly
  3. If obstacle detected ahead, turn to one side to avoid it
  4. When stuck in a corner, a more radical turn is needed
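• A sketch of how these four rules might be expressed with the FEAR-style calls from
the earlier slides (the threshold distances, turn angles and the sign convention for
bearings are assumptions, not part of FEAR):

void Think()
{
    // probe for obstacles ahead and to either side (bearing in degrees, 8 steps)
    float ahead = vision->TraceWalk(0.0f, 8.0f);
    float left  = vision->TraceWalk(-45.0f, 8.0f);
    float right = vision->TraceWalk(45.0f, 8.0f);

    motion->Move(Forward);                               // 1. default: move ahead normally

    if (physics->Collision() || (ahead < 1.0f && left < 1.0f && right < 1.0f))
        motion->Turn(150.0f);                            // 4. stuck in a corner: radical turn
    else if (ahead < 2.0f)
        motion->Turn(left > right ? -60.0f : 60.0f);     // 3. obstacle ahead: turn to the clearer side
    else if (left < 2.0f)
        motion->Turn(10.0f);                             // 2. wall on one side: veer away slowly
    else if (right < 2.0f)
        motion->Turn(-10.0f);                            // 2. wall on the other side: veer away slowly
}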
A Reactive Control System
• A reactive system requires three elements:
• 1) a set of perceptual and action functions, which apply in a particular
situation
• 2) a mapping showing which percepts release which behaviours. That
means a theory about how the animat behaves (note the relation to the idea of
behavioural laws for agents). See previous slide.
• An if-else-if-else structure could be used to order calls to perceptual
and motor functions – procedural
• A rule-based system (Topic 6) is another possibility – declarative
• In FEAR we have synchronous calls, based on the client-server model.
Requests by a client are made and do not return until the server has
computed a result. Usually based on simple function calls. Commonly
used for the delegation of tasks
A Reactive Control System
• Could also use asynchronous events. Based on something called the
observer design pattern: have a set of behaviours ready to go and a set of
events that will trigger them. A kind of event-based processing
• These can interrupt another routine and transfer control when something
comes up unexpectedly
• These should operate in ‘parallel’, competing for control of the animat’s
body => need for 3) arbitration in case of a tie
• How are competing motor outputs combined to control the limbs?
- Independent sum – different outputs connected to different effectors so that they
cannot clash
- Combination – some formula (eg weighted sum) combines two or more outputs
to blend them into output commands
- Suppression – certain components get priority over others, and can replace their
output with their own (eg. subsumption architecture)
- Sequential – output of different behaviours
getting control at alternate times
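• A minimal sketch of suppression-style arbitration (the behaviour interface and names
are illustrative, not part of FEAR):

#include <vector>

struct MotorOutput { float forward; float turn; bool active; };   // active = has something to say

// Each behaviour proposes a motor output for the current situation.
struct Behaviour {
    virtual MotorOutput propose() = 0;
    virtual ~Behaviour() = default;
};

// Suppression: behaviours are ordered from highest to lowest priority, and the
// first active behaviour replaces the output of all those below it.
MotorOutput arbitrate(const std::vector<Behaviour*>& byPriority)
{
    for (Behaviour* b : byPriority) {
        MotorOutput out = b->propose();
        if (out.active)
            return out;                    // higher layer suppresses the rest
    }
    return {1.0f, 0.0f, true};             // default: cruise straight ahead
}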
Subsumption architecture works on real
robots!
[Diagram: a subsumption architecture – sensory triggers (motor load, bump switches,
IR proximity, photosensors) release the behaviours Random Act, Escape, Avoid and
Follow, with Cruise running by default; suppression nodes (s) let higher-priority
behaviours override the output of lower ones to produce the final motor command]
Heroes of AI #3 – The Radical
• The radical idea that sophisticated robot behaviour could be accomplished without
high-powered computing was advanced in the 1980s by expatriate Australian
engineer Rodney Brooks. At the time robots were slow and clumsy, using
classical planning to compute their motions
• Brooks argued that robots could interact directly with the world via a properly designed
reactive network of sensors and actuators, and created a number of behaviour-based
control architectures for robots, without the need for complex representations. In the
1990s he and his students at the MIT robotics lab demonstrated ever more ingenious
robots that used his subsumption architecture.
• Brooks was featured in a 1997 documentary called “Fast, Cheap and Out of Control”,
the name of his paper to the British Interplanetary Society arguing that insect-like
robots could be used for exploration of the solar system.
• He formed a company called iRobot, which now provides PackBot robots to the
US military as well as mass-producing Roomba floor-cleaning robots
• His latest and most demanding project is Cog, a behaviour-based humanoid robot
How to Use Guntactyx
• Guntactyx is a programming game - write scripts to control the
behaviour of a team of ‘bots
• Up to 4 teams can play different kinds of games, including
fighting, racing and soccer – better to use two small teams on
most machines
• Control is via the main front panel (but the options panel (‘0’) is
better). Press F3 to get a "heads-up"-style overlay
• Scripts are written in a C-like language called Small. Use any
kind of text editor to create and modify Small scripts (but choose
one which does not insert special characters into the text eg.
Notepad, not Word)
• Script files are saved into the \bots directory with the extension
.sma
• Script files must be compiled using the Small compiler sc.exe
before they will appear as a team on the main panel
• Read the GUNTACTYX_manual.htm for more detail
Summary
• Game virtual worlds generally distinguish structure and detail. Moving
objects could be considered a third category
• Humans see detail, but game characters generally interact via structure
• Machine vision is an important but difficult sub-field of AI
• The representation of space and time may be 2D or 3D, discrete or
continuous
• Present game engines provide a locomotion layer which abstracts basic
movement actions away from the strategic control of direction. In future AI
may automate basic interactions with the world
• Travel paths through space may be represented as graphs. These are
traditionally searched by methods such as breadth-first, depth-first, or A*
search. Such search is a general problem-solving method
• Reactive control systems require 1) perceptual & action functions,
2) a mapping from percepts to actions representing a theory of behaviour,
and 3) a method to arbitrate conflicts – to resolve which action will be taken
in case of a tie
• Could be implemented procedurally as if-else-if statements or declaratively as
a set of rules in a RBS
References
• Brooks, R. & Flynn, A. (1989). Fast, Cheap and Out of Control: The Robotic
Invasion of the Solar System. Journal of the British Interplanetary Society,
42, 478–485.