Trees

CO3301 - Games Development 2
Week 22
Gareth Bellaby
Probability Tree
Probability Trees
A probability tree is a tree which has a probability associated with each branch.
Probability Trees
(Diagram: an example probability tree with leaf probabilities 0.125, 0.125, 0.25 and 0.5.)
Probability Trees
Probabilities are propagated down the tree. A probability which follows on from another is multiplied by it, i.e. the probability of reaching a node is the product of the probabilities along the path to it.
At each level of the tree the probabilities must sum to 1:
0.125 + 0.125 + 0.25 + 0.5 = 1
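A minimal sketch of this calculation in C++ (the structure and names are my own, not the lecture's): each branch stores its probability, and a leaf's final probability is the product of the probabilities along the path to it. The example tree below reproduces the four leaf values summed above.

#include <cstdio>
#include <vector>

struct PNode {
    double branchProb;            // probability of the branch leading into this node
    std::vector<PNode> children;  // empty for a leaf
};

// Multiply probabilities down the tree; print each leaf's final probability.
void leafProbabilities(const PNode& node, double pathProb) {
    double p = pathProb * node.branchProb;
    if (node.children.empty()) {
        std::printf("leaf probability: %g\n", p);
        return;
    }
    for (const PNode& child : node.children)
        leafProbabilities(child, p);
}

int main() {
    // A tree whose four leaves come out at 0.5, 0.25, 0.125 and 0.125:
    // every node splits 0.5 / 0.5, so the leaf values total 1 as above.
    PNode root{1.0, {
        {0.5, {}},
        {0.5, {
            {0.5, {}},
            {0.5, {
                {0.5, {}},
                {0.5, {}}
            }}
        }}
    }};
    leafProbabilities(root, 1.0);
}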
Decision Tree
Decision Trees
• A decision tree is a way of representing knowledge.
• A decision tree is a way of using inputs to predict future outputs.
• Decision trees are a good way of expressing decisions for computer games: not just for AI but for general gameplay.
• A decision tree is a classification method.
• Decision trees learn from examples using induction.
Example
(Diagram: an example decision tree.)
Decision Trees
Each internal node is a test.
Each leaf node is a classification.
The intention is that an unknown example can be classified by traversing the tree.
Decision trees can be created in real time extremely efficiently, which makes them a practical option for machine learning in games.
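A sketch of that structure (the names are assumptions, and binary yes/no tests are used for brevity): internal nodes hold a test, leaves hold a classification, and classifying an unknown example is a walk from root to leaf.

#include <cstdio>
#include <functional>
#include <memory>
#include <string>

struct Enemy { int health; bool armed; };    // an example "unknown" to classify

struct DTNode {
    std::string classification;              // used only by leaves
    std::function<bool(const Enemy&)> test;  // used only by internal nodes
    std::unique_ptr<DTNode> yes, no;         // branches; both null for a leaf
    bool isLeaf() const { return !yes && !no; }
};

// Traverse from the root, applying each internal node's test to choose a
// branch, until a leaf is reached; the leaf's classification is the answer.
const std::string& classify(const DTNode& node, const Enemy& e) {
    if (node.isLeaf()) return node.classification;
    return classify(node.test(e) ? *node.yes : *node.no, e);
}

int main() {
    DTNode root{"", [](const Enemy& e) { return e.health > 50; },
                std::make_unique<DTNode>(DTNode{"attack", {}, nullptr, nullptr}),
                std::make_unique<DTNode>(DTNode{"flee",   {}, nullptr, nullptr})};
    std::printf("%s\n", classify(root, Enemy{30, true}).c_str());  // prints "flee"
}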
Decision Trees
• A branch can have the response "unknown": decision trees can deal with uncertainty.
• Decision trees cannot branch on continuous ranges, e.g. floating-point numbers, because of the large number of branching alternatives. Instead you must provide a set of "buckets", each with a specified range (a sketch follows this list). We'll see an example of this below when Black & White is discussed, with its "none", "medium" and "maximum" buckets.
• Tests followed in sequence down the tree can be considered to be a logical AND.
• The branching can be considered to be a logical OR.
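For instance, a continuous defence value could be bucketed before it reaches the tree. A minimal sketch, with boundary values invented purely for illustration (the lecture does not give Black & White's actual thresholds):

#include <string>

// Map a continuous defence strength in [0, 1] onto discrete buckets that a
// decision tree can branch on (the boundaries are assumed, not B&W's own).
std::string defenceBucket(float strength) {
    if (strength < 0.1f) return "none";
    if (strength < 0.7f) return "medium";
    return "maximum";
}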
Behaviour Tree
Behaviour Tree
• A hierarchical tree structure.
• Nodes model the decision making of an entity.
• Akin to a hierarchical FSM, but built out of tasks rather than states.
• Transparent and easy to understand: a diagrammatic method.
• Expressive.
• Robust and less error-prone.
Implementation
• The behaviour of a node is dependent upon its context.
• The operation of a node is inserted into the scope of its parent.
• This allows modularity.
• Leaf nodes are tasks and so may take time to complete.
Approach
• Leaf nodes.
• Control nodes:
  • Sequence
  • Selector
  • Decorator
(A combined sketch of these node types follows the Decorator slide below.)
Leaf
• Leaf nodes: a task, an execution node.
• Signals:
  • success
  • failure
  • running
Sequence
• Execute the first node that has not yet succeeded.
• Execution is in sequence: run task 1 until it returns success, then task 2 until it returns success, and so on.
• Variants: any failure leads to failure of the sequence overall, or the sequence just moves on to the next task.
Selector
• Selects one child node to execute. This could be a random choice or some sort of control mechanism.
Decorator
• A single child node.
• Allows for other types of operation such as repetition, a filter or an inverter.
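Pulling the last four slides together, a minimal behaviour-tree sketch (illustrative only; the names are my assumptions and production systems are more elaborate): leaves return one of the three signals, the sequence uses the "any failure fails the whole sequence" variant, the selector tries its children in a fixed order, and the inverter is one example of a decorator.

#include <functional>
#include <memory>
#include <utility>
#include <vector>

enum class Status { Success, Failure, Running };  // the three leaf signals

struct Node {
    virtual ~Node() = default;
    virtual Status tick() = 0;  // called once per update while the node runs
};

// Leaf: a task. Wrapped as a callable here so the sketch stays short.
struct Leaf : Node {
    std::function<Status()> task;
    explicit Leaf(std::function<Status()> t) : task(std::move(t)) {}
    Status tick() override { return task(); }
};

// Sequence: run children in order; any failure fails the sequence overall.
struct Sequence : Node {
    std::vector<std::unique_ptr<Node>> children;
    Status tick() override {
        for (auto& c : children) {
            Status s = c->tick();
            if (s != Status::Success) return s;  // Failure or Running stops us
        }
        return Status::Success;
    }
};

// Selector: try children in order until one succeeds (a fixed-order
// control mechanism; a random choice would be equally valid).
struct Selector : Node {
    std::vector<std::unique_ptr<Node>> children;
    Status tick() override {
        for (auto& c : children) {
            Status s = c->tick();
            if (s != Status::Failure) return s;  // Success or Running stops us
        }
        return Status::Failure;
    }
};

// Decorator: a single child; this one inverts Success and Failure.
struct Inverter : Node {
    std::unique_ptr<Node> child;
    explicit Inverter(std::unique_ptr<Node> c) : child(std::move(c)) {}
    Status tick() override {
        Status s = child->tick();
        if (s == Status::Success) return Status::Failure;
        if (s == Status::Failure) return Status::Success;
        return Status::Running;
    }
};

A tree is built by composing these nodes and calling tick() on the root once per update.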
Examples
(Diagrams taken from Chris Simpson, "Behavior trees for AI: How they work".)
Learning Tree/Classification Tree
Black & White
What he ate     Feedback ("How nice it tasted")
A big rock      -1.0
A small rock    -0.5
A small rock    -0.4
A tree          -0.2
A cow           +0.6
The values for repeated items are averaged, e.g. the two small-rock entries give (-0.5 + -0.4) / 2 = -0.45.
Taken from Evans, R., (2002), "Varieties of Learning".
Black & White
What creature attacked                        Feedback from player
Friendly town, weak defence, tribe Celtic     -1.0
Enemy town, weak defence, tribe Celtic        +0.4
Friendly town, strong defence, tribe Norse    -1.0
Enemy town, strong defence, tribe Norse       -0.2
Friendly town, medium defence, tribe Greek    -1.0
Enemy town, medium defence, tribe Greek       +0.2
Enemy town, strong defence, tribe Greek       -0.4
Enemy town, medium defence, tribe Aztec        0.0
Friendly town, weak defence, tribe Aztec      -1.0
Black & White
(Diagram: the decision tree learned from this feedback. Taken from Evans, R., (2002), "Varieties of Learning".)
Black & White
• Some of the criteria are lost because they turn out to be irrelevant, e.g. the information about the tribe.
• The decision tree is created in real time. Each time the creature receives new input from the player the tree is rebuilt.
• Each new input will change the values.
• The rebuilding can be significant: information that was previously jettisoned as irrelevant could become relevant again.
Black & White
• The Evans article provides more detail as to how decision trees were used in Black & White.
• One thing that is important to add is his observation that, in order to iterate through all of the attributes of an object efficiently, it is necessary to define the objects by their attributes (sketched below).
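A sketch of that representation (the layout is an assumption; the attribute names are drawn from the tables above): the object is a bag of named attributes rather than hard-coded fields, so learning code can iterate over every attribute generically.

#include <map>
#include <string>

using Attributes = std::map<std::string, std::string>;

// An object defined by its attributes rather than by fixed fields.
Attributes town = {
    {"alignment", "enemy"},
    {"defence",   "medium"},
    {"tribe",     "Greek"},
};

// Any learner can now visit every attribute without knowing the schema:
//   for (const auto& [name, value] : town) { ... }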
ID3
• The ID3 algorithm was presented by Quinlan, 1986.
• It uses an iterative method.
• From the training examples a random subset is selected and a tree is built from it.
• Test the tree on the training examples.
• If all of the examples are classified successfully then stop.
• Otherwise add some more training examples to the subset and repeat the process.
ID3
• Start with a root node. Assign to the root node the best attribute.
• A branch is then generated for each value of the attribute.
• A node is created at the end of each branch.
• Each training example is assigned to one of these new nodes.
• If no examples are assigned to a node then the node and its branch can be removed.
• Each node is then treated as a new root and the process repeated.
ID3
• It should be apparent that different trees can be constructed from the same examples.
• It is desirable to derive the smallest tree, since this will be the most efficient one.
• The topmost choices need to be the most informative.
• The aim is the greatest information gain at each choice (a sketch of the whole construction follows this list).
• Information theory provides a mathematical measurement of the information content of a message. Information theory was presented by Shannon in 1948.
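A compact sketch of this construction (my own code, not Quinlan's; it assumes discrete attribute values, a non-empty example set, and omits the majority-label fallback for brevity). The best attribute is taken to be the one with the greatest information gain, computed from the entropy measure defined on the next slide.

#include <cmath>
#include <map>
#include <memory>
#include <set>
#include <string>
#include <vector>

struct Example {
    std::map<std::string, std::string> attrs;  // the object's attributes
    std::string label;                         // the known classification
};

struct TreeNode {
    std::string attribute;                     // internal node: attribute tested
    std::string label;                         // leaf node: classification
    std::map<std::string, std::unique_ptr<TreeNode>> branches;  // one per value
};

// Shannon entropy of the class labels in a set of examples.
double entropy(const std::vector<Example>& ex) {
    std::map<std::string, int> counts;
    for (const auto& e : ex) ++counts[e.label];
    double h = 0.0;
    for (const auto& [label, n] : counts) {
        double p = double(n) / ex.size();
        h -= p * std::log2(p);
    }
    return h;
}

// Information gain: entropy removed by splitting the examples on one attribute.
double gain(const std::vector<Example>& ex, const std::string& attr) {
    std::map<std::string, std::vector<Example>> split;
    for (const auto& e : ex) split[e.attrs.at(attr)].push_back(e);
    double remainder = 0.0;
    for (const auto& [value, subset] : split)
        remainder += double(subset.size()) / ex.size() * entropy(subset);
    return entropy(ex) - remainder;
}

// Assign the best attribute to the root, branch on each of its values,
// then treat each new node as a root and repeat on its subset of examples.
std::unique_ptr<TreeNode> id3(std::vector<Example> ex, std::set<std::string> attrs) {
    auto node = std::make_unique<TreeNode>();
    if (entropy(ex) == 0.0 || attrs.empty()) {  // pure set, or nothing left to test:
        node->label = ex.front().label;         // make a leaf
        return node;
    }
    std::string best;
    double bestGain = -1.0;
    for (const auto& a : attrs)
        if (double g = gain(ex, a); g > bestGain) { bestGain = g; best = a; }
    node->attribute = best;
    attrs.erase(best);
    std::map<std::string, std::vector<Example>> split;
    for (const auto& e : ex) split[e.attrs.at(best)].push_back(e);
    for (auto& [value, subset] : split)
        node->branches[value] = id3(std::move(subset), attrs);
    return node;
}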
Information Theory
• Shannon defines the amount of information in a message as a function of the probability of occurrence of each possible message.
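His measure, on which ID3's information gain is built, is the entropy of the source. For possible messages occurring with probabilities p_1, ..., p_n:

H = -\sum_{i=1}^{n} p_i \log_2 p_i

For example, a fair coin (p = 0.5, 0.5) gives H = 1 bit, while a certain outcome gives H = 0: the more uniform the probabilities, the more information a message carries on average.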
ID3
• ID3 was extended by Quinlan to provide probabilistic classification using Bayesian statistics.
Sources & Further reading
DeSylva, C., (2005), "Optimizing a Decision Tree Query Algorithm for Multithreaded Architectures", Game Programming Gems 5, Charles River Media: Hingham, Mass, USA.
Evans, R., (2002), "Varieties of Learning", AI Game Programming Wisdom, Charles River Media: Hingham, Mass, USA.
Fu, D., & Houlette, R., (2003), "Constructing a Decision Tree Based on Past Experience", AI Game Programming Wisdom 2, Charles River Media: Hingham, Mass, USA.
Sources & Further reading
Manslow, J., (2006), "Practical Algorithms for In-Game Learning", AI Game Programming Wisdom 3, Charles River Media: Hingham, Mass, USA.
Quinlan, J. R., (1986), "Induction of decision trees", Machine Learning, 1: 81-106.
Shannon, C., (1948), "A mathematical theory of communication", Bell System Technical Journal.
Simpson, C., (2014), "Behavior trees for AI: How they work", Gamasutra, 17 July 2014.