Machine Learning

CO3301 - Games Development 2
Week 22
Machine Learning
Gareth Bellaby
1
Machine Learning & Games
• There are some games which "learn" during play, e.g. Creatures, Petz, Black & White.
• Generally older games.
• Quite a few claims in publicity but no reality…
• So I stopped doing anything about machine learning.
2
Machine Learning & Games
• However, I've seen various instances of ML being used recently.
• Not "in game", but to train a game, e.g. Kinect Sports.
• Learning a function from examples of its inputs and outputs is called inductive learning.
• Functions can be represented by logical sentences, polynomials, belief networks, neural networks, etc.
3
Learning
• It's possible to build a computer which learns.
• Two common methods:
• Genetic Algorithms
• Neural Nets
4
Genetic Algorithms
• Inspiration from genetics, evolution and "survival of the fittest".
• GA programs don't solve problems by reasoning logically about them.
• Instead, populations of competing candidate solutions are created. Poor candidates die out; better candidates survive and reproduce by constructing new solutions out of components of their parents, i.e. out of their "genetic material" (see the sketch below).
5
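A minimal sketch of that loop in Python. The problem, fitness function and parameters here are illustrative assumptions (candidates are bit strings and fitness is simply the number of 1s), not anything prescribed by the slides:

```python
import random

# Illustrative GA sketch: candidates are bit strings and fitness is the
# number of 1s ("one-max"), so the known optimum is the all-1s string.
GENES, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.02

def fitness(candidate):
    return sum(candidate)                      # count the 1s

def crossover(parent_a, parent_b):
    cut = random.randint(1, GENES - 1)         # single-point crossover
    return parent_a[:cut] + parent_b[cut:]

def mutate(candidate):
    return [1 - g if random.random() < MUTATION_RATE else g for g in candidate]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # "Survival of the fittest": keep the better half, discard the rest.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # Reproduce: children are built out of their parents' "genetic material".
    children = [mutate(crossover(*random.sample(survivors, 2)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print("best candidate:", max(population, key=fitness))
```

Selection here simply keeps the better half of the population; real GAs more often use fitness-proportionate or tournament selection, but the shape of the loop is the same.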
Neural Nets
• Use a structure based (supposedly) on the brain's neurons.
• A net because many interconnected "neurons" are used.
• Knowledge as a pattern of activation, rather than rules.
6
Learning
• Neural nets were one of the first instances of machine learning. A neural net is not set up according to some understanding of a problem; rather, it learns (or is taught) how to solve the problem itself.
7
Learning
• In essence a neural net works by summing multiple inputs according to weighting (which can be adjusted), with output being triggered at a threshold.
• A basic neural net uses a rule of the form:
• "Change the weight of the connection between a unit A and a unit B in proportion to the product of their simultaneous activation"
8
Learning
• Starting from zero weights, expose the network to a series of learning trials.
• Learning is from experience, which causes changes in weights and thus changes in patterns of activation.
• The system cycles through the learning trials until it settles down to a rest state (see the sketch below).
9
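A sketch of these two slides together: a single threshold unit trained from zero weights with the Hebb rule. The training patterns (logical AND in +1/-1 coding), the learning rate and the "responses have settled" stopping test are my assumptions:

```python
# Hebbian learning sketch for a single threshold unit, starting from zero
# weights. Patterns, coding and learning rate are illustrative assumptions.
LEARNING_RATE = 0.5

# Each learning trial pairs an input pattern (plus a constant bias input of 1)
# with the desired activation of the output unit. These trials encode AND.
trials = [
    ([ 1,  1, 1],  1),   #  1 AND  1 ->  1
    ([ 1, -1, 1], -1),
    ([-1,  1, 1], -1),
    ([-1, -1, 1], -1),
]

weights = [0.0, 0.0, 0.0]          # start from zero weights

def output(inputs):
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net >= 0 else -1   # threshold function

# Cycle through the trials until the unit's responses stop changing between
# passes (my reading of "settles down to a rest state").
for epoch in range(100):
    before = [output(inputs) for inputs, _ in trials]
    for inputs, target in trials:
        # Hebb rule: change each weight in proportion to the product of the
        # simultaneous activations of the two connected units. During a trial
        # the output unit's activation is taken to be the desired response.
        for i, x in enumerate(inputs):
            weights[i] += LEARNING_RATE * x * target
    if [output(inputs) for inputs, _ in trials] == before:
        break

print("weights:", weights)
print("responses:", [(inputs[:2], output(inputs)) for inputs, _ in trials])
```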
Artificial Neuron
[Figure: an artificial neuron. Input signals x1 … xn arrive along connections with weights w1 … wn; the unit sums them to give net = Σ xi·wi, and the output is f(net).]
• xi : input signals
• wi : weights
• Σ xi·wi : activation level
• f : threshold function
10
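The diagram written out as code. The example input signals, the weights and the choice of a simple step function for f are illustrative assumptions:

```python
# The artificial neuron from the diagram: net = sum of x_i * w_i, passed
# through a threshold function f to give the output.

def f(net, threshold=0.0):
    """Threshold (step) activation function."""
    return 1 if net >= threshold else 0

def neuron(inputs, weights):
    net = sum(x * w for x, w in zip(inputs, weights))   # activation level
    return f(net)

# Example: three input signals and their weights (illustrative values).
print(neuron([0.5, 1.0, 0.25], [0.4, -0.2, 0.8]))       # net = 0.2 -> fires (1)
```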
Neural Nets
• Historically, neural nets come in two forms:
• Basic 2-layer net, which has some important limitations.
• Multi-layered net.
11
2-layer net
• Advantages:
• There is a guaranteed learning rule (the Hebb learning rule), i.e. change the weight of the connection between two units in proportion to the product of their simultaneous activation.
• Disadvantages:
• There is a strong correlation between input and output units.
• The XOR problem (illustrated below). This is a result of the strong correlation between input and output units. Summing the inputs takes the node over its threshold and so it wrongly fires.
12
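A quick way to see the XOR problem in code: brute-force a grid of weights and thresholds for a single two-input threshold unit and check whether any setting reproduces XOR. The grid itself is an illustrative assumption; no setting works, because XOR is not linearly separable:

```python
import itertools

# Search a grid of weights and thresholds for a single two-input threshold
# unit and count how many settings reproduce XOR. The answer is zero.

def unit(x, y, w1, w2, threshold):
    return 1 if (x * w1 + y * w2) >= threshold else 0

xor_cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

grid = [v / 2 for v in range(-8, 9)]          # -4.0, -3.5, ..., 4.0
solutions = [
    (w1, w2, t)
    for w1, w2, t in itertools.product(grid, repeat=3)
    if all(unit(x, y, w1, w2, t) == target for (x, y), target in xor_cases)
]

print("single-unit solutions to XOR found:", len(solutions))   # prints 0
```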
McCulloch-Pitts AND neuron
[Figure: a neuron with inputs x, y and a bias input fixed at 1; weights +1, +1 and -2; the unit computes x + y - 2 and outputs x ∧ y.]
• Three inputs: x, y and the bias, which has a constant value of +1.
• Weights are +1, +1 and -2, respectively.
• Threshold: if the sum x + y - 2 is less than 0, return -1. Otherwise return 1.
13
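The same neuron in code. The weights (+1, +1, -2), the bias input of +1 and the threshold-at-0, return -1 or 1 behaviour are taken from the slide; only the function name is mine:

```python
# McCulloch-Pitts AND neuron: inputs x and y plus a bias input fixed at +1,
# weights +1, +1 and -2, thresholded at 0.

def and_neuron(x, y):
    net = (x * 1) + (y * 1) + (1 * -2)   # weighted sum: x + y - 2
    return -1 if net < 0 else 1          # threshold at 0

for x in (1, 0):
    for y in (1, 0):
        print(x, y, "->", and_neuron(x, y))
```

Running it reproduces the truth table on the next slide.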
McCulloch-Pitts AND neuron
x   y   x + y - 2   output
1   1       0          1
1   0      -1         -1
0   1      -1         -1
0   0      -2         -1
14
Multi-layered net
• Multi-layered nets overcome the learning limitations of 2-layer nets.
• Prevents the net from becoming simply correlational.
• A disadvantage is that there is no guaranteed learning rule.
15
Solution to XOR problem
[Figure: a two-layer solution to XOR. Inputs x and y each connect to the output unit with weight +1 and to a hidden unit with weight +1. The hidden unit (activation threshold 1.5) connects to the output unit with weight -2; the output unit's activation threshold is 0.5.]
16
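The figure as code. The weights and activation thresholds are those shown in the diagram; the function names and the step-function formulation are mine:

```python
# Two-layer net solving XOR, using the weights and thresholds in the figure.

def step(net, threshold):
    return 1 if net >= threshold else 0

def xor_net(x, y):
    # Hidden unit: +1 from each input, activation threshold 1.5
    # (it fires only when both inputs are on, i.e. it computes AND).
    hidden = step(x * 1 + y * 1, 1.5)
    # Output unit: +1 from each input, -2 from the hidden unit, threshold 0.5.
    return step(x * 1 + y * 1 + hidden * -2, 0.5)

for x in (0, 1):
    for y in (0, 1):
        print(x, y, "->", xor_net(x, y))
```

The hidden unit only fires when both inputs are on, and its -2 connection then suppresses the output unit, which is exactly the case a single unit gets wrong.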
Connectionism
• One approach towards knowledge is that taken by symbolic logic (e.g. Turing). This approach contains the suggestion that the architecture or framework which underlies computation, and thought, is irrelevant.
• Connectionism suggests that architecture does matter and, specifically, that it is useful to use an architecture which is brain-like, i.e. massively connected parallel machines.
• Note: neural nets can be implemented on single-processor, binary machines.
17
Knowledge Representation.
• Connectionism also represents knowledge in a different fashion from (for instance) semantic nets or production rules. Knowledge is represented as a pattern of activation. This pattern bears no resemblance to the knowledge being represented. The pattern is distributed across the system as a whole. Learning is by some type of internal representation.
• Neural nets are something like brains: they are parallel and distributed.
18
The Symbol Grounding Problem
19
Symbols
• Computers are machines that manipulate symbols. There is a philosophical tradition which suggests that symbol manipulation captures all thinking and understanding. It includes such writers as Turing, Russell and the early Wittgenstein.
• "Symbolic" model of the mind: the mind is a symbol system and cognition is symbol manipulation.
20
Turing
• A Turing Machine (TM) can compute anything which is computable. It uses symbolic logic. A Universal TM can perform as any other TM. If it is possible to work out how cognitive, intelligent processes work, the UTM can be programmed to perform them.
21
Searle
• Searle argues machines can never possess 'intentionality'. Most AI programs consist of sets of rules for manipulating symbols: they are arbitrary and never about objects or events.
• Intentionality is aboutness. "The property of the mind by which it is directed at, about, or 'of' objects and events in the world. Aboutness - in the manner of beliefs, fears, desires, etc", Eliasmith, Dictionary of Philosophy of Mind.
22
Penrose
• Penrose: cognitive processes are not computable.
• Argues that human thought cannot be simulated by any computation.
• Uses Gödel's incompleteness theorem.
23
Symbol Grounding Problem
• The problem of providing systems with some external, fixed reference.
• One (simplistic) way to think of this is to distinguish between knowledge and data. Data can be understood, manipulated, used to create new data, etc., but all of these processes can occur even though the data is untrue. The word 'knowledge' can be used for those things we 'know' to be true.
24
Symbol Grounding Problem
• Some types of intelligent behavior can be reproduced using sets of formal rules to manipulate symbols (e.g. chess), but this is not obviously true of many of the things we do.
• Could a computer be given the equipment to interact with the world in the way that we do, and to learn from it?
25
Harnad
• "How can the semantic interpretation of a
formal symbol system be made intrinsic
to the system, rather than just parasitic
on the meanings in our heads? How can
the meanings of the meaningless symbol
tokens, manipulated solely on the basis
of their (arbitrary) shapes, be grounded
in anything but other meaningless
symbols?“ Harnad, S. ,(1990), "The
Symbol Grounding Problem."
26
Harnad
• Harnad himself proposes connectionism as a possible solution.
• Neural nets are used to ground symbols (i.e. provide the symbols with a foundation).
• "Connectionism is one natural candidate for the mechanism that learns the invariant features underlying categorical representations, thereby connecting names to the proximal projections of the distal objects they stand for." Harnad, S. (1990), "The Symbol Grounding Problem".
27