
Lecture 2
ASSOCIATIONS, RULES, AND MACHINES
Victor Eliashberg
Consulting professor, Stanford University,
Department of Electrical Engineering
Slide 1
SCIENTIFIC / ENGINEERING APPROACH
“When you have eliminated the impossible, whatever remains, however
improbable, must be the truth.” (Sherlock Holmes)
[Figure: the external world W and the sensorimotor devices D form the external system (W,D); a computing system B simulates the work of the human nervous system; D and B together form a human-like robot (D,B).]
Slide 2
ZERO-APPROXIMATION MODEL
[Figure: block diagram of the zero-approximation model, showing the quantities s(ν) and s(ν+1) at successive steps ν and ν+1.]
Slide 3
BIOLOGICAL INTERPRETATION
[Figure: biological interpretation; subsystem AM is associated with motor control, and subsystem AS with working memory, episodic memory, and mental imagery.]
Slide 4
PROBLEM 1: LEARNING TO SIMULATE THE TEACHER
This problem is simple: system AM needs to learn a manageable number of
fixed rules.
[Figure: system AM (network NM with inputs X11, X12, output NM.y, and selector sel) learning to simulate the Teacher; at each step AM receives the symbol read and the current state of mind, and must reproduce the Teacher's move, the symbol to type, and the next state of mind.]
Slide 5
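
Not part of the lecture, but to make "a manageable number of fixed rules" concrete: the Teacher's behavior can be held in a small lookup table from (symbol read, current state of mind) to (move, symbol to type, next state of mind). All names and rules below are illustrative, not taken from the slides.

    # Hypothetical illustration: the fixed rules AM must learn, stored as a lookup table.
    rules = {
        # (symbol_read, state_of_mind): (move, symbol_to_type, next_state)
        ("a", "q0"): ("right", "A", "q1"),
        ("b", "q0"): ("right", "B", "q0"),
        ("a", "q1"): ("left",  "A", "q0"),
    }

    def simulate_teacher(symbol_read, state_of_mind):
        """Reproduce the Teacher's response to the current input and state."""
        return rules[(symbol_read, state_of_mind)]

    print(simulate_teacher("a", "q0"))  # ('right', 'A', 'q1')

The number of entries grows only with the number of (symbol, state) pairs the Teacher actually uses, which is why this problem is manageable.
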
PROBLEM 2: LEARNING TO SIMULATE EXTERNAL SYSTEM
This problem is hard: the number of fixed rules needed to represent a RAM with
n locations explodes exponentially with n.
[Figure: network NS learning to simulate the external system (W,D).]
NOTE. System (W,D) shown in slide 3 has the properties of a
random access memory (RAM).
Slide 6
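
A quick illustration of the explosion (my own arithmetic, under the assumption that each location holds one of k symbols): a RAM with n locations has k**n distinct internal states, so a fixed-rule description must distinguish on the order of k**n cases.

    # Number of distinct internal states of a RAM with n locations,
    # each holding one of k symbols (illustrative assumption).
    k = 2
    for n in (4, 8, 16, 32):
        print(n, k**n)   # 16, 256, 65536, 4294967296 -- exponential in n
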
Programmable logic array (PLA): a logic implementation
of a local associative memory (solves problem 1 from slide 5)
Slide 7
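
As a rough software analogue of the PLA idea (an illustrative sketch, not the circuit on the slide): an AND plane matches the input against stored product terms, and an OR plane combines the matching terms into output bits.

    # Sketch of a PLA-like associative lookup (illustrative values).
    # AND plane: each row is a product term over input bits
    # (1 = require 1, 0 = require 0, None = don't care).
    and_plane = [
        (1, 0, None),   # term 0 matches inputs 10*
        (None, 1, 1),   # term 1 matches inputs *11
    ]
    # OR plane: which product terms drive which output bits.
    or_plane = [
        (0,),       # output 0 is the OR of term 0
        (0, 1),     # output 1 is the OR of terms 0 and 1
    ]

    def pla(inputs):
        terms = [all(lit is None or lit == bit for lit, bit in zip(term, inputs))
                 for term in and_plane]
        return [int(any(terms[t] for t in row)) for row in or_plane]

    print(pla((1, 0, 1)))  # [1, 1]
    print(pla((0, 1, 1)))  # [0, 1]
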
BASIC CONCEPTS FROM THE AREA
OF ARTIFICIAL NEURAL NETWORKS
Slide 8
Typical neuron
Neuron is a very specialized cell. There are several types of neurons with different
shapes and different types of membrane proteins. Biological neuron is a complex
functional unit. However, it is helpful to start with a simple artificial neuron (next slide).
Slide 9
Neuron as the first-order linear threshold element

Inputs: x_k ∈ R', k = 1,…,m  (R' is the set of real non-negative numbers)
Parameters: g_1,…,g_m (synaptic gains), τ (time constant)
Output: y ∈ R'

Equations:

    τ du/dt + u = Σ_{k=1..m} g_k x_k        (1)
    y = L(u)                                (2)
    L(u) = u if u > 0, 0 otherwise          (3)

[Figure: graph of the threshold-linear function L(u): zero for u ≤ 0, equal to u for u > 0.]

A more convenient notation

    x_k  is the k-th component of the input vector
    g_k  is the gain (weight) of the k-th synapse
    s = Σ_{k=1..m} g_k x_k  is the total postsynaptic current
    u    is the postsynaptic potential
    y    is the neuron output
    τ    is the time constant of the neuron
Slide 10
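
A minimal simulation of equations (1)-(3), assuming forward-Euler integration (the integration scheme and parameter values are my choices, not the lecture's):

    import numpy as np

    def simulate_neuron(x_t, g, tau=10.0, dt=1.0, u0=0.0):
        """Integrate tau*du/dt + u = sum_k g_k*x_k with forward Euler; y = L(u) = max(u, 0).

        x_t: sequence of input vectors x(t), each of length m
        g:   synaptic gains g_1..g_m
        """
        g = np.asarray(g, dtype=float)
        u, ys = u0, []
        for x in x_t:
            s = float(g @ np.asarray(x, dtype=float))   # total postsynaptic current
            u += dt / tau * (s - u)                     # first-order (leaky) dynamics
            ys.append(max(u, 0.0))                      # threshold-linear output L(u)
        return ys

    # Constant input: u relaxes toward s = g . x = 1.5, so y approaches 1.5
    print(simulate_neuron([[1.0, 0.5]] * 50, g=[1.0, 1.0])[-1])
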
Input synaptic matrix, input long-term memory (ILTM) and DECODING
[Figure: input synaptic matrix connecting the input signals x_1,…,x_m to the similarity signals s_1,…,s_n through gains gx_ik; DECODING = computing similarity; the gain matrix is the input long-term memory (ILTM).]

Equations:

    s_i = Σ_{k=1..m} gx_ik x_k,   i = 1,…,n        (1)

An abstract representation of (1):

    fdec : X × Gx → S                              (2)

Notation:

    x = (x_1,…,x_m)  are the signals from input neurons (not shown)
    gx = (gx_ik), i = 1,…,n, k = 1,…,m  is the matrix of synaptic gains -- we postulate that this matrix represents input long-term memory (ILTM)
    s = (s_1,…,s_n)  is the similarity function
Slide 11
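
Equation (1) above is a matrix-vector product; a small NumPy sketch (shapes and values are illustrative):

    import numpy as np

    def decode(gx, x):
        """DECODING: similarity s_i = sum_k gx[i, k] * x[k], i = 1..n (eq. (1) above)."""
        return np.asarray(gx) @ np.asarray(x)

    gx = np.array([[1.0, 0.0],    # ILTM: row i stores the "address" pattern of location i
                   [0.0, 1.0]])
    print(decode(gx, [0.9, 0.1]))  # s = [0.9, 0.1] -- row 0 is the most similar
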
Layer with inhibitory connections as the mechanism of the winner-take-all (WTA) choice
[Figure: a layer of n neurons with excitatory inputs s_1,…,s_n (gains α), membrane potentials u_1,…,u_n with time constant τ, outputs d_1,…,d_n, and inhibitory feedback through a signal x_inh (gains β, threshold q). Equations (1)-(3) give the layer dynamics. Small white and black circles represent excitatory and inhibitory synapses, respectively.]

Procedural representation (RANDOM CHOICE):

    i_win := random equally probable choice from { i | s_i = max_j s_j > 0 }     (4)
    if (i == i_win) d_i = 1; else d_i = 0;                                       (5)
Slide 12
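
The procedural representation (4)-(5) in code, as a sketch: a random, equally probable choice among the indices attaining the positive maximum of s, returning a one-hot vector d.

    import random

    def wta_choice(s):
        """Winner-take-all choice (eqs. (4)-(5) above): pick i_win uniformly at random
        from {i | s_i = max_j s_j > 0}; set d_i = 1 for the winner, 0 otherwise."""
        s_max = max(s)
        d = [0] * len(s)
        if s_max > 0:
            i_win = random.choice([i for i, si in enumerate(s) if si == s_max])
            d[i_win] = 1
        return d

    print(wta_choice([0.2, 0.9, 0.9]))  # one of [0,1,0] or [0,0,1], equally likely
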
Output synaptic matrix, output long-term memory (OLTM) and ENCODING
[Figure: output synaptic matrix connecting the WTA-layer signals d_1,…,d_n to the output signals y_1,…,y_p through gains gy_ki; ENCODING = data retrieval; the gain matrix is the output long-term memory (OLTM).]

Equations:

    y_k = Σ_{i=1..n} gy_ki d_i,   k = 1,…,p        (1)

An abstract representation of (1):

    fenc : D × Gy → Y                              (2)

Notation:

    d = (d_1,…,d_n)  are the signals from the WTA layer (see previous slide)
    gy = (gy_ki), k = 1,…,p, i = 1,…,n  is the matrix of synaptic gains -- we postulate that this matrix represents output long-term memory (OLTM)
    y = (y_1,…,y_p)  is the output vector
Slide 13
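
Equation (1) in code; when d from the WTA layer is one-hot, y is simply the i_win-th column of the OLTM matrix (an illustrative sketch):

    import numpy as np

    def encode(gy, d):
        """ENCODING (retrieval): y_k = sum_i gy[k, i] * d[i], k = 1..p (eq. (1) above)."""
        return np.asarray(gy) @ np.asarray(d)

    gy = np.array([[1.0, 0.0],    # OLTM: column i stores the data of location i
                   [0.0, 1.0],
                   [0.5, 0.5]])
    print(encode(gy, [1, 0]))  # with one-hot d, this is column 0 of gy: [1.0, 0.0, 0.5]
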
A neural implementation of a local associative memory
(solves problem 1 from slide 5) (WTA.EXE)
[Figure: a layer of neurons N1(j) with synaptic matrices S21(i,j); addressing by content: DECODING via the input long-term memory (ILTM), RANDOM CHOICE, and ENCODING (retrieval) via the output long-term memory (OLTM).]
Slide 14
A functional model of the previous network [7],[8],[11]
(WTA.EXE)
Slide 15
Representation of local associative memory in terms of three
“one-step” procedures: DECODING, CHOICE, ENCODING
Slide 17
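
Putting the three one-step procedures together, here is a functional sketch of a local associative memory in the spirit of slides 11-14. The class name and the write() rule that fills a free row of ILTM and column of OLTM are my own assumptions, not taken from the slides.

    import numpy as np, random

    class LocalAssociativeMemory:
        """DECODING -> RANDOM CHOICE -> ENCODING, as three one-step procedures."""

        def __init__(self, n, m, p):
            self.gx = np.zeros((n, m))   # ILTM: input synaptic matrix
            self.gy = np.zeros((p, n))   # OLTM: output synaptic matrix
            self.free = 0                # next unused location (storage rule is assumed)

        def write(self, x, y):
            """Store the pair (x, y) in the next free location (illustrative rule)."""
            self.gx[self.free, :] = x
            self.gy[:, self.free] = y
            self.free += 1

        def read(self, x):
            s = self.gx @ np.asarray(x)                      # DECODING: similarities
            winners = np.flatnonzero(s == s.max()) if s.max() > 0 else []
            d = np.zeros(len(s))
            if len(winners):
                d[random.choice(list(winners))] = 1          # RANDOM CHOICE (WTA)
            return self.gy @ d                               # ENCODING: retrieval

    mem = LocalAssociativeMemory(n=4, m=3, p=2)
    mem.write([1, 0, 0], [0, 1])
    mem.write([0, 1, 0], [1, 0])
    print(mem.read([1, 0, 0]))   # -> [0. 1.]
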
HOW CAN WE SOLVE THE HARD
PROBLEM 2 from slide 6?
Slide 18