
Theoretical Neuroscience II: Learning, Perception and Cognition
The Synaptic Basis for Learning and Memory: a Theoretical Approach
Harel Shouval
Phone: 713-500-5708
Email: [email protected]
Course web page: http://nba.uth.tmc.edu/homepage/shouval/teaching.htm
Strong claim:
Synaptic plasticity is the only game in town.
Weak Claim:
Synaptic plasticity is a game in town.
Different examples of learning and memory:
•  Learning to see/hear etc. – unsupervised learning
•  Learning not to stick your hand in an electrical outlet – reinforcement learning
•  This class – supervised learning
•  Learning to separate different types of objects – classification
•  Remembering the face of your teacher – episodic memory
The cortex has ~10^9 neurons.
Each neuron has up to 10^4 synapses.
Central Hypothesis
Changes in synapses underlie the basis of
learning, memory and some aspects of
development.
•  What is the connection between these seemingly very different phenomena?
•  Do we have experimental evidence for this hypothesis?
A cellular correlate of learning and memory: receptive field plasticity
Classical Conditioning
Hebb's rule
[Diagram: sensory inputs (ear, nose, tongue); cell A synapses onto cell B]
"When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased." – D. O. Hebb (1949)
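Hebb's postulate is often written as a correlational weight update, Δw = η · pre · post: the weight grows only when presynaptic and postsynaptic activity coincide. A minimal sketch (the firing rates and learning rate η are illustrative values, not from the slides):

```python
# Rate-based Hebbian update: dw = eta * pre * post.
# Rates and learning rate are illustrative values.

def hebb_update(w, pre, post, eta=0.1):
    """Return the synaptic weight after one Hebbian step."""
    return w + eta * pre * post

w = 0.5
# Coincident activity in A and B strengthens the synapse...
w_both = hebb_update(w, pre=1.0, post=1.0)
# ...while presynaptic activity alone (B silent) leaves it unchanged.
w_pre_only = hebb_update(w, pre=1.0, post=0.0)
print(w_both, w_pre_only)
```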
Two examples of machine learning based on synaptic plasticity:
1. The Perceptron (Rosenblatt, 1962)
2. Associative memory
THE PERCEPTRON in 2D (classification)

Example in 2D (on board):

Actual output:
O^μ = Θ(w_1 x_1^μ + w_2 x_2^μ − w_0), where
Θ(x) = 1 if x > 0, and Θ(x) = 0 if x ≤ 0.

μ is the pattern label.

Desired output: y^μ

Learning = changing the weights (w's) to obtain y^μ = O^μ.
THE PERCEPTRON in N-D (classification)

Threshold unit: O^μ = Θ(Σ_i w_i x_i^μ − w_0), where Θ(x) = 1 if x > 0, and 0 if x ≤ 0;
O^μ is the output for input pattern x^μ, and the w_i are the synaptic weights.
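The N-dimensional threshold unit can be written directly in code; a minimal sketch (the weights, threshold, and input pattern below are illustrative):

```python
def theta(x):
    """Heaviside step from the slides: 1 if x > 0, else 0."""
    return 1 if x > 0 else 0

def perceptron_output(w, x, w0):
    """O = Theta(sum_i w_i * x_i - w0)."""
    return theta(sum(wi * xi for wi, xi in zip(w, x)) - w0)

# Illustrative 3-D pattern: weighted sum is 0.5, minus threshold 0.2 is positive
print(perceptron_output([1.0, -0.5, 2.0], [1, 1, 0], 0.2))  # -> 1
```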
[Diagram: threshold unit with weights w1 … w5]

AND

x1 | x2 | y
 1 |  1 | 1
 1 |  0 | 0
 0 |  1 | 0
 0 |  0 | 0

Implemented with weights w1 = 1, w2 = 1 and threshold 1.5, i.e. O = Θ(x1 + x2 − 1.5).

Linearly separable
OR

x1 | x2 | y
 1 |  1 | 1
 1 |  0 | 1
 0 |  1 | 1
 0 |  0 | 0

Implemented with weights w1 = 1, w2 = 1 and threshold 0.5, i.e. O = Θ(x1 + x2 − 0.5).

Linearly separable
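The two weight settings can be checked against the truth tables: with w1 = w2 = 1, a threshold of 1.5 gives AND and a threshold of 0.5 gives OR. A quick check:

```python
def theta(x):
    return 1 if x > 0 else 0

def unit(x1, x2, w0):
    # w1 = w2 = 1 as on the slides; only the threshold w0 differs
    return theta(x1 + x2 - w0)

inputs = [(1, 1), (1, 0), (0, 1), (0, 0)]
print([unit(x1, x2, 1.5) for x1, x2 in inputs])  # AND column: [1, 0, 0, 0]
print([unit(x1, x2, 0.5) for x1, x2 in inputs])  # OR column:  [1, 1, 1, 0]
```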
Perceptron learning rule:
[Diagram: threshold unit with weights w1 … w5]
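The rule referred to here is the standard error-correcting update Δw_i = η (y^μ − O^μ) x_i^μ, with the threshold updated as Δw0 = −η (y^μ − O^μ). A minimal sketch that learns the AND table from the earlier slide (the learning rate, initial weights, and epoch count are illustrative choices):

```python
def theta(x):
    return 1 if x > 0 else 0

def output(w, w0, x):
    return theta(sum(wi * xi for wi, xi in zip(w, x)) - w0)

def train_perceptron(patterns, eta=0.5, epochs=20):
    """patterns: list of (input tuple, desired output). Returns (w, w0)."""
    n = len(patterns[0][0])
    w, w0 = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in patterns:
            err = y - output(w, w0, x)              # y^mu - O^mu
            w = [wi + eta * err * xi for wi, xi in zip(w, x)]
            w0 -= eta * err
    return w, w0

# AND truth table from the slides
and_patterns = [((1, 1), 1), ((1, 0), 0), ((0, 1), 0), ((0, 0), 0)]
w, w0 = train_perceptron(and_patterns)
print([output(w, w0, x) for x, _ in and_patterns])  # -> [1, 0, 0, 0]
```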
Associative memory:
Famous images (input) → Names (desired output): Albert, Marilyn, …, Harel
1.  Feed-forward matrix networks
2.  Attractor networks
Associative memory:
Hetero-associative: A → α, B → β
Auto-associative: A → A, B → B
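Both variants can be sketched with the simple correlation-matrix (outer-product, i.e. Hebbian) storage rule W_ij = Σ_μ y_i^μ x_j^μ: hetero-association stores pairs x → y, and auto-association is the special case y = x. A minimal sketch with ±1 patterns (the "face" and "name" codes below are illustrative, not from the slides):

```python
def sign(v):
    return 1 if v >= 0 else -1

def store(pairs):
    """Correlation-matrix storage: W[i][j] = sum over patterns of y[i] * x[j]."""
    n_out, n_in = len(pairs[0][1]), len(pairs[0][0])
    W = [[0] * n_in for _ in range(n_out)]
    for x, y in pairs:
        for i in range(n_out):
            for j in range(n_in):
                W[i][j] += y[i] * x[j]
    return W

def recall(W, x):
    """Threshold the matrix-vector product to recover the stored partner."""
    return [sign(sum(wij * xj for wij, xj in zip(row, x))) for row in W]

# Hetero-associative: map a "face" code to a "name" code (illustrative +-1 patterns)
albert_face, albert_name = [1, -1, 1, -1], [1, 1, -1]
marilyn_face, marilyn_name = [-1, 1, 1, 1], [-1, 1, 1]
W = store([(albert_face, albert_name), (marilyn_face, marilyn_name)])
print(recall(W, albert_face))   # -> [1, 1, -1], i.e. albert_name
```

Recall is exact when the stored input patterns are orthogonal; here they are only near-orthogonal, so the crosstalk term is nonzero but too small to flip any sign.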
Why did I show you these examples?
These are examples in which
changes in synaptic weights are
the basis for learning (Perceptron)
and memory (Associative
memory).
Synaptic plasticity evoked artificially
Examples of Long term potentiation (LTP)
and long term depression (LTD).
LTP was first demonstrated by Bliss and Lømo in 1973. Since then it has been induced in many different ways, usually in slice.
LTD was robustly shown by Dudek and Bear in 1992, in hippocampal slice.
Artificially induced synaptic plasticity.
Presynaptic rate-based induction – Bear et al., 1994
Depolarization-based induction – Feldman, 2000
Spike-timing-dependent plasticity – Markram et al., 1997
At this level we know much about the cellular
and molecular basis of synaptic plasticity.
But how do we know that synaptic
plasticity as observed on the cellular level
has any connection to learning and memory?
What types of criteria can we use to answer this question?
Assessment criteria for the synaptic hypothesis:
(From Martin and Morris 2002)
1. DETECTABILITY: If an animal displays memory
of some previous experience (or has learnt a new
task), a change in synaptic efficacy should be
detectable somewhere in its nervous system.
2. MIMICRY: If it were possible to induce the
appropriate pattern of synaptic weight changes
artificially, the animal should display apparent
memory for some past experience which did not
in practice occur.
3. ANTEROGRADE ALTERATION: Interventions
that prevent the induction of synaptic weight
changes during a learning experience should impair
the animal's memory of that experience (or prevent
the learning).
4. RETROGRADE ALTERATION: Interventions that
alter the spatial distribution of synaptic weight
changes induced by a prior learning experience
(see detectability) should alter the animal's memory
of that experience (or alter the learning).
Detectability
Example from Rioult-Pedotti - 1998
Example: Inhibitory avoidance
•  Fast
•  Depends on Hippocampus
Whitlock et al., 2006
Occlusion of LTP in
trained hemisphere
More LTD in trained
hemisphere
(Rioult-Pedotti, 2000)
Mimicry: Generate a false memory, teach a
skill by directly altering the synaptic
connections.
This is the ultimate test, and at this point in
time it is science fiction.
ANTEROGRADE ALTERATION:
Interventions that prevent the induction of synaptic
weight changes during a learning experience
should impair the animal's memory of that
experience (or prevent the learning).
This is the most common approach. It relies
on utilizing the known properties of synaptic
plasticity as induced artificially.
Example: Spatial learning is impaired by block of
NMDA receptors (Morris, 1989)
Morris water maze
[Diagram: rat swimming toward the hidden platform]
4. RETROGRADE ALTERATION: Interventions that
alter the spatial distribution of synaptic weight changes
induced by a prior learning experience should alter the
animals memory of that experience (or alter the
learning).
Lacuna™
Receptive field plasticity is a cellular
correlate of learning.
What is a receptive field?
First described for somatosensory receptive fields (Mountcastle)
Best known example – visual receptive fields
Summary –
End of short introduction – continue if time permits