Review
1. Neuronal Dynamical Systems

We describe neuronal dynamical systems by first-order differential or difference equations that govern the time evolution of the neuronal activations or membrane potentials:

$$\dot{x} = g(F_X, F_Y, \ldots)$$
$$\dot{y} = h(F_X, F_Y, \ldots)$$
4. Additive activation models

$$\dot{x}_i = -A_i x_i + \sum_{j=1}^{p} S_j(y_j)\, n_{ji} + I_i$$
$$\dot{y}_j = -A_j y_j + \sum_{i=1}^{n} S_i(x_i)\, m_{ij} + J_j$$

Hopfield circuit:
1. Additive autoassociative model;
2. Strictly increasing bounded signal function ($S' > 0$);
3. Symmetric synaptic connection matrix ($M = M^T$).

$$C_i \dot{x}_i = -\frac{x_i}{R_i} + \sum_j S_j(x_j)\, m_{ji} + I_i$$
5. Additive bivalent models

$$x_i^{k+1} = \sum_{j=1}^{p} S_j(y_j^k)\, m_{ji} + I_i$$
$$y_j^{k+1} = \sum_{i=1}^{n} S_i(x_i^k)\, m_{ij} + J_j$$

Lyapunov Functions
If we cannot find a Lyapunov function, nothing follows; if we can find a Lyapunov function, stability holds.
A dynamical system is
• stable if $\dot{L} \le 0$;
• asymptotically stable if $\dot{L} < 0$.

Monotonicity of a Lyapunov function is a sufficient but not necessary condition for stability and asymptotic stability.
Bivalent BAM theorem. Every matrix is bidirectionally stable for synchronous or asynchronous state changes.
• Synchronous: update an entire field of neurons at a time.
• Simple asynchronous: only one neuron makes a state-change decision.
• Subset asynchronous: one subset of neurons per field makes state-change decisions at a time.
Chapter 3. Neural Dynamics II: Activation Models
The most popular method for constructing M is the bipolar Hebbian or outer-product learning method.

Binary vector associations: $(A_i, B_i)$
Bipolar vector associations: $(X_i, Y_i)$, for $i = 1, 2, \ldots, m$

$$A_i = \tfrac{1}{2}(X_i + 1)$$
$$X_i = 2A_i - 1$$
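As a quick check of these conversion formulas, here is a minimal NumPy sketch (the example vector is hypothetical):

```python
import numpy as np

# Minimal sketch of A_i = (X_i + 1)/2 and X_i = 2 A_i - 1.
A = np.array([1, 0, 1, 0, 1, 0])   # binary vector
X = 2 * A - 1                      # bipolar: [ 1 -1  1 -1  1 -1]
assert np.array_equal((X + 1) // 2, A)  # converting back recovers A
```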
The binary outer-product law:
$$M = \sum_{k=1}^{m} A_k^T B_k$$

The bipolar outer-product law:
$$M = \sum_{k=1}^{m} X_k^T Y_k$$

The Boolean outer-product law:
$$M = \bigvee_{k=1}^{m} A_k^T B_k, \qquad m_{ij} = \max(a_i^1 b_j^1, \ldots, a_i^m b_j^m)$$
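All three laws are one-liners in NumPy. The sketch below assumes hypothetical association pairs stored as matrix rows:

```python
import numpy as np

# Sketch of the three outer-product laws; the pairs are hypothetical.
A = np.array([[1, 0, 1, 0],
              [1, 1, 0, 0]])          # m = 2 binary input vectors (rows)
B = np.array([[1, 0],
              [0, 1]])                # m = 2 binary output vectors (rows)
X, Y = 2 * A - 1, 2 * B - 1           # bipolar versions

M_binary  = A.T @ B                   # sum_k A_k^T B_k
M_bipolar = X.T @ Y                   # sum_k X_k^T Y_k
M_boolean = np.max(A[:, :, None] * B[:, None, :], axis=0)  # m_ij = max_k a_i^k b_j^k
```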
The weighted outer-product law:
$$M = \sum_{k=1}^{m} w_k X_k^T Y_k$$
where $\sum_{k=1}^{m} w_k = 1$ holds.

In matrix notation:
$$M = X^T W Y$$
where
$$X^T = [X_1^T \mid \cdots \mid X_m^T]$$
$$Y^T = [Y_1^T \mid \cdots \mid Y_m^T]$$
$$W = \mathrm{Diagonal}[w_1, \ldots, w_m]$$
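A quick sketch of the matrix form with hypothetical weights and patterns; it checks that $X^T W Y$ equals the weighted sum of outer products:

```python
import numpy as np

# Sketch of the weighted law M = X^T W Y; patterns and weights are hypothetical.
X = np.array([[1, -1,  1, -1],
              [1,  1, -1, -1]])       # rows are the X_k
Y = np.array([[1, -1],
              [-1, 1]])               # rows are the Y_k
w = np.array([0.3, 0.7])              # convex weights: w_1 + w_2 = 1

M = X.T @ np.diag(w) @ Y
M_sum = sum(w[k] * np.outer(X[k], Y[k]) for k in range(len(w)))
assert np.allclose(M, M_sum)          # matrix form equals the weighted sum
```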
※3.6.1 Optimal Linear Associative Memory Matrices
Optimal linear associative memory matrices:
$$M = X^* Y$$
The pseudo-inverse matrix $X^*$ of $X$ satisfies:
$$X X^* X = X$$
$$X^* X X^* = X^*$$
$$X^* X = (X^* X)^T$$
$$X X^* = (X X^*)^T$$
Special cases of the pseudo-inverse $X^*$ of $X$:

If $x$ is a nonzero scalar: $x^* = 1/x$.

If $x$ is a nonzero vector: $x^* = \dfrac{x^T}{x x^T}$.

If $x$ is a zero scalar or zero vector: $x^* = 0$.

For a rectangular matrix $X$, if $(X X^T)^{-1}$ exists:
$$X^* = X^T (X X^T)^{-1}$$
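The four defining properties can be verified numerically. The sketch below uses NumPy's `numpy.linalg.pinv`, which computes the Moore-Penrose pseudo-inverse, on a random stand-in matrix:

```python
import numpy as np

# Sketch: check the four pseudo-inverse properties and the rectangular formula.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 5))         # rectangular, full row rank (a.s.)
Xp = np.linalg.pinv(X)                  # X*

assert np.allclose(X @ Xp @ X, X)       # X X* X = X
assert np.allclose(Xp @ X @ Xp, Xp)     # X* X X* = X*
assert np.allclose((Xp @ X).T, Xp @ X)  # X* X is symmetric
assert np.allclose((X @ Xp).T, X @ Xp)  # X X* is symmetric
assert np.allclose(Xp, X.T @ np.linalg.inv(X @ X.T))  # X* = X^T (X X^T)^(-1)
```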
Define the matrix Euclidean norm $\|M\|$ as
$$\|M\| = \sqrt{\mathrm{Trace}(M M^T)}$$
To minimize the mean-squared error of forward recall, we seek the matrix $\hat{M}$ that satisfies the relation
$$\|Y - X\hat{M}\| \le \|Y - XM\| \quad \text{for all } M$$
Suppose further that the inverse matrix $X^{-1}$ exists. Then
$$\|Y - X X^{-1} Y\| = \|Y - Y\| = 0$$
So the OLAM matrix corresponds to $\hat{M} = X^{-1} Y$.
If the set of vectors $\{X_1, \ldots, X_m\}$ is orthonormal,
$$X_i X_j^T = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \ne j \end{cases}$$
then the OLAM matrix reduces to the classical linear associative memory (LAM):
$$\hat{M} = X^T Y$$
For orthonormal $X$, the inverse of $X$ is $X^T$.
※3.6.2 Autoassociative OLAM Filtering
Autoassociative OLAM systems behave as linear filters. In the autoassociative case the OLAM matrix encodes only the known signal vectors $x_i$. Then the OLAM matrix equation (3-78) reduces to
$$M = X^* X$$
M linearly "filters" the input measurement x to the output vector $\hat{x}$ by vector-matrix multiplication: $xM = \hat{x}$.
The OLAM matrix $X^* X$ behaves as a projection operator [Sorenson, 1980]. Algebraically, this means the matrix M is idempotent: $M^2 = M$. Since matrix multiplication is associative, pseudo-inverse property (3-80) implies idempotency of the autoassociative OLAM matrix M:
$$M^2 = MM = X^* X X^* X = (X^* X X^*) X = X^* X = M$$
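A short numeric check of the idempotency claim, with random stand-in signal vectors:

```python
import numpy as np

# Sketch: the autoassociative OLAM matrix M = X* X is an idempotent projector.
rng = np.random.default_rng(1)
X = rng.standard_normal((3, 6))       # m = 3 signal vectors in R^6 (rows)
M = np.linalg.pinv(X) @ X             # M = X* X

assert np.allclose(M @ M, M)          # idempotency: M^2 = M
assert np.allclose(M, M.T)            # the projector is also symmetric
```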
Then (3-80) also implies that the additive dual matrix $I - X^* X$ behaves as a projection operator:
$$\begin{aligned}
(I - X^* X)^2 &= (I - X^* X)(I - X^* X) \\
&= I^2 - X^* X - X^* X + X^* X X^* X \\
&= I - 2 X^* X + (X^* X X^*) X \\
&= I - 2 X^* X + X^* X \\
&= I - X^* X
\end{aligned}$$
We can represent a projection matrix M as the mapping $M: R^n \to L$.
The Pythagorean theorem underlies projection operators. The known signal vectors $X_1, \ldots, X_m$ span some unique linear subspace $L(X_1, \ldots, X_m)$ of $R^n$. $L$ equals $\{\sum_i^m c_i X_i : c_i \in R\}$, the set of all linear combinations of the m known signal vectors. $L^{\perp}$ denotes the orthogonal complement space
$$\{x \in R^n : x y^T = 0 \ \text{for all } y \in L\},$$
the set of all real n-vectors x orthogonal to every n-vector y in L.
1. The operator $X^* X$ projects $R^n$ onto $L$.
2. The dual operator $I - X^* X$ projects $R^n$ onto $L^{\perp}$.

The projection operators $X^* X$ and $I - X^* X$ uniquely decompose every vector $x$ in $R^n$ into a summed signal vector $\hat{x}$ and a noise or novelty vector $\tilde{x}$:
$$x = x X^* X + x(I - X^* X) = \hat{x} + \tilde{x}$$
The unique additive decomposition $x = \hat{x} + \tilde{x}$ obeys a generalized Pythagorean theorem:
$$\|x\|^2 = \|\hat{x}\|^2 + \|\tilde{x}\|^2$$
where $\|x\|^2 = x_1^2 + \cdots + x_n^2$ defines the squared Euclidean or $l^2$ norm. Kohonen [1988] calls $I - X^* X$ the novelty filter on $R^n$.
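A short sketch of the decomposition and the norm identity, again with random stand-ins:

```python
import numpy as np

# Sketch of x = x_hat + x_tilde and the generalized Pythagorean theorem.
rng = np.random.default_rng(2)
X = rng.standard_normal((3, 6))       # stored signal vectors (rows)
P = np.linalg.pinv(X) @ X             # projector X* X onto L
x = rng.standard_normal(6)            # arbitrary input row vector

x_hat, x_tilde = x @ P, x @ (np.eye(6) - P)
assert np.allclose(x, x_hat + x_tilde)
assert np.isclose(x @ x, x_hat @ x_hat + x_tilde @ x_tilde)  # ||x||^2 = ||x_hat||^2 + ||x_tilde||^2
```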
The projection $\hat{x}$ measures what we know about the input x relative to the stored signal vectors $X_1, \ldots, X_m$:
$$\hat{x} = \sum_i^m c_i X_i$$
for some constant vector $(c_1, \ldots, c_m)$. The novelty vector $\tilde{x}$ measures what is maximally unknown or novel in the measured input signal x.
Suppose we model a random measurement vector x as a random signal vector $x_S$ corrupted by an additive, independent random-noise vector $x_N$:
$$x = x_S + x_N$$
We can estimate the unknown signal $x_S$ as the OLAM-filtered output $\hat{x} = x X^* X$.
Kohonen [1988] has shown that if the multivariable noise distribution is radially symmetric, such as a multivariable Gaussian distribution, then the OLAM capacity m and pattern dimension n scale the variance of the random-variable estimator-error norm $\|\hat{x} - x_S\|$:
$$V[\|\hat{x} - x_S\|] \approx \frac{m}{n}\|x - x_S\|^2 = \frac{m}{n}\|x_N\|^2$$
1. The autoassociative OLAM filter suppresses noise if $m < n$, when memory capacity does not exceed signal dimension.
2. The OLAM filter amplifies noise if $m > n$, when capacity exceeds dimension.
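A Monte Carlo sketch of this m/n scaling under the stated assumptions (random stand-in signal vectors, unit-variance Gaussian noise):

```python
import numpy as np

# Sketch: the filtered error energy is roughly the fraction m/n of the
# noise energy when the noise is radially symmetric.
rng = np.random.default_rng(3)
n, m, trials = 50, 5, 2000
X = rng.standard_normal((m, n))       # m stored signal vectors
P = np.linalg.pinv(X) @ X             # projector onto the signal subspace

ratios = []
for _ in range(trials):
    x_s = X[rng.integers(m)]          # a stored signal vector (x_s P = x_s)
    x_n = rng.standard_normal(n)      # radially symmetric noise
    x_hat = (x_s + x_n) @ P           # OLAM-filtered estimate of x_s
    ratios.append(np.sum((x_hat - x_s) ** 2) / np.sum(x_n ** 2))
print(np.mean(ratios), m / n)         # the two numbers should be close
```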
※3.6.3 BAM Correlation Encoding Example
The above data-dependent encoding schemes add outer-product correlation matrices. The following example illustrates a complete nonlinear feedback neural network in action, with data deliberately encoded into the system dynamics.
Suppose the data consist of two unweighted ($w_1 = w_2 = 1$) binary associations $(A_1, B_1)$ and $(A_2, B_2)$ defined by the nonorthogonal binary signal vectors:
$$A_1 = (1\ 0\ 1\ 0\ 1\ 0) \qquad B_1 = (1\ 1\ 0\ 0)$$
$$A_2 = (1\ 1\ 1\ 0\ 0\ 0) \qquad B_2 = (1\ 0\ 1\ 0)$$
These binary associations correspond to the two bipolar associations $(X_1, Y_1)$ and $(X_2, Y_2)$ defined by the bipolar signal vectors:
$$X_1 = (1\ {-1}\ 1\ {-1}\ 1\ {-1}) \qquad Y_1 = (1\ 1\ {-1}\ {-1})$$
$$X_2 = (1\ 1\ 1\ {-1}\ {-1}\ {-1}) \qquad Y_2 = (1\ {-1}\ 1\ {-1})$$
We compute the BAM memory matrix M by adding the bipolar correlation matrices $X_1^T Y_1$ and $X_2^T Y_2$ pointwise. The first correlation matrix $X_1^T Y_1$ equals
$$X_1^T Y_1 = \begin{pmatrix} 1 & 1 & -1 & -1 \\ -1 & -1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ -1 & -1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ -1 & -1 & 1 & 1 \end{pmatrix}$$
Observe that the i-th row of the correlation matrix $X_1^T Y_1$ equals the bipolar vector $Y_1$ multiplied by the i-th element of $X_1$; the j-th column behaves similarly. So $X_2^T Y_2$ equals
$$X_2^T Y_2 = \begin{pmatrix} 1 & -1 & 1 & -1 \\ 1 & -1 & 1 & -1 \\ 1 & -1 & 1 & -1 \\ -1 & 1 & -1 & 1 \\ -1 & 1 & -1 & 1 \\ -1 & 1 & -1 & 1 \end{pmatrix}$$
Adding these matrices pointwise gives M:
$$M = X_1^T Y_1 + X_2^T Y_2 = \begin{pmatrix} 2 & 0 & 0 & -2 \\ 0 & -2 & 2 & 0 \\ 2 & 0 & 0 & -2 \\ -2 & 0 & 0 & 2 \\ 0 & 2 & -2 & 0 \\ -2 & 0 & 0 & 2 \end{pmatrix}$$
Suppose, first, we use binary state vectors. All update policies are synchronous. Suppose we present binary vector $A_1$ as input to the system, as the current signal state vector at $F_X$. Then applying the threshold law (3-26) synchronously gives
$$A_1 M = (4\ \ 2\ {-2}\ {-4}) \to (1\ 1\ 0\ 0) = B_1$$
Passing $B_1$ through the backward filter $M^T$, and applying the bipolar version of the threshold law (3-27), gives back $A_1$:
$$B_1 M^T = (2\ {-2}\ 2\ {-2}\ 2\ {-2}) \to (1\ 0\ 1\ 0\ 1\ 0) = A_1$$
So $(A_1, B_1)$ is a fixed point of the BAM dynamical system. It has Lyapunov "energy" $L(A_1, B_1) = -A_1 M B_1^T = -6$, which equals the backward value $-B_1 M^T A_1^T = -6$. $(A_2, B_2)$ behaves similarly: it is a fixed point with energy $-A_2 M B_2^T = -6$.
So the two deliberately encoded fixed points reside in equally "deep" attractors.

Hamming distance H equals $l^1$ distance. $H(A_i, A_j)$ counts the number of slots in which the binary vectors $A_i$ and $A_j$ differ:
$$H(A_i, A_j) = \sum_{k=1}^{n} |a_k^i - a_k^j|$$
Consider for example the input $A = (0\ 1\ 1\ 0\ 0\ 0)$, which differs from $A_2$ by 1 bit, or $H(A, A_2) = 1$. Then
$$AM = (2\ {-2}\ 2\ {-2}) \to (1\ 0\ 1\ 0) = B_2$$
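A short NumPy sketch reproducing this whole example; the `step` helper is a zero-threshold stand-in for the threshold laws (3-26)/(3-27), and no ties occur at these states:

```python
import numpy as np

# Encode (A1, B1) and (A2, B2), then verify recall and the corrupted-input case.
A1, B1 = np.array([1, 0, 1, 0, 1, 0]), np.array([1, 1, 0, 0])
A2, B2 = np.array([1, 1, 1, 0, 0, 0]), np.array([1, 0, 1, 0])
X1, Y1, X2, Y2 = 2*A1 - 1, 2*B1 - 1, 2*A2 - 1, 2*B2 - 1

M = np.outer(X1, Y1) + np.outer(X2, Y2)
step = lambda v: (v > 0).astype(int)        # zero-threshold step

assert np.array_equal(step(A1 @ M), B1)     # forward:  A1 -> B1
assert np.array_equal(step(B1 @ M.T), A1)   # backward: B1 -> A1
assert np.array_equal(step(A2 @ M), B2)     # (A2, B2) is also a fixed point

A = np.array([0, 1, 1, 0, 0, 0])            # H(A, A2) = 1
assert np.array_equal(step(A @ M), B2)      # corrupted input still recalls B2
print(-(A1 @ M @ B1))                       # Lyapunov energy: -6
```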
※3.6.4 Memory Capacity: Dimensionality Limits Capacity
Synaptic connection matrices encode limited information. As we sum more correlation matrices, $|m_{ij}| \gg 1$ holds more frequently. After a point, adding additional associations $(A_k, B_k)$ does not significantly change the connection matrix: the system "forgets" some patterns. This limits the memory capacity.
Grossberg's sparse coding theorem [1976] says, for deterministic encoding, that pattern dimensionality must exceed pattern number to prevent learning some patterns at the expense of forgetting others.
※3.6.5 The Hopfield Model
The Hopfield model illustrates an autoassociative additive bivalent BAM operated serially with simple asynchronous state changes. Autoassociativity means the network topology reduces to only one field, $F_X$, of neurons: $F_X = F_Y$. The synaptic connection matrix M symmetrically intraconnects the n neurons in field $F_X$: $M = M^T$, or $m_{ij} = m_{ji}$.
The autoassociative version of Equation (3-24) describes the additive neuronal activation dynamics:
$$x_i^{k+1} = \sum_j S_j(x_j^k)\, m_{ji} + I_i \tag{3-87}$$
for constant input $I_i$, with threshold signal function
$$S_i(x_i^{k+1}) = \begin{cases} 1 & \text{if } x_i^{k+1} > U_i \\ S_i(x_i^k) & \text{if } x_i^{k+1} = U_i \\ 0 & \text{if } x_i^{k+1} < U_i \end{cases} \tag{3-88}$$
We precompute the Hebbian synaptic connection matrix M by summing bipolar outer-product (autocorrelation) matrices and zeroing the main diagonal:
$$M = \sum_{k=1}^{m} X_k^T X_k - mI \tag{3-89}$$
where I denotes the n-by-n identity matrix. Zeroing the main diagonal tends to improve recall accuracy by helping the system transfer function behave less like the identity operator.
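A minimal sketch of this encoding plus simple asynchronous recall, assuming bipolar states (a common equivalent convention), zero inputs, and zero thresholds; the patterns are random stand-ins:

```python
import numpy as np

# Sketch of (3-89) and simple asynchronous recall from a corrupted pattern.
rng = np.random.default_rng(4)
m, n = 3, 16
X = 2 * rng.integers(0, 2, size=(m, n)) - 1   # m bipolar patterns (rows)
M = X.T @ X - m * np.eye(n, dtype=int)        # zero-diagonal autocorrelation

x = X[0].copy()
x[:4] *= -1                                   # corrupt 4 of the 16 components
for _ in range(5):                            # a few asynchronous sweeps
    for i in rng.permutation(n):              # one state-change decision at a time
        act = x @ M[:, i]                     # sum_j S_j(x_j) m_ji
        if act != 0:                          # keep the previous state on a tie
            x[i] = 1 if act > 0 else -1
print(np.array_equal(x, X[0]))                # usually True for m << n
```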
※3.7 Additive dynamics and the noise-saturation dilemma
Grossberg's Saturation Theorem

Grossberg's saturation theorem states that additive activation models saturate for large inputs, but multiplicative models do not.
The stationary "reflectance pattern" $P = (p_1, \ldots, p_n)$ confronts the system amid the background illumination $I(t)$:
$$p_i \ge 0, \quad \text{and} \quad p_1 + \cdots + p_n = 1$$
The i-th neuron receives input $I_i$. The convex coefficient $p_i$ defines the "reflectance" of $I_i$:
$$I_i = p_i I$$
A: the passive decay rate. $[0, B]$: the activation bound.
Additive Grossberg model:
$$\dot{x}_i = -A x_i + (B - x_i) I_i = -(A + I_i) x_i + B I_i$$
We can solve this linear differential equation to yield
$$x_i(t) = x_i(0)\, e^{-(A + I_i)t} + \frac{B I_i}{A + I_i}\left[1 - e^{-(A + I_i)t}\right]$$
For initial condition $x_i(0) = 0$, as time increases the activation converges to its steady-state value:
$$x_i = \frac{B I_i}{A + I_i} \to B \quad \text{as } I \to \infty$$
So the additive model saturates.

Multiplicative activation model:
$$\dot{x}_i = -A x_i + (B - x_i) I_i - x_i \sum_{j \ne i} I_j = -\Big(A + I_i + \sum_{j \ne i} I_j\Big) x_i + B I_i = -(A + I) x_i + B I_i$$
For initial condition $x_i(0) = 0$, the solution to this differential equation becomes
$$x_i = p_i B\, \frac{I}{A + I}\left(1 - e^{-(A + I)t}\right)$$
As time increases, the neuron reaches steady state exponentially fast:
$$x_i = p_i B\, \frac{I}{A + I} \to p_i B \quad \text{as } I \to \infty \tag{3-96}$$
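A tiny numeric sketch of the two steady states as the background intensity I grows, under hypothetical constants:

```python
# Saturation theorem, numerically: A = 1, B = 1, reflectance p_i = 0.2.
A, B, p = 1.0, 1.0, 0.2
for I in [1, 10, 100, 1000]:
    additive       = B * (p * I) / (A + p * I)   # -> B:     saturates
    multiplicative = p * B * I / (A + I)         # -> p_i*B: preserves the pattern
    print(I, round(additive, 3), round(multiplicative, 3))
```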
This proves the Grossberg saturation theorem: additive models saturate, multiplicative models do not.
In general the activation variable $x_i$ can assume negative values. Then the operating range equals $[-C_i, B_i]$ for $C_i > 0$. In the neurobiological literature the lower bound $-C_i$ is usually smaller in magnitude than the upper bound $B_i$: $C_i < B_i$. This leads to the slightly more general shunting activation model:
$$\dot{x}_i = -A x_i + (B - x_i) I_i - (C + x_i) \sum_{j \ne i} I_j$$
Setting the right-hand side of the above equation to zero gives the equilibrium activation value:
$$x_i = \left[p_i - \frac{C}{B + C}\right] \frac{(B + C)\, I}{A + I}$$
which reduces to (3-96) if C = 0.
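A numeric check under hypothetical constants, assuming SciPy's `solve_ivp` is available: integrating the shunting equation should reproduce the closed-form equilibrium.

```python
from scipy.integrate import solve_ivp  # assumes SciPy is available

# Sketch: integrate the shunting model, then compare against the equilibrium.
A, B, C, p, I = 1.0, 1.0, 0.2, 0.3, 50.0
rhs = lambda t, x: -A*x + (B - x)*(p*I) - (C + x)*((1 - p)*I)
sol = solve_ivp(rhs, (0.0, 10.0), [0.0])

x_eq = (p - C / (B + C)) * (B + C) * I / (A + I)
print(sol.y[0, -1], x_eq)              # the two values agree closely
```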
※3.8 General Neuronal Activations: Cohen-Grossberg and multiplicative models
Consider the symmetric unidirectional or autoassociative case when $F_X = F_Y$, $M = M^T$, and M is constant. Then a neural network possesses Cohen-Grossberg [1983] activation dynamics if its activation equations have the form
$$\dot{x}_i = -a_i(x_i)\Big[b_i(x_i) - \sum_{j=1}^{n} S_j(x_j)\, m_{ij}\Big] \tag{3-102}$$
The nonnegative function $a_i(x_i) \ge 0$ represents an abstract amplification function.
Grossberg [1988] has also shown that (3-102) reduces to the additive brain-state-in-a-box model of Anderson [1977, 1983] and the shunting masking-field model [Cohen, 1987] upon an appropriate change of variables.
If $a_i = 1/C_i$, $b_i = x_i/R_i - I_i$, $S_i(x_i) = g_i(x_i) = V_i$, and constant $m_{ij} = m_{ji} = T_{ij} = T_{ji}$, where $C_i$ and $R_i$ are positive constants, and input $I_i$ is constant or varies slowly relative to fluctuations in $x_i$, then (3-102) reduces to the Hopfield circuit [1984]:
$$C_i \dot{x}_i = -\frac{x_i}{R_i} + \sum_j V_j T_{ij} + I_i$$
An autoassociative network has shunting or multiplicative activation dynamics when the amplification function $a_i$ is linear and $b_i$ is nonlinear.
For instance, if $a_i = x_i$, $m_{ii} = 1$ (self-excitation in lateral inhibition), and
$$b_i = -\frac{1}{x_i}\Big[-A_i x_i + (B_i - x_i)(S_i + I_i) - C_i\Big(\sum_{j \ne i} S_j m_{ij} + I_i\Big)\Big]$$
then (3-104) describes the distance-dependent ($m_{ij} = m_{ji}$) unidirectional shunting network:
$$\dot{x}_i = -A_i x_i + (B_i - x_i)\big[S_i(x_i) + I_i\big] - (C_i + x_i)\Big[\sum_{j \ne i} S_j(x_j)\, m_{ij} + I_i\Big]$$
Hodgkin-Huxley membrane equation:
$$c\,\frac{\partial V_i}{\partial t} = (V^p - V_i)\, g_i^p + (V^+ - V_i)\, g_i^+ + (V^- - V_i)\, g_i^-$$
$V^p$, $V^+$, and $V^-$ denote respectively the passive (chloride Cl$^-$), excitatory (sodium Na$^+$), and inhibitory (potassium K$^+$) saturation upper bounds.
At equilibrium, when the current equals zero, the Hodgkin-Huxley model has the resting potential $V^{\text{rest}}$:
$$V^{\text{rest}} = \frac{g^p V^p + g^+ V^+ + g^- V^-}{g^p + g^+ + g^-}$$
Neglecting the chloride-based passive terms gives the resting potential of the shunting model as
$$V^{\text{rest}} = \frac{g^+ V^+ + g^- V^-}{g^+ + g^-}$$
BAM activations also possess Cohen-Grossberg dynamics, and their extensions:
$$\dot{x}_i = -a_i(x_i)\Big[b_i(x_i) - \sum_{j}^{p} S_j(y_j)\, m_{ij}\Big]$$
$$\dot{y}_j = -a_j(y_j)\Big[b_j(y_j) - \sum_{i}^{n} S_i(x_i)\, m_{ij}\Big]$$
with corresponding Lyapunov function L, as we show in Chapter 6:
$$L = -\sum_i \sum_j S_i S_j m_{ij} + \sum_i \int_0^{x_i} S_i'(\theta_i)\, b_i(\theta_i)\, d\theta_i + \sum_j \int_0^{y_j} S_j'(\theta_j)\, b_j(\theta_j)\, d\theta_j$$
Thank you!