
Nimrod Shaham and Yoram Burak
Continuous parameter working memory in balanced
chaotic neural network
arXiv:1508.06944v3
Maurizio De Pittà
February 12, 2016
CNS Journal Club
Questions
- Can we use balanced networks with “plain” architecture and no additional mechanisms to produce persistent activity?
- How does chaotic noise arising from balanced dynamics affect maintenance of persistent activity?
Network Model
- Two mutually-inhibiting balanced E-I subnetworks;
- $N$ (binary) neurons per population; random, sparse connections with $p(j \to i) = \frac{K}{N}$ ($1 \ll K \ll N$) [van Vreeswijk and Sompolinsky, 1996];
- $J_{EE} = J_{IE} = \frac{1}{\sqrt{K}}$, $J_{EI} = -\frac{J_E}{\sqrt{K}}$, $J_{II} = -\frac{J_I}{\sqrt{K}}$; between the subnetworks, $\tilde{J} = \bar{J}\,\frac{\sqrt{K}}{N}$ (all-to-all). A construction sketch in code follows below.
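A minimal NumPy sketch of this connectivity (not from the paper or the slides; $N$, $K$, $J_E$, $J_I$, $\bar{J}$ below are arbitrary illustrative values):

```python
import numpy as np

# Illustrative sizes (not from the slides): N neurons per population, mean in-degree K.
N, K = 1000, 100
rng = np.random.default_rng(0)

# Each directed connection j -> i exists independently with probability K/N,
# so every neuron receives on average K inputs from a given population.
mask = rng.random((N, N)) < K / N
print("mean in-degree:", mask.sum(axis=1).mean())   # should be close to K

# Within-subnetwork weights scale as 1/sqrt(K); the inter-subnetwork
# inhibition is all-to-all with strength J_bar * sqrt(K) / N.
J_E, J_I, J_bar = 2.0, 1.8, 0.25                     # arbitrary values
J_EE = (1.0 / np.sqrt(K)) * mask
J_EI = -(J_E / np.sqrt(K)) * (rng.random((N, N)) < K / N)
J_tilde = -J_bar * np.sqrt(K) / N * np.ones((N, N))
```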
Network Equations
Let $u_k^i$ be the total input to neuron $i$ of population $k$, and $\sigma_k^i = \Theta(u_k^i) \in \{0,1\}$ its state. Then:
$u_1^i = \sum_{j=1}^{N} J_{11}^{ij}\,\sigma_1^j + \sum_{j=1}^{N} J_{12}^{ij}\,\sigma_2^j + \sum_{j=1}^{N} J_{14}^{ij}\,\sigma_4^j + \sqrt{K}\,E_0 - T_1$
$u_2^i = \sum_{j=1}^{N} J_{21}^{ij}\,\sigma_1^j + \sum_{j=1}^{N} J_{22}^{ij}\,\sigma_2^j - T_2$
$u_3^i = \sum_{j=1}^{N} J_{32}^{ij}\,\sigma_2^j + \sum_{j=1}^{N} J_{33}^{ij}\,\sigma_3^j + \sum_{j=1}^{N} J_{34}^{ij}\,\sigma_4^j + \sqrt{K}\,E_0 - T_1$
$u_4^i = \sum_{j=1}^{N} J_{43}^{ij}\,\sigma_3^j + \sum_{j=1}^{N} J_{44}^{ij}\,\sigma_4^j - T_2$
Network Equations
Let $u_k^i$ be the total input to neuron $i$ of population $k$, and $\sigma_k^i = \Theta(u_k^i) \in \{0,1\}$ its state. Then:
$u_1^i = \frac{1}{\sqrt{K}}\sum_{j=1}^{N}\sigma_1^j - \frac{J_E}{\sqrt{K}}\sum_{j=1}^{N}\sigma_2^j - \bar{J}\,\frac{\sqrt{K}}{N}\sum_{j=1}^{N}\sigma_4^j + \sqrt{K}\,E_0 - T_1$
$u_2^i = \frac{1}{\sqrt{K}}\sum_{j=1}^{N}\sigma_1^j - \frac{J_I}{\sqrt{K}}\sum_{j=1}^{N}\sigma_2^j - T_2$
$u_3^i = -\bar{J}\,\frac{\sqrt{K}}{N}\sum_{j=1}^{N}\sigma_2^j + \frac{1}{\sqrt{K}}\sum_{j=1}^{N}\sigma_3^j - \frac{J_E}{\sqrt{K}}\sum_{j=1}^{N}\sigma_4^j + \sqrt{K}\,E_0 - T_1$
$u_4^i = \frac{1}{\sqrt{K}}\sum_{j=1}^{N}\sigma_3^j - \frac{J_I}{\sqrt{K}}\sum_{j=1}^{N}\sigma_4^j - T_2$
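A compact simulation sketch of these equations (an illustration, not code from the paper): the four populations are stacked into one state vector, the weight blocks follow the definitions above, and the binary neurons are updated asynchronously in random order. All parameter values are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 800, 100                                               # illustrative sizes
J_E, J_I, J_bar, E0, T1, T2 = 2.0, 1.8, 0.25, 0.1, 1.0, 0.7   # illustrative parameters

def sparse(J):
    """Random sparse block: entries J/sqrt(K) with probability K/N."""
    return (J / np.sqrt(K)) * (rng.random((N, N)) < K / N)

Z = np.zeros((N, N))
A = np.full((N, N), -J_bar * np.sqrt(K) / N)   # all-to-all cross inhibition

# Populations: 1 = E1, 2 = I1, 3 = E2, 4 = I2 (rows receive, columns send).
W = np.block([
    [sparse(1.0), sparse(-J_E), Z,           A           ],
    [sparse(1.0), sparse(-J_I), Z,           Z           ],
    [Z,           A,            sparse(1.0), sparse(-J_E)],
    [Z,           Z,            sparse(1.0), sparse(-J_I)],
])
ext = np.concatenate([
    np.full(N, np.sqrt(K) * E0 - T1), np.full(N, -T2),
    np.full(N, np.sqrt(K) * E0 - T1), np.full(N, -T2),
])

sigma = (rng.random(4 * N) < 0.3).astype(float)   # random initial state
for _ in range(100 * 4 * N):                      # random asynchronous updates
    i = rng.integers(4 * N)
    sigma[i] = float(W[i] @ sigma + ext[i] > 0)   # sigma_i = Theta(u_i)

print("population rates m1..m4:", sigma.reshape(4, N).mean(axis=1))
```

The printed rates can be compared with the mean-field fixed point derived on the following slides.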
Mean field /1
[van Vreeswijk and Sompolinsky, 1998]:
- $m_k^i(t) = \langle \sigma_k^i(t) \rangle$ (average over all initial conditions consistent with $m_k(0)$, and over the random sequence of update times).
- $m_k(t) = \left[\langle \sigma_k^i(t) \rangle\right] = \frac{1}{N}\sum_{i=1}^{N} m_k^i(t)$ (population average).

$\tau_k\,\frac{dm_k^i}{dt} = -m_k^i + \langle \Theta(u_k^i) \rangle$
$\tau_k\,\frac{d\left[m_k^i\right]}{dt} = -\left[m_k^i\right] + \left[\langle \Theta(u_k^i) \rangle\right] \;\Rightarrow\; \tau_k\,\frac{dm_k}{dt} = -m_k + \left[\langle \Theta(u_k^i) \rangle\right]$

$\langle \Theta(u_k^i) \rangle$ is the probability that a cell of population $k$, when it updates, finds its input above threshold; it is determined by how many of the cells feeding into population $k$ are in the active state. Let $p_j(n)$ be the probability that $n$ cells in population $j$ are active; then

$\langle \Theta(u_k^i) \rangle = \sum_{n_1,\ldots,n_q} \prod_{j=1}^{q} p_j(n_j)\,\Theta(u_k^i)$
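To make this averaging concrete, here is a toy single-population check (not from the slides; $K$, $m$ and the threshold $T$ are arbitrary): $\langle\Theta(u)\rangle$ is written as an explicit sum over the number of active inputs, using the Poisson input statistics derived on the next slide, and compared with the Gaussian $H$-function approximation introduced two slides ahead.

```python
import numpy as np
from scipy.stats import poisson
from scipy.special import erfc

# Toy case: one excitatory input population with weight 1/sqrt(K) and a fixed
# threshold T; the number of active inputs is n ~ Poisson(m*K).
K, m, T = 400, 0.3, 5.0                 # arbitrary illustrative values
n = np.arange(0, 1200)
u = n / np.sqrt(K) - T                  # total input for a given count n
theta_avg = np.sum(poisson.pmf(n, m * K) * (u > 0))

# Large-K Gaussian approximation: u ~ N(sqrt(K)*m - T, m), so
# <Theta(u)> ~ H(-(sqrt(K)*m - T)/sqrt(m)), with H(x) = erfc(x/sqrt(2))/2.
H = lambda x: 0.5 * erfc(x / np.sqrt(2))
print(theta_avg, H(-(np.sqrt(K) * m - T) / np.sqrt(m)))
```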
Mean field /2
$p_j(n)$ is simply the probability that a cell in population $k$ receives input from $n$ active cells out of the $N$ cells in network $j$. The probability that $s_j$ synapses from population $j$ project onto a given cell is

$p(s_j) = \binom{N}{s_j}\left(\frac{K}{N}\right)^{s_j}\left(1 - \frac{K}{N}\right)^{N - s_j} \xrightarrow{\;N\to\infty\;} \frac{K^{s_j}}{s_j!}\,e^{-K}$

On average each synapse from population $j$ is active with probability $m_j$, so that:

$p_j(n) = \sum_{s_j = n}^{N} p(s_j)\,\binom{s_j}{n}\, m_j^{\,n}\,(1 - m_j)^{s_j - n} \xrightarrow{\;N\to\infty\;} e^{-K}\,\frac{(m_j K)^n}{n!}\sum_{r=0}^{\infty}\frac{\big((1 - m_j)K\big)^r}{r!} = \frac{(m_j K)^n}{n!}\,e^{-m_j K}$

$\xrightarrow{\;K\to\infty\;} \frac{1}{\sqrt{2\pi m_j K}}\,\exp\!\left(-\frac{(n - m_j K)^2}{2 m_j K}\right)$
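A quick numerical check of these limits (not part of the slides; $N$, $K$ and $m_j$ are arbitrary):

```python
import numpy as np
from scipy.stats import binom, poisson, norm

N, K, m_j = 10_000, 500, 0.3     # arbitrary illustrative values

# Exact: a presynaptic cell contributes iff it is connected (prob K/N) and
# active (prob m_j), so the number of active inputs is Binomial(N, m_j*K/N).
n = np.arange(0, 400)
p_exact = binom.pmf(n, N, m_j * K / N)

# Poisson limit (N -> infinity at fixed K): mean m_j*K.
p_poisson = poisson.pmf(n, m_j * K)

# Gaussian limit (large K): mean and variance both m_j*K.
p_gauss = norm.pdf(n, loc=m_j * K, scale=np.sqrt(m_j * K))

print("max |exact - Poisson|:", np.abs(p_exact - p_poisson).max())
print("max |Poisson - Gaussian|:", np.abs(p_poisson - p_gauss).max())
```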
Mean field /3
$u_k^i = u_k + \sqrt{\alpha_k}\,x,\qquad x \sim \mathcal{N}(0,1)$
$\Rightarrow\ \langle \Theta(u_k^i) \rangle = \int Dx\;\Theta\!\left(u_k + \sqrt{\alpha_k}\,x\right) = H\!\left(-\frac{u_k}{\sqrt{\alpha_k}}\right)$
$\Rightarrow\ \tau_k\,\frac{dm_k}{dt} = -m_k + H\!\left(-\frac{u_k}{\sqrt{\alpha_k}}\right)$

At steady state:
$\frac{dm_k}{dt} = 0 \;\Rightarrow\; u_k = -\sqrt{\alpha_k}\,H^{-1}(m_k)$
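A small sketch of the resulting dynamics (an illustration with $u_k$ and $\alpha_k$ frozen at arbitrary values): $H$ is the Gaussian tail integral, the rate equation is integrated with forward Euler, and the steady state is checked against $u_k = -\sqrt{\alpha_k}\,H^{-1}(m_k)$.

```python
import numpy as np
from scipy.special import erfc, erfcinv

def H(x):
    """Gaussian tail: H(x) = int_x^inf dz exp(-z^2/2)/sqrt(2*pi)."""
    return 0.5 * erfc(x / np.sqrt(2))

def H_inv(m):
    return np.sqrt(2) * erfcinv(2 * m)

# One population with u_k and alpha_k held fixed (arbitrary values), just to
# show the fixed point of tau_k dm_k/dt = -m_k + H(-u_k/sqrt(alpha_k)).
tau, u, alpha = 10.0, 0.4, 0.8
m, dt = 0.1, 0.1
for _ in range(10_000):
    m += (dt / tau) * (-m + H(-u / np.sqrt(alpha)))

# At steady state u_k = -sqrt(alpha_k) * H^{-1}(m_k); this should recover u:
print("m* =", m, "   -sqrt(alpha)*H_inv(m*) =", -np.sqrt(alpha) * H_inv(m))
```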
Mean field /4
Averaging ($\left[\langle\cdot\rangle\right]$) over the network equations:

$u_1 = \left[\left\langle \frac{1}{\sqrt{K}}\sum_{j=1}^{N}\sigma_1^j \right\rangle\right] - \left[\left\langle \frac{J_E}{\sqrt{K}}\sum_{j=1}^{N}\sigma_2^j \right\rangle\right] - \left[\left\langle \bar{J}\,\frac{\sqrt{K}}{N}\sum_{j=1}^{N}\sigma_4^j \right\rangle\right] + \sqrt{K}\,E_0 - T_1$
$\phantom{u_1} = K\cdot\frac{1}{\sqrt{K}}\left[\langle\sigma_1^j\rangle\right] - K\cdot\frac{J_E}{\sqrt{K}}\left[\langle\sigma_2^j\rangle\right] - N\cdot\bar{J}\,\frac{\sqrt{K}}{N}\left[\langle\sigma_4^j\rangle\right] + \sqrt{K}\,E_0 - T_1$
$\phantom{u_1} = \sqrt{K}\left(m_1 - J_E m_2 - \bar{J} m_4 + E_0\right) - T_1$

$u_2 = \sqrt{K}\left(m_1 - J_I m_2\right) - T_2$
$u_3 = \sqrt{K}\left(-\bar{J} m_2 + m_3 - J_E m_4 + E_0\right) - T_1$
$u_4 = \sqrt{K}\left(m_3 - J_I m_4\right) - T_2$
Mean field /5
$\left\langle\left(u_k^i - \left\langle u_k^i\right\rangle\right)^2\right\rangle = \left\langle\left[\sum_{l}\sum_{j}^{N}\left(J_{kl}^{ij}\sigma_l^j - \left\langle J_{kl}^{ij}\sigma_l^j\right\rangle\right)\right]^2\right\rangle = \left\langle\left[\sum_{l}\sum_{j}^{N}\left(J_{kl}^{ij}\sigma_l^j - \frac{K}{N}\frac{J_{kl}}{\sqrt{K}}\,m_l\right)\right]^2\right\rangle \approx \sum_{l} J_{kl}^2\, m_l$
$\alpha_1 = m_1 + J_E^2\, m_2 + \bar{J}^2\, m_4$
$\alpha_2 = m_1 + J_I^2\, m_2$
$\alpha_3 = \bar{J}^2\, m_2 + m_3 + J_E^2\, m_4$
$\alpha_4 = m_3 + J_I^2\, m_4$
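A sanity check on these variance expressions (an illustration; all numbers arbitrary): draw the sparse input to a single cell many times and compare the empirical variance of $u$ with $\alpha$. The sketch below does this for $\alpha_2 = m_1 + J_I^2 m_2$.

```python
import numpy as np

rng = np.random.default_rng(4)

N, K, J_I = 20_000, 200, 1.8          # arbitrary illustrative values
m1, m2 = 0.4, 0.25
trials = 20_000

# Input to one I cell: each of the N E (I) cells is connected with prob K/N and
# active with prob m1 (m2), so the active-input counts are binomial.
nE = rng.binomial(N, (K / N) * m1, size=trials)
nI = rng.binomial(N, (K / N) * m2, size=trials)
u = (nE - J_I * nI) / np.sqrt(K)

print("empirical Var(u):", u.var(), "   alpha_2 = m1 + J_I^2 m2:", m1 + J_I**2 * m2)
```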
Mean field /6
$\frac{1}{\sqrt{K}}\left[T_1 - \sqrt{\alpha_1}\,H^{-1}(m_1)\right] = m_1 - J_E m_2 - \bar{J} m_4 + E_0$
$\frac{1}{\sqrt{K}}\left[T_2 - \sqrt{\alpha_2}\,H^{-1}(m_2)\right] = m_1 - J_I m_2$
$\frac{1}{\sqrt{K}}\left[T_1 - \sqrt{\alpha_3}\,H^{-1}(m_3)\right] = -\bar{J} m_2 + m_3 - J_E m_4 + E_0$
$\frac{1}{\sqrt{K}}\left[T_2 - \sqrt{\alpha_4}\,H^{-1}(m_4)\right] = m_3 - J_I m_4$
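One possible numerical treatment of these four self-consistency equations (a sketch under assumed parameter values, not those of the paper): solve them with scipy.optimize.fsolve, clipping the rates so that $H^{-1}$ stays defined.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.special import erfcinv

def H_inv(m):
    return np.sqrt(2) * erfcinv(2 * m)

# Assumed parameters, not taken from the paper.
K, J_E, J_I, J_bar, E0, T1, T2 = 400, 2.0, 1.8, 0.25, 0.1, 1.0, 0.7

def equations(m):
    m1, m2, m3, m4 = np.clip(m, 1e-6, 1 - 1e-6)   # keep H_inv well defined
    a1 = m1 + J_E**2 * m2 + J_bar**2 * m4
    a2 = m1 + J_I**2 * m2
    a3 = J_bar**2 * m2 + m3 + J_E**2 * m4
    a4 = m3 + J_I**2 * m4
    return [
        (T1 - np.sqrt(a1) * H_inv(m1)) / np.sqrt(K) - (m1 - J_E * m2 - J_bar * m4 + E0),
        (T2 - np.sqrt(a2) * H_inv(m2)) / np.sqrt(K) - (m1 - J_I * m2),
        (T1 - np.sqrt(a3) * H_inv(m3)) / np.sqrt(K) - (-J_bar * m2 + m3 - J_E * m4 + E0),
        (T2 - np.sqrt(a4) * H_inv(m4)) / np.sqrt(K) - (m3 - J_I * m4),
    ]

m_star = fsolve(equations, x0=[0.3, 0.2, 0.3, 0.2])
print("mean-field rates m1..m4:", m_star)
```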
Continuous Attractor (K → ∞)
$0 = m_1 - J_E m_2 - \bar{J} m_4 + E_0$
$0 = m_1 - J_I m_2$
$0 = -(J_E - J_I)\,m_2 - \bar{J} m_4 + E_0$

$0 = -\bar{J} m_2 + m_3 - J_E m_4 + E_0$
$0 = m_3 - J_I m_4$
$0 = -(J_E - J_I)\,m_4 - \bar{J} m_2 + E_0$

$\Rightarrow\ \bar{J} = J_E - J_I$: singularity $\Rightarrow$ continuous attractor
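A tiny numerical illustration of the singularity condition (not from the slides; $J_E$, $J_I$, $E_0$ arbitrary): the reduced equations for $(m_2, m_4)$ have a unique solution when $\bar{J} \neq J_E - J_I$ and a whole line of solutions when $\bar{J} = J_E - J_I$.

```python
import numpy as np

J_E, J_I, E0 = 2.0, 1.8, 0.1     # arbitrary illustrative values

# Reduced K -> infinity balance conditions:
#   (J_E - J_I) m2 + J_bar m4 = E0
#   J_bar m2 + (J_E - J_I) m4 = E0
for J_bar in (0.25, J_E - J_I):
    A = np.array([[J_E - J_I, J_bar],
                  [J_bar, J_E - J_I]])
    det = np.linalg.det(A)
    if abs(det) > 1e-12:
        print(f"J_bar = {J_bar:.2f}: det = {det:+.4f}, unique solution",
              np.linalg.solve(A, [E0, E0]))
    else:
        print(f"J_bar = {J_bar:.2f}: det = {det:+.4f}, singular ->"
              f" continuous line of solutions m2 + m4 = {E0 / (J_E - J_I):.3f}")
```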
Continuous Attractor ($1 \ll K \ll N$)
[Figures: $1 \ll K \ll N$, $N \to \infty$; and $1 \ll K \ll N < \infty$]
Diffusion Approximation: Drift
$X(t) = v_0^T \cdot \left[m(t) - m_0\right] = X(0)\,e^{\lambda t}$

$X(t) = X(0) + \int_0^{t} F\!\left(X(t'),\,t'\right)dt' + \int_0^{t} G\!\left(X(t'),\,t'\right)dW(t')$

$F(X,\Delta t) = \frac{\left\langle X(t+\Delta t) - X(t) \,\middle|\, X(t) = X \right\rangle_t}{\Delta t}\ \underset{\Delta t \to 0}{\approx}\ \lambda X$
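To illustrate how such a drift term can be read off from a trajectory (a sketch; the synthetic Ornstein-Uhlenbeck path below merely stands in for the projected coordinate $X(t)$, and $\lambda$, $D$, $dt$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for X(t) = v0^T [m(t) - m0]: an OU path
# dX = lam*X*dt + sqrt(2*D)*dW with arbitrary lam, D.
lam, D, dt, steps = -0.1, 1e-4, 0.1, 200_000
X = np.zeros(steps)
for t in range(steps - 1):
    X[t + 1] = X[t] + lam * X[t] * dt + np.sqrt(2 * D * dt) * rng.standard_normal()

# Drift F(X) = <X(t+dt) - X(t) | X(t) = X>/dt; for a linear drift F(X) = lam*X
# the conditional mean is a straight line in X, so a regression of the
# increments on X recovers lam.
dX = np.diff(X)
lam_hat = np.cov(dX / dt, X[:-1])[0, 1] / np.var(X[:-1])
print(f"estimated lambda: {lam_hat:.3f}   (true value {lam})")
```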
Diffusion Approximation: Noise
$G(X,\Delta t) = \left\langle \left(X(t+\Delta t) - X(t)\right)^2 \,\middle|\, X(t) = X \right\rangle_t$

$\Delta t \ll \tau:\quad G(X,\Delta t) \approx \frac{2\,\Delta t}{N}\sum_{j=1}^{4} v_{0,j}^2\left(-\frac{\partial q_j}{\partial t}\right)$

$\tau_k\,\frac{dq_k(\theta)}{d\theta} = -q_k + \frac{1}{\tau_k}\int_0^{\infty} e^{-t/\tau_k}\,dt \int Dx\,\left[H\!\left(-\frac{u_k + \sqrt{\beta_k(t+\theta)}\,x}{\sqrt{\alpha_k - \beta_k(t+\theta)}}\right)\right]^2,\qquad \beta_k(\theta) = \sum_{l} J_{kl}^2\, q_l(\theta)$
[van Vreeswijk and Sompolinsky, 1998]

$\tau \lesssim \Delta t \lesssim \lambda^{-1}:\quad G(X,\Delta t) = 2\left(C_X(0) - C_X(\Delta t)\right),\qquad C_{kl}(\Delta t) = \frac{1}{N^2}\sum_{i,j}^{N}\left\langle \sigma_k^i(t+\Delta t)\,\sigma_l^j(t) \right\rangle_t\quad\text{(S19, S23)}$
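The corresponding noise term, estimated from the same kind of synthetic OU stand-in (again a sketch with arbitrary values): the mean squared displacement is compared with $2\left(C_X(0) - C_X(\Delta t)\right)$ and with the diffusive prediction $2D\,\Delta t$ at lags short compared with $1/|\lambda|$.

```python
import numpy as np

rng = np.random.default_rng(3)

lam, D, dt, steps = -0.01, 1e-4, 0.1, 500_000     # arbitrary values; 1/|lam| = 100
X = np.zeros(steps)
for t in range(steps - 1):
    X[t + 1] = X[t] + lam * X[t] * dt + np.sqrt(2 * D * dt) * rng.standard_normal()

def msd(x, n):
    """Mean squared displacement <(X(t + n*dt) - X(t))^2>_t."""
    return np.mean((x[n:] - x[:-n]) ** 2)

def autocov(x, n):
    x = x - x.mean()
    return np.mean(x[n:] * x[:-n]) if n else np.mean(x * x)

for n in (1, 10, 100):                            # lags well below 1/|lam|
    print(f"lag {n * dt:5.1f}:  G = {msd(X, n):.3e}"
          f"   2*(C(0)-C(lag)) = {2 * (autocov(X, 0) - autocov(X, n)):.3e}"
          f"   2*D*lag = {2 * D * n * dt:.3e}")
```

In the diffusive window, dividing $G$ by $2\,\Delta t$ yields the diffusion coefficient discussed on the next slide.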
Diffusion coefficient
OU description
References
van Vreeswijk, C. and Sompolinsky, H. (1996).
Chaos in neuronal networks with balanced excitatory and inhibitory
activity.
Science, 274(5293):1724–1726.
van Vreeswijk, C. and Sompolinsky, H. (1998).
Chaotic balanced state in a model of cortical circuits.
Neural Comput., 10(6):1321–1371.