Mean-field equations for neuronal networks
with arbitrary degree distributions
Duane Nykamp¹, Albert Compte², Alex Roxin²,³
¹ School of Mathematics, University of Minnesota, 127 Vincent Hall, Minneapolis, MN 55455
² Theoretical Neurobiology of Cortical Circuits, Institut d'Investigacions Biomèdiques August Pi i Sunyer, Carrer Rosselló 149, Barcelona 08036, Spain
³ Computational Neuroscience Group, Centre de Recerca Matemàtica, Campus de Bellaterra, Edifici C, 08193 Bellaterra, Spain
February 4, 2016
Abstract
The emergent dynamics in networks of recurrently coupled spiking neurons depends on the interplay between single-cell dynamics
and network topology. Most theoretical studies on network dynamics have assumed simple topologies, such as connections which are
made randomly with a fixed probability (Erdös-Rényi (ER) networks),
or all-to-all connected networks. However, recent findings from slice
experiments suggest that the actual patterns of connectivity between
cortical neurons are more structured than in the ER random network.
Here we explore how introducing additional higher-order statistical
structure into the connectivity can affect the dynamics in neuronal
networks. Specifically, we consider networks in which the number of
pre-synaptic and post-synaptic contacts for each neuron, the degrees,
are drawn from a joint degree distribution. We derive a mean-field
equation for a network of excitatory and inhibitory neurons with arbitrary degree distributions, and show that such networks have potentially much richer dynamics than an equivalent ER network. Finally,
we relate the degree distributions to so-called cortical motifs.
1 Introduction
The dynamics in neuronal networks is strongly influenced by the patterns
of synaptic connectivity. Networks in which connections are made randomly with a fixed probability, so-called Erdös-Rényi (ER) networks, exhibit broad distributions of firing rates by virtue of the quenched variability
in inputs coupled with the expansive nonlinearity of the neuronal transfer
function in the fluctuation-driven regime (Amit and Brunel, 1997b; Roxin
et al., 2011). Allowing for clustered structure, in which neurons are more
likely to be connected within clusters than across them, leads to multistable attractor states which may underlie working memory in cognitive
tasks (Amit and Brunel, 1997a). Analogously, networks in which the connectivity between neurons depends on the difference in their selectivity for
a continuously varying quantity, e.g. orientation or spatial angle, can exhibit bump attractors (Ben-Yishai, Lev Bar-Or, and Sompolinsky, 1995;
Compte et al., 2000). The detailed shape of such “spatially” structured
connectivity (space may refer to feature space as opposed to distance along
the cortical surface) in conjunction with synaptic delays determines the stability of such bumps and can lead to wave propagation (Roxin, Brunel, and
Hansel, 2005).
Experimental data on patterns of synaptic connectivity in cortical slices
indicates that cortical networks are not ER (Markram, 1997; Song et al.,
2005; Perin, Berger, and Markram, 2011). In particular, motifs between
neuron pairs and triplets measured in experiment deviate from the expected
values from ER networks, e.g. reciprocal connections are more frequent than
expected. Additionally, it is found that neurons which share more common neighbors are more likely to be connected, a feature which has been
attributed to the presence of clustering, e.g. (Perin, Berger, and Markram,
2011), although this remains to be shown directly.
Inspired by these findings, recent theoretical work has looked at how
changes in the frequency of second order motifs affects the synchronization
of neuronal oscillators (Zhao et al., 2011). It has also been shown that allowing for broad degree distributions (distributions of the numbers of incoming
and outgoing connections) can strongly affect dynamics in networks of asynchronous spiking neurons (Roxin, 2011). In that work, a heuristic firing rate
equation was derived which showed that the in-degree distribution, which is
related to the frequency of convergent connectivity motifs, strongly shapes
the firing rate dynamics. On the other hand, the out-degree (divergent motifs) affects subthreshold correlations. However, the firing rate equations did
not allow for correlations between in-degree and out-degree.
In this paper we derive and analyze heuristic mean-field equations for a
network of neurons with arbitrary degree distributions (DDs), including those
with correlated in-degree and out-degree. We first illustrate the derivation
with a single population of neurons, and show that increasing the covariance
between in-degree and out-degree is equivalent to increasing the strength of
synaptic weights as far as the linear stability of fixed point solutions is concerned. We then derive the mean-field equations for a network of excitatory
and inhibitory neurons (E-I network) and show that there are four relevant
macroscopic variables: the four synaptic outputs See , Sei , Sie , Sii , as opposed
to the two firing rates re , ri traditionally used in mean-field descriptions of
ER or all-to-all coupled networks. Finally, we show that the covariance between degrees is related to chain and reciprocal motifs, allowing one to use
our model to study how such motifs shape the mean-field dynamics.
2 Materials and Methods

2.1 Degree Distributions
The degree of a given node in a graph is just the number of edges connected to
it. In directed graphs such as neuronal networks there is both an in-degree,
the number of pre-synaptic inputs to a given neuron, and an out-degree,
the number of neurons which are post-synaptic to a given neuron. In this
manuscript we consider networks in which the in-degree and out-degree of
neurons are random variables drawn from a known joint-degree distribution.
Our goal is to formulate an equation governing the dynamics of the mean
firing rate in the network, based on the degree distribution of the neurons.
A neuron $j$ has an in-degree $d^{\rm in}_j$ and an out-degree $d^{\rm out}_j$, which we normalize by the mean degree in the network, $d = \frac{1}{N}\sum_j d^{\rm in}_j = \frac{1}{N}\sum_j d^{\rm out}_j$. That is, we consider the normalized in-degree $x_j = d^{\rm in}_j/d$ and out-degree $y_j = d^{\rm out}_j/d$,
which we will refer to simply as degrees from now on. For simplicity, we allow
degrees to be continuous random variables and define the degree distribution
ρ(x, y) as
$$\rho(x, y)\,dx\,dy = \Pr\left(x_j \in (x, x+dx),\; y_j \in (y, y+dy)\right). \qquad (1)$$
We assume the network is statistically homogeneous in the sense that equation (1) holds independent of neuron index j.
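As a concrete illustration of such a joint degree distribution, the sketch below samples correlated normalized degree pairs from a bivariate lognormal. This particular family is our own illustrative choice, not one prescribed by the analysis, which holds for any ρ(x, y):

```python
import numpy as np

def sample_degrees(n, rho=0.5, sigma=0.5, seed=0):
    """Draw n (in-degree, out-degree) pairs, normalized by the mean degree."""
    rng = np.random.default_rng(seed)
    cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
    deg = np.exp(z)                 # positive "raw" degrees
    deg /= deg.mean(axis=0)         # normalize each column by its mean
    return deg[:, 0], deg[:, 1]     # x_j = d_in_j / d, y_j = d_out_j / d

x, y = sample_degrees(100_000, rho=0.5)
print(x.mean(), y.mean())           # both equal 1 by construction
print(np.cov(x, y)[0, 1])           # positive covariance for rho > 0
```

By construction both normalized degrees have unit mean, and the parameter rho controls the covariance between in-degree and out-degree, which is the quantity that will matter below.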
When we consider a network of excitatory and inhibitory neurons, each
neuron will have four degrees: two in-degrees from each of the populations
of neurons (E,I) and two out-degrees to both of the populations. Specifically,
we consider a network with Ne excitatory neurons and Ni inhibitory neurons.
For the excitatory in-degree of an excitatory neuron $i$ we write
$$d^{\rm in}_{ee} = \sum_{j=1}^{N_e} w^{ee}_{ij}, \qquad (2)$$
where $w^{ee}_{ij} = 1$ if there is a synapse from excitatory neuron $j$ to excitatory
neuron i and is zero otherwise. To formulate our mean-field equations we
will again use degrees which are normalized by the mean degree. In the case
of the excitatory-to-excitatory (E-to-E) in-degree we write
$$x_{ee} = \frac{d^{\rm in}_{ee}}{\langle d^{\rm in}_{ee}\rangle}, \qquad (3)$$
where $\langle d^{\rm in}_{ee}\rangle = \frac{1}{N_e}\sum_{i=1}^{N_e} d^{\rm in}_{ee}$. The normalized E-to-E out-degree is written $y_{ee}$.

3 Results
We consider how network topology affects dynamics in large networks of
recurrently coupled neurons. Specifically, we look at a measure of network
connectivity called the degree distribution (DD), which is the distribution
of pre-synaptic and post-synaptic inputs, or in-degree and out-degree of the
cells, respectively. For simplicity we first derive a mean-field description for
a single neuronal population. We then extend this framework for a canonical
circuit which includes both excitatory and inhibitory neurons (E-I). Our main
Figure 1: The probability of a connection is determined by the out-degree of the pre-synaptic neuron and the in-degree of the post-synaptic neuron. In this illustration, the pre-synaptic out-degree is $\tilde{d}^{\rm out} = 5$ and the post-synaptic in-degree is $d^{\rm in} = 4$, as represented by the arrows. The probability of a connection from the pre-synaptic neuron to the post-synaptic neuron is the probability that one of the right arrows is the same as one of the left arrows, which is proportional to the product $d^{\rm in}\tilde{d}^{\rm out}$.
finding is that allowing for correlated DDs increases the dimensionality of the
standard Wilson-Cowan firing rate description of an E-I circuit from two to
four. This occurs because the correct mean-field model for neuronal networks
with correlated DDs is in terms of the synaptic output, of which there are
four types (EE,EI,IE,II). Finally, we express the moments of the DDs in
terms of so-called cortical motifs, which allows one to see at a glance the
effect of certain motifs on mean-field dynamics in our framework.
3.1 A mean-field description of a single neuronal population
In order to construct a mean-field description of the network, we need to know the probability that a neuron with in-degree $x$ and out-degree $y$ (we say with degree pair $(x, y)$) receives a connection from one with degree pair $(x', y')$. If all connections are made randomly and with equal probability, it turns out that the probability of connection is just proportional to the product $x y'$. This can be seen intuitively, since the likelihood of making a connection increases both with the number of out-going connections from the pre-synaptic neuron, $y'$, and the number of in-coming connections to the post-synaptic neuron, $x$. This is illustrated in Fig.1. The expression is exact when the degrees are small compared to the system size $N$.
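This connection rule is easy to check numerically. The following sketch (with assumed Gamma-distributed degrees and illustrative values of N and d, none of which come from the paper) wires a directed network with connection probability $x_i y_j d/N$ and confirms that the realized in-degrees track their targets $x_i d$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 1000, 40                          # network size and mean degree (assumed)
x = rng.gamma(4.0, 0.25, size=N); x /= x.mean()   # normalized in-degrees
y = rng.gamma(4.0, 0.25, size=N); y /= y.mean()   # normalized out-degrees

# P(j -> i) = x_i * y_j * d / N; valid while degrees are small compared to N
P = np.clip(np.outer(x, y) * d / N, 0.0, 1.0)
A = (rng.random((N, N)) < P).astype(int)  # adjacency: A[i, j] = 1 iff j -> i

realized_in = A.sum(axis=1)               # row sums are realized in-degrees
print(np.corrcoef(realized_in, x * d)[0, 1])   # close to 1: targets are matched
```

The near-unit correlation between realized and target in-degrees (up to binomial sampling noise of order √d) is the network-level counterpart of the product rule illustrated in Fig.1.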
We are now in a position to write down a self-consistent description of
the network activity in terms of firing rates. Specifically, the total input
to any given neuron is calculated by summing the activity of all neurons
pre-synaptic to it. Because the probability that any two neurons with given
degrees are connected is known, we approximate the summation over the
network by a summation over the degree distribution. This self-averaging
argument is valid if the network is large compared to the largest degree. The
firing rate of a neuron with degree pair (x, y) obeys the heuristic equation
$$\tau \dot{r}(x, y, t) = -r(x, y, t) + \Phi\left(\int\!\!\int J x \tilde{y}\, r(\tilde{x}, \tilde{y}, t)\, \rho(\tilde{x}, \tilde{y})\, d\tilde{x}\, d\tilde{y} + I\right), \qquad (4)$$
where J is a synaptic weight, I is the external current, and Φ is a sigmoidal
nonlinearity giving the firing rate of a neuron as a function of its input.
Therefore, the firing rate relaxes to its steady state value with a characteristic
time constant τ. The recurrent input (the integral term) is just the firing rate of each neuron in the network, weighted by the probability of connection,
and averaged over the degree distribution. Note that this input does not
depend on the out-degree of the post-synaptic neuron, and so we can write
$$\tau \dot{r}(x, t) = -r(x, t) + \Phi\left(J x \int\!\!\int \tilde{y}\, r(\tilde{x}, t)\, \rho(\tilde{x}, \tilde{y})\, d\tilde{x}\, d\tilde{y} + I\right). \qquad (5)$$
The effect of the out-degree, however, does not drop out of the equation in
general. Specifically, pre-synaptic firing rates are weighted by the out-degree
of the corresponding neurons, and this can play an important role in the
dynamics as we will see in later sections.
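At a steady state, Eq.5 reduces to a one-dimensional self-consistency condition for the out-degree-weighted recurrent drive, which can be solved by fixed-point iteration. The sketch below approximates the degree average by Monte-Carlo samples; the sigmoidal Φ, the weights and the degree family are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
x = rng.gamma(4.0, 0.25, size=n); x /= x.mean()   # normalized in-degrees
y = rng.gamma(4.0, 0.25, size=n); y /= y.mean()   # normalized out-degrees

J, I = 0.8, -0.5                                  # illustrative weight and input
phi = lambda u: 1.0 / (1.0 + np.exp(-4.0 * u))    # assumed sigmoidal transfer

# fixed-point iteration on the out-degree-weighted drive D = <y~ r(x~, t)>
D = 0.0
for _ in range(200):
    D = np.mean(y * phi(J * x * D + I))

r = phi(J * x * D + I)          # steady-state rate of each neuron
print(D, r.min(), r.max())      # broad spread of rates across in-degrees
```

Even though all neurons share the same transfer function, the spread of in-degrees $x$ produces a broad distribution of steady-state rates, the effect described in the Introduction.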
The firing rate equation Eq.5 is for neurons with a given in-degree. To
close the system self-consistently we must determine the firing rate for neurons of all possible in-degrees. Alternatively, we could average the equations
to find the mean firing rate in the network. Therefore we integrate each term
in Eq.5 over the distribution of degrees ρ(x, y) yielding
$$\tau \dot{R} = -R + \left\langle \Phi\left(J x \int\!\!\int \tilde{y}\, r(\tilde{x}, t)\, \rho(\tilde{x}, \tilde{y})\, d\tilde{x}\, d\tilde{y} + I\right)\right\rangle, \qquad (6)$$
where $R = \langle r(x, t)\rangle = \int r(x, t)\,\rho(x, y)\,dx\,dy$. Unfortunately, the integral term
in Eq.6 is not equal to the mean firing rate in general and so we cannot
close the equations properly. The exception to this is when in-degree and
out-degree are uncorrelated and hence the joint degree distribution ρ(x, y) =
$\rho_x(x)\rho_y(y)$. Then the recurrent input becomes
$$J x \int\!\!\int y' \rho(x', y')\, r(x', t)\, dx'\, dy' = J x \int y' \rho_y(y')\, dy' \int r(x', t)\, \rho_x(x')\, dx' = J x R(t),$$
since the normalized out-degree has unit mean.
So Eq.6 becomes
$$\tau \dot{R} = -R + \left\langle \Phi\left(J x R + I\right)\right\rangle. \qquad (7)$$
Eq.7, and variants thereof for networks with delay and for E-I networks, have been studied already in (Roxin, 2011).¹ The upshot is that changing the width of the in-degree distribution is akin to changing the gain of the fI curve Φ as far as linear stability is concerned. Therefore, networks with broad in-degree distributions can be more (or less) susceptible to instabilities, e.g. to
oscillatory states. In this paper we go beyond this simple case and consider
networks for which in-degree and out-degree may be correlated. If they are,
then Eq.7 is no longer valid.
In the general case, for correlated degrees, we cannot express the mean
firing rate self-consistently. In fact, the recurrent input in Eq.5 would be
proportional to the mean rate if not for the presence of the out-degree $\tilde{y}$.
The presence of the out-degree indicates that the relevant mean-field variable
is actually the firing rate weighted by the number of outgoing connections,
and we will call this quantity the synaptic drive S(t). If we now average
each term in Eq.5, weighted by the out-degree, we obtain a self-consistent
equation for the synaptic drive
$$\tau \dot{S} = -S + \int\!\!\int y\, \rho(x, y)\, \Phi\left(J x S(t) + I\right) dx\, dy, \qquad (8)$$
where $S(t) = \int\!\!\int y\, r(x, t)\, \rho(x, y)\, dx\, dy$. If the fI curve Φ were linear, Eq.8
could be greatly simplified by noting that
$$\int\!\!\int y\, \rho(x, y)\left(J x S(t) + I\right) dx\, dy = J \int\!\!\int x y\, \rho(x, y)\, dx\, dy\; S(t) + I = J\left(1 + \mathrm{cov}(x, y)\right) S(t) + I,$$
and so the covariance of the in-degree and out-degree modulates the effective
strength of the recurrent input. (Note that this covariance is normalized by
the square of the mean degree, $d^2$.) Specifically, positive covariance (which means that neurons which receive many inputs have many outputs) is equivalent to having stronger synapses, as far as the first-order effect on mean
¹Note that in (Roxin, 2011) the neurons were parameterized by an "effective" in-degree $k$, which can now be identified as the cumulative degree $k = F(x) = \int^x \rho_x(x')\,dx'$. Therefore $dk = \rho_x(x)\,dx$ and the pre-factor $x = F^{-1}(k)$. This function of $k$, the inverse cumulative degree distribution, was called $J(k)$ in (Roxin, 2011); see e.g. Eq.8 in that paper.
rates goes. A sufficiently strong negative covariance (those neurons with
many outputs receive but few inputs) effectively turns off the recurrent input. For nonlinear fI curves this is no longer strictly the case, however a
similar effect of the covariance on the stability of steady state solutions can
be seen. We consider the stability of a steady-state synaptic drive with the
ansatz S(t) = S0 + δSeλt , where δS ≪ 1. Plugging this into Eq.8 yields the
characteristic equation for the eigenvalue λ
$$\tau\lambda = -1 + J\,\tilde{\Phi}'_0\left(1 + \mathrm{cov}(x, y)\right), \qquad (9)$$
where
$$\tilde{\Phi}'_0 = \frac{\int\!\!\int x y\, \Phi'(J x S_0 + I)\, \rho(x, y)\, dx\, dy}{\int\!\!\int x y\, \rho(x, y)\, dx\, dy}. \qquad (10)$$
We would remind the reader that the characteristic equation for the equivalent one-population Wilson-Cowan equation would be $\tau\lambda = -1 + J\Phi'_0$. That is, an instability occurs if the product of the synaptic weight and the gain of the nonlinear transfer function at the steady state is greater than one. Eq.9 has a similar form, with two differences: (1) the relevant gain is an "effective gain" $\tilde{\Phi}'$, Eq.10; the study of how the degree distribution shapes this gain was the subject of (Roxin, 2011), and here there may be added effects due to the correlations between degrees. (2) The stability depends on the covariance of the degrees.
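The role played by the covariance in Eq.9 can be seen by integrating Eq.8 directly. In the sketch below (assumed transfer function and parameters, with the degree average again replaced by samples), the same marginal degree distributions are paired either comonotonically (large cov(x, y)) or at random (cov(x, y) ≈ 0), which isolates the effect of the covariance:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000
x = np.sort(rng.gamma(4.0, 0.25, size=n)); x /= x.mean()
y = np.sort(rng.gamma(4.0, 0.25, size=n)); y /= y.mean()

phi = lambda u: 1.0 / (1.0 + np.exp(-4.0 * u))   # assumed transfer function
J, I, tau, dt = 0.8, -0.5, 1.0, 0.05             # illustrative parameters

def steady_state(x, y, steps=2000):
    """Euler integration of tau dS/dt = -S + <y phi(J x S + I)> (Eq. 8)."""
    S = 0.0
    for _ in range(steps):
        S += dt / tau * (-S + np.mean(y * phi(J * x * S + I)))
    return S

S_pos = steady_state(x, y)                        # sorted together: cov(x,y) > 0
S_zero = steady_state(x, rng.permutation(y))      # shuffled: cov(x,y) ~ 0
print(S_pos, S_zero)       # positive covariance yields the stronger drive
```

Because Φ is increasing, positively correlated degrees weight the most strongly driven neurons by the largest out-degrees, which acts like a larger effective synaptic weight and raises the fixed point, consistent with Eq.9.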
4 Mean-field equations for an E-I network
We can extend the mean-field analysis to a network of two populations
straightforwardly. Conceptually there is nothing new, although keeping track
of indices can be burdensome. For this reason we will try and minimize notation by outlining the derivation through examples, and then stating the
final result.
With two populations each neuron has four different degrees, as illustrated
in Figure 2. Now, from Fig.2 it is clear that for each excitatory neuron we
need to keep track of not just the E-to-E in-degree and out-degree (xee , yee ),
but also the E-to-I out-degree yie and the I-to-E in-degree xei . That is, the
firing rate of an excitatory neuron is a function of these degrees and can
be written re (xee , yee , xei , yie , t). For the sake of brevity we will write this
simply as re (xe , t). Similarly, for inhibitory neurons the relevant degrees are
Figure 2: Illustration of excitatory and inhibitory degrees of excitatory and inhibitory neurons. To describe the degree of a single neuron requires four numbers: the number of connections to and from excitatory and inhibitory neurons. Left: The central excitatory neuron has three excitatory and two inhibitory incoming connections, giving it an excitatory in-degree of $d^{\rm in}_{ee} = 3$ and an inhibitory in-degree of $d^{\rm in}_{ei} = 2$. The neuron has one connection onto an excitatory neuron and three connections onto inhibitory neurons, giving it an excitatory out-degree of $d^{\rm out}_{ee} = 1$ and an inhibitory out-degree of $d^{\rm out}_{ie} = 3$. Right: The central inhibitory neuron has an excitatory in-degree of $d^{\rm in}_{ie} = 2$, an inhibitory in-degree of $d^{\rm in}_{ii} = 3$, an excitatory out-degree of $d^{\rm out}_{ei} = 3$, and an inhibitory out-degree of $d^{\rm out}_{ii} = 2$.
the I-to-I in-degree and out-degree $(x_{ii}, y_{ii})$, the I-to-E out-degree $y_{ei}$ and the E-to-I in-degree $x_{ie}$. Hence we should write $r_i(x_{ie}, y_{ei}, x_{ii}, y_{ii}, t)$ for the firing rate of an inhibitory neuron, which we shorten to $r_i(x_i, t)$.
Writing self-consistent mean-field equations requires only that we recall
that the probability of connection from a neuron j to a neuron i is proportional to the relevant out-degree of j and the relevant in-degree of i. The
equations are
$$\tau_e \dot{r}_e = -r_e + \phi_e\!\left(J_{ee} x_{ee} \int y'_{ee}\, \rho_e(x'_e)\, r_e(x'_e, t)\, dx'_e - J_{ei} x_{ei} \int y'_{ei}\, \rho_i(x'_i)\, r_i(x'_i, t)\, dx'_i + I_e\right),$$
$$\tau_i \dot{r}_i = -r_i + \phi_i\!\left(J_{ie} x_{ie} \int y'_{ie}\, \rho_e(x'_e)\, r_e(x'_e, t)\, dx'_e - J_{ii} x_{ii} \int y'_{ii}\, \rho_i(x'_i)\, r_i(x'_i, t)\, dx'_i + I_i\right), \qquad (11)$$
where $\rho_e(x_e)$ is the joint degree distribution for excitatory neurons and depends on all four degrees $x_e$. Therefore the integrals which determine the recurrent input in Eqs.11 are actually quadruple integrals, with, e.g., $dx_e = dx_{ee}\, dy_{ee}\, dx_{ei}\, dy_{ie}$. In these equations, as before, the relevant mean-field
quantities are not firing rates, but rather synaptic outputs. For example, we
can define the E-to-E synaptic drive as
$$S_{ee}(t) = \int y_{ee}\, \rho_e(x_e)\, r_e(x_e, t)\, dx_e. \qquad (12)$$
It is clear that the recurrent input in the network depends on all four possible
synaptic drives. We can write self-consistent equations for these drives by
doing the appropriate integrals. This gives
$$\tau_e \dot{S}_{ee} = -S_{ee} + \int y_{ee}\, \rho_e(x_e)\, \phi_e\!\left(J_{ee} x_{ee} S_{ee} - J_{ei} x_{ei} S_{ei} + I_e\right) dx_e, \qquad (13)$$
$$\tau_e \dot{S}_{ie} = -S_{ie} + \int y_{ie}\, \rho_e(x_e)\, \phi_e\!\left(J_{ee} x_{ee} S_{ee} - J_{ei} x_{ei} S_{ei} + I_e\right) dx_e, \qquad (14)$$
$$\tau_i \dot{S}_{ei} = -S_{ei} + \int y_{ei}\, \rho_i(x_i)\, \phi_i\!\left(J_{ie} x_{ie} S_{ie} - J_{ii} x_{ii} S_{ii} + I_i\right) dx_i, \qquad (15)$$
$$\tau_i \dot{S}_{ii} = -S_{ii} + \int y_{ii}\, \rho_i(x_i)\, \phi_i\!\left(J_{ie} x_{ie} S_{ie} - J_{ii} x_{ii} S_{ii} + I_i\right) dx_i. \qquad (16)$$
4.1 Linear Stability
We would like to know if the mean-field Eqs.13-16 exhibit richer dynamics
than the standard two-dimensional firing rate equations commonly used to
describe the activity in E-I networks. As a first step we consider the linear
stability of fixed point solutions. As in the case of a single population, seen
in the last section, linearizing about the fixed-point solution leads to terms
which are proportional to the product of an effective gain and a covariance
of degrees. The difference with the simple, one-population model is that in
the two-population model there are now eight covariance terms, see Fig.3.
Note that when we talk about the excitatory synaptic drive being different for the excitatory and inhibitory populations, we are referring to the
effect of these covariances. For example, the first column in Fig.3 shows the
covariance of the in-degree from E to E cells, and the out-degree from E to
E cells (top) and the in-degree from E to E cells, and the out-degree from
E to I cells (bottom). If these are made different, for example, by having
cov(xee , yee ) > 0 and cov(xee , yie ) < 0, then this means that those excitatory
neurons which receive the most recurrent excitatory input are the ones to
project most broadly to other excitatory cells. On the other hand, inhibitory
neurons are receiving their excitation from those excitatory neuron which receive little recurrent excitation. In other words the excitatory and inhibitory
populations are sampling the excitatory synaptic output in very different
ways. This effect is not equivalent to changes in the synaptic weights Jee
and Jie , although a similar effect could be seen if the synaptic weights were
themselves distributed and correlated.
As an example, consider the integral term in Eq.13. We replace $S_{ee} = S^0_{ee} + \delta S_{ee} e^{\lambda t}$ and $S_{ei} = S^0_{ei} + \delta S_{ei} e^{\lambda t}$ and find
$$\int y_{ee}\, \rho_e(x_e)\, \phi_e\!\left(J_{ee} x_{ee} (S^0_{ee} + \delta S_{ee} e^{\lambda t}) - J_{ei} x_{ei} (S^0_{ei} + \delta S_{ei} e^{\lambda t})\right) dx_e$$
$$= \int y_{ee}\, \rho_e(x_e)\, \phi_e\!\left(J_{ee} x_{ee} S^0_{ee} - J_{ei} x_{ei} S^0_{ei}\right) dx_e$$
$$\quad + J_{ee} \int x_{ee}\, y_{ee}\, \rho_e(x_e)\, \phi'_e(x_{ee}, x_{ei})\, dx_e\, \delta S_{ee} e^{\lambda t} - J_{ei} \int x_{ei}\, y_{ee}\, \rho_e(x_e)\, \phi'_e(x_{ee}, x_{ei})\, dx_e\, \delta S_{ei} e^{\lambda t}$$
$$= \langle \phi_e \rangle_{ee} + J_{ee} \langle \phi'_e \rangle^{ee}_{ee}\left(1 + \mathrm{cov}(x_{ee}, y_{ee})\right) \delta S_{ee} e^{\lambda t} - J_{ei} \langle \phi'_e \rangle^{ee}_{ei}\left(1 + \mathrm{cov}(x_{ei}, y_{ee})\right) \delta S_{ei} e^{\lambda t}, \qquad (17)$$
where
$$\langle \phi'_e \rangle^{ee}_{ei} = \frac{\int x_{ei}\, y_{ee}\, \rho_e(x_e)\, \phi'_e(x_{ee}, x_{ei})\, dx_e}{\int x_{ei}\, y_{ee}\, \rho_e(x_e)\, dx_e} \qquad (18)$$
is the effective excitatory gain averaged and normalized with respect to the
covariance of inhibitory in-degree and excitatory out-degree of excitatory
neurons. Here we use the subscript ei to refer to the in-degree from inhibitory
to excitatory neurons and the superscript ee to refer to the out-degree from
excitatory to excitatory neurons. For brevity in notation, we will combine
the effective gain and covariance terms into a single parameter, e.g.
$$\langle \phi'_e \rangle^{ee}_{ei}\left(1 + \mathrm{cov}(x_{ei}, y_{ee})\right) = \beta^{ee}_{ei}. \qquad (19)$$
We will consider how changes in this parameter can affect the linear stability
of fixed point solutions. In doing so, we will focus exclusively on the role of
changes in the covariance, that is we will consider the effective gain to be
fixed. This assumes, for example, that as the covariances in the degrees are
Figure 3: The eight degree covariances in a network of excitatory and inhibitory neurons. In our analysis, we include the effect of the covariance $\mathrm{cov}(x_{ab}, y_{cd})$ in the parameter $\beta^{cd}_{ab}$, see Eq.20.
varied additional parameters must be adjusted (such as the external input)
to maintain constant gain. This is a reasonable assumption given that what
is observable and hence known in cortical circuits is the pattern of activity,
e.g. mean firing rates, and not the patterns of connectivity. Therefore we
will be investigating the effect of changes to the connectivity, while holding
the level of activity, and hence gain, fixed.
Using this notation, the linear stability analysis leads to the following characteristic equation for the eigenvalue λ:
$$(\tau_e\lambda+1)(\tau_i\lambda+1)\left[(\tau_e\lambda+1-J_{ee}\beta^{ee}_{ee})(\tau_i\lambda+1+J_{ii}\beta^{ii}_{ii})+J_{ie}J_{ei}\beta^{ie}_{ei}\beta^{ei}_{ie}\right]$$
$$+\,J_{ie}J_{ei}\left[J_{ii}(\tau_e\lambda+1)\,\beta^{ie}_{ei}\left(\beta^{ei}_{ie}\beta^{ii}_{ii}-\beta^{ii}_{ie}\beta^{ei}_{ii}\right)+J_{ee}(\tau_i\lambda+1)\,\beta^{ei}_{ie}\left(\beta^{ie}_{ee}\beta^{ee}_{ei}-\beta^{ee}_{ee}\beta^{ie}_{ei}\right)\right]$$
$$+\,J_{ie}J_{ei}J_{ee}J_{ii}\left(\beta^{ee}_{ee}\beta^{ii}_{ie}\beta^{ie}_{ei}\beta^{ei}_{ii}-\beta^{ie}_{ee}\beta^{ii}_{ie}\beta^{ee}_{ei}\beta^{ei}_{ii}+\beta^{ie}_{ee}\beta^{ei}_{ie}\beta^{ee}_{ei}\beta^{ii}_{ii}-\beta^{ee}_{ee}\beta^{ei}_{ie}\beta^{ie}_{ei}\beta^{ii}_{ii}\right)=0. \qquad (20)$$
We note that we can reduce the dimensionality of the Eqs.13-16 if the
excitatory (inhibitory) synaptic drive to the excitatory and inhibitory populations is the same. This means that the top row of covariances in Fig.3
is identical to the bottom row. When this is the case, the characteristic
equation Eq.20 simplifies to
$$(\tau_e\lambda+1)(\tau_i\lambda+1)\left[(\tau_e\lambda+1-J_{ee}\beta^{ee}_{ee})(\tau_i\lambda+1+J_{ii}\beta^{ii}_{ii})+J_{ie}J_{ei}\beta^{ie}_{ei}\beta^{ei}_{ie}\right]=0, \qquad (21)$$
which is essentially just the characteristic equation for a standard Wilson-Cowan description of an E-I network. As this is simply the first term in
Eq.20 it is clear that the second and third terms reflect the effect of having
the inhibitory and excitatory populations sample the recurrent excitation and
inhibition in different ways, as quantified by different degree covariances. We
will provide an example of such an effect in the next section, without doing
an exhaustive analysis.
4.1.1 Steady Instability: λ = 0
A steady instability indicates a saddle node bifurcation and a region of bistability, which can be computationally relevant for modeling working memory
states. In Eq.21, which is valid for a standard Wilson-Cowan E-I network, a
steady instability occurs when the recurrent excitatory input is sufficiently
strong, i.e. when $J_{ee}\beta^{ee}_{ee}$ exceeds a critical value. This can occur either by
increasing the strength of the recurrent excitatory synaptic weights, or increasing the gain, for example through external inputs. Without recurrent
excitation it is not possible to have steady instabilities and hence working
memory states in standard Wilson-Cowan equations for E-I networks.
The presence of correlated DDs in E-I networks can actually allow for
working memory states even in the absence of recurrent excitation. To see
this, we set Jee = 0 in Eq.20, which (for λ = 0) gives
$$(1 + J_{ii}\beta^{ii}_{ii}) + J_{ie}J_{ei}\beta^{ie}_{ei}\beta^{ei}_{ie} + J_{ie}J_{ei}J_{ii}\,\beta^{ie}_{ei}\left(\beta^{ei}_{ie}\beta^{ii}_{ii} - \beta^{ii}_{ie}\beta^{ei}_{ii}\right) = 0. \qquad (22)$$
Clearly, for there to be a steady instability we must have $\beta^{ii}_{ie}\beta^{ei}_{ii} > \beta^{ei}_{ie}\beta^{ii}_{ii}$.
What does this condition mean? One way of meeting it is to make $\beta^{ii}_{ie} > \beta^{ei}_{ie}$, everything else being the same. This would indicate that inhibition onto inhibitory neurons is stronger than the inhibition onto excitatory neurons, again not because of different synaptic weights, but rather because E and I are sampling inhibitory outputs in very different ways. E cells would tend to receive inhibition from I cells which receive little excitation, while I cells receive inhibition from strongly excited I cells. This is illustrated in Fig.4. The other alternative is to have $\beta^{ei}_{ii} > \beta^{ii}_{ii}$, everything else being the same. This would mean that E cells tend to receive inhibition from I cells which are strongly inhibited, compared to I cells.
This scenario of working memory states through dis-inhibition of excitatory neurons has been explored before in models with two distinct inhibitory populations, e.g. (Song and Wang, 2005). Here we have a similar mechanism, which however relies on distinct ways of sampling a single, highly heterogeneous population of inhibitory neurons.
Figure 4: A situation conducive to working memory states, even in the absence of recurrent excitation. The top row illustrates that E cells tend to
receive inhibition from neurons which are weakly excited and strongly inhibited, while I cells tend to receive inhibition from neurons which are strongly
excited and weakly inhibited, see Eq.22.
Figure 5: A situation conducive to generating oscillations, even in the absence
of recurrent excitation. The covariances have precisely the opposite pattern
compared to Fig.4, see Eq.24.
4.2 Oscillatory instability: λ = iω
Again we consider an E-I network without recurrent excitation, for which
there can be no oscillatory instability in the standard Wilson-Cowan firing
rate framework. Specifically, the condition for oscillations in this case, from
Eq.21, is
$$\tau_i(1 - J_{ee}\beta^{ee}_{ee}) + \tau_e(1 + J_{ii}\beta^{ii}_{ii}) = 0, \qquad (23)$$
which cannot be met when Jee = 0.
However, if we consider Eq.20 with $J_{ee} = 0$, we find that the condition for oscillations reads
$$c = (1 + 1/\tau)\left(a^2 + (1 + \tau)a + \tau\right) + b(a + \tau), \qquad (24)$$
where
$$a = 1 + J_{ii}\beta^{ii}_{ii}, \qquad (25)$$
$$b = J_{ei}J_{ie}\beta^{ie}_{ei}\beta^{ei}_{ie}, \qquad (26)$$
$$c = J_{ei}J_{ie}J_{ii}\,\beta^{ie}_{ei}\left(\beta^{ei}_{ie}\beta^{ii}_{ii} - \beta^{ii}_{ie}\beta^{ei}_{ii}\right). \qquad (27)$$
The condition Eq.24 can only be met when $c > 0$, which is the case when inhibitory inputs to excitatory neurons are stronger than those to inhibitory neurons. This is precisely the opposite trend to that needed for working-memory-like behavior, see Fig.5.
5 Simplified mean-field model
The mean-field equations Eqs.13-16 are unwieldy due to the integration over the joint degree distributions. The integration essentially amounts to a rescaling and translation of the nonlinear transfer function φ. We can write down simplified mean-field equations, with effective nonlinear transfer functions, which have the same overall structure as Eqs.13-16 and in particular the same linear stability. These equations are
$$\tau_e \dot{S}_{ee} = -S_{ee} + \phi_{ee}\!\left(J_{ee}\alpha^{ee}_{ee}S_{ee} - J_{ei}\alpha^{ee}_{ei}S_{ei} + I_e\right), \qquad (28)$$
$$\tau_e \dot{S}_{ie} = -S_{ie} + \phi_{ie}\!\left(J_{ee}\alpha^{ie}_{ee}S_{ee} - J_{ei}\alpha^{ie}_{ei}S_{ei} + I_e\right), \qquad (29)$$
$$\tau_i \dot{S}_{ei} = -S_{ei} + \phi_{ei}\!\left(J_{ie}\alpha^{ei}_{ie}S_{ie} - J_{ii}\alpha^{ei}_{ii}S_{ii} + I_i\right), \qquad (30)$$
$$\tau_i \dot{S}_{ii} = -S_{ii} + \phi_{ii}\!\left(J_{ie}\alpha^{ii}_{ie}S_{ie} - J_{ii}\alpha^{ii}_{ii}S_{ii} + I_i\right). \qquad (31)$$
The linear stability of perturbations of fixed points in Eqs.28-31 is given by Eq.20 with $\beta^{cd}_{ab} = \phi'_{cd}\alpha^{cd}_{ab}$. That is, the parameter $\alpha^{cd}_{ab}$ plays the role of the covariance factor $1 + \mathrm{cov}(x_{ab}, y_{cd})$ between the in-degree $x_{ab}$ and the out-degree $y_{cd}$.
Fig.6 shows an example of a bifurcation diagram for Eqs.28-31 when $J_{ee} = 0$ (see the figure caption for parameter values). The black lines are for the case when all of the covariances are the same, i.e. $\alpha^{cd}_{ab} = 1$ for all combinations of $a$, $b$, $c$ and $d$. When this is the case, Eqs.28-31 actually reduce to only two equations and bistability is not possible without recurrent excitation, as seen earlier. The colored symbols are simulations performed when one of the effective covariances is increased, namely $\alpha^{ii}_{ie} = 2$. This is conducive to bistability, as illustrated in Fig.4, see the lower right diagram.
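This bistable regime can be reproduced in a few lines. The sketch below integrates Eqs.28-31 by the Euler method with the Fig.6 parameters at Ii = 1.9, using the transfer functions given in the caption of Fig.6 (the integration scheme and initial conditions are our own choices); two initial conditions, chosen near the two branches, settle onto distinct fixed points:

```python
import numpy as np

def phi_e(u):   # Fig. 6 transfer function: 0, u^2, or 2*sqrt(u - 3/4)
    return np.where(u <= 0.0, 0.0,
                    np.where(u < 1.0, u**2,
                             2.0 * np.sqrt(np.maximum(u - 0.75, 1e-12))))

def phi_i(u):   # rectified-linear transfer function of Fig. 6
    return np.maximum(u, 0.0)

Jee, Jei, Jie, Jii = 0.0, 1.0, 2.0, 2.0
Ie, Ii, a_ie_ii = 1.0, 1.9, 2.0      # alpha_ie^ii = 2, all other alpha = 1
dt, steps = 0.01, 20_000

def run(S):                          # S = [See, Sie, Sei, Sii]
    for _ in range(steps):
        See, Sie, Sei, Sii = S
        drive_e = Jee * See - Jei * Sei + Ie
        S = S + dt * np.array([
            -See + phi_e(drive_e),
            -Sie + phi_e(drive_e),
            -Sei + phi_i(Jie * Sie - Jii * Sii + Ii),
            -Sii + phi_i(Jie * a_ie_ii * Sie - Jii * Sii + Ii),
        ])
    return S

low = run(np.array([0.4, 0.4, 0.36, 1.2]))
high = run(np.array([1.0, 1.0, 0.0, 1.97]))
print(low[1], high[1])               # two distinct stable values of Sie
```

In the high-activity state the inhibitory drive onto excitatory cells, Sei, is silenced while Sii stays large, the dis-inhibition mechanism described above; with Jee = 0 and all α equal this bistability disappears.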
A sample simulation of bistable behavior is shown in Fig.7. At time
t = 200 the external input to the excitatory neurons is increased transiently,
causing an increase in excitatory output Sie (top panel). There is an accompanying drop in the inhibitory output to excitatory cells Sei , although the
synaptic inhibition onto inhibitory cells increases Sii . Finally, an increase in
the external input to inhibitory cells at t = 400 causes Sie to drop back down
to a low-activity fixed point.
Figure 6: An example of bistability without recurrent excitation. Shown is a bifurcation diagram of stable fixed point solutions to Eqs.28-31 with all α = 1 and $I_e = 1$, $\tau_e = \tau_i = 1$, $J_{ee} = 0$, $J_{ei} = 1$, $J_{ie} = J_{ii} = 2$; $\phi_{ee}(x) = \phi_{ie}(x) = 0$, $x^2$ or $2\sqrt{x - 3/4}$ for $x \le 0$, $0 < x < 1$ and $x \ge 1$ respectively; $\phi_{ei}(x) = \phi_{ii}(x) = 0$ if $x < 0$ and $x$ otherwise. Top: Black lines, $S_{ee}$ and $S_{ie}$ for $\alpha^{ii}_{ie} = 1$. Red symbols, $S_{ee}$ and $S_{ie}$ for $\alpha^{ii}_{ie} = 2$. Bottom: Black lines, $S_{ei}$ and $S_{ii}$ for $\alpha^{ii}_{ie} = 1$. Red (green) symbols: $S_{ei}$ ($S_{ii}$) for $\alpha^{ii}_{ie} = 2$. The dotted vertical line indicates the value of $I_i$ for which the simulations in Fig.7 were done. The bistable region is shaded.
Figure 7: A sample simulation in the bistable regime. The value of Ii = 1.9
(see vertical dotted line in Fig.6).
Figure 8: The bifurcation diagram for $S_{ee}$, $S_{ie}$ (which are the same here) showing oscillations for the case when $J_{ee} = 0$ and the covariances follow the pattern shown in Fig.5. Parameters are $J_{ie} = J_{ei} = 2$, $J_{ii} = 1$, $\tau_e = \tau_i = 1$, $I_e = 40$, all α = 10 except for $\alpha^{ii}_{ie} = \alpha^{ei}_{ii} = 0.5$. Transfer functions are as in Fig.6.
Fig.8 shows the presence of oscillations when the degree covariances are
similar to those shown in Fig.5. A sample trace of oscillatory activity can be
seen in Fig.9.
6 Degree covariances are proportional to the number of chains in the network
We have seen that the dynamics in E-I networks can be strongly shaped by
the presence of correlations between DDs. What do these correlations mean
in terms of simple connectivity motifs, which can be determined straightforwardly in slice experiments? In fact, positive covariances indicate the
presence of added chain motifs, above and beyond what is expected in an
Erdös-Rényi network. This is illustrated in Fig.10 which shows a chain motif
from neuron i to k through j. We can calculate the probability of finding
this particular motif based on the degrees of the neurons. In particular, the
Figure 9: A sample trace of oscillatory activity given the parameters in Fig.8
with Ii = −10. Here Sii = 0.
Figure 10: The probability of a chain motif depends on the degrees. In this
example there is a chain from excitatory neuron i to j to k. This particular
motif depends on the four degrees d_i^out, d_j^in, d_j^out and d_k^in.
probability that cell i connects to cell j is just proportional to the product
of the degrees d_i^out d_j^in, while the probability that j connects to k is
proportional to d_j^out d_k^in. The probability of this chain motif in a
network of N ≫ 1 neurons is then

p_chain(i → j → k) = d_i^out d_j^in d_j^out d_k^in / (d² N²),        (32)
where d is the mean degree. The expected value of this probability in the
network is
E_chain = E(d_i^out d_j^in d_j^out d_k^in)_ijk / (d² N²)
        = E(d_i^out)_i E(d_j^in d_j^out)_j E(d_k^in)_k / (d² N²)
        = E(d^in d^out) / N²
        = (d² + cov(d^in, d^out)) / N²
        = p² (1 + cov(x, y)),        (33)
where p = d/N and (x, y) are the degrees normalized by the mean degree
d. When the covariance is zero, the probability of a chain motif is just p²;
increasing (decreasing) the covariance increases (decreases) the probability
of chains. Therefore we can study the effect of different chain motifs on the
mean-field by directly varying the covariances.
In an E-I network, the number of chains from cell type A → B → C where
A, B, C ∈ {E, I} is varied by changing the covariance cov(xBA , yCB ).
Finally, we remark that reciprocal motifs are also a type of chain, one which
goes from i → j → i. A calculation analogous to Eq.33 shows that the
probability of reciprocal connections also depends on the covariance, as
E_recip = p² (1 + cov(x, y))².
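These scalings can be checked numerically. The sketch below is illustrative only: the lognormal degree construction and all parameter values are our own assumptions, not taken from the paper. It draws positively correlated in- and out-degrees, wires a network with connection probability proportional to d_j^out d_i^in, and compares the empirical frequencies of chain and reciprocal motifs with p²(1 + cov(x, y)) and p²(1 + cov(x, y))².

```python
import numpy as np

rng = np.random.default_rng(0)
N, dbar, sigma, rho = 2000, 30.0, 0.5, 0.8

# Correlated, positive (in-degree, out-degree) pairs; the lognormal
# construction keeps the means close to dbar while cov(x, y) > 0.
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=N)
deg = dbar * np.exp(sigma * z - 0.5 * sigma**2)
d_in, d_out = deg[:, 0], deg[:, 1]

# Wire the network: P(j -> i) proportional to d_out_j * d_in_i.
d_mean = np.sqrt(d_in.mean() * d_out.mean())
P = np.outer(d_out, d_in) / (d_mean * N)    # P[j, i]
np.fill_diagonal(P, 0.0)                    # no self-loops
A = (rng.random((N, N)) < P).astype(float)  # A[j, i] = 1 if j -> i

# Predictions: Eq. 33 for chains and its reciprocal-motif analogue.
x, y = d_in / d_in.mean(), d_out / d_out.mean()
c = np.mean(x * y) - np.mean(x) * np.mean(y)
p = d_mean / N
chain_pred = p**2 * (1.0 + c)
recip_pred = p**2 * (1.0 + c)**2

# Empirical motif frequencies: A @ A counts ordered 2-paths i -> j -> k;
# A * A.T counts ordered reciprocally connected pairs.
chain_emp = (A @ A).sum() / N**3
recip_emp = (A * A.T).sum() / N**2

print(chain_emp / chain_pred, recip_emp / recip_pred)  # both near 1
```

With zero covariance the same script recovers the Erdös-Rényi values p² and p², so the excess motif frequency is entirely controlled by cov(x, y).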
7 Discussion
In this paper we have derived mean-field equations for a network of neurons with a given degree distribution. To do so we only needed to know
the probability of connection between any two cells given their degrees. This
probability takes on a simple form, namely it is proportional to the product of
the out-degree of the pre-synaptic cell and the in-degree of the post-synaptic
cell. The form of this connection probability has a simple intuitive explanation: in a network where connections are made randomly and with equal
probability, the probability that a single given out-going synapse from a cell
j is made onto a cell i must be proportional to d_i^in, the in-degree of cell
i. If there are d_j^out out-going synapses from cell j, then the total
probability of a connection from j to i is clearly just proportional to
d_i^in d_j^out. This is a good
approximation as long as the number of neurons is large compared to the
largest degree in the network.
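This connection rule is easy to exercise numerically. In the sketch below (the uniform target-degree range is an assumption for illustration, not from the paper), wiring with P(j → i) = d_j^out d_i^in / (d N) reproduces the prescribed degrees on average:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2000
d_in = rng.uniform(20.0, 80.0, size=N)    # target in-degrees (assumed range)
d_out = rng.uniform(20.0, 80.0, size=N)   # target out-degrees (assumed range)
dbar = np.sqrt(d_in.mean() * d_out.mean())

# P(j -> i) = d_out_j * d_in_i / (dbar * N); valid while max degree << N.
P = np.outer(d_out, d_in) / (dbar * N)
A = rng.random((N, N)) < P                # A[j, i] = 1 if j connects to i

# Realized degrees track the prescribed targets neuron by neuron.
r_in = np.corrcoef(A.sum(axis=0), d_in)[0, 1]
r_out = np.corrcoef(A.sum(axis=1), d_out)[0, 1]
print(r_in, r_out)  # both close to 1
```

When the largest degree becomes comparable to N, the product rule can push P above 1 and the approximation breaks down, which is the caveat stated in the text.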
Interestingly, we have found that the resulting relevant mean-field variables are not the firing rates of the neurons, but rather their synaptic output.
In a canonical E-I network, with two populations of neurons, this means that
the proper mean-field equations are four dimensional, not two. Namely, the
mean-field variables are what we call synaptic drives See , Sie , Sei and Sii .
This indicates that given certain types of degree distributions, the excitatory
(or inhibitory) synaptic output to excitatory neurons can be radically different from the output to inhibitory neurons. Note that this is not equivalent to
changing synaptic weights. For example, in a classical E-I firing rate model,
when the synaptic weights Jii and Jei differ, I cells and E cells receive
different magnitudes of synaptic inhibition. However, the
time course of the synaptic input is identical for both populations. In the
case of our mean-field equations Eqs.13-16, the synaptic outputs Sii and Sei
are dynamically independent variables. How is this possible? It is possible if
the population of I cells is highly heterogeneous. When this is the case, then
if E cells and I cells receive their input predominantly from different “subclasses” of I cells, these inputs can be dynamically independent. Of course,
there are no sub-classes in our model, only a continuum of degrees. However,
with sufficiently broad degrees there will be, for example, I cells which receive
very few E inputs and I cells which receive many E inputs. Now, if E (I)
cells receive their inhibition predominantly from those I cells which receive
few (many) E inputs, they will sample the activity of very different cells.
This scenario can be summarized succinctly by stating that the covariance
cov(xie , yei ) is small and cov(xie , yii ) is large, which is in fact conducive to
working memory-like bistability in E-I circuits, see the right-hand column of
Fig.4.
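The decoupling of Sei and Sii can be illustrated with a toy four-variable system; this is not the paper's Eqs.13-16 (which are not reproduced here), only a minimal sketch of the same flavor. The transfer functions and the J, τ, I values follow the Fig.6 caption, while the gain factors a_ei and a_ii are invented stand-ins for the covariance-dependent terms:

```python
import numpy as np

# Transfer functions from the Fig. 6 caption.
def phi_e(x):  # drives S_ee, S_ie
    return np.where(x <= 0, 0.0,
                    np.where(x < 1, x**2,
                             2.0 * np.sqrt(np.maximum(x - 0.75, 0.0))))

def phi_i(x):  # drives S_ei, S_ii (threshold-linear)
    return np.maximum(x, 0.0)

def run(a_ei, a_ii, T=200.0, dt=0.01):
    # Parameters as in Fig. 6: Jee=0, Jei=1, Jie=Jii=2, tau=1, Ie=1, Ii=1.9.
    Jee, Jei, Jie, Jii, Ie, Ii = 0.0, 1.0, 2.0, 2.0, 1.0, 1.9
    See = Sie = Sei = Sii = 0.0
    for _ in range(int(T / dt)):       # forward Euler integration
        he = Jee * See - Jei * Sei + Ie    # input driving the E population
        ui = Jie * Sie - Jii * Sii + Ii    # input driving the I population
        See += dt * (-See + phi_e(he))
        Sie += dt * (-Sie + phi_e(he))
        Sei += dt * (-Sei + phi_i(a_ei * ui))  # a_ei, a_ii: toy gains, NOT the
        Sii += dt * (-Sii + phi_i(a_ii * ui))  # paper's covariance terms
    return See, Sie, Sei, Sii

# Unequal gains: the I-cell output onto E decouples from that onto I.
_, _, Sei1, Sii1 = run(a_ei=1.0, a_ii=1.5)
# Equal gains (the Erdos-Renyi-like case): Sei and Sii coincide.
_, _, Sei2, Sii2 = run(a_ei=1.0, a_ii=1.0)
print(Sei1, Sii1, Sei2, Sii2)
```

With unequal gains the two inhibitory drives settle at different fixed-point values, mimicking the heterogeneous-sampling scenario described above; with equal gains they remain identical and a two-variable description suffices.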
In fact, in our network any such “sampling-of-different-sub-classes” argument relies on there being a significant difference in covariances related
to differential inputs between E and I cells. In a classic E-I Erdös-Rényi
random network all covariances are zero and a description of the mean-field activity in terms of firing rates alone is possible. Similarly, even in
a network in which DDs are broadly distributed, if degree covariances are
zero then a mean-field description in terms of firing rates is self-consistent
(Roxin, 2011). Actually, even if there are significant covariances between
degrees, Eqs.13-16 indicate that the mean-field dynamics will be two dimensional as long as cov(xee , yee ) = cov(xee , yie ), cov(xei , yee ) = cov(xei , yie ),
cov(xie , yei ) = cov(xie , yii ) and cov(xii , yei ) = cov(xii , yii ).
We showed that the degree covariances can be related to chain and reciprocal motifs in the network. For example, increasing the covariance between
the excitatory in-degree and out-degree of E cells increases the number of
E → E → E chains and reciprocally connected pairs of E cells. This is a particularly relevant example since the fraction of reciprocally connected pairs
of E cells is a statistic which has been measured several times over the past
few decades in cortical slice experiments (Markram, 1997; Song et al., 2005;
Perin, Berger, and Markram, 2011). It is robustly found that the fraction of
reciprocal motifs is somewhere between 2 and 4 times what would be expected
from an ER network. From our work here we can immediately conclude that
one way to achieve this is to allow for correlations between the excitatory
in-degree and out-degree. Furthermore, the effect of this correlation on the
mean-field is equivalent to increasing recurrent excitatory synaptic weights
according to Eq.9.
The mean-field model Eqs.13-16 is heuristic. It is meant to describe the
collective dynamics of large networks of E and I spiking neurons, but it is derived by assuming that a description of the dynamics in terms of mean firing
rates (or synaptic outputs) is the relevant one. It may be that networks with
broad, correlated degree distributions exhibit other modes of activity which
cannot be captured in this framework, such as highly synchronous states. Indeed, it was shown in (Roxin, 2011) that the out-degree of neurons strongly
influences pairwise cross-correlations in synaptic inputs to neurons. It may
be that this effect is enhanced if the out-degree is positively correlated to the
in-degree. Studying this or similar phenomena would require direct numerical
simulation of networks of spiking neurons or another theoretical framework
which can account for spike synchrony. Barring such effects, Eqs.13-16 can
be used to study the effect of DDs on network dynamics with relatively little
numerical or analytical effort.
Acknowledgments
A.R. acknowledges a Ramón y Cajal grant from the Spanish government and
a project grant from the Spanish Ministry of Economy and Competitiveness
(BFU2012-33413).
References
Amit, D. J. and N. Brunel (1997a). Dynamics of a recurrent network of
spiking neurons before and following learning. Network 8: 373–404.
Amit, D. J. and N. Brunel (1997b). Model of global spontaneous activity
and local structured activity during delay periods in the cerebral cortex.
Cerebral Cortex 7: 237–252.
Ben-Yishai, R., R. Lev Bar-Or, and H. Sompolinsky (1995). Theory of orientation tuning in visual cortex. Proc. Natl. Acad. Sci. USA 92: 3844–3848.
Compte, A., N. Brunel, P. S. Goldman-Rakic, and X.-J. Wang (2000).
Synaptic mechanisms and network dynamics underlying spatial working
memory in a cortical network model. Cerebral Cortex 10: 910–923.
Markram, H. (1997). A network of tufted layer 5 pyramidal neurons. Cereb.
Cortex 7: 523–533.
Perin, R., T. K. Berger, and H. Markram (2011). A synaptic organizing
principle for cortical neuronal groups. Proc. Natl. Acad. Sci. USA 108: 5419–5424.
Roxin, A. (2011). The role of degree distribution in shaping the dynamics in
networks of sparsely connected spiking neurons. Front. Comput. Neurosci. 5: 8.
Roxin, A., N. Brunel, and D. Hansel (2005). Role of delays in shaping
spatiotemporal dynamics of neuronal activity in large networks. Phys. Rev.
Lett. 94: 238103.
Roxin, A., N. Brunel, D. Hansel, G. Mongillo, and C. van Vreeswijk (2011).
On the distribution of firing rates in networks of cortical neurons. J. Neurosci. 31: 16217–16226.
Song, P. and X. J. Wang (2005). Angular path integration by moving hill of
activity: a spiking neuron model without recurrent excitation of the
head-direction system. J. Neurosci. 25: 1002–1014.
Song, S., P. J. Sjöström, M. Reigl, S. Nelson, and D. B. Chklovskii (2005).
Highly nonrandom features of synaptic connectivity in local cortical circuits.
PLoS Biology 3: 507–519.
Zhao, L., B. Beverlin, T. Netoff, and D. Q. Nykamp (2011). Synchronization from second order network connectivity statistics. Front. Comput.
Neurosci. 5: 28.