Kinetic Theory for Neuronal Networks with Fast and Slow Excitatory Conductances Driven by the Same Spike Train, A.V. Rangan, G. Kovacic, and D. Cai, Phys. Rev. E 77, 041915 (2008)

PHYSICAL REVIEW E 77, 041915 (2008)
Kinetic theory for neuronal networks with fast and slow excitatory conductances driven
by the same spike train
Aaditya V. Rangan,1 Gregor Kovačič,2 and David Cai1
1 Courant Institute of Mathematical Sciences, New York University, 251 Mercer Street, New York, New York 10012-1185, USA
2 Mathematical Sciences Department, Rensselaer Polytechnic Institute, 110 8th Street, Troy, New York 12180, USA
(Received 23 August 2007; revised manuscript received 29 December 2007; published 18 April 2008)
We present a kinetic theory for all-to-all coupled networks of identical, linear, integrate-and-fire, excitatory point neurons in which a fast and a slow excitatory conductance are driven by the same spike train in the presence of synaptic failure. The maximal-entropy principle guides us in deriving a set of three (1+1)-dimensional kinetic moment equations from a Boltzmann-like equation describing the evolution of the one-neuron probability density function. We explain the emergence of correlation terms in the kinetic moment and Boltzmann-like equations as a consequence of simultaneous activation of both the fast and slow excitatory conductances and furnish numerical evidence for their importance in correctly describing the coarse-grained dynamics of the underlying neuronal network.
DOI: 10.1103/PhysRevE.77.041915
PACS number(s): 87.19.L−, 05.20.Dd, 84.35.+i
©2008 The American Physical Society
I. INTRODUCTION
Attempts to understand the enormous complexity of neuronal processing that takes place in the mammalian brain, supported by the ever-increasing computational power used in the modeling of the brain, have given rise to greatly increased sophistication in mathematical and computational modeling of realistic neuronal networks [1-7]. Striking manifestations of spatiotemporal neuronal dynamics, such as patterns of spontaneous activity in the primary visual cortex (V1) [8,9] and motion illusions [10], which take place on length scales of several millimeters and involve millions of neurons, can now be computed using large-scale, coupled, point-neuron models [11,12]. The ability to describe still more complicated neuronal interactions in yet larger portions of the brain, such as among multiple areas or layers of the visual cortex, may be significantly enhanced by appropriate coarse graining.
Coarse-grained neuronal network models can describe network firing rates using the average membrane potential alone [13-25], or they can also take into account its fluctuations [26-29]. The latter models are applicable to networks in which neuronal firing occurs solely due to membrane-potential fluctuations while the average membrane potential stays below the firing threshold [30-36], as well as to those that operate in the mean-driven regime, in which the slaving potential that drives the neurons to fire is consistently above the firing threshold. A particularly fruitful use of coarse-grained models is in combination with point-neuron models, forming so-called embedded networks [37]. Such models are especially appropriate for describing neuronal dynamics in the brain because of the many regular feature maps in the laminar structure of the cerebral cortex. The primary visual cortex alone has a number of neuronal preference maps, such as for orientation or spatial frequency, laid out in regular or irregular patterns [38-44]. Certain subpopulations which contain sufficiently many neurons, but are small enough that their response properties can be treated as constant across the subpopulation, may be replaced by coarse-grained patches, while other subpopulations may be represented by point neurons embedded in this coarse-grained background.
In the simplest case of all-to-all coupled neuronal networks, the initial step is to use a nonequilibrium statistical physics framework and develop a Boltzmann-type kinetic equation for the network probability density function of the voltage and various conductances [26-29,45-57]. Since this is a partial differential equation in three or more dimensions, it is advantageous to further project the dynamics to one dimension and obtain a reduced kinetic theory in terms of the voltage alone. For purely excitatory networks, as well as networks with both excitatory and inhibitory neurons and both simple and complex cells, such a theory was developed in [28,29]. It achieves dimension reduction by means of a novel moment closure, which was shown to follow from the maximum-entropy principle [58]. Efficient numerical methods for solving the resulting set of kinetic equations were developed in [59].
The predominant excitatory neurotransmitter in the central nervous systems of vertebrates is glutamate, which binds to a number of different types of postsynaptic receptors [60]. Two main classes of glutamate-binding receptors are the AMPA and NMDA receptors. Activation of the AMPA receptors gives rise to fast postsynaptic conductance dynamics with decay times of about 3-8 ms, while activation of the NMDA receptors gives rise to slow conductance dynamics with decay times of about 60-200 ms [61-64]. AMPA and NMDA receptors are frequently colocalized, such as in the rat visual cortex [65] and hippocampus [63,66] or in the superficial layers of cat V1 [67], and may thus respond to the same spikes. This colocalization of receptors with different time constants motivates the present study. We will examine its theoretical description in terms of a kinetic theory and investigate its dynamical consequences.
From the viewpoint of kinetic theory, an important aspect of neuronal networks with conductances activated by fast- and slow-acting receptors, colocalized on the same synapses, is the nature of the statistical correlations arising due to the two types of conductances being driven by (parts of) the same network spike train. Therefore, in this paper, we develop a kinetic theory for purely excitatory neuronal networks of this type. For simplicity, we derive this kinetic theory for a linear, conductance-driven, all-to-all-coupled, integrate-and-fire (IF) network of identical excitatory point neurons that incorporates synaptic failure [68-73]. Once we find an appropriate set of conductance variables, which are driven by disjoint spike trains, we can derive the kinetic equation as in [29] and use the maximum-entropy principle of [58] to suggest the appropriate closure that will achieve a reduction of independent variables to time and voltage alone. However, this closure is more general than that of [58] insofar as it involves using the dynamics of the conductance (co)variances computed from a set of ordinary differential equations. Adding inhibitory neurons and space dependence of the neuronal connections to the kinetic description developed in this paper is a straightforward application of the results in [28,29,58].
The paper is organized as follows. In Sec. II, we describe the equations for a linear IF excitatory neuronal network with a fast and a slow conductance. In Sec. III, we introduce a set of conductance-variable changes such that the resulting conductances are driven by disjoint spike trains. In Sec. IV, we present the Boltzmann-like kinetic equation that describes statistically the dynamics of the membrane potential and these new conductance variables, develop a diffusion approximation for it, and recast it in terms of the voltage and the fast and slow conductances alone. In Sec. V, we impose the boundary conditions for the kinetic equation obtained in Sec. IV in terms of the voltage and conductance variables and describe how the resulting problem becomes nonlinear due to the simultaneous presence of the firing rate as a self-consistency parameter in the equation and the boundary conditions. In Sec. VI, we find an equivalent statistical description of the network dynamics in terms of an infinite hierarchy of equations for the conditional conductance moments, in which the independent variables are the time and membrane potential alone. In Sec. VII, we describe the maximum-entropy principle which we use to guide us in discovering an appropriate closure for the infinite hierarchy of equations from Sec. VI. In Sec. VIII, we discuss the dynamics of the conductance moments and (co)variances used in the closure and also their role in determining the validity of the conductance boundary conditions imposed in Sec. V. In Sec. IX, we postulate the closure and derive three closed kinetic equations for the voltage probability density and the first conditional conductance moments as functions of time and membrane potential. We also derive the boundary conditions for these equations. In Sec. X, we consider the limit in which the decay rate of the fast conductance becomes infinitely fast and the additional limit in which the decay rate of the slow conductance becomes infinitely slow. In Sec. XI, we present conclusions and discuss the agreement of our kinetic theory with direct, full numerical simulations of the corresponding IF model.
II. INTEGRATE-AND-FIRE NEURONAL NETWORK WITH
FAST AND SLOW EXCITATORY CONDUCTANCES
The kinetic theory we develop in this paper describes statistically the dynamics exhibited by linear IF networks composed of excitatory point neurons with two types of postsynaptic conductances: one mediated by a fast and another by a slow receptor channel. In a network of N identical, excitatory point neurons, the membrane potential of the ith neuron is governed by the equation

$$\frac{d}{dt}V_i = -\frac{V_i - \varepsilon_r}{\tau} - G_i(t)\,\frac{V_i - \varepsilon_E}{\tau}, \qquad (1)$$

where τ is the leakage time constant, ε_r is the reset potential, ε_E is the excitatory reversal potential, and G_i(t) is the neuron's total conductance. The voltage V_i evolves according to Eq. (1) as long as it stays below the firing threshold V_T. When V_i reaches V_T, the neuron fires a spike and V_i is reset to the value ε_r. In the absence of a refractory period, the dynamics of V_i then immediately becomes governed by Eq. (1) again.
The total conductance G_i(t) of the ith neuron in the network (1) is expressed as the sum

$$G_i(t) = \lambda G^A_i(t) + (1-\lambda) G^N_i(t) \qquad (2)$$
$$\phantom{G_i(t)} = \lambda G^f_i(t) + \lambda G^{(1)}_i(t) + (1-\lambda) G^{(2)}_i(t), \qquad (2a)$$

in which

$$G^A_i(t) = G^f_i(t) + G^{(1)}_i(t) \quad\text{and}\quad G^N_i(t) = G^{(2)}_i(t) \qquad (2b)$$

are the fast and slow conductances, respectively. The conductances G^(1)_i(t) and G^(2)_i(t) arise from the spikes mediated by the synaptic coupling among pairs of neurons within the network, while G^f_i(t) is the conductance driven by the external stimuli. The parameter λ denotes the percentage of the fast-receptor contribution to the total conductance G_i(t) [74-76].
If the coupling for the network (1) is assumed to be all-to-all, the conductances in (2b) can be modeled by the equations

$$\sigma_1 \frac{d}{dt} G^f_i = - G^f_i + f \sum_{\mu} \delta(t - t_{i\mu}), \qquad (3a)$$

$$\sigma_1 \frac{d}{dt} G^{(1)}_i = - G^{(1)}_i + \frac{\bar S}{Np} \sum_{k \neq i} \sum_{l} p^{(1)}_{kl}\, \delta(t - t_{kl}), \qquad (3b)$$

$$\sigma_2 \frac{d}{dt} G^{(2)}_i = - G^{(2)}_i + \frac{\bar S}{Np} \sum_{k \neq i} \sum_{l} p^{(2)}_{kl}\, \delta(t - t_{kl}), \qquad (3c)$$

where we have assumed that the rise time scales of the conductances are infinitely fast. In Eqs. (3a)-(3c), δ(·) is the Dirac delta function, σ1 and σ2 are the time constants of the fast and slow synapses, f is the synaptic strength of the external inputs, and S̄ is the network synaptic strength. The δ functions on the right-hand sides of (3a)-(3c) describe the spike trains arriving at the neuron in question. In particular, the time t_{iμ} is that of the μth external input spike delivered to the ith neuron and t_{kl} is the lth spike time of the kth neuron in the network. Note that the network neuron spikes {t_{kl}} are common to G^(1)_i and G^(2)_i due to the all-to-all nature of the network couplings.
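The network (1) with the conductance dynamics (3a)-(3c) can be simulated directly. The following is a minimal Euler sketch; all parameter values are illustrative choices (not taken from the paper), self-coupling is not excluded, and, for brevity, the Bernoulli release is sampled independently per receiving neuron rather than once per spike.

```python
import numpy as np

# Minimal Euler sketch of the IF network, Eqs. (1) and (3a)-(3c).
# Units normalized so that eps_r = 0 and V_T = 1, time in ms.
rng = np.random.default_rng(0)

N, dt, T = 100, 0.05, 200.0
tau, eps_r, eps_E, V_T = 20.0, 0.0, 14.0 / 3.0, 1.0
sigma1, sigma2 = 5.0, 100.0        # fast / slow synaptic time constants (ms)
f, S_bar, lam, p = 1.0, 0.5, 0.8, 0.5
nu0 = 0.5                          # external Poisson rate per neuron (1/ms)

V = np.zeros(N)
Gf = np.zeros(N)                   # external-input conductance, Eq. (3a)
G1 = np.zeros(N)                   # fast network conductance, Eq. (3b)
G2 = np.zeros(N)                   # slow network conductance, Eq. (3c)
spike_count = 0

for _ in range(int(T / dt)):
    # external Poisson spikes: jump f/sigma1 in Gf per spike
    Gf += (f / sigma1) * rng.poisson(nu0 * dt, size=N)

    # total conductance, Eq. (2), and leaky integration, Eq. (1)
    G = lam * (Gf + G1) + (1.0 - lam) * G2
    V += dt * (-(V - eps_r) - G * (V - eps_E)) / tau

    # fire-and-reset at the threshold V_T
    fired = V >= V_T
    n_sp = int(fired.sum())
    spike_count += n_sp
    V[fired] = eps_r

    # each network spike is released independently with probability p
    # on the fast and on the slow receptors (same spike train)
    if n_sp:
        rel1 = rng.binomial(n_sp, p, size=N)   # released on fast receptors
        rel2 = rng.binomial(n_sp, p, size=N)   # released on slow receptors
        G1 += S_bar * rel1 / (N * p * sigma1)  # jump S_bar/(N p sigma1)
        G2 += S_bar * rel2 / (N * p * sigma2)  # jump S_bar/(N p sigma2)

    # exponential decay of all conductances
    Gf -= dt * Gf / sigma1
    G1 -= dt * G1 / sigma1
    G2 -= dt * G2 / sigma2

m_avg = spike_count / (N * T)      # population-averaged rate per neuron
print("mean firing rate per neuron (spikes/ms):", m_avg)
```

With the parameters above the network operates in a mean-driven regime (the slaving potential exceeds V_T), so a nonzero firing rate is produced; the same code covers the fluctuation-driven regime with weaker input.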
The coefficients p^(1)_{kl} and p^(2)_{kl} in Eqs. (3b) and (3c) model the stochastic nature of the synaptic release [68-73]. Each p^(i)_{kl}, i = 1, 2, is taken to be an independent Bernoulli-distributed stochastic variable upon receiving a spike at the time t_{kl}; i.e., p^(i)_{kl} = 1 with probability p and 0 with probability 1 − p, where p is the synaptic release probability. The stochastic nature of the synaptic release will play an important role in the next two sections in helping us determine the appropriate conductance variables, which are driven by disjoint spike trains and can be used in deriving the correct Boltzmann-like kinetic equation. Moreover, synaptic noise often appears to be the dominant noise source in neuronal network dynamics [77-79]. Other types of neuronal noise, such as thermal [78] and channel noise [79,80], can also be incorporated in the kinetic theory developed here or in the kinetic theory with only fast excitatory conductances, developed in [28,29,58].
In the remainder of the paper, we assume that the train of the external input spiking times t_{iμ} is a realization of a Poisson process with rate ν0(t). While it need not be true that the spike times of an individual neuron, t_{kl} with a fixed k, are Poisson distributed, all the network spiking times t_{kl} can be considered as Poisson distributed with the rate Nm(t) when the number of neurons, N, in the network is sufficiently large, the firing rate per neuron is small, and each neuronal firing event is independent of all the others [81]. This Poisson approximation for the spike trains arriving at each neuron in the network is used in deriving the kinetic equation. The division of the spiking terms in Eqs. (3b) and (3c) by the number of neurons, N, provides for a well-defined network coupling in the large-network limit N → ∞.
III. CONDUCTANCES DRIVEN BY DISJOINT SPIKE TRAINS

The fact that two different sets of conductances with distinct time constants are driven by spikes belonging to the same spike train prevents us from being able to derive a kinetic representation of the neuronal network in the present case by using a straightforward generalization of the results in [28,29,58]. Therefore we must take advantage of the stochastic nature of the synaptic release and first introduce a set of conductance variables which are all driven by disjoint spike trains. We find this set in two steps.

As a first step, we introduce four auxiliary conductance variables X^(1)_i, X^(2)_i, Y^(1)_i, and Y^(2)_i, which obey the dynamics

$$\sigma_1 \frac{d}{dt} X^{(1)}_i = - X^{(1)}_i + \frac{\bar S}{Np} \sum_{k\neq i} \sum_l p^{(1)}_{kl} p^{(2)}_{kl}\, \delta(t - t_{kl}), \qquad (4a)$$

$$\sigma_1 \frac{d}{dt} X^{(2)}_i = - X^{(2)}_i + \frac{\bar S}{Np} \sum_{k\neq i} \sum_l p^{(1)}_{kl} \big(1 - p^{(2)}_{kl}\big)\, \delta(t - t_{kl}), \qquad (4b)$$

$$\sigma_2 \frac{d}{dt} Y^{(1)}_i = - Y^{(1)}_i + \frac{\bar S}{Np} \sum_{k\neq i} \sum_l p^{(1)}_{kl} p^{(2)}_{kl}\, \delta(t - t_{kl}), \qquad (4c)$$

$$\sigma_2 \frac{d}{dt} Y^{(2)}_i = - Y^{(2)}_i + \frac{\bar S}{Np} \sum_{k\neq i} \sum_l p^{(2)}_{kl} \big(1 - p^{(1)}_{kl}\big)\, \delta(t - t_{kl}). \qquad (4d)$$

We have chosen these variables so that the dynamics of X^(1)_i and Y^(1)_i is driven by the synaptic release on both the fast and slow receptors simultaneously, X^(2)_i is driven by the release on the fast receptors without release on the slow receptors, and Y^(2)_i is driven by the release on the slow receptors without release on the fast receptors.

By adding the pairs of Eqs. (4a), (4b) and (4c), (4d), respectively, we find that G^(1)_i = X^(1)_i + X^(2)_i and G^(2)_i = Y^(1)_i + Y^(2)_i satisfy Eqs. (3b) and (3c). From Eqs. (4a) and (4c), we likewise compute that the conductance A_i = σ1 X^(1)_i + σ2 Y^(1)_i is driven by the synaptic release on both the fast and slow receptors simultaneously, while B_i = σ1 X^(1)_i − σ2 Y^(1)_i receives no synaptic driving at all. From Eqs. (1), (3a), (4b), and (4d), we thus finally collect a set of equations in which all the conductance variables have been chosen so that they are driven by disjoint spike trains. This set is

$$\frac{d}{dt} V_i = - \frac{V_i - \varepsilon_r}{\tau} - G_i(t)\,\frac{V_i - \varepsilon_E}{\tau}, \qquad (5a)$$

$$\sigma_1 \frac{d}{dt} G^f_i = - G^f_i + f \sum_\mu \delta(t - t_{i\mu}), \qquad (5b)$$

$$\frac{d}{dt} A_i = - \frac{1}{\sigma_+} A_i - \frac{1}{\sigma_-} B_i + \frac{2\bar S}{Np} \sum_{k\neq i} \sum_l p^{(1)}_{kl} p^{(2)}_{kl}\, \delta(t - t_{kl}), \qquad (5c)$$

$$\frac{d}{dt} B_i = - \frac{1}{\sigma_+} B_i - \frac{1}{\sigma_-} A_i, \qquad (5d)$$

$$\sigma_1 \frac{d}{dt} X^{(2)}_i = - X^{(2)}_i + \frac{\bar S}{Np} \sum_{k\neq i} \sum_l p^{(1)}_{kl} \big(1 - p^{(2)}_{kl}\big)\, \delta(t - t_{kl}), \qquad (5e)$$

$$\sigma_2 \frac{d}{dt} Y^{(2)}_i = - Y^{(2)}_i + \frac{\bar S}{Np} \sum_{k\neq i} \sum_l p^{(2)}_{kl} \big(1 - p^{(1)}_{kl}\big)\, \delta(t - t_{kl}), \qquad (5f)$$

with the total conductance G_i given by

$$G_i(t) = \lambda G^f_i(t) + \lambda_+ A_i + \lambda_- B_i + \lambda X^{(2)}_i + (1-\lambda) Y^{(2)}_i. \qquad (5g)$$

In Eqs. (5a)-(5g), the constants σ± and λ± are

$$\sigma_+ = \frac{2\sigma_1\sigma_2}{\sigma_2 + \sigma_1}, \qquad \sigma_- = \frac{2\sigma_1\sigma_2}{\sigma_2 - \sigma_1}, \qquad (6)$$

$$\lambda_+ = \frac{1}{2}\left(\frac{\lambda}{\sigma_1} + \frac{1-\lambda}{\sigma_2}\right), \qquad \lambda_- = \frac{1}{2}\left(\frac{\lambda}{\sigma_1} - \frac{1-\lambda}{\sigma_2}\right), \qquad (7)$$

respectively. In other words, the conductances G^f_i, A_i, X^(2)_i, and Y^(2)_i in Eqs. (5a)-(5g) jump in a mutually exclusive fashion in that no two of them can jump as the result of the same spike.

In what is to follow, we still treat the spike trains that drive the conductance variables G^f_i, A_i, X^(2)_i, and Y^(2)_i in Eqs. (5a)-(5g) as Poisson, for the same reason as explained at the end of Sec. II.

IV. KINETIC EQUATION

To arrive at a statistical description of the neuronal network dynamics, we begin by considering the probability density

$$\rho \equiv \rho(v, g_f, \eta, \zeta, x_2, y_2, t) = E\left[\frac{1}{N}\sum_i \delta\big(v - V_i(t)\big)\,\delta\big(g_f - G^f_i(t)\big)\,\delta\big(\eta - A_i(t)\big)\,\delta\big(\zeta - B_i(t)\big)\,\delta\big(x_2 - X^{(2)}_i(t)\big)\,\delta\big(y_2 - Y^{(2)}_i(t)\big)\right] \qquad (8)$$

for the voltage and the conductance variables in Eqs. (5a)-(5g). Here, E represents the expectation over all possible realizations of the Poisson spike trains and initial conditions. An argument nearly identical to that presented in Appendix B of [29] shows that the evolution of the probability density ρ(v, g_f, η, ζ, x_2, y_2) for the voltages and conductances evolving according to Eqs. (5a)-(5g) is described by the kinetic equation

$$\begin{aligned}
\frac{\partial \rho}{\partial t} ={}& \frac{\partial}{\partial v}\left\{\left[\frac{v-\varepsilon_r}{\tau} + \big[\lambda g_f + \lambda_+\eta + \lambda_-\zeta + \lambda x_2 + (1-\lambda) y_2\big]\frac{v-\varepsilon_E}{\tau}\right]\rho\right\} \\
&+ \frac{\partial}{\partial g_f}\left[\frac{g_f}{\sigma_1}\,\rho\right] + \nu_0(t)\left[\rho\!\left(v, g_f - \frac{f}{\sigma_1}, \eta, \zeta, x_2, y_2\right) - \rho(v, g_f, \eta, \zeta, x_2, y_2)\right] \\
&+ \frac{\partial}{\partial \eta}\left[\left(\frac{1}{\sigma_+}\eta + \frac{1}{\sigma_-}\zeta\right)\rho\right] + m(t)\, p^2 N \left[\rho\!\left(v, g_f, \eta - \frac{2\bar S}{Np}, \zeta, x_2, y_2\right) - \rho(v, g_f, \eta, \zeta, x_2, y_2)\right] + \frac{\partial}{\partial \zeta}\left[\left(\frac{1}{\sigma_+}\zeta + \frac{1}{\sigma_-}\eta\right)\rho\right] \\
&+ \frac{\partial}{\partial x_2}\left[\frac{x_2}{\sigma_1}\,\rho\right] + m(t)\, p(1-p) N \left[\rho\!\left(v, g_f, \eta, \zeta, x_2 - \frac{\bar S}{Np\sigma_1}, y_2\right) - \rho(v, g_f, \eta, \zeta, x_2, y_2)\right] \\
&+ \frac{\partial}{\partial y_2}\left[\frac{y_2}{\sigma_2}\,\rho\right] + m(t)\, p(1-p) N \left[\rho\!\left(v, g_f, \eta, \zeta, x_2, y_2 - \frac{\bar S}{Np\sigma_2}\right) - \rho(v, g_f, \eta, \zeta, x_2, y_2)\right].
\end{aligned} \qquad (9)$$

Here, ν0(t) is the Poisson spiking rate of the external input and m(t) is the network firing rate normalized by the number of neurons, N. This normalization is used to prevent m(t) from increasing without bound in the large-N limit. Equivalently, the firing rate m(t) is the population-averaged firing rate per neuron in the network.
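The reason the transformed variables can be treated as driven by independent Poisson trains is the standard marking (thinning) property: assigning independent Bernoulli marks to the points of a Poisson process splits it into independent Poisson subprocesses, here with rates p²ν ("both released," driving A_i) and p(1−p)ν (each "one released, one failed" stream, driving X^(2)_i and Y^(2)_i). A quick Monte Carlo check of the rates, with illustrative values:

```python
import numpy as np

# Bernoulli marking of a Poisson spike train: each spike carries
# independent release marks for the fast and slow receptors, each with
# probability p.  The marked substreams are then Poisson with rates
# p^2 * nu ("both") and p(1-p) * nu ("fast only" / "slow only").
rng = np.random.default_rng(1)

nu, p, T = 1.0, 0.5, 20000.0
n_spikes = rng.poisson(nu * T)
m1 = rng.random(n_spikes) < p      # release on fast receptors
m2 = rng.random(n_spikes) < p      # release on slow receptors

rate_both = np.sum(m1 & m2) / T          # expected p^2 * nu   = 0.25
rate_fast_only = np.sum(m1 & ~m2) / T    # expected p(1-p)*nu  = 0.25
rate_slow_only = np.sum(~m1 & m2) / T    # expected p(1-p)*nu  = 0.25

print(rate_both, rate_fast_only, rate_slow_only)
```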
We here give a brief, intuitive description of the derivation process leading from the dynamical equations (5a)-(5g) to the kinetic equation (9). To this end, we consider the dynamics of a neuron in the time interval (t, t+Δt) and analyze the conditional expectations for the values of the voltage V_i(t+Δt) and conductances G^f_i(t+Δt), A_i(t+Δt), B_i(t+Δt), X^(2)_i(t+Δt), and Y^(2)_i(t+Δt) at time t+Δt, given their values V_i(t), G^f_i(t), A_i(t), B_i(t), X^(2)_i(t), and Y^(2)_i(t) at time t. If the time increment Δt is chosen to be sufficiently small, at most one spike can arrive at this neuron with nonzero probability during the interval (t, t+Δt). Five different, mutually exclusive events can take place during this time interval. The first event is that either no spike arrives at the neuron or else a spike arrives but activates none of the conductances because of synaptic failure. This event occurs with probability

$$\big[1 - \nu_0(t)\Delta t\big]\big[1 - p^2 m(t) N \Delta t\big]\big[1 - p(1-p) m(t) N \Delta t\big]^2 + O(\Delta t^2) = 1 - \big[\nu_0(t) + (2p - p^2) m(t) N\big]\Delta t + O(\Delta t^2).$$

The neuron's dynamics is then governed by the smooth (streaming) terms in Eqs. (5a)-(5g). The other four events consist of an external-input or network spike arriving and activating one of the conductances G^f_i, A_i, X^(2)_i, or Y^(2)_i, with the respective probabilities ν0(t)Δt + O(Δt²), p²m(t)NΔt + O(Δt²), and p(1−p)m(t)NΔt + O(Δt²) for each of the last two possible events. Due to our variable choice, no two of these conductances can be activated simultaneously. The corresponding conductance jumps are f/σ1, 2S̄/Np, S̄/σ1Np, and S̄/σ2Np, respectively. This argument lets us compute the single-neuron conditional probability density function at time t+Δt. We then average over all possible values of V_i(t), G^f_i(t), A_i(t), B_i(t), X^(2)_i(t), and Y^(2)_i(t), as well as over all neurons, initial conditions, and possible spike trains. This leads to the right-hand side of the kinetic equation (9) being the coefficient multiplying Δt in the Δt expansion of the density ρ(v, g_f, η, ζ, x_2, y_2, t+Δt), which implies (9). The details of the calculation are similar to those given in Appendix B of [29].
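The O(Δt) bookkeeping in the no-activation probability above can be checked symbolically; a small sketch, assuming sympy is available:

```python
import sympy as sp

# Verify the O(dt) coefficient of the "no conductance activated" event:
# [1 - nu0*dt][1 - p^2 m N dt][1 - p(1-p) m N dt]^2
#   = 1 - [nu0 + (2p - p^2) m N] dt + O(dt^2).
dt, nu0, p, m, N = sp.symbols("dt nu0 p m N", positive=True)

prob = (1 - nu0 * dt) * (1 - p**2 * m * N * dt) * (1 - p * (1 - p) * m * N * dt)**2
coeff = sp.expand(prob).coeff(dt, 1)

# the linear-in-dt coefficient collapses to -[nu0 + (2p - p^2) m N]
assert sp.simplify(coeff + nu0 + (2 * p - p**2) * m * N) == 0
print("O(dt) coefficient:", sp.factor(coeff))
```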
Assuming the conductance jump values f/σ1, 2S̄/Np, S̄/Npσ1, and S̄/Npσ2 to be small, we Taylor-expand the difference terms in (9) and thus obtain the diffusion approximation to the kinetic equation (9), given by

$$\begin{aligned}
\frac{\partial \rho}{\partial t} ={}& \frac{\partial}{\partial v}\left\{\left[\frac{v-\varepsilon_r}{\tau} + \big[\lambda g_f + \lambda_+\eta + \lambda_-\zeta + \lambda x_2 + (1-\lambda)y_2\big]\frac{v-\varepsilon_E}{\tau}\right]\rho\right\} + \frac{\partial}{\partial g_f}\left\{\left[\frac{1}{\sigma_1}(g_f - \bar g)\right]\rho\right\} + \frac{\sigma_f^2}{\sigma_1^2}\frac{\partial^2\rho}{\partial g_f^2} \\
&+ \frac{\partial}{\partial \eta}\left\{\left[\frac{1}{\sigma_+}(\eta - \bar\eta) + \frac{1}{\sigma_-}\zeta\right]\rho\right\} + 4p^2\bar\sigma^2\frac{\partial^2\rho}{\partial\eta^2} + \frac{\partial}{\partial \zeta}\left\{\left[\frac{1}{\sigma_+}\zeta + \frac{1}{\sigma_-}\eta\right]\rho\right\} \\
&+ \frac{\partial}{\partial x_2}\left\{\left[\frac{1}{\sigma_1}(x_2 - \bar x_2)\right]\rho\right\} + p(1-p)\frac{\bar\sigma^2}{\sigma_1^2}\frac{\partial^2\rho}{\partial x_2^2} + \frac{\partial}{\partial y_2}\left\{\left[\frac{1}{\sigma_2}(y_2 - \bar y_2)\right]\rho\right\} + p(1-p)\frac{\bar\sigma^2}{\sigma_2^2}\frac{\partial^2\rho}{\partial y_2^2}.
\end{aligned} \qquad (10)$$

In this equation the (time-dependent) coefficients are

$$\bar g(t) = f\nu_0(t), \qquad \bar\eta(t) = 2\sigma_+ p \bar S m(t), \qquad \bar x_2(t) = \bar y_2(t) = (1-p)\bar S m(t),$$
$$\sigma_f^2(t) = \tfrac{1}{2} f^2 \nu_0(t), \qquad \bar\sigma^2(t) = \frac{\bar S^2}{2Np^2}\, m(t), \qquad (11)$$

with σ± and λ± as defined in (6) and (7), respectively.

We now reverse the conductance-variable changes performed in Sec. III, used to derive Eqs. (5a)-(5g), in order to derive a kinetic equation (in the diffusion approximation) for the time evolution of the probability density function involving only the voltage v and the fast and slow conductances g_A and g_N, respectively: ρ ≡ ρ(v, g_A, g_N). This kinetic equation is

$$\begin{aligned}
\frac{\partial \rho}{\partial t} ={}& \frac{\partial}{\partial v}\left\{\left[\frac{v-\varepsilon_r}{\tau} + \big[\lambda g_A + (1-\lambda) g_N\big]\frac{v-\varepsilon_E}{\tau}\right]\rho\right\} + \frac{1}{\sigma_1}\frac{\partial}{\partial g_A}\big\{[g_A - \bar g_A(t)]\rho\big\} + \frac{1}{\sigma_2}\frac{\partial}{\partial g_N}\big\{[g_N - \bar g_N(t)]\rho\big\} \\
&+ \frac{\sigma_A^2}{\sigma_1^2}\frac{\partial^2\rho}{\partial g_A^2} + \frac{2p\sigma_0^2}{\sigma_1\sigma_2}\frac{\partial^2\rho}{\partial g_A\,\partial g_N} + \frac{\sigma_0^2}{\sigma_2^2}\frac{\partial^2\rho}{\partial g_N^2},
\end{aligned} \qquad (12)$$

with

$$\bar g_A(t) = f\nu_0(t) + \bar S m(t), \qquad \bar g_N(t) = \bar S m(t),$$
$$\sigma_A^2(t) = \tfrac{1}{2} f^2\nu_0(t) + \sigma_0^2(t), \qquad \sigma_0^2(t) = \frac{\bar S^2}{2Np}\, m(t). \qquad (13)$$

Note that the cross-derivative term in the last line of the kinetic equation (12) is due to the correlation between the jumps in the fast and slow conductances G^A and G^N when both synaptic releases occur, which happens with probability p². Had we treated the spike trains in Eqs. (3b) and (3c) as if they were distinct, this cross term would be missing.
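The coefficients in (11) and (13) can be sanity-checked for the external conductance alone: a shot-noise process with jump f/σ1 at rate ν0 and decay time σ1 has stationary mean fν0 and variance f²ν0/(2σ1), i.e., σ_f²/σ1 in the notation of (11). A minimal numerical check with illustrative parameter values (the network terms work analogously):

```python
import numpy as np

# Jump process sigma1 * dG/dt = -G + f * sum_mu delta(t - t_mu), Poisson
# input at rate nu0: compare the empirical stationary mean and variance
# against the diffusion-approximation values g_bar = f*nu0 and
# sigma_f^2/sigma1 = f^2*nu0/(2*sigma1).  Illustrative parameters.
rng = np.random.default_rng(2)

sigma1, f, nu0 = 5.0, 0.5, 2.0     # decay time (ms), jump scale, rate (1/ms)
dt, T, burn = 0.01, 5000.0, 100.0

n_steps = int(T / dt)
jumps = (f / sigma1) * rng.poisson(nu0 * dt, size=n_steps)
trace = np.empty(n_steps)
G = 0.0
for k in range(n_steps):
    G = G * (1.0 - dt / sigma1) + jumps[k]   # decay, then jump
    trace[k] = G

samples = trace[int(burn / dt):]             # discard transient
mean_pred = f * nu0                          # = 1.0
var_pred = f**2 * nu0 / (2.0 * sigma1)       # = 0.05
print(samples.mean(), mean_pred)
print(samples.var(), var_pred)
```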
V. BOUNDARY CONDITIONS IN TERMS OF VOLTAGE AND CONDUCTANCES

The first term in Eq. (12) is the derivative of the probability flux in the voltage direction:

$$J_V(v, g_A, g_N, t) = -\left\{\frac{v-\varepsilon_r}{\tau} + \big[\lambda g_A + (1-\lambda) g_N\big]\frac{v-\varepsilon_E}{\tau}\right\}\rho(v, g_A, g_N, t). \qquad (14)$$

Since we have assumed that the neurons in the network (1) and (3a)-(3c) have no refractory period, a neuron's voltage is reset to ε_r as soon as it crosses the threshold value V_T, before any change to the neuron's conductances can occur. This implies the voltage-flux boundary condition

$$J_V(V_T, g_A, g_N, t) = J_V(\varepsilon_r, g_A, g_N, t). \qquad (15)$$

With the conductance boundary conditions, we must proceed a bit more cautiously. Specifically, in the IF system (1) and (3a)-(3c), the spike-induced jumps in the conductances are positive. In conjunction with the form of Eqs. (3a)-(3c), this fact implies that a neuron's conductances G^A_i and G^N_i must always be positive and that there cannot be any conductance flux across the boundary of the first quadrant in the (G^A_i, G^N_i) conductance plane in either direction. One would therefore expect the probability density ρ(v, g_A, g_N, t) to be nonzero only when g_A > 0 and g_N > 0 as a result of the natural conductance dynamics, and not as a result of any boundary conditions.

The approximation leading from the difference equation (9) to the diffusion equation (10), however, replaces the conductance jumps by a transport and a diffusion term, and one should expect that this combination of terms may imply some, at least local, downward conductance flux even across the half-rays g_A = 0, g_N > 0 and g_N = 0, g_A > 0. Therefore, a no-flux boundary condition across these two half-rays would have to be enforced, essentially artificially, as part of the diffusion approximation. In addition, due to the cross-derivative term in Eq. (12), the individual conductance fluxes cannot be defined uniquely in the (g_A, g_N) conductance variables, but only in the transformed conductance variables in which the second-order-derivative terms in (12) become diagonalized. Employing such a transformation, taking the dot product of the flux vector with the appropriate normal, and then transforming that expression back into the (g_A, g_N) conductance variables would make the already somewhat artificial no-flux boundary conditions also extremely unwieldy to compute.

Alternatively, in Eq. (12), we allow the probability density ρ(v, g_A, g_N, t) to be a non-negative function everywhere and take instead the smallness of ρ(v, g_A, g_N, t) for g_A ≤ 0 or g_N ≤ 0 as part of the diffusion approximation; its loss may limit the temporal validity range of this approximation. The boundary conditions that we assume for the conductances in the solution of Eq. (12) are thus simply that no neuron have infinitely large or small conductances; that is,

$$\rho(v, g_A \to \pm\infty, g_N, t) \to 0, \qquad \rho(v, g_A, g_N \to \pm\infty, t) \to 0, \qquad (16)$$

sufficiently rapidly, together with all its derivatives.

As a parenthetical remark, we should point out that, technically, the adoption of the approximate boundary conditions (16) is also advantageous in eliminating unwanted boundary terms from the conductance-moment equations to be derived in Secs. VI and VIII, and thus significantly simplifies the resulting reduced kinetic theory. In Sec. VIII we will also present a conductance-moment-based validity criterion for the approximation made in adopting the conditions (16).

The population-averaged firing rate per neuron, m(t), is the total voltage flux (14) across the firing threshold V_T that includes all values of the conductances and is thus given by the integral

$$m(t) = \int\!\!\int_{-\infty}^{\infty} J_V(V_T, g_A, g_N, t)\, dg_A\, dg_N. \qquad (17)$$

Note that this firing rate feeds into the definitions (13) of the parameters in the kinetic equation (12).

Equation (12), the boundary conditions (15) and (16), and the expression for the firing rate, Eq. (17), which reenters Eq. (12) as a self-consistency parameter via Eqs. (13), provide a complete statistical description (in the diffusion approximation) of the dynamics taking place in the neuronal network (1) and (3a)-(3c). The simultaneous occurrence of the firing rate m(t) in both Eq. (12) and the integral (17) of the voltage boundary condition makes this description highly nonlinear. Solving a nonlinear problem involving a partial differential equation in four dimensions is a formidable task, and it is therefore important to reduce the dimensionality of the problem to 2 by eliminating the explicit dependence on the conductances from the description. We describe this reduction process in the subsequent sections.

VI. HIERARCHY OF MOMENTS

In this section, we derive a description of the dynamics in terms of functions of voltage alone. In other words, we seek to describe the evolution of the voltage probability density

$$\varrho(v,t) = \int\!\!\int_{-\infty}^{\infty} \rho(v, g_A, g_N, t)\, dg_A\, dg_N \qquad (18)$$

and the conditional moments

$$\mu_{r,s}(v) = \int\!\!\int_{-\infty}^{\infty} g_A^r\, g_N^s\, \rho(g_A, g_N \mid v)\, dg_A\, dg_N, \qquad (19)$$

where the conditional density ρ(g_A, g_N | v) is defined by the equation

$$\rho(v, g_A, g_N, t) = \varrho(v,t)\, \rho(g_A, g_N, t \mid v). \qquad (20)$$

We derive a hierarchy of equations for the conditional moments (19), for which we will subsequently devise an approximate closure via the maximal-entropy principle. For clarity of exposition, we rename

$$\mu_A(v) \equiv \mu_{1,0}(v), \qquad \mu_N(v) \equiv \mu_{0,1}(v), \qquad (21a)$$
$$\mu_A^{(2)}(v) \equiv \mu_{2,0}(v), \qquad \mu_N^{(2)}(v) \equiv \mu_{0,2}(v), \qquad \mu_{AN}^{(2)}(v) \equiv \mu_{1,1}(v). \qquad (21b)$$

In order to obtain the equations that describe the dynamics of the conditional moments (19), we begin by, for any given function f(v, g_A, g_N, t), using Eq. (12) to derive an equation for the evolution of this function's projection onto the v space alone. We multiply (12) by f(v, g_A, g_N, t) and integrate over the conductance variables. Integrating by parts and taking into account the boundary conditions (16), which state that ρ must vanish, together with all its derivatives, as g_A → ±∞ and g_N → ±∞ faster than any power of g_A or g_N, we thus arrive at the equation

$$\begin{aligned}
\frac{\partial}{\partial t}\int\!\!\int_{-\infty}^{\infty} f \rho\, dg_A\, dg_N ={}& \int\!\!\int_{-\infty}^{\infty} \frac{\partial f}{\partial t}\,\rho\, dg_A\, dg_N - \frac{\partial}{\partial v}\int\!\!\int_{-\infty}^{\infty} f J_V\, dg_A\, dg_N + \int\!\!\int_{-\infty}^{\infty} \frac{\partial f}{\partial v}\, J_V\, dg_A\, dg_N \\
&- \int\!\!\int_{-\infty}^{\infty} \frac{g_A - \bar g_A}{\sigma_1}\, \frac{\partial f}{\partial g_A}\,\rho\, dg_A\, dg_N - \int\!\!\int_{-\infty}^{\infty} \frac{g_N - \bar g_N}{\sigma_2}\, \frac{\partial f}{\partial g_N}\,\rho\, dg_A\, dg_N \\
&+ \int\!\!\int_{-\infty}^{\infty} \frac{\sigma_A^2}{\sigma_1^2}\, \frac{\partial^2 f}{\partial g_A^2}\,\rho\, dg_A\, dg_N + \int\!\!\int_{-\infty}^{\infty} \frac{2p\sigma_0^2}{\sigma_1\sigma_2}\, \frac{\partial^2 f}{\partial g_A\, \partial g_N}\,\rho\, dg_A\, dg_N + \int\!\!\int_{-\infty}^{\infty} \frac{\sigma_0^2}{\sigma_2^2}\, \frac{\partial^2 f}{\partial g_N^2}\,\rho\, dg_A\, dg_N.
\end{aligned} \qquad (22)$$

Letting f = g_A^r g_N^s in (22), with r, s = 0, 1, 2, ..., and using (14), we now derive a set of equations for the moments

$$\int\!\!\int_{-\infty}^{\infty} g_A^r\, g_N^s\, \rho(v, g_A, g_N)\, dg_A\, dg_N = \varrho(v)\, \mu_{r,s}(v). \qquad (23)$$

These equations read

$$\frac{\partial \varrho}{\partial t} - \frac{\partial}{\partial v}\left\{\left[\frac{v-\varepsilon_r}{\tau} + \frac{v-\varepsilon_E}{\tau}\big[\lambda\mu_A + (1-\lambda)\mu_N\big]\right]\varrho\right\} = 0, \qquad (24a)$$
KINETIC THEORY FOR NEURONAL NETWORKS WITH …
⳵
⳵
共␮A兲 −
⳵t
⳵v
再冋冉 冊 冉 冊
册冎
再冋冉 冊 冉 冊
册冎
v − ␧r
␶
␮A +
v − ␧r
␶
␮N +
+ 共1 − ␭兲␮N共2兲兴 = −
and, for r + s ⱖ 2,
⳵
⳵
共␮r,s兲 −
⳵t
⳵v
+
v − ␧E
␶
␶
共24b兲
␮r,s +
v − ␧E
␶
共24c兲
关␭␮r+1,s
r
共␮r,s − ḡA␮r−1,s兲
␴1
␴2
s
共␮r,s − ḡN␮r,s−1兲 + A2 r共r − 1兲␮r−2,s
␴2
␴1
2p␴20
␴ 1␴ 2
rs␮r−1,s−1 +
␴20
s共s
␴22
册
− 1兲␮r,s−2 ,
冉
冊
册
+ 共1 − ␭兲␮r,s+1共VT兲兴 共VT兲 −
冉
冊
␳0共gA,gN兲 =
where
M=
␣=
␲
e
冉 冊 冉 冊 冉 冊
␣ ␤
,
␤ ␥
g=
␴ 1共 ␴ 1 + ␴ 2兲
,
2D
2
gA
,
gN
␤=
g=
ḡA共t兲
ḡN共t兲
,
p ␴ 1␴ 2共 ␴ 1 + ␴ 2兲
,
D
␳共v,gA,gN,t兲
␳共v,gA,gN,t兲dgAdgN ,
␳0共gA,gN兲
subject to three constraints: 共18兲, and also 共23兲 with r = 1 and
s = 0, and 共23兲 with r = 0 and s = 1, which are
冕冕
⬁
gA␳共v,gA,gN兲dgAdgN = 共v兲␮A共v兲,
−⬁
冕冕
⬁
gN␳共v,gA,gN兲dgAdgN = 共v兲␮N共v兲.
共30兲
−⬁
␳ˆ 共v,gA,gN兲 = ␳0共gA,gN兲exp关a0共v兲 + aA共v兲gA + aN共v兲gN兴,
共31兲
∂[ρ(v)μ_A(v)]/∂t − ∂/∂v { [(v − ε_r)/τ] μ_A(v)ρ(v) + [(v − ε_E)/τ] [λμ_A^(2)(v) + (1 − λ)μ_AN^(2)(v)] ρ(v) } = −(1/σ₁)[μ_A(v) − ḡ_A(t)]ρ(v),  (24b)

∂[ρ(v)μ_N(v)]/∂t − ∂/∂v { [(v − ε_r)/τ] μ_N(v)ρ(v) + [(v − ε_E)/τ] [λμ_AN^(2)(v) + (1 − λ)μ_N^(2)(v)] ρ(v) } = −(1/σ₂)[μ_N(v) − ḡ_N(t)]ρ(v),  (24c)

and, for the general conditional moments μ^(r,s)(v),

∂[ρ(v)μ^(r,s)(v)]/∂t − ∂/∂v { [(v − ε_r)/τ] μ^(r,s)(v)ρ(v) + [(v − ε_E)/τ] [λμ^(r+1,s)(v) + (1 − λ)μ^(r,s+1)(v)] ρ(v) }
    = −(r/σ₁)[μ^(r,s)(v) − ḡ_A(t)μ^(r−1,s)(v)]ρ(v) − (s/σ₂)[μ^(r,s)(v) − ḡ_N(t)μ^(r,s−1)(v)]ρ(v)
      + r(r − 1)(σ_A²/σ₁)μ^(r−2,s)(v)ρ(v) + 2rs[pσ₀²/(σ₁σ₂)]μ^(r−1,s−1)(v)ρ(v) + s(s − 1)(σ₀²/σ₂)μ^(r,s−2)(v)ρ(v),  (24d)

where two zero subscripts in any of the moments should be interpreted as making this moment unity and a negative subscript as making it zero. Equations (24a)–(24d) indeed form an infinite hierarchy, which needs to be closed at a finite order if we are to simplify the problem by eliminating the dependence on the conductances.

Noting the form of the voltage-derivative terms in (22) and using the voltage boundary condition (15) and the flux definition (14), we find the boundary conditions

[(V_T − ε_r)/τ] μ^(r,s)(V_T)ρ(V_T) + [(V_T − ε_E)/τ] [λμ^(r+1,s)(V_T) + (1 − λ)μ^(r,s+1)(V_T)] ρ(V_T) − [(ε_r − ε_E)/τ] [λμ^(r+1,s)(ε_r) + (1 − λ)μ^(r,s+1)(ε_r)] ρ(ε_r) = 0.  (25)

VII. MAXIMUM-ENTROPY PRINCIPLE

When the Poisson rate ν₀ of the external input is independent of time, Eq. (12) possesses an invariant Gaussian solution ρ₀(g_A, g_N) given by the formula

ρ₀(g_A, g_N) = (√(det M)/π) e^{−(g − ḡ)·M(g − ḡ)},  (26)

where g = (g_A, g_N)^T, ḡ = (ḡ_A, ḡ_N)^T, and M is the symmetric matrix

M = | α  β |
    | β  γ |,  (27)

with the entries

α = σ₁(σ₁ + σ₂)²/(2D),  (28a)

β = −pσ₁σ₂(σ₁ + σ₂)/D,  (28b)

γ = σ_A²σ₂(σ₁ + σ₂)²/(2Dσ₀²),  (28c)

where

D = σ_A²(σ₁ + σ₂)² − 4p²σ₀²σ₁σ₂.

From (13), we see that σ_A² > σ₀², and since 0 < p ≤ 1, it is clear that D > 0. We also calculate that det M = αγ − β² = σ₁σ₂(σ₁ + σ₂)²/(4Dσ₀²) > 0, where the inequality holds because all the factors are positive. The inequalities det M > 0 and α > 0, which follow from (28a)–(28c), imply that the matrix M is positive definite. The solution (26) is normalized to unity over the (g_A, g_N) space; its normalization over the (v, g_A, g_N) space would be V_T − ε_r.

Using the equilibrium solution ρ₀(g_A, g_N) in (26), we now formulate the maximal entropy principle, which will guide us in determining the correct closure conditions for the hierarchy (24a)–(24d). According to this principle, we seek the density function ρ̂(v, g_A, g_N, t) which maximizes the entropy

S[ρ, ρ₀](v, t) = −∫∫_{−∞}^{∞} ρ(v, g_A, g_N, t) ln[ρ(v, g_A, g_N, t)/ρ₀(g_A, g_N)] dg_A dg_N  (29)

under the constraints (18) and (30).

Applying Lagrange multipliers to the maximization of the entropy (29) under the constraints (18) and (30), we find that the maximizing density function ρ̂(v, g_A, g_N) has the form

ρ̂(v, g_A, g_N, t) = ρ₀(g_A, g_N) e^{a₀(v) + a_A(v)g_A + a_N(v)g_N},  (31)

where the multipliers a₀(v), a_A(v), and a_N(v) are to be determined. Solving the constraints (18) and (30), we find

ρ̂(v, g_A, g_N, t) = (√(det M)/π) ρ(v) e^{−(g − μ)·M(g − μ)},  (32)

where μ = (μ_A(v), μ_N(v))^T is the vector of the first conditional moments, defined by (21a), (21b), and (19).

To find the relations between the first and second conditional moments of the probability density function ρ̂(v, g_A, g_N, t) in (32), which maximizes the entropy (29) under the constraints (18) and (30), we first put

x = g_A − μ_A(v),  y = g_N − μ_N(v).  (33)

We then use the definition of the matrix M in (27) to rewrite the normalization of the function (26) in the form
∫∫_{−∞}^{∞} e^{−(αx² + 2βxy + γy²)} dx dy = π/√(αγ − β²),

which we differentiate with respect to α, β, and γ, respectively, and from (32) obtain the expressions for the second moments of the conductances x and y in (33) with respect to the maximizing density ρ̂(v, g_A, g_N, t). Using (23) and (28a)–(28c), for the (co)variances

Σ_A²(v) ≡ μ_A^(2)(v) − [μ_A(v)]²,
C_AN(v) ≡ μ_AN^(2)(v) − μ_A(v)μ_N(v),
Σ_N²(v) ≡ μ_N^(2)(v) − [μ_N(v)]²,  (34)

we thus obtain the relations

Σ_A²(v) = γ/[2(αγ − β²)] = σ_A²/σ₁,
C_AN(v) = −β/[2(αγ − β²)] = 2pσ₀²/(σ₁ + σ₂),
Σ_N²(v) = α/[2(αγ − β²)] = σ₀²/σ₂.  (35)

Equations (34) and (35) furnish the expressions for the second-order conditional moments in terms of the first-order conditional moments and amount to closure conditions based on the maximum-entropy principle. With the aid of these closure conditions we can close the hierarchy (24a)–(24d) at first order when the external-input and network firing rates are constant in time.

VIII. DYNAMICS OF CONDUCTANCE MOMENTS

More generally, we seek the closure in terms of the moments

⟨g_A^r g_N^s⟩ = ∫_{ε_r}^{V_T} ∫∫_{−∞}^{∞} g_A^r g_N^s ρ(v, g_A, g_N) dv dg_A dg_N = ∫_{ε_r}^{V_T} ρ(v) μ^(r,s)(v) dv,  (36)

with 0 ≤ r ≤ 2, 0 ≤ s ≤ 2, and 1 ≤ r + s ≤ 2. These moments are the averages of the conditional moments μ_A(v) through μ_N^(2)(v) in (21a) and (21b) over the voltage distribution ρ(v). Integrating (24a)–(24d) over the voltage interval ε_r < v < V_T and using the boundary conditions (25), we find for these moments the ordinary differential equations

d⟨g_A⟩/dt = −(1/σ₁)[⟨g_A⟩ − ḡ_A(t)],  (37a)

d⟨g_N⟩/dt = −(1/σ₂)[⟨g_N⟩ − ḡ_N(t)],  (37b)

d⟨g_A²⟩/dt = −(2/σ₁)[⟨g_A²⟩ − ⟨g_A⟩ḡ_A(t)] + 2σ_A²/σ₁,  (37c)

d⟨g_A g_N⟩/dt = −(1/σ₁ + 1/σ₂)⟨g_A g_N⟩ + (1/σ₁)⟨g_N⟩ḡ_A(t) + (1/σ₂)⟨g_A⟩ḡ_N(t) + 2pσ₀²/(σ₁σ₂),  (37d)

d⟨g_N²⟩/dt = −(2/σ₂)[⟨g_N²⟩ − ⟨g_N⟩ḡ_N(t)] + 2σ₀²/σ₂.  (37e)

The initial conditions for these equations can be computed from the initial probability density ρ(v, g_A, g_N, t = 0) or its marginal counterpart over the conductances alone.

For the (co)variances

σ_g²(t) = ⟨g_A²⟩ − ⟨g_A⟩²,
c_AN(t) = ⟨g_A g_N⟩ − ⟨g_A⟩⟨g_N⟩,
σ_N²(t) = ⟨g_N²⟩ − ⟨g_N⟩²,  (38)

Eqs. (37a)–(37e) yield the equations

dσ_g²/dt = −(2/σ₁)σ_g² + 2σ_A²/σ₁,  (39a)

dc_AN/dt = −(1/σ₁ + 1/σ₂)c_AN + 2pσ₀²/(σ₁σ₂),  (39b)

dσ_N²/dt = −(2/σ₂)σ_N² + 2σ₀²/σ₂.  (39c)

Note that when the parameters σ_A² and σ₀² do not depend on time, Eqs. (39a)–(39c) have the unique attracting equilibrium points

σ_g² = σ_A²/σ₁,  c_AN = 2pσ₀²/(σ₁ + σ₂),  σ_N² = σ₀²/σ₂,  (40)

whose values are identical to the respective right-hand sides of the maximal-entropy relations (35). This observation casts the (co)variances (38), even in the time-dependent case, as suitable candidates for replacing the right-hand sides of (35) in a general second-order closure scheme, which we will postulate in the next section.

We need to stress, however, that even if we were to solve the moment equations (37a)–(37e) (and their higher-order counterparts) explicitly, we would not obtain an explicit solution to the moment problem for the density ρ(v, g_A, g_N). This is because Eqs. (37a)–(37e), and so also their solutions, depend on the as-yet-unknown firing rate m(t). How to find this rate will be explained in the next section.

Here, we discuss another important aspect of the conductance moment dynamics, namely, their significance for the validity of the diffusion approximation leading to Eq. (12) and the boundary conditions (16). As pointed out in Sec. V, the probability density function ρ(v, g_A, g_N) must be negligibly small in the region outside the first quadrant g_A > 0, g_N > 0 in order for this approximation to hold. [This smallness can, for example, be expressed in terms of the integral of ρ(v, g_A, g_N) over that region and the voltage being small.] It is clear that the necessary condition for this smallness is that the ratios of the variances σ_g²(t) and σ_N²(t) to the first moments ⟨g_A⟩ and ⟨g_N⟩, as computed from Eqs. (39a), (39c), (37a), and (37b), respectively, must be small. If this is not the case, the diffusion approximation (12), as well as the reduction to the voltage dependence alone discussed in the next section, may become invalid.
IX. CLOSURE AND REDUCED KINETIC MOMENT EQUATIONS

In this section, we postulate a second-order closure for the hierarchy (24a)–(24d) of equations for the moments (19) based on the discussion of the previous two sections. First, using Eq. (24a), we rewrite Eqs. (24b) and (24c) in such a way that the second moments μ_A^(2)(v), μ_AN^(2)(v), and μ_N^(2)(v) in them are expressed in terms of the first moments μ_A(v) and μ_N(v) and the (co)variances Σ_A²(v), C_AN(v), and Σ_N²(v) via Eqs. (34).

In view of the discussion presented in the preceding two sections, for the closure in the general case, we consider it natural to postulate

Σ_A²(v) ≡ σ_g²(t),  C_AN(v) ≡ c_AN(t),  Σ_N²(v) ≡ σ_N²(t),  (41)

where the (co)variances Σ_A²(v), C_AN(v), and Σ_N²(v) are defined as in (34) and σ_g²(t), c_AN(t), and σ_N²(t) are computed from Eqs. (39a)–(39c) with the appropriate initial conditions, as discussed after listing Eqs. (37a)–(37e).

Since the (co)variances σ_g²(t), c_AN(t), and σ_N²(t) no longer depend on the voltage v, Eqs. (24a)–(24c) for the voltage probability density function ρ(v) and the first conditional moments μ_A(v) and μ_N(v) under the closure assumption (41) simplify to become

∂ρ(v)/∂t = ∂/∂v { [ (v − ε_r)/τ + [λμ_A(v) + (1 − λ)μ_N(v)] (v − ε_E)/τ ] ρ(v) },  (42a)

∂μ_A(v)/∂t = −(1/σ₁)[μ_A(v) − ḡ_A] + { (v − ε_r)/τ + [λμ_A(v) + (1 − λ)μ_N(v)] (v − ε_E)/τ } ∂μ_A(v)/∂v + [λσ_g²(t) + (1 − λ)c_AN(t)] (1/ρ(v)) ∂/∂v { [(v − ε_E)/τ] ρ(v) },  (42b)

∂μ_N(v)/∂t = −(1/σ₂)[μ_N(v) − ḡ_N] + { (v − ε_r)/τ + [λμ_A(v) + (1 − λ)μ_N(v)] (v − ε_E)/τ } ∂μ_N(v)/∂v + [λc_AN(t) + (1 − λ)σ_N²(t)] (1/ρ(v)) ∂/∂v { [(v − ε_E)/τ] ρ(v) }.  (42c)

Equations (42a)–(42c) provide the desired reduced kinetic description of moments, and are the main result of this paper.

Equation (42a) is clearly in conservation form, with the corresponding voltage probability flux

J_V(v, t) = ∫∫_{−∞}^{∞} J_V(v, g_A, g_N, t) dg_A dg_N = −{ (v − ε_r)/τ + [λμ_A(v) + (1 − λ)μ_N(v)] (v − ε_E)/τ } ρ(v),  (43)

where J_V(v, g_A, g_N, t) is the probability flux (14) in the voltage direction for the original kinetic equation (12). The second equality in (43) is obtained from the definitions (14), (21a), and (21b) and Eq. (23), just as in the derivation of Eq. (25). From (14) and the definition (17) of the population-averaged firing rate per neuron, m(t), we now immediately see that this firing rate can be expressed as the voltage flux through the firing threshold:

m(t) = J_V(V_T, t).  (44)

We now proceed to derive the boundary conditions for the reduced equations (42a)–(42c). First, from the definition of the voltage probability flux (43) and the voltage boundary condition (15), we immediately derive the equation

J_V(V_T, t) = J_V(ε_r, t),  (45)

which is also the first equation of Eqs. (25) and further gives the boundary condition

{ (V_T − ε_r) + [λμ_A(V_T) + (1 − λ)μ_N(V_T)] (V_T − ε_E) } ρ(V_T) = [λμ_A(ε_r) + (1 − λ)μ_N(ε_r)] (ε_r − ε_E) ρ(ε_r).  (46a)

Under the closure (41), using the (co)variance definitions (34), the voltage probability flux definition (43), the boundary condition (45), and the firing rate expression (44), we can transform Eqs. (25) for r = 1, s = 0 and r = 0, s = 1, respectively, into the equations

τ m(t) [μ_A(V_T) − μ_A(ε_r)] = [λσ_g²(t) + (1 − λ)c_AN(t)] [(V_T − ε_E)ρ(V_T) − (ε_r − ε_E)ρ(ε_r)],  (46b)

τ m(t) [μ_N(V_T) − μ_N(ε_r)] = [λc_AN(t) + (1 − λ)σ_N²(t)] [(V_T − ε_E)ρ(V_T) − (ε_r − ε_E)ρ(ε_r)].  (46c)

Equations (46a)–(46c) provide the physiological boundary conditions for the kinetic moment equations (42a)–(42c).

In passing, let us remark that, due to the definitions (34), (36), and (38), the closure (41) implies the moment relations
∫_{ε_r}^{V_T} ρ(v) μ_A²(v) dv = ( ∫_{ε_r}^{V_T} ρ(v) μ_A(v) dv )²,

∫_{ε_r}^{V_T} ρ(v) μ_A(v) μ_N(v) dv = ∫_{ε_r}^{V_T} ρ(v) μ_A(v) dv × ∫_{ε_r}^{V_T} ρ(v) μ_N(v) dv,

∫_{ε_r}^{V_T} ρ(v) μ_N²(v) dv = ( ∫_{ε_r}^{V_T} ρ(v) μ_N(v) dv )².

Recapping the results of this section, the reduced statistical description for the voltage dynamics alone is furnished by Eqs. (42a)–(42c), the boundary conditions (46a)–(46c), the ordinary differential equations (39a)–(39c), the firing rate definition (44), the parameter definitions (13), and the initial conductance distribution or its linear and quadratic moments. This problem is highly nonlinear because of the nonlinear moment equations (42b) and (42c) and boundary condition (46a), and because the firing rate m(t) enters the governing equations as well as the boundary conditions as a self-consistency parameter.

X. DISTINGUISHED LIMITS

A. Instantaneous fast-conductance time scale

In the limit of the instantaneous fast conductance time scale, i.e., when σ₁ → 0, regardless of whether the forcing terms σ_A²(t) and σ₀²(t) depend on time or not, the solutions σ_g²(t) and c_AN(t) of Eqs. (39a) and (39b) relax on the O(σ₁) time scale to their respective forcing terms, so that

σ_g²(t) → σ_A²(t)/σ₁,  c_AN(t) → 2pσ₀²(t)/σ₂ = S̄²m(t)/(Nσ₂).  (47)

The limits (47) imply that σ₁σ_g²(t) → σ_A²(t) and σ₁c_AN(t) → 0, and thus Eq. (42b) becomes

μ_A(v) = ḡ_A(t) + [λσ_A²(t)/ρ(v)] ∂/∂v { [(v − ε_E)/τ] ρ(v) }.  (48)

In other words, the conditional moment μ_A(v) relaxes on the O(σ₁) time scale to being slaved to the dynamics of ρ(v) and μ_N(v).

In the limit of the instantaneous fast conductance time scale, as σ₁ → 0, we thus only have two equations governing the moments ρ(v) and μ_N(v),

∂ρ(v)/∂t = ∂/∂v { [ (v − ε_r)/τ + [λμ_A(v) + (1 − λ)μ_N(v)] (v − ε_E)/τ ] ρ(v) },  (49a)

∂μ_N(v)/∂t = −(1/σ₂)[μ_N(v) − ḡ_N(t)] + { (v − ε_r)/τ + [λμ_A(v) + (1 − λ)μ_N(v)] (v − ε_E)/τ } ∂μ_N(v)/∂v + [λc_AN(t) + (1 − λ)σ_N²(t)] (1/ρ(v)) ∂/∂v { [(v − ε_E)/τ] ρ(v) },  (49b)

while μ_A(v) is expressed by the relation (48). Here, the time-dependent coefficients ḡ_A(t), ḡ_N(t), and σ_A²(t) can be found in (13) and c_AN(t) is computed in (47). Note that all these quantities can now be computed from the external-input and network firing rates ν₀(t) and m(t) alone, without having to solve any additional differential equation. Only the coefficient σ_N²(t) is still computed from the ordinary differential equation (39c), with σ₀²(t) again given in terms of the firing rate m(t) in (13). The closure (41), in the limit σ₁ → 0, effectively becomes

σ₁Σ_A²(v) = σ_A²(t),  C_AN(v) = 2pσ₀²(t)/σ₂,  Σ_N²(v) = σ_N²(t).  (50)

Due to relation (48), Eq. (49b) is now second order in the voltage v.

Recall that the terms in (49a) and (49b) multiplied by c_AN(t) originate from the correlation effects due to the spikes that are common to both the fast and slow receptors. In a steady state, using (40) and taking the limit σ₁ → 0, we have c_AN → 2pσ_N², i.e., a rather strong correlation effect if the synaptic failure is low.

Because of (47), as σ₁ → 0, the boundary condition (46b) simplifies to become

(V_T − ε_E)ρ(V_T) = (ε_r − ε_E)ρ(ε_r),  (51a)

and therefore the condition (46c) also simplifies to become

μ_N(V_T) = μ_N(ε_r).  (51b)

Using the boundary conditions (51a) and (51b) and relation (48), we can finally simplify the boundary condition (46a) to become

τ(V_T − ε_r)ρ(V_T) + λ²σ_A²(t)(V_T − ε_E)² ∂ρ(V_T)/∂v = λ²σ_A²(t)(ε_r − ε_E)² ∂ρ(ε_r)/∂v.  (51c)

Note that these boundary conditions are now linear in ρ and μ_N, but still contain the firing rate through the parameter σ_A²(t).

To summarize, in the σ₁ → 0 limit, the simplified description of the problem is achieved by Eqs. (49a) and (49b), relation (48), the boundary conditions (51a)–(51c), the single ordinary differential equation (39c), the firing rate definition
(44), the parameter definitions (13), and the initial conductance distribution or its initial moments ⟨g_N⟩(0) and ⟨g_N²⟩(0).

B. Instantaneous fast-conductance and infinite slow-conductance time scales

If, in addition to σ₁ → 0, we now also consider the limit σ₂ → ∞, i.e., if we consider the dynamics over the time scales with σ₁ ≪ t ≪ σ₂, we find

c_AN(t) = S̄²m(t)/(Nσ₂) → 0  (52)

from (47), as well as dσ_N²/dt = 0 from (39c). It is therefore clear that

σ_N²(t) = σ_N²(0),  (53)

where σ_N²(0) = ⟨g_N²⟩(0) − ⟨g_N⟩²(0) and ⟨g_N²⟩(0) and ⟨g_N⟩(0) are computed from their definitions in (36) with the integrals taken against the initial probability density ρ(v, g_A, g_N, t = 0) or its conductance-dependent marginals. Using (52) and (53), we conclude that Eqs. (49a) and (49b) then reduce yet further to

∂ρ(v)/∂t = ∂/∂v { [ (v − ε_r)/τ + [λμ_A(v) + (1 − λ)μ_N(v)] (v − ε_E)/τ ] ρ(v) },  (54a)

∂μ_N(v)/∂t = { (v − ε_r)/τ + [λμ_A(v) + (1 − λ)μ_N(v)] (v − ε_E)/τ } ∂μ_N(v)/∂v + (1 − λ)σ_N²(0) (1/ρ(v)) ∂/∂v { [(v − ε_E)/τ] ρ(v) },  (54b)

with μ_A(v) again expressed by (48). Because of (52), the correlation term disappears from Eq. (54b). The closure (41), in the limit σ₁ → 0 and σ₂ → ∞, effectively becomes

σ₁Σ_A²(v) = σ_A²(t),  Σ_N²(v) = σ_N²(0),  C_AN(v) = 0.  (55)

The complete description of the problem in this limit is achieved using the same ingredients as in the previous section, except that Eq. (54b) replaces Eq. (49b) and that the variance σ_N²(0) does not evolve, so there is no need to solve any of the ordinary differential equations (39a)–(39c).

FIG. 1. (Color online) Accuracy of the kinetic theory for a network of neurons with fast and slow conductances driven by the same spike train: the time evolution of the population-averaged firing rate [axes: firing rate (spikes/s), 0–80, versus time (ms), 0–500]. Gray (red online): simulation of the IF neuronal network described in the text, averaged over 1024 ensembles. Dark line (blue online): numerical solution of the kinetic moment equations (42a)–(42c). Light line (blue online): numerical solution of the kinetic moment equations (42a)–(42c) without the correlation terms, i.e., with c_AN(t) = 0. A step stimulus is turned on at t = 0 with N = 20, τ = 20, ε_r = 0, ε_E = 4.67, V_T = 1, λ = 0.5, σ₁ = 0.005, σ₂ = 0.01, f_A = 0.05, ν_A = 11, f_N = 0.13, ν_N = 0.13, p = 1, and S̄ = 0.857.
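To make the structure of the reduced description concrete, the following is a minimal finite-volume sketch of the conservation form (42a)/(49a) together with the reset condition (45), under which the probability flux leaving the threshold V_T is reinjected at the reset voltage ε_r. For illustration only, the conductance moments are frozen at a single constant effective value G standing in for λμ_A(v) + (1 − λ)μ_N(v); the full theory of course evolves μ_A(v) and μ_N(v) alongside ρ(v), so this is a structural sketch, not the paper's solver.

```python
import numpy as np

# Illustrative solver for rho_t = d/dv [ U(v) rho ], with
# U(v) = (v - eps_r)/tau + G*(v - eps_E)/tau, and reset flux J(V_T) = J(eps_r).
tau, eps_r, eps_E, VT = 20.0, 0.0, 4.67, 1.0   # voltage parameters as in Fig. 1
G = 0.5          # frozen effective conductance (illustrative assumption only)
n = 400
dv = (VT - eps_r) / n
edges = eps_r + dv * np.arange(n + 1)          # cell edges in voltage

# Drift velocity dv/dt = -U(v); it is everywhere positive (upward) provided
# G > (VT - eps_r)/(eps_E - VT), so every neuron eventually crosses threshold.
vel = -((edges - eps_r) / tau + G * (edges - eps_E) / tau)
assert np.all(vel > 0)

rho = np.ones(n) / (VT - eps_r)                # uniform initial density
dt = 0.5 * dv / vel.max()                      # CFL-stable time step

for _ in range(5000):
    J = np.empty(n + 1)                        # probability flux through each edge
    J[1:] = vel[1:] * rho                      # upwind flux for upward drift
    J[0] = J[-1]                               # reset: threshold flux reinjected
    rho = rho + dt * (J[:-1] - J[1:]) / dv     # conservative update

m = J[-1]                                      # firing rate m(t) = J_V(V_T, t), Eq. (44)
print(m, rho.sum() * dv)                       # rate and total probability (stays 1)
```

The reinjection line makes the update exactly conservative, mirroring how (45) guarantees that (42a) conserves total probability on ε_r < v < V_T.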
XI. DISCUSSION

A comparison among a full simulation of IF neuronal network ensembles corresponding to different realizations of the Poisson inputs and two corresponding numerical solutions of the kinetic moment equations, one including and the other excluding the correlations c_AN(t), is presented in Fig. 1. The results are depicted by the gray (red online), dark (blue online), and light (blue online) lines, respectively. The IF neuronal network simulated here is a straightforward generalization of Eqs. (1) and (3a)–(3c), which includes external drive for the slow conductances G_N^i with the Poisson rate ν_N and strength f_N in addition to the drive for the fast conductances G_A^i with the Poisson rate ν_A and strength f_A. The kinetic moment equations are the corresponding generalizations of (42a)–(42c).

It is apparent that the curve representing the results of the kinetic theory with the correlations included faithfully tracks the temporal decay of the firing rate oscillations computed using the IF network. On the other hand, the kinetic theory that does not take the correlations into account dramatically overestimates the amplitude of the firing rate and thus underestimates the decay rate of its oscillations.

The results depicted in Fig. 1 convincingly illustrate how important it is to include explicitly in the kinetic theory the assumption that both the fast and slow conductances are driven by the same spike train. The correlation terms appearing as the consequence in the kinetic moment equations render an accurate statistical picture of the corresponding IF network dynamics.

Finally, let us remark that developing a kinetic theory for more realistic neuronal networks, for instance, with neurons containing dendritic and somatic compartments and a voltage-dependent time course of the slow conductance mimicking the true NMDA conductance [82-84], would proceed along the same lines as those employed in the current paper.
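For readers who wish to reproduce the qualitative behavior behind Fig. 1, a stripped-down IF network of the kind described above can be sketched as follows. Since Eqs. (1) and (3a)–(3c) are not reproduced in this section, the update rules, input rate, coupling normalization, and jump sizes below are illustrative assumptions rather than the authors' exact model; the key structural feature retained is that the same spike trains, thinned by the release probability p, drive both the fast and the slow conductance of each neuron.

```python
import numpy as np

rng = np.random.default_rng(0)

# Schematic all-to-all IF network with fast (gA) and slow (gN) excitatory
# conductances driven by the SAME spike trains (hypothetical parameter values).
N, tau, eps_r, eps_E, VT = 20, 20.0, 0.0, 4.67, 1.0
lam, sig1, sig2 = 0.5, 0.5, 5.0      # lambda and conductance time scales (ms)
fA, fN, nu = 0.05, 0.13, 1.0         # input jump sizes and Poisson rate (per ms)
S, p = 0.857, 1.0                    # coupling strength and release probability

dt, T = 0.01, 500.0                  # time step and duration (ms)
v = np.full(N, eps_r)
gA = np.zeros(N)
gN = np.zeros(N)
spikes = 0
for _ in range(int(T / dt)):
    # External Poisson input; one shared train per neuron feeds both receptors,
    # thinned by the release probability p (synaptic failure).
    hit = (rng.random(N) < nu * dt) & (rng.random(N) < p)
    gA[hit] += fA
    gN[hit] += fN
    gA -= dt * gA / sig1             # fast conductance decay
    gN -= dt * gN / sig2             # slow conductance decay
    g = lam * gA + (1 - lam) * gN
    v += dt * (-(v - eps_r) - g * (v - eps_E)) / tau
    fired = v >= VT
    k = int(fired.sum())
    if k:
        spikes += k
        v[fired] = eps_r             # reset fired neurons
        kick = rng.random(N) < p     # network spikes also shared by A and N
        gA[kick] += S * k / N
        gN[kick] += S * k / N

rate = 1000.0 * spikes / (N * T)     # population-averaged firing rate (spikes/s)
print(rate)
```

Averaging such runs over many input realizations produces the ensemble-averaged firing rate curve against which the kinetic moment equations are compared in Fig. 1.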
ACKNOWLEDGMENTS

We would like to thank S. Epstein, P. R. Kramer, and L. Tao for helpful discussions. A.V.R. and D.C. were partly supported by NSF Grant No. DMS-0506396 and a grant from the Swartz Foundation. G.K. was supported by NSF Grant No. DMS-0506287 and gratefully acknowledges the hospitality of the Courant Institute of Mathematical Sciences and the Center for Neural Science.
[1] D. Somers, S. Nelson, and M. Sur, J. Neurosci. 15, 5448 (1995).
[2] T. Troyer, A. Krukowski, N. Priebe, and K. Miller, J. Neurosci. 18, 5908 (1998).
[3] D. W. McLaughlin, R. Shapley, M. J. Shelley, and J. Wielaard, Proc. Natl. Acad. Sci. U.S.A. 97, 8087 (2000).
[4] L. Tao, M. J. Shelley, D. W. McLaughlin, and R. Shapley, Proc. Natl. Acad. Sci. U.S.A. 101, 366 (2004).
[5] N. Carnevale and M. Hines, The NEURON Book (Cambridge University Press, Cambridge, England, 2006).
[6] R. Brette, M. Rudolph, T. Carnevale, M. Hines, D. Beeman, J. M. Bower, M. Diesmann, A. Morrison, P. H. Goodman, F. C. Harris, Jr. et al., J. Comput. Neurosci. 23, 349 (2007).
[7] A. V. Rangan and D. Cai, J. Comput. Neurosci. 22, 81 (2007).
[8] M. Tsodyks, T. Kenet, A. Grinvald, and A. Arieli, Science 286, 1943 (1999).
[9] T. Kenet, D. Bibitchkov, M. Tsodyks, A. Grinvald, and A. Arieli, Nature (London) 425, 954 (2003).
[10] D. Jancke, F. Chavance, S. Naaman, and A. Grinvald, Nature (London) 428, 423 (2004).
[11] D. Cai, A. V. Rangan, and D. W. McLaughlin, Proc. Natl. Acad. Sci. U.S.A. 102, 5868 (2005).
[12] A. V. Rangan, D. Cai, and D. W. McLaughlin, Proc. Natl. Acad. Sci. U.S.A. 102, 18793 (2005).
[13] H. R. Wilson and J. D. Cowan, Biophys. J. 12, 1 (1972).
[14] H. R. Wilson and J. D. Cowan, Kybernetik 13, 55 (1973).
[15] F. H. Lopes da Silva, A. Hoeks, H. Smits, and L. H. Zetterberg, Kybernetik 15, 27 (1974).
[16] A. Treves, Network 4, 259 (1993).
[17] R. Ben-Yishai, R. Bar-Or, and H. Sompolinski, Proc. Natl. Acad. Sci. U.S.A. 92, 3844 (1995).
[18] M. J. Shelley and D. W. McLaughlin, J. Comput. Neurosci. 12, 97 (2002).
[19] P. Bressloff, J. Cowan, M. Golubitsky, P. Thomas, and M. Wiener, Philos. Trans. R. Soc. London, Ser. B 356, 299 (2001).
[20] F. Wendling, F. Bartolomei, J. J. Bellanger, and P. Chauvel, Eur. J. Neurosci. 15, 1499 (2002).
[21] P. C. Bressloff, Phys. Rev. Lett. 89, 088101 (2002).
[22] P. C. Bressloff and J. D. Cowan, Phys. Rev. Lett. 88, 078102 (2002).
[23] P. Bressloff, J. Cowan, M. Golubitsky, P. Thomas, and M. Wiener, Neural Comput. 14, 473 (2002).
[24] P. Suffczynski, S. Kalitzin, and F. H. Lopes da Silva, Neuroscience 126, 467 (2004).
[25] L. Schwabe, K. Obermayer, A. Angelucci, and P. C. Bressloff, J. Neurosci. 26, 9117 (2006).
[26] D. Nykamp and D. Tranchina, J. Comput. Neurosci. 8, 19 (2000).
[27] D. Nykamp and D. Tranchina, Neural Comput. 13, 511 (2001).
[28] D. Cai, L. Tao, M. J. Shelley, and D. W. McLaughlin, Proc. Natl. Acad. Sci. U.S.A. 101, 7757 (2004).
[29] D. Cai, L. Tao, A. V. Rangan, and D. W. McLaughlin, Commun. Math. Sci. 4, 97 (2006).
[30] Z. F. Mainen and T. J. Sejnowski, Science 268, 1503 (1995).
[31] L. G. Nowak, M. V. Sanchez-Vives, and D. A. McCormick, Cereb. Cortex 7, 487 (1997).
[32] C. F. Stevens and A. M. Zador, Nat. Neurosci. 1, 210 (1998).
[33] J. Anderson, I. Lampl, I. Reichova, M. Carandini, and D. Ferster, Nat. Neurosci. 3, 617 (2000).
[34] M. Volgushev, J. Pernberg, and U. T. Eysel, J. Physiol. 540, 307 (2002).
[35] M. Volgushev, J. Pernberg, and U. T. Eysel, Eur. J. Neurosci. 17, 1768 (2003).
[36] G. Silberberg, M. Bethge, H. Markram, K. Pawelzik, and M. Tsodyks, J. Neurophysiol. 91, 704 (2004).
[37] D. Cai, L. Tao, and D. W. McLaughlin, Proc. Natl. Acad. Sci. U.S.A. 101, 14288 (2004).
[38] T. Bonhoeffer and A. Grinvald, Nature (London) 353, 429 (1991).
[39] G. Blasdel, J. Neurosci. 12, 3115 (1992).
[40] G. Blasdel, J. Neurosci. 12, 3139 (1992).
[41] R. Everson, A. Prashanth, M. Gabbay, B. Knight, L. Sirovich, and E. Kaplan, Proc. Natl. Acad. Sci. U.S.A. 95, 8334 (1998).
[42] P. Maldonado, I. Godecke, C. Gray, and T. Bonhoeffer, Science 276, 1551 (1997).
[43] G. DeAngelis, R. Ghose, I. Ohzawa, and R. Freeman, J. Neurosci. 19, 4046 (1999).
[44] W. Vanduffel, R. Tootell, A. Schoups, and G. Orban, Cereb. Cortex 12, 647 (2002).
[45] B. Knight, J. Gen. Physiol. 59, 734 (1972).
[46] W. Wilbur and J. Rinzel, J. Theor. Biol. 105, 345 (1983).
[47] L. F. Abbott and C. van Vreeswijk, Phys. Rev. E 48, 1483 (1993).
[48] T. Chawanya, A. Aoyagi, T. Nishikawa, K. Okuda, and Y. Kuramoto, Biol. Cybern. 68, 483 (1993).
[49] G. Barna, T. Grobler, and P. Erdi, Biol. Cybern. 79, 309 (1998).
[50] J. Pham, K. Pakdaman, J. Champagnat, and J. Vibert, Neural Networks 11, 415 (1998).
[51] N. Brunel and V. Hakim, Neural Comput. 11, 1621 (1999).
[52] W. Gerstner, Neural Comput. 12, 43 (2000).
[53] A. Omurtag, B. Knight, and L. Sirovich, J. Comput. Neurosci. 8, 51 (2000).
[54] A. Omurtag, E. Kaplan, B. Knight, and L. Sirovich, Network 11, 247 (2000).
[55] E. Haskell, D. Nykamp, and D. Tranchina, Network Comput. Neural Syst. 12, 141 (2001).
[56] A. Casti, A. Omurtag, A. Sornborger, E. Kaplan, B. Knight, J. Victor, and L. Sirovich, Neural Comput. 14, 957 (2002).
[57] N. Fourcaud and N. Brunel, Neural Comput. 14, 2057 (2002).
[58] A. V. Rangan and D. Cai, Phys. Rev. Lett. 96, 178101 (2006).
[59] A. V. Rangan, D. Cai, and L. Tao, J. Comput. Phys. 221, 781 (2007).
[60] M. Hollmann and S. Heinemann, Annu. Rev. Neurosci. 17, 31 (1994).
[61] G. L. Collingridge, C. E. Herron, and R. A. J. Lester, J. Physiol. 399, 283 (1989).
[62] I. D. Forsythe and G. L. Westbrook, J. Physiol. 396, 515 (1988).
[63] S. Hestrin, R. A. Nicoll, D. J. Perkel, and P. Sah, J. Physiol. 422, 203 (1990).
[64] N. W. Daw, P. G. S. Stein, and K. Fox, Annu. Rev. Neurosci. 16, 207 (1993).
[65] K. A. Jones and R. W. Baughman, J. Neurosci. 8, 3522 (1988).
[66] J. M. Bekkers and C. F. Stevens, Nature (London) 341, 230 (1989).
[67] C. Rivadulla, J. Sharma, and M. Sur, J. Neurosci. 21, 1710 (2001).
[68] S. Redman, Physiol. Rev. 70, 165 (1990).
[69] N. Otmakhov, A. M. Shirke, and R. Malinow, Neuron 10, 1101 (1993).
[70] C. Allen and C. F. Stevens, Proc. Natl. Acad. Sci. U.S.A. 91, 10380 (1994).
[71] N. R. Hardingham and A. U. Larkman, J. Physiol. 507, 249 (1998).
[72] S. J. Pyott and C. Rosenmund, J. Physiol. 539, 523 (2002).
[73] M. Volgushev, I. Kudryashov, M. Chistiakova, M. Mukovski, J. Niesmann, and U. T. Eysel, J. Neurophysiol. 92, 212 (2004).
[74] C. Myme, K. Sugino, G. Turrigiano, and S. Nelson, J. Neurophysiol. 90, 771 (2003).
[75] C. Schroeder, D. Javitt, M. Steinschneider, A. Mehta, S. Givre, H. Vaughan, Jr., and J. Arezzo, Exp. Brain Res. 114, 271 (1997).
[76] G. Huntley, J. Vickers, N. Brose, S. Heinemann, and J. Morrison, J. Neurosci. 14, 3603 (1994).
[77] P. C. Bressloff and J. G. Taylor, Phys. Rev. A 41, 1126 (1990).
[78] A. Manwani and C. Koch, Neural Comput. 11, 1797 (1999).
[79] J. A. White, J. T. Rubinstein, and A. R. Kay, Trends Neurosci. 23, 131 (2000).
[80] E. Schneidman, B. Freedman, and I. Segev, Neural Comput. 10, 1679 (1998).
[81] E. Cinlar, in Stochastic Point Processes: Statistical Analysis, Theory, and Applications, edited by P. Lewis (Wiley, New York, 1972), pp. 549-606.
[82] C. Jahr and C. Stevens, J. Neurosci. 10, 3178 (1990).
[83] A. E. Krukowski and K. D. Miller, Nat. Neurosci. 4, 424 (2001).
[84] X.-J. Wang, J. Neurosci. 19, 9587 (1999).