The concurrence of form and function in developing networks: An explanation for synaptic pruning
Ana P. Millán (a,∗), J. J. Torres (a), S. Johnson (b), and J. Marro (a)
(a) Institute Carlos I for Theoretical and Computational Physics, University of Granada, Spain
(b) Warwick Mathematics Institute and Centre for Complexity Science, University of Warwick, Coventry CV4 7AL, UK.
(∗) Corresponding author ([email protected]).
May 9, 2017
Abstract
A fundamental question in neuroscience is how the structure and function of neural systems are related. We study this interplay by combining a familiar autoassociative neural network with an evolving mechanism for the birth and death of synapses. A feedback loop then arises, leading to two qualitatively different behaviours. In one, the network structure becomes heterogeneous and disassortative, and the system displays good memory performance. In the other, the structure remains homogeneous and the system is incapable of pattern retrieval. These findings are compatible with experimental results on early brain development, and provide an explanation for the existence of synaptic pruning. Other evolving networks, such as those of protein interactions, might share the basic ingredients for this feedback loop, and indeed many of their structural features are as predicted by our model.
1 Introduction
The fundamental question in neuroscience of how structural and functional properties
of neural networks are related has recently been considered in terms of large-scale
connectomes and functional networks obtained with various imaging techniques [1]. But
at the lower level of individual neurons and synapses there is still much to learn. From
this point of view, the brain can be regarded as a complex network in which nodes (or
vertices) represent neurons, and edges (or links) stand for synapses. It is then possible
to use mathematical models which capture the essence of neural and synaptic activity
to study how a great many such elements can give rise to collective behaviour with at
least some of the characteristics of cognition [2, 3].
Following this strategy, structural properties of the underlying network have been
found to affect the behaviour of the system in various ways [4, 5, 6, 7]. For instance,
the robustness to noise of memory performance is influenced by the heterogeneity of
the degree distribution (the degree of each node being the number of nodes it is connected to) as well as by degree-degree correlations [8, 9]. And indeed, networks that
connect neurons by means of synapses have been found to exhibit degree heterogeneity
(according to roughly scale-free distributions) and negative degree-degree correlations
(disassortativity), properties which, it has been suggested, may contribute to robustness
and efficiency [10]. It is also known that these networks are not static: new synapses
grow and others disappear in response to neural activity [11, 12, 13, 14].
Models in which networks are gradually formed, for instance by addition of nodes
and edges or by rewiring of the latter, have long been studied in many contexts. Typically, the probabilistic addition or deletion of elements at each time step is a function
of the existing structure. For example, in the familiar Barabási-Albert model, a node’s
probability of receiving a new edge is proportional to its degree [15]. These rules often
give rise to phase transitions (almost invariably of a continuous nature), such that different kinds of network topology can ensue, depending on parameters [16]. One such
model was used to interpret data on brain development, and it was found that several
topological characteristics of empirical networks were reproduced close to the critical
point separating phases of homogeneous and heterogeneous degree distributions [17].
Here we combine this model of brain network development with the Amari-Hopfield
neural network [2, 3]. The probabilities of synaptic growth and death are set to depend on neural activity, as has been empirically observed, instead of on node degree.
This is in the spirit of other co-evolving network models which have been studied in
various fields [18, 19, 20, 21, 22]. We find that, for certain parameter ranges, the phase
transition between memory and randomness becomes discontinuous (i.e. resembling a
first order thermodynamic transition). Depending on initial conditions, the system can
either evolve towards heterogeneous networks with good memory performance, or homogeneous ones incapable of memory. The reason is a feedback loop between structure
and function, as we shall go on to show. Previous studies of co-evolving brain networks
have focussed on the temporal evolution of mean degree [23], particular microscopic
mechanisms [24], the development of certain computational capabilities [25], or the effects of specific growth rules [26]. However, to the best of our knowledge, this is the
first time that a feedback loop, and the ensuing discontinuous transition, have been
identified.
In many species, early brain development seems to be dominated by a remarkable
process known as synaptic pruning. The brain starts out with a relatively high density of
synapses, which is gradually reduced as the individual matures. In humans, for example,
synaptic density at birth is about twice what it will be at puberty, and certain disorders,
such as autism and schizophrenia, have been related to this reduction process [27, 28, 29, 30]. It has been suggested that such synaptic pruning may represent some kind of
optimization, perhaps to minimise energy consumption or related genetic information
[24]. But why begin with so many more synapses than required? Our proposal can give
a hint in this regard.
It seems that the strategy could be to start with a homogeneous, randomly wired network, which then evolves under a competition between form and function that gradually optimizes the structure. A possible strategy, consistent with experiments, for providing memory performance during this phase could be as simple as maintaining a highly connected, energy-consuming network during infancy. Our results thus suggest a simple explanation for the existence of an initial explosion of connections followed by a synaptic pruning process.
Finally, according to some indications which we shall describe, chances are that the mechanism discussed in this paper is general enough to describe the underlying dynamics of other biological systems, such as protein interaction networks.
2 Model definition
Consider an undirected network with N nodes, and edges which can change in discrete time. At time t, the adjacency matrix is {eij(t)} (for i, j = 1, ..., N), with elements 1 or 0 according to whether or not there is an edge between the pair of nodes (i, j). The degree of node i at time t is ki(t) = Σ_{j=1}^{N} eij(t), and the mean degree of the network is κ(t) = N⁻¹ Σ_{i=1}^{N} ki(t).
The neural network model. Each node represents a neuron, and each edge a synapse. We shall use the Amari-Hopfield model, according to which each neuron has two possible states at each time t, given by si(t) ∈ {0, 1} [31]. Each edge (i, j) has a synaptic weight wij, and the field at neuron i is hi(t) = Σ_{j=1}^{N} wij eij(t) sj(t). The states of all neurons are updated in parallel at every time step according to the transition probabilities

P{si(t + 1) = 1} = ½ [1 + tanh(β [hi(t) − θi])],

where θi = ½ Σ_{j=1}^{N} wij is a neuron's threshold for firing, and β = T⁻¹ is a 'noise' parameter controlling stochasticity (analogous to the inverse temperature in statistical physics) [32]. The synaptic weights can be used to encode information in the form of a set of P 'patterns' {ξiμ; μ = 1, ..., P}, with mean a0 = ⟨ξμ⟩: specific configurations of neural states which can be regarded as memories. We shall use the Hebb learning rule:

wij = [κ0 a0 (1 − a0)]⁻¹ Σ_{μ=1}^{P} (ξiμ − a0)(ξjμ − a0).

The canonical Amari-Hopfield model [31], which serves here as a reference, is defined on a fixed fully connected network (eij = 1 for all i ≠ j), and it exhibits a continuous phase transition at the critical temperature T = 1.
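As a concrete illustration, the following minimal Python sketch (ours, not code from the original study; the network size, noise level and pattern statistics are arbitrary choices) implements these dynamics on the fixed fully connected network of the canonical model:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, T = 200, 3, 0.3                      # neurons, patterns, noise level (arbitrary)

xi = rng.integers(0, 2, size=(P, N)).astype(float)  # binary patterns xi^mu
a0 = xi.mean()                             # mean pattern activity a0
e = np.ones((N, N)) - np.eye(N)            # fully connected: e_ij = 1 for all i != j
kappa0 = e.sum(axis=1).mean()              # mean degree used in the Hebb prefactor

# Hebb rule: w_ij = [kappa0 a0 (1 - a0)]^-1 sum_mu (xi_i - a0)(xi_j - a0)
w = (xi - a0).T @ (xi - a0) / (kappa0 * a0 * (1.0 - a0))
np.fill_diagonal(w, 0.0)
theta = 0.5 * w.sum(axis=1)                # firing thresholds theta_i

s = xi[0].copy()                           # start near pattern 0, 10% corrupted
flip = rng.random(N) < 0.1
s[flip] = 1.0 - s[flip]

for _ in range(30):                        # parallel stochastic updates
    h = (w * e) @ s                        # local fields h_i
    p_fire = 0.5 * (1.0 + np.tanh((h - theta) / T))  # beta = 1/T
    s = (rng.random(N) < p_fire).astype(float)

m0 = ((xi[0] - a0) * (s - a0)).sum() / (N * a0 * (1.0 - a0))
print(f"overlap with pattern 0: {m0:.2f}")  # close to 1 when retrieval succeeds
```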
The pruning model. The structure of the network also changes in time. For this we shall adapt the model put forward in Ref. [17]. At each time t, each node has a probability P_i^g(t) of being assigned a new edge to another, randomly chosen node, and likewise a probability P_i^l(t) of losing one of its edges. These probabilities are given by

P_i^g = u(κ) π(Ii),    P_i^l = d(κ) η(Ii),    (1)
where we have dropped the time dependence of all variables for clarity. The first terms
on the right-hand side of each equation represent a global dependence to account for the
fact that such processes rely in some way on diffusion of different molecules through
large areas of tissue, and we take the mean degree κ at each time as a proxy (note
that, since the number of nodes is fixed, the mean degree is proportional to the total
number of synapses). The simplest choice for this effect consistent with empirical data
on synaptic pruning in humans [17, 23] is
u(κ) = (n/N) [1 − κ/(2κ∞)],    d(κ) = (n/N) κ/(2κ∞),    (2)
where n is the number of edges to be added or removed at each step, which sets the speed
of the process, and κ∞ is the stationary mean degree (i.e., the value to which the mean degree κ tends as t → ∞). It should also be noticed, however, that experimental studies
[23] have revealed a fast initial overgrowth of synapses associated with the transient
existence of different growth factors. This and other particular mechanisms could be
accounted for by adding extra factors in Eqs. (2). For example, initial overgrowth has
been reproduced by adding the term a exp(−t/τg) to the probability of growth [17].
The second terms on the right-hand side of Eqs. (1) were previously considered [17] to be
functions of ki . This was meant as a proxy for the empirical observations that synaptic
growth and death are determined by neural activity [11, 12, 13, 14]. However, we now
make this dependence explicit, by considering a dependence on the local current at each
neuron: Ii = |hi − θi |. This couples neural dynamics with the evolving network.
Monte Carlo simulations. The initial conditions for the neural states are random.
For the network topology we can draw an initial degree sequence from some distribution
p(k, t = 0), and then place edges between nodes i and j with a probability proportional
to ki (0)kj (0), as in the configuration model. Time evolution is then accomplished in
practice via computer simulations as follows. First, the numbers of links to be created and destroyed are drawn from two Poisson distributions with means Nu(κ) and Nd(κ), respectively. Then, as many times as indicated by these draws, we choose a node i with probability π(Ii) to be assigned a new edge to another, randomly chosen node; and similarly we choose a node j according to η(Ij) to lose an edge to one of its neighbours, randomly chosen. This procedure uses the BKL algorithm to ensure proper evolution towards stationarity [32]. Each node can then be chosen via two paths: either through the process with probability π(Ii) for a gain (or η(Ii) for a loss), or when it is randomly connected to (or disconnected from) an already chosen node. Therefore, the effective values of the second factors in (1) are πe = π(Ii) + 1/N and ηe = η(Ii) + ki/(κN).
For the sake of simplicity, we shall consider πe and ηe to be power laws, as in Ref. [17], which allows one to move smoothly from a sub-linear to a super-linear dependence with a single parameter: πe ∝ Ii^α and ηe ∝ Ii^γ. This leads to

π(Ii) = [2/(N⟨I^α⟩)] (Ii^α − ⟨I^α⟩/2),    η(Ii) = Ii^γ / (N⟨I^γ⟩).    (3)
This assumption is most important, as it couples current activity and structure. However, the particular functional forms are to some extent arbitrary, and others could be considered. In our scenario, the parameters α and γ characterize the dependence of the local probabilities on the local currents, and account for the different proteins and factors that control synaptic growth. These could in principle be obtained experimentally, although to the best of our knowledge this has not yet been done.

The time scale for structural changes is set by the parameter n in (2), whereas the time unit for activity changes is the number hs of Monte Carlo steps (MCS) between consecutive network updates. Our study shows a weak dependence on this parameter in the cases of interest, so we only report results here for hs = 10 MCS.
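To make the procedure concrete, here is a minimal Python sketch of one structural update step following Eqs. (1)-(3); it is an illustration of ours, not the authors' code: the parameter values are placeholders, the node-selection probabilities are the effective power laws normalized over nodes, and a fixed random vector stands in for the local currents Ii instead of the coupled neural dynamics.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n, kappa_inf, alpha, gamma = 200, 10, 20.0, 1.0, 1.0   # placeholder values

def u(kappa):                                  # global growth factor, Eq. (2)
    return (n / N) * (1.0 - kappa / (2.0 * kappa_inf))

def d(kappa):                                  # global loss factor, Eq. (2)
    return (n / N) * kappa / (2.0 * kappa_inf)

def rewiring_step(e, I):
    """Draw Poisson numbers of gains/losses with means N*u and N*d, then pick
    nodes with probabilities ~ I^alpha (gain) and ~ I^gamma (loss), Eq. (3)."""
    kappa = e.sum(axis=1).mean()
    p_gain = I**alpha / (I**alpha).sum()       # normalized effective pi_e
    p_loss = I**gamma / (I**gamma).sum()       # normalized effective eta_e
    for i in rng.choice(N, size=rng.poisson(N * u(kappa)), p=p_gain):
        j = rng.integers(N)                    # random partner (may already be linked)
        if i != j:
            e[i, j] = e[j, i] = 1
    for i in rng.choice(N, size=rng.poisson(N * d(kappa)), p=p_loss):
        nbrs = np.flatnonzero(e[i])
        if nbrs.size:                          # remove one random incident edge
            j = rng.choice(nbrs)
            e[i, j] = e[j, i] = 0
    return e

e = (rng.random((N, N)) < 10.0 / N)            # sparse random initial graph
e = np.triu(e, 1); e = (e + e.T).astype(int)   # symmetric, no self-loops
I = rng.random(N) + 0.1                        # stand-in for the currents |h_i - theta_i|
for t in range(2000):
    e = rewiring_step(e, I)
print("mean degree:", e.sum(axis=1).mean())    # drifts towards kappa_inf
```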
The macroscopic state. This may be characterized by the overlap between the neural activity and each of the memorized patterns,

mμ = [N a0 (1 − a0)]⁻¹ Σ_{i=1}^{N} (ξiμ − a0)(si − a0),

and by the mean neural activity, M = N⁻¹ Σ_{i=1}^{N} si. Also of interest is the degree distribution at each time, p(k, t), whose heterogeneity may be measured via g = exp(−σ²/κ²), where σ² is the variance of the degrees. g equals one for the homogeneous case p(k) = δ(k − κ) and tends to zero for highly heterogeneous networks. We also monitored the Pearson correlation coefficient applied to the edges,

r = [⟨k⟩⟨k² knn(k)⟩ − ⟨k²⟩²] / [⟨k⟩⟨k³⟩ − ⟨k²⟩²],

where knn(k) is the mean nearest-neighbour-degree function. This characterizes the degree-degree correlations, which have important implications for network connectedness and robustness. Whereas most social networks are assortative (r > 0), almost all other networks, whether biological, technological or information-related, seem to be generically disassortative (r < 0), meaning that high degree nodes tend to have low degree neighbours, and vice versa. In fact, previous studies showed that heterogeneous networks favour the emergence of disassortative correlations [33, 34].
3 Main results
Consider first what we shall call the topological limit of the model, in which there is no neuron dynamics and one substitutes Ii → ki, so that ηi = η(ki) and πi = π(ki). This immediately leads to the master equation

dp(k, t)/dt = u(κ) πe(k − 1) p(k − 1, t) + d(κ) ηe(k + 1) p(k + 1, t) − [u(κ) πe(k) + d(κ) ηe(k)] p(k, t),

which is exact in the limit of no degree-degree correlations between nodes. This equation can be numerically integrated for each α and γ to obtain the corresponding degree distribution. Three qualitatively different behaviours, leading in practice to different phases, follow for ki ≫ 1 (see figure 1a):
1. α < γ ⟹ ηe > πe, i.e., high degree nodes are more likely to lose than to gain edges. Excluding low k, a relatively homogeneous degree distribution emerges, with the probability of high degree nodes vanishing rapidly beyond a maximum (circles in figure 1a).

2. α > γ ⟹ ηe < πe, i.e., high degree nodes are more likely to continue to gain than to lose edges. Since the stationary number of edges (proportional to Nκ∞) is fixed, this leads to a kind of bimodal degree distribution, with high probability for very high degree nodes (squares in figure 1a).

3. α = γ ⟹ ηe = πe, i.e., very connected nodes are as likely to gain as to lose edges. Excluding low degrees, the stationary distribution then decays as a power law with exponent µ close to 2.5 (diamonds in figure 1a), in accordance with long-range connections observed in the human brain [7] and with measurements in protein interaction networks [35]. Notice that the condition ηe = πe (for ki ≫ 1) is necessary to obtain this critical behaviour, despite previous studies suggesting the opposite [17] (see figure 1b).
This description is valid for more general choices of both ηe and πe, as long as both are monotonic functions. Notice also that even for α < γ the stationary network presents some degree of heterogeneity, that is, the degree distribution is not of the form δ(k − κ∞), due to the intrinsic noise associated with edge dynamics.
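As an illustration, the master equation above can be integrated numerically; the following Python sketch is our own Euler discretization with illustrative parameter values, writing the effective probabilities in the form derived earlier (exact for γ = 1, with the random-partner contributions included) and treating the k = 0 boundary crudely by renormalizing.

```python
import numpy as np

N, k_max, kappa_inf, n = 3200, 300, 50.0, 10.0
alpha, gamma = 1.0, 1.0                    # the critical case alpha = gamma

k = np.arange(k_max + 1, dtype=float)
p = np.zeros(k_max + 1)
p[int(kappa_inf)] = 1.0                    # initial delta distribution
dt = 1.0

for _ in range(100_000):
    kappa = (k * p).sum()
    u = (n / N) * (1.0 - kappa / (2.0 * kappa_inf))     # global terms, Eq. (2)
    d = (n / N) * kappa / (2.0 * kappa_inf)
    pi_e = 2.0 * k**alpha / (N * (p * k**alpha).sum())  # 2 k^a / (N <k^a>)
    eta_e = 2.0 * k**gamma / (N * (p * k**gamma).sum()) # exact for gamma = 1
    gain = N * u * pi_e * p                # flux k -> k + 1
    loss = N * d * eta_e * p               # flux k -> k - 1
    dp = -(gain + loss)
    dp[1:] += gain[:-1]                    # arrivals from k - 1
    dp[:-1] += loss[1:]                    # arrivals from k + 1
    p = np.clip(p + dt * dp, 0.0, None)
    p /= p.sum()                           # crude treatment of boundaries

print("stationary mean degree:", round((k * p).sum(), 1))  # approx kappa_inf
```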
One can easily prove that, in the topological limit, γ = 1 corresponds to links being chosen at random for removal, by noticing that the probability of choosing any given edge (i, j) for removal is then pij = ki⁻¹ ηe(ki) + kj⁻¹ ηe(kj) = 1/(⟨k⟩N), independently of the degrees. This can be seen as a first-order approximation to the pruning dynamics, and it also allows for powerful simplifications in the computations. Furthermore, the relevant parameter is the relation between α and γ, whereas their absolute values only have a quantitative effect, e.g., on the exponent of the critical degree distribution. Therefore, we shall mainly refer here to the case γ = 1.
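This is easy to verify numerically. The sketch below (illustrative; the node-selection probability is normalized to sum to one, which differs from ηe only by a constant factor) confirms that choosing a node with probability proportional to its degree and then a random incident edge selects every edge uniformly:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)
N, draws = 50, 100_000

e = (rng.random((N, N)) < 0.2)
e = np.triu(e, 1); e = (e + e.T).astype(int)   # random symmetric graph
k = e.sum(axis=1)

counts = Counter()
for _ in range(draws):
    i = rng.choice(N, p=k / k.sum())           # node picked with prob ~ k_i
    j = rng.choice(np.flatnonzero(e[i]))       # then a random incident edge
    counts[frozenset((i, j))] += 1

freqs = np.array([counts[fs] for fs in counts]) / draws
print("edges:", e.sum() // 2)
print("expected uniform frequency:", round(2 / (N * k.mean()), 5))
print("measured min/max frequency:", round(freqs.min(), 5), round(freqs.max(), 5))
```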
Concerning the full model, in which pruning depends on Ii, computer simulations display a rich emergent phenomenology depending on the levels of stochasticity and heterogeneity. In particular, we focus on the influence of the noise level T and the local probability parameter α. Other parameters are set as in a previous study [17]; this is for simplicity, and also because the present model then agrees with the experimental data reported there. Preliminary simulations suggested a dependence on the heterogeneity of the initial condition (IC), so we considered two different types of IC, namely homogeneous, p(k, t = 0) = δ(k − κ0), and heterogeneous, p(k, t = 0) ∝ k⁻², networks with fixed κ0.
Phase diagram. Our coupled dynamics leads to a rich phenomenology, including discontinuous transitions and multistability. Three different phases can be identified by monitoring the order parameters g(α, T), m(α, T) and r(α, T), as illustrated in figure 2.
Figure 1: Topological limit. (a) Degree distributions for the three different phases: subcritical (α = 0.8, purple circles), critical (α = 1.0, green diamonds) and supercritical (α = 1.2, blue squares); N = 3200 and γ = 1.0. Notice how p(k) decays slowly in the critical case, approximately as k⁻µ with µ ≈ 2.5. (b) g(α) for the topological case (purple squares), and for a different set of local probabilities (green circles) that do not fulfil the condition ηe = πe for ki ≫ 1 (in particular, we set here ηi ∝ ki and πi ∝ ki^α, with α rescaled so that both transitions occur at α = 1). The transition in the latter case is discontinuous, and it fails to produce scale-free networks. Lines follow from integration of the corresponding master equations, whereas points are Monte Carlo data; deviations are due to the emergence of small degree-degree correlations that are not taken into account by the master equation. Full model: (c) Normalized local field of each neuron as a function of its (normalized) degree, showing the correlation between activity and topology that emerges in the system. This is almost one-to-one in the critical case for every node degree, whereas homogeneous networks present a very broad correlation, as bimodal ones do for low degrees (notice the double logarithmic scale). Symbols as in (a).
Figure 2: Behaviour along the dotted lines marked in the diagram of figure 3, for N = 1600, κm = 10 and n = 5. Left: g(T) (top) and m(T) (bottom) curves for different values of α, as indicated, for both homogeneous and heterogeneous IC. Right: g(α) (top) and m(α) (bottom) curves for different temperatures, as indicated. Solid symbols are for homogeneous IC; empty symbols for heterogeneous IC.
Figure 3: Phase diagram as obtained from analysis of the order parameters (m, g and r) defined in the main text. The transition lines, corresponding to αct(T) and αcm(T) as labelled, are drawn solid or dashed according to whether the corresponding transition is continuous or discontinuous. The horizontal and vertical dotted lines correspond to the cases illustrated in figure 2.
(Our data for r(α, T) are not shown here since they do not provide new information.)
Analysis of these and similar curves for different parameter values leads to the phase
diagram in figure 3. That is, there is a homogeneous memory phase for low values of both α and T, characterized by high overlap m, high g and low (negative, almost zero) correlation r (see the curves in fig. 2a,c for α = 0.5 and low T, and in fig. 2b,d for T < 1.2 and low α). The system is then able to reach and maintain memory, while the topological process leads to a homogeneous network configuration. Owing to the memory, there is a strong correlation between the physiological state of the network, as measured by the currents Ii, and its topology, as measured by the degrees ki (see figure 1c).
A continuous increase of α leads, through a topological phase transition corresponding to the line α = αct(T) in the diagram of fig. 3, to heterogeneous (bimodal) networks; along this line one finds scale-free degree distributions. The phase transition is revealed in the g(α) curves of fig. 2b (for T = 0.9, 1.0 and 1.1) as a fast (yet continuous) decay to zero, and it also shows in the overlap curves m(α) (figure 2d), where memory initially grows monotonically with α, decreases slightly at the transition, and then approaches a constant value. This behaviour reveals a strong coupling between structure and memory, and it also shows how scale-free networks can optimize memory recovery for a given set of control parameters. We call this the heterogeneous memory phase (figure 3); it is characterized by high overlap m, very low (almost zero) g and high negative r, indicating a memory state with a heterogeneous (bimodal) disassortative structure. Interestingly, this phase extends up to high noise levels as a consequence of network heterogeneity, which happens to aid memory recovery. Moreover, memory recovery in turn favours network heterogenization, in a feedback manner, enhancing the stability of the state.
Dynamics is finally governed by noise as T is further increased, and the stationary network is then homogeneous. This results in a homogeneous noisy phase, characterized by low (almost zero) overlap, high g and low (almost zero) r (see fig. 2a,c for α = 0.5 and 1.0 at high T, and fig. 2b,d for T > 0.9 and low α).
Interestingly, the transition from memory to noise differs strongly depending on the initial low-T phase, i.e., on the value of α. For low α (homogeneous memory phase), a noise increase causes a transition from memory to noise through the line α = αcm(T) (see fig. 2a and c for α = 0.5, and the diagram of fig. 3), which also leads to more homogeneous networks, since the structural evolution is driven by noise. Our results indicate that this transition is continuous. On the other hand, the transition at higher α from the heterogeneous memory phase is discontinuous, and it includes a bi-stability region (see the striped area in figure 3), which is evident when one compares g(α, T) and m(α, T) for homogeneous and heterogeneous IC, particularly for T = 1.2 (fig. 2b and d) and for α = 0.5 (fig. 2a and c).
The existence of multistability illustrates how memory promotes itself in a heterogeneous network, a direct consequence of the coupled dynamics in our model. To understand this effect, we recall previous studies on static networks [8, 9] which showed that heterogeneous networks present better memory recovery than homogeneous ones at high noise levels, particularly for T > 1. Besides this effect, in our model the presence of memory leads to some degree of network heterogenization, which depends on α, giving rise to a memory-heterogeneity feedback loop. This implies differences between homogeneous and heterogeneous IC, which for high α and T translate into a multistability region and the presence of memory for T > 1.
To further visualize this effect, notice how the complex transition lines found here allow one to trace isolines (of either constant α or constant T) that pass through all three phases. In particular, for T = 1.0 and 1.1 (fig. 2b and d) the system is initially in the homogeneous memory phase, and an increase in α leads to networks that are heterogeneous enough to maintain memory across the transition line αcm(T). The memory-heterogeneity feedback loop then appears, favouring further heterogenization and causing a "fall" of g(α) while memory is maintained. A further increase in α leads to a topological transition at αct(T) towards heterogeneous networks. Similarly, for α = 1 (fig. 2a and c) the system visits the three phases: at very low T, α is high enough to develop heterogeneous memory networks, but a slight increase in noise suffices to suppress heterogeneity (the system crosses αcm(T) from above), until the noise is too high and memory is also lost (leading to the homogeneous noisy phase).
Therefore, a main observation here is that the model exhibits an intriguing relation between memory and topology, which induces complex transitions that might be relevant for understanding actual systems. In particular, the inclusion of a topological process allows for memory recovery when T > 1, whereas the presence of thermal noise shifts the topological transition (which in the topological limit occurs at α = 1) to α > 1. These findings may also have implications for network design, e.g. to help optimize memory recovery in a noisy environment.
Synaptic pruning with initial overgrowth. A question that naturally arises here is the significance of the heterogeneous initial network state, which so far has been imposed artificially. There are, however, mechanisms not considered here, such as additional growth factors, that could lead to the same effect, that is, to an enhanced stability of the memory state. In fact, preliminary simulations show that homogeneous IC can lead to strongly improved memory retrieval after (and also during) pruning when the term a exp(−t/τg) is included in the growth probability. This is due to the combination of two effects. First, such a term creates a long transient with high κ, which allows the network to maintain a memory. Second, since the preferential-attachment process is already taking place, the network structure begins to heterogenize and hubs appear, which ensures memory retrieval when κ diminishes. It would therefore seem that a rapid initial growth mechanism could be essential in allowing the subsequent synaptic pruning to ensure proper brain development.
Protein interaction networks. Many complex systems can be described as networks which evolve under the influence of node activity. It is therefore likely that the structure-function feedback loop we have described for neural networks plays a role in other settings too. For instance, this mechanism could be compatible with experimental and theoretical studies of protein interaction networks. These gather different types of metabolic interactions, whether physical, due to a physico-chemical contact between two proteins, or mediated by an action on the corresponding gene. Interactions can then be either inhibitory or excitatory and have different strengths, like the edges in our model. This structure is known to change on an evolutionary time scale, that is, during the evolution of species rather than of individuals, and these changes combine a random origin (since they are typically due to mutations) with a "force" driven by natural selection, which is likely to be activity dependent (see [35] for a detailed analysis). The similarities between this picture and our model suggest a parallel concerning the resulting network topologies as well.
To make more specific contact, note that measurements on these networks show scale-free behaviour of several important node properties as functions of the degree: (a) degree distributions p(k) ∝ k⁻µ with µ ≈ 2.5 [35]; (b) clustering coefficient C(k) ∝ k⁻θ with θ ≈ 1 for metabolic networks and θ ≈ 2 for protein interaction networks [35, 36]; and (c) mean nearest-neighbour degree K(k) ∝ k⁻ν with ν ≈ 0.6 [37]. Our model also produces scale-free behaviour of these magnitudes, thus reproducing basic qualitative properties of these networks. Moreover, we reproduce the observed exponent µ, pointing to a general mechanism in biological systems. Also interestingly, in our model we find that C(k) decays as k⁻¹ for control parameters in the heterogeneous memory phase close to the topological transition αct, the same region where power-law degree distributions emerge. Moreover, the function K(k) decays in our model as a power law for almost every value of the parameters. According to [37], this is related to the intrinsic topological dynamics of the model, which create an asymmetry between the pair of nodes that gain (or lose) an edge. We find ν ≈ 1.0 for homogeneous networks and ν ≈ 2.0 for heterogeneous bimodal networks, whereas scale-free networks lie in between, with ν ≈ 1.5. Therefore, besides reproducing these functions qualitatively, we are able to approximately obtain µ and θ for metabolic networks. Also, given that the particular values of these exponents depend on the local probabilities in our model, we expect that they can be accurately reproduced by appropriately tuning the parameters α and γ.
In conclusion, there are indications that some of the main topological properties of
protein interaction networks could be qualitatively reproduced with simple adjustments
or extensions of the model. This suggests a general mechanism underlying the dynamics
of different biological systems, which is likely to extend as well to other contexts.
4 Final discussion
It is well known from a large body of work that the brain stores information in synaptic conductances, which mediate neural activity, and that this activity in turn affects the birth and death of synapses. To explore what this feedback might entail, we have coupled an autoassociative (attractor) neural network model with a model for the evolution of its network topology which describes synaptic pruning. The resulting picture allows for a two-way functional coupling between physiological activity in the network and its topological structure, which is very likely to occur in nature. This opens a way to relate cognitive abilities of the brain, such as memory-related processes, to known biophysical dynamics at the cellular level.
Taken separately, each of the two models involved exhibits a continuous phase transition: between a memory phase and a noisy one in the neural network, and between homogeneous and heterogeneous topologies in the synaptic pruning model. The coupled model continues to display both of these transitions in certain parameter regimes, and a new transition emerges which is discontinuous. One of the hallmarks of such a discontinuous transition is the coexistence of phases. In other words, situations with the same parameters but slightly different initial conditions can lead to markedly different outcomes. In this case, whether the attractor neural network is initially capable of memory retrieval influences the emerging network structure, which feeds back into the memory performance. There is therefore an interesting feedback loop between structure, or form, and function which determines the capabilities of the eventual system that the process yields. Our picture thus relates well-defined mathematical properties to specific dynamic processes in the brain and, in particular, addresses how neural activity can impact early brain development.

The models we have coupled in this work are the simplest ones able to reproduce the behaviour of interest, namely associative memory and network topologies with realistic features. Our choices were made in the spirit of identifying the minimum ingredients necessary for the feedback loop between structure and function to arise. There is, however, no indication that this feedback would disappear in more complex models.
On the other hand, neural systems may not be the only ones to display the properties we found to be sufficient for the existence of this feedback loop between structure and function. Networks of proteins, metabolites or genes also adopt specific configurations (often associated with attractors of some dynamics), and the existence of interactions between nodes might depend on their activity. It is thus noteworthy that we find some evidence that protein networks have topological features which emerge naturally from the coupled models considered here. One may expect the interplay between form and function described in this work to play a role in many natural complex systems.
Finally, an unresolved question in neuroscience is why brain development proceeds via severe synaptic pruning. Fewer synapses require less metabolic energy, so why not simply begin with the ideal number? Presumably, much less genetic information is needed to build a randomly wired neural network than to determine a priori the exact locations of synapses, or even to specify other topological properties, such as the degree distribution. As our study shows, a heterogeneous topology vastly increases memory efficiency in a noisy environment. The way to achieve this seems to be to start with an initial overgrowth of the synaptic density, and then follow a pruning process guided by early brain activity. This is enough to provide a heterogeneous network configuration that efficiently supports memory after pruning has greatly reduced the number of synapses. This is compatible with the view that the process serves to reduce energy consumption. If this view is correct, then the model suggests that the probabilities of birth and death of synapses, as functions of the current arriving at each neuron, should be approximately linear, so as to produce optimal (scale-free) network topologies. This prediction could be tested experimentally.
We are grateful for financial support from the Spanish MINECO (under grant
FIS2013-43201-P; FEDER funds) and from the "Obra Social La Caixa".
References
[1] Ed Bullmore and Olaf Sporns. "Complex brain networks: graph theoretical analysis
of structural and functional systems." Nature Reviews Neuroscience, 10:186, 2009.
[2] S Amari. "Characteristics of random nets of analog neuron-like elements." Artificial
neural networks: theoretical concepts. IEEE Computer Society Press 2: 643, 1972.
[3] J J Hopfield. "Neural networks and physical systems with emergent collective
computational abilities." Proc.Nat.Acad.Sci., 79:2554, 1982.
[4] D S Modha and R Singh. "Network architecture of the long-distance pathways in the macaque brain." Proc. Nat. Acad. Sci. USA, 107:13485, 2010.
[5] Qawi K Telesford et al. "The brain as a complex system: using network science as a tool for understanding the brain." Brain Connectivity, 1:295, 2011.
[6] N Voges and L Perrinet. "Complex dynamics in recurrent cortical networks based
on spatially realistic connectivities." Front Comput Neurosci, 6:41, 2012.
[7] M T Gastner and Géza Ódor. "The topology of large open connectome networks for the human brain." Sci. Reports, 6:27249, 2016.
[8] J J Torres, M A Munoz, J Marro and P L Garrido. "Influence of topology on the performance of a neural network." Neurocomputing, 58:229, 2004.
[9] S de Franciscis, S Johnson and J J Torres. "Enhancing neural-network performance via assortativity." Phys. Rev. E, 83:036114, 2011.
[10] S Navlakha, A L Barth, and Z Bar-Joseph. "Decreasing-rate pruning optimizes
the construction of efficient and robust distributed networks." PLoS Comput Biol,
11:e1004347, 2015.
[11] K S Lee, F Schottler, M Oliver and G Lynch. "Brief bursts of high-frequency stimulation produce two types of structural change in rat hippocampus." Journal of Neurophysiology, 44:247, 1980.
[12] E Frank. "Synapse elimination: For nerves it’s all or nothing." Science, 275:324,
1997.
[13] A Y Klintsova and W T Greenough. "Synaptic plasticity in cortical systems."
Current Opinion in Neurobiology, 9:203, 1999.
[14] M De Roo, P Klauser, P Mendez, L Poglia and D Muller. "Activity-dependent PSD formation and stabilization of newly formed spines in hippocampal slice cultures." Cerebral Cortex, 18:151, 2008.
[15] AL Barabási and R Albert. "Emergence of scaling in random networks." Science,
286:509, 1999.
[16] S Johnson, J J Torres, and J Marro. "Nonlinear preferential rewiring in fixed-size networks as a diffusion process." Phys. Rev. E, 79:050104, 2009.
[17] S Johnson, J Marro, and J J Torres. "Evolving networks and the development
of neural systems." Journal of Statistical Mechanics: Theory and Experiment,
P03003, 2010.
[18] F Vazquez, V M Eguiluz, and M San Miguel. "Generic absorbing transition in coevolution dynamics." Phys. Rev. Lett., 100:108702, 2008.
[19] T Gross and B Blasius. "Adaptive coevolutionary networks: a review." Journal of
The Royal Society Interface, 5:259, 2008.
[20] P Kaluza, A Kölzsch, M T Gastner and B Blasius. "The complex network of global
cargo ship movements." Journal of the Royal Society Interface, 7:1093, 2010.
[21] C Meisel, A Storch, S Hallmeyer-Elgner, E Bullmore and T Gross. "Failure of
adaptive self-organized criticality during epileptic seizure attacks." PLoS Comput
Biol, 8:1, 2012.
[22] H Sayama, I Pestov, J Schmidt, BJ Bush, C Wong, J Yamanoi and T Gross.
"Modeling complex systems with adaptive networks." Computers & Mathematics
with Applications, 65:1645, 2013.
[23] P R Huttenlocher and A S Dabholkar. "Regional differences in synaptogenesis in
human cerebral cortex." Journal of Comparative Neurology, 387:167, 1997.
[24] G Chechik, I Meilijson, and E Ruppin. "Neuronal regulation: A mechanism for
synaptic pruning during brain maturation." Neural Computation, 11:2061, 1999.
[25] V M Eguiluz, D R Chialvo, G A Cecchi, M Baliki and A V Apkarian. "Scale-free brain functional networks." Phys. Rev. Lett., 94:018102, 2005.
[26] J Iglesias, J Eriksson, F Grize, M Tomassini and A E Villa. "Dynamics of pruning in simulated large-scale spiking neural networks." Biosystems, 79:11, 2005.
[27] M S Keshavan, S Anderson, and J W Pettegrew. "Is schizophrenia due to excessive synaptic pruning in the prefrontal cortex? The Feinberg hypothesis revisited." Journal of Psychiatric Research, 28:239, 1994.
[28] D H Geschwind and P Levitt. "Autism spectrum disorders: developmental disconnection syndromes." Current Opinion in Neurobiology, 17:103, 2007.
[29] G Faludi and K Mirnics. "Synaptic changes in the brain of subjects with schizophrenia." International Journal of Developmental Neuroscience, 29:305, 2011.
[30] A Fornito, A Zalesky, and M Breakspear. "The connectomics of brain disorders."
Nature Reviews Neuroscience, 16:159, 2015.
[31] D J Amit. Modeling brain function: The world of attractor neural networks. Cambridge University Press, 1992.
[32] A B Bortz, M H Kalos, and J L Lebowitz. "A new algorithm for Monte Carlo simulation of Ising spin systems." Journal of Computational Physics, 17:10, 1975.
[33] S Boccaletti, V Latora, Y Moreno, M Chavez and D U Hwang. "Complex networks: Structure and dynamics." Phys. Rep., 424:175, 2006.
[34] S Johnson, J J Torres, J Marro, and M A Muñoz. "Entropic origin of disassortativity in complex networks." Phys. Rev. Lett. 104: 108702, 2010.
[35] R Albert. "Scale-free networks in cell biology." Journal of Cell Science, 118:4947, 2005.
[36] S Maslov and K Sneppen. "Specificity and stability in topology of protein networks." Science, 296:910, 2002.
[37] J Berg et al. "Structure and evolution of protein interaction networks: a statistical
model for link dynamics and gene duplications." BMC Evolutionary Biology 4:1,
2004.